Oct 02 10:45:34 localhost kernel: Linux version 5.14.0-620.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025
Oct 02 10:45:34 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 02 10:45:34 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 02 10:45:34 localhost kernel: BIOS-provided physical RAM map:
Oct 02 10:45:34 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 02 10:45:34 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 02 10:45:34 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 02 10:45:34 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct 02 10:45:34 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct 02 10:45:34 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 02 10:45:34 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 02 10:45:34 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct 02 10:45:34 localhost kernel: NX (Execute Disable) protection: active
Oct 02 10:45:34 localhost kernel: APIC: Static calls initialized
Oct 02 10:45:34 localhost kernel: SMBIOS 2.8 present.
Oct 02 10:45:34 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 02 10:45:34 localhost kernel: Hypervisor detected: KVM
Oct 02 10:45:34 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 02 10:45:34 localhost kernel: kvm-clock: using sched offset of 5006598423 cycles
Oct 02 10:45:34 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 02 10:45:34 localhost kernel: tsc: Detected 2799.998 MHz processor
Oct 02 10:45:34 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 02 10:45:34 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 02 10:45:34 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct 02 10:45:34 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 02 10:45:34 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct 02 10:45:34 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct 02 10:45:34 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct 02 10:45:34 localhost kernel: Using GB pages for direct mapping
Oct 02 10:45:34 localhost kernel: RAMDISK: [mem 0x2d7c4000-0x32bd9fff]
Oct 02 10:45:34 localhost kernel: ACPI: Early table checksum verification disabled
Oct 02 10:45:34 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 02 10:45:34 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 02 10:45:34 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 02 10:45:34 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 02 10:45:34 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct 02 10:45:34 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 02 10:45:34 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 02 10:45:34 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct 02 10:45:34 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct 02 10:45:34 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct 02 10:45:34 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct 02 10:45:34 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct 02 10:45:34 localhost kernel: No NUMA configuration found
Oct 02 10:45:34 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct 02 10:45:34 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Oct 02 10:45:34 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct 02 10:45:34 localhost kernel: Zone ranges:
Oct 02 10:45:34 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 02 10:45:34 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct 02 10:45:34 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct 02 10:45:34 localhost kernel:   Device   empty
Oct 02 10:45:34 localhost kernel: Movable zone start for each node
Oct 02 10:45:34 localhost kernel: Early memory node ranges
Oct 02 10:45:34 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 02 10:45:34 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct 02 10:45:34 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct 02 10:45:34 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct 02 10:45:34 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 02 10:45:34 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 02 10:45:34 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 02 10:45:34 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Oct 02 10:45:34 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 02 10:45:34 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 02 10:45:34 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 02 10:45:34 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 02 10:45:34 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 02 10:45:34 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 02 10:45:34 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 02 10:45:34 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 02 10:45:34 localhost kernel: TSC deadline timer available
Oct 02 10:45:34 localhost kernel: CPU topo: Max. logical packages:   8
Oct 02 10:45:34 localhost kernel: CPU topo: Max. logical dies:       8
Oct 02 10:45:34 localhost kernel: CPU topo: Max. dies per package:   1
Oct 02 10:45:34 localhost kernel: CPU topo: Max. threads per core:   1
Oct 02 10:45:34 localhost kernel: CPU topo: Num. cores per package:     1
Oct 02 10:45:34 localhost kernel: CPU topo: Num. threads per package:   1
Oct 02 10:45:34 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct 02 10:45:34 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 02 10:45:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 02 10:45:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 02 10:45:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 02 10:45:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 02 10:45:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct 02 10:45:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct 02 10:45:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 02 10:45:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 02 10:45:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 02 10:45:34 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct 02 10:45:34 localhost kernel: Booting paravirtualized kernel on KVM
Oct 02 10:45:34 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 02 10:45:34 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct 02 10:45:34 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct 02 10:45:34 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Oct 02 10:45:34 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Oct 02 10:45:34 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 02 10:45:34 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 02 10:45:34 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64", will be passed to user space.
Oct 02 10:45:34 localhost kernel: random: crng init done
Oct 02 10:45:34 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 02 10:45:34 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 02 10:45:34 localhost kernel: Fallback order for Node 0: 0 
Oct 02 10:45:34 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct 02 10:45:34 localhost kernel: Policy zone: Normal
Oct 02 10:45:34 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 02 10:45:34 localhost kernel: software IO TLB: area num 8.
Oct 02 10:45:34 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct 02 10:45:34 localhost kernel: ftrace: allocating 49370 entries in 193 pages
Oct 02 10:45:34 localhost kernel: ftrace: allocated 193 pages with 3 groups
Oct 02 10:45:34 localhost kernel: Dynamic Preempt: voluntary
Oct 02 10:45:34 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 02 10:45:34 localhost kernel: rcu:         RCU event tracing is enabled.
Oct 02 10:45:34 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct 02 10:45:34 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Oct 02 10:45:34 localhost kernel:         Rude variant of Tasks RCU enabled.
Oct 02 10:45:34 localhost kernel:         Tracing variant of Tasks RCU enabled.
Oct 02 10:45:34 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 02 10:45:34 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct 02 10:45:34 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 02 10:45:34 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 02 10:45:34 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 02 10:45:34 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct 02 10:45:34 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 02 10:45:34 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 02 10:45:34 localhost kernel: Console: colour VGA+ 80x25
Oct 02 10:45:34 localhost kernel: printk: console [ttyS0] enabled
Oct 02 10:45:34 localhost kernel: ACPI: Core revision 20230331
Oct 02 10:45:34 localhost kernel: APIC: Switch to symmetric I/O mode setup
Oct 02 10:45:34 localhost kernel: x2apic enabled
Oct 02 10:45:34 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Oct 02 10:45:34 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 02 10:45:34 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Oct 02 10:45:34 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 02 10:45:34 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 02 10:45:34 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 02 10:45:34 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 02 10:45:34 localhost kernel: Spectre V2 : Mitigation: Retpolines
Oct 02 10:45:34 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 02 10:45:34 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 02 10:45:34 localhost kernel: RETBleed: Mitigation: untrained return thunk
Oct 02 10:45:34 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 02 10:45:34 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 02 10:45:34 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 02 10:45:34 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 02 10:45:34 localhost kernel: x86/bugs: return thunk changed
Oct 02 10:45:34 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 02 10:45:34 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 02 10:45:34 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 02 10:45:34 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 02 10:45:34 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct 02 10:45:34 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 02 10:45:34 localhost kernel: Freeing SMP alternatives memory: 40K
Oct 02 10:45:34 localhost kernel: pid_max: default: 32768 minimum: 301
Oct 02 10:45:34 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct 02 10:45:34 localhost kernel: landlock: Up and running.
Oct 02 10:45:34 localhost kernel: Yama: becoming mindful.
Oct 02 10:45:34 localhost kernel: SELinux:  Initializing.
Oct 02 10:45:34 localhost kernel: LSM support for eBPF active
Oct 02 10:45:34 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 02 10:45:34 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 02 10:45:34 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 02 10:45:34 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 02 10:45:34 localhost kernel: ... version:                0
Oct 02 10:45:34 localhost kernel: ... bit width:              48
Oct 02 10:45:34 localhost kernel: ... generic registers:      6
Oct 02 10:45:34 localhost kernel: ... value mask:             0000ffffffffffff
Oct 02 10:45:34 localhost kernel: ... max period:             00007fffffffffff
Oct 02 10:45:34 localhost kernel: ... fixed-purpose events:   0
Oct 02 10:45:34 localhost kernel: ... event mask:             000000000000003f
Oct 02 10:45:34 localhost kernel: signal: max sigframe size: 1776
Oct 02 10:45:34 localhost kernel: rcu: Hierarchical SRCU implementation.
Oct 02 10:45:34 localhost kernel: rcu:         Max phase no-delay instances is 400.
Oct 02 10:45:34 localhost kernel: smp: Bringing up secondary CPUs ...
Oct 02 10:45:34 localhost kernel: smpboot: x86: Booting SMP configuration:
Oct 02 10:45:34 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct 02 10:45:34 localhost kernel: smp: Brought up 1 node, 8 CPUs
Oct 02 10:45:34 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Oct 02 10:45:34 localhost kernel: node 0 deferred pages initialised in 19ms
Oct 02 10:45:34 localhost kernel: Memory: 7765444K/8388068K available (16384K kernel code, 5784K rwdata, 13996K rodata, 4068K init, 7304K bss, 616504K reserved, 0K cma-reserved)
Oct 02 10:45:34 localhost kernel: devtmpfs: initialized
Oct 02 10:45:34 localhost kernel: x86/mm: Memory block size: 128MB
Oct 02 10:45:34 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 02 10:45:34 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct 02 10:45:34 localhost kernel: pinctrl core: initialized pinctrl subsystem
Oct 02 10:45:34 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 02 10:45:34 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct 02 10:45:34 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 02 10:45:34 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 02 10:45:34 localhost kernel: audit: initializing netlink subsys (disabled)
Oct 02 10:45:34 localhost kernel: audit: type=2000 audit(1759401932.645:1): state=initialized audit_enabled=0 res=1
Oct 02 10:45:34 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 02 10:45:34 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 02 10:45:34 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 02 10:45:34 localhost kernel: cpuidle: using governor menu
Oct 02 10:45:34 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 02 10:45:34 localhost kernel: PCI: Using configuration type 1 for base access
Oct 02 10:45:34 localhost kernel: PCI: Using configuration type 1 for extended access
Oct 02 10:45:34 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 02 10:45:34 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 02 10:45:34 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 02 10:45:34 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 02 10:45:34 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 02 10:45:34 localhost kernel: Demotion targets for Node 0: null
Oct 02 10:45:34 localhost kernel: cryptd: max_cpu_qlen set to 1000
Oct 02 10:45:34 localhost kernel: ACPI: Added _OSI(Module Device)
Oct 02 10:45:34 localhost kernel: ACPI: Added _OSI(Processor Device)
Oct 02 10:45:34 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 02 10:45:34 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 02 10:45:34 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 02 10:45:34 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 02 10:45:34 localhost kernel: ACPI: Interpreter enabled
Oct 02 10:45:34 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct 02 10:45:34 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Oct 02 10:45:34 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 02 10:45:34 localhost kernel: PCI: Using E820 reservations for host bridge windows
Oct 02 10:45:34 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 02 10:45:34 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 02 10:45:34 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 02 10:45:34 localhost kernel: acpiphp: Slot [3] registered
Oct 02 10:45:34 localhost kernel: acpiphp: Slot [4] registered
Oct 02 10:45:34 localhost kernel: acpiphp: Slot [5] registered
Oct 02 10:45:34 localhost kernel: acpiphp: Slot [6] registered
Oct 02 10:45:34 localhost kernel: acpiphp: Slot [7] registered
Oct 02 10:45:34 localhost kernel: acpiphp: Slot [8] registered
Oct 02 10:45:34 localhost kernel: acpiphp: Slot [9] registered
Oct 02 10:45:34 localhost kernel: acpiphp: Slot [10] registered
Oct 02 10:45:34 localhost kernel: acpiphp: Slot [11] registered
Oct 02 10:45:34 localhost kernel: acpiphp: Slot [12] registered
Oct 02 10:45:34 localhost kernel: acpiphp: Slot [13] registered
Oct 02 10:45:34 localhost kernel: acpiphp: Slot [14] registered
Oct 02 10:45:34 localhost kernel: acpiphp: Slot [15] registered
Oct 02 10:45:34 localhost kernel: acpiphp: Slot [16] registered
Oct 02 10:45:34 localhost kernel: acpiphp: Slot [17] registered
Oct 02 10:45:34 localhost kernel: acpiphp: Slot [18] registered
Oct 02 10:45:34 localhost kernel: acpiphp: Slot [19] registered
Oct 02 10:45:34 localhost kernel: acpiphp: Slot [20] registered
Oct 02 10:45:34 localhost kernel: acpiphp: Slot [21] registered
Oct 02 10:45:34 localhost kernel: acpiphp: Slot [22] registered
Oct 02 10:45:34 localhost kernel: acpiphp: Slot [23] registered
Oct 02 10:45:34 localhost kernel: acpiphp: Slot [24] registered
Oct 02 10:45:34 localhost kernel: acpiphp: Slot [25] registered
Oct 02 10:45:34 localhost kernel: acpiphp: Slot [26] registered
Oct 02 10:45:34 localhost kernel: acpiphp: Slot [27] registered
Oct 02 10:45:34 localhost kernel: acpiphp: Slot [28] registered
Oct 02 10:45:34 localhost kernel: acpiphp: Slot [29] registered
Oct 02 10:45:34 localhost kernel: acpiphp: Slot [30] registered
Oct 02 10:45:34 localhost kernel: acpiphp: Slot [31] registered
Oct 02 10:45:34 localhost kernel: PCI host bridge to bus 0000:00
Oct 02 10:45:34 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct 02 10:45:34 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct 02 10:45:34 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 02 10:45:34 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 02 10:45:34 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct 02 10:45:34 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 02 10:45:34 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct 02 10:45:34 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct 02 10:45:34 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct 02 10:45:34 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct 02 10:45:34 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct 02 10:45:34 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct 02 10:45:34 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct 02 10:45:34 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct 02 10:45:34 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct 02 10:45:34 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct 02 10:45:34 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 02 10:45:34 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct 02 10:45:34 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct 02 10:45:34 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct 02 10:45:34 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct 02 10:45:34 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 02 10:45:34 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct 02 10:45:34 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct 02 10:45:34 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 02 10:45:34 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 02 10:45:34 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct 02 10:45:34 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct 02 10:45:34 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 02 10:45:34 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct 02 10:45:34 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 02 10:45:34 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct 02 10:45:34 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct 02 10:45:34 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 02 10:45:34 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct 02 10:45:34 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct 02 10:45:34 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 02 10:45:34 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 02 10:45:34 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct 02 10:45:34 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 02 10:45:34 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 02 10:45:34 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 02 10:45:34 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 02 10:45:34 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 02 10:45:34 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 02 10:45:34 localhost kernel: iommu: Default domain type: Translated
Oct 02 10:45:34 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 02 10:45:34 localhost kernel: SCSI subsystem initialized
Oct 02 10:45:34 localhost kernel: ACPI: bus type USB registered
Oct 02 10:45:34 localhost kernel: usbcore: registered new interface driver usbfs
Oct 02 10:45:34 localhost kernel: usbcore: registered new interface driver hub
Oct 02 10:45:34 localhost kernel: usbcore: registered new device driver usb
Oct 02 10:45:34 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 02 10:45:34 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct 02 10:45:34 localhost kernel: PTP clock support registered
Oct 02 10:45:34 localhost kernel: EDAC MC: Ver: 3.0.0
Oct 02 10:45:34 localhost kernel: NetLabel: Initializing
Oct 02 10:45:34 localhost kernel: NetLabel:  domain hash size = 128
Oct 02 10:45:34 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct 02 10:45:34 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Oct 02 10:45:34 localhost kernel: PCI: Using ACPI for IRQ routing
Oct 02 10:45:34 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 02 10:45:34 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 02 10:45:34 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Oct 02 10:45:34 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 02 10:45:34 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 02 10:45:34 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 02 10:45:34 localhost kernel: vgaarb: loaded
Oct 02 10:45:34 localhost kernel: clocksource: Switched to clocksource kvm-clock
Oct 02 10:45:34 localhost kernel: VFS: Disk quotas dquot_6.6.0
Oct 02 10:45:34 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 02 10:45:34 localhost kernel: pnp: PnP ACPI init
Oct 02 10:45:34 localhost kernel: pnp 00:03: [dma 2]
Oct 02 10:45:34 localhost kernel: pnp: PnP ACPI: found 5 devices
Oct 02 10:45:34 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 02 10:45:34 localhost kernel: NET: Registered PF_INET protocol family
Oct 02 10:45:34 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 02 10:45:34 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 02 10:45:34 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 02 10:45:34 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 02 10:45:34 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 02 10:45:34 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 02 10:45:34 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct 02 10:45:34 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 02 10:45:34 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 02 10:45:34 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 02 10:45:34 localhost kernel: NET: Registered PF_XDP protocol family
Oct 02 10:45:34 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct 02 10:45:34 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct 02 10:45:34 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 02 10:45:34 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct 02 10:45:34 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct 02 10:45:34 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 02 10:45:34 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 02 10:45:34 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 02 10:45:34 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 75785 usecs
Oct 02 10:45:34 localhost kernel: PCI: CLS 0 bytes, default 64
Oct 02 10:45:34 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 02 10:45:34 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct 02 10:45:34 localhost kernel: Trying to unpack rootfs image as initramfs...
Oct 02 10:45:34 localhost kernel: ACPI: bus type thunderbolt registered
Oct 02 10:45:34 localhost kernel: Initialise system trusted keyrings
Oct 02 10:45:34 localhost kernel: Key type blacklist registered
Oct 02 10:45:34 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct 02 10:45:34 localhost kernel: zbud: loaded
Oct 02 10:45:34 localhost kernel: integrity: Platform Keyring initialized
Oct 02 10:45:34 localhost kernel: integrity: Machine keyring initialized
Oct 02 10:45:34 localhost kernel: Freeing initrd memory: 86104K
Oct 02 10:45:34 localhost kernel: NET: Registered PF_ALG protocol family
Oct 02 10:45:34 localhost kernel: xor: automatically using best checksumming function   avx       
Oct 02 10:45:34 localhost kernel: Key type asymmetric registered
Oct 02 10:45:34 localhost kernel: Asymmetric key parser 'x509' registered
Oct 02 10:45:34 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 02 10:45:34 localhost kernel: io scheduler mq-deadline registered
Oct 02 10:45:34 localhost kernel: io scheduler kyber registered
Oct 02 10:45:34 localhost kernel: io scheduler bfq registered
Oct 02 10:45:34 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 02 10:45:34 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 02 10:45:34 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 02 10:45:34 localhost kernel: ACPI: button: Power Button [PWRF]
Oct 02 10:45:34 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 02 10:45:34 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 02 10:45:34 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 02 10:45:34 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 02 10:45:34 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 02 10:45:34 localhost kernel: Non-volatile memory driver v1.3
Oct 02 10:45:34 localhost kernel: rdac: device handler registered
Oct 02 10:45:34 localhost kernel: hp_sw: device handler registered
Oct 02 10:45:34 localhost kernel: emc: device handler registered
Oct 02 10:45:34 localhost kernel: alua: device handler registered
Oct 02 10:45:34 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 02 10:45:34 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 02 10:45:34 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 02 10:45:34 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct 02 10:45:34 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 02 10:45:34 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 02 10:45:34 localhost kernel: usb usb1: Product: UHCI Host Controller
Oct 02 10:45:34 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-620.el9.x86_64 uhci_hcd
Oct 02 10:45:34 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct 02 10:45:34 localhost kernel: hub 1-0:1.0: USB hub found
Oct 02 10:45:34 localhost kernel: hub 1-0:1.0: 2 ports detected
Oct 02 10:45:34 localhost kernel: usbcore: registered new interface driver usbserial_generic
Oct 02 10:45:34 localhost kernel: usbserial: USB Serial support registered for generic
Oct 02 10:45:34 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 02 10:45:34 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 02 10:45:34 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 02 10:45:34 localhost kernel: mousedev: PS/2 mouse device common for all mice
Oct 02 10:45:34 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 02 10:45:34 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 02 10:45:34 localhost kernel: rtc_cmos 00:04: registered as rtc0
Oct 02 10:45:34 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-10-02T10:45:33 UTC (1759401933)
Oct 02 10:45:34 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 02 10:45:34 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 02 10:45:34 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 02 10:45:34 localhost kernel: usbcore: registered new interface driver usbhid
Oct 02 10:45:34 localhost kernel: usbhid: USB HID core driver
Oct 02 10:45:34 localhost kernel: drop_monitor: Initializing network drop monitor service
Oct 02 10:45:34 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 02 10:45:34 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 02 10:45:34 localhost kernel: Initializing XFRM netlink socket
Oct 02 10:45:34 localhost kernel: NET: Registered PF_INET6 protocol family
Oct 02 10:45:34 localhost kernel: Segment Routing with IPv6
Oct 02 10:45:34 localhost kernel: NET: Registered PF_PACKET protocol family
Oct 02 10:45:34 localhost kernel: mpls_gso: MPLS GSO support
Oct 02 10:45:34 localhost kernel: IPI shorthand broadcast: enabled
Oct 02 10:45:34 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Oct 02 10:45:34 localhost kernel: AES CTR mode by8 optimization enabled
Oct 02 10:45:34 localhost kernel: sched_clock: Marking stable (1242016093, 153165009)->(1519243695, -124062593)
Oct 02 10:45:34 localhost kernel: registered taskstats version 1
Oct 02 10:45:34 localhost kernel: Loading compiled-in X.509 certificates
Oct 02 10:45:34 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct 02 10:45:34 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 02 10:45:34 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 02 10:45:34 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct 02 10:45:34 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct 02 10:45:34 localhost kernel: Demotion targets for Node 0: null
Oct 02 10:45:34 localhost kernel: page_owner is disabled
Oct 02 10:45:34 localhost kernel: Key type .fscrypt registered
Oct 02 10:45:34 localhost kernel: Key type fscrypt-provisioning registered
Oct 02 10:45:34 localhost kernel: Key type big_key registered
Oct 02 10:45:34 localhost kernel: Key type encrypted registered
Oct 02 10:45:34 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 02 10:45:34 localhost kernel: Loading compiled-in module X.509 certificates
Oct 02 10:45:34 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct 02 10:45:34 localhost kernel: ima: Allocated hash algorithm: sha256
Oct 02 10:45:34 localhost kernel: ima: No architecture policies found
Oct 02 10:45:34 localhost kernel: evm: Initialising EVM extended attributes:
Oct 02 10:45:34 localhost kernel: evm: security.selinux
Oct 02 10:45:34 localhost kernel: evm: security.SMACK64 (disabled)
Oct 02 10:45:34 localhost kernel: evm: security.SMACK64EXEC (disabled)
Oct 02 10:45:34 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 02 10:45:34 localhost kernel: evm: security.SMACK64MMAP (disabled)
Oct 02 10:45:34 localhost kernel: evm: security.apparmor (disabled)
Oct 02 10:45:34 localhost kernel: evm: security.ima
Oct 02 10:45:34 localhost kernel: evm: security.capability
Oct 02 10:45:34 localhost kernel: evm: HMAC attrs: 0x1
Oct 02 10:45:34 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 02 10:45:34 localhost kernel: Running certificate verification RSA selftest
Oct 02 10:45:34 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 02 10:45:34 localhost kernel: Running certificate verification ECDSA selftest
Oct 02 10:45:34 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct 02 10:45:34 localhost kernel: clk: Disabling unused clocks
Oct 02 10:45:34 localhost kernel: Freeing unused decrypted memory: 2028K
Oct 02 10:45:34 localhost kernel: Freeing unused kernel image (initmem) memory: 4068K
Oct 02 10:45:34 localhost kernel: Write protecting the kernel read-only data: 30720k
Oct 02 10:45:34 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 340K
Oct 02 10:45:34 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 02 10:45:34 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 02 10:45:34 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Oct 02 10:45:34 localhost kernel: usb 1-1: Manufacturer: QEMU
Oct 02 10:45:34 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct 02 10:45:34 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 02 10:45:34 localhost kernel: Run /init as init process
Oct 02 10:45:34 localhost kernel:   with arguments:
Oct 02 10:45:34 localhost kernel:     /init
Oct 02 10:45:34 localhost kernel:   with environment:
Oct 02 10:45:34 localhost kernel:     HOME=/
Oct 02 10:45:34 localhost kernel:     TERM=linux
Oct 02 10:45:34 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64
Oct 02 10:45:34 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 02 10:45:34 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct 02 10:45:34 localhost systemd[1]: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 02 10:45:34 localhost systemd[1]: Detected virtualization kvm.
Oct 02 10:45:34 localhost systemd[1]: Detected architecture x86-64.
Oct 02 10:45:34 localhost systemd[1]: Running in initrd.
Oct 02 10:45:34 localhost systemd[1]: No hostname configured, using default hostname.
Oct 02 10:45:34 localhost systemd[1]: Hostname set to <localhost>.
Oct 02 10:45:34 localhost systemd[1]: Initializing machine ID from VM UUID.
Oct 02 10:45:34 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Oct 02 10:45:34 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 02 10:45:34 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 02 10:45:34 localhost systemd[1]: Reached target Initrd /usr File System.
Oct 02 10:45:34 localhost systemd[1]: Reached target Local File Systems.
Oct 02 10:45:34 localhost systemd[1]: Reached target Path Units.
Oct 02 10:45:34 localhost systemd[1]: Reached target Slice Units.
Oct 02 10:45:34 localhost systemd[1]: Reached target Swaps.
Oct 02 10:45:34 localhost systemd[1]: Reached target Timer Units.
Oct 02 10:45:34 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 02 10:45:34 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Oct 02 10:45:34 localhost systemd[1]: Listening on Journal Socket.
Oct 02 10:45:34 localhost systemd[1]: Listening on udev Control Socket.
Oct 02 10:45:34 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 02 10:45:34 localhost systemd[1]: Reached target Socket Units.
Oct 02 10:45:34 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 02 10:45:34 localhost systemd[1]: Starting Journal Service...
Oct 02 10:45:34 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 02 10:45:34 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 02 10:45:34 localhost systemd[1]: Starting Create System Users...
Oct 02 10:45:34 localhost systemd[1]: Starting Setup Virtual Console...
Oct 02 10:45:34 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 02 10:45:34 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 02 10:45:34 localhost systemd[1]: Finished Create System Users.
Oct 02 10:45:34 localhost systemd-journald[306]: Journal started
Oct 02 10:45:34 localhost systemd-journald[306]: Runtime Journal (/run/log/journal/8a59133cd1384412952a4a6587089b61) is 8.0M, max 153.5M, 145.5M free.
Oct 02 10:45:34 localhost systemd-sysusers[311]: Creating group 'users' with GID 100.
Oct 02 10:45:34 localhost systemd-sysusers[311]: Creating group 'dbus' with GID 81.
Oct 02 10:45:34 localhost systemd-sysusers[311]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 02 10:45:34 localhost systemd[1]: Started Journal Service.
Oct 02 10:45:34 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 02 10:45:34 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 02 10:45:34 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 02 10:45:34 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 02 10:45:34 localhost systemd[1]: Finished Setup Virtual Console.
Oct 02 10:45:34 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 02 10:45:34 localhost systemd[1]: Starting dracut cmdline hook...
Oct 02 10:45:34 localhost dracut-cmdline[327]: dracut-9 dracut-057-102.git20250818.el9
Oct 02 10:45:34 localhost dracut-cmdline[327]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 02 10:45:34 localhost systemd[1]: Finished dracut cmdline hook.
Oct 02 10:45:34 localhost systemd[1]: Starting dracut pre-udev hook...
Oct 02 10:45:34 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 02 10:45:34 localhost kernel: device-mapper: uevent: version 1.0.3
Oct 02 10:45:34 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct 02 10:45:34 localhost kernel: RPC: Registered named UNIX socket transport module.
Oct 02 10:45:34 localhost kernel: RPC: Registered udp transport module.
Oct 02 10:45:34 localhost kernel: RPC: Registered tcp transport module.
Oct 02 10:45:34 localhost kernel: RPC: Registered tcp-with-tls transport module.
Oct 02 10:45:34 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 02 10:45:34 localhost rpc.statd[445]: Version 2.5.4 starting
Oct 02 10:45:34 localhost rpc.statd[445]: Initializing NSM state
Oct 02 10:45:34 localhost rpc.idmapd[451]: Setting log level to 0
Oct 02 10:45:34 localhost systemd[1]: Finished dracut pre-udev hook.
Oct 02 10:45:34 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 02 10:45:34 localhost systemd-udevd[464]: Using default interface naming scheme 'rhel-9.0'.
Oct 02 10:45:34 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 02 10:45:34 localhost systemd[1]: Starting dracut pre-trigger hook...
Oct 02 10:45:34 localhost systemd[1]: Finished dracut pre-trigger hook.
Oct 02 10:45:34 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 02 10:45:34 localhost systemd[1]: Created slice Slice /system/modprobe.
Oct 02 10:45:34 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 02 10:45:34 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 02 10:45:34 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 02 10:45:34 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 02 10:45:34 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 02 10:45:34 localhost systemd[1]: Reached target Network.
Oct 02 10:45:34 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 02 10:45:34 localhost systemd[1]: Starting dracut initqueue hook...
Oct 02 10:45:34 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct 02 10:45:34 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct 02 10:45:34 localhost kernel: libata version 3.00 loaded.
Oct 02 10:45:34 localhost kernel:  vda: vda1
Oct 02 10:45:34 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Oct 02 10:45:34 localhost kernel: scsi host0: ata_piix
Oct 02 10:45:34 localhost kernel: scsi host1: ata_piix
Oct 02 10:45:34 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct 02 10:45:34 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct 02 10:45:34 localhost systemd[1]: Found device /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct 02 10:45:35 localhost systemd[1]: Reached target Initrd Root Device.
Oct 02 10:45:35 localhost systemd[1]: Mounting Kernel Configuration File System...
Oct 02 10:45:35 localhost kernel: ata1: found unknown device (class 0)
Oct 02 10:45:35 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 02 10:45:35 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct 02 10:45:35 localhost systemd[1]: Mounted Kernel Configuration File System.
Oct 02 10:45:35 localhost systemd[1]: Reached target System Initialization.
Oct 02 10:45:35 localhost systemd-udevd[494]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 10:45:35 localhost systemd[1]: Reached target Basic System.
Oct 02 10:45:35 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 02 10:45:35 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 02 10:45:35 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 02 10:45:35 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Oct 02 10:45:35 localhost systemd[1]: Finished dracut initqueue hook.
Oct 02 10:45:35 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Oct 02 10:45:35 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Oct 02 10:45:35 localhost systemd[1]: Reached target Remote File Systems.
Oct 02 10:45:35 localhost systemd[1]: Starting dracut pre-mount hook...
Oct 02 10:45:35 localhost systemd[1]: Finished dracut pre-mount hook.
Oct 02 10:45:35 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458...
Oct 02 10:45:35 localhost systemd-fsck[558]: /usr/sbin/fsck.xfs: XFS file system.
Oct 02 10:45:35 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct 02 10:45:35 localhost systemd[1]: Mounting /sysroot...
Oct 02 10:45:35 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 02 10:45:35 localhost kernel: XFS (vda1): Mounting V5 Filesystem 1631a6ad-43b8-436d-ae76-16fa14b94458
Oct 02 10:45:35 localhost kernel: XFS (vda1): Ending clean mount
Oct 02 10:45:35 localhost systemd[1]: Mounted /sysroot.
Oct 02 10:45:35 localhost systemd[1]: Reached target Initrd Root File System.
Oct 02 10:45:35 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 02 10:45:35 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 02 10:45:35 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 02 10:45:35 localhost systemd[1]: Reached target Initrd File Systems.
Oct 02 10:45:35 localhost systemd[1]: Reached target Initrd Default Target.
Oct 02 10:45:35 localhost systemd[1]: Starting dracut mount hook...
Oct 02 10:45:35 localhost systemd[1]: Finished dracut mount hook.
Oct 02 10:45:35 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 02 10:45:36 localhost rpc.idmapd[451]: exiting on signal 15
Oct 02 10:45:36 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 02 10:45:36 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 02 10:45:36 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 02 10:45:36 localhost systemd[1]: Stopped target Network.
Oct 02 10:45:36 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 02 10:45:36 localhost systemd[1]: Stopped target Timer Units.
Oct 02 10:45:36 localhost systemd[1]: dbus.socket: Deactivated successfully.
Oct 02 10:45:36 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 02 10:45:36 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 02 10:45:36 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 02 10:45:36 localhost systemd[1]: Stopped target Initrd Default Target.
Oct 02 10:45:36 localhost systemd[1]: Stopped target Basic System.
Oct 02 10:45:36 localhost systemd[1]: Stopped target Initrd Root Device.
Oct 02 10:45:36 localhost systemd[1]: Stopped target Initrd /usr File System.
Oct 02 10:45:36 localhost systemd[1]: Stopped target Path Units.
Oct 02 10:45:36 localhost systemd[1]: Stopped target Remote File Systems.
Oct 02 10:45:36 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 02 10:45:36 localhost systemd[1]: Stopped target Slice Units.
Oct 02 10:45:36 localhost systemd[1]: Stopped target Socket Units.
Oct 02 10:45:36 localhost systemd[1]: Stopped target System Initialization.
Oct 02 10:45:36 localhost systemd[1]: Stopped target Local File Systems.
Oct 02 10:45:36 localhost systemd[1]: Stopped target Swaps.
Oct 02 10:45:36 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 02 10:45:36 localhost systemd[1]: Stopped dracut mount hook.
Oct 02 10:45:36 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 02 10:45:36 localhost systemd[1]: Stopped dracut pre-mount hook.
Oct 02 10:45:36 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Oct 02 10:45:36 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 02 10:45:36 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 02 10:45:36 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 02 10:45:36 localhost systemd[1]: Stopped dracut initqueue hook.
Oct 02 10:45:36 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 02 10:45:36 localhost systemd[1]: Stopped Apply Kernel Variables.
Oct 02 10:45:36 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 02 10:45:36 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Oct 02 10:45:36 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 02 10:45:36 localhost systemd[1]: Stopped Coldplug All udev Devices.
Oct 02 10:45:36 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 02 10:45:36 localhost systemd[1]: Stopped dracut pre-trigger hook.
Oct 02 10:45:36 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 02 10:45:36 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 02 10:45:36 localhost systemd[1]: Stopped Setup Virtual Console.
Oct 02 10:45:36 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 02 10:45:36 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 02 10:45:36 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 02 10:45:36 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 02 10:45:36 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 02 10:45:36 localhost systemd[1]: Closed udev Control Socket.
Oct 02 10:45:36 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 02 10:45:36 localhost systemd[1]: Closed udev Kernel Socket.
Oct 02 10:45:36 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 02 10:45:36 localhost systemd[1]: Stopped dracut pre-udev hook.
Oct 02 10:45:36 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 02 10:45:36 localhost systemd[1]: Stopped dracut cmdline hook.
Oct 02 10:45:36 localhost systemd[1]: Starting Cleanup udev Database...
Oct 02 10:45:36 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 02 10:45:36 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 02 10:45:36 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 02 10:45:36 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Oct 02 10:45:36 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 02 10:45:36 localhost systemd[1]: Stopped Create System Users.
Oct 02 10:45:36 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 02 10:45:36 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct 02 10:45:36 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 02 10:45:36 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 02 10:45:36 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 02 10:45:36 localhost systemd[1]: Finished Cleanup udev Database.
Oct 02 10:45:36 localhost systemd[1]: Reached target Switch Root.
Oct 02 10:45:36 localhost systemd[1]: Starting Switch Root...
Oct 02 10:45:36 localhost systemd[1]: Switching root.
Oct 02 10:45:36 localhost systemd-journald[306]: Journal stopped
Oct 02 10:45:37 localhost systemd-journald[306]: Received SIGTERM from PID 1 (systemd).
Oct 02 10:45:37 localhost kernel: audit: type=1404 audit(1759401936.393:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct 02 10:45:37 localhost kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 10:45:37 localhost kernel: SELinux:  policy capability open_perms=1
Oct 02 10:45:37 localhost kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 10:45:37 localhost kernel: SELinux:  policy capability always_check_network=0
Oct 02 10:45:37 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 10:45:37 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 10:45:37 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 10:45:37 localhost kernel: audit: type=1403 audit(1759401936.558:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 02 10:45:37 localhost systemd[1]: Successfully loaded SELinux policy in 172.951ms.
Oct 02 10:45:37 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 32.777ms.
Oct 02 10:45:37 localhost systemd[1]: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 02 10:45:37 localhost systemd[1]: Detected virtualization kvm.
Oct 02 10:45:37 localhost systemd[1]: Detected architecture x86-64.
Oct 02 10:45:37 localhost systemd-rc-local-generator[640]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 10:45:37 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 02 10:45:37 localhost systemd[1]: Stopped Switch Root.
Oct 02 10:45:37 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 02 10:45:37 localhost systemd[1]: Created slice Slice /system/getty.
Oct 02 10:45:37 localhost systemd[1]: Created slice Slice /system/serial-getty.
Oct 02 10:45:37 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Oct 02 10:45:37 localhost systemd[1]: Created slice User and Session Slice.
Oct 02 10:45:37 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 02 10:45:37 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Oct 02 10:45:37 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct 02 10:45:37 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 02 10:45:37 localhost systemd[1]: Stopped target Switch Root.
Oct 02 10:45:37 localhost systemd[1]: Stopped target Initrd File Systems.
Oct 02 10:45:37 localhost systemd[1]: Stopped target Initrd Root File System.
Oct 02 10:45:37 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Oct 02 10:45:37 localhost systemd[1]: Reached target Path Units.
Oct 02 10:45:37 localhost systemd[1]: Reached target rpc_pipefs.target.
Oct 02 10:45:37 localhost systemd[1]: Reached target Slice Units.
Oct 02 10:45:37 localhost systemd[1]: Reached target Swaps.
Oct 02 10:45:37 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Oct 02 10:45:37 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Oct 02 10:45:37 localhost systemd[1]: Reached target RPC Port Mapper.
Oct 02 10:45:37 localhost systemd[1]: Listening on Process Core Dump Socket.
Oct 02 10:45:37 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Oct 02 10:45:37 localhost systemd[1]: Listening on udev Control Socket.
Oct 02 10:45:37 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 02 10:45:37 localhost systemd[1]: Mounting Huge Pages File System...
Oct 02 10:45:37 localhost systemd[1]: Mounting POSIX Message Queue File System...
Oct 02 10:45:37 localhost systemd[1]: Mounting Kernel Debug File System...
Oct 02 10:45:37 localhost systemd[1]: Mounting Kernel Trace File System...
Oct 02 10:45:37 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 02 10:45:37 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 02 10:45:37 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 02 10:45:37 localhost systemd[1]: Starting Load Kernel Module drm...
Oct 02 10:45:37 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Oct 02 10:45:37 localhost systemd[1]: Starting Load Kernel Module fuse...
Oct 02 10:45:37 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct 02 10:45:37 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 02 10:45:37 localhost systemd[1]: Stopped File System Check on Root Device.
Oct 02 10:45:37 localhost systemd[1]: Stopped Journal Service.
Oct 02 10:45:37 localhost systemd[1]: Starting Journal Service...
Oct 02 10:45:37 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 02 10:45:37 localhost systemd[1]: Starting Generate network units from Kernel command line...
Oct 02 10:45:37 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 02 10:45:37 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Oct 02 10:45:37 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 02 10:45:37 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 02 10:45:37 localhost kernel: fuse: init (API version 7.37)
Oct 02 10:45:37 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 02 10:45:37 localhost systemd[1]: Mounted Huge Pages File System.
Oct 02 10:45:37 localhost systemd[1]: Mounted POSIX Message Queue File System.
Oct 02 10:45:37 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct 02 10:45:37 localhost systemd[1]: Mounted Kernel Debug File System.
Oct 02 10:45:37 localhost systemd[1]: Mounted Kernel Trace File System.
Oct 02 10:45:37 localhost systemd-journald[681]: Journal started
Oct 02 10:45:37 localhost systemd-journald[681]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct 02 10:45:36 localhost systemd[1]: Queued start job for default target Multi-User System.
Oct 02 10:45:36 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 02 10:45:37 localhost systemd[1]: Started Journal Service.
Oct 02 10:45:37 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 02 10:45:37 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 02 10:45:37 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 02 10:45:37 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 02 10:45:37 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Oct 02 10:45:37 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 02 10:45:37 localhost systemd[1]: Finished Load Kernel Module fuse.
Oct 02 10:45:37 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct 02 10:45:37 localhost systemd[1]: Finished Generate network units from Kernel command line.
Oct 02 10:45:37 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Oct 02 10:45:37 localhost kernel: ACPI: bus type drm_connector registered
Oct 02 10:45:37 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 02 10:45:37 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 02 10:45:37 localhost systemd[1]: Finished Load Kernel Module drm.
Oct 02 10:45:37 localhost systemd[1]: Mounting FUSE Control File System...
Oct 02 10:45:37 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 02 10:45:37 localhost systemd[1]: Starting Rebuild Hardware Database...
Oct 02 10:45:37 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Oct 02 10:45:37 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 02 10:45:37 localhost systemd[1]: Starting Load/Save OS Random Seed...
Oct 02 10:45:37 localhost systemd[1]: Starting Create System Users...
Oct 02 10:45:37 localhost systemd[1]: Mounted FUSE Control File System.
Oct 02 10:45:37 localhost systemd-journald[681]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct 02 10:45:37 localhost systemd-journald[681]: Received client request to flush runtime journal.
Oct 02 10:45:37 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Oct 02 10:45:37 localhost systemd[1]: Finished Load/Save OS Random Seed.
Oct 02 10:45:37 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 02 10:45:37 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 02 10:45:37 localhost systemd[1]: Finished Create System Users.
Oct 02 10:45:37 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 02 10:45:37 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 02 10:45:37 localhost systemd[1]: Reached target Preparation for Local File Systems.
Oct 02 10:45:37 localhost systemd[1]: Reached target Local File Systems.
Oct 02 10:45:37 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct 02 10:45:37 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct 02 10:45:37 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 02 10:45:37 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct 02 10:45:37 localhost systemd[1]: Starting Automatic Boot Loader Update...
Oct 02 10:45:37 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct 02 10:45:37 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 02 10:45:37 localhost bootctl[699]: Couldn't find EFI system partition, skipping.
Oct 02 10:45:37 localhost systemd[1]: Finished Automatic Boot Loader Update.
Oct 02 10:45:37 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 02 10:45:37 localhost systemd[1]: Starting Security Auditing Service...
Oct 02 10:45:37 localhost systemd[1]: Starting RPC Bind...
Oct 02 10:45:37 localhost systemd[1]: Starting Rebuild Journal Catalog...
Oct 02 10:45:37 localhost auditd[705]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct 02 10:45:37 localhost auditd[705]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct 02 10:45:37 localhost systemd[1]: Started RPC Bind.
Oct 02 10:45:37 localhost systemd[1]: Finished Rebuild Journal Catalog.
Oct 02 10:45:37 localhost augenrules[710]: /sbin/augenrules: No change
Oct 02 10:45:37 localhost augenrules[725]: No rules
Oct 02 10:45:37 localhost augenrules[725]: enabled 1
Oct 02 10:45:37 localhost augenrules[725]: failure 1
Oct 02 10:45:37 localhost augenrules[725]: pid 705
Oct 02 10:45:37 localhost augenrules[725]: rate_limit 0
Oct 02 10:45:37 localhost augenrules[725]: backlog_limit 8192
Oct 02 10:45:37 localhost augenrules[725]: lost 0
Oct 02 10:45:37 localhost augenrules[725]: backlog 0
Oct 02 10:45:37 localhost augenrules[725]: backlog_wait_time 60000
Oct 02 10:45:37 localhost augenrules[725]: backlog_wait_time_actual 0
Oct 02 10:45:37 localhost systemd[1]: Started Security Auditing Service.
Oct 02 10:45:37 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct 02 10:45:37 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct 02 10:45:37 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct 02 10:45:37 localhost systemd[1]: Finished Rebuild Hardware Database.
Oct 02 10:45:37 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 02 10:45:37 localhost systemd[1]: Starting Update is Completed...
Oct 02 10:45:37 localhost systemd[1]: Finished Update is Completed.
Oct 02 10:45:37 localhost systemd-udevd[733]: Using default interface naming scheme 'rhel-9.0'.
Oct 02 10:45:38 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 02 10:45:38 localhost systemd[1]: Reached target System Initialization.
Oct 02 10:45:38 localhost systemd[1]: Started dnf makecache --timer.
Oct 02 10:45:38 localhost systemd[1]: Started Daily rotation of log files.
Oct 02 10:45:38 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct 02 10:45:38 localhost systemd[1]: Reached target Timer Units.
Oct 02 10:45:38 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 02 10:45:38 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct 02 10:45:38 localhost systemd[1]: Reached target Socket Units.
Oct 02 10:45:38 localhost systemd[1]: Starting D-Bus System Message Bus...
Oct 02 10:45:38 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 02 10:45:38 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct 02 10:45:38 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 02 10:45:38 localhost systemd-udevd[744]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 10:45:38 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 02 10:45:38 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 02 10:45:38 localhost systemd[1]: Started D-Bus System Message Bus.
Oct 02 10:45:38 localhost systemd[1]: Reached target Basic System.
Oct 02 10:45:38 localhost dbus-broker-lau[764]: Ready
Oct 02 10:45:38 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 02 10:45:38 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 02 10:45:38 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 02 10:45:38 localhost systemd[1]: Starting NTP client/server...
Oct 02 10:45:38 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct 02 10:45:38 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct 02 10:45:38 localhost systemd[1]: Starting IPv4 firewall with iptables...
Oct 02 10:45:38 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct 02 10:45:38 localhost systemd[1]: Started irqbalance daemon.
Oct 02 10:45:38 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct 02 10:45:38 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 02 10:45:38 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 02 10:45:38 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 02 10:45:38 localhost systemd[1]: Reached target sshd-keygen.target.
Oct 02 10:45:38 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct 02 10:45:38 localhost systemd[1]: Reached target User and Group Name Lookups.
Oct 02 10:45:38 localhost systemd[1]: Starting User Login Management...
Oct 02 10:45:38 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct 02 10:45:38 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 02 10:45:38 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 02 10:45:38 localhost kernel: Console: switching to colour dummy device 80x25
Oct 02 10:45:38 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 02 10:45:38 localhost kernel: [drm] features: -context_init
Oct 02 10:45:38 localhost kernel: [drm] number of scanouts: 1
Oct 02 10:45:38 localhost kernel: [drm] number of cap sets: 0
Oct 02 10:45:38 localhost systemd-logind[789]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 02 10:45:38 localhost systemd-logind[789]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 02 10:45:38 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct 02 10:45:38 localhost systemd-logind[789]: New seat seat0.
Oct 02 10:45:38 localhost systemd[1]: Started User Login Management.
Oct 02 10:45:38 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct 02 10:45:38 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 02 10:45:38 localhost kernel: Console: switching to colour frame buffer device 128x48
Oct 02 10:45:38 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 02 10:45:38 localhost chronyd[805]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 02 10:45:38 localhost chronyd[805]: Loaded 0 symmetric keys
Oct 02 10:45:38 localhost chronyd[805]: Using right/UTC timezone to obtain leap second data
Oct 02 10:45:38 localhost chronyd[805]: Loaded seccomp filter (level 2)
Oct 02 10:45:38 localhost systemd[1]: Started NTP client/server.
Oct 02 10:45:38 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct 02 10:45:38 localhost kernel: kvm_amd: TSC scaling supported
Oct 02 10:45:38 localhost kernel: kvm_amd: Nested Virtualization enabled
Oct 02 10:45:38 localhost kernel: kvm_amd: Nested Paging enabled
Oct 02 10:45:38 localhost kernel: kvm_amd: LBR virtualization supported
Oct 02 10:45:38 localhost iptables.init[780]: iptables: Applying firewall rules: [  OK  ]
Oct 02 10:45:38 localhost systemd[1]: Finished IPv4 firewall with iptables.
Oct 02 10:45:38 localhost cloud-init[841]: Cloud-init v. 24.4-7.el9 running 'init-local' at Thu, 02 Oct 2025 10:45:38 +0000. Up 6.61 seconds.
Oct 02 10:45:39 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Oct 02 10:45:39 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Oct 02 10:45:39 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpch633osg.mount: Deactivated successfully.
Oct 02 10:45:39 localhost systemd[1]: Starting Hostname Service...
Oct 02 10:45:39 localhost systemd[1]: Started Hostname Service.
Oct 02 10:45:39 np0005465986.novalocal systemd-hostnamed[855]: Hostname set to <np0005465986.novalocal> (static)
Oct 02 10:45:39 np0005465986.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct 02 10:45:39 np0005465986.novalocal systemd[1]: Reached target Preparation for Network.
Oct 02 10:45:39 np0005465986.novalocal systemd[1]: Starting Network Manager...
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.5271] NetworkManager (version 1.54.1-1.el9) is starting... (boot:6763d233-2040-484e-9688-138dedaa0224)
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.5275] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.5443] manager[0x556cb18c8080]: monitoring kernel firmware directory '/lib/firmware'.
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.5493] hostname: hostname: using hostnamed
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.5493] hostname: static hostname changed from (none) to "np0005465986.novalocal"
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.5497] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.5619] manager[0x556cb18c8080]: rfkill: Wi-Fi hardware radio set enabled
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.5622] manager[0x556cb18c8080]: rfkill: WWAN hardware radio set enabled
Oct 02 10:45:39 np0005465986.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.5725] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.5725] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.5726] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.5727] manager: Networking is enabled by state file
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.5733] settings: Loaded settings plugin: keyfile (internal)
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.5768] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.5803] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.5835] dhcp: init: Using DHCP client 'internal'
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.5840] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.5863] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 10:45:39 np0005465986.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.5882] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.5896] device (lo): Activation: starting connection 'lo' (c4c28c02-0185-48e9-b2e5-06b78ea3e48d)
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.5913] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.5917] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 10:45:39 np0005465986.novalocal systemd[1]: Started Network Manager.
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.5961] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.5966] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.5968] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.5969] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.5972] device (eth0): carrier: link connected
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.5974] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 02 10:45:39 np0005465986.novalocal systemd[1]: Reached target Network.
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.5985] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.5996] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.6001] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.6002] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.6004] manager: NetworkManager state is now CONNECTING
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.6006] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.6012] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.6015] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 02 10:45:39 np0005465986.novalocal systemd[1]: Starting Network Manager Wait Online...
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.6073] dhcp4 (eth0): state changed new lease, address=38.129.56.184
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.6083] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 02 10:45:39 np0005465986.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.6118] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 10:45:39 np0005465986.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.6153] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.6155] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.6172] device (lo): Activation: successful, device activated.
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.6182] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.6183] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.6185] manager: NetworkManager state is now CONNECTED_SITE
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.6187] device (eth0): Activation: successful, device activated.
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.6191] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 02 10:45:39 np0005465986.novalocal NetworkManager[859]: <info>  [1759401939.6193] manager: startup complete
Oct 02 10:45:39 np0005465986.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Oct 02 10:45:39 np0005465986.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 02 10:45:39 np0005465986.novalocal systemd[1]: Reached target NFS client services.
Oct 02 10:45:39 np0005465986.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Oct 02 10:45:39 np0005465986.novalocal systemd[1]: Reached target Remote File Systems.
Oct 02 10:45:39 np0005465986.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 02 10:45:39 np0005465986.novalocal systemd[1]: Finished Network Manager Wait Online.
Oct 02 10:45:39 np0005465986.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Oct 02 10:45:39 np0005465986.novalocal cloud-init[924]: Cloud-init v. 24.4-7.el9 running 'init' at Thu, 02 Oct 2025 10:45:39 +0000. Up 7.64 seconds.
Oct 02 10:45:40 np0005465986.novalocal cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct 02 10:45:40 np0005465986.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 02 10:45:40 np0005465986.novalocal cloud-init[924]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Oct 02 10:45:40 np0005465986.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 02 10:45:40 np0005465986.novalocal cloud-init[924]: ci-info: |  eth0  | True |        38.129.56.184         | 255.255.255.0 | global | fa:16:3e:fd:8c:4a |
Oct 02 10:45:40 np0005465986.novalocal cloud-init[924]: ci-info: |  eth0  | True | fe80::f816:3eff:fefd:8c4a/64 |       .       |  link  | fa:16:3e:fd:8c:4a |
Oct 02 10:45:40 np0005465986.novalocal cloud-init[924]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Oct 02 10:45:40 np0005465986.novalocal cloud-init[924]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Oct 02 10:45:40 np0005465986.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 02 10:45:40 np0005465986.novalocal cloud-init[924]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++++
Oct 02 10:45:40 np0005465986.novalocal cloud-init[924]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Oct 02 10:45:40 np0005465986.novalocal cloud-init[924]: ci-info: | Route |   Destination   |   Gateway   |     Genmask     | Interface | Flags |
Oct 02 10:45:40 np0005465986.novalocal cloud-init[924]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Oct 02 10:45:40 np0005465986.novalocal cloud-init[924]: ci-info: |   0   |     0.0.0.0     | 38.129.56.1 |     0.0.0.0     |    eth0   |   UG  |
Oct 02 10:45:40 np0005465986.novalocal cloud-init[924]: ci-info: |   1   |   38.129.56.0   |   0.0.0.0   |  255.255.255.0  |    eth0   |   U   |
Oct 02 10:45:40 np0005465986.novalocal cloud-init[924]: ci-info: |   2   | 169.254.169.254 | 38.129.56.5 | 255.255.255.255 |    eth0   |  UGH  |
Oct 02 10:45:40 np0005465986.novalocal cloud-init[924]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Oct 02 10:45:40 np0005465986.novalocal cloud-init[924]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct 02 10:45:40 np0005465986.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 02 10:45:40 np0005465986.novalocal cloud-init[924]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct 02 10:45:40 np0005465986.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 02 10:45:40 np0005465986.novalocal cloud-init[924]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Oct 02 10:45:40 np0005465986.novalocal cloud-init[924]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Oct 02 10:45:40 np0005465986.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 02 10:45:41 np0005465986.novalocal useradd[990]: new group: name=cloud-user, GID=1001
Oct 02 10:45:41 np0005465986.novalocal useradd[990]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Oct 02 10:45:41 np0005465986.novalocal useradd[990]: add 'cloud-user' to group 'adm'
Oct 02 10:45:41 np0005465986.novalocal useradd[990]: add 'cloud-user' to group 'systemd-journal'
Oct 02 10:45:41 np0005465986.novalocal useradd[990]: add 'cloud-user' to shadow group 'adm'
Oct 02 10:45:41 np0005465986.novalocal useradd[990]: add 'cloud-user' to shadow group 'systemd-journal'
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: Generating public/private rsa key pair.
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: The key fingerprint is:
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: SHA256:pEPCH3ZLq0JZf1qIfNneENkSCpiboEMERq4uYGSnRqs root@np0005465986.novalocal
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: The key's randomart image is:
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: +---[RSA 3072]----+
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: |=+   o.   .      |
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: |o...o  . . +     |
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: |.* ooo= = + .    |
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: |B + oO O * o     |
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: |o*  o * S =      |
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: |*  .   + = o     |
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: |E.  . . . . .    |
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: |.    .           |
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: |                 |
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: +----[SHA256]-----+
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: Generating public/private ecdsa key pair.
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: The key fingerprint is:
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: SHA256:MhWcqIKi59NIG5uMW0rgzI9Xy42YXvVCkxK1GuOelDc root@np0005465986.novalocal
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: The key's randomart image is:
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: +---[ECDSA 256]---+
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: |       +..       |
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: |      o +.       |
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: | .   = ..        |
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: |o . o *..        |
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: |+  . *oES        |
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: |* + o.*o+        |
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: | @.B=++. .       |
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: |oo@+o+ ..        |
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: |ooo+             |
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: +----[SHA256]-----+
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: Generating public/private ed25519 key pair.
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: The key fingerprint is:
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: SHA256:6DT4OpGq84z2g6EzoNR5BcJKvVD/UUUqF24ZDQ3ZS+c root@np0005465986.novalocal
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: The key's randomart image is:
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: +--[ED25519 256]--+
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: |  +.     *Xo     |
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: | o +..  o.== .   |
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: |. o o..o *. +    |
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: | . . ..o=  . E   |
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: |  . o.=.S        |
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: |.o oo= .         |
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: |= o...o          |
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: |*=....           |
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: |+*+.o.           |
Oct 02 10:45:41 np0005465986.novalocal cloud-init[924]: +----[SHA256]-----+
Oct 02 10:45:42 np0005465986.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Oct 02 10:45:42 np0005465986.novalocal systemd[1]: Reached target Cloud-config availability.
Oct 02 10:45:42 np0005465986.novalocal systemd[1]: Reached target Network is Online.
Oct 02 10:45:42 np0005465986.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Oct 02 10:45:42 np0005465986.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Oct 02 10:45:42 np0005465986.novalocal systemd[1]: Starting System Logging Service...
Oct 02 10:45:42 np0005465986.novalocal sm-notify[1006]: Version 2.5.4 starting
Oct 02 10:45:42 np0005465986.novalocal systemd[1]: Starting OpenSSH server daemon...
Oct 02 10:45:42 np0005465986.novalocal systemd[1]: Starting Permit User Sessions...
Oct 02 10:45:42 np0005465986.novalocal systemd[1]: Started Notify NFS peers of a restart.
Oct 02 10:45:42 np0005465986.novalocal systemd[1]: Finished Permit User Sessions.
Oct 02 10:45:42 np0005465986.novalocal sshd[1008]: Server listening on 0.0.0.0 port 22.
Oct 02 10:45:42 np0005465986.novalocal sshd[1008]: Server listening on :: port 22.
Oct 02 10:45:42 np0005465986.novalocal systemd[1]: Started OpenSSH server daemon.
Oct 02 10:45:42 np0005465986.novalocal systemd[1]: Started Command Scheduler.
Oct 02 10:45:42 np0005465986.novalocal systemd[1]: Started Getty on tty1.
Oct 02 10:45:42 np0005465986.novalocal systemd[1]: Started Serial Getty on ttyS0.
Oct 02 10:45:42 np0005465986.novalocal crond[1010]: (CRON) STARTUP (1.5.7)
Oct 02 10:45:42 np0005465986.novalocal crond[1010]: (CRON) INFO (Syslog will be used instead of sendmail.)
Oct 02 10:45:42 np0005465986.novalocal crond[1010]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 62% if used.)
Oct 02 10:45:42 np0005465986.novalocal crond[1010]: (CRON) INFO (running with inotify support)
Oct 02 10:45:42 np0005465986.novalocal systemd[1]: Reached target Login Prompts.
Oct 02 10:45:42 np0005465986.novalocal rsyslogd[1007]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1007" x-info="https://www.rsyslog.com"] start
Oct 02 10:45:42 np0005465986.novalocal rsyslogd[1007]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Oct 02 10:45:42 np0005465986.novalocal systemd[1]: Started System Logging Service.
Oct 02 10:45:42 np0005465986.novalocal systemd[1]: Reached target Multi-User System.
Oct 02 10:45:42 np0005465986.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Oct 02 10:45:42 np0005465986.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 02 10:45:42 np0005465986.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Oct 02 10:45:42 np0005465986.novalocal rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 10:45:42 np0005465986.novalocal cloud-init[1020]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Thu, 02 Oct 2025 10:45:42 +0000. Up 10.00 seconds.
Oct 02 10:45:42 np0005465986.novalocal sshd-session[1022]: Unable to negotiate with 38.102.83.114 port 44976: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Oct 02 10:45:42 np0005465986.novalocal sshd-session[1024]: Connection reset by 38.102.83.114 port 44982 [preauth]
Oct 02 10:45:42 np0005465986.novalocal sshd-session[1026]: Unable to negotiate with 38.102.83.114 port 44998: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Oct 02 10:45:42 np0005465986.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Oct 02 10:45:42 np0005465986.novalocal sshd-session[1028]: Unable to negotiate with 38.102.83.114 port 45010: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Oct 02 10:45:42 np0005465986.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Oct 02 10:45:42 np0005465986.novalocal sshd-session[1019]: Connection closed by 38.102.83.114 port 44960 [preauth]
Oct 02 10:45:42 np0005465986.novalocal sshd-session[1033]: Connection reset by 38.102.83.114 port 45038 [preauth]
Oct 02 10:45:42 np0005465986.novalocal sshd-session[1035]: Unable to negotiate with 38.102.83.114 port 45048: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Oct 02 10:45:42 np0005465986.novalocal sshd-session[1037]: Unable to negotiate with 38.102.83.114 port 45060: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Oct 02 10:45:42 np0005465986.novalocal sshd-session[1031]: Connection closed by 38.102.83.114 port 45024 [preauth]
Oct 02 10:45:42 np0005465986.novalocal cloud-init[1041]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Thu, 02 Oct 2025 10:45:42 +0000. Up 10.45 seconds.
Oct 02 10:45:42 np0005465986.novalocal cloud-init[1043]: #############################################################
Oct 02 10:45:42 np0005465986.novalocal cloud-init[1044]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct 02 10:45:42 np0005465986.novalocal cloud-init[1046]: 256 SHA256:MhWcqIKi59NIG5uMW0rgzI9Xy42YXvVCkxK1GuOelDc root@np0005465986.novalocal (ECDSA)
Oct 02 10:45:42 np0005465986.novalocal cloud-init[1048]: 256 SHA256:6DT4OpGq84z2g6EzoNR5BcJKvVD/UUUqF24ZDQ3ZS+c root@np0005465986.novalocal (ED25519)
Oct 02 10:45:42 np0005465986.novalocal cloud-init[1050]: 3072 SHA256:pEPCH3ZLq0JZf1qIfNneENkSCpiboEMERq4uYGSnRqs root@np0005465986.novalocal (RSA)
Oct 02 10:45:42 np0005465986.novalocal cloud-init[1051]: -----END SSH HOST KEY FINGERPRINTS-----
Oct 02 10:45:42 np0005465986.novalocal cloud-init[1052]: #############################################################
Oct 02 10:45:42 np0005465986.novalocal cloud-init[1041]: Cloud-init v. 24.4-7.el9 finished at Thu, 02 Oct 2025 10:45:42 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.63 seconds
Oct 02 10:45:43 np0005465986.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Oct 02 10:45:43 np0005465986.novalocal systemd[1]: Reached target Cloud-init target.
Oct 02 10:45:43 np0005465986.novalocal systemd[1]: Startup finished in 1.585s (kernel) + 2.485s (initrd) + 6.631s (userspace) = 10.702s.
Oct 02 10:45:45 np0005465986.novalocal chronyd[805]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Oct 02 10:45:45 np0005465986.novalocal chronyd[805]: System clock TAI offset set to 37 seconds
Oct 02 10:45:48 np0005465986.novalocal irqbalance[784]: Cannot change IRQ 25 affinity: Operation not permitted
Oct 02 10:45:48 np0005465986.novalocal irqbalance[784]: IRQ 25 affinity is now unmanaged
Oct 02 10:45:48 np0005465986.novalocal irqbalance[784]: Cannot change IRQ 31 affinity: Operation not permitted
Oct 02 10:45:48 np0005465986.novalocal irqbalance[784]: IRQ 31 affinity is now unmanaged
Oct 02 10:45:48 np0005465986.novalocal irqbalance[784]: Cannot change IRQ 28 affinity: Operation not permitted
Oct 02 10:45:48 np0005465986.novalocal irqbalance[784]: IRQ 28 affinity is now unmanaged
Oct 02 10:45:48 np0005465986.novalocal irqbalance[784]: Cannot change IRQ 32 affinity: Operation not permitted
Oct 02 10:45:48 np0005465986.novalocal irqbalance[784]: IRQ 32 affinity is now unmanaged
Oct 02 10:45:48 np0005465986.novalocal irqbalance[784]: Cannot change IRQ 30 affinity: Operation not permitted
Oct 02 10:45:48 np0005465986.novalocal irqbalance[784]: IRQ 30 affinity is now unmanaged
Oct 02 10:45:48 np0005465986.novalocal irqbalance[784]: Cannot change IRQ 29 affinity: Operation not permitted
Oct 02 10:45:48 np0005465986.novalocal irqbalance[784]: IRQ 29 affinity is now unmanaged
Oct 02 10:45:49 np0005465986.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 02 10:46:09 np0005465986.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 02 10:46:26 np0005465986.novalocal sshd-session[1058]: Accepted publickey for zuul from 38.102.83.114 port 35430 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Oct 02 10:46:26 np0005465986.novalocal systemd[1]: Created slice User Slice of UID 1000.
Oct 02 10:46:26 np0005465986.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 02 10:46:26 np0005465986.novalocal systemd-logind[789]: New session 1 of user zuul.
Oct 02 10:46:26 np0005465986.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 02 10:46:26 np0005465986.novalocal systemd[1]: Starting User Manager for UID 1000...
Oct 02 10:46:26 np0005465986.novalocal systemd[1062]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 10:46:26 np0005465986.novalocal systemd[1062]: Queued start job for default target Main User Target.
Oct 02 10:46:26 np0005465986.novalocal systemd[1062]: Created slice User Application Slice.
Oct 02 10:46:26 np0005465986.novalocal systemd[1062]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 02 10:46:26 np0005465986.novalocal systemd[1062]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 10:46:26 np0005465986.novalocal systemd[1062]: Reached target Paths.
Oct 02 10:46:26 np0005465986.novalocal systemd[1062]: Reached target Timers.
Oct 02 10:46:26 np0005465986.novalocal systemd[1062]: Starting D-Bus User Message Bus Socket...
Oct 02 10:46:26 np0005465986.novalocal systemd[1062]: Starting Create User's Volatile Files and Directories...
Oct 02 10:46:26 np0005465986.novalocal systemd[1062]: Finished Create User's Volatile Files and Directories.
Oct 02 10:46:26 np0005465986.novalocal systemd[1062]: Listening on D-Bus User Message Bus Socket.
Oct 02 10:46:26 np0005465986.novalocal systemd[1062]: Reached target Sockets.
Oct 02 10:46:26 np0005465986.novalocal systemd[1062]: Reached target Basic System.
Oct 02 10:46:26 np0005465986.novalocal systemd[1062]: Reached target Main User Target.
Oct 02 10:46:26 np0005465986.novalocal systemd[1062]: Startup finished in 138ms.
Oct 02 10:46:26 np0005465986.novalocal systemd[1]: Started User Manager for UID 1000.
Oct 02 10:46:26 np0005465986.novalocal systemd[1]: Started Session 1 of User zuul.
Oct 02 10:46:26 np0005465986.novalocal sshd-session[1058]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 10:46:27 np0005465986.novalocal python3[1145]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 10:46:33 np0005465986.novalocal python3[1173]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 10:46:49 np0005465986.novalocal python3[1231]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 10:46:51 np0005465986.novalocal python3[1271]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct 02 10:46:54 np0005465986.novalocal python3[1297]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4HLvw7SnHU3ZU9fE0jIv/0FBRDzTnHfUtei1LSOQahXMfp0JTrMJ0Rj7BbYXxImr2WcDV3bv5FU3LkNsWWkvKZ+/YTg55vh88jhcTTSwOPfyQ0NCsgJ787HDXojmkTKqvsS4ZyAP6VcPlCZbUWNnTSSbJUSyaHZMV5ihm0q6iSgctKks2z5A9UayATNjnXUmG/mYZF8TjRztR4mgHBNFbBBfNYFztb1B2fe+vxBnNa4ls2O1rLzC/5crDuKj3ook8+1X4UTHys4s5ONjn9jxIkB3P5jnGl8ibSdVQRN46RVP9p93WUUJmVLRZdoaq0MrLEwnG1poWzqFMv/cX91UaFWkVQBmiX85KvlSQJqVymdz6LOcPNL+U30yMk55hrLdmeOMl1B9DW4VKD/rr+vtK+HpbaJYYHv9A+woTHFawd+Lkd7oFFavN5+ce0qiO8pg3vZYBLUXgFNDIcMMuu3Th/9xHdxwKfNaQJ9ESqYe+DAE5UBQd/CAmWzb1hdJnFX8= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 10:46:55 np0005465986.novalocal python3[1321]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 10:46:56 np0005465986.novalocal python3[1420]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 10:46:56 np0005465986.novalocal python3[1491]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759402015.8026507-251-144491645293711/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=ce6939fdbe5b42ab81e6b7883c8c12f2_id_rsa follow=False checksum=205f22b2d368cfc213016f6ec99460e215b81c8a backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 10:46:57 np0005465986.novalocal python3[1614]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 10:46:57 np0005465986.novalocal python3[1685]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759402016.9879599-306-222933808906665/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=ce6939fdbe5b42ab81e6b7883c8c12f2_id_rsa.pub follow=False checksum=e1273b820e94a75c0885ab32202b04dec1652e4b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 10:46:59 np0005465986.novalocal python3[1733]: ansible-ping Invoked with data=pong
Oct 02 10:47:01 np0005465986.novalocal python3[1757]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 10:47:03 np0005465986.novalocal python3[1815]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct 02 10:47:05 np0005465986.novalocal python3[1847]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 10:47:05 np0005465986.novalocal python3[1871]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 10:47:05 np0005465986.novalocal python3[1895]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 10:47:05 np0005465986.novalocal python3[1919]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 10:47:06 np0005465986.novalocal python3[1943]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 10:47:06 np0005465986.novalocal python3[1967]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 10:47:08 np0005465986.novalocal sudo[1991]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozuykdnttfcioqqblvenvjagaklgidqy ; /usr/bin/python3'
Oct 02 10:47:08 np0005465986.novalocal sudo[1991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:47:08 np0005465986.novalocal python3[1993]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 10:47:08 np0005465986.novalocal sudo[1991]: pam_unix(sudo:session): session closed for user root
Oct 02 10:47:09 np0005465986.novalocal sudo[2069]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvufhcdirsekqowwyuvlgzhiaxorcyic ; /usr/bin/python3'
Oct 02 10:47:09 np0005465986.novalocal sudo[2069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:47:09 np0005465986.novalocal python3[2071]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 10:47:09 np0005465986.novalocal sudo[2069]: pam_unix(sudo:session): session closed for user root
Oct 02 10:47:10 np0005465986.novalocal sudo[2142]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvgqkpknickdhwsjhdhkcpcmfsijnswu ; /usr/bin/python3'
Oct 02 10:47:10 np0005465986.novalocal sudo[2142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:47:10 np0005465986.novalocal python3[2144]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759402029.1770813-31-217379958714757/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 10:47:10 np0005465986.novalocal sudo[2142]: pam_unix(sudo:session): session closed for user root
Oct 02 10:47:11 np0005465986.novalocal python3[2192]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 10:47:11 np0005465986.novalocal python3[2216]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 10:47:11 np0005465986.novalocal python3[2240]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 10:47:12 np0005465986.novalocal python3[2264]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 10:47:12 np0005465986.novalocal python3[2288]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 10:47:12 np0005465986.novalocal python3[2312]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 10:47:12 np0005465986.novalocal python3[2336]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 10:47:13 np0005465986.novalocal python3[2360]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 10:47:13 np0005465986.novalocal python3[2384]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 10:47:13 np0005465986.novalocal python3[2408]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 10:47:14 np0005465986.novalocal python3[2432]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 10:47:14 np0005465986.novalocal python3[2456]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 10:47:14 np0005465986.novalocal python3[2480]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 10:47:15 np0005465986.novalocal python3[2504]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 10:47:15 np0005465986.novalocal python3[2528]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 10:47:15 np0005465986.novalocal python3[2552]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 10:47:16 np0005465986.novalocal python3[2576]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 10:47:16 np0005465986.novalocal python3[2600]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 10:47:16 np0005465986.novalocal python3[2624]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 10:47:17 np0005465986.novalocal python3[2648]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 10:47:17 np0005465986.novalocal python3[2672]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 10:47:17 np0005465986.novalocal python3[2696]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 10:47:18 np0005465986.novalocal python3[2720]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 10:47:18 np0005465986.novalocal python3[2744]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 10:47:18 np0005465986.novalocal python3[2768]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 10:47:18 np0005465986.novalocal python3[2792]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 10:47:21 np0005465986.novalocal sudo[2816]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doyooylfexeixzbnmdqufhclgklmpawb ; /usr/bin/python3'
Oct 02 10:47:21 np0005465986.novalocal sudo[2816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:47:21 np0005465986.novalocal python3[2818]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 02 10:47:21 np0005465986.novalocal systemd[1]: Starting Time & Date Service...
Oct 02 10:47:21 np0005465986.novalocal systemd[1]: Started Time & Date Service.
Oct 02 10:47:21 np0005465986.novalocal systemd-timedated[2820]: Changed time zone to 'UTC' (UTC).
Oct 02 10:47:21 np0005465986.novalocal sudo[2816]: pam_unix(sudo:session): session closed for user root
Oct 02 10:47:21 np0005465986.novalocal sudo[2847]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnmecllqtegrjrbkpylgmiiohaevnjmy ; /usr/bin/python3'
Oct 02 10:47:21 np0005465986.novalocal sudo[2847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:47:22 np0005465986.novalocal python3[2849]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 10:47:22 np0005465986.novalocal sudo[2847]: pam_unix(sudo:session): session closed for user root
Oct 02 10:47:22 np0005465986.novalocal python3[2925]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 10:47:23 np0005465986.novalocal python3[2996]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1759402042.4344673-251-263505715940035/source _original_basename=tmpi35lz2ir follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 10:47:23 np0005465986.novalocal python3[3096]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 10:47:24 np0005465986.novalocal python3[3167]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759402043.532649-301-168569767165388/source _original_basename=tmps40epqx2 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 10:47:25 np0005465986.novalocal sudo[3267]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oruqqyndsxpnhrjmatbicrvvipdipcxu ; /usr/bin/python3'
Oct 02 10:47:25 np0005465986.novalocal sudo[3267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:47:25 np0005465986.novalocal python3[3269]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 10:47:25 np0005465986.novalocal sudo[3267]: pam_unix(sudo:session): session closed for user root
Oct 02 10:47:26 np0005465986.novalocal sudo[3340]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtejlrlidfifpqypkpzshalyjrspdhxl ; /usr/bin/python3'
Oct 02 10:47:26 np0005465986.novalocal sudo[3340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:47:26 np0005465986.novalocal python3[3342]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759402045.4476538-381-120657856001586/source _original_basename=tmp2wudqumq follow=False checksum=ae3de4f50d96aebac1680ec3c876b229eacb296b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 10:47:26 np0005465986.novalocal sudo[3340]: pam_unix(sudo:session): session closed for user root
Oct 02 10:47:27 np0005465986.novalocal python3[3390]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 10:47:27 np0005465986.novalocal python3[3416]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 10:47:27 np0005465986.novalocal sudo[3494]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frxmdizfrduxqgvtcpmdsenubhsxvqlm ; /usr/bin/python3'
Oct 02 10:47:27 np0005465986.novalocal sudo[3494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:47:27 np0005465986.novalocal python3[3496]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 10:47:27 np0005465986.novalocal sudo[3494]: pam_unix(sudo:session): session closed for user root
Oct 02 10:47:28 np0005465986.novalocal sudo[3567]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtjkufhpjvuweqfywifrnfoqlbounttv ; /usr/bin/python3'
Oct 02 10:47:28 np0005465986.novalocal sudo[3567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:47:28 np0005465986.novalocal python3[3569]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1759402047.5472326-451-34242243828636/source _original_basename=tmpgc2wk9_e follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 10:47:28 np0005465986.novalocal sudo[3567]: pam_unix(sudo:session): session closed for user root
Oct 02 10:47:29 np0005465986.novalocal sudo[3618]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znonjmhwsjjhxhqvkwwjuaykinlhdxwu ; /usr/bin/python3'
Oct 02 10:47:29 np0005465986.novalocal sudo[3618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:47:29 np0005465986.novalocal python3[3620]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-2436-8b23-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 10:47:29 np0005465986.novalocal sudo[3618]: pam_unix(sudo:session): session closed for user root
Oct 02 10:47:30 np0005465986.novalocal python3[3648]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-2436-8b23-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct 02 10:47:32 np0005465986.novalocal python3[3676]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 10:47:51 np0005465986.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 02 10:47:54 np0005465986.novalocal sudo[3702]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snjmmwzzwtwqpkxxeyiekrxzaqglvttx ; /usr/bin/python3'
Oct 02 10:47:54 np0005465986.novalocal sudo[3702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:47:54 np0005465986.novalocal python3[3704]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 10:47:54 np0005465986.novalocal sudo[3702]: pam_unix(sudo:session): session closed for user root
Oct 02 10:48:51 np0005465986.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 02 10:48:51 np0005465986.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Oct 02 10:48:51 np0005465986.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Oct 02 10:48:51 np0005465986.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Oct 02 10:48:51 np0005465986.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Oct 02 10:48:51 np0005465986.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Oct 02 10:48:51 np0005465986.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Oct 02 10:48:51 np0005465986.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Oct 02 10:48:51 np0005465986.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Oct 02 10:48:51 np0005465986.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Oct 02 10:48:51 np0005465986.novalocal NetworkManager[859]: <info>  [1759402131.1020] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 02 10:48:51 np0005465986.novalocal systemd-udevd[3705]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 10:48:51 np0005465986.novalocal NetworkManager[859]: <info>  [1759402131.1177] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 10:48:51 np0005465986.novalocal NetworkManager[859]: <info>  [1759402131.1202] settings: (eth1): created default wired connection 'Wired connection 1'
Oct 02 10:48:51 np0005465986.novalocal NetworkManager[859]: <info>  [1759402131.1204] device (eth1): carrier: link connected
Oct 02 10:48:51 np0005465986.novalocal NetworkManager[859]: <info>  [1759402131.1206] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 02 10:48:51 np0005465986.novalocal NetworkManager[859]: <info>  [1759402131.1211] policy: auto-activating connection 'Wired connection 1' (efea40a0-3c75-3072-953b-755f76bcf27c)
Oct 02 10:48:51 np0005465986.novalocal NetworkManager[859]: <info>  [1759402131.1215] device (eth1): Activation: starting connection 'Wired connection 1' (efea40a0-3c75-3072-953b-755f76bcf27c)
Oct 02 10:48:51 np0005465986.novalocal NetworkManager[859]: <info>  [1759402131.1216] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 10:48:51 np0005465986.novalocal NetworkManager[859]: <info>  [1759402131.1218] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 10:48:51 np0005465986.novalocal NetworkManager[859]: <info>  [1759402131.1222] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 10:48:51 np0005465986.novalocal NetworkManager[859]: <info>  [1759402131.1225] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 02 10:48:51 np0005465986.novalocal systemd[1062]: Starting Mark boot as successful...
Oct 02 10:48:51 np0005465986.novalocal systemd[1062]: Finished Mark boot as successful.
Oct 02 10:48:52 np0005465986.novalocal python3[3733]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-3fa1-89d7-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 10:49:02 np0005465986.novalocal sudo[3811]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hclbniucbaryjcoqqommxpogwvnuguun ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 02 10:49:02 np0005465986.novalocal sudo[3811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:49:02 np0005465986.novalocal python3[3813]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 10:49:02 np0005465986.novalocal sudo[3811]: pam_unix(sudo:session): session closed for user root
Oct 02 10:49:02 np0005465986.novalocal sudo[3884]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnhknsktefndkqcpgugokiywihbvuzfd ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 02 10:49:02 np0005465986.novalocal sudo[3884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:49:02 np0005465986.novalocal python3[3886]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759402142.0998917-104-128643095136004/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=69a0477015a6eb7f043dac82ecdb5413d17f18b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 10:49:02 np0005465986.novalocal sudo[3884]: pam_unix(sudo:session): session closed for user root
Oct 02 10:49:03 np0005465986.novalocal sudo[3934]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chfhisiokflicjglnddayxlpgaeucapg ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 02 10:49:03 np0005465986.novalocal sudo[3934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:49:03 np0005465986.novalocal python3[3936]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 10:49:03 np0005465986.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 02 10:49:03 np0005465986.novalocal systemd[1]: Stopped Network Manager Wait Online.
Oct 02 10:49:03 np0005465986.novalocal systemd[1]: Stopping Network Manager Wait Online...
Oct 02 10:49:03 np0005465986.novalocal systemd[1]: Stopping Network Manager...
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[859]: <info>  [1759402143.7086] caught SIGTERM, shutting down normally.
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[859]: <info>  [1759402143.7095] dhcp4 (eth0): canceled DHCP transaction
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[859]: <info>  [1759402143.7095] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[859]: <info>  [1759402143.7096] dhcp4 (eth0): state changed no lease
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[859]: <info>  [1759402143.7098] manager: NetworkManager state is now CONNECTING
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[859]: <info>  [1759402143.7169] dhcp4 (eth1): canceled DHCP transaction
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[859]: <info>  [1759402143.7169] dhcp4 (eth1): state changed no lease
Oct 02 10:49:03 np0005465986.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 02 10:49:03 np0005465986.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[859]: <info>  [1759402143.7520] exiting (success)
Oct 02 10:49:03 np0005465986.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 02 10:49:03 np0005465986.novalocal systemd[1]: Stopped Network Manager.
Oct 02 10:49:03 np0005465986.novalocal systemd[1]: NetworkManager.service: Consumed 1.170s CPU time, 9.9M memory peak.
Oct 02 10:49:03 np0005465986.novalocal systemd[1]: Starting Network Manager...
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.8297] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:6763d233-2040-484e-9688-138dedaa0224)
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.8300] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.8368] manager[0x561fe9526070]: monitoring kernel firmware directory '/lib/firmware'.
Oct 02 10:49:03 np0005465986.novalocal systemd[1]: Starting Hostname Service...
Oct 02 10:49:03 np0005465986.novalocal systemd[1]: Started Hostname Service.
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9152] hostname: hostname: using hostnamed
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9154] hostname: static hostname changed from (none) to "np0005465986.novalocal"
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9161] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9166] manager[0x561fe9526070]: rfkill: Wi-Fi hardware radio set enabled
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9167] manager[0x561fe9526070]: rfkill: WWAN hardware radio set enabled
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9198] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9198] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9199] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9199] manager: Networking is enabled by state file
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9201] settings: Loaded settings plugin: keyfile (internal)
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9209] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9233] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9244] dhcp: init: Using DHCP client 'internal'
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9246] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9251] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9255] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9263] device (lo): Activation: starting connection 'lo' (c4c28c02-0185-48e9-b2e5-06b78ea3e48d)
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9268] device (eth0): carrier: link connected
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9272] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9275] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9276] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9282] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9288] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9293] device (eth1): carrier: link connected
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9298] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9302] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (efea40a0-3c75-3072-953b-755f76bcf27c) (indicated)
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9302] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9306] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9313] device (eth1): Activation: starting connection 'Wired connection 1' (efea40a0-3c75-3072-953b-755f76bcf27c)
Oct 02 10:49:03 np0005465986.novalocal systemd[1]: Started Network Manager.
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9332] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9336] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9339] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9341] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9343] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9346] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9349] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9352] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9355] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9362] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9366] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9381] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9387] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9414] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9420] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 02 10:49:03 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402143.9426] device (lo): Activation: successful, device activated.
Oct 02 10:49:03 np0005465986.novalocal systemd[1]: Starting Network Manager Wait Online...
Oct 02 10:49:03 np0005465986.novalocal sudo[3934]: pam_unix(sudo:session): session closed for user root
Oct 02 10:49:04 np0005465986.novalocal python3[4001]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-3fa1-89d7-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 10:49:05 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402145.1912] dhcp4 (eth0): state changed new lease, address=38.129.56.184
Oct 02 10:49:05 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402145.1925] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 02 10:49:05 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402145.2592] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 02 10:49:05 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402145.2622] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 02 10:49:05 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402145.2623] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 02 10:49:05 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402145.2626] manager: NetworkManager state is now CONNECTED_SITE
Oct 02 10:49:05 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402145.2629] device (eth0): Activation: successful, device activated.
Oct 02 10:49:05 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402145.2632] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 02 10:49:15 np0005465986.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 02 10:49:33 np0005465986.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 02 10:49:49 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402189.3090] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 02 10:49:49 np0005465986.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 02 10:49:49 np0005465986.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 02 10:49:49 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402189.3364] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 02 10:49:49 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402189.3367] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 02 10:49:49 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402189.3377] device (eth1): Activation: successful, device activated.
Oct 02 10:49:49 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402189.3385] manager: startup complete
Oct 02 10:49:49 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402189.3387] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Oct 02 10:49:49 np0005465986.novalocal NetworkManager[3953]: <warn>  [1759402189.3396] device (eth1): Activation: failed for connection 'Wired connection 1'
Oct 02 10:49:49 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402189.3405] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Oct 02 10:49:49 np0005465986.novalocal systemd[1]: Finished Network Manager Wait Online.
Oct 02 10:49:49 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402189.3629] dhcp4 (eth1): canceled DHCP transaction
Oct 02 10:49:49 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402189.3630] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 02 10:49:49 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402189.3630] dhcp4 (eth1): state changed no lease
Oct 02 10:49:49 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402189.3651] policy: auto-activating connection 'ci-private-network' (899e034d-b20c-5f63-a638-c327b5fc6ba0)
Oct 02 10:49:49 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402189.3659] device (eth1): Activation: starting connection 'ci-private-network' (899e034d-b20c-5f63-a638-c327b5fc6ba0)
Oct 02 10:49:49 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402189.3660] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 10:49:49 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402189.3664] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 10:49:49 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402189.3675] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 10:49:49 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402189.3687] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 10:49:49 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402189.3742] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 10:49:49 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402189.3745] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 10:49:49 np0005465986.novalocal NetworkManager[3953]: <info>  [1759402189.3759] device (eth1): Activation: successful, device activated.
Oct 02 10:49:59 np0005465986.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 02 10:50:04 np0005465986.novalocal sshd-session[1072]: Received disconnect from 38.102.83.114 port 35430:11: disconnected by user
Oct 02 10:50:04 np0005465986.novalocal sshd-session[1072]: Disconnected from user zuul 38.102.83.114 port 35430
Oct 02 10:50:04 np0005465986.novalocal sshd-session[1058]: pam_unix(sshd:session): session closed for user zuul
Oct 02 10:50:04 np0005465986.novalocal systemd-logind[789]: Session 1 logged out. Waiting for processes to exit.
Oct 02 10:51:11 np0005465986.novalocal sshd-session[4049]: Accepted publickey for zuul from 38.102.83.114 port 41692 ssh2: RSA SHA256:eDDJfQsywiNd5Pkwcn5EDkt3/zs20cKnlJwgA2GMDKU
Oct 02 10:51:11 np0005465986.novalocal systemd-logind[789]: New session 3 of user zuul.
Oct 02 10:51:11 np0005465986.novalocal systemd[1]: Started Session 3 of User zuul.
Oct 02 10:51:11 np0005465986.novalocal sshd-session[4049]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 10:51:11 np0005465986.novalocal sudo[4128]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvipeuyarlzrzafamzpaukhcmdrzdxrk ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 02 10:51:11 np0005465986.novalocal sudo[4128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:51:11 np0005465986.novalocal python3[4130]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 10:51:11 np0005465986.novalocal sudo[4128]: pam_unix(sudo:session): session closed for user root
Oct 02 10:51:12 np0005465986.novalocal sudo[4201]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juohvgybpyouxhhymkwtfgomxgnsqqnj ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 02 10:51:12 np0005465986.novalocal sudo[4201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:51:12 np0005465986.novalocal python3[4203]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759402271.3952618-373-227365454376119/source _original_basename=tmphf4f6sd4 follow=False checksum=21cbaa2be6010ff877e8cd89353655f79ca4b790 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 10:51:12 np0005465986.novalocal sudo[4201]: pam_unix(sudo:session): session closed for user root
Oct 02 10:51:16 np0005465986.novalocal sshd-session[4052]: Connection closed by 38.102.83.114 port 41692
Oct 02 10:51:16 np0005465986.novalocal sshd-session[4049]: pam_unix(sshd:session): session closed for user zuul
Oct 02 10:51:16 np0005465986.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Oct 02 10:51:16 np0005465986.novalocal systemd-logind[789]: Session 3 logged out. Waiting for processes to exit.
Oct 02 10:51:16 np0005465986.novalocal systemd-logind[789]: Removed session 3.
Oct 02 10:51:58 np0005465986.novalocal systemd[1062]: Created slice User Background Tasks Slice.
Oct 02 10:51:58 np0005465986.novalocal systemd[1062]: Starting Cleanup of User's Temporary Files and Directories...
Oct 02 10:51:58 np0005465986.novalocal systemd[1062]: Finished Cleanup of User's Temporary Files and Directories.
Oct 02 10:56:45 np0005465986.novalocal sshd-session[4234]: Accepted publickey for zuul from 38.102.83.114 port 42016 ssh2: RSA SHA256:eDDJfQsywiNd5Pkwcn5EDkt3/zs20cKnlJwgA2GMDKU
Oct 02 10:56:45 np0005465986.novalocal systemd-logind[789]: New session 4 of user zuul.
Oct 02 10:56:45 np0005465986.novalocal systemd[1]: Started Session 4 of User zuul.
Oct 02 10:56:45 np0005465986.novalocal sshd-session[4234]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 10:56:45 np0005465986.novalocal sudo[4261]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbpyqdxfzsysppdzuxdixnhbrydcuatl ; /usr/bin/python3'
Oct 02 10:56:46 np0005465986.novalocal sudo[4261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:56:46 np0005465986.novalocal python3[4263]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-bd1e-1dbd-000000000ca2-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 10:56:46 np0005465986.novalocal sudo[4261]: pam_unix(sudo:session): session closed for user root
Oct 02 10:56:46 np0005465986.novalocal sudo[4289]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rchspbfoacgdjnduyrfulajczlnxoogd ; /usr/bin/python3'
Oct 02 10:56:46 np0005465986.novalocal sudo[4289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:56:46 np0005465986.novalocal python3[4291]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 10:56:46 np0005465986.novalocal sudo[4289]: pam_unix(sudo:session): session closed for user root
Oct 02 10:56:46 np0005465986.novalocal sudo[4316]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epzbvjqokcopmvwkgepsrqsnglwqkjzh ; /usr/bin/python3'
Oct 02 10:56:46 np0005465986.novalocal sudo[4316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:56:47 np0005465986.novalocal python3[4318]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 10:56:47 np0005465986.novalocal sudo[4316]: pam_unix(sudo:session): session closed for user root
Oct 02 10:56:47 np0005465986.novalocal sudo[4342]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpyrpauyytkzjotoywsxfrdfgrpbtvmq ; /usr/bin/python3'
Oct 02 10:56:47 np0005465986.novalocal sudo[4342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:56:47 np0005465986.novalocal python3[4344]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 10:56:47 np0005465986.novalocal sudo[4342]: pam_unix(sudo:session): session closed for user root
Oct 02 10:56:47 np0005465986.novalocal sudo[4368]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgiyqbtzfcdcmljmowshnxycfnfzujzv ; /usr/bin/python3'
Oct 02 10:56:47 np0005465986.novalocal sudo[4368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:56:47 np0005465986.novalocal python3[4370]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 10:56:47 np0005465986.novalocal sudo[4368]: pam_unix(sudo:session): session closed for user root
Oct 02 10:56:47 np0005465986.novalocal sudo[4394]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmxwootnvkqbbjzcapepuslsyfnwdpku ; /usr/bin/python3'
Oct 02 10:56:47 np0005465986.novalocal sudo[4394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:56:48 np0005465986.novalocal python3[4396]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 10:56:48 np0005465986.novalocal python3[4396]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct 02 10:56:48 np0005465986.novalocal sudo[4394]: pam_unix(sudo:session): session closed for user root
Oct 02 10:56:48 np0005465986.novalocal sudo[4420]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkxjxshhnqsasrxbaquivbnsgiconqqv ; /usr/bin/python3'
Oct 02 10:56:48 np0005465986.novalocal sudo[4420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:56:49 np0005465986.novalocal python3[4422]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 10:56:49 np0005465986.novalocal systemd[1]: Reloading.
Oct 02 10:56:49 np0005465986.novalocal systemd-rc-local-generator[4445]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 10:56:49 np0005465986.novalocal sudo[4420]: pam_unix(sudo:session): session closed for user root
Oct 02 10:56:50 np0005465986.novalocal sudo[4476]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wryiwxwjmfwdxkvhbipwhobjpvkdnteg ; /usr/bin/python3'
Oct 02 10:56:50 np0005465986.novalocal sudo[4476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:56:50 np0005465986.novalocal python3[4478]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct 02 10:56:50 np0005465986.novalocal sudo[4476]: pam_unix(sudo:session): session closed for user root
Oct 02 10:56:51 np0005465986.novalocal sudo[4502]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujztoqqbnqnolqurhgvrnzwarkxekmun ; /usr/bin/python3'
Oct 02 10:56:51 np0005465986.novalocal sudo[4502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:56:51 np0005465986.novalocal python3[4504]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 10:56:51 np0005465986.novalocal sudo[4502]: pam_unix(sudo:session): session closed for user root
Oct 02 10:56:51 np0005465986.novalocal sudo[4530]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suutbpuulcbkacdoryhajkijbxosvuth ; /usr/bin/python3'
Oct 02 10:56:51 np0005465986.novalocal sudo[4530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:56:51 np0005465986.novalocal python3[4532]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 10:56:51 np0005465986.novalocal sudo[4530]: pam_unix(sudo:session): session closed for user root
Oct 02 10:56:51 np0005465986.novalocal sudo[4558]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcsmmxbwqucydrfhyxwwooiynmasmzlr ; /usr/bin/python3'
Oct 02 10:56:51 np0005465986.novalocal sudo[4558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:56:52 np0005465986.novalocal python3[4560]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 10:56:52 np0005465986.novalocal sudo[4558]: pam_unix(sudo:session): session closed for user root
Oct 02 10:56:52 np0005465986.novalocal sudo[4586]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhknkyonvflgyunbwampbtyioksbwoau ; /usr/bin/python3'
Oct 02 10:56:52 np0005465986.novalocal sudo[4586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:56:52 np0005465986.novalocal python3[4588]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 10:56:52 np0005465986.novalocal sudo[4586]: pam_unix(sudo:session): session closed for user root
Oct 02 10:56:52 np0005465986.novalocal python3[4615]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-bd1e-1dbd-000000000ca8-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 10:56:53 np0005465986.novalocal python3[4645]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 10:56:56 np0005465986.novalocal sshd-session[4237]: Connection closed by 38.102.83.114 port 42016
Oct 02 10:56:56 np0005465986.novalocal sshd-session[4234]: pam_unix(sshd:session): session closed for user zuul
Oct 02 10:56:56 np0005465986.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Oct 02 10:56:56 np0005465986.novalocal systemd[1]: session-4.scope: Consumed 3.849s CPU time.
Oct 02 10:56:56 np0005465986.novalocal systemd-logind[789]: Session 4 logged out. Waiting for processes to exit.
Oct 02 10:56:56 np0005465986.novalocal systemd-logind[789]: Removed session 4.
Oct 02 10:56:58 np0005465986.novalocal sshd-session[4650]: Accepted publickey for zuul from 38.102.83.114 port 44286 ssh2: RSA SHA256:eDDJfQsywiNd5Pkwcn5EDkt3/zs20cKnlJwgA2GMDKU
Oct 02 10:56:58 np0005465986.novalocal systemd-logind[789]: New session 5 of user zuul.
Oct 02 10:56:58 np0005465986.novalocal systemd[1]: Started Session 5 of User zuul.
Oct 02 10:56:58 np0005465986.novalocal sshd-session[4650]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 10:56:58 np0005465986.novalocal sudo[4677]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oycxzzeadahcygqkotnscftxoeuopwov ; /usr/bin/python3'
Oct 02 10:56:58 np0005465986.novalocal sudo[4677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:56:58 np0005465986.novalocal python3[4679]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 02 10:57:18 np0005465986.novalocal kernel: SELinux:  Converting 363 SID table entries...
Oct 02 10:57:18 np0005465986.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 10:57:18 np0005465986.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 02 10:57:18 np0005465986.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 10:57:18 np0005465986.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 02 10:57:18 np0005465986.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 10:57:18 np0005465986.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 10:57:18 np0005465986.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 10:57:28 np0005465986.novalocal kernel: SELinux:  Converting 363 SID table entries...
Oct 02 10:57:28 np0005465986.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 10:57:28 np0005465986.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 02 10:57:28 np0005465986.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 10:57:28 np0005465986.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 02 10:57:28 np0005465986.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 10:57:28 np0005465986.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 10:57:28 np0005465986.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 10:57:37 np0005465986.novalocal kernel: SELinux:  Converting 363 SID table entries...
Oct 02 10:57:37 np0005465986.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 10:57:37 np0005465986.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 02 10:57:37 np0005465986.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 10:57:37 np0005465986.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 02 10:57:37 np0005465986.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 10:57:37 np0005465986.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 10:57:37 np0005465986.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 10:57:39 np0005465986.novalocal setsebool[4746]: The virt_use_nfs policy boolean was changed to 1 by root
Oct 02 10:57:39 np0005465986.novalocal setsebool[4746]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Oct 02 10:57:50 np0005465986.novalocal kernel: SELinux:  Converting 366 SID table entries...
Oct 02 10:57:50 np0005465986.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 10:57:50 np0005465986.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 02 10:57:50 np0005465986.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 10:57:50 np0005465986.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 02 10:57:50 np0005465986.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 10:57:50 np0005465986.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 10:57:50 np0005465986.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 10:58:10 np0005465986.novalocal dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 02 10:58:10 np0005465986.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 10:58:10 np0005465986.novalocal systemd[1]: Starting man-db-cache-update.service...
Oct 02 10:58:10 np0005465986.novalocal systemd[1]: Reloading.
Oct 02 10:58:10 np0005465986.novalocal systemd-rc-local-generator[5499]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 10:58:11 np0005465986.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 10:58:13 np0005465986.novalocal systemd[1]: Starting PackageKit Daemon...
Oct 02 10:58:13 np0005465986.novalocal PackageKit[6422]: daemon start
Oct 02 10:58:13 np0005465986.novalocal systemd[1]: Starting Authorization Manager...
Oct 02 10:58:13 np0005465986.novalocal polkitd[6475]: Started polkitd version 0.117
Oct 02 10:58:13 np0005465986.novalocal polkitd[6475]: Loading rules from directory /etc/polkit-1/rules.d
Oct 02 10:58:13 np0005465986.novalocal polkitd[6475]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 02 10:58:13 np0005465986.novalocal polkitd[6475]: Finished loading, compiling and executing 3 rules
Oct 02 10:58:13 np0005465986.novalocal systemd[1]: Started Authorization Manager.
Oct 02 10:58:13 np0005465986.novalocal polkitd[6475]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Oct 02 10:58:13 np0005465986.novalocal systemd[1]: Started PackageKit Daemon.
Oct 02 10:58:14 np0005465986.novalocal sudo[4677]: pam_unix(sudo:session): session closed for user root
Oct 02 10:58:14 np0005465986.novalocal python3[7480]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-e981-cfdf-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 10:58:15 np0005465986.novalocal kernel: evm: overlay not supported
Oct 02 10:58:15 np0005465986.novalocal systemd[1062]: Starting D-Bus User Message Bus...
Oct 02 10:58:15 np0005465986.novalocal dbus-broker-launch[8493]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct 02 10:58:15 np0005465986.novalocal dbus-broker-launch[8493]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct 02 10:58:15 np0005465986.novalocal systemd[1062]: Started D-Bus User Message Bus.
Oct 02 10:58:15 np0005465986.novalocal dbus-broker-lau[8493]: Ready
Oct 02 10:58:15 np0005465986.novalocal systemd[1062]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 02 10:58:15 np0005465986.novalocal systemd[1062]: Created slice Slice /user.
Oct 02 10:58:15 np0005465986.novalocal systemd[1062]: podman-8359.scope: unit configures an IP firewall, but not running as root.
Oct 02 10:58:15 np0005465986.novalocal systemd[1062]: (This warning is only shown for the first unit using IP firewalling.)
Oct 02 10:58:15 np0005465986.novalocal systemd[1062]: Started podman-8359.scope.
Oct 02 10:58:16 np0005465986.novalocal systemd[1062]: Started podman-pause-221ab7c3.scope.
Oct 02 10:58:16 np0005465986.novalocal sudo[9184]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wodaxyzswmqhtdckxnfchxcaqztmacau ; /usr/bin/python3'
Oct 02 10:58:16 np0005465986.novalocal sudo[9184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:58:16 np0005465986.novalocal python3[9217]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                      location = "38.102.83.59:5001"
                                                      insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                      location = "38.102.83.59:5001"
                                                      insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 10:58:16 np0005465986.novalocal sudo[9184]: pam_unix(sudo:session): session closed for user root
Oct 02 10:58:17 np0005465986.novalocal sshd-session[4653]: Connection closed by 38.102.83.114 port 44286
Oct 02 10:58:17 np0005465986.novalocal sshd-session[4650]: pam_unix(sshd:session): session closed for user zuul
Oct 02 10:58:17 np0005465986.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Oct 02 10:58:17 np0005465986.novalocal systemd[1]: session-5.scope: Consumed 1min 967ms CPU time.
Oct 02 10:58:17 np0005465986.novalocal systemd-logind[789]: Session 5 logged out. Waiting for processes to exit.
Oct 02 10:58:17 np0005465986.novalocal systemd-logind[789]: Removed session 5.
Oct 02 10:58:36 np0005465986.novalocal sshd-session[16670]: Connection closed by 38.129.56.219 port 52480 [preauth]
Oct 02 10:58:36 np0005465986.novalocal sshd-session[16673]: Unable to negotiate with 38.129.56.219 port 52498: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Oct 02 10:58:36 np0005465986.novalocal sshd-session[16676]: Unable to negotiate with 38.129.56.219 port 52508: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Oct 02 10:58:36 np0005465986.novalocal sshd-session[16677]: Connection closed by 38.129.56.219 port 52488 [preauth]
Oct 02 10:58:36 np0005465986.novalocal sshd-session[16675]: Unable to negotiate with 38.129.56.219 port 52520: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Oct 02 10:58:38 np0005465986.novalocal sshd-session[17482]: Connection closed by 167.99.55.34 port 36162
Oct 02 10:58:41 np0005465986.novalocal sshd-session[18460]: Accepted publickey for zuul from 38.102.83.114 port 49068 ssh2: RSA SHA256:eDDJfQsywiNd5Pkwcn5EDkt3/zs20cKnlJwgA2GMDKU
Oct 02 10:58:41 np0005465986.novalocal systemd-logind[789]: New session 6 of user zuul.
Oct 02 10:58:41 np0005465986.novalocal systemd[1]: Started Session 6 of User zuul.
Oct 02 10:58:41 np0005465986.novalocal sshd-session[18460]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 10:58:42 np0005465986.novalocal python3[18565]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNoWb+X4zjyiL1C8i00X7uqnMpK2nqXgv8anwAVqGS5xltCc1+WIIAtSsS512sgaVFfbCTDrCO98+/EA3bNR7Uo= zuul@np0005465985.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 10:58:42 np0005465986.novalocal sudo[18772]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdgksiifykktgslftvgwogmsvidssjsy ; /usr/bin/python3'
Oct 02 10:58:42 np0005465986.novalocal sudo[18772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:58:42 np0005465986.novalocal python3[18784]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNoWb+X4zjyiL1C8i00X7uqnMpK2nqXgv8anwAVqGS5xltCc1+WIIAtSsS512sgaVFfbCTDrCO98+/EA3bNR7Uo= zuul@np0005465985.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 10:58:42 np0005465986.novalocal sudo[18772]: pam_unix(sudo:session): session closed for user root
Oct 02 10:58:43 np0005465986.novalocal sudo[19159]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnhqnzxpqdwgmdnfzmoigjufumldkmwj ; /usr/bin/python3'
Oct 02 10:58:43 np0005465986.novalocal sudo[19159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:58:44 np0005465986.novalocal python3[19169]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005465986.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct 02 10:58:44 np0005465986.novalocal useradd[19239]: new group: name=cloud-admin, GID=1002
Oct 02 10:58:44 np0005465986.novalocal useradd[19239]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Oct 02 10:58:44 np0005465986.novalocal sudo[19159]: pam_unix(sudo:session): session closed for user root
Oct 02 10:58:44 np0005465986.novalocal sudo[19414]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upqcqimwkjanrsqqpcsgyqnyhkkhzsoo ; /usr/bin/python3'
Oct 02 10:58:44 np0005465986.novalocal sudo[19414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:58:44 np0005465986.novalocal python3[19420]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNoWb+X4zjyiL1C8i00X7uqnMpK2nqXgv8anwAVqGS5xltCc1+WIIAtSsS512sgaVFfbCTDrCO98+/EA3bNR7Uo= zuul@np0005465985.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 10:58:44 np0005465986.novalocal sudo[19414]: pam_unix(sudo:session): session closed for user root
Oct 02 10:58:45 np0005465986.novalocal sudo[19662]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzcdmglynunuicchlwsqilombyyraytv ; /usr/bin/python3'
Oct 02 10:58:45 np0005465986.novalocal sudo[19662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:58:45 np0005465986.novalocal python3[19672]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 10:58:45 np0005465986.novalocal sudo[19662]: pam_unix(sudo:session): session closed for user root
Oct 02 10:58:45 np0005465986.novalocal sudo[19902]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uomarcmhdgujkormuvkkemcvkxfwxxya ; /usr/bin/python3'
Oct 02 10:58:45 np0005465986.novalocal sudo[19902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:58:45 np0005465986.novalocal python3[19910]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759402724.889942-167-146939752109181/source _original_basename=tmpip9dpwpn follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 10:58:45 np0005465986.novalocal sudo[19902]: pam_unix(sudo:session): session closed for user root
Oct 02 10:58:46 np0005465986.novalocal sudo[20235]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tthzrndoeoiifurxjoniawgeemfmdool ; /usr/bin/python3'
Oct 02 10:58:46 np0005465986.novalocal sudo[20235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 10:58:46 np0005465986.novalocal python3[20242]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Oct 02 10:58:46 np0005465986.novalocal systemd[1]: Starting Hostname Service...
Oct 02 10:58:46 np0005465986.novalocal systemd[1]: Started Hostname Service.
Oct 02 10:58:46 np0005465986.novalocal systemd-hostnamed[20337]: Changed pretty hostname to 'compute-0'
Oct 02 10:58:46 compute-0 systemd-hostnamed[20337]: Hostname set to <compute-0> (static)
Oct 02 10:58:46 compute-0 NetworkManager[3953]: <info>  [1759402726.6969] hostname: static hostname changed from "np0005465986.novalocal" to "compute-0"
Oct 02 10:58:46 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 02 10:58:46 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 02 10:58:46 compute-0 sudo[20235]: pam_unix(sudo:session): session closed for user root
Oct 02 10:58:47 compute-0 sshd-session[18508]: Connection closed by 38.102.83.114 port 49068
Oct 02 10:58:47 compute-0 sshd-session[18460]: pam_unix(sshd:session): session closed for user zuul
Oct 02 10:58:47 compute-0 systemd-logind[789]: Session 6 logged out. Waiting for processes to exit.
Oct 02 10:58:47 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Oct 02 10:58:47 compute-0 systemd[1]: session-6.scope: Consumed 2.471s CPU time.
Oct 02 10:58:47 compute-0 systemd-logind[789]: Removed session 6.
Oct 02 10:58:48 compute-0 irqbalance[784]: Cannot change IRQ 27 affinity: Operation not permitted
Oct 02 10:58:48 compute-0 irqbalance[784]: IRQ 27 affinity is now unmanaged
Oct 02 10:58:56 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 02 10:59:06 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 10:59:06 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 02 10:59:06 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1min 5.133s CPU time.
Oct 02 10:59:06 compute-0 systemd[1]: run-r197e74f748a4495abbcf98b5c2cb273d.service: Deactivated successfully.
Oct 02 10:59:16 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 02 11:00:58 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Oct 02 11:00:58 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct 02 11:00:58 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Oct 02 11:00:58 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct 02 11:01:01 compute-0 CROND[26579]: (root) CMD (run-parts /etc/cron.hourly)
Oct 02 11:01:01 compute-0 run-parts[26582]: (/etc/cron.hourly) starting 0anacron
Oct 02 11:01:01 compute-0 anacron[26590]: Anacron started on 2025-10-02
Oct 02 11:01:01 compute-0 anacron[26590]: Will run job `cron.daily' in 27 min.
Oct 02 11:01:01 compute-0 anacron[26590]: Will run job `cron.weekly' in 47 min.
Oct 02 11:01:01 compute-0 anacron[26590]: Will run job `cron.monthly' in 67 min.
Oct 02 11:01:01 compute-0 anacron[26590]: Jobs will be executed sequentially
Oct 02 11:01:01 compute-0 run-parts[26592]: (/etc/cron.hourly) finished 0anacron
Oct 02 11:01:01 compute-0 CROND[26578]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 02 11:03:19 compute-0 PackageKit[6422]: daemon quit
Oct 02 11:03:19 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 02 11:03:50 compute-0 sshd-session[26593]: Accepted publickey for zuul from 38.129.56.219 port 41884 ssh2: RSA SHA256:eDDJfQsywiNd5Pkwcn5EDkt3/zs20cKnlJwgA2GMDKU
Oct 02 11:03:50 compute-0 systemd-logind[789]: New session 7 of user zuul.
Oct 02 11:03:50 compute-0 systemd[1]: Started Session 7 of User zuul.
Oct 02 11:03:50 compute-0 sshd-session[26593]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:03:51 compute-0 python3[26669]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:03:53 compute-0 sudo[26783]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inoagpzuqpamxnxwhwnfbmrtwcjplyxl ; /usr/bin/python3'
Oct 02 11:03:53 compute-0 sudo[26783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:03:54 compute-0 python3[26785]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:03:54 compute-0 sudo[26783]: pam_unix(sudo:session): session closed for user root
Oct 02 11:03:54 compute-0 sudo[26856]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prkfrmuaejwzigitfwjdszrrmubpazkw ; /usr/bin/python3'
Oct 02 11:03:54 compute-0 sudo[26856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:03:54 compute-0 python3[26858]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403033.7802043-30613-182809272704175/source mode=0755 _original_basename=delorean.repo follow=False checksum=bb4c2ff9dad546f135d54d9729ea11b84117755d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:03:54 compute-0 sudo[26856]: pam_unix(sudo:session): session closed for user root
Oct 02 11:03:54 compute-0 sudo[26882]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heackrskoasfwellxoxubddoptnyrrap ; /usr/bin/python3'
Oct 02 11:03:54 compute-0 sudo[26882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:03:54 compute-0 python3[26884]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:03:54 compute-0 sudo[26882]: pam_unix(sudo:session): session closed for user root
Oct 02 11:03:55 compute-0 sudo[26955]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoiaxvudknrvzivbtsavyqcwmirmyjfw ; /usr/bin/python3'
Oct 02 11:03:55 compute-0 sudo[26955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:03:55 compute-0 python3[26957]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403033.7802043-30613-182809272704175/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:03:55 compute-0 sudo[26955]: pam_unix(sudo:session): session closed for user root
Oct 02 11:03:55 compute-0 sudo[26982]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygagcznjllcfzsbbifbnsbbsqhdelljz ; /usr/bin/python3'
Oct 02 11:03:55 compute-0 sudo[26982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:03:55 compute-0 python3[26984]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:03:55 compute-0 sudo[26982]: pam_unix(sudo:session): session closed for user root
Oct 02 11:03:55 compute-0 sudo[27055]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peyshiwtnfrkehrpbofgyprmwkuaxhbm ; /usr/bin/python3'
Oct 02 11:03:55 compute-0 sudo[27055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:03:55 compute-0 python3[27057]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403033.7802043-30613-182809272704175/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:03:55 compute-0 sudo[27055]: pam_unix(sudo:session): session closed for user root
Oct 02 11:03:56 compute-0 sudo[27081]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fobjtweemsaxfvyghqlpqdzpkubmjodo ; /usr/bin/python3'
Oct 02 11:03:56 compute-0 sudo[27081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:03:56 compute-0 python3[27083]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:03:56 compute-0 sudo[27081]: pam_unix(sudo:session): session closed for user root
Oct 02 11:03:56 compute-0 sudo[27154]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exsjtlncxhtzrghhnerjfbajmiwevtgq ; /usr/bin/python3'
Oct 02 11:03:56 compute-0 sudo[27154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:03:56 compute-0 python3[27156]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403033.7802043-30613-182809272704175/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:03:56 compute-0 sudo[27154]: pam_unix(sudo:session): session closed for user root
Oct 02 11:03:56 compute-0 sudo[27180]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geyiuybhndzxjeerffyrxgkfdaalnwjn ; /usr/bin/python3'
Oct 02 11:03:56 compute-0 sudo[27180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:03:57 compute-0 python3[27182]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:03:57 compute-0 sudo[27180]: pam_unix(sudo:session): session closed for user root
Oct 02 11:03:57 compute-0 sudo[27253]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbqnczyuuntdwpaurpuegegejxbmoulz ; /usr/bin/python3'
Oct 02 11:03:57 compute-0 sudo[27253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:03:57 compute-0 python3[27255]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403033.7802043-30613-182809272704175/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:03:57 compute-0 sudo[27253]: pam_unix(sudo:session): session closed for user root
Oct 02 11:03:57 compute-0 sudo[27279]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeaeskzuuxwvovihtqeqkydgtpzwsygl ; /usr/bin/python3'
Oct 02 11:03:57 compute-0 sudo[27279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:03:57 compute-0 python3[27281]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:03:57 compute-0 sudo[27279]: pam_unix(sudo:session): session closed for user root
Oct 02 11:03:58 compute-0 sudo[27352]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnkxqgvhwkrvssgkrtqooqxeooofuhln ; /usr/bin/python3'
Oct 02 11:03:58 compute-0 sudo[27352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:03:58 compute-0 python3[27354]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403033.7802043-30613-182809272704175/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:03:58 compute-0 sudo[27352]: pam_unix(sudo:session): session closed for user root
Oct 02 11:03:58 compute-0 sudo[27378]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wplqfqahdagevtryhmwrsyrcyxvmjekv ; /usr/bin/python3'
Oct 02 11:03:58 compute-0 sudo[27378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:03:58 compute-0 python3[27380]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:03:58 compute-0 sudo[27378]: pam_unix(sudo:session): session closed for user root
Oct 02 11:03:58 compute-0 sudo[27451]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlvdwhacantzyvpisegfcdfhyvgwbxko ; /usr/bin/python3'
Oct 02 11:03:58 compute-0 sudo[27451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:03:58 compute-0 python3[27453]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403033.7802043-30613-182809272704175/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=d911291791b114a72daf18f370e91cb1ae300933 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:03:58 compute-0 sudo[27451]: pam_unix(sudo:session): session closed for user root
Oct 02 11:04:01 compute-0 sshd-session[27478]: Unable to negotiate with 192.168.122.11 port 39624: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Oct 02 11:04:01 compute-0 sshd-session[27480]: Connection closed by 192.168.122.11 port 39608 [preauth]
Oct 02 11:04:01 compute-0 sshd-session[27481]: Unable to negotiate with 192.168.122.11 port 39632: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Oct 02 11:04:01 compute-0 sshd-session[27479]: Unable to negotiate with 192.168.122.11 port 39644: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Oct 02 11:04:01 compute-0 sshd-session[27483]: Connection closed by 192.168.122.11 port 39602 [preauth]
Oct 02 11:04:10 compute-0 python3[27511]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:06:03 compute-0 sshd-session[27514]: Invalid user node from 167.99.55.34 port 33818
Oct 02 11:06:03 compute-0 sshd-session[27514]: Connection closed by invalid user node 167.99.55.34 port 33818 [preauth]
Oct 02 11:09:10 compute-0 sshd-session[26596]: Received disconnect from 38.129.56.219 port 41884:11: disconnected by user
Oct 02 11:09:10 compute-0 sshd-session[26596]: Disconnected from user zuul 38.129.56.219 port 41884
Oct 02 11:09:10 compute-0 sshd-session[26593]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:09:10 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Oct 02 11:09:10 compute-0 systemd[1]: session-7.scope: Consumed 5.934s CPU time.
Oct 02 11:09:10 compute-0 systemd-logind[789]: Session 7 logged out. Waiting for processes to exit.
Oct 02 11:09:10 compute-0 systemd-logind[789]: Removed session 7.
Oct 02 11:13:33 compute-0 sshd-session[27517]: Invalid user mapr from 167.99.55.34 port 55392
Oct 02 11:13:33 compute-0 sshd-session[27517]: Connection closed by invalid user mapr 167.99.55.34 port 55392 [preauth]
Oct 02 11:19:32 compute-0 sshd-session[27521]: Accepted publickey for zuul from 192.168.122.30 port 33638 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 11:19:32 compute-0 systemd-logind[789]: New session 8 of user zuul.
Oct 02 11:19:32 compute-0 systemd[1]: Started Session 8 of User zuul.
Oct 02 11:19:32 compute-0 sshd-session[27521]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:19:33 compute-0 python3.9[27674]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:19:35 compute-0 sudo[27853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roxptfpnegtpbtcgoosnhielchkpisgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759403974.6957414-61-88855366907770/AnsiballZ_command.py'
Oct 02 11:19:35 compute-0 sudo[27853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:19:35 compute-0 python3.9[27855]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:19:42 compute-0 sudo[27853]: pam_unix(sudo:session): session closed for user root
Oct 02 11:19:48 compute-0 sshd-session[27524]: Connection closed by 192.168.122.30 port 33638
Oct 02 11:19:48 compute-0 sshd-session[27521]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:19:48 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Oct 02 11:19:48 compute-0 systemd[1]: session-8.scope: Consumed 8.096s CPU time.
Oct 02 11:19:48 compute-0 systemd-logind[789]: Session 8 logged out. Waiting for processes to exit.
Oct 02 11:19:48 compute-0 systemd-logind[789]: Removed session 8.
Oct 02 11:20:04 compute-0 sshd-session[27914]: Accepted publickey for zuul from 192.168.122.30 port 41260 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 11:20:04 compute-0 systemd-logind[789]: New session 9 of user zuul.
Oct 02 11:20:04 compute-0 systemd[1]: Started Session 9 of User zuul.
Oct 02 11:20:04 compute-0 sshd-session[27914]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:20:04 compute-0 python3.9[28067]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 02 11:20:06 compute-0 python3.9[28241]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:20:06 compute-0 sudo[28391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfgyaifpdrhuhltkuiqqtvtcbohgeuhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404006.4151182-98-260191374419281/AnsiballZ_command.py'
Oct 02 11:20:06 compute-0 sudo[28391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:20:07 compute-0 python3.9[28393]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:20:07 compute-0 sudo[28391]: pam_unix(sudo:session): session closed for user root
Oct 02 11:20:07 compute-0 sudo[28544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvrbrdgnxbtexnptcvdiuwkbsegxtoav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404007.4873147-134-202844622147908/AnsiballZ_stat.py'
Oct 02 11:20:07 compute-0 sudo[28544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:20:08 compute-0 python3.9[28546]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:20:08 compute-0 sudo[28544]: pam_unix(sudo:session): session closed for user root
Oct 02 11:20:08 compute-0 sudo[28696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtdaagtoqollapyushsjrluejhhzwuyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404008.3811946-158-97892287229915/AnsiballZ_file.py'
Oct 02 11:20:08 compute-0 sudo[28696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:20:09 compute-0 python3.9[28698]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:20:09 compute-0 sudo[28696]: pam_unix(sudo:session): session closed for user root
Oct 02 11:20:09 compute-0 sudo[28848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksnodqtvhlbgeshbtenwngwsqwxwpkwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404009.31991-182-149125883589605/AnsiballZ_stat.py'
Oct 02 11:20:09 compute-0 sudo[28848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:20:09 compute-0 python3.9[28850]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:20:09 compute-0 sudo[28848]: pam_unix(sudo:session): session closed for user root
Oct 02 11:20:10 compute-0 sudo[28971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxgqpkobhwidbrsmufruhbxxjukzdbmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404009.31991-182-149125883589605/AnsiballZ_copy.py'
Oct 02 11:20:10 compute-0 sudo[28971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:20:10 compute-0 python3.9[28973]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759404009.31991-182-149125883589605/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:20:10 compute-0 sudo[28971]: pam_unix(sudo:session): session closed for user root
Oct 02 11:20:11 compute-0 sudo[29123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysgfezzemzddvsfjjypxtzrruwbtpvyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404010.8535523-227-145198427251026/AnsiballZ_setup.py'
Oct 02 11:20:11 compute-0 sudo[29123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:20:11 compute-0 python3.9[29125]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:20:11 compute-0 sudo[29123]: pam_unix(sudo:session): session closed for user root
Oct 02 11:20:12 compute-0 sudo[29279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waulxtdeawxqkcuinwbmuupmzooykcva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404011.9167402-251-124825465499486/AnsiballZ_file.py'
Oct 02 11:20:12 compute-0 sudo[29279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:20:12 compute-0 python3.9[29281]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:20:12 compute-0 sudo[29279]: pam_unix(sudo:session): session closed for user root
Oct 02 11:20:13 compute-0 python3.9[29431]: ansible-ansible.builtin.service_facts Invoked
Oct 02 11:20:17 compute-0 python3.9[29686]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:20:18 compute-0 python3.9[29836]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:20:19 compute-0 python3.9[29990]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:20:20 compute-0 sudo[30146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctyvqdglsuccexghsiwzchexnsposmzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404020.317396-395-169281560451975/AnsiballZ_setup.py'
Oct 02 11:20:20 compute-0 sudo[30146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:20:20 compute-0 python3.9[30148]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:20:21 compute-0 sudo[30146]: pam_unix(sudo:session): session closed for user root
Oct 02 11:20:21 compute-0 sudo[30230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmulkullftmnidxnsbcbooeeuvrjkimg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404020.317396-395-169281560451975/AnsiballZ_dnf.py'
Oct 02 11:20:21 compute-0 sudo[30230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:20:21 compute-0 python3.9[30232]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:21:04 compute-0 sshd-session[30380]: Invalid user oneadmin from 167.99.55.34 port 58548
Oct 02 11:21:04 compute-0 sshd-session[30380]: Connection closed by invalid user oneadmin 167.99.55.34 port 58548 [preauth]
Oct 02 11:21:05 compute-0 systemd[1]: Reloading.
Oct 02 11:21:05 compute-0 systemd-rc-local-generator[30430]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:21:05 compute-0 systemd[1]: Starting dnf makecache...
Oct 02 11:21:05 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct 02 11:21:05 compute-0 dnf[30440]: Failed determining last makecache time.
Oct 02 11:21:06 compute-0 dnf[30440]: delorean-openstack-barbican-42b4c41831408a8e323 130 kB/s | 3.0 kB     00:00
Oct 02 11:21:06 compute-0 dnf[30440]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 159 kB/s | 3.0 kB     00:00
Oct 02 11:21:06 compute-0 dnf[30440]: delorean-openstack-cinder-1c00d6490d88e436f26ef 164 kB/s | 3.0 kB     00:00
Oct 02 11:21:06 compute-0 dnf[30440]: delorean-python-stevedore-c4acc5639fd2329372142 150 kB/s | 3.0 kB     00:00
Oct 02 11:21:06 compute-0 systemd[1]: Reloading.
Oct 02 11:21:06 compute-0 dnf[30440]: delorean-python-cloudkitty-tests-tempest-3961dc 159 kB/s | 3.0 kB     00:00
Oct 02 11:21:06 compute-0 dnf[30440]: delorean-os-net-config-28598c2978b9e2207dd19fc4 129 kB/s | 3.0 kB     00:00
Oct 02 11:21:06 compute-0 systemd-rc-local-generator[30476]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:21:06 compute-0 dnf[30440]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 107 kB/s | 3.0 kB     00:00
Oct 02 11:21:06 compute-0 dnf[30440]: delorean-python-designate-tests-tempest-347fdbc 162 kB/s | 3.0 kB     00:00
Oct 02 11:21:06 compute-0 dnf[30440]: delorean-openstack-glance-1fd12c29b339f30fe823e 167 kB/s | 3.0 kB     00:00
Oct 02 11:21:06 compute-0 dnf[30440]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 166 kB/s | 3.0 kB     00:00
Oct 02 11:21:06 compute-0 dnf[30440]: delorean-openstack-manila-3c01b7181572c95dac462 175 kB/s | 3.0 kB     00:00
Oct 02 11:21:06 compute-0 dnf[30440]: delorean-python-whitebox-neutron-tests-tempest- 148 kB/s | 3.0 kB     00:00
Oct 02 11:21:06 compute-0 dnf[30440]: delorean-openstack-octavia-ba397f07a7331190208c 163 kB/s | 3.0 kB     00:00
Oct 02 11:21:06 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct 02 11:21:06 compute-0 dnf[30440]: delorean-openstack-watcher-c014f81a8647287f6dcc 177 kB/s | 3.0 kB     00:00
Oct 02 11:21:06 compute-0 dnf[30440]: delorean-edpm-image-builder-55ba53cf215b14ed95b 160 kB/s | 3.0 kB     00:00
Oct 02 11:21:06 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct 02 11:21:06 compute-0 dnf[30440]: delorean-puppet-ceph-b0c245ccde541a63fde0564366 142 kB/s | 3.0 kB     00:00
Oct 02 11:21:06 compute-0 systemd[1]: Reloading.
Oct 02 11:21:06 compute-0 dnf[30440]: delorean-openstack-swift-dc98a8463506ac520c469a 184 kB/s | 3.0 kB     00:00
Oct 02 11:21:06 compute-0 dnf[30440]: delorean-python-tempestconf-8515371b7cceebd4282 190 kB/s | 3.0 kB     00:00
Oct 02 11:21:06 compute-0 dnf[30440]: delorean-openstack-heat-ui-013accbfd179753bc3f0 171 kB/s | 3.0 kB     00:00
Oct 02 11:21:06 compute-0 systemd-rc-local-generator[30531]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:21:06 compute-0 dnf[30440]: CentOS Stream 9 - BaseOS                         72 kB/s | 6.7 kB     00:00
Oct 02 11:21:06 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Oct 02 11:21:06 compute-0 dnf[30440]: CentOS Stream 9 - AppStream                      58 kB/s | 6.8 kB     00:00
Oct 02 11:21:06 compute-0 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Oct 02 11:21:06 compute-0 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Oct 02 11:21:06 compute-0 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Oct 02 11:21:06 compute-0 dnf[30440]: CentOS Stream 9 - CRB                            58 kB/s | 6.6 kB     00:00
Oct 02 11:21:07 compute-0 dnf[30440]: CentOS Stream 9 - Extras packages                83 kB/s | 8.0 kB     00:00
Oct 02 11:21:07 compute-0 dnf[30440]: dlrn-antelope-testing                           121 kB/s | 3.0 kB     00:00
Oct 02 11:21:07 compute-0 dnf[30440]: dlrn-antelope-build-deps                        114 kB/s | 3.0 kB     00:00
Oct 02 11:21:07 compute-0 dnf[30440]: centos9-rabbitmq                                 97 kB/s | 3.0 kB     00:00
Oct 02 11:21:07 compute-0 dnf[30440]: centos9-storage                                  91 kB/s | 3.0 kB     00:00
Oct 02 11:21:07 compute-0 dnf[30440]: centos9-opstools                                116 kB/s | 3.0 kB     00:00
Oct 02 11:21:07 compute-0 dnf[30440]: NFV SIG OpenvSwitch                             139 kB/s | 3.0 kB     00:00
Oct 02 11:21:07 compute-0 dnf[30440]: repo-setup-centos-appstream                     142 kB/s | 4.4 kB     00:00
Oct 02 11:21:07 compute-0 dnf[30440]: repo-setup-centos-baseos                         99 kB/s | 3.9 kB     00:00
Oct 02 11:21:07 compute-0 dnf[30440]: repo-setup-centos-highavailability               65 kB/s | 3.9 kB     00:00
Oct 02 11:21:07 compute-0 dnf[30440]: repo-setup-centos-powertools                    150 kB/s | 4.3 kB     00:00
Oct 02 11:21:07 compute-0 dnf[30440]: Extra Packages for Enterprise Linux 9 - x86_64  212 kB/s |  33 kB     00:00
Oct 02 11:21:08 compute-0 dnf[30440]: Metadata cache created.
Oct 02 11:21:08 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 02 11:21:08 compute-0 systemd[1]: Finished dnf makecache.
Oct 02 11:21:08 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.745s CPU time.
Oct 02 11:22:09 compute-0 kernel: SELinux:  Converting 2714 SID table entries...
Oct 02 11:22:09 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 11:22:09 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 02 11:22:09 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 11:22:09 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 02 11:22:09 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 11:22:09 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 11:22:09 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 11:22:10 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Oct 02 11:22:10 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 11:22:10 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 02 11:22:10 compute-0 systemd[1]: Reloading.
Oct 02 11:22:10 compute-0 systemd-rc-local-generator[30869]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:22:10 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 11:22:10 compute-0 systemd[1]: Starting PackageKit Daemon...
Oct 02 11:22:10 compute-0 PackageKit[31123]: daemon start
Oct 02 11:22:10 compute-0 systemd[1]: Started PackageKit Daemon.
Oct 02 11:22:11 compute-0 sudo[30230]: pam_unix(sudo:session): session closed for user root
Oct 02 11:22:11 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 11:22:11 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 02 11:22:11 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.053s CPU time.
Oct 02 11:22:11 compute-0 systemd[1]: run-raa2601bc32df4533b81b914d12c88b4b.service: Deactivated successfully.
Oct 02 11:22:47 compute-0 sudo[31792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmpirpvbfplxebqkdkufqkxeoeyszlun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404167.714316-431-179836021301380/AnsiballZ_command.py'
Oct 02 11:22:47 compute-0 sudo[31792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:22:48 compute-0 python3.9[31794]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:22:49 compute-0 sudo[31792]: pam_unix(sudo:session): session closed for user root
Oct 02 11:22:52 compute-0 sudo[32073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbdagfnjbcxawehoscqajatrnkwcyfmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404171.528546-455-189201623167172/AnsiballZ_selinux.py'
Oct 02 11:22:52 compute-0 sudo[32073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:22:52 compute-0 python3.9[32075]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 02 11:22:52 compute-0 sudo[32073]: pam_unix(sudo:session): session closed for user root
Oct 02 11:22:53 compute-0 sudo[32225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjhgnkdkfdzpfuspqjyitjzwskzxhkbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404173.1585736-488-39479931589724/AnsiballZ_command.py'
Oct 02 11:22:53 compute-0 sudo[32225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:22:53 compute-0 python3.9[32227]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 02 11:22:54 compute-0 sudo[32225]: pam_unix(sudo:session): session closed for user root
Oct 02 11:22:55 compute-0 sudo[32378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzksevrukcrneakoceweybushicufumm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404174.9355857-512-31170490411258/AnsiballZ_file.py'
Oct 02 11:22:55 compute-0 sudo[32378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:22:56 compute-0 python3.9[32380]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:22:56 compute-0 sudo[32378]: pam_unix(sudo:session): session closed for user root
Oct 02 11:23:00 compute-0 sudo[32530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urehywzmoylwjxwrdgyflloudbqhusha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404179.718878-536-100726745401011/AnsiballZ_mount.py'
Oct 02 11:23:00 compute-0 sudo[32530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:23:03 compute-0 python3.9[32532]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 02 11:23:03 compute-0 sudo[32530]: pam_unix(sudo:session): session closed for user root
Oct 02 11:23:08 compute-0 sudo[32682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhzrdtvlunclggaziiasnobhudmamyye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404188.0950432-620-23651861869722/AnsiballZ_file.py'
Oct 02 11:23:08 compute-0 sudo[32682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:23:08 compute-0 python3.9[32684]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:23:08 compute-0 sudo[32682]: pam_unix(sudo:session): session closed for user root
Oct 02 11:23:09 compute-0 sudo[32834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtyfrlcgwvgzphgqpeyutlizwdlgxhxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404188.8301458-644-198697150474655/AnsiballZ_stat.py'
Oct 02 11:23:09 compute-0 sudo[32834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:23:09 compute-0 python3.9[32836]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:23:09 compute-0 sudo[32834]: pam_unix(sudo:session): session closed for user root
Oct 02 11:23:09 compute-0 sudo[32957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaphhdeamlfacwaqmycdgbxoolvkqkyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404188.8301458-644-198697150474655/AnsiballZ_copy.py'
Oct 02 11:23:09 compute-0 sudo[32957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:23:09 compute-0 python3.9[32959]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759404188.8301458-644-198697150474655/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=131465f77d8d4faa1442e1beada7324e1814ff9f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:23:09 compute-0 sudo[32957]: pam_unix(sudo:session): session closed for user root
Oct 02 11:23:11 compute-0 sudo[33109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkguusjnptqweucmysbqgayajlzsulrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404190.8584614-725-262929559387077/AnsiballZ_getent.py'
Oct 02 11:23:11 compute-0 sudo[33109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:23:11 compute-0 python3.9[33111]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 02 11:23:11 compute-0 sudo[33109]: pam_unix(sudo:session): session closed for user root
Oct 02 11:23:12 compute-0 sudo[33262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brawekcmbajxpbpilptbtxswyuzhsyyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404191.8066947-749-6809533766295/AnsiballZ_group.py'
Oct 02 11:23:12 compute-0 sudo[33262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:23:12 compute-0 python3.9[33264]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 02 11:23:12 compute-0 groupadd[33265]: group added to /etc/group: name=qemu, GID=107
Oct 02 11:23:12 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 11:23:12 compute-0 groupadd[33265]: group added to /etc/gshadow: name=qemu
Oct 02 11:23:12 compute-0 groupadd[33265]: new group: name=qemu, GID=107
Oct 02 11:23:12 compute-0 sudo[33262]: pam_unix(sudo:session): session closed for user root
Oct 02 11:23:13 compute-0 sudo[33421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxqpjifsyzkzqnxzciqpjbixyzkdrrds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404192.7648787-773-40672488857298/AnsiballZ_user.py'
Oct 02 11:23:13 compute-0 sudo[33421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:23:13 compute-0 python3.9[33423]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 02 11:23:13 compute-0 useradd[33425]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Oct 02 11:23:13 compute-0 sudo[33421]: pam_unix(sudo:session): session closed for user root
Oct 02 11:23:14 compute-0 sudo[33581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baggwzsjplvlfwmsbppmciilzzdhsrwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404194.0887146-797-137632231786431/AnsiballZ_getent.py'
Oct 02 11:23:14 compute-0 sudo[33581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:23:14 compute-0 python3.9[33583]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 02 11:23:14 compute-0 sudo[33581]: pam_unix(sudo:session): session closed for user root
Oct 02 11:23:15 compute-0 sudo[33734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xicokiircczobunrhtmiozknwnfeaqkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404194.8881295-821-233616781582067/AnsiballZ_group.py'
Oct 02 11:23:15 compute-0 sudo[33734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:23:15 compute-0 python3.9[33736]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 02 11:23:15 compute-0 groupadd[33737]: group added to /etc/group: name=hugetlbfs, GID=42477
Oct 02 11:23:15 compute-0 groupadd[33737]: group added to /etc/gshadow: name=hugetlbfs
Oct 02 11:23:15 compute-0 groupadd[33737]: new group: name=hugetlbfs, GID=42477
Oct 02 11:23:15 compute-0 sudo[33734]: pam_unix(sudo:session): session closed for user root
Oct 02 11:23:16 compute-0 sudo[33892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsmpcoadcwetivebnivbgjdjzviiudhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404195.8596675-848-276072932051355/AnsiballZ_file.py'
Oct 02 11:23:16 compute-0 sudo[33892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:23:16 compute-0 python3.9[33894]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 02 11:23:16 compute-0 sudo[33892]: pam_unix(sudo:session): session closed for user root
Oct 02 11:23:17 compute-0 sudo[34044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjpcfjvjcsumfwbwbsguuxrpxdhmpknn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404196.804484-881-43513937465585/AnsiballZ_dnf.py'
Oct 02 11:23:17 compute-0 sudo[34044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:23:17 compute-0 python3.9[34046]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:23:18 compute-0 irqbalance[784]: Cannot change IRQ 26 affinity: Operation not permitted
Oct 02 11:23:18 compute-0 irqbalance[784]: IRQ 26 affinity is now unmanaged
Oct 02 11:23:19 compute-0 sudo[34044]: pam_unix(sudo:session): session closed for user root
Oct 02 11:23:20 compute-0 sudo[34197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkoyjsopjqyihquftavvbqfscizvcehk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404199.8374536-905-143128894753082/AnsiballZ_file.py'
Oct 02 11:23:20 compute-0 sudo[34197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:23:20 compute-0 python3.9[34199]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:23:20 compute-0 sudo[34197]: pam_unix(sudo:session): session closed for user root
Oct 02 11:23:21 compute-0 sudo[34349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewghirjtfhscxjegqgslekzusglxunwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404200.7309086-929-230295366208119/AnsiballZ_stat.py'
Oct 02 11:23:21 compute-0 sudo[34349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:23:21 compute-0 python3.9[34351]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:23:21 compute-0 sudo[34349]: pam_unix(sudo:session): session closed for user root
Oct 02 11:23:21 compute-0 sudo[34472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jigysnmuwvkhvwonbxruusmtcsgldlic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404200.7309086-929-230295366208119/AnsiballZ_copy.py'
Oct 02 11:23:21 compute-0 sudo[34472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:23:21 compute-0 sshd-session[34475]: banner exchange: Connection from 194.165.16.164 port 65193: invalid format
Oct 02 11:23:21 compute-0 python3.9[34474]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759404200.7309086-929-230295366208119/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:23:21 compute-0 sudo[34472]: pam_unix(sudo:session): session closed for user root
Oct 02 11:23:22 compute-0 sudo[34625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbcluovrdkwdjuergacnaibeygbqcbrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404202.188207-974-28706052460874/AnsiballZ_systemd.py'
Oct 02 11:23:22 compute-0 sudo[34625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:23:23 compute-0 python3.9[34627]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 11:23:23 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 02 11:23:23 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 02 11:23:23 compute-0 kernel: Bridge firewalling registered
Oct 02 11:23:23 compute-0 systemd-modules-load[34631]: Inserted module 'br_netfilter'
Oct 02 11:23:23 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 02 11:23:23 compute-0 sudo[34625]: pam_unix(sudo:session): session closed for user root
Oct 02 11:23:23 compute-0 sudo[34784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqsimwcwpuunypluctwjtgytmdqyijui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404203.5559006-998-254516269952691/AnsiballZ_stat.py'
Oct 02 11:23:23 compute-0 sudo[34784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:23:24 compute-0 python3.9[34786]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:23:24 compute-0 sudo[34784]: pam_unix(sudo:session): session closed for user root
Oct 02 11:23:24 compute-0 sudo[34907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzhnkulkrsesgngafdjxwecceijyunbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404203.5559006-998-254516269952691/AnsiballZ_copy.py'
Oct 02 11:23:24 compute-0 sudo[34907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:23:24 compute-0 python3.9[34909]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759404203.5559006-998-254516269952691/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:23:24 compute-0 sudo[34907]: pam_unix(sudo:session): session closed for user root
Oct 02 11:23:25 compute-0 sudo[35059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvihxundjkyyoyhiqwxstdyozrzaasyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404205.2086785-1052-184367629580782/AnsiballZ_dnf.py'
Oct 02 11:23:25 compute-0 sudo[35059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:23:25 compute-0 python3.9[35061]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:23:29 compute-0 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Oct 02 11:23:29 compute-0 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Oct 02 11:23:29 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 11:23:29 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 02 11:23:29 compute-0 systemd[1]: Reloading.
Oct 02 11:23:29 compute-0 systemd-rc-local-generator[35120]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:23:30 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 11:23:31 compute-0 sudo[35059]: pam_unix(sudo:session): session closed for user root
Oct 02 11:23:35 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 11:23:35 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 02 11:23:35 compute-0 systemd[1]: man-db-cache-update.service: Consumed 6.223s CPU time.
Oct 02 11:23:35 compute-0 systemd[1]: run-r8b05696dc65b41399ab92b23f4362dcd.service: Deactivated successfully.
Oct 02 11:23:35 compute-0 python3.9[38808]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:23:36 compute-0 python3.9[38960]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 02 11:23:37 compute-0 python3.9[39110]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:23:38 compute-0 sudo[39260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdtercepziobsurvaftjxayerakjrdze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404218.0243304-1169-105919878141365/AnsiballZ_command.py'
Oct 02 11:23:38 compute-0 sudo[39260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:23:38 compute-0 python3.9[39262]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:23:38 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 02 11:23:39 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 02 11:23:39 compute-0 sudo[39260]: pam_unix(sudo:session): session closed for user root
Oct 02 11:23:40 compute-0 sudo[39633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxrzecamvqvtaehnmhozodrdzdoxgwyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404219.9265435-1196-36466808274679/AnsiballZ_systemd.py'
Oct 02 11:23:40 compute-0 sudo[39633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:23:40 compute-0 python3.9[39635]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:23:40 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 02 11:23:40 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Oct 02 11:23:40 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 02 11:23:40 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 02 11:23:40 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 02 11:23:40 compute-0 sudo[39633]: pam_unix(sudo:session): session closed for user root
Oct 02 11:23:41 compute-0 python3.9[39797]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 02 11:23:44 compute-0 sudo[39947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-parsndigomqfndwhteqjozvngvclenqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404224.476769-1367-136508436973067/AnsiballZ_systemd.py'
Oct 02 11:23:44 compute-0 sudo[39947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:23:45 compute-0 python3.9[39949]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:23:45 compute-0 systemd[1]: Reloading.
Oct 02 11:23:45 compute-0 systemd-rc-local-generator[39972]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:23:45 compute-0 sudo[39947]: pam_unix(sudo:session): session closed for user root
Oct 02 11:23:45 compute-0 sudo[40137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxbrktrymoqqgjgwlkkbxqxdroqummlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404225.5772576-1367-86528948986800/AnsiballZ_systemd.py'
Oct 02 11:23:45 compute-0 sudo[40137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:23:46 compute-0 python3.9[40139]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:23:46 compute-0 systemd[1]: Reloading.
Oct 02 11:23:46 compute-0 systemd-rc-local-generator[40169]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:23:46 compute-0 sudo[40137]: pam_unix(sudo:session): session closed for user root
Oct 02 11:23:47 compute-0 sudo[40326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntjtiyuhjvgvhwtdwiggwgjlsvnnmebd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404227.2561376-1415-83638256930240/AnsiballZ_command.py'
Oct 02 11:23:47 compute-0 sudo[40326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:23:47 compute-0 python3.9[40328]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:23:47 compute-0 sudo[40326]: pam_unix(sudo:session): session closed for user root
Oct 02 11:23:48 compute-0 sudo[40479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krajuttilggnipdzaeqdtvlrtcoqdjrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404228.229445-1439-121739994626944/AnsiballZ_command.py'
Oct 02 11:23:48 compute-0 sudo[40479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:23:48 compute-0 python3.9[40481]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:23:48 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct 02 11:23:48 compute-0 sudo[40479]: pam_unix(sudo:session): session closed for user root
Oct 02 11:23:49 compute-0 sudo[40632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifbawmoeukymaxlhkmqfhhztnnsesgbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404229.1557932-1463-189330881908103/AnsiballZ_command.py'
Oct 02 11:23:49 compute-0 sudo[40632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:23:49 compute-0 python3.9[40634]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:23:50 compute-0 sudo[40632]: pam_unix(sudo:session): session closed for user root
Oct 02 11:23:51 compute-0 sudo[40794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuhetohtvbuphfjuzayzrsiolkvxbzyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404231.4626188-1487-131141082081398/AnsiballZ_command.py'
Oct 02 11:23:51 compute-0 sudo[40794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:23:51 compute-0 python3.9[40796]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:23:51 compute-0 sudo[40794]: pam_unix(sudo:session): session closed for user root
Oct 02 11:23:52 compute-0 sudo[40947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kobfulfysdgzcyalrlmnqkuwhrguuajj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404232.3093126-1511-3157785143060/AnsiballZ_systemd.py'
Oct 02 11:23:52 compute-0 sudo[40947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:23:52 compute-0 python3.9[40949]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 11:23:52 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 02 11:23:52 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Oct 02 11:23:52 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Oct 02 11:23:52 compute-0 systemd[1]: Starting Apply Kernel Variables...
Oct 02 11:23:52 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 02 11:23:52 compute-0 systemd[1]: Finished Apply Kernel Variables.
Oct 02 11:23:53 compute-0 sudo[40947]: pam_unix(sudo:session): session closed for user root
Oct 02 11:23:53 compute-0 sshd-session[27917]: Connection closed by 192.168.122.30 port 41260
Oct 02 11:23:53 compute-0 sshd-session[27914]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:23:53 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Oct 02 11:23:53 compute-0 systemd[1]: session-9.scope: Consumed 2min 12.278s CPU time.
Oct 02 11:23:53 compute-0 systemd-logind[789]: Session 9 logged out. Waiting for processes to exit.
Oct 02 11:23:53 compute-0 systemd-logind[789]: Removed session 9.
Oct 02 11:23:58 compute-0 sshd-session[40979]: Accepted publickey for zuul from 192.168.122.30 port 45822 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 11:23:58 compute-0 systemd-logind[789]: New session 10 of user zuul.
Oct 02 11:23:58 compute-0 systemd[1]: Started Session 10 of User zuul.
Oct 02 11:23:58 compute-0 sshd-session[40979]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:24:00 compute-0 python3.9[41132]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:24:01 compute-0 sudo[41286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oypnispmxcstnbzcerizgzyyvjmkqjdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404240.7097883-73-171535865801962/AnsiballZ_getent.py'
Oct 02 11:24:01 compute-0 sudo[41286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:24:01 compute-0 python3.9[41288]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 02 11:24:01 compute-0 sudo[41286]: pam_unix(sudo:session): session closed for user root
Oct 02 11:24:01 compute-0 sudo[41439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lozgvgtnwjtywglzeuwhrwxkqohxrcct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404241.537633-97-172097676255285/AnsiballZ_group.py'
Oct 02 11:24:02 compute-0 sudo[41439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:24:02 compute-0 python3.9[41441]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 02 11:24:02 compute-0 groupadd[41442]: group added to /etc/group: name=openvswitch, GID=42476
Oct 02 11:24:02 compute-0 groupadd[41442]: group added to /etc/gshadow: name=openvswitch
Oct 02 11:24:02 compute-0 groupadd[41442]: new group: name=openvswitch, GID=42476
Oct 02 11:24:02 compute-0 sudo[41439]: pam_unix(sudo:session): session closed for user root
Oct 02 11:24:03 compute-0 sudo[41597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioezrldjhtifllzjsnfozstpnysdnais ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404242.78526-121-194184544107067/AnsiballZ_user.py'
Oct 02 11:24:03 compute-0 sudo[41597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:24:03 compute-0 python3.9[41599]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 02 11:24:03 compute-0 useradd[41601]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Oct 02 11:24:03 compute-0 useradd[41601]: add 'openvswitch' to group 'hugetlbfs'
Oct 02 11:24:03 compute-0 useradd[41601]: add 'openvswitch' to shadow group 'hugetlbfs'
Oct 02 11:24:03 compute-0 sudo[41597]: pam_unix(sudo:session): session closed for user root
Oct 02 11:24:05 compute-0 sudo[41757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hokolilnexocemgxxfpzqhzmyasyvbmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404245.056346-151-27389896860233/AnsiballZ_setup.py'
Oct 02 11:24:05 compute-0 sudo[41757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:24:05 compute-0 python3.9[41759]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:24:05 compute-0 sudo[41757]: pam_unix(sudo:session): session closed for user root
Oct 02 11:24:06 compute-0 sudo[41841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uehvfulcsejsoddgoodmdpzyffarqmof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404245.056346-151-27389896860233/AnsiballZ_dnf.py'
Oct 02 11:24:06 compute-0 sudo[41841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:24:06 compute-0 python3.9[41843]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 02 11:24:08 compute-0 sudo[41841]: pam_unix(sudo:session): session closed for user root
Oct 02 11:24:11 compute-0 sudo[42004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sedptqabduopttswyybynhzxadbahxbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404251.2694206-193-90888059781176/AnsiballZ_dnf.py'
Oct 02 11:24:11 compute-0 sudo[42004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:24:11 compute-0 python3.9[42006]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:24:22 compute-0 kernel: SELinux:  Converting 2724 SID table entries...
Oct 02 11:24:22 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 11:24:22 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 02 11:24:22 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 11:24:22 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 02 11:24:22 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 11:24:22 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 11:24:22 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 11:24:23 compute-0 groupadd[42029]: group added to /etc/group: name=unbound, GID=993
Oct 02 11:24:23 compute-0 groupadd[42029]: group added to /etc/gshadow: name=unbound
Oct 02 11:24:23 compute-0 groupadd[42029]: new group: name=unbound, GID=993
Oct 02 11:24:23 compute-0 useradd[42036]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Oct 02 11:24:23 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Oct 02 11:24:23 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct 02 11:24:24 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 11:24:24 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 02 11:24:24 compute-0 systemd[1]: Reloading.
Oct 02 11:24:24 compute-0 systemd-sysv-generator[42561]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:24:24 compute-0 systemd-rc-local-generator[42556]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:24:24 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 11:24:25 compute-0 sudo[42004]: pam_unix(sudo:session): session closed for user root
Oct 02 11:24:25 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 11:24:25 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 02 11:24:25 compute-0 systemd[1]: run-r858e46eac1bf4d6f87d7f2fbe88a738c.service: Deactivated successfully.
Oct 02 11:24:27 compute-0 sudo[43106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exnpvptrobsmmaiyitwcmuhostrfqfpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404267.3638716-217-146823365735295/AnsiballZ_systemd.py'
Oct 02 11:24:27 compute-0 sudo[43106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:24:28 compute-0 python3.9[43108]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 11:24:28 compute-0 systemd[1]: Reloading.
Oct 02 11:24:28 compute-0 systemd-sysv-generator[43142]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:24:28 compute-0 systemd-rc-local-generator[43139]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:24:28 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Oct 02 11:24:28 compute-0 chown[43150]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct 02 11:24:28 compute-0 ovs-ctl[43155]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct 02 11:24:28 compute-0 ovs-ctl[43155]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct 02 11:24:28 compute-0 ovs-ctl[43155]: Starting ovsdb-server [  OK  ]
Oct 02 11:24:28 compute-0 ovs-vsctl[43204]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct 02 11:24:28 compute-0 ovs-vsctl[43224]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"6718a9ec-e13c-42f0-978a-6c44c48d0d54\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct 02 11:24:28 compute-0 ovs-ctl[43155]: Configuring Open vSwitch system IDs [  OK  ]
Oct 02 11:24:28 compute-0 ovs-ctl[43155]: Enabling remote OVSDB managers [  OK  ]
Oct 02 11:24:28 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Oct 02 11:24:28 compute-0 ovs-vsctl[43230]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct 02 11:24:28 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct 02 11:24:28 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct 02 11:24:28 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct 02 11:24:29 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Oct 02 11:24:29 compute-0 ovs-ctl[43274]: Inserting openvswitch module [  OK  ]
Oct 02 11:24:29 compute-0 ovs-ctl[43243]: Starting ovs-vswitchd [  OK  ]
Oct 02 11:24:29 compute-0 ovs-ctl[43243]: Enabling remote OVSDB managers [  OK  ]
Oct 02 11:24:29 compute-0 ovs-vsctl[43292]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct 02 11:24:29 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct 02 11:24:29 compute-0 systemd[1]: Starting Open vSwitch...
Oct 02 11:24:29 compute-0 systemd[1]: Finished Open vSwitch.
Oct 02 11:24:29 compute-0 sudo[43106]: pam_unix(sudo:session): session closed for user root
Oct 02 11:24:30 compute-0 python3.9[43443]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:24:31 compute-0 sudo[43593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-legawcfwngoaaxaazgloutusrcmnjomt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404270.6264653-271-13485274704563/AnsiballZ_sefcontext.py'
Oct 02 11:24:31 compute-0 sudo[43593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:24:31 compute-0 python3.9[43595]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 02 11:24:32 compute-0 kernel: SELinux:  Converting 2738 SID table entries...
Oct 02 11:24:32 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 11:24:32 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 02 11:24:32 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 11:24:32 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 02 11:24:32 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 11:24:32 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 11:24:32 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 11:24:32 compute-0 sudo[43593]: pam_unix(sudo:session): session closed for user root
Oct 02 11:24:34 compute-0 python3.9[43750]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:24:35 compute-0 sudo[43906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyunofhmurlkgdwhyjtgsgbvvbwguhdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404275.5684073-325-244988681858247/AnsiballZ_dnf.py'
Oct 02 11:24:35 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Oct 02 11:24:35 compute-0 sudo[43906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:24:36 compute-0 python3.9[43908]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:24:37 compute-0 sudo[43906]: pam_unix(sudo:session): session closed for user root
Oct 02 11:24:37 compute-0 sudo[44059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqldwgkyzrftqknhmurncxqtuololgux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404277.595857-349-123977126491328/AnsiballZ_command.py'
Oct 02 11:24:38 compute-0 sudo[44059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:24:38 compute-0 python3.9[44061]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:24:38 compute-0 sudo[44059]: pam_unix(sudo:session): session closed for user root
Oct 02 11:24:39 compute-0 sudo[44346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msqipqhihgodyrvvfhjmllrntxptjfqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404279.2445612-373-227415673659379/AnsiballZ_file.py'
Oct 02 11:24:39 compute-0 sudo[44346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:24:39 compute-0 python3.9[44348]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 02 11:24:39 compute-0 sudo[44346]: pam_unix(sudo:session): session closed for user root
Oct 02 11:24:41 compute-0 python3.9[44498]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:24:41 compute-0 sudo[44650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpomhfpaccioziveqvasuzhxuxkssaqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404281.238827-421-211665145015808/AnsiballZ_dnf.py'
Oct 02 11:24:41 compute-0 sudo[44650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:24:41 compute-0 python3.9[44652]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:24:43 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 11:24:43 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 02 11:24:43 compute-0 systemd[1]: Reloading.
Oct 02 11:24:43 compute-0 systemd-sysv-generator[44695]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:24:43 compute-0 systemd-rc-local-generator[44692]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:24:43 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 11:24:44 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 11:24:44 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 02 11:24:44 compute-0 systemd[1]: run-raea82f10950640d08da5991eb682b861.service: Deactivated successfully.
Oct 02 11:24:44 compute-0 sudo[44650]: pam_unix(sudo:session): session closed for user root
Oct 02 11:24:45 compute-0 sudo[44967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ninmftbcslzhlekzbfxjntppfsezfoxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404285.4468913-445-3491731764740/AnsiballZ_systemd.py'
Oct 02 11:24:45 compute-0 sudo[44967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:24:46 compute-0 python3.9[44969]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 11:24:46 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 02 11:24:46 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Oct 02 11:24:46 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Oct 02 11:24:46 compute-0 systemd[1]: Stopping Network Manager...
Oct 02 11:24:46 compute-0 NetworkManager[3953]: <info>  [1759404286.1017] caught SIGTERM, shutting down normally.
Oct 02 11:24:46 compute-0 NetworkManager[3953]: <info>  [1759404286.1028] dhcp4 (eth0): canceled DHCP transaction
Oct 02 11:24:46 compute-0 NetworkManager[3953]: <info>  [1759404286.1028] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 02 11:24:46 compute-0 NetworkManager[3953]: <info>  [1759404286.1028] dhcp4 (eth0): state changed no lease
Oct 02 11:24:46 compute-0 NetworkManager[3953]: <info>  [1759404286.1030] manager: NetworkManager state is now CONNECTED_SITE
Oct 02 11:24:46 compute-0 NetworkManager[3953]: <info>  [1759404286.1105] exiting (success)
Oct 02 11:24:46 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 02 11:24:46 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 02 11:24:46 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 02 11:24:46 compute-0 systemd[1]: Stopped Network Manager.
Oct 02 11:24:46 compute-0 systemd[1]: NetworkManager.service: Consumed 13.246s CPU time, 4.2M memory peak, read 0B from disk, written 20.0K to disk.
Oct 02 11:24:46 compute-0 systemd[1]: Starting Network Manager...
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.1787] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:6763d233-2040-484e-9688-138dedaa0224)
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.1790] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.1852] manager[0x5598d36ec090]: monitoring kernel firmware directory '/lib/firmware'.
Oct 02 11:24:46 compute-0 systemd[1]: Starting Hostname Service...
Oct 02 11:24:46 compute-0 systemd[1]: Started Hostname Service.
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2605] hostname: hostname: using hostnamed
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2605] hostname: static hostname changed from (none) to "compute-0"
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2609] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2614] manager[0x5598d36ec090]: rfkill: Wi-Fi hardware radio set enabled
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2614] manager[0x5598d36ec090]: rfkill: WWAN hardware radio set enabled
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2633] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2642] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2643] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2643] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2644] manager: Networking is enabled by state file
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2646] settings: Loaded settings plugin: keyfile (internal)
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2649] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2675] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2685] dhcp: init: Using DHCP client 'internal'
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2687] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2692] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2697] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2703] device (lo): Activation: starting connection 'lo' (c4c28c02-0185-48e9-b2e5-06b78ea3e48d)
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2708] device (eth0): carrier: link connected
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2712] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2715] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2716] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2721] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2727] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2732] device (eth1): carrier: link connected
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2735] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2740] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (899e034d-b20c-5f63-a638-c327b5fc6ba0) (indicated)
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2740] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2745] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2751] device (eth1): Activation: starting connection 'ci-private-network' (899e034d-b20c-5f63-a638-c327b5fc6ba0)
Oct 02 11:24:46 compute-0 systemd[1]: Started Network Manager.
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2757] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2764] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2766] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2767] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2770] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2773] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2775] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2777] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2782] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2789] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2792] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2802] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2814] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2824] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2826] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2832] device (lo): Activation: successful, device activated.
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2841] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2844] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2847] manager: NetworkManager state is now CONNECTED_LOCAL
Oct 02 11:24:46 compute-0 NetworkManager[44987]: <info>  [1759404286.2850] device (eth1): Activation: successful, device activated.
Oct 02 11:24:46 compute-0 systemd[1]: Starting Network Manager Wait Online...
Oct 02 11:24:46 compute-0 sudo[44967]: pam_unix(sudo:session): session closed for user root
Oct 02 11:24:47 compute-0 sudo[45174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-haspicnhcdxbdeaskebtoduyqaecqxii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404286.766764-469-150429069813130/AnsiballZ_dnf.py'
Oct 02 11:24:47 compute-0 sudo[45174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:24:47 compute-0 python3.9[45176]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:24:47 compute-0 NetworkManager[44987]: <info>  [1759404287.4187] dhcp4 (eth0): state changed new lease, address=38.129.56.184
Oct 02 11:24:47 compute-0 NetworkManager[44987]: <info>  [1759404287.4196] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 02 11:24:47 compute-0 NetworkManager[44987]: <info>  [1759404287.4259] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 02 11:24:47 compute-0 NetworkManager[44987]: <info>  [1759404287.4290] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 02 11:24:47 compute-0 NetworkManager[44987]: <info>  [1759404287.4292] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 02 11:24:47 compute-0 NetworkManager[44987]: <info>  [1759404287.4295] manager: NetworkManager state is now CONNECTED_SITE
Oct 02 11:24:47 compute-0 NetworkManager[44987]: <info>  [1759404287.4298] device (eth0): Activation: successful, device activated.
Oct 02 11:24:47 compute-0 NetworkManager[44987]: <info>  [1759404287.4309] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 02 11:24:47 compute-0 NetworkManager[44987]: <info>  [1759404287.4313] manager: startup complete
Oct 02 11:24:47 compute-0 systemd[1]: Finished Network Manager Wait Online.
Oct 02 11:24:52 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 11:24:52 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 02 11:24:52 compute-0 systemd[1]: Reloading.
Oct 02 11:24:52 compute-0 systemd-rc-local-generator[45240]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:24:52 compute-0 systemd-sysv-generator[45248]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:24:52 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 11:24:53 compute-0 sudo[45174]: pam_unix(sudo:session): session closed for user root
Oct 02 11:24:54 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 11:24:54 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 02 11:24:54 compute-0 systemd[1]: run-rf78b1fc8024f4f79a0bf04fd432fa04a.service: Deactivated successfully.
Oct 02 11:24:54 compute-0 sudo[45655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gluknswewybcurubfmetcfntsuxponaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404294.5614893-505-73124048128319/AnsiballZ_stat.py'
Oct 02 11:24:54 compute-0 sudo[45655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:24:55 compute-0 python3.9[45657]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:24:55 compute-0 sudo[45655]: pam_unix(sudo:session): session closed for user root
Oct 02 11:24:55 compute-0 sudo[45807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acrffhnygbrkwjiloetbiirpcwdfwmqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404295.256673-532-101549080318722/AnsiballZ_ini_file.py'
Oct 02 11:24:55 compute-0 sudo[45807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:24:55 compute-0 python3.9[45809]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:24:55 compute-0 sudo[45807]: pam_unix(sudo:session): session closed for user root
Oct 02 11:24:56 compute-0 sudo[45961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndwucknxtwofedzokontyitipmgodryx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404296.2073638-562-76987208305986/AnsiballZ_ini_file.py'
Oct 02 11:24:56 compute-0 sudo[45961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:24:56 compute-0 python3.9[45963]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:24:56 compute-0 sudo[45961]: pam_unix(sudo:session): session closed for user root
Oct 02 11:24:57 compute-0 sudo[46113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzxcjmywxixtzjbphcfsorveswupezva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404296.8095667-562-13873310658809/AnsiballZ_ini_file.py'
Oct 02 11:24:57 compute-0 sudo[46113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:24:57 compute-0 python3.9[46115]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:24:57 compute-0 sudo[46113]: pam_unix(sudo:session): session closed for user root
Oct 02 11:24:57 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 02 11:24:57 compute-0 sudo[46265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czdnfwqgqfagfqgfjdbwxcquvfzfzicn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404297.5562956-607-66299299184452/AnsiballZ_ini_file.py'
Oct 02 11:24:57 compute-0 sudo[46265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:24:57 compute-0 python3.9[46267]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:24:58 compute-0 sudo[46265]: pam_unix(sudo:session): session closed for user root
Oct 02 11:24:58 compute-0 sudo[46417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygmfyrrboilmvjunkocqpoermnvwwlbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404298.1421702-607-4659829000640/AnsiballZ_ini_file.py'
Oct 02 11:24:58 compute-0 sudo[46417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:24:58 compute-0 python3.9[46419]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:24:58 compute-0 sudo[46417]: pam_unix(sudo:session): session closed for user root
Oct 02 11:24:59 compute-0 sudo[46569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogsjsrlfnzgorwcnxdncmecwgxmfnjzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404299.0677369-652-81099919986923/AnsiballZ_stat.py'
Oct 02 11:24:59 compute-0 sudo[46569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:24:59 compute-0 python3.9[46571]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:24:59 compute-0 sudo[46569]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:00 compute-0 sudo[46692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdrroajaxwjrfbryesplfbodeeofsdhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404299.0677369-652-81099919986923/AnsiballZ_copy.py'
Oct 02 11:25:00 compute-0 sudo[46692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:00 compute-0 python3.9[46694]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759404299.0677369-652-81099919986923/.source _original_basename=.bz78kq16 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:25:00 compute-0 sudo[46692]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:00 compute-0 sudo[46844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwnmoylxudpxroqzrltmrxipqxkeddzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404300.4820383-697-114474055174099/AnsiballZ_file.py'
Oct 02 11:25:00 compute-0 sudo[46844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:01 compute-0 python3.9[46846]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:25:01 compute-0 sudo[46844]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:01 compute-0 sudo[46996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mehyqaozlbqdfezpgqsrlbeqdextagqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404301.3856244-721-233019315085031/AnsiballZ_edpm_os_net_config_mappings.py'
Oct 02 11:25:01 compute-0 sudo[46996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:02 compute-0 python3.9[46998]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct 02 11:25:02 compute-0 sudo[46996]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:02 compute-0 sudo[47148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mltudpfspqoatvhjunppfzxirzprsisl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404302.435769-748-242561046261191/AnsiballZ_file.py'
Oct 02 11:25:02 compute-0 sudo[47148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:02 compute-0 python3.9[47150]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:25:02 compute-0 sudo[47148]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:03 compute-0 sudo[47300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqkdplxhtbkvgqsevgkobvndequecwzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404303.3707082-778-221312024687822/AnsiballZ_stat.py'
Oct 02 11:25:03 compute-0 sudo[47300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:03 compute-0 sudo[47300]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:04 compute-0 sudo[47423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvtetvsqfcjolrpswgndmiiuulbooind ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404303.3707082-778-221312024687822/AnsiballZ_copy.py'
Oct 02 11:25:04 compute-0 sudo[47423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:04 compute-0 sudo[47423]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:05 compute-0 sudo[47575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irmmlayazqbmelpvickzltttsttnnfzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404304.7927854-823-166215670833118/AnsiballZ_slurp.py'
Oct 02 11:25:05 compute-0 sudo[47575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:05 compute-0 python3.9[47577]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct 02 11:25:05 compute-0 sudo[47575]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:06 compute-0 sudo[47750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxckuxrigfhdkymdirbtnrtacaozbnjv ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404305.7519755-850-107460278507701/async_wrapper.py j155294657383 300 /home/zuul/.ansible/tmp/ansible-tmp-1759404305.7519755-850-107460278507701/AnsiballZ_edpm_os_net_config.py _'
Oct 02 11:25:06 compute-0 sudo[47750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:06 compute-0 ansible-async_wrapper.py[47752]: Invoked with j155294657383 300 /home/zuul/.ansible/tmp/ansible-tmp-1759404305.7519755-850-107460278507701/AnsiballZ_edpm_os_net_config.py _
Oct 02 11:25:06 compute-0 ansible-async_wrapper.py[47755]: Starting module and watcher
Oct 02 11:25:06 compute-0 ansible-async_wrapper.py[47755]: Start watching 47756 (300)
Oct 02 11:25:06 compute-0 ansible-async_wrapper.py[47756]: Start module (47756)
Oct 02 11:25:06 compute-0 ansible-async_wrapper.py[47752]: Return async_wrapper task started.
Oct 02 11:25:06 compute-0 sudo[47750]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:06 compute-0 python3.9[47757]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Oct 02 11:25:07 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct 02 11:25:07 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct 02 11:25:07 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct 02 11:25:07 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct 02 11:25:07 compute-0 kernel: cfg80211: failed to load regulatory.db
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.6653] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47758 uid=0 result="success"
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.6672] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47758 uid=0 result="success"
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7129] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7131] audit: op="connection-add" uuid="ca68938f-3710-4d6e-9077-0aceb9867027" name="br-ex-br" pid=47758 uid=0 result="success"
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7146] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7147] audit: op="connection-add" uuid="9f530009-558d-4bb1-b084-1a5a56064e8b" name="br-ex-port" pid=47758 uid=0 result="success"
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7159] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7160] audit: op="connection-add" uuid="a51c1e30-3ad3-4794-a9a8-66e94b7ade06" name="eth1-port" pid=47758 uid=0 result="success"
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7172] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7174] audit: op="connection-add" uuid="0d179398-c979-494b-bb04-f5eef9ddad12" name="vlan20-port" pid=47758 uid=0 result="success"
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7184] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7186] audit: op="connection-add" uuid="0d6e8600-b42c-4608-b50a-a1bf07d28d65" name="vlan21-port" pid=47758 uid=0 result="success"
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7196] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7197] audit: op="connection-add" uuid="9c90c4fa-bf07-4933-b7d4-535a1de026aa" name="vlan22-port" pid=47758 uid=0 result="success"
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7208] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7210] audit: op="connection-add" uuid="64808132-8a91-4831-a82a-22cae4892006" name="vlan23-port" pid=47758 uid=0 result="success"
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7229] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.autoconnect-priority,connection.timestamp,802-3-ethernet.mtu,ipv6.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=47758 uid=0 result="success"
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7243] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7245] audit: op="connection-add" uuid="6766491d-bbdc-49b0-a5db-e30111f4adde" name="br-ex-if" pid=47758 uid=0 result="success"
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7288] audit: op="connection-update" uuid="899e034d-b20c-5f63-a638-c327b5fc6ba0" name="ci-private-network" args="connection.master,connection.timestamp,connection.controller,connection.port-type,connection.slave-type,ipv6.routing-rules,ipv6.addr-gen-mode,ipv6.dns,ipv6.addresses,ipv6.routes,ipv6.method,ovs-external-ids.data,ipv4.routing-rules,ipv4.dns,ipv4.method,ipv4.addresses,ipv4.never-default,ipv4.routes,ovs-interface.type" pid=47758 uid=0 result="success"
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7303] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7305] audit: op="connection-add" uuid="0a8368e5-0406-4352-a919-f77c9a8eba80" name="vlan20-if" pid=47758 uid=0 result="success"
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7318] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7320] audit: op="connection-add" uuid="5cf6e243-aff3-4874-bd1a-934cb646cef4" name="vlan21-if" pid=47758 uid=0 result="success"
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7333] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7335] audit: op="connection-add" uuid="62e4333b-97e6-4b53-88c1-016187626ba4" name="vlan22-if" pid=47758 uid=0 result="success"
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7349] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7351] audit: op="connection-add" uuid="c280aad0-0d6e-4008-8437-5addc3a79fe3" name="vlan23-if" pid=47758 uid=0 result="success"
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7362] audit: op="connection-delete" uuid="efea40a0-3c75-3072-953b-755f76bcf27c" name="Wired connection 1" pid=47758 uid=0 result="success"
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7373] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7382] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7386] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (ca68938f-3710-4d6e-9077-0aceb9867027)
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7387] audit: op="connection-activate" uuid="ca68938f-3710-4d6e-9077-0aceb9867027" name="br-ex-br" pid=47758 uid=0 result="success"
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7390] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7396] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7400] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (9f530009-558d-4bb1-b084-1a5a56064e8b)
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7403] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7409] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7412] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (a51c1e30-3ad3-4794-a9a8-66e94b7ade06)
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7414] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7421] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7425] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (0d179398-c979-494b-bb04-f5eef9ddad12)
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7427] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7434] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7438] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (0d6e8600-b42c-4608-b50a-a1bf07d28d65)
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7440] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7447] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7451] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (9c90c4fa-bf07-4933-b7d4-535a1de026aa)
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7453] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7459] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7464] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (64808132-8a91-4831-a82a-22cae4892006)
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7466] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7468] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7471] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7476] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7482] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7486] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (6766491d-bbdc-49b0-a5db-e30111f4adde)
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7488] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7492] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7494] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7496] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7497] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7507] device (eth1): disconnecting for new activation request.
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7509] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7541] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7544] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7544] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7546] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7549] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7552] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (0a8368e5-0406-4352-a919-f77c9a8eba80)
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7553] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7554] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7555] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7556] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7558] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7562] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7567] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (5cf6e243-aff3-4874-bd1a-934cb646cef4)
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7568] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7571] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7573] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7574] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7576] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7581] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7586] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (62e4333b-97e6-4b53-88c1-016187626ba4)
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7587] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7591] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7592] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7593] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7596] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7600] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7604] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (c280aad0-0d6e-4008-8437-5addc3a79fe3)
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7604] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7607] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7609] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7610] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7611] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7622] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.method,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=47758 uid=0 result="success"
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7624] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7626] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7629] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7634] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7637] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7641] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 kernel: ovs-system: entered promiscuous mode
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7645] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7647] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7652] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7655] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7658] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7660] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 systemd-udevd[47763]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 11:25:08 compute-0 kernel: Timeout policy base is empty
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7667] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7673] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7676] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7678] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7683] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7686] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7689] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7691] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7695] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7698] dhcp4 (eth0): canceled DHCP transaction
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7699] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7699] dhcp4 (eth0): state changed no lease
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7700] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7715] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7718] audit: op="device-reapply" interface="eth1" ifindex=3 pid=47758 uid=0 result="fail" reason="Device is not activated"
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7722] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7760] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7764] dhcp4 (eth0): state changed new lease, address=38.129.56.184
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7770] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct 02 11:25:08 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7821] device (eth1): disconnecting for new activation request.
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7823] audit: op="connection-activate" uuid="899e034d-b20c-5f63-a638-c327b5fc6ba0" name="ci-private-network" pid=47758 uid=0 result="success"
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7829] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7837] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7967] device (eth1): Activation: starting connection 'ci-private-network' (899e034d-b20c-5f63-a638-c327b5fc6ba0)
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7972] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7974] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7978] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7980] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7982] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7984] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.7985] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 kernel: br-ex: entered promiscuous mode
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8002] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8007] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8013] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8018] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8023] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8027] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8031] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8035] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8037] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8040] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8042] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8045] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8049] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8053] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8057] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8061] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8069] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47758 uid=0 result="success"
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8071] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 kernel: vlan22: entered promiscuous mode
Oct 02 11:25:08 compute-0 systemd-udevd[47762]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8123] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8131] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8137] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 kernel: vlan21: entered promiscuous mode
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8173] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8188] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8200] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8206] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8208] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8213] device (eth1): Activation: successful, device activated.
Oct 02 11:25:08 compute-0 kernel: vlan23: entered promiscuous mode
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8223] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8225] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8232] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 02 11:25:08 compute-0 systemd-udevd[47764]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 11:25:08 compute-0 kernel: vlan20: entered promiscuous mode
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8279] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8290] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8297] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 02 11:25:08 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8344] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8349] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8383] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8389] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8398] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8411] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8415] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8417] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8420] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8425] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8430] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8434] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8440] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8441] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 11:25:08 compute-0 NetworkManager[44987]: <info>  [1759404308.8445] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 02 11:25:09 compute-0 NetworkManager[44987]: <info>  [1759404309.9741] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47758 uid=0 result="success"
Oct 02 11:25:10 compute-0 NetworkManager[44987]: <info>  [1759404310.1277] checkpoint[0x5598d36c2950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Oct 02 11:25:10 compute-0 NetworkManager[44987]: <info>  [1759404310.1279] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47758 uid=0 result="success"
Oct 02 11:25:10 compute-0 sudo[48115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzfgjadtlheewskckonbaomfgdacobla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404309.8006823-850-77305090445986/AnsiballZ_async_status.py'
Oct 02 11:25:10 compute-0 sudo[48115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:10 compute-0 python3.9[48117]: ansible-ansible.legacy.async_status Invoked with jid=j155294657383.47752 mode=status _async_dir=/root/.ansible_async
Oct 02 11:25:10 compute-0 sudo[48115]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:10 compute-0 NetworkManager[44987]: <info>  [1759404310.4430] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47758 uid=0 result="success"
Oct 02 11:25:10 compute-0 NetworkManager[44987]: <info>  [1759404310.4447] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47758 uid=0 result="success"
Oct 02 11:25:10 compute-0 NetworkManager[44987]: <info>  [1759404310.6814] audit: op="networking-control" arg="global-dns-configuration" pid=47758 uid=0 result="success"
Oct 02 11:25:10 compute-0 NetworkManager[44987]: <info>  [1759404310.6846] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Oct 02 11:25:10 compute-0 NetworkManager[44987]: <info>  [1759404310.6875] audit: op="networking-control" arg="global-dns-configuration" pid=47758 uid=0 result="success"
Oct 02 11:25:10 compute-0 NetworkManager[44987]: <info>  [1759404310.6900] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47758 uid=0 result="success"
Oct 02 11:25:10 compute-0 NetworkManager[44987]: <info>  [1759404310.8551] checkpoint[0x5598d36c2a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Oct 02 11:25:10 compute-0 NetworkManager[44987]: <info>  [1759404310.8555] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47758 uid=0 result="success"
Oct 02 11:25:10 compute-0 ansible-async_wrapper.py[47756]: Module complete (47756)
Oct 02 11:25:11 compute-0 ansible-async_wrapper.py[47755]: Done in kid B.
Oct 02 11:25:13 compute-0 sudo[48220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwzcjrkqocscyxengcztolnzsaokahum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404309.8006823-850-77305090445986/AnsiballZ_async_status.py'
Oct 02 11:25:13 compute-0 sudo[48220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:13 compute-0 python3.9[48222]: ansible-ansible.legacy.async_status Invoked with jid=j155294657383.47752 mode=status _async_dir=/root/.ansible_async
Oct 02 11:25:13 compute-0 sudo[48220]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:14 compute-0 sudo[48320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ximxgnpdzsfkfrzbpemahcthekkosbgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404309.8006823-850-77305090445986/AnsiballZ_async_status.py'
Oct 02 11:25:14 compute-0 sudo[48320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:14 compute-0 python3.9[48322]: ansible-ansible.legacy.async_status Invoked with jid=j155294657383.47752 mode=cleanup _async_dir=/root/.ansible_async
Oct 02 11:25:14 compute-0 sudo[48320]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:15 compute-0 sudo[48472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obcrehhdsabjkvckxwlsfphzkehvszrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404314.936692-931-45707541404303/AnsiballZ_stat.py'
Oct 02 11:25:15 compute-0 sudo[48472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:15 compute-0 python3.9[48474]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:25:15 compute-0 sudo[48472]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:15 compute-0 sudo[48595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhnrnnumkutbxduzpncusnnmvmssllro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404314.936692-931-45707541404303/AnsiballZ_copy.py'
Oct 02 11:25:15 compute-0 sudo[48595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:16 compute-0 python3.9[48597]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759404314.936692-931-45707541404303/.source.returncode _original_basename=.h7b0cv0q follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:25:16 compute-0 sudo[48595]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:16 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 02 11:25:16 compute-0 sudo[48749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oehrhpjggxehdravxlwhiagbdsciblws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404316.4921384-979-84744648809274/AnsiballZ_stat.py'
Oct 02 11:25:16 compute-0 sudo[48749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:17 compute-0 python3.9[48751]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:25:17 compute-0 sudo[48749]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:17 compute-0 sudo[48873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsyhvhmondaaiddtaoweyozmrmtymdxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404316.4921384-979-84744648809274/AnsiballZ_copy.py'
Oct 02 11:25:17 compute-0 sudo[48873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:17 compute-0 python3.9[48875]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759404316.4921384-979-84744648809274/.source.cfg _original_basename=.01smjoa6 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:25:17 compute-0 sudo[48873]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:18 compute-0 sudo[49025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlvlbkmcnuxhyggsbmscbchydbpjbzrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404318.1791477-1024-92842993602872/AnsiballZ_systemd.py'
Oct 02 11:25:18 compute-0 sudo[49025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:18 compute-0 python3.9[49027]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 11:25:18 compute-0 systemd[1]: Reloading Network Manager...
Oct 02 11:25:18 compute-0 NetworkManager[44987]: <info>  [1759404318.8072] audit: op="reload" arg="0" pid=49031 uid=0 result="success"
Oct 02 11:25:18 compute-0 NetworkManager[44987]: <info>  [1759404318.8081] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Oct 02 11:25:18 compute-0 systemd[1]: Reloaded Network Manager.
Oct 02 11:25:18 compute-0 sudo[49025]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:19 compute-0 sshd-session[40982]: Connection closed by 192.168.122.30 port 45822
Oct 02 11:25:19 compute-0 sshd-session[40979]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:25:19 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Oct 02 11:25:19 compute-0 systemd[1]: session-10.scope: Consumed 48.631s CPU time.
Oct 02 11:25:19 compute-0 systemd-logind[789]: Session 10 logged out. Waiting for processes to exit.
Oct 02 11:25:19 compute-0 systemd-logind[789]: Removed session 10.
Oct 02 11:25:24 compute-0 sshd-session[49062]: Accepted publickey for zuul from 192.168.122.30 port 48954 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 11:25:24 compute-0 systemd-logind[789]: New session 11 of user zuul.
Oct 02 11:25:24 compute-0 systemd[1]: Started Session 11 of User zuul.
Oct 02 11:25:24 compute-0 sshd-session[49062]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:25:25 compute-0 python3.9[49215]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:25:26 compute-0 python3.9[49369]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:25:28 compute-0 python3.9[49563]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:25:28 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 02 11:25:28 compute-0 sshd-session[49065]: Connection closed by 192.168.122.30 port 48954
Oct 02 11:25:28 compute-0 sshd-session[49062]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:25:28 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Oct 02 11:25:28 compute-0 systemd[1]: session-11.scope: Consumed 2.214s CPU time.
Oct 02 11:25:28 compute-0 systemd-logind[789]: Session 11 logged out. Waiting for processes to exit.
Oct 02 11:25:28 compute-0 systemd-logind[789]: Removed session 11.
Oct 02 11:25:35 compute-0 sshd-session[49592]: Accepted publickey for zuul from 192.168.122.30 port 45826 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 11:25:35 compute-0 systemd-logind[789]: New session 12 of user zuul.
Oct 02 11:25:35 compute-0 systemd[1]: Started Session 12 of User zuul.
Oct 02 11:25:35 compute-0 sshd-session[49592]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:25:36 compute-0 python3.9[49745]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:25:37 compute-0 python3.9[49900]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:25:38 compute-0 sudo[50054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbofkarxsolyzszdfdkoiyqgmblhqiok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404337.7499275-85-31370459067080/AnsiballZ_setup.py'
Oct 02 11:25:38 compute-0 sudo[50054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:38 compute-0 python3.9[50056]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:25:38 compute-0 sudo[50054]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:38 compute-0 sudo[50138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pixwdcvgsztrfpupvondjovfkdubyjxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404337.7499275-85-31370459067080/AnsiballZ_dnf.py'
Oct 02 11:25:38 compute-0 sudo[50138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:39 compute-0 python3.9[50140]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:25:40 compute-0 sudo[50138]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:41 compute-0 sudo[50292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ealxzypgqaedhhexaphjuptkslababuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404340.7423725-121-271036938742212/AnsiballZ_setup.py'
Oct 02 11:25:41 compute-0 sudo[50292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:41 compute-0 python3.9[50294]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:25:41 compute-0 sudo[50292]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:42 compute-0 sudo[50487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbamatiivwegabcwsvcffhphvmpotcot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404342.0562572-154-127383510362187/AnsiballZ_file.py'
Oct 02 11:25:42 compute-0 sudo[50487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:42 compute-0 python3.9[50489]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:25:42 compute-0 sudo[50487]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:43 compute-0 sudo[50639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pscysppjnggvhtltejrpvnedhdxxvcir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404343.0106218-178-277923601029295/AnsiballZ_command.py'
Oct 02 11:25:43 compute-0 sudo[50639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:43 compute-0 python3.9[50641]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:25:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat4129873036-merged.mount: Deactivated successfully.
Oct 02 11:25:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck2509238781-merged.mount: Deactivated successfully.
Oct 02 11:25:44 compute-0 podman[50642]: 2025-10-02 11:25:44.34739074 +0000 UTC m=+0.634160703 system refresh
Oct 02 11:25:44 compute-0 sudo[50639]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:45 compute-0 sudo[50802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmvnuubdtnhuxihzcqkkukyovjtmemod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404344.565889-202-194185523692463/AnsiballZ_stat.py'
Oct 02 11:25:45 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:25:45 compute-0 sudo[50802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:45 compute-0 python3.9[50804]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:25:45 compute-0 sudo[50802]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:45 compute-0 sudo[50925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrpitdxyxegazkqldudqcqtqusgdikif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404344.565889-202-194185523692463/AnsiballZ_copy.py'
Oct 02 11:25:45 compute-0 sudo[50925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:45 compute-0 python3.9[50927]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759404344.565889-202-194185523692463/.source.json follow=False _original_basename=podman_network_config.j2 checksum=cf8f76116e88c6c60e5bb068da733ac1f6d4ec02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:25:45 compute-0 sudo[50925]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:46 compute-0 sudo[51077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptnaxnkvfoopmvpgviilnguntuxazfar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404346.1912801-247-30872330615867/AnsiballZ_stat.py'
Oct 02 11:25:46 compute-0 sudo[51077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:46 compute-0 python3.9[51079]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:25:46 compute-0 sudo[51077]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:47 compute-0 sudo[51200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ziuychcqzujghwvbgrjpgnzdvqkffjaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404346.1912801-247-30872330615867/AnsiballZ_copy.py'
Oct 02 11:25:47 compute-0 sudo[51200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:47 compute-0 python3.9[51202]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759404346.1912801-247-30872330615867/.source.conf follow=False _original_basename=registries.conf.j2 checksum=b0997da0dac7c72916bfa4feb1650346bde4dfbe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:25:47 compute-0 sudo[51200]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:48 compute-0 sudo[51352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwplycktkdatcgfnybvpigaylonzvunb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404347.6560936-295-171471859468892/AnsiballZ_ini_file.py'
Oct 02 11:25:48 compute-0 sudo[51352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:48 compute-0 python3.9[51354]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:25:48 compute-0 sudo[51352]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:48 compute-0 sudo[51504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcksmatlzztezyunraltygwqarcueead ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404348.456397-295-255378937047979/AnsiballZ_ini_file.py'
Oct 02 11:25:48 compute-0 sudo[51504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:48 compute-0 python3.9[51506]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:25:48 compute-0 sudo[51504]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:49 compute-0 sudo[51656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twijmzpgioboqfgckhsxclgdsnqqfkhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404349.0945113-295-253702442323369/AnsiballZ_ini_file.py'
Oct 02 11:25:49 compute-0 sudo[51656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:49 compute-0 python3.9[51658]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:25:49 compute-0 sudo[51656]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:50 compute-0 sudo[51808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpmsqwjboecgjdvgsmrfzwyapomztdfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404349.780405-295-120557039032950/AnsiballZ_ini_file.py'
Oct 02 11:25:50 compute-0 sudo[51808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:50 compute-0 python3.9[51810]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:25:50 compute-0 sudo[51808]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:51 compute-0 sudo[51960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqhbfefhmndqxnhvnlogxrjaynvoxxah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404350.8205135-388-75496104896208/AnsiballZ_dnf.py'
Oct 02 11:25:51 compute-0 sudo[51960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:51 compute-0 python3.9[51962]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:25:52 compute-0 sudo[51960]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:53 compute-0 sudo[52113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jckugkafjvqexxwhsfuynuwnveyjfhjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404353.03195-421-26007004344707/AnsiballZ_setup.py'
Oct 02 11:25:53 compute-0 sudo[52113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:53 compute-0 python3.9[52115]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:25:53 compute-0 sudo[52113]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:54 compute-0 sudo[52268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hitprdlbundmuqvlqlsmwyiofoeovbcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404353.9180183-445-92773301059781/AnsiballZ_stat.py'
Oct 02 11:25:54 compute-0 sudo[52268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:54 compute-0 python3.9[52270]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:25:54 compute-0 sudo[52268]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:54 compute-0 sudo[52420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyqftedhmxkdobpafjrzfeayxtiwqxts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404354.6895845-472-254790256183980/AnsiballZ_stat.py'
Oct 02 11:25:54 compute-0 sudo[52420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:55 compute-0 python3.9[52422]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:25:55 compute-0 sudo[52420]: pam_unix(sudo:session): session closed for user root
Oct 02 11:25:55 compute-0 sudo[52572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahfrvaybinooigowsabvaneaopvkhhdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404355.4996815-502-73749831455510/AnsiballZ_service_facts.py'
Oct 02 11:25:55 compute-0 sudo[52572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:25:56 compute-0 python3.9[52574]: ansible-service_facts Invoked
Oct 02 11:25:56 compute-0 network[52591]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 11:25:56 compute-0 network[52592]: 'network-scripts' will be removed from distribution in near future.
Oct 02 11:25:56 compute-0 network[52593]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 11:25:59 compute-0 sudo[52572]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:00 compute-0 sudo[52878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rayrnzgvonogluuxovwcprezyrwmrzda ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1759404360.4394894-541-116210112192335/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1759404360.4394894-541-116210112192335/args'
Oct 02 11:26:00 compute-0 sudo[52878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:00 compute-0 sudo[52878]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:01 compute-0 sudo[53045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgqvorerpwsptfyrffwlcyuffmgxognz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404361.2267756-574-181534914964128/AnsiballZ_dnf.py'
Oct 02 11:26:01 compute-0 sudo[53045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:01 compute-0 python3.9[53047]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:26:03 compute-0 sudo[53045]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:04 compute-0 sudo[53198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kneodqskpaiekiqnqngjufsrjsohgass ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404363.4110298-613-105850248136618/AnsiballZ_package_facts.py'
Oct 02 11:26:04 compute-0 sudo[53198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:04 compute-0 python3.9[53200]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 02 11:26:04 compute-0 sudo[53198]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:05 compute-0 sudo[53350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmoojrbdoohvliydxywyrbkofseuycyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404365.3779461-643-231023276315335/AnsiballZ_stat.py'
Oct 02 11:26:05 compute-0 sudo[53350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:05 compute-0 python3.9[53352]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:26:05 compute-0 sudo[53350]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:06 compute-0 sudo[53475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwacrpyissznjmzdmgwjjxoscolcqdry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404365.3779461-643-231023276315335/AnsiballZ_copy.py'
Oct 02 11:26:06 compute-0 sudo[53475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:06 compute-0 python3.9[53477]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759404365.3779461-643-231023276315335/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:26:06 compute-0 sudo[53475]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:07 compute-0 sudo[53629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnsvcswehvgtuujlflfwtsmbzmwkbwyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404366.7851286-688-127916898333116/AnsiballZ_stat.py'
Oct 02 11:26:07 compute-0 sudo[53629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:07 compute-0 python3.9[53631]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:26:07 compute-0 sudo[53629]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:07 compute-0 sudo[53754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpxzznabhtuuqbhdrrknvcduyxidfpxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404366.7851286-688-127916898333116/AnsiballZ_copy.py'
Oct 02 11:26:07 compute-0 sudo[53754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:07 compute-0 python3.9[53756]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759404366.7851286-688-127916898333116/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:26:07 compute-0 sudo[53754]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:09 compute-0 sudo[53908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siiognzzjwqxzgdqyaxdpafbrzxkrtbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404368.9264097-751-184274869590730/AnsiballZ_lineinfile.py'
Oct 02 11:26:09 compute-0 sudo[53908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:09 compute-0 python3.9[53910]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:26:09 compute-0 sudo[53908]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:10 compute-0 sudo[54062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmalomzzjyjawtrbbwxypavkunmbhoxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404370.5149758-796-156281523159015/AnsiballZ_setup.py'
Oct 02 11:26:10 compute-0 sudo[54062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:11 compute-0 python3.9[54064]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:26:11 compute-0 sudo[54062]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:11 compute-0 sudo[54146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbhvsgoorfmrwuxwfqjihkfzsbavdluh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404370.5149758-796-156281523159015/AnsiballZ_systemd.py'
Oct 02 11:26:11 compute-0 sudo[54146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:12 compute-0 python3.9[54148]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:26:12 compute-0 sudo[54146]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:13 compute-0 sudo[54300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfgkkcvprowbwioibojqyoiyqpgjscih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404373.035889-844-8984846092063/AnsiballZ_setup.py'
Oct 02 11:26:13 compute-0 sudo[54300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:13 compute-0 python3.9[54302]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:26:13 compute-0 sudo[54300]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:14 compute-0 sudo[54384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxbszpwxehlvfkayquozyqtnebzpwmcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404373.035889-844-8984846092063/AnsiballZ_systemd.py'
Oct 02 11:26:14 compute-0 sudo[54384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:14 compute-0 python3.9[54386]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 11:26:14 compute-0 systemd[1]: Stopping NTP client/server...
Oct 02 11:26:14 compute-0 chronyd[805]: chronyd exiting
Oct 02 11:26:14 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Oct 02 11:26:14 compute-0 systemd[1]: Stopped NTP client/server.
Oct 02 11:26:14 compute-0 systemd[1]: Starting NTP client/server...
Oct 02 11:26:14 compute-0 chronyd[54395]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 02 11:26:14 compute-0 chronyd[54395]: Frequency -23.760 +/- 0.204 ppm read from /var/lib/chrony/drift
Oct 02 11:26:14 compute-0 chronyd[54395]: Loaded seccomp filter (level 2)
Oct 02 11:26:14 compute-0 systemd[1]: Started NTP client/server.
Oct 02 11:26:14 compute-0 sudo[54384]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:15 compute-0 sshd-session[49595]: Connection closed by 192.168.122.30 port 45826
Oct 02 11:26:15 compute-0 sshd-session[49592]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:26:15 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Oct 02 11:26:15 compute-0 systemd[1]: session-12.scope: Consumed 24.311s CPU time.
Oct 02 11:26:15 compute-0 systemd-logind[789]: Session 12 logged out. Waiting for processes to exit.
Oct 02 11:26:15 compute-0 systemd-logind[789]: Removed session 12.
Oct 02 11:26:21 compute-0 sshd-session[54421]: Accepted publickey for zuul from 192.168.122.30 port 37492 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 11:26:21 compute-0 systemd-logind[789]: New session 13 of user zuul.
Oct 02 11:26:21 compute-0 systemd[1]: Started Session 13 of User zuul.
Oct 02 11:26:21 compute-0 sshd-session[54421]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:26:21 compute-0 sudo[54574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvkdzobltthswujvjvvfsurcqwppasal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404381.2928202-31-269019492168056/AnsiballZ_file.py'
Oct 02 11:26:21 compute-0 sudo[54574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:21 compute-0 python3.9[54576]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:26:22 compute-0 sudo[54574]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:22 compute-0 sudo[54726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpocuzqibfkadccjsemhqgwpcolklfpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404382.2893095-67-177282204520787/AnsiballZ_stat.py'
Oct 02 11:26:22 compute-0 sudo[54726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:22 compute-0 python3.9[54728]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:26:22 compute-0 sudo[54726]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:23 compute-0 sudo[54849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdlxtfvnqukdejhxyrusuttmfpgnbruu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404382.2893095-67-177282204520787/AnsiballZ_copy.py'
Oct 02 11:26:23 compute-0 sudo[54849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:23 compute-0 python3.9[54851]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759404382.2893095-67-177282204520787/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:26:23 compute-0 sudo[54849]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:24 compute-0 sshd-session[54424]: Connection closed by 192.168.122.30 port 37492
Oct 02 11:26:24 compute-0 sshd-session[54421]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:26:24 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Oct 02 11:26:24 compute-0 systemd[1]: session-13.scope: Consumed 1.491s CPU time.
Oct 02 11:26:24 compute-0 systemd-logind[789]: Session 13 logged out. Waiting for processes to exit.
Oct 02 11:26:24 compute-0 systemd-logind[789]: Removed session 13.
Oct 02 11:26:30 compute-0 sshd-session[54876]: Accepted publickey for zuul from 192.168.122.30 port 34378 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 11:26:30 compute-0 systemd-logind[789]: New session 14 of user zuul.
Oct 02 11:26:30 compute-0 systemd[1]: Started Session 14 of User zuul.
Oct 02 11:26:30 compute-0 sshd-session[54876]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:26:31 compute-0 python3.9[55029]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:26:32 compute-0 sudo[55183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjoimrbysobaetffazhnbsajicuyqexu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404391.8484535-64-259166215609447/AnsiballZ_file.py'
Oct 02 11:26:32 compute-0 sudo[55183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:32 compute-0 python3.9[55185]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:26:32 compute-0 sudo[55183]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:33 compute-0 sudo[55358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdxvmtkwaoejyjjlkapffppyplhlcvpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404392.633173-88-266068347924515/AnsiballZ_stat.py'
Oct 02 11:26:33 compute-0 sudo[55358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:33 compute-0 python3.9[55360]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:26:33 compute-0 sudo[55358]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:33 compute-0 sudo[55481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjwgkqaudtuszskldtleihxvjokelizt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404392.633173-88-266068347924515/AnsiballZ_copy.py'
Oct 02 11:26:33 compute-0 sudo[55481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:34 compute-0 python3.9[55483]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1759404392.633173-88-266068347924515/.source.json _original_basename=.zs5i0djz follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:26:34 compute-0 sudo[55481]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:34 compute-0 sudo[55633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuuttycrcqgdbfmfxcqwafrpvnqhqjou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404394.6487224-157-57059466275599/AnsiballZ_stat.py'
Oct 02 11:26:34 compute-0 sudo[55633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:35 compute-0 python3.9[55635]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:26:35 compute-0 sudo[55633]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:35 compute-0 sudo[55756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heznkapahiwcedbtgawwedmhhqgszzzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404394.6487224-157-57059466275599/AnsiballZ_copy.py'
Oct 02 11:26:35 compute-0 sudo[55756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:35 compute-0 python3.9[55758]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759404394.6487224-157-57059466275599/.source _original_basename=.0u5omij7 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:26:35 compute-0 sudo[55756]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:36 compute-0 sudo[55908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chcnqddhqqcjcmttdvzzmktxwvshbbhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404395.969393-205-204674907257015/AnsiballZ_file.py'
Oct 02 11:26:36 compute-0 sudo[55908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:36 compute-0 python3.9[55910]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:26:36 compute-0 sudo[55908]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:36 compute-0 sudo[56060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjugwnndjuwfkjubsuiunpvtsvsqxltl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404396.6778905-229-128846369816079/AnsiballZ_stat.py'
Oct 02 11:26:36 compute-0 sudo[56060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:37 compute-0 python3.9[56062]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:26:37 compute-0 sudo[56060]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:37 compute-0 sudo[56183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgbzmddzdwquvsicdfirqecxcvasjvij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404396.6778905-229-128846369816079/AnsiballZ_copy.py'
Oct 02 11:26:37 compute-0 sudo[56183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:37 compute-0 python3.9[56185]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759404396.6778905-229-128846369816079/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:26:37 compute-0 sudo[56183]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:38 compute-0 sudo[56335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfoinkisrxpacxmpaagtkmaddxbwulmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404397.8241374-229-213706614452402/AnsiballZ_stat.py'
Oct 02 11:26:38 compute-0 sudo[56335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:38 compute-0 python3.9[56337]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:26:38 compute-0 sudo[56335]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:38 compute-0 sudo[56458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubvsnuhsvxjjvrsedndvywzhfofwvflu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404397.8241374-229-213706614452402/AnsiballZ_copy.py'
Oct 02 11:26:38 compute-0 sudo[56458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:38 compute-0 python3.9[56460]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759404397.8241374-229-213706614452402/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:26:38 compute-0 sudo[56458]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:39 compute-0 sudo[56610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgfrocinfjmrnwecnbswvmrkjjgdpykq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404399.3893504-316-178838393454225/AnsiballZ_file.py'
Oct 02 11:26:39 compute-0 sudo[56610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:39 compute-0 python3.9[56612]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:26:39 compute-0 sudo[56610]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:40 compute-0 sudo[56762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoibntlixnkvgoomnafluiejleodcltg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404400.1219707-340-15451110910236/AnsiballZ_stat.py'
Oct 02 11:26:40 compute-0 sudo[56762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:40 compute-0 python3.9[56764]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:26:40 compute-0 sudo[56762]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:40 compute-0 sudo[56885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okbkgioywymaoixhlyxdahgxabpqyjzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404400.1219707-340-15451110910236/AnsiballZ_copy.py'
Oct 02 11:26:40 compute-0 sudo[56885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:41 compute-0 python3.9[56887]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759404400.1219707-340-15451110910236/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:26:41 compute-0 sudo[56885]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:41 compute-0 sudo[57037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvisctcrjzckyiuedhkkngemzzucwhjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404401.5203238-385-35516049980474/AnsiballZ_stat.py'
Oct 02 11:26:41 compute-0 sudo[57037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:41 compute-0 python3.9[57039]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:26:41 compute-0 sudo[57037]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:42 compute-0 sudo[57160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjzoojwmvfcaicrzfbjnhygzrorntqwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404401.5203238-385-35516049980474/AnsiballZ_copy.py'
Oct 02 11:26:42 compute-0 sudo[57160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:42 compute-0 python3.9[57162]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759404401.5203238-385-35516049980474/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:26:42 compute-0 sudo[57160]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:43 compute-0 sudo[57312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwzhcjcnshkkikunqjvnuzaucffzpliv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404403.083353-430-103348005031776/AnsiballZ_systemd.py'
Oct 02 11:26:43 compute-0 sudo[57312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:44 compute-0 python3.9[57314]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:26:44 compute-0 systemd[1]: Reloading.
Oct 02 11:26:44 compute-0 systemd-sysv-generator[57346]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:26:44 compute-0 systemd-rc-local-generator[57342]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:26:44 compute-0 systemd[1]: Reloading.
Oct 02 11:26:44 compute-0 systemd-rc-local-generator[57380]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:26:44 compute-0 systemd-sysv-generator[57383]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:26:44 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Oct 02 11:26:44 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Oct 02 11:26:44 compute-0 sudo[57312]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:45 compute-0 sudo[57541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvzsbwatimsvbzfyhtjpfkwdgogjwpdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404404.8398044-454-120455799972747/AnsiballZ_stat.py'
Oct 02 11:26:45 compute-0 sudo[57541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:45 compute-0 python3.9[57543]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:26:45 compute-0 sudo[57541]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:45 compute-0 sudo[57664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajleivmdheupsgbjvuveokiktrrtfegz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404404.8398044-454-120455799972747/AnsiballZ_copy.py'
Oct 02 11:26:45 compute-0 sudo[57664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:46 compute-0 python3.9[57666]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759404404.8398044-454-120455799972747/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:26:46 compute-0 sudo[57664]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:46 compute-0 sudo[57816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jidtvnlcxyjzweoeqdagxjakmorgwxyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404406.2585754-499-75426250703504/AnsiballZ_stat.py'
Oct 02 11:26:46 compute-0 sudo[57816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:46 compute-0 python3.9[57818]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:26:46 compute-0 sudo[57816]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:47 compute-0 sudo[57939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjdcueqwtskbjbsfuogonllkhobrvchm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404406.2585754-499-75426250703504/AnsiballZ_copy.py'
Oct 02 11:26:47 compute-0 sudo[57939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:47 compute-0 python3.9[57941]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759404406.2585754-499-75426250703504/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:26:47 compute-0 sudo[57939]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:48 compute-0 sudo[58091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubkgdrdgdqyexaledbufnkddakkgovsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404407.8731785-544-277149884460210/AnsiballZ_systemd.py'
Oct 02 11:26:48 compute-0 sudo[58091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:48 compute-0 python3.9[58093]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:26:48 compute-0 systemd[1]: Reloading.
Oct 02 11:26:48 compute-0 systemd-sysv-generator[58124]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:26:48 compute-0 systemd-rc-local-generator[58120]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:26:48 compute-0 systemd[1]: Reloading.
Oct 02 11:26:48 compute-0 systemd-rc-local-generator[58158]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:26:48 compute-0 systemd-sysv-generator[58162]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:26:48 compute-0 systemd[1]: Starting Create netns directory...
Oct 02 11:26:48 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 11:26:48 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 11:26:48 compute-0 systemd[1]: Finished Create netns directory.
Oct 02 11:26:48 compute-0 sudo[58091]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:49 compute-0 python3.9[58321]: ansible-ansible.builtin.service_facts Invoked
Oct 02 11:26:49 compute-0 network[58338]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 11:26:49 compute-0 network[58339]: 'network-scripts' will be removed from distribution in near future.
Oct 02 11:26:49 compute-0 network[58340]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 11:26:53 compute-0 sudo[58602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbrkxxevcjgpcpjdhyiuryvdviybybia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404413.4784527-592-180128691752399/AnsiballZ_systemd.py'
Oct 02 11:26:53 compute-0 sudo[58602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:54 compute-0 python3.9[58604]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:26:54 compute-0 systemd[1]: Reloading.
Oct 02 11:26:54 compute-0 systemd-rc-local-generator[58634]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:26:54 compute-0 systemd-sysv-generator[58637]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:26:54 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Oct 02 11:26:54 compute-0 iptables.init[58644]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Oct 02 11:26:54 compute-0 iptables.init[58644]: iptables: Flushing firewall rules: [  OK  ]
Oct 02 11:26:54 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Oct 02 11:26:54 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Oct 02 11:26:54 compute-0 sudo[58602]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:55 compute-0 sudo[58839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaimqfukqnnwfnqflfevukkcffumhfhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404414.801867-592-278779089509413/AnsiballZ_systemd.py'
Oct 02 11:26:55 compute-0 sudo[58839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:55 compute-0 python3.9[58841]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:26:55 compute-0 sudo[58839]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:56 compute-0 sudo[58993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sznpdbmefkxtzmclwyefcqkvuumvtxnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404416.0392838-640-57765893196170/AnsiballZ_systemd.py'
Oct 02 11:26:56 compute-0 sudo[58993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:56 compute-0 python3.9[58995]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:26:56 compute-0 systemd[1]: Reloading.
Oct 02 11:26:56 compute-0 systemd-sysv-generator[59029]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:26:56 compute-0 systemd-rc-local-generator[59026]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:26:56 compute-0 systemd[1]: Starting Netfilter Tables...
Oct 02 11:26:56 compute-0 systemd[1]: Finished Netfilter Tables.
Oct 02 11:26:57 compute-0 sudo[58993]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:57 compute-0 sudo[59186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-birksbmmjqqkklanyhoguyhmqyyiyrsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404417.3485065-664-230581323509977/AnsiballZ_command.py'
Oct 02 11:26:57 compute-0 sudo[59186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:57 compute-0 python3.9[59188]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:26:57 compute-0 sudo[59186]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:58 compute-0 sudo[59339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfycrplmdxhrrcuxuebsxfzuootcgwxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404418.669834-706-204442131838208/AnsiballZ_stat.py'
Oct 02 11:26:58 compute-0 sudo[59339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:59 compute-0 python3.9[59341]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:26:59 compute-0 sudo[59339]: pam_unix(sudo:session): session closed for user root
Oct 02 11:26:59 compute-0 sudo[59464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqvovhenqngqtlflvyhspsochdjdufjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404418.669834-706-204442131838208/AnsiballZ_copy.py'
Oct 02 11:26:59 compute-0 sudo[59464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:26:59 compute-0 python3.9[59466]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759404418.669834-706-204442131838208/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:26:59 compute-0 sudo[59464]: pam_unix(sudo:session): session closed for user root
Oct 02 11:27:00 compute-0 python3.9[59617]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 11:27:00 compute-0 polkitd[6475]: Registered Authentication Agent for unix-process:59619:248846 (system bus name :1.524 [/usr/bin/pkttyagent --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 02 11:27:25 compute-0 polkitd[6475]: Unregistered Authentication Agent for unix-process:59619:248846 (system bus name :1.524, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 02 11:27:25 compute-0 polkitd[6475]: Operator of unix-process:59619:248846 FAILED to authenticate to gain authorization for action org.freedesktop.systemd1.manage-units for system-bus-name::1.523 [<unknown>] (owned by unix-user:zuul)
Oct 02 11:27:25 compute-0 polkit-agent-helper-1[59631]: pam_unix(polkit-1:auth): conversation failed
Oct 02 11:27:25 compute-0 polkit-agent-helper-1[59631]: pam_unix(polkit-1:auth): auth could not identify password for [root]
Oct 02 11:27:26 compute-0 sshd-session[54879]: Connection closed by 192.168.122.30 port 34378
Oct 02 11:27:26 compute-0 sshd-session[54876]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:27:26 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Oct 02 11:27:26 compute-0 systemd[1]: session-14.scope: Consumed 18.418s CPU time.
Oct 02 11:27:26 compute-0 systemd-logind[789]: Session 14 logged out. Waiting for processes to exit.
Oct 02 11:27:26 compute-0 systemd-logind[789]: Removed session 14.
Oct 02 11:27:39 compute-0 sshd-session[59657]: Accepted publickey for zuul from 192.168.122.30 port 56380 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 11:27:39 compute-0 systemd-logind[789]: New session 15 of user zuul.
Oct 02 11:27:39 compute-0 systemd[1]: Started Session 15 of User zuul.
Oct 02 11:27:39 compute-0 sshd-session[59657]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:27:40 compute-0 python3.9[59810]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:27:41 compute-0 sudo[59964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqpprwsybiymhinwluugznvvhtofghsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404461.4384468-64-221975092576061/AnsiballZ_file.py'
Oct 02 11:27:41 compute-0 sudo[59964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:27:42 compute-0 python3.9[59966]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:27:42 compute-0 sudo[59964]: pam_unix(sudo:session): session closed for user root
Oct 02 11:27:42 compute-0 sudo[60139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxoouuighdtxmjcuzwyflkqllughbnuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404462.3595123-88-263480132517344/AnsiballZ_stat.py'
Oct 02 11:27:42 compute-0 sudo[60139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:27:43 compute-0 python3.9[60141]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:27:43 compute-0 sudo[60139]: pam_unix(sudo:session): session closed for user root
Oct 02 11:27:43 compute-0 sudo[60217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzsksrssvtkvpnbtylqhjyfbkxlqtgvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404462.3595123-88-263480132517344/AnsiballZ_file.py'
Oct 02 11:27:43 compute-0 sudo[60217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:27:43 compute-0 python3.9[60219]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.g0l94ckk recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:27:43 compute-0 sudo[60217]: pam_unix(sudo:session): session closed for user root
Oct 02 11:27:44 compute-0 sudo[60369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgmlbpyqjpvpmywzlyqvozohuclgnepg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404463.9936054-148-259868010212478/AnsiballZ_stat.py'
Oct 02 11:27:44 compute-0 sudo[60369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:27:44 compute-0 python3.9[60371]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:27:44 compute-0 sudo[60369]: pam_unix(sudo:session): session closed for user root
Oct 02 11:27:44 compute-0 sudo[60447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvakngirrtmivftyclplbhqcbcbcecwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404463.9936054-148-259868010212478/AnsiballZ_file.py'
Oct 02 11:27:44 compute-0 sudo[60447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:27:45 compute-0 python3.9[60449]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.bbzblggm recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:27:45 compute-0 sudo[60447]: pam_unix(sudo:session): session closed for user root
Oct 02 11:27:45 compute-0 sudo[60599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnnmqwwemfkxckrqbecfzsqtfkzzuouy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404465.4356687-187-195197556473849/AnsiballZ_file.py'
Oct 02 11:27:45 compute-0 sudo[60599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:27:45 compute-0 python3.9[60601]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:27:45 compute-0 sudo[60599]: pam_unix(sudo:session): session closed for user root
Oct 02 11:27:46 compute-0 sudo[60751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmucftczhcoyhpkbvfdnsatvxaqjgiia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404466.1599395-211-270163195871676/AnsiballZ_stat.py'
Oct 02 11:27:46 compute-0 sudo[60751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:27:46 compute-0 python3.9[60753]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:27:46 compute-0 sudo[60751]: pam_unix(sudo:session): session closed for user root
Oct 02 11:27:46 compute-0 sudo[60829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aijxfkztplbwyfbabcmiqdjwrcmgirue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404466.1599395-211-270163195871676/AnsiballZ_file.py'
Oct 02 11:27:46 compute-0 sudo[60829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:27:47 compute-0 python3.9[60831]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:27:47 compute-0 sudo[60829]: pam_unix(sudo:session): session closed for user root
Oct 02 11:27:47 compute-0 sudo[60981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggvpztqhagsvjxfzwdewkljdssrwepbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404467.3285363-211-173004926281869/AnsiballZ_stat.py'
Oct 02 11:27:47 compute-0 sudo[60981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:27:47 compute-0 python3.9[60983]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:27:47 compute-0 sudo[60981]: pam_unix(sudo:session): session closed for user root
Oct 02 11:27:48 compute-0 sudo[61059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyioupxhgnhcvcrhfmuietxkqtdhltci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404467.3285363-211-173004926281869/AnsiballZ_file.py'
Oct 02 11:27:48 compute-0 sudo[61059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:27:48 compute-0 python3.9[61061]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:27:48 compute-0 sudo[61059]: pam_unix(sudo:session): session closed for user root
Oct 02 11:27:48 compute-0 sudo[61211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgjznemxocmbdjzibzumwmzazclyegty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404468.4107492-280-127586547048274/AnsiballZ_file.py'
Oct 02 11:27:48 compute-0 sudo[61211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:27:48 compute-0 python3.9[61213]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:27:48 compute-0 sudo[61211]: pam_unix(sudo:session): session closed for user root
Oct 02 11:27:49 compute-0 sudo[61363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cndowekhlnqpayaqgsndxrgfkkukbvlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404469.1351914-304-148088987638488/AnsiballZ_stat.py'
Oct 02 11:27:49 compute-0 sudo[61363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:27:49 compute-0 python3.9[61365]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:27:49 compute-0 sudo[61363]: pam_unix(sudo:session): session closed for user root
Oct 02 11:27:49 compute-0 sudo[61441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afbkihbcetyhmwxdacemnkbztobbqpqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404469.1351914-304-148088987638488/AnsiballZ_file.py'
Oct 02 11:27:49 compute-0 sudo[61441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:27:49 compute-0 python3.9[61443]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:27:49 compute-0 sudo[61441]: pam_unix(sudo:session): session closed for user root
Oct 02 11:27:50 compute-0 sudo[61593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvxuvavcfexmhfislaqjvunqkyisyrmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404470.3326218-340-40502407107598/AnsiballZ_stat.py'
Oct 02 11:27:50 compute-0 sudo[61593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:27:50 compute-0 python3.9[61595]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:27:50 compute-0 sudo[61593]: pam_unix(sudo:session): session closed for user root
Oct 02 11:27:51 compute-0 sudo[61671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibqyiablkwldphwqjimelrrojdblsnyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404470.3326218-340-40502407107598/AnsiballZ_file.py'
Oct 02 11:27:51 compute-0 sudo[61671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:27:51 compute-0 python3.9[61673]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:27:51 compute-0 sudo[61671]: pam_unix(sudo:session): session closed for user root
Oct 02 11:27:52 compute-0 sudo[61823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmjdrltovqevpuhuidzivnackxqiwvsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404471.5183997-376-130847739299316/AnsiballZ_systemd.py'
Oct 02 11:27:52 compute-0 sudo[61823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:27:52 compute-0 python3.9[61825]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:27:52 compute-0 systemd[1]: Reloading.
Oct 02 11:27:52 compute-0 systemd-sysv-generator[61856]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:27:52 compute-0 systemd-rc-local-generator[61853]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:27:52 compute-0 sudo[61823]: pam_unix(sudo:session): session closed for user root
Oct 02 11:27:53 compute-0 sudo[62012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyqcoagpxcxanhwtmjnoquxwwyxhvdgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404472.992769-400-97247179240332/AnsiballZ_stat.py'
Oct 02 11:27:53 compute-0 sudo[62012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:27:53 compute-0 python3.9[62014]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:27:53 compute-0 sudo[62012]: pam_unix(sudo:session): session closed for user root
Oct 02 11:27:53 compute-0 sudo[62090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szwrfkzrngozrvsqrrawkznzeaftkpxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404472.992769-400-97247179240332/AnsiballZ_file.py'
Oct 02 11:27:53 compute-0 sudo[62090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:27:53 compute-0 python3.9[62092]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:27:53 compute-0 sudo[62090]: pam_unix(sudo:session): session closed for user root
Oct 02 11:27:54 compute-0 sudo[62242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adrpqbfyekofosiaulfifvvwacqvhtxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404474.289932-436-165627449196971/AnsiballZ_stat.py'
Oct 02 11:27:54 compute-0 sudo[62242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:27:54 compute-0 python3.9[62244]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:27:54 compute-0 sudo[62242]: pam_unix(sudo:session): session closed for user root
Oct 02 11:27:55 compute-0 sudo[62320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxkjvkpzhxkevckzzbdgsjhfeyvvftmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404474.289932-436-165627449196971/AnsiballZ_file.py'
Oct 02 11:27:55 compute-0 sudo[62320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:27:55 compute-0 python3.9[62322]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:27:55 compute-0 sudo[62320]: pam_unix(sudo:session): session closed for user root
Oct 02 11:27:55 compute-0 sudo[62472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biugegtwcfxgvkjbjxxixoixblrwsgkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404475.662401-472-116194769093611/AnsiballZ_systemd.py'
Oct 02 11:27:55 compute-0 sudo[62472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:27:56 compute-0 python3.9[62474]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:27:56 compute-0 systemd[1]: Reloading.
Oct 02 11:27:56 compute-0 systemd-rc-local-generator[62501]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:27:56 compute-0 systemd-sysv-generator[62505]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:27:56 compute-0 systemd[1]: Starting Create netns directory...
Oct 02 11:27:56 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 11:27:56 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 11:27:56 compute-0 systemd[1]: Finished Create netns directory.
Oct 02 11:27:56 compute-0 sudo[62472]: pam_unix(sudo:session): session closed for user root
Oct 02 11:27:57 compute-0 python3.9[62665]: ansible-ansible.builtin.service_facts Invoked
Oct 02 11:27:57 compute-0 network[62682]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 11:27:57 compute-0 network[62683]: 'network-scripts' will be removed from distribution in near future.
Oct 02 11:27:57 compute-0 network[62684]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 11:28:01 compute-0 anacron[26590]: Job `cron.daily' started
Oct 02 11:28:01 compute-0 anacron[26590]: Job `cron.daily' terminated
Oct 02 11:28:02 compute-0 sudo[62947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wecfsawdvcisvvwvuooanarzidlcsuud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404482.076408-550-219029402643196/AnsiballZ_stat.py'
Oct 02 11:28:02 compute-0 sudo[62947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:02 compute-0 python3.9[62949]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:28:02 compute-0 sudo[62947]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:02 compute-0 sudo[63025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuolajtfexwxuybfqghtmohctjtojnfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404482.076408-550-219029402643196/AnsiballZ_file.py'
Oct 02 11:28:02 compute-0 sudo[63025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:02 compute-0 python3.9[63027]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:28:02 compute-0 sudo[63025]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:03 compute-0 sudo[63177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syvmvgedvdhbsrrmajuyxrtnlnqfuemn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404483.4410324-589-117902809206998/AnsiballZ_file.py'
Oct 02 11:28:03 compute-0 sudo[63177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:03 compute-0 python3.9[63179]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:28:04 compute-0 sudo[63177]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:04 compute-0 sudo[63329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkfejcmujykenbqrvrlznpkiruehirra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404484.426744-613-159926497218510/AnsiballZ_stat.py'
Oct 02 11:28:04 compute-0 sudo[63329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:04 compute-0 python3.9[63331]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:28:04 compute-0 sudo[63329]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:05 compute-0 sudo[63452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsccnrxhbiyqucrssvjdszthjyoabwwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404484.426744-613-159926497218510/AnsiballZ_copy.py'
Oct 02 11:28:05 compute-0 sudo[63452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:05 compute-0 python3.9[63454]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759404484.426744-613-159926497218510/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:28:05 compute-0 sudo[63452]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:06 compute-0 sudo[63604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgcnwhgmusnlywibxazaiktmxmtjhrfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404486.101123-667-191896120347081/AnsiballZ_timezone.py'
Oct 02 11:28:06 compute-0 sudo[63604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:06 compute-0 python3.9[63606]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 02 11:28:06 compute-0 systemd[1]: Starting Time & Date Service...
Oct 02 11:28:06 compute-0 systemd[1]: Started Time & Date Service.
Oct 02 11:28:06 compute-0 sudo[63604]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:07 compute-0 sudo[63760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fywojqwmjvebieadquaifcdancvnhgvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404487.253731-694-107605499410390/AnsiballZ_file.py'
Oct 02 11:28:07 compute-0 sudo[63760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:07 compute-0 python3.9[63762]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:28:07 compute-0 sudo[63760]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:08 compute-0 sudo[63912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eclntgyovcbsqntianffdqhvpwijmkrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404488.0192974-718-45626247996504/AnsiballZ_stat.py'
Oct 02 11:28:08 compute-0 sudo[63912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:08 compute-0 python3.9[63914]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:28:08 compute-0 sudo[63912]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:08 compute-0 sudo[64035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atkjrvhahsskffqnriiqaulzwqfwvfps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404488.0192974-718-45626247996504/AnsiballZ_copy.py'
Oct 02 11:28:08 compute-0 sudo[64035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:09 compute-0 python3.9[64037]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759404488.0192974-718-45626247996504/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:28:09 compute-0 sudo[64035]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:09 compute-0 sudo[64187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrqiznfayzywonwnagwjxfotgcnmjkbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404489.6199534-763-93263084537622/AnsiballZ_stat.py'
Oct 02 11:28:09 compute-0 sudo[64187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:10 compute-0 python3.9[64189]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:28:10 compute-0 sudo[64187]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:10 compute-0 sudo[64310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptexwcksbrhvhdxpcsizqihczzubxdqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404489.6199534-763-93263084537622/AnsiballZ_copy.py'
Oct 02 11:28:10 compute-0 sudo[64310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:10 compute-0 python3.9[64312]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759404489.6199534-763-93263084537622/.source.yaml _original_basename=.3b1alzuk follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:28:10 compute-0 sudo[64310]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:11 compute-0 sudo[64462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdlvoeqqwuiutwnwzkacayiodxjifznd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404491.0818956-808-17100736659473/AnsiballZ_stat.py'
Oct 02 11:28:11 compute-0 sudo[64462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:11 compute-0 python3.9[64464]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:28:11 compute-0 sudo[64462]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:11 compute-0 sudo[64585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrxzjxsfnqkudytidssliaaevkqfcmap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404491.0818956-808-17100736659473/AnsiballZ_copy.py'
Oct 02 11:28:11 compute-0 sudo[64585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:12 compute-0 python3.9[64587]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759404491.0818956-808-17100736659473/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:28:12 compute-0 sudo[64585]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:12 compute-0 sudo[64737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acctpxrgzkikczapysrhiusyqmbjavav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404492.5038471-853-102456258253789/AnsiballZ_command.py'
Oct 02 11:28:12 compute-0 sudo[64737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:13 compute-0 python3.9[64739]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:28:13 compute-0 sudo[64737]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:13 compute-0 sudo[64890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiwtnlpcptnyezsvdlveihhbgswjbssj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404493.376618-877-13008163774401/AnsiballZ_command.py'
Oct 02 11:28:13 compute-0 sudo[64890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:13 compute-0 python3.9[64892]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:28:14 compute-0 sudo[64890]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:14 compute-0 sudo[65043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixsxezrbuwpalgapgaxbfadvhderuglx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759404494.4207406-901-44800811396811/AnsiballZ_edpm_nftables_from_files.py'
Oct 02 11:28:14 compute-0 sudo[65043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:15 compute-0 python3[65045]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 02 11:28:15 compute-0 sudo[65043]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:15 compute-0 sudo[65195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yasdphekileecvlhtazihftmrxojblsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404495.4080436-925-195215132707337/AnsiballZ_stat.py'
Oct 02 11:28:15 compute-0 sudo[65195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:15 compute-0 python3.9[65197]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:28:15 compute-0 sudo[65195]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:16 compute-0 sudo[65318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxruevoywtmfnlbinlmvrjebrqfmxksy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404495.4080436-925-195215132707337/AnsiballZ_copy.py'
Oct 02 11:28:16 compute-0 sudo[65318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:16 compute-0 python3.9[65320]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759404495.4080436-925-195215132707337/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:28:16 compute-0 sudo[65318]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:17 compute-0 sudo[65470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxskaonpmzyicitnrqesoiuyitkmkalq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404496.8374226-970-201746502667426/AnsiballZ_stat.py'
Oct 02 11:28:17 compute-0 sudo[65470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:17 compute-0 python3.9[65472]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:28:17 compute-0 sudo[65470]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:17 compute-0 sudo[65593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtfhywyaattyhzibmcusbnvwaxfmgyqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404496.8374226-970-201746502667426/AnsiballZ_copy.py'
Oct 02 11:28:17 compute-0 sudo[65593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:17 compute-0 python3.9[65595]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759404496.8374226-970-201746502667426/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:28:17 compute-0 sudo[65593]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:18 compute-0 sudo[65745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrgrwkybyymkhwihrwaywrrmeibfkfds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404498.4471352-1015-62322445769422/AnsiballZ_stat.py'
Oct 02 11:28:18 compute-0 sudo[65745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:18 compute-0 python3.9[65747]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:28:18 compute-0 sudo[65745]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:19 compute-0 sudo[65868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhiangijijpnybkzantvtulvyjbkaiqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404498.4471352-1015-62322445769422/AnsiballZ_copy.py'
Oct 02 11:28:19 compute-0 sudo[65868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:19 compute-0 python3.9[65870]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759404498.4471352-1015-62322445769422/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:28:19 compute-0 sudo[65868]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:20 compute-0 sudo[66020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rifalglziwqvpchptziyfnfhihcnoykf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404499.9020932-1060-40731757644714/AnsiballZ_stat.py'
Oct 02 11:28:20 compute-0 sudo[66020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:20 compute-0 python3.9[66022]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:28:20 compute-0 sudo[66020]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:20 compute-0 sudo[66143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eonawqiqiaakcphfhfjqacbmmvexhvoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404499.9020932-1060-40731757644714/AnsiballZ_copy.py'
Oct 02 11:28:20 compute-0 sudo[66143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:20 compute-0 python3.9[66145]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759404499.9020932-1060-40731757644714/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:28:20 compute-0 sudo[66143]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:22 compute-0 sudo[66295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqmqqkmzralukjbmwwcfegxhfiaiaxrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404501.7547302-1105-126150731356130/AnsiballZ_stat.py'
Oct 02 11:28:22 compute-0 sudo[66295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:22 compute-0 python3.9[66297]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:28:22 compute-0 sudo[66295]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:22 compute-0 sudo[66418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyxwgkbvqmnuhcfuuyozvddnmsjfkwpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404501.7547302-1105-126150731356130/AnsiballZ_copy.py'
Oct 02 11:28:22 compute-0 sudo[66418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:22 compute-0 python3.9[66420]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759404501.7547302-1105-126150731356130/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:28:22 compute-0 sudo[66418]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:23 compute-0 sudo[66570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnuylshktdhwxneynlfkagepctobgncj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404503.3806486-1150-9453887992355/AnsiballZ_file.py'
Oct 02 11:28:23 compute-0 sudo[66570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:23 compute-0 python3.9[66572]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:28:23 compute-0 sudo[66570]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:24 compute-0 chronyd[54395]: Selected source 167.160.187.12 (pool.ntp.org)
Oct 02 11:28:24 compute-0 sudo[66722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuashufdqjuivfjwgboqqrqmjquuiqtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404504.279756-1174-146035778116101/AnsiballZ_command.py'
Oct 02 11:28:24 compute-0 sudo[66722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:24 compute-0 python3.9[66724]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:28:24 compute-0 sudo[66722]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:25 compute-0 sudo[66881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yavsltjvhzbqxhkfkmmscilojkwjsfnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404505.1248696-1198-186665805148721/AnsiballZ_blockinfile.py'
Oct 02 11:28:25 compute-0 sudo[66881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:25 compute-0 python3.9[66883]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:28:25 compute-0 sudo[66881]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:26 compute-0 sudo[67034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrqzbknsmmpcdbdsixneuljidkjelwjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404506.0571113-1225-126873040757328/AnsiballZ_file.py'
Oct 02 11:28:26 compute-0 sudo[67034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:26 compute-0 python3.9[67036]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:28:26 compute-0 sudo[67034]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:26 compute-0 sudo[67186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syxockolmqpzbjrelmfmqthfuwemzidt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404506.6875868-1225-5551898625362/AnsiballZ_file.py'
Oct 02 11:28:26 compute-0 sudo[67186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:27 compute-0 python3.9[67188]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:28:27 compute-0 sudo[67186]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:28 compute-0 sudo[67338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmqtwlkfzjgbnmlbkkzwobfdgqqcoqno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404507.7383614-1270-174930866113748/AnsiballZ_mount.py'
Oct 02 11:28:28 compute-0 sudo[67338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:28 compute-0 python3.9[67340]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 02 11:28:28 compute-0 sudo[67338]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:28 compute-0 sudo[67491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxlucwkbrkkyjsvibwfyzijgdcskaonz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404508.5766776-1270-90391319343981/AnsiballZ_mount.py'
Oct 02 11:28:28 compute-0 sudo[67491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:29 compute-0 python3.9[67493]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 02 11:28:29 compute-0 sudo[67491]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:29 compute-0 sshd-session[59660]: Connection closed by 192.168.122.30 port 56380
Oct 02 11:28:29 compute-0 sshd-session[59657]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:28:29 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Oct 02 11:28:29 compute-0 systemd[1]: session-15.scope: Consumed 29.343s CPU time.
Oct 02 11:28:29 compute-0 systemd-logind[789]: Session 15 logged out. Waiting for processes to exit.
Oct 02 11:28:29 compute-0 systemd-logind[789]: Removed session 15.
Oct 02 11:28:35 compute-0 sshd-session[67519]: Accepted publickey for zuul from 192.168.122.30 port 49250 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 11:28:35 compute-0 systemd-logind[789]: New session 16 of user zuul.
Oct 02 11:28:35 compute-0 systemd[1]: Started Session 16 of User zuul.
Oct 02 11:28:35 compute-0 sshd-session[67519]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:28:35 compute-0 sshd-session[67523]: Invalid user vyos from 167.99.55.34 port 33632
Oct 02 11:28:35 compute-0 sshd-session[67523]: Connection closed by invalid user vyos 167.99.55.34 port 33632 [preauth]
Oct 02 11:28:35 compute-0 sudo[67674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwslhtgcdsekmmjpzxdeifoutzcqzomd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404515.1252718-23-216277183473514/AnsiballZ_tempfile.py'
Oct 02 11:28:35 compute-0 sudo[67674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:35 compute-0 python3.9[67676]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 02 11:28:35 compute-0 sudo[67674]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:36 compute-0 sudo[67826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlurwzrsfxkagjdzwyvwqwzipsdtpvpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404516.017359-59-126140957249781/AnsiballZ_stat.py'
Oct 02 11:28:36 compute-0 sudo[67826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:36 compute-0 python3.9[67828]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:28:36 compute-0 sudo[67826]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:36 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 02 11:28:37 compute-0 sudo[67980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjhepezantyuakcgtdzycgjzsgyzrtwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404516.991529-89-171819502268034/AnsiballZ_setup.py'
Oct 02 11:28:37 compute-0 sudo[67980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:37 compute-0 python3.9[67982]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:28:37 compute-0 sudo[67980]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:38 compute-0 sudo[68132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkamuofayfyxejlrywdthbujnewpmuxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404518.1993868-114-157861193881831/AnsiballZ_blockinfile.py'
Oct 02 11:28:38 compute-0 sudo[68132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:38 compute-0 python3.9[68134]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpTq9G8ymc65djWd0YMUA1/KMKQbBxw7LoyOCyAnPotUx6UyYfBtjYX5I4TzqzEugao1w+4AHDZ5XKSwr8sv9kaSGm0ERmNxz22+5cmKwWxcvUfNGQQXbk6gk6z5p0qpH/Ue9e19xDUC+RDUMGcwrysoGQ05aVcGDaEmNUxvYjj0UUfs45KX/pHPk5xQ4c0WjiL0BfzPJmphY2PAj6O9b4iFA3HjIJgvQ3+i3jEOkvA1FsXm5s7O1/wEjqwsdfKPlX0LUuCqXyxI4uhWY16Ofi89lEtsdQRwFyoZcDMJUDHMH8oJSopUNwwMEe7UBD1MHJSIzrd6NUGnvRjhqH6dE/IoT2X3f4JN/Six+J9ayDqiIkd1QNsJzPBr6G2Lj/dQbUusb3nXhPk5TXKMOXm5i+J940nYQv8/Y9rf2H1qltGaDEOS95ktKpcL6EVplOsQand/Qmb/ShKbiAo2dr3YC3v/FFE2AAj+0Dnh4xob14bhivkYHDhIF0zyzcVGhHZXc=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICufzWCrq7lQCIqxq8UNP+WfGRQD+uOEPLr+ZneqofrM
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHP494uEOdMq07v1W25s7bKFki4bQuHkde7xWzYJuUT44SD4tSCrPbQiOkLCqtg9H5yxKL0Ovnl22PYLf1HMKAs=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2ry6wCZyHJmZsI4Z83U5DYzCaQhL5JmDbykEZokepEcnLFJt20bbnTU0eQzXJylkCgp7rhmpZo7V7qVNnZUMI3aLUHK30Yr5jzQVofHBRg6ZnAIq1MAwqwGH1s6vfNo+//zth4OMHvolMSEO6zSmOWeAsuHM2DTEJ6IdRasKfhOCc3oI/Tcf5vOUyVGg/BH+fFOHKzPiyJNXozsvw2u4ppfdkMJvVC9w2oTNHMIGcDxSsx0zD2bLdYe5l23tFIOaBM149ktg5KPPsKYyQFymOi5qJHHnf9027MqZ4N7Z9SYuQrqt2nY4C/XmaVFOmUIFNNMZ5qMWDsc38V9cHCgurSaMsQ4em1srXr9nzADLh9bw4WksIRfrtt3twMp7FG9fMsw8rdmFt0+4/IdHr/3wCmHeF07qp10kJPXa5z9dApoIKiQlbIl+UCzlaN5tHD6vb4q0MyhqAtU4mSA1zGz67c2lLSGbF4FTgU9yza15FZjHzQ0ArNu/1KIheA8nrpkE=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHwAEaDivXDvJkCgJw1MVhYQArg6qfdDb4SKBZRPdoOc
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNMghqQyWdigdn5yyuBSIQ3tHLq/tZwQO222aoRtckuDI9Ml6snE/xKJ7YWmTvRTsqj2tqCqXIllFFfreYY7Apw=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFg5rufAFy1itLjBBGlAJUDsQsaZUavZeI3stNJBLolkBBMB4sBpwAvQFbu2iUhtVavUC7q9xD2LsX0DVBu9DCaQn6tETqUUvMQqzvmaXd34gwo5fH6vo+bjqVdZEih0pIVI1O2OfOUvnv2MFLdKx8MWLQd54beGjWQsC3xCnYVuh0W/aAQtRC2EA77nBo+r40u5V3HXOhdmUbFNvL0r6I8FwP4IvbKC5jkBTtqIzewh+/cyJrURCh0aCpeUjBqNqw3ADhtuR2h5n3ioq+IwPXbhHViJUWQyJ5XKmlSzupEEYA+RV8i1Y3eHJK2RuYlCXkpRP3MEsyBxmISTPhVdQwfxClvyi/mTQkl6k5XFGyZher7KbE6lx4qzp8iCOyOWkw32N3tG0AlnOtPI5HJw8uKbwWl2Apb7RncDQ5fpNOKNFcB1sg61g2Vvew7xJs62OxhkOTiSkEEUYoFfXAqNLiH8gC0+Go12qYleZKbfzL00BDT2boQ2UxYn2rWK7YifU=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB29M+5Yr1BRNmm2RoLe921umFtraZRFTbdptrBdgsAV
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAv7vUjfSNyE5eIqsBh1jfLF/N1YKOXT7KtCRIxAQ1i9+ljB9j4j/dQgL6TGk3m+hQRPyAVxTDwUpeBxHWIpFjU=
                                             create=True mode=0644 path=/tmp/ansible.i5x8yvo6 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:28:38 compute-0 sudo[68132]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:39 compute-0 sudo[68284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdneocoddyckmkwgttpldvufdbvoibii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404519.0824122-138-155286305509924/AnsiballZ_command.py'
Oct 02 11:28:39 compute-0 sudo[68284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:39 compute-0 python3.9[68286]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.i5x8yvo6' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:28:39 compute-0 sudo[68284]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:40 compute-0 sudo[68438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzebcrdywgwszaqahxmahbnvvctaxguu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404520.01486-162-84818338023412/AnsiballZ_file.py'
Oct 02 11:28:40 compute-0 sudo[68438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:28:40 compute-0 python3.9[68440]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.i5x8yvo6 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:28:40 compute-0 sudo[68438]: pam_unix(sudo:session): session closed for user root
Oct 02 11:28:41 compute-0 sshd-session[67522]: Connection closed by 192.168.122.30 port 49250
Oct 02 11:28:41 compute-0 sshd-session[67519]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:28:41 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Oct 02 11:28:41 compute-0 systemd[1]: session-16.scope: Consumed 3.073s CPU time.
Oct 02 11:28:41 compute-0 systemd-logind[789]: Session 16 logged out. Waiting for processes to exit.
Oct 02 11:28:41 compute-0 systemd-logind[789]: Removed session 16.
Oct 02 11:28:50 compute-0 sshd-session[68466]: error: kex_exchange_identification: read: Connection reset by peer
Oct 02 11:28:50 compute-0 sshd-session[68466]: Connection reset by 45.140.17.97 port 5377
Oct 02 11:28:57 compute-0 sshd-session[68467]: Accepted publickey for zuul from 192.168.122.30 port 53958 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 11:28:57 compute-0 systemd-logind[789]: New session 17 of user zuul.
Oct 02 11:28:57 compute-0 systemd[1]: Started Session 17 of User zuul.
Oct 02 11:28:57 compute-0 sshd-session[68467]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:28:59 compute-0 python3.9[68620]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:29:00 compute-0 sudo[68774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ranpdxlgoimocpeegokqtwzyofxxatly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404539.599611-61-181412920189968/AnsiballZ_systemd.py'
Oct 02 11:29:00 compute-0 sudo[68774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:00 compute-0 python3.9[68776]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 02 11:29:00 compute-0 sudo[68774]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:01 compute-0 sudo[68928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nywotnezqrnifsgcajahzlkgrsbeyjsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404541.0147007-85-275933841493693/AnsiballZ_systemd.py'
Oct 02 11:29:01 compute-0 sudo[68928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:01 compute-0 python3.9[68930]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 11:29:01 compute-0 sudo[68928]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:02 compute-0 sudo[69081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqiumwskyhfttfirkrvrjsdjplxplsoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404541.9599144-112-155685523626596/AnsiballZ_command.py'
Oct 02 11:29:02 compute-0 sudo[69081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:02 compute-0 python3.9[69083]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:29:02 compute-0 sudo[69081]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:03 compute-0 sudo[69234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trqijxfwwzdeimpwbvvnotydhcyfdgfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404542.8540933-136-15720498115083/AnsiballZ_stat.py'
Oct 02 11:29:03 compute-0 sudo[69234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:03 compute-0 python3.9[69236]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:29:03 compute-0 sudo[69234]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:04 compute-0 sudo[69388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmcoklgepvvynudwxkxvyvfkjizhnyja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404543.8446703-160-153135104724754/AnsiballZ_command.py'
Oct 02 11:29:04 compute-0 sudo[69388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:04 compute-0 python3.9[69390]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:29:04 compute-0 sudo[69388]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:05 compute-0 sudo[69543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbswlolkfmejpfmjfglwwnyfxcmqdgas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404544.7278366-184-21806556654268/AnsiballZ_file.py'
Oct 02 11:29:05 compute-0 sudo[69543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:05 compute-0 python3.9[69545]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:29:05 compute-0 sudo[69543]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:05 compute-0 sshd-session[68470]: Connection closed by 192.168.122.30 port 53958
Oct 02 11:29:05 compute-0 sshd-session[68467]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:29:05 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Oct 02 11:29:05 compute-0 systemd[1]: session-17.scope: Consumed 4.247s CPU time.
Oct 02 11:29:05 compute-0 systemd-logind[789]: Session 17 logged out. Waiting for processes to exit.
Oct 02 11:29:05 compute-0 systemd-logind[789]: Removed session 17.
Oct 02 11:29:13 compute-0 sshd-session[69570]: Accepted publickey for zuul from 192.168.122.30 port 49192 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 11:29:13 compute-0 systemd-logind[789]: New session 18 of user zuul.
Oct 02 11:29:13 compute-0 systemd[1]: Started Session 18 of User zuul.
Oct 02 11:29:13 compute-0 sshd-session[69570]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:29:15 compute-0 python3.9[69723]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:29:16 compute-0 sudo[69877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwzelkadpwxgwviquaiumujehkkmppyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404555.9046884-67-81544071092735/AnsiballZ_setup.py'
Oct 02 11:29:16 compute-0 sudo[69877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:16 compute-0 python3.9[69879]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:29:16 compute-0 sudo[69877]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:17 compute-0 sudo[69961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmzcauwrbcxyxtsfxrblgyasnyvlcqae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404555.9046884-67-81544071092735/AnsiballZ_dnf.py'
Oct 02 11:29:17 compute-0 sudo[69961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:17 compute-0 python3.9[69963]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 02 11:29:18 compute-0 sudo[69961]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:19 compute-0 python3.9[70114]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:29:21 compute-0 python3.9[70265]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 11:29:22 compute-0 python3.9[70415]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:29:22 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 11:29:22 compute-0 python3.9[70566]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:29:23 compute-0 sshd-session[69573]: Connection closed by 192.168.122.30 port 49192
Oct 02 11:29:23 compute-0 sshd-session[69570]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:29:23 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Oct 02 11:29:23 compute-0 systemd[1]: session-18.scope: Consumed 5.798s CPU time.
Oct 02 11:29:23 compute-0 systemd-logind[789]: Session 18 logged out. Waiting for processes to exit.
Oct 02 11:29:23 compute-0 systemd-logind[789]: Removed session 18.
Oct 02 11:29:32 compute-0 sshd-session[70591]: Accepted publickey for zuul from 38.129.56.219 port 49246 ssh2: RSA SHA256:eDDJfQsywiNd5Pkwcn5EDkt3/zs20cKnlJwgA2GMDKU
Oct 02 11:29:32 compute-0 systemd-logind[789]: New session 19 of user zuul.
Oct 02 11:29:32 compute-0 systemd[1]: Started Session 19 of User zuul.
Oct 02 11:29:32 compute-0 sshd-session[70591]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:29:33 compute-0 sudo[70667]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccjfsbmpnwplwrqnizdddjyqcxbsbsiv ; /usr/bin/python3'
Oct 02 11:29:33 compute-0 sudo[70667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:33 compute-0 useradd[70671]: new group: name=ceph-admin, GID=42478
Oct 02 11:29:33 compute-0 useradd[70671]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Oct 02 11:29:33 compute-0 sudo[70667]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:33 compute-0 sudo[70753]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjqjwuilbgxthyppmfotagrjdaohqhtm ; /usr/bin/python3'
Oct 02 11:29:33 compute-0 sudo[70753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:34 compute-0 sudo[70753]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:34 compute-0 sudo[70826]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbupaecphgmhzcpntycktqcqrpqbyzks ; /usr/bin/python3'
Oct 02 11:29:34 compute-0 sudo[70826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:34 compute-0 sudo[70826]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:35 compute-0 sudo[70876]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlaskhuxbzifvqclipvornjdzqcizgol ; /usr/bin/python3'
Oct 02 11:29:35 compute-0 sudo[70876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:35 compute-0 sudo[70876]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:35 compute-0 sudo[70902]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kussdfhempoqpjsspvbnrdxvlddjbwxs ; /usr/bin/python3'
Oct 02 11:29:35 compute-0 sudo[70902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:35 compute-0 sudo[70902]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:36 compute-0 sudo[70928]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhlhlxzgnububxxdnxpfalyuzxszptah ; /usr/bin/python3'
Oct 02 11:29:36 compute-0 sudo[70928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:36 compute-0 sudo[70928]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:36 compute-0 sudo[70954]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osfagoilejgkqaafrdxmeovqaxtfvzsx ; /usr/bin/python3'
Oct 02 11:29:36 compute-0 sudo[70954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:36 compute-0 sudo[70954]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:37 compute-0 sudo[71032]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmkfbfttfqutwbjtusbbgegyciobaypv ; /usr/bin/python3'
Oct 02 11:29:37 compute-0 sudo[71032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:37 compute-0 sudo[71032]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:37 compute-0 sudo[71105]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jklvimyzbbqtcfpihdwwfsccskpoifqd ; /usr/bin/python3'
Oct 02 11:29:37 compute-0 sudo[71105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:37 compute-0 sudo[71105]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:38 compute-0 sudo[71207]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvkkgyjmobzcqtswgoecraeonyynrksj ; /usr/bin/python3'
Oct 02 11:29:38 compute-0 sudo[71207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:38 compute-0 sudo[71207]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:38 compute-0 sudo[71280]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdkgeffyxlrirogasgmsbvwcuwholnzi ; /usr/bin/python3'
Oct 02 11:29:38 compute-0 sudo[71280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:38 compute-0 sudo[71280]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:39 compute-0 sudo[71330]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-macviqtwhqhbkgppjsgajphxbwhuwdvn ; /usr/bin/python3'
Oct 02 11:29:39 compute-0 sudo[71330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:39 compute-0 python3[71332]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:29:40 compute-0 sudo[71330]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:41 compute-0 sudo[71426]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbfakcrwtopioxhfjvaputfcecwfpvnx ; /usr/bin/python3'
Oct 02 11:29:41 compute-0 sudo[71426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:41 compute-0 python3[71428]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 02 11:29:42 compute-0 sudo[71426]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:42 compute-0 sudo[71453]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arkzxisqgqogfjuxezdotdkawtngesnv ; /usr/bin/python3'
Oct 02 11:29:42 compute-0 sudo[71453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:43 compute-0 python3[71455]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:29:43 compute-0 sudo[71453]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:43 compute-0 sudo[71479]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdzugblcowgzxxynpaynyjqcrgieupri ; /usr/bin/python3'
Oct 02 11:29:43 compute-0 sudo[71479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:43 compute-0 python3[71481]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:29:43 compute-0 kernel: loop: module loaded
Oct 02 11:29:43 compute-0 kernel: loop3: detected capacity change from 0 to 14680064
Oct 02 11:29:43 compute-0 sudo[71479]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:43 compute-0 sudo[71514]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whuqafrdskhjufwnfcdebwgmkttedmrv ; /usr/bin/python3'
Oct 02 11:29:43 compute-0 sudo[71514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:43 compute-0 python3[71516]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:29:43 compute-0 lvm[71519]: PV /dev/loop3 not used.
Oct 02 11:29:44 compute-0 lvm[71521]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 02 11:29:44 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct 02 11:29:44 compute-0 lvm[71523]:   0 logical volume(s) in volume group "ceph_vg0" now active
Oct 02 11:29:44 compute-0 lvm[71524]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 02 11:29:44 compute-0 lvm[71524]: VG ceph_vg0 finished
Oct 02 11:29:44 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Oct 02 11:29:44 compute-0 lvm[71532]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 02 11:29:44 compute-0 lvm[71532]: VG ceph_vg0 finished
Oct 02 11:29:44 compute-0 sudo[71514]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:44 compute-0 sudo[71608]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlpcqsbzjasfgxpiksfjchmpnkcqbyhu ; /usr/bin/python3'
Oct 02 11:29:44 compute-0 sudo[71608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:45 compute-0 python3[71610]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:29:45 compute-0 sudo[71608]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:45 compute-0 sudo[71681]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uavatlempuhvzjmlqhtbladecrdnnmhj ; /usr/bin/python3'
Oct 02 11:29:45 compute-0 sudo[71681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:45 compute-0 python3[71683]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759404584.8136103-33529-1851154263533/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:29:45 compute-0 sudo[71681]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:46 compute-0 sudo[71731]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuuwwebdncvilxwqpedmwdjusmjwxeuk ; /usr/bin/python3'
Oct 02 11:29:46 compute-0 sudo[71731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:46 compute-0 python3[71733]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:29:46 compute-0 systemd[1]: Reloading.
Oct 02 11:29:46 compute-0 systemd-rc-local-generator[71762]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:29:46 compute-0 systemd-sysv-generator[71765]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:29:46 compute-0 systemd[1]: Starting Ceph OSD losetup...
Oct 02 11:29:46 compute-0 bash[71773]: /dev/loop3: [64513]:4329715 (/var/lib/ceph-osd-0.img)
Oct 02 11:29:46 compute-0 systemd[1]: Finished Ceph OSD losetup.
Oct 02 11:29:46 compute-0 sudo[71731]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:46 compute-0 lvm[71775]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 02 11:29:46 compute-0 lvm[71775]: VG ceph_vg0 finished
Oct 02 11:29:48 compute-0 python3[71799]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:29:51 compute-0 sudo[71890]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpslfyhztjolgxkvbxseiqkzyxvheroc ; /usr/bin/python3'
Oct 02 11:29:51 compute-0 sudo[71890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:51 compute-0 python3[71892]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 02 11:29:53 compute-0 groupadd[71898]: group added to /etc/group: name=cephadm, GID=992
Oct 02 11:29:53 compute-0 groupadd[71898]: group added to /etc/gshadow: name=cephadm
Oct 02 11:29:53 compute-0 groupadd[71898]: new group: name=cephadm, GID=992
Oct 02 11:29:53 compute-0 useradd[71905]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Oct 02 11:29:54 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 11:29:54 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 02 11:29:55 compute-0 sudo[71890]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:55 compute-0 sudo[72005]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mldrjwaxuwwzzytwtcbucoxwuwjpcgcy ; /usr/bin/python3'
Oct 02 11:29:55 compute-0 sudo[72005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:55 compute-0 python3[72007]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:29:55 compute-0 sudo[72005]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:55 compute-0 sudo[72033]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxpqtpzkkuybgytvjxrcxjuydhmexypv ; /usr/bin/python3'
Oct 02 11:29:55 compute-0 sudo[72033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:56 compute-0 python3[72035]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:29:56 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:29:56 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:29:56 compute-0 sudo[72033]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:57 compute-0 sudo[72097]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nioeppzkciuyzhogdespqeyigoftqvcm ; /usr/bin/python3'
Oct 02 11:29:57 compute-0 sudo[72097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:57 compute-0 python3[72099]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:29:57 compute-0 sudo[72097]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:57 compute-0 sudo[72123]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnpkboldxgvozwroedzkbdlqzkdxgmgj ; /usr/bin/python3'
Oct 02 11:29:57 compute-0 sudo[72123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:57 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:29:57 compute-0 python3[72125]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:29:57 compute-0 sudo[72123]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:58 compute-0 sudo[72201]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otfonigvgbvcfrxrjfpdsifzlunndanc ; /usr/bin/python3'
Oct 02 11:29:58 compute-0 sudo[72201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:58 compute-0 python3[72203]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:29:58 compute-0 sudo[72201]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:58 compute-0 sudo[72274]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyzwgnutcrnmobbzyifxnmknyyuxrunf ; /usr/bin/python3'
Oct 02 11:29:58 compute-0 sudo[72274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:29:58 compute-0 python3[72276]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759404598.1876037-33720-36014728600303/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:29:58 compute-0 sudo[72274]: pam_unix(sudo:session): session closed for user root
Oct 02 11:29:59 compute-0 sudo[72376]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ormvnamvruqmsgdmzidmriqvcecslqru ; /usr/bin/python3'
Oct 02 11:29:59 compute-0 sudo[72376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:30:00 compute-0 python3[72378]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:30:00 compute-0 sudo[72376]: pam_unix(sudo:session): session closed for user root
Oct 02 11:30:00 compute-0 sudo[72449]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgdnewityvtcgbfvufplicnmuyxlsmrb ; /usr/bin/python3'
Oct 02 11:30:00 compute-0 sudo[72449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:30:00 compute-0 python3[72451]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759404599.4496758-33738-158970494638083/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:30:00 compute-0 sudo[72449]: pam_unix(sudo:session): session closed for user root
Oct 02 11:30:00 compute-0 sudo[72499]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvaqtphlcgqqkkjamgxdtmzqjpohrmzi ; /usr/bin/python3'
Oct 02 11:30:00 compute-0 sudo[72499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:30:00 compute-0 python3[72501]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:30:00 compute-0 sudo[72499]: pam_unix(sudo:session): session closed for user root
Oct 02 11:30:01 compute-0 sudo[72527]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhsppggqnyxgmcnsvnfixjbfdkktakma ; /usr/bin/python3'
Oct 02 11:30:01 compute-0 sudo[72527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:30:01 compute-0 python3[72529]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:30:01 compute-0 sudo[72527]: pam_unix(sudo:session): session closed for user root
Oct 02 11:30:01 compute-0 sudo[72555]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqbzfnrhyxeiyzcmvbftxytocjnmciqh ; /usr/bin/python3'
Oct 02 11:30:01 compute-0 sudo[72555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:30:01 compute-0 python3[72557]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:30:01 compute-0 sudo[72555]: pam_unix(sudo:session): session closed for user root
Oct 02 11:30:01 compute-0 sudo[72583]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utjiiqejiprlhyfchgfpbjvznxrowixe ; /usr/bin/python3'
Oct 02 11:30:01 compute-0 sudo[72583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:30:02 compute-0 python3[72585]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:30:02 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:30:02 compute-0 sshd-session[72602]: Accepted publickey for ceph-admin from 192.168.122.100 port 57244 ssh2: RSA SHA256:hipYxrQVnF7kc7v45q1bE4o8jwYBbChdRkFf71jCzQc
Oct 02 11:30:02 compute-0 systemd-logind[789]: New session 20 of user ceph-admin.
Oct 02 11:30:02 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Oct 02 11:30:02 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 02 11:30:02 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 02 11:30:02 compute-0 systemd[1]: Starting User Manager for UID 42477...
Oct 02 11:30:02 compute-0 systemd[72606]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:30:02 compute-0 systemd[72606]: Queued start job for default target Main User Target.
Oct 02 11:30:02 compute-0 systemd[72606]: Created slice User Application Slice.
Oct 02 11:30:02 compute-0 systemd[72606]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 02 11:30:02 compute-0 systemd[72606]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 11:30:02 compute-0 systemd[72606]: Reached target Paths.
Oct 02 11:30:02 compute-0 systemd[72606]: Reached target Timers.
Oct 02 11:30:02 compute-0 systemd[72606]: Starting D-Bus User Message Bus Socket...
Oct 02 11:30:02 compute-0 systemd[72606]: Starting Create User's Volatile Files and Directories...
Oct 02 11:30:02 compute-0 systemd[72606]: Listening on D-Bus User Message Bus Socket.
Oct 02 11:30:02 compute-0 systemd[72606]: Reached target Sockets.
Oct 02 11:30:02 compute-0 systemd[72606]: Finished Create User's Volatile Files and Directories.
Oct 02 11:30:02 compute-0 systemd[72606]: Reached target Basic System.
Oct 02 11:30:02 compute-0 systemd[72606]: Reached target Main User Target.
Oct 02 11:30:02 compute-0 systemd[72606]: Startup finished in 116ms.
Oct 02 11:30:02 compute-0 systemd[1]: Started User Manager for UID 42477.
Oct 02 11:30:02 compute-0 systemd[1]: Started Session 20 of User ceph-admin.
Oct 02 11:30:02 compute-0 sshd-session[72602]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:30:02 compute-0 sudo[72623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Oct 02 11:30:02 compute-0 sudo[72623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:30:02 compute-0 sudo[72623]: pam_unix(sudo:session): session closed for user root
Oct 02 11:30:02 compute-0 sshd-session[72622]: Received disconnect from 192.168.122.100 port 57244:11: disconnected by user
Oct 02 11:30:02 compute-0 sshd-session[72622]: Disconnected from user ceph-admin 192.168.122.100 port 57244
Oct 02 11:30:02 compute-0 sshd-session[72602]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 02 11:30:02 compute-0 systemd[1]: session-20.scope: Deactivated successfully.
Oct 02 11:30:02 compute-0 systemd-logind[789]: Session 20 logged out. Waiting for processes to exit.
Oct 02 11:30:02 compute-0 systemd-logind[789]: Removed session 20.
Oct 02 11:30:03 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 11:30:03 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 02 11:30:03 compute-0 systemd[1]: run-r73617242fda744c6b788c78584016534.service: Deactivated successfully.
Oct 02 11:30:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat2583379306-lower\x2dmapped.mount: Deactivated successfully.
Oct 02 11:30:12 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Oct 02 11:30:12 compute-0 systemd[72606]: Activating special unit Exit the Session...
Oct 02 11:30:12 compute-0 systemd[72606]: Stopped target Main User Target.
Oct 02 11:30:12 compute-0 systemd[72606]: Stopped target Basic System.
Oct 02 11:30:12 compute-0 systemd[72606]: Stopped target Paths.
Oct 02 11:30:12 compute-0 systemd[72606]: Stopped target Sockets.
Oct 02 11:30:12 compute-0 systemd[72606]: Stopped target Timers.
Oct 02 11:30:12 compute-0 systemd[72606]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 02 11:30:12 compute-0 systemd[72606]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 11:30:12 compute-0 systemd[72606]: Closed D-Bus User Message Bus Socket.
Oct 02 11:30:12 compute-0 systemd[72606]: Stopped Create User's Volatile Files and Directories.
Oct 02 11:30:12 compute-0 systemd[72606]: Removed slice User Application Slice.
Oct 02 11:30:12 compute-0 systemd[72606]: Reached target Shutdown.
Oct 02 11:30:12 compute-0 systemd[72606]: Finished Exit the Session.
Oct 02 11:30:12 compute-0 systemd[72606]: Reached target Exit the Session.
Oct 02 11:30:12 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Oct 02 11:30:12 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Oct 02 11:30:12 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct 02 11:30:12 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct 02 11:30:12 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct 02 11:30:12 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct 02 11:30:12 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Oct 02 11:30:36 compute-0 podman[72660]: 2025-10-02 11:30:36.966293442 +0000 UTC m=+34.285276186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:30:36 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:30:37 compute-0 podman[72721]: 2025-10-02 11:30:37.045207784 +0000 UTC m=+0.056514590 container create 7f18e8ecae45a8e7578e60c7a6de4c194345e2a922facf0e61d52b3944c3dade (image=quay.io/ceph/ceph:v18, name=vigilant_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 11:30:37 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct 02 11:30:37 compute-0 systemd[1]: Started libpod-conmon-7f18e8ecae45a8e7578e60c7a6de4c194345e2a922facf0e61d52b3944c3dade.scope.
Oct 02 11:30:37 compute-0 podman[72721]: 2025-10-02 11:30:37.015226387 +0000 UTC m=+0.026533243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:30:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:30:37 compute-0 podman[72721]: 2025-10-02 11:30:37.194320737 +0000 UTC m=+0.205627633 container init 7f18e8ecae45a8e7578e60c7a6de4c194345e2a922facf0e61d52b3944c3dade (image=quay.io/ceph/ceph:v18, name=vigilant_shirley, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 11:30:37 compute-0 podman[72721]: 2025-10-02 11:30:37.201396059 +0000 UTC m=+0.212702865 container start 7f18e8ecae45a8e7578e60c7a6de4c194345e2a922facf0e61d52b3944c3dade (image=quay.io/ceph/ceph:v18, name=vigilant_shirley, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 11:30:37 compute-0 podman[72721]: 2025-10-02 11:30:37.359047737 +0000 UTC m=+0.370354553 container attach 7f18e8ecae45a8e7578e60c7a6de4c194345e2a922facf0e61d52b3944c3dade (image=quay.io/ceph/ceph:v18, name=vigilant_shirley, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:30:37 compute-0 vigilant_shirley[72737]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Oct 02 11:30:37 compute-0 systemd[1]: libpod-7f18e8ecae45a8e7578e60c7a6de4c194345e2a922facf0e61d52b3944c3dade.scope: Deactivated successfully.
Oct 02 11:30:37 compute-0 podman[72721]: 2025-10-02 11:30:37.512665759 +0000 UTC m=+0.523972555 container died 7f18e8ecae45a8e7578e60c7a6de4c194345e2a922facf0e61d52b3944c3dade (image=quay.io/ceph/ceph:v18, name=vigilant_shirley, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 11:30:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-519c88d9e25f27224feab4ec18c38601c038650e24e8c7b479d75b776eb57939-merged.mount: Deactivated successfully.
Oct 02 11:30:38 compute-0 podman[72721]: 2025-10-02 11:30:38.032994095 +0000 UTC m=+1.044300901 container remove 7f18e8ecae45a8e7578e60c7a6de4c194345e2a922facf0e61d52b3944c3dade (image=quay.io/ceph/ceph:v18, name=vigilant_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:30:38 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:30:38 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:30:38 compute-0 systemd[1]: libpod-conmon-7f18e8ecae45a8e7578e60c7a6de4c194345e2a922facf0e61d52b3944c3dade.scope: Deactivated successfully.
Oct 02 11:30:38 compute-0 podman[72755]: 2025-10-02 11:30:38.072160474 +0000 UTC m=+0.021151254 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:30:38 compute-0 podman[72755]: 2025-10-02 11:30:38.233638836 +0000 UTC m=+0.182629586 container create c46e3f7d36c5b54f908f46dc755af0bebb7069783ece93a0445e762d25a09af2 (image=quay.io/ceph/ceph:v18, name=charming_hypatia, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 11:30:38 compute-0 systemd[1]: Started libpod-conmon-c46e3f7d36c5b54f908f46dc755af0bebb7069783ece93a0445e762d25a09af2.scope.
Oct 02 11:30:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:30:38 compute-0 podman[72755]: 2025-10-02 11:30:38.399390172 +0000 UTC m=+0.348381002 container init c46e3f7d36c5b54f908f46dc755af0bebb7069783ece93a0445e762d25a09af2 (image=quay.io/ceph/ceph:v18, name=charming_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 11:30:38 compute-0 podman[72755]: 2025-10-02 11:30:38.405180542 +0000 UTC m=+0.354171332 container start c46e3f7d36c5b54f908f46dc755af0bebb7069783ece93a0445e762d25a09af2 (image=quay.io/ceph/ceph:v18, name=charming_hypatia, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 11:30:38 compute-0 charming_hypatia[72772]: 167 167
Oct 02 11:30:38 compute-0 systemd[1]: libpod-c46e3f7d36c5b54f908f46dc755af0bebb7069783ece93a0445e762d25a09af2.scope: Deactivated successfully.
Oct 02 11:30:38 compute-0 podman[72755]: 2025-10-02 11:30:38.46867517 +0000 UTC m=+0.417665910 container attach c46e3f7d36c5b54f908f46dc755af0bebb7069783ece93a0445e762d25a09af2 (image=quay.io/ceph/ceph:v18, name=charming_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:30:38 compute-0 podman[72755]: 2025-10-02 11:30:38.469330296 +0000 UTC m=+0.418321046 container died c46e3f7d36c5b54f908f46dc755af0bebb7069783ece93a0445e762d25a09af2 (image=quay.io/ceph/ceph:v18, name=charming_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:30:38 compute-0 podman[72755]: 2025-10-02 11:30:38.623950511 +0000 UTC m=+0.572941281 container remove c46e3f7d36c5b54f908f46dc755af0bebb7069783ece93a0445e762d25a09af2 (image=quay.io/ceph/ceph:v18, name=charming_hypatia, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 11:30:38 compute-0 systemd[1]: libpod-conmon-c46e3f7d36c5b54f908f46dc755af0bebb7069783ece93a0445e762d25a09af2.scope: Deactivated successfully.
Oct 02 11:30:38 compute-0 podman[72791]: 2025-10-02 11:30:38.694005629 +0000 UTC m=+0.050496415 container create e9a947cb3158289357743154f2ff041531af5368234b54915022b642903ed616 (image=quay.io/ceph/ceph:v18, name=nostalgic_lehmann, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 11:30:38 compute-0 podman[72791]: 2025-10-02 11:30:38.663142581 +0000 UTC m=+0.019633397 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:30:38 compute-0 systemd[1]: Started libpod-conmon-e9a947cb3158289357743154f2ff041531af5368234b54915022b642903ed616.scope.
Oct 02 11:30:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:30:38 compute-0 podman[72791]: 2025-10-02 11:30:38.808741738 +0000 UTC m=+0.165232544 container init e9a947cb3158289357743154f2ff041531af5368234b54915022b642903ed616 (image=quay.io/ceph/ceph:v18, name=nostalgic_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 11:30:38 compute-0 podman[72791]: 2025-10-02 11:30:38.814403755 +0000 UTC m=+0.170894541 container start e9a947cb3158289357743154f2ff041531af5368234b54915022b642903ed616 (image=quay.io/ceph/ceph:v18, name=nostalgic_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:30:38 compute-0 nostalgic_lehmann[72807]: AQBeYt5oUrO2MRAAwz/eeOaHlSn+K9hMdgrOEA==
Oct 02 11:30:38 compute-0 systemd[1]: libpod-e9a947cb3158289357743154f2ff041531af5368234b54915022b642903ed616.scope: Deactivated successfully.
Oct 02 11:30:38 compute-0 podman[72791]: 2025-10-02 11:30:38.840355965 +0000 UTC m=+0.196846761 container attach e9a947cb3158289357743154f2ff041531af5368234b54915022b642903ed616 (image=quay.io/ceph/ceph:v18, name=nostalgic_lehmann, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 11:30:38 compute-0 podman[72791]: 2025-10-02 11:30:38.841157465 +0000 UTC m=+0.197648281 container died e9a947cb3158289357743154f2ff041531af5368234b54915022b642903ed616 (image=quay.io/ceph/ceph:v18, name=nostalgic_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Oct 02 11:30:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-f108b8980f54379f7d5367e04892bec160cced3e63d20a91ca8bc98c5fc4926b-merged.mount: Deactivated successfully.
Oct 02 11:30:39 compute-0 podman[72791]: 2025-10-02 11:30:39.042905952 +0000 UTC m=+0.399396768 container remove e9a947cb3158289357743154f2ff041531af5368234b54915022b642903ed616 (image=quay.io/ceph/ceph:v18, name=nostalgic_lehmann, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:30:39 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:30:39 compute-0 systemd[1]: libpod-conmon-e9a947cb3158289357743154f2ff041531af5368234b54915022b642903ed616.scope: Deactivated successfully.
Oct 02 11:30:39 compute-0 podman[72827]: 2025-10-02 11:30:39.118013211 +0000 UTC m=+0.057615787 container create eadebe20dc4248fa38bf7b4c9225945a5a992c67d99c9b358acc8378e6b4edf9 (image=quay.io/ceph/ceph:v18, name=suspicious_bartik, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:30:39 compute-0 podman[72827]: 2025-10-02 11:30:39.082138143 +0000 UTC m=+0.021740739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:30:39 compute-0 systemd[1]: Started libpod-conmon-eadebe20dc4248fa38bf7b4c9225945a5a992c67d99c9b358acc8378e6b4edf9.scope.
Oct 02 11:30:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:30:39 compute-0 podman[72827]: 2025-10-02 11:30:39.219528111 +0000 UTC m=+0.159130697 container init eadebe20dc4248fa38bf7b4c9225945a5a992c67d99c9b358acc8378e6b4edf9 (image=quay.io/ceph/ceph:v18, name=suspicious_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 11:30:39 compute-0 podman[72827]: 2025-10-02 11:30:39.226396697 +0000 UTC m=+0.165999263 container start eadebe20dc4248fa38bf7b4c9225945a5a992c67d99c9b358acc8378e6b4edf9 (image=quay.io/ceph/ceph:v18, name=suspicious_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 11:30:39 compute-0 podman[72827]: 2025-10-02 11:30:39.238666214 +0000 UTC m=+0.178268780 container attach eadebe20dc4248fa38bf7b4c9225945a5a992c67d99c9b358acc8378e6b4edf9 (image=quay.io/ceph/ceph:v18, name=suspicious_bartik, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 11:30:39 compute-0 suspicious_bartik[72844]: AQBfYt5oExuMDhAA7f/HRR8IOARtDO/rN1/E7Q==
Oct 02 11:30:39 compute-0 systemd[1]: libpod-eadebe20dc4248fa38bf7b4c9225945a5a992c67d99c9b358acc8378e6b4edf9.scope: Deactivated successfully.
Oct 02 11:30:39 compute-0 podman[72827]: 2025-10-02 11:30:39.247609761 +0000 UTC m=+0.187212327 container died eadebe20dc4248fa38bf7b4c9225945a5a992c67d99c9b358acc8378e6b4edf9 (image=quay.io/ceph/ceph:v18, name=suspicious_bartik, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:30:39 compute-0 podman[72827]: 2025-10-02 11:30:39.649058697 +0000 UTC m=+0.588661263 container remove eadebe20dc4248fa38bf7b4c9225945a5a992c67d99c9b358acc8378e6b4edf9 (image=quay.io/ceph/ceph:v18, name=suspicious_bartik, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:30:39 compute-0 podman[72863]: 2025-10-02 11:30:39.745996726 +0000 UTC m=+0.077660073 container create 5b985d62172a951461cc328e95d1062d6a353d6ad80cdc636a64175c7605a7e9 (image=quay.io/ceph/ceph:v18, name=interesting_lalande, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 11:30:39 compute-0 podman[72863]: 2025-10-02 11:30:39.688364689 +0000 UTC m=+0.020028066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:30:39 compute-0 systemd[1]: Started libpod-conmon-5b985d62172a951461cc328e95d1062d6a353d6ad80cdc636a64175c7605a7e9.scope.
Oct 02 11:30:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:30:39 compute-0 systemd[1]: libpod-conmon-eadebe20dc4248fa38bf7b4c9225945a5a992c67d99c9b358acc8378e6b4edf9.scope: Deactivated successfully.
Oct 02 11:30:39 compute-0 podman[72863]: 2025-10-02 11:30:39.869541639 +0000 UTC m=+0.201205096 container init 5b985d62172a951461cc328e95d1062d6a353d6ad80cdc636a64175c7605a7e9 (image=quay.io/ceph/ceph:v18, name=interesting_lalande, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 11:30:39 compute-0 podman[72863]: 2025-10-02 11:30:39.878256159 +0000 UTC m=+0.209919546 container start 5b985d62172a951461cc328e95d1062d6a353d6ad80cdc636a64175c7605a7e9 (image=quay.io/ceph/ceph:v18, name=interesting_lalande, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 11:30:39 compute-0 podman[72863]: 2025-10-02 11:30:39.895874137 +0000 UTC m=+0.227537504 container attach 5b985d62172a951461cc328e95d1062d6a353d6ad80cdc636a64175c7605a7e9 (image=quay.io/ceph/ceph:v18, name=interesting_lalande, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:30:39 compute-0 interesting_lalande[72880]: AQBfYt5oDbt7NRAA8/+AlrZ/+R9lon9c15cvpg==
Oct 02 11:30:39 compute-0 systemd[1]: libpod-5b985d62172a951461cc328e95d1062d6a353d6ad80cdc636a64175c7605a7e9.scope: Deactivated successfully.
Oct 02 11:30:39 compute-0 podman[72863]: 2025-10-02 11:30:39.900789495 +0000 UTC m=+0.232452842 container died 5b985d62172a951461cc328e95d1062d6a353d6ad80cdc636a64175c7605a7e9 (image=quay.io/ceph/ceph:v18, name=interesting_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 11:30:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f1e29ed53fb01bae1fa1f9360a87460daeb5d2d27ec56f38a8815e4e1642a13-merged.mount: Deactivated successfully.
Oct 02 11:30:40 compute-0 podman[72863]: 2025-10-02 11:30:40.097079981 +0000 UTC m=+0.428743328 container remove 5b985d62172a951461cc328e95d1062d6a353d6ad80cdc636a64175c7605a7e9 (image=quay.io/ceph/ceph:v18, name=interesting_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:30:40 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:30:40 compute-0 systemd[1]: libpod-conmon-5b985d62172a951461cc328e95d1062d6a353d6ad80cdc636a64175c7605a7e9.scope: Deactivated successfully.
Oct 02 11:30:40 compute-0 podman[72901]: 2025-10-02 11:30:40.224661763 +0000 UTC m=+0.102137676 container create 6dea78c9e08decc33737d0bce00f5a6ba42e4f96fe697707cbff79e9bee391b5 (image=quay.io/ceph/ceph:v18, name=crazy_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 11:30:40 compute-0 podman[72901]: 2025-10-02 11:30:40.147732768 +0000 UTC m=+0.025208701 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:30:40 compute-0 systemd[1]: Started libpod-conmon-6dea78c9e08decc33737d0bce00f5a6ba42e4f96fe697707cbff79e9bee391b5.scope.
Oct 02 11:30:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:30:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/174c3a36fef1199d9727d2f2463a3115fe8059ae231a60d77709385635073200/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:40 compute-0 podman[72901]: 2025-10-02 11:30:40.475054748 +0000 UTC m=+0.352530711 container init 6dea78c9e08decc33737d0bce00f5a6ba42e4f96fe697707cbff79e9bee391b5 (image=quay.io/ceph/ceph:v18, name=crazy_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:30:40 compute-0 podman[72901]: 2025-10-02 11:30:40.4813206 +0000 UTC m=+0.358796513 container start 6dea78c9e08decc33737d0bce00f5a6ba42e4f96fe697707cbff79e9bee391b5 (image=quay.io/ceph/ceph:v18, name=crazy_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 11:30:40 compute-0 podman[72901]: 2025-10-02 11:30:40.504601244 +0000 UTC m=+0.382077157 container attach 6dea78c9e08decc33737d0bce00f5a6ba42e4f96fe697707cbff79e9bee391b5 (image=quay.io/ceph/ceph:v18, name=crazy_shirley, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 11:30:40 compute-0 crazy_shirley[72917]: /usr/bin/monmaptool: monmap file /tmp/monmap
Oct 02 11:30:40 compute-0 crazy_shirley[72917]: setting min_mon_release = pacific
Oct 02 11:30:40 compute-0 crazy_shirley[72917]: /usr/bin/monmaptool: set fsid to fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct 02 11:30:40 compute-0 crazy_shirley[72917]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Oct 02 11:30:40 compute-0 systemd[1]: libpod-6dea78c9e08decc33737d0bce00f5a6ba42e4f96fe697707cbff79e9bee391b5.scope: Deactivated successfully.
Oct 02 11:30:40 compute-0 podman[72901]: 2025-10-02 11:30:40.524909386 +0000 UTC m=+0.402385299 container died 6dea78c9e08decc33737d0bce00f5a6ba42e4f96fe697707cbff79e9bee391b5 (image=quay.io/ceph/ceph:v18, name=crazy_shirley, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 11:30:40 compute-0 podman[72901]: 2025-10-02 11:30:40.979079669 +0000 UTC m=+0.856555592 container remove 6dea78c9e08decc33737d0bce00f5a6ba42e4f96fe697707cbff79e9bee391b5 (image=quay.io/ceph/ceph:v18, name=crazy_shirley, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:30:40 compute-0 systemd[1]: libpod-conmon-6dea78c9e08decc33737d0bce00f5a6ba42e4f96fe697707cbff79e9bee391b5.scope: Deactivated successfully.
Oct 02 11:30:41 compute-0 podman[72937]: 2025-10-02 11:30:41.100046249 +0000 UTC m=+0.099446490 container create a2abf994ad6efa06d71b7f2139707fa6ee94b2026a78cf3044849910bbdc81d1 (image=quay.io/ceph/ceph:v18, name=goofy_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:30:41 compute-0 podman[72937]: 2025-10-02 11:30:41.021782303 +0000 UTC m=+0.021182574 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:30:41 compute-0 systemd[1]: Started libpod-conmon-a2abf994ad6efa06d71b7f2139707fa6ee94b2026a78cf3044849910bbdc81d1.scope.
Oct 02 11:30:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:30:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d17d099228f85ce9e71810c18adec1095a8e0ecd7eef760b3c8695d248eb05/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d17d099228f85ce9e71810c18adec1095a8e0ecd7eef760b3c8695d248eb05/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d17d099228f85ce9e71810c18adec1095a8e0ecd7eef760b3c8695d248eb05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d17d099228f85ce9e71810c18adec1095a8e0ecd7eef760b3c8695d248eb05/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:41 compute-0 podman[72937]: 2025-10-02 11:30:41.325759099 +0000 UTC m=+0.325159390 container init a2abf994ad6efa06d71b7f2139707fa6ee94b2026a78cf3044849910bbdc81d1 (image=quay.io/ceph/ceph:v18, name=goofy_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 11:30:41 compute-0 podman[72937]: 2025-10-02 11:30:41.331702382 +0000 UTC m=+0.331102623 container start a2abf994ad6efa06d71b7f2139707fa6ee94b2026a78cf3044849910bbdc81d1 (image=quay.io/ceph/ceph:v18, name=goofy_chebyshev, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 11:30:41 compute-0 podman[72937]: 2025-10-02 11:30:41.414295573 +0000 UTC m=+0.413695814 container attach a2abf994ad6efa06d71b7f2139707fa6ee94b2026a78cf3044849910bbdc81d1 (image=quay.io/ceph/ceph:v18, name=goofy_chebyshev, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 11:30:42 compute-0 systemd[1]: libpod-a2abf994ad6efa06d71b7f2139707fa6ee94b2026a78cf3044849910bbdc81d1.scope: Deactivated successfully.
Oct 02 11:30:42 compute-0 podman[72937]: 2025-10-02 11:30:42.27770389 +0000 UTC m=+1.277104161 container died a2abf994ad6efa06d71b7f2139707fa6ee94b2026a78cf3044849910bbdc81d1 (image=quay.io/ceph/ceph:v18, name=goofy_chebyshev, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:30:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3d17d099228f85ce9e71810c18adec1095a8e0ecd7eef760b3c8695d248eb05-merged.mount: Deactivated successfully.
Oct 02 11:30:42 compute-0 podman[72937]: 2025-10-02 11:30:42.584119313 +0000 UTC m=+1.583519554 container remove a2abf994ad6efa06d71b7f2139707fa6ee94b2026a78cf3044849910bbdc81d1 (image=quay.io/ceph/ceph:v18, name=goofy_chebyshev, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:30:42 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:30:42 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:30:42 compute-0 systemd[1]: libpod-conmon-a2abf994ad6efa06d71b7f2139707fa6ee94b2026a78cf3044849910bbdc81d1.scope: Deactivated successfully.
Oct 02 11:30:42 compute-0 systemd[1]: Reloading.
Oct 02 11:30:42 compute-0 systemd-rc-local-generator[73019]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:30:42 compute-0 systemd-sysv-generator[73023]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:30:42 compute-0 systemd[1]: Reloading.
Oct 02 11:30:42 compute-0 systemd-rc-local-generator[73056]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:30:42 compute-0 systemd-sysv-generator[73060]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:30:43 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Oct 02 11:30:43 compute-0 systemd[1]: Reloading.
Oct 02 11:30:43 compute-0 systemd-rc-local-generator[73094]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:30:43 compute-0 systemd-sysv-generator[73101]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:30:43 compute-0 systemd[1]: Reached target Ceph cluster fd4c5763-22d1-50ea-ad0b-96a3dc3040b2.
Oct 02 11:30:43 compute-0 systemd[1]: Reloading.
Oct 02 11:30:43 compute-0 systemd-sysv-generator[73138]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:30:43 compute-0 systemd-rc-local-generator[73135]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:30:43 compute-0 systemd[1]: Reloading.
Oct 02 11:30:43 compute-0 systemd-rc-local-generator[73175]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:30:43 compute-0 systemd-sysv-generator[73178]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:30:44 compute-0 systemd[1]: Created slice Slice /system/ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2.
Oct 02 11:30:44 compute-0 systemd[1]: Reached target System Time Set.
Oct 02 11:30:44 compute-0 systemd[1]: Reached target System Time Synchronized.
Oct 02 11:30:44 compute-0 systemd[1]: Starting Ceph mon.compute-0 for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2...
Oct 02 11:30:44 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:30:44 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:30:44 compute-0 podman[73231]: 2025-10-02 11:30:44.235914401 +0000 UTC m=+0.018528281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:30:44 compute-0 podman[73231]: 2025-10-02 11:30:44.348799486 +0000 UTC m=+0.131413346 container create dbb5c1bf9bce042fb2f8bf7147656ad7be2706a887cb9c43a0d9ce6d0e97065a (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:30:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02c779efdbd31d59cdaed20144ef3f6839e95d8c09a899e195c941b903d47cf2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02c779efdbd31d59cdaed20144ef3f6839e95d8c09a899e195c941b903d47cf2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02c779efdbd31d59cdaed20144ef3f6839e95d8c09a899e195c941b903d47cf2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02c779efdbd31d59cdaed20144ef3f6839e95d8c09a899e195c941b903d47cf2/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:44 compute-0 podman[73231]: 2025-10-02 11:30:44.53551238 +0000 UTC m=+0.318126270 container init dbb5c1bf9bce042fb2f8bf7147656ad7be2706a887cb9c43a0d9ce6d0e97065a (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 11:30:44 compute-0 podman[73231]: 2025-10-02 11:30:44.541015782 +0000 UTC m=+0.323629642 container start dbb5c1bf9bce042fb2f8bf7147656ad7be2706a887cb9c43a0d9ce6d0e97065a (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 11:30:44 compute-0 ceph-mon[73251]: set uid:gid to 167:167 (ceph:ceph)
Oct 02 11:30:44 compute-0 ceph-mon[73251]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct 02 11:30:44 compute-0 ceph-mon[73251]: pidfile_write: ignore empty --pid-file
Oct 02 11:30:44 compute-0 ceph-mon[73251]: load: jerasure load: lrc 
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: RocksDB version: 7.9.2
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: Git sha 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: DB SUMMARY
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: DB Session ID:  MVA9PA1P1I0ESVJY97A5
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: CURRENT file:  CURRENT
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: IDENTITY file:  IDENTITY
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                         Options.error_if_exists: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                       Options.create_if_missing: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                         Options.paranoid_checks: 1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                                     Options.env: 0x55a3b3e13c40
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                                Options.info_log: 0x55a3b5908ec0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                Options.max_file_opening_threads: 16
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                              Options.statistics: (nil)
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                               Options.use_fsync: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                       Options.max_log_file_size: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                         Options.allow_fallocate: 1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                        Options.use_direct_reads: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:          Options.create_missing_column_families: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                              Options.db_log_dir: 
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                                 Options.wal_dir: 
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                   Options.advise_random_on_open: 1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                    Options.write_buffer_manager: 0x55a3b5918b40
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                            Options.rate_limiter: (nil)
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                  Options.unordered_write: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                               Options.row_cache: None
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                              Options.wal_filter: None
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:             Options.allow_ingest_behind: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:             Options.two_write_queues: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:             Options.manual_wal_flush: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:             Options.wal_compression: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:             Options.atomic_flush: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                 Options.log_readahead_size: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:             Options.allow_data_in_errors: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:             Options.db_host_id: __hostname__
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:             Options.max_background_jobs: 2
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:             Options.max_background_compactions: -1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:             Options.max_subcompactions: 1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:             Options.max_total_wal_size: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                          Options.max_open_files: -1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                          Options.bytes_per_sync: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:       Options.compaction_readahead_size: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                  Options.max_background_flushes: -1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: Compression algorithms supported:
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:         kZSTD supported: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:         kXpressCompression supported: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:         kBZip2Compression supported: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:         kLZ4Compression supported: 1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:         kZlibCompression supported: 1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:         kLZ4HCCompression supported: 1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:         kSnappyCompression supported: 1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:           Options.merge_operator: 
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:        Options.compaction_filter: None
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a3b5908aa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a3b59011f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:        Options.write_buffer_size: 33554432
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:  Options.max_write_buffer_number: 2
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:          Options.compression: NoCompression
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:             Options.num_levels: 7
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: bf9dce4d-b8a2-404e-8460-1e42af6fea10
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759404644578161, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759404644611441, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "MVA9PA1P1I0ESVJY97A5", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759404644611605, "job": 1, "event": "recovery_finished"}
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct 02 11:30:44 compute-0 bash[73231]: dbb5c1bf9bce042fb2f8bf7147656ad7be2706a887cb9c43a0d9ce6d0e97065a
Oct 02 11:30:44 compute-0 systemd[1]: Started Ceph mon.compute-0 for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2.
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a3b592ae00
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: DB pointer 0x55a3b59b4000
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 11:30:44 compute-0 ceph-mon[73251]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.033       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.033       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.033       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.033       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a3b59011f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 11:30:44 compute-0 ceph-mon[73251]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct 02 11:30:44 compute-0 ceph-mon[73251]: mon.compute-0@-1(???) e0 preinit fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct 02 11:30:44 compute-0 ceph-mon[73251]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Oct 02 11:30:44 compute-0 ceph-mon[73251]: mon.compute-0@0(probing) e0 win_standalone_election
Oct 02 11:30:44 compute-0 ceph-mon[73251]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Oct 02 11:30:44 compute-0 podman[73273]: 2025-10-02 11:30:44.876780907 +0000 UTC m=+0.023887560 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:30:44 compute-0 ceph-mon[73251]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 11:30:44 compute-0 ceph-mon[73251]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 02 11:30:45 compute-0 podman[73273]: 2025-10-02 11:30:45.002664657 +0000 UTC m=+0.149771290 container create 2cb186d54b97eba9cb19cf60ee270bc9e79f4888d8eb6e71de6eb38cbe826dca (image=quay.io/ceph/ceph:v18, name=nice_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct 02 11:30:45 compute-0 systemd[1]: Started libpod-conmon-2cb186d54b97eba9cb19cf60ee270bc9e79f4888d8eb6e71de6eb38cbe826dca.scope.
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mon.compute-0@0(probing) e1 win_standalone_election
Oct 02 11:30:45 compute-0 ceph-mon[73251]: paxos.0).electionLogic(2) init, last seen epoch 2
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 11:30:45 compute-0 ceph-mon[73251]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 02 11:30:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:30:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70e67f4f44514453c59ff86279e1ec314bcca5ff272ffa47e959d1879859d771/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70e67f4f44514453c59ff86279e1ec314bcca5ff272ffa47e959d1879859d771/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70e67f4f44514453c59ff86279e1ec314bcca5ff272ffa47e959d1879859d771/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:45 compute-0 ceph-mon[73251]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-10-02T11:30:41.432504Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864104,os=Linux}
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Oct 02 11:30:45 compute-0 podman[73273]: 2025-10-02 11:30:45.166286401 +0000 UTC m=+0.313393024 container init 2cb186d54b97eba9cb19cf60ee270bc9e79f4888d8eb6e71de6eb38cbe826dca (image=quay.io/ceph/ceph:v18, name=nice_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 11:30:45 compute-0 podman[73273]: 2025-10-02 11:30:45.175123874 +0000 UTC m=+0.322230477 container start 2cb186d54b97eba9cb19cf60ee270bc9e79f4888d8eb6e71de6eb38cbe826dca (image=quay.io/ceph/ceph:v18, name=nice_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mon.compute-0@0(leader).mds e1 new map
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct 02 11:30:45 compute-0 ceph-mon[73251]: log_channel(cluster) log [DBG] : fsmap 
Oct 02 11:30:45 compute-0 podman[73273]: 2025-10-02 11:30:45.237460275 +0000 UTC m=+0.384566878 container attach 2cb186d54b97eba9cb19cf60ee270bc9e79f4888d8eb6e71de6eb38cbe826dca (image=quay.io/ceph/ceph:v18, name=nice_maxwell, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mkfs fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Oct 02 11:30:45 compute-0 ceph-mon[73251]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct 02 11:30:45 compute-0 ceph-mon[73251]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mon.compute-0@0(leader) e1 handle_auth_request failed to assign global_id
Oct 02 11:30:45 compute-0 ceph-mon[73251]: mon.compute-0@0(leader) e1 handle_auth_request failed to assign global_id
Oct 02 11:30:46 compute-0 ceph-mon[73251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 02 11:30:46 compute-0 ceph-mon[73251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3843403082' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 11:30:46 compute-0 nice_maxwell[73306]:   cluster:
Oct 02 11:30:46 compute-0 nice_maxwell[73306]:     id:     fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct 02 11:30:46 compute-0 nice_maxwell[73306]:     health: HEALTH_OK
Oct 02 11:30:46 compute-0 nice_maxwell[73306]:  
Oct 02 11:30:46 compute-0 nice_maxwell[73306]:   services:
Oct 02 11:30:46 compute-0 nice_maxwell[73306]:     mon: 1 daemons, quorum compute-0 (age 1.08642s)
Oct 02 11:30:46 compute-0 nice_maxwell[73306]:     mgr: no daemons active
Oct 02 11:30:46 compute-0 nice_maxwell[73306]:     osd: 0 osds: 0 up, 0 in
Oct 02 11:30:46 compute-0 nice_maxwell[73306]:  
Oct 02 11:30:46 compute-0 nice_maxwell[73306]:   data:
Oct 02 11:30:46 compute-0 nice_maxwell[73306]:     pools:   0 pools, 0 pgs
Oct 02 11:30:46 compute-0 nice_maxwell[73306]:     objects: 0 objects, 0 B
Oct 02 11:30:46 compute-0 nice_maxwell[73306]:     usage:   0 B used, 0 B / 0 B avail
Oct 02 11:30:46 compute-0 nice_maxwell[73306]:     pgs:     
Oct 02 11:30:46 compute-0 nice_maxwell[73306]:  
Oct 02 11:30:46 compute-0 systemd[1]: libpod-2cb186d54b97eba9cb19cf60ee270bc9e79f4888d8eb6e71de6eb38cbe826dca.scope: Deactivated successfully.
Oct 02 11:30:46 compute-0 podman[73273]: 2025-10-02 11:30:46.204338809 +0000 UTC m=+1.351445432 container died 2cb186d54b97eba9cb19cf60ee270bc9e79f4888d8eb6e71de6eb38cbe826dca (image=quay.io/ceph/ceph:v18, name=nice_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 11:30:46 compute-0 ceph-mon[73251]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 02 11:30:46 compute-0 ceph-mon[73251]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 02 11:30:46 compute-0 ceph-mon[73251]: fsmap 
Oct 02 11:30:46 compute-0 ceph-mon[73251]: osdmap e1: 0 total, 0 up, 0 in
Oct 02 11:30:46 compute-0 ceph-mon[73251]: mgrmap e1: no daemons active
Oct 02 11:30:46 compute-0 ceph-mon[73251]: from='client.? 192.168.122.100:0/3843403082' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 11:30:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-70e67f4f44514453c59ff86279e1ec314bcca5ff272ffa47e959d1879859d771-merged.mount: Deactivated successfully.
Oct 02 11:30:46 compute-0 podman[73273]: 2025-10-02 11:30:46.749714303 +0000 UTC m=+1.896820906 container remove 2cb186d54b97eba9cb19cf60ee270bc9e79f4888d8eb6e71de6eb38cbe826dca (image=quay.io/ceph/ceph:v18, name=nice_maxwell, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:30:46 compute-0 systemd[1]: libpod-conmon-2cb186d54b97eba9cb19cf60ee270bc9e79f4888d8eb6e71de6eb38cbe826dca.scope: Deactivated successfully.
Oct 02 11:30:46 compute-0 podman[73345]: 2025-10-02 11:30:46.889254633 +0000 UTC m=+0.111072192 container create cff618f9fffcc0116183220bc3d3a07dce4e05082f0504af7e653920de26d1e4 (image=quay.io/ceph/ceph:v18, name=charming_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:30:46 compute-0 podman[73345]: 2025-10-02 11:30:46.802106391 +0000 UTC m=+0.023923970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:30:46 compute-0 systemd[1]: Started libpod-conmon-cff618f9fffcc0116183220bc3d3a07dce4e05082f0504af7e653920de26d1e4.scope.
Oct 02 11:30:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:30:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb9867c32de37fd9b983405e793349150b90d9579927b67e4705823ab3248d6c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb9867c32de37fd9b983405e793349150b90d9579927b67e4705823ab3248d6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb9867c32de37fd9b983405e793349150b90d9579927b67e4705823ab3248d6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb9867c32de37fd9b983405e793349150b90d9579927b67e4705823ab3248d6c/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:47 compute-0 podman[73345]: 2025-10-02 11:30:47.026520518 +0000 UTC m=+0.248338107 container init cff618f9fffcc0116183220bc3d3a07dce4e05082f0504af7e653920de26d1e4 (image=quay.io/ceph/ceph:v18, name=charming_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:30:47 compute-0 podman[73345]: 2025-10-02 11:30:47.041230584 +0000 UTC m=+0.263048183 container start cff618f9fffcc0116183220bc3d3a07dce4e05082f0504af7e653920de26d1e4 (image=quay.io/ceph/ceph:v18, name=charming_ardinghelli, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 11:30:47 compute-0 podman[73345]: 2025-10-02 11:30:47.128066118 +0000 UTC m=+0.349883737 container attach cff618f9fffcc0116183220bc3d3a07dce4e05082f0504af7e653920de26d1e4 (image=quay.io/ceph/ceph:v18, name=charming_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:30:47 compute-0 ceph-mon[73251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct 02 11:30:47 compute-0 ceph-mon[73251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2278033918' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 02 11:30:47 compute-0 ceph-mon[73251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2278033918' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 02 11:30:47 compute-0 charming_ardinghelli[73361]: 
Oct 02 11:30:47 compute-0 charming_ardinghelli[73361]: [global]
Oct 02 11:30:47 compute-0 charming_ardinghelli[73361]:         fsid = fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct 02 11:30:47 compute-0 charming_ardinghelli[73361]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Oct 02 11:30:47 compute-0 systemd[1]: libpod-cff618f9fffcc0116183220bc3d3a07dce4e05082f0504af7e653920de26d1e4.scope: Deactivated successfully.
Oct 02 11:30:47 compute-0 podman[73345]: 2025-10-02 11:30:47.436288465 +0000 UTC m=+0.658106044 container died cff618f9fffcc0116183220bc3d3a07dce4e05082f0504af7e653920de26d1e4 (image=quay.io/ceph/ceph:v18, name=charming_ardinghelli, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 11:30:47 compute-0 ceph-mon[73251]: from='client.? 192.168.122.100:0/2278033918' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 02 11:30:47 compute-0 ceph-mon[73251]: from='client.? 192.168.122.100:0/2278033918' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 02 11:30:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb9867c32de37fd9b983405e793349150b90d9579927b67e4705823ab3248d6c-merged.mount: Deactivated successfully.
Oct 02 11:30:47 compute-0 podman[73345]: 2025-10-02 11:30:47.743434776 +0000 UTC m=+0.965252335 container remove cff618f9fffcc0116183220bc3d3a07dce4e05082f0504af7e653920de26d1e4 (image=quay.io/ceph/ceph:v18, name=charming_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 11:30:47 compute-0 systemd[1]: libpod-conmon-cff618f9fffcc0116183220bc3d3a07dce4e05082f0504af7e653920de26d1e4.scope: Deactivated successfully.
Oct 02 11:30:47 compute-0 podman[73401]: 2025-10-02 11:30:47.804118716 +0000 UTC m=+0.039534148 container create 6b7f2f6d0ee938297ce4e022c06d6e7df6bcf583fa394c5579fca7d9e42a8233 (image=quay.io/ceph/ceph:v18, name=xenodochial_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 11:30:47 compute-0 systemd[1]: Started libpod-conmon-6b7f2f6d0ee938297ce4e022c06d6e7df6bcf583fa394c5579fca7d9e42a8233.scope.
Oct 02 11:30:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:30:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd816cd8bdf59cda157b20a76e844619604f9606d247eba937b7a9076c2cefee/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd816cd8bdf59cda157b20a76e844619604f9606d247eba937b7a9076c2cefee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd816cd8bdf59cda157b20a76e844619604f9606d247eba937b7a9076c2cefee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd816cd8bdf59cda157b20a76e844619604f9606d247eba937b7a9076c2cefee/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:47 compute-0 podman[73401]: 2025-10-02 11:30:47.786993382 +0000 UTC m=+0.022408834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:30:47 compute-0 podman[73401]: 2025-10-02 11:30:47.892481407 +0000 UTC m=+0.127896829 container init 6b7f2f6d0ee938297ce4e022c06d6e7df6bcf583fa394c5579fca7d9e42a8233 (image=quay.io/ceph/ceph:v18, name=xenodochial_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:30:47 compute-0 podman[73401]: 2025-10-02 11:30:47.899481667 +0000 UTC m=+0.134897109 container start 6b7f2f6d0ee938297ce4e022c06d6e7df6bcf583fa394c5579fca7d9e42a8233 (image=quay.io/ceph/ceph:v18, name=xenodochial_ardinghelli, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 11:30:47 compute-0 podman[73401]: 2025-10-02 11:30:47.903879133 +0000 UTC m=+0.139294575 container attach 6b7f2f6d0ee938297ce4e022c06d6e7df6bcf583fa394c5579fca7d9e42a8233 (image=quay.io/ceph/ceph:v18, name=xenodochial_ardinghelli, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 11:30:48 compute-0 ceph-mon[73251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:30:48 compute-0 ceph-mon[73251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2774003304' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:30:48 compute-0 systemd[1]: libpod-6b7f2f6d0ee938297ce4e022c06d6e7df6bcf583fa394c5579fca7d9e42a8233.scope: Deactivated successfully.
Oct 02 11:30:48 compute-0 podman[73401]: 2025-10-02 11:30:48.337617161 +0000 UTC m=+0.573032593 container died 6b7f2f6d0ee938297ce4e022c06d6e7df6bcf583fa394c5579fca7d9e42a8233 (image=quay.io/ceph/ceph:v18, name=xenodochial_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 11:30:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd816cd8bdf59cda157b20a76e844619604f9606d247eba937b7a9076c2cefee-merged.mount: Deactivated successfully.
Oct 02 11:30:48 compute-0 podman[73401]: 2025-10-02 11:30:48.39411694 +0000 UTC m=+0.629532362 container remove 6b7f2f6d0ee938297ce4e022c06d6e7df6bcf583fa394c5579fca7d9e42a8233 (image=quay.io/ceph/ceph:v18, name=xenodochial_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:30:48 compute-0 systemd[1]: libpod-conmon-6b7f2f6d0ee938297ce4e022c06d6e7df6bcf583fa394c5579fca7d9e42a8233.scope: Deactivated successfully.
Oct 02 11:30:48 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2...
Oct 02 11:30:48 compute-0 ceph-mon[73251]: from='client.? 192.168.122.100:0/2774003304' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:30:48 compute-0 ceph-mon[73251]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct 02 11:30:48 compute-0 ceph-mon[73251]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct 02 11:30:48 compute-0 ceph-mon[73251]: mon.compute-0@0(leader) e1 shutdown
Oct 02 11:30:48 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0[73247]: 2025-10-02T11:30:48.557+0000 7f0812b65640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct 02 11:30:48 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0[73247]: 2025-10-02T11:30:48.557+0000 7f0812b65640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct 02 11:30:48 compute-0 ceph-mon[73251]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 02 11:30:48 compute-0 ceph-mon[73251]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 02 11:30:48 compute-0 podman[73488]: 2025-10-02 11:30:48.608277028 +0000 UTC m=+0.081019414 container died dbb5c1bf9bce042fb2f8bf7147656ad7be2706a887cb9c43a0d9ce6d0e97065a (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 11:30:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-02c779efdbd31d59cdaed20144ef3f6839e95d8c09a899e195c941b903d47cf2-merged.mount: Deactivated successfully.
Oct 02 11:30:48 compute-0 podman[73488]: 2025-10-02 11:30:48.646443614 +0000 UTC m=+0.119185990 container remove dbb5c1bf9bce042fb2f8bf7147656ad7be2706a887cb9c43a0d9ce6d0e97065a (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 11:30:48 compute-0 bash[73488]: ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0
Oct 02 11:30:48 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:30:48 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:30:48 compute-0 systemd[1]: ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2@mon.compute-0.service: Deactivated successfully.
Oct 02 11:30:48 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2.
Oct 02 11:30:48 compute-0 systemd[1]: Starting Ceph mon.compute-0 for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2...
Oct 02 11:30:48 compute-0 podman[73587]: 2025-10-02 11:30:48.970714349 +0000 UTC m=+0.035149782 container create 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 11:30:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dd85bd08028d71b3fe9a9e9887881018c2a7f61777bb432bf7521a7f914346d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dd85bd08028d71b3fe9a9e9887881018c2a7f61777bb432bf7521a7f914346d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dd85bd08028d71b3fe9a9e9887881018c2a7f61777bb432bf7521a7f914346d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dd85bd08028d71b3fe9a9e9887881018c2a7f61777bb432bf7521a7f914346d/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:49 compute-0 podman[73587]: 2025-10-02 11:30:49.02605764 +0000 UTC m=+0.090493113 container init 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 11:30:49 compute-0 podman[73587]: 2025-10-02 11:30:49.032177278 +0000 UTC m=+0.096612721 container start 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:30:49 compute-0 bash[73587]: 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db
Oct 02 11:30:49 compute-0 podman[73587]: 2025-10-02 11:30:48.955609073 +0000 UTC m=+0.020044516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:30:49 compute-0 systemd[1]: Started Ceph mon.compute-0 for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2.
Oct 02 11:30:49 compute-0 ceph-mon[73607]: set uid:gid to 167:167 (ceph:ceph)
Oct 02 11:30:49 compute-0 ceph-mon[73607]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct 02 11:30:49 compute-0 ceph-mon[73607]: pidfile_write: ignore empty --pid-file
Oct 02 11:30:49 compute-0 ceph-mon[73607]: load: jerasure load: lrc 
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: RocksDB version: 7.9.2
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: Git sha 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: DB SUMMARY
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: DB Session ID:  6KM81CTAEWLQE5MWBG31
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: CURRENT file:  CURRENT
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: IDENTITY file:  IDENTITY
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 55504 ; 
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                         Options.error_if_exists: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                       Options.create_if_missing: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                         Options.paranoid_checks: 1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                                     Options.env: 0x5581bc780c40
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                                Options.info_log: 0x5581be5e9040
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                Options.max_file_opening_threads: 16
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                              Options.statistics: (nil)
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                               Options.use_fsync: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                       Options.max_log_file_size: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                         Options.allow_fallocate: 1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                        Options.use_direct_reads: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:          Options.create_missing_column_families: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                              Options.db_log_dir: 
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                                 Options.wal_dir: 
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                   Options.advise_random_on_open: 1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                    Options.write_buffer_manager: 0x5581be5f8b40
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                            Options.rate_limiter: (nil)
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                  Options.unordered_write: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                               Options.row_cache: None
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                              Options.wal_filter: None
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:             Options.allow_ingest_behind: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:             Options.two_write_queues: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:             Options.manual_wal_flush: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:             Options.wal_compression: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:             Options.atomic_flush: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                 Options.log_readahead_size: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:             Options.allow_data_in_errors: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:             Options.db_host_id: __hostname__
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:             Options.max_background_jobs: 2
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:             Options.max_background_compactions: -1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:             Options.max_subcompactions: 1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:             Options.max_total_wal_size: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                          Options.max_open_files: -1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                          Options.bytes_per_sync: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:       Options.compaction_readahead_size: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                  Options.max_background_flushes: -1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: Compression algorithms supported:
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:         kZSTD supported: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:         kXpressCompression supported: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:         kBZip2Compression supported: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:         kLZ4Compression supported: 1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:         kZlibCompression supported: 1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:         kLZ4HCCompression supported: 1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:         kSnappyCompression supported: 1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:           Options.merge_operator: 
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:        Options.compaction_filter: None
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581be5e8c40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5581be5e11f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:        Options.write_buffer_size: 33554432
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:  Options.max_write_buffer_number: 2
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:          Options.compression: NoCompression
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:             Options.num_levels: 7
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: bf9dce4d-b8a2-404e-8460-1e42af6fea10
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759404649069785, "job": 1, "event": "recovery_started", "wal_files": [9]}
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759404649074559, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 54996, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 142, "table_properties": {"data_size": 53532, "index_size": 170, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2962, "raw_average_key_size": 29, "raw_value_size": 51148, "raw_average_value_size": 506, "num_data_blocks": 9, "num_entries": 101, "num_filter_entries": 101, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404649, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759404649074682, "job": 1, "event": "recovery_finished"}
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5581be60ae00
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: DB pointer 0x5581be694000
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 11:30:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   55.61 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0   55.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 3.21 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 3.21 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5581be5e11f0#2 capacity: 512.00 MB usage: 0.78 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 11:30:49 compute-0 ceph-mon[73607]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct 02 11:30:49 compute-0 ceph-mon[73607]: mon.compute-0@-1(???) e1 preinit fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct 02 11:30:49 compute-0 ceph-mon[73607]: mon.compute-0@-1(???).mds e1 new map
Oct 02 11:30:49 compute-0 ceph-mon[73607]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Oct 02 11:30:49 compute-0 ceph-mon[73607]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct 02 11:30:49 compute-0 ceph-mon[73607]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 02 11:30:49 compute-0 ceph-mon[73607]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 02 11:30:49 compute-0 ceph-mon[73607]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 02 11:30:49 compute-0 ceph-mon[73607]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Oct 02 11:30:49 compute-0 ceph-mon[73607]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Oct 02 11:30:49 compute-0 ceph-mon[73607]: mon.compute-0@0(probing) e1 win_standalone_election
Oct 02 11:30:49 compute-0 ceph-mon[73607]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Oct 02 11:30:49 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 11:30:49 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 02 11:30:49 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 02 11:30:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 11:30:49 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : fsmap 
Oct 02 11:30:49 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct 02 11:30:49 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct 02 11:30:49 compute-0 podman[73608]: 2025-10-02 11:30:49.10489714 +0000 UTC m=+0.043650868 container create 000cc0d7771ba5d062b50e6f37c99b2d3abe818d0280a84558bfb1815f70b445 (image=quay.io/ceph/ceph:v18, name=tender_clarke, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:30:49 compute-0 systemd[1]: Started libpod-conmon-000cc0d7771ba5d062b50e6f37c99b2d3abe818d0280a84558bfb1815f70b445.scope.
Oct 02 11:30:49 compute-0 ceph-mon[73607]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 02 11:30:49 compute-0 ceph-mon[73607]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 02 11:30:49 compute-0 ceph-mon[73607]: fsmap 
Oct 02 11:30:49 compute-0 ceph-mon[73607]: osdmap e1: 0 total, 0 up, 0 in
Oct 02 11:30:49 compute-0 ceph-mon[73607]: mgrmap e1: no daemons active
Oct 02 11:30:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:30:49 compute-0 podman[73608]: 2025-10-02 11:30:49.089476836 +0000 UTC m=+0.028230584 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:30:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dde499933a72e2e6f0b3e150b33a672c44eb4c01659f7b2607796f9b4a2d9ce/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dde499933a72e2e6f0b3e150b33a672c44eb4c01659f7b2607796f9b4a2d9ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dde499933a72e2e6f0b3e150b33a672c44eb4c01659f7b2607796f9b4a2d9ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:49 compute-0 podman[73608]: 2025-10-02 11:30:49.210482137 +0000 UTC m=+0.149235885 container init 000cc0d7771ba5d062b50e6f37c99b2d3abe818d0280a84558bfb1815f70b445 (image=quay.io/ceph/ceph:v18, name=tender_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 11:30:49 compute-0 podman[73608]: 2025-10-02 11:30:49.220208573 +0000 UTC m=+0.158962331 container start 000cc0d7771ba5d062b50e6f37c99b2d3abe818d0280a84558bfb1815f70b445 (image=quay.io/ceph/ceph:v18, name=tender_clarke, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 11:30:49 compute-0 podman[73608]: 2025-10-02 11:30:49.226466025 +0000 UTC m=+0.165219773 container attach 000cc0d7771ba5d062b50e6f37c99b2d3abe818d0280a84558bfb1815f70b445 (image=quay.io/ceph/ceph:v18, name=tender_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:30:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Oct 02 11:30:49 compute-0 systemd[1]: libpod-000cc0d7771ba5d062b50e6f37c99b2d3abe818d0280a84558bfb1815f70b445.scope: Deactivated successfully.
Oct 02 11:30:49 compute-0 podman[73608]: 2025-10-02 11:30:49.669504808 +0000 UTC m=+0.608258546 container died 000cc0d7771ba5d062b50e6f37c99b2d3abe818d0280a84558bfb1815f70b445 (image=quay.io/ceph/ceph:v18, name=tender_clarke, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:30:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-4dde499933a72e2e6f0b3e150b33a672c44eb4c01659f7b2607796f9b4a2d9ce-merged.mount: Deactivated successfully.
Oct 02 11:30:49 compute-0 podman[73608]: 2025-10-02 11:30:49.709919848 +0000 UTC m=+0.648673576 container remove 000cc0d7771ba5d062b50e6f37c99b2d3abe818d0280a84558bfb1815f70b445 (image=quay.io/ceph/ceph:v18, name=tender_clarke, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct 02 11:30:49 compute-0 systemd[1]: libpod-conmon-000cc0d7771ba5d062b50e6f37c99b2d3abe818d0280a84558bfb1815f70b445.scope: Deactivated successfully.
Oct 02 11:30:49 compute-0 podman[73700]: 2025-10-02 11:30:49.774680206 +0000 UTC m=+0.043206467 container create 4285fe5484c1c4276b84f5b7443c91023e0da48073e5d001cb48a5a700ebeed6 (image=quay.io/ceph/ceph:v18, name=sweet_tesla, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 11:30:49 compute-0 systemd[1]: Started libpod-conmon-4285fe5484c1c4276b84f5b7443c91023e0da48073e5d001cb48a5a700ebeed6.scope.
Oct 02 11:30:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:30:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a8bb4993bc56b86aebb1153fbac7e679d3a5df930a820383a29536920daf39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a8bb4993bc56b86aebb1153fbac7e679d3a5df930a820383a29536920daf39/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a8bb4993bc56b86aebb1153fbac7e679d3a5df930a820383a29536920daf39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:49 compute-0 podman[73700]: 2025-10-02 11:30:49.753254328 +0000 UTC m=+0.021780609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:30:49 compute-0 podman[73700]: 2025-10-02 11:30:49.853597439 +0000 UTC m=+0.122123730 container init 4285fe5484c1c4276b84f5b7443c91023e0da48073e5d001cb48a5a700ebeed6 (image=quay.io/ceph/ceph:v18, name=sweet_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:30:49 compute-0 podman[73700]: 2025-10-02 11:30:49.860688871 +0000 UTC m=+0.129215132 container start 4285fe5484c1c4276b84f5b7443c91023e0da48073e5d001cb48a5a700ebeed6 (image=quay.io/ceph/ceph:v18, name=sweet_tesla, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 11:30:49 compute-0 podman[73700]: 2025-10-02 11:30:49.863939509 +0000 UTC m=+0.132465770 container attach 4285fe5484c1c4276b84f5b7443c91023e0da48073e5d001cb48a5a700ebeed6 (image=quay.io/ceph/ceph:v18, name=sweet_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:30:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Oct 02 11:30:50 compute-0 systemd[1]: libpod-4285fe5484c1c4276b84f5b7443c91023e0da48073e5d001cb48a5a700ebeed6.scope: Deactivated successfully.
Oct 02 11:30:50 compute-0 podman[73700]: 2025-10-02 11:30:50.305652971 +0000 UTC m=+0.574179232 container died 4285fe5484c1c4276b84f5b7443c91023e0da48073e5d001cb48a5a700ebeed6 (image=quay.io/ceph/ceph:v18, name=sweet_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 11:30:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6a8bb4993bc56b86aebb1153fbac7e679d3a5df930a820383a29536920daf39-merged.mount: Deactivated successfully.
Oct 02 11:30:51 compute-0 podman[73700]: 2025-10-02 11:30:51.619673775 +0000 UTC m=+1.888200036 container remove 4285fe5484c1c4276b84f5b7443c91023e0da48073e5d001cb48a5a700ebeed6 (image=quay.io/ceph/ceph:v18, name=sweet_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Oct 02 11:30:51 compute-0 systemd[1]: Reloading.
Oct 02 11:30:51 compute-0 systemd-rc-local-generator[73783]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:30:51 compute-0 systemd-sysv-generator[73787]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:30:51 compute-0 systemd[1]: libpod-conmon-4285fe5484c1c4276b84f5b7443c91023e0da48073e5d001cb48a5a700ebeed6.scope: Deactivated successfully.
Oct 02 11:30:51 compute-0 systemd[1]: Reloading.
Oct 02 11:30:51 compute-0 systemd-rc-local-generator[73821]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:30:51 compute-0 systemd-sysv-generator[73824]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:30:52 compute-0 systemd[1]: Starting Ceph mgr.compute-0.fmcstn for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2...
Oct 02 11:30:52 compute-0 podman[73881]: 2025-10-02 11:30:52.342367473 +0000 UTC m=+0.041047695 container create 40049824dedfdb4c2d5e555144af61c717c75ec1a23bd69ad5e9a919d590a5e3 (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:30:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b476715e90dbac918e52afefac939013b962813f0c5d42ac4a55a37915ccda9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b476715e90dbac918e52afefac939013b962813f0c5d42ac4a55a37915ccda9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b476715e90dbac918e52afefac939013b962813f0c5d42ac4a55a37915ccda9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b476715e90dbac918e52afefac939013b962813f0c5d42ac4a55a37915ccda9/merged/var/lib/ceph/mgr/ceph-compute-0.fmcstn supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:52 compute-0 podman[73881]: 2025-10-02 11:30:52.401754273 +0000 UTC m=+0.100434515 container init 40049824dedfdb4c2d5e555144af61c717c75ec1a23bd69ad5e9a919d590a5e3 (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:30:52 compute-0 podman[73881]: 2025-10-02 11:30:52.406410175 +0000 UTC m=+0.105090387 container start 40049824dedfdb4c2d5e555144af61c717c75ec1a23bd69ad5e9a919d590a5e3 (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Oct 02 11:30:52 compute-0 bash[73881]: 40049824dedfdb4c2d5e555144af61c717c75ec1a23bd69ad5e9a919d590a5e3
Oct 02 11:30:52 compute-0 podman[73881]: 2025-10-02 11:30:52.322249916 +0000 UTC m=+0.020930168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:30:52 compute-0 systemd[1]: Started Ceph mgr.compute-0.fmcstn for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2.
Oct 02 11:30:52 compute-0 ceph-mgr[73901]: set uid:gid to 167:167 (ceph:ceph)
Oct 02 11:30:52 compute-0 ceph-mgr[73901]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct 02 11:30:52 compute-0 ceph-mgr[73901]: pidfile_write: ignore empty --pid-file
Oct 02 11:30:52 compute-0 podman[73902]: 2025-10-02 11:30:52.484479467 +0000 UTC m=+0.044735915 container create c7f4a7e12914b913ed005227f7ce5e35f5fb0ce3acb90a795ea40200c36bc066 (image=quay.io/ceph/ceph:v18, name=amazing_buck, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:30:52 compute-0 systemd[1]: Started libpod-conmon-c7f4a7e12914b913ed005227f7ce5e35f5fb0ce3acb90a795ea40200c36bc066.scope.
Oct 02 11:30:52 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'alerts'
Oct 02 11:30:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:30:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19740febe871520f34fc3429aaf2995cf6c9547e0b09aa0f6d72d880be9c1d01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19740febe871520f34fc3429aaf2995cf6c9547e0b09aa0f6d72d880be9c1d01/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19740febe871520f34fc3429aaf2995cf6c9547e0b09aa0f6d72d880be9c1d01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:52 compute-0 podman[73902]: 2025-10-02 11:30:52.463592651 +0000 UTC m=+0.023849109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:30:52 compute-0 podman[73902]: 2025-10-02 11:30:52.575455731 +0000 UTC m=+0.135712179 container init c7f4a7e12914b913ed005227f7ce5e35f5fb0ce3acb90a795ea40200c36bc066 (image=quay.io/ceph/ceph:v18, name=amazing_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:30:52 compute-0 podman[73902]: 2025-10-02 11:30:52.582359388 +0000 UTC m=+0.142615836 container start c7f4a7e12914b913ed005227f7ce5e35f5fb0ce3acb90a795ea40200c36bc066 (image=quay.io/ceph/ceph:v18, name=amazing_buck, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:30:52 compute-0 podman[73902]: 2025-10-02 11:30:52.585232548 +0000 UTC m=+0.145488996 container attach c7f4a7e12914b913ed005227f7ce5e35f5fb0ce3acb90a795ea40200c36bc066 (image=quay.io/ceph/ceph:v18, name=amazing_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:30:52 compute-0 ceph-mgr[73901]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 02 11:30:52 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'balancer'
Oct 02 11:30:52 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:30:52.872+0000 7f9d2e64e140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 02 11:30:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 11:30:52 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3987521463' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:30:52 compute-0 amazing_buck[73939]: 
Oct 02 11:30:52 compute-0 amazing_buck[73939]: {
Oct 02 11:30:52 compute-0 amazing_buck[73939]:     "fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:30:52 compute-0 amazing_buck[73939]:     "health": {
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "status": "HEALTH_OK",
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "checks": {},
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "mutes": []
Oct 02 11:30:52 compute-0 amazing_buck[73939]:     },
Oct 02 11:30:52 compute-0 amazing_buck[73939]:     "election_epoch": 5,
Oct 02 11:30:52 compute-0 amazing_buck[73939]:     "quorum": [
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         0
Oct 02 11:30:52 compute-0 amazing_buck[73939]:     ],
Oct 02 11:30:52 compute-0 amazing_buck[73939]:     "quorum_names": [
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "compute-0"
Oct 02 11:30:52 compute-0 amazing_buck[73939]:     ],
Oct 02 11:30:52 compute-0 amazing_buck[73939]:     "quorum_age": 3,
Oct 02 11:30:52 compute-0 amazing_buck[73939]:     "monmap": {
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "epoch": 1,
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "min_mon_release_name": "reef",
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "num_mons": 1
Oct 02 11:30:52 compute-0 amazing_buck[73939]:     },
Oct 02 11:30:52 compute-0 amazing_buck[73939]:     "osdmap": {
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "epoch": 1,
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "num_osds": 0,
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "num_up_osds": 0,
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "osd_up_since": 0,
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "num_in_osds": 0,
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "osd_in_since": 0,
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "num_remapped_pgs": 0
Oct 02 11:30:52 compute-0 amazing_buck[73939]:     },
Oct 02 11:30:52 compute-0 amazing_buck[73939]:     "pgmap": {
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "pgs_by_state": [],
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "num_pgs": 0,
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "num_pools": 0,
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "num_objects": 0,
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "data_bytes": 0,
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "bytes_used": 0,
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "bytes_avail": 0,
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "bytes_total": 0
Oct 02 11:30:52 compute-0 amazing_buck[73939]:     },
Oct 02 11:30:52 compute-0 amazing_buck[73939]:     "fsmap": {
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "epoch": 1,
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "by_rank": [],
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "up:standby": 0
Oct 02 11:30:52 compute-0 amazing_buck[73939]:     },
Oct 02 11:30:52 compute-0 amazing_buck[73939]:     "mgrmap": {
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "available": false,
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "num_standbys": 0,
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "modules": [
Oct 02 11:30:52 compute-0 amazing_buck[73939]:             "iostat",
Oct 02 11:30:52 compute-0 amazing_buck[73939]:             "nfs",
Oct 02 11:30:52 compute-0 amazing_buck[73939]:             "restful"
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         ],
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "services": {}
Oct 02 11:30:52 compute-0 amazing_buck[73939]:     },
Oct 02 11:30:52 compute-0 amazing_buck[73939]:     "servicemap": {
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "epoch": 1,
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "modified": "2025-10-02T11:30:45.146182+0000",
Oct 02 11:30:52 compute-0 amazing_buck[73939]:         "services": {}
Oct 02 11:30:52 compute-0 amazing_buck[73939]:     },
Oct 02 11:30:52 compute-0 amazing_buck[73939]:     "progress_events": {}
Oct 02 11:30:52 compute-0 amazing_buck[73939]: }
Oct 02 11:30:52 compute-0 systemd[1]: libpod-c7f4a7e12914b913ed005227f7ce5e35f5fb0ce3acb90a795ea40200c36bc066.scope: Deactivated successfully.
Oct 02 11:30:52 compute-0 podman[73902]: 2025-10-02 11:30:52.979260833 +0000 UTC m=+0.539517311 container died c7f4a7e12914b913ed005227f7ce5e35f5fb0ce3acb90a795ea40200c36bc066 (image=quay.io/ceph/ceph:v18, name=amazing_buck, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:30:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-19740febe871520f34fc3429aaf2995cf6c9547e0b09aa0f6d72d880be9c1d01-merged.mount: Deactivated successfully.
Oct 02 11:30:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3987521463' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:30:53 compute-0 podman[73902]: 2025-10-02 11:30:53.035042085 +0000 UTC m=+0.595298533 container remove c7f4a7e12914b913ed005227f7ce5e35f5fb0ce3acb90a795ea40200c36bc066 (image=quay.io/ceph/ceph:v18, name=amazing_buck, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:30:53 compute-0 systemd[1]: libpod-conmon-c7f4a7e12914b913ed005227f7ce5e35f5fb0ce3acb90a795ea40200c36bc066.scope: Deactivated successfully.
Oct 02 11:30:53 compute-0 ceph-mgr[73901]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 02 11:30:53 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:30:53.151+0000 7f9d2e64e140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 02 11:30:53 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'cephadm'
Oct 02 11:30:55 compute-0 podman[73990]: 2025-10-02 11:30:55.10402591 +0000 UTC m=+0.047726657 container create 9f16ddf4873de8930a100da1400f1ac3583d1a0de73a833c17b1cb799871e378 (image=quay.io/ceph/ceph:v18, name=reverent_liskov, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:30:55 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'crash'
Oct 02 11:30:55 compute-0 systemd[1]: Started libpod-conmon-9f16ddf4873de8930a100da1400f1ac3583d1a0de73a833c17b1cb799871e378.scope.
Oct 02 11:30:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed24c90fc3e893174d3adefa39905f77765421639fa14b506ed03aa0a5da53e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed24c90fc3e893174d3adefa39905f77765421639fa14b506ed03aa0a5da53e0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed24c90fc3e893174d3adefa39905f77765421639fa14b506ed03aa0a5da53e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:55 compute-0 podman[73990]: 2025-10-02 11:30:55.079117166 +0000 UTC m=+0.022817943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:30:55 compute-0 podman[73990]: 2025-10-02 11:30:55.183328631 +0000 UTC m=+0.127029378 container init 9f16ddf4873de8930a100da1400f1ac3583d1a0de73a833c17b1cb799871e378 (image=quay.io/ceph/ceph:v18, name=reverent_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:30:55 compute-0 podman[73990]: 2025-10-02 11:30:55.188404244 +0000 UTC m=+0.132104991 container start 9f16ddf4873de8930a100da1400f1ac3583d1a0de73a833c17b1cb799871e378 (image=quay.io/ceph/ceph:v18, name=reverent_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 11:30:55 compute-0 podman[73990]: 2025-10-02 11:30:55.191384916 +0000 UTC m=+0.135085663 container attach 9f16ddf4873de8930a100da1400f1ac3583d1a0de73a833c17b1cb799871e378 (image=quay.io/ceph/ceph:v18, name=reverent_liskov, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:30:55 compute-0 ceph-mgr[73901]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 02 11:30:55 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:30:55.393+0000 7f9d2e64e140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 02 11:30:55 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'dashboard'
Oct 02 11:30:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 11:30:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/118078117' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:30:55 compute-0 reverent_liskov[74006]: 
Oct 02 11:30:55 compute-0 reverent_liskov[74006]: {
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:     "fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:     "health": {
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "status": "HEALTH_OK",
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "checks": {},
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "mutes": []
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:     },
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:     "election_epoch": 5,
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:     "quorum": [
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         0
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:     ],
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:     "quorum_names": [
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "compute-0"
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:     ],
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:     "quorum_age": 6,
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:     "monmap": {
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "epoch": 1,
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "min_mon_release_name": "reef",
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "num_mons": 1
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:     },
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:     "osdmap": {
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "epoch": 1,
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "num_osds": 0,
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "num_up_osds": 0,
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "osd_up_since": 0,
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "num_in_osds": 0,
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "osd_in_since": 0,
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "num_remapped_pgs": 0
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:     },
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:     "pgmap": {
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "pgs_by_state": [],
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "num_pgs": 0,
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "num_pools": 0,
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "num_objects": 0,
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "data_bytes": 0,
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "bytes_used": 0,
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "bytes_avail": 0,
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "bytes_total": 0
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:     },
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:     "fsmap": {
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "epoch": 1,
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "by_rank": [],
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "up:standby": 0
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:     },
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:     "mgrmap": {
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "available": false,
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "num_standbys": 0,
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "modules": [
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:             "iostat",
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:             "nfs",
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:             "restful"
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         ],
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "services": {}
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:     },
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:     "servicemap": {
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "epoch": 1,
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "modified": "2025-10-02T11:30:45.146182+0000",
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:         "services": {}
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:     },
Oct 02 11:30:55 compute-0 reverent_liskov[74006]:     "progress_events": {}
Oct 02 11:30:55 compute-0 reverent_liskov[74006]: }
Oct 02 11:30:55 compute-0 systemd[1]: libpod-9f16ddf4873de8930a100da1400f1ac3583d1a0de73a833c17b1cb799871e378.scope: Deactivated successfully.
Oct 02 11:30:55 compute-0 podman[73990]: 2025-10-02 11:30:55.620576614 +0000 UTC m=+0.564277361 container died 9f16ddf4873de8930a100da1400f1ac3583d1a0de73a833c17b1cb799871e378 (image=quay.io/ceph/ceph:v18, name=reverent_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:30:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed24c90fc3e893174d3adefa39905f77765421639fa14b506ed03aa0a5da53e0-merged.mount: Deactivated successfully.
Oct 02 11:30:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/118078117' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:30:55 compute-0 podman[73990]: 2025-10-02 11:30:55.660722476 +0000 UTC m=+0.604423223 container remove 9f16ddf4873de8930a100da1400f1ac3583d1a0de73a833c17b1cb799871e378 (image=quay.io/ceph/ceph:v18, name=reverent_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:30:55 compute-0 systemd[1]: libpod-conmon-9f16ddf4873de8930a100da1400f1ac3583d1a0de73a833c17b1cb799871e378.scope: Deactivated successfully.
Oct 02 11:30:56 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'devicehealth'
Oct 02 11:30:57 compute-0 ceph-mgr[73901]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 02 11:30:57 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:30:57.127+0000 7f9d2e64e140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 02 11:30:57 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'diskprediction_local'
Oct 02 11:30:57 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 02 11:30:57 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 02 11:30:57 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]:   from numpy import show_config as show_numpy_config
Oct 02 11:30:57 compute-0 ceph-mgr[73901]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 02 11:30:57 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:30:57.696+0000 7f9d2e64e140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 02 11:30:57 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'influx'
Oct 02 11:30:57 compute-0 podman[74046]: 2025-10-02 11:30:57.720482599 +0000 UTC m=+0.039463687 container create 0f363ab857a9be2c8fc605416fc848819dc567bf6d606f458dd39779e03a4e34 (image=quay.io/ceph/ceph:v18, name=trusting_cartwright, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 11:30:57 compute-0 systemd[1]: Started libpod-conmon-0f363ab857a9be2c8fc605416fc848819dc567bf6d606f458dd39779e03a4e34.scope.
Oct 02 11:30:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:30:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f3421a6bc14d35c15f5f52389ac56aa2d3ce0316b9129f1d2fcbaf7054cf0d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f3421a6bc14d35c15f5f52389ac56aa2d3ce0316b9129f1d2fcbaf7054cf0d9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f3421a6bc14d35c15f5f52389ac56aa2d3ce0316b9129f1d2fcbaf7054cf0d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:30:57 compute-0 podman[74046]: 2025-10-02 11:30:57.701874648 +0000 UTC m=+0.020855746 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:30:57 compute-0 podman[74046]: 2025-10-02 11:30:57.804597836 +0000 UTC m=+0.123578914 container init 0f363ab857a9be2c8fc605416fc848819dc567bf6d606f458dd39779e03a4e34 (image=quay.io/ceph/ceph:v18, name=trusting_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Oct 02 11:30:57 compute-0 podman[74046]: 2025-10-02 11:30:57.8096799 +0000 UTC m=+0.128660978 container start 0f363ab857a9be2c8fc605416fc848819dc567bf6d606f458dd39779e03a4e34 (image=quay.io/ceph/ceph:v18, name=trusting_cartwright, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:30:57 compute-0 podman[74046]: 2025-10-02 11:30:57.812871057 +0000 UTC m=+0.131852155 container attach 0f363ab857a9be2c8fc605416fc848819dc567bf6d606f458dd39779e03a4e34 (image=quay.io/ceph/ceph:v18, name=trusting_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 11:30:57 compute-0 ceph-mgr[73901]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 02 11:30:57 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:30:57.956+0000 7f9d2e64e140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 02 11:30:57 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'insights'
Oct 02 11:30:58 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'iostat'
Oct 02 11:30:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 11:30:58 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4069782165' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]: 
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]: {
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:     "fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:     "health": {
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "status": "HEALTH_OK",
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "checks": {},
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "mutes": []
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:     },
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:     "election_epoch": 5,
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:     "quorum": [
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         0
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:     ],
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:     "quorum_names": [
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "compute-0"
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:     ],
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:     "quorum_age": 9,
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:     "monmap": {
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "epoch": 1,
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "min_mon_release_name": "reef",
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "num_mons": 1
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:     },
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:     "osdmap": {
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "epoch": 1,
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "num_osds": 0,
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "num_up_osds": 0,
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "osd_up_since": 0,
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "num_in_osds": 0,
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "osd_in_since": 0,
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "num_remapped_pgs": 0
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:     },
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:     "pgmap": {
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "pgs_by_state": [],
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "num_pgs": 0,
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "num_pools": 0,
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "num_objects": 0,
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "data_bytes": 0,
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "bytes_used": 0,
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "bytes_avail": 0,
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "bytes_total": 0
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:     },
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:     "fsmap": {
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "epoch": 1,
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "by_rank": [],
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "up:standby": 0
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:     },
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:     "mgrmap": {
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "available": false,
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "num_standbys": 0,
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "modules": [
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:             "iostat",
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:             "nfs",
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:             "restful"
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         ],
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "services": {}
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:     },
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:     "servicemap": {
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "epoch": 1,
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "modified": "2025-10-02T11:30:45.146182+0000",
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:         "services": {}
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:     },
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]:     "progress_events": {}
Oct 02 11:30:58 compute-0 trusting_cartwright[74063]: }
Oct 02 11:30:58 compute-0 systemd[1]: libpod-0f363ab857a9be2c8fc605416fc848819dc567bf6d606f458dd39779e03a4e34.scope: Deactivated successfully.
Oct 02 11:30:58 compute-0 podman[74046]: 2025-10-02 11:30:58.230058784 +0000 UTC m=+0.549039902 container died 0f363ab857a9be2c8fc605416fc848819dc567bf6d606f458dd39779e03a4e34 (image=quay.io/ceph/ceph:v18, name=trusting_cartwright, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 11:30:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f3421a6bc14d35c15f5f52389ac56aa2d3ce0316b9129f1d2fcbaf7054cf0d9-merged.mount: Deactivated successfully.
Oct 02 11:30:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4069782165' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:30:58 compute-0 podman[74046]: 2025-10-02 11:30:58.275169518 +0000 UTC m=+0.594150596 container remove 0f363ab857a9be2c8fc605416fc848819dc567bf6d606f458dd39779e03a4e34 (image=quay.io/ceph/ceph:v18, name=trusting_cartwright, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 11:30:58 compute-0 systemd[1]: libpod-conmon-0f363ab857a9be2c8fc605416fc848819dc567bf6d606f458dd39779e03a4e34.scope: Deactivated successfully.
Oct 02 11:30:58 compute-0 ceph-mgr[73901]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 02 11:30:58 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'k8sevents'
Oct 02 11:30:58 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:30:58.445+0000 7f9d2e64e140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 02 11:31:00 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'localpool'
Oct 02 11:31:00 compute-0 podman[74101]: 2025-10-02 11:31:00.316609365 +0000 UTC m=+0.020360614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:31:00 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'mds_autoscaler'
Oct 02 11:31:00 compute-0 podman[74101]: 2025-10-02 11:31:00.496010801 +0000 UTC m=+0.199762060 container create 83c8bb204389dcd864c1505e9cc30d417d158f43e2386bbd732a156c2c263671 (image=quay.io/ceph/ceph:v18, name=happy_swanson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:31:00 compute-0 systemd[1]: Started libpod-conmon-83c8bb204389dcd864c1505e9cc30d417d158f43e2386bbd732a156c2c263671.scope.
Oct 02 11:31:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:31:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd5fc2efa82b004634fff7eba7a3339000357e01f7a0e67bb81ae60b4f04017/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd5fc2efa82b004634fff7eba7a3339000357e01f7a0e67bb81ae60b4f04017/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd5fc2efa82b004634fff7eba7a3339000357e01f7a0e67bb81ae60b4f04017/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:00 compute-0 podman[74101]: 2025-10-02 11:31:00.736738634 +0000 UTC m=+0.440489883 container init 83c8bb204389dcd864c1505e9cc30d417d158f43e2386bbd732a156c2c263671 (image=quay.io/ceph/ceph:v18, name=happy_swanson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 11:31:00 compute-0 podman[74101]: 2025-10-02 11:31:00.743067066 +0000 UTC m=+0.446818295 container start 83c8bb204389dcd864c1505e9cc30d417d158f43e2386bbd732a156c2c263671 (image=quay.io/ceph/ceph:v18, name=happy_swanson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:31:00 compute-0 podman[74101]: 2025-10-02 11:31:00.825952314 +0000 UTC m=+0.529703593 container attach 83c8bb204389dcd864c1505e9cc30d417d158f43e2386bbd732a156c2c263671 (image=quay.io/ceph/ceph:v18, name=happy_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 11:31:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 11:31:01 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/982846434' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:31:01 compute-0 happy_swanson[74117]: 
Oct 02 11:31:01 compute-0 happy_swanson[74117]: {
Oct 02 11:31:01 compute-0 happy_swanson[74117]:     "fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:31:01 compute-0 happy_swanson[74117]:     "health": {
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "status": "HEALTH_OK",
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "checks": {},
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "mutes": []
Oct 02 11:31:01 compute-0 happy_swanson[74117]:     },
Oct 02 11:31:01 compute-0 happy_swanson[74117]:     "election_epoch": 5,
Oct 02 11:31:01 compute-0 happy_swanson[74117]:     "quorum": [
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         0
Oct 02 11:31:01 compute-0 happy_swanson[74117]:     ],
Oct 02 11:31:01 compute-0 happy_swanson[74117]:     "quorum_names": [
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "compute-0"
Oct 02 11:31:01 compute-0 happy_swanson[74117]:     ],
Oct 02 11:31:01 compute-0 happy_swanson[74117]:     "quorum_age": 12,
Oct 02 11:31:01 compute-0 happy_swanson[74117]:     "monmap": {
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "epoch": 1,
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "min_mon_release_name": "reef",
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "num_mons": 1
Oct 02 11:31:01 compute-0 happy_swanson[74117]:     },
Oct 02 11:31:01 compute-0 happy_swanson[74117]:     "osdmap": {
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "epoch": 1,
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "num_osds": 0,
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "num_up_osds": 0,
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "osd_up_since": 0,
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "num_in_osds": 0,
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "osd_in_since": 0,
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "num_remapped_pgs": 0
Oct 02 11:31:01 compute-0 happy_swanson[74117]:     },
Oct 02 11:31:01 compute-0 happy_swanson[74117]:     "pgmap": {
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "pgs_by_state": [],
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "num_pgs": 0,
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "num_pools": 0,
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "num_objects": 0,
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "data_bytes": 0,
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "bytes_used": 0,
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "bytes_avail": 0,
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "bytes_total": 0
Oct 02 11:31:01 compute-0 happy_swanson[74117]:     },
Oct 02 11:31:01 compute-0 happy_swanson[74117]:     "fsmap": {
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "epoch": 1,
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "by_rank": [],
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "up:standby": 0
Oct 02 11:31:01 compute-0 happy_swanson[74117]:     },
Oct 02 11:31:01 compute-0 happy_swanson[74117]:     "mgrmap": {
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "available": false,
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "num_standbys": 0,
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "modules": [
Oct 02 11:31:01 compute-0 happy_swanson[74117]:             "iostat",
Oct 02 11:31:01 compute-0 happy_swanson[74117]:             "nfs",
Oct 02 11:31:01 compute-0 happy_swanson[74117]:             "restful"
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         ],
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "services": {}
Oct 02 11:31:01 compute-0 happy_swanson[74117]:     },
Oct 02 11:31:01 compute-0 happy_swanson[74117]:     "servicemap": {
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "epoch": 1,
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "modified": "2025-10-02T11:30:45.146182+0000",
Oct 02 11:31:01 compute-0 happy_swanson[74117]:         "services": {}
Oct 02 11:31:01 compute-0 happy_swanson[74117]:     },
Oct 02 11:31:01 compute-0 happy_swanson[74117]:     "progress_events": {}
Oct 02 11:31:01 compute-0 happy_swanson[74117]: }
Oct 02 11:31:01 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'mirroring'
Oct 02 11:31:01 compute-0 systemd[1]: libpod-83c8bb204389dcd864c1505e9cc30d417d158f43e2386bbd732a156c2c263671.scope: Deactivated successfully.
Oct 02 11:31:01 compute-0 podman[74101]: 2025-10-02 11:31:01.137888722 +0000 UTC m=+0.841639971 container died 83c8bb204389dcd864c1505e9cc30d417d158f43e2386bbd732a156c2c263671 (image=quay.io/ceph/ceph:v18, name=happy_swanson, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:31:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/982846434' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:31:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bd5fc2efa82b004634fff7eba7a3339000357e01f7a0e67bb81ae60b4f04017-merged.mount: Deactivated successfully.
Oct 02 11:31:01 compute-0 podman[74101]: 2025-10-02 11:31:01.381697688 +0000 UTC m=+1.085448917 container remove 83c8bb204389dcd864c1505e9cc30d417d158f43e2386bbd732a156c2c263671 (image=quay.io/ceph/ceph:v18, name=happy_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 11:31:01 compute-0 systemd[1]: libpod-conmon-83c8bb204389dcd864c1505e9cc30d417d158f43e2386bbd732a156c2c263671.scope: Deactivated successfully.
Oct 02 11:31:01 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'nfs'
Oct 02 11:31:02 compute-0 ceph-mgr[73901]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 02 11:31:02 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'orchestrator'
Oct 02 11:31:02 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:02.102+0000 7f9d2e64e140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 02 11:31:02 compute-0 ceph-mgr[73901]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 02 11:31:02 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'osd_perf_query'
Oct 02 11:31:02 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:02.807+0000 7f9d2e64e140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 02 11:31:03 compute-0 ceph-mgr[73901]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 02 11:31:03 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'osd_support'
Oct 02 11:31:03 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:03.094+0000 7f9d2e64e140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 02 11:31:03 compute-0 ceph-mgr[73901]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 02 11:31:03 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'pg_autoscaler'
Oct 02 11:31:03 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:03.348+0000 7f9d2e64e140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 02 11:31:03 compute-0 podman[74158]: 2025-10-02 11:31:03.436424938 +0000 UTC m=+0.025312064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:31:03 compute-0 ceph-mgr[73901]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 02 11:31:03 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'progress'
Oct 02 11:31:03 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:03.752+0000 7f9d2e64e140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 02 11:31:04 compute-0 podman[74158]: 2025-10-02 11:31:04.801149431 +0000 UTC m=+1.390036547 container create 001d0c8cc381335b557eb0b9f9cd9efabfdc5e164dfec522403904a4a24cadad (image=quay.io/ceph/ceph:v18, name=laughing_liskov, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:31:04 compute-0 systemd[1]: Started libpod-conmon-001d0c8cc381335b557eb0b9f9cd9efabfdc5e164dfec522403904a4a24cadad.scope.
Oct 02 11:31:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:31:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a07e9a7b22eb29e903f65cd9d9a74ccee35c83e986b872e8634b909425b09b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a07e9a7b22eb29e903f65cd9d9a74ccee35c83e986b872e8634b909425b09b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a07e9a7b22eb29e903f65cd9d9a74ccee35c83e986b872e8634b909425b09b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:04 compute-0 ceph-mgr[73901]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 02 11:31:04 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'prometheus'
Oct 02 11:31:04 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:04.892+0000 7f9d2e64e140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 02 11:31:04 compute-0 podman[74158]: 2025-10-02 11:31:04.922446649 +0000 UTC m=+1.511333795 container init 001d0c8cc381335b557eb0b9f9cd9efabfdc5e164dfec522403904a4a24cadad (image=quay.io/ceph/ceph:v18, name=laughing_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:31:04 compute-0 podman[74158]: 2025-10-02 11:31:04.928054645 +0000 UTC m=+1.516941761 container start 001d0c8cc381335b557eb0b9f9cd9efabfdc5e164dfec522403904a4a24cadad (image=quay.io/ceph/ceph:v18, name=laughing_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 11:31:04 compute-0 podman[74158]: 2025-10-02 11:31:04.941239095 +0000 UTC m=+1.530126241 container attach 001d0c8cc381335b557eb0b9f9cd9efabfdc5e164dfec522403904a4a24cadad (image=quay.io/ceph/ceph:v18, name=laughing_liskov, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 11:31:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 11:31:05 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/376542288' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:31:05 compute-0 laughing_liskov[74175]: 
Oct 02 11:31:05 compute-0 laughing_liskov[74175]: {
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:     "fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:     "health": {
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "status": "HEALTH_OK",
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "checks": {},
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "mutes": []
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:     },
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:     "election_epoch": 5,
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:     "quorum": [
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         0
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:     ],
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:     "quorum_names": [
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "compute-0"
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:     ],
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:     "quorum_age": 16,
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:     "monmap": {
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "epoch": 1,
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "min_mon_release_name": "reef",
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "num_mons": 1
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:     },
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:     "osdmap": {
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "epoch": 1,
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "num_osds": 0,
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "num_up_osds": 0,
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "osd_up_since": 0,
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "num_in_osds": 0,
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "osd_in_since": 0,
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "num_remapped_pgs": 0
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:     },
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:     "pgmap": {
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "pgs_by_state": [],
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "num_pgs": 0,
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "num_pools": 0,
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "num_objects": 0,
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "data_bytes": 0,
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "bytes_used": 0,
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "bytes_avail": 0,
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "bytes_total": 0
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:     },
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:     "fsmap": {
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "epoch": 1,
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "by_rank": [],
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "up:standby": 0
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:     },
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:     "mgrmap": {
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "available": false,
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "num_standbys": 0,
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "modules": [
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:             "iostat",
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:             "nfs",
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:             "restful"
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         ],
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "services": {}
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:     },
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:     "servicemap": {
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "epoch": 1,
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "modified": "2025-10-02T11:30:45.146182+0000",
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:         "services": {}
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:     },
Oct 02 11:31:05 compute-0 laughing_liskov[74175]:     "progress_events": {}
Oct 02 11:31:05 compute-0 laughing_liskov[74175]: }
Oct 02 11:31:05 compute-0 systemd[1]: libpod-001d0c8cc381335b557eb0b9f9cd9efabfdc5e164dfec522403904a4a24cadad.scope: Deactivated successfully.
Oct 02 11:31:05 compute-0 podman[74158]: 2025-10-02 11:31:05.342157888 +0000 UTC m=+1.931045014 container died 001d0c8cc381335b557eb0b9f9cd9efabfdc5e164dfec522403904a4a24cadad (image=quay.io/ceph/ceph:v18, name=laughing_liskov, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 11:31:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-80a07e9a7b22eb29e903f65cd9d9a74ccee35c83e986b872e8634b909425b09b-merged.mount: Deactivated successfully.
Oct 02 11:31:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/376542288' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:31:05 compute-0 podman[74158]: 2025-10-02 11:31:05.388884999 +0000 UTC m=+1.977772115 container remove 001d0c8cc381335b557eb0b9f9cd9efabfdc5e164dfec522403904a4a24cadad (image=quay.io/ceph/ceph:v18, name=laughing_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:31:05 compute-0 systemd[1]: libpod-conmon-001d0c8cc381335b557eb0b9f9cd9efabfdc5e164dfec522403904a4a24cadad.scope: Deactivated successfully.
Oct 02 11:31:06 compute-0 ceph-mgr[73901]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 02 11:31:06 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:06.019+0000 7f9d2e64e140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 02 11:31:06 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'rbd_support'
Oct 02 11:31:06 compute-0 ceph-mgr[73901]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 02 11:31:06 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:06.386+0000 7f9d2e64e140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 02 11:31:06 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'restful'
Oct 02 11:31:07 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'rgw'
Oct 02 11:31:07 compute-0 podman[74212]: 2025-10-02 11:31:07.458305645 +0000 UTC m=+0.043195448 container create fe3bf910413cd3c81e4c13cd21a6f29bd96c567a04977d8ef68a37a3e634ad60 (image=quay.io/ceph/ceph:v18, name=loving_mendeleev, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:31:07 compute-0 systemd[1]: Started libpod-conmon-fe3bf910413cd3c81e4c13cd21a6f29bd96c567a04977d8ef68a37a3e634ad60.scope.
Oct 02 11:31:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98a6a5dcee1ff89702fb0010fe194ff6a8213dad168b1383ea750197eb785264/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98a6a5dcee1ff89702fb0010fe194ff6a8213dad168b1383ea750197eb785264/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98a6a5dcee1ff89702fb0010fe194ff6a8213dad168b1383ea750197eb785264/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:07 compute-0 podman[74212]: 2025-10-02 11:31:07.528536046 +0000 UTC m=+0.113425869 container init fe3bf910413cd3c81e4c13cd21a6f29bd96c567a04977d8ef68a37a3e634ad60 (image=quay.io/ceph/ceph:v18, name=loving_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 11:31:07 compute-0 podman[74212]: 2025-10-02 11:31:07.533987548 +0000 UTC m=+0.118877351 container start fe3bf910413cd3c81e4c13cd21a6f29bd96c567a04977d8ef68a37a3e634ad60 (image=quay.io/ceph/ceph:v18, name=loving_mendeleev, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:31:07 compute-0 podman[74212]: 2025-10-02 11:31:07.441443926 +0000 UTC m=+0.026333759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:31:07 compute-0 podman[74212]: 2025-10-02 11:31:07.538274262 +0000 UTC m=+0.123164095 container attach fe3bf910413cd3c81e4c13cd21a6f29bd96c567a04977d8ef68a37a3e634ad60 (image=quay.io/ceph/ceph:v18, name=loving_mendeleev, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 11:31:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 11:31:07 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/371481312' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]: 
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]: {
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:     "fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:     "health": {
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "status": "HEALTH_OK",
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "checks": {},
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "mutes": []
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:     },
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:     "election_epoch": 5,
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:     "quorum": [
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         0
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:     ],
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:     "quorum_names": [
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "compute-0"
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:     ],
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:     "quorum_age": 18,
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:     "monmap": {
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "epoch": 1,
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "min_mon_release_name": "reef",
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "num_mons": 1
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:     },
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:     "osdmap": {
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "epoch": 1,
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "num_osds": 0,
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "num_up_osds": 0,
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "osd_up_since": 0,
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "num_in_osds": 0,
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "osd_in_since": 0,
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "num_remapped_pgs": 0
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:     },
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:     "pgmap": {
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "pgs_by_state": [],
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "num_pgs": 0,
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "num_pools": 0,
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "num_objects": 0,
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "data_bytes": 0,
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "bytes_used": 0,
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "bytes_avail": 0,
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "bytes_total": 0
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:     },
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:     "fsmap": {
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "epoch": 1,
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "by_rank": [],
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "up:standby": 0
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:     },
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:     "mgrmap": {
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "available": false,
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "num_standbys": 0,
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "modules": [
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:             "iostat",
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:             "nfs",
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:             "restful"
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         ],
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "services": {}
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:     },
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:     "servicemap": {
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "epoch": 1,
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "modified": "2025-10-02T11:30:45.146182+0000",
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:         "services": {}
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:     },
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]:     "progress_events": {}
Oct 02 11:31:07 compute-0 loving_mendeleev[74228]: }
Oct 02 11:31:07 compute-0 systemd[1]: libpod-fe3bf910413cd3c81e4c13cd21a6f29bd96c567a04977d8ef68a37a3e634ad60.scope: Deactivated successfully.
Oct 02 11:31:07 compute-0 conmon[74228]: conmon fe3bf910413cd3c81e4c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fe3bf910413cd3c81e4c13cd21a6f29bd96c567a04977d8ef68a37a3e634ad60.scope/container/memory.events
Oct 02 11:31:07 compute-0 ceph-mgr[73901]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 02 11:31:07 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:07.982+0000 7f9d2e64e140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 02 11:31:07 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'rook'
Oct 02 11:31:07 compute-0 podman[74254]: 2025-10-02 11:31:07.998530303 +0000 UTC m=+0.022693771 container died fe3bf910413cd3c81e4c13cd21a6f29bd96c567a04977d8ef68a37a3e634ad60 (image=quay.io/ceph/ceph:v18, name=loving_mendeleev, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 11:31:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/371481312' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:31:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-98a6a5dcee1ff89702fb0010fe194ff6a8213dad168b1383ea750197eb785264-merged.mount: Deactivated successfully.
Oct 02 11:31:08 compute-0 podman[74254]: 2025-10-02 11:31:08.149826538 +0000 UTC m=+0.173989976 container remove fe3bf910413cd3c81e4c13cd21a6f29bd96c567a04977d8ef68a37a3e634ad60 (image=quay.io/ceph/ceph:v18, name=loving_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:31:08 compute-0 systemd[1]: libpod-conmon-fe3bf910413cd3c81e4c13cd21a6f29bd96c567a04977d8ef68a37a3e634ad60.scope: Deactivated successfully.
Oct 02 11:31:10 compute-0 podman[74269]: 2025-10-02 11:31:10.250385317 +0000 UTC m=+0.070491758 container create 6aea6360e58d65667718e1b2f86e702c4500194e5d0fb8c28d4b6a5fe591f503 (image=quay.io/ceph/ceph:v18, name=nifty_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:31:10 compute-0 podman[74269]: 2025-10-02 11:31:10.202353434 +0000 UTC m=+0.022459855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:31:10 compute-0 systemd[1]: Started libpod-conmon-6aea6360e58d65667718e1b2f86e702c4500194e5d0fb8c28d4b6a5fe591f503.scope.
Oct 02 11:31:10 compute-0 ceph-mgr[73901]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 02 11:31:10 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'selftest'
Oct 02 11:31:10 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:10.299+0000 7f9d2e64e140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 02 11:31:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/795b88fc65b27e3c4346fa2b75534d94b538c8fd972e4a646ba1cc296da27979/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/795b88fc65b27e3c4346fa2b75534d94b538c8fd972e4a646ba1cc296da27979/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/795b88fc65b27e3c4346fa2b75534d94b538c8fd972e4a646ba1cc296da27979/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:10 compute-0 podman[74269]: 2025-10-02 11:31:10.33839644 +0000 UTC m=+0.158502861 container init 6aea6360e58d65667718e1b2f86e702c4500194e5d0fb8c28d4b6a5fe591f503 (image=quay.io/ceph/ceph:v18, name=nifty_chebyshev, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:31:10 compute-0 podman[74269]: 2025-10-02 11:31:10.343163816 +0000 UTC m=+0.163270217 container start 6aea6360e58d65667718e1b2f86e702c4500194e5d0fb8c28d4b6a5fe591f503 (image=quay.io/ceph/ceph:v18, name=nifty_chebyshev, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:31:10 compute-0 podman[74269]: 2025-10-02 11:31:10.349761405 +0000 UTC m=+0.169867836 container attach 6aea6360e58d65667718e1b2f86e702c4500194e5d0fb8c28d4b6a5fe591f503 (image=quay.io/ceph/ceph:v18, name=nifty_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Oct 02 11:31:10 compute-0 ceph-mgr[73901]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 02 11:31:10 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'snap_schedule'
Oct 02 11:31:10 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:10.559+0000 7f9d2e64e140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 02 11:31:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 11:31:10 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/52621117' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]: 
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]: {
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:     "fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:     "health": {
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "status": "HEALTH_OK",
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "checks": {},
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "mutes": []
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:     },
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:     "election_epoch": 5,
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:     "quorum": [
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         0
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:     ],
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:     "quorum_names": [
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "compute-0"
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:     ],
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:     "quorum_age": 21,
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:     "monmap": {
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "epoch": 1,
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "min_mon_release_name": "reef",
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "num_mons": 1
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:     },
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:     "osdmap": {
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "epoch": 1,
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "num_osds": 0,
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "num_up_osds": 0,
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "osd_up_since": 0,
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "num_in_osds": 0,
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "osd_in_since": 0,
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "num_remapped_pgs": 0
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:     },
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:     "pgmap": {
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "pgs_by_state": [],
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "num_pgs": 0,
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "num_pools": 0,
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "num_objects": 0,
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "data_bytes": 0,
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "bytes_used": 0,
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "bytes_avail": 0,
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "bytes_total": 0
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:     },
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:     "fsmap": {
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "epoch": 1,
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "by_rank": [],
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "up:standby": 0
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:     },
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:     "mgrmap": {
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "available": false,
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "num_standbys": 0,
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "modules": [
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:             "iostat",
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:             "nfs",
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:             "restful"
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         ],
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "services": {}
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:     },
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:     "servicemap": {
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "epoch": 1,
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "modified": "2025-10-02T11:30:45.146182+0000",
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:         "services": {}
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:     },
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]:     "progress_events": {}
Oct 02 11:31:10 compute-0 nifty_chebyshev[74286]: }
Oct 02 11:31:10 compute-0 systemd[1]: libpod-6aea6360e58d65667718e1b2f86e702c4500194e5d0fb8c28d4b6a5fe591f503.scope: Deactivated successfully.
Oct 02 11:31:10 compute-0 podman[74269]: 2025-10-02 11:31:10.740291246 +0000 UTC m=+0.560397657 container died 6aea6360e58d65667718e1b2f86e702c4500194e5d0fb8c28d4b6a5fe591f503 (image=quay.io/ceph/ceph:v18, name=nifty_chebyshev, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:31:10 compute-0 ceph-mgr[73901]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 02 11:31:10 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'stats'
Oct 02 11:31:10 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:10.834+0000 7f9d2e64e140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 02 11:31:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/52621117' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:31:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-795b88fc65b27e3c4346fa2b75534d94b538c8fd972e4a646ba1cc296da27979-merged.mount: Deactivated successfully.
Oct 02 11:31:11 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'status'
Oct 02 11:31:11 compute-0 ceph-mgr[73901]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 02 11:31:11 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'telegraf'
Oct 02 11:31:11 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:11.391+0000 7f9d2e64e140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 02 11:31:11 compute-0 podman[74269]: 2025-10-02 11:31:11.404344205 +0000 UTC m=+1.224450606 container remove 6aea6360e58d65667718e1b2f86e702c4500194e5d0fb8c28d4b6a5fe591f503 (image=quay.io/ceph/ceph:v18, name=nifty_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:31:11 compute-0 systemd[1]: libpod-conmon-6aea6360e58d65667718e1b2f86e702c4500194e5d0fb8c28d4b6a5fe591f503.scope: Deactivated successfully.
Oct 02 11:31:11 compute-0 ceph-mgr[73901]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 02 11:31:11 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'telemetry'
Oct 02 11:31:11 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:11.632+0000 7f9d2e64e140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 02 11:31:12 compute-0 ceph-mgr[73901]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 02 11:31:12 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'test_orchestrator'
Oct 02 11:31:12 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:12.270+0000 7f9d2e64e140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 02 11:31:12 compute-0 ceph-mgr[73901]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 02 11:31:12 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'volumes'
Oct 02 11:31:12 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:12.989+0000 7f9d2e64e140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 02 11:31:13 compute-0 podman[74326]: 2025-10-02 11:31:13.456262476 +0000 UTC m=+0.026378880 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:31:13 compute-0 podman[74326]: 2025-10-02 11:31:13.612416619 +0000 UTC m=+0.182532983 container create 7985ca8ae30d94fd553d3dd9f377a39f1cfb1f9597c3b7c187b60d4ebd1bfd9a (image=quay.io/ceph/ceph:v18, name=dreamy_swanson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct 02 11:31:13 compute-0 systemd[1]: Started libpod-conmon-7985ca8ae30d94fd553d3dd9f377a39f1cfb1f9597c3b7c187b60d4ebd1bfd9a.scope.
Oct 02 11:31:13 compute-0 ceph-mgr[73901]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 02 11:31:13 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'zabbix'
Oct 02 11:31:13 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:13.746+0000 7f9d2e64e140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 02 11:31:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad290af5d05d99b6bb38420d9b4c2ea6f5ecf5212ee6c2ce8df02bf0828ec19f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad290af5d05d99b6bb38420d9b4c2ea6f5ecf5212ee6c2ce8df02bf0828ec19f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad290af5d05d99b6bb38420d9b4c2ea6f5ecf5212ee6c2ce8df02bf0828ec19f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:13 compute-0 podman[74326]: 2025-10-02 11:31:13.831989218 +0000 UTC m=+0.402105622 container init 7985ca8ae30d94fd553d3dd9f377a39f1cfb1f9597c3b7c187b60d4ebd1bfd9a (image=quay.io/ceph/ceph:v18, name=dreamy_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:31:13 compute-0 podman[74326]: 2025-10-02 11:31:13.840021633 +0000 UTC m=+0.410138007 container start 7985ca8ae30d94fd553d3dd9f377a39f1cfb1f9597c3b7c187b60d4ebd1bfd9a (image=quay.io/ceph/ceph:v18, name=dreamy_swanson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 11:31:13 compute-0 podman[74326]: 2025-10-02 11:31:13.848368755 +0000 UTC m=+0.418485169 container attach 7985ca8ae30d94fd553d3dd9f377a39f1cfb1f9597c3b7c187b60d4ebd1bfd9a (image=quay.io/ceph/ceph:v18, name=dreamy_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 02 11:31:14 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:14.004+0000 7f9d2e64e140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: ms_deliver_dispatch: unhandled message 0x560d0ce74420 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Oct 02 11:31:14 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.fmcstn
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: mgr handle_mgr_map Activating!
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: mgr handle_mgr_map I am now activating
Oct 02 11:31:14 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.fmcstn(active, starting, since 0.0348237s)
Oct 02 11:31:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Oct 02 11:31:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2566804250' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 02 11:31:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).mds e1 all = 1
Oct 02 11:31:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 02 11:31:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2566804250' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 11:31:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Oct 02 11:31:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2566804250' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 02 11:31:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 02 11:31:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2566804250' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 11:31:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.fmcstn", "id": "compute-0.fmcstn"} v 0) v1
Oct 02 11:31:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2566804250' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mgr metadata", "who": "compute-0.fmcstn", "id": "compute-0.fmcstn"}]: dispatch
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: mgr load Constructed class from module: balancer
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: mgr load Constructed class from module: crash
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [balancer INFO root] Starting
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:31:14 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : Manager daemon compute-0.fmcstn is now available
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: mgr load Constructed class from module: devicehealth
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: mgr load Constructed class from module: iostat
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [devicehealth INFO root] Starting
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: mgr load Constructed class from module: nfs
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_11:31:14
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [balancer INFO root] No pools available
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: mgr load Constructed class from module: orchestrator
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: mgr load Constructed class from module: pg_autoscaler
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: mgr load Constructed class from module: progress
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [progress INFO root] Loading...
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [progress INFO root] No stored events to load
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [progress INFO root] Loaded [] historic events
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [progress INFO root] Loaded OSDMap, ready.
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [rbd_support INFO root] recovery thread starting
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [rbd_support INFO root] starting setup
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: mgr load Constructed class from module: rbd_support
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: mgr load Constructed class from module: restful
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [restful INFO root] server_addr: :: server_port: 8003
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: mgr load Constructed class from module: status
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [restful WARNING root] server not running: no certificate configured
Oct 02 11:31:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmcstn/mirror_snapshot_schedule"} v 0) v1
Oct 02 11:31:14 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2566804250' entity='mgr.compute-0.fmcstn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmcstn/mirror_snapshot_schedule"}]: dispatch
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: mgr load Constructed class from module: telemetry
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [rbd_support INFO root] PerfHandler: starting
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TaskHandler: starting
Oct 02 11:31:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Oct 02 11:31:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmcstn/trash_purge_schedule"} v 0) v1
Oct 02 11:31:14 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2566804250' entity='mgr.compute-0.fmcstn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmcstn/trash_purge_schedule"}]: dispatch
Oct 02 11:31:14 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2566804250' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: [rbd_support INFO root] setup complete
Oct 02 11:31:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Oct 02 11:31:14 compute-0 ceph-mgr[73901]: mgr load Constructed class from module: volumes
Oct 02 11:31:14 compute-0 ceph-mon[73607]: Activating manager daemon compute-0.fmcstn
Oct 02 11:31:14 compute-0 ceph-mon[73607]: mgrmap e2: compute-0.fmcstn(active, starting, since 0.0348237s)
Oct 02 11:31:14 compute-0 ceph-mon[73607]: from='mgr.14102 192.168.122.100:0/2566804250' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 02 11:31:14 compute-0 ceph-mon[73607]: from='mgr.14102 192.168.122.100:0/2566804250' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 11:31:14 compute-0 ceph-mon[73607]: from='mgr.14102 192.168.122.100:0/2566804250' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 02 11:31:14 compute-0 ceph-mon[73607]: from='mgr.14102 192.168.122.100:0/2566804250' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 11:31:14 compute-0 ceph-mon[73607]: from='mgr.14102 192.168.122.100:0/2566804250' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mgr metadata", "who": "compute-0.fmcstn", "id": "compute-0.fmcstn"}]: dispatch
Oct 02 11:31:14 compute-0 ceph-mon[73607]: Manager daemon compute-0.fmcstn is now available
Oct 02 11:31:14 compute-0 ceph-mon[73607]: from='mgr.14102 192.168.122.100:0/2566804250' entity='mgr.compute-0.fmcstn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmcstn/mirror_snapshot_schedule"}]: dispatch
Oct 02 11:31:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 11:31:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/472862038' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]: 
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]: {
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:     "fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:     "health": {
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "status": "HEALTH_OK",
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "checks": {},
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "mutes": []
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:     },
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:     "election_epoch": 5,
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:     "quorum": [
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         0
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:     ],
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:     "quorum_names": [
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "compute-0"
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:     ],
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:     "quorum_age": 25,
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:     "monmap": {
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "epoch": 1,
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "min_mon_release_name": "reef",
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "num_mons": 1
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:     },
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:     "osdmap": {
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "epoch": 1,
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "num_osds": 0,
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "num_up_osds": 0,
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "osd_up_since": 0,
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "num_in_osds": 0,
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "osd_in_since": 0,
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "num_remapped_pgs": 0
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:     },
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:     "pgmap": {
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "pgs_by_state": [],
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "num_pgs": 0,
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "num_pools": 0,
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "num_objects": 0,
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "data_bytes": 0,
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "bytes_used": 0,
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "bytes_avail": 0,
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "bytes_total": 0
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:     },
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:     "fsmap": {
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "epoch": 1,
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "by_rank": [],
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "up:standby": 0
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:     },
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:     "mgrmap": {
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "available": false,
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "num_standbys": 0,
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "modules": [
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:             "iostat",
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:             "nfs",
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:             "restful"
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         ],
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "services": {}
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:     },
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:     "servicemap": {
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "epoch": 1,
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "modified": "2025-10-02T11:30:45.146182+0000",
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:         "services": {}
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:     },
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]:     "progress_events": {}
Oct 02 11:31:14 compute-0 dreamy_swanson[74342]: }
Oct 02 11:31:14 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2566804250' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Oct 02 11:31:14 compute-0 systemd[1]: libpod-7985ca8ae30d94fd553d3dd9f377a39f1cfb1f9597c3b7c187b60d4ebd1bfd9a.scope: Deactivated successfully.
Oct 02 11:31:14 compute-0 podman[74326]: 2025-10-02 11:31:14.256988895 +0000 UTC m=+0.827105259 container died 7985ca8ae30d94fd553d3dd9f377a39f1cfb1f9597c3b7c187b60d4ebd1bfd9a (image=quay.io/ceph/ceph:v18, name=dreamy_swanson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 11:31:14 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2566804250' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad290af5d05d99b6bb38420d9b4c2ea6f5ecf5212ee6c2ce8df02bf0828ec19f-merged.mount: Deactivated successfully.
Oct 02 11:31:14 compute-0 podman[74326]: 2025-10-02 11:31:14.376862229 +0000 UTC m=+0.946978623 container remove 7985ca8ae30d94fd553d3dd9f377a39f1cfb1f9597c3b7c187b60d4ebd1bfd9a (image=quay.io/ceph/ceph:v18, name=dreamy_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 11:31:14 compute-0 systemd[1]: libpod-conmon-7985ca8ae30d94fd553d3dd9f377a39f1cfb1f9597c3b7c187b60d4ebd1bfd9a.scope: Deactivated successfully.
Oct 02 11:31:15 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.fmcstn(active, since 1.15228s)
Oct 02 11:31:15 compute-0 ceph-mon[73607]: from='mgr.14102 192.168.122.100:0/2566804250' entity='mgr.compute-0.fmcstn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmcstn/trash_purge_schedule"}]: dispatch
Oct 02 11:31:15 compute-0 ceph-mon[73607]: from='mgr.14102 192.168.122.100:0/2566804250' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/472862038' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:31:15 compute-0 ceph-mon[73607]: from='mgr.14102 192.168.122.100:0/2566804250' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:15 compute-0 ceph-mon[73607]: from='mgr.14102 192.168.122.100:0/2566804250' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:15 compute-0 ceph-mon[73607]: mgrmap e3: compute-0.fmcstn(active, since 1.15228s)
Oct 02 11:31:16 compute-0 ceph-mgr[73901]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 11:31:16 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.fmcstn(active, since 2s)
Oct 02 11:31:16 compute-0 podman[74458]: 2025-10-02 11:31:16.469948078 +0000 UTC m=+0.071927524 container create 3c4ffd843a4fc5361449e73efad7357bb43c54582973b67b9e8d1422486891a2 (image=quay.io/ceph/ceph:v18, name=keen_curran, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:31:16 compute-0 podman[74458]: 2025-10-02 11:31:16.417949027 +0000 UTC m=+0.019928413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:31:16 compute-0 systemd[1]: Started libpod-conmon-3c4ffd843a4fc5361449e73efad7357bb43c54582973b67b9e8d1422486891a2.scope.
Oct 02 11:31:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6a2b1b97499a52d03ca08fa4d2bf57718386dc3b4a4972f970445a7e779390a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6a2b1b97499a52d03ca08fa4d2bf57718386dc3b4a4972f970445a7e779390a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6a2b1b97499a52d03ca08fa4d2bf57718386dc3b4a4972f970445a7e779390a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:16 compute-0 podman[74458]: 2025-10-02 11:31:16.760072317 +0000 UTC m=+0.362051683 container init 3c4ffd843a4fc5361449e73efad7357bb43c54582973b67b9e8d1422486891a2 (image=quay.io/ceph/ceph:v18, name=keen_curran, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:31:16 compute-0 podman[74458]: 2025-10-02 11:31:16.766463561 +0000 UTC m=+0.368442927 container start 3c4ffd843a4fc5361449e73efad7357bb43c54582973b67b9e8d1422486891a2 (image=quay.io/ceph/ceph:v18, name=keen_curran, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 11:31:16 compute-0 podman[74458]: 2025-10-02 11:31:16.814188777 +0000 UTC m=+0.416168193 container attach 3c4ffd843a4fc5361449e73efad7357bb43c54582973b67b9e8d1422486891a2 (image=quay.io/ceph/ceph:v18, name=keen_curran, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:31:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 11:31:17 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2830896374' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:31:17 compute-0 keen_curran[74474]: 
Oct 02 11:31:17 compute-0 keen_curran[74474]: {
Oct 02 11:31:17 compute-0 keen_curran[74474]:     "fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:31:17 compute-0 keen_curran[74474]:     "health": {
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "status": "HEALTH_OK",
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "checks": {},
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "mutes": []
Oct 02 11:31:17 compute-0 keen_curran[74474]:     },
Oct 02 11:31:17 compute-0 keen_curran[74474]:     "election_epoch": 5,
Oct 02 11:31:17 compute-0 keen_curran[74474]:     "quorum": [
Oct 02 11:31:17 compute-0 keen_curran[74474]:         0
Oct 02 11:31:17 compute-0 keen_curran[74474]:     ],
Oct 02 11:31:17 compute-0 keen_curran[74474]:     "quorum_names": [
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "compute-0"
Oct 02 11:31:17 compute-0 keen_curran[74474]:     ],
Oct 02 11:31:17 compute-0 keen_curran[74474]:     "quorum_age": 28,
Oct 02 11:31:17 compute-0 keen_curran[74474]:     "monmap": {
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "epoch": 1,
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "min_mon_release_name": "reef",
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "num_mons": 1
Oct 02 11:31:17 compute-0 keen_curran[74474]:     },
Oct 02 11:31:17 compute-0 keen_curran[74474]:     "osdmap": {
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "epoch": 1,
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "num_osds": 0,
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "num_up_osds": 0,
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "osd_up_since": 0,
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "num_in_osds": 0,
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "osd_in_since": 0,
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "num_remapped_pgs": 0
Oct 02 11:31:17 compute-0 keen_curran[74474]:     },
Oct 02 11:31:17 compute-0 keen_curran[74474]:     "pgmap": {
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "pgs_by_state": [],
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "num_pgs": 0,
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "num_pools": 0,
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "num_objects": 0,
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "data_bytes": 0,
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "bytes_used": 0,
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "bytes_avail": 0,
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "bytes_total": 0
Oct 02 11:31:17 compute-0 keen_curran[74474]:     },
Oct 02 11:31:17 compute-0 keen_curran[74474]:     "fsmap": {
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "epoch": 1,
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "by_rank": [],
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "up:standby": 0
Oct 02 11:31:17 compute-0 keen_curran[74474]:     },
Oct 02 11:31:17 compute-0 keen_curran[74474]:     "mgrmap": {
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "available": true,
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "num_standbys": 0,
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "modules": [
Oct 02 11:31:17 compute-0 keen_curran[74474]:             "iostat",
Oct 02 11:31:17 compute-0 keen_curran[74474]:             "nfs",
Oct 02 11:31:17 compute-0 keen_curran[74474]:             "restful"
Oct 02 11:31:17 compute-0 keen_curran[74474]:         ],
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "services": {}
Oct 02 11:31:17 compute-0 keen_curran[74474]:     },
Oct 02 11:31:17 compute-0 keen_curran[74474]:     "servicemap": {
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "epoch": 1,
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "modified": "2025-10-02T11:30:45.146182+0000",
Oct 02 11:31:17 compute-0 keen_curran[74474]:         "services": {}
Oct 02 11:31:17 compute-0 keen_curran[74474]:     },
Oct 02 11:31:17 compute-0 keen_curran[74474]:     "progress_events": {}
Oct 02 11:31:17 compute-0 keen_curran[74474]: }
Oct 02 11:31:17 compute-0 systemd[1]: libpod-3c4ffd843a4fc5361449e73efad7357bb43c54582973b67b9e8d1422486891a2.scope: Deactivated successfully.
Oct 02 11:31:17 compute-0 podman[74458]: 2025-10-02 11:31:17.359685913 +0000 UTC m=+0.961665279 container died 3c4ffd843a4fc5361449e73efad7357bb43c54582973b67b9e8d1422486891a2 (image=quay.io/ceph/ceph:v18, name=keen_curran, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:31:17 compute-0 ceph-mon[73607]: mgrmap e4: compute-0.fmcstn(active, since 2s)
Oct 02 11:31:17 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2830896374' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:31:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6a2b1b97499a52d03ca08fa4d2bf57718386dc3b4a4972f970445a7e779390a-merged.mount: Deactivated successfully.
Oct 02 11:31:18 compute-0 ceph-mgr[73901]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 11:31:18 compute-0 podman[74458]: 2025-10-02 11:31:18.074928591 +0000 UTC m=+1.676907947 container remove 3c4ffd843a4fc5361449e73efad7357bb43c54582973b67b9e8d1422486891a2 (image=quay.io/ceph/ceph:v18, name=keen_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:31:18 compute-0 systemd[1]: libpod-conmon-3c4ffd843a4fc5361449e73efad7357bb43c54582973b67b9e8d1422486891a2.scope: Deactivated successfully.
Oct 02 11:31:18 compute-0 podman[74512]: 2025-10-02 11:31:18.197963112 +0000 UTC m=+0.105098517 container create 62d63548994b2c7b21774849e8cc00a4edd86df85f6fea0adc47b4434f581c31 (image=quay.io/ceph/ceph:v18, name=sweet_banach, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:31:18 compute-0 podman[74512]: 2025-10-02 11:31:18.112286326 +0000 UTC m=+0.019421751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:31:18 compute-0 systemd[1]: Started libpod-conmon-62d63548994b2c7b21774849e8cc00a4edd86df85f6fea0adc47b4434f581c31.scope.
Oct 02 11:31:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2893e66909613deb0ef4816060e6a7756450df219400a391f12e931b38e9dba3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2893e66909613deb0ef4816060e6a7756450df219400a391f12e931b38e9dba3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2893e66909613deb0ef4816060e6a7756450df219400a391f12e931b38e9dba3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2893e66909613deb0ef4816060e6a7756450df219400a391f12e931b38e9dba3/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:18 compute-0 podman[74512]: 2025-10-02 11:31:18.852619511 +0000 UTC m=+0.759754916 container init 62d63548994b2c7b21774849e8cc00a4edd86df85f6fea0adc47b4434f581c31 (image=quay.io/ceph/ceph:v18, name=sweet_banach, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 11:31:18 compute-0 podman[74512]: 2025-10-02 11:31:18.858451032 +0000 UTC m=+0.765586427 container start 62d63548994b2c7b21774849e8cc00a4edd86df85f6fea0adc47b4434f581c31 (image=quay.io/ceph/ceph:v18, name=sweet_banach, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Oct 02 11:31:19 compute-0 podman[74512]: 2025-10-02 11:31:19.034792745 +0000 UTC m=+0.941928140 container attach 62d63548994b2c7b21774849e8cc00a4edd86df85f6fea0adc47b4434f581c31 (image=quay.io/ceph/ceph:v18, name=sweet_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 11:31:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct 02 11:31:19 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4137600257' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 02 11:31:19 compute-0 systemd[1]: libpod-62d63548994b2c7b21774849e8cc00a4edd86df85f6fea0adc47b4434f581c31.scope: Deactivated successfully.
Oct 02 11:31:19 compute-0 podman[74512]: 2025-10-02 11:31:19.395360171 +0000 UTC m=+1.302495566 container died 62d63548994b2c7b21774849e8cc00a4edd86df85f6fea0adc47b4434f581c31 (image=quay.io/ceph/ceph:v18, name=sweet_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 11:31:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4137600257' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 02 11:31:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-2893e66909613deb0ef4816060e6a7756450df219400a391f12e931b38e9dba3-merged.mount: Deactivated successfully.
Oct 02 11:31:20 compute-0 ceph-mgr[73901]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 11:31:20 compute-0 podman[74512]: 2025-10-02 11:31:20.11523781 +0000 UTC m=+2.022373205 container remove 62d63548994b2c7b21774849e8cc00a4edd86df85f6fea0adc47b4434f581c31 (image=quay.io/ceph/ceph:v18, name=sweet_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:31:20 compute-0 systemd[1]: libpod-conmon-62d63548994b2c7b21774849e8cc00a4edd86df85f6fea0adc47b4434f581c31.scope: Deactivated successfully.
Oct 02 11:31:20 compute-0 podman[74566]: 2025-10-02 11:31:20.154329628 +0000 UTC m=+0.021926763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:31:20 compute-0 podman[74566]: 2025-10-02 11:31:20.252626639 +0000 UTC m=+0.120223754 container create d454e63e53d316d9f92af8991faa5164c54e9f0b1d2f58234cffd0244cab6cca (image=quay.io/ceph/ceph:v18, name=crazy_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 11:31:20 compute-0 systemd[1]: Started libpod-conmon-d454e63e53d316d9f92af8991faa5164c54e9f0b1d2f58234cffd0244cab6cca.scope.
Oct 02 11:31:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:31:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb9106471812b4029fcb20462c02bce42d7861b8518e1ef62589f6a73a3f998/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb9106471812b4029fcb20462c02bce42d7861b8518e1ef62589f6a73a3f998/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb9106471812b4029fcb20462c02bce42d7861b8518e1ef62589f6a73a3f998/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:20 compute-0 podman[74566]: 2025-10-02 11:31:20.393112383 +0000 UTC m=+0.260709508 container init d454e63e53d316d9f92af8991faa5164c54e9f0b1d2f58234cffd0244cab6cca (image=quay.io/ceph/ceph:v18, name=crazy_meitner, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:31:20 compute-0 podman[74566]: 2025-10-02 11:31:20.397998211 +0000 UTC m=+0.265595326 container start d454e63e53d316d9f92af8991faa5164c54e9f0b1d2f58234cffd0244cab6cca (image=quay.io/ceph/ceph:v18, name=crazy_meitner, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:31:20 compute-0 podman[74566]: 2025-10-02 11:31:20.482083658 +0000 UTC m=+0.349680793 container attach d454e63e53d316d9f92af8991faa5164c54e9f0b1d2f58234cffd0244cab6cca (image=quay.io/ceph/ceph:v18, name=crazy_meitner, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:31:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Oct 02 11:31:20 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1702907958' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct 02 11:31:21 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1702907958' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct 02 11:31:21 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.fmcstn(active, since 7s)
Oct 02 11:31:21 compute-0 systemd[1]: libpod-d454e63e53d316d9f92af8991faa5164c54e9f0b1d2f58234cffd0244cab6cca.scope: Deactivated successfully.
Oct 02 11:31:21 compute-0 podman[74608]: 2025-10-02 11:31:21.276315959 +0000 UTC m=+0.022852444 container died d454e63e53d316d9f92af8991faa5164c54e9f0b1d2f58234cffd0244cab6cca (image=quay.io/ceph/ceph:v18, name=crazy_meitner, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Oct 02 11:31:21 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: ignoring --setuser ceph since I am not root
Oct 02 11:31:21 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: ignoring --setgroup ceph since I am not root
Oct 02 11:31:21 compute-0 ceph-mgr[73901]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct 02 11:31:21 compute-0 ceph-mgr[73901]: pidfile_write: ignore empty --pid-file
Oct 02 11:31:21 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'alerts'
Oct 02 11:31:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1702907958' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct 02 11:31:21 compute-0 ceph-mgr[73901]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 02 11:31:21 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'balancer'
Oct 02 11:31:21 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:21.897+0000 7f4cd67f2140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 02 11:31:22 compute-0 ceph-mgr[73901]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 02 11:31:22 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'cephadm'
Oct 02 11:31:22 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:22.165+0000 7f4cd67f2140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 02 11:31:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cb9106471812b4029fcb20462c02bce42d7861b8518e1ef62589f6a73a3f998-merged.mount: Deactivated successfully.
Oct 02 11:31:22 compute-0 podman[74608]: 2025-10-02 11:31:22.672154636 +0000 UTC m=+1.418691111 container remove d454e63e53d316d9f92af8991faa5164c54e9f0b1d2f58234cffd0244cab6cca (image=quay.io/ceph/ceph:v18, name=crazy_meitner, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:31:22 compute-0 systemd[1]: libpod-conmon-d454e63e53d316d9f92af8991faa5164c54e9f0b1d2f58234cffd0244cab6cca.scope: Deactivated successfully.
Oct 02 11:31:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1702907958' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct 02 11:31:22 compute-0 ceph-mon[73607]: mgrmap e5: compute-0.fmcstn(active, since 7s)
Oct 02 11:31:22 compute-0 podman[74648]: 2025-10-02 11:31:22.768748176 +0000 UTC m=+0.073177994 container create a98d7cec1eec9f0f9acc5bfe3320eea2a1e726a6ba99088f361f9dc186742279 (image=quay.io/ceph/ceph:v18, name=practical_knuth, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:31:22 compute-0 systemd[1]: Started libpod-conmon-a98d7cec1eec9f0f9acc5bfe3320eea2a1e726a6ba99088f361f9dc186742279.scope.
Oct 02 11:31:22 compute-0 podman[74648]: 2025-10-02 11:31:22.715630709 +0000 UTC m=+0.020060557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:31:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:31:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7773601ae97d8aec79fe53d3d35c51ff3dd9a0e34a593e40d42f2ef526f1c8e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7773601ae97d8aec79fe53d3d35c51ff3dd9a0e34a593e40d42f2ef526f1c8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7773601ae97d8aec79fe53d3d35c51ff3dd9a0e34a593e40d42f2ef526f1c8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:22 compute-0 podman[74648]: 2025-10-02 11:31:22.838337532 +0000 UTC m=+0.142767360 container init a98d7cec1eec9f0f9acc5bfe3320eea2a1e726a6ba99088f361f9dc186742279 (image=quay.io/ceph/ceph:v18, name=practical_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:31:22 compute-0 podman[74648]: 2025-10-02 11:31:22.843735113 +0000 UTC m=+0.148164931 container start a98d7cec1eec9f0f9acc5bfe3320eea2a1e726a6ba99088f361f9dc186742279 (image=quay.io/ceph/ceph:v18, name=practical_knuth, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 11:31:22 compute-0 podman[74648]: 2025-10-02 11:31:22.8671485 +0000 UTC m=+0.171578338 container attach a98d7cec1eec9f0f9acc5bfe3320eea2a1e726a6ba99088f361f9dc186742279 (image=quay.io/ceph/ceph:v18, name=practical_knuth, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 11:31:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 02 11:31:23 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2420400555' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 11:31:23 compute-0 practical_knuth[74664]: {
Oct 02 11:31:23 compute-0 practical_knuth[74664]:     "epoch": 5,
Oct 02 11:31:23 compute-0 practical_knuth[74664]:     "available": true,
Oct 02 11:31:23 compute-0 practical_knuth[74664]:     "active_name": "compute-0.fmcstn",
Oct 02 11:31:23 compute-0 practical_knuth[74664]:     "num_standby": 0
Oct 02 11:31:23 compute-0 practical_knuth[74664]: }
Oct 02 11:31:23 compute-0 systemd[1]: libpod-a98d7cec1eec9f0f9acc5bfe3320eea2a1e726a6ba99088f361f9dc186742279.scope: Deactivated successfully.
Oct 02 11:31:23 compute-0 podman[74690]: 2025-10-02 11:31:23.447029899 +0000 UTC m=+0.021008870 container died a98d7cec1eec9f0f9acc5bfe3320eea2a1e726a6ba99088f361f9dc186742279 (image=quay.io/ceph/ceph:v18, name=practical_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:31:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7773601ae97d8aec79fe53d3d35c51ff3dd9a0e34a593e40d42f2ef526f1c8e-merged.mount: Deactivated successfully.
Oct 02 11:31:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2420400555' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 11:31:23 compute-0 podman[74690]: 2025-10-02 11:31:23.823954131 +0000 UTC m=+0.397933072 container remove a98d7cec1eec9f0f9acc5bfe3320eea2a1e726a6ba99088f361f9dc186742279 (image=quay.io/ceph/ceph:v18, name=practical_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 11:31:23 compute-0 systemd[1]: libpod-conmon-a98d7cec1eec9f0f9acc5bfe3320eea2a1e726a6ba99088f361f9dc186742279.scope: Deactivated successfully.
Oct 02 11:31:23 compute-0 podman[74716]: 2025-10-02 11:31:23.871123254 +0000 UTC m=+0.023810078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:31:24 compute-0 podman[74716]: 2025-10-02 11:31:24.050493769 +0000 UTC m=+0.203180583 container create 25a63e33363612473d9f9c9ef39e9808e6f22da5ded2421ef4700ec2eb86cc9c (image=quay.io/ceph/ceph:v18, name=epic_lovelace, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:31:24 compute-0 systemd[1]: Started libpod-conmon-25a63e33363612473d9f9c9ef39e9808e6f22da5ded2421ef4700ec2eb86cc9c.scope.
Oct 02 11:31:24 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'crash'
Oct 02 11:31:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:31:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bb97671ebcbe591ea7039681cd35e02b7f347feb1d853cde957200edba6b571/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bb97671ebcbe591ea7039681cd35e02b7f347feb1d853cde957200edba6b571/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bb97671ebcbe591ea7039681cd35e02b7f347feb1d853cde957200edba6b571/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:24 compute-0 podman[74716]: 2025-10-02 11:31:24.218740435 +0000 UTC m=+0.371427259 container init 25a63e33363612473d9f9c9ef39e9808e6f22da5ded2421ef4700ec2eb86cc9c (image=quay.io/ceph/ceph:v18, name=epic_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 11:31:24 compute-0 podman[74716]: 2025-10-02 11:31:24.225607611 +0000 UTC m=+0.378294415 container start 25a63e33363612473d9f9c9ef39e9808e6f22da5ded2421ef4700ec2eb86cc9c (image=quay.io/ceph/ceph:v18, name=epic_lovelace, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 11:31:24 compute-0 podman[74716]: 2025-10-02 11:31:24.33784428 +0000 UTC m=+0.490531094 container attach 25a63e33363612473d9f9c9ef39e9808e6f22da5ded2421ef4700ec2eb86cc9c (image=quay.io/ceph/ceph:v18, name=epic_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 11:31:24 compute-0 ceph-mgr[73901]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 02 11:31:24 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'dashboard'
Oct 02 11:31:24 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:24.428+0000 7f4cd67f2140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 02 11:31:25 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'devicehealth'
Oct 02 11:31:26 compute-0 ceph-mgr[73901]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 02 11:31:26 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'diskprediction_local'
Oct 02 11:31:26 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:26.164+0000 7f4cd67f2140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 02 11:31:26 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 02 11:31:26 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 02 11:31:26 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]:   from numpy import show_config as show_numpy_config
Oct 02 11:31:26 compute-0 ceph-mgr[73901]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 02 11:31:26 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:26.760+0000 7f4cd67f2140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 02 11:31:26 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'influx'
Oct 02 11:31:27 compute-0 ceph-mgr[73901]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 02 11:31:27 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:27.022+0000 7f4cd67f2140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 02 11:31:27 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'insights'
Oct 02 11:31:27 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'iostat'
Oct 02 11:31:27 compute-0 ceph-mgr[73901]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 02 11:31:27 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'k8sevents'
Oct 02 11:31:27 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:27.579+0000 7f4cd67f2140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 02 11:31:29 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'localpool'
Oct 02 11:31:29 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'mds_autoscaler'
Oct 02 11:31:30 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'mirroring'
Oct 02 11:31:30 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'nfs'
Oct 02 11:31:31 compute-0 ceph-mgr[73901]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 02 11:31:31 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'orchestrator'
Oct 02 11:31:31 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:31.510+0000 7f4cd67f2140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 02 11:31:32 compute-0 ceph-mgr[73901]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 02 11:31:32 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:32.215+0000 7f4cd67f2140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 02 11:31:32 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'osd_perf_query'
Oct 02 11:31:32 compute-0 ceph-mgr[73901]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 02 11:31:32 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'osd_support'
Oct 02 11:31:32 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:32.500+0000 7f4cd67f2140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 02 11:31:32 compute-0 ceph-mgr[73901]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 02 11:31:32 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:32.780+0000 7f4cd67f2140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 02 11:31:32 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'pg_autoscaler'
Oct 02 11:31:33 compute-0 ceph-mgr[73901]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 02 11:31:33 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'progress'
Oct 02 11:31:33 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:33.084+0000 7f4cd67f2140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 02 11:31:33 compute-0 ceph-mgr[73901]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 02 11:31:33 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:33.335+0000 7f4cd67f2140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 02 11:31:33 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'prometheus'
Oct 02 11:31:34 compute-0 ceph-mgr[73901]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 02 11:31:34 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:34.378+0000 7f4cd67f2140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 02 11:31:34 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'rbd_support'
Oct 02 11:31:34 compute-0 ceph-mgr[73901]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 02 11:31:34 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'restful'
Oct 02 11:31:34 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:34.698+0000 7f4cd67f2140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 02 11:31:35 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'rgw'
Oct 02 11:31:36 compute-0 ceph-mgr[73901]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 02 11:31:36 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:36.191+0000 7f4cd67f2140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 02 11:31:36 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'rook'
Oct 02 11:31:38 compute-0 ceph-mgr[73901]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 02 11:31:38 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:38.302+0000 7f4cd67f2140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 02 11:31:38 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'selftest'
Oct 02 11:31:38 compute-0 ceph-mgr[73901]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 02 11:31:38 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:38.563+0000 7f4cd67f2140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 02 11:31:38 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'snap_schedule'
Oct 02 11:31:38 compute-0 ceph-mgr[73901]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 02 11:31:38 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:38.822+0000 7f4cd67f2140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 02 11:31:38 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'stats'
Oct 02 11:31:39 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'status'
Oct 02 11:31:39 compute-0 ceph-mgr[73901]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 02 11:31:39 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'telegraf'
Oct 02 11:31:39 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:39.338+0000 7f4cd67f2140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 02 11:31:39 compute-0 ceph-mgr[73901]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 02 11:31:39 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'telemetry'
Oct 02 11:31:39 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:39.588+0000 7f4cd67f2140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 02 11:31:40 compute-0 ceph-mgr[73901]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 02 11:31:40 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:40.273+0000 7f4cd67f2140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 02 11:31:40 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'test_orchestrator'
Oct 02 11:31:41 compute-0 ceph-mgr[73901]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 02 11:31:41 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'volumes'
Oct 02 11:31:41 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:41.042+0000 7f4cd67f2140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 02 11:31:41 compute-0 ceph-mgr[73901]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 02 11:31:41 compute-0 ceph-mgr[73901]: mgr[py] Loading python module 'zabbix'
Oct 02 11:31:41 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:41.802+0000 7f4cd67f2140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 02 11:31:42 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:31:42.050+0000 7f4cd67f2140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 02 11:31:42 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : Active manager daemon compute-0.fmcstn restarted
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: ms_deliver_dispatch: unhandled message 0x55d2a56ce420 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Oct 02 11:31:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Oct 02 11:31:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 11:31:42 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.fmcstn
Oct 02 11:31:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 02 11:31:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 02 11:31:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: mgr handle_mgr_map Activating!
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: mgr handle_mgr_map I am now activating
Oct 02 11:31:42 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Oct 02 11:31:42 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.fmcstn(active, starting, since 0.208198s)
Oct 02 11:31:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 02 11:31:42 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 11:31:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.fmcstn", "id": "compute-0.fmcstn"} v 0) v1
Oct 02 11:31:42 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mgr metadata", "who": "compute-0.fmcstn", "id": "compute-0.fmcstn"}]: dispatch
Oct 02 11:31:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Oct 02 11:31:42 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 02 11:31:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).mds e1 all = 1
Oct 02 11:31:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 02 11:31:42 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 11:31:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Oct 02 11:31:42 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: mgr load Constructed class from module: balancer
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Starting
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_11:31:42
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [balancer INFO root] No pools available
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Oct 02 11:31:42 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : Manager daemon compute-0.fmcstn is now available
Oct 02 11:31:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Oct 02 11:31:42 compute-0 ceph-mon[73607]: Active manager daemon compute-0.fmcstn restarted
Oct 02 11:31:42 compute-0 ceph-mon[73607]: Activating manager daemon compute-0.fmcstn
Oct 02 11:31:42 compute-0 ceph-mon[73607]: osdmap e2: 0 total, 0 up, 0 in
Oct 02 11:31:42 compute-0 ceph-mon[73607]: mgrmap e6: compute-0.fmcstn(active, starting, since 0.208198s)
Oct 02 11:31:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 11:31:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mgr metadata", "who": "compute-0.fmcstn", "id": "compute-0.fmcstn"}]: dispatch
Oct 02 11:31:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 02 11:31:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 11:31:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 02 11:31:42 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Oct 02 11:31:42 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: mgr load Constructed class from module: cephadm
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: mgr load Constructed class from module: crash
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: mgr load Constructed class from module: devicehealth
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [devicehealth INFO root] Starting
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: mgr load Constructed class from module: iostat
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: mgr load Constructed class from module: nfs
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: mgr load Constructed class from module: orchestrator
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: mgr load Constructed class from module: pg_autoscaler
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: mgr load Constructed class from module: progress
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [progress INFO root] Loading...
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [progress INFO root] No stored events to load
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [progress INFO root] Loaded [] historic events
Oct 02 11:31:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 02 11:31:42 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [progress INFO root] Loaded OSDMap, ready.
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:31:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 02 11:31:42 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] recovery thread starting
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] starting setup
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: mgr load Constructed class from module: rbd_support
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: mgr load Constructed class from module: restful
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: mgr load Constructed class from module: status
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [restful INFO root] server_addr: :: server_port: 8003
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: mgr load Constructed class from module: telemetry
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:31:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmcstn/mirror_snapshot_schedule"} v 0) v1
Oct 02 11:31:42 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmcstn/mirror_snapshot_schedule"}]: dispatch
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [restful WARNING root] server not running: no certificate configured
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] PerfHandler: starting
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TaskHandler: starting
Oct 02 11:31:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmcstn/trash_purge_schedule"} v 0) v1
Oct 02 11:31:42 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmcstn/trash_purge_schedule"}]: dispatch
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] setup complete
Oct 02 11:31:42 compute-0 ceph-mgr[73901]: mgr load Constructed class from module: volumes
Oct 02 11:31:43 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.fmcstn(active, since 1.21336s)
Oct 02 11:31:43 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct 02 11:31:43 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct 02 11:31:43 compute-0 epic_lovelace[74733]: {
Oct 02 11:31:43 compute-0 epic_lovelace[74733]:     "mgrmap_epoch": 7,
Oct 02 11:31:43 compute-0 epic_lovelace[74733]:     "initialized": true
Oct 02 11:31:43 compute-0 epic_lovelace[74733]: }
Oct 02 11:31:43 compute-0 systemd[1]: libpod-25a63e33363612473d9f9c9ef39e9808e6f22da5ded2421ef4700ec2eb86cc9c.scope: Deactivated successfully.
Oct 02 11:31:43 compute-0 podman[74716]: 2025-10-02 11:31:43.299868658 +0000 UTC m=+19.452555462 container died 25a63e33363612473d9f9c9ef39e9808e6f22da5ded2421ef4700ec2eb86cc9c (image=quay.io/ceph/ceph:v18, name=epic_lovelace, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:31:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0bb97671ebcbe591ea7039681cd35e02b7f347feb1d853cde957200edba6b571-merged.mount: Deactivated successfully.
Oct 02 11:31:43 compute-0 podman[74716]: 2025-10-02 11:31:43.346916809 +0000 UTC m=+19.499603613 container remove 25a63e33363612473d9f9c9ef39e9808e6f22da5ded2421ef4700ec2eb86cc9c (image=quay.io/ceph/ceph:v18, name=epic_lovelace, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:31:43 compute-0 ceph-mon[73607]: Found migration_current of "None". Setting to last migration.
Oct 02 11:31:43 compute-0 ceph-mon[73607]: Manager daemon compute-0.fmcstn is now available
Oct 02 11:31:43 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:43 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:43 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 11:31:43 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 11:31:43 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmcstn/mirror_snapshot_schedule"}]: dispatch
Oct 02 11:31:43 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmcstn/trash_purge_schedule"}]: dispatch
Oct 02 11:31:43 compute-0 ceph-mon[73607]: mgrmap e7: compute-0.fmcstn(active, since 1.21336s)
Oct 02 11:31:43 compute-0 systemd[1]: libpod-conmon-25a63e33363612473d9f9c9ef39e9808e6f22da5ded2421ef4700ec2eb86cc9c.scope: Deactivated successfully.
Oct 02 11:31:43 compute-0 podman[74883]: 2025-10-02 11:31:43.39635973 +0000 UTC m=+0.032691662 container create ddcfaf0880e9e62e07dba231237171b7ca0354b7adb4b52a9c6f887aed3141b9 (image=quay.io/ceph/ceph:v18, name=epic_kepler, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:31:43 compute-0 systemd[1]: Started libpod-conmon-ddcfaf0880e9e62e07dba231237171b7ca0354b7adb4b52a9c6f887aed3141b9.scope.
Oct 02 11:31:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:31:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839afa5d33208732f6c156515daa1ffc72081bea62e4cd08715e1f843451a407/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839afa5d33208732f6c156515daa1ffc72081bea62e4cd08715e1f843451a407/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839afa5d33208732f6c156515daa1ffc72081bea62e4cd08715e1f843451a407/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:43 compute-0 podman[74883]: 2025-10-02 11:31:43.458576282 +0000 UTC m=+0.094908244 container init ddcfaf0880e9e62e07dba231237171b7ca0354b7adb4b52a9c6f887aed3141b9 (image=quay.io/ceph/ceph:v18, name=epic_kepler, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 11:31:43 compute-0 podman[74883]: 2025-10-02 11:31:43.463582505 +0000 UTC m=+0.099914437 container start ddcfaf0880e9e62e07dba231237171b7ca0354b7adb4b52a9c6f887aed3141b9 (image=quay.io/ceph/ceph:v18, name=epic_kepler, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:31:43 compute-0 podman[74883]: 2025-10-02 11:31:43.467438409 +0000 UTC m=+0.103770341 container attach ddcfaf0880e9e62e07dba231237171b7ca0354b7adb4b52a9c6f887aed3141b9 (image=quay.io/ceph/ceph:v18, name=epic_kepler, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:31:43 compute-0 podman[74883]: 2025-10-02 11:31:43.382386408 +0000 UTC m=+0.018718370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:31:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Oct 02 11:31:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Oct 02 11:31:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:44 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:31:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Oct 02 11:31:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 02 11:31:44 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 11:31:44 compute-0 systemd[1]: libpod-ddcfaf0880e9e62e07dba231237171b7ca0354b7adb4b52a9c6f887aed3141b9.scope: Deactivated successfully.
Oct 02 11:31:44 compute-0 podman[74883]: 2025-10-02 11:31:44.034097157 +0000 UTC m=+0.670429119 container died ddcfaf0880e9e62e07dba231237171b7ca0354b7adb4b52a9c6f887aed3141b9 (image=quay.io/ceph/ceph:v18, name=epic_kepler, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 11:31:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-839afa5d33208732f6c156515daa1ffc72081bea62e4cd08715e1f843451a407-merged.mount: Deactivated successfully.
Oct 02 11:31:44 compute-0 podman[74883]: 2025-10-02 11:31:44.073817169 +0000 UTC m=+0.710149101 container remove ddcfaf0880e9e62e07dba231237171b7ca0354b7adb4b52a9c6f887aed3141b9 (image=quay.io/ceph/ceph:v18, name=epic_kepler, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 11:31:44 compute-0 systemd[1]: libpod-conmon-ddcfaf0880e9e62e07dba231237171b7ca0354b7adb4b52a9c6f887aed3141b9.scope: Deactivated successfully.
Oct 02 11:31:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019924753 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:31:44 compute-0 podman[74941]: 2025-10-02 11:31:44.141661989 +0000 UTC m=+0.049050691 container create 00e38a0e0f8d562e1bd90ae8d4606403cf88948a21db2c7a1f41ae238a7b4d14 (image=quay.io/ceph/ceph:v18, name=zen_goldwasser, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 11:31:44 compute-0 systemd[1]: Started libpod-conmon-00e38a0e0f8d562e1bd90ae8d4606403cf88948a21db2c7a1f41ae238a7b4d14.scope.
Oct 02 11:31:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:31:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebfd79c5797a25d85b448f7fa9167fdac1dea169da5c81dc91eab9017323f5c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebfd79c5797a25d85b448f7fa9167fdac1dea169da5c81dc91eab9017323f5c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebfd79c5797a25d85b448f7fa9167fdac1dea169da5c81dc91eab9017323f5c6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:44 compute-0 podman[74941]: 2025-10-02 11:31:44.205525512 +0000 UTC m=+0.112914104 container init 00e38a0e0f8d562e1bd90ae8d4606403cf88948a21db2c7a1f41ae238a7b4d14 (image=quay.io/ceph/ceph:v18, name=zen_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:31:44 compute-0 podman[74941]: 2025-10-02 11:31:44.111505102 +0000 UTC m=+0.018893694 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:31:44 compute-0 podman[74941]: 2025-10-02 11:31:44.210472683 +0000 UTC m=+0.117861285 container start 00e38a0e0f8d562e1bd90ae8d4606403cf88948a21db2c7a1f41ae238a7b4d14 (image=quay.io/ceph/ceph:v18, name=zen_goldwasser, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:31:44 compute-0 podman[74941]: 2025-10-02 11:31:44.217972747 +0000 UTC m=+0.125361339 container attach 00e38a0e0f8d562e1bd90ae8d4606403cf88948a21db2c7a1f41ae238a7b4d14 (image=quay.io/ceph/ceph:v18, name=zen_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:31:44 compute-0 ceph-mgr[73901]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 11:31:44 compute-0 ceph-mon[73607]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct 02 11:31:44 compute-0 ceph-mon[73607]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct 02 11:31:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 11:31:44 compute-0 ceph-mgr[73901]: [cephadm INFO cherrypy.error] [02/Oct/2025:11:31:44] ENGINE Bus STARTING
Oct 02 11:31:44 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : [02/Oct/2025:11:31:44] ENGINE Bus STARTING
Oct 02 11:31:44 compute-0 ceph-mgr[73901]: [cephadm INFO cherrypy.error] [02/Oct/2025:11:31:44] ENGINE Serving on http://192.168.122.100:8765
Oct 02 11:31:44 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : [02/Oct/2025:11:31:44] ENGINE Serving on http://192.168.122.100:8765
Oct 02 11:31:44 compute-0 ceph-mgr[73901]: [cephadm INFO cherrypy.error] [02/Oct/2025:11:31:44] ENGINE Serving on https://192.168.122.100:7150
Oct 02 11:31:44 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : [02/Oct/2025:11:31:44] ENGINE Serving on https://192.168.122.100:7150
Oct 02 11:31:44 compute-0 ceph-mgr[73901]: [cephadm INFO cherrypy.error] [02/Oct/2025:11:31:44] ENGINE Bus STARTED
Oct 02 11:31:44 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : [02/Oct/2025:11:31:44] ENGINE Bus STARTED
Oct 02 11:31:44 compute-0 ceph-mgr[73901]: [cephadm INFO cherrypy.error] [02/Oct/2025:11:31:44] ENGINE Client ('192.168.122.100', 56922) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 02 11:31:44 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : [02/Oct/2025:11:31:44] ENGINE Client ('192.168.122.100', 56922) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 02 11:31:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 02 11:31:44 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 11:31:44 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:31:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Oct 02 11:31:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:44 compute-0 ceph-mgr[73901]: [cephadm INFO root] Set ssh ssh_user
Oct 02 11:31:44 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Oct 02 11:31:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Oct 02 11:31:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:44 compute-0 ceph-mgr[73901]: [cephadm INFO root] Set ssh ssh_config
Oct 02 11:31:44 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Oct 02 11:31:44 compute-0 ceph-mgr[73901]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Oct 02 11:31:44 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Oct 02 11:31:44 compute-0 zen_goldwasser[74957]: ssh user set to ceph-admin. sudo will be used
Oct 02 11:31:44 compute-0 systemd[1]: libpod-00e38a0e0f8d562e1bd90ae8d4606403cf88948a21db2c7a1f41ae238a7b4d14.scope: Deactivated successfully.
Oct 02 11:31:44 compute-0 podman[74941]: 2025-10-02 11:31:44.730873579 +0000 UTC m=+0.638262151 container died 00e38a0e0f8d562e1bd90ae8d4606403cf88948a21db2c7a1f41ae238a7b4d14 (image=quay.io/ceph/ceph:v18, name=zen_goldwasser, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:31:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebfd79c5797a25d85b448f7fa9167fdac1dea169da5c81dc91eab9017323f5c6-merged.mount: Deactivated successfully.
Oct 02 11:31:44 compute-0 podman[74941]: 2025-10-02 11:31:44.769069474 +0000 UTC m=+0.676458046 container remove 00e38a0e0f8d562e1bd90ae8d4606403cf88948a21db2c7a1f41ae238a7b4d14 (image=quay.io/ceph/ceph:v18, name=zen_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 11:31:44 compute-0 systemd[1]: libpod-conmon-00e38a0e0f8d562e1bd90ae8d4606403cf88948a21db2c7a1f41ae238a7b4d14.scope: Deactivated successfully.
Oct 02 11:31:44 compute-0 podman[75020]: 2025-10-02 11:31:44.824019468 +0000 UTC m=+0.038588465 container create 4a36c78f88fe35127257422d4060ac31fedd042c3cc4d4b13e7d9bd8d794c402 (image=quay.io/ceph/ceph:v18, name=keen_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 11:31:44 compute-0 systemd[1]: Started libpod-conmon-4a36c78f88fe35127257422d4060ac31fedd042c3cc4d4b13e7d9bd8d794c402.scope.
Oct 02 11:31:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:31:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/138d0a47d17e809f029394684df4356c720070fdfc280df14c3513a44d65eecf/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/138d0a47d17e809f029394684df4356c720070fdfc280df14c3513a44d65eecf/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/138d0a47d17e809f029394684df4356c720070fdfc280df14c3513a44d65eecf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/138d0a47d17e809f029394684df4356c720070fdfc280df14c3513a44d65eecf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/138d0a47d17e809f029394684df4356c720070fdfc280df14c3513a44d65eecf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:44 compute-0 podman[75020]: 2025-10-02 11:31:44.885597865 +0000 UTC m=+0.100166872 container init 4a36c78f88fe35127257422d4060ac31fedd042c3cc4d4b13e7d9bd8d794c402 (image=quay.io/ceph/ceph:v18, name=keen_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 11:31:44 compute-0 podman[75020]: 2025-10-02 11:31:44.891326745 +0000 UTC m=+0.105895752 container start 4a36c78f88fe35127257422d4060ac31fedd042c3cc4d4b13e7d9bd8d794c402 (image=quay.io/ceph/ceph:v18, name=keen_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:31:44 compute-0 podman[75020]: 2025-10-02 11:31:44.89476673 +0000 UTC m=+0.109335727 container attach 4a36c78f88fe35127257422d4060ac31fedd042c3cc4d4b13e7d9bd8d794c402 (image=quay.io/ceph/ceph:v18, name=keen_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:31:44 compute-0 podman[75020]: 2025-10-02 11:31:44.80814264 +0000 UTC m=+0.022711647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:31:45 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.fmcstn(active, since 2s)
Oct 02 11:31:45 compute-0 ceph-mon[73607]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:31:45 compute-0 ceph-mon[73607]: [02/Oct/2025:11:31:44] ENGINE Bus STARTING
Oct 02 11:31:45 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 11:31:45 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:45 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:45 compute-0 ceph-mon[73607]: mgrmap e8: compute-0.fmcstn(active, since 2s)
Oct 02 11:31:45 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:31:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Oct 02 11:31:45 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:45 compute-0 ceph-mgr[73901]: [cephadm INFO root] Set ssh ssh_identity_key
Oct 02 11:31:45 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Oct 02 11:31:45 compute-0 ceph-mgr[73901]: [cephadm INFO root] Set ssh private key
Oct 02 11:31:45 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Set ssh private key
Oct 02 11:31:45 compute-0 systemd[1]: libpod-4a36c78f88fe35127257422d4060ac31fedd042c3cc4d4b13e7d9bd8d794c402.scope: Deactivated successfully.
Oct 02 11:31:45 compute-0 podman[75020]: 2025-10-02 11:31:45.482093963 +0000 UTC m=+0.696662960 container died 4a36c78f88fe35127257422d4060ac31fedd042c3cc4d4b13e7d9bd8d794c402 (image=quay.io/ceph/ceph:v18, name=keen_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:31:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-138d0a47d17e809f029394684df4356c720070fdfc280df14c3513a44d65eecf-merged.mount: Deactivated successfully.
Oct 02 11:31:45 compute-0 podman[75020]: 2025-10-02 11:31:45.530612931 +0000 UTC m=+0.745181928 container remove 4a36c78f88fe35127257422d4060ac31fedd042c3cc4d4b13e7d9bd8d794c402 (image=quay.io/ceph/ceph:v18, name=keen_shirley, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:31:45 compute-0 systemd[1]: libpod-conmon-4a36c78f88fe35127257422d4060ac31fedd042c3cc4d4b13e7d9bd8d794c402.scope: Deactivated successfully.
Oct 02 11:31:45 compute-0 podman[75075]: 2025-10-02 11:31:45.581388993 +0000 UTC m=+0.034882484 container create 1d3c47d12eb2957eb0a199e0e23ecec93cc1add23522348e2573ea78b1771bda (image=quay.io/ceph/ceph:v18, name=laughing_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 11:31:45 compute-0 systemd[1]: Started libpod-conmon-1d3c47d12eb2957eb0a199e0e23ecec93cc1add23522348e2573ea78b1771bda.scope.
Oct 02 11:31:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:31:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/888c5728b4db65c70fccf8dfbb6bd7294141b64129b96204ffe3cf6d5ed0c0cf/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/888c5728b4db65c70fccf8dfbb6bd7294141b64129b96204ffe3cf6d5ed0c0cf/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/888c5728b4db65c70fccf8dfbb6bd7294141b64129b96204ffe3cf6d5ed0c0cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/888c5728b4db65c70fccf8dfbb6bd7294141b64129b96204ffe3cf6d5ed0c0cf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/888c5728b4db65c70fccf8dfbb6bd7294141b64129b96204ffe3cf6d5ed0c0cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:45 compute-0 podman[75075]: 2025-10-02 11:31:45.656166814 +0000 UTC m=+0.109660315 container init 1d3c47d12eb2957eb0a199e0e23ecec93cc1add23522348e2573ea78b1771bda (image=quay.io/ceph/ceph:v18, name=laughing_burnell, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 11:31:45 compute-0 podman[75075]: 2025-10-02 11:31:45.566890739 +0000 UTC m=+0.020384250 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:31:45 compute-0 podman[75075]: 2025-10-02 11:31:45.664134478 +0000 UTC m=+0.117627979 container start 1d3c47d12eb2957eb0a199e0e23ecec93cc1add23522348e2573ea78b1771bda (image=quay.io/ceph/ceph:v18, name=laughing_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:31:45 compute-0 podman[75075]: 2025-10-02 11:31:45.666910336 +0000 UTC m=+0.120403827 container attach 1d3c47d12eb2957eb0a199e0e23ecec93cc1add23522348e2573ea78b1771bda (image=quay.io/ceph/ceph:v18, name=laughing_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:31:46 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:31:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Oct 02 11:31:46 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:46 compute-0 ceph-mgr[73901]: [cephadm INFO root] Set ssh ssh_identity_pub
Oct 02 11:31:46 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Oct 02 11:31:46 compute-0 systemd[1]: libpod-1d3c47d12eb2957eb0a199e0e23ecec93cc1add23522348e2573ea78b1771bda.scope: Deactivated successfully.
Oct 02 11:31:46 compute-0 podman[75075]: 2025-10-02 11:31:46.215924073 +0000 UTC m=+0.669417554 container died 1d3c47d12eb2957eb0a199e0e23ecec93cc1add23522348e2573ea78b1771bda (image=quay.io/ceph/ceph:v18, name=laughing_burnell, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 11:31:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-888c5728b4db65c70fccf8dfbb6bd7294141b64129b96204ffe3cf6d5ed0c0cf-merged.mount: Deactivated successfully.
Oct 02 11:31:46 compute-0 podman[75075]: 2025-10-02 11:31:46.25751635 +0000 UTC m=+0.711009841 container remove 1d3c47d12eb2957eb0a199e0e23ecec93cc1add23522348e2573ea78b1771bda (image=quay.io/ceph/ceph:v18, name=laughing_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:31:46 compute-0 systemd[1]: libpod-conmon-1d3c47d12eb2957eb0a199e0e23ecec93cc1add23522348e2573ea78b1771bda.scope: Deactivated successfully.
Oct 02 11:31:46 compute-0 ceph-mgr[73901]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 11:31:46 compute-0 podman[75131]: 2025-10-02 11:31:46.309956104 +0000 UTC m=+0.037045168 container create 12cd0917b1694dea65bad0f8c0c23f9ca99ac077ab116620bdf67d6f45de44e7 (image=quay.io/ceph/ceph:v18, name=zen_neumann, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:31:46 compute-0 systemd[1]: Started libpod-conmon-12cd0917b1694dea65bad0f8c0c23f9ca99ac077ab116620bdf67d6f45de44e7.scope.
Oct 02 11:31:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eddd0fbf9d15c8c9ebe094d918bf661dcccecd9e8872720d9a0b7681e11735fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eddd0fbf9d15c8c9ebe094d918bf661dcccecd9e8872720d9a0b7681e11735fd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eddd0fbf9d15c8c9ebe094d918bf661dcccecd9e8872720d9a0b7681e11735fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:46 compute-0 podman[75131]: 2025-10-02 11:31:46.387223675 +0000 UTC m=+0.114312749 container init 12cd0917b1694dea65bad0f8c0c23f9ca99ac077ab116620bdf67d6f45de44e7 (image=quay.io/ceph/ceph:v18, name=zen_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:31:46 compute-0 podman[75131]: 2025-10-02 11:31:46.291286577 +0000 UTC m=+0.018375631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:31:46 compute-0 podman[75131]: 2025-10-02 11:31:46.394730798 +0000 UTC m=+0.121819832 container start 12cd0917b1694dea65bad0f8c0c23f9ca99ac077ab116620bdf67d6f45de44e7 (image=quay.io/ceph/ceph:v18, name=zen_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:31:46 compute-0 podman[75131]: 2025-10-02 11:31:46.398725906 +0000 UTC m=+0.125814940 container attach 12cd0917b1694dea65bad0f8c0c23f9ca99ac077ab116620bdf67d6f45de44e7 (image=quay.io/ceph/ceph:v18, name=zen_neumann, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 11:31:46 compute-0 ceph-mon[73607]: [02/Oct/2025:11:31:44] ENGINE Serving on http://192.168.122.100:8765
Oct 02 11:31:46 compute-0 ceph-mon[73607]: [02/Oct/2025:11:31:44] ENGINE Serving on https://192.168.122.100:7150
Oct 02 11:31:46 compute-0 ceph-mon[73607]: [02/Oct/2025:11:31:44] ENGINE Bus STARTED
Oct 02 11:31:46 compute-0 ceph-mon[73607]: [02/Oct/2025:11:31:44] ENGINE Client ('192.168.122.100', 56922) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 02 11:31:46 compute-0 ceph-mon[73607]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:31:46 compute-0 ceph-mon[73607]: Set ssh ssh_user
Oct 02 11:31:46 compute-0 ceph-mon[73607]: Set ssh ssh_config
Oct 02 11:31:46 compute-0 ceph-mon[73607]: ssh user set to ceph-admin. sudo will be used
Oct 02 11:31:46 compute-0 ceph-mon[73607]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:31:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:46 compute-0 ceph-mon[73607]: Set ssh ssh_identity_key
Oct 02 11:31:46 compute-0 ceph-mon[73607]: Set ssh private key
Oct 02 11:31:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:46 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:31:46 compute-0 zen_neumann[75147]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBEijvS5OpHJRmoJlraOYy+rE6xSjNQxe9r/nwEB/mX+IbyDzx7clOJq0Cd78RWcnG9PLKktL5JIM1T1j9XB2fRZrWWrdno+mv7h6SwCgJbok7/Gl4BQzgEixGo0zDbqE8KFAmpgQg+KPl2UIGIpwtgt7GWYt9FKhUompGbFeTFa3Lt5XxtrWfbW2sPEiCbsYw9mQX0VhS8bWMW3yl/xxsv8EFx1lQmNVQfFymNVUaJCUxX9Yo19UbPDUQn+IY64Kws0RTIktY9IP/hlT3M6EDg1nTpuc5m5+krVY+ggdoKd4NTghXiKqobHpeVofoU1n+jH6HZtg5/IjJr7w+BTzk6pD/0ZgigAaTa8jzga1YnFFINsqzmbknJvPrAFxLPfupwPZlC/cxq1ZGR15aA/NwPOF1J3dhjJESqcfBQtqbPTe2RctCxwcS0wg+NzRRzPLE85LEAYDrjp0i/FxwWmf1GVv28QHFv4mx1U/kND+aHjQDrk93q2n8FJ4AA90unrc= zuul@controller
Oct 02 11:31:46 compute-0 systemd[1]: libpod-12cd0917b1694dea65bad0f8c0c23f9ca99ac077ab116620bdf67d6f45de44e7.scope: Deactivated successfully.
Oct 02 11:31:46 compute-0 podman[75131]: 2025-10-02 11:31:46.950283945 +0000 UTC m=+0.677373019 container died 12cd0917b1694dea65bad0f8c0c23f9ca99ac077ab116620bdf67d6f45de44e7 (image=quay.io/ceph/ceph:v18, name=zen_neumann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:31:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-eddd0fbf9d15c8c9ebe094d918bf661dcccecd9e8872720d9a0b7681e11735fd-merged.mount: Deactivated successfully.
Oct 02 11:31:47 compute-0 podman[75131]: 2025-10-02 11:31:47.004937242 +0000 UTC m=+0.732026276 container remove 12cd0917b1694dea65bad0f8c0c23f9ca99ac077ab116620bdf67d6f45de44e7 (image=quay.io/ceph/ceph:v18, name=zen_neumann, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:31:47 compute-0 systemd[1]: libpod-conmon-12cd0917b1694dea65bad0f8c0c23f9ca99ac077ab116620bdf67d6f45de44e7.scope: Deactivated successfully.
Oct 02 11:31:47 compute-0 podman[75185]: 2025-10-02 11:31:47.059042447 +0000 UTC m=+0.036763282 container create fd049da1db3ed0b2f59dabd54044d4428eb5e88f5fb0624498addc254d3e14fd (image=quay.io/ceph/ceph:v18, name=interesting_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 11:31:47 compute-0 systemd[1]: Started libpod-conmon-fd049da1db3ed0b2f59dabd54044d4428eb5e88f5fb0624498addc254d3e14fd.scope.
Oct 02 11:31:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21b3d26a537ca6c74d3b0508245ca55e53849afcb3edb655843798c439c59de3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21b3d26a537ca6c74d3b0508245ca55e53849afcb3edb655843798c439c59de3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21b3d26a537ca6c74d3b0508245ca55e53849afcb3edb655843798c439c59de3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:47 compute-0 podman[75185]: 2025-10-02 11:31:47.124816286 +0000 UTC m=+0.102537181 container init fd049da1db3ed0b2f59dabd54044d4428eb5e88f5fb0624498addc254d3e14fd (image=quay.io/ceph/ceph:v18, name=interesting_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Oct 02 11:31:47 compute-0 podman[75185]: 2025-10-02 11:31:47.131364626 +0000 UTC m=+0.109085461 container start fd049da1db3ed0b2f59dabd54044d4428eb5e88f5fb0624498addc254d3e14fd (image=quay.io/ceph/ceph:v18, name=interesting_keller, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 11:31:47 compute-0 podman[75185]: 2025-10-02 11:31:47.134258187 +0000 UTC m=+0.111979022 container attach fd049da1db3ed0b2f59dabd54044d4428eb5e88f5fb0624498addc254d3e14fd (image=quay.io/ceph/ceph:v18, name=interesting_keller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:31:47 compute-0 podman[75185]: 2025-10-02 11:31:47.042701166 +0000 UTC m=+0.020422021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:31:47 compute-0 ceph-mon[73607]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:31:47 compute-0 ceph-mon[73607]: Set ssh ssh_identity_pub
Oct 02 11:31:47 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:31:47 compute-0 sshd-session[75228]: Accepted publickey for ceph-admin from 192.168.122.100 port 46074 ssh2: RSA SHA256:hipYxrQVnF7kc7v45q1bE4o8jwYBbChdRkFf71jCzQc
Oct 02 11:31:47 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Oct 02 11:31:47 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 02 11:31:47 compute-0 systemd-logind[789]: New session 22 of user ceph-admin.
Oct 02 11:31:47 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 02 11:31:47 compute-0 systemd[1]: Starting User Manager for UID 42477...
Oct 02 11:31:47 compute-0 systemd[75232]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:31:47 compute-0 systemd[75232]: Queued start job for default target Main User Target.
Oct 02 11:31:47 compute-0 sshd-session[75245]: Accepted publickey for ceph-admin from 192.168.122.100 port 46086 ssh2: RSA SHA256:hipYxrQVnF7kc7v45q1bE4o8jwYBbChdRkFf71jCzQc
Oct 02 11:31:47 compute-0 systemd[75232]: Created slice User Application Slice.
Oct 02 11:31:47 compute-0 systemd[75232]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 02 11:31:47 compute-0 systemd[75232]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 11:31:47 compute-0 systemd[75232]: Reached target Paths.
Oct 02 11:31:47 compute-0 systemd[75232]: Reached target Timers.
Oct 02 11:31:47 compute-0 systemd[75232]: Starting D-Bus User Message Bus Socket...
Oct 02 11:31:47 compute-0 systemd[75232]: Starting Create User's Volatile Files and Directories...
Oct 02 11:31:47 compute-0 systemd-logind[789]: New session 24 of user ceph-admin.
Oct 02 11:31:48 compute-0 systemd[75232]: Finished Create User's Volatile Files and Directories.
Oct 02 11:31:48 compute-0 systemd[75232]: Listening on D-Bus User Message Bus Socket.
Oct 02 11:31:48 compute-0 systemd[75232]: Reached target Sockets.
Oct 02 11:31:48 compute-0 systemd[75232]: Reached target Basic System.
Oct 02 11:31:48 compute-0 systemd[75232]: Reached target Main User Target.
Oct 02 11:31:48 compute-0 systemd[75232]: Startup finished in 124ms.
Oct 02 11:31:48 compute-0 systemd[1]: Started User Manager for UID 42477.
Oct 02 11:31:48 compute-0 systemd[1]: Started Session 22 of User ceph-admin.
Oct 02 11:31:48 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Oct 02 11:31:48 compute-0 sshd-session[75228]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:31:48 compute-0 sshd-session[75245]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:31:48 compute-0 sudo[75252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:31:48 compute-0 sudo[75252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:48 compute-0 sudo[75252]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:48 compute-0 sudo[75277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:31:48 compute-0 sudo[75277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:48 compute-0 sudo[75277]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:48 compute-0 ceph-mgr[73901]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 11:31:48 compute-0 sshd-session[75302]: Accepted publickey for ceph-admin from 192.168.122.100 port 46100 ssh2: RSA SHA256:hipYxrQVnF7kc7v45q1bE4o8jwYBbChdRkFf71jCzQc
Oct 02 11:31:48 compute-0 systemd-logind[789]: New session 25 of user ceph-admin.
Oct 02 11:31:48 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Oct 02 11:31:48 compute-0 sshd-session[75302]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:31:48 compute-0 ceph-mon[73607]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:31:48 compute-0 sudo[75306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:31:48 compute-0 sudo[75306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:48 compute-0 sudo[75306]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:48 compute-0 sudo[75331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Oct 02 11:31:48 compute-0 sudo[75331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:48 compute-0 sudo[75331]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:48 compute-0 sshd-session[75356]: Accepted publickey for ceph-admin from 192.168.122.100 port 46104 ssh2: RSA SHA256:hipYxrQVnF7kc7v45q1bE4o8jwYBbChdRkFf71jCzQc
Oct 02 11:31:48 compute-0 systemd-logind[789]: New session 26 of user ceph-admin.
Oct 02 11:31:48 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Oct 02 11:31:48 compute-0 sshd-session[75356]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:31:48 compute-0 sudo[75360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:31:48 compute-0 sudo[75360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:48 compute-0 sudo[75360]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:48 compute-0 sudo[75385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Oct 02 11:31:48 compute-0 sudo[75385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:48 compute-0 sudo[75385]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:48 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Oct 02 11:31:48 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Oct 02 11:31:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053054 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:31:49 compute-0 sshd-session[75410]: Accepted publickey for ceph-admin from 192.168.122.100 port 46106 ssh2: RSA SHA256:hipYxrQVnF7kc7v45q1bE4o8jwYBbChdRkFf71jCzQc
Oct 02 11:31:49 compute-0 systemd-logind[789]: New session 27 of user ceph-admin.
Oct 02 11:31:49 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Oct 02 11:31:49 compute-0 sshd-session[75410]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:31:49 compute-0 sudo[75414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:31:49 compute-0 sudo[75414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:49 compute-0 sudo[75414]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:49 compute-0 sudo[75439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct 02 11:31:49 compute-0 sudo[75439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:49 compute-0 sudo[75439]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:49 compute-0 ceph-mon[73607]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:31:49 compute-0 ceph-mon[73607]: Deploying cephadm binary to compute-0
Oct 02 11:31:49 compute-0 sshd-session[75464]: Accepted publickey for ceph-admin from 192.168.122.100 port 46122 ssh2: RSA SHA256:hipYxrQVnF7kc7v45q1bE4o8jwYBbChdRkFf71jCzQc
Oct 02 11:31:49 compute-0 systemd-logind[789]: New session 28 of user ceph-admin.
Oct 02 11:31:49 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Oct 02 11:31:49 compute-0 sshd-session[75464]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:31:49 compute-0 sudo[75468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:31:49 compute-0 sudo[75468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:49 compute-0 sudo[75468]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:49 compute-0 sudo[75493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct 02 11:31:49 compute-0 sudo[75493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:49 compute-0 sudo[75493]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:49 compute-0 sshd-session[75518]: Accepted publickey for ceph-admin from 192.168.122.100 port 46128 ssh2: RSA SHA256:hipYxrQVnF7kc7v45q1bE4o8jwYBbChdRkFf71jCzQc
Oct 02 11:31:49 compute-0 systemd-logind[789]: New session 29 of user ceph-admin.
Oct 02 11:31:49 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Oct 02 11:31:49 compute-0 sshd-session[75518]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:31:50 compute-0 sudo[75522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:31:50 compute-0 sudo[75522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:50 compute-0 sudo[75522]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:50 compute-0 sudo[75547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Oct 02 11:31:50 compute-0 sudo[75547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:50 compute-0 sudo[75547]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:50 compute-0 ceph-mgr[73901]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 11:31:50 compute-0 sshd-session[75572]: Accepted publickey for ceph-admin from 192.168.122.100 port 46132 ssh2: RSA SHA256:hipYxrQVnF7kc7v45q1bE4o8jwYBbChdRkFf71jCzQc
Oct 02 11:31:50 compute-0 systemd-logind[789]: New session 30 of user ceph-admin.
Oct 02 11:31:50 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Oct 02 11:31:50 compute-0 sshd-session[75572]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:31:50 compute-0 sudo[75576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:31:50 compute-0 sudo[75576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:50 compute-0 sudo[75576]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:50 compute-0 sudo[75601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct 02 11:31:50 compute-0 sudo[75601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:50 compute-0 sudo[75601]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:50 compute-0 sshd-session[75626]: Accepted publickey for ceph-admin from 192.168.122.100 port 46144 ssh2: RSA SHA256:hipYxrQVnF7kc7v45q1bE4o8jwYBbChdRkFf71jCzQc
Oct 02 11:31:50 compute-0 systemd-logind[789]: New session 31 of user ceph-admin.
Oct 02 11:31:50 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Oct 02 11:31:50 compute-0 sshd-session[75626]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:31:50 compute-0 sudo[75630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:31:50 compute-0 sudo[75630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:50 compute-0 sudo[75630]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:50 compute-0 sudo[75655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Oct 02 11:31:50 compute-0 sudo[75655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:50 compute-0 sudo[75655]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:50 compute-0 sshd-session[75680]: Accepted publickey for ceph-admin from 192.168.122.100 port 46148 ssh2: RSA SHA256:hipYxrQVnF7kc7v45q1bE4o8jwYBbChdRkFf71jCzQc
Oct 02 11:31:50 compute-0 systemd-logind[789]: New session 32 of user ceph-admin.
Oct 02 11:31:51 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Oct 02 11:31:51 compute-0 sshd-session[75680]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:31:51 compute-0 sshd-session[75707]: Accepted publickey for ceph-admin from 192.168.122.100 port 46162 ssh2: RSA SHA256:hipYxrQVnF7kc7v45q1bE4o8jwYBbChdRkFf71jCzQc
Oct 02 11:31:51 compute-0 systemd-logind[789]: New session 33 of user ceph-admin.
Oct 02 11:31:51 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Oct 02 11:31:51 compute-0 sshd-session[75707]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:31:51 compute-0 sudo[75711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:31:51 compute-0 sudo[75711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:51 compute-0 sudo[75711]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:51 compute-0 sudo[75736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Oct 02 11:31:51 compute-0 sudo[75736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:51 compute-0 sudo[75736]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:51 compute-0 sshd-session[75761]: Accepted publickey for ceph-admin from 192.168.122.100 port 46176 ssh2: RSA SHA256:hipYxrQVnF7kc7v45q1bE4o8jwYBbChdRkFf71jCzQc
Oct 02 11:31:51 compute-0 systemd-logind[789]: New session 34 of user ceph-admin.
Oct 02 11:31:51 compute-0 systemd[1]: Started Session 34 of User ceph-admin.
Oct 02 11:31:51 compute-0 sshd-session[75761]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:31:51 compute-0 sudo[75765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:31:51 compute-0 sudo[75765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:52 compute-0 sudo[75765]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:52 compute-0 sudo[75790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Oct 02 11:31:52 compute-0 sudo[75790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:52 compute-0 ceph-mgr[73901]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 11:31:52 compute-0 sudo[75790]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 02 11:31:52 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:52 compute-0 ceph-mgr[73901]: [cephadm INFO root] Added host compute-0
Oct 02 11:31:52 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Added host compute-0
Oct 02 11:31:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 02 11:31:52 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 11:31:52 compute-0 interesting_keller[75202]: Added host 'compute-0' with addr '192.168.122.100'
Oct 02 11:31:52 compute-0 systemd[1]: libpod-fd049da1db3ed0b2f59dabd54044d4428eb5e88f5fb0624498addc254d3e14fd.scope: Deactivated successfully.
Oct 02 11:31:52 compute-0 podman[75185]: 2025-10-02 11:31:52.320940131 +0000 UTC m=+5.298660966 container died fd049da1db3ed0b2f59dabd54044d4428eb5e88f5fb0624498addc254d3e14fd (image=quay.io/ceph/ceph:v18, name=interesting_keller, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:31:52 compute-0 sudo[75836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:31:52 compute-0 sudo[75836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:52 compute-0 sudo[75836]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-21b3d26a537ca6c74d3b0508245ca55e53849afcb3edb655843798c439c59de3-merged.mount: Deactivated successfully.
Oct 02 11:31:52 compute-0 sudo[75872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:31:52 compute-0 sudo[75872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:52 compute-0 sudo[75872]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:52 compute-0 podman[75185]: 2025-10-02 11:31:52.436636262 +0000 UTC m=+5.414357097 container remove fd049da1db3ed0b2f59dabd54044d4428eb5e88f5fb0624498addc254d3e14fd (image=quay.io/ceph/ceph:v18, name=interesting_keller, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 11:31:52 compute-0 systemd[1]: libpod-conmon-fd049da1db3ed0b2f59dabd54044d4428eb5e88f5fb0624498addc254d3e14fd.scope: Deactivated successfully.
Oct 02 11:31:52 compute-0 sudo[75897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:31:52 compute-0 sudo[75897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:52 compute-0 sudo[75897]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:52 compute-0 podman[75903]: 2025-10-02 11:31:52.498610279 +0000 UTC m=+0.044269194 container create dc4494284ef6767c325f3712ad297084f6c9a9d677f8a0d20b7cbca5178b090c (image=quay.io/ceph/ceph:v18, name=optimistic_hoover, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:31:52 compute-0 systemd[1]: Started libpod-conmon-dc4494284ef6767c325f3712ad297084f6c9a9d677f8a0d20b7cbca5178b090c.scope.
Oct 02 11:31:52 compute-0 sudo[75936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph:v18 --timeout 895 inspect-image
Oct 02 11:31:52 compute-0 sudo[75936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7216a649ecb66e6895da8a007b161327bb341959c4ad24df256d3c37c1bd6dcb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7216a649ecb66e6895da8a007b161327bb341959c4ad24df256d3c37c1bd6dcb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7216a649ecb66e6895da8a007b161327bb341959c4ad24df256d3c37c1bd6dcb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:52 compute-0 podman[75903]: 2025-10-02 11:31:52.4765799 +0000 UTC m=+0.022238845 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:31:52 compute-0 podman[75903]: 2025-10-02 11:31:52.573904501 +0000 UTC m=+0.119563446 container init dc4494284ef6767c325f3712ad297084f6c9a9d677f8a0d20b7cbca5178b090c (image=quay.io/ceph/ceph:v18, name=optimistic_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 11:31:52 compute-0 podman[75903]: 2025-10-02 11:31:52.581407566 +0000 UTC m=+0.127066481 container start dc4494284ef6767c325f3712ad297084f6c9a9d677f8a0d20b7cbca5178b090c (image=quay.io/ceph/ceph:v18, name=optimistic_hoover, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 11:31:52 compute-0 podman[75903]: 2025-10-02 11:31:52.618860112 +0000 UTC m=+0.164519027 container attach dc4494284ef6767c325f3712ad297084f6c9a9d677f8a0d20b7cbca5178b090c (image=quay.io/ceph/ceph:v18, name=optimistic_hoover, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:31:52 compute-0 podman[75993]: 2025-10-02 11:31:52.771918038 +0000 UTC m=+0.043529216 container create 9798f1ca293dfb0c776153f218325ac3c1d05c0dca5992f2e367c18774a52379 (image=quay.io/ceph/ceph:v18, name=nervous_saha, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 11:31:52 compute-0 systemd[1]: Started libpod-conmon-9798f1ca293dfb0c776153f218325ac3c1d05c0dca5992f2e367c18774a52379.scope.
Oct 02 11:31:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:31:52 compute-0 podman[75993]: 2025-10-02 11:31:52.843724615 +0000 UTC m=+0.115335803 container init 9798f1ca293dfb0c776153f218325ac3c1d05c0dca5992f2e367c18774a52379 (image=quay.io/ceph/ceph:v18, name=nervous_saha, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 11:31:52 compute-0 podman[75993]: 2025-10-02 11:31:52.750701369 +0000 UTC m=+0.022312567 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:31:52 compute-0 podman[75993]: 2025-10-02 11:31:52.848529293 +0000 UTC m=+0.120140491 container start 9798f1ca293dfb0c776153f218325ac3c1d05c0dca5992f2e367c18774a52379 (image=quay.io/ceph/ceph:v18, name=nervous_saha, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 11:31:52 compute-0 podman[75993]: 2025-10-02 11:31:52.856509648 +0000 UTC m=+0.128120826 container attach 9798f1ca293dfb0c776153f218325ac3c1d05c0dca5992f2e367c18774a52379 (image=quay.io/ceph/ceph:v18, name=nervous_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 11:31:53 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:31:53 compute-0 ceph-mgr[73901]: [cephadm INFO root] Saving service mon spec with placement count:5
Oct 02 11:31:53 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Oct 02 11:31:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 02 11:31:53 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:53 compute-0 optimistic_hoover[75962]: Scheduled mon update...
Oct 02 11:31:53 compute-0 systemd[1]: libpod-dc4494284ef6767c325f3712ad297084f6c9a9d677f8a0d20b7cbca5178b090c.scope: Deactivated successfully.
Oct 02 11:31:53 compute-0 podman[75903]: 2025-10-02 11:31:53.158787415 +0000 UTC m=+0.704446330 container died dc4494284ef6767c325f3712ad297084f6c9a9d677f8a0d20b7cbca5178b090c (image=quay.io/ceph/ceph:v18, name=optimistic_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 11:31:53 compute-0 nervous_saha[76009]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Oct 02 11:31:53 compute-0 systemd[1]: libpod-9798f1ca293dfb0c776153f218325ac3c1d05c0dca5992f2e367c18774a52379.scope: Deactivated successfully.
Oct 02 11:31:53 compute-0 podman[75993]: 2025-10-02 11:31:53.189898607 +0000 UTC m=+0.461509785 container died 9798f1ca293dfb0c776153f218325ac3c1d05c0dca5992f2e367c18774a52379 (image=quay.io/ceph/ceph:v18, name=nervous_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:31:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-7216a649ecb66e6895da8a007b161327bb341959c4ad24df256d3c37c1bd6dcb-merged.mount: Deactivated successfully.
Oct 02 11:31:53 compute-0 podman[75903]: 2025-10-02 11:31:53.317089229 +0000 UTC m=+0.862748144 container remove dc4494284ef6767c325f3712ad297084f6c9a9d677f8a0d20b7cbca5178b090c (image=quay.io/ceph/ceph:v18, name=optimistic_hoover, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:31:53 compute-0 systemd[1]: libpod-conmon-dc4494284ef6767c325f3712ad297084f6c9a9d677f8a0d20b7cbca5178b090c.scope: Deactivated successfully.
Oct 02 11:31:53 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:53 compute-0 ceph-mon[73607]: Added host compute-0
Oct 02 11:31:53 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 11:31:53 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-f239a8f9afb1ec8c737c9f31458ae6c19a0e6d741f5a6aca11ce16f9e8a12bf7-merged.mount: Deactivated successfully.
Oct 02 11:31:53 compute-0 podman[75993]: 2025-10-02 11:31:53.48628768 +0000 UTC m=+0.757898858 container remove 9798f1ca293dfb0c776153f218325ac3c1d05c0dca5992f2e367c18774a52379 (image=quay.io/ceph/ceph:v18, name=nervous_saha, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:31:53 compute-0 podman[76061]: 2025-10-02 11:31:53.508061664 +0000 UTC m=+0.175168418 container create 061b0e7372d1982745a5e97ff4e96b7c17c2224e7ec9ec325001efffb525082e (image=quay.io/ceph/ceph:v18, name=pedantic_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:31:53 compute-0 sudo[75936]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Oct 02 11:31:53 compute-0 systemd[1]: libpod-conmon-9798f1ca293dfb0c776153f218325ac3c1d05c0dca5992f2e367c18774a52379.scope: Deactivated successfully.
Oct 02 11:31:53 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:53 compute-0 systemd[1]: Started libpod-conmon-061b0e7372d1982745a5e97ff4e96b7c17c2224e7ec9ec325001efffb525082e.scope.
Oct 02 11:31:53 compute-0 podman[76061]: 2025-10-02 11:31:53.45764997 +0000 UTC m=+0.124756744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:31:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:31:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a0108b526ddd235497128ec5dc0a39b5882153ab9ed5e6c7bab3629d0d818f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a0108b526ddd235497128ec5dc0a39b5882153ab9ed5e6c7bab3629d0d818f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a0108b526ddd235497128ec5dc0a39b5882153ab9ed5e6c7bab3629d0d818f1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:53 compute-0 sudo[76077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:31:53 compute-0 sudo[76077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:53 compute-0 podman[76061]: 2025-10-02 11:31:53.585266203 +0000 UTC m=+0.252372957 container init 061b0e7372d1982745a5e97ff4e96b7c17c2224e7ec9ec325001efffb525082e (image=quay.io/ceph/ceph:v18, name=pedantic_keldysh, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:31:53 compute-0 sudo[76077]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:53 compute-0 podman[76061]: 2025-10-02 11:31:53.590704746 +0000 UTC m=+0.257811520 container start 061b0e7372d1982745a5e97ff4e96b7c17c2224e7ec9ec325001efffb525082e (image=quay.io/ceph/ceph:v18, name=pedantic_keldysh, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 11:31:53 compute-0 podman[76061]: 2025-10-02 11:31:53.598144038 +0000 UTC m=+0.265250812 container attach 061b0e7372d1982745a5e97ff4e96b7c17c2224e7ec9ec325001efffb525082e (image=quay.io/ceph/ceph:v18, name=pedantic_keldysh, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 11:31:53 compute-0 sudo[76109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:31:53 compute-0 sudo[76109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:53 compute-0 sudo[76109]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:53 compute-0 sudo[76134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:31:53 compute-0 sudo[76134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:53 compute-0 sudo[76134]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:53 compute-0 sudo[76159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 11:31:53 compute-0 sudo[76159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:53 compute-0 sudo[76159]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:31:54 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:54 compute-0 sudo[76221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:31:54 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:31:54 compute-0 ceph-mgr[73901]: [cephadm INFO root] Saving service mgr spec with placement count:2
Oct 02 11:31:54 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Oct 02 11:31:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 02 11:31:54 compute-0 sudo[76221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:54 compute-0 sudo[76221]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:54 compute-0 sudo[76247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:31:54 compute-0 sudo[76247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:54 compute-0 sudo[76247]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:31:54 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:54 compute-0 pedantic_keldysh[76079]: Scheduled mgr update...
Oct 02 11:31:54 compute-0 sudo[76272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:31:54 compute-0 sudo[76272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:54 compute-0 sudo[76272]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:54 compute-0 systemd[1]: libpod-061b0e7372d1982745a5e97ff4e96b7c17c2224e7ec9ec325001efffb525082e.scope: Deactivated successfully.
Oct 02 11:31:54 compute-0 podman[76061]: 2025-10-02 11:31:54.246213088 +0000 UTC m=+0.913319842 container died 061b0e7372d1982745a5e97ff4e96b7c17c2224e7ec9ec325001efffb525082e (image=quay.io/ceph/ceph:v18, name=pedantic_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 11:31:54 compute-0 ceph-mgr[73901]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 11:31:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a0108b526ddd235497128ec5dc0a39b5882153ab9ed5e6c7bab3629d0d818f1-merged.mount: Deactivated successfully.
Oct 02 11:31:54 compute-0 sudo[76299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 11:31:54 compute-0 sudo[76299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:54 compute-0 podman[76061]: 2025-10-02 11:31:54.322958457 +0000 UTC m=+0.990065201 container remove 061b0e7372d1982745a5e97ff4e96b7c17c2224e7ec9ec325001efffb525082e (image=quay.io/ceph/ceph:v18, name=pedantic_keldysh, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 11:31:54 compute-0 ceph-mon[73607]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:31:54 compute-0 ceph-mon[73607]: Saving service mon spec with placement count:5
Oct 02 11:31:54 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:54 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:54 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:54 compute-0 systemd[1]: libpod-conmon-061b0e7372d1982745a5e97ff4e96b7c17c2224e7ec9ec325001efffb525082e.scope: Deactivated successfully.
Oct 02 11:31:54 compute-0 podman[76334]: 2025-10-02 11:31:54.387919336 +0000 UTC m=+0.040769469 container create d35f88d5d952e1e06c7c7e1d13d39f70f2e6f62af98217feb346509dfc11bbd6 (image=quay.io/ceph/ceph:v18, name=intelligent_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:31:54 compute-0 systemd[1]: Started libpod-conmon-d35f88d5d952e1e06c7c7e1d13d39f70f2e6f62af98217feb346509dfc11bbd6.scope.
Oct 02 11:31:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:31:54 compute-0 podman[76334]: 2025-10-02 11:31:54.369149227 +0000 UTC m=+0.021999380 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:31:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f4843837433e75b0c146a574bc8cb6e78cea3d5ed3def5da702b0f0a0c4699/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f4843837433e75b0c146a574bc8cb6e78cea3d5ed3def5da702b0f0a0c4699/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f4843837433e75b0c146a574bc8cb6e78cea3d5ed3def5da702b0f0a0c4699/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:54 compute-0 podman[76334]: 2025-10-02 11:31:54.485816642 +0000 UTC m=+0.138666795 container init d35f88d5d952e1e06c7c7e1d13d39f70f2e6f62af98217feb346509dfc11bbd6 (image=quay.io/ceph/ceph:v18, name=intelligent_liskov, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:31:54 compute-0 podman[76334]: 2025-10-02 11:31:54.493473579 +0000 UTC m=+0.146323712 container start d35f88d5d952e1e06c7c7e1d13d39f70f2e6f62af98217feb346509dfc11bbd6 (image=quay.io/ceph/ceph:v18, name=intelligent_liskov, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:31:54 compute-0 podman[76334]: 2025-10-02 11:31:54.499384864 +0000 UTC m=+0.152235007 container attach d35f88d5d952e1e06c7c7e1d13d39f70f2e6f62af98217feb346509dfc11bbd6 (image=quay.io/ceph/ceph:v18, name=intelligent_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 11:31:54 compute-0 podman[76428]: 2025-10-02 11:31:54.828929899 +0000 UTC m=+0.107410840 container exec 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 11:31:55 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:31:55 compute-0 ceph-mgr[73901]: [cephadm INFO root] Saving service crash spec with placement *
Oct 02 11:31:55 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Oct 02 11:31:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 02 11:31:55 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:55 compute-0 intelligent_liskov[76351]: Scheduled crash update...
Oct 02 11:31:55 compute-0 systemd[1]: libpod-d35f88d5d952e1e06c7c7e1d13d39f70f2e6f62af98217feb346509dfc11bbd6.scope: Deactivated successfully.
Oct 02 11:31:55 compute-0 podman[76428]: 2025-10-02 11:31:55.135736297 +0000 UTC m=+0.414217218 container exec_died 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:31:55 compute-0 conmon[76351]: conmon d35f88d5d952e1e06c7c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d35f88d5d952e1e06c7c7e1d13d39f70f2e6f62af98217feb346509dfc11bbd6.scope/container/memory.events
Oct 02 11:31:55 compute-0 podman[76334]: 2025-10-02 11:31:55.146076671 +0000 UTC m=+0.798926834 container died d35f88d5d952e1e06c7c7e1d13d39f70f2e6f62af98217feb346509dfc11bbd6 (image=quay.io/ceph/ceph:v18, name=intelligent_liskov, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 11:31:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9f4843837433e75b0c146a574bc8cb6e78cea3d5ed3def5da702b0f0a0c4699-merged.mount: Deactivated successfully.
Oct 02 11:31:55 compute-0 podman[76334]: 2025-10-02 11:31:55.227381081 +0000 UTC m=+0.880231214 container remove d35f88d5d952e1e06c7c7e1d13d39f70f2e6f62af98217feb346509dfc11bbd6 (image=quay.io/ceph/ceph:v18, name=intelligent_liskov, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:31:55 compute-0 systemd[1]: libpod-conmon-d35f88d5d952e1e06c7c7e1d13d39f70f2e6f62af98217feb346509dfc11bbd6.scope: Deactivated successfully.
Oct 02 11:31:55 compute-0 sudo[76299]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:31:55 compute-0 podman[76504]: 2025-10-02 11:31:55.28949136 +0000 UTC m=+0.044295515 container create a98bc35e31eca1bf6f87ca3eb5d379e8cdd7eab29b673dc52becefdc4bdaca81 (image=quay.io/ceph/ceph:v18, name=clever_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 11:31:55 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:55 compute-0 systemd[1]: Started libpod-conmon-a98bc35e31eca1bf6f87ca3eb5d379e8cdd7eab29b673dc52becefdc4bdaca81.scope.
Oct 02 11:31:55 compute-0 ceph-mon[73607]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:31:55 compute-0 ceph-mon[73607]: Saving service mgr spec with placement count:2
Oct 02 11:31:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:31:55 compute-0 sudo[76525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:31:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4046eca8a0b4928645ec4e43b9b8dd0065153b23f67f7f9335dd13c6911ebf3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4046eca8a0b4928645ec4e43b9b8dd0065153b23f67f7f9335dd13c6911ebf3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4046eca8a0b4928645ec4e43b9b8dd0065153b23f67f7f9335dd13c6911ebf3c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:55 compute-0 sudo[76525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:55 compute-0 sudo[76525]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:55 compute-0 podman[76504]: 2025-10-02 11:31:55.266497068 +0000 UTC m=+0.021301253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:31:55 compute-0 podman[76504]: 2025-10-02 11:31:55.374560552 +0000 UTC m=+0.129364727 container init a98bc35e31eca1bf6f87ca3eb5d379e8cdd7eab29b673dc52becefdc4bdaca81 (image=quay.io/ceph/ceph:v18, name=clever_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Oct 02 11:31:55 compute-0 podman[76504]: 2025-10-02 11:31:55.381589274 +0000 UTC m=+0.136393429 container start a98bc35e31eca1bf6f87ca3eb5d379e8cdd7eab29b673dc52becefdc4bdaca81 (image=quay.io/ceph/ceph:v18, name=clever_kilby, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 11:31:55 compute-0 podman[76504]: 2025-10-02 11:31:55.384669649 +0000 UTC m=+0.139473804 container attach a98bc35e31eca1bf6f87ca3eb5d379e8cdd7eab29b673dc52becefdc4bdaca81 (image=quay.io/ceph/ceph:v18, name=clever_kilby, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 11:31:55 compute-0 sudo[76555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:31:55 compute-0 sudo[76555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:55 compute-0 sudo[76555]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:55 compute-0 sudo[76582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:31:55 compute-0 sudo[76582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:55 compute-0 sudo[76582]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:55 compute-0 sudo[76607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:31:55 compute-0 sudo[76607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:55 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 76653 (sysctl)
Oct 02 11:31:55 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct 02 11:31:55 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct 02 11:31:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Oct 02 11:31:55 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2398496956' entity='client.admin' 
Oct 02 11:31:55 compute-0 systemd[1]: libpod-a98bc35e31eca1bf6f87ca3eb5d379e8cdd7eab29b673dc52becefdc4bdaca81.scope: Deactivated successfully.
Oct 02 11:31:55 compute-0 podman[76504]: 2025-10-02 11:31:55.931784419 +0000 UTC m=+0.686588574 container died a98bc35e31eca1bf6f87ca3eb5d379e8cdd7eab29b673dc52becefdc4bdaca81 (image=quay.io/ceph/ceph:v18, name=clever_kilby, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 11:31:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-4046eca8a0b4928645ec4e43b9b8dd0065153b23f67f7f9335dd13c6911ebf3c-merged.mount: Deactivated successfully.
Oct 02 11:31:55 compute-0 sudo[76607]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:55 compute-0 podman[76504]: 2025-10-02 11:31:55.991131541 +0000 UTC m=+0.745935696 container remove a98bc35e31eca1bf6f87ca3eb5d379e8cdd7eab29b673dc52becefdc4bdaca81 (image=quay.io/ceph/ceph:v18, name=clever_kilby, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 11:31:56 compute-0 systemd[1]: libpod-conmon-a98bc35e31eca1bf6f87ca3eb5d379e8cdd7eab29b673dc52becefdc4bdaca81.scope: Deactivated successfully.
Oct 02 11:31:56 compute-0 sudo[76698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:31:56 compute-0 sudo[76698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:56 compute-0 podman[76699]: 2025-10-02 11:31:56.048136597 +0000 UTC m=+0.040418881 container create 2d055a4783742395beff1d7905b3111e4e25bfab67d0de0b60fee1100b80434c (image=quay.io/ceph/ceph:v18, name=bold_borg, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:31:56 compute-0 sudo[76698]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:56 compute-0 systemd[1]: Started libpod-conmon-2d055a4783742395beff1d7905b3111e4e25bfab67d0de0b60fee1100b80434c.scope.
Oct 02 11:31:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:31:56 compute-0 sudo[76738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:31:56 compute-0 sudo[76738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bda3b754afcbb0ef336cb42e6cccae5afc589ce6a5ae388f8091a49d5e8ba4e2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bda3b754afcbb0ef336cb42e6cccae5afc589ce6a5ae388f8091a49d5e8ba4e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bda3b754afcbb0ef336cb42e6cccae5afc589ce6a5ae388f8091a49d5e8ba4e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:56 compute-0 sudo[76738]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:56 compute-0 podman[76699]: 2025-10-02 11:31:56.122875545 +0000 UTC m=+0.115157859 container init 2d055a4783742395beff1d7905b3111e4e25bfab67d0de0b60fee1100b80434c (image=quay.io/ceph/ceph:v18, name=bold_borg, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 11:31:56 compute-0 podman[76699]: 2025-10-02 11:31:56.027358528 +0000 UTC m=+0.019640832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:31:56 compute-0 podman[76699]: 2025-10-02 11:31:56.128960625 +0000 UTC m=+0.121242909 container start 2d055a4783742395beff1d7905b3111e4e25bfab67d0de0b60fee1100b80434c (image=quay.io/ceph/ceph:v18, name=bold_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:31:56 compute-0 podman[76699]: 2025-10-02 11:31:56.132614904 +0000 UTC m=+0.124897188 container attach 2d055a4783742395beff1d7905b3111e4e25bfab67d0de0b60fee1100b80434c (image=quay.io/ceph/ceph:v18, name=bold_borg, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:31:56 compute-0 sudo[76768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:31:56 compute-0 sudo[76768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:56 compute-0 sudo[76768]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:56 compute-0 sudo[76795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 02 11:31:56 compute-0 sudo[76795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:56 compute-0 ceph-mgr[73901]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 11:31:56 compute-0 ceph-mon[73607]: from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:31:56 compute-0 ceph-mon[73607]: Saving service crash spec with placement *
Oct 02 11:31:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2398496956' entity='client.admin' 
Oct 02 11:31:56 compute-0 sudo[76795]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:31:56 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:56 compute-0 sudo[76839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:31:56 compute-0 sudo[76839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:56 compute-0 sudo[76839]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:56 compute-0 sudo[76882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:31:56 compute-0 sudo[76882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:56 compute-0 sudo[76882]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:56 compute-0 sudo[76907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:31:56 compute-0 sudo[76907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:56 compute-0 sudo[76907]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:56 compute-0 sudo[76932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- inventory --format=json-pretty --filter-for-batch
Oct 02 11:31:56 compute-0 sudo[76932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:31:56 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:31:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Oct 02 11:31:56 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:56 compute-0 systemd[1]: libpod-2d055a4783742395beff1d7905b3111e4e25bfab67d0de0b60fee1100b80434c.scope: Deactivated successfully.
Oct 02 11:31:56 compute-0 podman[76699]: 2025-10-02 11:31:56.662540563 +0000 UTC m=+0.654822847 container died 2d055a4783742395beff1d7905b3111e4e25bfab67d0de0b60fee1100b80434c (image=quay.io/ceph/ceph:v18, name=bold_borg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 11:31:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-bda3b754afcbb0ef336cb42e6cccae5afc589ce6a5ae388f8091a49d5e8ba4e2-merged.mount: Deactivated successfully.
Oct 02 11:31:56 compute-0 podman[76699]: 2025-10-02 11:31:56.701743643 +0000 UTC m=+0.694025927 container remove 2d055a4783742395beff1d7905b3111e4e25bfab67d0de0b60fee1100b80434c (image=quay.io/ceph/ceph:v18, name=bold_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:31:56 compute-0 systemd[1]: libpod-conmon-2d055a4783742395beff1d7905b3111e4e25bfab67d0de0b60fee1100b80434c.scope: Deactivated successfully.
Oct 02 11:31:56 compute-0 podman[76973]: 2025-10-02 11:31:56.755875227 +0000 UTC m=+0.038933144 container create 951f5b01863013c6087222e90936b5097f656b3b6581669f8c0f0e59a6590d7c (image=quay.io/ceph/ceph:v18, name=sad_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 11:31:56 compute-0 systemd[1]: Started libpod-conmon-951f5b01863013c6087222e90936b5097f656b3b6581669f8c0f0e59a6590d7c.scope.
Oct 02 11:31:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:31:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e670039b1a0dd02690ab0bc93dca0d638aec010d24879596d48579054ca617db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e670039b1a0dd02690ab0bc93dca0d638aec010d24879596d48579054ca617db/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e670039b1a0dd02690ab0bc93dca0d638aec010d24879596d48579054ca617db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:56 compute-0 podman[76973]: 2025-10-02 11:31:56.7364053 +0000 UTC m=+0.019463247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:31:56 compute-0 podman[76973]: 2025-10-02 11:31:56.852237075 +0000 UTC m=+0.135295012 container init 951f5b01863013c6087222e90936b5097f656b3b6581669f8c0f0e59a6590d7c (image=quay.io/ceph/ceph:v18, name=sad_jepsen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 11:31:56 compute-0 podman[76973]: 2025-10-02 11:31:56.862144588 +0000 UTC m=+0.145202505 container start 951f5b01863013c6087222e90936b5097f656b3b6581669f8c0f0e59a6590d7c (image=quay.io/ceph/ceph:v18, name=sad_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 11:31:56 compute-0 podman[76973]: 2025-10-02 11:31:56.886004931 +0000 UTC m=+0.169062948 container attach 951f5b01863013c6087222e90936b5097f656b3b6581669f8c0f0e59a6590d7c (image=quay.io/ceph/ceph:v18, name=sad_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 11:31:57 compute-0 podman[77031]: 2025-10-02 11:31:56.994558319 +0000 UTC m=+0.020653037 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:31:57 compute-0 podman[77031]: 2025-10-02 11:31:57.178690454 +0000 UTC m=+0.204785142 container create 7d1f19d984c4ae9ec278652dd6e699719b3229170e45191a2a2767bdfb92b7e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 11:31:57 compute-0 systemd[1]: Started libpod-conmon-7d1f19d984c4ae9ec278652dd6e699719b3229170e45191a2a2767bdfb92b7e9.scope.
Oct 02 11:31:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:31:57 compute-0 podman[77031]: 2025-10-02 11:31:57.326231736 +0000 UTC m=+0.352326454 container init 7d1f19d984c4ae9ec278652dd6e699719b3229170e45191a2a2767bdfb92b7e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:31:57 compute-0 podman[77031]: 2025-10-02 11:31:57.331142045 +0000 UTC m=+0.357236733 container start 7d1f19d984c4ae9ec278652dd6e699719b3229170e45191a2a2767bdfb92b7e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wozniak, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:31:57 compute-0 adoring_wozniak[77066]: 167 167
Oct 02 11:31:57 compute-0 systemd[1]: libpod-7d1f19d984c4ae9ec278652dd6e699719b3229170e45191a2a2767bdfb92b7e9.scope: Deactivated successfully.
Oct 02 11:31:57 compute-0 conmon[77066]: conmon 7d1f19d984c4ae9ec278 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7d1f19d984c4ae9ec278652dd6e699719b3229170e45191a2a2767bdfb92b7e9.scope/container/memory.events
Oct 02 11:31:57 compute-0 podman[77031]: 2025-10-02 11:31:57.337569023 +0000 UTC m=+0.363663731 container attach 7d1f19d984c4ae9ec278652dd6e699719b3229170e45191a2a2767bdfb92b7e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 11:31:57 compute-0 podman[77031]: 2025-10-02 11:31:57.338075055 +0000 UTC m=+0.364169753 container died 7d1f19d984c4ae9ec278652dd6e699719b3229170e45191a2a2767bdfb92b7e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wozniak, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 11:31:57 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:31:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 02 11:31:57 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:57 compute-0 ceph-mgr[73901]: [cephadm INFO root] Added label _admin to host compute-0
Oct 02 11:31:57 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Oct 02 11:31:57 compute-0 sad_jepsen[77002]: Added label _admin to host compute-0
Oct 02 11:31:57 compute-0 systemd[1]: libpod-951f5b01863013c6087222e90936b5097f656b3b6581669f8c0f0e59a6590d7c.scope: Deactivated successfully.
Oct 02 11:31:57 compute-0 podman[76973]: 2025-10-02 11:31:57.42284319 +0000 UTC m=+0.705901117 container died 951f5b01863013c6087222e90936b5097f656b3b6581669f8c0f0e59a6590d7c (image=quay.io/ceph/ceph:v18, name=sad_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Oct 02 11:31:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:31:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-e670039b1a0dd02690ab0bc93dca0d638aec010d24879596d48579054ca617db-merged.mount: Deactivated successfully.
Oct 02 11:31:57 compute-0 podman[76973]: 2025-10-02 11:31:57.898373038 +0000 UTC m=+1.181430955 container remove 951f5b01863013c6087222e90936b5097f656b3b6581669f8c0f0e59a6590d7c (image=quay.io/ceph/ceph:v18, name=sad_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 11:31:57 compute-0 podman[77099]: 2025-10-02 11:31:57.994659054 +0000 UTC m=+0.079351033 container create 1debd5aa634365c70b25081352cab9cfa9963874519468409ada8010f66dd90b (image=quay.io/ceph/ceph:v18, name=charming_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 11:31:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b62fe38fc02340a95d186f75489dec907cdbd57385f8d8390c8359a4243c51b-merged.mount: Deactivated successfully.
Oct 02 11:31:58 compute-0 podman[77031]: 2025-10-02 11:31:58.015411212 +0000 UTC m=+1.041505900 container remove 7d1f19d984c4ae9ec278652dd6e699719b3229170e45191a2a2767bdfb92b7e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 11:31:58 compute-0 systemd[1]: libpod-conmon-7d1f19d984c4ae9ec278652dd6e699719b3229170e45191a2a2767bdfb92b7e9.scope: Deactivated successfully.
Oct 02 11:31:58 compute-0 podman[77099]: 2025-10-02 11:31:57.936981983 +0000 UTC m=+0.021674052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:31:58 compute-0 systemd[1]: Started libpod-conmon-1debd5aa634365c70b25081352cab9cfa9963874519468409ada8010f66dd90b.scope.
Oct 02 11:31:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f657ca935ed7a6c18e06816e5822e7511c351d7c6e4b0e4ec914d8a73645be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f657ca935ed7a6c18e06816e5822e7511c351d7c6e4b0e4ec914d8a73645be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f657ca935ed7a6c18e06816e5822e7511c351d7c6e4b0e4ec914d8a73645be/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:58 compute-0 systemd[1]: libpod-conmon-951f5b01863013c6087222e90936b5097f656b3b6581669f8c0f0e59a6590d7c.scope: Deactivated successfully.
Oct 02 11:31:58 compute-0 podman[77099]: 2025-10-02 11:31:58.085893687 +0000 UTC m=+0.170585686 container init 1debd5aa634365c70b25081352cab9cfa9963874519468409ada8010f66dd90b (image=quay.io/ceph/ceph:v18, name=charming_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:31:58 compute-0 podman[77099]: 2025-10-02 11:31:58.091591246 +0000 UTC m=+0.176283225 container start 1debd5aa634365c70b25081352cab9cfa9963874519468409ada8010f66dd90b (image=quay.io/ceph/ceph:v18, name=charming_elgamal, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:31:58 compute-0 podman[77099]: 2025-10-02 11:31:58.094795745 +0000 UTC m=+0.179487744 container attach 1debd5aa634365c70b25081352cab9cfa9963874519468409ada8010f66dd90b (image=quay.io/ceph/ceph:v18, name=charming_elgamal, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:31:58 compute-0 ceph-mgr[73901]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 11:31:58 compute-0 ceph-mon[73607]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:31:58 compute-0 ceph-mon[73607]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:31:58 compute-0 ceph-mon[73607]: Added label _admin to host compute-0
Oct 02 11:31:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Oct 02 11:31:58 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2919028660' entity='client.admin' 
Oct 02 11:31:58 compute-0 systemd[1]: libpod-1debd5aa634365c70b25081352cab9cfa9963874519468409ada8010f66dd90b.scope: Deactivated successfully.
Oct 02 11:31:58 compute-0 podman[77099]: 2025-10-02 11:31:58.649206373 +0000 UTC m=+0.733898352 container died 1debd5aa634365c70b25081352cab9cfa9963874519468409ada8010f66dd90b (image=quay.io/ceph/ceph:v18, name=charming_elgamal, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 11:31:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8f657ca935ed7a6c18e06816e5822e7511c351d7c6e4b0e4ec914d8a73645be-merged.mount: Deactivated successfully.
Oct 02 11:31:58 compute-0 podman[77099]: 2025-10-02 11:31:58.690086754 +0000 UTC m=+0.774778733 container remove 1debd5aa634365c70b25081352cab9cfa9963874519468409ada8010f66dd90b (image=quay.io/ceph/ceph:v18, name=charming_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:31:58 compute-0 systemd[1]: libpod-conmon-1debd5aa634365c70b25081352cab9cfa9963874519468409ada8010f66dd90b.scope: Deactivated successfully.
Oct 02 11:31:58 compute-0 podman[77154]: 2025-10-02 11:31:58.742033105 +0000 UTC m=+0.035399397 container create cd20689ab0c577c085af8ae18b9f5ccb2ab95d42a8867effe20a06aa909c5761 (image=quay.io/ceph/ceph:v18, name=nostalgic_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 11:31:58 compute-0 systemd[1]: Started libpod-conmon-cd20689ab0c577c085af8ae18b9f5ccb2ab95d42a8867effe20a06aa909c5761.scope.
Oct 02 11:31:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ac391f5cb3d8de3661a34e692420d33d36cf10fdb98008a95cb2131a45f519/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ac391f5cb3d8de3661a34e692420d33d36cf10fdb98008a95cb2131a45f519/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ac391f5cb3d8de3661a34e692420d33d36cf10fdb98008a95cb2131a45f519/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:58 compute-0 podman[77154]: 2025-10-02 11:31:58.803577661 +0000 UTC m=+0.096943983 container init cd20689ab0c577c085af8ae18b9f5ccb2ab95d42a8867effe20a06aa909c5761 (image=quay.io/ceph/ceph:v18, name=nostalgic_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 11:31:58 compute-0 podman[77154]: 2025-10-02 11:31:58.807936888 +0000 UTC m=+0.101303200 container start cd20689ab0c577c085af8ae18b9f5ccb2ab95d42a8867effe20a06aa909c5761 (image=quay.io/ceph/ceph:v18, name=nostalgic_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 11:31:58 compute-0 podman[77154]: 2025-10-02 11:31:58.811161596 +0000 UTC m=+0.104527918 container attach cd20689ab0c577c085af8ae18b9f5ccb2ab95d42a8867effe20a06aa909c5761 (image=quay.io/ceph/ceph:v18, name=nostalgic_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Oct 02 11:31:58 compute-0 podman[77154]: 2025-10-02 11:31:58.725846248 +0000 UTC m=+0.019212580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:31:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:31:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Oct 02 11:31:59 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2100635443' entity='client.admin' 
Oct 02 11:31:59 compute-0 nostalgic_mcclintock[77170]: set mgr/dashboard/cluster/status
Oct 02 11:31:59 compute-0 systemd[1]: libpod-cd20689ab0c577c085af8ae18b9f5ccb2ab95d42a8867effe20a06aa909c5761.scope: Deactivated successfully.
Oct 02 11:31:59 compute-0 podman[77154]: 2025-10-02 11:31:59.431820355 +0000 UTC m=+0.725186657 container died cd20689ab0c577c085af8ae18b9f5ccb2ab95d42a8867effe20a06aa909c5761 (image=quay.io/ceph/ceph:v18, name=nostalgic_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:31:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-59ac391f5cb3d8de3661a34e692420d33d36cf10fdb98008a95cb2131a45f519-merged.mount: Deactivated successfully.
Oct 02 11:31:59 compute-0 podman[77154]: 2025-10-02 11:31:59.470874671 +0000 UTC m=+0.764240963 container remove cd20689ab0c577c085af8ae18b9f5ccb2ab95d42a8867effe20a06aa909c5761 (image=quay.io/ceph/ceph:v18, name=nostalgic_mcclintock, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:31:59 compute-0 systemd[1]: libpod-conmon-cd20689ab0c577c085af8ae18b9f5ccb2ab95d42a8867effe20a06aa909c5761.scope: Deactivated successfully.
Oct 02 11:31:59 compute-0 sudo[72583]: pam_unix(sudo:session): session closed for user root
Oct 02 11:31:59 compute-0 podman[77218]: 2025-10-02 11:31:59.620856333 +0000 UTC m=+0.040753780 container create 170c55da4cc99a4c5dc81c5895e3002b7d1201c0a5895edc89bfa554f3ecd215 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 11:31:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2919028660' entity='client.admin' 
Oct 02 11:31:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2100635443' entity='client.admin' 
Oct 02 11:31:59 compute-0 systemd[1]: Started libpod-conmon-170c55da4cc99a4c5dc81c5895e3002b7d1201c0a5895edc89bfa554f3ecd215.scope.
Oct 02 11:31:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9521bdecc8dedac95f80d5f9a09047d7ac627d7377bc314d99f7616358d014ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9521bdecc8dedac95f80d5f9a09047d7ac627d7377bc314d99f7616358d014ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9521bdecc8dedac95f80d5f9a09047d7ac627d7377bc314d99f7616358d014ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9521bdecc8dedac95f80d5f9a09047d7ac627d7377bc314d99f7616358d014ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:31:59 compute-0 podman[77218]: 2025-10-02 11:31:59.689042781 +0000 UTC m=+0.108940258 container init 170c55da4cc99a4c5dc81c5895e3002b7d1201c0a5895edc89bfa554f3ecd215 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_herschel, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:31:59 compute-0 podman[77218]: 2025-10-02 11:31:59.696206656 +0000 UTC m=+0.116104103 container start 170c55da4cc99a4c5dc81c5895e3002b7d1201c0a5895edc89bfa554f3ecd215 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_herschel, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:31:59 compute-0 podman[77218]: 2025-10-02 11:31:59.699299562 +0000 UTC m=+0.119197019 container attach 170c55da4cc99a4c5dc81c5895e3002b7d1201c0a5895edc89bfa554f3ecd215 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 11:31:59 compute-0 podman[77218]: 2025-10-02 11:31:59.604885661 +0000 UTC m=+0.024783128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:31:59 compute-0 sudo[77262]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tscxhdiphhmoinpajzbwchhbmhwjpvlv ; /usr/bin/python3'
Oct 02 11:31:59 compute-0 sudo[77262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:32:00 compute-0 python3[77264]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:32:00 compute-0 podman[77265]: 2025-10-02 11:32:00.129685565 +0000 UTC m=+0.098540972 container create f90e277b3cac54cd095b11f3f64ad13cc6a81e20008da8aa9cebe1ded7a1d062 (image=quay.io/ceph/ceph:v18, name=eager_bhabha, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:32:00 compute-0 podman[77265]: 2025-10-02 11:32:00.054153306 +0000 UTC m=+0.023008823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:32:00 compute-0 systemd[1]: Started libpod-conmon-f90e277b3cac54cd095b11f3f64ad13cc6a81e20008da8aa9cebe1ded7a1d062.scope.
Oct 02 11:32:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:32:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb55f8fc95645f1f03a0f0bd2d4bfe57f4f43ceee05f1aa6357d3f9a72fb587c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb55f8fc95645f1f03a0f0bd2d4bfe57f4f43ceee05f1aa6357d3f9a72fb587c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:00 compute-0 podman[77265]: 2025-10-02 11:32:00.238793005 +0000 UTC m=+0.207648502 container init f90e277b3cac54cd095b11f3f64ad13cc6a81e20008da8aa9cebe1ded7a1d062 (image=quay.io/ceph/ceph:v18, name=eager_bhabha, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:32:00 compute-0 podman[77265]: 2025-10-02 11:32:00.244970056 +0000 UTC m=+0.213825463 container start f90e277b3cac54cd095b11f3f64ad13cc6a81e20008da8aa9cebe1ded7a1d062 (image=quay.io/ceph/ceph:v18, name=eager_bhabha, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:32:00 compute-0 podman[77265]: 2025-10-02 11:32:00.251032155 +0000 UTC m=+0.219887562 container attach f90e277b3cac54cd095b11f3f64ad13cc6a81e20008da8aa9cebe1ded7a1d062 (image=quay.io/ceph/ceph:v18, name=eager_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:32:00 compute-0 ceph-mgr[73901]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 11:32:00 compute-0 recursing_herschel[77234]: [
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:     {
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:         "available": false,
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:         "ceph_device": false,
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:         "lsm_data": {},
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:         "lvs": [],
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:         "path": "/dev/sr0",
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:         "rejected_reasons": [
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:             "Insufficient space (<5GB)",
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:             "Has a FileSystem"
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:         ],
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:         "sys_api": {
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:             "actuators": null,
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:             "device_nodes": "sr0",
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:             "devname": "sr0",
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:             "human_readable_size": "482.00 KB",
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:             "id_bus": "ata",
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:             "model": "QEMU DVD-ROM",
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:             "nr_requests": "2",
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:             "parent": "/dev/sr0",
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:             "partitions": {},
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:             "path": "/dev/sr0",
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:             "removable": "1",
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:             "rev": "2.5+",
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:             "ro": "0",
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:             "rotational": "0",
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:             "sas_address": "",
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:             "sas_device_handle": "",
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:             "scheduler_mode": "mq-deadline",
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:             "sectors": 0,
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:             "sectorsize": "2048",
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:             "size": 493568.0,
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:             "support_discard": "2048",
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:             "type": "disk",
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:             "vendor": "QEMU"
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:         }
Oct 02 11:32:00 compute-0 recursing_herschel[77234]:     }
Oct 02 11:32:00 compute-0 recursing_herschel[77234]: ]
Oct 02 11:32:00 compute-0 systemd[1]: libpod-170c55da4cc99a4c5dc81c5895e3002b7d1201c0a5895edc89bfa554f3ecd215.scope: Deactivated successfully.
Oct 02 11:32:00 compute-0 systemd[1]: libpod-170c55da4cc99a4c5dc81c5895e3002b7d1201c0a5895edc89bfa554f3ecd215.scope: Consumed 1.055s CPU time.
Oct 02 11:32:00 compute-0 podman[77218]: 2025-10-02 11:32:00.788884257 +0000 UTC m=+1.208781754 container died 170c55da4cc99a4c5dc81c5895e3002b7d1201c0a5895edc89bfa554f3ecd215 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_herschel, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:32:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Oct 02 11:32:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4104019984' entity='client.admin' 
Oct 02 11:32:00 compute-0 systemd[1]: libpod-f90e277b3cac54cd095b11f3f64ad13cc6a81e20008da8aa9cebe1ded7a1d062.scope: Deactivated successfully.
Oct 02 11:32:00 compute-0 podman[77265]: 2025-10-02 11:32:00.914598424 +0000 UTC m=+0.883453851 container died f90e277b3cac54cd095b11f3f64ad13cc6a81e20008da8aa9cebe1ded7a1d062 (image=quay.io/ceph/ceph:v18, name=eager_bhabha, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:32:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-9521bdecc8dedac95f80d5f9a09047d7ac627d7377bc314d99f7616358d014ad-merged.mount: Deactivated successfully.
Oct 02 11:32:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb55f8fc95645f1f03a0f0bd2d4bfe57f4f43ceee05f1aa6357d3f9a72fb587c-merged.mount: Deactivated successfully.
Oct 02 11:32:01 compute-0 podman[77265]: 2025-10-02 11:32:01.015887433 +0000 UTC m=+0.984742840 container remove f90e277b3cac54cd095b11f3f64ad13cc6a81e20008da8aa9cebe1ded7a1d062 (image=quay.io/ceph/ceph:v18, name=eager_bhabha, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:32:01 compute-0 sudo[77262]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:01 compute-0 systemd[1]: libpod-conmon-f90e277b3cac54cd095b11f3f64ad13cc6a81e20008da8aa9cebe1ded7a1d062.scope: Deactivated successfully.
Oct 02 11:32:01 compute-0 podman[77218]: 2025-10-02 11:32:01.050378156 +0000 UTC m=+1.470275603 container remove 170c55da4cc99a4c5dc81c5895e3002b7d1201c0a5895edc89bfa554f3ecd215 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_herschel, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 11:32:01 compute-0 sudo[76932]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:32:01 compute-0 systemd[1]: libpod-conmon-170c55da4cc99a4c5dc81c5895e3002b7d1201c0a5895edc89bfa554f3ecd215.scope: Deactivated successfully.
Oct 02 11:32:01 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:32:01 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:32:01 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:32:01 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 11:32:01 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 11:32:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:32:01 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:32:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:32:01 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:32:01 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct 02 11:32:01 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct 02 11:32:01 compute-0 sudo[78258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:01 compute-0 sudo[78258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:01 compute-0 sudo[78258]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:01 compute-0 sudo[78283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 02 11:32:01 compute-0 sudo[78283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:01 compute-0 sudo[78283]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:01 compute-0 sudo[78308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:01 compute-0 sudo[78308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:01 compute-0 sudo[78308]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:01 compute-0 sudo[78333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/etc/ceph
Oct 02 11:32:01 compute-0 sudo[78333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:01 compute-0 sudo[78333]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:01 compute-0 sudo[78358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:01 compute-0 sudo[78358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:01 compute-0 sudo[78358]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:01 compute-0 sudo[78406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/etc/ceph/ceph.conf.new
Oct 02 11:32:01 compute-0 sudo[78406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:01 compute-0 sudo[78406]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:01 compute-0 sudo[78460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:01 compute-0 sudo[78460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:01 compute-0 sudo[78460]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:01 compute-0 sudo[78508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct 02 11:32:01 compute-0 sudo[78508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:01 compute-0 sudo[78508]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:01 compute-0 sudo[78533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:01 compute-0 sudo[78533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:01 compute-0 sudo[78533]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:01 compute-0 sudo[78558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/etc/ceph/ceph.conf.new
Oct 02 11:32:01 compute-0 sudo[78558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:01 compute-0 sudo[78558]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:01 compute-0 sudo[78641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:01 compute-0 sudo[78641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:01 compute-0 sudo[78641]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:01 compute-0 sudo[78726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlpzulfnspzfbfssgesmicciwsvvgbpa ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759404721.4507427-33779-91148341545636/async_wrapper.py j290624481537 30 /home/zuul/.ansible/tmp/ansible-tmp-1759404721.4507427-33779-91148341545636/AnsiballZ_command.py _'
Oct 02 11:32:01 compute-0 sudo[78682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/etc/ceph/ceph.conf.new
Oct 02 11:32:01 compute-0 sudo[78726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:32:01 compute-0 sudo[78682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:01 compute-0 sudo[78682]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4104019984' entity='client.admin' 
Oct 02 11:32:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 11:32:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:32:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:32:01 compute-0 ceph-mon[73607]: Updating compute-0:/etc/ceph/ceph.conf
Oct 02 11:32:01 compute-0 sudo[78731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:01 compute-0 sudo[78731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:01 compute-0 sudo[78731]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:01 compute-0 sudo[78756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/etc/ceph/ceph.conf.new
Oct 02 11:32:01 compute-0 sudo[78756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:01 compute-0 sudo[78756]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:01 compute-0 ansible-async_wrapper.py[78729]: Invoked with j290624481537 30 /home/zuul/.ansible/tmp/ansible-tmp-1759404721.4507427-33779-91148341545636/AnsiballZ_command.py _
Oct 02 11:32:01 compute-0 ansible-async_wrapper.py[78791]: Starting module and watcher
Oct 02 11:32:01 compute-0 ansible-async_wrapper.py[78791]: Start watching 78798 (30)
Oct 02 11:32:01 compute-0 ansible-async_wrapper.py[78798]: Start module (78798)
Oct 02 11:32:01 compute-0 ansible-async_wrapper.py[78729]: Return async_wrapper task started.
Oct 02 11:32:02 compute-0 sudo[78726]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:02 compute-0 sudo[78781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:02 compute-0 sudo[78781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:02 compute-0 sudo[78781]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:02 compute-0 sudo[78811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 02 11:32:02 compute-0 sudo[78811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:02 compute-0 sudo[78811]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:02 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf
Oct 02 11:32:02 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf
Oct 02 11:32:02 compute-0 python3[78804]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:32:02 compute-0 sudo[78836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:02 compute-0 sudo[78836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:02 compute-0 sudo[78836]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:02 compute-0 podman[78860]: 2025-10-02 11:32:02.203878436 +0000 UTC m=+0.042280255 container create f506dec8f573a2c638e06f468e15f9747ed3c6e04f37e75d56d69c867c0c3a7b (image=quay.io/ceph/ceph:v18, name=recursing_galois, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:32:02 compute-0 sudo[78867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config
Oct 02 11:32:02 compute-0 sudo[78867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:02 compute-0 sudo[78867]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:02 compute-0 systemd[1]: Started libpod-conmon-f506dec8f573a2c638e06f468e15f9747ed3c6e04f37e75d56d69c867c0c3a7b.scope.
Oct 02 11:32:02 compute-0 ceph-mgr[73901]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Oct 02 11:32:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:02 compute-0 ceph-mon[73607]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct 02 11:32:02 compute-0 podman[78860]: 2025-10-02 11:32:02.184242166 +0000 UTC m=+0.022644015 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:32:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:32:02 compute-0 sudo[78901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0edb124ad53e297adcc5638fa69f9b198fd9c88822a0f2d036def6ad2f6de6ce/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0edb124ad53e297adcc5638fa69f9b198fd9c88822a0f2d036def6ad2f6de6ce/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:02 compute-0 sudo[78901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:02 compute-0 sudo[78901]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:02 compute-0 podman[78860]: 2025-10-02 11:32:02.311196913 +0000 UTC m=+0.149598752 container init f506dec8f573a2c638e06f468e15f9747ed3c6e04f37e75d56d69c867c0c3a7b (image=quay.io/ceph/ceph:v18, name=recursing_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 11:32:02 compute-0 podman[78860]: 2025-10-02 11:32:02.32171925 +0000 UTC m=+0.160121069 container start f506dec8f573a2c638e06f468e15f9747ed3c6e04f37e75d56d69c867c0c3a7b (image=quay.io/ceph/ceph:v18, name=recursing_galois, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:32:02 compute-0 podman[78860]: 2025-10-02 11:32:02.325454712 +0000 UTC m=+0.163856531 container attach f506dec8f573a2c638e06f468e15f9747ed3c6e04f37e75d56d69c867c0c3a7b (image=quay.io/ceph/ceph:v18, name=recursing_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:32:02 compute-0 sudo[78929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config
Oct 02 11:32:02 compute-0 sudo[78929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:02 compute-0 sudo[78929]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:02 compute-0 sudo[78955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:02 compute-0 sudo[78955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:02 compute-0 sudo[78955]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:02 compute-0 sudo[78980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf.new
Oct 02 11:32:02 compute-0 sudo[78980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:02 compute-0 sudo[78980]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:02 compute-0 sudo[79005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:02 compute-0 sudo[79005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:02 compute-0 sudo[79005]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:02 compute-0 sudo[79030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct 02 11:32:02 compute-0 sudo[79030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:02 compute-0 sudo[79030]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:02 compute-0 sudo[79055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:02 compute-0 sudo[79055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:02 compute-0 sudo[79055]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:02 compute-0 sudo[79099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf.new
Oct 02 11:32:02 compute-0 sudo[79099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:02 compute-0 sudo[79099]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:02 compute-0 sudo[79147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:02 compute-0 sudo[79147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:02 compute-0 sudo[79147]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:02 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 11:32:02 compute-0 sudo[79172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf.new
Oct 02 11:32:02 compute-0 recursing_galois[78921]: 
Oct 02 11:32:02 compute-0 recursing_galois[78921]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 02 11:32:02 compute-0 sudo[79172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:02 compute-0 sudo[79172]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:02 compute-0 podman[78860]: 2025-10-02 11:32:02.866939084 +0000 UTC m=+0.705340903 container died f506dec8f573a2c638e06f468e15f9747ed3c6e04f37e75d56d69c867c0c3a7b (image=quay.io/ceph/ceph:v18, name=recursing_galois, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:32:02 compute-0 systemd[1]: libpod-f506dec8f573a2c638e06f468e15f9747ed3c6e04f37e75d56d69c867c0c3a7b.scope: Deactivated successfully.
Oct 02 11:32:02 compute-0 ceph-mon[73607]: Updating compute-0:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf
Oct 02 11:32:02 compute-0 ceph-mon[73607]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:02 compute-0 ceph-mon[73607]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct 02 11:32:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-0edb124ad53e297adcc5638fa69f9b198fd9c88822a0f2d036def6ad2f6de6ce-merged.mount: Deactivated successfully.
Oct 02 11:32:02 compute-0 sudo[79199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:02 compute-0 sudo[79199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:02 compute-0 sudo[79199]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:02 compute-0 podman[78860]: 2025-10-02 11:32:02.920060514 +0000 UTC m=+0.758462333 container remove f506dec8f573a2c638e06f468e15f9747ed3c6e04f37e75d56d69c867c0c3a7b (image=quay.io/ceph/ceph:v18, name=recursing_galois, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 11:32:02 compute-0 systemd[1]: libpod-conmon-f506dec8f573a2c638e06f468e15f9747ed3c6e04f37e75d56d69c867c0c3a7b.scope: Deactivated successfully.
Oct 02 11:32:02 compute-0 ansible-async_wrapper.py[78798]: Module complete (78798)
Oct 02 11:32:02 compute-0 sudo[79238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf.new
Oct 02 11:32:02 compute-0 sudo[79238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:02 compute-0 sudo[79238]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:03 compute-0 sudo[79263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:03 compute-0 sudo[79263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:03 compute-0 sudo[79263]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:03 compute-0 sudo[79288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf.new /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf
Oct 02 11:32:03 compute-0 sudo[79288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:03 compute-0 sudo[79288]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:03 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 02 11:32:03 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 02 11:32:03 compute-0 sudo[79336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:03 compute-0 sudo[79336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:03 compute-0 sudo[79336]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:03 compute-0 sudo[79361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 02 11:32:03 compute-0 sudo[79361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:03 compute-0 sudo[79361]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:03 compute-0 sudo[79386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:03 compute-0 sudo[79386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:03 compute-0 sudo[79386]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:03 compute-0 sudo[79411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/etc/ceph
Oct 02 11:32:03 compute-0 sudo[79411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:03 compute-0 sudo[79457]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptfafliqwnmiduxnswhfjymnldrkppgg ; /usr/bin/python3'
Oct 02 11:32:03 compute-0 sudo[79411]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:03 compute-0 sudo[79457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:32:03 compute-0 sudo[79462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:03 compute-0 sudo[79462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:03 compute-0 sudo[79462]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:03 compute-0 sudo[79487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/etc/ceph/ceph.client.admin.keyring.new
Oct 02 11:32:03 compute-0 sudo[79487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:03 compute-0 sudo[79487]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:03 compute-0 python3[79461]: ansible-ansible.legacy.async_status Invoked with jid=j290624481537.78729 mode=status _async_dir=/root/.ansible_async
Oct 02 11:32:03 compute-0 sudo[79457]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:03 compute-0 sudo[79512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:03 compute-0 sudo[79512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:03 compute-0 sudo[79512]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:03 compute-0 sudo[79537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct 02 11:32:03 compute-0 sudo[79537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:03 compute-0 sudo[79537]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:03 compute-0 sudo[79585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:03 compute-0 sudo[79585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:03 compute-0 sudo[79585]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:03 compute-0 sudo[79633]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwurlxetjwewwseaywlbsilqfhdbsmnq ; /usr/bin/python3'
Oct 02 11:32:03 compute-0 sudo[79633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:32:03 compute-0 sudo[79634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/etc/ceph/ceph.client.admin.keyring.new
Oct 02 11:32:03 compute-0 sudo[79634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:03 compute-0 sudo[79634]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:03 compute-0 python3[79642]: ansible-ansible.legacy.async_status Invoked with jid=j290624481537.78729 mode=cleanup _async_dir=/root/.ansible_async
Oct 02 11:32:03 compute-0 sudo[79684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:03 compute-0 sudo[79684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:03 compute-0 sudo[79684]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:03 compute-0 sudo[79633]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:03 compute-0 sudo[79709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/etc/ceph/ceph.client.admin.keyring.new
Oct 02 11:32:03 compute-0 sudo[79709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:03 compute-0 sudo[79709]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:03 compute-0 sudo[79734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:03 compute-0 sudo[79734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:03 compute-0 sudo[79734]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:03 compute-0 ceph-mon[73607]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 11:32:03 compute-0 ceph-mon[73607]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 02 11:32:03 compute-0 sudo[79759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/etc/ceph/ceph.client.admin.keyring.new
Oct 02 11:32:03 compute-0 sudo[79759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:03 compute-0 sudo[79759]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:03 compute-0 sudo[79784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:03 compute-0 sudo[79784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:03 compute-0 sudo[79784]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:04 compute-0 sudo[79809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 02 11:32:04 compute-0 sudo[79809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:04 compute-0 sudo[79809]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:04 compute-0 sudo[79855]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrjtawbkogqihrjsoueewtlrzicytrwj ; /usr/bin/python3'
Oct 02 11:32:04 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.client.admin.keyring
Oct 02 11:32:04 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.client.admin.keyring
Oct 02 11:32:04 compute-0 sudo[79855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:32:04 compute-0 sudo[79859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:04 compute-0 sudo[79859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:04 compute-0 sudo[79859]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:04 compute-0 sudo[79885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config
Oct 02 11:32:04 compute-0 sudo[79885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:04 compute-0 sudo[79885]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:32:04 compute-0 sudo[79910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:04 compute-0 sudo[79910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:04 compute-0 sudo[79910]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:04 compute-0 python3[79860]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:32:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:04 compute-0 sudo[79935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config
Oct 02 11:32:04 compute-0 sudo[79935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:04 compute-0 sudo[79935]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:04 compute-0 sudo[79855]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:04 compute-0 sudo[79962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:04 compute-0 sudo[79962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:04 compute-0 sudo[79962]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:04 compute-0 sudo[79987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.client.admin.keyring.new
Oct 02 11:32:04 compute-0 sudo[79987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:04 compute-0 sudo[79987]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:04 compute-0 sudo[80012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:04 compute-0 sudo[80012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:04 compute-0 sudo[80012]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:04 compute-0 sudo[80037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct 02 11:32:04 compute-0 sudo[80037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:04 compute-0 sudo[80037]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:04 compute-0 sudo[80062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:04 compute-0 sudo[80062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:04 compute-0 sudo[80062]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:04 compute-0 sudo[80087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.client.admin.keyring.new
Oct 02 11:32:04 compute-0 sudo[80133]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyyvwsvxpacpwqeqiucjnvqfkmctqeem ; /usr/bin/python3'
Oct 02 11:32:04 compute-0 sudo[80087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:04 compute-0 sudo[80133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:32:04 compute-0 sudo[80087]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:04 compute-0 sudo[80161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:04 compute-0 sudo[80161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:04 compute-0 sudo[80161]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:04 compute-0 python3[80137]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:32:04 compute-0 sudo[80187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.client.admin.keyring.new
Oct 02 11:32:04 compute-0 sudo[80187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:04 compute-0 sudo[80187]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:04 compute-0 sudo[80218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:04 compute-0 sudo[80218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:04 compute-0 sudo[80218]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:04 compute-0 sudo[80250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.client.admin.keyring.new
Oct 02 11:32:04 compute-0 sudo[80250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:04 compute-0 sudo[80250]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:04 compute-0 podman[80210]: 2025-10-02 11:32:04.772276103 +0000 UTC m=+0.022798990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:32:04 compute-0 sudo[80275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:04 compute-0 sudo[80275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:04 compute-0 sudo[80275]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:04 compute-0 podman[80210]: 2025-10-02 11:32:04.971413566 +0000 UTC m=+0.221936433 container create ddbf5f1eaf1488a95174d9b4e6d4a7819a327be6b62edee138766c1820735e60 (image=quay.io/ceph/ceph:v18, name=condescending_poitras, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 11:32:04 compute-0 sudo[80300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.client.admin.keyring.new /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.client.admin.keyring
Oct 02 11:32:04 compute-0 sudo[80300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:04 compute-0 sudo[80300]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:32:05 compute-0 ceph-mon[73607]: Updating compute-0:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.client.admin.keyring
Oct 02 11:32:05 compute-0 ceph-mon[73607]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:05 compute-0 systemd[1]: Started libpod-conmon-ddbf5f1eaf1488a95174d9b4e6d4a7819a327be6b62edee138766c1820735e60.scope.
Oct 02 11:32:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:32:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952e263308e9d0321246435db8158046a4a42a2f5e2fe3a2d073cef494431749/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952e263308e9d0321246435db8158046a4a42a2f5e2fe3a2d073cef494431749/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952e263308e9d0321246435db8158046a4a42a2f5e2fe3a2d073cef494431749/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:05 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:32:05 compute-0 podman[80210]: 2025-10-02 11:32:05.332805271 +0000 UTC m=+0.583328168 container init ddbf5f1eaf1488a95174d9b4e6d4a7819a327be6b62edee138766c1820735e60 (image=quay.io/ceph/ceph:v18, name=condescending_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 11:32:05 compute-0 podman[80210]: 2025-10-02 11:32:05.34015596 +0000 UTC m=+0.590678827 container start ddbf5f1eaf1488a95174d9b4e6d4a7819a327be6b62edee138766c1820735e60 (image=quay.io/ceph/ceph:v18, name=condescending_poitras, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 11:32:05 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:05 compute-0 podman[80210]: 2025-10-02 11:32:05.414818658 +0000 UTC m=+0.665341605 container attach ddbf5f1eaf1488a95174d9b4e6d4a7819a327be6b62edee138766c1820735e60 (image=quay.io/ceph/ceph:v18, name=condescending_poitras, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:32:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:32:05 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:05 compute-0 ceph-mgr[73901]: [progress INFO root] update: starting ev d07bff9d-122d-482a-a2d7-c6bbc6d11718 (Updating crash deployment (+1 -> 1))
Oct 02 11:32:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Oct 02 11:32:05 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 02 11:32:05 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 02 11:32:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:32:05 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:32:05 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Oct 02 11:32:05 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Oct 02 11:32:05 compute-0 sudo[80331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:05 compute-0 sudo[80331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:05 compute-0 sudo[80331]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:05 compute-0 sudo[80356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:32:05 compute-0 sudo[80356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:05 compute-0 sudo[80356]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:05 compute-0 sudo[80381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:05 compute-0 sudo[80381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:05 compute-0 sudo[80381]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:05 compute-0 sudo[80406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct 02 11:32:05 compute-0 sudo[80406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:05 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 11:32:05 compute-0 condescending_poitras[80327]: 
Oct 02 11:32:05 compute-0 condescending_poitras[80327]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 02 11:32:05 compute-0 systemd[1]: libpod-ddbf5f1eaf1488a95174d9b4e6d4a7819a327be6b62edee138766c1820735e60.scope: Deactivated successfully.
Oct 02 11:32:05 compute-0 podman[80210]: 2025-10-02 11:32:05.874096358 +0000 UTC m=+1.124619225 container died ddbf5f1eaf1488a95174d9b4e6d4a7819a327be6b62edee138766c1820735e60 (image=quay.io/ceph/ceph:v18, name=condescending_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:32:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-952e263308e9d0321246435db8158046a4a42a2f5e2fe3a2d073cef494431749-merged.mount: Deactivated successfully.
Oct 02 11:32:05 compute-0 podman[80493]: 2025-10-02 11:32:05.970160508 +0000 UTC m=+0.080596102 container create c72cb1c297961f7240ecd826dc7d5322be4e6a5deb1f9a2c6b03d20b0a47fa19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:32:06 compute-0 podman[80210]: 2025-10-02 11:32:06.004016437 +0000 UTC m=+1.254539304 container remove ddbf5f1eaf1488a95174d9b4e6d4a7819a327be6b62edee138766c1820735e60 (image=quay.io/ceph/ceph:v18, name=condescending_poitras, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 11:32:06 compute-0 podman[80493]: 2025-10-02 11:32:05.910637352 +0000 UTC m=+0.021072976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:32:06 compute-0 systemd[1]: Started libpod-conmon-c72cb1c297961f7240ecd826dc7d5322be4e6a5deb1f9a2c6b03d20b0a47fa19.scope.
Oct 02 11:32:06 compute-0 systemd[1]: libpod-conmon-ddbf5f1eaf1488a95174d9b4e6d4a7819a327be6b62edee138766c1820735e60.scope: Deactivated successfully.
Oct 02 11:32:06 compute-0 sudo[80133]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:32:06 compute-0 podman[80493]: 2025-10-02 11:32:06.053370125 +0000 UTC m=+0.163805769 container init c72cb1c297961f7240ecd826dc7d5322be4e6a5deb1f9a2c6b03d20b0a47fa19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 11:32:06 compute-0 podman[80493]: 2025-10-02 11:32:06.059023034 +0000 UTC m=+0.169458628 container start c72cb1c297961f7240ecd826dc7d5322be4e6a5deb1f9a2c6b03d20b0a47fa19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jepsen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 11:32:06 compute-0 objective_jepsen[80522]: 167 167
Oct 02 11:32:06 compute-0 systemd[1]: libpod-c72cb1c297961f7240ecd826dc7d5322be4e6a5deb1f9a2c6b03d20b0a47fa19.scope: Deactivated successfully.
Oct 02 11:32:06 compute-0 podman[80493]: 2025-10-02 11:32:06.065402869 +0000 UTC m=+0.175838493 container attach c72cb1c297961f7240ecd826dc7d5322be4e6a5deb1f9a2c6b03d20b0a47fa19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jepsen, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 11:32:06 compute-0 podman[80493]: 2025-10-02 11:32:06.066420404 +0000 UTC m=+0.176856008 container died c72cb1c297961f7240ecd826dc7d5322be4e6a5deb1f9a2c6b03d20b0a47fa19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jepsen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:32:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-0bb2f85642bc5c10c46ffd13268fded28c5a9674ad7a3ae5ec60873db3c84c84-merged.mount: Deactivated successfully.
Oct 02 11:32:06 compute-0 podman[80493]: 2025-10-02 11:32:06.112336348 +0000 UTC m=+0.222771942 container remove c72cb1c297961f7240ecd826dc7d5322be4e6a5deb1f9a2c6b03d20b0a47fa19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 11:32:06 compute-0 systemd[1]: libpod-conmon-c72cb1c297961f7240ecd826dc7d5322be4e6a5deb1f9a2c6b03d20b0a47fa19.scope: Deactivated successfully.
Oct 02 11:32:06 compute-0 systemd[1]: Reloading.
Oct 02 11:32:06 compute-0 systemd-rc-local-generator[80566]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:32:06 compute-0 systemd-sysv-generator[80571]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:32:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:06 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:06 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:06 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:06 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 02 11:32:06 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 02 11:32:06 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:32:06 compute-0 ceph-mon[73607]: Deploying daemon crash.compute-0 on compute-0
Oct 02 11:32:06 compute-0 sudo[80601]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjhgbraxjezzxhugnskhpwlchimdgbik ; /usr/bin/python3'
Oct 02 11:32:06 compute-0 sudo[80601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:32:06 compute-0 systemd[1]: Reloading.
Oct 02 11:32:06 compute-0 systemd-rc-local-generator[80638]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:32:06 compute-0 systemd-sysv-generator[80641]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:32:06 compute-0 python3[80605]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:32:06 compute-0 podman[80646]: 2025-10-02 11:32:06.580907275 +0000 UTC m=+0.039196210 container create aca227f96ed46d6a9d7f148f32ad2084aace5dec3271b356b86c5da3e2ba8880 (image=quay.io/ceph/ceph:v18, name=gifted_edison, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 11:32:06 compute-0 podman[80646]: 2025-10-02 11:32:06.565542529 +0000 UTC m=+0.023831484 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:32:06 compute-0 systemd[1]: Started libpod-conmon-aca227f96ed46d6a9d7f148f32ad2084aace5dec3271b356b86c5da3e2ba8880.scope.
Oct 02 11:32:06 compute-0 systemd[1]: Starting Ceph crash.compute-0 for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2...
Oct 02 11:32:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:32:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1236157a9347ebb7d330160da9a730fbe73f49abb70587af5929e58f5324ad0f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1236157a9347ebb7d330160da9a730fbe73f49abb70587af5929e58f5324ad0f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1236157a9347ebb7d330160da9a730fbe73f49abb70587af5929e58f5324ad0f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:06 compute-0 podman[80646]: 2025-10-02 11:32:06.703136467 +0000 UTC m=+0.161425422 container init aca227f96ed46d6a9d7f148f32ad2084aace5dec3271b356b86c5da3e2ba8880 (image=quay.io/ceph/ceph:v18, name=gifted_edison, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 11:32:06 compute-0 podman[80646]: 2025-10-02 11:32:06.710266711 +0000 UTC m=+0.168555646 container start aca227f96ed46d6a9d7f148f32ad2084aace5dec3271b356b86c5da3e2ba8880 (image=quay.io/ceph/ceph:v18, name=gifted_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 11:32:06 compute-0 podman[80646]: 2025-10-02 11:32:06.718918513 +0000 UTC m=+0.177207458 container attach aca227f96ed46d6a9d7f148f32ad2084aace5dec3271b356b86c5da3e2ba8880 (image=quay.io/ceph/ceph:v18, name=gifted_edison, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:32:06 compute-0 podman[80713]: 2025-10-02 11:32:06.882691151 +0000 UTC m=+0.055665384 container create e8939fb73988f03abbc1301a5f045bb5f88f562171f656a073b5150467686f52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:32:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d3a26fd826ac2a6b857809c19d4b4289fc34650a0707fdff69bb9ed4f270076/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d3a26fd826ac2a6b857809c19d4b4289fc34650a0707fdff69bb9ed4f270076/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d3a26fd826ac2a6b857809c19d4b4289fc34650a0707fdff69bb9ed4f270076/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d3a26fd826ac2a6b857809c19d4b4289fc34650a0707fdff69bb9ed4f270076/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:06 compute-0 podman[80713]: 2025-10-02 11:32:06.856387147 +0000 UTC m=+0.029361400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:32:06 compute-0 podman[80713]: 2025-10-02 11:32:06.956323263 +0000 UTC m=+0.129297546 container init e8939fb73988f03abbc1301a5f045bb5f88f562171f656a073b5150467686f52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 11:32:06 compute-0 podman[80713]: 2025-10-02 11:32:06.962242408 +0000 UTC m=+0.135216651 container start e8939fb73988f03abbc1301a5f045bb5f88f562171f656a073b5150467686f52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:32:06 compute-0 bash[80713]: e8939fb73988f03abbc1301a5f045bb5f88f562171f656a073b5150467686f52
Oct 02 11:32:06 compute-0 systemd[1]: Started Ceph crash.compute-0 for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2.
Oct 02 11:32:07 compute-0 ansible-async_wrapper.py[78791]: Done in kid B.
Oct 02 11:32:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:32:07 compute-0 sudo[80406]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:07 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:32:07 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 02 11:32:07 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:07 compute-0 ceph-mgr[73901]: [progress INFO root] complete: finished ev d07bff9d-122d-482a-a2d7-c6bbc6d11718 (Updating crash deployment (+1 -> 1))
Oct 02 11:32:07 compute-0 ceph-mgr[73901]: [progress INFO root] Completed event d07bff9d-122d-482a-a2d7-c6bbc6d11718 (Updating crash deployment (+1 -> 1)) in 2 seconds
Oct 02 11:32:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 02 11:32:07 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:07 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 382c3aab-ab08-42c4-8afe-d78f1e3ff7d5 does not exist
Oct 02 11:32:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 02 11:32:07 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:07 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev c369a76b-ec79-4a3a-adbf-2a3b418267fe does not exist
Oct 02 11:32:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 02 11:32:07 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:07 compute-0 sudo[80752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:07 compute-0 sudo[80752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:07 compute-0 sudo[80752]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:07 compute-0 sudo[80777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:32:07 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-0[80728]: INFO:ceph-crash:pinging cluster to exercise our key
Oct 02 11:32:07 compute-0 sudo[80777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:07 compute-0 sudo[80777]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:07 compute-0 sudo[80804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:07 compute-0 sudo[80804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:07 compute-0 sudo[80804]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Oct 02 11:32:07 compute-0 sudo[80829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:32:07 compute-0 sudo[80829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:07 compute-0 sudo[80829]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:07 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3222752888' entity='client.admin' 
Oct 02 11:32:07 compute-0 sudo[80855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:07 compute-0 sudo[80855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:07 compute-0 sudo[80855]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:07 compute-0 systemd[1]: libpod-aca227f96ed46d6a9d7f148f32ad2084aace5dec3271b356b86c5da3e2ba8880.scope: Deactivated successfully.
Oct 02 11:32:07 compute-0 podman[80646]: 2025-10-02 11:32:07.329494355 +0000 UTC m=+0.787783300 container died aca227f96ed46d6a9d7f148f32ad2084aace5dec3271b356b86c5da3e2ba8880 (image=quay.io/ceph/ceph:v18, name=gifted_edison, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 11:32:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-1236157a9347ebb7d330160da9a730fbe73f49abb70587af5929e58f5324ad0f-merged.mount: Deactivated successfully.
Oct 02 11:32:07 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-0[80728]: 2025-10-02T11:32:07.366+0000 7f839aae0640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 02 11:32:07 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-0[80728]: 2025-10-02T11:32:07.366+0000 7f839aae0640 -1 AuthRegistry(0x7f8394067440) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 02 11:32:07 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-0[80728]: 2025-10-02T11:32:07.367+0000 7f839aae0640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 02 11:32:07 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-0[80728]: 2025-10-02T11:32:07.368+0000 7f839aae0640 -1 AuthRegistry(0x7f839aadf000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 02 11:32:07 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-0[80728]: 2025-10-02T11:32:07.369+0000 7f8398855640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct 02 11:32:07 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-0[80728]: 2025-10-02T11:32:07.369+0000 7f839aae0640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct 02 11:32:07 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-0[80728]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct 02 11:32:07 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-0[80728]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct 02 11:32:07 compute-0 sudo[80882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 11:32:07 compute-0 podman[80646]: 2025-10-02 11:32:07.382520673 +0000 UTC m=+0.840809608 container remove aca227f96ed46d6a9d7f148f32ad2084aace5dec3271b356b86c5da3e2ba8880 (image=quay.io/ceph/ceph:v18, name=gifted_edison, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:32:07 compute-0 sudo[80882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:07 compute-0 systemd[1]: libpod-conmon-aca227f96ed46d6a9d7f148f32ad2084aace5dec3271b356b86c5da3e2ba8880.scope: Deactivated successfully.
Oct 02 11:32:07 compute-0 sudo[80601]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:07 compute-0 ceph-mon[73607]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 11:32:07 compute-0 ceph-mon[73607]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:07 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:07 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:07 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:07 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:07 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:07 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3222752888' entity='client.admin' 
Oct 02 11:32:07 compute-0 sudo[80957]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljvtcjgftnihffolrfvhfnvchrhqprzk ; /usr/bin/python3'
Oct 02 11:32:07 compute-0 sudo[80957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:32:07 compute-0 ceph-mgr[73901]: [progress INFO root] Writing back 1 completed events
Oct 02 11:32:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 11:32:07 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:07 compute-0 python3[80963]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:32:07 compute-0 podman[81017]: 2025-10-02 11:32:07.747223458 +0000 UTC m=+0.034142226 container create 601d1c39dd4094ae40a7159088c0378bd2acae2361d65cb2646697a5d2d53cb6 (image=quay.io/ceph/ceph:v18, name=pedantic_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:32:07 compute-0 systemd[1]: Started libpod-conmon-601d1c39dd4094ae40a7159088c0378bd2acae2361d65cb2646697a5d2d53cb6.scope.
Oct 02 11:32:07 compute-0 podman[81017]: 2025-10-02 11:32:07.731795451 +0000 UTC m=+0.018714239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:32:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:32:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd4645231e0f396b001b01141cf8f5fc90788025202e9169a3559d4e5cb3dd3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd4645231e0f396b001b01141cf8f5fc90788025202e9169a3559d4e5cb3dd3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd4645231e0f396b001b01141cf8f5fc90788025202e9169a3559d4e5cb3dd3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:07 compute-0 podman[81031]: 2025-10-02 11:32:07.863178296 +0000 UTC m=+0.130667138 container exec 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 11:32:07 compute-0 podman[81017]: 2025-10-02 11:32:07.881868104 +0000 UTC m=+0.168786882 container init 601d1c39dd4094ae40a7159088c0378bd2acae2361d65cb2646697a5d2d53cb6 (image=quay.io/ceph/ceph:v18, name=pedantic_fermat, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:32:07 compute-0 podman[81017]: 2025-10-02 11:32:07.888107776 +0000 UTC m=+0.175026544 container start 601d1c39dd4094ae40a7159088c0378bd2acae2361d65cb2646697a5d2d53cb6 (image=quay.io/ceph/ceph:v18, name=pedantic_fermat, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:32:07 compute-0 podman[81017]: 2025-10-02 11:32:07.966510105 +0000 UTC m=+0.253428873 container attach 601d1c39dd4094ae40a7159088c0378bd2acae2361d65cb2646697a5d2d53cb6 (image=quay.io/ceph/ceph:v18, name=pedantic_fermat, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 11:32:07 compute-0 podman[81031]: 2025-10-02 11:32:07.967309715 +0000 UTC m=+0.234798527 container exec_died 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 11:32:08 compute-0 sudo[80882]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:32:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:32:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:32:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:32:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:32:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:32:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:32:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:08 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev e2bc28ef-005a-4efe-b9b4-2ee2d9511e8c does not exist
Oct 02 11:32:08 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 0753df42-bae8-4ef1-b544-0f34a247a9f8 does not exist
Oct 02 11:32:08 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 5656aa23-995f-4f4d-97e9-d097e3ff0d15 does not exist
Oct 02 11:32:08 compute-0 sudo[81129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:08 compute-0 sudo[81129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:08 compute-0 sudo[81129]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:08 compute-0 sudo[81154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:32:08 compute-0 sudo[81154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:08 compute-0 sudo[81154]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Oct 02 11:32:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Oct 02 11:32:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Oct 02 11:32:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Oct 02 11:32:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:08 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Oct 02 11:32:08 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Oct 02 11:32:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Oct 02 11:32:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 02 11:32:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Oct 02 11:32:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 02 11:32:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:32:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:32:08 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Oct 02 11:32:08 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Oct 02 11:32:08 compute-0 sudo[81179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:08 compute-0 sudo[81179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:08 compute-0 sudo[81179]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Oct 02 11:32:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/433852697' entity='client.admin' 
Oct 02 11:32:08 compute-0 systemd[1]: libpod-601d1c39dd4094ae40a7159088c0378bd2acae2361d65cb2646697a5d2d53cb6.scope: Deactivated successfully.
Oct 02 11:32:08 compute-0 podman[81017]: 2025-10-02 11:32:08.452819676 +0000 UTC m=+0.739738444 container died 601d1c39dd4094ae40a7159088c0378bd2acae2361d65cb2646697a5d2d53cb6 (image=quay.io/ceph/ceph:v18, name=pedantic_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:32:08 compute-0 sudo[81204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:32:08 compute-0 sudo[81204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:08 compute-0 sudo[81204]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfd4645231e0f396b001b01141cf8f5fc90788025202e9169a3559d4e5cb3dd3-merged.mount: Deactivated successfully.
Oct 02 11:32:08 compute-0 podman[81017]: 2025-10-02 11:32:08.507935386 +0000 UTC m=+0.794854144 container remove 601d1c39dd4094ae40a7159088c0378bd2acae2361d65cb2646697a5d2d53cb6 (image=quay.io/ceph/ceph:v18, name=pedantic_fermat, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:32:08 compute-0 systemd[1]: libpod-conmon-601d1c39dd4094ae40a7159088c0378bd2acae2361d65cb2646697a5d2d53cb6.scope: Deactivated successfully.
Oct 02 11:32:08 compute-0 sudo[80957]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:08 compute-0 sudo[81243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:08 compute-0 sudo[81243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:08 compute-0 sudo[81243]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:08 compute-0 sudo[81269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct 02 11:32:08 compute-0 sudo[81269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:32:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:32:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:08 compute-0 ceph-mon[73607]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:08 compute-0 ceph-mon[73607]: Reconfiguring mon.compute-0 (unknown last config time)...
Oct 02 11:32:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 02 11:32:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 02 11:32:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:32:08 compute-0 ceph-mon[73607]: Reconfiguring daemon mon.compute-0 on compute-0
Oct 02 11:32:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/433852697' entity='client.admin' 
Oct 02 11:32:08 compute-0 sudo[81317]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojdrqbawgytbpgqdxfkgtfqjjrhjzllz ; /usr/bin/python3'
Oct 02 11:32:08 compute-0 sudo[81317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:32:08 compute-0 podman[81335]: 2025-10-02 11:32:08.828234315 +0000 UTC m=+0.038729119 container create 7e294ffa950618d4e20d7281c91af958d36e4910f3e8e9f7c23f42818beb2e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cray, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:32:08 compute-0 python3[81319]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:32:08 compute-0 systemd[1]: Started libpod-conmon-7e294ffa950618d4e20d7281c91af958d36e4910f3e8e9f7c23f42818beb2e50.scope.
Oct 02 11:32:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:32:08 compute-0 podman[81335]: 2025-10-02 11:32:08.899596861 +0000 UTC m=+0.110091695 container init 7e294ffa950618d4e20d7281c91af958d36e4910f3e8e9f7c23f42818beb2e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cray, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:32:08 compute-0 podman[81335]: 2025-10-02 11:32:08.807463976 +0000 UTC m=+0.017958790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:32:08 compute-0 podman[81351]: 2025-10-02 11:32:08.905233239 +0000 UTC m=+0.042233715 container create 9b19df4da3ee7e096dcf58565e7a20603b9a079938f4a420bbecc92432475eae (image=quay.io/ceph/ceph:v18, name=optimistic_ellis, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 11:32:08 compute-0 podman[81335]: 2025-10-02 11:32:08.90692482 +0000 UTC m=+0.117419624 container start 7e294ffa950618d4e20d7281c91af958d36e4910f3e8e9f7c23f42818beb2e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cray, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:32:08 compute-0 beautiful_cray[81355]: 167 167
Oct 02 11:32:08 compute-0 podman[81335]: 2025-10-02 11:32:08.91061168 +0000 UTC m=+0.121106484 container attach 7e294ffa950618d4e20d7281c91af958d36e4910f3e8e9f7c23f42818beb2e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:32:08 compute-0 systemd[1]: libpod-7e294ffa950618d4e20d7281c91af958d36e4910f3e8e9f7c23f42818beb2e50.scope: Deactivated successfully.
Oct 02 11:32:08 compute-0 conmon[81355]: conmon 7e294ffa950618d4e20d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7e294ffa950618d4e20d7281c91af958d36e4910f3e8e9f7c23f42818beb2e50.scope/container/memory.events
Oct 02 11:32:08 compute-0 podman[81335]: 2025-10-02 11:32:08.912092326 +0000 UTC m=+0.122587130 container died 7e294ffa950618d4e20d7281c91af958d36e4910f3e8e9f7c23f42818beb2e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cray, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:32:08 compute-0 systemd[1]: Started libpod-conmon-9b19df4da3ee7e096dcf58565e7a20603b9a079938f4a420bbecc92432475eae.scope.
Oct 02 11:32:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2175d648209e5ed6af65b332ad8cb600a0d1f4f72611e2a754e29aa69600f9f-merged.mount: Deactivated successfully.
Oct 02 11:32:08 compute-0 podman[81335]: 2025-10-02 11:32:08.96410038 +0000 UTC m=+0.174595184 container remove 7e294ffa950618d4e20d7281c91af958d36e4910f3e8e9f7c23f42818beb2e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 11:32:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:32:08 compute-0 systemd[1]: libpod-conmon-7e294ffa950618d4e20d7281c91af958d36e4910f3e8e9f7c23f42818beb2e50.scope: Deactivated successfully.
Oct 02 11:32:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccbd6efdd442fc8a7a1e5d1a653b5024d074be76febc50e912f0fa122c25d574/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccbd6efdd442fc8a7a1e5d1a653b5024d074be76febc50e912f0fa122c25d574/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccbd6efdd442fc8a7a1e5d1a653b5024d074be76febc50e912f0fa122c25d574/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:08 compute-0 podman[81351]: 2025-10-02 11:32:08.883475076 +0000 UTC m=+0.020475572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:32:08 compute-0 podman[81351]: 2025-10-02 11:32:08.988543647 +0000 UTC m=+0.125544153 container init 9b19df4da3ee7e096dcf58565e7a20603b9a079938f4a420bbecc92432475eae (image=quay.io/ceph/ceph:v18, name=optimistic_ellis, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:32:08 compute-0 podman[81351]: 2025-10-02 11:32:08.996570084 +0000 UTC m=+0.133570560 container start 9b19df4da3ee7e096dcf58565e7a20603b9a079938f4a420bbecc92432475eae (image=quay.io/ceph/ceph:v18, name=optimistic_ellis, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:32:08 compute-0 podman[81351]: 2025-10-02 11:32:08.999860324 +0000 UTC m=+0.136860820 container attach 9b19df4da3ee7e096dcf58565e7a20603b9a079938f4a420bbecc92432475eae (image=quay.io/ceph/ceph:v18, name=optimistic_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 11:32:09 compute-0 sudo[81269]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:32:09 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:32:09 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:09 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.fmcstn (unknown last config time)...
Oct 02 11:32:09 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.fmcstn (unknown last config time)...
Oct 02 11:32:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.fmcstn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct 02 11:32:09 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.fmcstn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 02 11:32:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 11:32:09 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 11:32:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:32:09 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:32:09 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.fmcstn on compute-0
Oct 02 11:32:09 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.fmcstn on compute-0
Oct 02 11:32:09 compute-0 sudo[81391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:09 compute-0 sudo[81391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:09 compute-0 sudo[81391]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:09 compute-0 sudo[81416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:32:09 compute-0 sudo[81416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:09 compute-0 sudo[81416]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:09 compute-0 sudo[81441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:09 compute-0 sudo[81441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:09 compute-0 sudo[81441]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:32:09 compute-0 sudo[81466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct 02 11:32:09 compute-0 sudo[81466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:09 compute-0 podman[81524]: 2025-10-02 11:32:09.475037833 +0000 UTC m=+0.043808903 container create c24632eac07ed4911ce381f68121ddd1e1dee7f30ab3fb118a78bb0f6267f96e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pascal, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 11:32:09 compute-0 systemd[1]: Started libpod-conmon-c24632eac07ed4911ce381f68121ddd1e1dee7f30ab3fb118a78bb0f6267f96e.scope.
Oct 02 11:32:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:32:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Oct 02 11:32:09 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/513578869' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct 02 11:32:09 compute-0 podman[81524]: 2025-10-02 11:32:09.544630286 +0000 UTC m=+0.113401366 container init c24632eac07ed4911ce381f68121ddd1e1dee7f30ab3fb118a78bb0f6267f96e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pascal, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 11:32:09 compute-0 podman[81524]: 2025-10-02 11:32:09.45201892 +0000 UTC m=+0.020790010 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:32:09 compute-0 podman[81524]: 2025-10-02 11:32:09.550594952 +0000 UTC m=+0.119366022 container start c24632eac07ed4911ce381f68121ddd1e1dee7f30ab3fb118a78bb0f6267f96e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pascal, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 11:32:09 compute-0 eloquent_pascal[81541]: 167 167
Oct 02 11:32:09 compute-0 systemd[1]: libpod-c24632eac07ed4911ce381f68121ddd1e1dee7f30ab3fb118a78bb0f6267f96e.scope: Deactivated successfully.
Oct 02 11:32:09 compute-0 podman[81524]: 2025-10-02 11:32:09.556819255 +0000 UTC m=+0.125590325 container attach c24632eac07ed4911ce381f68121ddd1e1dee7f30ab3fb118a78bb0f6267f96e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pascal, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 11:32:09 compute-0 podman[81524]: 2025-10-02 11:32:09.557248215 +0000 UTC m=+0.126019285 container died c24632eac07ed4911ce381f68121ddd1e1dee7f30ab3fb118a78bb0f6267f96e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pascal, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 11:32:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b14e2ac0c36743e1aac549bcf73332893084a239927bcac5779a84f3af9f92e-merged.mount: Deactivated successfully.
Oct 02 11:32:09 compute-0 podman[81524]: 2025-10-02 11:32:09.613284757 +0000 UTC m=+0.182055827 container remove c24632eac07ed4911ce381f68121ddd1e1dee7f30ab3fb118a78bb0f6267f96e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pascal, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 11:32:09 compute-0 systemd[1]: libpod-conmon-c24632eac07ed4911ce381f68121ddd1e1dee7f30ab3fb118a78bb0f6267f96e.scope: Deactivated successfully.
Oct 02 11:32:09 compute-0 sudo[81466]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:32:09 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:32:09 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:32:09 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:32:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:32:09 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:32:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:32:09 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:09 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 9eb0c6bc-0e16-4862-8d68-cd24c0cf6a52 does not exist
Oct 02 11:32:09 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 0293c353-2068-4dda-99bd-7592c6e45ba2 does not exist
Oct 02 11:32:09 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev e64f4496-6862-4400-b293-9da5b4d693f7 does not exist
Oct 02 11:32:09 compute-0 sudo[81562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:09 compute-0 sudo[81562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:09 compute-0 sudo[81562]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:09 compute-0 sudo[81587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:32:09 compute-0 sudo[81587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:09 compute-0 sudo[81587]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:10 compute-0 ceph-mon[73607]: Reconfiguring mgr.compute-0.fmcstn (unknown last config time)...
Oct 02 11:32:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.fmcstn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 02 11:32:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 11:32:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:32:10 compute-0 ceph-mon[73607]: Reconfiguring daemon mgr.compute-0.fmcstn on compute-0
Oct 02 11:32:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/513578869' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct 02 11:32:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:32:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:32:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Oct 02 11:32:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 11:32:10 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/513578869' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct 02 11:32:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Oct 02 11:32:10 compute-0 optimistic_ellis[81385]: set require_min_compat_client to mimic
Oct 02 11:32:10 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Oct 02 11:32:10 compute-0 systemd[1]: libpod-9b19df4da3ee7e096dcf58565e7a20603b9a079938f4a420bbecc92432475eae.scope: Deactivated successfully.
Oct 02 11:32:10 compute-0 podman[81351]: 2025-10-02 11:32:10.248005081 +0000 UTC m=+1.385005557 container died 9b19df4da3ee7e096dcf58565e7a20603b9a079938f4a420bbecc92432475eae (image=quay.io/ceph/ceph:v18, name=optimistic_ellis, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 11:32:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccbd6efdd442fc8a7a1e5d1a653b5024d074be76febc50e912f0fa122c25d574-merged.mount: Deactivated successfully.
Oct 02 11:32:10 compute-0 podman[81351]: 2025-10-02 11:32:10.293000552 +0000 UTC m=+1.430001028 container remove 9b19df4da3ee7e096dcf58565e7a20603b9a079938f4a420bbecc92432475eae (image=quay.io/ceph/ceph:v18, name=optimistic_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:32:10 compute-0 systemd[1]: libpod-conmon-9b19df4da3ee7e096dcf58565e7a20603b9a079938f4a420bbecc92432475eae.scope: Deactivated successfully.
Oct 02 11:32:10 compute-0 sudo[81317]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:10 compute-0 sudo[81649]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eddsetqktejijkmslmjwrhtjxywdgdoi ; /usr/bin/python3'
Oct 02 11:32:10 compute-0 sudo[81649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:32:10 compute-0 python3[81651]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:32:10 compute-0 podman[81652]: 2025-10-02 11:32:10.966120754 +0000 UTC m=+0.057138069 container create cd8a90b3958b94547e257c4bf2633ebdeb5b87c174223708fdd04709de510038 (image=quay.io/ceph/ceph:v18, name=cool_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 11:32:10 compute-0 systemd[1]: Started libpod-conmon-cd8a90b3958b94547e257c4bf2633ebdeb5b87c174223708fdd04709de510038.scope.
Oct 02 11:32:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:32:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09e8a48f69dbcf86fe9859d922ca4964a32610d297f50213fa43ccc1faf2fec2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09e8a48f69dbcf86fe9859d922ca4964a32610d297f50213fa43ccc1faf2fec2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09e8a48f69dbcf86fe9859d922ca4964a32610d297f50213fa43ccc1faf2fec2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:11 compute-0 podman[81652]: 2025-10-02 11:32:10.928915004 +0000 UTC m=+0.019932339 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:32:11 compute-0 podman[81652]: 2025-10-02 11:32:11.029774812 +0000 UTC m=+0.120792147 container init cd8a90b3958b94547e257c4bf2633ebdeb5b87c174223708fdd04709de510038 (image=quay.io/ceph/ceph:v18, name=cool_lederberg, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 11:32:11 compute-0 podman[81652]: 2025-10-02 11:32:11.035165504 +0000 UTC m=+0.126182809 container start cd8a90b3958b94547e257c4bf2633ebdeb5b87c174223708fdd04709de510038 (image=quay.io/ceph/ceph:v18, name=cool_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 11:32:11 compute-0 podman[81652]: 2025-10-02 11:32:11.039282895 +0000 UTC m=+0.130300210 container attach cd8a90b3958b94547e257c4bf2633ebdeb5b87c174223708fdd04709de510038 (image=quay.io/ceph/ceph:v18, name=cool_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 11:32:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/513578869' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct 02 11:32:11 compute-0 ceph-mon[73607]: osdmap e3: 0 total, 0 up, 0 in
Oct 02 11:32:11 compute-0 ceph-mon[73607]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:11 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:32:11 compute-0 sudo[81691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:11 compute-0 sudo[81691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:11 compute-0 sudo[81691]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:11 compute-0 sudo[81716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:32:11 compute-0 sudo[81716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:11 compute-0 sudo[81716]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:11 compute-0 sudo[81741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:11 compute-0 sudo[81741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:11 compute-0 sudo[81741]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:11 compute-0 sudo[81766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Oct 02 11:32:11 compute-0 sudo[81766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:12 compute-0 sudo[81766]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 02 11:32:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 02 11:32:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 02 11:32:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 02 11:32:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:12 compute-0 ceph-mgr[73901]: [cephadm INFO root] Added host compute-0
Oct 02 11:32:12 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Added host compute-0
Oct 02 11:32:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:32:12 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:32:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:32:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:32:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:32:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:12 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 49cfb7f7-f111-4662-bcf1-c5d31696d00c does not exist
Oct 02 11:32:12 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 8f0da8f8-b699-4ab9-b5fc-6c52cf3df285 does not exist
Oct 02 11:32:12 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 98399016-b07c-4276-88ee-39b5f730633f does not exist
Oct 02 11:32:12 compute-0 sudo[81812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:12 compute-0 sudo[81812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:12 compute-0 sudo[81812]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:12 compute-0 sudo[81837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:32:12 compute-0 sudo[81837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:12 compute-0 sudo[81837]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:32:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:32:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:32:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:32:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:32:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:32:13 compute-0 ceph-mon[73607]: from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:32:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:13 compute-0 ceph-mon[73607]: Added host compute-0
Oct 02 11:32:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:32:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:32:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:13 compute-0 ceph-mon[73607]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:13 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Oct 02 11:32:13 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Oct 02 11:32:14 compute-0 ceph-mon[73607]: Deploying cephadm binary to compute-1
Oct 02 11:32:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:32:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:15 compute-0 ceph-mon[73607]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 02 11:32:16 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:16 compute-0 ceph-mgr[73901]: [cephadm INFO root] Added host compute-1
Oct 02 11:32:16 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Added host compute-1
Oct 02 11:32:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:32:17 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:17 compute-0 ceph-mon[73607]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:17 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:17 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:32:17 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:18 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Oct 02 11:32:18 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Oct 02 11:32:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:18 compute-0 ceph-mon[73607]: Added host compute-1
Oct 02 11:32:18 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:32:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:32:19 compute-0 ceph-mon[73607]: Deploying cephadm binary to compute-2
Oct 02 11:32:19 compute-0 ceph-mon[73607]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:21 compute-0 ceph-mon[73607]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 02 11:32:22 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:22 compute-0 ceph-mgr[73901]: [cephadm INFO root] Added host compute-2
Oct 02 11:32:22 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Added host compute-2
Oct 02 11:32:22 compute-0 ceph-mgr[73901]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Oct 02 11:32:22 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Oct 02 11:32:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 02 11:32:22 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:22 compute-0 ceph-mgr[73901]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Oct 02 11:32:22 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Oct 02 11:32:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 02 11:32:22 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:22 compute-0 ceph-mgr[73901]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Oct 02 11:32:22 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Oct 02 11:32:22 compute-0 ceph-mgr[73901]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Oct 02 11:32:22 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Oct 02 11:32:22 compute-0 ceph-mgr[73901]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Oct 02 11:32:22 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Oct 02 11:32:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Oct 02 11:32:22 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:22 compute-0 cool_lederberg[81667]: Added host 'compute-0' with addr '192.168.122.100'
Oct 02 11:32:22 compute-0 cool_lederberg[81667]: Added host 'compute-1' with addr '192.168.122.101'
Oct 02 11:32:22 compute-0 cool_lederberg[81667]: Added host 'compute-2' with addr '192.168.122.102'
Oct 02 11:32:22 compute-0 cool_lederberg[81667]: Scheduled mon update...
Oct 02 11:32:22 compute-0 cool_lederberg[81667]: Scheduled mgr update...
Oct 02 11:32:22 compute-0 cool_lederberg[81667]: Scheduled osd.default_drive_group update...
Oct 02 11:32:22 compute-0 systemd[1]: libpod-cd8a90b3958b94547e257c4bf2633ebdeb5b87c174223708fdd04709de510038.scope: Deactivated successfully.
Oct 02 11:32:22 compute-0 podman[81652]: 2025-10-02 11:32:22.063305277 +0000 UTC m=+11.154322592 container died cd8a90b3958b94547e257c4bf2633ebdeb5b87c174223708fdd04709de510038 (image=quay.io/ceph/ceph:v18, name=cool_lederberg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 11:32:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-09e8a48f69dbcf86fe9859d922ca4964a32610d297f50213fa43ccc1faf2fec2-merged.mount: Deactivated successfully.
Oct 02 11:32:22 compute-0 podman[81652]: 2025-10-02 11:32:22.110680767 +0000 UTC m=+11.201698092 container remove cd8a90b3958b94547e257c4bf2633ebdeb5b87c174223708fdd04709de510038 (image=quay.io/ceph/ceph:v18, name=cool_lederberg, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:32:22 compute-0 systemd[1]: libpod-conmon-cd8a90b3958b94547e257c4bf2633ebdeb5b87c174223708fdd04709de510038.scope: Deactivated successfully.
Oct 02 11:32:22 compute-0 sudo[81649]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:22 compute-0 sudo[81897]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bspxsdibkahgqlptkrayyaofrpcwirxb ; /usr/bin/python3'
Oct 02 11:32:22 compute-0 sudo[81897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:32:22 compute-0 python3[81899]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:32:22 compute-0 podman[81901]: 2025-10-02 11:32:22.597953912 +0000 UTC m=+0.041967608 container create 6da4570074ec144ca82bdb8a2f772efd666debb2f2617f2b5c577889318863e9 (image=quay.io/ceph/ceph:v18, name=youthful_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:32:22 compute-0 systemd[1]: Started libpod-conmon-6da4570074ec144ca82bdb8a2f772efd666debb2f2617f2b5c577889318863e9.scope.
Oct 02 11:32:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:32:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56baa6c02237702fdb8ca1fc5c1c9de35c8b38e93fe1eeca9dd8faa67c2dec60/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56baa6c02237702fdb8ca1fc5c1c9de35c8b38e93fe1eeca9dd8faa67c2dec60/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56baa6c02237702fdb8ca1fc5c1c9de35c8b38e93fe1eeca9dd8faa67c2dec60/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:22 compute-0 podman[81901]: 2025-10-02 11:32:22.670259522 +0000 UTC m=+0.114273238 container init 6da4570074ec144ca82bdb8a2f772efd666debb2f2617f2b5c577889318863e9 (image=quay.io/ceph/ceph:v18, name=youthful_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 11:32:22 compute-0 podman[81901]: 2025-10-02 11:32:22.574454247 +0000 UTC m=+0.018467973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:32:22 compute-0 podman[81901]: 2025-10-02 11:32:22.676926075 +0000 UTC m=+0.120939771 container start 6da4570074ec144ca82bdb8a2f772efd666debb2f2617f2b5c577889318863e9 (image=quay.io/ceph/ceph:v18, name=youthful_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:32:22 compute-0 podman[81901]: 2025-10-02 11:32:22.67996692 +0000 UTC m=+0.123980626 container attach 6da4570074ec144ca82bdb8a2f772efd666debb2f2617f2b5c577889318863e9 (image=quay.io/ceph/ceph:v18, name=youthful_montalcini, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 11:32:23 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:23 compute-0 ceph-mon[73607]: Added host compute-2
Oct 02 11:32:23 compute-0 ceph-mon[73607]: Saving service mon spec with placement compute-0;compute-1;compute-2
Oct 02 11:32:23 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:23 compute-0 ceph-mon[73607]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Oct 02 11:32:23 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:23 compute-0 ceph-mon[73607]: Marking host: compute-0 for OSDSpec preview refresh.
Oct 02 11:32:23 compute-0 ceph-mon[73607]: Marking host: compute-1 for OSDSpec preview refresh.
Oct 02 11:32:23 compute-0 ceph-mon[73607]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Oct 02 11:32:23 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:23 compute-0 ceph-mon[73607]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 02 11:32:23 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/111316742' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 11:32:23 compute-0 youthful_montalcini[81918]: 
Oct 02 11:32:23 compute-0 youthful_montalcini[81918]: {"fsid":"fd4c5763-22d1-50ea-ad0b-96a3dc3040b2","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":94,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-10-02T11:30:45.146182+0000","services":{}},"progress_events":{}}
Oct 02 11:32:23 compute-0 systemd[1]: libpod-6da4570074ec144ca82bdb8a2f772efd666debb2f2617f2b5c577889318863e9.scope: Deactivated successfully.
Oct 02 11:32:23 compute-0 podman[81901]: 2025-10-02 11:32:23.336082596 +0000 UTC m=+0.780096292 container died 6da4570074ec144ca82bdb8a2f772efd666debb2f2617f2b5c577889318863e9 (image=quay.io/ceph/ceph:v18, name=youthful_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:32:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-56baa6c02237702fdb8ca1fc5c1c9de35c8b38e93fe1eeca9dd8faa67c2dec60-merged.mount: Deactivated successfully.
Oct 02 11:32:23 compute-0 podman[81901]: 2025-10-02 11:32:23.376557927 +0000 UTC m=+0.820571623 container remove 6da4570074ec144ca82bdb8a2f772efd666debb2f2617f2b5c577889318863e9 (image=quay.io/ceph/ceph:v18, name=youthful_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 11:32:23 compute-0 systemd[1]: libpod-conmon-6da4570074ec144ca82bdb8a2f772efd666debb2f2617f2b5c577889318863e9.scope: Deactivated successfully.
Oct 02 11:32:23 compute-0 sudo[81897]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/111316742' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 11:32:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:32:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:25 compute-0 ceph-mon[73607]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:27 compute-0 ceph-mon[73607]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:32:29 compute-0 ceph-mon[73607]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:31 compute-0 ceph-mon[73607]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:33 compute-0 ceph-mon[73607]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:32:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:34 compute-0 ceph-mon[73607]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:32:36 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:32:36 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:32:36 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:32:36 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Oct 02 11:32:36 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 11:32:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:32:36 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:32:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:32:36 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:32:36 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct 02 11:32:36 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct 02 11:32:37 compute-0 ceph-mon[73607]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:37 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:37 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:37 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:37 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:37 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 11:32:37 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:32:37 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:32:37 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf
Oct 02 11:32:37 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf
Oct 02 11:32:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:38 compute-0 ceph-mon[73607]: Updating compute-1:/etc/ceph/ceph.conf
Oct 02 11:32:38 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 02 11:32:38 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 02 11:32:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:32:39 compute-0 ceph-mon[73607]: Updating compute-1:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf
Oct 02 11:32:39 compute-0 ceph-mon[73607]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:39 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.client.admin.keyring
Oct 02 11:32:39 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.client.admin.keyring
Oct 02 11:32:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:40 compute-0 ceph-mon[73607]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 02 11:32:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:32:40 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:32:40 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:32:40 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:40 compute-0 ceph-mgr[73901]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct 02 11:32:40 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct 02 11:32:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:40 compute-0 ceph-mgr[73901]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct 02 11:32:40 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct 02 11:32:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:40 compute-0 ceph-mgr[73901]: [progress INFO root] update: starting ev 97b748a6-4a90-4cbe-ac8d-76abc9bb68b6 (Updating crash deployment (+1 -> 2))
Oct 02 11:32:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Oct 02 11:32:40 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 02 11:32:40 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:32:40.826+0000 7f4c66205640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Oct 02 11:32:40 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: service_name: mon
Oct 02 11:32:40 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: placement:
Oct 02 11:32:40 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]:   hosts:
Oct 02 11:32:40 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]:   - compute-0
Oct 02 11:32:40 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]:   - compute-1
Oct 02 11:32:40 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]:   - compute-2
Oct 02 11:32:40 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct 02 11:32:40 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:32:40.826+0000 7f4c66205640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Oct 02 11:32:40 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: service_name: mgr
Oct 02 11:32:40 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: placement:
Oct 02 11:32:40 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]:   hosts:
Oct 02 11:32:40 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]:   - compute-0
Oct 02 11:32:40 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]:   - compute-1
Oct 02 11:32:40 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]:   - compute-2
Oct 02 11:32:40 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct 02 11:32:40 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 02 11:32:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:32:40 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:32:40 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Oct 02 11:32:40 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Oct 02 11:32:41 compute-0 ceph-mon[73607]: Updating compute-1:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.client.admin.keyring
Oct 02 11:32:41 compute-0 ceph-mon[73607]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:41 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:41 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:41 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:41 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 02 11:32:41 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 02 11:32:41 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:32:41 compute-0 ceph-mon[73607]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Oct 02 11:32:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_11:32:42
Oct 02 11:32:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:32:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 11:32:42 compute-0 ceph-mgr[73901]: [balancer INFO root] No pools available
Oct 02 11:32:42 compute-0 ceph-mon[73607]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct 02 11:32:42 compute-0 ceph-mon[73607]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:42 compute-0 ceph-mon[73607]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct 02 11:32:42 compute-0 ceph-mon[73607]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:42 compute-0 ceph-mon[73607]: Deploying daemon crash.compute-1 on compute-1
Oct 02 11:32:42 compute-0 ceph-mon[73607]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Oct 02 11:32:42 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:32:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:32:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:32:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:32:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:32:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:32:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:32:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:32:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:32:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:32:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:32:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 02 11:32:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:43 compute-0 ceph-mgr[73901]: [progress INFO root] complete: finished ev 97b748a6-4a90-4cbe-ac8d-76abc9bb68b6 (Updating crash deployment (+1 -> 2))
Oct 02 11:32:43 compute-0 ceph-mgr[73901]: [progress INFO root] Completed event 97b748a6-4a90-4cbe-ac8d-76abc9bb68b6 (Updating crash deployment (+1 -> 2)) in 2 seconds
Oct 02 11:32:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 02 11:32:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:32:43 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:32:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:32:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:32:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:32:43 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:32:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:32:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:32:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:32:43 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:32:43 compute-0 sudo[81954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:43 compute-0 sudo[81954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:43 compute-0 sudo[81954]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:43 compute-0 sudo[81979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:32:43 compute-0 sudo[81979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:43 compute-0 sudo[81979]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:43 compute-0 sudo[82004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:43 compute-0 sudo[82004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:43 compute-0 sudo[82004]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:43 compute-0 sudo[82029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:32:43 compute-0 sudo[82029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:43 compute-0 podman[82094]: 2025-10-02 11:32:43.642847506 +0000 UTC m=+0.040594386 container create 6ceb1e96ab104ee4218a4a9fe55fcc49234e7716a0a45fdcb194b3415394d944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 11:32:43 compute-0 systemd[1]: Started libpod-conmon-6ceb1e96ab104ee4218a4a9fe55fcc49234e7716a0a45fdcb194b3415394d944.scope.
Oct 02 11:32:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:32:43 compute-0 podman[82094]: 2025-10-02 11:32:43.709679058 +0000 UTC m=+0.107425948 container init 6ceb1e96ab104ee4218a4a9fe55fcc49234e7716a0a45fdcb194b3415394d944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 11:32:43 compute-0 podman[82094]: 2025-10-02 11:32:43.715793464 +0000 UTC m=+0.113540344 container start 6ceb1e96ab104ee4218a4a9fe55fcc49234e7716a0a45fdcb194b3415394d944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:32:43 compute-0 silly_darwin[82110]: 167 167
Oct 02 11:32:43 compute-0 systemd[1]: libpod-6ceb1e96ab104ee4218a4a9fe55fcc49234e7716a0a45fdcb194b3415394d944.scope: Deactivated successfully.
Oct 02 11:32:43 compute-0 podman[82094]: 2025-10-02 11:32:43.720583873 +0000 UTC m=+0.118330773 container attach 6ceb1e96ab104ee4218a4a9fe55fcc49234e7716a0a45fdcb194b3415394d944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 11:32:43 compute-0 podman[82094]: 2025-10-02 11:32:43.62523328 +0000 UTC m=+0.022980180 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:32:43 compute-0 podman[82094]: 2025-10-02 11:32:43.721387285 +0000 UTC m=+0.119134165 container died 6ceb1e96ab104ee4218a4a9fe55fcc49234e7716a0a45fdcb194b3415394d944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_darwin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 11:32:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9756aee05563bfd8c182ed8d92a554301da30d192aaaae1a73981a46f35cbaa-merged.mount: Deactivated successfully.
Oct 02 11:32:43 compute-0 podman[82094]: 2025-10-02 11:32:43.765744681 +0000 UTC m=+0.163491561 container remove 6ceb1e96ab104ee4218a4a9fe55fcc49234e7716a0a45fdcb194b3415394d944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_darwin, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:32:43 compute-0 systemd[1]: libpod-conmon-6ceb1e96ab104ee4218a4a9fe55fcc49234e7716a0a45fdcb194b3415394d944.scope: Deactivated successfully.
Oct 02 11:32:43 compute-0 podman[82136]: 2025-10-02 11:32:43.902563883 +0000 UTC m=+0.037213245 container create 3c7ae7ba1eff61b9ffb6ca1aae76d30e614fb36a015d8414c2e1a8152fce32b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_almeida, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:32:43 compute-0 systemd[1]: Started libpod-conmon-3c7ae7ba1eff61b9ffb6ca1aae76d30e614fb36a015d8414c2e1a8152fce32b9.scope.
Oct 02 11:32:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:32:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d31e45258242eb3a4c00341c8bba92521a7ef4ea8010b28877cadb85aa5e5d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d31e45258242eb3a4c00341c8bba92521a7ef4ea8010b28877cadb85aa5e5d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d31e45258242eb3a4c00341c8bba92521a7ef4ea8010b28877cadb85aa5e5d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d31e45258242eb3a4c00341c8bba92521a7ef4ea8010b28877cadb85aa5e5d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d31e45258242eb3a4c00341c8bba92521a7ef4ea8010b28877cadb85aa5e5d7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:43 compute-0 podman[82136]: 2025-10-02 11:32:43.977616878 +0000 UTC m=+0.112266260 container init 3c7ae7ba1eff61b9ffb6ca1aae76d30e614fb36a015d8414c2e1a8152fce32b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_almeida, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 11:32:43 compute-0 podman[82136]: 2025-10-02 11:32:43.88649784 +0000 UTC m=+0.021147222 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:32:43 compute-0 podman[82136]: 2025-10-02 11:32:43.985548532 +0000 UTC m=+0.120197904 container start 3c7ae7ba1eff61b9ffb6ca1aae76d30e614fb36a015d8414c2e1a8152fce32b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 11:32:43 compute-0 podman[82136]: 2025-10-02 11:32:43.988515202 +0000 UTC m=+0.123164584 container attach 3c7ae7ba1eff61b9ffb6ca1aae76d30e614fb36a015d8414c2e1a8152fce32b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 11:32:44 compute-0 ceph-mon[73607]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:32:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:32:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:32:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:32:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:32:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:32:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:44 compute-0 elated_almeida[82153]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:32:44 compute-0 elated_almeida[82153]: --> relative data size: 1.0
Oct 02 11:32:44 compute-0 elated_almeida[82153]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 02 11:32:44 compute-0 elated_almeida[82153]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new ca035294-3f2d-465d-b3e6-43971a2c0201
Oct 02 11:32:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "a3ccd8b9-4533-4913-ba5b-13fae1978607"} v 0) v1
Oct 02 11:32:45 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1556391589' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a3ccd8b9-4533-4913-ba5b-13fae1978607"}]: dispatch
Oct 02 11:32:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Oct 02 11:32:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 11:32:45 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1556391589' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a3ccd8b9-4533-4913-ba5b-13fae1978607"}]': finished
Oct 02 11:32:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Oct 02 11:32:45 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Oct 02 11:32:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 11:32:45 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:32:45 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 11:32:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201"} v 0) v1
Oct 02 11:32:45 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3545942114' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201"}]: dispatch
Oct 02 11:32:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Oct 02 11:32:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 11:32:45 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3545942114' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201"}]': finished
Oct 02 11:32:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Oct 02 11:32:45 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Oct 02 11:32:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 11:32:45 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:32:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 11:32:45 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:32:45 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 11:32:45 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 02 11:32:45 compute-0 elated_almeida[82153]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 02 11:32:45 compute-0 lvm[82201]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 02 11:32:45 compute-0 lvm[82201]: VG ceph_vg0 finished
Oct 02 11:32:45 compute-0 elated_almeida[82153]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Oct 02 11:32:45 compute-0 elated_almeida[82153]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct 02 11:32:45 compute-0 elated_almeida[82153]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 02 11:32:45 compute-0 elated_almeida[82153]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Oct 02 11:32:45 compute-0 elated_almeida[82153]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Oct 02 11:32:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct 02 11:32:45 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/972466927' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 02 11:32:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct 02 11:32:45 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3890291292' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 02 11:32:45 compute-0 elated_almeida[82153]:  stderr: got monmap epoch 1
Oct 02 11:32:45 compute-0 elated_almeida[82153]: --> Creating keyring file for osd.1
Oct 02 11:32:45 compute-0 elated_almeida[82153]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Oct 02 11:32:45 compute-0 elated_almeida[82153]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Oct 02 11:32:45 compute-0 elated_almeida[82153]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid ca035294-3f2d-465d-b3e6-43971a2c0201 --setuser ceph --setgroup ceph
Oct 02 11:32:46 compute-0 ceph-mon[73607]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1556391589' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a3ccd8b9-4533-4913-ba5b-13fae1978607"}]: dispatch
Oct 02 11:32:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1556391589' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a3ccd8b9-4533-4913-ba5b-13fae1978607"}]': finished
Oct 02 11:32:46 compute-0 ceph-mon[73607]: osdmap e4: 1 total, 0 up, 1 in
Oct 02 11:32:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:32:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3545942114' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201"}]: dispatch
Oct 02 11:32:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3545942114' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201"}]': finished
Oct 02 11:32:46 compute-0 ceph-mon[73607]: osdmap e5: 2 total, 0 up, 2 in
Oct 02 11:32:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:32:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:32:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/972466927' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 02 11:32:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3890291292' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 02 11:32:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:47 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 02 11:32:47 compute-0 ceph-mon[73607]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 02 11:32:47 compute-0 ceph-mgr[73901]: [progress INFO root] Writing back 2 completed events
Oct 02 11:32:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 11:32:47 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:48 compute-0 ceph-mon[73607]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:48 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:48 compute-0 elated_almeida[82153]:  stderr: 2025-10-02T11:32:45.960+0000 7f29bc905740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 02 11:32:48 compute-0 elated_almeida[82153]:  stderr: 2025-10-02T11:32:45.960+0000 7f29bc905740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 02 11:32:48 compute-0 elated_almeida[82153]:  stderr: 2025-10-02T11:32:45.961+0000 7f29bc905740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 02 11:32:48 compute-0 elated_almeida[82153]:  stderr: 2025-10-02T11:32:45.961+0000 7f29bc905740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Oct 02 11:32:48 compute-0 elated_almeida[82153]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct 02 11:32:48 compute-0 elated_almeida[82153]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 02 11:32:48 compute-0 elated_almeida[82153]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct 02 11:32:48 compute-0 elated_almeida[82153]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Oct 02 11:32:48 compute-0 elated_almeida[82153]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct 02 11:32:48 compute-0 elated_almeida[82153]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 02 11:32:48 compute-0 elated_almeida[82153]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 02 11:32:48 compute-0 elated_almeida[82153]: --> ceph-volume lvm activate successful for osd ID: 1
Oct 02 11:32:48 compute-0 elated_almeida[82153]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Oct 02 11:32:48 compute-0 systemd[1]: libpod-3c7ae7ba1eff61b9ffb6ca1aae76d30e614fb36a015d8414c2e1a8152fce32b9.scope: Deactivated successfully.
Oct 02 11:32:48 compute-0 systemd[1]: libpod-3c7ae7ba1eff61b9ffb6ca1aae76d30e614fb36a015d8414c2e1a8152fce32b9.scope: Consumed 2.353s CPU time.
Oct 02 11:32:48 compute-0 podman[82136]: 2025-10-02 11:32:48.507104341 +0000 UTC m=+4.641753713 container died 3c7ae7ba1eff61b9ffb6ca1aae76d30e614fb36a015d8414c2e1a8152fce32b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_almeida, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:32:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d31e45258242eb3a4c00341c8bba92521a7ef4ea8010b28877cadb85aa5e5d7-merged.mount: Deactivated successfully.
Oct 02 11:32:48 compute-0 podman[82136]: 2025-10-02 11:32:48.629512783 +0000 UTC m=+4.764162165 container remove 3c7ae7ba1eff61b9ffb6ca1aae76d30e614fb36a015d8414c2e1a8152fce32b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 11:32:48 compute-0 systemd[1]: libpod-conmon-3c7ae7ba1eff61b9ffb6ca1aae76d30e614fb36a015d8414c2e1a8152fce32b9.scope: Deactivated successfully.
Oct 02 11:32:48 compute-0 sudo[82029]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:48 compute-0 sudo[83126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:48 compute-0 sudo[83126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:48 compute-0 sudo[83126]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:48 compute-0 sudo[83151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:32:48 compute-0 sudo[83151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:48 compute-0 sudo[83151]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:48 compute-0 sudo[83176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:48 compute-0 sudo[83176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:48 compute-0 sudo[83176]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:48 compute-0 sudo[83201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 11:32:48 compute-0 sudo[83201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:49 compute-0 podman[83265]: 2025-10-02 11:32:49.142614888 +0000 UTC m=+0.034464381 container create 52d48a1bcd145f092503f45d74992412d29f597ddc648d9c66d429dca9d92054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 11:32:49 compute-0 systemd[1]: Started libpod-conmon-52d48a1bcd145f092503f45d74992412d29f597ddc648d9c66d429dca9d92054.scope.
Oct 02 11:32:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:32:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:32:49 compute-0 podman[83265]: 2025-10-02 11:32:49.210706005 +0000 UTC m=+0.102555498 container init 52d48a1bcd145f092503f45d74992412d29f597ddc648d9c66d429dca9d92054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_fermi, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:32:49 compute-0 podman[83265]: 2025-10-02 11:32:49.216917783 +0000 UTC m=+0.108767276 container start 52d48a1bcd145f092503f45d74992412d29f597ddc648d9c66d429dca9d92054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 11:32:49 compute-0 podman[83265]: 2025-10-02 11:32:49.220398906 +0000 UTC m=+0.112248489 container attach 52d48a1bcd145f092503f45d74992412d29f597ddc648d9c66d429dca9d92054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:32:49 compute-0 stupefied_fermi[83281]: 167 167
Oct 02 11:32:49 compute-0 systemd[1]: libpod-52d48a1bcd145f092503f45d74992412d29f597ddc648d9c66d429dca9d92054.scope: Deactivated successfully.
Oct 02 11:32:49 compute-0 podman[83265]: 2025-10-02 11:32:49.222702249 +0000 UTC m=+0.114551742 container died 52d48a1bcd145f092503f45d74992412d29f597ddc648d9c66d429dca9d92054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_fermi, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 11:32:49 compute-0 podman[83265]: 2025-10-02 11:32:49.127670244 +0000 UTC m=+0.019519767 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:32:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ebe4d7fb402912ae3376e847bfcd72fdeb69782704ea75d6f6c66b53dd87248-merged.mount: Deactivated successfully.
Oct 02 11:32:49 compute-0 podman[83265]: 2025-10-02 11:32:49.260689243 +0000 UTC m=+0.152538736 container remove 52d48a1bcd145f092503f45d74992412d29f597ddc648d9c66d429dca9d92054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_fermi, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:32:49 compute-0 systemd[1]: libpod-conmon-52d48a1bcd145f092503f45d74992412d29f597ddc648d9c66d429dca9d92054.scope: Deactivated successfully.
Oct 02 11:32:49 compute-0 podman[83306]: 2025-10-02 11:32:49.417254148 +0000 UTC m=+0.040318369 container create 839b740fc8c59bdcded0e175691a350b57392d97cf5f9ea7bd267121324ed716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_davinci, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:32:49 compute-0 systemd[1]: Started libpod-conmon-839b740fc8c59bdcded0e175691a350b57392d97cf5f9ea7bd267121324ed716.scope.
Oct 02 11:32:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:32:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daab13377b35406a806e0de462117c4b46686430346b01269a150abecf43243d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daab13377b35406a806e0de462117c4b46686430346b01269a150abecf43243d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daab13377b35406a806e0de462117c4b46686430346b01269a150abecf43243d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daab13377b35406a806e0de462117c4b46686430346b01269a150abecf43243d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:49 compute-0 podman[83306]: 2025-10-02 11:32:49.491685986 +0000 UTC m=+0.114750207 container init 839b740fc8c59bdcded0e175691a350b57392d97cf5f9ea7bd267121324ed716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_davinci, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:32:49 compute-0 podman[83306]: 2025-10-02 11:32:49.401674508 +0000 UTC m=+0.024738749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:32:49 compute-0 podman[83306]: 2025-10-02 11:32:49.499806495 +0000 UTC m=+0.122870716 container start 839b740fc8c59bdcded0e175691a350b57392d97cf5f9ea7bd267121324ed716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 11:32:49 compute-0 podman[83306]: 2025-10-02 11:32:49.503045413 +0000 UTC m=+0.126109634 container attach 839b740fc8c59bdcded0e175691a350b57392d97cf5f9ea7bd267121324ed716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 11:32:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Oct 02 11:32:49 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 02 11:32:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:32:49 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:32:49 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-1
Oct 02 11:32:49 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-1
Oct 02 11:32:50 compute-0 ceph-mon[73607]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:50 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 02 11:32:50 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:32:50 compute-0 tender_davinci[83322]: {
Oct 02 11:32:50 compute-0 tender_davinci[83322]:     "1": [
Oct 02 11:32:50 compute-0 tender_davinci[83322]:         {
Oct 02 11:32:50 compute-0 tender_davinci[83322]:             "devices": [
Oct 02 11:32:50 compute-0 tender_davinci[83322]:                 "/dev/loop3"
Oct 02 11:32:50 compute-0 tender_davinci[83322]:             ],
Oct 02 11:32:50 compute-0 tender_davinci[83322]:             "lv_name": "ceph_lv0",
Oct 02 11:32:50 compute-0 tender_davinci[83322]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:32:50 compute-0 tender_davinci[83322]:             "lv_size": "7511998464",
Oct 02 11:32:50 compute-0 tender_davinci[83322]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:32:50 compute-0 tender_davinci[83322]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:32:50 compute-0 tender_davinci[83322]:             "name": "ceph_lv0",
Oct 02 11:32:50 compute-0 tender_davinci[83322]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:32:50 compute-0 tender_davinci[83322]:             "tags": {
Oct 02 11:32:50 compute-0 tender_davinci[83322]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:32:50 compute-0 tender_davinci[83322]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:32:50 compute-0 tender_davinci[83322]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:32:50 compute-0 tender_davinci[83322]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:32:50 compute-0 tender_davinci[83322]:                 "ceph.cluster_name": "ceph",
Oct 02 11:32:50 compute-0 tender_davinci[83322]:                 "ceph.crush_device_class": "",
Oct 02 11:32:50 compute-0 tender_davinci[83322]:                 "ceph.encrypted": "0",
Oct 02 11:32:50 compute-0 tender_davinci[83322]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:32:50 compute-0 tender_davinci[83322]:                 "ceph.osd_id": "1",
Oct 02 11:32:50 compute-0 tender_davinci[83322]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:32:50 compute-0 tender_davinci[83322]:                 "ceph.type": "block",
Oct 02 11:32:50 compute-0 tender_davinci[83322]:                 "ceph.vdo": "0"
Oct 02 11:32:50 compute-0 tender_davinci[83322]:             },
Oct 02 11:32:50 compute-0 tender_davinci[83322]:             "type": "block",
Oct 02 11:32:50 compute-0 tender_davinci[83322]:             "vg_name": "ceph_vg0"
Oct 02 11:32:50 compute-0 tender_davinci[83322]:         }
Oct 02 11:32:50 compute-0 tender_davinci[83322]:     ]
Oct 02 11:32:50 compute-0 tender_davinci[83322]: }
Oct 02 11:32:50 compute-0 systemd[1]: libpod-839b740fc8c59bdcded0e175691a350b57392d97cf5f9ea7bd267121324ed716.scope: Deactivated successfully.
Oct 02 11:32:50 compute-0 podman[83306]: 2025-10-02 11:32:50.258270059 +0000 UTC m=+0.881334280 container died 839b740fc8c59bdcded0e175691a350b57392d97cf5f9ea7bd267121324ed716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:32:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-daab13377b35406a806e0de462117c4b46686430346b01269a150abecf43243d-merged.mount: Deactivated successfully.
Oct 02 11:32:50 compute-0 podman[83306]: 2025-10-02 11:32:50.330997172 +0000 UTC m=+0.954061393 container remove 839b740fc8c59bdcded0e175691a350b57392d97cf5f9ea7bd267121324ed716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_davinci, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 11:32:50 compute-0 systemd[1]: libpod-conmon-839b740fc8c59bdcded0e175691a350b57392d97cf5f9ea7bd267121324ed716.scope: Deactivated successfully.
Oct 02 11:32:50 compute-0 sudo[83201]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Oct 02 11:32:50 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 02 11:32:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:32:50 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:32:50 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Oct 02 11:32:50 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Oct 02 11:32:50 compute-0 sudo[83345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:50 compute-0 sudo[83345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:50 compute-0 sudo[83345]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:50 compute-0 sudo[83370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:32:50 compute-0 sudo[83370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:50 compute-0 sudo[83370]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:50 compute-0 sudo[83395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:50 compute-0 sudo[83395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:50 compute-0 sudo[83395]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:50 compute-0 sudo[83420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct 02 11:32:50 compute-0 sudo[83420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:50 compute-0 podman[83484]: 2025-10-02 11:32:50.873040128 +0000 UTC m=+0.036841776 container create 0402cc9975f89055dabccf119546358ef894c03e0cfa3d4ec503e9f317b3af71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_dirac, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:32:50 compute-0 systemd[1]: Started libpod-conmon-0402cc9975f89055dabccf119546358ef894c03e0cfa3d4ec503e9f317b3af71.scope.
Oct 02 11:32:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:32:50 compute-0 podman[83484]: 2025-10-02 11:32:50.952583223 +0000 UTC m=+0.116384911 container init 0402cc9975f89055dabccf119546358ef894c03e0cfa3d4ec503e9f317b3af71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 11:32:50 compute-0 podman[83484]: 2025-10-02 11:32:50.856901092 +0000 UTC m=+0.020702770 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:32:50 compute-0 podman[83484]: 2025-10-02 11:32:50.959067848 +0000 UTC m=+0.122869506 container start 0402cc9975f89055dabccf119546358ef894c03e0cfa3d4ec503e9f317b3af71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_dirac, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 11:32:50 compute-0 quirky_dirac[83501]: 167 167
Oct 02 11:32:50 compute-0 systemd[1]: libpod-0402cc9975f89055dabccf119546358ef894c03e0cfa3d4ec503e9f317b3af71.scope: Deactivated successfully.
Oct 02 11:32:50 compute-0 conmon[83501]: conmon 0402cc9975f89055dabc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0402cc9975f89055dabccf119546358ef894c03e0cfa3d4ec503e9f317b3af71.scope/container/memory.events
Oct 02 11:32:50 compute-0 podman[83484]: 2025-10-02 11:32:50.965494762 +0000 UTC m=+0.129296420 container attach 0402cc9975f89055dabccf119546358ef894c03e0cfa3d4ec503e9f317b3af71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_dirac, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:32:50 compute-0 podman[83484]: 2025-10-02 11:32:50.96578851 +0000 UTC m=+0.129590158 container died 0402cc9975f89055dabccf119546358ef894c03e0cfa3d4ec503e9f317b3af71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_dirac, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Oct 02 11:32:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-13faf6afa9ce6187bc2f48ac3188ba9516296c6802e971140bed8b4c3edd21f4-merged.mount: Deactivated successfully.
Oct 02 11:32:51 compute-0 podman[83484]: 2025-10-02 11:32:51.004938516 +0000 UTC m=+0.168740174 container remove 0402cc9975f89055dabccf119546358ef894c03e0cfa3d4ec503e9f317b3af71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:32:51 compute-0 systemd[1]: libpod-conmon-0402cc9975f89055dabccf119546358ef894c03e0cfa3d4ec503e9f317b3af71.scope: Deactivated successfully.
Oct 02 11:32:51 compute-0 ceph-mon[73607]: Deploying daemon osd.0 on compute-1
Oct 02 11:32:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 02 11:32:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:32:51 compute-0 ceph-mon[73607]: Deploying daemon osd.1 on compute-0
Oct 02 11:32:51 compute-0 podman[83532]: 2025-10-02 11:32:51.226793832 +0000 UTC m=+0.042117198 container create 4e49dc0d698a523310784892956a48b10964c606bef0cc01dd6790ec1456b2ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-1-activate-test, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 11:32:51 compute-0 systemd[1]: Started libpod-conmon-4e49dc0d698a523310784892956a48b10964c606bef0cc01dd6790ec1456b2ca.scope.
Oct 02 11:32:51 compute-0 podman[83532]: 2025-10-02 11:32:51.206347201 +0000 UTC m=+0.021670587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:32:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:32:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f98b1bdf553397a2068e35469e83006ef858248c6fc9ea7431697fc044f31d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f98b1bdf553397a2068e35469e83006ef858248c6fc9ea7431697fc044f31d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f98b1bdf553397a2068e35469e83006ef858248c6fc9ea7431697fc044f31d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f98b1bdf553397a2068e35469e83006ef858248c6fc9ea7431697fc044f31d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f98b1bdf553397a2068e35469e83006ef858248c6fc9ea7431697fc044f31d7/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:51 compute-0 podman[83532]: 2025-10-02 11:32:51.318768854 +0000 UTC m=+0.134092240 container init 4e49dc0d698a523310784892956a48b10964c606bef0cc01dd6790ec1456b2ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 11:32:51 compute-0 podman[83532]: 2025-10-02 11:32:51.328114916 +0000 UTC m=+0.143438282 container start 4e49dc0d698a523310784892956a48b10964c606bef0cc01dd6790ec1456b2ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 11:32:51 compute-0 podman[83532]: 2025-10-02 11:32:51.331314742 +0000 UTC m=+0.146638108 container attach 4e49dc0d698a523310784892956a48b10964c606bef0cc01dd6790ec1456b2ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:32:52 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-1-activate-test[83548]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct 02 11:32:52 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-1-activate-test[83548]:                             [--no-systemd] [--no-tmpfs]
Oct 02 11:32:52 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-1-activate-test[83548]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 02 11:32:52 compute-0 systemd[1]: libpod-4e49dc0d698a523310784892956a48b10964c606bef0cc01dd6790ec1456b2ca.scope: Deactivated successfully.
Oct 02 11:32:52 compute-0 podman[83532]: 2025-10-02 11:32:52.039235052 +0000 UTC m=+0.854558408 container died 4e49dc0d698a523310784892956a48b10964c606bef0cc01dd6790ec1456b2ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-1-activate-test, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct 02 11:32:52 compute-0 ceph-mon[73607]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f98b1bdf553397a2068e35469e83006ef858248c6fc9ea7431697fc044f31d7-merged.mount: Deactivated successfully.
Oct 02 11:32:52 compute-0 podman[83532]: 2025-10-02 11:32:52.404813627 +0000 UTC m=+1.220136993 container remove 4e49dc0d698a523310784892956a48b10964c606bef0cc01dd6790ec1456b2ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 11:32:52 compute-0 systemd[1]: libpod-conmon-4e49dc0d698a523310784892956a48b10964c606bef0cc01dd6790ec1456b2ca.scope: Deactivated successfully.
Oct 02 11:32:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:53 compute-0 systemd[1]: Reloading.
Oct 02 11:32:53 compute-0 systemd-rc-local-generator[83611]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:32:53 compute-0 systemd-sysv-generator[83614]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:32:53 compute-0 systemd[1]: Reloading.
Oct 02 11:32:53 compute-0 systemd-rc-local-generator[83652]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:32:53 compute-0 systemd-sysv-generator[83656]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:32:53 compute-0 sudo[83683]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfryacuyqegxlzdcdvvvalrfsjoecdfp ; /usr/bin/python3'
Oct 02 11:32:53 compute-0 sudo[83683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:32:53 compute-0 systemd[1]: Starting Ceph osd.1 for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2...
Oct 02 11:32:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:32:53 compute-0 python3[83687]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:32:53 compute-0 podman[83739]: 2025-10-02 11:32:53.768410818 +0000 UTC m=+0.022699213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:32:53 compute-0 podman[83737]: 2025-10-02 11:32:53.772142529 +0000 UTC m=+0.023715871 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:32:53 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:32:54 compute-0 podman[83739]: 2025-10-02 11:32:54.082308739 +0000 UTC m=+0.336597114 container create 2570e28c2dea0601fcbc88daafb691b5f04d37dac13c05171191ca0b5941caab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:32:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:32:54 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f8271816e52ec4fd5575668c7ab503e8912f5713bcab0a98f41ae1dc04bd075/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f8271816e52ec4fd5575668c7ab503e8912f5713bcab0a98f41ae1dc04bd075/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f8271816e52ec4fd5575668c7ab503e8912f5713bcab0a98f41ae1dc04bd075/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f8271816e52ec4fd5575668c7ab503e8912f5713bcab0a98f41ae1dc04bd075/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f8271816e52ec4fd5575668c7ab503e8912f5713bcab0a98f41ae1dc04bd075/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:54 compute-0 podman[83737]: 2025-10-02 11:32:54.611439515 +0000 UTC m=+0.863012827 container create 3155e111e922237f7cc8fde97800d5529548dfc2b0ed13f6c89bdb1e7e7f097a (image=quay.io/ceph/ceph:v18, name=nice_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 11:32:54 compute-0 ceph-mon[73607]: pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:54 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:54 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:54 compute-0 systemd[1]: Started libpod-conmon-3155e111e922237f7cc8fde97800d5529548dfc2b0ed13f6c89bdb1e7e7f097a.scope.
Oct 02 11:32:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6db33e3a76e3b0e44c5cceadc5c51b9259d165dcce24f87df128a60332786ec9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6db33e3a76e3b0e44c5cceadc5c51b9259d165dcce24f87df128a60332786ec9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6db33e3a76e3b0e44c5cceadc5c51b9259d165dcce24f87df128a60332786ec9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Oct 02 11:32:54 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/721300389,v1:192.168.122.101:6801/721300389]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 02 11:32:54 compute-0 podman[83739]: 2025-10-02 11:32:54.956270589 +0000 UTC m=+1.210558984 container init 2570e28c2dea0601fcbc88daafb691b5f04d37dac13c05171191ca0b5941caab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-1-activate, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:32:54 compute-0 podman[83739]: 2025-10-02 11:32:54.962524257 +0000 UTC m=+1.216812622 container start 2570e28c2dea0601fcbc88daafb691b5f04d37dac13c05171191ca0b5941caab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 11:32:55 compute-0 podman[83739]: 2025-10-02 11:32:55.287471894 +0000 UTC m=+1.541760289 container attach 2570e28c2dea0601fcbc88daafb691b5f04d37dac13c05171191ca0b5941caab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-1-activate, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 11:32:55 compute-0 podman[83737]: 2025-10-02 11:32:55.450614787 +0000 UTC m=+1.702188129 container init 3155e111e922237f7cc8fde97800d5529548dfc2b0ed13f6c89bdb1e7e7f097a (image=quay.io/ceph/ceph:v18, name=nice_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 11:32:55 compute-0 podman[83737]: 2025-10-02 11:32:55.459745153 +0000 UTC m=+1.711318495 container start 3155e111e922237f7cc8fde97800d5529548dfc2b0ed13f6c89bdb1e7e7f097a (image=quay.io/ceph/ceph:v18, name=nice_kilby, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:32:55 compute-0 podman[83737]: 2025-10-02 11:32:55.515294952 +0000 UTC m=+1.766868304 container attach 3155e111e922237f7cc8fde97800d5529548dfc2b0ed13f6c89bdb1e7e7f097a (image=quay.io/ceph/ceph:v18, name=nice_kilby, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 11:32:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Oct 02 11:32:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 11:32:55 compute-0 ceph-mon[73607]: pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:55 compute-0 ceph-mon[73607]: from='osd.0 [v2:192.168.122.101:6800/721300389,v1:192.168.122.101:6801/721300389]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 02 11:32:55 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-1-activate[83762]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 02 11:32:55 compute-0 bash[83739]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 02 11:32:55 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-1-activate[83762]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct 02 11:32:55 compute-0 bash[83739]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct 02 11:32:55 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-1-activate[83762]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct 02 11:32:55 compute-0 bash[83739]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct 02 11:32:55 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-1-activate[83762]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 02 11:32:55 compute-0 bash[83739]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 02 11:32:55 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-1-activate[83762]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Oct 02 11:32:55 compute-0 bash[83739]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Oct 02 11:32:55 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-1-activate[83762]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 02 11:32:55 compute-0 bash[83739]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 02 11:32:55 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-1-activate[83762]: --> ceph-volume raw activate successful for osd ID: 1
Oct 02 11:32:55 compute-0 bash[83739]: --> ceph-volume raw activate successful for osd ID: 1
Oct 02 11:32:55 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/721300389,v1:192.168.122.101:6801/721300389]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 02 11:32:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Oct 02 11:32:55 compute-0 systemd[1]: libpod-2570e28c2dea0601fcbc88daafb691b5f04d37dac13c05171191ca0b5941caab.scope: Deactivated successfully.
Oct 02 11:32:55 compute-0 podman[83739]: 2025-10-02 11:32:55.905755817 +0000 UTC m=+2.160044192 container died 2570e28c2dea0601fcbc88daafb691b5f04d37dac13c05171191ca0b5941caab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:32:55 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Oct 02 11:32:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]} v 0) v1
Oct 02 11:32:55 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/721300389,v1:192.168.122.101:6801/721300389]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Oct 02 11:32:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0068 at location {host=compute-1,root=default}
Oct 02 11:32:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 11:32:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:32:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 11:32:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:32:55 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 11:32:55 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 02 11:32:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f8271816e52ec4fd5575668c7ab503e8912f5713bcab0a98f41ae1dc04bd075-merged.mount: Deactivated successfully.
Oct 02 11:32:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 02 11:32:56 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/950923059' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 11:32:56 compute-0 nice_kilby[83770]: 
Oct 02 11:32:56 compute-0 nice_kilby[83770]: {"fsid":"fd4c5763-22d1-50ea-ad0b-96a3dc3040b2","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":127,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":6,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1759404765,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-02T11:32:44.828316+0000","services":{}},"progress_events":{}}
Oct 02 11:32:56 compute-0 systemd[1]: libpod-3155e111e922237f7cc8fde97800d5529548dfc2b0ed13f6c89bdb1e7e7f097a.scope: Deactivated successfully.
Oct 02 11:32:56 compute-0 podman[83737]: 2025-10-02 11:32:56.258506985 +0000 UTC m=+2.510080297 container died 3155e111e922237f7cc8fde97800d5529548dfc2b0ed13f6c89bdb1e7e7f097a (image=quay.io/ceph/ceph:v18, name=nice_kilby, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:32:56 compute-0 podman[83739]: 2025-10-02 11:32:56.419029526 +0000 UTC m=+2.673317901 container remove 2570e28c2dea0601fcbc88daafb691b5f04d37dac13c05171191ca0b5941caab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-1-activate, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:32:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-6db33e3a76e3b0e44c5cceadc5c51b9259d165dcce24f87df128a60332786ec9-merged.mount: Deactivated successfully.
Oct 02 11:32:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Oct 02 11:32:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 11:32:56 compute-0 podman[83737]: 2025-10-02 11:32:56.913012493 +0000 UTC m=+3.164585815 container remove 3155e111e922237f7cc8fde97800d5529548dfc2b0ed13f6c89bdb1e7e7f097a (image=quay.io/ceph/ceph:v18, name=nice_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 11:32:56 compute-0 sudo[83683]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:56 compute-0 systemd[1]: libpod-conmon-3155e111e922237f7cc8fde97800d5529548dfc2b0ed13f6c89bdb1e7e7f097a.scope: Deactivated successfully.
Oct 02 11:32:56 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/721300389,v1:192.168.122.101:6801/721300389]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Oct 02 11:32:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Oct 02 11:32:56 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Oct 02 11:32:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 11:32:56 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:32:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 11:32:56 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:32:56 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 11:32:56 compute-0 ceph-mon[73607]: from='osd.0 [v2:192.168.122.101:6800/721300389,v1:192.168.122.101:6801/721300389]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 02 11:32:56 compute-0 ceph-mon[73607]: osdmap e6: 2 total, 0 up, 2 in
Oct 02 11:32:56 compute-0 ceph-mon[73607]: from='osd.0 [v2:192.168.122.101:6800/721300389,v1:192.168.122.101:6801/721300389]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Oct 02 11:32:56 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:32:56 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:32:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/950923059' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 11:32:56 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 02 11:32:56 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/721300389; not ready for session (expect reconnect)
Oct 02 11:32:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 11:32:56 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:32:56 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 11:32:57 compute-0 podman[83966]: 2025-10-02 11:32:57.066902805 +0000 UTC m=+0.044153441 container create 811b7a8ea4b2deb6778574e99d428f21919dceeb6057c6d5051f410f446441b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 11:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42368045efdc79c30f4c985ced31a8ef7456dd6a1bb4b9add9cd604d684b0c8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42368045efdc79c30f4c985ced31a8ef7456dd6a1bb4b9add9cd604d684b0c8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42368045efdc79c30f4c985ced31a8ef7456dd6a1bb4b9add9cd604d684b0c8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42368045efdc79c30f4c985ced31a8ef7456dd6a1bb4b9add9cd604d684b0c8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42368045efdc79c30f4c985ced31a8ef7456dd6a1bb4b9add9cd604d684b0c8f/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:57 compute-0 podman[83966]: 2025-10-02 11:32:57.115655301 +0000 UTC m=+0.092905957 container init 811b7a8ea4b2deb6778574e99d428f21919dceeb6057c6d5051f410f446441b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 11:32:57 compute-0 podman[83966]: 2025-10-02 11:32:57.12451406 +0000 UTC m=+0.101764696 container start 811b7a8ea4b2deb6778574e99d428f21919dceeb6057c6d5051f410f446441b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:32:57 compute-0 bash[83966]: 811b7a8ea4b2deb6778574e99d428f21919dceeb6057c6d5051f410f446441b5
Oct 02 11:32:57 compute-0 podman[83966]: 2025-10-02 11:32:57.04630724 +0000 UTC m=+0.023557926 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:32:57 compute-0 systemd[1]: Started Ceph osd.1 for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2.
Oct 02 11:32:57 compute-0 ceph-osd[83986]: set uid:gid to 167:167 (ceph:ceph)
Oct 02 11:32:57 compute-0 ceph-osd[83986]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct 02 11:32:57 compute-0 ceph-osd[83986]: pidfile_write: ignore empty --pid-file
Oct 02 11:32:57 compute-0 ceph-osd[83986]: bdev(0x55bcd3ac9800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 02 11:32:57 compute-0 ceph-osd[83986]: bdev(0x55bcd3ac9800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 02 11:32:57 compute-0 ceph-osd[83986]: bdev(0x55bcd3ac9800 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 11:32:57 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 02 11:32:57 compute-0 ceph-osd[83986]: bdev(0x55bcd4901000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 02 11:32:57 compute-0 ceph-osd[83986]: bdev(0x55bcd4901000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 02 11:32:57 compute-0 ceph-osd[83986]: bdev(0x55bcd4901000 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 11:32:57 compute-0 ceph-osd[83986]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Oct 02 11:32:57 compute-0 ceph-osd[83986]: bdev(0x55bcd4901000 /var/lib/ceph/osd/ceph-1/block) close
Oct 02 11:32:57 compute-0 sudo[83420]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:32:57 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:32:57 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:32:57 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:32:57 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:57 compute-0 sudo[83999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:57 compute-0 sudo[83999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:57 compute-0 sudo[83999]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:57 compute-0 sudo[84024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:32:57 compute-0 sudo[84024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:57 compute-0 sudo[84024]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:57 compute-0 sudo[84049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:57 compute-0 sudo[84049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:57 compute-0 sudo[84049]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:57 compute-0 sudo[84074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 11:32:57 compute-0 sudo[84074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:57 compute-0 ceph-osd[83986]: bdev(0x55bcd3ac9800 /var/lib/ceph/osd/ceph-1/block) close
Oct 02 11:32:57 compute-0 ceph-osd[83986]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Oct 02 11:32:57 compute-0 ceph-osd[83986]: load: jerasure load: lrc 
Oct 02 11:32:57 compute-0 ceph-osd[83986]: bdev(0x55bcd4901c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 02 11:32:57 compute-0 ceph-osd[83986]: bdev(0x55bcd4901c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 02 11:32:57 compute-0 ceph-osd[83986]: bdev(0x55bcd4901c00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 11:32:57 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 02 11:32:57 compute-0 ceph-osd[83986]: bdev(0x55bcd4901c00 /var/lib/ceph/osd/ceph-1/block) close
Oct 02 11:32:57 compute-0 podman[84142]: 2025-10-02 11:32:57.704821988 +0000 UTC m=+0.036402123 container create 2e3886241b92d132a310cb17a052b9ec809729d74936b99518ad821ec4590a38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_wiles, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:32:57 compute-0 systemd[1]: Started libpod-conmon-2e3886241b92d132a310cb17a052b9ec809729d74936b99518ad821ec4590a38.scope.
Oct 02 11:32:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:32:57 compute-0 podman[84142]: 2025-10-02 11:32:57.782689868 +0000 UTC m=+0.114270033 container init 2e3886241b92d132a310cb17a052b9ec809729d74936b99518ad821ec4590a38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_wiles, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 11:32:57 compute-0 podman[84142]: 2025-10-02 11:32:57.69044025 +0000 UTC m=+0.022020385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:32:57 compute-0 podman[84142]: 2025-10-02 11:32:57.789702117 +0000 UTC m=+0.121282262 container start 2e3886241b92d132a310cb17a052b9ec809729d74936b99518ad821ec4590a38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 11:32:57 compute-0 podman[84142]: 2025-10-02 11:32:57.793211413 +0000 UTC m=+0.124791578 container attach 2e3886241b92d132a310cb17a052b9ec809729d74936b99518ad821ec4590a38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_wiles, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 11:32:57 compute-0 unruffled_wiles[84163]: 167 167
Oct 02 11:32:57 compute-0 systemd[1]: libpod-2e3886241b92d132a310cb17a052b9ec809729d74936b99518ad821ec4590a38.scope: Deactivated successfully.
Oct 02 11:32:57 compute-0 podman[84142]: 2025-10-02 11:32:57.794737274 +0000 UTC m=+0.126317419 container died 2e3886241b92d132a310cb17a052b9ec809729d74936b99518ad821ec4590a38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_wiles, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:32:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-92b9dc0d05524f02f888129644535922624ced392a11a7e10b24baea5cf236bc-merged.mount: Deactivated successfully.
Oct 02 11:32:57 compute-0 podman[84142]: 2025-10-02 11:32:57.830219591 +0000 UTC m=+0.161799726 container remove 2e3886241b92d132a310cb17a052b9ec809729d74936b99518ad821ec4590a38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_wiles, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 11:32:57 compute-0 systemd[1]: libpod-conmon-2e3886241b92d132a310cb17a052b9ec809729d74936b99518ad821ec4590a38.scope: Deactivated successfully.
Oct 02 11:32:57 compute-0 ceph-osd[83986]: bdev(0x55bcd4901c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 02 11:32:57 compute-0 ceph-osd[83986]: bdev(0x55bcd4901c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 02 11:32:57 compute-0 ceph-osd[83986]: bdev(0x55bcd4901c00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 11:32:57 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 02 11:32:57 compute-0 ceph-osd[83986]: bdev(0x55bcd4901c00 /var/lib/ceph/osd/ceph-1/block) close
Oct 02 11:32:57 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/721300389; not ready for session (expect reconnect)
Oct 02 11:32:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 11:32:57 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:32:57 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 11:32:57 compute-0 ceph-mon[73607]: purged_snaps scrub starts
Oct 02 11:32:57 compute-0 ceph-mon[73607]: purged_snaps scrub ok
Oct 02 11:32:57 compute-0 ceph-mon[73607]: pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:57 compute-0 ceph-mon[73607]: from='osd.0 [v2:192.168.122.101:6800/721300389,v1:192.168.122.101:6801/721300389]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Oct 02 11:32:57 compute-0 ceph-mon[73607]: osdmap e7: 2 total, 0 up, 2 in
Oct 02 11:32:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:32:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:32:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:32:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:58 compute-0 podman[84186]: 2025-10-02 11:32:58.012053057 +0000 UTC m=+0.066838454 container create dbb5ab84f910ae5b22e51d4faada6839ef5370f4509489fe04829546d5554fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:32:58 compute-0 podman[84186]: 2025-10-02 11:32:57.9680373 +0000 UTC m=+0.022822727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:32:58 compute-0 systemd[1]: Started libpod-conmon-dbb5ab84f910ae5b22e51d4faada6839ef5370f4509489fe04829546d5554fa8.scope.
Oct 02 11:32:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:32:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bc85797c324d255343bcc8f7a419943a40d152b57653131aef0d79e8ffa10e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bc85797c324d255343bcc8f7a419943a40d152b57653131aef0d79e8ffa10e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bc85797c324d255343bcc8f7a419943a40d152b57653131aef0d79e8ffa10e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bc85797c324d255343bcc8f7a419943a40d152b57653131aef0d79e8ffa10e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:32:58 compute-0 podman[84186]: 2025-10-02 11:32:58.119415344 +0000 UTC m=+0.174200771 container init dbb5ab84f910ae5b22e51d4faada6839ef5370f4509489fe04829546d5554fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:32:58 compute-0 podman[84186]: 2025-10-02 11:32:58.131598193 +0000 UTC m=+0.186383590 container start dbb5ab84f910ae5b22e51d4faada6839ef5370f4509489fe04829546d5554fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_moser, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct 02 11:32:58 compute-0 podman[84186]: 2025-10-02 11:32:58.134937033 +0000 UTC m=+0.189722440 container attach dbb5ab84f910ae5b22e51d4faada6839ef5370f4509489fe04829546d5554fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_moser, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:32:58 compute-0 ceph-osd[83986]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 02 11:32:58 compute-0 ceph-osd[83986]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bdev(0x55bcd4901c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bdev(0x55bcd4901c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bdev(0x55bcd4901c00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bdev(0x55bcd4ae4400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bdev(0x55bcd4ae4400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bdev(0x55bcd4ae4400 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bluefs mount
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bluefs mount shared_bdev_used = 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: RocksDB version: 7.9.2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Git sha 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: DB SUMMARY
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: DB Session ID:  H31GRQBOCRGUX1S9T7RU
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: CURRENT file:  CURRENT
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: IDENTITY file:  IDENTITY
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                         Options.error_if_exists: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.create_if_missing: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                         Options.paranoid_checks: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                                     Options.env: 0x55bcd4953d50
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                                Options.info_log: 0x55bcd3b46b40
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.max_file_opening_threads: 16
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                              Options.statistics: (nil)
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                               Options.use_fsync: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.max_log_file_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                         Options.allow_fallocate: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.use_direct_reads: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.create_missing_column_families: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                              Options.db_log_dir: 
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                                 Options.wal_dir: db.wal
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.advise_random_on_open: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.write_buffer_manager: 0x55bcd4a5e460
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                            Options.rate_limiter: (nil)
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.unordered_write: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                               Options.row_cache: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                              Options.wal_filter: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.allow_ingest_behind: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.two_write_queues: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.manual_wal_flush: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.wal_compression: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.atomic_flush: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.log_readahead_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.allow_data_in_errors: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.db_host_id: __hostname__
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.max_background_jobs: 4
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.max_background_compactions: -1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.max_subcompactions: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.max_open_files: -1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.bytes_per_sync: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.max_background_flushes: -1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Compression algorithms supported:
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         kZSTD supported: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         kXpressCompression supported: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         kBZip2Compression supported: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         kLZ4Compression supported: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         kZlibCompression supported: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         kLZ4HCCompression supported: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         kSnappyCompression supported: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bcd3b465a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bcd3b3cdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.compression: LZ4
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.num_levels: 7
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:           Options.merge_operator: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bcd3b465a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bcd3b3cdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.compression: LZ4
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.num_levels: 7
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:           Options.merge_operator: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bcd3b465a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bcd3b3cdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.compression: LZ4
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.num_levels: 7
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:           Options.merge_operator: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bcd3b465a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bcd3b3cdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.compression: LZ4
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.num_levels: 7
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:           Options.merge_operator: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bcd3b465a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bcd3b3cdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.compression: LZ4
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.num_levels: 7
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:           Options.merge_operator: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bcd3b465a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bcd3b3cdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.compression: LZ4
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.num_levels: 7
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:           Options.merge_operator: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bcd3b465a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bcd3b3cdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.compression: LZ4
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.num_levels: 7
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:           Options.merge_operator: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bcd3b46540)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bcd3b3c430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.compression: LZ4
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.num_levels: 7
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:           Options.merge_operator: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bcd3b46540)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bcd3b3c430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.compression: LZ4
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.num_levels: 7
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:           Options.merge_operator: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bcd3b46540)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bcd3b3c430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.compression: LZ4
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.num_levels: 7
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 01ea9488-10c0-4ebc-acca-8a0b167dd3c7
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759404778293938, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759404778294268, "job": 1, "event": "recovery_finished"}
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: freelist init
Oct 02 11:32:58 compute-0 ceph-osd[83986]: freelist _read_cfg
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bluefs umount
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bdev(0x55bcd4ae4400 /var/lib/ceph/osd/ceph-1/block) close
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bdev(0x55bcd4ae4400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bdev(0x55bcd4ae4400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bdev(0x55bcd4ae4400 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bluefs mount
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bluefs mount shared_bdev_used = 4718592
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: RocksDB version: 7.9.2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Git sha 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: DB SUMMARY
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: DB Session ID:  H31GRQBOCRGUX1S9T7RV
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: CURRENT file:  CURRENT
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: IDENTITY file:  IDENTITY
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                         Options.error_if_exists: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.create_if_missing: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                         Options.paranoid_checks: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                                     Options.env: 0x55bcd3c943f0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                                Options.info_log: 0x55bcd3b47700
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.max_file_opening_threads: 16
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                              Options.statistics: (nil)
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                               Options.use_fsync: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.max_log_file_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                         Options.allow_fallocate: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.use_direct_reads: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.create_missing_column_families: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                              Options.db_log_dir: 
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                                 Options.wal_dir: db.wal
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.advise_random_on_open: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.write_buffer_manager: 0x55bcd4a5e8c0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                            Options.rate_limiter: (nil)
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.unordered_write: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                               Options.row_cache: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                              Options.wal_filter: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.allow_ingest_behind: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.two_write_queues: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.manual_wal_flush: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.wal_compression: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.atomic_flush: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.log_readahead_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.allow_data_in_errors: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.db_host_id: __hostname__
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.max_background_jobs: 4
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.max_background_compactions: -1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.max_subcompactions: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.max_open_files: -1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.bytes_per_sync: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.max_background_flushes: -1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Compression algorithms supported:
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         kZSTD supported: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         kXpressCompression supported: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         kBZip2Compression supported: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         kLZ4Compression supported: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         kZlibCompression supported: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         kLZ4HCCompression supported: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         kSnappyCompression supported: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bcd3b47d80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bcd3b3cdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.compression: LZ4
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.num_levels: 7
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:           Options.merge_operator: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bcd3b47d80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bcd3b3cdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.compression: LZ4
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.num_levels: 7
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:           Options.merge_operator: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bcd3b47d80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bcd3b3cdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.compression: LZ4
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.num_levels: 7
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:           Options.merge_operator: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bcd3b47d80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bcd3b3cdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.compression: LZ4
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.num_levels: 7
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:           Options.merge_operator: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bcd3b47d80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bcd3b3cdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.compression: LZ4
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.num_levels: 7
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:           Options.merge_operator: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bcd3b47d80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bcd3b3cdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.compression: LZ4
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.num_levels: 7
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:           Options.merge_operator: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bcd3b47d80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bcd3b3cdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.compression: LZ4
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.num_levels: 7
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:           Options.merge_operator: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bcd3b462a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bcd3b3c430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.compression: LZ4
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.num_levels: 7
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:           Options.merge_operator: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bcd3b462a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bcd3b3c430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.compression: LZ4
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.num_levels: 7
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:           Options.merge_operator: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bcd3b462a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bcd3b3c430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.compression: LZ4
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.num_levels: 7
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 01ea9488-10c0-4ebc-acca-8a0b167dd3c7
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759404778567272, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759404778574793, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404778, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "01ea9488-10c0-4ebc-acca-8a0b167dd3c7", "db_session_id": "H31GRQBOCRGUX1S9T7RV", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759404778577434, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1593, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 467, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404778, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "01ea9488-10c0-4ebc-acca-8a0b167dd3c7", "db_session_id": "H31GRQBOCRGUX1S9T7RV", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759404778581364, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404778, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "01ea9488-10c0-4ebc-acca-8a0b167dd3c7", "db_session_id": "H31GRQBOCRGUX1S9T7RV", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759404778583908, "job": 1, "event": "recovery_finished"}
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55bcd3bf9c00
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: DB pointer 0x55bcd4a47a00
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Oct 02 11:32:58 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 11:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3c430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3c430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3c430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 02 11:32:58 compute-0 ceph-osd[83986]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 02 11:32:58 compute-0 ceph-osd[83986]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 02 11:32:58 compute-0 ceph-osd[83986]: _get_class not permitted to load lua
Oct 02 11:32:58 compute-0 ceph-osd[83986]: _get_class not permitted to load sdk
Oct 02 11:32:58 compute-0 ceph-osd[83986]: _get_class not permitted to load test_remote_reads
Oct 02 11:32:58 compute-0 ceph-osd[83986]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 02 11:32:58 compute-0 ceph-osd[83986]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 02 11:32:58 compute-0 ceph-osd[83986]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 02 11:32:58 compute-0 ceph-osd[83986]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 02 11:32:58 compute-0 ceph-osd[83986]: osd.1 0 load_pgs
Oct 02 11:32:58 compute-0 ceph-osd[83986]: osd.1 0 load_pgs opened 0 pgs
Oct 02 11:32:58 compute-0 ceph-osd[83986]: osd.1 0 log_to_monitors true
Oct 02 11:32:58 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-1[83982]: 2025-10-02T11:32:58.628+0000 7fd2408af740 -1 osd.1 0 log_to_monitors true
Oct 02 11:32:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Oct 02 11:32:58 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/1702934741,v1:192.168.122.100:6803/1702934741]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct 02 11:32:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:32:58 compute-0 awesome_moser[84205]: {
Oct 02 11:32:58 compute-0 awesome_moser[84205]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 11:32:58 compute-0 awesome_moser[84205]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:32:58 compute-0 awesome_moser[84205]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:32:58 compute-0 awesome_moser[84205]:         "osd_id": 1,
Oct 02 11:32:58 compute-0 awesome_moser[84205]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:32:58 compute-0 awesome_moser[84205]:         "type": "bluestore"
Oct 02 11:32:58 compute-0 awesome_moser[84205]:     }
Oct 02 11:32:58 compute-0 awesome_moser[84205]: }
Oct 02 11:32:58 compute-0 systemd[1]: libpod-dbb5ab84f910ae5b22e51d4faada6839ef5370f4509489fe04829546d5554fa8.scope: Deactivated successfully.
Oct 02 11:32:58 compute-0 podman[84186]: 2025-10-02 11:32:58.940956661 +0000 UTC m=+0.995742058 container died dbb5ab84f910ae5b22e51d4faada6839ef5370f4509489fe04829546d5554fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 11:32:58 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/721300389; not ready for session (expect reconnect)
Oct 02 11:32:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 11:32:58 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:32:58 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 11:32:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bc85797c324d255343bcc8f7a419943a40d152b57653131aef0d79e8ffa10e1-merged.mount: Deactivated successfully.
Oct 02 11:32:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Oct 02 11:32:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 11:32:58 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:32:58 compute-0 ceph-mon[73607]: from='osd.1 [v2:192.168.122.100:6802/1702934741,v1:192.168.122.100:6803/1702934741]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct 02 11:32:58 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:32:59 compute-0 podman[84186]: 2025-10-02 11:32:59.020196108 +0000 UTC m=+1.074981505 container remove dbb5ab84f910ae5b22e51d4faada6839ef5370f4509489fe04829546d5554fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 11:32:59 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/1702934741,v1:192.168.122.100:6803/1702934741]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 02 11:32:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e8 e8: 2 total, 0 up, 2 in
Oct 02 11:32:59 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 0 up, 2 in
Oct 02 11:32:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct 02 11:32:59 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/1702934741,v1:192.168.122.100:6803/1702934741]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 02 11:32:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e8 create-or-move crush item name 'osd.1' initial_weight 0.0068 at location {host=compute-0,root=default}
Oct 02 11:32:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 11:32:59 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:32:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 11:32:59 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:32:59 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 02 11:32:59 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 11:32:59 compute-0 systemd[1]: libpod-conmon-dbb5ab84f910ae5b22e51d4faada6839ef5370f4509489fe04829546d5554fa8.scope: Deactivated successfully.
Oct 02 11:32:59 compute-0 sudo[84074]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:32:59 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:32:59 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:32:59 compute-0 sudo[84649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:59 compute-0 sudo[84649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:59 compute-0 sudo[84649]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:32:59 compute-0 sudo[84674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:32:59 compute-0 sudo[84674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:59 compute-0 sudo[84674]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:59 compute-0 sudo[84699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:59 compute-0 sudo[84699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:59 compute-0 sudo[84699]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:59 compute-0 sudo[84724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:32:59 compute-0 sudo[84724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:59 compute-0 sudo[84724]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:59 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 02 11:32:59 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 02 11:32:59 compute-0 sudo[84749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:32:59 compute-0 sudo[84749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:59 compute-0 sudo[84749]: pam_unix(sudo:session): session closed for user root
Oct 02 11:32:59 compute-0 sudo[84774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 11:32:59 compute-0 sudo[84774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:32:59 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/721300389; not ready for session (expect reconnect)
Oct 02 11:32:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 11:32:59 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:32:59 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 11:32:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:32:59 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Oct 02 11:33:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 11:33:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/1702934741,v1:192.168.122.100:6803/1702934741]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Oct 02 11:33:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e9 e9: 2 total, 0 up, 2 in
Oct 02 11:33:00 compute-0 ceph-osd[83986]: osd.1 0 done with init, starting boot process
Oct 02 11:33:00 compute-0 ceph-osd[83986]: osd.1 0 start_boot
Oct 02 11:33:00 compute-0 ceph-osd[83986]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 02 11:33:00 compute-0 ceph-osd[83986]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 02 11:33:00 compute-0 ceph-osd[83986]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 02 11:33:00 compute-0 ceph-osd[83986]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 02 11:33:00 compute-0 ceph-osd[83986]: osd.1 0  bench count 12288000 bsize 4 KiB
Oct 02 11:33:00 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 0 up, 2 in
Oct 02 11:33:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 11:33:00 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:33:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 11:33:00 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:33:00 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 02 11:33:00 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 11:33:00 compute-0 ceph-mon[73607]: pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:33:00 compute-0 ceph-mon[73607]: from='osd.1 [v2:192.168.122.100:6802/1702934741,v1:192.168.122.100:6803/1702934741]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 02 11:33:00 compute-0 ceph-mon[73607]: osdmap e8: 2 total, 0 up, 2 in
Oct 02 11:33:00 compute-0 ceph-mon[73607]: from='osd.1 [v2:192.168.122.100:6802/1702934741,v1:192.168.122.100:6803/1702934741]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 02 11:33:00 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:33:00 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:33:00 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:00 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:00 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:33:00 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:00 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/1702934741; not ready for session (expect reconnect)
Oct 02 11:33:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 11:33:00 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:33:00 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 02 11:33:00 compute-0 podman[84868]: 2025-10-02 11:33:00.173433604 +0000 UTC m=+0.087478471 container exec 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:33:00 compute-0 podman[84868]: 2025-10-02 11:33:00.273014561 +0000 UTC m=+0.187059398 container exec_died 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 11:33:00 compute-0 sudo[84774]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:33:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:33:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:33:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:33:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:00 compute-0 sudo[84952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:33:00 compute-0 sudo[84952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:33:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:33:00 compute-0 sudo[84952]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:00 compute-0 sudo[84977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:33:00 compute-0 sudo[84977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:33:00 compute-0 sudo[84977]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:00 compute-0 sudo[85003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:33:00 compute-0 sudo[85003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:33:00 compute-0 sudo[85003]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:33:00 compute-0 sudo[85028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:33:00 compute-0 sudo[85028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:33:00 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/721300389; not ready for session (expect reconnect)
Oct 02 11:33:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 11:33:00 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:33:00 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 11:33:01 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/1702934741; not ready for session (expect reconnect)
Oct 02 11:33:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 11:33:01 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 02 11:33:01 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:33:01 compute-0 ceph-mon[73607]: from='osd.1 [v2:192.168.122.100:6802/1702934741,v1:192.168.122.100:6803/1702934741]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Oct 02 11:33:01 compute-0 ceph-mon[73607]: osdmap e9: 2 total, 0 up, 2 in
Oct 02 11:33:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:33:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:33:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:33:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:33:01 compute-0 sudo[85028]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:01 compute-0 sudo[85083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:33:01 compute-0 sudo[85083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:33:01 compute-0 sudo[85083]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:01 compute-0 sudo[85108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:33:01 compute-0 sudo[85108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:33:01 compute-0 sudo[85108]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:01 compute-0 sudo[85133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:33:01 compute-0 sudo[85133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:33:01 compute-0 sudo[85133]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:01 compute-0 sudo[85158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- inventory --format=json-pretty --filter-for-batch
Oct 02 11:33:01 compute-0 sudo[85158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:33:01 compute-0 podman[85220]: 2025-10-02 11:33:01.773387185 +0000 UTC m=+0.054094931 container create adb403558131ff62e819a4f694cc6317c803bca237e725147b8a0409fa351b13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:33:01 compute-0 systemd[1]: Started libpod-conmon-adb403558131ff62e819a4f694cc6317c803bca237e725147b8a0409fa351b13.scope.
Oct 02 11:33:01 compute-0 podman[85220]: 2025-10-02 11:33:01.739752727 +0000 UTC m=+0.020460493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:33:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:33:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:33:01 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:01 compute-0 podman[85220]: 2025-10-02 11:33:01.891115641 +0000 UTC m=+0.171823407 container init adb403558131ff62e819a4f694cc6317c803bca237e725147b8a0409fa351b13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:33:01 compute-0 podman[85220]: 2025-10-02 11:33:01.898033238 +0000 UTC m=+0.178740984 container start adb403558131ff62e819a4f694cc6317c803bca237e725147b8a0409fa351b13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dubinsky, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 11:33:01 compute-0 nostalgic_dubinsky[85236]: 167 167
Oct 02 11:33:01 compute-0 systemd[1]: libpod-adb403558131ff62e819a4f694cc6317c803bca237e725147b8a0409fa351b13.scope: Deactivated successfully.
Oct 02 11:33:01 compute-0 podman[85220]: 2025-10-02 11:33:01.909015184 +0000 UTC m=+0.189722930 container attach adb403558131ff62e819a4f694cc6317c803bca237e725147b8a0409fa351b13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dubinsky, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:33:01 compute-0 podman[85220]: 2025-10-02 11:33:01.909348133 +0000 UTC m=+0.190055879 container died adb403558131ff62e819a4f694cc6317c803bca237e725147b8a0409fa351b13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 11:33:01 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/721300389; not ready for session (expect reconnect)
Oct 02 11:33:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 11:33:01 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:33:01 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 11:33:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-85659d6cf31ab600ea73362c329f6fa0b6d5d3bd695bece4288fc6c48a0026e6-merged.mount: Deactivated successfully.
Oct 02 11:33:02 compute-0 podman[85220]: 2025-10-02 11:33:02.04375279 +0000 UTC m=+0.324460536 container remove adb403558131ff62e819a4f694cc6317c803bca237e725147b8a0409fa351b13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 11:33:02 compute-0 systemd[1]: libpod-conmon-adb403558131ff62e819a4f694cc6317c803bca237e725147b8a0409fa351b13.scope: Deactivated successfully.
Oct 02 11:33:02 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/1702934741; not ready for session (expect reconnect)
Oct 02 11:33:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 11:33:02 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:33:02 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 02 11:33:02 compute-0 ceph-mon[73607]: purged_snaps scrub starts
Oct 02 11:33:02 compute-0 ceph-mon[73607]: purged_snaps scrub ok
Oct 02 11:33:02 compute-0 ceph-mon[73607]: pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:33:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:33:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:33:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:33:02 compute-0 podman[85261]: 2025-10-02 11:33:02.212153373 +0000 UTC m=+0.060222756 container create e59696878221b2a4b48a5a1612799a1cdc89d0b37b6e6e1ef605c57d4a94bcc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_borg, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 11:33:02 compute-0 podman[85261]: 2025-10-02 11:33:02.171593528 +0000 UTC m=+0.019662921 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:33:02 compute-0 systemd[1]: Started libpod-conmon-e59696878221b2a4b48a5a1612799a1cdc89d0b37b6e6e1ef605c57d4a94bcc4.scope.
Oct 02 11:33:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d9537b624e9b5e7f605edafbe4c631e3082210fa755a28b28a26c624d15443/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d9537b624e9b5e7f605edafbe4c631e3082210fa755a28b28a26c624d15443/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d9537b624e9b5e7f605edafbe4c631e3082210fa755a28b28a26c624d15443/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d9537b624e9b5e7f605edafbe4c631e3082210fa755a28b28a26c624d15443/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:02 compute-0 podman[85261]: 2025-10-02 11:33:02.33248877 +0000 UTC m=+0.180558143 container init e59696878221b2a4b48a5a1612799a1cdc89d0b37b6e6e1ef605c57d4a94bcc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_borg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:33:02 compute-0 podman[85261]: 2025-10-02 11:33:02.341142523 +0000 UTC m=+0.189211886 container start e59696878221b2a4b48a5a1612799a1cdc89d0b37b6e6e1ef605c57d4a94bcc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:33:02 compute-0 podman[85261]: 2025-10-02 11:33:02.357791763 +0000 UTC m=+0.205861146 container attach e59696878221b2a4b48a5a1612799a1cdc89d0b37b6e6e1ef605c57d4a94bcc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:33:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:33:02 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/721300389; not ready for session (expect reconnect)
Oct 02 11:33:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 11:33:02 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:33:02 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 11:33:03 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/1702934741; not ready for session (expect reconnect)
Oct 02 11:33:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 11:33:03 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:33:03 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 02 11:33:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Oct 02 11:33:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 11:33:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:33:03 compute-0 ceph-mon[73607]: OSD bench result of 4356.019216 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 02 11:33:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:33:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:33:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e10 e10: 2 total, 1 up, 2 in
Oct 02 11:33:03 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:03 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.101:6800/721300389,v1:192.168.122.101:6801/721300389] boot
Oct 02 11:33:03 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 1 up, 2 in
Oct 02 11:33:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 11:33:03 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:33:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 11:33:03 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:33:03 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 02 11:33:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:33:03 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:33:03 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:33:03 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Oct 02 11:33:03 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 02 11:33:03 compute-0 ceph-mgr[73901]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Oct 02 11:33:03 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Oct 02 11:33:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Oct 02 11:33:03 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:03 compute-0 ceph-osd[83986]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 39.143 iops: 10020.516 elapsed_sec: 0.299
Oct 02 11:33:03 compute-0 ceph-osd[83986]: log_channel(cluster) log [WRN] : OSD bench result of 10020.515737 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 02 11:33:03 compute-0 ceph-osd[83986]: osd.1 0 waiting for initial osdmap
Oct 02 11:33:03 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-1[83982]: 2025-10-02T11:33:03.404+0000 7fd23c82f640 -1 osd.1 0 waiting for initial osdmap
Oct 02 11:33:03 compute-0 ceph-osd[83986]: osd.1 10 crush map has features 288514050185494528, adjusting msgr requires for clients
Oct 02 11:33:03 compute-0 ceph-osd[83986]: osd.1 10 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Oct 02 11:33:03 compute-0 ceph-osd[83986]: osd.1 10 crush map has features 3314932999778484224, adjusting msgr requires for osds
Oct 02 11:33:03 compute-0 ceph-osd[83986]: osd.1 10 check_osdmap_features require_osd_release unknown -> reef
Oct 02 11:33:03 compute-0 ceph-osd[83986]: osd.1 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 02 11:33:03 compute-0 ceph-osd[83986]: osd.1 10 set_numa_affinity not setting numa affinity
Oct 02 11:33:03 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-1[83982]: 2025-10-02T11:33:03.420+0000 7fd237e57640 -1 osd.1 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 02 11:33:03 compute-0 ceph-osd[83986]: osd.1 10 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Oct 02 11:33:03 compute-0 elegant_borg[85276]: [
Oct 02 11:33:03 compute-0 elegant_borg[85276]:     {
Oct 02 11:33:03 compute-0 elegant_borg[85276]:         "available": false,
Oct 02 11:33:03 compute-0 elegant_borg[85276]:         "ceph_device": false,
Oct 02 11:33:03 compute-0 elegant_borg[85276]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 02 11:33:03 compute-0 elegant_borg[85276]:         "lsm_data": {},
Oct 02 11:33:03 compute-0 elegant_borg[85276]:         "lvs": [],
Oct 02 11:33:03 compute-0 elegant_borg[85276]:         "path": "/dev/sr0",
Oct 02 11:33:03 compute-0 elegant_borg[85276]:         "rejected_reasons": [
Oct 02 11:33:03 compute-0 elegant_borg[85276]:             "Insufficient space (<5GB)",
Oct 02 11:33:03 compute-0 elegant_borg[85276]:             "Has a FileSystem"
Oct 02 11:33:03 compute-0 elegant_borg[85276]:         ],
Oct 02 11:33:03 compute-0 elegant_borg[85276]:         "sys_api": {
Oct 02 11:33:03 compute-0 elegant_borg[85276]:             "actuators": null,
Oct 02 11:33:03 compute-0 elegant_borg[85276]:             "device_nodes": "sr0",
Oct 02 11:33:03 compute-0 elegant_borg[85276]:             "devname": "sr0",
Oct 02 11:33:03 compute-0 elegant_borg[85276]:             "human_readable_size": "482.00 KB",
Oct 02 11:33:03 compute-0 elegant_borg[85276]:             "id_bus": "ata",
Oct 02 11:33:03 compute-0 elegant_borg[85276]:             "model": "QEMU DVD-ROM",
Oct 02 11:33:03 compute-0 elegant_borg[85276]:             "nr_requests": "2",
Oct 02 11:33:03 compute-0 elegant_borg[85276]:             "parent": "/dev/sr0",
Oct 02 11:33:03 compute-0 elegant_borg[85276]:             "partitions": {},
Oct 02 11:33:03 compute-0 elegant_borg[85276]:             "path": "/dev/sr0",
Oct 02 11:33:03 compute-0 elegant_borg[85276]:             "removable": "1",
Oct 02 11:33:03 compute-0 elegant_borg[85276]:             "rev": "2.5+",
Oct 02 11:33:03 compute-0 elegant_borg[85276]:             "ro": "0",
Oct 02 11:33:03 compute-0 elegant_borg[85276]:             "rotational": "0",
Oct 02 11:33:03 compute-0 elegant_borg[85276]:             "sas_address": "",
Oct 02 11:33:03 compute-0 elegant_borg[85276]:             "sas_device_handle": "",
Oct 02 11:33:03 compute-0 elegant_borg[85276]:             "scheduler_mode": "mq-deadline",
Oct 02 11:33:03 compute-0 elegant_borg[85276]:             "sectors": 0,
Oct 02 11:33:03 compute-0 elegant_borg[85276]:             "sectorsize": "2048",
Oct 02 11:33:03 compute-0 elegant_borg[85276]:             "size": 493568.0,
Oct 02 11:33:03 compute-0 elegant_borg[85276]:             "support_discard": "2048",
Oct 02 11:33:03 compute-0 elegant_borg[85276]:             "type": "disk",
Oct 02 11:33:03 compute-0 elegant_borg[85276]:             "vendor": "QEMU"
Oct 02 11:33:03 compute-0 elegant_borg[85276]:         }
Oct 02 11:33:03 compute-0 elegant_borg[85276]:     }
Oct 02 11:33:03 compute-0 elegant_borg[85276]: ]
Oct 02 11:33:03 compute-0 systemd[1]: libpod-e59696878221b2a4b48a5a1612799a1cdc89d0b37b6e6e1ef605c57d4a94bcc4.scope: Deactivated successfully.
Oct 02 11:33:03 compute-0 podman[85261]: 2025-10-02 11:33:03.490588758 +0000 UTC m=+1.338658131 container died e59696878221b2a4b48a5a1612799a1cdc89d0b37b6e6e1ef605c57d4a94bcc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:33:03 compute-0 systemd[1]: libpod-e59696878221b2a4b48a5a1612799a1cdc89d0b37b6e6e1ef605c57d4a94bcc4.scope: Consumed 1.130s CPU time.
Oct 02 11:33:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-14d9537b624e9b5e7f605edafbe4c631e3082210fa755a28b28a26c624d15443-merged.mount: Deactivated successfully.
Oct 02 11:33:03 compute-0 podman[85261]: 2025-10-02 11:33:03.546142697 +0000 UTC m=+1.394212070 container remove e59696878221b2a4b48a5a1612799a1cdc89d0b37b6e6e1ef605c57d4a94bcc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_borg, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 11:33:03 compute-0 systemd[1]: libpod-conmon-e59696878221b2a4b48a5a1612799a1cdc89d0b37b6e6e1ef605c57d4a94bcc4.scope: Deactivated successfully.
Oct 02 11:33:03 compute-0 sudo[85158]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:33:03 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:33:03 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:33:03 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:33:03 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Oct 02 11:33:03 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 02 11:33:03 compute-0 ceph-mgr[73901]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.8M
Oct 02 11:33:03 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.8M
Oct 02 11:33:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Oct 02 11:33:03 compute-0 ceph-mgr[73901]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134062899: error parsing value: Value '134062899' is below minimum 939524096
Oct 02 11:33:03 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134062899: error parsing value: Value '134062899' is below minimum 939524096
Oct 02 11:33:04 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/1702934741; not ready for session (expect reconnect)
Oct 02 11:33:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 11:33:04 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:33:04 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 02 11:33:04 compute-0 ceph-mon[73607]: pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:33:04 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:04 compute-0 ceph-mon[73607]: osd.0 [v2:192.168.122.101:6800/721300389,v1:192.168.122.101:6801/721300389] boot
Oct 02 11:33:04 compute-0 ceph-mon[73607]: osdmap e10: 2 total, 1 up, 2 in
Oct 02 11:33:04 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:33:04 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:33:04 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:04 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:04 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:04 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 02 11:33:04 compute-0 ceph-mon[73607]: Adjusting osd_memory_target on compute-1 to  5247M
Oct 02 11:33:04 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:04 compute-0 ceph-mon[73607]: OSD bench result of 10020.515737 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 02 11:33:04 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:04 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:04 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:04 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:04 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 02 11:33:04 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:33:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Oct 02 11:33:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 11:33:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e11 e11: 2 total, 2 up, 2 in
Oct 02 11:33:04 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6802/1702934741,v1:192.168.122.100:6803/1702934741] boot
Oct 02 11:33:04 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Oct 02 11:33:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 11:33:04 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:33:04 compute-0 ceph-osd[83986]: osd.1 11 state: booting -> active
Oct 02 11:33:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:33:04 compute-0 ceph-mgr[73901]: [devicehealth INFO root] creating mgr pool
Oct 02 11:33:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Oct 02 11:33:04 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 02 11:33:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v45: 0 pgs: ; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Oct 02 11:33:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Oct 02 11:33:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e11 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 11:33:05 compute-0 ceph-mon[73607]: Adjusting osd_memory_target on compute-0 to 127.8M
Oct 02 11:33:05 compute-0 ceph-mon[73607]: Unable to set osd_memory_target on compute-0 to 134062899: error parsing value: Value '134062899' is below minimum 939524096
Oct 02 11:33:05 compute-0 ceph-mon[73607]: osd.1 [v2:192.168.122.100:6802/1702934741,v1:192.168.122.100:6803/1702934741] boot
Oct 02 11:33:05 compute-0 ceph-mon[73607]: osdmap e11: 2 total, 2 up, 2 in
Oct 02 11:33:05 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:33:05 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 02 11:33:05 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 02 11:33:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Oct 02 11:33:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e12 crush map has features 3314933000852226048, adjusting msgr requires
Oct 02 11:33:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Oct 02 11:33:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Oct 02 11:33:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Oct 02 11:33:05 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Oct 02 11:33:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Oct 02 11:33:05 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 02 11:33:05 compute-0 ceph-osd[83986]: osd.1 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 02 11:33:05 compute-0 ceph-osd[83986]: osd.1 12 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Oct 02 11:33:05 compute-0 ceph-osd[83986]: osd.1 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 02 11:33:05 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Oct 02 11:33:06 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 02 11:33:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Oct 02 11:33:06 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Oct 02 11:33:06 compute-0 ceph-mon[73607]: pgmap v45: 0 pgs: ; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Oct 02 11:33:06 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 02 11:33:06 compute-0 ceph-mon[73607]: osdmap e12: 2 total, 2 up, 2 in
Oct 02 11:33:06 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 02 11:33:06 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:06 compute-0 ceph-mgr[73901]: [devicehealth INFO root] creating main.db for devicehealth
Oct 02 11:33:06 compute-0 ceph-mgr[73901]: [devicehealth INFO root] Check health
Oct 02 11:33:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Oct 02 11:33:06 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 02 11:33:06 compute-0 sudo[86467]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Oct 02 11:33:06 compute-0 sudo[86467]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 11:33:06 compute-0 sudo[86467]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Oct 02 11:33:06 compute-0 sudo[86467]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:06 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 02 11:33:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 02 11:33:06 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 11:33:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Oct 02 11:33:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Oct 02 11:33:07 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Oct 02 11:33:07 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 02 11:33:07 compute-0 ceph-mon[73607]: osdmap e13: 2 total, 2 up, 2 in
Oct 02 11:33:07 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 02 11:33:07 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 02 11:33:07 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 11:33:08 compute-0 ceph-mon[73607]: pgmap v48: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Oct 02 11:33:08 compute-0 ceph-mon[73607]: osdmap e14: 2 total, 2 up, 2 in
Oct 02 11:33:08 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.fmcstn(active, since 86s)
Oct 02 11:33:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Oct 02 11:33:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:33:09 compute-0 ceph-mon[73607]: mgrmap e9: compute-0.fmcstn(active, since 86s)
Oct 02 11:33:10 compute-0 ceph-mon[73607]: pgmap v50: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Oct 02 11:33:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:12 compute-0 ceph-mon[73607]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:33:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:33:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:33:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:33:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:33:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:33:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:33:14 compute-0 ceph-mon[73607]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:15 compute-0 ceph-mon[73607]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:18 compute-0 ceph-mon[73607]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:33:20 compute-0 ceph-mon[73607]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:33:21 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:33:21 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:33:21 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:33:21 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Oct 02 11:33:21 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 11:33:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:33:21 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:33:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:33:21 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:33:21 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct 02 11:33:21 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Oct 02 11:33:22 compute-0 ceph-mon[73607]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 11:33:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:33:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:33:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:22 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf
Oct 02 11:33:22 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf
Oct 02 11:33:23 compute-0 ceph-mon[73607]: Updating compute-2:/etc/ceph/ceph.conf
Oct 02 11:33:23 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 02 11:33:23 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 02 11:33:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:33:24 compute-0 ceph-mon[73607]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:24 compute-0 ceph-mon[73607]: Updating compute-2:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf
Oct 02 11:33:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:25 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.client.admin.keyring
Oct 02 11:33:25 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.client.admin.keyring
Oct 02 11:33:25 compute-0 ceph-mon[73607]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 02 11:33:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:33:26 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:33:26 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:33:26 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:26 compute-0 ceph-mgr[73901]: [progress INFO root] update: starting ev 4112ec34-8fec-4886-9061-53691a6b7dc9 (Updating mon deployment (+2 -> 3))
Oct 02 11:33:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Oct 02 11:33:26 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 02 11:33:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Oct 02 11:33:26 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 02 11:33:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:33:26 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:33:26 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Oct 02 11:33:26 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Oct 02 11:33:26 compute-0 ceph-mon[73607]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:26 compute-0 ceph-mon[73607]: Updating compute-2:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.client.admin.keyring
Oct 02 11:33:26 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:26 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:26 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:26 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 02 11:33:26 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 02 11:33:26 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:33:27 compute-0 sudo[86493]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqqixbmgfrfovzjflffowlzpoplxjoeh ; /usr/bin/python3'
Oct 02 11:33:27 compute-0 sudo[86493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:33:27 compute-0 python3[86495]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:33:27 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Oct 02 11:33:27 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 02 11:33:27 compute-0 podman[86497]: 2025-10-02 11:33:27.224103993 +0000 UTC m=+0.053407182 container create e412678c68ce37d6aaaece11eacf3c37b9343e603777ba1ac8b9210847b24e6b (image=quay.io/ceph/ceph:v18, name=mystifying_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:33:27 compute-0 systemd[1]: Started libpod-conmon-e412678c68ce37d6aaaece11eacf3c37b9343e603777ba1ac8b9210847b24e6b.scope.
Oct 02 11:33:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d54791f7ecfb4dd1be8d0ae9b0619421ff2978ecd8b46720d99600e50ad829c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d54791f7ecfb4dd1be8d0ae9b0619421ff2978ecd8b46720d99600e50ad829c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d54791f7ecfb4dd1be8d0ae9b0619421ff2978ecd8b46720d99600e50ad829c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:27 compute-0 podman[86497]: 2025-10-02 11:33:27.201111492 +0000 UTC m=+0.030414731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:33:27 compute-0 podman[86497]: 2025-10-02 11:33:27.307433941 +0000 UTC m=+0.136737160 container init e412678c68ce37d6aaaece11eacf3c37b9343e603777ba1ac8b9210847b24e6b (image=quay.io/ceph/ceph:v18, name=mystifying_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:33:27 compute-0 podman[86497]: 2025-10-02 11:33:27.317188414 +0000 UTC m=+0.146491603 container start e412678c68ce37d6aaaece11eacf3c37b9343e603777ba1ac8b9210847b24e6b (image=quay.io/ceph/ceph:v18, name=mystifying_sanderson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 11:33:27 compute-0 podman[86497]: 2025-10-02 11:33:27.320684999 +0000 UTC m=+0.149988208 container attach e412678c68ce37d6aaaece11eacf3c37b9343e603777ba1ac8b9210847b24e6b (image=quay.io/ceph/ceph:v18, name=mystifying_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 11:33:27 compute-0 ceph-mon[73607]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:27 compute-0 ceph-mon[73607]: Deploying daemon mon.compute-2 on compute-2
Oct 02 11:33:27 compute-0 ceph-mon[73607]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Oct 02 11:33:27 compute-0 ceph-mon[73607]: Cluster is now healthy
Oct 02 11:33:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 02 11:33:27 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/597523062' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 11:33:27 compute-0 mystifying_sanderson[86514]: 
Oct 02 11:33:27 compute-0 mystifying_sanderson[86514]: {"fsid":"fd4c5763-22d1-50ea-ad0b-96a3dc3040b2","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":158,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":14,"num_osds":2,"num_up_osds":2,"osd_up_since":1759404784,"num_in_osds":2,"osd_in_since":1759404765,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":475222016,"bytes_avail":14548774912,"bytes_total":15023996928},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-02T11:32:44.828316+0000","services":{}},"progress_events":{}}
Oct 02 11:33:27 compute-0 systemd[1]: libpod-e412678c68ce37d6aaaece11eacf3c37b9343e603777ba1ac8b9210847b24e6b.scope: Deactivated successfully.
Oct 02 11:33:27 compute-0 podman[86497]: 2025-10-02 11:33:27.961752086 +0000 UTC m=+0.791055285 container died e412678c68ce37d6aaaece11eacf3c37b9343e603777ba1ac8b9210847b24e6b (image=quay.io/ceph/ceph:v18, name=mystifying_sanderson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 11:33:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d54791f7ecfb4dd1be8d0ae9b0619421ff2978ecd8b46720d99600e50ad829c-merged.mount: Deactivated successfully.
Oct 02 11:33:28 compute-0 podman[86497]: 2025-10-02 11:33:28.018774264 +0000 UTC m=+0.848077493 container remove e412678c68ce37d6aaaece11eacf3c37b9343e603777ba1ac8b9210847b24e6b (image=quay.io/ceph/ceph:v18, name=mystifying_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 11:33:28 compute-0 systemd[1]: libpod-conmon-e412678c68ce37d6aaaece11eacf3c37b9343e603777ba1ac8b9210847b24e6b.scope: Deactivated successfully.
Oct 02 11:33:28 compute-0 sudo[86493]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:28 compute-0 sudo[86575]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnmdweryufdsfjwjlxvhmybsjapgloye ; /usr/bin/python3'
Oct 02 11:33:28 compute-0 sudo[86575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:33:28 compute-0 python3[86577]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:33:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/597523062' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 11:33:28 compute-0 podman[86578]: 2025-10-02 11:33:28.549625288 +0000 UTC m=+0.040473993 container create a033336a624f4bb432ff34b3cfa6c53830be49afc24c2a1196e6c86b978f74ad (image=quay.io/ceph/ceph:v18, name=priceless_carver, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:33:28 compute-0 systemd[1]: Started libpod-conmon-a033336a624f4bb432ff34b3cfa6c53830be49afc24c2a1196e6c86b978f74ad.scope.
Oct 02 11:33:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:33:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6925ddaad18bdd4f47429370ad2384428be240a7436440be4a8b3acae5522de4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6925ddaad18bdd4f47429370ad2384428be240a7436440be4a8b3acae5522de4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:28 compute-0 podman[86578]: 2025-10-02 11:33:28.529951027 +0000 UTC m=+0.020799752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:33:28 compute-0 podman[86578]: 2025-10-02 11:33:28.626846531 +0000 UTC m=+0.117695256 container init a033336a624f4bb432ff34b3cfa6c53830be49afc24c2a1196e6c86b978f74ad (image=quay.io/ceph/ceph:v18, name=priceless_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 11:33:28 compute-0 podman[86578]: 2025-10-02 11:33:28.632410911 +0000 UTC m=+0.123259616 container start a033336a624f4bb432ff34b3cfa6c53830be49afc24c2a1196e6c86b978f74ad (image=quay.io/ceph/ceph:v18, name=priceless_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 11:33:28 compute-0 podman[86578]: 2025-10-02 11:33:28.641744903 +0000 UTC m=+0.132593708 container attach a033336a624f4bb432ff34b3cfa6c53830be49afc24c2a1196e6c86b978f74ad (image=quay.io/ceph/ceph:v18, name=priceless_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 11:33:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 02 11:33:29 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4219378585' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:33:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:33:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:33:29 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:33:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Oct 02 11:33:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Oct 02 11:33:29 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 02 11:33:29 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Oct 02 11:33:29 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 02 11:33:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Oct 02 11:33:29 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 02 11:33:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:33:29 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:33:29 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Oct 02 11:33:29 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Oct 02 11:33:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Oct 02 11:33:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Oct 02 11:33:29 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/233023486; not ready for session (expect reconnect)
Oct 02 11:33:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Oct 02 11:33:29 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:33:29 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Oct 02 11:33:29 compute-0 ceph-mon[73607]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 02 11:33:29 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4219378585' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:33:29 compute-0 ceph-mon[73607]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 02 11:33:29 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 11:33:29 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Oct 02 11:33:29 compute-0 ceph-mon[73607]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Oct 02 11:33:29 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 11:33:29 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Oct 02 11:33:29 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:33:29 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct 02 11:33:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:30 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/233023486; not ready for session (expect reconnect)
Oct 02 11:33:30 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Oct 02 11:33:30 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:33:30 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct 02 11:33:31 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:33:31 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 02 11:33:31 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 02 11:33:31 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/233023486; not ready for session (expect reconnect)
Oct 02 11:33:31 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Oct 02 11:33:31 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:33:31 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct 02 11:33:31 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 02 11:33:31 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2594590692; not ready for session (expect reconnect)
Oct 02 11:33:31 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Oct 02 11:33:31 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:33:31 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct 02 11:33:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:32 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/233023486; not ready for session (expect reconnect)
Oct 02 11:33:32 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Oct 02 11:33:32 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:33:32 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct 02 11:33:32 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2594590692; not ready for session (expect reconnect)
Oct 02 11:33:32 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Oct 02 11:33:32 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:33:32 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct 02 11:33:33 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/233023486; not ready for session (expect reconnect)
Oct 02 11:33:33 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Oct 02 11:33:33 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:33:33 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct 02 11:33:33 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 02 11:33:33 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2594590692; not ready for session (expect reconnect)
Oct 02 11:33:33 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Oct 02 11:33:33 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:33:33 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct 02 11:33:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:34 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/233023486; not ready for session (expect reconnect)
Oct 02 11:33:34 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Oct 02 11:33:34 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:33:34 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct 02 11:33:34 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2594590692; not ready for session (expect reconnect)
Oct 02 11:33:34 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Oct 02 11:33:34 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:33:34 compute-0 ceph-mon[73607]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Oct 02 11:33:34 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct 02 11:33:34 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 11:33:34 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Oct 02 11:33:34 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : monmap e2: 2 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 02 11:33:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 11:33:34 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : fsmap 
Oct 02 11:33:34 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Oct 02 11:33:34 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.fmcstn(active, since 112s)
Oct 02 11:33:34 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 02 11:33:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Oct 02 11:33:34 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:33:34 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4219378585' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 11:33:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Oct 02 11:33:34 compute-0 priceless_carver[86593]: pool 'vms' created
Oct 02 11:33:34 compute-0 ceph-mon[73607]: Deploying daemon mon.compute-1 on compute-1
Oct 02 11:33:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4219378585' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:33:34 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 11:33:34 compute-0 ceph-mon[73607]: mon.compute-0 calling monitor election
Oct 02 11:33:34 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:33:34 compute-0 ceph-mon[73607]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:34 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:33:34 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:33:34 compute-0 ceph-mon[73607]: mon.compute-2 calling monitor election
Oct 02 11:33:34 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:33:34 compute-0 ceph-mon[73607]: pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:34 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:33:34 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:33:34 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:33:34 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:33:34 compute-0 ceph-mon[73607]: pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:34 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:33:34 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:33:34 compute-0 ceph-mon[73607]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Oct 02 11:33:34 compute-0 ceph-mon[73607]: monmap e2: 2 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 02 11:33:34 compute-0 ceph-mon[73607]: fsmap 
Oct 02 11:33:34 compute-0 ceph-mon[73607]: osdmap e14: 2 total, 2 up, 2 in
Oct 02 11:33:34 compute-0 ceph-mon[73607]: mgrmap e9: compute-0.fmcstn(active, since 112s)
Oct 02 11:33:34 compute-0 ceph-mon[73607]: overall HEALTH_OK
Oct 02 11:33:34 compute-0 systemd[1]: libpod-a033336a624f4bb432ff34b3cfa6c53830be49afc24c2a1196e6c86b978f74ad.scope: Deactivated successfully.
Oct 02 11:33:34 compute-0 podman[86578]: 2025-10-02 11:33:34.793368523 +0000 UTC m=+6.284217258 container died a033336a624f4bb432ff34b3cfa6c53830be49afc24c2a1196e6c86b978f74ad (image=quay.io/ceph/ceph:v18, name=priceless_carver, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:33:34 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Oct 02 11:33:34 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 15 pg[2.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-6925ddaad18bdd4f47429370ad2384428be240a7436440be4a8b3acae5522de4-merged.mount: Deactivated successfully.
Oct 02 11:33:34 compute-0 podman[86578]: 2025-10-02 11:33:34.869933029 +0000 UTC m=+6.360781724 container remove a033336a624f4bb432ff34b3cfa6c53830be49afc24c2a1196e6c86b978f74ad (image=quay.io/ceph/ceph:v18, name=priceless_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 11:33:34 compute-0 systemd[1]: libpod-conmon-a033336a624f4bb432ff34b3cfa6c53830be49afc24c2a1196e6c86b978f74ad.scope: Deactivated successfully.
Oct 02 11:33:34 compute-0 sudo[86575]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:34 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 02 11:33:34 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:34 compute-0 ceph-mgr[73901]: [progress INFO root] complete: finished ev 4112ec34-8fec-4886-9061-53691a6b7dc9 (Updating mon deployment (+2 -> 3))
Oct 02 11:33:34 compute-0 ceph-mgr[73901]: [progress INFO root] Completed event 4112ec34-8fec-4886-9061-53691a6b7dc9 (Updating mon deployment (+2 -> 3)) in 9 seconds
Oct 02 11:33:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 02 11:33:35 compute-0 sudo[86659]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psgucjufvvxpqhdsfzwxrwfnfdqhrrza ; /usr/bin/python3'
Oct 02 11:33:35 compute-0 sudo[86659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:33:35 compute-0 python3[86661]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:33:35 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:35 compute-0 ceph-mgr[73901]: [progress INFO root] update: starting ev ee4c6fb3-efee-4707-911b-1957f688c295 (Updating mgr deployment (+2 -> 3))
Oct 02 11:33:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.rbjjpf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct 02 11:33:35 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.rbjjpf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 02 11:33:35 compute-0 podman[86662]: 2025-10-02 11:33:35.255998545 +0000 UTC m=+0.048608562 container create 6a8bfe54643076da449c7bf4f65b0ada5b69da385aa16312e0e98f7f9bbf08f3 (image=quay.io/ceph/ceph:v18, name=quizzical_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:33:35 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/233023486; not ready for session (expect reconnect)
Oct 02 11:33:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Oct 02 11:33:35 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:33:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 02 11:33:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Oct 02 11:33:35 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2594590692; not ready for session (expect reconnect)
Oct 02 11:33:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Oct 02 11:33:35 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:33:35 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct 02 11:33:35 compute-0 systemd[1]: Started libpod-conmon-6a8bfe54643076da449c7bf4f65b0ada5b69da385aa16312e0e98f7f9bbf08f3.scope.
Oct 02 11:33:35 compute-0 podman[86662]: 2025-10-02 11:33:35.227883576 +0000 UTC m=+0.020493613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:33:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:33:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad19d4e557590caf280720eb7d14a10ddd0effbd3d2b18a15046ad0da7fcfce7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad19d4e557590caf280720eb7d14a10ddd0effbd3d2b18a15046ad0da7fcfce7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:35 compute-0 podman[86662]: 2025-10-02 11:33:35.34849909 +0000 UTC m=+0.141109137 container init 6a8bfe54643076da449c7bf4f65b0ada5b69da385aa16312e0e98f7f9bbf08f3 (image=quay.io/ceph/ceph:v18, name=quizzical_dubinsky, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 11:33:35 compute-0 podman[86662]: 2025-10-02 11:33:35.354990816 +0000 UTC m=+0.147600833 container start 6a8bfe54643076da449c7bf4f65b0ada5b69da385aa16312e0e98f7f9bbf08f3 (image=quay.io/ceph/ceph:v18, name=quizzical_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 11:33:35 compute-0 podman[86662]: 2025-10-02 11:33:35.359358764 +0000 UTC m=+0.151968811 container attach 6a8bfe54643076da449c7bf4f65b0ada5b69da385aa16312e0e98f7f9bbf08f3 (image=quay.io/ceph/ceph:v18, name=quizzical_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 11:33:35 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.rbjjpf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 02 11:33:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 11:33:35 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 11:33:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:33:35 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:33:35 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.rbjjpf on compute-2
Oct 02 11:33:35 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.rbjjpf on compute-2
Oct 02 11:33:35 compute-0 ceph-mon[73607]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 02 11:33:35 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 11:33:35 compute-0 ceph-mon[73607]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Oct 02 11:33:35 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:33:35 compute-0 ceph-mon[73607]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Oct 02 11:33:35 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:33:35 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct 02 11:33:35 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Oct 02 11:33:35 compute-0 ceph-mon[73607]: paxos.0).electionLogic(10) init, last seen epoch 10
Oct 02 11:33:35 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 11:33:35 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 02 11:33:35 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 02 11:33:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v65: 2 pgs: 1 creating+peering, 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:36 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 02 11:33:36 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2594590692; not ready for session (expect reconnect)
Oct 02 11:33:36 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Oct 02 11:33:36 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:33:36 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct 02 11:33:37 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 02 11:33:37 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2594590692; not ready for session (expect reconnect)
Oct 02 11:33:37 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Oct 02 11:33:37 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:33:37 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct 02 11:33:37 compute-0 ceph-mgr[73901]: [progress INFO root] Writing back 3 completed events
Oct 02 11:33:37 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 11:33:38 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 02 11:33:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v66: 2 pgs: 1 creating+peering, 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:38 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 02 11:33:38 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2594590692; not ready for session (expect reconnect)
Oct 02 11:33:38 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Oct 02 11:33:38 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:33:38 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct 02 11:33:38 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 02 11:33:38 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 02 11:33:38 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 02 11:33:39 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 02 11:33:39 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:33:39 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2594590692; not ready for session (expect reconnect)
Oct 02 11:33:39 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Oct 02 11:33:39 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:33:39 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct 02 11:33:39 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 02 11:33:40 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 02 11:33:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v67: 2 pgs: 1 creating+peering, 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:40 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2594590692; not ready for session (expect reconnect)
Oct 02 11:33:40 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Oct 02 11:33:40 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:33:40 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct 02 11:33:40 compute-0 ceph-mon[73607]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Oct 02 11:33:40 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 11:33:40 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct 02 11:33:40 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 02 11:33:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 11:33:40 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : fsmap 
Oct 02 11:33:40 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Oct 02 11:33:40 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.fmcstn(active, since 118s)
Oct 02 11:33:40 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 02 11:33:40 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:40 compute-0 ceph-mgr[73901]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Oct 02 11:33:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Oct 02 11:33:40 compute-0 ceph-mon[73607]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 11:33:40 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:33:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Oct 02 11:33:41 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 11:33:41 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:33:41 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:33:41 compute-0 ceph-mon[73607]: mon.compute-0 calling monitor election
Oct 02 11:33:41 compute-0 ceph-mon[73607]: mon.compute-2 calling monitor election
Oct 02 11:33:41 compute-0 ceph-mon[73607]: pgmap v65: 2 pgs: 1 creating+peering, 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:41 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:33:41 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:33:41 compute-0 ceph-mon[73607]: mon.compute-1 calling monitor election
Oct 02 11:33:41 compute-0 ceph-mon[73607]: pgmap v66: 2 pgs: 1 creating+peering, 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:41 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:33:41 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:33:41 compute-0 ceph-mon[73607]: pgmap v67: 2 pgs: 1 creating+peering, 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:41 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:33:41 compute-0 ceph-mon[73607]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct 02 11:33:41 compute-0 ceph-mon[73607]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 02 11:33:41 compute-0 ceph-mon[73607]: fsmap 
Oct 02 11:33:41 compute-0 ceph-mon[73607]: osdmap e15: 2 total, 2 up, 2 in
Oct 02 11:33:41 compute-0 ceph-mon[73607]: mgrmap e9: compute-0.fmcstn(active, since 118s)
Oct 02 11:33:41 compute-0 ceph-mon[73607]: overall HEALTH_OK
Oct 02 11:33:41 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:41 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Oct 02 11:33:41 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 16 pg[2.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:41 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 02 11:33:41 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2594590692; not ready for session (expect reconnect)
Oct 02 11:33:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Oct 02 11:33:41 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:33:41 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.ypnrbl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct 02 11:33:41 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.ypnrbl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 02 11:33:41 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.ypnrbl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 02 11:33:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 11:33:41 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 11:33:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:33:41 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:33:41 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.ypnrbl on compute-1
Oct 02 11:33:41 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.ypnrbl on compute-1
Oct 02 11:33:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 02 11:33:41 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1347565839' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:33:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Oct 02 11:33:42 compute-0 ceph-mon[73607]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 11:33:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:42 compute-0 ceph-mon[73607]: osdmap e16: 2 total, 2 up, 2 in
Oct 02 11:33:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:33:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.ypnrbl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 02 11:33:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.ypnrbl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 02 11:33:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 11:33:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:33:42 compute-0 ceph-mon[73607]: Deploying daemon mgr.compute-1.ypnrbl on compute-1
Oct 02 11:33:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1347565839' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:33:42 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1347565839' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 11:33:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Oct 02 11:33:42 compute-0 quizzical_dubinsky[86679]: pool 'volumes' created
Oct 02 11:33:42 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Oct 02 11:33:42 compute-0 systemd[1]: libpod-6a8bfe54643076da449c7bf4f65b0ada5b69da385aa16312e0e98f7f9bbf08f3.scope: Deactivated successfully.
Oct 02 11:33:42 compute-0 conmon[86679]: conmon 6a8bfe54643076da449c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6a8bfe54643076da449c7bf4f65b0ada5b69da385aa16312e0e98f7f9bbf08f3.scope/container/memory.events
Oct 02 11:33:42 compute-0 podman[86706]: 2025-10-02 11:33:42.199794888 +0000 UTC m=+0.028875590 container died 6a8bfe54643076da449c7bf4f65b0ada5b69da385aa16312e0e98f7f9bbf08f3 (image=quay.io/ceph/ceph:v18, name=quizzical_dubinsky, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 11:33:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad19d4e557590caf280720eb7d14a10ddd0effbd3d2b18a15046ad0da7fcfce7-merged.mount: Deactivated successfully.
Oct 02 11:33:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v70: 3 pgs: 1 unknown, 1 creating+peering, 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:42 compute-0 podman[86706]: 2025-10-02 11:33:42.242540821 +0000 UTC m=+0.071621503 container remove 6a8bfe54643076da449c7bf4f65b0ada5b69da385aa16312e0e98f7f9bbf08f3 (image=quay.io/ceph/ceph:v18, name=quizzical_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:33:42 compute-0 systemd[1]: libpod-conmon-6a8bfe54643076da449c7bf4f65b0ada5b69da385aa16312e0e98f7f9bbf08f3.scope: Deactivated successfully.
Oct 02 11:33:42 compute-0 sudo[86659]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_11:33:42
Oct 02 11:33:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:33:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Some PGs (0.333333) are unknown; try again later
Oct 02 11:33:42 compute-0 ceph-mgr[73901]: mgr.server handle_report got status from non-daemon mon.compute-1
Oct 02 11:33:42 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T11:33:42.295+0000 7f4c74221640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Oct 02 11:33:42 compute-0 sudo[86743]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkuvidyyafayrrnraafcmtwrxhplqzxg ; /usr/bin/python3'
Oct 02 11:33:42 compute-0 sudo[86743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:33:42 compute-0 python3[86745]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:33:42 compute-0 podman[86746]: 2025-10-02 11:33:42.613265674 +0000 UTC m=+0.041960223 container create 9cd25b3378c1fb330bb98089105938f146f4027a48f1bfde6a890a5af55180d9 (image=quay.io/ceph/ceph:v18, name=sleepy_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 11:33:42 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:33:42 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Oct 02 11:33:42 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 02 11:33:42 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Oct 02 11:33:42 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 02 11:33:42 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Oct 02 11:33:42 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 02 11:33:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Oct 02 11:33:42 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:33:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:33:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:33:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:33:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:33:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:33:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:33:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:33:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:33:42 compute-0 systemd[1]: Started libpod-conmon-9cd25b3378c1fb330bb98089105938f146f4027a48f1bfde6a890a5af55180d9.scope.
Oct 02 11:33:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:33:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707b8790a282398d02a39b99a1a160fb09de3f53ab140570349145ce1aa12759/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707b8790a282398d02a39b99a1a160fb09de3f53ab140570349145ce1aa12759/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:42 compute-0 podman[86746]: 2025-10-02 11:33:42.683137659 +0000 UTC m=+0.111832238 container init 9cd25b3378c1fb330bb98089105938f146f4027a48f1bfde6a890a5af55180d9 (image=quay.io/ceph/ceph:v18, name=sleepy_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Oct 02 11:33:42 compute-0 podman[86746]: 2025-10-02 11:33:42.688187056 +0000 UTC m=+0.116881615 container start 9cd25b3378c1fb330bb98089105938f146f4027a48f1bfde6a890a5af55180d9 (image=quay.io/ceph/ceph:v18, name=sleepy_benz, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Oct 02 11:33:42 compute-0 podman[86746]: 2025-10-02 11:33:42.594302742 +0000 UTC m=+0.022997321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:33:42 compute-0 podman[86746]: 2025-10-02 11:33:42.691665629 +0000 UTC m=+0.120360218 container attach 9cd25b3378c1fb330bb98089105938f146f4027a48f1bfde6a890a5af55180d9 (image=quay.io/ceph/ceph:v18, name=sleepy_benz, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:33:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:33:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Oct 02 11:33:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:33:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 02 11:33:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2188088816' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:33:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:33:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e18 e18: 2 total, 2 up, 2 in
Oct 02 11:33:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1347565839' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 11:33:43 compute-0 ceph-mon[73607]: osdmap e17: 2 total, 2 up, 2 in
Oct 02 11:33:43 compute-0 ceph-mon[73607]: pgmap v70: 3 pgs: 1 unknown, 1 creating+peering, 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:43 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:33:43 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Oct 02 11:33:43 compute-0 ceph-mgr[73901]: [progress INFO root] update: starting ev 148e66b7-0ec4-48ce-9ead-85685104ec30 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct 02 11:33:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Oct 02 11:33:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:33:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 02 11:33:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:44 compute-0 ceph-mgr[73901]: [progress INFO root] complete: finished ev ee4c6fb3-efee-4707-911b-1957f688c295 (Updating mgr deployment (+2 -> 3))
Oct 02 11:33:44 compute-0 ceph-mgr[73901]: [progress INFO root] Completed event ee4c6fb3-efee-4707-911b-1957f688c295 (Updating mgr deployment (+2 -> 3)) in 9 seconds
Oct 02 11:33:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 02 11:33:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:44 compute-0 ceph-mgr[73901]: [progress INFO root] update: starting ev 45b20e6c-3785-4dd0-ab8e-c1095d2de45c (Updating crash deployment (+1 -> 3))
Oct 02 11:33:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Oct 02 11:33:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 02 11:33:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v72: 3 pgs: 1 creating+peering, 2 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:33:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Oct 02 11:33:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 02 11:33:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:33:44 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:33:44 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Oct 02 11:33:44 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Oct 02 11:33:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2188088816' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 11:33:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:33:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e19 e19: 2 total, 2 up, 2 in
Oct 02 11:33:44 compute-0 sleepy_benz[86762]: pool 'backups' created
Oct 02 11:33:44 compute-0 systemd[1]: libpod-9cd25b3378c1fb330bb98089105938f146f4027a48f1bfde6a890a5af55180d9.scope: Deactivated successfully.
Oct 02 11:33:44 compute-0 podman[86746]: 2025-10-02 11:33:44.770443528 +0000 UTC m=+2.199138097 container died 9cd25b3378c1fb330bb98089105938f146f4027a48f1bfde6a890a5af55180d9 (image=quay.io/ceph/ceph:v18, name=sleepy_benz, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:33:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:44 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2188088816' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:33:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:33:44 compute-0 ceph-mon[73607]: osdmap e18: 2 total, 2 up, 2 in
Oct 02 11:33:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:33:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 02 11:33:44 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e19: 2 total, 2 up, 2 in
Oct 02 11:33:44 compute-0 ceph-mgr[73901]: [progress INFO root] update: starting ev cfd2e0df-8b21-47f8-ad77-e0c03fe2e55b (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct 02 11:33:44 compute-0 ceph-mgr[73901]: [progress INFO root] complete: finished ev 148e66b7-0ec4-48ce-9ead-85685104ec30 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct 02 11:33:44 compute-0 ceph-mgr[73901]: [progress INFO root] Completed event 148e66b7-0ec4-48ce-9ead-85685104ec30 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 1 seconds
Oct 02 11:33:44 compute-0 ceph-mgr[73901]: [progress INFO root] complete: finished ev cfd2e0df-8b21-47f8-ad77-e0c03fe2e55b (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct 02 11:33:44 compute-0 ceph-mgr[73901]: [progress INFO root] Completed event cfd2e0df-8b21-47f8-ad77-e0c03fe2e55b (PG autoscaler increasing pool 3 PGs from 1 to 32) in 0 seconds
Oct 02 11:33:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-707b8790a282398d02a39b99a1a160fb09de3f53ab140570349145ce1aa12759-merged.mount: Deactivated successfully.
Oct 02 11:33:45 compute-0 ceph-mgr[73901]: [progress INFO root] Writing back 6 completed events
Oct 02 11:33:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Oct 02 11:33:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 11:33:46 compute-0 podman[86746]: 2025-10-02 11:33:46.027151565 +0000 UTC m=+3.455846134 container remove 9cd25b3378c1fb330bb98089105938f146f4027a48f1bfde6a890a5af55180d9 (image=quay.io/ceph/ceph:v18, name=sleepy_benz, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:33:46 compute-0 systemd[1]: libpod-conmon-9cd25b3378c1fb330bb98089105938f146f4027a48f1bfde6a890a5af55180d9.scope: Deactivated successfully.
Oct 02 11:33:46 compute-0 sudo[86743]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:46 compute-0 ceph-mon[73607]: pgmap v72: 3 pgs: 1 creating+peering, 2 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 02 11:33:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:33:46 compute-0 ceph-mon[73607]: Deploying daemon crash.compute-2 on compute-2
Oct 02 11:33:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2188088816' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 11:33:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:33:46 compute-0 ceph-mon[73607]: osdmap e19: 2 total, 2 up, 2 in
Oct 02 11:33:46 compute-0 sudo[86824]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwasvayvdtasiskjainnijhppgchgsfo ; /usr/bin/python3'
Oct 02 11:33:46 compute-0 sudo[86824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:33:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v74: 4 pgs: 1 unknown, 3 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 02 11:33:46 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:33:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 02 11:33:46 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:33:46 compute-0 python3[86826]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:33:46 compute-0 podman[86827]: 2025-10-02 11:33:46.373731816 +0000 UTC m=+0.046775333 container create b15db4928251c5163075ee2f8b96993f9862de700ae3082b110ecdc0b8cdf9f1 (image=quay.io/ceph/ceph:v18, name=cranky_mirzakhani, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:33:46 compute-0 systemd[1]: Started libpod-conmon-b15db4928251c5163075ee2f8b96993f9862de700ae3082b110ecdc0b8cdf9f1.scope.
Oct 02 11:33:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:33:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a84d06621d6e31ceabf7d69f9adeabde90ee7dd9cb066455677de75f4a075e2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a84d06621d6e31ceabf7d69f9adeabde90ee7dd9cb066455677de75f4a075e2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:46 compute-0 podman[86827]: 2025-10-02 11:33:46.43872405 +0000 UTC m=+0.111767577 container init b15db4928251c5163075ee2f8b96993f9862de700ae3082b110ecdc0b8cdf9f1 (image=quay.io/ceph/ceph:v18, name=cranky_mirzakhani, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:33:46 compute-0 podman[86827]: 2025-10-02 11:33:46.44500445 +0000 UTC m=+0.118047967 container start b15db4928251c5163075ee2f8b96993f9862de700ae3082b110ecdc0b8cdf9f1 (image=quay.io/ceph/ceph:v18, name=cranky_mirzakhani, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:33:46 compute-0 podman[86827]: 2025-10-02 11:33:46.351227739 +0000 UTC m=+0.024271276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:33:46 compute-0 podman[86827]: 2025-10-02 11:33:46.448619897 +0000 UTC m=+0.121663434 container attach b15db4928251c5163075ee2f8b96993f9862de700ae3082b110ecdc0b8cdf9f1 (image=quay.io/ceph/ceph:v18, name=cranky_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 11:33:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e20 e20: 2 total, 2 up, 2 in
Oct 02 11:33:46 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:46 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e20: 2 total, 2 up, 2 in
Oct 02 11:33:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 02 11:33:47 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4165571221' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:33:47 compute-0 ceph-mon[73607]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 11:33:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Oct 02 11:33:47 compute-0 ceph-mon[73607]: pgmap v74: 4 pgs: 1 unknown, 3 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:47 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:33:47 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:33:47 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:47 compute-0 ceph-mon[73607]: osdmap e20: 2 total, 2 up, 2 in
Oct 02 11:33:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4165571221' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:33:48 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:33:48 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:33:48 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4165571221' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 11:33:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e21 e21: 2 total, 2 up, 2 in
Oct 02 11:33:48 compute-0 cranky_mirzakhani[86842]: pool 'images' created
Oct 02 11:33:48 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e21: 2 total, 2 up, 2 in
Oct 02 11:33:48 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 21 pg[2.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=21 pruub=8.966326714s) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active pruub 58.510486603s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:33:48 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 21 pg[2.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=21 pruub=8.966326714s) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown pruub 58.510486603s@ mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:48 compute-0 systemd[1]: libpod-b15db4928251c5163075ee2f8b96993f9862de700ae3082b110ecdc0b8cdf9f1.scope: Deactivated successfully.
Oct 02 11:33:48 compute-0 podman[86827]: 2025-10-02 11:33:48.180890055 +0000 UTC m=+1.853933582 container died b15db4928251c5163075ee2f8b96993f9862de700ae3082b110ecdc0b8cdf9f1 (image=quay.io/ceph/ceph:v18, name=cranky_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 11:33:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a84d06621d6e31ceabf7d69f9adeabde90ee7dd9cb066455677de75f4a075e2-merged.mount: Deactivated successfully.
Oct 02 11:33:48 compute-0 podman[86827]: 2025-10-02 11:33:48.229544518 +0000 UTC m=+1.902588035 container remove b15db4928251c5163075ee2f8b96993f9862de700ae3082b110ecdc0b8cdf9f1 (image=quay.io/ceph/ceph:v18, name=cranky_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 11:33:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v77: 67 pgs: 64 unknown, 3 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:48 compute-0 systemd[75232]: Starting Mark boot as successful...
Oct 02 11:33:48 compute-0 systemd[75232]: Finished Mark boot as successful.
Oct 02 11:33:48 compute-0 sudo[86824]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:48 compute-0 systemd[1]: libpod-conmon-b15db4928251c5163075ee2f8b96993f9862de700ae3082b110ecdc0b8cdf9f1.scope: Deactivated successfully.
Oct 02 11:33:48 compute-0 sudo[86906]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tibxjjgrjsclinkbnxdijzjiagasyyke ; /usr/bin/python3'
Oct 02 11:33:48 compute-0 sudo[86906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:33:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:33:48 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:33:48 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 02 11:33:48 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:48 compute-0 ceph-mgr[73901]: [progress INFO root] complete: finished ev 45b20e6c-3785-4dd0-ab8e-c1095d2de45c (Updating crash deployment (+1 -> 3))
Oct 02 11:33:48 compute-0 ceph-mgr[73901]: [progress INFO root] Completed event 45b20e6c-3785-4dd0-ab8e-c1095d2de45c (Updating crash deployment (+1 -> 3)) in 4 seconds
Oct 02 11:33:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 02 11:33:48 compute-0 python3[86908]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:33:48 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:33:48 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:33:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:33:48 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:33:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:33:48 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:33:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:33:48 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:33:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:33:48 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:33:48 compute-0 podman[86909]: 2025-10-02 11:33:48.579231047 +0000 UTC m=+0.044904663 container create c2c40a9029f6044c7c68cfb659aad5dab7c883385b7a7ee23fc81354a39ba4e4 (image=quay.io/ceph/ceph:v18, name=busy_chebyshev, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:33:48 compute-0 sudo[86915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:33:48 compute-0 sudo[86915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:33:48 compute-0 sudo[86915]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:48 compute-0 systemd[1]: Started libpod-conmon-c2c40a9029f6044c7c68cfb659aad5dab7c883385b7a7ee23fc81354a39ba4e4.scope.
Oct 02 11:33:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:33:48 compute-0 sudo[86949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:33:48 compute-0 podman[86909]: 2025-10-02 11:33:48.557597352 +0000 UTC m=+0.023270988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:33:48 compute-0 sudo[86949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:33:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31a964bb962d4d14625277880be658cb1c05e4398115eb0112c6fa387ff48ed3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31a964bb962d4d14625277880be658cb1c05e4398115eb0112c6fa387ff48ed3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:48 compute-0 sudo[86949]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:48 compute-0 podman[86909]: 2025-10-02 11:33:48.667044882 +0000 UTC m=+0.132718518 container init c2c40a9029f6044c7c68cfb659aad5dab7c883385b7a7ee23fc81354a39ba4e4 (image=quay.io/ceph/ceph:v18, name=busy_chebyshev, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:33:48 compute-0 ceph-mon[73607]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 11:33:48 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:33:48 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:33:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4165571221' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 11:33:48 compute-0 ceph-mon[73607]: osdmap e21: 2 total, 2 up, 2 in
Oct 02 11:33:48 compute-0 ceph-mon[73607]: pgmap v77: 67 pgs: 64 unknown, 3 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:48 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:48 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:48 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:48 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:48 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:33:48 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:33:48 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:33:48 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:33:48 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:33:48 compute-0 podman[86909]: 2025-10-02 11:33:48.675508075 +0000 UTC m=+0.141181711 container start c2c40a9029f6044c7c68cfb659aad5dab7c883385b7a7ee23fc81354a39ba4e4 (image=quay.io/ceph/ceph:v18, name=busy_chebyshev, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:33:48 compute-0 podman[86909]: 2025-10-02 11:33:48.67889102 +0000 UTC m=+0.144564646 container attach c2c40a9029f6044c7c68cfb659aad5dab7c883385b7a7ee23fc81354a39ba4e4 (image=quay.io/ceph/ceph:v18, name=busy_chebyshev, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 11:33:48 compute-0 sudo[86977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:33:48 compute-0 sudo[86977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:33:48 compute-0 sudo[86977]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:48 compute-0 sudo[87003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:33:48 compute-0 sudo[87003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:33:49 compute-0 podman[87083]: 2025-10-02 11:33:49.085873212 +0000 UTC m=+0.045508989 container create b00237de661b7d2c75c39f013cc0361678298466fe80b252178f05e01f7b5a08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hopper, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:33:49 compute-0 systemd[1]: Started libpod-conmon-b00237de661b7d2c75c39f013cc0361678298466fe80b252178f05e01f7b5a08.scope.
Oct 02 11:33:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:33:49 compute-0 podman[87083]: 2025-10-02 11:33:49.155912899 +0000 UTC m=+0.115548676 container init b00237de661b7d2c75c39f013cc0361678298466fe80b252178f05e01f7b5a08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hopper, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 11:33:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Oct 02 11:33:49 compute-0 podman[87083]: 2025-10-02 11:33:49.064459963 +0000 UTC m=+0.024095750 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:33:49 compute-0 podman[87083]: 2025-10-02 11:33:49.16272003 +0000 UTC m=+0.122355817 container start b00237de661b7d2c75c39f013cc0361678298466fe80b252178f05e01f7b5a08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:33:49 compute-0 peaceful_hopper[87099]: 167 167
Oct 02 11:33:49 compute-0 systemd[1]: libpod-b00237de661b7d2c75c39f013cc0361678298466fe80b252178f05e01f7b5a08.scope: Deactivated successfully.
Oct 02 11:33:49 compute-0 podman[87083]: 2025-10-02 11:33:49.167414128 +0000 UTC m=+0.127049895 container attach b00237de661b7d2c75c39f013cc0361678298466fe80b252178f05e01f7b5a08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hopper, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 11:33:49 compute-0 podman[87083]: 2025-10-02 11:33:49.168068255 +0000 UTC m=+0.127704022 container died b00237de661b7d2c75c39f013cc0361678298466fe80b252178f05e01f7b5a08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hopper, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:33:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-29c65ac25a12258ba60196e0987e58adb9ae0dbf7799d617d54a167704118967-merged.mount: Deactivated successfully.
Oct 02 11:33:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e22 e22: 2 total, 2 up, 2 in
Oct 02 11:33:49 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e22: 2 total, 2 up, 2 in
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.1f( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 podman[87083]: 2025-10-02 11:33:49.216384193 +0000 UTC m=+0.176019960 container remove b00237de661b7d2c75c39f013cc0361678298466fe80b252178f05e01f7b5a08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hopper, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.1e( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.b( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.1d( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.a( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.1c( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.9( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.8( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.7( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.6( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.4( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.5( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.1( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.3( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.2( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.d( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.c( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.e( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.f( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.10( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.11( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.12( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.13( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.14( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.15( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.16( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.17( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.18( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.19( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.1b( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.1a( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.1f( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.1e( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.1d( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.a( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.7( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.9( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.8( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.b( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.6( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.4( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.5( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.1( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.0( empty local-lis/les=21/22 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.2( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.3( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.1c( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.c( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.e( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.f( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.12( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.11( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.10( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.13( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.14( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.15( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.d( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.17( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.18( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.1b( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.1a( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.16( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 22 pg[2.19( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [1] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 02 11:33:49 compute-0 systemd[1]: libpod-conmon-b00237de661b7d2c75c39f013cc0361678298466fe80b252178f05e01f7b5a08.scope: Deactivated successfully.
Oct 02 11:33:49 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3857097005' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:33:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e22 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:33:49 compute-0 podman[87129]: 2025-10-02 11:33:49.36298564 +0000 UTC m=+0.040888722 container create fc506c3e80c90aad9ec951ca23afbb2441ced441c9a79322d4dde2ce20ca0001 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_noether, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 11:33:49 compute-0 systemd[1]: Started libpod-conmon-fc506c3e80c90aad9ec951ca23afbb2441ced441c9a79322d4dde2ce20ca0001.scope.
Oct 02 11:33:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec2402e26fb1f22b3e3b65eac568b9ac6224b1d9779866ace429472b08d5af67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec2402e26fb1f22b3e3b65eac568b9ac6224b1d9779866ace429472b08d5af67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec2402e26fb1f22b3e3b65eac568b9ac6224b1d9779866ace429472b08d5af67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec2402e26fb1f22b3e3b65eac568b9ac6224b1d9779866ace429472b08d5af67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec2402e26fb1f22b3e3b65eac568b9ac6224b1d9779866ace429472b08d5af67/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:49 compute-0 podman[87129]: 2025-10-02 11:33:49.439809067 +0000 UTC m=+0.117712149 container init fc506c3e80c90aad9ec951ca23afbb2441ced441c9a79322d4dde2ce20ca0001 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_noether, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 11:33:49 compute-0 podman[87129]: 2025-10-02 11:33:49.345887639 +0000 UTC m=+0.023790741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:33:49 compute-0 podman[87129]: 2025-10-02 11:33:49.450073656 +0000 UTC m=+0.127976738 container start fc506c3e80c90aad9ec951ca23afbb2441ced441c9a79322d4dde2ce20ca0001 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 11:33:49 compute-0 podman[87129]: 2025-10-02 11:33:49.453572873 +0000 UTC m=+0.131475985 container attach fc506c3e80c90aad9ec951ca23afbb2441ced441c9a79322d4dde2ce20ca0001 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 11:33:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Oct 02 11:33:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v79: 67 pgs: 1 creating+peering, 1 peering, 31 unknown, 34 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:50 compute-0 cool_noether[87145]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:33:50 compute-0 cool_noether[87145]: --> relative data size: 1.0
Oct 02 11:33:50 compute-0 cool_noether[87145]: --> All data devices are unavailable
Oct 02 11:33:50 compute-0 ceph-mon[73607]: osdmap e22: 2 total, 2 up, 2 in
Oct 02 11:33:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3857097005' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:33:50 compute-0 systemd[1]: libpod-fc506c3e80c90aad9ec951ca23afbb2441ced441c9a79322d4dde2ce20ca0001.scope: Deactivated successfully.
Oct 02 11:33:50 compute-0 conmon[87145]: conmon fc506c3e80c90aad9ec9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc506c3e80c90aad9ec951ca23afbb2441ced441c9a79322d4dde2ce20ca0001.scope/container/memory.events
Oct 02 11:33:50 compute-0 podman[87129]: 2025-10-02 11:33:50.276621437 +0000 UTC m=+0.954524519 container died fc506c3e80c90aad9ec951ca23afbb2441ced441c9a79322d4dde2ce20ca0001 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 11:33:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec2402e26fb1f22b3e3b65eac568b9ac6224b1d9779866ace429472b08d5af67-merged.mount: Deactivated successfully.
Oct 02 11:33:50 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3857097005' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 11:33:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e23 e23: 2 total, 2 up, 2 in
Oct 02 11:33:50 compute-0 busy_chebyshev[86961]: pool 'cephfs.cephfs.meta' created
Oct 02 11:33:50 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e23: 2 total, 2 up, 2 in
Oct 02 11:33:50 compute-0 podman[87129]: 2025-10-02 11:33:50.350234504 +0000 UTC m=+1.028137586 container remove fc506c3e80c90aad9ec951ca23afbb2441ced441c9a79322d4dde2ce20ca0001 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Oct 02 11:33:50 compute-0 systemd[1]: libpod-c2c40a9029f6044c7c68cfb659aad5dab7c883385b7a7ee23fc81354a39ba4e4.scope: Deactivated successfully.
Oct 02 11:33:50 compute-0 podman[86909]: 2025-10-02 11:33:50.352919812 +0000 UTC m=+1.818593428 container died c2c40a9029f6044c7c68cfb659aad5dab7c883385b7a7ee23fc81354a39ba4e4 (image=quay.io/ceph/ceph:v18, name=busy_chebyshev, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:33:50 compute-0 systemd[1]: libpod-conmon-fc506c3e80c90aad9ec951ca23afbb2441ced441c9a79322d4dde2ce20ca0001.scope: Deactivated successfully.
Oct 02 11:33:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-31a964bb962d4d14625277880be658cb1c05e4398115eb0112c6fa387ff48ed3-merged.mount: Deactivated successfully.
Oct 02 11:33:50 compute-0 sudo[87003]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:50 compute-0 podman[86909]: 2025-10-02 11:33:50.395920065 +0000 UTC m=+1.861593681 container remove c2c40a9029f6044c7c68cfb659aad5dab7c883385b7a7ee23fc81354a39ba4e4 (image=quay.io/ceph/ceph:v18, name=busy_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 11:33:50 compute-0 systemd[1]: libpod-conmon-c2c40a9029f6044c7c68cfb659aad5dab7c883385b7a7ee23fc81354a39ba4e4.scope: Deactivated successfully.
Oct 02 11:33:50 compute-0 sudo[86906]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:50 compute-0 sudo[87186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:33:50 compute-0 sudo[87186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:33:50 compute-0 sudo[87186]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:50 compute-0 sudo[87211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:33:50 compute-0 sudo[87211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:33:50 compute-0 sudo[87211]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:50 compute-0 sudo[87262]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxcxcpxqeidivmantsxatxlqhlpqkgrm ; /usr/bin/python3'
Oct 02 11:33:50 compute-0 sudo[87262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:33:50 compute-0 sudo[87259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:33:50 compute-0 sudo[87259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:33:50 compute-0 sudo[87259]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:50 compute-0 sudo[87287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 11:33:50 compute-0 sudo[87287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:33:50 compute-0 python3[87281]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:33:50 compute-0 podman[87312]: 2025-10-02 11:33:50.741462169 +0000 UTC m=+0.041253221 container create db3c038938a641fb40aa488532c941dae0d6b9d72be68b80eb825217dcf04d6c (image=quay.io/ceph/ceph:v18, name=jolly_leakey, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:33:50 compute-0 systemd[1]: Started libpod-conmon-db3c038938a641fb40aa488532c941dae0d6b9d72be68b80eb825217dcf04d6c.scope.
Oct 02 11:33:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:33:50 compute-0 podman[87312]: 2025-10-02 11:33:50.721972877 +0000 UTC m=+0.021763959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:33:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8db068c6bf029a1f4124013e06a6be4d67d18451efd4788eb3fb8722dfdf86b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8db068c6bf029a1f4124013e06a6be4d67d18451efd4788eb3fb8722dfdf86b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:50 compute-0 podman[87312]: 2025-10-02 11:33:50.829174631 +0000 UTC m=+0.128965703 container init db3c038938a641fb40aa488532c941dae0d6b9d72be68b80eb825217dcf04d6c (image=quay.io/ceph/ceph:v18, name=jolly_leakey, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:33:50 compute-0 podman[87312]: 2025-10-02 11:33:50.835227963 +0000 UTC m=+0.135019015 container start db3c038938a641fb40aa488532c941dae0d6b9d72be68b80eb825217dcf04d6c (image=quay.io/ceph/ceph:v18, name=jolly_leakey, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:33:50 compute-0 podman[87312]: 2025-10-02 11:33:50.838255189 +0000 UTC m=+0.138046261 container attach db3c038938a641fb40aa488532c941dae0d6b9d72be68b80eb825217dcf04d6c (image=quay.io/ceph/ceph:v18, name=jolly_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 11:33:50 compute-0 podman[87372]: 2025-10-02 11:33:50.945279998 +0000 UTC m=+0.036779878 container create ca8ae5d9a2612be1e8fcd6468982749482eb5cf96d238f5b1c64dd2cb76c2581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lalande, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:33:50 compute-0 systemd[1]: Started libpod-conmon-ca8ae5d9a2612be1e8fcd6468982749482eb5cf96d238f5b1c64dd2cb76c2581.scope.
Oct 02 11:33:51 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Oct 02 11:33:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:33:51 compute-0 podman[87372]: 2025-10-02 11:33:50.928996567 +0000 UTC m=+0.020496457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:33:51 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Oct 02 11:33:51 compute-0 podman[87372]: 2025-10-02 11:33:51.032231271 +0000 UTC m=+0.123731171 container init ca8ae5d9a2612be1e8fcd6468982749482eb5cf96d238f5b1c64dd2cb76c2581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lalande, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 11:33:51 compute-0 podman[87372]: 2025-10-02 11:33:51.039308779 +0000 UTC m=+0.130808659 container start ca8ae5d9a2612be1e8fcd6468982749482eb5cf96d238f5b1c64dd2cb76c2581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lalande, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Oct 02 11:33:51 compute-0 nostalgic_lalande[87389]: 167 167
Oct 02 11:33:51 compute-0 podman[87372]: 2025-10-02 11:33:51.043899105 +0000 UTC m=+0.135399005 container attach ca8ae5d9a2612be1e8fcd6468982749482eb5cf96d238f5b1c64dd2cb76c2581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lalande, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:33:51 compute-0 systemd[1]: libpod-ca8ae5d9a2612be1e8fcd6468982749482eb5cf96d238f5b1c64dd2cb76c2581.scope: Deactivated successfully.
Oct 02 11:33:51 compute-0 podman[87372]: 2025-10-02 11:33:51.045131056 +0000 UTC m=+0.136630956 container died ca8ae5d9a2612be1e8fcd6468982749482eb5cf96d238f5b1c64dd2cb76c2581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lalande, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 11:33:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-88aa4a8ecc8abb7f484e29378242c2a8801dc543d34841ced115be550bdcc3e1-merged.mount: Deactivated successfully.
Oct 02 11:33:51 compute-0 podman[87372]: 2025-10-02 11:33:51.087521944 +0000 UTC m=+0.179021824 container remove ca8ae5d9a2612be1e8fcd6468982749482eb5cf96d238f5b1c64dd2cb76c2581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lalande, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:33:51 compute-0 systemd[1]: libpod-conmon-ca8ae5d9a2612be1e8fcd6468982749482eb5cf96d238f5b1c64dd2cb76c2581.scope: Deactivated successfully.
Oct 02 11:33:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "4adc6a3a-57df-44c5-8148-0263723a70e6"} v 0) v1
Oct 02 11:33:51 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4adc6a3a-57df-44c5-8148-0263723a70e6"}]: dispatch
Oct 02 11:33:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Oct 02 11:33:51 compute-0 podman[87432]: 2025-10-02 11:33:51.240087211 +0000 UTC m=+0.039989768 container create 214923fc890d2cdf140441ab9feb4a493101d06ee9bc83c15e47065ff9e7d0f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 11:33:51 compute-0 systemd[1]: Started libpod-conmon-214923fc890d2cdf140441ab9feb4a493101d06ee9bc83c15e47065ff9e7d0f7.scope.
Oct 02 11:33:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:33:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70614a8cc8012184dee4fcc0f335755e82b1c3b0b79dfe08ce5cd378f9ca9f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70614a8cc8012184dee4fcc0f335755e82b1c3b0b79dfe08ce5cd378f9ca9f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70614a8cc8012184dee4fcc0f335755e82b1c3b0b79dfe08ce5cd378f9ca9f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70614a8cc8012184dee4fcc0f335755e82b1c3b0b79dfe08ce5cd378f9ca9f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:51 compute-0 podman[87432]: 2025-10-02 11:33:51.223266217 +0000 UTC m=+0.023168804 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:33:51 compute-0 podman[87432]: 2025-10-02 11:33:51.323527666 +0000 UTC m=+0.123430243 container init 214923fc890d2cdf140441ab9feb4a493101d06ee9bc83c15e47065ff9e7d0f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_gates, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 11:33:51 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4adc6a3a-57df-44c5-8148-0263723a70e6"}]': finished
Oct 02 11:33:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e24 e24: 3 total, 2 up, 3 in
Oct 02 11:33:51 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 2 up, 3 in
Oct 02 11:33:51 compute-0 podman[87432]: 2025-10-02 11:33:51.335256061 +0000 UTC m=+0.135158618 container start 214923fc890d2cdf140441ab9feb4a493101d06ee9bc83c15e47065ff9e7d0f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct 02 11:33:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:33:51 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:33:51 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:33:51 compute-0 podman[87432]: 2025-10-02 11:33:51.343101329 +0000 UTC m=+0.143003916 container attach 214923fc890d2cdf140441ab9feb4a493101d06ee9bc83c15e47065ff9e7d0f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_gates, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:33:51 compute-0 ceph-mon[73607]: pgmap v79: 67 pgs: 1 creating+peering, 1 peering, 31 unknown, 34 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3857097005' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 11:33:51 compute-0 ceph-mon[73607]: osdmap e23: 2 total, 2 up, 2 in
Oct 02 11:33:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1986060367' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4adc6a3a-57df-44c5-8148-0263723a70e6"}]: dispatch
Oct 02 11:33:51 compute-0 ceph-mon[73607]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4adc6a3a-57df-44c5-8148-0263723a70e6"}]: dispatch
Oct 02 11:33:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 02 11:33:51 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:33:51 compute-0 ceph-mgr[73901]: [progress INFO root] Writing back 7 completed events
Oct 02 11:33:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 11:33:51 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:52 compute-0 musing_gates[87448]: {
Oct 02 11:33:52 compute-0 musing_gates[87448]:     "1": [
Oct 02 11:33:52 compute-0 musing_gates[87448]:         {
Oct 02 11:33:52 compute-0 musing_gates[87448]:             "devices": [
Oct 02 11:33:52 compute-0 musing_gates[87448]:                 "/dev/loop3"
Oct 02 11:33:52 compute-0 musing_gates[87448]:             ],
Oct 02 11:33:52 compute-0 musing_gates[87448]:             "lv_name": "ceph_lv0",
Oct 02 11:33:52 compute-0 musing_gates[87448]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:33:52 compute-0 musing_gates[87448]:             "lv_size": "7511998464",
Oct 02 11:33:52 compute-0 musing_gates[87448]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:33:52 compute-0 musing_gates[87448]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:33:52 compute-0 musing_gates[87448]:             "name": "ceph_lv0",
Oct 02 11:33:52 compute-0 musing_gates[87448]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:33:52 compute-0 musing_gates[87448]:             "tags": {
Oct 02 11:33:52 compute-0 musing_gates[87448]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:33:52 compute-0 musing_gates[87448]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:33:52 compute-0 musing_gates[87448]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:33:52 compute-0 musing_gates[87448]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:33:52 compute-0 musing_gates[87448]:                 "ceph.cluster_name": "ceph",
Oct 02 11:33:52 compute-0 musing_gates[87448]:                 "ceph.crush_device_class": "",
Oct 02 11:33:52 compute-0 musing_gates[87448]:                 "ceph.encrypted": "0",
Oct 02 11:33:52 compute-0 musing_gates[87448]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:33:52 compute-0 musing_gates[87448]:                 "ceph.osd_id": "1",
Oct 02 11:33:52 compute-0 musing_gates[87448]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:33:52 compute-0 musing_gates[87448]:                 "ceph.type": "block",
Oct 02 11:33:52 compute-0 musing_gates[87448]:                 "ceph.vdo": "0"
Oct 02 11:33:52 compute-0 musing_gates[87448]:             },
Oct 02 11:33:52 compute-0 musing_gates[87448]:             "type": "block",
Oct 02 11:33:52 compute-0 musing_gates[87448]:             "vg_name": "ceph_vg0"
Oct 02 11:33:52 compute-0 musing_gates[87448]:         }
Oct 02 11:33:52 compute-0 musing_gates[87448]:     ]
Oct 02 11:33:52 compute-0 musing_gates[87448]: }
Oct 02 11:33:52 compute-0 systemd[1]: libpod-214923fc890d2cdf140441ab9feb4a493101d06ee9bc83c15e47065ff9e7d0f7.scope: Deactivated successfully.
Oct 02 11:33:52 compute-0 podman[87460]: 2025-10-02 11:33:52.200854679 +0000 UTC m=+0.025094674 container died 214923fc890d2cdf140441ab9feb4a493101d06ee9bc83c15e47065ff9e7d0f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_gates, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 11:33:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v82: 68 pgs: 1 creating+peering, 1 peering, 32 unknown, 34 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-e70614a8cc8012184dee4fcc0f335755e82b1c3b0b79dfe08ce5cd378f9ca9f4-merged.mount: Deactivated successfully.
Oct 02 11:33:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Oct 02 11:33:52 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 11:33:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e25 e25: 3 total, 2 up, 3 in
Oct 02 11:33:52 compute-0 podman[87460]: 2025-10-02 11:33:52.492322598 +0000 UTC m=+0.316562583 container remove 214923fc890d2cdf140441ab9feb4a493101d06ee9bc83c15e47065ff9e7d0f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_gates, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 11:33:52 compute-0 jolly_leakey[87352]: pool 'cephfs.cephfs.data' created
Oct 02 11:33:52 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 2 up, 3 in
Oct 02 11:33:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:33:52 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:33:52 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:33:52 compute-0 systemd[1]: libpod-conmon-214923fc890d2cdf140441ab9feb4a493101d06ee9bc83c15e47065ff9e7d0f7.scope: Deactivated successfully.
Oct 02 11:33:52 compute-0 systemd[1]: libpod-db3c038938a641fb40aa488532c941dae0d6b9d72be68b80eb825217dcf04d6c.scope: Deactivated successfully.
Oct 02 11:33:52 compute-0 podman[87312]: 2025-10-02 11:33:52.517106482 +0000 UTC m=+1.816897534 container died db3c038938a641fb40aa488532c941dae0d6b9d72be68b80eb825217dcf04d6c (image=quay.io/ceph/ceph:v18, name=jolly_leakey, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:33:52 compute-0 sudo[87287]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:52 compute-0 sudo[87487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:33:52 compute-0 sudo[87487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:33:52 compute-0 sudo[87487]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:52 compute-0 sudo[87513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:33:52 compute-0 sudo[87513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:33:52 compute-0 sudo[87513]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8db068c6bf029a1f4124013e06a6be4d67d18451efd4788eb3fb8722dfdf86b-merged.mount: Deactivated successfully.
Oct 02 11:33:52 compute-0 sudo[87538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:33:52 compute-0 sudo[87538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:33:52 compute-0 sudo[87538]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:52 compute-0 ceph-mon[73607]: 2.1 scrub starts
Oct 02 11:33:52 compute-0 ceph-mon[73607]: 2.1 scrub ok
Oct 02 11:33:52 compute-0 ceph-mon[73607]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4adc6a3a-57df-44c5-8148-0263723a70e6"}]': finished
Oct 02 11:33:52 compute-0 ceph-mon[73607]: osdmap e24: 3 total, 2 up, 3 in
Oct 02 11:33:52 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:33:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4255282364' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:33:52 compute-0 ceph-mon[73607]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:33:52 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:52 compute-0 ceph-mon[73607]: pgmap v82: 68 pgs: 1 creating+peering, 1 peering, 32 unknown, 34 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3509981406' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 02 11:33:52 compute-0 ceph-mon[73607]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 11:33:52 compute-0 ceph-mon[73607]: osdmap e25: 3 total, 2 up, 3 in
Oct 02 11:33:52 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:33:52 compute-0 sudo[87563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 11:33:52 compute-0 sudo[87563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:33:52 compute-0 podman[87312]: 2025-10-02 11:33:52.757240948 +0000 UTC m=+2.057032000 container remove db3c038938a641fb40aa488532c941dae0d6b9d72be68b80eb825217dcf04d6c (image=quay.io/ceph/ceph:v18, name=jolly_leakey, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:33:52 compute-0 sudo[87262]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:52 compute-0 systemd[1]: libpod-conmon-db3c038938a641fb40aa488532c941dae0d6b9d72be68b80eb825217dcf04d6c.scope: Deactivated successfully.
Oct 02 11:33:52 compute-0 ceph-mon[73607]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 11:33:52 compute-0 sudo[87637]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suvkhgbssbaiqbhlaylvxtshyqzidkxe ; /usr/bin/python3'
Oct 02 11:33:52 compute-0 sudo[87637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:33:52 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 25 pg[7.0( empty local-lis/les=0/0 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [1] r=0 lpr=25 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:53 compute-0 podman[87652]: 2025-10-02 11:33:53.065242584 +0000 UTC m=+0.060685591 container create 2e160c46a155a94cf1337efeff741584c30b30590f38cce29f680da41efe6443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 11:33:53 compute-0 python3[87641]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:33:53 compute-0 podman[87652]: 2025-10-02 11:33:53.023962143 +0000 UTC m=+0.019405170 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:33:53 compute-0 systemd[1]: Started libpod-conmon-2e160c46a155a94cf1337efeff741584c30b30590f38cce29f680da41efe6443.scope.
Oct 02 11:33:53 compute-0 podman[87666]: 2025-10-02 11:33:53.18050342 +0000 UTC m=+0.056804412 container create 94210f1c24efa412bbc5bc940b9554dc378565cd1dba13fc2b0c1c7fdeec58ed (image=quay.io/ceph/ceph:v18, name=pensive_murdock, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:33:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:33:53 compute-0 podman[87652]: 2025-10-02 11:33:53.212114678 +0000 UTC m=+0.207557715 container init 2e160c46a155a94cf1337efeff741584c30b30590f38cce29f680da41efe6443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_snyder, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 11:33:53 compute-0 podman[87652]: 2025-10-02 11:33:53.219149075 +0000 UTC m=+0.214592082 container start 2e160c46a155a94cf1337efeff741584c30b30590f38cce29f680da41efe6443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_snyder, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 11:33:53 compute-0 nifty_snyder[87681]: 167 167
Oct 02 11:33:53 compute-0 systemd[1]: Started libpod-conmon-94210f1c24efa412bbc5bc940b9554dc378565cd1dba13fc2b0c1c7fdeec58ed.scope.
Oct 02 11:33:53 compute-0 systemd[1]: libpod-2e160c46a155a94cf1337efeff741584c30b30590f38cce29f680da41efe6443.scope: Deactivated successfully.
Oct 02 11:33:53 compute-0 podman[87666]: 2025-10-02 11:33:53.14916548 +0000 UTC m=+0.025466492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:33:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:33:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0920b68579a836d350e31781fcd08f432d1469d78c9b21241642001f978e65b9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0920b68579a836d350e31781fcd08f432d1469d78c9b21241642001f978e65b9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:53 compute-0 podman[87652]: 2025-10-02 11:33:53.355450731 +0000 UTC m=+0.350893768 container attach 2e160c46a155a94cf1337efeff741584c30b30590f38cce29f680da41efe6443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 11:33:53 compute-0 podman[87652]: 2025-10-02 11:33:53.357881233 +0000 UTC m=+0.353324240 container died 2e160c46a155a94cf1337efeff741584c30b30590f38cce29f680da41efe6443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:33:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cf9270487f508db2d35217dc5d750a7ec68e446a08b6c91b0a859f3db12a710-merged.mount: Deactivated successfully.
Oct 02 11:33:53 compute-0 podman[87666]: 2025-10-02 11:33:53.447766329 +0000 UTC m=+0.324067341 container init 94210f1c24efa412bbc5bc940b9554dc378565cd1dba13fc2b0c1c7fdeec58ed (image=quay.io/ceph/ceph:v18, name=pensive_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 11:33:53 compute-0 podman[87666]: 2025-10-02 11:33:53.453651347 +0000 UTC m=+0.329952339 container start 94210f1c24efa412bbc5bc940b9554dc378565cd1dba13fc2b0c1c7fdeec58ed (image=quay.io/ceph/ceph:v18, name=pensive_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 11:33:53 compute-0 podman[87652]: 2025-10-02 11:33:53.455131335 +0000 UTC m=+0.450574342 container remove 2e160c46a155a94cf1337efeff741584c30b30590f38cce29f680da41efe6443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_snyder, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:33:53 compute-0 systemd[1]: libpod-conmon-2e160c46a155a94cf1337efeff741584c30b30590f38cce29f680da41efe6443.scope: Deactivated successfully.
Oct 02 11:33:53 compute-0 podman[87666]: 2025-10-02 11:33:53.46605748 +0000 UTC m=+0.342358472 container attach 94210f1c24efa412bbc5bc940b9554dc378565cd1dba13fc2b0c1c7fdeec58ed (image=quay.io/ceph/ceph:v18, name=pensive_murdock, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:33:53 compute-0 podman[87713]: 2025-10-02 11:33:53.637425012 +0000 UTC m=+0.081851035 container create cb480acd75362d5a786956f7bce165f7ea09b56585f691b3b781518df0c07da3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:33:53 compute-0 podman[87713]: 2025-10-02 11:33:53.579424579 +0000 UTC m=+0.023850632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:33:53 compute-0 systemd[1]: Started libpod-conmon-cb480acd75362d5a786956f7bce165f7ea09b56585f691b3b781518df0c07da3.scope.
Oct 02 11:33:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:33:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d298baae815142c4a71af35ea6a0bb54434cb2b4eea93a9b5f3159585b3635/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d298baae815142c4a71af35ea6a0bb54434cb2b4eea93a9b5f3159585b3635/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d298baae815142c4a71af35ea6a0bb54434cb2b4eea93a9b5f3159585b3635/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d298baae815142c4a71af35ea6a0bb54434cb2b4eea93a9b5f3159585b3635/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Oct 02 11:33:53 compute-0 podman[87713]: 2025-10-02 11:33:53.894823242 +0000 UTC m=+0.339249275 container init cb480acd75362d5a786956f7bce165f7ea09b56585f691b3b781518df0c07da3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:33:53 compute-0 podman[87713]: 2025-10-02 11:33:53.901020189 +0000 UTC m=+0.345446212 container start cb480acd75362d5a786956f7bce165f7ea09b56585f691b3b781518df0c07da3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 11:33:53 compute-0 podman[87713]: 2025-10-02 11:33:53.958377105 +0000 UTC m=+0.402803128 container attach cb480acd75362d5a786956f7bce165f7ea09b56585f691b3b781518df0c07da3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 11:33:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Oct 02 11:33:54 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3376646480' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 02 11:33:54 compute-0 ceph-mon[73607]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 11:33:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e26 e26: 3 total, 2 up, 3 in
Oct 02 11:33:54 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 2 up, 3 in
Oct 02 11:33:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:33:54 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:33:54 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:33:54 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 26 pg[7.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [1] r=0 lpr=25 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v85: 69 pgs: 1 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:33:54 compute-0 goofy_jemison[87748]: {
Oct 02 11:33:54 compute-0 goofy_jemison[87748]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 11:33:54 compute-0 goofy_jemison[87748]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:33:54 compute-0 goofy_jemison[87748]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:33:54 compute-0 goofy_jemison[87748]:         "osd_id": 1,
Oct 02 11:33:54 compute-0 goofy_jemison[87748]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:33:54 compute-0 goofy_jemison[87748]:         "type": "bluestore"
Oct 02 11:33:54 compute-0 goofy_jemison[87748]:     }
Oct 02 11:33:54 compute-0 goofy_jemison[87748]: }
Oct 02 11:33:54 compute-0 systemd[1]: libpod-cb480acd75362d5a786956f7bce165f7ea09b56585f691b3b781518df0c07da3.scope: Deactivated successfully.
Oct 02 11:33:54 compute-0 podman[87713]: 2025-10-02 11:33:54.732353071 +0000 UTC m=+1.176779094 container died cb480acd75362d5a786956f7bce165f7ea09b56585f691b3b781518df0c07da3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 11:33:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-42d298baae815142c4a71af35ea6a0bb54434cb2b4eea93a9b5f3159585b3635-merged.mount: Deactivated successfully.
Oct 02 11:33:54 compute-0 podman[87713]: 2025-10-02 11:33:54.938418017 +0000 UTC m=+1.382844040 container remove cb480acd75362d5a786956f7bce165f7ea09b56585f691b3b781518df0c07da3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:33:54 compute-0 systemd[1]: libpod-conmon-cb480acd75362d5a786956f7bce165f7ea09b56585f691b3b781518df0c07da3.scope: Deactivated successfully.
Oct 02 11:33:54 compute-0 sudo[87563]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:33:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Oct 02 11:33:55 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:33:55 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3376646480' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 02 11:33:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e27 e27: 3 total, 2 up, 3 in
Oct 02 11:33:55 compute-0 pensive_murdock[87689]: enabled application 'rbd' on pool 'vms'
Oct 02 11:33:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3376646480' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 02 11:33:55 compute-0 ceph-mon[73607]: osdmap e26: 3 total, 2 up, 3 in
Oct 02 11:33:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:33:55 compute-0 ceph-mon[73607]: pgmap v85: 69 pgs: 1 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:55 compute-0 systemd[1]: libpod-94210f1c24efa412bbc5bc940b9554dc378565cd1dba13fc2b0c1c7fdeec58ed.scope: Deactivated successfully.
Oct 02 11:33:55 compute-0 podman[87666]: 2025-10-02 11:33:55.365008483 +0000 UTC m=+2.241309475 container died 94210f1c24efa412bbc5bc940b9554dc378565cd1dba13fc2b0c1c7fdeec58ed (image=quay.io/ceph/ceph:v18, name=pensive_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 11:33:55 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 2 up, 3 in
Oct 02 11:33:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:33:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:33:55 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:33:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-0920b68579a836d350e31781fcd08f432d1469d78c9b21241642001f978e65b9-merged.mount: Deactivated successfully.
Oct 02 11:33:55 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:55 compute-0 podman[87666]: 2025-10-02 11:33:55.573629274 +0000 UTC m=+2.449930266 container remove 94210f1c24efa412bbc5bc940b9554dc378565cd1dba13fc2b0c1c7fdeec58ed (image=quay.io/ceph/ceph:v18, name=pensive_murdock, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 11:33:55 compute-0 systemd[1]: libpod-conmon-94210f1c24efa412bbc5bc940b9554dc378565cd1dba13fc2b0c1c7fdeec58ed.scope: Deactivated successfully.
Oct 02 11:33:55 compute-0 sudo[87637]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:55 compute-0 sudo[87818]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zogxgmghkzogxsuyrlehqxbutauwrtoo ; /usr/bin/python3'
Oct 02 11:33:55 compute-0 sudo[87818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:33:55 compute-0 python3[87820]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:33:55 compute-0 podman[87821]: 2025-10-02 11:33:55.910925689 +0000 UTC m=+0.041794224 container create 16557ae3414dd30810311dd1c84920d06dbdbb2a993ad1a7afb2f0e49fa2f601 (image=quay.io/ceph/ceph:v18, name=ecstatic_bassi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 11:33:55 compute-0 systemd[1]: Started libpod-conmon-16557ae3414dd30810311dd1c84920d06dbdbb2a993ad1a7afb2f0e49fa2f601.scope.
Oct 02 11:33:55 compute-0 podman[87821]: 2025-10-02 11:33:55.891882469 +0000 UTC m=+0.022751034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:33:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:33:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff563ef2bad2fdb763a48cf34c8a5f3743ce2aa7fb59b35d29d683246d39de3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff563ef2bad2fdb763a48cf34c8a5f3743ce2aa7fb59b35d29d683246d39de3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:56 compute-0 podman[87821]: 2025-10-02 11:33:56.0105056 +0000 UTC m=+0.141374165 container init 16557ae3414dd30810311dd1c84920d06dbdbb2a993ad1a7afb2f0e49fa2f601 (image=quay.io/ceph/ceph:v18, name=ecstatic_bassi, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 11:33:56 compute-0 podman[87821]: 2025-10-02 11:33:56.017194569 +0000 UTC m=+0.148063104 container start 16557ae3414dd30810311dd1c84920d06dbdbb2a993ad1a7afb2f0e49fa2f601 (image=quay.io/ceph/ceph:v18, name=ecstatic_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 11:33:56 compute-0 podman[87821]: 2025-10-02 11:33:56.020656066 +0000 UTC m=+0.151524611 container attach 16557ae3414dd30810311dd1c84920d06dbdbb2a993ad1a7afb2f0e49fa2f601 (image=quay.io/ceph/ceph:v18, name=ecstatic_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 11:33:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v87: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 02 11:33:56 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:33:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 02 11:33:56 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:33:56 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3376646480' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 02 11:33:56 compute-0 ceph-mon[73607]: osdmap e27: 3 total, 2 up, 3 in
Oct 02 11:33:56 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:33:56 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:33:56 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:33:56 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:33:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Oct 02 11:33:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Oct 02 11:33:56 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2736872602' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 02 11:33:56 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:33:56 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:33:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e28 e28: 3 total, 2 up, 3 in
Oct 02 11:33:56 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 2 up, 3 in
Oct 02 11:33:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:33:56 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:33:56 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[3.1a( empty local-lis/les=0/0 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[3.16( empty local-lis/les=0/0 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[3.14( empty local-lis/les=0/0 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[3.15( empty local-lis/les=0/0 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[3.13( empty local-lis/les=0/0 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[3.10( empty local-lis/les=0/0 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[3.11( empty local-lis/les=0/0 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[3.e( empty local-lis/les=0/0 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[3.f( empty local-lis/les=0/0 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[3.c( empty local-lis/les=0/0 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[3.d( empty local-lis/les=0/0 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[3.3( empty local-lis/les=0/0 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[3.5( empty local-lis/les=0/0 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[3.9( empty local-lis/les=0/0 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[3.a( empty local-lis/les=0/0 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[3.1d( empty local-lis/les=0/0 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[3.1c( empty local-lis/les=0/0 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[2.1f( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.536719322s) [0] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 66.599525452s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[2.1f( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.536690712s) [0] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.599525452s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[2.1e( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.536573410s) [0] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 66.599525452s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[2.1e( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.536555290s) [0] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.599525452s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[2.a( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.536511421s) [0] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 66.599563599s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[2.a( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.536499023s) [0] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.599563599s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[2.9( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.540493011s) [0] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 66.603630066s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[2.9( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.540480614s) [0] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.603630066s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[2.6( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.540445328s) [0] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 66.603668213s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[2.6( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.540434837s) [0] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.603668213s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[2.4( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.540410995s) [0] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 66.603759766s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[2.4( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.540392876s) [0] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.603759766s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[2.1( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.540318489s) [0] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 66.603797913s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[2.1( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.540287018s) [0] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.603797913s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[2.d( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.540397644s) [0] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 66.604003906s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[2.c( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.539790154s) [0] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 66.603858948s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[2.d( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.539921761s) [0] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.604003906s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[2.c( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.539747238s) [0] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.603858948s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[2.e( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.539712906s) [0] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 66.603874207s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[2.10( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.539714813s) [0] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 66.603912354s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[2.e( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.539689064s) [0] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.603874207s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[2.13( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.539670944s) [0] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 66.603919983s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[2.15( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.539732933s) [0] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 66.603981018s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[2.10( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.539678574s) [0] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.603912354s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[2.13( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.539636612s) [0] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.603919983s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[2.19( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.539784431s) [0] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 66.604103088s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[2.19( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.539724350s) [0] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.604103088s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[2.15( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.539588928s) [0] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.603981018s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[2.1b( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.539532661s) [0] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 66.604057312s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:33:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 28 pg[2.1b( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.539512634s) [0] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.604057312s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:33:56 compute-0 ceph-mgr[73901]: [progress INFO root] Completed event a627725d-c062-4116-a669-5e4de336f203 (Global Recovery Event) in 16 seconds
Oct 02 11:33:57 compute-0 ceph-mon[73607]: pgmap v87: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2736872602' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 02 11:33:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:33:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:33:57 compute-0 ceph-mon[73607]: osdmap e28: 3 total, 2 up, 3 in
Oct 02 11:33:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:33:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Oct 02 11:33:57 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2736872602' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 02 11:33:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e29 e29: 3 total, 2 up, 3 in
Oct 02 11:33:57 compute-0 ecstatic_bassi[87836]: enabled application 'rbd' on pool 'volumes'
Oct 02 11:33:57 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 2 up, 3 in
Oct 02 11:33:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:33:57 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:33:57 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:33:57 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 29 pg[3.1c( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:57 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 29 pg[3.1d( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:57 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 29 pg[3.9( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:57 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 29 pg[3.a( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:57 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 29 pg[3.5( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:57 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 29 pg[3.3( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:57 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 29 pg[3.c( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:57 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 29 pg[3.f( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:57 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 29 pg[3.e( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:57 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 29 pg[3.11( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:57 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 29 pg[3.15( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:57 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 29 pg[3.13( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:57 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 29 pg[3.16( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:57 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 29 pg[3.10( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:57 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 29 pg[3.14( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:57 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 29 pg[3.d( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:57 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 29 pg[3.1a( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:33:57 compute-0 systemd[1]: libpod-16557ae3414dd30810311dd1c84920d06dbdbb2a993ad1a7afb2f0e49fa2f601.scope: Deactivated successfully.
Oct 02 11:33:57 compute-0 podman[87821]: 2025-10-02 11:33:57.768343315 +0000 UTC m=+1.899211850 container died 16557ae3414dd30810311dd1c84920d06dbdbb2a993ad1a7afb2f0e49fa2f601 (image=quay.io/ceph/ceph:v18, name=ecstatic_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 11:33:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-cff563ef2bad2fdb763a48cf34c8a5f3743ce2aa7fb59b35d29d683246d39de3-merged.mount: Deactivated successfully.
Oct 02 11:33:57 compute-0 podman[87821]: 2025-10-02 11:33:57.819156786 +0000 UTC m=+1.950025321 container remove 16557ae3414dd30810311dd1c84920d06dbdbb2a993ad1a7afb2f0e49fa2f601 (image=quay.io/ceph/ceph:v18, name=ecstatic_bassi, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:33:57 compute-0 sudo[87818]: pam_unix(sudo:session): session closed for user root
Oct 02 11:33:57 compute-0 systemd[1]: libpod-conmon-16557ae3414dd30810311dd1c84920d06dbdbb2a993ad1a7afb2f0e49fa2f601.scope: Deactivated successfully.
Oct 02 11:33:57 compute-0 sudo[87895]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnhsnpimfedzxkybowvpqlgzrmlskdkc ; /usr/bin/python3'
Oct 02 11:33:57 compute-0 sudo[87895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:33:58 compute-0 python3[87897]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:33:58 compute-0 podman[87898]: 2025-10-02 11:33:58.177043891 +0000 UTC m=+0.070739515 container create eb30dc7a64ccbc057ad179bf9089dd65a0b7e296f4a65988ba423e9f33891175 (image=quay.io/ceph/ceph:v18, name=epic_pascal, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:33:58 compute-0 podman[87898]: 2025-10-02 11:33:58.128007144 +0000 UTC m=+0.021702788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:33:58 compute-0 systemd[1]: Started libpod-conmon-eb30dc7a64ccbc057ad179bf9089dd65a0b7e296f4a65988ba423e9f33891175.scope.
Oct 02 11:33:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v90: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d387130466758d8d839431f823a7f7b5673bbfb2bf881288f5af4a95cfdc05a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d387130466758d8d839431f823a7f7b5673bbfb2bf881288f5af4a95cfdc05a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:33:58 compute-0 podman[87898]: 2025-10-02 11:33:58.273516063 +0000 UTC m=+0.167211687 container init eb30dc7a64ccbc057ad179bf9089dd65a0b7e296f4a65988ba423e9f33891175 (image=quay.io/ceph/ceph:v18, name=epic_pascal, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 11:33:58 compute-0 podman[87898]: 2025-10-02 11:33:58.279232998 +0000 UTC m=+0.172928622 container start eb30dc7a64ccbc057ad179bf9089dd65a0b7e296f4a65988ba423e9f33891175 (image=quay.io/ceph/ceph:v18, name=epic_pascal, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 11:33:58 compute-0 podman[87898]: 2025-10-02 11:33:58.295082987 +0000 UTC m=+0.188778641 container attach eb30dc7a64ccbc057ad179bf9089dd65a0b7e296f4a65988ba423e9f33891175 (image=quay.io/ceph/ceph:v18, name=epic_pascal, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 11:33:58 compute-0 ceph-mon[73607]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 11:33:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2736872602' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 02 11:33:58 compute-0 ceph-mon[73607]: osdmap e29: 3 total, 2 up, 3 in
Oct 02 11:33:58 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:33:58 compute-0 ceph-mon[73607]: pgmap v90: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:33:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Oct 02 11:33:58 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2891127426' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 02 11:33:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:33:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Oct 02 11:33:59 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 02 11:33:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:33:59 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:33:59 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Oct 02 11:33:59 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Oct 02 11:33:59 compute-0 ceph-mon[73607]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 11:33:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2891127426' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 02 11:33:59 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 02 11:33:59 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:33:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Oct 02 11:33:59 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2891127426' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 02 11:33:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e30 e30: 3 total, 2 up, 3 in
Oct 02 11:33:59 compute-0 epic_pascal[87913]: enabled application 'rbd' on pool 'backups'
Oct 02 11:33:59 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 2 up, 3 in
Oct 02 11:33:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:33:59 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:33:59 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:33:59 compute-0 systemd[1]: libpod-eb30dc7a64ccbc057ad179bf9089dd65a0b7e296f4a65988ba423e9f33891175.scope: Deactivated successfully.
Oct 02 11:33:59 compute-0 podman[87898]: 2025-10-02 11:33:59.807595655 +0000 UTC m=+1.701291279 container died eb30dc7a64ccbc057ad179bf9089dd65a0b7e296f4a65988ba423e9f33891175 (image=quay.io/ceph/ceph:v18, name=epic_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:33:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d387130466758d8d839431f823a7f7b5673bbfb2bf881288f5af4a95cfdc05a-merged.mount: Deactivated successfully.
Oct 02 11:33:59 compute-0 podman[87898]: 2025-10-02 11:33:59.852285672 +0000 UTC m=+1.745981296 container remove eb30dc7a64ccbc057ad179bf9089dd65a0b7e296f4a65988ba423e9f33891175 (image=quay.io/ceph/ceph:v18, name=epic_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 11:33:59 compute-0 systemd[1]: libpod-conmon-eb30dc7a64ccbc057ad179bf9089dd65a0b7e296f4a65988ba423e9f33891175.scope: Deactivated successfully.
Oct 02 11:33:59 compute-0 sudo[87895]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:00 compute-0 sudo[87972]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwbonauzlxndlrxzwwueizfwbnmcvpne ; /usr/bin/python3'
Oct 02 11:34:00 compute-0 sudo[87972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:00 compute-0 python3[87974]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:34:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v92: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:34:00 compute-0 podman[87975]: 2025-10-02 11:34:00.265720338 +0000 UTC m=+0.096199357 container create d3238752bfa9f019225ff9857e20647872c9bd7ea287071ef38a9495a328a196 (image=quay.io/ceph/ceph:v18, name=stupefied_torvalds, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:34:00 compute-0 podman[87975]: 2025-10-02 11:34:00.189649889 +0000 UTC m=+0.020128928 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:34:00 compute-0 systemd[1]: Started libpod-conmon-d3238752bfa9f019225ff9857e20647872c9bd7ea287071ef38a9495a328a196.scope.
Oct 02 11:34:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:34:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4be84ba5837bc4d2b4ef880835f0d2b3a56711d454272078706172a9e451ba0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4be84ba5837bc4d2b4ef880835f0d2b3a56711d454272078706172a9e451ba0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:00 compute-0 podman[87975]: 2025-10-02 11:34:00.338751989 +0000 UTC m=+0.169231028 container init d3238752bfa9f019225ff9857e20647872c9bd7ea287071ef38a9495a328a196 (image=quay.io/ceph/ceph:v18, name=stupefied_torvalds, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:34:00 compute-0 podman[87975]: 2025-10-02 11:34:00.344105774 +0000 UTC m=+0.174584793 container start d3238752bfa9f019225ff9857e20647872c9bd7ea287071ef38a9495a328a196 (image=quay.io/ceph/ceph:v18, name=stupefied_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 11:34:00 compute-0 podman[87975]: 2025-10-02 11:34:00.347257203 +0000 UTC m=+0.177736222 container attach d3238752bfa9f019225ff9857e20647872c9bd7ea287071ef38a9495a328a196 (image=quay.io/ceph/ceph:v18, name=stupefied_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Oct 02 11:34:00 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.rbjjpf started
Oct 02 11:34:00 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from mgr.compute-2.rbjjpf 192.168.122.102:0/3728391262; not ready for session (expect reconnect)
Oct 02 11:34:00 compute-0 ceph-mon[73607]: Deploying daemon osd.2 on compute-2
Oct 02 11:34:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2891127426' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 02 11:34:00 compute-0 ceph-mon[73607]: osdmap e30: 3 total, 2 up, 3 in
Oct 02 11:34:00 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:34:00 compute-0 ceph-mon[73607]: pgmap v92: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:34:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Oct 02 11:34:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3239065561' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 02 11:34:01 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Oct 02 11:34:01 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Oct 02 11:34:01 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from mgr.compute-2.rbjjpf 192.168.122.102:0/3728391262; not ready for session (expect reconnect)
Oct 02 11:34:01 compute-0 ceph-mgr[73901]: [progress INFO root] Writing back 8 completed events
Oct 02 11:34:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Oct 02 11:34:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 11:34:02 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.fmcstn(active, since 2m), standbys: compute-2.rbjjpf
Oct 02 11:34:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.rbjjpf", "id": "compute-2.rbjjpf"} v 0) v1
Oct 02 11:34:02 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mgr metadata", "who": "compute-2.rbjjpf", "id": "compute-2.rbjjpf"}]: dispatch
Oct 02 11:34:02 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3239065561' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 02 11:34:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e31 e31: 3 total, 2 up, 3 in
Oct 02 11:34:02 compute-0 stupefied_torvalds[87991]: enabled application 'rbd' on pool 'images'
Oct 02 11:34:02 compute-0 ceph-mon[73607]: Standby manager daemon compute-2.rbjjpf started
Oct 02 11:34:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3239065561' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 02 11:34:02 compute-0 ceph-mon[73607]: 2.2 scrub starts
Oct 02 11:34:02 compute-0 ceph-mon[73607]: 2.2 scrub ok
Oct 02 11:34:02 compute-0 systemd[1]: libpod-d3238752bfa9f019225ff9857e20647872c9bd7ea287071ef38a9495a328a196.scope: Deactivated successfully.
Oct 02 11:34:02 compute-0 podman[87975]: 2025-10-02 11:34:02.153933099 +0000 UTC m=+1.984412118 container died d3238752bfa9f019225ff9857e20647872c9bd7ea287071ef38a9495a328a196 (image=quay.io/ceph/ceph:v18, name=stupefied_torvalds, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:34:02 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:02 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 2 up, 3 in
Oct 02 11:34:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:34:02 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:34:02 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:34:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v94: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:34:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4be84ba5837bc4d2b4ef880835f0d2b3a56711d454272078706172a9e451ba0-merged.mount: Deactivated successfully.
Oct 02 11:34:02 compute-0 podman[87975]: 2025-10-02 11:34:02.364441088 +0000 UTC m=+2.194920107 container remove d3238752bfa9f019225ff9857e20647872c9bd7ea287071ef38a9495a328a196 (image=quay.io/ceph/ceph:v18, name=stupefied_torvalds, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 11:34:02 compute-0 systemd[1]: libpod-conmon-d3238752bfa9f019225ff9857e20647872c9bd7ea287071ef38a9495a328a196.scope: Deactivated successfully.
Oct 02 11:34:02 compute-0 sudo[87972]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:02 compute-0 sudo[88053]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzpsjjonwpobeglqponzsuysclipzyjc ; /usr/bin/python3'
Oct 02 11:34:02 compute-0 sudo[88053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:02 compute-0 python3[88055]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:34:02 compute-0 podman[88056]: 2025-10-02 11:34:02.722153257 +0000 UTC m=+0.042246346 container create 1423c23c0bbc99dc69a795010036a9bd3885fa0b28edfd72a0874614b2184c6a (image=quay.io/ceph/ceph:v18, name=strange_mccarthy, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 11:34:02 compute-0 systemd[1]: Started libpod-conmon-1423c23c0bbc99dc69a795010036a9bd3885fa0b28edfd72a0874614b2184c6a.scope.
Oct 02 11:34:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:34:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45230a2920ce7ff318a4cd9ba8d6fa83e8502b86487b00bedfe78702c5915cee/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45230a2920ce7ff318a4cd9ba8d6fa83e8502b86487b00bedfe78702c5915cee/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:02 compute-0 podman[88056]: 2025-10-02 11:34:02.702138693 +0000 UTC m=+0.022231822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:34:02 compute-0 podman[88056]: 2025-10-02 11:34:02.799745843 +0000 UTC m=+0.119838942 container init 1423c23c0bbc99dc69a795010036a9bd3885fa0b28edfd72a0874614b2184c6a (image=quay.io/ceph/ceph:v18, name=strange_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 11:34:02 compute-0 podman[88056]: 2025-10-02 11:34:02.806407661 +0000 UTC m=+0.126500750 container start 1423c23c0bbc99dc69a795010036a9bd3885fa0b28edfd72a0874614b2184c6a (image=quay.io/ceph/ceph:v18, name=strange_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:34:02 compute-0 podman[88056]: 2025-10-02 11:34:02.811169102 +0000 UTC m=+0.131262201 container attach 1423c23c0bbc99dc69a795010036a9bd3885fa0b28edfd72a0874614b2184c6a (image=quay.io/ceph/ceph:v18, name=strange_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 11:34:03 compute-0 ceph-mon[73607]: 3.1 scrub starts
Oct 02 11:34:03 compute-0 ceph-mon[73607]: 3.1 scrub ok
Oct 02 11:34:03 compute-0 ceph-mon[73607]: mgrmap e10: compute-0.fmcstn(active, since 2m), standbys: compute-2.rbjjpf
Oct 02 11:34:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mgr metadata", "who": "compute-2.rbjjpf", "id": "compute-2.rbjjpf"}]: dispatch
Oct 02 11:34:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3239065561' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 02 11:34:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:03 compute-0 ceph-mon[73607]: osdmap e31: 3 total, 2 up, 3 in
Oct 02 11:34:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:34:03 compute-0 ceph-mon[73607]: pgmap v94: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:34:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Oct 02 11:34:03 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/817273948' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 02 11:34:03 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.ypnrbl started
Oct 02 11:34:03 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from mgr.compute-1.ypnrbl 192.168.122.101:0/2843141904; not ready for session (expect reconnect)
Oct 02 11:34:04 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Oct 02 11:34:04 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Oct 02 11:34:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Oct 02 11:34:04 compute-0 ceph-mon[73607]: 3.2 scrub starts
Oct 02 11:34:04 compute-0 ceph-mon[73607]: 3.2 scrub ok
Oct 02 11:34:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/817273948' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 02 11:34:04 compute-0 ceph-mon[73607]: Standby manager daemon compute-1.ypnrbl started
Oct 02 11:34:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v95: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:34:04 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/817273948' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 02 11:34:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e32 e32: 3 total, 2 up, 3 in
Oct 02 11:34:04 compute-0 strange_mccarthy[88070]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Oct 02 11:34:04 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.fmcstn(active, since 2m), standbys: compute-2.rbjjpf, compute-1.ypnrbl
Oct 02 11:34:04 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 2 up, 3 in
Oct 02 11:34:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:34:04 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:34:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.ypnrbl", "id": "compute-1.ypnrbl"} v 0) v1
Oct 02 11:34:04 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mgr metadata", "who": "compute-1.ypnrbl", "id": "compute-1.ypnrbl"}]: dispatch
Oct 02 11:34:04 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:34:04 compute-0 systemd[1]: libpod-1423c23c0bbc99dc69a795010036a9bd3885fa0b28edfd72a0874614b2184c6a.scope: Deactivated successfully.
Oct 02 11:34:04 compute-0 podman[88056]: 2025-10-02 11:34:04.269022912 +0000 UTC m=+1.589116011 container died 1423c23c0bbc99dc69a795010036a9bd3885fa0b28edfd72a0874614b2184c6a (image=quay.io/ceph/ceph:v18, name=strange_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:34:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-45230a2920ce7ff318a4cd9ba8d6fa83e8502b86487b00bedfe78702c5915cee-merged.mount: Deactivated successfully.
Oct 02 11:34:04 compute-0 podman[88056]: 2025-10-02 11:34:04.31769531 +0000 UTC m=+1.637788409 container remove 1423c23c0bbc99dc69a795010036a9bd3885fa0b28edfd72a0874614b2184c6a (image=quay.io/ceph/ceph:v18, name=strange_mccarthy, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 11:34:04 compute-0 systemd[1]: libpod-conmon-1423c23c0bbc99dc69a795010036a9bd3885fa0b28edfd72a0874614b2184c6a.scope: Deactivated successfully.
Oct 02 11:34:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:34:04 compute-0 ceph-mon[73607]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 11:34:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:34:04 compute-0 sudo[88053]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:04 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:34:04 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:04 compute-0 sudo[88132]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmqhmasmtikqvmerfckuyqhoxdkhampg ; /usr/bin/python3'
Oct 02 11:34:04 compute-0 sudo[88132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:04 compute-0 python3[88134]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:34:04 compute-0 podman[88135]: 2025-10-02 11:34:04.661910539 +0000 UTC m=+0.061908132 container create 4f8881cff97a6ac4a5edc655438ef923c5d9215e3d40e8d82f5211452fd129ee (image=quay.io/ceph/ceph:v18, name=sweet_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:34:04 compute-0 systemd[1]: Started libpod-conmon-4f8881cff97a6ac4a5edc655438ef923c5d9215e3d40e8d82f5211452fd129ee.scope.
Oct 02 11:34:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1188fd68750e4debc87c1f2c9b9953378c52f12510fbbbf61e1975db6849f87d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1188fd68750e4debc87c1f2c9b9953378c52f12510fbbbf61e1975db6849f87d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:04 compute-0 podman[88135]: 2025-10-02 11:34:04.735819043 +0000 UTC m=+0.135816666 container init 4f8881cff97a6ac4a5edc655438ef923c5d9215e3d40e8d82f5211452fd129ee (image=quay.io/ceph/ceph:v18, name=sweet_banach, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 11:34:04 compute-0 podman[88135]: 2025-10-02 11:34:04.643067004 +0000 UTC m=+0.043064617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:34:04 compute-0 podman[88135]: 2025-10-02 11:34:04.741862796 +0000 UTC m=+0.141860399 container start 4f8881cff97a6ac4a5edc655438ef923c5d9215e3d40e8d82f5211452fd129ee (image=quay.io/ceph/ceph:v18, name=sweet_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 11:34:04 compute-0 podman[88135]: 2025-10-02 11:34:04.749619031 +0000 UTC m=+0.149616654 container attach 4f8881cff97a6ac4a5edc655438ef923c5d9215e3d40e8d82f5211452fd129ee (image=quay.io/ceph/ceph:v18, name=sweet_banach, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 11:34:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Oct 02 11:34:05 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1622950684' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 02 11:34:05 compute-0 ceph-mon[73607]: 2.3 scrub starts
Oct 02 11:34:05 compute-0 ceph-mon[73607]: 2.3 scrub ok
Oct 02 11:34:05 compute-0 ceph-mon[73607]: pgmap v95: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:34:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/817273948' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 02 11:34:05 compute-0 ceph-mon[73607]: mgrmap e11: compute-0.fmcstn(active, since 2m), standbys: compute-2.rbjjpf, compute-1.ypnrbl
Oct 02 11:34:05 compute-0 ceph-mon[73607]: osdmap e32: 3 total, 2 up, 3 in
Oct 02 11:34:05 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:34:05 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mgr metadata", "who": "compute-1.ypnrbl", "id": "compute-1.ypnrbl"}]: dispatch
Oct 02 11:34:05 compute-0 ceph-mon[73607]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 11:34:05 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:05 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v97: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:34:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Oct 02 11:34:06 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1622950684' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 02 11:34:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e33 e33: 3 total, 2 up, 3 in
Oct 02 11:34:06 compute-0 sweet_banach[88150]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Oct 02 11:34:06 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 2 up, 3 in
Oct 02 11:34:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:34:06 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:34:06 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:34:06 compute-0 systemd[1]: libpod-4f8881cff97a6ac4a5edc655438ef923c5d9215e3d40e8d82f5211452fd129ee.scope: Deactivated successfully.
Oct 02 11:34:06 compute-0 podman[88135]: 2025-10-02 11:34:06.389979394 +0000 UTC m=+1.789977017 container died 4f8881cff97a6ac4a5edc655438ef923c5d9215e3d40e8d82f5211452fd129ee (image=quay.io/ceph/ceph:v18, name=sweet_banach, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 11:34:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-1188fd68750e4debc87c1f2c9b9953378c52f12510fbbbf61e1975db6849f87d-merged.mount: Deactivated successfully.
Oct 02 11:34:06 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1622950684' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 02 11:34:06 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1622950684' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 02 11:34:06 compute-0 ceph-mon[73607]: osdmap e33: 3 total, 2 up, 3 in
Oct 02 11:34:06 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:34:06 compute-0 podman[88135]: 2025-10-02 11:34:06.840292858 +0000 UTC m=+2.240290451 container remove 4f8881cff97a6ac4a5edc655438ef923c5d9215e3d40e8d82f5211452fd129ee (image=quay.io/ceph/ceph:v18, name=sweet_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:34:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Oct 02 11:34:06 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 02 11:34:06 compute-0 sudo[88132]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:06 compute-0 systemd[1]: libpod-conmon-4f8881cff97a6ac4a5edc655438ef923c5d9215e3d40e8d82f5211452fd129ee.scope: Deactivated successfully.
Oct 02 11:34:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Oct 02 11:34:07 compute-0 ceph-mon[73607]: pgmap v97: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:34:07 compute-0 ceph-mon[73607]: from='osd.2 [v2:192.168.122.102:6800/1863586281,v1:192.168.122.102:6801/1863586281]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 02 11:34:07 compute-0 ceph-mon[73607]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 02 11:34:07 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 02 11:34:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e34 e34: 3 total, 2 up, 3 in
Oct 02 11:34:07 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 2 up, 3 in
Oct 02 11:34:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:34:07 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:34:07 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:34:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]} v 0) v1
Oct 02 11:34:07 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct 02 11:34:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e34 create-or-move crush item name 'osd.2' initial_weight 0.0068 at location {host=compute-2,root=default}
Oct 02 11:34:07 compute-0 python3[88263]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:34:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:34:07 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:34:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:08 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 2.5 deep-scrub starts
Oct 02 11:34:08 compute-0 sudo[88335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:08 compute-0 sudo[88335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:08 compute-0 sudo[88335]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:08 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 2.5 deep-scrub ok
Oct 02 11:34:08 compute-0 sudo[88360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:34:08 compute-0 sudo[88360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:08 compute-0 sudo[88360]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:08 compute-0 python3[88334]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759404847.581963-33895-91709822366079/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:34:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v100: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:34:08 compute-0 sudo[88409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:08 compute-0 sudo[88409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:08 compute-0 sudo[88409]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:08 compute-0 sudo[88434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:34:08 compute-0 sudo[88434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:08 compute-0 sudo[88434]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:08 compute-0 sudo[88485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:08 compute-0 sudo[88485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:08 compute-0 sudo[88485]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Oct 02 11:34:08 compute-0 sudo[88536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:34:08 compute-0 sudo[88536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:08 compute-0 sudo[88584]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgssrqyvnfgwkyxwgarlboiopzvbruma ; /usr/bin/python3'
Oct 02 11:34:08 compute-0 sudo[88584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Oct 02 11:34:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e35 e35: 3 total, 2 up, 3 in
Oct 02 11:34:08 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 2 up, 3 in
Oct 02 11:34:08 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1863586281; not ready for session (expect reconnect)
Oct 02 11:34:08 compute-0 python3[88586]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:34:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:34:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:34:08 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:34:08 compute-0 sudo[88584]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:08 compute-0 ceph-mon[73607]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 02 11:34:08 compute-0 ceph-mon[73607]: from='osd.2 [v2:192.168.122.102:6800/1863586281,v1:192.168.122.102:6801/1863586281]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct 02 11:34:08 compute-0 ceph-mon[73607]: osdmap e34: 3 total, 2 up, 3 in
Oct 02 11:34:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:34:08 compute-0 ceph-mon[73607]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct 02 11:34:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:08 compute-0 ceph-mon[73607]: 2.5 deep-scrub starts
Oct 02 11:34:08 compute-0 ceph-mon[73607]: 2.5 deep-scrub ok
Oct 02 11:34:08 compute-0 ceph-mon[73607]: pgmap v100: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:34:09 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 02 11:34:09 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 02 11:34:09 compute-0 sudo[88690]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yamljijarmyataluuydnqjvnajzybmqv ; /usr/bin/python3'
Oct 02 11:34:09 compute-0 sudo[88690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:09 compute-0 sudo[88536]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:09 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Oct 02 11:34:09 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Oct 02 11:34:09 compute-0 python3[88692]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759404848.554589-33909-202702235817887/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=faf9f53f21ff0d430108f26d4acac9a999454987 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:34:09 compute-0 sudo[88690]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:34:09 compute-0 sudo[88740]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfehzofeabzuqzxlehxrshvnkwrwzdws ; /usr/bin/python3'
Oct 02 11:34:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:34:09 compute-0 sudo[88740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:09 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 35 pg[2.b( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=11.759516716s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active pruub 82.603874207s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:34:09 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 35 pg[2.1d( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=11.755645752s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active pruub 82.600021362s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:34:09 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 35 pg[2.1c( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=11.759637833s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active pruub 82.604026794s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:34:09 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 35 pg[3.1d( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=12.296730042s) [] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active pruub 83.141128540s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:34:09 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 35 pg[2.b( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=11.759516716s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.603874207s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:34:09 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 35 pg[2.1c( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=11.759637833s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.604026794s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:34:09 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 35 pg[3.1d( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=12.296730042s) [] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.141128540s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:34:09 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 35 pg[3.9( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=12.296663284s) [] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active pruub 83.141212463s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:34:09 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 35 pg[3.9( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=12.296663284s) [] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.141212463s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:34:09 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 35 pg[2.5( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=11.759384155s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active pruub 82.604003906s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:34:09 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 35 pg[2.5( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=11.759384155s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.604003906s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:34:09 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 35 pg[2.1d( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=11.755645752s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.600021362s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:34:09 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 35 pg[2.f( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=11.759341240s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active pruub 82.604110718s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:34:09 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 35 pg[3.e( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=12.296497345s) [] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active pruub 83.141281128s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:34:09 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 35 pg[3.e( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=12.296497345s) [] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.141281128s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:34:09 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 35 pg[2.f( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=11.759341240s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.604110718s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:34:09 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 35 pg[3.15( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=12.296427727s) [] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active pruub 83.141288757s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:34:09 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 35 pg[2.12( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=11.759141922s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active pruub 82.604026794s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:34:09 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 35 pg[3.15( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=12.296427727s) [] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.141288757s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:34:09 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 35 pg[2.12( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=11.759141922s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.604026794s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:34:09 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 35 pg[2.18( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=11.759235382s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active pruub 82.604194641s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:34:09 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 35 pg[2.18( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=11.759235382s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.604194641s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:34:09 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 35 pg[3.1a( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=12.296442986s) [] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active pruub 83.141426086s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:34:09 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 35 pg[3.1a( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=12.296442986s) [] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.141426086s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:34:09 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 35 pg[3.11( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=12.296175957s) [] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active pruub 83.141288757s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:34:09 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 35 pg[3.11( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=12.296175957s) [] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.141288757s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:34:09 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:34:09 compute-0 python3[88742]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:34:09 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:09 compute-0 podman[88743]: 2025-10-02 11:34:09.627359386 +0000 UTC m=+0.043089068 container create 2550f46b1f1b92d2ce6944b5cb9046e0cf574fae3f29e2c00aaa100806349283 (image=quay.io/ceph/ceph:v18, name=zen_sammet, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:34:09 compute-0 systemd[1]: Started libpod-conmon-2550f46b1f1b92d2ce6944b5cb9046e0cf574fae3f29e2c00aaa100806349283.scope.
Oct 02 11:34:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f39ab63a574846ef80dd1560db1dc1a47df51da4e784f15c9c0831207d92a14/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f39ab63a574846ef80dd1560db1dc1a47df51da4e784f15c9c0831207d92a14/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f39ab63a574846ef80dd1560db1dc1a47df51da4e784f15c9c0831207d92a14/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:09 compute-0 podman[88743]: 2025-10-02 11:34:09.696408516 +0000 UTC m=+0.112138218 container init 2550f46b1f1b92d2ce6944b5cb9046e0cf574fae3f29e2c00aaa100806349283 (image=quay.io/ceph/ceph:v18, name=zen_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:34:09 compute-0 podman[88743]: 2025-10-02 11:34:09.702435148 +0000 UTC m=+0.118164830 container start 2550f46b1f1b92d2ce6944b5cb9046e0cf574fae3f29e2c00aaa100806349283 (image=quay.io/ceph/ceph:v18, name=zen_sammet, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:34:09 compute-0 podman[88743]: 2025-10-02 11:34:09.606856748 +0000 UTC m=+0.022586450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:34:09 compute-0 podman[88743]: 2025-10-02 11:34:09.707356163 +0000 UTC m=+0.123085845 container attach 2550f46b1f1b92d2ce6944b5cb9046e0cf574fae3f29e2c00aaa100806349283 (image=quay.io/ceph/ceph:v18, name=zen_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:34:09 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1863586281; not ready for session (expect reconnect)
Oct 02 11:34:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:34:09 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:34:09 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:34:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:34:09 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:34:09 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:10 compute-0 ceph-mon[73607]: purged_snaps scrub starts
Oct 02 11:34:10 compute-0 ceph-mon[73607]: purged_snaps scrub ok
Oct 02 11:34:10 compute-0 ceph-mon[73607]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Oct 02 11:34:10 compute-0 ceph-mon[73607]: osdmap e35: 3 total, 2 up, 3 in
Oct 02 11:34:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:34:10 compute-0 ceph-mon[73607]: 3.4 scrub starts
Oct 02 11:34:10 compute-0 ceph-mon[73607]: 3.4 scrub ok
Oct 02 11:34:10 compute-0 ceph-mon[73607]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 02 11:34:10 compute-0 ceph-mon[73607]: Cluster is now healthy
Oct 02 11:34:10 compute-0 ceph-mon[73607]: 2.7 scrub starts
Oct 02 11:34:10 compute-0 ceph-mon[73607]: 2.7 scrub ok
Oct 02 11:34:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:34:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v102: 69 pgs: 69 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:34:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct 02 11:34:10 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/220112243' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 02 11:34:10 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/220112243' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 02 11:34:10 compute-0 zen_sammet[88758]: 
Oct 02 11:34:10 compute-0 zen_sammet[88758]: [global]
Oct 02 11:34:10 compute-0 zen_sammet[88758]:         fsid = fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct 02 11:34:10 compute-0 zen_sammet[88758]:         mon_host = 192.168.122.100
Oct 02 11:34:10 compute-0 systemd[1]: libpod-2550f46b1f1b92d2ce6944b5cb9046e0cf574fae3f29e2c00aaa100806349283.scope: Deactivated successfully.
Oct 02 11:34:10 compute-0 podman[88783]: 2025-10-02 11:34:10.404041171 +0000 UTC m=+0.023786821 container died 2550f46b1f1b92d2ce6944b5cb9046e0cf574fae3f29e2c00aaa100806349283 (image=quay.io/ceph/ceph:v18, name=zen_sammet, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 11:34:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f39ab63a574846ef80dd1560db1dc1a47df51da4e784f15c9c0831207d92a14-merged.mount: Deactivated successfully.
Oct 02 11:34:10 compute-0 podman[88783]: 2025-10-02 11:34:10.445638029 +0000 UTC m=+0.065383679 container remove 2550f46b1f1b92d2ce6944b5cb9046e0cf574fae3f29e2c00aaa100806349283 (image=quay.io/ceph/ceph:v18, name=zen_sammet, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:34:10 compute-0 systemd[1]: libpod-conmon-2550f46b1f1b92d2ce6944b5cb9046e0cf574fae3f29e2c00aaa100806349283.scope: Deactivated successfully.
Oct 02 11:34:10 compute-0 sudo[88740]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:10 compute-0 sudo[88821]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtemkzmlcvyaqxfhbbauvamsbahcejvh ; /usr/bin/python3'
Oct 02 11:34:10 compute-0 sudo[88821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:10 compute-0 python3[88823]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:34:10 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1863586281; not ready for session (expect reconnect)
Oct 02 11:34:10 compute-0 podman[88824]: 2025-10-02 11:34:10.802255441 +0000 UTC m=+0.042357278 container create ff311ac7e1b13bf7526d2f2debfe5d59fa7e0736c5eb22d4b9940d80a9005fc5 (image=quay.io/ceph/ceph:v18, name=hopeful_mcnulty, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:34:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:34:10 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:34:10 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:34:10 compute-0 systemd[1]: Started libpod-conmon-ff311ac7e1b13bf7526d2f2debfe5d59fa7e0736c5eb22d4b9940d80a9005fc5.scope.
Oct 02 11:34:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27bba738701d8c0d621d21634914522d5ec380114ebe69e0bda4b81d39ee1974/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27bba738701d8c0d621d21634914522d5ec380114ebe69e0bda4b81d39ee1974/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27bba738701d8c0d621d21634914522d5ec380114ebe69e0bda4b81d39ee1974/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:10 compute-0 podman[88824]: 2025-10-02 11:34:10.869700002 +0000 UTC m=+0.109801869 container init ff311ac7e1b13bf7526d2f2debfe5d59fa7e0736c5eb22d4b9940d80a9005fc5 (image=quay.io/ceph/ceph:v18, name=hopeful_mcnulty, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:34:10 compute-0 podman[88824]: 2025-10-02 11:34:10.875863148 +0000 UTC m=+0.115964995 container start ff311ac7e1b13bf7526d2f2debfe5d59fa7e0736c5eb22d4b9940d80a9005fc5 (image=quay.io/ceph/ceph:v18, name=hopeful_mcnulty, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:34:10 compute-0 podman[88824]: 2025-10-02 11:34:10.879307714 +0000 UTC m=+0.119409561 container attach ff311ac7e1b13bf7526d2f2debfe5d59fa7e0736c5eb22d4b9940d80a9005fc5 (image=quay.io/ceph/ceph:v18, name=hopeful_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 11:34:10 compute-0 podman[88824]: 2025-10-02 11:34:10.783496239 +0000 UTC m=+0.023598106 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:34:11 compute-0 ceph-mon[73607]: pgmap v102: 69 pgs: 69 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:34:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/220112243' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 02 11:34:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/220112243' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 02 11:34:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:34:11 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Oct 02 11:34:11 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Oct 02 11:34:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Oct 02 11:34:11 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2085867460' entity='client.admin' 
Oct 02 11:34:11 compute-0 hopeful_mcnulty[88840]: set ssl_option
Oct 02 11:34:11 compute-0 systemd[1]: libpod-ff311ac7e1b13bf7526d2f2debfe5d59fa7e0736c5eb22d4b9940d80a9005fc5.scope: Deactivated successfully.
Oct 02 11:34:11 compute-0 podman[88824]: 2025-10-02 11:34:11.599473843 +0000 UTC m=+0.839575700 container died ff311ac7e1b13bf7526d2f2debfe5d59fa7e0736c5eb22d4b9940d80a9005fc5 (image=quay.io/ceph/ceph:v18, name=hopeful_mcnulty, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 11:34:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-27bba738701d8c0d621d21634914522d5ec380114ebe69e0bda4b81d39ee1974-merged.mount: Deactivated successfully.
Oct 02 11:34:11 compute-0 podman[88824]: 2025-10-02 11:34:11.642761375 +0000 UTC m=+0.882863212 container remove ff311ac7e1b13bf7526d2f2debfe5d59fa7e0736c5eb22d4b9940d80a9005fc5 (image=quay.io/ceph/ceph:v18, name=hopeful_mcnulty, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 11:34:11 compute-0 systemd[1]: libpod-conmon-ff311ac7e1b13bf7526d2f2debfe5d59fa7e0736c5eb22d4b9940d80a9005fc5.scope: Deactivated successfully.
Oct 02 11:34:11 compute-0 sudo[88821]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:11 compute-0 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1863586281; not ready for session (expect reconnect)
Oct 02 11:34:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:34:11 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:34:11 compute-0 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:34:11 compute-0 sudo[88899]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlwvtmmgrkuzyhxthfvbjsfwgsfignly ; /usr/bin/python3'
Oct 02 11:34:11 compute-0 sudo[88899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:11 compute-0 python3[88901]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:34:12 compute-0 podman[88902]: 2025-10-02 11:34:12.001610554 +0000 UTC m=+0.039759904 container create 29787fc73b0554a694fb6dc4616bf09c978c0301b662d9762899ea6f2e7e0511 (image=quay.io/ceph/ceph:v18, name=stoic_pike, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:34:12 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Oct 02 11:34:12 compute-0 systemd[1]: Started libpod-conmon-29787fc73b0554a694fb6dc4616bf09c978c0301b662d9762899ea6f2e7e0511.scope.
Oct 02 11:34:12 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Oct 02 11:34:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:34:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f12394e398e73810a6763b44e5cc4dc1fc1323ccbc94aa0083aeaa4c57274dbd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f12394e398e73810a6763b44e5cc4dc1fc1323ccbc94aa0083aeaa4c57274dbd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f12394e398e73810a6763b44e5cc4dc1fc1323ccbc94aa0083aeaa4c57274dbd/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:12 compute-0 podman[88902]: 2025-10-02 11:34:12.072882471 +0000 UTC m=+0.111031841 container init 29787fc73b0554a694fb6dc4616bf09c978c0301b662d9762899ea6f2e7e0511 (image=quay.io/ceph/ceph:v18, name=stoic_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:34:12 compute-0 podman[88902]: 2025-10-02 11:34:12.078933354 +0000 UTC m=+0.117082714 container start 29787fc73b0554a694fb6dc4616bf09c978c0301b662d9762899ea6f2e7e0511 (image=quay.io/ceph/ceph:v18, name=stoic_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:34:12 compute-0 podman[88902]: 2025-10-02 11:34:11.984140403 +0000 UTC m=+0.022289773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:34:12 compute-0 podman[88902]: 2025-10-02 11:34:12.082389081 +0000 UTC m=+0.120538431 container attach 29787fc73b0554a694fb6dc4616bf09c978c0301b662d9762899ea6f2e7e0511 (image=quay.io/ceph/ceph:v18, name=stoic_pike, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 11:34:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v103: 69 pgs: 69 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:34:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Oct 02 11:34:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:34:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:34:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:34:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:34:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:34:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:34:12 compute-0 ceph-mon[73607]: 3.6 scrub starts
Oct 02 11:34:12 compute-0 ceph-mon[73607]: 3.6 scrub ok
Oct 02 11:34:12 compute-0 ceph-mon[73607]: 2.8 scrub starts
Oct 02 11:34:12 compute-0 ceph-mon[73607]: 2.8 scrub ok
Oct 02 11:34:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2085867460' entity='client.admin' 
Oct 02 11:34:12 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:34:12 compute-0 ceph-mon[73607]: OSD bench result of 6028.959553 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 02 11:34:12 compute-0 ceph-mon[73607]: pgmap v103: 69 pgs: 69 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:34:12 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14304 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:34:12 compute-0 ceph-mgr[73901]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 02 11:34:12 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 02 11:34:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 02 11:34:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Oct 02 11:34:12 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/1863586281,v1:192.168.122.102:6801/1863586281] boot
Oct 02 11:34:12 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Oct 02 11:34:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:34:12 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:34:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:12 compute-0 ceph-mgr[73901]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Oct 02 11:34:12 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Oct 02 11:34:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Oct 02 11:34:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:12 compute-0 stoic_pike[88917]: Scheduled rgw.rgw update...
Oct 02 11:34:12 compute-0 stoic_pike[88917]: Scheduled ingress.rgw.default update...
Oct 02 11:34:12 compute-0 systemd[1]: libpod-29787fc73b0554a694fb6dc4616bf09c978c0301b662d9762899ea6f2e7e0511.scope: Deactivated successfully.
Oct 02 11:34:12 compute-0 podman[88902]: 2025-10-02 11:34:12.94612887 +0000 UTC m=+0.984278220 container died 29787fc73b0554a694fb6dc4616bf09c978c0301b662d9762899ea6f2e7e0511 (image=quay.io/ceph/ceph:v18, name=stoic_pike, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:34:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-f12394e398e73810a6763b44e5cc4dc1fc1323ccbc94aa0083aeaa4c57274dbd-merged.mount: Deactivated successfully.
Oct 02 11:34:13 compute-0 podman[88902]: 2025-10-02 11:34:13.128272444 +0000 UTC m=+1.166421794 container remove 29787fc73b0554a694fb6dc4616bf09c978c0301b662d9762899ea6f2e7e0511 (image=quay.io/ceph/ceph:v18, name=stoic_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 11:34:13 compute-0 sudo[88899]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:13 compute-0 systemd[1]: libpod-conmon-29787fc73b0554a694fb6dc4616bf09c978c0301b662d9762899ea6f2e7e0511.scope: Deactivated successfully.
Oct 02 11:34:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:34:13 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:34:13 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Oct 02 11:34:13 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 02 11:34:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 36 pg[2.1c( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=36 pruub=7.757236958s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.604026794s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:34:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 36 pg[3.1d( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=8.294339180s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.141128540s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:34:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 36 pg[2.1c( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=36 pruub=7.757193089s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.604026794s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:34:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 36 pg[3.1d( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=8.294297218s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.141128540s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:34:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 36 pg[2.1d( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=36 pruub=7.753047466s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.600021362s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:34:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 36 pg[2.1d( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=36 pruub=7.753000736s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.600021362s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:34:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 36 pg[3.9( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=8.294166565s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.141212463s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:34:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 36 pg[2.b( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=36 pruub=7.756830692s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.603874207s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:34:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 36 pg[2.5( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=36 pruub=7.756920338s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.604003906s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:34:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 36 pg[2.b( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=36 pruub=7.756779194s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.603874207s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:34:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 36 pg[2.5( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=36 pruub=7.756895542s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.604003906s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:34:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 36 pg[3.9( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=8.294144630s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.141212463s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:34:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 36 pg[2.f( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=36 pruub=7.756800175s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.604110718s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:34:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 36 pg[3.11( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=8.293923378s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.141288757s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:34:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 36 pg[2.f( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=36 pruub=7.756779194s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.604110718s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:34:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 36 pg[3.15( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=8.293908119s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.141288757s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:34:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 36 pg[3.11( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=8.293899536s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.141288757s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:34:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 36 pg[2.12( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=36 pruub=7.756608963s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.604026794s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:34:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 36 pg[3.15( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=8.293888092s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.141288757s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:34:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 36 pg[3.e( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=8.293848038s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.141281128s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:34:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 36 pg[2.12( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=36 pruub=7.756587982s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.604026794s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:34:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 36 pg[3.e( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=8.293799400s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.141281128s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:34:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 36 pg[2.18( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=36 pruub=7.756672859s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.604194641s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:34:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 36 pg[3.1a( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=8.293883324s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.141426086s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:34:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 36 pg[2.18( empty local-lis/les=21/22 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=36 pruub=7.756656170s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.604194641s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:34:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 36 pg[3.1a( empty local-lis/les=28/29 n=0 ec=21/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=8.293862343s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.141426086s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:34:13 compute-0 ceph-mgr[73901]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.8M
Oct 02 11:34:13 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.8M
Oct 02 11:34:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Oct 02 11:34:13 compute-0 ceph-mgr[73901]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134062899: error parsing value: Value '134062899' is below minimum 939524096
Oct 02 11:34:13 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134062899: error parsing value: Value '134062899' is below minimum 939524096
Oct 02 11:34:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:34:13 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:34:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:34:13 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:34:13 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct 02 11:34:13 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct 02 11:34:13 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct 02 11:34:13 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct 02 11:34:13 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct 02 11:34:13 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Oct 02 11:34:13 compute-0 sudo[88954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:13 compute-0 sudo[88954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:13 compute-0 sudo[88954]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:13 compute-0 sudo[88979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 02 11:34:13 compute-0 sudo[88979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:13 compute-0 sudo[88979]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:13 compute-0 sudo[89004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:13 compute-0 sudo[89004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:13 compute-0 sudo[89004]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:13 compute-0 sudo[89029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/etc/ceph
Oct 02 11:34:13 compute-0 sudo[89029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:13 compute-0 sudo[89029]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Oct 02 11:34:13 compute-0 ceph-mon[73607]: 2.11 scrub starts
Oct 02 11:34:13 compute-0 ceph-mon[73607]: 2.11 scrub ok
Oct 02 11:34:13 compute-0 ceph-mon[73607]: from='client.14304 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:34:13 compute-0 ceph-mon[73607]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 02 11:34:13 compute-0 ceph-mon[73607]: osd.2 [v2:192.168.122.102:6800/1863586281,v1:192.168.122.102:6801/1863586281] boot
Oct 02 11:34:13 compute-0 ceph-mon[73607]: osdmap e36: 3 total, 3 up, 3 in
Oct 02 11:34:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:34:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:13 compute-0 ceph-mon[73607]: Saving service ingress.rgw.default spec with placement count:2
Oct 02 11:34:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 02 11:34:13 compute-0 ceph-mon[73607]: Adjusting osd_memory_target on compute-2 to 127.8M
Oct 02 11:34:13 compute-0 ceph-mon[73607]: Unable to set osd_memory_target on compute-2 to 134062899: error parsing value: Value '134062899' is below minimum 939524096
Oct 02 11:34:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:34:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:34:13 compute-0 ceph-mon[73607]: Updating compute-0:/etc/ceph/ceph.conf
Oct 02 11:34:13 compute-0 ceph-mon[73607]: Updating compute-1:/etc/ceph/ceph.conf
Oct 02 11:34:13 compute-0 ceph-mon[73607]: Updating compute-2:/etc/ceph/ceph.conf
Oct 02 11:34:13 compute-0 sudo[89054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:13 compute-0 sudo[89054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:13 compute-0 sudo[89054]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Oct 02 11:34:13 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Oct 02 11:34:13 compute-0 sudo[89079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/etc/ceph/ceph.conf.new
Oct 02 11:34:13 compute-0 sudo[89079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:13 compute-0 sudo[89079]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:13 compute-0 sudo[89104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:13 compute-0 sudo[89104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:13 compute-0 sudo[89104]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:13 compute-0 sudo[89129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct 02 11:34:13 compute-0 sudo[89129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:13 compute-0 sudo[89129]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:14 compute-0 sudo[89166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:14 compute-0 sudo[89166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:14 compute-0 sudo[89166]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:14 compute-0 sudo[89216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/etc/ceph/ceph.conf.new
Oct 02 11:34:14 compute-0 sudo[89216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:14 compute-0 sudo[89216]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:14 compute-0 sudo[89303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:14 compute-0 sudo[89303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:14 compute-0 sudo[89303]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v106: 69 pgs: 11 peering, 58 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:34:14 compute-0 sudo[89328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/etc/ceph/ceph.conf.new
Oct 02 11:34:14 compute-0 sudo[89328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:14 compute-0 sudo[89328]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:14 compute-0 python3[89299]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:34:14 compute-0 sudo[89353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:14 compute-0 sudo[89353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:14 compute-0 sudo[89353]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:34:14 compute-0 sudo[89379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/etc/ceph/ceph.conf.new
Oct 02 11:34:14 compute-0 sudo[89379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:14 compute-0 sudo[89379]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:14 compute-0 sudo[89429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:14 compute-0 sudo[89429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:14 compute-0 sudo[89429]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:14 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf
Oct 02 11:34:14 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf
Oct 02 11:34:14 compute-0 sudo[89480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 02 11:34:14 compute-0 sudo[89480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:14 compute-0 sudo[89480]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:14 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf
Oct 02 11:34:14 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf
Oct 02 11:34:14 compute-0 sudo[89524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:14 compute-0 sudo[89524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:14 compute-0 sudo[89524]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:14 compute-0 sudo[89549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config
Oct 02 11:34:14 compute-0 sudo[89549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:14 compute-0 sudo[89549]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:14 compute-0 python3[89516]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759404853.9936295-33950-42338196838517/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:34:14 compute-0 sudo[89574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:14 compute-0 sudo[89574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:14 compute-0 sudo[89574]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:14 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf
Oct 02 11:34:14 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf
Oct 02 11:34:14 compute-0 sudo[89620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config
Oct 02 11:34:14 compute-0 sudo[89620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:14 compute-0 sudo[89620]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:14 compute-0 sudo[89648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:14 compute-0 sudo[89648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:14 compute-0 sudo[89648]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:14 compute-0 sudo[89673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf.new
Oct 02 11:34:14 compute-0 sudo[89673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:14 compute-0 sudo[89673]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:14 compute-0 sudo[89698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:14 compute-0 sudo[89698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:14 compute-0 sudo[89698]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:14 compute-0 ceph-mon[73607]: osdmap e37: 3 total, 3 up, 3 in
Oct 02 11:34:14 compute-0 ceph-mon[73607]: pgmap v106: 69 pgs: 11 peering, 58 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:34:14 compute-0 ceph-mon[73607]: Updating compute-1:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf
Oct 02 11:34:14 compute-0 ceph-mon[73607]: Updating compute-0:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf
Oct 02 11:34:14 compute-0 sudo[89766]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbkuhmjgytvsnreweqirhukvgooprjxw ; /usr/bin/python3'
Oct 02 11:34:14 compute-0 sudo[89766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:14 compute-0 sudo[89728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct 02 11:34:14 compute-0 sudo[89728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:15 compute-0 sudo[89728]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:15 compute-0 sudo[89774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:15 compute-0 sudo[89774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:15 compute-0 sudo[89774]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:15 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 2.14 deep-scrub starts
Oct 02 11:34:15 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 2.14 deep-scrub ok
Oct 02 11:34:15 compute-0 sudo[89799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf.new
Oct 02 11:34:15 compute-0 sudo[89799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:15 compute-0 sudo[89799]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:15 compute-0 python3[89771]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 '
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:34:15 compute-0 sudo[89853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:15 compute-0 sudo[89853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:15 compute-0 sudo[89853]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:15 compute-0 podman[89845]: 2025-10-02 11:34:15.21529828 +0000 UTC m=+0.068271883 container create 845d088a4d11f701fb5e53f2c69c866b3b53cdd5fdd6e09ff6dc0567950e644a (image=quay.io/ceph/ceph:v18, name=infallible_hamilton, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 11:34:15 compute-0 sudo[89885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf.new
Oct 02 11:34:15 compute-0 sudo[89885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:15 compute-0 sudo[89885]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:15 compute-0 podman[89845]: 2025-10-02 11:34:15.167477944 +0000 UTC m=+0.020451567 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:34:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:34:15 compute-0 sudo[89910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:15 compute-0 sudo[89910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:15 compute-0 sudo[89910]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:15 compute-0 systemd[1]: Started libpod-conmon-845d088a4d11f701fb5e53f2c69c866b3b53cdd5fdd6e09ff6dc0567950e644a.scope.
Oct 02 11:34:15 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:34:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:34:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87236a281559c0e31669b585c977d6875ea8beb06dccbd19745ae05f2573fa16/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87236a281559c0e31669b585c977d6875ea8beb06dccbd19745ae05f2573fa16/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87236a281559c0e31669b585c977d6875ea8beb06dccbd19745ae05f2573fa16/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:15 compute-0 sudo[89937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf.new
Oct 02 11:34:15 compute-0 sudo[89937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:15 compute-0 sudo[89937]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:15 compute-0 sudo[89965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:15 compute-0 sudo[89965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:15 compute-0 sudo[89965]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:15 compute-0 podman[89845]: 2025-10-02 11:34:15.464995936 +0000 UTC m=+0.317969549 container init 845d088a4d11f701fb5e53f2c69c866b3b53cdd5fdd6e09ff6dc0567950e644a (image=quay.io/ceph/ceph:v18, name=infallible_hamilton, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:34:15 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:15 compute-0 podman[89845]: 2025-10-02 11:34:15.47191231 +0000 UTC m=+0.324885913 container start 845d088a4d11f701fb5e53f2c69c866b3b53cdd5fdd6e09ff6dc0567950e644a (image=quay.io/ceph/ceph:v18, name=infallible_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 11:34:15 compute-0 sudo[89990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf.new /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf
Oct 02 11:34:15 compute-0 sudo[89990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:15 compute-0 sudo[89990]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:34:15 compute-0 podman[89845]: 2025-10-02 11:34:15.692154913 +0000 UTC m=+0.545128556 container attach 845d088a4d11f701fb5e53f2c69c866b3b53cdd5fdd6e09ff6dc0567950e644a (image=quay.io/ceph/ceph:v18, name=infallible_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:34:15 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:34:15 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:34:15 compute-0 ceph-mon[73607]: 3.7 scrub starts
Oct 02 11:34:15 compute-0 ceph-mon[73607]: 3.7 scrub ok
Oct 02 11:34:15 compute-0 ceph-mon[73607]: Updating compute-2:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf
Oct 02 11:34:15 compute-0 ceph-mon[73607]: 2.14 deep-scrub starts
Oct 02 11:34:15 compute-0 ceph-mon[73607]: 2.14 deep-scrub ok
Oct 02 11:34:15 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:15 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:15 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:15 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:15 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:34:16 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:34:16 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:16 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 5b8794ed-78cb-4a30-bbfa-82601dfcfcca does not exist
Oct 02 11:34:16 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 35eb78cb-c915-41b0-9033-822bc3312243 does not exist
Oct 02 11:34:16 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev e20e2768-0642-4a10-b716-769f0bb10fbe does not exist
Oct 02 11:34:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:34:16 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:34:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:34:16 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:34:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:34:16 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:34:16 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14310 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:34:16 compute-0 ceph-mgr[73901]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct 02 11:34:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Oct 02 11:34:16 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 02 11:34:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Oct 02 11:34:16 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 02 11:34:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Oct 02 11:34:16 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 02 11:34:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Oct 02 11:34:16 compute-0 ceph-mon[73607]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 02 11:34:16 compute-0 ceph-mon[73607]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 02 11:34:16 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0[73603]: 2025-10-02T11:34:16.058+0000 7f1d62f33640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 02 11:34:16 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 02 11:34:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).mds e2 new map
Oct 02 11:34:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-02T11:34:16.058920+0000
                                           modified        2025-10-02T11:34:16.058958+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
Oct 02 11:34:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Oct 02 11:34:16 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Oct 02 11:34:16 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Oct 02 11:34:16 compute-0 ceph-mgr[73901]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 02 11:34:16 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 02 11:34:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 02 11:34:16 compute-0 sudo[90035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:16 compute-0 sudo[90035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:16 compute-0 sudo[90035]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:16 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:16 compute-0 ceph-mgr[73901]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct 02 11:34:16 compute-0 systemd[1]: libpod-845d088a4d11f701fb5e53f2c69c866b3b53cdd5fdd6e09ff6dc0567950e644a.scope: Deactivated successfully.
Oct 02 11:34:16 compute-0 podman[89845]: 2025-10-02 11:34:16.113974681 +0000 UTC m=+0.966948284 container died 845d088a4d11f701fb5e53f2c69c866b3b53cdd5fdd6e09ff6dc0567950e644a (image=quay.io/ceph/ceph:v18, name=infallible_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Oct 02 11:34:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-87236a281559c0e31669b585c977d6875ea8beb06dccbd19745ae05f2573fa16-merged.mount: Deactivated successfully.
Oct 02 11:34:16 compute-0 sudo[90062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:34:16 compute-0 sudo[90062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:16 compute-0 sudo[90062]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:16 compute-0 podman[89845]: 2025-10-02 11:34:16.164777231 +0000 UTC m=+1.017750834 container remove 845d088a4d11f701fb5e53f2c69c866b3b53cdd5fdd6e09ff6dc0567950e644a (image=quay.io/ceph/ceph:v18, name=infallible_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 11:34:16 compute-0 systemd[1]: libpod-conmon-845d088a4d11f701fb5e53f2c69c866b3b53cdd5fdd6e09ff6dc0567950e644a.scope: Deactivated successfully.
Oct 02 11:34:16 compute-0 sudo[89766]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:16 compute-0 sudo[90099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:16 compute-0 sudo[90099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:16 compute-0 sudo[90099]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v108: 69 pgs: 1 active+clean+scrubbing+deep, 11 peering, 57 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:34:16 compute-0 sudo[90124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:34:16 compute-0 sudo[90124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:16 compute-0 sudo[90172]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ledozyudnceyzlutcurplrkxiaijnyrb ; /usr/bin/python3'
Oct 02 11:34:16 compute-0 sudo[90172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:16 compute-0 python3[90174]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:34:16 compute-0 podman[90202]: 2025-10-02 11:34:16.533304654 +0000 UTC m=+0.045568490 container create d09971e458a14b45f055f3f33cbc931e92923f75408afb08a9f45e84808007de (image=quay.io/ceph/ceph:v18, name=distracted_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Oct 02 11:34:16 compute-0 podman[90226]: 2025-10-02 11:34:16.572674376 +0000 UTC m=+0.040445371 container create 6e42baf4ff3bb0cf0967305ebecbdd036d014ff037c6ef16d23189ca2c08fc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 11:34:16 compute-0 systemd[1]: Started libpod-conmon-d09971e458a14b45f055f3f33cbc931e92923f75408afb08a9f45e84808007de.scope.
Oct 02 11:34:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:34:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66ff5f4f1a1cc9ad4c40d9b1429265f95d002878107b1f5ee8878ad55fc55b1d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66ff5f4f1a1cc9ad4c40d9b1429265f95d002878107b1f5ee8878ad55fc55b1d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66ff5f4f1a1cc9ad4c40d9b1429265f95d002878107b1f5ee8878ad55fc55b1d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:16 compute-0 systemd[1]: Started libpod-conmon-6e42baf4ff3bb0cf0967305ebecbdd036d014ff037c6ef16d23189ca2c08fc4c.scope.
Oct 02 11:34:16 compute-0 podman[90202]: 2025-10-02 11:34:16.517330951 +0000 UTC m=+0.029594827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:34:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:34:16 compute-0 podman[90202]: 2025-10-02 11:34:16.64260834 +0000 UTC m=+0.154872216 container init d09971e458a14b45f055f3f33cbc931e92923f75408afb08a9f45e84808007de (image=quay.io/ceph/ceph:v18, name=distracted_payne, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:34:16 compute-0 podman[90202]: 2025-10-02 11:34:16.648726095 +0000 UTC m=+0.160989941 container start d09971e458a14b45f055f3f33cbc931e92923f75408afb08a9f45e84808007de (image=quay.io/ceph/ceph:v18, name=distracted_payne, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:34:16 compute-0 podman[90226]: 2025-10-02 11:34:16.552288783 +0000 UTC m=+0.020059798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:34:16 compute-0 podman[90202]: 2025-10-02 11:34:16.65727839 +0000 UTC m=+0.169542256 container attach d09971e458a14b45f055f3f33cbc931e92923f75408afb08a9f45e84808007de (image=quay.io/ceph/ceph:v18, name=distracted_payne, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 11:34:16 compute-0 podman[90226]: 2025-10-02 11:34:16.68425114 +0000 UTC m=+0.152022135 container init 6e42baf4ff3bb0cf0967305ebecbdd036d014ff037c6ef16d23189ca2c08fc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_stonebraker, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:34:16 compute-0 podman[90226]: 2025-10-02 11:34:16.689587075 +0000 UTC m=+0.157358070 container start 6e42baf4ff3bb0cf0967305ebecbdd036d014ff037c6ef16d23189ca2c08fc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_stonebraker, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 11:34:16 compute-0 vigorous_stonebraker[90249]: 167 167
Oct 02 11:34:16 compute-0 podman[90226]: 2025-10-02 11:34:16.693237276 +0000 UTC m=+0.161008261 container attach 6e42baf4ff3bb0cf0967305ebecbdd036d014ff037c6ef16d23189ca2c08fc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 11:34:16 compute-0 systemd[1]: libpod-6e42baf4ff3bb0cf0967305ebecbdd036d014ff037c6ef16d23189ca2c08fc4c.scope: Deactivated successfully.
Oct 02 11:34:16 compute-0 podman[90226]: 2025-10-02 11:34:16.694016326 +0000 UTC m=+0.161787321 container died 6e42baf4ff3bb0cf0967305ebecbdd036d014ff037c6ef16d23189ca2c08fc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_stonebraker, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 11:34:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e70241e0d9ebf0c3a0fd4e903a087a096a418ec06eea2f7e71f1d930f11cb2f5-merged.mount: Deactivated successfully.
Oct 02 11:34:16 compute-0 podman[90226]: 2025-10-02 11:34:16.732643291 +0000 UTC m=+0.200414286 container remove 6e42baf4ff3bb0cf0967305ebecbdd036d014ff037c6ef16d23189ca2c08fc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct 02 11:34:16 compute-0 systemd[1]: libpod-conmon-6e42baf4ff3bb0cf0967305ebecbdd036d014ff037c6ef16d23189ca2c08fc4c.scope: Deactivated successfully.
Oct 02 11:34:16 compute-0 podman[90273]: 2025-10-02 11:34:16.884894849 +0000 UTC m=+0.045002855 container create b7fce66ff3897599c9c56208a7cb5421b46fa325621b2f4a7d4f71559da8f935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:34:16 compute-0 systemd[1]: Started libpod-conmon-b7fce66ff3897599c9c56208a7cb5421b46fa325621b2f4a7d4f71559da8f935.scope.
Oct 02 11:34:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:34:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877ebd553b9033ca08a85ff233e19f81ebc058230441c11f805bb1f1927e1cd9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877ebd553b9033ca08a85ff233e19f81ebc058230441c11f805bb1f1927e1cd9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877ebd553b9033ca08a85ff233e19f81ebc058230441c11f805bb1f1927e1cd9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877ebd553b9033ca08a85ff233e19f81ebc058230441c11f805bb1f1927e1cd9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877ebd553b9033ca08a85ff233e19f81ebc058230441c11f805bb1f1927e1cd9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:16 compute-0 podman[90273]: 2025-10-02 11:34:16.959247684 +0000 UTC m=+0.119355700 container init b7fce66ff3897599c9c56208a7cb5421b46fa325621b2f4a7d4f71559da8f935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_dhawan, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 11:34:16 compute-0 podman[90273]: 2025-10-02 11:34:16.866799423 +0000 UTC m=+0.026907459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:34:16 compute-0 podman[90273]: 2025-10-02 11:34:16.970457397 +0000 UTC m=+0.130565403 container start b7fce66ff3897599c9c56208a7cb5421b46fa325621b2f4a7d4f71559da8f935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 11:34:16 compute-0 podman[90273]: 2025-10-02 11:34:16.974736284 +0000 UTC m=+0.134844340 container attach b7fce66ff3897599c9c56208a7cb5421b46fa325621b2f4a7d4f71559da8f935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 11:34:17 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Oct 02 11:34:17 compute-0 ceph-mon[73607]: 3.1b scrub starts
Oct 02 11:34:17 compute-0 ceph-mon[73607]: 3.1b scrub ok
Oct 02 11:34:17 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:17 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:17 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:17 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:34:17 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:34:17 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:34:17 compute-0 ceph-mon[73607]: from='client.14310 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:34:17 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 02 11:34:17 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 02 11:34:17 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 02 11:34:17 compute-0 ceph-mon[73607]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 02 11:34:17 compute-0 ceph-mon[73607]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 02 11:34:17 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 02 11:34:17 compute-0 ceph-mon[73607]: osdmap e38: 3 total, 3 up, 3 in
Oct 02 11:34:17 compute-0 ceph-mon[73607]: fsmap cephfs:0
Oct 02 11:34:17 compute-0 ceph-mon[73607]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 02 11:34:17 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:17 compute-0 ceph-mon[73607]: pgmap v108: 69 pgs: 1 active+clean+scrubbing+deep, 11 peering, 57 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:34:17 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Oct 02 11:34:17 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14316 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:34:17 compute-0 ceph-mgr[73901]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 02 11:34:17 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 02 11:34:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 02 11:34:17 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:17 compute-0 distracted_payne[90242]: Scheduled mds.cephfs update...
Oct 02 11:34:17 compute-0 systemd[1]: libpod-d09971e458a14b45f055f3f33cbc931e92923f75408afb08a9f45e84808007de.scope: Deactivated successfully.
Oct 02 11:34:17 compute-0 podman[90202]: 2025-10-02 11:34:17.272154945 +0000 UTC m=+0.784418801 container died d09971e458a14b45f055f3f33cbc931e92923f75408afb08a9f45e84808007de (image=quay.io/ceph/ceph:v18, name=distracted_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 11:34:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-66ff5f4f1a1cc9ad4c40d9b1429265f95d002878107b1f5ee8878ad55fc55b1d-merged.mount: Deactivated successfully.
Oct 02 11:34:17 compute-0 podman[90202]: 2025-10-02 11:34:17.314740518 +0000 UTC m=+0.827004364 container remove d09971e458a14b45f055f3f33cbc931e92923f75408afb08a9f45e84808007de (image=quay.io/ceph/ceph:v18, name=distracted_payne, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:34:17 compute-0 systemd[1]: libpod-conmon-d09971e458a14b45f055f3f33cbc931e92923f75408afb08a9f45e84808007de.scope: Deactivated successfully.
Oct 02 11:34:17 compute-0 sudo[90172]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:17 compute-0 lucid_dhawan[90290]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:34:17 compute-0 lucid_dhawan[90290]: --> relative data size: 1.0
Oct 02 11:34:17 compute-0 lucid_dhawan[90290]: --> All data devices are unavailable
Oct 02 11:34:17 compute-0 systemd[1]: libpod-b7fce66ff3897599c9c56208a7cb5421b46fa325621b2f4a7d4f71559da8f935.scope: Deactivated successfully.
Oct 02 11:34:17 compute-0 podman[90273]: 2025-10-02 11:34:17.799569794 +0000 UTC m=+0.959677820 container died b7fce66ff3897599c9c56208a7cb5421b46fa325621b2f4a7d4f71559da8f935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 11:34:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-877ebd553b9033ca08a85ff233e19f81ebc058230441c11f805bb1f1927e1cd9-merged.mount: Deactivated successfully.
Oct 02 11:34:17 compute-0 sudo[90424]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzxhbsuwbvubayoszepoyipnmofqarar ; /usr/bin/python3'
Oct 02 11:34:17 compute-0 sudo[90424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:17 compute-0 podman[90273]: 2025-10-02 11:34:17.894405295 +0000 UTC m=+1.054513301 container remove b7fce66ff3897599c9c56208a7cb5421b46fa325621b2f4a7d4f71559da8f935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 11:34:17 compute-0 systemd[1]: libpod-conmon-b7fce66ff3897599c9c56208a7cb5421b46fa325621b2f4a7d4f71559da8f935.scope: Deactivated successfully.
Oct 02 11:34:17 compute-0 sudo[90124]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:17 compute-0 sudo[90427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:17 compute-0 sudo[90427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:17 compute-0 sudo[90427]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:18 compute-0 python3[90426]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:34:18 compute-0 sudo[90424]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:18 compute-0 sudo[90452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:34:18 compute-0 sudo[90452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:18 compute-0 sudo[90452]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:18 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Oct 02 11:34:18 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Oct 02 11:34:18 compute-0 sudo[90477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:18 compute-0 sudo[90477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:18 compute-0 sudo[90477]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:18 compute-0 sudo[90525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 11:34:18 compute-0 sudo[90525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:18 compute-0 sudo[90597]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jthocaqbhrfdmsaszamdcrjodxnnxvhc ; /usr/bin/python3'
Oct 02 11:34:18 compute-0 sudo[90597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v109: 69 pgs: 1 active+clean+scrubbing+deep, 68 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:34:18 compute-0 ceph-mon[73607]: 2.16 scrub starts
Oct 02 11:34:18 compute-0 ceph-mon[73607]: 2.16 scrub ok
Oct 02 11:34:18 compute-0 ceph-mon[73607]: from='client.14316 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:34:18 compute-0 ceph-mon[73607]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 02 11:34:18 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:18 compute-0 python3[90599]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759404857.7387397-33980-191212296812949/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=879f4ae20801e566b8dfcda89b2df304e135843d backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:34:18 compute-0 sudo[90597]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:18 compute-0 podman[90658]: 2025-10-02 11:34:18.475259201 +0000 UTC m=+0.037940017 container create 406041d6462109c55e3ffb898e221f9ed17737a66e9bd2fdc9ca6c9a677cdea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_antonelli, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 11:34:18 compute-0 systemd[1]: Started libpod-conmon-406041d6462109c55e3ffb898e221f9ed17737a66e9bd2fdc9ca6c9a677cdea8.scope.
Oct 02 11:34:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:34:18 compute-0 podman[90658]: 2025-10-02 11:34:18.541890202 +0000 UTC m=+0.104571028 container init 406041d6462109c55e3ffb898e221f9ed17737a66e9bd2fdc9ca6c9a677cdea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:34:18 compute-0 podman[90658]: 2025-10-02 11:34:18.550172371 +0000 UTC m=+0.112853187 container start 406041d6462109c55e3ffb898e221f9ed17737a66e9bd2fdc9ca6c9a677cdea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:34:18 compute-0 podman[90658]: 2025-10-02 11:34:18.457190536 +0000 UTC m=+0.019871372 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:34:18 compute-0 podman[90658]: 2025-10-02 11:34:18.554891799 +0000 UTC m=+0.117572645 container attach 406041d6462109c55e3ffb898e221f9ed17737a66e9bd2fdc9ca6c9a677cdea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_antonelli, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 11:34:18 compute-0 heuristic_antonelli[90680]: 167 167
Oct 02 11:34:18 compute-0 systemd[1]: libpod-406041d6462109c55e3ffb898e221f9ed17737a66e9bd2fdc9ca6c9a677cdea8.scope: Deactivated successfully.
Oct 02 11:34:18 compute-0 podman[90658]: 2025-10-02 11:34:18.558535242 +0000 UTC m=+0.121216058 container died 406041d6462109c55e3ffb898e221f9ed17737a66e9bd2fdc9ca6c9a677cdea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 11:34:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ea424146a32c247bca6e75f6fc41bfef49c16d0950a0ee10f3d0d2b705b6b57-merged.mount: Deactivated successfully.
Oct 02 11:34:18 compute-0 podman[90658]: 2025-10-02 11:34:18.595167025 +0000 UTC m=+0.157847841 container remove 406041d6462109c55e3ffb898e221f9ed17737a66e9bd2fdc9ca6c9a677cdea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_antonelli, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 11:34:18 compute-0 systemd[1]: libpod-conmon-406041d6462109c55e3ffb898e221f9ed17737a66e9bd2fdc9ca6c9a677cdea8.scope: Deactivated successfully.
Oct 02 11:34:18 compute-0 sudo[90741]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nazqydotcjmduscphrgwvrzubpygaksv ; /usr/bin/python3'
Oct 02 11:34:18 compute-0 podman[90704]: 2025-10-02 11:34:18.761433437 +0000 UTC m=+0.053255124 container create 224b01779eb805d9c84f8b756d69fd5af0306c70d626ab4497526a27ef55cbcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lederberg, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:34:18 compute-0 sudo[90741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:18 compute-0 systemd[1]: Started libpod-conmon-224b01779eb805d9c84f8b756d69fd5af0306c70d626ab4497526a27ef55cbcd.scope.
Oct 02 11:34:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:34:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96d572b7b30fc006c4967c40cb4857a92a95c43912192dcd62d3e97241356320/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96d572b7b30fc006c4967c40cb4857a92a95c43912192dcd62d3e97241356320/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96d572b7b30fc006c4967c40cb4857a92a95c43912192dcd62d3e97241356320/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96d572b7b30fc006c4967c40cb4857a92a95c43912192dcd62d3e97241356320/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:18 compute-0 podman[90704]: 2025-10-02 11:34:18.732097378 +0000 UTC m=+0.023919085 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:34:18 compute-0 podman[90704]: 2025-10-02 11:34:18.842587633 +0000 UTC m=+0.134409350 container init 224b01779eb805d9c84f8b756d69fd5af0306c70d626ab4497526a27ef55cbcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lederberg, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 11:34:18 compute-0 podman[90704]: 2025-10-02 11:34:18.851171781 +0000 UTC m=+0.142993498 container start 224b01779eb805d9c84f8b756d69fd5af0306c70d626ab4497526a27ef55cbcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lederberg, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 11:34:18 compute-0 podman[90704]: 2025-10-02 11:34:18.855407187 +0000 UTC m=+0.147228884 container attach 224b01779eb805d9c84f8b756d69fd5af0306c70d626ab4497526a27ef55cbcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lederberg, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 11:34:18 compute-0 python3[90743]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:34:18 compute-0 podman[90751]: 2025-10-02 11:34:18.971300389 +0000 UTC m=+0.040674947 container create 97b4f0ff5cb131917bf020ebc9448b14df2a35dee7e14e51d5532ad219a4fb7f (image=quay.io/ceph/ceph:v18, name=epic_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 11:34:19 compute-0 systemd[1]: Started libpod-conmon-97b4f0ff5cb131917bf020ebc9448b14df2a35dee7e14e51d5532ad219a4fb7f.scope.
Oct 02 11:34:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:34:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0e3fbfa51cb602ea2955e0dc48888e571f8d2cd5c36fb81930d2504cf96a424/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0e3fbfa51cb602ea2955e0dc48888e571f8d2cd5c36fb81930d2504cf96a424/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:19 compute-0 podman[90751]: 2025-10-02 11:34:19.041685904 +0000 UTC m=+0.111060492 container init 97b4f0ff5cb131917bf020ebc9448b14df2a35dee7e14e51d5532ad219a4fb7f (image=quay.io/ceph/ceph:v18, name=epic_chatelet, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 11:34:19 compute-0 podman[90751]: 2025-10-02 11:34:18.952051424 +0000 UTC m=+0.021426012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:34:19 compute-0 podman[90751]: 2025-10-02 11:34:19.049999324 +0000 UTC m=+0.119373882 container start 97b4f0ff5cb131917bf020ebc9448b14df2a35dee7e14e51d5532ad219a4fb7f (image=quay.io/ceph/ceph:v18, name=epic_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 11:34:19 compute-0 podman[90751]: 2025-10-02 11:34:19.053759168 +0000 UTC m=+0.123133756 container attach 97b4f0ff5cb131917bf020ebc9448b14df2a35dee7e14e51d5532ad219a4fb7f (image=quay.io/ceph/ceph:v18, name=epic_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:34:19 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Oct 02 11:34:19 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Oct 02 11:34:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:34:19 compute-0 ceph-mon[73607]: 2.17 scrub starts
Oct 02 11:34:19 compute-0 ceph-mon[73607]: 2.17 scrub ok
Oct 02 11:34:19 compute-0 ceph-mon[73607]: pgmap v109: 69 pgs: 1 active+clean+scrubbing+deep, 68 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]: {
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]:     "1": [
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]:         {
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]:             "devices": [
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]:                 "/dev/loop3"
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]:             ],
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]:             "lv_name": "ceph_lv0",
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]:             "lv_size": "7511998464",
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]:             "name": "ceph_lv0",
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]:             "tags": {
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]:                 "ceph.cluster_name": "ceph",
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]:                 "ceph.crush_device_class": "",
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]:                 "ceph.encrypted": "0",
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]:                 "ceph.osd_id": "1",
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]:                 "ceph.type": "block",
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]:                 "ceph.vdo": "0"
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]:             },
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]:             "type": "block",
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]:             "vg_name": "ceph_vg0"
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]:         }
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]:     ]
Oct 02 11:34:19 compute-0 reverent_lederberg[90746]: }
Oct 02 11:34:19 compute-0 systemd[1]: libpod-224b01779eb805d9c84f8b756d69fd5af0306c70d626ab4497526a27ef55cbcd.scope: Deactivated successfully.
Oct 02 11:34:19 compute-0 podman[90704]: 2025-10-02 11:34:19.643488318 +0000 UTC m=+0.935310005 container died 224b01779eb805d9c84f8b756d69fd5af0306c70d626ab4497526a27ef55cbcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:34:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0) v1
Oct 02 11:34:19 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3861259132' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 02 11:34:19 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3861259132' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 02 11:34:19 compute-0 systemd[1]: libpod-97b4f0ff5cb131917bf020ebc9448b14df2a35dee7e14e51d5532ad219a4fb7f.scope: Deactivated successfully.
Oct 02 11:34:19 compute-0 podman[90751]: 2025-10-02 11:34:19.824817331 +0000 UTC m=+0.894191919 container died 97b4f0ff5cb131917bf020ebc9448b14df2a35dee7e14e51d5532ad219a4fb7f (image=quay.io/ceph/ceph:v18, name=epic_chatelet, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:34:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0e3fbfa51cb602ea2955e0dc48888e571f8d2cd5c36fb81930d2504cf96a424-merged.mount: Deactivated successfully.
Oct 02 11:34:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-96d572b7b30fc006c4967c40cb4857a92a95c43912192dcd62d3e97241356320-merged.mount: Deactivated successfully.
Oct 02 11:34:20 compute-0 podman[90704]: 2025-10-02 11:34:20.135774932 +0000 UTC m=+1.427596619 container remove 224b01779eb805d9c84f8b756d69fd5af0306c70d626ab4497526a27ef55cbcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lederberg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 11:34:20 compute-0 sudo[90525]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:20 compute-0 sudo[90818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:20 compute-0 sudo[90818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:20 compute-0 sudo[90818]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v110: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:34:20 compute-0 sudo[90843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:34:20 compute-0 sudo[90843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:20 compute-0 sudo[90843]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:20 compute-0 podman[90751]: 2025-10-02 11:34:20.300988668 +0000 UTC m=+1.370363226 container remove 97b4f0ff5cb131917bf020ebc9448b14df2a35dee7e14e51d5532ad219a4fb7f (image=quay.io/ceph/ceph:v18, name=epic_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:34:20 compute-0 systemd[1]: libpod-conmon-224b01779eb805d9c84f8b756d69fd5af0306c70d626ab4497526a27ef55cbcd.scope: Deactivated successfully.
Oct 02 11:34:20 compute-0 sudo[90741]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:20 compute-0 sudo[90868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:20 compute-0 sudo[90868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:20 compute-0 sudo[90868]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:20 compute-0 systemd[1]: libpod-conmon-97b4f0ff5cb131917bf020ebc9448b14df2a35dee7e14e51d5532ad219a4fb7f.scope: Deactivated successfully.
Oct 02 11:34:20 compute-0 ceph-mon[73607]: 3.b scrub starts
Oct 02 11:34:20 compute-0 ceph-mon[73607]: 2.1a scrub starts
Oct 02 11:34:20 compute-0 ceph-mon[73607]: 3.b scrub ok
Oct 02 11:34:20 compute-0 ceph-mon[73607]: 2.1a scrub ok
Oct 02 11:34:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3861259132' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 02 11:34:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3861259132' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 02 11:34:20 compute-0 sudo[90893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 11:34:20 compute-0 sudo[90893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:20 compute-0 podman[90955]: 2025-10-02 11:34:20.707568729 +0000 UTC m=+0.040288636 container create df70579714e1036e9cc2b2e34c4ed82921648b455875d5116ce96d4c18d5e370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brattain, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 11:34:20 compute-0 systemd[1]: Started libpod-conmon-df70579714e1036e9cc2b2e34c4ed82921648b455875d5116ce96d4c18d5e370.scope.
Oct 02 11:34:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:34:20 compute-0 podman[90955]: 2025-10-02 11:34:20.68853881 +0000 UTC m=+0.021258747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:34:20 compute-0 podman[90955]: 2025-10-02 11:34:20.798665467 +0000 UTC m=+0.131385394 container init df70579714e1036e9cc2b2e34c4ed82921648b455875d5116ce96d4c18d5e370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 11:34:20 compute-0 podman[90955]: 2025-10-02 11:34:20.806014172 +0000 UTC m=+0.138734079 container start df70579714e1036e9cc2b2e34c4ed82921648b455875d5116ce96d4c18d5e370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Oct 02 11:34:20 compute-0 nostalgic_brattain[90971]: 167 167
Oct 02 11:34:20 compute-0 systemd[1]: libpod-df70579714e1036e9cc2b2e34c4ed82921648b455875d5116ce96d4c18d5e370.scope: Deactivated successfully.
Oct 02 11:34:20 compute-0 conmon[90971]: conmon df70579714e1036e9cc2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-df70579714e1036e9cc2b2e34c4ed82921648b455875d5116ce96d4c18d5e370.scope/container/memory.events
Oct 02 11:34:20 compute-0 podman[90955]: 2025-10-02 11:34:20.81625508 +0000 UTC m=+0.148974987 container attach df70579714e1036e9cc2b2e34c4ed82921648b455875d5116ce96d4c18d5e370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brattain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 11:34:20 compute-0 podman[90955]: 2025-10-02 11:34:20.816570737 +0000 UTC m=+0.149290644 container died df70579714e1036e9cc2b2e34c4ed82921648b455875d5116ce96d4c18d5e370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:34:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-76dc83b4a20e067bc915fa9e4549ac10759898f813d2efb14df92eeccafd85e5-merged.mount: Deactivated successfully.
Oct 02 11:34:20 compute-0 podman[90955]: 2025-10-02 11:34:20.885998229 +0000 UTC m=+0.218718126 container remove df70579714e1036e9cc2b2e34c4ed82921648b455875d5116ce96d4c18d5e370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:34:20 compute-0 systemd[1]: libpod-conmon-df70579714e1036e9cc2b2e34c4ed82921648b455875d5116ce96d4c18d5e370.scope: Deactivated successfully.
Oct 02 11:34:21 compute-0 sudo[91019]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbichouwzilkryimvqrrrjwincbhydfp ; /usr/bin/python3'
Oct 02 11:34:21 compute-0 sudo[91019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:21 compute-0 podman[91021]: 2025-10-02 11:34:21.119912827 +0000 UTC m=+0.106233530 container create 416d640fdbadfa4a88e3581d23b7adcf141c8a7cb0f29afe6ae25630b85d7584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:34:21 compute-0 podman[91021]: 2025-10-02 11:34:21.035025026 +0000 UTC m=+0.021345759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:34:21 compute-0 python3[91024]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:34:21 compute-0 systemd[1]: Started libpod-conmon-416d640fdbadfa4a88e3581d23b7adcf141c8a7cb0f29afe6ae25630b85d7584.scope.
Oct 02 11:34:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fbacd051761fa88d9c4d7acc40a95bcd2ba9f39d7e6fc91932947b2c68ed3da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fbacd051761fa88d9c4d7acc40a95bcd2ba9f39d7e6fc91932947b2c68ed3da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fbacd051761fa88d9c4d7acc40a95bcd2ba9f39d7e6fc91932947b2c68ed3da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fbacd051761fa88d9c4d7acc40a95bcd2ba9f39d7e6fc91932947b2c68ed3da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:21 compute-0 podman[91021]: 2025-10-02 11:34:21.225449668 +0000 UTC m=+0.211770391 container init 416d640fdbadfa4a88e3581d23b7adcf141c8a7cb0f29afe6ae25630b85d7584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 11:34:21 compute-0 podman[91021]: 2025-10-02 11:34:21.234075615 +0000 UTC m=+0.220396318 container start 416d640fdbadfa4a88e3581d23b7adcf141c8a7cb0f29afe6ae25630b85d7584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 11:34:21 compute-0 podman[91021]: 2025-10-02 11:34:21.239920133 +0000 UTC m=+0.226240836 container attach 416d640fdbadfa4a88e3581d23b7adcf141c8a7cb0f29afe6ae25630b85d7584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_carver, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:34:21 compute-0 podman[91041]: 2025-10-02 11:34:21.274110075 +0000 UTC m=+0.095211192 container create 291ffcf69647f86e3e4b0cc2b00b534a7faed8272a604a728ac2e302f46564a3 (image=quay.io/ceph/ceph:v18, name=thirsty_chandrasekhar, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:34:21 compute-0 systemd[1]: Started libpod-conmon-291ffcf69647f86e3e4b0cc2b00b534a7faed8272a604a728ac2e302f46564a3.scope.
Oct 02 11:34:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb61f95c3b60ddefc95488dbeb19dd22ea39d0605dc61f5f3384d6b142cd0788/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb61f95c3b60ddefc95488dbeb19dd22ea39d0605dc61f5f3384d6b142cd0788/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:21 compute-0 podman[91041]: 2025-10-02 11:34:21.248281614 +0000 UTC m=+0.069382751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:34:21 compute-0 podman[91041]: 2025-10-02 11:34:21.361435027 +0000 UTC m=+0.182536144 container init 291ffcf69647f86e3e4b0cc2b00b534a7faed8272a604a728ac2e302f46564a3 (image=quay.io/ceph/ceph:v18, name=thirsty_chandrasekhar, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:34:21 compute-0 podman[91041]: 2025-10-02 11:34:21.366928535 +0000 UTC m=+0.188029652 container start 291ffcf69647f86e3e4b0cc2b00b534a7faed8272a604a728ac2e302f46564a3 (image=quay.io/ceph/ceph:v18, name=thirsty_chandrasekhar, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:34:21 compute-0 podman[91041]: 2025-10-02 11:34:21.384175161 +0000 UTC m=+0.205276278 container attach 291ffcf69647f86e3e4b0cc2b00b534a7faed8272a604a728ac2e302f46564a3 (image=quay.io/ceph/ceph:v18, name=thirsty_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:34:21 compute-0 ceph-mon[73607]: pgmap v110: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:34:22 compute-0 cranky_carver[91040]: {
Oct 02 11:34:22 compute-0 cranky_carver[91040]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 11:34:22 compute-0 cranky_carver[91040]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:34:22 compute-0 cranky_carver[91040]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:34:22 compute-0 cranky_carver[91040]:         "osd_id": 1,
Oct 02 11:34:22 compute-0 cranky_carver[91040]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:34:22 compute-0 cranky_carver[91040]:         "type": "bluestore"
Oct 02 11:34:22 compute-0 cranky_carver[91040]:     }
Oct 02 11:34:22 compute-0 cranky_carver[91040]: }
Oct 02 11:34:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 02 11:34:22 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/862309776' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 11:34:22 compute-0 thirsty_chandrasekhar[91060]: 
Oct 02 11:34:22 compute-0 thirsty_chandrasekhar[91060]: {"fsid":"fd4c5763-22d1-50ea-ad0b-96a3dc3040b2","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":41,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":38,"num_osds":3,"num_up_osds":3,"osd_up_since":1759404852,"num_in_osds":3,"osd_in_since":1759404831,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":69}],"num_pgs":69,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84041728,"bytes_avail":22451953664,"bytes_total":22535995392},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":4,"modified":"2025-10-02T11:34:06.233739+0000","services":{"mgr":{"daemons":{"summary":"","compute-1.ypnrbl":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.rbjjpf":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Oct 02 11:34:22 compute-0 systemd[1]: libpod-291ffcf69647f86e3e4b0cc2b00b534a7faed8272a604a728ac2e302f46564a3.scope: Deactivated successfully.
Oct 02 11:34:22 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Oct 02 11:34:22 compute-0 podman[91041]: 2025-10-02 11:34:22.104875903 +0000 UTC m=+0.925977030 container died 291ffcf69647f86e3e4b0cc2b00b534a7faed8272a604a728ac2e302f46564a3 (image=quay.io/ceph/ceph:v18, name=thirsty_chandrasekhar, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:34:22 compute-0 systemd[1]: libpod-416d640fdbadfa4a88e3581d23b7adcf141c8a7cb0f29afe6ae25630b85d7584.scope: Deactivated successfully.
Oct 02 11:34:22 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Oct 02 11:34:22 compute-0 podman[91021]: 2025-10-02 11:34:22.138915022 +0000 UTC m=+1.125235755 container died 416d640fdbadfa4a88e3581d23b7adcf141c8a7cb0f29afe6ae25630b85d7584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_carver, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 11:34:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb61f95c3b60ddefc95488dbeb19dd22ea39d0605dc61f5f3384d6b142cd0788-merged.mount: Deactivated successfully.
Oct 02 11:34:22 compute-0 podman[91041]: 2025-10-02 11:34:22.23519502 +0000 UTC m=+1.056296137 container remove 291ffcf69647f86e3e4b0cc2b00b534a7faed8272a604a728ac2e302f46564a3 (image=quay.io/ceph/ceph:v18, name=thirsty_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Oct 02 11:34:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v111: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:34:22 compute-0 systemd[1]: libpod-conmon-291ffcf69647f86e3e4b0cc2b00b534a7faed8272a604a728ac2e302f46564a3.scope: Deactivated successfully.
Oct 02 11:34:22 compute-0 sudo[91019]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fbacd051761fa88d9c4d7acc40a95bcd2ba9f39d7e6fc91932947b2c68ed3da-merged.mount: Deactivated successfully.
Oct 02 11:34:22 compute-0 podman[91021]: 2025-10-02 11:34:22.378334799 +0000 UTC m=+1.364655502 container remove 416d640fdbadfa4a88e3581d23b7adcf141c8a7cb0f29afe6ae25630b85d7584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 11:34:22 compute-0 systemd[1]: libpod-conmon-416d640fdbadfa4a88e3581d23b7adcf141c8a7cb0f29afe6ae25630b85d7584.scope: Deactivated successfully.
Oct 02 11:34:22 compute-0 sudo[90893]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:22 compute-0 sudo[91149]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzmuzwenyrufswfxweqaescwdvqeewji ; /usr/bin/python3'
Oct 02 11:34:22 compute-0 sudo[91149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:34:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/862309776' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 11:34:22 compute-0 python3[91151]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:34:22 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:34:22 compute-0 podman[91152]: 2025-10-02 11:34:22.649810704 +0000 UTC m=+0.066645012 container create 7fc509636bc838a6d5bc9a013f9f5fa78ba7ea65b427c940cb70f74b0d36a231 (image=quay.io/ceph/ceph:v18, name=youthful_mirzakhani, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:34:22 compute-0 podman[91152]: 2025-10-02 11:34:22.605744773 +0000 UTC m=+0.022579111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:34:22 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:22 compute-0 ceph-mgr[73901]: [progress INFO root] update: starting ev 5a0fc3b6-3cec-475f-a025-a95879a6068f (Updating rgw.rgw deployment (+3 -> 3))
Oct 02 11:34:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.mwuxwy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Oct 02 11:34:22 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.mwuxwy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 02 11:34:22 compute-0 systemd[1]: Started libpod-conmon-7fc509636bc838a6d5bc9a013f9f5fa78ba7ea65b427c940cb70f74b0d36a231.scope.
Oct 02 11:34:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:34:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ec1ff6f99a2d62e80f376577a1ab43dba1303e34191f155ab047500e362add/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ec1ff6f99a2d62e80f376577a1ab43dba1303e34191f155ab047500e362add/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:22 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.mwuxwy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 02 11:34:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Oct 02 11:34:22 compute-0 podman[91152]: 2025-10-02 11:34:22.887604939 +0000 UTC m=+0.304439267 container init 7fc509636bc838a6d5bc9a013f9f5fa78ba7ea65b427c940cb70f74b0d36a231 (image=quay.io/ceph/ceph:v18, name=youthful_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:34:22 compute-0 podman[91152]: 2025-10-02 11:34:22.893661763 +0000 UTC m=+0.310496071 container start 7fc509636bc838a6d5bc9a013f9f5fa78ba7ea65b427c940cb70f74b0d36a231 (image=quay.io/ceph/ceph:v18, name=youthful_mirzakhani, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:34:22 compute-0 podman[91152]: 2025-10-02 11:34:22.929191268 +0000 UTC m=+0.346025576 container attach 7fc509636bc838a6d5bc9a013f9f5fa78ba7ea65b427c940cb70f74b0d36a231 (image=quay.io/ceph/ceph:v18, name=youthful_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 11:34:23 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:34:23 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:34:23 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.mwuxwy on compute-2
Oct 02 11:34:23 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.mwuxwy on compute-2
Oct 02 11:34:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 11:34:23 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2462869718' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 11:34:23 compute-0 youthful_mirzakhani[91168]: 
Oct 02 11:34:23 compute-0 youthful_mirzakhani[91168]: {"epoch":3,"fsid":"fd4c5763-22d1-50ea-ad0b-96a3dc3040b2","modified":"2025-10-02T11:33:35.294581Z","created":"2025-10-02T11:30:40.519249Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Oct 02 11:34:23 compute-0 youthful_mirzakhani[91168]: dumped monmap epoch 3
Oct 02 11:34:23 compute-0 systemd[1]: libpod-7fc509636bc838a6d5bc9a013f9f5fa78ba7ea65b427c940cb70f74b0d36a231.scope: Deactivated successfully.
Oct 02 11:34:23 compute-0 podman[91152]: 2025-10-02 11:34:23.504870714 +0000 UTC m=+0.921705042 container died 7fc509636bc838a6d5bc9a013f9f5fa78ba7ea65b427c940cb70f74b0d36a231 (image=quay.io/ceph/ceph:v18, name=youthful_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 11:34:23 compute-0 ceph-mon[73607]: 3.16 scrub starts
Oct 02 11:34:23 compute-0 ceph-mon[73607]: 3.16 scrub ok
Oct 02 11:34:23 compute-0 ceph-mon[73607]: pgmap v111: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:34:23 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:23 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:23 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.mwuxwy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 02 11:34:23 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.mwuxwy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 02 11:34:23 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:23 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:34:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2462869718' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 11:34:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-78ec1ff6f99a2d62e80f376577a1ab43dba1303e34191f155ab047500e362add-merged.mount: Deactivated successfully.
Oct 02 11:34:23 compute-0 podman[91152]: 2025-10-02 11:34:23.796218881 +0000 UTC m=+1.213053189 container remove 7fc509636bc838a6d5bc9a013f9f5fa78ba7ea65b427c940cb70f74b0d36a231 (image=quay.io/ceph/ceph:v18, name=youthful_mirzakhani, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 11:34:23 compute-0 systemd[1]: libpod-conmon-7fc509636bc838a6d5bc9a013f9f5fa78ba7ea65b427c940cb70f74b0d36a231.scope: Deactivated successfully.
Oct 02 11:34:23 compute-0 sudo[91149]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v112: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:34:24 compute-0 sudo[91230]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsmizabvskcukvgmqhnmzdaqlthrkcsx ; /usr/bin/python3'
Oct 02 11:34:24 compute-0 sudo[91230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:34:24 compute-0 python3[91232]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:34:24 compute-0 podman[91233]: 2025-10-02 11:34:24.543401562 +0000 UTC m=+0.113304508 container create f04f2e782ec116839e0519022f99c202f774eae83a7d336c44a8bb38d8edeecc (image=quay.io/ceph/ceph:v18, name=laughing_fermat, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:34:24 compute-0 podman[91233]: 2025-10-02 11:34:24.454751327 +0000 UTC m=+0.024654263 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:34:24 compute-0 systemd[1]: Started libpod-conmon-f04f2e782ec116839e0519022f99c202f774eae83a7d336c44a8bb38d8edeecc.scope.
Oct 02 11:34:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98b0ddce4e8228611c3be2f98244d8a9a3f660fec14b7ff286173265fadf1fd1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98b0ddce4e8228611c3be2f98244d8a9a3f660fec14b7ff286173265fadf1fd1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:24 compute-0 podman[91233]: 2025-10-02 11:34:24.715789969 +0000 UTC m=+0.285692935 container init f04f2e782ec116839e0519022f99c202f774eae83a7d336c44a8bb38d8edeecc (image=quay.io/ceph/ceph:v18, name=laughing_fermat, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct 02 11:34:24 compute-0 podman[91233]: 2025-10-02 11:34:24.725811161 +0000 UTC m=+0.295714087 container start f04f2e782ec116839e0519022f99c202f774eae83a7d336c44a8bb38d8edeecc (image=quay.io/ceph/ceph:v18, name=laughing_fermat, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 11:34:24 compute-0 podman[91233]: 2025-10-02 11:34:24.744293458 +0000 UTC m=+0.314196404 container attach f04f2e782ec116839e0519022f99c202f774eae83a7d336c44a8bb38d8edeecc (image=quay.io/ceph/ceph:v18, name=laughing_fermat, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 11:34:24 compute-0 ceph-mon[73607]: 3.12 scrub starts
Oct 02 11:34:24 compute-0 ceph-mon[73607]: Deploying daemon rgw.rgw.compute-2.mwuxwy on compute-2
Oct 02 11:34:24 compute-0 ceph-mon[73607]: 3.12 scrub ok
Oct 02 11:34:24 compute-0 ceph-mon[73607]: pgmap v112: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:34:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Oct 02 11:34:25 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3798139046' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 02 11:34:25 compute-0 laughing_fermat[91248]: [client.openstack]
Oct 02 11:34:25 compute-0 laughing_fermat[91248]:         key = AQAtYt5oAAAAABAAuYoDEL6p1gtJZUr+6PiDPw==
Oct 02 11:34:25 compute-0 laughing_fermat[91248]:         caps mgr = "allow *"
Oct 02 11:34:25 compute-0 laughing_fermat[91248]:         caps mon = "profile rbd"
Oct 02 11:34:25 compute-0 laughing_fermat[91248]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Oct 02 11:34:25 compute-0 systemd[1]: libpod-f04f2e782ec116839e0519022f99c202f774eae83a7d336c44a8bb38d8edeecc.scope: Deactivated successfully.
Oct 02 11:34:25 compute-0 podman[91273]: 2025-10-02 11:34:25.39040205 +0000 UTC m=+0.023309959 container died f04f2e782ec116839e0519022f99c202f774eae83a7d336c44a8bb38d8edeecc (image=quay.io/ceph/ceph:v18, name=laughing_fermat, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:34:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-98b0ddce4e8228611c3be2f98244d8a9a3f660fec14b7ff286173265fadf1fd1-merged.mount: Deactivated successfully.
Oct 02 11:34:25 compute-0 podman[91273]: 2025-10-02 11:34:25.428037419 +0000 UTC m=+0.060945308 container remove f04f2e782ec116839e0519022f99c202f774eae83a7d336c44a8bb38d8edeecc (image=quay.io/ceph/ceph:v18, name=laughing_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:34:25 compute-0 systemd[1]: libpod-conmon-f04f2e782ec116839e0519022f99c202f774eae83a7d336c44a8bb38d8edeecc.scope: Deactivated successfully.
Oct 02 11:34:25 compute-0 sudo[91230]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:25 compute-0 ceph-mon[73607]: 3.17 scrub starts
Oct 02 11:34:25 compute-0 ceph-mon[73607]: 3.17 scrub ok
Oct 02 11:34:25 compute-0 ceph-mon[73607]: 3.0 scrub starts
Oct 02 11:34:25 compute-0 ceph-mon[73607]: 3.0 scrub ok
Oct 02 11:34:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3798139046' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 02 11:34:26 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Oct 02 11:34:26 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Oct 02 11:34:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v113: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:34:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:34:26 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:34:26 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 02 11:34:26 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.tijdss", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Oct 02 11:34:26 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.tijdss", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 02 11:34:26 compute-0 sudo[91437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbwbctrowwdyuzvnybldtkilxiefvvnm ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759404866.4842656-34052-15690317122531/async_wrapper.py j832647030316 30 /home/zuul/.ansible/tmp/ansible-tmp-1759404866.4842656-34052-15690317122531/AnsiballZ_command.py _'
Oct 02 11:34:26 compute-0 sudo[91437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:26 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.tijdss", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 02 11:34:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Oct 02 11:34:26 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:34:26 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:34:26 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.tijdss on compute-1
Oct 02 11:34:26 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.tijdss on compute-1
Oct 02 11:34:26 compute-0 ansible-async_wrapper.py[91439]: Invoked with j832647030316 30 /home/zuul/.ansible/tmp/ansible-tmp-1759404866.4842656-34052-15690317122531/AnsiballZ_command.py _
Oct 02 11:34:26 compute-0 ansible-async_wrapper.py[91442]: Starting module and watcher
Oct 02 11:34:26 compute-0 ansible-async_wrapper.py[91442]: Start watching 91443 (30)
Oct 02 11:34:26 compute-0 ansible-async_wrapper.py[91443]: Start module (91443)
Oct 02 11:34:26 compute-0 ansible-async_wrapper.py[91439]: Return async_wrapper task started.
Oct 02 11:34:26 compute-0 sudo[91437]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Oct 02 11:34:27 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 3.13 deep-scrub starts
Oct 02 11:34:27 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 3.13 deep-scrub ok
Oct 02 11:34:27 compute-0 ceph-mon[73607]: 2.a scrub starts
Oct 02 11:34:27 compute-0 ceph-mon[73607]: 2.a scrub ok
Oct 02 11:34:27 compute-0 ceph-mon[73607]: 3.14 scrub starts
Oct 02 11:34:27 compute-0 ceph-mon[73607]: 3.14 scrub ok
Oct 02 11:34:27 compute-0 ceph-mon[73607]: pgmap v113: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:34:27 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:27 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:27 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:27 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.tijdss", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 02 11:34:27 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.tijdss", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 02 11:34:27 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:27 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:34:27 compute-0 python3[91444]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:34:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Oct 02 11:34:27 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Oct 02 11:34:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Oct 02 11:34:27 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mwuxwy' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 02 11:34:27 compute-0 podman[91445]: 2025-10-02 11:34:27.31462934 +0000 UTC m=+0.106072635 container create c6beb8193f1faf39f303f9ed1a8c75e55ee0be92e18a90823105f07c3bfd54bb (image=quay.io/ceph/ceph:v18, name=eloquent_mccarthy, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 11:34:27 compute-0 podman[91445]: 2025-10-02 11:34:27.227595265 +0000 UTC m=+0.019038590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:34:27 compute-0 systemd[1]: Started libpod-conmon-c6beb8193f1faf39f303f9ed1a8c75e55ee0be92e18a90823105f07c3bfd54bb.scope.
Oct 02 11:34:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:34:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4deb0da23bac9e89ad27b1dd4eab71452f7b27f6d5e9f7eac1f69453a6eac7c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4deb0da23bac9e89ad27b1dd4eab71452f7b27f6d5e9f7eac1f69453a6eac7c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:27 compute-0 podman[91445]: 2025-10-02 11:34:27.383680372 +0000 UTC m=+0.175123697 container init c6beb8193f1faf39f303f9ed1a8c75e55ee0be92e18a90823105f07c3bfd54bb (image=quay.io/ceph/ceph:v18, name=eloquent_mccarthy, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 11:34:27 compute-0 podman[91445]: 2025-10-02 11:34:27.388993845 +0000 UTC m=+0.180437150 container start c6beb8193f1faf39f303f9ed1a8c75e55ee0be92e18a90823105f07c3bfd54bb (image=quay.io/ceph/ceph:v18, name=eloquent_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:34:27 compute-0 podman[91445]: 2025-10-02 11:34:27.393645953 +0000 UTC m=+0.185089248 container attach c6beb8193f1faf39f303f9ed1a8c75e55ee0be92e18a90823105f07c3bfd54bb (image=quay.io/ceph/ceph:v18, name=eloquent_mccarthy, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 11:34:27 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14349 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 11:34:27 compute-0 eloquent_mccarthy[91461]: 
Oct 02 11:34:27 compute-0 eloquent_mccarthy[91461]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 02 11:34:27 compute-0 systemd[1]: libpod-c6beb8193f1faf39f303f9ed1a8c75e55ee0be92e18a90823105f07c3bfd54bb.scope: Deactivated successfully.
Oct 02 11:34:27 compute-0 podman[91445]: 2025-10-02 11:34:27.960316291 +0000 UTC m=+0.751759586 container died c6beb8193f1faf39f303f9ed1a8c75e55ee0be92e18a90823105f07c3bfd54bb (image=quay.io/ceph/ceph:v18, name=eloquent_mccarthy, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 11:34:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4deb0da23bac9e89ad27b1dd4eab71452f7b27f6d5e9f7eac1f69453a6eac7c-merged.mount: Deactivated successfully.
Oct 02 11:34:28 compute-0 podman[91445]: 2025-10-02 11:34:28.006158747 +0000 UTC m=+0.797602042 container remove c6beb8193f1faf39f303f9ed1a8c75e55ee0be92e18a90823105f07c3bfd54bb (image=quay.io/ceph/ceph:v18, name=eloquent_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 11:34:28 compute-0 systemd[1]: libpod-conmon-c6beb8193f1faf39f303f9ed1a8c75e55ee0be92e18a90823105f07c3bfd54bb.scope: Deactivated successfully.
Oct 02 11:34:28 compute-0 ansible-async_wrapper.py[91443]: Module complete (91443)
Oct 02 11:34:28 compute-0 sudo[91542]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guzzelgpptlachymvtwenpaszyiqgvwx ; /usr/bin/python3'
Oct 02 11:34:28 compute-0 sudo[91542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v115: 70 pgs: 1 unknown, 69 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:34:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Oct 02 11:34:28 compute-0 ceph-mon[73607]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 11:34:28 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mwuxwy' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 02 11:34:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Oct 02 11:34:28 compute-0 python3[91544]: ansible-ansible.legacy.async_status Invoked with jid=j832647030316.91439 mode=status _async_dir=/root/.ansible_async
Oct 02 11:34:28 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Oct 02 11:34:28 compute-0 sudo[91542]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:28 compute-0 ceph-mon[73607]: Deploying daemon rgw.rgw.compute-1.tijdss on compute-1
Oct 02 11:34:28 compute-0 ceph-mon[73607]: 3.18 scrub starts
Oct 02 11:34:28 compute-0 ceph-mon[73607]: 3.18 scrub ok
Oct 02 11:34:28 compute-0 ceph-mon[73607]: 3.13 deep-scrub starts
Oct 02 11:34:28 compute-0 ceph-mon[73607]: 3.13 deep-scrub ok
Oct 02 11:34:28 compute-0 ceph-mon[73607]: osdmap e39: 3 total, 3 up, 3 in
Oct 02 11:34:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3022269828' entity='client.rgw.rgw.compute-2.mwuxwy' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 02 11:34:28 compute-0 ceph-mon[73607]: from='client.? ' entity='client.rgw.rgw.compute-2.mwuxwy' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 02 11:34:28 compute-0 sudo[91591]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjeqxllfspxdvojtodvauydqcckkwles ; /usr/bin/python3'
Oct 02 11:34:28 compute-0 sudo[91591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:28 compute-0 python3[91593]: ansible-ansible.legacy.async_status Invoked with jid=j832647030316.91439 mode=cleanup _async_dir=/root/.ansible_async
Oct 02 11:34:28 compute-0 sudo[91591]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:34:28 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:34:28 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 02 11:34:28 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.mcnfdf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Oct 02 11:34:28 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.mcnfdf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 02 11:34:29 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.mcnfdf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 02 11:34:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Oct 02 11:34:29 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:29 compute-0 sudo[91621]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqnmpxyaxnxwtcdzucejgvekmgibvqjd ; /usr/bin/python3'
Oct 02 11:34:29 compute-0 sudo[91621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:34:29 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:34:29 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.mcnfdf on compute-0
Oct 02 11:34:29 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.mcnfdf on compute-0
Oct 02 11:34:29 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Oct 02 11:34:29 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Oct 02 11:34:29 compute-0 sudo[91624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:29 compute-0 sudo[91624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:29 compute-0 sudo[91624]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:29 compute-0 sudo[91649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:34:29 compute-0 sudo[91649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:29 compute-0 sudo[91649]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:29 compute-0 python3[91623]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:34:29 compute-0 sudo[91674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:29 compute-0 sudo[91674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:29 compute-0 sudo[91674]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:29 compute-0 podman[91678]: 2025-10-02 11:34:29.259275115 +0000 UTC m=+0.058563058 container create 4c306947ff40a7e3dab5125505f47fc2cbf16160622836cf6818349b12a2fc7b (image=quay.io/ceph/ceph:v18, name=thirsty_gauss, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:34:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Oct 02 11:34:29 compute-0 systemd[1]: Started libpod-conmon-4c306947ff40a7e3dab5125505f47fc2cbf16160622836cf6818349b12a2fc7b.scope.
Oct 02 11:34:29 compute-0 sudo[91712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct 02 11:34:29 compute-0 sudo[91712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:29 compute-0 podman[91678]: 2025-10-02 11:34:29.22692277 +0000 UTC m=+0.026210793 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:34:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Oct 02 11:34:29 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Oct 02 11:34:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:34:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7bccd53d6dc156d30b9116f129441d10c17c900eeec2c81db0ea9c1969c5e15/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7bccd53d6dc156d30b9116f129441d10c17c900eeec2c81db0ea9c1969c5e15/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:34:29 compute-0 podman[91678]: 2025-10-02 11:34:29.354896456 +0000 UTC m=+0.154184409 container init 4c306947ff40a7e3dab5125505f47fc2cbf16160622836cf6818349b12a2fc7b (image=quay.io/ceph/ceph:v18, name=thirsty_gauss, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 11:34:29 compute-0 ceph-mon[73607]: from='client.14349 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 11:34:29 compute-0 ceph-mon[73607]: pgmap v115: 70 pgs: 1 unknown, 69 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:34:29 compute-0 ceph-mon[73607]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 11:34:29 compute-0 ceph-mon[73607]: from='client.? ' entity='client.rgw.rgw.compute-2.mwuxwy' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 02 11:34:29 compute-0 ceph-mon[73607]: osdmap e40: 3 total, 3 up, 3 in
Oct 02 11:34:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.mcnfdf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 02 11:34:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.mcnfdf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 02 11:34:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:34:29 compute-0 podman[91678]: 2025-10-02 11:34:29.363073863 +0000 UTC m=+0.162361796 container start 4c306947ff40a7e3dab5125505f47fc2cbf16160622836cf6818349b12a2fc7b (image=quay.io/ceph/ceph:v18, name=thirsty_gauss, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 11:34:29 compute-0 podman[91678]: 2025-10-02 11:34:29.365973466 +0000 UTC m=+0.165261399 container attach 4c306947ff40a7e3dab5125505f47fc2cbf16160622836cf6818349b12a2fc7b (image=quay.io/ceph/ceph:v18, name=thirsty_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:34:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Oct 02 11:34:29 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mwuxwy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 02 11:34:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Oct 02 11:34:29 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.tijdss' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 02 11:34:29 compute-0 podman[91783]: 2025-10-02 11:34:29.631151313 +0000 UTC m=+0.039036446 container create 66d86f760bf44161a2240b2bcb3f42e8729801c9330120ea25752ead55b5e881 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_goodall, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 11:34:29 compute-0 systemd[1]: Started libpod-conmon-66d86f760bf44161a2240b2bcb3f42e8729801c9330120ea25752ead55b5e881.scope.
Oct 02 11:34:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:34:29 compute-0 podman[91783]: 2025-10-02 11:34:29.705569499 +0000 UTC m=+0.113454652 container init 66d86f760bf44161a2240b2bcb3f42e8729801c9330120ea25752ead55b5e881 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_goodall, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:34:29 compute-0 podman[91783]: 2025-10-02 11:34:29.613712622 +0000 UTC m=+0.021597775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:34:29 compute-0 podman[91783]: 2025-10-02 11:34:29.71275069 +0000 UTC m=+0.120635823 container start 66d86f760bf44161a2240b2bcb3f42e8729801c9330120ea25752ead55b5e881 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:34:29 compute-0 ecstatic_goodall[91799]: 167 167
Oct 02 11:34:29 compute-0 podman[91783]: 2025-10-02 11:34:29.715610182 +0000 UTC m=+0.123495335 container attach 66d86f760bf44161a2240b2bcb3f42e8729801c9330120ea25752ead55b5e881 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct 02 11:34:29 compute-0 systemd[1]: libpod-66d86f760bf44161a2240b2bcb3f42e8729801c9330120ea25752ead55b5e881.scope: Deactivated successfully.
Oct 02 11:34:29 compute-0 podman[91783]: 2025-10-02 11:34:29.718435813 +0000 UTC m=+0.126320946 container died 66d86f760bf44161a2240b2bcb3f42e8729801c9330120ea25752ead55b5e881 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_goodall, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 11:34:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-755f599d4f0d16a3eca06e6952a51aaf5fa70111d47e1311617238b32c6b713d-merged.mount: Deactivated successfully.
Oct 02 11:34:29 compute-0 podman[91783]: 2025-10-02 11:34:29.757099669 +0000 UTC m=+0.164984822 container remove 66d86f760bf44161a2240b2bcb3f42e8729801c9330120ea25752ead55b5e881 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 11:34:29 compute-0 systemd[1]: libpod-conmon-66d86f760bf44161a2240b2bcb3f42e8729801c9330120ea25752ead55b5e881.scope: Deactivated successfully.
Oct 02 11:34:29 compute-0 systemd[1]: Reloading.
Oct 02 11:34:29 compute-0 systemd-rc-local-generator[91863]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:34:29 compute-0 systemd-sysv-generator[91866]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:34:29 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14355 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 11:34:29 compute-0 thirsty_gauss[91739]: 
Oct 02 11:34:29 compute-0 thirsty_gauss[91739]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 02 11:34:29 compute-0 podman[91678]: 2025-10-02 11:34:29.9542371 +0000 UTC m=+0.753525033 container died 4c306947ff40a7e3dab5125505f47fc2cbf16160622836cf6818349b12a2fc7b (image=quay.io/ceph/ceph:v18, name=thirsty_gauss, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:34:30 compute-0 systemd[1]: libpod-4c306947ff40a7e3dab5125505f47fc2cbf16160622836cf6818349b12a2fc7b.scope: Deactivated successfully.
Oct 02 11:34:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7bccd53d6dc156d30b9116f129441d10c17c900eeec2c81db0ea9c1969c5e15-merged.mount: Deactivated successfully.
Oct 02 11:34:30 compute-0 podman[91678]: 2025-10-02 11:34:30.067736692 +0000 UTC m=+0.867024635 container remove 4c306947ff40a7e3dab5125505f47fc2cbf16160622836cf6818349b12a2fc7b (image=quay.io/ceph/ceph:v18, name=thirsty_gauss, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:34:30 compute-0 systemd[1]: libpod-conmon-4c306947ff40a7e3dab5125505f47fc2cbf16160622836cf6818349b12a2fc7b.scope: Deactivated successfully.
Oct 02 11:34:30 compute-0 sudo[91621]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:30 compute-0 systemd[1]: Reloading.
Oct 02 11:34:30 compute-0 systemd-rc-local-generator[91921]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:34:30 compute-0 systemd-sysv-generator[91924]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:34:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v118: 71 pgs: 1 unknown, 70 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s rd, 853 B/s wr, 3 op/s
Oct 02 11:34:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Oct 02 11:34:30 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.mcnfdf for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2...
Oct 02 11:34:30 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mwuxwy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 02 11:34:30 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.tijdss' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 02 11:34:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Oct 02 11:34:30 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Oct 02 11:34:30 compute-0 ceph-mon[73607]: Deploying daemon rgw.rgw.compute-0.mcnfdf on compute-0
Oct 02 11:34:30 compute-0 ceph-mon[73607]: 3.10 scrub starts
Oct 02 11:34:30 compute-0 ceph-mon[73607]: 3.19 scrub starts
Oct 02 11:34:30 compute-0 ceph-mon[73607]: 3.10 scrub ok
Oct 02 11:34:30 compute-0 ceph-mon[73607]: 3.19 scrub ok
Oct 02 11:34:30 compute-0 ceph-mon[73607]: osdmap e41: 3 total, 3 up, 3 in
Oct 02 11:34:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1957857602' entity='client.rgw.rgw.compute-1.tijdss' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 02 11:34:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3022269828' entity='client.rgw.rgw.compute-2.mwuxwy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 02 11:34:30 compute-0 ceph-mon[73607]: from='client.? ' entity='client.rgw.rgw.compute-2.mwuxwy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 02 11:34:30 compute-0 ceph-mon[73607]: from='client.? ' entity='client.rgw.rgw.compute-1.tijdss' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 02 11:34:30 compute-0 podman[91977]: 2025-10-02 11:34:30.626242834 +0000 UTC m=+0.038681626 container create 085121c0bd544c0ff58cb166ee01112228019382bf3dbb9e6eafee3192f40b0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-rgw-rgw-compute-0-mcnfdf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 11:34:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74aa43fabb15ff0327b46ec3313a8979d104f0da998c0f186ffa5d5ea9eca88d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74aa43fabb15ff0327b46ec3313a8979d104f0da998c0f186ffa5d5ea9eca88d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74aa43fabb15ff0327b46ec3313a8979d104f0da998c0f186ffa5d5ea9eca88d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74aa43fabb15ff0327b46ec3313a8979d104f0da998c0f186ffa5d5ea9eca88d/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.mcnfdf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:30 compute-0 podman[91977]: 2025-10-02 11:34:30.609666217 +0000 UTC m=+0.022105029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:34:30 compute-0 podman[91977]: 2025-10-02 11:34:30.827067338 +0000 UTC m=+0.239506160 container init 085121c0bd544c0ff58cb166ee01112228019382bf3dbb9e6eafee3192f40b0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-rgw-rgw-compute-0-mcnfdf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 11:34:30 compute-0 sudo[92024]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyauaglutreyjfagxgqumrgsmquhiqsm ; /usr/bin/python3'
Oct 02 11:34:30 compute-0 sudo[92024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:30 compute-0 podman[91977]: 2025-10-02 11:34:30.83347182 +0000 UTC m=+0.245910602 container start 085121c0bd544c0ff58cb166ee01112228019382bf3dbb9e6eafee3192f40b0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-rgw-rgw-compute-0-mcnfdf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:34:30 compute-0 bash[91977]: 085121c0bd544c0ff58cb166ee01112228019382bf3dbb9e6eafee3192f40b0b
Oct 02 11:34:30 compute-0 radosgw[92027]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct 02 11:34:30 compute-0 radosgw[92027]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Oct 02 11:34:30 compute-0 radosgw[92027]: framework: beast
Oct 02 11:34:30 compute-0 radosgw[92027]: framework conf key: endpoint, val: 192.168.122.100:8082
Oct 02 11:34:30 compute-0 radosgw[92027]: init_numa not setting numa affinity
Oct 02 11:34:30 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.mcnfdf for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2.
Oct 02 11:34:30 compute-0 sudo[91712]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:34:30 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:34:30 compute-0 python3[92028]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:34:31 compute-0 podman[92090]: 2025-10-02 11:34:31.045523487 +0000 UTC m=+0.061787549 container create aa1ae9f6c9481f5cb60a251139b6983bc724a8c0793af43350c0edce04499748 (image=quay.io/ceph/ceph:v18, name=nervous_bose, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:34:31 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 02 11:34:31 compute-0 podman[92090]: 2025-10-02 11:34:31.013489089 +0000 UTC m=+0.029753171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:34:31 compute-0 systemd[1]: Started libpod-conmon-aa1ae9f6c9481f5cb60a251139b6983bc724a8c0793af43350c0edce04499748.scope.
Oct 02 11:34:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:34:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55966669e972e0fa70d09aee753bd280e7f8f4b6599d989438255e98ad1aa1cf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55966669e972e0fa70d09aee753bd280e7f8f4b6599d989438255e98ad1aa1cf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:31 compute-0 podman[92090]: 2025-10-02 11:34:31.146942654 +0000 UTC m=+0.163206746 container init aa1ae9f6c9481f5cb60a251139b6983bc724a8c0793af43350c0edce04499748 (image=quay.io/ceph/ceph:v18, name=nervous_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:34:31 compute-0 podman[92090]: 2025-10-02 11:34:31.153505749 +0000 UTC m=+0.169769811 container start aa1ae9f6c9481f5cb60a251139b6983bc724a8c0793af43350c0edce04499748 (image=quay.io/ceph/ceph:v18, name=nervous_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 11:34:31 compute-0 podman[92090]: 2025-10-02 11:34:31.15630274 +0000 UTC m=+0.172566802 container attach aa1ae9f6c9481f5cb60a251139b6983bc724a8c0793af43350c0edce04499748 (image=quay.io/ceph/ceph:v18, name=nervous_bose, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:34:31 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:31 compute-0 ceph-mgr[73901]: [progress INFO root] complete: finished ev 5a0fc3b6-3cec-475f-a025-a95879a6068f (Updating rgw.rgw deployment (+3 -> 3))
Oct 02 11:34:31 compute-0 ceph-mgr[73901]: [progress INFO root] Completed event 5a0fc3b6-3cec-475f-a025-a95879a6068f (Updating rgw.rgw deployment (+3 -> 3)) in 8 seconds
Oct 02 11:34:31 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 02 11:34:31 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 02 11:34:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 02 11:34:31 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 02 11:34:31 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:31 compute-0 ceph-mgr[73901]: [progress INFO root] update: starting ev ea40d521-4432-4eeb-b8bd-39b448f82e8c (Updating ingress.rgw.default deployment (+4 -> 4))
Oct 02 11:34:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0) v1
Oct 02 11:34:31 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:31 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.qdmsoe on compute-0
Oct 02 11:34:31 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.qdmsoe on compute-0
Oct 02 11:34:31 compute-0 sudo[92109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:31 compute-0 sudo[92109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:31 compute-0 sudo[92109]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:31 compute-0 sudo[92144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:34:31 compute-0 sudo[92144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:31 compute-0 sudo[92144]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:31 compute-0 sudo[92178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:31 compute-0 sudo[92178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:31 compute-0 sudo[92178]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:31 compute-0 sudo[92203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct 02 11:34:31 compute-0 sudo[92203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Oct 02 11:34:31 compute-0 ceph-mon[73607]: from='client.14355 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 11:34:31 compute-0 ceph-mon[73607]: pgmap v118: 71 pgs: 1 unknown, 70 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s rd, 853 B/s wr, 3 op/s
Oct 02 11:34:31 compute-0 ceph-mon[73607]: from='client.? ' entity='client.rgw.rgw.compute-2.mwuxwy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 02 11:34:31 compute-0 ceph-mon[73607]: from='client.? ' entity='client.rgw.rgw.compute-1.tijdss' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 02 11:34:31 compute-0 ceph-mon[73607]: osdmap e42: 3 total, 3 up, 3 in
Oct 02 11:34:31 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:31 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:31 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:31 compute-0 ceph-mon[73607]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 02 11:34:31 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:31 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:31 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:31 compute-0 ceph-mon[73607]: Deploying daemon haproxy.rgw.default.compute-0.qdmsoe on compute-0
Oct 02 11:34:31 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14370 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 11:34:31 compute-0 nervous_bose[92105]: 
Oct 02 11:34:31 compute-0 nervous_bose[92105]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Oct 02 11:34:31 compute-0 systemd[1]: libpod-aa1ae9f6c9481f5cb60a251139b6983bc724a8c0793af43350c0edce04499748.scope: Deactivated successfully.
Oct 02 11:34:31 compute-0 podman[92090]: 2025-10-02 11:34:31.749851887 +0000 UTC m=+0.766115979 container died aa1ae9f6c9481f5cb60a251139b6983bc724a8c0793af43350c0edce04499748 (image=quay.io/ceph/ceph:v18, name=nervous_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:34:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Oct 02 11:34:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-55966669e972e0fa70d09aee753bd280e7f8f4b6599d989438255e98ad1aa1cf-merged.mount: Deactivated successfully.
Oct 02 11:34:31 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Oct 02 11:34:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Oct 02 11:34:31 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/308578156' entity='client.rgw.rgw.compute-0.mcnfdf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 02 11:34:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Oct 02 11:34:31 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.tijdss' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 02 11:34:31 compute-0 podman[92090]: 2025-10-02 11:34:31.971802294 +0000 UTC m=+0.988066356 container remove aa1ae9f6c9481f5cb60a251139b6983bc724a8c0793af43350c0edce04499748 (image=quay.io/ceph/ceph:v18, name=nervous_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:34:31 compute-0 ansible-async_wrapper.py[91442]: Done in kid B.
Oct 02 11:34:31 compute-0 sudo[92024]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Oct 02 11:34:31 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mwuxwy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 02 11:34:32 compute-0 systemd[1]: libpod-conmon-aa1ae9f6c9481f5cb60a251139b6983bc724a8c0793af43350c0edce04499748.scope: Deactivated successfully.
Oct 02 11:34:32 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 3.f scrub starts
Oct 02 11:34:32 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 43 pg[10.0( empty local-lis/les=0/0 n=0 ec=43/43 lis/c=0/0 les/c/f=0/0/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:32 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 3.f scrub ok
Oct 02 11:34:32 compute-0 ceph-mgr[73901]: [progress INFO root] Writing back 9 completed events
Oct 02 11:34:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 11:34:32 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:32 compute-0 ceph-mgr[73901]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Oct 02 11:34:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v121: 72 pgs: 2 unknown, 70 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 3.7 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Oct 02 11:34:32 compute-0 sudo[92318]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbtgfhhjttocmncddanbmvlqktbponif ; /usr/bin/python3'
Oct 02 11:34:32 compute-0 sudo[92318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Oct 02 11:34:32 compute-0 python3[92320]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:34:33 compute-0 ceph-mon[73607]: 2.d deep-scrub starts
Oct 02 11:34:33 compute-0 ceph-mon[73607]: 2.d deep-scrub ok
Oct 02 11:34:33 compute-0 ceph-mon[73607]: from='client.14370 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 11:34:33 compute-0 ceph-mon[73607]: osdmap e43: 3 total, 3 up, 3 in
Oct 02 11:34:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/308578156' entity='client.rgw.rgw.compute-0.mcnfdf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 02 11:34:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1957857602' entity='client.rgw.rgw.compute-1.tijdss' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 02 11:34:33 compute-0 ceph-mon[73607]: from='client.? ' entity='client.rgw.rgw.compute-1.tijdss' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 02 11:34:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3022269828' entity='client.rgw.rgw.compute-2.mwuxwy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 02 11:34:33 compute-0 ceph-mon[73607]: from='client.? ' entity='client.rgw.rgw.compute-2.mwuxwy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 02 11:34:33 compute-0 ceph-mon[73607]: 3.f scrub starts
Oct 02 11:34:33 compute-0 ceph-mon[73607]: 3.f scrub ok
Oct 02 11:34:33 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:33 compute-0 ceph-mon[73607]: pgmap v121: 72 pgs: 2 unknown, 70 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 3.7 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Oct 02 11:34:33 compute-0 podman[92321]: 2025-10-02 11:34:33.044434681 +0000 UTC m=+0.041076338 container create aa93e5897a46c9bc22711ba30e1749953ee9eed41bc3269474ff4dcc8bad08b9 (image=quay.io/ceph/ceph:v18, name=sharp_napier, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 11:34:33 compute-0 systemd[1]: Started libpod-conmon-aa93e5897a46c9bc22711ba30e1749953ee9eed41bc3269474ff4dcc8bad08b9.scope.
Oct 02 11:34:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:34:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/154b6701cb3a080e5cfa40a30baf99f588f6f846667c62ae32c604166b00338c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/154b6701cb3a080e5cfa40a30baf99f588f6f846667c62ae32c604166b00338c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:33 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/308578156' entity='client.rgw.rgw.compute-0.mcnfdf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 02 11:34:33 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.tijdss' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 02 11:34:33 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mwuxwy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 02 11:34:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Oct 02 11:34:33 compute-0 podman[92321]: 2025-10-02 11:34:33.112967969 +0000 UTC m=+0.109609646 container init aa93e5897a46c9bc22711ba30e1749953ee9eed41bc3269474ff4dcc8bad08b9 (image=quay.io/ceph/ceph:v18, name=sharp_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 11:34:33 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Oct 02 11:34:33 compute-0 podman[92321]: 2025-10-02 11:34:33.024527998 +0000 UTC m=+0.021169685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:34:33 compute-0 podman[92321]: 2025-10-02 11:34:33.121122354 +0000 UTC m=+0.117764011 container start aa93e5897a46c9bc22711ba30e1749953ee9eed41bc3269474ff4dcc8bad08b9 (image=quay.io/ceph/ceph:v18, name=sharp_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:34:33 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 44 pg[10.0( empty local-lis/les=43/44 n=0 ec=43/43 lis/c=0/0 les/c/f=0/0/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:33 compute-0 podman[92321]: 2025-10-02 11:34:33.127378592 +0000 UTC m=+0.124020249 container attach aa93e5897a46c9bc22711ba30e1749953ee9eed41bc3269474ff4dcc8bad08b9 (image=quay.io/ceph/ceph:v18, name=sharp_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 11:34:33 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14391 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 11:34:33 compute-0 sharp_napier[92336]: 
Oct 02 11:34:33 compute-0 sharp_napier[92336]: [{"container_id": "e8939fb73988", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.78%", "created": "2025-10-02T11:32:06.979507Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-10-02T11:32:07.025902Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-02T11:33:00.566076Z", "memory_usage": 11639193, "ports": [], "service_name": "crash", "started": "2025-10-02T11:32:06.862426Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2@crash.compute-0", "version": "18.2.7"}, {"container_id": "87c34bf8dcfe", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.69%", "created": "2025-10-02T11:32:43.015955Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "events": ["2025-10-02T11:32:43.060793Z daemon:crash.compute-1 [INFO] \"Deployed crash.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2025-10-02T11:34:09.417522Z", "memory_usage": 11754536, "ports": [], "service_name": "crash", 
"started": "2025-10-02T11:32:42.942354Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2@crash.compute-1", "version": "18.2.7"}, {"container_id": "1fd1f805ef64", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.97%", "created": "2025-10-02T11:33:48.393248Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "events": ["2025-10-02T11:33:48.474276Z daemon:crash.compute-2 [INFO] \"Deployed crash.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2025-10-02T11:34:09.911563Z", "memory_usage": 11660165, "ports": [], "service_name": "crash", "started": "2025-10-02T11:33:46.233442Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2@crash.compute-2", "version": "18.2.7"}, {"container_id": "40049824dedf", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "40.89%", "created": "2025-10-02T11:30:52.418747Z", "daemon_id": "compute-0.fmcstn", "daemon_name": "mgr.compute-0.fmcstn", "daemon_type": "mgr", "events": ["2025-10-02T11:32:09.672699Z daemon:mgr.compute-0.fmcstn [INFO] \"Reconfigured mgr.compute-0.fmcstn on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": 
"2025-10-02T11:33:00.565981Z", "memory_usage": 545993523, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-10-02T11:30:52.327420Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2@mgr.compute-0.fmcstn", "version": "18.2.7"}, {"container_id": "01c00c066c6a", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "79.22%", "created": "2025-10-02T11:33:42.990543Z", "daemon_id": "compute-1.ypnrbl", "daemon_name": "mgr.compute-1.ypnrbl", "daemon_type": "mgr", "events": ["2025-10-02T11:33:43.946675Z daemon:mgr.compute-1.ypnrbl [INFO] \"Deployed mgr.compute-1.ypnrbl on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2025-10-02T11:34:09.417859Z", "memory_usage": 514326528, "ports": [8765], "service_name": "mgr", "started": "2025-10-02T11:33:42.882270Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2@mgr.compute-1.ypnrbl", "version": "18.2.7"}, {"container_id": "1848f12633a1", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "64.17%", "created": "2025-10-02T11:33:38.150953Z", "daemon_id": "compute-2.rbjjpf", "daemon_name": "mgr.compute-2.rbjjpf", "daemon_type": 
"mgr", "events": ["2025-10-02T11:33:41.245150Z daemon:mgr.compute-2.rbjjpf [INFO] \"Deployed mgr.compute-2.rbjjpf on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2025-10-02T11:34:09.911505Z", "memory_usage": 513697382, "ports": [8765], "service_name": "mgr", "started": "2025-10-02T11:33:37.659144Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2@mgr.compute-2.rbjjpf", "version": "18.2.7"}, {"container_id": "7dd5d6593b13", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "1.37%", "created": "2025-10-02T11:30:44.625564Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-10-02T11:32:09.035237Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-02T11:33:00.565865Z", "memory_request": 2147483648, "memory_usage": 32694599, "ports": [], "service_name": "mon", "started": "2025-10-02T11:30:48.959613Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2@mon.compute-0", "version": "18.2.7"}, {"container_id": "fe37bd460a2c", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.37%", "created": 
"2025-10-02T11:33:31.183231Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "events": ["2025-10-02T11:33:34.952338Z daemon:mon.compute-1 [INFO] \"Deployed mon.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2025-10-02T11:34:09.417742Z", "memory_request": 2147483648, "memory_usage": 29831987, "ports": [], "service_name": "mon", "started": "2025-10-02T11:33:31.075003Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2@mon.compute-1", "version": "18.2.7"}, {"container_id": "6a8146c47c40", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.60%", "created": "2025-10-02T11:33:29.163693Z", "daemon_id": "compute-2", "daemon_name": "mon.compute-2", "daemon_type": "mon", "events": ["2025-10-02T11:33:29.224451Z daemon:mon.compute-2 [INFO] \"Deployed mon.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2025-10-02T11:34:09.911424Z", "memory_request": 2147483648, "memory_usage": 29831987, "ports": [], "service_name": "mon", "started": "2025-10-02T11:33:29.062598Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2@mon.compute-2", "version": "18.2.7"}, {"container_id": "811b7a8ea4b2", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": 
"0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "6.28%", "created": "2025-10-02T11:32:57.136734Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-10-02T11:32:57.203005Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-02T11:33:00.566136Z", "memory_request": 4294967296, "memory_usage": 23498588, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-02T11:32:57.051578Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2@osd.1", "version": "18.2.7"}, {"container_id": "070e0f0e9cfd", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.39%", "created": "2025-10-02T11:32:53.668237Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-10-02T11:32:54.325465Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2025-10-02T11:34:09.417656Z", "memory_request": 5502766284, "memory_usage": 61069066, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-02T11:32:53.578744Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2@osd.0", "version": "18.2.7"}, {"container_id": "5849831e42ec", "container_image_digests": 
["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "5.54%", "created": "2025-10-02T11:34:04.249653Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-10-02T11:34:04.371747Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2025-10-02T11:34:09.911619Z", "memory_request": 4294967296, "memory_usage": 32757514, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-02T11:34:04.067889Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2@osd.2", "version": "18.2.7"}, {"daemon_id": "rgw.compute-0.mcnfdf", "daemon_name": "rgw.rgw.compute-0.mcnfdf", "daemon_type": "rgw", "events": ["2025-10-02T11:34:31.079287Z daemon:rgw.rgw.compute-0.mcnfdf [INFO] \"Deployed rgw.rgw.compute-0.mcnfdf on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}, {"daemon_id": "rgw.compute-1.tijdss", "daemon_name": "rgw.rgw.compute-1.tijdss", "daemon_type": "rgw", "events": ["2025-10-02T11:34:28.837957Z daemon:rgw.rgw.compute-1.tijdss [INFO] \"Deployed rgw.rgw.compute-1.tijdss on host 'compute-1'\""], "hostname": "compute-1", "ip": "192.168.122.101", "is_active": false, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}, {"daemon_id": "rgw.compute-2.mwuxwy", "daemon_name": "rgw.rgw.compute-2.mwuxwy", "daemon_type": "rgw", "events": ["2025-10-02T11:34:26.657461Z daemon:rgw.rgw.compute-2.mwuxwy [INFO] \"Deployed 
rgw.rgw.compute-2.mwuxwy on host 'compute-2'\""], "hostname": "compute-2", "ip": "192.168.122.102", "is_active": false, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}]
Oct 02 11:34:33 compute-0 systemd[1]: libpod-aa93e5897a46c9bc22711ba30e1749953ee9eed41bc3269474ff4dcc8bad08b9.scope: Deactivated successfully.
Oct 02 11:34:33 compute-0 podman[92321]: 2025-10-02 11:34:33.697236411 +0000 UTC m=+0.693878068 container died aa93e5897a46c9bc22711ba30e1749953ee9eed41bc3269474ff4dcc8bad08b9 (image=quay.io/ceph/ceph:v18, name=sharp_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:34:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-154b6701cb3a080e5cfa40a30baf99f588f6f846667c62ae32c604166b00338c-merged.mount: Deactivated successfully.
Oct 02 11:34:33 compute-0 podman[92321]: 2025-10-02 11:34:33.738284796 +0000 UTC m=+0.734926453 container remove aa93e5897a46c9bc22711ba30e1749953ee9eed41bc3269474ff4dcc8bad08b9 (image=quay.io/ceph/ceph:v18, name=sharp_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:34:33 compute-0 systemd[1]: libpod-conmon-aa93e5897a46c9bc22711ba30e1749953ee9eed41bc3269474ff4dcc8bad08b9.scope: Deactivated successfully.
Oct 02 11:34:33 compute-0 sudo[92318]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:34 compute-0 rsyslogd[1007]: message too long (13999) with configured size 8096, begin of message is: [{"container_id": "e8939fb73988", "container_image_digests": ["quay.io/ceph/ceph [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct 02 11:34:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Oct 02 11:34:34 compute-0 ceph-mon[73607]: 3.1e scrub starts
Oct 02 11:34:34 compute-0 ceph-mon[73607]: 3.1e scrub ok
Oct 02 11:34:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/308578156' entity='client.rgw.rgw.compute-0.mcnfdf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 02 11:34:34 compute-0 ceph-mon[73607]: from='client.? ' entity='client.rgw.rgw.compute-1.tijdss' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 02 11:34:34 compute-0 ceph-mon[73607]: from='client.? ' entity='client.rgw.rgw.compute-2.mwuxwy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 02 11:34:34 compute-0 ceph-mon[73607]: osdmap e44: 3 total, 3 up, 3 in
Oct 02 11:34:34 compute-0 ceph-mon[73607]: 2.10 scrub starts
Oct 02 11:34:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Oct 02 11:34:34 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Oct 02 11:34:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Oct 02 11:34:34 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4060792636' entity='client.rgw.rgw.compute-0.mcnfdf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 02 11:34:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Oct 02 11:34:34 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.tijdss' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 02 11:34:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Oct 02 11:34:34 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mwuxwy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 02 11:34:34 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 3.c scrub starts
Oct 02 11:34:34 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 3.c scrub ok
Oct 02 11:34:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v124: 73 pgs: 2 unknown, 71 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 3.7 KiB/s rd, 511 B/s wr, 5 op/s
Oct 02 11:34:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:34:34 compute-0 sudo[92451]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwruexfxebqwyfeggifvfxmxclqnzhiz ; /usr/bin/python3'
Oct 02 11:34:34 compute-0 sudo[92451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:34 compute-0 python3[92453]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:34:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Oct 02 11:34:35 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4060792636' entity='client.rgw.rgw.compute-0.mcnfdf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 02 11:34:35 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.tijdss' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 02 11:34:35 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mwuxwy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 02 11:34:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Oct 02 11:34:35 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Oct 02 11:34:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Oct 02 11:34:35 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mwuxwy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 02 11:34:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Oct 02 11:34:35 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.tijdss' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 02 11:34:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Oct 02 11:34:35 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4060792636' entity='client.rgw.rgw.compute-0.mcnfdf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 02 11:34:35 compute-0 ceph-mon[73607]: 2.10 scrub ok
Oct 02 11:34:35 compute-0 ceph-mon[73607]: from='client.14391 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 11:34:35 compute-0 ceph-mon[73607]: osdmap e45: 3 total, 3 up, 3 in
Oct 02 11:34:35 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4060792636' entity='client.rgw.rgw.compute-0.mcnfdf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 02 11:34:35 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1317742668' entity='client.rgw.rgw.compute-1.tijdss' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 02 11:34:35 compute-0 ceph-mon[73607]: from='client.? ' entity='client.rgw.rgw.compute-1.tijdss' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 02 11:34:35 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2401809657' entity='client.rgw.rgw.compute-2.mwuxwy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 02 11:34:35 compute-0 ceph-mon[73607]: from='client.? ' entity='client.rgw.rgw.compute-2.mwuxwy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 02 11:34:35 compute-0 ceph-mon[73607]: 3.c scrub starts
Oct 02 11:34:35 compute-0 ceph-mon[73607]: 3.c scrub ok
Oct 02 11:34:35 compute-0 ceph-mon[73607]: pgmap v124: 73 pgs: 2 unknown, 71 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 3.7 KiB/s rd, 511 B/s wr, 5 op/s
Oct 02 11:34:36 compute-0 podman[92454]: 2025-10-02 11:34:36.012140093 +0000 UTC m=+1.231595406 container create 6a8cba3c8ef76a054cb0979024155e727f9087f8ab486310228f7e8a89d184cb (image=quay.io/ceph/ceph:v18, name=reverent_wozniak, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Oct 02 11:34:36 compute-0 podman[92454]: 2025-10-02 11:34:35.983295476 +0000 UTC m=+1.202750819 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:34:36 compute-0 systemd[1]: Started libpod-conmon-6a8cba3c8ef76a054cb0979024155e727f9087f8ab486310228f7e8a89d184cb.scope.
Oct 02 11:34:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:34:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c060b9eabe2bfb2147418e0207bcd204bf6a00d99c360d6c558472dedbd77245/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c060b9eabe2bfb2147418e0207bcd204bf6a00d99c360d6c558472dedbd77245/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:36 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 3.d scrub starts
Oct 02 11:34:36 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 3.d scrub ok
Oct 02 11:34:36 compute-0 podman[92454]: 2025-10-02 11:34:36.139677948 +0000 UTC m=+1.359133281 container init 6a8cba3c8ef76a054cb0979024155e727f9087f8ab486310228f7e8a89d184cb (image=quay.io/ceph/ceph:v18, name=reverent_wozniak, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:34:36 compute-0 podman[92281]: 2025-10-02 11:34:36.120656199 +0000 UTC m=+4.008332183 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct 02 11:34:36 compute-0 podman[92454]: 2025-10-02 11:34:36.145152927 +0000 UTC m=+1.364608240 container start 6a8cba3c8ef76a054cb0979024155e727f9087f8ab486310228f7e8a89d184cb (image=quay.io/ceph/ceph:v18, name=reverent_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:34:36 compute-0 podman[92454]: 2025-10-02 11:34:36.151357494 +0000 UTC m=+1.370812827 container attach 6a8cba3c8ef76a054cb0979024155e727f9087f8ab486310228f7e8a89d184cb (image=quay.io/ceph/ceph:v18, name=reverent_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 11:34:36 compute-0 podman[92281]: 2025-10-02 11:34:36.157322514 +0000 UTC m=+4.044998478 container create a900ef736d159e243c490af8da1d97af40d7588b6dce68da94f48aa6232ab815 (image=quay.io/ceph/haproxy:2.3, name=focused_hertz)
Oct 02 11:34:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Oct 02 11:34:36 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mwuxwy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 02 11:34:36 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.tijdss' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 02 11:34:36 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4060792636' entity='client.rgw.rgw.compute-0.mcnfdf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 02 11:34:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Oct 02 11:34:36 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Oct 02 11:34:36 compute-0 systemd[1]: Started libpod-conmon-a900ef736d159e243c490af8da1d97af40d7588b6dce68da94f48aa6232ab815.scope.
Oct 02 11:34:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v127: 73 pgs: 1 unknown, 72 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 3.7 KiB/s rd, 511 B/s wr, 5 op/s
Oct 02 11:34:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:34:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4060792636' entity='client.rgw.rgw.compute-0.mcnfdf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 02 11:34:36 compute-0 ceph-mon[73607]: from='client.? ' entity='client.rgw.rgw.compute-1.tijdss' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 02 11:34:36 compute-0 ceph-mon[73607]: from='client.? ' entity='client.rgw.rgw.compute-2.mwuxwy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 02 11:34:36 compute-0 ceph-mon[73607]: osdmap e46: 3 total, 3 up, 3 in
Oct 02 11:34:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2401809657' entity='client.rgw.rgw.compute-2.mwuxwy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 02 11:34:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1317742668' entity='client.rgw.rgw.compute-1.tijdss' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 02 11:34:36 compute-0 ceph-mon[73607]: from='client.? ' entity='client.rgw.rgw.compute-2.mwuxwy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 02 11:34:36 compute-0 ceph-mon[73607]: from='client.? ' entity='client.rgw.rgw.compute-1.tijdss' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 02 11:34:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4060792636' entity='client.rgw.rgw.compute-0.mcnfdf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 02 11:34:36 compute-0 podman[92281]: 2025-10-02 11:34:36.291355744 +0000 UTC m=+4.179031738 container init a900ef736d159e243c490af8da1d97af40d7588b6dce68da94f48aa6232ab815 (image=quay.io/ceph/haproxy:2.3, name=focused_hertz)
Oct 02 11:34:36 compute-0 podman[92281]: 2025-10-02 11:34:36.296529173 +0000 UTC m=+4.184205137 container start a900ef736d159e243c490af8da1d97af40d7588b6dce68da94f48aa6232ab815 (image=quay.io/ceph/haproxy:2.3, name=focused_hertz)
Oct 02 11:34:36 compute-0 focused_hertz[92536]: 0 0
Oct 02 11:34:36 compute-0 systemd[1]: libpod-a900ef736d159e243c490af8da1d97af40d7588b6dce68da94f48aa6232ab815.scope: Deactivated successfully.
Oct 02 11:34:36 compute-0 podman[92281]: 2025-10-02 11:34:36.365163034 +0000 UTC m=+4.252838998 container attach a900ef736d159e243c490af8da1d97af40d7588b6dce68da94f48aa6232ab815 (image=quay.io/ceph/haproxy:2.3, name=focused_hertz)
Oct 02 11:34:36 compute-0 podman[92281]: 2025-10-02 11:34:36.365498063 +0000 UTC m=+4.253174017 container died a900ef736d159e243c490af8da1d97af40d7588b6dce68da94f48aa6232ab815 (image=quay.io/ceph/haproxy:2.3, name=focused_hertz)
Oct 02 11:34:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-39152ae25206583367c17298d1a531f81c2d1ac123744c3cc723fe73e582b9d2-merged.mount: Deactivated successfully.
Oct 02 11:34:36 compute-0 podman[92281]: 2025-10-02 11:34:36.461943584 +0000 UTC m=+4.349619538 container remove a900ef736d159e243c490af8da1d97af40d7588b6dce68da94f48aa6232ab815 (image=quay.io/ceph/haproxy:2.3, name=focused_hertz)
Oct 02 11:34:36 compute-0 radosgw[92027]: LDAP not started since no server URIs were provided in the configuration.
Oct 02 11:34:36 compute-0 radosgw[92027]: framework: beast
Oct 02 11:34:36 compute-0 radosgw[92027]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct 02 11:34:36 compute-0 radosgw[92027]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct 02 11:34:36 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-rgw-rgw-compute-0-mcnfdf[91996]: 2025-10-02T11:34:36.509+0000 7ff3c9c3c940 -1 LDAP not started since no server URIs were provided in the configuration.
Oct 02 11:34:36 compute-0 systemd[1]: libpod-conmon-a900ef736d159e243c490af8da1d97af40d7588b6dce68da94f48aa6232ab815.scope: Deactivated successfully.
Oct 02 11:34:36 compute-0 radosgw[92027]: starting handler: beast
Oct 02 11:34:36 compute-0 radosgw[92027]: set uid:gid to 167:167 (ceph:ceph)
Oct 02 11:34:36 compute-0 radosgw[92027]: mgrc service_daemon_register rgw.14382 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.mcnfdf,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864104,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=d51d2c37-0c52-473d-a68f-705b7a7c6947,zone_name=default,zonegroup_id=46b6f139-f98e-4349-978c-7decf8294e94,zonegroup_name=default}
Oct 02 11:34:36 compute-0 systemd[1]: Reloading.
Oct 02 11:34:36 compute-0 systemd-rc-local-generator[93145]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:34:36 compute-0 systemd-sysv-generator[93149]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:34:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 02 11:34:36 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3497740742' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 11:34:36 compute-0 reverent_wozniak[92531]: 
Oct 02 11:34:36 compute-0 reverent_wozniak[92531]: {"fsid":"fd4c5763-22d1-50ea-ad0b-96a3dc3040b2","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false},"POOL_APP_NOT_ENABLED":{"severity":"HEALTH_WARN","summary":{"message":"1 pool(s) do not have an application enabled","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":56,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":47,"num_osds":3,"num_up_osds":3,"osd_up_since":1759404852,"num_in_osds":3,"osd_in_since":1759404831,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":71},{"state_name":"unknown","count":2}],"num_pgs":73,"num_pools":11,"num_objects":9,"data_bytes":461268,"bytes_used":84209664,"bytes_avail":22451785728,"bytes_total":22535995392,"unknown_pgs_ratio":0.027397260069847107,"read_bytes_sec":3839,"write_bytes_sec":511,"read_op_per_sec":5,"write_op_per_sec":0},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":4,"modified":"2025-10-02T11:34:06.233739+0000","services":{"mgr":{"daemons":{"summary":"","compute-1.ypnrbl":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.rbjjpf":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"7b94ad89-c538-41b7-9e34-1b431ad52278":{"message":"Global Recovery Event (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true},"ea40d521-4432-4eeb-b8bd-39b448f82e8c":{"message":"Updating ingress.rgw.default deployment (+4 -> 4) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Oct 02 11:34:36 compute-0 podman[92454]: 2025-10-02 11:34:36.819964742 +0000 UTC m=+2.039420055 container died 6a8cba3c8ef76a054cb0979024155e727f9087f8ab486310228f7e8a89d184cb (image=quay.io/ceph/ceph:v18, name=reverent_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:34:36 compute-0 systemd[1]: libpod-6a8cba3c8ef76a054cb0979024155e727f9087f8ab486310228f7e8a89d184cb.scope: Deactivated successfully.
Oct 02 11:34:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-c060b9eabe2bfb2147418e0207bcd204bf6a00d99c360d6c558472dedbd77245-merged.mount: Deactivated successfully.
Oct 02 11:34:36 compute-0 systemd[1]: Reloading.
Oct 02 11:34:36 compute-0 podman[92454]: 2025-10-02 11:34:36.926966671 +0000 UTC m=+2.146421984 container remove 6a8cba3c8ef76a054cb0979024155e727f9087f8ab486310228f7e8a89d184cb (image=quay.io/ceph/ceph:v18, name=reverent_wozniak, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:34:36 compute-0 sudo[92451]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:36 compute-0 systemd-sysv-generator[93203]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:34:36 compute-0 systemd-rc-local-generator[93199]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:34:37 compute-0 systemd[1]: libpod-conmon-6a8cba3c8ef76a054cb0979024155e727f9087f8ab486310228f7e8a89d184cb.scope: Deactivated successfully.
Oct 02 11:34:37 compute-0 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.qdmsoe for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2...
Oct 02 11:34:37 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 02 11:34:37 compute-0 ceph-mgr[73901]: [progress INFO root] Completed event 7b94ad89-c538-41b7-9e34-1b431ad52278 (Global Recovery Event) in 5 seconds
Oct 02 11:34:37 compute-0 ceph-mon[73607]: 2.15 scrub starts
Oct 02 11:34:37 compute-0 ceph-mon[73607]: 2.15 scrub ok
Oct 02 11:34:37 compute-0 ceph-mon[73607]: 3.d scrub starts
Oct 02 11:34:37 compute-0 ceph-mon[73607]: 3.d scrub ok
Oct 02 11:34:37 compute-0 ceph-mon[73607]: from='client.? ' entity='client.rgw.rgw.compute-2.mwuxwy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 02 11:34:37 compute-0 ceph-mon[73607]: from='client.? ' entity='client.rgw.rgw.compute-1.tijdss' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 02 11:34:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4060792636' entity='client.rgw.rgw.compute-0.mcnfdf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 02 11:34:37 compute-0 ceph-mon[73607]: osdmap e47: 3 total, 3 up, 3 in
Oct 02 11:34:37 compute-0 ceph-mon[73607]: pgmap v127: 73 pgs: 1 unknown, 72 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 3.7 KiB/s rd, 511 B/s wr, 5 op/s
Oct 02 11:34:37 compute-0 ceph-mon[73607]: 3.8 deep-scrub starts
Oct 02 11:34:37 compute-0 ceph-mon[73607]: 3.8 deep-scrub ok
Oct 02 11:34:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3497740742' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 11:34:37 compute-0 ceph-mon[73607]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 02 11:34:37 compute-0 podman[93254]: 2025-10-02 11:34:37.402676365 +0000 UTC m=+0.080078400 container create 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 11:34:37 compute-0 podman[93254]: 2025-10-02 11:34:37.343652877 +0000 UTC m=+0.021054912 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct 02 11:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc5447d30e487c70fc6f49e74a33aa2e6cb5d8545ed6fe5f9867e255d42ff712/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:37 compute-0 podman[93254]: 2025-10-02 11:34:37.47146324 +0000 UTC m=+0.148865305 container init 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 11:34:37 compute-0 podman[93254]: 2025-10-02 11:34:37.475898922 +0000 UTC m=+0.153300967 container start 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 11:34:37 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe[93269]: [NOTICE] 274/113437 (2) : New worker #1 (4) forked
Oct 02 11:34:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:34:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.002000049s ======
Oct 02 11:34:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:34:37.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Oct 02 11:34:37 compute-0 bash[93254]: 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb
Oct 02 11:34:37 compute-0 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.qdmsoe for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2.
Oct 02 11:34:37 compute-0 sudo[92203]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:34:37 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:34:37 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Oct 02 11:34:37 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:37 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.jycvzz on compute-2
Oct 02 11:34:37 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.jycvzz on compute-2
Oct 02 11:34:37 compute-0 sudo[93306]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqvqpeuemizxehgvcwannieyibebzygl ; /usr/bin/python3'
Oct 02 11:34:37 compute-0 sudo[93306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:38 compute-0 python3[93308]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:34:38 compute-0 podman[93309]: 2025-10-02 11:34:38.124012545 +0000 UTC m=+0.102196848 container create 9272f79a0adb852ab8f8b72e034f5105adffe298e3e8fc12fa58f633baa1bd0b (image=quay.io/ceph/ceph:v18, name=heuristic_chaum, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:34:38 compute-0 podman[93309]: 2025-10-02 11:34:38.045030383 +0000 UTC m=+0.023214696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:34:38 compute-0 systemd[1]: Started libpod-conmon-9272f79a0adb852ab8f8b72e034f5105adffe298e3e8fc12fa58f633baa1bd0b.scope.
Oct 02 11:34:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92a0bd22b909b3c80339e1ddb041a25ec15d1c24c1878bc3b584e94d1a462e99/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92a0bd22b909b3c80339e1ddb041a25ec15d1c24c1878bc3b584e94d1a462e99/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v128: 73 pgs: 1 unknown, 72 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 2.9 KiB/s rd, 399 B/s wr, 4 op/s
Oct 02 11:34:38 compute-0 podman[93309]: 2025-10-02 11:34:38.313227996 +0000 UTC m=+0.291412309 container init 9272f79a0adb852ab8f8b72e034f5105adffe298e3e8fc12fa58f633baa1bd0b (image=quay.io/ceph/ceph:v18, name=heuristic_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 11:34:38 compute-0 podman[93309]: 2025-10-02 11:34:38.320952011 +0000 UTC m=+0.299136304 container start 9272f79a0adb852ab8f8b72e034f5105adffe298e3e8fc12fa58f633baa1bd0b (image=quay.io/ceph/ceph:v18, name=heuristic_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 11:34:38 compute-0 podman[93309]: 2025-10-02 11:34:38.369407273 +0000 UTC m=+0.347591586 container attach 9272f79a0adb852ab8f8b72e034f5105adffe298e3e8fc12fa58f633baa1bd0b (image=quay.io/ceph/ceph:v18, name=heuristic_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:34:38 compute-0 ceph-mon[73607]: 3.1f scrub starts
Oct 02 11:34:38 compute-0 ceph-mon[73607]: 3.1f scrub ok
Oct 02 11:34:38 compute-0 ceph-mon[73607]: 2.13 scrub starts
Oct 02 11:34:38 compute-0 ceph-mon[73607]: 2.13 scrub ok
Oct 02 11:34:38 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:38 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:38 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:38 compute-0 ceph-mon[73607]: Deploying daemon haproxy.rgw.default.compute-2.jycvzz on compute-2
Oct 02 11:34:38 compute-0 ceph-mon[73607]: pgmap v128: 73 pgs: 1 unknown, 72 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 2.9 KiB/s rd, 399 B/s wr, 4 op/s
Oct 02 11:34:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 02 11:34:38 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2730718565' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 11:34:38 compute-0 heuristic_chaum[93325]: 
Oct 02 11:34:38 compute-0 heuristic_chaum[93325]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target","value":"5502766284","level":"basic","can_update_at_runtime":true,"mask":"host:compute-1","location_type":"host","location_value":"compute-1"},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.mcnfdf","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-1.tijdss","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.mwuxwy","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Oct 02 11:34:38 compute-0 systemd[1]: libpod-9272f79a0adb852ab8f8b72e034f5105adffe298e3e8fc12fa58f633baa1bd0b.scope: Deactivated successfully.
Oct 02 11:34:38 compute-0 podman[93309]: 2025-10-02 11:34:38.950238459 +0000 UTC m=+0.928422762 container died 9272f79a0adb852ab8f8b72e034f5105adffe298e3e8fc12fa58f633baa1bd0b (image=quay.io/ceph/ceph:v18, name=heuristic_chaum, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 11:34:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-92a0bd22b909b3c80339e1ddb041a25ec15d1c24c1878bc3b584e94d1a462e99-merged.mount: Deactivated successfully.
Oct 02 11:34:39 compute-0 podman[93309]: 2025-10-02 11:34:39.18431591 +0000 UTC m=+1.162500203 container remove 9272f79a0adb852ab8f8b72e034f5105adffe298e3e8fc12fa58f633baa1bd0b (image=quay.io/ceph/ceph:v18, name=heuristic_chaum, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 11:34:39 compute-0 systemd[1]: libpod-conmon-9272f79a0adb852ab8f8b72e034f5105adffe298e3e8fc12fa58f633baa1bd0b.scope: Deactivated successfully.
Oct 02 11:34:39 compute-0 sudo[93306]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:34:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:34:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:34:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:34:39.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:34:39 compute-0 ceph-mon[73607]: 2.c scrub starts
Oct 02 11:34:39 compute-0 ceph-mon[73607]: 2.c scrub ok
Oct 02 11:34:39 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2730718565' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 11:34:40 compute-0 sudo[93387]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqixhcwrkytotanslynrrmlmnwhputst ; /usr/bin/python3'
Oct 02 11:34:40 compute-0 sudo[93387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v129: 73 pgs: 73 active+clean; 454 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 233 KiB/s rd, 5.3 KiB/s wr, 425 op/s
Oct 02 11:34:40 compute-0 python3[93389]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:34:40 compute-0 podman[93390]: 2025-10-02 11:34:40.377370614 +0000 UTC m=+0.024775736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:34:40 compute-0 podman[93390]: 2025-10-02 11:34:40.617723645 +0000 UTC m=+0.265128737 container create 64cbd8f504eaf86d74764a74045dc12a95d0753630d79644363da0c386ebd45d (image=quay.io/ceph/ceph:v18, name=upbeat_joliot, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 11:34:40 compute-0 systemd[1]: Started libpod-conmon-64cbd8f504eaf86d74764a74045dc12a95d0753630d79644363da0c386ebd45d.scope.
Oct 02 11:34:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81ebd6392d12e7d8eab07698872db6291946fd21dc17f4024701ec811144c21c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81ebd6392d12e7d8eab07698872db6291946fd21dc17f4024701ec811144c21c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:40 compute-0 podman[93390]: 2025-10-02 11:34:40.967271678 +0000 UTC m=+0.614676790 container init 64cbd8f504eaf86d74764a74045dc12a95d0753630d79644363da0c386ebd45d (image=quay.io/ceph/ceph:v18, name=upbeat_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 11:34:40 compute-0 podman[93390]: 2025-10-02 11:34:40.973025073 +0000 UTC m=+0.620430165 container start 64cbd8f504eaf86d74764a74045dc12a95d0753630d79644363da0c386ebd45d (image=quay.io/ceph/ceph:v18, name=upbeat_joliot, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:34:41 compute-0 podman[93390]: 2025-10-02 11:34:41.024943142 +0000 UTC m=+0.672348264 container attach 64cbd8f504eaf86d74764a74045dc12a95d0753630d79644363da0c386ebd45d (image=quay.io/ceph/ceph:v18, name=upbeat_joliot, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 11:34:41 compute-0 ceph-mon[73607]: 2.1e scrub starts
Oct 02 11:34:41 compute-0 ceph-mon[73607]: 2.1e scrub ok
Oct 02 11:34:41 compute-0 ceph-mon[73607]: pgmap v129: 73 pgs: 73 active+clean; 454 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 233 KiB/s rd, 5.3 KiB/s wr, 425 op/s
Oct 02 11:34:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Oct 02 11:34:41 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3497184492' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct 02 11:34:41 compute-0 upbeat_joliot[93405]: mimic
Oct 02 11:34:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:34:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:34:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:34:41.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:34:41 compute-0 systemd[1]: libpod-64cbd8f504eaf86d74764a74045dc12a95d0753630d79644363da0c386ebd45d.scope: Deactivated successfully.
Oct 02 11:34:41 compute-0 podman[93390]: 2025-10-02 11:34:41.503015147 +0000 UTC m=+1.150420239 container died 64cbd8f504eaf86d74764a74045dc12a95d0753630d79644363da0c386ebd45d (image=quay.io/ceph/ceph:v18, name=upbeat_joliot, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:34:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-81ebd6392d12e7d8eab07698872db6291946fd21dc17f4024701ec811144c21c-merged.mount: Deactivated successfully.
Oct 02 11:34:42 compute-0 podman[93390]: 2025-10-02 11:34:42.045004054 +0000 UTC m=+1.692409146 container remove 64cbd8f504eaf86d74764a74045dc12a95d0753630d79644363da0c386ebd45d (image=quay.io/ceph/ceph:v18, name=upbeat_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 11:34:42 compute-0 sudo[93387]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:42 compute-0 systemd[1]: libpod-conmon-64cbd8f504eaf86d74764a74045dc12a95d0753630d79644363da0c386ebd45d.scope: Deactivated successfully.
Oct 02 11:34:42 compute-0 ceph-mgr[73901]: [progress INFO root] Writing back 10 completed events
Oct 02 11:34:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 11:34:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v130: 73 pgs: 73 active+clean; 454 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 177 KiB/s rd, 4.0 KiB/s wr, 323 op/s
Oct 02 11:34:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_11:34:42
Oct 02 11:34:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:34:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 11:34:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['backups', 'images', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'vms', 'volumes', 'default.rgw.log']
Oct 02 11:34:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:34:42 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3497184492' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct 02 11:34:42 compute-0 ceph-mon[73607]: 2.1b scrub starts
Oct 02 11:34:42 compute-0 ceph-mon[73607]: 2.1b scrub ok
Oct 02 11:34:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:34:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:34:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:34:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:34:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:34:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:34:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:34:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:34:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:34:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:34:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:34:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:34:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:34:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:34:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:34:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:34:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:34:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:34:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:34:42.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:34:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:34:42 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:34:42 compute-0 sudo[93466]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqsdciayxwdxkzdmvcfslvqsevxbdbyh ; /usr/bin/python3'
Oct 02 11:34:42 compute-0 sudo[93466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:42 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Oct 02 11:34:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0) v1
Oct 02 11:34:43 compute-0 python3[93468]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:34:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:43 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 02 11:34:43 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 02 11:34:43 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 02 11:34:43 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 02 11:34:43 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.ahfyxt on compute-2
Oct 02 11:34:43 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.ahfyxt on compute-2
Oct 02 11:34:43 compute-0 podman[93469]: 2025-10-02 11:34:43.157471635 +0000 UTC m=+0.023491323 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:34:43 compute-0 podman[93469]: 2025-10-02 11:34:43.309185691 +0000 UTC m=+0.175205369 container create cff018e96035a6d05bcf2ce0174eb26b558a7f2cdcf06a1abce5d9eab9dec555 (image=quay.io/ceph/ceph:v18, name=lucid_burnell, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:34:43 compute-0 systemd[1]: Started libpod-conmon-cff018e96035a6d05bcf2ce0174eb26b558a7f2cdcf06a1abce5d9eab9dec555.scope.
Oct 02 11:34:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:34:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2305db07499a22b7752b11cfd7da208bc522325312f4b131008e0fbfef81fee2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2305db07499a22b7752b11cfd7da208bc522325312f4b131008e0fbfef81fee2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:43 compute-0 podman[93469]: 2025-10-02 11:34:43.449631302 +0000 UTC m=+0.315650980 container init cff018e96035a6d05bcf2ce0174eb26b558a7f2cdcf06a1abce5d9eab9dec555 (image=quay.io/ceph/ceph:v18, name=lucid_burnell, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:34:43 compute-0 podman[93469]: 2025-10-02 11:34:43.456222218 +0000 UTC m=+0.322241876 container start cff018e96035a6d05bcf2ce0174eb26b558a7f2cdcf06a1abce5d9eab9dec555 (image=quay.io/ceph/ceph:v18, name=lucid_burnell, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 11:34:43 compute-0 podman[93469]: 2025-10-02 11:34:43.463191304 +0000 UTC m=+0.329210982 container attach cff018e96035a6d05bcf2ce0174eb26b558a7f2cdcf06a1abce5d9eab9dec555 (image=quay.io/ceph/ceph:v18, name=lucid_burnell, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:34:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:34:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:34:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:34:43.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:34:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Oct 02 11:34:44 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2482171371' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct 02 11:34:44 compute-0 lucid_burnell[93485]: 
Oct 02 11:34:44 compute-0 lucid_burnell[93485]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"rgw":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":12}}
Oct 02 11:34:44 compute-0 systemd[1]: libpod-cff018e96035a6d05bcf2ce0174eb26b558a7f2cdcf06a1abce5d9eab9dec555.scope: Deactivated successfully.
Oct 02 11:34:44 compute-0 podman[93469]: 2025-10-02 11:34:44.058587577 +0000 UTC m=+0.924607235 container died cff018e96035a6d05bcf2ce0174eb26b558a7f2cdcf06a1abce5d9eab9dec555 (image=quay.io/ceph/ceph:v18, name=lucid_burnell, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 11:34:44 compute-0 ceph-mon[73607]: 2.1f scrub starts
Oct 02 11:34:44 compute-0 ceph-mon[73607]: 2.1f scrub ok
Oct 02 11:34:44 compute-0 ceph-mon[73607]: pgmap v130: 73 pgs: 73 active+clean; 454 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 177 KiB/s rd, 4.0 KiB/s wr, 323 op/s
Oct 02 11:34:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:44 compute-0 ceph-mon[73607]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 02 11:34:44 compute-0 ceph-mon[73607]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 02 11:34:44 compute-0 ceph-mon[73607]: Deploying daemon keepalived.rgw.default.compute-2.ahfyxt on compute-2
Oct 02 11:34:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v131: 73 pgs: 73 active+clean; 454 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 157 KiB/s rd, 3.5 KiB/s wr, 286 op/s
Oct 02 11:34:44 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 3.3 deep-scrub starts
Oct 02 11:34:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:34:44 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 3.3 deep-scrub ok
Oct 02 11:34:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-2305db07499a22b7752b11cfd7da208bc522325312f4b131008e0fbfef81fee2-merged.mount: Deactivated successfully.
Oct 02 11:34:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:34:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:34:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:34:44.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:34:44 compute-0 podman[93469]: 2025-10-02 11:34:44.82011557 +0000 UTC m=+1.686135238 container remove cff018e96035a6d05bcf2ce0174eb26b558a7f2cdcf06a1abce5d9eab9dec555 (image=quay.io/ceph/ceph:v18, name=lucid_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 11:34:44 compute-0 sudo[93466]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:44 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:34:44 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:34:44 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:34:44 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:34:44 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:34:44 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:34:44 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:34:44 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:34:44 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 02 11:34:44 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:34:44 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 02 11:34:44 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:34:44 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 16 (current 1)
Oct 02 11:34:44 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:34:44 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 02 11:34:44 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:34:44 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 1)
Oct 02 11:34:44 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:34:44 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 1)
Oct 02 11:34:44 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:34:44 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 02 11:34:44 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:34:44 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Oct 02 11:34:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Oct 02 11:34:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:34:44 compute-0 systemd[1]: libpod-conmon-cff018e96035a6d05bcf2ce0174eb26b558a7f2cdcf06a1abce5d9eab9dec555.scope: Deactivated successfully.
Oct 02 11:34:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Oct 02 11:34:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2482171371' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct 02 11:34:45 compute-0 ceph-mon[73607]: pgmap v131: 73 pgs: 73 active+clean; 454 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 157 KiB/s rd, 3.5 KiB/s wr, 286 op/s
Oct 02 11:34:45 compute-0 ceph-mon[73607]: 3.3 deep-scrub starts
Oct 02 11:34:45 compute-0 ceph-mon[73607]: 3.3 deep-scrub ok
Oct 02 11:34:45 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:34:45 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:34:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Oct 02 11:34:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:34:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:34:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:34:45.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:34:45 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Oct 02 11:34:45 compute-0 ceph-mgr[73901]: [progress INFO root] update: starting ev 92cd8e79-129d-42c6-a28d-6fe5ba9190f1 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct 02 11:34:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Oct 02 11:34:45 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:34:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v133: 73 pgs: 73 active+clean; 454 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 142 KiB/s rd, 3.2 KiB/s wr, 259 op/s
Oct 02 11:34:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 02 11:34:46 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:34:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Oct 02 11:34:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:34:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:34:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:34:46.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:34:46 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:34:46 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:34:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Oct 02 11:34:46 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Oct 02 11:34:46 compute-0 ceph-mgr[73901]: [progress INFO root] update: starting ev aa15b7c6-a979-451d-8b6c-419e2cd7be22 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct 02 11:34:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Oct 02 11:34:46 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct 02 11:34:47 compute-0 ceph-mon[73607]: 2.9 scrub starts
Oct 02 11:34:47 compute-0 ceph-mon[73607]: 2.9 scrub ok
Oct 02 11:34:47 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:34:47 compute-0 ceph-mon[73607]: 2.1c scrub starts
Oct 02 11:34:47 compute-0 ceph-mon[73607]: osdmap e48: 3 total, 3 up, 3 in
Oct 02 11:34:47 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:34:47 compute-0 ceph-mon[73607]: 2.1c scrub ok
Oct 02 11:34:47 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:34:47 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Oct 02 11:34:47 compute-0 ceph-mgr[73901]: [progress WARNING root] Starting Global Recovery Event,31 pgs not in active + clean state
Oct 02 11:34:47 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Oct 02 11:34:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:34:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:34:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:34:47.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:34:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Oct 02 11:34:48 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct 02 11:34:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Oct 02 11:34:48 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Oct 02 11:34:48 compute-0 ceph-mgr[73901]: [progress INFO root] update: starting ev 07fde3f5-0a9a-4259-a29b-c9cd37872644 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Oct 02 11:34:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Oct 02 11:34:48 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:34:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v136: 104 pgs: 31 unknown, 73 active+clean; 454 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:34:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Oct 02 11:34:48 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct 02 11:34:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 02 11:34:48 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:34:48 compute-0 ceph-mon[73607]: 2.6 scrub starts
Oct 02 11:34:48 compute-0 ceph-mon[73607]: 2.6 scrub ok
Oct 02 11:34:48 compute-0 ceph-mon[73607]: pgmap v133: 73 pgs: 73 active+clean; 454 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 142 KiB/s rd, 3.2 KiB/s wr, 259 op/s
Oct 02 11:34:48 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:34:48 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:34:48 compute-0 ceph-mon[73607]: osdmap e49: 3 total, 3 up, 3 in
Oct 02 11:34:48 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct 02 11:34:48 compute-0 ceph-mon[73607]: 2.4 scrub starts
Oct 02 11:34:48 compute-0 ceph-mon[73607]: 2.4 scrub ok
Oct 02 11:34:48 compute-0 ceph-mon[73607]: 3.5 scrub starts
Oct 02 11:34:48 compute-0 ceph-mon[73607]: 3.5 scrub ok
Oct 02 11:34:48 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct 02 11:34:48 compute-0 ceph-mon[73607]: osdmap e50: 3 total, 3 up, 3 in
Oct 02 11:34:48 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:34:48 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 3.a scrub starts
Oct 02 11:34:48 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 3.a scrub ok
Oct 02 11:34:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:34:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:34:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:34:48.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:34:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Oct 02 11:34:49 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:34:49 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct 02 11:34:49 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:34:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Oct 02 11:34:49 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Oct 02 11:34:49 compute-0 ceph-mgr[73901]: [progress INFO root] update: starting ev bc54bd1e-411f-477f-ae0a-e523ffdc960e (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct 02 11:34:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Oct 02 11:34:49 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:34:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:34:49 compute-0 ceph-mon[73607]: pgmap v136: 104 pgs: 31 unknown, 73 active+clean; 454 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:34:49 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct 02 11:34:49 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:34:49 compute-0 ceph-mon[73607]: 3.a scrub starts
Oct 02 11:34:49 compute-0 ceph-mon[73607]: 3.a scrub ok
Oct 02 11:34:49 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Oct 02 11:34:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:34:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:34:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:34:49.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:34:49 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Oct 02 11:34:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v138: 150 pgs: 46 unknown, 104 active+clean; 454 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:34:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 02 11:34:50 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:34:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Oct 02 11:34:50 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:34:50 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:34:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Oct 02 11:34:50 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Oct 02 11:34:50 compute-0 ceph-mgr[73901]: [progress INFO root] update: starting ev 6aab5b41-6f57-4606-a677-927562f7113d (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct 02 11:34:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Oct 02 11:34:50 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:34:50 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 52 pg[7.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=52 pruub=15.725957870s) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active pruub 127.495903015s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:34:50 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 52 pg[7.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=52 pruub=15.725957870s) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown pruub 127.495903015s@ mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:50 compute-0 ceph-mon[73607]: 2.e scrub starts
Oct 02 11:34:50 compute-0 ceph-mon[73607]: 2.e scrub ok
Oct 02 11:34:50 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:34:50 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct 02 11:34:50 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:34:50 compute-0 ceph-mon[73607]: osdmap e51: 3 total, 3 up, 3 in
Oct 02 11:34:50 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:34:50 compute-0 ceph-mon[73607]: 3.1c scrub starts
Oct 02 11:34:50 compute-0 ceph-mon[73607]: 3.1c scrub ok
Oct 02 11:34:50 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:34:50 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:34:50 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:34:50 compute-0 ceph-mon[73607]: osdmap e52: 3 total, 3 up, 3 in
Oct 02 11:34:50 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:34:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:34:50 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:34:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:34:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:34:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:34:50.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:34:50 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Oct 02 11:34:50 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:50 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 02 11:34:50 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 02 11:34:50 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 02 11:34:50 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 02 11:34:50 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.dcvgot on compute-0
Oct 02 11:34:50 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.dcvgot on compute-0
Oct 02 11:34:50 compute-0 sudo[93523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:50 compute-0 sudo[93523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:50 compute-0 sudo[93523]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:51 compute-0 sudo[93548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:34:51 compute-0 sudo[93548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:51 compute-0 sudo[93548]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:51 compute-0 sudo[93573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:51 compute-0 sudo[93573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:51 compute-0 sudo[93573]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:51 compute-0 sudo[93598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct 02 11:34:51 compute-0 sudo[93598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Oct 02 11:34:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:34:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:34:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:34:51.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:34:51 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:34:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Oct 02 11:34:51 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Oct 02 11:34:51 compute-0 ceph-mgr[73901]: [progress INFO root] update: starting ev 5b4c5cef-78a9-479c-96cf-326ad83e22eb (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct 02 11:34:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Oct 02 11:34:51 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.1b( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.1c( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.1d( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.1e( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.1a( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.19( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.16( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.15( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.f( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.c( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.a( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.3( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.4( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.18( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.e( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.d( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.2( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.7( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.5( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.6( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.9( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.1( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.8( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.14( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.b( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.11( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.17( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.10( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.13( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.12( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.1f( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:51 compute-0 ceph-mon[73607]: 2.19 scrub starts
Oct 02 11:34:51 compute-0 ceph-mon[73607]: 2.19 scrub ok
Oct 02 11:34:51 compute-0 ceph-mon[73607]: pgmap v138: 150 pgs: 46 unknown, 104 active+clean; 454 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:34:51 compute-0 ceph-mon[73607]: 2.1d scrub starts
Oct 02 11:34:51 compute-0 ceph-mon[73607]: 2.1d scrub ok
Oct 02 11:34:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.1d( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.19( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.16( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.1e( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.15( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.1c( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.f( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.c( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.1a( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.e( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.4( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.18( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.3( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.d( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.0( empty local-lis/les=52/53 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.1b( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.a( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.5( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.2( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.6( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.7( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.1( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.9( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.17( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.11( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.8( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.14( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.13( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.10( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.b( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.1f( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:51 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 53 pg[7.12( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [1] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v141: 181 pgs: 77 unknown, 104 active+clean; 454 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:34:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 02 11:34:52 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:34:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 02 11:34:52 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:34:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Oct 02 11:34:52 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:34:52 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:34:52 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:34:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Oct 02 11:34:52 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Oct 02 11:34:52 compute-0 ceph-mgr[73901]: [progress INFO root] update: starting ev 5aa44c0f-149b-4a8e-96f8-ff746bf51a7b (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct 02 11:34:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Oct 02 11:34:52 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:34:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:34:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:34:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:34:52.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:34:52 compute-0 ceph-mon[73607]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 02 11:34:52 compute-0 ceph-mon[73607]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 02 11:34:52 compute-0 ceph-mon[73607]: Deploying daemon keepalived.rgw.default.compute-0.dcvgot on compute-0
Oct 02 11:34:52 compute-0 ceph-mon[73607]: 4.1 scrub starts
Oct 02 11:34:52 compute-0 ceph-mon[73607]: 4.1 scrub ok
Oct 02 11:34:52 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:34:52 compute-0 ceph-mon[73607]: osdmap e53: 3 total, 3 up, 3 in
Oct 02 11:34:52 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:34:52 compute-0 ceph-mon[73607]: pgmap v141: 181 pgs: 77 unknown, 104 active+clean; 454 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:34:52 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:34:52 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:34:52 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:34:52 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:34:52 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:34:52 compute-0 ceph-mon[73607]: osdmap e54: 3 total, 3 up, 3 in
Oct 02 11:34:52 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:34:53 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Oct 02 11:34:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:34:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:34:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:34:53.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:34:53 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Oct 02 11:34:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Oct 02 11:34:53 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:34:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Oct 02 11:34:53 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Oct 02 11:34:53 compute-0 ceph-mgr[73901]: [progress INFO root] update: starting ev a0c34675-1d45-458c-a3a3-b0c6104244cf (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct 02 11:34:53 compute-0 ceph-mgr[73901]: [progress INFO root] complete: finished ev 92cd8e79-129d-42c6-a28d-6fe5ba9190f1 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct 02 11:34:53 compute-0 ceph-mgr[73901]: [progress INFO root] Completed event 92cd8e79-129d-42c6-a28d-6fe5ba9190f1 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 8 seconds
Oct 02 11:34:53 compute-0 ceph-mgr[73901]: [progress INFO root] complete: finished ev aa15b7c6-a979-451d-8b6c-419e2cd7be22 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct 02 11:34:53 compute-0 ceph-mgr[73901]: [progress INFO root] Completed event aa15b7c6-a979-451d-8b6c-419e2cd7be22 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 7 seconds
Oct 02 11:34:53 compute-0 ceph-mgr[73901]: [progress INFO root] complete: finished ev 07fde3f5-0a9a-4259-a29b-c9cd37872644 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Oct 02 11:34:53 compute-0 ceph-mgr[73901]: [progress INFO root] Completed event 07fde3f5-0a9a-4259-a29b-c9cd37872644 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 6 seconds
Oct 02 11:34:53 compute-0 ceph-mgr[73901]: [progress INFO root] complete: finished ev bc54bd1e-411f-477f-ae0a-e523ffdc960e (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct 02 11:34:53 compute-0 ceph-mgr[73901]: [progress INFO root] Completed event bc54bd1e-411f-477f-ae0a-e523ffdc960e (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Oct 02 11:34:53 compute-0 ceph-mgr[73901]: [progress INFO root] complete: finished ev 6aab5b41-6f57-4606-a677-927562f7113d (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct 02 11:34:53 compute-0 ceph-mgr[73901]: [progress INFO root] Completed event 6aab5b41-6f57-4606-a677-927562f7113d (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Oct 02 11:34:53 compute-0 ceph-mgr[73901]: [progress INFO root] complete: finished ev 5b4c5cef-78a9-479c-96cf-326ad83e22eb (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct 02 11:34:53 compute-0 ceph-mgr[73901]: [progress INFO root] Completed event 5b4c5cef-78a9-479c-96cf-326ad83e22eb (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Oct 02 11:34:53 compute-0 ceph-mgr[73901]: [progress INFO root] complete: finished ev 5aa44c0f-149b-4a8e-96f8-ff746bf51a7b (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct 02 11:34:53 compute-0 ceph-mgr[73901]: [progress INFO root] Completed event 5aa44c0f-149b-4a8e-96f8-ff746bf51a7b (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Oct 02 11:34:53 compute-0 ceph-mgr[73901]: [progress INFO root] complete: finished ev a0c34675-1d45-458c-a3a3-b0c6104244cf (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct 02 11:34:53 compute-0 ceph-mgr[73901]: [progress INFO root] Completed event a0c34675-1d45-458c-a3a3-b0c6104244cf (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Oct 02 11:34:53 compute-0 ceph-mon[73607]: 7.1 scrub starts
Oct 02 11:34:53 compute-0 ceph-mon[73607]: 7.1 scrub ok
Oct 02 11:34:53 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:34:53 compute-0 ceph-mon[73607]: osdmap e55: 3 total, 3 up, 3 in
Oct 02 11:34:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v144: 243 pgs: 31 unknown, 212 active+clean; 454 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:34:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 02 11:34:54 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:34:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 02 11:34:54 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:34:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:34:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:34:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:34:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:34:54.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:34:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Oct 02 11:34:55 compute-0 ceph-mon[73607]: pgmap v144: 243 pgs: 31 unknown, 212 active+clean; 454 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:34:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:34:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:34:55 compute-0 podman[93665]: 2025-10-02 11:34:55.173107097 +0000 UTC m=+3.780112350 container create 1d5a2e31f2b71402bd6d5250fbbc22cee4c8f269fbe07b6ae0e018774ec0706a (image=quay.io/ceph/keepalived:2.2.4, name=eager_shaw, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.28.2, description=keepalived for Ceph, distribution-scope=public, architecture=x86_64, name=keepalived, version=2.2.4, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Oct 02 11:34:55 compute-0 systemd[1]: Started libpod-conmon-1d5a2e31f2b71402bd6d5250fbbc22cee4c8f269fbe07b6ae0e018774ec0706a.scope.
Oct 02 11:34:55 compute-0 podman[93665]: 2025-10-02 11:34:55.155578464 +0000 UTC m=+3.762583737 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct 02 11:34:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:34:55 compute-0 podman[93665]: 2025-10-02 11:34:55.242483243 +0000 UTC m=+3.849488516 container init 1d5a2e31f2b71402bd6d5250fbbc22cee4c8f269fbe07b6ae0e018774ec0706a (image=quay.io/ceph/keepalived:2.2.4, name=eager_shaw, io.buildah.version=1.28.2, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, version=2.2.4, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, architecture=x86_64, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public)
Oct 02 11:34:55 compute-0 podman[93665]: 2025-10-02 11:34:55.248541533 +0000 UTC m=+3.855546786 container start 1d5a2e31f2b71402bd6d5250fbbc22cee4c8f269fbe07b6ae0e018774ec0706a (image=quay.io/ceph/keepalived:2.2.4, name=eager_shaw, vcs-type=git, io.buildah.version=1.28.2, distribution-scope=public, version=2.2.4, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived)
Oct 02 11:34:55 compute-0 podman[93665]: 2025-10-02 11:34:55.251899215 +0000 UTC m=+3.858904468 container attach 1d5a2e31f2b71402bd6d5250fbbc22cee4c8f269fbe07b6ae0e018774ec0706a (image=quay.io/ceph/keepalived:2.2.4, name=eager_shaw, distribution-scope=public, name=keepalived, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, version=2.2.4, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, vendor=Red Hat, Inc., release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 02 11:34:55 compute-0 eager_shaw[93762]: 0 0
Oct 02 11:34:55 compute-0 systemd[1]: libpod-1d5a2e31f2b71402bd6d5250fbbc22cee4c8f269fbe07b6ae0e018774ec0706a.scope: Deactivated successfully.
Oct 02 11:34:55 compute-0 podman[93665]: 2025-10-02 11:34:55.254989872 +0000 UTC m=+3.861995145 container died 1d5a2e31f2b71402bd6d5250fbbc22cee4c8f269fbe07b6ae0e018774ec0706a (image=quay.io/ceph/keepalived:2.2.4, name=eager_shaw, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, name=keepalived, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, release=1793, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20)
Oct 02 11:34:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9f8c16c51395befb0e77ad885637177f14230f9c80d56c42afaf8c9ad0235b2-merged.mount: Deactivated successfully.
Oct 02 11:34:55 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:34:55 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:34:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Oct 02 11:34:55 compute-0 podman[93665]: 2025-10-02 11:34:55.301286397 +0000 UTC m=+3.908291650 container remove 1d5a2e31f2b71402bd6d5250fbbc22cee4c8f269fbe07b6ae0e018774ec0706a (image=quay.io/ceph/keepalived:2.2.4, name=eager_shaw, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, vcs-type=git, version=2.2.4, description=keepalived for Ceph, architecture=x86_64, name=keepalived, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, distribution-scope=public, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph.)
Oct 02 11:34:55 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Oct 02 11:34:55 compute-0 systemd[1]: libpod-conmon-1d5a2e31f2b71402bd6d5250fbbc22cee4c8f269fbe07b6ae0e018774ec0706a.scope: Deactivated successfully.
Oct 02 11:34:55 compute-0 systemd[1]: Reloading.
Oct 02 11:34:55 compute-0 systemd-rc-local-generator[93810]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:34:55 compute-0 systemd-sysv-generator[93813]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:34:55 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 7.2 deep-scrub starts
Oct 02 11:34:55 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 7.2 deep-scrub ok
Oct 02 11:34:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:34:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:34:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:34:55.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:34:55 compute-0 systemd[1]: Reloading.
Oct 02 11:34:55 compute-0 systemd-rc-local-generator[93849]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:34:55 compute-0 systemd-sysv-generator[93852]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:34:55 compute-0 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.dcvgot for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2...
Oct 02 11:34:55 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 56 pg[10.0( v 44'48 (0'0,44'48] local-lis/les=43/44 n=8 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=9.198925018s) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 44'47 mlcod 44'47 active pruub 126.504615784s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:34:55 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 56 pg[10.0( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=0 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=9.198925018s) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 44'47 mlcod 0'0 unknown pruub 126.504615784s@ mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 podman[93904]: 2025-10-02 11:34:56.096163811 +0000 UTC m=+0.058408295 container create a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, version=2.2.4, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., architecture=x86_64, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, io.openshift.expose-services=, release=1793, name=keepalived)
Oct 02 11:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a39eb1a86f67c7abadffc22663d45300b35ece8b09561a433b6d29f31ce26233/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:34:56 compute-0 podman[93904]: 2025-10-02 11:34:56.05728998 +0000 UTC m=+0.019534484 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct 02 11:34:56 compute-0 podman[93904]: 2025-10-02 11:34:56.207308159 +0000 UTC m=+0.169552663 container init a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=keepalived-container, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, architecture=x86_64, version=2.2.4, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph)
Oct 02 11:34:56 compute-0 ceph-mon[73607]: 4.2 deep-scrub starts
Oct 02 11:34:56 compute-0 ceph-mon[73607]: 4.2 deep-scrub ok
Oct 02 11:34:56 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:34:56 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:34:56 compute-0 ceph-mon[73607]: osdmap e56: 3 total, 3 up, 3 in
Oct 02 11:34:56 compute-0 ceph-mon[73607]: 2.b scrub starts
Oct 02 11:34:56 compute-0 ceph-mon[73607]: 2.b scrub ok
Oct 02 11:34:56 compute-0 ceph-mon[73607]: 7.2 deep-scrub starts
Oct 02 11:34:56 compute-0 ceph-mon[73607]: 7.2 deep-scrub ok
Oct 02 11:34:56 compute-0 podman[93904]: 2025-10-02 11:34:56.216891586 +0000 UTC m=+0.179136100 container start a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, release=1793, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, name=keepalived, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, io.openshift.tags=Ceph keepalived, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph.)
Oct 02 11:34:56 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot[93919]: Thu Oct  2 11:34:56 2025: Starting Keepalived v2.2.4 (08/21,2021)
Oct 02 11:34:56 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot[93919]: Thu Oct  2 11:34:56 2025: Running on Linux 5.14.0-620.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025 (built for Linux 5.14.0)
Oct 02 11:34:56 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot[93919]: Thu Oct  2 11:34:56 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Oct 02 11:34:56 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot[93919]: Thu Oct  2 11:34:56 2025: Configuration file /etc/keepalived/keepalived.conf
Oct 02 11:34:56 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot[93919]: Thu Oct  2 11:34:56 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Oct 02 11:34:56 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot[93919]: Thu Oct  2 11:34:56 2025: Starting VRRP child process, pid=4
Oct 02 11:34:56 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot[93919]: Thu Oct  2 11:34:56 2025: Startup complete
Oct 02 11:34:56 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot[93919]: Thu Oct  2 11:34:56 2025: (VI_0) Entering BACKUP STATE (init)
Oct 02 11:34:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v146: 305 pgs: 62 unknown, 243 active+clean; 454 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:34:56 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot[93919]: Thu Oct  2 11:34:56 2025: VRRP_Script(check_backend) succeeded
Oct 02 11:34:56 compute-0 bash[93904]: a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9
Oct 02 11:34:56 compute-0 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.dcvgot for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2.
Oct 02 11:34:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Oct 02 11:34:56 compute-0 sudo[93598]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:34:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Oct 02 11:34:56 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.1e( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.12( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.1d( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.1c( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.1a( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.19( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.6( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=1 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.5( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=1 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.4( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=1 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.b( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.1f( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.8( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=1 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.a( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.c( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.f( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.d( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.3( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=1 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.15( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.e( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.7( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=1 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.1( v 44'48 (0'0,44'48] local-lis/les=43/44 n=1 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.9( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.2( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=1 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.18( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.1b( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.17( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.16( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.10( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.13( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.14( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.11( v 44'48 lc 0'0 (0'0,44'48] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.1e( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:34:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:34:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:34:56.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.12( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.1c( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.1a( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.6( v 44'48 (0'0,44'48] local-lis/les=56/57 n=1 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.5( v 44'48 (0'0,44'48] local-lis/les=56/57 n=1 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.4( v 44'48 (0'0,44'48] local-lis/les=56/57 n=1 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.1d( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.b( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.19( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.8( v 44'48 (0'0,44'48] local-lis/les=56/57 n=1 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.a( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.c( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.0( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 44'47 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.f( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.d( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.3( v 44'48 (0'0,44'48] local-lis/les=56/57 n=1 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.15( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.1f( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.1( v 44'48 (0'0,44'48] local-lis/les=56/57 n=1 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.7( v 44'48 (0'0,44'48] local-lis/les=56/57 n=1 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.e( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.2( v 44'48 (0'0,44'48] local-lis/les=56/57 n=1 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.9( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.18( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.17( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.16( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.10( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.1b( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.13( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.14( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 57 pg[10.11( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:34:56 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:34:56 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Oct 02 11:34:57 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:57 compute-0 ceph-mgr[73901]: [progress INFO root] complete: finished ev ea40d521-4432-4eeb-b8bd-39b448f82e8c (Updating ingress.rgw.default deployment (+4 -> 4))
Oct 02 11:34:57 compute-0 ceph-mgr[73901]: [progress INFO root] Completed event ea40d521-4432-4eeb-b8bd-39b448f82e8c (Updating ingress.rgw.default deployment (+4 -> 4)) in 26 seconds
Oct 02 11:34:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Oct 02 11:34:57 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:57 compute-0 sudo[93927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:57 compute-0 sudo[93928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:57 compute-0 sudo[93927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:57 compute-0 sudo[93928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:57 compute-0 sudo[93927]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:57 compute-0 sudo[93928]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:57 compute-0 sudo[93977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:34:57 compute-0 sudo[93978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:57 compute-0 sudo[93977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:57 compute-0 sudo[93978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:57 compute-0 sudo[93977]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:57 compute-0 sudo[93978]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:57 compute-0 ceph-mgr[73901]: [progress INFO root] Writing back 19 completed events
Oct 02 11:34:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 11:34:57 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:34:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:34:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:34:57.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:34:57 compute-0 ceph-mon[73607]: pgmap v146: 305 pgs: 62 unknown, 243 active+clean; 454 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:34:57 compute-0 ceph-mon[73607]: 3.9 deep-scrub starts
Oct 02 11:34:57 compute-0 ceph-mon[73607]: 3.9 deep-scrub ok
Oct 02 11:34:57 compute-0 ceph-mon[73607]: osdmap e57: 3 total, 3 up, 3 in
Oct 02 11:34:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:57 compute-0 sudo[94027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:57 compute-0 sudo[94027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:57 compute-0 sudo[94027]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:57 compute-0 sudo[94052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:34:57 compute-0 sudo[94052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:57 compute-0 sudo[94052]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:57 compute-0 sudo[94077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:57 compute-0 sudo[94077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:57 compute-0 sudo[94077]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:57 compute-0 sudo[94102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 11:34:57 compute-0 sudo[94102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v148: 305 pgs: 62 unknown, 243 active+clean; 454 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:34:58 compute-0 podman[94202]: 2025-10-02 11:34:58.349864687 +0000 UTC m=+0.061070402 container exec 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:34:58 compute-0 podman[94202]: 2025-10-02 11:34:58.442666171 +0000 UTC m=+0.153871866 container exec_died 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:34:58 compute-0 ceph-mon[73607]: 4.3 scrub starts
Oct 02 11:34:58 compute-0 ceph-mon[73607]: 4.3 scrub ok
Oct 02 11:34:58 compute-0 ceph-mon[73607]: 3.11 scrub starts
Oct 02 11:34:58 compute-0 ceph-mon[73607]: 3.11 scrub ok
Oct 02 11:34:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:34:58 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:34:58 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:34:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:34:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:34:58.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:34:59 compute-0 podman[94339]: 2025-10-02 11:34:59.076181316 +0000 UTC m=+0.050575532 container exec 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 11:34:59 compute-0 podman[94339]: 2025-10-02 11:34:59.085106246 +0000 UTC m=+0.059500452 container exec_died 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 11:34:59 compute-0 podman[94407]: 2025-10-02 11:34:59.270032648 +0000 UTC m=+0.047776502 container exec a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=keepalived-container, name=keepalived, architecture=x86_64, distribution-scope=public, build-date=2023-02-22T09:23:20, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph)
Oct 02 11:34:59 compute-0 podman[94407]: 2025-10-02 11:34:59.281297167 +0000 UTC m=+0.059041001 container exec_died a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, release=1793, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20)
Oct 02 11:34:59 compute-0 sudo[94102]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:59 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Oct 02 11:34:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:34:59 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:34:59 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Oct 02 11:34:59 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:34:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:34:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:34:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:34:59.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:34:59 compute-0 ceph-mon[73607]: 4.4 scrub starts
Oct 02 11:34:59 compute-0 ceph-mon[73607]: 4.4 scrub ok
Oct 02 11:34:59 compute-0 ceph-mon[73607]: pgmap v148: 305 pgs: 62 unknown, 243 active+clean; 454 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:34:59 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:59 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:59 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:59 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:34:59 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:34:59 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:34:59 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:34:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:34:59 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:34:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:34:59 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:34:59 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev cdafeac6-4ced-4cb6-9eee-4f71232f71f4 does not exist
Oct 02 11:34:59 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 483a70b9-3d05-4942-b892-415f62566c10 does not exist
Oct 02 11:34:59 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 17cb0dac-b600-41bc-be4b-6862e6747e38 does not exist
Oct 02 11:34:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:34:59 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:34:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:34:59 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:34:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:34:59 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:34:59 compute-0 sudo[94437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:59 compute-0 sudo[94437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:59 compute-0 sudo[94437]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:59 compute-0 sudo[94462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:34:59 compute-0 sudo[94462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:59 compute-0 sudo[94462]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:59 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot[93919]: Thu Oct  2 11:34:59 2025: (VI_0) Entering MASTER STATE
Oct 02 11:34:59 compute-0 sudo[94487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:34:59 compute-0 sudo[94487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:34:59 compute-0 sudo[94487]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:59 compute-0 sudo[94512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:34:59 compute-0 sudo[94512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v149: 305 pgs: 305 active+clean; 454 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:35:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 02 11:35:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:35:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 02 11:35:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:35:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 02 11:35:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:35:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Oct 02 11:35:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 02 11:35:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 02 11:35:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:35:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Oct 02 11:35:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 02 11:35:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 02 11:35:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:35:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 02 11:35:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:35:00 compute-0 podman[94575]: 2025-10-02 11:35:00.275014998 +0000 UTC m=+0.106486315 container create 3bf1591cac1d36b6891be18cebfddd8a62aadcacfc82866cfbe21fe375dbde53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_newton, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:35:00 compute-0 podman[94575]: 2025-10-02 11:35:00.190111308 +0000 UTC m=+0.021582655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:35:00 compute-0 systemd[1]: Started libpod-conmon-3bf1591cac1d36b6891be18cebfddd8a62aadcacfc82866cfbe21fe375dbde53.scope.
Oct 02 11:35:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:35:00 compute-0 podman[94575]: 2025-10-02 11:35:00.38958388 +0000 UTC m=+0.221055217 container init 3bf1591cac1d36b6891be18cebfddd8a62aadcacfc82866cfbe21fe375dbde53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_newton, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Oct 02 11:35:00 compute-0 podman[94575]: 2025-10-02 11:35:00.396546753 +0000 UTC m=+0.228018070 container start 3bf1591cac1d36b6891be18cebfddd8a62aadcacfc82866cfbe21fe375dbde53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_newton, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:35:00 compute-0 funny_newton[94592]: 167 167
Oct 02 11:35:00 compute-0 systemd[1]: libpod-3bf1591cac1d36b6891be18cebfddd8a62aadcacfc82866cfbe21fe375dbde53.scope: Deactivated successfully.
Oct 02 11:35:00 compute-0 podman[94575]: 2025-10-02 11:35:00.429303472 +0000 UTC m=+0.260774819 container attach 3bf1591cac1d36b6891be18cebfddd8a62aadcacfc82866cfbe21fe375dbde53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_newton, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 11:35:00 compute-0 podman[94575]: 2025-10-02 11:35:00.429659161 +0000 UTC m=+0.261130478 container died 3bf1591cac1d36b6891be18cebfddd8a62aadcacfc82866cfbe21fe375dbde53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 11:35:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:35:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:00.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:35:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Oct 02 11:35:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-62c8571e4b7cc37ed2b12f04f6f4d807a9e38457db65f3835f0610996ecb88ec-merged.mount: Deactivated successfully.
Oct 02 11:35:00 compute-0 ceph-mon[73607]: 7.3 scrub starts
Oct 02 11:35:00 compute-0 ceph-mon[73607]: 7.3 scrub ok
Oct 02 11:35:00 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:00 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:00 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:00 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:35:00 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:00 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:35:00 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:35:00 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:00 compute-0 ceph-mon[73607]: pgmap v149: 305 pgs: 305 active+clean; 454 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:35:00 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:35:00 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:35:00 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:35:00 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 02 11:35:00 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:35:00 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 02 11:35:00 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:35:00 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:35:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:35:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:35:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:35:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 02 11:35:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:35:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 02 11:35:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:35:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:35:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Oct 02 11:35:00 compute-0 podman[94575]: 2025-10-02 11:35:00.862221606 +0000 UTC m=+0.693692923 container remove 3bf1591cac1d36b6891be18cebfddd8a62aadcacfc82866cfbe21fe375dbde53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:35:00 compute-0 systemd[1]: libpod-conmon-3bf1591cac1d36b6891be18cebfddd8a62aadcacfc82866cfbe21fe375dbde53.scope: Deactivated successfully.
Oct 02 11:35:00 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[8.10( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[11.1e( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[5.10( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[5.11( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[4.13( empty local-lis/les=0/0 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[11.1c( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[11.1d( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[8.18( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[11.1b( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[5.15( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[8.1b( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[5.16( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[5.9( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[11.7( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[6.a( empty local-lis/les=0/0 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[8.4( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[11.4( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[11.5( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[6.8( empty local-lis/les=0/0 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[4.a( empty local-lis/les=0/0 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[4.5( empty local-lis/les=0/0 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[6.7( empty local-lis/les=0/0 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[5.7( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[8.8( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[5.2( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[6.3( empty local-lis/les=0/0 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[11.1( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[4.e( empty local-lis/les=0/0 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[5.f( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[4.d( empty local-lis/les=0/0 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[11.14( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[8.17( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[4.1b( empty local-lis/les=0/0 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[11.f( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[5.1( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[6.2( empty local-lis/les=0/0 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[6.5( empty local-lis/les=0/0 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[6.d( empty local-lis/les=0/0 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[4.c( empty local-lis/les=0/0 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[6.e( empty local-lis/les=0/0 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[11.1a( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[8.19( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[4.1a( empty local-lis/les=0/0 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[5.1b( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[5.18( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[8.14( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[4.18( empty local-lis/les=0/0 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[5.1f( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[8.12( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[11.12( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[5.1c( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.1e( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.905999184s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 137.213394165s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.1e( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.905971527s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.213394165s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.13( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.829068184s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 active pruub 134.136581421s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.13( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.829051971s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.136581421s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.1d( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.852828979s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 137.160446167s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.1d( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.852810860s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.160446167s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.10( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.828832626s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 active pruub 134.136535645s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.10( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.828819275s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.136535645s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.11( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.828839302s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 active pruub 134.136627197s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.11( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.828823090s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.136627197s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.1b( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.906061172s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 137.213973999s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.1b( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.906041145s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.213973999s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.14( v 57'51 (0'0,57'51] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.828481674s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=57'49 lcod 57'50 mlcod 57'50 active pruub 134.136596680s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.16( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.905237198s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 137.213409424s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.1b( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.828388214s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 active pruub 134.136566162s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.16( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.905205727s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.213409424s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.18( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.828174591s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 active pruub 134.136383057s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.14( v 57'51 (0'0,57'51] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.828390121s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=57'49 lcod 57'50 mlcod 0'0 unknown NOTIFY pruub 134.136596680s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.18( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.828142166s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.136383057s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.1b( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.828342438s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.136566162s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.f( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.905276299s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 137.213592529s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.a( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.905575752s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 137.213973999s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.1( v 44'48 (0'0,44'48] local-lis/les=56/57 n=1 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.827788353s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 active pruub 134.136154175s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.2( v 44'48 (0'0,44'48] local-lis/les=56/57 n=1 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.827866554s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 active pruub 134.136230469s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.a( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.905539513s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.213973999s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.4( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.905262947s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 137.213714600s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.2( v 44'48 (0'0,44'48] local-lis/les=56/57 n=1 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.827801704s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.136230469s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.4( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.905242920s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.213714600s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.1( v 44'48 (0'0,44'48] local-lis/les=56/57 n=1 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.827716827s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.136154175s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.18( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.905190468s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 137.213806152s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.3( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.905254364s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 137.213851929s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.18( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.905173302s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.213806152s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.15( v 57'51 (0'0,57'51] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.827301025s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=57'49 lcod 57'50 mlcod 57'50 active pruub 134.135910034s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.e( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.905086517s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 137.213745117s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.3( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.905210495s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.213851929s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.e( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.905069351s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.213745117s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.15( v 57'51 (0'0,57'51] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.827236176s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=57'49 lcod 57'50 mlcod 0'0 unknown NOTIFY pruub 134.135910034s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.3( v 57'51 (0'0,57'51] local-lis/les=56/57 n=1 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.827174187s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=57'49 lcod 57'50 mlcod 57'50 active pruub 134.135894775s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.3( v 57'51 (0'0,57'51] local-lis/les=56/57 n=1 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.827136040s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=57'49 lcod 57'50 mlcod 0'0 unknown NOTIFY pruub 134.135894775s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.2( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.905237198s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 137.214050293s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.2( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.905187607s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.214050293s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.5( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.905039787s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 137.214019775s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.f( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.905237198s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.213592529s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.6( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.905000687s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 137.214050293s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.6( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.904979706s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.214050293s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.8( v 44'48 (0'0,44'48] local-lis/les=56/57 n=1 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.826545715s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 active pruub 134.135711670s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.5( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.904874802s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.214019775s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.8( v 44'48 (0'0,44'48] local-lis/les=56/57 n=1 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.826521873s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.135711670s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.9( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.904745102s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 137.214080811s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.9( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.904714584s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.214080811s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.4( v 44'48 (0'0,44'48] local-lis/les=56/57 n=1 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.825245857s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 active pruub 134.134674072s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.8( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.904638290s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 137.214126587s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.4( v 44'48 (0'0,44'48] local-lis/les=56/57 n=1 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.825211525s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.134674072s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.8( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.904608727s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.214126587s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.5( v 44'48 (0'0,44'48] local-lis/les=56/57 n=1 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.824265480s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 active pruub 134.133895874s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.5( v 44'48 (0'0,44'48] local-lis/les=56/57 n=1 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.824242592s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.133895874s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.b( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.904479027s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 137.214157104s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.b( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.904454231s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.214157104s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.14( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.904362679s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 137.214141846s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.f( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.826979637s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 active pruub 134.135787964s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.19( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.825795174s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 active pruub 134.135696411s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.14( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.904219627s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.214141846s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.19( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.825757027s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.135696411s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.11( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.904037476s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 137.214126587s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.f( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.825917244s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.135787964s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.10( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.903993607s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 137.214126587s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.11( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.904006004s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.214126587s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.10( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.903955460s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.214126587s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.13( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.903903961s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 137.214141846s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.13( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.903882027s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.214141846s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.1e( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.691933632s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 active pruub 134.002258301s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.1e( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.691891670s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.002258301s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.1f( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.903700829s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 137.214202881s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[7.1f( empty local-lis/les=52/53 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.903676033s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.214202881s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.12( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.823058128s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 active pruub 134.133697510s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:00 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 58 pg[10.12( v 44'48 (0'0,44'48] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.823003769s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.133697510s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:01 compute-0 podman[94616]: 2025-10-02 11:35:00.997081692 +0000 UTC m=+0.023338879 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:35:01 compute-0 podman[94616]: 2025-10-02 11:35:01.118901873 +0000 UTC m=+0.145159040 container create c6dbd04d18fe99064c98829011825f3ecb99822631ded592bfa1535aceab53ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 11:35:01 compute-0 systemd[1]: Started libpod-conmon-c6dbd04d18fe99064c98829011825f3ecb99822631ded592bfa1535aceab53ec.scope.
Oct 02 11:35:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:35:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d8a66f79105e8d1971408e9c5ff2b846edaa1143f216c692f3168bf6472ed29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d8a66f79105e8d1971408e9c5ff2b846edaa1143f216c692f3168bf6472ed29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d8a66f79105e8d1971408e9c5ff2b846edaa1143f216c692f3168bf6472ed29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d8a66f79105e8d1971408e9c5ff2b846edaa1143f216c692f3168bf6472ed29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d8a66f79105e8d1971408e9c5ff2b846edaa1143f216c692f3168bf6472ed29/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:01 compute-0 podman[94616]: 2025-10-02 11:35:01.199357162 +0000 UTC m=+0.225614359 container init c6dbd04d18fe99064c98829011825f3ecb99822631ded592bfa1535aceab53ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jackson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:35:01 compute-0 podman[94616]: 2025-10-02 11:35:01.205819713 +0000 UTC m=+0.232076880 container start c6dbd04d18fe99064c98829011825f3ecb99822631ded592bfa1535aceab53ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jackson, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:35:01 compute-0 podman[94616]: 2025-10-02 11:35:01.210085628 +0000 UTC m=+0.236342815 container attach c6dbd04d18fe99064c98829011825f3ecb99822631ded592bfa1535aceab53ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jackson, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 11:35:01 compute-0 PackageKit[31123]: daemon quit
Oct 02 11:35:01 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Oct 02 11:35:01 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 02 11:35:01 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Oct 02 11:35:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:01.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:01 compute-0 ceph-mon[73607]: 2.f scrub starts
Oct 02 11:35:01 compute-0 ceph-mon[73607]: 2.f scrub ok
Oct 02 11:35:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:35:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:35:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:35:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 02 11:35:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:35:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 02 11:35:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:35:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:35:01 compute-0 ceph-mon[73607]: osdmap e58: 3 total, 3 up, 3 in
Oct 02 11:35:01 compute-0 ceph-mon[73607]: 7.7 scrub starts
Oct 02 11:35:01 compute-0 ceph-mon[73607]: 7.7 scrub ok
Oct 02 11:35:01 compute-0 ceph-mon[73607]: 3.15 scrub starts
Oct 02 11:35:01 compute-0 ceph-mon[73607]: 3.15 scrub ok
Oct 02 11:35:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Oct 02 11:35:01 compute-0 eager_jackson[94632]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:35:01 compute-0 eager_jackson[94632]: --> relative data size: 1.0
Oct 02 11:35:01 compute-0 eager_jackson[94632]: --> All data devices are unavailable
Oct 02 11:35:02 compute-0 systemd[1]: libpod-c6dbd04d18fe99064c98829011825f3ecb99822631ded592bfa1535aceab53ec.scope: Deactivated successfully.
Oct 02 11:35:02 compute-0 podman[94616]: 2025-10-02 11:35:02.001431875 +0000 UTC m=+1.027689052 container died c6dbd04d18fe99064c98829011825f3ecb99822631ded592bfa1535aceab53ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jackson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:35:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Oct 02 11:35:02 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[11.12( empty local-lis/les=58/59 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[8.12( v 40'6 (0'0,40'6] local-lis/les=58/59 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[5.1f( empty local-lis/les=58/59 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[5.1c( empty local-lis/les=58/59 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[4.18( empty local-lis/les=58/59 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[8.14( v 40'6 (0'0,40'6] local-lis/les=58/59 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[4.1a( empty local-lis/les=58/59 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[5.18( empty local-lis/les=58/59 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[11.1a( empty local-lis/les=58/59 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[5.1b( empty local-lis/les=58/59 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[6.e( empty local-lis/les=58/59 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[8.19( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=58/59 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=40'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[4.c( empty local-lis/les=58/59 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[6.2( empty local-lis/les=58/59 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=58/59 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[6.5( empty local-lis/les=58/59 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[4.1b( empty local-lis/les=58/59 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[5.1( empty local-lis/les=58/59 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[11.f( empty local-lis/les=58/59 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[4.d( empty local-lis/les=58/59 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[11.14( empty local-lis/les=58/59 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[5.f( empty local-lis/les=58/59 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[11.1( empty local-lis/les=58/59 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[5.2( empty local-lis/les=58/59 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[6.3( empty local-lis/les=58/59 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[8.8( v 40'6 (0'0,40'6] local-lis/les=58/59 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[5.7( empty local-lis/les=58/59 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[8.17( v 40'6 (0'0,40'6] local-lis/les=58/59 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[6.7( empty local-lis/les=58/59 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[4.5( empty local-lis/les=58/59 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[4.a( empty local-lis/les=58/59 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[6.8( empty local-lis/les=58/59 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[11.5( empty local-lis/les=58/59 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[4.e( empty local-lis/les=58/59 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[6.a( empty local-lis/les=58/59 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[11.7( empty local-lis/les=58/59 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[11.4( empty local-lis/les=58/59 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[5.9( empty local-lis/les=58/59 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[8.4( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=58/59 n=1 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=40'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[5.16( empty local-lis/les=58/59 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[8.1b( v 40'6 (0'0,40'6] local-lis/les=58/59 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[5.15( empty local-lis/les=58/59 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[8.18( v 40'6 (0'0,40'6] local-lis/les=58/59 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[11.1c( empty local-lis/les=58/59 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[11.1d( empty local-lis/les=58/59 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[11.1b( empty local-lis/les=58/59 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[4.13( empty local-lis/les=58/59 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[5.11( empty local-lis/les=58/59 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[5.10( empty local-lis/les=58/59 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[8.10( v 40'6 (0'0,40'6] local-lis/les=58/59 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 59 pg[11.1e( empty local-lis/les=58/59 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d8a66f79105e8d1971408e9c5ff2b846edaa1143f216c692f3168bf6472ed29-merged.mount: Deactivated successfully.
Oct 02 11:35:02 compute-0 podman[94616]: 2025-10-02 11:35:02.103854527 +0000 UTC m=+1.130111694 container remove c6dbd04d18fe99064c98829011825f3ecb99822631ded592bfa1535aceab53ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jackson, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 11:35:02 compute-0 systemd[1]: libpod-conmon-c6dbd04d18fe99064c98829011825f3ecb99822631ded592bfa1535aceab53ec.scope: Deactivated successfully.
Oct 02 11:35:02 compute-0 sudo[94512]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:02 compute-0 sudo[94660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:02 compute-0 sudo[94660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:02 compute-0 sudo[94660]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:02 compute-0 sudo[94685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:35:02 compute-0 sudo[94685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:02 compute-0 sudo[94685]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v152: 305 pgs: 305 active+clean; 454 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:35:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Oct 02 11:35:02 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 02 11:35:02 compute-0 sudo[94711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:02 compute-0 sudo[94711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:02 compute-0 sudo[94711]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:02 compute-0 sudo[94736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 11:35:02 compute-0 sudo[94736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:02 compute-0 ceph-mgr[73901]: [progress INFO root] Completed event 2833d3b7-0c07-4118-87a0-9b517155c019 (Global Recovery Event) in 15 seconds
Oct 02 11:35:02 compute-0 podman[94800]: 2025-10-02 11:35:02.65924036 +0000 UTC m=+0.037815226 container create bf660045515477d5d41b2c7b5c2a922c5f7442c760e02aba4206e0e39bc0e845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jang, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:35:02 compute-0 systemd[1]: Started libpod-conmon-bf660045515477d5d41b2c7b5c2a922c5f7442c760e02aba4206e0e39bc0e845.scope.
Oct 02 11:35:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:02.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:35:02 compute-0 podman[94800]: 2025-10-02 11:35:02.729295142 +0000 UTC m=+0.107870018 container init bf660045515477d5d41b2c7b5c2a922c5f7442c760e02aba4206e0e39bc0e845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jang, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:35:02 compute-0 podman[94800]: 2025-10-02 11:35:02.735715121 +0000 UTC m=+0.114289987 container start bf660045515477d5d41b2c7b5c2a922c5f7442c760e02aba4206e0e39bc0e845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jang, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:35:02 compute-0 podman[94800]: 2025-10-02 11:35:02.643301256 +0000 UTC m=+0.021876152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:35:02 compute-0 podman[94800]: 2025-10-02 11:35:02.739073454 +0000 UTC m=+0.117648340 container attach bf660045515477d5d41b2c7b5c2a922c5f7442c760e02aba4206e0e39bc0e845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jang, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:35:02 compute-0 systemd[1]: libpod-bf660045515477d5d41b2c7b5c2a922c5f7442c760e02aba4206e0e39bc0e845.scope: Deactivated successfully.
Oct 02 11:35:02 compute-0 relaxed_jang[94816]: 167 167
Oct 02 11:35:02 compute-0 conmon[94816]: conmon bf660045515477d5d41b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bf660045515477d5d41b2c7b5c2a922c5f7442c760e02aba4206e0e39bc0e845.scope/container/memory.events
Oct 02 11:35:02 compute-0 podman[94800]: 2025-10-02 11:35:02.741970365 +0000 UTC m=+0.120545231 container died bf660045515477d5d41b2c7b5c2a922c5f7442c760e02aba4206e0e39bc0e845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jang, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 11:35:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-1db742fc6ae743f56c879e1fc5021e365f2d6dfcf9c374513caf8f4fae0745f5-merged.mount: Deactivated successfully.
Oct 02 11:35:02 compute-0 podman[94800]: 2025-10-02 11:35:02.774646943 +0000 UTC m=+0.153221809 container remove bf660045515477d5d41b2c7b5c2a922c5f7442c760e02aba4206e0e39bc0e845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 11:35:02 compute-0 systemd[1]: libpod-conmon-bf660045515477d5d41b2c7b5c2a922c5f7442c760e02aba4206e0e39bc0e845.scope: Deactivated successfully.
Oct 02 11:35:02 compute-0 ceph-mon[73607]: 4.7 scrub starts
Oct 02 11:35:02 compute-0 ceph-mon[73607]: 4.7 scrub ok
Oct 02 11:35:02 compute-0 ceph-mon[73607]: osdmap e59: 3 total, 3 up, 3 in
Oct 02 11:35:02 compute-0 ceph-mon[73607]: pgmap v152: 305 pgs: 305 active+clean; 454 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:35:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 02 11:35:02 compute-0 podman[94840]: 2025-10-02 11:35:02.910280517 +0000 UTC m=+0.031468299 container create 954f4877f3df4e0a26f5c8c2063f70502d63395f257f7320fb22b8de6b083010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lamarr, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 11:35:02 compute-0 systemd[1]: Started libpod-conmon-954f4877f3df4e0a26f5c8c2063f70502d63395f257f7320fb22b8de6b083010.scope.
Oct 02 11:35:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:35:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc64d7617ef8a3b23ca992814305c877a02e0ecf787ae74b430ac6e9eb83d32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc64d7617ef8a3b23ca992814305c877a02e0ecf787ae74b430ac6e9eb83d32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc64d7617ef8a3b23ca992814305c877a02e0ecf787ae74b430ac6e9eb83d32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc64d7617ef8a3b23ca992814305c877a02e0ecf787ae74b430ac6e9eb83d32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:02 compute-0 podman[94840]: 2025-10-02 11:35:02.977860897 +0000 UTC m=+0.099048699 container init 954f4877f3df4e0a26f5c8c2063f70502d63395f257f7320fb22b8de6b083010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lamarr, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:35:02 compute-0 podman[94840]: 2025-10-02 11:35:02.986070981 +0000 UTC m=+0.107258763 container start 954f4877f3df4e0a26f5c8c2063f70502d63395f257f7320fb22b8de6b083010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lamarr, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:35:02 compute-0 podman[94840]: 2025-10-02 11:35:02.989573428 +0000 UTC m=+0.110761220 container attach 954f4877f3df4e0a26f5c8c2063f70502d63395f257f7320fb22b8de6b083010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lamarr, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:35:02 compute-0 podman[94840]: 2025-10-02 11:35:02.897581983 +0000 UTC m=+0.018769785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:35:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Oct 02 11:35:03 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 02 11:35:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Oct 02 11:35:03 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Oct 02 11:35:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:03.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:03 compute-0 brave_lamarr[94857]: {
Oct 02 11:35:03 compute-0 brave_lamarr[94857]:     "1": [
Oct 02 11:35:03 compute-0 brave_lamarr[94857]:         {
Oct 02 11:35:03 compute-0 brave_lamarr[94857]:             "devices": [
Oct 02 11:35:03 compute-0 brave_lamarr[94857]:                 "/dev/loop3"
Oct 02 11:35:03 compute-0 brave_lamarr[94857]:             ],
Oct 02 11:35:03 compute-0 brave_lamarr[94857]:             "lv_name": "ceph_lv0",
Oct 02 11:35:03 compute-0 brave_lamarr[94857]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:35:03 compute-0 brave_lamarr[94857]:             "lv_size": "7511998464",
Oct 02 11:35:03 compute-0 brave_lamarr[94857]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:35:03 compute-0 brave_lamarr[94857]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:35:03 compute-0 brave_lamarr[94857]:             "name": "ceph_lv0",
Oct 02 11:35:03 compute-0 brave_lamarr[94857]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:35:03 compute-0 brave_lamarr[94857]:             "tags": {
Oct 02 11:35:03 compute-0 brave_lamarr[94857]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:35:03 compute-0 brave_lamarr[94857]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:35:03 compute-0 brave_lamarr[94857]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:35:03 compute-0 brave_lamarr[94857]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:35:03 compute-0 brave_lamarr[94857]:                 "ceph.cluster_name": "ceph",
Oct 02 11:35:03 compute-0 brave_lamarr[94857]:                 "ceph.crush_device_class": "",
Oct 02 11:35:03 compute-0 brave_lamarr[94857]:                 "ceph.encrypted": "0",
Oct 02 11:35:03 compute-0 brave_lamarr[94857]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:35:03 compute-0 brave_lamarr[94857]:                 "ceph.osd_id": "1",
Oct 02 11:35:03 compute-0 brave_lamarr[94857]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:35:03 compute-0 brave_lamarr[94857]:                 "ceph.type": "block",
Oct 02 11:35:03 compute-0 brave_lamarr[94857]:                 "ceph.vdo": "0"
Oct 02 11:35:03 compute-0 brave_lamarr[94857]:             },
Oct 02 11:35:03 compute-0 brave_lamarr[94857]:             "type": "block",
Oct 02 11:35:03 compute-0 brave_lamarr[94857]:             "vg_name": "ceph_vg0"
Oct 02 11:35:03 compute-0 brave_lamarr[94857]:         }
Oct 02 11:35:03 compute-0 brave_lamarr[94857]:     ]
Oct 02 11:35:03 compute-0 brave_lamarr[94857]: }
Oct 02 11:35:03 compute-0 systemd[1]: libpod-954f4877f3df4e0a26f5c8c2063f70502d63395f257f7320fb22b8de6b083010.scope: Deactivated successfully.
Oct 02 11:35:03 compute-0 podman[94840]: 2025-10-02 11:35:03.76135478 +0000 UTC m=+0.882542562 container died 954f4877f3df4e0a26f5c8c2063f70502d63395f257f7320fb22b8de6b083010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lamarr, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 11:35:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fc64d7617ef8a3b23ca992814305c877a02e0ecf787ae74b430ac6e9eb83d32-merged.mount: Deactivated successfully.
Oct 02 11:35:03 compute-0 ceph-mon[73607]: 4.b scrub starts
Oct 02 11:35:03 compute-0 ceph-mon[73607]: 4.b scrub ok
Oct 02 11:35:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 02 11:35:03 compute-0 ceph-mon[73607]: osdmap e60: 3 total, 3 up, 3 in
Oct 02 11:35:03 compute-0 podman[94840]: 2025-10-02 11:35:03.961194743 +0000 UTC m=+1.082382525 container remove 954f4877f3df4e0a26f5c8c2063f70502d63395f257f7320fb22b8de6b083010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lamarr, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:35:03 compute-0 systemd[1]: libpod-conmon-954f4877f3df4e0a26f5c8c2063f70502d63395f257f7320fb22b8de6b083010.scope: Deactivated successfully.
Oct 02 11:35:03 compute-0 sudo[94736]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:04 compute-0 sudo[94880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:04 compute-0 sudo[94880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:04 compute-0 sudo[94880]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:04 compute-0 sudo[94905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:35:04 compute-0 sudo[94905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:04 compute-0 sudo[94905]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:04 compute-0 sudo[94930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:04 compute-0 sudo[94930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:04 compute-0 sudo[94930]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:04 compute-0 sudo[94955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 11:35:04 compute-0 sudo[94955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v154: 305 pgs: 1 active+recovering, 304 active+clean; 454 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 15 B/s, 0 objects/s recovering
Oct 02 11:35:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Oct 02 11:35:04 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 02 11:35:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:35:04 compute-0 podman[95020]: 2025-10-02 11:35:04.491917024 +0000 UTC m=+0.022682501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:35:04 compute-0 podman[95020]: 2025-10-02 11:35:04.605712058 +0000 UTC m=+0.136477515 container create 33f4ada3176266812eac9504a2905067adb3781a05b1904d90029b589b8f4c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:35:04 compute-0 systemd[1]: Started libpod-conmon-33f4ada3176266812eac9504a2905067adb3781a05b1904d90029b589b8f4c31.scope.
Oct 02 11:35:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:35:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:04.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:04 compute-0 podman[95020]: 2025-10-02 11:35:04.719299996 +0000 UTC m=+0.250065483 container init 33f4ada3176266812eac9504a2905067adb3781a05b1904d90029b589b8f4c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_driscoll, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:35:04 compute-0 podman[95020]: 2025-10-02 11:35:04.72590237 +0000 UTC m=+0.256667827 container start 33f4ada3176266812eac9504a2905067adb3781a05b1904d90029b589b8f4c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:35:04 compute-0 elated_driscoll[95036]: 167 167
Oct 02 11:35:04 compute-0 systemd[1]: libpod-33f4ada3176266812eac9504a2905067adb3781a05b1904d90029b589b8f4c31.scope: Deactivated successfully.
Oct 02 11:35:04 compute-0 conmon[95036]: conmon 33f4ada3176266812eac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-33f4ada3176266812eac9504a2905067adb3781a05b1904d90029b589b8f4c31.scope/container/memory.events
Oct 02 11:35:04 compute-0 podman[95020]: 2025-10-02 11:35:04.738623144 +0000 UTC m=+0.269388621 container attach 33f4ada3176266812eac9504a2905067adb3781a05b1904d90029b589b8f4c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Oct 02 11:35:04 compute-0 podman[95020]: 2025-10-02 11:35:04.739236659 +0000 UTC m=+0.270002116 container died 33f4ada3176266812eac9504a2905067adb3781a05b1904d90029b589b8f4c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_driscoll, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:35:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-6162fdf04c0f0d69795da17292fb0438e21e23499c67f9927dfbd0b36046d1fb-merged.mount: Deactivated successfully.
Oct 02 11:35:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Oct 02 11:35:04 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 02 11:35:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Oct 02 11:35:04 compute-0 podman[95020]: 2025-10-02 11:35:04.907253663 +0000 UTC m=+0.438019120 container remove 33f4ada3176266812eac9504a2905067adb3781a05b1904d90029b589b8f4c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_driscoll, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 11:35:04 compute-0 systemd[1]: libpod-conmon-33f4ada3176266812eac9504a2905067adb3781a05b1904d90029b589b8f4c31.scope: Deactivated successfully.
Oct 02 11:35:04 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Oct 02 11:35:04 compute-0 ceph-mon[73607]: 4.f scrub starts
Oct 02 11:35:04 compute-0 ceph-mon[73607]: 4.f scrub ok
Oct 02 11:35:04 compute-0 ceph-mon[73607]: pgmap v154: 305 pgs: 1 active+recovering, 304 active+clean; 454 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 15 B/s, 0 objects/s recovering
Oct 02 11:35:04 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 02 11:35:05 compute-0 podman[95061]: 2025-10-02 11:35:05.037738531 +0000 UTC m=+0.023924143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:35:05 compute-0 podman[95061]: 2025-10-02 11:35:05.461782095 +0000 UTC m=+0.447967687 container create fdba48cddde8eaf9e835a1947fbf6b34822974fe7f581eef6d87835223605b47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:35:05 compute-0 systemd[1]: Started libpod-conmon-fdba48cddde8eaf9e835a1947fbf6b34822974fe7f581eef6d87835223605b47.scope.
Oct 02 11:35:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:35:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b927f26335bd0a577b0451a2c58c33573449271a919db61138c3a9185d09d1c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:05.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b927f26335bd0a577b0451a2c58c33573449271a919db61138c3a9185d09d1c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b927f26335bd0a577b0451a2c58c33573449271a919db61138c3a9185d09d1c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b927f26335bd0a577b0451a2c58c33573449271a919db61138c3a9185d09d1c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:05 compute-0 podman[95061]: 2025-10-02 11:35:05.554509768 +0000 UTC m=+0.540695380 container init fdba48cddde8eaf9e835a1947fbf6b34822974fe7f581eef6d87835223605b47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 11:35:05 compute-0 podman[95061]: 2025-10-02 11:35:05.560566527 +0000 UTC m=+0.546752119 container start fdba48cddde8eaf9e835a1947fbf6b34822974fe7f581eef6d87835223605b47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:35:05 compute-0 podman[95061]: 2025-10-02 11:35:05.57280036 +0000 UTC m=+0.558985952 container attach fdba48cddde8eaf9e835a1947fbf6b34822974fe7f581eef6d87835223605b47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:35:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Oct 02 11:35:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Oct 02 11:35:06 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Oct 02 11:35:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v157: 305 pgs: 1 active+recovering, 304 active+clean; 454 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 329 B/s, 1 objects/s recovering
Oct 02 11:35:06 compute-0 intelligent_mestorf[95077]: {
Oct 02 11:35:06 compute-0 intelligent_mestorf[95077]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 11:35:06 compute-0 intelligent_mestorf[95077]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:35:06 compute-0 intelligent_mestorf[95077]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:35:06 compute-0 intelligent_mestorf[95077]:         "osd_id": 1,
Oct 02 11:35:06 compute-0 intelligent_mestorf[95077]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:35:06 compute-0 intelligent_mestorf[95077]:         "type": "bluestore"
Oct 02 11:35:06 compute-0 intelligent_mestorf[95077]:     }
Oct 02 11:35:06 compute-0 intelligent_mestorf[95077]: }
Oct 02 11:35:06 compute-0 systemd[1]: libpod-fdba48cddde8eaf9e835a1947fbf6b34822974fe7f581eef6d87835223605b47.scope: Deactivated successfully.
Oct 02 11:35:06 compute-0 podman[95061]: 2025-10-02 11:35:06.376522843 +0000 UTC m=+1.362708435 container died fdba48cddde8eaf9e835a1947fbf6b34822974fe7f581eef6d87835223605b47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:35:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Oct 02 11:35:06 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 02 11:35:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:06.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:07 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 7.c scrub starts
Oct 02 11:35:07 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 7.c scrub ok
Oct 02 11:35:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Oct 02 11:35:07 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 02 11:35:07 compute-0 ceph-mon[73607]: osdmap e61: 3 total, 3 up, 3 in
Oct 02 11:35:07 compute-0 ceph-mon[73607]: 2.12 scrub starts
Oct 02 11:35:07 compute-0 ceph-mon[73607]: 2.12 scrub ok
Oct 02 11:35:07 compute-0 ceph-mgr[73901]: [progress INFO root] Writing back 20 completed events
Oct 02 11:35:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:07.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 11:35:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-b927f26335bd0a577b0451a2c58c33573449271a919db61138c3a9185d09d1c3-merged.mount: Deactivated successfully.
Oct 02 11:35:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 02 11:35:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Oct 02 11:35:08 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Oct 02 11:35:08 compute-0 podman[95061]: 2025-10-02 11:35:08.164811398 +0000 UTC m=+3.150997000 container remove fdba48cddde8eaf9e835a1947fbf6b34822974fe7f581eef6d87835223605b47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 11:35:08 compute-0 systemd[1]: libpod-conmon-fdba48cddde8eaf9e835a1947fbf6b34822974fe7f581eef6d87835223605b47.scope: Deactivated successfully.
Oct 02 11:35:08 compute-0 sudo[94955]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:35:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v159: 305 pgs: 305 active+clean; 454 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 249 B/s, 0 objects/s recovering
Oct 02 11:35:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Oct 02 11:35:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 02 11:35:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:35:08 compute-0 ceph-mon[73607]: osdmap e62: 3 total, 3 up, 3 in
Oct 02 11:35:08 compute-0 ceph-mon[73607]: pgmap v157: 305 pgs: 1 active+recovering, 304 active+clean; 454 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 329 B/s, 1 objects/s recovering
Oct 02 11:35:08 compute-0 ceph-mon[73607]: 3.e scrub starts
Oct 02 11:35:08 compute-0 ceph-mon[73607]: 3.e scrub ok
Oct 02 11:35:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 02 11:35:08 compute-0 ceph-mon[73607]: 4.10 scrub starts
Oct 02 11:35:08 compute-0 ceph-mon[73607]: 4.10 scrub ok
Oct 02 11:35:08 compute-0 ceph-mon[73607]: 7.c scrub starts
Oct 02 11:35:08 compute-0 ceph-mon[73607]: 7.c scrub ok
Oct 02 11:35:08 compute-0 ceph-mon[73607]: 3.1d scrub starts
Oct 02 11:35:08 compute-0 ceph-mon[73607]: 3.1d scrub ok
Oct 02 11:35:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 02 11:35:08 compute-0 ceph-mon[73607]: osdmap e63: 3 total, 3 up, 3 in
Oct 02 11:35:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 02 11:35:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:08 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev a5d203f0-caa8-442d-9a4c-d628de80e2fa does not exist
Oct 02 11:35:08 compute-0 ceph-mgr[73901]: [progress INFO root] update: starting ev 9a89eb57-e61f-4b9e-a201-87d211f8bc7e (Updating mds.cephfs deployment (+3 -> 3))
Oct 02 11:35:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.gpiyct", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Oct 02 11:35:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.gpiyct", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 02 11:35:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.gpiyct", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 02 11:35:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:35:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:08 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.gpiyct on compute-2
Oct 02 11:35:08 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.gpiyct on compute-2
Oct 02 11:35:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:08.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:09 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 7.d scrub starts
Oct 02 11:35:09 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 7.d scrub ok
Oct 02 11:35:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Oct 02 11:35:09 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 02 11:35:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Oct 02 11:35:09 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Oct 02 11:35:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:35:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Oct 02 11:35:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Oct 02 11:35:09 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Oct 02 11:35:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:09.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:09 compute-0 ceph-mon[73607]: pgmap v159: 305 pgs: 305 active+clean; 454 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 249 B/s, 0 objects/s recovering
Oct 02 11:35:09 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:09 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:09 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.gpiyct", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 02 11:35:09 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.gpiyct", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 02 11:35:09 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:09 compute-0 ceph-mon[73607]: Deploying daemon mds.cephfs.compute-2.gpiyct on compute-2
Oct 02 11:35:09 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 02 11:35:09 compute-0 ceph-mon[73607]: osdmap e64: 3 total, 3 up, 3 in
Oct 02 11:35:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v162: 305 pgs: 8 active+remapped, 297 active+clean; 454 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 288 B/s, 9 objects/s recovering
Oct 02 11:35:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Oct 02 11:35:10 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 02 11:35:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:35:10 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:35:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Oct 02 11:35:10 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 02 11:35:10 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 02 11:35:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Oct 02 11:35:10 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Oct 02 11:35:10 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 66 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=66) [1] r=0 lpr=66 pi=[54,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:10 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 66 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=66) [1] r=0 lpr=66 pi=[54,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:10 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 66 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=66) [1] r=0 lpr=66 pi=[54,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:10 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 66 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=66) [1] r=0 lpr=66 pi=[54,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:10 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.odxjnj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Oct 02 11:35:10 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.odxjnj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 02 11:35:10 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.odxjnj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 02 11:35:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:35:10 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:10 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.odxjnj on compute-0
Oct 02 11:35:10 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.odxjnj on compute-0
Oct 02 11:35:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:10.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:10 compute-0 sudo[95119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:10 compute-0 sudo[95119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:10 compute-0 sudo[95119]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:10 compute-0 ceph-mon[73607]: 7.d scrub starts
Oct 02 11:35:10 compute-0 ceph-mon[73607]: 7.d scrub ok
Oct 02 11:35:10 compute-0 ceph-mon[73607]: 3.1a scrub starts
Oct 02 11:35:10 compute-0 ceph-mon[73607]: 3.1a scrub ok
Oct 02 11:35:10 compute-0 ceph-mon[73607]: osdmap e65: 3 total, 3 up, 3 in
Oct 02 11:35:10 compute-0 ceph-mon[73607]: pgmap v162: 305 pgs: 8 active+remapped, 297 active+clean; 454 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 288 B/s, 9 objects/s recovering
Oct 02 11:35:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 02 11:35:10 compute-0 ceph-mon[73607]: 2.18 deep-scrub starts
Oct 02 11:35:10 compute-0 ceph-mon[73607]: 2.18 deep-scrub ok
Oct 02 11:35:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 02 11:35:10 compute-0 ceph-mon[73607]: osdmap e66: 3 total, 3 up, 3 in
Oct 02 11:35:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.odxjnj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 02 11:35:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.odxjnj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 02 11:35:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:10 compute-0 sudo[95144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:35:10 compute-0 sudo[95144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:10 compute-0 sudo[95144]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:10 compute-0 sudo[95169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:10 compute-0 sudo[95169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:10 compute-0 sudo[95169]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:10 compute-0 sudo[95194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct 02 11:35:10 compute-0 sudo[95194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).mds e3 new map
Oct 02 11:35:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-02T11:34:16.058920+0000
                                           modified        2025-10-02T11:34:16.058958+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.gpiyct{-1:24148} state up:standby seq 1 addr [v2:192.168.122.102:6804/1224979121,v1:192.168.122.102:6805/1224979121] compat {c=[1],r=[1],i=[7ff]}]
Oct 02 11:35:10 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1224979121,v1:192.168.122.102:6805/1224979121] up:boot
Oct 02 11:35:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/1224979121,v1:192.168.122.102:6805/1224979121] as mds.0
Oct 02 11:35:10 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.gpiyct assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 02 11:35:10 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 02 11:35:10 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 02 11:35:10 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 02 11:35:10 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Oct 02 11:35:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.gpiyct"} v 0) v1
Oct 02 11:35:10 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.gpiyct"}]: dispatch
Oct 02 11:35:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).mds e3 all = 0
Oct 02 11:35:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).mds e4 new map
Oct 02 11:35:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-02T11:34:16.058920+0000
                                           modified        2025-10-02T11:35:10.977247+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24148}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.gpiyct{0:24148} state up:creating seq 1 addr [v2:192.168.122.102:6804/1224979121,v1:192.168.122.102:6805/1224979121] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Oct 02 11:35:11 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gpiyct=up:creating}
Oct 02 11:35:11 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.gpiyct is now active in filesystem cephfs as rank 0
Oct 02 11:35:11 compute-0 podman[95258]: 2025-10-02 11:35:11.263146818 +0000 UTC m=+0.047123407 container create e0213037bda65fc86c307aa619dedcb4aa1bc67123dabf42a6504c694f44ff8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_borg, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 11:35:11 compute-0 systemd[1]: Started libpod-conmon-e0213037bda65fc86c307aa619dedcb4aa1bc67123dabf42a6504c694f44ff8c.scope.
Oct 02 11:35:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:35:11 compute-0 podman[95258]: 2025-10-02 11:35:11.235600186 +0000 UTC m=+0.019576795 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:35:11 compute-0 podman[95258]: 2025-10-02 11:35:11.347523884 +0000 UTC m=+0.131500493 container init e0213037bda65fc86c307aa619dedcb4aa1bc67123dabf42a6504c694f44ff8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:35:11 compute-0 podman[95258]: 2025-10-02 11:35:11.354128287 +0000 UTC m=+0.138104876 container start e0213037bda65fc86c307aa619dedcb4aa1bc67123dabf42a6504c694f44ff8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 11:35:11 compute-0 musing_borg[95275]: 167 167
Oct 02 11:35:11 compute-0 systemd[1]: libpod-e0213037bda65fc86c307aa619dedcb4aa1bc67123dabf42a6504c694f44ff8c.scope: Deactivated successfully.
Oct 02 11:35:11 compute-0 podman[95258]: 2025-10-02 11:35:11.364486393 +0000 UTC m=+0.148462992 container attach e0213037bda65fc86c307aa619dedcb4aa1bc67123dabf42a6504c694f44ff8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_borg, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:35:11 compute-0 podman[95258]: 2025-10-02 11:35:11.365053038 +0000 UTC m=+0.149029637 container died e0213037bda65fc86c307aa619dedcb4aa1bc67123dabf42a6504c694f44ff8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_borg, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 11:35:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a5c6308420758e531bb1bf4d129f2d1e6880bea239f63c3e75e833623711851-merged.mount: Deactivated successfully.
Oct 02 11:35:11 compute-0 podman[95258]: 2025-10-02 11:35:11.454582782 +0000 UTC m=+0.238559371 container remove e0213037bda65fc86c307aa619dedcb4aa1bc67123dabf42a6504c694f44ff8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_borg, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:35:11 compute-0 systemd[1]: libpod-conmon-e0213037bda65fc86c307aa619dedcb4aa1bc67123dabf42a6504c694f44ff8c.scope: Deactivated successfully.
Oct 02 11:35:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:11.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:11 compute-0 systemd[1]: Reloading.
Oct 02 11:35:11 compute-0 systemd-rc-local-generator[95321]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:35:11 compute-0 systemd-sysv-generator[95324]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:35:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Oct 02 11:35:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Oct 02 11:35:11 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Oct 02 11:35:11 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 67 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=-1 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:11 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 67 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=-1 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:11 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 67 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=-1 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:11 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 67 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=-1 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:11 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 67 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=-1 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:11 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 67 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=-1 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:11 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 67 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=-1 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:11 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 67 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=-1 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:11 compute-0 systemd[1]: Reloading.
Oct 02 11:35:11 compute-0 systemd-sysv-generator[95365]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:35:11 compute-0 systemd-rc-local-generator[95362]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:35:11 compute-0 ceph-mon[73607]: Deploying daemon mds.cephfs.compute-0.odxjnj on compute-0
Oct 02 11:35:11 compute-0 ceph-mon[73607]: mds.? [v2:192.168.122.102:6804/1224979121,v1:192.168.122.102:6805/1224979121] up:boot
Oct 02 11:35:11 compute-0 ceph-mon[73607]: daemon mds.cephfs.compute-2.gpiyct assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 02 11:35:11 compute-0 ceph-mon[73607]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 02 11:35:11 compute-0 ceph-mon[73607]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 02 11:35:11 compute-0 ceph-mon[73607]: Cluster is now healthy
Oct 02 11:35:11 compute-0 ceph-mon[73607]: fsmap cephfs:0 1 up:standby
Oct 02 11:35:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.gpiyct"}]: dispatch
Oct 02 11:35:11 compute-0 ceph-mon[73607]: fsmap cephfs:1 {0=cephfs.compute-2.gpiyct=up:creating}
Oct 02 11:35:11 compute-0 ceph-mon[73607]: daemon mds.cephfs.compute-2.gpiyct is now active in filesystem cephfs as rank 0
Oct 02 11:35:11 compute-0 ceph-mon[73607]: osdmap e67: 3 total, 3 up, 3 in
Oct 02 11:35:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).mds e5 new map
Oct 02 11:35:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-02T11:34:16.058920+0000
                                           modified        2025-10-02T11:35:12.043439+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24148}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.gpiyct{0:24148} state up:active seq 2 addr [v2:192.168.122.102:6804/1224979121,v1:192.168.122.102:6805/1224979121] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Oct 02 11:35:12 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1224979121,v1:192.168.122.102:6805/1224979121] up:active
Oct 02 11:35:12 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gpiyct=up:active}
Oct 02 11:35:12 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.odxjnj for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2...
Oct 02 11:35:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v165: 305 pgs: 8 active+remapped, 297 active+clean; 454 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 302 B/s, 9 objects/s recovering
Oct 02 11:35:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Oct 02 11:35:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 02 11:35:12 compute-0 podman[95421]: 2025-10-02 11:35:12.327138396 +0000 UTC m=+0.067021838 container create 28f85318a1a552a00763e6c2a91831272facd35fc0076825cb6c650d5e9cb5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mds-cephfs-compute-0-odxjnj, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 11:35:12 compute-0 podman[95421]: 2025-10-02 11:35:12.285118337 +0000 UTC m=+0.025001819 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6232bcf35f22cc31c6b615a2591ccdfdbb089c8c37f10b49110c9809244b2c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6232bcf35f22cc31c6b615a2591ccdfdbb089c8c37f10b49110c9809244b2c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6232bcf35f22cc31c6b615a2591ccdfdbb089c8c37f10b49110c9809244b2c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6232bcf35f22cc31c6b615a2591ccdfdbb089c8c37f10b49110c9809244b2c6/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.odxjnj supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:12 compute-0 podman[95421]: 2025-10-02 11:35:12.482123878 +0000 UTC m=+0.222007320 container init 28f85318a1a552a00763e6c2a91831272facd35fc0076825cb6c650d5e9cb5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mds-cephfs-compute-0-odxjnj, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:35:12 compute-0 podman[95421]: 2025-10-02 11:35:12.488237789 +0000 UTC m=+0.228121231 container start 28f85318a1a552a00763e6c2a91831272facd35fc0076825cb6c650d5e9cb5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mds-cephfs-compute-0-odxjnj, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 11:35:12 compute-0 bash[95421]: 28f85318a1a552a00763e6c2a91831272facd35fc0076825cb6c650d5e9cb5f2
Oct 02 11:35:12 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.odxjnj for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2.
Oct 02 11:35:12 compute-0 ceph-mds[95441]: set uid:gid to 167:167 (ceph:ceph)
Oct 02 11:35:12 compute-0 ceph-mds[95441]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Oct 02 11:35:12 compute-0 ceph-mds[95441]: main not setting numa affinity
Oct 02 11:35:12 compute-0 ceph-mds[95441]: pidfile_write: ignore empty --pid-file
Oct 02 11:35:12 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mds-cephfs-compute-0-odxjnj[95437]: starting mds.cephfs.compute-0.odxjnj at 
Oct 02 11:35:12 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj Updating MDS map to version 5 from mon.0
Oct 02 11:35:12 compute-0 sudo[95194]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:35:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:35:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:35:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:35:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 02 11:35:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:35:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:35:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:35:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:35:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Oct 02 11:35:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.zfhmgy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Oct 02 11:35:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.zfhmgy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 02 11:35:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 02 11:35:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Oct 02 11:35:12 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Oct 02 11:35:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.zfhmgy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 02 11:35:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:35:12 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:12 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.zfhmgy on compute-1
Oct 02 11:35:12 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.zfhmgy on compute-1
Oct 02 11:35:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:35:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:12.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:35:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).mds e6 new map
Oct 02 11:35:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).mds e6 print_map
                                           e6
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-02T11:34:16.058920+0000
                                           modified        2025-10-02T11:35:12.043439+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24148}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.gpiyct{0:24148} state up:active seq 2 addr [v2:192.168.122.102:6804/1224979121,v1:192.168.122.102:6805/1224979121] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.odxjnj{-1:14421} state up:standby seq 1 addr [v2:192.168.122.100:6806/3281017148,v1:192.168.122.100:6807/3281017148] compat {c=[1],r=[1],i=[7ff]}]
Oct 02 11:35:13 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj Updating MDS map to version 6 from mon.0
Oct 02 11:35:13 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj Monitors have assigned me to become a standby.
Oct 02 11:35:13 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3281017148,v1:192.168.122.100:6807/3281017148] up:boot
Oct 02 11:35:13 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gpiyct=up:active} 1 up:standby
Oct 02 11:35:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.odxjnj"} v 0) v1
Oct 02 11:35:13 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.odxjnj"}]: dispatch
Oct 02 11:35:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).mds e6 all = 0
Oct 02 11:35:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).mds e7 new map
Oct 02 11:35:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).mds e7 print_map
                                           e7
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-02T11:34:16.058920+0000
                                           modified        2025-10-02T11:35:12.043439+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24148}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.gpiyct{0:24148} state up:active seq 2 addr [v2:192.168.122.102:6804/1224979121,v1:192.168.122.102:6805/1224979121] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.odxjnj{-1:14421} state up:standby seq 1 addr [v2:192.168.122.100:6806/3281017148,v1:192.168.122.100:6807/3281017148] compat {c=[1],r=[1],i=[7ff]}]
Oct 02 11:35:13 compute-0 ceph-mon[73607]: mds.? [v2:192.168.122.102:6804/1224979121,v1:192.168.122.102:6805/1224979121] up:active
Oct 02 11:35:13 compute-0 ceph-mon[73607]: fsmap cephfs:1 {0=cephfs.compute-2.gpiyct=up:active}
Oct 02 11:35:13 compute-0 ceph-mon[73607]: pgmap v165: 305 pgs: 8 active+remapped, 297 active+clean; 454 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 302 B/s, 9 objects/s recovering
Oct 02 11:35:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 02 11:35:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.zfhmgy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 02 11:35:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 02 11:35:13 compute-0 ceph-mon[73607]: osdmap e68: 3 total, 3 up, 3 in
Oct 02 11:35:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.zfhmgy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 02 11:35:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:13 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gpiyct=up:active} 1 up:standby
Oct 02 11:35:13 compute-0 ceph-mgr[73901]: [progress WARNING root] Starting Global Recovery Event,8 pgs not in active + clean state
Oct 02 11:35:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:13.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Oct 02 11:35:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Oct 02 11:35:13 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Oct 02 11:35:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 69 pg[9.1e( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=5 ec=54/41 lis/c=67/54 les/c/f=68/55/0 sis=69) [1] r=0 lpr=69 pi=[54,69)/1 luod=0'0 crt=47'1065 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 69 pg[9.1e( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=5 ec=54/41 lis/c=67/54 les/c/f=68/55/0 sis=69) [1] r=0 lpr=69 pi=[54,69)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 69 pg[9.6( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=6 ec=54/41 lis/c=67/54 les/c/f=68/55/0 sis=69) [1] r=0 lpr=69 pi=[54,69)/1 luod=0'0 crt=47'1065 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 69 pg[9.6( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=6 ec=54/41 lis/c=67/54 les/c/f=68/55/0 sis=69) [1] r=0 lpr=69 pi=[54,69)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 69 pg[9.16( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=5 ec=54/41 lis/c=67/54 les/c/f=68/55/0 sis=69) [1] r=0 lpr=69 pi=[54,69)/1 luod=0'0 crt=47'1065 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 69 pg[9.16( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=5 ec=54/41 lis/c=67/54 les/c/f=68/55/0 sis=69) [1] r=0 lpr=69 pi=[54,69)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 69 pg[9.e( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=6 ec=54/41 lis/c=67/54 les/c/f=68/55/0 sis=69) [1] r=0 lpr=69 pi=[54,69)/1 luod=0'0 crt=47'1065 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 69 pg[9.e( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=6 ec=54/41 lis/c=67/54 les/c/f=68/55/0 sis=69) [1] r=0 lpr=69 pi=[54,69)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:14 compute-0 ceph-mon[73607]: Deploying daemon mds.cephfs.compute-1.zfhmgy on compute-1
Oct 02 11:35:14 compute-0 ceph-mon[73607]: mds.? [v2:192.168.122.100:6806/3281017148,v1:192.168.122.100:6807/3281017148] up:boot
Oct 02 11:35:14 compute-0 ceph-mon[73607]: fsmap cephfs:1 {0=cephfs.compute-2.gpiyct=up:active} 1 up:standby
Oct 02 11:35:14 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.odxjnj"}]: dispatch
Oct 02 11:35:14 compute-0 ceph-mon[73607]: fsmap cephfs:1 {0=cephfs.compute-2.gpiyct=up:active} 1 up:standby
Oct 02 11:35:14 compute-0 ceph-mon[73607]: 5.4 scrub starts
Oct 02 11:35:14 compute-0 ceph-mon[73607]: 5.4 scrub ok
Oct 02 11:35:14 compute-0 ceph-mon[73607]: osdmap e69: 3 total, 3 up, 3 in
Oct 02 11:35:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:35:14 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:35:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v168: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s wr, 6 op/s; 148 B/s, 5 objects/s recovering
Oct 02 11:35:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Oct 02 11:35:14 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 02 11:35:14 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 02 11:35:14 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:14 compute-0 ceph-mgr[73901]: [progress INFO root] complete: finished ev 9a89eb57-e61f-4b9e-a201-87d211f8bc7e (Updating mds.cephfs deployment (+3 -> 3))
Oct 02 11:35:14 compute-0 ceph-mgr[73901]: [progress INFO root] Completed event 9a89eb57-e61f-4b9e-a201-87d211f8bc7e (Updating mds.cephfs deployment (+3 -> 3)) in 6 seconds
Oct 02 11:35:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Oct 02 11:35:14 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 02 11:35:14 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:14 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 615d61b0-b059-4039-9820-9ba405e0bc64 does not exist
Oct 02 11:35:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:35:14 compute-0 sudo[95462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:14 compute-0 sudo[95462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:14 compute-0 sudo[95462]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:14 compute-0 sudo[95487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:35:14 compute-0 sudo[95487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:14 compute-0 sudo[95487]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Oct 02 11:35:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:14 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 02 11:35:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:35:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:14.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:35:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Oct 02 11:35:14 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Oct 02 11:35:14 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 70 pg[9.16( v 47'1065 (0'0,47'1065] local-lis/les=69/70 n=5 ec=54/41 lis/c=67/54 les/c/f=68/55/0 sis=69) [1] r=0 lpr=69 pi=[54,69)/1 crt=47'1065 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:14 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 70 pg[9.e( v 47'1065 (0'0,47'1065] local-lis/les=69/70 n=6 ec=54/41 lis/c=67/54 les/c/f=68/55/0 sis=69) [1] r=0 lpr=69 pi=[54,69)/1 crt=47'1065 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:14 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 70 pg[9.6( v 47'1065 (0'0,47'1065] local-lis/les=69/70 n=6 ec=54/41 lis/c=67/54 les/c/f=68/55/0 sis=69) [1] r=0 lpr=69 pi=[54,69)/1 crt=47'1065 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:14 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 70 pg[9.1e( v 47'1065 (0'0,47'1065] local-lis/les=69/70 n=5 ec=54/41 lis/c=67/54 les/c/f=68/55/0 sis=69) [1] r=0 lpr=69 pi=[54,69)/1 crt=47'1065 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:14 compute-0 sudo[95512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:14 compute-0 sudo[95512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:14 compute-0 sudo[95512]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:14 compute-0 sudo[95537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:35:14 compute-0 sudo[95537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:14 compute-0 sudo[95537]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:14 compute-0 sudo[95562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:14 compute-0 sudo[95562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:14 compute-0 sudo[95562]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:15 compute-0 sudo[95587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 11:35:15 compute-0 sudo[95587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).mds e8 new map
Oct 02 11:35:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).mds e8 print_map
                                           e8
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-02T11:34:16.058920+0000
                                           modified        2025-10-02T11:35:12.043439+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24148}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.gpiyct{0:24148} state up:active seq 2 addr [v2:192.168.122.102:6804/1224979121,v1:192.168.122.102:6805/1224979121] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.odxjnj{-1:14421} state up:standby seq 1 addr [v2:192.168.122.100:6806/3281017148,v1:192.168.122.100:6807/3281017148] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.zfhmgy{-1:24158} state up:standby seq 1 addr [v2:192.168.122.101:6804/2486237496,v1:192.168.122.101:6805/2486237496] compat {c=[1],r=[1],i=[7ff]}]
Oct 02 11:35:15 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2486237496,v1:192.168.122.101:6805/2486237496] up:boot
Oct 02 11:35:15 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gpiyct=up:active} 2 up:standby
Oct 02 11:35:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.zfhmgy"} v 0) v1
Oct 02 11:35:15 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.zfhmgy"}]: dispatch
Oct 02 11:35:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).mds e8 all = 0
Oct 02 11:35:15 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Oct 02 11:35:15 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Oct 02 11:35:15 compute-0 ceph-mon[73607]: 4.11 scrub starts
Oct 02 11:35:15 compute-0 ceph-mon[73607]: 4.11 scrub ok
Oct 02 11:35:15 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:15 compute-0 ceph-mon[73607]: pgmap v168: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s wr, 6 op/s; 148 B/s, 5 objects/s recovering
Oct 02 11:35:15 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 02 11:35:15 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:15 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:15 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:15 compute-0 ceph-mon[73607]: 5.8 scrub starts
Oct 02 11:35:15 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:15 compute-0 ceph-mon[73607]: 5.8 scrub ok
Oct 02 11:35:15 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 02 11:35:15 compute-0 ceph-mon[73607]: osdmap e70: 3 total, 3 up, 3 in
Oct 02 11:35:15 compute-0 ceph-mon[73607]: mds.? [v2:192.168.122.101:6804/2486237496,v1:192.168.122.101:6805/2486237496] up:boot
Oct 02 11:35:15 compute-0 ceph-mon[73607]: fsmap cephfs:1 {0=cephfs.compute-2.gpiyct=up:active} 2 up:standby
Oct 02 11:35:15 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.zfhmgy"}]: dispatch
Oct 02 11:35:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:35:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:15.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:35:15 compute-0 podman[95685]: 2025-10-02 11:35:15.638470041 +0000 UTC m=+0.214590947 container exec 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 11:35:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Oct 02 11:35:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Oct 02 11:35:15 compute-0 podman[95705]: 2025-10-02 11:35:15.79207149 +0000 UTC m=+0.052880139 container exec_died 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:35:15 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Oct 02 11:35:15 compute-0 podman[95685]: 2025-10-02 11:35:15.81437343 +0000 UTC m=+0.390494356 container exec_died 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:35:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:35:16 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:35:16 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).mds e9 new map
Oct 02 11:35:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).mds e9 print_map
                                           e9
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        9
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-02T11:34:16.058920+0000
                                           modified        2025-10-02T11:35:16.189190+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24148}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.gpiyct{0:24148} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1224979121,v1:192.168.122.102:6805/1224979121] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.odxjnj{-1:14421} state up:standby seq 1 addr [v2:192.168.122.100:6806/3281017148,v1:192.168.122.100:6807/3281017148] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.zfhmgy{-1:24158} state up:standby seq 1 addr [v2:192.168.122.101:6804/2486237496,v1:192.168.122.101:6805/2486237496] compat {c=[1],r=[1],i=[7ff]}]
Oct 02 11:35:16 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1224979121,v1:192.168.122.102:6805/1224979121] up:active
Oct 02 11:35:16 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gpiyct=up:active} 2 up:standby
Oct 02 11:35:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v171: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 3.5 KiB/s wr, 10 op/s; 203 B/s, 10 objects/s recovering
Oct 02 11:35:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Oct 02 11:35:16 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 02 11:35:16 compute-0 podman[95818]: 2025-10-02 11:35:16.342413747 +0000 UTC m=+0.054003377 container exec 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 11:35:16 compute-0 podman[95818]: 2025-10-02 11:35:16.378159851 +0000 UTC m=+0.089749451 container exec_died 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 11:35:16 compute-0 podman[95884]: 2025-10-02 11:35:16.663960407 +0000 UTC m=+0.116218964 container exec a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, io.openshift.expose-services=, description=keepalived for Ceph, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.openshift.tags=Ceph keepalived, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.buildah.version=1.28.2)
Oct 02 11:35:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:16.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Oct 02 11:35:16 compute-0 ceph-mon[73607]: 7.12 scrub starts
Oct 02 11:35:16 compute-0 ceph-mon[73607]: 7.12 scrub ok
Oct 02 11:35:16 compute-0 ceph-mon[73607]: osdmap e71: 3 total, 3 up, 3 in
Oct 02 11:35:16 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:16 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:16 compute-0 ceph-mon[73607]: mds.? [v2:192.168.122.102:6804/1224979121,v1:192.168.122.102:6805/1224979121] up:active
Oct 02 11:35:16 compute-0 ceph-mon[73607]: fsmap cephfs:1 {0=cephfs.compute-2.gpiyct=up:active} 2 up:standby
Oct 02 11:35:16 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 02 11:35:16 compute-0 podman[95905]: 2025-10-02 11:35:16.800067612 +0000 UTC m=+0.091741599 container exec_died a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., version=2.2.4, distribution-scope=public, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.buildah.version=1.28.2, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9)
Oct 02 11:35:16 compute-0 podman[95884]: 2025-10-02 11:35:16.834134395 +0000 UTC m=+0.286392962 container exec_died a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.openshift.expose-services=, description=keepalived for Ceph, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, release=1793, vcs-type=git, io.openshift.tags=Ceph keepalived, distribution-scope=public, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container)
Oct 02 11:35:16 compute-0 sudo[95587]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:35:17 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 02 11:35:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Oct 02 11:35:17 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Oct 02 11:35:17 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:35:17 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:17 compute-0 sudo[95937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:17 compute-0 sudo[95937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:17 compute-0 sudo[95937]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).mds e10 new map
Oct 02 11:35:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).mds e10 print_map
                                           e10
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        9
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-02T11:34:16.058920+0000
                                           modified        2025-10-02T11:35:16.189190+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24148}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.gpiyct{0:24148} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1224979121,v1:192.168.122.102:6805/1224979121] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.odxjnj{-1:14421} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3281017148,v1:192.168.122.100:6807/3281017148] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.zfhmgy{-1:24158} state up:standby seq 1 addr [v2:192.168.122.101:6804/2486237496,v1:192.168.122.101:6805/2486237496] compat {c=[1],r=[1],i=[7ff]}]
Oct 02 11:35:17 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj Updating MDS map to version 10 from mon.0
Oct 02 11:35:17 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3281017148,v1:192.168.122.100:6807/3281017148] up:standby
Oct 02 11:35:17 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gpiyct=up:active} 2 up:standby
Oct 02 11:35:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:35:17 compute-0 sudo[95962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:35:17 compute-0 sudo[95962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:17 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:35:17 compute-0 sudo[95962]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:17 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:17 compute-0 sudo[95985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:17 compute-0 sudo[95985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:17 compute-0 sudo[95985]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:17 compute-0 sudo[96004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:17 compute-0 sudo[96004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:17 compute-0 sudo[96004]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:17 compute-0 sudo[96035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:17 compute-0 sudo[96035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:17 compute-0 sudo[96035]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:17 compute-0 sudo[96052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:35:17 compute-0 sudo[96052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:17.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:17 compute-0 sudo[96052]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:17 compute-0 ceph-mon[73607]: 4.12 scrub starts
Oct 02 11:35:17 compute-0 ceph-mon[73607]: 4.12 scrub ok
Oct 02 11:35:17 compute-0 ceph-mon[73607]: pgmap v171: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 3.5 KiB/s wr, 10 op/s; 203 B/s, 10 objects/s recovering
Oct 02 11:35:17 compute-0 ceph-mon[73607]: 5.b deep-scrub starts
Oct 02 11:35:17 compute-0 ceph-mon[73607]: 5.b deep-scrub ok
Oct 02 11:35:17 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 02 11:35:17 compute-0 ceph-mon[73607]: osdmap e72: 3 total, 3 up, 3 in
Oct 02 11:35:17 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:17 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:17 compute-0 ceph-mon[73607]: mds.? [v2:192.168.122.100:6806/3281017148,v1:192.168.122.100:6807/3281017148] up:standby
Oct 02 11:35:17 compute-0 ceph-mon[73607]: fsmap cephfs:1 {0=cephfs.compute-2.gpiyct=up:active} 2 up:standby
Oct 02 11:35:17 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:17 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:17 compute-0 ceph-mon[73607]: 5.d deep-scrub starts
Oct 02 11:35:17 compute-0 ceph-mon[73607]: 5.d deep-scrub ok
Oct 02 11:35:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:35:17 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:35:17 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:35:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:35:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Oct 02 11:35:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:18 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev b34a924a-f73a-406a-a692-11e3d45df80b does not exist
Oct 02 11:35:18 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 589b36f4-8f71-4acb-ae69-9693e19a054a does not exist
Oct 02 11:35:18 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev f7c4cca0-1d4e-4d5c-9946-509d5b3293b6 does not exist
Oct 02 11:35:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:35:18 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:35:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:35:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:35:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:35:18 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:18 compute-0 ceph-mgr[73901]: [progress INFO root] Writing back 21 completed events
Oct 02 11:35:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 11:35:18 compute-0 sudo[96118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:18 compute-0 sudo[96118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:18 compute-0 sudo[96118]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v173: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 48 B/s, 4 objects/s recovering
Oct 02 11:35:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Oct 02 11:35:18 compute-0 sudo[96144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:35:18 compute-0 sudo[96144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:18 compute-0 sudo[96144]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:18 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Oct 02 11:35:18 compute-0 sudo[96169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:18 compute-0 sudo[96169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:18 compute-0 sudo[96169]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:18 compute-0 sudo[96194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:35:18 compute-0 sudo[96194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:35:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:18.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:35:18 compute-0 podman[96257]: 2025-10-02 11:35:18.705287801 +0000 UTC m=+0.020135149 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:35:18 compute-0 podman[96257]: 2025-10-02 11:35:18.848006799 +0000 UTC m=+0.162854177 container create 429c5e90bdc0a21ae1ea1f46de6ecfb005cfc0b24c85d6c26968d30f4e1f9d21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_khayyam, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 11:35:18 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:18 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:35:18 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:18 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:35:18 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:35:18 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:18 compute-0 ceph-mon[73607]: pgmap v173: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 48 B/s, 4 objects/s recovering
Oct 02 11:35:18 compute-0 ceph-mon[73607]: osdmap e73: 3 total, 3 up, 3 in
Oct 02 11:35:18 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:18 compute-0 systemd[1]: Started libpod-conmon-429c5e90bdc0a21ae1ea1f46de6ecfb005cfc0b24c85d6c26968d30f4e1f9d21.scope.
Oct 02 11:35:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:35:18 compute-0 podman[96257]: 2025-10-02 11:35:18.993020506 +0000 UTC m=+0.307867864 container init 429c5e90bdc0a21ae1ea1f46de6ecfb005cfc0b24c85d6c26968d30f4e1f9d21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_khayyam, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 11:35:19 compute-0 podman[96257]: 2025-10-02 11:35:19.000167722 +0000 UTC m=+0.315015060 container start 429c5e90bdc0a21ae1ea1f46de6ecfb005cfc0b24c85d6c26968d30f4e1f9d21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:35:19 compute-0 strange_khayyam[96273]: 167 167
Oct 02 11:35:19 compute-0 systemd[1]: libpod-429c5e90bdc0a21ae1ea1f46de6ecfb005cfc0b24c85d6c26968d30f4e1f9d21.scope: Deactivated successfully.
Oct 02 11:35:19 compute-0 podman[96257]: 2025-10-02 11:35:19.029414125 +0000 UTC m=+0.344261453 container attach 429c5e90bdc0a21ae1ea1f46de6ecfb005cfc0b24c85d6c26968d30f4e1f9d21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_khayyam, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 11:35:19 compute-0 podman[96257]: 2025-10-02 11:35:19.029851676 +0000 UTC m=+0.344699014 container died 429c5e90bdc0a21ae1ea1f46de6ecfb005cfc0b24c85d6c26968d30f4e1f9d21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 11:35:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-b49ca5e8172b053eb4a2d32d9f833f6b435c71d3ff09630e25b9e180db88bfa3-merged.mount: Deactivated successfully.
Oct 02 11:35:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Oct 02 11:35:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).mds e11 new map
Oct 02 11:35:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).mds e11 print_map
                                           e11
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        9
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-02T11:34:16.058920+0000
                                           modified        2025-10-02T11:35:16.189190+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24148}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.gpiyct{0:24148} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1224979121,v1:192.168.122.102:6805/1224979121] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.odxjnj{-1:14421} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3281017148,v1:192.168.122.100:6807/3281017148] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.zfhmgy{-1:24158} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/2486237496,v1:192.168.122.101:6805/2486237496] compat {c=[1],r=[1],i=[7ff]}]
Oct 02 11:35:19 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2486237496,v1:192.168.122.101:6805/2486237496] up:standby
Oct 02 11:35:19 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gpiyct=up:active} 2 up:standby
Oct 02 11:35:19 compute-0 podman[96257]: 2025-10-02 11:35:19.367057574 +0000 UTC m=+0.681904902 container remove 429c5e90bdc0a21ae1ea1f46de6ecfb005cfc0b24c85d6c26968d30f4e1f9d21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_khayyam, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:35:19 compute-0 systemd[1]: libpod-conmon-429c5e90bdc0a21ae1ea1f46de6ecfb005cfc0b24c85d6c26968d30f4e1f9d21.scope: Deactivated successfully.
Oct 02 11:35:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Oct 02 11:35:19 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Oct 02 11:35:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:19.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:19 compute-0 podman[96299]: 2025-10-02 11:35:19.546365557 +0000 UTC m=+0.077683732 container create 8ac83d778c39b38f92a92309a4fb13fad29493372b37ee5313b849ee33f58a1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:35:19 compute-0 podman[96299]: 2025-10-02 11:35:19.489072571 +0000 UTC m=+0.020390766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:35:19 compute-0 systemd[1]: Started libpod-conmon-8ac83d778c39b38f92a92309a4fb13fad29493372b37ee5313b849ee33f58a1b.scope.
Oct 02 11:35:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:35:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f087d2ac716127e604725d927116ff4d038ed881379c260c90701a975b3a7421/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f087d2ac716127e604725d927116ff4d038ed881379c260c90701a975b3a7421/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f087d2ac716127e604725d927116ff4d038ed881379c260c90701a975b3a7421/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f087d2ac716127e604725d927116ff4d038ed881379c260c90701a975b3a7421/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f087d2ac716127e604725d927116ff4d038ed881379c260c90701a975b3a7421/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:19 compute-0 podman[96299]: 2025-10-02 11:35:19.694014368 +0000 UTC m=+0.225332563 container init 8ac83d778c39b38f92a92309a4fb13fad29493372b37ee5313b849ee33f58a1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 11:35:19 compute-0 podman[96299]: 2025-10-02 11:35:19.70135084 +0000 UTC m=+0.232669015 container start 8ac83d778c39b38f92a92309a4fb13fad29493372b37ee5313b849ee33f58a1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lederberg, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 11:35:19 compute-0 podman[96299]: 2025-10-02 11:35:19.716857253 +0000 UTC m=+0.248175438 container attach 8ac83d778c39b38f92a92309a4fb13fad29493372b37ee5313b849ee33f58a1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:35:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v176: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:35:20 compute-0 ceph-mon[73607]: 4.16 scrub starts
Oct 02 11:35:20 compute-0 ceph-mon[73607]: 4.16 scrub ok
Oct 02 11:35:20 compute-0 ceph-mon[73607]: mds.? [v2:192.168.122.101:6804/2486237496,v1:192.168.122.101:6805/2486237496] up:standby
Oct 02 11:35:20 compute-0 ceph-mon[73607]: fsmap cephfs:1 {0=cephfs.compute-2.gpiyct=up:active} 2 up:standby
Oct 02 11:35:20 compute-0 ceph-mon[73607]: 5.e scrub starts
Oct 02 11:35:20 compute-0 ceph-mon[73607]: 5.e scrub ok
Oct 02 11:35:20 compute-0 ceph-mon[73607]: osdmap e74: 3 total, 3 up, 3 in
Oct 02 11:35:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Oct 02 11:35:20 compute-0 zealous_lederberg[96316]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:35:20 compute-0 zealous_lederberg[96316]: --> relative data size: 1.0
Oct 02 11:35:20 compute-0 zealous_lederberg[96316]: --> All data devices are unavailable
Oct 02 11:35:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Oct 02 11:35:20 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Oct 02 11:35:20 compute-0 systemd[1]: libpod-8ac83d778c39b38f92a92309a4fb13fad29493372b37ee5313b849ee33f58a1b.scope: Deactivated successfully.
Oct 02 11:35:20 compute-0 podman[96299]: 2025-10-02 11:35:20.520256927 +0000 UTC m=+1.051575102 container died 8ac83d778c39b38f92a92309a4fb13fad29493372b37ee5313b849ee33f58a1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lederberg, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:35:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-f087d2ac716127e604725d927116ff4d038ed881379c260c90701a975b3a7421-merged.mount: Deactivated successfully.
Oct 02 11:35:20 compute-0 podman[96299]: 2025-10-02 11:35:20.615131423 +0000 UTC m=+1.146449598 container remove 8ac83d778c39b38f92a92309a4fb13fad29493372b37ee5313b849ee33f58a1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lederberg, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:35:20 compute-0 systemd[1]: libpod-conmon-8ac83d778c39b38f92a92309a4fb13fad29493372b37ee5313b849ee33f58a1b.scope: Deactivated successfully.
Oct 02 11:35:20 compute-0 sudo[96194]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:20 compute-0 sudo[96346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:20 compute-0 sudo[96346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:20 compute-0 sudo[96346]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:20.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:20 compute-0 sudo[96371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:35:20 compute-0 sudo[96371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:20 compute-0 sudo[96371]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:20 compute-0 sudo[96396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:20 compute-0 sudo[96396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:20 compute-0 sudo[96396]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:20 compute-0 sudo[96421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 11:35:20 compute-0 sudo[96421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:21 compute-0 podman[96485]: 2025-10-02 11:35:21.162705242 +0000 UTC m=+0.049439883 container create 1d9a1743dc6e0aeffb002397a82f96d2fa5ff30d0ef41ac9a50db8e78d2a7cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_tu, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:35:21 compute-0 systemd[1]: Started libpod-conmon-1d9a1743dc6e0aeffb002397a82f96d2fa5ff30d0ef41ac9a50db8e78d2a7cf3.scope.
Oct 02 11:35:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:35:21 compute-0 podman[96485]: 2025-10-02 11:35:21.132783303 +0000 UTC m=+0.019517964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:35:21 compute-0 podman[96485]: 2025-10-02 11:35:21.250305589 +0000 UTC m=+0.137040240 container init 1d9a1743dc6e0aeffb002397a82f96d2fa5ff30d0ef41ac9a50db8e78d2a7cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_tu, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 11:35:21 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Oct 02 11:35:21 compute-0 podman[96485]: 2025-10-02 11:35:21.257432965 +0000 UTC m=+0.144167606 container start 1d9a1743dc6e0aeffb002397a82f96d2fa5ff30d0ef41ac9a50db8e78d2a7cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 11:35:21 compute-0 sleepy_tu[96501]: 167 167
Oct 02 11:35:21 compute-0 systemd[1]: libpod-1d9a1743dc6e0aeffb002397a82f96d2fa5ff30d0ef41ac9a50db8e78d2a7cf3.scope: Deactivated successfully.
Oct 02 11:35:21 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Oct 02 11:35:21 compute-0 podman[96485]: 2025-10-02 11:35:21.27747402 +0000 UTC m=+0.164208691 container attach 1d9a1743dc6e0aeffb002397a82f96d2fa5ff30d0ef41ac9a50db8e78d2a7cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:35:21 compute-0 podman[96485]: 2025-10-02 11:35:21.278022154 +0000 UTC m=+0.164756795 container died 1d9a1743dc6e0aeffb002397a82f96d2fa5ff30d0ef41ac9a50db8e78d2a7cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 11:35:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-14d19691050822c3dd7c22042b4bb5035e093a6be1964d8c7426067521a59c53-merged.mount: Deactivated successfully.
Oct 02 11:35:21 compute-0 podman[96485]: 2025-10-02 11:35:21.341153545 +0000 UTC m=+0.227888186 container remove 1d9a1743dc6e0aeffb002397a82f96d2fa5ff30d0ef41ac9a50db8e78d2a7cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_tu, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:35:21 compute-0 systemd[1]: libpod-conmon-1d9a1743dc6e0aeffb002397a82f96d2fa5ff30d0ef41ac9a50db8e78d2a7cf3.scope: Deactivated successfully.
Oct 02 11:35:21 compute-0 podman[96527]: 2025-10-02 11:35:21.499168482 +0000 UTC m=+0.045622429 container create 592247496e29ed30d1ecee7201d9f381693fb84d1b3c8fa0ad6521228b726421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:35:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Oct 02 11:35:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Oct 02 11:35:21 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Oct 02 11:35:21 compute-0 ceph-mon[73607]: 4.17 scrub starts
Oct 02 11:35:21 compute-0 ceph-mon[73607]: 4.17 scrub ok
Oct 02 11:35:21 compute-0 ceph-mon[73607]: pgmap v176: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:35:21 compute-0 ceph-mon[73607]: osdmap e75: 3 total, 3 up, 3 in
Oct 02 11:35:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:21.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:21 compute-0 systemd[1]: Started libpod-conmon-592247496e29ed30d1ecee7201d9f381693fb84d1b3c8fa0ad6521228b726421.scope.
Oct 02 11:35:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec8574ec6e9d618c74e6d6602b931227dc80f815d637328ea50188f46e001a7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec8574ec6e9d618c74e6d6602b931227dc80f815d637328ea50188f46e001a7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec8574ec6e9d618c74e6d6602b931227dc80f815d637328ea50188f46e001a7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec8574ec6e9d618c74e6d6602b931227dc80f815d637328ea50188f46e001a7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:21 compute-0 podman[96527]: 2025-10-02 11:35:21.476217084 +0000 UTC m=+0.022670931 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:35:21 compute-0 podman[96527]: 2025-10-02 11:35:21.582898482 +0000 UTC m=+0.129352339 container init 592247496e29ed30d1ecee7201d9f381693fb84d1b3c8fa0ad6521228b726421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:35:21 compute-0 podman[96527]: 2025-10-02 11:35:21.588949192 +0000 UTC m=+0.135403029 container start 592247496e29ed30d1ecee7201d9f381693fb84d1b3c8fa0ad6521228b726421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_golick, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 11:35:21 compute-0 podman[96527]: 2025-10-02 11:35:21.597224177 +0000 UTC m=+0.143678014 container attach 592247496e29ed30d1ecee7201d9f381693fb84d1b3c8fa0ad6521228b726421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_golick, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 11:35:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v179: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s wr, 0 op/s
Oct 02 11:35:22 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Oct 02 11:35:22 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Oct 02 11:35:22 compute-0 magical_golick[96545]: {
Oct 02 11:35:22 compute-0 magical_golick[96545]:     "1": [
Oct 02 11:35:22 compute-0 magical_golick[96545]:         {
Oct 02 11:35:22 compute-0 magical_golick[96545]:             "devices": [
Oct 02 11:35:22 compute-0 magical_golick[96545]:                 "/dev/loop3"
Oct 02 11:35:22 compute-0 magical_golick[96545]:             ],
Oct 02 11:35:22 compute-0 magical_golick[96545]:             "lv_name": "ceph_lv0",
Oct 02 11:35:22 compute-0 magical_golick[96545]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:35:22 compute-0 magical_golick[96545]:             "lv_size": "7511998464",
Oct 02 11:35:22 compute-0 magical_golick[96545]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:35:22 compute-0 magical_golick[96545]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:35:22 compute-0 magical_golick[96545]:             "name": "ceph_lv0",
Oct 02 11:35:22 compute-0 magical_golick[96545]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:35:22 compute-0 magical_golick[96545]:             "tags": {
Oct 02 11:35:22 compute-0 magical_golick[96545]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:35:22 compute-0 magical_golick[96545]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:35:22 compute-0 magical_golick[96545]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:35:22 compute-0 magical_golick[96545]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:35:22 compute-0 magical_golick[96545]:                 "ceph.cluster_name": "ceph",
Oct 02 11:35:22 compute-0 magical_golick[96545]:                 "ceph.crush_device_class": "",
Oct 02 11:35:22 compute-0 magical_golick[96545]:                 "ceph.encrypted": "0",
Oct 02 11:35:22 compute-0 magical_golick[96545]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:35:22 compute-0 magical_golick[96545]:                 "ceph.osd_id": "1",
Oct 02 11:35:22 compute-0 magical_golick[96545]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:35:22 compute-0 magical_golick[96545]:                 "ceph.type": "block",
Oct 02 11:35:22 compute-0 magical_golick[96545]:                 "ceph.vdo": "0"
Oct 02 11:35:22 compute-0 magical_golick[96545]:             },
Oct 02 11:35:22 compute-0 magical_golick[96545]:             "type": "block",
Oct 02 11:35:22 compute-0 magical_golick[96545]:             "vg_name": "ceph_vg0"
Oct 02 11:35:22 compute-0 magical_golick[96545]:         }
Oct 02 11:35:22 compute-0 magical_golick[96545]:     ]
Oct 02 11:35:22 compute-0 magical_golick[96545]: }
Oct 02 11:35:22 compute-0 systemd[1]: libpod-592247496e29ed30d1ecee7201d9f381693fb84d1b3c8fa0ad6521228b726421.scope: Deactivated successfully.
Oct 02 11:35:22 compute-0 podman[96527]: 2025-10-02 11:35:22.36296448 +0000 UTC m=+0.909418317 container died 592247496e29ed30d1ecee7201d9f381693fb84d1b3c8fa0ad6521228b726421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_golick, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 11:35:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec8574ec6e9d618c74e6d6602b931227dc80f815d637328ea50188f46e001a7d-merged.mount: Deactivated successfully.
Oct 02 11:35:22 compute-0 podman[96527]: 2025-10-02 11:35:22.428343566 +0000 UTC m=+0.974797403 container remove 592247496e29ed30d1ecee7201d9f381693fb84d1b3c8fa0ad6521228b726421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:35:22 compute-0 systemd[1]: libpod-conmon-592247496e29ed30d1ecee7201d9f381693fb84d1b3c8fa0ad6521228b726421.scope: Deactivated successfully.
Oct 02 11:35:22 compute-0 sudo[96421]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:22 compute-0 sudo[96569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:22 compute-0 sudo[96569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:22 compute-0 sudo[96569]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:22 compute-0 ceph-mon[73607]: 4.1e scrub starts
Oct 02 11:35:22 compute-0 ceph-mon[73607]: 4.1e scrub ok
Oct 02 11:35:22 compute-0 ceph-mon[73607]: 7.15 scrub starts
Oct 02 11:35:22 compute-0 ceph-mon[73607]: 7.15 scrub ok
Oct 02 11:35:22 compute-0 ceph-mon[73607]: 5.12 scrub starts
Oct 02 11:35:22 compute-0 ceph-mon[73607]: 5.12 scrub ok
Oct 02 11:35:22 compute-0 ceph-mon[73607]: osdmap e76: 3 total, 3 up, 3 in
Oct 02 11:35:22 compute-0 sudo[96594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:35:22 compute-0 sudo[96594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:22 compute-0 sudo[96594]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:22 compute-0 sudo[96619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:22 compute-0 sudo[96619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:22 compute-0 sudo[96619]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:22 compute-0 sudo[96644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 11:35:22 compute-0 sudo[96644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:35:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:22.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:35:22 compute-0 podman[96708]: 2025-10-02 11:35:22.983695528 +0000 UTC m=+0.041410845 container create 88344fe7806df5a1c902b084ec233c7f4f6d8be2868282088cc50afe92f28211 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bohr, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:35:23 compute-0 systemd[1]: Started libpod-conmon-88344fe7806df5a1c902b084ec233c7f4f6d8be2868282088cc50afe92f28211.scope.
Oct 02 11:35:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:35:23 compute-0 podman[96708]: 2025-10-02 11:35:23.045633109 +0000 UTC m=+0.103348426 container init 88344fe7806df5a1c902b084ec233c7f4f6d8be2868282088cc50afe92f28211 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bohr, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:35:23 compute-0 podman[96708]: 2025-10-02 11:35:23.053111654 +0000 UTC m=+0.110826971 container start 88344fe7806df5a1c902b084ec233c7f4f6d8be2868282088cc50afe92f28211 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:35:23 compute-0 gifted_bohr[96724]: 167 167
Oct 02 11:35:23 compute-0 systemd[1]: libpod-88344fe7806df5a1c902b084ec233c7f4f6d8be2868282088cc50afe92f28211.scope: Deactivated successfully.
Oct 02 11:35:23 compute-0 podman[96708]: 2025-10-02 11:35:23.057663397 +0000 UTC m=+0.115378714 container attach 88344fe7806df5a1c902b084ec233c7f4f6d8be2868282088cc50afe92f28211 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 11:35:23 compute-0 podman[96708]: 2025-10-02 11:35:23.058131469 +0000 UTC m=+0.115846786 container died 88344fe7806df5a1c902b084ec233c7f4f6d8be2868282088cc50afe92f28211 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bohr, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:35:23 compute-0 podman[96708]: 2025-10-02 11:35:22.965354275 +0000 UTC m=+0.023069622 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:35:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a093ef9048ef7937278ff7c77f8fadb75c87a1f2951be17213554526999bf5b-merged.mount: Deactivated successfully.
Oct 02 11:35:23 compute-0 podman[96708]: 2025-10-02 11:35:23.096646641 +0000 UTC m=+0.154361958 container remove 88344fe7806df5a1c902b084ec233c7f4f6d8be2868282088cc50afe92f28211 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bohr, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct 02 11:35:23 compute-0 systemd[1]: libpod-conmon-88344fe7806df5a1c902b084ec233c7f4f6d8be2868282088cc50afe92f28211.scope: Deactivated successfully.
Oct 02 11:35:23 compute-0 podman[96746]: 2025-10-02 11:35:23.257725314 +0000 UTC m=+0.042460661 container create 1b91188c973602094e0b71e5272a1ec6cf78dfcd6223606d7a6cfb0a0a1d227f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 11:35:23 compute-0 systemd[1]: Started libpod-conmon-1b91188c973602094e0b71e5272a1ec6cf78dfcd6223606d7a6cfb0a0a1d227f.scope.
Oct 02 11:35:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f42c949df2c9c54ed4b04fc754560cf35fbb86ed2075de196516e94f903839c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f42c949df2c9c54ed4b04fc754560cf35fbb86ed2075de196516e94f903839c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f42c949df2c9c54ed4b04fc754560cf35fbb86ed2075de196516e94f903839c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f42c949df2c9c54ed4b04fc754560cf35fbb86ed2075de196516e94f903839c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:23 compute-0 podman[96746]: 2025-10-02 11:35:23.23855042 +0000 UTC m=+0.023285797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:35:23 compute-0 podman[96746]: 2025-10-02 11:35:23.344314264 +0000 UTC m=+0.129049641 container init 1b91188c973602094e0b71e5272a1ec6cf78dfcd6223606d7a6cfb0a0a1d227f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:35:23 compute-0 podman[96746]: 2025-10-02 11:35:23.350537629 +0000 UTC m=+0.135272976 container start 1b91188c973602094e0b71e5272a1ec6cf78dfcd6223606d7a6cfb0a0a1d227f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_saha, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 11:35:23 compute-0 podman[96746]: 2025-10-02 11:35:23.355731907 +0000 UTC m=+0.140467324 container attach 1b91188c973602094e0b71e5272a1ec6cf78dfcd6223606d7a6cfb0a0a1d227f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_saha, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 11:35:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:23.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:23 compute-0 ceph-mon[73607]: pgmap v179: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s wr, 0 op/s
Oct 02 11:35:23 compute-0 ceph-mon[73607]: 7.17 scrub starts
Oct 02 11:35:23 compute-0 ceph-mon[73607]: 7.17 scrub ok
Oct 02 11:35:24 compute-0 unruffled_saha[96762]: {
Oct 02 11:35:24 compute-0 unruffled_saha[96762]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 11:35:24 compute-0 unruffled_saha[96762]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:35:24 compute-0 unruffled_saha[96762]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:35:24 compute-0 unruffled_saha[96762]:         "osd_id": 1,
Oct 02 11:35:24 compute-0 unruffled_saha[96762]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:35:24 compute-0 unruffled_saha[96762]:         "type": "bluestore"
Oct 02 11:35:24 compute-0 unruffled_saha[96762]:     }
Oct 02 11:35:24 compute-0 unruffled_saha[96762]: }
Oct 02 11:35:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v180: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 0 B/s wr, 26 op/s; 92 B/s, 3 objects/s recovering
Oct 02 11:35:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Oct 02 11:35:24 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 02 11:35:24 compute-0 systemd[1]: libpod-1b91188c973602094e0b71e5272a1ec6cf78dfcd6223606d7a6cfb0a0a1d227f.scope: Deactivated successfully.
Oct 02 11:35:24 compute-0 podman[96784]: 2025-10-02 11:35:24.299404121 +0000 UTC m=+0.024892917 container died 1b91188c973602094e0b71e5272a1ec6cf78dfcd6223606d7a6cfb0a0a1d227f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:35:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f42c949df2c9c54ed4b04fc754560cf35fbb86ed2075de196516e94f903839c4-merged.mount: Deactivated successfully.
Oct 02 11:35:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:35:24 compute-0 podman[96784]: 2025-10-02 11:35:24.380762702 +0000 UTC m=+0.106251478 container remove 1b91188c973602094e0b71e5272a1ec6cf78dfcd6223606d7a6cfb0a0a1d227f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_saha, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:35:24 compute-0 systemd[1]: libpod-conmon-1b91188c973602094e0b71e5272a1ec6cf78dfcd6223606d7a6cfb0a0a1d227f.scope: Deactivated successfully.
Oct 02 11:35:24 compute-0 sudo[96644]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:35:24 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:35:24 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:24 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 8735c8b8-74d4-4002-a926-40d50d23eb1e does not exist
Oct 02 11:35:24 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 4074a471-f0f1-433f-80f2-a3a85e8078fc does not exist
Oct 02 11:35:24 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 174d5eb3-842f-4b0d-b5ea-a5b50a41e08d does not exist
Oct 02 11:35:24 compute-0 sudo[96797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:24 compute-0 sudo[96797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:24 compute-0 sudo[96797]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Oct 02 11:35:24 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 02 11:35:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Oct 02 11:35:24 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Oct 02 11:35:24 compute-0 ceph-mon[73607]: 6.4 scrub starts
Oct 02 11:35:24 compute-0 ceph-mon[73607]: 6.4 scrub ok
Oct 02 11:35:24 compute-0 ceph-mon[73607]: 5.13 scrub starts
Oct 02 11:35:24 compute-0 ceph-mon[73607]: 5.13 scrub ok
Oct 02 11:35:24 compute-0 ceph-mon[73607]: pgmap v180: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 0 B/s wr, 26 op/s; 92 B/s, 3 objects/s recovering
Oct 02 11:35:24 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 02 11:35:24 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:24 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:24 compute-0 sudo[96822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:35:24 compute-0 sudo[96822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:24 compute-0 sudo[96822]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:35:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:24.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:35:24 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Oct 02 11:35:24 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Oct 02 11:35:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Oct 02 11:35:24 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 02 11:35:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Oct 02 11:35:24 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 02 11:35:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:35:24 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:24 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Oct 02 11:35:24 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Oct 02 11:35:24 compute-0 sudo[96847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:24 compute-0 sudo[96847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:24 compute-0 sudo[96847]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:24 compute-0 sudo[96872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:35:24 compute-0 sudo[96872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:24 compute-0 sudo[96872]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:25 compute-0 sudo[96897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:25 compute-0 sudo[96897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:25 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 11:35:25 compute-0 sudo[96897]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:25 compute-0 sudo[96923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct 02 11:35:25 compute-0 sudo[96923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:25 compute-0 podman[96964]: 2025-10-02 11:35:25.293756357 +0000 UTC m=+0.020432697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:35:25 compute-0 podman[96964]: 2025-10-02 11:35:25.470153618 +0000 UTC m=+0.196829938 container create a359f3842294eda71858aad554c29bf79c3f0530b8440c275767e347b1479b59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mirzakhani, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:35:25 compute-0 systemd[1]: Started libpod-conmon-a359f3842294eda71858aad554c29bf79c3f0530b8440c275767e347b1479b59.scope.
Oct 02 11:35:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:35:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:35:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:25.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:35:25 compute-0 podman[96964]: 2025-10-02 11:35:25.549476229 +0000 UTC m=+0.276152569 container init a359f3842294eda71858aad554c29bf79c3f0530b8440c275767e347b1479b59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mirzakhani, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 11:35:25 compute-0 podman[96964]: 2025-10-02 11:35:25.556723849 +0000 UTC m=+0.283400169 container start a359f3842294eda71858aad554c29bf79c3f0530b8440c275767e347b1479b59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mirzakhani, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 11:35:25 compute-0 zen_mirzakhani[96980]: 167 167
Oct 02 11:35:25 compute-0 systemd[1]: libpod-a359f3842294eda71858aad554c29bf79c3f0530b8440c275767e347b1479b59.scope: Deactivated successfully.
Oct 02 11:35:25 compute-0 conmon[96980]: conmon a359f3842294eda71858 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a359f3842294eda71858aad554c29bf79c3f0530b8440c275767e347b1479b59.scope/container/memory.events
Oct 02 11:35:25 compute-0 podman[96964]: 2025-10-02 11:35:25.562702536 +0000 UTC m=+0.289378856 container attach a359f3842294eda71858aad554c29bf79c3f0530b8440c275767e347b1479b59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 11:35:25 compute-0 podman[96964]: 2025-10-02 11:35:25.563027394 +0000 UTC m=+0.289703714 container died a359f3842294eda71858aad554c29bf79c3f0530b8440c275767e347b1479b59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mirzakhani, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:35:25 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 77 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=77) [1] r=0 lpr=77 pi=[54,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:25 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 77 pg[9.a( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=77) [1] r=0 lpr=77 pi=[54,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b1eeb9b6263ca86bb94e14a70337396dd938e82a334cdf5132afb28662d86ef-merged.mount: Deactivated successfully.
Oct 02 11:35:25 compute-0 podman[96964]: 2025-10-02 11:35:25.625669943 +0000 UTC m=+0.352346263 container remove a359f3842294eda71858aad554c29bf79c3f0530b8440c275767e347b1479b59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mirzakhani, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:35:25 compute-0 systemd[1]: libpod-conmon-a359f3842294eda71858aad554c29bf79c3f0530b8440c275767e347b1479b59.scope: Deactivated successfully.
Oct 02 11:35:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Oct 02 11:35:25 compute-0 sudo[96923]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:25 compute-0 ceph-mon[73607]: 5.1a scrub starts
Oct 02 11:35:25 compute-0 ceph-mon[73607]: 5.1a scrub ok
Oct 02 11:35:25 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 02 11:35:25 compute-0 ceph-mon[73607]: osdmap e77: 3 total, 3 up, 3 in
Oct 02 11:35:25 compute-0 ceph-mon[73607]: Reconfiguring mon.compute-0 (monmap changed)...
Oct 02 11:35:25 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 02 11:35:25 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 02 11:35:25 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:25 compute-0 ceph-mon[73607]: Reconfiguring daemon mon.compute-0 on compute-0
Oct 02 11:35:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:35:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Oct 02 11:35:25 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Oct 02 11:35:25 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 78 pg[9.a( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[54,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:25 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 78 pg[9.a( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[54,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:25 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 78 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[54,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:25 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 78 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[54,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:25 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:35:25 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:25 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.fmcstn (monmap changed)...
Oct 02 11:35:25 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.fmcstn (monmap changed)...
Oct 02 11:35:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.fmcstn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct 02 11:35:25 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.fmcstn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 02 11:35:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 11:35:25 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 11:35:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:35:25 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:25 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.fmcstn on compute-0
Oct 02 11:35:25 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.fmcstn on compute-0
Oct 02 11:35:25 compute-0 sudo[97001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:25 compute-0 sudo[97001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:26 compute-0 sudo[97001]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:26 compute-0 sudo[97026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:35:26 compute-0 sudo[97026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:26 compute-0 sudo[97026]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:26 compute-0 sudo[97051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:26 compute-0 sudo[97051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:26 compute-0 sudo[97051]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:26 compute-0 sudo[97076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct 02 11:35:26 compute-0 sudo[97076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v183: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 0 B/s wr, 27 op/s; 95 B/s, 3 objects/s recovering
Oct 02 11:35:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Oct 02 11:35:26 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 02 11:35:26 compute-0 podman[97118]: 2025-10-02 11:35:26.414006846 +0000 UTC m=+0.037732994 container create 07add0ae887dd765739d1559c69ba4dc5dc2bb1dc76a69173ed0faac4d063847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_khayyam, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:35:26 compute-0 systemd[1]: Started libpod-conmon-07add0ae887dd765739d1559c69ba4dc5dc2bb1dc76a69173ed0faac4d063847.scope.
Oct 02 11:35:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:35:26 compute-0 podman[97118]: 2025-10-02 11:35:26.481876853 +0000 UTC m=+0.105603021 container init 07add0ae887dd765739d1559c69ba4dc5dc2bb1dc76a69173ed0faac4d063847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 11:35:26 compute-0 podman[97118]: 2025-10-02 11:35:26.487565824 +0000 UTC m=+0.111291972 container start 07add0ae887dd765739d1559c69ba4dc5dc2bb1dc76a69173ed0faac4d063847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 11:35:26 compute-0 pedantic_khayyam[97134]: 167 167
Oct 02 11:35:26 compute-0 systemd[1]: libpod-07add0ae887dd765739d1559c69ba4dc5dc2bb1dc76a69173ed0faac4d063847.scope: Deactivated successfully.
Oct 02 11:35:26 compute-0 podman[97118]: 2025-10-02 11:35:26.397306193 +0000 UTC m=+0.021032361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:35:26 compute-0 podman[97118]: 2025-10-02 11:35:26.494210978 +0000 UTC m=+0.117937126 container attach 07add0ae887dd765739d1559c69ba4dc5dc2bb1dc76a69173ed0faac4d063847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Oct 02 11:35:26 compute-0 podman[97118]: 2025-10-02 11:35:26.494630969 +0000 UTC m=+0.118357127 container died 07add0ae887dd765739d1559c69ba4dc5dc2bb1dc76a69173ed0faac4d063847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:35:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-b379d795e288dce7cded84408abd651be8281a3dc543af2c1b0d7800e9c23093-merged.mount: Deactivated successfully.
Oct 02 11:35:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:26.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Oct 02 11:35:27 compute-0 ceph-mon[73607]: 6.6 scrub starts
Oct 02 11:35:27 compute-0 ceph-mon[73607]: 6.6 scrub ok
Oct 02 11:35:27 compute-0 ceph-mon[73607]: 11.13 deep-scrub starts
Oct 02 11:35:27 compute-0 ceph-mon[73607]: 11.13 deep-scrub ok
Oct 02 11:35:27 compute-0 ceph-mon[73607]: osdmap e78: 3 total, 3 up, 3 in
Oct 02 11:35:27 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:27 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:27 compute-0 ceph-mon[73607]: Reconfiguring mgr.compute-0.fmcstn (monmap changed)...
Oct 02 11:35:27 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.fmcstn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 02 11:35:27 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 11:35:27 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:27 compute-0 ceph-mon[73607]: Reconfiguring daemon mgr.compute-0.fmcstn on compute-0
Oct 02 11:35:27 compute-0 ceph-mon[73607]: pgmap v183: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 0 B/s wr, 27 op/s; 95 B/s, 3 objects/s recovering
Oct 02 11:35:27 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 02 11:35:27 compute-0 podman[97118]: 2025-10-02 11:35:27.171323041 +0000 UTC m=+0.795049189 container remove 07add0ae887dd765739d1559c69ba4dc5dc2bb1dc76a69173ed0faac4d063847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_khayyam, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:35:27 compute-0 systemd[1]: libpod-conmon-07add0ae887dd765739d1559c69ba4dc5dc2bb1dc76a69173ed0faac4d063847.scope: Deactivated successfully.
Oct 02 11:35:27 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 02 11:35:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Oct 02 11:35:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:35:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:27.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:35:27 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Oct 02 11:35:27 compute-0 sudo[97076]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:35:27 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:35:27 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:27 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Oct 02 11:35:27 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Oct 02 11:35:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Oct 02 11:35:27 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 02 11:35:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:35:27 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:27 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Oct 02 11:35:27 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Oct 02 11:35:27 compute-0 sudo[97154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:27 compute-0 sudo[97154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:27 compute-0 sudo[97154]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:27 compute-0 sudo[97179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:35:27 compute-0 sudo[97179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:27 compute-0 sudo[97179]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:28 compute-0 sudo[97204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:28 compute-0 sudo[97204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:28 compute-0 sudo[97204]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:28 compute-0 ceph-mon[73607]: 6.9 scrub starts
Oct 02 11:35:28 compute-0 ceph-mon[73607]: 6.9 scrub ok
Oct 02 11:35:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 02 11:35:28 compute-0 ceph-mon[73607]: osdmap e79: 3 total, 3 up, 3 in
Oct 02 11:35:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 02 11:35:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:28 compute-0 sudo[97229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct 02 11:35:28 compute-0 sudo[97229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v185: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 0 B/s wr, 26 op/s; 91 B/s, 3 objects/s recovering
Oct 02 11:35:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Oct 02 11:35:28 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 02 11:35:28 compute-0 podman[97270]: 2025-10-02 11:35:28.31543678 +0000 UTC m=+0.039269291 container create 0748a59e8553274ff574fc7f4ef2bf0a9ef61c7cac901bb6849770f1c54c236b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swanson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:35:28 compute-0 systemd[1]: Started libpod-conmon-0748a59e8553274ff574fc7f4ef2bf0a9ef61c7cac901bb6849770f1c54c236b.scope.
Oct 02 11:35:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:35:28 compute-0 podman[97270]: 2025-10-02 11:35:28.380341355 +0000 UTC m=+0.104173886 container init 0748a59e8553274ff574fc7f4ef2bf0a9ef61c7cac901bb6849770f1c54c236b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:35:28 compute-0 podman[97270]: 2025-10-02 11:35:28.386655002 +0000 UTC m=+0.110487513 container start 0748a59e8553274ff574fc7f4ef2bf0a9ef61c7cac901bb6849770f1c54c236b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swanson, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 11:35:28 compute-0 happy_swanson[97287]: 167 167
Oct 02 11:35:28 compute-0 systemd[1]: libpod-0748a59e8553274ff574fc7f4ef2bf0a9ef61c7cac901bb6849770f1c54c236b.scope: Deactivated successfully.
Oct 02 11:35:28 compute-0 podman[97270]: 2025-10-02 11:35:28.391235605 +0000 UTC m=+0.115068146 container attach 0748a59e8553274ff574fc7f4ef2bf0a9ef61c7cac901bb6849770f1c54c236b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swanson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 11:35:28 compute-0 podman[97270]: 2025-10-02 11:35:28.391600343 +0000 UTC m=+0.115432854 container died 0748a59e8553274ff574fc7f4ef2bf0a9ef61c7cac901bb6849770f1c54c236b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swanson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:35:28 compute-0 podman[97270]: 2025-10-02 11:35:28.296554763 +0000 UTC m=+0.020387304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:35:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a84df634201f5706c31cad90d2360fbfc8180be8498f026f0447c78f488a536f-merged.mount: Deactivated successfully.
Oct 02 11:35:28 compute-0 podman[97270]: 2025-10-02 11:35:28.431457209 +0000 UTC m=+0.155289720 container remove 0748a59e8553274ff574fc7f4ef2bf0a9ef61c7cac901bb6849770f1c54c236b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:35:28 compute-0 ceph-mgr[73901]: [progress INFO root] Completed event 73668eb3-65dd-4189-bf1b-2d497b15d283 (Global Recovery Event) in 15 seconds
Oct 02 11:35:28 compute-0 systemd[1]: libpod-conmon-0748a59e8553274ff574fc7f4ef2bf0a9ef61c7cac901bb6849770f1c54c236b.scope: Deactivated successfully.
Oct 02 11:35:28 compute-0 sudo[97229]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:35:28 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:35:28 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:28 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Oct 02 11:35:28 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Oct 02 11:35:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Oct 02 11:35:28 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 02 11:35:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:35:28 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:28 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-0
Oct 02 11:35:28 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-0
Oct 02 11:35:28 compute-0 sudo[97306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:28 compute-0 sudo[97306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:28 compute-0 sudo[97306]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:28 compute-0 sudo[97331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:35:28 compute-0 sudo[97331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:28 compute-0 sudo[97331]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:28 compute-0 sudo[97356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:28 compute-0 sudo[97356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:28 compute-0 sudo[97356]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:28.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:28 compute-0 sudo[97381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct 02 11:35:28 compute-0 sudo[97381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Oct 02 11:35:28 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 02 11:35:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Oct 02 11:35:28 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Oct 02 11:35:28 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 80 pg[9.1a( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=5 ec=54/41 lis/c=78/54 les/c/f=79/55/0 sis=80) [1] r=0 lpr=80 pi=[54,80)/1 luod=0'0 crt=47'1065 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:28 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 80 pg[9.1a( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=5 ec=54/41 lis/c=78/54 les/c/f=79/55/0 sis=80) [1] r=0 lpr=80 pi=[54,80)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:28 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 80 pg[9.a( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=6 ec=54/41 lis/c=78/54 les/c/f=79/55/0 sis=80) [1] r=0 lpr=80 pi=[54,80)/1 luod=0'0 crt=47'1065 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:28 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 80 pg[9.a( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=6 ec=54/41 lis/c=78/54 les/c/f=79/55/0 sis=80) [1] r=0 lpr=80 pi=[54,80)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:28 compute-0 podman[97423]: 2025-10-02 11:35:28.991071996 +0000 UTC m=+0.038596535 container create 31538838b8e0237cdefe577edb1f74b9c7d884f3fb9fbca68a45ae048ba14c07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 11:35:29 compute-0 systemd[1]: Started libpod-conmon-31538838b8e0237cdefe577edb1f74b9c7d884f3fb9fbca68a45ae048ba14c07.scope.
Oct 02 11:35:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:35:29 compute-0 podman[97423]: 2025-10-02 11:35:29.058977485 +0000 UTC m=+0.106502044 container init 31538838b8e0237cdefe577edb1f74b9c7d884f3fb9fbca68a45ae048ba14c07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:35:29 compute-0 podman[97423]: 2025-10-02 11:35:29.064545763 +0000 UTC m=+0.112070302 container start 31538838b8e0237cdefe577edb1f74b9c7d884f3fb9fbca68a45ae048ba14c07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_wing, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:35:29 compute-0 crazy_wing[97439]: 167 167
Oct 02 11:35:29 compute-0 systemd[1]: libpod-31538838b8e0237cdefe577edb1f74b9c7d884f3fb9fbca68a45ae048ba14c07.scope: Deactivated successfully.
Oct 02 11:35:29 compute-0 podman[97423]: 2025-10-02 11:35:28.973327107 +0000 UTC m=+0.020851666 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:35:29 compute-0 podman[97423]: 2025-10-02 11:35:29.069048764 +0000 UTC m=+0.116573653 container attach 31538838b8e0237cdefe577edb1f74b9c7d884f3fb9fbca68a45ae048ba14c07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 11:35:29 compute-0 podman[97423]: 2025-10-02 11:35:29.069342161 +0000 UTC m=+0.116866700 container died 31538838b8e0237cdefe577edb1f74b9c7d884f3fb9fbca68a45ae048ba14c07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_wing, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 11:35:29 compute-0 ceph-mon[73607]: Reconfiguring crash.compute-0 (monmap changed)...
Oct 02 11:35:29 compute-0 ceph-mon[73607]: Reconfiguring daemon crash.compute-0 on compute-0
Oct 02 11:35:29 compute-0 ceph-mon[73607]: 6.b scrub starts
Oct 02 11:35:29 compute-0 ceph-mon[73607]: 6.b scrub ok
Oct 02 11:35:29 compute-0 ceph-mon[73607]: pgmap v185: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 0 B/s wr, 26 op/s; 91 B/s, 3 objects/s recovering
Oct 02 11:35:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 02 11:35:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:29 compute-0 ceph-mon[73607]: Reconfiguring osd.1 (monmap changed)...
Oct 02 11:35:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 02 11:35:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 02 11:35:29 compute-0 ceph-mon[73607]: osdmap e80: 3 total, 3 up, 3 in
Oct 02 11:35:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-af53007682e992bf5cda37f076aaee8d0a90419d7a1e8e196c4bb33ccf3abbb1-merged.mount: Deactivated successfully.
Oct 02 11:35:29 compute-0 podman[97423]: 2025-10-02 11:35:29.115711047 +0000 UTC m=+0.163235586 container remove 31538838b8e0237cdefe577edb1f74b9c7d884f3fb9fbca68a45ae048ba14c07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_wing, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 11:35:29 compute-0 systemd[1]: libpod-conmon-31538838b8e0237cdefe577edb1f74b9c7d884f3fb9fbca68a45ae048ba14c07.scope: Deactivated successfully.
Oct 02 11:35:29 compute-0 sudo[97381]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:35:29 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:35:29 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:29 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Oct 02 11:35:29 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Oct 02 11:35:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Oct 02 11:35:29 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 02 11:35:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:35:29 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:29 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Oct 02 11:35:29 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Oct 02 11:35:29 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Oct 02 11:35:29 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Oct 02 11:35:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:35:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:35:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:29.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:35:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Oct 02 11:35:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Oct 02 11:35:29 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Oct 02 11:35:29 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 81 pg[9.a( v 47'1065 (0'0,47'1065] local-lis/les=80/81 n=6 ec=54/41 lis/c=78/54 les/c/f=79/55/0 sis=80) [1] r=0 lpr=80 pi=[54,80)/1 crt=47'1065 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:29 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 81 pg[9.1a( v 47'1065 (0'0,47'1065] local-lis/les=80/81 n=5 ec=54/41 lis/c=78/54 les/c/f=79/55/0 sis=80) [1] r=0 lpr=80 pi=[54,80)/1 crt=47'1065 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:30 compute-0 sudo[97486]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrrskbknxebpxxgkliiwxmbfvmnxniqs ; /usr/bin/python3'
Oct 02 11:35:30 compute-0 sudo[97486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:35:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:35:30 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:35:30 compute-0 python3[97488]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:35:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v188: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 49 B/s, 2 objects/s recovering
Oct 02 11:35:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Oct 02 11:35:30 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 02 11:35:30 compute-0 podman[97489]: 2025-10-02 11:35:30.272779338 +0000 UTC m=+0.044547793 container create b00746fe8ef5704c44d6e6908ee7b8a2a8526bd6a8c56ce2e6f3ab80006aac80 (image=quay.io/ceph/ceph:v18, name=goofy_rubin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:35:30 compute-0 systemd[1]: Started libpod-conmon-b00746fe8ef5704c44d6e6908ee7b8a2a8526bd6a8c56ce2e6f3ab80006aac80.scope.
Oct 02 11:35:30 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:30 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Oct 02 11:35:30 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Oct 02 11:35:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:35:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Oct 02 11:35:30 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 02 11:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce5774248f99dad66c84bfb58f326059419567af49059cf79139d308e79dca47/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce5774248f99dad66c84bfb58f326059419567af49059cf79139d308e79dca47/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:35:30 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:30 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-1
Oct 02 11:35:30 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-1
Oct 02 11:35:30 compute-0 podman[97489]: 2025-10-02 11:35:30.256632128 +0000 UTC m=+0.028400603 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:35:30 compute-0 podman[97489]: 2025-10-02 11:35:30.355753929 +0000 UTC m=+0.127522414 container init b00746fe8ef5704c44d6e6908ee7b8a2a8526bd6a8c56ce2e6f3ab80006aac80 (image=quay.io/ceph/ceph:v18, name=goofy_rubin, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:35:30 compute-0 podman[97489]: 2025-10-02 11:35:30.361066361 +0000 UTC m=+0.132834816 container start b00746fe8ef5704c44d6e6908ee7b8a2a8526bd6a8c56ce2e6f3ab80006aac80 (image=quay.io/ceph/ceph:v18, name=goofy_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 11:35:30 compute-0 podman[97489]: 2025-10-02 11:35:30.364520516 +0000 UTC m=+0.136288991 container attach b00746fe8ef5704c44d6e6908ee7b8a2a8526bd6a8c56ce2e6f3ab80006aac80 (image=quay.io/ceph/ceph:v18, name=goofy_rubin, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:35:30 compute-0 ceph-mon[73607]: Reconfiguring daemon osd.1 on compute-0
Oct 02 11:35:30 compute-0 ceph-mon[73607]: 6.c scrub starts
Oct 02 11:35:30 compute-0 ceph-mon[73607]: 6.c scrub ok
Oct 02 11:35:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:30 compute-0 ceph-mon[73607]: Reconfiguring crash.compute-1 (monmap changed)...
Oct 02 11:35:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 02 11:35:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:30 compute-0 ceph-mon[73607]: Reconfiguring daemon crash.compute-1 on compute-1
Oct 02 11:35:30 compute-0 ceph-mon[73607]: 7.19 scrub starts
Oct 02 11:35:30 compute-0 ceph-mon[73607]: 7.19 scrub ok
Oct 02 11:35:30 compute-0 ceph-mon[73607]: osdmap e81: 3 total, 3 up, 3 in
Oct 02 11:35:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 02 11:35:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:30.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:30 compute-0 goofy_rubin[97505]: could not fetch user info: no user info saved
Oct 02 11:35:30 compute-0 systemd[1]: libpod-b00746fe8ef5704c44d6e6908ee7b8a2a8526bd6a8c56ce2e6f3ab80006aac80.scope: Deactivated successfully.
Oct 02 11:35:30 compute-0 podman[97590]: 2025-10-02 11:35:30.965624849 +0000 UTC m=+0.027005460 container died b00746fe8ef5704c44d6e6908ee7b8a2a8526bd6a8c56ce2e6f3ab80006aac80 (image=quay.io/ceph/ceph:v18, name=goofy_rubin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:35:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce5774248f99dad66c84bfb58f326059419567af49059cf79139d308e79dca47-merged.mount: Deactivated successfully.
Oct 02 11:35:31 compute-0 podman[97590]: 2025-10-02 11:35:31.007948195 +0000 UTC m=+0.069328806 container remove b00746fe8ef5704c44d6e6908ee7b8a2a8526bd6a8c56ce2e6f3ab80006aac80 (image=quay.io/ceph/ceph:v18, name=goofy_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:35:31 compute-0 systemd[1]: libpod-conmon-b00746fe8ef5704c44d6e6908ee7b8a2a8526bd6a8c56ce2e6f3ab80006aac80.scope: Deactivated successfully.
Oct 02 11:35:31 compute-0 sudo[97486]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:35:31 compute-0 sudo[97628]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewnhcwpvsdordoskhmoudhnacskwktmo ; /usr/bin/python3'
Oct 02 11:35:31 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:31 compute-0 sudo[97628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:35:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Oct 02 11:35:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:35:31 compute-0 python3[97630]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:35:31 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 02 11:35:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Oct 02 11:35:31 compute-0 podman[97631]: 2025-10-02 11:35:31.452736824 +0000 UTC m=+0.114003161 container create bc34e67313feab93860b3a8ba3b930317250bd5b1d1f540bcd0f57019e3d6974 (image=quay.io/ceph/ceph:v18, name=recursing_banzai, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:35:31 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Oct 02 11:35:31 compute-0 podman[97631]: 2025-10-02 11:35:31.362293917 +0000 UTC m=+0.023560284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:35:31 compute-0 systemd[1]: Started libpod-conmon-bc34e67313feab93860b3a8ba3b930317250bd5b1d1f540bcd0f57019e3d6974.scope.
Oct 02 11:35:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:31.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22f9056e4ac4b7da185bcead27c465f7bffbab5274512585fc63f711dde7d7b9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22f9056e4ac4b7da185bcead27c465f7bffbab5274512585fc63f711dde7d7b9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:31 compute-0 podman[97631]: 2025-10-02 11:35:31.604507326 +0000 UTC m=+0.265773693 container init bc34e67313feab93860b3a8ba3b930317250bd5b1d1f540bcd0f57019e3d6974 (image=quay.io/ceph/ceph:v18, name=recursing_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 11:35:31 compute-0 podman[97631]: 2025-10-02 11:35:31.610904574 +0000 UTC m=+0.272170921 container start bc34e67313feab93860b3a8ba3b930317250bd5b1d1f540bcd0f57019e3d6974 (image=quay.io/ceph/ceph:v18, name=recursing_banzai, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:35:31 compute-0 podman[97631]: 2025-10-02 11:35:31.616427191 +0000 UTC m=+0.277693538 container attach bc34e67313feab93860b3a8ba3b930317250bd5b1d1f540bcd0f57019e3d6974 (image=quay.io/ceph/ceph:v18, name=recursing_banzai, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 11:35:31 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:31 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Oct 02 11:35:31 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Oct 02 11:35:31 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 82 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=67/67 les/c/f=68/68/0 sis=82) [1] r=0 lpr=82 pi=[67,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Oct 02 11:35:31 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 02 11:35:31 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 82 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=67/67 les/c/f=68/68/0 sis=82) [1] r=0 lpr=82 pi=[67,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Oct 02 11:35:31 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 02 11:35:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:35:31 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:31 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Oct 02 11:35:31 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Oct 02 11:35:32 compute-0 ceph-mon[73607]: pgmap v188: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 49 B/s, 2 objects/s recovering
Oct 02 11:35:32 compute-0 ceph-mon[73607]: 4.1c scrub starts
Oct 02 11:35:32 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:32 compute-0 ceph-mon[73607]: Reconfiguring osd.0 (monmap changed)...
Oct 02 11:35:32 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 02 11:35:32 compute-0 ceph-mon[73607]: 4.1c scrub ok
Oct 02 11:35:32 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:32 compute-0 ceph-mon[73607]: Reconfiguring daemon osd.0 on compute-1
Oct 02 11:35:32 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:32 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 02 11:35:32 compute-0 ceph-mon[73607]: osdmap e82: 3 total, 3 up, 3 in
Oct 02 11:35:32 compute-0 recursing_banzai[97647]: {
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:     "user_id": "openstack",
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:     "display_name": "openstack",
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:     "email": "",
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:     "suspended": 0,
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:     "max_buckets": 1000,
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:     "subusers": [],
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:     "keys": [
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:         {
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:             "user": "openstack",
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:             "access_key": "WMEYV8ZI6TZQLBEZJJUG",
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:             "secret_key": "dZB1s3kFjI92SgwBP0CLGJ0sE5SRCdDkGcMaEeO6"
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:         }
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:     ],
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:     "swift_keys": [],
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:     "caps": [],
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:     "op_mask": "read, write, delete",
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:     "default_placement": "",
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:     "default_storage_class": "",
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:     "placement_tags": [],
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:     "bucket_quota": {
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:         "enabled": false,
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:         "check_on_raw": false,
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:         "max_size": -1,
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:         "max_size_kb": 0,
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:         "max_objects": -1
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:     },
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:     "user_quota": {
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:         "enabled": false,
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:         "check_on_raw": false,
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:         "max_size": -1,
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:         "max_size_kb": 0,
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:         "max_objects": -1
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:     },
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:     "temp_url_keys": [],
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:     "type": "rgw",
Oct 02 11:35:32 compute-0 recursing_banzai[97647]:     "mfa_ids": []
Oct 02 11:35:32 compute-0 recursing_banzai[97647]: }
Oct 02 11:35:32 compute-0 recursing_banzai[97647]: 
Oct 02 11:35:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v190: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 47 B/s, 2 objects/s recovering
Oct 02 11:35:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Oct 02 11:35:32 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 02 11:35:32 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Oct 02 11:35:32 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Oct 02 11:35:32 compute-0 systemd[1]: libpod-bc34e67313feab93860b3a8ba3b930317250bd5b1d1f540bcd0f57019e3d6974.scope: Deactivated successfully.
Oct 02 11:35:32 compute-0 podman[97631]: 2025-10-02 11:35:32.508526969 +0000 UTC m=+1.169793316 container died bc34e67313feab93860b3a8ba3b930317250bd5b1d1f540bcd0f57019e3d6974 (image=quay.io/ceph/ceph:v18, name=recursing_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:35:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:35:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-22f9056e4ac4b7da185bcead27c465f7bffbab5274512585fc63f711dde7d7b9-merged.mount: Deactivated successfully.
Oct 02 11:35:32 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:35:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:35:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:32.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:35:32 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:32 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Oct 02 11:35:32 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Oct 02 11:35:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Oct 02 11:35:32 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 02 11:35:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Oct 02 11:35:32 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 02 11:35:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:35:32 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:32 compute-0 podman[97631]: 2025-10-02 11:35:32.797162475 +0000 UTC m=+1.458428822 container remove bc34e67313feab93860b3a8ba3b930317250bd5b1d1f540bcd0f57019e3d6974 (image=quay.io/ceph/ceph:v18, name=recursing_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 11:35:32 compute-0 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Oct 02 11:35:32 compute-0 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Oct 02 11:35:32 compute-0 sudo[97628]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:32 compute-0 systemd[1]: libpod-conmon-bc34e67313feab93860b3a8ba3b930317250bd5b1d1f540bcd0f57019e3d6974.scope: Deactivated successfully.
Oct 02 11:35:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Oct 02 11:35:33 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 02 11:35:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Oct 02 11:35:33 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Oct 02 11:35:33 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 83 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=67/67 les/c/f=68/68/0 sis=83) [1]/[2] r=-1 lpr=83 pi=[67,83)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:33 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 83 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=67/67 les/c/f=68/68/0 sis=83) [1]/[2] r=-1 lpr=83 pi=[67,83)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:33 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 83 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=67/67 les/c/f=68/68/0 sis=83) [1]/[2] r=-1 lpr=83 pi=[67,83)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:33 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 83 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=67/67 les/c/f=68/68/0 sis=83) [1]/[2] r=-1 lpr=83 pi=[67,83)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:33 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:33 compute-0 ceph-mon[73607]: Reconfiguring mon.compute-1 (monmap changed)...
Oct 02 11:35:33 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 02 11:35:33 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 02 11:35:33 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:33 compute-0 ceph-mon[73607]: Reconfiguring daemon mon.compute-1 on compute-1
Oct 02 11:35:33 compute-0 ceph-mon[73607]: pgmap v190: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 47 B/s, 2 objects/s recovering
Oct 02 11:35:33 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 02 11:35:33 compute-0 ceph-mon[73607]: 7.1a scrub starts
Oct 02 11:35:33 compute-0 ceph-mon[73607]: 7.1a scrub ok
Oct 02 11:35:33 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:33 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:33 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 02 11:35:33 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 02 11:35:33 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:33 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 02 11:35:33 compute-0 ceph-mon[73607]: osdmap e83: 3 total, 3 up, 3 in
Oct 02 11:35:33 compute-0 ceph-mgr[73901]: [progress INFO root] Writing back 22 completed events
Oct 02 11:35:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 11:35:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:33.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:33 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:35:33 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:35:34 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:34 compute-0 sudo[97745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:34 compute-0 sudo[97745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:34 compute-0 sudo[97745]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:34 compute-0 sudo[97770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:35:34 compute-0 sudo[97770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:34 compute-0 sudo[97770]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Oct 02 11:35:34 compute-0 sudo[97795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:34 compute-0 sudo[97795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:34 compute-0 sudo[97795]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:34 compute-0 sudo[97820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 11:35:34 compute-0 sudo[97820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v192: 305 pgs: 305 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 1.9 KiB/s rd, 1 op/s; 40 B/s, 2 objects/s recovering
Oct 02 11:35:34 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Oct 02 11:35:34 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Oct 02 11:35:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Oct 02 11:35:34 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 02 11:35:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Oct 02 11:35:34 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Oct 02 11:35:34 compute-0 ceph-mon[73607]: Reconfiguring mon.compute-2 (monmap changed)...
Oct 02 11:35:34 compute-0 ceph-mon[73607]: Reconfiguring daemon mon.compute-2 on compute-2
Oct 02 11:35:34 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:34 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:34 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:34 compute-0 podman[97918]: 2025-10-02 11:35:34.704365842 +0000 UTC m=+0.067254674 container exec 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 11:35:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:34.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:34 compute-0 podman[97918]: 2025-10-02 11:35:34.807147363 +0000 UTC m=+0.170036155 container exec_died 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:35:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:35:35 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:35:35 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:35 compute-0 podman[98052]: 2025-10-02 11:35:35.357399348 +0000 UTC m=+0.078004719 container exec 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 11:35:35 compute-0 podman[98052]: 2025-10-02 11:35:35.391670536 +0000 UTC m=+0.112275897 container exec_died 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 11:35:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:35.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Oct 02 11:35:35 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 02 11:35:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Oct 02 11:35:35 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Oct 02 11:35:35 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 85 pg[9.1d( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=5 ec=54/41 lis/c=83/67 les/c/f=84/68/0 sis=85) [1] r=0 lpr=85 pi=[67,85)/1 luod=0'0 crt=47'1065 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:35 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 85 pg[9.1d( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=5 ec=54/41 lis/c=83/67 les/c/f=84/68/0 sis=85) [1] r=0 lpr=85 pi=[67,85)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:35 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 85 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=64/64 les/c/f=65/65/0 sis=85) [1] r=0 lpr=85 pi=[64,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:35 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 85 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=64/64 les/c/f=65/65/0 sis=85) [1] r=0 lpr=85 pi=[64,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:35 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 85 pg[9.d( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=6 ec=54/41 lis/c=83/67 les/c/f=84/68/0 sis=85) [1] r=0 lpr=85 pi=[67,85)/1 luod=0'0 crt=47'1065 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:35 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 85 pg[9.d( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=6 ec=54/41 lis/c=83/67 les/c/f=84/68/0 sis=85) [1] r=0 lpr=85 pi=[67,85)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:35 compute-0 ceph-mon[73607]: 6.f scrub starts
Oct 02 11:35:35 compute-0 ceph-mon[73607]: 6.f scrub ok
Oct 02 11:35:35 compute-0 ceph-mon[73607]: pgmap v192: 305 pgs: 305 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 1.9 KiB/s rd, 1 op/s; 40 B/s, 2 objects/s recovering
Oct 02 11:35:35 compute-0 ceph-mon[73607]: 7.1c scrub starts
Oct 02 11:35:35 compute-0 ceph-mon[73607]: 7.1c scrub ok
Oct 02 11:35:35 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 02 11:35:35 compute-0 ceph-mon[73607]: osdmap e84: 3 total, 3 up, 3 in
Oct 02 11:35:35 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:35 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:35 compute-0 podman[98115]: 2025-10-02 11:35:35.754152989 +0000 UTC m=+0.138079005 container exec a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, architecture=x86_64, io.openshift.expose-services=, name=keepalived, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, description=keepalived for Ceph, release=1793, distribution-scope=public)
Oct 02 11:35:35 compute-0 podman[98115]: 2025-10-02 11:35:35.803213312 +0000 UTC m=+0.187139298 container exec_died a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, version=2.2.4, io.buildah.version=1.28.2, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, vcs-type=git, description=keepalived for Ceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, distribution-scope=public, architecture=x86_64, com.redhat.component=keepalived-container)
Oct 02 11:35:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:35:36 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:35:36 compute-0 sudo[97820]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:35:36 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:36 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:35:36 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:35:36 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:35:36 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:35:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:35:36 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:36 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 14553781-a54c-43df-a91a-7db959272c59 does not exist
Oct 02 11:35:36 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 08488184-75ae-421c-992c-11c088d5f4e1 does not exist
Oct 02 11:35:36 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev aa6105ca-dccb-4c50-b01e-73ae93119b5a does not exist
Oct 02 11:35:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:35:36 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:35:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:35:36 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:35:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:35:36 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:36 compute-0 sudo[98166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:36 compute-0 sudo[98166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v195: 305 pgs: 305 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 2.9 KiB/s rd, 2 op/s
Oct 02 11:35:36 compute-0 sudo[98166]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Oct 02 11:35:36 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 02 11:35:36 compute-0 sudo[98192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:35:36 compute-0 sudo[98192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:36 compute-0 sudo[98192]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:36 compute-0 sudo[98217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:36 compute-0 sudo[98217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:36 compute-0 sudo[98217]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:36 compute-0 sudo[98242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:35:36 compute-0 sudo[98242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Oct 02 11:35:36 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 02 11:35:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Oct 02 11:35:36 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Oct 02 11:35:36 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 86 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=64/64 les/c/f=65/65/0 sis=86) [1]/[2] r=-1 lpr=86 pi=[64,86)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:36 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 86 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=64/64 les/c/f=65/65/0 sis=86) [1]/[2] r=-1 lpr=86 pi=[64,86)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:36 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 86 pg[9.10( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=86) [1] r=0 lpr=86 pi=[54,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:36 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 86 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=64/64 les/c/f=65/65/0 sis=86) [1]/[2] r=-1 lpr=86 pi=[64,86)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:36 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 86 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=64/64 les/c/f=65/65/0 sis=86) [1]/[2] r=-1 lpr=86 pi=[64,86)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:36 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 86 pg[9.1d( v 47'1065 (0'0,47'1065] local-lis/les=85/86 n=5 ec=54/41 lis/c=83/67 les/c/f=84/68/0 sis=85) [1] r=0 lpr=85 pi=[67,85)/1 crt=47'1065 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:36 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 86 pg[9.d( v 47'1065 (0'0,47'1065] local-lis/les=85/86 n=6 ec=54/41 lis/c=83/67 les/c/f=84/68/0 sis=85) [1] r=0 lpr=85 pi=[67,85)/1 crt=47'1065 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:36 compute-0 ceph-mon[73607]: 8.1 scrub starts
Oct 02 11:35:36 compute-0 ceph-mon[73607]: 8.1 scrub ok
Oct 02 11:35:36 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 02 11:35:36 compute-0 ceph-mon[73607]: osdmap e85: 3 total, 3 up, 3 in
Oct 02 11:35:36 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:36 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:36 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:36 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:36 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:36 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:35:36 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:36 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:35:36 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:35:36 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:35:36 compute-0 ceph-mon[73607]: pgmap v195: 305 pgs: 305 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 2.9 KiB/s rd, 2 op/s
Oct 02 11:35:36 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 02 11:35:36 compute-0 ceph-mon[73607]: 8.11 deep-scrub starts
Oct 02 11:35:36 compute-0 ceph-mon[73607]: 8.11 deep-scrub ok
Oct 02 11:35:36 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 02 11:35:36 compute-0 ceph-mon[73607]: osdmap e86: 3 total, 3 up, 3 in
Oct 02 11:35:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:36.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:36 compute-0 podman[98304]: 2025-10-02 11:35:36.827216111 +0000 UTC m=+0.095047861 container create a1e101fe2775ab6984a638d1a1ba1db682c217c9246c9da73529624d86167c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 11:35:36 compute-0 podman[98304]: 2025-10-02 11:35:36.760884641 +0000 UTC m=+0.028716441 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:35:36 compute-0 systemd[1]: Started libpod-conmon-a1e101fe2775ab6984a638d1a1ba1db682c217c9246c9da73529624d86167c4f.scope.
Oct 02 11:35:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:35:36 compute-0 podman[98304]: 2025-10-02 11:35:36.96996557 +0000 UTC m=+0.237797350 container init a1e101fe2775ab6984a638d1a1ba1db682c217c9246c9da73529624d86167c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_heisenberg, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:35:36 compute-0 podman[98304]: 2025-10-02 11:35:36.977127227 +0000 UTC m=+0.244958977 container start a1e101fe2775ab6984a638d1a1ba1db682c217c9246c9da73529624d86167c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_heisenberg, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Oct 02 11:35:36 compute-0 dreamy_heisenberg[98321]: 167 167
Oct 02 11:35:36 compute-0 systemd[1]: libpod-a1e101fe2775ab6984a638d1a1ba1db682c217c9246c9da73529624d86167c4f.scope: Deactivated successfully.
Oct 02 11:35:37 compute-0 podman[98304]: 2025-10-02 11:35:37.007011287 +0000 UTC m=+0.274843037 container attach a1e101fe2775ab6984a638d1a1ba1db682c217c9246c9da73529624d86167c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_heisenberg, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 11:35:37 compute-0 podman[98304]: 2025-10-02 11:35:37.0075577 +0000 UTC m=+0.275389450 container died a1e101fe2775ab6984a638d1a1ba1db682c217c9246c9da73529624d86167c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 11:35:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-35c96aa137c06a1a4476491082c16ef6fc902485e05818a06241ece1746ef3dd-merged.mount: Deactivated successfully.
Oct 02 11:35:37 compute-0 podman[98304]: 2025-10-02 11:35:37.309357302 +0000 UTC m=+0.577189052 container remove a1e101fe2775ab6984a638d1a1ba1db682c217c9246c9da73529624d86167c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_heisenberg, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 11:35:37 compute-0 systemd[1]: libpod-conmon-a1e101fe2775ab6984a638d1a1ba1db682c217c9246c9da73529624d86167c4f.scope: Deactivated successfully.
Oct 02 11:35:37 compute-0 sudo[98342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:37 compute-0 sudo[98342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:37 compute-0 sudo[98342]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:37 compute-0 podman[98365]: 2025-10-02 11:35:37.490800458 +0000 UTC m=+0.052569100 container create eadcd56459fee8c4d53175ac0498c381cff0d0e2314c6fb1229070fc28c11da0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 11:35:37 compute-0 sudo[98385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:37 compute-0 sudo[98385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:37 compute-0 sudo[98385]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:37 compute-0 systemd[1]: Started libpod-conmon-eadcd56459fee8c4d53175ac0498c381cff0d0e2314c6fb1229070fc28c11da0.scope.
Oct 02 11:35:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:37.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:37 compute-0 podman[98365]: 2025-10-02 11:35:37.461095374 +0000 UTC m=+0.022864046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:35:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa3a28699e2f754316f94b71d0865a3fcb67ac1fdd7978df7847ab50c56ed27a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa3a28699e2f754316f94b71d0865a3fcb67ac1fdd7978df7847ab50c56ed27a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa3a28699e2f754316f94b71d0865a3fcb67ac1fdd7978df7847ab50c56ed27a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa3a28699e2f754316f94b71d0865a3fcb67ac1fdd7978df7847ab50c56ed27a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa3a28699e2f754316f94b71d0865a3fcb67ac1fdd7978df7847ab50c56ed27a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:37 compute-0 podman[98365]: 2025-10-02 11:35:37.579074501 +0000 UTC m=+0.140843153 container init eadcd56459fee8c4d53175ac0498c381cff0d0e2314c6fb1229070fc28c11da0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kapitsa, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:35:37 compute-0 podman[98365]: 2025-10-02 11:35:37.587879638 +0000 UTC m=+0.149648280 container start eadcd56459fee8c4d53175ac0498c381cff0d0e2314c6fb1229070fc28c11da0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kapitsa, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 11:35:37 compute-0 podman[98365]: 2025-10-02 11:35:37.591389616 +0000 UTC m=+0.153158268 container attach eadcd56459fee8c4d53175ac0498c381cff0d0e2314c6fb1229070fc28c11da0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kapitsa, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 11:35:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Oct 02 11:35:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Oct 02 11:35:37 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Oct 02 11:35:37 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 87 pg[9.10( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=87) [1]/[0] r=-1 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:37 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 87 pg[9.10( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=87) [1]/[0] r=-1 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:37 compute-0 ceph-mon[73607]: 9.1 scrub starts
Oct 02 11:35:37 compute-0 ceph-mon[73607]: 9.1 scrub ok
Oct 02 11:35:37 compute-0 ceph-mon[73607]: osdmap e87: 3 total, 3 up, 3 in
Oct 02 11:35:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v198: 305 pgs: 2 active+remapped, 303 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 767 B/s rd, 0 op/s; 54 B/s, 2 objects/s recovering
Oct 02 11:35:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Oct 02 11:35:38 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 02 11:35:38 compute-0 youthful_kapitsa[98413]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:35:38 compute-0 youthful_kapitsa[98413]: --> relative data size: 1.0
Oct 02 11:35:38 compute-0 youthful_kapitsa[98413]: --> All data devices are unavailable
Oct 02 11:35:38 compute-0 systemd[1]: libpod-eadcd56459fee8c4d53175ac0498c381cff0d0e2314c6fb1229070fc28c11da0.scope: Deactivated successfully.
Oct 02 11:35:38 compute-0 podman[98431]: 2025-10-02 11:35:38.484377886 +0000 UTC m=+0.025889402 container died eadcd56459fee8c4d53175ac0498c381cff0d0e2314c6fb1229070fc28c11da0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kapitsa, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:35:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa3a28699e2f754316f94b71d0865a3fcb67ac1fdd7978df7847ab50c56ed27a-merged.mount: Deactivated successfully.
Oct 02 11:35:38 compute-0 podman[98431]: 2025-10-02 11:35:38.578110294 +0000 UTC m=+0.119621790 container remove eadcd56459fee8c4d53175ac0498c381cff0d0e2314c6fb1229070fc28c11da0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kapitsa, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:35:38 compute-0 systemd[1]: libpod-conmon-eadcd56459fee8c4d53175ac0498c381cff0d0e2314c6fb1229070fc28c11da0.scope: Deactivated successfully.
Oct 02 11:35:38 compute-0 sudo[98242]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Oct 02 11:35:38 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 02 11:35:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Oct 02 11:35:38 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Oct 02 11:35:38 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 88 pg[9.f( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=6 ec=54/41 lis/c=86/64 les/c/f=87/65/0 sis=88) [1] r=0 lpr=88 pi=[64,88)/1 luod=0'0 crt=47'1065 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:38 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 88 pg[9.f( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=6 ec=54/41 lis/c=86/64 les/c/f=87/65/0 sis=88) [1] r=0 lpr=88 pi=[64,88)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:38 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 88 pg[9.1f( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=5 ec=54/41 lis/c=86/64 les/c/f=87/65/0 sis=88) [1] r=0 lpr=88 pi=[64,88)/1 luod=0'0 crt=47'1065 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:38 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 88 pg[9.1f( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=5 ec=54/41 lis/c=86/64 les/c/f=87/65/0 sis=88) [1] r=0 lpr=88 pi=[64,88)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:38 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 88 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=88) [1] r=0 lpr=88 pi=[54,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:38 compute-0 sudo[98446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:38 compute-0 sudo[98446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:38 compute-0 sudo[98446]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:38.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:38 compute-0 sudo[98471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:35:38 compute-0 sudo[98471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:38 compute-0 ceph-mon[73607]: 8.7 scrub starts
Oct 02 11:35:38 compute-0 ceph-mon[73607]: 8.7 scrub ok
Oct 02 11:35:38 compute-0 ceph-mon[73607]: pgmap v198: 305 pgs: 2 active+remapped, 303 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 767 B/s rd, 0 op/s; 54 B/s, 2 objects/s recovering
Oct 02 11:35:38 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 02 11:35:38 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 02 11:35:38 compute-0 ceph-mon[73607]: osdmap e88: 3 total, 3 up, 3 in
Oct 02 11:35:38 compute-0 sudo[98471]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:38 compute-0 sudo[98496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:38 compute-0 sudo[98496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:38 compute-0 sudo[98496]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:38 compute-0 sudo[98521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 11:35:38 compute-0 sudo[98521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:39 compute-0 podman[98586]: 2025-10-02 11:35:39.210071959 +0000 UTC m=+0.039496257 container create 3a16b28eb19ab9f2891e52387357aecb762b0138130a6833d4459788b28f7105 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 11:35:39 compute-0 systemd[1]: Started libpod-conmon-3a16b28eb19ab9f2891e52387357aecb762b0138130a6833d4459788b28f7105.scope.
Oct 02 11:35:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:35:39 compute-0 podman[98586]: 2025-10-02 11:35:39.191582402 +0000 UTC m=+0.021006720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:35:39 compute-0 podman[98586]: 2025-10-02 11:35:39.291511993 +0000 UTC m=+0.120936321 container init 3a16b28eb19ab9f2891e52387357aecb762b0138130a6833d4459788b28f7105 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Oct 02 11:35:39 compute-0 podman[98586]: 2025-10-02 11:35:39.300057264 +0000 UTC m=+0.129481562 container start 3a16b28eb19ab9f2891e52387357aecb762b0138130a6833d4459788b28f7105 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:35:39 compute-0 boring_cray[98602]: 167 167
Oct 02 11:35:39 compute-0 systemd[1]: libpod-3a16b28eb19ab9f2891e52387357aecb762b0138130a6833d4459788b28f7105.scope: Deactivated successfully.
Oct 02 11:35:39 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Oct 02 11:35:39 compute-0 podman[98586]: 2025-10-02 11:35:39.327249637 +0000 UTC m=+0.156673945 container attach 3a16b28eb19ab9f2891e52387357aecb762b0138130a6833d4459788b28f7105 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_cray, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 11:35:39 compute-0 podman[98586]: 2025-10-02 11:35:39.32821506 +0000 UTC m=+0.157639358 container died 3a16b28eb19ab9f2891e52387357aecb762b0138130a6833d4459788b28f7105 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:35:39 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Oct 02 11:35:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f6d3b45785b749147c59bd05afb859f5c1e6472ca0e0e3a097fdc1e1b3655e4-merged.mount: Deactivated successfully.
Oct 02 11:35:39 compute-0 podman[98586]: 2025-10-02 11:35:39.389570757 +0000 UTC m=+0.218995055 container remove 3a16b28eb19ab9f2891e52387357aecb762b0138130a6833d4459788b28f7105 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_cray, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:35:39 compute-0 systemd[1]: libpod-conmon-3a16b28eb19ab9f2891e52387357aecb762b0138130a6833d4459788b28f7105.scope: Deactivated successfully.
Oct 02 11:35:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:35:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Oct 02 11:35:39 compute-0 podman[98627]: 2025-10-02 11:35:39.551062811 +0000 UTC m=+0.061316957 container create e3ddc73102d39880ab77e3e49e4f3096eac573814a25872a3adbcddef1a825a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hoover, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:35:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:39.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Oct 02 11:35:39 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Oct 02 11:35:39 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 89 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=89) [1]/[0] r=-1 lpr=89 pi=[54,89)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:39 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 89 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=89) [1]/[0] r=-1 lpr=89 pi=[54,89)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:39 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 89 pg[9.10( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=6 ec=54/41 lis/c=87/54 les/c/f=88/55/0 sis=89) [1] r=0 lpr=89 pi=[54,89)/1 luod=0'0 crt=47'1065 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:39 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 89 pg[9.10( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=6 ec=54/41 lis/c=87/54 les/c/f=88/55/0 sis=89) [1] r=0 lpr=89 pi=[54,89)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:39 compute-0 podman[98627]: 2025-10-02 11:35:39.511703577 +0000 UTC m=+0.021957743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:35:39 compute-0 systemd[1]: Started libpod-conmon-e3ddc73102d39880ab77e3e49e4f3096eac573814a25872a3adbcddef1a825a8.scope.
Oct 02 11:35:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:35:39 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 89 pg[9.1f( v 47'1065 (0'0,47'1065] local-lis/les=88/89 n=5 ec=54/41 lis/c=86/64 les/c/f=87/65/0 sis=88) [1] r=0 lpr=88 pi=[64,88)/1 crt=47'1065 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cd8bf1a362dd9d83b67bdabb7984c3048bc31ff215339efc24f0b47bd403e09/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cd8bf1a362dd9d83b67bdabb7984c3048bc31ff215339efc24f0b47bd403e09/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cd8bf1a362dd9d83b67bdabb7984c3048bc31ff215339efc24f0b47bd403e09/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cd8bf1a362dd9d83b67bdabb7984c3048bc31ff215339efc24f0b47bd403e09/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:39 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 89 pg[9.f( v 47'1065 (0'0,47'1065] local-lis/les=88/89 n=6 ec=54/41 lis/c=86/64 les/c/f=87/65/0 sis=88) [1] r=0 lpr=88 pi=[64,88)/1 crt=47'1065 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:39 compute-0 podman[98627]: 2025-10-02 11:35:39.643379433 +0000 UTC m=+0.153633579 container init e3ddc73102d39880ab77e3e49e4f3096eac573814a25872a3adbcddef1a825a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 11:35:39 compute-0 podman[98627]: 2025-10-02 11:35:39.650453498 +0000 UTC m=+0.160707644 container start e3ddc73102d39880ab77e3e49e4f3096eac573814a25872a3adbcddef1a825a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hoover, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:35:39 compute-0 podman[98627]: 2025-10-02 11:35:39.668760311 +0000 UTC m=+0.179014487 container attach e3ddc73102d39880ab77e3e49e4f3096eac573814a25872a3adbcddef1a825a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:35:39 compute-0 ceph-mon[73607]: 9.2 scrub starts
Oct 02 11:35:39 compute-0 ceph-mon[73607]: 9.2 scrub ok
Oct 02 11:35:39 compute-0 ceph-mon[73607]: 10.6 scrub starts
Oct 02 11:35:39 compute-0 ceph-mon[73607]: 10.6 scrub ok
Oct 02 11:35:39 compute-0 ceph-mon[73607]: osdmap e89: 3 total, 3 up, 3 in
Oct 02 11:35:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v201: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 137 B/s, 6 objects/s recovering
Oct 02 11:35:40 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Oct 02 11:35:40 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]: {
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]:     "1": [
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]:         {
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]:             "devices": [
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]:                 "/dev/loop3"
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]:             ],
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]:             "lv_name": "ceph_lv0",
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]:             "lv_size": "7511998464",
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]:             "name": "ceph_lv0",
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]:             "tags": {
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]:                 "ceph.cluster_name": "ceph",
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]:                 "ceph.crush_device_class": "",
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]:                 "ceph.encrypted": "0",
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]:                 "ceph.osd_id": "1",
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]:                 "ceph.type": "block",
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]:                 "ceph.vdo": "0"
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]:             },
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]:             "type": "block",
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]:             "vg_name": "ceph_vg0"
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]:         }
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]:     ]
Oct 02 11:35:40 compute-0 xenodochial_hoover[98643]: }
Oct 02 11:35:40 compute-0 systemd[1]: libpod-e3ddc73102d39880ab77e3e49e4f3096eac573814a25872a3adbcddef1a825a8.scope: Deactivated successfully.
Oct 02 11:35:40 compute-0 podman[98627]: 2025-10-02 11:35:40.422688912 +0000 UTC m=+0.932943048 container died e3ddc73102d39880ab77e3e49e4f3096eac573814a25872a3adbcddef1a825a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:35:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cd8bf1a362dd9d83b67bdabb7984c3048bc31ff215339efc24f0b47bd403e09-merged.mount: Deactivated successfully.
Oct 02 11:35:40 compute-0 podman[98627]: 2025-10-02 11:35:40.506188467 +0000 UTC m=+1.016442613 container remove e3ddc73102d39880ab77e3e49e4f3096eac573814a25872a3adbcddef1a825a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hoover, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:35:40 compute-0 systemd[1]: libpod-conmon-e3ddc73102d39880ab77e3e49e4f3096eac573814a25872a3adbcddef1a825a8.scope: Deactivated successfully.
Oct 02 11:35:40 compute-0 sudo[98521]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Oct 02 11:35:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Oct 02 11:35:40 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Oct 02 11:35:40 compute-0 sudo[98667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:40 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 90 pg[9.10( v 47'1065 (0'0,47'1065] local-lis/les=89/90 n=6 ec=54/41 lis/c=87/54 les/c/f=88/55/0 sis=89) [1] r=0 lpr=89 pi=[54,89)/1 crt=47'1065 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:40 compute-0 sudo[98667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:40 compute-0 sudo[98667]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:40 compute-0 sudo[98692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:35:40 compute-0 sudo[98692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:40 compute-0 sudo[98692]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:40 compute-0 sudo[98717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:40 compute-0 sudo[98717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:40 compute-0 sudo[98717]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:40 compute-0 sudo[98742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 11:35:40 compute-0 sudo[98742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:40.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:41 compute-0 ceph-mon[73607]: 8.e scrub starts
Oct 02 11:35:41 compute-0 ceph-mon[73607]: 8.e scrub ok
Oct 02 11:35:41 compute-0 ceph-mon[73607]: pgmap v201: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 137 B/s, 6 objects/s recovering
Oct 02 11:35:41 compute-0 ceph-mon[73607]: 10.7 scrub starts
Oct 02 11:35:41 compute-0 ceph-mon[73607]: 10.7 scrub ok
Oct 02 11:35:41 compute-0 ceph-mon[73607]: 8.1f scrub starts
Oct 02 11:35:41 compute-0 ceph-mon[73607]: 8.1f scrub ok
Oct 02 11:35:41 compute-0 ceph-mon[73607]: osdmap e90: 3 total, 3 up, 3 in
Oct 02 11:35:41 compute-0 podman[98808]: 2025-10-02 11:35:41.062149674 +0000 UTC m=+0.057552194 container create a1f7def5614a2e93ba47d5b05204415089a63324224362efefb5cf70c8cccb2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 11:35:41 compute-0 systemd[1]: Started libpod-conmon-a1f7def5614a2e93ba47d5b05204415089a63324224362efefb5cf70c8cccb2f.scope.
Oct 02 11:35:41 compute-0 podman[98808]: 2025-10-02 11:35:41.028491531 +0000 UTC m=+0.023894081 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:35:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:35:41 compute-0 podman[98808]: 2025-10-02 11:35:41.150297863 +0000 UTC m=+0.145700403 container init a1f7def5614a2e93ba47d5b05204415089a63324224362efefb5cf70c8cccb2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:35:41 compute-0 podman[98808]: 2025-10-02 11:35:41.156690011 +0000 UTC m=+0.152092531 container start a1f7def5614a2e93ba47d5b05204415089a63324224362efefb5cf70c8cccb2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_archimedes, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:35:41 compute-0 podman[98808]: 2025-10-02 11:35:41.160822303 +0000 UTC m=+0.156224823 container attach a1f7def5614a2e93ba47d5b05204415089a63324224362efefb5cf70c8cccb2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_archimedes, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:35:41 compute-0 hardcore_archimedes[98825]: 167 167
Oct 02 11:35:41 compute-0 systemd[1]: libpod-a1f7def5614a2e93ba47d5b05204415089a63324224362efefb5cf70c8cccb2f.scope: Deactivated successfully.
Oct 02 11:35:41 compute-0 podman[98808]: 2025-10-02 11:35:41.16269728 +0000 UTC m=+0.158099820 container died a1f7def5614a2e93ba47d5b05204415089a63324224362efefb5cf70c8cccb2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_archimedes, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 11:35:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cfb8565c94d9467dd276c8f6e0056eff53b742c89b0d26caafef77baac8f406-merged.mount: Deactivated successfully.
Oct 02 11:35:41 compute-0 podman[98808]: 2025-10-02 11:35:41.20395445 +0000 UTC m=+0.199356970 container remove a1f7def5614a2e93ba47d5b05204415089a63324224362efefb5cf70c8cccb2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:35:41 compute-0 systemd[1]: libpod-conmon-a1f7def5614a2e93ba47d5b05204415089a63324224362efefb5cf70c8cccb2f.scope: Deactivated successfully.
Oct 02 11:35:41 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Oct 02 11:35:41 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Oct 02 11:35:41 compute-0 podman[98849]: 2025-10-02 11:35:41.355420034 +0000 UTC m=+0.038336578 container create 715f1779797af685303cefc6320008248936227eed82cfff29372ff8ea1e62dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_allen, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:35:41 compute-0 systemd[1]: Started libpod-conmon-715f1779797af685303cefc6320008248936227eed82cfff29372ff8ea1e62dd.scope.
Oct 02 11:35:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:35:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92f14a07f4a161307b06970c8044fdb544ef34a09581e48ecb6882a91395159f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92f14a07f4a161307b06970c8044fdb544ef34a09581e48ecb6882a91395159f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92f14a07f4a161307b06970c8044fdb544ef34a09581e48ecb6882a91395159f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92f14a07f4a161307b06970c8044fdb544ef34a09581e48ecb6882a91395159f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:35:41 compute-0 podman[98849]: 2025-10-02 11:35:41.422103684 +0000 UTC m=+0.105020238 container init 715f1779797af685303cefc6320008248936227eed82cfff29372ff8ea1e62dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_allen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 11:35:41 compute-0 podman[98849]: 2025-10-02 11:35:41.42842916 +0000 UTC m=+0.111345704 container start 715f1779797af685303cefc6320008248936227eed82cfff29372ff8ea1e62dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_allen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 11:35:41 compute-0 podman[98849]: 2025-10-02 11:35:41.338819554 +0000 UTC m=+0.021736118 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:35:41 compute-0 podman[98849]: 2025-10-02 11:35:41.436480439 +0000 UTC m=+0.119396983 container attach 715f1779797af685303cefc6320008248936227eed82cfff29372ff8ea1e62dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_allen, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:35:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:41.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Oct 02 11:35:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Oct 02 11:35:41 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Oct 02 11:35:41 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 91 pg[9.11( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=6 ec=54/41 lis/c=89/54 les/c/f=90/55/0 sis=91) [1] r=0 lpr=91 pi=[54,91)/1 luod=0'0 crt=47'1065 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:41 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 91 pg[9.11( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=6 ec=54/41 lis/c=89/54 les/c/f=90/55/0 sis=91) [1] r=0 lpr=91 pi=[54,91)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v204: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 82 B/s, 3 objects/s recovering
Oct 02 11:35:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_11:35:42
Oct 02 11:35:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:35:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Some PGs (0.003279) are inactive; try again later
Oct 02 11:35:42 compute-0 charming_allen[98866]: {
Oct 02 11:35:42 compute-0 charming_allen[98866]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 11:35:42 compute-0 charming_allen[98866]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:35:42 compute-0 charming_allen[98866]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:35:42 compute-0 charming_allen[98866]:         "osd_id": 1,
Oct 02 11:35:42 compute-0 charming_allen[98866]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:35:42 compute-0 charming_allen[98866]:         "type": "bluestore"
Oct 02 11:35:42 compute-0 charming_allen[98866]:     }
Oct 02 11:35:42 compute-0 charming_allen[98866]: }
Oct 02 11:35:42 compute-0 systemd[1]: libpod-715f1779797af685303cefc6320008248936227eed82cfff29372ff8ea1e62dd.scope: Deactivated successfully.
Oct 02 11:35:42 compute-0 podman[98849]: 2025-10-02 11:35:42.353392371 +0000 UTC m=+1.036308915 container died 715f1779797af685303cefc6320008248936227eed82cfff29372ff8ea1e62dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:35:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-92f14a07f4a161307b06970c8044fdb544ef34a09581e48ecb6882a91395159f-merged.mount: Deactivated successfully.
Oct 02 11:35:42 compute-0 podman[98849]: 2025-10-02 11:35:42.418218993 +0000 UTC m=+1.101135527 container remove 715f1779797af685303cefc6320008248936227eed82cfff29372ff8ea1e62dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_allen, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 11:35:42 compute-0 systemd[1]: libpod-conmon-715f1779797af685303cefc6320008248936227eed82cfff29372ff8ea1e62dd.scope: Deactivated successfully.
Oct 02 11:35:42 compute-0 sudo[98742]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:35:42 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:35:42 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:42 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev a55bd68b-37ab-4309-af8f-2aee3bda5e74 does not exist
Oct 02 11:35:42 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 309b44c5-7e76-4174-842a-38a6a5458c35 does not exist
Oct 02 11:35:42 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 1a8aab9e-c717-422c-94a2-f2b57ab02283 does not exist
Oct 02 11:35:42 compute-0 sudo[98902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:42 compute-0 sudo[98902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:42 compute-0 sudo[98902]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:42 compute-0 sudo[98927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:35:42 compute-0 sudo[98927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:42 compute-0 sudo[98927]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:35:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:35:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Oct 02 11:35:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:35:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:35:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:35:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:35:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:35:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:35:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:35:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:35:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:35:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:35:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:35:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:35:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:35:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:35:42 compute-0 ceph-mon[73607]: 10.9 scrub starts
Oct 02 11:35:42 compute-0 ceph-mon[73607]: 10.9 scrub ok
Oct 02 11:35:42 compute-0 ceph-mon[73607]: osdmap e91: 3 total, 3 up, 3 in
Oct 02 11:35:42 compute-0 ceph-mon[73607]: pgmap v204: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 82 B/s, 3 objects/s recovering
Oct 02 11:35:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:35:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:42.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Oct 02 11:35:42 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Oct 02 11:35:42 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 92 pg[9.11( v 47'1065 (0'0,47'1065] local-lis/les=91/92 n=6 ec=54/41 lis/c=89/54 les/c/f=90/55/0 sis=91) [1] r=0 lpr=91 pi=[54,91)/1 crt=47'1065 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:43.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:43 compute-0 ceph-mon[73607]: osdmap e92: 3 total, 3 up, 3 in
Oct 02 11:35:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v206: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 70 B/s, 2 objects/s recovering
Oct 02 11:35:44 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 10.a scrub starts
Oct 02 11:35:44 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 10.a scrub ok
Oct 02 11:35:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:35:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:44.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:45 compute-0 ceph-mon[73607]: 8.13 scrub starts
Oct 02 11:35:45 compute-0 ceph-mon[73607]: 8.13 scrub ok
Oct 02 11:35:45 compute-0 ceph-mon[73607]: pgmap v206: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 70 B/s, 2 objects/s recovering
Oct 02 11:35:45 compute-0 ceph-mon[73607]: 10.a scrub starts
Oct 02 11:35:45 compute-0 ceph-mon[73607]: 10.a scrub ok
Oct 02 11:35:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:45.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v207: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Oct 02 11:35:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Oct 02 11:35:46 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 02 11:35:46 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 10.b scrub starts
Oct 02 11:35:46 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 10.b scrub ok
Oct 02 11:35:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Oct 02 11:35:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:46.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:46 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 02 11:35:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Oct 02 11:35:46 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Oct 02 11:35:47 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 10.c scrub starts
Oct 02 11:35:47 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 93 pg[9.12( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=93) [1] r=0 lpr=93 pi=[54,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:47 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 10.c scrub ok
Oct 02 11:35:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:47.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:47 compute-0 ceph-mon[73607]: 8.1a scrub starts
Oct 02 11:35:47 compute-0 ceph-mon[73607]: 8.1a scrub ok
Oct 02 11:35:47 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 02 11:35:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Oct 02 11:35:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Oct 02 11:35:47 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Oct 02 11:35:47 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 94 pg[9.12( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[54,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:47 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 94 pg[9.12( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[54,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:35:48 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 10.d scrub starts
Oct 02 11:35:48 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 10.d scrub ok
Oct 02 11:35:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v210: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Oct 02 11:35:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Oct 02 11:35:48 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 02 11:35:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:48.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:48 compute-0 ceph-mon[73607]: 9.4 scrub starts
Oct 02 11:35:48 compute-0 ceph-mon[73607]: 9.4 scrub ok
Oct 02 11:35:48 compute-0 ceph-mon[73607]: pgmap v207: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Oct 02 11:35:48 compute-0 ceph-mon[73607]: 10.b scrub starts
Oct 02 11:35:48 compute-0 ceph-mon[73607]: 10.b scrub ok
Oct 02 11:35:48 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 02 11:35:48 compute-0 ceph-mon[73607]: osdmap e93: 3 total, 3 up, 3 in
Oct 02 11:35:48 compute-0 ceph-mon[73607]: 8.1d scrub starts
Oct 02 11:35:48 compute-0 ceph-mon[73607]: 8.1d scrub ok
Oct 02 11:35:48 compute-0 ceph-mon[73607]: 10.c scrub starts
Oct 02 11:35:48 compute-0 ceph-mon[73607]: 10.c scrub ok
Oct 02 11:35:48 compute-0 ceph-mon[73607]: 4.1d deep-scrub starts
Oct 02 11:35:48 compute-0 ceph-mon[73607]: 4.1d deep-scrub ok
Oct 02 11:35:48 compute-0 ceph-mon[73607]: osdmap e94: 3 total, 3 up, 3 in
Oct 02 11:35:48 compute-0 ceph-mon[73607]: pgmap v210: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Oct 02 11:35:48 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 02 11:35:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Oct 02 11:35:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:49.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:50 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 10.e deep-scrub starts
Oct 02 11:35:50 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 10.e deep-scrub ok
Oct 02 11:35:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v211: 305 pgs: 1 remapped+peering, 304 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 14 B/s, 0 objects/s recovering
Oct 02 11:35:50 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 02 11:35:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Oct 02 11:35:50 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Oct 02 11:35:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:50.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:51 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Oct 02 11:35:51 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Oct 02 11:35:51 compute-0 ceph-mon[73607]: 10.d scrub starts
Oct 02 11:35:51 compute-0 ceph-mon[73607]: 10.d scrub ok
Oct 02 11:35:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:51.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:52 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Oct 02 11:35:52 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Oct 02 11:35:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 1 remapped+peering, 304 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:35:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Oct 02 11:35:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:52.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:35:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:53.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:35:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Oct 02 11:35:53 compute-0 ceph-mon[73607]: 8.1e scrub starts
Oct 02 11:35:53 compute-0 ceph-mon[73607]: 8.1e scrub ok
Oct 02 11:35:53 compute-0 ceph-mon[73607]: 10.e deep-scrub starts
Oct 02 11:35:53 compute-0 ceph-mon[73607]: 10.e deep-scrub ok
Oct 02 11:35:53 compute-0 ceph-mon[73607]: pgmap v211: 305 pgs: 1 remapped+peering, 304 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 14 B/s, 0 objects/s recovering
Oct 02 11:35:53 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 02 11:35:53 compute-0 ceph-mon[73607]: osdmap e95: 3 total, 3 up, 3 in
Oct 02 11:35:53 compute-0 ceph-mon[73607]: 9.c scrub starts
Oct 02 11:35:53 compute-0 ceph-mon[73607]: 10.16 scrub starts
Oct 02 11:35:53 compute-0 ceph-mon[73607]: 9.c scrub ok
Oct 02 11:35:53 compute-0 ceph-mon[73607]: 10.16 scrub ok
Oct 02 11:35:53 compute-0 ceph-mon[73607]: 9.14 scrub starts
Oct 02 11:35:53 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 96 pg[9.12( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=5 ec=54/41 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 luod=0'0 crt=47'1065 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:35:53 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 96 pg[9.12( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=5 ec=54/41 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:35:53 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Oct 02 11:35:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:35:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:35:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:35:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:35:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:35:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:35:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:35:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:35:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:35:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:35:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:35:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:35:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:35:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:35:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:35:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:35:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 11:35:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:35:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:35:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:35:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:35:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:35:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:35:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 1 active+clean+scrubbing, 1 remapped+peering, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:35:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:35:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Oct 02 11:35:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:54.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:54 compute-0 ceph-mon[73607]: 10.17 scrub starts
Oct 02 11:35:54 compute-0 ceph-mon[73607]: 10.17 scrub ok
Oct 02 11:35:54 compute-0 ceph-mon[73607]: pgmap v213: 305 pgs: 1 remapped+peering, 304 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:35:54 compute-0 ceph-mon[73607]: 9.14 scrub ok
Oct 02 11:35:54 compute-0 ceph-mon[73607]: 9.1c scrub starts
Oct 02 11:35:54 compute-0 ceph-mon[73607]: 9.1c scrub ok
Oct 02 11:35:54 compute-0 ceph-mon[73607]: osdmap e96: 3 total, 3 up, 3 in
Oct 02 11:35:54 compute-0 ceph-mon[73607]: pgmap v215: 305 pgs: 1 active+clean+scrubbing, 1 remapped+peering, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:35:55 compute-0 sshd-session[98958]: Accepted publickey for zuul from 192.168.122.30 port 36308 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 11:35:55 compute-0 systemd-logind[789]: New session 35 of user zuul.
Oct 02 11:35:55 compute-0 systemd[1]: Started Session 35 of User zuul.
Oct 02 11:35:55 compute-0 sshd-session[98958]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:35:55 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Oct 02 11:35:55 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Oct 02 11:35:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:55.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Oct 02 11:35:55 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Oct 02 11:35:55 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 97 pg[9.12( v 47'1065 (0'0,47'1065] local-lis/les=96/97 n=5 ec=54/41 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=47'1065 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:35:55 compute-0 ceph-mon[73607]: 11.2 scrub starts
Oct 02 11:35:55 compute-0 ceph-mon[73607]: 11.2 scrub ok
Oct 02 11:35:55 compute-0 ceph-mon[73607]: 11.6 scrub starts
Oct 02 11:35:55 compute-0 ceph-mon[73607]: 11.6 scrub ok
Oct 02 11:35:55 compute-0 ceph-mon[73607]: 10.1a scrub starts
Oct 02 11:35:55 compute-0 ceph-mon[73607]: 10.1a scrub ok
Oct 02 11:35:55 compute-0 ceph-mon[73607]: osdmap e97: 3 total, 3 up, 3 in
Oct 02 11:35:56 compute-0 python3.9[99111]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:35:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v217: 305 pgs: 1 peering, 1 active+clean+scrubbing, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Oct 02 11:35:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:56.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:56 compute-0 ceph-mon[73607]: pgmap v217: 305 pgs: 1 peering, 1 active+clean+scrubbing, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Oct 02 11:35:56 compute-0 ceph-mon[73607]: 11.9 scrub starts
Oct 02 11:35:56 compute-0 ceph-mon[73607]: 11.9 scrub ok
Oct 02 11:35:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:57.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:57 compute-0 sudo[99199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:57 compute-0 sudo[99199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:57 compute-0 sudo[99199]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:57 compute-0 sudo[99224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:35:57 compute-0 sudo[99224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:35:57 compute-0 sudo[99224]: pam_unix(sudo:session): session closed for user root
Oct 02 11:35:58 compute-0 ceph-mon[73607]: 8.16 scrub starts
Oct 02 11:35:58 compute-0 ceph-mon[73607]: 8.16 scrub ok
Oct 02 11:35:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v218: 305 pgs: 1 peering, 1 active+clean+scrubbing, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 14 B/s, 0 objects/s recovering
Oct 02 11:35:58 compute-0 sudo[99375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iahkmlxnhmtawufwyltlrhgqcyvqgwyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404958.2408926-61-251594893059764/AnsiballZ_command.py'
Oct 02 11:35:58 compute-0 sudo[99375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:35:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:35:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:58.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:35:58 compute-0 python3.9[99377]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:35:59 compute-0 ceph-mon[73607]: pgmap v218: 305 pgs: 1 peering, 1 active+clean+scrubbing, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 14 B/s, 0 objects/s recovering
Oct 02 11:35:59 compute-0 ceph-mon[73607]: 11.b scrub starts
Oct 02 11:35:59 compute-0 ceph-mon[73607]: 11.b scrub ok
Oct 02 11:35:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:35:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:35:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:35:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:59.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:36:00 compute-0 ceph-mon[73607]: 11.c scrub starts
Oct 02 11:36:00 compute-0 ceph-mon[73607]: 11.c scrub ok
Oct 02 11:36:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v219: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Oct 02 11:36:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Oct 02 11:36:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 02 11:36:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:00.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Oct 02 11:36:01 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 10.1c deep-scrub starts
Oct 02 11:36:01 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 10.1c deep-scrub ok
Oct 02 11:36:01 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 02 11:36:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Oct 02 11:36:01 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Oct 02 11:36:01 compute-0 ceph-mon[73607]: pgmap v219: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Oct 02 11:36:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 02 11:36:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:01.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v221: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Oct 02 11:36:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Oct 02 11:36:02 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 02 11:36:02 compute-0 ceph-mon[73607]: 10.1c deep-scrub starts
Oct 02 11:36:02 compute-0 ceph-mon[73607]: 10.1c deep-scrub ok
Oct 02 11:36:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 02 11:36:02 compute-0 ceph-mon[73607]: osdmap e98: 3 total, 3 up, 3 in
Oct 02 11:36:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Oct 02 11:36:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:02.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:02 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct 02 11:36:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Oct 02 11:36:03 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Oct 02 11:36:03 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Oct 02 11:36:03 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=67/67 les/c/f=68/68/0 sis=99) [1] r=0 lpr=99 pi=[67,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:36:03 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Oct 02 11:36:03 compute-0 ceph-mon[73607]: pgmap v221: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Oct 02 11:36:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 02 11:36:03 compute-0 ceph-mon[73607]: 11.d scrub starts
Oct 02 11:36:03 compute-0 ceph-mon[73607]: 11.d scrub ok
Oct 02 11:36:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct 02 11:36:03 compute-0 ceph-mon[73607]: osdmap e99: 3 total, 3 up, 3 in
Oct 02 11:36:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:03.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Oct 02 11:36:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Oct 02 11:36:03 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Oct 02 11:36:03 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[2] r=-1 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:36:03 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[2] r=-1 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:36:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v224: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:36:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Oct 02 11:36:04 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 02 11:36:04 compute-0 ceph-mon[73607]: 10.1d scrub starts
Oct 02 11:36:04 compute-0 ceph-mon[73607]: 10.1d scrub ok
Oct 02 11:36:04 compute-0 ceph-mon[73607]: osdmap e100: 3 total, 3 up, 3 in
Oct 02 11:36:04 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 02 11:36:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:36:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:36:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:04.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:36:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Oct 02 11:36:04 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 02 11:36:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Oct 02 11:36:04 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Oct 02 11:36:05 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 10.1f deep-scrub starts
Oct 02 11:36:05 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 101 pg[9.16( v 47'1065 (0'0,47'1065] local-lis/les=69/70 n=5 ec=54/41 lis/c=69/69 les/c/f=70/70/0 sis=101 pruub=13.597920418s) [2] r=-1 lpr=101 pi=[69,101)/1 crt=47'1065 mlcod 0'0 active pruub 200.120315552s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:36:05 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 101 pg[9.16( v 47'1065 (0'0,47'1065] local-lis/les=69/70 n=5 ec=54/41 lis/c=69/69 les/c/f=70/70/0 sis=101 pruub=13.597702026s) [2] r=-1 lpr=101 pi=[69,101)/1 crt=47'1065 mlcod 0'0 unknown NOTIFY pruub 200.120315552s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:36:05 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 10.1f deep-scrub ok
Oct 02 11:36:05 compute-0 sshd-session[99406]: Invalid user hpc-riscv from 167.99.55.34 port 55770
Oct 02 11:36:05 compute-0 sshd-session[99406]: Connection closed by invalid user hpc-riscv 167.99.55.34 port 55770 [preauth]
Oct 02 11:36:05 compute-0 ceph-mon[73607]: pgmap v224: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:36:05 compute-0 ceph-mon[73607]: 11.10 deep-scrub starts
Oct 02 11:36:05 compute-0 ceph-mon[73607]: 11.10 deep-scrub ok
Oct 02 11:36:05 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 02 11:36:05 compute-0 ceph-mon[73607]: osdmap e101: 3 total, 3 up, 3 in
Oct 02 11:36:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:05.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Oct 02 11:36:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Oct 02 11:36:06 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Oct 02 11:36:06 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 102 pg[9.16( v 47'1065 (0'0,47'1065] local-lis/les=69/70 n=5 ec=54/41 lis/c=69/69 les/c/f=70/70/0 sis=102) [2]/[1] r=0 lpr=102 pi=[69,102)/1 crt=47'1065 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:36:06 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 102 pg[9.15( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=5 ec=54/41 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 luod=0'0 crt=47'1065 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:36:06 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 102 pg[9.16( v 47'1065 (0'0,47'1065] local-lis/les=69/70 n=5 ec=54/41 lis/c=69/69 les/c/f=70/70/0 sis=102) [2]/[1] r=0 lpr=102 pi=[69,102)/1 crt=47'1065 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:36:06 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 102 pg[9.15( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=5 ec=54/41 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:36:06 compute-0 sudo[99375]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:36:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Oct 02 11:36:06 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 02 11:36:06 compute-0 sshd-session[98961]: Connection closed by 192.168.122.30 port 36308
Oct 02 11:36:06 compute-0 sshd-session[98958]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:36:06 compute-0 systemd-logind[789]: Session 35 logged out. Waiting for processes to exit.
Oct 02 11:36:06 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Oct 02 11:36:06 compute-0 systemd[1]: session-35.scope: Consumed 8.084s CPU time.
Oct 02 11:36:06 compute-0 systemd-logind[789]: Removed session 35.
Oct 02 11:36:06 compute-0 ceph-mon[73607]: 10.1f deep-scrub starts
Oct 02 11:36:06 compute-0 ceph-mon[73607]: 10.1f deep-scrub ok
Oct 02 11:36:06 compute-0 ceph-mon[73607]: 11.11 scrub starts
Oct 02 11:36:06 compute-0 ceph-mon[73607]: 11.11 scrub ok
Oct 02 11:36:06 compute-0 ceph-mon[73607]: osdmap e102: 3 total, 3 up, 3 in
Oct 02 11:36:06 compute-0 ceph-mon[73607]: pgmap v227: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:36:06 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 02 11:36:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:06.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Oct 02 11:36:07 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 02 11:36:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Oct 02 11:36:07 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Oct 02 11:36:07 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 103 pg[9.15( v 47'1065 (0'0,47'1065] local-lis/les=102/103 n=5 ec=54/41 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=47'1065 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:36:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:07.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:07 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 103 pg[9.16( v 47'1065 (0'0,47'1065] local-lis/les=102/103 n=5 ec=54/41 lis/c=69/69 les/c/f=70/70/0 sis=102) [2]/[1] async=[2] r=0 lpr=102 pi=[69,102)/1 crt=47'1065 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:36:07 compute-0 ceph-mon[73607]: 8.6 scrub starts
Oct 02 11:36:07 compute-0 ceph-mon[73607]: 8.6 scrub ok
Oct 02 11:36:07 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 02 11:36:07 compute-0 ceph-mon[73607]: osdmap e103: 3 total, 3 up, 3 in
Oct 02 11:36:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:36:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Oct 02 11:36:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 02 11:36:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:36:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:08.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:36:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Oct 02 11:36:08 compute-0 ceph-mon[73607]: 4.14 deep-scrub starts
Oct 02 11:36:08 compute-0 ceph-mon[73607]: 4.14 deep-scrub ok
Oct 02 11:36:08 compute-0 ceph-mon[73607]: pgmap v229: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:36:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 02 11:36:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 02 11:36:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Oct 02 11:36:08 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Oct 02 11:36:08 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 104 pg[9.16( v 47'1065 (0'0,47'1065] local-lis/les=102/103 n=5 ec=54/41 lis/c=102/69 les/c/f=103/70/0 sis=104 pruub=14.686717987s) [2] async=[2] r=-1 lpr=104 pi=[69,104)/1 crt=47'1065 mlcod 47'1065 active pruub 204.966400146s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:36:08 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 104 pg[9.16( v 47'1065 (0'0,47'1065] local-lis/les=102/103 n=5 ec=54/41 lis/c=102/69 les/c/f=103/70/0 sis=104 pruub=14.686654091s) [2] r=-1 lpr=104 pi=[69,104)/1 crt=47'1065 mlcod 0'0 unknown NOTIFY pruub 204.966400146s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:36:09 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Oct 02 11:36:09 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Oct 02 11:36:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:36:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:09.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Oct 02 11:36:09 compute-0 ceph-mon[73607]: 8.5 scrub starts
Oct 02 11:36:09 compute-0 ceph-mon[73607]: 8.5 scrub ok
Oct 02 11:36:09 compute-0 ceph-mon[73607]: 11.15 scrub starts
Oct 02 11:36:09 compute-0 ceph-mon[73607]: 11.15 scrub ok
Oct 02 11:36:09 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 02 11:36:09 compute-0 ceph-mon[73607]: osdmap e104: 3 total, 3 up, 3 in
Oct 02 11:36:09 compute-0 ceph-mon[73607]: 8.10 scrub starts
Oct 02 11:36:09 compute-0 ceph-mon[73607]: 8.10 scrub ok
Oct 02 11:36:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Oct 02 11:36:10 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Oct 02 11:36:10 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Oct 02 11:36:10 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Oct 02 11:36:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 26 B/s, 0 objects/s recovering
Oct 02 11:36:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Oct 02 11:36:10 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 02 11:36:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:10.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Oct 02 11:36:11 compute-0 ceph-mon[73607]: 4.9 deep-scrub starts
Oct 02 11:36:11 compute-0 ceph-mon[73607]: 4.9 deep-scrub ok
Oct 02 11:36:11 compute-0 ceph-mon[73607]: osdmap e105: 3 total, 3 up, 3 in
Oct 02 11:36:11 compute-0 ceph-mon[73607]: 11.1e scrub starts
Oct 02 11:36:11 compute-0 ceph-mon[73607]: 11.1e scrub ok
Oct 02 11:36:11 compute-0 ceph-mon[73607]: pgmap v232: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 26 B/s, 0 objects/s recovering
Oct 02 11:36:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 02 11:36:11 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 02 11:36:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Oct 02 11:36:11 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Oct 02 11:36:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:11.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Oct 02 11:36:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Oct 02 11:36:12 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Oct 02 11:36:12 compute-0 ceph-mon[73607]: 11.17 scrub starts
Oct 02 11:36:12 compute-0 ceph-mon[73607]: 11.17 scrub ok
Oct 02 11:36:12 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 02 11:36:12 compute-0 ceph-mon[73607]: osdmap e106: 3 total, 3 up, 3 in
Oct 02 11:36:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 0 objects/s recovering
Oct 02 11:36:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Oct 02 11:36:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 02 11:36:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:36:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:36:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:36:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:36:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:36:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:36:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:12.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Oct 02 11:36:13 compute-0 ceph-mon[73607]: 11.16 scrub starts
Oct 02 11:36:13 compute-0 ceph-mon[73607]: 11.16 scrub ok
Oct 02 11:36:13 compute-0 ceph-mon[73607]: osdmap e107: 3 total, 3 up, 3 in
Oct 02 11:36:13 compute-0 ceph-mon[73607]: pgmap v235: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 0 objects/s recovering
Oct 02 11:36:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 02 11:36:13 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 02 11:36:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Oct 02 11:36:13 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Oct 02 11:36:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 108 pg[9.1a( v 47'1065 (0'0,47'1065] local-lis/les=80/81 n=5 ec=54/41 lis/c=80/80 les/c/f=81/81/0 sis=108 pruub=12.382143021s) [0] r=-1 lpr=108 pi=[80,108)/1 crt=47'1065 mlcod 0'0 active pruub 207.268783569s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:36:13 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 108 pg[9.1a( v 47'1065 (0'0,47'1065] local-lis/les=80/81 n=5 ec=54/41 lis/c=80/80 les/c/f=81/81/0 sis=108 pruub=12.381981850s) [0] r=-1 lpr=108 pi=[80,108)/1 crt=47'1065 mlcod 0'0 unknown NOTIFY pruub 207.268783569s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:36:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:13.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 1 unknown, 1 remapped+peering, 1 active+clean+scrubbing, 302 active+clean; 456 KiB data, 140 MiB used, 21 GiB / 21 GiB avail; 25 B/s, 0 objects/s recovering
Oct 02 11:36:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Oct 02 11:36:14 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 02 11:36:14 compute-0 ceph-mon[73607]: osdmap e108: 3 total, 3 up, 3 in
Oct 02 11:36:14 compute-0 ceph-mon[73607]: 11.18 scrub starts
Oct 02 11:36:14 compute-0 ceph-mon[73607]: 11.18 scrub ok
Oct 02 11:36:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:14.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Oct 02 11:36:15 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Oct 02 11:36:15 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Oct 02 11:36:15 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 109 pg[9.1a( v 47'1065 (0'0,47'1065] local-lis/les=80/81 n=5 ec=54/41 lis/c=80/80 les/c/f=81/81/0 sis=109) [0]/[1] r=0 lpr=109 pi=[80,109)/1 crt=47'1065 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:36:15 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 109 pg[9.1a( v 47'1065 (0'0,47'1065] local-lis/les=80/81 n=5 ec=54/41 lis/c=80/80 les/c/f=81/81/0 sis=109) [0]/[1] r=0 lpr=109 pi=[80,109)/1 crt=47'1065 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:36:15 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Oct 02 11:36:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:36:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:15.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:36:15 compute-0 ceph-mon[73607]: 4.19 scrub starts
Oct 02 11:36:15 compute-0 ceph-mon[73607]: 4.19 scrub ok
Oct 02 11:36:15 compute-0 ceph-mon[73607]: pgmap v237: 305 pgs: 1 unknown, 1 remapped+peering, 1 active+clean+scrubbing, 302 active+clean; 456 KiB data, 140 MiB used, 21 GiB / 21 GiB avail; 25 B/s, 0 objects/s recovering
Oct 02 11:36:15 compute-0 ceph-mon[73607]: 8.3 scrub starts
Oct 02 11:36:15 compute-0 ceph-mon[73607]: 8.3 scrub ok
Oct 02 11:36:15 compute-0 ceph-mon[73607]: 11.1d scrub starts
Oct 02 11:36:15 compute-0 ceph-mon[73607]: osdmap e109: 3 total, 3 up, 3 in
Oct 02 11:36:15 compute-0 ceph-mon[73607]: 11.1d scrub ok
Oct 02 11:36:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Oct 02 11:36:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Oct 02 11:36:16 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Oct 02 11:36:16 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 110 pg[9.1a( v 47'1065 (0'0,47'1065] local-lis/les=109/110 n=5 ec=54/41 lis/c=80/80 les/c/f=81/81/0 sis=109) [0]/[1] async=[0] r=0 lpr=109 pi=[80,109)/1 crt=47'1065 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:36:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 1 unknown, 1 remapped+peering, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:36:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:16.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Oct 02 11:36:17 compute-0 ceph-mon[73607]: osdmap e110: 3 total, 3 up, 3 in
Oct 02 11:36:17 compute-0 ceph-mon[73607]: pgmap v240: 305 pgs: 1 unknown, 1 remapped+peering, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:36:17 compute-0 ceph-mon[73607]: 8.2 scrub starts
Oct 02 11:36:17 compute-0 ceph-mon[73607]: 8.2 scrub ok
Oct 02 11:36:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Oct 02 11:36:17 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Oct 02 11:36:17 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 111 pg[9.1a( v 47'1065 (0'0,47'1065] local-lis/les=109/110 n=5 ec=54/41 lis/c=109/80 les/c/f=110/81/0 sis=111 pruub=14.952245712s) [0] async=[0] r=-1 lpr=111 pi=[80,111)/1 crt=47'1065 mlcod 47'1065 active pruub 213.606338501s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:36:17 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 111 pg[9.1a( v 47'1065 (0'0,47'1065] local-lis/les=109/110 n=5 ec=54/41 lis/c=109/80 les/c/f=110/81/0 sis=111 pruub=14.952158928s) [0] r=-1 lpr=111 pi=[80,111)/1 crt=47'1065 mlcod 0'0 unknown NOTIFY pruub 213.606338501s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:36:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:17.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:17 compute-0 sudo[99446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:36:17 compute-0 sudo[99446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:36:17 compute-0 sudo[99446]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:17 compute-0 sudo[99471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:36:17 compute-0 sudo[99471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:36:17 compute-0 sudo[99471]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:18 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Oct 02 11:36:18 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Oct 02 11:36:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Oct 02 11:36:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 1 unknown, 1 remapped+peering, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:36:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Oct 02 11:36:18 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Oct 02 11:36:18 compute-0 ceph-mon[73607]: osdmap e111: 3 total, 3 up, 3 in
Oct 02 11:36:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:18.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:19 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Oct 02 11:36:19 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Oct 02 11:36:19 compute-0 ceph-mon[73607]: 11.1c scrub starts
Oct 02 11:36:19 compute-0 ceph-mon[73607]: 11.1c scrub ok
Oct 02 11:36:19 compute-0 ceph-mon[73607]: pgmap v242: 305 pgs: 1 unknown, 1 remapped+peering, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:36:19 compute-0 ceph-mon[73607]: osdmap e112: 3 total, 3 up, 3 in
Oct 02 11:36:19 compute-0 ceph-mon[73607]: 8.15 scrub starts
Oct 02 11:36:19 compute-0 ceph-mon[73607]: 8.15 scrub ok
Oct 02 11:36:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:36:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:19.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 8.3 KiB/s rd, 0 B/s wr, 15 op/s; 65 B/s, 2 objects/s recovering
Oct 02 11:36:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Oct 02 11:36:20 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 02 11:36:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Oct 02 11:36:20 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 02 11:36:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Oct 02 11:36:20 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Oct 02 11:36:20 compute-0 ceph-mon[73607]: 4.13 scrub starts
Oct 02 11:36:20 compute-0 ceph-mon[73607]: 4.13 scrub ok
Oct 02 11:36:20 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 02 11:36:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:36:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:20.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:36:21 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Oct 02 11:36:21 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Oct 02 11:36:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Oct 02 11:36:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Oct 02 11:36:21 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Oct 02 11:36:21 compute-0 ceph-mon[73607]: pgmap v244: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 8.3 KiB/s rd, 0 B/s wr, 15 op/s; 65 B/s, 2 objects/s recovering
Oct 02 11:36:21 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 02 11:36:21 compute-0 ceph-mon[73607]: osdmap e113: 3 total, 3 up, 3 in
Oct 02 11:36:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:21.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:21 compute-0 sshd-session[99498]: Accepted publickey for zuul from 192.168.122.30 port 32780 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 11:36:21 compute-0 systemd-logind[789]: New session 36 of user zuul.
Oct 02 11:36:21 compute-0 systemd[1]: Started Session 36 of User zuul.
Oct 02 11:36:21 compute-0 sshd-session[99498]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:36:22 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Oct 02 11:36:22 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Oct 02 11:36:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 8.4 KiB/s rd, 0 B/s wr, 15 op/s; 65 B/s, 2 objects/s recovering
Oct 02 11:36:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Oct 02 11:36:22 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 02 11:36:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Oct 02 11:36:22 compute-0 ceph-mon[73607]: 8.18 scrub starts
Oct 02 11:36:22 compute-0 ceph-mon[73607]: 8.18 scrub ok
Oct 02 11:36:22 compute-0 ceph-mon[73607]: osdmap e114: 3 total, 3 up, 3 in
Oct 02 11:36:22 compute-0 ceph-mon[73607]: 11.19 scrub starts
Oct 02 11:36:22 compute-0 ceph-mon[73607]: 11.19 scrub ok
Oct 02 11:36:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 02 11:36:22 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 02 11:36:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Oct 02 11:36:22 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Oct 02 11:36:22 compute-0 python3.9[99652]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 02 11:36:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:36:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:22.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:36:23 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Oct 02 11:36:23 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Oct 02 11:36:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Oct 02 11:36:23 compute-0 ceph-mon[73607]: 11.1b scrub starts
Oct 02 11:36:23 compute-0 ceph-mon[73607]: 11.1b scrub ok
Oct 02 11:36:23 compute-0 ceph-mon[73607]: pgmap v247: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 8.4 KiB/s rd, 0 B/s wr, 15 op/s; 65 B/s, 2 objects/s recovering
Oct 02 11:36:23 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 02 11:36:23 compute-0 ceph-mon[73607]: osdmap e115: 3 total, 3 up, 3 in
Oct 02 11:36:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Oct 02 11:36:23 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Oct 02 11:36:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:23.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:23 compute-0 python3.9[99826]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:36:24 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 6.a deep-scrub starts
Oct 02 11:36:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 0 objects/s recovering
Oct 02 11:36:24 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 6.a deep-scrub ok
Oct 02 11:36:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:36:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Oct 02 11:36:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Oct 02 11:36:24 compute-0 ceph-mon[73607]: 8.1b scrub starts
Oct 02 11:36:24 compute-0 ceph-mon[73607]: 8.1b scrub ok
Oct 02 11:36:24 compute-0 ceph-mon[73607]: osdmap e116: 3 total, 3 up, 3 in
Oct 02 11:36:24 compute-0 ceph-mon[73607]: pgmap v250: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 0 objects/s recovering
Oct 02 11:36:24 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Oct 02 11:36:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:24.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:24 compute-0 sudo[99981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apffrefhgszqishrqoynkhmrirucacwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404984.447863-98-26932285248046/AnsiballZ_command.py'
Oct 02 11:36:24 compute-0 sudo[99981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:36:25 compute-0 python3.9[99983]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:36:25 compute-0 sudo[99981]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:25 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 11.7 deep-scrub starts
Oct 02 11:36:25 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 11.7 deep-scrub ok
Oct 02 11:36:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:25.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:25 compute-0 ceph-mon[73607]: 6.a deep-scrub starts
Oct 02 11:36:25 compute-0 ceph-mon[73607]: 6.a deep-scrub ok
Oct 02 11:36:25 compute-0 ceph-mon[73607]: osdmap e117: 3 total, 3 up, 3 in
Oct 02 11:36:25 compute-0 ceph-mon[73607]: 11.1f scrub starts
Oct 02 11:36:25 compute-0 ceph-mon[73607]: 11.1f scrub ok
Oct 02 11:36:25 compute-0 sudo[100134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibktwugqpcbmyjgaffkypcjjmcyalgwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404985.5675998-134-276036964800175/AnsiballZ_stat.py'
Oct 02 11:36:25 compute-0 sudo[100134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:36:26 compute-0 python3.9[100136]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:36:26 compute-0 sudo[100134]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 23 B/s, 0 objects/s recovering
Oct 02 11:36:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:26.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:26 compute-0 ceph-mon[73607]: 11.7 deep-scrub starts
Oct 02 11:36:26 compute-0 ceph-mon[73607]: 11.7 deep-scrub ok
Oct 02 11:36:26 compute-0 ceph-mon[73607]: 5.1e scrub starts
Oct 02 11:36:26 compute-0 ceph-mon[73607]: 5.1e scrub ok
Oct 02 11:36:26 compute-0 ceph-mon[73607]: pgmap v252: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 23 B/s, 0 objects/s recovering
Oct 02 11:36:26 compute-0 ceph-mon[73607]: 8.f scrub starts
Oct 02 11:36:26 compute-0 ceph-mon[73607]: 8.f scrub ok
Oct 02 11:36:27 compute-0 sudo[100289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coqtbgjutobgnmhbvwpskfvemwnzwxpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404986.6583414-167-183731442067839/AnsiballZ_file.py'
Oct 02 11:36:27 compute-0 sudo[100289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:36:27 compute-0 python3.9[100291]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:36:27 compute-0 sudo[100289]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:27 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Oct 02 11:36:27 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Oct 02 11:36:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:27.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:28 compute-0 ceph-mon[73607]: 8.4 scrub starts
Oct 02 11:36:28 compute-0 ceph-mon[73607]: 8.4 scrub ok
Oct 02 11:36:28 compute-0 ceph-mon[73607]: 5.14 scrub starts
Oct 02 11:36:28 compute-0 ceph-mon[73607]: 5.14 scrub ok
Oct 02 11:36:28 compute-0 python3.9[100441]: ansible-ansible.builtin.service_facts Invoked
Oct 02 11:36:28 compute-0 network[100458]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 11:36:28 compute-0 network[100459]: 'network-scripts' will be removed from distribution in near future.
Oct 02 11:36:28 compute-0 network[100460]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 11:36:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 2 B/s, 0 objects/s recovering
Oct 02 11:36:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:28.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:29 compute-0 ceph-mon[73607]: pgmap v253: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 2 B/s, 0 objects/s recovering
Oct 02 11:36:29 compute-0 ceph-mon[73607]: 5.6 scrub starts
Oct 02 11:36:29 compute-0 ceph-mon[73607]: 5.6 scrub ok
Oct 02 11:36:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:36:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:29.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:36:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Oct 02 11:36:30 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 02 11:36:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Oct 02 11:36:30 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 02 11:36:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Oct 02 11:36:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 02 11:36:30 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Oct 02 11:36:30 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 118 pg[9.1d( v 47'1065 (0'0,47'1065] local-lis/les=85/86 n=5 ec=54/41 lis/c=85/85 les/c/f=86/86/0 sis=118 pruub=10.285161018s) [2] r=-1 lpr=118 pi=[85,118)/1 crt=47'1065 mlcod 0'0 active pruub 222.037567139s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:36:30 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 118 pg[9.1d( v 47'1065 (0'0,47'1065] local-lis/les=85/86 n=5 ec=54/41 lis/c=85/85 les/c/f=86/86/0 sis=118 pruub=10.284435272s) [2] r=-1 lpr=118 pi=[85,118)/1 crt=47'1065 mlcod 0'0 unknown NOTIFY pruub 222.037567139s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:36:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:30.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Oct 02 11:36:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Oct 02 11:36:31 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Oct 02 11:36:31 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 119 pg[9.1d( v 47'1065 (0'0,47'1065] local-lis/les=85/86 n=5 ec=54/41 lis/c=85/85 les/c/f=86/86/0 sis=119) [2]/[1] r=0 lpr=119 pi=[85,119)/1 crt=47'1065 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:36:31 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 119 pg[9.1d( v 47'1065 (0'0,47'1065] local-lis/les=85/86 n=5 ec=54/41 lis/c=85/85 les/c/f=86/86/0 sis=119) [2]/[1] r=0 lpr=119 pi=[85,119)/1 crt=47'1065 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:36:31 compute-0 ceph-mon[73607]: pgmap v254: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:36:31 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 02 11:36:31 compute-0 ceph-mon[73607]: osdmap e118: 3 total, 3 up, 3 in
Oct 02 11:36:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:31.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:36:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Oct 02 11:36:32 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 02 11:36:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Oct 02 11:36:32 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 02 11:36:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Oct 02 11:36:32 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Oct 02 11:36:32 compute-0 ceph-mon[73607]: osdmap e119: 3 total, 3 up, 3 in
Oct 02 11:36:32 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 02 11:36:32 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 120 pg[9.1e( v 47'1065 (0'0,47'1065] local-lis/les=69/70 n=5 ec=54/41 lis/c=69/69 les/c/f=70/70/0 sis=120 pruub=10.294061661s) [0] r=-1 lpr=120 pi=[69,120)/1 crt=47'1065 mlcod 0'0 active pruub 224.126708984s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:36:32 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 120 pg[9.1e( v 47'1065 (0'0,47'1065] local-lis/les=69/70 n=5 ec=54/41 lis/c=69/69 les/c/f=70/70/0 sis=120 pruub=10.294014931s) [0] r=-1 lpr=120 pi=[69,120)/1 crt=47'1065 mlcod 0'0 unknown NOTIFY pruub 224.126708984s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:36:32 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 120 pg[9.1d( v 47'1065 (0'0,47'1065] local-lis/les=119/120 n=5 ec=54/41 lis/c=85/85 les/c/f=86/86/0 sis=119) [2]/[1] async=[2] r=0 lpr=119 pi=[85,119)/1 crt=47'1065 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:36:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:32.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:33 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Oct 02 11:36:33 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Oct 02 11:36:33 compute-0 python3.9[100726]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:36:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Oct 02 11:36:33 compute-0 ceph-mon[73607]: pgmap v257: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:36:33 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 02 11:36:33 compute-0 ceph-mon[73607]: osdmap e120: 3 total, 3 up, 3 in
Oct 02 11:36:33 compute-0 ceph-mon[73607]: 6.1 scrub starts
Oct 02 11:36:33 compute-0 ceph-mon[73607]: 6.1 scrub ok
Oct 02 11:36:33 compute-0 ceph-mon[73607]: 5.5 scrub starts
Oct 02 11:36:33 compute-0 ceph-mon[73607]: 5.5 scrub ok
Oct 02 11:36:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Oct 02 11:36:33 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Oct 02 11:36:33 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 121 pg[9.1d( v 47'1065 (0'0,47'1065] local-lis/les=119/120 n=5 ec=54/41 lis/c=119/85 les/c/f=120/86/0 sis=121 pruub=14.912616730s) [2] async=[2] r=-1 lpr=121 pi=[85,121)/1 crt=47'1065 mlcod 47'1065 active pruub 229.842941284s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:36:33 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 121 pg[9.1e( v 47'1065 (0'0,47'1065] local-lis/les=69/70 n=5 ec=54/41 lis/c=69/69 les/c/f=70/70/0 sis=121) [0]/[1] r=0 lpr=121 pi=[69,121)/1 crt=47'1065 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:36:33 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 121 pg[9.1e( v 47'1065 (0'0,47'1065] local-lis/les=69/70 n=5 ec=54/41 lis/c=69/69 les/c/f=70/70/0 sis=121) [0]/[1] r=0 lpr=121 pi=[69,121)/1 crt=47'1065 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:36:33 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 121 pg[9.1d( v 47'1065 (0'0,47'1065] local-lis/les=119/120 n=5 ec=54/41 lis/c=119/85 les/c/f=120/86/0 sis=121 pruub=14.912532806s) [2] r=-1 lpr=121 pi=[85,121)/1 crt=47'1065 mlcod 0'0 unknown NOTIFY pruub 229.842941284s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:36:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:33.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:36:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 02 11:36:34 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:36:34 compute-0 python3.9[100876]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:36:34 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Oct 02 11:36:34 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Oct 02 11:36:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Oct 02 11:36:34 compute-0 ceph-mon[73607]: 11.4 scrub starts
Oct 02 11:36:34 compute-0 ceph-mon[73607]: 11.4 scrub ok
Oct 02 11:36:34 compute-0 ceph-mon[73607]: osdmap e121: 3 total, 3 up, 3 in
Oct 02 11:36:34 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:36:34 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:36:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Oct 02 11:36:34 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Oct 02 11:36:34 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 122 pg[9.1f( v 47'1065 (0'0,47'1065] local-lis/les=88/89 n=5 ec=54/41 lis/c=88/88 les/c/f=89/89/0 sis=122 pruub=8.847887039s) [0] r=-1 lpr=122 pi=[88,122)/1 crt=47'1065 mlcod 0'0 active pruub 225.001129150s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:36:34 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 122 pg[9.1f( v 47'1065 (0'0,47'1065] local-lis/les=88/89 n=5 ec=54/41 lis/c=88/88 les/c/f=89/89/0 sis=122 pruub=8.847837448s) [0] r=-1 lpr=122 pi=[88,122)/1 crt=47'1065 mlcod 0'0 unknown NOTIFY pruub 225.001129150s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:36:34 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 122 pg[9.1e( v 47'1065 (0'0,47'1065] local-lis/les=121/122 n=5 ec=54/41 lis/c=69/69 les/c/f=70/70/0 sis=121) [0]/[1] async=[0] r=0 lpr=121 pi=[69,121)/1 crt=47'1065 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:36:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:34.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:35.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:35 compute-0 ceph-mon[73607]: pgmap v260: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:36:35 compute-0 ceph-mon[73607]: 11.5 scrub starts
Oct 02 11:36:35 compute-0 ceph-mon[73607]: 11.5 scrub ok
Oct 02 11:36:35 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:36:35 compute-0 ceph-mon[73607]: 5.3 scrub starts
Oct 02 11:36:35 compute-0 ceph-mon[73607]: 5.3 scrub ok
Oct 02 11:36:35 compute-0 ceph-mon[73607]: osdmap e122: 3 total, 3 up, 3 in
Oct 02 11:36:35 compute-0 python3.9[101031]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:36:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Oct 02 11:36:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Oct 02 11:36:35 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Oct 02 11:36:35 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 123 pg[9.1e( v 47'1065 (0'0,47'1065] local-lis/les=121/122 n=5 ec=54/41 lis/c=121/69 les/c/f=122/70/0 sis=123 pruub=14.975830078s) [0] async=[0] r=-1 lpr=123 pi=[69,123)/1 crt=47'1065 mlcod 47'1065 active pruub 232.167419434s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:36:35 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 123 pg[9.1e( v 47'1065 (0'0,47'1065] local-lis/les=121/122 n=5 ec=54/41 lis/c=121/69 les/c/f=122/70/0 sis=123 pruub=14.975744247s) [0] r=-1 lpr=123 pi=[69,123)/1 crt=47'1065 mlcod 0'0 unknown NOTIFY pruub 232.167419434s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:36:35 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 123 pg[9.1f( v 47'1065 (0'0,47'1065] local-lis/les=88/89 n=5 ec=54/41 lis/c=88/88 les/c/f=89/89/0 sis=123) [0]/[1] r=0 lpr=123 pi=[88,123)/1 crt=47'1065 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:36:35 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 123 pg[9.1f( v 47'1065 (0'0,47'1065] local-lis/les=88/89 n=5 ec=54/41 lis/c=88/88 les/c/f=89/89/0 sis=123) [0]/[1] r=0 lpr=123 pi=[88,123)/1 crt=47'1065 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:36:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 1 objects/s recovering
Oct 02 11:36:36 compute-0 sudo[101188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjpwueaozeorzjcmwykkigwmyuygejzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404996.1509264-311-133533357667956/AnsiballZ_setup.py'
Oct 02 11:36:36 compute-0 sudo[101188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:36:36 compute-0 python3.9[101190]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:36:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Oct 02 11:36:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Oct 02 11:36:36 compute-0 ceph-mon[73607]: osdmap e123: 3 total, 3 up, 3 in
Oct 02 11:36:36 compute-0 ceph-mon[73607]: pgmap v263: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 1 objects/s recovering
Oct 02 11:36:36 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Oct 02 11:36:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:36:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:36.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:36:36 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 124 pg[9.1f( v 47'1065 (0'0,47'1065] local-lis/les=123/124 n=5 ec=54/41 lis/c=88/88 les/c/f=89/89/0 sis=123) [0]/[1] async=[0] r=0 lpr=123 pi=[88,123)/1 crt=47'1065 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:36:36 compute-0 sudo[101188]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:37 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 5.15 deep-scrub starts
Oct 02 11:36:37 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 5.15 deep-scrub ok
Oct 02 11:36:37 compute-0 sudo[101272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-keuwxhebsxdphvkraibozlzborutvpki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404996.1509264-311-133533357667956/AnsiballZ_dnf.py'
Oct 02 11:36:37 compute-0 sudo[101272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:36:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:36:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:37.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:36:37 compute-0 python3.9[101274]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:36:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Oct 02 11:36:37 compute-0 ceph-mon[73607]: osdmap e124: 3 total, 3 up, 3 in
Oct 02 11:36:37 compute-0 ceph-mon[73607]: 5.15 deep-scrub starts
Oct 02 11:36:37 compute-0 ceph-mon[73607]: 5.15 deep-scrub ok
Oct 02 11:36:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Oct 02 11:36:37 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Oct 02 11:36:37 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 125 pg[9.1f( v 47'1065 (0'0,47'1065] local-lis/les=123/124 n=5 ec=54/41 lis/c=123/88 les/c/f=124/89/0 sis=125 pruub=14.936563492s) [0] async=[0] r=-1 lpr=125 pi=[88,125)/1 crt=47'1065 mlcod 47'1065 active pruub 234.229125977s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:36:37 compute-0 ceph-osd[83986]: osd.1 pg_epoch: 125 pg[9.1f( v 47'1065 (0'0,47'1065] local-lis/les=123/124 n=5 ec=54/41 lis/c=123/88 les/c/f=124/89/0 sis=125 pruub=14.936487198s) [0] r=-1 lpr=125 pi=[88,125)/1 crt=47'1065 mlcod 0'0 unknown NOTIFY pruub 234.229125977s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:36:37 compute-0 sudo[101279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:36:37 compute-0 sudo[101279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:36:37 compute-0 sudo[101279]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:38 compute-0 sudo[101304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:36:38 compute-0 sudo[101304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:36:38 compute-0 sudo[101304]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Oct 02 11:36:38 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Oct 02 11:36:38 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Oct 02 11:36:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:38.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Oct 02 11:36:38 compute-0 ceph-mon[73607]: osdmap e125: 3 total, 3 up, 3 in
Oct 02 11:36:38 compute-0 ceph-mon[73607]: pgmap v266: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Oct 02 11:36:38 compute-0 ceph-mon[73607]: 5.10 scrub starts
Oct 02 11:36:38 compute-0 ceph-mon[73607]: 5.10 scrub ok
Oct 02 11:36:38 compute-0 ceph-mon[73607]: 4.6 deep-scrub starts
Oct 02 11:36:38 compute-0 ceph-mon[73607]: 4.6 deep-scrub ok
Oct 02 11:36:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Oct 02 11:36:39 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Oct 02 11:36:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:36:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:39.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:40 compute-0 ceph-mon[73607]: osdmap e126: 3 total, 3 up, 3 in
Oct 02 11:36:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 73 B/s, 2 objects/s recovering
Oct 02 11:36:40 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Oct 02 11:36:40 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Oct 02 11:36:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:40.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:41 compute-0 ceph-mon[73607]: pgmap v268: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 73 B/s, 2 objects/s recovering
Oct 02 11:36:41 compute-0 ceph-mon[73607]: 5.11 scrub starts
Oct 02 11:36:41 compute-0 ceph-mon[73607]: 11.e scrub starts
Oct 02 11:36:41 compute-0 ceph-mon[73607]: 5.11 scrub ok
Oct 02 11:36:41 compute-0 ceph-mon[73607]: 11.e scrub ok
Oct 02 11:36:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:41.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_11:36:42
Oct 02 11:36:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:36:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 11:36:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'volumes', 'vms', 'default.rgw.control', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', '.mgr']
Oct 02 11:36:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:36:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 1 objects/s recovering
Oct 02 11:36:42 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Oct 02 11:36:42 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Oct 02 11:36:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:36:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:36:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:36:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:36:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:36:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:36:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:36:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:36:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:36:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:36:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:36:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:36:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:36:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:36:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:36:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:36:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:36:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:42.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:36:42 compute-0 sudo[101391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:36:42 compute-0 sudo[101391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:36:42 compute-0 sudo[101391]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:42 compute-0 sudo[101416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:36:42 compute-0 sudo[101416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:36:42 compute-0 sudo[101416]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:43 compute-0 sudo[101442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:36:43 compute-0 sudo[101442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:36:43 compute-0 sudo[101442]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:43 compute-0 sudo[101467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 11:36:43 compute-0 sudo[101467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:36:43 compute-0 ceph-mon[73607]: pgmap v269: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 1 objects/s recovering
Oct 02 11:36:43 compute-0 ceph-mon[73607]: 5.16 scrub starts
Oct 02 11:36:43 compute-0 ceph-mon[73607]: 5.16 scrub ok
Oct 02 11:36:43 compute-0 podman[101564]: 2025-10-02 11:36:43.598516372 +0000 UTC m=+0.111973930 container exec 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:36:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:43.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:43 compute-0 podman[101564]: 2025-10-02 11:36:43.705275262 +0000 UTC m=+0.218732820 container exec_died 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 11:36:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:36:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:36:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:36:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:36:44 compute-0 podman[101704]: 2025-10-02 11:36:44.259115478 +0000 UTC m=+0.112938355 container exec 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 11:36:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 44 B/s, 1 objects/s recovering
Oct 02 11:36:44 compute-0 podman[101726]: 2025-10-02 11:36:44.324972656 +0000 UTC m=+0.051723160 container exec_died 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 11:36:44 compute-0 podman[101704]: 2025-10-02 11:36:44.333567388 +0000 UTC m=+0.187390265 container exec_died 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 11:36:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:36:44 compute-0 podman[101769]: 2025-10-02 11:36:44.727507109 +0000 UTC m=+0.250326500 container exec a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, version=2.2.4, io.buildah.version=1.28.2, vendor=Red Hat, Inc., description=keepalived for Ceph, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, architecture=x86_64)
Oct 02 11:36:44 compute-0 podman[101790]: 2025-10-02 11:36:44.817997187 +0000 UTC m=+0.060634990 container exec_died a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, com.redhat.component=keepalived-container, description=keepalived for Ceph, vcs-type=git, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, name=keepalived, vendor=Red Hat, Inc., version=2.2.4, architecture=x86_64, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Oct 02 11:36:44 compute-0 podman[101769]: 2025-10-02 11:36:44.824017486 +0000 UTC m=+0.346836857 container exec_died a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, io.buildah.version=1.28.2, vcs-type=git, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, build-date=2023-02-22T09:23:20, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., release=1793)
Oct 02 11:36:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:36:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:44.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:36:44 compute-0 sudo[101467]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:36:45 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:36:45 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:36:45 compute-0 ceph-mon[73607]: pgmap v270: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 44 B/s, 1 objects/s recovering
Oct 02 11:36:45 compute-0 ceph-mon[73607]: 8.9 scrub starts
Oct 02 11:36:45 compute-0 ceph-mon[73607]: 8.9 scrub ok
Oct 02 11:36:45 compute-0 ceph-mon[73607]: 5.a scrub starts
Oct 02 11:36:45 compute-0 ceph-mon[73607]: 5.a scrub ok
Oct 02 11:36:45 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:36:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:36:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:36:45 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:36:45 compute-0 sudo[101819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:36:45 compute-0 sudo[101819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:36:45 compute-0 sudo[101819]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:45 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:36:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:36:45 compute-0 sudo[101844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:36:45 compute-0 sudo[101844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:36:45 compute-0 sudo[101844]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:36:45.331209) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405005331305, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7510, "num_deletes": 251, "total_data_size": 9841908, "memory_usage": 10038896, "flush_reason": "Manual Compaction"}
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Oct 02 11:36:45 compute-0 sudo[101869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:36:45 compute-0 sudo[101869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:36:45 compute-0 sudo[101869]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405005378086, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7974308, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 151, "largest_seqno": 7652, "table_properties": {"data_size": 7946379, "index_size": 18406, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 78881, "raw_average_key_size": 23, "raw_value_size": 7880824, "raw_average_value_size": 2337, "num_data_blocks": 813, "num_entries": 3372, "num_filter_entries": 3372, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404649, "oldest_key_time": 1759404649, "file_creation_time": 1759405005, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 47121 microseconds, and 14970 cpu microseconds.
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:36:45.378342) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7974308 bytes OK
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:36:45.378426) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:36:45.379412) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:36:45.379432) EVENT_LOG_v1 {"time_micros": 1759405005379428, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct 02 11:36:45 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:36:45.379455) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9809031, prev total WAL file size 9849414, number of live WAL files 2.
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:36:45.381794) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7787KB) 13(53KB) 8(1944B)]
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405005381909, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 8031248, "oldest_snapshot_seqno": -1}
Oct 02 11:36:45 compute-0 sudo[101895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:36:45 compute-0 sudo[101895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3185 keys, 7986603 bytes, temperature: kUnknown
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405005427630, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7986603, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7959097, "index_size": 18436, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8005, "raw_key_size": 76800, "raw_average_key_size": 24, "raw_value_size": 7895256, "raw_average_value_size": 2478, "num_data_blocks": 817, "num_entries": 3185, "num_filter_entries": 3185, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759405005, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:36:45.427893) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7986603 bytes
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:36:45.429400) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.4 rd, 174.4 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.7, 0.0 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3478, records dropped: 293 output_compression: NoCompression
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:36:45.429429) EVENT_LOG_v1 {"time_micros": 1759405005429419, "job": 4, "event": "compaction_finished", "compaction_time_micros": 45794, "compaction_time_cpu_micros": 16387, "output_level": 6, "num_output_files": 1, "total_output_size": 7986603, "num_input_records": 3478, "num_output_records": 3185, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405005430706, "job": 4, "event": "table_file_deletion", "file_number": 19}
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405005430792, "job": 4, "event": "table_file_deletion", "file_number": 13}
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405005430872, "job": 4, "event": "table_file_deletion", "file_number": 8}
Oct 02 11:36:45 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:36:45.381724) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:36:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:45.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:45 compute-0 sudo[101895]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:36:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:36:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:36:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:36:46 compute-0 ceph-mon[73607]: 11.a scrub starts
Oct 02 11:36:46 compute-0 ceph-mon[73607]: 11.a scrub ok
Oct 02 11:36:46 compute-0 ceph-mon[73607]: 5.17 scrub starts
Oct 02 11:36:46 compute-0 ceph-mon[73607]: 5.17 scrub ok
Oct 02 11:36:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:36:46 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:36:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:36:46 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:36:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:36:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Oct 02 11:36:46 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:36:46 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev bb75cee9-2560-4849-88d5-64b0f9fdad0e does not exist
Oct 02 11:36:46 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 38b3cc18-4f4f-483e-a607-948e5f340499 does not exist
Oct 02 11:36:46 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 47b537b9-f12e-4036-a405-ba1ee79215b7 does not exist
Oct 02 11:36:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:36:46 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:36:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:36:46 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:36:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:36:46 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:36:46 compute-0 sudo[101952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:36:46 compute-0 sudo[101952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:36:46 compute-0 sudo[101952]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:46 compute-0 sudo[101977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:36:46 compute-0 sudo[101977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:36:46 compute-0 sudo[101977]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:46 compute-0 sudo[102002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:36:46 compute-0 sudo[102002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:36:46 compute-0 sudo[102002]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:46 compute-0 sudo[102027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:36:46 compute-0 sudo[102027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:36:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:36:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:46.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:36:46 compute-0 podman[102092]: 2025-10-02 11:36:46.877658647 +0000 UTC m=+0.045464164 container create ec5783342549c7daba4bde5f932db7f7c73733b502458027f3b134b139af6407 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wescoff, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 11:36:46 compute-0 systemd[1]: Started libpod-conmon-ec5783342549c7daba4bde5f932db7f7c73733b502458027f3b134b139af6407.scope.
Oct 02 11:36:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:36:46 compute-0 podman[102092]: 2025-10-02 11:36:46.945483495 +0000 UTC m=+0.113289032 container init ec5783342549c7daba4bde5f932db7f7c73733b502458027f3b134b139af6407 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wescoff, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 11:36:46 compute-0 podman[102092]: 2025-10-02 11:36:46.953666647 +0000 UTC m=+0.121472164 container start ec5783342549c7daba4bde5f932db7f7c73733b502458027f3b134b139af6407 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 11:36:46 compute-0 podman[102092]: 2025-10-02 11:36:46.95699114 +0000 UTC m=+0.124796657 container attach ec5783342549c7daba4bde5f932db7f7c73733b502458027f3b134b139af6407 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wescoff, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 11:36:46 compute-0 serene_wescoff[102108]: 167 167
Oct 02 11:36:46 compute-0 podman[102092]: 2025-10-02 11:36:46.860550255 +0000 UTC m=+0.028355792 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:36:46 compute-0 podman[102092]: 2025-10-02 11:36:46.958878556 +0000 UTC m=+0.126684073 container died ec5783342549c7daba4bde5f932db7f7c73733b502458027f3b134b139af6407 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wescoff, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 11:36:46 compute-0 systemd[1]: libpod-ec5783342549c7daba4bde5f932db7f7c73733b502458027f3b134b139af6407.scope: Deactivated successfully.
Oct 02 11:36:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a4caaa31cd02d9b672055ca80059eeddde1feae1b34945ef6f116b301be41e0-merged.mount: Deactivated successfully.
Oct 02 11:36:47 compute-0 podman[102092]: 2025-10-02 11:36:47.01040462 +0000 UTC m=+0.178210137 container remove ec5783342549c7daba4bde5f932db7f7c73733b502458027f3b134b139af6407 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 11:36:47 compute-0 systemd[1]: libpod-conmon-ec5783342549c7daba4bde5f932db7f7c73733b502458027f3b134b139af6407.scope: Deactivated successfully.
Oct 02 11:36:47 compute-0 podman[102133]: 2025-10-02 11:36:47.152782851 +0000 UTC m=+0.033480289 container create 991a6aad5ec5c4dc2255df86b2011496b7f7ef8e1a4e19c01b9911f949792c02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:36:47 compute-0 systemd[1]: Started libpod-conmon-991a6aad5ec5c4dc2255df86b2011496b7f7ef8e1a4e19c01b9911f949792c02.scope.
Oct 02 11:36:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:36:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9a8c915474f2864cfbcc781ddfd41e0e2aacb74d044139c31dfa7cbefeefba0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:36:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9a8c915474f2864cfbcc781ddfd41e0e2aacb74d044139c31dfa7cbefeefba0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:36:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9a8c915474f2864cfbcc781ddfd41e0e2aacb74d044139c31dfa7cbefeefba0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:36:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9a8c915474f2864cfbcc781ddfd41e0e2aacb74d044139c31dfa7cbefeefba0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:36:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9a8c915474f2864cfbcc781ddfd41e0e2aacb74d044139c31dfa7cbefeefba0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:36:47 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:36:47 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:36:47 compute-0 ceph-mon[73607]: pgmap v271: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Oct 02 11:36:47 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:36:47 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:36:47 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:36:47 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:36:47 compute-0 podman[102133]: 2025-10-02 11:36:47.137593695 +0000 UTC m=+0.018291153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:36:47 compute-0 podman[102133]: 2025-10-02 11:36:47.241959916 +0000 UTC m=+0.122657364 container init 991a6aad5ec5c4dc2255df86b2011496b7f7ef8e1a4e19c01b9911f949792c02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 11:36:47 compute-0 podman[102133]: 2025-10-02 11:36:47.248504668 +0000 UTC m=+0.129202106 container start 991a6aad5ec5c4dc2255df86b2011496b7f7ef8e1a4e19c01b9911f949792c02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Oct 02 11:36:47 compute-0 podman[102133]: 2025-10-02 11:36:47.25185427 +0000 UTC m=+0.132551738 container attach 991a6aad5ec5c4dc2255df86b2011496b7f7ef8e1a4e19c01b9911f949792c02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_matsumoto, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:36:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:47.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:48 compute-0 kind_matsumoto[102149]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:36:48 compute-0 kind_matsumoto[102149]: --> relative data size: 1.0
Oct 02 11:36:48 compute-0 kind_matsumoto[102149]: --> All data devices are unavailable
Oct 02 11:36:48 compute-0 systemd[1]: libpod-991a6aad5ec5c4dc2255df86b2011496b7f7ef8e1a4e19c01b9911f949792c02.scope: Deactivated successfully.
Oct 02 11:36:48 compute-0 podman[102133]: 2025-10-02 11:36:48.038098863 +0000 UTC m=+0.918796301 container died 991a6aad5ec5c4dc2255df86b2011496b7f7ef8e1a4e19c01b9911f949792c02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_matsumoto, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 11:36:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Oct 02 11:36:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9a8c915474f2864cfbcc781ddfd41e0e2aacb74d044139c31dfa7cbefeefba0-merged.mount: Deactivated successfully.
Oct 02 11:36:48 compute-0 systemd[75232]: Created slice User Background Tasks Slice.
Oct 02 11:36:48 compute-0 systemd[75232]: Starting Cleanup of User's Temporary Files and Directories...
Oct 02 11:36:48 compute-0 ceph-mon[73607]: 8.d scrub starts
Oct 02 11:36:48 compute-0 ceph-mon[73607]: 8.d scrub ok
Oct 02 11:36:48 compute-0 systemd[75232]: Finished Cleanup of User's Temporary Files and Directories.
Oct 02 11:36:48 compute-0 podman[102133]: 2025-10-02 11:36:48.521932406 +0000 UTC m=+1.402629844 container remove 991a6aad5ec5c4dc2255df86b2011496b7f7ef8e1a4e19c01b9911f949792c02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 11:36:48 compute-0 sudo[102027]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:48 compute-0 systemd[1]: libpod-conmon-991a6aad5ec5c4dc2255df86b2011496b7f7ef8e1a4e19c01b9911f949792c02.scope: Deactivated successfully.
Oct 02 11:36:48 compute-0 sudo[102178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:36:48 compute-0 sudo[102178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:36:48 compute-0 sudo[102178]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:48 compute-0 sudo[102203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:36:48 compute-0 sudo[102203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:36:48 compute-0 sudo[102203]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:48 compute-0 sudo[102228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:36:48 compute-0 sudo[102228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:36:48 compute-0 sudo[102228]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:48 compute-0 sudo[102253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 11:36:48 compute-0 sudo[102253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:36:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:48.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:49 compute-0 podman[102318]: 2025-10-02 11:36:49.049322947 +0000 UTC m=+0.038337648 container create 7217e73c076919a00621ee27114f204c8b24fed0a02dfa71905fa7c4a4de5241 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:36:49 compute-0 systemd[1]: Started libpod-conmon-7217e73c076919a00621ee27114f204c8b24fed0a02dfa71905fa7c4a4de5241.scope.
Oct 02 11:36:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:36:49 compute-0 podman[102318]: 2025-10-02 11:36:49.114268894 +0000 UTC m=+0.103283615 container init 7217e73c076919a00621ee27114f204c8b24fed0a02dfa71905fa7c4a4de5241 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chebyshev, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:36:49 compute-0 podman[102318]: 2025-10-02 11:36:49.120399745 +0000 UTC m=+0.109414446 container start 7217e73c076919a00621ee27114f204c8b24fed0a02dfa71905fa7c4a4de5241 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chebyshev, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:36:49 compute-0 podman[102318]: 2025-10-02 11:36:49.123285837 +0000 UTC m=+0.112300558 container attach 7217e73c076919a00621ee27114f204c8b24fed0a02dfa71905fa7c4a4de5241 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chebyshev, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:36:49 compute-0 unruffled_chebyshev[102334]: 167 167
Oct 02 11:36:49 compute-0 podman[102318]: 2025-10-02 11:36:49.124302052 +0000 UTC m=+0.113316743 container died 7217e73c076919a00621ee27114f204c8b24fed0a02dfa71905fa7c4a4de5241 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 11:36:49 compute-0 systemd[1]: libpod-7217e73c076919a00621ee27114f204c8b24fed0a02dfa71905fa7c4a4de5241.scope: Deactivated successfully.
Oct 02 11:36:49 compute-0 podman[102318]: 2025-10-02 11:36:49.031314582 +0000 UTC m=+0.020329313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:36:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e6e0288d1614be0d9287df0278049192b5d73d050d88bb367f7cd8838de51eb-merged.mount: Deactivated successfully.
Oct 02 11:36:49 compute-0 podman[102318]: 2025-10-02 11:36:49.209461068 +0000 UTC m=+0.198475769 container remove 7217e73c076919a00621ee27114f204c8b24fed0a02dfa71905fa7c4a4de5241 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chebyshev, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:36:49 compute-0 systemd[1]: libpod-conmon-7217e73c076919a00621ee27114f204c8b24fed0a02dfa71905fa7c4a4de5241.scope: Deactivated successfully.
Oct 02 11:36:49 compute-0 podman[102358]: 2025-10-02 11:36:49.406078449 +0000 UTC m=+0.078112562 container create 31ad2d591db329268100d3047085139cdb06e753287d3f29ef20a712413ad68b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:36:49 compute-0 systemd[1]: Started libpod-conmon-31ad2d591db329268100d3047085139cdb06e753287d3f29ef20a712413ad68b.scope.
Oct 02 11:36:49 compute-0 podman[102358]: 2025-10-02 11:36:49.369505945 +0000 UTC m=+0.041540078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:36:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:36:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c3586db7653e6fd15a3d8349a3f1c16c26f5df34df470c7390c9cc831fd201/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:36:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c3586db7653e6fd15a3d8349a3f1c16c26f5df34df470c7390c9cc831fd201/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:36:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c3586db7653e6fd15a3d8349a3f1c16c26f5df34df470c7390c9cc831fd201/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:36:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c3586db7653e6fd15a3d8349a3f1c16c26f5df34df470c7390c9cc831fd201/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:36:49 compute-0 podman[102358]: 2025-10-02 11:36:49.504694377 +0000 UTC m=+0.176728510 container init 31ad2d591db329268100d3047085139cdb06e753287d3f29ef20a712413ad68b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:36:49 compute-0 podman[102358]: 2025-10-02 11:36:49.513948177 +0000 UTC m=+0.185982290 container start 31ad2d591db329268100d3047085139cdb06e753287d3f29ef20a712413ad68b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 11:36:49 compute-0 podman[102358]: 2025-10-02 11:36:49.5177128 +0000 UTC m=+0.189746933 container attach 31ad2d591db329268100d3047085139cdb06e753287d3f29ef20a712413ad68b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:36:49 compute-0 ceph-mon[73607]: pgmap v272: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Oct 02 11:36:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:36:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:49.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]: {
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]:     "1": [
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]:         {
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]:             "devices": [
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]:                 "/dev/loop3"
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]:             ],
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]:             "lv_name": "ceph_lv0",
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]:             "lv_size": "7511998464",
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]:             "name": "ceph_lv0",
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]:             "tags": {
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]:                 "ceph.cluster_name": "ceph",
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]:                 "ceph.crush_device_class": "",
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]:                 "ceph.encrypted": "0",
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]:                 "ceph.osd_id": "1",
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]:                 "ceph.type": "block",
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]:                 "ceph.vdo": "0"
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]:             },
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]:             "type": "block",
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]:             "vg_name": "ceph_vg0"
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]:         }
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]:     ]
Oct 02 11:36:50 compute-0 suspicious_wiles[102374]: }
Oct 02 11:36:50 compute-0 systemd[1]: libpod-31ad2d591db329268100d3047085139cdb06e753287d3f29ef20a712413ad68b.scope: Deactivated successfully.
Oct 02 11:36:50 compute-0 podman[102358]: 2025-10-02 11:36:50.260587219 +0000 UTC m=+0.932621332 container died 31ad2d591db329268100d3047085139cdb06e753287d3f29ef20a712413ad68b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 11:36:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Oct 02 11:36:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1c3586db7653e6fd15a3d8349a3f1c16c26f5df34df470c7390c9cc831fd201-merged.mount: Deactivated successfully.
Oct 02 11:36:50 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Oct 02 11:36:50 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Oct 02 11:36:50 compute-0 podman[102358]: 2025-10-02 11:36:50.809349639 +0000 UTC m=+1.481383752 container remove 31ad2d591db329268100d3047085139cdb06e753287d3f29ef20a712413ad68b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wiles, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 11:36:50 compute-0 systemd[1]: libpod-conmon-31ad2d591db329268100d3047085139cdb06e753287d3f29ef20a712413ad68b.scope: Deactivated successfully.
Oct 02 11:36:50 compute-0 sudo[102253]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:50 compute-0 ceph-mon[73607]: 5.19 scrub starts
Oct 02 11:36:50 compute-0 ceph-mon[73607]: 5.19 scrub ok
Oct 02 11:36:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:50.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:50 compute-0 sudo[102401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:36:50 compute-0 sudo[102401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:36:50 compute-0 sudo[102401]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:50 compute-0 sudo[102426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:36:50 compute-0 sudo[102426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:36:50 compute-0 sudo[102426]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:51 compute-0 sudo[102451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:36:51 compute-0 sudo[102451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:36:51 compute-0 sudo[102451]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:51 compute-0 sudo[102476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 11:36:51 compute-0 sudo[102476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:36:51 compute-0 podman[102541]: 2025-10-02 11:36:51.432279893 +0000 UTC m=+0.060597400 container create 70527812dad27b0d7dcb1e707850a2421139925a45e9367ddaa2e150573acb4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_yonath, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 11:36:51 compute-0 systemd[1]: Started libpod-conmon-70527812dad27b0d7dcb1e707850a2421139925a45e9367ddaa2e150573acb4e.scope.
Oct 02 11:36:51 compute-0 podman[102541]: 2025-10-02 11:36:51.393497803 +0000 UTC m=+0.021815330 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:36:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:36:51 compute-0 podman[102541]: 2025-10-02 11:36:51.520486844 +0000 UTC m=+0.148804371 container init 70527812dad27b0d7dcb1e707850a2421139925a45e9367ddaa2e150573acb4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_yonath, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 11:36:51 compute-0 podman[102541]: 2025-10-02 11:36:51.527088087 +0000 UTC m=+0.155405594 container start 70527812dad27b0d7dcb1e707850a2421139925a45e9367ddaa2e150573acb4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:36:51 compute-0 loving_yonath[102558]: 167 167
Oct 02 11:36:51 compute-0 systemd[1]: libpod-70527812dad27b0d7dcb1e707850a2421139925a45e9367ddaa2e150573acb4e.scope: Deactivated successfully.
Oct 02 11:36:51 compute-0 podman[102541]: 2025-10-02 11:36:51.572958122 +0000 UTC m=+0.201275659 container attach 70527812dad27b0d7dcb1e707850a2421139925a45e9367ddaa2e150573acb4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 11:36:51 compute-0 podman[102541]: 2025-10-02 11:36:51.574071538 +0000 UTC m=+0.202389045 container died 70527812dad27b0d7dcb1e707850a2421139925a45e9367ddaa2e150573acb4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_yonath, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 11:36:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a3657f5b0a66aa9576bf664eb9cbfac3a7251d3f9896f06cffdaf38f5a3526b-merged.mount: Deactivated successfully.
Oct 02 11:36:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:51.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:51 compute-0 podman[102541]: 2025-10-02 11:36:51.641678451 +0000 UTC m=+0.269995958 container remove 70527812dad27b0d7dcb1e707850a2421139925a45e9367ddaa2e150573acb4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_yonath, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:36:51 compute-0 systemd[1]: libpod-conmon-70527812dad27b0d7dcb1e707850a2421139925a45e9367ddaa2e150573acb4e.scope: Deactivated successfully.
Oct 02 11:36:51 compute-0 podman[102582]: 2025-10-02 11:36:51.799470082 +0000 UTC m=+0.047694500 container create 5d4e1442ec98f5e596a6051b06ecd52eed8e90057999ed88fe0ceb7b22ce430e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_engelbart, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct 02 11:36:51 compute-0 systemd[1]: Started libpod-conmon-5d4e1442ec98f5e596a6051b06ecd52eed8e90057999ed88fe0ceb7b22ce430e.scope.
Oct 02 11:36:51 compute-0 podman[102582]: 2025-10-02 11:36:51.779352315 +0000 UTC m=+0.027576763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:36:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:36:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c97e8866d1a0c6be00ca2b4637346b96ba7545fd7cef4be1a54b5615a6df17ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:36:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c97e8866d1a0c6be00ca2b4637346b96ba7545fd7cef4be1a54b5615a6df17ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:36:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c97e8866d1a0c6be00ca2b4637346b96ba7545fd7cef4be1a54b5615a6df17ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:36:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c97e8866d1a0c6be00ca2b4637346b96ba7545fd7cef4be1a54b5615a6df17ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:36:51 compute-0 podman[102582]: 2025-10-02 11:36:51.891873617 +0000 UTC m=+0.140098065 container init 5d4e1442ec98f5e596a6051b06ecd52eed8e90057999ed88fe0ceb7b22ce430e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_engelbart, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:36:51 compute-0 podman[102582]: 2025-10-02 11:36:51.89767364 +0000 UTC m=+0.145898068 container start 5d4e1442ec98f5e596a6051b06ecd52eed8e90057999ed88fe0ceb7b22ce430e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_engelbart, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 11:36:51 compute-0 podman[102582]: 2025-10-02 11:36:51.903378972 +0000 UTC m=+0.151603430 container attach 5d4e1442ec98f5e596a6051b06ecd52eed8e90057999ed88fe0ceb7b22ce430e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_engelbart, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 11:36:51 compute-0 ceph-mon[73607]: pgmap v273: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Oct 02 11:36:51 compute-0 ceph-mon[73607]: 5.9 scrub starts
Oct 02 11:36:51 compute-0 ceph-mon[73607]: 5.9 scrub ok
Oct 02 11:36:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:36:52 compute-0 epic_engelbart[102598]: {
Oct 02 11:36:52 compute-0 epic_engelbart[102598]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 11:36:52 compute-0 epic_engelbart[102598]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:36:52 compute-0 epic_engelbart[102598]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:36:52 compute-0 epic_engelbart[102598]:         "osd_id": 1,
Oct 02 11:36:52 compute-0 epic_engelbart[102598]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:36:52 compute-0 epic_engelbart[102598]:         "type": "bluestore"
Oct 02 11:36:52 compute-0 epic_engelbart[102598]:     }
Oct 02 11:36:52 compute-0 epic_engelbart[102598]: }
Oct 02 11:36:52 compute-0 systemd[1]: libpod-5d4e1442ec98f5e596a6051b06ecd52eed8e90057999ed88fe0ceb7b22ce430e.scope: Deactivated successfully.
Oct 02 11:36:52 compute-0 podman[102582]: 2025-10-02 11:36:52.745858284 +0000 UTC m=+0.994082742 container died 5d4e1442ec98f5e596a6051b06ecd52eed8e90057999ed88fe0ceb7b22ce430e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:36:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-c97e8866d1a0c6be00ca2b4637346b96ba7545fd7cef4be1a54b5615a6df17ab-merged.mount: Deactivated successfully.
Oct 02 11:36:52 compute-0 podman[102582]: 2025-10-02 11:36:52.827641807 +0000 UTC m=+1.075866235 container remove 5d4e1442ec98f5e596a6051b06ecd52eed8e90057999ed88fe0ceb7b22ce430e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_engelbart, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 11:36:52 compute-0 systemd[1]: libpod-conmon-5d4e1442ec98f5e596a6051b06ecd52eed8e90057999ed88fe0ceb7b22ce430e.scope: Deactivated successfully.
Oct 02 11:36:52 compute-0 sudo[102476]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:36:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:52.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:52 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:36:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:36:52 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:36:52 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev a66ce726-07eb-450b-bf30-33b97d265097 does not exist
Oct 02 11:36:52 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 63297439-ce3f-44b3-b7ee-94d0c2bfb886 does not exist
Oct 02 11:36:52 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 406f3515-f538-437c-a02b-0a45f4caed5a does not exist
Oct 02 11:36:52 compute-0 sudo[102634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:36:52 compute-0 sudo[102634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:36:52 compute-0 sudo[102634]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:53 compute-0 sudo[102660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:36:53 compute-0 sudo[102660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:36:53 compute-0 sudo[102660]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:53 compute-0 ceph-mon[73607]: pgmap v274: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:36:53 compute-0 ceph-mon[73607]: 8.c scrub starts
Oct 02 11:36:53 compute-0 ceph-mon[73607]: 8.c scrub ok
Oct 02 11:36:53 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:36:53 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:36:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:53.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:36:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:36:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:36:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:36:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:36:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:36:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:36:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:36:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:36:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:36:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:36:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:36:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:36:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:36:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:36:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:36:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 11:36:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:36:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:36:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:36:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:36:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:36:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:36:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:36:54 compute-0 ceph-mon[73607]: 8.b scrub starts
Oct 02 11:36:54 compute-0 ceph-mon[73607]: 8.b scrub ok
Oct 02 11:36:54 compute-0 ceph-mon[73607]: 5.1d scrub starts
Oct 02 11:36:54 compute-0 ceph-mon[73607]: 5.1d scrub ok
Oct 02 11:36:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:36:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:54.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:55 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Oct 02 11:36:55 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Oct 02 11:36:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:55.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:55 compute-0 ceph-mon[73607]: pgmap v275: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:36:55 compute-0 ceph-mon[73607]: 11.3 scrub starts
Oct 02 11:36:55 compute-0 ceph-mon[73607]: 11.3 scrub ok
Oct 02 11:36:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:36:56 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 4.a scrub starts
Oct 02 11:36:56 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 4.a scrub ok
Oct 02 11:36:56 compute-0 ceph-mon[73607]: 4.5 scrub starts
Oct 02 11:36:56 compute-0 ceph-mon[73607]: 4.5 scrub ok
Oct 02 11:36:56 compute-0 ceph-mon[73607]: 5.c scrub starts
Oct 02 11:36:56 compute-0 ceph-mon[73607]: 5.c scrub ok
Oct 02 11:36:56 compute-0 ceph-mon[73607]: pgmap v276: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:36:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:56.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:57 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 8.8 deep-scrub starts
Oct 02 11:36:57 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 8.8 deep-scrub ok
Oct 02 11:36:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:36:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:57.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:36:57 compute-0 ceph-mon[73607]: 4.a scrub starts
Oct 02 11:36:57 compute-0 ceph-mon[73607]: 4.a scrub ok
Oct 02 11:36:57 compute-0 ceph-mon[73607]: 4.8 scrub starts
Oct 02 11:36:57 compute-0 ceph-mon[73607]: 4.8 scrub ok
Oct 02 11:36:57 compute-0 ceph-mon[73607]: 8.a scrub starts
Oct 02 11:36:57 compute-0 ceph-mon[73607]: 8.a scrub ok
Oct 02 11:36:58 compute-0 sudo[102727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:36:58 compute-0 sudo[102727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:36:58 compute-0 sudo[102727]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:58 compute-0 sudo[102752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:36:58 compute-0 sudo[102752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:36:58 compute-0 sudo[102752]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:36:58 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Oct 02 11:36:58 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Oct 02 11:36:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:36:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:58.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:36:59 compute-0 ceph-mon[73607]: 8.8 deep-scrub starts
Oct 02 11:36:59 compute-0 ceph-mon[73607]: 8.8 deep-scrub ok
Oct 02 11:36:59 compute-0 ceph-mon[73607]: pgmap v277: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:36:59 compute-0 ceph-mon[73607]: 6.8 scrub starts
Oct 02 11:36:59 compute-0 ceph-mon[73607]: 6.8 scrub ok
Oct 02 11:36:59 compute-0 ceph-mon[73607]: 8.1c deep-scrub starts
Oct 02 11:36:59 compute-0 ceph-mon[73607]: 8.1c deep-scrub ok
Oct 02 11:36:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:36:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:36:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:36:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:59.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:37:00 compute-0 ceph-mon[73607]: 4.15 deep-scrub starts
Oct 02 11:37:00 compute-0 ceph-mon[73607]: 4.15 deep-scrub ok
Oct 02 11:37:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:00.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:01 compute-0 ceph-mon[73607]: pgmap v278: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:01 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Oct 02 11:37:01 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Oct 02 11:37:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:01.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:02 compute-0 ceph-mon[73607]: 6.7 scrub starts
Oct 02 11:37:02 compute-0 ceph-mon[73607]: 6.7 scrub ok
Oct 02 11:37:02 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 4.d scrub starts
Oct 02 11:37:02 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 4.d scrub ok
Oct 02 11:37:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:02.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:03 compute-0 ceph-mon[73607]: pgmap v279: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:03 compute-0 ceph-mon[73607]: 4.d scrub starts
Oct 02 11:37:03 compute-0 ceph-mon[73607]: 4.d scrub ok
Oct 02 11:37:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:03.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:04 compute-0 ceph-mon[73607]: 4.1f scrub starts
Oct 02 11:37:04 compute-0 ceph-mon[73607]: 4.1f scrub ok
Oct 02 11:37:04 compute-0 ceph-mon[73607]: 11.8 scrub starts
Oct 02 11:37:04 compute-0 ceph-mon[73607]: 11.8 scrub ok
Oct 02 11:37:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:37:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:04.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:05 compute-0 ceph-mon[73607]: pgmap v280: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:05 compute-0 ceph-mon[73607]: 10.13 scrub starts
Oct 02 11:37:05 compute-0 ceph-mon[73607]: 10.13 scrub ok
Oct 02 11:37:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:05.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:06.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:07 compute-0 ceph-mon[73607]: 7.1e scrub starts
Oct 02 11:37:07 compute-0 ceph-mon[73607]: 7.1e scrub ok
Oct 02 11:37:07 compute-0 ceph-mon[73607]: pgmap v281: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:07.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:08 compute-0 ceph-mon[73607]: 7.1d scrub starts
Oct 02 11:37:08 compute-0 ceph-mon[73607]: 7.1d scrub ok
Oct 02 11:37:08 compute-0 ceph-mon[73607]: 7.1b scrub starts
Oct 02 11:37:08 compute-0 ceph-mon[73607]: 7.1b scrub ok
Oct 02 11:37:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:37:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:08.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:37:09 compute-0 ceph-mon[73607]: pgmap v282: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:37:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:09.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:10 compute-0 ceph-mon[73607]: 10.10 scrub starts
Oct 02 11:37:10 compute-0 ceph-mon[73607]: 10.10 scrub ok
Oct 02 11:37:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:10.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:11 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Oct 02 11:37:11 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Oct 02 11:37:11 compute-0 ceph-mon[73607]: pgmap v283: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:11 compute-0 ceph-mon[73607]: 7.16 scrub starts
Oct 02 11:37:11 compute-0 ceph-mon[73607]: 7.16 scrub ok
Oct 02 11:37:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:37:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:11.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:37:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:37:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:37:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:37:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:37:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:37:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:37:12 compute-0 ceph-mon[73607]: 11.14 scrub starts
Oct 02 11:37:12 compute-0 ceph-mon[73607]: 11.14 scrub ok
Oct 02 11:37:12 compute-0 ceph-mon[73607]: pgmap v284: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:12.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:13.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:13 compute-0 ceph-mon[73607]: 10.18 scrub starts
Oct 02 11:37:13 compute-0 ceph-mon[73607]: 10.18 scrub ok
Oct 02 11:37:13 compute-0 ceph-mon[73607]: 7.a scrub starts
Oct 02 11:37:13 compute-0 ceph-mon[73607]: 7.a scrub ok
Oct 02 11:37:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:37:14 compute-0 ceph-mon[73607]: pgmap v285: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:37:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:14.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:37:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:37:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:15.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:37:15 compute-0 ceph-mon[73607]: 10.11 scrub starts
Oct 02 11:37:15 compute-0 ceph-mon[73607]: 10.11 scrub ok
Oct 02 11:37:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:16.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:16 compute-0 ceph-mon[73607]: 10.1b deep-scrub starts
Oct 02 11:37:16 compute-0 ceph-mon[73607]: 10.1b deep-scrub ok
Oct 02 11:37:16 compute-0 ceph-mon[73607]: pgmap v286: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:16 compute-0 ceph-mon[73607]: 10.1 scrub starts
Oct 02 11:37:16 compute-0 ceph-mon[73607]: 10.1 scrub ok
Oct 02 11:37:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:17.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:17 compute-0 ceph-mon[73607]: 10.4 scrub starts
Oct 02 11:37:17 compute-0 ceph-mon[73607]: 10.4 scrub ok
Oct 02 11:37:17 compute-0 ceph-mon[73607]: 10.14 scrub starts
Oct 02 11:37:17 compute-0 ceph-mon[73607]: 10.14 scrub ok
Oct 02 11:37:18 compute-0 sudo[102816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:37:18 compute-0 sudo[102816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:37:18 compute-0 sudo[102816]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:18 compute-0 sudo[102842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:37:18 compute-0 sudo[102842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:37:18 compute-0 sudo[102842]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:18 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Oct 02 11:37:18 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Oct 02 11:37:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:18.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:18 compute-0 ceph-mon[73607]: pgmap v287: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:18 compute-0 ceph-mon[73607]: 11.1 scrub starts
Oct 02 11:37:18 compute-0 ceph-mon[73607]: 11.1 scrub ok
Oct 02 11:37:18 compute-0 ceph-mon[73607]: 7.4 scrub starts
Oct 02 11:37:18 compute-0 ceph-mon[73607]: 7.4 scrub ok
Oct 02 11:37:19 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Oct 02 11:37:19 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Oct 02 11:37:19 compute-0 sudo[101272]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:37:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:19.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:20 compute-0 ceph-mon[73607]: 4.1b scrub starts
Oct 02 11:37:20 compute-0 ceph-mon[73607]: 4.1b scrub ok
Oct 02 11:37:20 compute-0 ceph-mon[73607]: 7.5 scrub starts
Oct 02 11:37:20 compute-0 ceph-mon[73607]: 7.5 scrub ok
Oct 02 11:37:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:20.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:21 compute-0 ceph-mon[73607]: pgmap v288: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:21.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:21 compute-0 sudo[103017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzugtdhqvhisruddnglsfaerywbfghhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405041.4999003-347-140572657988835/AnsiballZ_command.py'
Oct 02 11:37:21 compute-0 sudo[103017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:21 compute-0 python3.9[103019]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:37:22 compute-0 ceph-mon[73607]: 7.18 scrub starts
Oct 02 11:37:22 compute-0 ceph-mon[73607]: 7.18 scrub ok
Oct 02 11:37:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:22 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 11.f scrub starts
Oct 02 11:37:22 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 11.f scrub ok
Oct 02 11:37:22 compute-0 sudo[103017]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:22.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:23 compute-0 ceph-mon[73607]: pgmap v289: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:23 compute-0 ceph-mon[73607]: 7.14 deep-scrub starts
Oct 02 11:37:23 compute-0 ceph-mon[73607]: 7.14 deep-scrub ok
Oct 02 11:37:23 compute-0 ceph-mon[73607]: 11.f scrub starts
Oct 02 11:37:23 compute-0 ceph-mon[73607]: 11.f scrub ok
Oct 02 11:37:23 compute-0 ceph-mon[73607]: 7.e scrub starts
Oct 02 11:37:23 compute-0 ceph-mon[73607]: 7.e scrub ok
Oct 02 11:37:23 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Oct 02 11:37:23 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Oct 02 11:37:23 compute-0 sudo[103305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyicselwthaipsqcdpvpqaadzcfaxqph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405043.0576794-371-83566311349976/AnsiballZ_selinux.py'
Oct 02 11:37:23 compute-0 sudo[103305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:37:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:23.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:37:23 compute-0 python3.9[103307]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 02 11:37:23 compute-0 sudo[103305]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:24 compute-0 ceph-mon[73607]: 6.2 scrub starts
Oct 02 11:37:24 compute-0 ceph-mon[73607]: 6.2 scrub ok
Oct 02 11:37:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:37:24 compute-0 sudo[103458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udijdsiiyblgwxqonnkvtsrkoktollng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405044.4790046-404-241881384414167/AnsiballZ_command.py'
Oct 02 11:37:24 compute-0 sudo[103458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:24 compute-0 python3.9[103460]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 02 11:37:24 compute-0 sudo[103458]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:37:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:24.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:37:25 compute-0 ceph-mon[73607]: pgmap v290: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:25 compute-0 ceph-mon[73607]: 10.3 scrub starts
Oct 02 11:37:25 compute-0 ceph-mon[73607]: 10.3 scrub ok
Oct 02 11:37:25 compute-0 ceph-mon[73607]: 10.15 scrub starts
Oct 02 11:37:25 compute-0 ceph-mon[73607]: 10.15 scrub ok
Oct 02 11:37:25 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Oct 02 11:37:25 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Oct 02 11:37:25 compute-0 sudo[103610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eahihfcuggngeoieothqggtsrgcgeeyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405045.2302167-428-87589039756777/AnsiballZ_file.py'
Oct 02 11:37:25 compute-0 sudo[103610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:37:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:25.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:37:25 compute-0 python3.9[103612]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:37:25 compute-0 sudo[103610]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:26 compute-0 sudo[103763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oifoisrutjhdkribvfdqhgqqykzzuuis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405046.0893805-452-17495070544213/AnsiballZ_mount.py'
Oct 02 11:37:26 compute-0 sudo[103763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:26 compute-0 python3.9[103765]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 02 11:37:26 compute-0 sudo[103763]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:26 compute-0 ceph-mon[73607]: 6.5 scrub starts
Oct 02 11:37:26 compute-0 ceph-mon[73607]: 6.5 scrub ok
Oct 02 11:37:26 compute-0 ceph-mon[73607]: 7.f scrub starts
Oct 02 11:37:26 compute-0 ceph-mon[73607]: 7.f scrub ok
Oct 02 11:37:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:37:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:26.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:37:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:27.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:27 compute-0 ceph-mon[73607]: pgmap v291: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:27 compute-0 ceph-mon[73607]: 10.f scrub starts
Oct 02 11:37:27 compute-0 ceph-mon[73607]: 10.f scrub ok
Oct 02 11:37:28 compute-0 sudo[103915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpwedeadzkjphsvarnhqisbrrhwpzjcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405047.74984-536-18680624359169/AnsiballZ_file.py'
Oct 02 11:37:28 compute-0 sudo[103915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:28 compute-0 python3.9[103917]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:37:28 compute-0 sudo[103915]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:28 compute-0 sudo[104068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkmrbgikfnzpvpdehtgkybwwxbuaqrkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405048.606035-560-76862918130676/AnsiballZ_stat.py'
Oct 02 11:37:28 compute-0 sudo[104068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:37:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:28.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:37:29 compute-0 python3.9[104070]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:37:29 compute-0 ceph-mon[73607]: pgmap v292: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:29 compute-0 sudo[104068]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:29 compute-0 sudo[104146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kleqpqrfrlqcohwsbvvqtriohafpfdaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405048.606035-560-76862918130676/AnsiballZ_file.py'
Oct 02 11:37:29 compute-0 sudo[104146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:29 compute-0 python3.9[104148]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:37:29 compute-0 sudo[104146]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:37:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:29.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:30.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:31 compute-0 sudo[104299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvocbqqeilluxklpoldgaiesdqmjnbsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405050.9846923-632-156820453000115/AnsiballZ_getent.py'
Oct 02 11:37:31 compute-0 sudo[104299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:31 compute-0 ceph-mon[73607]: pgmap v293: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:31 compute-0 ceph-mon[73607]: 10.2 scrub starts
Oct 02 11:37:31 compute-0 ceph-mon[73607]: 10.2 scrub ok
Oct 02 11:37:31 compute-0 python3.9[104301]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 02 11:37:31 compute-0 sudo[104299]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:31.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:32 compute-0 sudo[104452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aosebploapbtjignjgwsvrtebyofsxue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405051.875674-662-272409693278009/AnsiballZ_getent.py'
Oct 02 11:37:32 compute-0 sudo[104452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:32 compute-0 python3.9[104454]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 02 11:37:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:32 compute-0 sudo[104452]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:37:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:32.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:37:33 compute-0 sudo[104606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezvhhdgykanfkzurktjqvobhyuphnhbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405052.6715696-686-141072769517197/AnsiballZ_group.py'
Oct 02 11:37:33 compute-0 sudo[104606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:33 compute-0 python3.9[104608]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 02 11:37:33 compute-0 sudo[104606]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:33 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 6.d scrub starts
Oct 02 11:37:33 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 6.d scrub ok
Oct 02 11:37:33 compute-0 ceph-mon[73607]: pgmap v294: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:33.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:33 compute-0 sudo[104758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xckwjtdpthzqqnufhkdqhtftnzwprezv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405053.6912134-713-68136619181877/AnsiballZ_file.py'
Oct 02 11:37:33 compute-0 sudo[104758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:34 compute-0 python3.9[104760]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 02 11:37:34 compute-0 sudo[104758]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:34 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 4.c scrub starts
Oct 02 11:37:34 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 4.c scrub ok
Oct 02 11:37:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:37:34 compute-0 ceph-mon[73607]: 6.d scrub starts
Oct 02 11:37:34 compute-0 ceph-mon[73607]: 6.d scrub ok
Oct 02 11:37:34 compute-0 ceph-mon[73607]: 7.9 scrub starts
Oct 02 11:37:34 compute-0 ceph-mon[73607]: 7.9 scrub ok
Oct 02 11:37:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:37:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:34.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:37:35 compute-0 sudo[104911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihunxqducgukhmgtmablfycnmjjupgcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405054.7673755-746-246521117474365/AnsiballZ_dnf.py'
Oct 02 11:37:35 compute-0 sudo[104911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:35 compute-0 python3.9[104913]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:37:35 compute-0 ceph-mon[73607]: pgmap v295: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:35 compute-0 ceph-mon[73607]: 4.c scrub starts
Oct 02 11:37:35 compute-0 ceph-mon[73607]: 4.c scrub ok
Oct 02 11:37:35 compute-0 ceph-mon[73607]: 7.11 scrub starts
Oct 02 11:37:35 compute-0 ceph-mon[73607]: 7.11 scrub ok
Oct 02 11:37:35 compute-0 ceph-mon[73607]: 7.8 scrub starts
Oct 02 11:37:35 compute-0 ceph-mon[73607]: 7.8 scrub ok
Oct 02 11:37:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:35.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:36 compute-0 sudo[104911]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:36 compute-0 ceph-mon[73607]: pgmap v296: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:37:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:36.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:37:36 compute-0 sudo[105065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypyanzxxyjlmltedrtrjvjqikyysbkge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405056.7115502-770-212824296995471/AnsiballZ_file.py'
Oct 02 11:37:36 compute-0 sudo[105065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:37 compute-0 python3.9[105067]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:37:37 compute-0 sudo[105065]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:37 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 6.e scrub starts
Oct 02 11:37:37 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 6.e scrub ok
Oct 02 11:37:37 compute-0 ceph-mon[73607]: 7.6 scrub starts
Oct 02 11:37:37 compute-0 ceph-mon[73607]: 7.6 scrub ok
Oct 02 11:37:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:37.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:37 compute-0 sudo[105217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnizydisurqunykskfzojxxigtuogoqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405057.4507701-794-270450117499071/AnsiballZ_stat.py'
Oct 02 11:37:37 compute-0 sudo[105217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:37 compute-0 python3.9[105219]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:37:37 compute-0 sudo[105217]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:38 compute-0 sudo[105295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twhogvmjbnsynigbpzvtjxpanvcqiooc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405057.4507701-794-270450117499071/AnsiballZ_file.py'
Oct 02 11:37:38 compute-0 sudo[105295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:38 compute-0 python3.9[105297]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:37:38 compute-0 sudo[105295]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:38 compute-0 sudo[105299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:37:38 compute-0 sudo[105299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:37:38 compute-0 sudo[105299]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:38 compute-0 sudo[105347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:37:38 compute-0 sudo[105347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:37:38 compute-0 sudo[105347]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:38 compute-0 ceph-mon[73607]: 6.e scrub starts
Oct 02 11:37:38 compute-0 ceph-mon[73607]: 6.e scrub ok
Oct 02 11:37:38 compute-0 ceph-mon[73607]: pgmap v297: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:38 compute-0 ceph-mon[73607]: 10.1e scrub starts
Oct 02 11:37:38 compute-0 ceph-mon[73607]: 10.1e scrub ok
Oct 02 11:37:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:38.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:39 compute-0 sudo[105498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdrjfsehjvccggnzrzmzwnyhuljrywll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405058.753075-833-20377625367629/AnsiballZ_stat.py'
Oct 02 11:37:39 compute-0 sudo[105498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:39 compute-0 python3.9[105500]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:37:39 compute-0 sudo[105498]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:39 compute-0 sudo[105576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnnhbxvrcqgxrjvmhscigmypwxptwmce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405058.753075-833-20377625367629/AnsiballZ_file.py'
Oct 02 11:37:39 compute-0 sudo[105576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:37:39 compute-0 python3.9[105578]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:37:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:39.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:39 compute-0 sudo[105576]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:39 compute-0 ceph-mon[73607]: 10.5 scrub starts
Oct 02 11:37:39 compute-0 ceph-mon[73607]: 10.5 scrub ok
Oct 02 11:37:39 compute-0 ceph-mon[73607]: 7.1f scrub starts
Oct 02 11:37:39 compute-0 ceph-mon[73607]: 7.1f scrub ok
Oct 02 11:37:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:40 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Oct 02 11:37:40 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Oct 02 11:37:40 compute-0 sudo[105729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvayvpxhuybljmqknxopmpuaarpktofy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405060.2814693-878-161737601829613/AnsiballZ_dnf.py'
Oct 02 11:37:40 compute-0 sudo[105729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:40 compute-0 ceph-mon[73607]: pgmap v298: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:40 compute-0 ceph-mon[73607]: 11.1a scrub starts
Oct 02 11:37:40 compute-0 ceph-mon[73607]: 11.1a scrub ok
Oct 02 11:37:40 compute-0 python3.9[105731]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:37:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:40.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:41 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Oct 02 11:37:41 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Oct 02 11:37:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:41.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:41 compute-0 ceph-mon[73607]: 10.12 scrub starts
Oct 02 11:37:41 compute-0 ceph-mon[73607]: 10.12 scrub ok
Oct 02 11:37:41 compute-0 ceph-mon[73607]: 4.1a scrub starts
Oct 02 11:37:41 compute-0 ceph-mon[73607]: 4.1a scrub ok
Oct 02 11:37:42 compute-0 sudo[105729]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_11:37:42
Oct 02 11:37:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:37:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 11:37:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'vms', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'images']
Oct 02 11:37:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:37:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:37:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:37:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:37:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:37:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:37:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:37:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:37:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:37:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:37:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:37:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:37:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:37:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:37:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:37:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:37:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:37:42 compute-0 ceph-mon[73607]: 10.8 deep-scrub starts
Oct 02 11:37:42 compute-0 ceph-mon[73607]: 10.8 deep-scrub ok
Oct 02 11:37:42 compute-0 ceph-mon[73607]: pgmap v299: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:37:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:42.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:37:43 compute-0 python3.9[105883]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:37:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:37:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:43.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:37:43 compute-0 python3.9[106035]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 02 11:37:44 compute-0 ceph-mon[73607]: 7.10 deep-scrub starts
Oct 02 11:37:44 compute-0 ceph-mon[73607]: 7.10 deep-scrub ok
Oct 02 11:37:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:37:44 compute-0 python3.9[106186]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:37:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:37:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:44.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:37:45 compute-0 ceph-mon[73607]: pgmap v300: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:45 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Oct 02 11:37:45 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Oct 02 11:37:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:45.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:46 compute-0 ceph-mon[73607]: 8.19 scrub starts
Oct 02 11:37:46 compute-0 ceph-mon[73607]: 8.19 scrub ok
Oct 02 11:37:46 compute-0 ceph-mon[73607]: 9.13 deep-scrub starts
Oct 02 11:37:46 compute-0 ceph-mon[73607]: 9.13 deep-scrub ok
Oct 02 11:37:46 compute-0 sudo[106336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khaoixifucxdpdhedpshcbzczbexwjhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405065.5963273-1001-24050658095185/AnsiballZ_systemd.py'
Oct 02 11:37:46 compute-0 sudo[106336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:46 compute-0 python3.9[106339]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:37:46 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 02 11:37:46 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Oct 02 11:37:46 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 02 11:37:46 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 02 11:37:46 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 02 11:37:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:46.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:46 compute-0 sudo[106336]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:47 compute-0 ceph-mon[73607]: pgmap v301: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:47 compute-0 ceph-mon[73607]: 7.b scrub starts
Oct 02 11:37:47 compute-0 ceph-mon[73607]: 7.b scrub ok
Oct 02 11:37:47 compute-0 python3.9[106500]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 02 11:37:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:47.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:48 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Oct 02 11:37:48 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Oct 02 11:37:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:48.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:49 compute-0 ceph-mon[73607]: pgmap v302: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:49 compute-0 ceph-mon[73607]: 9.b scrub starts
Oct 02 11:37:49 compute-0 ceph-mon[73607]: 9.b scrub ok
Oct 02 11:37:49 compute-0 ceph-mon[73607]: 6.3 scrub starts
Oct 02 11:37:49 compute-0 ceph-mon[73607]: 6.3 scrub ok
Oct 02 11:37:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:37:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:49.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:50 compute-0 ceph-mon[73607]: 9.3 scrub starts
Oct 02 11:37:50 compute-0 ceph-mon[73607]: 9.3 scrub ok
Oct 02 11:37:50 compute-0 ceph-mon[73607]: 10.19 scrub starts
Oct 02 11:37:50 compute-0 ceph-mon[73607]: 10.19 scrub ok
Oct 02 11:37:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:37:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:50.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:37:51 compute-0 sudo[106652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkinudnitpwlfhhfsejpyhjniqlmalif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405070.7785962-1172-49500451008948/AnsiballZ_systemd.py'
Oct 02 11:37:51 compute-0 sudo[106652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:51 compute-0 python3.9[106654]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:37:51 compute-0 ceph-mon[73607]: pgmap v303: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:51 compute-0 sudo[106652]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:51.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:52 compute-0 sudo[106806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vblyhllbylmqtzxemsuildbvwuhmuzug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405071.7201452-1172-93604516282828/AnsiballZ_systemd.py'
Oct 02 11:37:52 compute-0 sudo[106806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:52 compute-0 python3.9[106808]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:37:52 compute-0 sudo[106806]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:52 compute-0 ceph-mon[73607]: 7.13 scrub starts
Oct 02 11:37:52 compute-0 ceph-mon[73607]: 7.13 scrub ok
Oct 02 11:37:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:52.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:53 compute-0 sudo[106836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:37:53 compute-0 sudo[106836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:37:53 compute-0 sudo[106836]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:53 compute-0 sshd-session[99501]: Connection closed by 192.168.122.30 port 32780
Oct 02 11:37:53 compute-0 sshd-session[99498]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:37:53 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Oct 02 11:37:53 compute-0 systemd[1]: session-36.scope: Consumed 1min 418ms CPU time.
Oct 02 11:37:53 compute-0 systemd-logind[789]: Session 36 logged out. Waiting for processes to exit.
Oct 02 11:37:53 compute-0 systemd-logind[789]: Removed session 36.
Oct 02 11:37:53 compute-0 sudo[106861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:37:53 compute-0 sudo[106861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:37:53 compute-0 sudo[106861]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:53 compute-0 sudo[106886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:37:53 compute-0 sudo[106886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:37:53 compute-0 sudo[106886]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:53 compute-0 sudo[106911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:37:53 compute-0 sudo[106911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:37:53 compute-0 ceph-mon[73607]: pgmap v304: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:53.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:37:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:37:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:37:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:37:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:37:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:37:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:37:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:37:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:37:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:37:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:37:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:37:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:37:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:37:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:37:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:37:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 11:37:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:37:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:37:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:37:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:37:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:37:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:37:54 compute-0 sudo[106911]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:37:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:37:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:54.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:37:55 compute-0 ceph-mon[73607]: 9.19 deep-scrub starts
Oct 02 11:37:55 compute-0 ceph-mon[73607]: 9.19 deep-scrub ok
Oct 02 11:37:55 compute-0 ceph-mon[73607]: pgmap v305: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:37:55 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:37:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:37:55 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:37:55 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Oct 02 11:37:55 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Oct 02 11:37:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:55.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:56 compute-0 ceph-mon[73607]: 9.17 scrub starts
Oct 02 11:37:56 compute-0 ceph-mon[73607]: 9.17 scrub ok
Oct 02 11:37:56 compute-0 ceph-mon[73607]: 9.1a scrub starts
Oct 02 11:37:56 compute-0 ceph-mon[73607]: 9.1a scrub ok
Oct 02 11:37:56 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:37:56 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:37:56 compute-0 ceph-mon[73607]: 8.14 scrub starts
Oct 02 11:37:56 compute-0 ceph-mon[73607]: 8.14 scrub ok
Oct 02 11:37:56 compute-0 ceph-mon[73607]: 9.7 scrub starts
Oct 02 11:37:56 compute-0 ceph-mon[73607]: 9.7 scrub ok
Oct 02 11:37:56 compute-0 ceph-mon[73607]: 9.1b deep-scrub starts
Oct 02 11:37:56 compute-0 ceph-mon[73607]: 9.1b deep-scrub ok
Oct 02 11:37:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:37:56 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:37:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:37:56 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:37:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:37:56 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:37:56 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 5eb0bc43-f53f-4595-a818-9705f2665a33 does not exist
Oct 02 11:37:56 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev d4699c3e-4871-4e3b-8010-79f24a0d19fa does not exist
Oct 02 11:37:56 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 97417acd-cc79-4886-b810-f87bfb3cc6f4 does not exist
Oct 02 11:37:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:37:56 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:37:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:37:56 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:37:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:37:56 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:37:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:56 compute-0 sudo[106969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:37:56 compute-0 sudo[106969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:37:56 compute-0 sudo[106969]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:56 compute-0 sudo[106994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:37:56 compute-0 sudo[106994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:37:56 compute-0 sudo[106994]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:56 compute-0 sudo[107019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:37:56 compute-0 sudo[107019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:37:56 compute-0 sudo[107019]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:56 compute-0 sudo[107044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:37:56 compute-0 sudo[107044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:37:56 compute-0 podman[107108]: 2025-10-02 11:37:56.914028423 +0000 UTC m=+0.072780899 container create 6251e32698c40874d3aaec7d52661e8262a2adca0a24f62f057bfa99be15c9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chandrasekhar, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:37:56 compute-0 podman[107108]: 2025-10-02 11:37:56.864895273 +0000 UTC m=+0.023647769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:37:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:37:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:56.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:37:56 compute-0 systemd[1]: Started libpod-conmon-6251e32698c40874d3aaec7d52661e8262a2adca0a24f62f057bfa99be15c9c3.scope.
Oct 02 11:37:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:37:57 compute-0 podman[107108]: 2025-10-02 11:37:57.030040163 +0000 UTC m=+0.188792659 container init 6251e32698c40874d3aaec7d52661e8262a2adca0a24f62f057bfa99be15c9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 11:37:57 compute-0 podman[107108]: 2025-10-02 11:37:57.039146599 +0000 UTC m=+0.197899065 container start 6251e32698c40874d3aaec7d52661e8262a2adca0a24f62f057bfa99be15c9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:37:57 compute-0 festive_chandrasekhar[107124]: 167 167
Oct 02 11:37:57 compute-0 systemd[1]: libpod-6251e32698c40874d3aaec7d52661e8262a2adca0a24f62f057bfa99be15c9c3.scope: Deactivated successfully.
Oct 02 11:37:57 compute-0 podman[107108]: 2025-10-02 11:37:57.047374163 +0000 UTC m=+0.206126619 container attach 6251e32698c40874d3aaec7d52661e8262a2adca0a24f62f057bfa99be15c9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chandrasekhar, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:37:57 compute-0 conmon[107124]: conmon 6251e32698c40874d3aa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6251e32698c40874d3aaec7d52661e8262a2adca0a24f62f057bfa99be15c9c3.scope/container/memory.events
Oct 02 11:37:57 compute-0 podman[107108]: 2025-10-02 11:37:57.048710726 +0000 UTC m=+0.207463192 container died 6251e32698c40874d3aaec7d52661e8262a2adca0a24f62f057bfa99be15c9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chandrasekhar, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 11:37:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d41671324580903fcdda112ea2a7e5634d1f0094cca875082be8ac78cb001fcb-merged.mount: Deactivated successfully.
Oct 02 11:37:57 compute-0 podman[107108]: 2025-10-02 11:37:57.219030476 +0000 UTC m=+0.377782942 container remove 6251e32698c40874d3aaec7d52661e8262a2adca0a24f62f057bfa99be15c9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chandrasekhar, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 11:37:57 compute-0 systemd[1]: libpod-conmon-6251e32698c40874d3aaec7d52661e8262a2adca0a24f62f057bfa99be15c9c3.scope: Deactivated successfully.
Oct 02 11:37:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:37:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:37:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:37:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:37:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:37:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:37:57 compute-0 ceph-mon[73607]: pgmap v306: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:57 compute-0 ceph-mon[73607]: 9.1e scrub starts
Oct 02 11:37:57 compute-0 ceph-mon[73607]: 9.1e scrub ok
Oct 02 11:37:57 compute-0 podman[107147]: 2025-10-02 11:37:57.391008556 +0000 UTC m=+0.043728817 container create cafb8672920fb26a10a762140a9cf9cb4a0ad47b3fc7444a8de97c0d780937ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_fermat, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Oct 02 11:37:57 compute-0 systemd[1]: Started libpod-conmon-cafb8672920fb26a10a762140a9cf9cb4a0ad47b3fc7444a8de97c0d780937ef.scope.
Oct 02 11:37:57 compute-0 podman[107147]: 2025-10-02 11:37:57.372568449 +0000 UTC m=+0.025288720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:37:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:37:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25aff7f91b580bd2fa3e16cc6866197e82f37318d1e2361f49fee0d617a26baf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:37:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25aff7f91b580bd2fa3e16cc6866197e82f37318d1e2361f49fee0d617a26baf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:37:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25aff7f91b580bd2fa3e16cc6866197e82f37318d1e2361f49fee0d617a26baf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:37:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25aff7f91b580bd2fa3e16cc6866197e82f37318d1e2361f49fee0d617a26baf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:37:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25aff7f91b580bd2fa3e16cc6866197e82f37318d1e2361f49fee0d617a26baf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:37:57 compute-0 podman[107147]: 2025-10-02 11:37:57.585247999 +0000 UTC m=+0.237968250 container init cafb8672920fb26a10a762140a9cf9cb4a0ad47b3fc7444a8de97c0d780937ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_fermat, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:37:57 compute-0 podman[107147]: 2025-10-02 11:37:57.592275853 +0000 UTC m=+0.244996134 container start cafb8672920fb26a10a762140a9cf9cb4a0ad47b3fc7444a8de97c0d780937ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 11:37:57 compute-0 podman[107147]: 2025-10-02 11:37:57.608302942 +0000 UTC m=+0.261023183 container attach cafb8672920fb26a10a762140a9cf9cb4a0ad47b3fc7444a8de97c0d780937ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_fermat, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:37:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:37:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:57.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:37:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:58 compute-0 nice_fermat[107164]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:37:58 compute-0 nice_fermat[107164]: --> relative data size: 1.0
Oct 02 11:37:58 compute-0 nice_fermat[107164]: --> All data devices are unavailable
Oct 02 11:37:58 compute-0 systemd[1]: libpod-cafb8672920fb26a10a762140a9cf9cb4a0ad47b3fc7444a8de97c0d780937ef.scope: Deactivated successfully.
Oct 02 11:37:58 compute-0 podman[107147]: 2025-10-02 11:37:58.380073815 +0000 UTC m=+1.032794056 container died cafb8672920fb26a10a762140a9cf9cb4a0ad47b3fc7444a8de97c0d780937ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_fermat, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:37:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-25aff7f91b580bd2fa3e16cc6866197e82f37318d1e2361f49fee0d617a26baf-merged.mount: Deactivated successfully.
Oct 02 11:37:58 compute-0 podman[107147]: 2025-10-02 11:37:58.443759487 +0000 UTC m=+1.096479748 container remove cafb8672920fb26a10a762140a9cf9cb4a0ad47b3fc7444a8de97c0d780937ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:37:58 compute-0 systemd[1]: libpod-conmon-cafb8672920fb26a10a762140a9cf9cb4a0ad47b3fc7444a8de97c0d780937ef.scope: Deactivated successfully.
Oct 02 11:37:58 compute-0 sudo[107044]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:58 compute-0 sudo[107195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:37:58 compute-0 sudo[107198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:37:58 compute-0 sudo[107195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:37:58 compute-0 sudo[107198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:37:58 compute-0 sudo[107195]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:58 compute-0 sudo[107198]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:58 compute-0 sudo[107245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:37:58 compute-0 sudo[107246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:37:58 compute-0 sudo[107245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:37:58 compute-0 sudo[107246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:37:58 compute-0 sudo[107246]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:58 compute-0 sudo[107245]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:58 compute-0 sudo[107295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:37:58 compute-0 sudo[107295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:37:58 compute-0 sudo[107295]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:58 compute-0 sudo[107320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 11:37:58 compute-0 sudo[107320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:37:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:58.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:37:59 compute-0 podman[107385]: 2025-10-02 11:37:59.016309944 +0000 UTC m=+0.038263952 container create 7395c3f520b4ff4979effccfa819c4bc4a0aa5091bd51c1e65501b03f7b43af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:37:59 compute-0 systemd[1]: Started libpod-conmon-7395c3f520b4ff4979effccfa819c4bc4a0aa5091bd51c1e65501b03f7b43af1.scope.
Oct 02 11:37:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:37:59 compute-0 podman[107385]: 2025-10-02 11:37:59.072416297 +0000 UTC m=+0.094370295 container init 7395c3f520b4ff4979effccfa819c4bc4a0aa5091bd51c1e65501b03f7b43af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mendel, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:37:59 compute-0 podman[107385]: 2025-10-02 11:37:59.077900112 +0000 UTC m=+0.099854120 container start 7395c3f520b4ff4979effccfa819c4bc4a0aa5091bd51c1e65501b03f7b43af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mendel, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:37:59 compute-0 podman[107385]: 2025-10-02 11:37:59.080751314 +0000 UTC m=+0.102705332 container attach 7395c3f520b4ff4979effccfa819c4bc4a0aa5091bd51c1e65501b03f7b43af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Oct 02 11:37:59 compute-0 pedantic_mendel[107404]: 167 167
Oct 02 11:37:59 compute-0 systemd[1]: libpod-7395c3f520b4ff4979effccfa819c4bc4a0aa5091bd51c1e65501b03f7b43af1.scope: Deactivated successfully.
Oct 02 11:37:59 compute-0 conmon[107404]: conmon 7395c3f520b4ff4979ef <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7395c3f520b4ff4979effccfa819c4bc4a0aa5091bd51c1e65501b03f7b43af1.scope/container/memory.events
Oct 02 11:37:59 compute-0 podman[107385]: 2025-10-02 11:37:59.082541588 +0000 UTC m=+0.104495596 container died 7395c3f520b4ff4979effccfa819c4bc4a0aa5091bd51c1e65501b03f7b43af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Oct 02 11:37:59 compute-0 podman[107385]: 2025-10-02 11:37:59.002260274 +0000 UTC m=+0.024214282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:37:59 compute-0 sshd-session[107387]: Accepted publickey for zuul from 192.168.122.30 port 53270 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 11:37:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-d66009af82117fac4ee793cce3824953930bcf97be6e3b9ee7275433738bef2e-merged.mount: Deactivated successfully.
Oct 02 11:37:59 compute-0 systemd-logind[789]: New session 37 of user zuul.
Oct 02 11:37:59 compute-0 podman[107385]: 2025-10-02 11:37:59.11647184 +0000 UTC m=+0.138425848 container remove 7395c3f520b4ff4979effccfa819c4bc4a0aa5091bd51c1e65501b03f7b43af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mendel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:37:59 compute-0 systemd[1]: Started Session 37 of User zuul.
Oct 02 11:37:59 compute-0 sshd-session[107387]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:37:59 compute-0 systemd[1]: libpod-conmon-7395c3f520b4ff4979effccfa819c4bc4a0aa5091bd51c1e65501b03f7b43af1.scope: Deactivated successfully.
Oct 02 11:37:59 compute-0 podman[107452]: 2025-10-02 11:37:59.262370093 +0000 UTC m=+0.037807420 container create 082f0d65435dd1539db3d8230a6441bd6cdd5469fdb9ee41555e85e0ad091b47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:37:59 compute-0 systemd[1]: Started libpod-conmon-082f0d65435dd1539db3d8230a6441bd6cdd5469fdb9ee41555e85e0ad091b47.scope.
Oct 02 11:37:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9794db17396dcfd0b73a4937690bdb8269c9de138fe06a541b577f85ebe2504e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9794db17396dcfd0b73a4937690bdb8269c9de138fe06a541b577f85ebe2504e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9794db17396dcfd0b73a4937690bdb8269c9de138fe06a541b577f85ebe2504e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9794db17396dcfd0b73a4937690bdb8269c9de138fe06a541b577f85ebe2504e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:37:59 compute-0 podman[107452]: 2025-10-02 11:37:59.3375486 +0000 UTC m=+0.112985947 container init 082f0d65435dd1539db3d8230a6441bd6cdd5469fdb9ee41555e85e0ad091b47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lalande, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 11:37:59 compute-0 podman[107452]: 2025-10-02 11:37:59.24694721 +0000 UTC m=+0.022384557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:37:59 compute-0 podman[107452]: 2025-10-02 11:37:59.34642567 +0000 UTC m=+0.121862997 container start 082f0d65435dd1539db3d8230a6441bd6cdd5469fdb9ee41555e85e0ad091b47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:37:59 compute-0 podman[107452]: 2025-10-02 11:37:59.349994799 +0000 UTC m=+0.125432156 container attach 082f0d65435dd1539db3d8230a6441bd6cdd5469fdb9ee41555e85e0ad091b47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:37:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:37:59 compute-0 ceph-mon[73607]: pgmap v307: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:37:59 compute-0 ceph-mon[73607]: 9.1f scrub starts
Oct 02 11:37:59 compute-0 ceph-mon[73607]: 9.1f scrub ok
Oct 02 11:37:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:37:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:37:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:59.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:00 compute-0 charming_lalande[107498]: {
Oct 02 11:38:00 compute-0 charming_lalande[107498]:     "1": [
Oct 02 11:38:00 compute-0 charming_lalande[107498]:         {
Oct 02 11:38:00 compute-0 charming_lalande[107498]:             "devices": [
Oct 02 11:38:00 compute-0 charming_lalande[107498]:                 "/dev/loop3"
Oct 02 11:38:00 compute-0 charming_lalande[107498]:             ],
Oct 02 11:38:00 compute-0 charming_lalande[107498]:             "lv_name": "ceph_lv0",
Oct 02 11:38:00 compute-0 charming_lalande[107498]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:38:00 compute-0 charming_lalande[107498]:             "lv_size": "7511998464",
Oct 02 11:38:00 compute-0 charming_lalande[107498]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:38:00 compute-0 charming_lalande[107498]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:38:00 compute-0 charming_lalande[107498]:             "name": "ceph_lv0",
Oct 02 11:38:00 compute-0 charming_lalande[107498]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:38:00 compute-0 charming_lalande[107498]:             "tags": {
Oct 02 11:38:00 compute-0 charming_lalande[107498]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:38:00 compute-0 charming_lalande[107498]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:38:00 compute-0 charming_lalande[107498]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:38:00 compute-0 charming_lalande[107498]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:38:00 compute-0 charming_lalande[107498]:                 "ceph.cluster_name": "ceph",
Oct 02 11:38:00 compute-0 charming_lalande[107498]:                 "ceph.crush_device_class": "",
Oct 02 11:38:00 compute-0 charming_lalande[107498]:                 "ceph.encrypted": "0",
Oct 02 11:38:00 compute-0 charming_lalande[107498]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:38:00 compute-0 charming_lalande[107498]:                 "ceph.osd_id": "1",
Oct 02 11:38:00 compute-0 charming_lalande[107498]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:38:00 compute-0 charming_lalande[107498]:                 "ceph.type": "block",
Oct 02 11:38:00 compute-0 charming_lalande[107498]:                 "ceph.vdo": "0"
Oct 02 11:38:00 compute-0 charming_lalande[107498]:             },
Oct 02 11:38:00 compute-0 charming_lalande[107498]:             "type": "block",
Oct 02 11:38:00 compute-0 charming_lalande[107498]:             "vg_name": "ceph_vg0"
Oct 02 11:38:00 compute-0 charming_lalande[107498]:         }
Oct 02 11:38:00 compute-0 charming_lalande[107498]:     ]
Oct 02 11:38:00 compute-0 charming_lalande[107498]: }
Oct 02 11:38:00 compute-0 systemd[1]: libpod-082f0d65435dd1539db3d8230a6441bd6cdd5469fdb9ee41555e85e0ad091b47.scope: Deactivated successfully.
Oct 02 11:38:00 compute-0 podman[107452]: 2025-10-02 11:38:00.112633616 +0000 UTC m=+0.888070943 container died 082f0d65435dd1539db3d8230a6441bd6cdd5469fdb9ee41555e85e0ad091b47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lalande, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 11:38:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-9794db17396dcfd0b73a4937690bdb8269c9de138fe06a541b577f85ebe2504e-merged.mount: Deactivated successfully.
Oct 02 11:38:00 compute-0 podman[107452]: 2025-10-02 11:38:00.191413932 +0000 UTC m=+0.966851259 container remove 082f0d65435dd1539db3d8230a6441bd6cdd5469fdb9ee41555e85e0ad091b47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lalande, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:38:00 compute-0 systemd[1]: libpod-conmon-082f0d65435dd1539db3d8230a6441bd6cdd5469fdb9ee41555e85e0ad091b47.scope: Deactivated successfully.
Oct 02 11:38:00 compute-0 sudo[107320]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:00 compute-0 python3.9[107600]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:38:00 compute-0 sudo[107618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:38:00 compute-0 sudo[107618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:38:00 compute-0 sudo[107618]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:00 compute-0 sudo[107648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:38:00 compute-0 sudo[107648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:38:00 compute-0 sudo[107648]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:00 compute-0 sudo[107673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:38:00 compute-0 sudo[107673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:38:00 compute-0 sudo[107673]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:00 compute-0 sudo[107698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 11:38:00 compute-0 sudo[107698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:38:00 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 4.e deep-scrub starts
Oct 02 11:38:00 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 4.e deep-scrub ok
Oct 02 11:38:00 compute-0 podman[107788]: 2025-10-02 11:38:00.735881561 +0000 UTC m=+0.038240551 container create dd8ab299531873acfeb4bf1226b6ecce9c4a37627df6237ae7bc73fa3b126c54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_black, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 11:38:00 compute-0 ceph-mon[73607]: pgmap v308: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:00 compute-0 systemd[1]: Started libpod-conmon-dd8ab299531873acfeb4bf1226b6ecce9c4a37627df6237ae7bc73fa3b126c54.scope.
Oct 02 11:38:00 compute-0 podman[107788]: 2025-10-02 11:38:00.716965771 +0000 UTC m=+0.019324791 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:38:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:38:00 compute-0 podman[107788]: 2025-10-02 11:38:00.829214978 +0000 UTC m=+0.131573978 container init dd8ab299531873acfeb4bf1226b6ecce9c4a37627df6237ae7bc73fa3b126c54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_black, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 11:38:00 compute-0 podman[107788]: 2025-10-02 11:38:00.837939055 +0000 UTC m=+0.140298055 container start dd8ab299531873acfeb4bf1226b6ecce9c4a37627df6237ae7bc73fa3b126c54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_black, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:38:00 compute-0 laughing_black[107805]: 167 167
Oct 02 11:38:00 compute-0 systemd[1]: libpod-dd8ab299531873acfeb4bf1226b6ecce9c4a37627df6237ae7bc73fa3b126c54.scope: Deactivated successfully.
Oct 02 11:38:00 compute-0 podman[107788]: 2025-10-02 11:38:00.842071527 +0000 UTC m=+0.144430507 container attach dd8ab299531873acfeb4bf1226b6ecce9c4a37627df6237ae7bc73fa3b126c54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_black, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:38:00 compute-0 podman[107788]: 2025-10-02 11:38:00.842338004 +0000 UTC m=+0.144696984 container died dd8ab299531873acfeb4bf1226b6ecce9c4a37627df6237ae7bc73fa3b126c54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_black, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 11:38:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8b7f35a466af48d16e2795eac0b8f48e751d945d14f240299a1b5c1c49b58aa-merged.mount: Deactivated successfully.
Oct 02 11:38:00 compute-0 podman[107788]: 2025-10-02 11:38:00.883866965 +0000 UTC m=+0.186225945 container remove dd8ab299531873acfeb4bf1226b6ecce9c4a37627df6237ae7bc73fa3b126c54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_black, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 11:38:00 compute-0 systemd[1]: libpod-conmon-dd8ab299531873acfeb4bf1226b6ecce9c4a37627df6237ae7bc73fa3b126c54.scope: Deactivated successfully.
Oct 02 11:38:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:38:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:00.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:38:01 compute-0 podman[107857]: 2025-10-02 11:38:01.041859158 +0000 UTC m=+0.045998283 container create 569430a262215a55ec84792ee1639060dafc9e7228fffa62dfd10733eff9e5c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 11:38:01 compute-0 systemd[1]: Started libpod-conmon-569430a262215a55ec84792ee1639060dafc9e7228fffa62dfd10733eff9e5c4.scope.
Oct 02 11:38:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc86745b132c17534c9fb442ac2fa8deef6555ef580aa3fd7fca651972cc888/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc86745b132c17534c9fb442ac2fa8deef6555ef580aa3fd7fca651972cc888/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc86745b132c17534c9fb442ac2fa8deef6555ef580aa3fd7fca651972cc888/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc86745b132c17534c9fb442ac2fa8deef6555ef580aa3fd7fca651972cc888/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:38:01 compute-0 podman[107857]: 2025-10-02 11:38:01.018397835 +0000 UTC m=+0.022536990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:38:01 compute-0 podman[107857]: 2025-10-02 11:38:01.123410703 +0000 UTC m=+0.127549848 container init 569430a262215a55ec84792ee1639060dafc9e7228fffa62dfd10733eff9e5c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 11:38:01 compute-0 podman[107857]: 2025-10-02 11:38:01.134483988 +0000 UTC m=+0.138623113 container start 569430a262215a55ec84792ee1639060dafc9e7228fffa62dfd10733eff9e5c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_napier, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:38:01 compute-0 podman[107857]: 2025-10-02 11:38:01.137226826 +0000 UTC m=+0.141365951 container attach 569430a262215a55ec84792ee1639060dafc9e7228fffa62dfd10733eff9e5c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_napier, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:38:01 compute-0 sudo[107976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obtrinjqrflrtnedwoslzwtfvxjerhtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405080.964369-73-95819145890672/AnsiballZ_getent.py'
Oct 02 11:38:01 compute-0 sudo[107976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:01 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 8.17 deep-scrub starts
Oct 02 11:38:01 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 8.17 deep-scrub ok
Oct 02 11:38:01 compute-0 python3.9[107978]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 02 11:38:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:01.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:01 compute-0 sudo[107976]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:01 compute-0 ceph-mon[73607]: 4.e deep-scrub starts
Oct 02 11:38:01 compute-0 ceph-mon[73607]: 4.e deep-scrub ok
Oct 02 11:38:01 compute-0 ceph-mon[73607]: 8.17 deep-scrub starts
Oct 02 11:38:01 compute-0 ceph-mon[73607]: 8.17 deep-scrub ok
Oct 02 11:38:01 compute-0 hardcore_napier[107898]: {
Oct 02 11:38:01 compute-0 hardcore_napier[107898]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 11:38:01 compute-0 hardcore_napier[107898]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:38:01 compute-0 hardcore_napier[107898]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:38:01 compute-0 hardcore_napier[107898]:         "osd_id": 1,
Oct 02 11:38:01 compute-0 hardcore_napier[107898]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:38:01 compute-0 hardcore_napier[107898]:         "type": "bluestore"
Oct 02 11:38:01 compute-0 hardcore_napier[107898]:     }
Oct 02 11:38:01 compute-0 hardcore_napier[107898]: }
Oct 02 11:38:01 compute-0 systemd[1]: libpod-569430a262215a55ec84792ee1639060dafc9e7228fffa62dfd10733eff9e5c4.scope: Deactivated successfully.
Oct 02 11:38:02 compute-0 podman[108021]: 2025-10-02 11:38:02.024022966 +0000 UTC m=+0.023948255 container died 569430a262215a55ec84792ee1639060dafc9e7228fffa62dfd10733eff9e5c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_napier, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 11:38:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-9dc86745b132c17534c9fb442ac2fa8deef6555ef580aa3fd7fca651972cc888-merged.mount: Deactivated successfully.
Oct 02 11:38:02 compute-0 podman[108021]: 2025-10-02 11:38:02.077552185 +0000 UTC m=+0.077477474 container remove 569430a262215a55ec84792ee1639060dafc9e7228fffa62dfd10733eff9e5c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 11:38:02 compute-0 systemd[1]: libpod-conmon-569430a262215a55ec84792ee1639060dafc9e7228fffa62dfd10733eff9e5c4.scope: Deactivated successfully.
Oct 02 11:38:02 compute-0 sudo[107698]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:38:02 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:38:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:38:02 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:38:02 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 1bac00b9-f212-4061-a15c-327252169de1 does not exist
Oct 02 11:38:02 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 4927fb0d-29e2-449a-9dd7-8bd20cb30924 does not exist
Oct 02 11:38:02 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 6c179a47-8682-4cf6-af84-7efdd263c5a1 does not exist
Oct 02 11:38:02 compute-0 sudo[108089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:38:02 compute-0 sudo[108089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:38:02 compute-0 sudo[108089]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:02 compute-0 sudo[108137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:38:02 compute-0 sudo[108137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:38:02 compute-0 sudo[108137]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:02 compute-0 sudo[108213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pceshijocusjeybcrqbuigeygfaqtruv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405082.109144-109-108595336247799/AnsiballZ_setup.py'
Oct 02 11:38:02 compute-0 sudo[108213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:02 compute-0 python3.9[108215]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:38:02 compute-0 sudo[108213]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:02.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:38:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:38:03 compute-0 ceph-mon[73607]: pgmap v309: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:03 compute-0 sudo[108297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufhkqbgyeeiotxbdprfjckoqwpjsnogr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405082.109144-109-108595336247799/AnsiballZ_dnf.py'
Oct 02 11:38:03 compute-0 sudo[108297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:03 compute-0 python3.9[108299]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 02 11:38:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:03.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:38:04 compute-0 ceph-mon[73607]: pgmap v310: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:04 compute-0 sudo[108297]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:04.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:05 compute-0 sudo[108451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwhfwoqjqkszyigwtsdkmtmadvwjdpmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405085.2011127-151-71991941898735/AnsiballZ_dnf.py'
Oct 02 11:38:05 compute-0 sudo[108451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:05.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:05 compute-0 python3.9[108453]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:38:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:06 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Oct 02 11:38:06 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Oct 02 11:38:06 compute-0 ceph-mon[73607]: pgmap v311: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:06 compute-0 ceph-mon[73607]: 4.18 scrub starts
Oct 02 11:38:06 compute-0 ceph-mon[73607]: 4.18 scrub ok
Oct 02 11:38:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:06.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:07 compute-0 sudo[108451]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:07.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:07 compute-0 sudo[108605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqonjygfbnzxfqsbavnbfazsteyzaxgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405087.222492-175-2252070454985/AnsiballZ_systemd.py'
Oct 02 11:38:07 compute-0 sudo[108605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:08 compute-0 python3.9[108607]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 11:38:08 compute-0 sudo[108605]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:38:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:08.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:38:09 compute-0 python3.9[108761]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:38:09 compute-0 ceph-mon[73607]: pgmap v312: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:38:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:09.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:10 compute-0 sudo[108911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjenjcacmrmgjglrtonwykhkuychheil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405089.6704507-229-104985442058303/AnsiballZ_sefcontext.py'
Oct 02 11:38:10 compute-0 sudo[108911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:10 compute-0 python3.9[108913]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 02 11:38:10 compute-0 ceph-mon[73607]: 9.5 scrub starts
Oct 02 11:38:10 compute-0 ceph-mon[73607]: 9.5 scrub ok
Oct 02 11:38:10 compute-0 sudo[108911]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:10.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:11 compute-0 ceph-mon[73607]: pgmap v313: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:11 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Oct 02 11:38:11 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Oct 02 11:38:11 compute-0 python3.9[109064]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:38:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:11.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:12 compute-0 sudo[109221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edmvebgyrptybymeafdelzhvfopjsyxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405092.0976048-283-142295359405560/AnsiballZ_dnf.py'
Oct 02 11:38:12 compute-0 sudo[109221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:12 compute-0 ceph-mon[73607]: 11.12 scrub starts
Oct 02 11:38:12 compute-0 ceph-mon[73607]: 11.12 scrub ok
Oct 02 11:38:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:38:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:38:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:38:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:38:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:38:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:38:12 compute-0 python3.9[109223]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:38:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:38:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:12.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:38:13 compute-0 ceph-mon[73607]: pgmap v314: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:13 compute-0 ceph-mon[73607]: 9.8 scrub starts
Oct 02 11:38:13 compute-0 ceph-mon[73607]: 9.8 scrub ok
Oct 02 11:38:13 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 8.12 deep-scrub starts
Oct 02 11:38:13 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 8.12 deep-scrub ok
Oct 02 11:38:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:13.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:13 compute-0 sudo[109221]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:14 compute-0 ceph-mon[73607]: 8.12 deep-scrub starts
Oct 02 11:38:14 compute-0 ceph-mon[73607]: 8.12 deep-scrub ok
Oct 02 11:38:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:38:14 compute-0 sudo[109375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwdeubhgqtrbpipszgdzvjqtrfiknobi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405094.1615808-307-247745253588378/AnsiballZ_command.py'
Oct 02 11:38:14 compute-0 sudo[109375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:14 compute-0 python3.9[109377]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:38:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:14.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:15 compute-0 sudo[109375]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:15 compute-0 ceph-mon[73607]: pgmap v315: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:15.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:16 compute-0 sudo[109663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbsbekgbnldjhrocfivndpxaktjmmjvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405095.9353712-331-203253173669858/AnsiballZ_file.py'
Oct 02 11:38:16 compute-0 sudo[109663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:16 compute-0 python3.9[109665]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 02 11:38:16 compute-0 sudo[109663]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:38:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:16.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:38:17 compute-0 python3.9[109815]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:38:17 compute-0 ceph-mon[73607]: pgmap v316: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:17 compute-0 ceph-mon[73607]: 9.18 scrub starts
Oct 02 11:38:17 compute-0 ceph-mon[73607]: 9.18 scrub ok
Oct 02 11:38:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:17.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:18 compute-0 sudo[109967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvomkjuzlkrjbanqiwkzgoavycbgnwue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405097.7604206-379-90294823915783/AnsiballZ_dnf.py'
Oct 02 11:38:18 compute-0 sudo[109967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:18 compute-0 python3.9[109969]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:38:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:18 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Oct 02 11:38:18 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Oct 02 11:38:18 compute-0 sudo[109972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:38:18 compute-0 sudo[109972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:38:18 compute-0 sudo[109972]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:18 compute-0 sudo[109997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:38:18 compute-0 sudo[109997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:38:18 compute-0 sudo[109997]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:18.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:19 compute-0 sudo[109967]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:38:19 compute-0 ceph-mon[73607]: pgmap v317: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:19 compute-0 ceph-mon[73607]: 5.2 scrub starts
Oct 02 11:38:19 compute-0 ceph-mon[73607]: 5.2 scrub ok
Oct 02 11:38:19 compute-0 ceph-mon[73607]: 9.9 scrub starts
Oct 02 11:38:19 compute-0 ceph-mon[73607]: 9.9 scrub ok
Oct 02 11:38:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:19.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:20 compute-0 sudo[110171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fevcbrlrjqnxswspzaxtqecbixynxbfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405099.776289-406-268005998947957/AnsiballZ_dnf.py'
Oct 02 11:38:20 compute-0 sudo[110171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:20 compute-0 python3.9[110173]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:38:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:20 compute-0 ceph-mon[73607]: pgmap v318: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:20.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:21 compute-0 sudo[110171]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:21.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:21 compute-0 ceph-mon[73607]: 9.16 deep-scrub starts
Oct 02 11:38:21 compute-0 ceph-mon[73607]: 9.16 deep-scrub ok
Oct 02 11:38:22 compute-0 sudo[110326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcdiikiabtvxxwhhjybuzpytcwxnwyog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405102.0033193-442-27251707231173/AnsiballZ_stat.py'
Oct 02 11:38:22 compute-0 sudo[110326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:22 compute-0 python3.9[110328]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:38:22 compute-0 sudo[110326]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:22 compute-0 ceph-mon[73607]: pgmap v319: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:38:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:22.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:38:23 compute-0 sudo[110480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwmawpuxfmellvnnzhgcapxgkttyspvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405102.7271557-466-141986938510275/AnsiballZ_slurp.py'
Oct 02 11:38:23 compute-0 sudo[110480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:23 compute-0 python3.9[110482]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Oct 02 11:38:23 compute-0 sudo[110480]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:23.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:38:24 compute-0 sshd-session[107421]: Connection closed by 192.168.122.30 port 53270
Oct 02 11:38:24 compute-0 sshd-session[107387]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:38:24 compute-0 systemd-logind[789]: Session 37 logged out. Waiting for processes to exit.
Oct 02 11:38:24 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Oct 02 11:38:24 compute-0 systemd[1]: session-37.scope: Consumed 17.124s CPU time.
Oct 02 11:38:24 compute-0 systemd-logind[789]: Removed session 37.
Oct 02 11:38:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:38:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:25.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:38:25 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Oct 02 11:38:25 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Oct 02 11:38:25 compute-0 ceph-mon[73607]: pgmap v320: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:25 compute-0 ceph-mon[73607]: 9.1d scrub starts
Oct 02 11:38:25 compute-0 ceph-mon[73607]: 9.1d scrub ok
Oct 02 11:38:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:38:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:25.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:38:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:26 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 5.f scrub starts
Oct 02 11:38:26 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 5.f scrub ok
Oct 02 11:38:26 compute-0 ceph-mon[73607]: 5.7 scrub starts
Oct 02 11:38:26 compute-0 ceph-mon[73607]: 5.7 scrub ok
Oct 02 11:38:26 compute-0 ceph-mon[73607]: pgmap v321: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:38:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:27.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:38:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:27.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:27 compute-0 ceph-mon[73607]: 5.f scrub starts
Oct 02 11:38:27 compute-0 ceph-mon[73607]: 5.f scrub ok
Oct 02 11:38:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:28 compute-0 ceph-mon[73607]: pgmap v322: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:29.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:29 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Oct 02 11:38:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:38:29 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Oct 02 11:38:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.002000048s ======
Oct 02 11:38:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:29.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Oct 02 11:38:29 compute-0 sshd-session[110510]: Accepted publickey for zuul from 192.168.122.30 port 36450 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 11:38:29 compute-0 systemd-logind[789]: New session 38 of user zuul.
Oct 02 11:38:29 compute-0 systemd[1]: Started Session 38 of User zuul.
Oct 02 11:38:29 compute-0 sshd-session[110510]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:38:29 compute-0 ceph-mon[73607]: 5.1 scrub starts
Oct 02 11:38:29 compute-0 ceph-mon[73607]: 5.1 scrub ok
Oct 02 11:38:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:30 compute-0 python3.9[110664]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:38:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:31.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:31 compute-0 ceph-mon[73607]: pgmap v323: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:31.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:31 compute-0 python3.9[110818]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:38:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:32 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Oct 02 11:38:32 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Oct 02 11:38:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:33.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:33 compute-0 python3.9[111012]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:38:33 compute-0 ceph-mon[73607]: pgmap v324: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:33 compute-0 ceph-mon[73607]: 5.1b scrub starts
Oct 02 11:38:33 compute-0 ceph-mon[73607]: 5.1b scrub ok
Oct 02 11:38:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:33.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:33 compute-0 sshd-session[110513]: Connection closed by 192.168.122.30 port 36450
Oct 02 11:38:33 compute-0 sshd-session[110510]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:38:33 compute-0 systemd-logind[789]: Session 38 logged out. Waiting for processes to exit.
Oct 02 11:38:33 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Oct 02 11:38:33 compute-0 systemd[1]: session-38.scope: Consumed 2.053s CPU time.
Oct 02 11:38:33 compute-0 systemd-logind[789]: Removed session 38.
Oct 02 11:38:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:38:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:38:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:35.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:38:35 compute-0 ceph-mon[73607]: pgmap v325: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:35.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:36 compute-0 ceph-mon[73607]: pgmap v326: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:37.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:37.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:38 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Oct 02 11:38:38 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Oct 02 11:38:38 compute-0 sudo[111042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:38:38 compute-0 sudo[111042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:38:38 compute-0 sudo[111042]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:38 compute-0 sudo[111067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:38:38 compute-0 sudo[111067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:38:38 compute-0 sudo[111067]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:39.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:39 compute-0 sshd-session[111092]: Accepted publickey for zuul from 192.168.122.30 port 38064 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 11:38:39 compute-0 systemd-logind[789]: New session 39 of user zuul.
Oct 02 11:38:39 compute-0 systemd[1]: Started Session 39 of User zuul.
Oct 02 11:38:39 compute-0 sshd-session[111092]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:38:39 compute-0 ceph-mon[73607]: pgmap v327: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:39 compute-0 ceph-mon[73607]: 5.1f scrub starts
Oct 02 11:38:39 compute-0 ceph-mon[73607]: 5.1f scrub ok
Oct 02 11:38:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:38:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:39.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:40 compute-0 python3.9[111245]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:38:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:41.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:41 compute-0 python3.9[111400]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:38:41 compute-0 ceph-mon[73607]: pgmap v328: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:41.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:42 compute-0 sudo[111554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzkrebgejvcsezfdleybrladsqldngeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405121.824364-85-23688377701091/AnsiballZ_setup.py'
Oct 02 11:38:42 compute-0 sudo[111554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_11:38:42
Oct 02 11:38:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:38:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 11:38:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'images', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'vms', '.rgw.root', '.mgr']
Oct 02 11:38:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:38:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:42 compute-0 python3.9[111556]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:38:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:38:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:38:42 compute-0 sudo[111554]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:38:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:38:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:38:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:38:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:38:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:38:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:38:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:38:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:38:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:38:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:38:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:38:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:38:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:38:42 compute-0 ceph-mon[73607]: pgmap v329: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:43.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:43 compute-0 sudo[111639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiydopbeuoabqgkvjqrqbnradwjlbqxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405121.824364-85-23688377701091/AnsiballZ_dnf.py'
Oct 02 11:38:43 compute-0 sudo[111639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:43 compute-0 python3.9[111641]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:38:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:38:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:43.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:38:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:44 compute-0 sudo[111639]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:44 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Oct 02 11:38:44 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Oct 02 11:38:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:38:45 compute-0 sudo[111793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoxnoipwetdoffuxjezgenztcrjvmeyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405124.723523-121-46264066517552/AnsiballZ_setup.py'
Oct 02 11:38:45 compute-0 sudo[111793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:38:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:45.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:38:45 compute-0 python3.9[111795]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:38:45 compute-0 ceph-mon[73607]: pgmap v330: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:45 compute-0 ceph-mon[73607]: 5.1c scrub starts
Oct 02 11:38:45 compute-0 ceph-mon[73607]: 5.1c scrub ok
Oct 02 11:38:45 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Oct 02 11:38:45 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Oct 02 11:38:45 compute-0 sudo[111793]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:45.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:46 compute-0 sudo[111989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtpbqydkjwrkelsbgzufwohcuawbqxsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405126.0190227-154-229356400122457/AnsiballZ_file.py'
Oct 02 11:38:46 compute-0 sudo[111989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:46 compute-0 ceph-mon[73607]: 5.18 scrub starts
Oct 02 11:38:46 compute-0 ceph-mon[73607]: 5.18 scrub ok
Oct 02 11:38:46 compute-0 python3.9[111991]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:38:46 compute-0 sudo[111989]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:47.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:47 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Oct 02 11:38:47 compute-0 sudo[112141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tumuxfevzvijcwgpbjxobjarqftfcbfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405127.0386415-178-74593052060718/AnsiballZ_command.py'
Oct 02 11:38:47 compute-0 sudo[112141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:47 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Oct 02 11:38:47 compute-0 python3.9[112143]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:38:47 compute-0 ceph-mon[73607]: pgmap v331: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:47.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:47 compute-0 sudo[112141]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:48 compute-0 sudo[112307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqaebrymdmzpeoqgynevbnvsrzjgvwil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405128.115815-202-119581480297218/AnsiballZ_stat.py'
Oct 02 11:38:48 compute-0 sudo[112307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:48 compute-0 python3.9[112309]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:38:48 compute-0 sudo[112307]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:48 compute-0 ceph-mon[73607]: 9.6 scrub starts
Oct 02 11:38:48 compute-0 ceph-mon[73607]: 9.6 scrub ok
Oct 02 11:38:48 compute-0 ceph-mon[73607]: pgmap v332: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:49.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:49 compute-0 sudo[112385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osdkwjzgshudgmcmiwjlynmnhehvesni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405128.115815-202-119581480297218/AnsiballZ_file.py'
Oct 02 11:38:49 compute-0 sudo[112385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:49 compute-0 python3.9[112387]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:38:49 compute-0 sudo[112385]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:49 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 9.e scrub starts
Oct 02 11:38:49 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 9.e scrub ok
Oct 02 11:38:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:38:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:49.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:50 compute-0 sudo[112537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwbxzfmgrsgwghigedsmnvjkuqnhsiat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405129.71656-238-139694221776277/AnsiballZ_stat.py'
Oct 02 11:38:50 compute-0 sudo[112537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:50 compute-0 ceph-mon[73607]: 9.e scrub starts
Oct 02 11:38:50 compute-0 ceph-mon[73607]: 9.e scrub ok
Oct 02 11:38:50 compute-0 python3.9[112539]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:38:50 compute-0 sudo[112537]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:50 compute-0 sudo[112616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvfpdauqpalsumkgxukivtgknkjmjdie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405129.71656-238-139694221776277/AnsiballZ_file.py'
Oct 02 11:38:50 compute-0 sudo[112616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:50 compute-0 python3.9[112618]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:38:50 compute-0 sudo[112616]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:38:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:51.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:38:51 compute-0 ceph-mon[73607]: pgmap v333: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:51 compute-0 sudo[112768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-botzyhmugrvsgeovkbqpokhgbnwbqkug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405131.224389-277-246891468371982/AnsiballZ_ini_file.py'
Oct 02 11:38:51 compute-0 sudo[112768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:51.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:51 compute-0 python3.9[112770]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:38:51 compute-0 sudo[112768]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:52 compute-0 sudo[112921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iymkpradaskwclmnznjqragzqgqatnog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405132.1039653-277-141765607727317/AnsiballZ_ini_file.py'
Oct 02 11:38:52 compute-0 sudo[112921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:52 compute-0 python3.9[112923]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:38:52 compute-0 sudo[112921]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:52 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 9.a deep-scrub starts
Oct 02 11:38:52 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 9.a deep-scrub ok
Oct 02 11:38:53 compute-0 sudo[113073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sslchyphpcpmwvkiybcdyjrvmzwvhdwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405132.7399478-277-144261092281647/AnsiballZ_ini_file.py'
Oct 02 11:38:53 compute-0 sudo[113073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:53.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:53 compute-0 python3.9[113075]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:38:53 compute-0 sudo[113073]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:53 compute-0 ceph-mon[73607]: pgmap v334: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:53 compute-0 ceph-mon[73607]: 9.a deep-scrub starts
Oct 02 11:38:53 compute-0 ceph-mon[73607]: 9.a deep-scrub ok
Oct 02 11:38:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:38:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:38:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:38:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:38:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:38:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:38:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:38:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:38:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:38:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:38:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:38:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:38:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:38:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:38:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:38:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:38:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 11:38:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:38:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:38:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:38:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:38:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:38:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:38:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:53.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:53 compute-0 sudo[113225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzmmzldwgpgvrvaesxgtewwxoutujhvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405133.3761408-277-225133768574570/AnsiballZ_ini_file.py'
Oct 02 11:38:53 compute-0 sudo[113225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:54 compute-0 python3.9[113227]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:38:54 compute-0 sudo[113225]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:38:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:38:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:55.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:38:55 compute-0 sudo[113378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sinzlkqrrgharhonkrapzlsuhphyscwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405134.874507-370-110629426984701/AnsiballZ_dnf.py'
Oct 02 11:38:55 compute-0 sudo[113378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:55 compute-0 python3.9[113380]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:38:55 compute-0 ceph-mon[73607]: pgmap v335: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:55.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:56 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 9.d scrub starts
Oct 02 11:38:56 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 9.d scrub ok
Oct 02 11:38:56 compute-0 sudo[113378]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:56 compute-0 ceph-mon[73607]: pgmap v336: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:57.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:57.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:57 compute-0 ceph-mon[73607]: 9.d scrub starts
Oct 02 11:38:57 compute-0 ceph-mon[73607]: 9.d scrub ok
Oct 02 11:38:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:58 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 9.f scrub starts
Oct 02 11:38:58 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 9.f scrub ok
Oct 02 11:38:58 compute-0 ceph-mon[73607]: pgmap v337: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:38:58 compute-0 ceph-mon[73607]: 9.f scrub starts
Oct 02 11:38:58 compute-0 ceph-mon[73607]: 9.f scrub ok
Oct 02 11:38:59 compute-0 sudo[113483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:38:59 compute-0 sudo[113483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:38:59 compute-0 sudo[113483]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:59.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:38:59 compute-0 sudo[113508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:38:59 compute-0 sudo[113508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:38:59 compute-0 sudo[113508]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:38:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:38:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:38:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:59.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:01.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:01 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 9.10 deep-scrub starts
Oct 02 11:39:01 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 9.10 deep-scrub ok
Oct 02 11:39:01 compute-0 ceph-mon[73607]: pgmap v338: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:39:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:01.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:39:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:02 compute-0 ceph-mon[73607]: 9.10 deep-scrub starts
Oct 02 11:39:02 compute-0 ceph-mon[73607]: 9.10 deep-scrub ok
Oct 02 11:39:02 compute-0 sudo[113535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:39:02 compute-0 sudo[113535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:39:02 compute-0 sudo[113535]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:02 compute-0 sudo[113560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:39:02 compute-0 sudo[113560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:39:02 compute-0 sudo[113560]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:02 compute-0 sudo[113585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:39:02 compute-0 sudo[113585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:39:02 compute-0 sudo[113585]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:02 compute-0 sudo[113610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:39:02 compute-0 sudo[113610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:39:02 compute-0 sudo[113685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qupzdpfhsxdkhluveszahwtrsaixlili ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405138.1437888-403-126349777440225/AnsiballZ_setup.py'
Oct 02 11:39:02 compute-0 sudo[113685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:03.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:03 compute-0 python3.9[113687]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:39:03 compute-0 sudo[113685]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:03 compute-0 sudo[113610]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:03 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Oct 02 11:39:03 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Oct 02 11:39:03 compute-0 ceph-mon[73607]: pgmap v339: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:39:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:03.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:39:03 compute-0 sudo[113870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skazdcfivvlpjwsvqyivnxnknjwzjwem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405143.4581106-427-14024986156559/AnsiballZ_stat.py'
Oct 02 11:39:03 compute-0 sudo[113870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:03 compute-0 python3.9[113872]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:39:04 compute-0 sudo[113870]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:39:04 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:39:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:39:04 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:39:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:39:04 compute-0 sudo[114023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rooynhaiddytnrvhmpoijhnwhnwjhehv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405144.3544872-454-46057939309333/AnsiballZ_stat.py'
Oct 02 11:39:04 compute-0 sudo[114023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:04 compute-0 ceph-mon[73607]: 9.11 scrub starts
Oct 02 11:39:04 compute-0 ceph-mon[73607]: 9.11 scrub ok
Oct 02 11:39:04 compute-0 ceph-mon[73607]: pgmap v340: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:04 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:39:04 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:39:04 compute-0 python3.9[114025]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:39:04 compute-0 sudo[114023]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:39:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:05.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:39:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:39:05 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:39:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:39:05 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:39:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:39:05 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:39:05 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev c21a57ed-4937-4d08-bde9-6b82dbba6b9e does not exist
Oct 02 11:39:05 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev e1b1cbc7-e2d4-46eb-aa69-4a948213d18a does not exist
Oct 02 11:39:05 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 34df1d23-8ed3-4dae-a63f-8a86b93b001f does not exist
Oct 02 11:39:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:39:05 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:39:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:39:05 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:39:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:39:05 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:39:05 compute-0 sudo[114102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:39:05 compute-0 sudo[114102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:39:05 compute-0 sudo[114102]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:05 compute-0 sudo[114127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:39:05 compute-0 sudo[114127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:39:05 compute-0 sudo[114127]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:05 compute-0 sudo[114175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:39:05 compute-0 sudo[114175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:39:05 compute-0 sudo[114175]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:05 compute-0 sudo[114220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:39:05 compute-0 sudo[114220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:39:05 compute-0 sudo[114275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvvcmbxwdvplmrlopffrzjdwzngigoac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405145.2429147-484-76149609822626/AnsiballZ_service_facts.py'
Oct 02 11:39:05 compute-0 sudo[114275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:05 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:39:05 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:39:05 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:39:05 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:39:05 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:39:05 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:39:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:05.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:05 compute-0 python3.9[114277]: ansible-service_facts Invoked
Oct 02 11:39:05 compute-0 network[114344]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 11:39:05 compute-0 network[114347]: 'network-scripts' will be removed from distribution in near future.
Oct 02 11:39:05 compute-0 network[114348]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 11:39:05 compute-0 podman[114324]: 2025-10-02 11:39:05.969748158 +0000 UTC m=+0.046936227 container create cd5b911a2041f40f4d197f614c54742f454d5cafd96ebdf4f3eae75d4c9ce4f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_rosalind, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:39:06 compute-0 systemd[1]: Started libpod-conmon-cd5b911a2041f40f4d197f614c54742f454d5cafd96ebdf4f3eae75d4c9ce4f9.scope.
Oct 02 11:39:06 compute-0 podman[114324]: 2025-10-02 11:39:05.94710222 +0000 UTC m=+0.024290319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:39:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:06 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Oct 02 11:39:06 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Oct 02 11:39:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:39:06 compute-0 podman[114324]: 2025-10-02 11:39:06.65781255 +0000 UTC m=+0.735000699 container init cd5b911a2041f40f4d197f614c54742f454d5cafd96ebdf4f3eae75d4c9ce4f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 11:39:06 compute-0 podman[114324]: 2025-10-02 11:39:06.667055542 +0000 UTC m=+0.744243591 container start cd5b911a2041f40f4d197f614c54742f454d5cafd96ebdf4f3eae75d4c9ce4f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:39:06 compute-0 podman[114324]: 2025-10-02 11:39:06.670280043 +0000 UTC m=+0.747468142 container attach cd5b911a2041f40f4d197f614c54742f454d5cafd96ebdf4f3eae75d4c9ce4f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 11:39:06 compute-0 happy_rosalind[114357]: 167 167
Oct 02 11:39:06 compute-0 systemd[1]: libpod-cd5b911a2041f40f4d197f614c54742f454d5cafd96ebdf4f3eae75d4c9ce4f9.scope: Deactivated successfully.
Oct 02 11:39:06 compute-0 podman[114324]: 2025-10-02 11:39:06.674009606 +0000 UTC m=+0.751197655 container died cd5b911a2041f40f4d197f614c54742f454d5cafd96ebdf4f3eae75d4c9ce4f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_rosalind, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct 02 11:39:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed240408ca642b61a197751cd10ffed7c2693fad7cd06fa37854810d4686fe3d-merged.mount: Deactivated successfully.
Oct 02 11:39:06 compute-0 ceph-mon[73607]: pgmap v341: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:06 compute-0 ceph-mon[73607]: 9.12 scrub starts
Oct 02 11:39:06 compute-0 ceph-mon[73607]: 9.12 scrub ok
Oct 02 11:39:06 compute-0 podman[114324]: 2025-10-02 11:39:06.762172363 +0000 UTC m=+0.839360412 container remove cd5b911a2041f40f4d197f614c54742f454d5cafd96ebdf4f3eae75d4c9ce4f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 11:39:06 compute-0 systemd[1]: libpod-conmon-cd5b911a2041f40f4d197f614c54742f454d5cafd96ebdf4f3eae75d4c9ce4f9.scope: Deactivated successfully.
Oct 02 11:39:06 compute-0 podman[114396]: 2025-10-02 11:39:06.928096396 +0000 UTC m=+0.067331706 container create 6109880af250d4f12dbda70480c72d94c3f358978a11eb71b493920bf1ef027e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:39:06 compute-0 systemd[1]: Started libpod-conmon-6109880af250d4f12dbda70480c72d94c3f358978a11eb71b493920bf1ef027e.scope.
Oct 02 11:39:06 compute-0 podman[114396]: 2025-10-02 11:39:06.882445203 +0000 UTC m=+0.021680533 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:39:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879aa7eb6b632c02b7ba16525cb346a2f4461d706d458303b2405b5172a6ef27/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879aa7eb6b632c02b7ba16525cb346a2f4461d706d458303b2405b5172a6ef27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879aa7eb6b632c02b7ba16525cb346a2f4461d706d458303b2405b5172a6ef27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879aa7eb6b632c02b7ba16525cb346a2f4461d706d458303b2405b5172a6ef27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879aa7eb6b632c02b7ba16525cb346a2f4461d706d458303b2405b5172a6ef27/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:39:07 compute-0 podman[114396]: 2025-10-02 11:39:07.024203112 +0000 UTC m=+0.163438432 container init 6109880af250d4f12dbda70480c72d94c3f358978a11eb71b493920bf1ef027e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:39:07 compute-0 podman[114396]: 2025-10-02 11:39:07.030759416 +0000 UTC m=+0.169994726 container start 6109880af250d4f12dbda70480c72d94c3f358978a11eb71b493920bf1ef027e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:39:07 compute-0 podman[114396]: 2025-10-02 11:39:07.041463114 +0000 UTC m=+0.180698444 container attach 6109880af250d4f12dbda70480c72d94c3f358978a11eb71b493920bf1ef027e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swirles, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:39:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:39:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:07.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:39:07 compute-0 musing_swirles[114421]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:39:07 compute-0 musing_swirles[114421]: --> relative data size: 1.0
Oct 02 11:39:07 compute-0 musing_swirles[114421]: --> All data devices are unavailable
Oct 02 11:39:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:07.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:07 compute-0 systemd[1]: libpod-6109880af250d4f12dbda70480c72d94c3f358978a11eb71b493920bf1ef027e.scope: Deactivated successfully.
Oct 02 11:39:07 compute-0 podman[114396]: 2025-10-02 11:39:07.796914455 +0000 UTC m=+0.936149775 container died 6109880af250d4f12dbda70480c72d94c3f358978a11eb71b493920bf1ef027e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swirles, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:39:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-879aa7eb6b632c02b7ba16525cb346a2f4461d706d458303b2405b5172a6ef27-merged.mount: Deactivated successfully.
Oct 02 11:39:07 compute-0 podman[114396]: 2025-10-02 11:39:07.96177296 +0000 UTC m=+1.101008280 container remove 6109880af250d4f12dbda70480c72d94c3f358978a11eb71b493920bf1ef027e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swirles, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:39:07 compute-0 systemd[1]: libpod-conmon-6109880af250d4f12dbda70480c72d94c3f358978a11eb71b493920bf1ef027e.scope: Deactivated successfully.
Oct 02 11:39:07 compute-0 sudo[114220]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:08 compute-0 sudo[114506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:39:08 compute-0 sudo[114506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:39:08 compute-0 sudo[114506]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:08 compute-0 sudo[114535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:39:08 compute-0 sudo[114535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:39:08 compute-0 sudo[114535]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:08 compute-0 sudo[114563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:39:08 compute-0 sudo[114563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:39:08 compute-0 sudo[114563]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:08 compute-0 sudo[114592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 11:39:08 compute-0 sudo[114592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:39:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:08 compute-0 podman[114671]: 2025-10-02 11:39:08.519674746 +0000 UTC m=+0.043234243 container create 689cdc97dab1ef114ed3edaf0612d67be1b0913befb9f2c26a12c1ddadd42463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:39:08 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Oct 02 11:39:08 compute-0 systemd[1]: Started libpod-conmon-689cdc97dab1ef114ed3edaf0612d67be1b0913befb9f2c26a12c1ddadd42463.scope.
Oct 02 11:39:08 compute-0 ceph-osd[83986]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Oct 02 11:39:08 compute-0 sudo[114275]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:08 compute-0 podman[114671]: 2025-10-02 11:39:08.495743767 +0000 UTC m=+0.019303304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:39:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:39:08 compute-0 podman[114671]: 2025-10-02 11:39:08.620657464 +0000 UTC m=+0.144216991 container init 689cdc97dab1ef114ed3edaf0612d67be1b0913befb9f2c26a12c1ddadd42463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_jemison, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 11:39:08 compute-0 podman[114671]: 2025-10-02 11:39:08.628579012 +0000 UTC m=+0.152138509 container start 689cdc97dab1ef114ed3edaf0612d67be1b0913befb9f2c26a12c1ddadd42463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_jemison, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 11:39:08 compute-0 frosty_jemison[114690]: 167 167
Oct 02 11:39:08 compute-0 systemd[1]: libpod-689cdc97dab1ef114ed3edaf0612d67be1b0913befb9f2c26a12c1ddadd42463.scope: Deactivated successfully.
Oct 02 11:39:08 compute-0 podman[114671]: 2025-10-02 11:39:08.660246765 +0000 UTC m=+0.183806592 container attach 689cdc97dab1ef114ed3edaf0612d67be1b0913befb9f2c26a12c1ddadd42463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_jemison, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 11:39:08 compute-0 podman[114671]: 2025-10-02 11:39:08.660587063 +0000 UTC m=+0.184146550 container died 689cdc97dab1ef114ed3edaf0612d67be1b0913befb9f2c26a12c1ddadd42463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 11:39:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0e1ada038a2084a574bec5f7c9c67b9b05fc6442f17aac8354f3765f0c5343a-merged.mount: Deactivated successfully.
Oct 02 11:39:08 compute-0 podman[114671]: 2025-10-02 11:39:08.727097968 +0000 UTC m=+0.250657445 container remove 689cdc97dab1ef114ed3edaf0612d67be1b0913befb9f2c26a12c1ddadd42463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:39:08 compute-0 systemd[1]: libpod-conmon-689cdc97dab1ef114ed3edaf0612d67be1b0913befb9f2c26a12c1ddadd42463.scope: Deactivated successfully.
Oct 02 11:39:08 compute-0 podman[114741]: 2025-10-02 11:39:08.873738069 +0000 UTC m=+0.046731191 container create c1df27cb6e5753ca1cfe86b4ba4a4bc47f53fbfa21986643b7baa46e5cf810c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:39:08 compute-0 systemd[1]: Started libpod-conmon-c1df27cb6e5753ca1cfe86b4ba4a4bc47f53fbfa21986643b7baa46e5cf810c1.scope.
Oct 02 11:39:08 compute-0 podman[114741]: 2025-10-02 11:39:08.854496977 +0000 UTC m=+0.027490119 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:39:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e3516edee94bc726206a1e727fc40a0c1cac2d7b62c81121fdc1b7c7ee4060/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e3516edee94bc726206a1e727fc40a0c1cac2d7b62c81121fdc1b7c7ee4060/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e3516edee94bc726206a1e727fc40a0c1cac2d7b62c81121fdc1b7c7ee4060/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e3516edee94bc726206a1e727fc40a0c1cac2d7b62c81121fdc1b7c7ee4060/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:39:08 compute-0 podman[114741]: 2025-10-02 11:39:08.982676545 +0000 UTC m=+0.155669697 container init c1df27cb6e5753ca1cfe86b4ba4a4bc47f53fbfa21986643b7baa46e5cf810c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_beaver, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:39:08 compute-0 podman[114741]: 2025-10-02 11:39:08.99560481 +0000 UTC m=+0.168597972 container start c1df27cb6e5753ca1cfe86b4ba4a4bc47f53fbfa21986643b7baa46e5cf810c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 11:39:09 compute-0 podman[114741]: 2025-10-02 11:39:09.044044941 +0000 UTC m=+0.217038093 container attach c1df27cb6e5753ca1cfe86b4ba4a4bc47f53fbfa21986643b7baa46e5cf810c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_beaver, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 11:39:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:09.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:39:09 compute-0 musing_beaver[114758]: {
Oct 02 11:39:09 compute-0 musing_beaver[114758]:     "1": [
Oct 02 11:39:09 compute-0 musing_beaver[114758]:         {
Oct 02 11:39:09 compute-0 musing_beaver[114758]:             "devices": [
Oct 02 11:39:09 compute-0 musing_beaver[114758]:                 "/dev/loop3"
Oct 02 11:39:09 compute-0 musing_beaver[114758]:             ],
Oct 02 11:39:09 compute-0 musing_beaver[114758]:             "lv_name": "ceph_lv0",
Oct 02 11:39:09 compute-0 musing_beaver[114758]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:39:09 compute-0 musing_beaver[114758]:             "lv_size": "7511998464",
Oct 02 11:39:09 compute-0 musing_beaver[114758]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:39:09 compute-0 musing_beaver[114758]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:39:09 compute-0 musing_beaver[114758]:             "name": "ceph_lv0",
Oct 02 11:39:09 compute-0 musing_beaver[114758]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:39:09 compute-0 musing_beaver[114758]:             "tags": {
Oct 02 11:39:09 compute-0 musing_beaver[114758]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:39:09 compute-0 musing_beaver[114758]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:39:09 compute-0 musing_beaver[114758]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:39:09 compute-0 musing_beaver[114758]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:39:09 compute-0 musing_beaver[114758]:                 "ceph.cluster_name": "ceph",
Oct 02 11:39:09 compute-0 musing_beaver[114758]:                 "ceph.crush_device_class": "",
Oct 02 11:39:09 compute-0 musing_beaver[114758]:                 "ceph.encrypted": "0",
Oct 02 11:39:09 compute-0 musing_beaver[114758]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:39:09 compute-0 musing_beaver[114758]:                 "ceph.osd_id": "1",
Oct 02 11:39:09 compute-0 musing_beaver[114758]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:39:09 compute-0 musing_beaver[114758]:                 "ceph.type": "block",
Oct 02 11:39:09 compute-0 musing_beaver[114758]:                 "ceph.vdo": "0"
Oct 02 11:39:09 compute-0 musing_beaver[114758]:             },
Oct 02 11:39:09 compute-0 musing_beaver[114758]:             "type": "block",
Oct 02 11:39:09 compute-0 musing_beaver[114758]:             "vg_name": "ceph_vg0"
Oct 02 11:39:09 compute-0 musing_beaver[114758]:         }
Oct 02 11:39:09 compute-0 musing_beaver[114758]:     ]
Oct 02 11:39:09 compute-0 musing_beaver[114758]: }
Oct 02 11:39:09 compute-0 systemd[1]: libpod-c1df27cb6e5753ca1cfe86b4ba4a4bc47f53fbfa21986643b7baa46e5cf810c1.scope: Deactivated successfully.
Oct 02 11:39:09 compute-0 podman[114741]: 2025-10-02 11:39:09.727505709 +0000 UTC m=+0.900498841 container died c1df27cb6e5753ca1cfe86b4ba4a4bc47f53fbfa21986643b7baa46e5cf810c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_beaver, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 11:39:09 compute-0 ceph-mon[73607]: pgmap v342: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:09 compute-0 ceph-mon[73607]: 9.15 scrub starts
Oct 02 11:39:09 compute-0 ceph-mon[73607]: 9.15 scrub ok
Oct 02 11:39:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:09.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8e3516edee94bc726206a1e727fc40a0c1cac2d7b62c81121fdc1b7c7ee4060-merged.mount: Deactivated successfully.
Oct 02 11:39:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:10 compute-0 podman[114741]: 2025-10-02 11:39:10.37707856 +0000 UTC m=+1.550071682 container remove c1df27cb6e5753ca1cfe86b4ba4a4bc47f53fbfa21986643b7baa46e5cf810c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_beaver, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 11:39:10 compute-0 systemd[1]: libpod-conmon-c1df27cb6e5753ca1cfe86b4ba4a4bc47f53fbfa21986643b7baa46e5cf810c1.scope: Deactivated successfully.
Oct 02 11:39:10 compute-0 sudo[114592]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:10 compute-0 sudo[114879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:39:10 compute-0 sudo[114879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:39:10 compute-0 sudo[114879]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:10 compute-0 sudo[114927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:39:10 compute-0 sudo[114927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:39:10 compute-0 sudo[114927]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:10 compute-0 sudo[114954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:39:10 compute-0 sudo[114954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:39:10 compute-0 sudo[115002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfqyolqwydyicqglulvpksombvhfviox ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1759405150.2272077-523-37617222390722/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1759405150.2272077-523-37617222390722/args'
Oct 02 11:39:10 compute-0 sudo[115002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:10 compute-0 sudo[114954]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:10 compute-0 sudo[115009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 11:39:10 compute-0 sudo[115009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:39:10 compute-0 sudo[115002]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:10 compute-0 ceph-mon[73607]: pgmap v343: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:10 compute-0 podman[115110]: 2025-10-02 11:39:10.95197119 +0000 UTC m=+0.092506166 container create 0a446211222c4afecdc3b0c932e9275e39400f989acfbf99398f98c3ce25efa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_cray, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 11:39:10 compute-0 podman[115110]: 2025-10-02 11:39:10.878886731 +0000 UTC m=+0.019421707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:39:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:11.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:11 compute-0 systemd[1]: Started libpod-conmon-0a446211222c4afecdc3b0c932e9275e39400f989acfbf99398f98c3ce25efa5.scope.
Oct 02 11:39:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:39:11 compute-0 podman[115110]: 2025-10-02 11:39:11.211368213 +0000 UTC m=+0.351903209 container init 0a446211222c4afecdc3b0c932e9275e39400f989acfbf99398f98c3ce25efa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_cray, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Oct 02 11:39:11 compute-0 podman[115110]: 2025-10-02 11:39:11.219938478 +0000 UTC m=+0.360473454 container start 0a446211222c4afecdc3b0c932e9275e39400f989acfbf99398f98c3ce25efa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 11:39:11 compute-0 gracious_cray[115178]: 167 167
Oct 02 11:39:11 compute-0 systemd[1]: libpod-0a446211222c4afecdc3b0c932e9275e39400f989acfbf99398f98c3ce25efa5.scope: Deactivated successfully.
Oct 02 11:39:11 compute-0 podman[115110]: 2025-10-02 11:39:11.237361704 +0000 UTC m=+0.377896680 container attach 0a446211222c4afecdc3b0c932e9275e39400f989acfbf99398f98c3ce25efa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_cray, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:39:11 compute-0 podman[115110]: 2025-10-02 11:39:11.238147524 +0000 UTC m=+0.378682500 container died 0a446211222c4afecdc3b0c932e9275e39400f989acfbf99398f98c3ce25efa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 11:39:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-89f5552b85cd3f6fd2622a876c2d99270a652058edce06ecb11cc7a5dc4775e1-merged.mount: Deactivated successfully.
Oct 02 11:39:11 compute-0 podman[115110]: 2025-10-02 11:39:11.314383872 +0000 UTC m=+0.454918848 container remove 0a446211222c4afecdc3b0c932e9275e39400f989acfbf99398f98c3ce25efa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_cray, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 11:39:11 compute-0 systemd[1]: libpod-conmon-0a446211222c4afecdc3b0c932e9275e39400f989acfbf99398f98c3ce25efa5.scope: Deactivated successfully.
Oct 02 11:39:11 compute-0 sudo[115269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjwkjetbknrzwidzrqjhsrifrrhoxcao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405151.0774996-556-98897479195413/AnsiballZ_dnf.py'
Oct 02 11:39:11 compute-0 sudo[115269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:11 compute-0 podman[115277]: 2025-10-02 11:39:11.468550601 +0000 UTC m=+0.047171162 container create 913c4d54b7496db1cea80a8afcd4b2ff84f70f8773a635d8e24cf8bacdbf742e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_saha, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:39:11 compute-0 systemd[1]: Started libpod-conmon-913c4d54b7496db1cea80a8afcd4b2ff84f70f8773a635d8e24cf8bacdbf742e.scope.
Oct 02 11:39:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:39:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f987ceb6116bf53413b64c2f6cee9eefeaf5533219fe79aa837f60136b9e4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:39:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f987ceb6116bf53413b64c2f6cee9eefeaf5533219fe79aa837f60136b9e4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:39:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f987ceb6116bf53413b64c2f6cee9eefeaf5533219fe79aa837f60136b9e4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:39:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f987ceb6116bf53413b64c2f6cee9eefeaf5533219fe79aa837f60136b9e4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:39:11 compute-0 podman[115277]: 2025-10-02 11:39:11.447750591 +0000 UTC m=+0.026371182 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:39:11 compute-0 podman[115277]: 2025-10-02 11:39:11.552043652 +0000 UTC m=+0.130664233 container init 913c4d54b7496db1cea80a8afcd4b2ff84f70f8773a635d8e24cf8bacdbf742e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Oct 02 11:39:11 compute-0 podman[115277]: 2025-10-02 11:39:11.558877882 +0000 UTC m=+0.137498443 container start 913c4d54b7496db1cea80a8afcd4b2ff84f70f8773a635d8e24cf8bacdbf742e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_saha, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:39:11 compute-0 podman[115277]: 2025-10-02 11:39:11.567289803 +0000 UTC m=+0.145910364 container attach 913c4d54b7496db1cea80a8afcd4b2ff84f70f8773a635d8e24cf8bacdbf742e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 11:39:11 compute-0 python3.9[115272]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:39:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:11.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:12 compute-0 happy_saha[115294]: {
Oct 02 11:39:12 compute-0 happy_saha[115294]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 11:39:12 compute-0 happy_saha[115294]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:39:12 compute-0 happy_saha[115294]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:39:12 compute-0 happy_saha[115294]:         "osd_id": 1,
Oct 02 11:39:12 compute-0 happy_saha[115294]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:39:12 compute-0 happy_saha[115294]:         "type": "bluestore"
Oct 02 11:39:12 compute-0 happy_saha[115294]:     }
Oct 02 11:39:12 compute-0 happy_saha[115294]: }
Oct 02 11:39:12 compute-0 systemd[1]: libpod-913c4d54b7496db1cea80a8afcd4b2ff84f70f8773a635d8e24cf8bacdbf742e.scope: Deactivated successfully.
Oct 02 11:39:12 compute-0 podman[115317]: 2025-10-02 11:39:12.447350922 +0000 UTC m=+0.024224217 container died 913c4d54b7496db1cea80a8afcd4b2ff84f70f8773a635d8e24cf8bacdbf742e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:39:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-84f987ceb6116bf53413b64c2f6cee9eefeaf5533219fe79aa837f60136b9e4d-merged.mount: Deactivated successfully.
Oct 02 11:39:12 compute-0 podman[115317]: 2025-10-02 11:39:12.509089138 +0000 UTC m=+0.085962403 container remove 913c4d54b7496db1cea80a8afcd4b2ff84f70f8773a635d8e24cf8bacdbf742e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:39:12 compute-0 systemd[1]: libpod-conmon-913c4d54b7496db1cea80a8afcd4b2ff84f70f8773a635d8e24cf8bacdbf742e.scope: Deactivated successfully.
Oct 02 11:39:12 compute-0 sudo[115009]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:39:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:39:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:39:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:39:12 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 6fc1b6ed-8aed-40e8-8df8-17af2a60f73e does not exist
Oct 02 11:39:12 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev e7d013b9-ecbe-48d0-8274-6351a530b973 does not exist
Oct 02 11:39:12 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 937fc0c6-54fa-497b-baba-a5e1faaa3fc9 does not exist
Oct 02 11:39:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:39:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:39:12 compute-0 sudo[115332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:39:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:39:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:39:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:39:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:39:12 compute-0 sudo[115332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:39:12 compute-0 sudo[115332]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:12 compute-0 sudo[115357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:39:12 compute-0 sudo[115357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:39:12 compute-0 sudo[115357]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:12 compute-0 sudo[115269]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:39:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:13.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:39:13 compute-0 ceph-mon[73607]: pgmap v344: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:39:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:39:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:13.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:14 compute-0 sudo[115531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndmhhgoshhlieggvayczjbribepkoeik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405153.4110222-595-122268641979733/AnsiballZ_package_facts.py'
Oct 02 11:39:14 compute-0 sudo[115531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:14 compute-0 python3.9[115533]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 02 11:39:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:39:14 compute-0 sudo[115531]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:15.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:15 compute-0 ceph-mon[73607]: pgmap v345: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:15 compute-0 sudo[115684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsmqcdibfxrcqgisfnpeypumlclhyqqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405155.4150472-625-160580392099227/AnsiballZ_stat.py'
Oct 02 11:39:15 compute-0 sudo[115684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:15.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:15 compute-0 python3.9[115686]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:39:15 compute-0 sudo[115684]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:16 compute-0 sudo[115762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siwuawjnqhmfrbllkxwnnprwyxyqcuat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405155.4150472-625-160580392099227/AnsiballZ_file.py'
Oct 02 11:39:16 compute-0 sudo[115762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:16 compute-0 python3.9[115764]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:39:16 compute-0 sudo[115762]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:16 compute-0 ceph-mon[73607]: pgmap v346: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:16 compute-0 sudo[115915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miififraqhixlrwdnadwhhfzmtvuabuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405156.6973255-661-60638501433010/AnsiballZ_stat.py'
Oct 02 11:39:16 compute-0 sudo[115915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:17.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:17 compute-0 python3.9[115917]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:39:17 compute-0 sudo[115915]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:17 compute-0 sudo[115993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvwfvrxsswdxihlgrlsznkntwtahdeom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405156.6973255-661-60638501433010/AnsiballZ_file.py'
Oct 02 11:39:17 compute-0 sudo[115993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:17 compute-0 python3.9[115995]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:39:17 compute-0 sudo[115993]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:17.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:18 compute-0 ceph-mon[73607]: pgmap v347: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:19.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:19 compute-0 sudo[116101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:39:19 compute-0 sudo[116101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:39:19 compute-0 sudo[116101]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:19 compute-0 sudo[116188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wouozgutodnhynaofbvogwtzrupfdenj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405158.7789705-715-166332549155906/AnsiballZ_lineinfile.py'
Oct 02 11:39:19 compute-0 sudo[116188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:19 compute-0 sudo[116156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:39:19 compute-0 sudo[116156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:39:19 compute-0 sudo[116156]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:19 compute-0 python3.9[116196]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:39:19 compute-0 sudo[116188]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:39:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:19.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:20 compute-0 sudo[116349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkglhdrfhkjcbqjafvwmngmwkccnqraw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405160.47998-760-38955025909907/AnsiballZ_setup.py'
Oct 02 11:39:20 compute-0 sudo[116349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:21 compute-0 python3.9[116351]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:39:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:21.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:21 compute-0 sudo[116349]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:21 compute-0 ceph-mon[73607]: pgmap v348: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:21.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:21 compute-0 sudo[116433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfhkrmxgznsiblxdjqtnwikltuxpvuab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405160.47998-760-38955025909907/AnsiballZ_systemd.py'
Oct 02 11:39:21 compute-0 sudo[116433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:22 compute-0 python3.9[116435]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:39:22 compute-0 sudo[116433]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:23.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:23 compute-0 sshd-session[111095]: Connection closed by 192.168.122.30 port 38064
Oct 02 11:39:23 compute-0 sshd-session[111092]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:39:23 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Oct 02 11:39:23 compute-0 systemd[1]: session-39.scope: Consumed 22.174s CPU time.
Oct 02 11:39:23 compute-0 systemd-logind[789]: Session 39 logged out. Waiting for processes to exit.
Oct 02 11:39:23 compute-0 systemd-logind[789]: Removed session 39.
Oct 02 11:39:23 compute-0 ceph-mon[73607]: pgmap v349: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:23.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:39:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:25.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:25 compute-0 ceph-mon[73607]: pgmap v350: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:25.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:27.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:27 compute-0 ceph-mon[73607]: pgmap v351: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:27.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:28 compute-0 ceph-mon[73607]: pgmap v352: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:28 compute-0 sshd-session[116466]: Accepted publickey for zuul from 192.168.122.30 port 46952 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 11:39:28 compute-0 systemd-logind[789]: New session 40 of user zuul.
Oct 02 11:39:28 compute-0 systemd[1]: Started Session 40 of User zuul.
Oct 02 11:39:28 compute-0 sshd-session[116466]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:39:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:29.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:29 compute-0 sudo[116619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmdaeeizpfafcytbnodiiaopnvuufwdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405169.0166175-31-93668354761561/AnsiballZ_file.py'
Oct 02 11:39:29 compute-0 sudo[116619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:39:29 compute-0 python3.9[116621]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:39:29 compute-0 sudo[116619]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:29.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:30 compute-0 sudo[116772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fshlfdndhuepcpmobvdnyakvtktwymeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405169.9907172-67-202660582731593/AnsiballZ_stat.py'
Oct 02 11:39:30 compute-0 sudo[116772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:30 compute-0 python3.9[116774]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:39:30 compute-0 sudo[116772]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:30 compute-0 sudo[116850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zebpmeewuqhciwruzslbgcvymrafqzbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405169.9907172-67-202660582731593/AnsiballZ_file.py'
Oct 02 11:39:30 compute-0 sudo[116850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:31 compute-0 python3.9[116852]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:39:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:31.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:31 compute-0 sudo[116850]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:31 compute-0 ceph-mon[73607]: pgmap v353: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:31 compute-0 sshd-session[116469]: Connection closed by 192.168.122.30 port 46952
Oct 02 11:39:31 compute-0 sshd-session[116466]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:39:31 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Oct 02 11:39:31 compute-0 systemd[1]: session-40.scope: Consumed 1.448s CPU time.
Oct 02 11:39:31 compute-0 systemd-logind[789]: Session 40 logged out. Waiting for processes to exit.
Oct 02 11:39:31 compute-0 systemd-logind[789]: Removed session 40.
Oct 02 11:39:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:31.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:33.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:33 compute-0 ceph-mon[73607]: pgmap v354: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:33.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:39:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:35.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:35 compute-0 ceph-mon[73607]: pgmap v355: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:35.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:36 compute-0 sshd-session[116880]: Accepted publickey for zuul from 192.168.122.30 port 33594 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 11:39:36 compute-0 systemd-logind[789]: New session 41 of user zuul.
Oct 02 11:39:36 compute-0 systemd[1]: Started Session 41 of User zuul.
Oct 02 11:39:36 compute-0 sshd-session[116880]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:39:36 compute-0 ceph-mon[73607]: pgmap v356: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:37.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:37 compute-0 python3.9[117033]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:39:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:37.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:38 compute-0 sudo[117188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcabsyokthdoiyopmnyqpqwgoucimwag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405178.0338852-64-107773241164774/AnsiballZ_file.py'
Oct 02 11:39:38 compute-0 sudo[117188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:38 compute-0 python3.9[117190]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:39:38 compute-0 sudo[117188]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:39 compute-0 ceph-mon[73607]: pgmap v357: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:39:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:39.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:39:39 compute-0 sudo[117290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:39:39 compute-0 sudo[117290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:39:39 compute-0 sudo[117290]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:39 compute-0 sudo[117315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:39:39 compute-0 sudo[117315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:39:39 compute-0 sudo[117315]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:39 compute-0 sudo[117413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksxzltrmetfarltqbjkmrtcigqafbrlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405178.9834907-88-280508366702115/AnsiballZ_stat.py'
Oct 02 11:39:39 compute-0 sudo[117413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:39:39 compute-0 python3.9[117415]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:39:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:39:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:39.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:39:39 compute-0 sudo[117413]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:40 compute-0 sudo[117491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmsqdzxjppwywgzxxqkmmfiyzrdpgitk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405178.9834907-88-280508366702115/AnsiballZ_file.py'
Oct 02 11:39:40 compute-0 sudo[117491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:40 compute-0 python3.9[117493]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.1obc92cu recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:39:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:40 compute-0 sudo[117491]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:40 compute-0 ceph-mon[73607]: pgmap v358: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:41.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:41 compute-0 sudo[117644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uetjhykwvfkxjznyyhchxpuighfcgdlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405180.838816-148-23614956558546/AnsiballZ_stat.py'
Oct 02 11:39:41 compute-0 sudo[117644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:41 compute-0 python3.9[117646]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:39:41 compute-0 sudo[117644]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:41 compute-0 sudo[117722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbjjqyjduegglgsddcuczghjcisicvnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405180.838816-148-23614956558546/AnsiballZ_file.py'
Oct 02 11:39:41 compute-0 sudo[117722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:41.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:41 compute-0 python3.9[117724]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.iu31yvaq recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:39:42 compute-0 sudo[117722]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_11:39:42
Oct 02 11:39:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:39:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 11:39:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['volumes', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'images', 'default.rgw.log', 'default.rgw.meta', 'backups', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data']
Oct 02 11:39:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:39:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:42 compute-0 sudo[117875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvakaplkblxkzvycedtkqqsmlevkxzvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405182.3051655-187-109148348302532/AnsiballZ_file.py'
Oct 02 11:39:42 compute-0 sudo[117875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:39:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:39:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:39:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:39:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:39:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:39:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:39:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:39:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:39:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:39:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:39:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:39:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:39:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:39:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:39:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:39:42 compute-0 python3.9[117877]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:39:42 compute-0 sudo[117875]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:43.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:43 compute-0 sudo[118027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rffpnnndqglvuxozqfkrqmbvqtzmsbwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405183.0773213-211-190363812476802/AnsiballZ_stat.py'
Oct 02 11:39:43 compute-0 sudo[118027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:43 compute-0 python3.9[118029]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:39:43 compute-0 ceph-mon[73607]: pgmap v359: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:43 compute-0 sudo[118027]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:43 compute-0 sudo[118105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppinvlgomplktkegxriznodlnmwkchdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405183.0773213-211-190363812476802/AnsiballZ_file.py'
Oct 02 11:39:43 compute-0 sudo[118105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:43.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:43 compute-0 python3.9[118107]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:39:43 compute-0 sudo[118105]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:44 compute-0 sudo[118258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anbahdtuihignlnndxyvzliqihrvszcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405184.079363-211-219611079345982/AnsiballZ_stat.py'
Oct 02 11:39:44 compute-0 sudo[118258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:44 compute-0 python3.9[118260]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:39:44 compute-0 sudo[118258]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:39:44 compute-0 sudo[118336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xulzctcjziurqsjrtibrwglypzyfcnnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405184.079363-211-219611079345982/AnsiballZ_file.py'
Oct 02 11:39:44 compute-0 sudo[118336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:44 compute-0 python3.9[118338]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:39:44 compute-0 sudo[118336]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:39:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:45.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:39:45 compute-0 ceph-mon[73607]: pgmap v360: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:45.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:45 compute-0 sudo[118488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjevbjxvzkrnpxhgxcusppwkqfvfyyie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405185.707639-280-159705048809991/AnsiballZ_file.py'
Oct 02 11:39:45 compute-0 sudo[118488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:46 compute-0 python3.9[118490]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:39:46 compute-0 sudo[118488]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:46 compute-0 sudo[118641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kotekxdvfplwdcymiztahvwztpvvnnkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405186.4121158-304-74828122277364/AnsiballZ_stat.py'
Oct 02 11:39:46 compute-0 sudo[118641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:46 compute-0 python3.9[118643]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:39:46 compute-0 sudo[118641]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:39:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:47.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:39:47 compute-0 sudo[118719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptqykywzveuknftqyetlmltbywzjllaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405186.4121158-304-74828122277364/AnsiballZ_file.py'
Oct 02 11:39:47 compute-0 sudo[118719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:47 compute-0 python3.9[118721]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:39:47 compute-0 sudo[118719]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:47 compute-0 ceph-mon[73607]: pgmap v361: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:47.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:47 compute-0 sudo[118871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqfjdynuaavyytoqzuigfyranwvhtoxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405187.6822958-340-141221725001037/AnsiballZ_stat.py'
Oct 02 11:39:47 compute-0 sudo[118871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:48 compute-0 python3.9[118873]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:39:48 compute-0 sudo[118871]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:48 compute-0 sudo[118950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysyxdevnplxwfgefkkzldarmrdtfplmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405187.6822958-340-141221725001037/AnsiballZ_file.py'
Oct 02 11:39:48 compute-0 sudo[118950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:48 compute-0 python3.9[118952]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:39:48 compute-0 sudo[118950]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:48 compute-0 ceph-mon[73607]: pgmap v362: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:49.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:49 compute-0 sudo[119102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmdrvkstoxdpkfnqgqnnnttpovlkmycc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405188.9189632-376-131379557354516/AnsiballZ_systemd.py'
Oct 02 11:39:49 compute-0 sudo[119102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:39:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:49.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:49 compute-0 python3.9[119104]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:39:49 compute-0 systemd[1]: Reloading.
Oct 02 11:39:49 compute-0 systemd-sysv-generator[119133]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:39:49 compute-0 systemd-rc-local-generator[119129]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:39:50 compute-0 sudo[119102]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:50 compute-0 sudo[119291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyiopbdqojiloslmpafbcpjocuyhsojn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405190.5314279-400-163110796883245/AnsiballZ_stat.py'
Oct 02 11:39:50 compute-0 sudo[119291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:39:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:51.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:39:51 compute-0 python3.9[119293]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:39:51 compute-0 sudo[119291]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:51 compute-0 sudo[119369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfgdwptivnknisifjzjdsnjwxxnngpcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405190.5314279-400-163110796883245/AnsiballZ_file.py'
Oct 02 11:39:51 compute-0 sudo[119369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:51 compute-0 ceph-mon[73607]: pgmap v363: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:51 compute-0 python3.9[119371]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:39:51 compute-0 sudo[119369]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:51.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:52 compute-0 sudo[119521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twrxgzixjxfuvgkaroaljcqfjqhekkyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405191.9990528-436-71903048772688/AnsiballZ_stat.py'
Oct 02 11:39:52 compute-0 sudo[119521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:52 compute-0 python3.9[119523]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:39:52 compute-0 sudo[119521]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:52 compute-0 sudo[119600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqltpmkderoylkfnbhckabbpklqulavy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405191.9990528-436-71903048772688/AnsiballZ_file.py'
Oct 02 11:39:52 compute-0 sudo[119600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:52 compute-0 ceph-mon[73607]: pgmap v364: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:52 compute-0 python3.9[119602]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:39:52 compute-0 sudo[119600]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:53.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:53 compute-0 sudo[119752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxemdvscdzemgsymlwqczcrhgxuqcpsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405193.2180367-472-243867742678489/AnsiballZ_systemd.py'
Oct 02 11:39:53 compute-0 sudo[119752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:39:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:39:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:39:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:39:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:39:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:39:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:39:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:39:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:39:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:39:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:39:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:39:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:39:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:39:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:39:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:39:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 11:39:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:39:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:39:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:39:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:39:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:39:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:39:53 compute-0 python3.9[119754]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:39:53 compute-0 systemd[1]: Reloading.
Oct 02 11:39:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:53.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:53 compute-0 systemd-rc-local-generator[119778]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:39:53 compute-0 systemd-sysv-generator[119785]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:39:54 compute-0 systemd[1]: Starting Create netns directory...
Oct 02 11:39:54 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 11:39:54 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 11:39:54 compute-0 systemd[1]: Finished Create netns directory.
Oct 02 11:39:54 compute-0 sudo[119752]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:39:54 compute-0 ceph-mon[73607]: pgmap v365: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:55 compute-0 python3.9[119946]: ansible-ansible.builtin.service_facts Invoked
Oct 02 11:39:55 compute-0 network[119963]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 11:39:55 compute-0 network[119964]: 'network-scripts' will be removed from distribution in near future.
Oct 02 11:39:55 compute-0 network[119965]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 11:39:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:55.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:55.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:39:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:57.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:39:57 compute-0 ceph-mon[73607]: pgmap v366: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:39:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:57.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:39:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:58 compute-0 ceph-mon[73607]: pgmap v367: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:39:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:59.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:39:59 compute-0 sudo[120105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:39:59 compute-0 sudo[120105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:39:59 compute-0 sudo[120105]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:59 compute-0 sudo[120130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:39:59 compute-0 sudo[120130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:39:59 compute-0 sudo[120130]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:39:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:39:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:39:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:59.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:00 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 02 11:40:00 compute-0 ceph-mon[73607]: overall HEALTH_OK
Oct 02 11:40:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:40:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:01.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:40:01 compute-0 ceph-mon[73607]: pgmap v368: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:01.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:02 compute-0 sudo[120282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhvrxdyopkbhdyglqavanwgadyznmrzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405202.0968971-550-257460321349968/AnsiballZ_stat.py'
Oct 02 11:40:02 compute-0 sudo[120282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:02 compute-0 python3.9[120284]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:40:02 compute-0 sudo[120282]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:02 compute-0 sudo[120360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugpehuthcbtkwrwcfewgmtscgabwcllk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405202.0968971-550-257460321349968/AnsiballZ_file.py'
Oct 02 11:40:02 compute-0 sudo[120360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:02 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Oct 02 11:40:02 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:02.908289) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 11:40:02 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Oct 02 11:40:02 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405202908317, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2266, "num_deletes": 251, "total_data_size": 3649162, "memory_usage": 3698720, "flush_reason": "Manual Compaction"}
Oct 02 11:40:02 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Oct 02 11:40:02 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405202927723, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 3561289, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7653, "largest_seqno": 9918, "table_properties": {"data_size": 3551728, "index_size": 5671, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 23520, "raw_average_key_size": 20, "raw_value_size": 3530765, "raw_average_value_size": 3149, "num_data_blocks": 256, "num_entries": 1121, "num_filter_entries": 1121, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405005, "oldest_key_time": 1759405005, "file_creation_time": 1759405202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:40:02 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 19603 microseconds, and 6670 cpu microseconds.
Oct 02 11:40:02 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:40:02 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:02.927888) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 3561289 bytes OK
Oct 02 11:40:02 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:02.927941) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Oct 02 11:40:02 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:02.929134) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Oct 02 11:40:02 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:02.929148) EVENT_LOG_v1 {"time_micros": 1759405202929142, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 11:40:02 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:02.929165) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 11:40:02 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 3639541, prev total WAL file size 3640177, number of live WAL files 2.
Oct 02 11:40:02 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:40:02 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:02.930319) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Oct 02 11:40:02 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 11:40:02 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(3477KB)], [20(7799KB)]
Oct 02 11:40:02 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405202930393, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 11547892, "oldest_snapshot_seqno": -1}
Oct 02 11:40:03 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3789 keys, 9851779 bytes, temperature: kUnknown
Oct 02 11:40:03 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405203001915, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 9851779, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9820282, "index_size": 20891, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9541, "raw_key_size": 91252, "raw_average_key_size": 24, "raw_value_size": 9745787, "raw_average_value_size": 2572, "num_data_blocks": 917, "num_entries": 3789, "num_filter_entries": 3789, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759405202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:40:03 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:40:03 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:03.002139) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 9851779 bytes
Oct 02 11:40:03 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:03.003634) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.3 rd, 137.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 7.6 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(6.0) write-amplify(2.8) OK, records in: 4306, records dropped: 517 output_compression: NoCompression
Oct 02 11:40:03 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:03.003650) EVENT_LOG_v1 {"time_micros": 1759405203003641, "job": 6, "event": "compaction_finished", "compaction_time_micros": 71594, "compaction_time_cpu_micros": 19648, "output_level": 6, "num_output_files": 1, "total_output_size": 9851779, "num_input_records": 4306, "num_output_records": 3789, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 11:40:03 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:40:03 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405203004225, "job": 6, "event": "table_file_deletion", "file_number": 22}
Oct 02 11:40:03 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:40:03 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405203005291, "job": 6, "event": "table_file_deletion", "file_number": 20}
Oct 02 11:40:03 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:02.930180) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:40:03 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:03.005316) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:40:03 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:03.005321) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:40:03 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:03.005322) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:40:03 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:03.005324) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:40:03 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:03.005325) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:40:03 compute-0 python3.9[120362]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:40:03 compute-0 sudo[120360]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:03 compute-0 ceph-mon[73607]: pgmap v369: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:40:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:03.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:40:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:03.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:03 compute-0 sudo[120512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izkxkcvacilofozlufvdvfbkfqwfzkks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405203.6525276-589-107606131636076/AnsiballZ_file.py'
Oct 02 11:40:03 compute-0 sudo[120512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:04 compute-0 python3.9[120514]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:40:04 compute-0 sudo[120512]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:40:04 compute-0 sudo[120665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxcvfoojtigvdpzoqzzcduzmgotlycja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405204.3668687-613-20749281871014/AnsiballZ_stat.py'
Oct 02 11:40:04 compute-0 sudo[120665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:04 compute-0 python3.9[120667]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:40:04 compute-0 sudo[120665]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:05.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:05 compute-0 sudo[120743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmxfeksusqkvtbpwpkxdhnzguwqlirps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405204.3668687-613-20749281871014/AnsiballZ_file.py'
Oct 02 11:40:05 compute-0 sudo[120743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:05 compute-0 python3.9[120745]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:40:05 compute-0 ceph-mon[73607]: pgmap v370: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:05 compute-0 sudo[120743]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:05.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:06 compute-0 sudo[120896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjynbkmgobfzqesmttelogkqwembugfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405205.8828597-658-119710775372733/AnsiballZ_timezone.py'
Oct 02 11:40:06 compute-0 sudo[120896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:06 compute-0 python3.9[120898]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 02 11:40:06 compute-0 systemd[1]: Starting Time & Date Service...
Oct 02 11:40:06 compute-0 systemd[1]: Started Time & Date Service.
Oct 02 11:40:06 compute-0 sudo[120896]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:07.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:07 compute-0 sudo[121053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcbkbverzuqwaptowrkszfulswtjlloh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405207.1169994-685-191441434724271/AnsiballZ_file.py'
Oct 02 11:40:07 compute-0 sudo[121053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:07 compute-0 ceph-mon[73607]: pgmap v371: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:07 compute-0 python3.9[121055]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:40:07 compute-0 sudo[121053]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:40:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:07.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:40:08 compute-0 sudo[121205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgrnyfvikugadvmgatlblftojxumbwjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405207.9872186-709-206529565881249/AnsiballZ_stat.py'
Oct 02 11:40:08 compute-0 sudo[121205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:08 compute-0 python3.9[121207]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:40:08 compute-0 sudo[121205]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:08 compute-0 ceph-mon[73607]: pgmap v372: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:08 compute-0 sudo[121284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wddwcjaxxkevuboqbpqmeuipupcjunzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405207.9872186-709-206529565881249/AnsiballZ_file.py'
Oct 02 11:40:08 compute-0 sudo[121284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:09 compute-0 python3.9[121286]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:40:09 compute-0 sudo[121284]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:09.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:40:09 compute-0 sudo[121436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfxpcqtrxhycptrxpkcskysdmyiyfmux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405209.4121513-745-92373257619769/AnsiballZ_stat.py'
Oct 02 11:40:09 compute-0 sudo[121436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:09.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:09 compute-0 python3.9[121438]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:40:09 compute-0 sudo[121436]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:10 compute-0 sudo[121514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfuacggzaebprjlxyugphnjoullwcslj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405209.4121513-745-92373257619769/AnsiballZ_file.py'
Oct 02 11:40:10 compute-0 sudo[121514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:10 compute-0 python3.9[121516]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.rd0124hu recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:40:10 compute-0 sudo[121514]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:10 compute-0 sudo[121667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tonvclncgoezcgwyqejepoyzasmcsupt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405210.683123-781-239199487746718/AnsiballZ_stat.py'
Oct 02 11:40:10 compute-0 sudo[121667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:11 compute-0 python3.9[121669]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:40:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:40:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:11.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:40:11 compute-0 sudo[121667]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:11 compute-0 sudo[121745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncwwwrpqigerlfpodvikcfbmmxjiyqml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405210.683123-781-239199487746718/AnsiballZ_file.py'
Oct 02 11:40:11 compute-0 sudo[121745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:11 compute-0 ceph-mon[73607]: pgmap v373: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:11 compute-0 python3.9[121747]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:40:11 compute-0 sudo[121745]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:11.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:12 compute-0 sudo[121898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiiqpbxqyzmflkqqakkzoumtllqgyyqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405212.0686204-820-49731337843195/AnsiballZ_command.py'
Oct 02 11:40:12 compute-0 sudo[121898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:40:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:40:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:40:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:40:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:40:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:40:12 compute-0 python3.9[121900]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:40:12 compute-0 sudo[121898]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:13 compute-0 sudo[121926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:40:13 compute-0 sudo[121926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:40:13 compute-0 sudo[121926]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:13 compute-0 sudo[121951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:40:13 compute-0 sudo[121951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:40:13 compute-0 sudo[121951]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:13 compute-0 sudo[121981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:40:13 compute-0 sudo[121981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:40:13 compute-0 sudo[121981]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:13.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:13 compute-0 sudo[122032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:40:13 compute-0 sudo[122032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:40:13 compute-0 sudo[122171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjpzgedytwddiaagtnmmjmcxqckiuuda ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759405213.1105256-844-106300201021178/AnsiballZ_edpm_nftables_from_files.py'
Oct 02 11:40:13 compute-0 sudo[122171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:13 compute-0 sudo[122032]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:13 compute-0 ceph-mon[73607]: pgmap v374: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:13 compute-0 python3[122173]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 02 11:40:13 compute-0 sudo[122171]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:13.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:14 compute-0 sudo[122336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdlcvhtwmwocwxbwgzylmcbdzbzbejyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405213.985608-868-267517505062221/AnsiballZ_stat.py'
Oct 02 11:40:14 compute-0 sudo[122336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:14 compute-0 python3.9[122338]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:40:14 compute-0 sudo[122336]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:40:14 compute-0 ceph-mon[73607]: pgmap v375: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:14 compute-0 sudo[122414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpglbkfmhhvzaxizjquymkgqdfdtqvax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405213.985608-868-267517505062221/AnsiballZ_file.py'
Oct 02 11:40:14 compute-0 sudo[122414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:15 compute-0 python3.9[122416]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:40:15 compute-0 sudo[122414]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:40:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:15.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:40:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:40:15 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:40:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:40:15 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:40:15 compute-0 sudo[122566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwvzumspddmlfgmbbzudjibbvchuwfye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405215.2858903-904-167354892866067/AnsiballZ_stat.py'
Oct 02 11:40:15 compute-0 sudo[122566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:15 compute-0 python3.9[122568]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:40:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:15.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:15 compute-0 sudo[122566]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:40:16 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:40:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:40:16 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:40:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:40:16 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:40:16 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 9051d9f9-dd42-4306-9f7f-31160769ed7c does not exist
Oct 02 11:40:16 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev f552abb6-f48f-400a-b73d-b6aa7b609d08 does not exist
Oct 02 11:40:16 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev c0aa828a-12c6-4036-b89f-69c3dd24a471 does not exist
Oct 02 11:40:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:40:16 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:40:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:40:16 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:40:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:40:16 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:40:16 compute-0 sudo[122651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqdjlqtabscvjlksmqazowuboroxdkyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405215.2858903-904-167354892866067/AnsiballZ_file.py'
Oct 02 11:40:16 compute-0 sudo[122651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:16 compute-0 sudo[122638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:40:16 compute-0 sudo[122638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:40:16 compute-0 sudo[122638]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:16 compute-0 sudo[122672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:40:16 compute-0 sudo[122672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:40:16 compute-0 sudo[122672]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:16 compute-0 sudo[122697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:40:16 compute-0 sudo[122697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:40:16 compute-0 sudo[122697]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:16 compute-0 python3.9[122669]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:40:16 compute-0 sudo[122722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:40:16 compute-0 sudo[122722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:40:16 compute-0 sudo[122651]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:16 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:40:16 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:40:16 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:40:16 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:40:16 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:40:16 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:40:16 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:40:16 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:40:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:16 compute-0 podman[122865]: 2025-10-02 11:40:16.610887283 +0000 UTC m=+0.037407602 container create ef281f88646837f66d80e237786874a95070545f785eb93d38e6da5a1920373e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_elgamal, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 11:40:16 compute-0 systemd[1]: Started libpod-conmon-ef281f88646837f66d80e237786874a95070545f785eb93d38e6da5a1920373e.scope.
Oct 02 11:40:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:40:16 compute-0 podman[122865]: 2025-10-02 11:40:16.68926932 +0000 UTC m=+0.115789659 container init ef281f88646837f66d80e237786874a95070545f785eb93d38e6da5a1920373e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_elgamal, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 11:40:16 compute-0 podman[122865]: 2025-10-02 11:40:16.593473463 +0000 UTC m=+0.019993802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:40:16 compute-0 podman[122865]: 2025-10-02 11:40:16.696123762 +0000 UTC m=+0.122644081 container start ef281f88646837f66d80e237786874a95070545f785eb93d38e6da5a1920373e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_elgamal, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:40:16 compute-0 heuristic_elgamal[122904]: 167 167
Oct 02 11:40:16 compute-0 systemd[1]: libpod-ef281f88646837f66d80e237786874a95070545f785eb93d38e6da5a1920373e.scope: Deactivated successfully.
Oct 02 11:40:16 compute-0 podman[122865]: 2025-10-02 11:40:16.703017575 +0000 UTC m=+0.129537914 container attach ef281f88646837f66d80e237786874a95070545f785eb93d38e6da5a1920373e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 11:40:16 compute-0 podman[122865]: 2025-10-02 11:40:16.703797924 +0000 UTC m=+0.130318243 container died ef281f88646837f66d80e237786874a95070545f785eb93d38e6da5a1920373e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_elgamal, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 11:40:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-33e4175d9b061b9dcdd4ff5fbba8ba5ae62ad8b9b348035e8ff2998d4f161a36-merged.mount: Deactivated successfully.
Oct 02 11:40:16 compute-0 podman[122865]: 2025-10-02 11:40:16.743922219 +0000 UTC m=+0.170442538 container remove ef281f88646837f66d80e237786874a95070545f785eb93d38e6da5a1920373e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_elgamal, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:40:16 compute-0 systemd[1]: libpod-conmon-ef281f88646837f66d80e237786874a95070545f785eb93d38e6da5a1920373e.scope: Deactivated successfully.
Oct 02 11:40:16 compute-0 sudo[122971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sriwmtlhafwdnaxiiedeoenguajybcaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405216.5057833-940-171714706447643/AnsiballZ_stat.py'
Oct 02 11:40:16 compute-0 sudo[122971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:16 compute-0 python3.9[122973]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:40:16 compute-0 podman[122979]: 2025-10-02 11:40:16.955346133 +0000 UTC m=+0.112303568 container create 6f545095c0132f73d8c697d9d28b0cb7c00f2a9d764f287fc5ad25f4f2b6c364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:40:16 compute-0 podman[122979]: 2025-10-02 11:40:16.865688179 +0000 UTC m=+0.022645644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:40:16 compute-0 systemd[1]: Started libpod-conmon-6f545095c0132f73d8c697d9d28b0cb7c00f2a9d764f287fc5ad25f4f2b6c364.scope.
Oct 02 11:40:17 compute-0 sudo[122971]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:40:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4083a4bc50d8abb018b239ce7971056f6c7ced63d486a2b98484f552004a7c46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:40:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4083a4bc50d8abb018b239ce7971056f6c7ced63d486a2b98484f552004a7c46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:40:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4083a4bc50d8abb018b239ce7971056f6c7ced63d486a2b98484f552004a7c46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:40:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4083a4bc50d8abb018b239ce7971056f6c7ced63d486a2b98484f552004a7c46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:40:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4083a4bc50d8abb018b239ce7971056f6c7ced63d486a2b98484f552004a7c46/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:40:17 compute-0 podman[122979]: 2025-10-02 11:40:17.04345259 +0000 UTC m=+0.200410035 container init 6f545095c0132f73d8c697d9d28b0cb7c00f2a9d764f287fc5ad25f4f2b6c364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cohen, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 11:40:17 compute-0 podman[122979]: 2025-10-02 11:40:17.053944778 +0000 UTC m=+0.210902213 container start 6f545095c0132f73d8c697d9d28b0cb7c00f2a9d764f287fc5ad25f4f2b6c364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cohen, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 11:40:17 compute-0 podman[122979]: 2025-10-02 11:40:17.058704579 +0000 UTC m=+0.215662014 container attach 6f545095c0132f73d8c697d9d28b0cb7c00f2a9d764f287fc5ad25f4f2b6c364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cohen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 11:40:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:17.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:17 compute-0 sudo[123075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnuqxzfdspdsgqwnwivjdbhplksucgtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405216.5057833-940-171714706447643/AnsiballZ_file.py'
Oct 02 11:40:17 compute-0 sudo[123075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:17 compute-0 ceph-mon[73607]: pgmap v376: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:17 compute-0 python3.9[123077]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:40:17 compute-0 sudo[123075]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:17.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:17 compute-0 beautiful_cohen[122997]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:40:17 compute-0 beautiful_cohen[122997]: --> relative data size: 1.0
Oct 02 11:40:17 compute-0 beautiful_cohen[122997]: --> All data devices are unavailable
Oct 02 11:40:17 compute-0 systemd[1]: libpod-6f545095c0132f73d8c697d9d28b0cb7c00f2a9d764f287fc5ad25f4f2b6c364.scope: Deactivated successfully.
Oct 02 11:40:17 compute-0 podman[122979]: 2025-10-02 11:40:17.917193638 +0000 UTC m=+1.074151093 container died 6f545095c0132f73d8c697d9d28b0cb7c00f2a9d764f287fc5ad25f4f2b6c364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cohen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 11:40:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-4083a4bc50d8abb018b239ce7971056f6c7ced63d486a2b98484f552004a7c46-merged.mount: Deactivated successfully.
Oct 02 11:40:17 compute-0 podman[122979]: 2025-10-02 11:40:17.979995338 +0000 UTC m=+1.136952773 container remove 6f545095c0132f73d8c697d9d28b0cb7c00f2a9d764f287fc5ad25f4f2b6c364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cohen, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:40:17 compute-0 systemd[1]: libpod-conmon-6f545095c0132f73d8c697d9d28b0cb7c00f2a9d764f287fc5ad25f4f2b6c364.scope: Deactivated successfully.
Oct 02 11:40:18 compute-0 sudo[123251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyotsxrzyfgpadfmsyuobkcpmfyhgkqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405217.7090425-976-200992703392343/AnsiballZ_stat.py'
Oct 02 11:40:18 compute-0 sudo[123251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:18 compute-0 sudo[122722]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:18 compute-0 sudo[123254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:40:18 compute-0 sudo[123254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:40:18 compute-0 sudo[123254]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:18 compute-0 sudo[123279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:40:18 compute-0 sudo[123279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:40:18 compute-0 sudo[123279]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:18 compute-0 sudo[123304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:40:18 compute-0 sudo[123304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:40:18 compute-0 sudo[123304]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:18 compute-0 python3.9[123253]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:40:18 compute-0 sudo[123329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 11:40:18 compute-0 sudo[123329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:40:18 compute-0 sudo[123251]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:18 compute-0 sudo[123455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvqmfzcwsvisufmumabytrkyheuprpfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405217.7090425-976-200992703392343/AnsiballZ_file.py'
Oct 02 11:40:18 compute-0 sudo[123455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:18 compute-0 podman[123472]: 2025-10-02 11:40:18.468873807 +0000 UTC m=+0.033418191 container create b3e9914743ba50a59acabccb3e7183200071be8b1f48e14f4749e3e5b91c6786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kilby, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:40:18 compute-0 systemd[1]: Started libpod-conmon-b3e9914743ba50a59acabccb3e7183200071be8b1f48e14f4749e3e5b91c6786.scope.
Oct 02 11:40:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:40:18 compute-0 podman[123472]: 2025-10-02 11:40:18.530456355 +0000 UTC m=+0.095000749 container init b3e9914743ba50a59acabccb3e7183200071be8b1f48e14f4749e3e5b91c6786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kilby, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 11:40:18 compute-0 podman[123472]: 2025-10-02 11:40:18.536234601 +0000 UTC m=+0.100778985 container start b3e9914743ba50a59acabccb3e7183200071be8b1f48e14f4749e3e5b91c6786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kilby, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:40:18 compute-0 vibrant_kilby[123488]: 167 167
Oct 02 11:40:18 compute-0 systemd[1]: libpod-b3e9914743ba50a59acabccb3e7183200071be8b1f48e14f4749e3e5b91c6786.scope: Deactivated successfully.
Oct 02 11:40:18 compute-0 podman[123472]: 2025-10-02 11:40:18.540993242 +0000 UTC m=+0.105537656 container attach b3e9914743ba50a59acabccb3e7183200071be8b1f48e14f4749e3e5b91c6786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kilby, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:40:18 compute-0 podman[123472]: 2025-10-02 11:40:18.541645657 +0000 UTC m=+0.106190041 container died b3e9914743ba50a59acabccb3e7183200071be8b1f48e14f4749e3e5b91c6786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kilby, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:40:18 compute-0 podman[123472]: 2025-10-02 11:40:18.454540493 +0000 UTC m=+0.019084897 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:40:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f3d6a80dd09dec73c7addf0b7e2c64f4d643b6b2a5ebee32940bf467192ac11-merged.mount: Deactivated successfully.
Oct 02 11:40:18 compute-0 podman[123472]: 2025-10-02 11:40:18.581281593 +0000 UTC m=+0.145825977 container remove b3e9914743ba50a59acabccb3e7183200071be8b1f48e14f4749e3e5b91c6786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:40:18 compute-0 python3.9[123464]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:40:18 compute-0 systemd[1]: libpod-conmon-b3e9914743ba50a59acabccb3e7183200071be8b1f48e14f4749e3e5b91c6786.scope: Deactivated successfully.
Oct 02 11:40:18 compute-0 sudo[123455]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:18 compute-0 podman[123534]: 2025-10-02 11:40:18.717401482 +0000 UTC m=+0.034608719 container create d6031d5808aa6431a0a966e2b7bfd26c99ea018c9bc1fcccde547767b2dce3a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 11:40:18 compute-0 systemd[1]: Started libpod-conmon-d6031d5808aa6431a0a966e2b7bfd26c99ea018c9bc1fcccde547767b2dce3a7.scope.
Oct 02 11:40:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3c6b51a11f1fec10c73c60fed46511a55c46a6ddb0a87b7659694ccd886a8d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3c6b51a11f1fec10c73c60fed46511a55c46a6ddb0a87b7659694ccd886a8d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3c6b51a11f1fec10c73c60fed46511a55c46a6ddb0a87b7659694ccd886a8d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3c6b51a11f1fec10c73c60fed46511a55c46a6ddb0a87b7659694ccd886a8d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:40:18 compute-0 podman[123534]: 2025-10-02 11:40:18.787334906 +0000 UTC m=+0.104542173 container init d6031d5808aa6431a0a966e2b7bfd26c99ea018c9bc1fcccde547767b2dce3a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:40:18 compute-0 podman[123534]: 2025-10-02 11:40:18.79608386 +0000 UTC m=+0.113291097 container start d6031d5808aa6431a0a966e2b7bfd26c99ea018c9bc1fcccde547767b2dce3a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_liskov, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Oct 02 11:40:18 compute-0 podman[123534]: 2025-10-02 11:40:18.702103675 +0000 UTC m=+0.019310932 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:40:18 compute-0 podman[123534]: 2025-10-02 11:40:18.799017048 +0000 UTC m=+0.116224285 container attach d6031d5808aa6431a0a966e2b7bfd26c99ea018c9bc1fcccde547767b2dce3a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_liskov, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 11:40:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:19.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:19 compute-0 ceph-mon[73607]: pgmap v377: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:19 compute-0 sudo[123683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgulzzfcyrvkqxsnxmnzekrxnofbwsvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405219.1095755-1012-193496744708527/AnsiballZ_stat.py'
Oct 02 11:40:19 compute-0 sudo[123683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:19 compute-0 jolly_liskov[123551]: {
Oct 02 11:40:19 compute-0 jolly_liskov[123551]:     "1": [
Oct 02 11:40:19 compute-0 jolly_liskov[123551]:         {
Oct 02 11:40:19 compute-0 jolly_liskov[123551]:             "devices": [
Oct 02 11:40:19 compute-0 jolly_liskov[123551]:                 "/dev/loop3"
Oct 02 11:40:19 compute-0 jolly_liskov[123551]:             ],
Oct 02 11:40:19 compute-0 jolly_liskov[123551]:             "lv_name": "ceph_lv0",
Oct 02 11:40:19 compute-0 jolly_liskov[123551]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:40:19 compute-0 jolly_liskov[123551]:             "lv_size": "7511998464",
Oct 02 11:40:19 compute-0 jolly_liskov[123551]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:40:19 compute-0 jolly_liskov[123551]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:40:19 compute-0 jolly_liskov[123551]:             "name": "ceph_lv0",
Oct 02 11:40:19 compute-0 jolly_liskov[123551]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:40:19 compute-0 jolly_liskov[123551]:             "tags": {
Oct 02 11:40:19 compute-0 jolly_liskov[123551]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:40:19 compute-0 jolly_liskov[123551]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:40:19 compute-0 jolly_liskov[123551]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:40:19 compute-0 jolly_liskov[123551]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:40:19 compute-0 jolly_liskov[123551]:                 "ceph.cluster_name": "ceph",
Oct 02 11:40:19 compute-0 jolly_liskov[123551]:                 "ceph.crush_device_class": "",
Oct 02 11:40:19 compute-0 jolly_liskov[123551]:                 "ceph.encrypted": "0",
Oct 02 11:40:19 compute-0 jolly_liskov[123551]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:40:19 compute-0 jolly_liskov[123551]:                 "ceph.osd_id": "1",
Oct 02 11:40:19 compute-0 jolly_liskov[123551]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:40:19 compute-0 jolly_liskov[123551]:                 "ceph.type": "block",
Oct 02 11:40:19 compute-0 jolly_liskov[123551]:                 "ceph.vdo": "0"
Oct 02 11:40:19 compute-0 jolly_liskov[123551]:             },
Oct 02 11:40:19 compute-0 jolly_liskov[123551]:             "type": "block",
Oct 02 11:40:19 compute-0 jolly_liskov[123551]:             "vg_name": "ceph_vg0"
Oct 02 11:40:19 compute-0 jolly_liskov[123551]:         }
Oct 02 11:40:19 compute-0 jolly_liskov[123551]:     ]
Oct 02 11:40:19 compute-0 jolly_liskov[123551]: }
Oct 02 11:40:19 compute-0 podman[123534]: 2025-10-02 11:40:19.543985198 +0000 UTC m=+0.861192435 container died d6031d5808aa6431a0a966e2b7bfd26c99ea018c9bc1fcccde547767b2dce3a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_liskov, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:40:19 compute-0 systemd[1]: libpod-d6031d5808aa6431a0a966e2b7bfd26c99ea018c9bc1fcccde547767b2dce3a7.scope: Deactivated successfully.
Oct 02 11:40:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3c6b51a11f1fec10c73c60fed46511a55c46a6ddb0a87b7659694ccd886a8d5-merged.mount: Deactivated successfully.
Oct 02 11:40:19 compute-0 podman[123534]: 2025-10-02 11:40:19.602247049 +0000 UTC m=+0.919454286 container remove d6031d5808aa6431a0a966e2b7bfd26c99ea018c9bc1fcccde547767b2dce3a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_liskov, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 11:40:19 compute-0 systemd[1]: libpod-conmon-d6031d5808aa6431a0a966e2b7bfd26c99ea018c9bc1fcccde547767b2dce3a7.scope: Deactivated successfully.
Oct 02 11:40:19 compute-0 sudo[123329]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:19 compute-0 python3.9[123685]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:40:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:40:19 compute-0 sudo[123702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:40:19 compute-0 sudo[123702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:40:19 compute-0 sudo[123702]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:19 compute-0 sudo[123709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:40:19 compute-0 sudo[123709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:40:19 compute-0 sudo[123709]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:19 compute-0 sudo[123683]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:19 compute-0 sudo[123754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:40:19 compute-0 sudo[123754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:40:19 compute-0 sudo[123754]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:19 compute-0 sudo[123758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:40:19 compute-0 sudo[123758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:40:19 compute-0 sudo[123758]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:19 compute-0 sudo[123827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:40:19 compute-0 sudo[123827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:40:19 compute-0 sudo[123827]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:19 compute-0 sudo[123859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 11:40:19 compute-0 sudo[123859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:40:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:40:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:19.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:40:19 compute-0 sudo[123927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lafzqflxsfkofofptmancflcnnctwitk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405219.1095755-1012-193496744708527/AnsiballZ_file.py'
Oct 02 11:40:19 compute-0 sudo[123927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:20 compute-0 python3.9[123929]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:40:20 compute-0 sudo[123927]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:20 compute-0 podman[123970]: 2025-10-02 11:40:20.097190969 +0000 UTC m=+0.035889659 container create 11ce8a45d98a196addb28912e92a52b7e4fd5c614a57ee8802b4c87c6834b90b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_yalow, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 11:40:20 compute-0 systemd[1]: Started libpod-conmon-11ce8a45d98a196addb28912e92a52b7e4fd5c614a57ee8802b4c87c6834b90b.scope.
Oct 02 11:40:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:40:20 compute-0 podman[123970]: 2025-10-02 11:40:20.160900368 +0000 UTC m=+0.099599078 container init 11ce8a45d98a196addb28912e92a52b7e4fd5c614a57ee8802b4c87c6834b90b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_yalow, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 11:40:20 compute-0 podman[123970]: 2025-10-02 11:40:20.167539773 +0000 UTC m=+0.106238463 container start 11ce8a45d98a196addb28912e92a52b7e4fd5c614a57ee8802b4c87c6834b90b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_yalow, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:40:20 compute-0 podman[123970]: 2025-10-02 11:40:20.170729637 +0000 UTC m=+0.109428347 container attach 11ce8a45d98a196addb28912e92a52b7e4fd5c614a57ee8802b4c87c6834b90b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:40:20 compute-0 clever_yalow[124009]: 167 167
Oct 02 11:40:20 compute-0 systemd[1]: libpod-11ce8a45d98a196addb28912e92a52b7e4fd5c614a57ee8802b4c87c6834b90b.scope: Deactivated successfully.
Oct 02 11:40:20 compute-0 podman[123970]: 2025-10-02 11:40:20.171999077 +0000 UTC m=+0.110697767 container died 11ce8a45d98a196addb28912e92a52b7e4fd5c614a57ee8802b4c87c6834b90b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 11:40:20 compute-0 podman[123970]: 2025-10-02 11:40:20.081420251 +0000 UTC m=+0.020118951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:40:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-83e23b16465c2fc9a4730f4eeb9f1d26515f3dff4d6e7f8ee84e6a624cb90ab8-merged.mount: Deactivated successfully.
Oct 02 11:40:20 compute-0 podman[123970]: 2025-10-02 11:40:20.212652727 +0000 UTC m=+0.151351457 container remove 11ce8a45d98a196addb28912e92a52b7e4fd5c614a57ee8802b4c87c6834b90b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:40:20 compute-0 systemd[1]: libpod-conmon-11ce8a45d98a196addb28912e92a52b7e4fd5c614a57ee8802b4c87c6834b90b.scope: Deactivated successfully.
Oct 02 11:40:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:20 compute-0 podman[124034]: 2025-10-02 11:40:20.372809577 +0000 UTC m=+0.035393398 container create fb9412f5ad49e9126461a2ad8dc5061c415aa04c021f9b4cd3ac92e3f0a7888b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_carver, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:40:20 compute-0 systemd[1]: Started libpod-conmon-fb9412f5ad49e9126461a2ad8dc5061c415aa04c021f9b4cd3ac92e3f0a7888b.scope.
Oct 02 11:40:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:40:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0535c047ae9a226f7027b468e0f8714e7c5440ec023b89f80d9667e6cd3090d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:40:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0535c047ae9a226f7027b468e0f8714e7c5440ec023b89f80d9667e6cd3090d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:40:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0535c047ae9a226f7027b468e0f8714e7c5440ec023b89f80d9667e6cd3090d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:40:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0535c047ae9a226f7027b468e0f8714e7c5440ec023b89f80d9667e6cd3090d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:40:20 compute-0 podman[124034]: 2025-10-02 11:40:20.449884067 +0000 UTC m=+0.112467908 container init fb9412f5ad49e9126461a2ad8dc5061c415aa04c021f9b4cd3ac92e3f0a7888b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:40:20 compute-0 podman[124034]: 2025-10-02 11:40:20.356967527 +0000 UTC m=+0.019551368 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:40:20 compute-0 podman[124034]: 2025-10-02 11:40:20.462225445 +0000 UTC m=+0.124809266 container start fb9412f5ad49e9126461a2ad8dc5061c415aa04c021f9b4cd3ac92e3f0a7888b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_carver, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:40:20 compute-0 podman[124034]: 2025-10-02 11:40:20.46500094 +0000 UTC m=+0.127584751 container attach fb9412f5ad49e9126461a2ad8dc5061c415aa04c021f9b4cd3ac92e3f0a7888b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_carver, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 11:40:20 compute-0 sudo[124180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvxkgyqbfwxawdzsafsymbnvifclzbed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405220.5808122-1051-68216327668250/AnsiballZ_command.py'
Oct 02 11:40:20 compute-0 sudo[124180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:21 compute-0 python3.9[124182]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:40:21 compute-0 sudo[124180]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:21.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:21 compute-0 goofy_carver[124050]: {
Oct 02 11:40:21 compute-0 goofy_carver[124050]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 11:40:21 compute-0 goofy_carver[124050]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:40:21 compute-0 goofy_carver[124050]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:40:21 compute-0 goofy_carver[124050]:         "osd_id": 1,
Oct 02 11:40:21 compute-0 goofy_carver[124050]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:40:21 compute-0 goofy_carver[124050]:         "type": "bluestore"
Oct 02 11:40:21 compute-0 goofy_carver[124050]:     }
Oct 02 11:40:21 compute-0 goofy_carver[124050]: }
Oct 02 11:40:21 compute-0 systemd[1]: libpod-fb9412f5ad49e9126461a2ad8dc5061c415aa04c021f9b4cd3ac92e3f0a7888b.scope: Deactivated successfully.
Oct 02 11:40:21 compute-0 podman[124034]: 2025-10-02 11:40:21.306261179 +0000 UTC m=+0.968845000 container died fb9412f5ad49e9126461a2ad8dc5061c415aa04c021f9b4cd3ac92e3f0a7888b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_carver, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 11:40:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-0535c047ae9a226f7027b468e0f8714e7c5440ec023b89f80d9667e6cd3090d4-merged.mount: Deactivated successfully.
Oct 02 11:40:21 compute-0 podman[124034]: 2025-10-02 11:40:21.361236353 +0000 UTC m=+1.023820174 container remove fb9412f5ad49e9126461a2ad8dc5061c415aa04c021f9b4cd3ac92e3f0a7888b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:40:21 compute-0 systemd[1]: libpod-conmon-fb9412f5ad49e9126461a2ad8dc5061c415aa04c021f9b4cd3ac92e3f0a7888b.scope: Deactivated successfully.
Oct 02 11:40:21 compute-0 sudo[123859]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:40:21 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:40:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:40:21 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:40:21 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev e1d59274-531d-49e4-9c5b-17cff0116120 does not exist
Oct 02 11:40:21 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 25a5eb42-0260-411a-b446-98ef6e4b5757 does not exist
Oct 02 11:40:21 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 79943194-1b3e-4935-b9de-64d9e42e1649 does not exist
Oct 02 11:40:21 compute-0 ceph-mon[73607]: pgmap v378: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:21 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:40:21 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:40:21 compute-0 sudo[124289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:40:21 compute-0 sudo[124289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:40:21 compute-0 sudo[124289]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:21 compute-0 sudo[124315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:40:21 compute-0 sudo[124315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:40:21 compute-0 sudo[124315]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:21 compute-0 sudo[124413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfxtakjciwuiyprkdocmthlzxxaeqoxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405221.3633478-1075-163888903237139/AnsiballZ_blockinfile.py'
Oct 02 11:40:21 compute-0 sudo[124413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000022s ======
Oct 02 11:40:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:21.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Oct 02 11:40:22 compute-0 python3.9[124415]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:40:22 compute-0 sudo[124413]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:22 compute-0 sudo[124566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiiffjegbntrtybzqbxiizbjrljcvpmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405222.265375-1102-215364678622661/AnsiballZ_file.py'
Oct 02 11:40:22 compute-0 sudo[124566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:22 compute-0 python3.9[124568]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:40:22 compute-0 sudo[124566]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:40:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:23.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:40:23 compute-0 sudo[124718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsknizrtedztzzkfsvdxfjvfqwuoloxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405222.933028-1102-130879429519646/AnsiballZ_file.py'
Oct 02 11:40:23 compute-0 sudo[124718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:23 compute-0 python3.9[124720]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:40:23 compute-0 ceph-mon[73607]: pgmap v379: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:23 compute-0 sudo[124718]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:23.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:24 compute-0 sudo[124870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aonigkqgxxscsfstlyhpaihcgfbbaefy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405223.6800616-1147-177614924234391/AnsiballZ_mount.py'
Oct 02 11:40:24 compute-0 sudo[124870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:24 compute-0 python3.9[124872]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 02 11:40:24 compute-0 sudo[124870]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:40:24 compute-0 sudo[125023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqacgfvupoeamlnzzhyvpywbohkqkoxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405224.4658983-1147-123317938635090/AnsiballZ_mount.py'
Oct 02 11:40:24 compute-0 sudo[125023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:24 compute-0 python3.9[125025]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 02 11:40:24 compute-0 sudo[125023]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:25.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:25 compute-0 ceph-mon[73607]: pgmap v380: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:25 compute-0 sshd-session[116883]: Connection closed by 192.168.122.30 port 33594
Oct 02 11:40:25 compute-0 sshd-session[116880]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:40:25 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Oct 02 11:40:25 compute-0 systemd[1]: session-41.scope: Consumed 27.995s CPU time.
Oct 02 11:40:25 compute-0 systemd-logind[789]: Session 41 logged out. Waiting for processes to exit.
Oct 02 11:40:25 compute-0 systemd-logind[789]: Removed session 41.
Oct 02 11:40:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:25.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:27.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:27 compute-0 ceph-mon[73607]: pgmap v381: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:27.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:29.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:29 compute-0 ceph-mon[73607]: pgmap v382: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:40:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:29.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:31.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:31 compute-0 sshd-session[125053]: Accepted publickey for zuul from 192.168.122.30 port 52828 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 11:40:31 compute-0 systemd-logind[789]: New session 42 of user zuul.
Oct 02 11:40:31 compute-0 systemd[1]: Started Session 42 of User zuul.
Oct 02 11:40:31 compute-0 sshd-session[125053]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:40:31 compute-0 ceph-mon[73607]: pgmap v383: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:31.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:31 compute-0 sudo[125206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fixmhixnucogkanywpskdxnvfegbjfbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405231.371472-23-264122592395370/AnsiballZ_tempfile.py'
Oct 02 11:40:31 compute-0 sudo[125206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:32 compute-0 python3.9[125208]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 02 11:40:32 compute-0 sudo[125206]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:32 compute-0 sshd-session[70594]: Received disconnect from 38.129.56.219 port 49246:11: disconnected by user
Oct 02 11:40:32 compute-0 sshd-session[70594]: Disconnected from user zuul 38.129.56.219 port 49246
Oct 02 11:40:32 compute-0 sshd-session[70591]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:40:32 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Oct 02 11:40:32 compute-0 systemd-logind[789]: Session 19 logged out. Waiting for processes to exit.
Oct 02 11:40:32 compute-0 systemd[1]: session-19.scope: Consumed 1min 14.346s CPU time.
Oct 02 11:40:32 compute-0 systemd-logind[789]: Removed session 19.
Oct 02 11:40:32 compute-0 sudo[125359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuwlsazfwqmeelnqoebbeocjhxaexcau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405232.3137922-59-64879477746487/AnsiballZ_stat.py'
Oct 02 11:40:32 compute-0 sudo[125359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:32 compute-0 python3.9[125361]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:40:33 compute-0 sudo[125359]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:33.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:33 compute-0 ceph-mon[73607]: pgmap v384: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:33 compute-0 sudo[125513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdqmdgcblwyqzvmcnjytrwiphrltaxfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405233.2168798-83-70836944146558/AnsiballZ_slurp.py'
Oct 02 11:40:33 compute-0 sudo[125513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:33.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:33 compute-0 python3.9[125515]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Oct 02 11:40:33 compute-0 sudo[125513]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:34 compute-0 sudo[125666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcxilmuvzikactsrcnikblawmakurpgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405234.144498-107-85049448475532/AnsiballZ_stat.py'
Oct 02 11:40:34 compute-0 sudo[125666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:40:34 compute-0 python3.9[125668]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.b1ln9wr_ follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:40:34 compute-0 sudo[125666]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:35.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:35 compute-0 sudo[125791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtyicawhxxgdjawfbotaknqsaneypfol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405234.144498-107-85049448475532/AnsiballZ_copy.py'
Oct 02 11:40:35 compute-0 sudo[125791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:35 compute-0 python3.9[125793]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.b1ln9wr_ mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405234.144498-107-85049448475532/.source.b1ln9wr_ _original_basename=.kvr3npuo follow=False checksum=279e85be4c03ef167cd5d1085e11bbf46760f3f2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:40:35 compute-0 sudo[125791]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:35 compute-0 ceph-mon[73607]: pgmap v385: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:35.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:36 compute-0 sudo[125944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssboekuexhfuijsvhkicxeizrijotzcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405235.8276744-152-140023246628209/AnsiballZ_setup.py'
Oct 02 11:40:36 compute-0 sudo[125944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:36 compute-0 python3.9[125946]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:40:36 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 02 11:40:36 compute-0 sudo[125944]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:36 compute-0 ceph-mon[73607]: pgmap v386: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:37.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:37 compute-0 sudo[126098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzdlamfcqvnpmagcwoacqppmyykknuuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405237.0893533-177-191295768819273/AnsiballZ_blockinfile.py'
Oct 02 11:40:37 compute-0 sudo[126098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:37 compute-0 python3.9[126100]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFg5rufAFy1itLjBBGlAJUDsQsaZUavZeI3stNJBLolkBBMB4sBpwAvQFbu2iUhtVavUC7q9xD2LsX0DVBu9DCaQn6tETqUUvMQqzvmaXd34gwo5fH6vo+bjqVdZEih0pIVI1O2OfOUvnv2MFLdKx8MWLQd54beGjWQsC3xCnYVuh0W/aAQtRC2EA77nBo+r40u5V3HXOhdmUbFNvL0r6I8FwP4IvbKC5jkBTtqIzewh+/cyJrURCh0aCpeUjBqNqw3ADhtuR2h5n3ioq+IwPXbhHViJUWQyJ5XKmlSzupEEYA+RV8i1Y3eHJK2RuYlCXkpRP3MEsyBxmISTPhVdQwfxClvyi/mTQkl6k5XFGyZher7KbE6lx4qzp8iCOyOWkw32N3tG0AlnOtPI5HJw8uKbwWl2Apb7RncDQ5fpNOKNFcB1sg61g2Vvew7xJs62OxhkOTiSkEEUYoFfXAqNLiH8gC0+Go12qYleZKbfzL00BDT2boQ2UxYn2rWK7YifU=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB29M+5Yr1BRNmm2RoLe921umFtraZRFTbdptrBdgsAV
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAv7vUjfSNyE5eIqsBh1jfLF/N1YKOXT7KtCRIxAQ1i9+ljB9j4j/dQgL6TGk3m+hQRPyAVxTDwUpeBxHWIpFjU=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2ry6wCZyHJmZsI4Z83U5DYzCaQhL5JmDbykEZokepEcnLFJt20bbnTU0eQzXJylkCgp7rhmpZo7V7qVNnZUMI3aLUHK30Yr5jzQVofHBRg6ZnAIq1MAwqwGH1s6vfNo+//zth4OMHvolMSEO6zSmOWeAsuHM2DTEJ6IdRasKfhOCc3oI/Tcf5vOUyVGg/BH+fFOHKzPiyJNXozsvw2u4ppfdkMJvVC9w2oTNHMIGcDxSsx0zD2bLdYe5l23tFIOaBM149ktg5KPPsKYyQFymOi5qJHHnf9027MqZ4N7Z9SYuQrqt2nY4C/XmaVFOmUIFNNMZ5qMWDsc38V9cHCgurSaMsQ4em1srXr9nzADLh9bw4WksIRfrtt3twMp7FG9fMsw8rdmFt0+4/IdHr/3wCmHeF07qp10kJPXa5z9dApoIKiQlbIl+UCzlaN5tHD6vb4q0MyhqAtU4mSA1zGz67c2lLSGbF4FTgU9yza15FZjHzQ0ArNu/1KIheA8nrpkE=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHwAEaDivXDvJkCgJw1MVhYQArg6qfdDb4SKBZRPdoOc
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNMghqQyWdigdn5yyuBSIQ3tHLq/tZwQO222aoRtckuDI9Ml6snE/xKJ7YWmTvRTsqj2tqCqXIllFFfreYY7Apw=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpTq9G8ymc65djWd0YMUA1/KMKQbBxw7LoyOCyAnPotUx6UyYfBtjYX5I4TzqzEugao1w+4AHDZ5XKSwr8sv9kaSGm0ERmNxz22+5cmKwWxcvUfNGQQXbk6gk6z5p0qpH/Ue9e19xDUC+RDUMGcwrysoGQ05aVcGDaEmNUxvYjj0UUfs45KX/pHPk5xQ4c0WjiL0BfzPJmphY2PAj6O9b4iFA3HjIJgvQ3+i3jEOkvA1FsXm5s7O1/wEjqwsdfKPlX0LUuCqXyxI4uhWY16Ofi89lEtsdQRwFyoZcDMJUDHMH8oJSopUNwwMEe7UBD1MHJSIzrd6NUGnvRjhqH6dE/IoT2X3f4JN/Six+J9ayDqiIkd1QNsJzPBr6G2Lj/dQbUusb3nXhPk5TXKMOXm5i+J940nYQv8/Y9rf2H1qltGaDEOS95ktKpcL6EVplOsQand/Qmb/ShKbiAo2dr3YC3v/FFE2AAj+0Dnh4xob14bhivkYHDhIF0zyzcVGhHZXc=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICufzWCrq7lQCIqxq8UNP+WfGRQD+uOEPLr+ZneqofrM
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHP494uEOdMq07v1W25s7bKFki4bQuHkde7xWzYJuUT44SD4tSCrPbQiOkLCqtg9H5yxKL0Ovnl22PYLf1HMKAs=
                                              create=True mode=0644 path=/tmp/ansible.b1ln9wr_ state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:40:37 compute-0 sudo[126098]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:37.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:38 compute-0 sudo[126251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zporwuoeyzzmyamwynhxiddrrshbhevd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405237.965421-201-235968456397386/AnsiballZ_command.py'
Oct 02 11:40:38 compute-0 sudo[126251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:38 compute-0 python3.9[126253]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.b1ln9wr_' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:40:38 compute-0 sudo[126251]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:39.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:39 compute-0 sudo[126405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmjblfhritcuocxfelsifwjyylvhxxvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405238.9243746-225-260153060150401/AnsiballZ_file.py'
Oct 02 11:40:39 compute-0 sudo[126405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:39 compute-0 ceph-mon[73607]: pgmap v387: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:39 compute-0 python3.9[126407]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.b1ln9wr_ state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:40:39 compute-0 sudo[126405]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:39.654667) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405239654709, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 575, "num_deletes": 251, "total_data_size": 694093, "memory_usage": 704432, "flush_reason": "Manual Compaction"}
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405239660455, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 503823, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9919, "largest_seqno": 10493, "table_properties": {"data_size": 500929, "index_size": 866, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7335, "raw_average_key_size": 19, "raw_value_size": 494999, "raw_average_value_size": 1334, "num_data_blocks": 37, "num_entries": 371, "num_filter_entries": 371, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405202, "oldest_key_time": 1759405202, "file_creation_time": 1759405239, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 5832 microseconds, and 2084 cpu microseconds.
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:39.660497) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 503823 bytes OK
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:39.660519) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:39.661816) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:39.661850) EVENT_LOG_v1 {"time_micros": 1759405239661844, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:39.661868) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 690957, prev total WAL file size 690957, number of live WAL files 2.
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:39.662396) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(492KB)], [23(9620KB)]
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405239662433, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 10355602, "oldest_snapshot_seqno": -1}
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3655 keys, 7502084 bytes, temperature: kUnknown
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405239727332, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 7502084, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7474844, "index_size": 17018, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9157, "raw_key_size": 88957, "raw_average_key_size": 24, "raw_value_size": 7405916, "raw_average_value_size": 2026, "num_data_blocks": 747, "num_entries": 3655, "num_filter_entries": 3655, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759405239, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:39.727574) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 7502084 bytes
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:39.730686) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.3 rd, 115.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.4 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(35.4) write-amplify(14.9) OK, records in: 4160, records dropped: 505 output_compression: NoCompression
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:39.730712) EVENT_LOG_v1 {"time_micros": 1759405239730700, "job": 8, "event": "compaction_finished", "compaction_time_micros": 64987, "compaction_time_cpu_micros": 21100, "output_level": 6, "num_output_files": 1, "total_output_size": 7502084, "num_input_records": 4160, "num_output_records": 3655, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405239730956, "job": 8, "event": "table_file_deletion", "file_number": 25}
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405239733150, "job": 8, "event": "table_file_deletion", "file_number": 23}
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:39.662325) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:39.733261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:39.733267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:39.733269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:39.733270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:40:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:40:39.733271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:40:39 compute-0 sudo[126432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:40:39 compute-0 sudo[126432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:40:39 compute-0 sudo[126432]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:39.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:39 compute-0 sudo[126457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:40:39 compute-0 sudo[126457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:40:39 compute-0 sudo[126457]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:40 compute-0 sshd-session[125056]: Connection closed by 192.168.122.30 port 52828
Oct 02 11:40:40 compute-0 sshd-session[125053]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:40:40 compute-0 systemd-logind[789]: Session 42 logged out. Waiting for processes to exit.
Oct 02 11:40:40 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Oct 02 11:40:40 compute-0 systemd[1]: session-42.scope: Consumed 4.638s CPU time.
Oct 02 11:40:40 compute-0 systemd-logind[789]: Removed session 42.
Oct 02 11:40:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:40 compute-0 ceph-mon[73607]: pgmap v388: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:41.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:41.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_11:40:42
Oct 02 11:40:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:40:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 11:40:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.log', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'volumes', 'backups', 'default.rgw.control']
Oct 02 11:40:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:40:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:40:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:40:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:40:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:40:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:40:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:40:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:40:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:40:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:40:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:40:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:40:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:40:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:40:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:40:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:40:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:40:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:43.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:43 compute-0 ceph-mon[73607]: pgmap v389: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000022s ======
Oct 02 11:40:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:43.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Oct 02 11:40:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:40:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:40:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:45.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:40:45 compute-0 sshd-session[126485]: Accepted publickey for zuul from 192.168.122.30 port 56288 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 11:40:45 compute-0 systemd-logind[789]: New session 43 of user zuul.
Oct 02 11:40:45 compute-0 systemd[1]: Started Session 43 of User zuul.
Oct 02 11:40:45 compute-0 sshd-session[126485]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:40:45 compute-0 ceph-mon[73607]: pgmap v390: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:45.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:46 compute-0 python3.9[126638]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:40:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:47.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:47 compute-0 sudo[126793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhwrnirriuwwtmusattmlkzofmyaluhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405246.8266256-60-265748420233917/AnsiballZ_systemd.py'
Oct 02 11:40:47 compute-0 sudo[126793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:47 compute-0 ceph-mon[73607]: pgmap v391: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:47 compute-0 python3.9[126795]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 02 11:40:47 compute-0 sudo[126793]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:47.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:48 compute-0 sudo[126947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxevoxzvquuodvwfgdwmenctsmhkuxfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405247.9995613-84-67774837163014/AnsiballZ_systemd.py'
Oct 02 11:40:48 compute-0 sudo[126947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:48 compute-0 python3.9[126949]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 11:40:48 compute-0 sudo[126947]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 11:40:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2319 writes, 10K keys, 2318 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 2319 writes, 2318 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2319 writes, 10K keys, 2318 commit groups, 1.0 writes per commit group, ingest: 13.64 MB, 0.02 MB/s
                                           Interval WAL: 2319 writes, 2318 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    149.8      0.08              0.02         4    0.019       0      0       0.0       0.0
                                             L6      1/0    7.15 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.1    156.5    132.5      0.18              0.06         3    0.061     11K   1315       0.0       0.0
                                            Sum      1/0    7.15 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1    110.1    137.7      0.26              0.08         7    0.037     11K   1315       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1    112.0    139.8      0.25              0.08         6    0.042     11K   1315       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    156.5    132.5      0.18              0.06         3    0.061     11K   1315       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    158.2      0.07              0.02         3    0.024       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.011, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.3 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5581be5e11f0#2 capacity: 308.00 MB usage: 1.07 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(60,964.59 KB,0.30584%) FilterBlock(8,40.61 KB,0.0128758%) IndexBlock(8,91.70 KB,0.0290759%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 11:40:49 compute-0 sudo[127101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glimnpnfvksgdxjeoqukllamchjxnaec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405248.8459756-111-123715337131432/AnsiballZ_command.py'
Oct 02 11:40:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:49.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:49 compute-0 sudo[127101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:49 compute-0 python3.9[127103]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:40:49 compute-0 sudo[127101]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:40:49 compute-0 ceph-mon[73607]: pgmap v392: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:49.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:50 compute-0 sudo[127254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frhqvmmalrswdgekqyuuozeptlkmznqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405249.6876478-135-276819454652685/AnsiballZ_stat.py'
Oct 02 11:40:50 compute-0 sudo[127254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:50 compute-0 python3.9[127256]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:40:50 compute-0 sudo[127254]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:50 compute-0 ceph-mon[73607]: pgmap v393: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:50 compute-0 sudo[127407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwuztvynqsovdpvuqfkurubjrpsadmlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405250.5053546-162-246857806761686/AnsiballZ_file.py'
Oct 02 11:40:50 compute-0 sudo[127407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:51 compute-0 python3.9[127409]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:40:51 compute-0 sudo[127407]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:51.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:51 compute-0 sshd-session[126488]: Connection closed by 192.168.122.30 port 56288
Oct 02 11:40:51 compute-0 sshd-session[126485]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:40:51 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Oct 02 11:40:51 compute-0 systemd[1]: session-43.scope: Consumed 3.596s CPU time.
Oct 02 11:40:51 compute-0 systemd-logind[789]: Session 43 logged out. Waiting for processes to exit.
Oct 02 11:40:51 compute-0 systemd-logind[789]: Removed session 43.
Oct 02 11:40:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:51.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:53.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:53 compute-0 ceph-mon[73607]: pgmap v394: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:40:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:40:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:40:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:40:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:40:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:40:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:40:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:40:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:40:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:40:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:40:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:40:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:40:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:40:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:40:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:40:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 11:40:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:40:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:40:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:40:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:40:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:40:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:40:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:53.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:40:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:55.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:55 compute-0 ceph-mon[73607]: pgmap v395: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:55.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:55 compute-0 sshd-session[127437]: Accepted publickey for zuul from 192.168.122.30 port 34238 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 11:40:55 compute-0 systemd-logind[789]: New session 44 of user zuul.
Oct 02 11:40:55 compute-0 systemd[1]: Started Session 44 of User zuul.
Oct 02 11:40:55 compute-0 sshd-session[127437]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:40:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:56 compute-0 python3.9[127591]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:40:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:57.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:57 compute-0 ceph-mon[73607]: pgmap v396: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:57 compute-0 sudo[127745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktjjoyenfptwxlgvvkonztriowyyokhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405257.4538233-67-241524394012584/AnsiballZ_setup.py'
Oct 02 11:40:57 compute-0 sudo[127745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:57.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:58 compute-0 python3.9[127747]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:40:58 compute-0 sudo[127745]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:58 compute-0 ceph-mon[73607]: pgmap v397: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:40:58 compute-0 sudo[127830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bisoxrsjvkzsvpirehexhdygokhfgmrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405257.4538233-67-241524394012584/AnsiballZ_dnf.py'
Oct 02 11:40:58 compute-0 sudo[127830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:58 compute-0 python3.9[127832]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 02 11:40:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:40:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:59.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:40:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:40:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:40:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:40:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:59.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:41:00 compute-0 sudo[127834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:41:00 compute-0 sudo[127834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:41:00 compute-0 sudo[127834]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:00 compute-0 sudo[127830]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:00 compute-0 sudo[127859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:41:00 compute-0 sudo[127859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:41:00 compute-0 sudo[127859]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:00 compute-0 python3.9[128034]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:41:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:01.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:01 compute-0 ceph-mon[73607]: pgmap v398: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:01.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:02 compute-0 python3.9[128185]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 11:41:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:03.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:03 compute-0 python3.9[128336]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:41:03 compute-0 ceph-mon[73607]: pgmap v399: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:03.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:04 compute-0 python3.9[128486]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:41:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:04 compute-0 sshd-session[127440]: Connection closed by 192.168.122.30 port 34238
Oct 02 11:41:04 compute-0 sshd-session[127437]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:41:04 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Oct 02 11:41:04 compute-0 systemd[1]: session-44.scope: Consumed 5.636s CPU time.
Oct 02 11:41:04 compute-0 systemd-logind[789]: Session 44 logged out. Waiting for processes to exit.
Oct 02 11:41:04 compute-0 systemd-logind[789]: Removed session 44.
Oct 02 11:41:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:41:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:05.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:05 compute-0 ceph-mon[73607]: pgmap v400: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:05.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:07.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:07 compute-0 ceph-mon[73607]: pgmap v401: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:07.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:09.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:09 compute-0 sshd-session[128514]: Accepted publickey for zuul from 192.168.122.30 port 50612 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 11:41:09 compute-0 systemd-logind[789]: New session 45 of user zuul.
Oct 02 11:41:09 compute-0 systemd[1]: Started Session 45 of User zuul.
Oct 02 11:41:09 compute-0 sshd-session[128514]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:41:09 compute-0 ceph-mon[73607]: pgmap v402: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:41:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:09.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:10 compute-0 python3.9[128667]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:41:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:11.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:11 compute-0 ceph-mon[73607]: pgmap v403: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:11.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:11 compute-0 sudo[128822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogxhhcezervwogfsztlresjviybtytmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405271.580892-116-166317062022192/AnsiballZ_file.py'
Oct 02 11:41:11 compute-0 sudo[128822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:12 compute-0 python3.9[128824]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:41:12 compute-0 sudo[128822]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:12 compute-0 sudo[128975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zthsacdfwvyjerahqjscerkpqlywrxqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405272.2996516-116-218387469371358/AnsiballZ_file.py'
Oct 02 11:41:12 compute-0 sudo[128975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:41:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:41:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:41:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:41:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:41:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:41:12 compute-0 python3.9[128977]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:41:12 compute-0 sudo[128975]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:13.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:13 compute-0 sudo[129127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuqyjytzmhqhdvzfbdibqwsvmijbgmep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405272.9318335-158-27623935064480/AnsiballZ_stat.py'
Oct 02 11:41:13 compute-0 sudo[129127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:13 compute-0 ceph-mon[73607]: pgmap v404: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:13 compute-0 python3.9[129129]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:13 compute-0 sudo[129127]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:13.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:14 compute-0 sudo[129250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpjlmfuuxgrqpictksvageijqaxejktv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405272.9318335-158-27623935064480/AnsiballZ_copy.py'
Oct 02 11:41:14 compute-0 sudo[129250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:14 compute-0 python3.9[129252]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405272.9318335-158-27623935064480/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=3628efc44a5850f18f36ce8fcda6309d2ccb7cb9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:14 compute-0 sudo[129250]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:41:14 compute-0 sudo[129403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpaqxuwmwehhwyoxjslmpjhmvyidgcgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405274.5709763-158-92458814614191/AnsiballZ_stat.py'
Oct 02 11:41:14 compute-0 sudo[129403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:15 compute-0 python3.9[129405]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:15 compute-0 sudo[129403]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:15.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:15 compute-0 sudo[129526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzojvuudkfzkztsskaptdqrxfuhofmxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405274.5709763-158-92458814614191/AnsiballZ_copy.py'
Oct 02 11:41:15 compute-0 sudo[129526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:15 compute-0 python3.9[129528]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405274.5709763-158-92458814614191/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=7abde3fe8c59ace3eb9cb99b75c7e56e5fa913ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:15 compute-0 ceph-mon[73607]: pgmap v405: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:15 compute-0 sudo[129526]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:15.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:16 compute-0 sudo[129678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxgkfdljzmrtwnirbarfhsedfaisbsdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405275.8140397-158-83174194417898/AnsiballZ_stat.py'
Oct 02 11:41:16 compute-0 sudo[129678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:16 compute-0 python3.9[129680]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:16 compute-0 sudo[129678]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:16 compute-0 sudo[129802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmczqoiuhvrjfydjvwpnvubqoxjnfolm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405275.8140397-158-83174194417898/AnsiballZ_copy.py'
Oct 02 11:41:16 compute-0 sudo[129802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:16 compute-0 ceph-mon[73607]: pgmap v406: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:16 compute-0 python3.9[129804]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405275.8140397-158-83174194417898/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=cd703aff8c5cd0367963b50cf87fa123e571ddeb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:16 compute-0 sudo[129802]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:17 compute-0 sudo[129954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zexydblolnvntjrbjqjizdpxgllqlvjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405276.9677927-290-202394114439980/AnsiballZ_file.py'
Oct 02 11:41:17 compute-0 sudo[129954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:17.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:17 compute-0 python3.9[129956]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:41:17 compute-0 sudo[129954]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:17 compute-0 sudo[130106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idattqcaxwnulzdwjeasqdfdzfbprcvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405277.5355277-290-86959022435855/AnsiballZ_file.py'
Oct 02 11:41:17 compute-0 sudo[130106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:17.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:17 compute-0 python3.9[130108]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:41:18 compute-0 sudo[130106]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:18 compute-0 sudo[130259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njwhavbfwlhzmnvkaspbekqucfmqdped ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405278.1934605-331-78144149149312/AnsiballZ_stat.py'
Oct 02 11:41:18 compute-0 sudo[130259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:18 compute-0 python3.9[130261]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:18 compute-0 sudo[130259]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:18 compute-0 sudo[130382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfnuesecrwgweidmvcmbeildpeimocyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405278.1934605-331-78144149149312/AnsiballZ_copy.py'
Oct 02 11:41:18 compute-0 sudo[130382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:19 compute-0 python3.9[130384]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405278.1934605-331-78144149149312/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=323447b54d15888de9de7efa850115ce86f86567 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:19.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:19 compute-0 sudo[130382]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:19 compute-0 ceph-mon[73607]: pgmap v407: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:41:19 compute-0 sudo[130534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrymkywrrwzqpaontytxnyemlbekurxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405279.3941875-331-188654211223500/AnsiballZ_stat.py'
Oct 02 11:41:19 compute-0 sudo[130534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:19 compute-0 python3.9[130536]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:19 compute-0 sudo[130534]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:19.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:20 compute-0 sudo[130657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbitwyacwmsvdhkoyrjuzvbeeiidfibw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405279.3941875-331-188654211223500/AnsiballZ_copy.py'
Oct 02 11:41:20 compute-0 sudo[130657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:20 compute-0 sudo[130659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:41:20 compute-0 sudo[130659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:41:20 compute-0 sudo[130659]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:20 compute-0 sudo[130685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:41:20 compute-0 sudo[130685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:41:20 compute-0 sudo[130685]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:20 compute-0 python3.9[130660]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405279.3941875-331-188654211223500/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=c2cb2163d8d8d6ce387c5934cff1fd20c9d087eb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:20 compute-0 sudo[130657]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:20 compute-0 sudo[130860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndwgkeepjlketviyoejsawnhhxwvxwhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405280.5449345-331-13649950144231/AnsiballZ_stat.py'
Oct 02 11:41:20 compute-0 sudo[130860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:21 compute-0 python3.9[130862]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:21 compute-0 sudo[130860]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:21.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:21 compute-0 sudo[130983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgftcsncugeapyhyqftrotuguepazmsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405280.5449345-331-13649950144231/AnsiballZ_copy.py'
Oct 02 11:41:21 compute-0 sudo[130983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:21 compute-0 python3.9[130985]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405280.5449345-331-13649950144231/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=6462c17761ec9bbd8a3a071ec36296940eeea87e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:21 compute-0 sudo[130983]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:21 compute-0 ceph-mon[73607]: pgmap v408: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:21 compute-0 sudo[131033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:41:21 compute-0 sudo[131033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:41:21 compute-0 sudo[131033]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:21 compute-0 sudo[131087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:41:21 compute-0 sudo[131087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:41:21 compute-0 sudo[131087]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:21 compute-0 sudo[131130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:41:21 compute-0 sudo[131130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:41:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:21.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:21 compute-0 sudo[131130]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:21 compute-0 sudo[131160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:41:21 compute-0 sudo[131160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:41:22 compute-0 sudo[131235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljnxzdatqxpyvsfzpmaonpxqpgbbexod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405281.775029-451-90174026682411/AnsiballZ_file.py'
Oct 02 11:41:22 compute-0 sudo[131235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:22 compute-0 python3.9[131237]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:41:22 compute-0 sudo[131235]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:22 compute-0 sudo[131160]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:41:22 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:41:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:41:22 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:41:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:41:22 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:41:22 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 3074ab3b-a94f-40c6-9d43-6d58d0e1a5fa does not exist
Oct 02 11:41:22 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev aa12eddc-3cbb-4ba3-97d2-3c6385edfd61 does not exist
Oct 02 11:41:22 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev c80fcd56-c9c7-459a-8cf3-9df6e9cd1760 does not exist
Oct 02 11:41:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:41:22 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:41:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:41:22 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:41:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:41:22 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:41:22 compute-0 sudo[131440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntjpqaotxrtuddaxrqbesrrqjuwrvdey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405282.3948932-451-77464083602275/AnsiballZ_file.py'
Oct 02 11:41:22 compute-0 ceph-mon[73607]: pgmap v409: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:41:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:41:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:41:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:41:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:41:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:41:22 compute-0 sudo[131440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:22 compute-0 sudo[131400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:41:22 compute-0 sudo[131400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:41:22 compute-0 sudo[131400]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:22 compute-0 sudo[131447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:41:22 compute-0 sudo[131447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:41:22 compute-0 sudo[131447]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:22 compute-0 sudo[131472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:41:22 compute-0 sudo[131472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:41:22 compute-0 sudo[131472]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:22 compute-0 sudo[131497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:41:22 compute-0 sudo[131497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:41:22 compute-0 python3.9[131444]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:41:22 compute-0 sudo[131440]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:23 compute-0 podman[131598]: 2025-10-02 11:41:23.126921711 +0000 UTC m=+0.023569291 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:41:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:23.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:23 compute-0 podman[131598]: 2025-10-02 11:41:23.292027178 +0000 UTC m=+0.188674728 container create 310a6cc91283c3e5f3be8bb0c1f0a5592325d4da35abaf4d19d2a5ff7dc87222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elion, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:41:23 compute-0 systemd[1]: Started libpod-conmon-310a6cc91283c3e5f3be8bb0c1f0a5592325d4da35abaf4d19d2a5ff7dc87222.scope.
Oct 02 11:41:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:41:23 compute-0 sudo[131729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqqsrjbrcxyjrojwevjieuttpnatvvtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405283.1122663-501-45919505203349/AnsiballZ_stat.py'
Oct 02 11:41:23 compute-0 sudo[131729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:23 compute-0 podman[131598]: 2025-10-02 11:41:23.395899074 +0000 UTC m=+0.292546644 container init 310a6cc91283c3e5f3be8bb0c1f0a5592325d4da35abaf4d19d2a5ff7dc87222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elion, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:41:23 compute-0 podman[131598]: 2025-10-02 11:41:23.403595604 +0000 UTC m=+0.300243154 container start 310a6cc91283c3e5f3be8bb0c1f0a5592325d4da35abaf4d19d2a5ff7dc87222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elion, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:41:23 compute-0 podman[131598]: 2025-10-02 11:41:23.407388713 +0000 UTC m=+0.304036263 container attach 310a6cc91283c3e5f3be8bb0c1f0a5592325d4da35abaf4d19d2a5ff7dc87222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:41:23 compute-0 optimistic_elion[131727]: 167 167
Oct 02 11:41:23 compute-0 systemd[1]: libpod-310a6cc91283c3e5f3be8bb0c1f0a5592325d4da35abaf4d19d2a5ff7dc87222.scope: Deactivated successfully.
Oct 02 11:41:23 compute-0 podman[131598]: 2025-10-02 11:41:23.409337478 +0000 UTC m=+0.305985028 container died 310a6cc91283c3e5f3be8bb0c1f0a5592325d4da35abaf4d19d2a5ff7dc87222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elion, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:41:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-c289bc1a087a6e058bce4c05b5dede7d2d296bee8da2ec5551894862c62ef4f7-merged.mount: Deactivated successfully.
Oct 02 11:41:23 compute-0 podman[131598]: 2025-10-02 11:41:23.473159266 +0000 UTC m=+0.369806816 container remove 310a6cc91283c3e5f3be8bb0c1f0a5592325d4da35abaf4d19d2a5ff7dc87222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elion, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:41:23 compute-0 systemd[1]: libpod-conmon-310a6cc91283c3e5f3be8bb0c1f0a5592325d4da35abaf4d19d2a5ff7dc87222.scope: Deactivated successfully.
Oct 02 11:41:23 compute-0 python3.9[131732]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:23 compute-0 sudo[131729]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:23 compute-0 podman[131755]: 2025-10-02 11:41:23.65863707 +0000 UTC m=+0.094628850 container create 2db682c2107c3a1cdc6d6e9411232320368c232128a39af06f8cbd25be4a84cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mclaren, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:41:23 compute-0 podman[131755]: 2025-10-02 11:41:23.582422325 +0000 UTC m=+0.018414125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:41:23 compute-0 systemd[1]: Started libpod-conmon-2db682c2107c3a1cdc6d6e9411232320368c232128a39af06f8cbd25be4a84cb.scope.
Oct 02 11:41:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c78305f52596da6422dd7e64881ef015b515450f158bd647ef33fd8f2fd3474e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c78305f52596da6422dd7e64881ef015b515450f158bd647ef33fd8f2fd3474e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c78305f52596da6422dd7e64881ef015b515450f158bd647ef33fd8f2fd3474e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c78305f52596da6422dd7e64881ef015b515450f158bd647ef33fd8f2fd3474e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c78305f52596da6422dd7e64881ef015b515450f158bd647ef33fd8f2fd3474e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:41:23 compute-0 podman[131755]: 2025-10-02 11:41:23.794676707 +0000 UTC m=+0.230668517 container init 2db682c2107c3a1cdc6d6e9411232320368c232128a39af06f8cbd25be4a84cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Oct 02 11:41:23 compute-0 podman[131755]: 2025-10-02 11:41:23.802207732 +0000 UTC m=+0.238199522 container start 2db682c2107c3a1cdc6d6e9411232320368c232128a39af06f8cbd25be4a84cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mclaren, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 11:41:23 compute-0 podman[131755]: 2025-10-02 11:41:23.840597427 +0000 UTC m=+0.276589217 container attach 2db682c2107c3a1cdc6d6e9411232320368c232128a39af06f8cbd25be4a84cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mclaren, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 11:41:23 compute-0 sudo[131896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwkopbclvhlmanhswngxfyydtexpvgbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405283.1122663-501-45919505203349/AnsiballZ_copy.py'
Oct 02 11:41:23 compute-0 sudo[131896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:23.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:24 compute-0 python3.9[131898]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405283.1122663-501-45919505203349/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=e5b2afb90a16effa63f6b889670292891e3f1dd1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:24 compute-0 sudo[131896]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:24 compute-0 sudo[132050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snrubznbbpwpcspimxuftesoaxneolqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405284.2566423-501-231095845199886/AnsiballZ_stat.py'
Oct 02 11:41:24 compute-0 sudo[132050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:24 compute-0 wonderful_mclaren[131828]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:41:24 compute-0 wonderful_mclaren[131828]: --> relative data size: 1.0
Oct 02 11:41:24 compute-0 wonderful_mclaren[131828]: --> All data devices are unavailable
Oct 02 11:41:24 compute-0 systemd[1]: libpod-2db682c2107c3a1cdc6d6e9411232320368c232128a39af06f8cbd25be4a84cb.scope: Deactivated successfully.
Oct 02 11:41:24 compute-0 podman[131755]: 2025-10-02 11:41:24.634901321 +0000 UTC m=+1.070893111 container died 2db682c2107c3a1cdc6d6e9411232320368c232128a39af06f8cbd25be4a84cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mclaren, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 11:41:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:41:24 compute-0 python3.9[132053]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:24 compute-0 sudo[132050]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:24 compute-0 ceph-mon[73607]: pgmap v410: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-c78305f52596da6422dd7e64881ef015b515450f158bd647ef33fd8f2fd3474e-merged.mount: Deactivated successfully.
Oct 02 11:41:24 compute-0 podman[131755]: 2025-10-02 11:41:24.92060866 +0000 UTC m=+1.356600450 container remove 2db682c2107c3a1cdc6d6e9411232320368c232128a39af06f8cbd25be4a84cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mclaren, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 11:41:24 compute-0 sudo[131497]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:24 compute-0 systemd[1]: libpod-conmon-2db682c2107c3a1cdc6d6e9411232320368c232128a39af06f8cbd25be4a84cb.scope: Deactivated successfully.
Oct 02 11:41:25 compute-0 sudo[132142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:41:25 compute-0 sudo[132142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:41:25 compute-0 sudo[132142]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:25 compute-0 sudo[132177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:41:25 compute-0 sudo[132177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:41:25 compute-0 sudo[132177]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:25 compute-0 sudo[132265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiniwyruraoobefrnkykfksunmfhkmsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405284.2566423-501-231095845199886/AnsiballZ_copy.py'
Oct 02 11:41:25 compute-0 sudo[132265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:25 compute-0 sudo[132226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:41:25 compute-0 sudo[132226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:41:25 compute-0 sudo[132226]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:25 compute-0 sudo[132272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 11:41:25 compute-0 sudo[132272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:41:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:41:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:25.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:41:25 compute-0 python3.9[132269]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405284.2566423-501-231095845199886/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=c2cb2163d8d8d6ce387c5934cff1fd20c9d087eb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:25 compute-0 sudo[132265]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:25 compute-0 podman[132361]: 2025-10-02 11:41:25.471995358 +0000 UTC m=+0.037495294 container create ca79894e9260fae73d43aa720e49b6c59668d5f29b8b04218dc649162c75596e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 11:41:25 compute-0 systemd[1]: Started libpod-conmon-ca79894e9260fae73d43aa720e49b6c59668d5f29b8b04218dc649162c75596e.scope.
Oct 02 11:41:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:41:25 compute-0 podman[132361]: 2025-10-02 11:41:25.45377167 +0000 UTC m=+0.019271626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:41:25 compute-0 podman[132361]: 2025-10-02 11:41:25.556638581 +0000 UTC m=+0.122138567 container init ca79894e9260fae73d43aa720e49b6c59668d5f29b8b04218dc649162c75596e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_clarke, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 11:41:25 compute-0 podman[132361]: 2025-10-02 11:41:25.568562314 +0000 UTC m=+0.134062250 container start ca79894e9260fae73d43aa720e49b6c59668d5f29b8b04218dc649162c75596e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_clarke, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:41:25 compute-0 podman[132361]: 2025-10-02 11:41:25.574320936 +0000 UTC m=+0.139820892 container attach ca79894e9260fae73d43aa720e49b6c59668d5f29b8b04218dc649162c75596e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 11:41:25 compute-0 infallible_clarke[132405]: 167 167
Oct 02 11:41:25 compute-0 systemd[1]: libpod-ca79894e9260fae73d43aa720e49b6c59668d5f29b8b04218dc649162c75596e.scope: Deactivated successfully.
Oct 02 11:41:25 compute-0 podman[132361]: 2025-10-02 11:41:25.579317909 +0000 UTC m=+0.144817845 container died ca79894e9260fae73d43aa720e49b6c59668d5f29b8b04218dc649162c75596e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:41:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-1711e273023cb4d3ee3479a7b50360f74dda74f5cbcaa304d74e0e37a4e95703-merged.mount: Deactivated successfully.
Oct 02 11:41:25 compute-0 podman[132361]: 2025-10-02 11:41:25.617736534 +0000 UTC m=+0.183236470 container remove ca79894e9260fae73d43aa720e49b6c59668d5f29b8b04218dc649162c75596e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 11:41:25 compute-0 systemd[1]: libpod-conmon-ca79894e9260fae73d43aa720e49b6c59668d5f29b8b04218dc649162c75596e.scope: Deactivated successfully.
Oct 02 11:41:25 compute-0 sudo[132530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idfkktirctlshzwelrryduxicdsqifsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405285.479317-501-222238480961767/AnsiballZ_stat.py'
Oct 02 11:41:25 compute-0 sudo[132530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:25 compute-0 podman[132526]: 2025-10-02 11:41:25.784049726 +0000 UTC m=+0.039476222 container create 7bc6a01bed2b1a14708087d1e22c25ad3a962cc52d23e97eb703da1358f74e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_germain, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 11:41:25 compute-0 systemd[1]: Started libpod-conmon-7bc6a01bed2b1a14708087d1e22c25ad3a962cc52d23e97eb703da1358f74e00.scope.
Oct 02 11:41:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:41:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/597912cfb73b482ee8256135a0f09bcbda4355f11c910241e88b6058a190acfc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:41:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/597912cfb73b482ee8256135a0f09bcbda4355f11c910241e88b6058a190acfc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:41:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/597912cfb73b482ee8256135a0f09bcbda4355f11c910241e88b6058a190acfc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:41:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/597912cfb73b482ee8256135a0f09bcbda4355f11c910241e88b6058a190acfc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:41:25 compute-0 podman[132526]: 2025-10-02 11:41:25.83943196 +0000 UTC m=+0.094858476 container init 7bc6a01bed2b1a14708087d1e22c25ad3a962cc52d23e97eb703da1358f74e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_germain, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Oct 02 11:41:25 compute-0 podman[132526]: 2025-10-02 11:41:25.849207499 +0000 UTC m=+0.104633995 container start 7bc6a01bed2b1a14708087d1e22c25ad3a962cc52d23e97eb703da1358f74e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_germain, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 11:41:25 compute-0 podman[132526]: 2025-10-02 11:41:25.853021063 +0000 UTC m=+0.108447579 container attach 7bc6a01bed2b1a14708087d1e22c25ad3a962cc52d23e97eb703da1358f74e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:41:25 compute-0 podman[132526]: 2025-10-02 11:41:25.768940445 +0000 UTC m=+0.024366961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:41:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:41:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:25.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:41:25 compute-0 python3.9[132542]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:26 compute-0 sudo[132530]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:26 compute-0 sudo[132673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndlelltiknujlocltrbkbmyxahoyhsyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405285.479317-501-222238480961767/AnsiballZ_copy.py'
Oct 02 11:41:26 compute-0 sudo[132673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:26 compute-0 python3.9[132675]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405285.479317-501-222238480961767/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=e609582f20fab5a95082f697eeffc650f7f5925b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:26 compute-0 sudo[132673]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:26 compute-0 compassionate_germain[132547]: {
Oct 02 11:41:26 compute-0 compassionate_germain[132547]:     "1": [
Oct 02 11:41:26 compute-0 compassionate_germain[132547]:         {
Oct 02 11:41:26 compute-0 compassionate_germain[132547]:             "devices": [
Oct 02 11:41:26 compute-0 compassionate_germain[132547]:                 "/dev/loop3"
Oct 02 11:41:26 compute-0 compassionate_germain[132547]:             ],
Oct 02 11:41:26 compute-0 compassionate_germain[132547]:             "lv_name": "ceph_lv0",
Oct 02 11:41:26 compute-0 compassionate_germain[132547]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:41:26 compute-0 compassionate_germain[132547]:             "lv_size": "7511998464",
Oct 02 11:41:26 compute-0 compassionate_germain[132547]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:41:26 compute-0 compassionate_germain[132547]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:41:26 compute-0 compassionate_germain[132547]:             "name": "ceph_lv0",
Oct 02 11:41:26 compute-0 compassionate_germain[132547]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:41:26 compute-0 compassionate_germain[132547]:             "tags": {
Oct 02 11:41:26 compute-0 compassionate_germain[132547]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:41:26 compute-0 compassionate_germain[132547]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:41:26 compute-0 compassionate_germain[132547]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:41:26 compute-0 compassionate_germain[132547]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:41:26 compute-0 compassionate_germain[132547]:                 "ceph.cluster_name": "ceph",
Oct 02 11:41:26 compute-0 compassionate_germain[132547]:                 "ceph.crush_device_class": "",
Oct 02 11:41:26 compute-0 compassionate_germain[132547]:                 "ceph.encrypted": "0",
Oct 02 11:41:26 compute-0 compassionate_germain[132547]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:41:26 compute-0 compassionate_germain[132547]:                 "ceph.osd_id": "1",
Oct 02 11:41:26 compute-0 compassionate_germain[132547]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:41:26 compute-0 compassionate_germain[132547]:                 "ceph.type": "block",
Oct 02 11:41:26 compute-0 compassionate_germain[132547]:                 "ceph.vdo": "0"
Oct 02 11:41:26 compute-0 compassionate_germain[132547]:             },
Oct 02 11:41:26 compute-0 compassionate_germain[132547]:             "type": "block",
Oct 02 11:41:26 compute-0 compassionate_germain[132547]:             "vg_name": "ceph_vg0"
Oct 02 11:41:26 compute-0 compassionate_germain[132547]:         }
Oct 02 11:41:26 compute-0 compassionate_germain[132547]:     ]
Oct 02 11:41:26 compute-0 compassionate_germain[132547]: }
Oct 02 11:41:26 compute-0 systemd[1]: libpod-7bc6a01bed2b1a14708087d1e22c25ad3a962cc52d23e97eb703da1358f74e00.scope: Deactivated successfully.
Oct 02 11:41:26 compute-0 podman[132704]: 2025-10-02 11:41:26.675454801 +0000 UTC m=+0.023854928 container died 7bc6a01bed2b1a14708087d1e22c25ad3a962cc52d23e97eb703da1358f74e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_germain, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:41:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-597912cfb73b482ee8256135a0f09bcbda4355f11c910241e88b6058a190acfc-merged.mount: Deactivated successfully.
Oct 02 11:41:26 compute-0 podman[132704]: 2025-10-02 11:41:26.943618689 +0000 UTC m=+0.292018796 container remove 7bc6a01bed2b1a14708087d1e22c25ad3a962cc52d23e97eb703da1358f74e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 11:41:26 compute-0 systemd[1]: libpod-conmon-7bc6a01bed2b1a14708087d1e22c25ad3a962cc52d23e97eb703da1358f74e00.scope: Deactivated successfully.
Oct 02 11:41:26 compute-0 sudo[132272]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:27 compute-0 sudo[132720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:41:27 compute-0 sudo[132720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:41:27 compute-0 sudo[132720]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:27 compute-0 sudo[132745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:41:27 compute-0 sudo[132745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:41:27 compute-0 sudo[132745]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:27 compute-0 sudo[132770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:41:27 compute-0 sudo[132770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:41:27 compute-0 sudo[132770]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:27 compute-0 sudo[132802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 11:41:27 compute-0 sudo[132802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:41:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:27.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:27 compute-0 sudo[132971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcmmyxmqzvuaogceftptdxfbxwvshfhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405287.2051525-667-23952989308698/AnsiballZ_file.py'
Oct 02 11:41:27 compute-0 sudo[132971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:27 compute-0 podman[132989]: 2025-10-02 11:41:27.641494321 +0000 UTC m=+0.106984424 container create c41a12a7c340aa027644ae6e95499f32de8c202208a7a766679b60d37d8af57f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_swartz, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 11:41:27 compute-0 ceph-mon[73607]: pgmap v411: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:27 compute-0 podman[132989]: 2025-10-02 11:41:27.55413656 +0000 UTC m=+0.019626693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:41:27 compute-0 python3.9[132974]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:41:27 compute-0 systemd[1]: Started libpod-conmon-c41a12a7c340aa027644ae6e95499f32de8c202208a7a766679b60d37d8af57f.scope.
Oct 02 11:41:27 compute-0 sudo[132971]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:41:27 compute-0 podman[132989]: 2025-10-02 11:41:27.896080094 +0000 UTC m=+0.361570217 container init c41a12a7c340aa027644ae6e95499f32de8c202208a7a766679b60d37d8af57f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_swartz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 11:41:27 compute-0 podman[132989]: 2025-10-02 11:41:27.902288487 +0000 UTC m=+0.367778580 container start c41a12a7c340aa027644ae6e95499f32de8c202208a7a766679b60d37d8af57f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:41:27 compute-0 condescending_swartz[133006]: 167 167
Oct 02 11:41:27 compute-0 systemd[1]: libpod-c41a12a7c340aa027644ae6e95499f32de8c202208a7a766679b60d37d8af57f.scope: Deactivated successfully.
Oct 02 11:41:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:27.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:28 compute-0 podman[132989]: 2025-10-02 11:41:28.035420943 +0000 UTC m=+0.500911046 container attach c41a12a7c340aa027644ae6e95499f32de8c202208a7a766679b60d37d8af57f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Oct 02 11:41:28 compute-0 podman[132989]: 2025-10-02 11:41:28.035853553 +0000 UTC m=+0.501343656 container died c41a12a7c340aa027644ae6e95499f32de8c202208a7a766679b60d37d8af57f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_swartz, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct 02 11:41:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c6ed6b9f2249d6e2eedbe12ba3113558b045b5e5e7ef2d4c6f462d8109fd8b2-merged.mount: Deactivated successfully.
Oct 02 11:41:28 compute-0 sudo[133172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmthqnnlvrqezebcioxwdyuhmiohugih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405287.8667293-691-281017414956226/AnsiballZ_stat.py'
Oct 02 11:41:28 compute-0 sudo[133172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:28 compute-0 python3.9[133174]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:28 compute-0 sudo[133172]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:28 compute-0 podman[132989]: 2025-10-02 11:41:28.489219429 +0000 UTC m=+0.954709542 container remove c41a12a7c340aa027644ae6e95499f32de8c202208a7a766679b60d37d8af57f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 11:41:28 compute-0 systemd[1]: libpod-conmon-c41a12a7c340aa027644ae6e95499f32de8c202208a7a766679b60d37d8af57f.scope: Deactivated successfully.
Oct 02 11:41:28 compute-0 podman[133230]: 2025-10-02 11:41:28.678432514 +0000 UTC m=+0.061873103 container create 87b3450f3ca28e2448848535be62bb2a89e57f43efd13c07c8fd557aae6c936d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goodall, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:41:28 compute-0 podman[133230]: 2025-10-02 11:41:28.635887688 +0000 UTC m=+0.019328247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:41:28 compute-0 ceph-mon[73607]: pgmap v412: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:28 compute-0 systemd[1]: Started libpod-conmon-87b3450f3ca28e2448848535be62bb2a89e57f43efd13c07c8fd557aae6c936d.scope.
Oct 02 11:41:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:41:28 compute-0 sudo[133319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysoevcabwwvmsswainvpzucbhylvvkae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405287.8667293-691-281017414956226/AnsiballZ_copy.py'
Oct 02 11:41:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bf8a5822c5c32255d1ca4ca356d13f04d536d01391bedbadbb1a6b757b8649/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:41:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bf8a5822c5c32255d1ca4ca356d13f04d536d01391bedbadbb1a6b757b8649/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:41:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bf8a5822c5c32255d1ca4ca356d13f04d536d01391bedbadbb1a6b757b8649/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:41:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bf8a5822c5c32255d1ca4ca356d13f04d536d01391bedbadbb1a6b757b8649/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:41:28 compute-0 sudo[133319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:28 compute-0 podman[133230]: 2025-10-02 11:41:28.872342566 +0000 UTC m=+0.255783135 container init 87b3450f3ca28e2448848535be62bb2a89e57f43efd13c07c8fd557aae6c936d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goodall, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 11:41:28 compute-0 podman[133230]: 2025-10-02 11:41:28.882697451 +0000 UTC m=+0.266138000 container start 87b3450f3ca28e2448848535be62bb2a89e57f43efd13c07c8fd557aae6c936d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 11:41:28 compute-0 podman[133230]: 2025-10-02 11:41:28.921818063 +0000 UTC m=+0.305258632 container attach 87b3450f3ca28e2448848535be62bb2a89e57f43efd13c07c8fd557aae6c936d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:41:29 compute-0 python3.9[133324]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405287.8667293-691-281017414956226/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=131465f77d8d4faa1442e1beada7324e1814ff9f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:29 compute-0 sudo[133319]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:41:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:29.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:41:29 compute-0 sudo[133476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksavnveafzofyfsktgrrbzrxnhwpoxhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405289.2762723-750-244601419604338/AnsiballZ_file.py'
Oct 02 11:41:29 compute-0 sudo[133476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:41:29 compute-0 intelligent_goodall[133320]: {
Oct 02 11:41:29 compute-0 intelligent_goodall[133320]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 11:41:29 compute-0 intelligent_goodall[133320]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:41:29 compute-0 intelligent_goodall[133320]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:41:29 compute-0 intelligent_goodall[133320]:         "osd_id": 1,
Oct 02 11:41:29 compute-0 intelligent_goodall[133320]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:41:29 compute-0 intelligent_goodall[133320]:         "type": "bluestore"
Oct 02 11:41:29 compute-0 intelligent_goodall[133320]:     }
Oct 02 11:41:29 compute-0 intelligent_goodall[133320]: }
Oct 02 11:41:29 compute-0 systemd[1]: libpod-87b3450f3ca28e2448848535be62bb2a89e57f43efd13c07c8fd557aae6c936d.scope: Deactivated successfully.
Oct 02 11:41:29 compute-0 podman[133230]: 2025-10-02 11:41:29.742819164 +0000 UTC m=+1.126259703 container died 87b3450f3ca28e2448848535be62bb2a89e57f43efd13c07c8fd557aae6c936d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 11:41:29 compute-0 python3.9[133478]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:41:29 compute-0 sudo[133476]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-20bf8a5822c5c32255d1ca4ca356d13f04d536d01391bedbadbb1a6b757b8649-merged.mount: Deactivated successfully.
Oct 02 11:41:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:29.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:29 compute-0 podman[133230]: 2025-10-02 11:41:29.939229628 +0000 UTC m=+1.322670157 container remove 87b3450f3ca28e2448848535be62bb2a89e57f43efd13c07c8fd557aae6c936d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goodall, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:41:29 compute-0 sudo[132802]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:41:29 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:41:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:41:29 compute-0 systemd[1]: libpod-conmon-87b3450f3ca28e2448848535be62bb2a89e57f43efd13c07c8fd557aae6c936d.scope: Deactivated successfully.
Oct 02 11:41:30 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:41:30 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 7235b00f-757d-49a6-bf2f-02d27479064b does not exist
Oct 02 11:41:30 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev f0edaaec-9818-4970-a08c-1cc94226e24d does not exist
Oct 02 11:41:30 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 58254c8d-77a8-46b4-94cc-72720b551e24 does not exist
Oct 02 11:41:30 compute-0 sudo[133565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:41:30 compute-0 sudo[133565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:41:30 compute-0 sudo[133565]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:30 compute-0 sudo[133608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:41:30 compute-0 sudo[133608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:41:30 compute-0 sudo[133608]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:30 compute-0 sudo[133705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwtfhjedvupvmgkduoueqfaaxonivqyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405289.9991112-776-38221917580263/AnsiballZ_stat.py'
Oct 02 11:41:30 compute-0 sudo[133705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:30 compute-0 python3.9[133708]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:30 compute-0 sudo[133705]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:30 compute-0 sudo[133829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-regaykplnmveldmnxsovgcefxsdmxcup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405289.9991112-776-38221917580263/AnsiballZ_copy.py'
Oct 02 11:41:30 compute-0 sudo[133829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:30 compute-0 python3.9[133831]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405289.9991112-776-38221917580263/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=131465f77d8d4faa1442e1beada7324e1814ff9f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:31 compute-0 sudo[133829]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:31 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:41:31 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:41:31 compute-0 ceph-mon[73607]: pgmap v413: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:31.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:31 compute-0 sudo[133981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcvfchukaktpubhvwntvfmwptbsrmgzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405291.1809707-824-144664792122072/AnsiballZ_file.py'
Oct 02 11:41:31 compute-0 sudo[133981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:31 compute-0 python3.9[133983]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:41:31 compute-0 sudo[133981]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:31.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:32 compute-0 sudo[134133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvisithwhtkthhksoboodpijkewzhkex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405291.8325365-847-262960400848483/AnsiballZ_stat.py'
Oct 02 11:41:32 compute-0 sudo[134133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:32 compute-0 python3.9[134135]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:32 compute-0 sudo[134133]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:32 compute-0 sudo[134257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrcxxuuyjwajhxbnypktjbqqymoxjfjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405291.8325365-847-262960400848483/AnsiballZ_copy.py'
Oct 02 11:41:32 compute-0 sudo[134257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:32 compute-0 python3.9[134259]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405291.8325365-847-262960400848483/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=131465f77d8d4faa1442e1beada7324e1814ff9f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:32 compute-0 sudo[134257]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:41:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:33.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:41:33 compute-0 sudo[134409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndvqwcxkvqdneqzbglvloopyevtreeby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405293.0739272-890-118080438655988/AnsiballZ_file.py'
Oct 02 11:41:33 compute-0 sudo[134409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:33 compute-0 python3.9[134411]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:41:33 compute-0 ceph-mon[73607]: pgmap v414: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:33 compute-0 sudo[134409]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:33 compute-0 sudo[134561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ingrpxeolabensavpgdbtgmqfihahmjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405293.6873908-912-246750700524502/AnsiballZ_stat.py'
Oct 02 11:41:33 compute-0 sudo[134561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:33.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:34 compute-0 python3.9[134563]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:34 compute-0 sudo[134561]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:34 compute-0 sudo[134685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plbkscwfxosokdqoyzvaiuruuosssixl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405293.6873908-912-246750700524502/AnsiballZ_copy.py'
Oct 02 11:41:34 compute-0 sudo[134685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:41:34 compute-0 python3.9[134687]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405293.6873908-912-246750700524502/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=131465f77d8d4faa1442e1beada7324e1814ff9f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:34 compute-0 sudo[134685]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:41:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:35.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:41:35 compute-0 sudo[134837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uslmeawgqullmonsrcqrtczjycqwriml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405295.0740128-960-61743072414269/AnsiballZ_file.py'
Oct 02 11:41:35 compute-0 sudo[134837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:35 compute-0 ceph-mon[73607]: pgmap v415: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:35 compute-0 python3.9[134839]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:41:35 compute-0 sudo[134837]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:35.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:36 compute-0 sudo[134989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkmxpqniramhlmmvinebefruvkbdriun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405295.8304858-983-139063889507453/AnsiballZ_stat.py'
Oct 02 11:41:36 compute-0 sudo[134989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:36 compute-0 python3.9[134991]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:36 compute-0 sudo[134989]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:36 compute-0 sudo[135113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-indmqcxqmplrxoapwaxcircybranwzvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405295.8304858-983-139063889507453/AnsiballZ_copy.py'
Oct 02 11:41:36 compute-0 sudo[135113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:36 compute-0 python3.9[135115]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405295.8304858-983-139063889507453/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=131465f77d8d4faa1442e1beada7324e1814ff9f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:36 compute-0 sudo[135113]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:37 compute-0 sudo[135265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbztaxykfmjntkxojdwvivyrwybldeac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405297.0050125-1026-144414659896299/AnsiballZ_file.py'
Oct 02 11:41:37 compute-0 sudo[135265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:37.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:37 compute-0 python3.9[135267]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:41:37 compute-0 sudo[135265]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:37 compute-0 ceph-mon[73607]: pgmap v416: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:37 compute-0 sudo[135417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mabapvlzozisohtfsnjkdwsxgfdswrjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405297.6152558-1050-252910490931893/AnsiballZ_stat.py'
Oct 02 11:41:37 compute-0 sudo[135417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:37.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:38 compute-0 python3.9[135419]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:38 compute-0 sudo[135417]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:38 compute-0 sudo[135541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnopcoffbmspjdyskmrszquscyqyuezq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405297.6152558-1050-252910490931893/AnsiballZ_copy.py'
Oct 02 11:41:38 compute-0 sudo[135541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:38 compute-0 python3.9[135543]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405297.6152558-1050-252910490931893/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=131465f77d8d4faa1442e1beada7324e1814ff9f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:38 compute-0 sudo[135541]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:39.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:39 compute-0 ceph-mon[73607]: pgmap v417: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:41:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:39.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:40 compute-0 sudo[135569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:41:40 compute-0 sudo[135569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:41:40 compute-0 sudo[135569]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:40 compute-0 sudo[135594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:41:40 compute-0 sudo[135594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:41:40 compute-0 sudo[135594]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:41.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:41 compute-0 ceph-mon[73607]: pgmap v418: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:41 compute-0 sshd-session[128517]: Connection closed by 192.168.122.30 port 50612
Oct 02 11:41:41 compute-0 sshd-session[128514]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:41:41 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Oct 02 11:41:41 compute-0 systemd[1]: session-45.scope: Consumed 21.427s CPU time.
Oct 02 11:41:41 compute-0 systemd-logind[789]: Session 45 logged out. Waiting for processes to exit.
Oct 02 11:41:41 compute-0 systemd-logind[789]: Removed session 45.
Oct 02 11:41:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:41:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:41.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:41:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_11:41:42
Oct 02 11:41:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:41:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 11:41:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'vms', '.mgr', 'default.rgw.log', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'volumes']
Oct 02 11:41:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:41:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:41:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:41:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:41:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:41:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:41:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:41:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:41:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:41:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:41:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:41:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:41:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:41:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:41:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:41:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:41:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:41:42 compute-0 ceph-mon[73607]: pgmap v419: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:43.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:43.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:41:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:45.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:45 compute-0 ceph-mon[73607]: pgmap v420: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:45.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:47.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:47 compute-0 ceph-mon[73607]: pgmap v421: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:47.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:48 compute-0 ceph-mon[73607]: pgmap v422: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:49 compute-0 sshd-session[135623]: Accepted publickey for zuul from 192.168.122.30 port 57136 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 11:41:49 compute-0 systemd-logind[789]: New session 46 of user zuul.
Oct 02 11:41:49 compute-0 systemd[1]: Started Session 46 of User zuul.
Oct 02 11:41:49 compute-0 sshd-session[135623]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:41:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:41:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:49.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:41:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:41:49 compute-0 sudo[135776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpgwzkqynmlxlgmqibzvyastzbgqtbce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405309.2113495-30-92088737879145/AnsiballZ_file.py'
Oct 02 11:41:49 compute-0 sudo[135776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:49 compute-0 python3.9[135778]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:49.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:49 compute-0 sudo[135776]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:50 compute-0 sudo[135929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfnbgnmpxpcnemwbmbrcrhhrbksolzug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405310.2409825-66-8429050764637/AnsiballZ_stat.py'
Oct 02 11:41:50 compute-0 sudo[135929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:50 compute-0 python3.9[135931]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:50 compute-0 sudo[135929]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:41:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:51.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:41:51 compute-0 sudo[136052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzynwfnjxhahcvqufjxdtypuopdbjnrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405310.2409825-66-8429050764637/AnsiballZ_copy.py'
Oct 02 11:41:51 compute-0 sudo[136052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:51 compute-0 python3.9[136054]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405310.2409825-66-8429050764637/.source.conf _original_basename=ceph.conf follow=False checksum=a063547ed46c9b567daa2ad5bf469f0aff0b35ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:51 compute-0 ceph-mon[73607]: pgmap v423: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:51 compute-0 sudo[136052]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:51.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:51 compute-0 sudo[136204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjmztyvvywujcykqugygmyqlrfwioonh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405311.735855-66-142608787256770/AnsiballZ_stat.py'
Oct 02 11:41:51 compute-0 sudo[136204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:52 compute-0 python3.9[136206]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:52 compute-0 sudo[136204]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:52 compute-0 sudo[136328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zglmrhxoimuujisuwgddxnrzoptbemgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405311.735855-66-142608787256770/AnsiballZ_copy.py'
Oct 02 11:41:52 compute-0 sudo[136328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:52 compute-0 python3.9[136330]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405311.735855-66-142608787256770/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=879f4ae20801e566b8dfcda89b2df304e135843d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:52 compute-0 sudo[136328]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:53 compute-0 ceph-mon[73607]: pgmap v424: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:53.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:53 compute-0 sshd-session[135626]: Connection closed by 192.168.122.30 port 57136
Oct 02 11:41:53 compute-0 sshd-session[135623]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:41:53 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Oct 02 11:41:53 compute-0 systemd[1]: session-46.scope: Consumed 2.439s CPU time.
Oct 02 11:41:53 compute-0 systemd-logind[789]: Session 46 logged out. Waiting for processes to exit.
Oct 02 11:41:53 compute-0 systemd-logind[789]: Removed session 46.
Oct 02 11:41:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:41:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:41:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:41:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:41:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:41:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:41:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:41:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:41:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:41:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:41:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:41:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:41:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:41:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:41:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:41:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:41:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 11:41:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:41:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:41:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:41:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:41:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:41:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:41:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:53.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:41:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:41:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:55.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:41:55 compute-0 ceph-mon[73607]: pgmap v425: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:55.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:57.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:57 compute-0 ceph-mon[73607]: pgmap v426: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:57.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:41:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:58 compute-0 ceph-mon[73607]: pgmap v427: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:41:59 compute-0 sshd-session[136358]: Accepted publickey for zuul from 192.168.122.30 port 36920 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 11:41:59 compute-0 systemd-logind[789]: New session 47 of user zuul.
Oct 02 11:41:59 compute-0 systemd[1]: Started Session 47 of User zuul.
Oct 02 11:41:59 compute-0 sshd-session[136358]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:41:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:41:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:59.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:41:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:41:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:41:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:41:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:59.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:00 compute-0 python3.9[136511]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:42:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:00 compute-0 sudo[136541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:42:00 compute-0 sudo[136541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:42:00 compute-0 sudo[136541]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:00 compute-0 sudo[136566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:42:00 compute-0 sudo[136566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:42:00 compute-0 sudo[136566]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:01 compute-0 sudo[136716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaagmjcvshvhpysjouiwlalgawltxcow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405320.7480881-67-159046273949411/AnsiballZ_file.py'
Oct 02 11:42:01 compute-0 sudo[136716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:01.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:01 compute-0 python3.9[136718]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:42:01 compute-0 sudo[136716]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:01 compute-0 ceph-mon[73607]: pgmap v428: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:01.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:01 compute-0 sudo[136868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjdmrlmoxefugzxqzltfixqoroedmjbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405321.6141007-67-272630635097161/AnsiballZ_file.py'
Oct 02 11:42:01 compute-0 sudo[136868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:02 compute-0 python3.9[136870]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:42:02 compute-0 sudo[136868]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:03 compute-0 python3.9[137021]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:42:03 compute-0 ceph-mon[73607]: pgmap v429: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:03.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:03 compute-0 sudo[137171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eddkxvvridcidhejalfwjslrsbbauvei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405323.21347-136-178322718687214/AnsiballZ_seboolean.py'
Oct 02 11:42:03 compute-0 sudo[137171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:03 compute-0 python3.9[137173]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 02 11:42:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:03.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:42:05 compute-0 ceph-mon[73607]: pgmap v430: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:05 compute-0 sudo[137171]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:05.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:05 compute-0 sudo[137328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsdqnxodnxlroundrhcggyccbifboizo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405325.6367264-166-61036641200104/AnsiballZ_setup.py'
Oct 02 11:42:05 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Oct 02 11:42:05 compute-0 sudo[137328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:05.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:06 compute-0 python3.9[137330]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:42:06 compute-0 sudo[137328]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:06 compute-0 sudo[137413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waxpjebobgwngfskvtavrlmgfgyuktpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405325.6367264-166-61036641200104/AnsiballZ_dnf.py'
Oct 02 11:42:06 compute-0 sudo[137413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:07 compute-0 python3.9[137415]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:42:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:07.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:07 compute-0 ceph-mon[73607]: pgmap v431: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:07.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:08 compute-0 sudo[137413]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:08 compute-0 ceph-mon[73607]: pgmap v432: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:09 compute-0 sudo[137567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzbxmscyvfdvlgwamtjvsmqtuylpyioy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405328.4940703-202-78054711194042/AnsiballZ_systemd.py'
Oct 02 11:42:09 compute-0 sudo[137567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:09.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:09 compute-0 python3.9[137569]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 11:42:09 compute-0 sudo[137567]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:42:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:09.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:10 compute-0 sudo[137722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xigrwcsyolrbvgnsaemepayakvdazedu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759405329.685218-226-225502107765075/AnsiballZ_edpm_nftables_snippet.py'
Oct 02 11:42:10 compute-0 sudo[137722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:10 compute-0 python3[137724]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct 02 11:42:10 compute-0 sudo[137722]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:11 compute-0 sudo[137875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igogmfsxtvcansumdazvokapdpucavoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405330.722273-253-183713922681344/AnsiballZ_file.py'
Oct 02 11:42:11 compute-0 sudo[137875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:11 compute-0 python3.9[137877]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:42:11 compute-0 sudo[137875]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:11.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:11 compute-0 ceph-mon[73607]: pgmap v433: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:11 compute-0 sudo[138027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emgyheklmvcgmgoztzzlydxxalzhfmck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405331.4954035-277-278170455694366/AnsiballZ_stat.py'
Oct 02 11:42:11 compute-0 sudo[138027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:42:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:11.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:42:12 compute-0 python3.9[138029]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:42:12 compute-0 sudo[138027]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:12 compute-0 sudo[138106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwdrbinnduouwcncnmggbjzmpdmcmnmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405331.4954035-277-278170455694366/AnsiballZ_file.py'
Oct 02 11:42:12 compute-0 sudo[138106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:12 compute-0 python3.9[138108]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:42:12 compute-0 sudo[138106]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:42:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:42:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:42:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:42:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:42:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:42:13 compute-0 sudo[138258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kizmqsbptiuorasepwrpzjgqqwrrtlar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405332.798956-313-261446251274675/AnsiballZ_stat.py'
Oct 02 11:42:13 compute-0 sudo[138258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:13 compute-0 python3.9[138260]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:42:13 compute-0 sudo[138258]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:42:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:13.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:42:13 compute-0 sudo[138336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osbcqglhpmhyksmebsshsuwzxganoyyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405332.798956-313-261446251274675/AnsiballZ_file.py'
Oct 02 11:42:13 compute-0 sudo[138336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:13 compute-0 ceph-mon[73607]: pgmap v434: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:13 compute-0 python3.9[138338]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.teceon33 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:42:13 compute-0 sudo[138336]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:13.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:14 compute-0 sudo[138489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbuzjubcpslocoxgobazbsybkjcvkyzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405334.0596986-349-276180825232031/AnsiballZ_stat.py'
Oct 02 11:42:14 compute-0 sudo[138489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:14 compute-0 python3.9[138491]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:42:14 compute-0 sudo[138489]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:42:14 compute-0 sudo[138567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jygxpixlobnipzlytvjzsgrshgrcolak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405334.0596986-349-276180825232031/AnsiballZ_file.py'
Oct 02 11:42:14 compute-0 sudo[138567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:14 compute-0 python3.9[138569]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:42:14 compute-0 sudo[138567]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:15.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:15 compute-0 ceph-mon[73607]: pgmap v435: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:15 compute-0 sudo[138719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzrzndrayqfymnrdqmmlkimnxxiywbme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405335.4145484-388-41085226501609/AnsiballZ_command.py'
Oct 02 11:42:15 compute-0 sudo[138719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:15.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:16 compute-0 python3.9[138721]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:42:16 compute-0 sudo[138719]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:16 compute-0 ceph-mon[73607]: pgmap v436: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:16 compute-0 sudo[138873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqklsofgbfczvkcylafinupwuzpnxzqr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759405336.3436503-412-198069121040223/AnsiballZ_edpm_nftables_from_files.py'
Oct 02 11:42:16 compute-0 sudo[138873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:17 compute-0 python3[138875]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 02 11:42:17 compute-0 sudo[138873]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:42:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:17.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:42:17 compute-0 sudo[139025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcnnbpewjxbpywvlpqbgodfpntyezlhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405337.263897-436-23157227872324/AnsiballZ_stat.py'
Oct 02 11:42:17 compute-0 sudo[139025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:17 compute-0 python3.9[139027]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:42:17 compute-0 sudo[139025]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:17.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:18 compute-0 sudo[139150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbkfxokyuocszqvwvghzhyfafjknkyor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405337.263897-436-23157227872324/AnsiballZ_copy.py'
Oct 02 11:42:18 compute-0 sudo[139150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:18 compute-0 python3.9[139152]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405337.263897-436-23157227872324/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:42:18 compute-0 sudo[139150]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:19 compute-0 sudo[139303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfzvareokreoxdzolhgzcbudnbaqyeff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405338.8195825-481-216662146523899/AnsiballZ_stat.py'
Oct 02 11:42:19 compute-0 sudo[139303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:19.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:19 compute-0 python3.9[139305]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:42:19 compute-0 sudo[139303]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:19 compute-0 ceph-mon[73607]: pgmap v437: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:42:19 compute-0 sudo[139428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsxuabrmyvxnpfwuprkruxvsnerujaco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405338.8195825-481-216662146523899/AnsiballZ_copy.py'
Oct 02 11:42:19 compute-0 sudo[139428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:19.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:19 compute-0 python3.9[139430]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405338.8195825-481-216662146523899/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:42:20 compute-0 sudo[139428]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:20 compute-0 sudo[139531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:42:20 compute-0 sudo[139531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:42:20 compute-0 sudo[139531]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:20 compute-0 sudo[139580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:42:20 compute-0 sudo[139580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:42:20 compute-0 sudo[139580]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:20 compute-0 sudo[139630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvnnkjfyadcbowlrpegmqddphdcqjmsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405340.3528702-526-195936569335286/AnsiballZ_stat.py'
Oct 02 11:42:20 compute-0 sudo[139630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:20 compute-0 python3.9[139633]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:42:21 compute-0 sudo[139630]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:21 compute-0 sudo[139756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eemelqxzrafclmwycducmoyhgktpwxtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405340.3528702-526-195936569335286/AnsiballZ_copy.py'
Oct 02 11:42:21 compute-0 sudo[139756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:42:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:21.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:42:21 compute-0 python3.9[139758]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405340.3528702-526-195936569335286/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:42:21 compute-0 sudo[139756]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:21 compute-0 ceph-mon[73607]: pgmap v438: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:21.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:22 compute-0 sudo[139908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blraqgprawqxwbehthzvllepuvjxusyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405341.8332677-571-40545075429493/AnsiballZ_stat.py'
Oct 02 11:42:22 compute-0 sudo[139908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:22 compute-0 python3.9[139910]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:42:22 compute-0 sudo[139908]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:22 compute-0 sudo[140034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egneyqrdewpfhfvxfxkpkikhdcrkqofz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405341.8332677-571-40545075429493/AnsiballZ_copy.py'
Oct 02 11:42:22 compute-0 sudo[140034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:22 compute-0 ceph-mon[73607]: pgmap v439: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:22 compute-0 python3.9[140036]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405341.8332677-571-40545075429493/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:42:23 compute-0 sudo[140034]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:23.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:23 compute-0 sudo[140186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkrevdpaprvnxwzmmounyypyrhqrthyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405343.2580578-616-238950940478874/AnsiballZ_stat.py'
Oct 02 11:42:23 compute-0 sudo[140186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:23 compute-0 python3.9[140188]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:42:23 compute-0 sudo[140186]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:23.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:24 compute-0 sudo[140312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptfpwnjowyarejadugqmtnnmtxqlirzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405343.2580578-616-238950940478874/AnsiballZ_copy.py'
Oct 02 11:42:24 compute-0 sudo[140312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:24 compute-0 python3.9[140314]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405343.2580578-616-238950940478874/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:42:24 compute-0 sudo[140312]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:42:25 compute-0 sudo[140464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqtnvpdcmdsnrlkybfynrsvcemenxgbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405344.8493307-661-143800114913861/AnsiballZ_file.py'
Oct 02 11:42:25 compute-0 sudo[140464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:25.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:25 compute-0 python3.9[140466]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:42:25 compute-0 sudo[140464]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:25 compute-0 ceph-mon[73607]: pgmap v440: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:25 compute-0 sudo[140616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvqpnrwhwlcziwhhfpkyxxngtwymwadz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405345.5603502-685-130469672315653/AnsiballZ_command.py'
Oct 02 11:42:25 compute-0 sudo[140616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:25.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:26 compute-0 python3.9[140618]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:42:26 compute-0 sudo[140616]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:26 compute-0 ceph-mon[73607]: pgmap v441: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:26 compute-0 sudo[140772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-napawrbtmakrwpdqweteyaqlyrzdgnud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405346.271075-709-252112089776598/AnsiballZ_blockinfile.py'
Oct 02 11:42:26 compute-0 sudo[140772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:26 compute-0 python3.9[140774]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:42:26 compute-0 sudo[140772]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:27.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:27 compute-0 sudo[140924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chduysyvtghmpcnfehqmtyddtcgknnwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405347.2330735-736-127715112242261/AnsiballZ_command.py'
Oct 02 11:42:27 compute-0 sudo[140924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:27 compute-0 python3.9[140926]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:42:27 compute-0 sudo[140924]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:27.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:28 compute-0 sudo[141077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfkwqjcqvuzrpdqktcgrfbncknrtnxng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405347.9446254-760-262014742150444/AnsiballZ_stat.py'
Oct 02 11:42:28 compute-0 sudo[141077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:28 compute-0 python3.9[141079]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:42:28 compute-0 sudo[141077]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:28 compute-0 sudo[141232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuotalexvwrygoydxpvstgitlnhtwdgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405348.6359324-784-239624537744931/AnsiballZ_command.py'
Oct 02 11:42:28 compute-0 sudo[141232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:29 compute-0 python3.9[141234]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:42:29 compute-0 sudo[141232]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:29.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:42:29 compute-0 ceph-mon[73607]: pgmap v442: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:29 compute-0 sudo[141387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqqdpaesafpqdxolcapaqagnpcyunjyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405349.405275-808-76827866469348/AnsiballZ_file.py'
Oct 02 11:42:29 compute-0 sudo[141387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:42:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:29.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:42:30 compute-0 python3.9[141389]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:42:30 compute-0 sudo[141387]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:30 compute-0 sudo[141415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:42:30 compute-0 sudo[141415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:42:30 compute-0 sudo[141415]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:30 compute-0 sudo[141440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:42:30 compute-0 sudo[141440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:42:30 compute-0 sudo[141440]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:30 compute-0 sudo[141465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:42:30 compute-0 sudo[141465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:42:30 compute-0 sudo[141465]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:30 compute-0 sudo[141490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 11:42:30 compute-0 sudo[141490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:42:30 compute-0 sudo[141490]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:42:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:42:31 compute-0 ceph-mon[73607]: pgmap v443: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:31.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:31 compute-0 python3.9[141660]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:42:31 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:42:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:42:31 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:42:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:42:31 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:42:31 compute-0 sudo[141686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:42:31 compute-0 sudo[141686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:42:31 compute-0 sudo[141686]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:31 compute-0 sudo[141711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:42:31 compute-0 sudo[141711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:42:31 compute-0 sudo[141711]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:31 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:42:31 compute-0 sudo[141736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:42:31 compute-0 sudo[141736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:42:31 compute-0 sudo[141736]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:31.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:32 compute-0 sudo[141761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:42:32 compute-0 sudo[141761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:42:32 compute-0 sudo[141761]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:32 compute-0 sudo[141944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fumjeuovxnvbvcakkixdwgegdrgmpwym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405352.2527092-928-214996028355355/AnsiballZ_command.py'
Oct 02 11:42:32 compute-0 sudo[141944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:42:32 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:42:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:42:32 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:42:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:42:32 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:42:32 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:42:32 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:42:32 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:42:32 compute-0 python3.9[141946]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:d8:76:c8:90" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:42:32 compute-0 ovs-vsctl[141947]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:d8:76:c8:90 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct 02 11:42:32 compute-0 sudo[141944]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:32 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:42:32 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 09026a06-0b7e-4066-acf4-226e587de542 does not exist
Oct 02 11:42:32 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 7f677b58-d890-48f2-a7de-181bcdb5ec3c does not exist
Oct 02 11:42:32 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev fe414343-a580-404c-b6b0-f46c9d51c13f does not exist
Oct 02 11:42:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:42:32 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:42:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:42:32 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:42:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:42:32 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:42:32 compute-0 sudo[141948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:42:32 compute-0 sudo[141948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:42:32 compute-0 sudo[141948]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:32 compute-0 sudo[141978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:42:32 compute-0 sudo[141978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:42:32 compute-0 sudo[141978]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:32 compute-0 sudo[142022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:42:32 compute-0 sudo[142022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:42:32 compute-0 sudo[142022]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:33 compute-0 sudo[142047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:42:33 compute-0 sudo[142047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:42:33 compute-0 podman[142190]: 2025-10-02 11:42:33.332089345 +0000 UTC m=+0.052691140 container create 58f898ef1d327f24bba87e35ccd6efda6d0ac442a75e52a89f0ee843f7a85e61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:42:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:33.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:33 compute-0 systemd[1]: Started libpod-conmon-58f898ef1d327f24bba87e35ccd6efda6d0ac442a75e52a89f0ee843f7a85e61.scope.
Oct 02 11:42:33 compute-0 sudo[142253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixhskislbahjijdcqkeotawrsumbymeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405353.0823545-955-84950121953798/AnsiballZ_command.py'
Oct 02 11:42:33 compute-0 sudo[142253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:33 compute-0 podman[142190]: 2025-10-02 11:42:33.301863719 +0000 UTC m=+0.022465534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:42:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:42:33 compute-0 podman[142190]: 2025-10-02 11:42:33.553099362 +0000 UTC m=+0.273701187 container init 58f898ef1d327f24bba87e35ccd6efda6d0ac442a75e52a89f0ee843f7a85e61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_carver, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 11:42:33 compute-0 podman[142190]: 2025-10-02 11:42:33.561600891 +0000 UTC m=+0.282202736 container start 58f898ef1d327f24bba87e35ccd6efda6d0ac442a75e52a89f0ee843f7a85e61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:42:33 compute-0 affectionate_carver[142255]: 167 167
Oct 02 11:42:33 compute-0 systemd[1]: libpod-58f898ef1d327f24bba87e35ccd6efda6d0ac442a75e52a89f0ee843f7a85e61.scope: Deactivated successfully.
Oct 02 11:42:33 compute-0 podman[142190]: 2025-10-02 11:42:33.607620324 +0000 UTC m=+0.328222159 container attach 58f898ef1d327f24bba87e35ccd6efda6d0ac442a75e52a89f0ee843f7a85e61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_carver, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 11:42:33 compute-0 podman[142190]: 2025-10-02 11:42:33.609514292 +0000 UTC m=+0.330116097 container died 58f898ef1d327f24bba87e35ccd6efda6d0ac442a75e52a89f0ee843f7a85e61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_carver, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:42:33 compute-0 python3.9[142257]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:42:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-f771f556c2e3d31a65a1f9ea7fea22ade082047887d6e38470c5f88983212900-merged.mount: Deactivated successfully.
Oct 02 11:42:33 compute-0 podman[142190]: 2025-10-02 11:42:33.65977274 +0000 UTC m=+0.380374555 container remove 58f898ef1d327f24bba87e35ccd6efda6d0ac442a75e52a89f0ee843f7a85e61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:42:33 compute-0 sudo[142253]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:33 compute-0 ceph-mon[73607]: pgmap v444: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:33 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:42:33 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:42:33 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:42:33 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:42:33 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:42:33 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:42:33 compute-0 systemd[1]: libpod-conmon-58f898ef1d327f24bba87e35ccd6efda6d0ac442a75e52a89f0ee843f7a85e61.scope: Deactivated successfully.
Oct 02 11:42:33 compute-0 podman[142309]: 2025-10-02 11:42:33.8091245 +0000 UTC m=+0.040204571 container create 420d4bdb8d8b8ab3ac9b1f946a75e80f81d4b60dc468ac96a7a3bc63f8f12c8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wu, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 11:42:33 compute-0 systemd[1]: Started libpod-conmon-420d4bdb8d8b8ab3ac9b1f946a75e80f81d4b60dc468ac96a7a3bc63f8f12c8e.scope.
Oct 02 11:42:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:42:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbdd09f0e5a9a4ebeb181a493e71037e027645fa7af939df82673f18e673c243/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:42:33 compute-0 podman[142309]: 2025-10-02 11:42:33.79044755 +0000 UTC m=+0.021527631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:42:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbdd09f0e5a9a4ebeb181a493e71037e027645fa7af939df82673f18e673c243/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:42:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbdd09f0e5a9a4ebeb181a493e71037e027645fa7af939df82673f18e673c243/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:42:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbdd09f0e5a9a4ebeb181a493e71037e027645fa7af939df82673f18e673c243/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:42:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbdd09f0e5a9a4ebeb181a493e71037e027645fa7af939df82673f18e673c243/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:42:33 compute-0 podman[142309]: 2025-10-02 11:42:33.902496772 +0000 UTC m=+0.133576873 container init 420d4bdb8d8b8ab3ac9b1f946a75e80f81d4b60dc468ac96a7a3bc63f8f12c8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wu, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:42:33 compute-0 podman[142309]: 2025-10-02 11:42:33.910742915 +0000 UTC m=+0.141822986 container start 420d4bdb8d8b8ab3ac9b1f946a75e80f81d4b60dc468ac96a7a3bc63f8f12c8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wu, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:42:33 compute-0 podman[142309]: 2025-10-02 11:42:33.913942734 +0000 UTC m=+0.145022805 container attach 420d4bdb8d8b8ab3ac9b1f946a75e80f81d4b60dc468ac96a7a3bc63f8f12c8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wu, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:42:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:33.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:34 compute-0 sudo[142455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnhwqwfqeqelcktmkdjhxymwsveqdkfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405353.81188-979-101013407005552/AnsiballZ_command.py'
Oct 02 11:42:34 compute-0 sudo[142455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:34 compute-0 python3.9[142457]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:42:34 compute-0 ovs-vsctl[142458]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct 02 11:42:34 compute-0 sudo[142455]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:42:34 compute-0 ceph-mon[73607]: pgmap v445: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:34 compute-0 flamboyant_wu[142365]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:42:34 compute-0 flamboyant_wu[142365]: --> relative data size: 1.0
Oct 02 11:42:34 compute-0 flamboyant_wu[142365]: --> All data devices are unavailable
Oct 02 11:42:34 compute-0 systemd[1]: libpod-420d4bdb8d8b8ab3ac9b1f946a75e80f81d4b60dc468ac96a7a3bc63f8f12c8e.scope: Deactivated successfully.
Oct 02 11:42:34 compute-0 podman[142309]: 2025-10-02 11:42:34.762097206 +0000 UTC m=+0.993177287 container died 420d4bdb8d8b8ab3ac9b1f946a75e80f81d4b60dc468ac96a7a3bc63f8f12c8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wu, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:42:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbdd09f0e5a9a4ebeb181a493e71037e027645fa7af939df82673f18e673c243-merged.mount: Deactivated successfully.
Oct 02 11:42:34 compute-0 podman[142309]: 2025-10-02 11:42:34.829521968 +0000 UTC m=+1.060602039 container remove 420d4bdb8d8b8ab3ac9b1f946a75e80f81d4b60dc468ac96a7a3bc63f8f12c8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wu, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:42:34 compute-0 systemd[1]: libpod-conmon-420d4bdb8d8b8ab3ac9b1f946a75e80f81d4b60dc468ac96a7a3bc63f8f12c8e.scope: Deactivated successfully.
Oct 02 11:42:34 compute-0 sudo[142047]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:34 compute-0 sudo[142631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:42:34 compute-0 sudo[142631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:42:34 compute-0 sudo[142631]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:34 compute-0 sudo[142656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:42:34 compute-0 sudo[142656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:42:34 compute-0 sudo[142656]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:34 compute-0 python3.9[142619]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:42:35 compute-0 sudo[142681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:42:35 compute-0 sudo[142681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:42:35 compute-0 sudo[142681]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:35 compute-0 sudo[142708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 11:42:35 compute-0 sudo[142708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:42:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:35.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:35 compute-0 podman[142818]: 2025-10-02 11:42:35.405046572 +0000 UTC m=+0.072279972 container create f74fb0ce890353b77f45e70488e895a9082eb15e29b8f7a8e8603a95b2a2e5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goldwasser, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 11:42:35 compute-0 systemd[1]: Started libpod-conmon-f74fb0ce890353b77f45e70488e895a9082eb15e29b8f7a8e8603a95b2a2e5fe.scope.
Oct 02 11:42:35 compute-0 podman[142818]: 2025-10-02 11:42:35.353672206 +0000 UTC m=+0.020905636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:42:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:42:35 compute-0 podman[142818]: 2025-10-02 11:42:35.482338426 +0000 UTC m=+0.149571856 container init f74fb0ce890353b77f45e70488e895a9082eb15e29b8f7a8e8603a95b2a2e5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:42:35 compute-0 podman[142818]: 2025-10-02 11:42:35.490098708 +0000 UTC m=+0.157332108 container start f74fb0ce890353b77f45e70488e895a9082eb15e29b8f7a8e8603a95b2a2e5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goldwasser, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 11:42:35 compute-0 intelligent_goldwasser[142873]: 167 167
Oct 02 11:42:35 compute-0 systemd[1]: libpod-f74fb0ce890353b77f45e70488e895a9082eb15e29b8f7a8e8603a95b2a2e5fe.scope: Deactivated successfully.
Oct 02 11:42:35 compute-0 conmon[142873]: conmon f74fb0ce890353b77f45 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f74fb0ce890353b77f45e70488e895a9082eb15e29b8f7a8e8603a95b2a2e5fe.scope/container/memory.events
Oct 02 11:42:35 compute-0 podman[142818]: 2025-10-02 11:42:35.498091975 +0000 UTC m=+0.165325405 container attach f74fb0ce890353b77f45e70488e895a9082eb15e29b8f7a8e8603a95b2a2e5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goldwasser, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:42:35 compute-0 podman[142818]: 2025-10-02 11:42:35.498408303 +0000 UTC m=+0.165641703 container died f74fb0ce890353b77f45e70488e895a9082eb15e29b8f7a8e8603a95b2a2e5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goldwasser, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 11:42:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-9cd0034285ed9b5465afd2b14a25659f1e29f8c1734610796e0b1e4f1b8e95d1-merged.mount: Deactivated successfully.
Oct 02 11:42:35 compute-0 sudo[142954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjvdjeezuycseykicrhwtoodxlcycabq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405355.3331065-1030-251631386150258/AnsiballZ_file.py'
Oct 02 11:42:35 compute-0 sudo[142954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:35 compute-0 podman[142818]: 2025-10-02 11:42:35.611369037 +0000 UTC m=+0.278602437 container remove f74fb0ce890353b77f45e70488e895a9082eb15e29b8f7a8e8603a95b2a2e5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:42:35 compute-0 systemd[1]: libpod-conmon-f74fb0ce890353b77f45e70488e895a9082eb15e29b8f7a8e8603a95b2a2e5fe.scope: Deactivated successfully.
Oct 02 11:42:35 compute-0 podman[142964]: 2025-10-02 11:42:35.745331218 +0000 UTC m=+0.035534017 container create 264c6057925e5ad606eceb3c15aed83f008f3571b35d9c6f2be5d7316c3a5874 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_payne, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 11:42:35 compute-0 systemd[1]: Started libpod-conmon-264c6057925e5ad606eceb3c15aed83f008f3571b35d9c6f2be5d7316c3a5874.scope.
Oct 02 11:42:35 compute-0 python3.9[142956]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:42:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:42:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8222910dadff69715a5e48a7d528a7da1219abe53f166c6b263d93be7ba124a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:42:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8222910dadff69715a5e48a7d528a7da1219abe53f166c6b263d93be7ba124a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:42:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8222910dadff69715a5e48a7d528a7da1219abe53f166c6b263d93be7ba124a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:42:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8222910dadff69715a5e48a7d528a7da1219abe53f166c6b263d93be7ba124a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:42:35 compute-0 podman[142964]: 2025-10-02 11:42:35.821933576 +0000 UTC m=+0.112136395 container init 264c6057925e5ad606eceb3c15aed83f008f3571b35d9c6f2be5d7316c3a5874 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_payne, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Oct 02 11:42:35 compute-0 podman[142964]: 2025-10-02 11:42:35.73001246 +0000 UTC m=+0.020215279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:42:35 compute-0 sudo[142954]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:35 compute-0 podman[142964]: 2025-10-02 11:42:35.828719213 +0000 UTC m=+0.118922012 container start 264c6057925e5ad606eceb3c15aed83f008f3571b35d9c6f2be5d7316c3a5874 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_payne, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:42:35 compute-0 podman[142964]: 2025-10-02 11:42:35.833318696 +0000 UTC m=+0.123521515 container attach 264c6057925e5ad606eceb3c15aed83f008f3571b35d9c6f2be5d7316c3a5874 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_payne, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:42:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:35.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:36 compute-0 sudo[143136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxoirxvzeaxlndgnexcfvnfhwvieervh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405356.075023-1054-217721127538823/AnsiballZ_stat.py'
Oct 02 11:42:36 compute-0 sudo[143136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:36 compute-0 python3.9[143138]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:42:36 compute-0 affectionate_payne[142981]: {
Oct 02 11:42:36 compute-0 affectionate_payne[142981]:     "1": [
Oct 02 11:42:36 compute-0 affectionate_payne[142981]:         {
Oct 02 11:42:36 compute-0 affectionate_payne[142981]:             "devices": [
Oct 02 11:42:36 compute-0 affectionate_payne[142981]:                 "/dev/loop3"
Oct 02 11:42:36 compute-0 affectionate_payne[142981]:             ],
Oct 02 11:42:36 compute-0 affectionate_payne[142981]:             "lv_name": "ceph_lv0",
Oct 02 11:42:36 compute-0 affectionate_payne[142981]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:42:36 compute-0 affectionate_payne[142981]:             "lv_size": "7511998464",
Oct 02 11:42:36 compute-0 affectionate_payne[142981]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:42:36 compute-0 affectionate_payne[142981]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:42:36 compute-0 affectionate_payne[142981]:             "name": "ceph_lv0",
Oct 02 11:42:36 compute-0 affectionate_payne[142981]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:42:36 compute-0 affectionate_payne[142981]:             "tags": {
Oct 02 11:42:36 compute-0 affectionate_payne[142981]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:42:36 compute-0 affectionate_payne[142981]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:42:36 compute-0 affectionate_payne[142981]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:42:36 compute-0 affectionate_payne[142981]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:42:36 compute-0 affectionate_payne[142981]:                 "ceph.cluster_name": "ceph",
Oct 02 11:42:36 compute-0 affectionate_payne[142981]:                 "ceph.crush_device_class": "",
Oct 02 11:42:36 compute-0 affectionate_payne[142981]:                 "ceph.encrypted": "0",
Oct 02 11:42:36 compute-0 affectionate_payne[142981]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:42:36 compute-0 affectionate_payne[142981]:                 "ceph.osd_id": "1",
Oct 02 11:42:36 compute-0 affectionate_payne[142981]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:42:36 compute-0 affectionate_payne[142981]:                 "ceph.type": "block",
Oct 02 11:42:36 compute-0 affectionate_payne[142981]:                 "ceph.vdo": "0"
Oct 02 11:42:36 compute-0 affectionate_payne[142981]:             },
Oct 02 11:42:36 compute-0 affectionate_payne[142981]:             "type": "block",
Oct 02 11:42:36 compute-0 affectionate_payne[142981]:             "vg_name": "ceph_vg0"
Oct 02 11:42:36 compute-0 affectionate_payne[142981]:         }
Oct 02 11:42:36 compute-0 affectionate_payne[142981]:     ]
Oct 02 11:42:36 compute-0 affectionate_payne[142981]: }
Oct 02 11:42:36 compute-0 sudo[143136]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:36 compute-0 systemd[1]: libpod-264c6057925e5ad606eceb3c15aed83f008f3571b35d9c6f2be5d7316c3a5874.scope: Deactivated successfully.
Oct 02 11:42:36 compute-0 podman[142964]: 2025-10-02 11:42:36.612609362 +0000 UTC m=+0.902812161 container died 264c6057925e5ad606eceb3c15aed83f008f3571b35d9c6f2be5d7316c3a5874 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:42:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-8222910dadff69715a5e48a7d528a7da1219abe53f166c6b263d93be7ba124a3-merged.mount: Deactivated successfully.
Oct 02 11:42:36 compute-0 podman[142964]: 2025-10-02 11:42:36.7079068 +0000 UTC m=+0.998109599 container remove 264c6057925e5ad606eceb3c15aed83f008f3571b35d9c6f2be5d7316c3a5874 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 11:42:36 compute-0 systemd[1]: libpod-conmon-264c6057925e5ad606eceb3c15aed83f008f3571b35d9c6f2be5d7316c3a5874.scope: Deactivated successfully.
Oct 02 11:42:36 compute-0 sudo[142708]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:36 compute-0 sudo[143206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:42:36 compute-0 sudo[143206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:42:36 compute-0 sudo[143206]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:36 compute-0 sudo[143257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnzxqtpkyoupabsdzivepzvgcwdtrbni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405356.075023-1054-217721127538823/AnsiballZ_file.py'
Oct 02 11:42:36 compute-0 sudo[143257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:36 compute-0 sudo[143258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:42:36 compute-0 sudo[143258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:42:36 compute-0 sudo[143258]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:36 compute-0 sudo[143285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:42:36 compute-0 sudo[143285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:42:36 compute-0 sudo[143285]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:36 compute-0 sudo[143310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 11:42:36 compute-0 sudo[143310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:42:37 compute-0 python3.9[143266]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:42:37 compute-0 sudo[143257]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:37 compute-0 podman[143407]: 2025-10-02 11:42:37.25043466 +0000 UTC m=+0.041003051 container create 641be3167765ebed42ac0754eacee2df8bcccd856e9d67950d29c831100a58a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wright, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:42:37 compute-0 systemd[1]: Started libpod-conmon-641be3167765ebed42ac0754eacee2df8bcccd856e9d67950d29c831100a58a5.scope.
Oct 02 11:42:37 compute-0 podman[143407]: 2025-10-02 11:42:37.231524135 +0000 UTC m=+0.022092546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:42:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:42:37 compute-0 podman[143407]: 2025-10-02 11:42:37.363137918 +0000 UTC m=+0.153706329 container init 641be3167765ebed42ac0754eacee2df8bcccd856e9d67950d29c831100a58a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wright, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:42:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:37.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:37 compute-0 podman[143407]: 2025-10-02 11:42:37.371704619 +0000 UTC m=+0.162273020 container start 641be3167765ebed42ac0754eacee2df8bcccd856e9d67950d29c831100a58a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wright, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 11:42:37 compute-0 amazing_wright[143482]: 167 167
Oct 02 11:42:37 compute-0 podman[143407]: 2025-10-02 11:42:37.375960504 +0000 UTC m=+0.166528915 container attach 641be3167765ebed42ac0754eacee2df8bcccd856e9d67950d29c831100a58a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:42:37 compute-0 systemd[1]: libpod-641be3167765ebed42ac0754eacee2df8bcccd856e9d67950d29c831100a58a5.scope: Deactivated successfully.
Oct 02 11:42:37 compute-0 podman[143407]: 2025-10-02 11:42:37.377856581 +0000 UTC m=+0.168424972 container died 641be3167765ebed42ac0754eacee2df8bcccd856e9d67950d29c831100a58a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wright, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:42:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d00ad38d554da874f6cac3f0d003462e16554b9b875af9a63ccba0243d3eed9-merged.mount: Deactivated successfully.
Oct 02 11:42:37 compute-0 sudo[143559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noftubxomzppcvpdhhjhqpypthfnyjwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405357.2234254-1054-21614908271483/AnsiballZ_stat.py'
Oct 02 11:42:37 compute-0 sudo[143559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:37 compute-0 podman[143407]: 2025-10-02 11:42:37.477094176 +0000 UTC m=+0.267662567 container remove 641be3167765ebed42ac0754eacee2df8bcccd856e9d67950d29c831100a58a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wright, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Oct 02 11:42:37 compute-0 systemd[1]: libpod-conmon-641be3167765ebed42ac0754eacee2df8bcccd856e9d67950d29c831100a58a5.scope: Deactivated successfully.
Oct 02 11:42:37 compute-0 ceph-mon[73607]: pgmap v446: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:37 compute-0 podman[143569]: 2025-10-02 11:42:37.626709324 +0000 UTC m=+0.052455955 container create 5b709773e8683b42ad2cde08f186ef5f503864451fa3e89ed6915772032ed734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tu, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:42:37 compute-0 systemd[1]: Started libpod-conmon-5b709773e8683b42ad2cde08f186ef5f503864451fa3e89ed6915772032ed734.scope.
Oct 02 11:42:37 compute-0 python3.9[143561]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:42:37 compute-0 podman[143569]: 2025-10-02 11:42:37.602466467 +0000 UTC m=+0.028213138 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:42:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:42:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9a7fcf45b1fbbf5f9f5703bbf31a707ec979687dc9c334dd0601c4b75432ebc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:42:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9a7fcf45b1fbbf5f9f5703bbf31a707ec979687dc9c334dd0601c4b75432ebc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:42:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9a7fcf45b1fbbf5f9f5703bbf31a707ec979687dc9c334dd0601c4b75432ebc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:42:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9a7fcf45b1fbbf5f9f5703bbf31a707ec979687dc9c334dd0601c4b75432ebc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:42:37 compute-0 sudo[143559]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:37 compute-0 podman[143569]: 2025-10-02 11:42:37.755311193 +0000 UTC m=+0.181057834 container init 5b709773e8683b42ad2cde08f186ef5f503864451fa3e89ed6915772032ed734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tu, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 11:42:37 compute-0 podman[143569]: 2025-10-02 11:42:37.761635049 +0000 UTC m=+0.187381670 container start 5b709773e8683b42ad2cde08f186ef5f503864451fa3e89ed6915772032ed734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tu, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:42:37 compute-0 podman[143569]: 2025-10-02 11:42:37.852696603 +0000 UTC m=+0.278443234 container attach 5b709773e8683b42ad2cde08f186ef5f503864451fa3e89ed6915772032ed734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tu, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 11:42:37 compute-0 sudo[143666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkxmomikzufuyghlpbompfzksrxmtmir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405357.2234254-1054-21614908271483/AnsiballZ_file.py'
Oct 02 11:42:37 compute-0 sudo[143666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:37.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:38 compute-0 python3.9[143668]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:42:38 compute-0 sudo[143666]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:38 compute-0 sudo[143830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hahnbrbkbgxehagywtvtodbzgugoxycq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405358.3685756-1123-50722292901395/AnsiballZ_file.py'
Oct 02 11:42:38 compute-0 sudo[143830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:38 compute-0 admiring_tu[143586]: {
Oct 02 11:42:38 compute-0 admiring_tu[143586]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 11:42:38 compute-0 admiring_tu[143586]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:42:38 compute-0 admiring_tu[143586]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:42:38 compute-0 admiring_tu[143586]:         "osd_id": 1,
Oct 02 11:42:38 compute-0 admiring_tu[143586]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:42:38 compute-0 admiring_tu[143586]:         "type": "bluestore"
Oct 02 11:42:38 compute-0 admiring_tu[143586]:     }
Oct 02 11:42:38 compute-0 admiring_tu[143586]: }
Oct 02 11:42:38 compute-0 systemd[1]: libpod-5b709773e8683b42ad2cde08f186ef5f503864451fa3e89ed6915772032ed734.scope: Deactivated successfully.
Oct 02 11:42:38 compute-0 podman[143569]: 2025-10-02 11:42:38.72981987 +0000 UTC m=+1.155566491 container died 5b709773e8683b42ad2cde08f186ef5f503864451fa3e89ed6915772032ed734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:42:38 compute-0 ceph-mon[73607]: pgmap v447: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9a7fcf45b1fbbf5f9f5703bbf31a707ec979687dc9c334dd0601c4b75432ebc-merged.mount: Deactivated successfully.
Oct 02 11:42:38 compute-0 python3.9[143835]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:42:38 compute-0 sudo[143830]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:38 compute-0 podman[143569]: 2025-10-02 11:42:38.893863992 +0000 UTC m=+1.319610613 container remove 5b709773e8683b42ad2cde08f186ef5f503864451fa3e89ed6915772032ed734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tu, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:42:38 compute-0 systemd[1]: libpod-conmon-5b709773e8683b42ad2cde08f186ef5f503864451fa3e89ed6915772032ed734.scope: Deactivated successfully.
Oct 02 11:42:38 compute-0 sudo[143310]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:42:38 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:42:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:42:39 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:42:39 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 2fba4ab1-2bb6-45d2-8432-3d6575dd7ed8 does not exist
Oct 02 11:42:39 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 79c761b5-52e3-46f4-8487-01a245152bbc does not exist
Oct 02 11:42:39 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 388e0810-2c6f-4fa2-b209-cbe6050a0721 does not exist
Oct 02 11:42:39 compute-0 sudo[143881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:42:39 compute-0 sudo[143881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:42:39 compute-0 sudo[143881]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:39 compute-0 sudo[143943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:42:39 compute-0 sudo[143943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:42:39 compute-0 sudo[143943]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:39 compute-0 sudo[144051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vohloqskfoquzmbzbwwbbznlcrnwfoqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405359.0821567-1147-207181625410090/AnsiballZ_stat.py'
Oct 02 11:42:39 compute-0 sudo[144051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:42:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:39.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:42:39 compute-0 python3.9[144053]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:42:39 compute-0 sudo[144051]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:42:39 compute-0 sudo[144129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtvevjwvkhlsihpbxnzyaofzasmwqyln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405359.0821567-1147-207181625410090/AnsiballZ_file.py'
Oct 02 11:42:39 compute-0 sudo[144129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:39 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:42:39 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:42:39 compute-0 python3.9[144131]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:42:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:40.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:40 compute-0 sudo[144129]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:40 compute-0 sudo[144282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmmodjnxyqudspcdubguokkjgdnssbgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405360.274361-1183-162717330196336/AnsiballZ_stat.py'
Oct 02 11:42:40 compute-0 sudo[144282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:40 compute-0 python3.9[144284]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:42:40 compute-0 sudo[144282]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:40 compute-0 sudo[144287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:42:40 compute-0 sudo[144287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:42:40 compute-0 sudo[144287]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:40 compute-0 sudo[144335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:42:40 compute-0 sudo[144335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:42:40 compute-0 sudo[144335]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:40 compute-0 sudo[144410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boqurmboqvoqoitswyyacziiomrmejun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405360.274361-1183-162717330196336/AnsiballZ_file.py'
Oct 02 11:42:40 compute-0 sudo[144410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:41 compute-0 ceph-mon[73607]: pgmap v448: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:41 compute-0 python3.9[144412]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:42:41 compute-0 sudo[144410]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:41.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:41 compute-0 sudo[144562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nodollepreboxqesdkcsctsgdzdxcyir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405361.5775535-1219-12661927998520/AnsiballZ_systemd.py'
Oct 02 11:42:41 compute-0 sudo[144562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:42.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:42 compute-0 python3.9[144564]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:42:42 compute-0 systemd[1]: Reloading.
Oct 02 11:42:42 compute-0 systemd-sysv-generator[144596]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:42:42 compute-0 systemd-rc-local-generator[144593]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:42:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_11:42:42
Oct 02 11:42:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:42:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 11:42:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'images', 'default.rgw.log', 'vms', 'volumes', 'default.rgw.meta', '.mgr', 'backups', 'cephfs.cephfs.meta']
Oct 02 11:42:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:42:42 compute-0 sudo[144562]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:42:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:42:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:42:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:42:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:42:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:42:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:42:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:42:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:42:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:42:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:42:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:42:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:42:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:42:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:42:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:42:43 compute-0 sudo[144753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-absctzeblvmbaujrygmmsdbedupiomrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405363.0278883-1243-709109162579/AnsiballZ_stat.py'
Oct 02 11:42:43 compute-0 sudo[144753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:43.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:43 compute-0 python3.9[144755]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:42:43 compute-0 sudo[144753]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:43 compute-0 ceph-mon[73607]: pgmap v449: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:43 compute-0 sudo[144831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynhpevcgxsnonfljyqchoxfjrddztckr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405363.0278883-1243-709109162579/AnsiballZ_file.py'
Oct 02 11:42:43 compute-0 sudo[144831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:43 compute-0 python3.9[144833]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:42:43 compute-0 sudo[144831]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:44.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:44 compute-0 sudo[144984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxezmqgptjqmfrankgmacicruceuvysi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405364.2694857-1279-105864340471173/AnsiballZ_stat.py'
Oct 02 11:42:44 compute-0 sudo[144984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:42:44 compute-0 python3.9[144986]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:42:44 compute-0 sudo[144984]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:44 compute-0 ceph-mon[73607]: pgmap v450: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:44 compute-0 sudo[145062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-angdemuowlhipiqieofmksrplvkggqmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405364.2694857-1279-105864340471173/AnsiballZ_file.py'
Oct 02 11:42:44 compute-0 sudo[145062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:45 compute-0 python3.9[145064]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:42:45 compute-0 sudo[145062]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:45.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:45 compute-0 sudo[145214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guxbglheduixojcrhfranamiwribfgty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405365.5835667-1315-122458833846007/AnsiballZ_systemd.py'
Oct 02 11:42:45 compute-0 sudo[145214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:46.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:46 compute-0 python3.9[145216]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:42:46 compute-0 systemd[1]: Reloading.
Oct 02 11:42:46 compute-0 systemd-rc-local-generator[145244]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:42:46 compute-0 systemd-sysv-generator[145247]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:42:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:46 compute-0 systemd[1]: Starting Create netns directory...
Oct 02 11:42:46 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 11:42:46 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 11:42:46 compute-0 systemd[1]: Finished Create netns directory.
Oct 02 11:42:46 compute-0 sudo[145214]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:47 compute-0 sudo[145409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubuveifhcpvxypamgceysgijvnvbppeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405366.919049-1345-42903320729677/AnsiballZ_file.py'
Oct 02 11:42:47 compute-0 sudo[145409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:47.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:47 compute-0 python3.9[145411]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:42:47 compute-0 sudo[145409]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:47 compute-0 ceph-mon[73607]: pgmap v451: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:47 compute-0 sudo[145561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vabqyztfbecdoiyeaxvzjqcupsyxdflp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405367.749068-1369-234770375283103/AnsiballZ_stat.py'
Oct 02 11:42:47 compute-0 sudo[145561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:42:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:48.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:42:48 compute-0 python3.9[145563]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:42:48 compute-0 sudo[145561]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:48 compute-0 sudo[145685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ludcnigcolgnjwyqttgxwuamhamztsdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405367.749068-1369-234770375283103/AnsiballZ_copy.py'
Oct 02 11:42:48 compute-0 sudo[145685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:48 compute-0 python3.9[145687]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405367.749068-1369-234770375283103/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:42:48 compute-0 sudo[145685]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:49.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:49 compute-0 ceph-mon[73607]: pgmap v452: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:42:49 compute-0 sudo[145837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwxgcbqusvccfiskvqabxknoalpxpmqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405369.4446528-1420-89269682788670/AnsiballZ_file.py'
Oct 02 11:42:49 compute-0 sudo[145837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:49 compute-0 python3.9[145839]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:42:50 compute-0 sudo[145837]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:50.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:50 compute-0 sudo[145990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gatmbstdzmmjvhhltvtdaspsbxgtjege ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405370.2736638-1444-21160353414082/AnsiballZ_stat.py'
Oct 02 11:42:50 compute-0 sudo[145990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:50 compute-0 ceph-mon[73607]: pgmap v453: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:50 compute-0 python3.9[145992]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:42:50 compute-0 sudo[145990]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:51 compute-0 sudo[146113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trzsvfrrbxzimnwfqritnfrvkfhggsgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405370.2736638-1444-21160353414082/AnsiballZ_copy.py'
Oct 02 11:42:51 compute-0 sudo[146113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:51 compute-0 python3.9[146115]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405370.2736638-1444-21160353414082/.source.json _original_basename=.50xpwe__ follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:42:51 compute-0 sudo[146113]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:51.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:51 compute-0 sudo[146265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahvmjogwgtdqrqfagfznviexiulxirur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405371.67163-1489-110967743931501/AnsiballZ_file.py'
Oct 02 11:42:51 compute-0 sudo[146265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:42:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:52.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:42:52 compute-0 python3.9[146267]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:42:52 compute-0 sudo[146265]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:52 compute-0 sudo[146418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meymvswmnlwsyctvplcbklwjuqafbjrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405372.433979-1513-163720402814982/AnsiballZ_stat.py'
Oct 02 11:42:52 compute-0 sudo[146418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:52 compute-0 sudo[146418]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:53 compute-0 sudo[146541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqgpaodvwwmdodheowpqbiwnkuunrcdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405372.433979-1513-163720402814982/AnsiballZ_copy.py'
Oct 02 11:42:53 compute-0 sudo[146541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:42:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:53.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:42:53 compute-0 sudo[146541]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:53 compute-0 ceph-mon[73607]: pgmap v454: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:42:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:42:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:42:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:42:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:42:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:42:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:42:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:42:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:42:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:42:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:42:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:42:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:42:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:42:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:42:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:42:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 11:42:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:42:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:42:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:42:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:42:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:42:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:42:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:54.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:54 compute-0 sudo[146693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loytwwedcsndakaxhwviiuroujvlzrnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405373.803065-1564-63195860775032/AnsiballZ_container_config_data.py'
Oct 02 11:42:54 compute-0 sudo[146693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:54 compute-0 python3.9[146695]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct 02 11:42:54 compute-0 sudo[146693]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:42:55 compute-0 sudo[146846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfzuuqgnphtzzsiexmkfzmivlqnitsww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405374.7928858-1591-217695523449722/AnsiballZ_container_config_hash.py'
Oct 02 11:42:55 compute-0 sudo[146846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:42:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:55.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:42:55 compute-0 python3.9[146848]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 11:42:55 compute-0 sudo[146846]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:55 compute-0 ceph-mon[73607]: pgmap v455: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:56.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:56 compute-0 sudo[146998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iduxqvjkotsespyzwisfuxklmxqbsops ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405375.807159-1618-59980530979148/AnsiballZ_podman_container_info.py'
Oct 02 11:42:56 compute-0 sudo[146998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:56 compute-0 python3.9[147000]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 02 11:42:56 compute-0 sudo[146998]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:42:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:57.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:42:57 compute-0 ceph-mon[73607]: pgmap v456: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:58.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:58 compute-0 sudo[147178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piiaamowyceqzkghzsmdlzsxnwzfipan ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759405377.4106073-1657-101622093913691/AnsiballZ_edpm_container_manage.py'
Oct 02 11:42:58 compute-0 sudo[147178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:58 compute-0 python3[147180]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 11:42:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 11:42:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5727 writes, 24K keys, 5727 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5727 writes, 865 syncs, 6.62 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5727 writes, 24K keys, 5727 commit groups, 1.0 writes per commit group, ingest: 18.86 MB, 0.03 MB/s
                                           Interval WAL: 5727 writes, 865 syncs, 6.62 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3c430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3c430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3c430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 02 11:42:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:42:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:42:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:59.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:42:59 compute-0 ceph-mon[73607]: pgmap v457: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:42:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:43:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:43:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:00.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:43:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:00 compute-0 sudo[147244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:43:00 compute-0 sudo[147244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:43:00 compute-0 sudo[147244]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:00 compute-0 sudo[147269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:43:01 compute-0 sudo[147269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:43:01 compute-0 sudo[147269]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:01 compute-0 ceph-mon[73607]: pgmap v458: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:43:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:01.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:43:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:02.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:02 compute-0 podman[147195]: 2025-10-02 11:43:02.710501534 +0000 UTC m=+4.369032244 image pull ae232aa720979600656d94fc26ba957f1cdf5bca825fe9b57990f60c6534611f quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 02 11:43:02 compute-0 podman[147364]: 2025-10-02 11:43:02.848797522 +0000 UTC m=+0.043597745 container create 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct 02 11:43:02 compute-0 podman[147364]: 2025-10-02 11:43:02.825150489 +0000 UTC m=+0.019950732 image pull ae232aa720979600656d94fc26ba957f1cdf5bca825fe9b57990f60c6534611f quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 02 11:43:02 compute-0 python3[147180]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 02 11:43:02 compute-0 sudo[147178]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:03.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:03 compute-0 ceph-mon[73607]: pgmap v459: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:04.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:43:04 compute-0 sudo[147553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpnwjaseyvsyaglutfbvaludjnabjauf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405384.5005438-1681-110515874893590/AnsiballZ_stat.py'
Oct 02 11:43:04 compute-0 sudo[147553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:04 compute-0 python3.9[147555]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:43:04 compute-0 sudo[147553]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:05.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:05 compute-0 ceph-mon[73607]: pgmap v460: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:05 compute-0 sudo[147707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbofjnkjtjtctahyqfbdkfyzdfxjhtsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405385.3731234-1708-216383206631153/AnsiballZ_file.py'
Oct 02 11:43:05 compute-0 sudo[147707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:05 compute-0 python3.9[147709]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:43:05 compute-0 sudo[147707]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:06.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:06 compute-0 sudo[147783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyxynydgheqrxgcmxwervvxajcxjstwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405385.3731234-1708-216383206631153/AnsiballZ_stat.py'
Oct 02 11:43:06 compute-0 sudo[147783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:06 compute-0 python3.9[147785]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:43:06 compute-0 sudo[147783]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:06 compute-0 sudo[147936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acnsiuwodfevumukmkexlbhscitjsdvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405386.3914347-1708-124598310127483/AnsiballZ_copy.py'
Oct 02 11:43:06 compute-0 sudo[147936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:06 compute-0 ceph-mgr[73901]: [devicehealth INFO root] Check health
Oct 02 11:43:06 compute-0 ceph-mon[73607]: pgmap v461: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:07 compute-0 python3.9[147938]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759405386.3914347-1708-124598310127483/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:43:07 compute-0 sudo[147936]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:07 compute-0 sudo[148012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwanvxxodcewcopxcuknwzvmcvtuezct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405386.3914347-1708-124598310127483/AnsiballZ_systemd.py'
Oct 02 11:43:07 compute-0 sudo[148012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:43:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:07.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:43:07 compute-0 python3.9[148014]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 11:43:07 compute-0 systemd[1]: Reloading.
Oct 02 11:43:07 compute-0 systemd-sysv-generator[148041]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:43:07 compute-0 systemd-rc-local-generator[148038]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:43:07 compute-0 sudo[148012]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:08.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:08 compute-0 sudo[148123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shjrvrigitgmrmukfzgwgtduixaryper ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405386.3914347-1708-124598310127483/AnsiballZ_systemd.py'
Oct 02 11:43:08 compute-0 sudo[148123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:08 compute-0 python3.9[148125]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:43:08 compute-0 systemd[1]: Reloading.
Oct 02 11:43:08 compute-0 systemd-rc-local-generator[148160]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:43:08 compute-0 systemd-sysv-generator[148163]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:43:08 compute-0 systemd[1]: Starting ovn_controller container...
Oct 02 11:43:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a167496ef82afc732be58eb8228df6fb0605c44f2855b41512160224748e991/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:08 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7.
Oct 02 11:43:08 compute-0 podman[148168]: 2025-10-02 11:43:08.996446769 +0000 UTC m=+0.161550503 container init 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible)
Oct 02 11:43:09 compute-0 ovn_controller[148183]: + sudo -E kolla_set_configs
Oct 02 11:43:09 compute-0 podman[148168]: 2025-10-02 11:43:09.042697788 +0000 UTC m=+0.207801492 container start 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 11:43:09 compute-0 edpm-start-podman-container[148168]: ovn_controller
Oct 02 11:43:09 compute-0 systemd[1]: Created slice User Slice of UID 0.
Oct 02 11:43:09 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 02 11:43:09 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 02 11:43:09 compute-0 systemd[1]: Starting User Manager for UID 0...
Oct 02 11:43:09 compute-0 edpm-start-podman-container[148167]: Creating additional drop-in dependency for "ovn_controller" (609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7)
Oct 02 11:43:09 compute-0 systemd[148223]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 02 11:43:09 compute-0 podman[148190]: 2025-10-02 11:43:09.151505849 +0000 UTC m=+0.099835281 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, config_id=ovn_controller)
Oct 02 11:43:09 compute-0 systemd[1]: 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7-70c3403616f2414d.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 11:43:09 compute-0 systemd[1]: 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7-70c3403616f2414d.service: Failed with result 'exit-code'.
Oct 02 11:43:09 compute-0 systemd[1]: Reloading.
Oct 02 11:43:09 compute-0 systemd-rc-local-generator[148267]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:43:09 compute-0 systemd-sysv-generator[148271]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:43:09 compute-0 systemd[148223]: Queued start job for default target Main User Target.
Oct 02 11:43:09 compute-0 systemd[148223]: Created slice User Application Slice.
Oct 02 11:43:09 compute-0 systemd[148223]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 02 11:43:09 compute-0 systemd[148223]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 11:43:09 compute-0 systemd[148223]: Reached target Paths.
Oct 02 11:43:09 compute-0 systemd[148223]: Reached target Timers.
Oct 02 11:43:09 compute-0 systemd[148223]: Starting D-Bus User Message Bus Socket...
Oct 02 11:43:09 compute-0 systemd[148223]: Starting Create User's Volatile Files and Directories...
Oct 02 11:43:09 compute-0 systemd[148223]: Finished Create User's Volatile Files and Directories.
Oct 02 11:43:09 compute-0 systemd[148223]: Listening on D-Bus User Message Bus Socket.
Oct 02 11:43:09 compute-0 systemd[148223]: Reached target Sockets.
Oct 02 11:43:09 compute-0 systemd[148223]: Reached target Basic System.
Oct 02 11:43:09 compute-0 systemd[148223]: Reached target Main User Target.
Oct 02 11:43:09 compute-0 systemd[148223]: Startup finished in 179ms.
Oct 02 11:43:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:09.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:09 compute-0 systemd[1]: Started User Manager for UID 0.
Oct 02 11:43:09 compute-0 systemd[1]: Started ovn_controller container.
Oct 02 11:43:09 compute-0 systemd[1]: Started Session c1 of User root.
Oct 02 11:43:09 compute-0 sudo[148123]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:09 compute-0 ovn_controller[148183]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 11:43:09 compute-0 ovn_controller[148183]: INFO:__main__:Validating config file
Oct 02 11:43:09 compute-0 ovn_controller[148183]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 11:43:09 compute-0 ovn_controller[148183]: INFO:__main__:Writing out command to execute
Oct 02 11:43:09 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Oct 02 11:43:09 compute-0 ovn_controller[148183]: ++ cat /run_command
Oct 02 11:43:09 compute-0 ovn_controller[148183]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 02 11:43:09 compute-0 ovn_controller[148183]: + ARGS=
Oct 02 11:43:09 compute-0 ovn_controller[148183]: + sudo kolla_copy_cacerts
Oct 02 11:43:09 compute-0 systemd[1]: Started Session c2 of User root.
Oct 02 11:43:09 compute-0 ovn_controller[148183]: + [[ ! -n '' ]]
Oct 02 11:43:09 compute-0 ovn_controller[148183]: + . kolla_extend_start
Oct 02 11:43:09 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Oct 02 11:43:09 compute-0 ovn_controller[148183]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 02 11:43:09 compute-0 ovn_controller[148183]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct 02 11:43:09 compute-0 ovn_controller[148183]: + umask 0022
Oct 02 11:43:09 compute-0 ovn_controller[148183]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct 02 11:43:09 compute-0 ovn_controller[148183]: 2025-10-02T11:43:09Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 02 11:43:09 compute-0 ovn_controller[148183]: 2025-10-02T11:43:09Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 02 11:43:09 compute-0 ovn_controller[148183]: 2025-10-02T11:43:09Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct 02 11:43:09 compute-0 ovn_controller[148183]: 2025-10-02T11:43:09Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct 02 11:43:09 compute-0 ovn_controller[148183]: 2025-10-02T11:43:09Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 02 11:43:09 compute-0 ovn_controller[148183]: 2025-10-02T11:43:09Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct 02 11:43:09 compute-0 NetworkManager[44987]: <info>  [1759405389.5997] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Oct 02 11:43:09 compute-0 NetworkManager[44987]: <info>  [1759405389.6003] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 11:43:09 compute-0 NetworkManager[44987]: <info>  [1759405389.6014] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct 02 11:43:09 compute-0 ceph-mon[73607]: pgmap v462: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:09 compute-0 NetworkManager[44987]: <info>  [1759405389.6025] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Oct 02 11:43:09 compute-0 NetworkManager[44987]: <info>  [1759405389.6027] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 02 11:43:09 compute-0 kernel: br-int: entered promiscuous mode
Oct 02 11:43:09 compute-0 ovn_controller[148183]: 2025-10-02T11:43:09Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 02 11:43:09 compute-0 ovn_controller[148183]: 2025-10-02T11:43:09Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 02 11:43:09 compute-0 ovn_controller[148183]: 2025-10-02T11:43:09Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 02 11:43:09 compute-0 ovn_controller[148183]: 2025-10-02T11:43:09Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct 02 11:43:09 compute-0 ovn_controller[148183]: 2025-10-02T11:43:09Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct 02 11:43:09 compute-0 ovn_controller[148183]: 2025-10-02T11:43:09Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct 02 11:43:09 compute-0 ovn_controller[148183]: 2025-10-02T11:43:09Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 02 11:43:09 compute-0 ovn_controller[148183]: 2025-10-02T11:43:09Z|00014|main|INFO|OVS feature set changed, force recompute.
Oct 02 11:43:09 compute-0 ovn_controller[148183]: 2025-10-02T11:43:09Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 02 11:43:09 compute-0 ovn_controller[148183]: 2025-10-02T11:43:09Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 02 11:43:09 compute-0 ovn_controller[148183]: 2025-10-02T11:43:09Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 02 11:43:09 compute-0 ovn_controller[148183]: 2025-10-02T11:43:09Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 02 11:43:09 compute-0 ovn_controller[148183]: 2025-10-02T11:43:09Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 02 11:43:09 compute-0 ovn_controller[148183]: 2025-10-02T11:43:09Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct 02 11:43:09 compute-0 ovn_controller[148183]: 2025-10-02T11:43:09Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct 02 11:43:09 compute-0 ovn_controller[148183]: 2025-10-02T11:43:09Z|00022|main|INFO|OVS feature set changed, force recompute.
Oct 02 11:43:09 compute-0 ovn_controller[148183]: 2025-10-02T11:43:09Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct 02 11:43:09 compute-0 ovn_controller[148183]: 2025-10-02T11:43:09Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct 02 11:43:09 compute-0 ovn_controller[148183]: 2025-10-02T11:43:09Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 02 11:43:09 compute-0 ovn_controller[148183]: 2025-10-02T11:43:09Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 02 11:43:09 compute-0 ovn_controller[148183]: 2025-10-02T11:43:09Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 02 11:43:09 compute-0 ovn_controller[148183]: 2025-10-02T11:43:09Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 02 11:43:09 compute-0 NetworkManager[44987]: <info>  [1759405389.6205] manager: (ovn-e4c887-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct 02 11:43:09 compute-0 ovn_controller[148183]: 2025-10-02T11:43:09Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 02 11:43:09 compute-0 NetworkManager[44987]: <info>  [1759405389.6211] manager: (ovn-672825-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Oct 02 11:43:09 compute-0 ovn_controller[148183]: 2025-10-02T11:43:09Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 02 11:43:09 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Oct 02 11:43:09 compute-0 NetworkManager[44987]: <info>  [1759405389.6411] device (genev_sys_6081): carrier: link connected
Oct 02 11:43:09 compute-0 NetworkManager[44987]: <info>  [1759405389.6416] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/21)
Oct 02 11:43:09 compute-0 systemd-udevd[148319]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 11:43:09 compute-0 systemd-udevd[148323]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 11:43:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:43:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:10.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:10 compute-0 NetworkManager[44987]: <info>  [1759405390.2417] manager: (ovn-900289-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Oct 02 11:43:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:10 compute-0 sudo[148452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbtvzjsqclvwfmbpokpyocwgkvvvxnwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405390.2632492-1792-91765627387023/AnsiballZ_command.py'
Oct 02 11:43:10 compute-0 sudo[148452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:10 compute-0 python3.9[148454]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:43:10 compute-0 ovs-vsctl[148455]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct 02 11:43:10 compute-0 sudo[148452]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:11 compute-0 sudo[148605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmigkvioixvnlueceuoimutlitsrketp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405391.0351913-1816-13037658289519/AnsiballZ_command.py'
Oct 02 11:43:11 compute-0 sudo[148605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:43:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:11.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:43:11 compute-0 python3.9[148607]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:43:11 compute-0 ovs-vsctl[148609]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct 02 11:43:11 compute-0 sudo[148605]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:11 compute-0 ceph-mon[73607]: pgmap v463: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:12.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:12 compute-0 sudo[148761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tthfkqusbtspglsurgluumamyzaqguyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405392.1016576-1858-272040408267869/AnsiballZ_command.py'
Oct 02 11:43:12 compute-0 sudo[148761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:12 compute-0 python3.9[148763]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:43:12 compute-0 ovs-vsctl[148764]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct 02 11:43:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:43:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:43:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:43:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:43:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:43:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:43:12 compute-0 sudo[148761]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:12 compute-0 ceph-mon[73607]: pgmap v464: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:13 compute-0 sshd-session[136361]: Connection closed by 192.168.122.30 port 36920
Oct 02 11:43:13 compute-0 sshd-session[136358]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:43:13 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Oct 02 11:43:13 compute-0 systemd[1]: session-47.scope: Consumed 54.054s CPU time.
Oct 02 11:43:13 compute-0 systemd-logind[789]: Session 47 logged out. Waiting for processes to exit.
Oct 02 11:43:13 compute-0 systemd-logind[789]: Removed session 47.
Oct 02 11:43:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:43:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:13.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:43:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:14.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:43:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:15.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:15 compute-0 ceph-mon[73607]: pgmap v465: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:16.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:17.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:17 compute-0 ceph-mon[73607]: pgmap v466: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:18.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:18 compute-0 sshd-session[148792]: Accepted publickey for zuul from 192.168.122.30 port 49822 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 11:43:18 compute-0 systemd-logind[789]: New session 49 of user zuul.
Oct 02 11:43:18 compute-0 systemd[1]: Started Session 49 of User zuul.
Oct 02 11:43:18 compute-0 sshd-session[148792]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:43:18 compute-0 ceph-mon[73607]: pgmap v467: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:19.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:19 compute-0 systemd[1]: Stopping User Manager for UID 0...
Oct 02 11:43:19 compute-0 systemd[148223]: Activating special unit Exit the Session...
Oct 02 11:43:19 compute-0 systemd[148223]: Stopped target Main User Target.
Oct 02 11:43:19 compute-0 systemd[148223]: Stopped target Basic System.
Oct 02 11:43:19 compute-0 systemd[148223]: Stopped target Paths.
Oct 02 11:43:19 compute-0 systemd[148223]: Stopped target Sockets.
Oct 02 11:43:19 compute-0 systemd[148223]: Stopped target Timers.
Oct 02 11:43:19 compute-0 systemd[148223]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 11:43:19 compute-0 systemd[148223]: Closed D-Bus User Message Bus Socket.
Oct 02 11:43:19 compute-0 systemd[148223]: Stopped Create User's Volatile Files and Directories.
Oct 02 11:43:19 compute-0 systemd[148223]: Removed slice User Application Slice.
Oct 02 11:43:19 compute-0 systemd[148223]: Reached target Shutdown.
Oct 02 11:43:19 compute-0 systemd[148223]: Finished Exit the Session.
Oct 02 11:43:19 compute-0 systemd[148223]: Reached target Exit the Session.
Oct 02 11:43:19 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Oct 02 11:43:19 compute-0 systemd[1]: Stopped User Manager for UID 0.
Oct 02 11:43:19 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 02 11:43:19 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 02 11:43:19 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 02 11:43:19 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 02 11:43:19 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Oct 02 11:43:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:43:19 compute-0 python3.9[148945]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:43:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:20.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:20 compute-0 sudo[149103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knobmwkngnjnphvsajwkjsybkawlzdxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405400.2394843-67-9988423178673/AnsiballZ_file.py'
Oct 02 11:43:20 compute-0 sudo[149103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:20 compute-0 python3.9[149105]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:43:20 compute-0 sudo[149103]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:21 compute-0 sudo[149112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:43:21 compute-0 sudo[149112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:43:21 compute-0 sudo[149112]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:21 compute-0 sudo[149155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:43:21 compute-0 sudo[149155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:43:21 compute-0 sudo[149155]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:21 compute-0 sudo[149305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkpwquxlatevqzfrfficuxkymetdmwxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405401.1366813-67-246906690222907/AnsiballZ_file.py'
Oct 02 11:43:21 compute-0 sudo[149305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:21.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:21 compute-0 python3.9[149307]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:43:21 compute-0 sudo[149305]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:21 compute-0 ceph-mon[73607]: pgmap v468: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:43:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:22.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:43:22 compute-0 sudo[149457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmushqtlmrxzsrqbznidvjfhnwkahovz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405401.7651896-67-83836687106483/AnsiballZ_file.py'
Oct 02 11:43:22 compute-0 sudo[149457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:22 compute-0 python3.9[149459]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:43:22 compute-0 sudo[149457]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:22 compute-0 ceph-mon[73607]: pgmap v469: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:22 compute-0 sudo[149610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-womwwrajwaxjdkmudfiaszhkzsqposro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405402.4902594-67-86136069878466/AnsiballZ_file.py'
Oct 02 11:43:22 compute-0 sudo[149610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:22 compute-0 python3.9[149612]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:43:23 compute-0 sudo[149610]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:23 compute-0 sudo[149762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csowecrtlvgzzbmlmyqpatlorswwnyzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405403.1821113-67-7105324713334/AnsiballZ_file.py'
Oct 02 11:43:23 compute-0 sudo[149762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:43:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:23.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:43:23 compute-0 python3.9[149764]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:43:23 compute-0 sudo[149762]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:24.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:24 compute-0 python3.9[149915]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:43:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:43:25 compute-0 sudo[150065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viafegcuumurthyitltrhagriupbukra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405404.9438925-199-191301826166340/AnsiballZ_seboolean.py'
Oct 02 11:43:25 compute-0 sudo[150065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:25.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:25 compute-0 python3.9[150067]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 02 11:43:25 compute-0 ceph-mon[73607]: pgmap v470: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:26.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:26 compute-0 sudo[150065]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:26 compute-0 ceph-mon[73607]: pgmap v471: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:27 compute-0 python3.9[150218]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:43:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:27.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:27 compute-0 python3.9[150339]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405406.4389617-223-75823725677770/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:43:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:43:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:28.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:43:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:43:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:29.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:43:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:30.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:43:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:30 compute-0 ceph-mon[73607]: pgmap v472: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:30 compute-0 python3.9[150491]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:43:31 compute-0 python3.9[150616]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405408.0910559-268-227754068164558/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:43:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:31.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:32 compute-0 sudo[150767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-darvyhvuojhfofvudktrioniidaolsgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405411.7383661-319-48910975756432/AnsiballZ_setup.py'
Oct 02 11:43:32 compute-0 sudo[150767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:32.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:32 compute-0 python3.9[150769]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:43:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:32 compute-0 sudo[150767]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:32 compute-0 sudo[150852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wselnxyzxoqjewgkalszszpcjzqheidi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405411.7383661-319-48910975756432/AnsiballZ_dnf.py'
Oct 02 11:43:32 compute-0 sudo[150852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:33 compute-0 python3.9[150854]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:43:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:33.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:34.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:34 compute-0 ceph-mon[73607]: pgmap v473: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:34 compute-0 sudo[150852]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:43:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:43:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:35.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:43:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:36.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:37 compute-0 ceph-mon[73607]: pgmap v474: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:37 compute-0 ceph-mon[73607]: pgmap v475: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:37.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:38.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:38 compute-0 ceph-mon[73607]: pgmap v476: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:39.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:39 compute-0 sshd-session[150883]: Invalid user riscv from 167.99.55.34 port 37850
Oct 02 11:43:39 compute-0 sudo[150885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:43:39 compute-0 sudo[150885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:43:39 compute-0 sudo[150885]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:39 compute-0 sshd-session[150883]: Connection closed by invalid user riscv 167.99.55.34 port 37850 [preauth]
Oct 02 11:43:39 compute-0 ovn_controller[148183]: 2025-10-02T11:43:39Z|00025|memory|INFO|16128 kB peak resident set size after 30.0 seconds
Oct 02 11:43:39 compute-0 ovn_controller[148183]: 2025-10-02T11:43:39Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Oct 02 11:43:39 compute-0 sudo[150916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:43:39 compute-0 podman[150908]: 2025-10-02 11:43:39.618609872 +0000 UTC m=+0.078109301 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 02 11:43:39 compute-0 sudo[150916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:43:39 compute-0 sudo[150916]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:39 compute-0 sudo[150961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:43:39 compute-0 sudo[150961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:43:39 compute-0 sudo[150961]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:39 compute-0 sudo[150986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:43:39 compute-0 sudo[150986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:43:39 compute-0 ceph-mon[73607]: pgmap v477: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:40.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:43:40 compute-0 sudo[150986]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:43:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 11:43:40 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 11:43:40 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:43:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:43:40 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:43:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Oct 02 11:43:40 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 11:43:40 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Oct 02 11:43:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:40 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 11:43:40 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:43:40 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:43:40 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 11:43:40 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Oct 02 11:43:40 compute-0 ceph-mon[73607]: pgmap v478: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:43:41 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:43:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:43:41 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:43:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:43:41 compute-0 sudo[151168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-burywgmusdmnixekgubsoeaohciahhfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405420.4235191-355-120529657549771/AnsiballZ_systemd.py'
Oct 02 11:43:41 compute-0 sudo[151168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:41 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:43:41 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 9fd4f478-7578-444a-a48d-922e6ce639cc does not exist
Oct 02 11:43:41 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 7097ff4f-d604-4630-9f43-443e1f994a27 does not exist
Oct 02 11:43:41 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev b7c326fb-a10e-49e9-8aef-7a36926b343a does not exist
Oct 02 11:43:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:43:41 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:43:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:43:41 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:43:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:43:41 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:43:41 compute-0 sudo[151171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:43:41 compute-0 sudo[151171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:43:41 compute-0 sudo[151171]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:41 compute-0 sudo[151196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:43:41 compute-0 sudo[151196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:43:41 compute-0 sudo[151196]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:41 compute-0 sudo[151221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:43:41 compute-0 sudo[151221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:43:41 compute-0 sudo[151221]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:41 compute-0 sudo[151225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:43:41 compute-0 sudo[151225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:43:41 compute-0 sudo[151225]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:41 compute-0 sudo[151270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:43:41 compute-0 sudo[151270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:43:41 compute-0 sudo[151270]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:41 compute-0 sudo[151279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:43:41 compute-0 sudo[151279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:43:41 compute-0 python3.9[151170]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 11:43:41 compute-0 sudo[151168]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:43:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:41.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:43:41 compute-0 podman[151386]: 2025-10-02 11:43:41.568509118 +0000 UTC m=+0.046139832 container create 52a45db0bda7299ac3f915fddaaec72574c0efdaef44f463b7fdcee1704f5637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 11:43:41 compute-0 systemd[1]: Started libpod-conmon-52a45db0bda7299ac3f915fddaaec72574c0efdaef44f463b7fdcee1704f5637.scope.
Oct 02 11:43:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:43:41 compute-0 podman[151386]: 2025-10-02 11:43:41.543161701 +0000 UTC m=+0.020792435 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:43:41 compute-0 podman[151386]: 2025-10-02 11:43:41.774663785 +0000 UTC m=+0.252294529 container init 52a45db0bda7299ac3f915fddaaec72574c0efdaef44f463b7fdcee1704f5637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:43:41 compute-0 podman[151386]: 2025-10-02 11:43:41.784896789 +0000 UTC m=+0.262527543 container start 52a45db0bda7299ac3f915fddaaec72574c0efdaef44f463b7fdcee1704f5637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_leakey, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 11:43:41 compute-0 distracted_leakey[151405]: 167 167
Oct 02 11:43:41 compute-0 systemd[1]: libpod-52a45db0bda7299ac3f915fddaaec72574c0efdaef44f463b7fdcee1704f5637.scope: Deactivated successfully.
Oct 02 11:43:41 compute-0 podman[151386]: 2025-10-02 11:43:41.84037003 +0000 UTC m=+0.318000764 container attach 52a45db0bda7299ac3f915fddaaec72574c0efdaef44f463b7fdcee1704f5637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_leakey, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:43:41 compute-0 podman[151386]: 2025-10-02 11:43:41.840734309 +0000 UTC m=+0.318365023 container died 52a45db0bda7299ac3f915fddaaec72574c0efdaef44f463b7fdcee1704f5637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_leakey, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 11:43:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:42.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:43:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:43:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:43:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:43:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:43:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:43:42 compute-0 python3.9[151543]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:43:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-de64bb119066703f79987a29e26dd3fbb8be826dfe62c2233dd6f08a599d8d21-merged.mount: Deactivated successfully.
Oct 02 11:43:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_11:43:42
Oct 02 11:43:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:43:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 11:43:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'images', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'default.rgw.meta', 'backups', 'vms', 'volumes']
Oct 02 11:43:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:43:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:43:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:43:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:43:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:43:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:43:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:43:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:43:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:43:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:43:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:43:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:43:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:43:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:43:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:43:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:43:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:43:42 compute-0 python3.9[151666]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405421.6406088-379-109048250472266/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:43:43 compute-0 podman[151386]: 2025-10-02 11:43:43.095211558 +0000 UTC m=+1.572842312 container remove 52a45db0bda7299ac3f915fddaaec72574c0efdaef44f463b7fdcee1704f5637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_leakey, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 11:43:43 compute-0 systemd[1]: libpod-conmon-52a45db0bda7299ac3f915fddaaec72574c0efdaef44f463b7fdcee1704f5637.scope: Deactivated successfully.
Oct 02 11:43:43 compute-0 ceph-mon[73607]: pgmap v479: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:43 compute-0 podman[151781]: 2025-10-02 11:43:43.277761973 +0000 UTC m=+0.026329253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:43:43 compute-0 podman[151781]: 2025-10-02 11:43:43.426681464 +0000 UTC m=+0.175248754 container create a5b03b7f8e50befd55248d11cafb5e21d0280624a9d2adc2784fd1adc00bc7d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gould, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:43:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:43:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:43.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:43:43 compute-0 python3.9[151837]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:43:43 compute-0 systemd[1]: Started libpod-conmon-a5b03b7f8e50befd55248d11cafb5e21d0280624a9d2adc2784fd1adc00bc7d0.scope.
Oct 02 11:43:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:43:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cf0e5efdda87123ada0b802d50a4c8650d2ca6ae24501d564969030240abe53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cf0e5efdda87123ada0b802d50a4c8650d2ca6ae24501d564969030240abe53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cf0e5efdda87123ada0b802d50a4c8650d2ca6ae24501d564969030240abe53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cf0e5efdda87123ada0b802d50a4c8650d2ca6ae24501d564969030240abe53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cf0e5efdda87123ada0b802d50a4c8650d2ca6ae24501d564969030240abe53/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:43 compute-0 podman[151781]: 2025-10-02 11:43:43.790660615 +0000 UTC m=+0.539227945 container init a5b03b7f8e50befd55248d11cafb5e21d0280624a9d2adc2784fd1adc00bc7d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 11:43:43 compute-0 podman[151781]: 2025-10-02 11:43:43.805367889 +0000 UTC m=+0.553935179 container start a5b03b7f8e50befd55248d11cafb5e21d0280624a9d2adc2784fd1adc00bc7d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gould, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:43:43 compute-0 podman[151781]: 2025-10-02 11:43:43.831396752 +0000 UTC m=+0.579964082 container attach a5b03b7f8e50befd55248d11cafb5e21d0280624a9d2adc2784fd1adc00bc7d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gould, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:43:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:43:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:44.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:43:44 compute-0 python3.9[151965]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405423.007971-379-18017473360760/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:43:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:44 compute-0 frosty_gould[151840]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:43:44 compute-0 frosty_gould[151840]: --> relative data size: 1.0
Oct 02 11:43:44 compute-0 frosty_gould[151840]: --> All data devices are unavailable
Oct 02 11:43:44 compute-0 systemd[1]: libpod-a5b03b7f8e50befd55248d11cafb5e21d0280624a9d2adc2784fd1adc00bc7d0.scope: Deactivated successfully.
Oct 02 11:43:44 compute-0 podman[151781]: 2025-10-02 11:43:44.630544723 +0000 UTC m=+1.379112003 container died a5b03b7f8e50befd55248d11cafb5e21d0280624a9d2adc2784fd1adc00bc7d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gould, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:43:45 compute-0 ceph-mon[73607]: pgmap v480: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cf0e5efdda87123ada0b802d50a4c8650d2ca6ae24501d564969030240abe53-merged.mount: Deactivated successfully.
Oct 02 11:43:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:43:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:45.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:45 compute-0 python3.9[152139]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:43:45 compute-0 podman[151781]: 2025-10-02 11:43:45.846399767 +0000 UTC m=+2.594967087 container remove a5b03b7f8e50befd55248d11cafb5e21d0280624a9d2adc2784fd1adc00bc7d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct 02 11:43:45 compute-0 sudo[151279]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:45 compute-0 systemd[1]: libpod-conmon-a5b03b7f8e50befd55248d11cafb5e21d0280624a9d2adc2784fd1adc00bc7d0.scope: Deactivated successfully.
Oct 02 11:43:45 compute-0 sudo[152216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:43:45 compute-0 sudo[152216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:43:45 compute-0 sudo[152216]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:46 compute-0 sudo[152276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:43:46 compute-0 sudo[152276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:43:46 compute-0 sudo[152276]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:46.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:46 compute-0 sudo[152311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:43:46 compute-0 sudo[152311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:43:46 compute-0 sudo[152311]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:46 compute-0 sudo[152336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 11:43:46 compute-0 sudo[152336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:43:46 compute-0 python3.9[152293]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405425.0102057-511-129463324553153/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:43:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:46 compute-0 podman[152450]: 2025-10-02 11:43:46.476076087 +0000 UTC m=+0.018603201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:43:46 compute-0 podman[152450]: 2025-10-02 11:43:46.753390354 +0000 UTC m=+0.295917448 container create 26ac49e437446aa3ca862eeff4368993feb54300bcdca42d370d44fac8217388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 11:43:46 compute-0 systemd[1]: Started libpod-conmon-26ac49e437446aa3ca862eeff4368993feb54300bcdca42d370d44fac8217388.scope.
Oct 02 11:43:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:43:46 compute-0 python3.9[152566]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:43:47 compute-0 ceph-mon[73607]: pgmap v481: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:47 compute-0 podman[152450]: 2025-10-02 11:43:47.22962922 +0000 UTC m=+0.772156354 container init 26ac49e437446aa3ca862eeff4368993feb54300bcdca42d370d44fac8217388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_darwin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 11:43:47 compute-0 podman[152450]: 2025-10-02 11:43:47.237761571 +0000 UTC m=+0.780288705 container start 26ac49e437446aa3ca862eeff4368993feb54300bcdca42d370d44fac8217388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_darwin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 11:43:47 compute-0 frosty_darwin[152569]: 167 167
Oct 02 11:43:47 compute-0 systemd[1]: libpod-26ac49e437446aa3ca862eeff4368993feb54300bcdca42d370d44fac8217388.scope: Deactivated successfully.
Oct 02 11:43:47 compute-0 podman[152450]: 2025-10-02 11:43:47.382194493 +0000 UTC m=+0.924721647 container attach 26ac49e437446aa3ca862eeff4368993feb54300bcdca42d370d44fac8217388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 11:43:47 compute-0 podman[152450]: 2025-10-02 11:43:47.382593593 +0000 UTC m=+0.925120697 container died 26ac49e437446aa3ca862eeff4368993feb54300bcdca42d370d44fac8217388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 11:43:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:47.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:47 compute-0 python3.9[152704]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405426.4578283-511-145803407001021/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:43:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdf6b5d14988df90cf84a0f6f8721a618652e4224455dd15b6bd59046a845e0e-merged.mount: Deactivated successfully.
Oct 02 11:43:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:48.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:48 compute-0 podman[152450]: 2025-10-02 11:43:48.244124606 +0000 UTC m=+1.786651740 container remove 26ac49e437446aa3ca862eeff4368993feb54300bcdca42d370d44fac8217388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 11:43:48 compute-0 systemd[1]: libpod-conmon-26ac49e437446aa3ca862eeff4368993feb54300bcdca42d370d44fac8217388.scope: Deactivated successfully.
Oct 02 11:43:48 compute-0 python3.9[152855]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:43:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:48 compute-0 podman[152865]: 2025-10-02 11:43:48.516885161 +0000 UTC m=+0.115758724 container create 538dd48d51d9fd13a90a9b956a0b25d65f59fcdfdf972f6985ce79934ccde42d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:43:48 compute-0 podman[152865]: 2025-10-02 11:43:48.423753527 +0000 UTC m=+0.022627110 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:43:48 compute-0 systemd[1]: Started libpod-conmon-538dd48d51d9fd13a90a9b956a0b25d65f59fcdfdf972f6985ce79934ccde42d.scope.
Oct 02 11:43:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:43:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eef4a62c6d85a86937483c7c2d6a3f5a6aaadd0f8a57644ee71275aafb4c1fcf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eef4a62c6d85a86937483c7c2d6a3f5a6aaadd0f8a57644ee71275aafb4c1fcf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eef4a62c6d85a86937483c7c2d6a3f5a6aaadd0f8a57644ee71275aafb4c1fcf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eef4a62c6d85a86937483c7c2d6a3f5a6aaadd0f8a57644ee71275aafb4c1fcf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:48 compute-0 podman[152865]: 2025-10-02 11:43:48.777651768 +0000 UTC m=+0.376525341 container init 538dd48d51d9fd13a90a9b956a0b25d65f59fcdfdf972f6985ce79934ccde42d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lamarr, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 11:43:48 compute-0 podman[152865]: 2025-10-02 11:43:48.78700891 +0000 UTC m=+0.385882473 container start 538dd48d51d9fd13a90a9b956a0b25d65f59fcdfdf972f6985ce79934ccde42d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lamarr, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 11:43:48 compute-0 podman[152865]: 2025-10-02 11:43:48.792159397 +0000 UTC m=+0.391033080 container attach 538dd48d51d9fd13a90a9b956a0b25d65f59fcdfdf972f6985ce79934ccde42d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lamarr, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:43:48 compute-0 sudo[153037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whtyjnwhcouxamobadknrtwyvindrpwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405428.6854768-625-111212287743731/AnsiballZ_file.py'
Oct 02 11:43:49 compute-0 sudo[153037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:49 compute-0 python3.9[153039]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:43:49 compute-0 sudo[153037]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:49.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]: {
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]:     "1": [
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]:         {
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]:             "devices": [
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]:                 "/dev/loop3"
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]:             ],
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]:             "lv_name": "ceph_lv0",
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]:             "lv_size": "7511998464",
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]:             "name": "ceph_lv0",
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]:             "tags": {
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]:                 "ceph.cluster_name": "ceph",
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]:                 "ceph.crush_device_class": "",
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]:                 "ceph.encrypted": "0",
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]:                 "ceph.osd_id": "1",
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]:                 "ceph.type": "block",
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]:                 "ceph.vdo": "0"
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]:             },
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]:             "type": "block",
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]:             "vg_name": "ceph_vg0"
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]:         }
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]:     ]
Oct 02 11:43:49 compute-0 eloquent_lamarr[152938]: }
Oct 02 11:43:49 compute-0 systemd[1]: libpod-538dd48d51d9fd13a90a9b956a0b25d65f59fcdfdf972f6985ce79934ccde42d.scope: Deactivated successfully.
Oct 02 11:43:49 compute-0 podman[152865]: 2025-10-02 11:43:49.620455959 +0000 UTC m=+1.219329522 container died 538dd48d51d9fd13a90a9b956a0b25d65f59fcdfdf972f6985ce79934ccde42d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:43:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-eef4a62c6d85a86937483c7c2d6a3f5a6aaadd0f8a57644ee71275aafb4c1fcf-merged.mount: Deactivated successfully.
Oct 02 11:43:49 compute-0 ceph-mon[73607]: pgmap v482: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:49 compute-0 podman[152865]: 2025-10-02 11:43:49.676946615 +0000 UTC m=+1.275820178 container remove 538dd48d51d9fd13a90a9b956a0b25d65f59fcdfdf972f6985ce79934ccde42d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 11:43:49 compute-0 systemd[1]: libpod-conmon-538dd48d51d9fd13a90a9b956a0b25d65f59fcdfdf972f6985ce79934ccde42d.scope: Deactivated successfully.
Oct 02 11:43:49 compute-0 sudo[152336]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:49 compute-0 sudo[153206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkhthivabtbhatnrdmdsjszixqmrwapf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405429.4192688-649-230237179547764/AnsiballZ_stat.py'
Oct 02 11:43:49 compute-0 sudo[153206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:49 compute-0 sudo[153208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:43:49 compute-0 sudo[153208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:43:49 compute-0 sudo[153208]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:49 compute-0 sudo[153234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:43:49 compute-0 sudo[153234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:43:49 compute-0 sudo[153234]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:49 compute-0 sudo[153259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:43:49 compute-0 sudo[153259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:43:49 compute-0 sudo[153259]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:49 compute-0 python3.9[153213]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:43:49 compute-0 sudo[153284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 11:43:49 compute-0 sudo[153284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:43:49 compute-0 sudo[153206]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:50.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:50 compute-0 sudo[153410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bebezdabxwrwydcklilwrojjmcxqfpqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405429.4192688-649-230237179547764/AnsiballZ_file.py'
Oct 02 11:43:50 compute-0 sudo[153410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:43:50 compute-0 podman[153426]: 2025-10-02 11:43:50.274152763 +0000 UTC m=+0.051623778 container create ca76445935dbc4dc5bf83d5de1f16c5c7e02dbdd0e2c3f08ff4d95b95e8c423f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ride, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 11:43:50 compute-0 systemd[1]: Started libpod-conmon-ca76445935dbc4dc5bf83d5de1f16c5c7e02dbdd0e2c3f08ff4d95b95e8c423f.scope.
Oct 02 11:43:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:43:50 compute-0 podman[153426]: 2025-10-02 11:43:50.253140913 +0000 UTC m=+0.030612008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:43:50 compute-0 podman[153426]: 2025-10-02 11:43:50.358677542 +0000 UTC m=+0.136148557 container init ca76445935dbc4dc5bf83d5de1f16c5c7e02dbdd0e2c3f08ff4d95b95e8c423f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 11:43:50 compute-0 podman[153426]: 2025-10-02 11:43:50.36870428 +0000 UTC m=+0.146175285 container start ca76445935dbc4dc5bf83d5de1f16c5c7e02dbdd0e2c3f08ff4d95b95e8c423f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:43:50 compute-0 podman[153426]: 2025-10-02 11:43:50.371781236 +0000 UTC m=+0.149252261 container attach ca76445935dbc4dc5bf83d5de1f16c5c7e02dbdd0e2c3f08ff4d95b95e8c423f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 11:43:50 compute-0 festive_ride[153443]: 167 167
Oct 02 11:43:50 compute-0 systemd[1]: libpod-ca76445935dbc4dc5bf83d5de1f16c5c7e02dbdd0e2c3f08ff4d95b95e8c423f.scope: Deactivated successfully.
Oct 02 11:43:50 compute-0 podman[153426]: 2025-10-02 11:43:50.377372604 +0000 UTC m=+0.154843629 container died ca76445935dbc4dc5bf83d5de1f16c5c7e02dbdd0e2c3f08ff4d95b95e8c423f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ride, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Oct 02 11:43:50 compute-0 python3.9[153420]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:43:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f7693a11c6fa45d84f9f7170a221c64ef02d84902fb5f68451feb9c527abe5f-merged.mount: Deactivated successfully.
Oct 02 11:43:50 compute-0 podman[153426]: 2025-10-02 11:43:50.41195893 +0000 UTC m=+0.189429945 container remove ca76445935dbc4dc5bf83d5de1f16c5c7e02dbdd0e2c3f08ff4d95b95e8c423f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ride, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:43:50 compute-0 sudo[153410]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:50 compute-0 systemd[1]: libpod-conmon-ca76445935dbc4dc5bf83d5de1f16c5c7e02dbdd0e2c3f08ff4d95b95e8c423f.scope: Deactivated successfully.
Oct 02 11:43:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:50 compute-0 podman[153491]: 2025-10-02 11:43:50.570462159 +0000 UTC m=+0.038364129 container create 34a0e9117ef4ada4de8dcce30ab0afb1f09d90cace208e5cff4cca0769a78a38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_shaw, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:43:50 compute-0 systemd[1]: Started libpod-conmon-34a0e9117ef4ada4de8dcce30ab0afb1f09d90cace208e5cff4cca0769a78a38.scope.
Oct 02 11:43:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d46078bce747e68d31f7c376763312e5e196bb5d67c925faee5b754976629688/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d46078bce747e68d31f7c376763312e5e196bb5d67c925faee5b754976629688/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d46078bce747e68d31f7c376763312e5e196bb5d67c925faee5b754976629688/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d46078bce747e68d31f7c376763312e5e196bb5d67c925faee5b754976629688/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:50 compute-0 podman[153491]: 2025-10-02 11:43:50.643220008 +0000 UTC m=+0.111121988 container init 34a0e9117ef4ada4de8dcce30ab0afb1f09d90cace208e5cff4cca0769a78a38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_shaw, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 11:43:50 compute-0 podman[153491]: 2025-10-02 11:43:50.552723731 +0000 UTC m=+0.020625721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:43:50 compute-0 podman[153491]: 2025-10-02 11:43:50.65498738 +0000 UTC m=+0.122889340 container start 34a0e9117ef4ada4de8dcce30ab0afb1f09d90cace208e5cff4cca0769a78a38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_shaw, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:43:50 compute-0 podman[153491]: 2025-10-02 11:43:50.659611684 +0000 UTC m=+0.127513664 container attach 34a0e9117ef4ada4de8dcce30ab0afb1f09d90cace208e5cff4cca0769a78a38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct 02 11:43:50 compute-0 ceph-mon[73607]: pgmap v483: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:43:50.724276) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405430724357, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1773, "num_deletes": 252, "total_data_size": 3311004, "memory_usage": 3364672, "flush_reason": "Manual Compaction"}
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405430744940, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3253241, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10494, "largest_seqno": 12266, "table_properties": {"data_size": 3245113, "index_size": 5007, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16060, "raw_average_key_size": 19, "raw_value_size": 3228885, "raw_average_value_size": 3966, "num_data_blocks": 224, "num_entries": 814, "num_filter_entries": 814, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405240, "oldest_key_time": 1759405240, "file_creation_time": 1759405430, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 20926 microseconds, and 5967 cpu microseconds.
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:43:50.745223) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3253241 bytes OK
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:43:50.745309) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:43:50.746745) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:43:50.746760) EVENT_LOG_v1 {"time_micros": 1759405430746755, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:43:50.746775) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3303765, prev total WAL file size 3303765, number of live WAL files 2.
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:43:50.748023) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3176KB)], [26(7326KB)]
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405430748048, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 10755325, "oldest_snapshot_seqno": -1}
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3947 keys, 8581585 bytes, temperature: kUnknown
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405430820895, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8581585, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8552101, "index_size": 18494, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9925, "raw_key_size": 95680, "raw_average_key_size": 24, "raw_value_size": 8477716, "raw_average_value_size": 2147, "num_data_blocks": 802, "num_entries": 3947, "num_filter_entries": 3947, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759405430, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:43:50.821890) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8581585 bytes
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:43:50.823673) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 146.0 rd, 116.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.2 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(5.9) write-amplify(2.6) OK, records in: 4469, records dropped: 522 output_compression: NoCompression
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:43:50.823693) EVENT_LOG_v1 {"time_micros": 1759405430823684, "job": 10, "event": "compaction_finished", "compaction_time_micros": 73652, "compaction_time_cpu_micros": 17836, "output_level": 6, "num_output_files": 1, "total_output_size": 8581585, "num_input_records": 4469, "num_output_records": 3947, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405430824712, "job": 10, "event": "table_file_deletion", "file_number": 28}
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405430826260, "job": 10, "event": "table_file_deletion", "file_number": 26}
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:43:50.747956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:43:50.826468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:43:50.826474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:43:50.826476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:43:50.826479) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:43:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:43:50.826481) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:43:50 compute-0 sudo[153638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chiuhreynzxfjsgwvonpseplhvbmmame ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405430.5719275-649-115391389769194/AnsiballZ_stat.py'
Oct 02 11:43:50 compute-0 sudo[153638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:51 compute-0 python3.9[153640]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:43:51 compute-0 sudo[153638]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:51 compute-0 sudo[153716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwwtxoljdaisjpekdseknxrjdwwesipc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405430.5719275-649-115391389769194/AnsiballZ_file.py'
Oct 02 11:43:51 compute-0 sudo[153716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:51 compute-0 python3.9[153718]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:43:51 compute-0 sudo[153716]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:51 compute-0 amazing_shaw[153550]: {
Oct 02 11:43:51 compute-0 amazing_shaw[153550]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 11:43:51 compute-0 amazing_shaw[153550]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:43:51 compute-0 amazing_shaw[153550]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:43:51 compute-0 amazing_shaw[153550]:         "osd_id": 1,
Oct 02 11:43:51 compute-0 amazing_shaw[153550]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:43:51 compute-0 amazing_shaw[153550]:         "type": "bluestore"
Oct 02 11:43:51 compute-0 amazing_shaw[153550]:     }
Oct 02 11:43:51 compute-0 amazing_shaw[153550]: }
Oct 02 11:43:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:43:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:51.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:43:51 compute-0 systemd[1]: libpod-34a0e9117ef4ada4de8dcce30ab0afb1f09d90cace208e5cff4cca0769a78a38.scope: Deactivated successfully.
Oct 02 11:43:51 compute-0 podman[153491]: 2025-10-02 11:43:51.525559077 +0000 UTC m=+0.993461067 container died 34a0e9117ef4ada4de8dcce30ab0afb1f09d90cace208e5cff4cca0769a78a38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 11:43:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-d46078bce747e68d31f7c376763312e5e196bb5d67c925faee5b754976629688-merged.mount: Deactivated successfully.
Oct 02 11:43:51 compute-0 podman[153491]: 2025-10-02 11:43:51.597680969 +0000 UTC m=+1.065582939 container remove 34a0e9117ef4ada4de8dcce30ab0afb1f09d90cace208e5cff4cca0769a78a38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_shaw, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:43:51 compute-0 systemd[1]: libpod-conmon-34a0e9117ef4ada4de8dcce30ab0afb1f09d90cace208e5cff4cca0769a78a38.scope: Deactivated successfully.
Oct 02 11:43:51 compute-0 sudo[153284]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:43:51 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:43:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:43:52 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:43:52 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 52956528-40f6-4d34-aca3-7f22574769e8 does not exist
Oct 02 11:43:52 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev b11f6fdb-f157-447c-bf68-155cc0e2f13f does not exist
Oct 02 11:43:52 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 02f16638-22ce-46ee-afe6-a5fe1bd318ee does not exist
Oct 02 11:43:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:52.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:52 compute-0 sudo[153772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:43:52 compute-0 sudo[153772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:43:52 compute-0 sudo[153772]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:52 compute-0 sudo[153820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:43:52 compute-0 sudo[153820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:43:52 compute-0 sudo[153820]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:52 compute-0 sudo[153948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gffokpdswzzreynsyrpwzcahgyvbsthc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405432.0959406-718-191604891047358/AnsiballZ_file.py'
Oct 02 11:43:52 compute-0 sudo[153948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:52 compute-0 python3.9[153950]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:43:52 compute-0 sudo[153948]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:52 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:43:52 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:43:52 compute-0 ceph-mon[73607]: pgmap v484: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:53 compute-0 sudo[154100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfbsjbjxjholaricldrzilwljkzofrav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405432.8389134-742-205545298604696/AnsiballZ_stat.py'
Oct 02 11:43:53 compute-0 sudo[154100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:53 compute-0 python3.9[154102]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:43:53 compute-0 sudo[154100]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:53.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:53 compute-0 sudo[154178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrbebltltxbtehbxwwceumormtctevwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405432.8389134-742-205545298604696/AnsiballZ_file.py'
Oct 02 11:43:53 compute-0 sudo[154178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:53 compute-0 python3.9[154180]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:43:53 compute-0 sudo[154178]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:43:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:43:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:43:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:43:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:43:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:43:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:43:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:43:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:43:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:43:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:43:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:43:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:43:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:43:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:43:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:43:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 11:43:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:43:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:43:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:43:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:43:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:43:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:43:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:54.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:54 compute-0 sudo[154331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wadixwufaskndvdnvlcyvcpwcdfghevf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405434.1064534-778-13304569238991/AnsiballZ_stat.py'
Oct 02 11:43:54 compute-0 sudo[154331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:54 compute-0 python3.9[154333]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:43:54 compute-0 sudo[154331]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:54 compute-0 sudo[154409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqplyckkkcgvvzlnrocfcectqukiyvjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405434.1064534-778-13304569238991/AnsiballZ_file.py'
Oct 02 11:43:54 compute-0 sudo[154409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:55 compute-0 python3.9[154411]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:43:55 compute-0 sudo[154409]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:43:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:43:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:55.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:43:55 compute-0 ceph-mon[73607]: pgmap v485: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:55 compute-0 sudo[154561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqngagrjqgtylhcrqemfvlweydxbgxpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405435.3197386-814-105443929446648/AnsiballZ_systemd.py'
Oct 02 11:43:55 compute-0 sudo[154561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:56 compute-0 python3.9[154563]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:43:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:56.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:56 compute-0 systemd[1]: Reloading.
Oct 02 11:43:56 compute-0 systemd-rc-local-generator[154589]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:43:56 compute-0 systemd-sysv-generator[154593]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:43:56 compute-0 sudo[154561]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:57 compute-0 sudo[154750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oupkmyhfeogikhhgmvxyrholaiqpgddc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405436.7723155-838-219374254887137/AnsiballZ_stat.py'
Oct 02 11:43:57 compute-0 sudo[154750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:57 compute-0 python3.9[154752]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:43:57 compute-0 sudo[154750]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:43:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:57.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:43:57 compute-0 sudo[154828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmtslqbldwpxbzvvspgfibsyhyfwsiki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405436.7723155-838-219374254887137/AnsiballZ_file.py'
Oct 02 11:43:57 compute-0 sudo[154828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:57 compute-0 ceph-mon[73607]: pgmap v486: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:57 compute-0 python3.9[154830]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:43:57 compute-0 sudo[154828]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:43:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:58.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:43:58 compute-0 sudo[154981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmcbvlqzyrmsdcyvfdgkmnhuefznrjiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405438.2038283-874-173963458463383/AnsiballZ_stat.py'
Oct 02 11:43:58 compute-0 sudo[154981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:58 compute-0 python3.9[154983]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:43:58 compute-0 sudo[154981]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:58 compute-0 sudo[155059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcurokjiuxrqasvrvjkcedxebkxduysv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405438.2038283-874-173963458463383/AnsiballZ_file.py'
Oct 02 11:43:58 compute-0 sudo[155059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:59 compute-0 sshd-session[152986]: banner exchange: Connection from 61.41.4.17 port 39079: invalid format
Oct 02 11:43:59 compute-0 python3.9[155061]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:43:59 compute-0 sudo[155059]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:43:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:43:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:59.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:43:59 compute-0 ceph-mon[73607]: pgmap v487: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:43:59 compute-0 sudo[155211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzamoitnfluniwsngwilxrbxiugbfpnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405439.381553-910-50495484897866/AnsiballZ_systemd.py'
Oct 02 11:43:59 compute-0 sudo[155211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:59 compute-0 python3.9[155213]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:43:59 compute-0 systemd[1]: Reloading.
Oct 02 11:44:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:00.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:00 compute-0 systemd-rc-local-generator[155237]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:44:00 compute-0 systemd-sysv-generator[155242]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:44:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:44:00 compute-0 systemd[1]: Starting Create netns directory...
Oct 02 11:44:00 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 11:44:00 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 11:44:00 compute-0 systemd[1]: Finished Create netns directory.
Oct 02 11:44:00 compute-0 sudo[155211]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:01 compute-0 sudo[155405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kehbbgnssymncaqbwbzqpldomkkyilxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405440.7902606-940-197249177966654/AnsiballZ_file.py'
Oct 02 11:44:01 compute-0 sudo[155405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:01 compute-0 python3.9[155407]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:44:01 compute-0 sudo[155405]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:01 compute-0 sudo[155432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:01 compute-0 sudo[155432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:01 compute-0 sudo[155432]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:01 compute-0 sudo[155457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:01 compute-0 sudo[155457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:01 compute-0 sudo[155457]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:44:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:01.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:44:01 compute-0 ceph-mon[73607]: pgmap v488: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:01 compute-0 sudo[155607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iajyvypywrcdrobqjimutsxfvfssyvdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405441.7041156-964-110870748867938/AnsiballZ_stat.py'
Oct 02 11:44:01 compute-0 sudo[155607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:44:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:02.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:44:02 compute-0 python3.9[155609]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:44:02 compute-0 sudo[155607]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:02 compute-0 sudo[155731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idvyiultcltljsnnhlgpsemtbtyjlhet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405441.7041156-964-110870748867938/AnsiballZ_copy.py'
Oct 02 11:44:02 compute-0 sudo[155731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:02 compute-0 python3.9[155733]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405441.7041156-964-110870748867938/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:44:02 compute-0 sudo[155731]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:03.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:03 compute-0 sudo[155883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nabefirgvrxxnipjjhfpjsgrasqwjbdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405443.294621-1015-139280160944768/AnsiballZ_file.py'
Oct 02 11:44:03 compute-0 sudo[155883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:03 compute-0 ceph-mon[73607]: pgmap v489: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:03 compute-0 python3.9[155885]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:44:03 compute-0 sudo[155883]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:44:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:04.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:44:04 compute-0 sudo[156036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwuynolutmhanjidpwfknijbvzflyrar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405444.09357-1039-262605494690953/AnsiballZ_stat.py'
Oct 02 11:44:04 compute-0 sudo[156036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:04 compute-0 python3.9[156038]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:44:04 compute-0 sudo[156036]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:04 compute-0 ceph-mon[73607]: pgmap v490: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:04 compute-0 sudo[156159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkwfuejwtuhsjcgtrqlrbgbnuwfcnqrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405444.09357-1039-262605494690953/AnsiballZ_copy.py'
Oct 02 11:44:04 compute-0 sudo[156159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:05 compute-0 python3.9[156161]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405444.09357-1039-262605494690953/.source.json _original_basename=.zs0o8hm8 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:44:05 compute-0 sudo[156159]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:44:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:05.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:05 compute-0 sudo[156311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kumnnxlnepzpsthrkmpysdpldkmhlxxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405445.4951057-1084-166896227916668/AnsiballZ_file.py'
Oct 02 11:44:05 compute-0 sudo[156311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:05 compute-0 python3.9[156313]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:44:06 compute-0 sudo[156311]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:06.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:06 compute-0 sudo[156464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjfxsqypaskftxmmguxgjdgoqepohyfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405446.2783284-1108-14894653445885/AnsiballZ_stat.py'
Oct 02 11:44:06 compute-0 sudo[156464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:06 compute-0 sudo[156464]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:07 compute-0 sudo[156587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akwveqdtssuprzemoeowdwnzinrvodnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405446.2783284-1108-14894653445885/AnsiballZ_copy.py'
Oct 02 11:44:07 compute-0 sudo[156587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:07 compute-0 sudo[156587]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:07.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:07 compute-0 ceph-mon[73607]: pgmap v491: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:08.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:08 compute-0 sudo[156740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctsmyzzwewvtwngusyswwgzpkrmanbua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405447.8975248-1159-236534489366065/AnsiballZ_container_config_data.py'
Oct 02 11:44:08 compute-0 sudo[156740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:08 compute-0 python3.9[156742]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct 02 11:44:08 compute-0 sudo[156740]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:08 compute-0 ceph-mon[73607]: pgmap v492: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:09 compute-0 sudo[156892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljugmaguzbpoxprlahvtvweubrpgfwhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405448.836539-1186-110189424505565/AnsiballZ_container_config_hash.py'
Oct 02 11:44:09 compute-0 sudo[156892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:09 compute-0 python3.9[156894]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 11:44:09 compute-0 sudo[156892]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:44:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:09.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:44:09 compute-0 podman[156969]: 2025-10-02 11:44:09.936561628 +0000 UTC m=+0.069741566 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 11:44:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:10.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:44:10 compute-0 sudo[157073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sualyjtlfwtrndhzdawkfgluszsyjjek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405449.78465-1213-241749148430762/AnsiballZ_podman_container_info.py'
Oct 02 11:44:10 compute-0 sudo[157073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:10 compute-0 python3.9[157075]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 02 11:44:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:10 compute-0 sudo[157073]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:11.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:11 compute-0 ceph-mon[73607]: pgmap v493: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:11 compute-0 sudo[157252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhivbgvxiorhptreidqlywrkbqhnfoki ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759405451.4274309-1252-72548169779779/AnsiballZ_edpm_container_manage.py'
Oct 02 11:44:11 compute-0 sudo[157252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:44:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:12.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:44:12 compute-0 python3[157254]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 11:44:12 compute-0 sshd-session[156942]: Invalid user wqmarlduiqkmgs from 61.41.4.17 port 49307
Oct 02 11:44:12 compute-0 sshd-session[156942]: fatal: userauth_pubkey: parse publickey packet: incomplete message [preauth]
Oct 02 11:44:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:44:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:44:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:44:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:44:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:44:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:44:12 compute-0 ceph-mon[73607]: pgmap v494: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:13.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:14.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:44:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:15.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:15 compute-0 ceph-mon[73607]: pgmap v495: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:16.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:16 compute-0 ceph-mon[73607]: pgmap v496: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:17.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:44:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:18.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:44:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:18 compute-0 ceph-mon[73607]: pgmap v497: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:44:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:19.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:44:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:20.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:44:20 compute-0 podman[157267]: 2025-10-02 11:44:20.37455026 +0000 UTC m=+8.153171565 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 11:44:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:20 compute-0 podman[157390]: 2025-10-02 11:44:20.51568747 +0000 UTC m=+0.053246717 container create 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 11:44:20 compute-0 podman[157390]: 2025-10-02 11:44:20.485723069 +0000 UTC m=+0.023282346 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 11:44:20 compute-0 python3[157254]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 11:44:20 compute-0 sudo[157252]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:21 compute-0 sudo[157578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtvttrtcdqxtlmfazrnrokhfznctrlgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405460.759647-1276-168799124560263/AnsiballZ_stat.py'
Oct 02 11:44:21 compute-0 sudo[157578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:21 compute-0 python3.9[157580]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:44:21 compute-0 sudo[157578]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:21 compute-0 sudo[157607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:21 compute-0 sudo[157607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:21 compute-0 sudo[157607]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:21 compute-0 ceph-mon[73607]: pgmap v498: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:21 compute-0 sudo[157632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:44:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:21.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:44:21 compute-0 sudo[157632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:21 compute-0 sudo[157632]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:21 compute-0 sudo[157782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oublpjepwitcczbqrianyjaqjqrfkner ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405461.6564536-1303-16360010529739/AnsiballZ_file.py'
Oct 02 11:44:21 compute-0 sudo[157782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:22.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:22 compute-0 python3.9[157784]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:44:22 compute-0 sudo[157782]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:22 compute-0 sudo[157859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xykffzscihuprivrgcwpqbwkgjcjxwwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405461.6564536-1303-16360010529739/AnsiballZ_stat.py'
Oct 02 11:44:22 compute-0 sudo[157859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:22 compute-0 python3.9[157861]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:44:22 compute-0 sudo[157859]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:23 compute-0 sudo[158010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlrxhxwdzqoshwtwrkqqexppysqrdzch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405462.6277146-1303-46148040150083/AnsiballZ_copy.py'
Oct 02 11:44:23 compute-0 sudo[158010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:23 compute-0 python3.9[158012]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759405462.6277146-1303-46148040150083/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:44:23 compute-0 sudo[158010]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:23 compute-0 sudo[158086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rewscylwjppotkzljfrmqnbnlviubpbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405462.6277146-1303-46148040150083/AnsiballZ_systemd.py'
Oct 02 11:44:23 compute-0 sudo[158086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:23.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:23 compute-0 ceph-mon[73607]: pgmap v499: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:23 compute-0 python3.9[158088]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 11:44:23 compute-0 systemd[1]: Reloading.
Oct 02 11:44:23 compute-0 systemd-rc-local-generator[158113]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:44:23 compute-0 systemd-sysv-generator[158117]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:44:24 compute-0 sudo[158086]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:24.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:24 compute-0 sudo[158197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzrgoxkokowgbbydfakqohvzeoskoevx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405462.6277146-1303-46148040150083/AnsiballZ_systemd.py'
Oct 02 11:44:24 compute-0 sudo[158197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:24 compute-0 python3.9[158200]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:44:24 compute-0 systemd[1]: Reloading.
Oct 02 11:44:24 compute-0 systemd-rc-local-generator[158228]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:44:24 compute-0 systemd-sysv-generator[158233]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:44:24 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Oct 02 11:44:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d35ff658d7f37a29ec93809fbc7c274e8a79e3422fcc0ca9a438953c97c5f3ee/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d35ff658d7f37a29ec93809fbc7c274e8a79e3422fcc0ca9a438953c97c5f3ee/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:25 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca.
Oct 02 11:44:25 compute-0 podman[158241]: 2025-10-02 11:44:25.024685865 +0000 UTC m=+0.130205811 container init 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Oct 02 11:44:25 compute-0 ovn_metadata_agent[158256]: + sudo -E kolla_set_configs
Oct 02 11:44:25 compute-0 podman[158241]: 2025-10-02 11:44:25.064069538 +0000 UTC m=+0.169589424 container start 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 11:44:25 compute-0 edpm-start-podman-container[158241]: ovn_metadata_agent
Oct 02 11:44:25 compute-0 ovn_metadata_agent[158256]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 11:44:25 compute-0 ovn_metadata_agent[158256]: INFO:__main__:Validating config file
Oct 02 11:44:25 compute-0 ovn_metadata_agent[158256]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 11:44:25 compute-0 ovn_metadata_agent[158256]: INFO:__main__:Copying service configuration files
Oct 02 11:44:25 compute-0 ovn_metadata_agent[158256]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 02 11:44:25 compute-0 ovn_metadata_agent[158256]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 02 11:44:25 compute-0 ovn_metadata_agent[158256]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 02 11:44:25 compute-0 ovn_metadata_agent[158256]: INFO:__main__:Writing out command to execute
Oct 02 11:44:25 compute-0 ovn_metadata_agent[158256]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 02 11:44:25 compute-0 ovn_metadata_agent[158256]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 02 11:44:25 compute-0 ovn_metadata_agent[158256]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 02 11:44:25 compute-0 ovn_metadata_agent[158256]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 02 11:44:25 compute-0 ovn_metadata_agent[158256]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 02 11:44:25 compute-0 ovn_metadata_agent[158256]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 02 11:44:25 compute-0 ovn_metadata_agent[158256]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct 02 11:44:25 compute-0 ovn_metadata_agent[158256]: ++ cat /run_command
Oct 02 11:44:25 compute-0 ovn_metadata_agent[158256]: + CMD=neutron-ovn-metadata-agent
Oct 02 11:44:25 compute-0 ovn_metadata_agent[158256]: + ARGS=
Oct 02 11:44:25 compute-0 ovn_metadata_agent[158256]: + sudo kolla_copy_cacerts
Oct 02 11:44:25 compute-0 edpm-start-podman-container[158240]: Creating additional drop-in dependency for "ovn_metadata_agent" (3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca)
Oct 02 11:44:25 compute-0 ovn_metadata_agent[158256]: + [[ ! -n '' ]]
Oct 02 11:44:25 compute-0 ovn_metadata_agent[158256]: + . kolla_extend_start
Oct 02 11:44:25 compute-0 ovn_metadata_agent[158256]: Running command: 'neutron-ovn-metadata-agent'
Oct 02 11:44:25 compute-0 ovn_metadata_agent[158256]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct 02 11:44:25 compute-0 ovn_metadata_agent[158256]: + umask 0022
Oct 02 11:44:25 compute-0 ovn_metadata_agent[158256]: + exec neutron-ovn-metadata-agent
Oct 02 11:44:25 compute-0 systemd[1]: Reloading.
Oct 02 11:44:25 compute-0 podman[158263]: 2025-10-02 11:44:25.158858472 +0000 UTC m=+0.085907465 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 11:44:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:44:25 compute-0 systemd-rc-local-generator[158330]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:44:25 compute-0 systemd-sysv-generator[158333]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:44:25 compute-0 systemd[1]: Started ovn_metadata_agent container.
Oct 02 11:44:25 compute-0 sudo[158197]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:25.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:25 compute-0 ceph-mon[73607]: pgmap v500: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:26.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:26 compute-0 sshd-session[148795]: Connection closed by 192.168.122.30 port 49822
Oct 02 11:44:26 compute-0 sshd-session[148792]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:44:26 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Oct 02 11:44:26 compute-0 systemd[1]: session-49.scope: Consumed 53.861s CPU time.
Oct 02 11:44:26 compute-0 systemd-logind[789]: Session 49 logged out. Waiting for processes to exit.
Oct 02 11:44:26 compute-0 systemd-logind[789]: Removed session 49.
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.864 158261 INFO neutron.common.config [-] Logging enabled!
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.864 158261 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.864 158261 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.865 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.865 158261 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.865 158261 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.865 158261 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.865 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.865 158261 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.866 158261 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.866 158261 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.866 158261 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.866 158261 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.866 158261 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.866 158261 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.866 158261 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.866 158261 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.866 158261 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.867 158261 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.867 158261 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.867 158261 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.867 158261 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.867 158261 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.867 158261 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.867 158261 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.867 158261 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.867 158261 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.867 158261 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.868 158261 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.868 158261 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.868 158261 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.868 158261 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.868 158261 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.868 158261 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.868 158261 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.868 158261 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.869 158261 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.869 158261 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.869 158261 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.869 158261 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.869 158261 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.869 158261 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.869 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.869 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.869 158261 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.869 158261 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.870 158261 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.870 158261 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.870 158261 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.870 158261 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.870 158261 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.870 158261 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.870 158261 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.870 158261 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.870 158261 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.870 158261 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.871 158261 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.871 158261 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.871 158261 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.871 158261 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.871 158261 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.871 158261 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.871 158261 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.871 158261 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.871 158261 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.872 158261 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.872 158261 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.872 158261 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.872 158261 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.872 158261 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.872 158261 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.872 158261 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.872 158261 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.873 158261 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.873 158261 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.873 158261 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.873 158261 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.873 158261 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.873 158261 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.873 158261 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.873 158261 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.873 158261 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.874 158261 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.874 158261 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.874 158261 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.874 158261 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.874 158261 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.874 158261 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.874 158261 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.874 158261 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.874 158261 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.875 158261 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.875 158261 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.875 158261 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.875 158261 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.875 158261 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.875 158261 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.875 158261 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.875 158261 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.875 158261 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.875 158261 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.876 158261 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.876 158261 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.876 158261 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.876 158261 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.876 158261 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.876 158261 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.876 158261 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.876 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.876 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.877 158261 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.877 158261 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.877 158261 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.877 158261 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.877 158261 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.877 158261 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.877 158261 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.877 158261 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.878 158261 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.878 158261 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.878 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.878 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.878 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.878 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.878 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.878 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.878 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.879 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.879 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.879 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.879 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.879 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.879 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.879 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.879 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.880 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.880 158261 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.880 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.880 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.880 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.880 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.880 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.880 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.880 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.881 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.881 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.881 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.881 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.881 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.881 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.881 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.881 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.881 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.881 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.882 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.882 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.882 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.882 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.882 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.882 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.882 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.882 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.882 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.883 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.883 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.883 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.883 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.883 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.883 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.883 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.883 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.883 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.884 158261 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.884 158261 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.884 158261 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.884 158261 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.884 158261 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.884 158261 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.884 158261 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.884 158261 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.884 158261 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.885 158261 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.885 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.885 158261 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.885 158261 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.885 158261 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.885 158261 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.885 158261 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.885 158261 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.885 158261 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.885 158261 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.886 158261 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.886 158261 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.886 158261 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.886 158261 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.886 158261 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.886 158261 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.886 158261 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.886 158261 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.886 158261 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.887 158261 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.887 158261 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.887 158261 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.887 158261 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.887 158261 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.887 158261 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.887 158261 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.887 158261 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.887 158261 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.888 158261 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.888 158261 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.888 158261 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.888 158261 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.888 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.888 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.888 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.888 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.889 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.889 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.889 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.889 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.889 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.889 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.889 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.889 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.889 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.889 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.890 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.890 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.890 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.890 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.890 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.890 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.890 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.890 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.890 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.890 158261 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.891 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.891 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.891 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.891 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.891 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.891 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.891 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.891 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.891 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.892 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.892 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.892 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.892 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.892 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.892 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.892 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.892 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.892 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.893 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.893 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.893 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.893 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.893 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.893 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.893 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.893 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.893 158261 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.894 158261 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.894 158261 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.894 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.894 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.894 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.894 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.894 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.894 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.894 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.895 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.895 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.895 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.895 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.895 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.895 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.895 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.895 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.895 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.896 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.896 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.896 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.896 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.896 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.896 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.896 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.896 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.896 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.897 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.897 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.897 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.897 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.897 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.897 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.897 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.897 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.897 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.897 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.898 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.898 158261 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.898 158261 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.906 158261 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.907 158261 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.907 158261 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.907 158261 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.907 158261 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.919 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 6718a9ec-e13c-42f0-978a-6c44c48d0d54 (UUID: 6718a9ec-e13c-42f0-978a-6c44c48d0d54) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.943 158261 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.943 158261 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.943 158261 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.944 158261 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.946 158261 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.952 158261 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.957 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '6718a9ec-e13c-42f0-978a-6c44c48d0d54'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], external_ids={}, name=6718a9ec-e13c-42f0-978a-6c44c48d0d54, nb_cfg_timestamp=1759405397615, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.958 158261 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f6f7616af40>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.958 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.958 158261 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.959 158261 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.959 158261 INFO oslo_service.service [-] Starting 1 workers
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.963 158261 DEBUG oslo_service.service [-] Started child 158368 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.967 158261 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpkju6gcx2/privsep.sock']
Oct 02 11:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:26.969 158368 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-176354'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Oct 02 11:44:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:27.004 158368 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 02 11:44:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:27.005 158368 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 02 11:44:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:27.005 158368 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 11:44:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:27.011 158368 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 02 11:44:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:27.020 158368 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 02 11:44:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:27.028 158368 INFO eventlet.wsgi.server [-] (158368) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Oct 02 11:44:27 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct 02 11:44:27 compute-0 ceph-mon[73607]: pgmap v501: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:27.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:27.604 158261 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 02 11:44:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:27.605 158261 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpkju6gcx2/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 02 11:44:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:27.489 158373 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 11:44:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:27.493 158373 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 11:44:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:27.494 158373 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct 02 11:44:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:27.495 158373 INFO oslo.privsep.daemon [-] privsep daemon running as pid 158373
Oct 02 11:44:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:27.607 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[ebfe9a4f-4fb1-4b97-bc0f-c1887139e1dc]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:44:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:28.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.157 158373 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.158 158373 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.158 158373 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:44:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.715 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[57eec181-9906-4621-b92f-08616cb10bd6]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.718 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, column=external_ids, values=({'neutron:ovn-metadata-id': '1d0a4ac7-2179-595a-8ebe-484e1b62b182'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.729 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 11:44:28 compute-0 ceph-mon[73607]: pgmap v502: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.739 158261 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.739 158261 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.739 158261 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.739 158261 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.740 158261 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.740 158261 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.740 158261 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.740 158261 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.741 158261 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.741 158261 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.741 158261 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.741 158261 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.741 158261 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.741 158261 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.741 158261 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.742 158261 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.742 158261 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.742 158261 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.742 158261 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.742 158261 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.742 158261 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.742 158261 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.742 158261 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.743 158261 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.743 158261 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.744 158261 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.744 158261 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.744 158261 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.745 158261 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.745 158261 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.745 158261 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.745 158261 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.745 158261 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.745 158261 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.746 158261 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.746 158261 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.746 158261 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.746 158261 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.746 158261 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.746 158261 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.746 158261 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.746 158261 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.747 158261 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.747 158261 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.747 158261 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.747 158261 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.747 158261 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.747 158261 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.747 158261 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.747 158261 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.748 158261 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.748 158261 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.748 158261 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.748 158261 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.748 158261 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.748 158261 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.748 158261 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.748 158261 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.749 158261 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.749 158261 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.749 158261 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.749 158261 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.749 158261 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.749 158261 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.749 158261 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.749 158261 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.749 158261 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.750 158261 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.750 158261 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.750 158261 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.750 158261 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.750 158261 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.750 158261 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.750 158261 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.750 158261 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.750 158261 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.751 158261 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.751 158261 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.751 158261 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.751 158261 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.751 158261 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.751 158261 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.751 158261 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.751 158261 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.752 158261 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.752 158261 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.752 158261 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.752 158261 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.752 158261 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.752 158261 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.752 158261 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.752 158261 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.752 158261 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.753 158261 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.753 158261 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.753 158261 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.753 158261 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.753 158261 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.753 158261 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.753 158261 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.753 158261 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.753 158261 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.754 158261 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.754 158261 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.754 158261 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.754 158261 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.754 158261 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.754 158261 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.754 158261 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.754 158261 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.755 158261 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.755 158261 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.755 158261 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.755 158261 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.755 158261 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.755 158261 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.755 158261 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.755 158261 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.756 158261 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.756 158261 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.756 158261 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.756 158261 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.756 158261 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.756 158261 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.756 158261 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.756 158261 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.757 158261 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.757 158261 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.757 158261 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.757 158261 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.757 158261 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.757 158261 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.757 158261 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.757 158261 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.757 158261 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.758 158261 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.758 158261 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.758 158261 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.758 158261 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.758 158261 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.758 158261 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.758 158261 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.758 158261 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.759 158261 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.759 158261 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.759 158261 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.759 158261 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.759 158261 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.759 158261 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.759 158261 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.759 158261 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.759 158261 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.759 158261 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.760 158261 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.760 158261 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.760 158261 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.760 158261 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.760 158261 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.760 158261 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.760 158261 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.760 158261 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.760 158261 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.760 158261 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.761 158261 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.761 158261 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.761 158261 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.761 158261 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.761 158261 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.761 158261 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.761 158261 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.761 158261 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.761 158261 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.762 158261 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.762 158261 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.762 158261 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.762 158261 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.762 158261 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.762 158261 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.762 158261 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.763 158261 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.763 158261 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.763 158261 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.763 158261 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.763 158261 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.763 158261 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.763 158261 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.764 158261 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.764 158261 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.764 158261 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.764 158261 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.764 158261 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.764 158261 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.764 158261 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.764 158261 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.765 158261 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.765 158261 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.765 158261 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.765 158261 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.765 158261 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.765 158261 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.765 158261 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.766 158261 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.766 158261 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.766 158261 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.766 158261 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.766 158261 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.766 158261 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.766 158261 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.766 158261 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.767 158261 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.767 158261 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.767 158261 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.767 158261 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.767 158261 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.767 158261 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.767 158261 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.767 158261 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.768 158261 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.768 158261 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.768 158261 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.768 158261 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.768 158261 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.768 158261 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.768 158261 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.768 158261 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.769 158261 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.769 158261 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.769 158261 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.769 158261 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.769 158261 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.769 158261 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.769 158261 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.769 158261 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.770 158261 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.770 158261 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.770 158261 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.770 158261 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.770 158261 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.771 158261 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.771 158261 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.771 158261 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.771 158261 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.771 158261 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.771 158261 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.772 158261 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.772 158261 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.772 158261 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.772 158261 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.772 158261 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.772 158261 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.772 158261 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.772 158261 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.773 158261 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.773 158261 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.773 158261 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.773 158261 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.773 158261 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.773 158261 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.773 158261 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.774 158261 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.774 158261 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.774 158261 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.774 158261 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.774 158261 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.774 158261 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.774 158261 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.774 158261 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.774 158261 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.775 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.775 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.775 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.775 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.775 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.775 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.775 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.775 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.776 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.776 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.776 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.776 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.776 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.776 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.776 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.776 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.776 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.777 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.777 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.777 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.777 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.777 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.777 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.777 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.777 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.778 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.778 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.778 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.778 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.778 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.778 158261 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.778 158261 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.778 158261 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.779 158261 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.779 158261 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:44:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:44:28.779 158261 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 02 11:44:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:29.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:30.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:44:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:31.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:31 compute-0 ceph-mon[73607]: pgmap v503: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:32.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:32 compute-0 sshd-session[158380]: Accepted publickey for zuul from 192.168.122.30 port 57100 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 11:44:32 compute-0 systemd-logind[789]: New session 50 of user zuul.
Oct 02 11:44:32 compute-0 systemd[1]: Started Session 50 of User zuul.
Oct 02 11:44:32 compute-0 sshd-session[158380]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:44:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:33 compute-0 python3.9[158534]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:44:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:44:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:33.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:44:33 compute-0 ceph-mon[73607]: pgmap v504: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:34 compute-0 sudo[158688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sipoqazxnrntzlaiuinhvcxbztqkmaai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405473.7160761-67-262911604470879/AnsiballZ_command.py'
Oct 02 11:44:34 compute-0 sudo[158688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:44:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:34.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:44:34 compute-0 python3.9[158690]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:44:34 compute-0 sudo[158688]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:44:35 compute-0 sudo[158854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnznetikfoixzumfwynhoxysjuydsbti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405474.8169408-100-112909584372620/AnsiballZ_systemd_service.py'
Oct 02 11:44:35 compute-0 sudo[158854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:35.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:35 compute-0 ceph-mon[73607]: pgmap v505: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:35 compute-0 python3.9[158856]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 11:44:35 compute-0 systemd[1]: Reloading.
Oct 02 11:44:35 compute-0 systemd-rc-local-generator[158884]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:44:35 compute-0 systemd-sysv-generator[158887]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:44:36 compute-0 sudo[158854]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:36.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:36 compute-0 python3.9[159042]: ansible-ansible.builtin.service_facts Invoked
Oct 02 11:44:36 compute-0 network[159059]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 11:44:36 compute-0 network[159060]: 'network-scripts' will be removed from distribution in near future.
Oct 02 11:44:36 compute-0 network[159061]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 11:44:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:44:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:37.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:44:37 compute-0 ceph-mon[73607]: pgmap v506: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:44:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:38.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:44:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Oct 02 11:44:38 compute-0 ceph-mon[73607]: pgmap v507: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Oct 02 11:44:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:44:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:39.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:44:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:40.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:44:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 55 KiB/s rd, 0 B/s wr, 91 op/s
Oct 02 11:44:40 compute-0 podman[159201]: 2025-10-02 11:44:40.947543642 +0000 UTC m=+0.089403460 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 11:44:41 compute-0 ceph-mon[73607]: pgmap v508: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 55 KiB/s rd, 0 B/s wr, 91 op/s
Oct 02 11:44:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:41.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:41 compute-0 sudo[159225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:41 compute-0 sudo[159225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:41 compute-0 sudo[159225]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:41 compute-0 sudo[159250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:41 compute-0 sudo[159250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:41 compute-0 sudo[159250]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:44:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:42.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:44:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_11:44:42
Oct 02 11:44:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:44:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 11:44:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.log', 'volumes', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', 'images', 'vms', 'default.rgw.meta', '.rgw.root']
Oct 02 11:44:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:44:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 55 KiB/s rd, 0 B/s wr, 91 op/s
Oct 02 11:44:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:44:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:44:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:44:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:44:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:44:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:44:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:44:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:44:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:44:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:44:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:44:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:44:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:44:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:44:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:44:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:44:42 compute-0 sudo[159401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsknzqctoamcbxjkdvnhzqqbcdibtdun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405482.5080183-157-149127158287926/AnsiballZ_systemd_service.py'
Oct 02 11:44:42 compute-0 sudo[159401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:43 compute-0 python3.9[159403]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:44:43 compute-0 sudo[159401]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:43 compute-0 sudo[159554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbrseljkzneetjfiezyagewnshchxnvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405483.2251208-157-116830792824889/AnsiballZ_systemd_service.py'
Oct 02 11:44:43 compute-0 sudo[159554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:43 compute-0 ceph-mon[73607]: pgmap v509: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 55 KiB/s rd, 0 B/s wr, 91 op/s
Oct 02 11:44:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:43.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:43 compute-0 python3.9[159556]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:44:43 compute-0 sudo[159554]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:44.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:44 compute-0 sudo[159707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mraxlwtpuyvajrzyajxzeevcuppzcthg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405483.9169118-157-149797503496902/AnsiballZ_systemd_service.py'
Oct 02 11:44:44 compute-0 sudo[159707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 90 KiB/s rd, 0 B/s wr, 149 op/s
Oct 02 11:44:44 compute-0 python3.9[159709]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:44:44 compute-0 sudo[159707]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:45 compute-0 sudo[159861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yesphziauzqosuhjhuokzppegkfwlswg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405484.6801941-157-22906956483599/AnsiballZ_systemd_service.py'
Oct 02 11:44:45 compute-0 sudo[159861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:44:45 compute-0 python3.9[159863]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:44:45 compute-0 sudo[159861]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:44:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:45.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:44:45 compute-0 ceph-mon[73607]: pgmap v510: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 90 KiB/s rd, 0 B/s wr, 149 op/s
Oct 02 11:44:45 compute-0 sudo[160014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkloadydzoprjcsnxujptbvwwyhkdink ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405485.4806633-157-71048661052424/AnsiballZ_systemd_service.py'
Oct 02 11:44:45 compute-0 sudo[160014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:46 compute-0 python3.9[160016]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:44:46 compute-0 sudo[160014]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:46.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 172 op/s
Oct 02 11:44:46 compute-0 sudo[160168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jueektuexlpknbkiqhfklfxonlvjnsoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405486.2611706-157-206046342509568/AnsiballZ_systemd_service.py'
Oct 02 11:44:46 compute-0 sudo[160168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:46 compute-0 python3.9[160170]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:44:46 compute-0 sudo[160168]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:47 compute-0 sudo[160321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgddsxhdgrzyvjdawhkvpxspkphvnfze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405487.0987074-157-264721119521317/AnsiballZ_systemd_service.py'
Oct 02 11:44:47 compute-0 sudo[160321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:47.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:47 compute-0 ceph-mon[73607]: pgmap v511: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 172 op/s
Oct 02 11:44:47 compute-0 python3.9[160323]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:44:47 compute-0 sudo[160321]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:44:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:48.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:44:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 172 op/s
Oct 02 11:44:49 compute-0 sudo[160475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rebhpvsreazvszuatxanmwspvrpvfzzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405488.6901288-313-125692228907479/AnsiballZ_file.py'
Oct 02 11:44:49 compute-0 sudo[160475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:49 compute-0 python3.9[160477]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:44:49 compute-0 sudo[160475]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:44:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:49.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:44:49 compute-0 ceph-mon[73607]: pgmap v512: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 172 op/s
Oct 02 11:44:49 compute-0 sudo[160627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqlafdtmxcpbmjkpzxxohzywnbrkiccz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405489.43584-313-263791213300108/AnsiballZ_file.py'
Oct 02 11:44:49 compute-0 sudo[160627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:49 compute-0 python3.9[160629]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:44:49 compute-0 sudo[160627]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:50.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:44:50 compute-0 sudo[160780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiypxxtocdcbvdwvxhiwgsmfkplhzbtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405490.0260875-313-110851965304476/AnsiballZ_file.py'
Oct 02 11:44:50 compute-0 sudo[160780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 89 KiB/s rd, 0 B/s wr, 148 op/s
Oct 02 11:44:50 compute-0 python3.9[160782]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:44:50 compute-0 sudo[160780]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:50 compute-0 ceph-mon[73607]: pgmap v513: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 89 KiB/s rd, 0 B/s wr, 148 op/s
Oct 02 11:44:50 compute-0 sudo[160932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knhixmvtwwsuvdgjfwtugqveoffwirgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405490.6419737-313-127233347667453/AnsiballZ_file.py'
Oct 02 11:44:50 compute-0 sudo[160932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:51 compute-0 python3.9[160934]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:44:51 compute-0 sudo[160932]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:51 compute-0 sudo[161084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bprhkvxxrjepeqnwbhmhudpwqhsfenox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405491.229728-313-41323541152172/AnsiballZ_file.py'
Oct 02 11:44:51 compute-0 sudo[161084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:44:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:51.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:44:51 compute-0 python3.9[161086]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:44:51 compute-0 sudo[161084]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:52 compute-0 sudo[161236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcxwdqxyzdvgyqhvnzlhpzdjaqhlmiyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405491.8308036-313-186934856155813/AnsiballZ_file.py'
Oct 02 11:44:52 compute-0 sudo[161236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:52.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:52 compute-0 python3.9[161238]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:44:52 compute-0 sudo[161236]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 49 KiB/s rd, 0 B/s wr, 80 op/s
Oct 02 11:44:52 compute-0 sudo[161287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:52 compute-0 sudo[161287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:52 compute-0 sudo[161287]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:52 compute-0 sudo[161341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:44:52 compute-0 sudo[161341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:52 compute-0 sudo[161341]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:52 compute-0 sudo[161389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:52 compute-0 sudo[161389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:52 compute-0 sudo[161389]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:52 compute-0 sudo[161419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:44:52 compute-0 sudo[161419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:52 compute-0 sudo[161489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpxdgaafwnlnqzkizvkvqkcadosxtgwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405492.444182-313-104673163993035/AnsiballZ_file.py'
Oct 02 11:44:52 compute-0 sudo[161489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:52 compute-0 python3.9[161491]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:44:52 compute-0 sudo[161489]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:53 compute-0 sudo[161419]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Oct 02 11:44:53 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 11:44:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:44:53 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:44:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:44:53 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:44:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:44:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:44:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:53.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:44:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:44:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:44:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:44:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:44:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:44:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:44:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:44:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:44:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:44:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:44:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:44:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:44:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:44:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:44:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:44:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:44:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 11:44:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:44:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:44:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:44:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:44:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:44:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:44:53 compute-0 sudo[161674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmcimdwzsxsyrqnibqcrrtahquviyplr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405493.62818-463-56175766345163/AnsiballZ_file.py'
Oct 02 11:44:53 compute-0 sudo[161674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:54 compute-0 python3.9[161676]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:44:54 compute-0 sudo[161674]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:54 compute-0 ceph-mon[73607]: pgmap v514: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 49 KiB/s rd, 0 B/s wr, 80 op/s
Oct 02 11:44:54 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:44:54 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev effa3269-5fdc-4c78-98e3-61cb4eb8a99c does not exist
Oct 02 11:44:54 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev a90c932d-7ad0-4a57-8944-3eda1cfd2cd3 does not exist
Oct 02 11:44:54 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 4dcea8eb-8145-4beb-9991-fcc53cab2061 does not exist
Oct 02 11:44:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:44:54 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:44:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:44:54 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:44:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:44:54 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:44:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:54.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:54 compute-0 sudo[161683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:54 compute-0 sudo[161683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:54 compute-0 sudo[161683]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:54 compute-0 sudo[161732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:44:54 compute-0 sudo[161732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:54 compute-0 sudo[161732]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:54 compute-0 sudo[161790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:54 compute-0 sudo[161790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:54 compute-0 sudo[161790]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:54 compute-0 sudo[161834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:44:54 compute-0 sudo[161834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:54 compute-0 sudo[161927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iocjfnvohoxzuczyyyrqrmlthekuxxtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405494.219927-463-209838880443590/AnsiballZ_file.py'
Oct 02 11:44:54 compute-0 sudo[161927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 49 KiB/s rd, 0 B/s wr, 80 op/s
Oct 02 11:44:54 compute-0 python3.9[161931]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:44:54 compute-0 sudo[161927]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:54 compute-0 podman[161971]: 2025-10-02 11:44:54.68670725 +0000 UTC m=+0.051213807 container create 3dc9da54a44c63cae3749f8e320702ed51f7506c7972899d6c93a633a4c3922f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 11:44:54 compute-0 systemd[1]: Started libpod-conmon-3dc9da54a44c63cae3749f8e320702ed51f7506c7972899d6c93a633a4c3922f.scope.
Oct 02 11:44:54 compute-0 podman[161971]: 2025-10-02 11:44:54.663278556 +0000 UTC m=+0.027785133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:44:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:54 compute-0 podman[161971]: 2025-10-02 11:44:54.810162089 +0000 UTC m=+0.174668736 container init 3dc9da54a44c63cae3749f8e320702ed51f7506c7972899d6c93a633a4c3922f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 11:44:54 compute-0 podman[161971]: 2025-10-02 11:44:54.820333833 +0000 UTC m=+0.184840430 container start 3dc9da54a44c63cae3749f8e320702ed51f7506c7972899d6c93a633a4c3922f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sinoussi, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 11:44:54 compute-0 zen_sinoussi[162004]: 167 167
Oct 02 11:44:54 compute-0 systemd[1]: libpod-3dc9da54a44c63cae3749f8e320702ed51f7506c7972899d6c93a633a4c3922f.scope: Deactivated successfully.
Oct 02 11:44:54 compute-0 podman[161971]: 2025-10-02 11:44:54.830933497 +0000 UTC m=+0.195440074 container attach 3dc9da54a44c63cae3749f8e320702ed51f7506c7972899d6c93a633a4c3922f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sinoussi, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 11:44:54 compute-0 podman[161971]: 2025-10-02 11:44:54.834424425 +0000 UTC m=+0.198931042 container died 3dc9da54a44c63cae3749f8e320702ed51f7506c7972899d6c93a633a4c3922f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sinoussi, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:44:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac2b00b72dad176a57983a3f6e92bdda6f29ae886c4e02d0349d957cda9b5d9f-merged.mount: Deactivated successfully.
Oct 02 11:44:54 compute-0 podman[161971]: 2025-10-02 11:44:54.901564269 +0000 UTC m=+0.266070826 container remove 3dc9da54a44c63cae3749f8e320702ed51f7506c7972899d6c93a633a4c3922f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sinoussi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 11:44:54 compute-0 systemd[1]: libpod-conmon-3dc9da54a44c63cae3749f8e320702ed51f7506c7972899d6c93a633a4c3922f.scope: Deactivated successfully.
Oct 02 11:44:55 compute-0 podman[162113]: 2025-10-02 11:44:55.098742535 +0000 UTC m=+0.052752575 container create 323a52e05bb126f2c5836b2bc50dc125ec15585ed544874a91c1333d25a07bc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:44:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 11:44:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:44:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:44:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:44:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:44:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:44:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:44:55 compute-0 ceph-mon[73607]: pgmap v515: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 49 KiB/s rd, 0 B/s wr, 80 op/s
Oct 02 11:44:55 compute-0 systemd[1]: Started libpod-conmon-323a52e05bb126f2c5836b2bc50dc125ec15585ed544874a91c1333d25a07bc3.scope.
Oct 02 11:44:55 compute-0 podman[162113]: 2025-10-02 11:44:55.077407554 +0000 UTC m=+0.031417634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:44:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0827d323b7f2e4966c8ced58cbb9838af8594f648f90e109143639117c400ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:55 compute-0 sudo[162177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwgkqdsrgpqqlcpkkqvvenfzrkkfseuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405494.8473327-463-225953973465486/AnsiballZ_file.py'
Oct 02 11:44:55 compute-0 sudo[162177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0827d323b7f2e4966c8ced58cbb9838af8594f648f90e109143639117c400ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0827d323b7f2e4966c8ced58cbb9838af8594f648f90e109143639117c400ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0827d323b7f2e4966c8ced58cbb9838af8594f648f90e109143639117c400ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0827d323b7f2e4966c8ced58cbb9838af8594f648f90e109143639117c400ea/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:44:55 compute-0 podman[162113]: 2025-10-02 11:44:55.226544393 +0000 UTC m=+0.180554483 container init 323a52e05bb126f2c5836b2bc50dc125ec15585ed544874a91c1333d25a07bc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:44:55 compute-0 podman[162113]: 2025-10-02 11:44:55.23564277 +0000 UTC m=+0.189652830 container start 323a52e05bb126f2c5836b2bc50dc125ec15585ed544874a91c1333d25a07bc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_austin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 11:44:55 compute-0 podman[162113]: 2025-10-02 11:44:55.239738782 +0000 UTC m=+0.193748842 container attach 323a52e05bb126f2c5836b2bc50dc125ec15585ed544874a91c1333d25a07bc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_austin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 11:44:55 compute-0 python3.9[162180]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:44:55 compute-0 sudo[162177]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:55.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:55 compute-0 sudo[162342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpiqrxpyojcmsitanezjcucjdngqawpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405495.5552204-463-177702362302592/AnsiballZ_file.py'
Oct 02 11:44:55 compute-0 sudo[162342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:55 compute-0 podman[162306]: 2025-10-02 11:44:55.934283552 +0000 UTC m=+0.102879166 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 11:44:56 compute-0 festive_austin[162175]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:44:56 compute-0 festive_austin[162175]: --> relative data size: 1.0
Oct 02 11:44:56 compute-0 festive_austin[162175]: --> All data devices are unavailable
Oct 02 11:44:56 compute-0 systemd[1]: libpod-323a52e05bb126f2c5836b2bc50dc125ec15585ed544874a91c1333d25a07bc3.scope: Deactivated successfully.
Oct 02 11:44:56 compute-0 podman[162113]: 2025-10-02 11:44:56.092805286 +0000 UTC m=+1.046815346 container died 323a52e05bb126f2c5836b2bc50dc125ec15585ed544874a91c1333d25a07bc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:44:56 compute-0 python3.9[162350]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:44:56 compute-0 sudo[162342]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0827d323b7f2e4966c8ced58cbb9838af8594f648f90e109143639117c400ea-merged.mount: Deactivated successfully.
Oct 02 11:44:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:56.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:56 compute-0 podman[162113]: 2025-10-02 11:44:56.166938484 +0000 UTC m=+1.120948534 container remove 323a52e05bb126f2c5836b2bc50dc125ec15585ed544874a91c1333d25a07bc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_austin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:44:56 compute-0 systemd[1]: libpod-conmon-323a52e05bb126f2c5836b2bc50dc125ec15585ed544874a91c1333d25a07bc3.scope: Deactivated successfully.
Oct 02 11:44:56 compute-0 sudo[161834]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:56 compute-0 sudo[162399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:56 compute-0 sudo[162399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:56 compute-0 sudo[162399]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:56 compute-0 sudo[162448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:44:56 compute-0 sudo[162448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:56 compute-0 sudo[162448]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:56 compute-0 sudo[162503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:56 compute-0 sudo[162503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:56 compute-0 sudo[162503]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Oct 02 11:44:56 compute-0 sudo[162551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 11:44:56 compute-0 sudo[162551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:56 compute-0 sudo[162625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zknyvujeelykgjishdzsxvqafgqepktj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405496.285721-463-3153420736687/AnsiballZ_file.py'
Oct 02 11:44:56 compute-0 sudo[162625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:56 compute-0 python3.9[162627]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:44:56 compute-0 sudo[162625]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:56 compute-0 podman[162681]: 2025-10-02 11:44:56.906386775 +0000 UTC m=+0.042800258 container create 8e42490594f47694a6acc961bb2cb01edd06227e105b8c49c03445d6fd24f4e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chaplygin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 11:44:56 compute-0 systemd[1]: Started libpod-conmon-8e42490594f47694a6acc961bb2cb01edd06227e105b8c49c03445d6fd24f4e9.scope.
Oct 02 11:44:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:56 compute-0 podman[162681]: 2025-10-02 11:44:56.887935445 +0000 UTC m=+0.024348948 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:44:57 compute-0 podman[162681]: 2025-10-02 11:44:57.002694657 +0000 UTC m=+0.139108160 container init 8e42490594f47694a6acc961bb2cb01edd06227e105b8c49c03445d6fd24f4e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chaplygin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:44:57 compute-0 podman[162681]: 2025-10-02 11:44:57.012683266 +0000 UTC m=+0.149096749 container start 8e42490594f47694a6acc961bb2cb01edd06227e105b8c49c03445d6fd24f4e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 11:44:57 compute-0 podman[162681]: 2025-10-02 11:44:57.016194834 +0000 UTC m=+0.152608317 container attach 8e42490594f47694a6acc961bb2cb01edd06227e105b8c49c03445d6fd24f4e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:44:57 compute-0 infallible_chaplygin[162720]: 167 167
Oct 02 11:44:57 compute-0 systemd[1]: libpod-8e42490594f47694a6acc961bb2cb01edd06227e105b8c49c03445d6fd24f4e9.scope: Deactivated successfully.
Oct 02 11:44:57 compute-0 podman[162681]: 2025-10-02 11:44:57.020729797 +0000 UTC m=+0.157143300 container died 8e42490594f47694a6acc961bb2cb01edd06227e105b8c49c03445d6fd24f4e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 11:44:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-95649bf7ef1472a3ebeac95b6987a043e5efc3b9aa17abc0dd9d196a7f90bd2b-merged.mount: Deactivated successfully.
Oct 02 11:44:57 compute-0 podman[162681]: 2025-10-02 11:44:57.069120274 +0000 UTC m=+0.205533757 container remove 8e42490594f47694a6acc961bb2cb01edd06227e105b8c49c03445d6fd24f4e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:44:57 compute-0 systemd[1]: libpod-conmon-8e42490594f47694a6acc961bb2cb01edd06227e105b8c49c03445d6fd24f4e9.scope: Deactivated successfully.
Oct 02 11:44:57 compute-0 podman[162812]: 2025-10-02 11:44:57.252985609 +0000 UTC m=+0.049181788 container create 8300e1b3d58c61a2103ba01b25b121e9bacc63c09569ea0537414ab50e0f5a5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_raman, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 11:44:57 compute-0 systemd[1]: Started libpod-conmon-8300e1b3d58c61a2103ba01b25b121e9bacc63c09569ea0537414ab50e0f5a5c.scope.
Oct 02 11:44:57 compute-0 podman[162812]: 2025-10-02 11:44:57.233003021 +0000 UTC m=+0.029199230 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:44:57 compute-0 sudo[162873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bofuvdsvnfgyivccxlqymiyucdgkuedu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405496.975513-463-138774106877255/AnsiballZ_file.py'
Oct 02 11:44:57 compute-0 sudo[162873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/824d5070a930d632fbb075a22ad54e92372d5d08da551f575ccc513ff530b8c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/824d5070a930d632fbb075a22ad54e92372d5d08da551f575ccc513ff530b8c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/824d5070a930d632fbb075a22ad54e92372d5d08da551f575ccc513ff530b8c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/824d5070a930d632fbb075a22ad54e92372d5d08da551f575ccc513ff530b8c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:57 compute-0 podman[162812]: 2025-10-02 11:44:57.378383906 +0000 UTC m=+0.174580105 container init 8300e1b3d58c61a2103ba01b25b121e9bacc63c09569ea0537414ab50e0f5a5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_raman, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 11:44:57 compute-0 podman[162812]: 2025-10-02 11:44:57.390729323 +0000 UTC m=+0.186925502 container start 8300e1b3d58c61a2103ba01b25b121e9bacc63c09569ea0537414ab50e0f5a5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_raman, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:44:57 compute-0 podman[162812]: 2025-10-02 11:44:57.404806474 +0000 UTC m=+0.201002643 container attach 8300e1b3d58c61a2103ba01b25b121e9bacc63c09569ea0537414ab50e0f5a5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_raman, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:44:57 compute-0 python3.9[162877]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:44:57 compute-0 sudo[162873]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:44:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:57.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:44:57 compute-0 ceph-mon[73607]: pgmap v516: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Oct 02 11:44:58 compute-0 sudo[163030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxbuvgdbwskwymzvgitenjxerpiipybr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405497.717764-463-2833401486917/AnsiballZ_file.py'
Oct 02 11:44:58 compute-0 sudo[163030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:44:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:58.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:44:58 compute-0 youthful_raman[162875]: {
Oct 02 11:44:58 compute-0 youthful_raman[162875]:     "1": [
Oct 02 11:44:58 compute-0 youthful_raman[162875]:         {
Oct 02 11:44:58 compute-0 youthful_raman[162875]:             "devices": [
Oct 02 11:44:58 compute-0 youthful_raman[162875]:                 "/dev/loop3"
Oct 02 11:44:58 compute-0 youthful_raman[162875]:             ],
Oct 02 11:44:58 compute-0 youthful_raman[162875]:             "lv_name": "ceph_lv0",
Oct 02 11:44:58 compute-0 youthful_raman[162875]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:44:58 compute-0 youthful_raman[162875]:             "lv_size": "7511998464",
Oct 02 11:44:58 compute-0 youthful_raman[162875]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:44:58 compute-0 youthful_raman[162875]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:44:58 compute-0 youthful_raman[162875]:             "name": "ceph_lv0",
Oct 02 11:44:58 compute-0 youthful_raman[162875]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:44:58 compute-0 youthful_raman[162875]:             "tags": {
Oct 02 11:44:58 compute-0 youthful_raman[162875]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:44:58 compute-0 youthful_raman[162875]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:44:58 compute-0 youthful_raman[162875]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:44:58 compute-0 youthful_raman[162875]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:44:58 compute-0 youthful_raman[162875]:                 "ceph.cluster_name": "ceph",
Oct 02 11:44:58 compute-0 youthful_raman[162875]:                 "ceph.crush_device_class": "",
Oct 02 11:44:58 compute-0 youthful_raman[162875]:                 "ceph.encrypted": "0",
Oct 02 11:44:58 compute-0 youthful_raman[162875]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:44:58 compute-0 youthful_raman[162875]:                 "ceph.osd_id": "1",
Oct 02 11:44:58 compute-0 youthful_raman[162875]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:44:58 compute-0 youthful_raman[162875]:                 "ceph.type": "block",
Oct 02 11:44:58 compute-0 youthful_raman[162875]:                 "ceph.vdo": "0"
Oct 02 11:44:58 compute-0 youthful_raman[162875]:             },
Oct 02 11:44:58 compute-0 youthful_raman[162875]:             "type": "block",
Oct 02 11:44:58 compute-0 youthful_raman[162875]:             "vg_name": "ceph_vg0"
Oct 02 11:44:58 compute-0 youthful_raman[162875]:         }
Oct 02 11:44:58 compute-0 youthful_raman[162875]:     ]
Oct 02 11:44:58 compute-0 youthful_raman[162875]: }
Oct 02 11:44:58 compute-0 systemd[1]: libpod-8300e1b3d58c61a2103ba01b25b121e9bacc63c09569ea0537414ab50e0f5a5c.scope: Deactivated successfully.
Oct 02 11:44:58 compute-0 conmon[162875]: conmon 8300e1b3d58c61a2103b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8300e1b3d58c61a2103ba01b25b121e9bacc63c09569ea0537414ab50e0f5a5c.scope/container/memory.events
Oct 02 11:44:58 compute-0 podman[162812]: 2025-10-02 11:44:58.216959578 +0000 UTC m=+1.013155767 container died 8300e1b3d58c61a2103ba01b25b121e9bacc63c09569ea0537414ab50e0f5a5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:44:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-824d5070a930d632fbb075a22ad54e92372d5d08da551f575ccc513ff530b8c9-merged.mount: Deactivated successfully.
Oct 02 11:44:58 compute-0 podman[162812]: 2025-10-02 11:44:58.293008375 +0000 UTC m=+1.089204554 container remove 8300e1b3d58c61a2103ba01b25b121e9bacc63c09569ea0537414ab50e0f5a5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 11:44:58 compute-0 python3.9[163033]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:44:58 compute-0 systemd[1]: libpod-conmon-8300e1b3d58c61a2103ba01b25b121e9bacc63c09569ea0537414ab50e0f5a5c.scope: Deactivated successfully.
Oct 02 11:44:58 compute-0 sudo[163030]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:58 compute-0 sudo[162551]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:58 compute-0 sudo[163052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:58 compute-0 sudo[163052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:58 compute-0 sudo[163052]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:58 compute-0 sudo[163100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:44:58 compute-0 sudo[163100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:58 compute-0 sudo[163100]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:44:58 compute-0 sudo[163125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:58 compute-0 sudo[163125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:58 compute-0 sudo[163125]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:58 compute-0 sudo[163150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 11:44:58 compute-0 sudo[163150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:59 compute-0 podman[163316]: 2025-10-02 11:44:59.094333608 +0000 UTC m=+0.065797122 container create c206e28d46dfd434e3344d4cab6647297a121669c6c20b1ab8562d2ff2fed193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 11:44:59 compute-0 sudo[163357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxjuwomyugxqktklvzotdhbesutyxgub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405498.7402952-616-139041133978183/AnsiballZ_command.py'
Oct 02 11:44:59 compute-0 sudo[163357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:59 compute-0 systemd[1]: Started libpod-conmon-c206e28d46dfd434e3344d4cab6647297a121669c6c20b1ab8562d2ff2fed193.scope.
Oct 02 11:44:59 compute-0 podman[163316]: 2025-10-02 11:44:59.067674263 +0000 UTC m=+0.039137847 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:44:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:59 compute-0 podman[163316]: 2025-10-02 11:44:59.20266944 +0000 UTC m=+0.174133044 container init c206e28d46dfd434e3344d4cab6647297a121669c6c20b1ab8562d2ff2fed193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 11:44:59 compute-0 podman[163316]: 2025-10-02 11:44:59.212248349 +0000 UTC m=+0.183711853 container start c206e28d46dfd434e3344d4cab6647297a121669c6c20b1ab8562d2ff2fed193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hugle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:44:59 compute-0 podman[163316]: 2025-10-02 11:44:59.215812228 +0000 UTC m=+0.187275812 container attach c206e28d46dfd434e3344d4cab6647297a121669c6c20b1ab8562d2ff2fed193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hugle, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 11:44:59 compute-0 strange_hugle[163363]: 167 167
Oct 02 11:44:59 compute-0 systemd[1]: libpod-c206e28d46dfd434e3344d4cab6647297a121669c6c20b1ab8562d2ff2fed193.scope: Deactivated successfully.
Oct 02 11:44:59 compute-0 podman[163316]: 2025-10-02 11:44:59.21749179 +0000 UTC m=+0.188955294 container died c206e28d46dfd434e3344d4cab6647297a121669c6c20b1ab8562d2ff2fed193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hugle, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 11:44:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b8d8c18a252c44b5005e13cbc36184f893727601ffd096c7778d84d56932f75-merged.mount: Deactivated successfully.
Oct 02 11:44:59 compute-0 podman[163316]: 2025-10-02 11:44:59.259463226 +0000 UTC m=+0.230926770 container remove c206e28d46dfd434e3344d4cab6647297a121669c6c20b1ab8562d2ff2fed193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:44:59 compute-0 systemd[1]: libpod-conmon-c206e28d46dfd434e3344d4cab6647297a121669c6c20b1ab8562d2ff2fed193.scope: Deactivated successfully.
Oct 02 11:44:59 compute-0 python3.9[163359]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:44:59 compute-0 sudo[163357]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:59 compute-0 podman[163388]: 2025-10-02 11:44:59.450318526 +0000 UTC m=+0.051965627 container create 5259341b707f674fd7549b984c3be23b5f8480c1016406b476fc9d278cf688d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_williams, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:44:59 compute-0 systemd[1]: Started libpod-conmon-5259341b707f674fd7549b984c3be23b5f8480c1016406b476fc9d278cf688d8.scope.
Oct 02 11:44:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c7e5e415b92472bf769cdf8146b1b1816e62aa7db99e9b0b0b1286e75a04291/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c7e5e415b92472bf769cdf8146b1b1816e62aa7db99e9b0b0b1286e75a04291/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c7e5e415b92472bf769cdf8146b1b1816e62aa7db99e9b0b0b1286e75a04291/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c7e5e415b92472bf769cdf8146b1b1816e62aa7db99e9b0b0b1286e75a04291/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:59 compute-0 podman[163388]: 2025-10-02 11:44:59.433361653 +0000 UTC m=+0.035008774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:44:59 compute-0 podman[163388]: 2025-10-02 11:44:59.542581036 +0000 UTC m=+0.144228157 container init 5259341b707f674fd7549b984c3be23b5f8480c1016406b476fc9d278cf688d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_williams, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 11:44:59 compute-0 podman[163388]: 2025-10-02 11:44:59.553655083 +0000 UTC m=+0.155302214 container start 5259341b707f674fd7549b984c3be23b5f8480c1016406b476fc9d278cf688d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_williams, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 11:44:59 compute-0 podman[163388]: 2025-10-02 11:44:59.558689179 +0000 UTC m=+0.160336300 container attach 5259341b707f674fd7549b984c3be23b5f8480c1016406b476fc9d278cf688d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_williams, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 11:44:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:44:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:44:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:59.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:44:59 compute-0 ceph-mon[73607]: pgmap v517: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:00.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:45:00 compute-0 python3.9[163558]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 11:45:00 compute-0 reverent_williams[163428]: {
Oct 02 11:45:00 compute-0 reverent_williams[163428]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 11:45:00 compute-0 reverent_williams[163428]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:45:00 compute-0 reverent_williams[163428]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:45:00 compute-0 reverent_williams[163428]:         "osd_id": 1,
Oct 02 11:45:00 compute-0 reverent_williams[163428]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:45:00 compute-0 reverent_williams[163428]:         "type": "bluestore"
Oct 02 11:45:00 compute-0 reverent_williams[163428]:     }
Oct 02 11:45:00 compute-0 reverent_williams[163428]: }
Oct 02 11:45:00 compute-0 podman[163388]: 2025-10-02 11:45:00.428626443 +0000 UTC m=+1.030273544 container died 5259341b707f674fd7549b984c3be23b5f8480c1016406b476fc9d278cf688d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_williams, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 11:45:00 compute-0 systemd[1]: libpod-5259341b707f674fd7549b984c3be23b5f8480c1016406b476fc9d278cf688d8.scope: Deactivated successfully.
Oct 02 11:45:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c7e5e415b92472bf769cdf8146b1b1816e62aa7db99e9b0b0b1286e75a04291-merged.mount: Deactivated successfully.
Oct 02 11:45:00 compute-0 podman[163388]: 2025-10-02 11:45:00.477295777 +0000 UTC m=+1.078942898 container remove 5259341b707f674fd7549b984c3be23b5f8480c1016406b476fc9d278cf688d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_williams, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:45:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:00 compute-0 systemd[1]: libpod-conmon-5259341b707f674fd7549b984c3be23b5f8480c1016406b476fc9d278cf688d8.scope: Deactivated successfully.
Oct 02 11:45:00 compute-0 sudo[163150]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:45:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:45:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:45:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:45:00 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 4c3d3a53-021a-4631-b7e4-ee6b3c8b4ad9 does not exist
Oct 02 11:45:00 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev cf22a636-54c3-42a6-909e-a503277eead1 does not exist
Oct 02 11:45:00 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 05971caf-2d11-4c2d-b490-9f3a8ca0b101 does not exist
Oct 02 11:45:00 compute-0 sudo[163614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:00 compute-0 sudo[163614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:00 compute-0 sudo[163614]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:00 compute-0 sudo[163662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:45:00 compute-0 sudo[163662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:00 compute-0 sudo[163662]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:01 compute-0 sudo[163789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhqpjzsuzsnppveidiwktvzyvwvapwrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405500.6430004-670-59691035696214/AnsiballZ_systemd_service.py'
Oct 02 11:45:01 compute-0 sudo[163789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:45:01 compute-0 python3.9[163791]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 11:45:01 compute-0 systemd[1]: Reloading.
Oct 02 11:45:01 compute-0 systemd-sysv-generator[163821]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:45:01 compute-0 systemd-rc-local-generator[163817]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:45:01 compute-0 ceph-mon[73607]: pgmap v518: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:45:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:45:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:45:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:01.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:45:01 compute-0 sudo[163789]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:01 compute-0 sudo[163852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:01 compute-0 sudo[163852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:01 compute-0 sudo[163852]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:01 compute-0 sudo[163884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:01 compute-0 sudo[163884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:01 compute-0 sudo[163884]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:02 compute-0 sudo[164027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rehsmdzaebndonvfjsvhjpkilxeacvkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405501.8183064-694-29027969264094/AnsiballZ_command.py'
Oct 02 11:45:02 compute-0 sudo[164027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:45:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:02.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:02 compute-0 python3.9[164029]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:45:02 compute-0 sudo[164027]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:02 compute-0 sudo[164181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkedrpvnqtqdcjudxkjubiebqffyskyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405502.5040166-694-142712806734490/AnsiballZ_command.py'
Oct 02 11:45:02 compute-0 sudo[164181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:45:02 compute-0 python3.9[164183]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:45:03 compute-0 sudo[164181]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:03 compute-0 sudo[164334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvcroxiuytikvpsjmeydegyvaiedvkvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405503.1914718-694-49778452327925/AnsiballZ_command.py'
Oct 02 11:45:03 compute-0 sudo[164334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:45:03 compute-0 ceph-mon[73607]: pgmap v519: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:45:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:03.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:45:03 compute-0 python3.9[164336]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:45:03 compute-0 sudo[164334]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:04.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:04 compute-0 sudo[164487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdpmdmnxnplnyszhfjqglcqrphqbmigq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405503.8890975-694-261140576583583/AnsiballZ_command.py'
Oct 02 11:45:04 compute-0 sudo[164487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:45:04 compute-0 python3.9[164489]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:45:04 compute-0 sudo[164487]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:04 compute-0 ceph-mon[73607]: pgmap v520: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:04 compute-0 sudo[164641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckjsvkhyeqfxklkvvdsqzumsncrvaema ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405504.5300837-694-13312710334942/AnsiballZ_command.py'
Oct 02 11:45:04 compute-0 sudo[164641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:45:05 compute-0 python3.9[164643]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:45:05 compute-0 sudo[164641]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:45:05 compute-0 sudo[164794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qterxgumhjdyuujnswpkjabazvddnudj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405505.1918283-694-34692436583212/AnsiballZ_command.py'
Oct 02 11:45:05 compute-0 sudo[164794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:45:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:05.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:05 compute-0 python3.9[164796]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:45:05 compute-0 sudo[164794]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:06 compute-0 sudo[164947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsmbhsxduwbrqqspyzuzylhotemlqcyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405505.8427641-694-171835737469397/AnsiballZ_command.py'
Oct 02 11:45:06 compute-0 sudo[164947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:45:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:45:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:06.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:45:06 compute-0 python3.9[164949]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:45:06 compute-0 sudo[164947]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:07 compute-0 sudo[165101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aezixrxknbfmehuledmnjqmpfvklhjmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405507.0830379-856-195489496481450/AnsiballZ_getent.py'
Oct 02 11:45:07 compute-0 sudo[165101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:45:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:07.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:07 compute-0 ceph-mon[73607]: pgmap v521: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:07 compute-0 python3.9[165103]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct 02 11:45:07 compute-0 sudo[165101]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:08.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:08 compute-0 sudo[165255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnbiudrhtnegmcmucjhcozqxydljiwxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405508.0025382-880-230141584908200/AnsiballZ_group.py'
Oct 02 11:45:08 compute-0 sudo[165255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:45:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:08 compute-0 python3.9[165257]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 02 11:45:08 compute-0 groupadd[165258]: group added to /etc/group: name=libvirt, GID=42473
Oct 02 11:45:08 compute-0 groupadd[165258]: group added to /etc/gshadow: name=libvirt
Oct 02 11:45:08 compute-0 groupadd[165258]: new group: name=libvirt, GID=42473
Oct 02 11:45:08 compute-0 sudo[165255]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:09 compute-0 sudo[165413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfkefemjatxbzuywqzupzuqvaaqhhsff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405509.0013123-904-99219434116156/AnsiballZ_user.py'
Oct 02 11:45:09 compute-0 sudo[165413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:45:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:45:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:09.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:45:09 compute-0 ceph-mon[73607]: pgmap v522: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:09 compute-0 python3.9[165415]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 02 11:45:09 compute-0 useradd[165417]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Oct 02 11:45:09 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 11:45:09 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 11:45:09 compute-0 sudo[165413]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:10.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:45:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:10 compute-0 sudo[165575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwprishxsinhgvngafvbmlhsqhisoffv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405510.4713278-937-107685863923253/AnsiballZ_setup.py'
Oct 02 11:45:10 compute-0 sudo[165575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:45:11 compute-0 python3.9[165577]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:45:11 compute-0 sudo[165575]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:11.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:11 compute-0 sudo[165669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptefgzetotktvvcptpxyvbjmtfnfczxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405510.4713278-937-107685863923253/AnsiballZ_dnf.py'
Oct 02 11:45:11 compute-0 sudo[165669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:45:11 compute-0 podman[165633]: 2025-10-02 11:45:11.6751587 +0000 UTC m=+0.092337883 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 11:45:11 compute-0 ceph-mon[73607]: pgmap v523: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:11 compute-0 python3.9[165674]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:45:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:12.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:45:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:45:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:45:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:45:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:45:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:45:12 compute-0 ceph-mon[73607]: pgmap v524: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:13.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:14.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:45:15 compute-0 ceph-mon[73607]: pgmap v525: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:45:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:15.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:45:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:16.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:17.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:17 compute-0 ceph-mon[73607]: pgmap v526: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:18.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:18 compute-0 ceph-mon[73607]: pgmap v527: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:45:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:19.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:45:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:20.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:45:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:21 compute-0 ceph-mon[73607]: pgmap v528: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:45:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:21.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:45:21 compute-0 sudo[165763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:21 compute-0 sudo[165763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:21 compute-0 sudo[165763]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:22 compute-0 sudo[165792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:22 compute-0 sudo[165792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:22 compute-0 sudo[165792]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:45:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:22.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:45:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:23 compute-0 ceph-mon[73607]: pgmap v529: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:23.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:45:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:24.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:45:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:45:25 compute-0 ceph-mon[73607]: pgmap v530: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:45:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:25.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:45:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:26.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:45:26.899 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:45:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:45:26.899 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:45:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:45:26.899 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:45:26 compute-0 podman[165929]: 2025-10-02 11:45:26.909908084 +0000 UTC m=+0.047004583 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 11:45:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:27.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:27 compute-0 ceph-mon[73607]: pgmap v531: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:28.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:29.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:29 compute-0 ceph-mon[73607]: pgmap v532: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:30.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:45:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:30 compute-0 ceph-mon[73607]: pgmap v533: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:45:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:31.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:45:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:32.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:33 compute-0 ceph-mon[73607]: pgmap v534: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:33.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:34.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:45:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:35.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:35 compute-0 ceph-mon[73607]: pgmap v535: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:36.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:36 compute-0 ceph-mon[73607]: pgmap v536: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:37.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:38.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:39 compute-0 kernel: SELinux:  Converting 2768 SID table entries...
Oct 02 11:45:39 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 11:45:39 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 02 11:45:39 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 11:45:39 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 02 11:45:39 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 11:45:39 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 11:45:39 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 11:45:39 compute-0 ceph-mon[73607]: pgmap v537: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:39.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:40.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:45:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:41 compute-0 ceph-mon[73607]: pgmap v538: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:41.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:41 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Oct 02 11:45:41 compute-0 podman[165970]: 2025-10-02 11:45:41.99143859 +0000 UTC m=+0.110278212 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 11:45:42 compute-0 sudo[165996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:42 compute-0 sudo[165996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:42 compute-0 sudo[165996]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:42 compute-0 sudo[166021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:42 compute-0 sudo[166021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:42 compute-0 sudo[166021]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:45:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:42.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:45:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_11:45:42
Oct 02 11:45:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:45:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 11:45:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'volumes', 'vms', 'backups', 'images', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log']
Oct 02 11:45:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:45:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:45:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:45:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:45:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:45:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:45:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:45:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:45:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:45:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:45:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:45:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:45:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:45:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:45:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:45:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:45:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:45:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:45:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:43.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:45:43 compute-0 ceph-mon[73607]: pgmap v539: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:44.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:44 compute-0 ceph-mon[73607]: pgmap v540: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:45:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:45.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:46.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:47 compute-0 ceph-mon[73607]: pgmap v541: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:45:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:47.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:45:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:48.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:48 compute-0 ceph-mon[73607]: pgmap v542: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:48 compute-0 kernel: SELinux:  Converting 2768 SID table entries...
Oct 02 11:45:48 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 11:45:48 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 02 11:45:48 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 11:45:48 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 02 11:45:48 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 11:45:48 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 11:45:48 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 11:45:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:49.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:50.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:45:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:45:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:51.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:45:51 compute-0 ceph-mon[73607]: pgmap v543: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:52.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:52 compute-0 ceph-mon[73607]: pgmap v544: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:53.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:45:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:45:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:45:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:45:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:45:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:45:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:45:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:45:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:45:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:45:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:45:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:45:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:45:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:45:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:45:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:45:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 11:45:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:45:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:45:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:45:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:45:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:45:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:45:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:54.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:55 compute-0 ceph-mon[73607]: pgmap v545: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:45:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:55.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:56.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:57 compute-0 ceph-mon[73607]: pgmap v546: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:57.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:57 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Oct 02 11:45:57 compute-0 podman[166061]: 2025-10-02 11:45:57.938442905 +0000 UTC m=+0.066944901 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 11:45:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:58.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:45:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:59 compute-0 ceph-mon[73607]: pgmap v547: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:45:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:45:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:45:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:59.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:00.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:01 compute-0 sudo[166082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:01 compute-0 sudo[166082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:01 compute-0 sudo[166082]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:01 compute-0 sudo[166107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:46:01 compute-0 sudo[166107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:01 compute-0 sudo[166107]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:01 compute-0 sudo[166137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:01 compute-0 sudo[166137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:01 compute-0 sudo[166137]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:01 compute-0 sudo[166199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:46:01 compute-0 sudo[166199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:01 compute-0 ceph-mon[73607]: pgmap v548: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:01 compute-0 sudo[166199]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:46:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:01.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:46:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:46:01 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:46:01 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:46:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:46:01 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:46:01 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev e44d852a-657d-4e82-a596-dbcf2d5a3798 does not exist
Oct 02 11:46:01 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 15567fa0-b73e-492a-a752-8b83d2bb5ef2 does not exist
Oct 02 11:46:01 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev a9ff621e-4696-4418-aa53-27419026ac8d does not exist
Oct 02 11:46:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:46:01 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:46:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:46:01 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:46:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:46:01 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:01 compute-0 sudo[166716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:01 compute-0 sudo[166716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:01 compute-0 sudo[166716]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:01 compute-0 sudo[166780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:46:01 compute-0 sudo[166780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:01 compute-0 sudo[166780]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:02 compute-0 sudo[166835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:02 compute-0 sudo[166835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:02 compute-0 sudo[166835]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:02 compute-0 sudo[166901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:46:02 compute-0 sudo[166901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:46:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:02.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:46:02 compute-0 sudo[167025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:02 compute-0 sudo[167025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:02 compute-0 sudo[167025]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:46:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:46:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:46:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:46:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:02 compute-0 sudo[167109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:02 compute-0 sudo[167109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:02 compute-0 sudo[167109]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:02 compute-0 podman[167217]: 2025-10-02 11:46:02.457429113 +0000 UTC m=+0.067026514 container create a139b6bbd1277e9eb08da976c8be43ae365ea4d3e7414d1257689ff781c2e57d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:46:02 compute-0 systemd[1]: Started libpod-conmon-a139b6bbd1277e9eb08da976c8be43ae365ea4d3e7414d1257689ff781c2e57d.scope.
Oct 02 11:46:02 compute-0 podman[167217]: 2025-10-02 11:46:02.408954783 +0000 UTC m=+0.018552194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:46:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:02 compute-0 podman[167217]: 2025-10-02 11:46:02.553572013 +0000 UTC m=+0.163169424 container init a139b6bbd1277e9eb08da976c8be43ae365ea4d3e7414d1257689ff781c2e57d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_moore, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Oct 02 11:46:02 compute-0 podman[167217]: 2025-10-02 11:46:02.560406373 +0000 UTC m=+0.170003764 container start a139b6bbd1277e9eb08da976c8be43ae365ea4d3e7414d1257689ff781c2e57d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_moore, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 11:46:02 compute-0 angry_moore[167320]: 167 167
Oct 02 11:46:02 compute-0 systemd[1]: libpod-a139b6bbd1277e9eb08da976c8be43ae365ea4d3e7414d1257689ff781c2e57d.scope: Deactivated successfully.
Oct 02 11:46:02 compute-0 podman[167217]: 2025-10-02 11:46:02.591163721 +0000 UTC m=+0.200761132 container attach a139b6bbd1277e9eb08da976c8be43ae365ea4d3e7414d1257689ff781c2e57d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_moore, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 11:46:02 compute-0 podman[167217]: 2025-10-02 11:46:02.592408522 +0000 UTC m=+0.202005913 container died a139b6bbd1277e9eb08da976c8be43ae365ea4d3e7414d1257689ff781c2e57d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_moore, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 11:46:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-5928f45e9f7c1fbd3e3a309d4a26af31f5e4bc57e51fc529d8b150d114c04542-merged.mount: Deactivated successfully.
Oct 02 11:46:02 compute-0 podman[167217]: 2025-10-02 11:46:02.664243375 +0000 UTC m=+0.273840766 container remove a139b6bbd1277e9eb08da976c8be43ae365ea4d3e7414d1257689ff781c2e57d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_moore, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 11:46:02 compute-0 systemd[1]: libpod-conmon-a139b6bbd1277e9eb08da976c8be43ae365ea4d3e7414d1257689ff781c2e57d.scope: Deactivated successfully.
Oct 02 11:46:02 compute-0 podman[167541]: 2025-10-02 11:46:02.820236498 +0000 UTC m=+0.039394094 container create fb34c4dce24c8ca466cefe19dd4ebc657a083e63d3c3df2b969ab3b12c6ab79d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rubin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 11:46:02 compute-0 systemd[1]: Started libpod-conmon-fb34c4dce24c8ca466cefe19dd4ebc657a083e63d3c3df2b969ab3b12c6ab79d.scope.
Oct 02 11:46:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f68f2d6a9ac099f5c6fa6bf2416c567283e8b1f4733d3b89e48772a234366f67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f68f2d6a9ac099f5c6fa6bf2416c567283e8b1f4733d3b89e48772a234366f67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f68f2d6a9ac099f5c6fa6bf2416c567283e8b1f4733d3b89e48772a234366f67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f68f2d6a9ac099f5c6fa6bf2416c567283e8b1f4733d3b89e48772a234366f67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f68f2d6a9ac099f5c6fa6bf2416c567283e8b1f4733d3b89e48772a234366f67/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:02 compute-0 podman[167541]: 2025-10-02 11:46:02.800778043 +0000 UTC m=+0.019935649 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:46:02 compute-0 podman[167541]: 2025-10-02 11:46:02.899449576 +0000 UTC m=+0.118607192 container init fb34c4dce24c8ca466cefe19dd4ebc657a083e63d3c3df2b969ab3b12c6ab79d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 11:46:02 compute-0 podman[167541]: 2025-10-02 11:46:02.905787604 +0000 UTC m=+0.124945190 container start fb34c4dce24c8ca466cefe19dd4ebc657a083e63d3c3df2b969ab3b12c6ab79d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rubin, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 11:46:02 compute-0 podman[167541]: 2025-10-02 11:46:02.909273661 +0000 UTC m=+0.128431267 container attach fb34c4dce24c8ca466cefe19dd4ebc657a083e63d3c3df2b969ab3b12c6ab79d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:46:03 compute-0 ceph-mon[73607]: pgmap v549: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:03 compute-0 kind_rubin[167621]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:46:03 compute-0 kind_rubin[167621]: --> relative data size: 1.0
Oct 02 11:46:03 compute-0 kind_rubin[167621]: --> All data devices are unavailable
Oct 02 11:46:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:46:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:03.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:46:03 compute-0 systemd[1]: libpod-fb34c4dce24c8ca466cefe19dd4ebc657a083e63d3c3df2b969ab3b12c6ab79d.scope: Deactivated successfully.
Oct 02 11:46:03 compute-0 podman[167541]: 2025-10-02 11:46:03.72341224 +0000 UTC m=+0.942569836 container died fb34c4dce24c8ca466cefe19dd4ebc657a083e63d3c3df2b969ab3b12c6ab79d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rubin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:46:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-f68f2d6a9ac099f5c6fa6bf2416c567283e8b1f4733d3b89e48772a234366f67-merged.mount: Deactivated successfully.
Oct 02 11:46:03 compute-0 podman[167541]: 2025-10-02 11:46:03.791746896 +0000 UTC m=+1.010904502 container remove fb34c4dce24c8ca466cefe19dd4ebc657a083e63d3c3df2b969ab3b12c6ab79d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:46:03 compute-0 systemd[1]: libpod-conmon-fb34c4dce24c8ca466cefe19dd4ebc657a083e63d3c3df2b969ab3b12c6ab79d.scope: Deactivated successfully.
Oct 02 11:46:03 compute-0 sudo[166901]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:03 compute-0 sudo[168284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:03 compute-0 sudo[168284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:03 compute-0 sudo[168284]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:03 compute-0 sudo[168352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:46:03 compute-0 sudo[168352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:03 compute-0 sudo[168352]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:03 compute-0 auditd[705]: Audit daemon rotating log files
Oct 02 11:46:04 compute-0 sudo[168418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:04 compute-0 sudo[168418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:04 compute-0 sudo[168418]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:04 compute-0 sudo[168478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 11:46:04 compute-0 sudo[168478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:46:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:04.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:46:04 compute-0 podman[168738]: 2025-10-02 11:46:04.382717826 +0000 UTC m=+0.039808145 container create 841ee3dc4f7aaf995c14aa236172c011e01d400a0d063225a434cff898a06772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_yonath, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 11:46:04 compute-0 systemd[1]: Started libpod-conmon-841ee3dc4f7aaf995c14aa236172c011e01d400a0d063225a434cff898a06772.scope.
Oct 02 11:46:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:04 compute-0 podman[168738]: 2025-10-02 11:46:04.456124318 +0000 UTC m=+0.113214657 container init 841ee3dc4f7aaf995c14aa236172c011e01d400a0d063225a434cff898a06772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_yonath, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:46:04 compute-0 podman[168738]: 2025-10-02 11:46:04.363111776 +0000 UTC m=+0.020202105 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:46:04 compute-0 podman[168738]: 2025-10-02 11:46:04.489937831 +0000 UTC m=+0.147028150 container start 841ee3dc4f7aaf995c14aa236172c011e01d400a0d063225a434cff898a06772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_yonath, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:46:04 compute-0 compassionate_yonath[168809]: 167 167
Oct 02 11:46:04 compute-0 systemd[1]: libpod-841ee3dc4f7aaf995c14aa236172c011e01d400a0d063225a434cff898a06772.scope: Deactivated successfully.
Oct 02 11:46:04 compute-0 podman[168738]: 2025-10-02 11:46:04.495674675 +0000 UTC m=+0.152764984 container attach 841ee3dc4f7aaf995c14aa236172c011e01d400a0d063225a434cff898a06772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:46:04 compute-0 podman[168738]: 2025-10-02 11:46:04.496444104 +0000 UTC m=+0.153534413 container died 841ee3dc4f7aaf995c14aa236172c011e01d400a0d063225a434cff898a06772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_yonath, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 11:46:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-f66c05e7525011a70c6dacbb0ccb68be7197c04449fb75f1e6346d576d68d9bd-merged.mount: Deactivated successfully.
Oct 02 11:46:04 compute-0 podman[168738]: 2025-10-02 11:46:04.53911496 +0000 UTC m=+0.196205259 container remove 841ee3dc4f7aaf995c14aa236172c011e01d400a0d063225a434cff898a06772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 11:46:04 compute-0 systemd[1]: libpod-conmon-841ee3dc4f7aaf995c14aa236172c011e01d400a0d063225a434cff898a06772.scope: Deactivated successfully.
Oct 02 11:46:04 compute-0 podman[168986]: 2025-10-02 11:46:04.724346952 +0000 UTC m=+0.055157247 container create 0f189006dd903144dc4446502f437c4c8071e28b0991c6dee84b214d8c16a4ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lehmann, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 11:46:04 compute-0 systemd[1]: Started libpod-conmon-0f189006dd903144dc4446502f437c4c8071e28b0991c6dee84b214d8c16a4ee.scope.
Oct 02 11:46:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd729c8c47695fdb18e6ce511c87010c08e0ef6df0c01656b67e18dbfe71abc7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd729c8c47695fdb18e6ce511c87010c08e0ef6df0c01656b67e18dbfe71abc7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd729c8c47695fdb18e6ce511c87010c08e0ef6df0c01656b67e18dbfe71abc7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd729c8c47695fdb18e6ce511c87010c08e0ef6df0c01656b67e18dbfe71abc7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:04 compute-0 podman[168986]: 2025-10-02 11:46:04.787854077 +0000 UTC m=+0.118664382 container init 0f189006dd903144dc4446502f437c4c8071e28b0991c6dee84b214d8c16a4ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:46:04 compute-0 podman[168986]: 2025-10-02 11:46:04.794597316 +0000 UTC m=+0.125407611 container start 0f189006dd903144dc4446502f437c4c8071e28b0991c6dee84b214d8c16a4ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 11:46:04 compute-0 podman[168986]: 2025-10-02 11:46:04.703685237 +0000 UTC m=+0.034495592 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:46:04 compute-0 podman[168986]: 2025-10-02 11:46:04.797542709 +0000 UTC m=+0.128353024 container attach 0f189006dd903144dc4446502f437c4c8071e28b0991c6dee84b214d8c16a4ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:46:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]: {
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]:     "1": [
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]:         {
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]:             "devices": [
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]:                 "/dev/loop3"
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]:             ],
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]:             "lv_name": "ceph_lv0",
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]:             "lv_size": "7511998464",
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]:             "name": "ceph_lv0",
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]:             "tags": {
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]:                 "ceph.cluster_name": "ceph",
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]:                 "ceph.crush_device_class": "",
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]:                 "ceph.encrypted": "0",
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]:                 "ceph.osd_id": "1",
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]:                 "ceph.type": "block",
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]:                 "ceph.vdo": "0"
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]:             },
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]:             "type": "block",
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]:             "vg_name": "ceph_vg0"
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]:         }
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]:     ]
Oct 02 11:46:05 compute-0 upbeat_lehmann[169064]: }
Oct 02 11:46:05 compute-0 systemd[1]: libpod-0f189006dd903144dc4446502f437c4c8071e28b0991c6dee84b214d8c16a4ee.scope: Deactivated successfully.
Oct 02 11:46:05 compute-0 podman[168986]: 2025-10-02 11:46:05.531169499 +0000 UTC m=+0.861979794 container died 0f189006dd903144dc4446502f437c4c8071e28b0991c6dee84b214d8c16a4ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:46:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd729c8c47695fdb18e6ce511c87010c08e0ef6df0c01656b67e18dbfe71abc7-merged.mount: Deactivated successfully.
Oct 02 11:46:05 compute-0 podman[168986]: 2025-10-02 11:46:05.591369562 +0000 UTC m=+0.922179857 container remove 0f189006dd903144dc4446502f437c4c8071e28b0991c6dee84b214d8c16a4ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 11:46:05 compute-0 ceph-mon[73607]: pgmap v550: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:05 compute-0 systemd[1]: libpod-conmon-0f189006dd903144dc4446502f437c4c8071e28b0991c6dee84b214d8c16a4ee.scope: Deactivated successfully.
Oct 02 11:46:05 compute-0 sudo[168478]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:05 compute-0 sudo[169647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:05 compute-0 sudo[169647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:05 compute-0 sudo[169647]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:46:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:05.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:46:05 compute-0 sudo[169720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:46:05 compute-0 sudo[169720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:05 compute-0 sudo[169720]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:05 compute-0 sudo[169782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:05 compute-0 sudo[169782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:05 compute-0 sudo[169782]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:05 compute-0 sudo[169854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 11:46:05 compute-0 sudo[169854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:06 compute-0 podman[170120]: 2025-10-02 11:46:06.220596377 +0000 UTC m=+0.044509902 container create ddde1a804fbec7693817fabc3d88eb5668e5902fc83913b5237c2c223f263455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 11:46:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:06.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:06 compute-0 systemd[1]: Started libpod-conmon-ddde1a804fbec7693817fabc3d88eb5668e5902fc83913b5237c2c223f263455.scope.
Oct 02 11:46:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:06 compute-0 podman[170120]: 2025-10-02 11:46:06.200757091 +0000 UTC m=+0.024670626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:46:06 compute-0 podman[170120]: 2025-10-02 11:46:06.317181308 +0000 UTC m=+0.141094853 container init ddde1a804fbec7693817fabc3d88eb5668e5902fc83913b5237c2c223f263455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mendel, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:46:06 compute-0 podman[170120]: 2025-10-02 11:46:06.325988057 +0000 UTC m=+0.149901582 container start ddde1a804fbec7693817fabc3d88eb5668e5902fc83913b5237c2c223f263455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mendel, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:46:06 compute-0 podman[170120]: 2025-10-02 11:46:06.332179321 +0000 UTC m=+0.156092856 container attach ddde1a804fbec7693817fabc3d88eb5668e5902fc83913b5237c2c223f263455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mendel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:46:06 compute-0 jovial_mendel[170211]: 167 167
Oct 02 11:46:06 compute-0 systemd[1]: libpod-ddde1a804fbec7693817fabc3d88eb5668e5902fc83913b5237c2c223f263455.scope: Deactivated successfully.
Oct 02 11:46:06 compute-0 podman[170120]: 2025-10-02 11:46:06.337658378 +0000 UTC m=+0.161571943 container died ddde1a804fbec7693817fabc3d88eb5668e5902fc83913b5237c2c223f263455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:46:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f35571ab706f14dde3438a9251ec120c51c006e2911a9ada6aa4970b3a56411-merged.mount: Deactivated successfully.
Oct 02 11:46:06 compute-0 podman[170120]: 2025-10-02 11:46:06.39264695 +0000 UTC m=+0.216560475 container remove ddde1a804fbec7693817fabc3d88eb5668e5902fc83913b5237c2c223f263455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mendel, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:46:06 compute-0 systemd[1]: libpod-conmon-ddde1a804fbec7693817fabc3d88eb5668e5902fc83913b5237c2c223f263455.scope: Deactivated successfully.
Oct 02 11:46:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:06 compute-0 podman[170435]: 2025-10-02 11:46:06.609447281 +0000 UTC m=+0.047368673 container create 2b520e4c047a0d8208a5a07a6e2dd56e790f542fe68b681eb6bdd76fcc4f4f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_greider, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:46:06 compute-0 systemd[1]: Started libpod-conmon-2b520e4c047a0d8208a5a07a6e2dd56e790f542fe68b681eb6bdd76fcc4f4f5b.scope.
Oct 02 11:46:06 compute-0 podman[170435]: 2025-10-02 11:46:06.587426233 +0000 UTC m=+0.025347715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:46:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f365250e2445aabeef8433413759948de4e2f15b0b278292e6df13519378acb0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f365250e2445aabeef8433413759948de4e2f15b0b278292e6df13519378acb0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f365250e2445aabeef8433413759948de4e2f15b0b278292e6df13519378acb0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f365250e2445aabeef8433413759948de4e2f15b0b278292e6df13519378acb0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:06 compute-0 podman[170435]: 2025-10-02 11:46:06.745389084 +0000 UTC m=+0.183310496 container init 2b520e4c047a0d8208a5a07a6e2dd56e790f542fe68b681eb6bdd76fcc4f4f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:46:06 compute-0 podman[170435]: 2025-10-02 11:46:06.753320233 +0000 UTC m=+0.191241655 container start 2b520e4c047a0d8208a5a07a6e2dd56e790f542fe68b681eb6bdd76fcc4f4f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_greider, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:46:06 compute-0 podman[170435]: 2025-10-02 11:46:06.75839959 +0000 UTC m=+0.196321002 container attach 2b520e4c047a0d8208a5a07a6e2dd56e790f542fe68b681eb6bdd76fcc4f4f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_greider, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:46:07 compute-0 friendly_greider[170548]: {
Oct 02 11:46:07 compute-0 friendly_greider[170548]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 11:46:07 compute-0 friendly_greider[170548]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:46:07 compute-0 friendly_greider[170548]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:46:07 compute-0 friendly_greider[170548]:         "osd_id": 1,
Oct 02 11:46:07 compute-0 friendly_greider[170548]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:46:07 compute-0 friendly_greider[170548]:         "type": "bluestore"
Oct 02 11:46:07 compute-0 friendly_greider[170548]:     }
Oct 02 11:46:07 compute-0 friendly_greider[170548]: }
Oct 02 11:46:07 compute-0 systemd[1]: libpod-2b520e4c047a0d8208a5a07a6e2dd56e790f542fe68b681eb6bdd76fcc4f4f5b.scope: Deactivated successfully.
Oct 02 11:46:07 compute-0 podman[170435]: 2025-10-02 11:46:07.566588001 +0000 UTC m=+1.004509413 container died 2b520e4c047a0d8208a5a07a6e2dd56e790f542fe68b681eb6bdd76fcc4f4f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:46:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:07.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:07 compute-0 ceph-mon[73607]: pgmap v551: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:08.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f365250e2445aabeef8433413759948de4e2f15b0b278292e6df13519378acb0-merged.mount: Deactivated successfully.
Oct 02 11:46:09 compute-0 ceph-mon[73607]: pgmap v552: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:09.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:09 compute-0 podman[170435]: 2025-10-02 11:46:09.95533699 +0000 UTC m=+3.393258382 container remove 2b520e4c047a0d8208a5a07a6e2dd56e790f542fe68b681eb6bdd76fcc4f4f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 11:46:09 compute-0 sudo[169854]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:46:10 compute-0 systemd[1]: libpod-conmon-2b520e4c047a0d8208a5a07a6e2dd56e790f542fe68b681eb6bdd76fcc4f4f5b.scope: Deactivated successfully.
Oct 02 11:46:10 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:46:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:46:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:10.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:10 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:46:10 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 1cb03998-675c-453e-8047-47fe42da4b9d does not exist
Oct 02 11:46:10 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 71724c15-12e1-4a8d-b60e-71eb06081be3 does not exist
Oct 02 11:46:10 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 2c964e1c-2df4-47b9-a77f-5bff86bc21c3 does not exist
Oct 02 11:46:10 compute-0 sudo[172919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:10 compute-0 sudo[172919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:10 compute-0 sudo[172919]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:10 compute-0 sudo[173033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:46:10 compute-0 sudo[173033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:10 compute-0 sudo[173033]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:46:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:46:11 compute-0 ceph-mon[73607]: pgmap v553: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:11.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:46:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:12.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:46:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:46:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:46:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:46:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:46:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:46:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:46:12 compute-0 podman[174477]: 2025-10-02 11:46:12.980837633 +0000 UTC m=+0.115184895 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 11:46:13 compute-0 ceph-mon[73607]: pgmap v554: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:13.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:14.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:14 compute-0 ceph-mon[73607]: pgmap v555: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:46:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:15.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:46:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:16.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:17 compute-0 ceph-mon[73607]: pgmap v556: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:46:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:17.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:46:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:18.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:18 compute-0 ceph-mon[73607]: pgmap v557: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:19.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:20.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:21 compute-0 ceph-mon[73607]: pgmap v558: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:21.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:46:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:22.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:46:22 compute-0 sudo[180547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:22 compute-0 sudo[180547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:22 compute-0 sudo[180547]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:22 compute-0 sudo[180619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:22 compute-0 sudo[180619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:22 compute-0 sudo[180619]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:22 compute-0 ceph-mon[73607]: pgmap v559: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:46:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:23.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:46:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:24.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:25 compute-0 ceph-mon[73607]: pgmap v560: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:46:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:25.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:46:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:26.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:46:26.901 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:46:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:46:26.901 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:46:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:46:26.901 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:46:26 compute-0 ceph-mon[73607]: pgmap v561: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:27.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:28.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:28 compute-0 podman[183826]: 2025-10-02 11:46:28.930341491 +0000 UTC m=+0.069229619 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible)
Oct 02 11:46:29 compute-0 ceph-mon[73607]: pgmap v562: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:29.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:30.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:30 compute-0 ceph-mon[73607]: pgmap v563: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:31.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:32.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:33 compute-0 ceph-mon[73607]: pgmap v564: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:33.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:34.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:35 compute-0 ceph-mon[73607]: pgmap v565: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:35.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:46:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:36.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:46:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:37 compute-0 ceph-mon[73607]: pgmap v566: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:37.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:46:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:38.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:46:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:39 compute-0 ceph-mon[73607]: pgmap v567: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:39.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:39 compute-0 kernel: SELinux:  Converting 2769 SID table entries...
Oct 02 11:46:39 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 11:46:39 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 02 11:46:39 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 11:46:39 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 02 11:46:39 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 11:46:39 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 11:46:39 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 11:46:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:46:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:40.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:46:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:40 compute-0 ceph-mon[73607]: pgmap v568: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:41 compute-0 groupadd[183875]: group added to /etc/group: name=dnsmasq, GID=991
Oct 02 11:46:41 compute-0 groupadd[183875]: group added to /etc/gshadow: name=dnsmasq
Oct 02 11:46:41 compute-0 groupadd[183875]: new group: name=dnsmasq, GID=991
Oct 02 11:46:41 compute-0 useradd[183882]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Oct 02 11:46:41 compute-0 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Oct 02 11:46:41 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Oct 02 11:46:41 compute-0 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Oct 02 11:46:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:41.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:42 compute-0 groupadd[183895]: group added to /etc/group: name=clevis, GID=990
Oct 02 11:46:42 compute-0 groupadd[183895]: group added to /etc/gshadow: name=clevis
Oct 02 11:46:42 compute-0 groupadd[183895]: new group: name=clevis, GID=990
Oct 02 11:46:42 compute-0 useradd[183902]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Oct 02 11:46:42 compute-0 usermod[183912]: add 'clevis' to group 'tss'
Oct 02 11:46:42 compute-0 usermod[183912]: add 'clevis' to shadow group 'tss'
Oct 02 11:46:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:42.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_11:46:42
Oct 02 11:46:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:46:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 11:46:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', '.mgr', 'backups', 'cephfs.cephfs.data', 'vms', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'default.rgw.log']
Oct 02 11:46:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:46:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:42 compute-0 sudo[183920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:42 compute-0 sudo[183920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:42 compute-0 sudo[183920]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:42 compute-0 sudo[183945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:42 compute-0 sudo[183945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:42 compute-0 sudo[183945]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:46:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:46:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:46:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:46:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:46:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:46:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:46:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:46:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:46:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:46:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:46:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:46:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:46:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:46:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:46:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:46:43 compute-0 podman[183984]: 2025-10-02 11:46:43.182225168 +0000 UTC m=+0.099946545 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 02 11:46:43 compute-0 ceph-mgr[73901]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3158772141
Oct 02 11:46:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:46:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:43.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:46:44 compute-0 ceph-mon[73607]: pgmap v569: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:46:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:44.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:46:44 compute-0 polkitd[6475]: Reloading rules
Oct 02 11:46:44 compute-0 polkitd[6475]: Collecting garbage unconditionally...
Oct 02 11:46:44 compute-0 polkitd[6475]: Loading rules from directory /etc/polkit-1/rules.d
Oct 02 11:46:44 compute-0 polkitd[6475]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 02 11:46:44 compute-0 polkitd[6475]: Finished loading, compiling and executing 4 rules
Oct 02 11:46:44 compute-0 polkitd[6475]: Reloading rules
Oct 02 11:46:44 compute-0 polkitd[6475]: Collecting garbage unconditionally...
Oct 02 11:46:44 compute-0 polkitd[6475]: Loading rules from directory /etc/polkit-1/rules.d
Oct 02 11:46:44 compute-0 polkitd[6475]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 02 11:46:44 compute-0 polkitd[6475]: Finished loading, compiling and executing 4 rules
Oct 02 11:46:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:45 compute-0 ceph-mon[73607]: pgmap v570: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:45 compute-0 groupadd[184178]: group added to /etc/group: name=ceph, GID=167
Oct 02 11:46:45 compute-0 groupadd[184178]: group added to /etc/gshadow: name=ceph
Oct 02 11:46:45 compute-0 groupadd[184178]: new group: name=ceph, GID=167
Oct 02 11:46:45 compute-0 useradd[184184]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Oct 02 11:46:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:45.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:46:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:46.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:46:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:47.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:48.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:48 compute-0 ceph-mon[73607]: pgmap v571: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:48 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Oct 02 11:46:48 compute-0 sshd[1008]: Received signal 15; terminating.
Oct 02 11:46:48 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Oct 02 11:46:48 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Oct 02 11:46:48 compute-0 systemd[1]: sshd.service: Consumed 2.326s CPU time, read 0B from disk, written 20.0K to disk.
Oct 02 11:46:48 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Oct 02 11:46:48 compute-0 systemd[1]: Stopping sshd-keygen.target...
Oct 02 11:46:48 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 02 11:46:48 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 02 11:46:48 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 02 11:46:48 compute-0 systemd[1]: Reached target sshd-keygen.target.
Oct 02 11:46:48 compute-0 systemd[1]: Starting OpenSSH server daemon...
Oct 02 11:46:48 compute-0 sshd[184811]: Server listening on 0.0.0.0 port 22.
Oct 02 11:46:48 compute-0 sshd[184811]: Server listening on :: port 22.
Oct 02 11:46:48 compute-0 systemd[1]: Started OpenSSH server daemon.
Oct 02 11:46:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:49.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:50 compute-0 ceph-mon[73607]: pgmap v572: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:50.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:50 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 11:46:50 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 02 11:46:50 compute-0 systemd[1]: Reloading.
Oct 02 11:46:50 compute-0 systemd-rc-local-generator[185067]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:46:50 compute-0 systemd-sysv-generator[185072]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:46:50 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 11:46:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:51.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:46:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:52.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:46:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:52 compute-0 ceph-mon[73607]: pgmap v573: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:53 compute-0 systemd[1]: Starting PackageKit Daemon...
Oct 02 11:46:53 compute-0 PackageKit[188085]: daemon start
Oct 02 11:46:53 compute-0 systemd[1]: Started PackageKit Daemon.
Oct 02 11:46:53 compute-0 sudo[165669]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:53 compute-0 ceph-mon[73607]: pgmap v574: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:46:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:53.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:46:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:46:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:46:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:46:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:46:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:46:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:46:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:46:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:46:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:46:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:46:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:46:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:46:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:46:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:46:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:46:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:46:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 11:46:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:46:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:46:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:46:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:46:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:46:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:46:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:54.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:54 compute-0 ceph-mon[73607]: pgmap v575: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:55.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:56.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:57 compute-0 ceph-mon[73607]: pgmap v576: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:46:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:57.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:46:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:58.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:58 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 11:46:58 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 02 11:46:58 compute-0 systemd[1]: man-db-cache-update.service: Consumed 9.842s CPU time.
Oct 02 11:46:58 compute-0 systemd[1]: run-r7bff8f9b9db7434a9f9111b99dbde280.service: Deactivated successfully.
Oct 02 11:46:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:46:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:46:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:59.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:46:59 compute-0 podman[193343]: 2025-10-02 11:46:59.951788433 +0000 UTC m=+0.078772016 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 11:47:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:00.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:01.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:01 compute-0 ceph-mon[73607]: pgmap v577: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:47:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:02.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:47:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:02 compute-0 sudo[193365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:02 compute-0 sudo[193365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:02 compute-0 sudo[193365]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:02 compute-0 sudo[193390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:02 compute-0 sudo[193390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:02 compute-0 sudo[193390]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:02 compute-0 ceph-mon[73607]: pgmap v578: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:02 compute-0 ceph-mon[73607]: pgmap v579: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:03.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:04.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:05 compute-0 ceph-mon[73607]: pgmap v580: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:05.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:05 compute-0 sudo[193541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sayqvdwkwhdhcwwijvztcengjkivqnjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405625.3334765-973-54899545630835/AnsiballZ_systemd.py'
Oct 02 11:47:05 compute-0 sudo[193541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:06 compute-0 python3.9[193543]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 11:47:06 compute-0 systemd[1]: Reloading.
Oct 02 11:47:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:47:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:06.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:47:06 compute-0 systemd-sysv-generator[193579]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:47:06 compute-0 systemd-rc-local-generator[193575]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:47:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:06 compute-0 sudo[193541]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:06 compute-0 sudo[193733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izanvhqalxhnxhakpczoajrgnjzcxjhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405626.6792881-973-143202940633861/AnsiballZ_systemd.py'
Oct 02 11:47:06 compute-0 sudo[193733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:07 compute-0 python3.9[193735]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 11:47:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:07.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:07 compute-0 ceph-mon[73607]: pgmap v581: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:47:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:08.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:47:08 compute-0 systemd[1]: Reloading.
Oct 02 11:47:08 compute-0 systemd-sysv-generator[193768]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:47:08 compute-0 systemd-rc-local-generator[193762]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:47:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:09 compute-0 ceph-mon[73607]: pgmap v582: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:09 compute-0 sudo[193733]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:09.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:10 compute-0 sudo[193924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xamsfclwoxcorozjtqpvxlfejvrflhyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405629.779019-973-256091810955018/AnsiballZ_systemd.py'
Oct 02 11:47:10 compute-0 sudo[193924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:10.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:10 compute-0 python3.9[193926]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 11:47:10 compute-0 systemd[1]: Reloading.
Oct 02 11:47:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:10 compute-0 systemd-rc-local-generator[193954]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:47:10 compute-0 systemd-sysv-generator[193958]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:47:10 compute-0 sudo[193964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:10 compute-0 sudo[193964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:10 compute-0 sudo[193964]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:10 compute-0 sudo[193924]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:10 compute-0 sudo[193990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:47:10 compute-0 sudo[193990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:10 compute-0 sudo[193990]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:10 compute-0 ceph-mon[73607]: pgmap v583: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:10 compute-0 sudo[194039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:10 compute-0 sudo[194039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:10 compute-0 sudo[194039]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:10 compute-0 sudo[194087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 11:47:10 compute-0 sudo[194087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:11 compute-0 sudo[194240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oegzgegvnitcdkqsjeaphfdizjnexinj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405630.9363859-973-28755290841701/AnsiballZ_systemd.py'
Oct 02 11:47:11 compute-0 sudo[194240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:11 compute-0 podman[194287]: 2025-10-02 11:47:11.37811961 +0000 UTC m=+0.051687622 container exec 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:47:11 compute-0 podman[194287]: 2025-10-02 11:47:11.46627299 +0000 UTC m=+0.139840972 container exec_died 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 11:47:11 compute-0 python3.9[194249]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 11:47:11 compute-0 systemd[1]: Reloading.
Oct 02 11:47:11 compute-0 systemd-sysv-generator[194370]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:47:11 compute-0 systemd-rc-local-generator[194367]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:47:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:11.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:11 compute-0 sudo[194240]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:12 compute-0 podman[194538]: 2025-10-02 11:47:12.232309229 +0000 UTC m=+0.064114821 container exec 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 11:47:12 compute-0 podman[194538]: 2025-10-02 11:47:12.244133474 +0000 UTC m=+0.075939046 container exec_died 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 11:47:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:47:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:47:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:12.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:47:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:47:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:47:12 compute-0 sudo[194661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbvbpurqvpcuugqvqlnjvvnmpepbzcef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405632.060945-1060-184337715166951/AnsiballZ_systemd.py'
Oct 02 11:47:12 compute-0 sudo[194661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:47:12 compute-0 podman[194676]: 2025-10-02 11:47:12.446962066 +0000 UTC m=+0.053243040 container exec a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, version=2.2.4, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Oct 02 11:47:12 compute-0 podman[194676]: 2025-10-02 11:47:12.481261593 +0000 UTC m=+0.087542547 container exec_died a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, name=keepalived, release=1793, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.openshift.expose-services=, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git)
Oct 02 11:47:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:12 compute-0 sudo[194087]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:47:12 compute-0 python3.9[194665]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:47:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:47:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:47:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:47:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:47:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:47:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:47:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:47:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:47:12 compute-0 systemd[1]: Reloading.
Oct 02 11:47:12 compute-0 systemd-sysv-generator[194759]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:47:12 compute-0 systemd-rc-local-generator[194756]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:47:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:47:12 compute-0 sudo[194765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:12 compute-0 sudo[194765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:12 compute-0 sudo[194765]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:12 compute-0 sudo[194661]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:13 compute-0 sudo[194790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:47:13 compute-0 sudo[194790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:13 compute-0 sudo[194790]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:13 compute-0 sudo[194828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:13 compute-0 sudo[194828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:13 compute-0 sudo[194828]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:13 compute-0 sudo[194868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:47:13 compute-0 sudo[194868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:13 compute-0 sudo[195040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdmipviwijoswcxvkldethgqrzsuovfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405633.1148539-1060-88870253325142/AnsiballZ_systemd.py'
Oct 02 11:47:13 compute-0 sudo[195040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:47:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:47:13 compute-0 ceph-mon[73607]: pgmap v584: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:47:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:47:13 compute-0 podman[195003]: 2025-10-02 11:47:13.451405976 +0000 UTC m=+0.102655764 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 11:47:13 compute-0 sudo[194868]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:47:13 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:47:13 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:47:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:47:13 compute-0 python3.9[195049]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:47:13 compute-0 systemd[1]: Reloading.
Oct 02 11:47:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:13.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:13 compute-0 systemd-rc-local-generator[195100]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:47:13 compute-0 systemd-sysv-generator[195104]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:47:14 compute-0 sudo[195040]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:14 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:47:14 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 19e2ae8a-e48e-4f53-a9f8-950505abe39e does not exist
Oct 02 11:47:14 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev b37aeb6d-1245-4201-a9e0-a92ee4ac74ea does not exist
Oct 02 11:47:14 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 9ef517c0-11b2-487a-9780-6a145a46f7e0 does not exist
Oct 02 11:47:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:47:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:47:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:47:14 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:47:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:47:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:14 compute-0 sudo[195136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:14 compute-0 sudo[195136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:14 compute-0 sudo[195136]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:14 compute-0 sudo[195184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:47:14 compute-0 sudo[195184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:14 compute-0 sudo[195184]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:14 compute-0 sudo[195238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:14 compute-0 sudo[195238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:14.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:14 compute-0 sudo[195238]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:14 compute-0 sudo[195287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:47:14 compute-0 sudo[195287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:14 compute-0 sudo[195362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzleequewolwxkaykbbjwlxelxxcldev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405634.1829197-1060-263307973828972/AnsiballZ_systemd.py'
Oct 02 11:47:14 compute-0 sudo[195362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:14 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:14 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:47:14 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:47:14 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:47:14 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:47:14 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:14 compute-0 podman[195407]: 2025-10-02 11:47:14.688353488 +0000 UTC m=+0.039476146 container create b1fa44169d1497ee4f9586e5e42a650a1a59554d371d2d9247c0d0a84d85b9a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_snyder, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 11:47:14 compute-0 systemd[1]: Started libpod-conmon-b1fa44169d1497ee4f9586e5e42a650a1a59554d371d2d9247c0d0a84d85b9a4.scope.
Oct 02 11:47:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:14 compute-0 podman[195407]: 2025-10-02 11:47:14.669298513 +0000 UTC m=+0.020421191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:47:14 compute-0 podman[195407]: 2025-10-02 11:47:14.775047672 +0000 UTC m=+0.126170350 container init b1fa44169d1497ee4f9586e5e42a650a1a59554d371d2d9247c0d0a84d85b9a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:47:14 compute-0 podman[195407]: 2025-10-02 11:47:14.781886903 +0000 UTC m=+0.133009561 container start b1fa44169d1497ee4f9586e5e42a650a1a59554d371d2d9247c0d0a84d85b9a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 11:47:14 compute-0 podman[195407]: 2025-10-02 11:47:14.7854006 +0000 UTC m=+0.136523288 container attach b1fa44169d1497ee4f9586e5e42a650a1a59554d371d2d9247c0d0a84d85b9a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_snyder, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 11:47:14 compute-0 python3.9[195364]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:47:14 compute-0 dreamy_snyder[195425]: 167 167
Oct 02 11:47:14 compute-0 systemd[1]: libpod-b1fa44169d1497ee4f9586e5e42a650a1a59554d371d2d9247c0d0a84d85b9a4.scope: Deactivated successfully.
Oct 02 11:47:14 compute-0 podman[195407]: 2025-10-02 11:47:14.788514549 +0000 UTC m=+0.139637217 container died b1fa44169d1497ee4f9586e5e42a650a1a59554d371d2d9247c0d0a84d85b9a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_snyder, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 11:47:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-4008415f28e62d1de2c017952f53be25ae5c0bb32965845eff6526ed1b58a7d4-merged.mount: Deactivated successfully.
Oct 02 11:47:14 compute-0 podman[195407]: 2025-10-02 11:47:14.83267713 +0000 UTC m=+0.183799788 container remove b1fa44169d1497ee4f9586e5e42a650a1a59554d371d2d9247c0d0a84d85b9a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_snyder, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:47:14 compute-0 systemd[1]: libpod-conmon-b1fa44169d1497ee4f9586e5e42a650a1a59554d371d2d9247c0d0a84d85b9a4.scope: Deactivated successfully.
Oct 02 11:47:14 compute-0 systemd[1]: Reloading.
Oct 02 11:47:14 compute-0 systemd-rc-local-generator[195484]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:47:14 compute-0 systemd-sysv-generator[195488]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:47:14 compute-0 podman[195451]: 2025-10-02 11:47:14.991884924 +0000 UTC m=+0.057680420 container create 6fa2ee004862f7c0e82a4d7f47c9999a9e53093b76b6e963a229ccfcfe7cf275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_nobel, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:47:15 compute-0 podman[195451]: 2025-10-02 11:47:14.965348232 +0000 UTC m=+0.031143738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:47:15 compute-0 systemd[1]: Started libpod-conmon-6fa2ee004862f7c0e82a4d7f47c9999a9e53093b76b6e963a229ccfcfe7cf275.scope.
Oct 02 11:47:15 compute-0 sudo[195362]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181f7001bc84f494a45aa4eb9d973c37839f56812b293ed4b3dde276b4621cb4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181f7001bc84f494a45aa4eb9d973c37839f56812b293ed4b3dde276b4621cb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181f7001bc84f494a45aa4eb9d973c37839f56812b293ed4b3dde276b4621cb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181f7001bc84f494a45aa4eb9d973c37839f56812b293ed4b3dde276b4621cb4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181f7001bc84f494a45aa4eb9d973c37839f56812b293ed4b3dde276b4621cb4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:15 compute-0 podman[195451]: 2025-10-02 11:47:15.253915734 +0000 UTC m=+0.319711280 container init 6fa2ee004862f7c0e82a4d7f47c9999a9e53093b76b6e963a229ccfcfe7cf275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 11:47:15 compute-0 podman[195451]: 2025-10-02 11:47:15.265175915 +0000 UTC m=+0.330971401 container start 6fa2ee004862f7c0e82a4d7f47c9999a9e53093b76b6e963a229ccfcfe7cf275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_nobel, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:47:15 compute-0 podman[195451]: 2025-10-02 11:47:15.269054502 +0000 UTC m=+0.334850038 container attach 6fa2ee004862f7c0e82a4d7f47c9999a9e53093b76b6e963a229ccfcfe7cf275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_nobel, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 11:47:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:47:15.298704) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405635298727, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1836, "num_deletes": 250, "total_data_size": 3500622, "memory_usage": 3561912, "flush_reason": "Manual Compaction"}
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405635315941, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 3431016, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12267, "largest_seqno": 14102, "table_properties": {"data_size": 3422590, "index_size": 5241, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 15298, "raw_average_key_size": 18, "raw_value_size": 3406123, "raw_average_value_size": 4108, "num_data_blocks": 235, "num_entries": 829, "num_filter_entries": 829, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405431, "oldest_key_time": 1759405431, "file_creation_time": 1759405635, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 17272 microseconds, and 6258 cpu microseconds.
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:47:15.315973) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 3431016 bytes OK
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:47:15.315988) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:47:15.317176) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:47:15.317186) EVENT_LOG_v1 {"time_micros": 1759405635317183, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:47:15.317199) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3493196, prev total WAL file size 3493196, number of live WAL files 2.
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:47:15.318018) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(3350KB)], [29(8380KB)]
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405635318044, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 12012601, "oldest_snapshot_seqno": -1}
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4261 keys, 11472655 bytes, temperature: kUnknown
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405635387436, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 11472655, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11438434, "index_size": 22463, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10693, "raw_key_size": 103663, "raw_average_key_size": 24, "raw_value_size": 11355837, "raw_average_value_size": 2665, "num_data_blocks": 960, "num_entries": 4261, "num_filter_entries": 4261, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759405635, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:47:15.387762) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 11472655 bytes
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:47:15.389361) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.8 rd, 165.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 8.2 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(6.8) write-amplify(3.3) OK, records in: 4776, records dropped: 515 output_compression: NoCompression
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:47:15.389394) EVENT_LOG_v1 {"time_micros": 1759405635389378, "job": 12, "event": "compaction_finished", "compaction_time_micros": 69517, "compaction_time_cpu_micros": 21849, "output_level": 6, "num_output_files": 1, "total_output_size": 11472655, "num_input_records": 4776, "num_output_records": 4261, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405635390521, "job": 12, "event": "table_file_deletion", "file_number": 31}
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405635393365, "job": 12, "event": "table_file_deletion", "file_number": 29}
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:47:15.317967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:47:15.393497) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:47:15.393506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:47:15.393509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:47:15.393512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:47:15 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:47:15.393515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:47:15 compute-0 ceph-mon[73607]: pgmap v585: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:15 compute-0 sudo[195653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfjwqwesusbeyvcyxruaymyrhrjiucme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405635.348188-1060-134878302112450/AnsiballZ_systemd.py'
Oct 02 11:47:15 compute-0 sudo[195653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:15.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:15 compute-0 python3.9[195655]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:47:16 compute-0 sudo[195653]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:16 compute-0 trusting_nobel[195499]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:47:16 compute-0 trusting_nobel[195499]: --> relative data size: 1.0
Oct 02 11:47:16 compute-0 trusting_nobel[195499]: --> All data devices are unavailable
Oct 02 11:47:16 compute-0 systemd[1]: libpod-6fa2ee004862f7c0e82a4d7f47c9999a9e53093b76b6e963a229ccfcfe7cf275.scope: Deactivated successfully.
Oct 02 11:47:16 compute-0 podman[195451]: 2025-10-02 11:47:16.082435683 +0000 UTC m=+1.148231209 container died 6fa2ee004862f7c0e82a4d7f47c9999a9e53093b76b6e963a229ccfcfe7cf275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_nobel, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:47:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-181f7001bc84f494a45aa4eb9d973c37839f56812b293ed4b3dde276b4621cb4-merged.mount: Deactivated successfully.
Oct 02 11:47:16 compute-0 podman[195451]: 2025-10-02 11:47:16.154389438 +0000 UTC m=+1.220184924 container remove 6fa2ee004862f7c0e82a4d7f47c9999a9e53093b76b6e963a229ccfcfe7cf275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_nobel, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 11:47:16 compute-0 systemd[1]: libpod-conmon-6fa2ee004862f7c0e82a4d7f47c9999a9e53093b76b6e963a229ccfcfe7cf275.scope: Deactivated successfully.
Oct 02 11:47:16 compute-0 sudo[195287]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:16 compute-0 sudo[195714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:16 compute-0 sudo[195714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:16 compute-0 sudo[195714]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:16 compute-0 sudo[195771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:47:16 compute-0 sudo[195771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:16.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:16 compute-0 sudo[195771]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:16 compute-0 sudo[195813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:16 compute-0 sudo[195813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:16 compute-0 sudo[195813]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:16 compute-0 sudo[195856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 11:47:16 compute-0 sudo[195856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:16 compute-0 sudo[195931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhafsuusmbwsjyglhpariatxtzsbgdio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405636.2135053-1060-216975589065241/AnsiballZ_systemd.py'
Oct 02 11:47:16 compute-0 sudo[195931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:16 compute-0 podman[195974]: 2025-10-02 11:47:16.707564245 +0000 UTC m=+0.043337113 container create 11ac6c3918451feddc4d05952407f6ca32800266ce3361dd95c2f31dfcf66991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 11:47:16 compute-0 systemd[1]: Started libpod-conmon-11ac6c3918451feddc4d05952407f6ca32800266ce3361dd95c2f31dfcf66991.scope.
Oct 02 11:47:16 compute-0 python3.9[195933]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:47:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:16 compute-0 podman[195974]: 2025-10-02 11:47:16.68534961 +0000 UTC m=+0.021122468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:47:16 compute-0 podman[195974]: 2025-10-02 11:47:16.794409162 +0000 UTC m=+0.130182060 container init 11ac6c3918451feddc4d05952407f6ca32800266ce3361dd95c2f31dfcf66991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldwasser, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 11:47:16 compute-0 podman[195974]: 2025-10-02 11:47:16.8071397 +0000 UTC m=+0.142912568 container start 11ac6c3918451feddc4d05952407f6ca32800266ce3361dd95c2f31dfcf66991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldwasser, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 11:47:16 compute-0 nostalgic_goldwasser[195989]: 167 167
Oct 02 11:47:16 compute-0 systemd[1]: libpod-11ac6c3918451feddc4d05952407f6ca32800266ce3361dd95c2f31dfcf66991.scope: Deactivated successfully.
Oct 02 11:47:16 compute-0 podman[195974]: 2025-10-02 11:47:16.818191286 +0000 UTC m=+0.153964134 container attach 11ac6c3918451feddc4d05952407f6ca32800266ce3361dd95c2f31dfcf66991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldwasser, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:47:16 compute-0 podman[195974]: 2025-10-02 11:47:16.818571556 +0000 UTC m=+0.154344404 container died 11ac6c3918451feddc4d05952407f6ca32800266ce3361dd95c2f31dfcf66991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldwasser, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 11:47:16 compute-0 systemd[1]: Reloading.
Oct 02 11:47:16 compute-0 podman[195974]: 2025-10-02 11:47:16.909238838 +0000 UTC m=+0.245011686 container remove 11ac6c3918451feddc4d05952407f6ca32800266ce3361dd95c2f31dfcf66991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 11:47:16 compute-0 systemd-sysv-generator[196041]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:47:16 compute-0 systemd-rc-local-generator[196037]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:47:17 compute-0 podman[196051]: 2025-10-02 11:47:17.087647681 +0000 UTC m=+0.041186579 container create 361d134ac23bf6abf5630f90d56d673b75affc1dee491023735667a2a925f621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 11:47:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-e72db8bec7c6247023430de765c8b200c2a6eafdc72817be77724becdcd89931-merged.mount: Deactivated successfully.
Oct 02 11:47:17 compute-0 podman[196051]: 2025-10-02 11:47:17.072516294 +0000 UTC m=+0.026055222 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:47:17 compute-0 systemd[1]: libpod-conmon-11ac6c3918451feddc4d05952407f6ca32800266ce3361dd95c2f31dfcf66991.scope: Deactivated successfully.
Oct 02 11:47:17 compute-0 systemd[1]: Started libpod-conmon-361d134ac23bf6abf5630f90d56d673b75affc1dee491023735667a2a925f621.scope.
Oct 02 11:47:17 compute-0 sudo[195931]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e19842bb0d75a5552580e8165443d4e1aecb835b8e1a7f312daa7ebf86b653be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e19842bb0d75a5552580e8165443d4e1aecb835b8e1a7f312daa7ebf86b653be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e19842bb0d75a5552580e8165443d4e1aecb835b8e1a7f312daa7ebf86b653be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e19842bb0d75a5552580e8165443d4e1aecb835b8e1a7f312daa7ebf86b653be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:17 compute-0 podman[196051]: 2025-10-02 11:47:17.246713591 +0000 UTC m=+0.200252509 container init 361d134ac23bf6abf5630f90d56d673b75affc1dee491023735667a2a925f621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_golick, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 11:47:17 compute-0 podman[196051]: 2025-10-02 11:47:17.255497631 +0000 UTC m=+0.209036529 container start 361d134ac23bf6abf5630f90d56d673b75affc1dee491023735667a2a925f621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 11:47:17 compute-0 podman[196051]: 2025-10-02 11:47:17.25867914 +0000 UTC m=+0.212218068 container attach 361d134ac23bf6abf5630f90d56d673b75affc1dee491023735667a2a925f621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:47:17 compute-0 ceph-mon[73607]: pgmap v586: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:17.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:17 compute-0 quirky_golick[196068]: {
Oct 02 11:47:17 compute-0 quirky_golick[196068]:     "1": [
Oct 02 11:47:17 compute-0 quirky_golick[196068]:         {
Oct 02 11:47:17 compute-0 quirky_golick[196068]:             "devices": [
Oct 02 11:47:17 compute-0 quirky_golick[196068]:                 "/dev/loop3"
Oct 02 11:47:17 compute-0 quirky_golick[196068]:             ],
Oct 02 11:47:17 compute-0 quirky_golick[196068]:             "lv_name": "ceph_lv0",
Oct 02 11:47:17 compute-0 quirky_golick[196068]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:47:17 compute-0 quirky_golick[196068]:             "lv_size": "7511998464",
Oct 02 11:47:17 compute-0 quirky_golick[196068]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:47:17 compute-0 quirky_golick[196068]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:47:17 compute-0 quirky_golick[196068]:             "name": "ceph_lv0",
Oct 02 11:47:17 compute-0 quirky_golick[196068]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:47:17 compute-0 quirky_golick[196068]:             "tags": {
Oct 02 11:47:17 compute-0 quirky_golick[196068]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:47:17 compute-0 quirky_golick[196068]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:47:17 compute-0 quirky_golick[196068]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:47:17 compute-0 quirky_golick[196068]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:47:17 compute-0 quirky_golick[196068]:                 "ceph.cluster_name": "ceph",
Oct 02 11:47:17 compute-0 quirky_golick[196068]:                 "ceph.crush_device_class": "",
Oct 02 11:47:17 compute-0 quirky_golick[196068]:                 "ceph.encrypted": "0",
Oct 02 11:47:17 compute-0 quirky_golick[196068]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:47:17 compute-0 quirky_golick[196068]:                 "ceph.osd_id": "1",
Oct 02 11:47:17 compute-0 quirky_golick[196068]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:47:17 compute-0 quirky_golick[196068]:                 "ceph.type": "block",
Oct 02 11:47:17 compute-0 quirky_golick[196068]:                 "ceph.vdo": "0"
Oct 02 11:47:17 compute-0 quirky_golick[196068]:             },
Oct 02 11:47:17 compute-0 quirky_golick[196068]:             "type": "block",
Oct 02 11:47:17 compute-0 quirky_golick[196068]:             "vg_name": "ceph_vg0"
Oct 02 11:47:17 compute-0 quirky_golick[196068]:         }
Oct 02 11:47:17 compute-0 quirky_golick[196068]:     ]
Oct 02 11:47:17 compute-0 quirky_golick[196068]: }
Oct 02 11:47:18 compute-0 systemd[1]: libpod-361d134ac23bf6abf5630f90d56d673b75affc1dee491023735667a2a925f621.scope: Deactivated successfully.
Oct 02 11:47:18 compute-0 podman[196051]: 2025-10-02 11:47:18.021722485 +0000 UTC m=+0.975261393 container died 361d134ac23bf6abf5630f90d56d673b75affc1dee491023735667a2a925f621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 11:47:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-e19842bb0d75a5552580e8165443d4e1aecb835b8e1a7f312daa7ebf86b653be-merged.mount: Deactivated successfully.
Oct 02 11:47:18 compute-0 podman[196051]: 2025-10-02 11:47:18.078980004 +0000 UTC m=+1.032518912 container remove 361d134ac23bf6abf5630f90d56d673b75affc1dee491023735667a2a925f621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 11:47:18 compute-0 systemd[1]: libpod-conmon-361d134ac23bf6abf5630f90d56d673b75affc1dee491023735667a2a925f621.scope: Deactivated successfully.
Oct 02 11:47:18 compute-0 sudo[195856]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:18 compute-0 sudo[196115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:18 compute-0 sudo[196115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:18 compute-0 sudo[196115]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:18 compute-0 sudo[196140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:47:18 compute-0 sudo[196140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:18 compute-0 sudo[196140]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:18.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:18 compute-0 sudo[196166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:18 compute-0 sudo[196166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:18 compute-0 sudo[196166]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:18 compute-0 sudo[196227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 11:47:18 compute-0 sudo[196227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:18 compute-0 sudo[196365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqewwklsmwxhtmmbjjllzvaizlceidkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405638.3098788-1168-236489632834671/AnsiballZ_systemd.py'
Oct 02 11:47:18 compute-0 sudo[196365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:18 compute-0 podman[196382]: 2025-10-02 11:47:18.767462747 +0000 UTC m=+0.050384738 container create 1f43e689c91ffbad542900a58266cb4f7b9acc1028b89438465fb6e0dc4964c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 11:47:18 compute-0 ceph-mon[73607]: pgmap v587: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:18 compute-0 systemd[1]: Started libpod-conmon-1f43e689c91ffbad542900a58266cb4f7b9acc1028b89438465fb6e0dc4964c8.scope.
Oct 02 11:47:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:18 compute-0 podman[196382]: 2025-10-02 11:47:18.745571181 +0000 UTC m=+0.028493162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:47:18 compute-0 podman[196382]: 2025-10-02 11:47:18.856317045 +0000 UTC m=+0.139239006 container init 1f43e689c91ffbad542900a58266cb4f7b9acc1028b89438465fb6e0dc4964c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:47:18 compute-0 podman[196382]: 2025-10-02 11:47:18.863705249 +0000 UTC m=+0.146627210 container start 1f43e689c91ffbad542900a58266cb4f7b9acc1028b89438465fb6e0dc4964c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_sammet, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:47:18 compute-0 podman[196382]: 2025-10-02 11:47:18.867191956 +0000 UTC m=+0.150113937 container attach 1f43e689c91ffbad542900a58266cb4f7b9acc1028b89438465fb6e0dc4964c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 11:47:18 compute-0 systemd[1]: libpod-1f43e689c91ffbad542900a58266cb4f7b9acc1028b89438465fb6e0dc4964c8.scope: Deactivated successfully.
Oct 02 11:47:18 compute-0 admiring_sammet[196398]: 167 167
Oct 02 11:47:18 compute-0 conmon[196398]: conmon 1f43e689c91ffbad5429 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1f43e689c91ffbad542900a58266cb4f7b9acc1028b89438465fb6e0dc4964c8.scope/container/memory.events
Oct 02 11:47:18 compute-0 podman[196382]: 2025-10-02 11:47:18.870396266 +0000 UTC m=+0.153318267 container died 1f43e689c91ffbad542900a58266cb4f7b9acc1028b89438465fb6e0dc4964c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 11:47:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-38ec6447304857b4186d6ef7df6b35eb50741d895aafa61c8ba63e11da087c63-merged.mount: Deactivated successfully.
Oct 02 11:47:18 compute-0 podman[196382]: 2025-10-02 11:47:18.919773448 +0000 UTC m=+0.202695419 container remove 1f43e689c91ffbad542900a58266cb4f7b9acc1028b89438465fb6e0dc4964c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:47:18 compute-0 systemd[1]: libpod-conmon-1f43e689c91ffbad542900a58266cb4f7b9acc1028b89438465fb6e0dc4964c8.scope: Deactivated successfully.
Oct 02 11:47:18 compute-0 python3.9[196369]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 11:47:19 compute-0 systemd[1]: Reloading.
Oct 02 11:47:19 compute-0 podman[196425]: 2025-10-02 11:47:19.119830781 +0000 UTC m=+0.040060550 container create 2de397b18dfa487c2f87bbe671c8abf88ef4892a4c83bd829d98ef3f9cf4aa37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shirley, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:47:19 compute-0 systemd-sysv-generator[196469]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:47:19 compute-0 systemd-rc-local-generator[196466]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:47:19 compute-0 podman[196425]: 2025-10-02 11:47:19.101948795 +0000 UTC m=+0.022178484 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:47:19 compute-0 systemd[1]: Started libpod-conmon-2de397b18dfa487c2f87bbe671c8abf88ef4892a4c83bd829d98ef3f9cf4aa37.scope.
Oct 02 11:47:19 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Oct 02 11:47:19 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct 02 11:47:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e1f293ab20261232d30ae10fcd229cf075ed70330436263d45914d9a528a31a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e1f293ab20261232d30ae10fcd229cf075ed70330436263d45914d9a528a31a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e1f293ab20261232d30ae10fcd229cf075ed70330436263d45914d9a528a31a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e1f293ab20261232d30ae10fcd229cf075ed70330436263d45914d9a528a31a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:19 compute-0 sudo[196365]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:19 compute-0 podman[196425]: 2025-10-02 11:47:19.414000614 +0000 UTC m=+0.334230303 container init 2de397b18dfa487c2f87bbe671c8abf88ef4892a4c83bd829d98ef3f9cf4aa37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shirley, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 11:47:19 compute-0 podman[196425]: 2025-10-02 11:47:19.422180648 +0000 UTC m=+0.342410307 container start 2de397b18dfa487c2f87bbe671c8abf88ef4892a4c83bd829d98ef3f9cf4aa37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:47:19 compute-0 podman[196425]: 2025-10-02 11:47:19.425313006 +0000 UTC m=+0.345542675 container attach 2de397b18dfa487c2f87bbe671c8abf88ef4892a4c83bd829d98ef3f9cf4aa37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 11:47:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:47:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:19.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:47:20 compute-0 sudo[196633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjscojdrnmchygpzdwndwgbydxwypkox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405639.7498946-1192-29407447796849/AnsiballZ_systemd.py'
Oct 02 11:47:20 compute-0 sudo[196633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:20 compute-0 brave_shirley[196478]: {
Oct 02 11:47:20 compute-0 brave_shirley[196478]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 11:47:20 compute-0 brave_shirley[196478]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:47:20 compute-0 brave_shirley[196478]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:47:20 compute-0 brave_shirley[196478]:         "osd_id": 1,
Oct 02 11:47:20 compute-0 brave_shirley[196478]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:47:20 compute-0 brave_shirley[196478]:         "type": "bluestore"
Oct 02 11:47:20 compute-0 brave_shirley[196478]:     }
Oct 02 11:47:20 compute-0 brave_shirley[196478]: }
Oct 02 11:47:20 compute-0 systemd[1]: libpod-2de397b18dfa487c2f87bbe671c8abf88ef4892a4c83bd829d98ef3f9cf4aa37.scope: Deactivated successfully.
Oct 02 11:47:20 compute-0 podman[196425]: 2025-10-02 11:47:20.276531971 +0000 UTC m=+1.196761630 container died 2de397b18dfa487c2f87bbe671c8abf88ef4892a4c83bd829d98ef3f9cf4aa37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 11:47:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:20.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:20 compute-0 python3.9[196635]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:47:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e1f293ab20261232d30ae10fcd229cf075ed70330436263d45914d9a528a31a-merged.mount: Deactivated successfully.
Oct 02 11:47:20 compute-0 podman[196425]: 2025-10-02 11:47:20.340218011 +0000 UTC m=+1.260447670 container remove 2de397b18dfa487c2f87bbe671c8abf88ef4892a4c83bd829d98ef3f9cf4aa37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:47:20 compute-0 systemd[1]: libpod-conmon-2de397b18dfa487c2f87bbe671c8abf88ef4892a4c83bd829d98ef3f9cf4aa37.scope: Deactivated successfully.
Oct 02 11:47:20 compute-0 sudo[196227]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:47:20 compute-0 sudo[196633]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:20 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:47:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:47:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:20 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:47:20 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 10ba9f28-d42d-4e92-a91d-0961ace8b695 does not exist
Oct 02 11:47:20 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev a8cd7243-c42e-4d25-a1fd-1e0667d04d84 does not exist
Oct 02 11:47:20 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 1e56774f-5d23-46f7-843b-78d945d59084 does not exist
Oct 02 11:47:20 compute-0 sudo[196768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:20 compute-0 sudo[196768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:20 compute-0 sudo[196768]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:20 compute-0 sudo[196862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayvbyryrrbcperooqwhmauxpjxvmoyoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405640.5528708-1192-4473459573089/AnsiballZ_systemd.py'
Oct 02 11:47:20 compute-0 sudo[196862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:20 compute-0 sudo[196825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:47:20 compute-0 sudo[196825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:20 compute-0 sudo[196825]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:21 compute-0 python3.9[196868]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:47:21 compute-0 sudo[196862]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:21 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:47:21 compute-0 ceph-mon[73607]: pgmap v588: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:21 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:47:21 compute-0 sudo[197022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdlgeoiucdtdfsifspskiqbmkonqfmff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405641.3840501-1192-215035748422295/AnsiballZ_systemd.py'
Oct 02 11:47:21 compute-0 sudo[197022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:21.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:21 compute-0 python3.9[197024]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:47:22 compute-0 sudo[197022]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:22.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:22 compute-0 sudo[197178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wunwovrgzyrpdpsgzkzdzhwbtvhrczqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405642.1521652-1192-193943560129713/AnsiballZ_systemd.py'
Oct 02 11:47:22 compute-0 sudo[197178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:22 compute-0 python3.9[197180]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:47:22 compute-0 sudo[197178]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:22 compute-0 sudo[197208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:22 compute-0 sudo[197208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:22 compute-0 sudo[197208]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:22 compute-0 sudo[197245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:22 compute-0 sudo[197245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:22 compute-0 sudo[197245]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:23 compute-0 sudo[197383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeftyuxmhbpdfvhjqvilygsxnxroxtrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405642.896662-1192-113298533339856/AnsiballZ_systemd.py'
Oct 02 11:47:23 compute-0 sudo[197383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:23 compute-0 python3.9[197385]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:47:23 compute-0 sudo[197383]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:23 compute-0 ceph-mon[73607]: pgmap v589: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:23.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:23 compute-0 sudo[197538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzbtlebacvxiuttgrprkufizszwojnzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405643.646605-1192-82842259077692/AnsiballZ_systemd.py'
Oct 02 11:47:23 compute-0 sudo[197538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:24 compute-0 python3.9[197540]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:47:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:47:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:24.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:47:24 compute-0 sudo[197538]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:24 compute-0 sudo[197694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcymwuhnnqivvyvqfmtehbvqtknajwny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405644.4746919-1192-45922211748384/AnsiballZ_systemd.py'
Oct 02 11:47:24 compute-0 sudo[197694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:25 compute-0 python3.9[197696]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:47:25 compute-0 sudo[197694]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:25 compute-0 sudo[197849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydwmolytmozvzncfullvvkemyunvirhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405645.27489-1192-152469363009000/AnsiballZ_systemd.py'
Oct 02 11:47:25 compute-0 sudo[197849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:25 compute-0 ceph-mon[73607]: pgmap v590: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:25.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:25 compute-0 python3.9[197851]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:47:25 compute-0 sudo[197849]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:26.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:26 compute-0 sudo[198005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xebddzxzqzfhkpzyrpwlekwvfhsjynwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405646.088458-1192-115005141957348/AnsiballZ_systemd.py'
Oct 02 11:47:26 compute-0 sudo[198005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:26 compute-0 python3.9[198007]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:47:26 compute-0 ceph-mon[73607]: pgmap v591: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:26 compute-0 sudo[198005]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:47:26.902 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:47:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:47:26.903 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:47:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:47:26.903 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:47:27 compute-0 sudo[198160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcskyphaobwvzhtmwzeeefezqnchwpti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405646.9794126-1192-134854957062927/AnsiballZ_systemd.py'
Oct 02 11:47:27 compute-0 sudo[198160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:27 compute-0 python3.9[198162]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:47:27 compute-0 sudo[198160]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:27.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:28 compute-0 sudo[198315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaixxbtvcgaxhotxlournehvwbydsrei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405647.7930884-1192-249041871854828/AnsiballZ_systemd.py'
Oct 02 11:47:28 compute-0 sudo[198315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:28.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:28 compute-0 python3.9[198317]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:47:28 compute-0 sudo[198315]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:28 compute-0 sudo[198471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uweilghcnriymmwbxjlgqkmeoswoftij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405648.513073-1192-1150169048611/AnsiballZ_systemd.py'
Oct 02 11:47:28 compute-0 sudo[198471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:29 compute-0 python3.9[198473]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:47:29 compute-0 sudo[198471]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:29 compute-0 sudo[198626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llbkdljipeupqsqybvgwfweikovskbwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405649.3386278-1192-252387743896464/AnsiballZ_systemd.py'
Oct 02 11:47:29 compute-0 sudo[198626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:29 compute-0 ceph-mon[73607]: pgmap v592: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:29.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:29 compute-0 python3.9[198628]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:47:30 compute-0 sudo[198626]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:30 compute-0 podman[198632]: 2025-10-02 11:47:30.084081914 +0000 UTC m=+0.074853799 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 02 11:47:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:47:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:30.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:47:30 compute-0 sudo[198802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwablijevbmrmsetyaiixvzivyphmlcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405650.192425-1192-276077388088335/AnsiballZ_systemd.py'
Oct 02 11:47:30 compute-0 sudo[198802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:30 compute-0 python3.9[198804]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:47:30 compute-0 sudo[198802]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:31 compute-0 ceph-mon[73607]: pgmap v593: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:31.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:32.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:32 compute-0 ceph-mon[73607]: pgmap v594: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:33 compute-0 sudo[198958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzvypbakcrgqedfglnolfhhetdcwpnjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405653.1896603-1498-139400875840016/AnsiballZ_file.py'
Oct 02 11:47:33 compute-0 sudo[198958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:33 compute-0 python3.9[198960]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:47:33 compute-0 sudo[198958]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:33.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:34 compute-0 sudo[199110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szoufuykplawrxoxyoujspifbuhnlapm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405653.755341-1498-260815786237564/AnsiballZ_file.py'
Oct 02 11:47:34 compute-0 sudo[199110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:34 compute-0 python3.9[199112]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:47:34 compute-0 sudo[199110]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:47:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:34.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:47:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:34 compute-0 sudo[199263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwogusssxtcvqauuiutmhofjkwxytfds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405654.4423218-1498-72850060740154/AnsiballZ_file.py'
Oct 02 11:47:34 compute-0 sudo[199263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:34 compute-0 python3.9[199265]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:47:34 compute-0 sudo[199263]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:35 compute-0 sudo[199415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-youzxsgssinwdtnqmzfgvqeqnqrzalmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405655.1348834-1498-155986262489838/AnsiballZ_file.py'
Oct 02 11:47:35 compute-0 sudo[199415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:35 compute-0 ceph-mon[73607]: pgmap v595: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:35 compute-0 python3.9[199417]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:47:35 compute-0 sudo[199415]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:35.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:36 compute-0 sudo[199567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdpkvftgsnskpdyurwdosobcpvdlypxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405655.8367877-1498-214017894442273/AnsiballZ_file.py'
Oct 02 11:47:36 compute-0 sudo[199567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:47:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:36.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:47:36 compute-0 python3.9[199569]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:47:36 compute-0 sudo[199567]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:36 compute-0 sudo[199720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvheexfnkeqyegkrfjzosmyojjypmoyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405656.452308-1498-152687098942304/AnsiballZ_file.py'
Oct 02 11:47:36 compute-0 sudo[199720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:36 compute-0 python3.9[199722]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:47:36 compute-0 sudo[199720]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:36 compute-0 ceph-mon[73607]: pgmap v596: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:37 compute-0 sudo[199872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuzdjlrxhufswxleferxicreiztbmrta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405657.2736473-1627-104185456790152/AnsiballZ_stat.py'
Oct 02 11:47:37 compute-0 sudo[199872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:47:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:37.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:47:37 compute-0 python3.9[199874]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:47:37 compute-0 sudo[199872]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:47:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:38.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:47:38 compute-0 sudo[199998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujqstagdajdfuuowhqwqilktvgctdexj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405657.2736473-1627-104185456790152/AnsiballZ_copy.py'
Oct 02 11:47:38 compute-0 sudo[199998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:38 compute-0 python3.9[200000]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759405657.2736473-1627-104185456790152/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:47:38 compute-0 sudo[199998]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:39 compute-0 sudo[200150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxwhmkusdjuguqabumibwdyeuuxflztc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405658.7822173-1627-156407258131246/AnsiballZ_stat.py'
Oct 02 11:47:39 compute-0 sudo[200150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:39 compute-0 python3.9[200152]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:47:39 compute-0 sudo[200150]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:39 compute-0 sudo[200275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkmaewnpejfcuglqrqtegaegohesxafz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405658.7822173-1627-156407258131246/AnsiballZ_copy.py'
Oct 02 11:47:39 compute-0 sudo[200275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:39 compute-0 ceph-mon[73607]: pgmap v597: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:39 compute-0 python3.9[200277]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759405658.7822173-1627-156407258131246/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:47:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:39.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:39 compute-0 sudo[200275]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:40 compute-0 sudo[200427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pykbllppcpluoheiowvfwtskeszpugrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405659.989624-1627-218072940422426/AnsiballZ_stat.py'
Oct 02 11:47:40 compute-0 sudo[200427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:47:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:40.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:47:40 compute-0 python3.9[200429]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:47:40 compute-0 sudo[200427]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:40 compute-0 ceph-mon[73607]: pgmap v598: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:40 compute-0 sudo[200553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmykmywfoyyaeofkbwzmpnlfelxigjjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405659.989624-1627-218072940422426/AnsiballZ_copy.py'
Oct 02 11:47:40 compute-0 sudo[200553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:41 compute-0 python3.9[200555]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759405659.989624-1627-218072940422426/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:47:41 compute-0 sudo[200553]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:41 compute-0 sudo[200705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljjyfmnwrocczbbwxgthjlxuwrrmtkkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405661.213922-1627-39216744597150/AnsiballZ_stat.py'
Oct 02 11:47:41 compute-0 sudo[200705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:41 compute-0 python3.9[200707]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:47:41 compute-0 sudo[200705]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:41.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:42 compute-0 sudo[200830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtllrnrskjfekynnakddgwxxedonquob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405661.213922-1627-39216744597150/AnsiballZ_copy.py'
Oct 02 11:47:42 compute-0 sudo[200830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:42 compute-0 python3.9[200832]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759405661.213922-1627-39216744597150/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:47:42 compute-0 sudo[200830]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:42.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_11:47:42
Oct 02 11:47:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:47:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 11:47:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'backups', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'vms', 'images', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root']
Oct 02 11:47:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:47:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:47:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:47:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:47:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:47:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:47:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:47:42 compute-0 sudo[200983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmwbzribjesntspgtphinkvwudxzxudh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405662.3515837-1627-27958561384914/AnsiballZ_stat.py'
Oct 02 11:47:42 compute-0 sudo[200983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:47:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:47:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:47:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:47:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:47:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:47:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:47:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:47:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:47:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:47:42 compute-0 python3.9[200985]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:47:42 compute-0 sudo[200983]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:42 compute-0 sudo[200988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:42 compute-0 sudo[200988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:42 compute-0 sudo[200988]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:43 compute-0 sudo[201030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:43 compute-0 sudo[201030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:43 compute-0 sudo[201030]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:43 compute-0 sudo[201158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzododugmcqfgumqinxoutugjiqghzma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405662.3515837-1627-27958561384914/AnsiballZ_copy.py'
Oct 02 11:47:43 compute-0 sudo[201158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:43 compute-0 python3.9[201160]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759405662.3515837-1627-27958561384914/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:47:43 compute-0 sudo[201158]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:43 compute-0 podman[201161]: 2025-10-02 11:47:43.588954677 +0000 UTC m=+0.089324560 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 11:47:43 compute-0 ceph-mon[73607]: pgmap v599: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:47:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:43.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:47:43 compute-0 sudo[201335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-facqteiaoujtzbddwyqyecbxjfyvbzfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405663.6266613-1627-147409341025772/AnsiballZ_stat.py'
Oct 02 11:47:43 compute-0 sudo[201335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:44 compute-0 python3.9[201337]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:47:44 compute-0 sudo[201335]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:47:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:44.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:47:44 compute-0 sudo[201461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omnuyqsoxoghwgwlerlaaqkhescuotjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405663.6266613-1627-147409341025772/AnsiballZ_copy.py'
Oct 02 11:47:44 compute-0 sudo[201461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:44 compute-0 python3.9[201463]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759405663.6266613-1627-147409341025772/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:47:44 compute-0 sudo[201461]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:45 compute-0 sudo[201613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oamakfmelliukvbfimrzkutnxrwryvwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405664.8681557-1627-59223080446496/AnsiballZ_stat.py'
Oct 02 11:47:45 compute-0 sudo[201613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:45 compute-0 ceph-mon[73607]: pgmap v600: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:45 compute-0 python3.9[201615]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:47:45 compute-0 sudo[201613]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:45 compute-0 sudo[201736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roloangvankpkurimjtwwzvbketphkbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405664.8681557-1627-59223080446496/AnsiballZ_copy.py'
Oct 02 11:47:45 compute-0 sudo[201736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:45.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:45 compute-0 python3.9[201738]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759405664.8681557-1627-59223080446496/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:47:45 compute-0 sudo[201736]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:47:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:46.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:47:46 compute-0 sudo[201889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fudnmrgsbprdldsfchdqjxdcjozrmbsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405666.0586545-1627-247974253876342/AnsiballZ_stat.py'
Oct 02 11:47:46 compute-0 sudo[201889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:46 compute-0 python3.9[201891]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:47:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:46 compute-0 sudo[201889]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:46 compute-0 sudo[202014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhjmesdklahadlcifntimfjvcimxnivm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405666.0586545-1627-247974253876342/AnsiballZ_copy.py'
Oct 02 11:47:46 compute-0 sudo[202014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:47 compute-0 python3.9[202016]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759405666.0586545-1627-247974253876342/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:47:47 compute-0 sudo[202014]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:47:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:47.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:47:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:48.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:49 compute-0 sudo[202167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhitnogoosqrcmkoktavkydymjusffff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405668.9690788-1966-126356712762819/AnsiballZ_command.py'
Oct 02 11:47:49 compute-0 sudo[202167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:49 compute-0 python3.9[202169]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct 02 11:47:49 compute-0 sudo[202167]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:49.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:47:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:50.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:47:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:51.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:52.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:52 compute-0 sudo[202322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whpvznwthzapktwqwjluiozbxphqwzak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405672.480637-1993-234760702315441/AnsiballZ_file.py'
Oct 02 11:47:52 compute-0 sudo[202322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:53 compute-0 python3.9[202324]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:47:53 compute-0 sudo[202322]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:53 compute-0 sudo[202474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihdjexhvnafuzdgoeaukxgnhbdikfsqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405673.3150353-1993-92086524749672/AnsiballZ_file.py'
Oct 02 11:47:53 compute-0 sudo[202474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:53.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:47:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:47:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:47:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:47:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:47:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:47:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:47:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:47:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:47:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:47:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:47:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:47:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:47:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:47:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:47:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:47:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 11:47:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:47:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:47:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:47:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:47:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:47:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:47:53 compute-0 python3.9[202476]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:47:54 compute-0 sudo[202474]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:54 compute-0 ceph-mon[73607]: pgmap v601: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:54.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:54 compute-0 sudo[202627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrbeqkszuwukapwvrcgoxlpqufyahbet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405674.1239915-1993-186412922052904/AnsiballZ_file.py'
Oct 02 11:47:54 compute-0 sudo[202627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:54 compute-0 python3.9[202629]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:47:54 compute-0 sudo[202627]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:55 compute-0 sudo[202779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcaynwpschukrpkaaixjpqpsutmkrjln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405674.858382-1993-254368120460380/AnsiballZ_file.py'
Oct 02 11:47:55 compute-0 sudo[202779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:55 compute-0 python3.9[202781]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:47:55 compute-0 sudo[202779]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:55 compute-0 sudo[202931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtwhpgysixwxggbwqfoficgpcjuklfkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405675.569472-1993-98187376062992/AnsiballZ_file.py'
Oct 02 11:47:55 compute-0 sudo[202931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:55.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:55 compute-0 ceph-mon[73607]: pgmap v602: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:55 compute-0 ceph-mon[73607]: pgmap v603: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:55 compute-0 ceph-mon[73607]: pgmap v604: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:55 compute-0 ceph-mon[73607]: pgmap v605: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:56.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:56 compute-0 python3.9[202933]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:47:56 compute-0 sudo[202931]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:56 compute-0 sudo[203084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jecazmojeubgrceizaargdqcwsavegym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405676.5505016-1993-146815468582080/AnsiballZ_file.py'
Oct 02 11:47:56 compute-0 sudo[203084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:57 compute-0 python3.9[203086]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:47:57 compute-0 sudo[203084]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:57 compute-0 ceph-mon[73607]: pgmap v606: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:57 compute-0 sudo[203236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilfsmagijmmzazgvqizmcnokihemfoal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405677.3370578-1993-67205651917780/AnsiballZ_file.py'
Oct 02 11:47:57 compute-0 sudo[203236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:57.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:58 compute-0 python3.9[203238]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:47:58 compute-0 sudo[203236]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:58.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:58 compute-0 sudo[203389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqwjmdaqnrwnqechmwhinszxyepekkvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405678.1967652-1993-48108907919210/AnsiballZ_file.py'
Oct 02 11:47:58 compute-0 sudo[203389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:58 compute-0 python3.9[203391]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:47:58 compute-0 sudo[203389]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:59 compute-0 sudo[203541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apmujaoitmxpxjxpsifgykeefeodfqov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405678.89305-1993-68403139872663/AnsiballZ_file.py'
Oct 02 11:47:59 compute-0 sudo[203541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:59 compute-0 ceph-mon[73607]: pgmap v607: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:59 compute-0 python3.9[203543]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:47:59 compute-0 sudo[203541]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:47:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:59.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:00 compute-0 sudo[203693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqhdabtsalhxujwkrojfcxdggdybjlor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405679.7038765-1993-31289457863572/AnsiballZ_file.py'
Oct 02 11:48:00 compute-0 sudo[203693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:48:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:00.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:48:00 compute-0 python3.9[203695]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:00 compute-0 sudo[203693]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:00 compute-0 sudo[203864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iauoruuarcuhsphoxnevduxzknxlywpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405680.5266387-1993-201905157320868/AnsiballZ_file.py'
Oct 02 11:48:00 compute-0 sudo[203864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:00 compute-0 podman[203820]: 2025-10-02 11:48:00.878129157 +0000 UTC m=+0.076245164 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 11:48:01 compute-0 python3.9[203867]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:01 compute-0 sudo[203864]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:01 compute-0 anacron[26590]: Job `cron.weekly' started
Oct 02 11:48:01 compute-0 anacron[26590]: Job `cron.weekly' terminated
Oct 02 11:48:01 compute-0 ceph-mon[73607]: pgmap v608: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:01 compute-0 sudo[204019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoxvbtbvaiojacntdpmxtunyyattttov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405681.2558548-1993-185619452267301/AnsiballZ_file.py'
Oct 02 11:48:01 compute-0 sudo[204019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:01.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:01 compute-0 python3.9[204021]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:01 compute-0 sudo[204019]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:48:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:02.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:48:02 compute-0 sudo[204172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyirbwuxtvexodveydxkcmohwzgdgune ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405682.0799675-1993-157521448476972/AnsiballZ_file.py'
Oct 02 11:48:02 compute-0 sudo[204172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:02 compute-0 python3.9[204174]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:02 compute-0 sudo[204172]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:03 compute-0 sudo[204280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:03 compute-0 sudo[204280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:03 compute-0 sudo[204280]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:03 compute-0 sudo[204330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:03 compute-0 sudo[204330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:03 compute-0 sudo[204330]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:03 compute-0 sudo[204372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrxayzsvvtdeoxytbtiqoykfjnnygqgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405682.8954637-1993-118812581413964/AnsiballZ_file.py'
Oct 02 11:48:03 compute-0 sudo[204372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:03 compute-0 python3.9[204376]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:03 compute-0 sudo[204372]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:03 compute-0 ceph-mon[73607]: pgmap v609: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:48:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:03.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:48:03 compute-0 sudo[204526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exuvrvmfdacwlcmzudxeyzjxviveybqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405683.6097326-2290-232441881865352/AnsiballZ_stat.py'
Oct 02 11:48:03 compute-0 sudo[204526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:04 compute-0 python3.9[204528]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:48:04 compute-0 sudo[204526]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:48:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:04.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:48:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:04 compute-0 sudo[204650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcvosbnaxygmdozvxhjdjxnzcxoghbxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405683.6097326-2290-232441881865352/AnsiballZ_copy.py'
Oct 02 11:48:04 compute-0 sudo[204650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:04 compute-0 python3.9[204652]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405683.6097326-2290-232441881865352/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:04 compute-0 sudo[204650]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:04 compute-0 ceph-mon[73607]: pgmap v610: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:05 compute-0 sudo[204802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztyogcqzvmkklorgfwbtgthkabvgpyrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405684.953509-2290-8747689560431/AnsiballZ_stat.py'
Oct 02 11:48:05 compute-0 sudo[204802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:05 compute-0 python3.9[204804]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:48:05 compute-0 sudo[204802]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:05 compute-0 sudo[204925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkrnowuoajxouybkhphagqpddmhqsded ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405684.953509-2290-8747689560431/AnsiballZ_copy.py'
Oct 02 11:48:05 compute-0 sudo[204925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:48:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:05.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:48:06 compute-0 python3.9[204927]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405684.953509-2290-8747689560431/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:06 compute-0 sudo[204925]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:48:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:06.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:48:06 compute-0 sudo[205078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzkobaokpgmiofsozdetrzblkyfwxtfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405686.220489-2290-78272764504099/AnsiballZ_stat.py'
Oct 02 11:48:06 compute-0 sudo[205078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:06 compute-0 python3.9[205080]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:48:06 compute-0 sudo[205078]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:07 compute-0 sudo[205201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyxxtmvykcoxqwsxdxbdixrdapizlbgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405686.220489-2290-78272764504099/AnsiballZ_copy.py'
Oct 02 11:48:07 compute-0 sudo[205201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:07 compute-0 python3.9[205203]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405686.220489-2290-78272764504099/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:07 compute-0 sudo[205201]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:07 compute-0 sudo[205353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtfpgxtisyxvjndxaurkvamghuzcfqmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405687.4347045-2290-48485817151093/AnsiballZ_stat.py'
Oct 02 11:48:07 compute-0 sudo[205353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:07 compute-0 ceph-mon[73607]: pgmap v611: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:48:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:07.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:48:07 compute-0 python3.9[205355]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:48:07 compute-0 sudo[205353]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:08 compute-0 sudo[205476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmdpuxmrrouvvqthnbtefodvapgkzahq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405687.4347045-2290-48485817151093/AnsiballZ_copy.py'
Oct 02 11:48:08 compute-0 sudo[205476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:48:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:08.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:48:08 compute-0 python3.9[205478]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405687.4347045-2290-48485817151093/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:08 compute-0 sudo[205476]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:08 compute-0 ceph-mon[73607]: pgmap v612: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:08 compute-0 sudo[205629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzpfnescnifrkhmaipjfeaxnqnxctzte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405688.6675687-2290-63689451548344/AnsiballZ_stat.py'
Oct 02 11:48:08 compute-0 sudo[205629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:09 compute-0 python3.9[205631]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:48:09 compute-0 sudo[205629]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:09 compute-0 sudo[205752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lglvngsyicaivpcchobgabelrdeiikki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405688.6675687-2290-63689451548344/AnsiballZ_copy.py'
Oct 02 11:48:09 compute-0 sudo[205752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:09 compute-0 python3.9[205754]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405688.6675687-2290-63689451548344/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:09 compute-0 sudo[205752]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:09.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:10 compute-0 sudo[205904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syptyzqsqamcdniazllvzujbdiojufnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405689.8775637-2290-139899626646099/AnsiballZ_stat.py'
Oct 02 11:48:10 compute-0 sudo[205904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:10.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:10 compute-0 python3.9[205906]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:48:10 compute-0 sudo[205904]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:10 compute-0 sudo[206028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcwihlpqytpxxzdjutiqujdhbxyorddl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405689.8775637-2290-139899626646099/AnsiballZ_copy.py'
Oct 02 11:48:10 compute-0 sudo[206028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:11 compute-0 python3.9[206030]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405689.8775637-2290-139899626646099/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:11 compute-0 sudo[206028]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:11 compute-0 ceph-mon[73607]: pgmap v613: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:11 compute-0 sudo[206180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whilpzjyzhfcshjhmekxdxuzguiwmsoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405691.4226973-2290-144722838644477/AnsiballZ_stat.py'
Oct 02 11:48:11 compute-0 sudo[206180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:11.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:11 compute-0 python3.9[206182]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:48:11 compute-0 sudo[206180]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:12 compute-0 sudo[206303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ungetzxjhjguixcgffhzrpescookduvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405691.4226973-2290-144722838644477/AnsiballZ_copy.py'
Oct 02 11:48:12 compute-0 sudo[206303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:12.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:12 compute-0 python3.9[206305]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405691.4226973-2290-144722838644477/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:12 compute-0 sudo[206303]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:48:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:48:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:48:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:48:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:48:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:48:12 compute-0 ceph-mon[73607]: pgmap v614: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:12 compute-0 sudo[206456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqbmjhpolpwirhbwdvughueuqfougich ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405692.628906-2290-54561795136048/AnsiballZ_stat.py'
Oct 02 11:48:12 compute-0 sudo[206456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:13 compute-0 python3.9[206458]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:48:13 compute-0 sudo[206456]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:13 compute-0 sudo[206579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlkayudiqydumujwbtzotutitaxytruv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405692.628906-2290-54561795136048/AnsiballZ_copy.py'
Oct 02 11:48:13 compute-0 sudo[206579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:13 compute-0 python3.9[206581]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405692.628906-2290-54561795136048/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:13 compute-0 sudo[206579]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:48:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:13.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:48:14 compute-0 podman[206649]: 2025-10-02 11:48:14.02798029 +0000 UTC m=+0.162058261 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 11:48:14 compute-0 sudo[206757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehxwnigxkliwvkdifekrbnvzliwtedko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405693.799908-2290-198745046379213/AnsiballZ_stat.py'
Oct 02 11:48:14 compute-0 sudo[206757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:14.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:14 compute-0 python3.9[206759]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:48:14 compute-0 sudo[206757]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:14 compute-0 sudo[206881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqvzvnqczbrfefffhdkidfmzsuvpjkys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405693.799908-2290-198745046379213/AnsiballZ_copy.py'
Oct 02 11:48:14 compute-0 sudo[206881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:14 compute-0 python3.9[206883]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405693.799908-2290-198745046379213/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:14 compute-0 sudo[206881]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:14 compute-0 ceph-mon[73607]: pgmap v615: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:15 compute-0 sudo[207033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjotucmzqkbvylzupkzdnhafnnhgujuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405695.1017053-2290-227395334028621/AnsiballZ_stat.py'
Oct 02 11:48:15 compute-0 sudo[207033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:15 compute-0 python3.9[207035]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:48:15 compute-0 sudo[207033]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:48:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:15.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:48:16 compute-0 sudo[207156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtdbojxkrbjfuzxhgfehgmxgobhiqexw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405695.1017053-2290-227395334028621/AnsiballZ_copy.py'
Oct 02 11:48:16 compute-0 sudo[207156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:16 compute-0 python3.9[207158]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405695.1017053-2290-227395334028621/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:16.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:16 compute-0 sudo[207156]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:16 compute-0 sudo[207309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ireigtbobcifqxesrbqrwmyxecwdphxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405696.4748747-2290-30269444190648/AnsiballZ_stat.py'
Oct 02 11:48:16 compute-0 sudo[207309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:16 compute-0 python3.9[207311]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:48:16 compute-0 sudo[207309]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:17 compute-0 sudo[207432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brtdktihijssrpuesurtgwnxsstdzfto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405696.4748747-2290-30269444190648/AnsiballZ_copy.py'
Oct 02 11:48:17 compute-0 sudo[207432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:17 compute-0 python3.9[207434]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405696.4748747-2290-30269444190648/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:17 compute-0 sudo[207432]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:17 compute-0 ceph-mon[73607]: pgmap v616: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.002000048s ======
Oct 02 11:48:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:17.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Oct 02 11:48:18 compute-0 sudo[207584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oczbfcsrcqvypbktutjoosjjkzhqszlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405697.7446065-2290-200076865218027/AnsiballZ_stat.py'
Oct 02 11:48:18 compute-0 sudo[207584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:18 compute-0 python3.9[207586]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:48:18 compute-0 sudo[207584]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:48:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:18.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:48:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:18 compute-0 sudo[207708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvhqssvlwhaoozqodpqcrrbweazusmxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405697.7446065-2290-200076865218027/AnsiballZ_copy.py'
Oct 02 11:48:18 compute-0 sudo[207708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:18 compute-0 python3.9[207710]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405697.7446065-2290-200076865218027/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:18 compute-0 sudo[207708]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:18 compute-0 ceph-mon[73607]: pgmap v617: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:19 compute-0 sudo[207860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcvdsgsipdmrmgzsjnswpiiwwgoxuxhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405698.9004505-2290-261241927025591/AnsiballZ_stat.py'
Oct 02 11:48:19 compute-0 sudo[207860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:19 compute-0 python3.9[207862]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:48:19 compute-0 sudo[207860]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:19 compute-0 sudo[207983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asblbexnitggqynuolgssgnjsiwwsvgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405698.9004505-2290-261241927025591/AnsiballZ_copy.py'
Oct 02 11:48:19 compute-0 sudo[207983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:19 compute-0 python3.9[207985]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405698.9004505-2290-261241927025591/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:48:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:19.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:48:19 compute-0 sudo[207983]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:20 compute-0 sudo[208136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdknkebrhwqbfuydqtdbotjjulnandul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405700.0590065-2290-11491418297374/AnsiballZ_stat.py'
Oct 02 11:48:20 compute-0 sudo[208136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:20.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:20 compute-0 python3.9[208138]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:48:20 compute-0 sudo[208136]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:20 compute-0 sudo[208259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkulemhechfificorjlkcrimsishkdba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405700.0590065-2290-11491418297374/AnsiballZ_copy.py'
Oct 02 11:48:20 compute-0 sudo[208259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:21 compute-0 python3.9[208261]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405700.0590065-2290-11491418297374/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:21 compute-0 sudo[208259]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:21 compute-0 sudo[208283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:21 compute-0 sudo[208283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:21 compute-0 sudo[208283]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:21 compute-0 sudo[208311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:48:21 compute-0 sudo[208311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:21 compute-0 sudo[208311]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:21 compute-0 sudo[208336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:21 compute-0 sudo[208336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:21 compute-0 sudo[208336]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:21 compute-0 sudo[208361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:48:21 compute-0 sudo[208361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:21 compute-0 ceph-mon[73607]: pgmap v618: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:21 compute-0 sudo[208361]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:48:21 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:48:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:48:21 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:48:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:48:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:21.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:21 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:48:21 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev f809affb-425a-40e5-88dd-6c6429a11b34 does not exist
Oct 02 11:48:21 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev fc247bfd-aae3-4ace-a99e-d4c002f92f17 does not exist
Oct 02 11:48:21 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 69e03ab8-49ed-426f-af3d-01d273948a48 does not exist
Oct 02 11:48:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:48:21 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:48:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:48:21 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:48:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:48:21 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:48:21 compute-0 sudo[208416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:21 compute-0 sudo[208416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:21 compute-0 sudo[208416]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:22 compute-0 sudo[208441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:48:22 compute-0 sudo[208441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:22 compute-0 sudo[208441]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:22 compute-0 sudo[208466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:22 compute-0 sudo[208466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:22 compute-0 sudo[208466]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:22 compute-0 sudo[208515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:48:22 compute-0 sudo[208515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:22.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:22 compute-0 podman[208679]: 2025-10-02 11:48:22.433660171 +0000 UTC m=+0.049093613 container create ae448d0faa8d4531262d06faf6246ad2ce177be136d49ad8d0d0d82144805580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 11:48:22 compute-0 systemd[1]: Started libpod-conmon-ae448d0faa8d4531262d06faf6246ad2ce177be136d49ad8d0d0d82144805580.scope.
Oct 02 11:48:22 compute-0 podman[208679]: 2025-10-02 11:48:22.406447819 +0000 UTC m=+0.021881291 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:48:22 compute-0 python3.9[208665]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:48:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:48:22 compute-0 podman[208679]: 2025-10-02 11:48:22.529742492 +0000 UTC m=+0.145175954 container init ae448d0faa8d4531262d06faf6246ad2ce177be136d49ad8d0d0d82144805580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 11:48:22 compute-0 podman[208679]: 2025-10-02 11:48:22.537385821 +0000 UTC m=+0.152819263 container start ae448d0faa8d4531262d06faf6246ad2ce177be136d49ad8d0d0d82144805580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:48:22 compute-0 hardcore_yalow[208696]: 167 167
Oct 02 11:48:22 compute-0 systemd[1]: libpod-ae448d0faa8d4531262d06faf6246ad2ce177be136d49ad8d0d0d82144805580.scope: Deactivated successfully.
Oct 02 11:48:22 compute-0 podman[208679]: 2025-10-02 11:48:22.54786529 +0000 UTC m=+0.163298762 container attach ae448d0faa8d4531262d06faf6246ad2ce177be136d49ad8d0d0d82144805580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:48:22 compute-0 podman[208679]: 2025-10-02 11:48:22.548469525 +0000 UTC m=+0.163902977 container died ae448d0faa8d4531262d06faf6246ad2ce177be136d49ad8d0d0d82144805580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 11:48:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1f7ddf423627a7c67998d449ca0d98f7eb17ae702008866b6096459f6d3cd04-merged.mount: Deactivated successfully.
Oct 02 11:48:22 compute-0 podman[208679]: 2025-10-02 11:48:22.618616097 +0000 UTC m=+0.234049539 container remove ae448d0faa8d4531262d06faf6246ad2ce177be136d49ad8d0d0d82144805580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 11:48:22 compute-0 systemd[1]: libpod-conmon-ae448d0faa8d4531262d06faf6246ad2ce177be136d49ad8d0d0d82144805580.scope: Deactivated successfully.
Oct 02 11:48:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:48:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:48:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:48:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:48:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:48:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:48:22 compute-0 podman[208747]: 2025-10-02 11:48:22.784645355 +0000 UTC m=+0.041942397 container create 822e39bfe59358de9b91f4a201625aff8ba416e4de2ec3b497de363ad5abde5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 11:48:22 compute-0 systemd[1]: Started libpod-conmon-822e39bfe59358de9b91f4a201625aff8ba416e4de2ec3b497de363ad5abde5c.scope.
Oct 02 11:48:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:48:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8a2c1ae9420cc0d5013a91b9875dc453333f006f54f6863b821deeb9a6d5135/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:48:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8a2c1ae9420cc0d5013a91b9875dc453333f006f54f6863b821deeb9a6d5135/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:48:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8a2c1ae9420cc0d5013a91b9875dc453333f006f54f6863b821deeb9a6d5135/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:48:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8a2c1ae9420cc0d5013a91b9875dc453333f006f54f6863b821deeb9a6d5135/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:48:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8a2c1ae9420cc0d5013a91b9875dc453333f006f54f6863b821deeb9a6d5135/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:48:22 compute-0 podman[208747]: 2025-10-02 11:48:22.764652501 +0000 UTC m=+0.021949563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:48:22 compute-0 podman[208747]: 2025-10-02 11:48:22.912036059 +0000 UTC m=+0.169333121 container init 822e39bfe59358de9b91f4a201625aff8ba416e4de2ec3b497de363ad5abde5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 11:48:22 compute-0 podman[208747]: 2025-10-02 11:48:22.918778796 +0000 UTC m=+0.176075838 container start 822e39bfe59358de9b91f4a201625aff8ba416e4de2ec3b497de363ad5abde5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_galois, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 11:48:22 compute-0 podman[208747]: 2025-10-02 11:48:22.966281939 +0000 UTC m=+0.223578981 container attach 822e39bfe59358de9b91f4a201625aff8ba416e4de2ec3b497de363ad5abde5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_galois, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:48:23 compute-0 sudo[208844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:23 compute-0 sudo[208844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:23 compute-0 sudo[208844]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:23 compute-0 sudo[208939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bktzmcchcbmxwiwptfnhkdtavvtcbayn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405702.9147933-2908-95636165526831/AnsiballZ_seboolean.py'
Oct 02 11:48:23 compute-0 sudo[208896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:23 compute-0 sudo[208939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:23 compute-0 sudo[208896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:23 compute-0 sudo[208896]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:23 compute-0 python3.9[208944]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct 02 11:48:23 compute-0 gallant_galois[208763]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:48:23 compute-0 gallant_galois[208763]: --> relative data size: 1.0
Oct 02 11:48:23 compute-0 gallant_galois[208763]: --> All data devices are unavailable
Oct 02 11:48:23 compute-0 systemd[1]: libpod-822e39bfe59358de9b91f4a201625aff8ba416e4de2ec3b497de363ad5abde5c.scope: Deactivated successfully.
Oct 02 11:48:23 compute-0 podman[208747]: 2025-10-02 11:48:23.7473484 +0000 UTC m=+1.004645442 container died 822e39bfe59358de9b91f4a201625aff8ba416e4de2ec3b497de363ad5abde5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_galois, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 11:48:23 compute-0 ceph-mon[73607]: pgmap v619: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:23.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8a2c1ae9420cc0d5013a91b9875dc453333f006f54f6863b821deeb9a6d5135-merged.mount: Deactivated successfully.
Oct 02 11:48:24 compute-0 podman[208747]: 2025-10-02 11:48:24.244868201 +0000 UTC m=+1.502165243 container remove 822e39bfe59358de9b91f4a201625aff8ba416e4de2ec3b497de363ad5abde5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 11:48:24 compute-0 sudo[208515]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:24 compute-0 systemd[1]: libpod-conmon-822e39bfe59358de9b91f4a201625aff8ba416e4de2ec3b497de363ad5abde5c.scope: Deactivated successfully.
Oct 02 11:48:24 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Oct 02 11:48:24 compute-0 sudo[208970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:24 compute-0 sudo[208970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:24 compute-0 sudo[208970]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:48:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:24.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:48:24 compute-0 sudo[208999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:48:24 compute-0 sudo[208999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:24 compute-0 sudo[208999]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:24 compute-0 sudo[209024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:24 compute-0 sudo[209024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:24 compute-0 sudo[209024]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:24 compute-0 sudo[209049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 11:48:24 compute-0 sudo[209049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:24 compute-0 podman[209113]: 2025-10-02 11:48:24.821551967 +0000 UTC m=+0.059298544 container create 4a78f5c691156f7ffd612a3b6930dbf1f4dfc3d93e2485927a6290001803c8b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:48:24 compute-0 podman[209113]: 2025-10-02 11:48:24.783680743 +0000 UTC m=+0.021427340 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:48:24 compute-0 ceph-mon[73607]: pgmap v620: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:24 compute-0 systemd[1]: Started libpod-conmon-4a78f5c691156f7ffd612a3b6930dbf1f4dfc3d93e2485927a6290001803c8b1.scope.
Oct 02 11:48:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:48:24 compute-0 sudo[208939]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:24 compute-0 podman[209113]: 2025-10-02 11:48:24.960452876 +0000 UTC m=+0.198199473 container init 4a78f5c691156f7ffd612a3b6930dbf1f4dfc3d93e2485927a6290001803c8b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 11:48:24 compute-0 podman[209113]: 2025-10-02 11:48:24.967176842 +0000 UTC m=+0.204923429 container start 4a78f5c691156f7ffd612a3b6930dbf1f4dfc3d93e2485927a6290001803c8b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:48:24 compute-0 systemd[1]: libpod-4a78f5c691156f7ffd612a3b6930dbf1f4dfc3d93e2485927a6290001803c8b1.scope: Deactivated successfully.
Oct 02 11:48:24 compute-0 compassionate_taussig[209129]: 167 167
Oct 02 11:48:24 compute-0 conmon[209129]: conmon 4a78f5c691156f7ffd61 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4a78f5c691156f7ffd612a3b6930dbf1f4dfc3d93e2485927a6290001803c8b1.scope/container/memory.events
Oct 02 11:48:24 compute-0 podman[209113]: 2025-10-02 11:48:24.984431809 +0000 UTC m=+0.222178396 container attach 4a78f5c691156f7ffd612a3b6930dbf1f4dfc3d93e2485927a6290001803c8b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:48:24 compute-0 podman[209113]: 2025-10-02 11:48:24.98531401 +0000 UTC m=+0.223060617 container died 4a78f5c691156f7ffd612a3b6930dbf1f4dfc3d93e2485927a6290001803c8b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_taussig, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 11:48:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd41e031bbf124efe4145e0e5fee073e82f45b7eab899a198588fe67777bb49f-merged.mount: Deactivated successfully.
Oct 02 11:48:25 compute-0 podman[209113]: 2025-10-02 11:48:25.088107688 +0000 UTC m=+0.325854265 container remove 4a78f5c691156f7ffd612a3b6930dbf1f4dfc3d93e2485927a6290001803c8b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_taussig, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:48:25 compute-0 systemd[1]: libpod-conmon-4a78f5c691156f7ffd612a3b6930dbf1f4dfc3d93e2485927a6290001803c8b1.scope: Deactivated successfully.
Oct 02 11:48:25 compute-0 podman[209178]: 2025-10-02 11:48:25.278465907 +0000 UTC m=+0.076889559 container create 2104f718740c4bd3119390ef8ce9b373cc1352bcdcb9b5b30848d55f0b6f3991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_tharp, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:48:25 compute-0 systemd[1]: Started libpod-conmon-2104f718740c4bd3119390ef8ce9b373cc1352bcdcb9b5b30848d55f0b6f3991.scope.
Oct 02 11:48:25 compute-0 podman[209178]: 2025-10-02 11:48:25.242211452 +0000 UTC m=+0.040635124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:48:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:48:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/754d8d89fe1ef1296947c19bc03526b5eb7341f998e44af568ee363558f4fd6c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:48:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/754d8d89fe1ef1296947c19bc03526b5eb7341f998e44af568ee363558f4fd6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:48:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/754d8d89fe1ef1296947c19bc03526b5eb7341f998e44af568ee363558f4fd6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:48:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/754d8d89fe1ef1296947c19bc03526b5eb7341f998e44af568ee363558f4fd6c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:48:25 compute-0 podman[209178]: 2025-10-02 11:48:25.383620753 +0000 UTC m=+0.182044415 container init 2104f718740c4bd3119390ef8ce9b373cc1352bcdcb9b5b30848d55f0b6f3991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:48:25 compute-0 podman[209178]: 2025-10-02 11:48:25.390538444 +0000 UTC m=+0.188962086 container start 2104f718740c4bd3119390ef8ce9b373cc1352bcdcb9b5b30848d55f0b6f3991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_tharp, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 11:48:25 compute-0 podman[209178]: 2025-10-02 11:48:25.507096281 +0000 UTC m=+0.305519923 container attach 2104f718740c4bd3119390ef8ce9b373cc1352bcdcb9b5b30848d55f0b6f3991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_tharp, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 11:48:25 compute-0 sudo[209325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdeglsebdwobgimnpxkrvosfflozzivn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405705.2599366-2932-147636196029338/AnsiballZ_copy.py'
Oct 02 11:48:25 compute-0 sudo[209325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:25 compute-0 python3.9[209327]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:25 compute-0 sudo[209325]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:48:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:25.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]: {
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]:     "1": [
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]:         {
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]:             "devices": [
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]:                 "/dev/loop3"
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]:             ],
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]:             "lv_name": "ceph_lv0",
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]:             "lv_size": "7511998464",
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]:             "name": "ceph_lv0",
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]:             "tags": {
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]:                 "ceph.cluster_name": "ceph",
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]:                 "ceph.crush_device_class": "",
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]:                 "ceph.encrypted": "0",
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]:                 "ceph.osd_id": "1",
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]:                 "ceph.type": "block",
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]:                 "ceph.vdo": "0"
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]:             },
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]:             "type": "block",
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]:             "vg_name": "ceph_vg0"
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]:         }
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]:     ]
Oct 02 11:48:26 compute-0 stupefied_tharp[209247]: }
Oct 02 11:48:26 compute-0 sudo[209481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oztjzxadolyqpxncrkxzccyzgjxamfwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405705.8580139-2932-261681482998306/AnsiballZ_copy.py'
Oct 02 11:48:26 compute-0 sudo[209481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:26 compute-0 systemd[1]: libpod-2104f718740c4bd3119390ef8ce9b373cc1352bcdcb9b5b30848d55f0b6f3991.scope: Deactivated successfully.
Oct 02 11:48:26 compute-0 podman[209178]: 2025-10-02 11:48:26.146269459 +0000 UTC m=+0.944693101 container died 2104f718740c4bd3119390ef8ce9b373cc1352bcdcb9b5b30848d55f0b6f3991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_tharp, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:48:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-754d8d89fe1ef1296947c19bc03526b5eb7341f998e44af568ee363558f4fd6c-merged.mount: Deactivated successfully.
Oct 02 11:48:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:26.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:26 compute-0 python3.9[209483]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:26 compute-0 sudo[209481]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:26 compute-0 podman[209178]: 2025-10-02 11:48:26.430989688 +0000 UTC m=+1.229413340 container remove 2104f718740c4bd3119390ef8ce9b373cc1352bcdcb9b5b30848d55f0b6f3991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:48:26 compute-0 systemd[1]: libpod-conmon-2104f718740c4bd3119390ef8ce9b373cc1352bcdcb9b5b30848d55f0b6f3991.scope: Deactivated successfully.
Oct 02 11:48:26 compute-0 sudo[209049]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:26 compute-0 sudo[209526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:26 compute-0 sudo[209526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:26 compute-0 sudo[209526]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:26 compute-0 sudo[209593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:48:26 compute-0 sudo[209593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:26 compute-0 sudo[209593]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:26 compute-0 sudo[209626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:26 compute-0 sudo[209626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:26 compute-0 sudo[209626]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:26 compute-0 sudo[209674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 11:48:26 compute-0 sudo[209674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:26 compute-0 sudo[209749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ummzxuqboxaqktvqontvbvfrfhnellca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405706.5138857-2932-171504293925860/AnsiballZ_copy.py'
Oct 02 11:48:26 compute-0 sudo[209749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:48:26.904 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:48:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:48:26.904 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:48:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:48:26.904 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:48:27 compute-0 python3.9[209751]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:27 compute-0 podman[209792]: 2025-10-02 11:48:27.050713266 +0000 UTC m=+0.061093659 container create 30888cb4bc14df56442c0254de7dbfd9c1584ea5d5713be97bce594738af83c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 11:48:27 compute-0 sudo[209749]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:27 compute-0 systemd[1]: Started libpod-conmon-30888cb4bc14df56442c0254de7dbfd9c1584ea5d5713be97bce594738af83c6.scope.
Oct 02 11:48:27 compute-0 podman[209792]: 2025-10-02 11:48:27.011460707 +0000 UTC m=+0.021841120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:48:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:48:27 compute-0 podman[209792]: 2025-10-02 11:48:27.163595303 +0000 UTC m=+0.173975716 container init 30888cb4bc14df56442c0254de7dbfd9c1584ea5d5713be97bce594738af83c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euler, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 11:48:27 compute-0 podman[209792]: 2025-10-02 11:48:27.170667237 +0000 UTC m=+0.181047630 container start 30888cb4bc14df56442c0254de7dbfd9c1584ea5d5713be97bce594738af83c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euler, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 11:48:27 compute-0 nice_euler[209808]: 167 167
Oct 02 11:48:27 compute-0 systemd[1]: libpod-30888cb4bc14df56442c0254de7dbfd9c1584ea5d5713be97bce594738af83c6.scope: Deactivated successfully.
Oct 02 11:48:27 compute-0 podman[209792]: 2025-10-02 11:48:27.189887112 +0000 UTC m=+0.200267515 container attach 30888cb4bc14df56442c0254de7dbfd9c1584ea5d5713be97bce594738af83c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:48:27 compute-0 podman[209792]: 2025-10-02 11:48:27.190259711 +0000 UTC m=+0.200640114 container died 30888cb4bc14df56442c0254de7dbfd9c1584ea5d5713be97bce594738af83c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:48:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-f312fb9112f9a99d1777f1001c6d276646edfe81f8f968e3bef19cc51fa36374-merged.mount: Deactivated successfully.
Oct 02 11:48:27 compute-0 podman[209792]: 2025-10-02 11:48:27.372232773 +0000 UTC m=+0.382613156 container remove 30888cb4bc14df56442c0254de7dbfd9c1584ea5d5713be97bce594738af83c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 11:48:27 compute-0 systemd[1]: libpod-conmon-30888cb4bc14df56442c0254de7dbfd9c1584ea5d5713be97bce594738af83c6.scope: Deactivated successfully.
Oct 02 11:48:27 compute-0 sudo[209976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlniqscfpoppaeqcnajtstswnldybxiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405707.2052128-2932-163387147680670/AnsiballZ_copy.py'
Oct 02 11:48:27 compute-0 sudo[209976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:27 compute-0 podman[209982]: 2025-10-02 11:48:27.563604818 +0000 UTC m=+0.072970583 container create 41285fcacde97a152f4e3ae57ee864148dfe3683bdd6a8d37425312282663e23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bhaskara, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:48:27 compute-0 podman[209982]: 2025-10-02 11:48:27.511969963 +0000 UTC m=+0.021335758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:48:27 compute-0 systemd[1]: Started libpod-conmon-41285fcacde97a152f4e3ae57ee864148dfe3683bdd6a8d37425312282663e23.scope.
Oct 02 11:48:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:48:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed01957965809897a9b85928451e4b31d0f95906f59715f92e97cd6f9c67300f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:48:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed01957965809897a9b85928451e4b31d0f95906f59715f92e97cd6f9c67300f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:48:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed01957965809897a9b85928451e4b31d0f95906f59715f92e97cd6f9c67300f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:48:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed01957965809897a9b85928451e4b31d0f95906f59715f92e97cd6f9c67300f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:48:27 compute-0 podman[209982]: 2025-10-02 11:48:27.670484236 +0000 UTC m=+0.179850021 container init 41285fcacde97a152f4e3ae57ee864148dfe3683bdd6a8d37425312282663e23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bhaskara, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:48:27 compute-0 podman[209982]: 2025-10-02 11:48:27.676603656 +0000 UTC m=+0.185969421 container start 41285fcacde97a152f4e3ae57ee864148dfe3683bdd6a8d37425312282663e23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bhaskara, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:48:27 compute-0 podman[209982]: 2025-10-02 11:48:27.682559834 +0000 UTC m=+0.191925619 container attach 41285fcacde97a152f4e3ae57ee864148dfe3683bdd6a8d37425312282663e23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 11:48:27 compute-0 python3.9[209984]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:27 compute-0 sudo[209976]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:27 compute-0 ceph-mon[73607]: pgmap v621: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:27.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:28 compute-0 sudo[210153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouzuwedqwsjzluzkbltcsbuejpekxwza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405707.8381927-2932-116243174283567/AnsiballZ_copy.py'
Oct 02 11:48:28 compute-0 sudo[210153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:28 compute-0 python3.9[210155]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:28 compute-0 sudo[210153]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:48:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:28.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:48:28 compute-0 musing_bhaskara[209999]: {
Oct 02 11:48:28 compute-0 musing_bhaskara[209999]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 11:48:28 compute-0 musing_bhaskara[209999]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:48:28 compute-0 musing_bhaskara[209999]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:48:28 compute-0 musing_bhaskara[209999]:         "osd_id": 1,
Oct 02 11:48:28 compute-0 musing_bhaskara[209999]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:48:28 compute-0 musing_bhaskara[209999]:         "type": "bluestore"
Oct 02 11:48:28 compute-0 musing_bhaskara[209999]:     }
Oct 02 11:48:28 compute-0 musing_bhaskara[209999]: }
Oct 02 11:48:28 compute-0 systemd[1]: libpod-41285fcacde97a152f4e3ae57ee864148dfe3683bdd6a8d37425312282663e23.scope: Deactivated successfully.
Oct 02 11:48:28 compute-0 podman[209982]: 2025-10-02 11:48:28.487249468 +0000 UTC m=+0.996615233 container died 41285fcacde97a152f4e3ae57ee864148dfe3683bdd6a8d37425312282663e23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:48:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed01957965809897a9b85928451e4b31d0f95906f59715f92e97cd6f9c67300f-merged.mount: Deactivated successfully.
Oct 02 11:48:28 compute-0 podman[209982]: 2025-10-02 11:48:28.562786863 +0000 UTC m=+1.072152628 container remove 41285fcacde97a152f4e3ae57ee864148dfe3683bdd6a8d37425312282663e23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bhaskara, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 11:48:28 compute-0 systemd[1]: libpod-conmon-41285fcacde97a152f4e3ae57ee864148dfe3683bdd6a8d37425312282663e23.scope: Deactivated successfully.
Oct 02 11:48:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:28 compute-0 sudo[209674]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:48:28 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:48:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:48:28 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:48:28 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 832be0a3-cedc-4803-98d9-3642e86fb56d does not exist
Oct 02 11:48:28 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 1527477f-690d-4322-85a7-cef65f505563 does not exist
Oct 02 11:48:28 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev d7faefa9-4fbd-4817-be8b-4ff7aae69b72 does not exist
Oct 02 11:48:28 compute-0 sudo[210260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:28 compute-0 sudo[210260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:28 compute-0 sudo[210260]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:28 compute-0 sudo[210308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:48:28 compute-0 sudo[210308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:28 compute-0 sudo[210308]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:28 compute-0 sudo[210383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngzltapwoinrnxbgwrjmfoebqdxxkscg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405708.6748574-3040-102680197947351/AnsiballZ_copy.py'
Oct 02 11:48:28 compute-0 sudo[210383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:29 compute-0 python3.9[210385]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:29 compute-0 sudo[210383]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:29 compute-0 sudo[210535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulttnkxiqwcsqzcnfdxwxwetaomasxku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405709.267058-3040-39982454716834/AnsiballZ_copy.py'
Oct 02 11:48:29 compute-0 sudo[210535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:29 compute-0 python3.9[210537]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:29 compute-0 ceph-mon[73607]: pgmap v622: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:48:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:48:29 compute-0 sudo[210535]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:48:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:29.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:48:30 compute-0 sudo[210687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjirbdodeqcysfupojqazjswaxovlpgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405709.9086366-3040-17021390325544/AnsiballZ_copy.py'
Oct 02 11:48:30 compute-0 sudo[210687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:30 compute-0 python3.9[210689]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:30.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:30 compute-0 sudo[210687]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:30 compute-0 sudo[210840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oljmffpfqsswsryrkftrcyklmhxrrtcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405710.4979553-3040-188977864442678/AnsiballZ_copy.py'
Oct 02 11:48:30 compute-0 sudo[210840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:30 compute-0 python3.9[210842]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:30 compute-0 sudo[210840]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:31 compute-0 podman[210843]: 2025-10-02 11:48:31.04586053 +0000 UTC m=+0.057205724 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 11:48:31 compute-0 ceph-mon[73607]: pgmap v623: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:31 compute-0 sudo[211011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gorxlfxtxhwshwbqkgcoebmbnmrckwcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405711.1375902-3040-107511246378499/AnsiballZ_copy.py'
Oct 02 11:48:31 compute-0 sudo[211011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:31 compute-0 python3.9[211013]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:31 compute-0 sudo[211011]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:31.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:48:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:32.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:48:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:32 compute-0 sudo[211164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfopbrlnguusmnlrntoleywjiqotfesx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405712.3145635-3148-274940274271427/AnsiballZ_systemd.py'
Oct 02 11:48:32 compute-0 sudo[211164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:32 compute-0 python3.9[211166]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 11:48:32 compute-0 systemd[1]: Reloading.
Oct 02 11:48:32 compute-0 systemd-sysv-generator[211198]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:48:32 compute-0 systemd-rc-local-generator[211194]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:48:33 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Oct 02 11:48:33 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Oct 02 11:48:33 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Oct 02 11:48:33 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct 02 11:48:33 compute-0 systemd[1]: Starting libvirt logging daemon...
Oct 02 11:48:33 compute-0 systemd[1]: Started libvirt logging daemon.
Oct 02 11:48:33 compute-0 sudo[211164]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:33 compute-0 ceph-mon[73607]: pgmap v624: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:33 compute-0 sudo[211357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvnjwotzrbpuvsckduirnoquwpikzszp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405713.6020398-3148-47890428177294/AnsiballZ_systemd.py'
Oct 02 11:48:33 compute-0 sudo[211357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:48:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:33.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:48:34 compute-0 python3.9[211359]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 11:48:34 compute-0 systemd[1]: Reloading.
Oct 02 11:48:34 compute-0 systemd-rc-local-generator[211383]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:48:34 compute-0 systemd-sysv-generator[211387]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:48:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:34.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:34 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Oct 02 11:48:34 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct 02 11:48:34 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct 02 11:48:34 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct 02 11:48:34 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct 02 11:48:34 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct 02 11:48:34 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Oct 02 11:48:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:34 compute-0 systemd[1]: Started libvirt nodedev daemon.
Oct 02 11:48:34 compute-0 sudo[211357]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:34 compute-0 ceph-mon[73607]: pgmap v625: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:35 compute-0 sudo[211573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opodifqybuyjdwftlpcqoufhchgtujzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405714.7551572-3148-146682796189667/AnsiballZ_systemd.py'
Oct 02 11:48:35 compute-0 sudo[211573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:35 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct 02 11:48:35 compute-0 python3.9[211575]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 11:48:35 compute-0 systemd[1]: Reloading.
Oct 02 11:48:35 compute-0 systemd-rc-local-generator[211606]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:48:35 compute-0 systemd-sysv-generator[211610]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:48:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:35 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct 02 11:48:35 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct 02 11:48:35 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct 02 11:48:35 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct 02 11:48:35 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct 02 11:48:35 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 02 11:48:35 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 02 11:48:35 compute-0 sudo[211573]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:35 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct 02 11:48:35 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct 02 11:48:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:35.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:36 compute-0 sudo[211792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irwdsxplshegnorrkxnolnaczznhyxcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405715.8450952-3148-96311741177188/AnsiballZ_systemd.py'
Oct 02 11:48:36 compute-0 sudo[211792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:48:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:36.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:48:36 compute-0 python3.9[211794]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 11:48:36 compute-0 systemd[1]: Reloading.
Oct 02 11:48:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:36 compute-0 systemd-sysv-generator[211826]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:48:36 compute-0 systemd-rc-local-generator[211822]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:48:36 compute-0 setroubleshoot[211576]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l eccceccb-db1a-4f21-a55b-667d24ca05d6
Oct 02 11:48:36 compute-0 setroubleshoot[211576]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Oct 02 11:48:36 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Oct 02 11:48:36 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Oct 02 11:48:36 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 02 11:48:36 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct 02 11:48:36 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct 02 11:48:36 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct 02 11:48:36 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct 02 11:48:36 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct 02 11:48:36 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct 02 11:48:36 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct 02 11:48:36 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Oct 02 11:48:36 compute-0 systemd[1]: Started libvirt QEMU daemon.
Oct 02 11:48:36 compute-0 sudo[211792]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:37 compute-0 sudo[212007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faeeeffmnoenmyjvqislxivlpfsfdaij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405717.0599031-3148-275289217241889/AnsiballZ_systemd.py'
Oct 02 11:48:37 compute-0 sudo[212007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:37 compute-0 python3.9[212009]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 11:48:37 compute-0 systemd[1]: Reloading.
Oct 02 11:48:37 compute-0 ceph-mon[73607]: pgmap v626: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:37 compute-0 systemd-sysv-generator[212040]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:48:37 compute-0 systemd-rc-local-generator[212036]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:48:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:37.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:38 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Oct 02 11:48:38 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Oct 02 11:48:38 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Oct 02 11:48:38 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct 02 11:48:38 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct 02 11:48:38 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct 02 11:48:38 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 02 11:48:38 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 02 11:48:38 compute-0 sudo[212007]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:48:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:38.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:48:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:39 compute-0 sudo[212218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uffyvwpsxcuacakyrmwfezobinvniyvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405718.9759753-3259-222101695389037/AnsiballZ_file.py'
Oct 02 11:48:39 compute-0 sudo[212218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:39 compute-0 python3.9[212220]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:39 compute-0 sudo[212218]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:39 compute-0 ceph-mon[73607]: pgmap v627: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:39.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:40 compute-0 sudo[212370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwmrajmzvjagggpmitrbymntlwdnxbha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405719.7991972-3283-118307166489921/AnsiballZ_find.py'
Oct 02 11:48:40 compute-0 sudo[212370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:40 compute-0 python3.9[212372]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 11:48:40 compute-0 sudo[212370]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:48:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:40.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:48:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:40 compute-0 sudo[212523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdjwgrjcvlwapihyuwonkrzqsxqzlkpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405720.5993989-3307-87301052602970/AnsiballZ_command.py'
Oct 02 11:48:40 compute-0 sudo[212523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:40 compute-0 ceph-mon[73607]: pgmap v628: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:41 compute-0 python3.9[212525]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:48:41 compute-0 sudo[212523]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:41 compute-0 python3.9[212679]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 11:48:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:48:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:41.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:48:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_11:48:42
Oct 02 11:48:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:48:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 11:48:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'volumes', 'default.rgw.control', 'images', '.rgw.root', '.mgr', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data']
Oct 02 11:48:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:48:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:42.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:48:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:48:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:48:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:48:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:48:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:48:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:48:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:48:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:48:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:48:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:48:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:48:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:48:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:48:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:48:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:48:42 compute-0 python3.9[212830]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:48:43 compute-0 python3.9[212951]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405722.3834612-3364-26304523547176/.source.xml follow=False _original_basename=secret.xml.j2 checksum=63af5286395175bfc9ebaef2783b75cb37ce0b72 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:43 compute-0 sudo[212955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:43 compute-0 sudo[212955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:43 compute-0 sudo[212955]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:43 compute-0 sudo[213001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:43 compute-0 sudo[213001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:43 compute-0 sudo[213001]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:43 compute-0 ceph-mon[73607]: pgmap v629: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:43 compute-0 sudo[213151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqnkuegeanluhyqrpmhiytfrkbxdrwct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405723.6538274-3409-195939413504768/AnsiballZ_command.py'
Oct 02 11:48:43 compute-0 sudo[213151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:43.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:44 compute-0 python3.9[213153]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:48:44 compute-0 polkitd[6475]: Registered Authentication Agent for unix-process:213155:379179 (system bus name :1.3069 [/usr/bin/pkttyagent --process 213155 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 02 11:48:44 compute-0 polkitd[6475]: Unregistered Authentication Agent for unix-process:213155:379179 (system bus name :1.3069, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 02 11:48:44 compute-0 polkitd[6475]: Registered Authentication Agent for unix-process:213154:379178 (system bus name :1.3070 [/usr/bin/pkttyagent --process 213154 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 02 11:48:44 compute-0 polkitd[6475]: Unregistered Authentication Agent for unix-process:213154:379178 (system bus name :1.3070, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 02 11:48:44 compute-0 sudo[213151]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:44.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:44 compute-0 ceph-mon[73607]: pgmap v630: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:45 compute-0 podman[213191]: 2025-10-02 11:48:45.002205773 +0000 UTC m=+0.141410282 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 11:48:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:45.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:45 compute-0 python3.9[213342]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:48:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:46.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:48:46 compute-0 sudo[213493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgjwpzumbdxagofhyzzikhckmeutissm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405726.1346748-3457-42665110470098/AnsiballZ_command.py'
Oct 02 11:48:46 compute-0 sudo[213493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:46 compute-0 sudo[213493]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:46 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct 02 11:48:46 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct 02 11:48:47 compute-0 sudo[213646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxhezcmqqbmfcgqbjxjzqjykybqqklvp ; FSID=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 KEY=AQAtYt5oAAAAABAAuYoDEL6p1gtJZUr+6PiDPw== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405726.8469486-3481-60719864235251/AnsiballZ_command.py'
Oct 02 11:48:47 compute-0 sudo[213646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:47 compute-0 polkitd[6475]: Registered Authentication Agent for unix-process:213649:379496 (system bus name :1.3073 [/usr/bin/pkttyagent --process 213649 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 02 11:48:47 compute-0 polkitd[6475]: Unregistered Authentication Agent for unix-process:213649:379496 (system bus name :1.3073, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 02 11:48:47 compute-0 sudo[213646]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:47 compute-0 ceph-mon[73607]: pgmap v631: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:47.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:47 compute-0 sudo[213804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icivhlyceunbybikabfpxjlamxwimudw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405727.6742713-3505-85748282510/AnsiballZ_copy.py'
Oct 02 11:48:47 compute-0 sudo[213804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:48 compute-0 python3.9[213806]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:48 compute-0 sudo[213804]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:48.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:48 compute-0 sudo[213957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdryenaugjuhtbdrbhihtxezcrmrjxea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405728.4016736-3529-244689088601194/AnsiballZ_stat.py'
Oct 02 11:48:48 compute-0 sudo[213957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:48 compute-0 python3.9[213959]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:48:48 compute-0 sudo[213957]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:49 compute-0 sudo[214080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmhepymxexmcjsdiibhnefwefjuexhay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405728.4016736-3529-244689088601194/AnsiballZ_copy.py'
Oct 02 11:48:49 compute-0 sudo[214080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:49 compute-0 python3.9[214082]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405728.4016736-3529-244689088601194/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:49 compute-0 sudo[214080]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:49 compute-0 ceph-mon[73607]: pgmap v632: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:48:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:49.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:48:50 compute-0 sudo[214232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljjrvkxtitdljzgshlgtretdergsshxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405729.940156-3577-180677272766521/AnsiballZ_file.py'
Oct 02 11:48:50 compute-0 sudo[214232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:50.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:50 compute-0 python3.9[214234]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:50 compute-0 sudo[214232]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:50 compute-0 ceph-mon[73607]: pgmap v633: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:51 compute-0 sudo[214385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvbzbksgjgzozhrmpdkipbmkbnsifjod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405730.6765199-3601-2141645507259/AnsiballZ_stat.py'
Oct 02 11:48:51 compute-0 sudo[214385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:51 compute-0 python3.9[214387]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:48:51 compute-0 sudo[214385]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:51 compute-0 sudo[214463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjywxlfujpdzaziiatineixhfzfmdzvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405730.6765199-3601-2141645507259/AnsiballZ_file.py'
Oct 02 11:48:51 compute-0 sudo[214463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:51 compute-0 python3.9[214465]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:51 compute-0 sudo[214463]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:51.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:52 compute-0 sudo[214615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-broqxdddzulbzwozfolcejzzlmhmcyeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405731.9876363-3637-26326690613573/AnsiballZ_stat.py'
Oct 02 11:48:52 compute-0 sudo[214615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:48:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:52.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:48:52 compute-0 python3.9[214617]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:48:52 compute-0 sudo[214615]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:52 compute-0 sudo[214694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbemtpmmgnmbvcecotsiiqqdphomhcpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405731.9876363-3637-26326690613573/AnsiballZ_file.py'
Oct 02 11:48:52 compute-0 sudo[214694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:52 compute-0 ceph-mon[73607]: pgmap v634: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:52 compute-0 python3.9[214696]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.b6k351l2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:53 compute-0 sudo[214694]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:53 compute-0 sudo[214846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuykxbhtbbtleqrvkcuvitgxfcikktfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405733.2926183-3673-265866612329421/AnsiballZ_stat.py'
Oct 02 11:48:53 compute-0 sudo[214846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:53 compute-0 python3.9[214848]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:48:53 compute-0 sudo[214846]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:48:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:48:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:48:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:48:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:48:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:48:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:48:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:48:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:48:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:48:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:48:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:48:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:48:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:48:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:48:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:48:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 11:48:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:48:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:48:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:48:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:48:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:48:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:48:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:53.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:54 compute-0 sudo[214924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfkiyhnuofpoowkoiwmorvkkuspruteq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405733.2926183-3673-265866612329421/AnsiballZ_file.py'
Oct 02 11:48:54 compute-0 sudo[214924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:54 compute-0 python3.9[214926]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:54 compute-0 sudo[214924]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:54.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:54 compute-0 sudo[215077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmobekoubddgrlspjlauuhsasygwndpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405734.6050017-3712-46557005630520/AnsiballZ_command.py'
Oct 02 11:48:54 compute-0 sudo[215077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:55 compute-0 python3.9[215079]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:48:55 compute-0 sudo[215077]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:55 compute-0 ceph-mon[73607]: pgmap v635: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:55 compute-0 sudo[215230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mylojpnqnapvfjrnioptcyokibdmfage ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759405735.365861-3736-55956643637163/AnsiballZ_edpm_nftables_from_files.py'
Oct 02 11:48:55 compute-0 sudo[215230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:55.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:56 compute-0 python3[215232]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 02 11:48:56 compute-0 sudo[215230]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:48:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:56.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:48:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:56 compute-0 sudo[215383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwcviothjgegzqzhubxlfwbflacicqtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405736.3165417-3760-110896323400958/AnsiballZ_stat.py'
Oct 02 11:48:56 compute-0 sudo[215383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:56 compute-0 ceph-mon[73607]: pgmap v636: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:56 compute-0 python3.9[215385]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:48:56 compute-0 sudo[215383]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:57 compute-0 sudo[215461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajaaaknhdkabmgrjnoxpfbtjueafkaba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405736.3165417-3760-110896323400958/AnsiballZ_file.py'
Oct 02 11:48:57 compute-0 sudo[215461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:57 compute-0 python3.9[215463]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:57 compute-0 sudo[215461]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:57 compute-0 sudo[215613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flccpbcindhpfslfwefrkentqpqansbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405737.5521445-3796-38292978491509/AnsiballZ_stat.py'
Oct 02 11:48:57 compute-0 sudo[215613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:48:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:57.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:48:58 compute-0 python3.9[215615]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:48:58 compute-0 sudo[215613]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:48:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:58.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:48:58 compute-0 sudo[215692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iktutidrpcyxgmeycrtukgnnfzjvmhvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405737.5521445-3796-38292978491509/AnsiballZ_file.py'
Oct 02 11:48:58 compute-0 sudo[215692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:58 compute-0 python3.9[215694]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:58 compute-0 sudo[215692]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:59 compute-0 sudo[215844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dskfaeokhpxjarjiuvrrywwlczaswigz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405738.9449446-3832-3703282051001/AnsiballZ_stat.py'
Oct 02 11:48:59 compute-0 sudo[215844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:59 compute-0 python3.9[215846]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:48:59 compute-0 sudo[215844]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:59 compute-0 sudo[215922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmfbwesozvisfzgjbuqtystdsucahyco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405738.9449446-3832-3703282051001/AnsiballZ_file.py'
Oct 02 11:48:59 compute-0 sudo[215922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:59 compute-0 ceph-mon[73607]: pgmap v637: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:59 compute-0 python3.9[215924]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:48:59 compute-0 sudo[215922]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:48:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:59.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:00.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:00 compute-0 sudo[216075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eocjypvfsxkwhjbabarqgkknyjiqapcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405740.166822-3868-235948412056393/AnsiballZ_stat.py'
Oct 02 11:49:00 compute-0 sudo[216075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:00 compute-0 python3.9[216077]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:49:00 compute-0 sudo[216075]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:00 compute-0 ceph-mon[73607]: pgmap v638: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:01 compute-0 sudo[216164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpwvvkkhvyptjaeezchthnvjvvkvicqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405740.166822-3868-235948412056393/AnsiballZ_file.py'
Oct 02 11:49:01 compute-0 sudo[216164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:01 compute-0 podman[216127]: 2025-10-02 11:49:01.191378953 +0000 UTC m=+0.057012379 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Oct 02 11:49:01 compute-0 python3.9[216175]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:49:01 compute-0 sudo[216164]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:01.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:02 compute-0 sudo[216325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wchktdgiaenzzsrrnclwyzhyhuaxyzsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405741.5863194-3904-200150321667455/AnsiballZ_stat.py'
Oct 02 11:49:02 compute-0 sudo[216325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:02 compute-0 python3.9[216327]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:49:02 compute-0 sudo[216325]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:02.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:02 compute-0 sudo[216451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwksmycmqccmnavxwmutkugduqfotmcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405741.5863194-3904-200150321667455/AnsiballZ_copy.py'
Oct 02 11:49:02 compute-0 sudo[216451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:02 compute-0 python3.9[216453]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405741.5863194-3904-200150321667455/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:49:02 compute-0 sudo[216451]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:03 compute-0 sudo[216603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xykdpogsawahmqskcucbgilqfpylrdvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405743.1230764-3949-151970483762383/AnsiballZ_file.py'
Oct 02 11:49:03 compute-0 sudo[216603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:03 compute-0 sudo[216604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:03 compute-0 sudo[216604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:03 compute-0 sudo[216604]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:03 compute-0 sudo[216631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:03 compute-0 sudo[216631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:03 compute-0 sudo[216631]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:03 compute-0 ceph-mon[73607]: pgmap v639: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:03 compute-0 python3.9[216618]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:49:03 compute-0 sudo[216603]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:03.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:04 compute-0 sudo[216805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqqudjvokskgksuhirrqtqbzdbtzvigk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405743.8534048-3973-41922451933953/AnsiballZ_command.py'
Oct 02 11:49:04 compute-0 sudo[216805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:04 compute-0 python3.9[216807]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:49:04 compute-0 sudo[216805]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:04.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:05 compute-0 sudo[216961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztdzjdgvtflvsjrvgsvcfxfypmmuebgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405744.6401029-3997-13242731975453/AnsiballZ_blockinfile.py'
Oct 02 11:49:05 compute-0 sudo[216961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:05 compute-0 python3.9[216963]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:49:05 compute-0 sudo[216961]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:05 compute-0 ceph-mon[73607]: pgmap v640: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:05 compute-0 sudo[217113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-intrxgayukwxdgkhslmdovjnteyhtces ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405745.5924296-4024-91246012259411/AnsiballZ_command.py'
Oct 02 11:49:05 compute-0 sudo[217113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:49:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:05.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:49:06 compute-0 python3.9[217115]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:49:06 compute-0 sudo[217113]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:49:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:06.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:49:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:06 compute-0 sudo[217267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxituwsjavkuxwwdmditkmmrxsezdldp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405746.3334794-4048-135588060273737/AnsiballZ_stat.py'
Oct 02 11:49:06 compute-0 sudo[217267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:06 compute-0 python3.9[217269]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:49:06 compute-0 sudo[217267]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:07 compute-0 sudo[217421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdwjsstuytfwtuhcejnhcghjegjgdfic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405747.06712-4072-65489145008250/AnsiballZ_command.py'
Oct 02 11:49:07 compute-0 sudo[217421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:07 compute-0 python3.9[217423]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:49:07 compute-0 sudo[217421]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:07 compute-0 ceph-mon[73607]: pgmap v641: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:07.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:08 compute-0 sudo[217576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldmapeodheqkvlssspobphtnwfbgdlng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405747.916241-4096-101843420608494/AnsiballZ_file.py'
Oct 02 11:49:08 compute-0 sudo[217576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:49:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:08.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:49:08 compute-0 python3.9[217578]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:49:08 compute-0 sudo[217576]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:08 compute-0 sudo[217729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krmldfskqkgmeorotrghcokxmbboswyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405748.691156-4120-278857132779491/AnsiballZ_stat.py'
Oct 02 11:49:08 compute-0 sudo[217729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:09 compute-0 python3.9[217731]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:49:09 compute-0 sudo[217729]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:09 compute-0 sudo[217852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzinpjmdhotqnvlaadwldbjdjmqctata ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405748.691156-4120-278857132779491/AnsiballZ_copy.py'
Oct 02 11:49:09 compute-0 sudo[217852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:09 compute-0 python3.9[217854]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405748.691156-4120-278857132779491/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:49:09 compute-0 sudo[217852]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:09 compute-0 ceph-mon[73607]: pgmap v642: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:09.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:10 compute-0 sudo[218005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmlabvvhbylxwghaxmyvfjdaumzfdpuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405750.063281-4165-137426950364569/AnsiballZ_stat.py'
Oct 02 11:49:10 compute-0 sudo[218005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:10.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:10 compute-0 python3.9[218007]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:49:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:10 compute-0 sudo[218005]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:49:10.567919) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405750568020, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1359, "num_deletes": 502, "total_data_size": 1899219, "memory_usage": 1934424, "flush_reason": "Manual Compaction"}
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405750575895, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1118591, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14103, "largest_seqno": 15461, "table_properties": {"data_size": 1113801, "index_size": 1738, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 14678, "raw_average_key_size": 19, "raw_value_size": 1101709, "raw_average_value_size": 1430, "num_data_blocks": 79, "num_entries": 770, "num_filter_entries": 770, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405635, "oldest_key_time": 1759405635, "file_creation_time": 1759405750, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 7964 microseconds, and 3547 cpu microseconds.
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:49:10.575937) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1118591 bytes OK
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:49:10.575955) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:49:10.577669) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:49:10.577686) EVENT_LOG_v1 {"time_micros": 1759405750577681, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:49:10.577706) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1892288, prev total WAL file size 1892288, number of live WAL files 2.
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:49:10.578507) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1092KB)], [32(10MB)]
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405750578539, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 12591246, "oldest_snapshot_seqno": -1}
Oct 02 11:49:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 4062 keys, 7663830 bytes, temperature: kUnknown
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405750633148, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7663830, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7635497, "index_size": 17087, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 100733, "raw_average_key_size": 24, "raw_value_size": 7560774, "raw_average_value_size": 1861, "num_data_blocks": 719, "num_entries": 4062, "num_filter_entries": 4062, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759405750, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:49:10.633414) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7663830 bytes
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:49:10.634400) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 230.3 rd, 140.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 10.9 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(18.1) write-amplify(6.9) OK, records in: 5031, records dropped: 969 output_compression: NoCompression
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:49:10.634421) EVENT_LOG_v1 {"time_micros": 1759405750634411, "job": 14, "event": "compaction_finished", "compaction_time_micros": 54683, "compaction_time_cpu_micros": 27265, "output_level": 6, "num_output_files": 1, "total_output_size": 7663830, "num_input_records": 5031, "num_output_records": 4062, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405750634745, "job": 14, "event": "table_file_deletion", "file_number": 34}
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405750637241, "job": 14, "event": "table_file_deletion", "file_number": 32}
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:49:10.578440) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:49:10.637280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:49:10.637285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:49:10.637287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:49:10.637289) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:49:10 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:49:10.637290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:49:10 compute-0 sudo[218128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhkwydtrxhpifjuhmygsxeqhvknruknu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405750.063281-4165-137426950364569/AnsiballZ_copy.py'
Oct 02 11:49:10 compute-0 sudo[218128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:11 compute-0 python3.9[218130]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405750.063281-4165-137426950364569/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:49:11 compute-0 sudo[218128]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:11 compute-0 ceph-mon[73607]: pgmap v643: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:11 compute-0 sudo[218280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqhhgqlttgjrcfnofwhhboppkjciancl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405751.2968023-4210-17415209430312/AnsiballZ_stat.py'
Oct 02 11:49:11 compute-0 sudo[218280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:11 compute-0 python3.9[218282]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:49:11 compute-0 sudo[218280]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:49:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:11.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:49:12 compute-0 sudo[218403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwwekxvhwbrntuppsdcdjoazzserywia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405751.2968023-4210-17415209430312/AnsiballZ_copy.py'
Oct 02 11:49:12 compute-0 sudo[218403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:12.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:12 compute-0 python3.9[218405]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405751.2968023-4210-17415209430312/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:49:12 compute-0 sudo[218403]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:49:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:49:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:49:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:49:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:49:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:49:12 compute-0 sudo[218556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luobzeqcyqwqxmxhqgegappxryllnlgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405752.659909-4255-196346057965673/AnsiballZ_systemd.py'
Oct 02 11:49:12 compute-0 sudo[218556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:13 compute-0 python3.9[218558]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:49:13 compute-0 systemd[1]: Reloading.
Oct 02 11:49:13 compute-0 systemd-rc-local-generator[218586]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:49:13 compute-0 systemd-sysv-generator[218589]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:49:13 compute-0 ceph-mon[73607]: pgmap v644: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:13 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Oct 02 11:49:13 compute-0 sudo[218556]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:49:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:13.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:49:14 compute-0 sudo[218747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gitxejogutmmfqpeeinsjbnfhtcwxlbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405753.926529-4279-106260930468009/AnsiballZ_systemd.py'
Oct 02 11:49:14 compute-0 sudo[218747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:49:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:14.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:49:14 compute-0 python3.9[218749]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 02 11:49:14 compute-0 systemd[1]: Reloading.
Oct 02 11:49:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:14 compute-0 systemd-sysv-generator[218777]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:49:14 compute-0 systemd-rc-local-generator[218773]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:49:14 compute-0 systemd[1]: Reloading.
Oct 02 11:49:15 compute-0 systemd-sysv-generator[218819]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:49:15 compute-0 systemd-rc-local-generator[218815]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:49:15 compute-0 sudo[218747]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:15 compute-0 podman[218823]: 2025-10-02 11:49:15.362121639 +0000 UTC m=+0.103787423 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 11:49:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:15 compute-0 ceph-mon[73607]: pgmap v645: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:15 compute-0 sshd-session[158383]: Connection closed by 192.168.122.30 port 57100
Oct 02 11:49:15 compute-0 sshd-session[158380]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:49:15 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Oct 02 11:49:15 compute-0 systemd[1]: session-50.scope: Consumed 3min 23.279s CPU time.
Oct 02 11:49:15 compute-0 systemd-logind[789]: Session 50 logged out. Waiting for processes to exit.
Oct 02 11:49:15 compute-0 systemd-logind[789]: Removed session 50.
Oct 02 11:49:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:49:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:15.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:49:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:49:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:16.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:49:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:17 compute-0 ceph-mon[73607]: pgmap v646: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:17.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:49:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:18.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:49:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:18 compute-0 ceph-mon[73607]: pgmap v647: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:19.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:49:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:20.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:49:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:21 compute-0 sshd-session[218877]: Accepted publickey for zuul from 192.168.122.30 port 37556 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 11:49:21 compute-0 systemd-logind[789]: New session 51 of user zuul.
Oct 02 11:49:21 compute-0 systemd[1]: Started Session 51 of User zuul.
Oct 02 11:49:21 compute-0 sshd-session[218877]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:49:21 compute-0 ceph-mon[73607]: pgmap v648: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:21.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:22.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:22 compute-0 python3.9[219031]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:49:22 compute-0 ceph-mon[73607]: pgmap v649: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:23 compute-0 sudo[219112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:23 compute-0 sudo[219112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:23 compute-0 sudo[219112]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:23 compute-0 sudo[219158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:23 compute-0 sudo[219158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:23 compute-0 sudo[219158]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:23 compute-0 sudo[219235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwdlwxmeypwvqaamrjjtucewfphktwon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405763.3630338-67-212746084319468/AnsiballZ_file.py'
Oct 02 11:49:23 compute-0 sudo[219235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:49:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:23.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:49:24 compute-0 python3.9[219237]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:49:24 compute-0 sudo[219235]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:49:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:24.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:49:24 compute-0 sudo[219388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdqadlaphbcuaiydkcdypklcnablylqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405764.2143993-67-46664244674691/AnsiballZ_file.py'
Oct 02 11:49:24 compute-0 sudo[219388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:24 compute-0 python3.9[219390]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:49:24 compute-0 sudo[219388]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:25 compute-0 sudo[219540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmqqaflzejgmzsdhvamlvbcuqruwzxrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405764.9973755-67-25719414714362/AnsiballZ_file.py'
Oct 02 11:49:25 compute-0 sudo[219540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:25 compute-0 python3.9[219542]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:49:25 compute-0 sudo[219540]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:25 compute-0 ceph-mon[73607]: pgmap v650: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:25 compute-0 sudo[219692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prycteqsoflupmdcaoiczsufqslowelo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405765.594822-67-276222434663355/AnsiballZ_file.py'
Oct 02 11:49:25 compute-0 sudo[219692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:25.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:26 compute-0 python3.9[219694]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 02 11:49:26 compute-0 sudo[219692]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:49:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:26.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:49:26 compute-0 sudo[219845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-antdcadblaknhlwnbzsrsjmcnyknajqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405766.2500901-67-224952998530145/AnsiballZ_file.py'
Oct 02 11:49:26 compute-0 sudo[219845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:26 compute-0 python3.9[219847]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:49:26 compute-0 sudo[219845]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:49:26.905 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:49:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:49:26.905 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:49:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:49:26.905 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:49:27 compute-0 sudo[219997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqjujcumpprjnngziaqklvebekgjfqhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405767.0828245-175-174296328705446/AnsiballZ_stat.py'
Oct 02 11:49:27 compute-0 sudo[219997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:27 compute-0 ceph-mon[73607]: pgmap v651: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:27 compute-0 python3.9[219999]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:49:27 compute-0 sudo[219997]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:27.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:49:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:28.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:49:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:28 compute-0 sudo[220152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeptzgeffwnzsrfeprdkosatrqgkirps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405768.1008859-199-61348542330464/AnsiballZ_systemd.py'
Oct 02 11:49:28 compute-0 sudo[220152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:29 compute-0 python3.9[220154]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:49:29 compute-0 systemd[1]: Reloading.
Oct 02 11:49:29 compute-0 systemd-rc-local-generator[220206]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:49:29 compute-0 systemd-sysv-generator[220209]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:49:29 compute-0 sudo[220158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:29 compute-0 sudo[220158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:29 compute-0 sudo[220158]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:29 compute-0 sudo[220152]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:29 compute-0 sudo[220217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:49:29 compute-0 sudo[220217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:29 compute-0 sudo[220217]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:29 compute-0 sudo[220249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:29 compute-0 sudo[220249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:29 compute-0 sudo[220249]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:29 compute-0 sudo[220291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:49:29 compute-0 sudo[220291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:29 compute-0 ceph-mon[73607]: pgmap v652: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:49:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:29.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:49:30 compute-0 sudo[220291]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:30 compute-0 sudo[220473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsixapjeevdndtkzhhsxkrujskdnjokh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405769.6628792-223-247352614253498/AnsiballZ_service_facts.py'
Oct 02 11:49:30 compute-0 sudo[220473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:49:30 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:49:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:49:30 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:49:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:49:30 compute-0 python3.9[220475]: ansible-ansible.builtin.service_facts Invoked
Oct 02 11:49:30 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:49:30 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 407e653e-bcb3-47b5-b3f9-9ca0a3f81ae8 does not exist
Oct 02 11:49:30 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 01d53312-e0f0-4ada-aa57-dae740034e42 does not exist
Oct 02 11:49:30 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 2b315810-06eb-4927-962c-7e1e5ebeaf9b does not exist
Oct 02 11:49:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:49:30 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:49:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:49:30 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:49:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:49:30 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:49:30 compute-0 network[220493]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 11:49:30 compute-0 network[220494]: 'network-scripts' will be removed from distribution in near future.
Oct 02 11:49:30 compute-0 network[220496]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 11:49:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:30.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:49:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:49:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:49:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:49:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:49:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:49:30 compute-0 ceph-mon[73607]: pgmap v653: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:31 compute-0 sudo[220495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:31 compute-0 sudo[220495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:31 compute-0 sudo[220495]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:31 compute-0 podman[220525]: 2025-10-02 11:49:31.378563907 +0000 UTC m=+0.081555104 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 11:49:31 compute-0 sudo[220539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:49:31 compute-0 sudo[220539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:31 compute-0 sudo[220539]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:31 compute-0 sudo[220576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:31 compute-0 sudo[220576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:31 compute-0 sudo[220576]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:31 compute-0 sudo[220604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:49:31 compute-0 sudo[220604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:31 compute-0 podman[220685]: 2025-10-02 11:49:31.85990424 +0000 UTC m=+0.051960764 container create ef0b865a72abc151915cc2954af6e44cb829f8956eec6a2361a7dfd2716520b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_babbage, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:49:31 compute-0 systemd[1]: Started libpod-conmon-ef0b865a72abc151915cc2954af6e44cb829f8956eec6a2361a7dfd2716520b6.scope.
Oct 02 11:49:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:49:31 compute-0 podman[220685]: 2025-10-02 11:49:31.831222521 +0000 UTC m=+0.023279065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:49:31 compute-0 podman[220685]: 2025-10-02 11:49:31.938072349 +0000 UTC m=+0.130128883 container init ef0b865a72abc151915cc2954af6e44cb829f8956eec6a2361a7dfd2716520b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 11:49:31 compute-0 podman[220685]: 2025-10-02 11:49:31.944915328 +0000 UTC m=+0.136971852 container start ef0b865a72abc151915cc2954af6e44cb829f8956eec6a2361a7dfd2716520b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_babbage, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 11:49:31 compute-0 podman[220685]: 2025-10-02 11:49:31.948077856 +0000 UTC m=+0.140134380 container attach ef0b865a72abc151915cc2954af6e44cb829f8956eec6a2361a7dfd2716520b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_babbage, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 11:49:31 compute-0 amazing_babbage[220706]: 167 167
Oct 02 11:49:31 compute-0 systemd[1]: libpod-ef0b865a72abc151915cc2954af6e44cb829f8956eec6a2361a7dfd2716520b6.scope: Deactivated successfully.
Oct 02 11:49:31 compute-0 conmon[220706]: conmon ef0b865a72abc151915c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ef0b865a72abc151915cc2954af6e44cb829f8956eec6a2361a7dfd2716520b6.scope/container/memory.events
Oct 02 11:49:31 compute-0 podman[220685]: 2025-10-02 11:49:31.951757507 +0000 UTC m=+0.143814031 container died ef0b865a72abc151915cc2954af6e44cb829f8956eec6a2361a7dfd2716520b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_babbage, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:49:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-4861346d18892a08ebada14a8a98c00551834017d273d037d265089ab52430d4-merged.mount: Deactivated successfully.
Oct 02 11:49:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:31.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:31 compute-0 podman[220685]: 2025-10-02 11:49:31.997749662 +0000 UTC m=+0.189806186 container remove ef0b865a72abc151915cc2954af6e44cb829f8956eec6a2361a7dfd2716520b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_babbage, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:49:32 compute-0 systemd[1]: libpod-conmon-ef0b865a72abc151915cc2954af6e44cb829f8956eec6a2361a7dfd2716520b6.scope: Deactivated successfully.
Oct 02 11:49:32 compute-0 podman[220741]: 2025-10-02 11:49:32.184321158 +0000 UTC m=+0.048673542 container create 3232983738cb2e8ce595c63d2df83d45eb782fd22bbd25b32fa86ca2eef723fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kepler, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 11:49:32 compute-0 systemd[1]: Started libpod-conmon-3232983738cb2e8ce595c63d2df83d45eb782fd22bbd25b32fa86ca2eef723fe.scope.
Oct 02 11:49:32 compute-0 podman[220741]: 2025-10-02 11:49:32.165815281 +0000 UTC m=+0.030167675 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:49:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:49:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ae318b39e9474bf83a4850fb1b56090014a310465b566b420479948410f440/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:49:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ae318b39e9474bf83a4850fb1b56090014a310465b566b420479948410f440/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:49:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ae318b39e9474bf83a4850fb1b56090014a310465b566b420479948410f440/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:49:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ae318b39e9474bf83a4850fb1b56090014a310465b566b420479948410f440/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:49:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ae318b39e9474bf83a4850fb1b56090014a310465b566b420479948410f440/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:49:32 compute-0 podman[220741]: 2025-10-02 11:49:32.299747197 +0000 UTC m=+0.164099601 container init 3232983738cb2e8ce595c63d2df83d45eb782fd22bbd25b32fa86ca2eef723fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kepler, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 11:49:32 compute-0 podman[220741]: 2025-10-02 11:49:32.306896274 +0000 UTC m=+0.171248658 container start 3232983738cb2e8ce595c63d2df83d45eb782fd22bbd25b32fa86ca2eef723fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 11:49:32 compute-0 podman[220741]: 2025-10-02 11:49:32.311114858 +0000 UTC m=+0.175467272 container attach 3232983738cb2e8ce595c63d2df83d45eb782fd22bbd25b32fa86ca2eef723fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 11:49:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:49:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:32.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:49:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:33 compute-0 agitated_kepler[220762]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:49:33 compute-0 agitated_kepler[220762]: --> relative data size: 1.0
Oct 02 11:49:33 compute-0 agitated_kepler[220762]: --> All data devices are unavailable
Oct 02 11:49:33 compute-0 podman[220741]: 2025-10-02 11:49:33.130033814 +0000 UTC m=+0.994386198 container died 3232983738cb2e8ce595c63d2df83d45eb782fd22bbd25b32fa86ca2eef723fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:49:33 compute-0 systemd[1]: libpod-3232983738cb2e8ce595c63d2df83d45eb782fd22bbd25b32fa86ca2eef723fe.scope: Deactivated successfully.
Oct 02 11:49:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-12ae318b39e9474bf83a4850fb1b56090014a310465b566b420479948410f440-merged.mount: Deactivated successfully.
Oct 02 11:49:33 compute-0 podman[220741]: 2025-10-02 11:49:33.187726958 +0000 UTC m=+1.052079342 container remove 3232983738cb2e8ce595c63d2df83d45eb782fd22bbd25b32fa86ca2eef723fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kepler, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:49:33 compute-0 systemd[1]: libpod-conmon-3232983738cb2e8ce595c63d2df83d45eb782fd22bbd25b32fa86ca2eef723fe.scope: Deactivated successfully.
Oct 02 11:49:33 compute-0 sudo[220604]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:33 compute-0 sudo[220847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:33 compute-0 sudo[220847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:33 compute-0 sudo[220847]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:33 compute-0 sudo[220875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:49:33 compute-0 sudo[220875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:33 compute-0 sudo[220875]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:33 compute-0 sudo[220903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:33 compute-0 sudo[220903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:33 compute-0 sudo[220903]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:33 compute-0 sudo[220931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 11:49:33 compute-0 sudo[220931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:33 compute-0 sudo[220473]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:33 compute-0 podman[221026]: 2025-10-02 11:49:33.71643745 +0000 UTC m=+0.032362860 container create b6d17b5488bc4a12b4786748e7c9a1fd332e1aa2af7ecef2046c83be8ee9f887 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:49:33 compute-0 ceph-mon[73607]: pgmap v654: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:33 compute-0 systemd[1]: Started libpod-conmon-b6d17b5488bc4a12b4786748e7c9a1fd332e1aa2af7ecef2046c83be8ee9f887.scope.
Oct 02 11:49:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:49:33 compute-0 podman[221026]: 2025-10-02 11:49:33.784705985 +0000 UTC m=+0.100631405 container init b6d17b5488bc4a12b4786748e7c9a1fd332e1aa2af7ecef2046c83be8ee9f887 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 11:49:33 compute-0 podman[221026]: 2025-10-02 11:49:33.790665112 +0000 UTC m=+0.106590522 container start b6d17b5488bc4a12b4786748e7c9a1fd332e1aa2af7ecef2046c83be8ee9f887 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_roentgen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:49:33 compute-0 podman[221026]: 2025-10-02 11:49:33.793891381 +0000 UTC m=+0.109816811 container attach b6d17b5488bc4a12b4786748e7c9a1fd332e1aa2af7ecef2046c83be8ee9f887 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_roentgen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:49:33 compute-0 suspicious_roentgen[221043]: 167 167
Oct 02 11:49:33 compute-0 systemd[1]: libpod-b6d17b5488bc4a12b4786748e7c9a1fd332e1aa2af7ecef2046c83be8ee9f887.scope: Deactivated successfully.
Oct 02 11:49:33 compute-0 conmon[221043]: conmon b6d17b5488bc4a12b478 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b6d17b5488bc4a12b4786748e7c9a1fd332e1aa2af7ecef2046c83be8ee9f887.scope/container/memory.events
Oct 02 11:49:33 compute-0 podman[221026]: 2025-10-02 11:49:33.795329587 +0000 UTC m=+0.111254987 container died b6d17b5488bc4a12b4786748e7c9a1fd332e1aa2af7ecef2046c83be8ee9f887 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 11:49:33 compute-0 podman[221026]: 2025-10-02 11:49:33.702382803 +0000 UTC m=+0.018308223 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:49:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf594820bc41c6ade67e3a0e47c649ca1648217ae9e49c2ee9e9f0ab9164d38a-merged.mount: Deactivated successfully.
Oct 02 11:49:33 compute-0 podman[221026]: 2025-10-02 11:49:33.838508003 +0000 UTC m=+0.154433403 container remove b6d17b5488bc4a12b4786748e7c9a1fd332e1aa2af7ecef2046c83be8ee9f887 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:49:33 compute-0 systemd[1]: libpod-conmon-b6d17b5488bc4a12b4786748e7c9a1fd332e1aa2af7ecef2046c83be8ee9f887.scope: Deactivated successfully.
Oct 02 11:49:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:33.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:33 compute-0 podman[221069]: 2025-10-02 11:49:33.994531675 +0000 UTC m=+0.035016096 container create e861e93e1cc3beb595e05ee460ea6b947f114160ead5b6037d8ad4df64d2c0a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:49:34 compute-0 systemd[1]: Started libpod-conmon-e861e93e1cc3beb595e05ee460ea6b947f114160ead5b6037d8ad4df64d2c0a3.scope.
Oct 02 11:49:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f46638ebe183e860553d2d2985bdb87b43e9338b8fe4c292849c9832834f7c6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f46638ebe183e860553d2d2985bdb87b43e9338b8fe4c292849c9832834f7c6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f46638ebe183e860553d2d2985bdb87b43e9338b8fe4c292849c9832834f7c6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f46638ebe183e860553d2d2985bdb87b43e9338b8fe4c292849c9832834f7c6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:49:34 compute-0 podman[221069]: 2025-10-02 11:49:34.064109722 +0000 UTC m=+0.104594243 container init e861e93e1cc3beb595e05ee460ea6b947f114160ead5b6037d8ad4df64d2c0a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 11:49:34 compute-0 podman[221069]: 2025-10-02 11:49:34.071500984 +0000 UTC m=+0.111985405 container start e861e93e1cc3beb595e05ee460ea6b947f114160ead5b6037d8ad4df64d2c0a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:49:34 compute-0 podman[221069]: 2025-10-02 11:49:33.979127135 +0000 UTC m=+0.019611566 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:49:34 compute-0 podman[221069]: 2025-10-02 11:49:34.074753355 +0000 UTC m=+0.115237856 container attach e861e93e1cc3beb595e05ee460ea6b947f114160ead5b6037d8ad4df64d2c0a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 11:49:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:34.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:34 compute-0 great_hawking[221085]: {
Oct 02 11:49:34 compute-0 great_hawking[221085]:     "1": [
Oct 02 11:49:34 compute-0 great_hawking[221085]:         {
Oct 02 11:49:34 compute-0 great_hawking[221085]:             "devices": [
Oct 02 11:49:34 compute-0 great_hawking[221085]:                 "/dev/loop3"
Oct 02 11:49:34 compute-0 great_hawking[221085]:             ],
Oct 02 11:49:34 compute-0 great_hawking[221085]:             "lv_name": "ceph_lv0",
Oct 02 11:49:34 compute-0 great_hawking[221085]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:49:34 compute-0 great_hawking[221085]:             "lv_size": "7511998464",
Oct 02 11:49:34 compute-0 great_hawking[221085]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:49:34 compute-0 great_hawking[221085]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:49:34 compute-0 great_hawking[221085]:             "name": "ceph_lv0",
Oct 02 11:49:34 compute-0 great_hawking[221085]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:49:34 compute-0 great_hawking[221085]:             "tags": {
Oct 02 11:49:34 compute-0 great_hawking[221085]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:49:34 compute-0 great_hawking[221085]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:49:34 compute-0 great_hawking[221085]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:49:34 compute-0 great_hawking[221085]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:49:34 compute-0 great_hawking[221085]:                 "ceph.cluster_name": "ceph",
Oct 02 11:49:34 compute-0 great_hawking[221085]:                 "ceph.crush_device_class": "",
Oct 02 11:49:34 compute-0 great_hawking[221085]:                 "ceph.encrypted": "0",
Oct 02 11:49:34 compute-0 great_hawking[221085]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:49:34 compute-0 great_hawking[221085]:                 "ceph.osd_id": "1",
Oct 02 11:49:34 compute-0 great_hawking[221085]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:49:34 compute-0 great_hawking[221085]:                 "ceph.type": "block",
Oct 02 11:49:34 compute-0 great_hawking[221085]:                 "ceph.vdo": "0"
Oct 02 11:49:34 compute-0 great_hawking[221085]:             },
Oct 02 11:49:34 compute-0 great_hawking[221085]:             "type": "block",
Oct 02 11:49:34 compute-0 great_hawking[221085]:             "vg_name": "ceph_vg0"
Oct 02 11:49:34 compute-0 great_hawking[221085]:         }
Oct 02 11:49:34 compute-0 great_hawking[221085]:     ]
Oct 02 11:49:34 compute-0 great_hawking[221085]: }
Oct 02 11:49:34 compute-0 systemd[1]: libpod-e861e93e1cc3beb595e05ee460ea6b947f114160ead5b6037d8ad4df64d2c0a3.scope: Deactivated successfully.
Oct 02 11:49:34 compute-0 podman[221069]: 2025-10-02 11:49:34.777117093 +0000 UTC m=+0.817601514 container died e861e93e1cc3beb595e05ee460ea6b947f114160ead5b6037d8ad4df64d2c0a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:49:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-f46638ebe183e860553d2d2985bdb87b43e9338b8fe4c292849c9832834f7c6b-merged.mount: Deactivated successfully.
Oct 02 11:49:34 compute-0 ceph-mon[73607]: pgmap v655: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:34 compute-0 podman[221069]: 2025-10-02 11:49:34.839258297 +0000 UTC m=+0.879742718 container remove e861e93e1cc3beb595e05ee460ea6b947f114160ead5b6037d8ad4df64d2c0a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:49:34 compute-0 systemd[1]: libpod-conmon-e861e93e1cc3beb595e05ee460ea6b947f114160ead5b6037d8ad4df64d2c0a3.scope: Deactivated successfully.
Oct 02 11:49:34 compute-0 sudo[220931]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:34 compute-0 sudo[221109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:34 compute-0 sudo[221109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:34 compute-0 sudo[221109]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:34 compute-0 sudo[221134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:49:34 compute-0 sudo[221134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:34 compute-0 sudo[221134]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:35 compute-0 sudo[221159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:35 compute-0 sudo[221159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:35 compute-0 sudo[221159]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:35 compute-0 sudo[221184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 11:49:35 compute-0 sudo[221184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:35 compute-0 podman[221347]: 2025-10-02 11:49:35.357720486 +0000 UTC m=+0.041805693 container create bdb0e4caa0f5a1cb924cc3b1b79a67adc91bfbdec691ca5d283b5d51ec24cd58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:49:35 compute-0 sudo[221387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nogffrusctyglllurivsrwdsbnouszdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405775.1196454-247-62999714007697/AnsiballZ_systemd.py'
Oct 02 11:49:35 compute-0 sudo[221387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:35 compute-0 systemd[1]: Started libpod-conmon-bdb0e4caa0f5a1cb924cc3b1b79a67adc91bfbdec691ca5d283b5d51ec24cd58.scope.
Oct 02 11:49:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:49:35 compute-0 podman[221347]: 2025-10-02 11:49:35.337896346 +0000 UTC m=+0.021981563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:49:35 compute-0 podman[221347]: 2025-10-02 11:49:35.438118911 +0000 UTC m=+0.122204178 container init bdb0e4caa0f5a1cb924cc3b1b79a67adc91bfbdec691ca5d283b5d51ec24cd58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:49:35 compute-0 podman[221347]: 2025-10-02 11:49:35.444838476 +0000 UTC m=+0.128923673 container start bdb0e4caa0f5a1cb924cc3b1b79a67adc91bfbdec691ca5d283b5d51ec24cd58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chaplygin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 11:49:35 compute-0 podman[221347]: 2025-10-02 11:49:35.447719077 +0000 UTC m=+0.131804374 container attach bdb0e4caa0f5a1cb924cc3b1b79a67adc91bfbdec691ca5d283b5d51ec24cd58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chaplygin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 11:49:35 compute-0 exciting_chaplygin[221393]: 167 167
Oct 02 11:49:35 compute-0 systemd[1]: libpod-bdb0e4caa0f5a1cb924cc3b1b79a67adc91bfbdec691ca5d283b5d51ec24cd58.scope: Deactivated successfully.
Oct 02 11:49:35 compute-0 podman[221347]: 2025-10-02 11:49:35.450053435 +0000 UTC m=+0.134138642 container died bdb0e4caa0f5a1cb924cc3b1b79a67adc91bfbdec691ca5d283b5d51ec24cd58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct 02 11:49:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c46b8136c41c9bbef46f9312d37c466dd7cdb5bf64ced8634b429b70452cc62-merged.mount: Deactivated successfully.
Oct 02 11:49:35 compute-0 podman[221347]: 2025-10-02 11:49:35.489707344 +0000 UTC m=+0.173792541 container remove bdb0e4caa0f5a1cb924cc3b1b79a67adc91bfbdec691ca5d283b5d51ec24cd58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 11:49:35 compute-0 systemd[1]: libpod-conmon-bdb0e4caa0f5a1cb924cc3b1b79a67adc91bfbdec691ca5d283b5d51ec24cd58.scope: Deactivated successfully.
Oct 02 11:49:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:35 compute-0 podman[221418]: 2025-10-02 11:49:35.643987013 +0000 UTC m=+0.038960663 container create af441e6d12b1dc312ec2d0cbfef578cde904b99131d3d18b4b91f6d816068e4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_jones, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 11:49:35 compute-0 systemd[1]: Started libpod-conmon-af441e6d12b1dc312ec2d0cbfef578cde904b99131d3d18b4b91f6d816068e4e.scope.
Oct 02 11:49:35 compute-0 python3.9[221392]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:49:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:49:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e836eaaa0bd1c70255ed02b4f98dcede1fa110cfbf1c4bc96d18f5ce8382036/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:49:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e836eaaa0bd1c70255ed02b4f98dcede1fa110cfbf1c4bc96d18f5ce8382036/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:49:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e836eaaa0bd1c70255ed02b4f98dcede1fa110cfbf1c4bc96d18f5ce8382036/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:49:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e836eaaa0bd1c70255ed02b4f98dcede1fa110cfbf1c4bc96d18f5ce8382036/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:49:35 compute-0 podman[221418]: 2025-10-02 11:49:35.720572873 +0000 UTC m=+0.115546553 container init af441e6d12b1dc312ec2d0cbfef578cde904b99131d3d18b4b91f6d816068e4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_jones, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:49:35 compute-0 podman[221418]: 2025-10-02 11:49:35.626137902 +0000 UTC m=+0.021111572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:49:35 compute-0 podman[221418]: 2025-10-02 11:49:35.729729279 +0000 UTC m=+0.124702929 container start af441e6d12b1dc312ec2d0cbfef578cde904b99131d3d18b4b91f6d816068e4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_jones, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:49:35 compute-0 podman[221418]: 2025-10-02 11:49:35.732907937 +0000 UTC m=+0.127881587 container attach af441e6d12b1dc312ec2d0cbfef578cde904b99131d3d18b4b91f6d816068e4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 11:49:35 compute-0 systemd[1]: Reloading.
Oct 02 11:49:35 compute-0 systemd-rc-local-generator[221466]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:49:35 compute-0 systemd-sysv-generator[221469]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:49:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:35.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:36 compute-0 sudo[221387]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:36.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:36 compute-0 epic_jones[221434]: {
Oct 02 11:49:36 compute-0 epic_jones[221434]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 11:49:36 compute-0 epic_jones[221434]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:49:36 compute-0 epic_jones[221434]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:49:36 compute-0 epic_jones[221434]:         "osd_id": 1,
Oct 02 11:49:36 compute-0 epic_jones[221434]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:49:36 compute-0 epic_jones[221434]:         "type": "bluestore"
Oct 02 11:49:36 compute-0 epic_jones[221434]:     }
Oct 02 11:49:36 compute-0 epic_jones[221434]: }
Oct 02 11:49:36 compute-0 systemd[1]: libpod-af441e6d12b1dc312ec2d0cbfef578cde904b99131d3d18b4b91f6d816068e4e.scope: Deactivated successfully.
Oct 02 11:49:36 compute-0 podman[221516]: 2025-10-02 11:49:36.598138566 +0000 UTC m=+0.044367996 container died af441e6d12b1dc312ec2d0cbfef578cde904b99131d3d18b4b91f6d816068e4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_jones, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 11:49:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e836eaaa0bd1c70255ed02b4f98dcede1fa110cfbf1c4bc96d18f5ce8382036-merged.mount: Deactivated successfully.
Oct 02 11:49:36 compute-0 podman[221516]: 2025-10-02 11:49:36.690200579 +0000 UTC m=+0.136429989 container remove af441e6d12b1dc312ec2d0cbfef578cde904b99131d3d18b4b91f6d816068e4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_jones, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:49:36 compute-0 systemd[1]: libpod-conmon-af441e6d12b1dc312ec2d0cbfef578cde904b99131d3d18b4b91f6d816068e4e.scope: Deactivated successfully.
Oct 02 11:49:36 compute-0 sudo[221184]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:49:36 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:49:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:49:36 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:49:36 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev f9e85a84-56df-4ff7-bf92-e6d3ea462f02 does not exist
Oct 02 11:49:36 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 53d40826-0d1f-47be-85ab-dc9c67c41cf5 does not exist
Oct 02 11:49:36 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 02e818c3-3421-43ee-9eb4-77ec8d25293b does not exist
Oct 02 11:49:36 compute-0 sudo[221606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:36 compute-0 sudo[221606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:36 compute-0 sudo[221606]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:36 compute-0 sudo[221655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:49:36 compute-0 sudo[221655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:36 compute-0 sudo[221655]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:37 compute-0 python3.9[221706]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:49:37 compute-0 ceph-mon[73607]: pgmap v656: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:37 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:49:37 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:49:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.003000072s ======
Oct 02 11:49:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:37.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000072s
Oct 02 11:49:38 compute-0 sudo[221856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myhoncbcxigtphsigfqdyaunazwgavkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405777.2854784-298-100879899380215/AnsiballZ_podman_container.py'
Oct 02 11:49:38 compute-0 sudo[221856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:38 compute-0 python3.9[221858]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 02 11:49:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:49:38 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 11:49:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:38.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:49:38 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 11:49:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:39 compute-0 podman[221873]: 2025-10-02 11:49:39.582112908 +0000 UTC m=+1.210668797 image pull 1b3fd7f2436e5c6f2e28c01b83721476c7b295789c77b3d63e30f49404389ea1 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 02 11:49:39 compute-0 ceph-mon[73607]: pgmap v657: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:39 compute-0 podman[221932]: 2025-10-02 11:49:39.706560671 +0000 UTC m=+0.040771048 container create 4b17f28533ece21c0c5ac36bb7a23ae47a80a786d8cdb549907af22e7487bd48 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 11:49:39 compute-0 NetworkManager[44987]: <info>  [1759405779.7485] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/23)
Oct 02 11:49:39 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Oct 02 11:49:39 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 02 11:49:39 compute-0 kernel: veth0: entered allmulticast mode
Oct 02 11:49:39 compute-0 kernel: veth0: entered promiscuous mode
Oct 02 11:49:39 compute-0 NetworkManager[44987]: <info>  [1759405779.7655] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/24)
Oct 02 11:49:39 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Oct 02 11:49:39 compute-0 kernel: podman0: port 1(veth0) entered forwarding state
Oct 02 11:49:39 compute-0 NetworkManager[44987]: <info>  [1759405779.7674] device (veth0): carrier: link connected
Oct 02 11:49:39 compute-0 NetworkManager[44987]: <info>  [1759405779.7677] device (podman0): carrier: link connected
Oct 02 11:49:39 compute-0 systemd-udevd[221958]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 11:49:39 compute-0 systemd-udevd[221961]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 11:49:39 compute-0 podman[221932]: 2025-10-02 11:49:39.688093135 +0000 UTC m=+0.022303532 image pull 1b3fd7f2436e5c6f2e28c01b83721476c7b295789c77b3d63e30f49404389ea1 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 02 11:49:39 compute-0 NetworkManager[44987]: <info>  [1759405779.7898] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 11:49:39 compute-0 NetworkManager[44987]: <info>  [1759405779.7907] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 02 11:49:39 compute-0 NetworkManager[44987]: <info>  [1759405779.7914] device (podman0): Activation: starting connection 'podman0' (f6dd8b0e-d0e4-4f56-ab3a-91e8ab1fdf50)
Oct 02 11:49:39 compute-0 NetworkManager[44987]: <info>  [1759405779.7915] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 02 11:49:39 compute-0 NetworkManager[44987]: <info>  [1759405779.7917] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 02 11:49:39 compute-0 NetworkManager[44987]: <info>  [1759405779.7918] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 02 11:49:39 compute-0 NetworkManager[44987]: <info>  [1759405779.7920] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 02 11:49:39 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 02 11:49:39 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 02 11:49:39 compute-0 NetworkManager[44987]: <info>  [1759405779.8269] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 02 11:49:39 compute-0 NetworkManager[44987]: <info>  [1759405779.8277] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 02 11:49:39 compute-0 NetworkManager[44987]: <info>  [1759405779.8283] device (podman0): Activation: successful, device activated.
Oct 02 11:49:39 compute-0 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct 02 11:49:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:39.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:39 compute-0 systemd[1]: Started libpod-conmon-4b17f28533ece21c0c5ac36bb7a23ae47a80a786d8cdb549907af22e7487bd48.scope.
Oct 02 11:49:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:49:40 compute-0 podman[221932]: 2025-10-02 11:49:40.049685321 +0000 UTC m=+0.383895708 container init 4b17f28533ece21c0c5ac36bb7a23ae47a80a786d8cdb549907af22e7487bd48 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 11:49:40 compute-0 podman[221932]: 2025-10-02 11:49:40.057925585 +0000 UTC m=+0.392135962 container start 4b17f28533ece21c0c5ac36bb7a23ae47a80a786d8cdb549907af22e7487bd48 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 11:49:40 compute-0 iscsid_config[222089]: iqn.1994-05.com.redhat:89256e26a090
Oct 02 11:49:40 compute-0 systemd[1]: libpod-4b17f28533ece21c0c5ac36bb7a23ae47a80a786d8cdb549907af22e7487bd48.scope: Deactivated successfully.
Oct 02 11:49:40 compute-0 podman[221932]: 2025-10-02 11:49:40.06175696 +0000 UTC m=+0.395967357 container attach 4b17f28533ece21c0c5ac36bb7a23ae47a80a786d8cdb549907af22e7487bd48 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct 02 11:49:40 compute-0 podman[221932]: 2025-10-02 11:49:40.063094693 +0000 UTC m=+0.397305070 container died 4b17f28533ece21c0c5ac36bb7a23ae47a80a786d8cdb549907af22e7487bd48 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct 02 11:49:40 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 02 11:49:40 compute-0 kernel: veth0 (unregistering): left allmulticast mode
Oct 02 11:49:40 compute-0 kernel: veth0 (unregistering): left promiscuous mode
Oct 02 11:49:40 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 02 11:49:40 compute-0 NetworkManager[44987]: <info>  [1759405780.1245] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 11:49:40 compute-0 systemd[1]: run-netns-netns\x2d9c44e380\x2d6ba8\x2dfd99\x2db26f\x2dd5c26d3ad833.mount: Deactivated successfully.
Oct 02 11:49:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4b17f28533ece21c0c5ac36bb7a23ae47a80a786d8cdb549907af22e7487bd48-userdata-shm.mount: Deactivated successfully.
Oct 02 11:49:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:49:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:40.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:49:40 compute-0 podman[221932]: 2025-10-02 11:49:40.42958412 +0000 UTC m=+0.763794497 container remove 4b17f28533ece21c0c5ac36bb7a23ae47a80a786d8cdb549907af22e7487bd48 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 11:49:40 compute-0 python3.9[221858]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid:current-podified /usr/sbin/iscsi-iname
Oct 02 11:49:40 compute-0 systemd[1]: libpod-conmon-4b17f28533ece21c0c5ac36bb7a23ae47a80a786d8cdb549907af22e7487bd48.scope: Deactivated successfully.
Oct 02 11:49:40 compute-0 python3.9[221858]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: 
                                             DEPRECATED command:
                                             It is recommended to use Quadlets for running containers and pods under systemd.
                                             
                                             Please refer to podman-systemd.unit(5) for details.
                                             Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct 02 11:49:40 compute-0 sudo[221856]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-74e1b5f540936eee676f0ff07fe64bd7a0ffdb5de82f56288f99ccc8950ee6e2-merged.mount: Deactivated successfully.
Oct 02 11:49:41 compute-0 sudo[222327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xebjkxanqmjmmoxmiriwzmmrzkqhygye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405780.9168885-322-253459023797520/AnsiballZ_stat.py'
Oct 02 11:49:41 compute-0 sudo[222327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:41 compute-0 python3.9[222329]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:49:41 compute-0 sudo[222327]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:41 compute-0 ceph-mon[73607]: pgmap v658: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.002000048s ======
Oct 02 11:49:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:41.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Oct 02 11:49:42 compute-0 sudo[222450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfftphwwrslahqocwfkzogsmcluhgeak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405780.9168885-322-253459023797520/AnsiballZ_copy.py'
Oct 02 11:49:42 compute-0 sudo[222450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:42 compute-0 python3.9[222452]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405780.9168885-322-253459023797520/.source.iscsi _original_basename=.3vva3n1v follow=False checksum=fe4b01a670c8605cfe4a2293333ecf639af37a83 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:49:42 compute-0 sudo[222450]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_11:49:42
Oct 02 11:49:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:49:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 11:49:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['volumes', 'images', '.mgr', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'default.rgw.control']
Oct 02 11:49:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:49:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:42.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:49:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:49:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:49:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:49:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:49:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:49:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:49:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:49:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:49:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:49:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:49:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:49:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:49:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:49:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:49:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:49:42 compute-0 sudo[222603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjrhymuaoqclbkxpycsliuxppixsbxir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405782.5115185-367-7167271584219/AnsiballZ_file.py'
Oct 02 11:49:42 compute-0 ceph-mon[73607]: pgmap v659: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:42 compute-0 sudo[222603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:43 compute-0 python3.9[222605]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:49:43 compute-0 sudo[222603]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:43 compute-0 sudo[222756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:43 compute-0 sudo[222756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:43 compute-0 sudo[222756]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:43 compute-0 python3.9[222755]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:49:43 compute-0 sudo[222781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:43 compute-0 sudo[222781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:43 compute-0 sudo[222781]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:43.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:44.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:44 compute-0 sudo[222958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzhxivkkltuectnpqaaavmdwgxyqkdtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405784.3124475-418-53586392298890/AnsiballZ_lineinfile.py'
Oct 02 11:49:44 compute-0 sudo[222958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:44 compute-0 python3.9[222960]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:49:44 compute-0 sudo[222958]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:45 compute-0 ceph-mon[73607]: pgmap v660: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:45 compute-0 podman[223060]: 2025-10-02 11:49:45.950434367 +0000 UTC m=+0.085657396 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:49:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:49:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:45.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:49:46 compute-0 sudo[223136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfajsompgxaakrcbxclmsnojokuyxikq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405785.5983884-445-186482562664513/AnsiballZ_file.py'
Oct 02 11:49:46 compute-0 sudo[223136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:46 compute-0 python3.9[223138]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:49:46 compute-0 sudo[223136]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:46.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:46 compute-0 sudo[223289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vymofawnvanwujylqrjlhhbytabigljm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405786.5586672-469-263864277947853/AnsiballZ_stat.py'
Oct 02 11:49:46 compute-0 sudo[223289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:47 compute-0 ceph-mon[73607]: pgmap v661: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:47 compute-0 python3.9[223291]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:49:47 compute-0 sudo[223289]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:47 compute-0 sudo[223367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztnocpkkdmdtujrvajogrxrmvgxsvsuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405786.5586672-469-263864277947853/AnsiballZ_file.py'
Oct 02 11:49:47 compute-0 sudo[223367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:47 compute-0 python3.9[223369]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:49:47 compute-0 sudo[223367]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:47 compute-0 sudo[223519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwrsoixvbpreucotqbldoynmxhazyope ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405787.6977909-469-247298739773153/AnsiballZ_stat.py'
Oct 02 11:49:47 compute-0 sudo[223519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:47.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:48 compute-0 python3.9[223521]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:49:48 compute-0 sudo[223519]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:48 compute-0 sudo[223598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikqbyzkjduvaomjuckfyqwiywiwixtwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405787.6977909-469-247298739773153/AnsiballZ_file.py'
Oct 02 11:49:48 compute-0 sudo[223598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:49:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:48.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:49:48 compute-0 python3.9[223600]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:49:48 compute-0 sudo[223598]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:49 compute-0 sudo[223750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geiarlulxsnplahhxyoopeshlkpyylrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405789.0585866-538-146164204869229/AnsiballZ_file.py'
Oct 02 11:49:49 compute-0 sudo[223750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:49 compute-0 python3.9[223752]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:49:49 compute-0 sudo[223750]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:49 compute-0 ceph-mon[73607]: pgmap v662: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:49.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:50 compute-0 sudo[223902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxwafnufjknxcvkstgcuflwmtzttaaef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405789.8002172-562-220756596469202/AnsiballZ_stat.py'
Oct 02 11:49:50 compute-0 sudo[223902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:50 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 02 11:49:50 compute-0 python3.9[223904]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:49:50 compute-0 sudo[223902]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:50.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:50 compute-0 sudo[223981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytuczlbqfvudsmwcnkqkfjauymnyxwrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405789.8002172-562-220756596469202/AnsiballZ_file.py'
Oct 02 11:49:50 compute-0 sudo[223981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:50 compute-0 python3.9[223983]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:49:50 compute-0 sudo[223981]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:51 compute-0 sudo[224133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-regmwbrrhafulbzbvrpophhugqoebjeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405791.1028085-598-54503866777420/AnsiballZ_stat.py'
Oct 02 11:49:51 compute-0 sudo[224133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:51 compute-0 python3.9[224135]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:49:51 compute-0 sudo[224133]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:51 compute-0 ceph-mon[73607]: pgmap v663: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:51 compute-0 sudo[224211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppmcqkfvbgcuoaatujqvwjvonrzvxxac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405791.1028085-598-54503866777420/AnsiballZ_file.py'
Oct 02 11:49:51 compute-0 sudo[224211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:51.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:52 compute-0 python3.9[224213]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:49:52 compute-0 sudo[224211]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:52.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:52 compute-0 sudo[224364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqihpzysrzwtxdzfuvemamzvpeogkrow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405792.3246772-634-168336980493586/AnsiballZ_systemd.py'
Oct 02 11:49:52 compute-0 sudo[224364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:52 compute-0 python3.9[224366]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:49:52 compute-0 systemd[1]: Reloading.
Oct 02 11:49:53 compute-0 systemd-sysv-generator[224398]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:49:53 compute-0 systemd-rc-local-generator[224394]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:49:53 compute-0 sudo[224364]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:53 compute-0 ceph-mon[73607]: pgmap v664: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:53 compute-0 sudo[224554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuqwanzoyuyjufcgkirzupvykbywpoky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405793.621672-658-41239151437858/AnsiballZ_stat.py'
Oct 02 11:49:53 compute-0 sudo[224554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:49:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:49:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:49:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:49:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:49:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:49:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:49:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:49:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:49:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:49:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:49:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:49:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:49:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:49:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:49:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:49:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 11:49:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:49:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:49:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:49:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:49:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:49:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:49:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:53.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:54 compute-0 python3.9[224556]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:49:54 compute-0 sudo[224554]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:54 compute-0 sudo[224632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmfuvcuflnxvwfcrkmowzveouocfgnhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405793.621672-658-41239151437858/AnsiballZ_file.py'
Oct 02 11:49:54 compute-0 sudo[224632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:54.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:54 compute-0 python3.9[224634]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:49:54 compute-0 sudo[224632]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:55 compute-0 sudo[224785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxiofysnkfuwpuxsrjqcoiolvapsamux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405794.8705645-694-151176691134340/AnsiballZ_stat.py'
Oct 02 11:49:55 compute-0 sudo[224785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:55 compute-0 python3.9[224787]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:49:55 compute-0 sudo[224785]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:55 compute-0 sudo[224863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhwexzfmmqtreqfsnbazsujsazdnnpcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405794.8705645-694-151176691134340/AnsiballZ_file.py'
Oct 02 11:49:55 compute-0 sudo[224863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:55 compute-0 ceph-mon[73607]: pgmap v665: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:55 compute-0 python3.9[224865]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:49:55 compute-0 sudo[224863]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:49:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:56.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:49:56 compute-0 sudo[225016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbjpmjiqdxjwsfnncjrxvepupuxdxrer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405796.0730536-730-192031278378536/AnsiballZ_systemd.py'
Oct 02 11:49:56 compute-0 sudo[225016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:56.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:56 compute-0 python3.9[225018]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:49:56 compute-0 systemd[1]: Reloading.
Oct 02 11:49:56 compute-0 systemd-sysv-generator[225049]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:49:56 compute-0 systemd-rc-local-generator[225044]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:49:56 compute-0 ceph-mon[73607]: pgmap v666: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:57 compute-0 systemd[1]: Starting Create netns directory...
Oct 02 11:49:57 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 11:49:57 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 11:49:57 compute-0 systemd[1]: Finished Create netns directory.
Oct 02 11:49:57 compute-0 sudo[225016]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:57 compute-0 sudo[225209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grgvygnwtgstdhdshdrsmshfgrjboegi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405797.5721796-760-66039538464182/AnsiballZ_file.py'
Oct 02 11:49:57 compute-0 sudo[225209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:49:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:58.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:49:58 compute-0 python3.9[225211]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:49:58 compute-0 sudo[225209]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:49:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:49:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:58.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:49:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:58 compute-0 sudo[225362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utihadphhsafggznejytlycbnrqgyond ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405798.3779588-784-155418289467160/AnsiballZ_stat.py'
Oct 02 11:49:58 compute-0 sudo[225362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:58 compute-0 python3.9[225364]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:49:58 compute-0 sudo[225362]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:59 compute-0 sudo[225485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xldgisxwxlljcdwzepjtdrpuoghatuyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405798.3779588-784-155418289467160/AnsiballZ_copy.py'
Oct 02 11:49:59 compute-0 sudo[225485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:59 compute-0 python3.9[225487]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405798.3779588-784-155418289467160/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:49:59 compute-0 sudo[225485]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:59 compute-0 ceph-mon[73607]: pgmap v667: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:00 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 02 11:50:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:50:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:00.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:50:00 compute-0 sudo[225638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncdkzttuaxuzvrtojjirrxnyqhpfnyth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405800.0529306-835-82282641120209/AnsiballZ_file.py'
Oct 02 11:50:00 compute-0 sudo[225638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:00.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:00 compute-0 python3.9[225640]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:50:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:00 compute-0 sudo[225638]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:00 compute-0 ceph-mon[73607]: overall HEALTH_OK
Oct 02 11:50:01 compute-0 sudo[225790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmjlnmdcaoykiqaspfaosbsakhyzgyfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405800.8058846-859-30053615838943/AnsiballZ_stat.py'
Oct 02 11:50:01 compute-0 sudo[225790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:01 compute-0 python3.9[225792]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:50:01 compute-0 sudo[225790]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:01 compute-0 sudo[225925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qudlooffyblzqeevnczrpuzjscmzthmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405800.8058846-859-30053615838943/AnsiballZ_copy.py'
Oct 02 11:50:01 compute-0 sudo[225925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:01 compute-0 podman[225887]: 2025-10-02 11:50:01.709327257 +0000 UTC m=+0.060110735 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 02 11:50:01 compute-0 ceph-mon[73607]: pgmap v668: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:01 compute-0 python3.9[225933]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405800.8058846-859-30053615838943/.source.json _original_basename=.b881y6to follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:50:01 compute-0 sudo[225925]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:02.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:50:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:02.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:50:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:02 compute-0 sudo[226084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbwjtoyotknfwfvzmfwxtyvuyykdnjak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405802.2267401-904-117619100999121/AnsiballZ_file.py'
Oct 02 11:50:02 compute-0 sudo[226084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:02 compute-0 python3.9[226086]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:50:02 compute-0 sudo[226084]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:02 compute-0 ceph-mon[73607]: pgmap v669: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:03 compute-0 sudo[226236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrbtckjlohpovkptityrrspkvgghqwok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405803.2041745-928-166336918809465/AnsiballZ_stat.py'
Oct 02 11:50:03 compute-0 sudo[226236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:03 compute-0 sudo[226236]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:03 compute-0 sudo[226286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:03 compute-0 sudo[226286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:03 compute-0 sudo[226286]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:04.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:04 compute-0 sudo[226334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:04 compute-0 sudo[226334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:04 compute-0 sudo[226334]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:04 compute-0 sudo[226409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juujagkcdmxzxudnbllvtaszgmoxubmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405803.2041745-928-166336918809465/AnsiballZ_copy.py'
Oct 02 11:50:04 compute-0 sudo[226409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:04 compute-0 sudo[226409]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:04.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:05 compute-0 sudo[226562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmhidyrejiywqojgwlrkbcziusptctqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405804.8107913-979-60703330771495/AnsiballZ_container_config_data.py'
Oct 02 11:50:05 compute-0 sudo[226562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:05 compute-0 python3.9[226564]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct 02 11:50:05 compute-0 sudo[226562]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:05 compute-0 ceph-mon[73607]: pgmap v670: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:50:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:06.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:50:06 compute-0 sudo[226714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukaegetpozbisfknrijwbxmqyrmceior ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405805.8307972-1006-27540201306612/AnsiballZ_container_config_hash.py'
Oct 02 11:50:06 compute-0 sudo[226714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:50:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:06.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:50:06 compute-0 python3.9[226716]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 11:50:06 compute-0 sudo[226714]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:07 compute-0 ceph-mon[73607]: pgmap v671: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:07 compute-0 sudo[226867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anmzmpnljsrgfsluytupcwjpvfclpfzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405806.8599513-1033-185327788057272/AnsiballZ_podman_container_info.py'
Oct 02 11:50:07 compute-0 sudo[226867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:07 compute-0 python3.9[226869]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 02 11:50:07 compute-0 sudo[226867]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:50:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:08.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:50:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:50:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:08.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:50:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:09 compute-0 sudo[227046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxqpjedhzssmnvgcdelapxaowmhmcycw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759405808.7881289-1072-208103257471532/AnsiballZ_edpm_container_manage.py'
Oct 02 11:50:09 compute-0 sudo[227046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:09 compute-0 python3[227048]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 11:50:09 compute-0 podman[227082]: 2025-10-02 11:50:09.730672196 +0000 UTC m=+0.045899748 container create b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:50:09 compute-0 podman[227082]: 2025-10-02 11:50:09.707036956 +0000 UTC m=+0.022264518 image pull 1b3fd7f2436e5c6f2e28c01b83721476c7b295789c77b3d63e30f49404389ea1 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 02 11:50:09 compute-0 ceph-mon[73607]: pgmap v672: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:09 compute-0 python3[227048]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 02 11:50:09 compute-0 sudo[227046]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:10.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:10.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:10 compute-0 sudo[227271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djnxqkfxibvkdqsdjeoezcrrcfibbvgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405810.4264922-1096-261997459784527/AnsiballZ_stat.py'
Oct 02 11:50:10 compute-0 sudo[227271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:10 compute-0 ceph-mon[73607]: pgmap v673: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:10 compute-0 python3.9[227273]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:50:10 compute-0 sudo[227271]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:11 compute-0 sudo[227425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olgtybwawrfrgipsdtwfcebwwtvfgfat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405811.3763397-1123-193600765382978/AnsiballZ_file.py'
Oct 02 11:50:11 compute-0 sudo[227425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:11 compute-0 python3.9[227427]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:50:11 compute-0 sudo[227425]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:12.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:12 compute-0 sudo[227501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epkxextxyzyestzcxdfldqpoerovdfym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405811.3763397-1123-193600765382978/AnsiballZ_stat.py'
Oct 02 11:50:12 compute-0 sudo[227501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:12 compute-0 python3.9[227503]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:50:12 compute-0 sudo[227501]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:12.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:50:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:50:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:50:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:50:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:50:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:50:12 compute-0 sudo[227653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozflqmftaxgcrsykzyrfxxzsgcakecje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405812.3727198-1123-131226270840040/AnsiballZ_copy.py'
Oct 02 11:50:12 compute-0 sudo[227653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:12 compute-0 python3.9[227655]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759405812.3727198-1123-131226270840040/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:50:12 compute-0 sudo[227653]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:13 compute-0 sudo[227729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmdujaovtuehonkvtdbbnickmdxtsbay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405812.3727198-1123-131226270840040/AnsiballZ_systemd.py'
Oct 02 11:50:13 compute-0 sudo[227729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:13 compute-0 python3.9[227731]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 11:50:13 compute-0 systemd[1]: Reloading.
Oct 02 11:50:13 compute-0 systemd-rc-local-generator[227760]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:50:13 compute-0 systemd-sysv-generator[227763]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:50:13 compute-0 ceph-mon[73607]: pgmap v674: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:13 compute-0 sudo[227729]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:50:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:14.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:50:14 compute-0 sudo[227840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbmavrzyjefzpgektplprbzqrhpmdvjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405812.3727198-1123-131226270840040/AnsiballZ_systemd.py'
Oct 02 11:50:14 compute-0 sudo[227840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:14.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:14 compute-0 python3.9[227842]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:50:14 compute-0 systemd[1]: Reloading.
Oct 02 11:50:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:14 compute-0 systemd-sysv-generator[227872]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:50:14 compute-0 systemd-rc-local-generator[227869]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:50:14 compute-0 ceph-mon[73607]: pgmap v675: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:14 compute-0 systemd[1]: Starting iscsid container...
Oct 02 11:50:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:50:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe10d5bd100a95a7dc07f07a14c4627755ff509534fac010de68bc96de556b28/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct 02 11:50:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe10d5bd100a95a7dc07f07a14c4627755ff509534fac010de68bc96de556b28/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 02 11:50:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe10d5bd100a95a7dc07f07a14c4627755ff509534fac010de68bc96de556b28/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 02 11:50:15 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676.
Oct 02 11:50:15 compute-0 podman[227883]: 2025-10-02 11:50:15.092683574 +0000 UTC m=+0.178718277 container init b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 11:50:15 compute-0 iscsid[227898]: + sudo -E kolla_set_configs
Oct 02 11:50:15 compute-0 sudo[227904]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 02 11:50:15 compute-0 podman[227883]: 2025-10-02 11:50:15.122880466 +0000 UTC m=+0.208915129 container start b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 11:50:15 compute-0 podman[227883]: iscsid
Oct 02 11:50:15 compute-0 systemd[1]: Created slice User Slice of UID 0.
Oct 02 11:50:15 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 02 11:50:15 compute-0 systemd[1]: Started iscsid container.
Oct 02 11:50:15 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 02 11:50:15 compute-0 systemd[1]: Starting User Manager for UID 0...
Oct 02 11:50:15 compute-0 sudo[227840]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:15 compute-0 systemd[227920]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 02 11:50:15 compute-0 podman[227905]: 2025-10-02 11:50:15.191796197 +0000 UTC m=+0.060252230 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 11:50:15 compute-0 systemd[1]: b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676-255642b54f940a91.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 11:50:15 compute-0 systemd[1]: b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676-255642b54f940a91.service: Failed with result 'exit-code'.
Oct 02 11:50:15 compute-0 systemd[227920]: Queued start job for default target Main User Target.
Oct 02 11:50:15 compute-0 systemd[227920]: Created slice User Application Slice.
Oct 02 11:50:15 compute-0 systemd[227920]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 02 11:50:15 compute-0 systemd[227920]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 11:50:15 compute-0 systemd[227920]: Reached target Paths.
Oct 02 11:50:15 compute-0 systemd[227920]: Reached target Timers.
Oct 02 11:50:15 compute-0 systemd[227920]: Starting D-Bus User Message Bus Socket...
Oct 02 11:50:15 compute-0 systemd[227920]: Starting Create User's Volatile Files and Directories...
Oct 02 11:50:15 compute-0 systemd[227920]: Listening on D-Bus User Message Bus Socket.
Oct 02 11:50:15 compute-0 systemd[227920]: Reached target Sockets.
Oct 02 11:50:15 compute-0 systemd[227920]: Finished Create User's Volatile Files and Directories.
Oct 02 11:50:15 compute-0 systemd[227920]: Reached target Basic System.
Oct 02 11:50:15 compute-0 systemd[227920]: Reached target Main User Target.
Oct 02 11:50:15 compute-0 systemd[227920]: Startup finished in 128ms.
Oct 02 11:50:15 compute-0 systemd[1]: Started User Manager for UID 0.
Oct 02 11:50:15 compute-0 systemd[1]: Started Session c3 of User root.
Oct 02 11:50:15 compute-0 sudo[227904]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 02 11:50:15 compute-0 iscsid[227898]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 11:50:15 compute-0 iscsid[227898]: INFO:__main__:Validating config file
Oct 02 11:50:15 compute-0 iscsid[227898]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 11:50:15 compute-0 iscsid[227898]: INFO:__main__:Writing out command to execute
Oct 02 11:50:15 compute-0 sudo[227904]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:15 compute-0 systemd[1]: session-c3.scope: Deactivated successfully.
Oct 02 11:50:15 compute-0 iscsid[227898]: ++ cat /run_command
Oct 02 11:50:15 compute-0 iscsid[227898]: + CMD='/usr/sbin/iscsid -f'
Oct 02 11:50:15 compute-0 iscsid[227898]: + ARGS=
Oct 02 11:50:15 compute-0 iscsid[227898]: + sudo kolla_copy_cacerts
Oct 02 11:50:15 compute-0 sudo[227966]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 02 11:50:15 compute-0 systemd[1]: Started Session c4 of User root.
Oct 02 11:50:15 compute-0 sudo[227966]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 02 11:50:15 compute-0 sudo[227966]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:15 compute-0 systemd[1]: session-c4.scope: Deactivated successfully.
Oct 02 11:50:15 compute-0 iscsid[227898]: + [[ ! -n '' ]]
Oct 02 11:50:15 compute-0 iscsid[227898]: + . kolla_extend_start
Oct 02 11:50:15 compute-0 iscsid[227898]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct 02 11:50:15 compute-0 iscsid[227898]: Running command: '/usr/sbin/iscsid -f'
Oct 02 11:50:15 compute-0 iscsid[227898]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct 02 11:50:15 compute-0 iscsid[227898]: + umask 0022
Oct 02 11:50:15 compute-0 iscsid[227898]: + exec /usr/sbin/iscsid -f
Oct 02 11:50:15 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Oct 02 11:50:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:50:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:16.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:50:16 compute-0 podman[228076]: 2025-10-02 11:50:16.267869194 +0000 UTC m=+0.110855523 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251001)
Oct 02 11:50:16 compute-0 python3.9[228116]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:50:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:16.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:16 compute-0 sudo[228279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgzujfvrmxbdumvpvmbqxtngqweyoszx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405816.6107585-1234-144202839129445/AnsiballZ_file.py'
Oct 02 11:50:16 compute-0 sudo[228279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:17 compute-0 python3.9[228281]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:50:17 compute-0 sudo[228279]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:17 compute-0 ceph-mon[73607]: pgmap v676: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:17 compute-0 sudo[228431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czwaerhtafsjbsajunqyzdixjqxsyzfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405817.6763492-1267-244117923879077/AnsiballZ_service_facts.py'
Oct 02 11:50:17 compute-0 sudo[228431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:18.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:18 compute-0 python3.9[228433]: ansible-ansible.builtin.service_facts Invoked
Oct 02 11:50:18 compute-0 network[228450]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 11:50:18 compute-0 network[228451]: 'network-scripts' will be removed from distribution in near future.
Oct 02 11:50:18 compute-0 network[228452]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 11:50:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:18.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:19 compute-0 ceph-mon[73607]: pgmap v677: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:20.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:20.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:21 compute-0 sudo[228431]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:21 compute-0 ceph-mon[73607]: pgmap v678: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:22.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:22.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:23 compute-0 sudo[228728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrmgasiflipkeqhvbmgjeoizwznfqeck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405822.752058-1297-162569896818212/AnsiballZ_file.py'
Oct 02 11:50:23 compute-0 sudo[228728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:23 compute-0 python3.9[228730]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 02 11:50:23 compute-0 sudo[228728]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:23 compute-0 ceph-mon[73607]: pgmap v679: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:23 compute-0 sudo[228880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upjdafriybtvbvakxhyqqjzgmcvrhqcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405823.5167432-1321-93991746314694/AnsiballZ_modprobe.py'
Oct 02 11:50:23 compute-0 sudo[228880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:24.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:24 compute-0 python3.9[228882]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct 02 11:50:24 compute-0 sudo[228883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:24 compute-0 sudo[228880]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:24 compute-0 sudo[228883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:24 compute-0 sudo[228883]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:24 compute-0 sudo[228912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:24 compute-0 sudo[228912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:24 compute-0 sudo[228912]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:50:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:24.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:50:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:24 compute-0 sudo[229087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skytnjamhopvbnyvawqqnkfkcloigevo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405824.4867535-1345-97711283808966/AnsiballZ_stat.py'
Oct 02 11:50:24 compute-0 sudo[229087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:24 compute-0 python3.9[229089]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:50:24 compute-0 sudo[229087]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:25 compute-0 ceph-mon[73607]: pgmap v680: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:25 compute-0 sudo[229210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erzdvdngqjxuotwbzpgbtfakbcjpzmgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405824.4867535-1345-97711283808966/AnsiballZ_copy.py'
Oct 02 11:50:25 compute-0 sudo[229210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:25 compute-0 python3.9[229212]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405824.4867535-1345-97711283808966/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:50:25 compute-0 sudo[229210]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:25 compute-0 systemd[1]: Stopping User Manager for UID 0...
Oct 02 11:50:25 compute-0 systemd[227920]: Activating special unit Exit the Session...
Oct 02 11:50:25 compute-0 systemd[227920]: Stopped target Main User Target.
Oct 02 11:50:25 compute-0 systemd[227920]: Stopped target Basic System.
Oct 02 11:50:25 compute-0 systemd[227920]: Stopped target Paths.
Oct 02 11:50:25 compute-0 systemd[227920]: Stopped target Sockets.
Oct 02 11:50:25 compute-0 systemd[227920]: Stopped target Timers.
Oct 02 11:50:25 compute-0 systemd[227920]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 11:50:25 compute-0 systemd[227920]: Closed D-Bus User Message Bus Socket.
Oct 02 11:50:25 compute-0 systemd[227920]: Stopped Create User's Volatile Files and Directories.
Oct 02 11:50:25 compute-0 systemd[227920]: Removed slice User Application Slice.
Oct 02 11:50:25 compute-0 systemd[227920]: Reached target Shutdown.
Oct 02 11:50:25 compute-0 systemd[227920]: Finished Exit the Session.
Oct 02 11:50:25 compute-0 systemd[227920]: Reached target Exit the Session.
Oct 02 11:50:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:25 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Oct 02 11:50:25 compute-0 systemd[1]: Stopped User Manager for UID 0.
Oct 02 11:50:25 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 02 11:50:25 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 02 11:50:25 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 02 11:50:25 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 02 11:50:25 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Oct 02 11:50:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:26.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:26 compute-0 sudo[229363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skomihqfdchnxxymfoyzpchnruzxptti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405826.020971-1393-29108599235132/AnsiballZ_lineinfile.py'
Oct 02 11:50:26 compute-0 sudo[229363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:26.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:26 compute-0 python3.9[229366]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:50:26 compute-0 sudo[229363]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:50:26.906 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:50:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:50:26.907 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:50:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:50:26.907 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:50:27 compute-0 sudo[229516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjxpnynyvtaguvlpczvziczlidivkpug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405826.7406015-1417-173091640456239/AnsiballZ_systemd.py'
Oct 02 11:50:27 compute-0 sudo[229516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:27 compute-0 python3.9[229518]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 11:50:27 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 02 11:50:27 compute-0 systemd[1]: Stopped Load Kernel Modules.
Oct 02 11:50:27 compute-0 systemd[1]: Stopping Load Kernel Modules...
Oct 02 11:50:27 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 02 11:50:27 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 02 11:50:27 compute-0 sudo[229516]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:27 compute-0 ceph-mon[73607]: pgmap v681: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:28 compute-0 sudo[229672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzlfdksmjyxikhtbyhexrhwgdilsfhwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405827.760174-1441-226846957931123/AnsiballZ_file.py'
Oct 02 11:50:28 compute-0 sudo[229672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:28.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:28 compute-0 python3.9[229674]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:50:28 compute-0 sudo[229672]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:28.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:28 compute-0 sudo[229825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgizcomckxmplmqrolszwdqxfxsuvgto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405828.676311-1468-47088200891340/AnsiballZ_stat.py'
Oct 02 11:50:28 compute-0 sudo[229825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:29 compute-0 python3.9[229827]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:50:29 compute-0 sudo[229825]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:29 compute-0 ceph-mon[73607]: pgmap v682: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:29 compute-0 sudo[229977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mycgtyvnedxazgpghovbzqyhnucjubvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405829.5447865-1495-180519713613645/AnsiballZ_stat.py'
Oct 02 11:50:29 compute-0 sudo[229977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:30.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:30 compute-0 python3.9[229979]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:50:30 compute-0 sudo[229977]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:50:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:30.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:50:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:30 compute-0 sudo[230130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdonzvxrdofjzbrinyxxcqurcqauikjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405830.3191545-1519-214993528888267/AnsiballZ_stat.py'
Oct 02 11:50:30 compute-0 sudo[230130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:30 compute-0 python3.9[230132]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:50:30 compute-0 sudo[230130]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:30 compute-0 ceph-mon[73607]: pgmap v683: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:31 compute-0 sudo[230253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkulzbettulmqfbvjueojajvjbrpooiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405830.3191545-1519-214993528888267/AnsiballZ_copy.py'
Oct 02 11:50:31 compute-0 sudo[230253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:31 compute-0 python3.9[230255]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405830.3191545-1519-214993528888267/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:50:31 compute-0 sudo[230253]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:31 compute-0 podman[230332]: 2025-10-02 11:50:31.919596107 +0000 UTC m=+0.059597814 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Oct 02 11:50:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:32.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:32 compute-0 sudo[230424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jactdnsaxxtmkuwsfzejqgtkafebvayk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405831.671609-1564-66721007239998/AnsiballZ_command.py'
Oct 02 11:50:32 compute-0 sudo[230424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:32 compute-0 python3.9[230426]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:50:32 compute-0 sudo[230424]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:32.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:32 compute-0 sudo[230578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsfvqjqcfkktfdvhpctqbcwziptveere ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405832.5612051-1588-149741451494657/AnsiballZ_lineinfile.py'
Oct 02 11:50:32 compute-0 sudo[230578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:33 compute-0 python3.9[230580]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:50:33 compute-0 sudo[230578]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:33 compute-0 ceph-mon[73607]: pgmap v684: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:33 compute-0 sudo[230730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soemjrkiaptwdcdpgvjgvpptwzlulcjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405833.395272-1612-143354237824254/AnsiballZ_replace.py'
Oct 02 11:50:33 compute-0 sudo[230730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:50:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:34.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:50:34 compute-0 python3.9[230732]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:50:34 compute-0 sudo[230730]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:34.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:34 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct 02 11:50:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:34 compute-0 ceph-mon[73607]: pgmap v685: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:34 compute-0 sudo[230884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfnvhdavkgsldyeyoocwemczlnnyxfbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405834.656576-1636-272599376050132/AnsiballZ_replace.py'
Oct 02 11:50:34 compute-0 sudo[230884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:35 compute-0 python3.9[230886]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:50:35 compute-0 sudo[230884]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:35 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 02 11:50:35 compute-0 sudo[231037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxjixerbwxpdaakqohsyiedhqxumoibr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405835.5412107-1663-45359137676564/AnsiballZ_lineinfile.py'
Oct 02 11:50:35 compute-0 sudo[231037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:50:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:36.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:50:36 compute-0 python3.9[231039]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:50:36 compute-0 sudo[231037]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:36.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:36 compute-0 sudo[231190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eereexmbgkxkekhjqmvdogvjvgmqzuwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405836.320628-1663-142964279693635/AnsiballZ_lineinfile.py'
Oct 02 11:50:36 compute-0 sudo[231190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:36 compute-0 python3.9[231192]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:50:36 compute-0 sudo[231190]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:37 compute-0 sudo[231290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:37 compute-0 sudo[231290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:37 compute-0 sudo[231290]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:37 compute-0 sudo[231321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:50:37 compute-0 sudo[231321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:37 compute-0 sudo[231321]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:37 compute-0 sudo[231370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:37 compute-0 sudo[231370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:37 compute-0 sudo[231370]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:37 compute-0 sudo[231415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkuvyoesovfrplglahuiycivhkgiqxsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405837.0585892-1663-269435461329813/AnsiballZ_lineinfile.py'
Oct 02 11:50:37 compute-0 sudo[231415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:37 compute-0 sudo[231420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:50:37 compute-0 sudo[231420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:37 compute-0 python3.9[231419]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:50:37 compute-0 sudo[231415]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:37 compute-0 ceph-mon[73607]: pgmap v686: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:37 compute-0 sudo[231420]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:37 compute-0 sudo[231625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndmnptedggkamnubmchdnofbyvvcynyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405837.6416428-1663-106676155816835/AnsiballZ_lineinfile.py'
Oct 02 11:50:37 compute-0 sudo[231625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:38.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:38 compute-0 python3.9[231627]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:50:38 compute-0 sudo[231625]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:38.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:38 compute-0 sudo[231778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcueegrfffjrungdykpaecwxvsldxmnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405838.3791697-1750-5568957087209/AnsiballZ_stat.py'
Oct 02 11:50:38 compute-0 sudo[231778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:38 compute-0 python3.9[231780]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:50:38 compute-0 sudo[231778]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:50:38 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:50:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:50:39 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:50:39 compute-0 sudo[231932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quzbskwtfklfgyizlygkimverjrramcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405839.197134-1774-167953032994472/AnsiballZ_file.py'
Oct 02 11:50:39 compute-0 sudo[231932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:39 compute-0 ceph-mon[73607]: pgmap v687: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:39 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:50:39 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:50:39 compute-0 python3.9[231934]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:50:39 compute-0 sudo[231932]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:50:39 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:50:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:50:39 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:50:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:50:39 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:50:39 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev e481776c-5738-4d63-9ceb-5dff0bba3941 does not exist
Oct 02 11:50:39 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev f9c6fe5e-5aad-4089-ab4a-1aba00b9992a does not exist
Oct 02 11:50:39 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev aa2da4b7-5b4b-4b81-8680-0309495381d6 does not exist
Oct 02 11:50:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:50:39 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:50:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:50:39 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:50:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:50:39 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:50:39 compute-0 sudo[231956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:39 compute-0 sudo[231956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:39 compute-0 sudo[231956]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:40 compute-0 sudo[231984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:50:40 compute-0 sudo[231984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:40 compute-0 sudo[231984]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:40.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:40 compute-0 sudo[232009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:40 compute-0 sudo[232009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:40 compute-0 sudo[232009]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:40 compute-0 sudo[232057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:50:40 compute-0 sudo[232057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:40 compute-0 sudo[232222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dplzmusowgdlkbgfyafogajxkteouimd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405840.1717224-1801-244552528516932/AnsiballZ_file.py'
Oct 02 11:50:40 compute-0 sudo[232222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:40.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:40 compute-0 podman[232228]: 2025-10-02 11:50:40.517360148 +0000 UTC m=+0.037368758 container create 78a9ed1b42a8ae1407c769133ae1a2b8690f8bd3f09e8f0e373e210e065ceba5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mcclintock, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:50:40 compute-0 systemd[1]: Started libpod-conmon-78a9ed1b42a8ae1407c769133ae1a2b8690f8bd3f09e8f0e373e210e065ceba5.scope.
Oct 02 11:50:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:50:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:40 compute-0 podman[232228]: 2025-10-02 11:50:40.596666615 +0000 UTC m=+0.116675245 container init 78a9ed1b42a8ae1407c769133ae1a2b8690f8bd3f09e8f0e373e210e065ceba5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mcclintock, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:50:40 compute-0 podman[232228]: 2025-10-02 11:50:40.500622487 +0000 UTC m=+0.020631117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:50:40 compute-0 podman[232228]: 2025-10-02 11:50:40.606181798 +0000 UTC m=+0.126190408 container start 78a9ed1b42a8ae1407c769133ae1a2b8690f8bd3f09e8f0e373e210e065ceba5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mcclintock, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 11:50:40 compute-0 podman[232228]: 2025-10-02 11:50:40.609886499 +0000 UTC m=+0.129895129 container attach 78a9ed1b42a8ae1407c769133ae1a2b8690f8bd3f09e8f0e373e210e065ceba5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mcclintock, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:50:40 compute-0 unruffled_mcclintock[232244]: 167 167
Oct 02 11:50:40 compute-0 systemd[1]: libpod-78a9ed1b42a8ae1407c769133ae1a2b8690f8bd3f09e8f0e373e210e065ceba5.scope: Deactivated successfully.
Oct 02 11:50:40 compute-0 podman[232228]: 2025-10-02 11:50:40.612497564 +0000 UTC m=+0.132506174 container died 78a9ed1b42a8ae1407c769133ae1a2b8690f8bd3f09e8f0e373e210e065ceba5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mcclintock, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 11:50:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f5b2a898330743ccf80ee18d4bd833d4023c61d5472372d132c2caff89b3bf1-merged.mount: Deactivated successfully.
Oct 02 11:50:40 compute-0 podman[232228]: 2025-10-02 11:50:40.648182699 +0000 UTC m=+0.168191299 container remove 78a9ed1b42a8ae1407c769133ae1a2b8690f8bd3f09e8f0e373e210e065ceba5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mcclintock, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 11:50:40 compute-0 python3.9[232227]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:50:40 compute-0 systemd[1]: libpod-conmon-78a9ed1b42a8ae1407c769133ae1a2b8690f8bd3f09e8f0e373e210e065ceba5.scope: Deactivated successfully.
Oct 02 11:50:40 compute-0 sudo[232222]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:40 compute-0 podman[232293]: 2025-10-02 11:50:40.793764793 +0000 UTC m=+0.042359121 container create 34bb66abeebdea9bfe85ce7774805968d483d261fba3b358a76a3cd1ebe12b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_swartz, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 11:50:40 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:50:40 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:50:40 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:50:40 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:50:40 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:50:40 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:50:40 compute-0 systemd[1]: Started libpod-conmon-34bb66abeebdea9bfe85ce7774805968d483d261fba3b358a76a3cd1ebe12b21.scope.
Oct 02 11:50:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efef8a6eefd058cedcbc5b10c45b211631777b697939666f40dfca2eff990b0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efef8a6eefd058cedcbc5b10c45b211631777b697939666f40dfca2eff990b0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efef8a6eefd058cedcbc5b10c45b211631777b697939666f40dfca2eff990b0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efef8a6eefd058cedcbc5b10c45b211631777b697939666f40dfca2eff990b0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efef8a6eefd058cedcbc5b10c45b211631777b697939666f40dfca2eff990b0d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:50:40 compute-0 podman[232293]: 2025-10-02 11:50:40.776689744 +0000 UTC m=+0.025284092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:50:40 compute-0 podman[232293]: 2025-10-02 11:50:40.885598928 +0000 UTC m=+0.134193276 container init 34bb66abeebdea9bfe85ce7774805968d483d261fba3b358a76a3cd1ebe12b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_swartz, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:50:40 compute-0 podman[232293]: 2025-10-02 11:50:40.891667206 +0000 UTC m=+0.140261524 container start 34bb66abeebdea9bfe85ce7774805968d483d261fba3b358a76a3cd1ebe12b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_swartz, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 11:50:40 compute-0 podman[232293]: 2025-10-02 11:50:40.89747424 +0000 UTC m=+0.146068598 container attach 34bb66abeebdea9bfe85ce7774805968d483d261fba3b358a76a3cd1ebe12b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 11:50:41 compute-0 sudo[232439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnairbzmyrdbnvvqwvdfyqbgvcmnkzuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405840.9008813-1825-136742086928550/AnsiballZ_stat.py'
Oct 02 11:50:41 compute-0 sudo[232439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:41 compute-0 python3.9[232441]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:50:41 compute-0 sudo[232439]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:41 compute-0 eager_swartz[232309]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:50:41 compute-0 eager_swartz[232309]: --> relative data size: 1.0
Oct 02 11:50:41 compute-0 eager_swartz[232309]: --> All data devices are unavailable
Oct 02 11:50:41 compute-0 sudo[232527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngrlinivvmhcignxmxzhygmtwvvwtlsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405840.9008813-1825-136742086928550/AnsiballZ_file.py'
Oct 02 11:50:41 compute-0 sudo[232527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:41 compute-0 systemd[1]: libpod-34bb66abeebdea9bfe85ce7774805968d483d261fba3b358a76a3cd1ebe12b21.scope: Deactivated successfully.
Oct 02 11:50:41 compute-0 podman[232293]: 2025-10-02 11:50:41.701064036 +0000 UTC m=+0.949658364 container died 34bb66abeebdea9bfe85ce7774805968d483d261fba3b358a76a3cd1ebe12b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_swartz, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 11:50:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-efef8a6eefd058cedcbc5b10c45b211631777b697939666f40dfca2eff990b0d-merged.mount: Deactivated successfully.
Oct 02 11:50:41 compute-0 podman[232293]: 2025-10-02 11:50:41.784698799 +0000 UTC m=+1.033293127 container remove 34bb66abeebdea9bfe85ce7774805968d483d261fba3b358a76a3cd1ebe12b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 11:50:41 compute-0 systemd[1]: libpod-conmon-34bb66abeebdea9bfe85ce7774805968d483d261fba3b358a76a3cd1ebe12b21.scope: Deactivated successfully.
Oct 02 11:50:41 compute-0 sudo[232057]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:41 compute-0 ceph-mon[73607]: pgmap v688: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:41 compute-0 sudo[232544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:41 compute-0 sudo[232544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:41 compute-0 sudo[232544]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:41 compute-0 python3.9[232529]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:50:41 compute-0 sudo[232569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:50:41 compute-0 sudo[232569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:41 compute-0 sudo[232569]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:41 compute-0 sudo[232527]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:41 compute-0 sudo[232594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:41 compute-0 sudo[232594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:41 compute-0 sudo[232594]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:42 compute-0 sudo[232642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 11:50:42 compute-0 sudo[232642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:42.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_11:50:42
Oct 02 11:50:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:50:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 11:50:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'backups', '.rgw.root', 'default.rgw.control', 'images', 'volumes', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta']
Oct 02 11:50:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:50:42 compute-0 podman[232805]: 2025-10-02 11:50:42.343913078 +0000 UTC m=+0.043466988 container create 9c07eaec124ed0a973a6fc3e7f9fefcce51299c09fd349f7d93688e891277ba8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_maxwell, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 11:50:42 compute-0 systemd[1]: Started libpod-conmon-9c07eaec124ed0a973a6fc3e7f9fefcce51299c09fd349f7d93688e891277ba8.scope.
Oct 02 11:50:42 compute-0 sudo[232846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cazzbluhczjmfggexxvgnbvbouytwsmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405842.106449-1825-173322691722295/AnsiballZ_stat.py'
Oct 02 11:50:42 compute-0 sudo[232846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:50:42 compute-0 podman[232805]: 2025-10-02 11:50:42.325097245 +0000 UTC m=+0.024651185 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:50:42 compute-0 podman[232805]: 2025-10-02 11:50:42.426807642 +0000 UTC m=+0.126361552 container init 9c07eaec124ed0a973a6fc3e7f9fefcce51299c09fd349f7d93688e891277ba8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_maxwell, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:50:42 compute-0 podman[232805]: 2025-10-02 11:50:42.434685495 +0000 UTC m=+0.134239405 container start 9c07eaec124ed0a973a6fc3e7f9fefcce51299c09fd349f7d93688e891277ba8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 11:50:42 compute-0 podman[232805]: 2025-10-02 11:50:42.43812132 +0000 UTC m=+0.137675250 container attach 9c07eaec124ed0a973a6fc3e7f9fefcce51299c09fd349f7d93688e891277ba8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Oct 02 11:50:42 compute-0 intelligent_maxwell[232850]: 167 167
Oct 02 11:50:42 compute-0 systemd[1]: libpod-9c07eaec124ed0a973a6fc3e7f9fefcce51299c09fd349f7d93688e891277ba8.scope: Deactivated successfully.
Oct 02 11:50:42 compute-0 podman[232805]: 2025-10-02 11:50:42.444364553 +0000 UTC m=+0.143918473 container died 9c07eaec124ed0a973a6fc3e7f9fefcce51299c09fd349f7d93688e891277ba8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_maxwell, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:50:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:50:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:42.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:50:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-daf6314b7d8038f1a7b74adc2fc72059290ee4e87df6ffb31843eee4b07eeb63-merged.mount: Deactivated successfully.
Oct 02 11:50:42 compute-0 podman[232805]: 2025-10-02 11:50:42.491871759 +0000 UTC m=+0.191425679 container remove 9c07eaec124ed0a973a6fc3e7f9fefcce51299c09fd349f7d93688e891277ba8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_maxwell, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:50:42 compute-0 systemd[1]: libpod-conmon-9c07eaec124ed0a973a6fc3e7f9fefcce51299c09fd349f7d93688e891277ba8.scope: Deactivated successfully.
Oct 02 11:50:42 compute-0 python3.9[232852]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:50:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:42 compute-0 sudo[232846]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:42 compute-0 podman[232874]: 2025-10-02 11:50:42.647493149 +0000 UTC m=+0.046241746 container create 475d586cd1dc4996025d8dde31228782fdfd4daba07734ab95464edf9f2467d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_herschel, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:50:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:50:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:50:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:50:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:50:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:50:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:50:42 compute-0 systemd[1]: Started libpod-conmon-475d586cd1dc4996025d8dde31228782fdfd4daba07734ab95464edf9f2467d4.scope.
Oct 02 11:50:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:50:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b041109d6e6e4bcbd8d54d445287dd3047faaa84e6175d15f098c05020fbffb6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:50:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b041109d6e6e4bcbd8d54d445287dd3047faaa84e6175d15f098c05020fbffb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:50:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b041109d6e6e4bcbd8d54d445287dd3047faaa84e6175d15f098c05020fbffb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:50:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b041109d6e6e4bcbd8d54d445287dd3047faaa84e6175d15f098c05020fbffb6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:50:42 compute-0 podman[232874]: 2025-10-02 11:50:42.723158317 +0000 UTC m=+0.121906944 container init 475d586cd1dc4996025d8dde31228782fdfd4daba07734ab95464edf9f2467d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_herschel, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:50:42 compute-0 podman[232874]: 2025-10-02 11:50:42.629533129 +0000 UTC m=+0.028281726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:50:42 compute-0 podman[232874]: 2025-10-02 11:50:42.730016375 +0000 UTC m=+0.128764972 container start 475d586cd1dc4996025d8dde31228782fdfd4daba07734ab95464edf9f2467d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_herschel, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 11:50:42 compute-0 podman[232874]: 2025-10-02 11:50:42.736845363 +0000 UTC m=+0.135594200 container attach 475d586cd1dc4996025d8dde31228782fdfd4daba07734ab95464edf9f2467d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 11:50:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:50:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:50:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:50:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:50:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:50:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:50:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:50:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:50:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:50:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:50:42 compute-0 ceph-mon[73607]: pgmap v689: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:42 compute-0 sudo[232970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skxjnfbssyjzahyqhcdohdhrvgtcqhuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405842.106449-1825-173322691722295/AnsiballZ_file.py'
Oct 02 11:50:42 compute-0 sudo[232970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:43 compute-0 python3.9[232972]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:50:43 compute-0 sudo[232970]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:43 compute-0 adoring_herschel[232910]: {
Oct 02 11:50:43 compute-0 adoring_herschel[232910]:     "1": [
Oct 02 11:50:43 compute-0 adoring_herschel[232910]:         {
Oct 02 11:50:43 compute-0 adoring_herschel[232910]:             "devices": [
Oct 02 11:50:43 compute-0 adoring_herschel[232910]:                 "/dev/loop3"
Oct 02 11:50:43 compute-0 adoring_herschel[232910]:             ],
Oct 02 11:50:43 compute-0 adoring_herschel[232910]:             "lv_name": "ceph_lv0",
Oct 02 11:50:43 compute-0 adoring_herschel[232910]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:50:43 compute-0 adoring_herschel[232910]:             "lv_size": "7511998464",
Oct 02 11:50:43 compute-0 adoring_herschel[232910]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:50:43 compute-0 adoring_herschel[232910]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:50:43 compute-0 adoring_herschel[232910]:             "name": "ceph_lv0",
Oct 02 11:50:43 compute-0 adoring_herschel[232910]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:50:43 compute-0 adoring_herschel[232910]:             "tags": {
Oct 02 11:50:43 compute-0 adoring_herschel[232910]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:50:43 compute-0 adoring_herschel[232910]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:50:43 compute-0 adoring_herschel[232910]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:50:43 compute-0 adoring_herschel[232910]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:50:43 compute-0 adoring_herschel[232910]:                 "ceph.cluster_name": "ceph",
Oct 02 11:50:43 compute-0 adoring_herschel[232910]:                 "ceph.crush_device_class": "",
Oct 02 11:50:43 compute-0 adoring_herschel[232910]:                 "ceph.encrypted": "0",
Oct 02 11:50:43 compute-0 adoring_herschel[232910]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:50:43 compute-0 adoring_herschel[232910]:                 "ceph.osd_id": "1",
Oct 02 11:50:43 compute-0 adoring_herschel[232910]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:50:43 compute-0 adoring_herschel[232910]:                 "ceph.type": "block",
Oct 02 11:50:43 compute-0 adoring_herschel[232910]:                 "ceph.vdo": "0"
Oct 02 11:50:43 compute-0 adoring_herschel[232910]:             },
Oct 02 11:50:43 compute-0 adoring_herschel[232910]:             "type": "block",
Oct 02 11:50:43 compute-0 adoring_herschel[232910]:             "vg_name": "ceph_vg0"
Oct 02 11:50:43 compute-0 adoring_herschel[232910]:         }
Oct 02 11:50:43 compute-0 adoring_herschel[232910]:     ]
Oct 02 11:50:43 compute-0 adoring_herschel[232910]: }
Oct 02 11:50:43 compute-0 systemd[1]: libpod-475d586cd1dc4996025d8dde31228782fdfd4daba07734ab95464edf9f2467d4.scope: Deactivated successfully.
Oct 02 11:50:43 compute-0 podman[232874]: 2025-10-02 11:50:43.524449238 +0000 UTC m=+0.923197835 container died 475d586cd1dc4996025d8dde31228782fdfd4daba07734ab95464edf9f2467d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 11:50:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-b041109d6e6e4bcbd8d54d445287dd3047faaa84e6175d15f098c05020fbffb6-merged.mount: Deactivated successfully.
Oct 02 11:50:43 compute-0 sudo[233138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woaoivphkwuxnnzmfgcfwabdimlpciks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405843.414626-1894-165377415822100/AnsiballZ_file.py'
Oct 02 11:50:43 compute-0 sudo[233138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:43 compute-0 podman[232874]: 2025-10-02 11:50:43.792129429 +0000 UTC m=+1.190878026 container remove 475d586cd1dc4996025d8dde31228782fdfd4daba07734ab95464edf9f2467d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_herschel, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Oct 02 11:50:43 compute-0 systemd[1]: libpod-conmon-475d586cd1dc4996025d8dde31228782fdfd4daba07734ab95464edf9f2467d4.scope: Deactivated successfully.
Oct 02 11:50:43 compute-0 sudo[232642]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:43 compute-0 sudo[233141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:43 compute-0 sudo[233141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:43 compute-0 sudo[233141]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:43 compute-0 sudo[233166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:50:43 compute-0 python3.9[233140]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:50:43 compute-0 sudo[233166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:43 compute-0 sudo[233166]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:43 compute-0 sudo[233138]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:43 compute-0 sudo[233191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:43 compute-0 sudo[233191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:43 compute-0 sudo[233191]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:44 compute-0 sudo[233233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 11:50:44 compute-0 sudo[233233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:44.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:44 compute-0 sudo[233343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:44 compute-0 sudo[233343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:44 compute-0 sudo[233343]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:44 compute-0 sudo[233405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:44 compute-0 sudo[233405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:44 compute-0 sudo[233405]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:44 compute-0 podman[233411]: 2025-10-02 11:50:44.37065475 +0000 UTC m=+0.055389430 container create 5d9f0da590f8dd782fce31b1ef35f8c0d74d8a15a5e26ae59db941910ab7398b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_dewdney, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 11:50:44 compute-0 systemd[1]: Started libpod-conmon-5d9f0da590f8dd782fce31b1ef35f8c0d74d8a15a5e26ae59db941910ab7398b.scope.
Oct 02 11:50:44 compute-0 podman[233411]: 2025-10-02 11:50:44.338804088 +0000 UTC m=+0.023538768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:50:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:50:44 compute-0 podman[233411]: 2025-10-02 11:50:44.465265103 +0000 UTC m=+0.149999803 container init 5d9f0da590f8dd782fce31b1ef35f8c0d74d8a15a5e26ae59db941910ab7398b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_dewdney, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 11:50:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:44.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:44 compute-0 sudo[233501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dovrobovuniddlxpxmvvihazwqiiczar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405844.1609478-1918-235778101120442/AnsiballZ_stat.py'
Oct 02 11:50:44 compute-0 podman[233411]: 2025-10-02 11:50:44.474083919 +0000 UTC m=+0.158818589 container start 5d9f0da590f8dd782fce31b1ef35f8c0d74d8a15a5e26ae59db941910ab7398b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_dewdney, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:50:44 compute-0 sudo[233501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:44 compute-0 podman[233411]: 2025-10-02 11:50:44.477348719 +0000 UTC m=+0.162083429 container attach 5d9f0da590f8dd782fce31b1ef35f8c0d74d8a15a5e26ae59db941910ab7398b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_dewdney, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:50:44 compute-0 clever_dewdney[233472]: 167 167
Oct 02 11:50:44 compute-0 systemd[1]: libpod-5d9f0da590f8dd782fce31b1ef35f8c0d74d8a15a5e26ae59db941910ab7398b.scope: Deactivated successfully.
Oct 02 11:50:44 compute-0 podman[233411]: 2025-10-02 11:50:44.478990589 +0000 UTC m=+0.163725269 container died 5d9f0da590f8dd782fce31b1ef35f8c0d74d8a15a5e26ae59db941910ab7398b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 11:50:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4ceebfb2285ed30d58c8e5dc6037e7101674ee6d0b91fceb2810311eaa4c0dd-merged.mount: Deactivated successfully.
Oct 02 11:50:44 compute-0 podman[233411]: 2025-10-02 11:50:44.520516019 +0000 UTC m=+0.205250699 container remove 5d9f0da590f8dd782fce31b1ef35f8c0d74d8a15a5e26ae59db941910ab7398b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 11:50:44 compute-0 systemd[1]: libpod-conmon-5d9f0da590f8dd782fce31b1ef35f8c0d74d8a15a5e26ae59db941910ab7398b.scope: Deactivated successfully.
Oct 02 11:50:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:44 compute-0 python3.9[233505]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:50:44 compute-0 podman[233525]: 2025-10-02 11:50:44.678259522 +0000 UTC m=+0.043733885 container create 87fa1779314d5981edf73f31ca71ff3ce9942f1cf525be63aa7f9a3e1f094240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_black, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 11:50:44 compute-0 sudo[233501]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:44 compute-0 systemd[1]: Started libpod-conmon-87fa1779314d5981edf73f31ca71ff3ce9942f1cf525be63aa7f9a3e1f094240.scope.
Oct 02 11:50:44 compute-0 podman[233525]: 2025-10-02 11:50:44.661872739 +0000 UTC m=+0.027347122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:50:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9a320cb750ceadbc03980ce926178e3ac692f00e89d41560ed216118e79d01e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9a320cb750ceadbc03980ce926178e3ac692f00e89d41560ed216118e79d01e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9a320cb750ceadbc03980ce926178e3ac692f00e89d41560ed216118e79d01e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9a320cb750ceadbc03980ce926178e3ac692f00e89d41560ed216118e79d01e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:50:44 compute-0 podman[233525]: 2025-10-02 11:50:44.778221805 +0000 UTC m=+0.143696218 container init 87fa1779314d5981edf73f31ca71ff3ce9942f1cf525be63aa7f9a3e1f094240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_black, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 11:50:44 compute-0 podman[233525]: 2025-10-02 11:50:44.785441012 +0000 UTC m=+0.150915415 container start 87fa1779314d5981edf73f31ca71ff3ce9942f1cf525be63aa7f9a3e1f094240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 11:50:44 compute-0 podman[233525]: 2025-10-02 11:50:44.789682186 +0000 UTC m=+0.155156569 container attach 87fa1779314d5981edf73f31ca71ff3ce9942f1cf525be63aa7f9a3e1f094240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 11:50:44 compute-0 sudo[233622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vurfsuzwwfeulhmyznptnjpatwgcszfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405844.1609478-1918-235778101120442/AnsiballZ_file.py'
Oct 02 11:50:44 compute-0 sudo[233622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:45 compute-0 python3.9[233624]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:50:45 compute-0 sudo[233622]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:45 compute-0 objective_black[233544]: {
Oct 02 11:50:45 compute-0 objective_black[233544]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 11:50:45 compute-0 objective_black[233544]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:50:45 compute-0 objective_black[233544]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:50:45 compute-0 objective_black[233544]:         "osd_id": 1,
Oct 02 11:50:45 compute-0 objective_black[233544]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:50:45 compute-0 objective_black[233544]:         "type": "bluestore"
Oct 02 11:50:45 compute-0 objective_black[233544]:     }
Oct 02 11:50:45 compute-0 objective_black[233544]: }
Oct 02 11:50:45 compute-0 systemd[1]: libpod-87fa1779314d5981edf73f31ca71ff3ce9942f1cf525be63aa7f9a3e1f094240.scope: Deactivated successfully.
Oct 02 11:50:45 compute-0 podman[233525]: 2025-10-02 11:50:45.641435786 +0000 UTC m=+1.006910159 container died 87fa1779314d5981edf73f31ca71ff3ce9942f1cf525be63aa7f9a3e1f094240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_black, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:50:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9a320cb750ceadbc03980ce926178e3ac692f00e89d41560ed216118e79d01e-merged.mount: Deactivated successfully.
Oct 02 11:50:45 compute-0 podman[233525]: 2025-10-02 11:50:45.70308627 +0000 UTC m=+1.068560633 container remove 87fa1779314d5981edf73f31ca71ff3ce9942f1cf525be63aa7f9a3e1f094240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 11:50:45 compute-0 systemd[1]: libpod-conmon-87fa1779314d5981edf73f31ca71ff3ce9942f1cf525be63aa7f9a3e1f094240.scope: Deactivated successfully.
Oct 02 11:50:45 compute-0 sudo[233233]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:50:45 compute-0 ceph-mon[73607]: pgmap v690: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:45 compute-0 podman[233741]: 2025-10-02 11:50:45.75526508 +0000 UTC m=+0.082571658 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:50:45 compute-0 sudo[233823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcistxqzbtsrbubupynszpfihkumhwar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405845.514584-1954-128958292519675/AnsiballZ_stat.py'
Oct 02 11:50:45 compute-0 sudo[233823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:45 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:50:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:50:45 compute-0 python3.9[233826]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:50:45 compute-0 sudo[233823]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:46.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:46 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:50:46 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 3d0a978f-78ec-4232-80d5-89d5fcbc80e7 does not exist
Oct 02 11:50:46 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 5b2e5f60-0f38-4216-a705-580fe14f2775 does not exist
Oct 02 11:50:46 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 14e3d935-05dc-4fb8-994e-e5258b050286 does not exist
Oct 02 11:50:46 compute-0 sudo[233856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:46 compute-0 sudo[233856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:46 compute-0 sudo[233856]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:46 compute-0 sudo[233950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltssmgcaevdjkdrompqxcqifnuldynji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405845.514584-1954-128958292519675/AnsiballZ_file.py'
Oct 02 11:50:46 compute-0 sudo[233905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:50:46 compute-0 sudo[233905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:46 compute-0 sudo[233950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:46 compute-0 sudo[233905]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:46 compute-0 python3.9[233954]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:50:46 compute-0 sudo[233950]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:46.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:50:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:50:46 compute-0 ceph-mon[73607]: pgmap v691: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:46 compute-0 podman[234004]: 2025-10-02 11:50:46.953112816 +0000 UTC m=+0.087661903 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 11:50:47 compute-0 sudo[234131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tocjwljgbreizfptpokgermqonyeaupr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405846.825436-1990-28022047436558/AnsiballZ_systemd.py'
Oct 02 11:50:47 compute-0 sudo[234131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:47 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Oct 02 11:50:47 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 02 11:50:47 compute-0 python3.9[234133]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:50:47 compute-0 systemd[1]: Reloading.
Oct 02 11:50:47 compute-0 systemd-sysv-generator[234169]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:50:47 compute-0 systemd-rc-local-generator[234166]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:50:47 compute-0 sudo[234131]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:48.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:48 compute-0 sudo[234324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbxsvtvtesrxzwmlpffbvkycgcpwlpti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405848.011703-2014-28181430139516/AnsiballZ_stat.py'
Oct 02 11:50:48 compute-0 sudo[234324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:48.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:48 compute-0 python3.9[234326]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:50:48 compute-0 sudo[234324]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:48 compute-0 sudo[234403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mijnjfqcsnbxrgdgxrfgwanhibkcknsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405848.011703-2014-28181430139516/AnsiballZ_file.py'
Oct 02 11:50:48 compute-0 sudo[234403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:48 compute-0 python3.9[234405]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:50:48 compute-0 sudo[234403]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 11:50:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3719 writes, 16K keys, 3718 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3719 writes, 3718 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1400 writes, 5705 keys, 1400 commit groups, 1.0 writes per commit group, ingest: 9.82 MB, 0.02 MB/s
                                           Interval WAL: 1400 writes, 1400 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    154.1      0.12              0.04         7    0.018       0      0       0.0       0.0
                                             L6      1/0    7.31 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   2.7    163.8    133.1      0.38              0.12         6    0.063     26K   3321       0.0       0.0
                                            Sum      1/0    7.31 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.7    123.7    138.2      0.50              0.16        13    0.039     26K   3321       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.6    138.2    138.8      0.24              0.08         6    0.041     14K   2006       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0    163.8    133.1      0.38              0.12         6    0.063     26K   3321       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    159.4      0.12              0.04         6    0.020       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.019, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.07 GB write, 0.06 MB/s write, 0.06 GB read, 0.05 MB/s read, 0.5 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5581be5e11f0#2 capacity: 308.00 MB usage: 2.10 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(110,1.86 MB,0.602316%) FilterBlock(14,79.30 KB,0.0251423%) IndexBlock(14,168.92 KB,0.0535593%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 11:50:49 compute-0 sudo[234555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmlzugmukxrjlhrnvurwebxmyoqnpnmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405849.3171003-2050-221534025861105/AnsiballZ_stat.py'
Oct 02 11:50:49 compute-0 sudo[234555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:49 compute-0 ceph-mon[73607]: pgmap v692: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:49 compute-0 python3.9[234557]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:50:49 compute-0 sudo[234555]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:50 compute-0 sudo[234633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imfxljsfkhrrwjouadyfyahxmidpoowx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405849.3171003-2050-221534025861105/AnsiballZ_file.py'
Oct 02 11:50:50 compute-0 sudo[234633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:50.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:50 compute-0 python3.9[234635]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:50:50 compute-0 sudo[234633]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:50.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:51 compute-0 sudo[234786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvffoakpnjgtavrhygctkptatkdwinzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405850.758007-2086-98584861026905/AnsiballZ_systemd.py'
Oct 02 11:50:51 compute-0 sudo[234786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:51 compute-0 python3.9[234788]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:50:51 compute-0 systemd[1]: Reloading.
Oct 02 11:50:51 compute-0 systemd-rc-local-generator[234813]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:50:51 compute-0 systemd-sysv-generator[234818]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:50:51 compute-0 ceph-mon[73607]: pgmap v693: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:51 compute-0 systemd[1]: Starting Create netns directory...
Oct 02 11:50:51 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 11:50:51 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 11:50:51 compute-0 systemd[1]: Finished Create netns directory.
Oct 02 11:50:51 compute-0 sudo[234786]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:52.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:52.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:52 compute-0 sudo[234979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhjmhmebnszgiqovjnfszoznhlagfvxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405852.3734715-2116-248978015562376/AnsiballZ_file.py'
Oct 02 11:50:52 compute-0 sudo[234979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:52 compute-0 python3.9[234981]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:50:52 compute-0 sudo[234979]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:53 compute-0 sudo[235131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axtzwejogjnjyytgjnooojzybgafccnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405853.138898-2140-200855898252534/AnsiballZ_stat.py'
Oct 02 11:50:53 compute-0 sudo[235131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:53 compute-0 python3.9[235133]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:50:53 compute-0 sudo[235131]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:50:53 compute-0 sudo[235254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgyudhyqymsppdlksbtmyubtizxfldoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405853.138898-2140-200855898252534/AnsiballZ_copy.py'
Oct 02 11:50:53 compute-0 sudo[235254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:53 compute-0 ceph-mon[73607]: pgmap v694: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:50:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:50:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:50:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:50:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:50:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:50:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:50:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:50:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:50:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:50:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:50:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:50:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:50:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:50:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:50:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 11:50:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:50:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:50:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:50:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:50:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:50:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:50:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:54.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:54 compute-0 python3.9[235256]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405853.138898-2140-200855898252534/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:50:54 compute-0 sudo[235254]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:54.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:54 compute-0 ceph-mon[73607]: pgmap v695: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:55 compute-0 sudo[235407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-socwcbjjrpjliifpacyqdjfheukxyxrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405854.7560825-2191-99012771000088/AnsiballZ_file.py'
Oct 02 11:50:55 compute-0 sudo[235407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:55 compute-0 python3.9[235409]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:50:55 compute-0 sudo[235407]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:55 compute-0 sudo[235559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjrdyeelhtvnnmpuwjfiobjzpncqdugc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405855.6634195-2215-241748795930158/AnsiballZ_stat.py'
Oct 02 11:50:55 compute-0 sudo[235559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:56.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:56 compute-0 python3.9[235561]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:50:56 compute-0 sudo[235559]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:56.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:56 compute-0 sudo[235683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbpbxfgfzgkvtzxlabqyagmbsifaykkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405855.6634195-2215-241748795930158/AnsiballZ_copy.py'
Oct 02 11:50:56 compute-0 sudo[235683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:56 compute-0 python3.9[235685]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405855.6634195-2215-241748795930158/.source.json _original_basename=.fqcfztfg follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:50:56 compute-0 sudo[235683]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:57 compute-0 sudo[235835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjfcvwhdfrimagxmlawesdwdhqoeiicb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405857.0864801-2260-52612444603583/AnsiballZ_file.py'
Oct 02 11:50:57 compute-0 sudo[235835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:57 compute-0 python3.9[235837]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:50:57 compute-0 sudo[235835]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:57 compute-0 ceph-mon[73607]: pgmap v696: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:58.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:58 compute-0 sudo[235987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bthgtwxuvohguugerzekfoqhwqgxdoxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405858.0038707-2284-168117708752953/AnsiballZ_stat.py'
Oct 02 11:50:58 compute-0 sudo[235987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:50:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:58.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:58 compute-0 sudo[235987]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:58 compute-0 sudo[236111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcmksrjcqejqohvscefzobzrfrbuswcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405858.0038707-2284-168117708752953/AnsiballZ_copy.py'
Oct 02 11:50:58 compute-0 sudo[236111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:59 compute-0 sudo[236111]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:59 compute-0 ceph-mon[73607]: pgmap v697: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:59 compute-0 sudo[236263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smzyvtfwietpoclfgzszpxiibwsmthaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405859.446728-2335-85308279305168/AnsiballZ_container_config_data.py'
Oct 02 11:50:59 compute-0 sudo[236263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:00 compute-0 python3.9[236265]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct 02 11:51:00 compute-0 sudo[236263]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:51:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:00.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:51:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:00.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:00 compute-0 sudo[236416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzlpcywsidcffdycyjvuqgnfrkgdwxxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405860.3928077-2362-1556109020038/AnsiballZ_container_config_hash.py'
Oct 02 11:51:00 compute-0 sudo[236416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:00 compute-0 python3.9[236418]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 11:51:00 compute-0 sudo[236416]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:01 compute-0 sudo[236568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeecvaffiyonnmmwyjcimrdhzcxqpwrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405861.3549922-2389-176325493104882/AnsiballZ_podman_container_info.py'
Oct 02 11:51:01 compute-0 sudo[236568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:01 compute-0 ceph-mon[73607]: pgmap v698: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:01 compute-0 python3.9[236570]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 02 11:51:02 compute-0 sudo[236568]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:02.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:02.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:02 compute-0 ceph-mon[73607]: pgmap v699: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:02 compute-0 podman[236623]: 2025-10-02 11:51:02.940347598 +0000 UTC m=+0.074139491 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 11:51:03 compute-0 sudo[236769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxrucnllesxqmvezfuywgwnykpbnodqe ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759405863.261752-2428-93787780271687/AnsiballZ_edpm_container_manage.py'
Oct 02 11:51:03 compute-0 sudo[236769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:03 compute-0 python3[236771]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 11:51:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:51:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:04.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:51:04 compute-0 sudo[236799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:04 compute-0 sudo[236799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:04 compute-0 sudo[236799]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:04.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:04 compute-0 sudo[236824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:04 compute-0 sudo[236824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:04 compute-0 sudo[236824]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:04 compute-0 ceph-mon[73607]: pgmap v700: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:05 compute-0 podman[236784]: 2025-10-02 11:51:05.127355095 +0000 UTC m=+1.253859182 image pull d8d739f82a6fecf9df690e49539b589e74665b54e36448657b874630717d5bd1 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 02 11:51:05 compute-0 podman[236894]: 2025-10-02 11:51:05.337074333 +0000 UTC m=+0.104362133 container create a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:51:05 compute-0 podman[236894]: 2025-10-02 11:51:05.271631636 +0000 UTC m=+0.038919466 image pull d8d739f82a6fecf9df690e49539b589e74665b54e36448657b874630717d5bd1 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 02 11:51:05 compute-0 python3[236771]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 02 11:51:05 compute-0 sudo[236769]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:06.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:51:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:06.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:51:06 compute-0 sudo[237083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elfnmpnbaiqlzndczbkpzzqrbyqgqygh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405866.2919338-2452-239742899620065/AnsiballZ_stat.py'
Oct 02 11:51:06 compute-0 sudo[237083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:06 compute-0 python3.9[237085]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:51:06 compute-0 sudo[237083]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:07 compute-0 sudo[237237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfeianxjsnyllgsagssfuhccbwugztmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405867.2281086-2479-173732161055327/AnsiballZ_file.py'
Oct 02 11:51:07 compute-0 sudo[237237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:07 compute-0 ceph-mon[73607]: pgmap v701: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:07 compute-0 python3.9[237239]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:07 compute-0 sudo[237237]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:07 compute-0 sudo[237313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjsyghsewfkyafkydigkramxtwyjpsbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405867.2281086-2479-173732161055327/AnsiballZ_stat.py'
Oct 02 11:51:07 compute-0 sudo[237313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:08.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:08 compute-0 python3.9[237315]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:51:08 compute-0 sudo[237313]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:08.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:08 compute-0 sudo[237465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmvgmotkixixkphzzwmagvbnztlhhsow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405868.2678788-2479-237337954974906/AnsiballZ_copy.py'
Oct 02 11:51:08 compute-0 sudo[237465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:08 compute-0 python3.9[237467]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759405868.2678788-2479-237337954974906/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:08 compute-0 sudo[237465]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:09 compute-0 sudo[237541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwkbjuonxpoqjdvyvrhbedfbpnvmvrpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405868.2678788-2479-237337954974906/AnsiballZ_systemd.py'
Oct 02 11:51:09 compute-0 sudo[237541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:09 compute-0 python3.9[237543]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 11:51:09 compute-0 systemd[1]: Reloading.
Oct 02 11:51:09 compute-0 systemd-rc-local-generator[237574]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:51:09 compute-0 systemd-sysv-generator[237578]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:51:09 compute-0 sudo[237541]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:09 compute-0 ceph-mon[73607]: pgmap v702: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:10 compute-0 sudo[237655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbwkjaukymhijcqezcvicbhwblmgxaho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405868.2678788-2479-237337954974906/AnsiballZ_systemd.py'
Oct 02 11:51:10 compute-0 sudo[237655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:10 compute-0 sshd-session[237631]: Invalid user riscv from 167.99.55.34 port 38194
Oct 02 11:51:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:10.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:10 compute-0 sshd-session[237631]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 11:51:10 compute-0 sshd-session[237631]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=167.99.55.34
Oct 02 11:51:10 compute-0 python3.9[237657]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:51:10 compute-0 systemd[1]: Reloading.
Oct 02 11:51:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:10.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:10 compute-0 systemd-sysv-generator[237691]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:51:10 compute-0 systemd-rc-local-generator[237688]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:51:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:10 compute-0 systemd[1]: Starting multipathd container...
Oct 02 11:51:10 compute-0 ceph-mon[73607]: pgmap v703: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:51:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b7b9b197187f41d2d7b8b596ad2f91816987445684ba454a4017d67728a2739/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 02 11:51:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b7b9b197187f41d2d7b8b596ad2f91816987445684ba454a4017d67728a2739/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 02 11:51:10 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4.
Oct 02 11:51:10 compute-0 podman[237698]: 2025-10-02 11:51:10.929539048 +0000 UTC m=+0.112836621 container init a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 11:51:10 compute-0 multipathd[237714]: + sudo -E kolla_set_configs
Oct 02 11:51:10 compute-0 sudo[237720]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 02 11:51:10 compute-0 sudo[237720]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 11:51:10 compute-0 sudo[237720]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 02 11:51:10 compute-0 podman[237698]: 2025-10-02 11:51:10.965532822 +0000 UTC m=+0.148830365 container start a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 11:51:10 compute-0 podman[237698]: multipathd
Oct 02 11:51:10 compute-0 systemd[1]: Started multipathd container.
Oct 02 11:51:11 compute-0 multipathd[237714]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 11:51:11 compute-0 multipathd[237714]: INFO:__main__:Validating config file
Oct 02 11:51:11 compute-0 multipathd[237714]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 11:51:11 compute-0 multipathd[237714]: INFO:__main__:Writing out command to execute
Oct 02 11:51:11 compute-0 sudo[237720]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:11 compute-0 sudo[237655]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:11 compute-0 multipathd[237714]: ++ cat /run_command
Oct 02 11:51:11 compute-0 multipathd[237714]: + CMD='/usr/sbin/multipathd -d'
Oct 02 11:51:11 compute-0 multipathd[237714]: + ARGS=
Oct 02 11:51:11 compute-0 multipathd[237714]: + sudo kolla_copy_cacerts
Oct 02 11:51:11 compute-0 sudo[237740]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 02 11:51:11 compute-0 sudo[237740]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 11:51:11 compute-0 sudo[237740]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 02 11:51:11 compute-0 sudo[237740]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:11 compute-0 multipathd[237714]: + [[ ! -n '' ]]
Oct 02 11:51:11 compute-0 multipathd[237714]: + . kolla_extend_start
Oct 02 11:51:11 compute-0 multipathd[237714]: Running command: '/usr/sbin/multipathd -d'
Oct 02 11:51:11 compute-0 multipathd[237714]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 02 11:51:11 compute-0 multipathd[237714]: + umask 0022
Oct 02 11:51:11 compute-0 multipathd[237714]: + exec /usr/sbin/multipathd -d
Oct 02 11:51:11 compute-0 podman[237721]: 2025-10-02 11:51:11.035787027 +0000 UTC m=+0.060999269 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 11:51:11 compute-0 systemd[1]: a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4-35bdd121ed217184.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 11:51:11 compute-0 systemd[1]: a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4-35bdd121ed217184.service: Failed with result 'exit-code'.
Oct 02 11:51:11 compute-0 multipathd[237714]: 3938.735635 | --------start up--------
Oct 02 11:51:11 compute-0 multipathd[237714]: 3938.735656 | read /etc/multipath.conf
Oct 02 11:51:11 compute-0 multipathd[237714]: 3938.741358 | path checkers start up
Oct 02 11:51:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:12.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:12 compute-0 python3.9[237902]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:51:12 compute-0 sshd-session[237631]: Failed password for invalid user riscv from 167.99.55.34 port 38194 ssh2
Oct 02 11:51:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:12.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:51:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:51:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:51:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:51:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:51:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:51:12 compute-0 sudo[238055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phumptobuxmszznyomnulsdqhnguzgjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405872.5814373-2587-13187761138522/AnsiballZ_command.py'
Oct 02 11:51:12 compute-0 sudo[238055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:13 compute-0 python3.9[238057]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:51:13 compute-0 sudo[238055]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:13 compute-0 ceph-mon[73607]: pgmap v704: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:13 compute-0 sudo[238220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izomlryjbxnfijejxfwchoylylcrsium ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405873.483884-2611-246311930719582/AnsiballZ_systemd.py'
Oct 02 11:51:13 compute-0 sudo[238220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:14 compute-0 python3.9[238222]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 11:51:14 compute-0 systemd[1]: Stopping multipathd container...
Oct 02 11:51:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:51:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:14.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:51:14 compute-0 sshd-session[237631]: Connection closed by invalid user riscv 167.99.55.34 port 38194 [preauth]
Oct 02 11:51:14 compute-0 multipathd[237714]: 3941.923785 | exit (signal)
Oct 02 11:51:14 compute-0 multipathd[237714]: 3941.923840 | --------shut down-------
Oct 02 11:51:14 compute-0 systemd[1]: libpod-a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4.scope: Deactivated successfully.
Oct 02 11:51:14 compute-0 podman[238226]: 2025-10-02 11:51:14.263819489 +0000 UTC m=+0.158297147 container died a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3)
Oct 02 11:51:14 compute-0 systemd[1]: a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4-35bdd121ed217184.timer: Deactivated successfully.
Oct 02 11:51:14 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4.
Oct 02 11:51:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4-userdata-shm.mount: Deactivated successfully.
Oct 02 11:51:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b7b9b197187f41d2d7b8b596ad2f91816987445684ba454a4017d67728a2739-merged.mount: Deactivated successfully.
Oct 02 11:51:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:51:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:14.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:51:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:14 compute-0 podman[238226]: 2025-10-02 11:51:14.691090568 +0000 UTC m=+0.585568216 container cleanup a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 11:51:14 compute-0 podman[238226]: multipathd
Oct 02 11:51:14 compute-0 podman[238258]: multipathd
Oct 02 11:51:14 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct 02 11:51:14 compute-0 systemd[1]: Stopped multipathd container.
Oct 02 11:51:14 compute-0 systemd[1]: Starting multipathd container...
Oct 02 11:51:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:51:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b7b9b197187f41d2d7b8b596ad2f91816987445684ba454a4017d67728a2739/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 02 11:51:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b7b9b197187f41d2d7b8b596ad2f91816987445684ba454a4017d67728a2739/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 02 11:51:14 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4.
Oct 02 11:51:14 compute-0 podman[238271]: 2025-10-02 11:51:14.919772922 +0000 UTC m=+0.114762588 container init a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:51:14 compute-0 multipathd[238286]: + sudo -E kolla_set_configs
Oct 02 11:51:14 compute-0 sudo[238292]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 02 11:51:14 compute-0 sudo[238292]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 11:51:14 compute-0 sudo[238292]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 02 11:51:14 compute-0 podman[238271]: 2025-10-02 11:51:14.952914665 +0000 UTC m=+0.147904321 container start a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:51:14 compute-0 podman[238271]: multipathd
Oct 02 11:51:14 compute-0 multipathd[238286]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 11:51:14 compute-0 multipathd[238286]: INFO:__main__:Validating config file
Oct 02 11:51:14 compute-0 multipathd[238286]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 11:51:14 compute-0 multipathd[238286]: INFO:__main__:Writing out command to execute
Oct 02 11:51:14 compute-0 systemd[1]: Started multipathd container.
Oct 02 11:51:14 compute-0 sudo[238292]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:14 compute-0 multipathd[238286]: ++ cat /run_command
Oct 02 11:51:14 compute-0 multipathd[238286]: + CMD='/usr/sbin/multipathd -d'
Oct 02 11:51:14 compute-0 multipathd[238286]: + ARGS=
Oct 02 11:51:14 compute-0 multipathd[238286]: + sudo kolla_copy_cacerts
Oct 02 11:51:15 compute-0 sudo[238309]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 02 11:51:15 compute-0 sudo[238309]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 11:51:15 compute-0 sudo[238309]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 02 11:51:15 compute-0 sudo[238309]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:15 compute-0 multipathd[238286]: + [[ ! -n '' ]]
Oct 02 11:51:15 compute-0 multipathd[238286]: + . kolla_extend_start
Oct 02 11:51:15 compute-0 multipathd[238286]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 02 11:51:15 compute-0 multipathd[238286]: Running command: '/usr/sbin/multipathd -d'
Oct 02 11:51:15 compute-0 multipathd[238286]: + umask 0022
Oct 02 11:51:15 compute-0 multipathd[238286]: + exec /usr/sbin/multipathd -d
Oct 02 11:51:15 compute-0 sudo[238220]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:15 compute-0 podman[238293]: 2025-10-02 11:51:15.01748843 +0000 UTC m=+0.057037750 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 11:51:15 compute-0 systemd[1]: a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4-353b374540688a8.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 11:51:15 compute-0 systemd[1]: a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4-353b374540688a8.service: Failed with result 'exit-code'.
Oct 02 11:51:15 compute-0 multipathd[238286]: 3942.717361 | --------start up--------
Oct 02 11:51:15 compute-0 multipathd[238286]: 3942.717378 | read /etc/multipath.conf
Oct 02 11:51:15 compute-0 multipathd[238286]: 3942.723161 | path checkers start up
Oct 02 11:51:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:15 compute-0 sudo[238475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awzhixnopzfkklapbpegnrghhquqlnwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405875.3002782-2635-258953912140448/AnsiballZ_file.py'
Oct 02 11:51:15 compute-0 sudo[238475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:15 compute-0 ceph-mon[73607]: pgmap v705: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:15 compute-0 python3.9[238477]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:15 compute-0 sudo[238475]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:15 compute-0 podman[238478]: 2025-10-02 11:51:15.9576432 +0000 UTC m=+0.097348811 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 11:51:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:16.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:16.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:16 compute-0 sudo[238648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rowykkaajneissfdjdltemfqdotsaxym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405876.4548974-2671-182847711500094/AnsiballZ_file.py'
Oct 02 11:51:16 compute-0 sudo[238648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:16 compute-0 ceph-mon[73607]: pgmap v706: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:17 compute-0 python3.9[238650]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 02 11:51:17 compute-0 sudo[238648]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:17 compute-0 sudo[238810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tktvaqpngncybhvolgkokzgpvaejyqlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405877.2690918-2695-259451648617133/AnsiballZ_modprobe.py'
Oct 02 11:51:17 compute-0 sudo[238810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:17 compute-0 podman[238774]: 2025-10-02 11:51:17.614702668 +0000 UTC m=+0.102961598 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:51:17 compute-0 python3.9[238823]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct 02 11:51:17 compute-0 kernel: Key type psk registered
Oct 02 11:51:17 compute-0 sudo[238810]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:18.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:18.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:18 compute-0 sudo[238993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsflaqvbncvrmqtyofaqelfohluvaxto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405878.195977-2719-270426818933838/AnsiballZ_stat.py'
Oct 02 11:51:18 compute-0 sudo[238993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:18 compute-0 python3.9[238995]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:51:18 compute-0 sudo[238993]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:19 compute-0 sudo[239116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqvigviwwgnwvizpirqcgsajiyvigctk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405878.195977-2719-270426818933838/AnsiballZ_copy.py'
Oct 02 11:51:19 compute-0 sudo[239116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:19 compute-0 python3.9[239118]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405878.195977-2719-270426818933838/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:19 compute-0 sudo[239116]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:19 compute-0 ceph-mon[73607]: pgmap v707: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:20 compute-0 sudo[239268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywqhpvifhhrngxnggpojkadikdpjsvde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405879.747822-2767-72534147690211/AnsiballZ_lineinfile.py'
Oct 02 11:51:20 compute-0 sudo[239268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:20.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:20 compute-0 python3.9[239270]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:20 compute-0 sudo[239268]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:20.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:20 compute-0 sudo[239421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omymuvtwekxairrrizkxlzpifvzbtaez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405880.6050336-2791-133352768600828/AnsiballZ_systemd.py'
Oct 02 11:51:20 compute-0 sudo[239421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:21 compute-0 ceph-mon[73607]: pgmap v708: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:21 compute-0 python3.9[239423]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 11:51:21 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 02 11:51:21 compute-0 systemd[1]: Stopped Load Kernel Modules.
Oct 02 11:51:21 compute-0 systemd[1]: Stopping Load Kernel Modules...
Oct 02 11:51:21 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 02 11:51:21 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 02 11:51:21 compute-0 sudo[239421]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:22 compute-0 sudo[239577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkmkicptmfovgxkpitmakblflomkkkct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405881.764014-2815-211725131723370/AnsiballZ_setup.py'
Oct 02 11:51:22 compute-0 sudo[239577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:22.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:22 compute-0 python3.9[239579]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:51:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:22.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:22 compute-0 sudo[239577]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:23 compute-0 sudo[239662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixillwggzvyliaubfksgxddxjhnximsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405881.764014-2815-211725131723370/AnsiballZ_dnf.py'
Oct 02 11:51:23 compute-0 sudo[239662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:23 compute-0 python3.9[239664]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:51:23 compute-0 ceph-mon[73607]: pgmap v709: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:24.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:24.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:24 compute-0 sudo[239667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:24 compute-0 sudo[239667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:24 compute-0 sudo[239667]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:24 compute-0 sudo[239692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:24 compute-0 sudo[239692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:24 compute-0 sudo[239692]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:24 compute-0 ceph-mon[73607]: pgmap v710: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:26.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:26.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:51:26.907 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:51:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:51:26.907 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:51:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:51:26.908 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:51:27 compute-0 ceph-mon[73607]: pgmap v711: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:28.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:28.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:28 compute-0 ceph-mon[73607]: pgmap v712: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:29 compute-0 systemd[1]: Reloading.
Oct 02 11:51:29 compute-0 systemd-rc-local-generator[239751]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:51:29 compute-0 systemd-sysv-generator[239756]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:51:29 compute-0 systemd[1]: Reloading.
Oct 02 11:51:29 compute-0 systemd-rc-local-generator[239778]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:51:29 compute-0 systemd-sysv-generator[239783]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:51:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:30.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:30 compute-0 systemd-logind[789]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 02 11:51:30 compute-0 systemd-logind[789]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 02 11:51:30 compute-0 lvm[239829]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 02 11:51:30 compute-0 lvm[239829]: VG ceph_vg0 finished
Oct 02 11:51:30 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 11:51:30 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 02 11:51:30 compute-0 systemd[1]: Reloading.
Oct 02 11:51:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:30.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:30 compute-0 systemd-sysv-generator[239885]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:51:30 compute-0 systemd-rc-local-generator[239881]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:51:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:30 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 11:51:31 compute-0 sudo[239662]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:31 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 11:51:31 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 02 11:51:31 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.483s CPU time.
Oct 02 11:51:31 compute-0 systemd[1]: run-rc6ba86fa6f3647ae8c0b7b16ac9c3e82.service: Deactivated successfully.
Oct 02 11:51:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:32.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:32 compute-0 ceph-mon[73607]: pgmap v713: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:51:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:32.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:51:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:33 compute-0 ceph-mon[73607]: pgmap v714: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:33 compute-0 podman[241048]: 2025-10-02 11:51:33.906735731 +0000 UTC m=+0.050460130 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 11:51:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:34.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:34 compute-0 sudo[241192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdvamxwemghnctzmwblqflveydvhbjfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405894.0196693-2851-150010851187181/AnsiballZ_file.py'
Oct 02 11:51:34 compute-0 sudo[241192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:34 compute-0 python3.9[241194]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:34 compute-0 sudo[241192]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:34.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:35 compute-0 ceph-mon[73607]: pgmap v715: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:35 compute-0 python3.9[241345]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:51:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:51:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:36.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:51:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:36.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:37 compute-0 sudo[241500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajuqeggfvgjxtqeecpvuyjupkkhmvunb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405896.7128336-2903-80501395729847/AnsiballZ_file.py'
Oct 02 11:51:37 compute-0 sudo[241500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:37 compute-0 python3.9[241502]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:37 compute-0 sudo[241500]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:37 compute-0 ceph-mon[73607]: pgmap v716: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:51:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:38.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:51:38 compute-0 sudo[241653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzcqwhofijbiejvukovgnrzescsuvzzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405897.783766-2936-266163165695690/AnsiballZ_systemd_service.py'
Oct 02 11:51:38 compute-0 sudo[241653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:38.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:38 compute-0 python3.9[241655]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 11:51:38 compute-0 systemd[1]: Reloading.
Oct 02 11:51:38 compute-0 ceph-mon[73607]: pgmap v717: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:38 compute-0 systemd-rc-local-generator[241679]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:51:38 compute-0 systemd-sysv-generator[241685]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:51:39 compute-0 sudo[241653]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:39 compute-0 python3.9[241839]: ansible-ansible.builtin.service_facts Invoked
Oct 02 11:51:39 compute-0 network[241856]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 11:51:39 compute-0 network[241857]: 'network-scripts' will be removed from distribution in near future.
Oct 02 11:51:39 compute-0 network[241858]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 11:51:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:40.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:40.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:41 compute-0 ceph-mon[73607]: pgmap v718: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:42.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_11:51:42
Oct 02 11:51:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:51:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 11:51:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['volumes', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'backups', '.mgr', 'default.rgw.control']
Oct 02 11:51:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:51:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:42.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:51:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:51:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:51:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:51:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:51:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:51:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:51:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:51:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:51:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:51:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:51:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:51:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:51:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:51:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:51:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:51:43 compute-0 ceph-mon[73607]: pgmap v719: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:44.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:51:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:44.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:51:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:44 compute-0 sudo[242012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:44 compute-0 sudo[242012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:44 compute-0 sudo[242012]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:44 compute-0 sudo[242037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:44 compute-0 sudo[242037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:44 compute-0 sudo[242037]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:45 compute-0 sudo[242200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyuohexvzbrjkjogcfxxwjxjkxndpirv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405905.3061945-2993-259350552401557/AnsiballZ_systemd_service.py'
Oct 02 11:51:45 compute-0 sudo[242200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:45 compute-0 podman[242161]: 2025-10-02 11:51:45.763392964 +0000 UTC m=+0.064209828 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 11:51:45 compute-0 ceph-mon[73607]: pgmap v720: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:46 compute-0 python3.9[242206]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:51:46 compute-0 sudo[242200]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:46 compute-0 podman[242212]: 2025-10-02 11:51:46.153597242 +0000 UTC m=+0.076167561 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 11:51:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:46.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:46 compute-0 sudo[242347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:46 compute-0 sudo[242347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:46 compute-0 sudo[242347]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:46.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:46 compute-0 sudo[242413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtpheqaxhytexhztahknajbbmzskcygt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405906.223849-2993-10897493346070/AnsiballZ_systemd_service.py'
Oct 02 11:51:46 compute-0 sudo[242413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:46 compute-0 sudo[242399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:51:46 compute-0 sudo[242399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:46 compute-0 sudo[242399]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:46 compute-0 sudo[242433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:46 compute-0 sudo[242433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:46 compute-0 sudo[242433]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:46 compute-0 sudo[242458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:51:46 compute-0 sudo[242458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:46 compute-0 ceph-mon[73607]: pgmap v721: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:46 compute-0 python3.9[242430]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:46.876420) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405906876481, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1482, "num_deletes": 251, "total_data_size": 2718575, "memory_usage": 2763616, "flush_reason": "Manual Compaction"}
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405906906428, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2678237, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15462, "largest_seqno": 16943, "table_properties": {"data_size": 2671331, "index_size": 4041, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13830, "raw_average_key_size": 19, "raw_value_size": 2657679, "raw_average_value_size": 3785, "num_data_blocks": 182, "num_entries": 702, "num_filter_entries": 702, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405751, "oldest_key_time": 1759405751, "file_creation_time": 1759405906, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 30053 microseconds, and 8620 cpu microseconds.
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:46.906478) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2678237 bytes OK
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:46.906499) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:46.907902) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:46.907918) EVENT_LOG_v1 {"time_micros": 1759405906907913, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:46.907936) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2712378, prev total WAL file size 2712378, number of live WAL files 2.
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:46.908583) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2615KB)], [35(7484KB)]
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405906908630, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 10342067, "oldest_snapshot_seqno": -1}
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4247 keys, 8313318 bytes, temperature: kUnknown
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405906988540, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 8313318, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8283345, "index_size": 18279, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 104993, "raw_average_key_size": 24, "raw_value_size": 8204889, "raw_average_value_size": 1931, "num_data_blocks": 769, "num_entries": 4247, "num_filter_entries": 4247, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759405906, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:46.988748) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 8313318 bytes
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:46.991005) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.3 rd, 104.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 7.3 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(7.0) write-amplify(3.1) OK, records in: 4764, records dropped: 517 output_compression: NoCompression
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:46.991044) EVENT_LOG_v1 {"time_micros": 1759405906991013, "job": 16, "event": "compaction_finished", "compaction_time_micros": 79971, "compaction_time_cpu_micros": 32458, "output_level": 6, "num_output_files": 1, "total_output_size": 8313318, "num_input_records": 4764, "num_output_records": 4247, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405906991700, "job": 16, "event": "table_file_deletion", "file_number": 37}
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405906992939, "job": 16, "event": "table_file_deletion", "file_number": 35}
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:46.908533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:46.993055) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:46.993063) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:46.993066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:46.993069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:51:46 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:46.993073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:51:47 compute-0 sudo[242458]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:51:47 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:51:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:51:47 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:51:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:51:47 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:51:47 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 3b8d4892-6405-4683-a5cf-2a46731ed4da does not exist
Oct 02 11:51:47 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 78974abd-7e9a-40a6-9080-4bdd2eb91a02 does not exist
Oct 02 11:51:47 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev de92fcf6-d12b-4413-9a56-e34386dfc052 does not exist
Oct 02 11:51:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:51:47 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:51:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:51:47 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:51:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:51:47 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:51:47 compute-0 sudo[242513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:47 compute-0 sudo[242513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:47 compute-0 sudo[242513]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:47 compute-0 sudo[242538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:51:47 compute-0 sudo[242538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:47 compute-0 sudo[242538]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:47 compute-0 sudo[242563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:47 compute-0 sudo[242563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:47 compute-0 sudo[242563]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:47 compute-0 sudo[242588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:51:47 compute-0 sudo[242588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:47 compute-0 sudo[242413]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:47 compute-0 podman[242654]: 2025-10-02 11:51:47.86606128 +0000 UTC m=+0.018952546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:51:48 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:51:48 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:51:48 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:51:48 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:51:48 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:51:48 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:51:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:48.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:48 compute-0 podman[242654]: 2025-10-02 11:51:48.186975318 +0000 UTC m=+0.339866574 container create 407e8bfe39d4b4d42e87cdc2326ca3d3e449f36d71548e8ab3ec3d5d5ef46a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 11:51:48 compute-0 systemd[1]: Started libpod-conmon-407e8bfe39d4b4d42e87cdc2326ca3d3e449f36d71548e8ab3ec3d5d5ef46a9a.scope.
Oct 02 11:51:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:51:48 compute-0 podman[242654]: 2025-10-02 11:51:48.415208831 +0000 UTC m=+0.568100137 container init 407e8bfe39d4b4d42e87cdc2326ca3d3e449f36d71548e8ab3ec3d5d5ef46a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 11:51:48 compute-0 podman[242654]: 2025-10-02 11:51:48.424174181 +0000 UTC m=+0.577065437 container start 407e8bfe39d4b4d42e87cdc2326ca3d3e449f36d71548e8ab3ec3d5d5ef46a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 11:51:48 compute-0 podman[242654]: 2025-10-02 11:51:48.428179419 +0000 UTC m=+0.581070695 container attach 407e8bfe39d4b4d42e87cdc2326ca3d3e449f36d71548e8ab3ec3d5d5ef46a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:51:48 compute-0 keen_ritchie[242779]: 167 167
Oct 02 11:51:48 compute-0 systemd[1]: libpod-407e8bfe39d4b4d42e87cdc2326ca3d3e449f36d71548e8ab3ec3d5d5ef46a9a.scope: Deactivated successfully.
Oct 02 11:51:48 compute-0 conmon[242779]: conmon 407e8bfe39d4b4d42e87 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-407e8bfe39d4b4d42e87cdc2326ca3d3e449f36d71548e8ab3ec3d5d5ef46a9a.scope/container/memory.events
Oct 02 11:51:48 compute-0 podman[242654]: 2025-10-02 11:51:48.431419509 +0000 UTC m=+0.584310765 container died 407e8bfe39d4b4d42e87cdc2326ca3d3e449f36d71548e8ab3ec3d5d5ef46a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ritchie, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 11:51:48 compute-0 podman[242655]: 2025-10-02 11:51:48.449039461 +0000 UTC m=+0.584892799 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 11:51:48 compute-0 sudo[242852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofbcvjdarkhatxnokxxakqsyuwjwxpfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405908.1214306-2993-70530764065392/AnsiballZ_systemd_service.py'
Oct 02 11:51:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3a339576f001c58124fa1bc392ef779c35e6cd31a5b89797809f48c6a62d1b6-merged.mount: Deactivated successfully.
Oct 02 11:51:48 compute-0 sudo[242852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:48 compute-0 podman[242654]: 2025-10-02 11:51:48.489939065 +0000 UTC m=+0.642830321 container remove 407e8bfe39d4b4d42e87cdc2326ca3d3e449f36d71548e8ab3ec3d5d5ef46a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Oct 02 11:51:48 compute-0 systemd[1]: libpod-conmon-407e8bfe39d4b4d42e87cdc2326ca3d3e449f36d71548e8ab3ec3d5d5ef46a9a.scope: Deactivated successfully.
Oct 02 11:51:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:48.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:48 compute-0 podman[242872]: 2025-10-02 11:51:48.691422362 +0000 UTC m=+0.064307900 container create 94e2925625edb5644171e14a94512fd2459c06fac9ef9db91aa554b806706be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 11:51:48 compute-0 podman[242872]: 2025-10-02 11:51:48.651717027 +0000 UTC m=+0.024602585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:51:48 compute-0 systemd[1]: Started libpod-conmon-94e2925625edb5644171e14a94512fd2459c06fac9ef9db91aa554b806706be9.scope.
Oct 02 11:51:48 compute-0 python3.9[242864]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:51:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:51:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557dd5e80aae6d947c540324bbfb22356bcadb45673ebe668dad7a11bc9a0891/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:51:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557dd5e80aae6d947c540324bbfb22356bcadb45673ebe668dad7a11bc9a0891/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:51:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557dd5e80aae6d947c540324bbfb22356bcadb45673ebe668dad7a11bc9a0891/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:51:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557dd5e80aae6d947c540324bbfb22356bcadb45673ebe668dad7a11bc9a0891/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:51:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557dd5e80aae6d947c540324bbfb22356bcadb45673ebe668dad7a11bc9a0891/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:51:48 compute-0 sudo[242852]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:48 compute-0 podman[242872]: 2025-10-02 11:51:48.856186507 +0000 UTC m=+0.229072055 container init 94e2925625edb5644171e14a94512fd2459c06fac9ef9db91aa554b806706be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Oct 02 11:51:48 compute-0 podman[242872]: 2025-10-02 11:51:48.870549699 +0000 UTC m=+0.243435227 container start 94e2925625edb5644171e14a94512fd2459c06fac9ef9db91aa554b806706be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ellis, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:51:48 compute-0 podman[242872]: 2025-10-02 11:51:48.893282217 +0000 UTC m=+0.266167745 container attach 94e2925625edb5644171e14a94512fd2459c06fac9ef9db91aa554b806706be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ellis, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 11:51:49 compute-0 ceph-mon[73607]: pgmap v722: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:49 compute-0 sudo[243044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irbvwroimrxhijwainxpfblhcyryufkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405909.010463-2993-75212617676647/AnsiballZ_systemd_service.py'
Oct 02 11:51:49 compute-0 sudo[243044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:49 compute-0 python3.9[243046]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:51:49 compute-0 sudo[243044]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:49 compute-0 magical_ellis[242889]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:51:49 compute-0 magical_ellis[242889]: --> relative data size: 1.0
Oct 02 11:51:49 compute-0 magical_ellis[242889]: --> All data devices are unavailable
Oct 02 11:51:49 compute-0 systemd[1]: libpod-94e2925625edb5644171e14a94512fd2459c06fac9ef9db91aa554b806706be9.scope: Deactivated successfully.
Oct 02 11:51:49 compute-0 podman[242872]: 2025-10-02 11:51:49.726110351 +0000 UTC m=+1.098995879 container died 94e2925625edb5644171e14a94512fd2459c06fac9ef9db91aa554b806706be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ellis, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 11:51:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-557dd5e80aae6d947c540324bbfb22356bcadb45673ebe668dad7a11bc9a0891-merged.mount: Deactivated successfully.
Oct 02 11:51:49 compute-0 podman[242872]: 2025-10-02 11:51:49.886496888 +0000 UTC m=+1.259382416 container remove 94e2925625edb5644171e14a94512fd2459c06fac9ef9db91aa554b806706be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ellis, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:51:49 compute-0 systemd[1]: libpod-conmon-94e2925625edb5644171e14a94512fd2459c06fac9ef9db91aa554b806706be9.scope: Deactivated successfully.
Oct 02 11:51:49 compute-0 sudo[242588]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:49 compute-0 sudo[243170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:49 compute-0 sudo[243170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:49 compute-0 sudo[243170]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:50 compute-0 sudo[243219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:51:50 compute-0 sudo[243219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:50 compute-0 sudo[243219]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:50 compute-0 sudo[243244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:50 compute-0 sudo[243244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:50 compute-0 sudo[243244]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:50.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:50 compute-0 sudo[243316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csdwifcgjmgllypxdrzvgijllgzqmiwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405909.766604-2993-277723157839997/AnsiballZ_systemd_service.py'
Oct 02 11:51:50 compute-0 sudo[243272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 11:51:50 compute-0 sudo[243272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:50 compute-0 sudo[243316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:50 compute-0 podman[243366]: 2025-10-02 11:51:50.48008339 +0000 UTC m=+0.036772224 container create efb4fe58d2753b12174763b34280c67477787cbe1227670e3cb7d696b0086937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_curie, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 11:51:50 compute-0 python3.9[243322]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:51:50 compute-0 systemd[1]: Started libpod-conmon-efb4fe58d2753b12174763b34280c67477787cbe1227670e3cb7d696b0086937.scope.
Oct 02 11:51:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:51:50 compute-0 sudo[243316]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:50.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:50 compute-0 podman[243366]: 2025-10-02 11:51:50.553079622 +0000 UTC m=+0.109768476 container init efb4fe58d2753b12174763b34280c67477787cbe1227670e3cb7d696b0086937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Oct 02 11:51:50 compute-0 podman[243366]: 2025-10-02 11:51:50.559608722 +0000 UTC m=+0.116297556 container start efb4fe58d2753b12174763b34280c67477787cbe1227670e3cb7d696b0086937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_curie, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:51:50 compute-0 podman[243366]: 2025-10-02 11:51:50.464249551 +0000 UTC m=+0.020938405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:51:50 compute-0 podman[243366]: 2025-10-02 11:51:50.562938505 +0000 UTC m=+0.119627339 container attach efb4fe58d2753b12174763b34280c67477787cbe1227670e3cb7d696b0086937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_curie, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 11:51:50 compute-0 blissful_curie[243383]: 167 167
Oct 02 11:51:50 compute-0 systemd[1]: libpod-efb4fe58d2753b12174763b34280c67477787cbe1227670e3cb7d696b0086937.scope: Deactivated successfully.
Oct 02 11:51:50 compute-0 podman[243366]: 2025-10-02 11:51:50.564751239 +0000 UTC m=+0.121440063 container died efb4fe58d2753b12174763b34280c67477787cbe1227670e3cb7d696b0086937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_curie, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 11:51:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd54a3a3a1920df12d90841e576a7eaea9729f414f18df07b6ae67eb525cfce7-merged.mount: Deactivated successfully.
Oct 02 11:51:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:50 compute-0 podman[243366]: 2025-10-02 11:51:50.604001502 +0000 UTC m=+0.160690336 container remove efb4fe58d2753b12174763b34280c67477787cbe1227670e3cb7d696b0086937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_curie, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:51:50 compute-0 systemd[1]: libpod-conmon-efb4fe58d2753b12174763b34280c67477787cbe1227670e3cb7d696b0086937.scope: Deactivated successfully.
Oct 02 11:51:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:50 compute-0 podman[243482]: 2025-10-02 11:51:50.743674421 +0000 UTC m=+0.034580630 container create 33b7298e43ef343087b48c60d14b5335a35a236ef0f509ccd5ef924ee87ddc49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 11:51:50 compute-0 systemd[1]: Started libpod-conmon-33b7298e43ef343087b48c60d14b5335a35a236ef0f509ccd5ef924ee87ddc49.scope.
Oct 02 11:51:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:51:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/862ca977dcad9e90dabe6a4fb3bb6fc8bd5df6d8874e354722ec0dfc1d241178/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:51:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/862ca977dcad9e90dabe6a4fb3bb6fc8bd5df6d8874e354722ec0dfc1d241178/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:51:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/862ca977dcad9e90dabe6a4fb3bb6fc8bd5df6d8874e354722ec0dfc1d241178/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:51:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/862ca977dcad9e90dabe6a4fb3bb6fc8bd5df6d8874e354722ec0dfc1d241178/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:51:50 compute-0 podman[243482]: 2025-10-02 11:51:50.729848992 +0000 UTC m=+0.020755231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:51:50 compute-0 podman[243482]: 2025-10-02 11:51:50.899931537 +0000 UTC m=+0.190837766 container init 33b7298e43ef343087b48c60d14b5335a35a236ef0f509ccd5ef924ee87ddc49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 11:51:50 compute-0 sudo[243576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlmhwnouojcfkctzvpvmiqpyvghvztfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405910.6585577-2993-275450149235769/AnsiballZ_systemd_service.py'
Oct 02 11:51:50 compute-0 podman[243482]: 2025-10-02 11:51:50.906792925 +0000 UTC m=+0.197699144 container start 33b7298e43ef343087b48c60d14b5335a35a236ef0f509ccd5ef924ee87ddc49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_elbakyan, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:51:50 compute-0 sudo[243576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:50 compute-0 podman[243482]: 2025-10-02 11:51:50.917279373 +0000 UTC m=+0.208185592 container attach 33b7298e43ef343087b48c60d14b5335a35a236ef0f509ccd5ef924ee87ddc49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_elbakyan, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 11:51:50 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Oct 02 11:51:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:50.948760) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 11:51:50 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Oct 02 11:51:50 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405910948902, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 293, "num_deletes": 254, "total_data_size": 52169, "memory_usage": 59128, "flush_reason": "Manual Compaction"}
Oct 02 11:51:50 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Oct 02 11:51:50 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405910958046, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 52315, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16944, "largest_seqno": 17236, "table_properties": {"data_size": 50413, "index_size": 130, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4614, "raw_average_key_size": 16, "raw_value_size": 46522, "raw_average_value_size": 166, "num_data_blocks": 6, "num_entries": 280, "num_filter_entries": 280, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405907, "oldest_key_time": 1759405907, "file_creation_time": 1759405910, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:51:50 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 9270 microseconds, and 946 cpu microseconds.
Oct 02 11:51:50 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:51:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:50.958079) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 52315 bytes OK
Oct 02 11:51:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:50.958095) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Oct 02 11:51:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:50.961715) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Oct 02 11:51:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:50.961729) EVENT_LOG_v1 {"time_micros": 1759405910961724, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 11:51:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:50.961746) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 11:51:50 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 49996, prev total WAL file size 65969, number of live WAL files 2.
Oct 02 11:51:50 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:51:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:50.962129) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Oct 02 11:51:50 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 11:51:50 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(51KB)], [38(8118KB)]
Oct 02 11:51:50 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405910962157, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 8365633, "oldest_snapshot_seqno": -1}
Oct 02 11:51:51 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4009 keys, 8017851 bytes, temperature: kUnknown
Oct 02 11:51:51 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405911031465, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 8017851, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7989679, "index_size": 17054, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 101364, "raw_average_key_size": 25, "raw_value_size": 7915465, "raw_average_value_size": 1974, "num_data_blocks": 704, "num_entries": 4009, "num_filter_entries": 4009, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759405910, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:51:51 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:51:51 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:51.031878) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 8017851 bytes
Oct 02 11:51:51 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:51.038108) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.5 rd, 115.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 7.9 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(313.2) write-amplify(153.3) OK, records in: 4527, records dropped: 518 output_compression: NoCompression
Oct 02 11:51:51 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:51.038155) EVENT_LOG_v1 {"time_micros": 1759405911038138, "job": 18, "event": "compaction_finished", "compaction_time_micros": 69434, "compaction_time_cpu_micros": 16224, "output_level": 6, "num_output_files": 1, "total_output_size": 8017851, "num_input_records": 4527, "num_output_records": 4009, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 11:51:51 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:51:51 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405911038365, "job": 18, "event": "table_file_deletion", "file_number": 40}
Oct 02 11:51:51 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:51:51 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405911039758, "job": 18, "event": "table_file_deletion", "file_number": 38}
Oct 02 11:51:51 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:50.962060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:51:51 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:51.039867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:51:51 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:51.039874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:51:51 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:51.039876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:51:51 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:51.039878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:51:51 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:51:51.039880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:51:51 compute-0 python3.9[243580]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:51:51 compute-0 sudo[243576]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:51 compute-0 ceph-mon[73607]: pgmap v723: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]: {
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]:     "1": [
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]:         {
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]:             "devices": [
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]:                 "/dev/loop3"
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]:             ],
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]:             "lv_name": "ceph_lv0",
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]:             "lv_size": "7511998464",
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]:             "name": "ceph_lv0",
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]:             "tags": {
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]:                 "ceph.cluster_name": "ceph",
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]:                 "ceph.crush_device_class": "",
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]:                 "ceph.encrypted": "0",
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]:                 "ceph.osd_id": "1",
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]:                 "ceph.type": "block",
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]:                 "ceph.vdo": "0"
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]:             },
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]:             "type": "block",
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]:             "vg_name": "ceph_vg0"
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]:         }
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]:     ]
Oct 02 11:51:51 compute-0 eloquent_elbakyan[243523]: }
Oct 02 11:51:51 compute-0 sudo[243735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddwpjxmugeiwguqyrgsxncltlzevvqjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405911.3739505-2993-65925531807135/AnsiballZ_systemd_service.py'
Oct 02 11:51:51 compute-0 sudo[243735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:51 compute-0 systemd[1]: libpod-33b7298e43ef343087b48c60d14b5335a35a236ef0f509ccd5ef924ee87ddc49.scope: Deactivated successfully.
Oct 02 11:51:51 compute-0 conmon[243523]: conmon 33b7298e43ef343087b4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-33b7298e43ef343087b48c60d14b5335a35a236ef0f509ccd5ef924ee87ddc49.scope/container/memory.events
Oct 02 11:51:51 compute-0 podman[243482]: 2025-10-02 11:51:51.669384286 +0000 UTC m=+0.960290495 container died 33b7298e43ef343087b48c60d14b5335a35a236ef0f509ccd5ef924ee87ddc49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 11:51:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-862ca977dcad9e90dabe6a4fb3bb6fc8bd5df6d8874e354722ec0dfc1d241178-merged.mount: Deactivated successfully.
Oct 02 11:51:51 compute-0 podman[243482]: 2025-10-02 11:51:51.758725229 +0000 UTC m=+1.049631438 container remove 33b7298e43ef343087b48c60d14b5335a35a236ef0f509ccd5ef924ee87ddc49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_elbakyan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:51:51 compute-0 systemd[1]: libpod-conmon-33b7298e43ef343087b48c60d14b5335a35a236ef0f509ccd5ef924ee87ddc49.scope: Deactivated successfully.
Oct 02 11:51:51 compute-0 sudo[243272]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:51 compute-0 sudo[243750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:51 compute-0 sudo[243750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:51 compute-0 sudo[243750]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:51 compute-0 sudo[243775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:51:51 compute-0 sudo[243775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:51 compute-0 sudo[243775]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:51 compute-0 python3.9[243737]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:51:52 compute-0 sudo[243735]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:52 compute-0 sudo[243800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:52 compute-0 sudo[243800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:52 compute-0 sudo[243800]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:52 compute-0 sudo[243832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 11:51:52 compute-0 sudo[243832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:52.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:52 compute-0 podman[244014]: 2025-10-02 11:51:52.42564273 +0000 UTC m=+0.036639580 container create ade586018ebf7848dcaee642e8d40e80a71673bdacc6652d00a67a502356320b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:51:52 compute-0 systemd[1]: Started libpod-conmon-ade586018ebf7848dcaee642e8d40e80a71673bdacc6652d00a67a502356320b.scope.
Oct 02 11:51:52 compute-0 sudo[244056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajjfomoqrlvtlbpjyelxzyveqyoioftg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405912.1275525-2993-28378614716881/AnsiballZ_systemd_service.py'
Oct 02 11:51:52 compute-0 sudo[244056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:51:52 compute-0 podman[244014]: 2025-10-02 11:51:52.505007869 +0000 UTC m=+0.116004719 container init ade586018ebf7848dcaee642e8d40e80a71673bdacc6652d00a67a502356320b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cartwright, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:51:52 compute-0 podman[244014]: 2025-10-02 11:51:52.410710664 +0000 UTC m=+0.021707514 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:51:52 compute-0 podman[244014]: 2025-10-02 11:51:52.511360074 +0000 UTC m=+0.122356924 container start ade586018ebf7848dcaee642e8d40e80a71673bdacc6652d00a67a502356320b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:51:52 compute-0 hungry_cartwright[244057]: 167 167
Oct 02 11:51:52 compute-0 systemd[1]: libpod-ade586018ebf7848dcaee642e8d40e80a71673bdacc6652d00a67a502356320b.scope: Deactivated successfully.
Oct 02 11:51:52 compute-0 podman[244014]: 2025-10-02 11:51:52.519962646 +0000 UTC m=+0.130959526 container attach ade586018ebf7848dcaee642e8d40e80a71673bdacc6652d00a67a502356320b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cartwright, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:51:52 compute-0 podman[244014]: 2025-10-02 11:51:52.521027322 +0000 UTC m=+0.132024172 container died ade586018ebf7848dcaee642e8d40e80a71673bdacc6652d00a67a502356320b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cartwright, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 11:51:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:52.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-86001142db880224293addaad53894e79b2f02f1027e4e9d4a2eff6d65bce731-merged.mount: Deactivated successfully.
Oct 02 11:51:52 compute-0 podman[244014]: 2025-10-02 11:51:52.56168958 +0000 UTC m=+0.172686410 container remove ade586018ebf7848dcaee642e8d40e80a71673bdacc6652d00a67a502356320b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cartwright, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:51:52 compute-0 systemd[1]: libpod-conmon-ade586018ebf7848dcaee642e8d40e80a71673bdacc6652d00a67a502356320b.scope: Deactivated successfully.
Oct 02 11:51:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:52 compute-0 podman[244082]: 2025-10-02 11:51:52.722484508 +0000 UTC m=+0.036826095 container create 4825af1a89a78df80ce86453f07257f2c12deb9c6f9ee779178dae5e9b995807 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_herschel, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:51:52 compute-0 systemd[1]: Started libpod-conmon-4825af1a89a78df80ce86453f07257f2c12deb9c6f9ee779178dae5e9b995807.scope.
Oct 02 11:51:52 compute-0 python3.9[244061]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:51:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:51:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0426729c38191bd3f349f8e065869360540171c190f9dc56d0522457108c924/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:51:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0426729c38191bd3f349f8e065869360540171c190f9dc56d0522457108c924/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:51:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0426729c38191bd3f349f8e065869360540171c190f9dc56d0522457108c924/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:51:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0426729c38191bd3f349f8e065869360540171c190f9dc56d0522457108c924/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:51:52 compute-0 podman[244082]: 2025-10-02 11:51:52.7959333 +0000 UTC m=+0.110274897 container init 4825af1a89a78df80ce86453f07257f2c12deb9c6f9ee779178dae5e9b995807 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_herschel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 11:51:52 compute-0 podman[244082]: 2025-10-02 11:51:52.706619398 +0000 UTC m=+0.020961015 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:51:52 compute-0 sudo[244056]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:52 compute-0 podman[244082]: 2025-10-02 11:51:52.81262262 +0000 UTC m=+0.126964217 container start 4825af1a89a78df80ce86453f07257f2c12deb9c6f9ee779178dae5e9b995807 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:51:52 compute-0 podman[244082]: 2025-10-02 11:51:52.815933962 +0000 UTC m=+0.130275649 container attach 4825af1a89a78df80ce86453f07257f2c12deb9c6f9ee779178dae5e9b995807 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 11:51:53 compute-0 sudo[244259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imfngwwcxeqdwwxgfrnhkugnhhfeezzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405913.2293904-3170-49045361651404/AnsiballZ_file.py'
Oct 02 11:51:53 compute-0 sudo[244259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:53 compute-0 affectionate_herschel[244099]: {
Oct 02 11:51:53 compute-0 affectionate_herschel[244099]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 11:51:53 compute-0 affectionate_herschel[244099]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:51:53 compute-0 affectionate_herschel[244099]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:51:53 compute-0 affectionate_herschel[244099]:         "osd_id": 1,
Oct 02 11:51:53 compute-0 affectionate_herschel[244099]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:51:53 compute-0 affectionate_herschel[244099]:         "type": "bluestore"
Oct 02 11:51:53 compute-0 affectionate_herschel[244099]:     }
Oct 02 11:51:53 compute-0 affectionate_herschel[244099]: }
Oct 02 11:51:53 compute-0 systemd[1]: libpod-4825af1a89a78df80ce86453f07257f2c12deb9c6f9ee779178dae5e9b995807.scope: Deactivated successfully.
Oct 02 11:51:53 compute-0 podman[244082]: 2025-10-02 11:51:53.668730306 +0000 UTC m=+0.983071903 container died 4825af1a89a78df80ce86453f07257f2c12deb9c6f9ee779178dae5e9b995807 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_herschel, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:51:53 compute-0 python3.9[244261]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:53 compute-0 sudo[244259]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0426729c38191bd3f349f8e065869360540171c190f9dc56d0522457108c924-merged.mount: Deactivated successfully.
Oct 02 11:51:53 compute-0 podman[244082]: 2025-10-02 11:51:53.782988631 +0000 UTC m=+1.097330218 container remove 4825af1a89a78df80ce86453f07257f2c12deb9c6f9ee779178dae5e9b995807 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 11:51:53 compute-0 systemd[1]: libpod-conmon-4825af1a89a78df80ce86453f07257f2c12deb9c6f9ee779178dae5e9b995807.scope: Deactivated successfully.
Oct 02 11:51:53 compute-0 ceph-mon[73607]: pgmap v724: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:53 compute-0 sudo[243832]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:51:53 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:51:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:51:53 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:51:53 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev cf0fd423-8d9a-443a-85ce-e3a087853daa does not exist
Oct 02 11:51:53 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 62369258-c2cb-4ab2-bdfe-1c15e5044135 does not exist
Oct 02 11:51:53 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 26c6e0ae-d8c1-4352-9ec9-bf47a0e3cf52 does not exist
Oct 02 11:51:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:51:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:51:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:51:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:51:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:51:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:51:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:51:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:51:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:51:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:51:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:51:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:51:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:51:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:51:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:51:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:51:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 11:51:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:51:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:51:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:51:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:51:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:51:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:51:53 compute-0 sudo[244359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:53 compute-0 sudo[244359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:53 compute-0 sudo[244359]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:53 compute-0 sudo[244405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:51:53 compute-0 sudo[244405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:53 compute-0 sudo[244405]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:54 compute-0 sudo[244485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klrvfpsxichnnmyzmlerubziozedgrtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405913.8314414-3170-187342288759116/AnsiballZ_file.py'
Oct 02 11:51:54 compute-0 sudo[244485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:54.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:54 compute-0 python3.9[244487]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:54 compute-0 sudo[244485]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:54.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:54 compute-0 sudo[244638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enbelxcfgmxtgbgnsqqneyyozbxcelrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405914.4510374-3170-257769251952829/AnsiballZ_file.py'
Oct 02 11:51:54 compute-0 sudo[244638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:54 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:51:54 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:51:54 compute-0 ceph-mon[73607]: pgmap v725: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:55 compute-0 python3.9[244640]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:55 compute-0 sudo[244638]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:55 compute-0 sudo[244790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flzgricmwftzonoqcehlqaqmtbarncnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405915.1864028-3170-217479169051651/AnsiballZ_file.py'
Oct 02 11:51:55 compute-0 sudo[244790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:55 compute-0 python3.9[244792]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:55 compute-0 sudo[244790]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:56 compute-0 sudo[244942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbgxjyzgilwepempovjfapwtmupwtzxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405915.802131-3170-151725225054851/AnsiballZ_file.py'
Oct 02 11:51:56 compute-0 sudo[244942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:56.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:56 compute-0 python3.9[244944]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:56 compute-0 sudo[244942]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:56.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:56 compute-0 sudo[245095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcvysvqpheppcozqwzpbsjryyaiqfaun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405916.418075-3170-220753091811040/AnsiballZ_file.py'
Oct 02 11:51:56 compute-0 sudo[245095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:56 compute-0 python3.9[245097]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:56 compute-0 sudo[245095]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:57 compute-0 sudo[245247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffrlqocmaphfkpdcnhpqqijbjchrzylp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405917.043817-3170-19553339673501/AnsiballZ_file.py'
Oct 02 11:51:57 compute-0 sudo[245247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:57 compute-0 python3.9[245249]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:57 compute-0 sudo[245247]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:57 compute-0 ceph-mon[73607]: pgmap v726: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:57 compute-0 sudo[245399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flqsddcxybjsqwkiiltthpqhqpqtfxxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405917.6503174-3170-99929176363260/AnsiballZ_file.py'
Oct 02 11:51:57 compute-0 sudo[245399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:58 compute-0 python3.9[245401]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:58 compute-0 sudo[245399]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:51:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:58.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:51:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:51:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:51:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:58.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:51:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:58 compute-0 ceph-mon[73607]: pgmap v727: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:59 compute-0 sudo[245552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgcanddaqgxwibzlknljslzlvhxggwhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405919.1892428-3341-195932184781482/AnsiballZ_file.py'
Oct 02 11:51:59 compute-0 sudo[245552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:59 compute-0 python3.9[245554]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:59 compute-0 sudo[245552]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:00.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:00 compute-0 sudo[245704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkgtrhbynwkydcdobmqtvxswkfjtuwjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405919.9513123-3341-255327439852953/AnsiballZ_file.py'
Oct 02 11:52:00 compute-0 sudo[245704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:00 compute-0 python3.9[245706]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:00 compute-0 sudo[245704]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:00.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:00 compute-0 sudo[245857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwpaqlttacuviahlwxfmpbttqqiruckh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405920.5524662-3341-162985844013032/AnsiballZ_file.py'
Oct 02 11:52:00 compute-0 sudo[245857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:01 compute-0 python3.9[245859]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:01 compute-0 sudo[245857]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:01 compute-0 sudo[246009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfeehyjugapeerrncwjmdgcgmtkzeqip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405921.2478354-3341-200870875235932/AnsiballZ_file.py'
Oct 02 11:52:01 compute-0 sudo[246009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:01 compute-0 python3.9[246011]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:01 compute-0 ceph-mon[73607]: pgmap v728: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:01 compute-0 sudo[246009]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:02.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:02 compute-0 sudo[246161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmwofeupwpzvdjvbhknrymxhvxizvgkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405921.903793-3341-56067426935501/AnsiballZ_file.py'
Oct 02 11:52:02 compute-0 sudo[246161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:02 compute-0 python3.9[246163]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:02 compute-0 sudo[246161]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:02.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:02 compute-0 sudo[246314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvfsjdbxvcazdopsopdxyktkduqviprr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405922.585433-3341-239248350362207/AnsiballZ_file.py'
Oct 02 11:52:02 compute-0 sudo[246314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:03 compute-0 python3.9[246316]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:03 compute-0 sudo[246314]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:03 compute-0 sudo[246466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llmglgcetoyzvafhtwdvdwidvrmyaxcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405923.15216-3341-152791427983109/AnsiballZ_file.py'
Oct 02 11:52:03 compute-0 sudo[246466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:03 compute-0 python3.9[246468]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:03 compute-0 sudo[246466]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:03 compute-0 ceph-mon[73607]: pgmap v729: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:04 compute-0 sudo[246625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wceaqtvrzjvmemnnvwpnyjvcgrhybzse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405923.7480009-3341-92678879845723/AnsiballZ_file.py'
Oct 02 11:52:04 compute-0 sudo[246625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:04 compute-0 podman[246592]: 2025-10-02 11:52:04.146613131 +0000 UTC m=+0.089213951 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 11:52:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:04.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:04 compute-0 python3.9[246631]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:04 compute-0 sudo[246625]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:04.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:04 compute-0 ceph-mon[73607]: pgmap v730: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:04 compute-0 sudo[246663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:52:04 compute-0 sudo[246663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:04 compute-0 sudo[246663]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:04 compute-0 sudo[246688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:52:04 compute-0 sudo[246688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:04 compute-0 sudo[246688]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:05 compute-0 sudo[246838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjcephojzxvnlfpjlgagvsgixufbszep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405925.3948123-3515-117022961747838/AnsiballZ_command.py'
Oct 02 11:52:05 compute-0 sudo[246838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:05 compute-0 python3.9[246840]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:52:05 compute-0 sudo[246838]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:06.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:52:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:06.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:52:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:07 compute-0 python3.9[246993]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 11:52:07 compute-0 sudo[247143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcaedcufiolqaoyxurndlakqwyewibut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405927.3522973-3569-140410304969496/AnsiballZ_systemd_service.py'
Oct 02 11:52:07 compute-0 sudo[247143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:07 compute-0 ceph-mon[73607]: pgmap v731: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:07 compute-0 python3.9[247145]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 11:52:07 compute-0 systemd[1]: Reloading.
Oct 02 11:52:08 compute-0 systemd-rc-local-generator[247173]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:52:08 compute-0 systemd-sysv-generator[247176]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:52:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:08.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:08 compute-0 sudo[247143]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:08.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:09 compute-0 sudo[247332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqokjyvolowtniejjrvuzsjaunqkowko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405928.6942906-3593-66274845429654/AnsiballZ_command.py'
Oct 02 11:52:09 compute-0 sudo[247332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:09 compute-0 python3.9[247334]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:52:09 compute-0 sudo[247332]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:09 compute-0 sudo[247485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwwbrdkfiwirsmzeoaqhefohugkmwskq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405929.4651952-3593-146304275508442/AnsiballZ_command.py'
Oct 02 11:52:09 compute-0 sudo[247485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:09 compute-0 ceph-mon[73607]: pgmap v732: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:09 compute-0 python3.9[247487]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:52:10 compute-0 sudo[247485]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:10.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:10 compute-0 sudo[247639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtrgswycrpmxdxqxhgwybknbausorxti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405930.1648812-3593-189041073248772/AnsiballZ_command.py'
Oct 02 11:52:10 compute-0 sudo[247639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:10.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:10 compute-0 python3.9[247641]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:52:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:10 compute-0 sudo[247639]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:10 compute-0 ceph-mon[73607]: pgmap v733: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:11 compute-0 sudo[247792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edhmifwoprwlyddeyhqbsxvrhjtqojyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405930.8027704-3593-113034186235250/AnsiballZ_command.py'
Oct 02 11:52:11 compute-0 sudo[247792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:11 compute-0 python3.9[247794]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:52:11 compute-0 sudo[247792]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:11 compute-0 sudo[247945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqpqomnaxrypozyjctpnmmlexxwetscy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405931.4460938-3593-245892017953948/AnsiballZ_command.py'
Oct 02 11:52:11 compute-0 sudo[247945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:11 compute-0 python3.9[247947]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:52:11 compute-0 sudo[247945]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:12.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:12 compute-0 sudo[248098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opsrlqeiwzzhefqzhyyxwfwjrwcdqchq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405932.0507793-3593-170048140196434/AnsiballZ_command.py'
Oct 02 11:52:12 compute-0 sudo[248098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:12 compute-0 python3.9[248101]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:52:12 compute-0 sudo[248098]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:12.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:52:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:52:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:52:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:52:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:52:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:52:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:12 compute-0 ceph-mon[73607]: pgmap v734: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:12 compute-0 sudo[248252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcxmqiwvtqhyhjvcgvdpcedqhothyjgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405932.6473043-3593-107688897761829/AnsiballZ_command.py'
Oct 02 11:52:12 compute-0 sudo[248252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:13 compute-0 python3.9[248254]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:52:13 compute-0 sudo[248252]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:13 compute-0 sudo[248405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iabcmdjyntsqweusbgfyvredwsldkqda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405933.2958539-3593-27924329726282/AnsiballZ_command.py'
Oct 02 11:52:13 compute-0 sudo[248405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:13 compute-0 python3.9[248407]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:52:13 compute-0 sudo[248405]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:14.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:14.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:15 compute-0 ceph-mon[73607]: pgmap v735: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:15 compute-0 podman[248436]: 2025-10-02 11:52:15.925565352 +0000 UTC m=+0.062971417 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:52:16 compute-0 sudo[248580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bglmvdtbfqflbmmzbbzekvlxfpxxjofn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405935.8761413-3800-235340224756270/AnsiballZ_file.py'
Oct 02 11:52:16 compute-0 sudo[248580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:16.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:16 compute-0 python3.9[248582]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:52:16 compute-0 sudo[248580]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:16 compute-0 podman[248584]: 2025-10-02 11:52:16.496494248 +0000 UTC m=+0.077515337 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 11:52:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:16.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:16 compute-0 sudo[248753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wktfmjwadkcjqiobrzdlekauesafgwln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405936.5715675-3800-180821380715573/AnsiballZ_file.py'
Oct 02 11:52:16 compute-0 sudo[248753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:17 compute-0 python3.9[248755]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:52:17 compute-0 sudo[248753]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:17 compute-0 sudo[248905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waigvdqfpmixeoojpzkzoqisiyzilibt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405937.1978679-3800-172968837649873/AnsiballZ_file.py'
Oct 02 11:52:17 compute-0 sudo[248905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:17 compute-0 python3.9[248907]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:52:17 compute-0 sudo[248905]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:17 compute-0 ceph-mon[73607]: pgmap v736: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:18.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:18 compute-0 sudo[249064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izdifpydaslgbeywvkhgenrvwujlgnrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405938.227589-3866-32137305031092/AnsiballZ_file.py'
Oct 02 11:52:18 compute-0 sudo[249064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:18.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:18 compute-0 podman[249033]: 2025-10-02 11:52:18.614408614 +0000 UTC m=+0.128283530 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 11:52:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:18 compute-0 python3.9[249072]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:52:18 compute-0 sudo[249064]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:18 compute-0 ceph-mon[73607]: pgmap v737: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:19 compute-0 sudo[249237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmfglnskkyulcutftaevxdokvwjaiheg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405938.9171336-3866-10947811741398/AnsiballZ_file.py'
Oct 02 11:52:19 compute-0 sudo[249237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:19 compute-0 python3.9[249239]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:52:19 compute-0 sudo[249237]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:19 compute-0 sudo[249389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcrjrbffxaybkyqedyjuzxzxrkrwgnri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405939.5659912-3866-53298899486331/AnsiballZ_file.py'
Oct 02 11:52:19 compute-0 sudo[249389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:19 compute-0 python3.9[249391]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:52:20 compute-0 sudo[249389]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:20.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:20 compute-0 sudo[249542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhblbywcqvzbwymccnuxjozxjiayglka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405940.1347816-3866-76604941414891/AnsiballZ_file.py'
Oct 02 11:52:20 compute-0 sudo[249542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:20.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:20 compute-0 python3.9[249544]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:52:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:20 compute-0 sudo[249542]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:21 compute-0 sudo[249694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sadpxtjoqaizrvmtrytmipcsszwpxejj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405940.8105588-3866-136174785386295/AnsiballZ_file.py'
Oct 02 11:52:21 compute-0 sudo[249694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:21 compute-0 python3.9[249696]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:52:21 compute-0 sudo[249694]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:21 compute-0 sudo[249846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtwudalijvysktgvivxhgdgoaysngruk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405941.4943619-3866-47639378455539/AnsiballZ_file.py'
Oct 02 11:52:21 compute-0 sudo[249846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:21 compute-0 ceph-mon[73607]: pgmap v738: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:21 compute-0 python3.9[249848]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:52:21 compute-0 sudo[249846]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:22.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:22.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:22 compute-0 sudo[249999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjxuxhgbugrivufihiqoqqaczpwlxsqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405942.118856-3866-270529376425593/AnsiballZ_file.py'
Oct 02 11:52:22 compute-0 sudo[249999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:22 compute-0 python3.9[250001]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:52:22 compute-0 sudo[249999]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:23 compute-0 ceph-mon[73607]: pgmap v739: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:23 compute-0 sudo[250151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twfulwlzasqahuhkyoffbopbgotldbvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405943.1527817-3866-279837681823998/AnsiballZ_file.py'
Oct 02 11:52:23 compute-0 sudo[250151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:23 compute-0 python3.9[250153]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:52:23 compute-0 sudo[250151]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:24 compute-0 sudo[250303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojwijqrtmosdtxgdqqdvykricyxxikmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405943.8204818-3866-79543571469365/AnsiballZ_file.py'
Oct 02 11:52:24 compute-0 sudo[250303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:24.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:24 compute-0 python3.9[250305]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:52:24 compute-0 sudo[250303]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:24.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:25 compute-0 sudo[250331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:52:25 compute-0 sudo[250331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:25 compute-0 sudo[250331]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:25 compute-0 sudo[250356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:52:25 compute-0 sudo[250356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:25 compute-0 sudo[250356]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:25 compute-0 ceph-mon[73607]: pgmap v740: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:26.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:26.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:52:26.909 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:52:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:52:26.909 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:52:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:52:26.909 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:52:26 compute-0 ceph-mon[73607]: pgmap v741: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:28.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:28.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:29 compute-0 ceph-mon[73607]: pgmap v742: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:30.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:30.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:31 compute-0 sudo[250509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uioisxexdaiojmheyxuhrplkjufrwjgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405950.7133563-4233-222935110163760/AnsiballZ_getent.py'
Oct 02 11:52:31 compute-0 sudo[250509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:31 compute-0 python3.9[250511]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct 02 11:52:31 compute-0 sudo[250509]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:31 compute-0 ceph-mon[73607]: pgmap v743: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:32 compute-0 sudo[250662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaqkxpxvntwjzpkexenbnimtvenrbnnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405951.6555638-4257-243152709416822/AnsiballZ_group.py'
Oct 02 11:52:32 compute-0 sudo[250662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:32.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:32 compute-0 python3.9[250664]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 02 11:52:32 compute-0 groupadd[250666]: group added to /etc/group: name=nova, GID=42436
Oct 02 11:52:32 compute-0 groupadd[250666]: group added to /etc/gshadow: name=nova
Oct 02 11:52:32 compute-0 groupadd[250666]: new group: name=nova, GID=42436
Oct 02 11:52:32 compute-0 sudo[250662]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:32.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:32 compute-0 ceph-mon[73607]: pgmap v744: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:33 compute-0 sudo[250821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tajqwghwknnhkjpjmwjhofydozqlzzmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405952.6247516-4281-115357575245081/AnsiballZ_user.py'
Oct 02 11:52:33 compute-0 sudo[250821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:33 compute-0 python3.9[250823]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 02 11:52:33 compute-0 useradd[250825]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Oct 02 11:52:33 compute-0 useradd[250825]: add 'nova' to group 'libvirt'
Oct 02 11:52:33 compute-0 useradd[250825]: add 'nova' to shadow group 'libvirt'
Oct 02 11:52:33 compute-0 sudo[250821]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:34.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:34 compute-0 sshd-session[250857]: Accepted publickey for zuul from 192.168.122.30 port 59206 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 11:52:34 compute-0 systemd-logind[789]: New session 53 of user zuul.
Oct 02 11:52:34 compute-0 systemd[1]: Started Session 53 of User zuul.
Oct 02 11:52:34 compute-0 sshd-session[250857]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:52:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:34.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:34 compute-0 podman[250859]: 2025-10-02 11:52:34.610022704 +0000 UTC m=+0.070663717 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:52:34 compute-0 sshd-session[250861]: Received disconnect from 192.168.122.30 port 59206:11: disconnected by user
Oct 02 11:52:34 compute-0 sshd-session[250861]: Disconnected from user zuul 192.168.122.30 port 59206
Oct 02 11:52:34 compute-0 sshd-session[250857]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:52:34 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Oct 02 11:52:34 compute-0 systemd-logind[789]: Session 53 logged out. Waiting for processes to exit.
Oct 02 11:52:34 compute-0 systemd-logind[789]: Removed session 53.
Oct 02 11:52:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:35 compute-0 python3.9[251030]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:35 compute-0 ceph-mon[73607]: pgmap v745: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:36 compute-0 python3.9[251151]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405954.9063418-4356-156394448751354/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:52:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:36.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:36.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:36 compute-0 python3.9[251302]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:37 compute-0 python3.9[251378]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:52:37 compute-0 ceph-mon[73607]: pgmap v746: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:37 compute-0 python3.9[251528]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:38.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:38 compute-0 python3.9[251649]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405957.3796659-4356-46170406705550/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:52:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:38.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:39 compute-0 python3.9[251800]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:39 compute-0 python3.9[251921]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405958.4555392-4356-232003612204821/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:52:39 compute-0 ceph-mon[73607]: pgmap v747: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:40 compute-0 python3.9[252071]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:40.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:40.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:40 compute-0 python3.9[252193]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405959.697846-4356-219320562371048/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:52:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:41 compute-0 ceph-mon[73607]: pgmap v748: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:41 compute-0 sudo[252343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irtreexywyykdrzbkakftrizcekirktd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405961.5903332-4563-276618585674313/AnsiballZ_file.py'
Oct 02 11:52:41 compute-0 sudo[252343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:42 compute-0 python3.9[252345]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:42 compute-0 sudo[252343]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:52:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:42.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:52:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_11:52:42
Oct 02 11:52:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:52:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 11:52:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'images', 'volumes', 'backups', 'vms', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta']
Oct 02 11:52:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:52:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:42.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:52:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:52:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:52:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:52:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:52:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:52:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:42 compute-0 sudo[252496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdjljtgooynefyuhldrqrqiaqbyubbmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405962.4515169-4587-114989204434397/AnsiballZ_copy.py'
Oct 02 11:52:42 compute-0 sudo[252496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:52:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:52:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:52:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:52:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:52:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:52:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:52:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:52:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:52:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:52:42 compute-0 python3.9[252498]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:42 compute-0 sudo[252496]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:43 compute-0 sudo[252648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llvpcdrwbynzbzzqlssvuqstnkwxhzgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405963.2151365-4611-191494614438120/AnsiballZ_stat.py'
Oct 02 11:52:43 compute-0 sudo[252648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:43 compute-0 python3.9[252650]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:52:43 compute-0 sudo[252648]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:43 compute-0 ceph-mon[73607]: pgmap v749: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:44.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:44 compute-0 sudo[252800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzwbwvjaxxcppeegjahzptcgrhjfphcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405964.020094-4635-93076906833798/AnsiballZ_stat.py'
Oct 02 11:52:44 compute-0 sudo[252800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:44 compute-0 python3.9[252802]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:44 compute-0 sudo[252800]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:44.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:44 compute-0 sudo[252924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pntrvtbkawodbsxoojgcucleaiiptpzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405964.020094-4635-93076906833798/AnsiballZ_copy.py'
Oct 02 11:52:44 compute-0 sudo[252924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:44 compute-0 ceph-mon[73607]: pgmap v750: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:44 compute-0 python3.9[252926]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1759405964.020094-4635-93076906833798/.source _original_basename=.574oqq9r follow=False checksum=9d4c1b1ed5e9e98f41d932752d86aa631f478b9e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct 02 11:52:45 compute-0 sudo[252924]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:45 compute-0 sudo[252953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:52:45 compute-0 sudo[252953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:45 compute-0 sudo[252953]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:45 compute-0 sudo[252978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:52:45 compute-0 sudo[252978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:45 compute-0 sudo[252978]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:46 compute-0 python3.9[253128]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:52:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:46.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:46.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:46 compute-0 podman[253255]: 2025-10-02 11:52:46.682797883 +0000 UTC m=+0.058778533 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:52:46 compute-0 podman[253256]: 2025-10-02 11:52:46.699998818 +0000 UTC m=+0.074632785 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 11:52:46 compute-0 python3.9[253310]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:47 compute-0 python3.9[253441]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405966.402175-4713-162481611225943/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=f022386746472553146d29f689b545df70fa8a60 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:52:47 compute-0 ceph-mon[73607]: pgmap v751: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:48 compute-0 python3.9[253591]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:48.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:48.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:48 compute-0 python3.9[253713]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405967.7492242-4758-119465399940635/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:52:48 compute-0 podman[253714]: 2025-10-02 11:52:48.889618766 +0000 UTC m=+0.075460885 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct 02 11:52:49 compute-0 ceph-mon[73607]: pgmap v752: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:49 compute-0 sudo[253888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcccjbjadckieoahipxerkmcztovgcdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405969.4091408-4809-253423115813858/AnsiballZ_container_config_data.py'
Oct 02 11:52:49 compute-0 sudo[253888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:49 compute-0 python3.9[253890]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct 02 11:52:49 compute-0 sudo[253888]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:50.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:50.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:50 compute-0 sudo[254041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osqksaebeqtamjwnxdqtzhxysehbmpnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405970.3710475-4836-105884111364884/AnsiballZ_container_config_hash.py'
Oct 02 11:52:50 compute-0 sudo[254041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:50 compute-0 python3.9[254043]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 11:52:50 compute-0 sudo[254041]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:51 compute-0 sudo[254193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvsgotfieopnlaslhaoaocpshiifzjns ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759405971.490149-4866-115686159251242/AnsiballZ_edpm_container_manage.py'
Oct 02 11:52:51 compute-0 sudo[254193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:51 compute-0 ceph-mon[73607]: pgmap v753: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:51 compute-0 python3[254195]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 11:52:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:52.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:52.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:53 compute-0 ceph-mon[73607]: pgmap v754: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:52:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:52:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:52:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:52:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:52:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:52:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:52:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:52:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:52:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:52:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:52:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:52:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:52:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:52:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:52:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:52:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 11:52:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:52:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:52:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:52:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:52:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:52:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:52:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:54.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:54 compute-0 sudo[254251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:52:54 compute-0 sudo[254251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:54 compute-0 sudo[254251]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:54 compute-0 sudo[254276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:52:54 compute-0 sudo[254276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:54 compute-0 sudo[254276]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:54 compute-0 sudo[254301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:52:54 compute-0 sudo[254301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:54 compute-0 sudo[254301]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:54 compute-0 sudo[254326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 11:52:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:54.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:54 compute-0 sudo[254326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:54 compute-0 ceph-mon[73607]: pgmap v755: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:56.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:56.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:57 compute-0 ceph-mon[73607]: pgmap v756: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:58.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:52:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:58.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 11:52:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 6236 writes, 25K keys, 6236 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6236 writes, 1117 syncs, 5.58 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 509 writes, 770 keys, 509 commit groups, 1.0 writes per commit group, ingest: 0.25 MB, 0.00 MB/s
                                           Interval WAL: 509 writes, 252 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3c430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3c430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3c430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 02 11:52:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:00.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:00 compute-0 ceph-mon[73607]: pgmap v757: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:00.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:02.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:02.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:02 compute-0 sudo[254326]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:53:02 compute-0 podman[254208]: 2025-10-02 11:53:02.641703915 +0000 UTC m=+10.587180975 image pull e36f31143f26011980def9337d375f895bea59b742a3a2b372b996aa8ad58eba quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 02 11:53:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:02 compute-0 ceph-mon[73607]: pgmap v758: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:02 compute-0 podman[254418]: 2025-10-02 11:53:02.815862638 +0000 UTC m=+0.057993914 container create 2879575e524e25d8488920d42b0bcac3c847dde96f09afd24c6cbeecb514f01e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=nova_compute_init, tcib_managed=true)
Oct 02 11:53:02 compute-0 podman[254418]: 2025-10-02 11:53:02.779873669 +0000 UTC m=+0.022004965 image pull e36f31143f26011980def9337d375f895bea59b742a3a2b372b996aa8ad58eba quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 02 11:53:02 compute-0 python3[254195]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct 02 11:53:02 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:53:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:53:02 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:53:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:53:02 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:53:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:53:02 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:53:02 compute-0 sudo[254444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:02 compute-0 sudo[254444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:02 compute-0 sudo[254444]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:02 compute-0 sudo[254193]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:03 compute-0 sudo[254482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:53:03 compute-0 sudo[254482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:03 compute-0 sudo[254482]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:03 compute-0 sudo[254531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:03 compute-0 sudo[254531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:03 compute-0 sudo[254531]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:03 compute-0 sudo[254556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:53:03 compute-0 sudo[254556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:03 compute-0 sudo[254556]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:53:03 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:53:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:53:03 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:53:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:53:03 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:53:03 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 8fe4b8b8-5a8c-41f2-ab16-0c1f80ed0b20 does not exist
Oct 02 11:53:03 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 82b4bd4e-e515-4c35-846a-deabb38c671b does not exist
Oct 02 11:53:03 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 8ebc9bc6-5c46-4ae7-8883-b12b1b61d2a8 does not exist
Oct 02 11:53:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:53:03 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:53:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:53:03 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:53:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:53:03 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:53:03 compute-0 sudo[254737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdzujwwwsrhclqbvvuloohlrjzcklstm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405983.5029628-4890-146279885994537/AnsiballZ_stat.py'
Oct 02 11:53:03 compute-0 sudo[254737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:03 compute-0 sudo[254739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:03 compute-0 ceph-mon[73607]: pgmap v759: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:53:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:53:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:53:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:53:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:53:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:53:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:53:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:53:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:53:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:53:03 compute-0 sudo[254739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:03 compute-0 sudo[254739]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:03 compute-0 sudo[254765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:53:03 compute-0 sudo[254765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:03 compute-0 sudo[254765]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:03 compute-0 sudo[254790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:03 compute-0 sudo[254790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:03 compute-0 sudo[254790]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:03 compute-0 python3.9[254743]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:53:03 compute-0 sudo[254815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:53:03 compute-0 sudo[254737]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:03 compute-0 sudo[254815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:53:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:04.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:53:04 compute-0 podman[254907]: 2025-10-02 11:53:04.301687058 +0000 UTC m=+0.039467646 container create 6ece2a1636217aff20a76892ef17218eb439d88f34c711a708ce0f723cb9a829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:53:04 compute-0 systemd[1]: Started libpod-conmon-6ece2a1636217aff20a76892ef17218eb439d88f34c711a708ce0f723cb9a829.scope.
Oct 02 11:53:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:53:04 compute-0 podman[254907]: 2025-10-02 11:53:04.285570729 +0000 UTC m=+0.023351347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:53:04 compute-0 podman[254907]: 2025-10-02 11:53:04.438532638 +0000 UTC m=+0.176313226 container init 6ece2a1636217aff20a76892ef17218eb439d88f34c711a708ce0f723cb9a829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_raman, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 11:53:04 compute-0 podman[254907]: 2025-10-02 11:53:04.444603409 +0000 UTC m=+0.182383997 container start 6ece2a1636217aff20a76892ef17218eb439d88f34c711a708ce0f723cb9a829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 11:53:04 compute-0 nostalgic_raman[254924]: 167 167
Oct 02 11:53:04 compute-0 systemd[1]: libpod-6ece2a1636217aff20a76892ef17218eb439d88f34c711a708ce0f723cb9a829.scope: Deactivated successfully.
Oct 02 11:53:04 compute-0 podman[254907]: 2025-10-02 11:53:04.491515098 +0000 UTC m=+0.229295706 container attach 6ece2a1636217aff20a76892ef17218eb439d88f34c711a708ce0f723cb9a829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 11:53:04 compute-0 podman[254907]: 2025-10-02 11:53:04.492169934 +0000 UTC m=+0.229950552 container died 6ece2a1636217aff20a76892ef17218eb439d88f34c711a708ce0f723cb9a829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_raman, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 11:53:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:04.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-cef0e032a71416f0893a30848342d3c3fe0c1e699a57982c31fcb216fb64b07b-merged.mount: Deactivated successfully.
Oct 02 11:53:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:04 compute-0 podman[254907]: 2025-10-02 11:53:04.81822201 +0000 UTC m=+0.556002588 container remove 6ece2a1636217aff20a76892ef17218eb439d88f34c711a708ce0f723cb9a829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_raman, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:53:04 compute-0 systemd[1]: libpod-conmon-6ece2a1636217aff20a76892ef17218eb439d88f34c711a708ce0f723cb9a829.scope: Deactivated successfully.
Oct 02 11:53:04 compute-0 podman[254943]: 2025-10-02 11:53:04.883620665 +0000 UTC m=+0.232576617 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 11:53:05 compute-0 podman[255012]: 2025-10-02 11:53:05.027943221 +0000 UTC m=+0.090416904 container create 027e35a67adaa083e013de67a2d9eb1768e402b1656015b6795f42234ef59c26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_dewdney, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 11:53:05 compute-0 podman[255012]: 2025-10-02 11:53:04.9659646 +0000 UTC m=+0.028438333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:53:05 compute-0 systemd[1]: Started libpod-conmon-027e35a67adaa083e013de67a2d9eb1768e402b1656015b6795f42234ef59c26.scope.
Oct 02 11:53:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:53:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a08dbca2e74e62e53e5e4f8d5b750b27739303b0aabce57b06e370480f01d16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a08dbca2e74e62e53e5e4f8d5b750b27739303b0aabce57b06e370480f01d16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a08dbca2e74e62e53e5e4f8d5b750b27739303b0aabce57b06e370480f01d16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a08dbca2e74e62e53e5e4f8d5b750b27739303b0aabce57b06e370480f01d16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a08dbca2e74e62e53e5e4f8d5b750b27739303b0aabce57b06e370480f01d16/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:05 compute-0 podman[255012]: 2025-10-02 11:53:05.123291758 +0000 UTC m=+0.185765491 container init 027e35a67adaa083e013de67a2d9eb1768e402b1656015b6795f42234ef59c26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:53:05 compute-0 podman[255012]: 2025-10-02 11:53:05.133785997 +0000 UTC m=+0.196259700 container start 027e35a67adaa083e013de67a2d9eb1768e402b1656015b6795f42234ef59c26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_dewdney, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 11:53:05 compute-0 podman[255012]: 2025-10-02 11:53:05.175264452 +0000 UTC m=+0.237738145 container attach 027e35a67adaa083e013de67a2d9eb1768e402b1656015b6795f42234ef59c26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_dewdney, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 11:53:05 compute-0 sudo[255116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgjcpqnzvtkqoevtqheatvzodhsycebu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405984.8830137-4926-111301295691431/AnsiballZ_container_config_data.py'
Oct 02 11:53:05 compute-0 sudo[255116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:05 compute-0 sudo[255119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:05 compute-0 sudo[255119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:05 compute-0 sudo[255119]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:05 compute-0 python3.9[255118]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct 02 11:53:05 compute-0 sudo[255116]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:05 compute-0 ceph-mon[73607]: pgmap v760: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:05 compute-0 sudo[255144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:05 compute-0 sudo[255144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:05 compute-0 sudo[255144]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:06 compute-0 magical_dewdney[255082]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:53:06 compute-0 magical_dewdney[255082]: --> relative data size: 1.0
Oct 02 11:53:06 compute-0 magical_dewdney[255082]: --> All data devices are unavailable
Oct 02 11:53:06 compute-0 systemd[1]: libpod-027e35a67adaa083e013de67a2d9eb1768e402b1656015b6795f42234ef59c26.scope: Deactivated successfully.
Oct 02 11:53:06 compute-0 podman[255012]: 2025-10-02 11:53:06.039133615 +0000 UTC m=+1.101607318 container died 027e35a67adaa083e013de67a2d9eb1768e402b1656015b6795f42234ef59c26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 11:53:06 compute-0 sudo[255329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zptmgxlthsukturyjzkwrtpprnyqlmvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405985.778621-4953-273071009677526/AnsiballZ_container_config_hash.py'
Oct 02 11:53:06 compute-0 sudo[255329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:06.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:06 compute-0 python3.9[255336]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 11:53:06 compute-0 sudo[255329]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a08dbca2e74e62e53e5e4f8d5b750b27739303b0aabce57b06e370480f01d16-merged.mount: Deactivated successfully.
Oct 02 11:53:06 compute-0 podman[255012]: 2025-10-02 11:53:06.459422149 +0000 UTC m=+1.521895842 container remove 027e35a67adaa083e013de67a2d9eb1768e402b1656015b6795f42234ef59c26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_dewdney, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:53:06 compute-0 sudo[254815]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:06 compute-0 systemd[1]: libpod-conmon-027e35a67adaa083e013de67a2d9eb1768e402b1656015b6795f42234ef59c26.scope: Deactivated successfully.
Oct 02 11:53:06 compute-0 sudo[255369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:06 compute-0 sudo[255369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:06 compute-0 sudo[255369]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:06 compute-0 sudo[255394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:53:06 compute-0 sudo[255394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:06 compute-0 sudo[255394]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:06.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:06 compute-0 sudo[255419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:06 compute-0 sudo[255419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:06 compute-0 sudo[255419]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:06 compute-0 sudo[255450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 11:53:06 compute-0 sudo[255450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:06 compute-0 ceph-mgr[73901]: [devicehealth INFO root] Check health
Oct 02 11:53:06 compute-0 sudo[255630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaxqyhibpmwocbeyktnynvruwxdiykhv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759405986.7095168-4983-55313092926918/AnsiballZ_edpm_container_manage.py'
Oct 02 11:53:06 compute-0 sudo[255630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:07 compute-0 podman[255637]: 2025-10-02 11:53:07.052066581 +0000 UTC m=+0.038318788 container create fe9795a6d3b500da3b28d214d140cbf128b9c83be8fde0348db26d6f93672a54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_yalow, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 11:53:07 compute-0 systemd[1]: Started libpod-conmon-fe9795a6d3b500da3b28d214d140cbf128b9c83be8fde0348db26d6f93672a54.scope.
Oct 02 11:53:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:53:07 compute-0 podman[255637]: 2025-10-02 11:53:07.129622158 +0000 UTC m=+0.115874375 container init fe9795a6d3b500da3b28d214d140cbf128b9c83be8fde0348db26d6f93672a54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_yalow, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 11:53:07 compute-0 podman[255637]: 2025-10-02 11:53:07.035930043 +0000 UTC m=+0.022182360 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:53:07 compute-0 podman[255637]: 2025-10-02 11:53:07.137627335 +0000 UTC m=+0.123879542 container start fe9795a6d3b500da3b28d214d140cbf128b9c83be8fde0348db26d6f93672a54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_yalow, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 11:53:07 compute-0 podman[255637]: 2025-10-02 11:53:07.140850555 +0000 UTC m=+0.127102782 container attach fe9795a6d3b500da3b28d214d140cbf128b9c83be8fde0348db26d6f93672a54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_yalow, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 11:53:07 compute-0 dazzling_yalow[255653]: 167 167
Oct 02 11:53:07 compute-0 systemd[1]: libpod-fe9795a6d3b500da3b28d214d140cbf128b9c83be8fde0348db26d6f93672a54.scope: Deactivated successfully.
Oct 02 11:53:07 compute-0 podman[255637]: 2025-10-02 11:53:07.145357566 +0000 UTC m=+0.131609783 container died fe9795a6d3b500da3b28d214d140cbf128b9c83be8fde0348db26d6f93672a54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_yalow, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 11:53:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-73914f770e4e1c5a9ba1339ade972791a6d971fb125c366dd953e074ed0f4027-merged.mount: Deactivated successfully.
Oct 02 11:53:07 compute-0 podman[255637]: 2025-10-02 11:53:07.177840198 +0000 UTC m=+0.164092405 container remove fe9795a6d3b500da3b28d214d140cbf128b9c83be8fde0348db26d6f93672a54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_yalow, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:53:07 compute-0 systemd[1]: libpod-conmon-fe9795a6d3b500da3b28d214d140cbf128b9c83be8fde0348db26d6f93672a54.scope: Deactivated successfully.
Oct 02 11:53:07 compute-0 python3[255636]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 11:53:07 compute-0 podman[255690]: 2025-10-02 11:53:07.351678583 +0000 UTC m=+0.054920467 container create df8feb973703109867d398b56bb697c39d4ff8dff953842155e044da523945d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:53:07 compute-0 systemd[1]: Started libpod-conmon-df8feb973703109867d398b56bb697c39d4ff8dff953842155e044da523945d2.scope.
Oct 02 11:53:07 compute-0 podman[255725]: 2025-10-02 11:53:07.40820518 +0000 UTC m=+0.049223048 container create 813a58f3d2838c54de5503dcf1d12c543a24db20d36517da0b52796545261ce0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, container_name=nova_compute)
Oct 02 11:53:07 compute-0 podman[255725]: 2025-10-02 11:53:07.381710276 +0000 UTC m=+0.022728164 image pull e36f31143f26011980def9337d375f895bea59b742a3a2b372b996aa8ad58eba quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 02 11:53:07 compute-0 python3[255636]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Oct 02 11:53:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:53:07 compute-0 podman[255690]: 2025-10-02 11:53:07.328197863 +0000 UTC m=+0.031439777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bff03dbdd016158ecce1d190365a7564220e9a9670d31d3388e8ae2ec8c6f9ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bff03dbdd016158ecce1d190365a7564220e9a9670d31d3388e8ae2ec8c6f9ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bff03dbdd016158ecce1d190365a7564220e9a9670d31d3388e8ae2ec8c6f9ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bff03dbdd016158ecce1d190365a7564220e9a9670d31d3388e8ae2ec8c6f9ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:07 compute-0 podman[255690]: 2025-10-02 11:53:07.43169492 +0000 UTC m=+0.134936814 container init df8feb973703109867d398b56bb697c39d4ff8dff953842155e044da523945d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hypatia, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:53:07 compute-0 podman[255690]: 2025-10-02 11:53:07.441149734 +0000 UTC m=+0.144391618 container start df8feb973703109867d398b56bb697c39d4ff8dff953842155e044da523945d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hypatia, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 11:53:07 compute-0 podman[255690]: 2025-10-02 11:53:07.444901727 +0000 UTC m=+0.148143691 container attach df8feb973703109867d398b56bb697c39d4ff8dff953842155e044da523945d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hypatia, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct 02 11:53:07 compute-0 sudo[255630]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:07 compute-0 ceph-mon[73607]: pgmap v761: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]: {
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]:     "1": [
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]:         {
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]:             "devices": [
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]:                 "/dev/loop3"
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]:             ],
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]:             "lv_name": "ceph_lv0",
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]:             "lv_size": "7511998464",
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]:             "name": "ceph_lv0",
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]:             "tags": {
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]:                 "ceph.cluster_name": "ceph",
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]:                 "ceph.crush_device_class": "",
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]:                 "ceph.encrypted": "0",
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]:                 "ceph.osd_id": "1",
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]:                 "ceph.type": "block",
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]:                 "ceph.vdo": "0"
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]:             },
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]:             "type": "block",
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]:             "vg_name": "ceph_vg0"
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]:         }
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]:     ]
Oct 02 11:53:08 compute-0 reverent_hypatia[255741]: }
Oct 02 11:53:08 compute-0 systemd[1]: libpod-df8feb973703109867d398b56bb697c39d4ff8dff953842155e044da523945d2.scope: Deactivated successfully.
Oct 02 11:53:08 compute-0 podman[255690]: 2025-10-02 11:53:08.182116041 +0000 UTC m=+0.885357915 container died df8feb973703109867d398b56bb697c39d4ff8dff953842155e044da523945d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hypatia, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 11:53:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-bff03dbdd016158ecce1d190365a7564220e9a9670d31d3388e8ae2ec8c6f9ef-merged.mount: Deactivated successfully.
Oct 02 11:53:08 compute-0 podman[255690]: 2025-10-02 11:53:08.250301435 +0000 UTC m=+0.953543309 container remove df8feb973703109867d398b56bb697c39d4ff8dff953842155e044da523945d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 11:53:08 compute-0 sudo[255937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glqhjnihdgvcwtowkgpbtjslonsgwmcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405987.9883215-5007-248590449981155/AnsiballZ_stat.py'
Oct 02 11:53:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:08 compute-0 systemd[1]: libpod-conmon-df8feb973703109867d398b56bb697c39d4ff8dff953842155e044da523945d2.scope: Deactivated successfully.
Oct 02 11:53:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:08.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:08 compute-0 sudo[255937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:08 compute-0 sudo[255450]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:08 compute-0 sudo[255940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:08 compute-0 sudo[255940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:08 compute-0 sudo[255940]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:08 compute-0 sudo[255966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:53:08 compute-0 sudo[255966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:08 compute-0 sudo[255966]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:08 compute-0 sudo[255991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:08 compute-0 sudo[255991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:08 compute-0 sudo[255991]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:08 compute-0 python3.9[255939]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:53:08 compute-0 sudo[255937]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:08 compute-0 sudo[256016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 11:53:08 compute-0 sudo[256016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:08.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:08 compute-0 podman[256107]: 2025-10-02 11:53:08.833531276 +0000 UTC m=+0.041774033 container create 1d51f0a6d5d624d1ae86887cc559dfbca7a1cf11f2b9b65d449f2e6f6b2300ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:53:08 compute-0 systemd[1]: Started libpod-conmon-1d51f0a6d5d624d1ae86887cc559dfbca7a1cf11f2b9b65d449f2e6f6b2300ff.scope.
Oct 02 11:53:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:53:08 compute-0 podman[256107]: 2025-10-02 11:53:08.904997593 +0000 UTC m=+0.113240370 container init 1d51f0a6d5d624d1ae86887cc559dfbca7a1cf11f2b9b65d449f2e6f6b2300ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 11:53:08 compute-0 podman[256107]: 2025-10-02 11:53:08.812722892 +0000 UTC m=+0.020965679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:53:08 compute-0 podman[256107]: 2025-10-02 11:53:08.911276937 +0000 UTC m=+0.119519684 container start 1d51f0a6d5d624d1ae86887cc559dfbca7a1cf11f2b9b65d449f2e6f6b2300ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 11:53:08 compute-0 quirky_payne[256124]: 167 167
Oct 02 11:53:08 compute-0 podman[256107]: 2025-10-02 11:53:08.915403229 +0000 UTC m=+0.123646006 container attach 1d51f0a6d5d624d1ae86887cc559dfbca7a1cf11f2b9b65d449f2e6f6b2300ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 11:53:08 compute-0 systemd[1]: libpod-1d51f0a6d5d624d1ae86887cc559dfbca7a1cf11f2b9b65d449f2e6f6b2300ff.scope: Deactivated successfully.
Oct 02 11:53:08 compute-0 podman[256107]: 2025-10-02 11:53:08.915910931 +0000 UTC m=+0.124153698 container died 1d51f0a6d5d624d1ae86887cc559dfbca7a1cf11f2b9b65d449f2e6f6b2300ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_payne, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 11:53:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-1175a26a876ea38f8790774b28ec2a33e6bd5702bada3f3a5f296395570396bc-merged.mount: Deactivated successfully.
Oct 02 11:53:08 compute-0 podman[256107]: 2025-10-02 11:53:08.955712216 +0000 UTC m=+0.163954973 container remove 1d51f0a6d5d624d1ae86887cc559dfbca7a1cf11f2b9b65d449f2e6f6b2300ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_payne, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:53:08 compute-0 systemd[1]: libpod-conmon-1d51f0a6d5d624d1ae86887cc559dfbca7a1cf11f2b9b65d449f2e6f6b2300ff.scope: Deactivated successfully.
Oct 02 11:53:09 compute-0 podman[256198]: 2025-10-02 11:53:09.097201241 +0000 UTC m=+0.036291297 container create 56e19526d67c7209f5133e77e28f608a94408897ac37ffc2fc8ffd60f8ba697c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_villani, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 11:53:09 compute-0 systemd[1]: Started libpod-conmon-56e19526d67c7209f5133e77e28f608a94408897ac37ffc2fc8ffd60f8ba697c.scope.
Oct 02 11:53:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:53:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74c6ee2eb5dec0a57e1d4bfcb0f4fb6969314078ef3a96f49fb319af6d6c498/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74c6ee2eb5dec0a57e1d4bfcb0f4fb6969314078ef3a96f49fb319af6d6c498/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74c6ee2eb5dec0a57e1d4bfcb0f4fb6969314078ef3a96f49fb319af6d6c498/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74c6ee2eb5dec0a57e1d4bfcb0f4fb6969314078ef3a96f49fb319af6d6c498/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:09 compute-0 podman[256198]: 2025-10-02 11:53:09.175407344 +0000 UTC m=+0.114497420 container init 56e19526d67c7209f5133e77e28f608a94408897ac37ffc2fc8ffd60f8ba697c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_villani, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:53:09 compute-0 podman[256198]: 2025-10-02 11:53:09.080944769 +0000 UTC m=+0.020034845 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:53:09 compute-0 podman[256198]: 2025-10-02 11:53:09.184272032 +0000 UTC m=+0.123362088 container start 56e19526d67c7209f5133e77e28f608a94408897ac37ffc2fc8ffd60f8ba697c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 11:53:09 compute-0 podman[256198]: 2025-10-02 11:53:09.186809885 +0000 UTC m=+0.125899961 container attach 56e19526d67c7209f5133e77e28f608a94408897ac37ffc2fc8ffd60f8ba697c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:53:09 compute-0 sudo[256292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsulukcitkrgenuzmdvhldidflngpyjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405988.9805708-5034-169766956267634/AnsiballZ_file.py'
Oct 02 11:53:09 compute-0 sudo[256292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:09 compute-0 python3.9[256294]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:09 compute-0 sudo[256292]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:09 compute-0 ceph-mon[73607]: pgmap v762: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:09 compute-0 confident_villani[256237]: {
Oct 02 11:53:09 compute-0 confident_villani[256237]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 11:53:09 compute-0 confident_villani[256237]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:53:09 compute-0 confident_villani[256237]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:53:09 compute-0 confident_villani[256237]:         "osd_id": 1,
Oct 02 11:53:09 compute-0 confident_villani[256237]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:53:09 compute-0 confident_villani[256237]:         "type": "bluestore"
Oct 02 11:53:09 compute-0 confident_villani[256237]:     }
Oct 02 11:53:09 compute-0 confident_villani[256237]: }
Oct 02 11:53:10 compute-0 podman[256198]: 2025-10-02 11:53:10.017302204 +0000 UTC m=+0.956392250 container died 56e19526d67c7209f5133e77e28f608a94408897ac37ffc2fc8ffd60f8ba697c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_villani, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:53:10 compute-0 systemd[1]: libpod-56e19526d67c7209f5133e77e28f608a94408897ac37ffc2fc8ffd60f8ba697c.scope: Deactivated successfully.
Oct 02 11:53:10 compute-0 sudo[256459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfcawtmypfmuosxyecxohzwnjfclcomn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405989.5680642-5034-241763621387141/AnsiballZ_copy.py'
Oct 02 11:53:10 compute-0 sudo[256459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b74c6ee2eb5dec0a57e1d4bfcb0f4fb6969314078ef3a96f49fb319af6d6c498-merged.mount: Deactivated successfully.
Oct 02 11:53:10 compute-0 podman[256198]: 2025-10-02 11:53:10.088260048 +0000 UTC m=+1.027350094 container remove 56e19526d67c7209f5133e77e28f608a94408897ac37ffc2fc8ffd60f8ba697c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:53:10 compute-0 systemd[1]: libpod-conmon-56e19526d67c7209f5133e77e28f608a94408897ac37ffc2fc8ffd60f8ba697c.scope: Deactivated successfully.
Oct 02 11:53:10 compute-0 sudo[256016]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:53:10 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:53:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:53:10 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:53:10 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev bbca11f4-158e-45a8-9b0a-3970caf27854 does not exist
Oct 02 11:53:10 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev a6a45e9f-602d-4ac0-9436-2181d51e174c does not exist
Oct 02 11:53:10 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev d4693cb7-6c7d-4fe8-b3c3-29b67a0b6da0 does not exist
Oct 02 11:53:10 compute-0 sudo[256473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:10 compute-0 sudo[256473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:10 compute-0 sudo[256473]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:10 compute-0 python3.9[256468]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759405989.5680642-5034-241763621387141/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:10 compute-0 sudo[256459]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:10.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:10 compute-0 sudo[256498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:53:10 compute-0 sudo[256498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:10 compute-0 sudo[256498]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:10 compute-0 sudo[256597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiazouqqrmgvpfciyyqnoqdrqdgoachf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405989.5680642-5034-241763621387141/AnsiballZ_systemd.py'
Oct 02 11:53:10 compute-0 sudo[256597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:10.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:10 compute-0 python3.9[256599]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 11:53:10 compute-0 systemd[1]: Reloading.
Oct 02 11:53:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:10 compute-0 systemd-sysv-generator[256626]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:53:10 compute-0 systemd-rc-local-generator[256622]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:53:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:53:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:53:11 compute-0 ceph-mon[73607]: pgmap v763: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:11 compute-0 sudo[256597]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:11 compute-0 sudo[256708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vahamquohvfiuqfukrfzlslmzmalkwau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405989.5680642-5034-241763621387141/AnsiballZ_systemd.py'
Oct 02 11:53:11 compute-0 sudo[256708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:11 compute-0 python3.9[256710]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:53:12 compute-0 systemd[1]: Reloading.
Oct 02 11:53:12 compute-0 systemd-rc-local-generator[256740]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:53:12 compute-0 systemd-sysv-generator[256744]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:53:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:12.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:12 compute-0 systemd[1]: Starting nova_compute container...
Oct 02 11:53:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89655e451678009717071f59c22a48c7b3c8887a4673dead44f5b0855fcda41d/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89655e451678009717071f59c22a48c7b3c8887a4673dead44f5b0855fcda41d/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89655e451678009717071f59c22a48c7b3c8887a4673dead44f5b0855fcda41d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89655e451678009717071f59c22a48c7b3c8887a4673dead44f5b0855fcda41d/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89655e451678009717071f59c22a48c7b3c8887a4673dead44f5b0855fcda41d/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:12 compute-0 podman[256751]: 2025-10-02 11:53:12.502106056 +0000 UTC m=+0.078592644 container init 813a58f3d2838c54de5503dcf1d12c543a24db20d36517da0b52796545261ce0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Oct 02 11:53:12 compute-0 podman[256751]: 2025-10-02 11:53:12.511566839 +0000 UTC m=+0.088053407 container start 813a58f3d2838c54de5503dcf1d12c543a24db20d36517da0b52796545261ce0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 11:53:12 compute-0 podman[256751]: nova_compute
Oct 02 11:53:12 compute-0 nova_compute[256766]: + sudo -E kolla_set_configs
Oct 02 11:53:12 compute-0 systemd[1]: Started nova_compute container.
Oct 02 11:53:12 compute-0 sudo[256708]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Validating config file
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Copying service configuration files
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Deleting /etc/ceph
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Creating directory /etc/ceph
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Setting permission for /etc/ceph
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Writing out command to execute
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 02 11:53:12 compute-0 nova_compute[256766]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 02 11:53:12 compute-0 nova_compute[256766]: ++ cat /run_command
Oct 02 11:53:12 compute-0 nova_compute[256766]: + CMD=nova-compute
Oct 02 11:53:12 compute-0 nova_compute[256766]: + ARGS=
Oct 02 11:53:12 compute-0 nova_compute[256766]: + sudo kolla_copy_cacerts
Oct 02 11:53:12 compute-0 nova_compute[256766]: + [[ ! -n '' ]]
Oct 02 11:53:12 compute-0 nova_compute[256766]: + . kolla_extend_start
Oct 02 11:53:12 compute-0 nova_compute[256766]: Running command: 'nova-compute'
Oct 02 11:53:12 compute-0 nova_compute[256766]: + echo 'Running command: '\''nova-compute'\'''
Oct 02 11:53:12 compute-0 nova_compute[256766]: + umask 0022
Oct 02 11:53:12 compute-0 nova_compute[256766]: + exec nova-compute
Oct 02 11:53:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:12.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:53:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:53:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:53:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:53:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:53:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:53:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:13 compute-0 python3.9[256927]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:53:13 compute-0 ceph-mon[73607]: pgmap v764: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:14.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:14.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:15 compute-0 python3.9[257079]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:53:15 compute-0 nova_compute[256766]: 2025-10-02 11:53:15.140 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 02 11:53:15 compute-0 nova_compute[256766]: 2025-10-02 11:53:15.141 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 02 11:53:15 compute-0 nova_compute[256766]: 2025-10-02 11:53:15.141 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 02 11:53:15 compute-0 nova_compute[256766]: 2025-10-02 11:53:15.141 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 02 11:53:15 compute-0 ceph-mon[73607]: pgmap v765: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:15 compute-0 nova_compute[256766]: 2025-10-02 11:53:15.293 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:53:15 compute-0 nova_compute[256766]: 2025-10-02 11:53:15.306 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:53:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:15 compute-0 nova_compute[256766]: 2025-10-02 11:53:15.945 2 INFO nova.virt.driver [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.047 2 INFO nova.compute.provider_config [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.086 2 DEBUG oslo_concurrency.lockutils [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.086 2 DEBUG oslo_concurrency.lockutils [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.086 2 DEBUG oslo_concurrency.lockutils [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.087 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.087 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.087 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.087 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.087 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.087 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.087 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.088 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.088 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.088 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.088 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.088 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.088 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.088 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.088 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.089 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.089 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.089 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.089 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.089 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.089 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.089 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.090 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.090 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.090 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.090 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.090 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.090 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.090 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.091 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.091 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.091 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.091 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.091 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.091 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.091 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.092 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.092 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.092 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.092 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.092 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 python3.9[257233]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.093 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.093 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.093 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.093 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.093 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.094 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.094 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.094 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.094 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.094 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.095 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.095 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.095 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.095 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.095 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.095 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.095 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.096 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.096 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.096 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.096 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.096 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.096 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.097 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.097 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.097 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.097 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.097 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.097 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.097 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.098 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.098 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.098 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.098 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.098 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.099 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.099 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.099 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.099 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.099 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.099 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.100 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.100 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.100 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.100 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.100 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.100 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.100 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.101 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.101 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.101 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.101 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.101 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.101 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.101 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.102 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.102 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.102 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.102 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.102 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.102 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.102 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.103 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.103 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.103 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.103 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.103 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.103 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.104 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.104 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.104 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.104 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.104 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.105 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.105 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.105 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.105 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.105 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.105 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.106 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.106 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.106 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.106 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.107 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.107 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.107 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.107 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.107 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.107 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.108 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.108 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.108 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.108 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.108 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.109 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.109 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.109 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.109 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.109 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.110 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.110 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.110 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.110 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.110 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.111 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.111 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.111 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.111 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.111 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.112 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.112 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.112 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.112 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.112 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.113 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.113 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.113 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.113 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.113 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.114 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.114 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.114 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.114 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.114 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.115 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.115 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.115 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.115 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.115 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.116 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.116 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.116 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.116 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.116 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.116 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.117 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.117 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.117 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.117 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.118 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.118 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.118 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.118 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.118 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.118 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.119 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.119 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.119 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.119 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.119 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.119 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.120 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.120 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.120 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.120 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.120 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.120 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.121 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.121 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.121 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.121 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.121 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.121 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.122 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.122 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.122 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.122 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.122 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.122 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.123 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.123 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.123 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.123 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.123 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.124 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.124 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.124 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.124 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.124 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.125 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.125 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.125 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.125 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.125 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.126 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.126 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.126 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.126 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.126 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.127 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.127 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.127 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.127 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.127 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.127 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.128 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.128 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.128 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.128 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.128 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.129 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.129 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.129 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.129 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.129 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.130 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.130 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.130 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.130 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.130 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.130 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.131 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.131 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.131 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.131 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.131 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.132 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.132 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.132 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.132 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.133 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.133 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.133 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.133 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.133 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.134 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.134 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.134 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.134 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.134 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.135 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.135 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.135 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.135 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.135 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.136 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.136 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.136 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.136 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.136 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.137 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.137 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.137 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.137 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.137 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.137 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.137 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.138 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.138 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.138 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.138 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.138 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.138 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.138 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.139 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.139 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.139 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.139 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.139 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.139 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.140 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.140 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.140 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.140 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.140 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.141 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.141 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.141 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.141 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.141 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.142 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.142 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.142 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.142 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.142 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.143 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.143 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.143 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.143 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.143 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.143 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.143 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.144 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.144 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.144 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.144 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.144 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.144 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.144 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.145 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.145 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.145 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.145 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.145 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.145 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.146 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.146 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.146 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.146 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.146 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.146 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.147 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.147 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.147 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.147 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.147 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.147 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.148 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.148 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.148 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.148 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.148 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.148 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.149 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.149 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.149 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.149 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.150 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.150 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.150 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.150 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.151 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.151 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.151 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.151 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.151 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.151 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.152 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.152 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.152 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.152 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.152 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.152 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.153 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.153 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.153 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.153 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.153 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.154 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.154 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.154 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.154 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.154 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.155 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.155 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.155 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.155 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.155 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.156 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.156 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.156 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.156 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.156 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.157 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.157 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.157 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.157 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.157 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.158 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.158 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.158 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.158 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.158 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.159 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.159 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.159 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.159 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.159 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.160 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.160 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.160 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.160 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.160 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.161 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.161 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.161 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.161 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.161 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.162 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.162 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.162 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.162 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.162 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.163 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.163 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.163 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.163 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.163 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.164 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.164 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.164 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.164 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.164 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.165 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.165 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.165 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.165 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.165 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.165 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.166 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.166 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.166 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.166 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.166 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.167 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.167 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.167 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.167 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.167 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.168 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.168 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.168 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.168 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.169 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.169 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.169 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.169 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.169 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.170 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.170 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.170 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.170 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.170 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.171 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.171 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.171 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.171 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.171 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.172 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.172 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.172 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.172 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.172 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.173 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.173 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.173 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.173 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.173 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.174 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.174 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.174 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.174 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.175 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.175 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.175 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.175 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.175 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.176 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.176 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.176 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.176 2 WARNING oslo_config.cfg [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 02 11:53:16 compute-0 nova_compute[256766]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 02 11:53:16 compute-0 nova_compute[256766]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 02 11:53:16 compute-0 nova_compute[256766]: and ``live_migration_inbound_addr`` respectively.
Oct 02 11:53:16 compute-0 nova_compute[256766]: ).  Its value may be silently ignored in the future.
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.177 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.177 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.177 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.177 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.177 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.178 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.178 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.178 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.178 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.179 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.179 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.179 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.179 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.180 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.180 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.180 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.180 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.180 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.181 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.rbd_secret_uuid        = fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.181 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.181 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.181 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.181 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.182 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.182 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.182 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.182 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.182 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.183 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.183 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.183 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.183 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.183 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.184 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.184 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.184 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.184 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.184 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.185 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.185 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.185 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.185 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.185 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.186 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.186 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.186 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.186 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.186 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.186 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.187 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.187 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.187 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.187 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.187 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.187 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.187 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.188 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.188 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.188 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.188 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.188 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.188 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.188 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.189 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.189 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.189 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.189 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.189 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.190 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.190 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.190 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.190 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.190 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.191 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.191 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.191 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.191 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.191 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.191 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.192 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.192 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.192 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.192 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.193 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.193 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.193 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.193 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.194 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.194 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.194 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.194 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.194 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.195 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.195 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.195 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.195 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.195 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.196 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.196 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.196 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.196 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.196 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.196 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.197 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.197 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.197 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.197 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.198 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.198 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.198 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.198 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.198 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.199 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.199 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.199 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.199 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.199 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.199 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.200 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.200 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.200 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.200 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.200 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.201 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.201 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.201 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.201 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.201 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.201 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.202 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.202 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.202 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.202 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.202 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.203 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.203 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.203 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.203 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.203 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.204 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.204 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.204 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.204 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.205 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.205 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.205 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.205 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.205 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.206 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.206 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.206 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.206 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.206 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.206 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.207 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.207 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.207 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.207 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.207 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.207 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.208 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.208 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.208 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.208 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.209 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.209 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.209 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.209 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.209 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.210 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.210 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.210 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.210 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.210 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.211 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.211 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.211 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.211 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.211 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.212 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.212 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.212 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.212 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.212 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.213 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.213 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.213 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.213 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.213 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.213 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.214 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.214 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.214 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.214 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.214 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.214 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.215 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.215 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.215 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.215 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.216 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.216 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.216 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.216 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.216 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.217 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.217 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.217 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.217 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.217 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.217 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.218 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.218 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.218 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.218 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.218 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.219 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.219 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.219 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.219 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.219 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.220 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.220 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.220 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.220 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.220 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.221 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.221 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.221 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.221 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.221 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.222 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.222 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.222 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.222 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.222 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.222 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.223 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.223 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.223 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.223 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.223 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.224 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.224 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.224 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.224 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.224 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.225 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.225 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.225 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.226 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.226 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.226 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.226 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.226 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.227 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.227 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.227 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.227 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.227 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.228 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.228 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.228 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.228 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.228 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.228 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.229 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.229 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.229 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.229 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.229 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.229 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.230 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.230 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.230 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.230 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.230 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.231 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.231 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.231 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.231 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.231 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.231 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.232 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.232 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.232 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.232 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.232 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.233 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.233 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.233 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.233 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.233 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.234 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.234 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.234 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.234 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.234 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.235 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.235 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.235 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.235 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.235 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.235 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.236 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.236 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.236 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.236 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.236 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.237 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.237 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.237 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.237 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.238 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.238 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.238 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.238 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.238 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.239 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.239 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.239 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.239 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.239 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.240 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.240 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.240 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.240 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.240 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.240 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.241 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.241 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.241 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.241 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.241 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.242 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.242 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.242 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.242 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.242 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.243 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.243 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.243 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.243 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.243 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.244 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.244 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.244 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.244 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.244 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.245 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.245 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.245 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.245 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.245 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.246 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.246 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.246 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.246 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.246 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.246 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.247 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.247 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.247 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.247 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.247 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.247 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.248 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.248 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.248 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.248 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.249 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.249 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.249 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.249 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.249 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.249 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.250 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.250 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.250 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.250 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.250 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.250 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.251 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.251 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.251 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.251 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.251 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.251 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.251 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.252 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.252 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.252 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.252 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.252 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.253 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.253 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.253 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.253 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.253 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.254 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.254 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.254 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.254 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.254 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.255 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.255 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.255 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.255 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.255 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.255 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.256 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.256 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.256 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.256 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.256 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.257 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.257 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.257 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.257 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.258 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.258 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.258 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.258 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.258 2 DEBUG oslo_service.service [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.259 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 02 11:53:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:16.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.312 2 DEBUG nova.virt.libvirt.host [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.313 2 DEBUG nova.virt.libvirt.host [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.313 2 DEBUG nova.virt.libvirt.host [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.313 2 DEBUG nova.virt.libvirt.host [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 02 11:53:16 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Oct 02 11:53:16 compute-0 systemd[1]: Started libvirt QEMU daemon.
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.386 2 DEBUG nova.virt.libvirt.host [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f6fc27b8be0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.388 2 DEBUG nova.virt.libvirt.host [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f6fc27b8be0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.389 2 INFO nova.virt.libvirt.driver [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] Connection event '1' reason 'None'
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.415 2 WARNING nova.virt.libvirt.driver [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 02 11:53:16 compute-0 nova_compute[256766]: 2025-10-02 11:53:16.416 2 DEBUG nova.virt.libvirt.volume.mount [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 02 11:53:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:16.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:16 compute-0 sudo[257475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzjoyywtneufymgfdpvsrcnawlvfrqta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405996.5787952-5214-69868840325961/AnsiballZ_podman_container.py'
Oct 02 11:53:16 compute-0 sudo[257475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:16 compute-0 podman[257411]: 2025-10-02 11:53:16.888661594 +0000 UTC m=+0.063931261 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 11:53:16 compute-0 podman[257410]: 2025-10-02 11:53:16.889236697 +0000 UTC m=+0.064177996 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 11:53:17 compute-0 python3.9[257486]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 02 11:53:17 compute-0 nova_compute[256766]: 2025-10-02 11:53:17.187 2 INFO nova.virt.libvirt.host [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] Libvirt host capabilities <capabilities>
Oct 02 11:53:17 compute-0 nova_compute[256766]: 
Oct 02 11:53:17 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <host>
Oct 02 11:53:17 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <uuid>8a59133c-d138-4412-952a-4a6587089b61</uuid>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <cpu>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <arch>x86_64</arch>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model>EPYC-Rome-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <vendor>AMD</vendor>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <microcode version='16777317'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <signature family='23' model='49' stepping='0'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <maxphysaddr mode='emulate' bits='40'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature name='x2apic'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature name='tsc-deadline'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature name='osxsave'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature name='hypervisor'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature name='tsc_adjust'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature name='spec-ctrl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature name='stibp'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature name='arch-capabilities'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature name='ssbd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature name='cmp_legacy'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature name='topoext'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature name='virt-ssbd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature name='lbrv'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature name='tsc-scale'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature name='vmcb-clean'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature name='pause-filter'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature name='pfthreshold'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature name='svme-addr-chk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature name='rdctl-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature name='skip-l1dfl-vmentry'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature name='mds-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature name='pschange-mc-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <pages unit='KiB' size='4'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <pages unit='KiB' size='2048'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <pages unit='KiB' size='1048576'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </cpu>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <power_management>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <suspend_mem/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </power_management>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <iommu support='no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <migration_features>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <live/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <uri_transports>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <uri_transport>tcp</uri_transport>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <uri_transport>rdma</uri_transport>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </uri_transports>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </migration_features>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <topology>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <cells num='1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <cell id='0'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:           <memory unit='KiB'>7864104</memory>
Oct 02 11:53:17 compute-0 nova_compute[256766]:           <pages unit='KiB' size='4'>1966026</pages>
Oct 02 11:53:17 compute-0 nova_compute[256766]:           <pages unit='KiB' size='2048'>0</pages>
Oct 02 11:53:17 compute-0 nova_compute[256766]:           <pages unit='KiB' size='1048576'>0</pages>
Oct 02 11:53:17 compute-0 nova_compute[256766]:           <distances>
Oct 02 11:53:17 compute-0 nova_compute[256766]:             <sibling id='0' value='10'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:           </distances>
Oct 02 11:53:17 compute-0 nova_compute[256766]:           <cpus num='8'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:           </cpus>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         </cell>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </cells>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </topology>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <cache>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </cache>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <secmodel>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model>selinux</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <doi>0</doi>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </secmodel>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <secmodel>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model>dac</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <doi>0</doi>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <baselabel type='kvm'>+107:+107</baselabel>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <baselabel type='qemu'>+107:+107</baselabel>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </secmodel>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   </host>
Oct 02 11:53:17 compute-0 nova_compute[256766]: 
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <guest>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <os_type>hvm</os_type>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <arch name='i686'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <wordsize>32</wordsize>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <domain type='qemu'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <domain type='kvm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </arch>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <features>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <pae/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <nonpae/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <acpi default='on' toggle='yes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <apic default='on' toggle='no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <cpuselection/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <deviceboot/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <disksnapshot default='on' toggle='no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <externalSnapshot/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </features>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   </guest>
Oct 02 11:53:17 compute-0 nova_compute[256766]: 
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <guest>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <os_type>hvm</os_type>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <arch name='x86_64'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <wordsize>64</wordsize>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <domain type='qemu'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <domain type='kvm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </arch>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <features>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <acpi default='on' toggle='yes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <apic default='on' toggle='no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <cpuselection/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <deviceboot/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <disksnapshot default='on' toggle='no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <externalSnapshot/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </features>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   </guest>
Oct 02 11:53:17 compute-0 nova_compute[256766]: 
Oct 02 11:53:17 compute-0 nova_compute[256766]: </capabilities>
Oct 02 11:53:17 compute-0 nova_compute[256766]: 
Oct 02 11:53:17 compute-0 nova_compute[256766]: 2025-10-02 11:53:17.196 2 DEBUG nova.virt.libvirt.host [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 02 11:53:17 compute-0 sudo[257475]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:17 compute-0 nova_compute[256766]: 2025-10-02 11:53:17.224 2 DEBUG nova.virt.libvirt.host [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 02 11:53:17 compute-0 nova_compute[256766]: <domainCapabilities>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <domain>kvm</domain>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <arch>i686</arch>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <vcpu max='4096'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <iothreads supported='yes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <os supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <enum name='firmware'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <loader supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='type'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>rom</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>pflash</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='readonly'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>yes</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>no</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='secure'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>no</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </loader>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   </os>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <cpu>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <mode name='host-passthrough' supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='hostPassthroughMigratable'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>on</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>off</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </mode>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <mode name='maximum' supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='maximumMigratable'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>on</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>off</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </mode>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <mode name='host-model' supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <vendor>AMD</vendor>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='x2apic'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='hypervisor'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='stibp'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='ssbd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='overflow-recov'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='succor'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='ibrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='lbrv'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='tsc-scale'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='flushbyasid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='pause-filter'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='pfthreshold'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='rdctl-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='mds-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='gds-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='rfds-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='disable' name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </mode>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <mode name='custom' supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell-noTSX'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cascadelake-Server'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cooperlake'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cooperlake-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cooperlake-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Denverton'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mpx'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Denverton-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mpx'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Denverton-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Denverton-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Dhyana-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Genoa'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amd-psfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='auto-ibrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='no-nested-data-bp'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='null-sel-clr-base'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='stibp-always-on'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amd-psfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='auto-ibrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='no-nested-data-bp'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='null-sel-clr-base'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='stibp-always-on'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Milan'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Milan-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Milan-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amd-psfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='no-nested-data-bp'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='null-sel-clr-base'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='stibp-always-on'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Rome'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Rome-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Rome-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Rome-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='GraniteRapids'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-tile'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fbsdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrc'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fzrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mcdt-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pbrsb-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='prefetchiti'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='psdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='GraniteRapids-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-tile'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fbsdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrc'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fzrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mcdt-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pbrsb-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='prefetchiti'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='psdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='GraniteRapids-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-tile'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx10'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx10-128'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx10-256'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx10-512'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cldemote'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fbsdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrc'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fzrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mcdt-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdir64b'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdiri'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pbrsb-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='prefetchiti'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='psdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell-noTSX'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-v5'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-v6'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-v7'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='IvyBridge'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='IvyBridge-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='IvyBridge-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='IvyBridge-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='KnightsMill'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-4fmaps'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-4vnniw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512er'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512pf'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='KnightsMill-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-4fmaps'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-4vnniw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512er'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512pf'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Opteron_G4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fma4'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xop'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Opteron_G4-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fma4'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xop'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Opteron_G5'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fma4'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tbm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xop'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Opteron_G5-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fma4'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tbm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xop'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='SapphireRapids'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-tile'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrc'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fzrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='SapphireRapids-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-tile'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrc'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fzrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='SapphireRapids-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-tile'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fbsdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrc'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fzrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='psdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='SapphireRapids-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-tile'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cldemote'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fbsdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrc'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fzrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdir64b'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdiri'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='psdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='SierraForest'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-ne-convert'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cmpccxadd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fbsdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mcdt-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pbrsb-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='psdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='SierraForest-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-ne-convert'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cmpccxadd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fbsdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mcdt-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pbrsb-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='psdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Client'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Client-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Client-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Client-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Client-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server-v5'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Snowridge'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cldemote'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='core-capability'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdir64b'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdiri'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mpx'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='split-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Snowridge-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cldemote'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='core-capability'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdir64b'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdiri'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mpx'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='split-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Snowridge-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cldemote'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='core-capability'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdir64b'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdiri'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='split-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Snowridge-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cldemote'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='core-capability'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdir64b'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdiri'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='split-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Snowridge-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cldemote'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdir64b'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdiri'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='athlon'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnow'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnowext'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='athlon-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnow'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnowext'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='core2duo'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='core2duo-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='coreduo'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='coreduo-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='n270'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='n270-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='phenom'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnow'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnowext'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='phenom-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnow'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnowext'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </mode>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   </cpu>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <memoryBacking supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <enum name='sourceType'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <value>file</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <value>anonymous</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <value>memfd</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   </memoryBacking>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <devices>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <disk supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='diskDevice'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>disk</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>cdrom</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>floppy</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>lun</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='bus'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>fdc</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>scsi</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>usb</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>sata</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='model'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio-transitional</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio-non-transitional</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </disk>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <graphics supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='type'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>vnc</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>egl-headless</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>dbus</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </graphics>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <video supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='modelType'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>vga</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>cirrus</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>none</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>bochs</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>ramfb</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </video>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <hostdev supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='mode'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>subsystem</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='startupPolicy'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>default</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>mandatory</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>requisite</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>optional</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='subsysType'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>usb</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>pci</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>scsi</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='capsType'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='pciBackend'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </hostdev>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <rng supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='model'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio-transitional</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio-non-transitional</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='backendModel'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>random</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>egd</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>builtin</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </rng>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <filesystem supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='driverType'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>path</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>handle</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtiofs</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </filesystem>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <tpm supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='model'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>tpm-tis</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>tpm-crb</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='backendModel'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>emulator</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>external</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='backendVersion'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>2.0</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </tpm>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <redirdev supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='bus'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>usb</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </redirdev>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <channel supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='type'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>pty</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>unix</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </channel>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <crypto supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='model'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='type'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>qemu</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='backendModel'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>builtin</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </crypto>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <interface supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='backendType'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>default</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>passt</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </interface>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <panic supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='model'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>isa</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>hyperv</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </panic>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   </devices>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <features>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <gic supported='no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <vmcoreinfo supported='yes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <genid supported='yes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <backingStoreInput supported='yes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <backup supported='yes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <async-teardown supported='yes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <ps2 supported='yes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <sev supported='no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <sgx supported='no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <hyperv supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='features'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>relaxed</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>vapic</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>spinlocks</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>vpindex</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>runtime</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>synic</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>stimer</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>reset</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>vendor_id</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>frequencies</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>reenlightenment</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>tlbflush</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>ipi</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>avic</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>emsr_bitmap</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>xmm_input</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </hyperv>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <launchSecurity supported='no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   </features>
Oct 02 11:53:17 compute-0 nova_compute[256766]: </domainCapabilities>
Oct 02 11:53:17 compute-0 nova_compute[256766]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 11:53:17 compute-0 nova_compute[256766]: 2025-10-02 11:53:17.232 2 DEBUG nova.virt.libvirt.host [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 02 11:53:17 compute-0 nova_compute[256766]: <domainCapabilities>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <domain>kvm</domain>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <arch>i686</arch>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <vcpu max='240'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <iothreads supported='yes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <os supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <enum name='firmware'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <loader supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='type'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>rom</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>pflash</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='readonly'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>yes</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>no</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='secure'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>no</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </loader>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   </os>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <cpu>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <mode name='host-passthrough' supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='hostPassthroughMigratable'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>on</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>off</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </mode>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <mode name='maximum' supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='maximumMigratable'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>on</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>off</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </mode>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <mode name='host-model' supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <vendor>AMD</vendor>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='x2apic'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='hypervisor'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='stibp'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='ssbd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='overflow-recov'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='succor'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='ibrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='lbrv'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='tsc-scale'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='flushbyasid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='pause-filter'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='pfthreshold'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='rdctl-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='mds-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='gds-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='rfds-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='disable' name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </mode>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <mode name='custom' supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell-noTSX'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cascadelake-Server'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cooperlake'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cooperlake-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cooperlake-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Denverton'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mpx'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Denverton-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mpx'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Denverton-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Denverton-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Dhyana-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Genoa'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amd-psfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='auto-ibrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='no-nested-data-bp'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='null-sel-clr-base'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='stibp-always-on'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amd-psfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='auto-ibrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='no-nested-data-bp'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='null-sel-clr-base'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='stibp-always-on'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Milan'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Milan-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Milan-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amd-psfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='no-nested-data-bp'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='null-sel-clr-base'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='stibp-always-on'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Rome'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Rome-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Rome-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Rome-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='GraniteRapids'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-tile'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fbsdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrc'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fzrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mcdt-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pbrsb-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='prefetchiti'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='psdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='GraniteRapids-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-tile'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fbsdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrc'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fzrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mcdt-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pbrsb-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='prefetchiti'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='psdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='GraniteRapids-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-tile'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx10'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx10-128'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx10-256'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx10-512'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cldemote'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fbsdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrc'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fzrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mcdt-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdir64b'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdiri'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pbrsb-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='prefetchiti'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='psdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell-noTSX'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-v5'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-v6'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-v7'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='IvyBridge'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='IvyBridge-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='IvyBridge-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='IvyBridge-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='KnightsMill'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-4fmaps'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-4vnniw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512er'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512pf'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='KnightsMill-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-4fmaps'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-4vnniw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512er'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512pf'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Opteron_G4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fma4'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xop'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Opteron_G4-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fma4'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xop'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Opteron_G5'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fma4'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tbm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xop'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Opteron_G5-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fma4'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tbm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xop'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='SapphireRapids'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-tile'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrc'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fzrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='SapphireRapids-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-tile'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrc'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fzrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='SapphireRapids-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-tile'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fbsdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrc'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fzrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='psdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='SapphireRapids-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-tile'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cldemote'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fbsdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrc'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fzrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdir64b'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdiri'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='psdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='SierraForest'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-ne-convert'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cmpccxadd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fbsdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mcdt-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pbrsb-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='psdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='SierraForest-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-ne-convert'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cmpccxadd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fbsdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mcdt-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pbrsb-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='psdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Client'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Client-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Client-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Client-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Client-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server-v5'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Snowridge'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cldemote'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='core-capability'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdir64b'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdiri'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mpx'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='split-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Snowridge-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cldemote'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='core-capability'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdir64b'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdiri'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mpx'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='split-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Snowridge-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cldemote'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='core-capability'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdir64b'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdiri'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='split-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Snowridge-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cldemote'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='core-capability'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdir64b'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdiri'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='split-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Snowridge-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cldemote'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdir64b'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdiri'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='athlon'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnow'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnowext'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='athlon-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnow'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnowext'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='core2duo'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='core2duo-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='coreduo'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='coreduo-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='n270'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='n270-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='phenom'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnow'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnowext'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='phenom-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnow'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnowext'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </mode>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   </cpu>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <memoryBacking supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <enum name='sourceType'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <value>file</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <value>anonymous</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <value>memfd</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   </memoryBacking>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <devices>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <disk supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='diskDevice'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>disk</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>cdrom</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>floppy</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>lun</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='bus'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>ide</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>fdc</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>scsi</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>usb</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>sata</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='model'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio-transitional</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio-non-transitional</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </disk>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <graphics supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='type'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>vnc</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>egl-headless</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>dbus</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </graphics>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <video supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='modelType'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>vga</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>cirrus</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>none</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>bochs</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>ramfb</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </video>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <hostdev supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='mode'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>subsystem</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='startupPolicy'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>default</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>mandatory</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>requisite</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>optional</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='subsysType'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>usb</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>pci</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>scsi</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='capsType'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='pciBackend'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </hostdev>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <rng supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='model'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio-transitional</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio-non-transitional</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='backendModel'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>random</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>egd</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>builtin</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </rng>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <filesystem supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='driverType'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>path</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>handle</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtiofs</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </filesystem>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <tpm supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='model'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>tpm-tis</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>tpm-crb</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='backendModel'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>emulator</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>external</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='backendVersion'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>2.0</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </tpm>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <redirdev supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='bus'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>usb</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </redirdev>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <channel supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='type'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>pty</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>unix</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </channel>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <crypto supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='model'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='type'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>qemu</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='backendModel'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>builtin</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </crypto>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <interface supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='backendType'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>default</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>passt</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </interface>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <panic supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='model'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>isa</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>hyperv</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </panic>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   </devices>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <features>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <gic supported='no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <vmcoreinfo supported='yes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <genid supported='yes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <backingStoreInput supported='yes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <backup supported='yes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <async-teardown supported='yes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <ps2 supported='yes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <sev supported='no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <sgx supported='no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <hyperv supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='features'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>relaxed</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>vapic</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>spinlocks</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>vpindex</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>runtime</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>synic</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>stimer</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>reset</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>vendor_id</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>frequencies</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>reenlightenment</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>tlbflush</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>ipi</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>avic</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>emsr_bitmap</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>xmm_input</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </hyperv>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <launchSecurity supported='no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   </features>
Oct 02 11:53:17 compute-0 nova_compute[256766]: </domainCapabilities>
Oct 02 11:53:17 compute-0 nova_compute[256766]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 11:53:17 compute-0 nova_compute[256766]: 2025-10-02 11:53:17.264 2 DEBUG nova.virt.libvirt.host [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 02 11:53:17 compute-0 nova_compute[256766]: 2025-10-02 11:53:17.268 2 DEBUG nova.virt.libvirt.host [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 02 11:53:17 compute-0 nova_compute[256766]: <domainCapabilities>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <domain>kvm</domain>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <arch>x86_64</arch>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <vcpu max='4096'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <iothreads supported='yes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <os supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <enum name='firmware'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <value>efi</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <loader supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='type'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>rom</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>pflash</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='readonly'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>yes</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>no</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='secure'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>yes</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>no</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </loader>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   </os>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <cpu>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <mode name='host-passthrough' supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='hostPassthroughMigratable'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>on</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>off</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </mode>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <mode name='maximum' supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='maximumMigratable'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>on</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>off</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </mode>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <mode name='host-model' supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <vendor>AMD</vendor>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='x2apic'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='hypervisor'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='stibp'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='ssbd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='overflow-recov'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='succor'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='ibrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='lbrv'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='tsc-scale'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='flushbyasid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='pause-filter'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='pfthreshold'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='rdctl-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='mds-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='gds-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='rfds-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='disable' name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </mode>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <mode name='custom' supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell-noTSX'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cascadelake-Server'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cooperlake'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cooperlake-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cooperlake-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Denverton'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mpx'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Denverton-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mpx'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Denverton-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Denverton-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Dhyana-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Genoa'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amd-psfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='auto-ibrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='no-nested-data-bp'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='null-sel-clr-base'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='stibp-always-on'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amd-psfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='auto-ibrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='no-nested-data-bp'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='null-sel-clr-base'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='stibp-always-on'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Milan'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Milan-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Milan-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amd-psfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='no-nested-data-bp'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='null-sel-clr-base'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='stibp-always-on'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Rome'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Rome-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Rome-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Rome-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='GraniteRapids'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-tile'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fbsdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrc'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fzrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mcdt-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pbrsb-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='prefetchiti'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='psdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='GraniteRapids-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-tile'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fbsdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrc'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fzrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mcdt-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pbrsb-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='prefetchiti'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='psdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='GraniteRapids-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-tile'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx10'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx10-128'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx10-256'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx10-512'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cldemote'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fbsdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrc'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fzrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mcdt-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdir64b'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdiri'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pbrsb-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='prefetchiti'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='psdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell-noTSX'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-v5'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-v6'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-v7'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='IvyBridge'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='IvyBridge-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='IvyBridge-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='IvyBridge-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='KnightsMill'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-4fmaps'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-4vnniw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512er'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512pf'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='KnightsMill-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-4fmaps'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-4vnniw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512er'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512pf'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Opteron_G4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fma4'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xop'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Opteron_G4-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fma4'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xop'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Opteron_G5'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fma4'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tbm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xop'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Opteron_G5-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fma4'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tbm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xop'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='SapphireRapids'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-tile'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrc'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fzrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='SapphireRapids-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-tile'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrc'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fzrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='SapphireRapids-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-tile'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fbsdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrc'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fzrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='psdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='SapphireRapids-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-tile'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cldemote'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fbsdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrc'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fzrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdir64b'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdiri'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='psdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='SierraForest'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-ne-convert'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cmpccxadd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fbsdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mcdt-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pbrsb-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='psdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='SierraForest-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-ne-convert'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cmpccxadd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fbsdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mcdt-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pbrsb-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='psdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Client'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Client-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Client-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Client-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Client-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server-v5'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Snowridge'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cldemote'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='core-capability'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdir64b'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdiri'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mpx'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='split-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Snowridge-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cldemote'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='core-capability'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdir64b'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdiri'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mpx'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='split-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Snowridge-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cldemote'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='core-capability'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdir64b'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdiri'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='split-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Snowridge-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cldemote'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='core-capability'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdir64b'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdiri'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='split-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Snowridge-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cldemote'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdir64b'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdiri'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='athlon'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnow'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnowext'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='athlon-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnow'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnowext'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='core2duo'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='core2duo-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='coreduo'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='coreduo-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='n270'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='n270-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='phenom'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnow'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnowext'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='phenom-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnow'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnowext'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </mode>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   </cpu>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <memoryBacking supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <enum name='sourceType'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <value>file</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <value>anonymous</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <value>memfd</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   </memoryBacking>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <devices>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <disk supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='diskDevice'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>disk</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>cdrom</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>floppy</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>lun</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='bus'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>fdc</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>scsi</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>usb</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>sata</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='model'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio-transitional</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio-non-transitional</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </disk>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <graphics supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='type'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>vnc</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>egl-headless</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>dbus</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </graphics>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <video supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='modelType'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>vga</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>cirrus</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>none</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>bochs</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>ramfb</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </video>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <hostdev supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='mode'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>subsystem</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='startupPolicy'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>default</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>mandatory</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>requisite</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>optional</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='subsysType'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>usb</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>pci</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>scsi</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='capsType'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='pciBackend'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </hostdev>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <rng supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='model'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio-transitional</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio-non-transitional</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='backendModel'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>random</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>egd</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>builtin</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </rng>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <filesystem supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='driverType'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>path</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>handle</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtiofs</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </filesystem>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <tpm supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='model'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>tpm-tis</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>tpm-crb</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='backendModel'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>emulator</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>external</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='backendVersion'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>2.0</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </tpm>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <redirdev supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='bus'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>usb</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </redirdev>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <channel supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='type'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>pty</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>unix</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </channel>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <crypto supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='model'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='type'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>qemu</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='backendModel'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>builtin</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </crypto>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <interface supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='backendType'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>default</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>passt</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </interface>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <panic supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='model'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>isa</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>hyperv</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </panic>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   </devices>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <features>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <gic supported='no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <vmcoreinfo supported='yes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <genid supported='yes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <backingStoreInput supported='yes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <backup supported='yes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <async-teardown supported='yes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <ps2 supported='yes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <sev supported='no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <sgx supported='no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <hyperv supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='features'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>relaxed</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>vapic</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>spinlocks</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>vpindex</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>runtime</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>synic</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>stimer</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>reset</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>vendor_id</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>frequencies</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>reenlightenment</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>tlbflush</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>ipi</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>avic</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>emsr_bitmap</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>xmm_input</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </hyperv>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <launchSecurity supported='no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   </features>
Oct 02 11:53:17 compute-0 nova_compute[256766]: </domainCapabilities>
Oct 02 11:53:17 compute-0 nova_compute[256766]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 11:53:17 compute-0 nova_compute[256766]: 2025-10-02 11:53:17.328 2 DEBUG nova.virt.libvirt.host [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 02 11:53:17 compute-0 nova_compute[256766]: <domainCapabilities>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <domain>kvm</domain>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <arch>x86_64</arch>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <vcpu max='240'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <iothreads supported='yes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <os supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <enum name='firmware'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <loader supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='type'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>rom</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>pflash</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='readonly'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>yes</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>no</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='secure'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>no</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </loader>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   </os>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <cpu>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <mode name='host-passthrough' supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='hostPassthroughMigratable'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>on</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>off</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </mode>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <mode name='maximum' supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='maximumMigratable'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>on</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>off</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </mode>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <mode name='host-model' supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <vendor>AMD</vendor>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='x2apic'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='hypervisor'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='stibp'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='ssbd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='overflow-recov'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='succor'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='ibrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='lbrv'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='tsc-scale'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='flushbyasid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='pause-filter'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='pfthreshold'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='rdctl-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='mds-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='gds-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='require' name='rfds-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <feature policy='disable' name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </mode>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <mode name='custom' supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell-noTSX'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Broadwell-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cascadelake-Server'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cooperlake'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cooperlake-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Cooperlake-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Denverton'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mpx'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Denverton-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mpx'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Denverton-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Denverton-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Dhyana-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Genoa'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amd-psfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='auto-ibrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='no-nested-data-bp'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='null-sel-clr-base'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='stibp-always-on'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amd-psfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='auto-ibrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='no-nested-data-bp'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='null-sel-clr-base'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='stibp-always-on'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Milan'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Milan-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Milan-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amd-psfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='no-nested-data-bp'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='null-sel-clr-base'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='stibp-always-on'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Rome'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Rome-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Rome-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-Rome-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='EPYC-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='GraniteRapids'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-tile'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fbsdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrc'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fzrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mcdt-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pbrsb-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='prefetchiti'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='psdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='GraniteRapids-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-tile'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fbsdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrc'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fzrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mcdt-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pbrsb-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='prefetchiti'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='psdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='GraniteRapids-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-tile'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx10'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx10-128'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx10-256'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx10-512'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cldemote'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fbsdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrc'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fzrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mcdt-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdir64b'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdiri'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pbrsb-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='prefetchiti'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='psdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell-noTSX'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Haswell-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-v5'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-v6'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Icelake-Server-v7'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='IvyBridge'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='IvyBridge-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='IvyBridge-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='IvyBridge-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='KnightsMill'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-4fmaps'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-4vnniw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512er'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512pf'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='KnightsMill-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-4fmaps'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-4vnniw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512er'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512pf'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Opteron_G4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fma4'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xop'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Opteron_G4-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fma4'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xop'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Opteron_G5'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fma4'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tbm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xop'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Opteron_G5-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fma4'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tbm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xop'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='SapphireRapids'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-tile'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrc'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fzrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='SapphireRapids-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-tile'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrc'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fzrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='SapphireRapids-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-tile'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fbsdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrc'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fzrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='psdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='SapphireRapids-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='amx-tile'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-bf16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-fp16'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bitalg'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cldemote'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fbsdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrc'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fzrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='la57'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdir64b'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdiri'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='psdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='taa-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xfd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='SierraForest'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-ne-convert'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cmpccxadd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fbsdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mcdt-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pbrsb-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='psdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='SierraForest-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-ifma'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-ne-convert'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx-vnni-int8'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cmpccxadd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fbsdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='fsrs'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ibrs-all'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mcdt-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pbrsb-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='psdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='serialize'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vaes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Client'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Client-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Client-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Client-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Client-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='hle'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='rtm'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Skylake-Server-v5'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512bw'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512cd'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512dq'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512f'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='avx512vl'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='invpcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pcid'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='pku'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Snowridge'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cldemote'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='core-capability'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdir64b'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdiri'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mpx'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='split-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Snowridge-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cldemote'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='core-capability'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdir64b'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdiri'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='mpx'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='split-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Snowridge-v2'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cldemote'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='core-capability'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdir64b'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdiri'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='split-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Snowridge-v3'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cldemote'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='core-capability'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdir64b'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdiri'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='split-lock-detect'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='Snowridge-v4'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='cldemote'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='erms'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='gfni'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdir64b'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='movdiri'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='xsaves'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='athlon'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnow'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnowext'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='athlon-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnow'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnowext'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='core2duo'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='core2duo-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='coreduo'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='coreduo-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='n270'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='n270-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='ss'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='phenom'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnow'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnowext'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <blockers model='phenom-v1'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnow'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <feature name='3dnowext'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </blockers>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </mode>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   </cpu>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <memoryBacking supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <enum name='sourceType'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <value>file</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <value>anonymous</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <value>memfd</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   </memoryBacking>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <devices>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <disk supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='diskDevice'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>disk</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>cdrom</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>floppy</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>lun</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='bus'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>ide</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>fdc</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>scsi</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>usb</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>sata</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='model'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio-transitional</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio-non-transitional</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </disk>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <graphics supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='type'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>vnc</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>egl-headless</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>dbus</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </graphics>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <video supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='modelType'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>vga</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>cirrus</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>none</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>bochs</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>ramfb</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </video>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <hostdev supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='mode'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>subsystem</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='startupPolicy'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>default</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>mandatory</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>requisite</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>optional</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='subsysType'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>usb</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>pci</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>scsi</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='capsType'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='pciBackend'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </hostdev>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <rng supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='model'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio-transitional</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtio-non-transitional</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='backendModel'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>random</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>egd</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>builtin</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </rng>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <filesystem supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='driverType'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>path</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>handle</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>virtiofs</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </filesystem>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <tpm supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='model'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>tpm-tis</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>tpm-crb</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='backendModel'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>emulator</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>external</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='backendVersion'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>2.0</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </tpm>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <redirdev supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='bus'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>usb</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </redirdev>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <channel supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='type'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>pty</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>unix</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </channel>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <crypto supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='model'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='type'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>qemu</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='backendModel'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>builtin</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </crypto>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <interface supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='backendType'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>default</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>passt</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </interface>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <panic supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='model'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>isa</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>hyperv</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </panic>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   </devices>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <features>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <gic supported='no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <vmcoreinfo supported='yes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <genid supported='yes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <backingStoreInput supported='yes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <backup supported='yes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <async-teardown supported='yes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <ps2 supported='yes'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <sev supported='no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <sgx supported='no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <hyperv supported='yes'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       <enum name='features'>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>relaxed</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>vapic</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>spinlocks</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>vpindex</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>runtime</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>synic</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>stimer</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>reset</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>vendor_id</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>frequencies</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>reenlightenment</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>tlbflush</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>ipi</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>avic</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>emsr_bitmap</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:         <value>xmm_input</value>
Oct 02 11:53:17 compute-0 nova_compute[256766]:       </enum>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     </hyperv>
Oct 02 11:53:17 compute-0 nova_compute[256766]:     <launchSecurity supported='no'/>
Oct 02 11:53:17 compute-0 nova_compute[256766]:   </features>
Oct 02 11:53:17 compute-0 nova_compute[256766]: </domainCapabilities>
Oct 02 11:53:17 compute-0 nova_compute[256766]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 11:53:17 compute-0 nova_compute[256766]: 2025-10-02 11:53:17.385 2 DEBUG nova.virt.libvirt.host [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 02 11:53:17 compute-0 nova_compute[256766]: 2025-10-02 11:53:17.385 2 INFO nova.virt.libvirt.host [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] Secure Boot support detected
Oct 02 11:53:17 compute-0 nova_compute[256766]: 2025-10-02 11:53:17.387 2 INFO nova.virt.libvirt.driver [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 02 11:53:17 compute-0 nova_compute[256766]: 2025-10-02 11:53:17.387 2 INFO nova.virt.libvirt.driver [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 02 11:53:17 compute-0 nova_compute[256766]: 2025-10-02 11:53:17.396 2 DEBUG nova.virt.libvirt.driver [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] cpu compare xml: <cpu match="exact">
Oct 02 11:53:17 compute-0 nova_compute[256766]:   <model>Nehalem</model>
Oct 02 11:53:17 compute-0 nova_compute[256766]: </cpu>
Oct 02 11:53:17 compute-0 nova_compute[256766]:  _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019
Oct 02 11:53:17 compute-0 nova_compute[256766]: 2025-10-02 11:53:17.398 2 DEBUG nova.virt.libvirt.driver [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct 02 11:53:17 compute-0 nova_compute[256766]: 2025-10-02 11:53:17.458 2 INFO nova.virt.node [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] Determined node identity a293a24c-b5ed-43d1-8783-f02da4f75ad4 from /var/lib/nova/compute_id
Oct 02 11:53:17 compute-0 nova_compute[256766]: 2025-10-02 11:53:17.483 2 WARNING nova.compute.manager [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] Compute nodes ['a293a24c-b5ed-43d1-8783-f02da4f75ad4'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Oct 02 11:53:17 compute-0 nova_compute[256766]: 2025-10-02 11:53:17.553 2 INFO nova.compute.manager [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct 02 11:53:17 compute-0 nova_compute[256766]: 2025-10-02 11:53:17.592 2 WARNING nova.compute.manager [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 02 11:53:17 compute-0 nova_compute[256766]: 2025-10-02 11:53:17.592 2 DEBUG oslo_concurrency.lockutils [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:53:17 compute-0 nova_compute[256766]: 2025-10-02 11:53:17.593 2 DEBUG oslo_concurrency.lockutils [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:53:17 compute-0 nova_compute[256766]: 2025-10-02 11:53:17.593 2 DEBUG oslo_concurrency.lockutils [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:53:17 compute-0 nova_compute[256766]: 2025-10-02 11:53:17.593 2 DEBUG nova.compute.resource_tracker [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 11:53:17 compute-0 nova_compute[256766]: 2025-10-02 11:53:17.593 2 DEBUG oslo_concurrency.processutils [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:53:17 compute-0 ceph-mon[73607]: pgmap v766: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:17 compute-0 sudo[257682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulkjqfslraaygmughaiuxrmrqsdkdkti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405997.562028-5238-60525141801936/AnsiballZ_systemd.py'
Oct 02 11:53:17 compute-0 sudo[257682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 11:53:18 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1791391173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:53:18 compute-0 nova_compute[256766]: 2025-10-02 11:53:18.019 2 DEBUG oslo_concurrency.processutils [None req-c21bea09-d16a-45d1-b531-a4d73ca30da3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:53:18 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Oct 02 11:53:18 compute-0 systemd[1]: Started libvirt nodedev daemon.
Oct 02 11:53:18 compute-0 python3.9[257684]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 11:53:18 compute-0 systemd[1]: Stopping nova_compute container...
Oct 02 11:53:18 compute-0 nova_compute[256766]: 2025-10-02 11:53:18.241 2 DEBUG oslo_concurrency.lockutils [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 11:53:18 compute-0 nova_compute[256766]: 2025-10-02 11:53:18.241 2 DEBUG oslo_concurrency.lockutils [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 11:53:18 compute-0 nova_compute[256766]: 2025-10-02 11:53:18.241 2 DEBUG oslo_concurrency.lockutils [None req-6a884699-53e9-4c02-8474-913a2db2b51e - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 11:53:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:18.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:18 compute-0 virtqemud[257280]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 02 11:53:18 compute-0 virtqemud[257280]: hostname: compute-0
Oct 02 11:53:18 compute-0 virtqemud[257280]: End of file while reading data: Input/output error
Oct 02 11:53:18 compute-0 systemd[1]: libpod-813a58f3d2838c54de5503dcf1d12c543a24db20d36517da0b52796545261ce0.scope: Deactivated successfully.
Oct 02 11:53:18 compute-0 systemd[1]: libpod-813a58f3d2838c54de5503dcf1d12c543a24db20d36517da0b52796545261ce0.scope: Consumed 3.553s CPU time.
Oct 02 11:53:18 compute-0 podman[257710]: 2025-10-02 11:53:18.607379447 +0000 UTC m=+0.415949318 container died 813a58f3d2838c54de5503dcf1d12c543a24db20d36517da0b52796545261ce0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=nova_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3)
Oct 02 11:53:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-813a58f3d2838c54de5503dcf1d12c543a24db20d36517da0b52796545261ce0-userdata-shm.mount: Deactivated successfully.
Oct 02 11:53:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:18.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-89655e451678009717071f59c22a48c7b3c8887a4673dead44f5b0855fcda41d-merged.mount: Deactivated successfully.
Oct 02 11:53:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:19 compute-0 podman[257744]: 2025-10-02 11:53:19.962670952 +0000 UTC m=+0.099762886 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 11:53:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:53:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:20.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:53:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:20.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1791391173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:53:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4205628115' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:53:21 compute-0 podman[257710]: 2025-10-02 11:53:21.521266779 +0000 UTC m=+3.329836650 container cleanup 813a58f3d2838c54de5503dcf1d12c543a24db20d36517da0b52796545261ce0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:53:21 compute-0 podman[257710]: nova_compute
Oct 02 11:53:21 compute-0 podman[257774]: nova_compute
Oct 02 11:53:21 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct 02 11:53:21 compute-0 systemd[1]: Stopped nova_compute container.
Oct 02 11:53:21 compute-0 systemd[1]: Starting nova_compute container...
Oct 02 11:53:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:53:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89655e451678009717071f59c22a48c7b3c8887a4673dead44f5b0855fcda41d/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89655e451678009717071f59c22a48c7b3c8887a4673dead44f5b0855fcda41d/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89655e451678009717071f59c22a48c7b3c8887a4673dead44f5b0855fcda41d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89655e451678009717071f59c22a48c7b3c8887a4673dead44f5b0855fcda41d/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89655e451678009717071f59c22a48c7b3c8887a4673dead44f5b0855fcda41d/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:21 compute-0 podman[257787]: 2025-10-02 11:53:21.699758449 +0000 UTC m=+0.095317615 container init 813a58f3d2838c54de5503dcf1d12c543a24db20d36517da0b52796545261ce0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251001)
Oct 02 11:53:21 compute-0 podman[257787]: 2025-10-02 11:53:21.705084231 +0000 UTC m=+0.100643367 container start 813a58f3d2838c54de5503dcf1d12c543a24db20d36517da0b52796545261ce0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 11:53:21 compute-0 nova_compute[257802]: + sudo -E kolla_set_configs
Oct 02 11:53:21 compute-0 podman[257787]: nova_compute
Oct 02 11:53:21 compute-0 systemd[1]: Started nova_compute container.
Oct 02 11:53:21 compute-0 sudo[257682]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Validating config file
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Copying service configuration files
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Deleting /etc/ceph
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Creating directory /etc/ceph
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Setting permission for /etc/ceph
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Writing out command to execute
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 02 11:53:21 compute-0 nova_compute[257802]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 02 11:53:21 compute-0 nova_compute[257802]: ++ cat /run_command
Oct 02 11:53:21 compute-0 nova_compute[257802]: + CMD=nova-compute
Oct 02 11:53:21 compute-0 nova_compute[257802]: + ARGS=
Oct 02 11:53:21 compute-0 nova_compute[257802]: + sudo kolla_copy_cacerts
Oct 02 11:53:21 compute-0 nova_compute[257802]: + [[ ! -n '' ]]
Oct 02 11:53:21 compute-0 nova_compute[257802]: + . kolla_extend_start
Oct 02 11:53:21 compute-0 nova_compute[257802]: Running command: 'nova-compute'
Oct 02 11:53:21 compute-0 nova_compute[257802]: + echo 'Running command: '\''nova-compute'\'''
Oct 02 11:53:21 compute-0 nova_compute[257802]: + umask 0022
Oct 02 11:53:21 compute-0 nova_compute[257802]: + exec nova-compute
Oct 02 11:53:22 compute-0 sudo[257963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tokclctjdmydemnbgjsirdtmsdmiqdog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406001.9685757-5265-83222927864780/AnsiballZ_podman_container.py'
Oct 02 11:53:22 compute-0 sudo[257963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:53:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:22.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:53:22 compute-0 ceph-mon[73607]: pgmap v767: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:22 compute-0 ceph-mon[73607]: pgmap v768: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:22 compute-0 python3.9[257965]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 02 11:53:22 compute-0 systemd[1]: Started libpod-conmon-2879575e524e25d8488920d42b0bcac3c847dde96f09afd24c6cbeecb514f01e.scope.
Oct 02 11:53:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:22.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:53:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92cbc9736afc5e19e32dde346b49bd0e6ba4df87869eaf5c27804fd0f61d9a02/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92cbc9736afc5e19e32dde346b49bd0e6ba4df87869eaf5c27804fd0f61d9a02/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92cbc9736afc5e19e32dde346b49bd0e6ba4df87869eaf5c27804fd0f61d9a02/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:22 compute-0 podman[257991]: 2025-10-02 11:53:22.680804817 +0000 UTC m=+0.113192067 container init 2879575e524e25d8488920d42b0bcac3c847dde96f09afd24c6cbeecb514f01e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 11:53:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:22 compute-0 podman[257991]: 2025-10-02 11:53:22.687839571 +0000 UTC m=+0.120226801 container start 2879575e524e25d8488920d42b0bcac3c847dde96f09afd24c6cbeecb514f01e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 11:53:22 compute-0 python3.9[257965]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct 02 11:53:22 compute-0 nova_compute_init[258010]: INFO:nova_statedir:Applying nova statedir ownership
Oct 02 11:53:22 compute-0 nova_compute_init[258010]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct 02 11:53:22 compute-0 nova_compute_init[258010]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct 02 11:53:22 compute-0 nova_compute_init[258010]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct 02 11:53:22 compute-0 nova_compute_init[258010]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct 02 11:53:22 compute-0 nova_compute_init[258010]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct 02 11:53:22 compute-0 nova_compute_init[258010]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct 02 11:53:22 compute-0 nova_compute_init[258010]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct 02 11:53:22 compute-0 nova_compute_init[258010]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct 02 11:53:22 compute-0 nova_compute_init[258010]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct 02 11:53:22 compute-0 nova_compute_init[258010]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct 02 11:53:22 compute-0 nova_compute_init[258010]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct 02 11:53:22 compute-0 nova_compute_init[258010]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct 02 11:53:22 compute-0 nova_compute_init[258010]: INFO:nova_statedir:Nova statedir ownership complete
Oct 02 11:53:22 compute-0 systemd[1]: libpod-2879575e524e25d8488920d42b0bcac3c847dde96f09afd24c6cbeecb514f01e.scope: Deactivated successfully.
Oct 02 11:53:22 compute-0 podman[258024]: 2025-10-02 11:53:22.782589292 +0000 UTC m=+0.026137957 container died 2879575e524e25d8488920d42b0bcac3c847dde96f09afd24c6cbeecb514f01e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3)
Oct 02 11:53:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2879575e524e25d8488920d42b0bcac3c847dde96f09afd24c6cbeecb514f01e-userdata-shm.mount: Deactivated successfully.
Oct 02 11:53:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-92cbc9736afc5e19e32dde346b49bd0e6ba4df87869eaf5c27804fd0f61d9a02-merged.mount: Deactivated successfully.
Oct 02 11:53:22 compute-0 podman[258024]: 2025-10-02 11:53:22.818218303 +0000 UTC m=+0.061766928 container cleanup 2879575e524e25d8488920d42b0bcac3c847dde96f09afd24c6cbeecb514f01e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=nova_compute_init, io.buildah.version=1.41.3)
Oct 02 11:53:22 compute-0 systemd[1]: libpod-conmon-2879575e524e25d8488920d42b0bcac3c847dde96f09afd24c6cbeecb514f01e.scope: Deactivated successfully.
Oct 02 11:53:22 compute-0 sudo[257963]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:23 compute-0 ceph-mon[73607]: pgmap v769: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:23 compute-0 sshd-session[218880]: Connection closed by 192.168.122.30 port 37556
Oct 02 11:53:23 compute-0 sshd-session[218877]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:53:23 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Oct 02 11:53:23 compute-0 systemd[1]: session-51.scope: Consumed 2min 38.187s CPU time.
Oct 02 11:53:23 compute-0 systemd-logind[789]: Session 51 logged out. Waiting for processes to exit.
Oct 02 11:53:23 compute-0 systemd-logind[789]: Removed session 51.
Oct 02 11:53:23 compute-0 nova_compute[257802]: 2025-10-02 11:53:23.916 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 02 11:53:23 compute-0 nova_compute[257802]: 2025-10-02 11:53:23.918 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 02 11:53:23 compute-0 nova_compute[257802]: 2025-10-02 11:53:23.919 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 02 11:53:23 compute-0 nova_compute[257802]: 2025-10-02 11:53:23.919 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.063 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.087 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:53:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:24.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.569 2 INFO nova.virt.driver [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 02 11:53:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:24.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.670 2 INFO nova.compute.provider_config [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.681 2 DEBUG oslo_concurrency.lockutils [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.682 2 DEBUG oslo_concurrency.lockutils [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.682 2 DEBUG oslo_concurrency.lockutils [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.682 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.683 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.683 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.683 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.683 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.683 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.684 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.684 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.684 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.684 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.684 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.684 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.685 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.685 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.685 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.685 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.686 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.686 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.686 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.687 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.687 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.687 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.688 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.688 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.688 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.688 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.688 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.689 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.689 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.689 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.689 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.690 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.690 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.690 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.690 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.691 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.691 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.691 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.691 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.691 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.692 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.692 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.692 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.692 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.692 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.693 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.693 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.693 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.693 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.693 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.694 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.694 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.694 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.694 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.694 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.695 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.695 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.695 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.695 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.695 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.696 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.696 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.696 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.696 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.696 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.697 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.697 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.697 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.697 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.697 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.698 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.698 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.698 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.698 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.698 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.698 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.699 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.699 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.699 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.699 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.700 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.700 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.700 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.700 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.700 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.701 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.701 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.701 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.701 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.702 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.702 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.702 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.702 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.702 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.702 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.703 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.703 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.703 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.703 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.704 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.704 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.704 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.704 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.704 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.705 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.705 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.705 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.705 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.706 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.706 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.706 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.706 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.706 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.707 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.707 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.707 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.707 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.708 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.708 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.708 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.708 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.709 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.709 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.709 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.709 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.710 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.710 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.710 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.710 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.711 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.711 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.711 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.711 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.712 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.712 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.712 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.712 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.713 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.713 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.713 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.713 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.713 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.714 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.714 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.714 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.714 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.714 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.715 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.715 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.715 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.715 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.715 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.716 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.716 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.716 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.716 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.717 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.717 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.717 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.717 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.718 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.718 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.718 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.718 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.718 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.719 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.719 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.719 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.719 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.719 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.719 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.720 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.720 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.720 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.720 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.720 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.721 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.721 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.721 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.722 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.722 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.722 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.722 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.723 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.723 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.723 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.723 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.724 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.724 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.724 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.724 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.724 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.725 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.725 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.725 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.725 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.725 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.726 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.726 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.726 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.726 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.726 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.727 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.727 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.727 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.727 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.727 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.727 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.728 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.728 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.728 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.728 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.728 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.729 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.729 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.729 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.729 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.729 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.730 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.730 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.730 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.730 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.730 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.731 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.731 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.731 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.731 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.732 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.732 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.732 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.733 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.733 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.733 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.734 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.734 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.734 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.735 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.735 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.735 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.735 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.735 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.736 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.736 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.736 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.736 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.737 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.737 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.737 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.737 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.738 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.738 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.738 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.738 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.739 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.739 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.739 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.739 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.739 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.740 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.740 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.740 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.740 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.741 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.741 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.741 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.741 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.742 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.742 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.742 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.742 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.742 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.743 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.743 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.743 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.743 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.743 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.743 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.744 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.744 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.744 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.744 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.744 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.745 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.745 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.745 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.745 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.745 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.745 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.746 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.746 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.746 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.746 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.746 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.747 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.747 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.747 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.747 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.747 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.748 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.748 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.748 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.748 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.748 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.748 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.749 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.749 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.749 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.749 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.749 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.750 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.750 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.750 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.750 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.750 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.750 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.751 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.751 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.751 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.751 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.751 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.752 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.752 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.752 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.752 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.752 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.752 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.753 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.753 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.753 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.753 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.753 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.754 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.754 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.754 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.754 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.754 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.754 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.755 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.755 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.755 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.755 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.755 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.756 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.756 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.756 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.756 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.757 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.757 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.757 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.757 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.757 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.758 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.758 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.758 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.758 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.759 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.759 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.759 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.759 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.759 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.760 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.760 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.760 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.760 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.760 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.760 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.761 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.761 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.761 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.761 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.762 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.762 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.762 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.762 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.762 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.763 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.763 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.763 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.763 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.763 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.764 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.764 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.764 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.764 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.764 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.765 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.765 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.765 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.765 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.765 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.766 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.766 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.766 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.766 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.766 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.767 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.767 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.767 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.767 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.767 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.768 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.768 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.768 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.768 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.768 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.768 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.769 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.769 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.769 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.769 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.769 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.770 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.770 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.770 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.770 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.771 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.771 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.771 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.771 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.771 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.772 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.772 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.772 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.772 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.772 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.773 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.773 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.773 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.773 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.773 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.774 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.774 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.774 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.774 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.774 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.775 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.775 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.775 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.775 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.775 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.776 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.776 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.776 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.776 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.776 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.777 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.777 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.777 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.777 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.777 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.778 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.778 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.778 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.778 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.778 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.779 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.779 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.779 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.779 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.779 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.780 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.780 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.780 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.780 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.780 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.781 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.781 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.781 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.781 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.782 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.782 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.782 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.782 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.782 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.783 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.783 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.783 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.783 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.783 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.783 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.784 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.784 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.784 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.784 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.784 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.785 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.785 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.785 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.785 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.785 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.786 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.786 2 WARNING oslo_config.cfg [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 02 11:53:24 compute-0 nova_compute[257802]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 02 11:53:24 compute-0 nova_compute[257802]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 02 11:53:24 compute-0 nova_compute[257802]: and ``live_migration_inbound_addr`` respectively.
Oct 02 11:53:24 compute-0 nova_compute[257802]: ).  Its value may be silently ignored in the future.
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.786 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.787 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.787 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.787 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.787 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.787 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.788 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.788 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.788 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.788 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.789 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.789 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.789 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.789 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.789 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.790 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.790 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.790 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.790 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.rbd_secret_uuid        = fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.791 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.791 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.791 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.791 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.792 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.792 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.792 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.792 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.792 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.793 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.793 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.793 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.793 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.793 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.794 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.794 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.794 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.794 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.794 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.794 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.795 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.795 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.795 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.795 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.795 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.796 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.796 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.796 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.796 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.796 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.797 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.797 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.797 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.797 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.797 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.798 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.798 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.798 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.798 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.798 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.799 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.799 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.799 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.799 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.799 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.799 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.800 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.800 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.800 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.800 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.800 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.801 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.801 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.801 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.801 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.801 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.802 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.802 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.802 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.802 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.802 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.803 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.803 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.803 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.803 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.803 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.803 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.804 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.804 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.804 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.804 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.804 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.805 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.805 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.805 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.805 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.805 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.806 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.806 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.806 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.806 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.806 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.806 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.807 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.807 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.807 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.807 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.808 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.808 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.808 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.808 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.809 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.809 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.809 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.809 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.809 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.809 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.810 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.810 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.810 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.810 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.811 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.811 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.811 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.811 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.811 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.812 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.812 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.812 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.812 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.812 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.813 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.813 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.813 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.813 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.813 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.813 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.814 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.814 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.814 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.814 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.815 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.815 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.815 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.815 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.815 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.815 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.816 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.816 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.816 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.816 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.817 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.817 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.817 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.817 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.817 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.817 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.818 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.818 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.818 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.818 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.818 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.819 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.819 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.819 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.819 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.819 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.819 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.820 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.820 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.820 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.820 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.820 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.821 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.821 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.821 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.821 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.821 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.821 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.822 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.822 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.822 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.822 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.822 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.823 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.823 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.823 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.823 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.823 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.824 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.824 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.824 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.824 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.824 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.825 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.825 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.825 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.825 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.826 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.826 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.826 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.826 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.826 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.827 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.827 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.827 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.827 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.828 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.828 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.828 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.828 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.828 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.829 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.829 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.829 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.829 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.829 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.830 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.830 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.830 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.830 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.830 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.830 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.831 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.831 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.831 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.831 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.831 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.832 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.832 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.832 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.832 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.832 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.833 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.833 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.833 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.833 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.833 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.834 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.834 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.834 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.834 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.834 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.835 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.835 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.835 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.835 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.835 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.836 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.836 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.836 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.837 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.837 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.837 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.837 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.837 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.838 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.838 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.838 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.838 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.838 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.838 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.839 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.839 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.839 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.839 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.840 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.840 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.840 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.840 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.841 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.841 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.841 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.841 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.842 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.842 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.842 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.842 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.843 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.843 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.843 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.843 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.843 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.843 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.844 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.844 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.844 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.844 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.844 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.845 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.845 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.845 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.845 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.845 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.846 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.846 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.846 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.846 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.847 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.847 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.847 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.847 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.847 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.847 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.848 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.848 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.848 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.848 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.848 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.849 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.849 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.849 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.849 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.849 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.850 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.850 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.850 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.850 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.850 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.851 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.851 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.851 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.851 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.851 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.851 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.852 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.852 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.852 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.852 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.852 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.853 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.853 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.853 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.853 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.853 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.854 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.854 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.854 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.854 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.854 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.855 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.855 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.855 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.855 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.855 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.856 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.856 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.856 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.856 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.856 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.856 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.857 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.857 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.857 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.857 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.857 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.858 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.858 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.858 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.858 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.858 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.858 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.859 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.859 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.859 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.859 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.859 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.860 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.860 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.860 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.860 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.860 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.860 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.861 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.861 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.861 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.861 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.861 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.862 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.862 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.862 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.862 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.862 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.863 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.863 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.863 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.863 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.863 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.864 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.864 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.864 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.864 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.865 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.865 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.865 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.865 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.865 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.866 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.866 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.866 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.866 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.866 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.867 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.867 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.867 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.867 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.867 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.868 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.868 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.868 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.868 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.868 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.869 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.869 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.869 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.869 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.869 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.870 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.870 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.870 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.870 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.870 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.870 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.871 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.871 2 DEBUG oslo_service.service [None req-adcf836b-2298-4b67-bfda-5c20330a2dc6 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 02 11:53:24 compute-0 nova_compute[257802]: 2025-10-02 11:53:24.872 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.053 2 INFO nova.virt.node [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Determined node identity a293a24c-b5ed-43d1-8783-f02da4f75ad4 from /var/lib/nova/compute_id
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.054 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.054 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.055 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.055 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.066 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f8ae986bb20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.068 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f8ae986bb20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.069 2 INFO nova.virt.libvirt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Connection event '1' reason 'None'
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.076 2 INFO nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Libvirt host capabilities <capabilities>
Oct 02 11:53:25 compute-0 nova_compute[257802]: 
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <host>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <uuid>8a59133c-d138-4412-952a-4a6587089b61</uuid>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <cpu>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <arch>x86_64</arch>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model>EPYC-Rome-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <vendor>AMD</vendor>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <microcode version='16777317'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <signature family='23' model='49' stepping='0'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <maxphysaddr mode='emulate' bits='40'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature name='x2apic'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature name='tsc-deadline'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature name='osxsave'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature name='hypervisor'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature name='tsc_adjust'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature name='spec-ctrl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature name='stibp'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature name='arch-capabilities'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature name='ssbd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature name='cmp_legacy'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature name='topoext'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature name='virt-ssbd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature name='lbrv'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature name='tsc-scale'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature name='vmcb-clean'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature name='pause-filter'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature name='pfthreshold'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature name='svme-addr-chk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature name='rdctl-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature name='skip-l1dfl-vmentry'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature name='mds-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature name='pschange-mc-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <pages unit='KiB' size='4'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <pages unit='KiB' size='2048'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <pages unit='KiB' size='1048576'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </cpu>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <power_management>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <suspend_mem/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </power_management>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <iommu support='no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <migration_features>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <live/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <uri_transports>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <uri_transport>tcp</uri_transport>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <uri_transport>rdma</uri_transport>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </uri_transports>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </migration_features>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <topology>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <cells num='1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <cell id='0'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:           <memory unit='KiB'>7864104</memory>
Oct 02 11:53:25 compute-0 nova_compute[257802]:           <pages unit='KiB' size='4'>1966026</pages>
Oct 02 11:53:25 compute-0 nova_compute[257802]:           <pages unit='KiB' size='2048'>0</pages>
Oct 02 11:53:25 compute-0 nova_compute[257802]:           <pages unit='KiB' size='1048576'>0</pages>
Oct 02 11:53:25 compute-0 nova_compute[257802]:           <distances>
Oct 02 11:53:25 compute-0 nova_compute[257802]:             <sibling id='0' value='10'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:           </distances>
Oct 02 11:53:25 compute-0 nova_compute[257802]:           <cpus num='8'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:           </cpus>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         </cell>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </cells>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </topology>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <cache>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </cache>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <secmodel>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model>selinux</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <doi>0</doi>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </secmodel>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <secmodel>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model>dac</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <doi>0</doi>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <baselabel type='kvm'>+107:+107</baselabel>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <baselabel type='qemu'>+107:+107</baselabel>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </secmodel>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   </host>
Oct 02 11:53:25 compute-0 nova_compute[257802]: 
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <guest>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <os_type>hvm</os_type>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <arch name='i686'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <wordsize>32</wordsize>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <domain type='qemu'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <domain type='kvm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </arch>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <features>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <pae/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <nonpae/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <acpi default='on' toggle='yes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <apic default='on' toggle='no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <cpuselection/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <deviceboot/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <disksnapshot default='on' toggle='no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <externalSnapshot/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </features>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   </guest>
Oct 02 11:53:25 compute-0 nova_compute[257802]: 
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <guest>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <os_type>hvm</os_type>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <arch name='x86_64'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <wordsize>64</wordsize>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <domain type='qemu'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <domain type='kvm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </arch>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <features>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <acpi default='on' toggle='yes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <apic default='on' toggle='no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <cpuselection/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <deviceboot/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <disksnapshot default='on' toggle='no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <externalSnapshot/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </features>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   </guest>
Oct 02 11:53:25 compute-0 nova_compute[257802]: 
Oct 02 11:53:25 compute-0 nova_compute[257802]: </capabilities>
Oct 02 11:53:25 compute-0 nova_compute[257802]: 
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.081 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.084 2 DEBUG nova.virt.libvirt.volume.mount [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.085 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 02 11:53:25 compute-0 nova_compute[257802]: <domainCapabilities>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <domain>kvm</domain>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <arch>i686</arch>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <vcpu max='4096'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <iothreads supported='yes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <os supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <enum name='firmware'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <loader supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='type'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>rom</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>pflash</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='readonly'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>yes</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>no</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='secure'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>no</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </loader>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   </os>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <cpu>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <mode name='host-passthrough' supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='hostPassthroughMigratable'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>on</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>off</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </mode>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <mode name='maximum' supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='maximumMigratable'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>on</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>off</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </mode>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <mode name='host-model' supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <vendor>AMD</vendor>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='x2apic'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='hypervisor'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='stibp'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='ssbd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='overflow-recov'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='succor'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='ibrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='lbrv'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='tsc-scale'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='flushbyasid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='pause-filter'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='pfthreshold'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='rdctl-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='mds-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='gds-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='rfds-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='disable' name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </mode>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <mode name='custom' supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell-noTSX'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cascadelake-Server'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cooperlake'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cooperlake-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cooperlake-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Denverton'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mpx'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Denverton-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mpx'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Denverton-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Denverton-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Dhyana-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Genoa'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amd-psfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='auto-ibrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='no-nested-data-bp'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='null-sel-clr-base'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='stibp-always-on'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amd-psfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='auto-ibrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='no-nested-data-bp'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='null-sel-clr-base'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='stibp-always-on'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Milan'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Milan-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Milan-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amd-psfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='no-nested-data-bp'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='null-sel-clr-base'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='stibp-always-on'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Rome'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Rome-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Rome-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Rome-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='GraniteRapids'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-tile'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fbsdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrc'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fzrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mcdt-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pbrsb-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='prefetchiti'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='psdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='GraniteRapids-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-tile'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fbsdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrc'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fzrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mcdt-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pbrsb-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='prefetchiti'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='psdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='GraniteRapids-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-tile'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx10'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx10-128'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx10-256'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx10-512'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cldemote'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fbsdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrc'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fzrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mcdt-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdir64b'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdiri'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pbrsb-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='prefetchiti'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='psdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell-noTSX'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-v5'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-v6'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-v7'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='IvyBridge'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='IvyBridge-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='IvyBridge-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='IvyBridge-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='KnightsMill'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-4fmaps'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-4vnniw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512er'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512pf'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='KnightsMill-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-4fmaps'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-4vnniw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512er'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512pf'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Opteron_G4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fma4'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xop'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Opteron_G4-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fma4'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xop'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Opteron_G5'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fma4'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tbm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xop'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Opteron_G5-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fma4'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tbm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xop'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='SapphireRapids'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-tile'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrc'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fzrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='SapphireRapids-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-tile'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrc'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fzrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='SapphireRapids-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-tile'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fbsdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrc'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fzrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='psdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='SapphireRapids-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-tile'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cldemote'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fbsdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrc'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fzrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdir64b'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdiri'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='psdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='SierraForest'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-ne-convert'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cmpccxadd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fbsdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mcdt-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pbrsb-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='psdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='SierraForest-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-ne-convert'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cmpccxadd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fbsdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mcdt-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pbrsb-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='psdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Client'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Client-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Client-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Client-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Client-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server-v5'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Snowridge'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cldemote'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='core-capability'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdir64b'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdiri'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mpx'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='split-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Snowridge-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cldemote'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='core-capability'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdir64b'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdiri'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mpx'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='split-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Snowridge-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cldemote'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='core-capability'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdir64b'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdiri'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='split-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Snowridge-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cldemote'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='core-capability'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdir64b'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdiri'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='split-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Snowridge-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cldemote'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdir64b'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdiri'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='athlon'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnow'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnowext'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='athlon-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnow'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnowext'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='core2duo'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='core2duo-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='coreduo'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='coreduo-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='n270'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='n270-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='phenom'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnow'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnowext'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='phenom-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnow'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnowext'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </mode>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   </cpu>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <memoryBacking supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <enum name='sourceType'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <value>file</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <value>anonymous</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <value>memfd</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   </memoryBacking>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <devices>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <disk supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='diskDevice'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>disk</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>cdrom</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>floppy</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>lun</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='bus'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>fdc</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>scsi</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>usb</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>sata</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='model'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio-transitional</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio-non-transitional</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </disk>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <graphics supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='type'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>vnc</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>egl-headless</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>dbus</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </graphics>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <video supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='modelType'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>vga</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>cirrus</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>none</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>bochs</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>ramfb</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </video>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <hostdev supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='mode'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>subsystem</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='startupPolicy'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>default</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>mandatory</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>requisite</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>optional</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='subsysType'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>usb</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>pci</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>scsi</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='capsType'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='pciBackend'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </hostdev>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <rng supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='model'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio-transitional</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio-non-transitional</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='backendModel'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>random</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>egd</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>builtin</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </rng>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <filesystem supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='driverType'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>path</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>handle</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtiofs</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </filesystem>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <tpm supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='model'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>tpm-tis</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>tpm-crb</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='backendModel'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>emulator</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>external</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='backendVersion'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>2.0</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </tpm>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <redirdev supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='bus'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>usb</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </redirdev>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <channel supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='type'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>pty</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>unix</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </channel>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <crypto supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='model'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='type'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>qemu</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='backendModel'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>builtin</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </crypto>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <interface supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='backendType'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>default</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>passt</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </interface>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <panic supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='model'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>isa</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>hyperv</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </panic>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   </devices>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <features>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <gic supported='no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <vmcoreinfo supported='yes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <genid supported='yes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <backingStoreInput supported='yes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <backup supported='yes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <async-teardown supported='yes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <ps2 supported='yes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <sev supported='no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <sgx supported='no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <hyperv supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='features'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>relaxed</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>vapic</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>spinlocks</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>vpindex</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>runtime</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>synic</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>stimer</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>reset</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>vendor_id</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>frequencies</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>reenlightenment</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>tlbflush</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>ipi</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>avic</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>emsr_bitmap</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>xmm_input</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </hyperv>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <launchSecurity supported='no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   </features>
Oct 02 11:53:25 compute-0 nova_compute[257802]: </domainCapabilities>
Oct 02 11:53:25 compute-0 nova_compute[257802]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.090 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 02 11:53:25 compute-0 nova_compute[257802]: <domainCapabilities>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <domain>kvm</domain>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <arch>i686</arch>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <vcpu max='240'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <iothreads supported='yes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <os supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <enum name='firmware'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <loader supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='type'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>rom</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>pflash</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='readonly'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>yes</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>no</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='secure'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>no</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </loader>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   </os>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <cpu>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <mode name='host-passthrough' supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='hostPassthroughMigratable'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>on</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>off</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </mode>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <mode name='maximum' supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='maximumMigratable'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>on</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>off</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </mode>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <mode name='host-model' supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <vendor>AMD</vendor>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='x2apic'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='hypervisor'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='stibp'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='ssbd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='overflow-recov'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='succor'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='ibrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='lbrv'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='tsc-scale'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='flushbyasid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='pause-filter'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='pfthreshold'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='rdctl-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='mds-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='gds-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='rfds-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='disable' name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </mode>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <mode name='custom' supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell-noTSX'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cascadelake-Server'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cooperlake'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cooperlake-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cooperlake-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Denverton'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mpx'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Denverton-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mpx'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Denverton-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Denverton-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Dhyana-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Genoa'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amd-psfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='auto-ibrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='no-nested-data-bp'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='null-sel-clr-base'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='stibp-always-on'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amd-psfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='auto-ibrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='no-nested-data-bp'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='null-sel-clr-base'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='stibp-always-on'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Milan'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Milan-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Milan-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amd-psfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='no-nested-data-bp'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='null-sel-clr-base'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='stibp-always-on'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Rome'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Rome-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Rome-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Rome-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='GraniteRapids'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-tile'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fbsdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrc'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fzrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mcdt-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pbrsb-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='prefetchiti'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='psdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='GraniteRapids-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-tile'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fbsdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrc'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fzrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mcdt-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pbrsb-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='prefetchiti'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='psdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='GraniteRapids-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-tile'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx10'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx10-128'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx10-256'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx10-512'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cldemote'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fbsdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrc'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fzrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mcdt-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdir64b'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdiri'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pbrsb-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='prefetchiti'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='psdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell-noTSX'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-v5'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-v6'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-v7'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='IvyBridge'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='IvyBridge-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='IvyBridge-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='IvyBridge-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='KnightsMill'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-4fmaps'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-4vnniw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512er'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512pf'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='KnightsMill-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-4fmaps'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-4vnniw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512er'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512pf'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Opteron_G4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fma4'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xop'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Opteron_G4-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fma4'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xop'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Opteron_G5'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fma4'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tbm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xop'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Opteron_G5-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fma4'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tbm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xop'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='SapphireRapids'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-tile'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrc'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fzrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='SapphireRapids-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-tile'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrc'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fzrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='SapphireRapids-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-tile'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fbsdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrc'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fzrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='psdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='SapphireRapids-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-tile'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cldemote'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fbsdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrc'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fzrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdir64b'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdiri'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='psdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='SierraForest'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-ne-convert'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cmpccxadd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fbsdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mcdt-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pbrsb-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='psdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='SierraForest-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-ne-convert'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cmpccxadd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fbsdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mcdt-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pbrsb-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='psdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Client'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Client-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Client-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Client-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Client-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server-v5'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Snowridge'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cldemote'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='core-capability'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdir64b'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdiri'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mpx'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='split-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Snowridge-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cldemote'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='core-capability'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdir64b'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdiri'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mpx'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='split-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Snowridge-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cldemote'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='core-capability'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdir64b'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdiri'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='split-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Snowridge-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cldemote'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='core-capability'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdir64b'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdiri'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='split-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Snowridge-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cldemote'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdir64b'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdiri'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='athlon'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnow'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnowext'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='athlon-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnow'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnowext'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='core2duo'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='core2duo-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='coreduo'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='coreduo-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='n270'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='n270-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='phenom'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnow'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnowext'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='phenom-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnow'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnowext'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </mode>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   </cpu>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <memoryBacking supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <enum name='sourceType'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <value>file</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <value>anonymous</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <value>memfd</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   </memoryBacking>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <devices>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <disk supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='diskDevice'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>disk</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>cdrom</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>floppy</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>lun</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='bus'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>ide</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>fdc</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>scsi</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>usb</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>sata</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='model'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio-transitional</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio-non-transitional</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </disk>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <graphics supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='type'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>vnc</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>egl-headless</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>dbus</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </graphics>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <video supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='modelType'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>vga</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>cirrus</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>none</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>bochs</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>ramfb</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </video>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <hostdev supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='mode'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>subsystem</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='startupPolicy'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>default</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>mandatory</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>requisite</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>optional</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='subsysType'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>usb</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>pci</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>scsi</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='capsType'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='pciBackend'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </hostdev>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <rng supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='model'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio-transitional</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio-non-transitional</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='backendModel'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>random</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>egd</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>builtin</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </rng>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <filesystem supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='driverType'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>path</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>handle</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtiofs</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </filesystem>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <tpm supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='model'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>tpm-tis</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>tpm-crb</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='backendModel'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>emulator</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>external</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='backendVersion'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>2.0</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </tpm>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <redirdev supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='bus'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>usb</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </redirdev>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <channel supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='type'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>pty</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>unix</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </channel>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <crypto supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='model'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='type'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>qemu</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='backendModel'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>builtin</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </crypto>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <interface supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='backendType'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>default</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>passt</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </interface>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <panic supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='model'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>isa</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>hyperv</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </panic>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   </devices>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <features>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <gic supported='no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <vmcoreinfo supported='yes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <genid supported='yes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <backingStoreInput supported='yes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <backup supported='yes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <async-teardown supported='yes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <ps2 supported='yes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <sev supported='no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <sgx supported='no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <hyperv supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='features'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>relaxed</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>vapic</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>spinlocks</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>vpindex</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>runtime</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>synic</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>stimer</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>reset</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>vendor_id</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>frequencies</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>reenlightenment</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>tlbflush</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>ipi</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>avic</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>emsr_bitmap</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>xmm_input</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </hyperv>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <launchSecurity supported='no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   </features>
Oct 02 11:53:25 compute-0 nova_compute[257802]: </domainCapabilities>
Oct 02 11:53:25 compute-0 nova_compute[257802]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.119 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.123 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 02 11:53:25 compute-0 nova_compute[257802]: <domainCapabilities>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <domain>kvm</domain>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <arch>x86_64</arch>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <vcpu max='4096'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <iothreads supported='yes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <os supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <enum name='firmware'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <value>efi</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <loader supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='type'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>rom</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>pflash</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='readonly'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>yes</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>no</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='secure'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>yes</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>no</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </loader>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   </os>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <cpu>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <mode name='host-passthrough' supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='hostPassthroughMigratable'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>on</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>off</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </mode>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <mode name='maximum' supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='maximumMigratable'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>on</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>off</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </mode>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <mode name='host-model' supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <vendor>AMD</vendor>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='x2apic'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='hypervisor'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='stibp'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='ssbd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='overflow-recov'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='succor'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='ibrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='lbrv'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='tsc-scale'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='flushbyasid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='pause-filter'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='pfthreshold'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='rdctl-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='mds-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='gds-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='rfds-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='disable' name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </mode>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <mode name='custom' supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell-noTSX'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cascadelake-Server'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cooperlake'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cooperlake-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cooperlake-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Denverton'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mpx'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Denverton-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mpx'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Denverton-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Denverton-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Dhyana-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Genoa'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amd-psfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='auto-ibrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='no-nested-data-bp'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='null-sel-clr-base'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='stibp-always-on'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amd-psfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='auto-ibrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='no-nested-data-bp'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='null-sel-clr-base'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='stibp-always-on'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Milan'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Milan-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Milan-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amd-psfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='no-nested-data-bp'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='null-sel-clr-base'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='stibp-always-on'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Rome'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Rome-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Rome-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Rome-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='GraniteRapids'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-tile'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fbsdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrc'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fzrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mcdt-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pbrsb-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='prefetchiti'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='psdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='GraniteRapids-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-tile'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fbsdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrc'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fzrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mcdt-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pbrsb-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='prefetchiti'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='psdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='GraniteRapids-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-tile'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx10'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx10-128'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx10-256'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx10-512'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cldemote'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fbsdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrc'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fzrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mcdt-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdir64b'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdiri'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pbrsb-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='prefetchiti'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='psdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell-noTSX'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-v5'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-v6'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-v7'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='IvyBridge'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='IvyBridge-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='IvyBridge-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='IvyBridge-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='KnightsMill'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-4fmaps'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-4vnniw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512er'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512pf'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='KnightsMill-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-4fmaps'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-4vnniw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512er'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512pf'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Opteron_G4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fma4'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xop'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Opteron_G4-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fma4'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xop'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Opteron_G5'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fma4'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tbm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xop'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Opteron_G5-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fma4'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tbm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xop'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='SapphireRapids'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-tile'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrc'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fzrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='SapphireRapids-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-tile'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrc'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fzrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='SapphireRapids-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-tile'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fbsdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrc'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fzrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='psdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='SapphireRapids-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-tile'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cldemote'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fbsdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrc'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fzrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdir64b'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdiri'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='psdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='SierraForest'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-ne-convert'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cmpccxadd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fbsdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mcdt-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pbrsb-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='psdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='SierraForest-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-ne-convert'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cmpccxadd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fbsdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mcdt-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pbrsb-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='psdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Client'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Client-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Client-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Client-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Client-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server-v5'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Snowridge'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cldemote'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='core-capability'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdir64b'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdiri'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mpx'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='split-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Snowridge-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cldemote'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='core-capability'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdir64b'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdiri'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mpx'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='split-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Snowridge-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cldemote'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='core-capability'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdir64b'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdiri'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='split-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Snowridge-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cldemote'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='core-capability'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdir64b'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdiri'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='split-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Snowridge-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cldemote'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdir64b'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdiri'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='athlon'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnow'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnowext'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='athlon-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnow'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnowext'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='core2duo'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='core2duo-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='coreduo'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='coreduo-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='n270'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='n270-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='phenom'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnow'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnowext'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='phenom-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnow'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnowext'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </mode>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   </cpu>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <memoryBacking supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <enum name='sourceType'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <value>file</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <value>anonymous</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <value>memfd</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   </memoryBacking>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <devices>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <disk supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='diskDevice'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>disk</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>cdrom</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>floppy</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>lun</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='bus'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>fdc</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>scsi</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>usb</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>sata</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='model'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio-transitional</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio-non-transitional</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </disk>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <graphics supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='type'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>vnc</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>egl-headless</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>dbus</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </graphics>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <video supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='modelType'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>vga</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>cirrus</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>none</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>bochs</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>ramfb</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </video>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <hostdev supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='mode'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>subsystem</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='startupPolicy'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>default</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>mandatory</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>requisite</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>optional</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='subsysType'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>usb</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>pci</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>scsi</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='capsType'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='pciBackend'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </hostdev>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <rng supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='model'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio-transitional</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio-non-transitional</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='backendModel'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>random</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>egd</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>builtin</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </rng>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <filesystem supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='driverType'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>path</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>handle</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtiofs</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </filesystem>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <tpm supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='model'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>tpm-tis</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>tpm-crb</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='backendModel'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>emulator</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>external</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='backendVersion'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>2.0</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </tpm>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <redirdev supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='bus'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>usb</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </redirdev>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <channel supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='type'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>pty</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>unix</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </channel>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <crypto supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='model'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='type'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>qemu</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='backendModel'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>builtin</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </crypto>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <interface supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='backendType'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>default</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>passt</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </interface>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <panic supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='model'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>isa</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>hyperv</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </panic>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   </devices>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <features>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <gic supported='no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <vmcoreinfo supported='yes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <genid supported='yes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <backingStoreInput supported='yes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <backup supported='yes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <async-teardown supported='yes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <ps2 supported='yes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <sev supported='no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <sgx supported='no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <hyperv supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='features'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>relaxed</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>vapic</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>spinlocks</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>vpindex</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>runtime</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>synic</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>stimer</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>reset</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>vendor_id</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>frequencies</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>reenlightenment</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>tlbflush</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>ipi</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>avic</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>emsr_bitmap</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>xmm_input</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </hyperv>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <launchSecurity supported='no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   </features>
Oct 02 11:53:25 compute-0 nova_compute[257802]: </domainCapabilities>
Oct 02 11:53:25 compute-0 nova_compute[257802]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.186 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 02 11:53:25 compute-0 nova_compute[257802]: <domainCapabilities>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <domain>kvm</domain>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <arch>x86_64</arch>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <vcpu max='240'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <iothreads supported='yes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <os supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <enum name='firmware'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <loader supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='type'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>rom</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>pflash</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='readonly'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>yes</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>no</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='secure'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>no</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </loader>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   </os>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <cpu>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <mode name='host-passthrough' supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='hostPassthroughMigratable'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>on</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>off</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </mode>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <mode name='maximum' supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='maximumMigratable'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>on</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>off</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </mode>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <mode name='host-model' supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <vendor>AMD</vendor>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='x2apic'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='hypervisor'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='stibp'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='ssbd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='overflow-recov'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='succor'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='ibrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='lbrv'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='tsc-scale'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='flushbyasid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='pause-filter'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='pfthreshold'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='rdctl-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='mds-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='gds-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='require' name='rfds-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <feature policy='disable' name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </mode>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <mode name='custom' supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell-noTSX'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Broadwell-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cascadelake-Server'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cooperlake'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cooperlake-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Cooperlake-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Denverton'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mpx'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Denverton-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mpx'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Denverton-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Denverton-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Dhyana-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Genoa'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amd-psfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='auto-ibrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='no-nested-data-bp'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='null-sel-clr-base'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='stibp-always-on'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amd-psfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='auto-ibrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='no-nested-data-bp'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='null-sel-clr-base'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='stibp-always-on'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Milan'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Milan-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Milan-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amd-psfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='no-nested-data-bp'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='null-sel-clr-base'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='stibp-always-on'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Rome'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Rome-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Rome-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-Rome-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='EPYC-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='GraniteRapids'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-tile'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fbsdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrc'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fzrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mcdt-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pbrsb-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='prefetchiti'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='psdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='GraniteRapids-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-tile'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fbsdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrc'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fzrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mcdt-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pbrsb-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='prefetchiti'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='psdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='GraniteRapids-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-tile'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx10'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx10-128'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx10-256'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx10-512'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cldemote'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fbsdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrc'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fzrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mcdt-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdir64b'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdiri'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pbrsb-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='prefetchiti'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='psdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell-noTSX'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Haswell-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-v5'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-v6'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Icelake-Server-v7'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='IvyBridge'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='IvyBridge-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='IvyBridge-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='IvyBridge-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='KnightsMill'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-4fmaps'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-4vnniw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512er'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512pf'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='KnightsMill-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-4fmaps'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-4vnniw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512er'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512pf'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Opteron_G4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fma4'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xop'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Opteron_G4-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fma4'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xop'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Opteron_G5'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fma4'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tbm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xop'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Opteron_G5-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fma4'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tbm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xop'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='SapphireRapids'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-tile'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrc'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fzrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='SapphireRapids-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-tile'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrc'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fzrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='SapphireRapids-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-tile'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fbsdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrc'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fzrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='psdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='SapphireRapids-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='amx-tile'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-bf16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-fp16'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512-vpopcntdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bitalg'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vbmi2'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cldemote'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fbsdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrc'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fzrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='la57'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdir64b'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdiri'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='psdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='taa-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='tsx-ldtrk'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xfd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='SierraForest'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-ne-convert'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cmpccxadd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fbsdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mcdt-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pbrsb-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='psdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='SierraForest-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-ifma'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-ne-convert'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx-vnni-int8'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='bus-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cmpccxadd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fbsdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='fsrs'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ibrs-all'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mcdt-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pbrsb-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='psdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='sbdr-ssdp-no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='serialize'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vaes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='vpclmulqdq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Client'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Client-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Client-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Client-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Client-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='hle'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='rtm'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Skylake-Server-v5'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512bw'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512cd'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512dq'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512f'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='avx512vl'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='invpcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pcid'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='pku'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Snowridge'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cldemote'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='core-capability'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdir64b'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdiri'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mpx'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='split-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Snowridge-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cldemote'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='core-capability'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdir64b'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdiri'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='mpx'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='split-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Snowridge-v2'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cldemote'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='core-capability'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdir64b'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdiri'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='split-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Snowridge-v3'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cldemote'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='core-capability'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdir64b'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdiri'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='split-lock-detect'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='Snowridge-v4'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='cldemote'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='erms'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='gfni'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdir64b'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='movdiri'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='xsaves'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='athlon'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnow'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnowext'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='athlon-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnow'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnowext'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='core2duo'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='core2duo-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='coreduo'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='coreduo-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='n270'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='n270-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='ss'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='phenom'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnow'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnowext'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <blockers model='phenom-v1'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnow'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <feature name='3dnowext'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </blockers>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </mode>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   </cpu>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <memoryBacking supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <enum name='sourceType'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <value>file</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <value>anonymous</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <value>memfd</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   </memoryBacking>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <devices>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <disk supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='diskDevice'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>disk</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>cdrom</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>floppy</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>lun</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='bus'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>ide</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>fdc</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>scsi</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>usb</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>sata</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='model'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio-transitional</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio-non-transitional</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </disk>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <graphics supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='type'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>vnc</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>egl-headless</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>dbus</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </graphics>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <video supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='modelType'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>vga</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>cirrus</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>none</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>bochs</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>ramfb</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </video>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <hostdev supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='mode'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>subsystem</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='startupPolicy'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>default</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>mandatory</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>requisite</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>optional</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='subsysType'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>usb</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>pci</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>scsi</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='capsType'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='pciBackend'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </hostdev>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <rng supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='model'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio-transitional</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtio-non-transitional</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='backendModel'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>random</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>egd</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>builtin</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </rng>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <filesystem supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='driverType'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>path</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>handle</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>virtiofs</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </filesystem>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <tpm supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='model'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>tpm-tis</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>tpm-crb</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='backendModel'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>emulator</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>external</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='backendVersion'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>2.0</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </tpm>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <redirdev supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='bus'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>usb</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </redirdev>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <channel supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='type'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>pty</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>unix</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </channel>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <crypto supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='model'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='type'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>qemu</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='backendModel'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>builtin</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </crypto>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <interface supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='backendType'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>default</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>passt</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </interface>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <panic supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='model'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>isa</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>hyperv</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </panic>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   </devices>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <features>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <gic supported='no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <vmcoreinfo supported='yes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <genid supported='yes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <backingStoreInput supported='yes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <backup supported='yes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <async-teardown supported='yes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <ps2 supported='yes'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <sev supported='no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <sgx supported='no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <hyperv supported='yes'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       <enum name='features'>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>relaxed</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>vapic</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>spinlocks</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>vpindex</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>runtime</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>synic</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>stimer</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>reset</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>vendor_id</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>frequencies</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>reenlightenment</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>tlbflush</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>ipi</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>avic</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>emsr_bitmap</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:         <value>xmm_input</value>
Oct 02 11:53:25 compute-0 nova_compute[257802]:       </enum>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     </hyperv>
Oct 02 11:53:25 compute-0 nova_compute[257802]:     <launchSecurity supported='no'/>
Oct 02 11:53:25 compute-0 nova_compute[257802]:   </features>
Oct 02 11:53:25 compute-0 nova_compute[257802]: </domainCapabilities>
Oct 02 11:53:25 compute-0 nova_compute[257802]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.239 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.239 2 INFO nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Secure Boot support detected
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.241 2 INFO nova.virt.libvirt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.241 2 INFO nova.virt.libvirt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.249 2 DEBUG nova.virt.libvirt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] cpu compare xml: <cpu match="exact">
Oct 02 11:53:25 compute-0 nova_compute[257802]:   <model>Nehalem</model>
Oct 02 11:53:25 compute-0 nova_compute[257802]: </cpu>
Oct 02 11:53:25 compute-0 nova_compute[257802]:  _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.252 2 DEBUG nova.virt.libvirt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.302 2 INFO nova.virt.node [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Determined node identity a293a24c-b5ed-43d1-8783-f02da4f75ad4 from /var/lib/nova/compute_id
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.327 2 WARNING nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Compute nodes ['a293a24c-b5ed-43d1-8783-f02da4f75ad4'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.393 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.424 2 WARNING nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.425 2 DEBUG oslo_concurrency.lockutils [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.425 2 DEBUG oslo_concurrency.lockutils [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.425 2 DEBUG oslo_concurrency.lockutils [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.425 2 DEBUG nova.compute.resource_tracker [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.426 2 DEBUG oslo_concurrency.processutils [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:53:25 compute-0 sudo[258102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:25 compute-0 sudo[258102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:25 compute-0 sudo[258102]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:25 compute-0 sudo[258146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:25 compute-0 sudo[258146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:25 compute-0 sudo[258146]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:25 compute-0 ceph-mon[73607]: pgmap v770: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/842359320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:53:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2836184901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:53:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 11:53:25 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/601764064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:53:25 compute-0 nova_compute[257802]: 2025-10-02 11:53:25.860 2 DEBUG oslo_concurrency.processutils [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:53:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:26 compute-0 nova_compute[257802]: 2025-10-02 11:53:26.009 2 WARNING nova.virt.libvirt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 11:53:26 compute-0 nova_compute[257802]: 2025-10-02 11:53:26.010 2 DEBUG nova.compute.resource_tracker [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5261MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 11:53:26 compute-0 nova_compute[257802]: 2025-10-02 11:53:26.010 2 DEBUG oslo_concurrency.lockutils [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:53:26 compute-0 nova_compute[257802]: 2025-10-02 11:53:26.010 2 DEBUG oslo_concurrency.lockutils [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:53:26 compute-0 nova_compute[257802]: 2025-10-02 11:53:26.028 2 WARNING nova.compute.resource_tracker [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] No compute node record for compute-0.ctlplane.example.com:a293a24c-b5ed-43d1-8783-f02da4f75ad4: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host a293a24c-b5ed-43d1-8783-f02da4f75ad4 could not be found.
Oct 02 11:53:26 compute-0 nova_compute[257802]: 2025-10-02 11:53:26.046 2 INFO nova.compute.resource_tracker [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: a293a24c-b5ed-43d1-8783-f02da4f75ad4
Oct 02 11:53:26 compute-0 nova_compute[257802]: 2025-10-02 11:53:26.097 2 DEBUG nova.compute.resource_tracker [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 11:53:26 compute-0 nova_compute[257802]: 2025-10-02 11:53:26.097 2 DEBUG nova.compute.resource_tracker [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 11:53:26 compute-0 nova_compute[257802]: 2025-10-02 11:53:26.235 2 INFO nova.scheduler.client.report [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [req-4f0ff0d2-36c8-49f1-9c5a-826085488e04] Created resource provider record via placement API for resource provider with UUID a293a24c-b5ed-43d1-8783-f02da4f75ad4 and name compute-0.ctlplane.example.com.
Oct 02 11:53:26 compute-0 nova_compute[257802]: 2025-10-02 11:53:26.251 2 DEBUG oslo_concurrency.processutils [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:53:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:26.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:26.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 11:53:26 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3641805278' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:53:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:26 compute-0 nova_compute[257802]: 2025-10-02 11:53:26.704 2 DEBUG oslo_concurrency.processutils [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:53:26 compute-0 nova_compute[257802]: 2025-10-02 11:53:26.710 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct 02 11:53:26 compute-0 nova_compute[257802]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Oct 02 11:53:26 compute-0 nova_compute[257802]: 2025-10-02 11:53:26.710 2 INFO nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] kernel doesn't support AMD SEV
Oct 02 11:53:26 compute-0 nova_compute[257802]: 2025-10-02 11:53:26.711 2 DEBUG nova.compute.provider_tree [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Updating inventory in ProviderTree for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 11:53:26 compute-0 nova_compute[257802]: 2025-10-02 11:53:26.712 2 DEBUG nova.virt.libvirt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 11:53:26 compute-0 nova_compute[257802]: 2025-10-02 11:53:26.715 2 DEBUG nova.virt.libvirt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Libvirt baseline CPU <cpu>
Oct 02 11:53:26 compute-0 nova_compute[257802]:   <arch>x86_64</arch>
Oct 02 11:53:26 compute-0 nova_compute[257802]:   <model>Nehalem</model>
Oct 02 11:53:26 compute-0 nova_compute[257802]:   <vendor>AMD</vendor>
Oct 02 11:53:26 compute-0 nova_compute[257802]:   <topology sockets="8" cores="1" threads="1"/>
Oct 02 11:53:26 compute-0 nova_compute[257802]: </cpu>
Oct 02 11:53:26 compute-0 nova_compute[257802]:  _get_guest_baseline_cpu_features /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12537
Oct 02 11:53:26 compute-0 nova_compute[257802]: 2025-10-02 11:53:26.813 2 DEBUG nova.scheduler.client.report [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Updated inventory for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct 02 11:53:26 compute-0 nova_compute[257802]: 2025-10-02 11:53:26.813 2 DEBUG nova.compute.provider_tree [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Updating resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 02 11:53:26 compute-0 nova_compute[257802]: 2025-10-02 11:53:26.813 2 DEBUG nova.compute.provider_tree [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Updating inventory in ProviderTree for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 11:53:26 compute-0 nova_compute[257802]: 2025-10-02 11:53:26.868 2 DEBUG nova.compute.provider_tree [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Updating resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 02 11:53:26 compute-0 nova_compute[257802]: 2025-10-02 11:53:26.886 2 DEBUG nova.compute.resource_tracker [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 11:53:26 compute-0 nova_compute[257802]: 2025-10-02 11:53:26.887 2 DEBUG oslo_concurrency.lockutils [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:53:26 compute-0 nova_compute[257802]: 2025-10-02 11:53:26.887 2 DEBUG nova.service [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Oct 02 11:53:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/601764064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:53:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2103042759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:53:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3641805278' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:53:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2563342187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:53:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:53:26.910 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:53:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:53:26.912 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:53:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:53:26.912 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:53:26 compute-0 nova_compute[257802]: 2025-10-02 11:53:26.950 2 DEBUG nova.service [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Oct 02 11:53:26 compute-0 nova_compute[257802]: 2025-10-02 11:53:26.951 2 DEBUG nova.servicegroup.drivers.db [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Oct 02 11:53:27 compute-0 ceph-mon[73607]: pgmap v771: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:28.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:28.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:28 compute-0 ceph-mon[73607]: pgmap v772: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:30.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:30.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:31 compute-0 ceph-mon[73607]: pgmap v773: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:32.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:32.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:32 compute-0 ceph-mon[73607]: pgmap v774: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:34.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:53:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:34.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:53:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:35 compute-0 ceph-mon[73607]: pgmap v775: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:35 compute-0 podman[258200]: 2025-10-02 11:53:35.947565218 +0000 UTC m=+0.078365637 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 02 11:53:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:53:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:36.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:53:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:36.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:37 compute-0 ceph-mon[73607]: pgmap v776: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:38.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:38.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:39 compute-0 ceph-mon[73607]: pgmap v777: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:40.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:40.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:40 compute-0 ceph-mon[73607]: pgmap v778: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:42.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_11:53:42
Oct 02 11:53:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:53:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 11:53:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['images', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'default.rgw.meta', 'volumes', 'default.rgw.control', 'vms']
Oct 02 11:53:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:53:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:53:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:53:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:53:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:53:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:53:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:53:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:42.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:53:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:53:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:53:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:53:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:53:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:53:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:53:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:53:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:53:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:53:43 compute-0 ceph-mon[73607]: pgmap v779: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:44.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:53:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:44.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:53:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:45 compute-0 sudo[258226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:45 compute-0 sudo[258226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:45 compute-0 sudo[258226]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:45 compute-0 sudo[258251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:45 compute-0 sudo[258251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:45 compute-0 sudo[258251]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:45 compute-0 ceph-mon[73607]: pgmap v780: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:46.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:53:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:46.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:53:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:46 compute-0 ceph-mon[73607]: pgmap v781: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:47 compute-0 podman[258277]: 2025-10-02 11:53:47.909629661 +0000 UTC m=+0.052762615 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 11:53:47 compute-0 podman[258278]: 2025-10-02 11:53:47.916146923 +0000 UTC m=+0.056139018 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 11:53:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:48.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:48.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:49 compute-0 ceph-mon[73607]: pgmap v782: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:50.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:50.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:51 compute-0 ceph-mon[73607]: pgmap v783: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:51 compute-0 podman[258319]: 2025-10-02 11:53:51.948770606 +0000 UTC m=+0.091173294 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:53:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:52.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:52.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:52 compute-0 ceph-mon[73607]: pgmap v784: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:53:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:53:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:53:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:53:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:53:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:53:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:53:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:53:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:53:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:53:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:53:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:53:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:53:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:53:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:53:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:53:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 11:53:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:53:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:53:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:53:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:53:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:53:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:53:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 11:53:54 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2968607232' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 11:53:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 11:53:54 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2968607232' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 11:53:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:54.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2968607232' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 11:53:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2968607232' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 11:53:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:53:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:54.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:53:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 11:53:54 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3069198296' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 11:53:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 11:53:54 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3069198296' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 11:53:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3922491892' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 11:53:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3922491892' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 11:53:55 compute-0 ceph-mon[73607]: pgmap v785: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3069198296' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 11:53:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3069198296' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 11:53:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:53:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:56.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:53:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:56.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:57 compute-0 ceph-mon[73607]: pgmap v786: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:58.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:53:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:58.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:59 compute-0 ceph-mon[73607]: pgmap v787: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:00.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:54:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:00.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:54:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:01 compute-0 ceph-mon[73607]: pgmap v788: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:02.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:02.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:02 compute-0 ceph-mon[73607]: pgmap v789: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:04.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:04.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:05 compute-0 ceph-mon[73607]: pgmap v790: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:05 compute-0 sudo[258353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:54:05 compute-0 sudo[258353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:05 compute-0 sudo[258353]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:05 compute-0 sudo[258378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:54:05 compute-0 sudo[258378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:05 compute-0 sudo[258378]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:06.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:06.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:06 compute-0 ceph-mon[73607]: pgmap v791: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:06 compute-0 podman[258404]: 2025-10-02 11:54:06.91681901 +0000 UTC m=+0.049108704 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 11:54:06 compute-0 nova_compute[257802]: 2025-10-02 11:54:06.952 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:54:06 compute-0 nova_compute[257802]: 2025-10-02 11:54:06.986 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:54:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:08.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:08.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:09 compute-0 ceph-mon[73607]: pgmap v792: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:54:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:10.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:54:10 compute-0 sudo[258426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:54:10 compute-0 sudo[258426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:10 compute-0 sudo[258426]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:10 compute-0 sudo[258451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:54:10 compute-0 sudo[258451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:10 compute-0 sudo[258451]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:10.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:10 compute-0 sudo[258476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:54:10 compute-0 sudo[258476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:10 compute-0 sudo[258476]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:10 compute-0 sudo[258501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:54:10 compute-0 sudo[258501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:54:11 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:54:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:54:11 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:54:11 compute-0 sudo[258501]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 11:54:11 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 11:54:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Oct 02 11:54:11 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 11:54:11 compute-0 ceph-mon[73607]: pgmap v793: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:54:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:54:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 11:54:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 11:54:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:54:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:12.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:54:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:54:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:54:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:54:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:54:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:54:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:54:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:54:12 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:54:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:54:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:54:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:54:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:12.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:54:12 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 83fc9b86-8e52-4cdd-b386-bbc039a1f345 does not exist
Oct 02 11:54:12 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 312ddf87-f83a-4373-82c0-2a53678e6c1b does not exist
Oct 02 11:54:12 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev c970ff30-d6f7-4206-b44e-4a8e148afbe9 does not exist
Oct 02 11:54:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:54:12 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:54:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:54:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:54:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:54:12 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:54:12 compute-0 sudo[258558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:54:12 compute-0 sudo[258558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:12 compute-0 sudo[258558]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:12 compute-0 sudo[258583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:54:12 compute-0 sudo[258583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:12 compute-0 sudo[258583]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:12 compute-0 sudo[258608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:54:12 compute-0 sudo[258608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:12 compute-0 sudo[258608]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:12 compute-0 sudo[258633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:54:12 compute-0 sudo[258633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:12 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:54:12 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:54:12 compute-0 ceph-mon[73607]: pgmap v794: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:12 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:54:12 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:54:12 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:54:12 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:54:13 compute-0 podman[258700]: 2025-10-02 11:54:13.27373034 +0000 UTC m=+0.043978058 container create 0e2794b613c6cb2e60ff3c27a5e9e017c8805bd05345e5c929c19c91d56c2786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_sinoussi, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 11:54:13 compute-0 systemd[1]: Started libpod-conmon-0e2794b613c6cb2e60ff3c27a5e9e017c8805bd05345e5c929c19c91d56c2786.scope.
Oct 02 11:54:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:54:13 compute-0 podman[258700]: 2025-10-02 11:54:13.251390688 +0000 UTC m=+0.021638426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:54:13 compute-0 podman[258700]: 2025-10-02 11:54:13.356879214 +0000 UTC m=+0.127126962 container init 0e2794b613c6cb2e60ff3c27a5e9e017c8805bd05345e5c929c19c91d56c2786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 11:54:13 compute-0 podman[258700]: 2025-10-02 11:54:13.364307147 +0000 UTC m=+0.134554865 container start 0e2794b613c6cb2e60ff3c27a5e9e017c8805bd05345e5c929c19c91d56c2786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 11:54:13 compute-0 podman[258700]: 2025-10-02 11:54:13.367663391 +0000 UTC m=+0.137911109 container attach 0e2794b613c6cb2e60ff3c27a5e9e017c8805bd05345e5c929c19c91d56c2786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:54:13 compute-0 systemd[1]: libpod-0e2794b613c6cb2e60ff3c27a5e9e017c8805bd05345e5c929c19c91d56c2786.scope: Deactivated successfully.
Oct 02 11:54:13 compute-0 compassionate_sinoussi[258716]: 167 167
Oct 02 11:54:13 compute-0 conmon[258716]: conmon 0e2794b613c6cb2e60ff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0e2794b613c6cb2e60ff3c27a5e9e017c8805bd05345e5c929c19c91d56c2786.scope/container/memory.events
Oct 02 11:54:13 compute-0 podman[258700]: 2025-10-02 11:54:13.370816728 +0000 UTC m=+0.141064446 container died 0e2794b613c6cb2e60ff3c27a5e9e017c8805bd05345e5c929c19c91d56c2786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:54:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9788d70094fb1cbd4597401c8d9e0866987d2fa009780e69607f57323074e36-merged.mount: Deactivated successfully.
Oct 02 11:54:13 compute-0 podman[258700]: 2025-10-02 11:54:13.413269727 +0000 UTC m=+0.183517455 container remove 0e2794b613c6cb2e60ff3c27a5e9e017c8805bd05345e5c929c19c91d56c2786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:54:13 compute-0 systemd[1]: libpod-conmon-0e2794b613c6cb2e60ff3c27a5e9e017c8805bd05345e5c929c19c91d56c2786.scope: Deactivated successfully.
Oct 02 11:54:13 compute-0 podman[258740]: 2025-10-02 11:54:13.566206795 +0000 UTC m=+0.042247014 container create 39bb7723029c17d61d6f177f40aae5074e89ca32335112f94ecda465d7e45c65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:54:13 compute-0 systemd[1]: Started libpod-conmon-39bb7723029c17d61d6f177f40aae5074e89ca32335112f94ecda465d7e45c65.scope.
Oct 02 11:54:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:54:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6131abc349ffce09d8d8a3c63e0250cb0337cd2413688c95e5c214d49a412f47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:54:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6131abc349ffce09d8d8a3c63e0250cb0337cd2413688c95e5c214d49a412f47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:54:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6131abc349ffce09d8d8a3c63e0250cb0337cd2413688c95e5c214d49a412f47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:54:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6131abc349ffce09d8d8a3c63e0250cb0337cd2413688c95e5c214d49a412f47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:54:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6131abc349ffce09d8d8a3c63e0250cb0337cd2413688c95e5c214d49a412f47/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:54:13 compute-0 podman[258740]: 2025-10-02 11:54:13.638390589 +0000 UTC m=+0.114430828 container init 39bb7723029c17d61d6f177f40aae5074e89ca32335112f94ecda465d7e45c65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 11:54:13 compute-0 podman[258740]: 2025-10-02 11:54:13.55062164 +0000 UTC m=+0.026661889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:54:13 compute-0 podman[258740]: 2025-10-02 11:54:13.64812069 +0000 UTC m=+0.124160919 container start 39bb7723029c17d61d6f177f40aae5074e89ca32335112f94ecda465d7e45c65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:54:13 compute-0 podman[258740]: 2025-10-02 11:54:13.651333619 +0000 UTC m=+0.127373848 container attach 39bb7723029c17d61d6f177f40aae5074e89ca32335112f94ecda465d7e45c65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:54:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:14.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:14 compute-0 silly_merkle[258756]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:54:14 compute-0 silly_merkle[258756]: --> relative data size: 1.0
Oct 02 11:54:14 compute-0 silly_merkle[258756]: --> All data devices are unavailable
Oct 02 11:54:14 compute-0 systemd[1]: libpod-39bb7723029c17d61d6f177f40aae5074e89ca32335112f94ecda465d7e45c65.scope: Deactivated successfully.
Oct 02 11:54:14 compute-0 podman[258740]: 2025-10-02 11:54:14.486557015 +0000 UTC m=+0.962597244 container died 39bb7723029c17d61d6f177f40aae5074e89ca32335112f94ecda465d7e45c65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:54:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-6131abc349ffce09d8d8a3c63e0250cb0337cd2413688c95e5c214d49a412f47-merged.mount: Deactivated successfully.
Oct 02 11:54:14 compute-0 podman[258740]: 2025-10-02 11:54:14.556297858 +0000 UTC m=+1.032338087 container remove 39bb7723029c17d61d6f177f40aae5074e89ca32335112f94ecda465d7e45c65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:54:14 compute-0 systemd[1]: libpod-conmon-39bb7723029c17d61d6f177f40aae5074e89ca32335112f94ecda465d7e45c65.scope: Deactivated successfully.
Oct 02 11:54:14 compute-0 sudo[258633]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:14 compute-0 sudo[258783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:54:14 compute-0 sudo[258783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:14 compute-0 sudo[258783]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:14.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:14 compute-0 sudo[258808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:54:14 compute-0 sudo[258808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:14 compute-0 sudo[258808]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:14 compute-0 sudo[258833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:54:14 compute-0 sudo[258833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:14 compute-0 sudo[258833]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:14 compute-0 sudo[258858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 11:54:14 compute-0 sudo[258858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:15 compute-0 podman[258923]: 2025-10-02 11:54:15.153371539 +0000 UTC m=+0.040830999 container create 4ee1ad47c0d0efe70780576a63e6e2122536873137de20d05350a94a0808e161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 11:54:15 compute-0 systemd[1]: Started libpod-conmon-4ee1ad47c0d0efe70780576a63e6e2122536873137de20d05350a94a0808e161.scope.
Oct 02 11:54:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:54:15 compute-0 podman[258923]: 2025-10-02 11:54:15.220772974 +0000 UTC m=+0.108232454 container init 4ee1ad47c0d0efe70780576a63e6e2122536873137de20d05350a94a0808e161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kirch, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 11:54:15 compute-0 podman[258923]: 2025-10-02 11:54:15.228713271 +0000 UTC m=+0.116172731 container start 4ee1ad47c0d0efe70780576a63e6e2122536873137de20d05350a94a0808e161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:54:15 compute-0 podman[258923]: 2025-10-02 11:54:15.134483413 +0000 UTC m=+0.021942903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:54:15 compute-0 podman[258923]: 2025-10-02 11:54:15.232372401 +0000 UTC m=+0.119831881 container attach 4ee1ad47c0d0efe70780576a63e6e2122536873137de20d05350a94a0808e161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kirch, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 11:54:15 compute-0 compassionate_kirch[258940]: 167 167
Oct 02 11:54:15 compute-0 systemd[1]: libpod-4ee1ad47c0d0efe70780576a63e6e2122536873137de20d05350a94a0808e161.scope: Deactivated successfully.
Oct 02 11:54:15 compute-0 conmon[258940]: conmon 4ee1ad47c0d0efe70780 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4ee1ad47c0d0efe70780576a63e6e2122536873137de20d05350a94a0808e161.scope/container/memory.events
Oct 02 11:54:15 compute-0 podman[258923]: 2025-10-02 11:54:15.236813711 +0000 UTC m=+0.124273171 container died 4ee1ad47c0d0efe70780576a63e6e2122536873137de20d05350a94a0808e161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kirch, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 11:54:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-049b757fb873d30ea73169e5defb24f9c8962aa4e9ffa0dbf6001646989d1590-merged.mount: Deactivated successfully.
Oct 02 11:54:15 compute-0 podman[258923]: 2025-10-02 11:54:15.284702804 +0000 UTC m=+0.172162264 container remove 4ee1ad47c0d0efe70780576a63e6e2122536873137de20d05350a94a0808e161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kirch, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 11:54:15 compute-0 systemd[1]: libpod-conmon-4ee1ad47c0d0efe70780576a63e6e2122536873137de20d05350a94a0808e161.scope: Deactivated successfully.
Oct 02 11:54:15 compute-0 podman[258966]: 2025-10-02 11:54:15.453854254 +0000 UTC m=+0.037933828 container create 41e5832f4617bd53c0a6b0917a5db3d64d9e00308087c8b5bc6d40d5c966570a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Oct 02 11:54:15 compute-0 systemd[1]: Started libpod-conmon-41e5832f4617bd53c0a6b0917a5db3d64d9e00308087c8b5bc6d40d5c966570a.scope.
Oct 02 11:54:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:54:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8fa564d1974cc1c694c4c81611a39c56e2936347a7125f30a146e659bc47079/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:54:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8fa564d1974cc1c694c4c81611a39c56e2936347a7125f30a146e659bc47079/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:54:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8fa564d1974cc1c694c4c81611a39c56e2936347a7125f30a146e659bc47079/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:54:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8fa564d1974cc1c694c4c81611a39c56e2936347a7125f30a146e659bc47079/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:54:15 compute-0 podman[258966]: 2025-10-02 11:54:15.438086363 +0000 UTC m=+0.022165967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:54:15 compute-0 podman[258966]: 2025-10-02 11:54:15.53950645 +0000 UTC m=+0.123586054 container init 41e5832f4617bd53c0a6b0917a5db3d64d9e00308087c8b5bc6d40d5c966570a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 11:54:15 compute-0 podman[258966]: 2025-10-02 11:54:15.545299103 +0000 UTC m=+0.129378677 container start 41e5832f4617bd53c0a6b0917a5db3d64d9e00308087c8b5bc6d40d5c966570a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:54:15 compute-0 podman[258966]: 2025-10-02 11:54:15.551486135 +0000 UTC m=+0.135565709 container attach 41e5832f4617bd53c0a6b0917a5db3d64d9e00308087c8b5bc6d40d5c966570a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 11:54:15 compute-0 ceph-mon[73607]: pgmap v795: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:16 compute-0 awesome_merkle[258982]: {
Oct 02 11:54:16 compute-0 awesome_merkle[258982]:     "1": [
Oct 02 11:54:16 compute-0 awesome_merkle[258982]:         {
Oct 02 11:54:16 compute-0 awesome_merkle[258982]:             "devices": [
Oct 02 11:54:16 compute-0 awesome_merkle[258982]:                 "/dev/loop3"
Oct 02 11:54:16 compute-0 awesome_merkle[258982]:             ],
Oct 02 11:54:16 compute-0 awesome_merkle[258982]:             "lv_name": "ceph_lv0",
Oct 02 11:54:16 compute-0 awesome_merkle[258982]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:54:16 compute-0 awesome_merkle[258982]:             "lv_size": "7511998464",
Oct 02 11:54:16 compute-0 awesome_merkle[258982]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:54:16 compute-0 awesome_merkle[258982]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:54:16 compute-0 awesome_merkle[258982]:             "name": "ceph_lv0",
Oct 02 11:54:16 compute-0 awesome_merkle[258982]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:54:16 compute-0 awesome_merkle[258982]:             "tags": {
Oct 02 11:54:16 compute-0 awesome_merkle[258982]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:54:16 compute-0 awesome_merkle[258982]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:54:16 compute-0 awesome_merkle[258982]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:54:16 compute-0 awesome_merkle[258982]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:54:16 compute-0 awesome_merkle[258982]:                 "ceph.cluster_name": "ceph",
Oct 02 11:54:16 compute-0 awesome_merkle[258982]:                 "ceph.crush_device_class": "",
Oct 02 11:54:16 compute-0 awesome_merkle[258982]:                 "ceph.encrypted": "0",
Oct 02 11:54:16 compute-0 awesome_merkle[258982]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:54:16 compute-0 awesome_merkle[258982]:                 "ceph.osd_id": "1",
Oct 02 11:54:16 compute-0 awesome_merkle[258982]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:54:16 compute-0 awesome_merkle[258982]:                 "ceph.type": "block",
Oct 02 11:54:16 compute-0 awesome_merkle[258982]:                 "ceph.vdo": "0"
Oct 02 11:54:16 compute-0 awesome_merkle[258982]:             },
Oct 02 11:54:16 compute-0 awesome_merkle[258982]:             "type": "block",
Oct 02 11:54:16 compute-0 awesome_merkle[258982]:             "vg_name": "ceph_vg0"
Oct 02 11:54:16 compute-0 awesome_merkle[258982]:         }
Oct 02 11:54:16 compute-0 awesome_merkle[258982]:     ]
Oct 02 11:54:16 compute-0 awesome_merkle[258982]: }
Oct 02 11:54:16 compute-0 systemd[1]: libpod-41e5832f4617bd53c0a6b0917a5db3d64d9e00308087c8b5bc6d40d5c966570a.scope: Deactivated successfully.
Oct 02 11:54:16 compute-0 podman[258966]: 2025-10-02 11:54:16.297897836 +0000 UTC m=+0.881977410 container died 41e5832f4617bd53c0a6b0917a5db3d64d9e00308087c8b5bc6d40d5c966570a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 11:54:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8fa564d1974cc1c694c4c81611a39c56e2936347a7125f30a146e659bc47079-merged.mount: Deactivated successfully.
Oct 02 11:54:16 compute-0 podman[258966]: 2025-10-02 11:54:16.357847268 +0000 UTC m=+0.941926852 container remove 41e5832f4617bd53c0a6b0917a5db3d64d9e00308087c8b5bc6d40d5c966570a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_merkle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:54:16 compute-0 systemd[1]: libpod-conmon-41e5832f4617bd53c0a6b0917a5db3d64d9e00308087c8b5bc6d40d5c966570a.scope: Deactivated successfully.
Oct 02 11:54:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:16.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:16 compute-0 sudo[258858]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:16 compute-0 sudo[259003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:54:16 compute-0 sudo[259003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:16 compute-0 sudo[259003]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:16 compute-0 sudo[259028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:54:16 compute-0 sudo[259028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:16 compute-0 sudo[259028]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:16 compute-0 sudo[259053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:54:16 compute-0 sudo[259053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:16 compute-0 sudo[259053]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:16 compute-0 sudo[259078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 11:54:16 compute-0 sudo[259078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:54:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:16.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:54:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:16 compute-0 ceph-mon[73607]: pgmap v796: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:16 compute-0 podman[259142]: 2025-10-02 11:54:16.951866285 +0000 UTC m=+0.042534833 container create 81c73f9c64256fba5e11db78855ed0001311e2de80096c24134d8be045acf99a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 11:54:16 compute-0 systemd[1]: Started libpod-conmon-81c73f9c64256fba5e11db78855ed0001311e2de80096c24134d8be045acf99a.scope.
Oct 02 11:54:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:54:17 compute-0 podman[259142]: 2025-10-02 11:54:16.932320742 +0000 UTC m=+0.022989320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:54:17 compute-0 podman[259142]: 2025-10-02 11:54:17.031550133 +0000 UTC m=+0.122218691 container init 81c73f9c64256fba5e11db78855ed0001311e2de80096c24134d8be045acf99a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hopper, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 11:54:17 compute-0 podman[259142]: 2025-10-02 11:54:17.040545745 +0000 UTC m=+0.131214293 container start 81c73f9c64256fba5e11db78855ed0001311e2de80096c24134d8be045acf99a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:54:17 compute-0 podman[259142]: 2025-10-02 11:54:17.044519474 +0000 UTC m=+0.135188022 container attach 81c73f9c64256fba5e11db78855ed0001311e2de80096c24134d8be045acf99a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hopper, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 11:54:17 compute-0 unruffled_hopper[259158]: 167 167
Oct 02 11:54:17 compute-0 systemd[1]: libpod-81c73f9c64256fba5e11db78855ed0001311e2de80096c24134d8be045acf99a.scope: Deactivated successfully.
Oct 02 11:54:17 compute-0 conmon[259158]: conmon 81c73f9c64256fba5e11 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-81c73f9c64256fba5e11db78855ed0001311e2de80096c24134d8be045acf99a.scope/container/memory.events
Oct 02 11:54:17 compute-0 podman[259142]: 2025-10-02 11:54:17.049791143 +0000 UTC m=+0.140459691 container died 81c73f9c64256fba5e11db78855ed0001311e2de80096c24134d8be045acf99a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hopper, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:54:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b89386dc5b439a8ac633630addcddd6c14b6b94594d09971f0cfd41036cf600-merged.mount: Deactivated successfully.
Oct 02 11:54:17 compute-0 podman[259142]: 2025-10-02 11:54:17.106446804 +0000 UTC m=+0.197115352 container remove 81c73f9c64256fba5e11db78855ed0001311e2de80096c24134d8be045acf99a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hopper, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 11:54:17 compute-0 systemd[1]: libpod-conmon-81c73f9c64256fba5e11db78855ed0001311e2de80096c24134d8be045acf99a.scope: Deactivated successfully.
Oct 02 11:54:17 compute-0 podman[259180]: 2025-10-02 11:54:17.270434245 +0000 UTC m=+0.038659447 container create 961f60a5b38ebc1c5bcf739feb85e5ab20353e5b7bf81392be414d87db2a7fd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:54:17 compute-0 systemd[1]: Started libpod-conmon-961f60a5b38ebc1c5bcf739feb85e5ab20353e5b7bf81392be414d87db2a7fd7.scope.
Oct 02 11:54:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:54:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fcfff2dfa9d671ab7fbc07bc7bd9b786e8af3ac52db96cd9d9fc738f51a37df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:54:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fcfff2dfa9d671ab7fbc07bc7bd9b786e8af3ac52db96cd9d9fc738f51a37df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:54:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fcfff2dfa9d671ab7fbc07bc7bd9b786e8af3ac52db96cd9d9fc738f51a37df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:54:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fcfff2dfa9d671ab7fbc07bc7bd9b786e8af3ac52db96cd9d9fc738f51a37df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:54:17 compute-0 podman[259180]: 2025-10-02 11:54:17.25404607 +0000 UTC m=+0.022271292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:54:17 compute-0 podman[259180]: 2025-10-02 11:54:17.415457099 +0000 UTC m=+0.183682331 container init 961f60a5b38ebc1c5bcf739feb85e5ab20353e5b7bf81392be414d87db2a7fd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:54:17 compute-0 podman[259180]: 2025-10-02 11:54:17.422592525 +0000 UTC m=+0.190817727 container start 961f60a5b38ebc1c5bcf739feb85e5ab20353e5b7bf81392be414d87db2a7fd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 11:54:17 compute-0 podman[259180]: 2025-10-02 11:54:17.444044624 +0000 UTC m=+0.212269836 container attach 961f60a5b38ebc1c5bcf739feb85e5ab20353e5b7bf81392be414d87db2a7fd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:54:18 compute-0 zen_burnell[259197]: {
Oct 02 11:54:18 compute-0 zen_burnell[259197]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 11:54:18 compute-0 zen_burnell[259197]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:54:18 compute-0 zen_burnell[259197]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:54:18 compute-0 zen_burnell[259197]:         "osd_id": 1,
Oct 02 11:54:18 compute-0 zen_burnell[259197]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:54:18 compute-0 zen_burnell[259197]:         "type": "bluestore"
Oct 02 11:54:18 compute-0 zen_burnell[259197]:     }
Oct 02 11:54:18 compute-0 zen_burnell[259197]: }
Oct 02 11:54:18 compute-0 systemd[1]: libpod-961f60a5b38ebc1c5bcf739feb85e5ab20353e5b7bf81392be414d87db2a7fd7.scope: Deactivated successfully.
Oct 02 11:54:18 compute-0 podman[259180]: 2025-10-02 11:54:18.268548035 +0000 UTC m=+1.036773267 container died 961f60a5b38ebc1c5bcf739feb85e5ab20353e5b7bf81392be414d87db2a7fd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_burnell, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:54:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fcfff2dfa9d671ab7fbc07bc7bd9b786e8af3ac52db96cd9d9fc738f51a37df-merged.mount: Deactivated successfully.
Oct 02 11:54:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:18.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:18 compute-0 podman[259180]: 2025-10-02 11:54:18.42901374 +0000 UTC m=+1.197238942 container remove 961f60a5b38ebc1c5bcf739feb85e5ab20353e5b7bf81392be414d87db2a7fd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_burnell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 11:54:18 compute-0 systemd[1]: libpod-conmon-961f60a5b38ebc1c5bcf739feb85e5ab20353e5b7bf81392be414d87db2a7fd7.scope: Deactivated successfully.
Oct 02 11:54:18 compute-0 sudo[259078]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:54:18 compute-0 podman[259218]: 2025-10-02 11:54:18.477987099 +0000 UTC m=+0.176803299 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 11:54:18 compute-0 podman[259225]: 2025-10-02 11:54:18.489925925 +0000 UTC m=+0.188873338 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 11:54:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:54:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:54:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:54:18 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev f8f97e92-4646-4f37-ae4a-a1143077beed does not exist
Oct 02 11:54:18 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev e7945edc-ad2a-4e93-a083-ba8746925c7a does not exist
Oct 02 11:54:18 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 3d6dbfa5-1980-4240-9ec4-1ba951eedea3 does not exist
Oct 02 11:54:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:18.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:18 compute-0 sudo[259272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:54:18 compute-0 sudo[259272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:18 compute-0 sudo[259272]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:18 compute-0 sudo[259297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:54:18 compute-0 sudo[259297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:18 compute-0 sudo[259297]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:54:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:54:19 compute-0 ceph-mon[73607]: pgmap v797: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:54:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:20.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:54:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:20.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:21 compute-0 ceph-mon[73607]: pgmap v798: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:22.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:22.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:23 compute-0 podman[259324]: 2025-10-02 11:54:23.02198652 +0000 UTC m=+0.151910662 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Oct 02 11:54:23 compute-0 ceph-mon[73607]: pgmap v799: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1722527071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:54:24 compute-0 nova_compute[257802]: 2025-10-02 11:54:24.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:54:24 compute-0 nova_compute[257802]: 2025-10-02 11:54:24.100 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:54:24 compute-0 nova_compute[257802]: 2025-10-02 11:54:24.100 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 11:54:24 compute-0 nova_compute[257802]: 2025-10-02 11:54:24.100 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 11:54:24 compute-0 nova_compute[257802]: 2025-10-02 11:54:24.160 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 11:54:24 compute-0 nova_compute[257802]: 2025-10-02 11:54:24.160 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:54:24 compute-0 nova_compute[257802]: 2025-10-02 11:54:24.160 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:54:24 compute-0 nova_compute[257802]: 2025-10-02 11:54:24.161 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:54:24 compute-0 nova_compute[257802]: 2025-10-02 11:54:24.161 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:54:24 compute-0 nova_compute[257802]: 2025-10-02 11:54:24.161 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:54:24 compute-0 nova_compute[257802]: 2025-10-02 11:54:24.161 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:54:24 compute-0 nova_compute[257802]: 2025-10-02 11:54:24.161 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 11:54:24 compute-0 nova_compute[257802]: 2025-10-02 11:54:24.161 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:54:24 compute-0 nova_compute[257802]: 2025-10-02 11:54:24.253 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:54:24 compute-0 nova_compute[257802]: 2025-10-02 11:54:24.253 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:54:24 compute-0 nova_compute[257802]: 2025-10-02 11:54:24.254 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:54:24 compute-0 nova_compute[257802]: 2025-10-02 11:54:24.254 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 11:54:24 compute-0 nova_compute[257802]: 2025-10-02 11:54:24.254 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:54:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:54:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:24.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:54:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:24.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 11:54:24 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3016483927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:54:24 compute-0 nova_compute[257802]: 2025-10-02 11:54:24.734 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:54:24 compute-0 nova_compute[257802]: 2025-10-02 11:54:24.884 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 11:54:24 compute-0 nova_compute[257802]: 2025-10-02 11:54:24.886 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5241MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 11:54:24 compute-0 nova_compute[257802]: 2025-10-02 11:54:24.886 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:54:24 compute-0 nova_compute[257802]: 2025-10-02 11:54:24.886 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:54:25 compute-0 nova_compute[257802]: 2025-10-02 11:54:25.062 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 11:54:25 compute-0 nova_compute[257802]: 2025-10-02 11:54:25.062 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 11:54:25 compute-0 nova_compute[257802]: 2025-10-02 11:54:25.099 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:54:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4026187952' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:54:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2984334205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:54:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3016483927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:54:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 11:54:25 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3050708693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:54:25 compute-0 nova_compute[257802]: 2025-10-02 11:54:25.504 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:54:25 compute-0 nova_compute[257802]: 2025-10-02 11:54:25.512 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 11:54:25 compute-0 nova_compute[257802]: 2025-10-02 11:54:25.545 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 11:54:25 compute-0 nova_compute[257802]: 2025-10-02 11:54:25.547 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 11:54:25 compute-0 nova_compute[257802]: 2025-10-02 11:54:25.548 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:54:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:26 compute-0 sudo[259395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:54:26 compute-0 sudo[259395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:26 compute-0 sudo[259395]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:26 compute-0 sudo[259420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:54:26 compute-0 sudo[259420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:26 compute-0 sudo[259420]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:26 compute-0 ceph-mon[73607]: pgmap v800: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/885902181' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:54:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3050708693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:54:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:26.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:54:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:26.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:54:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:54:26.911 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:54:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:54:26.912 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:54:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:54:26.912 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:54:27 compute-0 ceph-mon[73607]: pgmap v801: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:28.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:54:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:28.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:54:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:28 compute-0 ceph-mon[73607]: pgmap v802: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:30.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:30.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:32 compute-0 ceph-mon[73607]: pgmap v803: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:32.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:32.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:33 compute-0 ceph-mon[73607]: pgmap v804: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:34.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:34.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:35 compute-0 ceph-mon[73607]: pgmap v805: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:36.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:36.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:37 compute-0 ceph-mon[73607]: pgmap v806: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:37 compute-0 radosgw[92027]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct 02 11:54:37 compute-0 podman[259451]: 2025-10-02 11:54:37.438667061 +0000 UTC m=+0.070858742 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:54:37 compute-0 radosgw[92027]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Oct 02 11:54:37 compute-0 radosgw[92027]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Oct 02 11:54:37 compute-0 radosgw[92027]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Oct 02 11:54:37 compute-0 radosgw[92027]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Oct 02 11:54:37 compute-0 radosgw[92027]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Oct 02 11:54:38 compute-0 radosgw[92027]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Oct 02 11:54:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:54:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:38.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:54:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:38.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s
Oct 02 11:54:38 compute-0 ceph-mon[73607]: pgmap v807: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s
Oct 02 11:54:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:54:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:40.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:54:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Oct 02 11:54:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:40.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:40 compute-0 ceph-mon[73607]: pgmap v808: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Oct 02 11:54:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_11:54:42
Oct 02 11:54:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:54:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 11:54:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['.mgr', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'images', '.rgw.root', 'vms']
Oct 02 11:54:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:54:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:42.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:54:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:54:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:54:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:54:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:54:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:54:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Oct 02 11:54:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:42.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:54:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:54:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:54:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:54:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:54:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:54:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:54:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:54:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:54:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:54:43 compute-0 ceph-mon[73607]: pgmap v809: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Oct 02 11:54:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:44.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 45 KiB/s rd, 0 B/s wr, 75 op/s
Oct 02 11:54:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:44.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:45 compute-0 ceph-mon[73607]: pgmap v810: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 45 KiB/s rd, 0 B/s wr, 75 op/s
Oct 02 11:54:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:46 compute-0 sudo[259474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:54:46 compute-0 sudo[259474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:46 compute-0 sudo[259474]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:46 compute-0 sudo[259499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:54:46 compute-0 sudo[259499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:46 compute-0 sudo[259499]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:46.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 57 KiB/s rd, 0 B/s wr, 95 op/s
Oct 02 11:54:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:46.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:47 compute-0 ceph-mon[73607]: pgmap v811: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 57 KiB/s rd, 0 B/s wr, 95 op/s
Oct 02 11:54:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:48.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 0 B/s wr, 95 op/s
Oct 02 11:54:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:48.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:48 compute-0 podman[259526]: 2025-10-02 11:54:48.928215858 +0000 UTC m=+0.062909180 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 11:54:48 compute-0 podman[259527]: 2025-10-02 11:54:48.929239143 +0000 UTC m=+0.059906589 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 02 11:54:49 compute-0 ceph-mon[73607]: pgmap v812: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 0 B/s wr, 95 op/s
Oct 02 11:54:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:50.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 59 KiB/s rd, 0 B/s wr, 97 op/s
Oct 02 11:54:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:50.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:51 compute-0 ceph-mon[73607]: pgmap v813: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 59 KiB/s rd, 0 B/s wr, 97 op/s
Oct 02 11:54:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:54:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:52.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:54:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 0 B/s wr, 80 op/s
Oct 02 11:54:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:52.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:53 compute-0 ceph-mon[73607]: pgmap v814: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 0 B/s wr, 80 op/s
Oct 02 11:54:53 compute-0 podman[259566]: 2025-10-02 11:54:53.945427353 +0000 UTC m=+0.086147683 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 02 11:54:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:54:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:54:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:54:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:54:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:54:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:54:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:54:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:54:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:54:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:54:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:54:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:54:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:54:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:54:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:54:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:54:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 11:54:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:54:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:54:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:54:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:54:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:54:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:54:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:54.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 56 KiB/s rd, 0 B/s wr, 94 op/s
Oct 02 11:54:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:54.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:55 compute-0 ceph-mon[73607]: pgmap v815: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 56 KiB/s rd, 0 B/s wr, 94 op/s
Oct 02 11:54:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3545203692' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 11:54:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3545203692' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 11:54:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:56.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 0 B/s wr, 49 op/s
Oct 02 11:54:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:56.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:57 compute-0 ceph-mon[73607]: pgmap v816: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 0 B/s wr, 49 op/s
Oct 02 11:54:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:54:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:58.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:54:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 0 B/s wr, 39 op/s
Oct 02 11:54:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:54:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:58.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:59 compute-0 ceph-mon[73607]: pgmap v817: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 0 B/s wr, 39 op/s
Oct 02 11:55:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:55:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:00.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:55:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Oct 02 11:55:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:00.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:01 compute-0 ceph-mon[73607]: pgmap v818: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Oct 02 11:55:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:02.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 0 B/s wr, 49 op/s
Oct 02 11:55:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:02.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:02 compute-0 ceph-mon[73607]: pgmap v819: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 0 B/s wr, 49 op/s
Oct 02 11:55:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:04.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 0 B/s wr, 49 op/s
Oct 02 11:55:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:04.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:05 compute-0 ceph-mon[73607]: pgmap v820: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 0 B/s wr, 49 op/s
Oct 02 11:55:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:06 compute-0 sudo[259599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:06 compute-0 sudo[259599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:06 compute-0 sudo[259599]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:06 compute-0 sudo[259624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:06 compute-0 sudo[259624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:06 compute-0 sudo[259624]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:55:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:06.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:55:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 0 B/s wr, 35 op/s
Oct 02 11:55:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:06.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:07 compute-0 podman[259649]: 2025-10-02 11:55:07.931021261 +0000 UTC m=+0.063194318 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 02 11:55:08 compute-0 ceph-mon[73607]: pgmap v821: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 0 B/s wr, 35 op/s
Oct 02 11:55:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:08.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 0 B/s wr, 27 op/s
Oct 02 11:55:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:08.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:09 compute-0 ceph-mon[73607]: pgmap v822: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 0 B/s wr, 27 op/s
Oct 02 11:55:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:10.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Oct 02 11:55:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:10.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:11 compute-0 ceph-mon[73607]: pgmap v823: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Oct 02 11:55:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:12.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:55:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:55:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:55:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:55:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:55:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:55:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 02 11:55:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:12.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:13 compute-0 ceph-mon[73607]: pgmap v824: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 02 11:55:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:14.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 02 11:55:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:14.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:15 compute-0 ceph-mon[73607]: pgmap v825: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 02 11:55:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:16.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:55:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:16.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:55:17 compute-0 ceph-mon[73607]: pgmap v826: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:18.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:18.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:19 compute-0 sudo[259672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:19 compute-0 sudo[259672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:19 compute-0 sudo[259672]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:19 compute-0 podman[259696]: 2025-10-02 11:55:19.226643122 +0000 UTC m=+0.060421530 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 11:55:19 compute-0 sudo[259710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:55:19 compute-0 podman[259697]: 2025-10-02 11:55:19.230477345 +0000 UTC m=+0.063094345 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 11:55:19 compute-0 sudo[259710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:19 compute-0 sudo[259710]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:19 compute-0 sudo[259762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:19 compute-0 sudo[259762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:19 compute-0 sudo[259762]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:19 compute-0 sudo[259788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:55:19 compute-0 sudo[259788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:19 compute-0 ceph-mon[73607]: pgmap v827: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:19 compute-0 sudo[259788]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Oct 02 11:55:19 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 11:55:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:55:19 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:55:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:55:19 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:55:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:55:20 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:55:20 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 457068ee-3499-421f-81f2-fbc6daed4d65 does not exist
Oct 02 11:55:20 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 48ef7854-76c9-47a7-8c3d-527ed792edb3 does not exist
Oct 02 11:55:20 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 1d3bfe61-006e-43bd-96f5-c2e6f4fcbcea does not exist
Oct 02 11:55:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:55:20 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:55:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:55:20 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:55:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:55:20 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:55:20 compute-0 sudo[259844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:20 compute-0 sudo[259844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:20 compute-0 sudo[259844]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:20 compute-0 sudo[259870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:55:20 compute-0 sudo[259870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:20 compute-0 sudo[259870]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:55:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:20.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:55:20 compute-0 sudo[259895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:20 compute-0 sudo[259895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:20 compute-0 sudo[259895]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:20 compute-0 sudo[259920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:55:20 compute-0 sudo[259920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:20.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:20 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 11:55:20 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:55:20 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:55:20 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:55:20 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:55:20 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:55:20 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:55:20 compute-0 podman[259985]: 2025-10-02 11:55:20.898061317 +0000 UTC m=+0.045460000 container create 4859b54f77355196e796265919b879dba40f3366e0f5a83555aed0caa8205265 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:55:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:20 compute-0 systemd[1]: Started libpod-conmon-4859b54f77355196e796265919b879dba40f3366e0f5a83555aed0caa8205265.scope.
Oct 02 11:55:20 compute-0 podman[259985]: 2025-10-02 11:55:20.873856631 +0000 UTC m=+0.021255344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:55:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:55:20 compute-0 podman[259985]: 2025-10-02 11:55:20.990958091 +0000 UTC m=+0.138356794 container init 4859b54f77355196e796265919b879dba40f3366e0f5a83555aed0caa8205265 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 11:55:20 compute-0 podman[259985]: 2025-10-02 11:55:20.998645537 +0000 UTC m=+0.146044220 container start 4859b54f77355196e796265919b879dba40f3366e0f5a83555aed0caa8205265 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 11:55:21 compute-0 focused_nightingale[260002]: 167 167
Oct 02 11:55:21 compute-0 podman[259985]: 2025-10-02 11:55:21.004800405 +0000 UTC m=+0.152199118 container attach 4859b54f77355196e796265919b879dba40f3366e0f5a83555aed0caa8205265 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_nightingale, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 11:55:21 compute-0 podman[259985]: 2025-10-02 11:55:21.00544103 +0000 UTC m=+0.152839733 container died 4859b54f77355196e796265919b879dba40f3366e0f5a83555aed0caa8205265 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:55:21 compute-0 systemd[1]: libpod-4859b54f77355196e796265919b879dba40f3366e0f5a83555aed0caa8205265.scope: Deactivated successfully.
Oct 02 11:55:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f335ea0f854187bd25c7a124e1b3c9f420b3563d7e2dafe1db8fca6f4575254-merged.mount: Deactivated successfully.
Oct 02 11:55:21 compute-0 podman[259985]: 2025-10-02 11:55:21.101964283 +0000 UTC m=+0.249362966 container remove 4859b54f77355196e796265919b879dba40f3366e0f5a83555aed0caa8205265 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:55:21 compute-0 systemd[1]: libpod-conmon-4859b54f77355196e796265919b879dba40f3366e0f5a83555aed0caa8205265.scope: Deactivated successfully.
Oct 02 11:55:21 compute-0 podman[260024]: 2025-10-02 11:55:21.266513859 +0000 UTC m=+0.051037015 container create 7c5999f559545cc79458f5ee31e37079f1e8b479d3dcf72962e3c871ea9200b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:55:21 compute-0 systemd[1]: Started libpod-conmon-7c5999f559545cc79458f5ee31e37079f1e8b479d3dcf72962e3c871ea9200b9.scope.
Oct 02 11:55:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:55:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59093a9f2165f1bda8997912e1a2300159adc772e189ec2754ae1661784c4d50/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:55:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59093a9f2165f1bda8997912e1a2300159adc772e189ec2754ae1661784c4d50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:55:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59093a9f2165f1bda8997912e1a2300159adc772e189ec2754ae1661784c4d50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:55:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59093a9f2165f1bda8997912e1a2300159adc772e189ec2754ae1661784c4d50/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:55:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59093a9f2165f1bda8997912e1a2300159adc772e189ec2754ae1661784c4d50/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:55:21 compute-0 podman[260024]: 2025-10-02 11:55:21.238624985 +0000 UTC m=+0.023148171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:55:21 compute-0 podman[260024]: 2025-10-02 11:55:21.344409201 +0000 UTC m=+0.128932377 container init 7c5999f559545cc79458f5ee31e37079f1e8b479d3dcf72962e3c871ea9200b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:55:21 compute-0 podman[260024]: 2025-10-02 11:55:21.362337423 +0000 UTC m=+0.146860579 container start 7c5999f559545cc79458f5ee31e37079f1e8b479d3dcf72962e3c871ea9200b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:55:21 compute-0 podman[260024]: 2025-10-02 11:55:21.365966091 +0000 UTC m=+0.150489267 container attach 7c5999f559545cc79458f5ee31e37079f1e8b479d3dcf72962e3c871ea9200b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_elbakyan, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:55:21 compute-0 ceph-mon[73607]: pgmap v828: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:22 compute-0 sad_elbakyan[260041]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:55:22 compute-0 sad_elbakyan[260041]: --> relative data size: 1.0
Oct 02 11:55:22 compute-0 sad_elbakyan[260041]: --> All data devices are unavailable
Oct 02 11:55:22 compute-0 systemd[1]: libpod-7c5999f559545cc79458f5ee31e37079f1e8b479d3dcf72962e3c871ea9200b9.scope: Deactivated successfully.
Oct 02 11:55:22 compute-0 podman[260024]: 2025-10-02 11:55:22.195394141 +0000 UTC m=+0.979917297 container died 7c5999f559545cc79458f5ee31e37079f1e8b479d3dcf72962e3c871ea9200b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_elbakyan, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 11:55:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-59093a9f2165f1bda8997912e1a2300159adc772e189ec2754ae1661784c4d50-merged.mount: Deactivated successfully.
Oct 02 11:55:22 compute-0 podman[260024]: 2025-10-02 11:55:22.267006972 +0000 UTC m=+1.051530128 container remove 7c5999f559545cc79458f5ee31e37079f1e8b479d3dcf72962e3c871ea9200b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:55:22 compute-0 systemd[1]: libpod-conmon-7c5999f559545cc79458f5ee31e37079f1e8b479d3dcf72962e3c871ea9200b9.scope: Deactivated successfully.
Oct 02 11:55:22 compute-0 sudo[259920]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:22 compute-0 sudo[260070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:22 compute-0 sudo[260070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:22 compute-0 sudo[260070]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:22 compute-0 sudo[260096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:55:22 compute-0 sudo[260096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:55:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:22.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:55:22 compute-0 sudo[260096]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:22 compute-0 sudo[260121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:22 compute-0 sudo[260121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:22 compute-0 sudo[260121]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:22 compute-0 sudo[260146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 11:55:22 compute-0 sudo[260146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:22.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:22 compute-0 podman[260212]: 2025-10-02 11:55:22.95855398 +0000 UTC m=+0.044249909 container create 7fa7918ac6bdd09c593540fb60446111d514d77727988f3cf75ff67168fb61ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 11:55:22 compute-0 systemd[1]: Started libpod-conmon-7fa7918ac6bdd09c593540fb60446111d514d77727988f3cf75ff67168fb61ce.scope.
Oct 02 11:55:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:55:23 compute-0 podman[260212]: 2025-10-02 11:55:22.938754572 +0000 UTC m=+0.024450481 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:55:23 compute-0 podman[260212]: 2025-10-02 11:55:23.041888684 +0000 UTC m=+0.127584593 container init 7fa7918ac6bdd09c593540fb60446111d514d77727988f3cf75ff67168fb61ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_euler, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:55:23 compute-0 podman[260212]: 2025-10-02 11:55:23.049654231 +0000 UTC m=+0.135350120 container start 7fa7918ac6bdd09c593540fb60446111d514d77727988f3cf75ff67168fb61ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 11:55:23 compute-0 podman[260212]: 2025-10-02 11:55:23.054392707 +0000 UTC m=+0.140088616 container attach 7fa7918ac6bdd09c593540fb60446111d514d77727988f3cf75ff67168fb61ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_euler, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:55:23 compute-0 quirky_euler[260229]: 167 167
Oct 02 11:55:23 compute-0 systemd[1]: libpod-7fa7918ac6bdd09c593540fb60446111d514d77727988f3cf75ff67168fb61ce.scope: Deactivated successfully.
Oct 02 11:55:23 compute-0 podman[260212]: 2025-10-02 11:55:23.058629729 +0000 UTC m=+0.144325618 container died 7fa7918ac6bdd09c593540fb60446111d514d77727988f3cf75ff67168fb61ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_euler, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 11:55:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee3b06df080acbad4416583f8f37a8bd383acd89fcd80a77f3832226ea45f214-merged.mount: Deactivated successfully.
Oct 02 11:55:23 compute-0 podman[260212]: 2025-10-02 11:55:23.111523497 +0000 UTC m=+0.197219396 container remove 7fa7918ac6bdd09c593540fb60446111d514d77727988f3cf75ff67168fb61ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_euler, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 11:55:23 compute-0 systemd[1]: libpod-conmon-7fa7918ac6bdd09c593540fb60446111d514d77727988f3cf75ff67168fb61ce.scope: Deactivated successfully.
Oct 02 11:55:23 compute-0 podman[260253]: 2025-10-02 11:55:23.277923358 +0000 UTC m=+0.037644011 container create 57a15558b2c06d356c6b98cabdfb91d75e4d0136c2806931cfec391faf123172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_noyce, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 11:55:23 compute-0 ceph-mon[73607]: pgmap v829: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:23 compute-0 systemd[1]: Started libpod-conmon-57a15558b2c06d356c6b98cabdfb91d75e4d0136c2806931cfec391faf123172.scope.
Oct 02 11:55:23 compute-0 podman[260253]: 2025-10-02 11:55:23.261318007 +0000 UTC m=+0.021038680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:55:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:55:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39de34c4aaf1ad684610351efa1698b7ea964dd164737f0e45cab0f8a7c1b5a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:55:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39de34c4aaf1ad684610351efa1698b7ea964dd164737f0e45cab0f8a7c1b5a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:55:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39de34c4aaf1ad684610351efa1698b7ea964dd164737f0e45cab0f8a7c1b5a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:55:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39de34c4aaf1ad684610351efa1698b7ea964dd164737f0e45cab0f8a7c1b5a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:55:23 compute-0 podman[260253]: 2025-10-02 11:55:23.380176928 +0000 UTC m=+0.139897601 container init 57a15558b2c06d356c6b98cabdfb91d75e4d0136c2806931cfec391faf123172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 11:55:23 compute-0 podman[260253]: 2025-10-02 11:55:23.388472288 +0000 UTC m=+0.148192941 container start 57a15558b2c06d356c6b98cabdfb91d75e4d0136c2806931cfec391faf123172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_noyce, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:55:23 compute-0 podman[260253]: 2025-10-02 11:55:23.392208128 +0000 UTC m=+0.151928811 container attach 57a15558b2c06d356c6b98cabdfb91d75e4d0136c2806931cfec391faf123172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]: {
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]:     "1": [
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]:         {
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]:             "devices": [
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]:                 "/dev/loop3"
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]:             ],
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]:             "lv_name": "ceph_lv0",
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]:             "lv_size": "7511998464",
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]:             "name": "ceph_lv0",
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]:             "tags": {
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]:                 "ceph.cluster_name": "ceph",
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]:                 "ceph.crush_device_class": "",
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]:                 "ceph.encrypted": "0",
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]:                 "ceph.osd_id": "1",
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]:                 "ceph.type": "block",
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]:                 "ceph.vdo": "0"
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]:             },
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]:             "type": "block",
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]:             "vg_name": "ceph_vg0"
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]:         }
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]:     ]
Oct 02 11:55:24 compute-0 affectionate_noyce[260270]: }
Oct 02 11:55:24 compute-0 systemd[1]: libpod-57a15558b2c06d356c6b98cabdfb91d75e4d0136c2806931cfec391faf123172.scope: Deactivated successfully.
Oct 02 11:55:24 compute-0 podman[260253]: 2025-10-02 11:55:24.172181244 +0000 UTC m=+0.931901897 container died 57a15558b2c06d356c6b98cabdfb91d75e4d0136c2806931cfec391faf123172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_noyce, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:55:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-39de34c4aaf1ad684610351efa1698b7ea964dd164737f0e45cab0f8a7c1b5a7-merged.mount: Deactivated successfully.
Oct 02 11:55:24 compute-0 podman[260253]: 2025-10-02 11:55:24.233527506 +0000 UTC m=+0.993248159 container remove 57a15558b2c06d356c6b98cabdfb91d75e4d0136c2806931cfec391faf123172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_noyce, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct 02 11:55:24 compute-0 systemd[1]: libpod-conmon-57a15558b2c06d356c6b98cabdfb91d75e4d0136c2806931cfec391faf123172.scope: Deactivated successfully.
Oct 02 11:55:24 compute-0 sudo[260146]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:24 compute-0 podman[260280]: 2025-10-02 11:55:24.285486032 +0000 UTC m=+0.076292724 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 11:55:24 compute-0 sudo[260313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:24 compute-0 sudo[260313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:24 compute-0 sudo[260313]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:24 compute-0 sudo[260341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:55:24 compute-0 sudo[260341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:24 compute-0 sudo[260341]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:24 compute-0 sudo[260367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:24 compute-0 sudo[260367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:24 compute-0 sudo[260367]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:55:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:24.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:55:24 compute-0 sudo[260392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 11:55:24 compute-0 sudo[260392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:24.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:24 compute-0 podman[260458]: 2025-10-02 11:55:24.785512253 +0000 UTC m=+0.036654996 container create 3b18630d1573aa5860d9cdd49f3bb608d3b7e7fb9a26518f6a210f59e3265c0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 11:55:24 compute-0 systemd[1]: Started libpod-conmon-3b18630d1573aa5860d9cdd49f3bb608d3b7e7fb9a26518f6a210f59e3265c0a.scope.
Oct 02 11:55:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:55:24 compute-0 podman[260458]: 2025-10-02 11:55:24.861739725 +0000 UTC m=+0.112882488 container init 3b18630d1573aa5860d9cdd49f3bb608d3b7e7fb9a26518f6a210f59e3265c0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swanson, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 11:55:24 compute-0 podman[260458]: 2025-10-02 11:55:24.769989378 +0000 UTC m=+0.021132141 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:55:24 compute-0 podman[260458]: 2025-10-02 11:55:24.870095287 +0000 UTC m=+0.121238030 container start 3b18630d1573aa5860d9cdd49f3bb608d3b7e7fb9a26518f6a210f59e3265c0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swanson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 11:55:24 compute-0 podman[260458]: 2025-10-02 11:55:24.873504189 +0000 UTC m=+0.124646952 container attach 3b18630d1573aa5860d9cdd49f3bb608d3b7e7fb9a26518f6a210f59e3265c0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swanson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 11:55:24 compute-0 suspicious_swanson[260474]: 167 167
Oct 02 11:55:24 compute-0 systemd[1]: libpod-3b18630d1573aa5860d9cdd49f3bb608d3b7e7fb9a26518f6a210f59e3265c0a.scope: Deactivated successfully.
Oct 02 11:55:24 compute-0 podman[260458]: 2025-10-02 11:55:24.876402119 +0000 UTC m=+0.127544862 container died 3b18630d1573aa5860d9cdd49f3bb608d3b7e7fb9a26518f6a210f59e3265c0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swanson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 11:55:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-664936e7345947f2ab314fb9f5697e85aa02ccea7a52c0e33f0540c51a383027-merged.mount: Deactivated successfully.
Oct 02 11:55:24 compute-0 podman[260458]: 2025-10-02 11:55:24.918563538 +0000 UTC m=+0.169706281 container remove 3b18630d1573aa5860d9cdd49f3bb608d3b7e7fb9a26518f6a210f59e3265c0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swanson, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 11:55:24 compute-0 systemd[1]: libpod-conmon-3b18630d1573aa5860d9cdd49f3bb608d3b7e7fb9a26518f6a210f59e3265c0a.scope: Deactivated successfully.
Oct 02 11:55:25 compute-0 podman[260498]: 2025-10-02 11:55:25.078066622 +0000 UTC m=+0.046771581 container create 087754b1415e7f99cf30ff30e6fe5a66370283022b5c4590dde8acd2f56593eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:55:25 compute-0 systemd[1]: Started libpod-conmon-087754b1415e7f99cf30ff30e6fe5a66370283022b5c4590dde8acd2f56593eb.scope.
Oct 02 11:55:25 compute-0 podman[260498]: 2025-10-02 11:55:25.055921367 +0000 UTC m=+0.024626346 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:55:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3815c4675eded114b0c1d0791d1286f077b10f22c6ad45f970a99ad8efcc720/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3815c4675eded114b0c1d0791d1286f077b10f22c6ad45f970a99ad8efcc720/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3815c4675eded114b0c1d0791d1286f077b10f22c6ad45f970a99ad8efcc720/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3815c4675eded114b0c1d0791d1286f077b10f22c6ad45f970a99ad8efcc720/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:55:25 compute-0 podman[260498]: 2025-10-02 11:55:25.190477958 +0000 UTC m=+0.159182927 container init 087754b1415e7f99cf30ff30e6fe5a66370283022b5c4590dde8acd2f56593eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_goldwasser, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 11:55:25 compute-0 podman[260498]: 2025-10-02 11:55:25.199691781 +0000 UTC m=+0.168396740 container start 087754b1415e7f99cf30ff30e6fe5a66370283022b5c4590dde8acd2f56593eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:55:25 compute-0 podman[260498]: 2025-10-02 11:55:25.20504134 +0000 UTC m=+0.173746319 container attach 087754b1415e7f99cf30ff30e6fe5a66370283022b5c4590dde8acd2f56593eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 11:55:25 compute-0 nova_compute[257802]: 2025-10-02 11:55:25.541 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:55:25 compute-0 nova_compute[257802]: 2025-10-02 11:55:25.543 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:55:25 compute-0 nova_compute[257802]: 2025-10-02 11:55:25.562 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:55:25 compute-0 nova_compute[257802]: 2025-10-02 11:55:25.562 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:55:25 compute-0 nova_compute[257802]: 2025-10-02 11:55:25.562 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:55:25 compute-0 nova_compute[257802]: 2025-10-02 11:55:25.562 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:55:25 compute-0 nova_compute[257802]: 2025-10-02 11:55:25.562 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:55:25 compute-0 nova_compute[257802]: 2025-10-02 11:55:25.562 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:55:25 compute-0 nova_compute[257802]: 2025-10-02 11:55:25.563 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 11:55:25 compute-0 nova_compute[257802]: 2025-10-02 11:55:25.563 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:55:25 compute-0 nova_compute[257802]: 2025-10-02 11:55:25.589 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:55:25 compute-0 nova_compute[257802]: 2025-10-02 11:55:25.590 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:55:25 compute-0 nova_compute[257802]: 2025-10-02 11:55:25.590 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:55:25 compute-0 nova_compute[257802]: 2025-10-02 11:55:25.590 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 11:55:25 compute-0 nova_compute[257802]: 2025-10-02 11:55:25.590 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:55:25 compute-0 ceph-mon[73607]: pgmap v830: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3131646479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:55:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/206332093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:55:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 11:55:26 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2728348652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:55:26 compute-0 nova_compute[257802]: 2025-10-02 11:55:26.036 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:55:26 compute-0 beautiful_goldwasser[260514]: {
Oct 02 11:55:26 compute-0 beautiful_goldwasser[260514]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 11:55:26 compute-0 beautiful_goldwasser[260514]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:55:26 compute-0 beautiful_goldwasser[260514]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:55:26 compute-0 beautiful_goldwasser[260514]:         "osd_id": 1,
Oct 02 11:55:26 compute-0 beautiful_goldwasser[260514]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:55:26 compute-0 beautiful_goldwasser[260514]:         "type": "bluestore"
Oct 02 11:55:26 compute-0 beautiful_goldwasser[260514]:     }
Oct 02 11:55:26 compute-0 beautiful_goldwasser[260514]: }
Oct 02 11:55:26 compute-0 systemd[1]: libpod-087754b1415e7f99cf30ff30e6fe5a66370283022b5c4590dde8acd2f56593eb.scope: Deactivated successfully.
Oct 02 11:55:26 compute-0 podman[260498]: 2025-10-02 11:55:26.111340048 +0000 UTC m=+1.080045017 container died 087754b1415e7f99cf30ff30e6fe5a66370283022b5c4590dde8acd2f56593eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_goldwasser, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:55:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3815c4675eded114b0c1d0791d1286f077b10f22c6ad45f970a99ad8efcc720-merged.mount: Deactivated successfully.
Oct 02 11:55:26 compute-0 podman[260498]: 2025-10-02 11:55:26.172110256 +0000 UTC m=+1.140815215 container remove 087754b1415e7f99cf30ff30e6fe5a66370283022b5c4590dde8acd2f56593eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_goldwasser, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Oct 02 11:55:26 compute-0 systemd[1]: libpod-conmon-087754b1415e7f99cf30ff30e6fe5a66370283022b5c4590dde8acd2f56593eb.scope: Deactivated successfully.
Oct 02 11:55:26 compute-0 sudo[260392]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:26 compute-0 nova_compute[257802]: 2025-10-02 11:55:26.214 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 11:55:26 compute-0 nova_compute[257802]: 2025-10-02 11:55:26.215 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5172MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 11:55:26 compute-0 nova_compute[257802]: 2025-10-02 11:55:26.215 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:55:26 compute-0 nova_compute[257802]: 2025-10-02 11:55:26.216 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:55:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:55:26 compute-0 nova_compute[257802]: 2025-10-02 11:55:26.295 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 11:55:26 compute-0 nova_compute[257802]: 2025-10-02 11:55:26.296 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 11:55:26 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:55:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:55:26 compute-0 nova_compute[257802]: 2025-10-02 11:55:26.349 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:55:26 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:55:26 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 1472f44e-d7d2-4a6b-99d3-93732f5c1baf does not exist
Oct 02 11:55:26 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev fc2ff0ad-8ee2-4250-a626-b72be630a166 does not exist
Oct 02 11:55:26 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev b62646ba-693d-4df5-a1c9-a0e9f0dc16c1 does not exist
Oct 02 11:55:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:55:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:26.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:55:26 compute-0 sudo[260569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:26 compute-0 sudo[260569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:26 compute-0 sudo[260569]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:26 compute-0 sudo[260613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:26 compute-0 sudo[260613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:26 compute-0 sudo[260614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:55:26 compute-0 sudo[260614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:26 compute-0 sudo[260613]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:26 compute-0 sudo[260614]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:26 compute-0 sudo[260663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:26 compute-0 sudo[260663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:26 compute-0 sudo[260663]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:26.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 11:55:26 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1960780317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:55:26 compute-0 nova_compute[257802]: 2025-10-02 11:55:26.810 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:55:26 compute-0 nova_compute[257802]: 2025-10-02 11:55:26.818 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 11:55:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2728348652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:55:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1893648748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:55:26 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:55:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1013949214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:55:26 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:55:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1960780317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:55:26 compute-0 nova_compute[257802]: 2025-10-02 11:55:26.841 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 11:55:26 compute-0 nova_compute[257802]: 2025-10-02 11:55:26.842 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 11:55:26 compute-0 nova_compute[257802]: 2025-10-02 11:55:26.842 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:55:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:55:26.912 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:55:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:55:26.913 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:55:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:55:26.913 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:55:27 compute-0 nova_compute[257802]: 2025-10-02 11:55:27.379 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:55:27 compute-0 nova_compute[257802]: 2025-10-02 11:55:27.379 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 11:55:27 compute-0 nova_compute[257802]: 2025-10-02 11:55:27.380 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 11:55:27 compute-0 nova_compute[257802]: 2025-10-02 11:55:27.492 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 11:55:27 compute-0 ceph-mon[73607]: pgmap v831: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:28.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:28.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:28 compute-0 ceph-mon[73607]: pgmap v832: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:30.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:30.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:31 compute-0 ceph-mon[73607]: pgmap v833: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:32.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:55:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:32.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:55:33 compute-0 ceph-mon[73607]: pgmap v834: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:55:33.426285) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406133426320, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2081, "num_deletes": 251, "total_data_size": 3956885, "memory_usage": 4028512, "flush_reason": "Manual Compaction"}
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406133454971, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 3870551, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17237, "largest_seqno": 19317, "table_properties": {"data_size": 3861083, "index_size": 6026, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18835, "raw_average_key_size": 20, "raw_value_size": 3842313, "raw_average_value_size": 4096, "num_data_blocks": 269, "num_entries": 938, "num_filter_entries": 938, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405910, "oldest_key_time": 1759405910, "file_creation_time": 1759406133, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 28843 microseconds, and 13055 cpu microseconds.
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:55:33.455123) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 3870551 bytes OK
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:55:33.455181) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:55:33.456928) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:55:33.456966) EVENT_LOG_v1 {"time_micros": 1759406133456960, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:55:33.456983) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 3948499, prev total WAL file size 3948499, number of live WAL files 2.
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:55:33.458248) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(3779KB)], [41(7829KB)]
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406133458275, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 11888402, "oldest_snapshot_seqno": -1}
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4428 keys, 9816386 bytes, temperature: kUnknown
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406133536260, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 9816386, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9784039, "index_size": 20205, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 110521, "raw_average_key_size": 24, "raw_value_size": 9701045, "raw_average_value_size": 2190, "num_data_blocks": 841, "num_entries": 4428, "num_filter_entries": 4428, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759406133, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:55:33.536565) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 9816386 bytes
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:55:33.538638) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.3 rd, 125.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 7.6 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(5.6) write-amplify(2.5) OK, records in: 4947, records dropped: 519 output_compression: NoCompression
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:55:33.538663) EVENT_LOG_v1 {"time_micros": 1759406133538650, "job": 20, "event": "compaction_finished", "compaction_time_micros": 78077, "compaction_time_cpu_micros": 20756, "output_level": 6, "num_output_files": 1, "total_output_size": 9816386, "num_input_records": 4947, "num_output_records": 4428, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406133539555, "job": 20, "event": "table_file_deletion", "file_number": 43}
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406133541947, "job": 20, "event": "table_file_deletion", "file_number": 41}
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:55:33.458171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:55:33.542082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:55:33.542088) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:55:33.542090) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:55:33.542091) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:55:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:55:33.542093) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:55:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:34.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:34.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:35 compute-0 ceph-mon[73607]: pgmap v835: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:55:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:36.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:55:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:36.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:37 compute-0 ceph-mon[73607]: pgmap v836: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:38.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:38.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:38 compute-0 podman[260696]: 2025-10-02 11:55:38.933854033 +0000 UTC m=+0.067680556 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 11:55:38 compute-0 ceph-mon[73607]: pgmap v837: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:40.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:40.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:41 compute-0 ceph-mon[73607]: pgmap v838: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_11:55:42
Oct 02 11:55:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:55:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 11:55:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'backups', '.mgr', 'volumes', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'images', 'vms', 'cephfs.cephfs.meta']
Oct 02 11:55:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:55:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:42.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:55:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:55:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:55:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:55:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:55:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:55:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:55:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:55:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:55:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:55:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:55:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:55:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:55:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:55:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:55:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:55:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:42.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:43 compute-0 ceph-mon[73607]: pgmap v839: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:44.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:55:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:44.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:55:44 compute-0 ceph-mon[73607]: pgmap v840: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:46.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:46 compute-0 sudo[260721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:46 compute-0 sudo[260721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:46 compute-0 sudo[260721]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:46 compute-0 sudo[260746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:46 compute-0 sudo[260746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:46 compute-0 sudo[260746]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:46.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:47 compute-0 ceph-mon[73607]: pgmap v841: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:48.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:55:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:48.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:55:49 compute-0 ceph-mon[73607]: pgmap v842: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:49 compute-0 podman[260773]: 2025-10-02 11:55:49.913737695 +0000 UTC m=+0.051558647 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 11:55:49 compute-0 podman[260772]: 2025-10-02 11:55:49.925958351 +0000 UTC m=+0.062560333 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 02 11:55:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:50.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:50.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:51 compute-0 ceph-mon[73607]: pgmap v843: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:52.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:52.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:52 compute-0 ceph-mon[73607]: pgmap v844: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:53 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 11:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:55:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:54.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:54.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:54 compute-0 podman[260816]: 2025-10-02 11:55:54.962648396 +0000 UTC m=+0.100557040 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:55:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 11:55:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/487431768' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 11:55:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 11:55:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/487431768' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 11:55:55 compute-0 ceph-mon[73607]: pgmap v845: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/487431768' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 11:55:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/487431768' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 11:55:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:55:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:56.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:55:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:56.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:57 compute-0 ceph-mon[73607]: pgmap v846: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:58.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:55:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:58.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:59 compute-0 ceph-mon[73607]: pgmap v847: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:00.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:00.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:00 compute-0 ceph-mon[73607]: pgmap v848: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:02.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:02.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:02 compute-0 ceph-mon[73607]: pgmap v849: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:04.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:04.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:04 compute-0 ceph-mon[73607]: pgmap v850: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:56:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:06.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:56:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:06.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:06 compute-0 sudo[260849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:06 compute-0 sudo[260849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:06 compute-0 sudo[260849]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:06 compute-0 sudo[260874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:06 compute-0 sudo[260874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:06 compute-0 sudo[260874]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:07 compute-0 ceph-mon[73607]: pgmap v851: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:08.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:56:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:08.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:56:09 compute-0 ceph-mon[73607]: pgmap v852: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:09 compute-0 podman[260900]: 2025-10-02 11:56:09.930986017 +0000 UTC m=+0.072796380 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 11:56:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:10.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:56:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:10.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:56:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:11 compute-0 ceph-mon[73607]: pgmap v853: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:12.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:56:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:56:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:56:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:56:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:56:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:56:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:12.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:12 compute-0 ceph-mon[73607]: pgmap v854: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:56:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:14.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:56:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:56:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:14.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:56:15 compute-0 ceph-mon[73607]: pgmap v855: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:16.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:56:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:16.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:56:16 compute-0 rsyslogd[1007]: imjournal from <np0005465986:ceph-mgr>: begin to drop messages due to rate-limiting
Oct 02 11:56:17 compute-0 ceph-mon[73607]: pgmap v856: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:56:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:18.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:56:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:56:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:18.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:56:19 compute-0 ceph-mon[73607]: pgmap v857: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:20.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:20.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:20 compute-0 podman[260926]: 2025-10-02 11:56:20.914598159 +0000 UTC m=+0.046211018 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 11:56:20 compute-0 podman[260925]: 2025-10-02 11:56:20.920711167 +0000 UTC m=+0.053832492 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 11:56:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:21 compute-0 ceph-mon[73607]: pgmap v858: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:56:21.440 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 11:56:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:56:21.441 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 11:56:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:56:21.442 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 11:56:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:56:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:22.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:56:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:22.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:23 compute-0 ceph-mon[73607]: pgmap v859: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:24 compute-0 nova_compute[257802]: 2025-10-02 11:56:24.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:56:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:24.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:24.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:24 compute-0 ceph-mon[73607]: pgmap v860: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:25 compute-0 nova_compute[257802]: 2025-10-02 11:56:25.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:56:25 compute-0 nova_compute[257802]: 2025-10-02 11:56:25.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:56:25 compute-0 nova_compute[257802]: 2025-10-02 11:56:25.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 11:56:25 compute-0 podman[260965]: 2025-10-02 11:56:25.955714052 +0000 UTC m=+0.095243422 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller)
Oct 02 11:56:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1295016656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:56:26 compute-0 nova_compute[257802]: 2025-10-02 11:56:26.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:56:26 compute-0 nova_compute[257802]: 2025-10-02 11:56:26.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 11:56:26 compute-0 nova_compute[257802]: 2025-10-02 11:56:26.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 11:56:26 compute-0 nova_compute[257802]: 2025-10-02 11:56:26.247 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 11:56:26 compute-0 nova_compute[257802]: 2025-10-02 11:56:26.247 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:56:26 compute-0 nova_compute[257802]: 2025-10-02 11:56:26.247 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:56:26 compute-0 nova_compute[257802]: 2025-10-02 11:56:26.433 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:56:26 compute-0 nova_compute[257802]: 2025-10-02 11:56:26.433 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:56:26 compute-0 nova_compute[257802]: 2025-10-02 11:56:26.433 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:56:26 compute-0 nova_compute[257802]: 2025-10-02 11:56:26.433 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 11:56:26 compute-0 nova_compute[257802]: 2025-10-02 11:56:26.434 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:56:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:26.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:26.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 11:56:26 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/693735087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:56:26 compute-0 nova_compute[257802]: 2025-10-02 11:56:26.872 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:56:26 compute-0 sudo[261014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:26 compute-0 sudo[261014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:56:26.913 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:56:26.913 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:56:26.914 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:56:26 compute-0 sudo[261014]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:26 compute-0 sudo[261041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:56:26 compute-0 sudo[261041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:26 compute-0 sudo[261041]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:27 compute-0 sudo[261062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:27 compute-0 sudo[261062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:27 compute-0 sudo[261062]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:27 compute-0 nova_compute[257802]: 2025-10-02 11:56:27.038 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 11:56:27 compute-0 nova_compute[257802]: 2025-10-02 11:56:27.039 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5243MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 11:56:27 compute-0 nova_compute[257802]: 2025-10-02 11:56:27.039 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:56:27 compute-0 nova_compute[257802]: 2025-10-02 11:56:27.040 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:56:27 compute-0 sudo[261089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:27 compute-0 sudo[261089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:27 compute-0 sudo[261089]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:27 compute-0 sudo[261114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:27 compute-0 sudo[261114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:27 compute-0 sudo[261114]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:27 compute-0 sudo[261139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:56:27 compute-0 sudo[261139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3620511028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:56:27 compute-0 ceph-mon[73607]: pgmap v861: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/693735087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:56:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1602707873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:56:27 compute-0 nova_compute[257802]: 2025-10-02 11:56:27.295 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 11:56:27 compute-0 nova_compute[257802]: 2025-10-02 11:56:27.295 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 11:56:27 compute-0 nova_compute[257802]: 2025-10-02 11:56:27.342 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:56:27 compute-0 sudo[261139]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:56:27 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:56:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:56:27 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:56:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:56:27 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:56:27 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 4c3e50f8-b531-4de3-b179-3c7094d95d68 does not exist
Oct 02 11:56:27 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev c0edaf3f-902b-4493-addb-65bad7a1f65c does not exist
Oct 02 11:56:27 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 7474110c-1b99-4033-ab19-f8dd3a751eab does not exist
Oct 02 11:56:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:56:27 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:56:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:56:27 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:56:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:56:27 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:56:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 11:56:27 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2885499514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:56:27 compute-0 nova_compute[257802]: 2025-10-02 11:56:27.788 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:56:27 compute-0 sudo[261217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:27 compute-0 sudo[261217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:27 compute-0 nova_compute[257802]: 2025-10-02 11:56:27.797 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 11:56:27 compute-0 sudo[261217]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:27 compute-0 nova_compute[257802]: 2025-10-02 11:56:27.820 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 11:56:27 compute-0 nova_compute[257802]: 2025-10-02 11:56:27.821 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 11:56:27 compute-0 nova_compute[257802]: 2025-10-02 11:56:27.822 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:56:27 compute-0 sudo[261244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:56:27 compute-0 sudo[261244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:27 compute-0 sudo[261244]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:27 compute-0 sudo[261269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:27 compute-0 sudo[261269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:27 compute-0 sudo[261269]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:27 compute-0 sudo[261294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:56:27 compute-0 sudo[261294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:28 compute-0 podman[261358]: 2025-10-02 11:56:28.292738009 +0000 UTC m=+0.025207120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:56:28 compute-0 podman[261358]: 2025-10-02 11:56:28.401062056 +0000 UTC m=+0.133531147 container create f7aa9cadfc8d0f1e5d711b84ce79ea05ffc82e66acc175a15054f6dd32cafcff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 11:56:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:56:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:56:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:56:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:56:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:56:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:56:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2885499514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:56:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3674817225' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:56:28 compute-0 systemd[1]: Started libpod-conmon-f7aa9cadfc8d0f1e5d711b84ce79ea05ffc82e66acc175a15054f6dd32cafcff.scope.
Oct 02 11:56:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:56:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:28.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:28 compute-0 podman[261358]: 2025-10-02 11:56:28.613514229 +0000 UTC m=+0.345983320 container init f7aa9cadfc8d0f1e5d711b84ce79ea05ffc82e66acc175a15054f6dd32cafcff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 11:56:28 compute-0 podman[261358]: 2025-10-02 11:56:28.62263807 +0000 UTC m=+0.355107161 container start f7aa9cadfc8d0f1e5d711b84ce79ea05ffc82e66acc175a15054f6dd32cafcff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_leavitt, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 11:56:28 compute-0 gifted_leavitt[261375]: 167 167
Oct 02 11:56:28 compute-0 systemd[1]: libpod-f7aa9cadfc8d0f1e5d711b84ce79ea05ffc82e66acc175a15054f6dd32cafcff.scope: Deactivated successfully.
Oct 02 11:56:28 compute-0 podman[261358]: 2025-10-02 11:56:28.657219385 +0000 UTC m=+0.389688506 container attach f7aa9cadfc8d0f1e5d711b84ce79ea05ffc82e66acc175a15054f6dd32cafcff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 11:56:28 compute-0 podman[261358]: 2025-10-02 11:56:28.657627906 +0000 UTC m=+0.390096997 container died f7aa9cadfc8d0f1e5d711b84ce79ea05ffc82e66acc175a15054f6dd32cafcff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 11:56:28 compute-0 nova_compute[257802]: 2025-10-02 11:56:28.672 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:56:28 compute-0 nova_compute[257802]: 2025-10-02 11:56:28.674 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:56:28 compute-0 nova_compute[257802]: 2025-10-02 11:56:28.674 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:56:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:28.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-91c288efd1e236b71a978f2c53ae9f50aae84ef636d7d92f300838b8bf9896e3-merged.mount: Deactivated successfully.
Oct 02 11:56:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:56:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:30.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:56:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:30.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:56:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:32.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:56:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:32.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:56:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:34.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:56:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:34.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:36.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:36 compute-0 ceph-mds[95441]: mds.beacon.cephfs.compute-0.odxjnj missed beacon ack from the monitors
Oct 02 11:56:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:56:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:36.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:56:38 compute-0 PackageKit[188085]: daemon quit
Oct 02 11:56:38 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 02 11:56:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:38.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:56:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:38.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:56:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:40.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:40 compute-0 ceph-mds[95441]: mds.beacon.cephfs.compute-0.odxjnj missed beacon ack from the monitors
Oct 02 11:56:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.002000048s ======
Oct 02 11:56:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:40.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Oct 02 11:56:42 compute-0 ceph-mon[73607]: pgmap v862: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_11:56:42
Oct 02 11:56:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:56:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 11:56:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'vms', 'images', 'volumes']
Oct 02 11:56:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:56:42 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Oct 02 11:56:42 compute-0 ceph-mon[73607]: paxos.0).electionLogic(15) init, last seen epoch 15, mid-election, bumping
Oct 02 11:56:42 compute-0 podman[261358]: 2025-10-02 11:56:42.491111979 +0000 UTC m=+14.223581080 container remove f7aa9cadfc8d0f1e5d711b84ce79ea05ffc82e66acc175a15054f6dd32cafcff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:56:42 compute-0 systemd[1]: libpod-conmon-f7aa9cadfc8d0f1e5d711b84ce79ea05ffc82e66acc175a15054f6dd32cafcff.scope: Deactivated successfully.
Oct 02 11:56:42 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 11:56:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:42.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:42 compute-0 podman[261399]: 2025-10-02 11:56:42.596704803 +0000 UTC m=+1.708494530 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 11:56:42 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct 02 11:56:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:56:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:56:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:56:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:56:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:56:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:56:42 compute-0 podman[261426]: 2025-10-02 11:56:42.658753503 +0000 UTC m=+0.029863027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:56:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:56:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:56:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:56:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:56:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:56:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:56:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:56:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:56:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:56:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:56:42 compute-0 podman[261426]: 2025-10-02 11:56:42.844586337 +0000 UTC m=+0.215695851 container create f1c85e982745e74a86d837973160ff589166d6a476baa76d4c3d43b62c6a81ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wing, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:56:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:42.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:42 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 02 11:56:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 11:56:42 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gpiyct=up:active} 2 up:standby
Oct 02 11:56:42 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Oct 02 11:56:42 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.fmcstn(active, since 25m), standbys: compute-2.rbjjpf, compute-1.ypnrbl
Oct 02 11:56:42 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 02 11:56:43 compute-0 systemd[1]: Started libpod-conmon-f1c85e982745e74a86d837973160ff589166d6a476baa76d4c3d43b62c6a81ee.scope.
Oct 02 11:56:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:56:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7629575311d6c996cd9ea866e156a8bdc5bf26502cb84eb5613ed0da76feb76d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:56:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7629575311d6c996cd9ea866e156a8bdc5bf26502cb84eb5613ed0da76feb76d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:56:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7629575311d6c996cd9ea866e156a8bdc5bf26502cb84eb5613ed0da76feb76d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:56:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7629575311d6c996cd9ea866e156a8bdc5bf26502cb84eb5613ed0da76feb76d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:56:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7629575311d6c996cd9ea866e156a8bdc5bf26502cb84eb5613ed0da76feb76d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:56:43 compute-0 podman[261426]: 2025-10-02 11:56:43.146183836 +0000 UTC m=+0.517293390 container init f1c85e982745e74a86d837973160ff589166d6a476baa76d4c3d43b62c6a81ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 11:56:43 compute-0 podman[261426]: 2025-10-02 11:56:43.15607974 +0000 UTC m=+0.527189274 container start f1c85e982745e74a86d837973160ff589166d6a476baa76d4c3d43b62c6a81ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wing, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Oct 02 11:56:43 compute-0 podman[261426]: 2025-10-02 11:56:43.270915053 +0000 UTC m=+0.642024587 container attach f1c85e982745e74a86d837973160ff589166d6a476baa76d4c3d43b62c6a81ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 11:56:44 compute-0 vibrant_wing[261444]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:56:44 compute-0 vibrant_wing[261444]: --> relative data size: 1.0
Oct 02 11:56:44 compute-0 vibrant_wing[261444]: --> All data devices are unavailable
Oct 02 11:56:44 compute-0 systemd[1]: libpod-f1c85e982745e74a86d837973160ff589166d6a476baa76d4c3d43b62c6a81ee.scope: Deactivated successfully.
Oct 02 11:56:44 compute-0 podman[261426]: 2025-10-02 11:56:44.035363498 +0000 UTC m=+1.406473022 container died f1c85e982745e74a86d837973160ff589166d6a476baa76d4c3d43b62c6a81ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wing, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:56:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-7629575311d6c996cd9ea866e156a8bdc5bf26502cb84eb5613ed0da76feb76d-merged.mount: Deactivated successfully.
Oct 02 11:56:44 compute-0 ceph-mon[73607]: pgmap v863: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:44 compute-0 ceph-mon[73607]: pgmap v864: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:44 compute-0 ceph-mon[73607]: pgmap v865: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:44 compute-0 ceph-mon[73607]: pgmap v866: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:44 compute-0 ceph-mon[73607]: mon.compute-2 calling monitor election
Oct 02 11:56:44 compute-0 ceph-mon[73607]: mon.compute-1 calling monitor election
Oct 02 11:56:44 compute-0 ceph-mon[73607]: pgmap v867: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:44 compute-0 ceph-mon[73607]: pgmap v868: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:44 compute-0 ceph-mon[73607]: mon.compute-0 calling monitor election
Oct 02 11:56:44 compute-0 ceph-mon[73607]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct 02 11:56:44 compute-0 ceph-mon[73607]: pgmap v869: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:44 compute-0 ceph-mon[73607]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 02 11:56:44 compute-0 ceph-mon[73607]: fsmap cephfs:1 {0=cephfs.compute-2.gpiyct=up:active} 2 up:standby
Oct 02 11:56:44 compute-0 ceph-mon[73607]: osdmap e126: 3 total, 3 up, 3 in
Oct 02 11:56:44 compute-0 ceph-mon[73607]: mgrmap e11: compute-0.fmcstn(active, since 25m), standbys: compute-2.rbjjpf, compute-1.ypnrbl
Oct 02 11:56:44 compute-0 ceph-mon[73607]: overall HEALTH_OK
Oct 02 11:56:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:44.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:44.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:45 compute-0 podman[261426]: 2025-10-02 11:56:45.514318146 +0000 UTC m=+2.885427670 container remove f1c85e982745e74a86d837973160ff589166d6a476baa76d4c3d43b62c6a81ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 11:56:45 compute-0 systemd[1]: libpod-conmon-f1c85e982745e74a86d837973160ff589166d6a476baa76d4c3d43b62c6a81ee.scope: Deactivated successfully.
Oct 02 11:56:45 compute-0 sudo[261294]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:45 compute-0 ceph-mon[73607]: pgmap v870: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:45 compute-0 sudo[261472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:45 compute-0 sudo[261472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:45 compute-0 sudo[261472]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:45 compute-0 sudo[261497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:56:45 compute-0 sudo[261497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:45 compute-0 sudo[261497]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:45 compute-0 sudo[261522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:45 compute-0 sudo[261522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:45 compute-0 sudo[261522]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:45 compute-0 sudo[261547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 11:56:45 compute-0 sudo[261547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:46 compute-0 podman[261612]: 2025-10-02 11:56:46.187621664 +0000 UTC m=+0.087140271 container create b236e24f4f31ac979f3c0c6df86bc1013af0d84d2d6a5f00a47d67c3d4af067b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 11:56:46 compute-0 podman[261612]: 2025-10-02 11:56:46.120041217 +0000 UTC m=+0.019559814 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:56:46 compute-0 systemd[1]: Started libpod-conmon-b236e24f4f31ac979f3c0c6df86bc1013af0d84d2d6a5f00a47d67c3d4af067b.scope.
Oct 02 11:56:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:56:46 compute-0 podman[261612]: 2025-10-02 11:56:46.374087403 +0000 UTC m=+0.273605990 container init b236e24f4f31ac979f3c0c6df86bc1013af0d84d2d6a5f00a47d67c3d4af067b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 11:56:46 compute-0 podman[261612]: 2025-10-02 11:56:46.382588052 +0000 UTC m=+0.282106619 container start b236e24f4f31ac979f3c0c6df86bc1013af0d84d2d6a5f00a47d67c3d4af067b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_galileo, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:56:46 compute-0 wizardly_galileo[261629]: 167 167
Oct 02 11:56:46 compute-0 systemd[1]: libpod-b236e24f4f31ac979f3c0c6df86bc1013af0d84d2d6a5f00a47d67c3d4af067b.scope: Deactivated successfully.
Oct 02 11:56:46 compute-0 podman[261612]: 2025-10-02 11:56:46.434865961 +0000 UTC m=+0.334384578 container attach b236e24f4f31ac979f3c0c6df86bc1013af0d84d2d6a5f00a47d67c3d4af067b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_galileo, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 11:56:46 compute-0 podman[261612]: 2025-10-02 11:56:46.435771494 +0000 UTC m=+0.335290061 container died b236e24f4f31ac979f3c0c6df86bc1013af0d84d2d6a5f00a47d67c3d4af067b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_galileo, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 11:56:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:46.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-639ea9709c5b28a3e8bea88cf59c817ccc2956c91db73179a5f4842937eb83a7-merged.mount: Deactivated successfully.
Oct 02 11:56:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:46.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:46 compute-0 podman[261612]: 2025-10-02 11:56:46.912215115 +0000 UTC m=+0.811733682 container remove b236e24f4f31ac979f3c0c6df86bc1013af0d84d2d6a5f00a47d67c3d4af067b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 11:56:46 compute-0 systemd[1]: libpod-conmon-b236e24f4f31ac979f3c0c6df86bc1013af0d84d2d6a5f00a47d67c3d4af067b.scope: Deactivated successfully.
Oct 02 11:56:47 compute-0 ceph-mon[73607]: pgmap v871: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:47 compute-0 podman[261654]: 2025-10-02 11:56:47.039482494 +0000 UTC m=+0.022726011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:56:47 compute-0 podman[261654]: 2025-10-02 11:56:47.146908454 +0000 UTC m=+0.130151951 container create 8e213ee2d1a8ff3413aa96816f35e9a9ba71acd98b1cd2bed5a560c9f52ca434 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_kapitsa, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:56:47 compute-0 sudo[261668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:47 compute-0 sudo[261668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:47 compute-0 sudo[261668]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:47 compute-0 sudo[261693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:47 compute-0 sudo[261693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:47 compute-0 sudo[261693]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:47 compute-0 systemd[1]: Started libpod-conmon-8e213ee2d1a8ff3413aa96816f35e9a9ba71acd98b1cd2bed5a560c9f52ca434.scope.
Oct 02 11:56:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:56:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbe38abd04f2ce9a0f1b2e2047d412619ab07cc1705489a1007baad1f5d18c59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:56:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbe38abd04f2ce9a0f1b2e2047d412619ab07cc1705489a1007baad1f5d18c59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:56:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbe38abd04f2ce9a0f1b2e2047d412619ab07cc1705489a1007baad1f5d18c59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:56:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbe38abd04f2ce9a0f1b2e2047d412619ab07cc1705489a1007baad1f5d18c59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:56:47 compute-0 podman[261654]: 2025-10-02 11:56:47.343627646 +0000 UTC m=+0.326871163 container init 8e213ee2d1a8ff3413aa96816f35e9a9ba71acd98b1cd2bed5a560c9f52ca434 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_kapitsa, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 11:56:47 compute-0 podman[261654]: 2025-10-02 11:56:47.350142807 +0000 UTC m=+0.333386314 container start 8e213ee2d1a8ff3413aa96816f35e9a9ba71acd98b1cd2bed5a560c9f52ca434 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 11:56:47 compute-0 podman[261654]: 2025-10-02 11:56:47.420661626 +0000 UTC m=+0.403905123 container attach 8e213ee2d1a8ff3413aa96816f35e9a9ba71acd98b1cd2bed5a560c9f52ca434 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 11:56:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]: {
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]:     "1": [
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]:         {
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]:             "devices": [
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]:                 "/dev/loop3"
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]:             ],
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]:             "lv_name": "ceph_lv0",
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]:             "lv_size": "7511998464",
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]:             "name": "ceph_lv0",
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]:             "tags": {
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]:                 "ceph.cluster_name": "ceph",
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]:                 "ceph.crush_device_class": "",
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]:                 "ceph.encrypted": "0",
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]:                 "ceph.osd_id": "1",
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]:                 "ceph.type": "block",
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]:                 "ceph.vdo": "0"
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]:             },
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]:             "type": "block",
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]:             "vg_name": "ceph_vg0"
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]:         }
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]:     ]
Oct 02 11:56:48 compute-0 magical_kapitsa[261721]: }
Oct 02 11:56:48 compute-0 systemd[1]: libpod-8e213ee2d1a8ff3413aa96816f35e9a9ba71acd98b1cd2bed5a560c9f52ca434.scope: Deactivated successfully.
Oct 02 11:56:48 compute-0 podman[261654]: 2025-10-02 11:56:48.149101573 +0000 UTC m=+1.132345090 container died 8e213ee2d1a8ff3413aa96816f35e9a9ba71acd98b1cd2bed5a560c9f52ca434 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_kapitsa, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:56:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbe38abd04f2ce9a0f1b2e2047d412619ab07cc1705489a1007baad1f5d18c59-merged.mount: Deactivated successfully.
Oct 02 11:56:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:48.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:48 compute-0 podman[261654]: 2025-10-02 11:56:48.835156635 +0000 UTC m=+1.818400132 container remove 8e213ee2d1a8ff3413aa96816f35e9a9ba71acd98b1cd2bed5a560c9f52ca434 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:56:48 compute-0 sudo[261547]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:48.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:48 compute-0 systemd[1]: libpod-conmon-8e213ee2d1a8ff3413aa96816f35e9a9ba71acd98b1cd2bed5a560c9f52ca434.scope: Deactivated successfully.
Oct 02 11:56:48 compute-0 sudo[261743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:48 compute-0 sudo[261743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:48 compute-0 sudo[261743]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:48 compute-0 sudo[261768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:56:48 compute-0 sudo[261768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:48 compute-0 sudo[261768]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:49 compute-0 sudo[261793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:49 compute-0 sudo[261793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:49 compute-0 sudo[261793]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:49 compute-0 sudo[261818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 11:56:49 compute-0 sudo[261818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:49 compute-0 ceph-mon[73607]: pgmap v872: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:49 compute-0 podman[261882]: 2025-10-02 11:56:49.427573257 +0000 UTC m=+0.022551077 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:56:49 compute-0 podman[261882]: 2025-10-02 11:56:49.522944259 +0000 UTC m=+0.117922069 container create 2fcc978483b3a974c158a383574e0daf2db0a7a74f78401d71c457813f6ffab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 11:56:49 compute-0 systemd[1]: Started libpod-conmon-2fcc978483b3a974c158a383574e0daf2db0a7a74f78401d71c457813f6ffab9.scope.
Oct 02 11:56:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:56:49 compute-0 podman[261882]: 2025-10-02 11:56:49.799427078 +0000 UTC m=+0.394404948 container init 2fcc978483b3a974c158a383574e0daf2db0a7a74f78401d71c457813f6ffab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_edison, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:56:49 compute-0 podman[261882]: 2025-10-02 11:56:49.811070646 +0000 UTC m=+0.406048496 container start 2fcc978483b3a974c158a383574e0daf2db0a7a74f78401d71c457813f6ffab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:56:49 compute-0 wizardly_edison[261899]: 167 167
Oct 02 11:56:49 compute-0 systemd[1]: libpod-2fcc978483b3a974c158a383574e0daf2db0a7a74f78401d71c457813f6ffab9.scope: Deactivated successfully.
Oct 02 11:56:49 compute-0 podman[261882]: 2025-10-02 11:56:49.953387725 +0000 UTC m=+0.548365575 container attach 2fcc978483b3a974c158a383574e0daf2db0a7a74f78401d71c457813f6ffab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_edison, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 11:56:49 compute-0 podman[261882]: 2025-10-02 11:56:49.953939099 +0000 UTC m=+0.548916949 container died 2fcc978483b3a974c158a383574e0daf2db0a7a74f78401d71c457813f6ffab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_edison, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:56:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9bf4d7fba44b7016d6fc5f6f0ab6e598d4ff64a25cd6a9e35b4e14c28a3a44f-merged.mount: Deactivated successfully.
Oct 02 11:56:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:50.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:50.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:51 compute-0 podman[261882]: 2025-10-02 11:56:51.017950513 +0000 UTC m=+1.612928323 container remove 2fcc978483b3a974c158a383574e0daf2db0a7a74f78401d71c457813f6ffab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_edison, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 11:56:51 compute-0 systemd[1]: libpod-conmon-2fcc978483b3a974c158a383574e0daf2db0a7a74f78401d71c457813f6ffab9.scope: Deactivated successfully.
Oct 02 11:56:51 compute-0 ceph-mon[73607]: pgmap v873: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:51 compute-0 podman[261944]: 2025-10-02 11:56:51.202414532 +0000 UTC m=+0.031140658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:56:51 compute-0 podman[261944]: 2025-10-02 11:56:51.36974859 +0000 UTC m=+0.198474716 container create 1bcc340558edeea1ce0c9e48b15c41a9524cb909a04608cde804f7b5192f8ab8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shockley, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 11:56:51 compute-0 podman[261919]: 2025-10-02 11:56:51.48045014 +0000 UTC m=+0.338014347 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:56:51 compute-0 systemd[1]: Started libpod-conmon-1bcc340558edeea1ce0c9e48b15c41a9524cb909a04608cde804f7b5192f8ab8.scope.
Oct 02 11:56:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:56:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a158b45dc3a2ccf7f8bfc2f9e48b4a5aec148a19acd5d54913a14567193a72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:56:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a158b45dc3a2ccf7f8bfc2f9e48b4a5aec148a19acd5d54913a14567193a72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:56:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a158b45dc3a2ccf7f8bfc2f9e48b4a5aec148a19acd5d54913a14567193a72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:56:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a158b45dc3a2ccf7f8bfc2f9e48b4a5aec148a19acd5d54913a14567193a72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:56:51 compute-0 podman[261944]: 2025-10-02 11:56:51.695551856 +0000 UTC m=+0.524278002 container init 1bcc340558edeea1ce0c9e48b15c41a9524cb909a04608cde804f7b5192f8ab8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shockley, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:56:51 compute-0 podman[261944]: 2025-10-02 11:56:51.701712167 +0000 UTC m=+0.530438273 container start 1bcc340558edeea1ce0c9e48b15c41a9524cb909a04608cde804f7b5192f8ab8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 11:56:51 compute-0 podman[261944]: 2025-10-02 11:56:51.812244264 +0000 UTC m=+0.640970390 container attach 1bcc340558edeea1ce0c9e48b15c41a9524cb909a04608cde804f7b5192f8ab8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 11:56:51 compute-0 podman[261920]: 2025-10-02 11:56:51.852923797 +0000 UTC m=+0.709144521 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 11:56:52 compute-0 wonderful_shockley[261980]: {
Oct 02 11:56:52 compute-0 wonderful_shockley[261980]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 11:56:52 compute-0 wonderful_shockley[261980]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:56:52 compute-0 wonderful_shockley[261980]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:56:52 compute-0 wonderful_shockley[261980]:         "osd_id": 1,
Oct 02 11:56:52 compute-0 wonderful_shockley[261980]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:56:52 compute-0 wonderful_shockley[261980]:         "type": "bluestore"
Oct 02 11:56:52 compute-0 wonderful_shockley[261980]:     }
Oct 02 11:56:52 compute-0 wonderful_shockley[261980]: }
Oct 02 11:56:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:52 compute-0 systemd[1]: libpod-1bcc340558edeea1ce0c9e48b15c41a9524cb909a04608cde804f7b5192f8ab8.scope: Deactivated successfully.
Oct 02 11:56:52 compute-0 podman[261944]: 2025-10-02 11:56:52.541960222 +0000 UTC m=+1.370686358 container died 1bcc340558edeea1ce0c9e48b15c41a9524cb909a04608cde804f7b5192f8ab8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct 02 11:56:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:52.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-80a158b45dc3a2ccf7f8bfc2f9e48b4a5aec148a19acd5d54913a14567193a72-merged.mount: Deactivated successfully.
Oct 02 11:56:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:52.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:53 compute-0 ceph-mon[73607]: pgmap v874: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:53 compute-0 podman[261944]: 2025-10-02 11:56:53.18773103 +0000 UTC m=+2.016457156 container remove 1bcc340558edeea1ce0c9e48b15c41a9524cb909a04608cde804f7b5192f8ab8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shockley, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 11:56:53 compute-0 systemd[1]: libpod-conmon-1bcc340558edeea1ce0c9e48b15c41a9524cb909a04608cde804f7b5192f8ab8.scope: Deactivated successfully.
Oct 02 11:56:53 compute-0 sudo[261818]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:56:53 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:56:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:56:53 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:56:53 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 456e81e1-4567-4b41-915b-497ba91b5395 does not exist
Oct 02 11:56:53 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev a54aefc9-b048-415d-9fef-8fba01c2913f does not exist
Oct 02 11:56:53 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev e6955062-e715-4150-96b8-1f991b114ac6 does not exist
Oct 02 11:56:53 compute-0 sudo[262013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:53 compute-0 sudo[262013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:53 compute-0 sudo[262013]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:53 compute-0 sudo[262038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:56:53 compute-0 sudo[262038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:53 compute-0 sudo[262038]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 11:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:56:54 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:56:54 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:56:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:56:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:54.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:56:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:54.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:55 compute-0 ceph-mon[73607]: pgmap v875: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1982814260' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 11:56:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1982814260' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 11:56:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:56.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:56:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:56.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:56:56 compute-0 podman[262065]: 2025-10-02 11:56:56.974026949 +0000 UTC m=+0.101771562 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 11:56:57 compute-0 ceph-mon[73607]: pgmap v876: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:58.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:56:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:58.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:59 compute-0 ceph-mon[73607]: pgmap v877: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:00.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:00.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:01 compute-0 ceph-mon[73607]: pgmap v878: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:02.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:02.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:03 compute-0 ceph-mon[73607]: pgmap v879: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:04.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:57:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:04.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:57:04 compute-0 ceph-mon[73607]: pgmap v880: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:57:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:06.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:57:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:06.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:07 compute-0 ceph-mon[73607]: pgmap v881: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:07 compute-0 sudo[262097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:57:07 compute-0 sudo[262097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:07 compute-0 sudo[262097]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:07 compute-0 sudo[262122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:57:07 compute-0 sudo[262122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:07 compute-0 sudo[262122]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:08.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:08.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:09 compute-0 ceph-mon[73607]: pgmap v882: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:10.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:10.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:11 compute-0 ceph-mon[73607]: pgmap v883: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:57:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:12.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:57:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:57:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:57:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:57:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:57:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:57:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:57:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:12.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:12 compute-0 podman[262150]: 2025-10-02 11:57:12.915616308 +0000 UTC m=+0.052310000 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 11:57:13 compute-0 ceph-mon[73607]: pgmap v884: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:14.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:57:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:14.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:57:15 compute-0 ceph-mon[73607]: pgmap v885: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:16.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:16.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:17 compute-0 ceph-mon[73607]: pgmap v886: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:18.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:57:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:18.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:57:19 compute-0 ceph-mon[73607]: pgmap v887: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:20.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:20.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:21 compute-0 ceph-mon[73607]: pgmap v888: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:21 compute-0 podman[262172]: 2025-10-02 11:57:21.902576452 +0000 UTC m=+0.047586444 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:57:21 compute-0 podman[262192]: 2025-10-02 11:57:21.988266586 +0000 UTC m=+0.060030912 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 11:57:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:22.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:22.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:23 compute-0 ceph-mon[73607]: pgmap v889: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:24.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:24.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:24 compute-0 ceph-mon[73607]: pgmap v890: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:25 compute-0 nova_compute[257802]: 2025-10-02 11:57:25.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:57:25 compute-0 nova_compute[257802]: 2025-10-02 11:57:25.152 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:57:26 compute-0 nova_compute[257802]: 2025-10-02 11:57:26.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:57:26 compute-0 nova_compute[257802]: 2025-10-02 11:57:26.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 11:57:26 compute-0 nova_compute[257802]: 2025-10-02 11:57:26.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 11:57:26 compute-0 nova_compute[257802]: 2025-10-02 11:57:26.143 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 11:57:26 compute-0 nova_compute[257802]: 2025-10-02 11:57:26.144 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:57:26 compute-0 nova_compute[257802]: 2025-10-02 11:57:26.144 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:57:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2032781063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:57:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:57:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:26.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:57:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:26.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:57:26.915 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:57:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:57:26.915 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:57:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:57:26.915 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:57:27 compute-0 nova_compute[257802]: 2025-10-02 11:57:27.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:57:27 compute-0 nova_compute[257802]: 2025-10-02 11:57:27.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 11:57:27 compute-0 nova_compute[257802]: 2025-10-02 11:57:27.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:57:27 compute-0 nova_compute[257802]: 2025-10-02 11:57:27.121 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:57:27 compute-0 nova_compute[257802]: 2025-10-02 11:57:27.122 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:57:27 compute-0 nova_compute[257802]: 2025-10-02 11:57:27.122 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:57:27 compute-0 nova_compute[257802]: 2025-10-02 11:57:27.122 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 11:57:27 compute-0 nova_compute[257802]: 2025-10-02 11:57:27.122 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:57:27 compute-0 sudo[262235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:57:27 compute-0 sudo[262235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:27 compute-0 sudo[262235]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 11:57:27 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1978944842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:57:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:27 compute-0 nova_compute[257802]: 2025-10-02 11:57:27.591 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:57:27 compute-0 sudo[262266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:57:27 compute-0 sudo[262266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:27 compute-0 sudo[262266]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:27 compute-0 podman[262259]: 2025-10-02 11:57:27.645009649 +0000 UTC m=+0.112364082 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible)
Oct 02 11:57:27 compute-0 ceph-mon[73607]: pgmap v891: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1526375032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:57:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3231675116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:57:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1978944842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:57:27 compute-0 nova_compute[257802]: 2025-10-02 11:57:27.744 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 11:57:27 compute-0 nova_compute[257802]: 2025-10-02 11:57:27.745 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5253MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 11:57:27 compute-0 nova_compute[257802]: 2025-10-02 11:57:27.745 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:57:27 compute-0 nova_compute[257802]: 2025-10-02 11:57:27.746 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:57:27 compute-0 nova_compute[257802]: 2025-10-02 11:57:27.858 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 11:57:27 compute-0 nova_compute[257802]: 2025-10-02 11:57:27.859 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 11:57:27 compute-0 nova_compute[257802]: 2025-10-02 11:57:27.877 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:57:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 11:57:28 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3563145931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:57:28 compute-0 nova_compute[257802]: 2025-10-02 11:57:28.297 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:57:28 compute-0 nova_compute[257802]: 2025-10-02 11:57:28.302 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 11:57:28 compute-0 nova_compute[257802]: 2025-10-02 11:57:28.321 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 11:57:28 compute-0 nova_compute[257802]: 2025-10-02 11:57:28.323 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 11:57:28 compute-0 nova_compute[257802]: 2025-10-02 11:57:28.323 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:57:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:57:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:28.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:57:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:28.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2855471575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:57:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3563145931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:57:29 compute-0 ceph-mon[73607]: pgmap v892: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:29 compute-0 nova_compute[257802]: 2025-10-02 11:57:29.324 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:57:29 compute-0 nova_compute[257802]: 2025-10-02 11:57:29.324 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:57:29 compute-0 nova_compute[257802]: 2025-10-02 11:57:29.324 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:57:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:30.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:30.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:31 compute-0 ceph-mon[73607]: pgmap v893: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:32.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:32.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:33 compute-0 ceph-mon[73607]: pgmap v894: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:34.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:34.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:35 compute-0 ceph-mon[73607]: pgmap v895: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:36.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:36.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:37 compute-0 ceph-mon[73607]: pgmap v896: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=404 latency=0.001000024s ======
Oct 02 11:57:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:38.071 +0000] "GET /info HTTP/1.1" 404 150 - "python-urllib3/1.26.5" - latency=0.001000024s
Oct 02 11:57:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - - [02/Oct/2025:11:57:38.104 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.000000000s
Oct 02 11:57:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:38.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:38.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:38 compute-0 ceph-mon[73607]: pgmap v897: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:57:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:40.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:57:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:57:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:40.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:57:41 compute-0 ceph-mon[73607]: pgmap v898: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_11:57:42
Oct 02 11:57:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:57:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 11:57:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['images', '.mgr', 'backups', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'vms', '.rgw.root']
Oct 02 11:57:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:57:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Oct 02 11:57:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Oct 02 11:57:42 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Oct 02 11:57:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:57:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:57:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:57:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:57:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:57:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:57:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:42.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:57:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:57:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:57:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:57:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:57:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:57:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:57:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:57:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:57:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:57:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:57:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:42.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:57:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Oct 02 11:57:43 compute-0 ceph-mon[73607]: osdmap e127: 3 total, 3 up, 3 in
Oct 02 11:57:43 compute-0 ceph-mon[73607]: pgmap v900: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Oct 02 11:57:43 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Oct 02 11:57:43 compute-0 podman[262341]: 2025-10-02 11:57:43.908569378 +0000 UTC m=+0.043615826 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:57:44 compute-0 ceph-mon[73607]: osdmap e128: 3 total, 3 up, 3 in
Oct 02 11:57:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:44.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 127 B/s wr, 0 op/s
Oct 02 11:57:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:44.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Oct 02 11:57:45 compute-0 ceph-mon[73607]: pgmap v902: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 127 B/s wr, 0 op/s
Oct 02 11:57:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Oct 02 11:57:45 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Oct 02 11:57:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:57:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:46.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:57:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 2.0 KiB/s wr, 7 op/s
Oct 02 11:57:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:46.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:46 compute-0 ceph-mon[73607]: osdmap e129: 3 total, 3 up, 3 in
Oct 02 11:57:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:57:47.624802) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406267625464, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1269, "num_deletes": 252, "total_data_size": 2129868, "memory_usage": 2172568, "flush_reason": "Manual Compaction"}
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406267636285, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1249437, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19319, "largest_seqno": 20586, "table_properties": {"data_size": 1244827, "index_size": 2006, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12117, "raw_average_key_size": 20, "raw_value_size": 1234610, "raw_average_value_size": 2092, "num_data_blocks": 91, "num_entries": 590, "num_filter_entries": 590, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406134, "oldest_key_time": 1759406134, "file_creation_time": 1759406267, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 11153 microseconds, and 5286 cpu microseconds.
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:57:47.636533) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1249437 bytes OK
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:57:47.636651) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:57:47.638605) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:57:47.638621) EVENT_LOG_v1 {"time_micros": 1759406267638615, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:57:47.638639) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 2124321, prev total WAL file size 2124321, number of live WAL files 2.
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:57:47.640124) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373535' seq:0, type:0; will stop at (end)
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1220KB)], [44(9586KB)]
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406267640191, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 11065823, "oldest_snapshot_seqno": -1}
Oct 02 11:57:47 compute-0 sudo[262362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:57:47 compute-0 sudo[262362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:47 compute-0 sudo[262362]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4549 keys, 8028883 bytes, temperature: kUnknown
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406267700863, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 8028883, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7998731, "index_size": 17690, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11397, "raw_key_size": 113485, "raw_average_key_size": 24, "raw_value_size": 7916531, "raw_average_value_size": 1740, "num_data_blocks": 732, "num_entries": 4549, "num_filter_entries": 4549, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759406267, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:57:47.701093) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8028883 bytes
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:57:47.702356) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.2 rd, 132.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 9.4 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(15.3) write-amplify(6.4) OK, records in: 5018, records dropped: 469 output_compression: NoCompression
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:57:47.702376) EVENT_LOG_v1 {"time_micros": 1759406267702366, "job": 22, "event": "compaction_finished", "compaction_time_micros": 60743, "compaction_time_cpu_micros": 18996, "output_level": 6, "num_output_files": 1, "total_output_size": 8028883, "num_input_records": 5018, "num_output_records": 4549, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406267702737, "job": 22, "event": "table_file_deletion", "file_number": 46}
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406267704982, "job": 22, "event": "table_file_deletion", "file_number": 44}
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:57:47.640025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:57:47.705028) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:57:47.705031) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:57:47.705033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:57:47.705035) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:57:47 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:57:47.705037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:57:47 compute-0 sudo[262387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:57:47 compute-0 sudo[262387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:47 compute-0 sudo[262387]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:48 compute-0 ceph-mon[73607]: pgmap v904: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 2.0 KiB/s wr, 7 op/s
Oct 02 11:57:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:48.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 16 MiB data, 169 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 2.5 MiB/s wr, 29 op/s
Oct 02 11:57:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:48.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Oct 02 11:57:49 compute-0 ceph-mon[73607]: pgmap v905: 305 pgs: 305 active+clean; 16 MiB data, 169 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 2.5 MiB/s wr, 29 op/s
Oct 02 11:57:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Oct 02 11:57:49 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Oct 02 11:57:50 compute-0 ceph-mon[73607]: osdmap e130: 3 total, 3 up, 3 in
Oct 02 11:57:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:50.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 5.7 MiB/s wr, 52 op/s
Oct 02 11:57:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:57:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:50.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:57:51 compute-0 ceph-mon[73607]: pgmap v907: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 5.7 MiB/s wr, 52 op/s
Oct 02 11:57:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Oct 02 11:57:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:52.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 02 11:57:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Oct 02 11:57:52 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Oct 02 11:57:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:52.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:52 compute-0 podman[262415]: 2025-10-02 11:57:52.934980555 +0000 UTC m=+0.075408541 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 11:57:52 compute-0 podman[262416]: 2025-10-02 11:57:52.967159589 +0000 UTC m=+0.089775495 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 11:57:53 compute-0 sudo[262456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:57:53 compute-0 sudo[262456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:53 compute-0 sudo[262456]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:53 compute-0 sudo[262481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:57:53 compute-0 sudo[262481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:53 compute-0 sudo[262481]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:53 compute-0 sudo[262506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:57:53 compute-0 sudo[262506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:53 compute-0 sudo[262506]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:54 compute-0 sudo[262531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 11:57:54 compute-0 sudo[262531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:57:54 compute-0 ceph-mon[73607]: pgmap v908: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 02 11:57:54 compute-0 ceph-mon[73607]: osdmap e131: 3 total, 3 up, 3 in
Oct 02 11:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019029609854829704 of space, bias 1.0, pg target 0.5708882956448911 quantized to 32 (current 32)
Oct 02 11:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 11:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:57:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:54.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:54 compute-0 podman[262628]: 2025-10-02 11:57:54.738662283 +0000 UTC m=+0.285134895 container exec 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:57:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 5.1 MiB/s wr, 41 op/s
Oct 02 11:57:54 compute-0 podman[262628]: 2025-10-02 11:57:54.832475876 +0000 UTC m=+0.378948408 container exec_died 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:57:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:57:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:54.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:57:55 compute-0 ceph-mon[73607]: pgmap v910: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 5.1 MiB/s wr, 41 op/s
Oct 02 11:57:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:57:55 compute-0 podman[262766]: 2025-10-02 11:57:55.994292383 +0000 UTC m=+0.484886662 container exec 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 11:57:56 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:57:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:57:56 compute-0 podman[262766]: 2025-10-02 11:57:56.183269384 +0000 UTC m=+0.673863603 container exec_died 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 11:57:56 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:57:56 compute-0 podman[262832]: 2025-10-02 11:57:56.709025772 +0000 UTC m=+0.175930191 container exec a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, name=keepalived, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, description=keepalived for Ceph, release=1793, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived)
Oct 02 11:57:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:56.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 3.1 MiB/s wr, 24 op/s
Oct 02 11:57:56 compute-0 podman[262832]: 2025-10-02 11:57:56.924075886 +0000 UTC m=+0.390980285 container exec_died a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, vcs-type=git, architecture=x86_64, name=keepalived, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 02 11:57:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:56.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:57:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:57:57 compute-0 ceph-mon[73607]: pgmap v911: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 3.1 MiB/s wr, 24 op/s
Oct 02 11:57:57 compute-0 sudo[262531]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:57:57 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:57:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:57:57 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:57:57 compute-0 sudo[262884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:57:57 compute-0 sudo[262884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:57 compute-0 sudo[262884]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:57 compute-0 sudo[262909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:57:57 compute-0 sudo[262909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:57 compute-0 sudo[262909]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:57 compute-0 sudo[262934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:57:57 compute-0 sudo[262934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:57 compute-0 sudo[262934]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:57 compute-0 sudo[262959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:57:57 compute-0 sudo[262959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:57 compute-0 podman[262983]: 2025-10-02 11:57:57.791083461 +0000 UTC m=+0.086431363 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller)
Oct 02 11:57:58 compute-0 sudo[262959]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:57:58 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:57:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:57:58 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:57:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:57:58 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:57:58 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 27f315a0-8ea8-47b0-9a97-9e622119e661 does not exist
Oct 02 11:57:58 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev c7bc0060-f689-4336-87ce-359d616a4ff6 does not exist
Oct 02 11:57:58 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 22535a63-f3f4-4541-a4d0-cff3012f78e8 does not exist
Oct 02 11:57:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:57:58 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:57:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:57:58 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:57:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:57:58 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:57:58 compute-0 sudo[263042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:57:58 compute-0 sudo[263042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:58 compute-0 sudo[263042]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:58 compute-0 sudo[263067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:57:58 compute-0 sudo[263067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:58 compute-0 sudo[263067]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:58 compute-0 sudo[263092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:57:58 compute-0 sudo[263092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:58 compute-0 sudo[263092]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:58 compute-0 sudo[263117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:57:58 compute-0 sudo[263117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:58 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:57:58 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:57:58 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:57:58 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:57:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:58.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 2.6 MiB/s wr, 20 op/s
Oct 02 11:57:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:57:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:58.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:59 compute-0 podman[263183]: 2025-10-02 11:57:59.006582921 +0000 UTC m=+0.088303109 container create 56ab9de8ae63e0cdd338c8c5c65b3961eb47eece1fb60fcdb53e6610569e3f08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:57:59 compute-0 podman[263183]: 2025-10-02 11:57:58.94045669 +0000 UTC m=+0.022176898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:57:59 compute-0 systemd[1]: Started libpod-conmon-56ab9de8ae63e0cdd338c8c5c65b3961eb47eece1fb60fcdb53e6610569e3f08.scope.
Oct 02 11:57:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:57:59 compute-0 podman[263183]: 2025-10-02 11:57:59.129253007 +0000 UTC m=+0.210973225 container init 56ab9de8ae63e0cdd338c8c5c65b3961eb47eece1fb60fcdb53e6610569e3f08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_merkle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:57:59 compute-0 podman[263183]: 2025-10-02 11:57:59.136449424 +0000 UTC m=+0.218169612 container start 56ab9de8ae63e0cdd338c8c5c65b3961eb47eece1fb60fcdb53e6610569e3f08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:57:59 compute-0 elastic_merkle[263200]: 167 167
Oct 02 11:57:59 compute-0 systemd[1]: libpod-56ab9de8ae63e0cdd338c8c5c65b3961eb47eece1fb60fcdb53e6610569e3f08.scope: Deactivated successfully.
Oct 02 11:57:59 compute-0 conmon[263200]: conmon 56ab9de8ae63e0cdd338 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-56ab9de8ae63e0cdd338c8c5c65b3961eb47eece1fb60fcdb53e6610569e3f08.scope/container/memory.events
Oct 02 11:57:59 compute-0 podman[263183]: 2025-10-02 11:57:59.151319861 +0000 UTC m=+0.233040069 container attach 56ab9de8ae63e0cdd338c8c5c65b3961eb47eece1fb60fcdb53e6610569e3f08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_merkle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 11:57:59 compute-0 podman[263183]: 2025-10-02 11:57:59.151953217 +0000 UTC m=+0.233673415 container died 56ab9de8ae63e0cdd338c8c5c65b3961eb47eece1fb60fcdb53e6610569e3f08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_merkle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 11:57:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfa3824ee0c37eb6e60369d822711795ebe77fe499936d74e0fd558817d7e712-merged.mount: Deactivated successfully.
Oct 02 11:57:59 compute-0 podman[263183]: 2025-10-02 11:57:59.328213754 +0000 UTC m=+0.409933942 container remove 56ab9de8ae63e0cdd338c8c5c65b3961eb47eece1fb60fcdb53e6610569e3f08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_merkle, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 11:57:59 compute-0 systemd[1]: libpod-conmon-56ab9de8ae63e0cdd338c8c5c65b3961eb47eece1fb60fcdb53e6610569e3f08.scope: Deactivated successfully.
Oct 02 11:57:59 compute-0 podman[263225]: 2025-10-02 11:57:59.470876153 +0000 UTC m=+0.021641585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:57:59 compute-0 podman[263225]: 2025-10-02 11:57:59.572735045 +0000 UTC m=+0.123500457 container create f5e077f18b9163f1faaa0d190d76db7d7da31d88e3e1e762ad9205a3d6336eca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dewdney, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 11:57:59 compute-0 systemd[1]: Started libpod-conmon-f5e077f18b9163f1faaa0d190d76db7d7da31d88e3e1e762ad9205a3d6336eca.scope.
Oct 02 11:57:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:57:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39075eec55a1870e169c50a182d9cfbb37b47792460af317532c70f2c1e120a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:57:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39075eec55a1870e169c50a182d9cfbb37b47792460af317532c70f2c1e120a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:57:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39075eec55a1870e169c50a182d9cfbb37b47792460af317532c70f2c1e120a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:57:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39075eec55a1870e169c50a182d9cfbb37b47792460af317532c70f2c1e120a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:57:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39075eec55a1870e169c50a182d9cfbb37b47792460af317532c70f2c1e120a7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:57:59 compute-0 podman[263225]: 2025-10-02 11:57:59.751248608 +0000 UTC m=+0.302014040 container init f5e077f18b9163f1faaa0d190d76db7d7da31d88e3e1e762ad9205a3d6336eca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dewdney, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 11:57:59 compute-0 podman[263225]: 2025-10-02 11:57:59.75861924 +0000 UTC m=+0.309384652 container start f5e077f18b9163f1faaa0d190d76db7d7da31d88e3e1e762ad9205a3d6336eca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dewdney, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:57:59 compute-0 podman[263225]: 2025-10-02 11:57:59.767622352 +0000 UTC m=+0.318387774 container attach f5e077f18b9163f1faaa0d190d76db7d7da31d88e3e1e762ad9205a3d6336eca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 11:57:59 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:57:59 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:57:59 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:57:59 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:57:59 compute-0 ceph-mon[73607]: pgmap v912: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 2.6 MiB/s wr, 20 op/s
Oct 02 11:58:00 compute-0 hungry_dewdney[263242]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:58:00 compute-0 hungry_dewdney[263242]: --> relative data size: 1.0
Oct 02 11:58:00 compute-0 hungry_dewdney[263242]: --> All data devices are unavailable
Oct 02 11:58:00 compute-0 systemd[1]: libpod-f5e077f18b9163f1faaa0d190d76db7d7da31d88e3e1e762ad9205a3d6336eca.scope: Deactivated successfully.
Oct 02 11:58:00 compute-0 podman[263225]: 2025-10-02 11:58:00.552155062 +0000 UTC m=+1.102920474 container died f5e077f18b9163f1faaa0d190d76db7d7da31d88e3e1e762ad9205a3d6336eca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:58:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-39075eec55a1870e169c50a182d9cfbb37b47792460af317532c70f2c1e120a7-merged.mount: Deactivated successfully.
Oct 02 11:58:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:00.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 204 B/s rd, 102 B/s wr, 0 op/s
Oct 02 11:58:00 compute-0 podman[263225]: 2025-10-02 11:58:00.861001801 +0000 UTC m=+1.411767213 container remove f5e077f18b9163f1faaa0d190d76db7d7da31d88e3e1e762ad9205a3d6336eca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:58:00 compute-0 sudo[263117]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:00 compute-0 systemd[1]: libpod-conmon-f5e077f18b9163f1faaa0d190d76db7d7da31d88e3e1e762ad9205a3d6336eca.scope: Deactivated successfully.
Oct 02 11:58:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:00.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:00 compute-0 sudo[263270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:58:00 compute-0 sudo[263270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:00 compute-0 sudo[263270]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:01 compute-0 sudo[263296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:58:01 compute-0 sudo[263296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:01 compute-0 sudo[263296]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:01 compute-0 sudo[263321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:58:01 compute-0 sudo[263321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:01 compute-0 sudo[263321]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:01 compute-0 sudo[263346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 11:58:01 compute-0 sudo[263346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:01 compute-0 podman[263411]: 2025-10-02 11:58:01.428217181 +0000 UTC m=+0.021140672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:58:01 compute-0 podman[263411]: 2025-10-02 11:58:01.59165663 +0000 UTC m=+0.184580111 container create 975bc148aaa0cfa46ec32230586ab9e9d0c7659b231fd015e033373831454575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:58:01 compute-0 systemd[1]: Started libpod-conmon-975bc148aaa0cfa46ec32230586ab9e9d0c7659b231fd015e033373831454575.scope.
Oct 02 11:58:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:58:01 compute-0 ceph-mon[73607]: pgmap v913: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 204 B/s rd, 102 B/s wr, 0 op/s
Oct 02 11:58:01 compute-0 podman[263411]: 2025-10-02 11:58:01.927259119 +0000 UTC m=+0.520182600 container init 975bc148aaa0cfa46ec32230586ab9e9d0c7659b231fd015e033373831454575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 11:58:01 compute-0 podman[263411]: 2025-10-02 11:58:01.934723473 +0000 UTC m=+0.527646954 container start 975bc148aaa0cfa46ec32230586ab9e9d0c7659b231fd015e033373831454575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 11:58:01 compute-0 silly_neumann[263428]: 167 167
Oct 02 11:58:01 compute-0 systemd[1]: libpod-975bc148aaa0cfa46ec32230586ab9e9d0c7659b231fd015e033373831454575.scope: Deactivated successfully.
Oct 02 11:58:01 compute-0 podman[263411]: 2025-10-02 11:58:01.981663461 +0000 UTC m=+0.574586972 container attach 975bc148aaa0cfa46ec32230586ab9e9d0c7659b231fd015e033373831454575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_neumann, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:58:01 compute-0 podman[263411]: 2025-10-02 11:58:01.983182979 +0000 UTC m=+0.576106460 container died 975bc148aaa0cfa46ec32230586ab9e9d0c7659b231fd015e033373831454575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:58:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-e134e695b0ebaf313429468f36c20601d58f22711264776408d5c8f3dc2523cd-merged.mount: Deactivated successfully.
Oct 02 11:58:02 compute-0 podman[263411]: 2025-10-02 11:58:02.064524095 +0000 UTC m=+0.657447596 container remove 975bc148aaa0cfa46ec32230586ab9e9d0c7659b231fd015e033373831454575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_neumann, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 11:58:02 compute-0 systemd[1]: libpod-conmon-975bc148aaa0cfa46ec32230586ab9e9d0c7659b231fd015e033373831454575.scope: Deactivated successfully.
Oct 02 11:58:02 compute-0 podman[263454]: 2025-10-02 11:58:02.248298417 +0000 UTC m=+0.060962684 container create b439d290872795686e2137d52e1dbb87b62c93ca76f09c66a9950e973fbbcd82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:58:02 compute-0 systemd[1]: Started libpod-conmon-b439d290872795686e2137d52e1dbb87b62c93ca76f09c66a9950e973fbbcd82.scope.
Oct 02 11:58:02 compute-0 podman[263454]: 2025-10-02 11:58:02.211427608 +0000 UTC m=+0.024091905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:58:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:58:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a823169b1f5f86b388bd548c825130b0aecc2d9fd00b2f0851490c74ca681481/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:58:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a823169b1f5f86b388bd548c825130b0aecc2d9fd00b2f0851490c74ca681481/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:58:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a823169b1f5f86b388bd548c825130b0aecc2d9fd00b2f0851490c74ca681481/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:58:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a823169b1f5f86b388bd548c825130b0aecc2d9fd00b2f0851490c74ca681481/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:58:02 compute-0 podman[263454]: 2025-10-02 11:58:02.361564911 +0000 UTC m=+0.174229188 container init b439d290872795686e2137d52e1dbb87b62c93ca76f09c66a9950e973fbbcd82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_edison, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:58:02 compute-0 podman[263454]: 2025-10-02 11:58:02.369832035 +0000 UTC m=+0.182496302 container start b439d290872795686e2137d52e1dbb87b62c93ca76f09c66a9950e973fbbcd82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_edison, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:58:02 compute-0 podman[263454]: 2025-10-02 11:58:02.395484227 +0000 UTC m=+0.208148554 container attach b439d290872795686e2137d52e1dbb87b62c93ca76f09c66a9950e973fbbcd82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_edison, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:58:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:58:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:02.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:58:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 204 B/s rd, 102 B/s wr, 0 op/s
Oct 02 11:58:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:02.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:03 compute-0 elegant_edison[263471]: {
Oct 02 11:58:03 compute-0 elegant_edison[263471]:     "1": [
Oct 02 11:58:03 compute-0 elegant_edison[263471]:         {
Oct 02 11:58:03 compute-0 elegant_edison[263471]:             "devices": [
Oct 02 11:58:03 compute-0 elegant_edison[263471]:                 "/dev/loop3"
Oct 02 11:58:03 compute-0 elegant_edison[263471]:             ],
Oct 02 11:58:03 compute-0 elegant_edison[263471]:             "lv_name": "ceph_lv0",
Oct 02 11:58:03 compute-0 elegant_edison[263471]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:58:03 compute-0 elegant_edison[263471]:             "lv_size": "7511998464",
Oct 02 11:58:03 compute-0 elegant_edison[263471]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:58:03 compute-0 elegant_edison[263471]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:58:03 compute-0 elegant_edison[263471]:             "name": "ceph_lv0",
Oct 02 11:58:03 compute-0 elegant_edison[263471]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:58:03 compute-0 elegant_edison[263471]:             "tags": {
Oct 02 11:58:03 compute-0 elegant_edison[263471]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:58:03 compute-0 elegant_edison[263471]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:58:03 compute-0 elegant_edison[263471]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:58:03 compute-0 elegant_edison[263471]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:58:03 compute-0 elegant_edison[263471]:                 "ceph.cluster_name": "ceph",
Oct 02 11:58:03 compute-0 elegant_edison[263471]:                 "ceph.crush_device_class": "",
Oct 02 11:58:03 compute-0 elegant_edison[263471]:                 "ceph.encrypted": "0",
Oct 02 11:58:03 compute-0 elegant_edison[263471]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:58:03 compute-0 elegant_edison[263471]:                 "ceph.osd_id": "1",
Oct 02 11:58:03 compute-0 elegant_edison[263471]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:58:03 compute-0 elegant_edison[263471]:                 "ceph.type": "block",
Oct 02 11:58:03 compute-0 elegant_edison[263471]:                 "ceph.vdo": "0"
Oct 02 11:58:03 compute-0 elegant_edison[263471]:             },
Oct 02 11:58:03 compute-0 elegant_edison[263471]:             "type": "block",
Oct 02 11:58:03 compute-0 elegant_edison[263471]:             "vg_name": "ceph_vg0"
Oct 02 11:58:03 compute-0 elegant_edison[263471]:         }
Oct 02 11:58:03 compute-0 elegant_edison[263471]:     ]
Oct 02 11:58:03 compute-0 elegant_edison[263471]: }
Oct 02 11:58:03 compute-0 systemd[1]: libpod-b439d290872795686e2137d52e1dbb87b62c93ca76f09c66a9950e973fbbcd82.scope: Deactivated successfully.
Oct 02 11:58:03 compute-0 podman[263454]: 2025-10-02 11:58:03.13875041 +0000 UTC m=+0.951414677 container died b439d290872795686e2137d52e1dbb87b62c93ca76f09c66a9950e973fbbcd82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_edison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 11:58:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-a823169b1f5f86b388bd548c825130b0aecc2d9fd00b2f0851490c74ca681481-merged.mount: Deactivated successfully.
Oct 02 11:58:03 compute-0 podman[263454]: 2025-10-02 11:58:03.402420914 +0000 UTC m=+1.215085181 container remove b439d290872795686e2137d52e1dbb87b62c93ca76f09c66a9950e973fbbcd82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_edison, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 11:58:03 compute-0 sudo[263346]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:03 compute-0 ceph-mon[73607]: pgmap v914: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 204 B/s rd, 102 B/s wr, 0 op/s
Oct 02 11:58:03 compute-0 systemd[1]: libpod-conmon-b439d290872795686e2137d52e1dbb87b62c93ca76f09c66a9950e973fbbcd82.scope: Deactivated successfully.
Oct 02 11:58:03 compute-0 sudo[263493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:58:03 compute-0 sudo[263493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:03 compute-0 sudo[263493]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:03 compute-0 sudo[263518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:58:03 compute-0 sudo[263518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:03 compute-0 sudo[263518]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:03 compute-0 sudo[263543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:58:03 compute-0 sudo[263543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:03 compute-0 sudo[263543]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:03 compute-0 sudo[263568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 11:58:03 compute-0 sudo[263568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:04 compute-0 podman[263632]: 2025-10-02 11:58:04.023325279 +0000 UTC m=+0.023638715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:58:04 compute-0 podman[263632]: 2025-10-02 11:58:04.163852535 +0000 UTC m=+0.164165951 container create ea0cf2ce51aabf2988d979469796d6d852a62c083943ad69fec6723a7f3b5803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_feistel, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:58:04 compute-0 systemd[1]: Started libpod-conmon-ea0cf2ce51aabf2988d979469796d6d852a62c083943ad69fec6723a7f3b5803.scope.
Oct 02 11:58:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:58:04 compute-0 podman[263632]: 2025-10-02 11:58:04.302606898 +0000 UTC m=+0.302920344 container init ea0cf2ce51aabf2988d979469796d6d852a62c083943ad69fec6723a7f3b5803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 11:58:04 compute-0 podman[263632]: 2025-10-02 11:58:04.309464826 +0000 UTC m=+0.309778262 container start ea0cf2ce51aabf2988d979469796d6d852a62c083943ad69fec6723a7f3b5803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_feistel, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:58:04 compute-0 thirsty_feistel[263649]: 167 167
Oct 02 11:58:04 compute-0 systemd[1]: libpod-ea0cf2ce51aabf2988d979469796d6d852a62c083943ad69fec6723a7f3b5803.scope: Deactivated successfully.
Oct 02 11:58:04 compute-0 podman[263632]: 2025-10-02 11:58:04.384346913 +0000 UTC m=+0.384660359 container attach ea0cf2ce51aabf2988d979469796d6d852a62c083943ad69fec6723a7f3b5803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 11:58:04 compute-0 podman[263632]: 2025-10-02 11:58:04.385468261 +0000 UTC m=+0.385781677 container died ea0cf2ce51aabf2988d979469796d6d852a62c083943ad69fec6723a7f3b5803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 11:58:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-80d344692573e9e599a274db9bf47cebcccd4e288705a168e31ec6a8f0ffcb19-merged.mount: Deactivated successfully.
Oct 02 11:58:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:04.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 171 B/s rd, 85 B/s wr, 0 op/s
Oct 02 11:58:04 compute-0 podman[263632]: 2025-10-02 11:58:04.862312722 +0000 UTC m=+0.862626128 container remove ea0cf2ce51aabf2988d979469796d6d852a62c083943ad69fec6723a7f3b5803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_feistel, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 11:58:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:58:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:04.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:58:04 compute-0 systemd[1]: libpod-conmon-ea0cf2ce51aabf2988d979469796d6d852a62c083943ad69fec6723a7f3b5803.scope: Deactivated successfully.
Oct 02 11:58:05 compute-0 ceph-mon[73607]: pgmap v915: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 171 B/s rd, 85 B/s wr, 0 op/s
Oct 02 11:58:05 compute-0 podman[263674]: 2025-10-02 11:58:05.084080293 +0000 UTC m=+0.086019633 container create 2a3db8efd021ac7bc6726423fbde5588fdfc4c97954fe5dea1d2da823713f4ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wozniak, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 11:58:05 compute-0 podman[263674]: 2025-10-02 11:58:05.02357407 +0000 UTC m=+0.025513430 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:58:05 compute-0 systemd[1]: Started libpod-conmon-2a3db8efd021ac7bc6726423fbde5588fdfc4c97954fe5dea1d2da823713f4ee.scope.
Oct 02 11:58:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:58:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ccb907a6b6b9a0b5a086278a0690c76bce04aa94b273a46f56d5cff068542d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:58:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ccb907a6b6b9a0b5a086278a0690c76bce04aa94b273a46f56d5cff068542d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:58:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ccb907a6b6b9a0b5a086278a0690c76bce04aa94b273a46f56d5cff068542d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:58:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ccb907a6b6b9a0b5a086278a0690c76bce04aa94b273a46f56d5cff068542d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:58:05 compute-0 podman[263674]: 2025-10-02 11:58:05.174091773 +0000 UTC m=+0.176031113 container init 2a3db8efd021ac7bc6726423fbde5588fdfc4c97954fe5dea1d2da823713f4ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wozniak, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 11:58:05 compute-0 podman[263674]: 2025-10-02 11:58:05.181891456 +0000 UTC m=+0.183830796 container start 2a3db8efd021ac7bc6726423fbde5588fdfc4c97954fe5dea1d2da823713f4ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:58:05 compute-0 podman[263674]: 2025-10-02 11:58:05.352712749 +0000 UTC m=+0.354652119 container attach 2a3db8efd021ac7bc6726423fbde5588fdfc4c97954fe5dea1d2da823713f4ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 11:58:05 compute-0 gifted_wozniak[263690]: {
Oct 02 11:58:05 compute-0 gifted_wozniak[263690]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 11:58:05 compute-0 gifted_wozniak[263690]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:58:05 compute-0 gifted_wozniak[263690]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:58:05 compute-0 gifted_wozniak[263690]:         "osd_id": 1,
Oct 02 11:58:05 compute-0 gifted_wozniak[263690]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:58:05 compute-0 gifted_wozniak[263690]:         "type": "bluestore"
Oct 02 11:58:05 compute-0 gifted_wozniak[263690]:     }
Oct 02 11:58:05 compute-0 gifted_wozniak[263690]: }
Oct 02 11:58:06 compute-0 systemd[1]: libpod-2a3db8efd021ac7bc6726423fbde5588fdfc4c97954fe5dea1d2da823713f4ee.scope: Deactivated successfully.
Oct 02 11:58:06 compute-0 podman[263674]: 2025-10-02 11:58:06.014164053 +0000 UTC m=+1.016103403 container died 2a3db8efd021ac7bc6726423fbde5588fdfc4c97954fe5dea1d2da823713f4ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 11:58:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ccb907a6b6b9a0b5a086278a0690c76bce04aa94b273a46f56d5cff068542d8-merged.mount: Deactivated successfully.
Oct 02 11:58:06 compute-0 podman[263674]: 2025-10-02 11:58:06.13038491 +0000 UTC m=+1.132324240 container remove 2a3db8efd021ac7bc6726423fbde5588fdfc4c97954fe5dea1d2da823713f4ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wozniak, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 11:58:06 compute-0 systemd[1]: libpod-conmon-2a3db8efd021ac7bc6726423fbde5588fdfc4c97954fe5dea1d2da823713f4ee.scope: Deactivated successfully.
Oct 02 11:58:06 compute-0 sudo[263568]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:58:06 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:58:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:58:06 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:58:06 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 9dcc80ae-9cb4-492b-b4e9-e7a3dcda24cb does not exist
Oct 02 11:58:06 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 4bb828f2-6730-4357-957a-1ab324af3fba does not exist
Oct 02 11:58:06 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev dd318102-c031-4e7d-8773-ce5b11fe881d does not exist
Oct 02 11:58:06 compute-0 sudo[263725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:58:06 compute-0 sudo[263725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:06 compute-0 sudo[263725]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:06.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:06 compute-0 sudo[263750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:58:06 compute-0 sudo[263750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:06 compute-0 sudo[263750]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:58:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:06.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:58:07 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:58:07 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:58:07 compute-0 ceph-mon[73607]: pgmap v916: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:07 compute-0 sudo[263775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:58:07 compute-0 sudo[263775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:07 compute-0 sudo[263775]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:07 compute-0 sudo[263800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:58:07 compute-0 sudo[263800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:07 compute-0 sudo[263800]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:08.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:08.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:09 compute-0 ceph-mon[73607]: pgmap v917: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:58:09.055 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 11:58:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:58:09.056 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 11:58:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:10.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:10.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:10 compute-0 ceph-mon[73607]: pgmap v918: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:58:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:58:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:58:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:58:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:58:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:58:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:12.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:12.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:13 compute-0 ceph-mon[73607]: pgmap v919: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:13 compute-0 nova_compute[257802]: 2025-10-02 11:58:13.886 2 DEBUG oslo_concurrency.lockutils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Acquiring lock "17ee1f97-9d49-445d-835d-583cd2aeffaf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:58:13 compute-0 nova_compute[257802]: 2025-10-02 11:58:13.887 2 DEBUG oslo_concurrency.lockutils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "17ee1f97-9d49-445d-835d-583cd2aeffaf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:58:13 compute-0 nova_compute[257802]: 2025-10-02 11:58:13.924 2 DEBUG nova.compute.manager [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 11:58:14 compute-0 nova_compute[257802]: 2025-10-02 11:58:14.074 2 DEBUG oslo_concurrency.lockutils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:58:14 compute-0 nova_compute[257802]: 2025-10-02 11:58:14.074 2 DEBUG oslo_concurrency.lockutils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:58:14 compute-0 nova_compute[257802]: 2025-10-02 11:58:14.084 2 DEBUG nova.virt.hardware [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 11:58:14 compute-0 nova_compute[257802]: 2025-10-02 11:58:14.085 2 INFO nova.compute.claims [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Claim successful on node compute-0.ctlplane.example.com
Oct 02 11:58:14 compute-0 nova_compute[257802]: 2025-10-02 11:58:14.243 2 DEBUG oslo_concurrency.processutils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:58:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 11:58:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1712218398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:58:14 compute-0 nova_compute[257802]: 2025-10-02 11:58:14.663 2 DEBUG oslo_concurrency.processutils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:58:14 compute-0 nova_compute[257802]: 2025-10-02 11:58:14.670 2 DEBUG nova.compute.provider_tree [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 11:58:14 compute-0 nova_compute[257802]: 2025-10-02 11:58:14.696 2 DEBUG nova.scheduler.client.report [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 11:58:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1712218398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:58:14 compute-0 nova_compute[257802]: 2025-10-02 11:58:14.722 2 DEBUG oslo_concurrency.lockutils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:58:14 compute-0 nova_compute[257802]: 2025-10-02 11:58:14.723 2 DEBUG nova.compute.manager [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 11:58:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:14.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:14 compute-0 nova_compute[257802]: 2025-10-02 11:58:14.780 2 DEBUG nova.compute.manager [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 02 11:58:14 compute-0 nova_compute[257802]: 2025-10-02 11:58:14.800 2 INFO nova.virt.libvirt.driver [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 11:58:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:14 compute-0 nova_compute[257802]: 2025-10-02 11:58:14.824 2 DEBUG nova.compute.manager [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 11:58:14 compute-0 nova_compute[257802]: 2025-10-02 11:58:14.918 2 DEBUG nova.compute.manager [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 11:58:14 compute-0 nova_compute[257802]: 2025-10-02 11:58:14.919 2 DEBUG nova.virt.libvirt.driver [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 11:58:14 compute-0 nova_compute[257802]: 2025-10-02 11:58:14.920 2 INFO nova.virt.libvirt.driver [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Creating image(s)
Oct 02 11:58:14 compute-0 podman[263851]: 2025-10-02 11:58:14.943050003 +0000 UTC m=+0.082698941 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Oct 02 11:58:14 compute-0 nova_compute[257802]: 2025-10-02 11:58:14.943 2 DEBUG nova.storage.rbd_utils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] rbd image 17ee1f97-9d49-445d-835d-583cd2aeffaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 11:58:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:14.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:14 compute-0 nova_compute[257802]: 2025-10-02 11:58:14.965 2 DEBUG nova.storage.rbd_utils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] rbd image 17ee1f97-9d49-445d-835d-583cd2aeffaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 11:58:14 compute-0 nova_compute[257802]: 2025-10-02 11:58:14.993 2 DEBUG nova.storage.rbd_utils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] rbd image 17ee1f97-9d49-445d-835d-583cd2aeffaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 11:58:14 compute-0 nova_compute[257802]: 2025-10-02 11:58:14.996 2 DEBUG oslo_concurrency.lockutils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:58:14 compute-0 nova_compute[257802]: 2025-10-02 11:58:14.997 2 DEBUG oslo_concurrency.lockutils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:58:15 compute-0 ceph-mon[73607]: pgmap v920: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:16 compute-0 nova_compute[257802]: 2025-10-02 11:58:16.103 2 DEBUG nova.virt.libvirt.imagebackend [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Image locations are: [{'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/c2d0c2bc-fe21-4689-86ae-d6728c15874c/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/c2d0c2bc-fe21-4689-86ae-d6728c15874c/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 02 11:58:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:16.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:16 compute-0 ceph-mon[73607]: pgmap v921: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:58:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:16.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:58:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:58:17.058 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 11:58:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:18.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 5 op/s
Oct 02 11:58:18 compute-0 ceph-mon[73607]: pgmap v922: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 5 op/s
Oct 02 11:58:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:18.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:19 compute-0 nova_compute[257802]: 2025-10-02 11:58:19.192 2 DEBUG oslo_concurrency.processutils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:58:19 compute-0 nova_compute[257802]: 2025-10-02 11:58:19.246 2 DEBUG oslo_concurrency.processutils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968.part --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:58:19 compute-0 nova_compute[257802]: 2025-10-02 11:58:19.247 2 DEBUG nova.virt.images [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] c2d0c2bc-fe21-4689-86ae-d6728c15874c was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct 02 11:58:19 compute-0 nova_compute[257802]: 2025-10-02 11:58:19.248 2 DEBUG nova.privsep.utils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 02 11:58:19 compute-0 nova_compute[257802]: 2025-10-02 11:58:19.249 2 DEBUG oslo_concurrency.processutils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968.part /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:58:19 compute-0 nova_compute[257802]: 2025-10-02 11:58:19.513 2 DEBUG oslo_concurrency.processutils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968.part /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968.converted" returned: 0 in 0.264s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:58:19 compute-0 nova_compute[257802]: 2025-10-02 11:58:19.518 2 DEBUG oslo_concurrency.processutils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:58:19 compute-0 nova_compute[257802]: 2025-10-02 11:58:19.579 2 DEBUG oslo_concurrency.processutils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968.converted --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:58:19 compute-0 nova_compute[257802]: 2025-10-02 11:58:19.580 2 DEBUG oslo_concurrency.lockutils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 4.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:58:19 compute-0 nova_compute[257802]: 2025-10-02 11:58:19.601 2 DEBUG nova.storage.rbd_utils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] rbd image 17ee1f97-9d49-445d-835d-583cd2aeffaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 11:58:19 compute-0 nova_compute[257802]: 2025-10-02 11:58:19.604 2 DEBUG oslo_concurrency.processutils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 17ee1f97-9d49-445d-835d-583cd2aeffaf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:58:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Oct 02 11:58:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Oct 02 11:58:19 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Oct 02 11:58:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:20.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 102 B/s wr, 8 op/s
Oct 02 11:58:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Oct 02 11:58:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:20.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:20 compute-0 ceph-mon[73607]: osdmap e132: 3 total, 3 up, 3 in
Oct 02 11:58:20 compute-0 ceph-mon[73607]: pgmap v924: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 102 B/s wr, 8 op/s
Oct 02 11:58:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Oct 02 11:58:20 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.268 2 DEBUG oslo_concurrency.processutils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 17ee1f97-9d49-445d-835d-583cd2aeffaf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.664s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.345 2 DEBUG nova.storage.rbd_utils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] resizing rbd image 17ee1f97-9d49-445d-835d-583cd2aeffaf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.473 2 DEBUG nova.objects.instance [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lazy-loading 'migration_context' on Instance uuid 17ee1f97-9d49-445d-835d-583cd2aeffaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.493 2 DEBUG nova.virt.libvirt.driver [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.493 2 DEBUG nova.virt.libvirt.driver [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Ensure instance console log exists: /var/lib/nova/instances/17ee1f97-9d49-445d-835d-583cd2aeffaf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.494 2 DEBUG oslo_concurrency.lockutils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.494 2 DEBUG oslo_concurrency.lockutils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.494 2 DEBUG oslo_concurrency.lockutils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.497 2 DEBUG nova.virt.libvirt.driver [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.503 2 WARNING nova.virt.libvirt.driver [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.510 2 DEBUG nova.virt.libvirt.host [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.510 2 DEBUG nova.virt.libvirt.host [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.513 2 DEBUG nova.virt.libvirt.host [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.514 2 DEBUG nova.virt.libvirt.host [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.515 2 DEBUG nova.virt.libvirt.driver [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.516 2 DEBUG nova.virt.hardware [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.516 2 DEBUG nova.virt.hardware [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.516 2 DEBUG nova.virt.hardware [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.516 2 DEBUG nova.virt.hardware [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.517 2 DEBUG nova.virt.hardware [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.517 2 DEBUG nova.virt.hardware [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.517 2 DEBUG nova.virt.hardware [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.517 2 DEBUG nova.virt.hardware [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.517 2 DEBUG nova.virt.hardware [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.518 2 DEBUG nova.virt.hardware [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.518 2 DEBUG nova.virt.hardware [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.521 2 DEBUG nova.privsep.utils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.521 2 DEBUG oslo_concurrency.processutils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:58:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 11:58:21 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2643583077' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.959 2 DEBUG oslo_concurrency.processutils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.986 2 DEBUG nova.storage.rbd_utils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] rbd image 17ee1f97-9d49-445d-835d-583cd2aeffaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 11:58:21 compute-0 ceph-mon[73607]: osdmap e133: 3 total, 3 up, 3 in
Oct 02 11:58:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2643583077' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 11:58:21 compute-0 nova_compute[257802]: 2025-10-02 11:58:21.990 2 DEBUG oslo_concurrency.processutils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:58:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 11:58:22 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/140805211' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 11:58:22 compute-0 nova_compute[257802]: 2025-10-02 11:58:22.417 2 DEBUG oslo_concurrency.processutils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:58:22 compute-0 nova_compute[257802]: 2025-10-02 11:58:22.419 2 DEBUG nova.objects.instance [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lazy-loading 'pci_devices' on Instance uuid 17ee1f97-9d49-445d-835d-583cd2aeffaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 11:58:22 compute-0 nova_compute[257802]: 2025-10-02 11:58:22.433 2 DEBUG nova.virt.libvirt.driver [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] End _get_guest_xml xml=<domain type="kvm">
Oct 02 11:58:22 compute-0 nova_compute[257802]:   <uuid>17ee1f97-9d49-445d-835d-583cd2aeffaf</uuid>
Oct 02 11:58:22 compute-0 nova_compute[257802]:   <name>instance-00000001</name>
Oct 02 11:58:22 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 11:58:22 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 11:58:22 compute-0 nova_compute[257802]:   <metadata>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 11:58:22 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:       <nova:name>tempest-AutoAllocateNetworkTest-server-1417069383</nova:name>
Oct 02 11:58:22 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 11:58:21</nova:creationTime>
Oct 02 11:58:22 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 11:58:22 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 11:58:22 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 11:58:22 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 11:58:22 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 11:58:22 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 11:58:22 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 11:58:22 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 11:58:22 compute-0 nova_compute[257802]:         <nova:user uuid="17903cd0333c407b96f0aede6dd3b16c">tempest-AutoAllocateNetworkTest-379631237-project-member</nova:user>
Oct 02 11:58:22 compute-0 nova_compute[257802]:         <nova:project uuid="8972026d0f3a4bf4b6debd9555f9225c">tempest-AutoAllocateNetworkTest-379631237</nova:project>
Oct 02 11:58:22 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 11:58:22 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:       <nova:ports/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 11:58:22 compute-0 nova_compute[257802]:   </metadata>
Oct 02 11:58:22 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <system>
Oct 02 11:58:22 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 11:58:22 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 11:58:22 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 11:58:22 compute-0 nova_compute[257802]:       <entry name="serial">17ee1f97-9d49-445d-835d-583cd2aeffaf</entry>
Oct 02 11:58:22 compute-0 nova_compute[257802]:       <entry name="uuid">17ee1f97-9d49-445d-835d-583cd2aeffaf</entry>
Oct 02 11:58:22 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     </system>
Oct 02 11:58:22 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 11:58:22 compute-0 nova_compute[257802]:   <os>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:   </os>
Oct 02 11:58:22 compute-0 nova_compute[257802]:   <features>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <apic/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:   </features>
Oct 02 11:58:22 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:   </clock>
Oct 02 11:58:22 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:   </cpu>
Oct 02 11:58:22 compute-0 nova_compute[257802]:   <devices>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 11:58:22 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/17ee1f97-9d49-445d-835d-583cd2aeffaf_disk">
Oct 02 11:58:22 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:       </source>
Oct 02 11:58:22 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 11:58:22 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:       </auth>
Oct 02 11:58:22 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     </disk>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 11:58:22 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/17ee1f97-9d49-445d-835d-583cd2aeffaf_disk.config">
Oct 02 11:58:22 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:       </source>
Oct 02 11:58:22 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 11:58:22 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:       </auth>
Oct 02 11:58:22 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     </disk>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 11:58:22 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/17ee1f97-9d49-445d-835d-583cd2aeffaf/console.log" append="off"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     </serial>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <video>
Oct 02 11:58:22 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     </video>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 11:58:22 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     </rng>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 11:58:22 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 11:58:22 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 11:58:22 compute-0 nova_compute[257802]:   </devices>
Oct 02 11:58:22 compute-0 nova_compute[257802]: </domain>
Oct 02 11:58:22 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 11:58:22 compute-0 nova_compute[257802]: 2025-10-02 11:58:22.485 2 DEBUG nova.virt.libvirt.driver [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 11:58:22 compute-0 nova_compute[257802]: 2025-10-02 11:58:22.486 2 DEBUG nova.virt.libvirt.driver [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 11:58:22 compute-0 nova_compute[257802]: 2025-10-02 11:58:22.486 2 INFO nova.virt.libvirt.driver [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Using config drive
Oct 02 11:58:22 compute-0 nova_compute[257802]: 2025-10-02 11:58:22.511 2 DEBUG nova.storage.rbd_utils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] rbd image 17ee1f97-9d49-445d-835d-583cd2aeffaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 11:58:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:22.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 127 B/s wr, 10 op/s
Oct 02 11:58:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:22.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/140805211' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 11:58:23 compute-0 ceph-mon[73607]: pgmap v926: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 127 B/s wr, 10 op/s
Oct 02 11:58:23 compute-0 nova_compute[257802]: 2025-10-02 11:58:23.355 2 INFO nova.virt.libvirt.driver [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Creating config drive at /var/lib/nova/instances/17ee1f97-9d49-445d-835d-583cd2aeffaf/disk.config
Oct 02 11:58:23 compute-0 nova_compute[257802]: 2025-10-02 11:58:23.360 2 DEBUG oslo_concurrency.processutils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/17ee1f97-9d49-445d-835d-583cd2aeffaf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiw280mqs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:58:23 compute-0 nova_compute[257802]: 2025-10-02 11:58:23.493 2 DEBUG oslo_concurrency.processutils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/17ee1f97-9d49-445d-835d-583cd2aeffaf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiw280mqs" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:58:23 compute-0 nova_compute[257802]: 2025-10-02 11:58:23.527 2 DEBUG nova.storage.rbd_utils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] rbd image 17ee1f97-9d49-445d-835d-583cd2aeffaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 11:58:23 compute-0 nova_compute[257802]: 2025-10-02 11:58:23.532 2 DEBUG oslo_concurrency.processutils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/17ee1f97-9d49-445d-835d-583cd2aeffaf/disk.config 17ee1f97-9d49-445d-835d-583cd2aeffaf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:58:23 compute-0 nova_compute[257802]: 2025-10-02 11:58:23.769 2 DEBUG oslo_concurrency.processutils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/17ee1f97-9d49-445d-835d-583cd2aeffaf/disk.config 17ee1f97-9d49-445d-835d-583cd2aeffaf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.237s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:58:23 compute-0 nova_compute[257802]: 2025-10-02 11:58:23.771 2 INFO nova.virt.libvirt.driver [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Deleting local config drive /var/lib/nova/instances/17ee1f97-9d49-445d-835d-583cd2aeffaf/disk.config because it was imported into RBD.
Oct 02 11:58:23 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 02 11:58:23 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 02 11:58:23 compute-0 podman[264174]: 2025-10-02 11:58:23.870084538 +0000 UTC m=+0.059654093 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 11:58:23 compute-0 systemd-machined[211836]: New machine qemu-1-instance-00000001.
Oct 02 11:58:23 compute-0 podman[264173]: 2025-10-02 11:58:23.89774437 +0000 UTC m=+0.089448877 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 11:58:23 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Oct 02 11:58:24 compute-0 nova_compute[257802]: 2025-10-02 11:58:24.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:58:24 compute-0 nova_compute[257802]: 2025-10-02 11:58:24.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 11:58:24 compute-0 nova_compute[257802]: 2025-10-02 11:58:24.155 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 11:58:24 compute-0 nova_compute[257802]: 2025-10-02 11:58:24.155 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:58:24 compute-0 nova_compute[257802]: 2025-10-02 11:58:24.156 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 11:58:24 compute-0 nova_compute[257802]: 2025-10-02 11:58:24.168 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:58:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:24.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 76 MiB data, 211 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 47 op/s
Oct 02 11:58:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:24.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:24 compute-0 ceph-mon[73607]: pgmap v927: 305 pgs: 305 active+clean; 76 MiB data, 211 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 47 op/s
Oct 02 11:58:25 compute-0 nova_compute[257802]: 2025-10-02 11:58:25.057 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406305.0571256, 17ee1f97-9d49-445d-835d-583cd2aeffaf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 11:58:25 compute-0 nova_compute[257802]: 2025-10-02 11:58:25.058 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] VM Resumed (Lifecycle Event)
Oct 02 11:58:25 compute-0 nova_compute[257802]: 2025-10-02 11:58:25.060 2 DEBUG nova.compute.manager [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 11:58:25 compute-0 nova_compute[257802]: 2025-10-02 11:58:25.060 2 DEBUG nova.virt.libvirt.driver [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 11:58:25 compute-0 nova_compute[257802]: 2025-10-02 11:58:25.063 2 INFO nova.virt.libvirt.driver [-] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Instance spawned successfully.
Oct 02 11:58:25 compute-0 nova_compute[257802]: 2025-10-02 11:58:25.064 2 DEBUG nova.virt.libvirt.driver [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 11:58:25 compute-0 nova_compute[257802]: 2025-10-02 11:58:25.186 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 11:58:25 compute-0 nova_compute[257802]: 2025-10-02 11:58:25.192 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 11:58:25 compute-0 nova_compute[257802]: 2025-10-02 11:58:25.195 2 DEBUG nova.virt.libvirt.driver [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 11:58:25 compute-0 nova_compute[257802]: 2025-10-02 11:58:25.195 2 DEBUG nova.virt.libvirt.driver [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 11:58:25 compute-0 nova_compute[257802]: 2025-10-02 11:58:25.195 2 DEBUG nova.virt.libvirt.driver [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 11:58:25 compute-0 nova_compute[257802]: 2025-10-02 11:58:25.196 2 DEBUG nova.virt.libvirt.driver [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 11:58:25 compute-0 nova_compute[257802]: 2025-10-02 11:58:25.196 2 DEBUG nova.virt.libvirt.driver [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 11:58:25 compute-0 nova_compute[257802]: 2025-10-02 11:58:25.196 2 DEBUG nova.virt.libvirt.driver [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 11:58:25 compute-0 nova_compute[257802]: 2025-10-02 11:58:25.224 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 11:58:25 compute-0 nova_compute[257802]: 2025-10-02 11:58:25.224 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406305.0581312, 17ee1f97-9d49-445d-835d-583cd2aeffaf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 11:58:25 compute-0 nova_compute[257802]: 2025-10-02 11:58:25.224 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] VM Started (Lifecycle Event)
Oct 02 11:58:25 compute-0 nova_compute[257802]: 2025-10-02 11:58:25.442 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 11:58:25 compute-0 nova_compute[257802]: 2025-10-02 11:58:25.445 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 11:58:25 compute-0 nova_compute[257802]: 2025-10-02 11:58:25.496 2 INFO nova.compute.manager [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Took 10.58 seconds to spawn the instance on the hypervisor.
Oct 02 11:58:25 compute-0 nova_compute[257802]: 2025-10-02 11:58:25.497 2 DEBUG nova.compute.manager [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 11:58:25 compute-0 nova_compute[257802]: 2025-10-02 11:58:25.503 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 11:58:25 compute-0 nova_compute[257802]: 2025-10-02 11:58:25.578 2 INFO nova.compute.manager [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Took 11.55 seconds to build instance.
Oct 02 11:58:25 compute-0 nova_compute[257802]: 2025-10-02 11:58:25.612 2 DEBUG oslo_concurrency.lockutils [None req-c1020dbe-dd22-49e8-ba31-c2c545407fd5 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "17ee1f97-9d49-445d-835d-583cd2aeffaf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:58:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:26.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 595 KiB/s rd, 2.7 MiB/s wr, 45 op/s
Oct 02 11:58:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:58:26.915 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:58:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:58:26.916 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:58:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:58:26.916 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:58:26 compute-0 ceph-mon[73607]: pgmap v928: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 595 KiB/s rd, 2.7 MiB/s wr, 45 op/s
Oct 02 11:58:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:26.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:27 compute-0 nova_compute[257802]: 2025-10-02 11:58:27.192 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:58:27 compute-0 nova_compute[257802]: 2025-10-02 11:58:27.193 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 11:58:27 compute-0 nova_compute[257802]: 2025-10-02 11:58:27.193 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 11:58:27 compute-0 nova_compute[257802]: 2025-10-02 11:58:27.462 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-17ee1f97-9d49-445d-835d-583cd2aeffaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 11:58:27 compute-0 nova_compute[257802]: 2025-10-02 11:58:27.463 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-17ee1f97-9d49-445d-835d-583cd2aeffaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 11:58:27 compute-0 nova_compute[257802]: 2025-10-02 11:58:27.463 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 11:58:27 compute-0 nova_compute[257802]: 2025-10-02 11:58:27.463 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid 17ee1f97-9d49-445d-835d-583cd2aeffaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 11:58:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Oct 02 11:58:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Oct 02 11:58:27 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Oct 02 11:58:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1899045596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:58:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3111796916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:58:27 compute-0 ceph-mon[73607]: osdmap e134: 3 total, 3 up, 3 in
Oct 02 11:58:27 compute-0 sudo[264300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:58:27 compute-0 sudo[264300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:27 compute-0 sudo[264300]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:27 compute-0 podman[264285]: 2025-10-02 11:58:27.991817731 +0000 UTC m=+0.119656873 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 11:58:28 compute-0 nova_compute[257802]: 2025-10-02 11:58:28.003 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 11:58:28 compute-0 sudo[264335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:58:28 compute-0 sudo[264335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:28 compute-0 sudo[264335]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:58:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:28.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:58:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.7 MiB/s wr, 122 op/s
Oct 02 11:58:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:28.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3537283019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:58:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2656278343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:58:29 compute-0 ceph-mon[73607]: pgmap v930: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.7 MiB/s wr, 122 op/s
Oct 02 11:58:29 compute-0 nova_compute[257802]: 2025-10-02 11:58:29.505 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 11:58:29 compute-0 nova_compute[257802]: 2025-10-02 11:58:29.580 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-17ee1f97-9d49-445d-835d-583cd2aeffaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 11:58:29 compute-0 nova_compute[257802]: 2025-10-02 11:58:29.580 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 11:58:29 compute-0 nova_compute[257802]: 2025-10-02 11:58:29.581 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:58:29 compute-0 nova_compute[257802]: 2025-10-02 11:58:29.581 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:58:29 compute-0 nova_compute[257802]: 2025-10-02 11:58:29.581 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:58:29 compute-0 nova_compute[257802]: 2025-10-02 11:58:29.582 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:58:29 compute-0 nova_compute[257802]: 2025-10-02 11:58:29.582 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:58:29 compute-0 nova_compute[257802]: 2025-10-02 11:58:29.582 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 11:58:29 compute-0 nova_compute[257802]: 2025-10-02 11:58:29.582 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:58:29 compute-0 nova_compute[257802]: 2025-10-02 11:58:29.658 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:58:29 compute-0 nova_compute[257802]: 2025-10-02 11:58:29.659 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:58:29 compute-0 nova_compute[257802]: 2025-10-02 11:58:29.659 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:58:29 compute-0 nova_compute[257802]: 2025-10-02 11:58:29.659 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 11:58:29 compute-0 nova_compute[257802]: 2025-10-02 11:58:29.660 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:58:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 11:58:30 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1291312376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:58:30 compute-0 nova_compute[257802]: 2025-10-02 11:58:30.091 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:58:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1291312376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:58:30 compute-0 nova_compute[257802]: 2025-10-02 11:58:30.176 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 11:58:30 compute-0 nova_compute[257802]: 2025-10-02 11:58:30.176 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 11:58:30 compute-0 nova_compute[257802]: 2025-10-02 11:58:30.304 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 11:58:30 compute-0 nova_compute[257802]: 2025-10-02 11:58:30.306 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5084MB free_disk=20.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 11:58:30 compute-0 nova_compute[257802]: 2025-10-02 11:58:30.306 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:58:30 compute-0 nova_compute[257802]: 2025-10-02 11:58:30.307 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:58:30 compute-0 nova_compute[257802]: 2025-10-02 11:58:30.537 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 17ee1f97-9d49-445d-835d-583cd2aeffaf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 11:58:30 compute-0 nova_compute[257802]: 2025-10-02 11:58:30.538 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 11:58:30 compute-0 nova_compute[257802]: 2025-10-02 11:58:30.538 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 11:58:30 compute-0 nova_compute[257802]: 2025-10-02 11:58:30.602 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing inventories for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 11:58:30 compute-0 nova_compute[257802]: 2025-10-02 11:58:30.669 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating ProviderTree inventory for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 11:58:30 compute-0 nova_compute[257802]: 2025-10-02 11:58:30.669 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating inventory in ProviderTree for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 11:58:30 compute-0 nova_compute[257802]: 2025-10-02 11:58:30.688 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing aggregate associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 11:58:30 compute-0 nova_compute[257802]: 2025-10-02 11:58:30.705 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing trait associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 11:58:30 compute-0 nova_compute[257802]: 2025-10-02 11:58:30.737 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:58:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:58:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:30.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:58:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 122 op/s
Oct 02 11:58:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:30.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 11:58:31 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3237859362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:58:31 compute-0 nova_compute[257802]: 2025-10-02 11:58:31.162 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:58:31 compute-0 nova_compute[257802]: 2025-10-02 11:58:31.168 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating inventory in ProviderTree for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 11:58:31 compute-0 ceph-mon[73607]: pgmap v931: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 122 op/s
Oct 02 11:58:31 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3237859362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:58:31 compute-0 nova_compute[257802]: 2025-10-02 11:58:31.430 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updated inventory for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 with generation 7 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct 02 11:58:31 compute-0 nova_compute[257802]: 2025-10-02 11:58:31.431 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 generation from 7 to 8 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 02 11:58:31 compute-0 nova_compute[257802]: 2025-10-02 11:58:31.431 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating inventory in ProviderTree for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 11:58:31 compute-0 nova_compute[257802]: 2025-10-02 11:58:31.550 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 11:58:31 compute-0 nova_compute[257802]: 2025-10-02 11:58:31.551 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.244s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:58:32 compute-0 nova_compute[257802]: 2025-10-02 11:58:32.067 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:58:32 compute-0 nova_compute[257802]: 2025-10-02 11:58:32.068 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:58:32 compute-0 nova_compute[257802]: 2025-10-02 11:58:32.189 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Acquiring lock "6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:58:32 compute-0 nova_compute[257802]: 2025-10-02 11:58:32.189 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:58:32 compute-0 nova_compute[257802]: 2025-10-02 11:58:32.325 2 DEBUG nova.compute.manager [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 11:58:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:32.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Oct 02 11:58:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:32 compute-0 ceph-mon[73607]: pgmap v932: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Oct 02 11:58:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:32.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:32 compute-0 nova_compute[257802]: 2025-10-02 11:58:32.992 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:58:32 compute-0 nova_compute[257802]: 2025-10-02 11:58:32.993 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:58:33 compute-0 nova_compute[257802]: 2025-10-02 11:58:32.998 2 DEBUG nova.virt.hardware [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 11:58:33 compute-0 nova_compute[257802]: 2025-10-02 11:58:32.998 2 INFO nova.compute.claims [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Claim successful on node compute-0.ctlplane.example.com
Oct 02 11:58:33 compute-0 nova_compute[257802]: 2025-10-02 11:58:33.377 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:58:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 11:58:33 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1141530037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:58:33 compute-0 nova_compute[257802]: 2025-10-02 11:58:33.798 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:58:33 compute-0 nova_compute[257802]: 2025-10-02 11:58:33.803 2 DEBUG nova.compute.provider_tree [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 11:58:33 compute-0 nova_compute[257802]: 2025-10-02 11:58:33.830 2 DEBUG nova.scheduler.client.report [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 11:58:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/584807344' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:58:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4139404090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:58:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1141530037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:58:34 compute-0 nova_compute[257802]: 2025-10-02 11:58:34.108 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:58:34 compute-0 nova_compute[257802]: 2025-10-02 11:58:34.109 2 DEBUG nova.compute.manager [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 11:58:34 compute-0 nova_compute[257802]: 2025-10-02 11:58:34.361 2 DEBUG nova.compute.manager [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 11:58:34 compute-0 nova_compute[257802]: 2025-10-02 11:58:34.361 2 DEBUG nova.network.neutron [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 11:58:34 compute-0 nova_compute[257802]: 2025-10-02 11:58:34.661 2 INFO nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 11:58:34 compute-0 nova_compute[257802]: 2025-10-02 11:58:34.764 2 DEBUG nova.compute.manager [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 11:58:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:34.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 467 KiB/s wr, 90 op/s
Oct 02 11:58:34 compute-0 nova_compute[257802]: 2025-10-02 11:58:34.918 2 DEBUG nova.compute.manager [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 11:58:34 compute-0 nova_compute[257802]: 2025-10-02 11:58:34.920 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 11:58:34 compute-0 nova_compute[257802]: 2025-10-02 11:58:34.920 2 INFO nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Creating image(s)
Oct 02 11:58:34 compute-0 nova_compute[257802]: 2025-10-02 11:58:34.948 2 DEBUG nova.storage.rbd_utils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] rbd image 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 11:58:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:34.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:34 compute-0 nova_compute[257802]: 2025-10-02 11:58:34.984 2 DEBUG nova.storage.rbd_utils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] rbd image 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 11:58:35 compute-0 nova_compute[257802]: 2025-10-02 11:58:35.011 2 DEBUG nova.storage.rbd_utils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] rbd image 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 11:58:35 compute-0 nova_compute[257802]: 2025-10-02 11:58:35.014 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:58:35 compute-0 ceph-mon[73607]: pgmap v933: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 467 KiB/s wr, 90 op/s
Oct 02 11:58:35 compute-0 nova_compute[257802]: 2025-10-02 11:58:35.071 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:58:35 compute-0 nova_compute[257802]: 2025-10-02 11:58:35.072 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:58:35 compute-0 nova_compute[257802]: 2025-10-02 11:58:35.073 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:58:35 compute-0 nova_compute[257802]: 2025-10-02 11:58:35.073 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:58:35 compute-0 nova_compute[257802]: 2025-10-02 11:58:35.113 2 DEBUG nova.storage.rbd_utils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] rbd image 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 11:58:35 compute-0 nova_compute[257802]: 2025-10-02 11:58:35.117 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:58:35 compute-0 nova_compute[257802]: 2025-10-02 11:58:35.161 2 DEBUG nova.network.neutron [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Automatically allocating a network for project 8972026d0f3a4bf4b6debd9555f9225c. _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2460
Oct 02 11:58:35 compute-0 nova_compute[257802]: 2025-10-02 11:58:35.358 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.241s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:58:35 compute-0 nova_compute[257802]: 2025-10-02 11:58:35.421 2 DEBUG nova.storage.rbd_utils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] resizing rbd image 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 11:58:35 compute-0 nova_compute[257802]: 2025-10-02 11:58:35.520 2 DEBUG nova.objects.instance [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lazy-loading 'migration_context' on Instance uuid 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 11:58:35 compute-0 nova_compute[257802]: 2025-10-02 11:58:35.604 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 11:58:35 compute-0 nova_compute[257802]: 2025-10-02 11:58:35.604 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Ensure instance console log exists: /var/lib/nova/instances/6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 11:58:35 compute-0 nova_compute[257802]: 2025-10-02 11:58:35.605 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:58:35 compute-0 nova_compute[257802]: 2025-10-02 11:58:35.605 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:58:35 compute-0 nova_compute[257802]: 2025-10-02 11:58:35.605 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:58:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:58:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:36.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:58:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 85 op/s
Oct 02 11:58:36 compute-0 ceph-mon[73607]: pgmap v934: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 85 op/s
Oct 02 11:58:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:36.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:38.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 150 MiB data, 267 MiB used, 21 GiB / 21 GiB avail; 5.2 MiB/s rd, 3.1 MiB/s wr, 141 op/s
Oct 02 11:58:38 compute-0 ceph-mon[73607]: pgmap v935: 305 pgs: 305 active+clean; 150 MiB data, 267 MiB used, 21 GiB / 21 GiB avail; 5.2 MiB/s rd, 3.1 MiB/s wr, 141 op/s
Oct 02 11:58:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:38.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:39 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 02 11:58:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:40.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 256 MiB data, 339 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 7.4 MiB/s wr, 161 op/s
Oct 02 11:58:40 compute-0 ceph-mon[73607]: pgmap v936: 305 pgs: 305 active+clean; 256 MiB data, 339 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 7.4 MiB/s wr, 161 op/s
Oct 02 11:58:40 compute-0 sshd-session[264601]: Invalid user riscv from 167.99.55.34 port 41990
Oct 02 11:58:40 compute-0 sshd-session[264601]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 11:58:40 compute-0 sshd-session[264601]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=167.99.55.34
Oct 02 11:58:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:40.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_11:58:42
Oct 02 11:58:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:58:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 11:58:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'default.rgw.control', 'default.rgw.meta', 'images', '.rgw.root', 'backups', 'vms', 'cephfs.cephfs.data']
Oct 02 11:58:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:58:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:58:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:58:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:58:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:58:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:58:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:58:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:42.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:58:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:58:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:58:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:58:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:58:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:58:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:58:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:58:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:58:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:58:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 256 MiB data, 339 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 7.4 MiB/s wr, 142 op/s
Oct 02 11:58:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Oct 02 11:58:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Oct 02 11:58:42 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Oct 02 11:58:42 compute-0 ceph-mon[73607]: pgmap v937: 305 pgs: 305 active+clean; 256 MiB data, 339 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 7.4 MiB/s wr, 142 op/s
Oct 02 11:58:42 compute-0 ceph-mon[73607]: osdmap e135: 3 total, 3 up, 3 in
Oct 02 11:58:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:42.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:43 compute-0 sshd-session[264601]: Failed password for invalid user riscv from 167.99.55.34 port 41990 ssh2
Oct 02 11:58:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Oct 02 11:58:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Oct 02 11:58:43 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Oct 02 11:58:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:44.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 259 MiB data, 339 MiB used, 21 GiB / 21 GiB avail; 5.8 MiB/s rd, 11 MiB/s wr, 239 op/s
Oct 02 11:58:44 compute-0 ceph-mon[73607]: osdmap e136: 3 total, 3 up, 3 in
Oct 02 11:58:44 compute-0 ceph-mon[73607]: pgmap v940: 305 pgs: 305 active+clean; 259 MiB data, 339 MiB used, 21 GiB / 21 GiB avail; 5.8 MiB/s rd, 11 MiB/s wr, 239 op/s
Oct 02 11:58:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:44.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:45 compute-0 sshd-session[264601]: Connection closed by invalid user riscv 167.99.55.34 port 41990 [preauth]
Oct 02 11:58:45 compute-0 podman[264605]: 2025-10-02 11:58:45.909643444 +0000 UTC m=+0.045507944 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:58:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:46.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 259 MiB data, 339 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 6.9 MiB/s wr, 153 op/s
Oct 02 11:58:46 compute-0 ceph-mon[73607]: pgmap v941: 305 pgs: 305 active+clean; 259 MiB data, 339 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 6.9 MiB/s wr, 153 op/s
Oct 02 11:58:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:46.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 11:58:48 compute-0 sudo[264625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:58:48 compute-0 sudo[264625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:48 compute-0 sudo[264625]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:48 compute-0 sudo[264650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:58:48 compute-0 sudo[264650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:48 compute-0 sudo[264650]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:48.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 259 MiB data, 339 MiB used, 21 GiB / 21 GiB avail; 173 KiB/s rd, 124 KiB/s wr, 26 op/s
Oct 02 11:58:48 compute-0 ceph-mon[73607]: pgmap v942: 305 pgs: 305 active+clean; 259 MiB data, 339 MiB used, 21 GiB / 21 GiB avail; 173 KiB/s rd, 124 KiB/s wr, 26 op/s
Oct 02 11:58:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:48.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:58:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:50.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:58:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 259 MiB data, 339 MiB used, 21 GiB / 21 GiB avail; 173 KiB/s rd, 124 KiB/s wr, 26 op/s
Oct 02 11:58:50 compute-0 ceph-mon[73607]: pgmap v943: 305 pgs: 305 active+clean; 259 MiB data, 339 MiB used, 21 GiB / 21 GiB avail; 173 KiB/s rd, 124 KiB/s wr, 26 op/s
Oct 02 11:58:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:50.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:58:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:52.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:58:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 259 MiB data, 339 MiB used, 21 GiB / 21 GiB avail; 140 KiB/s rd, 100 KiB/s wr, 21 op/s
Oct 02 11:58:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 11:58:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Oct 02 11:58:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Oct 02 11:58:52 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Oct 02 11:58:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:52.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:53 compute-0 ceph-mon[73607]: pgmap v944: 305 pgs: 305 active+clean; 259 MiB data, 339 MiB used, 21 GiB / 21 GiB avail; 140 KiB/s rd, 100 KiB/s wr, 21 op/s
Oct 02 11:58:53 compute-0 ceph-mon[73607]: osdmap e137: 3 total, 3 up, 3 in
Oct 02 11:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005132905557882003 of space, bias 1.0, pg target 1.539871667364601 quantized to 32 (current 32)
Oct 02 11:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Oct 02 11:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 11:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 11:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027172174530057695 quantized to 32 (current 32)
Oct 02 11:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 11:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 11:58:54 compute-0 nova_compute[257802]: 2025-10-02 11:58:54.242 2 DEBUG nova.network.neutron [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Automatically allocated network: {'id': '48e5c857-28d2-421a-9519-d32a13037daa', 'name': 'auto_allocated_network', 'tenant_id': '8972026d0f3a4bf4b6debd9555f9225c', 'admin_state_up': True, 'mtu': 1442, 'status': 'ACTIVE', 'subnets': ['16e0798e-6426-4773-8f30-55d7d7bbe4dc', 'ad194778-c6b1-4bb8-a50b-11049b061235'], 'shared': False, 'availability_zone_hints': [], 'availability_zones': [], 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'router:external': False, 'description': '', 'qos_policy_id': None, 'port_security_enabled': True, 'dns_domain': '', 'l2_adjacency': True, 'tags': [], 'created_at': '2025-10-02T11:58:36Z', 'updated_at': '2025-10-02T11:58:47Z', 'revision_number': 4, 'project_id': '8972026d0f3a4bf4b6debd9555f9225c'} _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2478
Oct 02 11:58:54 compute-0 nova_compute[257802]: 2025-10-02 11:58:54.252 2 WARNING oslo_policy.policy [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct 02 11:58:54 compute-0 nova_compute[257802]: 2025-10-02 11:58:54.252 2 WARNING oslo_policy.policy [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct 02 11:58:54 compute-0 nova_compute[257802]: 2025-10-02 11:58:54.254 2 DEBUG nova.policy [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '17903cd0333c407b96f0aede6dd3b16c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8972026d0f3a4bf4b6debd9555f9225c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 11:58:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:54.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 259 MiB data, 339 MiB used, 21 GiB / 21 GiB avail; 307 B/s rd, 14 KiB/s wr, 1 op/s
Oct 02 11:58:54 compute-0 podman[264679]: 2025-10-02 11:58:54.906756886 +0000 UTC m=+0.050780564 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Oct 02 11:58:54 compute-0 podman[264680]: 2025-10-02 11:58:54.907358811 +0000 UTC m=+0.049086212 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 11:58:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3825480791' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 11:58:54 compute-0 ceph-mon[73607]: pgmap v946: 305 pgs: 305 active+clean; 259 MiB data, 339 MiB used, 21 GiB / 21 GiB avail; 307 B/s rd, 14 KiB/s wr, 1 op/s
Oct 02 11:58:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:54.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:55 compute-0 nova_compute[257802]: 2025-10-02 11:58:55.862 2 DEBUG nova.network.neutron [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Successfully created port: 2df223c7-4bd2-4c58-9fb6-44a0b529b795 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 11:58:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/59783456' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 11:58:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1787676681' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 11:58:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1787676681' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 11:58:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:56.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 259 MiB data, 339 MiB used, 21 GiB / 21 GiB avail; 102 B/s rd, 13 KiB/s wr, 0 op/s
Oct 02 11:58:56 compute-0 ceph-mon[73607]: pgmap v947: 305 pgs: 305 active+clean; 259 MiB data, 339 MiB used, 21 GiB / 21 GiB avail; 102 B/s rd, 13 KiB/s wr, 0 op/s
Oct 02 11:58:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:58:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:56.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:58:57 compute-0 nova_compute[257802]: 2025-10-02 11:58:57.510 2 DEBUG nova.network.neutron [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Successfully updated port: 2df223c7-4bd2-4c58-9fb6-44a0b529b795 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 11:58:57 compute-0 nova_compute[257802]: 2025-10-02 11:58:57.539 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Acquiring lock "refresh_cache-6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 11:58:57 compute-0 nova_compute[257802]: 2025-10-02 11:58:57.539 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Acquired lock "refresh_cache-6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 11:58:57 compute-0 nova_compute[257802]: 2025-10-02 11:58:57.539 2 DEBUG nova.network.neutron [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 11:58:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 11:58:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3339636446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:58:58 compute-0 nova_compute[257802]: 2025-10-02 11:58:58.075 2 DEBUG nova.compute.manager [req-5dc6e83f-cfb3-4ac5-bc23-91bb4418d7c5 req-310751ef-68b8-4ab6-bbe0-6fe63390cac6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Received event network-changed-2df223c7-4bd2-4c58-9fb6-44a0b529b795 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 11:58:58 compute-0 nova_compute[257802]: 2025-10-02 11:58:58.075 2 DEBUG nova.compute.manager [req-5dc6e83f-cfb3-4ac5-bc23-91bb4418d7c5 req-310751ef-68b8-4ab6-bbe0-6fe63390cac6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Refreshing instance network info cache due to event network-changed-2df223c7-4bd2-4c58-9fb6-44a0b529b795. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 11:58:58 compute-0 nova_compute[257802]: 2025-10-02 11:58:58.076 2 DEBUG oslo_concurrency.lockutils [req-5dc6e83f-cfb3-4ac5-bc23-91bb4418d7c5 req-310751ef-68b8-4ab6-bbe0-6fe63390cac6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 11:58:58 compute-0 nova_compute[257802]: 2025-10-02 11:58:58.226 2 DEBUG nova.network.neutron [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 11:58:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:58:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:58.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:58:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 259 MiB data, 339 MiB used, 21 GiB / 21 GiB avail; 1023 B/s rd, 511 B/s wr, 2 op/s
Oct 02 11:58:58 compute-0 podman[264721]: 2025-10-02 11:58:58.951330155 +0000 UTC m=+0.087789846 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 11:58:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:58:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:58.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1453781497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:58:59 compute-0 ceph-mon[73607]: pgmap v948: 305 pgs: 305 active+clean; 259 MiB data, 339 MiB used, 21 GiB / 21 GiB avail; 1023 B/s rd, 511 B/s wr, 2 op/s
Oct 02 11:58:59 compute-0 nova_compute[257802]: 2025-10-02 11:58:59.893 2 DEBUG nova.network.neutron [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Updating instance_info_cache with network_info: [{"id": "2df223c7-4bd2-4c58-9fb6-44a0b529b795", "address": "fa:16:3e:84:10:73", "network": {"id": "48e5c857-28d2-421a-9519-d32a13037daa", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::144", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8972026d0f3a4bf4b6debd9555f9225c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2df223c7-4b", "ovs_interfaceid": "2df223c7-4bd2-4c58-9fb6-44a0b529b795", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 11:58:59 compute-0 nova_compute[257802]: 2025-10-02 11:58:59.965 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Releasing lock "refresh_cache-6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 11:58:59 compute-0 nova_compute[257802]: 2025-10-02 11:58:59.965 2 DEBUG nova.compute.manager [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Instance network_info: |[{"id": "2df223c7-4bd2-4c58-9fb6-44a0b529b795", "address": "fa:16:3e:84:10:73", "network": {"id": "48e5c857-28d2-421a-9519-d32a13037daa", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::144", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8972026d0f3a4bf4b6debd9555f9225c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2df223c7-4b", "ovs_interfaceid": "2df223c7-4bd2-4c58-9fb6-44a0b529b795", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 11:58:59 compute-0 nova_compute[257802]: 2025-10-02 11:58:59.965 2 DEBUG oslo_concurrency.lockutils [req-5dc6e83f-cfb3-4ac5-bc23-91bb4418d7c5 req-310751ef-68b8-4ab6-bbe0-6fe63390cac6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 11:58:59 compute-0 nova_compute[257802]: 2025-10-02 11:58:59.966 2 DEBUG nova.network.neutron [req-5dc6e83f-cfb3-4ac5-bc23-91bb4418d7c5 req-310751ef-68b8-4ab6-bbe0-6fe63390cac6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Refreshing network info cache for port 2df223c7-4bd2-4c58-9fb6-44a0b529b795 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 11:58:59 compute-0 nova_compute[257802]: 2025-10-02 11:58:59.968 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Start _get_guest_xml network_info=[{"id": "2df223c7-4bd2-4c58-9fb6-44a0b529b795", "address": "fa:16:3e:84:10:73", "network": {"id": "48e5c857-28d2-421a-9519-d32a13037daa", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::144", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8972026d0f3a4bf4b6debd9555f9225c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2df223c7-4b", "ovs_interfaceid": "2df223c7-4bd2-4c58-9fb6-44a0b529b795", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 11:58:59 compute-0 nova_compute[257802]: 2025-10-02 11:58:59.972 2 WARNING nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 11:58:59 compute-0 nova_compute[257802]: 2025-10-02 11:58:59.980 2 DEBUG nova.virt.libvirt.host [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 11:58:59 compute-0 nova_compute[257802]: 2025-10-02 11:58:59.980 2 DEBUG nova.virt.libvirt.host [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 11:58:59 compute-0 nova_compute[257802]: 2025-10-02 11:58:59.983 2 DEBUG nova.virt.libvirt.host [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 11:58:59 compute-0 nova_compute[257802]: 2025-10-02 11:58:59.984 2 DEBUG nova.virt.libvirt.host [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 11:58:59 compute-0 nova_compute[257802]: 2025-10-02 11:58:59.985 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 11:58:59 compute-0 nova_compute[257802]: 2025-10-02 11:58:59.985 2 DEBUG nova.virt.hardware [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 11:58:59 compute-0 nova_compute[257802]: 2025-10-02 11:58:59.985 2 DEBUG nova.virt.hardware [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 11:58:59 compute-0 nova_compute[257802]: 2025-10-02 11:58:59.985 2 DEBUG nova.virt.hardware [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 11:58:59 compute-0 nova_compute[257802]: 2025-10-02 11:58:59.985 2 DEBUG nova.virt.hardware [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 11:58:59 compute-0 nova_compute[257802]: 2025-10-02 11:58:59.985 2 DEBUG nova.virt.hardware [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 11:58:59 compute-0 nova_compute[257802]: 2025-10-02 11:58:59.986 2 DEBUG nova.virt.hardware [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 11:58:59 compute-0 nova_compute[257802]: 2025-10-02 11:58:59.986 2 DEBUG nova.virt.hardware [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 11:58:59 compute-0 nova_compute[257802]: 2025-10-02 11:58:59.986 2 DEBUG nova.virt.hardware [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 11:58:59 compute-0 nova_compute[257802]: 2025-10-02 11:58:59.986 2 DEBUG nova.virt.hardware [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 11:58:59 compute-0 nova_compute[257802]: 2025-10-02 11:58:59.986 2 DEBUG nova.virt.hardware [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 11:58:59 compute-0 nova_compute[257802]: 2025-10-02 11:58:59.986 2 DEBUG nova.virt.hardware [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 11:58:59 compute-0 nova_compute[257802]: 2025-10-02 11:58:59.989 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:59:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 11:59:00 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3661555325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 11:59:00 compute-0 nova_compute[257802]: 2025-10-02 11:59:00.401 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:59:00 compute-0 nova_compute[257802]: 2025-10-02 11:59:00.422 2 DEBUG nova.storage.rbd_utils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] rbd image 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 11:59:00 compute-0 nova_compute[257802]: 2025-10-02 11:59:00.425 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:59:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3661555325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 11:59:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:00.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 11:59:00 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1133207046' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 11:59:00 compute-0 nova_compute[257802]: 2025-10-02 11:59:00.822 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:59:00 compute-0 nova_compute[257802]: 2025-10-02 11:59:00.824 2 DEBUG nova.virt.libvirt.vif [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T11:58:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-524116669-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-524116669-3',id=4,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8972026d0f3a4bf4b6debd9555f9225c',ramdisk_id='',reservation_id='r-5dqt1vqy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-379631237',owner_user_name='tempest-AutoAllocateNetworkTest-379631237-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T11:58:34Z,user_data=None,user_id='17903cd0333c407b96f0aede6dd3b16c',uuid=6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2df223c7-4bd2-4c58-9fb6-44a0b529b795", "address": "fa:16:3e:84:10:73", "network": {"id": "48e5c857-28d2-421a-9519-d32a13037daa", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::144", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8972026d0f3a4bf4b6debd9555f9225c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2df223c7-4b", "ovs_interfaceid": "2df223c7-4bd2-4c58-9fb6-44a0b529b795", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 11:59:00 compute-0 nova_compute[257802]: 2025-10-02 11:59:00.825 2 DEBUG nova.network.os_vif_util [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Converting VIF {"id": "2df223c7-4bd2-4c58-9fb6-44a0b529b795", "address": "fa:16:3e:84:10:73", "network": {"id": "48e5c857-28d2-421a-9519-d32a13037daa", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::144", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8972026d0f3a4bf4b6debd9555f9225c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2df223c7-4b", "ovs_interfaceid": "2df223c7-4bd2-4c58-9fb6-44a0b529b795", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 11:59:00 compute-0 nova_compute[257802]: 2025-10-02 11:59:00.826 2 DEBUG nova.network.os_vif_util [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:10:73,bridge_name='br-int',has_traffic_filtering=True,id=2df223c7-4bd2-4c58-9fb6-44a0b529b795,network=Network(48e5c857-28d2-421a-9519-d32a13037daa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2df223c7-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 11:59:00 compute-0 nova_compute[257802]: 2025-10-02 11:59:00.828 2 DEBUG nova.objects.instance [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lazy-loading 'pci_devices' on Instance uuid 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 11:59:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 269 MiB data, 341 MiB used, 21 GiB / 21 GiB avail; 309 KiB/s rd, 169 KiB/s wr, 37 op/s
Oct 02 11:59:00 compute-0 nova_compute[257802]: 2025-10-02 11:59:00.847 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] End _get_guest_xml xml=<domain type="kvm">
Oct 02 11:59:00 compute-0 nova_compute[257802]:   <uuid>6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab</uuid>
Oct 02 11:59:00 compute-0 nova_compute[257802]:   <name>instance-00000004</name>
Oct 02 11:59:00 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 11:59:00 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 11:59:00 compute-0 nova_compute[257802]:   <metadata>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 11:59:00 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:       <nova:name>tempest-tempest.common.compute-instance-524116669-3</nova:name>
Oct 02 11:59:00 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 11:58:59</nova:creationTime>
Oct 02 11:59:00 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 11:59:00 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 11:59:00 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 11:59:00 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 11:59:00 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 11:59:00 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 11:59:00 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 11:59:00 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 11:59:00 compute-0 nova_compute[257802]:         <nova:user uuid="17903cd0333c407b96f0aede6dd3b16c">tempest-AutoAllocateNetworkTest-379631237-project-member</nova:user>
Oct 02 11:59:00 compute-0 nova_compute[257802]:         <nova:project uuid="8972026d0f3a4bf4b6debd9555f9225c">tempest-AutoAllocateNetworkTest-379631237</nova:project>
Oct 02 11:59:00 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 11:59:00 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 11:59:00 compute-0 nova_compute[257802]:         <nova:port uuid="2df223c7-4bd2-4c58-9fb6-44a0b529b795">
Oct 02 11:59:00 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.1.0.126" ipVersion="4"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="fdfe:381f:8400:1::144" ipVersion="6"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 11:59:00 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 11:59:00 compute-0 nova_compute[257802]:   </metadata>
Oct 02 11:59:00 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <system>
Oct 02 11:59:00 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 11:59:00 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 11:59:00 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 11:59:00 compute-0 nova_compute[257802]:       <entry name="serial">6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab</entry>
Oct 02 11:59:00 compute-0 nova_compute[257802]:       <entry name="uuid">6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab</entry>
Oct 02 11:59:00 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     </system>
Oct 02 11:59:00 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 11:59:00 compute-0 nova_compute[257802]:   <os>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:   </os>
Oct 02 11:59:00 compute-0 nova_compute[257802]:   <features>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <apic/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:   </features>
Oct 02 11:59:00 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:   </clock>
Oct 02 11:59:00 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:   </cpu>
Oct 02 11:59:00 compute-0 nova_compute[257802]:   <devices>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 11:59:00 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab_disk">
Oct 02 11:59:00 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:       </source>
Oct 02 11:59:00 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 11:59:00 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:       </auth>
Oct 02 11:59:00 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     </disk>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 11:59:00 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab_disk.config">
Oct 02 11:59:00 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:       </source>
Oct 02 11:59:00 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 11:59:00 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:       </auth>
Oct 02 11:59:00 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     </disk>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 11:59:00 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:84:10:73"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:       <target dev="tap2df223c7-4b"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     </interface>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 11:59:00 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab/console.log" append="off"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     </serial>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <video>
Oct 02 11:59:00 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     </video>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 11:59:00 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     </rng>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 11:59:00 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 11:59:00 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 11:59:00 compute-0 nova_compute[257802]:   </devices>
Oct 02 11:59:00 compute-0 nova_compute[257802]: </domain>
Oct 02 11:59:00 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 11:59:00 compute-0 nova_compute[257802]: 2025-10-02 11:59:00.849 2 DEBUG nova.compute.manager [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Preparing to wait for external event network-vif-plugged-2df223c7-4bd2-4c58-9fb6-44a0b529b795 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 11:59:00 compute-0 nova_compute[257802]: 2025-10-02 11:59:00.849 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Acquiring lock "6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:59:00 compute-0 nova_compute[257802]: 2025-10-02 11:59:00.849 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:59:00 compute-0 nova_compute[257802]: 2025-10-02 11:59:00.850 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:59:00 compute-0 nova_compute[257802]: 2025-10-02 11:59:00.850 2 DEBUG nova.virt.libvirt.vif [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T11:58:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-524116669-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-524116669-3',id=4,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8972026d0f3a4bf4b6debd9555f9225c',ramdisk_id='',reservation_id='r-5dqt1vqy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-379631237',owner_user_name='tempest-AutoAllocateNetworkTest-379631237-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T11:58:34Z,user_data=None,user_id='17903cd0333c407b96f0aede6dd3b16c',uuid=6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2df223c7-4bd2-4c58-9fb6-44a0b529b795", "address": "fa:16:3e:84:10:73", "network": {"id": "48e5c857-28d2-421a-9519-d32a13037daa", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::144", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8972026d0f3a4bf4b6debd9555f9225c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2df223c7-4b", "ovs_interfaceid": "2df223c7-4bd2-4c58-9fb6-44a0b529b795", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 11:59:00 compute-0 nova_compute[257802]: 2025-10-02 11:59:00.851 2 DEBUG nova.network.os_vif_util [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Converting VIF {"id": "2df223c7-4bd2-4c58-9fb6-44a0b529b795", "address": "fa:16:3e:84:10:73", "network": {"id": "48e5c857-28d2-421a-9519-d32a13037daa", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::144", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8972026d0f3a4bf4b6debd9555f9225c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2df223c7-4b", "ovs_interfaceid": "2df223c7-4bd2-4c58-9fb6-44a0b529b795", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 11:59:00 compute-0 nova_compute[257802]: 2025-10-02 11:59:00.852 2 DEBUG nova.network.os_vif_util [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:10:73,bridge_name='br-int',has_traffic_filtering=True,id=2df223c7-4bd2-4c58-9fb6-44a0b529b795,network=Network(48e5c857-28d2-421a-9519-d32a13037daa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2df223c7-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 11:59:00 compute-0 nova_compute[257802]: 2025-10-02 11:59:00.852 2 DEBUG os_vif [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:10:73,bridge_name='br-int',has_traffic_filtering=True,id=2df223c7-4bd2-4c58-9fb6-44a0b529b795,network=Network(48e5c857-28d2-421a-9519-d32a13037daa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2df223c7-4b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 11:59:00 compute-0 nova_compute[257802]: 2025-10-02 11:59:00.910 2 DEBUG ovsdbapp.backend.ovs_idl [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 11:59:00 compute-0 nova_compute[257802]: 2025-10-02 11:59:00.910 2 DEBUG ovsdbapp.backend.ovs_idl [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 11:59:00 compute-0 nova_compute[257802]: 2025-10-02 11:59:00.910 2 DEBUG ovsdbapp.backend.ovs_idl [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 11:59:00 compute-0 nova_compute[257802]: 2025-10-02 11:59:00.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 11:59:00 compute-0 nova_compute[257802]: 2025-10-02 11:59:00.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:00 compute-0 nova_compute[257802]: 2025-10-02 11:59:00.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 11:59:00 compute-0 nova_compute[257802]: 2025-10-02 11:59:00.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:00 compute-0 nova_compute[257802]: 2025-10-02 11:59:00.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:00 compute-0 nova_compute[257802]: 2025-10-02 11:59:00.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:00 compute-0 nova_compute[257802]: 2025-10-02 11:59:00.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:00 compute-0 nova_compute[257802]: 2025-10-02 11:59:00.924 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 11:59:00 compute-0 nova_compute[257802]: 2025-10-02 11:59:00.924 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 11:59:00 compute-0 nova_compute[257802]: 2025-10-02 11:59:00.925 2 INFO oslo.privsep.daemon [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp7xyrlrsh/privsep.sock']
Oct 02 11:59:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:01.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1133207046' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 11:59:01 compute-0 ceph-mon[73607]: pgmap v949: 305 pgs: 305 active+clean; 269 MiB data, 341 MiB used, 21 GiB / 21 GiB avail; 309 KiB/s rd, 169 KiB/s wr, 37 op/s
Oct 02 11:59:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3537301381' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 11:59:01 compute-0 nova_compute[257802]: 2025-10-02 11:59:01.563 2 INFO oslo.privsep.daemon [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Spawned new privsep daemon via rootwrap
Oct 02 11:59:01 compute-0 nova_compute[257802]: 2025-10-02 11:59:01.441 872 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 11:59:01 compute-0 nova_compute[257802]: 2025-10-02 11:59:01.445 872 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 11:59:01 compute-0 nova_compute[257802]: 2025-10-02 11:59:01.447 872 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Oct 02 11:59:01 compute-0 nova_compute[257802]: 2025-10-02 11:59:01.450 872 INFO oslo.privsep.daemon [-] privsep daemon running as pid 872
Oct 02 11:59:01 compute-0 nova_compute[257802]: 2025-10-02 11:59:01.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:01 compute-0 nova_compute[257802]: 2025-10-02 11:59:01.952 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2df223c7-4b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 11:59:01 compute-0 nova_compute[257802]: 2025-10-02 11:59:01.953 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2df223c7-4b, col_values=(('external_ids', {'iface-id': '2df223c7-4bd2-4c58-9fb6-44a0b529b795', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:84:10:73', 'vm-uuid': '6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 11:59:01 compute-0 NetworkManager[44987]: <info>  [1759406341.9897] manager: (tap2df223c7-4b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Oct 02 11:59:01 compute-0 nova_compute[257802]: 2025-10-02 11:59:01.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:01 compute-0 nova_compute[257802]: 2025-10-02 11:59:01.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 11:59:01 compute-0 nova_compute[257802]: 2025-10-02 11:59:01.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:01 compute-0 nova_compute[257802]: 2025-10-02 11:59:01.996 2 INFO os_vif [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:10:73,bridge_name='br-int',has_traffic_filtering=True,id=2df223c7-4bd2-4c58-9fb6-44a0b529b795,network=Network(48e5c857-28d2-421a-9519-d32a13037daa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2df223c7-4b')
Oct 02 11:59:02 compute-0 nova_compute[257802]: 2025-10-02 11:59:02.057 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 11:59:02 compute-0 nova_compute[257802]: 2025-10-02 11:59:02.057 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 11:59:02 compute-0 nova_compute[257802]: 2025-10-02 11:59:02.058 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] No VIF found with MAC fa:16:3e:84:10:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 11:59:02 compute-0 nova_compute[257802]: 2025-10-02 11:59:02.058 2 INFO nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Using config drive
Oct 02 11:59:02 compute-0 nova_compute[257802]: 2025-10-02 11:59:02.079 2 DEBUG nova.storage.rbd_utils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] rbd image 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 11:59:02 compute-0 nova_compute[257802]: 2025-10-02 11:59:02.181 2 DEBUG nova.network.neutron [req-5dc6e83f-cfb3-4ac5-bc23-91bb4418d7c5 req-310751ef-68b8-4ab6-bbe0-6fe63390cac6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Updated VIF entry in instance network info cache for port 2df223c7-4bd2-4c58-9fb6-44a0b529b795. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 11:59:02 compute-0 nova_compute[257802]: 2025-10-02 11:59:02.181 2 DEBUG nova.network.neutron [req-5dc6e83f-cfb3-4ac5-bc23-91bb4418d7c5 req-310751ef-68b8-4ab6-bbe0-6fe63390cac6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Updating instance_info_cache with network_info: [{"id": "2df223c7-4bd2-4c58-9fb6-44a0b529b795", "address": "fa:16:3e:84:10:73", "network": {"id": "48e5c857-28d2-421a-9519-d32a13037daa", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::144", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8972026d0f3a4bf4b6debd9555f9225c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2df223c7-4b", "ovs_interfaceid": "2df223c7-4bd2-4c58-9fb6-44a0b529b795", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 11:59:02 compute-0 nova_compute[257802]: 2025-10-02 11:59:02.206 2 DEBUG oslo_concurrency.lockutils [req-5dc6e83f-cfb3-4ac5-bc23-91bb4418d7c5 req-310751ef-68b8-4ab6-bbe0-6fe63390cac6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 11:59:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1430692618' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 11:59:02 compute-0 nova_compute[257802]: 2025-10-02 11:59:02.774 2 INFO nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Creating config drive at /var/lib/nova/instances/6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab/disk.config
Oct 02 11:59:02 compute-0 nova_compute[257802]: 2025-10-02 11:59:02.778 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk8bz5doa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:59:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:59:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:02.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:59:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 269 MiB data, 341 MiB used, 21 GiB / 21 GiB avail; 309 KiB/s rd, 169 KiB/s wr, 37 op/s
Oct 02 11:59:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 11:59:02 compute-0 nova_compute[257802]: 2025-10-02 11:59:02.900 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk8bz5doa" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:59:02 compute-0 nova_compute[257802]: 2025-10-02 11:59:02.924 2 DEBUG nova.storage.rbd_utils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] rbd image 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 11:59:02 compute-0 nova_compute[257802]: 2025-10-02 11:59:02.927 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab/disk.config 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:59:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:03.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:03 compute-0 nova_compute[257802]: 2025-10-02 11:59:03.070 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab/disk.config 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:59:03 compute-0 nova_compute[257802]: 2025-10-02 11:59:03.071 2 INFO nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Deleting local config drive /var/lib/nova/instances/6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab/disk.config because it was imported into RBD.
Oct 02 11:59:03 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct 02 11:59:03 compute-0 kernel: tap2df223c7-4b: entered promiscuous mode
Oct 02 11:59:03 compute-0 NetworkManager[44987]: <info>  [1759406343.1263] manager: (tap2df223c7-4b): new Tun device (/org/freedesktop/NetworkManager/Devices/26)
Oct 02 11:59:03 compute-0 ovn_controller[148183]: 2025-10-02T11:59:03Z|00027|binding|INFO|Claiming lport 2df223c7-4bd2-4c58-9fb6-44a0b529b795 for this chassis.
Oct 02 11:59:03 compute-0 ovn_controller[148183]: 2025-10-02T11:59:03Z|00028|binding|INFO|2df223c7-4bd2-4c58-9fb6-44a0b529b795: Claiming fa:16:3e:84:10:73 10.1.0.126 fdfe:381f:8400:1::144
Oct 02 11:59:03 compute-0 nova_compute[257802]: 2025-10-02 11:59:03.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:03 compute-0 systemd-udevd[264892]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 11:59:03 compute-0 nova_compute[257802]: 2025-10-02 11:59:03.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:03 compute-0 NetworkManager[44987]: <info>  [1759406343.1815] device (tap2df223c7-4b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 11:59:03 compute-0 NetworkManager[44987]: <info>  [1759406343.1823] device (tap2df223c7-4b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 11:59:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:03.200 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:10:73 10.1.0.126 fdfe:381f:8400:1::144'], port_security=['fa:16:3e:84:10:73 10.1.0.126 fdfe:381f:8400:1::144'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.126/26 fdfe:381f:8400:1::144/64', 'neutron:device_id': '6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48e5c857-28d2-421a-9519-d32a13037daa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8972026d0f3a4bf4b6debd9555f9225c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c1955bc9-f08c-4e28-af03-54d4a3949aee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0c8e46c-a173-4ff5-bd2b-7026f16b2de8, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=2df223c7-4bd2-4c58-9fb6-44a0b529b795) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 11:59:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:03.201 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 2df223c7-4bd2-4c58-9fb6-44a0b529b795 in datapath 48e5c857-28d2-421a-9519-d32a13037daa bound to our chassis
Oct 02 11:59:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:03.203 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 48e5c857-28d2-421a-9519-d32a13037daa
Oct 02 11:59:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:03.204 158261 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp3e0xfle1/privsep.sock']
Oct 02 11:59:03 compute-0 systemd-machined[211836]: New machine qemu-2-instance-00000004.
Oct 02 11:59:03 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000004.
Oct 02 11:59:03 compute-0 nova_compute[257802]: 2025-10-02 11:59:03.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:03 compute-0 ovn_controller[148183]: 2025-10-02T11:59:03Z|00029|binding|INFO|Setting lport 2df223c7-4bd2-4c58-9fb6-44a0b529b795 ovn-installed in OVS
Oct 02 11:59:03 compute-0 ovn_controller[148183]: 2025-10-02T11:59:03Z|00030|binding|INFO|Setting lport 2df223c7-4bd2-4c58-9fb6-44a0b529b795 up in Southbound
Oct 02 11:59:03 compute-0 nova_compute[257802]: 2025-10-02 11:59:03.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:03 compute-0 ceph-mon[73607]: pgmap v950: 305 pgs: 305 active+clean; 269 MiB data, 341 MiB used, 21 GiB / 21 GiB avail; 309 KiB/s rd, 169 KiB/s wr, 37 op/s
Oct 02 11:59:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:03.904 158261 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 02 11:59:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:03.905 158261 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp3e0xfle1/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 02 11:59:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:03.747 264953 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 11:59:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:03.751 264953 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 11:59:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:03.753 264953 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Oct 02 11:59:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:03.754 264953 INFO oslo.privsep.daemon [-] privsep daemon running as pid 264953
Oct 02 11:59:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:03.908 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[64267656-6732-4ee2-8927-aaab57e7750a]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.052 2 DEBUG nova.compute.manager [req-88fbaa3a-e5ee-474f-be6c-73e9114affa0 req-8bf7673c-fa1f-4d32-9733-e055d4218dfa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Received event network-vif-plugged-2df223c7-4bd2-4c58-9fb6-44a0b529b795 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.053 2 DEBUG oslo_concurrency.lockutils [req-88fbaa3a-e5ee-474f-be6c-73e9114affa0 req-8bf7673c-fa1f-4d32-9733-e055d4218dfa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.053 2 DEBUG oslo_concurrency.lockutils [req-88fbaa3a-e5ee-474f-be6c-73e9114affa0 req-8bf7673c-fa1f-4d32-9733-e055d4218dfa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.053 2 DEBUG oslo_concurrency.lockutils [req-88fbaa3a-e5ee-474f-be6c-73e9114affa0 req-8bf7673c-fa1f-4d32-9733-e055d4218dfa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.054 2 DEBUG nova.compute.manager [req-88fbaa3a-e5ee-474f-be6c-73e9114affa0 req-8bf7673c-fa1f-4d32-9733-e055d4218dfa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Processing event network-vif-plugged-2df223c7-4bd2-4c58-9fb6-44a0b529b795 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.061 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406344.0613124, 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.062 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] VM Started (Lifecycle Event)
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.064 2 DEBUG nova.compute.manager [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.067 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.070 2 INFO nova.virt.libvirt.driver [-] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Instance spawned successfully.
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.070 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.125 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.128 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.137 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.137 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.138 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.138 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.139 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.139 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.188 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.191 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406344.061413, 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.191 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] VM Paused (Lifecycle Event)
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.215 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.218 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406344.0667684, 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.218 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] VM Resumed (Lifecycle Event)
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.227 2 INFO nova.compute.manager [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Took 29.31 seconds to spawn the instance on the hypervisor.
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.227 2 DEBUG nova.compute.manager [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.251 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.254 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.295 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.318 2 INFO nova.compute.manager [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Took 31.86 seconds to build instance.
Oct 02 11:59:04 compute-0 nova_compute[257802]: 2025-10-02 11:59:04.361 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 32.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:59:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/841098749' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 11:59:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:59:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:04.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:59:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Oct 02 11:59:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:05.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:05 compute-0 nova_compute[257802]: 2025-10-02 11:59:05.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:05.132 264953 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:59:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:05.133 264953 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:59:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:05.133 264953 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:59:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1493660323' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 11:59:05 compute-0 ceph-mon[73607]: pgmap v951: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Oct 02 11:59:06 compute-0 nova_compute[257802]: 2025-10-02 11:59:06.133 2 DEBUG nova.compute.manager [req-f7e97bbc-afe6-4284-8ac1-5373c2e74fb9 req-d607deaa-5483-404f-bc69-7ee0be0ff188 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Received event network-vif-plugged-2df223c7-4bd2-4c58-9fb6-44a0b529b795 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 11:59:06 compute-0 nova_compute[257802]: 2025-10-02 11:59:06.133 2 DEBUG oslo_concurrency.lockutils [req-f7e97bbc-afe6-4284-8ac1-5373c2e74fb9 req-d607deaa-5483-404f-bc69-7ee0be0ff188 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:59:06 compute-0 nova_compute[257802]: 2025-10-02 11:59:06.134 2 DEBUG oslo_concurrency.lockutils [req-f7e97bbc-afe6-4284-8ac1-5373c2e74fb9 req-d607deaa-5483-404f-bc69-7ee0be0ff188 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:59:06 compute-0 nova_compute[257802]: 2025-10-02 11:59:06.134 2 DEBUG oslo_concurrency.lockutils [req-f7e97bbc-afe6-4284-8ac1-5373c2e74fb9 req-d607deaa-5483-404f-bc69-7ee0be0ff188 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:59:06 compute-0 nova_compute[257802]: 2025-10-02 11:59:06.134 2 DEBUG nova.compute.manager [req-f7e97bbc-afe6-4284-8ac1-5373c2e74fb9 req-d607deaa-5483-404f-bc69-7ee0be0ff188 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] No waiting events found dispatching network-vif-plugged-2df223c7-4bd2-4c58-9fb6-44a0b529b795 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 11:59:06 compute-0 nova_compute[257802]: 2025-10-02 11:59:06.134 2 WARNING nova.compute.manager [req-f7e97bbc-afe6-4284-8ac1-5373c2e74fb9 req-d607deaa-5483-404f-bc69-7ee0be0ff188 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Received unexpected event network-vif-plugged-2df223c7-4bd2-4c58-9fb6-44a0b529b795 for instance with vm_state active and task_state None.
Oct 02 11:59:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:06.469 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[26e66fc2-0181-4dc9-b5d8-0dafca7b3b3c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:06.470 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap48e5c857-21 in ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 11:59:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:06.472 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap48e5c857-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 11:59:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:06.472 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2fd67afb-296f-4f65-86b4-f3eda875bcaa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:06.477 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6924e8bb-a0b3-4eb4-8f47-ff5121859494]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:06.509 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[279a95e0-2f8b-43f3-b35a-884aff9f2496]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:06.540 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0d755426-dea9-44c4-b217-80a35ffd66e6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:06.544 158261 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpklahmoll/privsep.sock']
Oct 02 11:59:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:59:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:06.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:59:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 114 op/s
Oct 02 11:59:06 compute-0 ceph-mon[73607]: pgmap v952: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 114 op/s
Oct 02 11:59:07 compute-0 nova_compute[257802]: 2025-10-02 11:59:07.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:07.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:07 compute-0 sudo[264970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:07 compute-0 sudo[264970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:07 compute-0 sudo[264970]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:07.244 158261 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 02 11:59:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:07.244 158261 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpklahmoll/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 02 11:59:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:07.090 264969 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 11:59:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:07.094 264969 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 11:59:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:07.096 264969 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 02 11:59:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:07.096 264969 INFO oslo.privsep.daemon [-] privsep daemon running as pid 264969
Oct 02 11:59:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:07.247 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[339042d2-1ea2-4f81-b4ae-25e0e438bf1e]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:07 compute-0 sudo[264995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:59:07 compute-0 sudo[264995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:07 compute-0 sudo[264995]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:07 compute-0 sudo[265023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:07 compute-0 sudo[265023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:07 compute-0 sudo[265023]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:07 compute-0 sudo[265048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:59:07 compute-0 sudo[265048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:07.826 264969 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:59:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:07.826 264969 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:59:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:07.826 264969 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:59:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 11:59:07 compute-0 sudo[265048]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:59:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:59:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:59:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:59:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:59:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:59:08 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev f5ee982e-ca3c-4f07-a2af-e23573bf86bc does not exist
Oct 02 11:59:08 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 438325d6-61fb-4394-905c-ec22928e5fd7 does not exist
Oct 02 11:59:08 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 7ad70bf9-f4f0-408c-98d9-d356e91925ca does not exist
Oct 02 11:59:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:59:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:59:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:59:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:59:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:59:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:59:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:59:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:59:08 compute-0 sudo[265106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:08 compute-0 sudo[265106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:08 compute-0 sudo[265106]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:08 compute-0 sudo[265131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:59:08 compute-0 sudo[265131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:08 compute-0 sudo[265131]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:08 compute-0 sudo[265156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:08 compute-0 sudo[265156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:08 compute-0 sudo[265157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:08 compute-0 sudo[265157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:08 compute-0 sudo[265156]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:08 compute-0 sudo[265157]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:08 compute-0 sudo[265206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:08 compute-0 sudo[265206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:08 compute-0 sudo[265206]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:08 compute-0 sudo[265208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:59:08 compute-0 sudo[265208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:08.451 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[4c1f54a0-d106-4f8c-b7a1-076ed6450233]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:08.455 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e1689920-1bc1-43bb-bd2d-4bf3235e7af7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:08 compute-0 NetworkManager[44987]: <info>  [1759406348.4618] manager: (tap48e5c857-20): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Oct 02 11:59:08 compute-0 systemd-udevd[265262]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:08.481 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[1ba86803-0769-446d-8132-d5fd80b79e02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:08.485 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[f86413be-6c7c-4b5d-8aa6-884be2377614]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:08 compute-0 NetworkManager[44987]: <info>  [1759406348.5140] device (tap48e5c857-20): carrier: link connected
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:08.518 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[1d870d1b-b920-4e35-9d55-06a15c12bb8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:08.535 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[beb010da-e957-4ed3-9401-6d22a2636688]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48e5c857-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:83:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441613, 'reachable_time': 25872, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265288, 'error': None, 'target': 'ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:08.549 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a22d28fa-06b4-469b-a430-02307a5cb305]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe19:83c0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 441613, 'tstamp': 441613}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265293, 'error': None, 'target': 'ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:08.563 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[19ba4167-2267-4091-9f40-d2bf37b18ca7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48e5c857-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:83:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441613, 'reachable_time': 25872, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265296, 'error': None, 'target': 'ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:08.587 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[275ccab6-551f-43b6-ae3d-3b6439e76ba2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:08.640 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f5c169e9-01ca-4361-9698-bb3e15270c5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:08.643 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48e5c857-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:08.643 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:08.644 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48e5c857-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 11:59:08 compute-0 NetworkManager[44987]: <info>  [1759406348.6462] manager: (tap48e5c857-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Oct 02 11:59:08 compute-0 nova_compute[257802]: 2025-10-02 11:59:08.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:08 compute-0 kernel: tap48e5c857-20: entered promiscuous mode
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:08.650 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap48e5c857-20, col_values=(('external_ids', {'iface-id': '108050f3-e876-480b-8cdd-c1255d33ae84'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 11:59:08 compute-0 nova_compute[257802]: 2025-10-02 11:59:08.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:08 compute-0 ovn_controller[148183]: 2025-10-02T11:59:08Z|00031|binding|INFO|Releasing lport 108050f3-e876-480b-8cdd-c1255d33ae84 from this chassis (sb_readonly=0)
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:08.654 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/48e5c857-28d2-421a-9519-d32a13037daa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/48e5c857-28d2-421a-9519-d32a13037daa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:08.655 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[30fff411-2d15-46ed-8a56-c5d922b38497]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:08.656 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]: global
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-48e5c857-28d2-421a-9519-d32a13037daa
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/48e5c857-28d2-421a-9519-d32a13037daa.pid.haproxy
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]: 
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]: 
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]: 
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 48e5c857-28d2-421a-9519-d32a13037daa
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 11:59:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:08.657 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa', 'env', 'PROCESS_TAG=haproxy-48e5c857-28d2-421a-9519-d32a13037daa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/48e5c857-28d2-421a-9519-d32a13037daa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 11:59:08 compute-0 nova_compute[257802]: 2025-10-02 11:59:08.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:08 compute-0 podman[265333]: 2025-10-02 11:59:08.733060321 +0000 UTC m=+0.042062418 container create b59c7b78f2bbcc1b5043b308dad9761adca9102ca45043e3abe9335479e7d118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hermann, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 11:59:08 compute-0 systemd[1]: Started libpod-conmon-b59c7b78f2bbcc1b5043b308dad9761adca9102ca45043e3abe9335479e7d118.scope.
Oct 02 11:59:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:59:08 compute-0 podman[265333]: 2025-10-02 11:59:08.710150296 +0000 UTC m=+0.019152413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:59:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:59:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:08.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:59:08 compute-0 podman[265333]: 2025-10-02 11:59:08.820088698 +0000 UTC m=+0.129090825 container init b59c7b78f2bbcc1b5043b308dad9761adca9102ca45043e3abe9335479e7d118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hermann, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:59:08 compute-0 podman[265333]: 2025-10-02 11:59:08.82872481 +0000 UTC m=+0.137726907 container start b59c7b78f2bbcc1b5043b308dad9761adca9102ca45043e3abe9335479e7d118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 11:59:08 compute-0 podman[265333]: 2025-10-02 11:59:08.831735535 +0000 UTC m=+0.140737652 container attach b59c7b78f2bbcc1b5043b308dad9761adca9102ca45043e3abe9335479e7d118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hermann, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 11:59:08 compute-0 vigorous_hermann[265350]: 167 167
Oct 02 11:59:08 compute-0 systemd[1]: libpod-b59c7b78f2bbcc1b5043b308dad9761adca9102ca45043e3abe9335479e7d118.scope: Deactivated successfully.
Oct 02 11:59:08 compute-0 conmon[265350]: conmon b59c7b78f2bbcc1b5043 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b59c7b78f2bbcc1b5043b308dad9761adca9102ca45043e3abe9335479e7d118.scope/container/memory.events
Oct 02 11:59:08 compute-0 podman[265333]: 2025-10-02 11:59:08.837465226 +0000 UTC m=+0.146467323 container died b59c7b78f2bbcc1b5043b308dad9761adca9102ca45043e3abe9335479e7d118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hermann, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 11:59:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.8 MiB/s wr, 208 op/s
Oct 02 11:59:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8613df759cf8e1c57cee7fdabad41b7e2acc0a0fd0df456e10c1eda1ace360a-merged.mount: Deactivated successfully.
Oct 02 11:59:08 compute-0 podman[265333]: 2025-10-02 11:59:08.885539922 +0000 UTC m=+0.194542019 container remove b59c7b78f2bbcc1b5043b308dad9761adca9102ca45043e3abe9335479e7d118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:59:08 compute-0 systemd[1]: libpod-conmon-b59c7b78f2bbcc1b5043b308dad9761adca9102ca45043e3abe9335479e7d118.scope: Deactivated successfully.
Oct 02 11:59:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:09.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:09 compute-0 podman[265391]: 2025-10-02 11:59:09.039323454 +0000 UTC m=+0.055620782 container create 7f4efb0f45fb8d2d529cb0cab31f6097559591e431ed292b730c711118ba4f30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 11:59:09 compute-0 podman[265408]: 2025-10-02 11:59:09.060162569 +0000 UTC m=+0.051565183 container create 87a0bafea53fc5a5d01cec8f76714e1823f1447cdc1e0691a77a630a89d11dc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_golick, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 11:59:09 compute-0 systemd[1]: Started libpod-conmon-7f4efb0f45fb8d2d529cb0cab31f6097559591e431ed292b730c711118ba4f30.scope.
Oct 02 11:59:09 compute-0 systemd[1]: Started libpod-conmon-87a0bafea53fc5a5d01cec8f76714e1823f1447cdc1e0691a77a630a89d11dc2.scope.
Oct 02 11:59:09 compute-0 podman[265391]: 2025-10-02 11:59:09.014472982 +0000 UTC m=+0.030770330 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 11:59:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:59:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1ac3b406ab13d127ac0954d82b8ccdf99fc44c7387be94a2a986695513408d5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 11:59:09 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:59:09 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:59:09 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:59:09 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:59:09 compute-0 ceph-mon[73607]: pgmap v953: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.8 MiB/s wr, 208 op/s
Oct 02 11:59:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:59:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90ef23006e706d64b2dc5c7740da1b85252cbad78dd74ef066f511f40fca3568/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:59:09 compute-0 podman[265408]: 2025-10-02 11:59:09.037795817 +0000 UTC m=+0.029198451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:59:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90ef23006e706d64b2dc5c7740da1b85252cbad78dd74ef066f511f40fca3568/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:59:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90ef23006e706d64b2dc5c7740da1b85252cbad78dd74ef066f511f40fca3568/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:59:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90ef23006e706d64b2dc5c7740da1b85252cbad78dd74ef066f511f40fca3568/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:59:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90ef23006e706d64b2dc5c7740da1b85252cbad78dd74ef066f511f40fca3568/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:59:09 compute-0 podman[265408]: 2025-10-02 11:59:09.146617741 +0000 UTC m=+0.138020375 container init 87a0bafea53fc5a5d01cec8f76714e1823f1447cdc1e0691a77a630a89d11dc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:59:09 compute-0 podman[265391]: 2025-10-02 11:59:09.156628738 +0000 UTC m=+0.172926116 container init 7f4efb0f45fb8d2d529cb0cab31f6097559591e431ed292b730c711118ba4f30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 02 11:59:09 compute-0 podman[265408]: 2025-10-02 11:59:09.158205937 +0000 UTC m=+0.149608571 container start 87a0bafea53fc5a5d01cec8f76714e1823f1447cdc1e0691a77a630a89d11dc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_golick, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 11:59:09 compute-0 podman[265408]: 2025-10-02 11:59:09.16403607 +0000 UTC m=+0.155438714 container attach 87a0bafea53fc5a5d01cec8f76714e1823f1447cdc1e0691a77a630a89d11dc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 11:59:09 compute-0 podman[265391]: 2025-10-02 11:59:09.16561602 +0000 UTC m=+0.181913358 container start 7f4efb0f45fb8d2d529cb0cab31f6097559591e431ed292b730c711118ba4f30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 11:59:09 compute-0 neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa[265426]: [NOTICE]   (265437) : New worker (265439) forked
Oct 02 11:59:09 compute-0 neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa[265426]: [NOTICE]   (265437) : Loading success.
Oct 02 11:59:09 compute-0 pedantic_golick[265431]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:59:10 compute-0 pedantic_golick[265431]: --> relative data size: 1.0
Oct 02 11:59:10 compute-0 pedantic_golick[265431]: --> All data devices are unavailable
Oct 02 11:59:10 compute-0 systemd[1]: libpod-87a0bafea53fc5a5d01cec8f76714e1823f1447cdc1e0691a77a630a89d11dc2.scope: Deactivated successfully.
Oct 02 11:59:10 compute-0 podman[265458]: 2025-10-02 11:59:10.073121904 +0000 UTC m=+0.026562916 container died 87a0bafea53fc5a5d01cec8f76714e1823f1447cdc1e0691a77a630a89d11dc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:59:10 compute-0 nova_compute[257802]: 2025-10-02 11:59:10.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-90ef23006e706d64b2dc5c7740da1b85252cbad78dd74ef066f511f40fca3568-merged.mount: Deactivated successfully.
Oct 02 11:59:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:10.265 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 11:59:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:10.266 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 11:59:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:10.266 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 11:59:10 compute-0 nova_compute[257802]: 2025-10-02 11:59:10.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:10 compute-0 podman[265458]: 2025-10-02 11:59:10.385418927 +0000 UTC m=+0.338859899 container remove 87a0bafea53fc5a5d01cec8f76714e1823f1447cdc1e0691a77a630a89d11dc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_golick, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 11:59:10 compute-0 systemd[1]: libpod-conmon-87a0bafea53fc5a5d01cec8f76714e1823f1447cdc1e0691a77a630a89d11dc2.scope: Deactivated successfully.
Oct 02 11:59:10 compute-0 sudo[265208]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:10 compute-0 sudo[265473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:10 compute-0 sudo[265473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:10 compute-0 sudo[265473]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:10 compute-0 sudo[265498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:59:10 compute-0 sudo[265498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:10 compute-0 sudo[265498]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:10 compute-0 sudo[265523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:10 compute-0 sudo[265523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:10 compute-0 sudo[265523]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:10 compute-0 sudo[265548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 11:59:10 compute-0 sudo[265548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:10.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 276 MiB data, 350 MiB used, 21 GiB / 21 GiB avail; 6.8 MiB/s rd, 1.8 MiB/s wr, 304 op/s
Oct 02 11:59:10 compute-0 ceph-mon[73607]: pgmap v954: 305 pgs: 305 active+clean; 276 MiB data, 350 MiB used, 21 GiB / 21 GiB avail; 6.8 MiB/s rd, 1.8 MiB/s wr, 304 op/s
Oct 02 11:59:10 compute-0 podman[265613]: 2025-10-02 11:59:10.98399781 +0000 UTC m=+0.040613313 container create c77fd6ea3987ef7a59c38fdfcb8fcfbc50dd99475baf2956e14f381e4f45792e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_cori, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 11:59:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:11.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:11 compute-0 systemd[1]: Started libpod-conmon-c77fd6ea3987ef7a59c38fdfcb8fcfbc50dd99475baf2956e14f381e4f45792e.scope.
Oct 02 11:59:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:59:11 compute-0 podman[265613]: 2025-10-02 11:59:10.966592671 +0000 UTC m=+0.023208194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:59:11 compute-0 podman[265613]: 2025-10-02 11:59:11.069901879 +0000 UTC m=+0.126517402 container init c77fd6ea3987ef7a59c38fdfcb8fcfbc50dd99475baf2956e14f381e4f45792e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 11:59:11 compute-0 podman[265613]: 2025-10-02 11:59:11.077131058 +0000 UTC m=+0.133746571 container start c77fd6ea3987ef7a59c38fdfcb8fcfbc50dd99475baf2956e14f381e4f45792e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:59:11 compute-0 podman[265613]: 2025-10-02 11:59:11.081548776 +0000 UTC m=+0.138164279 container attach c77fd6ea3987ef7a59c38fdfcb8fcfbc50dd99475baf2956e14f381e4f45792e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 11:59:11 compute-0 recursing_cori[265629]: 167 167
Oct 02 11:59:11 compute-0 systemd[1]: libpod-c77fd6ea3987ef7a59c38fdfcb8fcfbc50dd99475baf2956e14f381e4f45792e.scope: Deactivated successfully.
Oct 02 11:59:11 compute-0 podman[265613]: 2025-10-02 11:59:11.083210517 +0000 UTC m=+0.139826020 container died c77fd6ea3987ef7a59c38fdfcb8fcfbc50dd99475baf2956e14f381e4f45792e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:59:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2e85477b009275b8c93675c174c615f41006693a10938faf4fb64902f2f6797-merged.mount: Deactivated successfully.
Oct 02 11:59:11 compute-0 podman[265613]: 2025-10-02 11:59:11.147474282 +0000 UTC m=+0.204089775 container remove c77fd6ea3987ef7a59c38fdfcb8fcfbc50dd99475baf2956e14f381e4f45792e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:59:11 compute-0 systemd[1]: libpod-conmon-c77fd6ea3987ef7a59c38fdfcb8fcfbc50dd99475baf2956e14f381e4f45792e.scope: Deactivated successfully.
Oct 02 11:59:11 compute-0 podman[265652]: 2025-10-02 11:59:11.313868967 +0000 UTC m=+0.043330810 container create 5913fff97a6c5b50b0df0067e40d5f9f03c15a0644f7f6ed3f8ec9dcf31a19d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kowalevski, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 11:59:11 compute-0 systemd[1]: Started libpod-conmon-5913fff97a6c5b50b0df0067e40d5f9f03c15a0644f7f6ed3f8ec9dcf31a19d3.scope.
Oct 02 11:59:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:59:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38227a2947f4fb2f5d42dab406a95dcdbd90786a0f056313daa4709037e32fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:59:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38227a2947f4fb2f5d42dab406a95dcdbd90786a0f056313daa4709037e32fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:59:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38227a2947f4fb2f5d42dab406a95dcdbd90786a0f056313daa4709037e32fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:59:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38227a2947f4fb2f5d42dab406a95dcdbd90786a0f056313daa4709037e32fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:59:11 compute-0 podman[265652]: 2025-10-02 11:59:11.29535846 +0000 UTC m=+0.024820333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:59:11 compute-0 podman[265652]: 2025-10-02 11:59:11.396930315 +0000 UTC m=+0.126392188 container init 5913fff97a6c5b50b0df0067e40d5f9f03c15a0644f7f6ed3f8ec9dcf31a19d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kowalevski, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 11:59:11 compute-0 podman[265652]: 2025-10-02 11:59:11.408591843 +0000 UTC m=+0.138053686 container start 5913fff97a6c5b50b0df0067e40d5f9f03c15a0644f7f6ed3f8ec9dcf31a19d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:59:11 compute-0 podman[265652]: 2025-10-02 11:59:11.411890304 +0000 UTC m=+0.141352167 container attach 5913fff97a6c5b50b0df0067e40d5f9f03c15a0644f7f6ed3f8ec9dcf31a19d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:59:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2314377611' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:59:12 compute-0 nova_compute[257802]: 2025-10-02 11:59:12.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]: {
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]:     "1": [
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]:         {
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]:             "devices": [
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]:                 "/dev/loop3"
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]:             ],
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]:             "lv_name": "ceph_lv0",
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]:             "lv_size": "7511998464",
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]:             "name": "ceph_lv0",
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]:             "tags": {
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]:                 "ceph.cluster_name": "ceph",
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]:                 "ceph.crush_device_class": "",
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]:                 "ceph.encrypted": "0",
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]:                 "ceph.osd_id": "1",
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]:                 "ceph.type": "block",
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]:                 "ceph.vdo": "0"
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]:             },
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]:             "type": "block",
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]:             "vg_name": "ceph_vg0"
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]:         }
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]:     ]
Oct 02 11:59:12 compute-0 focused_kowalevski[265669]: }
Oct 02 11:59:12 compute-0 systemd[1]: libpod-5913fff97a6c5b50b0df0067e40d5f9f03c15a0644f7f6ed3f8ec9dcf31a19d3.scope: Deactivated successfully.
Oct 02 11:59:12 compute-0 podman[265652]: 2025-10-02 11:59:12.210139573 +0000 UTC m=+0.939601416 container died 5913fff97a6c5b50b0df0067e40d5f9f03c15a0644f7f6ed3f8ec9dcf31a19d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Oct 02 11:59:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-c38227a2947f4fb2f5d42dab406a95dcdbd90786a0f056313daa4709037e32fb-merged.mount: Deactivated successfully.
Oct 02 11:59:12 compute-0 podman[265652]: 2025-10-02 11:59:12.261962991 +0000 UTC m=+0.991424834 container remove 5913fff97a6c5b50b0df0067e40d5f9f03c15a0644f7f6ed3f8ec9dcf31a19d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:59:12 compute-0 systemd[1]: libpod-conmon-5913fff97a6c5b50b0df0067e40d5f9f03c15a0644f7f6ed3f8ec9dcf31a19d3.scope: Deactivated successfully.
Oct 02 11:59:12 compute-0 sudo[265548]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:12 compute-0 sudo[265692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:12 compute-0 sudo[265692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:12 compute-0 sudo[265692]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:12 compute-0 sudo[265718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:59:12 compute-0 sudo[265718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:12 compute-0 sudo[265718]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:12 compute-0 sudo[265743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:12 compute-0 sudo[265743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:12 compute-0 sudo[265743]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:12 compute-0 sudo[265768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 11:59:12 compute-0 sudo[265768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:59:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:59:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:59:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:59:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:59:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:59:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:59:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:12.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:59:12 compute-0 podman[265833]: 2025-10-02 11:59:12.83196964 +0000 UTC m=+0.042948680 container create 1a84206ceb14cca86c01be16aebfd68a5862c01908541b3e76b754a5d84e7240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:59:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 276 MiB data, 350 MiB used, 21 GiB / 21 GiB avail; 6.5 MiB/s rd, 1.7 MiB/s wr, 275 op/s
Oct 02 11:59:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 11:59:12 compute-0 systemd[1]: Started libpod-conmon-1a84206ceb14cca86c01be16aebfd68a5862c01908541b3e76b754a5d84e7240.scope.
Oct 02 11:59:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:59:12 compute-0 podman[265833]: 2025-10-02 11:59:12.814019887 +0000 UTC m=+0.024998957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:59:12 compute-0 podman[265833]: 2025-10-02 11:59:12.916028254 +0000 UTC m=+0.127007314 container init 1a84206ceb14cca86c01be16aebfd68a5862c01908541b3e76b754a5d84e7240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_diffie, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:59:12 compute-0 podman[265833]: 2025-10-02 11:59:12.92235686 +0000 UTC m=+0.133335910 container start 1a84206ceb14cca86c01be16aebfd68a5862c01908541b3e76b754a5d84e7240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_diffie, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:59:12 compute-0 confident_diffie[265850]: 167 167
Oct 02 11:59:12 compute-0 systemd[1]: libpod-1a84206ceb14cca86c01be16aebfd68a5862c01908541b3e76b754a5d84e7240.scope: Deactivated successfully.
Oct 02 11:59:12 compute-0 podman[265833]: 2025-10-02 11:59:12.926735408 +0000 UTC m=+0.137714478 container attach 1a84206ceb14cca86c01be16aebfd68a5862c01908541b3e76b754a5d84e7240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:59:12 compute-0 podman[265833]: 2025-10-02 11:59:12.927346983 +0000 UTC m=+0.138326043 container died 1a84206ceb14cca86c01be16aebfd68a5862c01908541b3e76b754a5d84e7240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 11:59:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-0679ea246db7cc644e96b4514ed7556abcf91f702f0d38b67d758d352d8d9fa6-merged.mount: Deactivated successfully.
Oct 02 11:59:12 compute-0 podman[265833]: 2025-10-02 11:59:12.95926204 +0000 UTC m=+0.170241090 container remove 1a84206ceb14cca86c01be16aebfd68a5862c01908541b3e76b754a5d84e7240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 11:59:12 compute-0 ceph-mon[73607]: pgmap v955: 305 pgs: 305 active+clean; 276 MiB data, 350 MiB used, 21 GiB / 21 GiB avail; 6.5 MiB/s rd, 1.7 MiB/s wr, 275 op/s
Oct 02 11:59:12 compute-0 systemd[1]: libpod-conmon-1a84206ceb14cca86c01be16aebfd68a5862c01908541b3e76b754a5d84e7240.scope: Deactivated successfully.
Oct 02 11:59:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:13.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:13 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 11:59:13 compute-0 podman[265874]: 2025-10-02 11:59:13.132831791 +0000 UTC m=+0.056384321 container create 55a2e4c115f2379893e5527a0e552fac6d23c74793ab800b525207cbd94b30f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_leavitt, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:59:13 compute-0 systemd[1]: Started libpod-conmon-55a2e4c115f2379893e5527a0e552fac6d23c74793ab800b525207cbd94b30f0.scope.
Oct 02 11:59:13 compute-0 podman[265874]: 2025-10-02 11:59:13.102295578 +0000 UTC m=+0.025848118 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:59:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:59:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df10961afc70aa8885127af5e6ac41844fa482df45c8262a94bd697818e79e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:59:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df10961afc70aa8885127af5e6ac41844fa482df45c8262a94bd697818e79e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:59:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df10961afc70aa8885127af5e6ac41844fa482df45c8262a94bd697818e79e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:59:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df10961afc70aa8885127af5e6ac41844fa482df45c8262a94bd697818e79e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:59:13 compute-0 podman[265874]: 2025-10-02 11:59:13.233698699 +0000 UTC m=+0.157251219 container init 55a2e4c115f2379893e5527a0e552fac6d23c74793ab800b525207cbd94b30f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_leavitt, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 11:59:13 compute-0 podman[265874]: 2025-10-02 11:59:13.242969338 +0000 UTC m=+0.166521868 container start 55a2e4c115f2379893e5527a0e552fac6d23c74793ab800b525207cbd94b30f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 11:59:13 compute-0 podman[265874]: 2025-10-02 11:59:13.257811864 +0000 UTC m=+0.181364414 container attach 55a2e4c115f2379893e5527a0e552fac6d23c74793ab800b525207cbd94b30f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Oct 02 11:59:14 compute-0 funny_leavitt[265891]: {
Oct 02 11:59:14 compute-0 funny_leavitt[265891]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 11:59:14 compute-0 funny_leavitt[265891]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 11:59:14 compute-0 funny_leavitt[265891]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:59:14 compute-0 funny_leavitt[265891]:         "osd_id": 1,
Oct 02 11:59:14 compute-0 funny_leavitt[265891]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 11:59:14 compute-0 funny_leavitt[265891]:         "type": "bluestore"
Oct 02 11:59:14 compute-0 funny_leavitt[265891]:     }
Oct 02 11:59:14 compute-0 funny_leavitt[265891]: }
Oct 02 11:59:14 compute-0 systemd[1]: libpod-55a2e4c115f2379893e5527a0e552fac6d23c74793ab800b525207cbd94b30f0.scope: Deactivated successfully.
Oct 02 11:59:14 compute-0 conmon[265891]: conmon 55a2e4c115f2379893e5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-55a2e4c115f2379893e5527a0e552fac6d23c74793ab800b525207cbd94b30f0.scope/container/memory.events
Oct 02 11:59:14 compute-0 podman[265874]: 2025-10-02 11:59:14.065378522 +0000 UTC m=+0.988931052 container died 55a2e4c115f2379893e5527a0e552fac6d23c74793ab800b525207cbd94b30f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 11:59:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-9df10961afc70aa8885127af5e6ac41844fa482df45c8262a94bd697818e79e9-merged.mount: Deactivated successfully.
Oct 02 11:59:14 compute-0 podman[265874]: 2025-10-02 11:59:14.132863097 +0000 UTC m=+1.056415627 container remove 55a2e4c115f2379893e5527a0e552fac6d23c74793ab800b525207cbd94b30f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_leavitt, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 11:59:14 compute-0 systemd[1]: libpod-conmon-55a2e4c115f2379893e5527a0e552fac6d23c74793ab800b525207cbd94b30f0.scope: Deactivated successfully.
Oct 02 11:59:14 compute-0 sudo[265768]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:59:14 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:59:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:59:14 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:59:14 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 5c69c5ad-d223-4d65-85c1-91c911b89d83 does not exist
Oct 02 11:59:14 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev ae98f2e5-65d9-4dac-9628-754a98ea4124 does not exist
Oct 02 11:59:14 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 6724c785-0ca2-4062-a94e-341101d9f799 does not exist
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:14.205778) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406354205807, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1170, "num_deletes": 251, "total_data_size": 1749422, "memory_usage": 1773912, "flush_reason": "Manual Compaction"}
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406354219355, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1706365, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20587, "largest_seqno": 21756, "table_properties": {"data_size": 1700792, "index_size": 2904, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12442, "raw_average_key_size": 20, "raw_value_size": 1689373, "raw_average_value_size": 2738, "num_data_blocks": 129, "num_entries": 617, "num_filter_entries": 617, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406267, "oldest_key_time": 1759406267, "file_creation_time": 1759406354, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 13727 microseconds, and 4108 cpu microseconds.
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:14.219500) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1706365 bytes OK
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:14.219554) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:14.222310) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:14.222359) EVENT_LOG_v1 {"time_micros": 1759406354222349, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:14.222382) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1744125, prev total WAL file size 1744125, number of live WAL files 2.
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:14.223633) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1666KB)], [47(7840KB)]
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406354223705, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9735248, "oldest_snapshot_seqno": -1}
Oct 02 11:59:14 compute-0 sudo[265925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:14 compute-0 sudo[265925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:14 compute-0 sudo[265925]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4644 keys, 7689688 bytes, temperature: kUnknown
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406354276401, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7689688, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7659222, "index_size": 17748, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11653, "raw_key_size": 116202, "raw_average_key_size": 25, "raw_value_size": 7575629, "raw_average_value_size": 1631, "num_data_blocks": 730, "num_entries": 4644, "num_filter_entries": 4644, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759406354, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:14.276774) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7689688 bytes
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:14.278232) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.9 rd, 145.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.7 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(10.2) write-amplify(4.5) OK, records in: 5166, records dropped: 522 output_compression: NoCompression
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:14.278251) EVENT_LOG_v1 {"time_micros": 1759406354278242, "job": 24, "event": "compaction_finished", "compaction_time_micros": 52927, "compaction_time_cpu_micros": 20400, "output_level": 6, "num_output_files": 1, "total_output_size": 7689688, "num_input_records": 5166, "num_output_records": 4644, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406354278916, "job": 24, "event": "table_file_deletion", "file_number": 49}
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406354280489, "job": 24, "event": "table_file_deletion", "file_number": 47}
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:14.223561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:14.280605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:14.280610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:14.280615) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:14.280616) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:59:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:14.280618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:59:14 compute-0 sudo[265950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:59:14 compute-0 sudo[265950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:14 compute-0 sudo[265950]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:14.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 228 MiB data, 349 MiB used, 21 GiB / 21 GiB avail; 7.6 MiB/s rd, 3.1 MiB/s wr, 355 op/s
Oct 02 11:59:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:59:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:15.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:59:15 compute-0 nova_compute[257802]: 2025-10-02 11:59:15.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:15 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:59:15 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 11:59:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/760847532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:59:15 compute-0 ceph-mon[73607]: pgmap v956: 305 pgs: 305 active+clean; 228 MiB data, 349 MiB used, 21 GiB / 21 GiB avail; 7.6 MiB/s rd, 3.1 MiB/s wr, 355 op/s
Oct 02 11:59:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1457032129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:59:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:16.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 214 MiB data, 343 MiB used, 21 GiB / 21 GiB avail; 6.3 MiB/s rd, 2.1 MiB/s wr, 325 op/s
Oct 02 11:59:16 compute-0 podman[265978]: 2025-10-02 11:59:16.912531858 +0000 UTC m=+0.050579909 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 02 11:59:16 compute-0 nova_compute[257802]: 2025-10-02 11:59:16.986 2 DEBUG oslo_concurrency.lockutils [None req-d596cc7b-2f68-416e-8e4b-1d1778a1d755 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Acquiring lock "6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:59:16 compute-0 nova_compute[257802]: 2025-10-02 11:59:16.986 2 DEBUG oslo_concurrency.lockutils [None req-d596cc7b-2f68-416e-8e4b-1d1778a1d755 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:59:16 compute-0 nova_compute[257802]: 2025-10-02 11:59:16.987 2 DEBUG oslo_concurrency.lockutils [None req-d596cc7b-2f68-416e-8e4b-1d1778a1d755 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Acquiring lock "6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:59:16 compute-0 nova_compute[257802]: 2025-10-02 11:59:16.987 2 DEBUG oslo_concurrency.lockutils [None req-d596cc7b-2f68-416e-8e4b-1d1778a1d755 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:59:16 compute-0 nova_compute[257802]: 2025-10-02 11:59:16.987 2 DEBUG oslo_concurrency.lockutils [None req-d596cc7b-2f68-416e-8e4b-1d1778a1d755 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:59:16 compute-0 nova_compute[257802]: 2025-10-02 11:59:16.988 2 INFO nova.compute.manager [None req-d596cc7b-2f68-416e-8e4b-1d1778a1d755 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Terminating instance
Oct 02 11:59:16 compute-0 nova_compute[257802]: 2025-10-02 11:59:16.988 2 DEBUG nova.compute.manager [None req-d596cc7b-2f68-416e-8e4b-1d1778a1d755 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:17.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:17 compute-0 kernel: tap2df223c7-4b (unregistering): left promiscuous mode
Oct 02 11:59:17 compute-0 NetworkManager[44987]: <info>  [1759406357.0330] device (tap2df223c7-4b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 11:59:17 compute-0 ovn_controller[148183]: 2025-10-02T11:59:17Z|00032|binding|INFO|Releasing lport 2df223c7-4bd2-4c58-9fb6-44a0b529b795 from this chassis (sb_readonly=0)
Oct 02 11:59:17 compute-0 ovn_controller[148183]: 2025-10-02T11:59:17Z|00033|binding|INFO|Setting lport 2df223c7-4bd2-4c58-9fb6-44a0b529b795 down in Southbound
Oct 02 11:59:17 compute-0 ovn_controller[148183]: 2025-10-02T11:59:17Z|00034|binding|INFO|Removing iface tap2df223c7-4b ovn-installed in OVS
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:17.054 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:10:73 10.1.0.126 fdfe:381f:8400:1::144'], port_security=['fa:16:3e:84:10:73 10.1.0.126 fdfe:381f:8400:1::144'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.126/26 fdfe:381f:8400:1::144/64', 'neutron:device_id': '6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48e5c857-28d2-421a-9519-d32a13037daa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8972026d0f3a4bf4b6debd9555f9225c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c1955bc9-f08c-4e28-af03-54d4a3949aee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0c8e46c-a173-4ff5-bd2b-7026f16b2de8, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=2df223c7-4bd2-4c58-9fb6-44a0b529b795) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 11:59:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:17.055 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 2df223c7-4bd2-4c58-9fb6-44a0b529b795 in datapath 48e5c857-28d2-421a-9519-d32a13037daa unbound from our chassis
Oct 02 11:59:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:17.056 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 48e5c857-28d2-421a-9519-d32a13037daa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 11:59:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:17.057 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[66dd5812-992e-4a30-bd84-1b65d0a8ceeb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:17.058 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa namespace which is not needed anymore
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:17 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000004.scope: Deactivated successfully.
Oct 02 11:59:17 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000004.scope: Consumed 13.649s CPU time.
Oct 02 11:59:17 compute-0 systemd-machined[211836]: Machine qemu-2-instance-00000004 terminated.
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.219 2 INFO nova.virt.libvirt.driver [-] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Instance destroyed successfully.
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.219 2 DEBUG nova.objects.instance [None req-d596cc7b-2f68-416e-8e4b-1d1778a1d755 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lazy-loading 'resources' on Instance uuid 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 11:59:17 compute-0 neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa[265426]: [NOTICE]   (265437) : haproxy version is 2.8.14-c23fe91
Oct 02 11:59:17 compute-0 neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa[265426]: [NOTICE]   (265437) : path to executable is /usr/sbin/haproxy
Oct 02 11:59:17 compute-0 neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa[265426]: [WARNING]  (265437) : Exiting Master process...
Oct 02 11:59:17 compute-0 neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa[265426]: [WARNING]  (265437) : Exiting Master process...
Oct 02 11:59:17 compute-0 neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa[265426]: [ALERT]    (265437) : Current worker (265439) exited with code 143 (Terminated)
Oct 02 11:59:17 compute-0 neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa[265426]: [WARNING]  (265437) : All workers exited. Exiting... (0)
Oct 02 11:59:17 compute-0 systemd[1]: libpod-7f4efb0f45fb8d2d529cb0cab31f6097559591e431ed292b730c711118ba4f30.scope: Deactivated successfully.
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.235 2 DEBUG nova.virt.libvirt.vif [None req-d596cc7b-2f68-416e-8e4b-1d1778a1d755 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T11:58:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-524116669-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-524116669-3',id=4,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2025-10-02T11:59:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8972026d0f3a4bf4b6debd9555f9225c',ramdisk_id='',reservation_id='r-5dqt1vqy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AutoAllocateNetworkTest-379631237',owner_user_name='tempest-AutoAllocateNetworkTest-379631237-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T11:59:04Z,user_data=None,user_id='17903cd0333c407b96f0aede6dd3b16c',uuid=6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2df223c7-4bd2-4c58-9fb6-44a0b529b795", "address": "fa:16:3e:84:10:73", "network": {"id": "48e5c857-28d2-421a-9519-d32a13037daa", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::144", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8972026d0f3a4bf4b6debd9555f9225c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2df223c7-4b", "ovs_interfaceid": "2df223c7-4bd2-4c58-9fb6-44a0b529b795", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.236 2 DEBUG nova.network.os_vif_util [None req-d596cc7b-2f68-416e-8e4b-1d1778a1d755 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Converting VIF {"id": "2df223c7-4bd2-4c58-9fb6-44a0b529b795", "address": "fa:16:3e:84:10:73", "network": {"id": "48e5c857-28d2-421a-9519-d32a13037daa", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::144", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8972026d0f3a4bf4b6debd9555f9225c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2df223c7-4b", "ovs_interfaceid": "2df223c7-4bd2-4c58-9fb6-44a0b529b795", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 11:59:17 compute-0 podman[266021]: 2025-10-02 11:59:17.237811941 +0000 UTC m=+0.069023813 container died 7f4efb0f45fb8d2d529cb0cab31f6097559591e431ed292b730c711118ba4f30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.237 2 DEBUG nova.network.os_vif_util [None req-d596cc7b-2f68-416e-8e4b-1d1778a1d755 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:10:73,bridge_name='br-int',has_traffic_filtering=True,id=2df223c7-4bd2-4c58-9fb6-44a0b529b795,network=Network(48e5c857-28d2-421a-9519-d32a13037daa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2df223c7-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.238 2 DEBUG os_vif [None req-d596cc7b-2f68-416e-8e4b-1d1778a1d755 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:10:73,bridge_name='br-int',has_traffic_filtering=True,id=2df223c7-4bd2-4c58-9fb6-44a0b529b795,network=Network(48e5c857-28d2-421a-9519-d32a13037daa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2df223c7-4b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.241 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2df223c7-4b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.247 2 INFO os_vif [None req-d596cc7b-2f68-416e-8e4b-1d1778a1d755 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:10:73,bridge_name='br-int',has_traffic_filtering=True,id=2df223c7-4bd2-4c58-9fb6-44a0b529b795,network=Network(48e5c857-28d2-421a-9519-d32a13037daa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2df223c7-4b')
Oct 02 11:59:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7f4efb0f45fb8d2d529cb0cab31f6097559591e431ed292b730c711118ba4f30-userdata-shm.mount: Deactivated successfully.
Oct 02 11:59:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1ac3b406ab13d127ac0954d82b8ccdf99fc44c7387be94a2a986695513408d5-merged.mount: Deactivated successfully.
Oct 02 11:59:17 compute-0 podman[266021]: 2025-10-02 11:59:17.286220024 +0000 UTC m=+0.117431896 container cleanup 7f4efb0f45fb8d2d529cb0cab31f6097559591e431ed292b730c711118ba4f30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 11:59:17 compute-0 systemd[1]: libpod-conmon-7f4efb0f45fb8d2d529cb0cab31f6097559591e431ed292b730c711118ba4f30.scope: Deactivated successfully.
Oct 02 11:59:17 compute-0 ceph-mon[73607]: pgmap v957: 305 pgs: 305 active+clean; 214 MiB data, 343 MiB used, 21 GiB / 21 GiB avail; 6.3 MiB/s rd, 2.1 MiB/s wr, 325 op/s
Oct 02 11:59:17 compute-0 podman[266077]: 2025-10-02 11:59:17.369856397 +0000 UTC m=+0.055363876 container remove 7f4efb0f45fb8d2d529cb0cab31f6097559591e431ed292b730c711118ba4f30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 11:59:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:17.381 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[82039e4b-a2b1-462e-8b97-34b8331c9321]: (4, ('Thu Oct  2 11:59:17 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa (7f4efb0f45fb8d2d529cb0cab31f6097559591e431ed292b730c711118ba4f30)\n7f4efb0f45fb8d2d529cb0cab31f6097559591e431ed292b730c711118ba4f30\nThu Oct  2 11:59:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa (7f4efb0f45fb8d2d529cb0cab31f6097559591e431ed292b730c711118ba4f30)\n7f4efb0f45fb8d2d529cb0cab31f6097559591e431ed292b730c711118ba4f30\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:17.383 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7c4bf474-62ab-42a7-8fe2-1b461fbe51b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:17.384 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48e5c857-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 11:59:17 compute-0 kernel: tap48e5c857-20: left promiscuous mode
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:17.405 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e69055da-2a5e-448b-926c-37e7766a50b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.433 2 DEBUG nova.compute.manager [req-860f753d-78e3-47a2-8cf6-caddf0164fc9 req-302cdf4d-f2a2-4508-8a5f-c50bfae3adab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Received event network-vif-unplugged-2df223c7-4bd2-4c58-9fb6-44a0b529b795 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.434 2 DEBUG oslo_concurrency.lockutils [req-860f753d-78e3-47a2-8cf6-caddf0164fc9 req-302cdf4d-f2a2-4508-8a5f-c50bfae3adab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.434 2 DEBUG oslo_concurrency.lockutils [req-860f753d-78e3-47a2-8cf6-caddf0164fc9 req-302cdf4d-f2a2-4508-8a5f-c50bfae3adab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.434 2 DEBUG oslo_concurrency.lockutils [req-860f753d-78e3-47a2-8cf6-caddf0164fc9 req-302cdf4d-f2a2-4508-8a5f-c50bfae3adab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.435 2 DEBUG nova.compute.manager [req-860f753d-78e3-47a2-8cf6-caddf0164fc9 req-302cdf4d-f2a2-4508-8a5f-c50bfae3adab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] No waiting events found dispatching network-vif-unplugged-2df223c7-4bd2-4c58-9fb6-44a0b529b795 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.435 2 DEBUG nova.compute.manager [req-860f753d-78e3-47a2-8cf6-caddf0164fc9 req-302cdf4d-f2a2-4508-8a5f-c50bfae3adab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Received event network-vif-unplugged-2df223c7-4bd2-4c58-9fb6-44a0b529b795 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 11:59:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:17.442 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9ece4443-73cc-4248-a19a-10c6f5cfe0e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:17.444 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[808385e2-c595-4305-8513-2beeda003e1d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:17.463 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[40e5ff34-1b16-441f-9c5a-d9d811413bef]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441607, 'reachable_time': 42306, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266092, 'error': None, 'target': 'ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:17 compute-0 systemd[1]: run-netns-ovnmeta\x2d48e5c857\x2d28d2\x2d421a\x2d9519\x2dd32a13037daa.mount: Deactivated successfully.
Oct 02 11:59:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:17.478 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 11:59:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:17.479 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[72599178-27c6-42aa-83a8-ae6355ef21ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.851 2 INFO nova.virt.libvirt.driver [None req-d596cc7b-2f68-416e-8e4b-1d1778a1d755 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Deleting instance files /var/lib/nova/instances/6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab_del
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.852 2 INFO nova.virt.libvirt.driver [None req-d596cc7b-2f68-416e-8e4b-1d1778a1d755 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Deletion of /var/lib/nova/instances/6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab_del complete
Oct 02 11:59:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.990 2 DEBUG nova.virt.libvirt.host [None req-d596cc7b-2f68-416e-8e4b-1d1778a1d755 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.991 2 INFO nova.virt.libvirt.host [None req-d596cc7b-2f68-416e-8e4b-1d1778a1d755 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] UEFI support detected
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.993 2 INFO nova.compute.manager [None req-d596cc7b-2f68-416e-8e4b-1d1778a1d755 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Took 1.00 seconds to destroy the instance on the hypervisor.
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.993 2 DEBUG oslo.service.loopingcall [None req-d596cc7b-2f68-416e-8e4b-1d1778a1d755 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.993 2 DEBUG nova.compute.manager [-] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 11:59:17 compute-0 nova_compute[257802]: 2025-10-02 11:59:17.993 2 DEBUG nova.network.neutron [-] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 11:59:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:59:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:18.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:59:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 194 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.1 MiB/s wr, 315 op/s
Oct 02 11:59:18 compute-0 ceph-mon[73607]: pgmap v958: 305 pgs: 305 active+clean; 194 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.1 MiB/s wr, 315 op/s
Oct 02 11:59:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1449325745' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:59:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:19.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:19 compute-0 nova_compute[257802]: 2025-10-02 11:59:19.581 2 DEBUG nova.compute.manager [req-b58b5d0b-2f87-4021-8c96-220a4238246c req-69f09348-5b6c-42a9-906c-6afe79e7ff3b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Received event network-vif-plugged-2df223c7-4bd2-4c58-9fb6-44a0b529b795 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 11:59:19 compute-0 nova_compute[257802]: 2025-10-02 11:59:19.581 2 DEBUG oslo_concurrency.lockutils [req-b58b5d0b-2f87-4021-8c96-220a4238246c req-69f09348-5b6c-42a9-906c-6afe79e7ff3b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:59:19 compute-0 nova_compute[257802]: 2025-10-02 11:59:19.581 2 DEBUG oslo_concurrency.lockutils [req-b58b5d0b-2f87-4021-8c96-220a4238246c req-69f09348-5b6c-42a9-906c-6afe79e7ff3b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:59:19 compute-0 nova_compute[257802]: 2025-10-02 11:59:19.581 2 DEBUG oslo_concurrency.lockutils [req-b58b5d0b-2f87-4021-8c96-220a4238246c req-69f09348-5b6c-42a9-906c-6afe79e7ff3b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:59:19 compute-0 nova_compute[257802]: 2025-10-02 11:59:19.582 2 DEBUG nova.compute.manager [req-b58b5d0b-2f87-4021-8c96-220a4238246c req-69f09348-5b6c-42a9-906c-6afe79e7ff3b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] No waiting events found dispatching network-vif-plugged-2df223c7-4bd2-4c58-9fb6-44a0b529b795 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 11:59:19 compute-0 nova_compute[257802]: 2025-10-02 11:59:19.582 2 WARNING nova.compute.manager [req-b58b5d0b-2f87-4021-8c96-220a4238246c req-69f09348-5b6c-42a9-906c-6afe79e7ff3b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Received unexpected event network-vif-plugged-2df223c7-4bd2-4c58-9fb6-44a0b529b795 for instance with vm_state active and task_state deleting.
Oct 02 11:59:19 compute-0 nova_compute[257802]: 2025-10-02 11:59:19.954 2 DEBUG nova.network.neutron [-] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 11:59:19 compute-0 nova_compute[257802]: 2025-10-02 11:59:19.985 2 INFO nova.compute.manager [-] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Took 1.99 seconds to deallocate network for instance.
Oct 02 11:59:20 compute-0 nova_compute[257802]: 2025-10-02 11:59:20.067 2 DEBUG oslo_concurrency.lockutils [None req-d596cc7b-2f68-416e-8e4b-1d1778a1d755 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:59:20 compute-0 nova_compute[257802]: 2025-10-02 11:59:20.068 2 DEBUG oslo_concurrency.lockutils [None req-d596cc7b-2f68-416e-8e4b-1d1778a1d755 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:59:20 compute-0 nova_compute[257802]: 2025-10-02 11:59:20.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:20 compute-0 nova_compute[257802]: 2025-10-02 11:59:20.138 2 DEBUG oslo_concurrency.processutils [None req-d596cc7b-2f68-416e-8e4b-1d1778a1d755 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:59:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 11:59:20 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/846368999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:59:20 compute-0 nova_compute[257802]: 2025-10-02 11:59:20.599 2 DEBUG oslo_concurrency.processutils [None req-d596cc7b-2f68-416e-8e4b-1d1778a1d755 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:59:20 compute-0 nova_compute[257802]: 2025-10-02 11:59:20.606 2 DEBUG nova.compute.provider_tree [None req-d596cc7b-2f68-416e-8e4b-1d1778a1d755 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 11:59:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/846368999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:59:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:20.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 185 MiB data, 317 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.4 MiB/s wr, 264 op/s
Oct 02 11:59:20 compute-0 nova_compute[257802]: 2025-10-02 11:59:20.877 2 DEBUG nova.scheduler.client.report [None req-d596cc7b-2f68-416e-8e4b-1d1778a1d755 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 11:59:20 compute-0 nova_compute[257802]: 2025-10-02 11:59:20.956 2 DEBUG oslo_concurrency.lockutils [None req-d596cc7b-2f68-416e-8e4b-1d1778a1d755 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.888s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:59:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:21.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:21 compute-0 nova_compute[257802]: 2025-10-02 11:59:21.049 2 INFO nova.scheduler.client.report [None req-d596cc7b-2f68-416e-8e4b-1d1778a1d755 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Deleted allocations for instance 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab
Oct 02 11:59:21 compute-0 nova_compute[257802]: 2025-10-02 11:59:21.161 2 DEBUG oslo_concurrency.lockutils [None req-d596cc7b-2f68-416e-8e4b-1d1778a1d755 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.175s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:59:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2000129043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 11:59:21 compute-0 ceph-mon[73607]: pgmap v959: 305 pgs: 305 active+clean; 185 MiB data, 317 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.4 MiB/s wr, 264 op/s
Oct 02 11:59:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1319315663' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 11:59:21 compute-0 nova_compute[257802]: 2025-10-02 11:59:21.675 2 DEBUG nova.compute.manager [req-24bcc163-44d6-4a91-aa0c-8b02fc3d30c5 req-0211c07f-b445-463a-b73a-f7eca46ba52c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Received event network-vif-deleted-2df223c7-4bd2-4c58-9fb6-44a0b529b795 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 11:59:22 compute-0 nova_compute[257802]: 2025-10-02 11:59:22.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:22.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 185 MiB data, 317 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.4 MiB/s wr, 166 op/s
Oct 02 11:59:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 11:59:22 compute-0 ceph-mon[73607]: pgmap v960: 305 pgs: 305 active+clean; 185 MiB data, 317 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.4 MiB/s wr, 166 op/s
Oct 02 11:59:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:23.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:59:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:24.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:59:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 243 MiB data, 347 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.9 MiB/s wr, 228 op/s
Oct 02 11:59:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:25.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:25 compute-0 ceph-mon[73607]: pgmap v961: 305 pgs: 305 active+clean; 243 MiB data, 347 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.9 MiB/s wr, 228 op/s
Oct 02 11:59:25 compute-0 nova_compute[257802]: 2025-10-02 11:59:25.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:25 compute-0 podman[266122]: 2025-10-02 11:59:25.911545158 +0000 UTC m=+0.052789593 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid)
Oct 02 11:59:25 compute-0 podman[266121]: 2025-10-02 11:59:25.938725188 +0000 UTC m=+0.081832379 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 11:59:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:26.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 246 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 926 KiB/s rd, 4.5 MiB/s wr, 174 op/s
Oct 02 11:59:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:26.916 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:59:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:26.917 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:59:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:26.917 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:59:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:27.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:27 compute-0 ceph-mon[73607]: pgmap v962: 305 pgs: 305 active+clean; 246 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 926 KiB/s rd, 4.5 MiB/s wr, 174 op/s
Oct 02 11:59:27 compute-0 nova_compute[257802]: 2025-10-02 11:59:27.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:59:27 compute-0 nova_compute[257802]: 2025-10-02 11:59:27.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:59:27 compute-0 nova_compute[257802]: 2025-10-02 11:59:27.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 11:59:27 compute-0 nova_compute[257802]: 2025-10-02 11:59:27.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:27 compute-0 nova_compute[257802]: 2025-10-02 11:59:27.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:27.896037) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406367896120, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 381, "num_deletes": 256, "total_data_size": 247259, "memory_usage": 256072, "flush_reason": "Manual Compaction"}
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406367900355, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 245410, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21757, "largest_seqno": 22137, "table_properties": {"data_size": 243113, "index_size": 397, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5343, "raw_average_key_size": 17, "raw_value_size": 238547, "raw_average_value_size": 764, "num_data_blocks": 18, "num_entries": 312, "num_filter_entries": 312, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406355, "oldest_key_time": 1759406355, "file_creation_time": 1759406367, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 4360 microseconds, and 1485 cpu microseconds.
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:27.900408) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 245410 bytes OK
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:27.900425) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:27.904432) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:27.904492) EVENT_LOG_v1 {"time_micros": 1759406367904479, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:27.904522) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 244768, prev total WAL file size 244768, number of live WAL files 2.
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:27.905089) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353032' seq:0, type:0; will stop at (end)
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(239KB)], [50(7509KB)]
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406367905124, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 7935098, "oldest_snapshot_seqno": -1}
Oct 02 11:59:27 compute-0 nova_compute[257802]: 2025-10-02 11:59:27.915 2 DEBUG nova.compute.manager [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4436 keys, 7822739 bytes, temperature: kUnknown
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406367964368, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 7822739, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7792901, "index_size": 17632, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 113086, "raw_average_key_size": 25, "raw_value_size": 7712231, "raw_average_value_size": 1738, "num_data_blocks": 721, "num_entries": 4436, "num_filter_entries": 4436, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759406367, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:27.964797) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 7822739 bytes
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:27.967215) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.5 rd, 131.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 7.3 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(64.2) write-amplify(31.9) OK, records in: 4956, records dropped: 520 output_compression: NoCompression
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:27.967267) EVENT_LOG_v1 {"time_micros": 1759406367967243, "job": 26, "event": "compaction_finished", "compaction_time_micros": 59430, "compaction_time_cpu_micros": 18362, "output_level": 6, "num_output_files": 1, "total_output_size": 7822739, "num_input_records": 4956, "num_output_records": 4436, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406367968011, "job": 26, "event": "table_file_deletion", "file_number": 52}
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406367972646, "job": 26, "event": "table_file_deletion", "file_number": 50}
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:27.905005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:27.972864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:27.972874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:27.972877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:27.972879) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:59:27 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-11:59:27.972881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.047 2 DEBUG oslo_concurrency.lockutils [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.048 2 DEBUG oslo_concurrency.lockutils [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.079 2 DEBUG nova.objects.instance [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Lazy-loading 'pci_requests' on Instance uuid b4e4932c-8129-4ceb-95ef-3a612ef502f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.098 2 DEBUG nova.virt.hardware [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.099 2 INFO nova.compute.claims [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Claim successful on node compute-0.ctlplane.example.com
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.099 2 DEBUG nova.objects.instance [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Lazy-loading 'resources' on Instance uuid b4e4932c-8129-4ceb-95ef-3a612ef502f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.100 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.100 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.101 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.128 2 DEBUG nova.objects.instance [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Lazy-loading 'numa_topology' on Instance uuid b4e4932c-8129-4ceb-95ef-3a612ef502f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.154 2 DEBUG nova.objects.instance [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Lazy-loading 'pci_devices' on Instance uuid b4e4932c-8129-4ceb-95ef-3a612ef502f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.202 2 INFO nova.compute.resource_tracker [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Updating resource usage from migration 72a7a986-629c-41a7-83d5-4be5e86579ab
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.202 2 DEBUG nova.compute.resource_tracker [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Starting to track incoming migration 72a7a986-629c-41a7-83d5-4be5e86579ab with flavor cef129e5-cce4-4465-9674-03d3559e8a14 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.268 2 DEBUG oslo_concurrency.processutils [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.286 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-17ee1f97-9d49-445d-835d-583cd2aeffaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.286 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-17ee1f97-9d49-445d-835d-583cd2aeffaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.286 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.287 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid 17ee1f97-9d49-445d-835d-583cd2aeffaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.432 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 11:59:28 compute-0 sudo[266180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:28 compute-0 sudo[266180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:28 compute-0 sudo[266180]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:28 compute-0 sudo[266205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:28 compute-0 sudo[266205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:28 compute-0 sudo[266205]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 11:59:28 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/54290286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.698 2 DEBUG oslo_concurrency.processutils [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.703 2 DEBUG nova.compute.provider_tree [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.735 2 DEBUG nova.scheduler.client.report [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.768 2 DEBUG oslo_concurrency.lockutils [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.768 2 INFO nova.compute.manager [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Migrating
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.769 2 DEBUG oslo_concurrency.lockutils [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.769 2 DEBUG oslo_concurrency.lockutils [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.775 2 INFO nova.compute.rpcapi [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.776 2 DEBUG oslo_concurrency.lockutils [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.829 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 11:59:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:28.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 246 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 157 op/s
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.860 2 DEBUG oslo_concurrency.lockutils [None req-2a928f33-51a3-449b-a3ce-212da92a9be4 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Acquiring lock "17ee1f97-9d49-445d-835d-583cd2aeffaf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.861 2 DEBUG oslo_concurrency.lockutils [None req-2a928f33-51a3-449b-a3ce-212da92a9be4 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "17ee1f97-9d49-445d-835d-583cd2aeffaf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.861 2 DEBUG oslo_concurrency.lockutils [None req-2a928f33-51a3-449b-a3ce-212da92a9be4 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Acquiring lock "17ee1f97-9d49-445d-835d-583cd2aeffaf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.861 2 DEBUG oslo_concurrency.lockutils [None req-2a928f33-51a3-449b-a3ce-212da92a9be4 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "17ee1f97-9d49-445d-835d-583cd2aeffaf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.861 2 DEBUG oslo_concurrency.lockutils [None req-2a928f33-51a3-449b-a3ce-212da92a9be4 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "17ee1f97-9d49-445d-835d-583cd2aeffaf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.863 2 INFO nova.compute.manager [None req-2a928f33-51a3-449b-a3ce-212da92a9be4 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Terminating instance
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.863 2 DEBUG oslo_concurrency.lockutils [None req-2a928f33-51a3-449b-a3ce-212da92a9be4 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Acquiring lock "refresh_cache-17ee1f97-9d49-445d-835d-583cd2aeffaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.864 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-17ee1f97-9d49-445d-835d-583cd2aeffaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.865 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.865 2 DEBUG oslo_concurrency.lockutils [None req-2a928f33-51a3-449b-a3ce-212da92a9be4 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Acquired lock "refresh_cache-17ee1f97-9d49-445d-835d-583cd2aeffaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.865 2 DEBUG nova.network.neutron [None req-2a928f33-51a3-449b-a3ce-212da92a9be4 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.866 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.867 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.868 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.898 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.899 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.899 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.899 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 11:59:28 compute-0 nova_compute[257802]: 2025-10-02 11:59:28.900 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:59:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:59:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:29.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:59:29 compute-0 nova_compute[257802]: 2025-10-02 11:59:29.048 2 DEBUG nova.network.neutron [None req-2a928f33-51a3-449b-a3ce-212da92a9be4 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 11:59:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1443149474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:59:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/54290286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:59:29 compute-0 ceph-mon[73607]: pgmap v963: 305 pgs: 305 active+clean; 246 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 157 op/s
Oct 02 11:59:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 11:59:29 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1999341198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:59:29 compute-0 nova_compute[257802]: 2025-10-02 11:59:29.351 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:59:29 compute-0 nova_compute[257802]: 2025-10-02 11:59:29.356 2 DEBUG nova.network.neutron [None req-2a928f33-51a3-449b-a3ce-212da92a9be4 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 11:59:29 compute-0 nova_compute[257802]: 2025-10-02 11:59:29.376 2 DEBUG oslo_concurrency.lockutils [None req-2a928f33-51a3-449b-a3ce-212da92a9be4 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Releasing lock "refresh_cache-17ee1f97-9d49-445d-835d-583cd2aeffaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 11:59:29 compute-0 nova_compute[257802]: 2025-10-02 11:59:29.377 2 DEBUG nova.compute.manager [None req-2a928f33-51a3-449b-a3ce-212da92a9be4 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 11:59:29 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Oct 02 11:59:29 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 16.467s CPU time.
Oct 02 11:59:29 compute-0 systemd-machined[211836]: Machine qemu-1-instance-00000001 terminated.
Oct 02 11:59:29 compute-0 podman[266255]: 2025-10-02 11:59:29.543664434 +0000 UTC m=+0.116244838 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 11:59:29 compute-0 nova_compute[257802]: 2025-10-02 11:59:29.598 2 INFO nova.virt.libvirt.driver [-] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Instance destroyed successfully.
Oct 02 11:59:29 compute-0 nova_compute[257802]: 2025-10-02 11:59:29.599 2 DEBUG nova.objects.instance [None req-2a928f33-51a3-449b-a3ce-212da92a9be4 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lazy-loading 'resources' on Instance uuid 17ee1f97-9d49-445d-835d-583cd2aeffaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 11:59:29 compute-0 sshd-session[266281]: Accepted publickey for nova from 192.168.122.102 port 33348 ssh2: ECDSA SHA256:RlBMWn3An7DGjBe9yfwGQtrEA9dOakLcJHFiZKvkVOc
Oct 02 11:59:29 compute-0 systemd[1]: Created slice User Slice of UID 42436.
Oct 02 11:59:29 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct 02 11:59:29 compute-0 systemd-logind[789]: New session 54 of user nova.
Oct 02 11:59:29 compute-0 nova_compute[257802]: 2025-10-02 11:59:29.636 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 11:59:29 compute-0 nova_compute[257802]: 2025-10-02 11:59:29.636 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 11:59:29 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct 02 11:59:29 compute-0 systemd[1]: Starting User Manager for UID 42436...
Oct 02 11:59:29 compute-0 systemd[266306]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 11:59:29 compute-0 nova_compute[257802]: 2025-10-02 11:59:29.778 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 11:59:29 compute-0 nova_compute[257802]: 2025-10-02 11:59:29.780 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4758MB free_disk=20.876476287841797GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 11:59:29 compute-0 nova_compute[257802]: 2025-10-02 11:59:29.781 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:59:29 compute-0 nova_compute[257802]: 2025-10-02 11:59:29.781 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:59:29 compute-0 systemd[266306]: Queued start job for default target Main User Target.
Oct 02 11:59:29 compute-0 systemd[266306]: Created slice User Application Slice.
Oct 02 11:59:29 compute-0 systemd[266306]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 02 11:59:29 compute-0 systemd[266306]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 11:59:29 compute-0 systemd[266306]: Reached target Paths.
Oct 02 11:59:29 compute-0 systemd[266306]: Reached target Timers.
Oct 02 11:59:29 compute-0 systemd[266306]: Starting D-Bus User Message Bus Socket...
Oct 02 11:59:29 compute-0 systemd[266306]: Starting Create User's Volatile Files and Directories...
Oct 02 11:59:29 compute-0 systemd[266306]: Finished Create User's Volatile Files and Directories.
Oct 02 11:59:29 compute-0 systemd[266306]: Listening on D-Bus User Message Bus Socket.
Oct 02 11:59:29 compute-0 systemd[266306]: Reached target Sockets.
Oct 02 11:59:29 compute-0 systemd[266306]: Reached target Basic System.
Oct 02 11:59:29 compute-0 systemd[266306]: Reached target Main User Target.
Oct 02 11:59:29 compute-0 systemd[266306]: Startup finished in 132ms.
Oct 02 11:59:29 compute-0 systemd[1]: Started User Manager for UID 42436.
Oct 02 11:59:29 compute-0 nova_compute[257802]: 2025-10-02 11:59:29.822 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Migration for instance b4e4932c-8129-4ceb-95ef-3a612ef502f9 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Oct 02 11:59:29 compute-0 systemd[1]: Started Session 54 of User nova.
Oct 02 11:59:29 compute-0 sshd-session[266281]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 11:59:29 compute-0 nova_compute[257802]: 2025-10-02 11:59:29.858 2 INFO nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Updating resource usage from migration 72a7a986-629c-41a7-83d5-4be5e86579ab
Oct 02 11:59:29 compute-0 nova_compute[257802]: 2025-10-02 11:59:29.858 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Starting to track incoming migration 72a7a986-629c-41a7-83d5-4be5e86579ab with flavor cef129e5-cce4-4465-9674-03d3559e8a14 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Oct 02 11:59:29 compute-0 nova_compute[257802]: 2025-10-02 11:59:29.878 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 17ee1f97-9d49-445d-835d-583cd2aeffaf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 11:59:29 compute-0 sshd-session[266324]: Received disconnect from 192.168.122.102 port 33348:11: disconnected by user
Oct 02 11:59:29 compute-0 sshd-session[266324]: Disconnected from user nova 192.168.122.102 port 33348
Oct 02 11:59:29 compute-0 sshd-session[266281]: pam_unix(sshd:session): session closed for user nova
Oct 02 11:59:29 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Oct 02 11:59:29 compute-0 systemd-logind[789]: Session 54 logged out. Waiting for processes to exit.
Oct 02 11:59:29 compute-0 nova_compute[257802]: 2025-10-02 11:59:29.897 2 WARNING nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance b4e4932c-8129-4ceb-95ef-3a612ef502f9 has been moved to another host compute-2.ctlplane.example.com(compute-2.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}.
Oct 02 11:59:29 compute-0 systemd-logind[789]: Removed session 54.
Oct 02 11:59:29 compute-0 nova_compute[257802]: 2025-10-02 11:59:29.898 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 11:59:29 compute-0 nova_compute[257802]: 2025-10-02 11:59:29.898 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 11:59:29 compute-0 nova_compute[257802]: 2025-10-02 11:59:29.954 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:59:30 compute-0 sshd-session[266326]: Accepted publickey for nova from 192.168.122.102 port 33352 ssh2: ECDSA SHA256:RlBMWn3An7DGjBe9yfwGQtrEA9dOakLcJHFiZKvkVOc
Oct 02 11:59:30 compute-0 systemd-logind[789]: New session 56 of user nova.
Oct 02 11:59:30 compute-0 systemd[1]: Started Session 56 of User nova.
Oct 02 11:59:30 compute-0 sshd-session[266326]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 11:59:30 compute-0 sshd-session[266330]: Received disconnect from 192.168.122.102 port 33352:11: disconnected by user
Oct 02 11:59:30 compute-0 sshd-session[266330]: Disconnected from user nova 192.168.122.102 port 33352
Oct 02 11:59:30 compute-0 sshd-session[266326]: pam_unix(sshd:session): session closed for user nova
Oct 02 11:59:30 compute-0 systemd-logind[789]: Session 56 logged out. Waiting for processes to exit.
Oct 02 11:59:30 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Oct 02 11:59:30 compute-0 systemd-logind[789]: Removed session 56.
Oct 02 11:59:30 compute-0 nova_compute[257802]: 2025-10-02 11:59:30.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3892960183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:59:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1999341198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:59:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2460624604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:59:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2096794138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:59:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 11:59:30 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1606751976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:59:30 compute-0 nova_compute[257802]: 2025-10-02 11:59:30.371 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:59:30 compute-0 nova_compute[257802]: 2025-10-02 11:59:30.376 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 11:59:30 compute-0 nova_compute[257802]: 2025-10-02 11:59:30.400 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 11:59:30 compute-0 nova_compute[257802]: 2025-10-02 11:59:30.423 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 11:59:30 compute-0 nova_compute[257802]: 2025-10-02 11:59:30.423 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:59:30 compute-0 nova_compute[257802]: 2025-10-02 11:59:30.653 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:59:30 compute-0 nova_compute[257802]: 2025-10-02 11:59:30.672 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:59:30 compute-0 nova_compute[257802]: 2025-10-02 11:59:30.673 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:59:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:59:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:30.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:59:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 246 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 178 op/s
Oct 02 11:59:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:31.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:31 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1606751976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:59:31 compute-0 ceph-mon[73607]: pgmap v964: 305 pgs: 305 active+clean; 246 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 178 op/s
Oct 02 11:59:32 compute-0 nova_compute[257802]: 2025-10-02 11:59:32.013 2 INFO nova.virt.libvirt.driver [None req-2a928f33-51a3-449b-a3ce-212da92a9be4 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Deleting instance files /var/lib/nova/instances/17ee1f97-9d49-445d-835d-583cd2aeffaf_del
Oct 02 11:59:32 compute-0 nova_compute[257802]: 2025-10-02 11:59:32.014 2 INFO nova.virt.libvirt.driver [None req-2a928f33-51a3-449b-a3ce-212da92a9be4 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Deletion of /var/lib/nova/instances/17ee1f97-9d49-445d-835d-583cd2aeffaf_del complete
Oct 02 11:59:32 compute-0 nova_compute[257802]: 2025-10-02 11:59:32.078 2 INFO nova.compute.manager [None req-2a928f33-51a3-449b-a3ce-212da92a9be4 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Took 2.70 seconds to destroy the instance on the hypervisor.
Oct 02 11:59:32 compute-0 nova_compute[257802]: 2025-10-02 11:59:32.078 2 DEBUG oslo.service.loopingcall [None req-2a928f33-51a3-449b-a3ce-212da92a9be4 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 11:59:32 compute-0 nova_compute[257802]: 2025-10-02 11:59:32.079 2 DEBUG nova.compute.manager [-] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 11:59:32 compute-0 nova_compute[257802]: 2025-10-02 11:59:32.079 2 DEBUG nova.network.neutron [-] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 11:59:32 compute-0 nova_compute[257802]: 2025-10-02 11:59:32.112 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 11:59:32 compute-0 nova_compute[257802]: 2025-10-02 11:59:32.217 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406357.2171824, 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 11:59:32 compute-0 nova_compute[257802]: 2025-10-02 11:59:32.218 2 INFO nova.compute.manager [-] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] VM Stopped (Lifecycle Event)
Oct 02 11:59:32 compute-0 nova_compute[257802]: 2025-10-02 11:59:32.239 2 DEBUG nova.compute.manager [None req-ddb7fd4d-344e-409f-8d07-d98ecb21d744 - - - - - -] [instance: 6cf826c6-5587-4dcf-8697-c9ef9eb5c9ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 11:59:32 compute-0 nova_compute[257802]: 2025-10-02 11:59:32.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:32 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3948024583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:59:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:32.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 246 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 135 op/s
Oct 02 11:59:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 11:59:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:33.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:33 compute-0 nova_compute[257802]: 2025-10-02 11:59:33.228 2 DEBUG nova.network.neutron [-] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 11:59:33 compute-0 nova_compute[257802]: 2025-10-02 11:59:33.245 2 DEBUG nova.network.neutron [-] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 11:59:33 compute-0 nova_compute[257802]: 2025-10-02 11:59:33.262 2 INFO nova.compute.manager [-] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Took 1.18 seconds to deallocate network for instance.
Oct 02 11:59:33 compute-0 nova_compute[257802]: 2025-10-02 11:59:33.341 2 DEBUG oslo_concurrency.lockutils [None req-2a928f33-51a3-449b-a3ce-212da92a9be4 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:59:33 compute-0 nova_compute[257802]: 2025-10-02 11:59:33.342 2 DEBUG oslo_concurrency.lockutils [None req-2a928f33-51a3-449b-a3ce-212da92a9be4 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:59:33 compute-0 nova_compute[257802]: 2025-10-02 11:59:33.405 2 DEBUG oslo_concurrency.processutils [None req-2a928f33-51a3-449b-a3ce-212da92a9be4 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:59:33 compute-0 ceph-mon[73607]: pgmap v965: 305 pgs: 305 active+clean; 246 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 135 op/s
Oct 02 11:59:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 11:59:33 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2149887202' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:59:33 compute-0 nova_compute[257802]: 2025-10-02 11:59:33.911 2 DEBUG oslo_concurrency.processutils [None req-2a928f33-51a3-449b-a3ce-212da92a9be4 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:59:33 compute-0 nova_compute[257802]: 2025-10-02 11:59:33.918 2 DEBUG nova.compute.provider_tree [None req-2a928f33-51a3-449b-a3ce-212da92a9be4 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 11:59:33 compute-0 nova_compute[257802]: 2025-10-02 11:59:33.953 2 DEBUG nova.scheduler.client.report [None req-2a928f33-51a3-449b-a3ce-212da92a9be4 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 11:59:33 compute-0 nova_compute[257802]: 2025-10-02 11:59:33.992 2 DEBUG oslo_concurrency.lockutils [None req-2a928f33-51a3-449b-a3ce-212da92a9be4 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:59:34 compute-0 nova_compute[257802]: 2025-10-02 11:59:34.023 2 INFO nova.scheduler.client.report [None req-2a928f33-51a3-449b-a3ce-212da92a9be4 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Deleted allocations for instance 17ee1f97-9d49-445d-835d-583cd2aeffaf
Oct 02 11:59:34 compute-0 nova_compute[257802]: 2025-10-02 11:59:34.106 2 DEBUG oslo_concurrency.lockutils [None req-2a928f33-51a3-449b-a3ce-212da92a9be4 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "17ee1f97-9d49-445d-835d-583cd2aeffaf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:59:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2149887202' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:59:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:59:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:34.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:59:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 206 MiB data, 335 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.0 MiB/s wr, 169 op/s
Oct 02 11:59:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:35.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:35 compute-0 nova_compute[257802]: 2025-10-02 11:59:35.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:35 compute-0 ceph-mon[73607]: pgmap v966: 305 pgs: 305 active+clean; 206 MiB data, 335 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.0 MiB/s wr, 169 op/s
Oct 02 11:59:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:36.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 200 MiB data, 334 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 116 op/s
Oct 02 11:59:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:37.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/563414360' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 11:59:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3325323058' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 11:59:37 compute-0 rsyslogd[1007]: imjournal: 3066 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 02 11:59:37 compute-0 ceph-mon[73607]: pgmap v967: 305 pgs: 305 active+clean; 200 MiB data, 334 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 116 op/s
Oct 02 11:59:37 compute-0 nova_compute[257802]: 2025-10-02 11:59:37.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 11:59:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:38.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 218 MiB data, 346 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.4 MiB/s wr, 125 op/s
Oct 02 11:59:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:39.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:39 compute-0 ceph-mon[73607]: pgmap v968: 305 pgs: 305 active+clean; 218 MiB data, 346 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.4 MiB/s wr, 125 op/s
Oct 02 11:59:40 compute-0 nova_compute[257802]: 2025-10-02 11:59:40.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:40 compute-0 systemd[1]: Stopping User Manager for UID 42436...
Oct 02 11:59:40 compute-0 systemd[266306]: Activating special unit Exit the Session...
Oct 02 11:59:40 compute-0 systemd[266306]: Stopped target Main User Target.
Oct 02 11:59:40 compute-0 systemd[266306]: Stopped target Basic System.
Oct 02 11:59:40 compute-0 systemd[266306]: Stopped target Paths.
Oct 02 11:59:40 compute-0 systemd[266306]: Stopped target Sockets.
Oct 02 11:59:40 compute-0 systemd[266306]: Stopped target Timers.
Oct 02 11:59:40 compute-0 systemd[266306]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 02 11:59:40 compute-0 systemd[266306]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 11:59:40 compute-0 systemd[266306]: Closed D-Bus User Message Bus Socket.
Oct 02 11:59:40 compute-0 systemd[266306]: Stopped Create User's Volatile Files and Directories.
Oct 02 11:59:40 compute-0 systemd[266306]: Removed slice User Application Slice.
Oct 02 11:59:40 compute-0 systemd[266306]: Reached target Shutdown.
Oct 02 11:59:40 compute-0 systemd[266306]: Finished Exit the Session.
Oct 02 11:59:40 compute-0 systemd[266306]: Reached target Exit the Session.
Oct 02 11:59:40 compute-0 systemd[1]: user@42436.service: Deactivated successfully.
Oct 02 11:59:40 compute-0 systemd[1]: Stopped User Manager for UID 42436.
Oct 02 11:59:40 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct 02 11:59:40 compute-0 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct 02 11:59:40 compute-0 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct 02 11:59:40 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct 02 11:59:40 compute-0 systemd[1]: Removed slice User Slice of UID 42436.
Oct 02 11:59:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:40.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 243 MiB data, 365 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 145 op/s
Oct 02 11:59:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:41.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:41 compute-0 ceph-mon[73607]: pgmap v969: 305 pgs: 305 active+clean; 243 MiB data, 365 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 145 op/s
Oct 02 11:59:42 compute-0 nova_compute[257802]: 2025-10-02 11:59:42.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_11:59:42
Oct 02 11:59:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:59:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 11:59:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'volumes', '.rgw.root', 'default.rgw.control', 'images', '.mgr']
Oct 02 11:59:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:59:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:59:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:59:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:59:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:59:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:59:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:59:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:59:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:59:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:59:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:59:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:59:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:59:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:59:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:59:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:59:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:59:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 243 MiB data, 365 MiB used, 21 GiB / 21 GiB avail; 354 KiB/s rd, 3.9 MiB/s wr, 114 op/s
Oct 02 11:59:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:42.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 11:59:42 compute-0 ceph-mon[73607]: pgmap v970: 305 pgs: 305 active+clean; 243 MiB data, 365 MiB used, 21 GiB / 21 GiB avail; 354 KiB/s rd, 3.9 MiB/s wr, 114 op/s
Oct 02 11:59:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:43.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:44 compute-0 nova_compute[257802]: 2025-10-02 11:59:44.598 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406369.5964894, 17ee1f97-9d49-445d-835d-583cd2aeffaf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 11:59:44 compute-0 nova_compute[257802]: 2025-10-02 11:59:44.598 2 INFO nova.compute.manager [-] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] VM Stopped (Lifecycle Event)
Oct 02 11:59:44 compute-0 nova_compute[257802]: 2025-10-02 11:59:44.623 2 DEBUG nova.compute.manager [None req-40b1621a-8d7f-4a58-9a62-3ad417bdf86a - - - - - -] [instance: 17ee1f97-9d49-445d-835d-583cd2aeffaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 11:59:44 compute-0 nova_compute[257802]: 2025-10-02 11:59:44.697 2 DEBUG oslo_concurrency.lockutils [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Acquiring lock "refresh_cache-b4e4932c-8129-4ceb-95ef-3a612ef502f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 11:59:44 compute-0 nova_compute[257802]: 2025-10-02 11:59:44.697 2 DEBUG oslo_concurrency.lockutils [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Acquired lock "refresh_cache-b4e4932c-8129-4ceb-95ef-3a612ef502f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 11:59:44 compute-0 nova_compute[257802]: 2025-10-02 11:59:44.697 2 DEBUG nova.network.neutron [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 11:59:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 246 MiB data, 365 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 161 op/s
Oct 02 11:59:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:59:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:44.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:59:44 compute-0 nova_compute[257802]: 2025-10-02 11:59:44.863 2 DEBUG nova.network.neutron [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 11:59:44 compute-0 ceph-mon[73607]: pgmap v971: 305 pgs: 305 active+clean; 246 MiB data, 365 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 161 op/s
Oct 02 11:59:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:45.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:45 compute-0 nova_compute[257802]: 2025-10-02 11:59:45.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:45 compute-0 nova_compute[257802]: 2025-10-02 11:59:45.324 2 DEBUG nova.network.neutron [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 11:59:45 compute-0 nova_compute[257802]: 2025-10-02 11:59:45.348 2 DEBUG oslo_concurrency.lockutils [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Releasing lock "refresh_cache-b4e4932c-8129-4ceb-95ef-3a612ef502f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 11:59:45 compute-0 nova_compute[257802]: 2025-10-02 11:59:45.440 2 DEBUG nova.virt.libvirt.driver [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Oct 02 11:59:45 compute-0 nova_compute[257802]: 2025-10-02 11:59:45.442 2 DEBUG nova.virt.libvirt.driver [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 11:59:45 compute-0 nova_compute[257802]: 2025-10-02 11:59:45.442 2 INFO nova.virt.libvirt.driver [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Creating image(s)
Oct 02 11:59:45 compute-0 nova_compute[257802]: 2025-10-02 11:59:45.476 2 DEBUG nova.storage.rbd_utils [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] creating snapshot(nova-resize) on rbd image(b4e4932c-8129-4ceb-95ef-3a612ef502f9_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 11:59:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Oct 02 11:59:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1394270312' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 11:59:46 compute-0 nova_compute[257802]: 2025-10-02 11:59:46.175 2 DEBUG nova.virt.libvirt.driver [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Creating tmpfile /var/lib/nova/instances/tmpvwueeft7 to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Oct 02 11:59:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Oct 02 11:59:46 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Oct 02 11:59:46 compute-0 nova_compute[257802]: 2025-10-02 11:59:46.301 2 DEBUG nova.compute.manager [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvwueeft7',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Oct 02 11:59:46 compute-0 nova_compute[257802]: 2025-10-02 11:59:46.533 2 DEBUG nova.objects.instance [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Lazy-loading 'trusted_certs' on Instance uuid b4e4932c-8129-4ceb-95ef-3a612ef502f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 11:59:46 compute-0 nova_compute[257802]: 2025-10-02 11:59:46.621 2 DEBUG nova.virt.libvirt.driver [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 11:59:46 compute-0 nova_compute[257802]: 2025-10-02 11:59:46.621 2 DEBUG nova.virt.libvirt.driver [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Ensure instance console log exists: /var/lib/nova/instances/b4e4932c-8129-4ceb-95ef-3a612ef502f9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 11:59:46 compute-0 nova_compute[257802]: 2025-10-02 11:59:46.621 2 DEBUG oslo_concurrency.lockutils [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:59:46 compute-0 nova_compute[257802]: 2025-10-02 11:59:46.622 2 DEBUG oslo_concurrency.lockutils [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:59:46 compute-0 nova_compute[257802]: 2025-10-02 11:59:46.622 2 DEBUG oslo_concurrency.lockutils [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:59:46 compute-0 nova_compute[257802]: 2025-10-02 11:59:46.623 2 DEBUG nova.virt.libvirt.driver [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 11:59:46 compute-0 nova_compute[257802]: 2025-10-02 11:59:46.626 2 WARNING nova.virt.libvirt.driver [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 11:59:46 compute-0 nova_compute[257802]: 2025-10-02 11:59:46.631 2 DEBUG nova.virt.libvirt.host [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 11:59:46 compute-0 nova_compute[257802]: 2025-10-02 11:59:46.631 2 DEBUG nova.virt.libvirt.host [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 11:59:46 compute-0 nova_compute[257802]: 2025-10-02 11:59:46.634 2 DEBUG nova.virt.libvirt.host [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 11:59:46 compute-0 nova_compute[257802]: 2025-10-02 11:59:46.634 2 DEBUG nova.virt.libvirt.host [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 11:59:46 compute-0 nova_compute[257802]: 2025-10-02 11:59:46.635 2 DEBUG nova.virt.libvirt.driver [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 11:59:46 compute-0 nova_compute[257802]: 2025-10-02 11:59:46.636 2 DEBUG nova.virt.hardware [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 11:59:46 compute-0 nova_compute[257802]: 2025-10-02 11:59:46.636 2 DEBUG nova.virt.hardware [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 11:59:46 compute-0 nova_compute[257802]: 2025-10-02 11:59:46.636 2 DEBUG nova.virt.hardware [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 11:59:46 compute-0 nova_compute[257802]: 2025-10-02 11:59:46.637 2 DEBUG nova.virt.hardware [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 11:59:46 compute-0 nova_compute[257802]: 2025-10-02 11:59:46.637 2 DEBUG nova.virt.hardware [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 11:59:46 compute-0 nova_compute[257802]: 2025-10-02 11:59:46.637 2 DEBUG nova.virt.hardware [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 11:59:46 compute-0 nova_compute[257802]: 2025-10-02 11:59:46.637 2 DEBUG nova.virt.hardware [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 11:59:46 compute-0 nova_compute[257802]: 2025-10-02 11:59:46.637 2 DEBUG nova.virt.hardware [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 11:59:46 compute-0 nova_compute[257802]: 2025-10-02 11:59:46.637 2 DEBUG nova.virt.hardware [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 11:59:46 compute-0 nova_compute[257802]: 2025-10-02 11:59:46.638 2 DEBUG nova.virt.hardware [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 11:59:46 compute-0 nova_compute[257802]: 2025-10-02 11:59:46.638 2 DEBUG nova.virt.hardware [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 11:59:46 compute-0 nova_compute[257802]: 2025-10-02 11:59:46.638 2 DEBUG nova.objects.instance [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b4e4932c-8129-4ceb-95ef-3a612ef502f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 11:59:46 compute-0 nova_compute[257802]: 2025-10-02 11:59:46.659 2 DEBUG oslo_concurrency.processutils [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:59:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 246 MiB data, 365 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.1 MiB/s wr, 182 op/s
Oct 02 11:59:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:46.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:47.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 11:59:47 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3591073312' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 11:59:47 compute-0 nova_compute[257802]: 2025-10-02 11:59:47.106 2 DEBUG oslo_concurrency.processutils [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:59:47 compute-0 nova_compute[257802]: 2025-10-02 11:59:47.147 2 DEBUG oslo_concurrency.processutils [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:59:47 compute-0 nova_compute[257802]: 2025-10-02 11:59:47.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:47 compute-0 ceph-mon[73607]: osdmap e138: 3 total, 3 up, 3 in
Oct 02 11:59:47 compute-0 ceph-mon[73607]: pgmap v973: 305 pgs: 305 active+clean; 246 MiB data, 365 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.1 MiB/s wr, 182 op/s
Oct 02 11:59:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3591073312' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 11:59:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 11:59:47 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2090273833' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 11:59:47 compute-0 nova_compute[257802]: 2025-10-02 11:59:47.592 2 DEBUG nova.compute.manager [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvwueeft7',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='530e7ace-2feb-4a9e-9430-5a1bfc678d22',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Oct 02 11:59:47 compute-0 nova_compute[257802]: 2025-10-02 11:59:47.610 2 DEBUG oslo_concurrency.processutils [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:59:47 compute-0 nova_compute[257802]: 2025-10-02 11:59:47.613 2 DEBUG nova.virt.libvirt.driver [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] End _get_guest_xml xml=<domain type="kvm">
Oct 02 11:59:47 compute-0 nova_compute[257802]:   <uuid>b4e4932c-8129-4ceb-95ef-3a612ef502f9</uuid>
Oct 02 11:59:47 compute-0 nova_compute[257802]:   <name>instance-00000006</name>
Oct 02 11:59:47 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 11:59:47 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 11:59:47 compute-0 nova_compute[257802]:   <metadata>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 11:59:47 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:       <nova:name>tempest-MigrationsAdminTest-server-874031768</nova:name>
Oct 02 11:59:47 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 11:59:46</nova:creationTime>
Oct 02 11:59:47 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 11:59:47 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 11:59:47 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 11:59:47 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 11:59:47 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 11:59:47 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 11:59:47 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 11:59:47 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 11:59:47 compute-0 nova_compute[257802]:         <nova:user uuid="1a06819bf8cc4ff7bccbbb2616ff2d21">tempest-MigrationsAdminTest-819597356-project-member</nova:user>
Oct 02 11:59:47 compute-0 nova_compute[257802]:         <nova:project uuid="f1ce36070fb047479c3a083f36733f63">tempest-MigrationsAdminTest-819597356</nova:project>
Oct 02 11:59:47 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 11:59:47 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:       <nova:ports/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 11:59:47 compute-0 nova_compute[257802]:   </metadata>
Oct 02 11:59:47 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <system>
Oct 02 11:59:47 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 11:59:47 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 11:59:47 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 11:59:47 compute-0 nova_compute[257802]:       <entry name="serial">b4e4932c-8129-4ceb-95ef-3a612ef502f9</entry>
Oct 02 11:59:47 compute-0 nova_compute[257802]:       <entry name="uuid">b4e4932c-8129-4ceb-95ef-3a612ef502f9</entry>
Oct 02 11:59:47 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     </system>
Oct 02 11:59:47 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 11:59:47 compute-0 nova_compute[257802]:   <os>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:   </os>
Oct 02 11:59:47 compute-0 nova_compute[257802]:   <features>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <apic/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:   </features>
Oct 02 11:59:47 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:   </clock>
Oct 02 11:59:47 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:   </cpu>
Oct 02 11:59:47 compute-0 nova_compute[257802]:   <devices>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 11:59:47 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/b4e4932c-8129-4ceb-95ef-3a612ef502f9_disk">
Oct 02 11:59:47 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:       </source>
Oct 02 11:59:47 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 11:59:47 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:       </auth>
Oct 02 11:59:47 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     </disk>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 11:59:47 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/b4e4932c-8129-4ceb-95ef-3a612ef502f9_disk.config">
Oct 02 11:59:47 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:       </source>
Oct 02 11:59:47 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 11:59:47 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:       </auth>
Oct 02 11:59:47 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     </disk>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 11:59:47 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/b4e4932c-8129-4ceb-95ef-3a612ef502f9/console.log" append="off"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     </serial>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <video>
Oct 02 11:59:47 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     </video>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 11:59:47 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     </rng>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 11:59:47 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 11:59:47 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 11:59:47 compute-0 nova_compute[257802]:   </devices>
Oct 02 11:59:47 compute-0 nova_compute[257802]: </domain>
Oct 02 11:59:47 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 11:59:47 compute-0 nova_compute[257802]: 2025-10-02 11:59:47.625 2 DEBUG oslo_concurrency.lockutils [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Acquiring lock "refresh_cache-530e7ace-2feb-4a9e-9430-5a1bfc678d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 11:59:47 compute-0 nova_compute[257802]: 2025-10-02 11:59:47.625 2 DEBUG oslo_concurrency.lockutils [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Acquired lock "refresh_cache-530e7ace-2feb-4a9e-9430-5a1bfc678d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 11:59:47 compute-0 nova_compute[257802]: 2025-10-02 11:59:47.625 2 DEBUG nova.network.neutron [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 11:59:47 compute-0 nova_compute[257802]: 2025-10-02 11:59:47.683 2 DEBUG nova.virt.libvirt.driver [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 11:59:47 compute-0 nova_compute[257802]: 2025-10-02 11:59:47.684 2 DEBUG nova.virt.libvirt.driver [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 11:59:47 compute-0 nova_compute[257802]: 2025-10-02 11:59:47.684 2 INFO nova.virt.libvirt.driver [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Using config drive
Oct 02 11:59:47 compute-0 systemd-machined[211836]: New machine qemu-3-instance-00000006.
Oct 02 11:59:47 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000006.
Oct 02 11:59:47 compute-0 podman[266546]: 2025-10-02 11:59:47.871505588 +0000 UTC m=+0.061531199 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 02 11:59:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 11:59:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2090273833' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 11:59:48 compute-0 sudo[266617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:48 compute-0 sudo[266617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:48 compute-0 sudo[266617]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:48 compute-0 sudo[266642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:48 compute-0 sudo[266642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:48 compute-0 sudo[266642]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.828 2 DEBUG nova.network.neutron [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Updating instance_info_cache with network_info: [{"id": "b0281338-9ff1-492d-b3aa-aeee41f08075", "address": "fa:16:3e:0f:16:f5", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0281338-9f", "ovs_interfaceid": "b0281338-9ff1-492d-b3aa-aeee41f08075", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.852 2 DEBUG oslo_concurrency.lockutils [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Releasing lock "refresh_cache-530e7ace-2feb-4a9e-9430-5a1bfc678d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.854 2 DEBUG nova.virt.libvirt.driver [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvwueeft7',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='530e7ace-2feb-4a9e-9430-5a1bfc678d22',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.855 2 DEBUG nova.virt.libvirt.driver [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Creating instance directory: /var/lib/nova/instances/530e7ace-2feb-4a9e-9430-5a1bfc678d22 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.855 2 DEBUG nova.virt.libvirt.driver [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Ensure instance console log exists: /var/lib/nova/instances/530e7ace-2feb-4a9e-9430-5a1bfc678d22/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.855 2 DEBUG nova.virt.libvirt.driver [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.856 2 DEBUG nova.virt.libvirt.vif [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T11:59:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-492220666',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-492220666',id=7,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T11:59:40Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d977ad6a90874946819537242925a8f0',ramdisk_id='',reservation_id='r-b2u06b3r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-959655280',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-959655280-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T11:59:40Z,user_data=None,user_id='b54b5e15e4c94d1f95a272981e9d9a89',uuid=530e7ace-2feb-4a9e-9430-5a1bfc678d22,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b0281338-9ff1-492d-b3aa-aeee41f08075", "address": "fa:16:3e:0f:16:f5", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapb0281338-9f", "ovs_interfaceid": "b0281338-9ff1-492d-b3aa-aeee41f08075", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.857 2 DEBUG nova.network.os_vif_util [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Converting VIF {"id": "b0281338-9ff1-492d-b3aa-aeee41f08075", "address": "fa:16:3e:0f:16:f5", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapb0281338-9f", "ovs_interfaceid": "b0281338-9ff1-492d-b3aa-aeee41f08075", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.857 2 DEBUG nova.network.os_vif_util [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:16:f5,bridge_name='br-int',has_traffic_filtering=True,id=b0281338-9ff1-492d-b3aa-aeee41f08075,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb0281338-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.858 2 DEBUG os_vif [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:16:f5,bridge_name='br-int',has_traffic_filtering=True,id=b0281338-9ff1-492d-b3aa-aeee41f08075,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb0281338-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.859 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.859 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 11:59:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 246 MiB data, 365 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.9 MiB/s wr, 144 op/s
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.862 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb0281338-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.862 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb0281338-9f, col_values=(('external_ids', {'iface-id': 'b0281338-9ff1-492d-b3aa-aeee41f08075', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0f:16:f5', 'vm-uuid': '530e7ace-2feb-4a9e-9430-5a1bfc678d22'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:48 compute-0 NetworkManager[44987]: <info>  [1759406388.8647] manager: (tapb0281338-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 11:59:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:48.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.871 2 INFO os_vif [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:16:f5,bridge_name='br-int',has_traffic_filtering=True,id=b0281338-9ff1-492d-b3aa-aeee41f08075,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb0281338-9f')
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.871 2 DEBUG nova.virt.libvirt.driver [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.871 2 DEBUG nova.compute.manager [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvwueeft7',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='530e7ace-2feb-4a9e-9430-5a1bfc678d22',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.896 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406388.8959568, b4e4932c-8129-4ceb-95ef-3a612ef502f9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.897 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] VM Resumed (Lifecycle Event)
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.898 2 DEBUG nova.compute.manager [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.903 2 INFO nova.virt.libvirt.driver [-] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Instance running successfully.
Oct 02 11:59:48 compute-0 virtqemud[257280]: argument unsupported: QEMU guest agent is not configured
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.906 2 DEBUG nova.virt.libvirt.guest [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.906 2 DEBUG nova.virt.libvirt.driver [None req-87ff865a-dd4c-4595-8a37-ef2cf43e8099 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.920 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.922 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.965 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] During sync_power_state the instance has a pending task (resize_finish). Skip.
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.966 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406388.8968477, b4e4932c-8129-4ceb-95ef-3a612ef502f9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.966 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] VM Started (Lifecycle Event)
Oct 02 11:59:48 compute-0 nova_compute[257802]: 2025-10-02 11:59:48.997 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 11:59:49 compute-0 nova_compute[257802]: 2025-10-02 11:59:49.000 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 11:59:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:49.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:49 compute-0 ceph-mon[73607]: pgmap v974: 305 pgs: 305 active+clean; 246 MiB data, 365 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.9 MiB/s wr, 144 op/s
Oct 02 11:59:50 compute-0 nova_compute[257802]: 2025-10-02 11:59:50.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:50 compute-0 nova_compute[257802]: 2025-10-02 11:59:50.531 2 DEBUG nova.network.neutron [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Port b0281338-9ff1-492d-b3aa-aeee41f08075 updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Oct 02 11:59:50 compute-0 nova_compute[257802]: 2025-10-02 11:59:50.532 2 DEBUG nova.compute.manager [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvwueeft7',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='530e7ace-2feb-4a9e-9430-5a1bfc678d22',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Oct 02 11:59:50 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 02 11:59:50 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 02 11:59:50 compute-0 kernel: tapb0281338-9f: entered promiscuous mode
Oct 02 11:59:50 compute-0 ovn_controller[148183]: 2025-10-02T11:59:50Z|00035|binding|INFO|Claiming lport b0281338-9ff1-492d-b3aa-aeee41f08075 for this additional chassis.
Oct 02 11:59:50 compute-0 ovn_controller[148183]: 2025-10-02T11:59:50Z|00036|binding|INFO|b0281338-9ff1-492d-b3aa-aeee41f08075: Claiming fa:16:3e:0f:16:f5 10.100.0.13
Oct 02 11:59:50 compute-0 ovn_controller[148183]: 2025-10-02T11:59:50Z|00037|binding|INFO|Claiming lport 36ee45f1-8361-4e2d-a0d4-c4907ede4743 for this additional chassis.
Oct 02 11:59:50 compute-0 ovn_controller[148183]: 2025-10-02T11:59:50Z|00038|binding|INFO|36ee45f1-8361-4e2d-a0d4-c4907ede4743: Claiming fa:16:3e:1a:38:f6 19.80.0.52
Oct 02 11:59:50 compute-0 NetworkManager[44987]: <info>  [1759406390.7876] manager: (tapb0281338-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Oct 02 11:59:50 compute-0 nova_compute[257802]: 2025-10-02 11:59:50.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:50 compute-0 systemd-udevd[266615]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 11:59:50 compute-0 NetworkManager[44987]: <info>  [1759406390.8044] device (tapb0281338-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 11:59:50 compute-0 NetworkManager[44987]: <info>  [1759406390.8053] device (tapb0281338-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 11:59:50 compute-0 systemd-machined[211836]: New machine qemu-4-instance-00000007.
Oct 02 11:59:50 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000007.
Oct 02 11:59:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 246 MiB data, 366 MiB used, 21 GiB / 21 GiB avail; 3.1 MiB/s rd, 51 KiB/s wr, 152 op/s
Oct 02 11:59:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:50.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:50 compute-0 nova_compute[257802]: 2025-10-02 11:59:50.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:50 compute-0 ovn_controller[148183]: 2025-10-02T11:59:50Z|00039|binding|INFO|Setting lport b0281338-9ff1-492d-b3aa-aeee41f08075 ovn-installed in OVS
Oct 02 11:59:50 compute-0 nova_compute[257802]: 2025-10-02 11:59:50.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/897661849' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 11:59:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/897661849' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 11:59:50 compute-0 ceph-mon[73607]: pgmap v975: 305 pgs: 305 active+clean; 246 MiB data, 366 MiB used, 21 GiB / 21 GiB avail; 3.1 MiB/s rd, 51 KiB/s wr, 152 op/s
Oct 02 11:59:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:51.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Oct 02 11:59:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Oct 02 11:59:51 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Oct 02 11:59:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 246 MiB data, 366 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 29 KiB/s wr, 120 op/s
Oct 02 11:59:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:52.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 11:59:53 compute-0 ceph-mon[73607]: osdmap e139: 3 total, 3 up, 3 in
Oct 02 11:59:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/766376539' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:59:53 compute-0 ceph-mon[73607]: pgmap v977: 305 pgs: 305 active+clean; 246 MiB data, 366 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 29 KiB/s wr, 120 op/s
Oct 02 11:59:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:53.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:53 compute-0 nova_compute[257802]: 2025-10-02 11:59:53.334 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406393.3345268, 530e7ace-2feb-4a9e-9430-5a1bfc678d22 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 11:59:53 compute-0 nova_compute[257802]: 2025-10-02 11:59:53.335 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] VM Started (Lifecycle Event)
Oct 02 11:59:53 compute-0 nova_compute[257802]: 2025-10-02 11:59:53.358 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 11:59:53 compute-0 nova_compute[257802]: 2025-10-02 11:59:53.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:53 compute-0 nova_compute[257802]: 2025-10-02 11:59:53.987 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406393.987569, 530e7ace-2feb-4a9e-9430-5a1bfc678d22 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 11:59:53 compute-0 nova_compute[257802]: 2025-10-02 11:59:53.988 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] VM Resumed (Lifecycle Event)
Oct 02 11:59:54 compute-0 nova_compute[257802]: 2025-10-02 11:59:54.009 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 11:59:54 compute-0 nova_compute[257802]: 2025-10-02 11:59:54.012 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 11:59:54 compute-0 nova_compute[257802]: 2025-10-02 11:59:54.028 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-0.ctlplane.example.com
Oct 02 11:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005337014935790061 of space, bias 1.0, pg target 1.6011044807370183 quantized to 32 (current 32)
Oct 02 11:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Oct 02 11:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 11:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 11:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027172174530057695 quantized to 32 (current 32)
Oct 02 11:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 11:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 11:59:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2715093501' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:59:54 compute-0 ovn_controller[148183]: 2025-10-02T11:59:54Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0f:16:f5 10.100.0.13
Oct 02 11:59:54 compute-0 ovn_controller[148183]: 2025-10-02T11:59:54Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0f:16:f5 10.100.0.13
Oct 02 11:59:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 185 MiB data, 335 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 572 KiB/s wr, 201 op/s
Oct 02 11:59:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:54.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 11:59:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3606833735' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 11:59:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 11:59:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3606833735' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 11:59:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:55.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:55 compute-0 nova_compute[257802]: 2025-10-02 11:59:55.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:55 compute-0 ceph-mon[73607]: pgmap v978: 305 pgs: 305 active+clean; 185 MiB data, 335 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 572 KiB/s wr, 201 op/s
Oct 02 11:59:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3606833735' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 11:59:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3606833735' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 11:59:55 compute-0 ovn_controller[148183]: 2025-10-02T11:59:55Z|00040|binding|INFO|Claiming lport b0281338-9ff1-492d-b3aa-aeee41f08075 for this chassis.
Oct 02 11:59:55 compute-0 ovn_controller[148183]: 2025-10-02T11:59:55Z|00041|binding|INFO|b0281338-9ff1-492d-b3aa-aeee41f08075: Claiming fa:16:3e:0f:16:f5 10.100.0.13
Oct 02 11:59:55 compute-0 ovn_controller[148183]: 2025-10-02T11:59:55Z|00042|binding|INFO|Claiming lport 36ee45f1-8361-4e2d-a0d4-c4907ede4743 for this chassis.
Oct 02 11:59:55 compute-0 ovn_controller[148183]: 2025-10-02T11:59:55Z|00043|binding|INFO|36ee45f1-8361-4e2d-a0d4-c4907ede4743: Claiming fa:16:3e:1a:38:f6 19.80.0.52
Oct 02 11:59:55 compute-0 ovn_controller[148183]: 2025-10-02T11:59:55Z|00044|binding|INFO|Setting lport b0281338-9ff1-492d-b3aa-aeee41f08075 up in Southbound
Oct 02 11:59:55 compute-0 ovn_controller[148183]: 2025-10-02T11:59:55Z|00045|binding|INFO|Setting lport 36ee45f1-8361-4e2d-a0d4-c4907ede4743 up in Southbound
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:55.350 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:38:f6 19.80.0.52'], port_security=['fa:16:3e:1a:38:f6 19.80.0.52'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['b0281338-9ff1-492d-b3aa-aeee41f08075'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-589500470', 'neutron:cidrs': '19.80.0.52/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4874fd53-2e81-4c7f-81b7-b85c23edc180', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-589500470', 'neutron:project_id': 'd977ad6a90874946819537242925a8f0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '74859182-61ec-4a12-bb02-0c8684ac9234', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=ec7a7eef-8b3d-415d-8728-897cefc62c0c, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=36ee45f1-8361-4e2d-a0d4-c4907ede4743) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:55.352 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:16:f5 10.100.0.13'], port_security=['fa:16:3e:0f:16:f5 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1480997899', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '530e7ace-2feb-4a9e-9430-5a1bfc678d22', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1480997899', 'neutron:project_id': 'd977ad6a90874946819537242925a8f0', 'neutron:revision_number': '11', 'neutron:security_group_ids': '74859182-61ec-4a12-bb02-0c8684ac9234', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2c64df1-281c-4dee-b0ae-55c6f99caa2e, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=b0281338-9ff1-492d-b3aa-aeee41f08075) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:55.353 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 36ee45f1-8361-4e2d-a0d4-c4907ede4743 in datapath 4874fd53-2e81-4c7f-81b7-b85c23edc180 bound to our chassis
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:55.354 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4874fd53-2e81-4c7f-81b7-b85c23edc180
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:55.371 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[25eede5e-1fd1-4b02-a2d7-58eff6bf7f66]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:55.372 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4874fd53-21 in ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:55.374 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4874fd53-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:55.374 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8257ffb0-e7e3-42d7-a966-1541072fe737]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:55.376 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6dda18c1-a2c6-4f13-82cf-176dc60def9a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:55.392 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[f6022328-f253-4caa-8fd8-43bb107dfde7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:55.416 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cd443021-51a8-4400-9d8a-506aed63b246]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:55.440 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[303e3c2a-190d-491f-abce-85e985988575]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:55 compute-0 NetworkManager[44987]: <info>  [1759406395.4472] manager: (tap4874fd53-20): new Veth device (/org/freedesktop/NetworkManager/Devices/31)
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:55.445 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d14b5449-c394-46e7-a5eb-c6aca74a654c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:55 compute-0 systemd-udevd[266762]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:55.474 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[1b26b016-6312-472b-b4b6-c5c3eb178b8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:55.477 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[1a4d768e-a882-40cc-9799-37652c635704]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:55 compute-0 NetworkManager[44987]: <info>  [1759406395.5031] device (tap4874fd53-20): carrier: link connected
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:55.511 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[76c90a87-5b66-4f02-8aae-de0b66b9ee5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:55.527 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9fa1e83b-2e94-4092-8508-c2565fa24107]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4874fd53-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:c0:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446312, 'reachable_time': 22137, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266781, 'error': None, 'target': 'ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:55.542 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[54673748-d1cf-4eae-8729-67d62cf27ce9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe42:c040'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 446312, 'tstamp': 446312}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266782, 'error': None, 'target': 'ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:55 compute-0 nova_compute[257802]: 2025-10-02 11:59:55.559 2 INFO nova.compute.manager [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Post operation of migration started
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:55.566 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b486f379-6c84-4e46-8159-e6b287e0bbba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4874fd53-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:c0:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446312, 'reachable_time': 22137, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 266783, 'error': None, 'target': 'ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:55.602 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8248a271-b268-4a87-b8a1-7f4fe927372c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:55.663 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[37e4e0ee-b010-4f72-90a0-28457b029f9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:55.665 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4874fd53-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:55.665 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:55.665 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4874fd53-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 11:59:55 compute-0 nova_compute[257802]: 2025-10-02 11:59:55.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:55 compute-0 kernel: tap4874fd53-20: entered promiscuous mode
Oct 02 11:59:55 compute-0 nova_compute[257802]: 2025-10-02 11:59:55.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:55 compute-0 NetworkManager[44987]: <info>  [1759406395.6703] manager: (tap4874fd53-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:55.675 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4874fd53-20, col_values=(('external_ids', {'iface-id': 'a3d6bcff-8fb0-4a18-b382-3921f86e4cde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 11:59:55 compute-0 nova_compute[257802]: 2025-10-02 11:59:55.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:55 compute-0 ovn_controller[148183]: 2025-10-02T11:59:55Z|00046|binding|INFO|Releasing lport a3d6bcff-8fb0-4a18-b382-3921f86e4cde from this chassis (sb_readonly=0)
Oct 02 11:59:55 compute-0 nova_compute[257802]: 2025-10-02 11:59:55.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:55.680 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4874fd53-2e81-4c7f-81b7-b85c23edc180.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4874fd53-2e81-4c7f-81b7-b85c23edc180.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:55.680 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[700b84e9-111a-4c89-b009-1a642810c205]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:55.684 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: global
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-4874fd53-2e81-4c7f-81b7-b85c23edc180
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/4874fd53-2e81-4c7f-81b7-b85c23edc180.pid.haproxy
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 4874fd53-2e81-4c7f-81b7-b85c23edc180
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 11:59:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:55.685 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180', 'env', 'PROCESS_TAG=haproxy-4874fd53-2e81-4c7f-81b7-b85c23edc180', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4874fd53-2e81-4c7f-81b7-b85c23edc180.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 11:59:55 compute-0 nova_compute[257802]: 2025-10-02 11:59:55.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:56 compute-0 nova_compute[257802]: 2025-10-02 11:59:56.082 2 DEBUG oslo_concurrency.lockutils [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Acquiring lock "refresh_cache-530e7ace-2feb-4a9e-9430-5a1bfc678d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 11:59:56 compute-0 nova_compute[257802]: 2025-10-02 11:59:56.083 2 DEBUG oslo_concurrency.lockutils [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Acquired lock "refresh_cache-530e7ace-2feb-4a9e-9430-5a1bfc678d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 11:59:56 compute-0 nova_compute[257802]: 2025-10-02 11:59:56.083 2 DEBUG nova.network.neutron [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 11:59:56 compute-0 podman[266816]: 2025-10-02 11:59:56.010970468 +0000 UTC m=+0.028387261 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 11:59:56 compute-0 podman[266816]: 2025-10-02 11:59:56.127622855 +0000 UTC m=+0.145039628 container create 101904df1c9077720c353df5b90fc793080b12cfa4e8a1d09edf69da574e819c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 11:59:56 compute-0 systemd[1]: Started libpod-conmon-101904df1c9077720c353df5b90fc793080b12cfa4e8a1d09edf69da574e819c.scope.
Oct 02 11:59:56 compute-0 podman[266829]: 2025-10-02 11:59:56.221036039 +0000 UTC m=+0.066843090 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 11:59:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:59:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8165cdb3614c4db89a0c94f4dc1ed8c741126f988659b0b10258b26766d8de9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 11:59:56 compute-0 podman[266816]: 2025-10-02 11:59:56.280386933 +0000 UTC m=+0.297803726 container init 101904df1c9077720c353df5b90fc793080b12cfa4e8a1d09edf69da574e819c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:59:56 compute-0 podman[266830]: 2025-10-02 11:59:56.28148201 +0000 UTC m=+0.117517910 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 11:59:56 compute-0 podman[266816]: 2025-10-02 11:59:56.287431087 +0000 UTC m=+0.304847860 container start 101904df1c9077720c353df5b90fc793080b12cfa4e8a1d09edf69da574e819c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:59:56 compute-0 neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180[266866]: [NOTICE]   (266876) : New worker (266878) forked
Oct 02 11:59:56 compute-0 neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180[266866]: [NOTICE]   (266876) : Loading success.
Oct 02 11:59:56 compute-0 nova_compute[257802]: 2025-10-02 11:59:56.339 2 DEBUG oslo_concurrency.lockutils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquiring lock "c5bc8428-581c-4ea6-85d8-6f153a1ee723" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:59:56 compute-0 nova_compute[257802]: 2025-10-02 11:59:56.340 2 DEBUG oslo_concurrency.lockutils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "c5bc8428-581c-4ea6-85d8-6f153a1ee723" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:59:56 compute-0 nova_compute[257802]: 2025-10-02 11:59:56.366 2 DEBUG nova.compute.manager [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:56.372 158261 INFO neutron.agent.ovn.metadata.agent [-] Port b0281338-9ff1-492d-b3aa-aeee41f08075 in datapath 1e991676-99e8-43d9-8575-4a21f50b0ed5 unbound from our chassis
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:56.374 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1e991676-99e8-43d9-8575-4a21f50b0ed5
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:56.386 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[aa40a34f-25b5-48bd-85eb-0d69669fde86]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:56.387 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1e991676-91 in ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:56.389 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1e991676-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:56.389 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[149c9dae-2b95-4848-b555-fdfd1b4b84dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:56.390 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cf0b2742-baf7-4235-8aef-06e1685f9a96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:56.404 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[ce029666-8d15-49d4-a7b0-290b4f647c13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:56.427 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d3c243-0bd6-4159-98f3-f38fd2940251]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:56 compute-0 nova_compute[257802]: 2025-10-02 11:59:56.436 2 DEBUG oslo_concurrency.lockutils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:59:56 compute-0 nova_compute[257802]: 2025-10-02 11:59:56.437 2 DEBUG oslo_concurrency.lockutils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:59:56 compute-0 nova_compute[257802]: 2025-10-02 11:59:56.443 2 DEBUG nova.virt.hardware [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 11:59:56 compute-0 nova_compute[257802]: 2025-10-02 11:59:56.443 2 INFO nova.compute.claims [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Claim successful on node compute-0.ctlplane.example.com
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:56.455 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[7fe8af6c-7e76-4a8e-b5d2-a76ebdb4e126]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:56 compute-0 systemd-udevd[266773]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 11:59:56 compute-0 NetworkManager[44987]: <info>  [1759406396.4629] manager: (tap1e991676-90): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:56.465 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8309cd2c-6105-40fd-a01e-e80cb5b73a11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:56.498 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[c8422d8e-959f-40c0-bb2b-8281ef4bc0b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:56.501 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[74ca1c91-8e43-43d8-a631-070b25042545]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:56 compute-0 NetworkManager[44987]: <info>  [1759406396.5231] device (tap1e991676-90): carrier: link connected
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:56.526 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[a71e87eb-d3cb-4b4a-ac44-916e73b50de6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:56.540 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[99017e0e-ba20-4006-8ba7-e4dcc004cd72]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e991676-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:0e:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446414, 'reachable_time': 22434, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266898, 'error': None, 'target': 'ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:56.559 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[daa9c096-9293-4fa0-8870-99ef5aa4a0f6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee1:ef6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 446414, 'tstamp': 446414}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266899, 'error': None, 'target': 'ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:56.573 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6ec7ef7a-fca1-4ffd-ad87-c360c1453382]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e991676-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:0e:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446414, 'reachable_time': 22434, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 266900, 'error': None, 'target': 'ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:56 compute-0 nova_compute[257802]: 2025-10-02 11:59:56.597 2 DEBUG oslo_concurrency.processutils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:56.612 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a50e501c-e304-41c2-b051-8ec2bb43aef1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:56.668 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[91d31fe0-be8e-4779-9c10-05b31360fa73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:56.670 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e991676-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:56.670 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:56.670 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e991676-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 11:59:56 compute-0 NetworkManager[44987]: <info>  [1759406396.6731] manager: (tap1e991676-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Oct 02 11:59:56 compute-0 kernel: tap1e991676-90: entered promiscuous mode
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:56.675 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1e991676-90, col_values=(('external_ids', {'iface-id': 'a0e52ac7-beb4-4d4b-863a-9b95be4c9e74'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 11:59:56 compute-0 ovn_controller[148183]: 2025-10-02T11:59:56Z|00047|binding|INFO|Releasing lport a0e52ac7-beb4-4d4b-863a-9b95be4c9e74 from this chassis (sb_readonly=0)
Oct 02 11:59:56 compute-0 nova_compute[257802]: 2025-10-02 11:59:56.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:56.695 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1e991676-99e8-43d9-8575-4a21f50b0ed5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1e991676-99e8-43d9-8575-4a21f50b0ed5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:56.696 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[846a68cc-855f-4627-9eb4-520434dd1afa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:56.697 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: global
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-1e991676-99e8-43d9-8575-4a21f50b0ed5
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/1e991676-99e8-43d9-8575-4a21f50b0ed5.pid.haproxy
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: 
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: 
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: 
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 1e991676-99e8-43d9-8575-4a21f50b0ed5
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 11:59:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 11:59:56.698 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'env', 'PROCESS_TAG=haproxy-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1e991676-99e8-43d9-8575-4a21f50b0ed5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 11:59:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 176 MiB data, 335 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.2 MiB/s wr, 217 op/s
Oct 02 11:59:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:59:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:56.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:59:56 compute-0 ceph-mon[73607]: pgmap v979: 305 pgs: 305 active+clean; 176 MiB data, 335 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.2 MiB/s wr, 217 op/s
Oct 02 11:59:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:57.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 11:59:57 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2983909947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:59:57 compute-0 nova_compute[257802]: 2025-10-02 11:59:57.090 2 DEBUG oslo_concurrency.processutils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:59:57 compute-0 nova_compute[257802]: 2025-10-02 11:59:57.095 2 DEBUG nova.compute.provider_tree [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 11:59:57 compute-0 podman[266952]: 2025-10-02 11:59:57.100583843 +0000 UTC m=+0.102436518 container create f8047243fe64ab25583bc6db8a745db75eb221bc1c257bb538287359aaf9b974 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 11:59:57 compute-0 podman[266952]: 2025-10-02 11:59:57.020422876 +0000 UTC m=+0.022275531 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 11:59:57 compute-0 systemd[1]: Started libpod-conmon-f8047243fe64ab25583bc6db8a745db75eb221bc1c257bb538287359aaf9b974.scope.
Oct 02 11:59:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:59:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f549b1f3cccd0ed49a15448343901bf9b41d83f41d23816cf0af9034582a8888/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 11:59:57 compute-0 nova_compute[257802]: 2025-10-02 11:59:57.168 2 DEBUG nova.scheduler.client.report [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 11:59:57 compute-0 podman[266952]: 2025-10-02 11:59:57.181224482 +0000 UTC m=+0.183077137 container init f8047243fe64ab25583bc6db8a745db75eb221bc1c257bb538287359aaf9b974 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 11:59:57 compute-0 podman[266952]: 2025-10-02 11:59:57.187464495 +0000 UTC m=+0.189317121 container start f8047243fe64ab25583bc6db8a745db75eb221bc1c257bb538287359aaf9b974 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 11:59:57 compute-0 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[266969]: [NOTICE]   (266973) : New worker (266975) forked
Oct 02 11:59:57 compute-0 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[266969]: [NOTICE]   (266973) : Loading success.
Oct 02 11:59:57 compute-0 nova_compute[257802]: 2025-10-02 11:59:57.436 2 DEBUG oslo_concurrency.lockutils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.999s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:59:57 compute-0 nova_compute[257802]: 2025-10-02 11:59:57.437 2 DEBUG nova.compute.manager [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 11:59:57 compute-0 nova_compute[257802]: 2025-10-02 11:59:57.556 2 DEBUG nova.compute.manager [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 11:59:57 compute-0 nova_compute[257802]: 2025-10-02 11:59:57.556 2 DEBUG nova.network.neutron [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 11:59:57 compute-0 nova_compute[257802]: 2025-10-02 11:59:57.717 2 INFO nova.virt.libvirt.driver [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 11:59:57 compute-0 nova_compute[257802]: 2025-10-02 11:59:57.752 2 DEBUG nova.compute.manager [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 11:59:57 compute-0 nova_compute[257802]: 2025-10-02 11:59:57.874 2 DEBUG nova.compute.manager [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 11:59:57 compute-0 nova_compute[257802]: 2025-10-02 11:59:57.876 2 DEBUG nova.virt.libvirt.driver [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 11:59:57 compute-0 nova_compute[257802]: 2025-10-02 11:59:57.876 2 INFO nova.virt.libvirt.driver [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Creating image(s)
Oct 02 11:59:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 11:59:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Oct 02 11:59:57 compute-0 nova_compute[257802]: 2025-10-02 11:59:57.942 2 DEBUG nova.storage.rbd_utils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] rbd image c5bc8428-581c-4ea6-85d8-6f153a1ee723_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 11:59:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Oct 02 11:59:57 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.073 2 DEBUG nova.storage.rbd_utils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] rbd image c5bc8428-581c-4ea6-85d8-6f153a1ee723_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 11:59:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2983909947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 11:59:58 compute-0 ceph-mon[73607]: osdmap e140: 3 total, 3 up, 3 in
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.104 2 DEBUG nova.storage.rbd_utils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] rbd image c5bc8428-581c-4ea6-85d8-6f153a1ee723_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.114 2 DEBUG oslo_concurrency.processutils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.184 2 DEBUG oslo_concurrency.processutils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.185 2 DEBUG oslo_concurrency.lockutils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.186 2 DEBUG oslo_concurrency.lockutils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.186 2 DEBUG oslo_concurrency.lockutils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.214 2 DEBUG nova.storage.rbd_utils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] rbd image c5bc8428-581c-4ea6-85d8-6f153a1ee723_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.217 2 DEBUG oslo_concurrency.processutils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 c5bc8428-581c-4ea6-85d8-6f153a1ee723_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.407 2 DEBUG nova.network.neutron [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.408 2 DEBUG nova.compute.manager [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.533 2 DEBUG oslo_concurrency.processutils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 c5bc8428-581c-4ea6-85d8-6f153a1ee723_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.597 2 DEBUG nova.storage.rbd_utils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] resizing rbd image c5bc8428-581c-4ea6-85d8-6f153a1ee723_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.794 2 DEBUG nova.objects.instance [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lazy-loading 'migration_context' on Instance uuid c5bc8428-581c-4ea6-85d8-6f153a1ee723 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.828 2 DEBUG nova.virt.libvirt.driver [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.829 2 DEBUG nova.virt.libvirt.driver [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Ensure instance console log exists: /var/lib/nova/instances/c5bc8428-581c-4ea6-85d8-6f153a1ee723/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.829 2 DEBUG oslo_concurrency.lockutils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.830 2 DEBUG oslo_concurrency.lockutils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.830 2 DEBUG oslo_concurrency.lockutils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.831 2 DEBUG nova.virt.libvirt.driver [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.834 2 WARNING nova.virt.libvirt.driver [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.839 2 DEBUG nova.virt.libvirt.host [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.840 2 DEBUG nova.virt.libvirt.host [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.842 2 DEBUG nova.virt.libvirt.host [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.842 2 DEBUG nova.virt.libvirt.host [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.843 2 DEBUG nova.virt.libvirt.driver [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.843 2 DEBUG nova.virt.hardware [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.844 2 DEBUG nova.virt.hardware [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.844 2 DEBUG nova.virt.hardware [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.844 2 DEBUG nova.virt.hardware [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.844 2 DEBUG nova.virt.hardware [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.845 2 DEBUG nova.virt.hardware [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.845 2 DEBUG nova.virt.hardware [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.845 2 DEBUG nova.virt.hardware [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.845 2 DEBUG nova.virt.hardware [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.845 2 DEBUG nova.virt.hardware [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.846 2 DEBUG nova.virt.hardware [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.848 2 DEBUG oslo_concurrency.processutils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:59:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 198 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.2 MiB/s wr, 227 op/s
Oct 02 11:59:58 compute-0 nova_compute[257802]: 2025-10-02 11:59:58.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 11:59:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:58.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 11:59:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:59.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:59 compute-0 ceph-mon[73607]: pgmap v981: 305 pgs: 305 active+clean; 198 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.2 MiB/s wr, 227 op/s
Oct 02 11:59:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 11:59:59 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3096141512' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 11:59:59 compute-0 nova_compute[257802]: 2025-10-02 11:59:59.301 2 DEBUG nova.network.neutron [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Updating instance_info_cache with network_info: [{"id": "b0281338-9ff1-492d-b3aa-aeee41f08075", "address": "fa:16:3e:0f:16:f5", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0281338-9f", "ovs_interfaceid": "b0281338-9ff1-492d-b3aa-aeee41f08075", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 11:59:59 compute-0 nova_compute[257802]: 2025-10-02 11:59:59.319 2 DEBUG oslo_concurrency.processutils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:59:59 compute-0 nova_compute[257802]: 2025-10-02 11:59:59.342 2 DEBUG nova.storage.rbd_utils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] rbd image c5bc8428-581c-4ea6-85d8-6f153a1ee723_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 11:59:59 compute-0 nova_compute[257802]: 2025-10-02 11:59:59.345 2 DEBUG oslo_concurrency.processutils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 11:59:59 compute-0 nova_compute[257802]: 2025-10-02 11:59:59.559 2 DEBUG oslo_concurrency.lockutils [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Releasing lock "refresh_cache-530e7ace-2feb-4a9e-9430-5a1bfc678d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 11:59:59 compute-0 nova_compute[257802]: 2025-10-02 11:59:59.575 2 DEBUG oslo_concurrency.lockutils [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:59:59 compute-0 nova_compute[257802]: 2025-10-02 11:59:59.576 2 DEBUG oslo_concurrency.lockutils [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:59:59 compute-0 nova_compute[257802]: 2025-10-02 11:59:59.576 2 DEBUG oslo_concurrency.lockutils [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:59:59 compute-0 nova_compute[257802]: 2025-10-02 11:59:59.581 2 INFO nova.virt.libvirt.driver [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Oct 02 11:59:59 compute-0 virtqemud[257280]: Domain id=4 name='instance-00000007' uuid=530e7ace-2feb-4a9e-9430-5a1bfc678d22 is tainted: custom-monitor
Oct 02 11:59:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 11:59:59 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3666828709' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 11:59:59 compute-0 nova_compute[257802]: 2025-10-02 11:59:59.783 2 DEBUG oslo_concurrency.processutils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 11:59:59 compute-0 nova_compute[257802]: 2025-10-02 11:59:59.784 2 DEBUG nova.objects.instance [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lazy-loading 'pci_devices' on Instance uuid c5bc8428-581c-4ea6-85d8-6f153a1ee723 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 11:59:59 compute-0 nova_compute[257802]: 2025-10-02 11:59:59.808 2 DEBUG nova.virt.libvirt.driver [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] End _get_guest_xml xml=<domain type="kvm">
Oct 02 11:59:59 compute-0 nova_compute[257802]:   <uuid>c5bc8428-581c-4ea6-85d8-6f153a1ee723</uuid>
Oct 02 11:59:59 compute-0 nova_compute[257802]:   <name>instance-00000008</name>
Oct 02 11:59:59 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 11:59:59 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 11:59:59 compute-0 nova_compute[257802]:   <metadata>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 11:59:59 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:       <nova:name>tempest-MigrationsAdminTest-server-1495132323</nova:name>
Oct 02 11:59:59 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 11:59:58</nova:creationTime>
Oct 02 11:59:59 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 11:59:59 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 11:59:59 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 11:59:59 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 11:59:59 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 11:59:59 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 11:59:59 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 11:59:59 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 11:59:59 compute-0 nova_compute[257802]:         <nova:user uuid="1a06819bf8cc4ff7bccbbb2616ff2d21">tempest-MigrationsAdminTest-819597356-project-member</nova:user>
Oct 02 11:59:59 compute-0 nova_compute[257802]:         <nova:project uuid="f1ce36070fb047479c3a083f36733f63">tempest-MigrationsAdminTest-819597356</nova:project>
Oct 02 11:59:59 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 11:59:59 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:       <nova:ports/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 11:59:59 compute-0 nova_compute[257802]:   </metadata>
Oct 02 11:59:59 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <system>
Oct 02 11:59:59 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 11:59:59 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 11:59:59 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 11:59:59 compute-0 nova_compute[257802]:       <entry name="serial">c5bc8428-581c-4ea6-85d8-6f153a1ee723</entry>
Oct 02 11:59:59 compute-0 nova_compute[257802]:       <entry name="uuid">c5bc8428-581c-4ea6-85d8-6f153a1ee723</entry>
Oct 02 11:59:59 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     </system>
Oct 02 11:59:59 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 11:59:59 compute-0 nova_compute[257802]:   <os>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:   </os>
Oct 02 11:59:59 compute-0 nova_compute[257802]:   <features>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <apic/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:   </features>
Oct 02 11:59:59 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:   </clock>
Oct 02 11:59:59 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:   </cpu>
Oct 02 11:59:59 compute-0 nova_compute[257802]:   <devices>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 11:59:59 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/c5bc8428-581c-4ea6-85d8-6f153a1ee723_disk">
Oct 02 11:59:59 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:       </source>
Oct 02 11:59:59 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 11:59:59 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:       </auth>
Oct 02 11:59:59 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     </disk>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 11:59:59 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/c5bc8428-581c-4ea6-85d8-6f153a1ee723_disk.config">
Oct 02 11:59:59 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:       </source>
Oct 02 11:59:59 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 11:59:59 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:       </auth>
Oct 02 11:59:59 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     </disk>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 11:59:59 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/c5bc8428-581c-4ea6-85d8-6f153a1ee723/console.log" append="off"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     </serial>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <video>
Oct 02 11:59:59 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     </video>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 11:59:59 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     </rng>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 11:59:59 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 11:59:59 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 11:59:59 compute-0 nova_compute[257802]:   </devices>
Oct 02 11:59:59 compute-0 nova_compute[257802]: </domain>
Oct 02 11:59:59 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 11:59:59 compute-0 nova_compute[257802]: 2025-10-02 11:59:59.897 2 DEBUG nova.virt.libvirt.driver [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 11:59:59 compute-0 nova_compute[257802]: 2025-10-02 11:59:59.898 2 DEBUG nova.virt.libvirt.driver [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 11:59:59 compute-0 nova_compute[257802]: 2025-10-02 11:59:59.898 2 INFO nova.virt.libvirt.driver [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Using config drive
Oct 02 11:59:59 compute-0 nova_compute[257802]: 2025-10-02 11:59:59.947 2 DEBUG nova.storage.rbd_utils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] rbd image c5bc8428-581c-4ea6-85d8-6f153a1ee723_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 11:59:59 compute-0 podman[267214]: 2025-10-02 11:59:59.963695141 +0000 UTC m=+0.120529963 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:00:00 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 02 12:00:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3096141512' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:00:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3666828709' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:00:00 compute-0 ceph-mon[73607]: overall HEALTH_OK
Oct 02 12:00:00 compute-0 nova_compute[257802]: 2025-10-02 12:00:00.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:00 compute-0 nova_compute[257802]: 2025-10-02 12:00:00.251 2 INFO nova.virt.libvirt.driver [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Creating config drive at /var/lib/nova/instances/c5bc8428-581c-4ea6-85d8-6f153a1ee723/disk.config
Oct 02 12:00:00 compute-0 nova_compute[257802]: 2025-10-02 12:00:00.256 2 DEBUG oslo_concurrency.processutils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c5bc8428-581c-4ea6-85d8-6f153a1ee723/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprnkomm84 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:00:00 compute-0 nova_compute[257802]: 2025-10-02 12:00:00.382 2 DEBUG oslo_concurrency.processutils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c5bc8428-581c-4ea6-85d8-6f153a1ee723/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprnkomm84" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:00:00 compute-0 nova_compute[257802]: 2025-10-02 12:00:00.405 2 DEBUG nova.storage.rbd_utils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] rbd image c5bc8428-581c-4ea6-85d8-6f153a1ee723_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:00:00 compute-0 nova_compute[257802]: 2025-10-02 12:00:00.409 2 DEBUG oslo_concurrency.processutils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c5bc8428-581c-4ea6-85d8-6f153a1ee723/disk.config c5bc8428-581c-4ea6-85d8-6f153a1ee723_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:00:00 compute-0 nova_compute[257802]: 2025-10-02 12:00:00.552 2 DEBUG oslo_concurrency.processutils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c5bc8428-581c-4ea6-85d8-6f153a1ee723/disk.config c5bc8428-581c-4ea6-85d8-6f153a1ee723_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:00:00 compute-0 nova_compute[257802]: 2025-10-02 12:00:00.553 2 INFO nova.virt.libvirt.driver [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Deleting local config drive /var/lib/nova/instances/c5bc8428-581c-4ea6-85d8-6f153a1ee723/disk.config because it was imported into RBD.
Oct 02 12:00:00 compute-0 nova_compute[257802]: 2025-10-02 12:00:00.591 2 INFO nova.virt.libvirt.driver [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Oct 02 12:00:00 compute-0 systemd-machined[211836]: New machine qemu-5-instance-00000008.
Oct 02 12:00:00 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000008.
Oct 02 12:00:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 231 MiB data, 364 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.7 MiB/s wr, 252 op/s
Oct 02 12:00:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:00.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:01.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:01 compute-0 ceph-mon[73607]: pgmap v982: 305 pgs: 305 active+clean; 231 MiB data, 364 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.7 MiB/s wr, 252 op/s
Oct 02 12:00:01 compute-0 nova_compute[257802]: 2025-10-02 12:00:01.453 2 DEBUG nova.compute.manager [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:00:01 compute-0 nova_compute[257802]: 2025-10-02 12:00:01.454 2 DEBUG nova.virt.libvirt.driver [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:00:01 compute-0 nova_compute[257802]: 2025-10-02 12:00:01.455 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406401.4526422, c5bc8428-581c-4ea6-85d8-6f153a1ee723 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:00:01 compute-0 nova_compute[257802]: 2025-10-02 12:00:01.456 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] VM Resumed (Lifecycle Event)
Oct 02 12:00:01 compute-0 nova_compute[257802]: 2025-10-02 12:00:01.462 2 INFO nova.virt.libvirt.driver [-] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Instance spawned successfully.
Oct 02 12:00:01 compute-0 nova_compute[257802]: 2025-10-02 12:00:01.462 2 DEBUG nova.virt.libvirt.driver [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:00:01 compute-0 nova_compute[257802]: 2025-10-02 12:00:01.497 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:00:01 compute-0 nova_compute[257802]: 2025-10-02 12:00:01.504 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:00:01 compute-0 nova_compute[257802]: 2025-10-02 12:00:01.508 2 DEBUG nova.virt.libvirt.driver [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:00:01 compute-0 nova_compute[257802]: 2025-10-02 12:00:01.509 2 DEBUG nova.virt.libvirt.driver [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:00:01 compute-0 nova_compute[257802]: 2025-10-02 12:00:01.509 2 DEBUG nova.virt.libvirt.driver [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:00:01 compute-0 nova_compute[257802]: 2025-10-02 12:00:01.511 2 DEBUG nova.virt.libvirt.driver [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:00:01 compute-0 nova_compute[257802]: 2025-10-02 12:00:01.512 2 DEBUG nova.virt.libvirt.driver [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:00:01 compute-0 nova_compute[257802]: 2025-10-02 12:00:01.512 2 DEBUG nova.virt.libvirt.driver [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:00:01 compute-0 nova_compute[257802]: 2025-10-02 12:00:01.599 2 INFO nova.virt.libvirt.driver [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Oct 02 12:00:01 compute-0 nova_compute[257802]: 2025-10-02 12:00:01.604 2 DEBUG nova.compute.manager [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:00:01 compute-0 nova_compute[257802]: 2025-10-02 12:00:01.618 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:00:01 compute-0 nova_compute[257802]: 2025-10-02 12:00:01.619 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406401.4529014, c5bc8428-581c-4ea6-85d8-6f153a1ee723 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:00:01 compute-0 nova_compute[257802]: 2025-10-02 12:00:01.619 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] VM Started (Lifecycle Event)
Oct 02 12:00:01 compute-0 nova_compute[257802]: 2025-10-02 12:00:01.728 2 DEBUG nova.objects.instance [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:00:02 compute-0 nova_compute[257802]: 2025-10-02 12:00:02.744 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:00:02 compute-0 nova_compute[257802]: 2025-10-02 12:00:02.748 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:00:02 compute-0 nova_compute[257802]: 2025-10-02 12:00:02.815 2 INFO nova.compute.manager [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Took 4.94 seconds to spawn the instance on the hypervisor.
Oct 02 12:00:02 compute-0 nova_compute[257802]: 2025-10-02 12:00:02.816 2 DEBUG nova.compute.manager [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:00:02 compute-0 nova_compute[257802]: 2025-10-02 12:00:02.828 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:00:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 231 MiB data, 364 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.2 MiB/s wr, 224 op/s
Oct 02 12:00:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:00:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:02.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:00:02 compute-0 nova_compute[257802]: 2025-10-02 12:00:02.947 2 INFO nova.compute.manager [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Took 6.53 seconds to build instance.
Oct 02 12:00:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:00:02 compute-0 nova_compute[257802]: 2025-10-02 12:00:02.997 2 DEBUG oslo_concurrency.lockutils [None req-f7d57e3b-e762-4739-ae31-2e8c4c190927 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "c5bc8428-581c-4ea6-85d8-6f153a1ee723" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:00:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:03.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:03 compute-0 ceph-mon[73607]: pgmap v983: 305 pgs: 305 active+clean; 231 MiB data, 364 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.2 MiB/s wr, 224 op/s
Oct 02 12:00:03 compute-0 nova_compute[257802]: 2025-10-02 12:00:03.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/134888817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:00:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 246 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.2 MiB/s wr, 195 op/s
Oct 02 12:00:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:04.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:05.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:05 compute-0 nova_compute[257802]: 2025-10-02 12:00:05.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:05 compute-0 ceph-mon[73607]: pgmap v984: 305 pgs: 305 active+clean; 246 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.2 MiB/s wr, 195 op/s
Oct 02 12:00:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1217306547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:00:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 246 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.5 MiB/s wr, 205 op/s
Oct 02 12:00:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:06.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:07 compute-0 ceph-mon[73607]: pgmap v985: 305 pgs: 305 active+clean; 246 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.5 MiB/s wr, 205 op/s
Oct 02 12:00:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:07.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:07 compute-0 nova_compute[257802]: 2025-10-02 12:00:07.906 2 DEBUG oslo_concurrency.lockutils [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquiring lock "refresh_cache-c5bc8428-581c-4ea6-85d8-6f153a1ee723" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:00:07 compute-0 nova_compute[257802]: 2025-10-02 12:00:07.907 2 DEBUG oslo_concurrency.lockutils [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquired lock "refresh_cache-c5bc8428-581c-4ea6-85d8-6f153a1ee723" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:00:07 compute-0 nova_compute[257802]: 2025-10-02 12:00:07.907 2 DEBUG nova.network.neutron [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:00:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:00:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4210241670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:00:08 compute-0 nova_compute[257802]: 2025-10-02 12:00:08.287 2 DEBUG nova.network.neutron [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:00:08 compute-0 nova_compute[257802]: 2025-10-02 12:00:08.576 2 DEBUG nova.network.neutron [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:00:08 compute-0 nova_compute[257802]: 2025-10-02 12:00:08.625 2 DEBUG oslo_concurrency.lockutils [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Releasing lock "refresh_cache-c5bc8428-581c-4ea6-85d8-6f153a1ee723" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:00:08 compute-0 sudo[267360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:08 compute-0 sudo[267360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:08 compute-0 sudo[267360]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:08 compute-0 sudo[267385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:08 compute-0 sudo[267385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:08 compute-0 sudo[267385]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 248 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.2 MiB/s wr, 189 op/s
Oct 02 12:00:08 compute-0 nova_compute[257802]: 2025-10-02 12:00:08.872 2 DEBUG nova.virt.libvirt.driver [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Oct 02 12:00:08 compute-0 nova_compute[257802]: 2025-10-02 12:00:08.873 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Creating file /var/lib/nova/instances/c5bc8428-581c-4ea6-85d8-6f153a1ee723/68047b64847042869d462b8b99d39942.tmp on remote host 192.168.122.101 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Oct 02 12:00:08 compute-0 nova_compute[257802]: 2025-10-02 12:00:08.873 2 DEBUG oslo_concurrency.processutils [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/c5bc8428-581c-4ea6-85d8-6f153a1ee723/68047b64847042869d462b8b99d39942.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:00:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:08.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:08 compute-0 nova_compute[257802]: 2025-10-02 12:00:08.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:09.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:09 compute-0 ceph-mon[73607]: pgmap v986: 305 pgs: 305 active+clean; 248 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.2 MiB/s wr, 189 op/s
Oct 02 12:00:09 compute-0 nova_compute[257802]: 2025-10-02 12:00:09.407 2 DEBUG oslo_concurrency.processutils [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/c5bc8428-581c-4ea6-85d8-6f153a1ee723/68047b64847042869d462b8b99d39942.tmp" returned: 1 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:00:09 compute-0 nova_compute[257802]: 2025-10-02 12:00:09.408 2 DEBUG oslo_concurrency.processutils [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] 'ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/c5bc8428-581c-4ea6-85d8-6f153a1ee723/68047b64847042869d462b8b99d39942.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Oct 02 12:00:09 compute-0 nova_compute[257802]: 2025-10-02 12:00:09.408 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Creating directory /var/lib/nova/instances/c5bc8428-581c-4ea6-85d8-6f153a1ee723 on remote host 192.168.122.101 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Oct 02 12:00:09 compute-0 nova_compute[257802]: 2025-10-02 12:00:09.409 2 DEBUG oslo_concurrency.processutils [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/c5bc8428-581c-4ea6-85d8-6f153a1ee723 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:00:09 compute-0 nova_compute[257802]: 2025-10-02 12:00:09.608 2 DEBUG oslo_concurrency.processutils [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/c5bc8428-581c-4ea6-85d8-6f153a1ee723" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:00:09 compute-0 nova_compute[257802]: 2025-10-02 12:00:09.614 2 DEBUG nova.virt.libvirt.driver [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:00:10 compute-0 nova_compute[257802]: 2025-10-02 12:00:10.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 248 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 154 op/s
Oct 02 12:00:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:00:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:10.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:00:10 compute-0 ceph-mon[73607]: pgmap v987: 305 pgs: 305 active+clean; 248 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 154 op/s
Oct 02 12:00:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:11.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:00:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:00:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:00:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:00:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:00:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:00:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 248 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 481 KiB/s wr, 119 op/s
Oct 02 12:00:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:12.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:00:13 compute-0 ceph-mon[73607]: pgmap v988: 305 pgs: 305 active+clean; 248 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 481 KiB/s wr, 119 op/s
Oct 02 12:00:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:13.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:13 compute-0 nova_compute[257802]: 2025-10-02 12:00:13.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:14 compute-0 sudo[267416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:14 compute-0 sudo[267416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:14 compute-0 sudo[267416]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:14 compute-0 sudo[267441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:00:14 compute-0 sudo[267441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:14 compute-0 sudo[267441]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:14 compute-0 sudo[267466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:14 compute-0 sudo[267466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:14 compute-0 sudo[267466]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 256 MiB data, 371 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 1008 KiB/s wr, 132 op/s
Oct 02 12:00:14 compute-0 sudo[267491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:00:14 compute-0 sudo[267491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:14.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:14 compute-0 ceph-mon[73607]: pgmap v989: 305 pgs: 305 active+clean; 256 MiB data, 371 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 1008 KiB/s wr, 132 op/s
Oct 02 12:00:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:15.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:15 compute-0 nova_compute[257802]: 2025-10-02 12:00:15.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:00:15.209 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:00:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:00:15.210 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:00:15 compute-0 nova_compute[257802]: 2025-10-02 12:00:15.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:15 compute-0 sudo[267491]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:00:15 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:00:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:00:15 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:00:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:00:15 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:00:15 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev a8446c4b-327e-4d24-b857-a9b490e14efb does not exist
Oct 02 12:00:15 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 797ff34e-b315-48ad-b56b-f12c2a0c54cc does not exist
Oct 02 12:00:15 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev c2a67a26-344a-4cb0-9f4b-1ed0a358362a does not exist
Oct 02 12:00:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:00:15 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:00:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:00:15 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:00:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:00:15 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:00:15 compute-0 sudo[267547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:15 compute-0 sudo[267547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:15 compute-0 sudo[267547]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:15 compute-0 sudo[267572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:00:15 compute-0 sudo[267572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:15 compute-0 sudo[267572]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:15 compute-0 sudo[267597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:15 compute-0 sudo[267597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:15 compute-0 sudo[267597]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:15 compute-0 sudo[267622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:00:15 compute-0 sudo[267622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:16 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:00:16 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:00:16 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:00:16 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:00:16 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:00:16 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:00:16 compute-0 podman[267690]: 2025-10-02 12:00:16.086910999 +0000 UTC m=+0.025528012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:00:16 compute-0 podman[267690]: 2025-10-02 12:00:16.336658989 +0000 UTC m=+0.275275982 container create e66e91a4e9ef2e042648f8ebc305e9476a24fcf968f94a878bfd362f4ab7619c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_nash, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 12:00:16 compute-0 systemd[1]: Started libpod-conmon-e66e91a4e9ef2e042648f8ebc305e9476a24fcf968f94a878bfd362f4ab7619c.scope.
Oct 02 12:00:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:00:16 compute-0 podman[267690]: 2025-10-02 12:00:16.501166337 +0000 UTC m=+0.439783340 container init e66e91a4e9ef2e042648f8ebc305e9476a24fcf968f94a878bfd362f4ab7619c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:00:16 compute-0 podman[267690]: 2025-10-02 12:00:16.509147903 +0000 UTC m=+0.447764906 container start e66e91a4e9ef2e042648f8ebc305e9476a24fcf968f94a878bfd362f4ab7619c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_nash, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:00:16 compute-0 wizardly_nash[267707]: 167 167
Oct 02 12:00:16 compute-0 systemd[1]: libpod-e66e91a4e9ef2e042648f8ebc305e9476a24fcf968f94a878bfd362f4ab7619c.scope: Deactivated successfully.
Oct 02 12:00:16 compute-0 podman[267690]: 2025-10-02 12:00:16.533520285 +0000 UTC m=+0.472137278 container attach e66e91a4e9ef2e042648f8ebc305e9476a24fcf968f94a878bfd362f4ab7619c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_nash, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:00:16 compute-0 podman[267690]: 2025-10-02 12:00:16.534414066 +0000 UTC m=+0.473031059 container died e66e91a4e9ef2e042648f8ebc305e9476a24fcf968f94a878bfd362f4ab7619c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_nash, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:00:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f8db84b32be58fc97114625a71ad4dd9d5f46b608086882d7c34618d49695c2-merged.mount: Deactivated successfully.
Oct 02 12:00:16 compute-0 podman[267690]: 2025-10-02 12:00:16.774725324 +0000 UTC m=+0.713342317 container remove e66e91a4e9ef2e042648f8ebc305e9476a24fcf968f94a878bfd362f4ab7619c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_nash, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:00:16 compute-0 systemd[1]: libpod-conmon-e66e91a4e9ef2e042648f8ebc305e9476a24fcf968f94a878bfd362f4ab7619c.scope: Deactivated successfully.
Oct 02 12:00:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 265 MiB data, 383 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.6 MiB/s wr, 76 op/s
Oct 02 12:00:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:16.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:16 compute-0 podman[267731]: 2025-10-02 12:00:16.953301809 +0000 UTC m=+0.052130408 container create 3a91e04ebb9b498b754907ce3d7c828dc82f632b91b7b80cbf5d1cbdc2398296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 12:00:17 compute-0 podman[267731]: 2025-10-02 12:00:16.929904931 +0000 UTC m=+0.028733550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:00:17 compute-0 systemd[1]: Started libpod-conmon-3a91e04ebb9b498b754907ce3d7c828dc82f632b91b7b80cbf5d1cbdc2398296.scope.
Oct 02 12:00:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:17.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:00:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3ac51c98257748c2a9d16e7142183ae6610da40e0be3b1f78da045ef27680b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:00:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3ac51c98257748c2a9d16e7142183ae6610da40e0be3b1f78da045ef27680b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:00:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3ac51c98257748c2a9d16e7142183ae6610da40e0be3b1f78da045ef27680b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:00:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3ac51c98257748c2a9d16e7142183ae6610da40e0be3b1f78da045ef27680b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:00:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3ac51c98257748c2a9d16e7142183ae6610da40e0be3b1f78da045ef27680b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:00:17 compute-0 ceph-mon[73607]: pgmap v990: 305 pgs: 305 active+clean; 265 MiB data, 383 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.6 MiB/s wr, 76 op/s
Oct 02 12:00:17 compute-0 podman[267731]: 2025-10-02 12:00:17.176463223 +0000 UTC m=+0.275291842 container init 3a91e04ebb9b498b754907ce3d7c828dc82f632b91b7b80cbf5d1cbdc2398296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 12:00:17 compute-0 podman[267731]: 2025-10-02 12:00:17.185881615 +0000 UTC m=+0.284710214 container start 3a91e04ebb9b498b754907ce3d7c828dc82f632b91b7b80cbf5d1cbdc2398296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_proskuriakova, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:00:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:00:17.212 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:00:17 compute-0 podman[267731]: 2025-10-02 12:00:17.224807525 +0000 UTC m=+0.323636144 container attach 3a91e04ebb9b498b754907ce3d7c828dc82f632b91b7b80cbf5d1cbdc2398296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_proskuriakova, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:00:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:00:18 compute-0 vibrant_proskuriakova[267745]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:00:18 compute-0 vibrant_proskuriakova[267745]: --> relative data size: 1.0
Oct 02 12:00:18 compute-0 vibrant_proskuriakova[267745]: --> All data devices are unavailable
Oct 02 12:00:18 compute-0 systemd[1]: libpod-3a91e04ebb9b498b754907ce3d7c828dc82f632b91b7b80cbf5d1cbdc2398296.scope: Deactivated successfully.
Oct 02 12:00:18 compute-0 podman[267731]: 2025-10-02 12:00:18.071458988 +0000 UTC m=+1.170287597 container died 3a91e04ebb9b498b754907ce3d7c828dc82f632b91b7b80cbf5d1cbdc2398296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:00:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef3ac51c98257748c2a9d16e7142183ae6610da40e0be3b1f78da045ef27680b-merged.mount: Deactivated successfully.
Oct 02 12:00:18 compute-0 podman[267731]: 2025-10-02 12:00:18.616753768 +0000 UTC m=+1.715582367 container remove 3a91e04ebb9b498b754907ce3d7c828dc82f632b91b7b80cbf5d1cbdc2398296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_proskuriakova, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:00:18 compute-0 sudo[267622]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:18 compute-0 systemd[1]: libpod-conmon-3a91e04ebb9b498b754907ce3d7c828dc82f632b91b7b80cbf5d1cbdc2398296.scope: Deactivated successfully.
Oct 02 12:00:18 compute-0 podman[267761]: 2025-10-02 12:00:18.678389728 +0000 UTC m=+0.564978757 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:00:18 compute-0 sudo[267788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:18 compute-0 sudo[267788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:18 compute-0 sudo[267788]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:18 compute-0 sudo[267820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:00:18 compute-0 sudo[267820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:18 compute-0 sudo[267820]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:18 compute-0 sudo[267845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:18 compute-0 sudo[267845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:18 compute-0 sudo[267845]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:18 compute-0 sudo[267870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:00:18 compute-0 sudo[267870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 277 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct 02 12:00:18 compute-0 nova_compute[257802]: 2025-10-02 12:00:18.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:18.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:19 compute-0 ceph-mon[73607]: pgmap v991: 305 pgs: 305 active+clean; 277 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct 02 12:00:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:19.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:19 compute-0 podman[267937]: 2025-10-02 12:00:19.219723799 +0000 UTC m=+0.085806737 container create 738019f04b77001215db50efffba581a93a6b8036fe7fd5a7f209680739842a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 12:00:19 compute-0 podman[267937]: 2025-10-02 12:00:19.159097414 +0000 UTC m=+0.025180372 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:00:19 compute-0 systemd[1]: Started libpod-conmon-738019f04b77001215db50efffba581a93a6b8036fe7fd5a7f209680739842a4.scope.
Oct 02 12:00:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:00:19 compute-0 podman[267937]: 2025-10-02 12:00:19.299107627 +0000 UTC m=+0.165190605 container init 738019f04b77001215db50efffba581a93a6b8036fe7fd5a7f209680739842a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hypatia, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:00:19 compute-0 podman[267937]: 2025-10-02 12:00:19.305783682 +0000 UTC m=+0.171866620 container start 738019f04b77001215db50efffba581a93a6b8036fe7fd5a7f209680739842a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:00:19 compute-0 magical_hypatia[267953]: 167 167
Oct 02 12:00:19 compute-0 systemd[1]: libpod-738019f04b77001215db50efffba581a93a6b8036fe7fd5a7f209680739842a4.scope: Deactivated successfully.
Oct 02 12:00:19 compute-0 podman[267937]: 2025-10-02 12:00:19.322321789 +0000 UTC m=+0.188404727 container attach 738019f04b77001215db50efffba581a93a6b8036fe7fd5a7f209680739842a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:00:19 compute-0 podman[267937]: 2025-10-02 12:00:19.322688369 +0000 UTC m=+0.188771317 container died 738019f04b77001215db50efffba581a93a6b8036fe7fd5a7f209680739842a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:00:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-365bbb4979efe45921317153a3270c9bd8ad52075201408294439c4309aeba05-merged.mount: Deactivated successfully.
Oct 02 12:00:19 compute-0 podman[267937]: 2025-10-02 12:00:19.460291963 +0000 UTC m=+0.326374901 container remove 738019f04b77001215db50efffba581a93a6b8036fe7fd5a7f209680739842a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hypatia, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:00:19 compute-0 systemd[1]: libpod-conmon-738019f04b77001215db50efffba581a93a6b8036fe7fd5a7f209680739842a4.scope: Deactivated successfully.
Oct 02 12:00:19 compute-0 nova_compute[257802]: 2025-10-02 12:00:19.658 2 DEBUG nova.virt.libvirt.driver [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:00:19 compute-0 podman[267977]: 2025-10-02 12:00:19.691594628 +0000 UTC m=+0.076156289 container create 44ed5fae00a115698ef00d5e2c3e0550dc0dddc4b92e566b9104aee1c976ca03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 12:00:19 compute-0 systemd[1]: Started libpod-conmon-44ed5fae00a115698ef00d5e2c3e0550dc0dddc4b92e566b9104aee1c976ca03.scope.
Oct 02 12:00:19 compute-0 podman[267977]: 2025-10-02 12:00:19.662555881 +0000 UTC m=+0.047117542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:00:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:00:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923bce8282a98cdd32b2ed403c4b0710a931411b1a61457f8cb2b2d944eda59d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:00:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923bce8282a98cdd32b2ed403c4b0710a931411b1a61457f8cb2b2d944eda59d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:00:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923bce8282a98cdd32b2ed403c4b0710a931411b1a61457f8cb2b2d944eda59d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:00:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923bce8282a98cdd32b2ed403c4b0710a931411b1a61457f8cb2b2d944eda59d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:00:19 compute-0 podman[267977]: 2025-10-02 12:00:19.793396749 +0000 UTC m=+0.177958430 container init 44ed5fae00a115698ef00d5e2c3e0550dc0dddc4b92e566b9104aee1c976ca03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_pike, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:00:19 compute-0 podman[267977]: 2025-10-02 12:00:19.800691609 +0000 UTC m=+0.185253270 container start 44ed5fae00a115698ef00d5e2c3e0550dc0dddc4b92e566b9104aee1c976ca03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_pike, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:00:19 compute-0 podman[267977]: 2025-10-02 12:00:19.833219651 +0000 UTC m=+0.217781312 container attach 44ed5fae00a115698ef00d5e2c3e0550dc0dddc4b92e566b9104aee1c976ca03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_pike, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 12:00:20 compute-0 nova_compute[257802]: 2025-10-02 12:00:20.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]: {
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]:     "1": [
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]:         {
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]:             "devices": [
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]:                 "/dev/loop3"
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]:             ],
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]:             "lv_name": "ceph_lv0",
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]:             "lv_size": "7511998464",
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]:             "name": "ceph_lv0",
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]:             "tags": {
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]:                 "ceph.cluster_name": "ceph",
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]:                 "ceph.crush_device_class": "",
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]:                 "ceph.encrypted": "0",
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]:                 "ceph.osd_id": "1",
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]:                 "ceph.type": "block",
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]:                 "ceph.vdo": "0"
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]:             },
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]:             "type": "block",
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]:             "vg_name": "ceph_vg0"
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]:         }
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]:     ]
Oct 02 12:00:20 compute-0 ecstatic_pike[267993]: }
Oct 02 12:00:20 compute-0 systemd[1]: libpod-44ed5fae00a115698ef00d5e2c3e0550dc0dddc4b92e566b9104aee1c976ca03.scope: Deactivated successfully.
Oct 02 12:00:20 compute-0 conmon[267993]: conmon 44ed5fae00a115698ef0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-44ed5fae00a115698ef00d5e2c3e0550dc0dddc4b92e566b9104aee1c976ca03.scope/container/memory.events
Oct 02 12:00:20 compute-0 podman[267977]: 2025-10-02 12:00:20.604530475 +0000 UTC m=+0.989092136 container died 44ed5fae00a115698ef00d5e2c3e0550dc0dddc4b92e566b9104aee1c976ca03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_pike, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 12:00:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-923bce8282a98cdd32b2ed403c4b0710a931411b1a61457f8cb2b2d944eda59d-merged.mount: Deactivated successfully.
Oct 02 12:00:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 281 MiB data, 395 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct 02 12:00:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:00:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:20.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:00:21 compute-0 podman[267977]: 2025-10-02 12:00:21.029291363 +0000 UTC m=+1.413853024 container remove 44ed5fae00a115698ef00d5e2c3e0550dc0dddc4b92e566b9104aee1c976ca03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:00:21 compute-0 systemd[1]: libpod-conmon-44ed5fae00a115698ef00d5e2c3e0550dc0dddc4b92e566b9104aee1c976ca03.scope: Deactivated successfully.
Oct 02 12:00:21 compute-0 sudo[267870]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:21 compute-0 ceph-mon[73607]: pgmap v992: 305 pgs: 305 active+clean; 281 MiB data, 395 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct 02 12:00:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:21.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:21 compute-0 sudo[268014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:21 compute-0 sudo[268014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:21 compute-0 sudo[268014]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:21 compute-0 sudo[268039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:00:21 compute-0 sudo[268039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:21 compute-0 sudo[268039]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:21 compute-0 sudo[268064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:21 compute-0 sudo[268064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:21 compute-0 sudo[268064]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:21 compute-0 sudo[268089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:00:21 compute-0 sudo[268089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:21 compute-0 podman[268154]: 2025-10-02 12:00:21.660602264 +0000 UTC m=+0.054970407 container create 1e3774e2d133fe6be418187923086e199ca77ecb16f00bbb9107f83dacf4ce2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_faraday, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 12:00:21 compute-0 systemd[1]: Started libpod-conmon-1e3774e2d133fe6be418187923086e199ca77ecb16f00bbb9107f83dacf4ce2e.scope.
Oct 02 12:00:21 compute-0 podman[268154]: 2025-10-02 12:00:21.628896902 +0000 UTC m=+0.023264965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:00:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:00:21 compute-0 podman[268154]: 2025-10-02 12:00:21.910475647 +0000 UTC m=+0.304843710 container init 1e3774e2d133fe6be418187923086e199ca77ecb16f00bbb9107f83dacf4ce2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_faraday, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:00:21 compute-0 podman[268154]: 2025-10-02 12:00:21.916764211 +0000 UTC m=+0.311132254 container start 1e3774e2d133fe6be418187923086e199ca77ecb16f00bbb9107f83dacf4ce2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_faraday, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:00:21 compute-0 romantic_faraday[268171]: 167 167
Oct 02 12:00:21 compute-0 systemd[1]: libpod-1e3774e2d133fe6be418187923086e199ca77ecb16f00bbb9107f83dacf4ce2e.scope: Deactivated successfully.
Oct 02 12:00:21 compute-0 podman[268154]: 2025-10-02 12:00:21.948599237 +0000 UTC m=+0.342967380 container attach 1e3774e2d133fe6be418187923086e199ca77ecb16f00bbb9107f83dacf4ce2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:00:21 compute-0 podman[268154]: 2025-10-02 12:00:21.948989276 +0000 UTC m=+0.343357349 container died 1e3774e2d133fe6be418187923086e199ca77ecb16f00bbb9107f83dacf4ce2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 12:00:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c6d05515f099bd208dde6e1dda3de3948e64036facf7bd33170b07703a1d037-merged.mount: Deactivated successfully.
Oct 02 12:00:22 compute-0 podman[268154]: 2025-10-02 12:00:22.277290844 +0000 UTC m=+0.671658887 container remove 1e3774e2d133fe6be418187923086e199ca77ecb16f00bbb9107f83dacf4ce2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 12:00:22 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000008.scope: Deactivated successfully.
Oct 02 12:00:22 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000008.scope: Consumed 14.671s CPU time.
Oct 02 12:00:22 compute-0 systemd-machined[211836]: Machine qemu-5-instance-00000008 terminated.
Oct 02 12:00:22 compute-0 systemd[1]: libpod-conmon-1e3774e2d133fe6be418187923086e199ca77ecb16f00bbb9107f83dacf4ce2e.scope: Deactivated successfully.
Oct 02 12:00:22 compute-0 podman[268200]: 2025-10-02 12:00:22.490989286 +0000 UTC m=+0.086768552 container create efc01c5b28fd7975b2b0dae81acd8484e5c5e457bff6e12ee9c438864ad81a26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 12:00:22 compute-0 podman[268200]: 2025-10-02 12:00:22.425787707 +0000 UTC m=+0.021567023 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:00:22 compute-0 systemd[1]: Started libpod-conmon-efc01c5b28fd7975b2b0dae81acd8484e5c5e457bff6e12ee9c438864ad81a26.scope.
Oct 02 12:00:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c379633ef8f03582da050862985d8528b2e7395ed5de3c86a6e296554e5f024/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c379633ef8f03582da050862985d8528b2e7395ed5de3c86a6e296554e5f024/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c379633ef8f03582da050862985d8528b2e7395ed5de3c86a6e296554e5f024/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c379633ef8f03582da050862985d8528b2e7395ed5de3c86a6e296554e5f024/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:00:22 compute-0 nova_compute[257802]: 2025-10-02 12:00:22.670 2 INFO nova.virt.libvirt.driver [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Instance shutdown successfully after 13 seconds.
Oct 02 12:00:22 compute-0 nova_compute[257802]: 2025-10-02 12:00:22.676 2 INFO nova.virt.libvirt.driver [-] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Instance destroyed successfully.
Oct 02 12:00:22 compute-0 nova_compute[257802]: 2025-10-02 12:00:22.679 2 DEBUG nova.virt.libvirt.driver [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:00:22 compute-0 nova_compute[257802]: 2025-10-02 12:00:22.680 2 DEBUG nova.virt.libvirt.driver [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:00:22 compute-0 podman[268200]: 2025-10-02 12:00:22.722430074 +0000 UTC m=+0.318209360 container init efc01c5b28fd7975b2b0dae81acd8484e5c5e457bff6e12ee9c438864ad81a26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_rubin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Oct 02 12:00:22 compute-0 podman[268200]: 2025-10-02 12:00:22.730081812 +0000 UTC m=+0.325861078 container start efc01c5b28fd7975b2b0dae81acd8484e5c5e457bff6e12ee9c438864ad81a26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_rubin, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 12:00:22 compute-0 podman[268200]: 2025-10-02 12:00:22.761471427 +0000 UTC m=+0.357250693 container attach efc01c5b28fd7975b2b0dae81acd8484e5c5e457bff6e12ee9c438864ad81a26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:00:22 compute-0 nova_compute[257802]: 2025-10-02 12:00:22.781 2 DEBUG oslo_concurrency.lockutils [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquiring lock "c5bc8428-581c-4ea6-85d8-6f153a1ee723-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:00:22 compute-0 nova_compute[257802]: 2025-10-02 12:00:22.782 2 DEBUG oslo_concurrency.lockutils [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "c5bc8428-581c-4ea6-85d8-6f153a1ee723-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:00:22 compute-0 nova_compute[257802]: 2025-10-02 12:00:22.783 2 DEBUG oslo_concurrency.lockutils [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "c5bc8428-581c-4ea6-85d8-6f153a1ee723-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:00:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 281 MiB data, 395 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct 02 12:00:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:22.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:00:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:23.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:23 compute-0 ceph-mon[73607]: pgmap v993: 305 pgs: 305 active+clean; 281 MiB data, 395 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct 02 12:00:23 compute-0 dreamy_rubin[268218]: {
Oct 02 12:00:23 compute-0 dreamy_rubin[268218]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:00:23 compute-0 dreamy_rubin[268218]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:00:23 compute-0 dreamy_rubin[268218]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:00:23 compute-0 dreamy_rubin[268218]:         "osd_id": 1,
Oct 02 12:00:23 compute-0 dreamy_rubin[268218]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:00:23 compute-0 dreamy_rubin[268218]:         "type": "bluestore"
Oct 02 12:00:23 compute-0 dreamy_rubin[268218]:     }
Oct 02 12:00:23 compute-0 dreamy_rubin[268218]: }
Oct 02 12:00:23 compute-0 systemd[1]: libpod-efc01c5b28fd7975b2b0dae81acd8484e5c5e457bff6e12ee9c438864ad81a26.scope: Deactivated successfully.
Oct 02 12:00:23 compute-0 podman[268200]: 2025-10-02 12:00:23.653593801 +0000 UTC m=+1.249373067 container died efc01c5b28fd7975b2b0dae81acd8484e5c5e457bff6e12ee9c438864ad81a26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:00:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c379633ef8f03582da050862985d8528b2e7395ed5de3c86a6e296554e5f024-merged.mount: Deactivated successfully.
Oct 02 12:00:23 compute-0 nova_compute[257802]: 2025-10-02 12:00:23.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:24 compute-0 podman[268200]: 2025-10-02 12:00:24.233375991 +0000 UTC m=+1.829155247 container remove efc01c5b28fd7975b2b0dae81acd8484e5c5e457bff6e12ee9c438864ad81a26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_rubin, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:00:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Oct 02 12:00:24 compute-0 sudo[268089]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:00:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Oct 02 12:00:24 compute-0 systemd[1]: libpod-conmon-efc01c5b28fd7975b2b0dae81acd8484e5c5e457bff6e12ee9c438864ad81a26.scope: Deactivated successfully.
Oct 02 12:00:24 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Oct 02 12:00:24 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:00:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:00:24 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:00:24 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev aac2164b-fbd5-4530-9a6c-616476c146a2 does not exist
Oct 02 12:00:24 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 80043f13-57ab-4046-b029-c1d25c99bf27 does not exist
Oct 02 12:00:24 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 7321bcaa-3401-4ae4-adc4-4f6d14784609 does not exist
Oct 02 12:00:24 compute-0 sudo[268252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:24 compute-0 sudo[268252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:24 compute-0 sudo[268252]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:24 compute-0 sudo[268277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:00:24 compute-0 sudo[268277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:24 compute-0 sudo[268277]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 320 MiB data, 411 MiB used, 21 GiB / 21 GiB avail; 394 KiB/s rd, 3.6 MiB/s wr, 88 op/s
Oct 02 12:00:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:24.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:25.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:25 compute-0 nova_compute[257802]: 2025-10-02 12:00:25.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:25 compute-0 ceph-mon[73607]: osdmap e141: 3 total, 3 up, 3 in
Oct 02 12:00:25 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:00:25 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:00:25 compute-0 ceph-mon[73607]: pgmap v995: 305 pgs: 305 active+clean; 320 MiB data, 411 MiB used, 21 GiB / 21 GiB avail; 394 KiB/s rd, 3.6 MiB/s wr, 88 op/s
Oct 02 12:00:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1469888007' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:00:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 327 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 239 KiB/s rd, 2.8 MiB/s wr, 105 op/s
Oct 02 12:00:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4219610136' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:00:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:00:26.917 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:00:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:00:26.918 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:00:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:00:26.918 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:00:26 compute-0 podman[268304]: 2025-10-02 12:00:26.932957986 +0000 UTC m=+0.059159190 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, container_name=iscsid)
Oct 02 12:00:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:26.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:26 compute-0 podman[268303]: 2025-10-02 12:00:26.940758129 +0000 UTC m=+0.068980493 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 12:00:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:27.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:27 compute-0 ceph-mon[73607]: pgmap v996: 305 pgs: 305 active+clean; 327 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 239 KiB/s rd, 2.8 MiB/s wr, 105 op/s
Oct 02 12:00:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:00:28 compute-0 nova_compute[257802]: 2025-10-02 12:00:28.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:00:28 compute-0 nova_compute[257802]: 2025-10-02 12:00:28.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:00:28 compute-0 nova_compute[257802]: 2025-10-02 12:00:28.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:00:28 compute-0 nova_compute[257802]: 2025-10-02 12:00:28.100 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:00:28 compute-0 nova_compute[257802]: 2025-10-02 12:00:28.100 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:00:28 compute-0 nova_compute[257802]: 2025-10-02 12:00:28.135 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:00:28 compute-0 nova_compute[257802]: 2025-10-02 12:00:28.136 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:00:28 compute-0 nova_compute[257802]: 2025-10-02 12:00:28.136 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:00:28 compute-0 nova_compute[257802]: 2025-10-02 12:00:28.136 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:00:28 compute-0 nova_compute[257802]: 2025-10-02 12:00:28.136 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:00:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:00:28 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3350541356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:00:28 compute-0 nova_compute[257802]: 2025-10-02 12:00:28.594 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:00:28 compute-0 nova_compute[257802]: 2025-10-02 12:00:28.673 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:00:28 compute-0 nova_compute[257802]: 2025-10-02 12:00:28.674 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:00:28 compute-0 nova_compute[257802]: 2025-10-02 12:00:28.677 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:00:28 compute-0 nova_compute[257802]: 2025-10-02 12:00:28.677 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:00:28 compute-0 nova_compute[257802]: 2025-10-02 12:00:28.680 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:00:28 compute-0 nova_compute[257802]: 2025-10-02 12:00:28.680 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:00:28 compute-0 nova_compute[257802]: 2025-10-02 12:00:28.825 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:00:28 compute-0 nova_compute[257802]: 2025-10-02 12:00:28.826 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4495MB free_disk=20.851749420166016GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:00:28 compute-0 nova_compute[257802]: 2025-10-02 12:00:28.826 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:00:28 compute-0 nova_compute[257802]: 2025-10-02 12:00:28.826 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:00:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 327 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 306 KiB/s rd, 2.2 MiB/s wr, 86 op/s
Oct 02 12:00:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:28.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:28 compute-0 nova_compute[257802]: 2025-10-02 12:00:28.952 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Migration for instance c5bc8428-581c-4ea6-85d8-6f153a1ee723 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Oct 02 12:00:28 compute-0 nova_compute[257802]: 2025-10-02 12:00:28.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:28 compute-0 sudo[268365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:28 compute-0 sudo[268365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:28 compute-0 sudo[268365]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/189751903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:00:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3350541356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:00:29 compute-0 ceph-mon[73607]: pgmap v997: 305 pgs: 305 active+clean; 327 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 306 KiB/s rd, 2.2 MiB/s wr, 86 op/s
Oct 02 12:00:29 compute-0 nova_compute[257802]: 2025-10-02 12:00:29.026 2 INFO nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Updating resource usage from migration a7a9cc47-fbe3-422d-995e-09749ba0ce00
Oct 02 12:00:29 compute-0 nova_compute[257802]: 2025-10-02 12:00:29.026 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Starting to track outgoing migration a7a9cc47-fbe3-422d-995e-09749ba0ce00 with flavor cef129e5-cce4-4465-9674-03d3559e8a14 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444
Oct 02 12:00:29 compute-0 sudo[268390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:29 compute-0 sudo[268390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:29 compute-0 sudo[268390]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:29 compute-0 nova_compute[257802]: 2025-10-02 12:00:29.068 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance b4e4932c-8129-4ceb-95ef-3a612ef502f9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:00:29 compute-0 nova_compute[257802]: 2025-10-02 12:00:29.069 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 530e7ace-2feb-4a9e-9430-5a1bfc678d22 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:00:29 compute-0 nova_compute[257802]: 2025-10-02 12:00:29.069 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Migration a7a9cc47-fbe3-422d-995e-09749ba0ce00 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Oct 02 12:00:29 compute-0 nova_compute[257802]: 2025-10-02 12:00:29.069 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:00:29 compute-0 nova_compute[257802]: 2025-10-02 12:00:29.069 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:00:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:29.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:29 compute-0 nova_compute[257802]: 2025-10-02 12:00:29.160 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:00:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:00:29 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2633155890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:00:29 compute-0 nova_compute[257802]: 2025-10-02 12:00:29.664 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:00:29 compute-0 nova_compute[257802]: 2025-10-02 12:00:29.669 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:00:29 compute-0 nova_compute[257802]: 2025-10-02 12:00:29.862 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:00:29 compute-0 nova_compute[257802]: 2025-10-02 12:00:29.886 2 DEBUG oslo_concurrency.lockutils [None req-ab604150-ec68-4ca7-9f9a-c76ac9e45b2d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquiring lock "c5bc8428-581c-4ea6-85d8-6f153a1ee723" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:00:29 compute-0 nova_compute[257802]: 2025-10-02 12:00:29.887 2 DEBUG oslo_concurrency.lockutils [None req-ab604150-ec68-4ca7-9f9a-c76ac9e45b2d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "c5bc8428-581c-4ea6-85d8-6f153a1ee723" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:00:29 compute-0 nova_compute[257802]: 2025-10-02 12:00:29.887 2 DEBUG nova.compute.manager [None req-ab604150-ec68-4ca7-9f9a-c76ac9e45b2d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Going to confirm migration 3 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Oct 02 12:00:29 compute-0 nova_compute[257802]: 2025-10-02 12:00:29.967 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:00:29 compute-0 nova_compute[257802]: 2025-10-02 12:00:29.967 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:00:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2811680153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:00:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1987659302' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:00:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2633155890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:00:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/656861087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:00:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1036707683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:00:30 compute-0 nova_compute[257802]: 2025-10-02 12:00:30.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:30 compute-0 nova_compute[257802]: 2025-10-02 12:00:30.238 2 DEBUG oslo_concurrency.lockutils [None req-ab604150-ec68-4ca7-9f9a-c76ac9e45b2d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquiring lock "refresh_cache-c5bc8428-581c-4ea6-85d8-6f153a1ee723" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:00:30 compute-0 nova_compute[257802]: 2025-10-02 12:00:30.238 2 DEBUG oslo_concurrency.lockutils [None req-ab604150-ec68-4ca7-9f9a-c76ac9e45b2d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquired lock "refresh_cache-c5bc8428-581c-4ea6-85d8-6f153a1ee723" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:00:30 compute-0 nova_compute[257802]: 2025-10-02 12:00:30.238 2 DEBUG nova.network.neutron [None req-ab604150-ec68-4ca7-9f9a-c76ac9e45b2d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:00:30 compute-0 nova_compute[257802]: 2025-10-02 12:00:30.238 2 DEBUG nova.objects.instance [None req-ab604150-ec68-4ca7-9f9a-c76ac9e45b2d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lazy-loading 'info_cache' on Instance uuid c5bc8428-581c-4ea6-85d8-6f153a1ee723 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:00:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:00:30 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2460692152' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:00:30 compute-0 nova_compute[257802]: 2025-10-02 12:00:30.496 2 DEBUG nova.network.neutron [None req-ab604150-ec68-4ca7-9f9a-c76ac9e45b2d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:00:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 327 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Oct 02 12:00:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:30.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:30 compute-0 nova_compute[257802]: 2025-10-02 12:00:30.959 2 DEBUG nova.network.neutron [None req-ab604150-ec68-4ca7-9f9a-c76ac9e45b2d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:00:30 compute-0 nova_compute[257802]: 2025-10-02 12:00:30.965 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:00:30 compute-0 nova_compute[257802]: 2025-10-02 12:00:30.965 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:00:30 compute-0 nova_compute[257802]: 2025-10-02 12:00:30.965 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:00:30 compute-0 podman[268438]: 2025-10-02 12:00:30.983869592 +0000 UTC m=+0.122871041 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:00:31 compute-0 nova_compute[257802]: 2025-10-02 12:00:31.011 2 DEBUG oslo_concurrency.lockutils [None req-ab604150-ec68-4ca7-9f9a-c76ac9e45b2d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Releasing lock "refresh_cache-c5bc8428-581c-4ea6-85d8-6f153a1ee723" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:00:31 compute-0 nova_compute[257802]: 2025-10-02 12:00:31.011 2 DEBUG nova.objects.instance [None req-ab604150-ec68-4ca7-9f9a-c76ac9e45b2d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lazy-loading 'migration_context' on Instance uuid c5bc8428-581c-4ea6-85d8-6f153a1ee723 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:00:31 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2460692152' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:00:31 compute-0 ceph-mon[73607]: pgmap v998: 305 pgs: 305 active+clean; 327 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Oct 02 12:00:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:31.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:31 compute-0 nova_compute[257802]: 2025-10-02 12:00:31.192 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-b4e4932c-8129-4ceb-95ef-3a612ef502f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:00:31 compute-0 nova_compute[257802]: 2025-10-02 12:00:31.193 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-b4e4932c-8129-4ceb-95ef-3a612ef502f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:00:31 compute-0 nova_compute[257802]: 2025-10-02 12:00:31.193 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:00:31 compute-0 nova_compute[257802]: 2025-10-02 12:00:31.193 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid b4e4932c-8129-4ceb-95ef-3a612ef502f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:00:31 compute-0 nova_compute[257802]: 2025-10-02 12:00:31.196 2 DEBUG nova.storage.rbd_utils [None req-ab604150-ec68-4ca7-9f9a-c76ac9e45b2d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] removing snapshot(nova-resize) on rbd image(c5bc8428-581c-4ea6-85d8-6f153a1ee723_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:00:31 compute-0 nova_compute[257802]: 2025-10-02 12:00:31.956 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:00:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Oct 02 12:00:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Oct 02 12:00:32 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Oct 02 12:00:32 compute-0 nova_compute[257802]: 2025-10-02 12:00:32.188 2 DEBUG oslo_concurrency.lockutils [None req-ab604150-ec68-4ca7-9f9a-c76ac9e45b2d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:00:32 compute-0 nova_compute[257802]: 2025-10-02 12:00:32.189 2 DEBUG oslo_concurrency.lockutils [None req-ab604150-ec68-4ca7-9f9a-c76ac9e45b2d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:00:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 327 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 141 op/s
Oct 02 12:00:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:00:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:32.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:00:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:00:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:33.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:33 compute-0 ceph-mon[73607]: osdmap e142: 3 total, 3 up, 3 in
Oct 02 12:00:33 compute-0 ceph-mon[73607]: pgmap v1000: 305 pgs: 305 active+clean; 327 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 141 op/s
Oct 02 12:00:33 compute-0 nova_compute[257802]: 2025-10-02 12:00:33.217 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:00:33 compute-0 nova_compute[257802]: 2025-10-02 12:00:33.262 2 DEBUG oslo_concurrency.processutils [None req-ab604150-ec68-4ca7-9f9a-c76ac9e45b2d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:00:33 compute-0 nova_compute[257802]: 2025-10-02 12:00:33.375 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-b4e4932c-8129-4ceb-95ef-3a612ef502f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:00:33 compute-0 nova_compute[257802]: 2025-10-02 12:00:33.376 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:00:33 compute-0 nova_compute[257802]: 2025-10-02 12:00:33.376 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:00:33 compute-0 nova_compute[257802]: 2025-10-02 12:00:33.376 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:00:33 compute-0 nova_compute[257802]: 2025-10-02 12:00:33.377 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:00:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:00:33 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3382290110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:00:33 compute-0 nova_compute[257802]: 2025-10-02 12:00:33.679 2 DEBUG oslo_concurrency.processutils [None req-ab604150-ec68-4ca7-9f9a-c76ac9e45b2d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:00:33 compute-0 nova_compute[257802]: 2025-10-02 12:00:33.684 2 DEBUG nova.compute.provider_tree [None req-ab604150-ec68-4ca7-9f9a-c76ac9e45b2d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:00:33 compute-0 nova_compute[257802]: 2025-10-02 12:00:33.703 2 DEBUG nova.scheduler.client.report [None req-ab604150-ec68-4ca7-9f9a-c76ac9e45b2d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:00:33 compute-0 nova_compute[257802]: 2025-10-02 12:00:33.755 2 DEBUG oslo_concurrency.lockutils [None req-ab604150-ec68-4ca7-9f9a-c76ac9e45b2d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 1.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:00:33 compute-0 nova_compute[257802]: 2025-10-02 12:00:33.917 2 INFO nova.scheduler.client.report [None req-ab604150-ec68-4ca7-9f9a-c76ac9e45b2d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Deleted allocation for migration a7a9cc47-fbe3-422d-995e-09749ba0ce00
Oct 02 12:00:33 compute-0 nova_compute[257802]: 2025-10-02 12:00:33.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:34 compute-0 nova_compute[257802]: 2025-10-02 12:00:34.039 2 DEBUG oslo_concurrency.lockutils [None req-ab604150-ec68-4ca7-9f9a-c76ac9e45b2d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "c5bc8428-581c-4ea6-85d8-6f153a1ee723" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 4.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:00:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3382290110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:00:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 327 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 480 KiB/s wr, 130 op/s
Oct 02 12:00:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:34.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:00:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:35.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:00:35 compute-0 ceph-mon[73607]: pgmap v1001: 305 pgs: 305 active+clean; 327 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 480 KiB/s wr, 130 op/s
Oct 02 12:00:35 compute-0 nova_compute[257802]: 2025-10-02 12:00:35.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:35 compute-0 nova_compute[257802]: 2025-10-02 12:00:35.505 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:00:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3141163756' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:00:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 327 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 6.0 KiB/s wr, 100 op/s
Oct 02 12:00:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:00:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:36.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:00:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:37.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:37 compute-0 ceph-mon[73607]: pgmap v1002: 305 pgs: 305 active+clean; 327 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 6.0 KiB/s wr, 100 op/s
Oct 02 12:00:37 compute-0 nova_compute[257802]: 2025-10-02 12:00:37.464 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406422.4634051, c5bc8428-581c-4ea6-85d8-6f153a1ee723 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:00:37 compute-0 nova_compute[257802]: 2025-10-02 12:00:37.465 2 INFO nova.compute.manager [-] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] VM Stopped (Lifecycle Event)
Oct 02 12:00:37 compute-0 nova_compute[257802]: 2025-10-02 12:00:37.498 2 DEBUG nova.compute.manager [None req-32223bf8-3d82-4f8f-81f0-12b5d8e094cc - - - - - -] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:00:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:00:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Oct 02 12:00:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Oct 02 12:00:38 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Oct 02 12:00:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 327 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 459 KiB/s rd, 3.9 KiB/s wr, 32 op/s
Oct 02 12:00:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:38.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:38 compute-0 nova_compute[257802]: 2025-10-02 12:00:38.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:39 compute-0 ceph-mon[73607]: osdmap e143: 3 total, 3 up, 3 in
Oct 02 12:00:39 compute-0 ceph-mon[73607]: pgmap v1004: 305 pgs: 305 active+clean; 327 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 459 KiB/s rd, 3.9 KiB/s wr, 32 op/s
Oct 02 12:00:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:39.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:40 compute-0 nova_compute[257802]: 2025-10-02 12:00:40.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 327 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 24 KiB/s wr, 114 op/s
Oct 02 12:00:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:00:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:40.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:00:40 compute-0 ceph-mon[73607]: pgmap v1005: 305 pgs: 305 active+clean; 327 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 24 KiB/s wr, 114 op/s
Oct 02 12:00:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:41.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:00:42
Oct 02 12:00:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:00:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:00:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'cephfs.cephfs.data', '.mgr', 'vms', 'images', 'backups', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root']
Oct 02 12:00:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:00:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:00:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:00:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:00:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:00:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:00:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:00:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:00:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:00:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:00:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:00:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:00:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:00:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:00:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:00:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:00:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:00:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 327 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 21 KiB/s wr, 99 op/s
Oct 02 12:00:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:42.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:00:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:43.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:43 compute-0 ceph-mon[73607]: pgmap v1006: 305 pgs: 305 active+clean; 327 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 21 KiB/s wr, 99 op/s
Oct 02 12:00:43 compute-0 nova_compute[257802]: 2025-10-02 12:00:43.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 327 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 31 KiB/s wr, 147 op/s
Oct 02 12:00:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:44.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:44 compute-0 ceph-mon[73607]: pgmap v1007: 305 pgs: 305 active+clean; 327 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 31 KiB/s wr, 147 op/s
Oct 02 12:00:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:00:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:45.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:00:45 compute-0 nova_compute[257802]: 2025-10-02 12:00:45.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 327 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 43 KiB/s wr, 143 op/s
Oct 02 12:00:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:00:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:46.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:00:47 compute-0 ceph-mon[73607]: pgmap v1008: 305 pgs: 305 active+clean; 327 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 43 KiB/s wr, 143 op/s
Oct 02 12:00:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:47.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:47 compute-0 nova_compute[257802]: 2025-10-02 12:00:47.933 2 DEBUG nova.virt.libvirt.driver [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Creating tmpfile /var/lib/nova/instances/tmpxwrhq_3i to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Oct 02 12:00:47 compute-0 nova_compute[257802]: 2025-10-02 12:00:47.934 2 DEBUG nova.compute.manager [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpxwrhq_3i',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Oct 02 12:00:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:00:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2598809351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:00:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 329 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 41 KiB/s wr, 132 op/s
Oct 02 12:00:48 compute-0 podman[268532]: 2025-10-02 12:00:48.903523598 +0000 UTC m=+0.048155229 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Oct 02 12:00:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:48.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:48 compute-0 nova_compute[257802]: 2025-10-02 12:00:48.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:00:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 5227 writes, 22K keys, 5226 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 5227 writes, 5226 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1508 writes, 6692 keys, 1508 commit groups, 1.0 writes per commit group, ingest: 10.05 MB, 0.02 MB/s
                                           Interval WAL: 1508 writes, 1508 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    128.4      0.22              0.07        13    0.017       0      0       0.0       0.0
                                             L6      1/0    7.46 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.5    152.2    125.5      0.78              0.25        12    0.065     55K   6386       0.0       0.0
                                            Sum      1/0    7.46 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.5    118.7    126.1      1.00              0.32        25    0.040     55K   6386       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.1    113.6    113.9      0.50              0.16        12    0.041     29K   3065       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    152.2    125.5      0.78              0.25        12    0.065     55K   6386       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    130.8      0.22              0.07        12    0.018       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.028, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.12 GB write, 0.07 MB/s write, 0.12 GB read, 0.07 MB/s read, 1.0 seconds
                                           Interval compaction: 0.06 GB write, 0.09 MB/s write, 0.06 GB read, 0.09 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5581be5e11f0#2 capacity: 304.00 MB usage: 10.10 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000239 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(590,9.65 MB,3.17273%) FilterBlock(26,159.30 KB,0.0511722%) IndexBlock(26,305.41 KB,0.098108%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 12:00:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:49.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:49 compute-0 sudo[268552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:49 compute-0 sudo[268552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:49 compute-0 sudo[268552]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:49 compute-0 sudo[268577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:49 compute-0 sudo[268577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:49 compute-0 sudo[268577]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:49 compute-0 nova_compute[257802]: 2025-10-02 12:00:49.219 2 DEBUG nova.compute.manager [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpxwrhq_3i',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='cd9deba5-2505-4edd-ac7c-483615217473',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Oct 02 12:00:49 compute-0 nova_compute[257802]: 2025-10-02 12:00:49.255 2 DEBUG oslo_concurrency.lockutils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Acquiring lock "refresh_cache-cd9deba5-2505-4edd-ac7c-483615217473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:00:49 compute-0 nova_compute[257802]: 2025-10-02 12:00:49.256 2 DEBUG oslo_concurrency.lockutils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Acquired lock "refresh_cache-cd9deba5-2505-4edd-ac7c-483615217473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:00:49 compute-0 nova_compute[257802]: 2025-10-02 12:00:49.256 2 DEBUG nova.network.neutron [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:00:49 compute-0 ceph-mon[73607]: pgmap v1009: 305 pgs: 305 active+clean; 329 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 41 KiB/s wr, 132 op/s
Oct 02 12:00:50 compute-0 nova_compute[257802]: 2025-10-02 12:00:50.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 368 MiB data, 432 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.7 MiB/s wr, 134 op/s
Oct 02 12:00:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:00:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:50.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:00:51 compute-0 ceph-mon[73607]: pgmap v1010: 305 pgs: 305 active+clean; 368 MiB data, 432 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.7 MiB/s wr, 134 op/s
Oct 02 12:00:51 compute-0 nova_compute[257802]: 2025-10-02 12:00:51.073 2 DEBUG nova.network.neutron [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Updating instance_info_cache with network_info: [{"id": "cc377346-4a61-4e38-b682-f2111cc47ffd", "address": "fa:16:3e:1b:73:25", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc377346-4a", "ovs_interfaceid": "cc377346-4a61-4e38-b682-f2111cc47ffd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:00:51 compute-0 nova_compute[257802]: 2025-10-02 12:00:51.102 2 DEBUG oslo_concurrency.lockutils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Releasing lock "refresh_cache-cd9deba5-2505-4edd-ac7c-483615217473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:00:51 compute-0 nova_compute[257802]: 2025-10-02 12:00:51.104 2 DEBUG os_brick.utils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:00:51 compute-0 nova_compute[257802]: 2025-10-02 12:00:51.104 2 INFO oslo.privsep.daemon [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpiudmgwlv/privsep.sock']
Oct 02 12:00:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:51.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:51 compute-0 nova_compute[257802]: 2025-10-02 12:00:51.942 2 INFO oslo.privsep.daemon [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Spawned new privsep daemon via rootwrap
Oct 02 12:00:51 compute-0 nova_compute[257802]: 2025-10-02 12:00:51.801 1650 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 12:00:51 compute-0 nova_compute[257802]: 2025-10-02 12:00:51.805 1650 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 12:00:51 compute-0 nova_compute[257802]: 2025-10-02 12:00:51.809 1650 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct 02 12:00:51 compute-0 nova_compute[257802]: 2025-10-02 12:00:51.809 1650 INFO oslo.privsep.daemon [-] privsep daemon running as pid 1650
Oct 02 12:00:51 compute-0 nova_compute[257802]: 2025-10-02 12:00:51.946 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[9022d408-531c-4c6f-9546-f54c62f8982e]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:00:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1971463427' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:00:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/127885509' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:00:52 compute-0 nova_compute[257802]: 2025-10-02 12:00:52.055 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:00:52 compute-0 nova_compute[257802]: 2025-10-02 12:00:52.071 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:00:52 compute-0 nova_compute[257802]: 2025-10-02 12:00:52.072 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[0702c8d8-f5a3-4921-80ca-dce828d62576]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:00:52 compute-0 nova_compute[257802]: 2025-10-02 12:00:52.073 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:00:52 compute-0 nova_compute[257802]: 2025-10-02 12:00:52.080 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:00:52 compute-0 nova_compute[257802]: 2025-10-02 12:00:52.081 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[ff949518-d557-4a13-a247-df1cfa8aac3f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:00:52 compute-0 nova_compute[257802]: 2025-10-02 12:00:52.083 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:00:52 compute-0 nova_compute[257802]: 2025-10-02 12:00:52.098 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:00:52 compute-0 nova_compute[257802]: 2025-10-02 12:00:52.098 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[a79b4025-6f84-4a2a-a6a6-7742fb1db648]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:00:52 compute-0 nova_compute[257802]: 2025-10-02 12:00:52.102 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[bc0b3d35-e7b0-4443-a36d-e0b772a89c4f]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:00:52 compute-0 nova_compute[257802]: 2025-10-02 12:00:52.102 2 DEBUG oslo_concurrency.processutils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:00:52 compute-0 nova_compute[257802]: 2025-10-02 12:00:52.137 2 DEBUG oslo_concurrency.processutils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:00:52 compute-0 nova_compute[257802]: 2025-10-02 12:00:52.140 2 DEBUG os_brick.initiator.connectors.lightos [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:00:52 compute-0 nova_compute[257802]: 2025-10-02 12:00:52.140 2 DEBUG os_brick.initiator.connectors.lightos [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:00:52 compute-0 nova_compute[257802]: 2025-10-02 12:00:52.141 2 DEBUG os_brick.initiator.connectors.lightos [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:00:52 compute-0 nova_compute[257802]: 2025-10-02 12:00:52.141 2 DEBUG os_brick.utils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] <== get_connector_properties: return (1036ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:00:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 368 MiB data, 432 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.7 MiB/s wr, 72 op/s
Oct 02 12:00:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:52.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:00:53 compute-0 ceph-mon[73607]: pgmap v1011: 305 pgs: 305 active+clean; 368 MiB data, 432 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.7 MiB/s wr, 72 op/s
Oct 02 12:00:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:53.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:53 compute-0 nova_compute[257802]: 2025-10-02 12:00:53.668 2 DEBUG nova.virt.libvirt.driver [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpxwrhq_3i',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='cd9deba5-2505-4edd-ac7c-483615217473',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={134b86f3-5f3c-45b7-9cd8-0ead1590c407='829db4f7-64a6-478e-8284-7771fae0f3c1'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Oct 02 12:00:53 compute-0 nova_compute[257802]: 2025-10-02 12:00:53.669 2 DEBUG nova.virt.libvirt.driver [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Creating instance directory: /var/lib/nova/instances/cd9deba5-2505-4edd-ac7c-483615217473 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Oct 02 12:00:53 compute-0 nova_compute[257802]: 2025-10-02 12:00:53.670 2 DEBUG nova.virt.libvirt.driver [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Ensure instance console log exists: /var/lib/nova/instances/cd9deba5-2505-4edd-ac7c-483615217473/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:00:53 compute-0 nova_compute[257802]: 2025-10-02 12:00:53.670 2 DEBUG nova.virt.libvirt.driver [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901
Oct 02 12:00:53 compute-0 nova_compute[257802]: 2025-10-02 12:00:53.671 2 DEBUG oslo_concurrency.lockutils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:00:53 compute-0 nova_compute[257802]: 2025-10-02 12:00:53.671 2 DEBUG oslo_concurrency.lockutils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:00:53 compute-0 nova_compute[257802]: 2025-10-02 12:00:53.676 2 DEBUG oslo_concurrency.lockutils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:00:53 compute-0 nova_compute[257802]: 2025-10-02 12:00:53.682 2 DEBUG nova.virt.libvirt.driver [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Oct 02 12:00:53 compute-0 nova_compute[257802]: 2025-10-02 12:00:53.684 2 DEBUG nova.virt.libvirt.vif [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:00:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1192256805',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-1192256805',id=9,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:00:40Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d977ad6a90874946819537242925a8f0',ramdisk_id='',reservation_id='r-rhco07g5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-959655280',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-959655280-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:00:41Z,user_data=None,user_id='b54b5e15e4c94d1f95a272981e9d9a89',uuid=cd9deba5-2505-4edd-ac7c-483615217473,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cc377346-4a61-4e38-b682-f2111cc47ffd", "address": "fa:16:3e:1b:73:25", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapcc377346-4a", "ovs_interfaceid": "cc377346-4a61-4e38-b682-f2111cc47ffd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:00:53 compute-0 nova_compute[257802]: 2025-10-02 12:00:53.684 2 DEBUG nova.network.os_vif_util [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Converting VIF {"id": "cc377346-4a61-4e38-b682-f2111cc47ffd", "address": "fa:16:3e:1b:73:25", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapcc377346-4a", "ovs_interfaceid": "cc377346-4a61-4e38-b682-f2111cc47ffd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:00:53 compute-0 nova_compute[257802]: 2025-10-02 12:00:53.685 2 DEBUG nova.network.os_vif_util [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:73:25,bridge_name='br-int',has_traffic_filtering=True,id=cc377346-4a61-4e38-b682-f2111cc47ffd,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc377346-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:00:53 compute-0 nova_compute[257802]: 2025-10-02 12:00:53.685 2 DEBUG os_vif [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:73:25,bridge_name='br-int',has_traffic_filtering=True,id=cc377346-4a61-4e38-b682-f2111cc47ffd,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc377346-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:00:53 compute-0 nova_compute[257802]: 2025-10-02 12:00:53.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:53 compute-0 nova_compute[257802]: 2025-10-02 12:00:53.687 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:00:53 compute-0 nova_compute[257802]: 2025-10-02 12:00:53.687 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:00:53 compute-0 nova_compute[257802]: 2025-10-02 12:00:53.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:53 compute-0 nova_compute[257802]: 2025-10-02 12:00:53.692 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcc377346-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:00:53 compute-0 nova_compute[257802]: 2025-10-02 12:00:53.692 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcc377346-4a, col_values=(('external_ids', {'iface-id': 'cc377346-4a61-4e38-b682-f2111cc47ffd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1b:73:25', 'vm-uuid': 'cd9deba5-2505-4edd-ac7c-483615217473'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:00:53 compute-0 nova_compute[257802]: 2025-10-02 12:00:53.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:53 compute-0 NetworkManager[44987]: <info>  [1759406453.6955] manager: (tapcc377346-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Oct 02 12:00:53 compute-0 nova_compute[257802]: 2025-10-02 12:00:53.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:00:53 compute-0 nova_compute[257802]: 2025-10-02 12:00:53.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:53 compute-0 nova_compute[257802]: 2025-10-02 12:00:53.703 2 INFO os_vif [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:73:25,bridge_name='br-int',has_traffic_filtering=True,id=cc377346-4a61-4e38-b682-f2111cc47ffd,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc377346-4a')
Oct 02 12:00:53 compute-0 nova_compute[257802]: 2025-10-02 12:00:53.705 2 DEBUG nova.virt.libvirt.driver [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Oct 02 12:00:53 compute-0 nova_compute[257802]: 2025-10-02 12:00:53.706 2 DEBUG nova.compute.manager [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpxwrhq_3i',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='cd9deba5-2505-4edd-ac7c-483615217473',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={134b86f3-5f3c-45b7-9cd8-0ead1590c407='829db4f7-64a6-478e-8284-7771fae0f3c1'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Oct 02 12:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007234341557323655 of space, bias 1.0, pg target 2.1703024671970965 quantized to 32 (current 32)
Oct 02 12:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0012208394402568397 of space, bias 1.0, pg target 0.36381015319653826 quantized to 32 (current 32)
Oct 02 12:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Oct 02 12:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027081297692164525 quantized to 32 (current 32)
Oct 02 12:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:00:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2867616679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:00:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 399 MiB data, 452 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.5 MiB/s wr, 133 op/s
Oct 02 12:00:54 compute-0 nova_compute[257802]: 2025-10-02 12:00:54.898 2 DEBUG nova.network.neutron [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Port cc377346-4a61-4e38-b682-f2111cc47ffd updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Oct 02 12:00:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:54.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:55.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:55 compute-0 nova_compute[257802]: 2025-10-02 12:00:55.131 2 DEBUG nova.compute.manager [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpxwrhq_3i',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='cd9deba5-2505-4edd-ac7c-483615217473',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={134b86f3-5f3c-45b7-9cd8-0ead1590c407='829db4f7-64a6-478e-8284-7771fae0f3c1'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Oct 02 12:00:55 compute-0 ceph-mon[73607]: pgmap v1012: 305 pgs: 305 active+clean; 399 MiB data, 452 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.5 MiB/s wr, 133 op/s
Oct 02 12:00:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/141035279' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:00:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/141035279' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:00:55 compute-0 nova_compute[257802]: 2025-10-02 12:00:55.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:55 compute-0 kernel: tapcc377346-4a: entered promiscuous mode
Oct 02 12:00:55 compute-0 NetworkManager[44987]: <info>  [1759406455.3363] manager: (tapcc377346-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Oct 02 12:00:55 compute-0 nova_compute[257802]: 2025-10-02 12:00:55.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:55 compute-0 ovn_controller[148183]: 2025-10-02T12:00:55Z|00048|binding|INFO|Claiming lport cc377346-4a61-4e38-b682-f2111cc47ffd for this additional chassis.
Oct 02 12:00:55 compute-0 ovn_controller[148183]: 2025-10-02T12:00:55Z|00049|binding|INFO|cc377346-4a61-4e38-b682-f2111cc47ffd: Claiming fa:16:3e:1b:73:25 10.100.0.4
Oct 02 12:00:55 compute-0 ovn_controller[148183]: 2025-10-02T12:00:55Z|00050|binding|INFO|Setting lport cc377346-4a61-4e38-b682-f2111cc47ffd ovn-installed in OVS
Oct 02 12:00:55 compute-0 nova_compute[257802]: 2025-10-02 12:00:55.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:55 compute-0 nova_compute[257802]: 2025-10-02 12:00:55.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:55 compute-0 systemd-machined[211836]: New machine qemu-6-instance-00000009.
Oct 02 12:00:55 compute-0 systemd-udevd[268633]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:00:55 compute-0 NetworkManager[44987]: <info>  [1759406455.3868] device (tapcc377346-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:00:55 compute-0 NetworkManager[44987]: <info>  [1759406455.3875] device (tapcc377346-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:00:55 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000009.
Oct 02 12:00:56 compute-0 nova_compute[257802]: 2025-10-02 12:00:56.647 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406456.647452, cd9deba5-2505-4edd-ac7c-483615217473 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:00:56 compute-0 nova_compute[257802]: 2025-10-02 12:00:56.648 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] VM Started (Lifecycle Event)
Oct 02 12:00:56 compute-0 nova_compute[257802]: 2025-10-02 12:00:56.678 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:00:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 409 MiB data, 463 MiB used, 21 GiB / 21 GiB avail; 936 KiB/s rd, 3.9 MiB/s wr, 121 op/s
Oct 02 12:00:56 compute-0 ceph-mon[73607]: pgmap v1013: 305 pgs: 305 active+clean; 409 MiB data, 463 MiB used, 21 GiB / 21 GiB avail; 936 KiB/s rd, 3.9 MiB/s wr, 121 op/s
Oct 02 12:00:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:00:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:56.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:00:57 compute-0 nova_compute[257802]: 2025-10-02 12:00:57.090 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406457.0898018, cd9deba5-2505-4edd-ac7c-483615217473 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:00:57 compute-0 nova_compute[257802]: 2025-10-02 12:00:57.090 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] VM Resumed (Lifecycle Event)
Oct 02 12:00:57 compute-0 nova_compute[257802]: 2025-10-02 12:00:57.120 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:00:57 compute-0 nova_compute[257802]: 2025-10-02 12:00:57.123 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:00:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:57.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:57 compute-0 nova_compute[257802]: 2025-10-02 12:00:57.164 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-0.ctlplane.example.com
Oct 02 12:00:57 compute-0 nova_compute[257802]: 2025-10-02 12:00:57.742 2 DEBUG nova.compute.manager [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Oct 02 12:00:57 compute-0 nova_compute[257802]: 2025-10-02 12:00:57.895 2 DEBUG oslo_concurrency.lockutils [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:00:57 compute-0 nova_compute[257802]: 2025-10-02 12:00:57.895 2 DEBUG oslo_concurrency.lockutils [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:00:57 compute-0 podman[268685]: 2025-10-02 12:00:57.919707502 +0000 UTC m=+0.061207340 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 12:00:57 compute-0 podman[268686]: 2025-10-02 12:00:57.919708182 +0000 UTC m=+0.057704554 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, tcib_managed=true)
Oct 02 12:00:57 compute-0 nova_compute[257802]: 2025-10-02 12:00:57.944 2 DEBUG nova.objects.instance [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lazy-loading 'pci_requests' on Instance uuid b8d4207f-7e3b-4a3c-ad76-60d87d695918 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:00:57 compute-0 nova_compute[257802]: 2025-10-02 12:00:57.977 2 DEBUG nova.virt.hardware [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:00:57 compute-0 nova_compute[257802]: 2025-10-02 12:00:57.977 2 INFO nova.compute.claims [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:00:57 compute-0 nova_compute[257802]: 2025-10-02 12:00:57.978 2 DEBUG nova.objects.instance [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lazy-loading 'resources' on Instance uuid b8d4207f-7e3b-4a3c-ad76-60d87d695918 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:00:58 compute-0 nova_compute[257802]: 2025-10-02 12:00:58.000 2 DEBUG nova.objects.instance [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lazy-loading 'pci_devices' on Instance uuid b8d4207f-7e3b-4a3c-ad76-60d87d695918 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:00:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:00:58 compute-0 nova_compute[257802]: 2025-10-02 12:00:58.071 2 INFO nova.compute.resource_tracker [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] Updating resource usage from migration 9100aa3b-5271-4018-89db-db3228fa0fa2
Oct 02 12:00:58 compute-0 nova_compute[257802]: 2025-10-02 12:00:58.072 2 DEBUG nova.compute.resource_tracker [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] Starting to track incoming migration 9100aa3b-5271-4018-89db-db3228fa0fa2 with flavor eb3a53f1-304b-4cb0-acc3-abffce0fb181 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Oct 02 12:00:58 compute-0 nova_compute[257802]: 2025-10-02 12:00:58.203 2 DEBUG oslo_concurrency.processutils [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:00:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:00:58 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2718061999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:00:58 compute-0 nova_compute[257802]: 2025-10-02 12:00:58.636 2 DEBUG oslo_concurrency.processutils [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:00:58 compute-0 nova_compute[257802]: 2025-10-02 12:00:58.642 2 DEBUG nova.compute.provider_tree [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:00:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2718061999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:00:58 compute-0 nova_compute[257802]: 2025-10-02 12:00:58.677 2 DEBUG nova.scheduler.client.report [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:00:58 compute-0 nova_compute[257802]: 2025-10-02 12:00:58.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:58 compute-0 nova_compute[257802]: 2025-10-02 12:00:58.701 2 DEBUG oslo_concurrency.lockutils [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:00:58 compute-0 nova_compute[257802]: 2025-10-02 12:00:58.702 2 INFO nova.compute.manager [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] Migrating
Oct 02 12:00:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 409 MiB data, 463 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 138 op/s
Oct 02 12:00:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:58.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:00:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:59.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:59 compute-0 ovn_controller[148183]: 2025-10-02T12:00:59Z|00051|binding|INFO|Claiming lport cc377346-4a61-4e38-b682-f2111cc47ffd for this chassis.
Oct 02 12:00:59 compute-0 ovn_controller[148183]: 2025-10-02T12:00:59Z|00052|binding|INFO|cc377346-4a61-4e38-b682-f2111cc47ffd: Claiming fa:16:3e:1b:73:25 10.100.0.4
Oct 02 12:00:59 compute-0 ovn_controller[148183]: 2025-10-02T12:00:59Z|00053|binding|INFO|Setting lport cc377346-4a61-4e38-b682-f2111cc47ffd up in Southbound
Oct 02 12:00:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:00:59.473 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:73:25 10.100.0.4'], port_security=['fa:16:3e:1b:73:25 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'cd9deba5-2505-4edd-ac7c-483615217473', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd977ad6a90874946819537242925a8f0', 'neutron:revision_number': '9', 'neutron:security_group_ids': '74859182-61ec-4a12-bb02-0c8684ac9234', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2c64df1-281c-4dee-b0ae-55c6f99caa2e, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=cc377346-4a61-4e38-b682-f2111cc47ffd) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:00:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:00:59.474 158261 INFO neutron.agent.ovn.metadata.agent [-] Port cc377346-4a61-4e38-b682-f2111cc47ffd in datapath 1e991676-99e8-43d9-8575-4a21f50b0ed5 bound to our chassis
Oct 02 12:00:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:00:59.475 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1e991676-99e8-43d9-8575-4a21f50b0ed5
Oct 02 12:00:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:00:59.489 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5d619e33-f907-4d44-afe1-49a2bd04e5df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:00:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:00:59.516 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[3436b457-fbe2-41df-b82f-8f9cf0fc4078]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:00:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:00:59.521 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[1dea52ba-feeb-47a8-9a35-9303395277a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:00:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:00:59.552 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[cfe0eeee-12ea-4d12-8e1b-c445eed65b79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:00:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:00:59.569 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6905dc05-e96d-4923-86a6-537c66884bfe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e991676-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:0e:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 31, 'tx_packets': 5, 'rx_bytes': 1798, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 31, 'tx_packets': 5, 'rx_bytes': 1798, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446414, 'reachable_time': 22434, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268753, 'error': None, 'target': 'ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:00:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:00:59.588 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[98cb261e-97f2-4083-9ef5-83cdd595eed5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1e991676-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 446426, 'tstamp': 446426}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268754, 'error': None, 'target': 'ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1e991676-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 446428, 'tstamp': 446428}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268754, 'error': None, 'target': 'ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:00:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:00:59.590 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e991676-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:00:59 compute-0 nova_compute[257802]: 2025-10-02 12:00:59.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:00:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:00:59.593 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e991676-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:00:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:00:59.593 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:00:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:00:59.593 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1e991676-90, col_values=(('external_ids', {'iface-id': 'a0e52ac7-beb4-4d4b-863a-9b95be4c9e74'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:00:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:00:59.594 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:00:59 compute-0 ceph-mon[73607]: pgmap v1014: 305 pgs: 305 active+clean; 409 MiB data, 463 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 138 op/s
Oct 02 12:00:59 compute-0 nova_compute[257802]: 2025-10-02 12:00:59.815 2 INFO nova.compute.manager [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Post operation of migration started
Oct 02 12:01:00 compute-0 sshd-session[268755]: Accepted publickey for nova from 192.168.122.102 port 33268 ssh2: ECDSA SHA256:RlBMWn3An7DGjBe9yfwGQtrEA9dOakLcJHFiZKvkVOc
Oct 02 12:01:00 compute-0 systemd[1]: Created slice User Slice of UID 42436.
Oct 02 12:01:00 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct 02 12:01:00 compute-0 systemd-logind[789]: New session 57 of user nova.
Oct 02 12:01:00 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct 02 12:01:00 compute-0 systemd[1]: Starting User Manager for UID 42436...
Oct 02 12:01:00 compute-0 systemd[268759]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:01:00 compute-0 systemd[268759]: Queued start job for default target Main User Target.
Oct 02 12:01:00 compute-0 systemd[268759]: Created slice User Application Slice.
Oct 02 12:01:00 compute-0 systemd[268759]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 02 12:01:00 compute-0 systemd[268759]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 12:01:00 compute-0 systemd[268759]: Reached target Paths.
Oct 02 12:01:00 compute-0 systemd[268759]: Reached target Timers.
Oct 02 12:01:00 compute-0 systemd[268759]: Starting D-Bus User Message Bus Socket...
Oct 02 12:01:00 compute-0 systemd[268759]: Starting Create User's Volatile Files and Directories...
Oct 02 12:01:00 compute-0 nova_compute[257802]: 2025-10-02 12:01:00.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:00 compute-0 systemd[268759]: Finished Create User's Volatile Files and Directories.
Oct 02 12:01:00 compute-0 systemd[268759]: Listening on D-Bus User Message Bus Socket.
Oct 02 12:01:00 compute-0 systemd[268759]: Reached target Sockets.
Oct 02 12:01:00 compute-0 systemd[268759]: Reached target Basic System.
Oct 02 12:01:00 compute-0 systemd[1]: Started User Manager for UID 42436.
Oct 02 12:01:00 compute-0 nova_compute[257802]: 2025-10-02 12:01:00.296 2 DEBUG oslo_concurrency.lockutils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Acquiring lock "refresh_cache-cd9deba5-2505-4edd-ac7c-483615217473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:01:00 compute-0 systemd[268759]: Reached target Main User Target.
Oct 02 12:01:00 compute-0 nova_compute[257802]: 2025-10-02 12:01:00.296 2 DEBUG oslo_concurrency.lockutils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Acquired lock "refresh_cache-cd9deba5-2505-4edd-ac7c-483615217473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:01:00 compute-0 nova_compute[257802]: 2025-10-02 12:01:00.296 2 DEBUG nova.network.neutron [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:01:00 compute-0 systemd[268759]: Startup finished in 160ms.
Oct 02 12:01:00 compute-0 systemd[1]: Started Session 57 of User nova.
Oct 02 12:01:00 compute-0 sshd-session[268755]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:01:00 compute-0 sshd-session[268774]: Received disconnect from 192.168.122.102 port 33268:11: disconnected by user
Oct 02 12:01:00 compute-0 sshd-session[268774]: Disconnected from user nova 192.168.122.102 port 33268
Oct 02 12:01:00 compute-0 sshd-session[268755]: pam_unix(sshd:session): session closed for user nova
Oct 02 12:01:00 compute-0 systemd[1]: session-57.scope: Deactivated successfully.
Oct 02 12:01:00 compute-0 systemd-logind[789]: Session 57 logged out. Waiting for processes to exit.
Oct 02 12:01:00 compute-0 systemd-logind[789]: Removed session 57.
Oct 02 12:01:00 compute-0 sshd-session[268777]: Accepted publickey for nova from 192.168.122.102 port 33284 ssh2: ECDSA SHA256:RlBMWn3An7DGjBe9yfwGQtrEA9dOakLcJHFiZKvkVOc
Oct 02 12:01:00 compute-0 systemd-logind[789]: New session 59 of user nova.
Oct 02 12:01:00 compute-0 systemd[1]: Started Session 59 of User nova.
Oct 02 12:01:00 compute-0 sshd-session[268777]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:01:00 compute-0 sshd-session[268780]: Received disconnect from 192.168.122.102 port 33284:11: disconnected by user
Oct 02 12:01:00 compute-0 sshd-session[268780]: Disconnected from user nova 192.168.122.102 port 33284
Oct 02 12:01:00 compute-0 sshd-session[268777]: pam_unix(sshd:session): session closed for user nova
Oct 02 12:01:00 compute-0 systemd[1]: session-59.scope: Deactivated successfully.
Oct 02 12:01:00 compute-0 systemd-logind[789]: Session 59 logged out. Waiting for processes to exit.
Oct 02 12:01:00 compute-0 systemd-logind[789]: Removed session 59.
Oct 02 12:01:00 compute-0 nova_compute[257802]: 2025-10-02 12:01:00.765 2 DEBUG oslo_concurrency.lockutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Acquiring lock "cee549ac-63b2-4eed-b8c5-0bd0948a95d5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:00 compute-0 nova_compute[257802]: 2025-10-02 12:01:00.766 2 DEBUG oslo_concurrency.lockutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "cee549ac-63b2-4eed-b8c5-0bd0948a95d5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 409 MiB data, 463 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 170 op/s
Oct 02 12:01:00 compute-0 nova_compute[257802]: 2025-10-02 12:01:00.911 2 DEBUG nova.compute.manager [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:01:00 compute-0 ceph-mon[73607]: pgmap v1015: 305 pgs: 305 active+clean; 409 MiB data, 463 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 170 op/s
Oct 02 12:01:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:00.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:01 compute-0 nova_compute[257802]: 2025-10-02 12:01:01.100 2 DEBUG oslo_concurrency.lockutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:01 compute-0 nova_compute[257802]: 2025-10-02 12:01:01.100 2 DEBUG oslo_concurrency.lockutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:01 compute-0 nova_compute[257802]: 2025-10-02 12:01:01.106 2 DEBUG nova.virt.hardware [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:01:01 compute-0 nova_compute[257802]: 2025-10-02 12:01:01.107 2 INFO nova.compute.claims [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:01:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:01.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:01 compute-0 CROND[268783]: (root) CMD (run-parts /etc/cron.hourly)
Oct 02 12:01:01 compute-0 run-parts[268786]: (/etc/cron.hourly) starting 0anacron
Oct 02 12:01:01 compute-0 run-parts[268792]: (/etc/cron.hourly) finished 0anacron
Oct 02 12:01:01 compute-0 CROND[268782]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 02 12:01:01 compute-0 nova_compute[257802]: 2025-10-02 12:01:01.399 2 DEBUG oslo_concurrency.processutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:01:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:01:01 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/183784312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:01:01 compute-0 nova_compute[257802]: 2025-10-02 12:01:01.834 2 DEBUG oslo_concurrency.processutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:01:01 compute-0 nova_compute[257802]: 2025-10-02 12:01:01.839 2 DEBUG nova.compute.provider_tree [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:01:01 compute-0 nova_compute[257802]: 2025-10-02 12:01:01.899 2 DEBUG nova.scheduler.client.report [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:01:01 compute-0 nova_compute[257802]: 2025-10-02 12:01:01.959 2 DEBUG oslo_concurrency.lockutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:01 compute-0 nova_compute[257802]: 2025-10-02 12:01:01.960 2 DEBUG nova.compute.manager [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:01:01 compute-0 podman[268815]: 2025-10-02 12:01:01.967652625 +0000 UTC m=+0.107936634 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:01:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/183784312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:01:02 compute-0 nova_compute[257802]: 2025-10-02 12:01:02.020 2 DEBUG nova.compute.manager [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:01:02 compute-0 nova_compute[257802]: 2025-10-02 12:01:02.020 2 DEBUG nova.network.neutron [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:01:02 compute-0 nova_compute[257802]: 2025-10-02 12:01:02.074 2 INFO nova.virt.libvirt.driver [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:01:02 compute-0 nova_compute[257802]: 2025-10-02 12:01:02.137 2 DEBUG nova.compute.manager [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:01:02 compute-0 nova_compute[257802]: 2025-10-02 12:01:02.372 2 DEBUG nova.compute.manager [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:01:02 compute-0 nova_compute[257802]: 2025-10-02 12:01:02.374 2 DEBUG nova.virt.libvirt.driver [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:01:02 compute-0 nova_compute[257802]: 2025-10-02 12:01:02.374 2 INFO nova.virt.libvirt.driver [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Creating image(s)
Oct 02 12:01:02 compute-0 nova_compute[257802]: 2025-10-02 12:01:02.398 2 DEBUG nova.storage.rbd_utils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] rbd image cee549ac-63b2-4eed-b8c5-0bd0948a95d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:01:02 compute-0 nova_compute[257802]: 2025-10-02 12:01:02.423 2 DEBUG nova.storage.rbd_utils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] rbd image cee549ac-63b2-4eed-b8c5-0bd0948a95d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:01:02 compute-0 nova_compute[257802]: 2025-10-02 12:01:02.446 2 DEBUG nova.storage.rbd_utils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] rbd image cee549ac-63b2-4eed-b8c5-0bd0948a95d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:01:02 compute-0 nova_compute[257802]: 2025-10-02 12:01:02.450 2 DEBUG oslo_concurrency.processutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:01:02 compute-0 nova_compute[257802]: 2025-10-02 12:01:02.470 2 DEBUG nova.network.neutron [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Updating instance_info_cache with network_info: [{"id": "cc377346-4a61-4e38-b682-f2111cc47ffd", "address": "fa:16:3e:1b:73:25", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc377346-4a", "ovs_interfaceid": "cc377346-4a61-4e38-b682-f2111cc47ffd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:01:02 compute-0 nova_compute[257802]: 2025-10-02 12:01:02.474 2 DEBUG nova.policy [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f4a02a1717144da38c573ce51c727de8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b6a5858e0d184dd184a3291b74794c14', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:01:02 compute-0 nova_compute[257802]: 2025-10-02 12:01:02.534 2 DEBUG oslo_concurrency.processutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:01:02 compute-0 nova_compute[257802]: 2025-10-02 12:01:02.535 2 DEBUG oslo_concurrency.lockutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:02 compute-0 nova_compute[257802]: 2025-10-02 12:01:02.536 2 DEBUG oslo_concurrency.lockutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:02 compute-0 nova_compute[257802]: 2025-10-02 12:01:02.536 2 DEBUG oslo_concurrency.lockutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:02 compute-0 nova_compute[257802]: 2025-10-02 12:01:02.557 2 DEBUG nova.storage.rbd_utils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] rbd image cee549ac-63b2-4eed-b8c5-0bd0948a95d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:01:02 compute-0 nova_compute[257802]: 2025-10-02 12:01:02.560 2 DEBUG oslo_concurrency.processutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 cee549ac-63b2-4eed-b8c5-0bd0948a95d5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:01:02 compute-0 nova_compute[257802]: 2025-10-02 12:01:02.594 2 DEBUG oslo_concurrency.lockutils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Releasing lock "refresh_cache-cd9deba5-2505-4edd-ac7c-483615217473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:01:02 compute-0 nova_compute[257802]: 2025-10-02 12:01:02.637 2 DEBUG oslo_concurrency.lockutils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:02 compute-0 nova_compute[257802]: 2025-10-02 12:01:02.637 2 DEBUG oslo_concurrency.lockutils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:02 compute-0 nova_compute[257802]: 2025-10-02 12:01:02.637 2 DEBUG oslo_concurrency.lockutils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:02 compute-0 nova_compute[257802]: 2025-10-02 12:01:02.642 2 INFO nova.virt.libvirt.driver [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Oct 02 12:01:02 compute-0 virtqemud[257280]: Domain id=6 name='instance-00000009' uuid=cd9deba5-2505-4edd-ac7c-483615217473 is tainted: custom-monitor
Oct 02 12:01:02 compute-0 nova_compute[257802]: 2025-10-02 12:01:02.836 2 DEBUG oslo_concurrency.processutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 cee549ac-63b2-4eed-b8c5-0bd0948a95d5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:01:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 409 MiB data, 463 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 153 op/s
Oct 02 12:01:02 compute-0 nova_compute[257802]: 2025-10-02 12:01:02.902 2 DEBUG nova.storage.rbd_utils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] resizing rbd image cee549ac-63b2-4eed-b8c5-0bd0948a95d5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:01:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:02.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:01:03 compute-0 nova_compute[257802]: 2025-10-02 12:01:03.022 2 DEBUG nova.objects.instance [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lazy-loading 'migration_context' on Instance uuid cee549ac-63b2-4eed-b8c5-0bd0948a95d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:01:03 compute-0 ceph-mon[73607]: pgmap v1016: 305 pgs: 305 active+clean; 409 MiB data, 463 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 153 op/s
Oct 02 12:01:03 compute-0 nova_compute[257802]: 2025-10-02 12:01:03.077 2 DEBUG nova.virt.libvirt.driver [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:01:03 compute-0 nova_compute[257802]: 2025-10-02 12:01:03.077 2 DEBUG nova.virt.libvirt.driver [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Ensure instance console log exists: /var/lib/nova/instances/cee549ac-63b2-4eed-b8c5-0bd0948a95d5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:01:03 compute-0 nova_compute[257802]: 2025-10-02 12:01:03.078 2 DEBUG oslo_concurrency.lockutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:03 compute-0 nova_compute[257802]: 2025-10-02 12:01:03.078 2 DEBUG oslo_concurrency.lockutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:03 compute-0 nova_compute[257802]: 2025-10-02 12:01:03.079 2 DEBUG oslo_concurrency.lockutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:03.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:03 compute-0 nova_compute[257802]: 2025-10-02 12:01:03.650 2 INFO nova.virt.libvirt.driver [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Oct 02 12:01:03 compute-0 nova_compute[257802]: 2025-10-02 12:01:03.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:04 compute-0 nova_compute[257802]: 2025-10-02 12:01:04.657 2 INFO nova.virt.libvirt.driver [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Oct 02 12:01:04 compute-0 nova_compute[257802]: 2025-10-02 12:01:04.660 2 DEBUG nova.compute.manager [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:01:04 compute-0 nova_compute[257802]: 2025-10-02 12:01:04.709 2 DEBUG nova.objects.instance [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:01:04 compute-0 nova_compute[257802]: 2025-10-02 12:01:04.796 2 DEBUG nova.network.neutron [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Successfully created port: 72564b18-ab59-4028-8bb3-e93073a18534 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:01:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 443 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.6 MiB/s wr, 178 op/s
Oct 02 12:01:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:01:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:04.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:01:05 compute-0 ceph-mon[73607]: pgmap v1017: 305 pgs: 305 active+clean; 443 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.6 MiB/s wr, 178 op/s
Oct 02 12:01:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:01:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:05.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:01:05 compute-0 nova_compute[257802]: 2025-10-02 12:01:05.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 459 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.0 MiB/s wr, 129 op/s
Oct 02 12:01:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:06.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:07 compute-0 ceph-mon[73607]: pgmap v1018: 305 pgs: 305 active+clean; 459 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.0 MiB/s wr, 129 op/s
Oct 02 12:01:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:07.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:01:08 compute-0 nova_compute[257802]: 2025-10-02 12:01:08.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 476 MiB data, 503 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 137 op/s
Oct 02 12:01:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:08.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:09 compute-0 ceph-mon[73607]: pgmap v1019: 305 pgs: 305 active+clean; 476 MiB data, 503 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 137 op/s
Oct 02 12:01:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:09.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:09 compute-0 sudo[269011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:01:09 compute-0 sudo[269011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:09 compute-0 sudo[269011]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:09 compute-0 sudo[269036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:01:09 compute-0 sudo[269036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:09 compute-0 sudo[269036]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:10 compute-0 nova_compute[257802]: 2025-10-02 12:01:10.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:10 compute-0 systemd[1]: Stopping User Manager for UID 42436...
Oct 02 12:01:10 compute-0 systemd[268759]: Activating special unit Exit the Session...
Oct 02 12:01:10 compute-0 systemd[268759]: Stopped target Main User Target.
Oct 02 12:01:10 compute-0 systemd[268759]: Stopped target Basic System.
Oct 02 12:01:10 compute-0 systemd[268759]: Stopped target Paths.
Oct 02 12:01:10 compute-0 systemd[268759]: Stopped target Sockets.
Oct 02 12:01:10 compute-0 systemd[268759]: Stopped target Timers.
Oct 02 12:01:10 compute-0 systemd[268759]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 02 12:01:10 compute-0 systemd[268759]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 12:01:10 compute-0 systemd[268759]: Closed D-Bus User Message Bus Socket.
Oct 02 12:01:10 compute-0 systemd[268759]: Stopped Create User's Volatile Files and Directories.
Oct 02 12:01:10 compute-0 systemd[268759]: Removed slice User Application Slice.
Oct 02 12:01:10 compute-0 systemd[268759]: Reached target Shutdown.
Oct 02 12:01:10 compute-0 systemd[268759]: Finished Exit the Session.
Oct 02 12:01:10 compute-0 systemd[268759]: Reached target Exit the Session.
Oct 02 12:01:10 compute-0 systemd[1]: user@42436.service: Deactivated successfully.
Oct 02 12:01:10 compute-0 systemd[1]: Stopped User Manager for UID 42436.
Oct 02 12:01:10 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct 02 12:01:10 compute-0 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct 02 12:01:10 compute-0 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct 02 12:01:10 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct 02 12:01:10 compute-0 systemd[1]: Removed slice User Slice of UID 42436.
Oct 02 12:01:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 488 MiB data, 509 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 125 op/s
Oct 02 12:01:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:11.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:11 compute-0 ceph-mon[73607]: pgmap v1020: 305 pgs: 305 active+clean; 488 MiB data, 509 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 125 op/s
Oct 02 12:01:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:11.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:01:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:01:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:01:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:01:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:01:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:01:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 488 MiB data, 509 MiB used, 20 GiB / 21 GiB avail; 424 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct 02 12:01:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:13.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:01:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:13.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:13 compute-0 ceph-mon[73607]: pgmap v1021: 305 pgs: 305 active+clean; 488 MiB data, 509 MiB used, 20 GiB / 21 GiB avail; 424 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct 02 12:01:13 compute-0 nova_compute[257802]: 2025-10-02 12:01:13.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 488 MiB data, 509 MiB used, 20 GiB / 21 GiB avail; 425 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Oct 02 12:01:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:01:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:15.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:01:15 compute-0 ceph-mon[73607]: pgmap v1022: 305 pgs: 305 active+clean; 488 MiB data, 509 MiB used, 20 GiB / 21 GiB avail; 425 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Oct 02 12:01:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:15.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:15 compute-0 nova_compute[257802]: 2025-10-02 12:01:15.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2384123232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:01:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 488 MiB data, 509 MiB used, 20 GiB / 21 GiB avail; 409 KiB/s rd, 2.5 MiB/s wr, 70 op/s
Oct 02 12:01:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:17.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:17 compute-0 ceph-mon[73607]: pgmap v1023: 305 pgs: 305 active+clean; 488 MiB data, 509 MiB used, 20 GiB / 21 GiB avail; 409 KiB/s rd, 2.5 MiB/s wr, 70 op/s
Oct 02 12:01:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:17.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:17 compute-0 nova_compute[257802]: 2025-10-02 12:01:17.252 2 DEBUG oslo_concurrency.lockutils [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquiring lock "refresh_cache-b8d4207f-7e3b-4a3c-ad76-60d87d695918" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:01:17 compute-0 nova_compute[257802]: 2025-10-02 12:01:17.252 2 DEBUG oslo_concurrency.lockutils [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquired lock "refresh_cache-b8d4207f-7e3b-4a3c-ad76-60d87d695918" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:01:17 compute-0 nova_compute[257802]: 2025-10-02 12:01:17.252 2 DEBUG nova.network.neutron [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:01:17 compute-0 nova_compute[257802]: 2025-10-02 12:01:17.512 2 DEBUG nova.network.neutron [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:01:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:17.644 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:01:17 compute-0 nova_compute[257802]: 2025-10-02 12:01:17.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:17.645 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:01:17 compute-0 nova_compute[257802]: 2025-10-02 12:01:17.774 2 DEBUG nova.network.neutron [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:01:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:01:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/345505956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:01:18 compute-0 nova_compute[257802]: 2025-10-02 12:01:18.220 2 DEBUG oslo_concurrency.lockutils [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Releasing lock "refresh_cache-b8d4207f-7e3b-4a3c-ad76-60d87d695918" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:01:18 compute-0 nova_compute[257802]: 2025-10-02 12:01:18.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 488 MiB data, 509 MiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 1.4 MiB/s wr, 59 op/s
Oct 02 12:01:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:19.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:19.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:19 compute-0 ceph-mon[73607]: pgmap v1024: 305 pgs: 305 active+clean; 488 MiB data, 509 MiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 1.4 MiB/s wr, 59 op/s
Oct 02 12:01:19 compute-0 nova_compute[257802]: 2025-10-02 12:01:19.697 2 DEBUG nova.virt.libvirt.driver [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Oct 02 12:01:19 compute-0 nova_compute[257802]: 2025-10-02 12:01:19.699 2 DEBUG nova.virt.libvirt.driver [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 12:01:19 compute-0 nova_compute[257802]: 2025-10-02 12:01:19.699 2 INFO nova.virt.libvirt.driver [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] Creating image(s)
Oct 02 12:01:19 compute-0 nova_compute[257802]: 2025-10-02 12:01:19.731 2 DEBUG nova.storage.rbd_utils [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] creating snapshot(nova-resize) on rbd image(b8d4207f-7e3b-4a3c-ad76-60d87d695918_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:01:19 compute-0 nova_compute[257802]: 2025-10-02 12:01:19.908 2 DEBUG nova.network.neutron [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Successfully updated port: 72564b18-ab59-4028-8bb3-e93073a18534 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:01:19 compute-0 podman[269105]: 2025-10-02 12:01:19.911783836 +0000 UTC m=+0.052186618 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 12:01:20 compute-0 nova_compute[257802]: 2025-10-02 12:01:20.009 2 DEBUG oslo_concurrency.lockutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Acquiring lock "refresh_cache-cee549ac-63b2-4eed-b8c5-0bd0948a95d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:01:20 compute-0 nova_compute[257802]: 2025-10-02 12:01:20.009 2 DEBUG oslo_concurrency.lockutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Acquired lock "refresh_cache-cee549ac-63b2-4eed-b8c5-0bd0948a95d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:01:20 compute-0 nova_compute[257802]: 2025-10-02 12:01:20.009 2 DEBUG nova.network.neutron [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:01:20 compute-0 nova_compute[257802]: 2025-10-02 12:01:20.092 2 DEBUG nova.compute.manager [req-28c08085-bd9c-4553-bf60-37b642766c86 req-a0d78010-3d82-4ad1-86b5-361db550e15c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Received event network-changed-72564b18-ab59-4028-8bb3-e93073a18534 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:01:20 compute-0 nova_compute[257802]: 2025-10-02 12:01:20.092 2 DEBUG nova.compute.manager [req-28c08085-bd9c-4553-bf60-37b642766c86 req-a0d78010-3d82-4ad1-86b5-361db550e15c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Refreshing instance network info cache due to event network-changed-72564b18-ab59-4028-8bb3-e93073a18534. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:01:20 compute-0 nova_compute[257802]: 2025-10-02 12:01:20.092 2 DEBUG oslo_concurrency.lockutils [req-28c08085-bd9c-4553-bf60-37b642766c86 req-a0d78010-3d82-4ad1-86b5-361db550e15c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-cee549ac-63b2-4eed-b8c5-0bd0948a95d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:01:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Oct 02 12:01:20 compute-0 nova_compute[257802]: 2025-10-02 12:01:20.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Oct 02 12:01:20 compute-0 nova_compute[257802]: 2025-10-02 12:01:20.407 2 DEBUG nova.network.neutron [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:01:20 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Oct 02 12:01:20 compute-0 nova_compute[257802]: 2025-10-02 12:01:20.855 2 DEBUG nova.objects.instance [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lazy-loading 'trusted_certs' on Instance uuid b8d4207f-7e3b-4a3c-ad76-60d87d695918 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:01:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 488 MiB data, 509 MiB used, 20 GiB / 21 GiB avail; 6.4 KiB/s rd, 31 KiB/s wr, 9 op/s
Oct 02 12:01:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:21.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.021 2 DEBUG nova.virt.libvirt.driver [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.022 2 DEBUG nova.virt.libvirt.driver [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] Ensure instance console log exists: /var/lib/nova/instances/b8d4207f-7e3b-4a3c-ad76-60d87d695918/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.022 2 DEBUG oslo_concurrency.lockutils [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.022 2 DEBUG oslo_concurrency.lockutils [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.023 2 DEBUG oslo_concurrency.lockutils [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.024 2 DEBUG nova.virt.libvirt.driver [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.027 2 WARNING nova.virt.libvirt.driver [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.031 2 DEBUG nova.virt.libvirt.host [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.032 2 DEBUG nova.virt.libvirt.host [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.035 2 DEBUG nova.virt.libvirt.host [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.035 2 DEBUG nova.virt.libvirt.host [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.036 2 DEBUG nova.virt.libvirt.driver [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.036 2 DEBUG nova.virt.hardware [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:39Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='eb3a53f1-304b-4cb0-acc3-abffce0fb181',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.036 2 DEBUG nova.virt.hardware [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.037 2 DEBUG nova.virt.hardware [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.037 2 DEBUG nova.virt.hardware [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.037 2 DEBUG nova.virt.hardware [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.037 2 DEBUG nova.virt.hardware [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.037 2 DEBUG nova.virt.hardware [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.038 2 DEBUG nova.virt.hardware [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.038 2 DEBUG nova.virt.hardware [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.038 2 DEBUG nova.virt.hardware [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.038 2 DEBUG nova.virt.hardware [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.038 2 DEBUG nova.objects.instance [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b8d4207f-7e3b-4a3c-ad76-60d87d695918 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.084 2 DEBUG oslo_concurrency.processutils [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:01:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:21.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.290 2 DEBUG nova.virt.libvirt.driver [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Check if temp file /var/lib/nova/instances/tmp3_47x7bn exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.290 2 DEBUG nova.compute.manager [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3_47x7bn',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='cd9deba5-2505-4edd-ac7c-483615217473',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Oct 02 12:01:21 compute-0 ceph-mon[73607]: osdmap e144: 3 total, 3 up, 3 in
Oct 02 12:01:21 compute-0 ceph-mon[73607]: pgmap v1026: 305 pgs: 305 active+clean; 488 MiB data, 509 MiB used, 20 GiB / 21 GiB avail; 6.4 KiB/s rd, 31 KiB/s wr, 9 op/s
Oct 02 12:01:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:01:21 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3676543835' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.530 2 DEBUG oslo_concurrency.processutils [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.567 2 DEBUG oslo_concurrency.processutils [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.808 2 DEBUG nova.network.neutron [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Updating instance_info_cache with network_info: [{"id": "72564b18-ab59-4028-8bb3-e93073a18534", "address": "fa:16:3e:36:35:39", "network": {"id": "797e74ff-2c6b-48fb-807e-b845ad260597", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1853530210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a5858e0d184dd184a3291b74794c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72564b18-ab", "ovs_interfaceid": "72564b18-ab59-4028-8bb3-e93073a18534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.893 2 DEBUG oslo_concurrency.lockutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Releasing lock "refresh_cache-cee549ac-63b2-4eed-b8c5-0bd0948a95d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.893 2 DEBUG nova.compute.manager [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Instance network_info: |[{"id": "72564b18-ab59-4028-8bb3-e93073a18534", "address": "fa:16:3e:36:35:39", "network": {"id": "797e74ff-2c6b-48fb-807e-b845ad260597", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1853530210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a5858e0d184dd184a3291b74794c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72564b18-ab", "ovs_interfaceid": "72564b18-ab59-4028-8bb3-e93073a18534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.894 2 DEBUG oslo_concurrency.lockutils [req-28c08085-bd9c-4553-bf60-37b642766c86 req-a0d78010-3d82-4ad1-86b5-361db550e15c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-cee549ac-63b2-4eed-b8c5-0bd0948a95d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.894 2 DEBUG nova.network.neutron [req-28c08085-bd9c-4553-bf60-37b642766c86 req-a0d78010-3d82-4ad1-86b5-361db550e15c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Refreshing network info cache for port 72564b18-ab59-4028-8bb3-e93073a18534 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.899 2 DEBUG nova.virt.libvirt.driver [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Start _get_guest_xml network_info=[{"id": "72564b18-ab59-4028-8bb3-e93073a18534", "address": "fa:16:3e:36:35:39", "network": {"id": "797e74ff-2c6b-48fb-807e-b845ad260597", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1853530210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a5858e0d184dd184a3291b74794c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72564b18-ab", "ovs_interfaceid": "72564b18-ab59-4028-8bb3-e93073a18534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.904 2 WARNING nova.virt.libvirt.driver [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.909 2 DEBUG nova.virt.libvirt.host [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.910 2 DEBUG nova.virt.libvirt.host [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.913 2 DEBUG nova.virt.libvirt.host [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.914 2 DEBUG nova.virt.libvirt.host [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.915 2 DEBUG nova.virt.libvirt.driver [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.915 2 DEBUG nova.virt.hardware [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:00:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1625425264',id=27,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_0-828527593',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.916 2 DEBUG nova.virt.hardware [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.916 2 DEBUG nova.virt.hardware [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.916 2 DEBUG nova.virt.hardware [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.916 2 DEBUG nova.virt.hardware [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.917 2 DEBUG nova.virt.hardware [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.917 2 DEBUG nova.virt.hardware [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.917 2 DEBUG nova.virt.hardware [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.917 2 DEBUG nova.virt.hardware [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.918 2 DEBUG nova.virt.hardware [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.918 2 DEBUG nova.virt.hardware [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.921 2 DEBUG oslo_concurrency.processutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:01:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:01:21 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/835666087' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.982 2 DEBUG oslo_concurrency.processutils [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:01:21 compute-0 nova_compute[257802]: 2025-10-02 12:01:21.986 2 DEBUG nova.virt.libvirt.driver [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:01:21 compute-0 nova_compute[257802]:   <uuid>b8d4207f-7e3b-4a3c-ad76-60d87d695918</uuid>
Oct 02 12:01:21 compute-0 nova_compute[257802]:   <name>instance-0000000a</name>
Oct 02 12:01:21 compute-0 nova_compute[257802]:   <memory>196608</memory>
Oct 02 12:01:21 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:01:21 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:01:21 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:       <nova:name>tempest-MigrationsAdminTest-server-1449740565</nova:name>
Oct 02 12:01:21 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:01:21</nova:creationTime>
Oct 02 12:01:21 compute-0 nova_compute[257802]:       <nova:flavor name="m1.micro">
Oct 02 12:01:21 compute-0 nova_compute[257802]:         <nova:memory>192</nova:memory>
Oct 02 12:01:21 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:01:21 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:01:21 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:01:21 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:01:21 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:01:21 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:01:21 compute-0 nova_compute[257802]:         <nova:user uuid="1a06819bf8cc4ff7bccbbb2616ff2d21">tempest-MigrationsAdminTest-819597356-project-member</nova:user>
Oct 02 12:01:21 compute-0 nova_compute[257802]:         <nova:project uuid="f1ce36070fb047479c3a083f36733f63">tempest-MigrationsAdminTest-819597356</nova:project>
Oct 02 12:01:21 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:01:21 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:       <nova:ports/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:01:21 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:01:21 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <system>
Oct 02 12:01:21 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:01:21 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:01:21 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:01:21 compute-0 nova_compute[257802]:       <entry name="serial">b8d4207f-7e3b-4a3c-ad76-60d87d695918</entry>
Oct 02 12:01:21 compute-0 nova_compute[257802]:       <entry name="uuid">b8d4207f-7e3b-4a3c-ad76-60d87d695918</entry>
Oct 02 12:01:21 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     </system>
Oct 02 12:01:21 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:01:21 compute-0 nova_compute[257802]:   <os>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:   </os>
Oct 02 12:01:21 compute-0 nova_compute[257802]:   <features>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:   </features>
Oct 02 12:01:21 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:01:21 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:01:21 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:01:21 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/b8d4207f-7e3b-4a3c-ad76-60d87d695918_disk">
Oct 02 12:01:21 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:       </source>
Oct 02 12:01:21 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:01:21 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:01:21 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:01:21 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/b8d4207f-7e3b-4a3c-ad76-60d87d695918_disk.config">
Oct 02 12:01:21 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:       </source>
Oct 02 12:01:21 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:01:21 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:01:21 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:01:21 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/b8d4207f-7e3b-4a3c-ad76-60d87d695918/console.log" append="off"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <video>
Oct 02 12:01:21 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     </video>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:01:21 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:01:21 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:01:21 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:01:21 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:01:21 compute-0 nova_compute[257802]: </domain>
Oct 02 12:01:21 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:01:22 compute-0 nova_compute[257802]: 2025-10-02 12:01:22.125 2 DEBUG nova.virt.libvirt.driver [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:01:22 compute-0 nova_compute[257802]: 2025-10-02 12:01:22.126 2 DEBUG nova.virt.libvirt.driver [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:01:22 compute-0 nova_compute[257802]: 2025-10-02 12:01:22.126 2 INFO nova.virt.libvirt.driver [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] Using config drive
Oct 02 12:01:22 compute-0 systemd-machined[211836]: New machine qemu-7-instance-0000000a.
Oct 02 12:01:22 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-0000000a.
Oct 02 12:01:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:01:22 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2441806977' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:01:22 compute-0 nova_compute[257802]: 2025-10-02 12:01:22.415 2 DEBUG oslo_concurrency.processutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:01:22 compute-0 nova_compute[257802]: 2025-10-02 12:01:22.443 2 DEBUG nova.storage.rbd_utils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] rbd image cee549ac-63b2-4eed-b8c5-0bd0948a95d5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:01:22 compute-0 nova_compute[257802]: 2025-10-02 12:01:22.450 2 DEBUG oslo_concurrency.processutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:01:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3676543835' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:01:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/835666087' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:01:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2441806977' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:01:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 488 MiB data, 509 MiB used, 20 GiB / 21 GiB avail; 6.4 KiB/s rd, 31 KiB/s wr, 9 op/s
Oct 02 12:01:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:01:22 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2273962547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:01:22 compute-0 nova_compute[257802]: 2025-10-02 12:01:22.934 2 DEBUG oslo_concurrency.processutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:01:22 compute-0 nova_compute[257802]: 2025-10-02 12:01:22.936 2 DEBUG nova.virt.libvirt.vif [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:00:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-544170697',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-544170697',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(27),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-544170697',id=11,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=27,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBABlpYtD7rDfo5edCO3VO05L7/og1LArqb6dlvrZJFkfLxvT/a212dNPUwff9OZOkCSB3EGYGqoNRjJJC/KPkC0Po6hJhudoH+ymL6NUy6s9m4GPkIqwcNKYTuBlpjjxTw==',key_name='tempest-keypair-199584473',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b6a5858e0d184dd184a3291b74794c14',ramdisk_id='',reservation_id='r-j38q1r6l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1982850532',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1982850532-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:01:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f4a02a1717144da38c573ce51c727de8',uuid=cee549ac-63b2-4eed-b8c5-0bd0948a95d5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "72564b18-ab59-4028-8bb3-e93073a18534", "address": "fa:16:3e:36:35:39", "network": {"id": "797e74ff-2c6b-48fb-807e-b845ad260597", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1853530210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a5858e0d184dd184a3291b74794c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72564b18-ab", "ovs_interfaceid": "72564b18-ab59-4028-8bb3-e93073a18534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:01:22 compute-0 nova_compute[257802]: 2025-10-02 12:01:22.936 2 DEBUG nova.network.os_vif_util [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Converting VIF {"id": "72564b18-ab59-4028-8bb3-e93073a18534", "address": "fa:16:3e:36:35:39", "network": {"id": "797e74ff-2c6b-48fb-807e-b845ad260597", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1853530210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a5858e0d184dd184a3291b74794c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72564b18-ab", "ovs_interfaceid": "72564b18-ab59-4028-8bb3-e93073a18534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:01:22 compute-0 nova_compute[257802]: 2025-10-02 12:01:22.937 2 DEBUG nova.network.os_vif_util [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:35:39,bridge_name='br-int',has_traffic_filtering=True,id=72564b18-ab59-4028-8bb3-e93073a18534,network=Network(797e74ff-2c6b-48fb-807e-b845ad260597),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72564b18-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:01:22 compute-0 nova_compute[257802]: 2025-10-02 12:01:22.938 2 DEBUG nova.objects.instance [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lazy-loading 'pci_devices' on Instance uuid cee549ac-63b2-4eed-b8c5-0bd0948a95d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:01:22 compute-0 nova_compute[257802]: 2025-10-02 12:01:22.981 2 DEBUG nova.virt.libvirt.driver [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:01:22 compute-0 nova_compute[257802]:   <uuid>cee549ac-63b2-4eed-b8c5-0bd0948a95d5</uuid>
Oct 02 12:01:22 compute-0 nova_compute[257802]:   <name>instance-0000000b</name>
Oct 02 12:01:22 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:01:22 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:01:22 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:01:22 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:       <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-544170697</nova:name>
Oct 02 12:01:22 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:01:21</nova:creationTime>
Oct 02 12:01:22 compute-0 nova_compute[257802]:       <nova:flavor name="tempest-flavor_with_ephemeral_0-828527593">
Oct 02 12:01:22 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:01:22 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:01:22 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:01:22 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:01:22 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:01:22 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:01:22 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:01:22 compute-0 nova_compute[257802]:         <nova:user uuid="f4a02a1717144da38c573ce51c727de8">tempest-ServersWithSpecificFlavorTestJSON-1982850532-project-member</nova:user>
Oct 02 12:01:22 compute-0 nova_compute[257802]:         <nova:project uuid="b6a5858e0d184dd184a3291b74794c14">tempest-ServersWithSpecificFlavorTestJSON-1982850532</nova:project>
Oct 02 12:01:22 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:01:22 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:01:22 compute-0 nova_compute[257802]:         <nova:port uuid="72564b18-ab59-4028-8bb3-e93073a18534">
Oct 02 12:01:22 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:01:22 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:01:22 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:01:22 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <system>
Oct 02 12:01:22 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:01:22 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:01:22 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:01:22 compute-0 nova_compute[257802]:       <entry name="serial">cee549ac-63b2-4eed-b8c5-0bd0948a95d5</entry>
Oct 02 12:01:22 compute-0 nova_compute[257802]:       <entry name="uuid">cee549ac-63b2-4eed-b8c5-0bd0948a95d5</entry>
Oct 02 12:01:22 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     </system>
Oct 02 12:01:22 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:01:22 compute-0 nova_compute[257802]:   <os>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:   </os>
Oct 02 12:01:22 compute-0 nova_compute[257802]:   <features>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:   </features>
Oct 02 12:01:22 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:01:22 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:01:22 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:01:22 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/cee549ac-63b2-4eed-b8c5-0bd0948a95d5_disk">
Oct 02 12:01:22 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:       </source>
Oct 02 12:01:22 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:01:22 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:01:22 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:01:22 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/cee549ac-63b2-4eed-b8c5-0bd0948a95d5_disk.config">
Oct 02 12:01:22 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:       </source>
Oct 02 12:01:22 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:01:22 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:01:22 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:01:22 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:36:35:39"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:       <target dev="tap72564b18-ab"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:01:22 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/cee549ac-63b2-4eed-b8c5-0bd0948a95d5/console.log" append="off"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <video>
Oct 02 12:01:22 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     </video>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:01:22 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:01:22 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:01:22 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:01:22 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:01:22 compute-0 nova_compute[257802]: </domain>
Oct 02 12:01:22 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:01:22 compute-0 nova_compute[257802]: 2025-10-02 12:01:22.981 2 DEBUG nova.compute.manager [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Preparing to wait for external event network-vif-plugged-72564b18-ab59-4028-8bb3-e93073a18534 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:01:22 compute-0 nova_compute[257802]: 2025-10-02 12:01:22.981 2 DEBUG oslo_concurrency.lockutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Acquiring lock "cee549ac-63b2-4eed-b8c5-0bd0948a95d5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:22 compute-0 nova_compute[257802]: 2025-10-02 12:01:22.982 2 DEBUG oslo_concurrency.lockutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "cee549ac-63b2-4eed-b8c5-0bd0948a95d5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:22 compute-0 nova_compute[257802]: 2025-10-02 12:01:22.982 2 DEBUG oslo_concurrency.lockutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "cee549ac-63b2-4eed-b8c5-0bd0948a95d5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:22 compute-0 nova_compute[257802]: 2025-10-02 12:01:22.983 2 DEBUG nova.virt.libvirt.vif [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:00:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-544170697',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-544170697',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(27),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-544170697',id=11,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=27,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBABlpYtD7rDfo5edCO3VO05L7/og1LArqb6dlvrZJFkfLxvT/a212dNPUwff9OZOkCSB3EGYGqoNRjJJC/KPkC0Po6hJhudoH+ymL6NUy6s9m4GPkIqwcNKYTuBlpjjxTw==',key_name='tempest-keypair-199584473',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b6a5858e0d184dd184a3291b74794c14',ramdisk_id='',reservation_id='r-j38q1r6l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1982850532',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1982850532-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:01:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f4a02a1717144da38c573ce51c727de8',uuid=cee549ac-63b2-4eed-b8c5-0bd0948a95d5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "72564b18-ab59-4028-8bb3-e93073a18534", "address": "fa:16:3e:36:35:39", "network": {"id": "797e74ff-2c6b-48fb-807e-b845ad260597", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1853530210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a5858e0d184dd184a3291b74794c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72564b18-ab", "ovs_interfaceid": "72564b18-ab59-4028-8bb3-e93073a18534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:01:22 compute-0 nova_compute[257802]: 2025-10-02 12:01:22.983 2 DEBUG nova.network.os_vif_util [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Converting VIF {"id": "72564b18-ab59-4028-8bb3-e93073a18534", "address": "fa:16:3e:36:35:39", "network": {"id": "797e74ff-2c6b-48fb-807e-b845ad260597", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1853530210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a5858e0d184dd184a3291b74794c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72564b18-ab", "ovs_interfaceid": "72564b18-ab59-4028-8bb3-e93073a18534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:01:22 compute-0 nova_compute[257802]: 2025-10-02 12:01:22.985 2 DEBUG nova.network.os_vif_util [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:35:39,bridge_name='br-int',has_traffic_filtering=True,id=72564b18-ab59-4028-8bb3-e93073a18534,network=Network(797e74ff-2c6b-48fb-807e-b845ad260597),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72564b18-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:01:22 compute-0 nova_compute[257802]: 2025-10-02 12:01:22.986 2 DEBUG os_vif [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:35:39,bridge_name='br-int',has_traffic_filtering=True,id=72564b18-ab59-4028-8bb3-e93073a18534,network=Network(797e74ff-2c6b-48fb-807e-b845ad260597),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72564b18-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:01:22 compute-0 nova_compute[257802]: 2025-10-02 12:01:22.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:22 compute-0 nova_compute[257802]: 2025-10-02 12:01:22.987 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:01:22 compute-0 nova_compute[257802]: 2025-10-02 12:01:22.988 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:01:22 compute-0 nova_compute[257802]: 2025-10-02 12:01:22.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:22 compute-0 nova_compute[257802]: 2025-10-02 12:01:22.993 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72564b18-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:01:22 compute-0 nova_compute[257802]: 2025-10-02 12:01:22.994 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap72564b18-ab, col_values=(('external_ids', {'iface-id': '72564b18-ab59-4028-8bb3-e93073a18534', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:36:35:39', 'vm-uuid': 'cee549ac-63b2-4eed-b8c5-0bd0948a95d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:01:22 compute-0 NetworkManager[44987]: <info>  [1759406482.9965] manager: (tap72564b18-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Oct 02 12:01:22 compute-0 nova_compute[257802]: 2025-10-02 12:01:22.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:22 compute-0 nova_compute[257802]: 2025-10-02 12:01:22.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:01:23 compute-0 nova_compute[257802]: 2025-10-02 12:01:23.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:23 compute-0 nova_compute[257802]: 2025-10-02 12:01:23.003 2 INFO os_vif [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:35:39,bridge_name='br-int',has_traffic_filtering=True,id=72564b18-ab59-4028-8bb3-e93073a18534,network=Network(797e74ff-2c6b-48fb-807e-b845ad260597),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72564b18-ab')
Oct 02 12:01:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:01:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:01:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:23.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:01:23 compute-0 nova_compute[257802]: 2025-10-02 12:01:23.099 2 DEBUG nova.virt.libvirt.driver [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:01:23 compute-0 nova_compute[257802]: 2025-10-02 12:01:23.100 2 DEBUG nova.virt.libvirt.driver [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:01:23 compute-0 nova_compute[257802]: 2025-10-02 12:01:23.100 2 DEBUG nova.virt.libvirt.driver [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] No VIF found with MAC fa:16:3e:36:35:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:01:23 compute-0 nova_compute[257802]: 2025-10-02 12:01:23.100 2 INFO nova.virt.libvirt.driver [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Using config drive
Oct 02 12:01:23 compute-0 nova_compute[257802]: 2025-10-02 12:01:23.128 2 DEBUG nova.storage.rbd_utils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] rbd image cee549ac-63b2-4eed-b8c5-0bd0948a95d5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:01:23 compute-0 nova_compute[257802]: 2025-10-02 12:01:23.134 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406483.105256, b8d4207f-7e3b-4a3c-ad76-60d87d695918 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:01:23 compute-0 nova_compute[257802]: 2025-10-02 12:01:23.134 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] VM Resumed (Lifecycle Event)
Oct 02 12:01:23 compute-0 nova_compute[257802]: 2025-10-02 12:01:23.135 2 DEBUG nova.compute.manager [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:01:23 compute-0 nova_compute[257802]: 2025-10-02 12:01:23.142 2 INFO nova.virt.libvirt.driver [-] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] Instance running successfully.
Oct 02 12:01:23 compute-0 virtqemud[257280]: argument unsupported: QEMU guest agent is not configured
Oct 02 12:01:23 compute-0 nova_compute[257802]: 2025-10-02 12:01:23.144 2 DEBUG nova.virt.libvirt.guest [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 12:01:23 compute-0 nova_compute[257802]: 2025-10-02 12:01:23.144 2 DEBUG nova.virt.libvirt.driver [None req-8cc971a5-0d8f-469a-b725-6f34bcb5f275 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Oct 02 12:01:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:23.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:23 compute-0 nova_compute[257802]: 2025-10-02 12:01:23.174 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:01:23 compute-0 nova_compute[257802]: 2025-10-02 12:01:23.188 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:01:23 compute-0 nova_compute[257802]: 2025-10-02 12:01:23.233 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] During sync_power_state the instance has a pending task (resize_finish). Skip.
Oct 02 12:01:23 compute-0 nova_compute[257802]: 2025-10-02 12:01:23.234 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406483.1058736, b8d4207f-7e3b-4a3c-ad76-60d87d695918 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:01:23 compute-0 nova_compute[257802]: 2025-10-02 12:01:23.234 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] VM Started (Lifecycle Event)
Oct 02 12:01:23 compute-0 nova_compute[257802]: 2025-10-02 12:01:23.269 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:01:23 compute-0 nova_compute[257802]: 2025-10-02 12:01:23.271 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:01:23 compute-0 ceph-mon[73607]: pgmap v1027: 305 pgs: 305 active+clean; 488 MiB data, 509 MiB used, 20 GiB / 21 GiB avail; 6.4 KiB/s rd, 31 KiB/s wr, 9 op/s
Oct 02 12:01:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2273962547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:01:23 compute-0 nova_compute[257802]: 2025-10-02 12:01:23.894 2 INFO nova.virt.libvirt.driver [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Creating config drive at /var/lib/nova/instances/cee549ac-63b2-4eed-b8c5-0bd0948a95d5/disk.config
Oct 02 12:01:23 compute-0 nova_compute[257802]: 2025-10-02 12:01:23.901 2 DEBUG oslo_concurrency.processutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cee549ac-63b2-4eed-b8c5-0bd0948a95d5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp03ujfni execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:01:24 compute-0 nova_compute[257802]: 2025-10-02 12:01:24.039 2 DEBUG oslo_concurrency.processutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cee549ac-63b2-4eed-b8c5-0bd0948a95d5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp03ujfni" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:01:24 compute-0 nova_compute[257802]: 2025-10-02 12:01:24.081 2 DEBUG nova.storage.rbd_utils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] rbd image cee549ac-63b2-4eed-b8c5-0bd0948a95d5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:01:24 compute-0 nova_compute[257802]: 2025-10-02 12:01:24.085 2 DEBUG oslo_concurrency.processutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cee549ac-63b2-4eed-b8c5-0bd0948a95d5/disk.config cee549ac-63b2-4eed-b8c5-0bd0948a95d5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:01:24 compute-0 nova_compute[257802]: 2025-10-02 12:01:24.321 2 DEBUG nova.network.neutron [req-28c08085-bd9c-4553-bf60-37b642766c86 req-a0d78010-3d82-4ad1-86b5-361db550e15c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Updated VIF entry in instance network info cache for port 72564b18-ab59-4028-8bb3-e93073a18534. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:01:24 compute-0 nova_compute[257802]: 2025-10-02 12:01:24.322 2 DEBUG nova.network.neutron [req-28c08085-bd9c-4553-bf60-37b642766c86 req-a0d78010-3d82-4ad1-86b5-361db550e15c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Updating instance_info_cache with network_info: [{"id": "72564b18-ab59-4028-8bb3-e93073a18534", "address": "fa:16:3e:36:35:39", "network": {"id": "797e74ff-2c6b-48fb-807e-b845ad260597", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1853530210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a5858e0d184dd184a3291b74794c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72564b18-ab", "ovs_interfaceid": "72564b18-ab59-4028-8bb3-e93073a18534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:01:24 compute-0 nova_compute[257802]: 2025-10-02 12:01:24.368 2 DEBUG oslo_concurrency.lockutils [req-28c08085-bd9c-4553-bf60-37b642766c86 req-a0d78010-3d82-4ad1-86b5-361db550e15c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-cee549ac-63b2-4eed-b8c5-0bd0948a95d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:01:24 compute-0 nova_compute[257802]: 2025-10-02 12:01:24.606 2 DEBUG oslo_concurrency.processutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cee549ac-63b2-4eed-b8c5-0bd0948a95d5/disk.config cee549ac-63b2-4eed-b8c5-0bd0948a95d5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:01:24 compute-0 nova_compute[257802]: 2025-10-02 12:01:24.607 2 INFO nova.virt.libvirt.driver [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Deleting local config drive /var/lib/nova/instances/cee549ac-63b2-4eed-b8c5-0bd0948a95d5/disk.config because it was imported into RBD.
Oct 02 12:01:24 compute-0 kernel: tap72564b18-ab: entered promiscuous mode
Oct 02 12:01:24 compute-0 NetworkManager[44987]: <info>  [1759406484.6580] manager: (tap72564b18-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Oct 02 12:01:24 compute-0 systemd-udevd[269360]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:01:24 compute-0 ovn_controller[148183]: 2025-10-02T12:01:24Z|00054|binding|INFO|Claiming lport 72564b18-ab59-4028-8bb3-e93073a18534 for this chassis.
Oct 02 12:01:24 compute-0 ovn_controller[148183]: 2025-10-02T12:01:24Z|00055|binding|INFO|72564b18-ab59-4028-8bb3-e93073a18534: Claiming fa:16:3e:36:35:39 10.100.0.6
Oct 02 12:01:24 compute-0 nova_compute[257802]: 2025-10-02 12:01:24.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:24 compute-0 NetworkManager[44987]: <info>  [1759406484.6820] device (tap72564b18-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:01:24 compute-0 NetworkManager[44987]: <info>  [1759406484.6834] device (tap72564b18-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:01:24 compute-0 systemd-machined[211836]: New machine qemu-8-instance-0000000b.
Oct 02 12:01:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:24.696 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:35:39 10.100.0.6'], port_security=['fa:16:3e:36:35:39 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'cee549ac-63b2-4eed-b8c5-0bd0948a95d5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-797e74ff-2c6b-48fb-807e-b845ad260597', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b6a5858e0d184dd184a3291b74794c14', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1d7d59e9-1a48-4c19-9701-fea254de36f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0288ec27-954c-4f05-9e09-da71a7e0f954, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=72564b18-ab59-4028-8bb3-e93073a18534) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:01:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:24.697 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 72564b18-ab59-4028-8bb3-e93073a18534 in datapath 797e74ff-2c6b-48fb-807e-b845ad260597 bound to our chassis
Oct 02 12:01:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:24.700 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 797e74ff-2c6b-48fb-807e-b845ad260597
Oct 02 12:01:24 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-0000000b.
Oct 02 12:01:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:24.713 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[46059a28-554a-439f-9f03-a868cf920afe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:24.714 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap797e74ff-21 in ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:01:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:24.717 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap797e74ff-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:01:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:24.717 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[85657c3f-c09d-46fc-9ffd-bb5648fc9cd1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:24.718 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ee239c22-5f7a-4b2d-9500-1aee7d06ecac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:24.739 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[05cb05cb-f39e-42de-8825-ca14426ddcfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:24 compute-0 nova_compute[257802]: 2025-10-02 12:01:24.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:24 compute-0 ovn_controller[148183]: 2025-10-02T12:01:24Z|00056|binding|INFO|Setting lport 72564b18-ab59-4028-8bb3-e93073a18534 ovn-installed in OVS
Oct 02 12:01:24 compute-0 ovn_controller[148183]: 2025-10-02T12:01:24Z|00057|binding|INFO|Setting lport 72564b18-ab59-4028-8bb3-e93073a18534 up in Southbound
Oct 02 12:01:24 compute-0 nova_compute[257802]: 2025-10-02 12:01:24.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:24.783 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f8f4a40e-f997-4e54-a545-24547f103fb6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:24.822 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[0b3c53b8-89ba-4298-9975-27b64b6ab601]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:24.828 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3b823762-1dfc-4ea4-9ad8-db44954697d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:24 compute-0 NetworkManager[44987]: <info>  [1759406484.8296] manager: (tap797e74ff-20): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Oct 02 12:01:24 compute-0 systemd-udevd[269439]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:01:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:24.880 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[a620b18d-4e01-47c9-a58c-b05216c04c75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:24.885 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[ca610f40-157d-4a62-a09b-c3d57326c294]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 488 MiB data, 509 MiB used, 20 GiB / 21 GiB avail; 723 KiB/s rd, 20 KiB/s wr, 41 op/s
Oct 02 12:01:24 compute-0 NetworkManager[44987]: <info>  [1759406484.9165] device (tap797e74ff-20): carrier: link connected
Oct 02 12:01:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:24.923 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[2da756c7-12c0-4a0b-961e-d2b4460c53bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:24.941 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[15fe593b-4a79-48a1-a555-d26ced8305b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap797e74ff-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:79:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455254, 'reachable_time': 17223, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269484, 'error': None, 'target': 'ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:24.963 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5a137479-14d9-420c-9724-c6839c9cb5ac]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefc:79db'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455254, 'tstamp': 455254}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269495, 'error': None, 'target': 'ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:24 compute-0 sudo[269471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:01:24 compute-0 sudo[269471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:24 compute-0 sudo[269471]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:24.987 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d35b66ce-0d86-4008-8299-fb7b2e9efa15]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap797e74ff-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:79:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455254, 'reachable_time': 17223, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269512, 'error': None, 'target': 'ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.010 2 DEBUG oslo_concurrency.lockutils [None req-ba9a7a52-bb05-4fce-b0c3-4d25cc6f9b8d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquiring lock "refresh_cache-b8d4207f-7e3b-4a3c-ad76-60d87d695918" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.011 2 DEBUG oslo_concurrency.lockutils [None req-ba9a7a52-bb05-4fce-b0c3-4d25cc6f9b8d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquired lock "refresh_cache-b8d4207f-7e3b-4a3c-ad76-60d87d695918" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.011 2 DEBUG nova.network.neutron [None req-ba9a7a52-bb05-4fce-b0c3-4d25cc6f9b8d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:01:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:01:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:25.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:25.027 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7b4ad1e5-3bef-4503-9d5a-e6f778e523a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:25 compute-0 sudo[269515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:01:25 compute-0 sudo[269515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:25 compute-0 sudo[269515]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.082 2 DEBUG nova.compute.manager [req-bbe59ffe-4e7f-430d-9eb6-40f8de75b8f4 req-ff9f35a9-3709-4d78-84de-909c04d9a5f3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Received event network-vif-plugged-72564b18-ab59-4028-8bb3-e93073a18534 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.083 2 DEBUG oslo_concurrency.lockutils [req-bbe59ffe-4e7f-430d-9eb6-40f8de75b8f4 req-ff9f35a9-3709-4d78-84de-909c04d9a5f3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "cee549ac-63b2-4eed-b8c5-0bd0948a95d5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.083 2 DEBUG oslo_concurrency.lockutils [req-bbe59ffe-4e7f-430d-9eb6-40f8de75b8f4 req-ff9f35a9-3709-4d78-84de-909c04d9a5f3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cee549ac-63b2-4eed-b8c5-0bd0948a95d5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.084 2 DEBUG oslo_concurrency.lockutils [req-bbe59ffe-4e7f-430d-9eb6-40f8de75b8f4 req-ff9f35a9-3709-4d78-84de-909c04d9a5f3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cee549ac-63b2-4eed-b8c5-0bd0948a95d5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.084 2 DEBUG nova.compute.manager [req-bbe59ffe-4e7f-430d-9eb6-40f8de75b8f4 req-ff9f35a9-3709-4d78-84de-909c04d9a5f3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Processing event network-vif-plugged-72564b18-ab59-4028-8bb3-e93073a18534 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:01:25 compute-0 ceph-mon[73607]: pgmap v1028: 305 pgs: 305 active+clean; 488 MiB data, 509 MiB used, 20 GiB / 21 GiB avail; 723 KiB/s rd, 20 KiB/s wr, 41 op/s
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:25.105 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a21fa319-1a6c-49da-a945-38d7185936e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:25.108 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap797e74ff-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:25.108 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:25.109 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap797e74ff-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:25 compute-0 kernel: tap797e74ff-20: entered promiscuous mode
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:25 compute-0 NetworkManager[44987]: <info>  [1759406485.1199] manager: (tap797e74ff-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:25.121 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap797e74ff-20, col_values=(('external_ids', {'iface-id': '9ae39c1e-f098-4856-80de-d86e2593ff3e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:25.125 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/797e74ff-2c6b-48fb-807e-b845ad260597.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/797e74ff-2c6b-48fb-807e-b845ad260597.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:25.126 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[828ec8ad-a1b0-4d2d-8db2-1c7342aa8f82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:25 compute-0 sudo[269557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:25.127 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-797e74ff-2c6b-48fb-807e-b845ad260597
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/797e74ff-2c6b-48fb-807e-b845ad260597.pid.haproxy
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 797e74ff-2c6b-48fb-807e-b845ad260597
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:01:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:25.128 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597', 'env', 'PROCESS_TAG=haproxy-797e74ff-2c6b-48fb-807e-b845ad260597', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/797e74ff-2c6b-48fb-807e-b845ad260597.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:01:25 compute-0 ovn_controller[148183]: 2025-10-02T12:01:25Z|00058|binding|INFO|Releasing lport 9ae39c1e-f098-4856-80de-d86e2593ff3e from this chassis (sb_readonly=0)
Oct 02 12:01:25 compute-0 sudo[269557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:25 compute-0 sudo[269557]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.145 2 DEBUG nova.network.neutron [None req-ba9a7a52-bb05-4fce-b0c3-4d25cc6f9b8d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:25.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:25 compute-0 sudo[269593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:01:25 compute-0 sudo[269593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.408 2 DEBUG nova.network.neutron [None req-ba9a7a52-bb05-4fce-b0c3-4d25cc6f9b8d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.428 2 DEBUG oslo_concurrency.lockutils [None req-ba9a7a52-bb05-4fce-b0c3-4d25cc6f9b8d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Releasing lock "refresh_cache-b8d4207f-7e3b-4a3c-ad76-60d87d695918" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:01:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:01:25 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/378007941' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:01:25 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Oct 02 12:01:25 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000a.scope: Consumed 3.229s CPU time.
Oct 02 12:01:25 compute-0 systemd-machined[211836]: Machine qemu-7-instance-0000000a terminated.
Oct 02 12:01:25 compute-0 podman[269661]: 2025-10-02 12:01:25.59163163 +0000 UTC m=+0.101747001 container create 743831dcd6b86075c910ea6d5f5dfc8243dfb5916b65d5c13091e2c5fb0ef2eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 02 12:01:25 compute-0 podman[269661]: 2025-10-02 12:01:25.519742137 +0000 UTC m=+0.029857538 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:01:25 compute-0 systemd[1]: Started libpod-conmon-743831dcd6b86075c910ea6d5f5dfc8243dfb5916b65d5c13091e2c5fb0ef2eb.scope.
Oct 02 12:01:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:01:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/743f29f4e6528aed4fc82b14802579c266cebf553315af49991364922a8f9e87/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.676 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406485.6763358, cee549ac-63b2-4eed-b8c5-0bd0948a95d5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.677 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] VM Started (Lifecycle Event)
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.679 2 DEBUG nova.compute.manager [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.689 2 DEBUG nova.virt.libvirt.driver [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.694 2 INFO nova.virt.libvirt.driver [-] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Instance spawned successfully.
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.694 2 DEBUG nova.virt.libvirt.driver [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.696 2 INFO nova.virt.libvirt.driver [-] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] Instance destroyed successfully.
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.697 2 DEBUG nova.objects.instance [None req-ba9a7a52-bb05-4fce-b0c3-4d25cc6f9b8d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lazy-loading 'resources' on Instance uuid b8d4207f-7e3b-4a3c-ad76-60d87d695918 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.704 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.710 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:01:25 compute-0 podman[269661]: 2025-10-02 12:01:25.723483722 +0000 UTC m=+0.233599103 container init 743831dcd6b86075c910ea6d5f5dfc8243dfb5916b65d5c13091e2c5fb0ef2eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:01:25 compute-0 podman[269661]: 2025-10-02 12:01:25.729597362 +0000 UTC m=+0.239712733 container start 743831dcd6b86075c910ea6d5f5dfc8243dfb5916b65d5c13091e2c5fb0ef2eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.732 2 DEBUG nova.virt.libvirt.driver [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:01:25 compute-0 sudo[269593]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.733 2 DEBUG nova.virt.libvirt.driver [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.733 2 DEBUG nova.virt.libvirt.driver [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.733 2 DEBUG nova.virt.libvirt.driver [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.735 2 DEBUG nova.virt.libvirt.driver [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.736 2 DEBUG nova.virt.libvirt.driver [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.740 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.741 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406485.677402, cee549ac-63b2-4eed-b8c5-0bd0948a95d5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.741 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] VM Paused (Lifecycle Event)
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.743 2 DEBUG oslo_concurrency.lockutils [None req-ba9a7a52-bb05-4fce-b0c3-4d25cc6f9b8d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.743 2 DEBUG oslo_concurrency.lockutils [None req-ba9a7a52-bb05-4fce-b0c3-4d25cc6f9b8d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:25 compute-0 neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597[269682]: [NOTICE]   (269700) : New worker (269702) forked
Oct 02 12:01:25 compute-0 neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597[269682]: [NOTICE]   (269700) : Loading success.
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.778 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.779 2 DEBUG nova.objects.instance [None req-ba9a7a52-bb05-4fce-b0c3-4d25cc6f9b8d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lazy-loading 'migration_context' on Instance uuid b8d4207f-7e3b-4a3c-ad76-60d87d695918 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.782 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406485.6869938, cee549ac-63b2-4eed-b8c5-0bd0948a95d5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.782 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] VM Resumed (Lifecycle Event)
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.812 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.822 2 INFO nova.compute.manager [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Took 23.45 seconds to spawn the instance on the hypervisor.
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.822 2 DEBUG nova.compute.manager [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.823 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.849 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.885 2 INFO nova.compute.manager [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Took 24.83 seconds to build instance.
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.904 2 DEBUG oslo_concurrency.lockutils [None req-ae362741-4ba1-4b05-baf3-5ec2d200b057 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "cee549ac-63b2-4eed-b8c5-0bd0948a95d5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 25.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:25 compute-0 nova_compute[257802]: 2025-10-02 12:01:25.950 2 DEBUG oslo_concurrency.processutils [None req-ba9a7a52-bb05-4fce-b0c3-4d25cc6f9b8d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:01:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/378007941' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:01:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:01:26 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2527703237' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:01:26 compute-0 nova_compute[257802]: 2025-10-02 12:01:26.432 2 DEBUG oslo_concurrency.processutils [None req-ba9a7a52-bb05-4fce-b0c3-4d25cc6f9b8d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:01:26 compute-0 nova_compute[257802]: 2025-10-02 12:01:26.436 2 DEBUG nova.compute.manager [req-54e0ebe7-4e84-4360-853a-49789a32d0ea req-4c24a2b8-2fd1-47e4-9a00-3d6902c4886b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received event network-vif-unplugged-cc377346-4a61-4e38-b682-f2111cc47ffd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:01:26 compute-0 nova_compute[257802]: 2025-10-02 12:01:26.437 2 DEBUG oslo_concurrency.lockutils [req-54e0ebe7-4e84-4360-853a-49789a32d0ea req-4c24a2b8-2fd1-47e4-9a00-3d6902c4886b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "cd9deba5-2505-4edd-ac7c-483615217473-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:26 compute-0 nova_compute[257802]: 2025-10-02 12:01:26.437 2 DEBUG oslo_concurrency.lockutils [req-54e0ebe7-4e84-4360-853a-49789a32d0ea req-4c24a2b8-2fd1-47e4-9a00-3d6902c4886b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:26 compute-0 nova_compute[257802]: 2025-10-02 12:01:26.437 2 DEBUG oslo_concurrency.lockutils [req-54e0ebe7-4e84-4360-853a-49789a32d0ea req-4c24a2b8-2fd1-47e4-9a00-3d6902c4886b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:26 compute-0 nova_compute[257802]: 2025-10-02 12:01:26.437 2 DEBUG nova.compute.manager [req-54e0ebe7-4e84-4360-853a-49789a32d0ea req-4c24a2b8-2fd1-47e4-9a00-3d6902c4886b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] No waiting events found dispatching network-vif-unplugged-cc377346-4a61-4e38-b682-f2111cc47ffd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:01:26 compute-0 nova_compute[257802]: 2025-10-02 12:01:26.437 2 DEBUG nova.compute.manager [req-54e0ebe7-4e84-4360-853a-49789a32d0ea req-4c24a2b8-2fd1-47e4-9a00-3d6902c4886b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received event network-vif-unplugged-cc377346-4a61-4e38-b682-f2111cc47ffd for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:01:26 compute-0 nova_compute[257802]: 2025-10-02 12:01:26.443 2 DEBUG nova.compute.provider_tree [None req-ba9a7a52-bb05-4fce-b0c3-4d25cc6f9b8d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:01:26 compute-0 nova_compute[257802]: 2025-10-02 12:01:26.459 2 DEBUG nova.scheduler.client.report [None req-ba9a7a52-bb05-4fce-b0c3-4d25cc6f9b8d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:01:26 compute-0 nova_compute[257802]: 2025-10-02 12:01:26.520 2 DEBUG oslo_concurrency.lockutils [None req-ba9a7a52-bb05-4fce-b0c3-4d25cc6f9b8d 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 0.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:26.647 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:01:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 488 MiB data, 509 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 20 KiB/s wr, 71 op/s
Oct 02 12:01:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:26.918 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:26.919 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:26.920 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:01:27 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:01:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:01:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:27.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:27 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:01:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:27.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2527703237' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:01:27 compute-0 ceph-mon[73607]: pgmap v1029: 305 pgs: 305 active+clean; 488 MiB data, 509 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 20 KiB/s wr, 71 op/s
Oct 02 12:01:27 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:01:27 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:01:27 compute-0 nova_compute[257802]: 2025-10-02 12:01:27.200 2 DEBUG nova.compute.manager [req-67495626-6190-4401-9f4d-d2786c58d47b req-b4d1bbb7-7f1d-4c6e-9114-3bfa06141608 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Received event network-vif-plugged-72564b18-ab59-4028-8bb3-e93073a18534 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:01:27 compute-0 nova_compute[257802]: 2025-10-02 12:01:27.201 2 DEBUG oslo_concurrency.lockutils [req-67495626-6190-4401-9f4d-d2786c58d47b req-b4d1bbb7-7f1d-4c6e-9114-3bfa06141608 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "cee549ac-63b2-4eed-b8c5-0bd0948a95d5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:27 compute-0 nova_compute[257802]: 2025-10-02 12:01:27.201 2 DEBUG oslo_concurrency.lockutils [req-67495626-6190-4401-9f4d-d2786c58d47b req-b4d1bbb7-7f1d-4c6e-9114-3bfa06141608 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cee549ac-63b2-4eed-b8c5-0bd0948a95d5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:27 compute-0 nova_compute[257802]: 2025-10-02 12:01:27.201 2 DEBUG oslo_concurrency.lockutils [req-67495626-6190-4401-9f4d-d2786c58d47b req-b4d1bbb7-7f1d-4c6e-9114-3bfa06141608 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cee549ac-63b2-4eed-b8c5-0bd0948a95d5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:27 compute-0 nova_compute[257802]: 2025-10-02 12:01:27.201 2 DEBUG nova.compute.manager [req-67495626-6190-4401-9f4d-d2786c58d47b req-b4d1bbb7-7f1d-4c6e-9114-3bfa06141608 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] No waiting events found dispatching network-vif-plugged-72564b18-ab59-4028-8bb3-e93073a18534 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:01:27 compute-0 nova_compute[257802]: 2025-10-02 12:01:27.202 2 WARNING nova.compute.manager [req-67495626-6190-4401-9f4d-d2786c58d47b req-b4d1bbb7-7f1d-4c6e-9114-3bfa06141608 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Received unexpected event network-vif-plugged-72564b18-ab59-4028-8bb3-e93073a18534 for instance with vm_state active and task_state None.
Oct 02 12:01:27 compute-0 NetworkManager[44987]: <info>  [1759406487.6186] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/41)
Oct 02 12:01:27 compute-0 NetworkManager[44987]: <info>  [1759406487.6192] device (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 12:01:27 compute-0 NetworkManager[44987]: <info>  [1759406487.6200] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/42)
Oct 02 12:01:27 compute-0 NetworkManager[44987]: <info>  [1759406487.6203] device (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 12:01:27 compute-0 NetworkManager[44987]: <info>  [1759406487.6209] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Oct 02 12:01:27 compute-0 NetworkManager[44987]: <info>  [1759406487.6213] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Oct 02 12:01:27 compute-0 NetworkManager[44987]: <info>  [1759406487.6216] device (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 02 12:01:27 compute-0 NetworkManager[44987]: <info>  [1759406487.6218] device (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 02 12:01:27 compute-0 nova_compute[257802]: 2025-10-02 12:01:27.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:27 compute-0 nova_compute[257802]: 2025-10-02 12:01:27.685 2 INFO nova.compute.manager [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Took 5.03 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Oct 02 12:01:27 compute-0 nova_compute[257802]: 2025-10-02 12:01:27.685 2 DEBUG nova.compute.manager [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:01:27 compute-0 nova_compute[257802]: 2025-10-02 12:01:27.703 2 DEBUG nova.compute.manager [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3_47x7bn',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='cd9deba5-2505-4edd-ac7c-483615217473',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(8a32ad27-767e-4f4b-aadf-beebd224c85c),old_vol_attachment_ids={134b86f3-5f3c-45b7-9cd8-0ead1590c407='de5c36e5-1c4e-4ee2-9ef9-4f685e9dc836'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Oct 02 12:01:27 compute-0 nova_compute[257802]: 2025-10-02 12:01:27.707 2 DEBUG nova.objects.instance [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lazy-loading 'migration_context' on Instance uuid cd9deba5-2505-4edd-ac7c-483615217473 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:01:27 compute-0 nova_compute[257802]: 2025-10-02 12:01:27.708 2 DEBUG nova.virt.libvirt.driver [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Oct 02 12:01:27 compute-0 nova_compute[257802]: 2025-10-02 12:01:27.710 2 DEBUG nova.virt.libvirt.driver [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Oct 02 12:01:27 compute-0 nova_compute[257802]: 2025-10-02 12:01:27.710 2 DEBUG nova.virt.libvirt.driver [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Oct 02 12:01:27 compute-0 nova_compute[257802]: 2025-10-02 12:01:27.730 2 DEBUG nova.virt.libvirt.migration [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Find same serial number: pos=1, serial=134b86f3-5f3c-45b7-9cd8-0ead1590c407 _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242
Oct 02 12:01:27 compute-0 nova_compute[257802]: 2025-10-02 12:01:27.731 2 DEBUG nova.virt.libvirt.vif [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:00:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1192256805',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-1192256805',id=9,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:00:40Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d977ad6a90874946819537242925a8f0',ramdisk_id='',reservation_id='r-rhco07g5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-959655280',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-959655280-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:01:04Z,user_data=None,user_id='b54b5e15e4c94d1f95a272981e9d9a89',uuid=cd9deba5-2505-4edd-ac7c-483615217473,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cc377346-4a61-4e38-b682-f2111cc47ffd", "address": "fa:16:3e:1b:73:25", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapcc377346-4a", "ovs_interfaceid": "cc377346-4a61-4e38-b682-f2111cc47ffd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:01:27 compute-0 nova_compute[257802]: 2025-10-02 12:01:27.732 2 DEBUG nova.network.os_vif_util [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Converting VIF {"id": "cc377346-4a61-4e38-b682-f2111cc47ffd", "address": "fa:16:3e:1b:73:25", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapcc377346-4a", "ovs_interfaceid": "cc377346-4a61-4e38-b682-f2111cc47ffd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:01:27 compute-0 nova_compute[257802]: 2025-10-02 12:01:27.733 2 DEBUG nova.network.os_vif_util [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:73:25,bridge_name='br-int',has_traffic_filtering=True,id=cc377346-4a61-4e38-b682-f2111cc47ffd,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc377346-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:01:27 compute-0 nova_compute[257802]: 2025-10-02 12:01:27.733 2 DEBUG nova.virt.libvirt.migration [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Updating guest XML with vif config: <interface type="ethernet">
Oct 02 12:01:27 compute-0 nova_compute[257802]:   <mac address="fa:16:3e:1b:73:25"/>
Oct 02 12:01:27 compute-0 nova_compute[257802]:   <model type="virtio"/>
Oct 02 12:01:27 compute-0 nova_compute[257802]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:01:27 compute-0 nova_compute[257802]:   <mtu size="1442"/>
Oct 02 12:01:27 compute-0 nova_compute[257802]:   <target dev="tapcc377346-4a"/>
Oct 02 12:01:27 compute-0 nova_compute[257802]: </interface>
Oct 02 12:01:27 compute-0 nova_compute[257802]:  _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Oct 02 12:01:27 compute-0 nova_compute[257802]: 2025-10-02 12:01:27.734 2 DEBUG nova.virt.libvirt.driver [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Oct 02 12:01:27 compute-0 nova_compute[257802]: 2025-10-02 12:01:27.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:01:27 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:01:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:01:27 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:01:27 compute-0 ovn_controller[148183]: 2025-10-02T12:01:27Z|00059|binding|INFO|Releasing lport a3d6bcff-8fb0-4a18-b382-3921f86e4cde from this chassis (sb_readonly=0)
Oct 02 12:01:27 compute-0 ovn_controller[148183]: 2025-10-02T12:01:27Z|00060|binding|INFO|Releasing lport a0e52ac7-beb4-4d4b-863a-9b95be4c9e74 from this chassis (sb_readonly=0)
Oct 02 12:01:27 compute-0 ovn_controller[148183]: 2025-10-02T12:01:27Z|00061|binding|INFO|Releasing lport 9ae39c1e-f098-4856-80de-d86e2593ff3e from this chassis (sb_readonly=0)
Oct 02 12:01:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:01:27 compute-0 nova_compute[257802]: 2025-10-02 12:01:27.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:27 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:01:27 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 3449e5a2-9d43-42cb-b6d8-b8833f4e0ba8 does not exist
Oct 02 12:01:27 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 2673b58d-5437-4125-936a-3c803e4ac2b2 does not exist
Oct 02 12:01:27 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 47878c76-0518-4510-94d6-ffa503a2a5f4 does not exist
Oct 02 12:01:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:01:27 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:01:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:01:27 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:01:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:01:27 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:01:27 compute-0 nova_compute[257802]: 2025-10-02 12:01:27.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:01:28 compute-0 sudo[269735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:01:28 compute-0 sudo[269735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:28 compute-0 sudo[269735]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:28 compute-0 sudo[269773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:01:28 compute-0 podman[269759]: 2025-10-02 12:01:28.108039147 +0000 UTC m=+0.065563869 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 12:01:28 compute-0 sudo[269773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:28 compute-0 sudo[269773]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:28 compute-0 podman[269760]: 2025-10-02 12:01:28.140654411 +0000 UTC m=+0.094076282 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 12:01:28 compute-0 sudo[269823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:01:28 compute-0 sudo[269823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:28 compute-0 sudo[269823]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:28 compute-0 nova_compute[257802]: 2025-10-02 12:01:28.213 2 DEBUG nova.virt.libvirt.migration [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Oct 02 12:01:28 compute-0 nova_compute[257802]: 2025-10-02 12:01:28.213 2 INFO nova.virt.libvirt.migration [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Increasing downtime to 50 ms after 0 sec elapsed time
Oct 02 12:01:28 compute-0 sudo[269848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:01:28 compute-0 sudo[269848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:01:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:01:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:01:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:01:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:01:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:01:28 compute-0 nova_compute[257802]: 2025-10-02 12:01:28.292 2 INFO nova.virt.libvirt.driver [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Oct 02 12:01:28 compute-0 podman[269914]: 2025-10-02 12:01:28.579388503 +0000 UTC m=+0.058715560 container create dec0c989028281d1bf7c4396e4769c8356f3afaed48056ae132bfcfcc1c9c738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:01:28 compute-0 nova_compute[257802]: 2025-10-02 12:01:28.583 2 DEBUG nova.compute.manager [req-6ab498e4-d225-48e5-99e9-e43e3b374642 req-8c7f9700-d8f7-4f26-9183-8fc93a329078 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received event network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:01:28 compute-0 nova_compute[257802]: 2025-10-02 12:01:28.584 2 DEBUG oslo_concurrency.lockutils [req-6ab498e4-d225-48e5-99e9-e43e3b374642 req-8c7f9700-d8f7-4f26-9183-8fc93a329078 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "cd9deba5-2505-4edd-ac7c-483615217473-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:28 compute-0 nova_compute[257802]: 2025-10-02 12:01:28.584 2 DEBUG oslo_concurrency.lockutils [req-6ab498e4-d225-48e5-99e9-e43e3b374642 req-8c7f9700-d8f7-4f26-9183-8fc93a329078 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:28 compute-0 nova_compute[257802]: 2025-10-02 12:01:28.584 2 DEBUG oslo_concurrency.lockutils [req-6ab498e4-d225-48e5-99e9-e43e3b374642 req-8c7f9700-d8f7-4f26-9183-8fc93a329078 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:28 compute-0 nova_compute[257802]: 2025-10-02 12:01:28.584 2 DEBUG nova.compute.manager [req-6ab498e4-d225-48e5-99e9-e43e3b374642 req-8c7f9700-d8f7-4f26-9183-8fc93a329078 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] No waiting events found dispatching network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:01:28 compute-0 nova_compute[257802]: 2025-10-02 12:01:28.584 2 WARNING nova.compute.manager [req-6ab498e4-d225-48e5-99e9-e43e3b374642 req-8c7f9700-d8f7-4f26-9183-8fc93a329078 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received unexpected event network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd for instance with vm_state active and task_state migrating.
Oct 02 12:01:28 compute-0 nova_compute[257802]: 2025-10-02 12:01:28.584 2 DEBUG nova.compute.manager [req-6ab498e4-d225-48e5-99e9-e43e3b374642 req-8c7f9700-d8f7-4f26-9183-8fc93a329078 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received event network-changed-cc377346-4a61-4e38-b682-f2111cc47ffd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:01:28 compute-0 nova_compute[257802]: 2025-10-02 12:01:28.585 2 DEBUG nova.compute.manager [req-6ab498e4-d225-48e5-99e9-e43e3b374642 req-8c7f9700-d8f7-4f26-9183-8fc93a329078 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Refreshing instance network info cache due to event network-changed-cc377346-4a61-4e38-b682-f2111cc47ffd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:01:28 compute-0 nova_compute[257802]: 2025-10-02 12:01:28.585 2 DEBUG oslo_concurrency.lockutils [req-6ab498e4-d225-48e5-99e9-e43e3b374642 req-8c7f9700-d8f7-4f26-9183-8fc93a329078 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-cd9deba5-2505-4edd-ac7c-483615217473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:01:28 compute-0 nova_compute[257802]: 2025-10-02 12:01:28.586 2 DEBUG oslo_concurrency.lockutils [req-6ab498e4-d225-48e5-99e9-e43e3b374642 req-8c7f9700-d8f7-4f26-9183-8fc93a329078 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-cd9deba5-2505-4edd-ac7c-483615217473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:01:28 compute-0 nova_compute[257802]: 2025-10-02 12:01:28.586 2 DEBUG nova.network.neutron [req-6ab498e4-d225-48e5-99e9-e43e3b374642 req-8c7f9700-d8f7-4f26-9183-8fc93a329078 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Refreshing network info cache for port cc377346-4a61-4e38-b682-f2111cc47ffd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:01:28 compute-0 systemd[1]: Started libpod-conmon-dec0c989028281d1bf7c4396e4769c8356f3afaed48056ae132bfcfcc1c9c738.scope.
Oct 02 12:01:28 compute-0 podman[269914]: 2025-10-02 12:01:28.546149493 +0000 UTC m=+0.025476580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:01:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:01:28 compute-0 podman[269914]: 2025-10-02 12:01:28.658892054 +0000 UTC m=+0.138219141 container init dec0c989028281d1bf7c4396e4769c8356f3afaed48056ae132bfcfcc1c9c738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 12:01:28 compute-0 podman[269914]: 2025-10-02 12:01:28.664896132 +0000 UTC m=+0.144223199 container start dec0c989028281d1bf7c4396e4769c8356f3afaed48056ae132bfcfcc1c9c738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_driscoll, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:01:28 compute-0 elastic_driscoll[269931]: 167 167
Oct 02 12:01:28 compute-0 systemd[1]: libpod-dec0c989028281d1bf7c4396e4769c8356f3afaed48056ae132bfcfcc1c9c738.scope: Deactivated successfully.
Oct 02 12:01:28 compute-0 podman[269914]: 2025-10-02 12:01:28.673156016 +0000 UTC m=+0.152483093 container attach dec0c989028281d1bf7c4396e4769c8356f3afaed48056ae132bfcfcc1c9c738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:01:28 compute-0 podman[269914]: 2025-10-02 12:01:28.673485573 +0000 UTC m=+0.152812640 container died dec0c989028281d1bf7c4396e4769c8356f3afaed48056ae132bfcfcc1c9c738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_driscoll, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 12:01:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bbe110a61e05bb56c116dc198d515c3a9b3b5fa4ebffb753c4b29036e7b90bc-merged.mount: Deactivated successfully.
Oct 02 12:01:28 compute-0 podman[269914]: 2025-10-02 12:01:28.730476379 +0000 UTC m=+0.209803446 container remove dec0c989028281d1bf7c4396e4769c8356f3afaed48056ae132bfcfcc1c9c738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_driscoll, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:01:28 compute-0 systemd[1]: libpod-conmon-dec0c989028281d1bf7c4396e4769c8356f3afaed48056ae132bfcfcc1c9c738.scope: Deactivated successfully.
Oct 02 12:01:28 compute-0 nova_compute[257802]: 2025-10-02 12:01:28.796 2 DEBUG nova.virt.libvirt.migration [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Oct 02 12:01:28 compute-0 nova_compute[257802]: 2025-10-02 12:01:28.798 2 DEBUG nova.virt.libvirt.migration [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Oct 02 12:01:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 488 MiB data, 509 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 21 KiB/s wr, 132 op/s
Oct 02 12:01:28 compute-0 podman[269954]: 2025-10-02 12:01:28.933756283 +0000 UTC m=+0.067578047 container create b7c867748b8a830625c17f44c0b6bad7d9cde34604f5e8e1356965b9384cfc9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_colden, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 12:01:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Oct 02 12:01:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Oct 02 12:01:28 compute-0 systemd[1]: Started libpod-conmon-b7c867748b8a830625c17f44c0b6bad7d9cde34604f5e8e1356965b9384cfc9e.scope.
Oct 02 12:01:28 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Oct 02 12:01:28 compute-0 podman[269954]: 2025-10-02 12:01:28.889930402 +0000 UTC m=+0.023752186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:01:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:01:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a6c8451365d2e1156550cd5e08f73339bee5c4d5d71933e4dadf3686f944079/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:01:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a6c8451365d2e1156550cd5e08f73339bee5c4d5d71933e4dadf3686f944079/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:01:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a6c8451365d2e1156550cd5e08f73339bee5c4d5d71933e4dadf3686f944079/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:01:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a6c8451365d2e1156550cd5e08f73339bee5c4d5d71933e4dadf3686f944079/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:01:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:01:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:29.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:01:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a6c8451365d2e1156550cd5e08f73339bee5c4d5d71933e4dadf3686f944079/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:01:29 compute-0 podman[269954]: 2025-10-02 12:01:29.081120228 +0000 UTC m=+0.214942022 container init b7c867748b8a830625c17f44c0b6bad7d9cde34604f5e8e1356965b9384cfc9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:01:29 compute-0 podman[269954]: 2025-10-02 12:01:29.091191746 +0000 UTC m=+0.225013510 container start b7c867748b8a830625c17f44c0b6bad7d9cde34604f5e8e1356965b9384cfc9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_colden, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 12:01:29 compute-0 nova_compute[257802]: 2025-10-02 12:01:29.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:01:29 compute-0 nova_compute[257802]: 2025-10-02 12:01:29.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:01:29 compute-0 nova_compute[257802]: 2025-10-02 12:01:29.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:01:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:29.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:29 compute-0 podman[269954]: 2025-10-02 12:01:29.191252734 +0000 UTC m=+0.325074498 container attach b7c867748b8a830625c17f44c0b6bad7d9cde34604f5e8e1356965b9384cfc9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:01:29 compute-0 ceph-mon[73607]: pgmap v1030: 305 pgs: 305 active+clean; 488 MiB data, 509 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 21 KiB/s wr, 132 op/s
Oct 02 12:01:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4074648720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:01:29 compute-0 ceph-mon[73607]: osdmap e145: 3 total, 3 up, 3 in
Oct 02 12:01:29 compute-0 nova_compute[257802]: 2025-10-02 12:01:29.300 2 DEBUG nova.virt.libvirt.migration [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Oct 02 12:01:29 compute-0 nova_compute[257802]: 2025-10-02 12:01:29.300 2 DEBUG nova.virt.libvirt.migration [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Oct 02 12:01:29 compute-0 sudo[269976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:01:29 compute-0 sudo[269976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:29 compute-0 sudo[269976]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:29 compute-0 sudo[270001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:01:29 compute-0 sudo[270001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:29 compute-0 sudo[270001]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:29 compute-0 nova_compute[257802]: 2025-10-02 12:01:29.804 2 DEBUG nova.virt.libvirt.migration [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Oct 02 12:01:29 compute-0 nova_compute[257802]: 2025-10-02 12:01:29.806 2 DEBUG nova.virt.libvirt.migration [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Oct 02 12:01:29 compute-0 nova_compute[257802]: 2025-10-02 12:01:29.841 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406489.8410304, cd9deba5-2505-4edd-ac7c-483615217473 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:01:29 compute-0 nova_compute[257802]: 2025-10-02 12:01:29.842 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] VM Paused (Lifecycle Event)
Oct 02 12:01:29 compute-0 nova_compute[257802]: 2025-10-02 12:01:29.870 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:01:29 compute-0 nova_compute[257802]: 2025-10-02 12:01:29.874 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:01:29 compute-0 nova_compute[257802]: 2025-10-02 12:01:29.906 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] During sync_power_state the instance has a pending task (migrating). Skip.
Oct 02 12:01:29 compute-0 sharp_colden[269970]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:01:29 compute-0 sharp_colden[269970]: --> relative data size: 1.0
Oct 02 12:01:29 compute-0 sharp_colden[269970]: --> All data devices are unavailable
Oct 02 12:01:30 compute-0 kernel: tapcc377346-4a (unregistering): left promiscuous mode
Oct 02 12:01:30 compute-0 NetworkManager[44987]: <info>  [1759406490.0078] device (tapcc377346-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:01:30 compute-0 systemd[1]: libpod-b7c867748b8a830625c17f44c0b6bad7d9cde34604f5e8e1356965b9384cfc9e.scope: Deactivated successfully.
Oct 02 12:01:30 compute-0 podman[269954]: 2025-10-02 12:01:30.014388967 +0000 UTC m=+1.148210731 container died b7c867748b8a830625c17f44c0b6bad7d9cde34604f5e8e1356965b9384cfc9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:30 compute-0 ovn_controller[148183]: 2025-10-02T12:01:30Z|00062|binding|INFO|Releasing lport cc377346-4a61-4e38-b682-f2111cc47ffd from this chassis (sb_readonly=0)
Oct 02 12:01:30 compute-0 ovn_controller[148183]: 2025-10-02T12:01:30Z|00063|binding|INFO|Setting lport cc377346-4a61-4e38-b682-f2111cc47ffd down in Southbound
Oct 02 12:01:30 compute-0 ovn_controller[148183]: 2025-10-02T12:01:30Z|00064|binding|INFO|Removing iface tapcc377346-4a ovn-installed in OVS
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a6c8451365d2e1156550cd5e08f73339bee5c4d5d71933e4dadf3686f944079-merged.mount: Deactivated successfully.
Oct 02 12:01:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:30.051 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:73:25 10.100.0.4'], port_security=['fa:16:3e:1b:73:25 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'e4c8874a-a81b-4869-98e4-aca2e3f3bf40'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'cd9deba5-2505-4edd-ac7c-483615217473', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd977ad6a90874946819537242925a8f0', 'neutron:revision_number': '16', 'neutron:security_group_ids': '74859182-61ec-4a12-bb02-0c8684ac9234', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2c64df1-281c-4dee-b0ae-55c6f99caa2e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=cc377346-4a61-4e38-b682-f2111cc47ffd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:01:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:30.053 158261 INFO neutron.agent.ovn.metadata.agent [-] Port cc377346-4a61-4e38-b682-f2111cc47ffd in datapath 1e991676-99e8-43d9-8575-4a21f50b0ed5 unbound from our chassis
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:30.055 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1e991676-99e8-43d9-8575-4a21f50b0ed5
Oct 02 12:01:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:30.070 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cdea5d1e-26d0-47d8-9bbf-752139866004]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:30 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000009.scope: Deactivated successfully.
Oct 02 12:01:30 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000009.scope: Consumed 3.043s CPU time.
Oct 02 12:01:30 compute-0 systemd-machined[211836]: Machine qemu-6-instance-00000009 terminated.
Oct 02 12:01:30 compute-0 podman[269954]: 2025-10-02 12:01:30.085318616 +0000 UTC m=+1.219140380 container remove b7c867748b8a830625c17f44c0b6bad7d9cde34604f5e8e1356965b9384cfc9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_colden, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 12:01:30 compute-0 systemd[1]: libpod-conmon-b7c867748b8a830625c17f44c0b6bad7d9cde34604f5e8e1356965b9384cfc9e.scope: Deactivated successfully.
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:01:30 compute-0 sudo[269848]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:30.110 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[cec1577a-be86-4684-9b69-0f9c4340b0c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:30.114 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[ce5da2b0-658e-448d-9f4d-6774b85b58d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:30.143 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[20b00c03-aa2c-4b4b-8186-d98977b253b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:30 compute-0 virtqemud[257280]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volume-134b86f3-5f3c-45b7-9cd8-0ead1590c407: No such file or directory
Oct 02 12:01:30 compute-0 virtqemud[257280]: Unable to get XATTR trusted.libvirt.security.ref_dac on volume-134b86f3-5f3c-45b7-9cd8-0ead1590c407: No such file or directory
Oct 02 12:01:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:30.162 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[79861332-11fe-44b2-8cb7-3ab97b8448a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e991676-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:0e:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 46, 'tx_packets': 7, 'rx_bytes': 2428, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 46, 'tx_packets': 7, 'rx_bytes': 2428, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446414, 'reachable_time': 22434, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270086, 'error': None, 'target': 'ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:30 compute-0 sudo[270061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:30 compute-0 sudo[270061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:30.179 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0435ecf1-92c6-4317-9d74-ed013e4b498e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1e991676-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 446426, 'tstamp': 446426}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270090, 'error': None, 'target': 'ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1e991676-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 446428, 'tstamp': 446428}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270090, 'error': None, 'target': 'ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:30 compute-0 sudo[270061]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:30.182 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e991676-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.185 2 DEBUG nova.virt.libvirt.driver [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.186 2 DEBUG nova.virt.libvirt.driver [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.186 2 DEBUG nova.virt.libvirt.driver [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:30.190 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e991676-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:01:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:30.192 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:01:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:30.193 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1e991676-90, col_values=(('external_ids', {'iface-id': 'a0e52ac7-beb4-4d4b-863a-9b95be4c9e74'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:01:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:30.194 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:01:30 compute-0 sudo[270100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:01:30 compute-0 sudo[270100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:30 compute-0 sudo[270100]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:30 compute-0 sudo[270126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:01:30 compute-0 sudo[270126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:30 compute-0 sudo[270126]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.308 2 DEBUG nova.virt.libvirt.guest [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid 'cd9deba5-2505-4edd-ac7c-483615217473' (instance-00000009) get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.309 2 INFO nova.virt.libvirt.driver [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Migration operation has completed
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.309 2 INFO nova.compute.manager [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] _post_live_migration() is started..
Oct 02 12:01:30 compute-0 sudo[270151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:01:30 compute-0 sudo[270151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4050351588' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:01:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/402158354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:01:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2928742114' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.429 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-530e7ace-2feb-4a9e-9430-5a1bfc678d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.429 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-530e7ace-2feb-4a9e-9430-5a1bfc678d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.429 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.518 2 DEBUG nova.network.neutron [req-6ab498e4-d225-48e5-99e9-e43e3b374642 req-8c7f9700-d8f7-4f26-9183-8fc93a329078 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Updated VIF entry in instance network info cache for port cc377346-4a61-4e38-b682-f2111cc47ffd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.520 2 DEBUG nova.network.neutron [req-6ab498e4-d225-48e5-99e9-e43e3b374642 req-8c7f9700-d8f7-4f26-9183-8fc93a329078 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Updating instance_info_cache with network_info: [{"id": "cc377346-4a61-4e38-b682-f2111cc47ffd", "address": "fa:16:3e:1b:73:25", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc377346-4a", "ovs_interfaceid": "cc377346-4a61-4e38-b682-f2111cc47ffd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true, "migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.575 2 DEBUG oslo_concurrency.lockutils [req-6ab498e4-d225-48e5-99e9-e43e3b374642 req-8c7f9700-d8f7-4f26-9183-8fc93a329078 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-cd9deba5-2505-4edd-ac7c-483615217473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.575 2 DEBUG nova.compute.manager [req-6ab498e4-d225-48e5-99e9-e43e3b374642 req-8c7f9700-d8f7-4f26-9183-8fc93a329078 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Received event network-changed-72564b18-ab59-4028-8bb3-e93073a18534 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.575 2 DEBUG nova.compute.manager [req-6ab498e4-d225-48e5-99e9-e43e3b374642 req-8c7f9700-d8f7-4f26-9183-8fc93a329078 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Refreshing instance network info cache due to event network-changed-72564b18-ab59-4028-8bb3-e93073a18534. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.576 2 DEBUG oslo_concurrency.lockutils [req-6ab498e4-d225-48e5-99e9-e43e3b374642 req-8c7f9700-d8f7-4f26-9183-8fc93a329078 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-cee549ac-63b2-4eed-b8c5-0bd0948a95d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.576 2 DEBUG oslo_concurrency.lockutils [req-6ab498e4-d225-48e5-99e9-e43e3b374642 req-8c7f9700-d8f7-4f26-9183-8fc93a329078 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-cee549ac-63b2-4eed-b8c5-0bd0948a95d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.576 2 DEBUG nova.network.neutron [req-6ab498e4-d225-48e5-99e9-e43e3b374642 req-8c7f9700-d8f7-4f26-9183-8fc93a329078 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Refreshing network info cache for port 72564b18-ab59-4028-8bb3-e93073a18534 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.707 2 DEBUG nova.compute.manager [req-1a3c1b39-adf2-4258-b62f-6380121781c7 req-ce6ba9e4-690d-459c-bdd8-980a18bc51ac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received event network-vif-unplugged-cc377346-4a61-4e38-b682-f2111cc47ffd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.707 2 DEBUG oslo_concurrency.lockutils [req-1a3c1b39-adf2-4258-b62f-6380121781c7 req-ce6ba9e4-690d-459c-bdd8-980a18bc51ac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "cd9deba5-2505-4edd-ac7c-483615217473-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.708 2 DEBUG oslo_concurrency.lockutils [req-1a3c1b39-adf2-4258-b62f-6380121781c7 req-ce6ba9e4-690d-459c-bdd8-980a18bc51ac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.708 2 DEBUG oslo_concurrency.lockutils [req-1a3c1b39-adf2-4258-b62f-6380121781c7 req-ce6ba9e4-690d-459c-bdd8-980a18bc51ac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.708 2 DEBUG nova.compute.manager [req-1a3c1b39-adf2-4258-b62f-6380121781c7 req-ce6ba9e4-690d-459c-bdd8-980a18bc51ac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] No waiting events found dispatching network-vif-unplugged-cc377346-4a61-4e38-b682-f2111cc47ffd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.708 2 DEBUG nova.compute.manager [req-1a3c1b39-adf2-4258-b62f-6380121781c7 req-ce6ba9e4-690d-459c-bdd8-980a18bc51ac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received event network-vif-unplugged-cc377346-4a61-4e38-b682-f2111cc47ffd for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.709 2 DEBUG nova.compute.manager [req-1a3c1b39-adf2-4258-b62f-6380121781c7 req-ce6ba9e4-690d-459c-bdd8-980a18bc51ac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received event network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.709 2 DEBUG oslo_concurrency.lockutils [req-1a3c1b39-adf2-4258-b62f-6380121781c7 req-ce6ba9e4-690d-459c-bdd8-980a18bc51ac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "cd9deba5-2505-4edd-ac7c-483615217473-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.709 2 DEBUG oslo_concurrency.lockutils [req-1a3c1b39-adf2-4258-b62f-6380121781c7 req-ce6ba9e4-690d-459c-bdd8-980a18bc51ac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.710 2 DEBUG oslo_concurrency.lockutils [req-1a3c1b39-adf2-4258-b62f-6380121781c7 req-ce6ba9e4-690d-459c-bdd8-980a18bc51ac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.711 2 DEBUG nova.compute.manager [req-1a3c1b39-adf2-4258-b62f-6380121781c7 req-ce6ba9e4-690d-459c-bdd8-980a18bc51ac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] No waiting events found dispatching network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:01:30 compute-0 nova_compute[257802]: 2025-10-02 12:01:30.712 2 WARNING nova.compute.manager [req-1a3c1b39-adf2-4258-b62f-6380121781c7 req-ce6ba9e4-690d-459c-bdd8-980a18bc51ac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received unexpected event network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd for instance with vm_state active and task_state migrating.
Oct 02 12:01:30 compute-0 podman[270217]: 2025-10-02 12:01:30.733521744 +0000 UTC m=+0.089410076 container create 5417c16be928564e9dc32b962a4e4afba7d9c878901797a49341dd560ec37107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_euler, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 12:01:30 compute-0 podman[270217]: 2025-10-02 12:01:30.669717721 +0000 UTC m=+0.025606053 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:01:30 compute-0 systemd[1]: Started libpod-conmon-5417c16be928564e9dc32b962a4e4afba7d9c878901797a49341dd560ec37107.scope.
Oct 02 12:01:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:01:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 488 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 21 KiB/s wr, 206 op/s
Oct 02 12:01:30 compute-0 podman[270217]: 2025-10-02 12:01:30.960868892 +0000 UTC m=+0.316757234 container init 5417c16be928564e9dc32b962a4e4afba7d9c878901797a49341dd560ec37107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_euler, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 12:01:30 compute-0 podman[270217]: 2025-10-02 12:01:30.970680124 +0000 UTC m=+0.326568446 container start 5417c16be928564e9dc32b962a4e4afba7d9c878901797a49341dd560ec37107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:01:30 compute-0 intelligent_euler[270233]: 167 167
Oct 02 12:01:30 compute-0 systemd[1]: libpod-5417c16be928564e9dc32b962a4e4afba7d9c878901797a49341dd560ec37107.scope: Deactivated successfully.
Oct 02 12:01:30 compute-0 conmon[270233]: conmon 5417c16be928564e9dc3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5417c16be928564e9dc32b962a4e4afba7d9c878901797a49341dd560ec37107.scope/container/memory.events
Oct 02 12:01:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:31.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:01:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:31.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:01:31 compute-0 podman[270217]: 2025-10-02 12:01:31.207263439 +0000 UTC m=+0.563151801 container attach 5417c16be928564e9dc32b962a4e4afba7d9c878901797a49341dd560ec37107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_euler, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 12:01:31 compute-0 podman[270217]: 2025-10-02 12:01:31.208076339 +0000 UTC m=+0.563964671 container died 5417c16be928564e9dc32b962a4e4afba7d9c878901797a49341dd560ec37107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_euler, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 12:01:31 compute-0 nova_compute[257802]: 2025-10-02 12:01:31.250 2 DEBUG nova.compute.manager [req-4937b62f-4035-48c7-b9eb-997bf7a9f2c9 req-0531506d-e8b6-4b4d-bc81-6abc5a8afe0b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received event network-vif-unplugged-cc377346-4a61-4e38-b682-f2111cc47ffd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:01:31 compute-0 nova_compute[257802]: 2025-10-02 12:01:31.255 2 DEBUG oslo_concurrency.lockutils [req-4937b62f-4035-48c7-b9eb-997bf7a9f2c9 req-0531506d-e8b6-4b4d-bc81-6abc5a8afe0b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "cd9deba5-2505-4edd-ac7c-483615217473-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:31 compute-0 nova_compute[257802]: 2025-10-02 12:01:31.255 2 DEBUG oslo_concurrency.lockutils [req-4937b62f-4035-48c7-b9eb-997bf7a9f2c9 req-0531506d-e8b6-4b4d-bc81-6abc5a8afe0b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:31 compute-0 nova_compute[257802]: 2025-10-02 12:01:31.256 2 DEBUG oslo_concurrency.lockutils [req-4937b62f-4035-48c7-b9eb-997bf7a9f2c9 req-0531506d-e8b6-4b4d-bc81-6abc5a8afe0b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:31 compute-0 nova_compute[257802]: 2025-10-02 12:01:31.257 2 DEBUG nova.compute.manager [req-4937b62f-4035-48c7-b9eb-997bf7a9f2c9 req-0531506d-e8b6-4b4d-bc81-6abc5a8afe0b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] No waiting events found dispatching network-vif-unplugged-cc377346-4a61-4e38-b682-f2111cc47ffd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:01:31 compute-0 nova_compute[257802]: 2025-10-02 12:01:31.258 2 DEBUG nova.compute.manager [req-4937b62f-4035-48c7-b9eb-997bf7a9f2c9 req-0531506d-e8b6-4b4d-bc81-6abc5a8afe0b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received event network-vif-unplugged-cc377346-4a61-4e38-b682-f2111cc47ffd for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:01:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-c32d5bfbd803ed6f8b9c80039920d4ca1e8d9c78dadd122231e08905c6899839-merged.mount: Deactivated successfully.
Oct 02 12:01:31 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3086966703' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:01:31 compute-0 ceph-mon[73607]: pgmap v1032: 305 pgs: 305 active+clean; 488 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 21 KiB/s wr, 206 op/s
Oct 02 12:01:31 compute-0 podman[270217]: 2025-10-02 12:01:31.370018994 +0000 UTC m=+0.725907336 container remove 5417c16be928564e9dc32b962a4e4afba7d9c878901797a49341dd560ec37107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 12:01:31 compute-0 systemd[1]: libpod-conmon-5417c16be928564e9dc32b962a4e4afba7d9c878901797a49341dd560ec37107.scope: Deactivated successfully.
Oct 02 12:01:31 compute-0 nova_compute[257802]: 2025-10-02 12:01:31.684 2 DEBUG nova.network.neutron [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Activated binding for port cc377346-4a61-4e38-b682-f2111cc47ffd and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181
Oct 02 12:01:31 compute-0 nova_compute[257802]: 2025-10-02 12:01:31.685 2 DEBUG nova.compute.manager [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "cc377346-4a61-4e38-b682-f2111cc47ffd", "address": "fa:16:3e:1b:73:25", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc377346-4a", "ovs_interfaceid": "cc377346-4a61-4e38-b682-f2111cc47ffd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326
Oct 02 12:01:31 compute-0 nova_compute[257802]: 2025-10-02 12:01:31.686 2 DEBUG nova.virt.libvirt.vif [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:00:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1192256805',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-1192256805',id=9,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:00:40Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d977ad6a90874946819537242925a8f0',ramdisk_id='',reservation_id='r-rhco07g5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-959655280',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-959655280-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:01:19Z,user_data=None,user_id='b54b5e15e4c94d1f95a272981e9d9a89',uuid=cd9deba5-2505-4edd-ac7c-483615217473,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cc377346-4a61-4e38-b682-f2111cc47ffd", "address": "fa:16:3e:1b:73:25", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc377346-4a", "ovs_interfaceid": "cc377346-4a61-4e38-b682-f2111cc47ffd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:01:31 compute-0 nova_compute[257802]: 2025-10-02 12:01:31.687 2 DEBUG nova.network.os_vif_util [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Converting VIF {"id": "cc377346-4a61-4e38-b682-f2111cc47ffd", "address": "fa:16:3e:1b:73:25", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc377346-4a", "ovs_interfaceid": "cc377346-4a61-4e38-b682-f2111cc47ffd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:01:31 compute-0 nova_compute[257802]: 2025-10-02 12:01:31.688 2 DEBUG nova.network.os_vif_util [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:73:25,bridge_name='br-int',has_traffic_filtering=True,id=cc377346-4a61-4e38-b682-f2111cc47ffd,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc377346-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:01:31 compute-0 nova_compute[257802]: 2025-10-02 12:01:31.689 2 DEBUG os_vif [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:73:25,bridge_name='br-int',has_traffic_filtering=True,id=cc377346-4a61-4e38-b682-f2111cc47ffd,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc377346-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:01:31 compute-0 podman[270256]: 2025-10-02 12:01:31.594664164 +0000 UTC m=+0.026110754 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:01:31 compute-0 podman[270256]: 2025-10-02 12:01:31.693447181 +0000 UTC m=+0.124893751 container create 57cbeb8900b97cbb5618b5997106e3e36a5910e5db57d2ab322cdb4b440c3a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ptolemy, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:01:31 compute-0 nova_compute[257802]: 2025-10-02 12:01:31.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:31 compute-0 nova_compute[257802]: 2025-10-02 12:01:31.698 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc377346-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:01:31 compute-0 nova_compute[257802]: 2025-10-02 12:01:31.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:31 compute-0 nova_compute[257802]: 2025-10-02 12:01:31.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:01:31 compute-0 nova_compute[257802]: 2025-10-02 12:01:31.714 2 INFO os_vif [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:73:25,bridge_name='br-int',has_traffic_filtering=True,id=cc377346-4a61-4e38-b682-f2111cc47ffd,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc377346-4a')
Oct 02 12:01:31 compute-0 nova_compute[257802]: 2025-10-02 12:01:31.714 2 DEBUG oslo_concurrency.lockutils [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:31 compute-0 nova_compute[257802]: 2025-10-02 12:01:31.715 2 DEBUG oslo_concurrency.lockutils [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:31 compute-0 nova_compute[257802]: 2025-10-02 12:01:31.715 2 DEBUG oslo_concurrency.lockutils [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:31 compute-0 nova_compute[257802]: 2025-10-02 12:01:31.715 2 DEBUG nova.compute.manager [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Oct 02 12:01:31 compute-0 nova_compute[257802]: 2025-10-02 12:01:31.716 2 INFO nova.virt.libvirt.driver [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Deleting instance files /var/lib/nova/instances/cd9deba5-2505-4edd-ac7c-483615217473_del
Oct 02 12:01:31 compute-0 nova_compute[257802]: 2025-10-02 12:01:31.717 2 INFO nova.virt.libvirt.driver [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Deletion of /var/lib/nova/instances/cd9deba5-2505-4edd-ac7c-483615217473_del complete
Oct 02 12:01:31 compute-0 systemd[1]: Started libpod-conmon-57cbeb8900b97cbb5618b5997106e3e36a5910e5db57d2ab322cdb4b440c3a09.scope.
Oct 02 12:01:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:01:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4d4fb26be2803f31caf4aa0f4467a9b3c6dc1c6988fe41652a932a18ea35771/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:01:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4d4fb26be2803f31caf4aa0f4467a9b3c6dc1c6988fe41652a932a18ea35771/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:01:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4d4fb26be2803f31caf4aa0f4467a9b3c6dc1c6988fe41652a932a18ea35771/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:01:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4d4fb26be2803f31caf4aa0f4467a9b3c6dc1c6988fe41652a932a18ea35771/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.081 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Updating instance_info_cache with network_info: [{"id": "b0281338-9ff1-492d-b3aa-aeee41f08075", "address": "fa:16:3e:0f:16:f5", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0281338-9f", "ovs_interfaceid": "b0281338-9ff1-492d-b3aa-aeee41f08075", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.084 2 DEBUG nova.network.neutron [req-6ab498e4-d225-48e5-99e9-e43e3b374642 req-8c7f9700-d8f7-4f26-9183-8fc93a329078 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Updated VIF entry in instance network info cache for port 72564b18-ab59-4028-8bb3-e93073a18534. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.085 2 DEBUG nova.network.neutron [req-6ab498e4-d225-48e5-99e9-e43e3b374642 req-8c7f9700-d8f7-4f26-9183-8fc93a329078 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Updating instance_info_cache with network_info: [{"id": "72564b18-ab59-4028-8bb3-e93073a18534", "address": "fa:16:3e:36:35:39", "network": {"id": "797e74ff-2c6b-48fb-807e-b845ad260597", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1853530210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a5858e0d184dd184a3291b74794c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72564b18-ab", "ovs_interfaceid": "72564b18-ab59-4028-8bb3-e93073a18534", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:01:32 compute-0 podman[270256]: 2025-10-02 12:01:32.094265677 +0000 UTC m=+0.525712237 container init 57cbeb8900b97cbb5618b5997106e3e36a5910e5db57d2ab322cdb4b440c3a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 12:01:32 compute-0 podman[270256]: 2025-10-02 12:01:32.102800857 +0000 UTC m=+0.534247417 container start 57cbeb8900b97cbb5618b5997106e3e36a5910e5db57d2ab322cdb4b440c3a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.146 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-530e7ace-2feb-4a9e-9430-5a1bfc678d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.147 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.147 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.147 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.148 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.148 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:01:32 compute-0 podman[270256]: 2025-10-02 12:01:32.237003367 +0000 UTC m=+0.668449927 container attach 57cbeb8900b97cbb5618b5997106e3e36a5910e5db57d2ab322cdb4b440c3a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.264 2 DEBUG oslo_concurrency.lockutils [req-6ab498e4-d225-48e5-99e9-e43e3b374642 req-8c7f9700-d8f7-4f26-9183-8fc93a329078 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-cee549ac-63b2-4eed-b8c5-0bd0948a95d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.296 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.297 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.297 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.298 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.299 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:01:32 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2450892318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:01:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:01:32 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3422481596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.851 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]: {
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]:     "1": [
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]:         {
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]:             "devices": [
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]:                 "/dev/loop3"
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]:             ],
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]:             "lv_name": "ceph_lv0",
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]:             "lv_size": "7511998464",
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]:             "name": "ceph_lv0",
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]:             "tags": {
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]:                 "ceph.cluster_name": "ceph",
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]:                 "ceph.crush_device_class": "",
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]:                 "ceph.encrypted": "0",
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]:                 "ceph.osd_id": "1",
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]:                 "ceph.type": "block",
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]:                 "ceph.vdo": "0"
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]:             },
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]:             "type": "block",
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]:             "vg_name": "ceph_vg0"
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]:         }
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]:     ]
Oct 02 12:01:32 compute-0 funny_ptolemy[270272]: }
Oct 02 12:01:32 compute-0 systemd[1]: libpod-57cbeb8900b97cbb5618b5997106e3e36a5910e5db57d2ab322cdb4b440c3a09.scope: Deactivated successfully.
Oct 02 12:01:32 compute-0 podman[270256]: 2025-10-02 12:01:32.895791757 +0000 UTC m=+1.327238317 container died 57cbeb8900b97cbb5618b5997106e3e36a5910e5db57d2ab322cdb4b440c3a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 12:01:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 488 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 21 KiB/s wr, 206 op/s
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.958 2 DEBUG nova.compute.manager [req-8aa2e9d4-e8ea-4a47-ac08-8e74c888977c req-8c53c9ef-26c8-4836-984e-5a0d4bb7f06c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received event network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.959 2 DEBUG oslo_concurrency.lockutils [req-8aa2e9d4-e8ea-4a47-ac08-8e74c888977c req-8c53c9ef-26c8-4836-984e-5a0d4bb7f06c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "cd9deba5-2505-4edd-ac7c-483615217473-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.959 2 DEBUG oslo_concurrency.lockutils [req-8aa2e9d4-e8ea-4a47-ac08-8e74c888977c req-8c53c9ef-26c8-4836-984e-5a0d4bb7f06c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.960 2 DEBUG oslo_concurrency.lockutils [req-8aa2e9d4-e8ea-4a47-ac08-8e74c888977c req-8c53c9ef-26c8-4836-984e-5a0d4bb7f06c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.960 2 DEBUG nova.compute.manager [req-8aa2e9d4-e8ea-4a47-ac08-8e74c888977c req-8c53c9ef-26c8-4836-984e-5a0d4bb7f06c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] No waiting events found dispatching network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.960 2 WARNING nova.compute.manager [req-8aa2e9d4-e8ea-4a47-ac08-8e74c888977c req-8c53c9ef-26c8-4836-984e-5a0d4bb7f06c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received unexpected event network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd for instance with vm_state active and task_state migrating.
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.961 2 DEBUG nova.compute.manager [req-8aa2e9d4-e8ea-4a47-ac08-8e74c888977c req-8c53c9ef-26c8-4836-984e-5a0d4bb7f06c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received event network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.961 2 DEBUG oslo_concurrency.lockutils [req-8aa2e9d4-e8ea-4a47-ac08-8e74c888977c req-8c53c9ef-26c8-4836-984e-5a0d4bb7f06c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "cd9deba5-2505-4edd-ac7c-483615217473-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.962 2 DEBUG oslo_concurrency.lockutils [req-8aa2e9d4-e8ea-4a47-ac08-8e74c888977c req-8c53c9ef-26c8-4836-984e-5a0d4bb7f06c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.962 2 DEBUG oslo_concurrency.lockutils [req-8aa2e9d4-e8ea-4a47-ac08-8e74c888977c req-8c53c9ef-26c8-4836-984e-5a0d4bb7f06c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.963 2 DEBUG nova.compute.manager [req-8aa2e9d4-e8ea-4a47-ac08-8e74c888977c req-8c53c9ef-26c8-4836-984e-5a0d4bb7f06c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] No waiting events found dispatching network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.963 2 WARNING nova.compute.manager [req-8aa2e9d4-e8ea-4a47-ac08-8e74c888977c req-8c53c9ef-26c8-4836-984e-5a0d4bb7f06c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received unexpected event network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd for instance with vm_state active and task_state migrating.
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.964 2 DEBUG nova.compute.manager [req-8aa2e9d4-e8ea-4a47-ac08-8e74c888977c req-8c53c9ef-26c8-4836-984e-5a0d4bb7f06c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received event network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.964 2 DEBUG oslo_concurrency.lockutils [req-8aa2e9d4-e8ea-4a47-ac08-8e74c888977c req-8c53c9ef-26c8-4836-984e-5a0d4bb7f06c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "cd9deba5-2505-4edd-ac7c-483615217473-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.964 2 DEBUG oslo_concurrency.lockutils [req-8aa2e9d4-e8ea-4a47-ac08-8e74c888977c req-8c53c9ef-26c8-4836-984e-5a0d4bb7f06c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.964 2 DEBUG oslo_concurrency.lockutils [req-8aa2e9d4-e8ea-4a47-ac08-8e74c888977c req-8c53c9ef-26c8-4836-984e-5a0d4bb7f06c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.964 2 DEBUG nova.compute.manager [req-8aa2e9d4-e8ea-4a47-ac08-8e74c888977c req-8c53c9ef-26c8-4836-984e-5a0d4bb7f06c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] No waiting events found dispatching network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:01:32 compute-0 nova_compute[257802]: 2025-10-02 12:01:32.964 2 WARNING nova.compute.manager [req-8aa2e9d4-e8ea-4a47-ac08-8e74c888977c req-8c53c9ef-26c8-4836-984e-5a0d4bb7f06c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received unexpected event network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd for instance with vm_state active and task_state migrating.
Oct 02 12:01:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4d4fb26be2803f31caf4aa0f4467a9b3c6dc1c6988fe41652a932a18ea35771-merged.mount: Deactivated successfully.
Oct 02 12:01:33 compute-0 podman[270304]: 2025-10-02 12:01:33.002932099 +0000 UTC m=+0.136141368 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:01:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:01:33 compute-0 nova_compute[257802]: 2025-10-02 12:01:33.029 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:01:33 compute-0 nova_compute[257802]: 2025-10-02 12:01:33.029 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:01:33 compute-0 nova_compute[257802]: 2025-10-02 12:01:33.033 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:01:33 compute-0 nova_compute[257802]: 2025-10-02 12:01:33.033 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:01:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:33.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:33 compute-0 nova_compute[257802]: 2025-10-02 12:01:33.038 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:01:33 compute-0 nova_compute[257802]: 2025-10-02 12:01:33.038 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:01:33 compute-0 podman[270256]: 2025-10-02 12:01:33.128902706 +0000 UTC m=+1.560349256 container remove 57cbeb8900b97cbb5618b5997106e3e36a5910e5db57d2ab322cdb4b440c3a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 12:01:33 compute-0 sudo[270151]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:33.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:33 compute-0 systemd[1]: libpod-conmon-57cbeb8900b97cbb5618b5997106e3e36a5910e5db57d2ab322cdb4b440c3a09.scope: Deactivated successfully.
Oct 02 12:01:33 compute-0 sudo[270340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:01:33 compute-0 sudo[270340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:33 compute-0 sudo[270340]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:33 compute-0 nova_compute[257802]: 2025-10-02 12:01:33.259 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:01:33 compute-0 nova_compute[257802]: 2025-10-02 12:01:33.260 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4166MB free_disk=20.785137176513672GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:01:33 compute-0 nova_compute[257802]: 2025-10-02 12:01:33.260 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:33 compute-0 nova_compute[257802]: 2025-10-02 12:01:33.261 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:33 compute-0 sudo[270365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:01:33 compute-0 sudo[270365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:33 compute-0 sudo[270365]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:33 compute-0 sudo[270390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:01:33 compute-0 sudo[270390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:33 compute-0 sudo[270390]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:33 compute-0 sudo[270415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:01:33 compute-0 sudo[270415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:33 compute-0 nova_compute[257802]: 2025-10-02 12:01:33.654 2 INFO nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Updating resource usage from migration 8a32ad27-767e-4f4b-aadf-beebd224c85c
Oct 02 12:01:33 compute-0 nova_compute[257802]: 2025-10-02 12:01:33.676 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance b4e4932c-8129-4ceb-95ef-3a612ef502f9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:01:33 compute-0 nova_compute[257802]: 2025-10-02 12:01:33.677 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 530e7ace-2feb-4a9e-9430-5a1bfc678d22 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:01:33 compute-0 nova_compute[257802]: 2025-10-02 12:01:33.677 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance cee549ac-63b2-4eed-b8c5-0bd0948a95d5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:01:33 compute-0 nova_compute[257802]: 2025-10-02 12:01:33.677 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Migration 8a32ad27-767e-4f4b-aadf-beebd224c85c is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Oct 02 12:01:33 compute-0 nova_compute[257802]: 2025-10-02 12:01:33.677 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:01:33 compute-0 nova_compute[257802]: 2025-10-02 12:01:33.678 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:01:33 compute-0 podman[270482]: 2025-10-02 12:01:33.746558521 +0000 UTC m=+0.092243787 container create 523e3b1c771692c44f309dd2db3d4d4879f910365a9fad3b302924e4383f6cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:01:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3422481596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:01:33 compute-0 ceph-mon[73607]: pgmap v1033: 305 pgs: 305 active+clean; 488 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 21 KiB/s wr, 206 op/s
Oct 02 12:01:33 compute-0 podman[270482]: 2025-10-02 12:01:33.674920474 +0000 UTC m=+0.020605760 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:01:33 compute-0 nova_compute[257802]: 2025-10-02 12:01:33.774 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:01:33 compute-0 systemd[1]: Started libpod-conmon-523e3b1c771692c44f309dd2db3d4d4879f910365a9fad3b302924e4383f6cf3.scope.
Oct 02 12:01:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:01:33 compute-0 podman[270482]: 2025-10-02 12:01:33.924186412 +0000 UTC m=+0.269871678 container init 523e3b1c771692c44f309dd2db3d4d4879f910365a9fad3b302924e4383f6cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 12:01:33 compute-0 podman[270482]: 2025-10-02 12:01:33.932719492 +0000 UTC m=+0.278404728 container start 523e3b1c771692c44f309dd2db3d4d4879f910365a9fad3b302924e4383f6cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:01:33 compute-0 quirky_hoover[270500]: 167 167
Oct 02 12:01:33 compute-0 systemd[1]: libpod-523e3b1c771692c44f309dd2db3d4d4879f910365a9fad3b302924e4383f6cf3.scope: Deactivated successfully.
Oct 02 12:01:33 compute-0 podman[270482]: 2025-10-02 12:01:33.956953071 +0000 UTC m=+0.302638337 container attach 523e3b1c771692c44f309dd2db3d4d4879f910365a9fad3b302924e4383f6cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:01:33 compute-0 podman[270482]: 2025-10-02 12:01:33.957274428 +0000 UTC m=+0.302959684 container died 523e3b1c771692c44f309dd2db3d4d4879f910365a9fad3b302924e4383f6cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 12:01:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-22ff7969a8bdf59f4235f19316bc0ceb67e3a6ac0bd4b881b6a0e618ab51d92c-merged.mount: Deactivated successfully.
Oct 02 12:01:34 compute-0 podman[270482]: 2025-10-02 12:01:34.153917618 +0000 UTC m=+0.499602864 container remove 523e3b1c771692c44f309dd2db3d4d4879f910365a9fad3b302924e4383f6cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 12:01:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:01:34 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/460430614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:01:34 compute-0 systemd[1]: libpod-conmon-523e3b1c771692c44f309dd2db3d4d4879f910365a9fad3b302924e4383f6cf3.scope: Deactivated successfully.
Oct 02 12:01:34 compute-0 nova_compute[257802]: 2025-10-02 12:01:34.222 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:01:34 compute-0 nova_compute[257802]: 2025-10-02 12:01:34.229 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:01:34 compute-0 nova_compute[257802]: 2025-10-02 12:01:34.249 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:01:34 compute-0 nova_compute[257802]: 2025-10-02 12:01:34.287 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:01:34 compute-0 nova_compute[257802]: 2025-10-02 12:01:34.288 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:34 compute-0 podman[270548]: 2025-10-02 12:01:34.312621323 +0000 UTC m=+0.021798599 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:01:34 compute-0 podman[270548]: 2025-10-02 12:01:34.410994919 +0000 UTC m=+0.120172185 container create 458bfcbba8b7a548921d32a29fe9eb09e3dc0b3be885a84bc79afe488d8da98f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pasteur, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:01:34 compute-0 systemd[1]: Started libpod-conmon-458bfcbba8b7a548921d32a29fe9eb09e3dc0b3be885a84bc79afe488d8da98f.scope.
Oct 02 12:01:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4bb8a8476309c645b7240f1c0ec73aa65df7a2523533673c7446f2eec56379f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4bb8a8476309c645b7240f1c0ec73aa65df7a2523533673c7446f2eec56379f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4bb8a8476309c645b7240f1c0ec73aa65df7a2523533673c7446f2eec56379f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4bb8a8476309c645b7240f1c0ec73aa65df7a2523533673c7446f2eec56379f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:01:34 compute-0 podman[270548]: 2025-10-02 12:01:34.583289019 +0000 UTC m=+0.292466315 container init 458bfcbba8b7a548921d32a29fe9eb09e3dc0b3be885a84bc79afe488d8da98f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:01:34 compute-0 podman[270548]: 2025-10-02 12:01:34.589095652 +0000 UTC m=+0.298272918 container start 458bfcbba8b7a548921d32a29fe9eb09e3dc0b3be885a84bc79afe488d8da98f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pasteur, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:01:34 compute-0 podman[270548]: 2025-10-02 12:01:34.635488496 +0000 UTC m=+0.344665782 container attach 458bfcbba8b7a548921d32a29fe9eb09e3dc0b3be885a84bc79afe488d8da98f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 12:01:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/460430614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:01:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 488 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 3.4 KiB/s wr, 233 op/s
Oct 02 12:01:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:35.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:35.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:35 compute-0 nova_compute[257802]: 2025-10-02 12:01:35.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:35 compute-0 elated_pasteur[270567]: {
Oct 02 12:01:35 compute-0 elated_pasteur[270567]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:01:35 compute-0 elated_pasteur[270567]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:01:35 compute-0 elated_pasteur[270567]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:01:35 compute-0 elated_pasteur[270567]:         "osd_id": 1,
Oct 02 12:01:35 compute-0 elated_pasteur[270567]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:01:35 compute-0 elated_pasteur[270567]:         "type": "bluestore"
Oct 02 12:01:35 compute-0 elated_pasteur[270567]:     }
Oct 02 12:01:35 compute-0 elated_pasteur[270567]: }
Oct 02 12:01:35 compute-0 systemd[1]: libpod-458bfcbba8b7a548921d32a29fe9eb09e3dc0b3be885a84bc79afe488d8da98f.scope: Deactivated successfully.
Oct 02 12:01:35 compute-0 podman[270548]: 2025-10-02 12:01:35.437233002 +0000 UTC m=+1.146410288 container died 458bfcbba8b7a548921d32a29fe9eb09e3dc0b3be885a84bc79afe488d8da98f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pasteur, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 12:01:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4bb8a8476309c645b7240f1c0ec73aa65df7a2523533673c7446f2eec56379f-merged.mount: Deactivated successfully.
Oct 02 12:01:35 compute-0 podman[270548]: 2025-10-02 12:01:35.535386172 +0000 UTC m=+1.244563438 container remove 458bfcbba8b7a548921d32a29fe9eb09e3dc0b3be885a84bc79afe488d8da98f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pasteur, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 12:01:35 compute-0 systemd[1]: libpod-conmon-458bfcbba8b7a548921d32a29fe9eb09e3dc0b3be885a84bc79afe488d8da98f.scope: Deactivated successfully.
Oct 02 12:01:35 compute-0 sudo[270415]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:01:35 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:01:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:01:35 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:01:35 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 47160d96-65a0-491e-b806-de2fcebc2aac does not exist
Oct 02 12:01:35 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 367f7bd5-9ff5-4bf5-a39e-a8fbf98387bd does not exist
Oct 02 12:01:35 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev b1a86ce5-6e54-4c52-8e3a-252cd6426eec does not exist
Oct 02 12:01:35 compute-0 sudo[270602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:01:35 compute-0 sudo[270602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:35 compute-0 sudo[270602]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:35 compute-0 sudo[270627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:01:35 compute-0 sudo[270627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:35 compute-0 sudo[270627]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:35 compute-0 ceph-mon[73607]: pgmap v1034: 305 pgs: 305 active+clean; 488 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 3.4 KiB/s wr, 233 op/s
Oct 02 12:01:35 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:01:35 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:01:36 compute-0 nova_compute[257802]: 2025-10-02 12:01:36.239 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:01:36 compute-0 nova_compute[257802]: 2025-10-02 12:01:36.240 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:01:36 compute-0 nova_compute[257802]: 2025-10-02 12:01:36.260 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:01:36 compute-0 nova_compute[257802]: 2025-10-02 12:01:36.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 488 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 1.2 KiB/s wr, 223 op/s
Oct 02 12:01:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/715644299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:01:37 compute-0 ceph-mon[73607]: pgmap v1035: 305 pgs: 305 active+clean; 488 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 1.2 KiB/s wr, 223 op/s
Oct 02 12:01:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:37.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:37.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:01:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Oct 02 12:01:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Oct 02 12:01:38 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Oct 02 12:01:38 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/506775395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:01:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 527 MiB data, 523 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.3 MiB/s wr, 207 op/s
Oct 02 12:01:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:39.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:01:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:39.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:01:39 compute-0 ceph-mon[73607]: osdmap e146: 3 total, 3 up, 3 in
Oct 02 12:01:39 compute-0 ceph-mon[73607]: pgmap v1037: 305 pgs: 305 active+clean; 527 MiB data, 523 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.3 MiB/s wr, 207 op/s
Oct 02 12:01:39 compute-0 ovn_controller[148183]: 2025-10-02T12:01:39Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:36:35:39 10.100.0.6
Oct 02 12:01:39 compute-0 ovn_controller[148183]: 2025-10-02T12:01:39Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:36:35:39 10.100.0.6
Oct 02 12:01:39 compute-0 nova_compute[257802]: 2025-10-02 12:01:39.385 2 DEBUG oslo_concurrency.lockutils [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Acquiring lock "cd9deba5-2505-4edd-ac7c-483615217473-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:39 compute-0 nova_compute[257802]: 2025-10-02 12:01:39.386 2 DEBUG oslo_concurrency.lockutils [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:39 compute-0 nova_compute[257802]: 2025-10-02 12:01:39.386 2 DEBUG oslo_concurrency.lockutils [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:39 compute-0 nova_compute[257802]: 2025-10-02 12:01:39.408 2 DEBUG oslo_concurrency.lockutils [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:39 compute-0 nova_compute[257802]: 2025-10-02 12:01:39.409 2 DEBUG oslo_concurrency.lockutils [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:39 compute-0 nova_compute[257802]: 2025-10-02 12:01:39.409 2 DEBUG oslo_concurrency.lockutils [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:39 compute-0 nova_compute[257802]: 2025-10-02 12:01:39.409 2 DEBUG nova.compute.resource_tracker [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:01:39 compute-0 nova_compute[257802]: 2025-10-02 12:01:39.410 2 DEBUG oslo_concurrency.processutils [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:01:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:01:39 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1574647464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:01:39 compute-0 nova_compute[257802]: 2025-10-02 12:01:39.858 2 DEBUG oslo_concurrency.processutils [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:01:39 compute-0 nova_compute[257802]: 2025-10-02 12:01:39.942 2 DEBUG nova.virt.libvirt.driver [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:01:39 compute-0 nova_compute[257802]: 2025-10-02 12:01:39.942 2 DEBUG nova.virt.libvirt.driver [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:01:39 compute-0 nova_compute[257802]: 2025-10-02 12:01:39.945 2 DEBUG nova.virt.libvirt.driver [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:01:39 compute-0 nova_compute[257802]: 2025-10-02 12:01:39.945 2 DEBUG nova.virt.libvirt.driver [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:01:39 compute-0 nova_compute[257802]: 2025-10-02 12:01:39.948 2 DEBUG nova.virt.libvirt.driver [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:01:39 compute-0 nova_compute[257802]: 2025-10-02 12:01:39.949 2 DEBUG nova.virt.libvirt.driver [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:01:40 compute-0 nova_compute[257802]: 2025-10-02 12:01:40.132 2 WARNING nova.virt.libvirt.driver [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:01:40 compute-0 nova_compute[257802]: 2025-10-02 12:01:40.133 2 DEBUG nova.compute.resource_tracker [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4202MB free_disk=20.772300720214844GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:01:40 compute-0 nova_compute[257802]: 2025-10-02 12:01:40.134 2 DEBUG oslo_concurrency.lockutils [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:40 compute-0 nova_compute[257802]: 2025-10-02 12:01:40.134 2 DEBUG oslo_concurrency.lockutils [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:40 compute-0 nova_compute[257802]: 2025-10-02 12:01:40.197 2 DEBUG nova.compute.resource_tracker [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Migration for instance cd9deba5-2505-4edd-ac7c-483615217473 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Oct 02 12:01:40 compute-0 nova_compute[257802]: 2025-10-02 12:01:40.221 2 DEBUG nova.compute.resource_tracker [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Oct 02 12:01:40 compute-0 nova_compute[257802]: 2025-10-02 12:01:40.251 2 DEBUG nova.compute.resource_tracker [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Instance b4e4932c-8129-4ceb-95ef-3a612ef502f9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:01:40 compute-0 nova_compute[257802]: 2025-10-02 12:01:40.252 2 DEBUG nova.compute.resource_tracker [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Instance 530e7ace-2feb-4a9e-9430-5a1bfc678d22 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:01:40 compute-0 nova_compute[257802]: 2025-10-02 12:01:40.252 2 DEBUG nova.compute.resource_tracker [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Instance cee549ac-63b2-4eed-b8c5-0bd0948a95d5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:01:40 compute-0 nova_compute[257802]: 2025-10-02 12:01:40.252 2 DEBUG nova.compute.resource_tracker [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Migration 8a32ad27-767e-4f4b-aadf-beebd224c85c is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Oct 02 12:01:40 compute-0 nova_compute[257802]: 2025-10-02 12:01:40.252 2 DEBUG nova.compute.resource_tracker [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:01:40 compute-0 nova_compute[257802]: 2025-10-02 12:01:40.252 2 DEBUG nova.compute.resource_tracker [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:01:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3501241138' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:01:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1574647464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:01:40 compute-0 nova_compute[257802]: 2025-10-02 12:01:40.356 2 DEBUG oslo_concurrency.processutils [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:01:40 compute-0 nova_compute[257802]: 2025-10-02 12:01:40.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:01:40 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1487767878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:01:40 compute-0 nova_compute[257802]: 2025-10-02 12:01:40.795 2 DEBUG oslo_concurrency.processutils [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:01:40 compute-0 nova_compute[257802]: 2025-10-02 12:01:40.801 2 DEBUG nova.compute.provider_tree [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:01:40 compute-0 nova_compute[257802]: 2025-10-02 12:01:40.825 2 DEBUG nova.scheduler.client.report [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:01:40 compute-0 nova_compute[257802]: 2025-10-02 12:01:40.850 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406485.6871052, b8d4207f-7e3b-4a3c-ad76-60d87d695918 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:01:40 compute-0 nova_compute[257802]: 2025-10-02 12:01:40.850 2 INFO nova.compute.manager [-] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] VM Stopped (Lifecycle Event)
Oct 02 12:01:40 compute-0 nova_compute[257802]: 2025-10-02 12:01:40.855 2 DEBUG nova.compute.resource_tracker [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:01:40 compute-0 nova_compute[257802]: 2025-10-02 12:01:40.855 2 DEBUG oslo_concurrency.lockutils [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:40 compute-0 nova_compute[257802]: 2025-10-02 12:01:40.861 2 INFO nova.compute.manager [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Oct 02 12:01:40 compute-0 nova_compute[257802]: 2025-10-02 12:01:40.880 2 DEBUG nova.compute.manager [None req-54a88ad7-6176-45e6-a74d-009219b384c5 - - - - - -] [instance: b8d4207f-7e3b-4a3c-ad76-60d87d695918] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:01:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 589 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.7 MiB/s wr, 186 op/s
Oct 02 12:01:41 compute-0 nova_compute[257802]: 2025-10-02 12:01:41.012 2 INFO nova.scheduler.client.report [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Deleted allocation for migration 8a32ad27-767e-4f4b-aadf-beebd224c85c
Oct 02 12:01:41 compute-0 nova_compute[257802]: 2025-10-02 12:01:41.012 2 DEBUG nova.virt.libvirt.driver [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Oct 02 12:01:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:41.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:41.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/897392049' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:01:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1487767878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:01:41 compute-0 ceph-mon[73607]: pgmap v1038: 305 pgs: 305 active+clean; 589 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.7 MiB/s wr, 186 op/s
Oct 02 12:01:41 compute-0 nova_compute[257802]: 2025-10-02 12:01:41.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:01:42
Oct 02 12:01:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:01:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:01:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['images', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'backups', '.rgw.root', '.mgr', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log']
Oct 02 12:01:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:01:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:01:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:01:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:01:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:01:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:01:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:01:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:01:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:01:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:01:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:01:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:01:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:01:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:01:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:01:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:01:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:01:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 589 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.7 MiB/s wr, 186 op/s
Oct 02 12:01:43 compute-0 ceph-mon[73607]: pgmap v1039: 305 pgs: 305 active+clean; 589 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.7 MiB/s wr, 186 op/s
Oct 02 12:01:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:01:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:43.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:01:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:43.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:01:44 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1391525199' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:01:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 613 MiB data, 577 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 6.8 MiB/s wr, 231 op/s
Oct 02 12:01:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:45.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:45.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:45 compute-0 nova_compute[257802]: 2025-10-02 12:01:45.183 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406490.181756, cd9deba5-2505-4edd-ac7c-483615217473 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:01:45 compute-0 nova_compute[257802]: 2025-10-02 12:01:45.183 2 INFO nova.compute.manager [-] [instance: cd9deba5-2505-4edd-ac7c-483615217473] VM Stopped (Lifecycle Event)
Oct 02 12:01:45 compute-0 nova_compute[257802]: 2025-10-02 12:01:45.208 2 DEBUG nova.compute.manager [None req-9350a3df-f3b7-4a33-9013-fab840f79b53 - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:01:45 compute-0 nova_compute[257802]: 2025-10-02 12:01:45.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/509611605' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:01:45 compute-0 ceph-mon[73607]: pgmap v1040: 305 pgs: 305 active+clean; 613 MiB data, 577 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 6.8 MiB/s wr, 231 op/s
Oct 02 12:01:46 compute-0 nova_compute[257802]: 2025-10-02 12:01:46.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 614 MiB data, 577 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 6.9 MiB/s wr, 242 op/s
Oct 02 12:01:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:47.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:47 compute-0 ceph-mon[73607]: pgmap v1041: 305 pgs: 305 active+clean; 614 MiB data, 577 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 6.9 MiB/s wr, 242 op/s
Oct 02 12:01:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:01:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:47.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:01:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:01:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1140293501' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:01:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 614 MiB data, 577 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 6.3 MiB/s wr, 290 op/s
Oct 02 12:01:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:01:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:49.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:01:49 compute-0 nova_compute[257802]: 2025-10-02 12:01:49.127 2 DEBUG oslo_concurrency.lockutils [None req-da7f5d6c-5b61-4344-872d-d5a17e9359c1 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Acquiring lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:49 compute-0 nova_compute[257802]: 2025-10-02 12:01:49.128 2 DEBUG oslo_concurrency.lockutils [None req-da7f5d6c-5b61-4344-872d-d5a17e9359c1 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:49 compute-0 nova_compute[257802]: 2025-10-02 12:01:49.129 2 DEBUG oslo_concurrency.lockutils [None req-da7f5d6c-5b61-4344-872d-d5a17e9359c1 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Acquiring lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:49 compute-0 nova_compute[257802]: 2025-10-02 12:01:49.129 2 DEBUG oslo_concurrency.lockutils [None req-da7f5d6c-5b61-4344-872d-d5a17e9359c1 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:49 compute-0 nova_compute[257802]: 2025-10-02 12:01:49.129 2 DEBUG oslo_concurrency.lockutils [None req-da7f5d6c-5b61-4344-872d-d5a17e9359c1 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:49 compute-0 nova_compute[257802]: 2025-10-02 12:01:49.130 2 INFO nova.compute.manager [None req-da7f5d6c-5b61-4344-872d-d5a17e9359c1 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Terminating instance
Oct 02 12:01:49 compute-0 nova_compute[257802]: 2025-10-02 12:01:49.131 2 DEBUG nova.compute.manager [None req-da7f5d6c-5b61-4344-872d-d5a17e9359c1 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:01:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:49.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:49 compute-0 kernel: tapb0281338-9f (unregistering): left promiscuous mode
Oct 02 12:01:49 compute-0 NetworkManager[44987]: <info>  [1759406509.3381] device (tapb0281338-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:01:49 compute-0 nova_compute[257802]: 2025-10-02 12:01:49.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:49 compute-0 ovn_controller[148183]: 2025-10-02T12:01:49Z|00065|binding|INFO|Releasing lport b0281338-9ff1-492d-b3aa-aeee41f08075 from this chassis (sb_readonly=0)
Oct 02 12:01:49 compute-0 ovn_controller[148183]: 2025-10-02T12:01:49Z|00066|binding|INFO|Setting lport b0281338-9ff1-492d-b3aa-aeee41f08075 down in Southbound
Oct 02 12:01:49 compute-0 ovn_controller[148183]: 2025-10-02T12:01:49Z|00067|binding|INFO|Releasing lport 36ee45f1-8361-4e2d-a0d4-c4907ede4743 from this chassis (sb_readonly=0)
Oct 02 12:01:49 compute-0 ovn_controller[148183]: 2025-10-02T12:01:49Z|00068|binding|INFO|Setting lport 36ee45f1-8361-4e2d-a0d4-c4907ede4743 down in Southbound
Oct 02 12:01:49 compute-0 ovn_controller[148183]: 2025-10-02T12:01:49Z|00069|binding|INFO|Removing iface tapb0281338-9f ovn-installed in OVS
Oct 02 12:01:49 compute-0 nova_compute[257802]: 2025-10-02 12:01:49.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:49 compute-0 ovn_controller[148183]: 2025-10-02T12:01:49Z|00070|binding|INFO|Releasing lport a3d6bcff-8fb0-4a18-b382-3921f86e4cde from this chassis (sb_readonly=0)
Oct 02 12:01:49 compute-0 ovn_controller[148183]: 2025-10-02T12:01:49Z|00071|binding|INFO|Releasing lport a0e52ac7-beb4-4d4b-863a-9b95be4c9e74 from this chassis (sb_readonly=0)
Oct 02 12:01:49 compute-0 ovn_controller[148183]: 2025-10-02T12:01:49Z|00072|binding|INFO|Releasing lport 9ae39c1e-f098-4856-80de-d86e2593ff3e from this chassis (sb_readonly=0)
Oct 02 12:01:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:49.356 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:38:f6 19.80.0.52'], port_security=['fa:16:3e:1a:38:f6 19.80.0.52'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['b0281338-9ff1-492d-b3aa-aeee41f08075'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-589500470', 'neutron:cidrs': '19.80.0.52/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4874fd53-2e81-4c7f-81b7-b85c23edc180', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-589500470', 'neutron:project_id': 'd977ad6a90874946819537242925a8f0', 'neutron:revision_number': '5', 'neutron:security_group_ids': '74859182-61ec-4a12-bb02-0c8684ac9234', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=ec7a7eef-8b3d-415d-8728-897cefc62c0c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=36ee45f1-8361-4e2d-a0d4-c4907ede4743) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:01:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:49.358 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:16:f5 10.100.0.13'], port_security=['fa:16:3e:0f:16:f5 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1480997899', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '530e7ace-2feb-4a9e-9430-5a1bfc678d22', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1480997899', 'neutron:project_id': 'd977ad6a90874946819537242925a8f0', 'neutron:revision_number': '11', 'neutron:security_group_ids': '74859182-61ec-4a12-bb02-0c8684ac9234', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2c64df1-281c-4dee-b0ae-55c6f99caa2e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=b0281338-9ff1-492d-b3aa-aeee41f08075) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:01:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:49.360 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 36ee45f1-8361-4e2d-a0d4-c4907ede4743 in datapath 4874fd53-2e81-4c7f-81b7-b85c23edc180 unbound from our chassis
Oct 02 12:01:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:49.362 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4874fd53-2e81-4c7f-81b7-b85c23edc180, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:01:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:49.363 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[46f600d9-0e05-46de-ab11-898d107ddd4f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:49.364 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180 namespace which is not needed anymore
Oct 02 12:01:49 compute-0 nova_compute[257802]: 2025-10-02 12:01:49.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:49 compute-0 nova_compute[257802]: 2025-10-02 12:01:49.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:49 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000007.scope: Deactivated successfully.
Oct 02 12:01:49 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000007.scope: Consumed 8.404s CPU time.
Oct 02 12:01:49 compute-0 systemd-machined[211836]: Machine qemu-4-instance-00000007 terminated.
Oct 02 12:01:49 compute-0 neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180[266866]: [NOTICE]   (266876) : haproxy version is 2.8.14-c23fe91
Oct 02 12:01:49 compute-0 neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180[266866]: [NOTICE]   (266876) : path to executable is /usr/sbin/haproxy
Oct 02 12:01:49 compute-0 neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180[266866]: [WARNING]  (266876) : Exiting Master process...
Oct 02 12:01:49 compute-0 neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180[266866]: [WARNING]  (266876) : Exiting Master process...
Oct 02 12:01:49 compute-0 neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180[266866]: [ALERT]    (266876) : Current worker (266878) exited with code 143 (Terminated)
Oct 02 12:01:49 compute-0 neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180[266866]: [WARNING]  (266876) : All workers exited. Exiting... (0)
Oct 02 12:01:49 compute-0 systemd[1]: libpod-101904df1c9077720c353df5b90fc793080b12cfa4e8a1d09edf69da574e819c.scope: Deactivated successfully.
Oct 02 12:01:49 compute-0 podman[270727]: 2025-10-02 12:01:49.523638351 +0000 UTC m=+0.067249020 container died 101904df1c9077720c353df5b90fc793080b12cfa4e8a1d09edf69da574e819c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:01:49 compute-0 nova_compute[257802]: 2025-10-02 12:01:49.571 2 INFO nova.virt.libvirt.driver [-] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Instance destroyed successfully.
Oct 02 12:01:49 compute-0 nova_compute[257802]: 2025-10-02 12:01:49.571 2 DEBUG nova.objects.instance [None req-da7f5d6c-5b61-4344-872d-d5a17e9359c1 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lazy-loading 'resources' on Instance uuid 530e7ace-2feb-4a9e-9430-5a1bfc678d22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:01:49 compute-0 nova_compute[257802]: 2025-10-02 12:01:49.584 2 DEBUG nova.virt.libvirt.vif [None req-da7f5d6c-5b61-4344-872d-d5a17e9359c1 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T11:59:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-492220666',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-492220666',id=7,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T11:59:40Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d977ad6a90874946819537242925a8f0',ramdisk_id='',reservation_id='r-b2u06b3r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-959655280',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-959655280-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:00:02Z,user_data=None,user_id='b54b5e15e4c94d1f95a272981e9d9a89',uuid=530e7ace-2feb-4a9e-9430-5a1bfc678d22,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b0281338-9ff1-492d-b3aa-aeee41f08075", "address": "fa:16:3e:0f:16:f5", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0281338-9f", "ovs_interfaceid": "b0281338-9ff1-492d-b3aa-aeee41f08075", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:01:49 compute-0 nova_compute[257802]: 2025-10-02 12:01:49.584 2 DEBUG nova.network.os_vif_util [None req-da7f5d6c-5b61-4344-872d-d5a17e9359c1 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Converting VIF {"id": "b0281338-9ff1-492d-b3aa-aeee41f08075", "address": "fa:16:3e:0f:16:f5", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0281338-9f", "ovs_interfaceid": "b0281338-9ff1-492d-b3aa-aeee41f08075", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:01:49 compute-0 nova_compute[257802]: 2025-10-02 12:01:49.584 2 DEBUG nova.network.os_vif_util [None req-da7f5d6c-5b61-4344-872d-d5a17e9359c1 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0f:16:f5,bridge_name='br-int',has_traffic_filtering=True,id=b0281338-9ff1-492d-b3aa-aeee41f08075,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb0281338-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:01:49 compute-0 nova_compute[257802]: 2025-10-02 12:01:49.585 2 DEBUG os_vif [None req-da7f5d6c-5b61-4344-872d-d5a17e9359c1 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:16:f5,bridge_name='br-int',has_traffic_filtering=True,id=b0281338-9ff1-492d-b3aa-aeee41f08075,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb0281338-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:01:49 compute-0 nova_compute[257802]: 2025-10-02 12:01:49.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:49 compute-0 nova_compute[257802]: 2025-10-02 12:01:49.586 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0281338-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:01:49 compute-0 nova_compute[257802]: 2025-10-02 12:01:49.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:49 compute-0 nova_compute[257802]: 2025-10-02 12:01:49.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:01:49 compute-0 nova_compute[257802]: 2025-10-02 12:01:49.592 2 INFO os_vif [None req-da7f5d6c-5b61-4344-872d-d5a17e9359c1 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:16:f5,bridge_name='br-int',has_traffic_filtering=True,id=b0281338-9ff1-492d-b3aa-aeee41f08075,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb0281338-9f')
Oct 02 12:01:49 compute-0 ceph-mon[73607]: pgmap v1042: 305 pgs: 305 active+clean; 614 MiB data, 577 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 6.3 MiB/s wr, 290 op/s
Oct 02 12:01:49 compute-0 sudo[270779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:01:49 compute-0 sudo[270779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:49 compute-0 sudo[270779]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-101904df1c9077720c353df5b90fc793080b12cfa4e8a1d09edf69da574e819c-userdata-shm.mount: Deactivated successfully.
Oct 02 12:01:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8165cdb3614c4db89a0c94f4dc1ed8c741126f988659b0b10258b26766d8de9-merged.mount: Deactivated successfully.
Oct 02 12:01:49 compute-0 sudo[270810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:01:49 compute-0 sudo[270810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:49 compute-0 sudo[270810]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:49 compute-0 podman[270727]: 2025-10-02 12:01:49.879672403 +0000 UTC m=+0.423283072 container cleanup 101904df1c9077720c353df5b90fc793080b12cfa4e8a1d09edf69da574e819c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:01:49 compute-0 systemd[1]: libpod-conmon-101904df1c9077720c353df5b90fc793080b12cfa4e8a1d09edf69da574e819c.scope: Deactivated successfully.
Oct 02 12:01:50 compute-0 podman[270836]: 2025-10-02 12:01:50.063935487 +0000 UTC m=+0.161868033 container remove 101904df1c9077720c353df5b90fc793080b12cfa4e8a1d09edf69da574e819c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:01:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:50.074 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[beda4dc5-0702-433a-ae98-6f3994bdf822]: (4, ('Thu Oct  2 12:01:49 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180 (101904df1c9077720c353df5b90fc793080b12cfa4e8a1d09edf69da574e819c)\n101904df1c9077720c353df5b90fc793080b12cfa4e8a1d09edf69da574e819c\nThu Oct  2 12:01:49 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180 (101904df1c9077720c353df5b90fc793080b12cfa4e8a1d09edf69da574e819c)\n101904df1c9077720c353df5b90fc793080b12cfa4e8a1d09edf69da574e819c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:50.076 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[61c98264-669e-4a15-b1d7-6c8480168eb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:50.078 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4874fd53-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:01:50 compute-0 nova_compute[257802]: 2025-10-02 12:01:50.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:50 compute-0 kernel: tap4874fd53-20: left promiscuous mode
Oct 02 12:01:50 compute-0 nova_compute[257802]: 2025-10-02 12:01:50.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:50 compute-0 nova_compute[257802]: 2025-10-02 12:01:50.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:50.097 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[64493f1a-447d-461a-bfa5-2603a4330581]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:50.126 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c261fcdf-d21b-4e77-ac64-8582ff57d46a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:50.127 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[45830336-062e-46d2-8a35-0b0efa96a94a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:50.145 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b077922e-846c-4334-916b-5f101cfe9e14]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446306, 'reachable_time': 31111, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270859, 'error': None, 'target': 'ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:50 compute-0 systemd[1]: run-netns-ovnmeta\x2d4874fd53\x2d2e81\x2d4c7f\x2d81b7\x2db85c23edc180.mount: Deactivated successfully.
Oct 02 12:01:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:50.153 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:01:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:50.153 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[bc1c3344-70d5-4db6-add6-35d481854362]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:50.154 158261 INFO neutron.agent.ovn.metadata.agent [-] Port b0281338-9ff1-492d-b3aa-aeee41f08075 in datapath 1e991676-99e8-43d9-8575-4a21f50b0ed5 unbound from our chassis
Oct 02 12:01:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:50.156 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1e991676-99e8-43d9-8575-4a21f50b0ed5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:01:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:50.157 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b59e5901-374d-4e87-af42-a6e936b2e814]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:50.157 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5 namespace which is not needed anymore
Oct 02 12:01:50 compute-0 podman[270849]: 2025-10-02 12:01:50.198205719 +0000 UTC m=+0.079335447 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 12:01:50 compute-0 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[266969]: [NOTICE]   (266973) : haproxy version is 2.8.14-c23fe91
Oct 02 12:01:50 compute-0 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[266969]: [NOTICE]   (266973) : path to executable is /usr/sbin/haproxy
Oct 02 12:01:50 compute-0 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[266969]: [WARNING]  (266973) : Exiting Master process...
Oct 02 12:01:50 compute-0 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[266969]: [ALERT]    (266973) : Current worker (266975) exited with code 143 (Terminated)
Oct 02 12:01:50 compute-0 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[266969]: [WARNING]  (266973) : All workers exited. Exiting... (0)
Oct 02 12:01:50 compute-0 systemd[1]: libpod-f8047243fe64ab25583bc6db8a745db75eb221bc1c257bb538287359aaf9b974.scope: Deactivated successfully.
Oct 02 12:01:50 compute-0 podman[270887]: 2025-10-02 12:01:50.303422944 +0000 UTC m=+0.062148663 container died f8047243fe64ab25583bc6db8a745db75eb221bc1c257bb538287359aaf9b974 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:01:50 compute-0 nova_compute[257802]: 2025-10-02 12:01:50.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f8047243fe64ab25583bc6db8a745db75eb221bc1c257bb538287359aaf9b974-userdata-shm.mount: Deactivated successfully.
Oct 02 12:01:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-f549b1f3cccd0ed49a15448343901bf9b41d83f41d23816cf0af9034582a8888-merged.mount: Deactivated successfully.
Oct 02 12:01:50 compute-0 podman[270887]: 2025-10-02 12:01:50.423615479 +0000 UTC m=+0.182341198 container cleanup f8047243fe64ab25583bc6db8a745db75eb221bc1c257bb538287359aaf9b974 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:01:50 compute-0 systemd[1]: libpod-conmon-f8047243fe64ab25583bc6db8a745db75eb221bc1c257bb538287359aaf9b974.scope: Deactivated successfully.
Oct 02 12:01:50 compute-0 podman[270917]: 2025-10-02 12:01:50.529700136 +0000 UTC m=+0.086027093 container remove f8047243fe64ab25583bc6db8a745db75eb221bc1c257bb538287359aaf9b974 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:01:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:50.534 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cab50940-5d4d-466f-8b0e-2c26ac82ac5d]: (4, ('Thu Oct  2 12:01:50 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5 (f8047243fe64ab25583bc6db8a745db75eb221bc1c257bb538287359aaf9b974)\nf8047243fe64ab25583bc6db8a745db75eb221bc1c257bb538287359aaf9b974\nThu Oct  2 12:01:50 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5 (f8047243fe64ab25583bc6db8a745db75eb221bc1c257bb538287359aaf9b974)\nf8047243fe64ab25583bc6db8a745db75eb221bc1c257bb538287359aaf9b974\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:50.536 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4b3339d1-bf3d-4a2b-8c35-b005a961b20a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:50.536 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e991676-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:01:50 compute-0 nova_compute[257802]: 2025-10-02 12:01:50.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:50 compute-0 kernel: tap1e991676-90: left promiscuous mode
Oct 02 12:01:50 compute-0 nova_compute[257802]: 2025-10-02 12:01:50.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:50 compute-0 nova_compute[257802]: 2025-10-02 12:01:50.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:50.555 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[26cd879e-5a43-4d4f-9ec2-7a7cddc38cc1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:50.594 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[89da3b1f-a34c-4979-ae24-ec7d646138f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:50.595 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a8c940de-25b4-44de-89c7-ca1a816ef4dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:50.609 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7036181b-7fc6-41f3-af63-6a89c6d6325e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446407, 'reachable_time': 34366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270935, 'error': None, 'target': 'ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:50.610 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:01:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:01:50.610 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[a19092d4-8961-4d71-b537-4dee9fa65f30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:01:50 compute-0 nova_compute[257802]: 2025-10-02 12:01:50.613 2 DEBUG nova.compute.manager [req-d753b88e-c587-458c-b57b-62dd5ae02288 req-03332df7-8f6c-4147-8019-1000d6eb2d09 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Received event network-vif-unplugged-b0281338-9ff1-492d-b3aa-aeee41f08075 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:01:50 compute-0 nova_compute[257802]: 2025-10-02 12:01:50.614 2 DEBUG oslo_concurrency.lockutils [req-d753b88e-c587-458c-b57b-62dd5ae02288 req-03332df7-8f6c-4147-8019-1000d6eb2d09 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:50 compute-0 nova_compute[257802]: 2025-10-02 12:01:50.614 2 DEBUG oslo_concurrency.lockutils [req-d753b88e-c587-458c-b57b-62dd5ae02288 req-03332df7-8f6c-4147-8019-1000d6eb2d09 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:50 compute-0 nova_compute[257802]: 2025-10-02 12:01:50.614 2 DEBUG oslo_concurrency.lockutils [req-d753b88e-c587-458c-b57b-62dd5ae02288 req-03332df7-8f6c-4147-8019-1000d6eb2d09 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:50 compute-0 nova_compute[257802]: 2025-10-02 12:01:50.615 2 DEBUG nova.compute.manager [req-d753b88e-c587-458c-b57b-62dd5ae02288 req-03332df7-8f6c-4147-8019-1000d6eb2d09 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] No waiting events found dispatching network-vif-unplugged-b0281338-9ff1-492d-b3aa-aeee41f08075 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:01:50 compute-0 nova_compute[257802]: 2025-10-02 12:01:50.615 2 DEBUG nova.compute.manager [req-d753b88e-c587-458c-b57b-62dd5ae02288 req-03332df7-8f6c-4147-8019-1000d6eb2d09 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Received event network-vif-unplugged-b0281338-9ff1-492d-b3aa-aeee41f08075 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:01:50 compute-0 systemd[1]: run-netns-ovnmeta\x2d1e991676\x2d99e8\x2d43d9\x2d8575\x2d4a21f50b0ed5.mount: Deactivated successfully.
Oct 02 12:01:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1540367777' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:01:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1540367777' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:01:50 compute-0 nova_compute[257802]: 2025-10-02 12:01:50.870 2 DEBUG nova.virt.libvirt.driver [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Creating tmpfile /var/lib/nova/instances/tmp8l46yc1t to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Oct 02 12:01:50 compute-0 nova_compute[257802]: 2025-10-02 12:01:50.871 2 DEBUG nova.compute.manager [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp8l46yc1t',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Oct 02 12:01:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 529 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.6 MiB/s wr, 299 op/s
Oct 02 12:01:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:01:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:51.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:01:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:51.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:51 compute-0 nova_compute[257802]: 2025-10-02 12:01:51.469 2 INFO nova.virt.libvirt.driver [None req-da7f5d6c-5b61-4344-872d-d5a17e9359c1 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Deleting instance files /var/lib/nova/instances/530e7ace-2feb-4a9e-9430-5a1bfc678d22_del
Oct 02 12:01:51 compute-0 nova_compute[257802]: 2025-10-02 12:01:51.470 2 INFO nova.virt.libvirt.driver [None req-da7f5d6c-5b61-4344-872d-d5a17e9359c1 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Deletion of /var/lib/nova/instances/530e7ace-2feb-4a9e-9430-5a1bfc678d22_del complete
Oct 02 12:01:51 compute-0 nova_compute[257802]: 2025-10-02 12:01:51.516 2 INFO nova.compute.manager [None req-da7f5d6c-5b61-4344-872d-d5a17e9359c1 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Took 2.38 seconds to destroy the instance on the hypervisor.
Oct 02 12:01:51 compute-0 nova_compute[257802]: 2025-10-02 12:01:51.516 2 DEBUG oslo.service.loopingcall [None req-da7f5d6c-5b61-4344-872d-d5a17e9359c1 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:01:51 compute-0 nova_compute[257802]: 2025-10-02 12:01:51.517 2 DEBUG nova.compute.manager [-] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:01:51 compute-0 nova_compute[257802]: 2025-10-02 12:01:51.517 2 DEBUG nova.network.neutron [-] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:01:51 compute-0 nova_compute[257802]: 2025-10-02 12:01:51.740 2 DEBUG nova.compute.manager [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp8l46yc1t',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Oct 02 12:01:51 compute-0 nova_compute[257802]: 2025-10-02 12:01:51.765 2 DEBUG oslo_concurrency.lockutils [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Acquiring lock "refresh_cache-e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:01:51 compute-0 nova_compute[257802]: 2025-10-02 12:01:51.766 2 DEBUG oslo_concurrency.lockutils [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Acquired lock "refresh_cache-e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:01:51 compute-0 nova_compute[257802]: 2025-10-02 12:01:51.766 2 DEBUG nova.network.neutron [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:01:51 compute-0 ceph-mon[73607]: pgmap v1043: 305 pgs: 305 active+clean; 529 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.6 MiB/s wr, 299 op/s
Oct 02 12:01:52 compute-0 nova_compute[257802]: 2025-10-02 12:01:52.700 2 DEBUG nova.compute.manager [req-bc25eb87-0732-48aa-bbeb-1849d6a9e9e6 req-3c109528-fbba-425c-adc1-2edf6b70d9ce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Received event network-vif-plugged-b0281338-9ff1-492d-b3aa-aeee41f08075 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:01:52 compute-0 nova_compute[257802]: 2025-10-02 12:01:52.700 2 DEBUG oslo_concurrency.lockutils [req-bc25eb87-0732-48aa-bbeb-1849d6a9e9e6 req-3c109528-fbba-425c-adc1-2edf6b70d9ce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:52 compute-0 nova_compute[257802]: 2025-10-02 12:01:52.700 2 DEBUG oslo_concurrency.lockutils [req-bc25eb87-0732-48aa-bbeb-1849d6a9e9e6 req-3c109528-fbba-425c-adc1-2edf6b70d9ce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:52 compute-0 nova_compute[257802]: 2025-10-02 12:01:52.701 2 DEBUG oslo_concurrency.lockutils [req-bc25eb87-0732-48aa-bbeb-1849d6a9e9e6 req-3c109528-fbba-425c-adc1-2edf6b70d9ce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:52 compute-0 nova_compute[257802]: 2025-10-02 12:01:52.701 2 DEBUG nova.compute.manager [req-bc25eb87-0732-48aa-bbeb-1849d6a9e9e6 req-3c109528-fbba-425c-adc1-2edf6b70d9ce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] No waiting events found dispatching network-vif-plugged-b0281338-9ff1-492d-b3aa-aeee41f08075 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:01:52 compute-0 nova_compute[257802]: 2025-10-02 12:01:52.702 2 WARNING nova.compute.manager [req-bc25eb87-0732-48aa-bbeb-1849d6a9e9e6 req-3c109528-fbba-425c-adc1-2edf6b70d9ce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Received unexpected event network-vif-plugged-b0281338-9ff1-492d-b3aa-aeee41f08075 for instance with vm_state active and task_state deleting.
Oct 02 12:01:52 compute-0 nova_compute[257802]: 2025-10-02 12:01:52.841 2 DEBUG nova.network.neutron [-] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:01:52 compute-0 nova_compute[257802]: 2025-10-02 12:01:52.858 2 INFO nova.compute.manager [-] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Took 1.34 seconds to deallocate network for instance.
Oct 02 12:01:52 compute-0 nova_compute[257802]: 2025-10-02 12:01:52.902 2 DEBUG oslo_concurrency.lockutils [None req-da7f5d6c-5b61-4344-872d-d5a17e9359c1 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:52 compute-0 nova_compute[257802]: 2025-10-02 12:01:52.903 2 DEBUG oslo_concurrency.lockutils [None req-da7f5d6c-5b61-4344-872d-d5a17e9359c1 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 529 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.0 MiB/s wr, 249 op/s
Oct 02 12:01:53 compute-0 nova_compute[257802]: 2025-10-02 12:01:53.020 2 DEBUG oslo_concurrency.processutils [None req-da7f5d6c-5b61-4344-872d-d5a17e9359c1 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:01:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:01:53 compute-0 ceph-mon[73607]: pgmap v1044: 305 pgs: 305 active+clean; 529 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.0 MiB/s wr, 249 op/s
Oct 02 12:01:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:53.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:53 compute-0 nova_compute[257802]: 2025-10-02 12:01:53.062 2 DEBUG nova.network.neutron [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Updating instance_info_cache with network_info: [{"id": "c3f370c5-770e-48df-b015-b39eb427f259", "address": "fa:16:3e:02:ff:42", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3f370c5-77", "ovs_interfaceid": "c3f370c5-770e-48df-b015-b39eb427f259", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:01:53 compute-0 nova_compute[257802]: 2025-10-02 12:01:53.086 2 DEBUG oslo_concurrency.lockutils [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Releasing lock "refresh_cache-e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:01:53 compute-0 nova_compute[257802]: 2025-10-02 12:01:53.088 2 DEBUG nova.virt.libvirt.driver [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp8l46yc1t',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Oct 02 12:01:53 compute-0 nova_compute[257802]: 2025-10-02 12:01:53.089 2 DEBUG nova.virt.libvirt.driver [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Creating instance directory: /var/lib/nova/instances/e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Oct 02 12:01:53 compute-0 nova_compute[257802]: 2025-10-02 12:01:53.090 2 DEBUG nova.virt.libvirt.driver [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Ensure instance console log exists: /var/lib/nova/instances/e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:01:53 compute-0 nova_compute[257802]: 2025-10-02 12:01:53.090 2 DEBUG nova.virt.libvirt.driver [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Oct 02 12:01:53 compute-0 nova_compute[257802]: 2025-10-02 12:01:53.091 2 DEBUG nova.virt.libvirt.vif [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:01:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1571369270',display_name='tempest-LiveMigrationTest-server-1571369270',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1571369270',id=12,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:01:47Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3f2b3ac7d7504c9c96f0d4a67e0243c9',ramdisk_id='',reservation_id='r-ton1smkr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-1876533760',owner_user_name='tempest-LiveMigrationTest-1876533760-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:01:47Z,user_data=None,user_id='efb31eeadee34403b1ab7a584f3616f7',uuid=e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c3f370c5-770e-48df-b015-b39eb427f259", "address": "fa:16:3e:02:ff:42", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapc3f370c5-77", "ovs_interfaceid": "c3f370c5-770e-48df-b015-b39eb427f259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:01:53 compute-0 nova_compute[257802]: 2025-10-02 12:01:53.092 2 DEBUG nova.network.os_vif_util [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Converting VIF {"id": "c3f370c5-770e-48df-b015-b39eb427f259", "address": "fa:16:3e:02:ff:42", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapc3f370c5-77", "ovs_interfaceid": "c3f370c5-770e-48df-b015-b39eb427f259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:01:53 compute-0 nova_compute[257802]: 2025-10-02 12:01:53.092 2 DEBUG nova.network.os_vif_util [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:ff:42,bridge_name='br-int',has_traffic_filtering=True,id=c3f370c5-770e-48df-b015-b39eb427f259,network=Network(5bd66e63-9399-4ab1-bcda-a761f2c44b1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc3f370c5-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:01:53 compute-0 nova_compute[257802]: 2025-10-02 12:01:53.093 2 DEBUG os_vif [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:ff:42,bridge_name='br-int',has_traffic_filtering=True,id=c3f370c5-770e-48df-b015-b39eb427f259,network=Network(5bd66e63-9399-4ab1-bcda-a761f2c44b1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc3f370c5-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:01:53 compute-0 nova_compute[257802]: 2025-10-02 12:01:53.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:53 compute-0 nova_compute[257802]: 2025-10-02 12:01:53.094 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:01:53 compute-0 nova_compute[257802]: 2025-10-02 12:01:53.095 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:01:53 compute-0 nova_compute[257802]: 2025-10-02 12:01:53.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:53 compute-0 nova_compute[257802]: 2025-10-02 12:01:53.098 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc3f370c5-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:01:53 compute-0 nova_compute[257802]: 2025-10-02 12:01:53.098 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc3f370c5-77, col_values=(('external_ids', {'iface-id': 'c3f370c5-770e-48df-b015-b39eb427f259', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:02:ff:42', 'vm-uuid': 'e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:01:53 compute-0 nova_compute[257802]: 2025-10-02 12:01:53.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:53 compute-0 NetworkManager[44987]: <info>  [1759406513.1010] manager: (tapc3f370c5-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Oct 02 12:01:53 compute-0 nova_compute[257802]: 2025-10-02 12:01:53.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:01:53 compute-0 nova_compute[257802]: 2025-10-02 12:01:53.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:53 compute-0 nova_compute[257802]: 2025-10-02 12:01:53.108 2 INFO os_vif [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:ff:42,bridge_name='br-int',has_traffic_filtering=True,id=c3f370c5-770e-48df-b015-b39eb427f259,network=Network(5bd66e63-9399-4ab1-bcda-a761f2c44b1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc3f370c5-77')
Oct 02 12:01:53 compute-0 nova_compute[257802]: 2025-10-02 12:01:53.108 2 DEBUG nova.virt.libvirt.driver [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Oct 02 12:01:53 compute-0 nova_compute[257802]: 2025-10-02 12:01:53.108 2 DEBUG nova.compute.manager [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp8l46yc1t',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Oct 02 12:01:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:53.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:01:53 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3475958000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:01:53 compute-0 nova_compute[257802]: 2025-10-02 12:01:53.439 2 DEBUG oslo_concurrency.processutils [None req-da7f5d6c-5b61-4344-872d-d5a17e9359c1 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:01:53 compute-0 nova_compute[257802]: 2025-10-02 12:01:53.444 2 DEBUG nova.compute.provider_tree [None req-da7f5d6c-5b61-4344-872d-d5a17e9359c1 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:01:53 compute-0 nova_compute[257802]: 2025-10-02 12:01:53.461 2 DEBUG nova.scheduler.client.report [None req-da7f5d6c-5b61-4344-872d-d5a17e9359c1 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:01:53 compute-0 nova_compute[257802]: 2025-10-02 12:01:53.484 2 DEBUG oslo_concurrency.lockutils [None req-da7f5d6c-5b61-4344-872d-d5a17e9359c1 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:53 compute-0 nova_compute[257802]: 2025-10-02 12:01:53.517 2 INFO nova.scheduler.client.report [None req-da7f5d6c-5b61-4344-872d-d5a17e9359c1 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Deleted allocations for instance 530e7ace-2feb-4a9e-9430-5a1bfc678d22
Oct 02 12:01:53 compute-0 nova_compute[257802]: 2025-10-02 12:01:53.616 2 DEBUG oslo_concurrency.lockutils [None req-da7f5d6c-5b61-4344-872d-d5a17e9359c1 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.488s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3475958000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.011963390447608413 of space, bias 1.0, pg target 3.589017134282524 quantized to 32 (current 32)
Oct 02 12:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0005181797296668528 of space, bias 1.0, pg target 0.15389937971105527 quantized to 32 (current 32)
Oct 02 12:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Oct 02 12:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 02 12:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.0002699042085427136 quantized to 32 (current 32)
Oct 02 12:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Oct 02 12:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Oct 02 12:01:54 compute-0 nova_compute[257802]: 2025-10-02 12:01:54.708 2 DEBUG nova.network.neutron [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Port c3f370c5-770e-48df-b015-b39eb427f259 updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Oct 02 12:01:54 compute-0 nova_compute[257802]: 2025-10-02 12:01:54.710 2 DEBUG nova.compute.manager [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp8l46yc1t',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Oct 02 12:01:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 457 MiB data, 503 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.0 MiB/s wr, 291 op/s
Oct 02 12:01:55 compute-0 kernel: tapc3f370c5-77: entered promiscuous mode
Oct 02 12:01:55 compute-0 NetworkManager[44987]: <info>  [1759406515.0047] manager: (tapc3f370c5-77): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Oct 02 12:01:55 compute-0 ovn_controller[148183]: 2025-10-02T12:01:55Z|00073|binding|INFO|Claiming lport c3f370c5-770e-48df-b015-b39eb427f259 for this additional chassis.
Oct 02 12:01:55 compute-0 ovn_controller[148183]: 2025-10-02T12:01:55Z|00074|binding|INFO|c3f370c5-770e-48df-b015-b39eb427f259: Claiming fa:16:3e:02:ff:42 10.100.0.6
Oct 02 12:01:55 compute-0 ovn_controller[148183]: 2025-10-02T12:01:55Z|00075|binding|INFO|Claiming lport 0fe5d3df-efa4-47f1-a32b-45858e68a8b9 for this additional chassis.
Oct 02 12:01:55 compute-0 ovn_controller[148183]: 2025-10-02T12:01:55Z|00076|binding|INFO|0fe5d3df-efa4-47f1-a32b-45858e68a8b9: Claiming fa:16:3e:88:98:01 19.80.0.134
Oct 02 12:01:55 compute-0 nova_compute[257802]: 2025-10-02 12:01:55.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:55 compute-0 systemd-machined[211836]: New machine qemu-9-instance-0000000c.
Oct 02 12:01:55 compute-0 systemd-udevd[270976]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:01:55 compute-0 NetworkManager[44987]: <info>  [1759406515.0452] device (tapc3f370c5-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:01:55 compute-0 NetworkManager[44987]: <info>  [1759406515.0462] device (tapc3f370c5-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:01:55 compute-0 nova_compute[257802]: 2025-10-02 12:01:55.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:55 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-0000000c.
Oct 02 12:01:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:55.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:55 compute-0 ovn_controller[148183]: 2025-10-02T12:01:55Z|00077|binding|INFO|Setting lport c3f370c5-770e-48df-b015-b39eb427f259 ovn-installed in OVS
Oct 02 12:01:55 compute-0 nova_compute[257802]: 2025-10-02 12:01:55.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:55 compute-0 nova_compute[257802]: 2025-10-02 12:01:55.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:55.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:55 compute-0 ceph-mon[73607]: pgmap v1045: 305 pgs: 305 active+clean; 457 MiB data, 503 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.0 MiB/s wr, 291 op/s
Oct 02 12:01:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1035003158' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:01:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1035003158' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:01:55 compute-0 nova_compute[257802]: 2025-10-02 12:01:55.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 457 MiB data, 484 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 114 KiB/s wr, 209 op/s
Oct 02 12:01:57 compute-0 ceph-mon[73607]: pgmap v1046: 305 pgs: 305 active+clean; 457 MiB data, 484 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 114 KiB/s wr, 209 op/s
Oct 02 12:01:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:01:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:57.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:01:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:57.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:01:58 compute-0 nova_compute[257802]: 2025-10-02 12:01:58.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:01:58 compute-0 nova_compute[257802]: 2025-10-02 12:01:58.215 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406518.215197, e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:01:58 compute-0 nova_compute[257802]: 2025-10-02 12:01:58.216 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] VM Started (Lifecycle Event)
Oct 02 12:01:58 compute-0 nova_compute[257802]: 2025-10-02 12:01:58.239 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:01:58 compute-0 nova_compute[257802]: 2025-10-02 12:01:58.881 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406518.8813376, e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:01:58 compute-0 nova_compute[257802]: 2025-10-02 12:01:58.882 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] VM Resumed (Lifecycle Event)
Oct 02 12:01:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 471 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 948 KiB/s wr, 207 op/s
Oct 02 12:01:58 compute-0 nova_compute[257802]: 2025-10-02 12:01:58.914 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:01:58 compute-0 nova_compute[257802]: 2025-10-02 12:01:58.918 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:01:58 compute-0 podman[271030]: 2025-10-02 12:01:58.943543512 +0000 UTC m=+0.071370611 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:01:58 compute-0 nova_compute[257802]: 2025-10-02 12:01:58.952 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-0.ctlplane.example.com
Oct 02 12:01:58 compute-0 podman[271029]: 2025-10-02 12:01:58.970667972 +0000 UTC m=+0.098520802 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 12:01:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:59.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:59 compute-0 ceph-mon[73607]: pgmap v1047: 305 pgs: 305 active+clean; 471 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 948 KiB/s wr, 207 op/s
Oct 02 12:01:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:01:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:59.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:00 compute-0 ovn_controller[148183]: 2025-10-02T12:02:00Z|00078|binding|INFO|Claiming lport c3f370c5-770e-48df-b015-b39eb427f259 for this chassis.
Oct 02 12:02:00 compute-0 ovn_controller[148183]: 2025-10-02T12:02:00Z|00079|binding|INFO|c3f370c5-770e-48df-b015-b39eb427f259: Claiming fa:16:3e:02:ff:42 10.100.0.6
Oct 02 12:02:00 compute-0 ovn_controller[148183]: 2025-10-02T12:02:00Z|00080|binding|INFO|Claiming lport 0fe5d3df-efa4-47f1-a32b-45858e68a8b9 for this chassis.
Oct 02 12:02:00 compute-0 ovn_controller[148183]: 2025-10-02T12:02:00Z|00081|binding|INFO|0fe5d3df-efa4-47f1-a32b-45858e68a8b9: Claiming fa:16:3e:88:98:01 19.80.0.134
Oct 02 12:02:00 compute-0 ovn_controller[148183]: 2025-10-02T12:02:00Z|00082|binding|INFO|Setting lport c3f370c5-770e-48df-b015-b39eb427f259 up in Southbound
Oct 02 12:02:00 compute-0 ovn_controller[148183]: 2025-10-02T12:02:00Z|00083|binding|INFO|Setting lport 0fe5d3df-efa4-47f1-a32b-45858e68a8b9 up in Southbound
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:00.251 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:ff:42 10.100.0.6'], port_security=['fa:16:3e:02:ff:42 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-96076607', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-96076607', 'neutron:project_id': '3f2b3ac7d7504c9c96f0d4a67e0243c9', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'c39f970f-1ead-4030-a775-b7ca9942094a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58eebdcc-6c12-4ff3-b6bc-0fe1fb3af6b6, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=c3f370c5-770e-48df-b015-b39eb427f259) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:00.253 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:98:01 19.80.0.134'], port_security=['fa:16:3e:88:98:01 19.80.0.134'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['c3f370c5-770e-48df-b015-b39eb427f259'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-2131619737', 'neutron:cidrs': '19.80.0.134/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-89bcb068-8337-43f0-9d5d-f27225e9a30d', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-2131619737', 'neutron:project_id': '3f2b3ac7d7504c9c96f0d4a67e0243c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c39f970f-1ead-4030-a775-b7ca9942094a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=b81908e7-ad92-4b22-83eb-83c6667bf780, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0fe5d3df-efa4-47f1-a32b-45858e68a8b9) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:00.254 158261 INFO neutron.agent.ovn.metadata.agent [-] Port c3f370c5-770e-48df-b015-b39eb427f259 in datapath 5bd66e63-9399-4ab1-bcda-a761f2c44b1d bound to our chassis
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:00.256 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5bd66e63-9399-4ab1-bcda-a761f2c44b1d
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:00.268 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7d089fc7-01f3-4145-910f-b066f839f7d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:00.269 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5bd66e63-91 in ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:00.270 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5bd66e63-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:00.270 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5729b135-3e7f-4130-9656-00fa675b3102]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:00.271 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[52ddc989-8066-4349-9cc4-6eb2ab0aadc9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:00.282 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[2a2794ea-9831-4094-b0d4-71aa48b268da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:00.307 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ca310adc-f5ba-40b7-85e6-ca877f096183]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:00.333 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[efb9ff5b-44fa-46ff-899f-94aa9ae8d381]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:00.338 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[dd19f915-b86f-45d2-af67-665a23b61e6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:00 compute-0 NetworkManager[44987]: <info>  [1759406520.3402] manager: (tap5bd66e63-90): new Veth device (/org/freedesktop/NetworkManager/Devices/47)
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:00.362 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[148aaae2-976e-40c3-9b04-4c1da2929277]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:00.366 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[8f7048db-902a-466d-b915-f0307206313e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:00 compute-0 systemd-udevd[271073]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:02:00 compute-0 NetworkManager[44987]: <info>  [1759406520.3977] device (tap5bd66e63-90): carrier: link connected
Oct 02 12:02:00 compute-0 nova_compute[257802]: 2025-10-02 12:02:00.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:00.402 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[0a44ca8a-084c-49d6-ab3d-3a0321d5caaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:00.418 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[550e9227-4dc6-4de5-b699-eefca812dee8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bd66e63-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:10:0f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458802, 'reachable_time': 36366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271092, 'error': None, 'target': 'ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:00.432 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1083f0d6-936f-4485-85f5-a792f2b9b7c7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6f:100f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458802, 'tstamp': 458802}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271093, 'error': None, 'target': 'ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:00.447 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[da630a7d-cdfa-4c6d-a7b5-4e8667a20a18]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bd66e63-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:10:0f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458802, 'reachable_time': 36366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271094, 'error': None, 'target': 'ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:00 compute-0 nova_compute[257802]: 2025-10-02 12:02:00.449 2 INFO nova.compute.manager [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Post operation of migration started
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:00.471 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[067b024e-a162-4c04-8fff-c668d29b8011]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:00.532 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e0d632e9-433d-4ae6-8478-70d7552ea0c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:00.533 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bd66e63-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:00.533 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:00.534 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bd66e63-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:02:00 compute-0 nova_compute[257802]: 2025-10-02 12:02:00.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:00 compute-0 NetworkManager[44987]: <info>  [1759406520.5359] manager: (tap5bd66e63-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Oct 02 12:02:00 compute-0 kernel: tap5bd66e63-90: entered promiscuous mode
Oct 02 12:02:00 compute-0 nova_compute[257802]: 2025-10-02 12:02:00.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:00.539 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5bd66e63-90, col_values=(('external_ids', {'iface-id': '5a25c40a-77b7-400c-afc3-f6cb920420cb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:02:00 compute-0 nova_compute[257802]: 2025-10-02 12:02:00.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:00 compute-0 ovn_controller[148183]: 2025-10-02T12:02:00Z|00084|binding|INFO|Releasing lport 5a25c40a-77b7-400c-afc3-f6cb920420cb from this chassis (sb_readonly=0)
Oct 02 12:02:00 compute-0 nova_compute[257802]: 2025-10-02 12:02:00.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:00 compute-0 nova_compute[257802]: 2025-10-02 12:02:00.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:00.556 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5bd66e63-9399-4ab1-bcda-a761f2c44b1d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5bd66e63-9399-4ab1-bcda-a761f2c44b1d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:00.557 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d6fd13b5-9b34-4b78-b49a-5e446941883e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:00.558 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-5bd66e63-9399-4ab1-bcda-a761f2c44b1d
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/5bd66e63-9399-4ab1-bcda-a761f2c44b1d.pid.haproxy
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 5bd66e63-9399-4ab1-bcda-a761f2c44b1d
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:00.559 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'env', 'PROCESS_TAG=haproxy-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5bd66e63-9399-4ab1-bcda-a761f2c44b1d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:02:00 compute-0 nova_compute[257802]: 2025-10-02 12:02:00.752 2 DEBUG oslo_concurrency.lockutils [None req-5e224ef5-cc29-41f0-b334-f745cb14b4d7 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Acquiring lock "cee549ac-63b2-4eed-b8c5-0bd0948a95d5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:02:00 compute-0 nova_compute[257802]: 2025-10-02 12:02:00.753 2 DEBUG oslo_concurrency.lockutils [None req-5e224ef5-cc29-41f0-b334-f745cb14b4d7 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "cee549ac-63b2-4eed-b8c5-0bd0948a95d5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:02:00 compute-0 nova_compute[257802]: 2025-10-02 12:02:00.753 2 DEBUG oslo_concurrency.lockutils [None req-5e224ef5-cc29-41f0-b334-f745cb14b4d7 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Acquiring lock "cee549ac-63b2-4eed-b8c5-0bd0948a95d5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:02:00 compute-0 nova_compute[257802]: 2025-10-02 12:02:00.753 2 DEBUG oslo_concurrency.lockutils [None req-5e224ef5-cc29-41f0-b334-f745cb14b4d7 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "cee549ac-63b2-4eed-b8c5-0bd0948a95d5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:02:00 compute-0 nova_compute[257802]: 2025-10-02 12:02:00.754 2 DEBUG oslo_concurrency.lockutils [None req-5e224ef5-cc29-41f0-b334-f745cb14b4d7 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "cee549ac-63b2-4eed-b8c5-0bd0948a95d5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:02:00 compute-0 nova_compute[257802]: 2025-10-02 12:02:00.755 2 INFO nova.compute.manager [None req-5e224ef5-cc29-41f0-b334-f745cb14b4d7 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Terminating instance
Oct 02 12:02:00 compute-0 nova_compute[257802]: 2025-10-02 12:02:00.756 2 DEBUG nova.compute.manager [None req-5e224ef5-cc29-41f0-b334-f745cb14b4d7 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:02:00 compute-0 kernel: tap72564b18-ab (unregistering): left promiscuous mode
Oct 02 12:02:00 compute-0 NetworkManager[44987]: <info>  [1759406520.8148] device (tap72564b18-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:02:00 compute-0 ovn_controller[148183]: 2025-10-02T12:02:00Z|00085|binding|INFO|Releasing lport 72564b18-ab59-4028-8bb3-e93073a18534 from this chassis (sb_readonly=0)
Oct 02 12:02:00 compute-0 ovn_controller[148183]: 2025-10-02T12:02:00Z|00086|binding|INFO|Setting lport 72564b18-ab59-4028-8bb3-e93073a18534 down in Southbound
Oct 02 12:02:00 compute-0 nova_compute[257802]: 2025-10-02 12:02:00.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:00 compute-0 ovn_controller[148183]: 2025-10-02T12:02:00Z|00087|binding|INFO|Removing iface tap72564b18-ab ovn-installed in OVS
Oct 02 12:02:00 compute-0 nova_compute[257802]: 2025-10-02 12:02:00.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:00 compute-0 nova_compute[257802]: 2025-10-02 12:02:00.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:00.891 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:35:39 10.100.0.6'], port_security=['fa:16:3e:36:35:39 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'cee549ac-63b2-4eed-b8c5-0bd0948a95d5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-797e74ff-2c6b-48fb-807e-b845ad260597', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b6a5858e0d184dd184a3291b74794c14', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1d7d59e9-1a48-4c19-9701-fea254de36f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.174'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0288ec27-954c-4f05-9e09-da71a7e0f954, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=72564b18-ab59-4028-8bb3-e93073a18534) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:02:00 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Oct 02 12:02:00 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d0000000b.scope: Consumed 15.178s CPU time.
Oct 02 12:02:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 503 MiB data, 519 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.2 MiB/s wr, 199 op/s
Oct 02 12:02:00 compute-0 systemd-machined[211836]: Machine qemu-8-instance-0000000b terminated.
Oct 02 12:02:00 compute-0 NetworkManager[44987]: <info>  [1759406520.9721] manager: (tap72564b18-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Oct 02 12:02:00 compute-0 nova_compute[257802]: 2025-10-02 12:02:00.986 2 INFO nova.virt.libvirt.driver [-] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Instance destroyed successfully.
Oct 02 12:02:00 compute-0 nova_compute[257802]: 2025-10-02 12:02:00.987 2 DEBUG nova.objects.instance [None req-5e224ef5-cc29-41f0-b334-f745cb14b4d7 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lazy-loading 'resources' on Instance uuid cee549ac-63b2-4eed-b8c5-0bd0948a95d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:02:00 compute-0 podman[271129]: 2025-10-02 12:02:00.998513229 +0000 UTC m=+0.073397281 container create 6ddcf3104f6773662d1da48ac96aec0bb15fce8fc8332ee5071d80d4dc852846 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:02:01 compute-0 ceph-mon[73607]: pgmap v1048: 305 pgs: 305 active+clean; 503 MiB data, 519 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.2 MiB/s wr, 199 op/s
Oct 02 12:02:01 compute-0 nova_compute[257802]: 2025-10-02 12:02:01.035 2 DEBUG nova.virt.libvirt.vif [None req-5e224ef5-cc29-41f0-b334-f745cb14b4d7 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:00:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-544170697',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-544170697',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(27),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-544170697',id=11,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=27,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBABlpYtD7rDfo5edCO3VO05L7/og1LArqb6dlvrZJFkfLxvT/a212dNPUwff9OZOkCSB3EGYGqoNRjJJC/KPkC0Po6hJhudoH+ymL6NUy6s9m4GPkIqwcNKYTuBlpjjxTw==',key_name='tempest-keypair-199584473',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:01:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b6a5858e0d184dd184a3291b74794c14',ramdisk_id='',reservation_id='r-j38q1r6l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1982850532',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1982850532-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:01:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f4a02a1717144da38c573ce51c727de8',uuid=cee549ac-63b2-4eed-b8c5-0bd0948a95d5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "72564b18-ab59-4028-8bb3-e93073a18534", "address": "fa:16:3e:36:35:39", "network": {"id": "797e74ff-2c6b-48fb-807e-b845ad260597", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1853530210-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a5858e0d184dd184a3291b74794c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72564b18-ab", "ovs_interfaceid": "72564b18-ab59-4028-8bb3-e93073a18534", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:02:01 compute-0 nova_compute[257802]: 2025-10-02 12:02:01.035 2 DEBUG nova.network.os_vif_util [None req-5e224ef5-cc29-41f0-b334-f745cb14b4d7 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Converting VIF {"id": "72564b18-ab59-4028-8bb3-e93073a18534", "address": "fa:16:3e:36:35:39", "network": {"id": "797e74ff-2c6b-48fb-807e-b845ad260597", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1853530210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a5858e0d184dd184a3291b74794c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72564b18-ab", "ovs_interfaceid": "72564b18-ab59-4028-8bb3-e93073a18534", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:02:01 compute-0 nova_compute[257802]: 2025-10-02 12:02:01.036 2 DEBUG nova.network.os_vif_util [None req-5e224ef5-cc29-41f0-b334-f745cb14b4d7 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:36:35:39,bridge_name='br-int',has_traffic_filtering=True,id=72564b18-ab59-4028-8bb3-e93073a18534,network=Network(797e74ff-2c6b-48fb-807e-b845ad260597),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72564b18-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:02:01 compute-0 nova_compute[257802]: 2025-10-02 12:02:01.036 2 DEBUG os_vif [None req-5e224ef5-cc29-41f0-b334-f745cb14b4d7 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:35:39,bridge_name='br-int',has_traffic_filtering=True,id=72564b18-ab59-4028-8bb3-e93073a18534,network=Network(797e74ff-2c6b-48fb-807e-b845ad260597),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72564b18-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:02:01 compute-0 nova_compute[257802]: 2025-10-02 12:02:01.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:01 compute-0 nova_compute[257802]: 2025-10-02 12:02:01.038 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72564b18-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:02:01 compute-0 systemd[1]: Started libpod-conmon-6ddcf3104f6773662d1da48ac96aec0bb15fce8fc8332ee5071d80d4dc852846.scope.
Oct 02 12:02:01 compute-0 nova_compute[257802]: 2025-10-02 12:02:01.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:01 compute-0 nova_compute[257802]: 2025-10-02 12:02:01.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:02:01 compute-0 podman[271129]: 2025-10-02 12:02:00.949878679 +0000 UTC m=+0.024762761 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:02:01 compute-0 nova_compute[257802]: 2025-10-02 12:02:01.045 2 INFO os_vif [None req-5e224ef5-cc29-41f0-b334-f745cb14b4d7 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:35:39,bridge_name='br-int',has_traffic_filtering=True,id=72564b18-ab59-4028-8bb3-e93073a18534,network=Network(797e74ff-2c6b-48fb-807e-b845ad260597),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72564b18-ab')
Oct 02 12:02:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:02:01 compute-0 nova_compute[257802]: 2025-10-02 12:02:01.061 2 DEBUG oslo_concurrency.lockutils [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Acquiring lock "refresh_cache-e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:02:01 compute-0 nova_compute[257802]: 2025-10-02 12:02:01.062 2 DEBUG oslo_concurrency.lockutils [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Acquired lock "refresh_cache-e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:02:01 compute-0 nova_compute[257802]: 2025-10-02 12:02:01.063 2 DEBUG nova.network.neutron [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/105e7c63d274aaa119458aa4bef051320da796ffbc8fb11d8def9bd7834e54fd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:01.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:01 compute-0 podman[271129]: 2025-10-02 12:02:01.095165243 +0000 UTC m=+0.170049315 container init 6ddcf3104f6773662d1da48ac96aec0bb15fce8fc8332ee5071d80d4dc852846 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:02:01 compute-0 podman[271129]: 2025-10-02 12:02:01.102890124 +0000 UTC m=+0.177774176 container start 6ddcf3104f6773662d1da48ac96aec0bb15fce8fc8332ee5071d80d4dc852846 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:02:01 compute-0 neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d[271147]: [NOTICE]   (271169) : New worker (271171) forked
Oct 02 12:02:01 compute-0 neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d[271147]: [NOTICE]   (271169) : Loading success.
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:01.169 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 0fe5d3df-efa4-47f1-a32b-45858e68a8b9 in datapath 89bcb068-8337-43f0-9d5d-f27225e9a30d unbound from our chassis
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:01.171 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 89bcb068-8337-43f0-9d5d-f27225e9a30d
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:01.186 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e490ee5c-4138-4ecd-9f2a-32b42e50d100]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:01.187 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap89bcb068-81 in ovnmeta-89bcb068-8337-43f0-9d5d-f27225e9a30d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:01.190 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap89bcb068-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:01.190 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[43872fc2-d3a4-490b-a647-eaf99d0781cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:01.191 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b90623c7-e707-40ca-a4fe-6da0e730b393]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:01.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:01 compute-0 ovn_controller[148183]: 2025-10-02T12:02:01Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:02:ff:42 10.100.0.6
Oct 02 12:02:01 compute-0 ovn_controller[148183]: 2025-10-02T12:02:01Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:02:ff:42 10.100.0.6
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:01.203 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[59651f1e-1ad4-47f5-b5a6-15584d8deccc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:01.216 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3fbdcc44-af76-4555-bf55-a3c7a7b7243e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:01.243 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b67ae3-2ccc-4b2a-983f-d7a5e39af3a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:01 compute-0 systemd-udevd[271086]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:01.251 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[97e7e982-d2e3-44d3-927d-fca61b44b484]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:01 compute-0 NetworkManager[44987]: <info>  [1759406521.2535] manager: (tap89bcb068-80): new Veth device (/org/freedesktop/NetworkManager/Devices/50)
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:01.282 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[05bcb09d-05d1-45ce-b8b0-1d66c882d76f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:01.285 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[a3f44c0a-b5f7-4631-955e-e4ab45e03b53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:01 compute-0 NetworkManager[44987]: <info>  [1759406521.3097] device (tap89bcb068-80): carrier: link connected
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:01.314 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[dddfba41-bb65-4e65-bbb0-7b99ca740fe4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:01 compute-0 nova_compute[257802]: 2025-10-02 12:02:01.318 2 DEBUG nova.compute.manager [req-3fbfd43f-adb4-4fc0-93b7-23defd7429b7 req-ceeb3904-1716-40cc-bc89-09f04f436e41 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Received event network-vif-unplugged-72564b18-ab59-4028-8bb3-e93073a18534 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:02:01 compute-0 nova_compute[257802]: 2025-10-02 12:02:01.318 2 DEBUG oslo_concurrency.lockutils [req-3fbfd43f-adb4-4fc0-93b7-23defd7429b7 req-ceeb3904-1716-40cc-bc89-09f04f436e41 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "cee549ac-63b2-4eed-b8c5-0bd0948a95d5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:02:01 compute-0 nova_compute[257802]: 2025-10-02 12:02:01.319 2 DEBUG oslo_concurrency.lockutils [req-3fbfd43f-adb4-4fc0-93b7-23defd7429b7 req-ceeb3904-1716-40cc-bc89-09f04f436e41 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cee549ac-63b2-4eed-b8c5-0bd0948a95d5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:02:01 compute-0 nova_compute[257802]: 2025-10-02 12:02:01.319 2 DEBUG oslo_concurrency.lockutils [req-3fbfd43f-adb4-4fc0-93b7-23defd7429b7 req-ceeb3904-1716-40cc-bc89-09f04f436e41 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cee549ac-63b2-4eed-b8c5-0bd0948a95d5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:02:01 compute-0 nova_compute[257802]: 2025-10-02 12:02:01.319 2 DEBUG nova.compute.manager [req-3fbfd43f-adb4-4fc0-93b7-23defd7429b7 req-ceeb3904-1716-40cc-bc89-09f04f436e41 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] No waiting events found dispatching network-vif-unplugged-72564b18-ab59-4028-8bb3-e93073a18534 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:02:01 compute-0 nova_compute[257802]: 2025-10-02 12:02:01.320 2 DEBUG nova.compute.manager [req-3fbfd43f-adb4-4fc0-93b7-23defd7429b7 req-ceeb3904-1716-40cc-bc89-09f04f436e41 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Received event network-vif-unplugged-72564b18-ab59-4028-8bb3-e93073a18534 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:01.332 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[86ab2164-5a39-4ec3-8c78-ae0e1ec25f90]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap89bcb068-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:40:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458893, 'reachable_time': 43249, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271190, 'error': None, 'target': 'ovnmeta-89bcb068-8337-43f0-9d5d-f27225e9a30d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:01.352 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[adec1644-b2c3-4dcc-a9b9-185ca3b52413]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8c:409e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458893, 'tstamp': 458893}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271191, 'error': None, 'target': 'ovnmeta-89bcb068-8337-43f0-9d5d-f27225e9a30d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:01.373 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[dae73c61-f9ca-4268-a368-2a6bc82d3d2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap89bcb068-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:40:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458893, 'reachable_time': 43249, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271192, 'error': None, 'target': 'ovnmeta-89bcb068-8337-43f0-9d5d-f27225e9a30d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:01.407 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e2403936-5b7f-4dd0-89c9-874ff2e3b552]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:01.463 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[522c7f60-dd77-4cea-b1ae-a0599e494b6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:01.464 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap89bcb068-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:01.464 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:01.464 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap89bcb068-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:02:01 compute-0 nova_compute[257802]: 2025-10-02 12:02:01.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:01 compute-0 NetworkManager[44987]: <info>  [1759406521.4669] manager: (tap89bcb068-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Oct 02 12:02:01 compute-0 kernel: tap89bcb068-80: entered promiscuous mode
Oct 02 12:02:01 compute-0 nova_compute[257802]: 2025-10-02 12:02:01.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:01.469 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap89bcb068-80, col_values=(('external_ids', {'iface-id': '2d320c71-91bf-44ba-b753-c9807cdc7fdb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:02:01 compute-0 nova_compute[257802]: 2025-10-02 12:02:01.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:01 compute-0 ovn_controller[148183]: 2025-10-02T12:02:01Z|00088|binding|INFO|Releasing lport 2d320c71-91bf-44ba-b753-c9807cdc7fdb from this chassis (sb_readonly=0)
Oct 02 12:02:01 compute-0 nova_compute[257802]: 2025-10-02 12:02:01.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:01 compute-0 nova_compute[257802]: 2025-10-02 12:02:01.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:01.493 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/89bcb068-8337-43f0-9d5d-f27225e9a30d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/89bcb068-8337-43f0-9d5d-f27225e9a30d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:01.494 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8d0bcefd-b560-483b-8ee9-792187a38807]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:01.495 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-89bcb068-8337-43f0-9d5d-f27225e9a30d
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/89bcb068-8337-43f0-9d5d-f27225e9a30d.pid.haproxy
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 89bcb068-8337-43f0-9d5d-f27225e9a30d
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:01.496 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-89bcb068-8337-43f0-9d5d-f27225e9a30d', 'env', 'PROCESS_TAG=haproxy-89bcb068-8337-43f0-9d5d-f27225e9a30d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/89bcb068-8337-43f0-9d5d-f27225e9a30d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:02:01 compute-0 podman[271225]: 2025-10-02 12:02:01.829029573 +0000 UTC m=+0.029887148 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:02:01 compute-0 nova_compute[257802]: 2025-10-02 12:02:01.996 2 INFO nova.virt.libvirt.driver [None req-5e224ef5-cc29-41f0-b334-f745cb14b4d7 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Deleting instance files /var/lib/nova/instances/cee549ac-63b2-4eed-b8c5-0bd0948a95d5_del
Oct 02 12:02:01 compute-0 nova_compute[257802]: 2025-10-02 12:02:01.998 2 INFO nova.virt.libvirt.driver [None req-5e224ef5-cc29-41f0-b334-f745cb14b4d7 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Deletion of /var/lib/nova/instances/cee549ac-63b2-4eed-b8c5-0bd0948a95d5_del complete
Oct 02 12:02:02 compute-0 podman[271225]: 2025-10-02 12:02:02.055693784 +0000 UTC m=+0.256551339 container create 5b87dcbf53394ce2ceee9199a84e2a5908b0dd998e8a3b2b6109065bebc6cbcb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-89bcb068-8337-43f0-9d5d-f27225e9a30d, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 02 12:02:02 compute-0 nova_compute[257802]: 2025-10-02 12:02:02.056 2 INFO nova.compute.manager [None req-5e224ef5-cc29-41f0-b334-f745cb14b4d7 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Took 1.30 seconds to destroy the instance on the hypervisor.
Oct 02 12:02:02 compute-0 nova_compute[257802]: 2025-10-02 12:02:02.057 2 DEBUG oslo.service.loopingcall [None req-5e224ef5-cc29-41f0-b334-f745cb14b4d7 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:02:02 compute-0 nova_compute[257802]: 2025-10-02 12:02:02.057 2 DEBUG nova.compute.manager [-] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:02:02 compute-0 nova_compute[257802]: 2025-10-02 12:02:02.057 2 DEBUG nova.network.neutron [-] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:02:02 compute-0 systemd[1]: Started libpod-conmon-5b87dcbf53394ce2ceee9199a84e2a5908b0dd998e8a3b2b6109065bebc6cbcb.scope.
Oct 02 12:02:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:02:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e108c60ce2193e14f4e80aaa64d913573c729dd603977becea42d2467361c96d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:02 compute-0 podman[271225]: 2025-10-02 12:02:02.173772666 +0000 UTC m=+0.374630241 container init 5b87dcbf53394ce2ceee9199a84e2a5908b0dd998e8a3b2b6109065bebc6cbcb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-89bcb068-8337-43f0-9d5d-f27225e9a30d, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:02:02 compute-0 podman[271225]: 2025-10-02 12:02:02.179533919 +0000 UTC m=+0.380391474 container start 5b87dcbf53394ce2ceee9199a84e2a5908b0dd998e8a3b2b6109065bebc6cbcb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-89bcb068-8337-43f0-9d5d-f27225e9a30d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 12:02:02 compute-0 neutron-haproxy-ovnmeta-89bcb068-8337-43f0-9d5d-f27225e9a30d[271240]: [NOTICE]   (271244) : New worker (271246) forked
Oct 02 12:02:02 compute-0 neutron-haproxy-ovnmeta-89bcb068-8337-43f0-9d5d-f27225e9a30d[271240]: [NOTICE]   (271244) : Loading success.
Oct 02 12:02:02 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:02.340 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 72564b18-ab59-4028-8bb3-e93073a18534 in datapath 797e74ff-2c6b-48fb-807e-b845ad260597 unbound from our chassis
Oct 02 12:02:02 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:02.343 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 797e74ff-2c6b-48fb-807e-b845ad260597, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:02:02 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:02.344 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1eac1756-bf61-45f3-8e6f-6b0bad5a3297]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:02 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:02.344 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597 namespace which is not needed anymore
Oct 02 12:02:02 compute-0 neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597[269682]: [NOTICE]   (269700) : haproxy version is 2.8.14-c23fe91
Oct 02 12:02:02 compute-0 neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597[269682]: [NOTICE]   (269700) : path to executable is /usr/sbin/haproxy
Oct 02 12:02:02 compute-0 neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597[269682]: [WARNING]  (269700) : Exiting Master process...
Oct 02 12:02:02 compute-0 neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597[269682]: [ALERT]    (269700) : Current worker (269702) exited with code 143 (Terminated)
Oct 02 12:02:02 compute-0 neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597[269682]: [WARNING]  (269700) : All workers exited. Exiting... (0)
Oct 02 12:02:02 compute-0 systemd[1]: libpod-743831dcd6b86075c910ea6d5f5dfc8243dfb5916b65d5c13091e2c5fb0ef2eb.scope: Deactivated successfully.
Oct 02 12:02:02 compute-0 podman[271273]: 2025-10-02 12:02:02.483489846 +0000 UTC m=+0.066065571 container died 743831dcd6b86075c910ea6d5f5dfc8243dfb5916b65d5c13091e2c5fb0ef2eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:02:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-743831dcd6b86075c910ea6d5f5dfc8243dfb5916b65d5c13091e2c5fb0ef2eb-userdata-shm.mount: Deactivated successfully.
Oct 02 12:02:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-743f29f4e6528aed4fc82b14802579c266cebf553315af49991364922a8f9e87-merged.mount: Deactivated successfully.
Oct 02 12:02:02 compute-0 podman[271273]: 2025-10-02 12:02:02.770640399 +0000 UTC m=+0.353216114 container cleanup 743831dcd6b86075c910ea6d5f5dfc8243dfb5916b65d5c13091e2c5fb0ef2eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 02 12:02:02 compute-0 systemd[1]: libpod-conmon-743831dcd6b86075c910ea6d5f5dfc8243dfb5916b65d5c13091e2c5fb0ef2eb.scope: Deactivated successfully.
Oct 02 12:02:02 compute-0 podman[271303]: 2025-10-02 12:02:02.839965938 +0000 UTC m=+0.043872893 container remove 743831dcd6b86075c910ea6d5f5dfc8243dfb5916b65d5c13091e2c5fb0ef2eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:02:02 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:02.846 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cb69c30f-17ce-4143-bcaa-f3c703970fc0]: (4, ('Thu Oct  2 12:02:02 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597 (743831dcd6b86075c910ea6d5f5dfc8243dfb5916b65d5c13091e2c5fb0ef2eb)\n743831dcd6b86075c910ea6d5f5dfc8243dfb5916b65d5c13091e2c5fb0ef2eb\nThu Oct  2 12:02:02 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597 (743831dcd6b86075c910ea6d5f5dfc8243dfb5916b65d5c13091e2c5fb0ef2eb)\n743831dcd6b86075c910ea6d5f5dfc8243dfb5916b65d5c13091e2c5fb0ef2eb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:02 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:02.848 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[fb185b36-9ed8-4ddb-abd0-e8643180da7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:02 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:02.849 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap797e74ff-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:02:02 compute-0 nova_compute[257802]: 2025-10-02 12:02:02.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:02 compute-0 kernel: tap797e74ff-20: left promiscuous mode
Oct 02 12:02:02 compute-0 nova_compute[257802]: 2025-10-02 12:02:02.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:02 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:02.869 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b4fe7a18-1e61-4889-8021-def0396757ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:02 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:02.896 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[bf880245-22da-4f92-b57b-477d6d343934]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:02 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:02.897 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[34bc96ce-3643-4de8-9007-fcffad7f39f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:02 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:02.911 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6163bd4c-8927-4922-9e43-6056419d05a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455244, 'reachable_time': 42726, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271319, 'error': None, 'target': 'ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 503 MiB data, 519 MiB used, 20 GiB / 21 GiB avail; 631 KiB/s rd, 3.2 MiB/s wr, 127 op/s
Oct 02 12:02:02 compute-0 systemd[1]: run-netns-ovnmeta\x2d797e74ff\x2d2c6b\x2d48fb\x2d807e\x2db845ad260597.mount: Deactivated successfully.
Oct 02 12:02:02 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:02.915 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:02:02 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:02.915 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[49129bc5-bd6b-497e-977f-2465ab6d61f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:02 compute-0 ceph-mon[73607]: pgmap v1049: 305 pgs: 305 active+clean; 503 MiB data, 519 MiB used, 20 GiB / 21 GiB avail; 631 KiB/s rd, 3.2 MiB/s wr, 127 op/s
Oct 02 12:02:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:02:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:03.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:03.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:03 compute-0 nova_compute[257802]: 2025-10-02 12:02:03.441 2 DEBUG nova.compute.manager [req-c9a8a766-1bb2-4435-99aa-5b5903262f1a req-7459a68c-1835-4cee-9082-afd0c73564d6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Received event network-vif-plugged-72564b18-ab59-4028-8bb3-e93073a18534 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:02:03 compute-0 nova_compute[257802]: 2025-10-02 12:02:03.442 2 DEBUG oslo_concurrency.lockutils [req-c9a8a766-1bb2-4435-99aa-5b5903262f1a req-7459a68c-1835-4cee-9082-afd0c73564d6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "cee549ac-63b2-4eed-b8c5-0bd0948a95d5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:02:03 compute-0 nova_compute[257802]: 2025-10-02 12:02:03.442 2 DEBUG oslo_concurrency.lockutils [req-c9a8a766-1bb2-4435-99aa-5b5903262f1a req-7459a68c-1835-4cee-9082-afd0c73564d6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cee549ac-63b2-4eed-b8c5-0bd0948a95d5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:02:03 compute-0 nova_compute[257802]: 2025-10-02 12:02:03.442 2 DEBUG oslo_concurrency.lockutils [req-c9a8a766-1bb2-4435-99aa-5b5903262f1a req-7459a68c-1835-4cee-9082-afd0c73564d6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cee549ac-63b2-4eed-b8c5-0bd0948a95d5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:02:03 compute-0 nova_compute[257802]: 2025-10-02 12:02:03.442 2 DEBUG nova.compute.manager [req-c9a8a766-1bb2-4435-99aa-5b5903262f1a req-7459a68c-1835-4cee-9082-afd0c73564d6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] No waiting events found dispatching network-vif-plugged-72564b18-ab59-4028-8bb3-e93073a18534 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:02:03 compute-0 nova_compute[257802]: 2025-10-02 12:02:03.442 2 WARNING nova.compute.manager [req-c9a8a766-1bb2-4435-99aa-5b5903262f1a req-7459a68c-1835-4cee-9082-afd0c73564d6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Received unexpected event network-vif-plugged-72564b18-ab59-4028-8bb3-e93073a18534 for instance with vm_state active and task_state deleting.
Oct 02 12:02:03 compute-0 podman[271320]: 2025-10-02 12:02:03.965608382 +0000 UTC m=+0.103740230 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 02 12:02:04 compute-0 nova_compute[257802]: 2025-10-02 12:02:04.192 2 DEBUG nova.network.neutron [-] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:02:04 compute-0 nova_compute[257802]: 2025-10-02 12:02:04.264 2 INFO nova.compute.manager [-] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Took 2.21 seconds to deallocate network for instance.
Oct 02 12:02:04 compute-0 nova_compute[257802]: 2025-10-02 12:02:04.329 2 DEBUG oslo_concurrency.lockutils [None req-5e224ef5-cc29-41f0-b334-f745cb14b4d7 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:02:04 compute-0 nova_compute[257802]: 2025-10-02 12:02:04.329 2 DEBUG oslo_concurrency.lockutils [None req-5e224ef5-cc29-41f0-b334-f745cb14b4d7 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:02:04 compute-0 nova_compute[257802]: 2025-10-02 12:02:04.448 2 DEBUG nova.network.neutron [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Updating instance_info_cache with network_info: [{"id": "c3f370c5-770e-48df-b015-b39eb427f259", "address": "fa:16:3e:02:ff:42", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3f370c5-77", "ovs_interfaceid": "c3f370c5-770e-48df-b015-b39eb427f259", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:02:04 compute-0 nova_compute[257802]: 2025-10-02 12:02:04.456 2 DEBUG oslo_concurrency.processutils [None req-5e224ef5-cc29-41f0-b334-f745cb14b4d7 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:02:04 compute-0 nova_compute[257802]: 2025-10-02 12:02:04.504 2 DEBUG oslo_concurrency.lockutils [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Releasing lock "refresh_cache-e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:02:04 compute-0 nova_compute[257802]: 2025-10-02 12:02:04.542 2 DEBUG oslo_concurrency.lockutils [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:02:04 compute-0 nova_compute[257802]: 2025-10-02 12:02:04.569 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406509.5691023, 530e7ace-2feb-4a9e-9430-5a1bfc678d22 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:02:04 compute-0 nova_compute[257802]: 2025-10-02 12:02:04.570 2 INFO nova.compute.manager [-] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] VM Stopped (Lifecycle Event)
Oct 02 12:02:04 compute-0 nova_compute[257802]: 2025-10-02 12:02:04.601 2 DEBUG nova.compute.manager [None req-30ef9473-0460-4a93-9254-37be65145ef0 - - - - - -] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:02:04 compute-0 nova_compute[257802]: 2025-10-02 12:02:04.640 2 DEBUG nova.compute.manager [req-ee86ad3f-6693-404b-87a1-f201741674ed req-9279a700-6b0c-4b09-a1be-55ad51920e05 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Received event network-vif-deleted-72564b18-ab59-4028-8bb3-e93073a18534 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:02:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Oct 02 12:02:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Oct 02 12:02:04 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Oct 02 12:02:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:02:04 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1013751851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:02:04 compute-0 nova_compute[257802]: 2025-10-02 12:02:04.898 2 DEBUG oslo_concurrency.processutils [None req-5e224ef5-cc29-41f0-b334-f745cb14b4d7 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:02:04 compute-0 nova_compute[257802]: 2025-10-02 12:02:04.903 2 DEBUG nova.compute.provider_tree [None req-5e224ef5-cc29-41f0-b334-f745cb14b4d7 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:02:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 461 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 853 KiB/s rd, 5.1 MiB/s wr, 195 op/s
Oct 02 12:02:04 compute-0 ovn_controller[148183]: 2025-10-02T12:02:04Z|00089|binding|INFO|Releasing lport 5a25c40a-77b7-400c-afc3-f6cb920420cb from this chassis (sb_readonly=0)
Oct 02 12:02:04 compute-0 ovn_controller[148183]: 2025-10-02T12:02:04Z|00090|binding|INFO|Releasing lport 2d320c71-91bf-44ba-b753-c9807cdc7fdb from this chassis (sb_readonly=0)
Oct 02 12:02:04 compute-0 nova_compute[257802]: 2025-10-02 12:02:04.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:04 compute-0 nova_compute[257802]: 2025-10-02 12:02:04.970 2 DEBUG nova.scheduler.client.report [None req-5e224ef5-cc29-41f0-b334-f745cb14b4d7 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:02:05 compute-0 nova_compute[257802]: 2025-10-02 12:02:05.009 2 DEBUG oslo_concurrency.lockutils [None req-5e224ef5-cc29-41f0-b334-f745cb14b4d7 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:02:05 compute-0 nova_compute[257802]: 2025-10-02 12:02:05.011 2 DEBUG oslo_concurrency.lockutils [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:02:05 compute-0 nova_compute[257802]: 2025-10-02 12:02:05.012 2 DEBUG oslo_concurrency.lockutils [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:02:05 compute-0 nova_compute[257802]: 2025-10-02 12:02:05.017 2 INFO nova.virt.libvirt.driver [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Oct 02 12:02:05 compute-0 virtqemud[257280]: Domain id=9 name='instance-0000000c' uuid=e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7 is tainted: custom-monitor
Oct 02 12:02:05 compute-0 nova_compute[257802]: 2025-10-02 12:02:05.060 2 INFO nova.scheduler.client.report [None req-5e224ef5-cc29-41f0-b334-f745cb14b4d7 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Deleted allocations for instance cee549ac-63b2-4eed-b8c5-0bd0948a95d5
Oct 02 12:02:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:05.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:05 compute-0 nova_compute[257802]: 2025-10-02 12:02:05.180 2 DEBUG oslo_concurrency.lockutils [None req-5e224ef5-cc29-41f0-b334-f745cb14b4d7 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "cee549ac-63b2-4eed-b8c5-0bd0948a95d5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:02:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:05.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:05 compute-0 ovn_controller[148183]: 2025-10-02T12:02:05Z|00091|binding|INFO|Releasing lport 5a25c40a-77b7-400c-afc3-f6cb920420cb from this chassis (sb_readonly=0)
Oct 02 12:02:05 compute-0 ovn_controller[148183]: 2025-10-02T12:02:05Z|00092|binding|INFO|Releasing lport 2d320c71-91bf-44ba-b753-c9807cdc7fdb from this chassis (sb_readonly=0)
Oct 02 12:02:05 compute-0 nova_compute[257802]: 2025-10-02 12:02:05.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:05 compute-0 nova_compute[257802]: 2025-10-02 12:02:05.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:05 compute-0 nova_compute[257802]: 2025-10-02 12:02:05.659 2 DEBUG oslo_concurrency.lockutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Acquiring lock "0146a67d-fbd8-4085-92f6-28a147c85dce" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:02:05 compute-0 nova_compute[257802]: 2025-10-02 12:02:05.660 2 DEBUG oslo_concurrency.lockutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "0146a67d-fbd8-4085-92f6-28a147c85dce" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:02:05 compute-0 ceph-mon[73607]: osdmap e147: 3 total, 3 up, 3 in
Oct 02 12:02:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1013751851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:02:05 compute-0 ceph-mon[73607]: pgmap v1051: 305 pgs: 305 active+clean; 461 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 853 KiB/s rd, 5.1 MiB/s wr, 195 op/s
Oct 02 12:02:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1882819893' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:02:05 compute-0 nova_compute[257802]: 2025-10-02 12:02:05.693 2 DEBUG nova.compute.manager [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:02:05 compute-0 nova_compute[257802]: 2025-10-02 12:02:05.804 2 DEBUG oslo_concurrency.lockutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:02:05 compute-0 nova_compute[257802]: 2025-10-02 12:02:05.805 2 DEBUG oslo_concurrency.lockutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:02:05 compute-0 nova_compute[257802]: 2025-10-02 12:02:05.810 2 DEBUG nova.virt.hardware [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:02:05 compute-0 nova_compute[257802]: 2025-10-02 12:02:05.811 2 INFO nova.compute.claims [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:02:05 compute-0 nova_compute[257802]: 2025-10-02 12:02:05.956 2 DEBUG oslo_concurrency.processutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:02:06 compute-0 nova_compute[257802]: 2025-10-02 12:02:06.029 2 INFO nova.virt.libvirt.driver [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Oct 02 12:02:06 compute-0 nova_compute[257802]: 2025-10-02 12:02:06.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:02:06 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3910483981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:02:06 compute-0 nova_compute[257802]: 2025-10-02 12:02:06.365 2 DEBUG oslo_concurrency.processutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:02:06 compute-0 nova_compute[257802]: 2025-10-02 12:02:06.370 2 DEBUG nova.compute.provider_tree [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:02:06 compute-0 nova_compute[257802]: 2025-10-02 12:02:06.402 2 DEBUG nova.scheduler.client.report [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:02:06 compute-0 nova_compute[257802]: 2025-10-02 12:02:06.445 2 DEBUG oslo_concurrency.lockutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:02:06 compute-0 nova_compute[257802]: 2025-10-02 12:02:06.446 2 DEBUG nova.compute.manager [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:02:06 compute-0 nova_compute[257802]: 2025-10-02 12:02:06.516 2 DEBUG nova.compute.manager [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:02:06 compute-0 nova_compute[257802]: 2025-10-02 12:02:06.517 2 DEBUG nova.network.neutron [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:02:06 compute-0 nova_compute[257802]: 2025-10-02 12:02:06.604 2 INFO nova.virt.libvirt.driver [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:02:06 compute-0 nova_compute[257802]: 2025-10-02 12:02:06.662 2 DEBUG nova.compute.manager [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:02:06 compute-0 nova_compute[257802]: 2025-10-02 12:02:06.797 2 DEBUG nova.compute.manager [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:02:06 compute-0 nova_compute[257802]: 2025-10-02 12:02:06.798 2 DEBUG nova.virt.libvirt.driver [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:02:06 compute-0 nova_compute[257802]: 2025-10-02 12:02:06.798 2 INFO nova.virt.libvirt.driver [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Creating image(s)
Oct 02 12:02:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 443 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 838 KiB/s rd, 5.1 MiB/s wr, 192 op/s
Oct 02 12:02:06 compute-0 nova_compute[257802]: 2025-10-02 12:02:06.947 2 DEBUG nova.storage.rbd_utils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] rbd image 0146a67d-fbd8-4085-92f6-28a147c85dce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:02:06 compute-0 nova_compute[257802]: 2025-10-02 12:02:06.973 2 DEBUG nova.storage.rbd_utils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] rbd image 0146a67d-fbd8-4085-92f6-28a147c85dce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:02:06 compute-0 nova_compute[257802]: 2025-10-02 12:02:06.994 2 DEBUG nova.storage.rbd_utils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] rbd image 0146a67d-fbd8-4085-92f6-28a147c85dce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:02:06 compute-0 nova_compute[257802]: 2025-10-02 12:02:06.997 2 DEBUG oslo_concurrency.processutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:02:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2563263679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:02:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3910483981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:02:07 compute-0 nova_compute[257802]: 2025-10-02 12:02:07.034 2 INFO nova.virt.libvirt.driver [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Oct 02 12:02:07 compute-0 nova_compute[257802]: 2025-10-02 12:02:07.039 2 DEBUG nova.compute.manager [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:02:07 compute-0 nova_compute[257802]: 2025-10-02 12:02:07.058 2 DEBUG oslo_concurrency.processutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:02:07 compute-0 nova_compute[257802]: 2025-10-02 12:02:07.059 2 DEBUG oslo_concurrency.lockutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:02:07 compute-0 nova_compute[257802]: 2025-10-02 12:02:07.060 2 DEBUG oslo_concurrency.lockutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:02:07 compute-0 nova_compute[257802]: 2025-10-02 12:02:07.060 2 DEBUG oslo_concurrency.lockutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:02:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:07.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:07 compute-0 nova_compute[257802]: 2025-10-02 12:02:07.120 2 DEBUG nova.storage.rbd_utils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] rbd image 0146a67d-fbd8-4085-92f6-28a147c85dce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:02:07 compute-0 nova_compute[257802]: 2025-10-02 12:02:07.123 2 DEBUG oslo_concurrency.processutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 0146a67d-fbd8-4085-92f6-28a147c85dce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:02:07 compute-0 nova_compute[257802]: 2025-10-02 12:02:07.148 2 DEBUG nova.objects.instance [None req-24f8d1fd-a587-4259-a5b4-1cfd1c3a09fd c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:02:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:07.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:07 compute-0 nova_compute[257802]: 2025-10-02 12:02:07.374 2 DEBUG nova.policy [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f4a02a1717144da38c573ce51c727de8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b6a5858e0d184dd184a3291b74794c14', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:02:07 compute-0 nova_compute[257802]: 2025-10-02 12:02:07.444 2 DEBUG oslo_concurrency.processutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 0146a67d-fbd8-4085-92f6-28a147c85dce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.321s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:02:07 compute-0 nova_compute[257802]: 2025-10-02 12:02:07.514 2 DEBUG nova.storage.rbd_utils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] resizing rbd image 0146a67d-fbd8-4085-92f6-28a147c85dce_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:02:07 compute-0 nova_compute[257802]: 2025-10-02 12:02:07.698 2 DEBUG nova.objects.instance [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lazy-loading 'migration_context' on Instance uuid 0146a67d-fbd8-4085-92f6-28a147c85dce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:02:07 compute-0 nova_compute[257802]: 2025-10-02 12:02:07.735 2 DEBUG nova.storage.rbd_utils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] rbd image 0146a67d-fbd8-4085-92f6-28a147c85dce_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:02:07 compute-0 nova_compute[257802]: 2025-10-02 12:02:07.758 2 DEBUG nova.storage.rbd_utils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] rbd image 0146a67d-fbd8-4085-92f6-28a147c85dce_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:02:07 compute-0 nova_compute[257802]: 2025-10-02 12:02:07.761 2 DEBUG oslo_concurrency.lockutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:02:07 compute-0 nova_compute[257802]: 2025-10-02 12:02:07.762 2 DEBUG oslo_concurrency.lockutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:02:07 compute-0 nova_compute[257802]: 2025-10-02 12:02:07.763 2 DEBUG oslo_concurrency.processutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:02:07 compute-0 nova_compute[257802]: 2025-10-02 12:02:07.789 2 DEBUG oslo_concurrency.processutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:02:07 compute-0 nova_compute[257802]: 2025-10-02 12:02:07.790 2 DEBUG oslo_concurrency.processutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:02:07 compute-0 nova_compute[257802]: 2025-10-02 12:02:07.846 2 DEBUG oslo_concurrency.processutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:02:07 compute-0 nova_compute[257802]: 2025-10-02 12:02:07.848 2 DEBUG oslo_concurrency.lockutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:02:07 compute-0 nova_compute[257802]: 2025-10-02 12:02:07.870 2 DEBUG nova.storage.rbd_utils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] rbd image 0146a67d-fbd8-4085-92f6-28a147c85dce_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:02:07 compute-0 nova_compute[257802]: 2025-10-02 12:02:07.874 2 DEBUG oslo_concurrency.processutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 0146a67d-fbd8-4085-92f6-28a147c85dce_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:02:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:02:08 compute-0 ceph-mon[73607]: pgmap v1052: 305 pgs: 305 active+clean; 443 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 838 KiB/s rd, 5.1 MiB/s wr, 192 op/s
Oct 02 12:02:08 compute-0 nova_compute[257802]: 2025-10-02 12:02:08.804 2 DEBUG nova.network.neutron [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Successfully created port: de7debb3-f519-40b8-b5ea-ea2a51eee758 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:02:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 473 MiB data, 523 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.4 MiB/s wr, 217 op/s
Oct 02 12:02:08 compute-0 nova_compute[257802]: 2025-10-02 12:02:08.949 2 DEBUG oslo_concurrency.processutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 0146a67d-fbd8-4085-92f6-28a147c85dce_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:02:09 compute-0 nova_compute[257802]: 2025-10-02 12:02:09.055 2 DEBUG nova.virt.libvirt.driver [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:02:09 compute-0 nova_compute[257802]: 2025-10-02 12:02:09.056 2 DEBUG nova.virt.libvirt.driver [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Ensure instance console log exists: /var/lib/nova/instances/0146a67d-fbd8-4085-92f6-28a147c85dce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:02:09 compute-0 nova_compute[257802]: 2025-10-02 12:02:09.056 2 DEBUG oslo_concurrency.lockutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:02:09 compute-0 nova_compute[257802]: 2025-10-02 12:02:09.056 2 DEBUG oslo_concurrency.lockutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:02:09 compute-0 nova_compute[257802]: 2025-10-02 12:02:09.057 2 DEBUG oslo_concurrency.lockutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:02:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:09.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:09 compute-0 ceph-mon[73607]: pgmap v1053: 305 pgs: 305 active+clean; 473 MiB data, 523 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.4 MiB/s wr, 217 op/s
Oct 02 12:02:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:09.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:09 compute-0 nova_compute[257802]: 2025-10-02 12:02:09.545 2 DEBUG nova.network.neutron [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Successfully updated port: de7debb3-f519-40b8-b5ea-ea2a51eee758 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:02:09 compute-0 nova_compute[257802]: 2025-10-02 12:02:09.578 2 DEBUG oslo_concurrency.lockutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Acquiring lock "refresh_cache-0146a67d-fbd8-4085-92f6-28a147c85dce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:02:09 compute-0 nova_compute[257802]: 2025-10-02 12:02:09.578 2 DEBUG oslo_concurrency.lockutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Acquired lock "refresh_cache-0146a67d-fbd8-4085-92f6-28a147c85dce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:02:09 compute-0 nova_compute[257802]: 2025-10-02 12:02:09.579 2 DEBUG nova.network.neutron [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:02:09 compute-0 nova_compute[257802]: 2025-10-02 12:02:09.673 2 DEBUG nova.compute.manager [req-f0e1a8e0-cecc-4973-93d1-d6262b725d1e req-3f9f3b50-8b68-4d02-a9b6-424580763a9a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Received event network-changed-de7debb3-f519-40b8-b5ea-ea2a51eee758 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:02:09 compute-0 nova_compute[257802]: 2025-10-02 12:02:09.674 2 DEBUG nova.compute.manager [req-f0e1a8e0-cecc-4973-93d1-d6262b725d1e req-3f9f3b50-8b68-4d02-a9b6-424580763a9a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Refreshing instance network info cache due to event network-changed-de7debb3-f519-40b8-b5ea-ea2a51eee758. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:02:09 compute-0 nova_compute[257802]: 2025-10-02 12:02:09.674 2 DEBUG oslo_concurrency.lockutils [req-f0e1a8e0-cecc-4973-93d1-d6262b725d1e req-3f9f3b50-8b68-4d02-a9b6-424580763a9a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-0146a67d-fbd8-4085-92f6-28a147c85dce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:02:09 compute-0 sudo[271692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:09 compute-0 sudo[271692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:09 compute-0 sudo[271692]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:09 compute-0 sudo[271717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:09 compute-0 sudo[271717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:09 compute-0 sudo[271717]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:10 compute-0 nova_compute[257802]: 2025-10-02 12:02:10.315 2 DEBUG nova.network.neutron [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:02:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1466240660' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:02:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/301315600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:02:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1107844131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:02:10 compute-0 nova_compute[257802]: 2025-10-02 12:02:10.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 492 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.4 MiB/s wr, 231 op/s
Oct 02 12:02:11 compute-0 nova_compute[257802]: 2025-10-02 12:02:11.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:11.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:11.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:11 compute-0 ceph-mon[73607]: pgmap v1054: 305 pgs: 305 active+clean; 492 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.4 MiB/s wr, 231 op/s
Oct 02 12:02:11 compute-0 nova_compute[257802]: 2025-10-02 12:02:11.729 2 DEBUG nova.network.neutron [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Updating instance_info_cache with network_info: [{"id": "de7debb3-f519-40b8-b5ea-ea2a51eee758", "address": "fa:16:3e:65:4c:10", "network": {"id": "797e74ff-2c6b-48fb-807e-b845ad260597", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1853530210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a5858e0d184dd184a3291b74794c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7debb3-f5", "ovs_interfaceid": "de7debb3-f519-40b8-b5ea-ea2a51eee758", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:02:11 compute-0 nova_compute[257802]: 2025-10-02 12:02:11.755 2 DEBUG oslo_concurrency.lockutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Releasing lock "refresh_cache-0146a67d-fbd8-4085-92f6-28a147c85dce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:02:11 compute-0 nova_compute[257802]: 2025-10-02 12:02:11.756 2 DEBUG nova.compute.manager [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Instance network_info: |[{"id": "de7debb3-f519-40b8-b5ea-ea2a51eee758", "address": "fa:16:3e:65:4c:10", "network": {"id": "797e74ff-2c6b-48fb-807e-b845ad260597", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1853530210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a5858e0d184dd184a3291b74794c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7debb3-f5", "ovs_interfaceid": "de7debb3-f519-40b8-b5ea-ea2a51eee758", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:02:11 compute-0 nova_compute[257802]: 2025-10-02 12:02:11.757 2 DEBUG oslo_concurrency.lockutils [req-f0e1a8e0-cecc-4973-93d1-d6262b725d1e req-3f9f3b50-8b68-4d02-a9b6-424580763a9a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-0146a67d-fbd8-4085-92f6-28a147c85dce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:02:11 compute-0 nova_compute[257802]: 2025-10-02 12:02:11.757 2 DEBUG nova.network.neutron [req-f0e1a8e0-cecc-4973-93d1-d6262b725d1e req-3f9f3b50-8b68-4d02-a9b6-424580763a9a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Refreshing network info cache for port de7debb3-f519-40b8-b5ea-ea2a51eee758 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:02:11 compute-0 nova_compute[257802]: 2025-10-02 12:02:11.763 2 DEBUG nova.virt.libvirt.driver [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Start _get_guest_xml network_info=[{"id": "de7debb3-f519-40b8-b5ea-ea2a51eee758", "address": "fa:16:3e:65:4c:10", "network": {"id": "797e74ff-2c6b-48fb-807e-b845ad260597", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1853530210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a5858e0d184dd184a3291b74794c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7debb3-f5", "ovs_interfaceid": "de7debb3-f519-40b8-b5ea-ea2a51eee758", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [{'encrypted': False, 'size': 1, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:02:11 compute-0 nova_compute[257802]: 2025-10-02 12:02:11.770 2 WARNING nova.virt.libvirt.driver [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:02:11 compute-0 nova_compute[257802]: 2025-10-02 12:02:11.775 2 DEBUG nova.virt.libvirt.host [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:02:11 compute-0 nova_compute[257802]: 2025-10-02 12:02:11.776 2 DEBUG nova.virt.libvirt.host [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:02:11 compute-0 nova_compute[257802]: 2025-10-02 12:02:11.787 2 DEBUG nova.virt.libvirt.host [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:02:11 compute-0 nova_compute[257802]: 2025-10-02 12:02:11.787 2 DEBUG nova.virt.libvirt.host [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:02:11 compute-0 nova_compute[257802]: 2025-10-02 12:02:11.788 2 DEBUG nova.virt.libvirt.driver [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:02:11 compute-0 nova_compute[257802]: 2025-10-02 12:02:11.789 2 DEBUG nova.virt.hardware [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:00:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={hw_rng:allowed='True'},flavorid='1208273217',id=26,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_1-1073647524',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:02:11 compute-0 nova_compute[257802]: 2025-10-02 12:02:11.789 2 DEBUG nova.virt.hardware [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:02:11 compute-0 nova_compute[257802]: 2025-10-02 12:02:11.790 2 DEBUG nova.virt.hardware [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:02:11 compute-0 nova_compute[257802]: 2025-10-02 12:02:11.790 2 DEBUG nova.virt.hardware [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:02:11 compute-0 nova_compute[257802]: 2025-10-02 12:02:11.791 2 DEBUG nova.virt.hardware [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:02:11 compute-0 nova_compute[257802]: 2025-10-02 12:02:11.791 2 DEBUG nova.virt.hardware [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:02:11 compute-0 nova_compute[257802]: 2025-10-02 12:02:11.791 2 DEBUG nova.virt.hardware [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:02:11 compute-0 nova_compute[257802]: 2025-10-02 12:02:11.792 2 DEBUG nova.virt.hardware [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:02:11 compute-0 nova_compute[257802]: 2025-10-02 12:02:11.792 2 DEBUG nova.virt.hardware [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:02:11 compute-0 nova_compute[257802]: 2025-10-02 12:02:11.792 2 DEBUG nova.virt.hardware [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:02:11 compute-0 nova_compute[257802]: 2025-10-02 12:02:11.793 2 DEBUG nova.virt.hardware [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:02:11 compute-0 nova_compute[257802]: 2025-10-02 12:02:11.796 2 DEBUG oslo_concurrency.processutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:02:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:02:12 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/519647794' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:02:12 compute-0 nova_compute[257802]: 2025-10-02 12:02:12.261 2 DEBUG oslo_concurrency.processutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:02:12 compute-0 nova_compute[257802]: 2025-10-02 12:02:12.263 2 DEBUG oslo_concurrency.processutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:02:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Oct 02 12:02:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/519647794' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:02:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Oct 02 12:02:12 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Oct 02 12:02:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:02:12 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/576422495' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:02:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:02:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:02:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:02:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:02:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:02:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:02:12 compute-0 nova_compute[257802]: 2025-10-02 12:02:12.679 2 DEBUG oslo_concurrency.processutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:02:12 compute-0 nova_compute[257802]: 2025-10-02 12:02:12.708 2 DEBUG nova.storage.rbd_utils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] rbd image 0146a67d-fbd8-4085-92f6-28a147c85dce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:02:12 compute-0 nova_compute[257802]: 2025-10-02 12:02:12.711 2 DEBUG oslo_concurrency.processutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:02:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 492 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.6 MiB/s wr, 166 op/s
Oct 02 12:02:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:02:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:02:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:13.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:02:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:02:13 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2078559187' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.139 2 DEBUG oslo_concurrency.processutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.141 2 DEBUG nova.virt.libvirt.vif [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:02:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1557393845',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1557393845',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(26),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1557393845',id=14,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=26,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBABlpYtD7rDfo5edCO3VO05L7/og1LArqb6dlvrZJFkfLxvT/a212dNPUwff9OZOkCSB3EGYGqoNRjJJC/KPkC0Po6hJhudoH+ymL6NUy6s9m4GPkIqwcNKYTuBlpjjxTw==',key_name='tempest-keypair-199584473',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b6a5858e0d184dd184a3291b74794c14',ramdisk_id='',reservation_id='r-fyt47mp4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1982850532',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1982850532-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:02:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f4a02a1717144da38c573ce51c727de8',uuid=0146a67d-fbd8-4085-92f6-28a147c85dce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "de7debb3-f519-40b8-b5ea-ea2a51eee758", "address": "fa:16:3e:65:4c:10", "network": {"id": "797e74ff-2c6b-48fb-807e-b845ad260597", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1853530210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a5858e0d184dd184a3291b74794c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7debb3-f5", "ovs_interfaceid": "de7debb3-f519-40b8-b5ea-ea2a51eee758", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.142 2 DEBUG nova.network.os_vif_util [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Converting VIF {"id": "de7debb3-f519-40b8-b5ea-ea2a51eee758", "address": "fa:16:3e:65:4c:10", "network": {"id": "797e74ff-2c6b-48fb-807e-b845ad260597", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1853530210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a5858e0d184dd184a3291b74794c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7debb3-f5", "ovs_interfaceid": "de7debb3-f519-40b8-b5ea-ea2a51eee758", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.143 2 DEBUG nova.network.os_vif_util [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:4c:10,bridge_name='br-int',has_traffic_filtering=True,id=de7debb3-f519-40b8-b5ea-ea2a51eee758,network=Network(797e74ff-2c6b-48fb-807e-b845ad260597),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapde7debb3-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.144 2 DEBUG nova.objects.instance [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0146a67d-fbd8-4085-92f6-28a147c85dce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.179 2 DEBUG nova.virt.libvirt.driver [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:02:13 compute-0 nova_compute[257802]:   <uuid>0146a67d-fbd8-4085-92f6-28a147c85dce</uuid>
Oct 02 12:02:13 compute-0 nova_compute[257802]:   <name>instance-0000000e</name>
Oct 02 12:02:13 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:02:13 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:02:13 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-1557393845</nova:name>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:02:11</nova:creationTime>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <nova:flavor name="tempest-flavor_with_ephemeral_1-1073647524">
Oct 02 12:02:13 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:02:13 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:02:13 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:02:13 compute-0 nova_compute[257802]:         <nova:ephemeral>1</nova:ephemeral>
Oct 02 12:02:13 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:02:13 compute-0 nova_compute[257802]:         <nova:user uuid="f4a02a1717144da38c573ce51c727de8">tempest-ServersWithSpecificFlavorTestJSON-1982850532-project-member</nova:user>
Oct 02 12:02:13 compute-0 nova_compute[257802]:         <nova:project uuid="b6a5858e0d184dd184a3291b74794c14">tempest-ServersWithSpecificFlavorTestJSON-1982850532</nova:project>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:02:13 compute-0 nova_compute[257802]:         <nova:port uuid="de7debb3-f519-40b8-b5ea-ea2a51eee758">
Oct 02 12:02:13 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:02:13 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:02:13 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <system>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <entry name="serial">0146a67d-fbd8-4085-92f6-28a147c85dce</entry>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <entry name="uuid">0146a67d-fbd8-4085-92f6-28a147c85dce</entry>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     </system>
Oct 02 12:02:13 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:02:13 compute-0 nova_compute[257802]:   <os>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:   </os>
Oct 02 12:02:13 compute-0 nova_compute[257802]:   <features>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:   </features>
Oct 02 12:02:13 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:02:13 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:02:13 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/0146a67d-fbd8-4085-92f6-28a147c85dce_disk">
Oct 02 12:02:13 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       </source>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:02:13 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/0146a67d-fbd8-4085-92f6-28a147c85dce_disk.eph0">
Oct 02 12:02:13 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       </source>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:02:13 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <target dev="vdb" bus="virtio"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/0146a67d-fbd8-4085-92f6-28a147c85dce_disk.config">
Oct 02 12:02:13 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       </source>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:02:13 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:65:4c:10"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <target dev="tapde7debb3-f5"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/0146a67d-fbd8-4085-92f6-28a147c85dce/console.log" append="off"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <video>
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     </video>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:02:13 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:02:13 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:02:13 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:02:13 compute-0 nova_compute[257802]: </domain>
Oct 02 12:02:13 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.181 2 DEBUG nova.compute.manager [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Preparing to wait for external event network-vif-plugged-de7debb3-f519-40b8-b5ea-ea2a51eee758 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.181 2 DEBUG oslo_concurrency.lockutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Acquiring lock "0146a67d-fbd8-4085-92f6-28a147c85dce-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.182 2 DEBUG oslo_concurrency.lockutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "0146a67d-fbd8-4085-92f6-28a147c85dce-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.182 2 DEBUG oslo_concurrency.lockutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "0146a67d-fbd8-4085-92f6-28a147c85dce-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.183 2 DEBUG nova.virt.libvirt.vif [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:02:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1557393845',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1557393845',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(26),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1557393845',id=14,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=26,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBABlpYtD7rDfo5edCO3VO05L7/og1LArqb6dlvrZJFkfLxvT/a212dNPUwff9OZOkCSB3EGYGqoNRjJJC/KPkC0Po6hJhudoH+ymL6NUy6s9m4GPkIqwcNKYTuBlpjjxTw==',key_name='tempest-keypair-199584473',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b6a5858e0d184dd184a3291b74794c14',ramdisk_id='',reservation_id='r-fyt47mp4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1982850532',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1982850532-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:02:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f4a02a1717144da38c573ce51c727de8',uuid=0146a67d-fbd8-4085-92f6-28a147c85dce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "de7debb3-f519-40b8-b5ea-ea2a51eee758", "address": "fa:16:3e:65:4c:10", "network": {"id": "797e74ff-2c6b-48fb-807e-b845ad260597", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1853530210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a5858e0d184dd184a3291b74794c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7debb3-f5", "ovs_interfaceid": "de7debb3-f519-40b8-b5ea-ea2a51eee758", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.183 2 DEBUG nova.network.os_vif_util [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Converting VIF {"id": "de7debb3-f519-40b8-b5ea-ea2a51eee758", "address": "fa:16:3e:65:4c:10", "network": {"id": "797e74ff-2c6b-48fb-807e-b845ad260597", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1853530210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a5858e0d184dd184a3291b74794c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7debb3-f5", "ovs_interfaceid": "de7debb3-f519-40b8-b5ea-ea2a51eee758", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.184 2 DEBUG nova.network.os_vif_util [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:4c:10,bridge_name='br-int',has_traffic_filtering=True,id=de7debb3-f519-40b8-b5ea-ea2a51eee758,network=Network(797e74ff-2c6b-48fb-807e-b845ad260597),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapde7debb3-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.185 2 DEBUG os_vif [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:4c:10,bridge_name='br-int',has_traffic_filtering=True,id=de7debb3-f519-40b8-b5ea-ea2a51eee758,network=Network(797e74ff-2c6b-48fb-807e-b845ad260597),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapde7debb3-f5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.186 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.187 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.190 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapde7debb3-f5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.191 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapde7debb3-f5, col_values=(('external_ids', {'iface-id': 'de7debb3-f519-40b8-b5ea-ea2a51eee758', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:4c:10', 'vm-uuid': '0146a67d-fbd8-4085-92f6-28a147c85dce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:13 compute-0 NetworkManager[44987]: <info>  [1759406533.1932] manager: (tapde7debb3-f5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.201 2 INFO os_vif [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:4c:10,bridge_name='br-int',has_traffic_filtering=True,id=de7debb3-f519-40b8-b5ea-ea2a51eee758,network=Network(797e74ff-2c6b-48fb-807e-b845ad260597),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapde7debb3-f5')
Oct 02 12:02:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:13.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.344 2 DEBUG nova.virt.libvirt.driver [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.345 2 DEBUG nova.virt.libvirt.driver [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.345 2 DEBUG nova.virt.libvirt.driver [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.345 2 DEBUG nova.virt.libvirt.driver [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] No VIF found with MAC fa:16:3e:65:4c:10, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.345 2 INFO nova.virt.libvirt.driver [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Using config drive
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.367 2 DEBUG nova.storage.rbd_utils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] rbd image 0146a67d-fbd8-4085-92f6-28a147c85dce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:02:13 compute-0 ceph-mon[73607]: osdmap e148: 3 total, 3 up, 3 in
Oct 02 12:02:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/576422495' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:02:13 compute-0 ceph-mon[73607]: pgmap v1056: 305 pgs: 305 active+clean; 492 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.6 MiB/s wr, 166 op/s
Oct 02 12:02:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3648740956' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:02:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2078559187' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.741 2 DEBUG nova.network.neutron [req-f0e1a8e0-cecc-4973-93d1-d6262b725d1e req-3f9f3b50-8b68-4d02-a9b6-424580763a9a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Updated VIF entry in instance network info cache for port de7debb3-f519-40b8-b5ea-ea2a51eee758. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.742 2 DEBUG nova.network.neutron [req-f0e1a8e0-cecc-4973-93d1-d6262b725d1e req-3f9f3b50-8b68-4d02-a9b6-424580763a9a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Updating instance_info_cache with network_info: [{"id": "de7debb3-f519-40b8-b5ea-ea2a51eee758", "address": "fa:16:3e:65:4c:10", "network": {"id": "797e74ff-2c6b-48fb-807e-b845ad260597", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1853530210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a5858e0d184dd184a3291b74794c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7debb3-f5", "ovs_interfaceid": "de7debb3-f519-40b8-b5ea-ea2a51eee758", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.757 2 DEBUG oslo_concurrency.lockutils [req-f0e1a8e0-cecc-4973-93d1-d6262b725d1e req-3f9f3b50-8b68-4d02-a9b6-424580763a9a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-0146a67d-fbd8-4085-92f6-28a147c85dce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.863 2 INFO nova.virt.libvirt.driver [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Creating config drive at /var/lib/nova/instances/0146a67d-fbd8-4085-92f6-28a147c85dce/disk.config
Oct 02 12:02:13 compute-0 nova_compute[257802]: 2025-10-02 12:02:13.871 2 DEBUG oslo_concurrency.processutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0146a67d-fbd8-4085-92f6-28a147c85dce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxkoifc1c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:02:14 compute-0 nova_compute[257802]: 2025-10-02 12:02:14.004 2 DEBUG oslo_concurrency.processutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0146a67d-fbd8-4085-92f6-28a147c85dce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxkoifc1c" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:02:14 compute-0 nova_compute[257802]: 2025-10-02 12:02:14.047 2 DEBUG nova.storage.rbd_utils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] rbd image 0146a67d-fbd8-4085-92f6-28a147c85dce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:02:14 compute-0 nova_compute[257802]: 2025-10-02 12:02:14.050 2 DEBUG oslo_concurrency.processutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0146a67d-fbd8-4085-92f6-28a147c85dce/disk.config 0146a67d-fbd8-4085-92f6-28a147c85dce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:02:14 compute-0 nova_compute[257802]: 2025-10-02 12:02:14.271 2 DEBUG oslo_concurrency.processutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0146a67d-fbd8-4085-92f6-28a147c85dce/disk.config 0146a67d-fbd8-4085-92f6-28a147c85dce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:02:14 compute-0 nova_compute[257802]: 2025-10-02 12:02:14.271 2 INFO nova.virt.libvirt.driver [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Deleting local config drive /var/lib/nova/instances/0146a67d-fbd8-4085-92f6-28a147c85dce/disk.config because it was imported into RBD.
Oct 02 12:02:14 compute-0 kernel: tapde7debb3-f5: entered promiscuous mode
Oct 02 12:02:14 compute-0 NetworkManager[44987]: <info>  [1759406534.3300] manager: (tapde7debb3-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Oct 02 12:02:14 compute-0 nova_compute[257802]: 2025-10-02 12:02:14.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:14 compute-0 ovn_controller[148183]: 2025-10-02T12:02:14Z|00093|binding|INFO|Claiming lport de7debb3-f519-40b8-b5ea-ea2a51eee758 for this chassis.
Oct 02 12:02:14 compute-0 ovn_controller[148183]: 2025-10-02T12:02:14Z|00094|binding|INFO|de7debb3-f519-40b8-b5ea-ea2a51eee758: Claiming fa:16:3e:65:4c:10 10.100.0.3
Oct 02 12:02:14 compute-0 nova_compute[257802]: 2025-10-02 12:02:14.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:14.351 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:4c:10 10.100.0.3'], port_security=['fa:16:3e:65:4c:10 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0146a67d-fbd8-4085-92f6-28a147c85dce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-797e74ff-2c6b-48fb-807e-b845ad260597', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b6a5858e0d184dd184a3291b74794c14', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1d7d59e9-1a48-4c19-9701-fea254de36f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0288ec27-954c-4f05-9e09-da71a7e0f954, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=de7debb3-f519-40b8-b5ea-ea2a51eee758) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:14.353 158261 INFO neutron.agent.ovn.metadata.agent [-] Port de7debb3-f519-40b8-b5ea-ea2a51eee758 in datapath 797e74ff-2c6b-48fb-807e-b845ad260597 bound to our chassis
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:14.357 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 797e74ff-2c6b-48fb-807e-b845ad260597
Oct 02 12:02:14 compute-0 systemd-udevd[271900]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:14.375 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[de1fab0a-7b11-42f1-a24b-2720eb24eefc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:14.376 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap797e74ff-21 in ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:14.379 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap797e74ff-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:14.380 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6a3b2d34-228b-4575-8513-a5a895f834db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:14.381 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ba47ccb0-b3b9-474f-8b51-b76451cf7175]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:14 compute-0 systemd-machined[211836]: New machine qemu-10-instance-0000000e.
Oct 02 12:02:14 compute-0 NetworkManager[44987]: <info>  [1759406534.3947] device (tapde7debb3-f5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:02:14 compute-0 NetworkManager[44987]: <info>  [1759406534.3955] device (tapde7debb3-f5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:14.396 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[9f3ec85c-8e73-42d6-82a4-2adc2858f69f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:14 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000e.
Oct 02 12:02:14 compute-0 nova_compute[257802]: 2025-10-02 12:02:14.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:14 compute-0 ovn_controller[148183]: 2025-10-02T12:02:14Z|00095|binding|INFO|Setting lport de7debb3-f519-40b8-b5ea-ea2a51eee758 ovn-installed in OVS
Oct 02 12:02:14 compute-0 ovn_controller[148183]: 2025-10-02T12:02:14Z|00096|binding|INFO|Setting lport de7debb3-f519-40b8-b5ea-ea2a51eee758 up in Southbound
Oct 02 12:02:14 compute-0 nova_compute[257802]: 2025-10-02 12:02:14.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:14.420 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[bd0f101b-b3f2-4515-899d-bbf29a0e44e2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:14.454 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[ca3119a9-8c3e-444f-9145-856f60f52df1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:14 compute-0 NetworkManager[44987]: <info>  [1759406534.4596] manager: (tap797e74ff-20): new Veth device (/org/freedesktop/NetworkManager/Devices/54)
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:14.458 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[63915c78-2388-488f-9cad-15e5664cc326]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:14 compute-0 systemd-udevd[271907]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:14.495 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[f12abc43-f0ea-4f58-8f15-4bb6f6372a83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:14.497 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[5376db2b-8819-4d5b-b955-f1fe611217e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:14 compute-0 NetworkManager[44987]: <info>  [1759406534.5195] device (tap797e74ff-20): carrier: link connected
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:14.527 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[ca558a8e-7302-4a01-be71-910a9e68085b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:14.542 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f1e7a22c-164e-4948-b896-721d817a0e58]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap797e74ff-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:79:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460214, 'reachable_time': 31868, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271936, 'error': None, 'target': 'ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:14.557 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b29c8cdd-0194-4f60-aa4c-3b57655434b0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefc:79db'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460214, 'tstamp': 460214}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271937, 'error': None, 'target': 'ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:14.570 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[37449691-54df-49d0-bb61-e136d8ccf542]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap797e74ff-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:79:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460214, 'reachable_time': 31868, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271938, 'error': None, 'target': 'ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:14.602 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[dc7bf7c2-5ce4-4808-9548-b9f7b32fe825]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:14.667 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[93472d58-0582-4e4e-87c1-6a9b925aa307]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:14.668 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap797e74ff-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:14.668 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:14.669 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap797e74ff-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:02:14 compute-0 nova_compute[257802]: 2025-10-02 12:02:14.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:14 compute-0 kernel: tap797e74ff-20: entered promiscuous mode
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:14.672 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap797e74ff-20, col_values=(('external_ids', {'iface-id': '9ae39c1e-f098-4856-80de-d86e2593ff3e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:02:14 compute-0 ovn_controller[148183]: 2025-10-02T12:02:14Z|00097|binding|INFO|Releasing lport 9ae39c1e-f098-4856-80de-d86e2593ff3e from this chassis (sb_readonly=0)
Oct 02 12:02:14 compute-0 nova_compute[257802]: 2025-10-02 12:02:14.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:14 compute-0 NetworkManager[44987]: <info>  [1759406534.6751] manager: (tap797e74ff-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Oct 02 12:02:14 compute-0 nova_compute[257802]: 2025-10-02 12:02:14.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:14.690 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/797e74ff-2c6b-48fb-807e-b845ad260597.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/797e74ff-2c6b-48fb-807e-b845ad260597.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:14.690 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cae32917-9c2c-4881-b819-017729b1706b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:14.691 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-797e74ff-2c6b-48fb-807e-b845ad260597
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/797e74ff-2c6b-48fb-807e-b845ad260597.pid.haproxy
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 797e74ff-2c6b-48fb-807e-b845ad260597
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:02:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:14.692 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597', 'env', 'PROCESS_TAG=haproxy-797e74ff-2c6b-48fb-807e-b845ad260597', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/797e74ff-2c6b-48fb-807e-b845ad260597.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:02:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3056966603' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:02:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 492 MiB data, 549 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 178 op/s
Oct 02 12:02:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:15.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:15 compute-0 nova_compute[257802]: 2025-10-02 12:02:15.125 2 DEBUG nova.compute.manager [req-b9ccf0e6-f84d-4ad6-8212-5da9652d2875 req-30789f1a-783c-43b5-be97-0908e65e15a4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Received event network-vif-plugged-de7debb3-f519-40b8-b5ea-ea2a51eee758 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:02:15 compute-0 nova_compute[257802]: 2025-10-02 12:02:15.126 2 DEBUG oslo_concurrency.lockutils [req-b9ccf0e6-f84d-4ad6-8212-5da9652d2875 req-30789f1a-783c-43b5-be97-0908e65e15a4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0146a67d-fbd8-4085-92f6-28a147c85dce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:02:15 compute-0 nova_compute[257802]: 2025-10-02 12:02:15.126 2 DEBUG oslo_concurrency.lockutils [req-b9ccf0e6-f84d-4ad6-8212-5da9652d2875 req-30789f1a-783c-43b5-be97-0908e65e15a4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0146a67d-fbd8-4085-92f6-28a147c85dce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:02:15 compute-0 nova_compute[257802]: 2025-10-02 12:02:15.127 2 DEBUG oslo_concurrency.lockutils [req-b9ccf0e6-f84d-4ad6-8212-5da9652d2875 req-30789f1a-783c-43b5-be97-0908e65e15a4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0146a67d-fbd8-4085-92f6-28a147c85dce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:02:15 compute-0 nova_compute[257802]: 2025-10-02 12:02:15.127 2 DEBUG nova.compute.manager [req-b9ccf0e6-f84d-4ad6-8212-5da9652d2875 req-30789f1a-783c-43b5-be97-0908e65e15a4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Processing event network-vif-plugged-de7debb3-f519-40b8-b5ea-ea2a51eee758 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:02:15 compute-0 podman[271970]: 2025-10-02 12:02:15.110982832 +0000 UTC m=+0.026449274 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:02:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:15.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:15 compute-0 podman[271970]: 2025-10-02 12:02:15.235621946 +0000 UTC m=+0.151088368 container create faac8148b2f54b7defacc25726d5c3a73d7dd66c9e7de6ecbb1aadb890f467fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:02:15 compute-0 systemd[1]: Started libpod-conmon-faac8148b2f54b7defacc25726d5c3a73d7dd66c9e7de6ecbb1aadb890f467fb.scope.
Oct 02 12:02:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:02:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660bc18ccf8b91ae6b1c098331aad250c012377f56576b8198c2961e496f58a5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:15 compute-0 podman[271970]: 2025-10-02 12:02:15.366631318 +0000 UTC m=+0.282097790 container init faac8148b2f54b7defacc25726d5c3a73d7dd66c9e7de6ecbb1aadb890f467fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:02:15 compute-0 podman[271970]: 2025-10-02 12:02:15.374254905 +0000 UTC m=+0.289721327 container start faac8148b2f54b7defacc25726d5c3a73d7dd66c9e7de6ecbb1aadb890f467fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:02:15 compute-0 neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597[271985]: [NOTICE]   (271989) : New worker (271991) forked
Oct 02 12:02:15 compute-0 neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597[271985]: [NOTICE]   (271989) : Loading success.
Oct 02 12:02:15 compute-0 nova_compute[257802]: 2025-10-02 12:02:15.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:15 compute-0 ceph-mon[73607]: pgmap v1057: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 492 MiB data, 549 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 178 op/s
Oct 02 12:02:15 compute-0 nova_compute[257802]: 2025-10-02 12:02:15.985 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406520.9844818, cee549ac-63b2-4eed-b8c5-0bd0948a95d5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:02:15 compute-0 nova_compute[257802]: 2025-10-02 12:02:15.986 2 INFO nova.compute.manager [-] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] VM Stopped (Lifecycle Event)
Oct 02 12:02:16 compute-0 nova_compute[257802]: 2025-10-02 12:02:16.009 2 DEBUG nova.compute.manager [None req-50b14947-f6c7-41c1-8b42-2f6c84692a9b - - - - - -] [instance: cee549ac-63b2-4eed-b8c5-0bd0948a95d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:02:16 compute-0 nova_compute[257802]: 2025-10-02 12:02:16.478 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406536.4782312, 0146a67d-fbd8-4085-92f6-28a147c85dce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:02:16 compute-0 nova_compute[257802]: 2025-10-02 12:02:16.479 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] VM Started (Lifecycle Event)
Oct 02 12:02:16 compute-0 nova_compute[257802]: 2025-10-02 12:02:16.480 2 DEBUG nova.compute.manager [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:02:16 compute-0 nova_compute[257802]: 2025-10-02 12:02:16.483 2 DEBUG nova.virt.libvirt.driver [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:02:16 compute-0 nova_compute[257802]: 2025-10-02 12:02:16.486 2 INFO nova.virt.libvirt.driver [-] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Instance spawned successfully.
Oct 02 12:02:16 compute-0 nova_compute[257802]: 2025-10-02 12:02:16.487 2 DEBUG nova.virt.libvirt.driver [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:02:16 compute-0 nova_compute[257802]: 2025-10-02 12:02:16.501 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:02:16 compute-0 nova_compute[257802]: 2025-10-02 12:02:16.504 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:02:16 compute-0 nova_compute[257802]: 2025-10-02 12:02:16.541 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:02:16 compute-0 nova_compute[257802]: 2025-10-02 12:02:16.541 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406536.4785886, 0146a67d-fbd8-4085-92f6-28a147c85dce => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:02:16 compute-0 nova_compute[257802]: 2025-10-02 12:02:16.541 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] VM Paused (Lifecycle Event)
Oct 02 12:02:16 compute-0 nova_compute[257802]: 2025-10-02 12:02:16.545 2 DEBUG nova.virt.libvirt.driver [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:02:16 compute-0 nova_compute[257802]: 2025-10-02 12:02:16.545 2 DEBUG nova.virt.libvirt.driver [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:02:16 compute-0 nova_compute[257802]: 2025-10-02 12:02:16.546 2 DEBUG nova.virt.libvirt.driver [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:02:16 compute-0 nova_compute[257802]: 2025-10-02 12:02:16.546 2 DEBUG nova.virt.libvirt.driver [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:02:16 compute-0 nova_compute[257802]: 2025-10-02 12:02:16.546 2 DEBUG nova.virt.libvirt.driver [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:02:16 compute-0 nova_compute[257802]: 2025-10-02 12:02:16.546 2 DEBUG nova.virt.libvirt.driver [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:02:16 compute-0 nova_compute[257802]: 2025-10-02 12:02:16.591 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:02:16 compute-0 nova_compute[257802]: 2025-10-02 12:02:16.594 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406536.482853, 0146a67d-fbd8-4085-92f6-28a147c85dce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:02:16 compute-0 nova_compute[257802]: 2025-10-02 12:02:16.594 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] VM Resumed (Lifecycle Event)
Oct 02 12:02:16 compute-0 nova_compute[257802]: 2025-10-02 12:02:16.632 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:02:16 compute-0 nova_compute[257802]: 2025-10-02 12:02:16.637 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:02:16 compute-0 nova_compute[257802]: 2025-10-02 12:02:16.649 2 INFO nova.compute.manager [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Took 9.85 seconds to spawn the instance on the hypervisor.
Oct 02 12:02:16 compute-0 nova_compute[257802]: 2025-10-02 12:02:16.649 2 DEBUG nova.compute.manager [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:02:16 compute-0 nova_compute[257802]: 2025-10-02 12:02:16.668 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:02:16 compute-0 nova_compute[257802]: 2025-10-02 12:02:16.717 2 INFO nova.compute.manager [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Took 10.93 seconds to build instance.
Oct 02 12:02:16 compute-0 nova_compute[257802]: 2025-10-02 12:02:16.733 2 DEBUG oslo_concurrency.lockutils [None req-86647950-b747-4108-9924-f320861cdf10 f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "0146a67d-fbd8-4085-92f6-28a147c85dce" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:02:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 492 MiB data, 549 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 178 op/s
Oct 02 12:02:16 compute-0 ceph-mon[73607]: pgmap v1058: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 492 MiB data, 549 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 178 op/s
Oct 02 12:02:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:17.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:17.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:17 compute-0 nova_compute[257802]: 2025-10-02 12:02:17.245 2 DEBUG nova.compute.manager [req-c2bb7d68-6767-4420-a505-418da09e3b7b req-16efdc5a-b0dc-4f1a-aad6-b2b0a1774541 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Received event network-vif-plugged-de7debb3-f519-40b8-b5ea-ea2a51eee758 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:02:17 compute-0 nova_compute[257802]: 2025-10-02 12:02:17.245 2 DEBUG oslo_concurrency.lockutils [req-c2bb7d68-6767-4420-a505-418da09e3b7b req-16efdc5a-b0dc-4f1a-aad6-b2b0a1774541 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0146a67d-fbd8-4085-92f6-28a147c85dce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:02:17 compute-0 nova_compute[257802]: 2025-10-02 12:02:17.245 2 DEBUG oslo_concurrency.lockutils [req-c2bb7d68-6767-4420-a505-418da09e3b7b req-16efdc5a-b0dc-4f1a-aad6-b2b0a1774541 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0146a67d-fbd8-4085-92f6-28a147c85dce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:02:17 compute-0 nova_compute[257802]: 2025-10-02 12:02:17.246 2 DEBUG oslo_concurrency.lockutils [req-c2bb7d68-6767-4420-a505-418da09e3b7b req-16efdc5a-b0dc-4f1a-aad6-b2b0a1774541 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0146a67d-fbd8-4085-92f6-28a147c85dce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:02:17 compute-0 nova_compute[257802]: 2025-10-02 12:02:17.246 2 DEBUG nova.compute.manager [req-c2bb7d68-6767-4420-a505-418da09e3b7b req-16efdc5a-b0dc-4f1a-aad6-b2b0a1774541 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] No waiting events found dispatching network-vif-plugged-de7debb3-f519-40b8-b5ea-ea2a51eee758 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:02:17 compute-0 nova_compute[257802]: 2025-10-02 12:02:17.246 2 WARNING nova.compute.manager [req-c2bb7d68-6767-4420-a505-418da09e3b7b req-16efdc5a-b0dc-4f1a-aad6-b2b0a1774541 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Received unexpected event network-vif-plugged-de7debb3-f519-40b8-b5ea-ea2a51eee758 for instance with vm_state active and task_state None.
Oct 02 12:02:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:02:18 compute-0 nova_compute[257802]: 2025-10-02 12:02:18.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 488 MiB data, 549 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 801 KiB/s wr, 241 op/s
Oct 02 12:02:19 compute-0 ceph-mon[73607]: pgmap v1059: 305 pgs: 305 active+clean; 488 MiB data, 549 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 801 KiB/s wr, 241 op/s
Oct 02 12:02:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:02:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:19.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:02:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:02:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:19.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:02:19 compute-0 nova_compute[257802]: 2025-10-02 12:02:19.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:19 compute-0 NetworkManager[44987]: <info>  [1759406539.6837] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Oct 02 12:02:19 compute-0 NetworkManager[44987]: <info>  [1759406539.6856] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Oct 02 12:02:19 compute-0 nova_compute[257802]: 2025-10-02 12:02:19.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:19 compute-0 ovn_controller[148183]: 2025-10-02T12:02:19Z|00098|binding|INFO|Releasing lport 5a25c40a-77b7-400c-afc3-f6cb920420cb from this chassis (sb_readonly=0)
Oct 02 12:02:19 compute-0 ovn_controller[148183]: 2025-10-02T12:02:19Z|00099|binding|INFO|Releasing lport 2d320c71-91bf-44ba-b753-c9807cdc7fdb from this chassis (sb_readonly=0)
Oct 02 12:02:19 compute-0 ovn_controller[148183]: 2025-10-02T12:02:19Z|00100|binding|INFO|Releasing lport 9ae39c1e-f098-4856-80de-d86e2593ff3e from this chassis (sb_readonly=0)
Oct 02 12:02:19 compute-0 nova_compute[257802]: 2025-10-02 12:02:19.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:20 compute-0 nova_compute[257802]: 2025-10-02 12:02:20.111 2 DEBUG nova.compute.manager [req-fffabbc2-66c3-4ba2-864c-c9cabdc024a7 req-80d5cddf-07b8-4730-beb7-ee9fa5f2ea31 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Received event network-changed-de7debb3-f519-40b8-b5ea-ea2a51eee758 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:02:20 compute-0 nova_compute[257802]: 2025-10-02 12:02:20.112 2 DEBUG nova.compute.manager [req-fffabbc2-66c3-4ba2-864c-c9cabdc024a7 req-80d5cddf-07b8-4730-beb7-ee9fa5f2ea31 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Refreshing instance network info cache due to event network-changed-de7debb3-f519-40b8-b5ea-ea2a51eee758. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:02:20 compute-0 nova_compute[257802]: 2025-10-02 12:02:20.112 2 DEBUG oslo_concurrency.lockutils [req-fffabbc2-66c3-4ba2-864c-c9cabdc024a7 req-80d5cddf-07b8-4730-beb7-ee9fa5f2ea31 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-0146a67d-fbd8-4085-92f6-28a147c85dce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:02:20 compute-0 nova_compute[257802]: 2025-10-02 12:02:20.112 2 DEBUG oslo_concurrency.lockutils [req-fffabbc2-66c3-4ba2-864c-c9cabdc024a7 req-80d5cddf-07b8-4730-beb7-ee9fa5f2ea31 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-0146a67d-fbd8-4085-92f6-28a147c85dce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:02:20 compute-0 nova_compute[257802]: 2025-10-02 12:02:20.112 2 DEBUG nova.network.neutron [req-fffabbc2-66c3-4ba2-864c-c9cabdc024a7 req-80d5cddf-07b8-4730-beb7-ee9fa5f2ea31 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Refreshing network info cache for port de7debb3-f519-40b8-b5ea-ea2a51eee758 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:02:20 compute-0 nova_compute[257802]: 2025-10-02 12:02:20.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 467 MiB data, 522 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 1.1 MiB/s wr, 270 op/s
Oct 02 12:02:20 compute-0 podman[272064]: 2025-10-02 12:02:20.922580865 +0000 UTC m=+0.059586331 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 12:02:21 compute-0 ceph-mon[73607]: pgmap v1060: 305 pgs: 305 active+clean; 467 MiB data, 522 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 1.1 MiB/s wr, 270 op/s
Oct 02 12:02:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:02:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:21.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:02:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:21.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:21 compute-0 nova_compute[257802]: 2025-10-02 12:02:21.826 2 DEBUG nova.network.neutron [req-fffabbc2-66c3-4ba2-864c-c9cabdc024a7 req-80d5cddf-07b8-4730-beb7-ee9fa5f2ea31 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Updated VIF entry in instance network info cache for port de7debb3-f519-40b8-b5ea-ea2a51eee758. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:02:21 compute-0 nova_compute[257802]: 2025-10-02 12:02:21.827 2 DEBUG nova.network.neutron [req-fffabbc2-66c3-4ba2-864c-c9cabdc024a7 req-80d5cddf-07b8-4730-beb7-ee9fa5f2ea31 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Updating instance_info_cache with network_info: [{"id": "de7debb3-f519-40b8-b5ea-ea2a51eee758", "address": "fa:16:3e:65:4c:10", "network": {"id": "797e74ff-2c6b-48fb-807e-b845ad260597", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1853530210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a5858e0d184dd184a3291b74794c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7debb3-f5", "ovs_interfaceid": "de7debb3-f519-40b8-b5ea-ea2a51eee758", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:02:21 compute-0 nova_compute[257802]: 2025-10-02 12:02:21.868 2 DEBUG oslo_concurrency.lockutils [req-fffabbc2-66c3-4ba2-864c-c9cabdc024a7 req-80d5cddf-07b8-4730-beb7-ee9fa5f2ea31 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-0146a67d-fbd8-4085-92f6-28a147c85dce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:02:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 467 MiB data, 522 MiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 1.1 MiB/s wr, 260 op/s
Oct 02 12:02:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2809225506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:02:22 compute-0 ceph-mon[73607]: pgmap v1061: 305 pgs: 305 active+clean; 467 MiB data, 522 MiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 1.1 MiB/s wr, 260 op/s
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:02:23.028871) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406543028953, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 2055, "num_deletes": 254, "total_data_size": 3302755, "memory_usage": 3348992, "flush_reason": "Manual Compaction"}
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406543059401, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 3248815, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22139, "largest_seqno": 24192, "table_properties": {"data_size": 3239813, "index_size": 5496, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19953, "raw_average_key_size": 20, "raw_value_size": 3221253, "raw_average_value_size": 3334, "num_data_blocks": 242, "num_entries": 966, "num_filter_entries": 966, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406368, "oldest_key_time": 1759406368, "file_creation_time": 1759406543, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 30563 microseconds, and 7238 cpu microseconds.
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:02:23.059452) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 3248815 bytes OK
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:02:23.059472) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:02:23.061863) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:02:23.061880) EVENT_LOG_v1 {"time_micros": 1759406543061874, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:02:23.061897) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 3294284, prev total WAL file size 3294284, number of live WAL files 2.
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:02:23.062910) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(3172KB)], [53(7639KB)]
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406543062942, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 11071554, "oldest_snapshot_seqno": -1}
Oct 02 12:02:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:02:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Oct 02 12:02:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:23.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4877 keys, 9040422 bytes, temperature: kUnknown
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406543167937, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 9040422, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9006791, "index_size": 20337, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12229, "raw_key_size": 123173, "raw_average_key_size": 25, "raw_value_size": 8917628, "raw_average_value_size": 1828, "num_data_blocks": 833, "num_entries": 4877, "num_filter_entries": 4877, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759406543, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:02:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:02:23.173208) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 9040422 bytes
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:02:23.182874) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.1 rd, 85.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.5 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(6.2) write-amplify(2.8) OK, records in: 5402, records dropped: 525 output_compression: NoCompression
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:02:23.182930) EVENT_LOG_v1 {"time_micros": 1759406543182903, "job": 28, "event": "compaction_finished", "compaction_time_micros": 105298, "compaction_time_cpu_micros": 19147, "output_level": 6, "num_output_files": 1, "total_output_size": 9040422, "num_input_records": 5402, "num_output_records": 4877, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406543183958, "job": 28, "event": "table_file_deletion", "file_number": 55}
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406543186432, "job": 28, "event": "table_file_deletion", "file_number": 53}
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:02:23.062762) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:02:23.186486) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:02:23.186492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:02:23.186494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:02:23.186496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:02:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:02:23.186499) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:02:23 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Oct 02 12:02:23 compute-0 nova_compute[257802]: 2025-10-02 12:02:23.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:23.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:24 compute-0 ceph-mon[73607]: osdmap e149: 3 total, 3 up, 3 in
Oct 02 12:02:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3779598959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:02:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 409 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 6.3 MiB/s rd, 2.1 MiB/s wr, 269 op/s
Oct 02 12:02:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:25.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:25.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:25 compute-0 nova_compute[257802]: 2025-10-02 12:02:25.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/93389438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:02:25 compute-0 ceph-mon[73607]: pgmap v1063: 305 pgs: 305 active+clean; 409 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 6.3 MiB/s rd, 2.1 MiB/s wr, 269 op/s
Oct 02 12:02:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3379174599' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:02:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3502400963' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:02:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/247468690' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:02:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:26.919 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:02:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:26.920 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:02:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:26.922 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:02:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 388 MiB data, 484 MiB used, 21 GiB / 21 GiB avail; 6.3 MiB/s rd, 2.4 MiB/s wr, 297 op/s
Oct 02 12:02:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:02:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:27.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:02:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:27.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:27 compute-0 ceph-mon[73607]: pgmap v1064: 305 pgs: 305 active+clean; 388 MiB data, 484 MiB used, 21 GiB / 21 GiB avail; 6.3 MiB/s rd, 2.4 MiB/s wr, 297 op/s
Oct 02 12:02:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1048423683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:02:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:02:28 compute-0 nova_compute[257802]: 2025-10-02 12:02:28.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:28 compute-0 nova_compute[257802]: 2025-10-02 12:02:28.683 2 DEBUG oslo_concurrency.lockutils [None req-645c63d9-52ec-428b-9dc0-503147d3e1af 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquiring lock "b4e4932c-8129-4ceb-95ef-3a612ef502f9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:02:28 compute-0 nova_compute[257802]: 2025-10-02 12:02:28.684 2 DEBUG oslo_concurrency.lockutils [None req-645c63d9-52ec-428b-9dc0-503147d3e1af 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "b4e4932c-8129-4ceb-95ef-3a612ef502f9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:02:28 compute-0 nova_compute[257802]: 2025-10-02 12:02:28.684 2 DEBUG oslo_concurrency.lockutils [None req-645c63d9-52ec-428b-9dc0-503147d3e1af 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquiring lock "b4e4932c-8129-4ceb-95ef-3a612ef502f9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:02:28 compute-0 nova_compute[257802]: 2025-10-02 12:02:28.684 2 DEBUG oslo_concurrency.lockutils [None req-645c63d9-52ec-428b-9dc0-503147d3e1af 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "b4e4932c-8129-4ceb-95ef-3a612ef502f9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:02:28 compute-0 nova_compute[257802]: 2025-10-02 12:02:28.684 2 DEBUG oslo_concurrency.lockutils [None req-645c63d9-52ec-428b-9dc0-503147d3e1af 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "b4e4932c-8129-4ceb-95ef-3a612ef502f9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:02:28 compute-0 nova_compute[257802]: 2025-10-02 12:02:28.685 2 INFO nova.compute.manager [None req-645c63d9-52ec-428b-9dc0-503147d3e1af 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Terminating instance
Oct 02 12:02:28 compute-0 nova_compute[257802]: 2025-10-02 12:02:28.686 2 DEBUG oslo_concurrency.lockutils [None req-645c63d9-52ec-428b-9dc0-503147d3e1af 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquiring lock "refresh_cache-b4e4932c-8129-4ceb-95ef-3a612ef502f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:02:28 compute-0 nova_compute[257802]: 2025-10-02 12:02:28.686 2 DEBUG oslo_concurrency.lockutils [None req-645c63d9-52ec-428b-9dc0-503147d3e1af 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquired lock "refresh_cache-b4e4932c-8129-4ceb-95ef-3a612ef502f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:02:28 compute-0 nova_compute[257802]: 2025-10-02 12:02:28.686 2 DEBUG nova.network.neutron [None req-645c63d9-52ec-428b-9dc0-503147d3e1af 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:02:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 387 MiB data, 486 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.8 MiB/s wr, 218 op/s
Oct 02 12:02:29 compute-0 ceph-mon[73607]: pgmap v1065: 305 pgs: 305 active+clean; 387 MiB data, 486 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.8 MiB/s wr, 218 op/s
Oct 02 12:02:29 compute-0 nova_compute[257802]: 2025-10-02 12:02:29.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:02:29 compute-0 nova_compute[257802]: 2025-10-02 12:02:29.097 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:02:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:02:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:29.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:02:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:29.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:29 compute-0 nova_compute[257802]: 2025-10-02 12:02:29.322 2 DEBUG nova.network.neutron [None req-645c63d9-52ec-428b-9dc0-503147d3e1af 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:02:29 compute-0 nova_compute[257802]: 2025-10-02 12:02:29.657 2 DEBUG nova.network.neutron [None req-645c63d9-52ec-428b-9dc0-503147d3e1af 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:02:29 compute-0 nova_compute[257802]: 2025-10-02 12:02:29.682 2 DEBUG oslo_concurrency.lockutils [None req-645c63d9-52ec-428b-9dc0-503147d3e1af 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Releasing lock "refresh_cache-b4e4932c-8129-4ceb-95ef-3a612ef502f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:02:29 compute-0 nova_compute[257802]: 2025-10-02 12:02:29.683 2 DEBUG nova.compute.manager [None req-645c63d9-52ec-428b-9dc0-503147d3e1af 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:02:29 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct 02 12:02:29 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000006.scope: Consumed 18.745s CPU time.
Oct 02 12:02:29 compute-0 systemd-machined[211836]: Machine qemu-3-instance-00000006 terminated.
Oct 02 12:02:29 compute-0 podman[272088]: 2025-10-02 12:02:29.883178188 +0000 UTC m=+0.070948001 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 12:02:29 compute-0 podman[272089]: 2025-10-02 12:02:29.910216605 +0000 UTC m=+0.093003375 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, tcib_managed=true)
Oct 02 12:02:29 compute-0 nova_compute[257802]: 2025-10-02 12:02:29.908 2 INFO nova.virt.libvirt.driver [-] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Instance destroyed successfully.
Oct 02 12:02:29 compute-0 nova_compute[257802]: 2025-10-02 12:02:29.909 2 DEBUG nova.objects.instance [None req-645c63d9-52ec-428b-9dc0-503147d3e1af 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lazy-loading 'resources' on Instance uuid b4e4932c-8129-4ceb-95ef-3a612ef502f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:02:30 compute-0 sudo[272144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:30 compute-0 sudo[272144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:30 compute-0 sudo[272144]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/127373874' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:02:30 compute-0 ovn_controller[148183]: 2025-10-02T12:02:30Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:65:4c:10 10.100.0.3
Oct 02 12:02:30 compute-0 ovn_controller[148183]: 2025-10-02T12:02:30Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:65:4c:10 10.100.0.3
Oct 02 12:02:30 compute-0 sudo[272173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:30 compute-0 sudo[272173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:30 compute-0 sudo[272173]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:30 compute-0 nova_compute[257802]: 2025-10-02 12:02:30.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:02:30 compute-0 nova_compute[257802]: 2025-10-02 12:02:30.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:02:30 compute-0 nova_compute[257802]: 2025-10-02 12:02:30.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:30 compute-0 nova_compute[257802]: 2025-10-02 12:02:30.624 2 INFO nova.virt.libvirt.driver [None req-645c63d9-52ec-428b-9dc0-503147d3e1af 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Deleting instance files /var/lib/nova/instances/b4e4932c-8129-4ceb-95ef-3a612ef502f9_del
Oct 02 12:02:30 compute-0 nova_compute[257802]: 2025-10-02 12:02:30.624 2 INFO nova.virt.libvirt.driver [None req-645c63d9-52ec-428b-9dc0-503147d3e1af 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Deletion of /var/lib/nova/instances/b4e4932c-8129-4ceb-95ef-3a612ef502f9_del complete
Oct 02 12:02:30 compute-0 nova_compute[257802]: 2025-10-02 12:02:30.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:30.717 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:02:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:30.718 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:02:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:30.719 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:02:30 compute-0 nova_compute[257802]: 2025-10-02 12:02:30.724 2 INFO nova.compute.manager [None req-645c63d9-52ec-428b-9dc0-503147d3e1af 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Took 1.04 seconds to destroy the instance on the hypervisor.
Oct 02 12:02:30 compute-0 nova_compute[257802]: 2025-10-02 12:02:30.725 2 DEBUG oslo.service.loopingcall [None req-645c63d9-52ec-428b-9dc0-503147d3e1af 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:02:30 compute-0 nova_compute[257802]: 2025-10-02 12:02:30.725 2 DEBUG nova.compute.manager [-] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:02:30 compute-0 nova_compute[257802]: 2025-10-02 12:02:30.725 2 DEBUG nova.network.neutron [-] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:02:30 compute-0 nova_compute[257802]: 2025-10-02 12:02:30.843 2 DEBUG nova.network.neutron [-] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:02:30 compute-0 nova_compute[257802]: 2025-10-02 12:02:30.890 2 DEBUG nova.network.neutron [-] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:02:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 355 MiB data, 462 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.1 MiB/s wr, 204 op/s
Oct 02 12:02:30 compute-0 nova_compute[257802]: 2025-10-02 12:02:30.932 2 INFO nova.compute.manager [-] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Took 0.21 seconds to deallocate network for instance.
Oct 02 12:02:31 compute-0 nova_compute[257802]: 2025-10-02 12:02:31.024 2 DEBUG oslo_concurrency.lockutils [None req-645c63d9-52ec-428b-9dc0-503147d3e1af 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:02:31 compute-0 nova_compute[257802]: 2025-10-02 12:02:31.025 2 DEBUG oslo_concurrency.lockutils [None req-645c63d9-52ec-428b-9dc0-503147d3e1af 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:02:31 compute-0 ceph-mon[73607]: pgmap v1066: 305 pgs: 305 active+clean; 355 MiB data, 462 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.1 MiB/s wr, 204 op/s
Oct 02 12:02:31 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2871346515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:02:31 compute-0 nova_compute[257802]: 2025-10-02 12:02:31.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:02:31 compute-0 nova_compute[257802]: 2025-10-02 12:02:31.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:02:31 compute-0 nova_compute[257802]: 2025-10-02 12:02:31.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:02:31 compute-0 nova_compute[257802]: 2025-10-02 12:02:31.101 2 DEBUG oslo_concurrency.processutils [None req-645c63d9-52ec-428b-9dc0-503147d3e1af 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:02:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:31.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:31.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:31 compute-0 nova_compute[257802]: 2025-10-02 12:02:31.346 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-0146a67d-fbd8-4085-92f6-28a147c85dce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:02:31 compute-0 nova_compute[257802]: 2025-10-02 12:02:31.346 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-0146a67d-fbd8-4085-92f6-28a147c85dce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:02:31 compute-0 nova_compute[257802]: 2025-10-02 12:02:31.346 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:02:31 compute-0 nova_compute[257802]: 2025-10-02 12:02:31.346 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0146a67d-fbd8-4085-92f6-28a147c85dce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:02:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:02:31 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2159835869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:02:31 compute-0 nova_compute[257802]: 2025-10-02 12:02:31.553 2 DEBUG oslo_concurrency.processutils [None req-645c63d9-52ec-428b-9dc0-503147d3e1af 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:02:31 compute-0 nova_compute[257802]: 2025-10-02 12:02:31.558 2 DEBUG nova.compute.provider_tree [None req-645c63d9-52ec-428b-9dc0-503147d3e1af 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:02:31 compute-0 nova_compute[257802]: 2025-10-02 12:02:31.573 2 DEBUG nova.scheduler.client.report [None req-645c63d9-52ec-428b-9dc0-503147d3e1af 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:02:31 compute-0 nova_compute[257802]: 2025-10-02 12:02:31.593 2 DEBUG oslo_concurrency.lockutils [None req-645c63d9-52ec-428b-9dc0-503147d3e1af 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:02:31 compute-0 nova_compute[257802]: 2025-10-02 12:02:31.626 2 INFO nova.scheduler.client.report [None req-645c63d9-52ec-428b-9dc0-503147d3e1af 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Deleted allocations for instance b4e4932c-8129-4ceb-95ef-3a612ef502f9
Oct 02 12:02:31 compute-0 nova_compute[257802]: 2025-10-02 12:02:31.697 2 DEBUG oslo_concurrency.lockutils [None req-645c63d9-52ec-428b-9dc0-503147d3e1af 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "b4e4932c-8129-4ceb-95ef-3a612ef502f9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:02:32 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2159835869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:02:32 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2568248138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:02:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 355 MiB data, 462 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.1 MiB/s wr, 204 op/s
Oct 02 12:02:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:02:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:33.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:02:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:02:33 compute-0 nova_compute[257802]: 2025-10-02 12:02:33.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:33.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1488732752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:02:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3660646601' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:02:33 compute-0 ceph-mon[73607]: pgmap v1067: 305 pgs: 305 active+clean; 355 MiB data, 462 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.1 MiB/s wr, 204 op/s
Oct 02 12:02:33 compute-0 nova_compute[257802]: 2025-10-02 12:02:33.971 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Updating instance_info_cache with network_info: [{"id": "de7debb3-f519-40b8-b5ea-ea2a51eee758", "address": "fa:16:3e:65:4c:10", "network": {"id": "797e74ff-2c6b-48fb-807e-b845ad260597", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1853530210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a5858e0d184dd184a3291b74794c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7debb3-f5", "ovs_interfaceid": "de7debb3-f519-40b8-b5ea-ea2a51eee758", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:02:33 compute-0 nova_compute[257802]: 2025-10-02 12:02:33.991 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-0146a67d-fbd8-4085-92f6-28a147c85dce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:02:33 compute-0 nova_compute[257802]: 2025-10-02 12:02:33.991 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:02:33 compute-0 nova_compute[257802]: 2025-10-02 12:02:33.991 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:02:33 compute-0 nova_compute[257802]: 2025-10-02 12:02:33.992 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:02:33 compute-0 nova_compute[257802]: 2025-10-02 12:02:33.992 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:02:34 compute-0 nova_compute[257802]: 2025-10-02 12:02:34.016 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:02:34 compute-0 nova_compute[257802]: 2025-10-02 12:02:34.017 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:02:34 compute-0 nova_compute[257802]: 2025-10-02 12:02:34.017 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:02:34 compute-0 nova_compute[257802]: 2025-10-02 12:02:34.017 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:02:34 compute-0 nova_compute[257802]: 2025-10-02 12:02:34.017 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:02:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:02:34 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/329981483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:02:34 compute-0 nova_compute[257802]: 2025-10-02 12:02:34.456 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:02:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3368811766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:02:34 compute-0 podman[272247]: 2025-10-02 12:02:34.582890296 +0000 UTC m=+0.080335502 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:02:34 compute-0 nova_compute[257802]: 2025-10-02 12:02:34.595 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:02:34 compute-0 nova_compute[257802]: 2025-10-02 12:02:34.595 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:02:34 compute-0 nova_compute[257802]: 2025-10-02 12:02:34.598 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:02:34 compute-0 nova_compute[257802]: 2025-10-02 12:02:34.598 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:02:34 compute-0 nova_compute[257802]: 2025-10-02 12:02:34.598 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:02:34 compute-0 nova_compute[257802]: 2025-10-02 12:02:34.769 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:02:34 compute-0 nova_compute[257802]: 2025-10-02 12:02:34.771 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4373MB free_disk=20.846904754638672GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:02:34 compute-0 nova_compute[257802]: 2025-10-02 12:02:34.771 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:02:34 compute-0 nova_compute[257802]: 2025-10-02 12:02:34.771 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:02:34 compute-0 nova_compute[257802]: 2025-10-02 12:02:34.887 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:02:34 compute-0 nova_compute[257802]: 2025-10-02 12:02:34.888 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 0146a67d-fbd8-4085-92f6-28a147c85dce actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:02:34 compute-0 nova_compute[257802]: 2025-10-02 12:02:34.888 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:02:34 compute-0 nova_compute[257802]: 2025-10-02 12:02:34.888 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:02:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 345 MiB data, 468 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.2 MiB/s wr, 319 op/s
Oct 02 12:02:34 compute-0 nova_compute[257802]: 2025-10-02 12:02:34.982 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:02:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:02:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:35.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:02:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:35.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:02:35 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/108407713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:02:35 compute-0 nova_compute[257802]: 2025-10-02 12:02:35.435 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:02:35 compute-0 nova_compute[257802]: 2025-10-02 12:02:35.441 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:02:35 compute-0 nova_compute[257802]: 2025-10-02 12:02:35.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:35 compute-0 nova_compute[257802]: 2025-10-02 12:02:35.481 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:02:35 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/329981483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:02:35 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3481444994' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:02:35 compute-0 ceph-mon[73607]: pgmap v1068: 305 pgs: 305 active+clean; 345 MiB data, 468 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.2 MiB/s wr, 319 op/s
Oct 02 12:02:35 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/906744250' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:02:35 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/108407713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:02:35 compute-0 nova_compute[257802]: 2025-10-02 12:02:35.540 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:02:35 compute-0 nova_compute[257802]: 2025-10-02 12:02:35.541 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:02:36 compute-0 sudo[272296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:36 compute-0 sudo[272296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:36 compute-0 sudo[272296]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:36 compute-0 sudo[272321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:02:36 compute-0 sudo[272321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:36 compute-0 sudo[272321]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:36 compute-0 sudo[272346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:36 compute-0 sudo[272346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:36 compute-0 sudo[272346]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:36 compute-0 sudo[272371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:02:36 compute-0 sudo[272371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:36 compute-0 nova_compute[257802]: 2025-10-02 12:02:36.648 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:02:36 compute-0 nova_compute[257802]: 2025-10-02 12:02:36.648 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:02:36 compute-0 nova_compute[257802]: 2025-10-02 12:02:36.679 2 DEBUG oslo_concurrency.lockutils [None req-0d04e065-b2a7-47a6-9df3-4b97801b2f8e f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Acquiring lock "0146a67d-fbd8-4085-92f6-28a147c85dce" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:02:36 compute-0 nova_compute[257802]: 2025-10-02 12:02:36.679 2 DEBUG oslo_concurrency.lockutils [None req-0d04e065-b2a7-47a6-9df3-4b97801b2f8e f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "0146a67d-fbd8-4085-92f6-28a147c85dce" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:02:36 compute-0 nova_compute[257802]: 2025-10-02 12:02:36.680 2 DEBUG oslo_concurrency.lockutils [None req-0d04e065-b2a7-47a6-9df3-4b97801b2f8e f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Acquiring lock "0146a67d-fbd8-4085-92f6-28a147c85dce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:02:36 compute-0 nova_compute[257802]: 2025-10-02 12:02:36.680 2 DEBUG oslo_concurrency.lockutils [None req-0d04e065-b2a7-47a6-9df3-4b97801b2f8e f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "0146a67d-fbd8-4085-92f6-28a147c85dce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:02:36 compute-0 nova_compute[257802]: 2025-10-02 12:02:36.680 2 DEBUG oslo_concurrency.lockutils [None req-0d04e065-b2a7-47a6-9df3-4b97801b2f8e f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "0146a67d-fbd8-4085-92f6-28a147c85dce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:02:36 compute-0 nova_compute[257802]: 2025-10-02 12:02:36.681 2 INFO nova.compute.manager [None req-0d04e065-b2a7-47a6-9df3-4b97801b2f8e f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Terminating instance
Oct 02 12:02:36 compute-0 nova_compute[257802]: 2025-10-02 12:02:36.682 2 DEBUG nova.compute.manager [None req-0d04e065-b2a7-47a6-9df3-4b97801b2f8e f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:02:36 compute-0 kernel: tapde7debb3-f5 (unregistering): left promiscuous mode
Oct 02 12:02:36 compute-0 NetworkManager[44987]: <info>  [1759406556.7881] device (tapde7debb3-f5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:02:36 compute-0 nova_compute[257802]: 2025-10-02 12:02:36.801 2 DEBUG nova.virt.libvirt.driver [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Creating tmpfile /var/lib/nova/instances/tmp69s6w50k to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Oct 02 12:02:36 compute-0 sudo[272371]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:36 compute-0 nova_compute[257802]: 2025-10-02 12:02:36.802 2 DEBUG nova.compute.manager [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp69s6w50k',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Oct 02 12:02:36 compute-0 ovn_controller[148183]: 2025-10-02T12:02:36Z|00101|binding|INFO|Releasing lport de7debb3-f519-40b8-b5ea-ea2a51eee758 from this chassis (sb_readonly=0)
Oct 02 12:02:36 compute-0 ovn_controller[148183]: 2025-10-02T12:02:36Z|00102|binding|INFO|Setting lport de7debb3-f519-40b8-b5ea-ea2a51eee758 down in Southbound
Oct 02 12:02:36 compute-0 ovn_controller[148183]: 2025-10-02T12:02:36Z|00103|binding|INFO|Removing iface tapde7debb3-f5 ovn-installed in OVS
Oct 02 12:02:36 compute-0 nova_compute[257802]: 2025-10-02 12:02:36.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:36 compute-0 nova_compute[257802]: 2025-10-02 12:02:36.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:36.886 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:4c:10 10.100.0.3'], port_security=['fa:16:3e:65:4c:10 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0146a67d-fbd8-4085-92f6-28a147c85dce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-797e74ff-2c6b-48fb-807e-b845ad260597', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b6a5858e0d184dd184a3291b74794c14', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1d7d59e9-1a48-4c19-9701-fea254de36f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.174'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0288ec27-954c-4f05-9e09-da71a7e0f954, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=de7debb3-f519-40b8-b5ea-ea2a51eee758) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:02:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:36.887 158261 INFO neutron.agent.ovn.metadata.agent [-] Port de7debb3-f519-40b8-b5ea-ea2a51eee758 in datapath 797e74ff-2c6b-48fb-807e-b845ad260597 unbound from our chassis
Oct 02 12:02:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:36.888 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 797e74ff-2c6b-48fb-807e-b845ad260597, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:02:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:36.889 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ec3fc84d-15e3-468d-97c6-0c7f98bd0de4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:36.890 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597 namespace which is not needed anymore
Oct 02 12:02:36 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Oct 02 12:02:36 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000e.scope: Consumed 15.209s CPU time.
Oct 02 12:02:36 compute-0 systemd-machined[211836]: Machine qemu-10-instance-0000000e terminated.
Oct 02 12:02:36 compute-0 sudo[272433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:36 compute-0 sudo[272433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:36 compute-0 sudo[272433]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 322 MiB data, 457 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.0 MiB/s wr, 333 op/s
Oct 02 12:02:37 compute-0 sudo[272475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:02:37 compute-0 sudo[272475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:37 compute-0 sudo[272475]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:37 compute-0 neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597[271985]: [NOTICE]   (271989) : haproxy version is 2.8.14-c23fe91
Oct 02 12:02:37 compute-0 neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597[271985]: [NOTICE]   (271989) : path to executable is /usr/sbin/haproxy
Oct 02 12:02:37 compute-0 neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597[271985]: [WARNING]  (271989) : Exiting Master process...
Oct 02 12:02:37 compute-0 neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597[271985]: [WARNING]  (271989) : Exiting Master process...
Oct 02 12:02:37 compute-0 neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597[271985]: [ALERT]    (271989) : Current worker (271991) exited with code 143 (Terminated)
Oct 02 12:02:37 compute-0 neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597[271985]: [WARNING]  (271989) : All workers exited. Exiting... (0)
Oct 02 12:02:37 compute-0 systemd[1]: libpod-faac8148b2f54b7defacc25726d5c3a73d7dd66c9e7de6ecbb1aadb890f467fb.scope: Deactivated successfully.
Oct 02 12:02:37 compute-0 podman[272474]: 2025-10-02 12:02:37.071717223 +0000 UTC m=+0.088320559 container died faac8148b2f54b7defacc25726d5c3a73d7dd66c9e7de6ecbb1aadb890f467fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:02:37 compute-0 sudo[272512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:37 compute-0 sudo[272512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:37 compute-0 sudo[272512]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:37 compute-0 nova_compute[257802]: 2025-10-02 12:02:37.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:37 compute-0 nova_compute[257802]: 2025-10-02 12:02:37.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:37 compute-0 nova_compute[257802]: 2025-10-02 12:02:37.118 2 INFO nova.virt.libvirt.driver [-] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Instance destroyed successfully.
Oct 02 12:02:37 compute-0 nova_compute[257802]: 2025-10-02 12:02:37.120 2 DEBUG nova.objects.instance [None req-0d04e065-b2a7-47a6-9df3-4b97801b2f8e f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lazy-loading 'resources' on Instance uuid 0146a67d-fbd8-4085-92f6-28a147c85dce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:02:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:37.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:37 compute-0 sudo[272551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 02 12:02:37 compute-0 sudo[272551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:37 compute-0 ceph-mon[73607]: pgmap v1069: 305 pgs: 305 active+clean; 322 MiB data, 457 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.0 MiB/s wr, 333 op/s
Oct 02 12:02:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:37.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-faac8148b2f54b7defacc25726d5c3a73d7dd66c9e7de6ecbb1aadb890f467fb-userdata-shm.mount: Deactivated successfully.
Oct 02 12:02:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-660bc18ccf8b91ae6b1c098331aad250c012377f56576b8198c2961e496f58a5-merged.mount: Deactivated successfully.
Oct 02 12:02:37 compute-0 nova_compute[257802]: 2025-10-02 12:02:37.252 2 DEBUG nova.virt.libvirt.vif [None req-0d04e065-b2a7-47a6-9df3-4b97801b2f8e f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:02:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1557393845',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1557393845',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(26),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1557393845',id=14,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=26,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBABlpYtD7rDfo5edCO3VO05L7/og1LArqb6dlvrZJFkfLxvT/a212dNPUwff9OZOkCSB3EGYGqoNRjJJC/KPkC0Po6hJhudoH+ymL6NUy6s9m4GPkIqwcNKYTuBlpjjxTw==',key_name='tempest-keypair-199584473',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:02:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b6a5858e0d184dd184a3291b74794c14',ramdisk_id='',reservation_id='r-fyt47mp4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1982850532',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1982850532-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:02:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f4a02a1717144da38c573ce51c727de8',uuid=0146a67d-fbd8-4085-92f6-28a147c85dce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "de7debb3-f519-40b8-b5ea-ea2a51eee758", "address": "fa:16:3e:65:4c:10", "network": {"id": "797e74ff-2c6b-48fb-807e-b845ad260597", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1853530210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a5858e0d184dd184a3291b74794c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7debb3-f5", "ovs_interfaceid": "de7debb3-f519-40b8-b5ea-ea2a51eee758", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:02:37 compute-0 nova_compute[257802]: 2025-10-02 12:02:37.252 2 DEBUG nova.network.os_vif_util [None req-0d04e065-b2a7-47a6-9df3-4b97801b2f8e f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Converting VIF {"id": "de7debb3-f519-40b8-b5ea-ea2a51eee758", "address": "fa:16:3e:65:4c:10", "network": {"id": "797e74ff-2c6b-48fb-807e-b845ad260597", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1853530210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a5858e0d184dd184a3291b74794c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7debb3-f5", "ovs_interfaceid": "de7debb3-f519-40b8-b5ea-ea2a51eee758", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:02:37 compute-0 nova_compute[257802]: 2025-10-02 12:02:37.253 2 DEBUG nova.network.os_vif_util [None req-0d04e065-b2a7-47a6-9df3-4b97801b2f8e f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:4c:10,bridge_name='br-int',has_traffic_filtering=True,id=de7debb3-f519-40b8-b5ea-ea2a51eee758,network=Network(797e74ff-2c6b-48fb-807e-b845ad260597),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapde7debb3-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:02:37 compute-0 nova_compute[257802]: 2025-10-02 12:02:37.254 2 DEBUG os_vif [None req-0d04e065-b2a7-47a6-9df3-4b97801b2f8e f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:4c:10,bridge_name='br-int',has_traffic_filtering=True,id=de7debb3-f519-40b8-b5ea-ea2a51eee758,network=Network(797e74ff-2c6b-48fb-807e-b845ad260597),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapde7debb3-f5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:02:37 compute-0 nova_compute[257802]: 2025-10-02 12:02:37.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:37 compute-0 nova_compute[257802]: 2025-10-02 12:02:37.256 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde7debb3-f5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:02:37 compute-0 nova_compute[257802]: 2025-10-02 12:02:37.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:37 compute-0 nova_compute[257802]: 2025-10-02 12:02:37.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:02:37 compute-0 nova_compute[257802]: 2025-10-02 12:02:37.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:37 compute-0 nova_compute[257802]: 2025-10-02 12:02:37.263 2 INFO os_vif [None req-0d04e065-b2a7-47a6-9df3-4b97801b2f8e f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:4c:10,bridge_name='br-int',has_traffic_filtering=True,id=de7debb3-f519-40b8-b5ea-ea2a51eee758,network=Network(797e74ff-2c6b-48fb-807e-b845ad260597),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapde7debb3-f5')
Oct 02 12:02:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 12:02:37 compute-0 podman[272474]: 2025-10-02 12:02:37.396448973 +0000 UTC m=+0.413052309 container cleanup faac8148b2f54b7defacc25726d5c3a73d7dd66c9e7de6ecbb1aadb890f467fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:02:37 compute-0 systemd[1]: libpod-conmon-faac8148b2f54b7defacc25726d5c3a73d7dd66c9e7de6ecbb1aadb890f467fb.scope: Deactivated successfully.
Oct 02 12:02:37 compute-0 sudo[272551]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:02:37 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:02:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 12:02:37 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:02:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:02:37 compute-0 podman[272625]: 2025-10-02 12:02:37.504065847 +0000 UTC m=+0.083552872 container remove faac8148b2f54b7defacc25726d5c3a73d7dd66c9e7de6ecbb1aadb890f467fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:02:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:37.511 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5a027707-08a4-4217-8d71-f0930973a39f]: (4, ('Thu Oct  2 12:02:36 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597 (faac8148b2f54b7defacc25726d5c3a73d7dd66c9e7de6ecbb1aadb890f467fb)\nfaac8148b2f54b7defacc25726d5c3a73d7dd66c9e7de6ecbb1aadb890f467fb\nThu Oct  2 12:02:37 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597 (faac8148b2f54b7defacc25726d5c3a73d7dd66c9e7de6ecbb1aadb890f467fb)\nfaac8148b2f54b7defacc25726d5c3a73d7dd66c9e7de6ecbb1aadb890f467fb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:37 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:02:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:37.513 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1fc5cc76-f39c-486c-a227-9783398cf7bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:37.514 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap797e74ff-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:02:37 compute-0 nova_compute[257802]: 2025-10-02 12:02:37.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:37 compute-0 kernel: tap797e74ff-20: left promiscuous mode
Oct 02 12:02:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:37.522 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a9ac2661-2d11-4e84-9ee2-2409753dcb9a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:37 compute-0 nova_compute[257802]: 2025-10-02 12:02:37.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:37.556 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e99119c5-fc43-43f3-a44c-286785b7a6d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:37.557 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0e77f054-65c6-44d6-9ecb-5c7d64aa6b9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:37 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:02:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:02:37 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:02:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:02:37 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:02:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:02:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:37.582 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4a03c11e-6c56-4954-84c5-d0fb99c9d3a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460207, 'reachable_time': 26474, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272640, 'error': None, 'target': 'ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:37 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:02:37 compute-0 systemd[1]: run-netns-ovnmeta\x2d797e74ff\x2d2c6b\x2d48fb\x2d807e\x2db845ad260597.mount: Deactivated successfully.
Oct 02 12:02:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:37.588 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-797e74ff-2c6b-48fb-807e-b845ad260597 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:02:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:37.589 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[40aca8ae-fbd5-4e82-a3b4-7d12f3ab561f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:37 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 87f78a2e-2da1-42ac-b674-fd11eb0f1b4a does not exist
Oct 02 12:02:37 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 3f8e6474-7efd-4a2b-ae24-b338f3467160 does not exist
Oct 02 12:02:37 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 6e9627c9-6b68-4e64-b672-dd69f70ced6d does not exist
Oct 02 12:02:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:02:37 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:02:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:02:37 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:02:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:02:37 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:02:37 compute-0 nova_compute[257802]: 2025-10-02 12:02:37.641 2 DEBUG nova.compute.manager [req-d11463ef-f9fb-4f46-95ba-815b05f687ba req-21f4afd0-8479-4c48-94fb-21ac2891b497 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Received event network-vif-unplugged-de7debb3-f519-40b8-b5ea-ea2a51eee758 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:02:37 compute-0 nova_compute[257802]: 2025-10-02 12:02:37.641 2 DEBUG oslo_concurrency.lockutils [req-d11463ef-f9fb-4f46-95ba-815b05f687ba req-21f4afd0-8479-4c48-94fb-21ac2891b497 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0146a67d-fbd8-4085-92f6-28a147c85dce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:02:37 compute-0 nova_compute[257802]: 2025-10-02 12:02:37.641 2 DEBUG oslo_concurrency.lockutils [req-d11463ef-f9fb-4f46-95ba-815b05f687ba req-21f4afd0-8479-4c48-94fb-21ac2891b497 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0146a67d-fbd8-4085-92f6-28a147c85dce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:02:37 compute-0 nova_compute[257802]: 2025-10-02 12:02:37.641 2 DEBUG oslo_concurrency.lockutils [req-d11463ef-f9fb-4f46-95ba-815b05f687ba req-21f4afd0-8479-4c48-94fb-21ac2891b497 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0146a67d-fbd8-4085-92f6-28a147c85dce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:02:37 compute-0 nova_compute[257802]: 2025-10-02 12:02:37.641 2 DEBUG nova.compute.manager [req-d11463ef-f9fb-4f46-95ba-815b05f687ba req-21f4afd0-8479-4c48-94fb-21ac2891b497 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] No waiting events found dispatching network-vif-unplugged-de7debb3-f519-40b8-b5ea-ea2a51eee758 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:02:37 compute-0 nova_compute[257802]: 2025-10-02 12:02:37.642 2 DEBUG nova.compute.manager [req-d11463ef-f9fb-4f46-95ba-815b05f687ba req-21f4afd0-8479-4c48-94fb-21ac2891b497 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Received event network-vif-unplugged-de7debb3-f519-40b8-b5ea-ea2a51eee758 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:02:37 compute-0 sudo[272641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:37 compute-0 sudo[272641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:37 compute-0 sudo[272641]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:37 compute-0 sudo[272666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:02:37 compute-0 sudo[272666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:37 compute-0 sudo[272666]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:37 compute-0 sudo[272691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:37 compute-0 sudo[272691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:37 compute-0 sudo[272691]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:37 compute-0 sudo[272717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:02:37 compute-0 sudo[272717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:02:38 compute-0 podman[272783]: 2025-10-02 12:02:38.152274865 +0000 UTC m=+0.019585124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:02:38 compute-0 podman[272783]: 2025-10-02 12:02:38.2781557 +0000 UTC m=+0.145465949 container create 6dd0804821e86d663f4d24cb080aeacd7043b033ab747c2f4d42336536f02d34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_antonelli, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:02:38 compute-0 systemd[1]: Started libpod-conmon-6dd0804821e86d663f4d24cb080aeacd7043b033ab747c2f4d42336536f02d34.scope.
Oct 02 12:02:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:02:38 compute-0 podman[272783]: 2025-10-02 12:02:38.473407295 +0000 UTC m=+0.340717574 container init 6dd0804821e86d663f4d24cb080aeacd7043b033ab747c2f4d42336536f02d34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 12:02:38 compute-0 podman[272783]: 2025-10-02 12:02:38.48006284 +0000 UTC m=+0.347373089 container start 6dd0804821e86d663f4d24cb080aeacd7043b033ab747c2f4d42336536f02d34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 12:02:38 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:02:38 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:02:38 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:02:38 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:02:38 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:02:38 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:02:38 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:02:38 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:02:38 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:02:38 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:02:38 compute-0 systemd[1]: libpod-6dd0804821e86d663f4d24cb080aeacd7043b033ab747c2f4d42336536f02d34.scope: Deactivated successfully.
Oct 02 12:02:38 compute-0 practical_antonelli[272800]: 167 167
Oct 02 12:02:38 compute-0 conmon[272800]: conmon 6dd0804821e86d663f4d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6dd0804821e86d663f4d24cb080aeacd7043b033ab747c2f4d42336536f02d34.scope/container/memory.events
Oct 02 12:02:38 compute-0 podman[272783]: 2025-10-02 12:02:38.528785742 +0000 UTC m=+0.396095991 container attach 6dd0804821e86d663f4d24cb080aeacd7043b033ab747c2f4d42336536f02d34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_antonelli, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:02:38 compute-0 podman[272783]: 2025-10-02 12:02:38.529911409 +0000 UTC m=+0.397221678 container died 6dd0804821e86d663f4d24cb080aeacd7043b033ab747c2f4d42336536f02d34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 12:02:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-4db5cc7e27b9d85751748521441ba882a2cf910990c950946abef12622583bf9-merged.mount: Deactivated successfully.
Oct 02 12:02:38 compute-0 podman[272783]: 2025-10-02 12:02:38.711874057 +0000 UTC m=+0.579184316 container remove 6dd0804821e86d663f4d24cb080aeacd7043b033ab747c2f4d42336536f02d34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_antonelli, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 12:02:38 compute-0 systemd[1]: libpod-conmon-6dd0804821e86d663f4d24cb080aeacd7043b033ab747c2f4d42336536f02d34.scope: Deactivated successfully.
Oct 02 12:02:38 compute-0 nova_compute[257802]: 2025-10-02 12:02:38.774 2 DEBUG nova.compute.manager [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp69s6w50k',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='6febae1b-d70f-43e6-8aba-1e913541fec2',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Oct 02 12:02:38 compute-0 nova_compute[257802]: 2025-10-02 12:02:38.801 2 DEBUG oslo_concurrency.lockutils [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Acquiring lock "refresh_cache-6febae1b-d70f-43e6-8aba-1e913541fec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:02:38 compute-0 nova_compute[257802]: 2025-10-02 12:02:38.802 2 DEBUG oslo_concurrency.lockutils [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Acquired lock "refresh_cache-6febae1b-d70f-43e6-8aba-1e913541fec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:02:38 compute-0 nova_compute[257802]: 2025-10-02 12:02:38.802 2 DEBUG nova.network.neutron [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:02:38 compute-0 nova_compute[257802]: 2025-10-02 12:02:38.811 2 INFO nova.virt.libvirt.driver [None req-0d04e065-b2a7-47a6-9df3-4b97801b2f8e f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Deleting instance files /var/lib/nova/instances/0146a67d-fbd8-4085-92f6-28a147c85dce_del
Oct 02 12:02:38 compute-0 nova_compute[257802]: 2025-10-02 12:02:38.812 2 INFO nova.virt.libvirt.driver [None req-0d04e065-b2a7-47a6-9df3-4b97801b2f8e f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Deletion of /var/lib/nova/instances/0146a67d-fbd8-4085-92f6-28a147c85dce_del complete
Oct 02 12:02:38 compute-0 nova_compute[257802]: 2025-10-02 12:02:38.875 2 INFO nova.compute.manager [None req-0d04e065-b2a7-47a6-9df3-4b97801b2f8e f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Took 2.19 seconds to destroy the instance on the hypervisor.
Oct 02 12:02:38 compute-0 nova_compute[257802]: 2025-10-02 12:02:38.876 2 DEBUG oslo.service.loopingcall [None req-0d04e065-b2a7-47a6-9df3-4b97801b2f8e f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:02:38 compute-0 nova_compute[257802]: 2025-10-02 12:02:38.876 2 DEBUG nova.compute.manager [-] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:02:38 compute-0 nova_compute[257802]: 2025-10-02 12:02:38.876 2 DEBUG nova.network.neutron [-] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:02:38 compute-0 podman[272827]: 2025-10-02 12:02:38.928420018 +0000 UTC m=+0.061477427 container create 453661a6821d071c12261976146ec2d864731b830153b3b404734fa228b400bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 12:02:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 299 MiB data, 463 MiB used, 21 GiB / 21 GiB avail; 4.5 MiB/s rd, 5.5 MiB/s wr, 333 op/s
Oct 02 12:02:38 compute-0 podman[272827]: 2025-10-02 12:02:38.889483678 +0000 UTC m=+0.022541117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:02:38 compute-0 systemd[1]: Started libpod-conmon-453661a6821d071c12261976146ec2d864731b830153b3b404734fa228b400bc.scope.
Oct 02 12:02:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:02:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db1f0b4cf25ad1d9960fd896e2b25dd8063dadf3328eb1c1b38d783c97d0484b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db1f0b4cf25ad1d9960fd896e2b25dd8063dadf3328eb1c1b38d783c97d0484b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db1f0b4cf25ad1d9960fd896e2b25dd8063dadf3328eb1c1b38d783c97d0484b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db1f0b4cf25ad1d9960fd896e2b25dd8063dadf3328eb1c1b38d783c97d0484b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db1f0b4cf25ad1d9960fd896e2b25dd8063dadf3328eb1c1b38d783c97d0484b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:39 compute-0 podman[272827]: 2025-10-02 12:02:39.082654272 +0000 UTC m=+0.215711691 container init 453661a6821d071c12261976146ec2d864731b830153b3b404734fa228b400bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:02:39 compute-0 podman[272827]: 2025-10-02 12:02:39.092122436 +0000 UTC m=+0.225179845 container start 453661a6821d071c12261976146ec2d864731b830153b3b404734fa228b400bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wright, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:02:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:39.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:39 compute-0 podman[272827]: 2025-10-02 12:02:39.139991997 +0000 UTC m=+0.273049436 container attach 453661a6821d071c12261976146ec2d864731b830153b3b404734fa228b400bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wright, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 12:02:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:02:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:39.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:02:39 compute-0 ceph-mon[73607]: pgmap v1070: 305 pgs: 305 active+clean; 299 MiB data, 463 MiB used, 21 GiB / 21 GiB avail; 4.5 MiB/s rd, 5.5 MiB/s wr, 333 op/s
Oct 02 12:02:39 compute-0 nova_compute[257802]: 2025-10-02 12:02:39.745 2 DEBUG nova.compute.manager [req-33a381bc-b6d4-4442-a165-785a95bc479b req-37783ffd-acaf-4d15-883b-d461c0b42849 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Received event network-vif-plugged-de7debb3-f519-40b8-b5ea-ea2a51eee758 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:02:39 compute-0 nova_compute[257802]: 2025-10-02 12:02:39.746 2 DEBUG oslo_concurrency.lockutils [req-33a381bc-b6d4-4442-a165-785a95bc479b req-37783ffd-acaf-4d15-883b-d461c0b42849 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0146a67d-fbd8-4085-92f6-28a147c85dce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:02:39 compute-0 nova_compute[257802]: 2025-10-02 12:02:39.747 2 DEBUG oslo_concurrency.lockutils [req-33a381bc-b6d4-4442-a165-785a95bc479b req-37783ffd-acaf-4d15-883b-d461c0b42849 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0146a67d-fbd8-4085-92f6-28a147c85dce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:02:39 compute-0 nova_compute[257802]: 2025-10-02 12:02:39.747 2 DEBUG oslo_concurrency.lockutils [req-33a381bc-b6d4-4442-a165-785a95bc479b req-37783ffd-acaf-4d15-883b-d461c0b42849 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0146a67d-fbd8-4085-92f6-28a147c85dce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:02:39 compute-0 nova_compute[257802]: 2025-10-02 12:02:39.747 2 DEBUG nova.compute.manager [req-33a381bc-b6d4-4442-a165-785a95bc479b req-37783ffd-acaf-4d15-883b-d461c0b42849 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] No waiting events found dispatching network-vif-plugged-de7debb3-f519-40b8-b5ea-ea2a51eee758 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:02:39 compute-0 nova_compute[257802]: 2025-10-02 12:02:39.747 2 WARNING nova.compute.manager [req-33a381bc-b6d4-4442-a165-785a95bc479b req-37783ffd-acaf-4d15-883b-d461c0b42849 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Received unexpected event network-vif-plugged-de7debb3-f519-40b8-b5ea-ea2a51eee758 for instance with vm_state active and task_state deleting.
Oct 02 12:02:39 compute-0 nova_compute[257802]: 2025-10-02 12:02:39.871 2 DEBUG nova.network.neutron [-] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:02:39 compute-0 reverent_wright[272844]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:02:39 compute-0 reverent_wright[272844]: --> relative data size: 1.0
Oct 02 12:02:39 compute-0 reverent_wright[272844]: --> All data devices are unavailable
Oct 02 12:02:39 compute-0 systemd[1]: libpod-453661a6821d071c12261976146ec2d864731b830153b3b404734fa228b400bc.scope: Deactivated successfully.
Oct 02 12:02:39 compute-0 podman[272827]: 2025-10-02 12:02:39.898059865 +0000 UTC m=+1.031117284 container died 453661a6821d071c12261976146ec2d864731b830153b3b404734fa228b400bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 12:02:39 compute-0 nova_compute[257802]: 2025-10-02 12:02:39.912 2 INFO nova.compute.manager [-] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Took 1.04 seconds to deallocate network for instance.
Oct 02 12:02:39 compute-0 nova_compute[257802]: 2025-10-02 12:02:39.980 2 DEBUG oslo_concurrency.lockutils [None req-0d04e065-b2a7-47a6-9df3-4b97801b2f8e f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:02:39 compute-0 nova_compute[257802]: 2025-10-02 12:02:39.981 2 DEBUG oslo_concurrency.lockutils [None req-0d04e065-b2a7-47a6-9df3-4b97801b2f8e f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:02:40 compute-0 nova_compute[257802]: 2025-10-02 12:02:40.036 2 DEBUG nova.compute.manager [req-7488b90c-c309-4138-a2f1-c7ca277a7d5d req-61becce7-c504-49ff-a122-94182cff5364 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Received event network-vif-deleted-de7debb3-f519-40b8-b5ea-ea2a51eee758 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:02:40 compute-0 nova_compute[257802]: 2025-10-02 12:02:40.076 2 DEBUG oslo_concurrency.processutils [None req-0d04e065-b2a7-47a6-9df3-4b97801b2f8e f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:02:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-db1f0b4cf25ad1d9960fd896e2b25dd8063dadf3328eb1c1b38d783c97d0484b-merged.mount: Deactivated successfully.
Oct 02 12:02:40 compute-0 podman[272827]: 2025-10-02 12:02:40.426511779 +0000 UTC m=+1.559569188 container remove 453661a6821d071c12261976146ec2d864731b830153b3b404734fa228b400bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wright, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:02:40 compute-0 systemd[1]: libpod-conmon-453661a6821d071c12261976146ec2d864731b830153b3b404734fa228b400bc.scope: Deactivated successfully.
Oct 02 12:02:40 compute-0 nova_compute[257802]: 2025-10-02 12:02:40.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:40 compute-0 sudo[272717]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:40 compute-0 sudo[272894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:02:40 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2041528249' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:02:40 compute-0 sudo[272894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:40 compute-0 sudo[272894]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:40 compute-0 nova_compute[257802]: 2025-10-02 12:02:40.538 2 DEBUG nova.network.neutron [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Updating instance_info_cache with network_info: [{"id": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "address": "fa:16:3e:87:34:a2", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a9c114c-7c", "ovs_interfaceid": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:02:40 compute-0 nova_compute[257802]: 2025-10-02 12:02:40.549 2 DEBUG oslo_concurrency.processutils [None req-0d04e065-b2a7-47a6-9df3-4b97801b2f8e f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:02:40 compute-0 nova_compute[257802]: 2025-10-02 12:02:40.556 2 DEBUG nova.compute.provider_tree [None req-0d04e065-b2a7-47a6-9df3-4b97801b2f8e f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:02:40 compute-0 nova_compute[257802]: 2025-10-02 12:02:40.563 2 DEBUG oslo_concurrency.lockutils [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Releasing lock "refresh_cache-6febae1b-d70f-43e6-8aba-1e913541fec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:02:40 compute-0 nova_compute[257802]: 2025-10-02 12:02:40.565 2 DEBUG os_brick.utils [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:02:40 compute-0 nova_compute[257802]: 2025-10-02 12:02:40.566 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:02:40 compute-0 nova_compute[257802]: 2025-10-02 12:02:40.569 2 DEBUG nova.scheduler.client.report [None req-0d04e065-b2a7-47a6-9df3-4b97801b2f8e f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:02:40 compute-0 nova_compute[257802]: 2025-10-02 12:02:40.593 2 DEBUG oslo_concurrency.lockutils [None req-0d04e065-b2a7-47a6-9df3-4b97801b2f8e f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:02:40 compute-0 sudo[272921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:02:40 compute-0 sudo[272921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:40 compute-0 nova_compute[257802]: 2025-10-02 12:02:40.593 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:02:40 compute-0 nova_compute[257802]: 2025-10-02 12:02:40.593 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[79a7f50f-77fa-4fc4-b3cc-47b0806800b9]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:40 compute-0 sudo[272921]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:40 compute-0 nova_compute[257802]: 2025-10-02 12:02:40.601 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:02:40 compute-0 nova_compute[257802]: 2025-10-02 12:02:40.608 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:02:40 compute-0 nova_compute[257802]: 2025-10-02 12:02:40.609 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[0763bf74-3bf6-480a-a0ee-98b50e2ce0c8]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:40 compute-0 nova_compute[257802]: 2025-10-02 12:02:40.610 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:02:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2041528249' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:02:40 compute-0 nova_compute[257802]: 2025-10-02 12:02:40.625 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:02:40 compute-0 nova_compute[257802]: 2025-10-02 12:02:40.626 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[d490be01-206c-4eb4-930c-b2fda3a4d9c5]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:40 compute-0 nova_compute[257802]: 2025-10-02 12:02:40.627 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[5dcb15ae-7ead-4d31-ab29-d64cbe8c3186]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:40 compute-0 nova_compute[257802]: 2025-10-02 12:02:40.628 2 DEBUG oslo_concurrency.processutils [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:02:40 compute-0 sudo[272949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:40 compute-0 nova_compute[257802]: 2025-10-02 12:02:40.652 2 INFO nova.scheduler.client.report [None req-0d04e065-b2a7-47a6-9df3-4b97801b2f8e f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Deleted allocations for instance 0146a67d-fbd8-4085-92f6-28a147c85dce
Oct 02 12:02:40 compute-0 sudo[272949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:40 compute-0 sudo[272949]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:40 compute-0 nova_compute[257802]: 2025-10-02 12:02:40.668 2 DEBUG oslo_concurrency.processutils [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] CMD "nvme version" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:02:40 compute-0 nova_compute[257802]: 2025-10-02 12:02:40.674 2 DEBUG os_brick.initiator.connectors.lightos [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:02:40 compute-0 nova_compute[257802]: 2025-10-02 12:02:40.675 2 DEBUG os_brick.initiator.connectors.lightos [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:02:40 compute-0 nova_compute[257802]: 2025-10-02 12:02:40.675 2 DEBUG os_brick.initiator.connectors.lightos [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:02:40 compute-0 nova_compute[257802]: 2025-10-02 12:02:40.676 2 DEBUG os_brick.utils [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] <== get_connector_properties: return (110ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:02:40 compute-0 sudo[272978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:02:40 compute-0 sudo[272978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:40 compute-0 nova_compute[257802]: 2025-10-02 12:02:40.860 2 DEBUG oslo_concurrency.lockutils [None req-0d04e065-b2a7-47a6-9df3-4b97801b2f8e f4a02a1717144da38c573ce51c727de8 b6a5858e0d184dd184a3291b74794c14 - - default default] Lock "0146a67d-fbd8-4085-92f6-28a147c85dce" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:02:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 272 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.2 MiB/s wr, 381 op/s
Oct 02 12:02:41 compute-0 podman[273041]: 2025-10-02 12:02:41.027973964 +0000 UTC m=+0.048374964 container create 0f7e86bbc9279fdebf954d94ebe5434a299fd62255b222203f2ada034a82582d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 12:02:41 compute-0 systemd[1]: Started libpod-conmon-0f7e86bbc9279fdebf954d94ebe5434a299fd62255b222203f2ada034a82582d.scope.
Oct 02 12:02:41 compute-0 podman[273041]: 2025-10-02 12:02:41.000509627 +0000 UTC m=+0.020910647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:02:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:02:41 compute-0 podman[273041]: 2025-10-02 12:02:41.120304402 +0000 UTC m=+0.140705422 container init 0f7e86bbc9279fdebf954d94ebe5434a299fd62255b222203f2ada034a82582d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_varahamihira, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 12:02:41 compute-0 podman[273041]: 2025-10-02 12:02:41.126998396 +0000 UTC m=+0.147399396 container start 0f7e86bbc9279fdebf954d94ebe5434a299fd62255b222203f2ada034a82582d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:02:41 compute-0 pensive_varahamihira[273058]: 167 167
Oct 02 12:02:41 compute-0 systemd[1]: libpod-0f7e86bbc9279fdebf954d94ebe5434a299fd62255b222203f2ada034a82582d.scope: Deactivated successfully.
Oct 02 12:02:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:41 compute-0 conmon[273058]: conmon 0f7e86bbc9279fdebf95 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0f7e86bbc9279fdebf954d94ebe5434a299fd62255b222203f2ada034a82582d.scope/container/memory.events
Oct 02 12:02:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:41.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:41 compute-0 podman[273041]: 2025-10-02 12:02:41.134759458 +0000 UTC m=+0.155160478 container attach 0f7e86bbc9279fdebf954d94ebe5434a299fd62255b222203f2ada034a82582d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_varahamihira, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 12:02:41 compute-0 podman[273041]: 2025-10-02 12:02:41.13528064 +0000 UTC m=+0.155681650 container died 0f7e86bbc9279fdebf954d94ebe5434a299fd62255b222203f2ada034a82582d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 12:02:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0d02d0b138685cb298640e833bb45ccf52e1d1e0af2d9a5f261f18912e69e50-merged.mount: Deactivated successfully.
Oct 02 12:02:41 compute-0 podman[273041]: 2025-10-02 12:02:41.184651579 +0000 UTC m=+0.205052579 container remove 0f7e86bbc9279fdebf954d94ebe5434a299fd62255b222203f2ada034a82582d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_varahamihira, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:02:41 compute-0 systemd[1]: libpod-conmon-0f7e86bbc9279fdebf954d94ebe5434a299fd62255b222203f2ada034a82582d.scope: Deactivated successfully.
Oct 02 12:02:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:41.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:41 compute-0 podman[273082]: 2025-10-02 12:02:41.359107561 +0000 UTC m=+0.040661244 container create f157a3f5bf6d515a83065b8dea7d7d8dfec2517c4363a2840ee1e1799c4ff6d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hofstadter, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:02:41 compute-0 systemd[1]: Started libpod-conmon-f157a3f5bf6d515a83065b8dea7d7d8dfec2517c4363a2840ee1e1799c4ff6d9.scope.
Oct 02 12:02:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/498accfae1cda4ca8ee87656750c21f08423f0b7c95364c354c7bf2414103d50/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/498accfae1cda4ca8ee87656750c21f08423f0b7c95364c354c7bf2414103d50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/498accfae1cda4ca8ee87656750c21f08423f0b7c95364c354c7bf2414103d50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/498accfae1cda4ca8ee87656750c21f08423f0b7c95364c354c7bf2414103d50/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:41 compute-0 podman[273082]: 2025-10-02 12:02:41.431311333 +0000 UTC m=+0.112865036 container init f157a3f5bf6d515a83065b8dea7d7d8dfec2517c4363a2840ee1e1799c4ff6d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hofstadter, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:02:41 compute-0 podman[273082]: 2025-10-02 12:02:41.341456916 +0000 UTC m=+0.023010619 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:02:41 compute-0 podman[273082]: 2025-10-02 12:02:41.438056068 +0000 UTC m=+0.119609751 container start f157a3f5bf6d515a83065b8dea7d7d8dfec2517c4363a2840ee1e1799c4ff6d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hofstadter, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:02:41 compute-0 podman[273082]: 2025-10-02 12:02:41.457530889 +0000 UTC m=+0.139084602 container attach f157a3f5bf6d515a83065b8dea7d7d8dfec2517c4363a2840ee1e1799c4ff6d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hofstadter, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 12:02:41 compute-0 ceph-mon[73607]: pgmap v1071: 305 pgs: 305 active+clean; 272 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.2 MiB/s wr, 381 op/s
Oct 02 12:02:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1912580596' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:02:42 compute-0 nova_compute[257802]: 2025-10-02 12:02:42.060 2 DEBUG nova.virt.libvirt.driver [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp69s6w50k',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='6febae1b-d70f-43e6-8aba-1e913541fec2',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={b3df5dc9-9a56-4922-8b65-4162deb6be93='5c71f9a3-182a-4066-9e3d-88ac9417292f'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Oct 02 12:02:42 compute-0 nova_compute[257802]: 2025-10-02 12:02:42.061 2 DEBUG nova.virt.libvirt.driver [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Creating instance directory: /var/lib/nova/instances/6febae1b-d70f-43e6-8aba-1e913541fec2 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Oct 02 12:02:42 compute-0 nova_compute[257802]: 2025-10-02 12:02:42.061 2 DEBUG nova.virt.libvirt.driver [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Ensure instance console log exists: /var/lib/nova/instances/6febae1b-d70f-43e6-8aba-1e913541fec2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:02:42 compute-0 nova_compute[257802]: 2025-10-02 12:02:42.062 2 DEBUG nova.virt.libvirt.driver [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901
Oct 02 12:02:42 compute-0 nova_compute[257802]: 2025-10-02 12:02:42.065 2 DEBUG nova.virt.libvirt.driver [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Oct 02 12:02:42 compute-0 nova_compute[257802]: 2025-10-02 12:02:42.066 2 DEBUG nova.virt.libvirt.vif [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:02:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-304906255',display_name='tempest-LiveMigrationTest-server-304906255',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-304906255',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:02:31Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3f2b3ac7d7504c9c96f0d4a67e0243c9',ramdisk_id='',reservation_id='r-utis6e3p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-1876533760',owner_user_name='tempest-LiveMigrationTest-1876533760-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:02:31Z,user_data=None,user_id='efb31eeadee34403b1ab7a584f3616f7',uuid=6febae1b-d70f-43e6-8aba-1e913541fec2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "address": "fa:16:3e:87:34:a2", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap4a9c114c-7c", "ovs_interfaceid": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:02:42 compute-0 nova_compute[257802]: 2025-10-02 12:02:42.066 2 DEBUG nova.network.os_vif_util [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Converting VIF {"id": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "address": "fa:16:3e:87:34:a2", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap4a9c114c-7c", "ovs_interfaceid": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:02:42 compute-0 nova_compute[257802]: 2025-10-02 12:02:42.067 2 DEBUG nova.network.os_vif_util [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=4a9c114c-7ca6-4f80-ab5d-d38765e4c15c,network=Network(5bd66e63-9399-4ab1-bcda-a761f2c44b1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a9c114c-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:02:42 compute-0 nova_compute[257802]: 2025-10-02 12:02:42.067 2 DEBUG os_vif [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=4a9c114c-7ca6-4f80-ab5d-d38765e4c15c,network=Network(5bd66e63-9399-4ab1-bcda-a761f2c44b1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a9c114c-7c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:02:42 compute-0 nova_compute[257802]: 2025-10-02 12:02:42.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:42 compute-0 nova_compute[257802]: 2025-10-02 12:02:42.068 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:02:42 compute-0 nova_compute[257802]: 2025-10-02 12:02:42.068 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:02:42 compute-0 nova_compute[257802]: 2025-10-02 12:02:42.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:42 compute-0 nova_compute[257802]: 2025-10-02 12:02:42.071 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a9c114c-7c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:02:42 compute-0 nova_compute[257802]: 2025-10-02 12:02:42.071 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4a9c114c-7c, col_values=(('external_ids', {'iface-id': '4a9c114c-7ca6-4f80-ab5d-d38765e4c15c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:34:a2', 'vm-uuid': '6febae1b-d70f-43e6-8aba-1e913541fec2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:02:42 compute-0 nova_compute[257802]: 2025-10-02 12:02:42.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:42 compute-0 NetworkManager[44987]: <info>  [1759406562.0749] manager: (tap4a9c114c-7c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Oct 02 12:02:42 compute-0 nova_compute[257802]: 2025-10-02 12:02:42.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:02:42 compute-0 nova_compute[257802]: 2025-10-02 12:02:42.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:42 compute-0 nova_compute[257802]: 2025-10-02 12:02:42.080 2 INFO os_vif [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=4a9c114c-7ca6-4f80-ab5d-d38765e4c15c,network=Network(5bd66e63-9399-4ab1-bcda-a761f2c44b1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a9c114c-7c')
Oct 02 12:02:42 compute-0 nova_compute[257802]: 2025-10-02 12:02:42.084 2 DEBUG nova.virt.libvirt.driver [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Oct 02 12:02:42 compute-0 nova_compute[257802]: 2025-10-02 12:02:42.084 2 DEBUG nova.compute.manager [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp69s6w50k',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='6febae1b-d70f-43e6-8aba-1e913541fec2',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={b3df5dc9-9a56-4922-8b65-4162deb6be93='5c71f9a3-182a-4066-9e3d-88ac9417292f'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]: {
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]:     "1": [
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]:         {
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]:             "devices": [
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]:                 "/dev/loop3"
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]:             ],
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]:             "lv_name": "ceph_lv0",
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]:             "lv_size": "7511998464",
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]:             "name": "ceph_lv0",
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]:             "tags": {
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]:                 "ceph.cluster_name": "ceph",
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]:                 "ceph.crush_device_class": "",
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]:                 "ceph.encrypted": "0",
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]:                 "ceph.osd_id": "1",
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]:                 "ceph.type": "block",
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]:                 "ceph.vdo": "0"
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]:             },
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]:             "type": "block",
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]:             "vg_name": "ceph_vg0"
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]:         }
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]:     ]
Oct 02 12:02:42 compute-0 stoic_hofstadter[273098]: }
Oct 02 12:02:42 compute-0 systemd[1]: libpod-f157a3f5bf6d515a83065b8dea7d7d8dfec2517c4363a2840ee1e1799c4ff6d9.scope: Deactivated successfully.
Oct 02 12:02:42 compute-0 podman[273082]: 2025-10-02 12:02:42.228617848 +0000 UTC m=+0.910171551 container died f157a3f5bf6d515a83065b8dea7d7d8dfec2517c4363a2840ee1e1799c4ff6d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:02:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-498accfae1cda4ca8ee87656750c21f08423f0b7c95364c354c7bf2414103d50-merged.mount: Deactivated successfully.
Oct 02 12:02:42 compute-0 podman[273082]: 2025-10-02 12:02:42.346587277 +0000 UTC m=+1.028140960 container remove f157a3f5bf6d515a83065b8dea7d7d8dfec2517c4363a2840ee1e1799c4ff6d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:02:42 compute-0 systemd[1]: libpod-conmon-f157a3f5bf6d515a83065b8dea7d7d8dfec2517c4363a2840ee1e1799c4ff6d9.scope: Deactivated successfully.
Oct 02 12:02:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:02:42
Oct 02 12:02:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:02:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:02:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'backups', '.mgr', 'default.rgw.log']
Oct 02 12:02:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:02:42 compute-0 sudo[272978]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:42 compute-0 sudo[273124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:42 compute-0 sudo[273124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:42 compute-0 sudo[273124]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:42 compute-0 sudo[273149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:02:42 compute-0 sudo[273149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:42 compute-0 sudo[273149]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:42 compute-0 sudo[273174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:42 compute-0 sudo[273174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:42 compute-0 sudo[273174]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:42 compute-0 sudo[273199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:02:42 compute-0 sudo[273199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:02:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:02:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:02:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:02:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:02:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:02:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:02:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:02:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:02:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:02:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:02:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:02:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:02:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:02:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:02:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:02:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 272 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.1 MiB/s wr, 304 op/s
Oct 02 12:02:43 compute-0 ceph-mon[73607]: pgmap v1072: 305 pgs: 305 active+clean; 272 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.1 MiB/s wr, 304 op/s
Oct 02 12:02:43 compute-0 podman[273265]: 2025-10-02 12:02:43.024929759 +0000 UTC m=+0.057643223 container create c8dc46bf57dea7347b0036d60f197372c8f2ec4aa2885e34bf644e1009726101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bouman, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:02:43 compute-0 systemd[1]: Started libpod-conmon-c8dc46bf57dea7347b0036d60f197372c8f2ec4aa2885e34bf644e1009726101.scope.
Oct 02 12:02:43 compute-0 podman[273265]: 2025-10-02 12:02:42.990950791 +0000 UTC m=+0.023664295 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:02:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:02:43 compute-0 podman[273265]: 2025-10-02 12:02:43.133735053 +0000 UTC m=+0.166448547 container init c8dc46bf57dea7347b0036d60f197372c8f2ec4aa2885e34bf644e1009726101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bouman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Oct 02 12:02:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:43.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:43 compute-0 podman[273265]: 2025-10-02 12:02:43.140228703 +0000 UTC m=+0.172942157 container start c8dc46bf57dea7347b0036d60f197372c8f2ec4aa2885e34bf644e1009726101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:02:43 compute-0 podman[273265]: 2025-10-02 12:02:43.143784211 +0000 UTC m=+0.176497695 container attach c8dc46bf57dea7347b0036d60f197372c8f2ec4aa2885e34bf644e1009726101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bouman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 12:02:43 compute-0 modest_bouman[273281]: 167 167
Oct 02 12:02:43 compute-0 systemd[1]: libpod-c8dc46bf57dea7347b0036d60f197372c8f2ec4aa2885e34bf644e1009726101.scope: Deactivated successfully.
Oct 02 12:02:43 compute-0 podman[273265]: 2025-10-02 12:02:43.146093807 +0000 UTC m=+0.178807271 container died c8dc46bf57dea7347b0036d60f197372c8f2ec4aa2885e34bf644e1009726101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 12:02:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:02:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-29d4bfc4b316255758defb59cc05a8a015b40b68229cde69e524ddf67b9490e5-merged.mount: Deactivated successfully.
Oct 02 12:02:43 compute-0 podman[273265]: 2025-10-02 12:02:43.188531354 +0000 UTC m=+0.221244818 container remove c8dc46bf57dea7347b0036d60f197372c8f2ec4aa2885e34bf644e1009726101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:02:43 compute-0 systemd[1]: libpod-conmon-c8dc46bf57dea7347b0036d60f197372c8f2ec4aa2885e34bf644e1009726101.scope: Deactivated successfully.
Oct 02 12:02:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:43.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:43 compute-0 podman[273304]: 2025-10-02 12:02:43.358857145 +0000 UTC m=+0.037875565 container create d78a966e6b5f633425c7fd69826cb9a607f9b8f8fbd2d0298987e0a4b5484263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:02:43 compute-0 systemd[1]: Started libpod-conmon-d78a966e6b5f633425c7fd69826cb9a607f9b8f8fbd2d0298987e0a4b5484263.scope.
Oct 02 12:02:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f8d9495ed41d6b5e9ff306c99106cadfb67b9212cb4963b50c11fd0834dde55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f8d9495ed41d6b5e9ff306c99106cadfb67b9212cb4963b50c11fd0834dde55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f8d9495ed41d6b5e9ff306c99106cadfb67b9212cb4963b50c11fd0834dde55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f8d9495ed41d6b5e9ff306c99106cadfb67b9212cb4963b50c11fd0834dde55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:43 compute-0 podman[273304]: 2025-10-02 12:02:43.343601808 +0000 UTC m=+0.022620248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:02:43 compute-0 podman[273304]: 2025-10-02 12:02:43.442088778 +0000 UTC m=+0.121107218 container init d78a966e6b5f633425c7fd69826cb9a607f9b8f8fbd2d0298987e0a4b5484263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_gauss, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:02:43 compute-0 podman[273304]: 2025-10-02 12:02:43.448121457 +0000 UTC m=+0.127139877 container start d78a966e6b5f633425c7fd69826cb9a607f9b8f8fbd2d0298987e0a4b5484263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_gauss, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:02:43 compute-0 podman[273304]: 2025-10-02 12:02:43.46487197 +0000 UTC m=+0.143890410 container attach d78a966e6b5f633425c7fd69826cb9a607f9b8f8fbd2d0298987e0a4b5484263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_gauss, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:02:44 compute-0 sad_gauss[273320]: {
Oct 02 12:02:44 compute-0 sad_gauss[273320]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:02:44 compute-0 sad_gauss[273320]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:02:44 compute-0 sad_gauss[273320]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:02:44 compute-0 sad_gauss[273320]:         "osd_id": 1,
Oct 02 12:02:44 compute-0 sad_gauss[273320]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:02:44 compute-0 sad_gauss[273320]:         "type": "bluestore"
Oct 02 12:02:44 compute-0 sad_gauss[273320]:     }
Oct 02 12:02:44 compute-0 sad_gauss[273320]: }
Oct 02 12:02:44 compute-0 systemd[1]: libpod-d78a966e6b5f633425c7fd69826cb9a607f9b8f8fbd2d0298987e0a4b5484263.scope: Deactivated successfully.
Oct 02 12:02:44 compute-0 podman[273341]: 2025-10-02 12:02:44.355577149 +0000 UTC m=+0.025485349 container died d78a966e6b5f633425c7fd69826cb9a607f9b8f8fbd2d0298987e0a4b5484263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:02:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f8d9495ed41d6b5e9ff306c99106cadfb67b9212cb4963b50c11fd0834dde55-merged.mount: Deactivated successfully.
Oct 02 12:02:44 compute-0 podman[273341]: 2025-10-02 12:02:44.419511446 +0000 UTC m=+0.089419636 container remove d78a966e6b5f633425c7fd69826cb9a607f9b8f8fbd2d0298987e0a4b5484263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_gauss, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 12:02:44 compute-0 systemd[1]: libpod-conmon-d78a966e6b5f633425c7fd69826cb9a607f9b8f8fbd2d0298987e0a4b5484263.scope: Deactivated successfully.
Oct 02 12:02:44 compute-0 sudo[273199]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:02:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:02:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:02:44 compute-0 nova_compute[257802]: 2025-10-02 12:02:44.504 2 DEBUG nova.network.neutron [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Port 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Oct 02 12:02:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:02:44 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev b494ae87-6db2-4790-a1e4-6f630f8e9820 does not exist
Oct 02 12:02:44 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 48bc4ccd-163e-4909-9123-d0420cc73ae2 does not exist
Oct 02 12:02:44 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 53ad5a15-10f0-4991-802c-edb6129f03f6 does not exist
Oct 02 12:02:44 compute-0 sudo[273357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:44 compute-0 sudo[273357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:44 compute-0 sudo[273357]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:44 compute-0 sudo[273382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:02:44 compute-0 sudo[273382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:44 compute-0 sudo[273382]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:44 compute-0 nova_compute[257802]: 2025-10-02 12:02:44.907 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406549.906397, b4e4932c-8129-4ceb-95ef-3a612ef502f9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:02:44 compute-0 nova_compute[257802]: 2025-10-02 12:02:44.907 2 INFO nova.compute.manager [-] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] VM Stopped (Lifecycle Event)
Oct 02 12:02:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 308 MiB data, 449 MiB used, 21 GiB / 21 GiB avail; 5.9 MiB/s rd, 6.9 MiB/s wr, 417 op/s
Oct 02 12:02:44 compute-0 nova_compute[257802]: 2025-10-02 12:02:44.943 2 DEBUG nova.compute.manager [None req-8d13c8f2-891a-40e1-a3f6-179203a9b2f3 - - - - - -] [instance: b4e4932c-8129-4ceb-95ef-3a612ef502f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:02:44 compute-0 nova_compute[257802]: 2025-10-02 12:02:44.968 2 DEBUG nova.compute.manager [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp69s6w50k',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='6febae1b-d70f-43e6-8aba-1e913541fec2',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={b3df5dc9-9a56-4922-8b65-4162deb6be93='5c71f9a3-182a-4066-9e3d-88ac9417292f'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Oct 02 12:02:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:02:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:45.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:02:45 compute-0 kernel: tap4a9c114c-7c: entered promiscuous mode
Oct 02 12:02:45 compute-0 NetworkManager[44987]: <info>  [1759406565.2187] manager: (tap4a9c114c-7c): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Oct 02 12:02:45 compute-0 ovn_controller[148183]: 2025-10-02T12:02:45Z|00104|binding|INFO|Claiming lport 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c for this additional chassis.
Oct 02 12:02:45 compute-0 ovn_controller[148183]: 2025-10-02T12:02:45Z|00105|binding|INFO|4a9c114c-7ca6-4f80-ab5d-d38765e4c15c: Claiming fa:16:3e:87:34:a2 10.100.0.11
Oct 02 12:02:45 compute-0 nova_compute[257802]: 2025-10-02 12:02:45.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:45.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:45 compute-0 ovn_controller[148183]: 2025-10-02T12:02:45Z|00106|binding|INFO|Setting lport 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c ovn-installed in OVS
Oct 02 12:02:45 compute-0 nova_compute[257802]: 2025-10-02 12:02:45.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:45 compute-0 nova_compute[257802]: 2025-10-02 12:02:45.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:45 compute-0 systemd-machined[211836]: New machine qemu-11-instance-00000010.
Oct 02 12:02:45 compute-0 systemd-udevd[273421]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:02:45 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-00000010.
Oct 02 12:02:45 compute-0 NetworkManager[44987]: <info>  [1759406565.2706] device (tap4a9c114c-7c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:02:45 compute-0 NetworkManager[44987]: <info>  [1759406565.2714] device (tap4a9c114c-7c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:02:45 compute-0 nova_compute[257802]: 2025-10-02 12:02:45.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:45 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:02:45 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:02:45 compute-0 ceph-mon[73607]: pgmap v1073: 305 pgs: 305 active+clean; 308 MiB data, 449 MiB used, 21 GiB / 21 GiB avail; 5.9 MiB/s rd, 6.9 MiB/s wr, 417 op/s
Oct 02 12:02:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3014154418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:02:46 compute-0 nova_compute[257802]: 2025-10-02 12:02:46.902 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406566.9025095, 6febae1b-d70f-43e6-8aba-1e913541fec2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:02:46 compute-0 nova_compute[257802]: 2025-10-02 12:02:46.903 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] VM Started (Lifecycle Event)
Oct 02 12:02:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 304 MiB data, 444 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.8 MiB/s wr, 292 op/s
Oct 02 12:02:47 compute-0 nova_compute[257802]: 2025-10-02 12:02:47.002 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:02:47 compute-0 nova_compute[257802]: 2025-10-02 12:02:47.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:47.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:02:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:47.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:02:47 compute-0 nova_compute[257802]: 2025-10-02 12:02:47.478 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406567.478085, 6febae1b-d70f-43e6-8aba-1e913541fec2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:02:47 compute-0 nova_compute[257802]: 2025-10-02 12:02:47.478 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] VM Resumed (Lifecycle Event)
Oct 02 12:02:47 compute-0 nova_compute[257802]: 2025-10-02 12:02:47.579 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:02:47 compute-0 nova_compute[257802]: 2025-10-02 12:02:47.583 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:02:47 compute-0 nova_compute[257802]: 2025-10-02 12:02:47.684 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-0.ctlplane.example.com
Oct 02 12:02:48 compute-0 ceph-mon[73607]: pgmap v1074: 305 pgs: 305 active+clean; 304 MiB data, 444 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.8 MiB/s wr, 292 op/s
Oct 02 12:02:48 compute-0 ovn_controller[148183]: 2025-10-02T12:02:48Z|00107|binding|INFO|Releasing lport 5a25c40a-77b7-400c-afc3-f6cb920420cb from this chassis (sb_readonly=0)
Oct 02 12:02:48 compute-0 ovn_controller[148183]: 2025-10-02T12:02:48Z|00108|binding|INFO|Releasing lport 2d320c71-91bf-44ba-b753-c9807cdc7fdb from this chassis (sb_readonly=0)
Oct 02 12:02:48 compute-0 nova_compute[257802]: 2025-10-02 12:02:48.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:02:48 compute-0 ovn_controller[148183]: 2025-10-02T12:02:48Z|00109|binding|INFO|Releasing lport 5a25c40a-77b7-400c-afc3-f6cb920420cb from this chassis (sb_readonly=0)
Oct 02 12:02:48 compute-0 ovn_controller[148183]: 2025-10-02T12:02:48Z|00110|binding|INFO|Releasing lport 2d320c71-91bf-44ba-b753-c9807cdc7fdb from this chassis (sb_readonly=0)
Oct 02 12:02:48 compute-0 nova_compute[257802]: 2025-10-02 12:02:48.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 291 MiB data, 440 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.0 MiB/s wr, 281 op/s
Oct 02 12:02:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:49.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:49.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:49 compute-0 ceph-mon[73607]: pgmap v1075: 305 pgs: 305 active+clean; 291 MiB data, 440 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.0 MiB/s wr, 281 op/s
Oct 02 12:02:49 compute-0 ovn_controller[148183]: 2025-10-02T12:02:49Z|00111|binding|INFO|Claiming lport 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c for this chassis.
Oct 02 12:02:49 compute-0 ovn_controller[148183]: 2025-10-02T12:02:49Z|00112|binding|INFO|4a9c114c-7ca6-4f80-ab5d-d38765e4c15c: Claiming fa:16:3e:87:34:a2 10.100.0.11
Oct 02 12:02:49 compute-0 ovn_controller[148183]: 2025-10-02T12:02:49Z|00113|binding|INFO|Setting lport 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c up in Southbound
Oct 02 12:02:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:50.017 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:34:a2 10.100.0.11'], port_security=['fa:16:3e:87:34:a2 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '6febae1b-d70f-43e6-8aba-1e913541fec2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3f2b3ac7d7504c9c96f0d4a67e0243c9', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'c39f970f-1ead-4030-a775-b7ca9942094a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58eebdcc-6c12-4ff3-b6bc-0fe1fb3af6b6, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=4a9c114c-7ca6-4f80-ab5d-d38765e4c15c) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:02:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:50.019 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c in datapath 5bd66e63-9399-4ab1-bcda-a761f2c44b1d bound to our chassis
Oct 02 12:02:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:50.022 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5bd66e63-9399-4ab1-bcda-a761f2c44b1d
Oct 02 12:02:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:50.042 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[fe6ca401-86be-4076-b45f-766a160c5489]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:50.077 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[1be3383b-990d-4571-b06a-7d2994650e49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:50.080 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[3e3a7d7d-4b0d-40ab-a77c-f79947114388]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:50.116 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[c0def160-161c-4815-b4e2-2753f9bfc7b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:50.141 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[69381de0-e773-438b-8dc6-cd01aba7cd34]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bd66e63-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:10:0f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 5, 'rx_bytes': 1322, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 5, 'rx_bytes': 1322, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458802, 'reachable_time': 36366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 9, 'inoctets': 776, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 9, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 776, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 9, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273480, 'error': None, 'target': 'ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:50.165 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2624604f-879a-4f2d-8ded-a3eb62e7fa3c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5bd66e63-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458812, 'tstamp': 458812}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273481, 'error': None, 'target': 'ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5bd66e63-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458815, 'tstamp': 458815}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273481, 'error': None, 'target': 'ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:02:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:50.167 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bd66e63-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:02:50 compute-0 nova_compute[257802]: 2025-10-02 12:02:50.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:50.172 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bd66e63-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:02:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:50.172 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:02:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:50.173 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5bd66e63-90, col_values=(('external_ids', {'iface-id': '5a25c40a-77b7-400c-afc3-f6cb920420cb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:02:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:02:50.173 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:02:50 compute-0 sudo[273482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:50 compute-0 sudo[273482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:50 compute-0 sudo[273482]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:50 compute-0 sudo[273507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:50 compute-0 sudo[273507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:50 compute-0 sudo[273507]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:50 compute-0 nova_compute[257802]: 2025-10-02 12:02:50.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 279 MiB data, 438 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 265 op/s
Oct 02 12:02:51 compute-0 nova_compute[257802]: 2025-10-02 12:02:51.082 2 INFO nova.compute.manager [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Post operation of migration started
Oct 02 12:02:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:51.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:51.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:51 compute-0 nova_compute[257802]: 2025-10-02 12:02:51.522 2 DEBUG oslo_concurrency.lockutils [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Acquiring lock "refresh_cache-6febae1b-d70f-43e6-8aba-1e913541fec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:02:51 compute-0 nova_compute[257802]: 2025-10-02 12:02:51.524 2 DEBUG oslo_concurrency.lockutils [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Acquired lock "refresh_cache-6febae1b-d70f-43e6-8aba-1e913541fec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:02:51 compute-0 nova_compute[257802]: 2025-10-02 12:02:51.524 2 DEBUG nova.network.neutron [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:02:51 compute-0 podman[273533]: 2025-10-02 12:02:51.941234799 +0000 UTC m=+0.078668791 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:02:51 compute-0 ceph-mon[73607]: pgmap v1076: 305 pgs: 305 active+clean; 279 MiB data, 438 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 265 op/s
Oct 02 12:02:52 compute-0 nova_compute[257802]: 2025-10-02 12:02:52.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:52 compute-0 nova_compute[257802]: 2025-10-02 12:02:52.116 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406557.1157072, 0146a67d-fbd8-4085-92f6-28a147c85dce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:02:52 compute-0 nova_compute[257802]: 2025-10-02 12:02:52.117 2 INFO nova.compute.manager [-] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] VM Stopped (Lifecycle Event)
Oct 02 12:02:52 compute-0 nova_compute[257802]: 2025-10-02 12:02:52.134 2 DEBUG nova.compute.manager [None req-425353fe-5805-4534-8ae5-935302c0be9f - - - - - -] [instance: 0146a67d-fbd8-4085-92f6-28a147c85dce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:02:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 279 MiB data, 438 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.4 MiB/s wr, 184 op/s
Oct 02 12:02:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:53.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:53.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:02:53 compute-0 ceph-mon[73607]: pgmap v1077: 305 pgs: 305 active+clean; 279 MiB data, 438 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.4 MiB/s wr, 184 op/s
Oct 02 12:02:53 compute-0 nova_compute[257802]: 2025-10-02 12:02:53.679 2 DEBUG nova.network.neutron [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Updating instance_info_cache with network_info: [{"id": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "address": "fa:16:3e:87:34:a2", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a9c114c-7c", "ovs_interfaceid": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:02:53 compute-0 nova_compute[257802]: 2025-10-02 12:02:53.695 2 DEBUG oslo_concurrency.lockutils [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Releasing lock "refresh_cache-6febae1b-d70f-43e6-8aba-1e913541fec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:02:53 compute-0 nova_compute[257802]: 2025-10-02 12:02:53.714 2 DEBUG oslo_concurrency.lockutils [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:02:53 compute-0 nova_compute[257802]: 2025-10-02 12:02:53.715 2 DEBUG oslo_concurrency.lockutils [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:02:53 compute-0 nova_compute[257802]: 2025-10-02 12:02:53.715 2 DEBUG oslo_concurrency.lockutils [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:02:53 compute-0 nova_compute[257802]: 2025-10-02 12:02:53.720 2 INFO nova.virt.libvirt.driver [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Oct 02 12:02:53 compute-0 virtqemud[257280]: Domain id=11 name='instance-00000010' uuid=6febae1b-d70f-43e6-8aba-1e913541fec2 is tainted: custom-monitor
Oct 02 12:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004342822329238787 of space, bias 1.0, pg target 1.302846698771636 quantized to 32 (current 32)
Oct 02 12:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021537810580681185 of space, bias 1.0, pg target 0.6439805363623674 quantized to 32 (current 32)
Oct 02 12:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 12:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 12:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027172174530057695 quantized to 32 (current 32)
Oct 02 12:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 12:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 12:02:54 compute-0 nova_compute[257802]: 2025-10-02 12:02:54.727 2 INFO nova.virt.libvirt.driver [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Oct 02 12:02:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 224 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.4 MiB/s wr, 189 op/s
Oct 02 12:02:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:55.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:55 compute-0 ceph-mon[73607]: pgmap v1078: 305 pgs: 305 active+clean; 224 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.4 MiB/s wr, 189 op/s
Oct 02 12:02:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:55.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:55 compute-0 nova_compute[257802]: 2025-10-02 12:02:55.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:55 compute-0 nova_compute[257802]: 2025-10-02 12:02:55.732 2 INFO nova.virt.libvirt.driver [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Oct 02 12:02:55 compute-0 nova_compute[257802]: 2025-10-02 12:02:55.736 2 DEBUG nova.compute.manager [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:02:55 compute-0 nova_compute[257802]: 2025-10-02 12:02:55.766 2 DEBUG nova.objects.instance [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:02:56 compute-0 sshd-session[273554]: banner exchange: Connection from 93.123.109.214 port 46542: invalid format
Oct 02 12:02:56 compute-0 sshd-session[273555]: banner exchange: Connection from 93.123.109.214 port 46558: invalid format
Oct 02 12:02:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3579900210' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:02:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3579900210' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:02:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/716255890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:02:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 200 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 178 KiB/s rd, 605 KiB/s wr, 82 op/s
Oct 02 12:02:57 compute-0 nova_compute[257802]: 2025-10-02 12:02:57.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:02:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:57.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:57.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:57 compute-0 ceph-mon[73607]: pgmap v1079: 305 pgs: 305 active+clean; 200 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 178 KiB/s rd, 605 KiB/s wr, 82 op/s
Oct 02 12:02:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:02:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4123010262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:02:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/258884514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:02:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:02:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 10K writes, 42K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 2773 syncs, 3.77 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4229 writes, 17K keys, 4229 commit groups, 1.0 writes per commit group, ingest: 16.45 MB, 0.03 MB/s
                                           Interval WAL: 4229 writes, 1656 syncs, 2.55 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 12:02:58 compute-0 nova_compute[257802]: 2025-10-02 12:02:58.659 2 DEBUG nova.virt.libvirt.driver [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Check if temp file /var/lib/nova/instances/tmps5_fkoiu exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Oct 02 12:02:58 compute-0 nova_compute[257802]: 2025-10-02 12:02:58.660 2 DEBUG nova.compute.manager [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmps5_fkoiu',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='6febae1b-d70f-43e6-8aba-1e913541fec2',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Oct 02 12:02:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 200 MiB data, 398 MiB used, 21 GiB / 21 GiB avail; 47 KiB/s rd, 77 KiB/s wr, 37 op/s
Oct 02 12:02:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:59.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:02:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:59.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:59 compute-0 ceph-mon[73607]: pgmap v1080: 305 pgs: 305 active+clean; 200 MiB data, 398 MiB used, 21 GiB / 21 GiB avail; 47 KiB/s rd, 77 KiB/s wr, 37 op/s
Oct 02 12:03:00 compute-0 nova_compute[257802]: 2025-10-02 12:03:00.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:00 compute-0 podman[273559]: 2025-10-02 12:03:00.907298736 +0000 UTC m=+0.050278651 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:03:00 compute-0 podman[273560]: 2025-10-02 12:03:00.908423653 +0000 UTC m=+0.047906962 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:03:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 200 MiB data, 398 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 14 KiB/s wr, 34 op/s
Oct 02 12:03:01 compute-0 ceph-mon[73607]: pgmap v1081: 305 pgs: 305 active+clean; 200 MiB data, 398 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 14 KiB/s wr, 34 op/s
Oct 02 12:03:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:01.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:01.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:02 compute-0 nova_compute[257802]: 2025-10-02 12:03:02.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 200 MiB data, 398 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 13 KiB/s wr, 23 op/s
Oct 02 12:03:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:03.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:03 compute-0 ceph-mon[73607]: pgmap v1082: 305 pgs: 305 active+clean; 200 MiB data, 398 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 13 KiB/s wr, 23 op/s
Oct 02 12:03:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:03.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:03:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/4290698394' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:03:04 compute-0 podman[273601]: 2025-10-02 12:03:04.935600623 +0000 UTC m=+0.075291178 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 12:03:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 200 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 23 KiB/s wr, 24 op/s
Oct 02 12:03:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:03:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:05.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:03:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1161370037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:03:05 compute-0 ceph-mon[73607]: pgmap v1083: 305 pgs: 305 active+clean; 200 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 23 KiB/s wr, 24 op/s
Oct 02 12:03:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:05.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:05 compute-0 nova_compute[257802]: 2025-10-02 12:03:05.433 2 DEBUG nova.compute.manager [req-1c5bda39-236d-4c4c-8de2-a2db02a6671c req-123ab590-41da-43fa-86e2-95e7f784a0e4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-vif-unplugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:03:05 compute-0 nova_compute[257802]: 2025-10-02 12:03:05.433 2 DEBUG oslo_concurrency.lockutils [req-1c5bda39-236d-4c4c-8de2-a2db02a6671c req-123ab590-41da-43fa-86e2-95e7f784a0e4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:05 compute-0 nova_compute[257802]: 2025-10-02 12:03:05.433 2 DEBUG oslo_concurrency.lockutils [req-1c5bda39-236d-4c4c-8de2-a2db02a6671c req-123ab590-41da-43fa-86e2-95e7f784a0e4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:05 compute-0 nova_compute[257802]: 2025-10-02 12:03:05.434 2 DEBUG oslo_concurrency.lockutils [req-1c5bda39-236d-4c4c-8de2-a2db02a6671c req-123ab590-41da-43fa-86e2-95e7f784a0e4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:03:05 compute-0 nova_compute[257802]: 2025-10-02 12:03:05.434 2 DEBUG nova.compute.manager [req-1c5bda39-236d-4c4c-8de2-a2db02a6671c req-123ab590-41da-43fa-86e2-95e7f784a0e4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] No waiting events found dispatching network-vif-unplugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:03:05 compute-0 nova_compute[257802]: 2025-10-02 12:03:05.434 2 DEBUG nova.compute.manager [req-1c5bda39-236d-4c4c-8de2-a2db02a6671c req-123ab590-41da-43fa-86e2-95e7f784a0e4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-vif-unplugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:03:05 compute-0 nova_compute[257802]: 2025-10-02 12:03:05.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:06 compute-0 nova_compute[257802]: 2025-10-02 12:03:06.167 2 INFO nova.compute.manager [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Took 6.34 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Oct 02 12:03:06 compute-0 nova_compute[257802]: 2025-10-02 12:03:06.168 2 DEBUG nova.compute.manager [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:03:06 compute-0 nova_compute[257802]: 2025-10-02 12:03:06.183 2 DEBUG nova.compute.manager [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmps5_fkoiu',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='6febae1b-d70f-43e6-8aba-1e913541fec2',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(4d38ffdd-ba38-478b-bb7b-9503d4fbb581),old_vol_attachment_ids={b3df5dc9-9a56-4922-8b65-4162deb6be93='3d957787-f9d3-4665-bd82-80e6c561747f'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Oct 02 12:03:06 compute-0 nova_compute[257802]: 2025-10-02 12:03:06.186 2 DEBUG nova.objects.instance [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Lazy-loading 'migration_context' on Instance uuid 6febae1b-d70f-43e6-8aba-1e913541fec2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:03:06 compute-0 nova_compute[257802]: 2025-10-02 12:03:06.187 2 DEBUG nova.virt.libvirt.driver [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Oct 02 12:03:06 compute-0 nova_compute[257802]: 2025-10-02 12:03:06.189 2 DEBUG nova.virt.libvirt.driver [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Oct 02 12:03:06 compute-0 nova_compute[257802]: 2025-10-02 12:03:06.189 2 DEBUG nova.virt.libvirt.driver [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Oct 02 12:03:06 compute-0 nova_compute[257802]: 2025-10-02 12:03:06.207 2 DEBUG nova.virt.libvirt.migration [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Find same serial number: pos=1, serial=b3df5dc9-9a56-4922-8b65-4162deb6be93 _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242
Oct 02 12:03:06 compute-0 nova_compute[257802]: 2025-10-02 12:03:06.208 2 DEBUG nova.virt.libvirt.vif [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:02:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-304906255',display_name='tempest-LiveMigrationTest-server-304906255',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-304906255',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:02:31Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3f2b3ac7d7504c9c96f0d4a67e0243c9',ramdisk_id='',reservation_id='r-utis6e3p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-1876533760',owner_user_name='tempest-LiveMigrationTest-1876533760-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:02:55Z,user_data=None,user_id='efb31eeadee34403b1ab7a584f3616f7',uuid=6febae1b-d70f-43e6-8aba-1e913541fec2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "address": "fa:16:3e:87:34:a2", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap4a9c114c-7c", "ovs_interfaceid": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:03:06 compute-0 nova_compute[257802]: 2025-10-02 12:03:06.208 2 DEBUG nova.network.os_vif_util [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Converting VIF {"id": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "address": "fa:16:3e:87:34:a2", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap4a9c114c-7c", "ovs_interfaceid": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:03:06 compute-0 nova_compute[257802]: 2025-10-02 12:03:06.208 2 DEBUG nova.network.os_vif_util [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=4a9c114c-7ca6-4f80-ab5d-d38765e4c15c,network=Network(5bd66e63-9399-4ab1-bcda-a761f2c44b1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a9c114c-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:03:06 compute-0 nova_compute[257802]: 2025-10-02 12:03:06.209 2 DEBUG nova.virt.libvirt.migration [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Updating guest XML with vif config: <interface type="ethernet">
Oct 02 12:03:06 compute-0 nova_compute[257802]:   <mac address="fa:16:3e:87:34:a2"/>
Oct 02 12:03:06 compute-0 nova_compute[257802]:   <model type="virtio"/>
Oct 02 12:03:06 compute-0 nova_compute[257802]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:03:06 compute-0 nova_compute[257802]:   <mtu size="1442"/>
Oct 02 12:03:06 compute-0 nova_compute[257802]:   <target dev="tap4a9c114c-7c"/>
Oct 02 12:03:06 compute-0 nova_compute[257802]: </interface>
Oct 02 12:03:06 compute-0 nova_compute[257802]:  _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Oct 02 12:03:06 compute-0 nova_compute[257802]: 2025-10-02 12:03:06.209 2 DEBUG nova.virt.libvirt.driver [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Oct 02 12:03:06 compute-0 nova_compute[257802]: 2025-10-02 12:03:06.691 2 DEBUG nova.virt.libvirt.migration [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Oct 02 12:03:06 compute-0 nova_compute[257802]: 2025-10-02 12:03:06.691 2 INFO nova.virt.libvirt.migration [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Increasing downtime to 50 ms after 0 sec elapsed time
Oct 02 12:03:06 compute-0 nova_compute[257802]: 2025-10-02 12:03:06.751 2 INFO nova.virt.libvirt.driver [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Oct 02 12:03:06 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3046991153' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:03:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 200 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 10 KiB/s wr, 19 op/s
Oct 02 12:03:06 compute-0 ceph-mgr[73901]: [devicehealth INFO root] Check health
Oct 02 12:03:07 compute-0 nova_compute[257802]: 2025-10-02 12:03:07.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:07.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:07 compute-0 nova_compute[257802]: 2025-10-02 12:03:07.253 2 DEBUG nova.virt.libvirt.migration [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Oct 02 12:03:07 compute-0 nova_compute[257802]: 2025-10-02 12:03:07.254 2 DEBUG nova.virt.libvirt.migration [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Oct 02 12:03:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:03:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:07.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:03:07 compute-0 nova_compute[257802]: 2025-10-02 12:03:07.591 2 DEBUG nova.compute.manager [req-048ca513-2f81-498a-8f06-2f4799f0dcad req-7ea42d27-26cf-4a9d-9661-6447e9fa61e4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:03:07 compute-0 nova_compute[257802]: 2025-10-02 12:03:07.593 2 DEBUG oslo_concurrency.lockutils [req-048ca513-2f81-498a-8f06-2f4799f0dcad req-7ea42d27-26cf-4a9d-9661-6447e9fa61e4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:07 compute-0 nova_compute[257802]: 2025-10-02 12:03:07.593 2 DEBUG oslo_concurrency.lockutils [req-048ca513-2f81-498a-8f06-2f4799f0dcad req-7ea42d27-26cf-4a9d-9661-6447e9fa61e4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:07 compute-0 nova_compute[257802]: 2025-10-02 12:03:07.593 2 DEBUG oslo_concurrency.lockutils [req-048ca513-2f81-498a-8f06-2f4799f0dcad req-7ea42d27-26cf-4a9d-9661-6447e9fa61e4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:03:07 compute-0 nova_compute[257802]: 2025-10-02 12:03:07.594 2 DEBUG nova.compute.manager [req-048ca513-2f81-498a-8f06-2f4799f0dcad req-7ea42d27-26cf-4a9d-9661-6447e9fa61e4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] No waiting events found dispatching network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:03:07 compute-0 nova_compute[257802]: 2025-10-02 12:03:07.594 2 WARNING nova.compute.manager [req-048ca513-2f81-498a-8f06-2f4799f0dcad req-7ea42d27-26cf-4a9d-9661-6447e9fa61e4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received unexpected event network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c for instance with vm_state active and task_state migrating.
Oct 02 12:03:07 compute-0 nova_compute[257802]: 2025-10-02 12:03:07.594 2 DEBUG nova.compute.manager [req-048ca513-2f81-498a-8f06-2f4799f0dcad req-7ea42d27-26cf-4a9d-9661-6447e9fa61e4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-changed-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:03:07 compute-0 nova_compute[257802]: 2025-10-02 12:03:07.594 2 DEBUG nova.compute.manager [req-048ca513-2f81-498a-8f06-2f4799f0dcad req-7ea42d27-26cf-4a9d-9661-6447e9fa61e4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Refreshing instance network info cache due to event network-changed-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:03:07 compute-0 nova_compute[257802]: 2025-10-02 12:03:07.595 2 DEBUG oslo_concurrency.lockutils [req-048ca513-2f81-498a-8f06-2f4799f0dcad req-7ea42d27-26cf-4a9d-9661-6447e9fa61e4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-6febae1b-d70f-43e6-8aba-1e913541fec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:03:07 compute-0 nova_compute[257802]: 2025-10-02 12:03:07.595 2 DEBUG oslo_concurrency.lockutils [req-048ca513-2f81-498a-8f06-2f4799f0dcad req-7ea42d27-26cf-4a9d-9661-6447e9fa61e4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-6febae1b-d70f-43e6-8aba-1e913541fec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:03:07 compute-0 nova_compute[257802]: 2025-10-02 12:03:07.595 2 DEBUG nova.network.neutron [req-048ca513-2f81-498a-8f06-2f4799f0dcad req-7ea42d27-26cf-4a9d-9661-6447e9fa61e4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Refreshing network info cache for port 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:03:07 compute-0 nova_compute[257802]: 2025-10-02 12:03:07.756 2 DEBUG nova.virt.libvirt.migration [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Oct 02 12:03:07 compute-0 nova_compute[257802]: 2025-10-02 12:03:07.757 2 DEBUG nova.virt.libvirt.migration [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Oct 02 12:03:07 compute-0 ceph-mon[73607]: pgmap v1084: 305 pgs: 305 active+clean; 200 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 10 KiB/s wr, 19 op/s
Oct 02 12:03:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1093490568' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:03:07 compute-0 nova_compute[257802]: 2025-10-02 12:03:07.884 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406587.8844717, 6febae1b-d70f-43e6-8aba-1e913541fec2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:03:07 compute-0 nova_compute[257802]: 2025-10-02 12:03:07.885 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] VM Paused (Lifecycle Event)
Oct 02 12:03:07 compute-0 nova_compute[257802]: 2025-10-02 12:03:07.912 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:03:07 compute-0 nova_compute[257802]: 2025-10-02 12:03:07.915 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:03:07 compute-0 nova_compute[257802]: 2025-10-02 12:03:07.940 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] During sync_power_state the instance has a pending task (migrating). Skip.
Oct 02 12:03:08 compute-0 kernel: tap4a9c114c-7c (unregistering): left promiscuous mode
Oct 02 12:03:08 compute-0 NetworkManager[44987]: <info>  [1759406588.2035] device (tap4a9c114c-7c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:03:08 compute-0 nova_compute[257802]: 2025-10-02 12:03:08.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:08 compute-0 ovn_controller[148183]: 2025-10-02T12:03:08Z|00114|binding|INFO|Releasing lport 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c from this chassis (sb_readonly=0)
Oct 02 12:03:08 compute-0 ovn_controller[148183]: 2025-10-02T12:03:08Z|00115|binding|INFO|Setting lport 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c down in Southbound
Oct 02 12:03:08 compute-0 ovn_controller[148183]: 2025-10-02T12:03:08Z|00116|binding|INFO|Removing iface tap4a9c114c-7c ovn-installed in OVS
Oct 02 12:03:08 compute-0 nova_compute[257802]: 2025-10-02 12:03:08.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:08.219 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:34:a2 10.100.0.11'], port_security=['fa:16:3e:87:34:a2 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'e4c8874a-a81b-4869-98e4-aca2e3f3bf40'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '6febae1b-d70f-43e6-8aba-1e913541fec2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3f2b3ac7d7504c9c96f0d4a67e0243c9', 'neutron:revision_number': '18', 'neutron:security_group_ids': 'c39f970f-1ead-4030-a775-b7ca9942094a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58eebdcc-6c12-4ff3-b6bc-0fe1fb3af6b6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=4a9c114c-7ca6-4f80-ab5d-d38765e4c15c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:03:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:08.220 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c in datapath 5bd66e63-9399-4ab1-bcda-a761f2c44b1d unbound from our chassis
Oct 02 12:03:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:08.221 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5bd66e63-9399-4ab1-bcda-a761f2c44b1d
Oct 02 12:03:08 compute-0 nova_compute[257802]: 2025-10-02 12:03:08.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:08.240 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f9f634-b28e-48a8-80fd-dbf9aa0fa036]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:03:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:03:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:08.269 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[a983d325-4d05-489c-97e7-1bc62482e719]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:03:08 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000010.scope: Deactivated successfully.
Oct 02 12:03:08 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000010.scope: Consumed 2.974s CPU time.
Oct 02 12:03:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:08.274 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[6d9cfe0b-382f-4d84-984c-c8809d2fe026]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:03:08 compute-0 systemd-machined[211836]: Machine qemu-11-instance-00000010 terminated.
Oct 02 12:03:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:08.298 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[78984096-85a5-4980-884f-80521c8e1d05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:03:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:08.314 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c5f4bbef-01c1-4e3a-99c3-f3a4f596b8c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bd66e63-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:10:0f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 35, 'tx_packets': 7, 'rx_bytes': 1994, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 35, 'tx_packets': 7, 'rx_bytes': 1994, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458802, 'reachable_time': 36366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 9, 'inoctets': 776, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 9, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 776, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 9, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273644, 'error': None, 'target': 'ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:03:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:08.328 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c212d1a8-fb07-4082-9bf7-eae0d5922284]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5bd66e63-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458812, 'tstamp': 458812}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273645, 'error': None, 'target': 'ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5bd66e63-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458815, 'tstamp': 458815}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273645, 'error': None, 'target': 'ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:03:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:08.330 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bd66e63-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:03:08 compute-0 nova_compute[257802]: 2025-10-02 12:03:08.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:08 compute-0 nova_compute[257802]: 2025-10-02 12:03:08.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:08.335 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bd66e63-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:03:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:08.336 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:03:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:08.336 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5bd66e63-90, col_values=(('external_ids', {'iface-id': '5a25c40a-77b7-400c-afc3-f6cb920420cb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:03:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:08.336 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:03:08 compute-0 virtqemud[257280]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volume-b3df5dc9-9a56-4922-8b65-4162deb6be93: No such file or directory
Oct 02 12:03:08 compute-0 virtqemud[257280]: Unable to get XATTR trusted.libvirt.security.ref_dac on volume-b3df5dc9-9a56-4922-8b65-4162deb6be93: No such file or directory
Oct 02 12:03:08 compute-0 nova_compute[257802]: 2025-10-02 12:03:08.431 2 DEBUG nova.virt.libvirt.guest [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688
Oct 02 12:03:08 compute-0 nova_compute[257802]: 2025-10-02 12:03:08.432 2 INFO nova.virt.libvirt.driver [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Migration operation has completed
Oct 02 12:03:08 compute-0 nova_compute[257802]: 2025-10-02 12:03:08.432 2 INFO nova.compute.manager [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] _post_live_migration() is started..
Oct 02 12:03:08 compute-0 nova_compute[257802]: 2025-10-02 12:03:08.434 2 DEBUG nova.virt.libvirt.driver [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279
Oct 02 12:03:08 compute-0 nova_compute[257802]: 2025-10-02 12:03:08.434 2 DEBUG nova.virt.libvirt.driver [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327
Oct 02 12:03:08 compute-0 nova_compute[257802]: 2025-10-02 12:03:08.435 2 DEBUG nova.virt.libvirt.driver [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630
Oct 02 12:03:08 compute-0 nova_compute[257802]: 2025-10-02 12:03:08.523 2 DEBUG nova.compute.manager [req-7e83b389-60df-4ee3-baf0-d8ba2ef6db5b req-cab3e8f9-47fb-4964-b2c7-a27ce6214890 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-vif-unplugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:03:08 compute-0 nova_compute[257802]: 2025-10-02 12:03:08.523 2 DEBUG oslo_concurrency.lockutils [req-7e83b389-60df-4ee3-baf0-d8ba2ef6db5b req-cab3e8f9-47fb-4964-b2c7-a27ce6214890 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:08 compute-0 nova_compute[257802]: 2025-10-02 12:03:08.524 2 DEBUG oslo_concurrency.lockutils [req-7e83b389-60df-4ee3-baf0-d8ba2ef6db5b req-cab3e8f9-47fb-4964-b2c7-a27ce6214890 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:08 compute-0 nova_compute[257802]: 2025-10-02 12:03:08.524 2 DEBUG oslo_concurrency.lockutils [req-7e83b389-60df-4ee3-baf0-d8ba2ef6db5b req-cab3e8f9-47fb-4964-b2c7-a27ce6214890 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:03:08 compute-0 nova_compute[257802]: 2025-10-02 12:03:08.524 2 DEBUG nova.compute.manager [req-7e83b389-60df-4ee3-baf0-d8ba2ef6db5b req-cab3e8f9-47fb-4964-b2c7-a27ce6214890 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] No waiting events found dispatching network-vif-unplugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:03:08 compute-0 nova_compute[257802]: 2025-10-02 12:03:08.524 2 DEBUG nova.compute.manager [req-7e83b389-60df-4ee3-baf0-d8ba2ef6db5b req-cab3e8f9-47fb-4964-b2c7-a27ce6214890 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-vif-unplugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:03:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 217 MiB data, 404 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 731 KiB/s wr, 39 op/s
Oct 02 12:03:09 compute-0 nova_compute[257802]: 2025-10-02 12:03:09.116 2 DEBUG nova.network.neutron [req-048ca513-2f81-498a-8f06-2f4799f0dcad req-7ea42d27-26cf-4a9d-9661-6447e9fa61e4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Updated VIF entry in instance network info cache for port 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:03:09 compute-0 nova_compute[257802]: 2025-10-02 12:03:09.117 2 DEBUG nova.network.neutron [req-048ca513-2f81-498a-8f06-2f4799f0dcad req-7ea42d27-26cf-4a9d-9661-6447e9fa61e4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Updating instance_info_cache with network_info: [{"id": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "address": "fa:16:3e:87:34:a2", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a9c114c-7c", "ovs_interfaceid": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true, "migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:03:09 compute-0 nova_compute[257802]: 2025-10-02 12:03:09.140 2 DEBUG oslo_concurrency.lockutils [req-048ca513-2f81-498a-8f06-2f4799f0dcad req-7ea42d27-26cf-4a9d-9661-6447e9fa61e4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-6febae1b-d70f-43e6-8aba-1e913541fec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:03:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:03:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:09.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:03:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:09.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:09 compute-0 ceph-mon[73607]: pgmap v1085: 305 pgs: 305 active+clean; 217 MiB data, 404 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 731 KiB/s wr, 39 op/s
Oct 02 12:03:09 compute-0 nova_compute[257802]: 2025-10-02 12:03:09.462 2 DEBUG nova.network.neutron [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Activated binding for port 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181
Oct 02 12:03:09 compute-0 nova_compute[257802]: 2025-10-02 12:03:09.462 2 DEBUG nova.compute.manager [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "address": "fa:16:3e:87:34:a2", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a9c114c-7c", "ovs_interfaceid": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326
Oct 02 12:03:09 compute-0 nova_compute[257802]: 2025-10-02 12:03:09.463 2 DEBUG nova.virt.libvirt.vif [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:02:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-304906255',display_name='tempest-LiveMigrationTest-server-304906255',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-304906255',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:02:31Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3f2b3ac7d7504c9c96f0d4a67e0243c9',ramdisk_id='',reservation_id='r-utis6e3p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-1876533760',owner_user_name='tempest-LiveMigrationTest-1876533760-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:02:58Z,user_data=None,user_id='efb31eeadee34403b1ab7a584f3616f7',uuid=6febae1b-d70f-43e6-8aba-1e913541fec2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "address": "fa:16:3e:87:34:a2", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a9c114c-7c", "ovs_interfaceid": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:03:09 compute-0 nova_compute[257802]: 2025-10-02 12:03:09.463 2 DEBUG nova.network.os_vif_util [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Converting VIF {"id": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "address": "fa:16:3e:87:34:a2", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a9c114c-7c", "ovs_interfaceid": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:03:09 compute-0 nova_compute[257802]: 2025-10-02 12:03:09.463 2 DEBUG nova.network.os_vif_util [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=4a9c114c-7ca6-4f80-ab5d-d38765e4c15c,network=Network(5bd66e63-9399-4ab1-bcda-a761f2c44b1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a9c114c-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:03:09 compute-0 nova_compute[257802]: 2025-10-02 12:03:09.464 2 DEBUG os_vif [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=4a9c114c-7ca6-4f80-ab5d-d38765e4c15c,network=Network(5bd66e63-9399-4ab1-bcda-a761f2c44b1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a9c114c-7c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:03:09 compute-0 nova_compute[257802]: 2025-10-02 12:03:09.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:09 compute-0 nova_compute[257802]: 2025-10-02 12:03:09.466 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a9c114c-7c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:03:09 compute-0 nova_compute[257802]: 2025-10-02 12:03:09.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:09 compute-0 nova_compute[257802]: 2025-10-02 12:03:09.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:09 compute-0 nova_compute[257802]: 2025-10-02 12:03:09.470 2 INFO os_vif [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=4a9c114c-7ca6-4f80-ab5d-d38765e4c15c,network=Network(5bd66e63-9399-4ab1-bcda-a761f2c44b1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a9c114c-7c')
Oct 02 12:03:09 compute-0 nova_compute[257802]: 2025-10-02 12:03:09.470 2 DEBUG oslo_concurrency.lockutils [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:09 compute-0 nova_compute[257802]: 2025-10-02 12:03:09.470 2 DEBUG oslo_concurrency.lockutils [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:09 compute-0 nova_compute[257802]: 2025-10-02 12:03:09.471 2 DEBUG oslo_concurrency.lockutils [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:03:09 compute-0 nova_compute[257802]: 2025-10-02 12:03:09.471 2 DEBUG nova.compute.manager [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Oct 02 12:03:09 compute-0 nova_compute[257802]: 2025-10-02 12:03:09.471 2 INFO nova.virt.libvirt.driver [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Deleting instance files /var/lib/nova/instances/6febae1b-d70f-43e6-8aba-1e913541fec2_del
Oct 02 12:03:09 compute-0 nova_compute[257802]: 2025-10-02 12:03:09.471 2 INFO nova.virt.libvirt.driver [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Deletion of /var/lib/nova/instances/6febae1b-d70f-43e6-8aba-1e913541fec2_del complete
Oct 02 12:03:09 compute-0 nova_compute[257802]: 2025-10-02 12:03:09.536 2 DEBUG nova.compute.manager [req-5840af6f-199a-4229-b2c7-c9f39fdcc766 req-58fe1328-f4d5-4e5d-89c4-41b51854ecff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-vif-unplugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:03:09 compute-0 nova_compute[257802]: 2025-10-02 12:03:09.536 2 DEBUG oslo_concurrency.lockutils [req-5840af6f-199a-4229-b2c7-c9f39fdcc766 req-58fe1328-f4d5-4e5d-89c4-41b51854ecff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:09 compute-0 nova_compute[257802]: 2025-10-02 12:03:09.537 2 DEBUG oslo_concurrency.lockutils [req-5840af6f-199a-4229-b2c7-c9f39fdcc766 req-58fe1328-f4d5-4e5d-89c4-41b51854ecff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:09 compute-0 nova_compute[257802]: 2025-10-02 12:03:09.537 2 DEBUG oslo_concurrency.lockutils [req-5840af6f-199a-4229-b2c7-c9f39fdcc766 req-58fe1328-f4d5-4e5d-89c4-41b51854ecff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:03:09 compute-0 nova_compute[257802]: 2025-10-02 12:03:09.537 2 DEBUG nova.compute.manager [req-5840af6f-199a-4229-b2c7-c9f39fdcc766 req-58fe1328-f4d5-4e5d-89c4-41b51854ecff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] No waiting events found dispatching network-vif-unplugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:03:09 compute-0 nova_compute[257802]: 2025-10-02 12:03:09.537 2 DEBUG nova.compute.manager [req-5840af6f-199a-4229-b2c7-c9f39fdcc766 req-58fe1328-f4d5-4e5d-89c4-41b51854ecff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-vif-unplugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:03:10 compute-0 sudo[273659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:03:10 compute-0 sudo[273659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1088330370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:03:10 compute-0 sudo[273659]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:10 compute-0 sudo[273685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:03:10 compute-0 sudo[273685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:10 compute-0 sudo[273685]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:10 compute-0 nova_compute[257802]: 2025-10-02 12:03:10.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:10 compute-0 nova_compute[257802]: 2025-10-02 12:03:10.674 2 DEBUG nova.compute.manager [req-a0169541-85cd-4b1b-be07-04df2366afc7 req-f5051526-c598-425a-a284-e39003b82786 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:03:10 compute-0 nova_compute[257802]: 2025-10-02 12:03:10.674 2 DEBUG oslo_concurrency.lockutils [req-a0169541-85cd-4b1b-be07-04df2366afc7 req-f5051526-c598-425a-a284-e39003b82786 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:10 compute-0 nova_compute[257802]: 2025-10-02 12:03:10.674 2 DEBUG oslo_concurrency.lockutils [req-a0169541-85cd-4b1b-be07-04df2366afc7 req-f5051526-c598-425a-a284-e39003b82786 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:10 compute-0 nova_compute[257802]: 2025-10-02 12:03:10.674 2 DEBUG oslo_concurrency.lockutils [req-a0169541-85cd-4b1b-be07-04df2366afc7 req-f5051526-c598-425a-a284-e39003b82786 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:03:10 compute-0 nova_compute[257802]: 2025-10-02 12:03:10.674 2 DEBUG nova.compute.manager [req-a0169541-85cd-4b1b-be07-04df2366afc7 req-f5051526-c598-425a-a284-e39003b82786 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] No waiting events found dispatching network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:03:10 compute-0 nova_compute[257802]: 2025-10-02 12:03:10.675 2 WARNING nova.compute.manager [req-a0169541-85cd-4b1b-be07-04df2366afc7 req-f5051526-c598-425a-a284-e39003b82786 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received unexpected event network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c for instance with vm_state active and task_state migrating.
Oct 02 12:03:10 compute-0 nova_compute[257802]: 2025-10-02 12:03:10.675 2 DEBUG nova.compute.manager [req-a0169541-85cd-4b1b-be07-04df2366afc7 req-f5051526-c598-425a-a284-e39003b82786 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:03:10 compute-0 nova_compute[257802]: 2025-10-02 12:03:10.675 2 DEBUG oslo_concurrency.lockutils [req-a0169541-85cd-4b1b-be07-04df2366afc7 req-f5051526-c598-425a-a284-e39003b82786 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:10 compute-0 nova_compute[257802]: 2025-10-02 12:03:10.675 2 DEBUG oslo_concurrency.lockutils [req-a0169541-85cd-4b1b-be07-04df2366afc7 req-f5051526-c598-425a-a284-e39003b82786 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:10 compute-0 nova_compute[257802]: 2025-10-02 12:03:10.675 2 DEBUG oslo_concurrency.lockutils [req-a0169541-85cd-4b1b-be07-04df2366afc7 req-f5051526-c598-425a-a284-e39003b82786 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:03:10 compute-0 nova_compute[257802]: 2025-10-02 12:03:10.676 2 DEBUG nova.compute.manager [req-a0169541-85cd-4b1b-be07-04df2366afc7 req-f5051526-c598-425a-a284-e39003b82786 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] No waiting events found dispatching network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:03:10 compute-0 nova_compute[257802]: 2025-10-02 12:03:10.676 2 WARNING nova.compute.manager [req-a0169541-85cd-4b1b-be07-04df2366afc7 req-f5051526-c598-425a-a284-e39003b82786 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received unexpected event network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c for instance with vm_state active and task_state migrating.
Oct 02 12:03:10 compute-0 nova_compute[257802]: 2025-10-02 12:03:10.676 2 DEBUG nova.compute.manager [req-a0169541-85cd-4b1b-be07-04df2366afc7 req-f5051526-c598-425a-a284-e39003b82786 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:03:10 compute-0 nova_compute[257802]: 2025-10-02 12:03:10.676 2 DEBUG oslo_concurrency.lockutils [req-a0169541-85cd-4b1b-be07-04df2366afc7 req-f5051526-c598-425a-a284-e39003b82786 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:10 compute-0 nova_compute[257802]: 2025-10-02 12:03:10.676 2 DEBUG oslo_concurrency.lockutils [req-a0169541-85cd-4b1b-be07-04df2366afc7 req-f5051526-c598-425a-a284-e39003b82786 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:10 compute-0 nova_compute[257802]: 2025-10-02 12:03:10.677 2 DEBUG oslo_concurrency.lockutils [req-a0169541-85cd-4b1b-be07-04df2366afc7 req-f5051526-c598-425a-a284-e39003b82786 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:03:10 compute-0 nova_compute[257802]: 2025-10-02 12:03:10.677 2 DEBUG nova.compute.manager [req-a0169541-85cd-4b1b-be07-04df2366afc7 req-f5051526-c598-425a-a284-e39003b82786 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] No waiting events found dispatching network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:03:10 compute-0 nova_compute[257802]: 2025-10-02 12:03:10.677 2 WARNING nova.compute.manager [req-a0169541-85cd-4b1b-be07-04df2366afc7 req-f5051526-c598-425a-a284-e39003b82786 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received unexpected event network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c for instance with vm_state active and task_state migrating.
Oct 02 12:03:10 compute-0 nova_compute[257802]: 2025-10-02 12:03:10.677 2 DEBUG nova.compute.manager [req-a0169541-85cd-4b1b-be07-04df2366afc7 req-f5051526-c598-425a-a284-e39003b82786 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:03:10 compute-0 nova_compute[257802]: 2025-10-02 12:03:10.677 2 DEBUG oslo_concurrency.lockutils [req-a0169541-85cd-4b1b-be07-04df2366afc7 req-f5051526-c598-425a-a284-e39003b82786 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:10 compute-0 nova_compute[257802]: 2025-10-02 12:03:10.677 2 DEBUG oslo_concurrency.lockutils [req-a0169541-85cd-4b1b-be07-04df2366afc7 req-f5051526-c598-425a-a284-e39003b82786 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:10 compute-0 nova_compute[257802]: 2025-10-02 12:03:10.678 2 DEBUG oslo_concurrency.lockutils [req-a0169541-85cd-4b1b-be07-04df2366afc7 req-f5051526-c598-425a-a284-e39003b82786 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:03:10 compute-0 nova_compute[257802]: 2025-10-02 12:03:10.678 2 DEBUG nova.compute.manager [req-a0169541-85cd-4b1b-be07-04df2366afc7 req-f5051526-c598-425a-a284-e39003b82786 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] No waiting events found dispatching network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:03:10 compute-0 nova_compute[257802]: 2025-10-02 12:03:10.678 2 WARNING nova.compute.manager [req-a0169541-85cd-4b1b-be07-04df2366afc7 req-f5051526-c598-425a-a284-e39003b82786 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received unexpected event network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c for instance with vm_state active and task_state migrating.
Oct 02 12:03:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 258 MiB data, 417 MiB used, 21 GiB / 21 GiB avail; 49 KiB/s rd, 2.4 MiB/s wr, 69 op/s
Oct 02 12:03:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:11.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:11.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1729442588' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:03:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/872445967' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:03:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:03:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:03:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:03:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:03:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:03:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:03:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 258 MiB data, 417 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 2.4 MiB/s wr, 58 op/s
Oct 02 12:03:12 compute-0 ceph-mon[73607]: pgmap v1086: 305 pgs: 305 active+clean; 258 MiB data, 417 MiB used, 21 GiB / 21 GiB avail; 49 KiB/s rd, 2.4 MiB/s wr, 69 op/s
Oct 02 12:03:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:13.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:03:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:03:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:13.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:03:14 compute-0 ceph-mon[73607]: pgmap v1087: 305 pgs: 305 active+clean; 258 MiB data, 417 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 2.4 MiB/s wr, 58 op/s
Oct 02 12:03:14 compute-0 nova_compute[257802]: 2025-10-02 12:03:14.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:14 compute-0 nova_compute[257802]: 2025-10-02 12:03:14.937 2 DEBUG oslo_concurrency.lockutils [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Acquiring lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:14 compute-0 nova_compute[257802]: 2025-10-02 12:03:14.938 2 DEBUG oslo_concurrency.lockutils [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:14 compute-0 nova_compute[257802]: 2025-10-02 12:03:14.938 2 DEBUG oslo_concurrency.lockutils [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:03:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 286 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 114 op/s
Oct 02 12:03:14 compute-0 nova_compute[257802]: 2025-10-02 12:03:14.961 2 DEBUG oslo_concurrency.lockutils [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:14 compute-0 nova_compute[257802]: 2025-10-02 12:03:14.962 2 DEBUG oslo_concurrency.lockutils [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:14 compute-0 nova_compute[257802]: 2025-10-02 12:03:14.962 2 DEBUG oslo_concurrency.lockutils [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:03:14 compute-0 nova_compute[257802]: 2025-10-02 12:03:14.962 2 DEBUG nova.compute.resource_tracker [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:03:14 compute-0 nova_compute[257802]: 2025-10-02 12:03:14.962 2 DEBUG oslo_concurrency.processutils [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:03:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:03:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:15.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:03:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:15.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:03:15 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1330245305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:03:15 compute-0 nova_compute[257802]: 2025-10-02 12:03:15.375 2 DEBUG oslo_concurrency.processutils [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:03:15 compute-0 nova_compute[257802]: 2025-10-02 12:03:15.446 2 DEBUG nova.virt.libvirt.driver [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:03:15 compute-0 nova_compute[257802]: 2025-10-02 12:03:15.446 2 DEBUG nova.virt.libvirt.driver [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:03:15 compute-0 nova_compute[257802]: 2025-10-02 12:03:15.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1330245305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:03:15 compute-0 nova_compute[257802]: 2025-10-02 12:03:15.640 2 WARNING nova.virt.libvirt.driver [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:03:15 compute-0 nova_compute[257802]: 2025-10-02 12:03:15.641 2 DEBUG nova.compute.resource_tracker [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4633MB free_disk=20.914722442626953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:03:15 compute-0 nova_compute[257802]: 2025-10-02 12:03:15.642 2 DEBUG oslo_concurrency.lockutils [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:15 compute-0 nova_compute[257802]: 2025-10-02 12:03:15.642 2 DEBUG oslo_concurrency.lockutils [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:15 compute-0 nova_compute[257802]: 2025-10-02 12:03:15.686 2 DEBUG nova.compute.resource_tracker [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Migration for instance 6febae1b-d70f-43e6-8aba-1e913541fec2 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Oct 02 12:03:15 compute-0 nova_compute[257802]: 2025-10-02 12:03:15.714 2 DEBUG nova.compute.resource_tracker [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Oct 02 12:03:15 compute-0 nova_compute[257802]: 2025-10-02 12:03:15.751 2 DEBUG nova.compute.resource_tracker [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Instance e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:03:15 compute-0 nova_compute[257802]: 2025-10-02 12:03:15.751 2 DEBUG nova.compute.resource_tracker [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Migration 4d38ffdd-ba38-478b-bb7b-9503d4fbb581 is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Oct 02 12:03:15 compute-0 nova_compute[257802]: 2025-10-02 12:03:15.751 2 DEBUG nova.compute.resource_tracker [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:03:15 compute-0 nova_compute[257802]: 2025-10-02 12:03:15.752 2 DEBUG nova.compute.resource_tracker [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:03:15 compute-0 nova_compute[257802]: 2025-10-02 12:03:15.811 2 DEBUG oslo_concurrency.processutils [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:03:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:03:16 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1996132327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:03:16 compute-0 nova_compute[257802]: 2025-10-02 12:03:16.232 2 DEBUG oslo_concurrency.processutils [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:03:16 compute-0 nova_compute[257802]: 2025-10-02 12:03:16.238 2 DEBUG nova.compute.provider_tree [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:03:16 compute-0 nova_compute[257802]: 2025-10-02 12:03:16.263 2 DEBUG nova.scheduler.client.report [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:03:16 compute-0 nova_compute[257802]: 2025-10-02 12:03:16.312 2 DEBUG nova.compute.resource_tracker [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:03:16 compute-0 nova_compute[257802]: 2025-10-02 12:03:16.312 2 DEBUG oslo_concurrency.lockutils [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:03:16 compute-0 nova_compute[257802]: 2025-10-02 12:03:16.317 2 INFO nova.compute.manager [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Oct 02 12:03:16 compute-0 nova_compute[257802]: 2025-10-02 12:03:16.422 2 INFO nova.scheduler.client.report [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Deleted allocation for migration 4d38ffdd-ba38-478b-bb7b-9503d4fbb581
Oct 02 12:03:16 compute-0 nova_compute[257802]: 2025-10-02 12:03:16.423 2 DEBUG nova.virt.libvirt.driver [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Oct 02 12:03:16 compute-0 ceph-mon[73607]: pgmap v1088: 305 pgs: 305 active+clean; 286 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 114 op/s
Oct 02 12:03:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3184905191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:03:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1996132327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:03:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 264 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 187 op/s
Oct 02 12:03:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:17.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:03:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:17.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:03:18 compute-0 ceph-mon[73607]: pgmap v1089: 305 pgs: 305 active+clean; 264 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 187 op/s
Oct 02 12:03:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:03:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 246 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 230 op/s
Oct 02 12:03:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:19.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:19.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:19 compute-0 nova_compute[257802]: 2025-10-02 12:03:19.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:20 compute-0 ceph-mon[73607]: pgmap v1090: 305 pgs: 305 active+clean; 246 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 230 op/s
Oct 02 12:03:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4209692166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:03:20 compute-0 nova_compute[257802]: 2025-10-02 12:03:20.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 255 MiB data, 418 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.9 MiB/s wr, 231 op/s
Oct 02 12:03:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:21.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:21.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2782296448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:03:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:03:21 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/319575875' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:03:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:03:21 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/319575875' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:03:22 compute-0 ceph-mon[73607]: pgmap v1091: 305 pgs: 305 active+clean; 255 MiB data, 418 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.9 MiB/s wr, 231 op/s
Oct 02 12:03:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/125296754' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:03:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/319575875' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:03:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/319575875' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:03:22 compute-0 nova_compute[257802]: 2025-10-02 12:03:22.795 2 DEBUG oslo_concurrency.lockutils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Acquiring lock "46dc872c-5412-497c-a7e7-31a99fa93f75" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:22 compute-0 nova_compute[257802]: 2025-10-02 12:03:22.795 2 DEBUG oslo_concurrency.lockutils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Lock "46dc872c-5412-497c-a7e7-31a99fa93f75" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:22 compute-0 nova_compute[257802]: 2025-10-02 12:03:22.854 2 DEBUG nova.compute.manager [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:03:22 compute-0 nova_compute[257802]: 2025-10-02 12:03:22.858 2 DEBUG oslo_concurrency.lockutils [None req-d622c355-8acb-4c63-965f-78612a0df1f9 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Acquiring lock "e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:22 compute-0 nova_compute[257802]: 2025-10-02 12:03:22.858 2 DEBUG oslo_concurrency.lockutils [None req-d622c355-8acb-4c63-965f-78612a0df1f9 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Lock "e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:22 compute-0 nova_compute[257802]: 2025-10-02 12:03:22.858 2 DEBUG oslo_concurrency.lockutils [None req-d622c355-8acb-4c63-965f-78612a0df1f9 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Acquiring lock "e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:22 compute-0 nova_compute[257802]: 2025-10-02 12:03:22.859 2 DEBUG oslo_concurrency.lockutils [None req-d622c355-8acb-4c63-965f-78612a0df1f9 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Lock "e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:22 compute-0 nova_compute[257802]: 2025-10-02 12:03:22.859 2 DEBUG oslo_concurrency.lockutils [None req-d622c355-8acb-4c63-965f-78612a0df1f9 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Lock "e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:03:22 compute-0 nova_compute[257802]: 2025-10-02 12:03:22.860 2 INFO nova.compute.manager [None req-d622c355-8acb-4c63-965f-78612a0df1f9 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Terminating instance
Oct 02 12:03:22 compute-0 nova_compute[257802]: 2025-10-02 12:03:22.861 2 DEBUG nova.compute.manager [None req-d622c355-8acb-4c63-965f-78612a0df1f9 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:03:22 compute-0 podman[273762]: 2025-10-02 12:03:22.917031314 +0000 UTC m=+0.056445533 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:03:22 compute-0 kernel: tapc3f370c5-77 (unregistering): left promiscuous mode
Oct 02 12:03:22 compute-0 NetworkManager[44987]: <info>  [1759406602.9219] device (tapc3f370c5-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:03:22 compute-0 ovn_controller[148183]: 2025-10-02T12:03:22Z|00117|binding|INFO|Releasing lport c3f370c5-770e-48df-b015-b39eb427f259 from this chassis (sb_readonly=0)
Oct 02 12:03:22 compute-0 ovn_controller[148183]: 2025-10-02T12:03:22Z|00118|binding|INFO|Setting lport c3f370c5-770e-48df-b015-b39eb427f259 down in Southbound
Oct 02 12:03:22 compute-0 ovn_controller[148183]: 2025-10-02T12:03:22Z|00119|binding|INFO|Releasing lport 0fe5d3df-efa4-47f1-a32b-45858e68a8b9 from this chassis (sb_readonly=0)
Oct 02 12:03:22 compute-0 ovn_controller[148183]: 2025-10-02T12:03:22Z|00120|binding|INFO|Setting lport 0fe5d3df-efa4-47f1-a32b-45858e68a8b9 down in Southbound
Oct 02 12:03:22 compute-0 ovn_controller[148183]: 2025-10-02T12:03:22Z|00121|binding|INFO|Removing iface tapc3f370c5-77 ovn-installed in OVS
Oct 02 12:03:22 compute-0 nova_compute[257802]: 2025-10-02 12:03:22.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:22 compute-0 nova_compute[257802]: 2025-10-02 12:03:22.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:22.947 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:ff:42 10.100.0.6'], port_security=['fa:16:3e:02:ff:42 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-96076607', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-96076607', 'neutron:project_id': '3f2b3ac7d7504c9c96f0d4a67e0243c9', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'c39f970f-1ead-4030-a775-b7ca9942094a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58eebdcc-6c12-4ff3-b6bc-0fe1fb3af6b6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=c3f370c5-770e-48df-b015-b39eb427f259) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:03:22 compute-0 ovn_controller[148183]: 2025-10-02T12:03:22Z|00122|binding|INFO|Releasing lport 5a25c40a-77b7-400c-afc3-f6cb920420cb from this chassis (sb_readonly=0)
Oct 02 12:03:22 compute-0 ovn_controller[148183]: 2025-10-02T12:03:22Z|00123|binding|INFO|Releasing lport 2d320c71-91bf-44ba-b753-c9807cdc7fdb from this chassis (sb_readonly=0)
Oct 02 12:03:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:22.949 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:98:01 19.80.0.134'], port_security=['fa:16:3e:88:98:01 19.80.0.134'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['c3f370c5-770e-48df-b015-b39eb427f259'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-2131619737', 'neutron:cidrs': '19.80.0.134/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-89bcb068-8337-43f0-9d5d-f27225e9a30d', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-2131619737', 'neutron:project_id': '3f2b3ac7d7504c9c96f0d4a67e0243c9', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'c39f970f-1ead-4030-a775-b7ca9942094a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=b81908e7-ad92-4b22-83eb-83c6667bf780, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0fe5d3df-efa4-47f1-a32b-45858e68a8b9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:03:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 255 MiB data, 418 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.3 MiB/s wr, 199 op/s
Oct 02 12:03:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:22.950 158261 INFO neutron.agent.ovn.metadata.agent [-] Port c3f370c5-770e-48df-b015-b39eb427f259 in datapath 5bd66e63-9399-4ab1-bcda-a761f2c44b1d unbound from our chassis
Oct 02 12:03:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:22.952 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5bd66e63-9399-4ab1-bcda-a761f2c44b1d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:03:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:22.954 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[256f835f-1a76-4a68-a5fd-3f54e999620c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:03:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:22.954 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d namespace which is not needed anymore
Oct 02 12:03:22 compute-0 nova_compute[257802]: 2025-10-02 12:03:22.963 2 DEBUG oslo_concurrency.lockutils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:22 compute-0 nova_compute[257802]: 2025-10-02 12:03:22.963 2 DEBUG oslo_concurrency.lockutils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:22 compute-0 nova_compute[257802]: 2025-10-02 12:03:22.972 2 DEBUG nova.virt.hardware [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:03:22 compute-0 nova_compute[257802]: 2025-10-02 12:03:22.972 2 INFO nova.compute.claims [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:03:22 compute-0 nova_compute[257802]: 2025-10-02 12:03:22.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:23 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Oct 02 12:03:23 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d0000000c.scope: Consumed 7.899s CPU time.
Oct 02 12:03:23 compute-0 systemd-machined[211836]: Machine qemu-9-instance-0000000c terminated.
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:23 compute-0 neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d[271147]: [NOTICE]   (271169) : haproxy version is 2.8.14-c23fe91
Oct 02 12:03:23 compute-0 neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d[271147]: [NOTICE]   (271169) : path to executable is /usr/sbin/haproxy
Oct 02 12:03:23 compute-0 neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d[271147]: [WARNING]  (271169) : Exiting Master process...
Oct 02 12:03:23 compute-0 neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d[271147]: [WARNING]  (271169) : Exiting Master process...
Oct 02 12:03:23 compute-0 neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d[271147]: [ALERT]    (271169) : Current worker (271171) exited with code 143 (Terminated)
Oct 02 12:03:23 compute-0 neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d[271147]: [WARNING]  (271169) : All workers exited. Exiting... (0)
Oct 02 12:03:23 compute-0 systemd[1]: libpod-6ddcf3104f6773662d1da48ac96aec0bb15fce8fc8332ee5071d80d4dc852846.scope: Deactivated successfully.
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.098 2 INFO nova.virt.libvirt.driver [-] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Instance destroyed successfully.
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.100 2 DEBUG nova.objects.instance [None req-d622c355-8acb-4c63-965f-78612a0df1f9 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Lazy-loading 'resources' on Instance uuid e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:03:23 compute-0 podman[273803]: 2025-10-02 12:03:23.101981286 +0000 UTC m=+0.054573327 container died 6ddcf3104f6773662d1da48ac96aec0bb15fce8fc8332ee5071d80d4dc852846 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.102 2 DEBUG oslo_concurrency.processutils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:03:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6ddcf3104f6773662d1da48ac96aec0bb15fce8fc8332ee5071d80d4dc852846-userdata-shm.mount: Deactivated successfully.
Oct 02 12:03:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-105e7c63d274aaa119458aa4bef051320da796ffbc8fb11d8def9bd7834e54fd-merged.mount: Deactivated successfully.
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.153 2 DEBUG nova.virt.libvirt.vif [None req-d622c355-8acb-4c63-965f-78612a0df1f9 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:01:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1571369270',display_name='tempest-LiveMigrationTest-server-1571369270',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1571369270',id=12,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:01:47Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3f2b3ac7d7504c9c96f0d4a67e0243c9',ramdisk_id='',reservation_id='r-ton1smkr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-1876533760',owner_user_name='tempest-LiveMigrationTest-1876533760-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:02:07Z,user_data=None,user_id='efb31eeadee34403b1ab7a584f3616f7',uuid=e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c3f370c5-770e-48df-b015-b39eb427f259", "address": "fa:16:3e:02:ff:42", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3f370c5-77", "ovs_interfaceid": "c3f370c5-770e-48df-b015-b39eb427f259", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.154 2 DEBUG nova.network.os_vif_util [None req-d622c355-8acb-4c63-965f-78612a0df1f9 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Converting VIF {"id": "c3f370c5-770e-48df-b015-b39eb427f259", "address": "fa:16:3e:02:ff:42", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3f370c5-77", "ovs_interfaceid": "c3f370c5-770e-48df-b015-b39eb427f259", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.155 2 DEBUG nova.network.os_vif_util [None req-d622c355-8acb-4c63-965f-78612a0df1f9 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:02:ff:42,bridge_name='br-int',has_traffic_filtering=True,id=c3f370c5-770e-48df-b015-b39eb427f259,network=Network(5bd66e63-9399-4ab1-bcda-a761f2c44b1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc3f370c5-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.155 2 DEBUG os_vif [None req-d622c355-8acb-4c63-965f-78612a0df1f9 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:ff:42,bridge_name='br-int',has_traffic_filtering=True,id=c3f370c5-770e-48df-b015-b39eb427f259,network=Network(5bd66e63-9399-4ab1-bcda-a761f2c44b1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc3f370c5-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.157 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc3f370c5-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.164 2 INFO os_vif [None req-d622c355-8acb-4c63-965f-78612a0df1f9 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:ff:42,bridge_name='br-int',has_traffic_filtering=True,id=c3f370c5-770e-48df-b015-b39eb427f259,network=Network(5bd66e63-9399-4ab1-bcda-a761f2c44b1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc3f370c5-77')
Oct 02 12:03:23 compute-0 podman[273803]: 2025-10-02 12:03:23.165088203 +0000 UTC m=+0.117680244 container cleanup 6ddcf3104f6773662d1da48ac96aec0bb15fce8fc8332ee5071d80d4dc852846 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:03:23 compute-0 systemd[1]: libpod-conmon-6ddcf3104f6773662d1da48ac96aec0bb15fce8fc8332ee5071d80d4dc852846.scope: Deactivated successfully.
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.195 2 DEBUG nova.compute.manager [req-a643dbde-8b88-46fd-81be-e2b4f863bc15 req-e465b9ac-85cd-4954-ae99-4e220860f8cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Received event network-vif-unplugged-c3f370c5-770e-48df-b015-b39eb427f259 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.196 2 DEBUG oslo_concurrency.lockutils [req-a643dbde-8b88-46fd-81be-e2b4f863bc15 req-e465b9ac-85cd-4954-ae99-4e220860f8cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.196 2 DEBUG oslo_concurrency.lockutils [req-a643dbde-8b88-46fd-81be-e2b4f863bc15 req-e465b9ac-85cd-4954-ae99-4e220860f8cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.196 2 DEBUG oslo_concurrency.lockutils [req-a643dbde-8b88-46fd-81be-e2b4f863bc15 req-e465b9ac-85cd-4954-ae99-4e220860f8cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:03:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:03:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:23.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.197 2 DEBUG nova.compute.manager [req-a643dbde-8b88-46fd-81be-e2b4f863bc15 req-e465b9ac-85cd-4954-ae99-4e220860f8cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] No waiting events found dispatching network-vif-unplugged-c3f370c5-770e-48df-b015-b39eb427f259 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.197 2 DEBUG nova.compute.manager [req-a643dbde-8b88-46fd-81be-e2b4f863bc15 req-e465b9ac-85cd-4954-ae99-4e220860f8cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Received event network-vif-unplugged-c3f370c5-770e-48df-b015-b39eb427f259 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:03:23 compute-0 podman[273849]: 2025-10-02 12:03:23.249106776 +0000 UTC m=+0.059092540 container remove 6ddcf3104f6773662d1da48ac96aec0bb15fce8fc8332ee5071d80d4dc852846 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:03:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:23.257 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f98459e6-390f-46ed-aa62-d26b17e6e424]: (4, ('Thu Oct  2 12:03:23 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d (6ddcf3104f6773662d1da48ac96aec0bb15fce8fc8332ee5071d80d4dc852846)\n6ddcf3104f6773662d1da48ac96aec0bb15fce8fc8332ee5071d80d4dc852846\nThu Oct  2 12:03:23 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d (6ddcf3104f6773662d1da48ac96aec0bb15fce8fc8332ee5071d80d4dc852846)\n6ddcf3104f6773662d1da48ac96aec0bb15fce8fc8332ee5071d80d4dc852846\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:03:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:23.259 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[80cb9815-4271-4d7d-bc3c-4202eef662ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:03:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:23.260 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bd66e63-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:23 compute-0 kernel: tap5bd66e63-90: left promiscuous mode
Oct 02 12:03:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:23.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:23.281 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[bf45b601-cd81-4889-a245-a6b8759e4768]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:03:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:23.307 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3ed20e4a-4a0d-4e4d-95bd-d5f14fa8995c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:03:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:23.309 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7ca0cf37-ef50-493b-84d4-6afe9154af33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:03:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:23.327 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b5daade8-1a24-4a01-9818-734d44eb2425]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458795, 'reachable_time': 28178, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273893, 'error': None, 'target': 'ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:03:23 compute-0 systemd[1]: run-netns-ovnmeta\x2d5bd66e63\x2d9399\x2d4ab1\x2dbcda\x2da761f2c44b1d.mount: Deactivated successfully.
Oct 02 12:03:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:23.333 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:03:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:23.333 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[b8f69f9c-1bf4-401e-afaf-b6e22b077534]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:03:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:23.335 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 0fe5d3df-efa4-47f1-a32b-45858e68a8b9 in datapath 89bcb068-8337-43f0-9d5d-f27225e9a30d unbound from our chassis
Oct 02 12:03:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:23.336 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 89bcb068-8337-43f0-9d5d-f27225e9a30d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:03:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:23.337 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[33f25df3-9b2f-4955-84ed-04421b2ec4cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:03:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:23.337 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-89bcb068-8337-43f0-9d5d-f27225e9a30d namespace which is not needed anymore
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.432 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406588.4310393, 6febae1b-d70f-43e6-8aba-1e913541fec2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.433 2 INFO nova.compute.manager [-] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] VM Stopped (Lifecycle Event)
Oct 02 12:03:23 compute-0 neutron-haproxy-ovnmeta-89bcb068-8337-43f0-9d5d-f27225e9a30d[271240]: [NOTICE]   (271244) : haproxy version is 2.8.14-c23fe91
Oct 02 12:03:23 compute-0 neutron-haproxy-ovnmeta-89bcb068-8337-43f0-9d5d-f27225e9a30d[271240]: [NOTICE]   (271244) : path to executable is /usr/sbin/haproxy
Oct 02 12:03:23 compute-0 neutron-haproxy-ovnmeta-89bcb068-8337-43f0-9d5d-f27225e9a30d[271240]: [WARNING]  (271244) : Exiting Master process...
Oct 02 12:03:23 compute-0 neutron-haproxy-ovnmeta-89bcb068-8337-43f0-9d5d-f27225e9a30d[271240]: [ALERT]    (271244) : Current worker (271246) exited with code 143 (Terminated)
Oct 02 12:03:23 compute-0 neutron-haproxy-ovnmeta-89bcb068-8337-43f0-9d5d-f27225e9a30d[271240]: [WARNING]  (271244) : All workers exited. Exiting... (0)
Oct 02 12:03:23 compute-0 systemd[1]: libpod-5b87dcbf53394ce2ceee9199a84e2a5908b0dd998e8a3b2b6109065bebc6cbcb.scope: Deactivated successfully.
Oct 02 12:03:23 compute-0 podman[273910]: 2025-10-02 12:03:23.488846049 +0000 UTC m=+0.053913122 container died 5b87dcbf53394ce2ceee9199a84e2a5908b0dd998e8a3b2b6109065bebc6cbcb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-89bcb068-8337-43f0-9d5d-f27225e9a30d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 12:03:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5b87dcbf53394ce2ceee9199a84e2a5908b0dd998e8a3b2b6109065bebc6cbcb-userdata-shm.mount: Deactivated successfully.
Oct 02 12:03:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-e108c60ce2193e14f4e80aaa64d913573c729dd603977becea42d2467361c96d-merged.mount: Deactivated successfully.
Oct 02 12:03:23 compute-0 podman[273910]: 2025-10-02 12:03:23.550565891 +0000 UTC m=+0.115632964 container cleanup 5b87dcbf53394ce2ceee9199a84e2a5908b0dd998e8a3b2b6109065bebc6cbcb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-89bcb068-8337-43f0-9d5d-f27225e9a30d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 12:03:23 compute-0 systemd[1]: libpod-conmon-5b87dcbf53394ce2ceee9199a84e2a5908b0dd998e8a3b2b6109065bebc6cbcb.scope: Deactivated successfully.
Oct 02 12:03:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:03:23 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3201983574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.571 2 DEBUG nova.compute.manager [None req-689521a7-c146-473b-907b-84d15f65b823 - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.588 2 DEBUG oslo_concurrency.processutils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.594 2 DEBUG nova.compute.provider_tree [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:03:23 compute-0 podman[273939]: 2025-10-02 12:03:23.629786454 +0000 UTC m=+0.057379476 container remove 5b87dcbf53394ce2ceee9199a84e2a5908b0dd998e8a3b2b6109065bebc6cbcb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-89bcb068-8337-43f0-9d5d-f27225e9a30d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:03:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:23.636 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[eb116cab-5fe5-48c0-9b82-e6f409b6753f]: (4, ('Thu Oct  2 12:03:23 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-89bcb068-8337-43f0-9d5d-f27225e9a30d (5b87dcbf53394ce2ceee9199a84e2a5908b0dd998e8a3b2b6109065bebc6cbcb)\n5b87dcbf53394ce2ceee9199a84e2a5908b0dd998e8a3b2b6109065bebc6cbcb\nThu Oct  2 12:03:23 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-89bcb068-8337-43f0-9d5d-f27225e9a30d (5b87dcbf53394ce2ceee9199a84e2a5908b0dd998e8a3b2b6109065bebc6cbcb)\n5b87dcbf53394ce2ceee9199a84e2a5908b0dd998e8a3b2b6109065bebc6cbcb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:03:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:23.637 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[330de44c-9e3f-4abf-b34b-bfee32a5ab43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:03:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:23.638 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap89bcb068-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:23 compute-0 kernel: tap89bcb068-80: left promiscuous mode
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.656 2 INFO nova.virt.libvirt.driver [None req-d622c355-8acb-4c63-965f-78612a0df1f9 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Deleting instance files /var/lib/nova/instances/e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7_del
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.657 2 INFO nova.virt.libvirt.driver [None req-d622c355-8acb-4c63-965f-78612a0df1f9 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Deletion of /var/lib/nova/instances/e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7_del complete
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:23.660 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2eb1e19f-fb43-4711-9ea8-590fbfdb345d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:03:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:23.680 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[12ca137e-76a7-4111-992a-15f3ee89f82d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:03:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:23.681 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6c4b7f50-309b-4ef3-81d3-538c1b018543]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:03:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:23.695 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[86e38486-ebc9-4644-905f-3a32a63230e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458886, 'reachable_time': 27298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273954, 'error': None, 'target': 'ovnmeta-89bcb068-8337-43f0-9d5d-f27225e9a30d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:03:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:23.697 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-89bcb068-8337-43f0-9d5d-f27225e9a30d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:03:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:23.697 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[fb5e15db-0d08-45dc-bc93-f52e939714cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:03:23 compute-0 nova_compute[257802]: 2025-10-02 12:03:23.778 2 DEBUG nova.scheduler.client.report [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:03:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d89bcb068\x2d8337\x2d43f0\x2d9d5d\x2df27225e9a30d.mount: Deactivated successfully.
Oct 02 12:03:24 compute-0 nova_compute[257802]: 2025-10-02 12:03:24.181 2 INFO nova.compute.manager [None req-d622c355-8acb-4c63-965f-78612a0df1f9 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Took 1.32 seconds to destroy the instance on the hypervisor.
Oct 02 12:03:24 compute-0 nova_compute[257802]: 2025-10-02 12:03:24.181 2 DEBUG oslo.service.loopingcall [None req-d622c355-8acb-4c63-965f-78612a0df1f9 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:03:24 compute-0 nova_compute[257802]: 2025-10-02 12:03:24.182 2 DEBUG nova.compute.manager [-] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:03:24 compute-0 nova_compute[257802]: 2025-10-02 12:03:24.182 2 DEBUG nova.network.neutron [-] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:03:24 compute-0 nova_compute[257802]: 2025-10-02 12:03:24.305 2 DEBUG oslo_concurrency.lockutils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.342s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:03:24 compute-0 nova_compute[257802]: 2025-10-02 12:03:24.306 2 DEBUG nova.compute.manager [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:03:24 compute-0 nova_compute[257802]: 2025-10-02 12:03:24.419 2 DEBUG nova.compute.manager [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:03:24 compute-0 nova_compute[257802]: 2025-10-02 12:03:24.420 2 DEBUG nova.network.neutron [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:03:24 compute-0 nova_compute[257802]: 2025-10-02 12:03:24.454 2 INFO nova.virt.libvirt.driver [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:03:24 compute-0 nova_compute[257802]: 2025-10-02 12:03:24.502 2 DEBUG nova.compute.manager [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:03:24 compute-0 ceph-mon[73607]: pgmap v1092: 305 pgs: 305 active+clean; 255 MiB data, 418 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.3 MiB/s wr, 199 op/s
Oct 02 12:03:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3201983574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:03:24 compute-0 nova_compute[257802]: 2025-10-02 12:03:24.606 2 DEBUG nova.compute.manager [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:03:24 compute-0 nova_compute[257802]: 2025-10-02 12:03:24.608 2 DEBUG nova.virt.libvirt.driver [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:03:24 compute-0 nova_compute[257802]: 2025-10-02 12:03:24.608 2 INFO nova.virt.libvirt.driver [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Creating image(s)
Oct 02 12:03:24 compute-0 nova_compute[257802]: 2025-10-02 12:03:24.672 2 DEBUG nova.storage.rbd_utils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] rbd image 46dc872c-5412-497c-a7e7-31a99fa93f75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:03:24 compute-0 nova_compute[257802]: 2025-10-02 12:03:24.698 2 DEBUG nova.storage.rbd_utils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] rbd image 46dc872c-5412-497c-a7e7-31a99fa93f75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:03:24 compute-0 nova_compute[257802]: 2025-10-02 12:03:24.733 2 DEBUG nova.storage.rbd_utils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] rbd image 46dc872c-5412-497c-a7e7-31a99fa93f75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:03:24 compute-0 nova_compute[257802]: 2025-10-02 12:03:24.739 2 DEBUG oslo_concurrency.processutils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:03:24 compute-0 nova_compute[257802]: 2025-10-02 12:03:24.804 2 DEBUG oslo_concurrency.processutils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:03:24 compute-0 nova_compute[257802]: 2025-10-02 12:03:24.805 2 DEBUG oslo_concurrency.lockutils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:24 compute-0 nova_compute[257802]: 2025-10-02 12:03:24.806 2 DEBUG oslo_concurrency.lockutils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:24 compute-0 nova_compute[257802]: 2025-10-02 12:03:24.807 2 DEBUG oslo_concurrency.lockutils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:03:24 compute-0 nova_compute[257802]: 2025-10-02 12:03:24.839 2 DEBUG nova.storage.rbd_utils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] rbd image 46dc872c-5412-497c-a7e7-31a99fa93f75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:03:24 compute-0 nova_compute[257802]: 2025-10-02 12:03:24.844 2 DEBUG oslo_concurrency.processutils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 46dc872c-5412-497c-a7e7-31a99fa93f75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:03:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 217 MiB data, 387 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.0 MiB/s wr, 237 op/s
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.153 2 DEBUG oslo_concurrency.processutils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 46dc872c-5412-497c-a7e7-31a99fa93f75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.184 2 DEBUG nova.network.neutron [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.184 2 DEBUG nova.compute.manager [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:03:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:25.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.232 2 DEBUG nova.storage.rbd_utils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] resizing rbd image 46dc872c-5412-497c-a7e7-31a99fa93f75_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:03:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:25.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.338 2 DEBUG nova.compute.manager [req-c50e705a-8e7a-4a92-b2cb-e8fcf565ceba req-9d124caf-821a-4cca-9c7e-2400a6b855f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Received event network-vif-plugged-c3f370c5-770e-48df-b015-b39eb427f259 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.338 2 DEBUG oslo_concurrency.lockutils [req-c50e705a-8e7a-4a92-b2cb-e8fcf565ceba req-9d124caf-821a-4cca-9c7e-2400a6b855f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.339 2 DEBUG oslo_concurrency.lockutils [req-c50e705a-8e7a-4a92-b2cb-e8fcf565ceba req-9d124caf-821a-4cca-9c7e-2400a6b855f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.339 2 DEBUG oslo_concurrency.lockutils [req-c50e705a-8e7a-4a92-b2cb-e8fcf565ceba req-9d124caf-821a-4cca-9c7e-2400a6b855f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.339 2 DEBUG nova.compute.manager [req-c50e705a-8e7a-4a92-b2cb-e8fcf565ceba req-9d124caf-821a-4cca-9c7e-2400a6b855f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] No waiting events found dispatching network-vif-plugged-c3f370c5-770e-48df-b015-b39eb427f259 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.340 2 WARNING nova.compute.manager [req-c50e705a-8e7a-4a92-b2cb-e8fcf565ceba req-9d124caf-821a-4cca-9c7e-2400a6b855f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Received unexpected event network-vif-plugged-c3f370c5-770e-48df-b015-b39eb427f259 for instance with vm_state active and task_state deleting.
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.344 2 DEBUG nova.objects.instance [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Lazy-loading 'migration_context' on Instance uuid 46dc872c-5412-497c-a7e7-31a99fa93f75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.367 2 DEBUG nova.virt.libvirt.driver [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.368 2 DEBUG nova.virt.libvirt.driver [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Ensure instance console log exists: /var/lib/nova/instances/46dc872c-5412-497c-a7e7-31a99fa93f75/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.368 2 DEBUG oslo_concurrency.lockutils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.368 2 DEBUG oslo_concurrency.lockutils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.369 2 DEBUG oslo_concurrency.lockutils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.370 2 DEBUG nova.virt.libvirt.driver [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.374 2 WARNING nova.virt.libvirt.driver [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.381 2 DEBUG nova.virt.libvirt.host [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.382 2 DEBUG nova.virt.libvirt.host [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.386 2 DEBUG nova.virt.libvirt.host [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.386 2 DEBUG nova.virt.libvirt.host [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.388 2 DEBUG nova.virt.libvirt.driver [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.388 2 DEBUG nova.virt.hardware [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.389 2 DEBUG nova.virt.hardware [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.389 2 DEBUG nova.virt.hardware [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.389 2 DEBUG nova.virt.hardware [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.390 2 DEBUG nova.virt.hardware [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.390 2 DEBUG nova.virt.hardware [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.390 2 DEBUG nova.virt.hardware [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.391 2 DEBUG nova.virt.hardware [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.391 2 DEBUG nova.virt.hardware [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.391 2 DEBUG nova.virt.hardware [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.391 2 DEBUG nova.virt.hardware [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.395 2 DEBUG oslo_concurrency.processutils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:03:25 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3937837174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.821 2 DEBUG oslo_concurrency.processutils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.849 2 DEBUG nova.storage.rbd_utils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] rbd image 46dc872c-5412-497c-a7e7-31a99fa93f75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:03:25 compute-0 nova_compute[257802]: 2025-10-02 12:03:25.853 2 DEBUG oslo_concurrency.processutils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:03:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:03:26 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1380107115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:03:26 compute-0 nova_compute[257802]: 2025-10-02 12:03:26.275 2 DEBUG oslo_concurrency.processutils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:03:26 compute-0 nova_compute[257802]: 2025-10-02 12:03:26.277 2 DEBUG nova.objects.instance [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Lazy-loading 'pci_devices' on Instance uuid 46dc872c-5412-497c-a7e7-31a99fa93f75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:03:26 compute-0 nova_compute[257802]: 2025-10-02 12:03:26.293 2 DEBUG nova.virt.libvirt.driver [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:03:26 compute-0 nova_compute[257802]:   <uuid>46dc872c-5412-497c-a7e7-31a99fa93f75</uuid>
Oct 02 12:03:26 compute-0 nova_compute[257802]:   <name>instance-00000015</name>
Oct 02 12:03:26 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:03:26 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:03:26 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:03:26 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:       <nova:name>tempest-ServerDiagnosticsTest-server-106125037</nova:name>
Oct 02 12:03:26 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:03:25</nova:creationTime>
Oct 02 12:03:26 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:03:26 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:03:26 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:03:26 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:03:26 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:03:26 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:03:26 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:03:26 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:03:26 compute-0 nova_compute[257802]:         <nova:user uuid="4f90aaf0fd8f4214863d9023c775ec7d">tempest-ServerDiagnosticsTest-393356948-project-member</nova:user>
Oct 02 12:03:26 compute-0 nova_compute[257802]:         <nova:project uuid="462919ce52394bef90e573962ed18700">tempest-ServerDiagnosticsTest-393356948</nova:project>
Oct 02 12:03:26 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:03:26 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:       <nova:ports/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:03:26 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:03:26 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <system>
Oct 02 12:03:26 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:03:26 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:03:26 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:03:26 compute-0 nova_compute[257802]:       <entry name="serial">46dc872c-5412-497c-a7e7-31a99fa93f75</entry>
Oct 02 12:03:26 compute-0 nova_compute[257802]:       <entry name="uuid">46dc872c-5412-497c-a7e7-31a99fa93f75</entry>
Oct 02 12:03:26 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     </system>
Oct 02 12:03:26 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:03:26 compute-0 nova_compute[257802]:   <os>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:   </os>
Oct 02 12:03:26 compute-0 nova_compute[257802]:   <features>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:   </features>
Oct 02 12:03:26 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:03:26 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:03:26 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:03:26 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/46dc872c-5412-497c-a7e7-31a99fa93f75_disk">
Oct 02 12:03:26 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:       </source>
Oct 02 12:03:26 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:03:26 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:03:26 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:03:26 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/46dc872c-5412-497c-a7e7-31a99fa93f75_disk.config">
Oct 02 12:03:26 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:       </source>
Oct 02 12:03:26 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:03:26 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:03:26 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:03:26 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/46dc872c-5412-497c-a7e7-31a99fa93f75/console.log" append="off"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <video>
Oct 02 12:03:26 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     </video>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:03:26 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:03:26 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:03:26 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:03:26 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:03:26 compute-0 nova_compute[257802]: </domain>
Oct 02 12:03:26 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:03:26 compute-0 nova_compute[257802]: 2025-10-02 12:03:26.410 2 DEBUG nova.virt.libvirt.driver [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:03:26 compute-0 nova_compute[257802]: 2025-10-02 12:03:26.412 2 DEBUG nova.virt.libvirt.driver [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:03:26 compute-0 nova_compute[257802]: 2025-10-02 12:03:26.413 2 INFO nova.virt.libvirt.driver [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Using config drive
Oct 02 12:03:26 compute-0 nova_compute[257802]: 2025-10-02 12:03:26.438 2 DEBUG nova.storage.rbd_utils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] rbd image 46dc872c-5412-497c-a7e7-31a99fa93f75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:03:26 compute-0 ceph-mon[73607]: pgmap v1093: 305 pgs: 305 active+clean; 217 MiB data, 387 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.0 MiB/s wr, 237 op/s
Oct 02 12:03:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3937837174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:03:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1380107115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:03:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:26.920 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:26.921 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:26.921 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:03:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 195 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.2 MiB/s wr, 211 op/s
Oct 02 12:03:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:03:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:27.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:03:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:27.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:27 compute-0 nova_compute[257802]: 2025-10-02 12:03:27.347 2 INFO nova.virt.libvirt.driver [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Creating config drive at /var/lib/nova/instances/46dc872c-5412-497c-a7e7-31a99fa93f75/disk.config
Oct 02 12:03:27 compute-0 nova_compute[257802]: 2025-10-02 12:03:27.352 2 DEBUG oslo_concurrency.processutils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/46dc872c-5412-497c-a7e7-31a99fa93f75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp66z3xmbo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:03:27 compute-0 nova_compute[257802]: 2025-10-02 12:03:27.479 2 DEBUG oslo_concurrency.processutils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/46dc872c-5412-497c-a7e7-31a99fa93f75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp66z3xmbo" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:03:27 compute-0 nova_compute[257802]: 2025-10-02 12:03:27.507 2 DEBUG nova.storage.rbd_utils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] rbd image 46dc872c-5412-497c-a7e7-31a99fa93f75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:03:27 compute-0 nova_compute[257802]: 2025-10-02 12:03:27.510 2 DEBUG oslo_concurrency.processutils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/46dc872c-5412-497c-a7e7-31a99fa93f75/disk.config 46dc872c-5412-497c-a7e7-31a99fa93f75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:03:27 compute-0 nova_compute[257802]: 2025-10-02 12:03:27.827 2 DEBUG oslo_concurrency.processutils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/46dc872c-5412-497c-a7e7-31a99fa93f75/disk.config 46dc872c-5412-497c-a7e7-31a99fa93f75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:03:27 compute-0 nova_compute[257802]: 2025-10-02 12:03:27.828 2 INFO nova.virt.libvirt.driver [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Deleting local config drive /var/lib/nova/instances/46dc872c-5412-497c-a7e7-31a99fa93f75/disk.config because it was imported into RBD.
Oct 02 12:03:27 compute-0 systemd-machined[211836]: New machine qemu-12-instance-00000015.
Oct 02 12:03:27 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-00000015.
Oct 02 12:03:28 compute-0 nova_compute[257802]: 2025-10-02 12:03:28.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:03:28 compute-0 ceph-mon[73607]: pgmap v1094: 305 pgs: 305 active+clean; 195 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.2 MiB/s wr, 211 op/s
Oct 02 12:03:28 compute-0 nova_compute[257802]: 2025-10-02 12:03:28.790 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406608.790054, 46dc872c-5412-497c-a7e7-31a99fa93f75 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:03:28 compute-0 nova_compute[257802]: 2025-10-02 12:03:28.791 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] VM Resumed (Lifecycle Event)
Oct 02 12:03:28 compute-0 nova_compute[257802]: 2025-10-02 12:03:28.794 2 DEBUG nova.compute.manager [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:03:28 compute-0 nova_compute[257802]: 2025-10-02 12:03:28.794 2 DEBUG nova.virt.libvirt.driver [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:03:28 compute-0 nova_compute[257802]: 2025-10-02 12:03:28.797 2 INFO nova.virt.libvirt.driver [-] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Instance spawned successfully.
Oct 02 12:03:28 compute-0 nova_compute[257802]: 2025-10-02 12:03:28.798 2 DEBUG nova.virt.libvirt.driver [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:03:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 146 MiB data, 376 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.5 MiB/s wr, 241 op/s
Oct 02 12:03:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:29.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:29.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:29 compute-0 nova_compute[257802]: 2025-10-02 12:03:29.996 2 DEBUG nova.virt.libvirt.driver [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:03:29 compute-0 nova_compute[257802]: 2025-10-02 12:03:29.996 2 DEBUG nova.virt.libvirt.driver [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:03:29 compute-0 nova_compute[257802]: 2025-10-02 12:03:29.997 2 DEBUG nova.virt.libvirt.driver [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:03:29 compute-0 nova_compute[257802]: 2025-10-02 12:03:29.997 2 DEBUG nova.virt.libvirt.driver [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:03:29 compute-0 nova_compute[257802]: 2025-10-02 12:03:29.997 2 DEBUG nova.virt.libvirt.driver [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:03:29 compute-0 nova_compute[257802]: 2025-10-02 12:03:29.997 2 DEBUG nova.virt.libvirt.driver [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:03:30 compute-0 nova_compute[257802]: 2025-10-02 12:03:30.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:03:30 compute-0 nova_compute[257802]: 2025-10-02 12:03:30.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:03:30 compute-0 nova_compute[257802]: 2025-10-02 12:03:30.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:03:30 compute-0 nova_compute[257802]: 2025-10-02 12:03:30.121 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:03:30 compute-0 nova_compute[257802]: 2025-10-02 12:03:30.125 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:03:30 compute-0 nova_compute[257802]: 2025-10-02 12:03:30.165 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:03:30 compute-0 nova_compute[257802]: 2025-10-02 12:03:30.166 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406608.790817, 46dc872c-5412-497c-a7e7-31a99fa93f75 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:03:30 compute-0 nova_compute[257802]: 2025-10-02 12:03:30.166 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] VM Started (Lifecycle Event)
Oct 02 12:03:30 compute-0 nova_compute[257802]: 2025-10-02 12:03:30.193 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:03:30 compute-0 nova_compute[257802]: 2025-10-02 12:03:30.196 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:03:30 compute-0 nova_compute[257802]: 2025-10-02 12:03:30.212 2 INFO nova.compute.manager [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Took 5.61 seconds to spawn the instance on the hypervisor.
Oct 02 12:03:30 compute-0 nova_compute[257802]: 2025-10-02 12:03:30.213 2 DEBUG nova.compute.manager [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:03:30 compute-0 nova_compute[257802]: 2025-10-02 12:03:30.224 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:03:30 compute-0 nova_compute[257802]: 2025-10-02 12:03:30.276 2 INFO nova.compute.manager [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Took 7.35 seconds to build instance.
Oct 02 12:03:30 compute-0 nova_compute[257802]: 2025-10-02 12:03:30.300 2 DEBUG oslo_concurrency.lockutils [None req-82b740a7-321a-4dce-a87c-909920039bd6 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Lock "46dc872c-5412-497c-a7e7-31a99fa93f75" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:03:30 compute-0 nova_compute[257802]: 2025-10-02 12:03:30.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:30 compute-0 nova_compute[257802]: 2025-10-02 12:03:30.526 2 DEBUG nova.network.neutron [-] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:03:30 compute-0 sudo[274303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:03:30 compute-0 sudo[274303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:30 compute-0 sudo[274303]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:30 compute-0 nova_compute[257802]: 2025-10-02 12:03:30.592 2 INFO nova.compute.manager [-] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Took 6.41 seconds to deallocate network for instance.
Oct 02 12:03:30 compute-0 sudo[274328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:03:30 compute-0 sudo[274328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:30 compute-0 sudo[274328]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:30 compute-0 nova_compute[257802]: 2025-10-02 12:03:30.667 2 DEBUG oslo_concurrency.lockutils [None req-d622c355-8acb-4c63-965f-78612a0df1f9 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:30 compute-0 nova_compute[257802]: 2025-10-02 12:03:30.668 2 DEBUG oslo_concurrency.lockutils [None req-d622c355-8acb-4c63-965f-78612a0df1f9 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:30 compute-0 nova_compute[257802]: 2025-10-02 12:03:30.798 2 DEBUG nova.compute.manager [None req-689c34ee-2761-4df2-ab65-9c032568cb6d 8c3449c8b0404774b93358774236e412 186c71c9c685435c9b1bc0ba899c2211 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:03:30 compute-0 nova_compute[257802]: 2025-10-02 12:03:30.802 2 INFO nova.compute.manager [None req-689c34ee-2761-4df2-ab65-9c032568cb6d 8c3449c8b0404774b93358774236e412 186c71c9c685435c9b1bc0ba899c2211 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Retrieving diagnostics
Oct 02 12:03:30 compute-0 nova_compute[257802]: 2025-10-02 12:03:30.839 2 DEBUG oslo_concurrency.processutils [None req-d622c355-8acb-4c63-965f-78612a0df1f9 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:03:30 compute-0 ceph-mon[73607]: pgmap v1095: 305 pgs: 305 active+clean; 146 MiB data, 376 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.5 MiB/s wr, 241 op/s
Oct 02 12:03:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 134 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.7 MiB/s wr, 298 op/s
Oct 02 12:03:31 compute-0 nova_compute[257802]: 2025-10-02 12:03:31.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:03:31 compute-0 nova_compute[257802]: 2025-10-02 12:03:31.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:03:31 compute-0 nova_compute[257802]: 2025-10-02 12:03:31.171 2 DEBUG oslo_concurrency.lockutils [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Acquiring lock "46dc872c-5412-497c-a7e7-31a99fa93f75" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:31 compute-0 nova_compute[257802]: 2025-10-02 12:03:31.171 2 DEBUG oslo_concurrency.lockutils [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Lock "46dc872c-5412-497c-a7e7-31a99fa93f75" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:31 compute-0 nova_compute[257802]: 2025-10-02 12:03:31.172 2 DEBUG oslo_concurrency.lockutils [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Acquiring lock "46dc872c-5412-497c-a7e7-31a99fa93f75-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:31 compute-0 nova_compute[257802]: 2025-10-02 12:03:31.172 2 DEBUG oslo_concurrency.lockutils [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Lock "46dc872c-5412-497c-a7e7-31a99fa93f75-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:31 compute-0 nova_compute[257802]: 2025-10-02 12:03:31.172 2 DEBUG oslo_concurrency.lockutils [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Lock "46dc872c-5412-497c-a7e7-31a99fa93f75-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:03:31 compute-0 nova_compute[257802]: 2025-10-02 12:03:31.173 2 INFO nova.compute.manager [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Terminating instance
Oct 02 12:03:31 compute-0 nova_compute[257802]: 2025-10-02 12:03:31.174 2 DEBUG oslo_concurrency.lockutils [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Acquiring lock "refresh_cache-46dc872c-5412-497c-a7e7-31a99fa93f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:03:31 compute-0 nova_compute[257802]: 2025-10-02 12:03:31.174 2 DEBUG oslo_concurrency.lockutils [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Acquired lock "refresh_cache-46dc872c-5412-497c-a7e7-31a99fa93f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:03:31 compute-0 nova_compute[257802]: 2025-10-02 12:03:31.175 2 DEBUG nova.network.neutron [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:03:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:31.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:31.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:03:31 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1201547118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:03:31 compute-0 nova_compute[257802]: 2025-10-02 12:03:31.356 2 DEBUG oslo_concurrency.processutils [None req-d622c355-8acb-4c63-965f-78612a0df1f9 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:03:31 compute-0 nova_compute[257802]: 2025-10-02 12:03:31.362 2 DEBUG nova.compute.provider_tree [None req-d622c355-8acb-4c63-965f-78612a0df1f9 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:03:31 compute-0 nova_compute[257802]: 2025-10-02 12:03:31.608 2 DEBUG nova.scheduler.client.report [None req-d622c355-8acb-4c63-965f-78612a0df1f9 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:03:31 compute-0 nova_compute[257802]: 2025-10-02 12:03:31.668 2 DEBUG nova.network.neutron [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:03:31 compute-0 nova_compute[257802]: 2025-10-02 12:03:31.677 2 DEBUG oslo_concurrency.lockutils [None req-d622c355-8acb-4c63-965f-78612a0df1f9 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:03:31 compute-0 nova_compute[257802]: 2025-10-02 12:03:31.781 2 INFO nova.scheduler.client.report [None req-d622c355-8acb-4c63-965f-78612a0df1f9 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Deleted allocations for instance e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7
Oct 02 12:03:31 compute-0 podman[274376]: 2025-10-02 12:03:31.922522385 +0000 UTC m=+0.055007528 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:03:31 compute-0 podman[274375]: 2025-10-02 12:03:31.92559985 +0000 UTC m=+0.058052582 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:03:31 compute-0 nova_compute[257802]: 2025-10-02 12:03:31.943 2 DEBUG oslo_concurrency.lockutils [None req-d622c355-8acb-4c63-965f-78612a0df1f9 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Lock "e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:03:31 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/250854786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:03:31 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1201547118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:03:31 compute-0 nova_compute[257802]: 2025-10-02 12:03:31.992 2 DEBUG nova.network.neutron [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:03:32 compute-0 nova_compute[257802]: 2025-10-02 12:03:32.032 2 DEBUG oslo_concurrency.lockutils [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Releasing lock "refresh_cache-46dc872c-5412-497c-a7e7-31a99fa93f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:03:32 compute-0 nova_compute[257802]: 2025-10-02 12:03:32.033 2 DEBUG nova.compute.manager [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:03:32 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000015.scope: Deactivated successfully.
Oct 02 12:03:32 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000015.scope: Consumed 4.195s CPU time.
Oct 02 12:03:32 compute-0 systemd-machined[211836]: Machine qemu-12-instance-00000015 terminated.
Oct 02 12:03:32 compute-0 nova_compute[257802]: 2025-10-02 12:03:32.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:03:32 compute-0 nova_compute[257802]: 2025-10-02 12:03:32.256 2 INFO nova.virt.libvirt.driver [-] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Instance destroyed successfully.
Oct 02 12:03:32 compute-0 nova_compute[257802]: 2025-10-02 12:03:32.259 2 DEBUG nova.objects.instance [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Lazy-loading 'resources' on Instance uuid 46dc872c-5412-497c-a7e7-31a99fa93f75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:03:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 134 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.6 MiB/s wr, 271 op/s
Oct 02 12:03:33 compute-0 ceph-mon[73607]: pgmap v1096: 305 pgs: 305 active+clean; 134 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.7 MiB/s wr, 298 op/s
Oct 02 12:03:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1247553154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:03:33 compute-0 nova_compute[257802]: 2025-10-02 12:03:33.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:03:33 compute-0 nova_compute[257802]: 2025-10-02 12:03:33.100 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:03:33 compute-0 nova_compute[257802]: 2025-10-02 12:03:33.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:33.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:33 compute-0 nova_compute[257802]: 2025-10-02 12:03:33.217 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:03:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:03:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:33.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:33 compute-0 nova_compute[257802]: 2025-10-02 12:03:33.569 2 INFO nova.virt.libvirt.driver [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Deleting instance files /var/lib/nova/instances/46dc872c-5412-497c-a7e7-31a99fa93f75_del
Oct 02 12:03:33 compute-0 nova_compute[257802]: 2025-10-02 12:03:33.569 2 INFO nova.virt.libvirt.driver [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Deletion of /var/lib/nova/instances/46dc872c-5412-497c-a7e7-31a99fa93f75_del complete
Oct 02 12:03:33 compute-0 nova_compute[257802]: 2025-10-02 12:03:33.650 2 INFO nova.compute.manager [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Took 1.62 seconds to destroy the instance on the hypervisor.
Oct 02 12:03:33 compute-0 nova_compute[257802]: 2025-10-02 12:03:33.651 2 DEBUG oslo.service.loopingcall [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:03:33 compute-0 nova_compute[257802]: 2025-10-02 12:03:33.651 2 DEBUG nova.compute.manager [-] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:03:33 compute-0 nova_compute[257802]: 2025-10-02 12:03:33.651 2 DEBUG nova.network.neutron [-] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:03:34 compute-0 nova_compute[257802]: 2025-10-02 12:03:34.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:03:34 compute-0 ceph-mon[73607]: pgmap v1097: 305 pgs: 305 active+clean; 134 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.6 MiB/s wr, 271 op/s
Oct 02 12:03:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3069889945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:03:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2979139565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:03:34 compute-0 nova_compute[257802]: 2025-10-02 12:03:34.137 2 DEBUG nova.network.neutron [-] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:03:34 compute-0 nova_compute[257802]: 2025-10-02 12:03:34.166 2 DEBUG nova.network.neutron [-] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:03:34 compute-0 nova_compute[257802]: 2025-10-02 12:03:34.169 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:03:34 compute-0 nova_compute[257802]: 2025-10-02 12:03:34.209 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:34 compute-0 nova_compute[257802]: 2025-10-02 12:03:34.210 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:34 compute-0 nova_compute[257802]: 2025-10-02 12:03:34.210 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:03:34 compute-0 nova_compute[257802]: 2025-10-02 12:03:34.211 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:03:34 compute-0 nova_compute[257802]: 2025-10-02 12:03:34.211 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:03:34 compute-0 nova_compute[257802]: 2025-10-02 12:03:34.237 2 INFO nova.compute.manager [-] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Took 0.59 seconds to deallocate network for instance.
Oct 02 12:03:34 compute-0 nova_compute[257802]: 2025-10-02 12:03:34.374 2 DEBUG oslo_concurrency.lockutils [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:34 compute-0 nova_compute[257802]: 2025-10-02 12:03:34.374 2 DEBUG oslo_concurrency.lockutils [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:34 compute-0 nova_compute[257802]: 2025-10-02 12:03:34.399 2 DEBUG nova.scheduler.client.report [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Refreshing inventories for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:03:34 compute-0 nova_compute[257802]: 2025-10-02 12:03:34.415 2 DEBUG nova.scheduler.client.report [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Updating ProviderTree inventory for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:03:34 compute-0 nova_compute[257802]: 2025-10-02 12:03:34.415 2 DEBUG nova.compute.provider_tree [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Updating inventory in ProviderTree for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:03:34 compute-0 nova_compute[257802]: 2025-10-02 12:03:34.432 2 DEBUG nova.scheduler.client.report [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Refreshing aggregate associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:03:34 compute-0 nova_compute[257802]: 2025-10-02 12:03:34.456 2 DEBUG nova.scheduler.client.report [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Refreshing trait associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:03:34 compute-0 nova_compute[257802]: 2025-10-02 12:03:34.485 2 DEBUG oslo_concurrency.processutils [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:03:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:03:34 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3714695288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:03:34 compute-0 nova_compute[257802]: 2025-10-02 12:03:34.640 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:03:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:34.734 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:03:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:34.735 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:03:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:34.735 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:03:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:34.768 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:03:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:34.768 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:03:34 compute-0 nova_compute[257802]: 2025-10-02 12:03:34.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:34 compute-0 nova_compute[257802]: 2025-10-02 12:03:34.843 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:03:34 compute-0 nova_compute[257802]: 2025-10-02 12:03:34.844 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4789MB free_disk=20.946483612060547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:03:34 compute-0 nova_compute[257802]: 2025-10-02 12:03:34.845 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:03:34 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3424545852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:03:34 compute-0 nova_compute[257802]: 2025-10-02 12:03:34.909 2 DEBUG oslo_concurrency.processutils [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:03:34 compute-0 nova_compute[257802]: 2025-10-02 12:03:34.914 2 DEBUG nova.compute.provider_tree [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:03:34 compute-0 nova_compute[257802]: 2025-10-02 12:03:34.935 2 DEBUG nova.scheduler.client.report [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:03:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 66 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.6 MiB/s wr, 348 op/s
Oct 02 12:03:34 compute-0 nova_compute[257802]: 2025-10-02 12:03:34.968 2 DEBUG oslo_concurrency.lockutils [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:03:34 compute-0 nova_compute[257802]: 2025-10-02 12:03:34.970 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:35 compute-0 nova_compute[257802]: 2025-10-02 12:03:35.009 2 INFO nova.scheduler.client.report [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Deleted allocations for instance 46dc872c-5412-497c-a7e7-31a99fa93f75
Oct 02 12:03:35 compute-0 nova_compute[257802]: 2025-10-02 12:03:35.045 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:03:35 compute-0 nova_compute[257802]: 2025-10-02 12:03:35.046 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:03:35 compute-0 nova_compute[257802]: 2025-10-02 12:03:35.064 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:03:35 compute-0 nova_compute[257802]: 2025-10-02 12:03:35.107 2 DEBUG oslo_concurrency.lockutils [None req-9cb1f396-34a6-4f47-8726-5ca0baf0e76c 4f90aaf0fd8f4214863d9023c775ec7d 462919ce52394bef90e573962ed18700 - - default default] Lock "46dc872c-5412-497c-a7e7-31a99fa93f75" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.936s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:03:35 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3714695288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:03:35 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1695241263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:03:35 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3424545852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:03:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:35.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:35.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:35 compute-0 nova_compute[257802]: 2025-10-02 12:03:35.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:03:35 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1111253323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:03:35 compute-0 nova_compute[257802]: 2025-10-02 12:03:35.534 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:03:35 compute-0 nova_compute[257802]: 2025-10-02 12:03:35.539 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:03:35 compute-0 nova_compute[257802]: 2025-10-02 12:03:35.552 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:03:35 compute-0 nova_compute[257802]: 2025-10-02 12:03:35.591 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:03:35 compute-0 nova_compute[257802]: 2025-10-02 12:03:35.591 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:03:35 compute-0 nova_compute[257802]: 2025-10-02 12:03:35.591 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:03:35 compute-0 nova_compute[257802]: 2025-10-02 12:03:35.591 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:03:35 compute-0 nova_compute[257802]: 2025-10-02 12:03:35.616 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:03:35 compute-0 podman[274507]: 2025-10-02 12:03:35.943426671 +0000 UTC m=+0.087336306 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:03:36 compute-0 ceph-mon[73607]: pgmap v1098: 305 pgs: 305 active+clean; 66 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.6 MiB/s wr, 348 op/s
Oct 02 12:03:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1929740917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:03:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1111253323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:03:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 41 MiB data, 316 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 341 op/s
Oct 02 12:03:37 compute-0 nova_compute[257802]: 2025-10-02 12:03:37.145 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:03:37 compute-0 nova_compute[257802]: 2025-10-02 12:03:37.146 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:03:37 compute-0 nova_compute[257802]: 2025-10-02 12:03:37.146 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:03:37 compute-0 nova_compute[257802]: 2025-10-02 12:03:37.147 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:03:37 compute-0 nova_compute[257802]: 2025-10-02 12:03:37.171 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:03:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:37.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:37.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:38 compute-0 nova_compute[257802]: 2025-10-02 12:03:38.094 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406603.092829, e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:03:38 compute-0 nova_compute[257802]: 2025-10-02 12:03:38.094 2 INFO nova.compute.manager [-] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] VM Stopped (Lifecycle Event)
Oct 02 12:03:38 compute-0 nova_compute[257802]: 2025-10-02 12:03:38.159 2 DEBUG nova.compute.manager [None req-583aae66-d63b-4a5c-88c9-9076e973009c - - - - - -] [instance: e2d0d6b1-09f1-478b-8bd6-4159ee5c2bf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:03:38 compute-0 nova_compute[257802]: 2025-10-02 12:03:38.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:03:38 compute-0 ceph-mon[73607]: pgmap v1099: 305 pgs: 305 active+clean; 41 MiB data, 316 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 341 op/s
Oct 02 12:03:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.5 MiB/s wr, 311 op/s
Oct 02 12:03:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:39.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:39.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:40 compute-0 nova_compute[257802]: 2025-10-02 12:03:40.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:40 compute-0 ceph-mon[73607]: pgmap v1100: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.5 MiB/s wr, 311 op/s
Oct 02 12:03:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.2 MiB/s wr, 206 op/s
Oct 02 12:03:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:41.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:41.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:03:42
Oct 02 12:03:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:03:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:03:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['vms', 'backups', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'volumes']
Oct 02 12:03:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:03:42 compute-0 ceph-mon[73607]: pgmap v1101: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.2 MiB/s wr, 206 op/s
Oct 02 12:03:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:03:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:03:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:03:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:03:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:03:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:03:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:03:42.770 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:03:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:03:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:03:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:03:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:03:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:03:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:03:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:03:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:03:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:03:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:03:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.3 KiB/s wr, 107 op/s
Oct 02 12:03:43 compute-0 nova_compute[257802]: 2025-10-02 12:03:43.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:43.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:03:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:43.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:44 compute-0 ceph-mon[73607]: pgmap v1102: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.3 KiB/s wr, 107 op/s
Oct 02 12:03:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.3 KiB/s wr, 107 op/s
Oct 02 12:03:44 compute-0 sudo[274538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:03:44 compute-0 sudo[274538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:45 compute-0 sudo[274538]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:45 compute-0 sudo[274563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:03:45 compute-0 sudo[274563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:45 compute-0 sudo[274563]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:45 compute-0 sudo[274588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:03:45 compute-0 sudo[274588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:45 compute-0 sudo[274588]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:45 compute-0 sudo[274613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 12:03:45 compute-0 sudo[274613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:03:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:45.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:03:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:45.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:45 compute-0 sudo[274613]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:03:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 12:03:45 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:03:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:03:45 compute-0 nova_compute[257802]: 2025-10-02 12:03:45.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:45 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:03:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 12:03:45 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:03:45 compute-0 sudo[274659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:03:45 compute-0 sudo[274659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:45 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:03:45 compute-0 sudo[274659]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:45 compute-0 sudo[274684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:03:45 compute-0 sudo[274684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:45 compute-0 sudo[274684]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:45 compute-0 sudo[274709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:03:45 compute-0 sudo[274709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:45 compute-0 sudo[274709]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:45 compute-0 sudo[274734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:03:45 compute-0 sudo[274734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:03:46 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:03:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:03:46 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:03:46 compute-0 sudo[274734]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:46 compute-0 ceph-mon[73607]: pgmap v1103: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.3 KiB/s wr, 107 op/s
Oct 02 12:03:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:03:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:03:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:03:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:03:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:03:46 compute-0 sudo[274790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:03:46 compute-0 sudo[274790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:46 compute-0 sudo[274790]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:46 compute-0 sudo[274815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:03:46 compute-0 sudo[274815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:46 compute-0 sudo[274815]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:46 compute-0 sudo[274840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:03:46 compute-0 sudo[274840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:46 compute-0 sudo[274840]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:46 compute-0 sudo[274865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- inventory --format=json-pretty --filter-for-batch
Oct 02 12:03:46 compute-0 sudo[274865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 2.0 KiB/s wr, 29 op/s
Oct 02 12:03:47 compute-0 podman[274931]: 2025-10-02 12:03:47.066929622 +0000 UTC m=+0.042840218 container create f49177dc5727df63a08c2ad78f4e46658a252a6aaeb5d4095414902dd75dd28d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:03:47 compute-0 systemd[1]: Started libpod-conmon-f49177dc5727df63a08c2ad78f4e46658a252a6aaeb5d4095414902dd75dd28d.scope.
Oct 02 12:03:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:03:47 compute-0 podman[274931]: 2025-10-02 12:03:47.04659667 +0000 UTC m=+0.022507316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:03:47 compute-0 podman[274931]: 2025-10-02 12:03:47.144994837 +0000 UTC m=+0.120905463 container init f49177dc5727df63a08c2ad78f4e46658a252a6aaeb5d4095414902dd75dd28d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_meitner, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:03:47 compute-0 podman[274931]: 2025-10-02 12:03:47.151134069 +0000 UTC m=+0.127044655 container start f49177dc5727df63a08c2ad78f4e46658a252a6aaeb5d4095414902dd75dd28d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 12:03:47 compute-0 podman[274931]: 2025-10-02 12:03:47.154689316 +0000 UTC m=+0.130599932 container attach f49177dc5727df63a08c2ad78f4e46658a252a6aaeb5d4095414902dd75dd28d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:03:47 compute-0 competent_meitner[274947]: 167 167
Oct 02 12:03:47 compute-0 systemd[1]: libpod-f49177dc5727df63a08c2ad78f4e46658a252a6aaeb5d4095414902dd75dd28d.scope: Deactivated successfully.
Oct 02 12:03:47 compute-0 podman[274931]: 2025-10-02 12:03:47.156521692 +0000 UTC m=+0.132432298 container died f49177dc5727df63a08c2ad78f4e46658a252a6aaeb5d4095414902dd75dd28d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_meitner, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 12:03:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:47.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:47 compute-0 nova_compute[257802]: 2025-10-02 12:03:47.255 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406612.2534199, 46dc872c-5412-497c-a7e7-31a99fa93f75 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:03:47 compute-0 nova_compute[257802]: 2025-10-02 12:03:47.256 2 INFO nova.compute.manager [-] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] VM Stopped (Lifecycle Event)
Oct 02 12:03:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:47.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:48 compute-0 nova_compute[257802]: 2025-10-02 12:03:48.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:49 compute-0 nova_compute[257802]: 2025-10-02 12:03:49.101 2 DEBUG nova.compute.manager [None req-66d1e118-b205-4913-a2cc-aa4a6a4ec530 - - - - - -] [instance: 46dc872c-5412-497c-a7e7-31a99fa93f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:03:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:03:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:49.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:03:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:49.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:50 compute-0 nova_compute[257802]: 2025-10-02 12:03:50.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:50 compute-0 sudo[274965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:03:50 compute-0 sudo[274965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:50 compute-0 sudo[274965]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-89154c362bc8eb6756c653e2e0d8ebc524215462a9c305f53563676ed70d0135-merged.mount: Deactivated successfully.
Oct 02 12:03:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:03:50 compute-0 sudo[274990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:03:50 compute-0 sudo[274990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:50 compute-0 sudo[274990]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:03:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:03:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:51.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:03:51 compute-0 podman[274931]: 2025-10-02 12:03:51.266106245 +0000 UTC m=+4.242016871 container remove f49177dc5727df63a08c2ad78f4e46658a252a6aaeb5d4095414902dd75dd28d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_meitner, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:03:51 compute-0 systemd[1]: libpod-conmon-f49177dc5727df63a08c2ad78f4e46658a252a6aaeb5d4095414902dd75dd28d.scope: Deactivated successfully.
Oct 02 12:03:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:51.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:51 compute-0 podman[275022]: 2025-10-02 12:03:51.447064898 +0000 UTC m=+0.024922666 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:03:51 compute-0 podman[275022]: 2025-10-02 12:03:51.599673072 +0000 UTC m=+0.177530820 container create 6db6b82b9f20501ee21dec0a2fd86354cc3ee694dad3dd728710c4e6f92e0a2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Oct 02 12:03:51 compute-0 systemd[1]: Started libpod-conmon-6db6b82b9f20501ee21dec0a2fd86354cc3ee694dad3dd728710c4e6f92e0a2f.scope.
Oct 02 12:03:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732c673b27d727199af816632814d3a8917310679e706f61a4a5c0ae0adb5fbf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732c673b27d727199af816632814d3a8917310679e706f61a4a5c0ae0adb5fbf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732c673b27d727199af816632814d3a8917310679e706f61a4a5c0ae0adb5fbf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732c673b27d727199af816632814d3a8917310679e706f61a4a5c0ae0adb5fbf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:03:51 compute-0 podman[275022]: 2025-10-02 12:03:51.821176455 +0000 UTC m=+0.399034223 container init 6db6b82b9f20501ee21dec0a2fd86354cc3ee694dad3dd728710c4e6f92e0a2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:03:51 compute-0 podman[275022]: 2025-10-02 12:03:51.828246789 +0000 UTC m=+0.406104537 container start 6db6b82b9f20501ee21dec0a2fd86354cc3ee694dad3dd728710c4e6f92e0a2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bohr, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 12:03:51 compute-0 podman[275022]: 2025-10-02 12:03:51.892081704 +0000 UTC m=+0.469939452 container attach 6db6b82b9f20501ee21dec0a2fd86354cc3ee694dad3dd728710c4e6f92e0a2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bohr, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:03:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 12:03:52 compute-0 ceph-mon[73607]: pgmap v1104: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 2.0 KiB/s wr, 29 op/s
Oct 02 12:03:52 compute-0 ceph-mon[73607]: pgmap v1105: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:52 compute-0 ceph-mon[73607]: pgmap v1106: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:52 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:03:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 12:03:52 compute-0 nova_compute[257802]: 2025-10-02 12:03:52.741 2 DEBUG oslo_concurrency.lockutils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Acquiring lock "174d7548-b038-401c-82bf-8212037da6d1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:52 compute-0 nova_compute[257802]: 2025-10-02 12:03:52.742 2 DEBUG oslo_concurrency.lockutils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Lock "174d7548-b038-401c-82bf-8212037da6d1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:52 compute-0 nova_compute[257802]: 2025-10-02 12:03:52.864 2 DEBUG nova.compute.manager [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:03:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:53 compute-0 condescending_bohr[275038]: [
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:     {
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:         "available": false,
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:         "ceph_device": false,
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:         "lsm_data": {},
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:         "lvs": [],
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:         "path": "/dev/sr0",
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:         "rejected_reasons": [
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:             "Has a FileSystem",
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:             "Insufficient space (<5GB)"
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:         ],
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:         "sys_api": {
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:             "actuators": null,
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:             "device_nodes": "sr0",
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:             "devname": "sr0",
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:             "human_readable_size": "482.00 KB",
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:             "id_bus": "ata",
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:             "model": "QEMU DVD-ROM",
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:             "nr_requests": "2",
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:             "parent": "/dev/sr0",
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:             "partitions": {},
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:             "path": "/dev/sr0",
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:             "removable": "1",
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:             "rev": "2.5+",
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:             "ro": "0",
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:             "rotational": "0",
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:             "sas_address": "",
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:             "sas_device_handle": "",
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:             "scheduler_mode": "mq-deadline",
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:             "sectors": 0,
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:             "sectorsize": "2048",
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:             "size": 493568.0,
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:             "support_discard": "2048",
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:             "type": "disk",
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:             "vendor": "QEMU"
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:         }
Oct 02 12:03:53 compute-0 condescending_bohr[275038]:     }
Oct 02 12:03:53 compute-0 condescending_bohr[275038]: ]
Oct 02 12:03:53 compute-0 systemd[1]: libpod-6db6b82b9f20501ee21dec0a2fd86354cc3ee694dad3dd728710c4e6f92e0a2f.scope: Deactivated successfully.
Oct 02 12:03:53 compute-0 systemd[1]: libpod-6db6b82b9f20501ee21dec0a2fd86354cc3ee694dad3dd728710c4e6f92e0a2f.scope: Consumed 1.155s CPU time.
Oct 02 12:03:53 compute-0 podman[275022]: 2025-10-02 12:03:53.028937875 +0000 UTC m=+1.606795633 container died 6db6b82b9f20501ee21dec0a2fd86354cc3ee694dad3dd728710c4e6f92e0a2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bohr, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 12:03:53 compute-0 nova_compute[257802]: 2025-10-02 12:03:53.108 2 DEBUG oslo_concurrency.lockutils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:53 compute-0 nova_compute[257802]: 2025-10-02 12:03:53.109 2 DEBUG oslo_concurrency.lockutils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:53 compute-0 nova_compute[257802]: 2025-10-02 12:03:53.116 2 DEBUG nova.virt.hardware [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:03:53 compute-0 nova_compute[257802]: 2025-10-02 12:03:53.116 2 INFO nova.compute.claims [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:03:53 compute-0 nova_compute[257802]: 2025-10-02 12:03:53.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:03:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:53.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:03:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:53.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:53 compute-0 nova_compute[257802]: 2025-10-02 12:03:53.380 2 DEBUG oslo_concurrency.processutils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:03:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:03:53 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2229641721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:03:53 compute-0 nova_compute[257802]: 2025-10-02 12:03:53.803 2 DEBUG oslo_concurrency.processutils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:03:53 compute-0 nova_compute[257802]: 2025-10-02 12:03:53.808 2 DEBUG nova.compute.provider_tree [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 12:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 12:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 12:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 12:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 12:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 12:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 12:03:54 compute-0 nova_compute[257802]: 2025-10-02 12:03:54.296 2 DEBUG nova.scheduler.client.report [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:03:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:55.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:55.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:55 compute-0 nova_compute[257802]: 2025-10-02 12:03:55.376 2 DEBUG oslo_concurrency.lockutils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.267s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:03:55 compute-0 nova_compute[257802]: 2025-10-02 12:03:55.377 2 DEBUG nova.compute.manager [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:03:55 compute-0 nova_compute[257802]: 2025-10-02 12:03:55.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:55 compute-0 nova_compute[257802]: 2025-10-02 12:03:55.664 2 DEBUG nova.compute.manager [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:03:55 compute-0 nova_compute[257802]: 2025-10-02 12:03:55.665 2 DEBUG nova.network.neutron [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:03:56 compute-0 nova_compute[257802]: 2025-10-02 12:03:56.011 2 INFO nova.virt.libvirt.driver [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:03:56 compute-0 nova_compute[257802]: 2025-10-02 12:03:56.154 2 DEBUG nova.compute.manager [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:03:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:03:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:57.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:57.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:03:57 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2122574247' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:03:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:03:57 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2122574247' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:03:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:03:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-732c673b27d727199af816632814d3a8917310679e706f61a4a5c0ae0adb5fbf-merged.mount: Deactivated successfully.
Oct 02 12:03:57 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:03:58 compute-0 nova_compute[257802]: 2025-10-02 12:03:58.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:03:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:59 compute-0 podman[275022]: 2025-10-02 12:03:59.135451621 +0000 UTC m=+7.713309369 container remove 6db6b82b9f20501ee21dec0a2fd86354cc3ee694dad3dd728710c4e6f92e0a2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 12:03:59 compute-0 sudo[274865]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:59 compute-0 ceph-mon[73607]: pgmap v1107: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2229641721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:03:59 compute-0 ceph-mon[73607]: pgmap v1108: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:59 compute-0 ceph-mon[73607]: pgmap v1109: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2122574247' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:03:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2122574247' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:03:59 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:03:59 compute-0 systemd[1]: libpod-conmon-6db6b82b9f20501ee21dec0a2fd86354cc3ee694dad3dd728710c4e6f92e0a2f.scope: Deactivated successfully.
Oct 02 12:03:59 compute-0 podman[276165]: 2025-10-02 12:03:59.239252241 +0000 UTC m=+6.174261039 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:03:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:59.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:03:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:59.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:03:59 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:04:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:04:00 compute-0 nova_compute[257802]: 2025-10-02 12:04:00.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:04:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:04:00 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:04:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:04:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:04:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:04:01 compute-0 ceph-mon[73607]: pgmap v1110: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:04:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:01.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:01.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:01 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:04:01 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 7a0ca961-09b7-492f-8d8e-5fc133ba3007 does not exist
Oct 02 12:04:01 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 5b459a05-0850-4170-98f5-30f5bbb69740 does not exist
Oct 02 12:04:01 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev c24975a6-e64d-4413-8ad0-daf5462e12c3 does not exist
Oct 02 12:04:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:04:01 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:04:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:04:01 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:04:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:04:01 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:04:01 compute-0 sudo[276221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:01 compute-0 sudo[276221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:01 compute-0 sudo[276221]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:01 compute-0 sudo[276246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:04:02 compute-0 sudo[276246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:02 compute-0 sudo[276246]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:02 compute-0 sudo[276283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:02 compute-0 podman[276271]: 2025-10-02 12:04:02.058752264 +0000 UTC m=+0.048985399 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 02 12:04:02 compute-0 podman[276270]: 2025-10-02 12:04:02.059540923 +0000 UTC m=+0.052095085 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:04:02 compute-0 sudo[276283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:02 compute-0 sudo[276283]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:04:02 compute-0 sudo[276335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:04:02 compute-0 sudo[276335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:02 compute-0 podman[276400]: 2025-10-02 12:04:02.448707092 +0000 UTC m=+0.022445644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:04:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:04:02 compute-0 ceph-mon[73607]: pgmap v1111: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:04:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:04:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:04:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:04:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:04:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:04:02 compute-0 podman[276400]: 2025-10-02 12:04:02.664921365 +0000 UTC m=+0.238659897 container create 98ea1690d7356ba218c145bdc6c56d1a0de73ff6cce957295ada4d3e8dc3725d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sammet, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 12:04:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:03 compute-0 systemd[1]: Started libpod-conmon-98ea1690d7356ba218c145bdc6c56d1a0de73ff6cce957295ada4d3e8dc3725d.scope.
Oct 02 12:04:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:04:03 compute-0 nova_compute[257802]: 2025-10-02 12:04:03.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:03.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:03.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:03 compute-0 podman[276400]: 2025-10-02 12:04:03.41354185 +0000 UTC m=+0.987280432 container init 98ea1690d7356ba218c145bdc6c56d1a0de73ff6cce957295ada4d3e8dc3725d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 12:04:03 compute-0 podman[276400]: 2025-10-02 12:04:03.426663544 +0000 UTC m=+1.000402116 container start 98ea1690d7356ba218c145bdc6c56d1a0de73ff6cce957295ada4d3e8dc3725d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sammet, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:04:03 compute-0 adoring_sammet[276416]: 167 167
Oct 02 12:04:03 compute-0 systemd[1]: libpod-98ea1690d7356ba218c145bdc6c56d1a0de73ff6cce957295ada4d3e8dc3725d.scope: Deactivated successfully.
Oct 02 12:04:03 compute-0 podman[276400]: 2025-10-02 12:04:03.792726922 +0000 UTC m=+1.366465454 container attach 98ea1690d7356ba218c145bdc6c56d1a0de73ff6cce957295ada4d3e8dc3725d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sammet, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 12:04:03 compute-0 podman[276400]: 2025-10-02 12:04:03.794228589 +0000 UTC m=+1.367967131 container died 98ea1690d7356ba218c145bdc6c56d1a0de73ff6cce957295ada4d3e8dc3725d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sammet, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct 02 12:04:03 compute-0 nova_compute[257802]: 2025-10-02 12:04:03.926 2 DEBUG nova.compute.manager [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:04:03 compute-0 nova_compute[257802]: 2025-10-02 12:04:03.927 2 DEBUG nova.virt.libvirt.driver [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:04:03 compute-0 nova_compute[257802]: 2025-10-02 12:04:03.928 2 INFO nova.virt.libvirt.driver [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Creating image(s)
Oct 02 12:04:03 compute-0 nova_compute[257802]: 2025-10-02 12:04:03.953 2 DEBUG nova.storage.rbd_utils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] rbd image 174d7548-b038-401c-82bf-8212037da6d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:04:03 compute-0 nova_compute[257802]: 2025-10-02 12:04:03.977 2 DEBUG nova.storage.rbd_utils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] rbd image 174d7548-b038-401c-82bf-8212037da6d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:04:04 compute-0 nova_compute[257802]: 2025-10-02 12:04:04.003 2 DEBUG nova.storage.rbd_utils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] rbd image 174d7548-b038-401c-82bf-8212037da6d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:04:04 compute-0 nova_compute[257802]: 2025-10-02 12:04:04.006 2 DEBUG oslo_concurrency.processutils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:04:04 compute-0 nova_compute[257802]: 2025-10-02 12:04:04.067 2 DEBUG oslo_concurrency.processutils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:04:04 compute-0 nova_compute[257802]: 2025-10-02 12:04:04.068 2 DEBUG oslo_concurrency.lockutils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:04:04 compute-0 nova_compute[257802]: 2025-10-02 12:04:04.068 2 DEBUG oslo_concurrency.lockutils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:04:04 compute-0 nova_compute[257802]: 2025-10-02 12:04:04.069 2 DEBUG oslo_concurrency.lockutils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:04:04 compute-0 nova_compute[257802]: 2025-10-02 12:04:04.093 2 DEBUG nova.storage.rbd_utils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] rbd image 174d7548-b038-401c-82bf-8212037da6d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:04:04 compute-0 nova_compute[257802]: 2025-10-02 12:04:04.097 2 DEBUG oslo_concurrency.processutils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 174d7548-b038-401c-82bf-8212037da6d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:04:04 compute-0 ceph-mon[73607]: pgmap v1112: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f02ef3ae6b893f14b37663b5f6dde893a56e156145e7a88ec7027fc23d8b6fb-merged.mount: Deactivated successfully.
Oct 02 12:04:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:05.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:05.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:05 compute-0 nova_compute[257802]: 2025-10-02 12:04:05.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:05 compute-0 podman[276400]: 2025-10-02 12:04:05.892250047 +0000 UTC m=+3.465988579 container remove 98ea1690d7356ba218c145bdc6c56d1a0de73ff6cce957295ada4d3e8dc3725d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct 02 12:04:05 compute-0 systemd[1]: libpod-conmon-98ea1690d7356ba218c145bdc6c56d1a0de73ff6cce957295ada4d3e8dc3725d.scope: Deactivated successfully.
Oct 02 12:04:06 compute-0 podman[276536]: 2025-10-02 12:04:06.060577249 +0000 UTC m=+0.038776167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:04:06 compute-0 podman[276536]: 2025-10-02 12:04:06.373320013 +0000 UTC m=+0.351518871 container create b114e0896f14c20524e3f2457cee03f10de420e6b5305fda03976c64930ae2d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shamir, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 12:04:06 compute-0 nova_compute[257802]: 2025-10-02 12:04:06.553 2 DEBUG oslo_concurrency.processutils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 174d7548-b038-401c-82bf-8212037da6d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:04:06 compute-0 ceph-mon[73607]: pgmap v1113: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:06 compute-0 systemd[1]: Started libpod-conmon-b114e0896f14c20524e3f2457cee03f10de420e6b5305fda03976c64930ae2d0.scope.
Oct 02 12:04:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:04:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bebd9a9f336e5c43a2ff85c2bfed40c4c891f7653ed2f39f1d318ed617affc21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:04:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bebd9a9f336e5c43a2ff85c2bfed40c4c891f7653ed2f39f1d318ed617affc21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:04:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bebd9a9f336e5c43a2ff85c2bfed40c4c891f7653ed2f39f1d318ed617affc21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:04:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bebd9a9f336e5c43a2ff85c2bfed40c4c891f7653ed2f39f1d318ed617affc21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:04:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bebd9a9f336e5c43a2ff85c2bfed40c4c891f7653ed2f39f1d318ed617affc21/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:04:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 0 op/s
Oct 02 12:04:07 compute-0 podman[276536]: 2025-10-02 12:04:07.006147701 +0000 UTC m=+0.984346619 container init b114e0896f14c20524e3f2457cee03f10de420e6b5305fda03976c64930ae2d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shamir, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:04:07 compute-0 podman[276536]: 2025-10-02 12:04:07.019060831 +0000 UTC m=+0.997259669 container start b114e0896f14c20524e3f2457cee03f10de420e6b5305fda03976c64930ae2d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shamir, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:04:07 compute-0 podman[276551]: 2025-10-02 12:04:07.018109607 +0000 UTC m=+0.606285365 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 12:04:07 compute-0 nova_compute[257802]: 2025-10-02 12:04:07.027 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:04:07 compute-0 nova_compute[257802]: 2025-10-02 12:04:07.040 2 DEBUG nova.storage.rbd_utils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] resizing rbd image 174d7548-b038-401c-82bf-8212037da6d1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:04:07 compute-0 podman[276536]: 2025-10-02 12:04:07.096439639 +0000 UTC m=+1.074638477 container attach b114e0896f14c20524e3f2457cee03f10de420e6b5305fda03976c64930ae2d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:04:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:04:07 compute-0 nova_compute[257802]: 2025-10-02 12:04:07.227 2 WARNING nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.
Oct 02 12:04:07 compute-0 nova_compute[257802]: 2025-10-02 12:04:07.228 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Triggering sync for uuid 174d7548-b038-401c-82bf-8212037da6d1 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 12:04:07 compute-0 nova_compute[257802]: 2025-10-02 12:04:07.229 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "174d7548-b038-401c-82bf-8212037da6d1" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:04:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:07.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:04:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:07.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:04:07 compute-0 elegant_shamir[276588]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:04:07 compute-0 elegant_shamir[276588]: --> relative data size: 1.0
Oct 02 12:04:07 compute-0 elegant_shamir[276588]: --> All data devices are unavailable
Oct 02 12:04:07 compute-0 systemd[1]: libpod-b114e0896f14c20524e3f2457cee03f10de420e6b5305fda03976c64930ae2d0.scope: Deactivated successfully.
Oct 02 12:04:07 compute-0 conmon[276588]: conmon b114e0896f14c20524e3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b114e0896f14c20524e3f2457cee03f10de420e6b5305fda03976c64930ae2d0.scope/container/memory.events
Oct 02 12:04:07 compute-0 podman[276536]: 2025-10-02 12:04:07.802012132 +0000 UTC m=+1.780210960 container died b114e0896f14c20524e3f2457cee03f10de420e6b5305fda03976c64930ae2d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shamir, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 12:04:08 compute-0 nova_compute[257802]: 2025-10-02 12:04:08.066 2 DEBUG nova.objects.instance [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Lazy-loading 'migration_context' on Instance uuid 174d7548-b038-401c-82bf-8212037da6d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:04:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-bebd9a9f336e5c43a2ff85c2bfed40c4c891f7653ed2f39f1d318ed617affc21-merged.mount: Deactivated successfully.
Oct 02 12:04:08 compute-0 nova_compute[257802]: 2025-10-02 12:04:08.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:08 compute-0 nova_compute[257802]: 2025-10-02 12:04:08.262 2 DEBUG nova.virt.libvirt.driver [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:04:08 compute-0 nova_compute[257802]: 2025-10-02 12:04:08.262 2 DEBUG nova.virt.libvirt.driver [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Ensure instance console log exists: /var/lib/nova/instances/174d7548-b038-401c-82bf-8212037da6d1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:04:08 compute-0 nova_compute[257802]: 2025-10-02 12:04:08.263 2 DEBUG oslo_concurrency.lockutils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:04:08 compute-0 nova_compute[257802]: 2025-10-02 12:04:08.263 2 DEBUG oslo_concurrency.lockutils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:04:08 compute-0 nova_compute[257802]: 2025-10-02 12:04:08.264 2 DEBUG oslo_concurrency.lockutils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:04:08 compute-0 podman[276536]: 2025-10-02 12:04:08.554998105 +0000 UTC m=+2.533196953 container remove b114e0896f14c20524e3f2457cee03f10de420e6b5305fda03976c64930ae2d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:04:08 compute-0 systemd[1]: libpod-conmon-b114e0896f14c20524e3f2457cee03f10de420e6b5305fda03976c64930ae2d0.scope: Deactivated successfully.
Oct 02 12:04:08 compute-0 sudo[276335]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:08 compute-0 sudo[276680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:08 compute-0 sudo[276680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:08 compute-0 sudo[276680]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:08 compute-0 sudo[276705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:04:08 compute-0 sudo[276705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:08 compute-0 sudo[276705]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:08 compute-0 sudo[276730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:08 compute-0 sudo[276730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:08 compute-0 sudo[276730]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:08 compute-0 sudo[276755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:04:08 compute-0 sudo[276755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:08 compute-0 nova_compute[257802]: 2025-10-02 12:04:08.862 2 DEBUG nova.network.neutron [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 02 12:04:08 compute-0 nova_compute[257802]: 2025-10-02 12:04:08.862 2 DEBUG nova.compute.manager [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:04:08 compute-0 nova_compute[257802]: 2025-10-02 12:04:08.863 2 DEBUG nova.virt.libvirt.driver [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:04:08 compute-0 nova_compute[257802]: 2025-10-02 12:04:08.866 2 WARNING nova.virt.libvirt.driver [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:04:08 compute-0 nova_compute[257802]: 2025-10-02 12:04:08.876 2 DEBUG nova.virt.libvirt.host [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:04:08 compute-0 nova_compute[257802]: 2025-10-02 12:04:08.877 2 DEBUG nova.virt.libvirt.host [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:04:08 compute-0 nova_compute[257802]: 2025-10-02 12:04:08.880 2 DEBUG nova.virt.libvirt.host [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:04:08 compute-0 nova_compute[257802]: 2025-10-02 12:04:08.880 2 DEBUG nova.virt.libvirt.host [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:04:08 compute-0 nova_compute[257802]: 2025-10-02 12:04:08.881 2 DEBUG nova.virt.libvirt.driver [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:04:08 compute-0 nova_compute[257802]: 2025-10-02 12:04:08.881 2 DEBUG nova.virt.hardware [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:04:08 compute-0 nova_compute[257802]: 2025-10-02 12:04:08.881 2 DEBUG nova.virt.hardware [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:04:08 compute-0 nova_compute[257802]: 2025-10-02 12:04:08.882 2 DEBUG nova.virt.hardware [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:04:08 compute-0 nova_compute[257802]: 2025-10-02 12:04:08.882 2 DEBUG nova.virt.hardware [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:04:08 compute-0 nova_compute[257802]: 2025-10-02 12:04:08.882 2 DEBUG nova.virt.hardware [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:04:08 compute-0 nova_compute[257802]: 2025-10-02 12:04:08.882 2 DEBUG nova.virt.hardware [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:04:08 compute-0 nova_compute[257802]: 2025-10-02 12:04:08.882 2 DEBUG nova.virt.hardware [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:04:08 compute-0 nova_compute[257802]: 2025-10-02 12:04:08.882 2 DEBUG nova.virt.hardware [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:04:08 compute-0 nova_compute[257802]: 2025-10-02 12:04:08.882 2 DEBUG nova.virt.hardware [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:04:08 compute-0 nova_compute[257802]: 2025-10-02 12:04:08.882 2 DEBUG nova.virt.hardware [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:04:08 compute-0 nova_compute[257802]: 2025-10-02 12:04:08.883 2 DEBUG nova.virt.hardware [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:04:08 compute-0 nova_compute[257802]: 2025-10-02 12:04:08.885 2 DEBUG oslo_concurrency.processutils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:04:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 58 MiB data, 313 MiB used, 21 GiB / 21 GiB avail; 6.8 KiB/s rd, 874 KiB/s wr, 12 op/s
Oct 02 12:04:09 compute-0 ceph-mon[73607]: pgmap v1114: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 0 op/s
Oct 02 12:04:09 compute-0 podman[276836]: 2025-10-02 12:04:09.199087821 +0000 UTC m=+0.103205997 container create 94a6fb8d7f70e16806d5063d368923f7042e2594512bc9667bc688ce306f3019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 12:04:09 compute-0 podman[276836]: 2025-10-02 12:04:09.11715783 +0000 UTC m=+0.021276026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:04:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:09.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:09.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:09 compute-0 systemd[1]: Started libpod-conmon-94a6fb8d7f70e16806d5063d368923f7042e2594512bc9667bc688ce306f3019.scope.
Oct 02 12:04:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:04:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:04:09 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2740398130' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:04:09 compute-0 nova_compute[257802]: 2025-10-02 12:04:09.419 2 DEBUG oslo_concurrency.processutils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:04:09 compute-0 podman[276836]: 2025-10-02 12:04:09.442103585 +0000 UTC m=+0.346221761 container init 94a6fb8d7f70e16806d5063d368923f7042e2594512bc9667bc688ce306f3019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:04:09 compute-0 nova_compute[257802]: 2025-10-02 12:04:09.446 2 DEBUG nova.storage.rbd_utils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] rbd image 174d7548-b038-401c-82bf-8212037da6d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:04:09 compute-0 podman[276836]: 2025-10-02 12:04:09.448645546 +0000 UTC m=+0.352763722 container start 94a6fb8d7f70e16806d5063d368923f7042e2594512bc9667bc688ce306f3019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hawking, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:04:09 compute-0 nova_compute[257802]: 2025-10-02 12:04:09.456 2 DEBUG oslo_concurrency.processutils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:04:09 compute-0 musing_hawking[276855]: 167 167
Oct 02 12:04:09 compute-0 systemd[1]: libpod-94a6fb8d7f70e16806d5063d368923f7042e2594512bc9667bc688ce306f3019.scope: Deactivated successfully.
Oct 02 12:04:09 compute-0 podman[276836]: 2025-10-02 12:04:09.535430557 +0000 UTC m=+0.439548813 container attach 94a6fb8d7f70e16806d5063d368923f7042e2594512bc9667bc688ce306f3019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 12:04:09 compute-0 podman[276836]: 2025-10-02 12:04:09.536242526 +0000 UTC m=+0.440360702 container died 94a6fb8d7f70e16806d5063d368923f7042e2594512bc9667bc688ce306f3019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hawking, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 12:04:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-3562497f66771d1d65c0a1dd806f487eef280202517184c23b8d29e0baf09db8-merged.mount: Deactivated successfully.
Oct 02 12:04:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:04:09 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1269150505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:04:09 compute-0 nova_compute[257802]: 2025-10-02 12:04:09.883 2 DEBUG oslo_concurrency.processutils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:04:09 compute-0 nova_compute[257802]: 2025-10-02 12:04:09.885 2 DEBUG nova.objects.instance [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Lazy-loading 'pci_devices' on Instance uuid 174d7548-b038-401c-82bf-8212037da6d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:04:10 compute-0 podman[276836]: 2025-10-02 12:04:10.003613934 +0000 UTC m=+0.907732110 container remove 94a6fb8d7f70e16806d5063d368923f7042e2594512bc9667bc688ce306f3019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hawking, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 12:04:10 compute-0 systemd[1]: libpod-conmon-94a6fb8d7f70e16806d5063d368923f7042e2594512bc9667bc688ce306f3019.scope: Deactivated successfully.
Oct 02 12:04:10 compute-0 nova_compute[257802]: 2025-10-02 12:04:10.067 2 DEBUG nova.virt.libvirt.driver [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:04:10 compute-0 nova_compute[257802]:   <uuid>174d7548-b038-401c-82bf-8212037da6d1</uuid>
Oct 02 12:04:10 compute-0 nova_compute[257802]:   <name>instance-00000016</name>
Oct 02 12:04:10 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:04:10 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:04:10 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:04:10 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:       <nova:name>tempest-ServerDiagnosticsNegativeTest-server-2063887045</nova:name>
Oct 02 12:04:10 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:04:08</nova:creationTime>
Oct 02 12:04:10 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:04:10 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:04:10 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:04:10 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:04:10 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:04:10 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:04:10 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:04:10 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:04:10 compute-0 nova_compute[257802]:         <nova:user uuid="613aafef5cdf4917b8819d273f7ab163">tempest-ServerDiagnosticsNegativeTest-1562904644-project-member</nova:user>
Oct 02 12:04:10 compute-0 nova_compute[257802]:         <nova:project uuid="1e6ccaf042df413eb35c8d2bd20a6b6b">tempest-ServerDiagnosticsNegativeTest-1562904644</nova:project>
Oct 02 12:04:10 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:04:10 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:       <nova:ports/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:04:10 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:04:10 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <system>
Oct 02 12:04:10 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:04:10 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:04:10 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:04:10 compute-0 nova_compute[257802]:       <entry name="serial">174d7548-b038-401c-82bf-8212037da6d1</entry>
Oct 02 12:04:10 compute-0 nova_compute[257802]:       <entry name="uuid">174d7548-b038-401c-82bf-8212037da6d1</entry>
Oct 02 12:04:10 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     </system>
Oct 02 12:04:10 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:04:10 compute-0 nova_compute[257802]:   <os>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:   </os>
Oct 02 12:04:10 compute-0 nova_compute[257802]:   <features>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:   </features>
Oct 02 12:04:10 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:04:10 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:04:10 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:04:10 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/174d7548-b038-401c-82bf-8212037da6d1_disk">
Oct 02 12:04:10 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:       </source>
Oct 02 12:04:10 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:04:10 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:04:10 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:04:10 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/174d7548-b038-401c-82bf-8212037da6d1_disk.config">
Oct 02 12:04:10 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:       </source>
Oct 02 12:04:10 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:04:10 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:04:10 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:04:10 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/174d7548-b038-401c-82bf-8212037da6d1/console.log" append="off"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <video>
Oct 02 12:04:10 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     </video>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:04:10 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:04:10 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:04:10 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:04:10 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:04:10 compute-0 nova_compute[257802]: </domain>
Oct 02 12:04:10 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:04:10 compute-0 podman[276922]: 2025-10-02 12:04:10.171619038 +0000 UTC m=+0.024424073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:04:10 compute-0 nova_compute[257802]: 2025-10-02 12:04:10.312 2 DEBUG nova.virt.libvirt.driver [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:04:10 compute-0 nova_compute[257802]: 2025-10-02 12:04:10.313 2 DEBUG nova.virt.libvirt.driver [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:04:10 compute-0 nova_compute[257802]: 2025-10-02 12:04:10.313 2 INFO nova.virt.libvirt.driver [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Using config drive
Oct 02 12:04:10 compute-0 podman[276922]: 2025-10-02 12:04:10.340498274 +0000 UTC m=+0.193303259 container create 3d92818bdaac75b490fab7f8af44798ab3daca585a038e332893a94c74533ef6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:04:10 compute-0 nova_compute[257802]: 2025-10-02 12:04:10.384 2 DEBUG nova.storage.rbd_utils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] rbd image 174d7548-b038-401c-82bf-8212037da6d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:04:10 compute-0 ceph-mon[73607]: pgmap v1115: 305 pgs: 305 active+clean; 58 MiB data, 313 MiB used, 21 GiB / 21 GiB avail; 6.8 KiB/s rd, 874 KiB/s wr, 12 op/s
Oct 02 12:04:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2740398130' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:04:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1269150505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:04:10 compute-0 systemd[1]: Started libpod-conmon-3d92818bdaac75b490fab7f8af44798ab3daca585a038e332893a94c74533ef6.scope.
Oct 02 12:04:10 compute-0 nova_compute[257802]: 2025-10-02 12:04:10.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:04:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56eda95582e126dba9618eb90d9c4b76751f46ad8f8d9eef20155aaf1da3b49d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:04:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56eda95582e126dba9618eb90d9c4b76751f46ad8f8d9eef20155aaf1da3b49d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:04:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56eda95582e126dba9618eb90d9c4b76751f46ad8f8d9eef20155aaf1da3b49d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:04:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56eda95582e126dba9618eb90d9c4b76751f46ad8f8d9eef20155aaf1da3b49d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:04:10 compute-0 podman[276922]: 2025-10-02 12:04:10.659731948 +0000 UTC m=+0.512536953 container init 3d92818bdaac75b490fab7f8af44798ab3daca585a038e332893a94c74533ef6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mahavira, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:04:10 compute-0 podman[276922]: 2025-10-02 12:04:10.666049813 +0000 UTC m=+0.518854808 container start 3d92818bdaac75b490fab7f8af44798ab3daca585a038e332893a94c74533ef6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mahavira, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 12:04:10 compute-0 podman[276922]: 2025-10-02 12:04:10.727392626 +0000 UTC m=+0.580197631 container attach 3d92818bdaac75b490fab7f8af44798ab3daca585a038e332893a94c74533ef6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mahavira, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:04:10 compute-0 nova_compute[257802]: 2025-10-02 12:04:10.784 2 INFO nova.virt.libvirt.driver [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Creating config drive at /var/lib/nova/instances/174d7548-b038-401c-82bf-8212037da6d1/disk.config
Oct 02 12:04:10 compute-0 nova_compute[257802]: 2025-10-02 12:04:10.789 2 DEBUG oslo_concurrency.processutils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/174d7548-b038-401c-82bf-8212037da6d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7vjlgxdc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:04:10 compute-0 sudo[276964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:10 compute-0 sudo[276964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:10 compute-0 sudo[276964]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:10 compute-0 nova_compute[257802]: 2025-10-02 12:04:10.936 2 DEBUG oslo_concurrency.processutils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/174d7548-b038-401c-82bf-8212037da6d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7vjlgxdc" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:04:10 compute-0 sudo[276991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:10 compute-0 sudo[276991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:10 compute-0 sudo[276991]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:10 compute-0 nova_compute[257802]: 2025-10-02 12:04:10.966 2 DEBUG nova.storage.rbd_utils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] rbd image 174d7548-b038-401c-82bf-8212037da6d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:04:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 88 MiB data, 324 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 12:04:10 compute-0 nova_compute[257802]: 2025-10-02 12:04:10.970 2 DEBUG oslo_concurrency.processutils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/174d7548-b038-401c-82bf-8212037da6d1/disk.config 174d7548-b038-401c-82bf-8212037da6d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:04:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:11.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:11.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]: {
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]:     "1": [
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]:         {
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]:             "devices": [
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]:                 "/dev/loop3"
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]:             ],
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]:             "lv_name": "ceph_lv0",
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]:             "lv_size": "7511998464",
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]:             "name": "ceph_lv0",
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]:             "tags": {
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]:                 "ceph.cluster_name": "ceph",
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]:                 "ceph.crush_device_class": "",
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]:                 "ceph.encrypted": "0",
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]:                 "ceph.osd_id": "1",
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]:                 "ceph.type": "block",
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]:                 "ceph.vdo": "0"
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]:             },
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]:             "type": "block",
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]:             "vg_name": "ceph_vg0"
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]:         }
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]:     ]
Oct 02 12:04:11 compute-0 affectionate_mahavira[276958]: }
Oct 02 12:04:11 compute-0 systemd[1]: libpod-3d92818bdaac75b490fab7f8af44798ab3daca585a038e332893a94c74533ef6.scope: Deactivated successfully.
Oct 02 12:04:11 compute-0 podman[276922]: 2025-10-02 12:04:11.563404166 +0000 UTC m=+1.416209141 container died 3d92818bdaac75b490fab7f8af44798ab3daca585a038e332893a94c74533ef6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mahavira, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 12:04:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-56eda95582e126dba9618eb90d9c4b76751f46ad8f8d9eef20155aaf1da3b49d-merged.mount: Deactivated successfully.
Oct 02 12:04:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:04:12 compute-0 podman[276922]: 2025-10-02 12:04:12.373309913 +0000 UTC m=+2.226114908 container remove 3d92818bdaac75b490fab7f8af44798ab3daca585a038e332893a94c74533ef6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mahavira, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:04:12 compute-0 sudo[276755]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:12 compute-0 systemd[1]: libpod-conmon-3d92818bdaac75b490fab7f8af44798ab3daca585a038e332893a94c74533ef6.scope: Deactivated successfully.
Oct 02 12:04:12 compute-0 nova_compute[257802]: 2025-10-02 12:04:12.453 2 DEBUG oslo_concurrency.processutils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/174d7548-b038-401c-82bf-8212037da6d1/disk.config 174d7548-b038-401c-82bf-8212037da6d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:04:12 compute-0 nova_compute[257802]: 2025-10-02 12:04:12.454 2 INFO nova.virt.libvirt.driver [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Deleting local config drive /var/lib/nova/instances/174d7548-b038-401c-82bf-8212037da6d1/disk.config because it was imported into RBD.
Oct 02 12:04:12 compute-0 sudo[277071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:12 compute-0 sudo[277071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:12 compute-0 sudo[277071]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:12 compute-0 systemd-machined[211836]: New machine qemu-13-instance-00000016.
Oct 02 12:04:12 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-00000016.
Oct 02 12:04:12 compute-0 sudo[277101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:04:12 compute-0 sudo[277101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:12 compute-0 sudo[277101]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:12 compute-0 sudo[277130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:12 compute-0 sudo[277130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:12 compute-0 sudo[277130]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:04:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:04:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:04:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:04:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:04:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:04:12 compute-0 sudo[277160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:04:12 compute-0 sudo[277160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 88 MiB data, 324 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 12:04:13 compute-0 ceph-mon[73607]: pgmap v1116: 305 pgs: 305 active+clean; 88 MiB data, 324 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 12:04:13 compute-0 nova_compute[257802]: 2025-10-02 12:04:13.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:13 compute-0 podman[277244]: 2025-10-02 12:04:13.09443088 +0000 UTC m=+0.036836270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:04:13 compute-0 podman[277244]: 2025-10-02 12:04:13.216995682 +0000 UTC m=+0.159401022 container create 26e5c0994cc4414793c4dc85fbe23504572a0a8cacee17f768a365662a68c09e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_chaplygin, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:04:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:13.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:13.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:13 compute-0 systemd[1]: Started libpod-conmon-26e5c0994cc4414793c4dc85fbe23504572a0a8cacee17f768a365662a68c09e.scope.
Oct 02 12:04:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:04:13 compute-0 podman[277244]: 2025-10-02 12:04:13.601914017 +0000 UTC m=+0.544319377 container init 26e5c0994cc4414793c4dc85fbe23504572a0a8cacee17f768a365662a68c09e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_chaplygin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:04:13 compute-0 podman[277244]: 2025-10-02 12:04:13.610459287 +0000 UTC m=+0.552864627 container start 26e5c0994cc4414793c4dc85fbe23504572a0a8cacee17f768a365662a68c09e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 12:04:13 compute-0 quizzical_chaplygin[277283]: 167 167
Oct 02 12:04:13 compute-0 systemd[1]: libpod-26e5c0994cc4414793c4dc85fbe23504572a0a8cacee17f768a365662a68c09e.scope: Deactivated successfully.
Oct 02 12:04:13 compute-0 podman[277244]: 2025-10-02 12:04:13.68474959 +0000 UTC m=+0.627154930 container attach 26e5c0994cc4414793c4dc85fbe23504572a0a8cacee17f768a365662a68c09e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 12:04:13 compute-0 podman[277244]: 2025-10-02 12:04:13.68597474 +0000 UTC m=+0.628380100 container died 26e5c0994cc4414793c4dc85fbe23504572a0a8cacee17f768a365662a68c09e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:04:13 compute-0 nova_compute[257802]: 2025-10-02 12:04:13.828 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406653.8277676, 174d7548-b038-401c-82bf-8212037da6d1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:04:13 compute-0 nova_compute[257802]: 2025-10-02 12:04:13.828 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 174d7548-b038-401c-82bf-8212037da6d1] VM Resumed (Lifecycle Event)
Oct 02 12:04:13 compute-0 nova_compute[257802]: 2025-10-02 12:04:13.831 2 DEBUG nova.compute.manager [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:04:13 compute-0 nova_compute[257802]: 2025-10-02 12:04:13.831 2 DEBUG nova.virt.libvirt.driver [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:04:13 compute-0 nova_compute[257802]: 2025-10-02 12:04:13.835 2 INFO nova.virt.libvirt.driver [-] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Instance spawned successfully.
Oct 02 12:04:13 compute-0 nova_compute[257802]: 2025-10-02 12:04:13.835 2 DEBUG nova.virt.libvirt.driver [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:04:13 compute-0 nova_compute[257802]: 2025-10-02 12:04:13.881 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:04:13 compute-0 nova_compute[257802]: 2025-10-02 12:04:13.884 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:04:13 compute-0 nova_compute[257802]: 2025-10-02 12:04:13.906 2 DEBUG nova.virt.libvirt.driver [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:04:13 compute-0 nova_compute[257802]: 2025-10-02 12:04:13.907 2 DEBUG nova.virt.libvirt.driver [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:04:13 compute-0 nova_compute[257802]: 2025-10-02 12:04:13.908 2 DEBUG nova.virt.libvirt.driver [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:04:13 compute-0 nova_compute[257802]: 2025-10-02 12:04:13.908 2 DEBUG nova.virt.libvirt.driver [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:04:13 compute-0 nova_compute[257802]: 2025-10-02 12:04:13.909 2 DEBUG nova.virt.libvirt.driver [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:04:13 compute-0 nova_compute[257802]: 2025-10-02 12:04:13.909 2 DEBUG nova.virt.libvirt.driver [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:04:14 compute-0 nova_compute[257802]: 2025-10-02 12:04:14.030 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 174d7548-b038-401c-82bf-8212037da6d1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:04:14 compute-0 nova_compute[257802]: 2025-10-02 12:04:14.031 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406653.830842, 174d7548-b038-401c-82bf-8212037da6d1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:04:14 compute-0 nova_compute[257802]: 2025-10-02 12:04:14.031 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 174d7548-b038-401c-82bf-8212037da6d1] VM Started (Lifecycle Event)
Oct 02 12:04:14 compute-0 nova_compute[257802]: 2025-10-02 12:04:14.089 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:04:14 compute-0 nova_compute[257802]: 2025-10-02 12:04:14.093 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:04:14 compute-0 nova_compute[257802]: 2025-10-02 12:04:14.113 2 INFO nova.compute.manager [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Took 10.19 seconds to spawn the instance on the hypervisor.
Oct 02 12:04:14 compute-0 nova_compute[257802]: 2025-10-02 12:04:14.114 2 DEBUG nova.compute.manager [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:04:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-86206b16c2e1f462f99c497402c5cafc98c7bc572b16b0224d3bf6a5a00c0d03-merged.mount: Deactivated successfully.
Oct 02 12:04:14 compute-0 nova_compute[257802]: 2025-10-02 12:04:14.154 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 174d7548-b038-401c-82bf-8212037da6d1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:04:14 compute-0 nova_compute[257802]: 2025-10-02 12:04:14.365 2 INFO nova.compute.manager [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Took 21.35 seconds to build instance.
Oct 02 12:04:14 compute-0 nova_compute[257802]: 2025-10-02 12:04:14.419 2 DEBUG oslo_concurrency.lockutils [None req-4858cb47-bec5-4468-97c2-93d2d3a3a31a 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Lock "174d7548-b038-401c-82bf-8212037da6d1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:04:14 compute-0 nova_compute[257802]: 2025-10-02 12:04:14.420 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "174d7548-b038-401c-82bf-8212037da6d1" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 7.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:04:14 compute-0 nova_compute[257802]: 2025-10-02 12:04:14.421 2 INFO nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 174d7548-b038-401c-82bf-8212037da6d1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:04:14 compute-0 nova_compute[257802]: 2025-10-02 12:04:14.421 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "174d7548-b038-401c-82bf-8212037da6d1" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:04:14 compute-0 podman[277244]: 2025-10-02 12:04:14.561214708 +0000 UTC m=+1.503620048 container remove 26e5c0994cc4414793c4dc85fbe23504572a0a8cacee17f768a365662a68c09e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 12:04:14 compute-0 systemd[1]: libpod-conmon-26e5c0994cc4414793c4dc85fbe23504572a0a8cacee17f768a365662a68c09e.scope: Deactivated successfully.
Oct 02 12:04:14 compute-0 ceph-mon[73607]: pgmap v1117: 305 pgs: 305 active+clean; 88 MiB data, 324 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 12:04:14 compute-0 podman[277309]: 2025-10-02 12:04:14.709597688 +0000 UTC m=+0.023695996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:04:14 compute-0 podman[277309]: 2025-10-02 12:04:14.850981535 +0000 UTC m=+0.165079823 container create f44bde8d63badc2d9d384dcc490e159029b1a56e89a00c25e8e8a2a06898ffdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 12:04:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 88 MiB data, 324 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 02 12:04:15 compute-0 systemd[1]: Started libpod-conmon-f44bde8d63badc2d9d384dcc490e159029b1a56e89a00c25e8e8a2a06898ffdc.scope.
Oct 02 12:04:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb0c0929aaff086151eb22a545736ab59441e88924f99d01f2ffe9ed8c8b7b4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb0c0929aaff086151eb22a545736ab59441e88924f99d01f2ffe9ed8c8b7b4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb0c0929aaff086151eb22a545736ab59441e88924f99d01f2ffe9ed8c8b7b4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb0c0929aaff086151eb22a545736ab59441e88924f99d01f2ffe9ed8c8b7b4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:04:15 compute-0 podman[277309]: 2025-10-02 12:04:15.124565363 +0000 UTC m=+0.438663681 container init f44bde8d63badc2d9d384dcc490e159029b1a56e89a00c25e8e8a2a06898ffdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_sanderson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:04:15 compute-0 podman[277309]: 2025-10-02 12:04:15.131543655 +0000 UTC m=+0.445641943 container start f44bde8d63badc2d9d384dcc490e159029b1a56e89a00c25e8e8a2a06898ffdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_sanderson, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 12:04:15 compute-0 podman[277309]: 2025-10-02 12:04:15.228609559 +0000 UTC m=+0.542707847 container attach f44bde8d63badc2d9d384dcc490e159029b1a56e89a00c25e8e8a2a06898ffdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_sanderson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 12:04:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:15.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:15.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:15 compute-0 nova_compute[257802]: 2025-10-02 12:04:15.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:15 compute-0 silly_sanderson[277326]: {
Oct 02 12:04:15 compute-0 silly_sanderson[277326]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:04:15 compute-0 silly_sanderson[277326]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:04:15 compute-0 silly_sanderson[277326]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:04:15 compute-0 silly_sanderson[277326]:         "osd_id": 1,
Oct 02 12:04:15 compute-0 silly_sanderson[277326]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:04:15 compute-0 silly_sanderson[277326]:         "type": "bluestore"
Oct 02 12:04:15 compute-0 silly_sanderson[277326]:     }
Oct 02 12:04:15 compute-0 silly_sanderson[277326]: }
Oct 02 12:04:15 compute-0 systemd[1]: libpod-f44bde8d63badc2d9d384dcc490e159029b1a56e89a00c25e8e8a2a06898ffdc.scope: Deactivated successfully.
Oct 02 12:04:15 compute-0 podman[277309]: 2025-10-02 12:04:15.978857154 +0000 UTC m=+1.292955462 container died f44bde8d63badc2d9d384dcc490e159029b1a56e89a00c25e8e8a2a06898ffdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_sanderson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:04:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb0c0929aaff086151eb22a545736ab59441e88924f99d01f2ffe9ed8c8b7b4e-merged.mount: Deactivated successfully.
Oct 02 12:04:16 compute-0 podman[277309]: 2025-10-02 12:04:16.625016921 +0000 UTC m=+1.939115209 container remove f44bde8d63badc2d9d384dcc490e159029b1a56e89a00c25e8e8a2a06898ffdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:04:16 compute-0 sudo[277160]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:04:16 compute-0 systemd[1]: libpod-conmon-f44bde8d63badc2d9d384dcc490e159029b1a56e89a00c25e8e8a2a06898ffdc.scope: Deactivated successfully.
Oct 02 12:04:16 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:04:16 compute-0 nova_compute[257802]: 2025-10-02 12:04:16.892 2 DEBUG oslo_concurrency.lockutils [None req-4b9be177-5d13-48b1-b885-1827e1fcbc23 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Acquiring lock "174d7548-b038-401c-82bf-8212037da6d1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:04:16 compute-0 nova_compute[257802]: 2025-10-02 12:04:16.893 2 DEBUG oslo_concurrency.lockutils [None req-4b9be177-5d13-48b1-b885-1827e1fcbc23 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Lock "174d7548-b038-401c-82bf-8212037da6d1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:04:16 compute-0 nova_compute[257802]: 2025-10-02 12:04:16.894 2 DEBUG oslo_concurrency.lockutils [None req-4b9be177-5d13-48b1-b885-1827e1fcbc23 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Acquiring lock "174d7548-b038-401c-82bf-8212037da6d1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:04:16 compute-0 nova_compute[257802]: 2025-10-02 12:04:16.894 2 DEBUG oslo_concurrency.lockutils [None req-4b9be177-5d13-48b1-b885-1827e1fcbc23 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Lock "174d7548-b038-401c-82bf-8212037da6d1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:04:16 compute-0 nova_compute[257802]: 2025-10-02 12:04:16.895 2 DEBUG oslo_concurrency.lockutils [None req-4b9be177-5d13-48b1-b885-1827e1fcbc23 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Lock "174d7548-b038-401c-82bf-8212037da6d1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:04:16 compute-0 nova_compute[257802]: 2025-10-02 12:04:16.896 2 INFO nova.compute.manager [None req-4b9be177-5d13-48b1-b885-1827e1fcbc23 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Terminating instance
Oct 02 12:04:16 compute-0 nova_compute[257802]: 2025-10-02 12:04:16.897 2 DEBUG oslo_concurrency.lockutils [None req-4b9be177-5d13-48b1-b885-1827e1fcbc23 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Acquiring lock "refresh_cache-174d7548-b038-401c-82bf-8212037da6d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:04:16 compute-0 nova_compute[257802]: 2025-10-02 12:04:16.897 2 DEBUG oslo_concurrency.lockutils [None req-4b9be177-5d13-48b1-b885-1827e1fcbc23 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Acquired lock "refresh_cache-174d7548-b038-401c-82bf-8212037da6d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:04:16 compute-0 nova_compute[257802]: 2025-10-02 12:04:16.898 2 DEBUG nova.network.neutron [None req-4b9be177-5d13-48b1-b885-1827e1fcbc23 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:04:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:04:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 88 MiB data, 324 MiB used, 21 GiB / 21 GiB avail; 427 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Oct 02 12:04:17 compute-0 ceph-mon[73607]: pgmap v1118: 305 pgs: 305 active+clean; 88 MiB data, 324 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 02 12:04:17 compute-0 nova_compute[257802]: 2025-10-02 12:04:17.156 2 DEBUG nova.network.neutron [None req-4b9be177-5d13-48b1-b885-1827e1fcbc23 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:04:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:04:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:17.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:17.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:17 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:04:17 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 4627b6bc-f5dc-4533-be3d-e3b6d3ffc255 does not exist
Oct 02 12:04:17 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev efe8499b-c93b-460d-8840-500d51812901 does not exist
Oct 02 12:04:17 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 19139548-975a-4f48-97f8-0dd475d9506f does not exist
Oct 02 12:04:17 compute-0 sudo[277360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:17 compute-0 sudo[277360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:17 compute-0 sudo[277360]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:17 compute-0 sudo[277385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:04:17 compute-0 sudo[277385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:17 compute-0 sudo[277385]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:17 compute-0 nova_compute[257802]: 2025-10-02 12:04:17.895 2 DEBUG nova.network.neutron [None req-4b9be177-5d13-48b1-b885-1827e1fcbc23 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:04:17 compute-0 nova_compute[257802]: 2025-10-02 12:04:17.922 2 DEBUG oslo_concurrency.lockutils [None req-4b9be177-5d13-48b1-b885-1827e1fcbc23 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Releasing lock "refresh_cache-174d7548-b038-401c-82bf-8212037da6d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:04:17 compute-0 nova_compute[257802]: 2025-10-02 12:04:17.922 2 DEBUG nova.compute.manager [None req-4b9be177-5d13-48b1-b885-1827e1fcbc23 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:04:18 compute-0 nova_compute[257802]: 2025-10-02 12:04:18.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:18 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000016.scope: Deactivated successfully.
Oct 02 12:04:18 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000016.scope: Consumed 4.984s CPU time.
Oct 02 12:04:18 compute-0 systemd-machined[211836]: Machine qemu-13-instance-00000016 terminated.
Oct 02 12:04:18 compute-0 nova_compute[257802]: 2025-10-02 12:04:18.345 2 INFO nova.virt.libvirt.driver [-] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Instance destroyed successfully.
Oct 02 12:04:18 compute-0 nova_compute[257802]: 2025-10-02 12:04:18.346 2 DEBUG nova.objects.instance [None req-4b9be177-5d13-48b1-b885-1827e1fcbc23 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Lazy-loading 'resources' on Instance uuid 174d7548-b038-401c-82bf-8212037da6d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:04:18 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:04:18 compute-0 ceph-mon[73607]: pgmap v1119: 305 pgs: 305 active+clean; 88 MiB data, 324 MiB used, 21 GiB / 21 GiB avail; 427 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Oct 02 12:04:18 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:04:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 88 MiB data, 324 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Oct 02 12:04:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:19.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.002000048s ======
Oct 02 12:04:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:19.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Oct 02 12:04:20 compute-0 nova_compute[257802]: 2025-10-02 12:04:20.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:20 compute-0 ceph-mon[73607]: pgmap v1120: 305 pgs: 305 active+clean; 88 MiB data, 324 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Oct 02 12:04:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 72 MiB data, 324 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 954 KiB/s wr, 100 op/s
Oct 02 12:04:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:21.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:21 compute-0 nova_compute[257802]: 2025-10-02 12:04:21.287 2 INFO nova.virt.libvirt.driver [None req-4b9be177-5d13-48b1-b885-1827e1fcbc23 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Deleting instance files /var/lib/nova/instances/174d7548-b038-401c-82bf-8212037da6d1_del
Oct 02 12:04:21 compute-0 nova_compute[257802]: 2025-10-02 12:04:21.288 2 INFO nova.virt.libvirt.driver [None req-4b9be177-5d13-48b1-b885-1827e1fcbc23 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Deletion of /var/lib/nova/instances/174d7548-b038-401c-82bf-8212037da6d1_del complete
Oct 02 12:04:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:21.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:21 compute-0 nova_compute[257802]: 2025-10-02 12:04:21.373 2 INFO nova.compute.manager [None req-4b9be177-5d13-48b1-b885-1827e1fcbc23 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Took 3.45 seconds to destroy the instance on the hypervisor.
Oct 02 12:04:21 compute-0 nova_compute[257802]: 2025-10-02 12:04:21.374 2 DEBUG oslo.service.loopingcall [None req-4b9be177-5d13-48b1-b885-1827e1fcbc23 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:04:21 compute-0 nova_compute[257802]: 2025-10-02 12:04:21.374 2 DEBUG nova.compute.manager [-] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:04:21 compute-0 nova_compute[257802]: 2025-10-02 12:04:21.374 2 DEBUG nova.network.neutron [-] [instance: 174d7548-b038-401c-82bf-8212037da6d1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:04:21 compute-0 nova_compute[257802]: 2025-10-02 12:04:21.724 2 DEBUG nova.network.neutron [-] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:04:21 compute-0 nova_compute[257802]: 2025-10-02 12:04:21.743 2 DEBUG nova.network.neutron [-] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:04:21 compute-0 nova_compute[257802]: 2025-10-02 12:04:21.757 2 INFO nova.compute.manager [-] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Took 0.38 seconds to deallocate network for instance.
Oct 02 12:04:21 compute-0 nova_compute[257802]: 2025-10-02 12:04:21.807 2 DEBUG oslo_concurrency.lockutils [None req-4b9be177-5d13-48b1-b885-1827e1fcbc23 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:04:21 compute-0 nova_compute[257802]: 2025-10-02 12:04:21.808 2 DEBUG oslo_concurrency.lockutils [None req-4b9be177-5d13-48b1-b885-1827e1fcbc23 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:04:21 compute-0 nova_compute[257802]: 2025-10-02 12:04:21.869 2 DEBUG oslo_concurrency.processutils [None req-4b9be177-5d13-48b1-b885-1827e1fcbc23 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:04:21 compute-0 ovn_controller[148183]: 2025-10-02T12:04:21Z|00124|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 02 12:04:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:04:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:04:22 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1953411920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:04:22 compute-0 nova_compute[257802]: 2025-10-02 12:04:22.291 2 DEBUG oslo_concurrency.processutils [None req-4b9be177-5d13-48b1-b885-1827e1fcbc23 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:04:22 compute-0 nova_compute[257802]: 2025-10-02 12:04:22.296 2 DEBUG nova.compute.provider_tree [None req-4b9be177-5d13-48b1-b885-1827e1fcbc23 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:04:22 compute-0 nova_compute[257802]: 2025-10-02 12:04:22.322 2 DEBUG nova.scheduler.client.report [None req-4b9be177-5d13-48b1-b885-1827e1fcbc23 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:04:22 compute-0 nova_compute[257802]: 2025-10-02 12:04:22.386 2 DEBUG oslo_concurrency.lockutils [None req-4b9be177-5d13-48b1-b885-1827e1fcbc23 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:04:22 compute-0 nova_compute[257802]: 2025-10-02 12:04:22.424 2 INFO nova.scheduler.client.report [None req-4b9be177-5d13-48b1-b885-1827e1fcbc23 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Deleted allocations for instance 174d7548-b038-401c-82bf-8212037da6d1
Oct 02 12:04:22 compute-0 nova_compute[257802]: 2025-10-02 12:04:22.497 2 DEBUG oslo_concurrency.lockutils [None req-4b9be177-5d13-48b1-b885-1827e1fcbc23 613aafef5cdf4917b8819d273f7ab163 1e6ccaf042df413eb35c8d2bd20a6b6b - - default default] Lock "174d7548-b038-401c-82bf-8212037da6d1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:04:22 compute-0 ceph-mon[73607]: pgmap v1121: 305 pgs: 305 active+clean; 72 MiB data, 324 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 954 KiB/s wr, 100 op/s
Oct 02 12:04:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1953411920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:04:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 55 MiB data, 313 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 90 op/s
Oct 02 12:04:23 compute-0 nova_compute[257802]: 2025-10-02 12:04:23.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:23.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:23.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:24 compute-0 ceph-mon[73607]: pgmap v1122: 305 pgs: 305 active+clean; 55 MiB data, 313 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 90 op/s
Oct 02 12:04:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 41 MiB data, 307 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 95 op/s
Oct 02 12:04:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:25.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:25.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:25 compute-0 nova_compute[257802]: 2025-10-02 12:04:25.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:25 compute-0 nova_compute[257802]: 2025-10-02 12:04:25.627 2 DEBUG oslo_concurrency.lockutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Acquiring lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:04:25 compute-0 nova_compute[257802]: 2025-10-02 12:04:25.628 2 DEBUG oslo_concurrency.lockutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:04:25 compute-0 nova_compute[257802]: 2025-10-02 12:04:25.649 2 DEBUG nova.compute.manager [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:04:25 compute-0 nova_compute[257802]: 2025-10-02 12:04:25.732 2 DEBUG oslo_concurrency.lockutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:04:25 compute-0 nova_compute[257802]: 2025-10-02 12:04:25.732 2 DEBUG oslo_concurrency.lockutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:04:25 compute-0 nova_compute[257802]: 2025-10-02 12:04:25.738 2 DEBUG nova.virt.hardware [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:04:25 compute-0 nova_compute[257802]: 2025-10-02 12:04:25.738 2 INFO nova.compute.claims [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:04:25 compute-0 nova_compute[257802]: 2025-10-02 12:04:25.830 2 DEBUG oslo_concurrency.processutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:04:26 compute-0 ceph-mon[73607]: pgmap v1123: 305 pgs: 305 active+clean; 41 MiB data, 307 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 95 op/s
Oct 02 12:04:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:04:26 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2109289417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:04:26 compute-0 nova_compute[257802]: 2025-10-02 12:04:26.234 2 DEBUG oslo_concurrency.processutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:04:26 compute-0 nova_compute[257802]: 2025-10-02 12:04:26.239 2 DEBUG nova.compute.provider_tree [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:04:26 compute-0 nova_compute[257802]: 2025-10-02 12:04:26.308 2 DEBUG nova.scheduler.client.report [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:04:26 compute-0 nova_compute[257802]: 2025-10-02 12:04:26.381 2 DEBUG oslo_concurrency.lockutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:04:26 compute-0 nova_compute[257802]: 2025-10-02 12:04:26.382 2 DEBUG nova.compute.manager [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:04:26 compute-0 nova_compute[257802]: 2025-10-02 12:04:26.480 2 DEBUG nova.compute.manager [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:04:26 compute-0 nova_compute[257802]: 2025-10-02 12:04:26.480 2 DEBUG nova.network.neutron [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:04:26 compute-0 nova_compute[257802]: 2025-10-02 12:04:26.562 2 INFO nova.virt.libvirt.driver [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:04:26 compute-0 nova_compute[257802]: 2025-10-02 12:04:26.687 2 DEBUG nova.compute.manager [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:04:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:26.922 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:04:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:26.922 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:04:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:26.922 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:04:26 compute-0 nova_compute[257802]: 2025-10-02 12:04:26.962 2 DEBUG nova.policy [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8850add40b254d198f270d9e64c777d5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9afa78cc4dec419babdf61fd31f46e28', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:04:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 95 op/s
Oct 02 12:04:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2109289417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:04:27 compute-0 nova_compute[257802]: 2025-10-02 12:04:27.205 2 DEBUG nova.compute.manager [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:04:27 compute-0 nova_compute[257802]: 2025-10-02 12:04:27.206 2 DEBUG nova.virt.libvirt.driver [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:04:27 compute-0 nova_compute[257802]: 2025-10-02 12:04:27.207 2 INFO nova.virt.libvirt.driver [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Creating image(s)
Oct 02 12:04:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:04:27 compute-0 nova_compute[257802]: 2025-10-02 12:04:27.232 2 DEBUG nova.storage.rbd_utils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] rbd image 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:04:27 compute-0 nova_compute[257802]: 2025-10-02 12:04:27.258 2 DEBUG nova.storage.rbd_utils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] rbd image 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:04:27 compute-0 nova_compute[257802]: 2025-10-02 12:04:27.282 2 DEBUG nova.storage.rbd_utils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] rbd image 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:04:27 compute-0 nova_compute[257802]: 2025-10-02 12:04:27.285 2 DEBUG oslo_concurrency.processutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:04:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:27.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:27 compute-0 nova_compute[257802]: 2025-10-02 12:04:27.344 2 DEBUG oslo_concurrency.processutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:04:27 compute-0 nova_compute[257802]: 2025-10-02 12:04:27.345 2 DEBUG oslo_concurrency.lockutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:04:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:27 compute-0 nova_compute[257802]: 2025-10-02 12:04:27.345 2 DEBUG oslo_concurrency.lockutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:04:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:27.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:27 compute-0 nova_compute[257802]: 2025-10-02 12:04:27.346 2 DEBUG oslo_concurrency.lockutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:04:27 compute-0 nova_compute[257802]: 2025-10-02 12:04:27.372 2 DEBUG nova.storage.rbd_utils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] rbd image 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:04:27 compute-0 nova_compute[257802]: 2025-10-02 12:04:27.375 2 DEBUG oslo_concurrency.processutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:04:27 compute-0 nova_compute[257802]: 2025-10-02 12:04:27.792 2 DEBUG oslo_concurrency.processutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:04:27 compute-0 nova_compute[257802]: 2025-10-02 12:04:27.881 2 DEBUG nova.storage.rbd_utils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] resizing rbd image 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:04:28 compute-0 nova_compute[257802]: 2025-10-02 12:04:28.053 2 DEBUG nova.network.neutron [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Successfully created port: 869b835b-1179-4864-abfe-fc542b215555 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:04:28 compute-0 nova_compute[257802]: 2025-10-02 12:04:28.134 2 DEBUG nova.objects.instance [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lazy-loading 'migration_context' on Instance uuid 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:04:28 compute-0 nova_compute[257802]: 2025-10-02 12:04:28.155 2 DEBUG nova.virt.libvirt.driver [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:04:28 compute-0 nova_compute[257802]: 2025-10-02 12:04:28.156 2 DEBUG nova.virt.libvirt.driver [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Ensure instance console log exists: /var/lib/nova/instances/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:04:28 compute-0 nova_compute[257802]: 2025-10-02 12:04:28.157 2 DEBUG oslo_concurrency.lockutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:04:28 compute-0 nova_compute[257802]: 2025-10-02 12:04:28.157 2 DEBUG oslo_concurrency.lockutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:04:28 compute-0 nova_compute[257802]: 2025-10-02 12:04:28.157 2 DEBUG oslo_concurrency.lockutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:04:28 compute-0 nova_compute[257802]: 2025-10-02 12:04:28.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:28 compute-0 ceph-mon[73607]: pgmap v1124: 305 pgs: 305 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 95 op/s
Oct 02 12:04:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 68 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 929 KiB/s wr, 89 op/s
Oct 02 12:04:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:29.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1311168690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:04:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:29.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:29 compute-0 nova_compute[257802]: 2025-10-02 12:04:29.480 2 DEBUG nova.network.neutron [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Successfully updated port: 869b835b-1179-4864-abfe-fc542b215555 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:04:29 compute-0 nova_compute[257802]: 2025-10-02 12:04:29.525 2 DEBUG oslo_concurrency.lockutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Acquiring lock "refresh_cache-66fec68d-e11d-4f9f-9cea-a0358d8f2ae0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:04:29 compute-0 nova_compute[257802]: 2025-10-02 12:04:29.525 2 DEBUG oslo_concurrency.lockutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Acquired lock "refresh_cache-66fec68d-e11d-4f9f-9cea-a0358d8f2ae0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:04:29 compute-0 nova_compute[257802]: 2025-10-02 12:04:29.526 2 DEBUG nova.network.neutron [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:04:29 compute-0 podman[277648]: 2025-10-02 12:04:29.927894715 +0000 UTC m=+0.060714288 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:04:29 compute-0 nova_compute[257802]: 2025-10-02 12:04:29.972 2 DEBUG nova.compute.manager [req-f8a91347-9319-4f79-8554-14d0afff53ec req-585e2873-3e6a-4586-b8f8-2df6a05be5df d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Received event network-changed-869b835b-1179-4864-abfe-fc542b215555 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:04:29 compute-0 nova_compute[257802]: 2025-10-02 12:04:29.972 2 DEBUG nova.compute.manager [req-f8a91347-9319-4f79-8554-14d0afff53ec req-585e2873-3e6a-4586-b8f8-2df6a05be5df d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Refreshing instance network info cache due to event network-changed-869b835b-1179-4864-abfe-fc542b215555. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:04:29 compute-0 nova_compute[257802]: 2025-10-02 12:04:29.973 2 DEBUG oslo_concurrency.lockutils [req-f8a91347-9319-4f79-8554-14d0afff53ec req-585e2873-3e6a-4586-b8f8-2df6a05be5df d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-66fec68d-e11d-4f9f-9cea-a0358d8f2ae0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:04:30 compute-0 nova_compute[257802]: 2025-10-02 12:04:30.043 2 DEBUG nova.network.neutron [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:04:30 compute-0 nova_compute[257802]: 2025-10-02 12:04:30.301 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:04:30 compute-0 ceph-mon[73607]: pgmap v1125: 305 pgs: 305 active+clean; 68 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 929 KiB/s wr, 89 op/s
Oct 02 12:04:30 compute-0 nova_compute[257802]: 2025-10-02 12:04:30.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 92 MiB data, 313 MiB used, 21 GiB / 21 GiB avail; 707 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct 02 12:04:31 compute-0 sudo[277668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:31 compute-0 sudo[277668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:31 compute-0 sudo[277668]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:31 compute-0 sudo[277693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:31 compute-0 sudo[277693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:31 compute-0 sudo[277693]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:31 compute-0 nova_compute[257802]: 2025-10-02 12:04:31.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:04:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:31.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:31.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.008 2 DEBUG nova.network.neutron [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Updating instance_info_cache with network_info: [{"id": "869b835b-1179-4864-abfe-fc542b215555", "address": "fa:16:3e:1d:d7:20", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap869b835b-11", "ovs_interfaceid": "869b835b-1179-4864-abfe-fc542b215555", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.032 2 DEBUG oslo_concurrency.lockutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Releasing lock "refresh_cache-66fec68d-e11d-4f9f-9cea-a0358d8f2ae0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.033 2 DEBUG nova.compute.manager [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Instance network_info: |[{"id": "869b835b-1179-4864-abfe-fc542b215555", "address": "fa:16:3e:1d:d7:20", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap869b835b-11", "ovs_interfaceid": "869b835b-1179-4864-abfe-fc542b215555", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.033 2 DEBUG oslo_concurrency.lockutils [req-f8a91347-9319-4f79-8554-14d0afff53ec req-585e2873-3e6a-4586-b8f8-2df6a05be5df d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-66fec68d-e11d-4f9f-9cea-a0358d8f2ae0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.034 2 DEBUG nova.network.neutron [req-f8a91347-9319-4f79-8554-14d0afff53ec req-585e2873-3e6a-4586-b8f8-2df6a05be5df d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Refreshing network info cache for port 869b835b-1179-4864-abfe-fc542b215555 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.036 2 DEBUG nova.virt.libvirt.driver [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Start _get_guest_xml network_info=[{"id": "869b835b-1179-4864-abfe-fc542b215555", "address": "fa:16:3e:1d:d7:20", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap869b835b-11", "ovs_interfaceid": "869b835b-1179-4864-abfe-fc542b215555", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.042 2 WARNING nova.virt.libvirt.driver [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.048 2 DEBUG nova.virt.libvirt.host [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.048 2 DEBUG nova.virt.libvirt.host [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.050 2 DEBUG nova.virt.libvirt.host [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.051 2 DEBUG nova.virt.libvirt.host [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.051 2 DEBUG nova.virt.libvirt.driver [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.052 2 DEBUG nova.virt.hardware [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.052 2 DEBUG nova.virt.hardware [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.052 2 DEBUG nova.virt.hardware [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.052 2 DEBUG nova.virt.hardware [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.053 2 DEBUG nova.virt.hardware [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.053 2 DEBUG nova.virt.hardware [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.053 2 DEBUG nova.virt.hardware [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.053 2 DEBUG nova.virt.hardware [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.053 2 DEBUG nova.virt.hardware [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.054 2 DEBUG nova.virt.hardware [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.054 2 DEBUG nova.virt.hardware [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.056 2 DEBUG oslo_concurrency.processutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:04:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:04:32 compute-0 ceph-mon[73607]: pgmap v1126: 305 pgs: 305 active+clean; 92 MiB data, 313 MiB used, 21 GiB / 21 GiB avail; 707 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct 02 12:04:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:04:32 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1546192226' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.533 2 DEBUG oslo_concurrency.processutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.559 2 DEBUG nova.storage.rbd_utils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] rbd image 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.563 2 DEBUG oslo_concurrency.processutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:04:32 compute-0 podman[277780]: 2025-10-02 12:04:32.922604579 +0000 UTC m=+0.049600624 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid)
Oct 02 12:04:32 compute-0 podman[277779]: 2025-10-02 12:04:32.927634504 +0000 UTC m=+0.062456912 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 12:04:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 125 MiB data, 336 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 3.1 MiB/s wr, 56 op/s
Oct 02 12:04:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:04:32 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/773290048' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.996 2 DEBUG oslo_concurrency.processutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.998 2 DEBUG nova.virt.libvirt.vif [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:04:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-167670540',display_name='tempest-ServersAdminTestJSON-server-167670540',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-167670540',id=23,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9afa78cc4dec419babdf61fd31f46e28',ramdisk_id='',reservation_id='r-fpp9adrm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-518249049',owner_user_name='tempest-ServersAdminTestJSON-518249049-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:04:26Z,user_data=None,user_id='8850add40b254d198f270d9e64c777d5',uuid=66fec68d-e11d-4f9f-9cea-a0358d8f2ae0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "869b835b-1179-4864-abfe-fc542b215555", "address": "fa:16:3e:1d:d7:20", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap869b835b-11", "ovs_interfaceid": "869b835b-1179-4864-abfe-fc542b215555", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:04:32 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.999 2 DEBUG nova.network.os_vif_util [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Converting VIF {"id": "869b835b-1179-4864-abfe-fc542b215555", "address": "fa:16:3e:1d:d7:20", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap869b835b-11", "ovs_interfaceid": "869b835b-1179-4864-abfe-fc542b215555", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:32.999 2 DEBUG nova.network.os_vif_util [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:d7:20,bridge_name='br-int',has_traffic_filtering=True,id=869b835b-1179-4864-abfe-fc542b215555,network=Network(80106802-d877-42c6-b2a9-50b050f6b08f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap869b835b-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.000 2 DEBUG nova.objects.instance [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lazy-loading 'pci_devices' on Instance uuid 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.017 2 DEBUG nova.virt.libvirt.driver [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:04:33 compute-0 nova_compute[257802]:   <uuid>66fec68d-e11d-4f9f-9cea-a0358d8f2ae0</uuid>
Oct 02 12:04:33 compute-0 nova_compute[257802]:   <name>instance-00000017</name>
Oct 02 12:04:33 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:04:33 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:04:33 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:04:33 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:       <nova:name>tempest-ServersAdminTestJSON-server-167670540</nova:name>
Oct 02 12:04:33 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:04:32</nova:creationTime>
Oct 02 12:04:33 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:04:33 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:04:33 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:04:33 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:04:33 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:04:33 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:04:33 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:04:33 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:04:33 compute-0 nova_compute[257802]:         <nova:user uuid="8850add40b254d198f270d9e64c777d5">tempest-ServersAdminTestJSON-518249049-project-member</nova:user>
Oct 02 12:04:33 compute-0 nova_compute[257802]:         <nova:project uuid="9afa78cc4dec419babdf61fd31f46e28">tempest-ServersAdminTestJSON-518249049</nova:project>
Oct 02 12:04:33 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:04:33 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:04:33 compute-0 nova_compute[257802]:         <nova:port uuid="869b835b-1179-4864-abfe-fc542b215555">
Oct 02 12:04:33 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:04:33 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:04:33 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:04:33 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <system>
Oct 02 12:04:33 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:04:33 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:04:33 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:04:33 compute-0 nova_compute[257802]:       <entry name="serial">66fec68d-e11d-4f9f-9cea-a0358d8f2ae0</entry>
Oct 02 12:04:33 compute-0 nova_compute[257802]:       <entry name="uuid">66fec68d-e11d-4f9f-9cea-a0358d8f2ae0</entry>
Oct 02 12:04:33 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     </system>
Oct 02 12:04:33 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:04:33 compute-0 nova_compute[257802]:   <os>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:   </os>
Oct 02 12:04:33 compute-0 nova_compute[257802]:   <features>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:   </features>
Oct 02 12:04:33 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:04:33 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:04:33 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:04:33 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk">
Oct 02 12:04:33 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:       </source>
Oct 02 12:04:33 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:04:33 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:04:33 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:04:33 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk.config">
Oct 02 12:04:33 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:       </source>
Oct 02 12:04:33 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:04:33 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:04:33 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:04:33 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:1d:d7:20"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:       <target dev="tap869b835b-11"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:04:33 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0/console.log" append="off"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <video>
Oct 02 12:04:33 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     </video>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:04:33 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:04:33 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:04:33 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:04:33 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:04:33 compute-0 nova_compute[257802]: </domain>
Oct 02 12:04:33 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.018 2 DEBUG nova.compute.manager [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Preparing to wait for external event network-vif-plugged-869b835b-1179-4864-abfe-fc542b215555 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.018 2 DEBUG oslo_concurrency.lockutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Acquiring lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.019 2 DEBUG oslo_concurrency.lockutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.019 2 DEBUG oslo_concurrency.lockutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.020 2 DEBUG nova.virt.libvirt.vif [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:04:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-167670540',display_name='tempest-ServersAdminTestJSON-server-167670540',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-167670540',id=23,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9afa78cc4dec419babdf61fd31f46e28',ramdisk_id='',reservation_id='r-fpp9adrm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-518249049',owner_user_name='tempest-ServersAdminTestJSON-518249049-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:04:26Z,user_data=None,user_id='8850add40b254d198f270d9e64c777d5',uuid=66fec68d-e11d-4f9f-9cea-a0358d8f2ae0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "869b835b-1179-4864-abfe-fc542b215555", "address": "fa:16:3e:1d:d7:20", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap869b835b-11", "ovs_interfaceid": "869b835b-1179-4864-abfe-fc542b215555", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.020 2 DEBUG nova.network.os_vif_util [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Converting VIF {"id": "869b835b-1179-4864-abfe-fc542b215555", "address": "fa:16:3e:1d:d7:20", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap869b835b-11", "ovs_interfaceid": "869b835b-1179-4864-abfe-fc542b215555", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.020 2 DEBUG nova.network.os_vif_util [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:d7:20,bridge_name='br-int',has_traffic_filtering=True,id=869b835b-1179-4864-abfe-fc542b215555,network=Network(80106802-d877-42c6-b2a9-50b050f6b08f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap869b835b-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.021 2 DEBUG os_vif [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:d7:20,bridge_name='br-int',has_traffic_filtering=True,id=869b835b-1179-4864-abfe-fc542b215555,network=Network(80106802-d877-42c6-b2a9-50b050f6b08f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap869b835b-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.022 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.022 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.026 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap869b835b-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.027 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap869b835b-11, col_values=(('external_ids', {'iface-id': '869b835b-1179-4864-abfe-fc542b215555', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1d:d7:20', 'vm-uuid': '66fec68d-e11d-4f9f-9cea-a0358d8f2ae0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:04:33 compute-0 NetworkManager[44987]: <info>  [1759406673.0293] manager: (tap869b835b-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.036 2 INFO os_vif [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:d7:20,bridge_name='br-int',has_traffic_filtering=True,id=869b835b-1179-4864-abfe-fc542b215555,network=Network(80106802-d877-42c6-b2a9-50b050f6b08f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap869b835b-11')
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.129 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.130 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.130 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.136 2 DEBUG nova.virt.libvirt.driver [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.137 2 DEBUG nova.virt.libvirt.driver [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.137 2 DEBUG nova.virt.libvirt.driver [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] No VIF found with MAC fa:16:3e:1d:d7:20, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.138 2 INFO nova.virt.libvirt.driver [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Using config drive
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.157 2 DEBUG nova.storage.rbd_utils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] rbd image 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:04:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:33.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.344 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406658.344085, 174d7548-b038-401c-82bf-8212037da6d1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.345 2 INFO nova.compute.manager [-] [instance: 174d7548-b038-401c-82bf-8212037da6d1] VM Stopped (Lifecycle Event)
Oct 02 12:04:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:33.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.368 2 DEBUG nova.compute.manager [None req-16c6cfff-ef4e-4542-9ec5-aec1c30ab4a5 - - - - - -] [instance: 174d7548-b038-401c-82bf-8212037da6d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:04:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1546192226' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:04:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/773290048' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:04:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4190072840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.640 2 INFO nova.virt.libvirt.driver [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Creating config drive at /var/lib/nova/instances/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0/disk.config
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.646 2 DEBUG oslo_concurrency.processutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmparokxg4y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.771 2 DEBUG oslo_concurrency.processutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmparokxg4y" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.805 2 DEBUG nova.storage.rbd_utils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] rbd image 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.809 2 DEBUG oslo_concurrency.processutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0/disk.config 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.975 2 DEBUG oslo_concurrency.processutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0/disk.config 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:04:33 compute-0 nova_compute[257802]: 2025-10-02 12:04:33.976 2 INFO nova.virt.libvirt.driver [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Deleting local config drive /var/lib/nova/instances/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0/disk.config because it was imported into RBD.
Oct 02 12:04:34 compute-0 kernel: tap869b835b-11: entered promiscuous mode
Oct 02 12:04:34 compute-0 NetworkManager[44987]: <info>  [1759406674.0207] manager: (tap869b835b-11): new Tun device (/org/freedesktop/NetworkManager/Devices/61)
Oct 02 12:04:34 compute-0 nova_compute[257802]: 2025-10-02 12:04:34.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:34 compute-0 ovn_controller[148183]: 2025-10-02T12:04:34Z|00125|binding|INFO|Claiming lport 869b835b-1179-4864-abfe-fc542b215555 for this chassis.
Oct 02 12:04:34 compute-0 ovn_controller[148183]: 2025-10-02T12:04:34Z|00126|binding|INFO|869b835b-1179-4864-abfe-fc542b215555: Claiming fa:16:3e:1d:d7:20 10.100.0.9
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:34.042 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:d7:20 10.100.0.9'], port_security=['fa:16:3e:1d:d7:20 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '66fec68d-e11d-4f9f-9cea-a0358d8f2ae0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80106802-d877-42c6-b2a9-50b050f6b08f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9afa78cc4dec419babdf61fd31f46e28', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8fbb5420-10f4-405b-bd01-713020f7e518', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=03aa6f10-2374-4fa3-bc90-1fcb8815afb8, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=869b835b-1179-4864-abfe-fc542b215555) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:34.043 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 869b835b-1179-4864-abfe-fc542b215555 in datapath 80106802-d877-42c6-b2a9-50b050f6b08f bound to our chassis
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:34.044 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 80106802-d877-42c6-b2a9-50b050f6b08f
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:34.056 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b72ee4c5-d04a-4171-91dd-5df86114fdf2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:34.057 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap80106802-d1 in ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:04:34 compute-0 systemd-udevd[277892]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:34.060 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap80106802-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:34.060 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f1fe98fb-68d7-4e2a-8c9c-a124f7809a83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:34.061 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[da50e796-cad1-44fe-9f90-11ef2544f9aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:04:34 compute-0 systemd-machined[211836]: New machine qemu-14-instance-00000017.
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:34.077 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[d74d8a67-8f55-4c86-a9e9-cd47ee7a9770]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:04:34 compute-0 NetworkManager[44987]: <info>  [1759406674.0817] device (tap869b835b-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:04:34 compute-0 NetworkManager[44987]: <info>  [1759406674.0830] device (tap869b835b-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:34.103 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b5ba4913-586e-46ee-a6cf-541603eac636]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:04:34 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-00000017.
Oct 02 12:04:34 compute-0 nova_compute[257802]: 2025-10-02 12:04:34.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:34 compute-0 ovn_controller[148183]: 2025-10-02T12:04:34Z|00127|binding|INFO|Setting lport 869b835b-1179-4864-abfe-fc542b215555 ovn-installed in OVS
Oct 02 12:04:34 compute-0 ovn_controller[148183]: 2025-10-02T12:04:34Z|00128|binding|INFO|Setting lport 869b835b-1179-4864-abfe-fc542b215555 up in Southbound
Oct 02 12:04:34 compute-0 nova_compute[257802]: 2025-10-02 12:04:34.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:34.133 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[1939cd5d-f7eb-4842-b43b-470ad083ff7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:04:34 compute-0 NetworkManager[44987]: <info>  [1759406674.1477] manager: (tap80106802-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/62)
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:34.146 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c14d1f5b-7bcc-42fb-9378-813734b937ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:34.175 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[edef2555-8187-42f1-b9e5-7a8ee92c019c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:34.178 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[5c826f97-4aee-4a99-8b8f-a15d73d7d046]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:04:34 compute-0 NetworkManager[44987]: <info>  [1759406674.2035] device (tap80106802-d0): carrier: link connected
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:34.213 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[07896548-148e-4559-9714-55269231c204]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:34.233 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[aee5dc30-7ee1-4e57-ae66-48dfed6357e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80106802-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:27:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474182, 'reachable_time': 44687, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277925, 'error': None, 'target': 'ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:34.252 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ac3b05e5-8288-40ea-b7d3-8fb990587442]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feba:27b6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474182, 'tstamp': 474182}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277926, 'error': None, 'target': 'ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:34.268 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b4ef2e4e-2d39-4c49-a4d0-d4a1513565f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80106802-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:27:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474182, 'reachable_time': 44687, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277927, 'error': None, 'target': 'ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:34.299 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8d7c57e7-3c41-41d5-b880-d89327ddf444]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:34.364 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[df377727-2827-4f63-a3e5-0a1920e9f76b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:34.366 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80106802-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:34.366 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:34.366 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap80106802-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:04:34 compute-0 kernel: tap80106802-d0: entered promiscuous mode
Oct 02 12:04:34 compute-0 nova_compute[257802]: 2025-10-02 12:04:34.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:34 compute-0 NetworkManager[44987]: <info>  [1759406674.3686] manager: (tap80106802-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:34.370 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap80106802-d0, col_values=(('external_ids', {'iface-id': '3e3f512e-f85f-4c9c-b91d-072c570470c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:04:34 compute-0 ovn_controller[148183]: 2025-10-02T12:04:34Z|00129|binding|INFO|Releasing lport 3e3f512e-f85f-4c9c-b91d-072c570470c1 from this chassis (sb_readonly=0)
Oct 02 12:04:34 compute-0 nova_compute[257802]: 2025-10-02 12:04:34.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:34.389 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/80106802-d877-42c6-b2a9-50b050f6b08f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/80106802-d877-42c6-b2a9-50b050f6b08f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:34.390 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[29abc648-f5de-4807-ba54-5c76dc44c3f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:34.391 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-80106802-d877-42c6-b2a9-50b050f6b08f
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/80106802-d877-42c6-b2a9-50b050f6b08f.pid.haproxy
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 80106802-d877-42c6-b2a9-50b050f6b08f
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:04:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:34.391 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f', 'env', 'PROCESS_TAG=haproxy-80106802-d877-42c6-b2a9-50b050f6b08f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/80106802-d877-42c6-b2a9-50b050f6b08f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:04:34 compute-0 ceph-mon[73607]: pgmap v1127: 305 pgs: 305 active+clean; 125 MiB data, 336 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 3.1 MiB/s wr, 56 op/s
Oct 02 12:04:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3944645154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:04:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2967803754' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:04:34 compute-0 nova_compute[257802]: 2025-10-02 12:04:34.742 2 DEBUG nova.compute.manager [req-2cf8f7eb-da51-440e-8e17-75c328007774 req-f9295510-2774-469e-8939-a43c0d47d2cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Received event network-vif-plugged-869b835b-1179-4864-abfe-fc542b215555 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:04:34 compute-0 nova_compute[257802]: 2025-10-02 12:04:34.743 2 DEBUG oslo_concurrency.lockutils [req-2cf8f7eb-da51-440e-8e17-75c328007774 req-f9295510-2774-469e-8939-a43c0d47d2cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:04:34 compute-0 nova_compute[257802]: 2025-10-02 12:04:34.743 2 DEBUG oslo_concurrency.lockutils [req-2cf8f7eb-da51-440e-8e17-75c328007774 req-f9295510-2774-469e-8939-a43c0d47d2cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:04:34 compute-0 nova_compute[257802]: 2025-10-02 12:04:34.744 2 DEBUG oslo_concurrency.lockutils [req-2cf8f7eb-da51-440e-8e17-75c328007774 req-f9295510-2774-469e-8939-a43c0d47d2cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:04:34 compute-0 nova_compute[257802]: 2025-10-02 12:04:34.744 2 DEBUG nova.compute.manager [req-2cf8f7eb-da51-440e-8e17-75c328007774 req-f9295510-2774-469e-8939-a43c0d47d2cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Processing event network-vif-plugged-869b835b-1179-4864-abfe-fc542b215555 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:04:34 compute-0 podman[277960]: 2025-10-02 12:04:34.82696812 +0000 UTC m=+0.021046260 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:04:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 134 MiB data, 341 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 3.6 MiB/s wr, 63 op/s
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.100 2 DEBUG nova.network.neutron [req-f8a91347-9319-4f79-8554-14d0afff53ec req-585e2873-3e6a-4586-b8f8-2df6a05be5df d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Updated VIF entry in instance network info cache for port 869b835b-1179-4864-abfe-fc542b215555. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.102 2 DEBUG nova.network.neutron [req-f8a91347-9319-4f79-8554-14d0afff53ec req-585e2873-3e6a-4586-b8f8-2df6a05be5df d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Updating instance_info_cache with network_info: [{"id": "869b835b-1179-4864-abfe-fc542b215555", "address": "fa:16:3e:1d:d7:20", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap869b835b-11", "ovs_interfaceid": "869b835b-1179-4864-abfe-fc542b215555", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.165 2 DEBUG oslo_concurrency.lockutils [req-f8a91347-9319-4f79-8554-14d0afff53ec req-585e2873-3e6a-4586-b8f8-2df6a05be5df d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-66fec68d-e11d-4f9f-9cea-a0358d8f2ae0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:04:35 compute-0 podman[277960]: 2025-10-02 12:04:35.213571897 +0000 UTC m=+0.407650017 container create af3a566d72423f68c8c02e2d9de8eb347bf7b4d67924c14d6713d342d602949c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 02 12:04:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:35.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:35 compute-0 systemd[1]: Started libpod-conmon-af3a566d72423f68c8c02e2d9de8eb347bf7b4d67924c14d6713d342d602949c.scope.
Oct 02 12:04:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:04:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:35.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02787123b1d6271d494656d45f863f1b39368d9478799771b4d0dc95f41398ba/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:04:35 compute-0 podman[277960]: 2025-10-02 12:04:35.422312605 +0000 UTC m=+0.616390745 container init af3a566d72423f68c8c02e2d9de8eb347bf7b4d67924c14d6713d342d602949c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 02 12:04:35 compute-0 podman[277960]: 2025-10-02 12:04:35.427465642 +0000 UTC m=+0.621543752 container start af3a566d72423f68c8c02e2d9de8eb347bf7b4d67924c14d6713d342d602949c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct 02 12:04:35 compute-0 neutron-haproxy-ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f[278018]: [NOTICE]   (278022) : New worker (278024) forked
Oct 02 12:04:35 compute-0 neutron-haproxy-ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f[278018]: [NOTICE]   (278022) : Loading success.
Oct 02 12:04:35 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/260913443' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.679 2 DEBUG nova.compute.manager [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.680 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406675.6800585, 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.680 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] VM Started (Lifecycle Event)
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.683 2 DEBUG nova.virt.libvirt.driver [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.686 2 INFO nova.virt.libvirt.driver [-] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Instance spawned successfully.
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.686 2 DEBUG nova.virt.libvirt.driver [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.699 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.702 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.707 2 DEBUG nova.virt.libvirt.driver [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.708 2 DEBUG nova.virt.libvirt.driver [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.708 2 DEBUG nova.virt.libvirt.driver [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.708 2 DEBUG nova.virt.libvirt.driver [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.709 2 DEBUG nova.virt.libvirt.driver [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.709 2 DEBUG nova.virt.libvirt.driver [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.742 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.743 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406675.6806133, 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.743 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] VM Paused (Lifecycle Event)
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.782 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.785 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406675.6823454, 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.785 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] VM Resumed (Lifecycle Event)
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.810 2 INFO nova.compute.manager [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Took 8.60 seconds to spawn the instance on the hypervisor.
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.811 2 DEBUG nova.compute.manager [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.812 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.817 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.888 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.964 2 INFO nova.compute.manager [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Took 10.26 seconds to build instance.
Oct 02 12:04:35 compute-0 nova_compute[257802]: 2025-10-02 12:04:35.991 2 DEBUG oslo_concurrency.lockutils [None req-72118657-acb4-42ba-83ca-3fd4d6f85f79 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.363s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:04:36 compute-0 nova_compute[257802]: 2025-10-02 12:04:36.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:04:36 compute-0 nova_compute[257802]: 2025-10-02 12:04:36.125 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:04:36 compute-0 nova_compute[257802]: 2025-10-02 12:04:36.125 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:04:36 compute-0 nova_compute[257802]: 2025-10-02 12:04:36.125 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:04:36 compute-0 nova_compute[257802]: 2025-10-02 12:04:36.125 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:04:36 compute-0 nova_compute[257802]: 2025-10-02 12:04:36.126 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:04:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:04:36 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3727202244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:04:36 compute-0 nova_compute[257802]: 2025-10-02 12:04:36.560 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:04:36 compute-0 ceph-mon[73607]: pgmap v1128: 305 pgs: 305 active+clean; 134 MiB data, 341 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 3.6 MiB/s wr, 63 op/s
Oct 02 12:04:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/413277808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:04:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1996495807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:04:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3727202244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:04:36 compute-0 nova_compute[257802]: 2025-10-02 12:04:36.640 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:04:36 compute-0 nova_compute[257802]: 2025-10-02 12:04:36.641 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:04:36 compute-0 nova_compute[257802]: 2025-10-02 12:04:36.795 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:04:36 compute-0 nova_compute[257802]: 2025-10-02 12:04:36.797 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4682MB free_disk=20.946636199951172GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:04:36 compute-0 nova_compute[257802]: 2025-10-02 12:04:36.797 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:04:36 compute-0 nova_compute[257802]: 2025-10-02 12:04:36.797 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:04:36 compute-0 nova_compute[257802]: 2025-10-02 12:04:36.822 2 DEBUG nova.compute.manager [req-a201fede-cabb-4ab7-a077-d5267f21a550 req-3e420040-ca39-4e70-b42c-5429529d6724 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Received event network-vif-plugged-869b835b-1179-4864-abfe-fc542b215555 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:04:36 compute-0 nova_compute[257802]: 2025-10-02 12:04:36.823 2 DEBUG oslo_concurrency.lockutils [req-a201fede-cabb-4ab7-a077-d5267f21a550 req-3e420040-ca39-4e70-b42c-5429529d6724 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:04:36 compute-0 nova_compute[257802]: 2025-10-02 12:04:36.823 2 DEBUG oslo_concurrency.lockutils [req-a201fede-cabb-4ab7-a077-d5267f21a550 req-3e420040-ca39-4e70-b42c-5429529d6724 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:04:36 compute-0 nova_compute[257802]: 2025-10-02 12:04:36.823 2 DEBUG oslo_concurrency.lockutils [req-a201fede-cabb-4ab7-a077-d5267f21a550 req-3e420040-ca39-4e70-b42c-5429529d6724 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:04:36 compute-0 nova_compute[257802]: 2025-10-02 12:04:36.823 2 DEBUG nova.compute.manager [req-a201fede-cabb-4ab7-a077-d5267f21a550 req-3e420040-ca39-4e70-b42c-5429529d6724 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] No waiting events found dispatching network-vif-plugged-869b835b-1179-4864-abfe-fc542b215555 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:04:36 compute-0 nova_compute[257802]: 2025-10-02 12:04:36.824 2 WARNING nova.compute.manager [req-a201fede-cabb-4ab7-a077-d5267f21a550 req-3e420040-ca39-4e70-b42c-5429529d6724 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Received unexpected event network-vif-plugged-869b835b-1179-4864-abfe-fc542b215555 for instance with vm_state active and task_state None.
Oct 02 12:04:36 compute-0 nova_compute[257802]: 2025-10-02 12:04:36.874 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:04:36 compute-0 nova_compute[257802]: 2025-10-02 12:04:36.875 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:04:36 compute-0 nova_compute[257802]: 2025-10-02 12:04:36.875 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:04:36 compute-0 nova_compute[257802]: 2025-10-02 12:04:36.912 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:04:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 134 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 3.6 MiB/s wr, 60 op/s
Oct 02 12:04:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:04:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:37.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:04:37 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2382317671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:04:37 compute-0 nova_compute[257802]: 2025-10-02 12:04:37.329 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:04:37 compute-0 nova_compute[257802]: 2025-10-02 12:04:37.335 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:04:37 compute-0 nova_compute[257802]: 2025-10-02 12:04:37.354 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:04:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:37.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:37 compute-0 nova_compute[257802]: 2025-10-02 12:04:37.376 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:04:37 compute-0 nova_compute[257802]: 2025-10-02 12:04:37.376 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:04:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/93842212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:04:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2321269412' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:04:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2382317671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:04:37 compute-0 podman[278079]: 2025-10-02 12:04:37.965786349 +0000 UTC m=+0.095358363 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:04:37 compute-0 nova_compute[257802]: 2025-10-02 12:04:37.989 2 DEBUG oslo_concurrency.lockutils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Acquiring lock "da3d84b1-1cb1-46a9-a160-cd80904111da" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:04:37 compute-0 nova_compute[257802]: 2025-10-02 12:04:37.989 2 DEBUG oslo_concurrency.lockutils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Lock "da3d84b1-1cb1-46a9-a160-cd80904111da" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:04:38 compute-0 nova_compute[257802]: 2025-10-02 12:04:38.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:38 compute-0 nova_compute[257802]: 2025-10-02 12:04:38.052 2 DEBUG nova.compute.manager [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:04:38 compute-0 nova_compute[257802]: 2025-10-02 12:04:38.119 2 DEBUG oslo_concurrency.lockutils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:04:38 compute-0 nova_compute[257802]: 2025-10-02 12:04:38.119 2 DEBUG oslo_concurrency.lockutils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:04:38 compute-0 nova_compute[257802]: 2025-10-02 12:04:38.124 2 DEBUG nova.virt.hardware [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:04:38 compute-0 nova_compute[257802]: 2025-10-02 12:04:38.125 2 INFO nova.compute.claims [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:04:38 compute-0 nova_compute[257802]: 2025-10-02 12:04:38.363 2 DEBUG oslo_concurrency.processutils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:04:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:04:38 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3617205285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:04:38 compute-0 ceph-mon[73607]: pgmap v1129: 305 pgs: 305 active+clean; 134 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 3.6 MiB/s wr, 60 op/s
Oct 02 12:04:38 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/529022672' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:04:38 compute-0 nova_compute[257802]: 2025-10-02 12:04:38.782 2 DEBUG oslo_concurrency.processutils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:04:38 compute-0 nova_compute[257802]: 2025-10-02 12:04:38.793 2 DEBUG nova.compute.provider_tree [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:04:38 compute-0 nova_compute[257802]: 2025-10-02 12:04:38.815 2 DEBUG nova.scheduler.client.report [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:04:38 compute-0 nova_compute[257802]: 2025-10-02 12:04:38.866 2 DEBUG oslo_concurrency.lockutils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:04:38 compute-0 nova_compute[257802]: 2025-10-02 12:04:38.868 2 DEBUG nova.compute.manager [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:04:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 155 MiB data, 355 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.4 MiB/s wr, 121 op/s
Oct 02 12:04:39 compute-0 nova_compute[257802]: 2025-10-02 12:04:39.008 2 DEBUG nova.compute.manager [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:04:39 compute-0 nova_compute[257802]: 2025-10-02 12:04:39.009 2 DEBUG nova.network.neutron [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:04:39 compute-0 nova_compute[257802]: 2025-10-02 12:04:39.056 2 INFO nova.virt.libvirt.driver [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:04:39 compute-0 nova_compute[257802]: 2025-10-02 12:04:39.148 2 DEBUG nova.compute.manager [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:04:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.003000072s ======
Oct 02 12:04:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:39.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000072s
Oct 02 12:04:39 compute-0 nova_compute[257802]: 2025-10-02 12:04:39.351 2 DEBUG nova.compute.manager [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:04:39 compute-0 nova_compute[257802]: 2025-10-02 12:04:39.353 2 DEBUG nova.virt.libvirt.driver [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:04:39 compute-0 nova_compute[257802]: 2025-10-02 12:04:39.354 2 INFO nova.virt.libvirt.driver [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Creating image(s)
Oct 02 12:04:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:39.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:39 compute-0 nova_compute[257802]: 2025-10-02 12:04:39.384 2 DEBUG nova.storage.rbd_utils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] rbd image da3d84b1-1cb1-46a9-a160-cd80904111da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:04:39 compute-0 nova_compute[257802]: 2025-10-02 12:04:39.417 2 DEBUG nova.storage.rbd_utils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] rbd image da3d84b1-1cb1-46a9-a160-cd80904111da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:04:39 compute-0 nova_compute[257802]: 2025-10-02 12:04:39.446 2 DEBUG nova.storage.rbd_utils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] rbd image da3d84b1-1cb1-46a9-a160-cd80904111da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:04:39 compute-0 nova_compute[257802]: 2025-10-02 12:04:39.450 2 DEBUG oslo_concurrency.processutils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:04:39 compute-0 nova_compute[257802]: 2025-10-02 12:04:39.473 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:04:39 compute-0 nova_compute[257802]: 2025-10-02 12:04:39.474 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:04:39 compute-0 nova_compute[257802]: 2025-10-02 12:04:39.508 2 DEBUG oslo_concurrency.processutils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:04:39 compute-0 nova_compute[257802]: 2025-10-02 12:04:39.509 2 DEBUG oslo_concurrency.lockutils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:04:39 compute-0 nova_compute[257802]: 2025-10-02 12:04:39.510 2 DEBUG oslo_concurrency.lockutils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:04:39 compute-0 nova_compute[257802]: 2025-10-02 12:04:39.510 2 DEBUG oslo_concurrency.lockutils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:04:39 compute-0 nova_compute[257802]: 2025-10-02 12:04:39.536 2 DEBUG nova.storage.rbd_utils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] rbd image da3d84b1-1cb1-46a9-a160-cd80904111da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:04:39 compute-0 nova_compute[257802]: 2025-10-02 12:04:39.539 2 DEBUG oslo_concurrency.processutils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 da3d84b1-1cb1-46a9-a160-cd80904111da_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:04:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:39.546 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:04:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:39.548 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:04:39 compute-0 nova_compute[257802]: 2025-10-02 12:04:39.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:39 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3617205285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:04:39 compute-0 nova_compute[257802]: 2025-10-02 12:04:39.913 2 DEBUG oslo_concurrency.processutils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 da3d84b1-1cb1-46a9-a160-cd80904111da_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.374s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.005 2 DEBUG nova.storage.rbd_utils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] resizing rbd image da3d84b1-1cb1-46a9-a160-cd80904111da_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.145 2 DEBUG nova.objects.instance [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Lazy-loading 'migration_context' on Instance uuid da3d84b1-1cb1-46a9-a160-cd80904111da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.165 2 DEBUG nova.virt.libvirt.driver [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.165 2 DEBUG nova.virt.libvirt.driver [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Ensure instance console log exists: /var/lib/nova/instances/da3d84b1-1cb1-46a9-a160-cd80904111da/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.166 2 DEBUG oslo_concurrency.lockutils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.166 2 DEBUG oslo_concurrency.lockutils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.166 2 DEBUG oslo_concurrency.lockutils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.418 2 DEBUG nova.network.neutron [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.418 2 DEBUG nova.compute.manager [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.420 2 DEBUG nova.virt.libvirt.driver [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.424 2 WARNING nova.virt.libvirt.driver [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.428 2 DEBUG nova.virt.libvirt.host [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.429 2 DEBUG nova.virt.libvirt.host [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.432 2 DEBUG nova.virt.libvirt.host [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.432 2 DEBUG nova.virt.libvirt.host [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.434 2 DEBUG nova.virt.libvirt.driver [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.434 2 DEBUG nova.virt.hardware [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.435 2 DEBUG nova.virt.hardware [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.435 2 DEBUG nova.virt.hardware [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.435 2 DEBUG nova.virt.hardware [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.436 2 DEBUG nova.virt.hardware [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.436 2 DEBUG nova.virt.hardware [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.437 2 DEBUG nova.virt.hardware [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.437 2 DEBUG nova.virt.hardware [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.437 2 DEBUG nova.virt.hardware [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.438 2 DEBUG nova.virt.hardware [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.438 2 DEBUG nova.virt.hardware [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.441 2 DEBUG oslo_concurrency.processutils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:04:40 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4086160103' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.888 2 DEBUG oslo_concurrency.processutils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:04:40 compute-0 ceph-mon[73607]: pgmap v1130: 305 pgs: 305 active+clean; 155 MiB data, 355 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.4 MiB/s wr, 121 op/s
Oct 02 12:04:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4086160103' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.929 2 DEBUG nova.storage.rbd_utils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] rbd image da3d84b1-1cb1-46a9-a160-cd80904111da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:04:40 compute-0 nova_compute[257802]: 2025-10-02 12:04:40.933 2 DEBUG oslo_concurrency.processutils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:04:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 204 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.3 MiB/s wr, 219 op/s
Oct 02 12:04:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:04:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:41.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:04:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:41.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:04:41 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1668515256' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:04:41 compute-0 nova_compute[257802]: 2025-10-02 12:04:41.389 2 DEBUG oslo_concurrency.processutils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:04:41 compute-0 nova_compute[257802]: 2025-10-02 12:04:41.394 2 DEBUG nova.objects.instance [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Lazy-loading 'pci_devices' on Instance uuid da3d84b1-1cb1-46a9-a160-cd80904111da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:04:41 compute-0 nova_compute[257802]: 2025-10-02 12:04:41.421 2 DEBUG nova.virt.libvirt.driver [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:04:41 compute-0 nova_compute[257802]:   <uuid>da3d84b1-1cb1-46a9-a160-cd80904111da</uuid>
Oct 02 12:04:41 compute-0 nova_compute[257802]:   <name>instance-0000001a</name>
Oct 02 12:04:41 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:04:41 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:04:41 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:04:41 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:       <nova:name>tempest-ServersAdminNegativeTestJSON-server-1560824564</nova:name>
Oct 02 12:04:41 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:04:40</nova:creationTime>
Oct 02 12:04:41 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:04:41 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:04:41 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:04:41 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:04:41 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:04:41 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:04:41 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:04:41 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:04:41 compute-0 nova_compute[257802]:         <nova:user uuid="c5ab011ce9f04adbb19dab5fa5ed1714">tempest-ServersAdminNegativeTestJSON-1328318995-project-member</nova:user>
Oct 02 12:04:41 compute-0 nova_compute[257802]:         <nova:project uuid="6b8ddbfa33c348beb1c883371b5c6909">tempest-ServersAdminNegativeTestJSON-1328318995</nova:project>
Oct 02 12:04:41 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:04:41 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:       <nova:ports/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:04:41 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:04:41 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <system>
Oct 02 12:04:41 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:04:41 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:04:41 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:04:41 compute-0 nova_compute[257802]:       <entry name="serial">da3d84b1-1cb1-46a9-a160-cd80904111da</entry>
Oct 02 12:04:41 compute-0 nova_compute[257802]:       <entry name="uuid">da3d84b1-1cb1-46a9-a160-cd80904111da</entry>
Oct 02 12:04:41 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     </system>
Oct 02 12:04:41 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:04:41 compute-0 nova_compute[257802]:   <os>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:   </os>
Oct 02 12:04:41 compute-0 nova_compute[257802]:   <features>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:   </features>
Oct 02 12:04:41 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:04:41 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:04:41 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:04:41 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/da3d84b1-1cb1-46a9-a160-cd80904111da_disk">
Oct 02 12:04:41 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:       </source>
Oct 02 12:04:41 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:04:41 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:04:41 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:04:41 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/da3d84b1-1cb1-46a9-a160-cd80904111da_disk.config">
Oct 02 12:04:41 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:       </source>
Oct 02 12:04:41 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:04:41 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:04:41 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:04:41 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/da3d84b1-1cb1-46a9-a160-cd80904111da/console.log" append="off"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <video>
Oct 02 12:04:41 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     </video>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:04:41 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:04:41 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:04:41 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:04:41 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:04:41 compute-0 nova_compute[257802]: </domain>
Oct 02 12:04:41 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:04:41 compute-0 nova_compute[257802]: 2025-10-02 12:04:41.627 2 DEBUG nova.virt.libvirt.driver [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:04:41 compute-0 nova_compute[257802]: 2025-10-02 12:04:41.628 2 DEBUG nova.virt.libvirt.driver [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:04:41 compute-0 nova_compute[257802]: 2025-10-02 12:04:41.629 2 INFO nova.virt.libvirt.driver [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Using config drive
Oct 02 12:04:41 compute-0 nova_compute[257802]: 2025-10-02 12:04:41.651 2 DEBUG nova.storage.rbd_utils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] rbd image da3d84b1-1cb1-46a9-a160-cd80904111da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:04:41 compute-0 nova_compute[257802]: 2025-10-02 12:04:41.941 2 INFO nova.virt.libvirt.driver [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Creating config drive at /var/lib/nova/instances/da3d84b1-1cb1-46a9-a160-cd80904111da/disk.config
Oct 02 12:04:41 compute-0 nova_compute[257802]: 2025-10-02 12:04:41.946 2 DEBUG oslo_concurrency.processutils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/da3d84b1-1cb1-46a9-a160-cd80904111da/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiw16l_q6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:04:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1668515256' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:04:42 compute-0 nova_compute[257802]: 2025-10-02 12:04:42.073 2 DEBUG oslo_concurrency.processutils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/da3d84b1-1cb1-46a9-a160-cd80904111da/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiw16l_q6" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:04:42 compute-0 nova_compute[257802]: 2025-10-02 12:04:42.103 2 DEBUG nova.storage.rbd_utils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] rbd image da3d84b1-1cb1-46a9-a160-cd80904111da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:04:42 compute-0 nova_compute[257802]: 2025-10-02 12:04:42.107 2 DEBUG oslo_concurrency.processutils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/da3d84b1-1cb1-46a9-a160-cd80904111da/disk.config da3d84b1-1cb1-46a9-a160-cd80904111da_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:04:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:04:42 compute-0 nova_compute[257802]: 2025-10-02 12:04:42.282 2 DEBUG oslo_concurrency.processutils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/da3d84b1-1cb1-46a9-a160-cd80904111da/disk.config da3d84b1-1cb1-46a9-a160-cd80904111da_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:04:42 compute-0 nova_compute[257802]: 2025-10-02 12:04:42.283 2 INFO nova.virt.libvirt.driver [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Deleting local config drive /var/lib/nova/instances/da3d84b1-1cb1-46a9-a160-cd80904111da/disk.config because it was imported into RBD.
Oct 02 12:04:42 compute-0 systemd-machined[211836]: New machine qemu-15-instance-0000001a.
Oct 02 12:04:42 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000001a.
Oct 02 12:04:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:04:42
Oct 02 12:04:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:04:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:04:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'images', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'backups', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log']
Oct 02 12:04:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:04:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:04:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:04:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:04:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:04:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:04:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:04:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:04:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:04:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:04:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:04:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:04:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:04:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:04:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:04:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:04:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:04:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 214 MiB data, 377 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 4.6 MiB/s wr, 334 op/s
Oct 02 12:04:43 compute-0 ceph-mon[73607]: pgmap v1131: 305 pgs: 305 active+clean; 204 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.3 MiB/s wr, 219 op/s
Oct 02 12:04:43 compute-0 nova_compute[257802]: 2025-10-02 12:04:43.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:43 compute-0 nova_compute[257802]: 2025-10-02 12:04:43.225 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406683.224673, da3d84b1-1cb1-46a9-a160-cd80904111da => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:04:43 compute-0 nova_compute[257802]: 2025-10-02 12:04:43.225 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] VM Resumed (Lifecycle Event)
Oct 02 12:04:43 compute-0 nova_compute[257802]: 2025-10-02 12:04:43.230 2 DEBUG nova.compute.manager [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:04:43 compute-0 nova_compute[257802]: 2025-10-02 12:04:43.230 2 DEBUG nova.virt.libvirt.driver [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:04:43 compute-0 nova_compute[257802]: 2025-10-02 12:04:43.234 2 INFO nova.virt.libvirt.driver [-] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Instance spawned successfully.
Oct 02 12:04:43 compute-0 nova_compute[257802]: 2025-10-02 12:04:43.234 2 DEBUG nova.virt.libvirt.driver [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:04:43 compute-0 nova_compute[257802]: 2025-10-02 12:04:43.248 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:04:43 compute-0 nova_compute[257802]: 2025-10-02 12:04:43.251 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:04:43 compute-0 nova_compute[257802]: 2025-10-02 12:04:43.301 2 DEBUG nova.virt.libvirt.driver [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:04:43 compute-0 nova_compute[257802]: 2025-10-02 12:04:43.301 2 DEBUG nova.virt.libvirt.driver [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:04:43 compute-0 nova_compute[257802]: 2025-10-02 12:04:43.302 2 DEBUG nova.virt.libvirt.driver [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:04:43 compute-0 nova_compute[257802]: 2025-10-02 12:04:43.302 2 DEBUG nova.virt.libvirt.driver [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:04:43 compute-0 nova_compute[257802]: 2025-10-02 12:04:43.302 2 DEBUG nova.virt.libvirt.driver [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:04:43 compute-0 nova_compute[257802]: 2025-10-02 12:04:43.302 2 DEBUG nova.virt.libvirt.driver [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:04:43 compute-0 nova_compute[257802]: 2025-10-02 12:04:43.306 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:04:43 compute-0 nova_compute[257802]: 2025-10-02 12:04:43.306 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406683.229567, da3d84b1-1cb1-46a9-a160-cd80904111da => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:04:43 compute-0 nova_compute[257802]: 2025-10-02 12:04:43.306 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] VM Started (Lifecycle Event)
Oct 02 12:04:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:43.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:43 compute-0 nova_compute[257802]: 2025-10-02 12:04:43.334 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:04:43 compute-0 nova_compute[257802]: 2025-10-02 12:04:43.337 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:04:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:43.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:43 compute-0 nova_compute[257802]: 2025-10-02 12:04:43.369 2 INFO nova.compute.manager [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Took 4.02 seconds to spawn the instance on the hypervisor.
Oct 02 12:04:43 compute-0 nova_compute[257802]: 2025-10-02 12:04:43.369 2 DEBUG nova.compute.manager [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:04:43 compute-0 nova_compute[257802]: 2025-10-02 12:04:43.371 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:04:43 compute-0 nova_compute[257802]: 2025-10-02 12:04:43.461 2 INFO nova.compute.manager [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Took 5.36 seconds to build instance.
Oct 02 12:04:43 compute-0 nova_compute[257802]: 2025-10-02 12:04:43.500 2 DEBUG oslo_concurrency.lockutils [None req-fa2d60d3-6668-455c-a5e6-32849fb4f4a5 c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Lock "da3d84b1-1cb1-46a9-a160-cd80904111da" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.510s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:04:44 compute-0 ceph-mon[73607]: pgmap v1132: 305 pgs: 305 active+clean; 214 MiB data, 377 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 4.6 MiB/s wr, 334 op/s
Oct 02 12:04:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 227 MiB data, 383 MiB used, 21 GiB / 21 GiB avail; 6.0 MiB/s rd, 4.1 MiB/s wr, 440 op/s
Oct 02 12:04:45 compute-0 nova_compute[257802]: 2025-10-02 12:04:45.158 2 DEBUG oslo_concurrency.lockutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Acquiring lock "c2b299fa-a4b6-461f-84d6-790aa118102d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:04:45 compute-0 nova_compute[257802]: 2025-10-02 12:04:45.158 2 DEBUG oslo_concurrency.lockutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "c2b299fa-a4b6-461f-84d6-790aa118102d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:04:45 compute-0 nova_compute[257802]: 2025-10-02 12:04:45.188 2 DEBUG nova.compute.manager [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:04:45 compute-0 nova_compute[257802]: 2025-10-02 12:04:45.261 2 DEBUG oslo_concurrency.lockutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:04:45 compute-0 nova_compute[257802]: 2025-10-02 12:04:45.261 2 DEBUG oslo_concurrency.lockutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:04:45 compute-0 nova_compute[257802]: 2025-10-02 12:04:45.269 2 DEBUG nova.virt.hardware [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:04:45 compute-0 nova_compute[257802]: 2025-10-02 12:04:45.270 2 INFO nova.compute.claims [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:04:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:45.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:45.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:45 compute-0 nova_compute[257802]: 2025-10-02 12:04:45.457 2 DEBUG oslo_concurrency.processutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:04:45 compute-0 nova_compute[257802]: 2025-10-02 12:04:45.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:04:45 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2079717066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:04:45 compute-0 nova_compute[257802]: 2025-10-02 12:04:45.867 2 DEBUG oslo_concurrency.processutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:04:45 compute-0 nova_compute[257802]: 2025-10-02 12:04:45.873 2 DEBUG nova.compute.provider_tree [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:04:45 compute-0 nova_compute[257802]: 2025-10-02 12:04:45.898 2 DEBUG nova.scheduler.client.report [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:04:45 compute-0 nova_compute[257802]: 2025-10-02 12:04:45.931 2 DEBUG oslo_concurrency.lockutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:04:45 compute-0 nova_compute[257802]: 2025-10-02 12:04:45.931 2 DEBUG nova.compute.manager [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:04:45 compute-0 nova_compute[257802]: 2025-10-02 12:04:45.988 2 DEBUG nova.compute.manager [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:04:45 compute-0 nova_compute[257802]: 2025-10-02 12:04:45.988 2 DEBUG nova.network.neutron [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:04:46 compute-0 nova_compute[257802]: 2025-10-02 12:04:46.024 2 INFO nova.virt.libvirt.driver [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:04:46 compute-0 nova_compute[257802]: 2025-10-02 12:04:46.045 2 DEBUG nova.compute.manager [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:04:46 compute-0 nova_compute[257802]: 2025-10-02 12:04:46.140 2 DEBUG nova.compute.manager [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:04:46 compute-0 nova_compute[257802]: 2025-10-02 12:04:46.141 2 DEBUG nova.virt.libvirt.driver [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:04:46 compute-0 nova_compute[257802]: 2025-10-02 12:04:46.142 2 INFO nova.virt.libvirt.driver [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Creating image(s)
Oct 02 12:04:46 compute-0 nova_compute[257802]: 2025-10-02 12:04:46.173 2 DEBUG nova.storage.rbd_utils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] rbd image c2b299fa-a4b6-461f-84d6-790aa118102d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:04:46 compute-0 ceph-mon[73607]: pgmap v1133: 305 pgs: 305 active+clean; 227 MiB data, 383 MiB used, 21 GiB / 21 GiB avail; 6.0 MiB/s rd, 4.1 MiB/s wr, 440 op/s
Oct 02 12:04:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2079717066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:04:46 compute-0 nova_compute[257802]: 2025-10-02 12:04:46.201 2 DEBUG nova.storage.rbd_utils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] rbd image c2b299fa-a4b6-461f-84d6-790aa118102d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:04:46 compute-0 nova_compute[257802]: 2025-10-02 12:04:46.230 2 DEBUG nova.storage.rbd_utils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] rbd image c2b299fa-a4b6-461f-84d6-790aa118102d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:04:46 compute-0 nova_compute[257802]: 2025-10-02 12:04:46.233 2 DEBUG oslo_concurrency.processutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:04:46 compute-0 nova_compute[257802]: 2025-10-02 12:04:46.289 2 DEBUG oslo_concurrency.processutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:04:46 compute-0 nova_compute[257802]: 2025-10-02 12:04:46.289 2 DEBUG oslo_concurrency.lockutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:04:46 compute-0 nova_compute[257802]: 2025-10-02 12:04:46.290 2 DEBUG oslo_concurrency.lockutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:04:46 compute-0 nova_compute[257802]: 2025-10-02 12:04:46.290 2 DEBUG oslo_concurrency.lockutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:04:46 compute-0 nova_compute[257802]: 2025-10-02 12:04:46.320 2 DEBUG nova.storage.rbd_utils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] rbd image c2b299fa-a4b6-461f-84d6-790aa118102d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:04:46 compute-0 nova_compute[257802]: 2025-10-02 12:04:46.323 2 DEBUG oslo_concurrency.processutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 c2b299fa-a4b6-461f-84d6-790aa118102d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:04:46 compute-0 nova_compute[257802]: 2025-10-02 12:04:46.400 2 DEBUG nova.policy [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8850add40b254d198f270d9e64c777d5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9afa78cc4dec419babdf61fd31f46e28', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:04:46 compute-0 nova_compute[257802]: 2025-10-02 12:04:46.660 2 DEBUG oslo_concurrency.processutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 c2b299fa-a4b6-461f-84d6-790aa118102d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:04:46 compute-0 nova_compute[257802]: 2025-10-02 12:04:46.745 2 DEBUG nova.storage.rbd_utils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] resizing rbd image c2b299fa-a4b6-461f-84d6-790aa118102d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:04:46 compute-0 nova_compute[257802]: 2025-10-02 12:04:46.865 2 DEBUG nova.objects.instance [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lazy-loading 'migration_context' on Instance uuid c2b299fa-a4b6-461f-84d6-790aa118102d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:04:46 compute-0 nova_compute[257802]: 2025-10-02 12:04:46.878 2 DEBUG nova.virt.libvirt.driver [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:04:46 compute-0 nova_compute[257802]: 2025-10-02 12:04:46.878 2 DEBUG nova.virt.libvirt.driver [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Ensure instance console log exists: /var/lib/nova/instances/c2b299fa-a4b6-461f-84d6-790aa118102d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:04:46 compute-0 nova_compute[257802]: 2025-10-02 12:04:46.879 2 DEBUG oslo_concurrency.lockutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:04:46 compute-0 nova_compute[257802]: 2025-10-02 12:04:46.879 2 DEBUG oslo_concurrency.lockutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:04:46 compute-0 nova_compute[257802]: 2025-10-02 12:04:46.879 2 DEBUG oslo_concurrency.lockutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:04:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 227 MiB data, 392 MiB used, 21 GiB / 21 GiB avail; 7.0 MiB/s rd, 3.6 MiB/s wr, 490 op/s
Oct 02 12:04:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:04:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:47.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/203995729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:04:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:47.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:47.549 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:04:47 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 02 12:04:47 compute-0 nova_compute[257802]: 2025-10-02 12:04:47.953 2 DEBUG nova.network.neutron [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Successfully created port: 759e5032-b035-44b1-9e49-b89ab3323ceb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:04:48 compute-0 nova_compute[257802]: 2025-10-02 12:04:48.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:48 compute-0 ceph-mon[73607]: pgmap v1134: 305 pgs: 305 active+clean; 227 MiB data, 392 MiB used, 21 GiB / 21 GiB avail; 7.0 MiB/s rd, 3.6 MiB/s wr, 490 op/s
Oct 02 12:04:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 273 MiB data, 405 MiB used, 21 GiB / 21 GiB avail; 7.4 MiB/s rd, 5.9 MiB/s wr, 541 op/s
Oct 02 12:04:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:49.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:49.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:49 compute-0 ovn_controller[148183]: 2025-10-02T12:04:49Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1d:d7:20 10.100.0.9
Oct 02 12:04:49 compute-0 ovn_controller[148183]: 2025-10-02T12:04:49Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1d:d7:20 10.100.0.9
Oct 02 12:04:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3393721122' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:04:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1900020686' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:04:50 compute-0 nova_compute[257802]: 2025-10-02 12:04:50.461 2 DEBUG nova.network.neutron [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Successfully updated port: 759e5032-b035-44b1-9e49-b89ab3323ceb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:04:50 compute-0 nova_compute[257802]: 2025-10-02 12:04:50.492 2 DEBUG oslo_concurrency.lockutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Acquiring lock "refresh_cache-c2b299fa-a4b6-461f-84d6-790aa118102d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:04:50 compute-0 nova_compute[257802]: 2025-10-02 12:04:50.493 2 DEBUG oslo_concurrency.lockutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Acquired lock "refresh_cache-c2b299fa-a4b6-461f-84d6-790aa118102d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:04:50 compute-0 nova_compute[257802]: 2025-10-02 12:04:50.493 2 DEBUG nova.network.neutron [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:04:50 compute-0 nova_compute[257802]: 2025-10-02 12:04:50.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:50 compute-0 ceph-mon[73607]: pgmap v1135: 305 pgs: 305 active+clean; 273 MiB data, 405 MiB used, 21 GiB / 21 GiB avail; 7.4 MiB/s rd, 5.9 MiB/s wr, 541 op/s
Oct 02 12:04:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 333 MiB data, 436 MiB used, 21 GiB / 21 GiB avail; 6.9 MiB/s rd, 7.7 MiB/s wr, 529 op/s
Oct 02 12:04:51 compute-0 sudo[278667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:51 compute-0 sudo[278667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:51 compute-0 sudo[278667]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:51 compute-0 sudo[278692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:51 compute-0 sudo[278692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:51 compute-0 sudo[278692]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:04:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:51.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:04:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:51.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:51 compute-0 nova_compute[257802]: 2025-10-02 12:04:51.409 2 DEBUG nova.network.neutron [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:04:51 compute-0 nova_compute[257802]: 2025-10-02 12:04:51.649 2 DEBUG nova.compute.manager [req-29250ac2-b21f-4b57-90ee-197c11dfb435 req-ca76c317-5266-4d6d-9b8e-351ce07ed191 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Received event network-changed-759e5032-b035-44b1-9e49-b89ab3323ceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:04:51 compute-0 nova_compute[257802]: 2025-10-02 12:04:51.650 2 DEBUG nova.compute.manager [req-29250ac2-b21f-4b57-90ee-197c11dfb435 req-ca76c317-5266-4d6d-9b8e-351ce07ed191 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Refreshing instance network info cache due to event network-changed-759e5032-b035-44b1-9e49-b89ab3323ceb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:04:51 compute-0 nova_compute[257802]: 2025-10-02 12:04:51.650 2 DEBUG oslo_concurrency.lockutils [req-29250ac2-b21f-4b57-90ee-197c11dfb435 req-ca76c317-5266-4d6d-9b8e-351ce07ed191 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-c2b299fa-a4b6-461f-84d6-790aa118102d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:04:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:04:52 compute-0 ceph-mon[73607]: pgmap v1136: 305 pgs: 305 active+clean; 333 MiB data, 436 MiB used, 21 GiB / 21 GiB avail; 6.9 MiB/s rd, 7.7 MiB/s wr, 529 op/s
Oct 02 12:04:52 compute-0 nova_compute[257802]: 2025-10-02 12:04:52.856 2 DEBUG nova.network.neutron [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Updating instance_info_cache with network_info: [{"id": "759e5032-b035-44b1-9e49-b89ab3323ceb", "address": "fa:16:3e:79:89:f8", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap759e5032-b0", "ovs_interfaceid": "759e5032-b035-44b1-9e49-b89ab3323ceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:04:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 364 MiB data, 468 MiB used, 21 GiB / 21 GiB avail; 6.2 MiB/s rd, 8.8 MiB/s wr, 509 op/s
Oct 02 12:04:53 compute-0 nova_compute[257802]: 2025-10-02 12:04:53.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:53 compute-0 nova_compute[257802]: 2025-10-02 12:04:53.074 2 DEBUG oslo_concurrency.lockutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Releasing lock "refresh_cache-c2b299fa-a4b6-461f-84d6-790aa118102d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:04:53 compute-0 nova_compute[257802]: 2025-10-02 12:04:53.074 2 DEBUG nova.compute.manager [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Instance network_info: |[{"id": "759e5032-b035-44b1-9e49-b89ab3323ceb", "address": "fa:16:3e:79:89:f8", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap759e5032-b0", "ovs_interfaceid": "759e5032-b035-44b1-9e49-b89ab3323ceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:04:53 compute-0 nova_compute[257802]: 2025-10-02 12:04:53.075 2 DEBUG oslo_concurrency.lockutils [req-29250ac2-b21f-4b57-90ee-197c11dfb435 req-ca76c317-5266-4d6d-9b8e-351ce07ed191 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-c2b299fa-a4b6-461f-84d6-790aa118102d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:04:53 compute-0 nova_compute[257802]: 2025-10-02 12:04:53.075 2 DEBUG nova.network.neutron [req-29250ac2-b21f-4b57-90ee-197c11dfb435 req-ca76c317-5266-4d6d-9b8e-351ce07ed191 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Refreshing network info cache for port 759e5032-b035-44b1-9e49-b89ab3323ceb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:04:53 compute-0 nova_compute[257802]: 2025-10-02 12:04:53.078 2 DEBUG nova.virt.libvirt.driver [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Start _get_guest_xml network_info=[{"id": "759e5032-b035-44b1-9e49-b89ab3323ceb", "address": "fa:16:3e:79:89:f8", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap759e5032-b0", "ovs_interfaceid": "759e5032-b035-44b1-9e49-b89ab3323ceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:04:53 compute-0 nova_compute[257802]: 2025-10-02 12:04:53.083 2 WARNING nova.virt.libvirt.driver [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:04:53 compute-0 nova_compute[257802]: 2025-10-02 12:04:53.088 2 DEBUG nova.virt.libvirt.host [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:04:53 compute-0 nova_compute[257802]: 2025-10-02 12:04:53.088 2 DEBUG nova.virt.libvirt.host [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:04:53 compute-0 nova_compute[257802]: 2025-10-02 12:04:53.092 2 DEBUG nova.virt.libvirt.host [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:04:53 compute-0 nova_compute[257802]: 2025-10-02 12:04:53.092 2 DEBUG nova.virt.libvirt.host [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:04:53 compute-0 nova_compute[257802]: 2025-10-02 12:04:53.093 2 DEBUG nova.virt.libvirt.driver [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:04:53 compute-0 nova_compute[257802]: 2025-10-02 12:04:53.094 2 DEBUG nova.virt.hardware [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:04:53 compute-0 nova_compute[257802]: 2025-10-02 12:04:53.094 2 DEBUG nova.virt.hardware [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:04:53 compute-0 nova_compute[257802]: 2025-10-02 12:04:53.094 2 DEBUG nova.virt.hardware [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:04:53 compute-0 nova_compute[257802]: 2025-10-02 12:04:53.095 2 DEBUG nova.virt.hardware [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:04:53 compute-0 nova_compute[257802]: 2025-10-02 12:04:53.095 2 DEBUG nova.virt.hardware [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:04:53 compute-0 nova_compute[257802]: 2025-10-02 12:04:53.095 2 DEBUG nova.virt.hardware [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:04:53 compute-0 nova_compute[257802]: 2025-10-02 12:04:53.095 2 DEBUG nova.virt.hardware [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:04:53 compute-0 nova_compute[257802]: 2025-10-02 12:04:53.096 2 DEBUG nova.virt.hardware [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:04:53 compute-0 nova_compute[257802]: 2025-10-02 12:04:53.096 2 DEBUG nova.virt.hardware [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:04:53 compute-0 nova_compute[257802]: 2025-10-02 12:04:53.096 2 DEBUG nova.virt.hardware [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:04:53 compute-0 nova_compute[257802]: 2025-10-02 12:04:53.096 2 DEBUG nova.virt.hardware [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:04:53 compute-0 nova_compute[257802]: 2025-10-02 12:04:53.100 2 DEBUG oslo_concurrency.processutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:04:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:04:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:53.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:04:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:53.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:04:53 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2077392932' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:04:53 compute-0 nova_compute[257802]: 2025-10-02 12:04:53.604 2 DEBUG oslo_concurrency.processutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:04:53 compute-0 nova_compute[257802]: 2025-10-02 12:04:53.632 2 DEBUG nova.storage.rbd_utils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] rbd image c2b299fa-a4b6-461f-84d6-790aa118102d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:04:53 compute-0 nova_compute[257802]: 2025-10-02 12:04:53.636 2 DEBUG oslo_concurrency.processutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:04:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2077392932' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:04:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:04:54 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3003758939' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:04:54 compute-0 nova_compute[257802]: 2025-10-02 12:04:54.059 2 DEBUG oslo_concurrency.processutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:04:54 compute-0 nova_compute[257802]: 2025-10-02 12:04:54.061 2 DEBUG nova.virt.libvirt.vif [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:04:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-194319742',display_name='tempest-ServersAdminTestJSON-server-194319742',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-194319742',id=27,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9afa78cc4dec419babdf61fd31f46e28',ramdisk_id='',reservation_id='r-pxmd9jcl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-518249049',owner_user_name='tempest-ServersAdminTestJSON-518249049-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:04:46Z,user_data=None,user_id='8850add40b254d198f270d9e64c777d5',uuid=c2b299fa-a4b6-461f-84d6-790aa118102d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "759e5032-b035-44b1-9e49-b89ab3323ceb", "address": "fa:16:3e:79:89:f8", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap759e5032-b0", "ovs_interfaceid": "759e5032-b035-44b1-9e49-b89ab3323ceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:04:54 compute-0 nova_compute[257802]: 2025-10-02 12:04:54.061 2 DEBUG nova.network.os_vif_util [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Converting VIF {"id": "759e5032-b035-44b1-9e49-b89ab3323ceb", "address": "fa:16:3e:79:89:f8", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap759e5032-b0", "ovs_interfaceid": "759e5032-b035-44b1-9e49-b89ab3323ceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:04:54 compute-0 nova_compute[257802]: 2025-10-02 12:04:54.062 2 DEBUG nova.network.os_vif_util [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:89:f8,bridge_name='br-int',has_traffic_filtering=True,id=759e5032-b035-44b1-9e49-b89ab3323ceb,network=Network(80106802-d877-42c6-b2a9-50b050f6b08f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap759e5032-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:04:54 compute-0 nova_compute[257802]: 2025-10-02 12:04:54.063 2 DEBUG nova.objects.instance [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lazy-loading 'pci_devices' on Instance uuid c2b299fa-a4b6-461f-84d6-790aa118102d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008303234924623116 of space, bias 1.0, pg target 2.490970477386935 quantized to 32 (current 32)
Oct 02 12:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Oct 02 12:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Oct 02 12:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027081297692164525 quantized to 32 (current 32)
Oct 02 12:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:04:54 compute-0 nova_compute[257802]: 2025-10-02 12:04:54.123 2 DEBUG nova.virt.libvirt.driver [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:04:54 compute-0 nova_compute[257802]:   <uuid>c2b299fa-a4b6-461f-84d6-790aa118102d</uuid>
Oct 02 12:04:54 compute-0 nova_compute[257802]:   <name>instance-0000001b</name>
Oct 02 12:04:54 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:04:54 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:04:54 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:04:54 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:       <nova:name>tempest-ServersAdminTestJSON-server-194319742</nova:name>
Oct 02 12:04:54 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:04:53</nova:creationTime>
Oct 02 12:04:54 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:04:54 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:04:54 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:04:54 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:04:54 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:04:54 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:04:54 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:04:54 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:04:54 compute-0 nova_compute[257802]:         <nova:user uuid="8850add40b254d198f270d9e64c777d5">tempest-ServersAdminTestJSON-518249049-project-member</nova:user>
Oct 02 12:04:54 compute-0 nova_compute[257802]:         <nova:project uuid="9afa78cc4dec419babdf61fd31f46e28">tempest-ServersAdminTestJSON-518249049</nova:project>
Oct 02 12:04:54 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:04:54 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:04:54 compute-0 nova_compute[257802]:         <nova:port uuid="759e5032-b035-44b1-9e49-b89ab3323ceb">
Oct 02 12:04:54 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:04:54 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:04:54 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:04:54 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <system>
Oct 02 12:04:54 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:04:54 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:04:54 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:04:54 compute-0 nova_compute[257802]:       <entry name="serial">c2b299fa-a4b6-461f-84d6-790aa118102d</entry>
Oct 02 12:04:54 compute-0 nova_compute[257802]:       <entry name="uuid">c2b299fa-a4b6-461f-84d6-790aa118102d</entry>
Oct 02 12:04:54 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     </system>
Oct 02 12:04:54 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:04:54 compute-0 nova_compute[257802]:   <os>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:   </os>
Oct 02 12:04:54 compute-0 nova_compute[257802]:   <features>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:   </features>
Oct 02 12:04:54 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:04:54 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:04:54 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:04:54 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/c2b299fa-a4b6-461f-84d6-790aa118102d_disk">
Oct 02 12:04:54 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:       </source>
Oct 02 12:04:54 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:04:54 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:04:54 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:04:54 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/c2b299fa-a4b6-461f-84d6-790aa118102d_disk.config">
Oct 02 12:04:54 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:       </source>
Oct 02 12:04:54 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:04:54 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:04:54 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:04:54 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:79:89:f8"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:       <target dev="tap759e5032-b0"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:04:54 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/c2b299fa-a4b6-461f-84d6-790aa118102d/console.log" append="off"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <video>
Oct 02 12:04:54 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     </video>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:04:54 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:04:54 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:04:54 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:04:54 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:04:54 compute-0 nova_compute[257802]: </domain>
Oct 02 12:04:54 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:04:54 compute-0 nova_compute[257802]: 2025-10-02 12:04:54.126 2 DEBUG nova.compute.manager [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Preparing to wait for external event network-vif-plugged-759e5032-b035-44b1-9e49-b89ab3323ceb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:04:54 compute-0 nova_compute[257802]: 2025-10-02 12:04:54.126 2 DEBUG oslo_concurrency.lockutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Acquiring lock "c2b299fa-a4b6-461f-84d6-790aa118102d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:04:54 compute-0 nova_compute[257802]: 2025-10-02 12:04:54.127 2 DEBUG oslo_concurrency.lockutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "c2b299fa-a4b6-461f-84d6-790aa118102d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:04:54 compute-0 nova_compute[257802]: 2025-10-02 12:04:54.127 2 DEBUG oslo_concurrency.lockutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "c2b299fa-a4b6-461f-84d6-790aa118102d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:04:54 compute-0 nova_compute[257802]: 2025-10-02 12:04:54.128 2 DEBUG nova.virt.libvirt.vif [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:04:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-194319742',display_name='tempest-ServersAdminTestJSON-server-194319742',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-194319742',id=27,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9afa78cc4dec419babdf61fd31f46e28',ramdisk_id='',reservation_id='r-pxmd9jcl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-518249049',owner_user_name='tempest-ServersAdminTestJSON-518249049-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:04:46Z,user_data=None,user_id='8850add40b254d198f270d9e64c777d5',uuid=c2b299fa-a4b6-461f-84d6-790aa118102d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "759e5032-b035-44b1-9e49-b89ab3323ceb", "address": "fa:16:3e:79:89:f8", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap759e5032-b0", "ovs_interfaceid": "759e5032-b035-44b1-9e49-b89ab3323ceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:04:54 compute-0 nova_compute[257802]: 2025-10-02 12:04:54.128 2 DEBUG nova.network.os_vif_util [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Converting VIF {"id": "759e5032-b035-44b1-9e49-b89ab3323ceb", "address": "fa:16:3e:79:89:f8", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap759e5032-b0", "ovs_interfaceid": "759e5032-b035-44b1-9e49-b89ab3323ceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:04:54 compute-0 nova_compute[257802]: 2025-10-02 12:04:54.129 2 DEBUG nova.network.os_vif_util [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:89:f8,bridge_name='br-int',has_traffic_filtering=True,id=759e5032-b035-44b1-9e49-b89ab3323ceb,network=Network(80106802-d877-42c6-b2a9-50b050f6b08f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap759e5032-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:04:54 compute-0 nova_compute[257802]: 2025-10-02 12:04:54.129 2 DEBUG os_vif [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:89:f8,bridge_name='br-int',has_traffic_filtering=True,id=759e5032-b035-44b1-9e49-b89ab3323ceb,network=Network(80106802-d877-42c6-b2a9-50b050f6b08f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap759e5032-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:04:54 compute-0 nova_compute[257802]: 2025-10-02 12:04:54.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:54 compute-0 nova_compute[257802]: 2025-10-02 12:04:54.130 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:04:54 compute-0 nova_compute[257802]: 2025-10-02 12:04:54.131 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:04:54 compute-0 nova_compute[257802]: 2025-10-02 12:04:54.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:54 compute-0 nova_compute[257802]: 2025-10-02 12:04:54.133 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap759e5032-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:04:54 compute-0 nova_compute[257802]: 2025-10-02 12:04:54.134 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap759e5032-b0, col_values=(('external_ids', {'iface-id': '759e5032-b035-44b1-9e49-b89ab3323ceb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:79:89:f8', 'vm-uuid': 'c2b299fa-a4b6-461f-84d6-790aa118102d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:04:54 compute-0 nova_compute[257802]: 2025-10-02 12:04:54.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:54 compute-0 NetworkManager[44987]: <info>  [1759406694.1362] manager: (tap759e5032-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Oct 02 12:04:54 compute-0 nova_compute[257802]: 2025-10-02 12:04:54.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:04:54 compute-0 nova_compute[257802]: 2025-10-02 12:04:54.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:54 compute-0 nova_compute[257802]: 2025-10-02 12:04:54.143 2 INFO os_vif [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:89:f8,bridge_name='br-int',has_traffic_filtering=True,id=759e5032-b035-44b1-9e49-b89ab3323ceb,network=Network(80106802-d877-42c6-b2a9-50b050f6b08f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap759e5032-b0')
Oct 02 12:04:54 compute-0 nova_compute[257802]: 2025-10-02 12:04:54.303 2 DEBUG nova.virt.libvirt.driver [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:04:54 compute-0 nova_compute[257802]: 2025-10-02 12:04:54.304 2 DEBUG nova.virt.libvirt.driver [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:04:54 compute-0 nova_compute[257802]: 2025-10-02 12:04:54.304 2 DEBUG nova.virt.libvirt.driver [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] No VIF found with MAC fa:16:3e:79:89:f8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:04:54 compute-0 nova_compute[257802]: 2025-10-02 12:04:54.304 2 INFO nova.virt.libvirt.driver [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Using config drive
Oct 02 12:04:54 compute-0 nova_compute[257802]: 2025-10-02 12:04:54.331 2 DEBUG nova.storage.rbd_utils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] rbd image c2b299fa-a4b6-461f-84d6-790aa118102d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:04:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 388 MiB data, 482 MiB used, 21 GiB / 21 GiB avail; 4.9 MiB/s rd, 9.1 MiB/s wr, 448 op/s
Oct 02 12:04:55 compute-0 nova_compute[257802]: 2025-10-02 12:04:55.193 2 INFO nova.virt.libvirt.driver [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Creating config drive at /var/lib/nova/instances/c2b299fa-a4b6-461f-84d6-790aa118102d/disk.config
Oct 02 12:04:55 compute-0 nova_compute[257802]: 2025-10-02 12:04:55.198 2 DEBUG oslo_concurrency.processutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c2b299fa-a4b6-461f-84d6-790aa118102d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7lm0rew5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:04:55 compute-0 ceph-mon[73607]: pgmap v1137: 305 pgs: 305 active+clean; 364 MiB data, 468 MiB used, 21 GiB / 21 GiB avail; 6.2 MiB/s rd, 8.8 MiB/s wr, 509 op/s
Oct 02 12:04:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3003758939' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:04:55 compute-0 nova_compute[257802]: 2025-10-02 12:04:55.323 2 DEBUG oslo_concurrency.processutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c2b299fa-a4b6-461f-84d6-790aa118102d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7lm0rew5" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:04:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:55.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:55.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:55 compute-0 nova_compute[257802]: 2025-10-02 12:04:55.545 2 DEBUG nova.storage.rbd_utils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] rbd image c2b299fa-a4b6-461f-84d6-790aa118102d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:04:55 compute-0 nova_compute[257802]: 2025-10-02 12:04:55.549 2 DEBUG oslo_concurrency.processutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c2b299fa-a4b6-461f-84d6-790aa118102d/disk.config c2b299fa-a4b6-461f-84d6-790aa118102d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:04:55 compute-0 nova_compute[257802]: 2025-10-02 12:04:55.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:55 compute-0 nova_compute[257802]: 2025-10-02 12:04:55.621 2 DEBUG nova.network.neutron [req-29250ac2-b21f-4b57-90ee-197c11dfb435 req-ca76c317-5266-4d6d-9b8e-351ce07ed191 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Updated VIF entry in instance network info cache for port 759e5032-b035-44b1-9e49-b89ab3323ceb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:04:55 compute-0 nova_compute[257802]: 2025-10-02 12:04:55.622 2 DEBUG nova.network.neutron [req-29250ac2-b21f-4b57-90ee-197c11dfb435 req-ca76c317-5266-4d6d-9b8e-351ce07ed191 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Updating instance_info_cache with network_info: [{"id": "759e5032-b035-44b1-9e49-b89ab3323ceb", "address": "fa:16:3e:79:89:f8", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap759e5032-b0", "ovs_interfaceid": "759e5032-b035-44b1-9e49-b89ab3323ceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:04:55 compute-0 nova_compute[257802]: 2025-10-02 12:04:55.696 2 DEBUG oslo_concurrency.processutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c2b299fa-a4b6-461f-84d6-790aa118102d/disk.config c2b299fa-a4b6-461f-84d6-790aa118102d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:04:55 compute-0 nova_compute[257802]: 2025-10-02 12:04:55.697 2 INFO nova.virt.libvirt.driver [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Deleting local config drive /var/lib/nova/instances/c2b299fa-a4b6-461f-84d6-790aa118102d/disk.config because it was imported into RBD.
Oct 02 12:04:55 compute-0 nova_compute[257802]: 2025-10-02 12:04:55.715 2 DEBUG oslo_concurrency.lockutils [req-29250ac2-b21f-4b57-90ee-197c11dfb435 req-ca76c317-5266-4d6d-9b8e-351ce07ed191 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-c2b299fa-a4b6-461f-84d6-790aa118102d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:04:55 compute-0 NetworkManager[44987]: <info>  [1759406695.7515] manager: (tap759e5032-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/65)
Oct 02 12:04:55 compute-0 kernel: tap759e5032-b0: entered promiscuous mode
Oct 02 12:04:55 compute-0 nova_compute[257802]: 2025-10-02 12:04:55.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:55 compute-0 ovn_controller[148183]: 2025-10-02T12:04:55Z|00130|binding|INFO|Claiming lport 759e5032-b035-44b1-9e49-b89ab3323ceb for this chassis.
Oct 02 12:04:55 compute-0 ovn_controller[148183]: 2025-10-02T12:04:55Z|00131|binding|INFO|759e5032-b035-44b1-9e49-b89ab3323ceb: Claiming fa:16:3e:79:89:f8 10.100.0.4
Oct 02 12:04:55 compute-0 nova_compute[257802]: 2025-10-02 12:04:55.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:55 compute-0 nova_compute[257802]: 2025-10-02 12:04:55.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:55 compute-0 systemd-machined[211836]: New machine qemu-16-instance-0000001b.
Oct 02 12:04:55 compute-0 ovn_controller[148183]: 2025-10-02T12:04:55Z|00132|binding|INFO|Setting lport 759e5032-b035-44b1-9e49-b89ab3323ceb ovn-installed in OVS
Oct 02 12:04:55 compute-0 ovn_controller[148183]: 2025-10-02T12:04:55Z|00133|binding|INFO|Setting lport 759e5032-b035-44b1-9e49-b89ab3323ceb up in Southbound
Oct 02 12:04:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:55.785 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:79:89:f8 10.100.0.4'], port_security=['fa:16:3e:79:89:f8 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c2b299fa-a4b6-461f-84d6-790aa118102d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80106802-d877-42c6-b2a9-50b050f6b08f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9afa78cc4dec419babdf61fd31f46e28', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8fbb5420-10f4-405b-bd01-713020f7e518', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=03aa6f10-2374-4fa3-bc90-1fcb8815afb8, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=759e5032-b035-44b1-9e49-b89ab3323ceb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:04:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:55.788 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 759e5032-b035-44b1-9e49-b89ab3323ceb in datapath 80106802-d877-42c6-b2a9-50b050f6b08f bound to our chassis
Oct 02 12:04:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:55.790 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 80106802-d877-42c6-b2a9-50b050f6b08f
Oct 02 12:04:55 compute-0 systemd-udevd[278855]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:04:55 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000001b.
Oct 02 12:04:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:55.811 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5f3c21ad-2666-4423-8753-47facffd5e0c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:04:55 compute-0 NetworkManager[44987]: <info>  [1759406695.8138] device (tap759e5032-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:04:55 compute-0 NetworkManager[44987]: <info>  [1759406695.8150] device (tap759e5032-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:04:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:55.843 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[d8d65e99-44a2-4e54-9882-b6e7d77a0f5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:04:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:55.849 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[464d747e-55cd-40aa-86cb-1f556424e53f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:04:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:55.877 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[f04d0845-c1b0-4478-bef2-df250de84e87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:04:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:55.898 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0a05ae83-c547-402e-ab6f-8961e77d19b9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80106802-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:27:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 5, 'rx_bytes': 874, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 5, 'rx_bytes': 874, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474182, 'reachable_time': 44687, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278869, 'error': None, 'target': 'ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:04:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:55.920 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9d3b1ae7-bc97-4aa9-8f5b-12b0910270e9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap80106802-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474195, 'tstamp': 474195}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278870, 'error': None, 'target': 'ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap80106802-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474198, 'tstamp': 474198}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278870, 'error': None, 'target': 'ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:04:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:55.924 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80106802-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:04:55 compute-0 nova_compute[257802]: 2025-10-02 12:04:55.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:55.930 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap80106802-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:04:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:55.931 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:04:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:55.931 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap80106802-d0, col_values=(('external_ids', {'iface-id': '3e3f512e-f85f-4c9c-b91d-072c570470c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:04:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:04:55.931 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:04:56 compute-0 ceph-mon[73607]: pgmap v1138: 305 pgs: 305 active+clean; 388 MiB data, 482 MiB used, 21 GiB / 21 GiB avail; 4.9 MiB/s rd, 9.1 MiB/s wr, 448 op/s
Oct 02 12:04:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1681002428' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:04:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1681002428' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:04:56 compute-0 nova_compute[257802]: 2025-10-02 12:04:56.638 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406696.6375725, c2b299fa-a4b6-461f-84d6-790aa118102d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:04:56 compute-0 nova_compute[257802]: 2025-10-02 12:04:56.638 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] VM Started (Lifecycle Event)
Oct 02 12:04:56 compute-0 nova_compute[257802]: 2025-10-02 12:04:56.730 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:04:56 compute-0 nova_compute[257802]: 2025-10-02 12:04:56.735 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406696.6402035, c2b299fa-a4b6-461f-84d6-790aa118102d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:04:56 compute-0 nova_compute[257802]: 2025-10-02 12:04:56.735 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] VM Paused (Lifecycle Event)
Oct 02 12:04:56 compute-0 nova_compute[257802]: 2025-10-02 12:04:56.857 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:04:56 compute-0 nova_compute[257802]: 2025-10-02 12:04:56.859 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:04:56 compute-0 nova_compute[257802]: 2025-10-02 12:04:56.893 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:04:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 413 MiB data, 497 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 10 MiB/s wr, 375 op/s
Oct 02 12:04:57 compute-0 nova_compute[257802]: 2025-10-02 12:04:57.073 2 DEBUG nova.compute.manager [req-8536cfc4-4591-49d9-b801-f1a54c6203cf req-448278e7-e309-433f-a2f2-dca77c982e92 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Received event network-vif-plugged-759e5032-b035-44b1-9e49-b89ab3323ceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:04:57 compute-0 nova_compute[257802]: 2025-10-02 12:04:57.074 2 DEBUG oslo_concurrency.lockutils [req-8536cfc4-4591-49d9-b801-f1a54c6203cf req-448278e7-e309-433f-a2f2-dca77c982e92 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "c2b299fa-a4b6-461f-84d6-790aa118102d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:04:57 compute-0 nova_compute[257802]: 2025-10-02 12:04:57.074 2 DEBUG oslo_concurrency.lockutils [req-8536cfc4-4591-49d9-b801-f1a54c6203cf req-448278e7-e309-433f-a2f2-dca77c982e92 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c2b299fa-a4b6-461f-84d6-790aa118102d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:04:57 compute-0 nova_compute[257802]: 2025-10-02 12:04:57.074 2 DEBUG oslo_concurrency.lockutils [req-8536cfc4-4591-49d9-b801-f1a54c6203cf req-448278e7-e309-433f-a2f2-dca77c982e92 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c2b299fa-a4b6-461f-84d6-790aa118102d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:04:57 compute-0 nova_compute[257802]: 2025-10-02 12:04:57.074 2 DEBUG nova.compute.manager [req-8536cfc4-4591-49d9-b801-f1a54c6203cf req-448278e7-e309-433f-a2f2-dca77c982e92 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Processing event network-vif-plugged-759e5032-b035-44b1-9e49-b89ab3323ceb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:04:57 compute-0 nova_compute[257802]: 2025-10-02 12:04:57.075 2 DEBUG nova.compute.manager [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:04:57 compute-0 nova_compute[257802]: 2025-10-02 12:04:57.079 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406697.0788877, c2b299fa-a4b6-461f-84d6-790aa118102d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:04:57 compute-0 nova_compute[257802]: 2025-10-02 12:04:57.079 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] VM Resumed (Lifecycle Event)
Oct 02 12:04:57 compute-0 nova_compute[257802]: 2025-10-02 12:04:57.081 2 DEBUG nova.virt.libvirt.driver [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:04:57 compute-0 nova_compute[257802]: 2025-10-02 12:04:57.084 2 INFO nova.virt.libvirt.driver [-] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Instance spawned successfully.
Oct 02 12:04:57 compute-0 nova_compute[257802]: 2025-10-02 12:04:57.084 2 DEBUG nova.virt.libvirt.driver [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:04:57 compute-0 nova_compute[257802]: 2025-10-02 12:04:57.113 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:04:57 compute-0 nova_compute[257802]: 2025-10-02 12:04:57.118 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:04:57 compute-0 nova_compute[257802]: 2025-10-02 12:04:57.122 2 DEBUG nova.virt.libvirt.driver [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:04:57 compute-0 nova_compute[257802]: 2025-10-02 12:04:57.122 2 DEBUG nova.virt.libvirt.driver [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:04:57 compute-0 nova_compute[257802]: 2025-10-02 12:04:57.123 2 DEBUG nova.virt.libvirt.driver [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:04:57 compute-0 nova_compute[257802]: 2025-10-02 12:04:57.123 2 DEBUG nova.virt.libvirt.driver [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:04:57 compute-0 nova_compute[257802]: 2025-10-02 12:04:57.124 2 DEBUG nova.virt.libvirt.driver [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:04:57 compute-0 nova_compute[257802]: 2025-10-02 12:04:57.124 2 DEBUG nova.virt.libvirt.driver [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:04:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:04:57 compute-0 nova_compute[257802]: 2025-10-02 12:04:57.317 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:04:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:57.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:04:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:57.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:04:57 compute-0 nova_compute[257802]: 2025-10-02 12:04:57.440 2 INFO nova.compute.manager [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Took 11.30 seconds to spawn the instance on the hypervisor.
Oct 02 12:04:57 compute-0 nova_compute[257802]: 2025-10-02 12:04:57.440 2 DEBUG nova.compute.manager [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:04:57 compute-0 nova_compute[257802]: 2025-10-02 12:04:57.992 2 INFO nova.compute.manager [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Took 12.75 seconds to build instance.
Oct 02 12:04:58 compute-0 ceph-mon[73607]: pgmap v1139: 305 pgs: 305 active+clean; 413 MiB data, 497 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 10 MiB/s wr, 375 op/s
Oct 02 12:04:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 438 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 11 MiB/s wr, 378 op/s
Oct 02 12:04:59 compute-0 nova_compute[257802]: 2025-10-02 12:04:59.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:04:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:04:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:59.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:04:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:04:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:59.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:59 compute-0 nova_compute[257802]: 2025-10-02 12:04:59.627 2 DEBUG oslo_concurrency.lockutils [None req-7f402b54-c93e-4ffd-b5a2-bd57dd8f5812 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "c2b299fa-a4b6-461f-84d6-790aa118102d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.468s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:05:00 compute-0 nova_compute[257802]: 2025-10-02 12:05:00.069 2 DEBUG nova.compute.manager [req-49bebaad-420a-482b-a476-0d1f20a6d29d req-a7029e6f-5fc9-444e-890b-484668270021 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Received event network-vif-plugged-759e5032-b035-44b1-9e49-b89ab3323ceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:05:00 compute-0 nova_compute[257802]: 2025-10-02 12:05:00.069 2 DEBUG oslo_concurrency.lockutils [req-49bebaad-420a-482b-a476-0d1f20a6d29d req-a7029e6f-5fc9-444e-890b-484668270021 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "c2b299fa-a4b6-461f-84d6-790aa118102d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:05:00 compute-0 nova_compute[257802]: 2025-10-02 12:05:00.069 2 DEBUG oslo_concurrency.lockutils [req-49bebaad-420a-482b-a476-0d1f20a6d29d req-a7029e6f-5fc9-444e-890b-484668270021 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c2b299fa-a4b6-461f-84d6-790aa118102d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:05:00 compute-0 nova_compute[257802]: 2025-10-02 12:05:00.070 2 DEBUG oslo_concurrency.lockutils [req-49bebaad-420a-482b-a476-0d1f20a6d29d req-a7029e6f-5fc9-444e-890b-484668270021 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c2b299fa-a4b6-461f-84d6-790aa118102d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:05:00 compute-0 nova_compute[257802]: 2025-10-02 12:05:00.070 2 DEBUG nova.compute.manager [req-49bebaad-420a-482b-a476-0d1f20a6d29d req-a7029e6f-5fc9-444e-890b-484668270021 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] No waiting events found dispatching network-vif-plugged-759e5032-b035-44b1-9e49-b89ab3323ceb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:05:00 compute-0 nova_compute[257802]: 2025-10-02 12:05:00.070 2 WARNING nova.compute.manager [req-49bebaad-420a-482b-a476-0d1f20a6d29d req-a7029e6f-5fc9-444e-890b-484668270021 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Received unexpected event network-vif-plugged-759e5032-b035-44b1-9e49-b89ab3323ceb for instance with vm_state active and task_state None.
Oct 02 12:05:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Oct 02 12:05:00 compute-0 ceph-mon[73607]: pgmap v1140: 305 pgs: 305 active+clean; 438 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 11 MiB/s wr, 378 op/s
Oct 02 12:05:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Oct 02 12:05:00 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Oct 02 12:05:00 compute-0 nova_compute[257802]: 2025-10-02 12:05:00.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:00 compute-0 podman[278917]: 2025-10-02 12:05:00.933658902 +0000 UTC m=+0.066674395 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:05:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 451 MiB data, 559 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 8.6 MiB/s wr, 420 op/s
Oct 02 12:05:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:05:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:01.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:05:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:01.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Oct 02 12:05:01 compute-0 ceph-mon[73607]: osdmap e150: 3 total, 3 up, 3 in
Oct 02 12:05:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Oct 02 12:05:01 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Oct 02 12:05:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:05:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Oct 02 12:05:02 compute-0 ceph-mon[73607]: pgmap v1142: 305 pgs: 305 active+clean; 451 MiB data, 559 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 8.6 MiB/s wr, 420 op/s
Oct 02 12:05:02 compute-0 ceph-mon[73607]: osdmap e151: 3 total, 3 up, 3 in
Oct 02 12:05:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Oct 02 12:05:02 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Oct 02 12:05:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 473 MiB data, 584 MiB used, 20 GiB / 21 GiB avail; 9.9 MiB/s rd, 8.1 MiB/s wr, 395 op/s
Oct 02 12:05:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:03.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:03.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:03 compute-0 ceph-mon[73607]: osdmap e152: 3 total, 3 up, 3 in
Oct 02 12:05:03 compute-0 podman[278939]: 2025-10-02 12:05:03.918964514 +0000 UTC m=+0.053627673 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:05:03 compute-0 podman[278938]: 2025-10-02 12:05:03.923525367 +0000 UTC m=+0.058920585 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:05:04 compute-0 nova_compute[257802]: 2025-10-02 12:05:04.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 477 MiB data, 587 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 7.2 MiB/s wr, 275 op/s
Oct 02 12:05:05 compute-0 ceph-mon[73607]: pgmap v1145: 305 pgs: 305 active+clean; 473 MiB data, 584 MiB used, 20 GiB / 21 GiB avail; 9.9 MiB/s rd, 8.1 MiB/s wr, 395 op/s
Oct 02 12:05:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/307807258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:05.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:05:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:05.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:05:05 compute-0 nova_compute[257802]: 2025-10-02 12:05:05.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:06 compute-0 ceph-mon[73607]: pgmap v1146: 305 pgs: 305 active+clean; 477 MiB data, 587 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 7.2 MiB/s wr, 275 op/s
Oct 02 12:05:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 484 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 8.7 MiB/s rd, 7.1 MiB/s wr, 230 op/s
Oct 02 12:05:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:05:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Oct 02 12:05:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Oct 02 12:05:07 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Oct 02 12:05:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:07.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:07.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:08 compute-0 ceph-mon[73607]: pgmap v1147: 305 pgs: 305 active+clean; 484 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 8.7 MiB/s rd, 7.1 MiB/s wr, 230 op/s
Oct 02 12:05:08 compute-0 ceph-mon[73607]: osdmap e153: 3 total, 3 up, 3 in
Oct 02 12:05:08 compute-0 nova_compute[257802]: 2025-10-02 12:05:08.403 2 DEBUG oslo_concurrency.lockutils [None req-ceca18d3-5a8c-471f-96d1-a0378ff7468e c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Acquiring lock "da3d84b1-1cb1-46a9-a160-cd80904111da" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:05:08 compute-0 nova_compute[257802]: 2025-10-02 12:05:08.404 2 DEBUG oslo_concurrency.lockutils [None req-ceca18d3-5a8c-471f-96d1-a0378ff7468e c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Lock "da3d84b1-1cb1-46a9-a160-cd80904111da" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:05:08 compute-0 nova_compute[257802]: 2025-10-02 12:05:08.404 2 DEBUG oslo_concurrency.lockutils [None req-ceca18d3-5a8c-471f-96d1-a0378ff7468e c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Acquiring lock "da3d84b1-1cb1-46a9-a160-cd80904111da-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:05:08 compute-0 nova_compute[257802]: 2025-10-02 12:05:08.404 2 DEBUG oslo_concurrency.lockutils [None req-ceca18d3-5a8c-471f-96d1-a0378ff7468e c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Lock "da3d84b1-1cb1-46a9-a160-cd80904111da-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:05:08 compute-0 nova_compute[257802]: 2025-10-02 12:05:08.405 2 DEBUG oslo_concurrency.lockutils [None req-ceca18d3-5a8c-471f-96d1-a0378ff7468e c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Lock "da3d84b1-1cb1-46a9-a160-cd80904111da-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:05:08 compute-0 nova_compute[257802]: 2025-10-02 12:05:08.406 2 INFO nova.compute.manager [None req-ceca18d3-5a8c-471f-96d1-a0378ff7468e c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Terminating instance
Oct 02 12:05:08 compute-0 nova_compute[257802]: 2025-10-02 12:05:08.406 2 DEBUG oslo_concurrency.lockutils [None req-ceca18d3-5a8c-471f-96d1-a0378ff7468e c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Acquiring lock "refresh_cache-da3d84b1-1cb1-46a9-a160-cd80904111da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:05:08 compute-0 nova_compute[257802]: 2025-10-02 12:05:08.407 2 DEBUG oslo_concurrency.lockutils [None req-ceca18d3-5a8c-471f-96d1-a0378ff7468e c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Acquired lock "refresh_cache-da3d84b1-1cb1-46a9-a160-cd80904111da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:05:08 compute-0 nova_compute[257802]: 2025-10-02 12:05:08.407 2 DEBUG nova.network.neutron [None req-ceca18d3-5a8c-471f-96d1-a0378ff7468e c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:05:08 compute-0 nova_compute[257802]: 2025-10-02 12:05:08.837 2 DEBUG nova.network.neutron [None req-ceca18d3-5a8c-471f-96d1-a0378ff7468e c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:05:08 compute-0 podman[278982]: 2025-10-02 12:05:08.944699194 +0000 UTC m=+0.084605258 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:05:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 484 MiB data, 584 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 6.5 MiB/s wr, 229 op/s
Oct 02 12:05:09 compute-0 nova_compute[257802]: 2025-10-02 12:05:09.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:09.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1575453669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:09.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:09 compute-0 nova_compute[257802]: 2025-10-02 12:05:09.714 2 DEBUG nova.network.neutron [None req-ceca18d3-5a8c-471f-96d1-a0378ff7468e c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:05:09 compute-0 nova_compute[257802]: 2025-10-02 12:05:09.764 2 DEBUG oslo_concurrency.lockutils [None req-ceca18d3-5a8c-471f-96d1-a0378ff7468e c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Releasing lock "refresh_cache-da3d84b1-1cb1-46a9-a160-cd80904111da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:05:09 compute-0 nova_compute[257802]: 2025-10-02 12:05:09.764 2 DEBUG nova.compute.manager [None req-ceca18d3-5a8c-471f-96d1-a0378ff7468e c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:05:09 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Oct 02 12:05:09 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000001a.scope: Consumed 13.917s CPU time.
Oct 02 12:05:09 compute-0 systemd-machined[211836]: Machine qemu-15-instance-0000001a terminated.
Oct 02 12:05:09 compute-0 nova_compute[257802]: 2025-10-02 12:05:09.986 2 INFO nova.virt.libvirt.driver [-] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Instance destroyed successfully.
Oct 02 12:05:09 compute-0 nova_compute[257802]: 2025-10-02 12:05:09.987 2 DEBUG nova.objects.instance [None req-ceca18d3-5a8c-471f-96d1-a0378ff7468e c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Lazy-loading 'resources' on Instance uuid da3d84b1-1cb1-46a9-a160-cd80904111da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:05:10 compute-0 ovn_controller[148183]: 2025-10-02T12:05:10Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:79:89:f8 10.100.0.4
Oct 02 12:05:10 compute-0 ovn_controller[148183]: 2025-10-02T12:05:10Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:79:89:f8 10.100.0.4
Oct 02 12:05:10 compute-0 ceph-mon[73607]: pgmap v1149: 305 pgs: 305 active+clean; 484 MiB data, 584 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 6.5 MiB/s wr, 229 op/s
Oct 02 12:05:10 compute-0 nova_compute[257802]: 2025-10-02 12:05:10.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 484 MiB data, 584 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.8 MiB/s wr, 141 op/s
Oct 02 12:05:11 compute-0 sudo[279032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:11 compute-0 sudo[279032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:11 compute-0 sudo[279032]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:05:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:11.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:05:11 compute-0 sudo[279057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:11 compute-0 sudo[279057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:11 compute-0 sudo[279057]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:11.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:12 compute-0 nova_compute[257802]: 2025-10-02 12:05:12.150 2 INFO nova.virt.libvirt.driver [None req-ceca18d3-5a8c-471f-96d1-a0378ff7468e c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Deleting instance files /var/lib/nova/instances/da3d84b1-1cb1-46a9-a160-cd80904111da_del
Oct 02 12:05:12 compute-0 nova_compute[257802]: 2025-10-02 12:05:12.152 2 INFO nova.virt.libvirt.driver [None req-ceca18d3-5a8c-471f-96d1-a0378ff7468e c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Deletion of /var/lib/nova/instances/da3d84b1-1cb1-46a9-a160-cd80904111da_del complete
Oct 02 12:05:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:05:12 compute-0 ceph-mon[73607]: pgmap v1150: 305 pgs: 305 active+clean; 484 MiB data, 584 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.8 MiB/s wr, 141 op/s
Oct 02 12:05:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:05:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:05:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:05:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:05:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:05:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:05:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 466 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.6 MiB/s wr, 213 op/s
Oct 02 12:05:13 compute-0 nova_compute[257802]: 2025-10-02 12:05:13.338 2 INFO nova.compute.manager [None req-ceca18d3-5a8c-471f-96d1-a0378ff7468e c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Took 3.57 seconds to destroy the instance on the hypervisor.
Oct 02 12:05:13 compute-0 nova_compute[257802]: 2025-10-02 12:05:13.339 2 DEBUG oslo.service.loopingcall [None req-ceca18d3-5a8c-471f-96d1-a0378ff7468e c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:05:13 compute-0 nova_compute[257802]: 2025-10-02 12:05:13.339 2 DEBUG nova.compute.manager [-] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:05:13 compute-0 nova_compute[257802]: 2025-10-02 12:05:13.339 2 DEBUG nova.network.neutron [-] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:05:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:13.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:13.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:13 compute-0 nova_compute[257802]: 2025-10-02 12:05:13.617 2 DEBUG nova.network.neutron [-] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:05:13 compute-0 nova_compute[257802]: 2025-10-02 12:05:13.644 2 DEBUG nova.network.neutron [-] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:05:13 compute-0 nova_compute[257802]: 2025-10-02 12:05:13.673 2 INFO nova.compute.manager [-] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Took 0.33 seconds to deallocate network for instance.
Oct 02 12:05:13 compute-0 nova_compute[257802]: 2025-10-02 12:05:13.759 2 DEBUG oslo_concurrency.lockutils [None req-ceca18d3-5a8c-471f-96d1-a0378ff7468e c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:05:13 compute-0 nova_compute[257802]: 2025-10-02 12:05:13.760 2 DEBUG oslo_concurrency.lockutils [None req-ceca18d3-5a8c-471f-96d1-a0378ff7468e c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:05:13 compute-0 nova_compute[257802]: 2025-10-02 12:05:13.905 2 DEBUG oslo_concurrency.processutils [None req-ceca18d3-5a8c-471f-96d1-a0378ff7468e c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:05:14 compute-0 nova_compute[257802]: 2025-10-02 12:05:14.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:05:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1700514518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:14 compute-0 nova_compute[257802]: 2025-10-02 12:05:14.355 2 DEBUG oslo_concurrency.processutils [None req-ceca18d3-5a8c-471f-96d1-a0378ff7468e c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:05:14 compute-0 nova_compute[257802]: 2025-10-02 12:05:14.361 2 DEBUG nova.compute.provider_tree [None req-ceca18d3-5a8c-471f-96d1-a0378ff7468e c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:05:14 compute-0 nova_compute[257802]: 2025-10-02 12:05:14.604 2 DEBUG nova.scheduler.client.report [None req-ceca18d3-5a8c-471f-96d1-a0378ff7468e c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:05:14 compute-0 nova_compute[257802]: 2025-10-02 12:05:14.645 2 DEBUG oslo_concurrency.lockutils [None req-ceca18d3-5a8c-471f-96d1-a0378ff7468e c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:05:14 compute-0 nova_compute[257802]: 2025-10-02 12:05:14.710 2 INFO nova.scheduler.client.report [None req-ceca18d3-5a8c-471f-96d1-a0378ff7468e c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Deleted allocations for instance da3d84b1-1cb1-46a9-a160-cd80904111da
Oct 02 12:05:14 compute-0 ceph-mon[73607]: pgmap v1151: 305 pgs: 305 active+clean; 466 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.6 MiB/s wr, 213 op/s
Oct 02 12:05:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2898171107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1700514518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:14 compute-0 nova_compute[257802]: 2025-10-02 12:05:14.848 2 DEBUG oslo_concurrency.lockutils [None req-ceca18d3-5a8c-471f-96d1-a0378ff7468e c5ab011ce9f04adbb19dab5fa5ed1714 6b8ddbfa33c348beb1c883371b5c6909 - - default default] Lock "da3d84b1-1cb1-46a9-a160-cd80904111da" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.444s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:05:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 436 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 5.4 MiB/s wr, 244 op/s
Oct 02 12:05:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:15.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:15.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:15 compute-0 nova_compute[257802]: 2025-10-02 12:05:15.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 405 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 466 KiB/s rd, 4.7 MiB/s wr, 190 op/s
Oct 02 12:05:17 compute-0 ceph-mon[73607]: pgmap v1152: 305 pgs: 305 active+clean; 436 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 5.4 MiB/s wr, 244 op/s
Oct 02 12:05:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:05:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:17.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:17.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:17 compute-0 sudo[279108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:17 compute-0 sudo[279108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:17 compute-0 sudo[279108]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:18 compute-0 sudo[279133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:05:18 compute-0 sudo[279133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:18 compute-0 sudo[279133]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:18 compute-0 sudo[279158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:18 compute-0 sudo[279158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:18 compute-0 sudo[279158]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:18 compute-0 sudo[279183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:05:18 compute-0 sudo[279183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:18 compute-0 ceph-mon[73607]: pgmap v1153: 305 pgs: 305 active+clean; 405 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 466 KiB/s rd, 4.7 MiB/s wr, 190 op/s
Oct 02 12:05:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:05:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:05:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:05:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:05:18 compute-0 sudo[279183]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Oct 02 12:05:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 12:05:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 12:05:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 12:05:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 405 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 400 KiB/s rd, 4.0 MiB/s wr, 163 op/s
Oct 02 12:05:19 compute-0 nova_compute[257802]: 2025-10-02 12:05:19.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:19.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:19.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:05:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:05:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 12:05:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 12:05:20 compute-0 nova_compute[257802]: 2025-10-02 12:05:20.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:20 compute-0 ceph-mon[73607]: pgmap v1154: 305 pgs: 305 active+clean; 405 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 400 KiB/s rd, 4.0 MiB/s wr, 163 op/s
Oct 02 12:05:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1224773953' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:05:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 405 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 379 KiB/s rd, 3.9 MiB/s wr, 147 op/s
Oct 02 12:05:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:21.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:21.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3699658884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:05:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1678619395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:05:22 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:05:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:05:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:05:22 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:05:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Oct 02 12:05:22 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 12:05:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:05:22 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:05:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:05:22 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:05:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:05:22 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:05:22 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 1da1d7b1-6508-4032-9e68-0c524fd9bcb7 does not exist
Oct 02 12:05:22 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 32a13f18-e189-455d-a25d-30d2caeb7501 does not exist
Oct 02 12:05:22 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev e27cece4-e788-416d-8ed5-2bf2b1b8df22 does not exist
Oct 02 12:05:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:05:22 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:05:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:05:22 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:05:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:05:22 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:05:22 compute-0 sudo[279241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:22 compute-0 sudo[279241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:22 compute-0 sudo[279241]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:22 compute-0 sudo[279266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:05:22 compute-0 sudo[279266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:22 compute-0 sudo[279266]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:22 compute-0 sudo[279291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:22 compute-0 sudo[279291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:22 compute-0 sudo[279291]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:22 compute-0 sudo[279316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:05:22 compute-0 sudo[279316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:22 compute-0 nova_compute[257802]: 2025-10-02 12:05:22.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:22.980 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:05:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:22.981 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:05:22 compute-0 ceph-mon[73607]: pgmap v1155: 305 pgs: 305 active+clean; 405 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 379 KiB/s rd, 3.9 MiB/s wr, 147 op/s
Oct 02 12:05:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:05:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:05:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 12:05:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:05:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:05:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:05:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:05:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:05:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:05:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 405 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 370 KiB/s rd, 3.9 MiB/s wr, 136 op/s
Oct 02 12:05:23 compute-0 podman[279383]: 2025-10-02 12:05:23.06426887 +0000 UTC m=+0.100148694 container create 26ad986a1b2ca49305976535e6f2ec8a247081f3c43f2a14248c469839e9bac8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:05:23 compute-0 podman[279383]: 2025-10-02 12:05:22.985317858 +0000 UTC m=+0.021197702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:05:23 compute-0 systemd[1]: Started libpod-conmon-26ad986a1b2ca49305976535e6f2ec8a247081f3c43f2a14248c469839e9bac8.scope.
Oct 02 12:05:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:05:23 compute-0 podman[279383]: 2025-10-02 12:05:23.208106619 +0000 UTC m=+0.243986463 container init 26ad986a1b2ca49305976535e6f2ec8a247081f3c43f2a14248c469839e9bac8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_sammet, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 12:05:23 compute-0 podman[279383]: 2025-10-02 12:05:23.215270806 +0000 UTC m=+0.251150630 container start 26ad986a1b2ca49305976535e6f2ec8a247081f3c43f2a14248c469839e9bac8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_sammet, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:05:23 compute-0 distracted_sammet[279399]: 167 167
Oct 02 12:05:23 compute-0 systemd[1]: libpod-26ad986a1b2ca49305976535e6f2ec8a247081f3c43f2a14248c469839e9bac8.scope: Deactivated successfully.
Oct 02 12:05:23 compute-0 conmon[279399]: conmon 26ad986a1b2ca4930597 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-26ad986a1b2ca49305976535e6f2ec8a247081f3c43f2a14248c469839e9bac8.scope/container/memory.events
Oct 02 12:05:23 compute-0 podman[279383]: 2025-10-02 12:05:23.237977304 +0000 UTC m=+0.273857148 container attach 26ad986a1b2ca49305976535e6f2ec8a247081f3c43f2a14248c469839e9bac8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_sammet, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 12:05:23 compute-0 podman[279383]: 2025-10-02 12:05:23.238852836 +0000 UTC m=+0.274732660 container died 26ad986a1b2ca49305976535e6f2ec8a247081f3c43f2a14248c469839e9bac8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_sammet, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 12:05:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-f36af670e1e5feaadba28eccc9adcff3d07ca04d0531fd29c6586bf955eba00a-merged.mount: Deactivated successfully.
Oct 02 12:05:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:23.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:23.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:23 compute-0 podman[279383]: 2025-10-02 12:05:23.568517267 +0000 UTC m=+0.604397091 container remove 26ad986a1b2ca49305976535e6f2ec8a247081f3c43f2a14248c469839e9bac8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_sammet, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:05:23 compute-0 systemd[1]: libpod-conmon-26ad986a1b2ca49305976535e6f2ec8a247081f3c43f2a14248c469839e9bac8.scope: Deactivated successfully.
Oct 02 12:05:23 compute-0 podman[279424]: 2025-10-02 12:05:23.782110593 +0000 UTC m=+0.081076696 container create 31cc9e2b9e67a162249a0d014dbd177928175a6edf438e492c863b575552e18f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:05:23 compute-0 podman[279424]: 2025-10-02 12:05:23.722999179 +0000 UTC m=+0.021965272 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:05:23 compute-0 systemd[1]: Started libpod-conmon-31cc9e2b9e67a162249a0d014dbd177928175a6edf438e492c863b575552e18f.scope.
Oct 02 12:05:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31b4cc394d1ebc7035be3ace8679f09bbdc1c73c6e8daaf3b725136cfc0e06f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31b4cc394d1ebc7035be3ace8679f09bbdc1c73c6e8daaf3b725136cfc0e06f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31b4cc394d1ebc7035be3ace8679f09bbdc1c73c6e8daaf3b725136cfc0e06f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31b4cc394d1ebc7035be3ace8679f09bbdc1c73c6e8daaf3b725136cfc0e06f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31b4cc394d1ebc7035be3ace8679f09bbdc1c73c6e8daaf3b725136cfc0e06f0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:23 compute-0 podman[279424]: 2025-10-02 12:05:23.913509896 +0000 UTC m=+0.212475989 container init 31cc9e2b9e67a162249a0d014dbd177928175a6edf438e492c863b575552e18f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_euler, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 12:05:23 compute-0 podman[279424]: 2025-10-02 12:05:23.92057665 +0000 UTC m=+0.219542723 container start 31cc9e2b9e67a162249a0d014dbd177928175a6edf438e492c863b575552e18f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:05:23 compute-0 podman[279424]: 2025-10-02 12:05:23.935418074 +0000 UTC m=+0.234384167 container attach 31cc9e2b9e67a162249a0d014dbd177928175a6edf438e492c863b575552e18f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 12:05:24 compute-0 ceph-mon[73607]: pgmap v1156: 305 pgs: 305 active+clean; 405 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 370 KiB/s rd, 3.9 MiB/s wr, 136 op/s
Oct 02 12:05:24 compute-0 nova_compute[257802]: 2025-10-02 12:05:24.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:24 compute-0 funny_euler[279441]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:05:24 compute-0 funny_euler[279441]: --> relative data size: 1.0
Oct 02 12:05:24 compute-0 funny_euler[279441]: --> All data devices are unavailable
Oct 02 12:05:24 compute-0 systemd[1]: libpod-31cc9e2b9e67a162249a0d014dbd177928175a6edf438e492c863b575552e18f.scope: Deactivated successfully.
Oct 02 12:05:24 compute-0 podman[279424]: 2025-10-02 12:05:24.775400211 +0000 UTC m=+1.074366294 container died 31cc9e2b9e67a162249a0d014dbd177928175a6edf438e492c863b575552e18f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:05:24 compute-0 nova_compute[257802]: 2025-10-02 12:05:24.985 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406709.9833002, da3d84b1-1cb1-46a9-a160-cd80904111da => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:05:24 compute-0 nova_compute[257802]: 2025-10-02 12:05:24.986 2 INFO nova.compute.manager [-] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] VM Stopped (Lifecycle Event)
Oct 02 12:05:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 405 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 106 KiB/s rd, 1.9 MiB/s wr, 55 op/s
Oct 02 12:05:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-31b4cc394d1ebc7035be3ace8679f09bbdc1c73c6e8daaf3b725136cfc0e06f0-merged.mount: Deactivated successfully.
Oct 02 12:05:25 compute-0 nova_compute[257802]: 2025-10-02 12:05:25.023 2 DEBUG nova.compute.manager [None req-c3c4397d-7a13-448b-8998-f25b356b1510 - - - - - -] [instance: da3d84b1-1cb1-46a9-a160-cd80904111da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:05:25 compute-0 podman[279424]: 2025-10-02 12:05:25.206124809 +0000 UTC m=+1.505090872 container remove 31cc9e2b9e67a162249a0d014dbd177928175a6edf438e492c863b575552e18f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:05:25 compute-0 systemd[1]: libpod-conmon-31cc9e2b9e67a162249a0d014dbd177928175a6edf438e492c863b575552e18f.scope: Deactivated successfully.
Oct 02 12:05:25 compute-0 sudo[279316]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:25 compute-0 sudo[279469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:25 compute-0 sudo[279469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:25 compute-0 sudo[279469]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:25 compute-0 sudo[279494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:05:25 compute-0 sudo[279494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:25 compute-0 sudo[279494]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:05:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:25.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:05:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:25.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:25 compute-0 sudo[279519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:25 compute-0 sudo[279519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:25 compute-0 sudo[279519]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:25 compute-0 sudo[279544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:05:25 compute-0 sudo[279544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:25 compute-0 nova_compute[257802]: 2025-10-02 12:05:25.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:25 compute-0 podman[279609]: 2025-10-02 12:05:25.780142862 +0000 UTC m=+0.024702839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:05:25 compute-0 podman[279609]: 2025-10-02 12:05:25.971539931 +0000 UTC m=+0.216099888 container create c056db3b88d8b9814d762354c17ce3b95cf08df604364b5b4b5140a775d3caea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shaw, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 12:05:26 compute-0 systemd[1]: Started libpod-conmon-c056db3b88d8b9814d762354c17ce3b95cf08df604364b5b4b5140a775d3caea.scope.
Oct 02 12:05:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:05:26 compute-0 podman[279609]: 2025-10-02 12:05:26.279002026 +0000 UTC m=+0.523562013 container init c056db3b88d8b9814d762354c17ce3b95cf08df604364b5b4b5140a775d3caea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shaw, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:05:26 compute-0 podman[279609]: 2025-10-02 12:05:26.287424043 +0000 UTC m=+0.531984000 container start c056db3b88d8b9814d762354c17ce3b95cf08df604364b5b4b5140a775d3caea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 12:05:26 compute-0 upbeat_shaw[279626]: 167 167
Oct 02 12:05:26 compute-0 systemd[1]: libpod-c056db3b88d8b9814d762354c17ce3b95cf08df604364b5b4b5140a775d3caea.scope: Deactivated successfully.
Oct 02 12:05:26 compute-0 podman[279609]: 2025-10-02 12:05:26.335579738 +0000 UTC m=+0.580139715 container attach c056db3b88d8b9814d762354c17ce3b95cf08df604364b5b4b5140a775d3caea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Oct 02 12:05:26 compute-0 podman[279609]: 2025-10-02 12:05:26.337045524 +0000 UTC m=+0.581605501 container died c056db3b88d8b9814d762354c17ce3b95cf08df604364b5b4b5140a775d3caea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shaw, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:05:26 compute-0 ceph-mon[73607]: pgmap v1157: 305 pgs: 305 active+clean; 405 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 106 KiB/s rd, 1.9 MiB/s wr, 55 op/s
Oct 02 12:05:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-b136e7f1e51e4a9c21670b9092466984e6d15b4242fe4f26ad6be21c5c06a574-merged.mount: Deactivated successfully.
Oct 02 12:05:26 compute-0 podman[279609]: 2025-10-02 12:05:26.630605197 +0000 UTC m=+0.875165154 container remove c056db3b88d8b9814d762354c17ce3b95cf08df604364b5b4b5140a775d3caea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 12:05:26 compute-0 systemd[1]: libpod-conmon-c056db3b88d8b9814d762354c17ce3b95cf08df604364b5b4b5140a775d3caea.scope: Deactivated successfully.
Oct 02 12:05:26 compute-0 podman[279653]: 2025-10-02 12:05:26.841956006 +0000 UTC m=+0.088851436 container create 9c22a297896416988802dbf5a34152a325dc86a94be3ad65aa04bb276d2859d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 12:05:26 compute-0 podman[279653]: 2025-10-02 12:05:26.77219378 +0000 UTC m=+0.019089240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:05:26 compute-0 systemd[1]: Started libpod-conmon-9c22a297896416988802dbf5a34152a325dc86a94be3ad65aa04bb276d2859d2.scope.
Oct 02 12:05:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:26.923 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:05:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:26.924 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:05:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:26.924 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:05:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3720de5be782262ab3e9a2f9813f404da5f243192056300703ebb707305ac8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3720de5be782262ab3e9a2f9813f404da5f243192056300703ebb707305ac8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3720de5be782262ab3e9a2f9813f404da5f243192056300703ebb707305ac8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3720de5be782262ab3e9a2f9813f404da5f243192056300703ebb707305ac8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:26 compute-0 podman[279653]: 2025-10-02 12:05:26.962526513 +0000 UTC m=+0.209421943 container init 9c22a297896416988802dbf5a34152a325dc86a94be3ad65aa04bb276d2859d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 12:05:26 compute-0 podman[279653]: 2025-10-02 12:05:26.969521005 +0000 UTC m=+0.216416435 container start 9c22a297896416988802dbf5a34152a325dc86a94be3ad65aa04bb276d2859d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_germain, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 12:05:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 405 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 405 KiB/s rd, 680 KiB/s wr, 52 op/s
Oct 02 12:05:27 compute-0 podman[279653]: 2025-10-02 12:05:27.040962083 +0000 UTC m=+0.287857513 container attach 9c22a297896416988802dbf5a34152a325dc86a94be3ad65aa04bb276d2859d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:05:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:05:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:27.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:27.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:27 compute-0 gallant_germain[279669]: {
Oct 02 12:05:27 compute-0 gallant_germain[279669]:     "1": [
Oct 02 12:05:27 compute-0 gallant_germain[279669]:         {
Oct 02 12:05:27 compute-0 gallant_germain[279669]:             "devices": [
Oct 02 12:05:27 compute-0 gallant_germain[279669]:                 "/dev/loop3"
Oct 02 12:05:27 compute-0 gallant_germain[279669]:             ],
Oct 02 12:05:27 compute-0 gallant_germain[279669]:             "lv_name": "ceph_lv0",
Oct 02 12:05:27 compute-0 gallant_germain[279669]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:05:27 compute-0 gallant_germain[279669]:             "lv_size": "7511998464",
Oct 02 12:05:27 compute-0 gallant_germain[279669]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:05:27 compute-0 gallant_germain[279669]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:05:27 compute-0 gallant_germain[279669]:             "name": "ceph_lv0",
Oct 02 12:05:27 compute-0 gallant_germain[279669]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:05:27 compute-0 gallant_germain[279669]:             "tags": {
Oct 02 12:05:27 compute-0 gallant_germain[279669]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:05:27 compute-0 gallant_germain[279669]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:05:27 compute-0 gallant_germain[279669]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:05:27 compute-0 gallant_germain[279669]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:05:27 compute-0 gallant_germain[279669]:                 "ceph.cluster_name": "ceph",
Oct 02 12:05:27 compute-0 gallant_germain[279669]:                 "ceph.crush_device_class": "",
Oct 02 12:05:27 compute-0 gallant_germain[279669]:                 "ceph.encrypted": "0",
Oct 02 12:05:27 compute-0 gallant_germain[279669]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:05:27 compute-0 gallant_germain[279669]:                 "ceph.osd_id": "1",
Oct 02 12:05:27 compute-0 gallant_germain[279669]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:05:27 compute-0 gallant_germain[279669]:                 "ceph.type": "block",
Oct 02 12:05:27 compute-0 gallant_germain[279669]:                 "ceph.vdo": "0"
Oct 02 12:05:27 compute-0 gallant_germain[279669]:             },
Oct 02 12:05:27 compute-0 gallant_germain[279669]:             "type": "block",
Oct 02 12:05:27 compute-0 gallant_germain[279669]:             "vg_name": "ceph_vg0"
Oct 02 12:05:27 compute-0 gallant_germain[279669]:         }
Oct 02 12:05:27 compute-0 gallant_germain[279669]:     ]
Oct 02 12:05:27 compute-0 gallant_germain[279669]: }
Oct 02 12:05:27 compute-0 systemd[1]: libpod-9c22a297896416988802dbf5a34152a325dc86a94be3ad65aa04bb276d2859d2.scope: Deactivated successfully.
Oct 02 12:05:27 compute-0 podman[279653]: 2025-10-02 12:05:27.733978254 +0000 UTC m=+0.980873694 container died 9c22a297896416988802dbf5a34152a325dc86a94be3ad65aa04bb276d2859d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_germain, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:05:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3720de5be782262ab3e9a2f9813f404da5f243192056300703ebb707305ac8f-merged.mount: Deactivated successfully.
Oct 02 12:05:28 compute-0 podman[279653]: 2025-10-02 12:05:28.30323993 +0000 UTC m=+1.550135360 container remove 9c22a297896416988802dbf5a34152a325dc86a94be3ad65aa04bb276d2859d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_germain, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:05:28 compute-0 systemd[1]: libpod-conmon-9c22a297896416988802dbf5a34152a325dc86a94be3ad65aa04bb276d2859d2.scope: Deactivated successfully.
Oct 02 12:05:28 compute-0 sudo[279544]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:28 compute-0 sudo[279690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:28 compute-0 sudo[279690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:28 compute-0 sudo[279690]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:28 compute-0 sudo[279716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:05:28 compute-0 sudo[279716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:28 compute-0 sudo[279716]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:28 compute-0 sudo[279741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:28 compute-0 sudo[279741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:28 compute-0 sudo[279741]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:28 compute-0 ceph-mon[73607]: pgmap v1158: 305 pgs: 305 active+clean; 405 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 405 KiB/s rd, 680 KiB/s wr, 52 op/s
Oct 02 12:05:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/270720575' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:05:28 compute-0 sudo[279766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:05:28 compute-0 sudo[279766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:28 compute-0 podman[279829]: 2025-10-02 12:05:28.965477413 +0000 UTC m=+0.064309583 container create af8a99dbc3c82e04a67720c80688e44c084efbb92fb26a767830bd7f3c5b6919 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lalande, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:05:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 443 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.9 MiB/s wr, 85 op/s
Oct 02 12:05:29 compute-0 podman[279829]: 2025-10-02 12:05:28.92185538 +0000 UTC m=+0.020687570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:05:29 compute-0 systemd[1]: Started libpod-conmon-af8a99dbc3c82e04a67720c80688e44c084efbb92fb26a767830bd7f3c5b6919.scope.
Oct 02 12:05:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:05:29 compute-0 podman[279829]: 2025-10-02 12:05:29.156307469 +0000 UTC m=+0.255139659 container init af8a99dbc3c82e04a67720c80688e44c084efbb92fb26a767830bd7f3c5b6919 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 12:05:29 compute-0 podman[279829]: 2025-10-02 12:05:29.165038223 +0000 UTC m=+0.263870403 container start af8a99dbc3c82e04a67720c80688e44c084efbb92fb26a767830bd7f3c5b6919 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:05:29 compute-0 nice_lalande[279846]: 167 167
Oct 02 12:05:29 compute-0 systemd[1]: libpod-af8a99dbc3c82e04a67720c80688e44c084efbb92fb26a767830bd7f3c5b6919.scope: Deactivated successfully.
Oct 02 12:05:29 compute-0 podman[279829]: 2025-10-02 12:05:29.241593367 +0000 UTC m=+0.340425557 container attach af8a99dbc3c82e04a67720c80688e44c084efbb92fb26a767830bd7f3c5b6919 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 12:05:29 compute-0 podman[279829]: 2025-10-02 12:05:29.242036428 +0000 UTC m=+0.340868598 container died af8a99dbc3c82e04a67720c80688e44c084efbb92fb26a767830bd7f3c5b6919 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 12:05:29 compute-0 nova_compute[257802]: 2025-10-02 12:05:29.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:05:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:29.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:05:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:29.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-891f623b9f6f882009b5a4ae9c104a5c7041f73c0fd92da435ac6e2f3858ee77-merged.mount: Deactivated successfully.
Oct 02 12:05:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1291441108' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:05:29 compute-0 podman[279829]: 2025-10-02 12:05:29.828419855 +0000 UTC m=+0.927252025 container remove af8a99dbc3c82e04a67720c80688e44c084efbb92fb26a767830bd7f3c5b6919 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lalande, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:05:29 compute-0 systemd[1]: libpod-conmon-af8a99dbc3c82e04a67720c80688e44c084efbb92fb26a767830bd7f3c5b6919.scope: Deactivated successfully.
Oct 02 12:05:30 compute-0 podman[279870]: 2025-10-02 12:05:30.048205443 +0000 UTC m=+0.094877616 container create 165e0365099d986b2a3d9c8766b6eb7b8e87eed7ea783c12bdd02c9abf24a6cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lalande, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:05:30 compute-0 podman[279870]: 2025-10-02 12:05:29.981728038 +0000 UTC m=+0.028400221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:05:30 compute-0 nova_compute[257802]: 2025-10-02 12:05:30.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:05:30 compute-0 systemd[1]: Started libpod-conmon-165e0365099d986b2a3d9c8766b6eb7b8e87eed7ea783c12bdd02c9abf24a6cc.scope.
Oct 02 12:05:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:05:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e00a9a27857d86bfa7eecf65fc7f6655632a398a8beddb3d0023dee672186ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e00a9a27857d86bfa7eecf65fc7f6655632a398a8beddb3d0023dee672186ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e00a9a27857d86bfa7eecf65fc7f6655632a398a8beddb3d0023dee672186ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e00a9a27857d86bfa7eecf65fc7f6655632a398a8beddb3d0023dee672186ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:30 compute-0 podman[279870]: 2025-10-02 12:05:30.423503317 +0000 UTC m=+0.470175490 container init 165e0365099d986b2a3d9c8766b6eb7b8e87eed7ea783c12bdd02c9abf24a6cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 12:05:30 compute-0 podman[279870]: 2025-10-02 12:05:30.434271362 +0000 UTC m=+0.480943535 container start 165e0365099d986b2a3d9c8766b6eb7b8e87eed7ea783c12bdd02c9abf24a6cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lalande, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Oct 02 12:05:30 compute-0 podman[279870]: 2025-10-02 12:05:30.548970624 +0000 UTC m=+0.595642757 container attach 165e0365099d986b2a3d9c8766b6eb7b8e87eed7ea783c12bdd02c9abf24a6cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lalande, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 12:05:30 compute-0 nova_compute[257802]: 2025-10-02 12:05:30.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 484 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.9 MiB/s wr, 154 op/s
Oct 02 12:05:31 compute-0 ceph-mon[73607]: pgmap v1159: 305 pgs: 305 active+clean; 443 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.9 MiB/s wr, 85 op/s
Oct 02 12:05:31 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2480881078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:31 compute-0 infallible_lalande[279887]: {
Oct 02 12:05:31 compute-0 infallible_lalande[279887]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:05:31 compute-0 infallible_lalande[279887]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:05:31 compute-0 infallible_lalande[279887]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:05:31 compute-0 infallible_lalande[279887]:         "osd_id": 1,
Oct 02 12:05:31 compute-0 infallible_lalande[279887]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:05:31 compute-0 infallible_lalande[279887]:         "type": "bluestore"
Oct 02 12:05:31 compute-0 infallible_lalande[279887]:     }
Oct 02 12:05:31 compute-0 infallible_lalande[279887]: }
Oct 02 12:05:31 compute-0 systemd[1]: libpod-165e0365099d986b2a3d9c8766b6eb7b8e87eed7ea783c12bdd02c9abf24a6cc.scope: Deactivated successfully.
Oct 02 12:05:31 compute-0 podman[279910]: 2025-10-02 12:05:31.365718619 +0000 UTC m=+0.024524234 container died 165e0365099d986b2a3d9c8766b6eb7b8e87eed7ea783c12bdd02c9abf24a6cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:05:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:31.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:31.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:31 compute-0 sudo[279934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:31 compute-0 sudo[279934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:31 compute-0 sudo[279934]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:31 compute-0 sudo[279959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:31 compute-0 sudo[279959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:31 compute-0 sudo[279959]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:31 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 02 12:05:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e00a9a27857d86bfa7eecf65fc7f6655632a398a8beddb3d0023dee672186ea-merged.mount: Deactivated successfully.
Oct 02 12:05:32 compute-0 nova_compute[257802]: 2025-10-02 12:05:32.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:05:32 compute-0 podman[279910]: 2025-10-02 12:05:32.167277631 +0000 UTC m=+0.826083266 container remove 165e0365099d986b2a3d9c8766b6eb7b8e87eed7ea783c12bdd02c9abf24a6cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:05:32 compute-0 systemd[1]: libpod-conmon-165e0365099d986b2a3d9c8766b6eb7b8e87eed7ea783c12bdd02c9abf24a6cc.scope: Deactivated successfully.
Oct 02 12:05:32 compute-0 sudo[279766]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:05:32 compute-0 podman[279909]: 2025-10-02 12:05:32.254777834 +0000 UTC m=+0.895232188 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 12:05:32 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:05:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:05:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:05:32 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:05:32 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 8fa79d86-5a73-40cc-998b-8e14ff76fcc3 does not exist
Oct 02 12:05:32 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 521f2656-8eca-4b3e-bcc0-f4725ba7a900 does not exist
Oct 02 12:05:32 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 7a53f0c0-b5dd-426a-b11d-504c203a1a09 does not exist
Oct 02 12:05:32 compute-0 ceph-mon[73607]: pgmap v1160: 305 pgs: 305 active+clean; 484 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.9 MiB/s wr, 154 op/s
Oct 02 12:05:32 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Oct 02 12:05:32 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:05:32 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:32.423516) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:05:32 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Oct 02 12:05:32 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406732423568, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2168, "num_deletes": 258, "total_data_size": 3747996, "memory_usage": 3805448, "flush_reason": "Manual Compaction"}
Oct 02 12:05:32 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Oct 02 12:05:32 compute-0 sudo[279993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:32 compute-0 sudo[279993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:32 compute-0 sudo[279993]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:32 compute-0 sudo[280018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:05:32 compute-0 sudo[280018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:32 compute-0 sudo[280018]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:32 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406732501936, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 3607201, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24193, "largest_seqno": 26360, "table_properties": {"data_size": 3597676, "index_size": 5891, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20655, "raw_average_key_size": 20, "raw_value_size": 3577976, "raw_average_value_size": 3497, "num_data_blocks": 260, "num_entries": 1023, "num_filter_entries": 1023, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406543, "oldest_key_time": 1759406543, "file_creation_time": 1759406732, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:05:32 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 78469 microseconds, and 7304 cpu microseconds.
Oct 02 12:05:32 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:05:32 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:32.501981) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 3607201 bytes OK
Oct 02 12:05:32 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:32.502003) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Oct 02 12:05:32 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:32.611796) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Oct 02 12:05:32 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:32.611922) EVENT_LOG_v1 {"time_micros": 1759406732611907, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:05:32 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:32.611956) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:05:32 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 3739006, prev total WAL file size 3739006, number of live WAL files 2.
Oct 02 12:05:32 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:05:32 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:32.613779) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353031' seq:72057594037927935, type:22 .. '6C6F676D00373533' seq:0, type:0; will stop at (end)
Oct 02 12:05:32 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:05:32 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(3522KB)], [56(8828KB)]
Oct 02 12:05:32 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406732613883, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 12647623, "oldest_snapshot_seqno": -1}
Oct 02 12:05:32 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5363 keys, 12528429 bytes, temperature: kUnknown
Oct 02 12:05:32 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406732964928, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 12528429, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12488114, "index_size": 25805, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 134649, "raw_average_key_size": 25, "raw_value_size": 12387214, "raw_average_value_size": 2309, "num_data_blocks": 1069, "num_entries": 5363, "num_filter_entries": 5363, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759406732, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:05:32 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:05:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:32.983 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:05:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 509 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 4.5 MiB/s wr, 189 op/s
Oct 02 12:05:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:32.965473) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 12528429 bytes
Oct 02 12:05:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:33.024986) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 36.0 rd, 35.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 8.6 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(7.0) write-amplify(3.5) OK, records in: 5900, records dropped: 537 output_compression: NoCompression
Oct 02 12:05:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:33.025030) EVENT_LOG_v1 {"time_micros": 1759406733025013, "job": 30, "event": "compaction_finished", "compaction_time_micros": 351397, "compaction_time_cpu_micros": 28643, "output_level": 6, "num_output_files": 1, "total_output_size": 12528429, "num_input_records": 5900, "num_output_records": 5363, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:05:33 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:05:33 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406733026901, "job": 30, "event": "table_file_deletion", "file_number": 58}
Oct 02 12:05:33 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:05:33 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406733031198, "job": 30, "event": "table_file_deletion", "file_number": 56}
Oct 02 12:05:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:32.613658) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:05:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:33.031276) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:05:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:33.031283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:05:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:33.031286) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:05:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:33.031289) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:05:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:33.031292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:05:33 compute-0 nova_compute[257802]: 2025-10-02 12:05:33.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:05:33 compute-0 nova_compute[257802]: 2025-10-02 12:05:33.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:05:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:05:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:33.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:05:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:33.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:33 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:05:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/374108653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/633674661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:05:34 compute-0 nova_compute[257802]: 2025-10-02 12:05:34.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:05:34 compute-0 nova_compute[257802]: 2025-10-02 12:05:34.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:05:34 compute-0 nova_compute[257802]: 2025-10-02 12:05:34.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Oct 02 12:05:34 compute-0 podman[280044]: 2025-10-02 12:05:34.915736844 +0000 UTC m=+0.054877142 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:05:34 compute-0 podman[280045]: 2025-10-02 12:05:34.94973733 +0000 UTC m=+0.083438124 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_managed=true)
Oct 02 12:05:34 compute-0 ceph-mon[73607]: pgmap v1161: 305 pgs: 305 active+clean; 509 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 4.5 MiB/s wr, 189 op/s
Oct 02 12:05:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3930794615' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:05:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2636885787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 523 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 5.3 MiB/s wr, 209 op/s
Oct 02 12:05:35 compute-0 nova_compute[257802]: 2025-10-02 12:05:35.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:05:35 compute-0 nova_compute[257802]: 2025-10-02 12:05:35.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:05:35 compute-0 nova_compute[257802]: 2025-10-02 12:05:35.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:35.230475) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406735230504, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 287, "num_deletes": 251, "total_data_size": 66823, "memory_usage": 73608, "flush_reason": "Manual Compaction"}
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Oct 02 12:05:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Oct 02 12:05:35 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406735376518, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 66542, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26361, "largest_seqno": 26647, "table_properties": {"data_size": 64637, "index_size": 133, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5160, "raw_average_key_size": 18, "raw_value_size": 60742, "raw_average_value_size": 219, "num_data_blocks": 6, "num_entries": 277, "num_filter_entries": 277, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406733, "oldest_key_time": 1759406733, "file_creation_time": 1759406735, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 146109 microseconds, and 938 cpu microseconds.
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:35.376575) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 66542 bytes OK
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:35.376600) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:35.394228) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:35.394255) EVENT_LOG_v1 {"time_micros": 1759406735394245, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:35.394283) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 64659, prev total WAL file size 64981, number of live WAL files 2.
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:35.394986) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(64KB)], [59(11MB)]
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406735395050, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 12594971, "oldest_snapshot_seqno": -1}
Oct 02 12:05:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:35.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:05:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:35.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5128 keys, 10659320 bytes, temperature: kUnknown
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406735580670, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 10659320, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10622203, "index_size": 23183, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12869, "raw_key_size": 130508, "raw_average_key_size": 25, "raw_value_size": 10526900, "raw_average_value_size": 2052, "num_data_blocks": 951, "num_entries": 5128, "num_filter_entries": 5128, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759406735, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:35.580999) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 10659320 bytes
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:35.621606) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 67.8 rd, 57.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 11.9 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(349.5) write-amplify(160.2) OK, records in: 5640, records dropped: 512 output_compression: NoCompression
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:35.621651) EVENT_LOG_v1 {"time_micros": 1759406735621634, "job": 32, "event": "compaction_finished", "compaction_time_micros": 185716, "compaction_time_cpu_micros": 46656, "output_level": 6, "num_output_files": 1, "total_output_size": 10659320, "num_input_records": 5640, "num_output_records": 5128, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406735622089, "job": 32, "event": "table_file_deletion", "file_number": 61}
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406735624276, "job": 32, "event": "table_file_deletion", "file_number": 59}
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:35.394690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:35.624505) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:35.624510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:35.624512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:35.624515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:05:35.624518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:05:35 compute-0 nova_compute[257802]: 2025-10-02 12:05:35.636 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-66fec68d-e11d-4f9f-9cea-a0358d8f2ae0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:05:35 compute-0 nova_compute[257802]: 2025-10-02 12:05:35.636 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-66fec68d-e11d-4f9f-9cea-a0358d8f2ae0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:05:35 compute-0 nova_compute[257802]: 2025-10-02 12:05:35.636 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:05:35 compute-0 nova_compute[257802]: 2025-10-02 12:05:35.637 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:05:35 compute-0 nova_compute[257802]: 2025-10-02 12:05:35.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:36 compute-0 ceph-mon[73607]: pgmap v1162: 305 pgs: 305 active+clean; 523 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 5.3 MiB/s wr, 209 op/s
Oct 02 12:05:36 compute-0 ceph-mon[73607]: osdmap e154: 3 total, 3 up, 3 in
Oct 02 12:05:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 530 MiB data, 625 MiB used, 20 GiB / 21 GiB avail; 8.7 MiB/s rd, 6.8 MiB/s wr, 270 op/s
Oct 02 12:05:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:05:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:37.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:37.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:38 compute-0 ceph-mon[73607]: pgmap v1164: 305 pgs: 305 active+clean; 530 MiB data, 625 MiB used, 20 GiB / 21 GiB avail; 8.7 MiB/s rd, 6.8 MiB/s wr, 270 op/s
Oct 02 12:05:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 540 MiB data, 658 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 6.1 MiB/s wr, 268 op/s
Oct 02 12:05:39 compute-0 nova_compute[257802]: 2025-10-02 12:05:39.082 2 INFO nova.compute.manager [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Rebuilding instance
Oct 02 12:05:39 compute-0 nova_compute[257802]: 2025-10-02 12:05:39.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:39.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:39.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:39 compute-0 nova_compute[257802]: 2025-10-02 12:05:39.446 2 DEBUG nova.objects.instance [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:05:39 compute-0 nova_compute[257802]: 2025-10-02 12:05:39.519 2 DEBUG nova.compute.manager [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:05:39 compute-0 nova_compute[257802]: 2025-10-02 12:05:39.599 2 DEBUG nova.objects.instance [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lazy-loading 'pci_requests' on Instance uuid 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:05:39 compute-0 nova_compute[257802]: 2025-10-02 12:05:39.616 2 DEBUG nova.objects.instance [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lazy-loading 'pci_devices' on Instance uuid 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:05:39 compute-0 nova_compute[257802]: 2025-10-02 12:05:39.647 2 DEBUG nova.objects.instance [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lazy-loading 'resources' on Instance uuid 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:05:39 compute-0 nova_compute[257802]: 2025-10-02 12:05:39.692 2 DEBUG nova.objects.instance [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lazy-loading 'migration_context' on Instance uuid 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:05:39 compute-0 nova_compute[257802]: 2025-10-02 12:05:39.714 2 DEBUG nova.objects.instance [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:05:39 compute-0 nova_compute[257802]: 2025-10-02 12:05:39.717 2 DEBUG nova.virt.libvirt.driver [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:05:39 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1365960670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:39 compute-0 podman[280086]: 2025-10-02 12:05:39.953907133 +0000 UTC m=+0.088303544 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:05:40 compute-0 nova_compute[257802]: 2025-10-02 12:05:40.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:40 compute-0 ceph-mon[73607]: pgmap v1165: 305 pgs: 305 active+clean; 540 MiB data, 658 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 6.1 MiB/s wr, 268 op/s
Oct 02 12:05:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3195151013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 534 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.6 MiB/s wr, 275 op/s
Oct 02 12:05:41 compute-0 nova_compute[257802]: 2025-10-02 12:05:41.288 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Updating instance_info_cache with network_info: [{"id": "869b835b-1179-4864-abfe-fc542b215555", "address": "fa:16:3e:1d:d7:20", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap869b835b-11", "ovs_interfaceid": "869b835b-1179-4864-abfe-fc542b215555", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:05:41 compute-0 nova_compute[257802]: 2025-10-02 12:05:41.323 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-66fec68d-e11d-4f9f-9cea-a0358d8f2ae0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:05:41 compute-0 nova_compute[257802]: 2025-10-02 12:05:41.324 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:05:41 compute-0 nova_compute[257802]: 2025-10-02 12:05:41.325 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:05:41 compute-0 nova_compute[257802]: 2025-10-02 12:05:41.325 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:05:41 compute-0 nova_compute[257802]: 2025-10-02 12:05:41.358 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:05:41 compute-0 nova_compute[257802]: 2025-10-02 12:05:41.359 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:05:41 compute-0 nova_compute[257802]: 2025-10-02 12:05:41.359 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:05:41 compute-0 nova_compute[257802]: 2025-10-02 12:05:41.360 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:05:41 compute-0 nova_compute[257802]: 2025-10-02 12:05:41.360 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:05:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:41.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:41.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:05:41 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/662989926' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:41 compute-0 nova_compute[257802]: 2025-10-02 12:05:41.817 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:05:41 compute-0 nova_compute[257802]: 2025-10-02 12:05:41.918 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:05:41 compute-0 nova_compute[257802]: 2025-10-02 12:05:41.919 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:05:41 compute-0 nova_compute[257802]: 2025-10-02 12:05:41.923 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:05:41 compute-0 nova_compute[257802]: 2025-10-02 12:05:41.923 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:05:42 compute-0 nova_compute[257802]: 2025-10-02 12:05:42.092 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:05:42 compute-0 nova_compute[257802]: 2025-10-02 12:05:42.093 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4387MB free_disk=20.749637603759766GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:05:42 compute-0 nova_compute[257802]: 2025-10-02 12:05:42.093 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:05:42 compute-0 nova_compute[257802]: 2025-10-02 12:05:42.094 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:05:42 compute-0 nova_compute[257802]: 2025-10-02 12:05:42.173 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:05:42 compute-0 nova_compute[257802]: 2025-10-02 12:05:42.174 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance c2b299fa-a4b6-461f-84d6-790aa118102d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:05:42 compute-0 nova_compute[257802]: 2025-10-02 12:05:42.174 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:05:42 compute-0 nova_compute[257802]: 2025-10-02 12:05:42.174 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:05:42 compute-0 nova_compute[257802]: 2025-10-02 12:05:42.240 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:05:42 compute-0 ceph-mon[73607]: pgmap v1166: 305 pgs: 305 active+clean; 534 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.6 MiB/s wr, 275 op/s
Oct 02 12:05:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/662989926' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:05:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Oct 02 12:05:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:05:42
Oct 02 12:05:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:05:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:05:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', '.rgw.root', 'volumes', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'vms', 'backups', 'cephfs.cephfs.meta']
Oct 02 12:05:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:05:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Oct 02 12:05:42 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Oct 02 12:05:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:05:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:05:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:05:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:05:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:05:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:05:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:05:42 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/313004967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:42 compute-0 nova_compute[257802]: 2025-10-02 12:05:42.701 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:05:42 compute-0 nova_compute[257802]: 2025-10-02 12:05:42.707 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:05:42 compute-0 nova_compute[257802]: 2025-10-02 12:05:42.734 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:05:42 compute-0 nova_compute[257802]: 2025-10-02 12:05:42.746 2 INFO nova.virt.libvirt.driver [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Instance shutdown successfully after 3 seconds.
Oct 02 12:05:42 compute-0 nova_compute[257802]: 2025-10-02 12:05:42.764 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:05:42 compute-0 nova_compute[257802]: 2025-10-02 12:05:42.764 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:05:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:05:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:05:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:05:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:05:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:05:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:05:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:05:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:05:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:05:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:05:42 compute-0 kernel: tap869b835b-11 (unregistering): left promiscuous mode
Oct 02 12:05:42 compute-0 NetworkManager[44987]: <info>  [1759406742.9609] device (tap869b835b-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:05:43 compute-0 ovn_controller[148183]: 2025-10-02T12:05:43Z|00134|binding|INFO|Releasing lport 869b835b-1179-4864-abfe-fc542b215555 from this chassis (sb_readonly=0)
Oct 02 12:05:43 compute-0 ovn_controller[148183]: 2025-10-02T12:05:43Z|00135|binding|INFO|Setting lport 869b835b-1179-4864-abfe-fc542b215555 down in Southbound
Oct 02 12:05:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 499 MiB data, 646 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 3.9 MiB/s wr, 305 op/s
Oct 02 12:05:43 compute-0 ovn_controller[148183]: 2025-10-02T12:05:43Z|00136|binding|INFO|Removing iface tap869b835b-11 ovn-installed in OVS
Oct 02 12:05:43 compute-0 nova_compute[257802]: 2025-10-02 12:05:43.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:43.020 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:d7:20 10.100.0.9'], port_security=['fa:16:3e:1d:d7:20 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '66fec68d-e11d-4f9f-9cea-a0358d8f2ae0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80106802-d877-42c6-b2a9-50b050f6b08f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9afa78cc4dec419babdf61fd31f46e28', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8fbb5420-10f4-405b-bd01-713020f7e518', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=03aa6f10-2374-4fa3-bc90-1fcb8815afb8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=869b835b-1179-4864-abfe-fc542b215555) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:05:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:43.021 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 869b835b-1179-4864-abfe-fc542b215555 in datapath 80106802-d877-42c6-b2a9-50b050f6b08f unbound from our chassis
Oct 02 12:05:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:43.023 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 80106802-d877-42c6-b2a9-50b050f6b08f
Oct 02 12:05:43 compute-0 nova_compute[257802]: 2025-10-02 12:05:43.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:43.044 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d543ca52-f187-428c-9705-59e353dea5ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:05:43 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d00000017.scope: Deactivated successfully.
Oct 02 12:05:43 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d00000017.scope: Consumed 16.995s CPU time.
Oct 02 12:05:43 compute-0 systemd-machined[211836]: Machine qemu-14-instance-00000017 terminated.
Oct 02 12:05:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:43.072 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[30e398ad-9f3a-4c55-9c67-5d4688bc8738]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:05:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:43.075 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[8341aace-0b85-4593-abc1-94a2e2a95bdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:05:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:43.100 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[635df9f6-8711-4f91-8c4f-626872d94639]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:05:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:43.118 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5279fbff-a272-411c-95bd-476f48d9abb8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80106802-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:27:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474182, 'reachable_time': 44687, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280171, 'error': None, 'target': 'ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:05:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:43.131 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[514a5b5c-994f-4c23-89bc-d8140e2a204a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap80106802-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474195, 'tstamp': 474195}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280172, 'error': None, 'target': 'ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap80106802-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474198, 'tstamp': 474198}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280172, 'error': None, 'target': 'ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:05:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:43.133 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80106802-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:05:43 compute-0 nova_compute[257802]: 2025-10-02 12:05:43.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:43 compute-0 nova_compute[257802]: 2025-10-02 12:05:43.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:43.140 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap80106802-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:05:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:43.140 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:05:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:43.140 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap80106802-d0, col_values=(('external_ids', {'iface-id': '3e3f512e-f85f-4c9c-b91d-072c570470c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:05:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:43.141 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:05:43 compute-0 nova_compute[257802]: 2025-10-02 12:05:43.180 2 INFO nova.virt.libvirt.driver [-] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Instance destroyed successfully.
Oct 02 12:05:43 compute-0 nova_compute[257802]: 2025-10-02 12:05:43.185 2 INFO nova.virt.libvirt.driver [-] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Instance destroyed successfully.
Oct 02 12:05:43 compute-0 nova_compute[257802]: 2025-10-02 12:05:43.186 2 DEBUG nova.virt.libvirt.vif [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:04:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-167670540',display_name='tempest-ServersAdminTestJSON-server-167670540',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-167670540',id=23,image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:04:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9afa78cc4dec419babdf61fd31f46e28',ramdisk_id='',reservation_id='r-fpp9adrm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-518249049',owner_user_name='tempest-ServersAdminTestJSON-518249049-project-member'},tags=<?>,task_state='r
ebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:05:38Z,user_data=None,user_id='8850add40b254d198f270d9e64c777d5',uuid=66fec68d-e11d-4f9f-9cea-a0358d8f2ae0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "869b835b-1179-4864-abfe-fc542b215555", "address": "fa:16:3e:1d:d7:20", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap869b835b-11", "ovs_interfaceid": "869b835b-1179-4864-abfe-fc542b215555", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:05:43 compute-0 nova_compute[257802]: 2025-10-02 12:05:43.186 2 DEBUG nova.network.os_vif_util [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Converting VIF {"id": "869b835b-1179-4864-abfe-fc542b215555", "address": "fa:16:3e:1d:d7:20", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap869b835b-11", "ovs_interfaceid": "869b835b-1179-4864-abfe-fc542b215555", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:05:43 compute-0 nova_compute[257802]: 2025-10-02 12:05:43.187 2 DEBUG nova.network.os_vif_util [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:d7:20,bridge_name='br-int',has_traffic_filtering=True,id=869b835b-1179-4864-abfe-fc542b215555,network=Network(80106802-d877-42c6-b2a9-50b050f6b08f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap869b835b-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:05:43 compute-0 nova_compute[257802]: 2025-10-02 12:05:43.187 2 DEBUG os_vif [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:d7:20,bridge_name='br-int',has_traffic_filtering=True,id=869b835b-1179-4864-abfe-fc542b215555,network=Network(80106802-d877-42c6-b2a9-50b050f6b08f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap869b835b-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:05:43 compute-0 nova_compute[257802]: 2025-10-02 12:05:43.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:43 compute-0 nova_compute[257802]: 2025-10-02 12:05:43.190 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap869b835b-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:05:43 compute-0 nova_compute[257802]: 2025-10-02 12:05:43.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:43 compute-0 nova_compute[257802]: 2025-10-02 12:05:43.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:43 compute-0 nova_compute[257802]: 2025-10-02 12:05:43.196 2 INFO os_vif [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:d7:20,bridge_name='br-int',has_traffic_filtering=True,id=869b835b-1179-4864-abfe-fc542b215555,network=Network(80106802-d877-42c6-b2a9-50b050f6b08f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap869b835b-11')
Oct 02 12:05:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:43.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:43.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:43 compute-0 nova_compute[257802]: 2025-10-02 12:05:43.504 2 DEBUG nova.compute.manager [req-3c264c6e-b632-4012-8d60-03c3711830fb req-734a21b3-5fab-4596-9815-07311ab44fae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Received event network-vif-unplugged-869b835b-1179-4864-abfe-fc542b215555 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:05:43 compute-0 nova_compute[257802]: 2025-10-02 12:05:43.505 2 DEBUG oslo_concurrency.lockutils [req-3c264c6e-b632-4012-8d60-03c3711830fb req-734a21b3-5fab-4596-9815-07311ab44fae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:05:43 compute-0 nova_compute[257802]: 2025-10-02 12:05:43.505 2 DEBUG oslo_concurrency.lockutils [req-3c264c6e-b632-4012-8d60-03c3711830fb req-734a21b3-5fab-4596-9815-07311ab44fae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:05:43 compute-0 nova_compute[257802]: 2025-10-02 12:05:43.506 2 DEBUG oslo_concurrency.lockutils [req-3c264c6e-b632-4012-8d60-03c3711830fb req-734a21b3-5fab-4596-9815-07311ab44fae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:05:43 compute-0 nova_compute[257802]: 2025-10-02 12:05:43.506 2 DEBUG nova.compute.manager [req-3c264c6e-b632-4012-8d60-03c3711830fb req-734a21b3-5fab-4596-9815-07311ab44fae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] No waiting events found dispatching network-vif-unplugged-869b835b-1179-4864-abfe-fc542b215555 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:05:43 compute-0 nova_compute[257802]: 2025-10-02 12:05:43.506 2 WARNING nova.compute.manager [req-3c264c6e-b632-4012-8d60-03c3711830fb req-734a21b3-5fab-4596-9815-07311ab44fae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Received unexpected event network-vif-unplugged-869b835b-1179-4864-abfe-fc542b215555 for instance with vm_state error and task_state rebuilding.
Oct 02 12:05:43 compute-0 ceph-mon[73607]: osdmap e155: 3 total, 3 up, 3 in
Oct 02 12:05:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/313004967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:43 compute-0 nova_compute[257802]: 2025-10-02 12:05:43.760 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:05:43 compute-0 nova_compute[257802]: 2025-10-02 12:05:43.760 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:05:44 compute-0 nova_compute[257802]: 2025-10-02 12:05:44.501 2 INFO nova.virt.libvirt.driver [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Deleting instance files /var/lib/nova/instances/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_del
Oct 02 12:05:44 compute-0 nova_compute[257802]: 2025-10-02 12:05:44.502 2 INFO nova.virt.libvirt.driver [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Deletion of /var/lib/nova/instances/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_del complete
Oct 02 12:05:44 compute-0 nova_compute[257802]: 2025-10-02 12:05:44.760 2 DEBUG nova.virt.libvirt.driver [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:05:44 compute-0 nova_compute[257802]: 2025-10-02 12:05:44.761 2 INFO nova.virt.libvirt.driver [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Creating image(s)
Oct 02 12:05:44 compute-0 ceph-mon[73607]: pgmap v1168: 305 pgs: 305 active+clean; 499 MiB data, 646 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 3.9 MiB/s wr, 305 op/s
Oct 02 12:05:44 compute-0 nova_compute[257802]: 2025-10-02 12:05:44.798 2 DEBUG nova.storage.rbd_utils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] rbd image 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:05:44 compute-0 nova_compute[257802]: 2025-10-02 12:05:44.833 2 DEBUG nova.storage.rbd_utils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] rbd image 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:05:44 compute-0 nova_compute[257802]: 2025-10-02 12:05:44.867 2 DEBUG nova.storage.rbd_utils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] rbd image 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:05:44 compute-0 nova_compute[257802]: 2025-10-02 12:05:44.872 2 DEBUG oslo_concurrency.lockutils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Acquiring lock "5133c8c7459ce4fa1cf043a638fc1b5c66ed8609" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:05:44 compute-0 nova_compute[257802]: 2025-10-02 12:05:44.873 2 DEBUG oslo_concurrency.lockutils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "5133c8c7459ce4fa1cf043a638fc1b5c66ed8609" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:05:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 484 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.7 MiB/s wr, 205 op/s
Oct 02 12:05:45 compute-0 nova_compute[257802]: 2025-10-02 12:05:45.199 2 DEBUG nova.virt.libvirt.imagebackend [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Image locations are: [{'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/db05f54c-61f8-42d6-a1e2-da3219a77b12/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/db05f54c-61f8-42d6-a1e2-da3219a77b12/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 02 12:05:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:45.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:05:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:45.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:05:45 compute-0 nova_compute[257802]: 2025-10-02 12:05:45.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:45 compute-0 nova_compute[257802]: 2025-10-02 12:05:45.749 2 DEBUG nova.compute.manager [req-3161222a-7f0d-48b4-8b80-1dee9ba5868c req-be288c83-db04-4248-b6a4-d4af921ca83d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Received event network-vif-plugged-869b835b-1179-4864-abfe-fc542b215555 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:05:45 compute-0 nova_compute[257802]: 2025-10-02 12:05:45.749 2 DEBUG oslo_concurrency.lockutils [req-3161222a-7f0d-48b4-8b80-1dee9ba5868c req-be288c83-db04-4248-b6a4-d4af921ca83d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:05:45 compute-0 nova_compute[257802]: 2025-10-02 12:05:45.749 2 DEBUG oslo_concurrency.lockutils [req-3161222a-7f0d-48b4-8b80-1dee9ba5868c req-be288c83-db04-4248-b6a4-d4af921ca83d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:05:45 compute-0 nova_compute[257802]: 2025-10-02 12:05:45.749 2 DEBUG oslo_concurrency.lockutils [req-3161222a-7f0d-48b4-8b80-1dee9ba5868c req-be288c83-db04-4248-b6a4-d4af921ca83d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:05:45 compute-0 nova_compute[257802]: 2025-10-02 12:05:45.750 2 DEBUG nova.compute.manager [req-3161222a-7f0d-48b4-8b80-1dee9ba5868c req-be288c83-db04-4248-b6a4-d4af921ca83d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] No waiting events found dispatching network-vif-plugged-869b835b-1179-4864-abfe-fc542b215555 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:05:45 compute-0 nova_compute[257802]: 2025-10-02 12:05:45.750 2 WARNING nova.compute.manager [req-3161222a-7f0d-48b4-8b80-1dee9ba5868c req-be288c83-db04-4248-b6a4-d4af921ca83d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Received unexpected event network-vif-plugged-869b835b-1179-4864-abfe-fc542b215555 for instance with vm_state error and task_state rebuild_spawning.
Oct 02 12:05:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3478073098' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:46 compute-0 ceph-mon[73607]: pgmap v1169: 305 pgs: 305 active+clean; 484 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.7 MiB/s wr, 205 op/s
Oct 02 12:05:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 464 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.6 MiB/s wr, 230 op/s
Oct 02 12:05:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:05:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:05:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:47.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:05:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:47.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:47 compute-0 nova_compute[257802]: 2025-10-02 12:05:47.441 2 DEBUG oslo_concurrency.processutils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:05:47 compute-0 nova_compute[257802]: 2025-10-02 12:05:47.500 2 DEBUG oslo_concurrency.processutils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609.part --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:05:47 compute-0 nova_compute[257802]: 2025-10-02 12:05:47.501 2 DEBUG nova.virt.images [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] db05f54c-61f8-42d6-a1e2-da3219a77b12 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct 02 12:05:47 compute-0 nova_compute[257802]: 2025-10-02 12:05:47.503 2 DEBUG nova.privsep.utils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 02 12:05:47 compute-0 nova_compute[257802]: 2025-10-02 12:05:47.503 2 DEBUG oslo_concurrency.processutils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609.part /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:05:47 compute-0 nova_compute[257802]: 2025-10-02 12:05:47.812 2 DEBUG oslo_concurrency.processutils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609.part /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609.converted" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:05:47 compute-0 nova_compute[257802]: 2025-10-02 12:05:47.817 2 DEBUG oslo_concurrency.processutils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:05:47 compute-0 nova_compute[257802]: 2025-10-02 12:05:47.884 2 DEBUG oslo_concurrency.processutils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609.converted --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:05:47 compute-0 nova_compute[257802]: 2025-10-02 12:05:47.886 2 DEBUG oslo_concurrency.lockutils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "5133c8c7459ce4fa1cf043a638fc1b5c66ed8609" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:05:47 compute-0 nova_compute[257802]: 2025-10-02 12:05:47.910 2 DEBUG nova.storage.rbd_utils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] rbd image 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:05:47 compute-0 nova_compute[257802]: 2025-10-02 12:05:47.912 2 DEBUG oslo_concurrency.processutils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:05:48 compute-0 nova_compute[257802]: 2025-10-02 12:05:48.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:48 compute-0 nova_compute[257802]: 2025-10-02 12:05:48.221 2 DEBUG oslo_concurrency.processutils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:05:48 compute-0 nova_compute[257802]: 2025-10-02 12:05:48.296 2 DEBUG nova.storage.rbd_utils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] resizing rbd image 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:05:48 compute-0 nova_compute[257802]: 2025-10-02 12:05:48.429 2 DEBUG nova.virt.libvirt.driver [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:05:48 compute-0 nova_compute[257802]: 2025-10-02 12:05:48.430 2 DEBUG nova.virt.libvirt.driver [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Ensure instance console log exists: /var/lib/nova/instances/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:05:48 compute-0 nova_compute[257802]: 2025-10-02 12:05:48.430 2 DEBUG oslo_concurrency.lockutils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:05:48 compute-0 nova_compute[257802]: 2025-10-02 12:05:48.430 2 DEBUG oslo_concurrency.lockutils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:05:48 compute-0 nova_compute[257802]: 2025-10-02 12:05:48.431 2 DEBUG oslo_concurrency.lockutils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:05:48 compute-0 nova_compute[257802]: 2025-10-02 12:05:48.433 2 DEBUG nova.virt.libvirt.driver [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Start _get_guest_xml network_info=[{"id": "869b835b-1179-4864-abfe-fc542b215555", "address": "fa:16:3e:1d:d7:20", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap869b835b-11", "ovs_interfaceid": "869b835b-1179-4864-abfe-fc542b215555", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:47Z,direct_url=<?>,disk_format='qcow2',id=db05f54c-61f8-42d6-a1e2-da3219a77b12,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:49Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:05:48 compute-0 nova_compute[257802]: 2025-10-02 12:05:48.437 2 WARNING nova.virt.libvirt.driver [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 02 12:05:48 compute-0 nova_compute[257802]: 2025-10-02 12:05:48.455 2 DEBUG nova.virt.libvirt.host [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:05:48 compute-0 nova_compute[257802]: 2025-10-02 12:05:48.456 2 DEBUG nova.virt.libvirt.host [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:05:48 compute-0 nova_compute[257802]: 2025-10-02 12:05:48.462 2 DEBUG nova.virt.libvirt.host [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:05:48 compute-0 nova_compute[257802]: 2025-10-02 12:05:48.463 2 DEBUG nova.virt.libvirt.host [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:05:48 compute-0 nova_compute[257802]: 2025-10-02 12:05:48.464 2 DEBUG nova.virt.libvirt.driver [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:05:48 compute-0 nova_compute[257802]: 2025-10-02 12:05:48.465 2 DEBUG nova.virt.hardware [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:47Z,direct_url=<?>,disk_format='qcow2',id=db05f54c-61f8-42d6-a1e2-da3219a77b12,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:49Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:05:48 compute-0 nova_compute[257802]: 2025-10-02 12:05:48.465 2 DEBUG nova.virt.hardware [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:05:48 compute-0 nova_compute[257802]: 2025-10-02 12:05:48.466 2 DEBUG nova.virt.hardware [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:05:48 compute-0 nova_compute[257802]: 2025-10-02 12:05:48.466 2 DEBUG nova.virt.hardware [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:05:48 compute-0 nova_compute[257802]: 2025-10-02 12:05:48.466 2 DEBUG nova.virt.hardware [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:05:48 compute-0 nova_compute[257802]: 2025-10-02 12:05:48.466 2 DEBUG nova.virt.hardware [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:05:48 compute-0 nova_compute[257802]: 2025-10-02 12:05:48.467 2 DEBUG nova.virt.hardware [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:05:48 compute-0 nova_compute[257802]: 2025-10-02 12:05:48.467 2 DEBUG nova.virt.hardware [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:05:48 compute-0 nova_compute[257802]: 2025-10-02 12:05:48.467 2 DEBUG nova.virt.hardware [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:05:48 compute-0 nova_compute[257802]: 2025-10-02 12:05:48.467 2 DEBUG nova.virt.hardware [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:05:48 compute-0 nova_compute[257802]: 2025-10-02 12:05:48.468 2 DEBUG nova.virt.hardware [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:05:48 compute-0 nova_compute[257802]: 2025-10-02 12:05:48.468 2 DEBUG nova.objects.instance [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:05:49 compute-0 ceph-mon[73607]: pgmap v1170: 305 pgs: 305 active+clean; 464 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.6 MiB/s wr, 230 op/s
Oct 02 12:05:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 373 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.4 MiB/s wr, 252 op/s
Oct 02 12:05:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:49.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:49.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:49 compute-0 nova_compute[257802]: 2025-10-02 12:05:49.807 2 DEBUG oslo_concurrency.processutils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:05:50 compute-0 ceph-mon[73607]: pgmap v1171: 305 pgs: 305 active+clean; 373 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.4 MiB/s wr, 252 op/s
Oct 02 12:05:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:05:50 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1365190767' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.272 2 DEBUG oslo_concurrency.processutils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.298 2 DEBUG nova.storage.rbd_utils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] rbd image 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.303 2 DEBUG oslo_concurrency.processutils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:05:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:05:50 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/554190170' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.730 2 DEBUG oslo_concurrency.processutils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.731 2 DEBUG nova.virt.libvirt.vif [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:04:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-167670540',display_name='tempest-ServersAdminTestJSON-server-167670540',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-167670540',id=23,image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:04:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9afa78cc4dec419babdf61fd31f46e28',ramdisk_id='',reservation_id='r-fpp9adrm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-518249049',owner_user_name='tempest-ServersAdminTestJSON-518249049-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:05:44Z,user_data=None,user_id='8850add40b254d198f270d9e64c777d5',uuid=66fec68d-e11d-4f9f-9cea-a0358d8f2ae0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "869b835b-1179-4864-abfe-fc542b215555", "address": "fa:16:3e:1d:d7:20", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap869b835b-11", "ovs_interfaceid": "869b835b-1179-4864-abfe-fc542b215555", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.732 2 DEBUG nova.network.os_vif_util [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Converting VIF {"id": "869b835b-1179-4864-abfe-fc542b215555", "address": "fa:16:3e:1d:d7:20", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap869b835b-11", "ovs_interfaceid": "869b835b-1179-4864-abfe-fc542b215555", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.733 2 DEBUG nova.network.os_vif_util [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:d7:20,bridge_name='br-int',has_traffic_filtering=True,id=869b835b-1179-4864-abfe-fc542b215555,network=Network(80106802-d877-42c6-b2a9-50b050f6b08f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap869b835b-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.735 2 DEBUG nova.virt.libvirt.driver [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:05:50 compute-0 nova_compute[257802]:   <uuid>66fec68d-e11d-4f9f-9cea-a0358d8f2ae0</uuid>
Oct 02 12:05:50 compute-0 nova_compute[257802]:   <name>instance-00000017</name>
Oct 02 12:05:50 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:05:50 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:05:50 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:05:50 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:       <nova:name>tempest-ServersAdminTestJSON-server-167670540</nova:name>
Oct 02 12:05:50 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:05:48</nova:creationTime>
Oct 02 12:05:50 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:05:50 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:05:50 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:05:50 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:05:50 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:05:50 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:05:50 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:05:50 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:05:50 compute-0 nova_compute[257802]:         <nova:user uuid="8850add40b254d198f270d9e64c777d5">tempest-ServersAdminTestJSON-518249049-project-member</nova:user>
Oct 02 12:05:50 compute-0 nova_compute[257802]:         <nova:project uuid="9afa78cc4dec419babdf61fd31f46e28">tempest-ServersAdminTestJSON-518249049</nova:project>
Oct 02 12:05:50 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:05:50 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="db05f54c-61f8-42d6-a1e2-da3219a77b12"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:05:50 compute-0 nova_compute[257802]:         <nova:port uuid="869b835b-1179-4864-abfe-fc542b215555">
Oct 02 12:05:50 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:05:50 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:05:50 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:05:50 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <system>
Oct 02 12:05:50 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:05:50 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:05:50 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:05:50 compute-0 nova_compute[257802]:       <entry name="serial">66fec68d-e11d-4f9f-9cea-a0358d8f2ae0</entry>
Oct 02 12:05:50 compute-0 nova_compute[257802]:       <entry name="uuid">66fec68d-e11d-4f9f-9cea-a0358d8f2ae0</entry>
Oct 02 12:05:50 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     </system>
Oct 02 12:05:50 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:05:50 compute-0 nova_compute[257802]:   <os>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:   </os>
Oct 02 12:05:50 compute-0 nova_compute[257802]:   <features>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:   </features>
Oct 02 12:05:50 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:05:50 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:05:50 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:05:50 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk">
Oct 02 12:05:50 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:       </source>
Oct 02 12:05:50 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:05:50 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:05:50 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:05:50 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk.config">
Oct 02 12:05:50 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:       </source>
Oct 02 12:05:50 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:05:50 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:05:50 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:05:50 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:1d:d7:20"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:       <target dev="tap869b835b-11"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:05:50 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0/console.log" append="off"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <video>
Oct 02 12:05:50 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     </video>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:05:50 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:05:50 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:05:50 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:05:50 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:05:50 compute-0 nova_compute[257802]: </domain>
Oct 02 12:05:50 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.736 2 DEBUG nova.compute.manager [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Preparing to wait for external event network-vif-plugged-869b835b-1179-4864-abfe-fc542b215555 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.736 2 DEBUG oslo_concurrency.lockutils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Acquiring lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.737 2 DEBUG oslo_concurrency.lockutils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.737 2 DEBUG oslo_concurrency.lockutils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.737 2 DEBUG nova.virt.libvirt.vif [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:04:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-167670540',display_name='tempest-ServersAdminTestJSON-server-167670540',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-167670540',id=23,image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:04:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9afa78cc4dec419babdf61fd31f46e28',ramdisk_id='',reservation_id='r-fpp9adrm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-518249049',owner_user_name='tempest-ServersAdminTestJSON-518249049-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:05:44Z,user_data=None,user_id='8850add40b254d198f270d9e64c777d5',uuid=66fec68d-e11d-4f9f-9cea-a0358d8f2ae0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "869b835b-1179-4864-abfe-fc542b215555", "address": "fa:16:3e:1d:d7:20", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap869b835b-11", "ovs_interfaceid": "869b835b-1179-4864-abfe-fc542b215555", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.738 2 DEBUG nova.network.os_vif_util [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Converting VIF {"id": "869b835b-1179-4864-abfe-fc542b215555", "address": "fa:16:3e:1d:d7:20", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap869b835b-11", "ovs_interfaceid": "869b835b-1179-4864-abfe-fc542b215555", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.738 2 DEBUG nova.network.os_vif_util [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:d7:20,bridge_name='br-int',has_traffic_filtering=True,id=869b835b-1179-4864-abfe-fc542b215555,network=Network(80106802-d877-42c6-b2a9-50b050f6b08f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap869b835b-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.739 2 DEBUG os_vif [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:d7:20,bridge_name='br-int',has_traffic_filtering=True,id=869b835b-1179-4864-abfe-fc542b215555,network=Network(80106802-d877-42c6-b2a9-50b050f6b08f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap869b835b-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.740 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.740 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.745 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap869b835b-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.745 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap869b835b-11, col_values=(('external_ids', {'iface-id': '869b835b-1179-4864-abfe-fc542b215555', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1d:d7:20', 'vm-uuid': '66fec68d-e11d-4f9f-9cea-a0358d8f2ae0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:50 compute-0 NetworkManager[44987]: <info>  [1759406750.7473] manager: (tap869b835b-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.752 2 INFO os_vif [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:d7:20,bridge_name='br-int',has_traffic_filtering=True,id=869b835b-1179-4864-abfe-fc542b215555,network=Network(80106802-d877-42c6-b2a9-50b050f6b08f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap869b835b-11')
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.840 2 DEBUG nova.virt.libvirt.driver [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.840 2 DEBUG nova.virt.libvirt.driver [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.840 2 DEBUG nova.virt.libvirt.driver [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] No VIF found with MAC fa:16:3e:1d:d7:20, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.841 2 INFO nova.virt.libvirt.driver [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Using config drive
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.861 2 DEBUG nova.storage.rbd_utils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] rbd image 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.889 2 DEBUG nova.objects.instance [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:05:50 compute-0 nova_compute[257802]: 2025-10-02 12:05:50.960 2 DEBUG nova.objects.instance [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lazy-loading 'keypairs' on Instance uuid 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:05:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 380 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.1 MiB/s wr, 171 op/s
Oct 02 12:05:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Oct 02 12:05:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1365190767' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:05:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/554190170' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:05:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Oct 02 12:05:51 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Oct 02 12:05:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:05:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:51.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:05:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:51.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:51 compute-0 nova_compute[257802]: 2025-10-02 12:05:51.550 2 INFO nova.virt.libvirt.driver [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Creating config drive at /var/lib/nova/instances/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0/disk.config
Oct 02 12:05:51 compute-0 nova_compute[257802]: 2025-10-02 12:05:51.555 2 DEBUG oslo_concurrency.processutils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1rs3clcj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:05:51 compute-0 sudo[280470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:51 compute-0 sudo[280470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:51 compute-0 sudo[280470]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:51 compute-0 sudo[280498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:51 compute-0 sudo[280498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:51 compute-0 sudo[280498]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:51 compute-0 nova_compute[257802]: 2025-10-02 12:05:51.680 2 DEBUG oslo_concurrency.processutils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1rs3clcj" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:05:51 compute-0 nova_compute[257802]: 2025-10-02 12:05:51.705 2 DEBUG nova.storage.rbd_utils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] rbd image 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:05:51 compute-0 nova_compute[257802]: 2025-10-02 12:05:51.708 2 DEBUG oslo_concurrency.processutils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0/disk.config 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:05:51 compute-0 nova_compute[257802]: 2025-10-02 12:05:51.870 2 DEBUG oslo_concurrency.processutils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0/disk.config 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:05:51 compute-0 nova_compute[257802]: 2025-10-02 12:05:51.871 2 INFO nova.virt.libvirt.driver [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Deleting local config drive /var/lib/nova/instances/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0/disk.config because it was imported into RBD.
Oct 02 12:05:51 compute-0 kernel: tap869b835b-11: entered promiscuous mode
Oct 02 12:05:51 compute-0 NetworkManager[44987]: <info>  [1759406751.9195] manager: (tap869b835b-11): new Tun device (/org/freedesktop/NetworkManager/Devices/67)
Oct 02 12:05:51 compute-0 ovn_controller[148183]: 2025-10-02T12:05:51Z|00137|binding|INFO|Claiming lport 869b835b-1179-4864-abfe-fc542b215555 for this chassis.
Oct 02 12:05:51 compute-0 ovn_controller[148183]: 2025-10-02T12:05:51Z|00138|binding|INFO|869b835b-1179-4864-abfe-fc542b215555: Claiming fa:16:3e:1d:d7:20 10.100.0.9
Oct 02 12:05:51 compute-0 nova_compute[257802]: 2025-10-02 12:05:51.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:51 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:51.939 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:d7:20 10.100.0.9'], port_security=['fa:16:3e:1d:d7:20 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '66fec68d-e11d-4f9f-9cea-a0358d8f2ae0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80106802-d877-42c6-b2a9-50b050f6b08f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9afa78cc4dec419babdf61fd31f46e28', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8fbb5420-10f4-405b-bd01-713020f7e518', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=03aa6f10-2374-4fa3-bc90-1fcb8815afb8, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=869b835b-1179-4864-abfe-fc542b215555) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:05:51 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:51.940 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 869b835b-1179-4864-abfe-fc542b215555 in datapath 80106802-d877-42c6-b2a9-50b050f6b08f bound to our chassis
Oct 02 12:05:51 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:51.941 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 80106802-d877-42c6-b2a9-50b050f6b08f
Oct 02 12:05:51 compute-0 ovn_controller[148183]: 2025-10-02T12:05:51Z|00139|binding|INFO|Setting lport 869b835b-1179-4864-abfe-fc542b215555 ovn-installed in OVS
Oct 02 12:05:51 compute-0 ovn_controller[148183]: 2025-10-02T12:05:51Z|00140|binding|INFO|Setting lport 869b835b-1179-4864-abfe-fc542b215555 up in Southbound
Oct 02 12:05:51 compute-0 nova_compute[257802]: 2025-10-02 12:05:51.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:51 compute-0 systemd-udevd[280574]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:05:51 compute-0 nova_compute[257802]: 2025-10-02 12:05:51.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:51 compute-0 systemd-machined[211836]: New machine qemu-17-instance-00000017.
Oct 02 12:05:51 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:51.960 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[18522175-af06-4941-923b-f9e67b70536a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:05:51 compute-0 NetworkManager[44987]: <info>  [1759406751.9724] device (tap869b835b-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:05:51 compute-0 NetworkManager[44987]: <info>  [1759406751.9736] device (tap869b835b-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:05:51 compute-0 systemd[1]: Started Virtual Machine qemu-17-instance-00000017.
Oct 02 12:05:51 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:51.991 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[8451f578-ed34-4498-a032-019446b630d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:05:51 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:51.994 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[e6098e4a-83af-4ace-9a91-3279f5a3ca1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:05:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:52.025 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[b1525caa-01f9-42c1-84cb-05ba17016d55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:05:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:52.041 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3a15dfc1-eb39-4efc-92a5-409697251bd5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80106802-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:27:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474182, 'reachable_time': 44687, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280586, 'error': None, 'target': 'ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:05:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:52.057 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[526c8f33-3ce8-40da-bc54-d5dbc00983a7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap80106802-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474195, 'tstamp': 474195}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280587, 'error': None, 'target': 'ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap80106802-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474198, 'tstamp': 474198}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280587, 'error': None, 'target': 'ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:05:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:52.058 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80106802-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:05:52 compute-0 nova_compute[257802]: 2025-10-02 12:05:52.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:52 compute-0 nova_compute[257802]: 2025-10-02 12:05:52.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:52.061 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap80106802-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:05:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:52.061 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:05:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:52.062 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap80106802-d0, col_values=(('external_ids', {'iface-id': '3e3f512e-f85f-4c9c-b91d-072c570470c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:05:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:05:52.062 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:05:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:05:52 compute-0 ceph-mon[73607]: pgmap v1172: 305 pgs: 305 active+clean; 380 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.1 MiB/s wr, 171 op/s
Oct 02 12:05:52 compute-0 ceph-mon[73607]: osdmap e156: 3 total, 3 up, 3 in
Oct 02 12:05:52 compute-0 nova_compute[257802]: 2025-10-02 12:05:52.769 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Removed pending event for 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:05:52 compute-0 nova_compute[257802]: 2025-10-02 12:05:52.770 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406752.7689662, 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:05:52 compute-0 nova_compute[257802]: 2025-10-02 12:05:52.770 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] VM Started (Lifecycle Event)
Oct 02 12:05:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 440 MiB data, 592 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 4.3 MiB/s wr, 214 op/s
Oct 02 12:05:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Oct 02 12:05:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Oct 02 12:05:53 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Oct 02 12:05:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:53.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:53.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009669840812860599 of space, bias 1.0, pg target 2.9009522438581796 quantized to 32 (current 32)
Oct 02 12:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Oct 02 12:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002886793632514424 of space, bias 1.0, pg target 0.8602645024892983 quantized to 32 (current 32)
Oct 02 12:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027081297692164525 quantized to 32 (current 32)
Oct 02 12:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:05:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Oct 02 12:05:54 compute-0 ceph-mon[73607]: pgmap v1174: 305 pgs: 305 active+clean; 440 MiB data, 592 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 4.3 MiB/s wr, 214 op/s
Oct 02 12:05:54 compute-0 ceph-mon[73607]: osdmap e157: 3 total, 3 up, 3 in
Oct 02 12:05:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Oct 02 12:05:54 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Oct 02 12:05:54 compute-0 nova_compute[257802]: 2025-10-02 12:05:54.690 2 DEBUG nova.compute.manager [req-7eb1b2a2-7c67-44fc-836e-f6e72b73ff36 req-d794111e-7d10-40de-b4ec-f8bd049dadc5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Received event network-vif-plugged-869b835b-1179-4864-abfe-fc542b215555 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:05:54 compute-0 nova_compute[257802]: 2025-10-02 12:05:54.690 2 DEBUG oslo_concurrency.lockutils [req-7eb1b2a2-7c67-44fc-836e-f6e72b73ff36 req-d794111e-7d10-40de-b4ec-f8bd049dadc5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:05:54 compute-0 nova_compute[257802]: 2025-10-02 12:05:54.690 2 DEBUG oslo_concurrency.lockutils [req-7eb1b2a2-7c67-44fc-836e-f6e72b73ff36 req-d794111e-7d10-40de-b4ec-f8bd049dadc5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:05:54 compute-0 nova_compute[257802]: 2025-10-02 12:05:54.690 2 DEBUG oslo_concurrency.lockutils [req-7eb1b2a2-7c67-44fc-836e-f6e72b73ff36 req-d794111e-7d10-40de-b4ec-f8bd049dadc5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:05:54 compute-0 nova_compute[257802]: 2025-10-02 12:05:54.691 2 DEBUG nova.compute.manager [req-7eb1b2a2-7c67-44fc-836e-f6e72b73ff36 req-d794111e-7d10-40de-b4ec-f8bd049dadc5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Processing event network-vif-plugged-869b835b-1179-4864-abfe-fc542b215555 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:05:54 compute-0 nova_compute[257802]: 2025-10-02 12:05:54.691 2 DEBUG nova.compute.manager [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:05:54 compute-0 nova_compute[257802]: 2025-10-02 12:05:54.694 2 DEBUG nova.virt.libvirt.driver [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:05:54 compute-0 nova_compute[257802]: 2025-10-02 12:05:54.696 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:05:54 compute-0 nova_compute[257802]: 2025-10-02 12:05:54.699 2 INFO nova.virt.libvirt.driver [-] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Instance spawned successfully.
Oct 02 12:05:54 compute-0 nova_compute[257802]: 2025-10-02 12:05:54.700 2 DEBUG nova.virt.libvirt.driver [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:05:54 compute-0 nova_compute[257802]: 2025-10-02 12:05:54.702 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:05:54 compute-0 nova_compute[257802]: 2025-10-02 12:05:54.728 2 DEBUG nova.virt.libvirt.driver [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:05:54 compute-0 nova_compute[257802]: 2025-10-02 12:05:54.729 2 DEBUG nova.virt.libvirt.driver [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:05:54 compute-0 nova_compute[257802]: 2025-10-02 12:05:54.729 2 DEBUG nova.virt.libvirt.driver [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:05:54 compute-0 nova_compute[257802]: 2025-10-02 12:05:54.729 2 DEBUG nova.virt.libvirt.driver [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:05:54 compute-0 nova_compute[257802]: 2025-10-02 12:05:54.730 2 DEBUG nova.virt.libvirt.driver [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:05:54 compute-0 nova_compute[257802]: 2025-10-02 12:05:54.730 2 DEBUG nova.virt.libvirt.driver [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:05:54 compute-0 nova_compute[257802]: 2025-10-02 12:05:54.733 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 12:05:54 compute-0 nova_compute[257802]: 2025-10-02 12:05:54.733 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406752.7690685, 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:05:54 compute-0 nova_compute[257802]: 2025-10-02 12:05:54.733 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] VM Paused (Lifecycle Event)
Oct 02 12:05:54 compute-0 nova_compute[257802]: 2025-10-02 12:05:54.770 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:05:54 compute-0 nova_compute[257802]: 2025-10-02 12:05:54.775 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406754.69407, 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:05:54 compute-0 nova_compute[257802]: 2025-10-02 12:05:54.775 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] VM Resumed (Lifecycle Event)
Oct 02 12:05:54 compute-0 nova_compute[257802]: 2025-10-02 12:05:54.827 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:05:54 compute-0 nova_compute[257802]: 2025-10-02 12:05:54.830 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:05:54 compute-0 nova_compute[257802]: 2025-10-02 12:05:54.882 2 DEBUG nova.compute.manager [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:05:54 compute-0 nova_compute[257802]: 2025-10-02 12:05:54.911 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 12:05:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 461 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 8.7 MiB/s wr, 201 op/s
Oct 02 12:05:55 compute-0 nova_compute[257802]: 2025-10-02 12:05:55.367 2 DEBUG oslo_concurrency.lockutils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:05:55 compute-0 nova_compute[257802]: 2025-10-02 12:05:55.368 2 DEBUG oslo_concurrency.lockutils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:05:55 compute-0 nova_compute[257802]: 2025-10-02 12:05:55.368 2 DEBUG nova.objects.instance [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:05:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:05:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:55.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:05:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:55.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:55 compute-0 nova_compute[257802]: 2025-10-02 12:05:55.494 2 DEBUG oslo_concurrency.lockutils [None req-84fe18cc-37d1-441d-a037-fac5c58a9f5f 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:05:55 compute-0 ceph-mon[73607]: osdmap e158: 3 total, 3 up, 3 in
Oct 02 12:05:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3634991860' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:05:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3634991860' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:05:55 compute-0 nova_compute[257802]: 2025-10-02 12:05:55.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:55 compute-0 nova_compute[257802]: 2025-10-02 12:05:55.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:05:56 compute-0 nova_compute[257802]: 2025-10-02 12:05:56.864 2 DEBUG nova.compute.manager [req-e2940d3e-a03e-4359-b567-2a4fdbee3e71 req-23d4f39e-72ca-41ab-9274-a8d1cfa349d5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Received event network-vif-plugged-869b835b-1179-4864-abfe-fc542b215555 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:05:56 compute-0 nova_compute[257802]: 2025-10-02 12:05:56.864 2 DEBUG oslo_concurrency.lockutils [req-e2940d3e-a03e-4359-b567-2a4fdbee3e71 req-23d4f39e-72ca-41ab-9274-a8d1cfa349d5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:05:56 compute-0 nova_compute[257802]: 2025-10-02 12:05:56.865 2 DEBUG oslo_concurrency.lockutils [req-e2940d3e-a03e-4359-b567-2a4fdbee3e71 req-23d4f39e-72ca-41ab-9274-a8d1cfa349d5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:05:56 compute-0 nova_compute[257802]: 2025-10-02 12:05:56.865 2 DEBUG oslo_concurrency.lockutils [req-e2940d3e-a03e-4359-b567-2a4fdbee3e71 req-23d4f39e-72ca-41ab-9274-a8d1cfa349d5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:05:56 compute-0 nova_compute[257802]: 2025-10-02 12:05:56.865 2 DEBUG nova.compute.manager [req-e2940d3e-a03e-4359-b567-2a4fdbee3e71 req-23d4f39e-72ca-41ab-9274-a8d1cfa349d5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] No waiting events found dispatching network-vif-plugged-869b835b-1179-4864-abfe-fc542b215555 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:05:56 compute-0 nova_compute[257802]: 2025-10-02 12:05:56.865 2 WARNING nova.compute.manager [req-e2940d3e-a03e-4359-b567-2a4fdbee3e71 req-23d4f39e-72ca-41ab-9274-a8d1cfa349d5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Received unexpected event network-vif-plugged-869b835b-1179-4864-abfe-fc542b215555 for instance with vm_state active and task_state None.
Oct 02 12:05:56 compute-0 ceph-mon[73607]: pgmap v1177: 305 pgs: 305 active+clean; 461 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 8.7 MiB/s wr, 201 op/s
Oct 02 12:05:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 488 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 9.9 MiB/s wr, 206 op/s
Oct 02 12:05:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:05:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:05:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:57.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:05:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:57.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 488 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 7.7 MiB/s wr, 268 op/s
Oct 02 12:05:59 compute-0 ceph-mon[73607]: pgmap v1178: 305 pgs: 305 active+clean; 488 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 9.9 MiB/s wr, 206 op/s
Oct 02 12:05:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:05:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:59.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:05:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:05:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:59.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:59 compute-0 nova_compute[257802]: 2025-10-02 12:05:59.762 2 INFO nova.compute.manager [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Rebuilding instance
Oct 02 12:06:00 compute-0 ceph-mon[73607]: pgmap v1179: 305 pgs: 305 active+clean; 488 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 7.7 MiB/s wr, 268 op/s
Oct 02 12:06:00 compute-0 nova_compute[257802]: 2025-10-02 12:06:00.692 2 DEBUG nova.objects.instance [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:06:00 compute-0 nova_compute[257802]: 2025-10-02 12:06:00.709 2 DEBUG nova.compute.manager [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:06:00 compute-0 nova_compute[257802]: 2025-10-02 12:06:00.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:00 compute-0 nova_compute[257802]: 2025-10-02 12:06:00.769 2 DEBUG nova.objects.instance [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lazy-loading 'pci_requests' on Instance uuid 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:06:00 compute-0 nova_compute[257802]: 2025-10-02 12:06:00.786 2 DEBUG nova.objects.instance [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lazy-loading 'pci_devices' on Instance uuid 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:06:00 compute-0 nova_compute[257802]: 2025-10-02 12:06:00.801 2 DEBUG nova.objects.instance [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lazy-loading 'resources' on Instance uuid 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:06:00 compute-0 nova_compute[257802]: 2025-10-02 12:06:00.813 2 DEBUG nova.objects.instance [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lazy-loading 'migration_context' on Instance uuid 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:06:00 compute-0 nova_compute[257802]: 2025-10-02 12:06:00.835 2 DEBUG nova.objects.instance [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:06:00 compute-0 nova_compute[257802]: 2025-10-02 12:06:00.838 2 DEBUG nova.virt.libvirt.driver [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:06:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 465 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 3.2 MiB/s wr, 194 op/s
Oct 02 12:06:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:06:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:01.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:06:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:01.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:06:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Oct 02 12:06:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Oct 02 12:06:02 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Oct 02 12:06:02 compute-0 ceph-mon[73607]: pgmap v1180: 305 pgs: 305 active+clean; 465 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 3.2 MiB/s wr, 194 op/s
Oct 02 12:06:02 compute-0 ceph-mon[73607]: osdmap e159: 3 total, 3 up, 3 in
Oct 02 12:06:02 compute-0 podman[280637]: 2025-10-02 12:06:02.916578368 +0000 UTC m=+0.053394335 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct 02 12:06:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 427 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.5 MiB/s wr, 149 op/s
Oct 02 12:06:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:06:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:03.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:06:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:03.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3861229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:04 compute-0 nova_compute[257802]: 2025-10-02 12:06:04.895 2 DEBUG oslo_concurrency.lockutils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Acquiring lock "81cd8274-bb25-4b6c-aa66-89669fd098d5" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:04 compute-0 nova_compute[257802]: 2025-10-02 12:06:04.896 2 DEBUG oslo_concurrency.lockutils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lock "81cd8274-bb25-4b6c-aa66-89669fd098d5" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:04 compute-0 nova_compute[257802]: 2025-10-02 12:06:04.896 2 INFO nova.compute.manager [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Unshelving
Oct 02 12:06:05 compute-0 nova_compute[257802]: 2025-10-02 12:06:05.010 2 DEBUG oslo_concurrency.lockutils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:05 compute-0 nova_compute[257802]: 2025-10-02 12:06:05.010 2 DEBUG oslo_concurrency.lockutils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:05 compute-0 nova_compute[257802]: 2025-10-02 12:06:05.014 2 DEBUG nova.objects.instance [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lazy-loading 'pci_requests' on Instance uuid 81cd8274-bb25-4b6c-aa66-89669fd098d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:06:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 407 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.3 MiB/s wr, 131 op/s
Oct 02 12:06:05 compute-0 nova_compute[257802]: 2025-10-02 12:06:05.040 2 DEBUG nova.objects.instance [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lazy-loading 'numa_topology' on Instance uuid 81cd8274-bb25-4b6c-aa66-89669fd098d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:06:05 compute-0 nova_compute[257802]: 2025-10-02 12:06:05.065 2 DEBUG nova.virt.hardware [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:06:05 compute-0 nova_compute[257802]: 2025-10-02 12:06:05.065 2 INFO nova.compute.claims [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:06:05 compute-0 ceph-mon[73607]: pgmap v1182: 305 pgs: 305 active+clean; 427 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.5 MiB/s wr, 149 op/s
Oct 02 12:06:05 compute-0 nova_compute[257802]: 2025-10-02 12:06:05.267 2 DEBUG oslo_concurrency.processutils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:06:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:06:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:05.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:06:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:05.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:06:05 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2095233327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:05 compute-0 nova_compute[257802]: 2025-10-02 12:06:05.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:05 compute-0 nova_compute[257802]: 2025-10-02 12:06:05.765 2 DEBUG oslo_concurrency.processutils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:06:05 compute-0 nova_compute[257802]: 2025-10-02 12:06:05.770 2 DEBUG nova.compute.provider_tree [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:06:05 compute-0 nova_compute[257802]: 2025-10-02 12:06:05.862 2 DEBUG nova.scheduler.client.report [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:06:05 compute-0 podman[280680]: 2025-10-02 12:06:05.914556059 +0000 UTC m=+0.053190489 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:06:05 compute-0 podman[280679]: 2025-10-02 12:06:05.931702902 +0000 UTC m=+0.064154310 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 02 12:06:06 compute-0 nova_compute[257802]: 2025-10-02 12:06:06.132 2 DEBUG oslo_concurrency.lockutils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:06 compute-0 ceph-mon[73607]: pgmap v1183: 305 pgs: 305 active+clean; 407 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.3 MiB/s wr, 131 op/s
Oct 02 12:06:06 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/381002376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:06 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2095233327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:06 compute-0 nova_compute[257802]: 2025-10-02 12:06:06.489 2 DEBUG oslo_concurrency.lockutils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Acquiring lock "refresh_cache-81cd8274-bb25-4b6c-aa66-89669fd098d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:06:06 compute-0 nova_compute[257802]: 2025-10-02 12:06:06.489 2 DEBUG oslo_concurrency.lockutils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Acquired lock "refresh_cache-81cd8274-bb25-4b6c-aa66-89669fd098d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:06:06 compute-0 nova_compute[257802]: 2025-10-02 12:06:06.490 2 DEBUG nova.network.neutron [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:06:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 407 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 13 KiB/s wr, 130 op/s
Oct 02 12:06:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:06:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:07.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:07.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:07 compute-0 nova_compute[257802]: 2025-10-02 12:06:07.502 2 DEBUG nova.network.neutron [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:06:08 compute-0 nova_compute[257802]: 2025-10-02 12:06:08.045 2 DEBUG nova.network.neutron [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:06:08 compute-0 nova_compute[257802]: 2025-10-02 12:06:08.069 2 DEBUG oslo_concurrency.lockutils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Releasing lock "refresh_cache-81cd8274-bb25-4b6c-aa66-89669fd098d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:06:08 compute-0 nova_compute[257802]: 2025-10-02 12:06:08.072 2 DEBUG nova.virt.libvirt.driver [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:06:08 compute-0 nova_compute[257802]: 2025-10-02 12:06:08.072 2 INFO nova.virt.libvirt.driver [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Creating image(s)
Oct 02 12:06:08 compute-0 nova_compute[257802]: 2025-10-02 12:06:08.100 2 DEBUG nova.storage.rbd_utils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] rbd image 81cd8274-bb25-4b6c-aa66-89669fd098d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:06:08 compute-0 nova_compute[257802]: 2025-10-02 12:06:08.103 2 DEBUG nova.objects.instance [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lazy-loading 'trusted_certs' on Instance uuid 81cd8274-bb25-4b6c-aa66-89669fd098d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:06:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:08.106 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:06:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:08.107 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:06:08 compute-0 nova_compute[257802]: 2025-10-02 12:06:08.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:08 compute-0 nova_compute[257802]: 2025-10-02 12:06:08.244 2 DEBUG nova.storage.rbd_utils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] rbd image 81cd8274-bb25-4b6c-aa66-89669fd098d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:06:08 compute-0 nova_compute[257802]: 2025-10-02 12:06:08.270 2 DEBUG nova.storage.rbd_utils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] rbd image 81cd8274-bb25-4b6c-aa66-89669fd098d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:06:08 compute-0 nova_compute[257802]: 2025-10-02 12:06:08.273 2 DEBUG oslo_concurrency.lockutils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Acquiring lock "73b61c8b0eb5f6b4b161f36a95b364cc48519a29" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:08 compute-0 nova_compute[257802]: 2025-10-02 12:06:08.274 2 DEBUG oslo_concurrency.lockutils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lock "73b61c8b0eb5f6b4b161f36a95b364cc48519a29" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:08 compute-0 ceph-mon[73607]: pgmap v1184: 305 pgs: 305 active+clean; 407 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 13 KiB/s wr, 130 op/s
Oct 02 12:06:08 compute-0 nova_compute[257802]: 2025-10-02 12:06:08.810 2 DEBUG nova.virt.libvirt.imagebackend [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Image locations are: [{'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/a5e65e3c-7180-41c3-815b-a56c1c0389b1/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/a5e65e3c-7180-41c3-815b-a56c1c0389b1/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 02 12:06:08 compute-0 nova_compute[257802]: 2025-10-02 12:06:08.874 2 DEBUG nova.virt.libvirt.imagebackend [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Selected location: {'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/a5e65e3c-7180-41c3-815b-a56c1c0389b1/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 02 12:06:08 compute-0 nova_compute[257802]: 2025-10-02 12:06:08.875 2 DEBUG nova.storage.rbd_utils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] cloning images/a5e65e3c-7180-41c3-815b-a56c1c0389b1@snap to None/81cd8274-bb25-4b6c-aa66-89669fd098d5_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:06:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 460 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 751 KiB/s rd, 3.4 MiB/s wr, 109 op/s
Oct 02 12:06:09 compute-0 nova_compute[257802]: 2025-10-02 12:06:09.108 2 DEBUG oslo_concurrency.lockutils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lock "73b61c8b0eb5f6b4b161f36a95b364cc48519a29" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:09 compute-0 nova_compute[257802]: 2025-10-02 12:06:09.229 2 DEBUG nova.objects.instance [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lazy-loading 'migration_context' on Instance uuid 81cd8274-bb25-4b6c-aa66-89669fd098d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:06:09 compute-0 nova_compute[257802]: 2025-10-02 12:06:09.295 2 DEBUG nova.storage.rbd_utils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] flattening vms/81cd8274-bb25-4b6c-aa66-89669fd098d5_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 12:06:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:06:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:09.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:06:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:09.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:09 compute-0 ovn_controller[148183]: 2025-10-02T12:06:09Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1d:d7:20 10.100.0.9
Oct 02 12:06:09 compute-0 ovn_controller[148183]: 2025-10-02T12:06:09Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1d:d7:20 10.100.0.9
Oct 02 12:06:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4084862216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:06:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/375416715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:06:10 compute-0 nova_compute[257802]: 2025-10-02 12:06:10.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:10 compute-0 nova_compute[257802]: 2025-10-02 12:06:10.915 2 DEBUG nova.virt.libvirt.driver [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Image rbd:vms/81cd8274-bb25-4b6c-aa66-89669fd098d5_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Oct 02 12:06:10 compute-0 nova_compute[257802]: 2025-10-02 12:06:10.915 2 DEBUG nova.virt.libvirt.driver [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:06:10 compute-0 nova_compute[257802]: 2025-10-02 12:06:10.916 2 DEBUG nova.virt.libvirt.driver [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Ensure instance console log exists: /var/lib/nova/instances/81cd8274-bb25-4b6c-aa66-89669fd098d5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:06:10 compute-0 nova_compute[257802]: 2025-10-02 12:06:10.916 2 DEBUG oslo_concurrency.lockutils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:10 compute-0 nova_compute[257802]: 2025-10-02 12:06:10.916 2 DEBUG oslo_concurrency.lockutils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:10 compute-0 nova_compute[257802]: 2025-10-02 12:06:10.916 2 DEBUG oslo_concurrency.lockutils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:10 compute-0 nova_compute[257802]: 2025-10-02 12:06:10.918 2 DEBUG nova.virt.libvirt.driver [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T12:05:44Z,direct_url=<?>,disk_format='raw',id=a5e65e3c-7180-41c3-815b-a56c1c0389b1,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1602823290-shelved',owner='39f16ad971c74b0296dabeb9b59464da',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T12:05:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:06:10 compute-0 nova_compute[257802]: 2025-10-02 12:06:10.922 2 WARNING nova.virt.libvirt.driver [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:06:10 compute-0 nova_compute[257802]: 2025-10-02 12:06:10.925 2 DEBUG nova.virt.libvirt.driver [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:06:10 compute-0 nova_compute[257802]: 2025-10-02 12:06:10.926 2 DEBUG nova.virt.libvirt.host [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:06:10 compute-0 nova_compute[257802]: 2025-10-02 12:06:10.927 2 DEBUG nova.virt.libvirt.host [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:06:10 compute-0 nova_compute[257802]: 2025-10-02 12:06:10.931 2 DEBUG nova.virt.libvirt.host [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:06:10 compute-0 nova_compute[257802]: 2025-10-02 12:06:10.932 2 DEBUG nova.virt.libvirt.host [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:06:10 compute-0 nova_compute[257802]: 2025-10-02 12:06:10.933 2 DEBUG nova.virt.libvirt.driver [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:06:10 compute-0 nova_compute[257802]: 2025-10-02 12:06:10.933 2 DEBUG nova.virt.hardware [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T12:05:44Z,direct_url=<?>,disk_format='raw',id=a5e65e3c-7180-41c3-815b-a56c1c0389b1,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1602823290-shelved',owner='39f16ad971c74b0296dabeb9b59464da',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T12:05:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:06:10 compute-0 nova_compute[257802]: 2025-10-02 12:06:10.934 2 DEBUG nova.virt.hardware [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:06:10 compute-0 nova_compute[257802]: 2025-10-02 12:06:10.934 2 DEBUG nova.virt.hardware [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:06:10 compute-0 nova_compute[257802]: 2025-10-02 12:06:10.934 2 DEBUG nova.virt.hardware [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:06:10 compute-0 nova_compute[257802]: 2025-10-02 12:06:10.934 2 DEBUG nova.virt.hardware [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:06:10 compute-0 nova_compute[257802]: 2025-10-02 12:06:10.935 2 DEBUG nova.virt.hardware [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:06:10 compute-0 nova_compute[257802]: 2025-10-02 12:06:10.935 2 DEBUG nova.virt.hardware [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:06:10 compute-0 nova_compute[257802]: 2025-10-02 12:06:10.935 2 DEBUG nova.virt.hardware [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:06:10 compute-0 nova_compute[257802]: 2025-10-02 12:06:10.935 2 DEBUG nova.virt.hardware [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:06:10 compute-0 nova_compute[257802]: 2025-10-02 12:06:10.935 2 DEBUG nova.virt.hardware [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:06:10 compute-0 nova_compute[257802]: 2025-10-02 12:06:10.936 2 DEBUG nova.virt.hardware [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:06:10 compute-0 nova_compute[257802]: 2025-10-02 12:06:10.936 2 DEBUG nova.objects.instance [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lazy-loading 'vcpu_model' on Instance uuid 81cd8274-bb25-4b6c-aa66-89669fd098d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:06:10 compute-0 nova_compute[257802]: 2025-10-02 12:06:10.958 2 DEBUG oslo_concurrency.processutils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:06:10 compute-0 podman[280939]: 2025-10-02 12:06:10.975640371 +0000 UTC m=+0.095393157 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:06:11 compute-0 ceph-mon[73607]: pgmap v1185: 305 pgs: 305 active+clean; 460 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 751 KiB/s rd, 3.4 MiB/s wr, 109 op/s
Oct 02 12:06:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 494 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 6.4 MiB/s wr, 105 op/s
Oct 02 12:06:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:06:11 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2655011574' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:06:11 compute-0 nova_compute[257802]: 2025-10-02 12:06:11.379 2 DEBUG oslo_concurrency.processutils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:06:11 compute-0 nova_compute[257802]: 2025-10-02 12:06:11.405 2 DEBUG nova.storage.rbd_utils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] rbd image 81cd8274-bb25-4b6c-aa66-89669fd098d5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:06:11 compute-0 nova_compute[257802]: 2025-10-02 12:06:11.409 2 DEBUG oslo_concurrency.processutils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:06:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:11.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:11.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:11 compute-0 sudo[281025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:11 compute-0 sudo[281025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:11 compute-0 sudo[281025]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:06:11 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1553197275' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:06:11 compute-0 sudo[281050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:11 compute-0 sudo[281050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:11 compute-0 sudo[281050]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:11 compute-0 nova_compute[257802]: 2025-10-02 12:06:11.836 2 DEBUG oslo_concurrency.processutils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:06:11 compute-0 nova_compute[257802]: 2025-10-02 12:06:11.839 2 DEBUG nova.objects.instance [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lazy-loading 'pci_devices' on Instance uuid 81cd8274-bb25-4b6c-aa66-89669fd098d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:06:11 compute-0 sshd-session[281053]: Invalid user riscv from 167.99.55.34 port 39980
Oct 02 12:06:11 compute-0 nova_compute[257802]: 2025-10-02 12:06:11.855 2 DEBUG nova.virt.libvirt.driver [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:06:11 compute-0 nova_compute[257802]:   <uuid>81cd8274-bb25-4b6c-aa66-89669fd098d5</uuid>
Oct 02 12:06:11 compute-0 nova_compute[257802]:   <name>instance-00000019</name>
Oct 02 12:06:11 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:06:11 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:06:11 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:06:11 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:       <nova:name>tempest-UnshelveToHostMultiNodesTest-server-1602823290</nova:name>
Oct 02 12:06:11 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:06:10</nova:creationTime>
Oct 02 12:06:11 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:06:11 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:06:11 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:06:11 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:06:11 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:06:11 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:06:11 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:06:11 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:06:11 compute-0 nova_compute[257802]:         <nova:user uuid="a10a935aed2d40ff8ee890b3089c7d03">tempest-UnshelveToHostMultiNodesTest-32053853-project-member</nova:user>
Oct 02 12:06:11 compute-0 nova_compute[257802]:         <nova:project uuid="39f16ad971c74b0296dabeb9b59464da">tempest-UnshelveToHostMultiNodesTest-32053853</nova:project>
Oct 02 12:06:11 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:06:11 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="a5e65e3c-7180-41c3-815b-a56c1c0389b1"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:       <nova:ports/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:06:11 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:06:11 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <system>
Oct 02 12:06:11 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:06:11 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:06:11 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:06:11 compute-0 nova_compute[257802]:       <entry name="serial">81cd8274-bb25-4b6c-aa66-89669fd098d5</entry>
Oct 02 12:06:11 compute-0 nova_compute[257802]:       <entry name="uuid">81cd8274-bb25-4b6c-aa66-89669fd098d5</entry>
Oct 02 12:06:11 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     </system>
Oct 02 12:06:11 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:06:11 compute-0 nova_compute[257802]:   <os>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:   </os>
Oct 02 12:06:11 compute-0 nova_compute[257802]:   <features>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:   </features>
Oct 02 12:06:11 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:06:11 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:06:11 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:06:11 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/81cd8274-bb25-4b6c-aa66-89669fd098d5_disk">
Oct 02 12:06:11 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:       </source>
Oct 02 12:06:11 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:06:11 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:06:11 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:06:11 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/81cd8274-bb25-4b6c-aa66-89669fd098d5_disk.config">
Oct 02 12:06:11 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:       </source>
Oct 02 12:06:11 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:06:11 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:06:11 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:06:11 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/81cd8274-bb25-4b6c-aa66-89669fd098d5/console.log" append="off"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <video>
Oct 02 12:06:11 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     </video>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <input type="keyboard" bus="usb"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:06:11 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:06:11 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:06:11 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:06:11 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:06:11 compute-0 nova_compute[257802]: </domain>
Oct 02 12:06:11 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:06:11 compute-0 sshd-session[281053]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 12:06:11 compute-0 sshd-session[281053]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=167.99.55.34
Oct 02 12:06:11 compute-0 nova_compute[257802]: 2025-10-02 12:06:11.918 2 DEBUG nova.virt.libvirt.driver [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:06:11 compute-0 nova_compute[257802]: 2025-10-02 12:06:11.918 2 DEBUG nova.virt.libvirt.driver [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:06:11 compute-0 nova_compute[257802]: 2025-10-02 12:06:11.919 2 INFO nova.virt.libvirt.driver [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Using config drive
Oct 02 12:06:11 compute-0 nova_compute[257802]: 2025-10-02 12:06:11.947 2 DEBUG nova.storage.rbd_utils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] rbd image 81cd8274-bb25-4b6c-aa66-89669fd098d5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:06:11 compute-0 nova_compute[257802]: 2025-10-02 12:06:11.980 2 DEBUG nova.objects.instance [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lazy-loading 'ec2_ids' on Instance uuid 81cd8274-bb25-4b6c-aa66-89669fd098d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:06:12 compute-0 nova_compute[257802]: 2025-10-02 12:06:12.038 2 DEBUG nova.objects.instance [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lazy-loading 'keypairs' on Instance uuid 81cd8274-bb25-4b6c-aa66-89669fd098d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:06:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2655011574' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:06:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1553197275' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:06:12 compute-0 nova_compute[257802]: 2025-10-02 12:06:12.496 2 INFO nova.virt.libvirt.driver [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Creating config drive at /var/lib/nova/instances/81cd8274-bb25-4b6c-aa66-89669fd098d5/disk.config
Oct 02 12:06:12 compute-0 nova_compute[257802]: 2025-10-02 12:06:12.501 2 DEBUG oslo_concurrency.processutils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/81cd8274-bb25-4b6c-aa66-89669fd098d5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9scdtxal execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:06:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:06:12 compute-0 nova_compute[257802]: 2025-10-02 12:06:12.636 2 DEBUG oslo_concurrency.processutils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/81cd8274-bb25-4b6c-aa66-89669fd098d5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9scdtxal" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:06:12 compute-0 nova_compute[257802]: 2025-10-02 12:06:12.664 2 DEBUG nova.storage.rbd_utils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] rbd image 81cd8274-bb25-4b6c-aa66-89669fd098d5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:06:12 compute-0 nova_compute[257802]: 2025-10-02 12:06:12.667 2 DEBUG oslo_concurrency.processutils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/81cd8274-bb25-4b6c-aa66-89669fd098d5/disk.config 81cd8274-bb25-4b6c-aa66-89669fd098d5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:06:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:06:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:06:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:06:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:06:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:06:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:06:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 525 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 7.4 MiB/s wr, 182 op/s
Oct 02 12:06:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:13.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:13.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:13 compute-0 nova_compute[257802]: 2025-10-02 12:06:13.519 2 DEBUG oslo_concurrency.processutils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/81cd8274-bb25-4b6c-aa66-89669fd098d5/disk.config 81cd8274-bb25-4b6c-aa66-89669fd098d5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.852s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:06:13 compute-0 nova_compute[257802]: 2025-10-02 12:06:13.520 2 INFO nova.virt.libvirt.driver [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Deleting local config drive /var/lib/nova/instances/81cd8274-bb25-4b6c-aa66-89669fd098d5/disk.config because it was imported into RBD.
Oct 02 12:06:13 compute-0 ceph-mon[73607]: pgmap v1186: 305 pgs: 305 active+clean; 494 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 6.4 MiB/s wr, 105 op/s
Oct 02 12:06:13 compute-0 systemd-machined[211836]: New machine qemu-18-instance-00000019.
Oct 02 12:06:13 compute-0 systemd[1]: Started Virtual Machine qemu-18-instance-00000019.
Oct 02 12:06:13 compute-0 sshd-session[281053]: Failed password for invalid user riscv from 167.99.55.34 port 39980 ssh2
Oct 02 12:06:13 compute-0 sshd-session[281053]: Connection closed by invalid user riscv 167.99.55.34 port 39980 [preauth]
Oct 02 12:06:14 compute-0 nova_compute[257802]: 2025-10-02 12:06:14.437 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406774.4371557, 81cd8274-bb25-4b6c-aa66-89669fd098d5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:06:14 compute-0 nova_compute[257802]: 2025-10-02 12:06:14.438 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] VM Resumed (Lifecycle Event)
Oct 02 12:06:14 compute-0 nova_compute[257802]: 2025-10-02 12:06:14.441 2 DEBUG nova.compute.manager [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:06:14 compute-0 nova_compute[257802]: 2025-10-02 12:06:14.442 2 DEBUG nova.virt.libvirt.driver [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:06:14 compute-0 nova_compute[257802]: 2025-10-02 12:06:14.445 2 INFO nova.virt.libvirt.driver [-] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Instance spawned successfully.
Oct 02 12:06:14 compute-0 nova_compute[257802]: 2025-10-02 12:06:14.515 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:06:14 compute-0 nova_compute[257802]: 2025-10-02 12:06:14.518 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:06:14 compute-0 nova_compute[257802]: 2025-10-02 12:06:14.555 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:06:14 compute-0 nova_compute[257802]: 2025-10-02 12:06:14.556 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406774.440486, 81cd8274-bb25-4b6c-aa66-89669fd098d5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:06:14 compute-0 nova_compute[257802]: 2025-10-02 12:06:14.557 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] VM Started (Lifecycle Event)
Oct 02 12:06:14 compute-0 nova_compute[257802]: 2025-10-02 12:06:14.638 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:06:14 compute-0 nova_compute[257802]: 2025-10-02 12:06:14.642 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:06:14 compute-0 nova_compute[257802]: 2025-10-02 12:06:14.712 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:06:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Oct 02 12:06:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 560 MiB data, 673 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 7.6 MiB/s wr, 191 op/s
Oct 02 12:06:15 compute-0 ceph-mon[73607]: pgmap v1187: 305 pgs: 305 active+clean; 525 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 7.4 MiB/s wr, 182 op/s
Oct 02 12:06:15 compute-0 kernel: tap869b835b-11 (unregistering): left promiscuous mode
Oct 02 12:06:15 compute-0 NetworkManager[44987]: <info>  [1759406775.2668] device (tap869b835b-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:06:15 compute-0 ovn_controller[148183]: 2025-10-02T12:06:15Z|00141|binding|INFO|Releasing lport 869b835b-1179-4864-abfe-fc542b215555 from this chassis (sb_readonly=0)
Oct 02 12:06:15 compute-0 nova_compute[257802]: 2025-10-02 12:06:15.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:15 compute-0 ovn_controller[148183]: 2025-10-02T12:06:15Z|00142|binding|INFO|Setting lport 869b835b-1179-4864-abfe-fc542b215555 down in Southbound
Oct 02 12:06:15 compute-0 ovn_controller[148183]: 2025-10-02T12:06:15Z|00143|binding|INFO|Removing iface tap869b835b-11 ovn-installed in OVS
Oct 02 12:06:15 compute-0 nova_compute[257802]: 2025-10-02 12:06:15.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:15.298 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:d7:20 10.100.0.9'], port_security=['fa:16:3e:1d:d7:20 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '66fec68d-e11d-4f9f-9cea-a0358d8f2ae0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80106802-d877-42c6-b2a9-50b050f6b08f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9afa78cc4dec419babdf61fd31f46e28', 'neutron:revision_number': '6', 'neutron:security_group_ids': '8fbb5420-10f4-405b-bd01-713020f7e518', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=03aa6f10-2374-4fa3-bc90-1fcb8815afb8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=869b835b-1179-4864-abfe-fc542b215555) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:06:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:15.300 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 869b835b-1179-4864-abfe-fc542b215555 in datapath 80106802-d877-42c6-b2a9-50b050f6b08f unbound from our chassis
Oct 02 12:06:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:15.302 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 80106802-d877-42c6-b2a9-50b050f6b08f
Oct 02 12:06:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:15.318 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5b6d2960-93c8-4798-9389-598dffa36335]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:06:15 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000017.scope: Deactivated successfully.
Oct 02 12:06:15 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000017.scope: Consumed 13.952s CPU time.
Oct 02 12:06:15 compute-0 systemd-machined[211836]: Machine qemu-17-instance-00000017 terminated.
Oct 02 12:06:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:15.345 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[bad81eb2-d83a-42a0-aadc-85440bdd2f4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:06:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:15.350 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[c3e21706-1cb9-4bc4-b038-2537f55cfcda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:06:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:15.376 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[ffc6d787-adc1-4a81-a4d1-fda858034728]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:06:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:15.396 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[411b0ce1-0b7a-4bf1-a284-a570d3b58008]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80106802-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:27:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474182, 'reachable_time': 44687, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281208, 'error': None, 'target': 'ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:06:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:15.411 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3c3fddf3-9efd-4e41-b662-2b86d2b5ee86]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap80106802-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474195, 'tstamp': 474195}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281209, 'error': None, 'target': 'ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap80106802-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474198, 'tstamp': 474198}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281209, 'error': None, 'target': 'ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:06:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:15.412 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80106802-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:06:15 compute-0 nova_compute[257802]: 2025-10-02 12:06:15.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:15 compute-0 nova_compute[257802]: 2025-10-02 12:06:15.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:15.419 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap80106802-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:06:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:15.419 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:06:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:15.419 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap80106802-d0, col_values=(('external_ids', {'iface-id': '3e3f512e-f85f-4c9c-b91d-072c570470c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:06:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:15.420 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:06:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:15.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:15.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Oct 02 12:06:15 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Oct 02 12:06:15 compute-0 nova_compute[257802]: 2025-10-02 12:06:15.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:15 compute-0 nova_compute[257802]: 2025-10-02 12:06:15.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:15 compute-0 nova_compute[257802]: 2025-10-02 12:06:15.971 2 INFO nova.virt.libvirt.driver [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Instance shutdown successfully after 15 seconds.
Oct 02 12:06:15 compute-0 nova_compute[257802]: 2025-10-02 12:06:15.978 2 INFO nova.virt.libvirt.driver [-] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Instance destroyed successfully.
Oct 02 12:06:15 compute-0 nova_compute[257802]: 2025-10-02 12:06:15.986 2 INFO nova.virt.libvirt.driver [-] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Instance destroyed successfully.
Oct 02 12:06:15 compute-0 nova_compute[257802]: 2025-10-02 12:06:15.987 2 DEBUG nova.virt.libvirt.vif [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:04:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-167670540',display_name='tempest-ServersAdminTestJSON-server-167670540',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-167670540',id=23,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:05:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9afa78cc4dec419babdf61fd31f46e28',ramdisk_id='',reservation_id='r-fpp9adrm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-518249049',owner_user_name='tempest-ServersAdminTestJSON-518249049-project-member'},tags
=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:05:58Z,user_data=None,user_id='8850add40b254d198f270d9e64c777d5',uuid=66fec68d-e11d-4f9f-9cea-a0358d8f2ae0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "869b835b-1179-4864-abfe-fc542b215555", "address": "fa:16:3e:1d:d7:20", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap869b835b-11", "ovs_interfaceid": "869b835b-1179-4864-abfe-fc542b215555", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:06:15 compute-0 nova_compute[257802]: 2025-10-02 12:06:15.987 2 DEBUG nova.network.os_vif_util [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Converting VIF {"id": "869b835b-1179-4864-abfe-fc542b215555", "address": "fa:16:3e:1d:d7:20", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap869b835b-11", "ovs_interfaceid": "869b835b-1179-4864-abfe-fc542b215555", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:06:15 compute-0 nova_compute[257802]: 2025-10-02 12:06:15.988 2 DEBUG nova.network.os_vif_util [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1d:d7:20,bridge_name='br-int',has_traffic_filtering=True,id=869b835b-1179-4864-abfe-fc542b215555,network=Network(80106802-d877-42c6-b2a9-50b050f6b08f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap869b835b-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:06:15 compute-0 nova_compute[257802]: 2025-10-02 12:06:15.988 2 DEBUG os_vif [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1d:d7:20,bridge_name='br-int',has_traffic_filtering=True,id=869b835b-1179-4864-abfe-fc542b215555,network=Network(80106802-d877-42c6-b2a9-50b050f6b08f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap869b835b-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:06:15 compute-0 nova_compute[257802]: 2025-10-02 12:06:15.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:15 compute-0 nova_compute[257802]: 2025-10-02 12:06:15.990 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap869b835b-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:06:15 compute-0 nova_compute[257802]: 2025-10-02 12:06:15.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:15 compute-0 nova_compute[257802]: 2025-10-02 12:06:15.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:15 compute-0 nova_compute[257802]: 2025-10-02 12:06:15.995 2 INFO os_vif [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1d:d7:20,bridge_name='br-int',has_traffic_filtering=True,id=869b835b-1179-4864-abfe-fc542b215555,network=Network(80106802-d877-42c6-b2a9-50b050f6b08f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap869b835b-11')
Oct 02 12:06:16 compute-0 ceph-mon[73607]: pgmap v1188: 305 pgs: 305 active+clean; 560 MiB data, 673 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 7.6 MiB/s wr, 191 op/s
Oct 02 12:06:16 compute-0 ceph-mon[73607]: osdmap e160: 3 total, 3 up, 3 in
Oct 02 12:06:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 567 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 9.4 MiB/s wr, 250 op/s
Oct 02 12:06:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:17.110 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:06:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:17.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:17.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:17 compute-0 nova_compute[257802]: 2025-10-02 12:06:17.511 2 DEBUG nova.compute.manager [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:06:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:06:17 compute-0 nova_compute[257802]: 2025-10-02 12:06:17.603 2 DEBUG oslo_concurrency.lockutils [None req-6f03e499-db4b-4700-8422-7c61f7820f04 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lock "81cd8274-bb25-4b6c-aa66-89669fd098d5" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 12.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:17 compute-0 nova_compute[257802]: 2025-10-02 12:06:17.684 2 DEBUG nova.compute.manager [req-ace77156-0448-4c44-b604-00dfa02999b7 req-d59165bf-58ef-496f-a1d3-b7735d9113a9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Received event network-vif-unplugged-869b835b-1179-4864-abfe-fc542b215555 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:06:17 compute-0 nova_compute[257802]: 2025-10-02 12:06:17.684 2 DEBUG oslo_concurrency.lockutils [req-ace77156-0448-4c44-b604-00dfa02999b7 req-d59165bf-58ef-496f-a1d3-b7735d9113a9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:17 compute-0 nova_compute[257802]: 2025-10-02 12:06:17.685 2 DEBUG oslo_concurrency.lockutils [req-ace77156-0448-4c44-b604-00dfa02999b7 req-d59165bf-58ef-496f-a1d3-b7735d9113a9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:17 compute-0 nova_compute[257802]: 2025-10-02 12:06:17.685 2 DEBUG oslo_concurrency.lockutils [req-ace77156-0448-4c44-b604-00dfa02999b7 req-d59165bf-58ef-496f-a1d3-b7735d9113a9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:17 compute-0 nova_compute[257802]: 2025-10-02 12:06:17.685 2 DEBUG nova.compute.manager [req-ace77156-0448-4c44-b604-00dfa02999b7 req-d59165bf-58ef-496f-a1d3-b7735d9113a9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] No waiting events found dispatching network-vif-unplugged-869b835b-1179-4864-abfe-fc542b215555 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:06:17 compute-0 nova_compute[257802]: 2025-10-02 12:06:17.685 2 WARNING nova.compute.manager [req-ace77156-0448-4c44-b604-00dfa02999b7 req-d59165bf-58ef-496f-a1d3-b7735d9113a9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Received unexpected event network-vif-unplugged-869b835b-1179-4864-abfe-fc542b215555 for instance with vm_state active and task_state rebuilding.
Oct 02 12:06:18 compute-0 nova_compute[257802]: 2025-10-02 12:06:18.907 2 INFO nova.virt.libvirt.driver [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Deleting instance files /var/lib/nova/instances/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_del
Oct 02 12:06:18 compute-0 nova_compute[257802]: 2025-10-02 12:06:18.907 2 INFO nova.virt.libvirt.driver [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Deletion of /var/lib/nova/instances/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_del complete
Oct 02 12:06:18 compute-0 ceph-mon[73607]: pgmap v1190: 305 pgs: 305 active+clean; 567 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 9.4 MiB/s wr, 250 op/s
Oct 02 12:06:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 449 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 8.1 MiB/s rd, 6.0 MiB/s wr, 331 op/s
Oct 02 12:06:19 compute-0 nova_compute[257802]: 2025-10-02 12:06:19.098 2 DEBUG nova.virt.libvirt.driver [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:06:19 compute-0 nova_compute[257802]: 2025-10-02 12:06:19.098 2 INFO nova.virt.libvirt.driver [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Creating image(s)
Oct 02 12:06:19 compute-0 nova_compute[257802]: 2025-10-02 12:06:19.123 2 DEBUG nova.storage.rbd_utils [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] rbd image 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:06:19 compute-0 nova_compute[257802]: 2025-10-02 12:06:19.150 2 DEBUG nova.storage.rbd_utils [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] rbd image 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:06:19 compute-0 nova_compute[257802]: 2025-10-02 12:06:19.175 2 DEBUG nova.storage.rbd_utils [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] rbd image 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:06:19 compute-0 nova_compute[257802]: 2025-10-02 12:06:19.179 2 DEBUG oslo_concurrency.processutils [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:06:19 compute-0 nova_compute[257802]: 2025-10-02 12:06:19.239 2 DEBUG oslo_concurrency.processutils [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:06:19 compute-0 nova_compute[257802]: 2025-10-02 12:06:19.240 2 DEBUG oslo_concurrency.lockutils [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:19 compute-0 nova_compute[257802]: 2025-10-02 12:06:19.241 2 DEBUG oslo_concurrency.lockutils [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:19 compute-0 nova_compute[257802]: 2025-10-02 12:06:19.241 2 DEBUG oslo_concurrency.lockutils [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:19 compute-0 nova_compute[257802]: 2025-10-02 12:06:19.269 2 DEBUG nova.storage.rbd_utils [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] rbd image 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:06:19 compute-0 nova_compute[257802]: 2025-10-02 12:06:19.272 2 DEBUG oslo_concurrency.processutils [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:06:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:19.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:19.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:19 compute-0 nova_compute[257802]: 2025-10-02 12:06:19.813 2 DEBUG nova.compute.manager [req-2989d476-10a8-43e7-b4fa-3e92cb86ecb6 req-3a61788c-2395-4dca-bb16-b74b742a66a4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Received event network-vif-plugged-869b835b-1179-4864-abfe-fc542b215555 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:06:19 compute-0 nova_compute[257802]: 2025-10-02 12:06:19.814 2 DEBUG oslo_concurrency.lockutils [req-2989d476-10a8-43e7-b4fa-3e92cb86ecb6 req-3a61788c-2395-4dca-bb16-b74b742a66a4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:19 compute-0 nova_compute[257802]: 2025-10-02 12:06:19.814 2 DEBUG oslo_concurrency.lockutils [req-2989d476-10a8-43e7-b4fa-3e92cb86ecb6 req-3a61788c-2395-4dca-bb16-b74b742a66a4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:19 compute-0 nova_compute[257802]: 2025-10-02 12:06:19.815 2 DEBUG oslo_concurrency.lockutils [req-2989d476-10a8-43e7-b4fa-3e92cb86ecb6 req-3a61788c-2395-4dca-bb16-b74b742a66a4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:19 compute-0 nova_compute[257802]: 2025-10-02 12:06:19.815 2 DEBUG nova.compute.manager [req-2989d476-10a8-43e7-b4fa-3e92cb86ecb6 req-3a61788c-2395-4dca-bb16-b74b742a66a4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] No waiting events found dispatching network-vif-plugged-869b835b-1179-4864-abfe-fc542b215555 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:06:19 compute-0 nova_compute[257802]: 2025-10-02 12:06:19.816 2 WARNING nova.compute.manager [req-2989d476-10a8-43e7-b4fa-3e92cb86ecb6 req-3a61788c-2395-4dca-bb16-b74b742a66a4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Received unexpected event network-vif-plugged-869b835b-1179-4864-abfe-fc542b215555 for instance with vm_state active and task_state rebuild_spawning.
Oct 02 12:06:19 compute-0 nova_compute[257802]: 2025-10-02 12:06:19.886 2 DEBUG oslo_concurrency.processutils [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:06:19 compute-0 nova_compute[257802]: 2025-10-02 12:06:19.917 2 DEBUG oslo_concurrency.lockutils [None req-9e8719ea-2525-4064-9288-6013db1fdad4 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Acquiring lock "81cd8274-bb25-4b6c-aa66-89669fd098d5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:19 compute-0 nova_compute[257802]: 2025-10-02 12:06:19.918 2 DEBUG oslo_concurrency.lockutils [None req-9e8719ea-2525-4064-9288-6013db1fdad4 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Lock "81cd8274-bb25-4b6c-aa66-89669fd098d5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:19 compute-0 nova_compute[257802]: 2025-10-02 12:06:19.918 2 DEBUG oslo_concurrency.lockutils [None req-9e8719ea-2525-4064-9288-6013db1fdad4 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Acquiring lock "81cd8274-bb25-4b6c-aa66-89669fd098d5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:19 compute-0 nova_compute[257802]: 2025-10-02 12:06:19.919 2 DEBUG oslo_concurrency.lockutils [None req-9e8719ea-2525-4064-9288-6013db1fdad4 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Lock "81cd8274-bb25-4b6c-aa66-89669fd098d5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:19 compute-0 nova_compute[257802]: 2025-10-02 12:06:19.919 2 DEBUG oslo_concurrency.lockutils [None req-9e8719ea-2525-4064-9288-6013db1fdad4 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Lock "81cd8274-bb25-4b6c-aa66-89669fd098d5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:19 compute-0 nova_compute[257802]: 2025-10-02 12:06:19.920 2 INFO nova.compute.manager [None req-9e8719ea-2525-4064-9288-6013db1fdad4 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Terminating instance
Oct 02 12:06:19 compute-0 nova_compute[257802]: 2025-10-02 12:06:19.921 2 DEBUG oslo_concurrency.lockutils [None req-9e8719ea-2525-4064-9288-6013db1fdad4 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Acquiring lock "refresh_cache-81cd8274-bb25-4b6c-aa66-89669fd098d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:06:19 compute-0 nova_compute[257802]: 2025-10-02 12:06:19.921 2 DEBUG oslo_concurrency.lockutils [None req-9e8719ea-2525-4064-9288-6013db1fdad4 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Acquired lock "refresh_cache-81cd8274-bb25-4b6c-aa66-89669fd098d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:06:19 compute-0 nova_compute[257802]: 2025-10-02 12:06:19.922 2 DEBUG nova.network.neutron [None req-9e8719ea-2525-4064-9288-6013db1fdad4 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.003 2 DEBUG nova.storage.rbd_utils [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] resizing rbd image 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.314 2 DEBUG nova.virt.libvirt.driver [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.315 2 DEBUG nova.virt.libvirt.driver [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Ensure instance console log exists: /var/lib/nova/instances/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.315 2 DEBUG oslo_concurrency.lockutils [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.316 2 DEBUG oslo_concurrency.lockutils [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.316 2 DEBUG oslo_concurrency.lockutils [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.319 2 DEBUG nova.virt.libvirt.driver [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Start _get_guest_xml network_info=[{"id": "869b835b-1179-4864-abfe-fc542b215555", "address": "fa:16:3e:1d:d7:20", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap869b835b-11", "ovs_interfaceid": "869b835b-1179-4864-abfe-fc542b215555", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.323 2 WARNING nova.virt.libvirt.driver [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.332 2 DEBUG nova.virt.libvirt.host [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.332 2 DEBUG nova.virt.libvirt.host [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.338 2 DEBUG nova.virt.libvirt.host [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.339 2 DEBUG nova.virt.libvirt.host [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.340 2 DEBUG nova.virt.libvirt.driver [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.341 2 DEBUG nova.virt.hardware [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.341 2 DEBUG nova.virt.hardware [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.341 2 DEBUG nova.virt.hardware [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.342 2 DEBUG nova.virt.hardware [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.342 2 DEBUG nova.virt.hardware [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.342 2 DEBUG nova.virt.hardware [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.342 2 DEBUG nova.virt.hardware [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.343 2 DEBUG nova.virt.hardware [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.343 2 DEBUG nova.virt.hardware [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.343 2 DEBUG nova.virt.hardware [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.343 2 DEBUG nova.virt.hardware [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.343 2 DEBUG nova.objects.instance [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.359 2 DEBUG oslo_concurrency.processutils [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.387 2 DEBUG nova.network.neutron [None req-9e8719ea-2525-4064-9288-6013db1fdad4 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.684 2 DEBUG nova.network.neutron [None req-9e8719ea-2525-4064-9288-6013db1fdad4 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.698 2 DEBUG oslo_concurrency.lockutils [None req-9e8719ea-2525-4064-9288-6013db1fdad4 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Releasing lock "refresh_cache-81cd8274-bb25-4b6c-aa66-89669fd098d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.699 2 DEBUG nova.compute.manager [None req-9e8719ea-2525-4064-9288-6013db1fdad4 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:20 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000019.scope: Deactivated successfully.
Oct 02 12:06:20 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000019.scope: Consumed 7.284s CPU time.
Oct 02 12:06:20 compute-0 systemd-machined[211836]: Machine qemu-18-instance-00000019 terminated.
Oct 02 12:06:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:06:20 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3796480936' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.810 2 DEBUG oslo_concurrency.processutils [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.836 2 DEBUG nova.storage.rbd_utils [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] rbd image 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.840 2 DEBUG oslo_concurrency.processutils [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.920 2 INFO nova.virt.libvirt.driver [-] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Instance destroyed successfully.
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.921 2 DEBUG nova.objects.instance [None req-9e8719ea-2525-4064-9288-6013db1fdad4 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Lazy-loading 'resources' on Instance uuid 81cd8274-bb25-4b6c-aa66-89669fd098d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:06:20 compute-0 nova_compute[257802]: 2025-10-02 12:06:20.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 426 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 3.7 MiB/s wr, 377 op/s
Oct 02 12:06:21 compute-0 ceph-mon[73607]: pgmap v1191: 305 pgs: 305 active+clean; 449 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 8.1 MiB/s rd, 6.0 MiB/s wr, 331 op/s
Oct 02 12:06:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3796480936' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:06:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:06:21 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3559508445' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:06:21 compute-0 nova_compute[257802]: 2025-10-02 12:06:21.369 2 DEBUG oslo_concurrency.processutils [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:06:21 compute-0 nova_compute[257802]: 2025-10-02 12:06:21.370 2 DEBUG nova.virt.libvirt.vif [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:04:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-167670540',display_name='tempest-ServersAdminTestJSON-server-167670540',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-167670540',id=23,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:05:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9afa78cc4dec419babdf61fd31f46e28',ramdisk_id='',reservation_id='r-fpp9adrm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-518249049',owner_user_name='tempest-ServersAdminTestJSON-518
249049-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:06:19Z,user_data=None,user_id='8850add40b254d198f270d9e64c777d5',uuid=66fec68d-e11d-4f9f-9cea-a0358d8f2ae0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "869b835b-1179-4864-abfe-fc542b215555", "address": "fa:16:3e:1d:d7:20", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap869b835b-11", "ovs_interfaceid": "869b835b-1179-4864-abfe-fc542b215555", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:06:21 compute-0 nova_compute[257802]: 2025-10-02 12:06:21.370 2 DEBUG nova.network.os_vif_util [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Converting VIF {"id": "869b835b-1179-4864-abfe-fc542b215555", "address": "fa:16:3e:1d:d7:20", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap869b835b-11", "ovs_interfaceid": "869b835b-1179-4864-abfe-fc542b215555", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:06:21 compute-0 nova_compute[257802]: 2025-10-02 12:06:21.371 2 DEBUG nova.network.os_vif_util [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1d:d7:20,bridge_name='br-int',has_traffic_filtering=True,id=869b835b-1179-4864-abfe-fc542b215555,network=Network(80106802-d877-42c6-b2a9-50b050f6b08f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap869b835b-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:06:21 compute-0 nova_compute[257802]: 2025-10-02 12:06:21.373 2 DEBUG nova.virt.libvirt.driver [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:06:21 compute-0 nova_compute[257802]:   <uuid>66fec68d-e11d-4f9f-9cea-a0358d8f2ae0</uuid>
Oct 02 12:06:21 compute-0 nova_compute[257802]:   <name>instance-00000017</name>
Oct 02 12:06:21 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:06:21 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:06:21 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:06:21 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:       <nova:name>tempest-ServersAdminTestJSON-server-167670540</nova:name>
Oct 02 12:06:21 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:06:20</nova:creationTime>
Oct 02 12:06:21 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:06:21 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:06:21 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:06:21 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:06:21 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:06:21 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:06:21 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:06:21 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:06:21 compute-0 nova_compute[257802]:         <nova:user uuid="8850add40b254d198f270d9e64c777d5">tempest-ServersAdminTestJSON-518249049-project-member</nova:user>
Oct 02 12:06:21 compute-0 nova_compute[257802]:         <nova:project uuid="9afa78cc4dec419babdf61fd31f46e28">tempest-ServersAdminTestJSON-518249049</nova:project>
Oct 02 12:06:21 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:06:21 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:06:21 compute-0 nova_compute[257802]:         <nova:port uuid="869b835b-1179-4864-abfe-fc542b215555">
Oct 02 12:06:21 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:06:21 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:06:21 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:06:21 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <system>
Oct 02 12:06:21 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:06:21 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:06:21 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:06:21 compute-0 nova_compute[257802]:       <entry name="serial">66fec68d-e11d-4f9f-9cea-a0358d8f2ae0</entry>
Oct 02 12:06:21 compute-0 nova_compute[257802]:       <entry name="uuid">66fec68d-e11d-4f9f-9cea-a0358d8f2ae0</entry>
Oct 02 12:06:21 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     </system>
Oct 02 12:06:21 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:06:21 compute-0 nova_compute[257802]:   <os>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:   </os>
Oct 02 12:06:21 compute-0 nova_compute[257802]:   <features>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:   </features>
Oct 02 12:06:21 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:06:21 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:06:21 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:06:21 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk">
Oct 02 12:06:21 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:       </source>
Oct 02 12:06:21 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:06:21 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:06:21 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:06:21 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk.config">
Oct 02 12:06:21 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:       </source>
Oct 02 12:06:21 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:06:21 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:06:21 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:06:21 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:1d:d7:20"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:       <target dev="tap869b835b-11"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:06:21 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0/console.log" append="off"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <video>
Oct 02 12:06:21 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     </video>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:06:21 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:06:21 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:06:21 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:06:21 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:06:21 compute-0 nova_compute[257802]: </domain>
Oct 02 12:06:21 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:06:21 compute-0 nova_compute[257802]: 2025-10-02 12:06:21.374 2 DEBUG nova.virt.libvirt.vif [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:04:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-167670540',display_name='tempest-ServersAdminTestJSON-server-167670540',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-167670540',id=23,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:05:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9afa78cc4dec419babdf61fd31f46e28',ramdisk_id='',reservation_id='r-fpp9adrm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-518249049',owner_user_name='tempest-ServersAdminTestJSON-518249049-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:06:19Z,user_data=None,user_id='8850add40b254d198f270d9e64c777d5',uuid=66fec68d-e11d-4f9f-9cea-a0358d8f2ae0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "869b835b-1179-4864-abfe-fc542b215555", "address": "fa:16:3e:1d:d7:20", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap869b835b-11", "ovs_interfaceid": "869b835b-1179-4864-abfe-fc542b215555", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:06:21 compute-0 nova_compute[257802]: 2025-10-02 12:06:21.374 2 DEBUG nova.network.os_vif_util [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Converting VIF {"id": "869b835b-1179-4864-abfe-fc542b215555", "address": "fa:16:3e:1d:d7:20", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap869b835b-11", "ovs_interfaceid": "869b835b-1179-4864-abfe-fc542b215555", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:06:21 compute-0 nova_compute[257802]: 2025-10-02 12:06:21.375 2 DEBUG nova.network.os_vif_util [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1d:d7:20,bridge_name='br-int',has_traffic_filtering=True,id=869b835b-1179-4864-abfe-fc542b215555,network=Network(80106802-d877-42c6-b2a9-50b050f6b08f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap869b835b-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:06:21 compute-0 nova_compute[257802]: 2025-10-02 12:06:21.375 2 DEBUG os_vif [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1d:d7:20,bridge_name='br-int',has_traffic_filtering=True,id=869b835b-1179-4864-abfe-fc542b215555,network=Network(80106802-d877-42c6-b2a9-50b050f6b08f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap869b835b-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:06:21 compute-0 nova_compute[257802]: 2025-10-02 12:06:21.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:21 compute-0 nova_compute[257802]: 2025-10-02 12:06:21.376 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:06:21 compute-0 nova_compute[257802]: 2025-10-02 12:06:21.376 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:06:21 compute-0 nova_compute[257802]: 2025-10-02 12:06:21.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:21 compute-0 nova_compute[257802]: 2025-10-02 12:06:21.378 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap869b835b-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:06:21 compute-0 nova_compute[257802]: 2025-10-02 12:06:21.379 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap869b835b-11, col_values=(('external_ids', {'iface-id': '869b835b-1179-4864-abfe-fc542b215555', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1d:d7:20', 'vm-uuid': '66fec68d-e11d-4f9f-9cea-a0358d8f2ae0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:06:21 compute-0 nova_compute[257802]: 2025-10-02 12:06:21.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:21 compute-0 NetworkManager[44987]: <info>  [1759406781.3812] manager: (tap869b835b-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Oct 02 12:06:21 compute-0 nova_compute[257802]: 2025-10-02 12:06:21.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:06:21 compute-0 nova_compute[257802]: 2025-10-02 12:06:21.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:21 compute-0 nova_compute[257802]: 2025-10-02 12:06:21.387 2 INFO os_vif [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1d:d7:20,bridge_name='br-int',has_traffic_filtering=True,id=869b835b-1179-4864-abfe-fc542b215555,network=Network(80106802-d877-42c6-b2a9-50b050f6b08f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap869b835b-11')
Oct 02 12:06:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:21.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:21.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:21 compute-0 nova_compute[257802]: 2025-10-02 12:06:21.517 2 DEBUG nova.virt.libvirt.driver [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:06:21 compute-0 nova_compute[257802]: 2025-10-02 12:06:21.519 2 DEBUG nova.virt.libvirt.driver [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:06:21 compute-0 nova_compute[257802]: 2025-10-02 12:06:21.520 2 DEBUG nova.virt.libvirt.driver [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] No VIF found with MAC fa:16:3e:1d:d7:20, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:06:21 compute-0 nova_compute[257802]: 2025-10-02 12:06:21.520 2 INFO nova.virt.libvirt.driver [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Using config drive
Oct 02 12:06:21 compute-0 nova_compute[257802]: 2025-10-02 12:06:21.539 2 DEBUG nova.storage.rbd_utils [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] rbd image 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:06:21 compute-0 nova_compute[257802]: 2025-10-02 12:06:21.577 2 DEBUG nova.objects.instance [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:06:21 compute-0 nova_compute[257802]: 2025-10-02 12:06:21.657 2 DEBUG nova.objects.instance [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lazy-loading 'keypairs' on Instance uuid 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:06:22 compute-0 ceph-mon[73607]: pgmap v1192: 305 pgs: 305 active+clean; 426 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 3.7 MiB/s wr, 377 op/s
Oct 02 12:06:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3559508445' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:06:22 compute-0 nova_compute[257802]: 2025-10-02 12:06:22.573 2 INFO nova.virt.libvirt.driver [None req-9e8719ea-2525-4064-9288-6013db1fdad4 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Deleting instance files /var/lib/nova/instances/81cd8274-bb25-4b6c-aa66-89669fd098d5_del
Oct 02 12:06:22 compute-0 nova_compute[257802]: 2025-10-02 12:06:22.575 2 INFO nova.virt.libvirt.driver [None req-9e8719ea-2525-4064-9288-6013db1fdad4 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Deletion of /var/lib/nova/instances/81cd8274-bb25-4b6c-aa66-89669fd098d5_del complete
Oct 02 12:06:22 compute-0 nova_compute[257802]: 2025-10-02 12:06:22.635 2 INFO nova.compute.manager [None req-9e8719ea-2525-4064-9288-6013db1fdad4 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Took 1.94 seconds to destroy the instance on the hypervisor.
Oct 02 12:06:22 compute-0 nova_compute[257802]: 2025-10-02 12:06:22.636 2 DEBUG oslo.service.loopingcall [None req-9e8719ea-2525-4064-9288-6013db1fdad4 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:06:22 compute-0 nova_compute[257802]: 2025-10-02 12:06:22.636 2 DEBUG nova.compute.manager [-] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:06:22 compute-0 nova_compute[257802]: 2025-10-02 12:06:22.636 2 DEBUG nova.network.neutron [-] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:06:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:06:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Oct 02 12:06:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Oct 02 12:06:22 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Oct 02 12:06:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 421 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.4 MiB/s wr, 345 op/s
Oct 02 12:06:23 compute-0 nova_compute[257802]: 2025-10-02 12:06:23.033 2 INFO nova.virt.libvirt.driver [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Creating config drive at /var/lib/nova/instances/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0/disk.config
Oct 02 12:06:23 compute-0 nova_compute[257802]: 2025-10-02 12:06:23.038 2 DEBUG oslo_concurrency.processutils [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbo5441vd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:06:23 compute-0 nova_compute[257802]: 2025-10-02 12:06:23.172 2 DEBUG oslo_concurrency.processutils [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbo5441vd" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:06:23 compute-0 nova_compute[257802]: 2025-10-02 12:06:23.207 2 DEBUG nova.storage.rbd_utils [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] rbd image 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:06:23 compute-0 nova_compute[257802]: 2025-10-02 12:06:23.211 2 DEBUG oslo_concurrency.processutils [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0/disk.config 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:06:23 compute-0 nova_compute[257802]: 2025-10-02 12:06:23.231 2 DEBUG nova.network.neutron [-] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:06:23 compute-0 nova_compute[257802]: 2025-10-02 12:06:23.249 2 DEBUG nova.network.neutron [-] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:06:23 compute-0 nova_compute[257802]: 2025-10-02 12:06:23.270 2 INFO nova.compute.manager [-] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Took 0.63 seconds to deallocate network for instance.
Oct 02 12:06:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:23.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:23.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:23 compute-0 nova_compute[257802]: 2025-10-02 12:06:23.525 2 DEBUG oslo_concurrency.lockutils [None req-9e8719ea-2525-4064-9288-6013db1fdad4 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:23 compute-0 nova_compute[257802]: 2025-10-02 12:06:23.526 2 DEBUG oslo_concurrency.lockutils [None req-9e8719ea-2525-4064-9288-6013db1fdad4 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:23 compute-0 nova_compute[257802]: 2025-10-02 12:06:23.620 2 DEBUG oslo_concurrency.processutils [None req-9e8719ea-2525-4064-9288-6013db1fdad4 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:06:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:06:24 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2093714299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:24 compute-0 nova_compute[257802]: 2025-10-02 12:06:24.100 2 DEBUG oslo_concurrency.processutils [None req-9e8719ea-2525-4064-9288-6013db1fdad4 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:06:24 compute-0 nova_compute[257802]: 2025-10-02 12:06:24.108 2 DEBUG nova.compute.provider_tree [None req-9e8719ea-2525-4064-9288-6013db1fdad4 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:06:24 compute-0 nova_compute[257802]: 2025-10-02 12:06:24.168 2 DEBUG nova.scheduler.client.report [None req-9e8719ea-2525-4064-9288-6013db1fdad4 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:06:24 compute-0 nova_compute[257802]: 2025-10-02 12:06:24.212 2 DEBUG oslo_concurrency.lockutils [None req-9e8719ea-2525-4064-9288-6013db1fdad4 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:24 compute-0 ceph-mon[73607]: osdmap e161: 3 total, 3 up, 3 in
Oct 02 12:06:24 compute-0 nova_compute[257802]: 2025-10-02 12:06:24.235 2 INFO nova.scheduler.client.report [None req-9e8719ea-2525-4064-9288-6013db1fdad4 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Deleted allocations for instance 81cd8274-bb25-4b6c-aa66-89669fd098d5
Oct 02 12:06:24 compute-0 nova_compute[257802]: 2025-10-02 12:06:24.251 2 DEBUG oslo_concurrency.processutils [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0/disk.config 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:06:24 compute-0 nova_compute[257802]: 2025-10-02 12:06:24.252 2 INFO nova.virt.libvirt.driver [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Deleting local config drive /var/lib/nova/instances/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0/disk.config because it was imported into RBD.
Oct 02 12:06:24 compute-0 kernel: tap869b835b-11: entered promiscuous mode
Oct 02 12:06:24 compute-0 NetworkManager[44987]: <info>  [1759406784.3122] manager: (tap869b835b-11): new Tun device (/org/freedesktop/NetworkManager/Devices/69)
Oct 02 12:06:24 compute-0 nova_compute[257802]: 2025-10-02 12:06:24.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:24 compute-0 ovn_controller[148183]: 2025-10-02T12:06:24Z|00144|binding|INFO|Claiming lport 869b835b-1179-4864-abfe-fc542b215555 for this chassis.
Oct 02 12:06:24 compute-0 ovn_controller[148183]: 2025-10-02T12:06:24Z|00145|binding|INFO|869b835b-1179-4864-abfe-fc542b215555: Claiming fa:16:3e:1d:d7:20 10.100.0.9
Oct 02 12:06:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:24.319 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:d7:20 10.100.0.9'], port_security=['fa:16:3e:1d:d7:20 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '66fec68d-e11d-4f9f-9cea-a0358d8f2ae0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80106802-d877-42c6-b2a9-50b050f6b08f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9afa78cc4dec419babdf61fd31f46e28', 'neutron:revision_number': '7', 'neutron:security_group_ids': '8fbb5420-10f4-405b-bd01-713020f7e518', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=03aa6f10-2374-4fa3-bc90-1fcb8815afb8, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=869b835b-1179-4864-abfe-fc542b215555) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:06:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:24.322 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 869b835b-1179-4864-abfe-fc542b215555 in datapath 80106802-d877-42c6-b2a9-50b050f6b08f bound to our chassis
Oct 02 12:06:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:24.325 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 80106802-d877-42c6-b2a9-50b050f6b08f
Oct 02 12:06:24 compute-0 ovn_controller[148183]: 2025-10-02T12:06:24Z|00146|binding|INFO|Setting lport 869b835b-1179-4864-abfe-fc542b215555 ovn-installed in OVS
Oct 02 12:06:24 compute-0 ovn_controller[148183]: 2025-10-02T12:06:24Z|00147|binding|INFO|Setting lport 869b835b-1179-4864-abfe-fc542b215555 up in Southbound
Oct 02 12:06:24 compute-0 nova_compute[257802]: 2025-10-02 12:06:24.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:24 compute-0 nova_compute[257802]: 2025-10-02 12:06:24.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:24.340 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e144a96c-3524-4877-9d8c-d7e288b55eba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:06:24 compute-0 systemd-udevd[281591]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:06:24 compute-0 NetworkManager[44987]: <info>  [1759406784.3557] device (tap869b835b-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:06:24 compute-0 systemd-machined[211836]: New machine qemu-19-instance-00000017.
Oct 02 12:06:24 compute-0 NetworkManager[44987]: <info>  [1759406784.3567] device (tap869b835b-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:06:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:24.367 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[cef5576d-0cd6-4abe-9bfd-e80595190031]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:06:24 compute-0 systemd[1]: Started Virtual Machine qemu-19-instance-00000017.
Oct 02 12:06:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:24.372 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[38ee178c-f88b-4f17-8a65-9ed9a99854f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:06:24 compute-0 nova_compute[257802]: 2025-10-02 12:06:24.391 2 DEBUG oslo_concurrency.lockutils [None req-9e8719ea-2525-4064-9288-6013db1fdad4 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Lock "81cd8274-bb25-4b6c-aa66-89669fd098d5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.473s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:24.398 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[a151c46b-0371-4731-84d9-6e67a637fc51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:06:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:24.413 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[00e94bd3-6408-40bb-982e-123043a51fad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80106802-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:27:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 1000, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 1000, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474182, 'reachable_time': 44687, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281600, 'error': None, 'target': 'ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:06:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:24.429 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a8d4bf58-0e1e-4af3-b1d4-1cbecb57de73]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap80106802-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474195, 'tstamp': 474195}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281604, 'error': None, 'target': 'ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap80106802-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474198, 'tstamp': 474198}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281604, 'error': None, 'target': 'ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:06:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:24.431 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80106802-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:06:24 compute-0 nova_compute[257802]: 2025-10-02 12:06:24.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:24 compute-0 nova_compute[257802]: 2025-10-02 12:06:24.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:24.434 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap80106802-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:06:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:24.434 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:06:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:24.434 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap80106802-d0, col_values=(('external_ids', {'iface-id': '3e3f512e-f85f-4c9c-b91d-072c570470c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:06:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:24.435 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:06:24 compute-0 nova_compute[257802]: 2025-10-02 12:06:24.781 2 DEBUG nova.compute.manager [req-a00c87f1-be70-4859-a12a-37b2b4e4992b req-c700d45f-1ba8-4ccf-8aea-561d6a8e3d06 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Received event network-vif-plugged-869b835b-1179-4864-abfe-fc542b215555 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:06:24 compute-0 nova_compute[257802]: 2025-10-02 12:06:24.781 2 DEBUG oslo_concurrency.lockutils [req-a00c87f1-be70-4859-a12a-37b2b4e4992b req-c700d45f-1ba8-4ccf-8aea-561d6a8e3d06 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:24 compute-0 nova_compute[257802]: 2025-10-02 12:06:24.781 2 DEBUG oslo_concurrency.lockutils [req-a00c87f1-be70-4859-a12a-37b2b4e4992b req-c700d45f-1ba8-4ccf-8aea-561d6a8e3d06 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:24 compute-0 nova_compute[257802]: 2025-10-02 12:06:24.782 2 DEBUG oslo_concurrency.lockutils [req-a00c87f1-be70-4859-a12a-37b2b4e4992b req-c700d45f-1ba8-4ccf-8aea-561d6a8e3d06 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:24 compute-0 nova_compute[257802]: 2025-10-02 12:06:24.782 2 DEBUG nova.compute.manager [req-a00c87f1-be70-4859-a12a-37b2b4e4992b req-c700d45f-1ba8-4ccf-8aea-561d6a8e3d06 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] No waiting events found dispatching network-vif-plugged-869b835b-1179-4864-abfe-fc542b215555 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:06:24 compute-0 nova_compute[257802]: 2025-10-02 12:06:24.782 2 WARNING nova.compute.manager [req-a00c87f1-be70-4859-a12a-37b2b4e4992b req-c700d45f-1ba8-4ccf-8aea-561d6a8e3d06 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Received unexpected event network-vif-plugged-869b835b-1179-4864-abfe-fc542b215555 for instance with vm_state active and task_state rebuild_spawning.
Oct 02 12:06:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 396 MiB data, 590 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.3 MiB/s wr, 277 op/s
Oct 02 12:06:25 compute-0 ceph-mon[73607]: pgmap v1194: 305 pgs: 305 active+clean; 421 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.4 MiB/s wr, 345 op/s
Oct 02 12:06:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2093714299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:25.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:25.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:25 compute-0 nova_compute[257802]: 2025-10-02 12:06:25.594 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Removed pending event for 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:06:25 compute-0 nova_compute[257802]: 2025-10-02 12:06:25.595 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406785.5939982, 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:06:25 compute-0 nova_compute[257802]: 2025-10-02 12:06:25.595 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] VM Resumed (Lifecycle Event)
Oct 02 12:06:25 compute-0 nova_compute[257802]: 2025-10-02 12:06:25.597 2 DEBUG nova.compute.manager [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:06:25 compute-0 nova_compute[257802]: 2025-10-02 12:06:25.598 2 DEBUG nova.virt.libvirt.driver [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:06:25 compute-0 nova_compute[257802]: 2025-10-02 12:06:25.601 2 INFO nova.virt.libvirt.driver [-] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Instance spawned successfully.
Oct 02 12:06:25 compute-0 nova_compute[257802]: 2025-10-02 12:06:25.601 2 DEBUG nova.virt.libvirt.driver [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:06:25 compute-0 nova_compute[257802]: 2025-10-02 12:06:25.669 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:06:25 compute-0 nova_compute[257802]: 2025-10-02 12:06:25.677 2 DEBUG nova.virt.libvirt.driver [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:06:25 compute-0 nova_compute[257802]: 2025-10-02 12:06:25.677 2 DEBUG nova.virt.libvirt.driver [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:06:25 compute-0 nova_compute[257802]: 2025-10-02 12:06:25.678 2 DEBUG nova.virt.libvirt.driver [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:06:25 compute-0 nova_compute[257802]: 2025-10-02 12:06:25.678 2 DEBUG nova.virt.libvirt.driver [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:06:25 compute-0 nova_compute[257802]: 2025-10-02 12:06:25.679 2 DEBUG nova.virt.libvirt.driver [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:06:25 compute-0 nova_compute[257802]: 2025-10-02 12:06:25.679 2 DEBUG nova.virt.libvirt.driver [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:06:25 compute-0 nova_compute[257802]: 2025-10-02 12:06:25.683 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:06:25 compute-0 nova_compute[257802]: 2025-10-02 12:06:25.722 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 12:06:25 compute-0 nova_compute[257802]: 2025-10-02 12:06:25.722 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406785.5973647, 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:06:25 compute-0 nova_compute[257802]: 2025-10-02 12:06:25.723 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] VM Started (Lifecycle Event)
Oct 02 12:06:25 compute-0 nova_compute[257802]: 2025-10-02 12:06:25.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:25 compute-0 nova_compute[257802]: 2025-10-02 12:06:25.762 2 DEBUG nova.compute.manager [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:06:25 compute-0 nova_compute[257802]: 2025-10-02 12:06:25.764 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:06:25 compute-0 nova_compute[257802]: 2025-10-02 12:06:25.771 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:06:25 compute-0 nova_compute[257802]: 2025-10-02 12:06:25.803 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 12:06:25 compute-0 nova_compute[257802]: 2025-10-02 12:06:25.832 2 DEBUG oslo_concurrency.lockutils [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:25 compute-0 nova_compute[257802]: 2025-10-02 12:06:25.833 2 DEBUG oslo_concurrency.lockutils [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:25 compute-0 nova_compute[257802]: 2025-10-02 12:06:25.833 2 DEBUG nova.objects.instance [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:06:25 compute-0 nova_compute[257802]: 2025-10-02 12:06:25.893 2 DEBUG oslo_concurrency.lockutils [None req-adc48646-4775-4e28-a4f8-227fa763cb43 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:26 compute-0 nova_compute[257802]: 2025-10-02 12:06:26.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:26 compute-0 ceph-mon[73607]: pgmap v1195: 305 pgs: 305 active+clean; 396 MiB data, 590 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.3 MiB/s wr, 277 op/s
Oct 02 12:06:26 compute-0 nova_compute[257802]: 2025-10-02 12:06:26.891 2 DEBUG nova.compute.manager [req-54459c7e-e36e-4f53-84da-49ff1dc0ec6a req-18f8b008-6864-4790-8c7f-bb8470d78f03 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Received event network-vif-plugged-869b835b-1179-4864-abfe-fc542b215555 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:06:26 compute-0 nova_compute[257802]: 2025-10-02 12:06:26.893 2 DEBUG oslo_concurrency.lockutils [req-54459c7e-e36e-4f53-84da-49ff1dc0ec6a req-18f8b008-6864-4790-8c7f-bb8470d78f03 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:26 compute-0 nova_compute[257802]: 2025-10-02 12:06:26.893 2 DEBUG oslo_concurrency.lockutils [req-54459c7e-e36e-4f53-84da-49ff1dc0ec6a req-18f8b008-6864-4790-8c7f-bb8470d78f03 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:26 compute-0 nova_compute[257802]: 2025-10-02 12:06:26.893 2 DEBUG oslo_concurrency.lockutils [req-54459c7e-e36e-4f53-84da-49ff1dc0ec6a req-18f8b008-6864-4790-8c7f-bb8470d78f03 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:26 compute-0 nova_compute[257802]: 2025-10-02 12:06:26.894 2 DEBUG nova.compute.manager [req-54459c7e-e36e-4f53-84da-49ff1dc0ec6a req-18f8b008-6864-4790-8c7f-bb8470d78f03 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] No waiting events found dispatching network-vif-plugged-869b835b-1179-4864-abfe-fc542b215555 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:06:26 compute-0 nova_compute[257802]: 2025-10-02 12:06:26.894 2 WARNING nova.compute.manager [req-54459c7e-e36e-4f53-84da-49ff1dc0ec6a req-18f8b008-6864-4790-8c7f-bb8470d78f03 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Received unexpected event network-vif-plugged-869b835b-1179-4864-abfe-fc542b215555 for instance with vm_state active and task_state None.
Oct 02 12:06:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:26.924 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:26.925 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:26.925 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 372 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.1 MiB/s wr, 278 op/s
Oct 02 12:06:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:27.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:27.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:06:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 396 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.0 MiB/s wr, 257 op/s
Oct 02 12:06:29 compute-0 ceph-mon[73607]: pgmap v1196: 305 pgs: 305 active+clean; 372 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.1 MiB/s wr, 278 op/s
Oct 02 12:06:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:29.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:29.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:06:30 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3150860967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:30 compute-0 ceph-mon[73607]: pgmap v1197: 305 pgs: 305 active+clean; 396 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.0 MiB/s wr, 257 op/s
Oct 02 12:06:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3150860967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:30 compute-0 nova_compute[257802]: 2025-10-02 12:06:30.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 405 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.1 MiB/s wr, 226 op/s
Oct 02 12:06:31 compute-0 nova_compute[257802]: 2025-10-02 12:06:31.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:31.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:31.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:31 compute-0 sudo[281653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:31 compute-0 sudo[281653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:31 compute-0 sudo[281653]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:31 compute-0 sudo[281678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:31 compute-0 sudo[281678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:31 compute-0 sudo[281678]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:32 compute-0 nova_compute[257802]: 2025-10-02 12:06:32.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:06:32 compute-0 ceph-mon[73607]: pgmap v1198: 305 pgs: 305 active+clean; 405 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.1 MiB/s wr, 226 op/s
Oct 02 12:06:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:06:32 compute-0 sudo[281704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:32 compute-0 sudo[281704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:32 compute-0 sudo[281704]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:32 compute-0 sudo[281729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:06:32 compute-0 sudo[281729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:32 compute-0 sudo[281729]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:32 compute-0 sudo[281754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:32 compute-0 sudo[281754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:32 compute-0 sudo[281754]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 404 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.2 MiB/s wr, 220 op/s
Oct 02 12:06:33 compute-0 sudo[281780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:06:33 compute-0 sudo[281780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:33 compute-0 podman[281778]: 2025-10-02 12:06:33.096745678 +0000 UTC m=+0.112968659 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 02 12:06:33 compute-0 nova_compute[257802]: 2025-10-02 12:06:33.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:06:33 compute-0 nova_compute[257802]: 2025-10-02 12:06:33.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:06:33 compute-0 sudo[281780]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:33.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:33.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:06:33 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:06:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:06:33 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:06:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:06:33 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:06:33 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 7aca989d-31c2-48b2-96c2-5477a145f59d does not exist
Oct 02 12:06:33 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev a8d24e8f-c1a5-4f54-83cd-958fa256bec9 does not exist
Oct 02 12:06:33 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 6a8abfed-ccc5-4bde-9e5a-bc8a8e291ea5 does not exist
Oct 02 12:06:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:06:33 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:06:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:06:33 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:06:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:06:33 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:06:33 compute-0 sudo[281854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:33 compute-0 sudo[281854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:33 compute-0 sudo[281854]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:33 compute-0 sudo[281879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:06:33 compute-0 sudo[281879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:33 compute-0 sudo[281879]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:33 compute-0 sudo[281904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:33 compute-0 sudo[281904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:33 compute-0 sudo[281904]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:34 compute-0 sudo[281929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:06:34 compute-0 sudo[281929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:34 compute-0 podman[281998]: 2025-10-02 12:06:34.354109635 +0000 UTC m=+0.046205658 container create 6775584569c706c31dc7f6c6422719fa3f8884a9419f21d56a86d5b5ae9635aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lalande, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 12:06:34 compute-0 systemd[1]: Started libpod-conmon-6775584569c706c31dc7f6c6422719fa3f8884a9419f21d56a86d5b5ae9635aa.scope.
Oct 02 12:06:34 compute-0 podman[281998]: 2025-10-02 12:06:34.329869718 +0000 UTC m=+0.021965761 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:06:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:06:34 compute-0 podman[281998]: 2025-10-02 12:06:34.449162723 +0000 UTC m=+0.141258766 container init 6775584569c706c31dc7f6c6422719fa3f8884a9419f21d56a86d5b5ae9635aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lalande, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:06:34 compute-0 podman[281998]: 2025-10-02 12:06:34.456592757 +0000 UTC m=+0.148688780 container start 6775584569c706c31dc7f6c6422719fa3f8884a9419f21d56a86d5b5ae9635aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 12:06:34 compute-0 practical_lalande[282016]: 167 167
Oct 02 12:06:34 compute-0 conmon[282016]: conmon 6775584569c706c31dc7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6775584569c706c31dc7f6c6422719fa3f8884a9419f21d56a86d5b5ae9635aa.scope/container/memory.events
Oct 02 12:06:34 compute-0 systemd[1]: libpod-6775584569c706c31dc7f6c6422719fa3f8884a9419f21d56a86d5b5ae9635aa.scope: Deactivated successfully.
Oct 02 12:06:34 compute-0 podman[281998]: 2025-10-02 12:06:34.46446871 +0000 UTC m=+0.156564763 container attach 6775584569c706c31dc7f6c6422719fa3f8884a9419f21d56a86d5b5ae9635aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lalande, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 12:06:34 compute-0 podman[281998]: 2025-10-02 12:06:34.46490697 +0000 UTC m=+0.157002993 container died 6775584569c706c31dc7f6c6422719fa3f8884a9419f21d56a86d5b5ae9635aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lalande, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:06:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-b15981b47ec580ae7e1553555e345dd1a86664e7474ffaa4f2bbd8a1111591d6-merged.mount: Deactivated successfully.
Oct 02 12:06:34 compute-0 podman[281998]: 2025-10-02 12:06:34.538651785 +0000 UTC m=+0.230747808 container remove 6775584569c706c31dc7f6c6422719fa3f8884a9419f21d56a86d5b5ae9635aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 12:06:34 compute-0 systemd[1]: libpod-conmon-6775584569c706c31dc7f6c6422719fa3f8884a9419f21d56a86d5b5ae9635aa.scope: Deactivated successfully.
Oct 02 12:06:34 compute-0 ceph-mon[73607]: pgmap v1199: 305 pgs: 305 active+clean; 404 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.2 MiB/s wr, 220 op/s
Oct 02 12:06:34 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:06:34 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:06:34 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:06:34 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:06:34 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:06:34 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:06:34 compute-0 podman[282042]: 2025-10-02 12:06:34.693335762 +0000 UTC m=+0.037877873 container create 2dff7630a2f66e4bfdef3e0c3a9a5665554c014579373bf92d689dedbf7698d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 12:06:34 compute-0 systemd[1]: Started libpod-conmon-2dff7630a2f66e4bfdef3e0c3a9a5665554c014579373bf92d689dedbf7698d4.scope.
Oct 02 12:06:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:06:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d987c36406ee49dba627e972e038c57222c07dce915666f53efc527a9a1e313/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:06:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d987c36406ee49dba627e972e038c57222c07dce915666f53efc527a9a1e313/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:06:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d987c36406ee49dba627e972e038c57222c07dce915666f53efc527a9a1e313/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:06:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d987c36406ee49dba627e972e038c57222c07dce915666f53efc527a9a1e313/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:06:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d987c36406ee49dba627e972e038c57222c07dce915666f53efc527a9a1e313/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:06:34 compute-0 podman[282042]: 2025-10-02 12:06:34.677410569 +0000 UTC m=+0.021952680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:06:34 compute-0 podman[282042]: 2025-10-02 12:06:34.79489044 +0000 UTC m=+0.139432571 container init 2dff7630a2f66e4bfdef3e0c3a9a5665554c014579373bf92d689dedbf7698d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:06:34 compute-0 podman[282042]: 2025-10-02 12:06:34.802095397 +0000 UTC m=+0.146637508 container start 2dff7630a2f66e4bfdef3e0c3a9a5665554c014579373bf92d689dedbf7698d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_turing, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 12:06:34 compute-0 podman[282042]: 2025-10-02 12:06:34.809756236 +0000 UTC m=+0.154298367 container attach 2dff7630a2f66e4bfdef3e0c3a9a5665554c014579373bf92d689dedbf7698d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 12:06:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 368 MiB data, 583 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.0 MiB/s wr, 205 op/s
Oct 02 12:06:35 compute-0 nova_compute[257802]: 2025-10-02 12:06:35.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:06:35 compute-0 nova_compute[257802]: 2025-10-02 12:06:35.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:06:35 compute-0 nova_compute[257802]: 2025-10-02 12:06:35.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:06:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:35.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:35.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:35 compute-0 affectionate_turing[282059]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:06:35 compute-0 affectionate_turing[282059]: --> relative data size: 1.0
Oct 02 12:06:35 compute-0 affectionate_turing[282059]: --> All data devices are unavailable
Oct 02 12:06:35 compute-0 systemd[1]: libpod-2dff7630a2f66e4bfdef3e0c3a9a5665554c014579373bf92d689dedbf7698d4.scope: Deactivated successfully.
Oct 02 12:06:35 compute-0 conmon[282059]: conmon 2dff7630a2f66e4bfdef <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2dff7630a2f66e4bfdef3e0c3a9a5665554c014579373bf92d689dedbf7698d4.scope/container/memory.events
Oct 02 12:06:35 compute-0 podman[282042]: 2025-10-02 12:06:35.569341264 +0000 UTC m=+0.913883375 container died 2dff7630a2f66e4bfdef3e0c3a9a5665554c014579373bf92d689dedbf7698d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:06:35 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3093107111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d987c36406ee49dba627e972e038c57222c07dce915666f53efc527a9a1e313-merged.mount: Deactivated successfully.
Oct 02 12:06:35 compute-0 podman[282042]: 2025-10-02 12:06:35.712347722 +0000 UTC m=+1.056889833 container remove 2dff7630a2f66e4bfdef3e0c3a9a5665554c014579373bf92d689dedbf7698d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_turing, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:06:35 compute-0 systemd[1]: libpod-conmon-2dff7630a2f66e4bfdef3e0c3a9a5665554c014579373bf92d689dedbf7698d4.scope: Deactivated successfully.
Oct 02 12:06:35 compute-0 sudo[281929]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:35 compute-0 nova_compute[257802]: 2025-10-02 12:06:35.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:35 compute-0 sudo[282087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:35 compute-0 sudo[282087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:35 compute-0 sudo[282087]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:35 compute-0 sudo[282112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:06:35 compute-0 sudo[282112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:35 compute-0 sudo[282112]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:35 compute-0 nova_compute[257802]: 2025-10-02 12:06:35.919 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406780.9170885, 81cd8274-bb25-4b6c-aa66-89669fd098d5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:06:35 compute-0 nova_compute[257802]: 2025-10-02 12:06:35.919 2 INFO nova.compute.manager [-] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] VM Stopped (Lifecycle Event)
Oct 02 12:06:35 compute-0 nova_compute[257802]: 2025-10-02 12:06:35.955 2 DEBUG nova.compute.manager [None req-93330c7e-4bfd-4acf-98fd-1cc2834cc222 - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:06:35 compute-0 sudo[282137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:35 compute-0 sudo[282137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:35 compute-0 sudo[282137]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:36 compute-0 sudo[282174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:06:36 compute-0 sudo[282174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:36 compute-0 podman[282162]: 2025-10-02 12:06:36.065951853 +0000 UTC m=+0.078052692 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:06:36 compute-0 podman[282161]: 2025-10-02 12:06:36.088098398 +0000 UTC m=+0.102375010 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 02 12:06:36 compute-0 podman[282266]: 2025-10-02 12:06:36.406325707 +0000 UTC m=+0.045061099 container create d1eea2140c39c94a548b5e1b1179260db0d46469c0783dc9a7ce76d43d3d5075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_antonelli, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:06:36 compute-0 nova_compute[257802]: 2025-10-02 12:06:36.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:36 compute-0 systemd[1]: Started libpod-conmon-d1eea2140c39c94a548b5e1b1179260db0d46469c0783dc9a7ce76d43d3d5075.scope.
Oct 02 12:06:36 compute-0 podman[282266]: 2025-10-02 12:06:36.384815058 +0000 UTC m=+0.023550480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:06:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:06:36 compute-0 podman[282266]: 2025-10-02 12:06:36.50803594 +0000 UTC m=+0.146771362 container init d1eea2140c39c94a548b5e1b1179260db0d46469c0783dc9a7ce76d43d3d5075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_antonelli, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct 02 12:06:36 compute-0 podman[282266]: 2025-10-02 12:06:36.518085947 +0000 UTC m=+0.156821339 container start d1eea2140c39c94a548b5e1b1179260db0d46469c0783dc9a7ce76d43d3d5075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:06:36 compute-0 gracious_antonelli[282283]: 167 167
Oct 02 12:06:36 compute-0 systemd[1]: libpod-d1eea2140c39c94a548b5e1b1179260db0d46469c0783dc9a7ce76d43d3d5075.scope: Deactivated successfully.
Oct 02 12:06:36 compute-0 podman[282266]: 2025-10-02 12:06:36.539601356 +0000 UTC m=+0.178336768 container attach d1eea2140c39c94a548b5e1b1179260db0d46469c0783dc9a7ce76d43d3d5075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:06:36 compute-0 podman[282266]: 2025-10-02 12:06:36.540637542 +0000 UTC m=+0.179372934 container died d1eea2140c39c94a548b5e1b1179260db0d46469c0783dc9a7ce76d43d3d5075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 12:06:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fabd93ff8ab6839efe70acace1f56bb219c74ca6171eac0464ac4fd62110daa-merged.mount: Deactivated successfully.
Oct 02 12:06:36 compute-0 podman[282266]: 2025-10-02 12:06:36.610565532 +0000 UTC m=+0.249300924 container remove d1eea2140c39c94a548b5e1b1179260db0d46469c0783dc9a7ce76d43d3d5075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_antonelli, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:06:36 compute-0 systemd[1]: libpod-conmon-d1eea2140c39c94a548b5e1b1179260db0d46469c0783dc9a7ce76d43d3d5075.scope: Deactivated successfully.
Oct 02 12:06:36 compute-0 ceph-mon[73607]: pgmap v1200: 305 pgs: 305 active+clean; 368 MiB data, 583 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.0 MiB/s wr, 205 op/s
Oct 02 12:06:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/511653184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1736012696' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:06:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/182866723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4273803553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:36 compute-0 podman[282309]: 2025-10-02 12:06:36.795890842 +0000 UTC m=+0.046547286 container create 2cea6b2bcad4468bac7c97d359a097c0a68137178e5b6b2a28fc800745d4662b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 12:06:36 compute-0 systemd[1]: Started libpod-conmon-2cea6b2bcad4468bac7c97d359a097c0a68137178e5b6b2a28fc800745d4662b.scope.
Oct 02 12:06:36 compute-0 podman[282309]: 2025-10-02 12:06:36.775409788 +0000 UTC m=+0.026066262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:06:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:06:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d02dff2622092555a47cb9bb31dd8f421e9b526ec2f896d848fd681eb3c4f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:06:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d02dff2622092555a47cb9bb31dd8f421e9b526ec2f896d848fd681eb3c4f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:06:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d02dff2622092555a47cb9bb31dd8f421e9b526ec2f896d848fd681eb3c4f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:06:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d02dff2622092555a47cb9bb31dd8f421e9b526ec2f896d848fd681eb3c4f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:06:36 compute-0 podman[282309]: 2025-10-02 12:06:36.889509005 +0000 UTC m=+0.140165489 container init 2cea6b2bcad4468bac7c97d359a097c0a68137178e5b6b2a28fc800745d4662b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chebyshev, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:06:36 compute-0 podman[282309]: 2025-10-02 12:06:36.898174308 +0000 UTC m=+0.148830762 container start 2cea6b2bcad4468bac7c97d359a097c0a68137178e5b6b2a28fc800745d4662b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chebyshev, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:06:36 compute-0 podman[282309]: 2025-10-02 12:06:36.90432607 +0000 UTC m=+0.154982564 container attach 2cea6b2bcad4468bac7c97d359a097c0a68137178e5b6b2a28fc800745d4662b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 12:06:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 344 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 230 op/s
Oct 02 12:06:37 compute-0 nova_compute[257802]: 2025-10-02 12:06:37.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:06:37 compute-0 nova_compute[257802]: 2025-10-02 12:06:37.100 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:06:37 compute-0 nova_compute[257802]: 2025-10-02 12:06:37.301 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-c2b299fa-a4b6-461f-84d6-790aa118102d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:06:37 compute-0 nova_compute[257802]: 2025-10-02 12:06:37.302 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-c2b299fa-a4b6-461f-84d6-790aa118102d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:06:37 compute-0 nova_compute[257802]: 2025-10-02 12:06:37.302 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:06:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:37.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:06:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:37.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:06:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2033195360' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:06:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/151764086' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:06:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4004864019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/106709009' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]: {
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]:     "1": [
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]:         {
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]:             "devices": [
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]:                 "/dev/loop3"
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]:             ],
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]:             "lv_name": "ceph_lv0",
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]:             "lv_size": "7511998464",
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]:             "name": "ceph_lv0",
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]:             "tags": {
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]:                 "ceph.cluster_name": "ceph",
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]:                 "ceph.crush_device_class": "",
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]:                 "ceph.encrypted": "0",
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]:                 "ceph.osd_id": "1",
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]:                 "ceph.type": "block",
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]:                 "ceph.vdo": "0"
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]:             },
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]:             "type": "block",
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]:             "vg_name": "ceph_vg0"
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]:         }
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]:     ]
Oct 02 12:06:37 compute-0 dreamy_chebyshev[282326]: }
Oct 02 12:06:37 compute-0 systemd[1]: libpod-2cea6b2bcad4468bac7c97d359a097c0a68137178e5b6b2a28fc800745d4662b.scope: Deactivated successfully.
Oct 02 12:06:37 compute-0 podman[282309]: 2025-10-02 12:06:37.758471236 +0000 UTC m=+1.009127690 container died 2cea6b2bcad4468bac7c97d359a097c0a68137178e5b6b2a28fc800745d4662b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chebyshev, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:06:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-64d02dff2622092555a47cb9bb31dd8f421e9b526ec2f896d848fd681eb3c4f9-merged.mount: Deactivated successfully.
Oct 02 12:06:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:06:37 compute-0 podman[282309]: 2025-10-02 12:06:37.834778072 +0000 UTC m=+1.085434526 container remove 2cea6b2bcad4468bac7c97d359a097c0a68137178e5b6b2a28fc800745d4662b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chebyshev, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 12:06:37 compute-0 systemd[1]: libpod-conmon-2cea6b2bcad4468bac7c97d359a097c0a68137178e5b6b2a28fc800745d4662b.scope: Deactivated successfully.
Oct 02 12:06:37 compute-0 nova_compute[257802]: 2025-10-02 12:06:37.859 2 DEBUG oslo_concurrency.lockutils [None req-9f91cdf6-0c6c-44db-b27f-1eca96ad4c7a 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Acquiring lock "c2b299fa-a4b6-461f-84d6-790aa118102d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:37 compute-0 nova_compute[257802]: 2025-10-02 12:06:37.860 2 DEBUG oslo_concurrency.lockutils [None req-9f91cdf6-0c6c-44db-b27f-1eca96ad4c7a 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "c2b299fa-a4b6-461f-84d6-790aa118102d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:37 compute-0 nova_compute[257802]: 2025-10-02 12:06:37.861 2 DEBUG oslo_concurrency.lockutils [None req-9f91cdf6-0c6c-44db-b27f-1eca96ad4c7a 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Acquiring lock "c2b299fa-a4b6-461f-84d6-790aa118102d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:37 compute-0 nova_compute[257802]: 2025-10-02 12:06:37.861 2 DEBUG oslo_concurrency.lockutils [None req-9f91cdf6-0c6c-44db-b27f-1eca96ad4c7a 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "c2b299fa-a4b6-461f-84d6-790aa118102d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:37 compute-0 nova_compute[257802]: 2025-10-02 12:06:37.861 2 DEBUG oslo_concurrency.lockutils [None req-9f91cdf6-0c6c-44db-b27f-1eca96ad4c7a 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "c2b299fa-a4b6-461f-84d6-790aa118102d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:37 compute-0 nova_compute[257802]: 2025-10-02 12:06:37.862 2 INFO nova.compute.manager [None req-9f91cdf6-0c6c-44db-b27f-1eca96ad4c7a 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Terminating instance
Oct 02 12:06:37 compute-0 sudo[282174]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:37 compute-0 nova_compute[257802]: 2025-10-02 12:06:37.863 2 DEBUG nova.compute.manager [None req-9f91cdf6-0c6c-44db-b27f-1eca96ad4c7a 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:06:37 compute-0 sudo[282348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:37 compute-0 sudo[282348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:37 compute-0 sudo[282348]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:37 compute-0 kernel: tap759e5032-b0 (unregistering): left promiscuous mode
Oct 02 12:06:37 compute-0 NetworkManager[44987]: <info>  [1759406797.9523] device (tap759e5032-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:06:37 compute-0 nova_compute[257802]: 2025-10-02 12:06:37.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:37 compute-0 ovn_controller[148183]: 2025-10-02T12:06:37Z|00148|binding|INFO|Releasing lport 759e5032-b035-44b1-9e49-b89ab3323ceb from this chassis (sb_readonly=0)
Oct 02 12:06:37 compute-0 ovn_controller[148183]: 2025-10-02T12:06:37Z|00149|binding|INFO|Setting lport 759e5032-b035-44b1-9e49-b89ab3323ceb down in Southbound
Oct 02 12:06:37 compute-0 ovn_controller[148183]: 2025-10-02T12:06:37Z|00150|binding|INFO|Removing iface tap759e5032-b0 ovn-installed in OVS
Oct 02 12:06:37 compute-0 nova_compute[257802]: 2025-10-02 12:06:37.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:37.971 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:79:89:f8 10.100.0.4'], port_security=['fa:16:3e:79:89:f8 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c2b299fa-a4b6-461f-84d6-790aa118102d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80106802-d877-42c6-b2a9-50b050f6b08f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9afa78cc4dec419babdf61fd31f46e28', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8fbb5420-10f4-405b-bd01-713020f7e518', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=03aa6f10-2374-4fa3-bc90-1fcb8815afb8, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=759e5032-b035-44b1-9e49-b89ab3323ceb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:06:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:37.975 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 759e5032-b035-44b1-9e49-b89ab3323ceb in datapath 80106802-d877-42c6-b2a9-50b050f6b08f unbound from our chassis
Oct 02 12:06:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:37.977 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 80106802-d877-42c6-b2a9-50b050f6b08f
Oct 02 12:06:37 compute-0 nova_compute[257802]: 2025-10-02 12:06:37.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:37.995 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[44e0ad04-4aae-414c-afed-eb94f0f10b0c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:06:38 compute-0 sudo[282375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:06:38 compute-0 sudo[282375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:38 compute-0 sudo[282375]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:38 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Oct 02 12:06:38 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000001b.scope: Consumed 17.387s CPU time.
Oct 02 12:06:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:38.028 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[9657b7df-2764-4b5d-92fc-c970afa08823]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:06:38 compute-0 systemd-machined[211836]: Machine qemu-16-instance-0000001b terminated.
Oct 02 12:06:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:38.033 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[8d54cfa8-0ae9-43fc-9bd7-ea3684f10809]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:06:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:38.061 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[f746b670-e11f-4d0e-a7d3-70107b8e1f62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:06:38 compute-0 sudo[282407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:38 compute-0 sudo[282407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:38 compute-0 sudo[282407]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:38.084 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[da24aea6-077e-4ded-9af9-ac3702f1d841]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80106802-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:27:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 1000, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 1000, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474182, 'reachable_time': 44687, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282432, 'error': None, 'target': 'ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:06:38 compute-0 nova_compute[257802]: 2025-10-02 12:06:38.103 2 INFO nova.virt.libvirt.driver [-] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Instance destroyed successfully.
Oct 02 12:06:38 compute-0 nova_compute[257802]: 2025-10-02 12:06:38.103 2 DEBUG nova.objects.instance [None req-9f91cdf6-0c6c-44db-b27f-1eca96ad4c7a 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lazy-loading 'resources' on Instance uuid c2b299fa-a4b6-461f-84d6-790aa118102d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:06:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:38.105 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8c402af5-a847-442b-8127-a9d9106d6487]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap80106802-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474195, 'tstamp': 474195}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282440, 'error': None, 'target': 'ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap80106802-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474198, 'tstamp': 474198}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282440, 'error': None, 'target': 'ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:06:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:38.107 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80106802-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:06:38 compute-0 nova_compute[257802]: 2025-10-02 12:06:38.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:38 compute-0 nova_compute[257802]: 2025-10-02 12:06:38.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:38.115 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap80106802-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:06:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:38.115 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:06:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:38.115 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap80106802-d0, col_values=(('external_ids', {'iface-id': '3e3f512e-f85f-4c9c-b91d-072c570470c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:06:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:38.116 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:06:38 compute-0 nova_compute[257802]: 2025-10-02 12:06:38.126 2 DEBUG nova.virt.libvirt.vif [None req-9f91cdf6-0c6c-44db-b27f-1eca96ad4c7a 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:04:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-194319742',display_name='tempest-ServersAdminTestJSON-server-194319742',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-194319742',id=27,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:04:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9afa78cc4dec419babdf61fd31f46e28',ramdisk_id='',reservation_id='r-pxmd9jcl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-518249049',owner_user_name='tempest-ServersAdminTestJSON-518249049-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:04:57Z,user_data=None,user_id='8850add40b254d198f270d9e64c777d5',uuid=c2b299fa-a4b6-461f-84d6-790aa118102d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "759e5032-b035-44b1-9e49-b89ab3323ceb", "address": "fa:16:3e:79:89:f8", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap759e5032-b0", "ovs_interfaceid": "759e5032-b035-44b1-9e49-b89ab3323ceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:06:38 compute-0 nova_compute[257802]: 2025-10-02 12:06:38.127 2 DEBUG nova.network.os_vif_util [None req-9f91cdf6-0c6c-44db-b27f-1eca96ad4c7a 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Converting VIF {"id": "759e5032-b035-44b1-9e49-b89ab3323ceb", "address": "fa:16:3e:79:89:f8", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap759e5032-b0", "ovs_interfaceid": "759e5032-b035-44b1-9e49-b89ab3323ceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:06:38 compute-0 nova_compute[257802]: 2025-10-02 12:06:38.128 2 DEBUG nova.network.os_vif_util [None req-9f91cdf6-0c6c-44db-b27f-1eca96ad4c7a 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:89:f8,bridge_name='br-int',has_traffic_filtering=True,id=759e5032-b035-44b1-9e49-b89ab3323ceb,network=Network(80106802-d877-42c6-b2a9-50b050f6b08f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap759e5032-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:06:38 compute-0 nova_compute[257802]: 2025-10-02 12:06:38.128 2 DEBUG os_vif [None req-9f91cdf6-0c6c-44db-b27f-1eca96ad4c7a 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:89:f8,bridge_name='br-int',has_traffic_filtering=True,id=759e5032-b035-44b1-9e49-b89ab3323ceb,network=Network(80106802-d877-42c6-b2a9-50b050f6b08f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap759e5032-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:06:38 compute-0 nova_compute[257802]: 2025-10-02 12:06:38.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:38 compute-0 nova_compute[257802]: 2025-10-02 12:06:38.130 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap759e5032-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:06:38 compute-0 nova_compute[257802]: 2025-10-02 12:06:38.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:38 compute-0 nova_compute[257802]: 2025-10-02 12:06:38.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:06:38 compute-0 nova_compute[257802]: 2025-10-02 12:06:38.135 2 INFO os_vif [None req-9f91cdf6-0c6c-44db-b27f-1eca96ad4c7a 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:89:f8,bridge_name='br-int',has_traffic_filtering=True,id=759e5032-b035-44b1-9e49-b89ab3323ceb,network=Network(80106802-d877-42c6-b2a9-50b050f6b08f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap759e5032-b0')
Oct 02 12:06:38 compute-0 sudo[282441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:06:38 compute-0 sudo[282441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:38 compute-0 nova_compute[257802]: 2025-10-02 12:06:38.199 2 DEBUG nova.compute.manager [req-4be6252a-f690-446f-af42-5650d983ec7e req-0d2c501b-5d6a-4f9f-b94f-bf78b1c64537 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Received event network-vif-unplugged-759e5032-b035-44b1-9e49-b89ab3323ceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:06:38 compute-0 nova_compute[257802]: 2025-10-02 12:06:38.199 2 DEBUG oslo_concurrency.lockutils [req-4be6252a-f690-446f-af42-5650d983ec7e req-0d2c501b-5d6a-4f9f-b94f-bf78b1c64537 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "c2b299fa-a4b6-461f-84d6-790aa118102d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:38 compute-0 nova_compute[257802]: 2025-10-02 12:06:38.200 2 DEBUG oslo_concurrency.lockutils [req-4be6252a-f690-446f-af42-5650d983ec7e req-0d2c501b-5d6a-4f9f-b94f-bf78b1c64537 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c2b299fa-a4b6-461f-84d6-790aa118102d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:38 compute-0 nova_compute[257802]: 2025-10-02 12:06:38.200 2 DEBUG oslo_concurrency.lockutils [req-4be6252a-f690-446f-af42-5650d983ec7e req-0d2c501b-5d6a-4f9f-b94f-bf78b1c64537 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c2b299fa-a4b6-461f-84d6-790aa118102d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:38 compute-0 nova_compute[257802]: 2025-10-02 12:06:38.200 2 DEBUG nova.compute.manager [req-4be6252a-f690-446f-af42-5650d983ec7e req-0d2c501b-5d6a-4f9f-b94f-bf78b1c64537 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] No waiting events found dispatching network-vif-unplugged-759e5032-b035-44b1-9e49-b89ab3323ceb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:06:38 compute-0 nova_compute[257802]: 2025-10-02 12:06:38.200 2 DEBUG nova.compute.manager [req-4be6252a-f690-446f-af42-5650d983ec7e req-0d2c501b-5d6a-4f9f-b94f-bf78b1c64537 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Received event network-vif-unplugged-759e5032-b035-44b1-9e49-b89ab3323ceb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:06:38 compute-0 podman[282537]: 2025-10-02 12:06:38.532621743 +0000 UTC m=+0.056627865 container create 258a9865590a4faf34520c5f3277001ac8b0f796eb7b29ae8d107ee947c5b771 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 12:06:38 compute-0 systemd[1]: Started libpod-conmon-258a9865590a4faf34520c5f3277001ac8b0f796eb7b29ae8d107ee947c5b771.scope.
Oct 02 12:06:38 compute-0 podman[282537]: 2025-10-02 12:06:38.500689927 +0000 UTC m=+0.024696079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:06:38 compute-0 nova_compute[257802]: 2025-10-02 12:06:38.624 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Updating instance_info_cache with network_info: [{"id": "759e5032-b035-44b1-9e49-b89ab3323ceb", "address": "fa:16:3e:79:89:f8", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap759e5032-b0", "ovs_interfaceid": "759e5032-b035-44b1-9e49-b89ab3323ceb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:06:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:06:38 compute-0 nova_compute[257802]: 2025-10-02 12:06:38.653 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-c2b299fa-a4b6-461f-84d6-790aa118102d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:06:38 compute-0 nova_compute[257802]: 2025-10-02 12:06:38.654 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:06:38 compute-0 podman[282537]: 2025-10-02 12:06:38.664464836 +0000 UTC m=+0.188470978 container init 258a9865590a4faf34520c5f3277001ac8b0f796eb7b29ae8d107ee947c5b771 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 12:06:38 compute-0 podman[282537]: 2025-10-02 12:06:38.672129224 +0000 UTC m=+0.196135346 container start 258a9865590a4faf34520c5f3277001ac8b0f796eb7b29ae8d107ee947c5b771 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_rhodes, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:06:38 compute-0 musing_rhodes[282554]: 167 167
Oct 02 12:06:38 compute-0 systemd[1]: libpod-258a9865590a4faf34520c5f3277001ac8b0f796eb7b29ae8d107ee947c5b771.scope: Deactivated successfully.
Oct 02 12:06:38 compute-0 podman[282537]: 2025-10-02 12:06:38.679185848 +0000 UTC m=+0.203191960 container attach 258a9865590a4faf34520c5f3277001ac8b0f796eb7b29ae8d107ee947c5b771 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 12:06:38 compute-0 podman[282537]: 2025-10-02 12:06:38.67965543 +0000 UTC m=+0.203661542 container died 258a9865590a4faf34520c5f3277001ac8b0f796eb7b29ae8d107ee947c5b771 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_rhodes, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:06:38 compute-0 ceph-mon[73607]: pgmap v1201: 305 pgs: 305 active+clean; 344 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 230 op/s
Oct 02 12:06:38 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3591459830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-f249562f7fe4ba6816e23d97467bfc44e843807fa9f6ca508618e250689a84a3-merged.mount: Deactivated successfully.
Oct 02 12:06:38 compute-0 nova_compute[257802]: 2025-10-02 12:06:38.775 2 INFO nova.virt.libvirt.driver [None req-9f91cdf6-0c6c-44db-b27f-1eca96ad4c7a 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Deleting instance files /var/lib/nova/instances/c2b299fa-a4b6-461f-84d6-790aa118102d_del
Oct 02 12:06:38 compute-0 nova_compute[257802]: 2025-10-02 12:06:38.777 2 INFO nova.virt.libvirt.driver [None req-9f91cdf6-0c6c-44db-b27f-1eca96ad4c7a 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Deletion of /var/lib/nova/instances/c2b299fa-a4b6-461f-84d6-790aa118102d_del complete
Oct 02 12:06:38 compute-0 podman[282537]: 2025-10-02 12:06:38.793637434 +0000 UTC m=+0.317643556 container remove 258a9865590a4faf34520c5f3277001ac8b0f796eb7b29ae8d107ee947c5b771 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 12:06:38 compute-0 systemd[1]: libpod-conmon-258a9865590a4faf34520c5f3277001ac8b0f796eb7b29ae8d107ee947c5b771.scope: Deactivated successfully.
Oct 02 12:06:38 compute-0 nova_compute[257802]: 2025-10-02 12:06:38.834 2 INFO nova.compute.manager [None req-9f91cdf6-0c6c-44db-b27f-1eca96ad4c7a 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Took 0.97 seconds to destroy the instance on the hypervisor.
Oct 02 12:06:38 compute-0 nova_compute[257802]: 2025-10-02 12:06:38.835 2 DEBUG oslo.service.loopingcall [None req-9f91cdf6-0c6c-44db-b27f-1eca96ad4c7a 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:06:38 compute-0 nova_compute[257802]: 2025-10-02 12:06:38.835 2 DEBUG nova.compute.manager [-] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:06:38 compute-0 nova_compute[257802]: 2025-10-02 12:06:38.836 2 DEBUG nova.network.neutron [-] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:06:39 compute-0 podman[282577]: 2025-10-02 12:06:39.020313591 +0000 UTC m=+0.079314952 container create fa86075d7684a1cc6cbee105851162c5663d58d3b0dca5aa028b21e8bd697cd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jackson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:06:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 331 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 7.6 MiB/s wr, 278 op/s
Oct 02 12:06:39 compute-0 podman[282577]: 2025-10-02 12:06:38.962323285 +0000 UTC m=+0.021324666 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:06:39 compute-0 systemd[1]: Started libpod-conmon-fa86075d7684a1cc6cbee105851162c5663d58d3b0dca5aa028b21e8bd697cd1.scope.
Oct 02 12:06:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:06:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/435dab1af56b642d03996a2d902c62d1f69ef89fad5fa0ec801e077165ad6aaf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:06:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/435dab1af56b642d03996a2d902c62d1f69ef89fad5fa0ec801e077165ad6aaf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:06:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/435dab1af56b642d03996a2d902c62d1f69ef89fad5fa0ec801e077165ad6aaf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:06:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/435dab1af56b642d03996a2d902c62d1f69ef89fad5fa0ec801e077165ad6aaf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:06:39 compute-0 podman[282577]: 2025-10-02 12:06:39.207086127 +0000 UTC m=+0.266087508 container init fa86075d7684a1cc6cbee105851162c5663d58d3b0dca5aa028b21e8bd697cd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jackson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 12:06:39 compute-0 podman[282577]: 2025-10-02 12:06:39.213814822 +0000 UTC m=+0.272816183 container start fa86075d7684a1cc6cbee105851162c5663d58d3b0dca5aa028b21e8bd697cd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 12:06:39 compute-0 podman[282577]: 2025-10-02 12:06:39.22466923 +0000 UTC m=+0.283670591 container attach fa86075d7684a1cc6cbee105851162c5663d58d3b0dca5aa028b21e8bd697cd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jackson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 12:06:39 compute-0 ovn_controller[148183]: 2025-10-02T12:06:39Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1d:d7:20 10.100.0.9
Oct 02 12:06:39 compute-0 ovn_controller[148183]: 2025-10-02T12:06:39Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1d:d7:20 10.100.0.9
Oct 02 12:06:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:06:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:39.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:06:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:39.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:39 compute-0 nova_compute[257802]: 2025-10-02 12:06:39.741 2 DEBUG nova.network.neutron [-] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:06:39 compute-0 nova_compute[257802]: 2025-10-02 12:06:39.787 2 INFO nova.compute.manager [-] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Took 0.95 seconds to deallocate network for instance.
Oct 02 12:06:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:06:39 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3971227237' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:06:39 compute-0 nova_compute[257802]: 2025-10-02 12:06:39.846 2 DEBUG oslo_concurrency.lockutils [None req-9f91cdf6-0c6c-44db-b27f-1eca96ad4c7a 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:39 compute-0 nova_compute[257802]: 2025-10-02 12:06:39.847 2 DEBUG oslo_concurrency.lockutils [None req-9f91cdf6-0c6c-44db-b27f-1eca96ad4c7a 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:39 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3971227237' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:06:39 compute-0 nova_compute[257802]: 2025-10-02 12:06:39.932 2 DEBUG oslo_concurrency.processutils [None req-9f91cdf6-0c6c-44db-b27f-1eca96ad4c7a 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:06:40 compute-0 nova_compute[257802]: 2025-10-02 12:06:40.076 2 DEBUG nova.compute.manager [req-1ffda413-f6bc-4376-b270-0ed748b6715c req-64526804-ced4-4a39-a30d-5740661e629f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Received event network-vif-deleted-759e5032-b035-44b1-9e49-b89ab3323ceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:06:40 compute-0 nova_compute[257802]: 2025-10-02 12:06:40.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:06:40 compute-0 nova_compute[257802]: 2025-10-02 12:06:40.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:06:40 compute-0 elated_jackson[282594]: {
Oct 02 12:06:40 compute-0 elated_jackson[282594]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:06:40 compute-0 elated_jackson[282594]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:06:40 compute-0 elated_jackson[282594]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:06:40 compute-0 elated_jackson[282594]:         "osd_id": 1,
Oct 02 12:06:40 compute-0 elated_jackson[282594]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:06:40 compute-0 elated_jackson[282594]:         "type": "bluestore"
Oct 02 12:06:40 compute-0 elated_jackson[282594]:     }
Oct 02 12:06:40 compute-0 elated_jackson[282594]: }
Oct 02 12:06:40 compute-0 nova_compute[257802]: 2025-10-02 12:06:40.130 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:40 compute-0 podman[282577]: 2025-10-02 12:06:40.150976221 +0000 UTC m=+1.209977592 container died fa86075d7684a1cc6cbee105851162c5663d58d3b0dca5aa028b21e8bd697cd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jackson, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:06:40 compute-0 systemd[1]: libpod-fa86075d7684a1cc6cbee105851162c5663d58d3b0dca5aa028b21e8bd697cd1.scope: Deactivated successfully.
Oct 02 12:06:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:06:40 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/225149654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:40 compute-0 nova_compute[257802]: 2025-10-02 12:06:40.392 2 DEBUG oslo_concurrency.processutils [None req-9f91cdf6-0c6c-44db-b27f-1eca96ad4c7a 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:06:40 compute-0 nova_compute[257802]: 2025-10-02 12:06:40.402 2 DEBUG nova.compute.provider_tree [None req-9f91cdf6-0c6c-44db-b27f-1eca96ad4c7a 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:06:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-435dab1af56b642d03996a2d902c62d1f69ef89fad5fa0ec801e077165ad6aaf-merged.mount: Deactivated successfully.
Oct 02 12:06:40 compute-0 nova_compute[257802]: 2025-10-02 12:06:40.423 2 DEBUG nova.scheduler.client.report [None req-9f91cdf6-0c6c-44db-b27f-1eca96ad4c7a 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:06:40 compute-0 nova_compute[257802]: 2025-10-02 12:06:40.451 2 DEBUG oslo_concurrency.lockutils [None req-9f91cdf6-0c6c-44db-b27f-1eca96ad4c7a 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:40 compute-0 nova_compute[257802]: 2025-10-02 12:06:40.456 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.326s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:40 compute-0 nova_compute[257802]: 2025-10-02 12:06:40.457 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:40 compute-0 nova_compute[257802]: 2025-10-02 12:06:40.457 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:06:40 compute-0 nova_compute[257802]: 2025-10-02 12:06:40.458 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:06:40 compute-0 nova_compute[257802]: 2025-10-02 12:06:40.504 2 INFO nova.scheduler.client.report [None req-9f91cdf6-0c6c-44db-b27f-1eca96ad4c7a 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Deleted allocations for instance c2b299fa-a4b6-461f-84d6-790aa118102d
Oct 02 12:06:40 compute-0 podman[282577]: 2025-10-02 12:06:40.530722254 +0000 UTC m=+1.589723615 container remove fa86075d7684a1cc6cbee105851162c5663d58d3b0dca5aa028b21e8bd697cd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jackson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:06:40 compute-0 systemd[1]: libpod-conmon-fa86075d7684a1cc6cbee105851162c5663d58d3b0dca5aa028b21e8bd697cd1.scope: Deactivated successfully.
Oct 02 12:06:40 compute-0 nova_compute[257802]: 2025-10-02 12:06:40.570 2 DEBUG oslo_concurrency.lockutils [None req-9f91cdf6-0c6c-44db-b27f-1eca96ad4c7a 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "c2b299fa-a4b6-461f-84d6-790aa118102d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:40 compute-0 sudo[282441]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:06:40 compute-0 nova_compute[257802]: 2025-10-02 12:06:40.594 2 DEBUG nova.compute.manager [req-86cb5bde-a3e0-4450-9145-e6ec99ce5fe1 req-13118c45-f0da-44c7-9939-bd5887e5ad1b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Received event network-vif-plugged-759e5032-b035-44b1-9e49-b89ab3323ceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:06:40 compute-0 nova_compute[257802]: 2025-10-02 12:06:40.594 2 DEBUG oslo_concurrency.lockutils [req-86cb5bde-a3e0-4450-9145-e6ec99ce5fe1 req-13118c45-f0da-44c7-9939-bd5887e5ad1b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "c2b299fa-a4b6-461f-84d6-790aa118102d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:40 compute-0 nova_compute[257802]: 2025-10-02 12:06:40.594 2 DEBUG oslo_concurrency.lockutils [req-86cb5bde-a3e0-4450-9145-e6ec99ce5fe1 req-13118c45-f0da-44c7-9939-bd5887e5ad1b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c2b299fa-a4b6-461f-84d6-790aa118102d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:40 compute-0 nova_compute[257802]: 2025-10-02 12:06:40.595 2 DEBUG oslo_concurrency.lockutils [req-86cb5bde-a3e0-4450-9145-e6ec99ce5fe1 req-13118c45-f0da-44c7-9939-bd5887e5ad1b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c2b299fa-a4b6-461f-84d6-790aa118102d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:40 compute-0 nova_compute[257802]: 2025-10-02 12:06:40.595 2 DEBUG nova.compute.manager [req-86cb5bde-a3e0-4450-9145-e6ec99ce5fe1 req-13118c45-f0da-44c7-9939-bd5887e5ad1b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] No waiting events found dispatching network-vif-plugged-759e5032-b035-44b1-9e49-b89ab3323ceb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:06:40 compute-0 nova_compute[257802]: 2025-10-02 12:06:40.595 2 WARNING nova.compute.manager [req-86cb5bde-a3e0-4450-9145-e6ec99ce5fe1 req-13118c45-f0da-44c7-9939-bd5887e5ad1b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Received unexpected event network-vif-plugged-759e5032-b035-44b1-9e49-b89ab3323ceb for instance with vm_state deleted and task_state None.
Oct 02 12:06:40 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:06:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:06:40 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:06:40 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 94edb95f-cf93-4ffc-bc7e-41c2e14958f1 does not exist
Oct 02 12:06:40 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 35457aac-7705-443b-ab67-0a8a4d6b816b does not exist
Oct 02 12:06:40 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 30b3ec87-82f6-483f-8681-288b0a68c288 does not exist
Oct 02 12:06:40 compute-0 sudo[282672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:40 compute-0 sudo[282672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:40 compute-0 sudo[282672]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:40 compute-0 nova_compute[257802]: 2025-10-02 12:06:40.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:40 compute-0 sudo[282697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:06:40 compute-0 sudo[282697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:40 compute-0 sudo[282697]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:06:40 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/510628566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:40 compute-0 nova_compute[257802]: 2025-10-02 12:06:40.972 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:06:40 compute-0 ceph-mon[73607]: pgmap v1202: 305 pgs: 305 active+clean; 331 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 7.6 MiB/s wr, 278 op/s
Oct 02 12:06:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2206464622' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:06:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/225149654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:40 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:06:40 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:06:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 355 MiB data, 527 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 7.4 MiB/s wr, 241 op/s
Oct 02 12:06:41 compute-0 nova_compute[257802]: 2025-10-02 12:06:41.071 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:06:41 compute-0 nova_compute[257802]: 2025-10-02 12:06:41.071 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:06:41 compute-0 podman[282725]: 2025-10-02 12:06:41.143857129 +0000 UTC m=+0.129141609 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller)
Oct 02 12:06:41 compute-0 nova_compute[257802]: 2025-10-02 12:06:41.262 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:06:41 compute-0 nova_compute[257802]: 2025-10-02 12:06:41.263 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4486MB free_disk=20.832733154296875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:06:41 compute-0 nova_compute[257802]: 2025-10-02 12:06:41.263 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:41 compute-0 nova_compute[257802]: 2025-10-02 12:06:41.263 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:41 compute-0 nova_compute[257802]: 2025-10-02 12:06:41.417 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:06:41 compute-0 nova_compute[257802]: 2025-10-02 12:06:41.417 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:06:41 compute-0 nova_compute[257802]: 2025-10-02 12:06:41.418 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:06:41 compute-0 nova_compute[257802]: 2025-10-02 12:06:41.465 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:06:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:41.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:06:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:41.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:06:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:06:41 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2808735561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:41 compute-0 nova_compute[257802]: 2025-10-02 12:06:41.898 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:06:41 compute-0 nova_compute[257802]: 2025-10-02 12:06:41.903 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:06:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/510628566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2808735561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:42 compute-0 nova_compute[257802]: 2025-10-02 12:06:42.259 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:06:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:06:42
Oct 02 12:06:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:06:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:06:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'default.rgw.log', 'images', '.mgr', 'cephfs.cephfs.meta']
Oct 02 12:06:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:06:42 compute-0 nova_compute[257802]: 2025-10-02 12:06:42.377 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:06:42 compute-0 nova_compute[257802]: 2025-10-02 12:06:42.377 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:06:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:06:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:06:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:06:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:06:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:06:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:06:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:06:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:06:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:06:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:06:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:06:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:06:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:06:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 293 MiB data, 503 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 7.5 MiB/s wr, 366 op/s
Oct 02 12:06:43 compute-0 nova_compute[257802]: 2025-10-02 12:06:43.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:06:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:06:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:06:43 compute-0 ceph-mon[73607]: pgmap v1203: 305 pgs: 305 active+clean; 355 MiB data, 527 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 7.4 MiB/s wr, 241 op/s
Oct 02 12:06:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:43.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:06:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:43.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:06:44 compute-0 ceph-mon[73607]: pgmap v1204: 305 pgs: 305 active+clean; 293 MiB data, 503 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 7.5 MiB/s wr, 366 op/s
Oct 02 12:06:44 compute-0 nova_compute[257802]: 2025-10-02 12:06:44.377 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:06:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 279 MiB data, 508 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 6.5 MiB/s wr, 421 op/s
Oct 02 12:06:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2415899086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:45.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:45.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:45 compute-0 nova_compute[257802]: 2025-10-02 12:06:45.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:46 compute-0 ceph-mon[73607]: pgmap v1205: 305 pgs: 305 active+clean; 279 MiB data, 508 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 6.5 MiB/s wr, 421 op/s
Oct 02 12:06:46 compute-0 nova_compute[257802]: 2025-10-02 12:06:46.644 2 DEBUG oslo_concurrency.lockutils [None req-76a64c8d-23d1-47f1-a3b5-fdb16be5f594 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Acquiring lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:46 compute-0 nova_compute[257802]: 2025-10-02 12:06:46.645 2 DEBUG oslo_concurrency.lockutils [None req-76a64c8d-23d1-47f1-a3b5-fdb16be5f594 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:46 compute-0 nova_compute[257802]: 2025-10-02 12:06:46.646 2 DEBUG oslo_concurrency.lockutils [None req-76a64c8d-23d1-47f1-a3b5-fdb16be5f594 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Acquiring lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:46 compute-0 nova_compute[257802]: 2025-10-02 12:06:46.646 2 DEBUG oslo_concurrency.lockutils [None req-76a64c8d-23d1-47f1-a3b5-fdb16be5f594 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:46 compute-0 nova_compute[257802]: 2025-10-02 12:06:46.646 2 DEBUG oslo_concurrency.lockutils [None req-76a64c8d-23d1-47f1-a3b5-fdb16be5f594 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:46 compute-0 nova_compute[257802]: 2025-10-02 12:06:46.648 2 INFO nova.compute.manager [None req-76a64c8d-23d1-47f1-a3b5-fdb16be5f594 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Terminating instance
Oct 02 12:06:46 compute-0 nova_compute[257802]: 2025-10-02 12:06:46.649 2 DEBUG nova.compute.manager [None req-76a64c8d-23d1-47f1-a3b5-fdb16be5f594 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:06:46 compute-0 kernel: tap869b835b-11 (unregistering): left promiscuous mode
Oct 02 12:06:46 compute-0 NetworkManager[44987]: <info>  [1759406806.7615] device (tap869b835b-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:06:46 compute-0 ovn_controller[148183]: 2025-10-02T12:06:46Z|00151|binding|INFO|Releasing lport 869b835b-1179-4864-abfe-fc542b215555 from this chassis (sb_readonly=0)
Oct 02 12:06:46 compute-0 ovn_controller[148183]: 2025-10-02T12:06:46Z|00152|binding|INFO|Setting lport 869b835b-1179-4864-abfe-fc542b215555 down in Southbound
Oct 02 12:06:46 compute-0 ovn_controller[148183]: 2025-10-02T12:06:46Z|00153|binding|INFO|Removing iface tap869b835b-11 ovn-installed in OVS
Oct 02 12:06:46 compute-0 nova_compute[257802]: 2025-10-02 12:06:46.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:46.774 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:d7:20 10.100.0.9'], port_security=['fa:16:3e:1d:d7:20 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '66fec68d-e11d-4f9f-9cea-a0358d8f2ae0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80106802-d877-42c6-b2a9-50b050f6b08f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9afa78cc4dec419babdf61fd31f46e28', 'neutron:revision_number': '8', 'neutron:security_group_ids': '8fbb5420-10f4-405b-bd01-713020f7e518', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=03aa6f10-2374-4fa3-bc90-1fcb8815afb8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=869b835b-1179-4864-abfe-fc542b215555) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:06:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:46.777 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 869b835b-1179-4864-abfe-fc542b215555 in datapath 80106802-d877-42c6-b2a9-50b050f6b08f unbound from our chassis
Oct 02 12:06:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:46.779 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 80106802-d877-42c6-b2a9-50b050f6b08f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:06:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:46.780 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f82df1c3-4ab9-428c-b98c-e0c5232c8b15]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:06:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:46.781 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f namespace which is not needed anymore
Oct 02 12:06:46 compute-0 nova_compute[257802]: 2025-10-02 12:06:46.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:46 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000017.scope: Deactivated successfully.
Oct 02 12:06:46 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000017.scope: Consumed 14.360s CPU time.
Oct 02 12:06:46 compute-0 systemd-machined[211836]: Machine qemu-19-instance-00000017 terminated.
Oct 02 12:06:46 compute-0 nova_compute[257802]: 2025-10-02 12:06:46.884 2 INFO nova.virt.libvirt.driver [-] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Instance destroyed successfully.
Oct 02 12:06:46 compute-0 nova_compute[257802]: 2025-10-02 12:06:46.885 2 DEBUG nova.objects.instance [None req-76a64c8d-23d1-47f1-a3b5-fdb16be5f594 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lazy-loading 'resources' on Instance uuid 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:06:46 compute-0 neutron-haproxy-ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f[278018]: [NOTICE]   (278022) : haproxy version is 2.8.14-c23fe91
Oct 02 12:06:46 compute-0 neutron-haproxy-ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f[278018]: [NOTICE]   (278022) : path to executable is /usr/sbin/haproxy
Oct 02 12:06:46 compute-0 neutron-haproxy-ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f[278018]: [WARNING]  (278022) : Exiting Master process...
Oct 02 12:06:46 compute-0 neutron-haproxy-ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f[278018]: [ALERT]    (278022) : Current worker (278024) exited with code 143 (Terminated)
Oct 02 12:06:46 compute-0 neutron-haproxy-ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f[278018]: [WARNING]  (278022) : All workers exited. Exiting... (0)
Oct 02 12:06:46 compute-0 systemd[1]: libpod-af3a566d72423f68c8c02e2d9de8eb347bf7b4d67924c14d6713d342d602949c.scope: Deactivated successfully.
Oct 02 12:06:46 compute-0 podman[282799]: 2025-10-02 12:06:46.945683275 +0000 UTC m=+0.083174957 container died af3a566d72423f68c8c02e2d9de8eb347bf7b4d67924c14d6713d342d602949c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:06:46 compute-0 nova_compute[257802]: 2025-10-02 12:06:46.950 2 DEBUG nova.virt.libvirt.vif [None req-76a64c8d-23d1-47f1-a3b5-fdb16be5f594 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:04:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-167670540',display_name='tempest-ServersAdminTestJSON-server-167670540',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-167670540',id=23,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:06:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9afa78cc4dec419babdf61fd31f46e28',ramdisk_id='',reservation_id='r-fpp9adrm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-518249049',owner_user_name='tempest-ServersAdminTestJSON-518249049-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:06:29Z,user_data=None,user_id='8850add40b254d198f270d9e64c777d5',uuid=66fec68d-e11d-4f9f-9cea-a0358d8f2ae0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "869b835b-1179-4864-abfe-fc542b215555", "address": "fa:16:3e:1d:d7:20", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap869b835b-11", "ovs_interfaceid": "869b835b-1179-4864-abfe-fc542b215555", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:06:46 compute-0 nova_compute[257802]: 2025-10-02 12:06:46.951 2 DEBUG nova.network.os_vif_util [None req-76a64c8d-23d1-47f1-a3b5-fdb16be5f594 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Converting VIF {"id": "869b835b-1179-4864-abfe-fc542b215555", "address": "fa:16:3e:1d:d7:20", "network": {"id": "80106802-d877-42c6-b2a9-50b050f6b08f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-79358917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afa78cc4dec419babdf61fd31f46e28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap869b835b-11", "ovs_interfaceid": "869b835b-1179-4864-abfe-fc542b215555", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:06:46 compute-0 nova_compute[257802]: 2025-10-02 12:06:46.952 2 DEBUG nova.network.os_vif_util [None req-76a64c8d-23d1-47f1-a3b5-fdb16be5f594 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1d:d7:20,bridge_name='br-int',has_traffic_filtering=True,id=869b835b-1179-4864-abfe-fc542b215555,network=Network(80106802-d877-42c6-b2a9-50b050f6b08f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap869b835b-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:06:46 compute-0 nova_compute[257802]: 2025-10-02 12:06:46.952 2 DEBUG os_vif [None req-76a64c8d-23d1-47f1-a3b5-fdb16be5f594 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1d:d7:20,bridge_name='br-int',has_traffic_filtering=True,id=869b835b-1179-4864-abfe-fc542b215555,network=Network(80106802-d877-42c6-b2a9-50b050f6b08f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap869b835b-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:06:46 compute-0 nova_compute[257802]: 2025-10-02 12:06:46.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:46 compute-0 nova_compute[257802]: 2025-10-02 12:06:46.954 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap869b835b-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:06:46 compute-0 nova_compute[257802]: 2025-10-02 12:06:46.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:46 compute-0 nova_compute[257802]: 2025-10-02 12:06:46.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:46 compute-0 nova_compute[257802]: 2025-10-02 12:06:46.958 2 INFO os_vif [None req-76a64c8d-23d1-47f1-a3b5-fdb16be5f594 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1d:d7:20,bridge_name='br-int',has_traffic_filtering=True,id=869b835b-1179-4864-abfe-fc542b215555,network=Network(80106802-d877-42c6-b2a9-50b050f6b08f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap869b835b-11')
Oct 02 12:06:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 260 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 7.5 MiB/s rd, 6.1 MiB/s wr, 429 op/s
Oct 02 12:06:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-af3a566d72423f68c8c02e2d9de8eb347bf7b4d67924c14d6713d342d602949c-userdata-shm.mount: Deactivated successfully.
Oct 02 12:06:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-02787123b1d6271d494656d45f863f1b39368d9478799771b4d0dc95f41398ba-merged.mount: Deactivated successfully.
Oct 02 12:06:47 compute-0 podman[282799]: 2025-10-02 12:06:47.108972353 +0000 UTC m=+0.246464035 container cleanup af3a566d72423f68c8c02e2d9de8eb347bf7b4d67924c14d6713d342d602949c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:06:47 compute-0 systemd[1]: libpod-conmon-af3a566d72423f68c8c02e2d9de8eb347bf7b4d67924c14d6713d342d602949c.scope: Deactivated successfully.
Oct 02 12:06:47 compute-0 nova_compute[257802]: 2025-10-02 12:06:47.256 2 DEBUG nova.compute.manager [req-63d7378b-cd5f-49f8-b8b1-d8aaa50577ce req-bc79d118-305b-4ede-b7a7-7359bec76854 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Received event network-vif-unplugged-869b835b-1179-4864-abfe-fc542b215555 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:06:47 compute-0 nova_compute[257802]: 2025-10-02 12:06:47.257 2 DEBUG oslo_concurrency.lockutils [req-63d7378b-cd5f-49f8-b8b1-d8aaa50577ce req-bc79d118-305b-4ede-b7a7-7359bec76854 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:47 compute-0 nova_compute[257802]: 2025-10-02 12:06:47.257 2 DEBUG oslo_concurrency.lockutils [req-63d7378b-cd5f-49f8-b8b1-d8aaa50577ce req-bc79d118-305b-4ede-b7a7-7359bec76854 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:47 compute-0 nova_compute[257802]: 2025-10-02 12:06:47.257 2 DEBUG oslo_concurrency.lockutils [req-63d7378b-cd5f-49f8-b8b1-d8aaa50577ce req-bc79d118-305b-4ede-b7a7-7359bec76854 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:47 compute-0 nova_compute[257802]: 2025-10-02 12:06:47.257 2 DEBUG nova.compute.manager [req-63d7378b-cd5f-49f8-b8b1-d8aaa50577ce req-bc79d118-305b-4ede-b7a7-7359bec76854 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] No waiting events found dispatching network-vif-unplugged-869b835b-1179-4864-abfe-fc542b215555 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:06:47 compute-0 nova_compute[257802]: 2025-10-02 12:06:47.258 2 DEBUG nova.compute.manager [req-63d7378b-cd5f-49f8-b8b1-d8aaa50577ce req-bc79d118-305b-4ede-b7a7-7359bec76854 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Received event network-vif-unplugged-869b835b-1179-4864-abfe-fc542b215555 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:06:47 compute-0 podman[282857]: 2025-10-02 12:06:47.47544933 +0000 UTC m=+0.347013409 container remove af3a566d72423f68c8c02e2d9de8eb347bf7b4d67924c14d6713d342d602949c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:06:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:47.481 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9b99bc0a-9266-4954-9a1b-20b7b058fb34]: (4, ('Thu Oct  2 12:06:46 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f (af3a566d72423f68c8c02e2d9de8eb347bf7b4d67924c14d6713d342d602949c)\naf3a566d72423f68c8c02e2d9de8eb347bf7b4d67924c14d6713d342d602949c\nThu Oct  2 12:06:47 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f (af3a566d72423f68c8c02e2d9de8eb347bf7b4d67924c14d6713d342d602949c)\naf3a566d72423f68c8c02e2d9de8eb347bf7b4d67924c14d6713d342d602949c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:06:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:47.483 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[745c7ad8-f355-47b6-b398-16abf9846d5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:06:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:47.483 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80106802-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:06:47 compute-0 nova_compute[257802]: 2025-10-02 12:06:47.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:47 compute-0 kernel: tap80106802-d0: left promiscuous mode
Oct 02 12:06:47 compute-0 nova_compute[257802]: 2025-10-02 12:06:47.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:47.490 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6c42d725-5ed2-4c35-bb31-8fe9ecc569d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:06:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:47.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:47 compute-0 nova_compute[257802]: 2025-10-02 12:06:47.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:47.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:47.516 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[28f62040-ccbb-4d2f-b456-7ec0adb78b71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:06:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:47.517 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[30e84223-5888-47ee-9188-12a692f52cb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:06:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:47.531 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[45255b38-0c64-46d1-a98b-e592fb1d4f51]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474175, 'reachable_time': 29820, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282872, 'error': None, 'target': 'ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:06:47 compute-0 systemd[1]: run-netns-ovnmeta\x2d80106802\x2dd877\x2d42c6\x2db2a9\x2d50b050f6b08f.mount: Deactivated successfully.
Oct 02 12:06:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:47.537 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-80106802-d877-42c6-b2a9-50b050f6b08f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:06:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:47.537 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[2950a674-449e-42e6-88d3-5391c3db04ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:06:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:06:47 compute-0 nova_compute[257802]: 2025-10-02 12:06:47.952 2 INFO nova.virt.libvirt.driver [None req-76a64c8d-23d1-47f1-a3b5-fdb16be5f594 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Deleting instance files /var/lib/nova/instances/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_del
Oct 02 12:06:47 compute-0 nova_compute[257802]: 2025-10-02 12:06:47.952 2 INFO nova.virt.libvirt.driver [None req-76a64c8d-23d1-47f1-a3b5-fdb16be5f594 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Deletion of /var/lib/nova/instances/66fec68d-e11d-4f9f-9cea-a0358d8f2ae0_del complete
Oct 02 12:06:48 compute-0 nova_compute[257802]: 2025-10-02 12:06:48.154 2 INFO nova.compute.manager [None req-76a64c8d-23d1-47f1-a3b5-fdb16be5f594 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Took 1.50 seconds to destroy the instance on the hypervisor.
Oct 02 12:06:48 compute-0 nova_compute[257802]: 2025-10-02 12:06:48.155 2 DEBUG oslo.service.loopingcall [None req-76a64c8d-23d1-47f1-a3b5-fdb16be5f594 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:06:48 compute-0 nova_compute[257802]: 2025-10-02 12:06:48.156 2 DEBUG nova.compute.manager [-] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:06:48 compute-0 nova_compute[257802]: 2025-10-02 12:06:48.157 2 DEBUG nova.network.neutron [-] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:06:48 compute-0 ceph-mon[73607]: pgmap v1206: 305 pgs: 305 active+clean; 260 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 7.5 MiB/s rd, 6.1 MiB/s wr, 429 op/s
Oct 02 12:06:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1432307146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:48 compute-0 nova_compute[257802]: 2025-10-02 12:06:48.987 2 DEBUG nova.network.neutron [-] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:06:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:48.989 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:06:48 compute-0 nova_compute[257802]: 2025-10-02 12:06:48.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:48.990 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:06:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 170 MiB data, 462 MiB used, 21 GiB / 21 GiB avail; 7.3 MiB/s rd, 5.7 MiB/s wr, 439 op/s
Oct 02 12:06:49 compute-0 nova_compute[257802]: 2025-10-02 12:06:49.034 2 INFO nova.compute.manager [-] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Took 0.88 seconds to deallocate network for instance.
Oct 02 12:06:49 compute-0 nova_compute[257802]: 2025-10-02 12:06:49.104 2 DEBUG oslo_concurrency.lockutils [None req-76a64c8d-23d1-47f1-a3b5-fdb16be5f594 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:49 compute-0 nova_compute[257802]: 2025-10-02 12:06:49.105 2 DEBUG oslo_concurrency.lockutils [None req-76a64c8d-23d1-47f1-a3b5-fdb16be5f594 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:49 compute-0 nova_compute[257802]: 2025-10-02 12:06:49.124 2 DEBUG nova.compute.manager [req-b83e5c36-9a4d-4d20-8b4f-7c03fe8e1f1a req-83985e97-6dde-434c-a054-07de6d407c01 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Received event network-vif-deleted-869b835b-1179-4864-abfe-fc542b215555 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:06:49 compute-0 nova_compute[257802]: 2025-10-02 12:06:49.143 2 DEBUG oslo_concurrency.processutils [None req-76a64c8d-23d1-47f1-a3b5-fdb16be5f594 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:06:49 compute-0 nova_compute[257802]: 2025-10-02 12:06:49.419 2 DEBUG nova.compute.manager [req-f86c7897-1d65-4382-a40a-4c9aa9c52aaf req-728a3a49-54ca-4e01-a803-8301ee533b77 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Received event network-vif-plugged-869b835b-1179-4864-abfe-fc542b215555 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:06:49 compute-0 nova_compute[257802]: 2025-10-02 12:06:49.420 2 DEBUG oslo_concurrency.lockutils [req-f86c7897-1d65-4382-a40a-4c9aa9c52aaf req-728a3a49-54ca-4e01-a803-8301ee533b77 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:49 compute-0 nova_compute[257802]: 2025-10-02 12:06:49.420 2 DEBUG oslo_concurrency.lockutils [req-f86c7897-1d65-4382-a40a-4c9aa9c52aaf req-728a3a49-54ca-4e01-a803-8301ee533b77 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:49 compute-0 nova_compute[257802]: 2025-10-02 12:06:49.421 2 DEBUG oslo_concurrency.lockutils [req-f86c7897-1d65-4382-a40a-4c9aa9c52aaf req-728a3a49-54ca-4e01-a803-8301ee533b77 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:49 compute-0 nova_compute[257802]: 2025-10-02 12:06:49.421 2 DEBUG nova.compute.manager [req-f86c7897-1d65-4382-a40a-4c9aa9c52aaf req-728a3a49-54ca-4e01-a803-8301ee533b77 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] No waiting events found dispatching network-vif-plugged-869b835b-1179-4864-abfe-fc542b215555 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:06:49 compute-0 nova_compute[257802]: 2025-10-02 12:06:49.421 2 WARNING nova.compute.manager [req-f86c7897-1d65-4382-a40a-4c9aa9c52aaf req-728a3a49-54ca-4e01-a803-8301ee533b77 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Received unexpected event network-vif-plugged-869b835b-1179-4864-abfe-fc542b215555 for instance with vm_state deleted and task_state None.
Oct 02 12:06:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:49.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:49.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:06:49 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2372524306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:49 compute-0 nova_compute[257802]: 2025-10-02 12:06:49.560 2 DEBUG oslo_concurrency.processutils [None req-76a64c8d-23d1-47f1-a3b5-fdb16be5f594 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:06:49 compute-0 nova_compute[257802]: 2025-10-02 12:06:49.565 2 DEBUG nova.compute.provider_tree [None req-76a64c8d-23d1-47f1-a3b5-fdb16be5f594 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:06:49 compute-0 nova_compute[257802]: 2025-10-02 12:06:49.582 2 DEBUG nova.scheduler.client.report [None req-76a64c8d-23d1-47f1-a3b5-fdb16be5f594 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:06:49 compute-0 nova_compute[257802]: 2025-10-02 12:06:49.604 2 DEBUG oslo_concurrency.lockutils [None req-76a64c8d-23d1-47f1-a3b5-fdb16be5f594 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:49 compute-0 nova_compute[257802]: 2025-10-02 12:06:49.629 2 INFO nova.scheduler.client.report [None req-76a64c8d-23d1-47f1-a3b5-fdb16be5f594 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Deleted allocations for instance 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0
Oct 02 12:06:49 compute-0 nova_compute[257802]: 2025-10-02 12:06:49.701 2 DEBUG oslo_concurrency.lockutils [None req-76a64c8d-23d1-47f1-a3b5-fdb16be5f594 8850add40b254d198f270d9e64c777d5 9afa78cc4dec419babdf61fd31f46e28 - - default default] Lock "66fec68d-e11d-4f9f-9cea-a0358d8f2ae0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:50 compute-0 ceph-mon[73607]: pgmap v1207: 305 pgs: 305 active+clean; 170 MiB data, 462 MiB used, 21 GiB / 21 GiB avail; 7.3 MiB/s rd, 5.7 MiB/s wr, 439 op/s
Oct 02 12:06:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2372524306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:50 compute-0 nova_compute[257802]: 2025-10-02 12:06:50.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 134 MiB data, 445 MiB used, 21 GiB / 21 GiB avail; 6.1 MiB/s rd, 2.1 MiB/s wr, 395 op/s
Oct 02 12:06:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:06:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:51.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:06:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:51.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:51 compute-0 nova_compute[257802]: 2025-10-02 12:06:51.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:52 compute-0 sudo[282898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:52 compute-0 sudo[282898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:52 compute-0 sudo[282898]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:52 compute-0 sudo[282923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:52 compute-0 sudo[282923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:52 compute-0 sudo[282923]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:06:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 156 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 5.4 MiB/s rd, 2.6 MiB/s wr, 352 op/s
Oct 02 12:06:53 compute-0 ceph-mon[73607]: pgmap v1208: 305 pgs: 305 active+clean; 134 MiB data, 445 MiB used, 21 GiB / 21 GiB avail; 6.1 MiB/s rd, 2.1 MiB/s wr, 395 op/s
Oct 02 12:06:53 compute-0 nova_compute[257802]: 2025-10-02 12:06:53.102 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406798.1005816, c2b299fa-a4b6-461f-84d6-790aa118102d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:06:53 compute-0 nova_compute[257802]: 2025-10-02 12:06:53.103 2 INFO nova.compute.manager [-] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] VM Stopped (Lifecycle Event)
Oct 02 12:06:53 compute-0 nova_compute[257802]: 2025-10-02 12:06:53.126 2 DEBUG nova.compute.manager [None req-926c4909-030e-4fc1-869a-b2d60fd753f5 - - - - - -] [instance: c2b299fa-a4b6-461f-84d6-790aa118102d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:06:53 compute-0 nova_compute[257802]: 2025-10-02 12:06:53.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:53.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:06:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:53.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003034741124604504 of space, bias 1.0, pg target 0.9104223373813511 quantized to 32 (current 32)
Oct 02 12:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.0001635783082077052 quantized to 32 (current 32)
Oct 02 12:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 12:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 12:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 12:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 12:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 12:06:54 compute-0 ceph-mon[73607]: pgmap v1209: 305 pgs: 305 active+clean; 156 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 5.4 MiB/s rd, 2.6 MiB/s wr, 352 op/s
Oct 02 12:06:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:06:54 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3518450349' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:06:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:06:54 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3518450349' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:06:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 172 MiB data, 460 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.4 MiB/s wr, 214 op/s
Oct 02 12:06:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:55.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:55.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3518450349' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:06:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3518450349' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:06:55 compute-0 nova_compute[257802]: 2025-10-02 12:06:55.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:56 compute-0 ceph-mon[73607]: pgmap v1210: 305 pgs: 305 active+clean; 172 MiB data, 460 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.4 MiB/s wr, 214 op/s
Oct 02 12:06:56 compute-0 nova_compute[257802]: 2025-10-02 12:06:56.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:06:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:06:56.992 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:06:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 177 MiB data, 466 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 169 op/s
Oct 02 12:06:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:57.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:57.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:06:58 compute-0 ceph-mon[73607]: pgmap v1211: 305 pgs: 305 active+clean; 177 MiB data, 466 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 169 op/s
Oct 02 12:06:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 197 MiB data, 469 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 4.2 MiB/s wr, 181 op/s
Oct 02 12:06:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:59.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:06:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:59.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:00 compute-0 ceph-mon[73607]: pgmap v1212: 305 pgs: 305 active+clean; 197 MiB data, 469 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 4.2 MiB/s wr, 181 op/s
Oct 02 12:07:00 compute-0 nova_compute[257802]: 2025-10-02 12:07:00.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 200 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 553 KiB/s rd, 4.3 MiB/s wr, 145 op/s
Oct 02 12:07:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:01.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:01.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:01 compute-0 nova_compute[257802]: 2025-10-02 12:07:01.883 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406806.8820329, 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:07:01 compute-0 nova_compute[257802]: 2025-10-02 12:07:01.883 2 INFO nova.compute.manager [-] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] VM Stopped (Lifecycle Event)
Oct 02 12:07:01 compute-0 nova_compute[257802]: 2025-10-02 12:07:01.916 2 DEBUG nova.compute.manager [None req-90d51113-050d-4b65-bf9d-94fe273971ae - - - - - -] [instance: 66fec68d-e11d-4f9f-9cea-a0358d8f2ae0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:07:01 compute-0 nova_compute[257802]: 2025-10-02 12:07:01.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:07:02 compute-0 ceph-mon[73607]: pgmap v1213: 305 pgs: 305 active+clean; 200 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 553 KiB/s rd, 4.3 MiB/s wr, 145 op/s
Oct 02 12:07:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 200 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 539 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Oct 02 12:07:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:03.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:03.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:03 compute-0 podman[282955]: 2025-10-02 12:07:03.909665334 +0000 UTC m=+0.052985935 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 12:07:04 compute-0 ceph-mon[73607]: pgmap v1214: 305 pgs: 305 active+clean; 200 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 539 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Oct 02 12:07:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 200 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 502 KiB/s rd, 2.4 MiB/s wr, 107 op/s
Oct 02 12:07:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:05.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:05.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:05 compute-0 nova_compute[257802]: 2025-10-02 12:07:05.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:06 compute-0 ceph-mon[73607]: pgmap v1215: 305 pgs: 305 active+clean; 200 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 502 KiB/s rd, 2.4 MiB/s wr, 107 op/s
Oct 02 12:07:06 compute-0 podman[282976]: 2025-10-02 12:07:06.944311728 +0000 UTC m=+0.073730975 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 02 12:07:06 compute-0 podman[282977]: 2025-10-02 12:07:06.978009757 +0000 UTC m=+0.089135364 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 12:07:06 compute-0 nova_compute[257802]: 2025-10-02 12:07:06.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 200 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 389 KiB/s rd, 945 KiB/s wr, 92 op/s
Oct 02 12:07:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:07.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:07.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:07:08 compute-0 ceph-mon[73607]: pgmap v1216: 305 pgs: 305 active+clean; 200 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 389 KiB/s rd, 945 KiB/s wr, 92 op/s
Oct 02 12:07:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 200 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 312 KiB/s rd, 418 KiB/s wr, 61 op/s
Oct 02 12:07:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:07:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:09.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:07:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:07:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:09.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:07:10 compute-0 nova_compute[257802]: 2025-10-02 12:07:10.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 181 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 42 KiB/s rd, 115 KiB/s wr, 26 op/s
Oct 02 12:07:11 compute-0 ceph-mon[73607]: pgmap v1217: 305 pgs: 305 active+clean; 200 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 312 KiB/s rd, 418 KiB/s wr, 61 op/s
Oct 02 12:07:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:11.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:11.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:11 compute-0 podman[283019]: 2025-10-02 12:07:11.948370996 +0000 UTC m=+0.081133118 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 12:07:12 compute-0 nova_compute[257802]: 2025-10-02 12:07:12.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:12 compute-0 sudo[283046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:12 compute-0 sudo[283046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:12 compute-0 sudo[283046]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:12 compute-0 sudo[283071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:12 compute-0 sudo[283071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:12 compute-0 sudo[283071]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:12 compute-0 ceph-mon[73607]: pgmap v1218: 305 pgs: 305 active+clean; 181 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 42 KiB/s rd, 115 KiB/s wr, 26 op/s
Oct 02 12:07:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:07:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:07:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:07:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:07:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:07:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:07:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:07:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 142 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 40 KiB/s wr, 27 op/s
Oct 02 12:07:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:13.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:13.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:14 compute-0 ceph-mon[73607]: pgmap v1219: 305 pgs: 305 active+clean; 142 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 40 KiB/s wr, 27 op/s
Oct 02 12:07:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1463599971' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:07:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 144 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 776 KiB/s wr, 40 op/s
Oct 02 12:07:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:15.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:15.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3394220777' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:07:15 compute-0 nova_compute[257802]: 2025-10-02 12:07:15.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Oct 02 12:07:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Oct 02 12:07:16 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Oct 02 12:07:16 compute-0 ceph-mon[73607]: pgmap v1220: 305 pgs: 305 active+clean; 144 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 776 KiB/s wr, 40 op/s
Oct 02 12:07:17 compute-0 nova_compute[257802]: 2025-10-02 12:07:17.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 148 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 45 KiB/s rd, 1.1 MiB/s wr, 66 op/s
Oct 02 12:07:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:07:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:17.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:07:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:17.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:17 compute-0 nova_compute[257802]: 2025-10-02 12:07:17.674 2 DEBUG oslo_concurrency.lockutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Acquiring lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:07:17 compute-0 nova_compute[257802]: 2025-10-02 12:07:17.674 2 DEBUG oslo_concurrency.lockutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:07:17 compute-0 nova_compute[257802]: 2025-10-02 12:07:17.719 2 DEBUG nova.compute.manager [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:07:17 compute-0 ceph-mon[73607]: osdmap e162: 3 total, 3 up, 3 in
Oct 02 12:07:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:07:17 compute-0 nova_compute[257802]: 2025-10-02 12:07:17.915 2 DEBUG oslo_concurrency.lockutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:07:17 compute-0 nova_compute[257802]: 2025-10-02 12:07:17.915 2 DEBUG oslo_concurrency.lockutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:07:17 compute-0 nova_compute[257802]: 2025-10-02 12:07:17.923 2 DEBUG nova.virt.hardware [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:07:17 compute-0 nova_compute[257802]: 2025-10-02 12:07:17.924 2 INFO nova.compute.claims [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:07:18 compute-0 nova_compute[257802]: 2025-10-02 12:07:18.220 2 DEBUG oslo_concurrency.processutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:07:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:07:18 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2521848262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:18 compute-0 nova_compute[257802]: 2025-10-02 12:07:18.644 2 DEBUG oslo_concurrency.processutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:07:18 compute-0 nova_compute[257802]: 2025-10-02 12:07:18.650 2 DEBUG nova.compute.provider_tree [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:07:18 compute-0 nova_compute[257802]: 2025-10-02 12:07:18.733 2 DEBUG nova.scheduler.client.report [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:07:18 compute-0 nova_compute[257802]: 2025-10-02 12:07:18.770 2 DEBUG oslo_concurrency.lockutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:07:18 compute-0 nova_compute[257802]: 2025-10-02 12:07:18.770 2 DEBUG nova.compute.manager [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:07:18 compute-0 nova_compute[257802]: 2025-10-02 12:07:18.839 2 DEBUG nova.compute.manager [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:07:18 compute-0 nova_compute[257802]: 2025-10-02 12:07:18.840 2 DEBUG nova.network.neutron [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:07:18 compute-0 nova_compute[257802]: 2025-10-02 12:07:18.866 2 INFO nova.virt.libvirt.driver [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:07:18 compute-0 ceph-mon[73607]: pgmap v1222: 305 pgs: 305 active+clean; 148 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 45 KiB/s rd, 1.1 MiB/s wr, 66 op/s
Oct 02 12:07:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2521848262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:18 compute-0 nova_compute[257802]: 2025-10-02 12:07:18.907 2 DEBUG nova.compute.manager [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:07:19 compute-0 nova_compute[257802]: 2025-10-02 12:07:19.028 2 DEBUG nova.compute.manager [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:07:19 compute-0 nova_compute[257802]: 2025-10-02 12:07:19.030 2 DEBUG nova.virt.libvirt.driver [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:07:19 compute-0 nova_compute[257802]: 2025-10-02 12:07:19.031 2 INFO nova.virt.libvirt.driver [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Creating image(s)
Oct 02 12:07:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 183 MiB data, 461 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.7 MiB/s wr, 165 op/s
Oct 02 12:07:19 compute-0 nova_compute[257802]: 2025-10-02 12:07:19.066 2 DEBUG nova.storage.rbd_utils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] rbd image 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:07:19 compute-0 nova_compute[257802]: 2025-10-02 12:07:19.093 2 DEBUG nova.storage.rbd_utils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] rbd image 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:07:19 compute-0 nova_compute[257802]: 2025-10-02 12:07:19.120 2 DEBUG nova.storage.rbd_utils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] rbd image 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:07:19 compute-0 nova_compute[257802]: 2025-10-02 12:07:19.124 2 DEBUG oslo_concurrency.processutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:07:19 compute-0 nova_compute[257802]: 2025-10-02 12:07:19.145 2 DEBUG nova.policy [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ec17c54e24584f11a5348b68d6e7ca85', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7359a7dad3b849bfbf075b88f2a261b4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:07:19 compute-0 nova_compute[257802]: 2025-10-02 12:07:19.181 2 DEBUG oslo_concurrency.processutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:07:19 compute-0 nova_compute[257802]: 2025-10-02 12:07:19.182 2 DEBUG oslo_concurrency.lockutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:07:19 compute-0 nova_compute[257802]: 2025-10-02 12:07:19.182 2 DEBUG oslo_concurrency.lockutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:07:19 compute-0 nova_compute[257802]: 2025-10-02 12:07:19.182 2 DEBUG oslo_concurrency.lockutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:07:19 compute-0 nova_compute[257802]: 2025-10-02 12:07:19.206 2 DEBUG nova.storage.rbd_utils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] rbd image 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:07:19 compute-0 nova_compute[257802]: 2025-10-02 12:07:19.209 2 DEBUG oslo_concurrency.processutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:07:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:19.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:19.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:19 compute-0 nova_compute[257802]: 2025-10-02 12:07:19.744 2 DEBUG oslo_concurrency.processutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:07:19 compute-0 nova_compute[257802]: 2025-10-02 12:07:19.816 2 DEBUG nova.storage.rbd_utils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] resizing rbd image 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:07:19 compute-0 nova_compute[257802]: 2025-10-02 12:07:19.854 2 DEBUG nova.network.neutron [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Successfully created port: 7d4e6499-094f-4aa5-acf9-3a8487c8dba0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:07:19 compute-0 nova_compute[257802]: 2025-10-02 12:07:19.947 2 DEBUG nova.objects.instance [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:07:19 compute-0 nova_compute[257802]: 2025-10-02 12:07:19.969 2 DEBUG nova.virt.libvirt.driver [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:07:19 compute-0 nova_compute[257802]: 2025-10-02 12:07:19.969 2 DEBUG nova.virt.libvirt.driver [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Ensure instance console log exists: /var/lib/nova/instances/65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:07:19 compute-0 nova_compute[257802]: 2025-10-02 12:07:19.970 2 DEBUG oslo_concurrency.lockutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:07:19 compute-0 nova_compute[257802]: 2025-10-02 12:07:19.971 2 DEBUG oslo_concurrency.lockutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:07:19 compute-0 nova_compute[257802]: 2025-10-02 12:07:19.971 2 DEBUG oslo_concurrency.lockutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:07:20 compute-0 nova_compute[257802]: 2025-10-02 12:07:20.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:21 compute-0 ceph-mon[73607]: pgmap v1223: 305 pgs: 305 active+clean; 183 MiB data, 461 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.7 MiB/s wr, 165 op/s
Oct 02 12:07:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 200 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.2 MiB/s wr, 196 op/s
Oct 02 12:07:21 compute-0 nova_compute[257802]: 2025-10-02 12:07:21.507 2 DEBUG nova.network.neutron [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Successfully updated port: 7d4e6499-094f-4aa5-acf9-3a8487c8dba0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:07:21 compute-0 nova_compute[257802]: 2025-10-02 12:07:21.522 2 DEBUG oslo_concurrency.lockutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Acquiring lock "refresh_cache-65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:07:21 compute-0 nova_compute[257802]: 2025-10-02 12:07:21.522 2 DEBUG oslo_concurrency.lockutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Acquired lock "refresh_cache-65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:07:21 compute-0 nova_compute[257802]: 2025-10-02 12:07:21.522 2 DEBUG nova.network.neutron [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:07:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:07:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:21.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:07:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:21.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:21 compute-0 nova_compute[257802]: 2025-10-02 12:07:21.697 2 DEBUG nova.compute.manager [req-61725fc5-4045-4f74-9b4b-08bec8052b81 req-7606659e-2cec-46c1-aa82-f36d6fb35589 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Received event network-changed-7d4e6499-094f-4aa5-acf9-3a8487c8dba0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:07:21 compute-0 nova_compute[257802]: 2025-10-02 12:07:21.698 2 DEBUG nova.compute.manager [req-61725fc5-4045-4f74-9b4b-08bec8052b81 req-7606659e-2cec-46c1-aa82-f36d6fb35589 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Refreshing instance network info cache due to event network-changed-7d4e6499-094f-4aa5-acf9-3a8487c8dba0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:07:21 compute-0 nova_compute[257802]: 2025-10-02 12:07:21.698 2 DEBUG oslo_concurrency.lockutils [req-61725fc5-4045-4f74-9b4b-08bec8052b81 req-7606659e-2cec-46c1-aa82-f36d6fb35589 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:07:21 compute-0 nova_compute[257802]: 2025-10-02 12:07:21.841 2 DEBUG nova.network.neutron [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:07:22 compute-0 nova_compute[257802]: 2025-10-02 12:07:22.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3728058607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:07:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 206 MiB data, 469 MiB used, 21 GiB / 21 GiB avail; 4.4 MiB/s rd, 6.3 MiB/s wr, 207 op/s
Oct 02 12:07:23 compute-0 ceph-mon[73607]: pgmap v1224: 305 pgs: 305 active+clean; 200 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.2 MiB/s wr, 196 op/s
Oct 02 12:07:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/581658048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:07:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:23.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:07:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:23.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:23 compute-0 nova_compute[257802]: 2025-10-02 12:07:23.621 2 DEBUG nova.network.neutron [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Updating instance_info_cache with network_info: [{"id": "7d4e6499-094f-4aa5-acf9-3a8487c8dba0", "address": "fa:16:3e:3c:32:bf", "network": {"id": "0392b00d-9a0f-4fdc-878a-61235e8b04c7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-321386985-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7359a7dad3b849bfbf075b88f2a261b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d4e6499-09", "ovs_interfaceid": "7d4e6499-094f-4aa5-acf9-3a8487c8dba0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:07:23 compute-0 nova_compute[257802]: 2025-10-02 12:07:23.639 2 DEBUG oslo_concurrency.lockutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Releasing lock "refresh_cache-65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:07:23 compute-0 nova_compute[257802]: 2025-10-02 12:07:23.639 2 DEBUG nova.compute.manager [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Instance network_info: |[{"id": "7d4e6499-094f-4aa5-acf9-3a8487c8dba0", "address": "fa:16:3e:3c:32:bf", "network": {"id": "0392b00d-9a0f-4fdc-878a-61235e8b04c7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-321386985-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7359a7dad3b849bfbf075b88f2a261b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d4e6499-09", "ovs_interfaceid": "7d4e6499-094f-4aa5-acf9-3a8487c8dba0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:07:23 compute-0 nova_compute[257802]: 2025-10-02 12:07:23.640 2 DEBUG oslo_concurrency.lockutils [req-61725fc5-4045-4f74-9b4b-08bec8052b81 req-7606659e-2cec-46c1-aa82-f36d6fb35589 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:07:23 compute-0 nova_compute[257802]: 2025-10-02 12:07:23.640 2 DEBUG nova.network.neutron [req-61725fc5-4045-4f74-9b4b-08bec8052b81 req-7606659e-2cec-46c1-aa82-f36d6fb35589 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Refreshing network info cache for port 7d4e6499-094f-4aa5-acf9-3a8487c8dba0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:07:23 compute-0 nova_compute[257802]: 2025-10-02 12:07:23.644 2 DEBUG nova.virt.libvirt.driver [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Start _get_guest_xml network_info=[{"id": "7d4e6499-094f-4aa5-acf9-3a8487c8dba0", "address": "fa:16:3e:3c:32:bf", "network": {"id": "0392b00d-9a0f-4fdc-878a-61235e8b04c7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-321386985-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7359a7dad3b849bfbf075b88f2a261b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d4e6499-09", "ovs_interfaceid": "7d4e6499-094f-4aa5-acf9-3a8487c8dba0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:07:23 compute-0 nova_compute[257802]: 2025-10-02 12:07:23.651 2 WARNING nova.virt.libvirt.driver [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:07:23 compute-0 nova_compute[257802]: 2025-10-02 12:07:23.654 2 DEBUG nova.virt.libvirt.host [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:07:23 compute-0 nova_compute[257802]: 2025-10-02 12:07:23.655 2 DEBUG nova.virt.libvirt.host [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:07:23 compute-0 nova_compute[257802]: 2025-10-02 12:07:23.658 2 DEBUG nova.virt.libvirt.host [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:07:23 compute-0 nova_compute[257802]: 2025-10-02 12:07:23.658 2 DEBUG nova.virt.libvirt.host [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:07:23 compute-0 nova_compute[257802]: 2025-10-02 12:07:23.659 2 DEBUG nova.virt.libvirt.driver [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:07:23 compute-0 nova_compute[257802]: 2025-10-02 12:07:23.659 2 DEBUG nova.virt.hardware [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:07:23 compute-0 nova_compute[257802]: 2025-10-02 12:07:23.660 2 DEBUG nova.virt.hardware [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:07:23 compute-0 nova_compute[257802]: 2025-10-02 12:07:23.660 2 DEBUG nova.virt.hardware [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:07:23 compute-0 nova_compute[257802]: 2025-10-02 12:07:23.660 2 DEBUG nova.virt.hardware [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:07:23 compute-0 nova_compute[257802]: 2025-10-02 12:07:23.660 2 DEBUG nova.virt.hardware [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:07:23 compute-0 nova_compute[257802]: 2025-10-02 12:07:23.661 2 DEBUG nova.virt.hardware [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:07:23 compute-0 nova_compute[257802]: 2025-10-02 12:07:23.661 2 DEBUG nova.virt.hardware [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:07:23 compute-0 nova_compute[257802]: 2025-10-02 12:07:23.661 2 DEBUG nova.virt.hardware [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:07:23 compute-0 nova_compute[257802]: 2025-10-02 12:07:23.661 2 DEBUG nova.virt.hardware [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:07:23 compute-0 nova_compute[257802]: 2025-10-02 12:07:23.662 2 DEBUG nova.virt.hardware [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:07:23 compute-0 nova_compute[257802]: 2025-10-02 12:07:23.662 2 DEBUG nova.virt.hardware [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:07:23 compute-0 nova_compute[257802]: 2025-10-02 12:07:23.664 2 DEBUG oslo_concurrency.processutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:07:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:07:24 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1358117048' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.140 2 DEBUG oslo_concurrency.processutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.181 2 DEBUG nova.storage.rbd_utils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] rbd image 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.187 2 DEBUG oslo_concurrency.processutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:07:24 compute-0 ceph-mon[73607]: pgmap v1225: 305 pgs: 305 active+clean; 206 MiB data, 469 MiB used, 21 GiB / 21 GiB avail; 4.4 MiB/s rd, 6.3 MiB/s wr, 207 op/s
Oct 02 12:07:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1358117048' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:07:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:07:24 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/458361932' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.670 2 DEBUG oslo_concurrency.processutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.674 2 DEBUG nova.virt.libvirt.vif [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:07:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-173512894',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-173512894',id=34,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLWLHYAzCSRCkStdbU+GdVhWIXiwiTci8xggQ9ThyRlprkD/MENcP1zXCe9JELWxtblFvNPabWQ+ZgjaGJX29tNuXgS46PKPgWmCmmQjfV3eqKUfK1wEy2Lz1kDGxf6LzA==',key_name='tempest-keypair-377984943',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7359a7dad3b849bfbf075b88f2a261b4',ramdisk_id='',reservation_id='r-rkyghinj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-1815230933',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-1815230933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:07:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ec17c54e24584f11a5348b68d6e7ca85',uuid=65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7d4e6499-094f-4aa5-acf9-3a8487c8dba0", "address": "fa:16:3e:3c:32:bf", "network": {"id": "0392b00d-9a0f-4fdc-878a-61235e8b04c7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-321386985-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7359a7dad3b849bfbf075b88f2a261b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d4e6499-09", "ovs_interfaceid": "7d4e6499-094f-4aa5-acf9-3a8487c8dba0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.675 2 DEBUG nova.network.os_vif_util [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Converting VIF {"id": "7d4e6499-094f-4aa5-acf9-3a8487c8dba0", "address": "fa:16:3e:3c:32:bf", "network": {"id": "0392b00d-9a0f-4fdc-878a-61235e8b04c7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-321386985-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7359a7dad3b849bfbf075b88f2a261b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d4e6499-09", "ovs_interfaceid": "7d4e6499-094f-4aa5-acf9-3a8487c8dba0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.677 2 DEBUG nova.network.os_vif_util [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:32:bf,bridge_name='br-int',has_traffic_filtering=True,id=7d4e6499-094f-4aa5-acf9-3a8487c8dba0,network=Network(0392b00d-9a0f-4fdc-878a-61235e8b04c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d4e6499-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.680 2 DEBUG nova.objects.instance [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.698 2 DEBUG nova.virt.libvirt.driver [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:07:24 compute-0 nova_compute[257802]:   <uuid>65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632</uuid>
Oct 02 12:07:24 compute-0 nova_compute[257802]:   <name>instance-00000022</name>
Oct 02 12:07:24 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:07:24 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:07:24 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:07:24 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:       <nova:name>tempest-UpdateMultiattachVolumeNegativeTest-server-173512894</nova:name>
Oct 02 12:07:24 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:07:23</nova:creationTime>
Oct 02 12:07:24 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:07:24 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:07:24 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:07:24 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:07:24 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:07:24 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:07:24 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:07:24 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:07:24 compute-0 nova_compute[257802]:         <nova:user uuid="ec17c54e24584f11a5348b68d6e7ca85">tempest-UpdateMultiattachVolumeNegativeTest-1815230933-project-member</nova:user>
Oct 02 12:07:24 compute-0 nova_compute[257802]:         <nova:project uuid="7359a7dad3b849bfbf075b88f2a261b4">tempest-UpdateMultiattachVolumeNegativeTest-1815230933</nova:project>
Oct 02 12:07:24 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:07:24 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:07:24 compute-0 nova_compute[257802]:         <nova:port uuid="7d4e6499-094f-4aa5-acf9-3a8487c8dba0">
Oct 02 12:07:24 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:07:24 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:07:24 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:07:24 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <system>
Oct 02 12:07:24 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:07:24 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:07:24 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:07:24 compute-0 nova_compute[257802]:       <entry name="serial">65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632</entry>
Oct 02 12:07:24 compute-0 nova_compute[257802]:       <entry name="uuid">65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632</entry>
Oct 02 12:07:24 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     </system>
Oct 02 12:07:24 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:07:24 compute-0 nova_compute[257802]:   <os>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:   </os>
Oct 02 12:07:24 compute-0 nova_compute[257802]:   <features>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:   </features>
Oct 02 12:07:24 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:07:24 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:07:24 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:07:24 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632_disk">
Oct 02 12:07:24 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:       </source>
Oct 02 12:07:24 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:07:24 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:07:24 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:07:24 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632_disk.config">
Oct 02 12:07:24 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:       </source>
Oct 02 12:07:24 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:07:24 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:07:24 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:07:24 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:3c:32:bf"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:       <target dev="tap7d4e6499-09"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:07:24 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632/console.log" append="off"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <video>
Oct 02 12:07:24 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     </video>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:07:24 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:07:24 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:07:24 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:07:24 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:07:24 compute-0 nova_compute[257802]: </domain>
Oct 02 12:07:24 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.700 2 DEBUG nova.compute.manager [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Preparing to wait for external event network-vif-plugged-7d4e6499-094f-4aa5-acf9-3a8487c8dba0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.701 2 DEBUG oslo_concurrency.lockutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Acquiring lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.701 2 DEBUG oslo_concurrency.lockutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.702 2 DEBUG oslo_concurrency.lockutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.703 2 DEBUG nova.virt.libvirt.vif [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:07:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-173512894',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-173512894',id=34,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLWLHYAzCSRCkStdbU+GdVhWIXiwiTci8xggQ9ThyRlprkD/MENcP1zXCe9JELWxtblFvNPabWQ+ZgjaGJX29tNuXgS46PKPgWmCmmQjfV3eqKUfK1wEy2Lz1kDGxf6LzA==',key_name='tempest-keypair-377984943',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7359a7dad3b849bfbf075b88f2a261b4',ramdisk_id='',reservation_id='r-rkyghinj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-1815230933',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-1815230933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:07:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ec17c54e24584f11a5348b68d6e7ca85',uuid=65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7d4e6499-094f-4aa5-acf9-3a8487c8dba0", "address": "fa:16:3e:3c:32:bf", "network": {"id": "0392b00d-9a0f-4fdc-878a-61235e8b04c7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-321386985-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7359a7dad3b849bfbf075b88f2a261b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d4e6499-09", "ovs_interfaceid": "7d4e6499-094f-4aa5-acf9-3a8487c8dba0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.703 2 DEBUG nova.network.os_vif_util [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Converting VIF {"id": "7d4e6499-094f-4aa5-acf9-3a8487c8dba0", "address": "fa:16:3e:3c:32:bf", "network": {"id": "0392b00d-9a0f-4fdc-878a-61235e8b04c7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-321386985-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7359a7dad3b849bfbf075b88f2a261b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d4e6499-09", "ovs_interfaceid": "7d4e6499-094f-4aa5-acf9-3a8487c8dba0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.704 2 DEBUG nova.network.os_vif_util [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:32:bf,bridge_name='br-int',has_traffic_filtering=True,id=7d4e6499-094f-4aa5-acf9-3a8487c8dba0,network=Network(0392b00d-9a0f-4fdc-878a-61235e8b04c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d4e6499-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.704 2 DEBUG os_vif [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:32:bf,bridge_name='br-int',has_traffic_filtering=True,id=7d4e6499-094f-4aa5-acf9-3a8487c8dba0,network=Network(0392b00d-9a0f-4fdc-878a-61235e8b04c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d4e6499-09') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.705 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.706 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.708 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7d4e6499-09, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.709 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7d4e6499-09, col_values=(('external_ids', {'iface-id': '7d4e6499-094f-4aa5-acf9-3a8487c8dba0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3c:32:bf', 'vm-uuid': '65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:24 compute-0 NetworkManager[44987]: <info>  [1759406844.7121] manager: (tap7d4e6499-09): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.721 2 INFO os_vif [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:32:bf,bridge_name='br-int',has_traffic_filtering=True,id=7d4e6499-094f-4aa5-acf9-3a8487c8dba0,network=Network(0392b00d-9a0f-4fdc-878a-61235e8b04c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d4e6499-09')
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.779 2 DEBUG nova.virt.libvirt.driver [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.779 2 DEBUG nova.virt.libvirt.driver [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.779 2 DEBUG nova.virt.libvirt.driver [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] No VIF found with MAC fa:16:3e:3c:32:bf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.780 2 INFO nova.virt.libvirt.driver [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Using config drive
Oct 02 12:07:24 compute-0 nova_compute[257802]: 2025-10-02 12:07:24.809 2 DEBUG nova.storage.rbd_utils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] rbd image 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:07:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 195 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.8 MiB/s wr, 202 op/s
Oct 02 12:07:25 compute-0 nova_compute[257802]: 2025-10-02 12:07:25.398 2 INFO nova.virt.libvirt.driver [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Creating config drive at /var/lib/nova/instances/65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632/disk.config
Oct 02 12:07:25 compute-0 nova_compute[257802]: 2025-10-02 12:07:25.403 2 DEBUG oslo_concurrency.processutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt49hnujb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:07:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/458361932' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:07:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2637674219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:25 compute-0 nova_compute[257802]: 2025-10-02 12:07:25.531 2 DEBUG oslo_concurrency.processutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt49hnujb" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:07:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:25.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:25 compute-0 nova_compute[257802]: 2025-10-02 12:07:25.556 2 DEBUG nova.storage.rbd_utils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] rbd image 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:07:25 compute-0 nova_compute[257802]: 2025-10-02 12:07:25.559 2 DEBUG oslo_concurrency.processutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632/disk.config 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:07:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:25.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:25 compute-0 nova_compute[257802]: 2025-10-02 12:07:25.585 2 DEBUG nova.network.neutron [req-61725fc5-4045-4f74-9b4b-08bec8052b81 req-7606659e-2cec-46c1-aa82-f36d6fb35589 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Updated VIF entry in instance network info cache for port 7d4e6499-094f-4aa5-acf9-3a8487c8dba0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:07:25 compute-0 nova_compute[257802]: 2025-10-02 12:07:25.586 2 DEBUG nova.network.neutron [req-61725fc5-4045-4f74-9b4b-08bec8052b81 req-7606659e-2cec-46c1-aa82-f36d6fb35589 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Updating instance_info_cache with network_info: [{"id": "7d4e6499-094f-4aa5-acf9-3a8487c8dba0", "address": "fa:16:3e:3c:32:bf", "network": {"id": "0392b00d-9a0f-4fdc-878a-61235e8b04c7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-321386985-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7359a7dad3b849bfbf075b88f2a261b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d4e6499-09", "ovs_interfaceid": "7d4e6499-094f-4aa5-acf9-3a8487c8dba0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:07:25 compute-0 nova_compute[257802]: 2025-10-02 12:07:25.601 2 DEBUG oslo_concurrency.lockutils [req-61725fc5-4045-4f74-9b4b-08bec8052b81 req-7606659e-2cec-46c1-aa82-f36d6fb35589 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:07:25 compute-0 nova_compute[257802]: 2025-10-02 12:07:25.775 2 DEBUG oslo_concurrency.processutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632/disk.config 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:07:25 compute-0 nova_compute[257802]: 2025-10-02 12:07:25.777 2 INFO nova.virt.libvirt.driver [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Deleting local config drive /var/lib/nova/instances/65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632/disk.config because it was imported into RBD.
Oct 02 12:07:25 compute-0 nova_compute[257802]: 2025-10-02 12:07:25.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:25 compute-0 kernel: tap7d4e6499-09: entered promiscuous mode
Oct 02 12:07:25 compute-0 NetworkManager[44987]: <info>  [1759406845.8313] manager: (tap7d4e6499-09): new Tun device (/org/freedesktop/NetworkManager/Devices/71)
Oct 02 12:07:25 compute-0 nova_compute[257802]: 2025-10-02 12:07:25.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:25 compute-0 ovn_controller[148183]: 2025-10-02T12:07:25Z|00154|binding|INFO|Claiming lport 7d4e6499-094f-4aa5-acf9-3a8487c8dba0 for this chassis.
Oct 02 12:07:25 compute-0 ovn_controller[148183]: 2025-10-02T12:07:25Z|00155|binding|INFO|7d4e6499-094f-4aa5-acf9-3a8487c8dba0: Claiming fa:16:3e:3c:32:bf 10.100.0.5
Oct 02 12:07:25 compute-0 nova_compute[257802]: 2025-10-02 12:07:25.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:25 compute-0 nova_compute[257802]: 2025-10-02 12:07:25.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:25 compute-0 NetworkManager[44987]: <info>  [1759406845.8594] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Oct 02 12:07:25 compute-0 NetworkManager[44987]: <info>  [1759406845.8599] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Oct 02 12:07:25 compute-0 nova_compute[257802]: 2025-10-02 12:07:25.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:25 compute-0 systemd-machined[211836]: New machine qemu-20-instance-00000022.
Oct 02 12:07:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:25.862 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:32:bf 10.100.0.5'], port_security=['fa:16:3e:3c:32:bf 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0392b00d-9a0f-4fdc-878a-61235e8b04c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7359a7dad3b849bfbf075b88f2a261b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '89efda0a-e365-4ab4-b56f-2cbf8e88c8e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=00463c6f-e0da-4800-9774-7f10cd7297fc, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=7d4e6499-094f-4aa5-acf9-3a8487c8dba0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:07:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:25.863 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 7d4e6499-094f-4aa5-acf9-3a8487c8dba0 in datapath 0392b00d-9a0f-4fdc-878a-61235e8b04c7 bound to our chassis
Oct 02 12:07:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:25.865 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0392b00d-9a0f-4fdc-878a-61235e8b04c7
Oct 02 12:07:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:25.878 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6868af0c-9232-4dcb-ba8c-8048c8c3e995]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:07:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:25.879 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0392b00d-91 in ovnmeta-0392b00d-9a0f-4fdc-878a-61235e8b04c7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:07:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:25.881 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0392b00d-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:07:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:25.881 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[23b26b18-cfb1-4676-a987-dde42ba626b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:07:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:25.882 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a480a785-a68a-41ec-97ab-f0780303cbf4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:07:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:25.897 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[5fddc369-abc2-46e1-8072-3a19a0153016]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:07:25 compute-0 systemd[1]: Started Virtual Machine qemu-20-instance-00000022.
Oct 02 12:07:25 compute-0 systemd-udevd[283431]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:07:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:25.923 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a903dc07-921a-409c-a528-5f72e3a65721]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:07:25 compute-0 NetworkManager[44987]: <info>  [1759406845.9272] device (tap7d4e6499-09): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:07:25 compute-0 NetworkManager[44987]: <info>  [1759406845.9279] device (tap7d4e6499-09): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:07:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:25.949 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[affa38fe-2716-4200-a350-aeebe9d583d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:07:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:25.956 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b140a867-9562-4b9a-8674-2d27c7535b04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:07:25 compute-0 NetworkManager[44987]: <info>  [1759406845.9575] manager: (tap0392b00d-90): new Veth device (/org/freedesktop/NetworkManager/Devices/74)
Oct 02 12:07:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:25.982 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[8d9b3651-8a59-4178-a9f8-2291e30e5044]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:07:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:25.984 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[28999cb5-1b66-4d5b-8c6b-05f796d71e54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:07:26 compute-0 NetworkManager[44987]: <info>  [1759406846.0022] device (tap0392b00d-90): carrier: link connected
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:26.006 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[1fe36a5f-93b8-4e16-ba41-3274e829bb86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:26.022 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[84e3b5d4-a55c-48df-b112-de65646bab8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0392b00d-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:2c:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491362, 'reachable_time': 38595, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283461, 'error': None, 'target': 'ovnmeta-0392b00d-9a0f-4fdc-878a-61235e8b04c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:26.036 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[305fa54f-2af8-4f96-bdc7-b7b8007c38c0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe08:2c11'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 491362, 'tstamp': 491362}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283462, 'error': None, 'target': 'ovnmeta-0392b00d-9a0f-4fdc-878a-61235e8b04c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:26.052 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5b046c2f-be39-4292-aee4-91b1054c8f5d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0392b00d-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:2c:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491362, 'reachable_time': 38595, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283463, 'error': None, 'target': 'ovnmeta-0392b00d-9a0f-4fdc-878a-61235e8b04c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:26.079 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5ff4ecf0-094b-46e3-a037-ec28c1e250b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:26.135 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[213c5954-bb38-43dc-a50d-b0e443d86c7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:26.136 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0392b00d-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:26.136 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:26.136 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0392b00d-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:07:26 compute-0 NetworkManager[44987]: <info>  [1759406846.1386] manager: (tap0392b00d-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:26 compute-0 kernel: tap0392b00d-90: entered promiscuous mode
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:26.174 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0392b00d-90, col_values=(('external_ids', {'iface-id': 'a266984f-a69e-4d11-8c6e-e21eb33eff29'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:26.181 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0392b00d-9a0f-4fdc-878a-61235e8b04c7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0392b00d-9a0f-4fdc-878a-61235e8b04c7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:26.181 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[18aa9c87-5798-46da-a45a-921cdbcd529a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:26.182 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-0392b00d-9a0f-4fdc-878a-61235e8b04c7
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/0392b00d-9a0f-4fdc-878a-61235e8b04c7.pid.haproxy
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 0392b00d-9a0f-4fdc-878a-61235e8b04c7
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:26.183 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0392b00d-9a0f-4fdc-878a-61235e8b04c7', 'env', 'PROCESS_TAG=haproxy-0392b00d-9a0f-4fdc-878a-61235e8b04c7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0392b00d-9a0f-4fdc-878a-61235e8b04c7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:07:26 compute-0 ovn_controller[148183]: 2025-10-02T12:07:26Z|00156|binding|INFO|Releasing lport a266984f-a69e-4d11-8c6e-e21eb33eff29 from this chassis (sb_readonly=0)
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:26 compute-0 ovn_controller[148183]: 2025-10-02T12:07:26Z|00157|binding|INFO|Setting lport 7d4e6499-094f-4aa5-acf9-3a8487c8dba0 ovn-installed in OVS
Oct 02 12:07:26 compute-0 ovn_controller[148183]: 2025-10-02T12:07:26Z|00158|binding|INFO|Setting lport 7d4e6499-094f-4aa5-acf9-3a8487c8dba0 up in Southbound
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:26.355 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:26 compute-0 ceph-mon[73607]: pgmap v1226: 305 pgs: 305 active+clean; 195 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.8 MiB/s wr, 202 op/s
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.530 2 DEBUG nova.compute.manager [req-348f507b-5c93-43f6-8182-df3e124c0c02 req-ee137187-1953-4879-8b78-cee34ff65e9e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Received event network-vif-plugged-7d4e6499-094f-4aa5-acf9-3a8487c8dba0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.531 2 DEBUG oslo_concurrency.lockutils [req-348f507b-5c93-43f6-8182-df3e124c0c02 req-ee137187-1953-4879-8b78-cee34ff65e9e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.531 2 DEBUG oslo_concurrency.lockutils [req-348f507b-5c93-43f6-8182-df3e124c0c02 req-ee137187-1953-4879-8b78-cee34ff65e9e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.531 2 DEBUG oslo_concurrency.lockutils [req-348f507b-5c93-43f6-8182-df3e124c0c02 req-ee137187-1953-4879-8b78-cee34ff65e9e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.531 2 DEBUG nova.compute.manager [req-348f507b-5c93-43f6-8182-df3e124c0c02 req-ee137187-1953-4879-8b78-cee34ff65e9e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Processing event network-vif-plugged-7d4e6499-094f-4aa5-acf9-3a8487c8dba0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:07:26 compute-0 podman[283538]: 2025-10-02 12:07:26.624562047 +0000 UTC m=+0.115536004 container create eebe6994002a1d28eb72791972845c40bfd384e50de8281059a09a88244fb534 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0392b00d-9a0f-4fdc-878a-61235e8b04c7, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:07:26 compute-0 podman[283538]: 2025-10-02 12:07:26.529915428 +0000 UTC m=+0.020889415 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:07:26 compute-0 systemd[1]: Started libpod-conmon-eebe6994002a1d28eb72791972845c40bfd384e50de8281059a09a88244fb534.scope.
Oct 02 12:07:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:07:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88a2a3226cd38a43f703d2acfac1613e8c659c8e136e517c98efd6c3da39806f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:07:26 compute-0 podman[283538]: 2025-10-02 12:07:26.691474313 +0000 UTC m=+0.182448290 container init eebe6994002a1d28eb72791972845c40bfd384e50de8281059a09a88244fb534 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0392b00d-9a0f-4fdc-878a-61235e8b04c7, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:07:26 compute-0 podman[283538]: 2025-10-02 12:07:26.69706769 +0000 UTC m=+0.188041647 container start eebe6994002a1d28eb72791972845c40bfd384e50de8281059a09a88244fb534 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0392b00d-9a0f-4fdc-878a-61235e8b04c7, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.704 2 DEBUG nova.compute.manager [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.705 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406846.7052817, 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.706 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] VM Started (Lifecycle Event)
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.708 2 DEBUG nova.virt.libvirt.driver [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.711 2 INFO nova.virt.libvirt.driver [-] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Instance spawned successfully.
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.711 2 DEBUG nova.virt.libvirt.driver [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:07:26 compute-0 neutron-haproxy-ovnmeta-0392b00d-9a0f-4fdc-878a-61235e8b04c7[283554]: [NOTICE]   (283558) : New worker (283560) forked
Oct 02 12:07:26 compute-0 neutron-haproxy-ovnmeta-0392b00d-9a0f-4fdc-878a-61235e8b04c7[283554]: [NOTICE]   (283558) : Loading success.
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.736 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.742 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.746 2 DEBUG nova.virt.libvirt.driver [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.747 2 DEBUG nova.virt.libvirt.driver [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.747 2 DEBUG nova.virt.libvirt.driver [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.748 2 DEBUG nova.virt.libvirt.driver [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.748 2 DEBUG nova.virt.libvirt.driver [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.749 2 DEBUG nova.virt.libvirt.driver [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:26.757 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.764 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.765 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406846.7053835, 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.765 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] VM Paused (Lifecycle Event)
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.791 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.795 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406846.707471, 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.795 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] VM Resumed (Lifecycle Event)
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.813 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.815 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.839 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.842 2 INFO nova.compute.manager [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Took 7.81 seconds to spawn the instance on the hypervisor.
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.843 2 DEBUG nova.compute.manager [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.915 2 INFO nova.compute.manager [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Took 9.05 seconds to build instance.
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:26.925 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:26.925 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:26.926 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:07:26 compute-0 nova_compute[257802]: 2025-10-02 12:07:26.947 2 DEBUG oslo_concurrency.lockutils [None req-25c81dc2-3a01-4661-8261-4cb613d373e5 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:07:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 210 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 6.2 MiB/s wr, 191 op/s
Oct 02 12:07:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:27.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:27.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:07:28 compute-0 ceph-mon[73607]: pgmap v1227: 305 pgs: 305 active+clean; 210 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 6.2 MiB/s wr, 191 op/s
Oct 02 12:07:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4089154726' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:07:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1584143293' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:07:28 compute-0 nova_compute[257802]: 2025-10-02 12:07:28.638 2 DEBUG nova.compute.manager [req-b45c7f5c-f821-4c69-a9b5-062d51bc2d0a req-29be688e-2aa7-4b76-a631-a02b60ded909 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Received event network-vif-plugged-7d4e6499-094f-4aa5-acf9-3a8487c8dba0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:07:28 compute-0 nova_compute[257802]: 2025-10-02 12:07:28.638 2 DEBUG oslo_concurrency.lockutils [req-b45c7f5c-f821-4c69-a9b5-062d51bc2d0a req-29be688e-2aa7-4b76-a631-a02b60ded909 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:07:28 compute-0 nova_compute[257802]: 2025-10-02 12:07:28.638 2 DEBUG oslo_concurrency.lockutils [req-b45c7f5c-f821-4c69-a9b5-062d51bc2d0a req-29be688e-2aa7-4b76-a631-a02b60ded909 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:07:28 compute-0 nova_compute[257802]: 2025-10-02 12:07:28.639 2 DEBUG oslo_concurrency.lockutils [req-b45c7f5c-f821-4c69-a9b5-062d51bc2d0a req-29be688e-2aa7-4b76-a631-a02b60ded909 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:07:28 compute-0 nova_compute[257802]: 2025-10-02 12:07:28.639 2 DEBUG nova.compute.manager [req-b45c7f5c-f821-4c69-a9b5-062d51bc2d0a req-29be688e-2aa7-4b76-a631-a02b60ded909 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] No waiting events found dispatching network-vif-plugged-7d4e6499-094f-4aa5-acf9-3a8487c8dba0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:07:28 compute-0 nova_compute[257802]: 2025-10-02 12:07:28.639 2 WARNING nova.compute.manager [req-b45c7f5c-f821-4c69-a9b5-062d51bc2d0a req-29be688e-2aa7-4b76-a631-a02b60ded909 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Received unexpected event network-vif-plugged-7d4e6499-094f-4aa5-acf9-3a8487c8dba0 for instance with vm_state active and task_state None.
Oct 02 12:07:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 269 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 5.7 MiB/s rd, 7.6 MiB/s wr, 236 op/s
Oct 02 12:07:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:29.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:29.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:29 compute-0 nova_compute[257802]: 2025-10-02 12:07:29.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/800830383' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:07:30 compute-0 nova_compute[257802]: 2025-10-02 12:07:30.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:30 compute-0 ceph-mon[73607]: pgmap v1228: 305 pgs: 305 active+clean; 269 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 5.7 MiB/s rd, 7.6 MiB/s wr, 236 op/s
Oct 02 12:07:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3062239647' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:07:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 280 MiB data, 497 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 5.7 MiB/s wr, 189 op/s
Oct 02 12:07:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:07:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:31.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:07:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:31.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:07:31.759 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:07:31 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4205512689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:32 compute-0 sudo[283571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:32 compute-0 sudo[283571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:32 compute-0 sudo[283571]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:32 compute-0 sudo[283597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:32 compute-0 sudo[283597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:32 compute-0 sudo[283597]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:07:32 compute-0 ceph-mon[73607]: pgmap v1229: 305 pgs: 305 active+clean; 280 MiB data, 497 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 5.7 MiB/s wr, 189 op/s
Oct 02 12:07:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 302 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 5.6 MiB/s wr, 230 op/s
Oct 02 12:07:33 compute-0 nova_compute[257802]: 2025-10-02 12:07:33.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:07:33 compute-0 nova_compute[257802]: 2025-10-02 12:07:33.408 2 DEBUG nova.compute.manager [req-706418dd-0bd7-4d40-81ef-d0af20eadc00 req-1da5465c-88f3-4c62-9dbf-4b52284d7703 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Received event network-changed-7d4e6499-094f-4aa5-acf9-3a8487c8dba0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:07:33 compute-0 nova_compute[257802]: 2025-10-02 12:07:33.408 2 DEBUG nova.compute.manager [req-706418dd-0bd7-4d40-81ef-d0af20eadc00 req-1da5465c-88f3-4c62-9dbf-4b52284d7703 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Refreshing instance network info cache due to event network-changed-7d4e6499-094f-4aa5-acf9-3a8487c8dba0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:07:33 compute-0 nova_compute[257802]: 2025-10-02 12:07:33.408 2 DEBUG oslo_concurrency.lockutils [req-706418dd-0bd7-4d40-81ef-d0af20eadc00 req-1da5465c-88f3-4c62-9dbf-4b52284d7703 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:07:33 compute-0 nova_compute[257802]: 2025-10-02 12:07:33.409 2 DEBUG oslo_concurrency.lockutils [req-706418dd-0bd7-4d40-81ef-d0af20eadc00 req-1da5465c-88f3-4c62-9dbf-4b52284d7703 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:07:33 compute-0 nova_compute[257802]: 2025-10-02 12:07:33.409 2 DEBUG nova.network.neutron [req-706418dd-0bd7-4d40-81ef-d0af20eadc00 req-1da5465c-88f3-4c62-9dbf-4b52284d7703 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Refreshing network info cache for port 7d4e6499-094f-4aa5-acf9-3a8487c8dba0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:07:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:33.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:33.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2092060022' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:07:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/626484903' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:07:34 compute-0 nova_compute[257802]: 2025-10-02 12:07:34.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:34 compute-0 nova_compute[257802]: 2025-10-02 12:07:34.798 2 DEBUG nova.network.neutron [req-706418dd-0bd7-4d40-81ef-d0af20eadc00 req-1da5465c-88f3-4c62-9dbf-4b52284d7703 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Updated VIF entry in instance network info cache for port 7d4e6499-094f-4aa5-acf9-3a8487c8dba0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:07:34 compute-0 nova_compute[257802]: 2025-10-02 12:07:34.798 2 DEBUG nova.network.neutron [req-706418dd-0bd7-4d40-81ef-d0af20eadc00 req-1da5465c-88f3-4c62-9dbf-4b52284d7703 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Updating instance_info_cache with network_info: [{"id": "7d4e6499-094f-4aa5-acf9-3a8487c8dba0", "address": "fa:16:3e:3c:32:bf", "network": {"id": "0392b00d-9a0f-4fdc-878a-61235e8b04c7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-321386985-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7359a7dad3b849bfbf075b88f2a261b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d4e6499-09", "ovs_interfaceid": "7d4e6499-094f-4aa5-acf9-3a8487c8dba0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:07:34 compute-0 nova_compute[257802]: 2025-10-02 12:07:34.814 2 DEBUG oslo_concurrency.lockutils [req-706418dd-0bd7-4d40-81ef-d0af20eadc00 req-1da5465c-88f3-4c62-9dbf-4b52284d7703 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:07:34 compute-0 podman[283623]: 2025-10-02 12:07:34.920874578 +0000 UTC m=+0.061160866 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 02 12:07:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 314 MiB data, 529 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 5.1 MiB/s wr, 241 op/s
Oct 02 12:07:35 compute-0 ceph-mon[73607]: pgmap v1230: 305 pgs: 305 active+clean; 302 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 5.6 MiB/s wr, 230 op/s
Oct 02 12:07:35 compute-0 nova_compute[257802]: 2025-10-02 12:07:35.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:07:35 compute-0 nova_compute[257802]: 2025-10-02 12:07:35.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:07:35 compute-0 nova_compute[257802]: 2025-10-02 12:07:35.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:07:35 compute-0 nova_compute[257802]: 2025-10-02 12:07:35.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:07:35 compute-0 nova_compute[257802]: 2025-10-02 12:07:35.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:07:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:35.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:07:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:35.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:07:35 compute-0 nova_compute[257802]: 2025-10-02 12:07:35.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:36 compute-0 nova_compute[257802]: 2025-10-02 12:07:36.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:07:36 compute-0 ceph-mon[73607]: pgmap v1231: 305 pgs: 305 active+clean; 314 MiB data, 529 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 5.1 MiB/s wr, 241 op/s
Oct 02 12:07:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1234167239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 327 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 5.1 MiB/s wr, 285 op/s
Oct 02 12:07:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/639241301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:37.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:37.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:07:37 compute-0 podman[283643]: 2025-10-02 12:07:37.923782541 +0000 UTC m=+0.056582153 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 12:07:37 compute-0 podman[283642]: 2025-10-02 12:07:37.924097789 +0000 UTC m=+0.062425947 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Oct 02 12:07:38 compute-0 nova_compute[257802]: 2025-10-02 12:07:38.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:07:38 compute-0 nova_compute[257802]: 2025-10-02 12:07:38.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:07:38 compute-0 nova_compute[257802]: 2025-10-02 12:07:38.100 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:07:38 compute-0 ceph-mon[73607]: pgmap v1232: 305 pgs: 305 active+clean; 327 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 5.1 MiB/s wr, 285 op/s
Oct 02 12:07:38 compute-0 nova_compute[257802]: 2025-10-02 12:07:38.577 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:07:38 compute-0 nova_compute[257802]: 2025-10-02 12:07:38.578 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:07:38 compute-0 nova_compute[257802]: 2025-10-02 12:07:38.578 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:07:38 compute-0 nova_compute[257802]: 2025-10-02 12:07:38.578 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:07:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 336 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 9.0 MiB/s rd, 4.9 MiB/s wr, 368 op/s
Oct 02 12:07:39 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4200805076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:39 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2210858305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:39.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:39.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:39 compute-0 nova_compute[257802]: 2025-10-02 12:07:39.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:39 compute-0 ovn_controller[148183]: 2025-10-02T12:07:39Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3c:32:bf 10.100.0.5
Oct 02 12:07:39 compute-0 ovn_controller[148183]: 2025-10-02T12:07:39Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3c:32:bf 10.100.0.5
Oct 02 12:07:40 compute-0 ceph-mon[73607]: pgmap v1233: 305 pgs: 305 active+clean; 336 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 9.0 MiB/s rd, 4.9 MiB/s wr, 368 op/s
Oct 02 12:07:40 compute-0 nova_compute[257802]: 2025-10-02 12:07:40.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 349 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 3.7 MiB/s wr, 341 op/s
Oct 02 12:07:41 compute-0 sudo[283685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:41 compute-0 sudo[283685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:41 compute-0 sudo[283685]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:41 compute-0 nova_compute[257802]: 2025-10-02 12:07:41.224 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Updating instance_info_cache with network_info: [{"id": "7d4e6499-094f-4aa5-acf9-3a8487c8dba0", "address": "fa:16:3e:3c:32:bf", "network": {"id": "0392b00d-9a0f-4fdc-878a-61235e8b04c7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-321386985-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7359a7dad3b849bfbf075b88f2a261b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d4e6499-09", "ovs_interfaceid": "7d4e6499-094f-4aa5-acf9-3a8487c8dba0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:07:41 compute-0 nova_compute[257802]: 2025-10-02 12:07:41.254 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:07:41 compute-0 nova_compute[257802]: 2025-10-02 12:07:41.255 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:07:41 compute-0 nova_compute[257802]: 2025-10-02 12:07:41.255 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:07:41 compute-0 sudo[283710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:07:41 compute-0 sudo[283710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:41 compute-0 sudo[283710]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:41 compute-0 sudo[283735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:41 compute-0 sudo[283735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:41 compute-0 sudo[283735]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:41 compute-0 nova_compute[257802]: 2025-10-02 12:07:41.343 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:07:41 compute-0 nova_compute[257802]: 2025-10-02 12:07:41.344 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:07:41 compute-0 nova_compute[257802]: 2025-10-02 12:07:41.344 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:07:41 compute-0 nova_compute[257802]: 2025-10-02 12:07:41.344 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:07:41 compute-0 nova_compute[257802]: 2025-10-02 12:07:41.345 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:07:41 compute-0 sudo[283760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:07:41 compute-0 sudo[283760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:41.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:07:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:41.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:07:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:07:41 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3774464130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:41 compute-0 nova_compute[257802]: 2025-10-02 12:07:41.799 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:07:41 compute-0 sudo[283760]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:41 compute-0 nova_compute[257802]: 2025-10-02 12:07:41.890 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000022 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:07:41 compute-0 nova_compute[257802]: 2025-10-02 12:07:41.891 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000022 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:07:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:07:42 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:07:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:07:42 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:07:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:07:42 compute-0 nova_compute[257802]: 2025-10-02 12:07:42.079 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:07:42 compute-0 nova_compute[257802]: 2025-10-02 12:07:42.080 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4496MB free_disk=20.840709686279297GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:07:42 compute-0 nova_compute[257802]: 2025-10-02 12:07:42.081 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:07:42 compute-0 nova_compute[257802]: 2025-10-02 12:07:42.081 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:07:42 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:07:42 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev c724886f-a7bc-430f-a60d-d4c666d00604 does not exist
Oct 02 12:07:42 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev f598f426-a2f2-4004-8ed2-37c08b34ad9d does not exist
Oct 02 12:07:42 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev cc77a799-56dc-4e45-94db-a06381cac616 does not exist
Oct 02 12:07:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:07:42 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:07:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:07:42 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:07:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:07:42 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:07:42 compute-0 nova_compute[257802]: 2025-10-02 12:07:42.157 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:07:42 compute-0 nova_compute[257802]: 2025-10-02 12:07:42.157 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:07:42 compute-0 nova_compute[257802]: 2025-10-02 12:07:42.157 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:07:42 compute-0 sudo[283838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:42 compute-0 sudo[283838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:42 compute-0 sudo[283838]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:42 compute-0 nova_compute[257802]: 2025-10-02 12:07:42.192 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:07:42 compute-0 sudo[283869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:07:42 compute-0 sudo[283869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:42 compute-0 sudo[283869]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:42 compute-0 podman[283862]: 2025-10-02 12:07:42.287920925 +0000 UTC m=+0.098931575 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:07:42 compute-0 sudo[283909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:42 compute-0 sudo[283909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:42 compute-0 sudo[283909]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:07:42
Oct 02 12:07:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:07:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:07:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'volumes', 'default.rgw.log', '.mgr', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms']
Oct 02 12:07:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:07:42 compute-0 sudo[283956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:07:42 compute-0 sudo[283956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:42 compute-0 ceph-mon[73607]: pgmap v1234: 305 pgs: 305 active+clean; 349 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 3.7 MiB/s wr, 341 op/s
Oct 02 12:07:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3774464130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:07:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:07:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:07:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:07:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:07:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:07:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:07:42 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2688893788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:42 compute-0 nova_compute[257802]: 2025-10-02 12:07:42.640 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:07:42 compute-0 nova_compute[257802]: 2025-10-02 12:07:42.646 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:07:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:07:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:07:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:07:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:07:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:07:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:07:42 compute-0 nova_compute[257802]: 2025-10-02 12:07:42.677 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:07:42 compute-0 nova_compute[257802]: 2025-10-02 12:07:42.707 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:07:42 compute-0 nova_compute[257802]: 2025-10-02 12:07:42.707 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:07:42 compute-0 podman[284025]: 2025-10-02 12:07:42.757108039 +0000 UTC m=+0.073515961 container create e439f73a944638efc6526b837a51c59d640891bfedcf2165817b025adf99b23e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 12:07:42 compute-0 podman[284025]: 2025-10-02 12:07:42.710297136 +0000 UTC m=+0.026705088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:07:42 compute-0 systemd[1]: Started libpod-conmon-e439f73a944638efc6526b837a51c59d640891bfedcf2165817b025adf99b23e.scope.
Oct 02 12:07:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:07:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:07:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:07:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:07:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:07:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:07:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:07:42 compute-0 podman[284025]: 2025-10-02 12:07:42.881495869 +0000 UTC m=+0.197903841 container init e439f73a944638efc6526b837a51c59d640891bfedcf2165817b025adf99b23e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:07:42 compute-0 podman[284025]: 2025-10-02 12:07:42.889431484 +0000 UTC m=+0.205839416 container start e439f73a944638efc6526b837a51c59d640891bfedcf2165817b025adf99b23e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mccarthy, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Oct 02 12:07:42 compute-0 boring_mccarthy[284042]: 167 167
Oct 02 12:07:42 compute-0 systemd[1]: libpod-e439f73a944638efc6526b837a51c59d640891bfedcf2165817b025adf99b23e.scope: Deactivated successfully.
Oct 02 12:07:42 compute-0 conmon[284042]: conmon e439f73a944638efc652 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e439f73a944638efc6526b837a51c59d640891bfedcf2165817b025adf99b23e.scope/container/memory.events
Oct 02 12:07:42 compute-0 podman[284025]: 2025-10-02 12:07:42.897988094 +0000 UTC m=+0.214396026 container attach e439f73a944638efc6526b837a51c59d640891bfedcf2165817b025adf99b23e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mccarthy, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:07:42 compute-0 podman[284025]: 2025-10-02 12:07:42.898321803 +0000 UTC m=+0.214729745 container died e439f73a944638efc6526b837a51c59d640891bfedcf2165817b025adf99b23e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mccarthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 12:07:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8384dce7c368633c8e55d7f409ed1698edae66786df0d4b96e95f2b3923fa31-merged.mount: Deactivated successfully.
Oct 02 12:07:42 compute-0 podman[284025]: 2025-10-02 12:07:42.970026847 +0000 UTC m=+0.286434769 container remove e439f73a944638efc6526b837a51c59d640891bfedcf2165817b025adf99b23e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mccarthy, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:07:42 compute-0 systemd[1]: libpod-conmon-e439f73a944638efc6526b837a51c59d640891bfedcf2165817b025adf99b23e.scope: Deactivated successfully.
Oct 02 12:07:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 357 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 4.3 MiB/s wr, 330 op/s
Oct 02 12:07:43 compute-0 podman[284064]: 2025-10-02 12:07:43.139230791 +0000 UTC m=+0.043036201 container create 5ddee970af178baa86077599b47d637c7cd98d5dfba7103f2bddfc504c420fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kirch, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 12:07:43 compute-0 systemd[1]: Started libpod-conmon-5ddee970af178baa86077599b47d637c7cd98d5dfba7103f2bddfc504c420fab.scope.
Oct 02 12:07:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:07:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:07:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:07:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:07:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:07:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:07:43 compute-0 podman[284064]: 2025-10-02 12:07:43.120695644 +0000 UTC m=+0.024501074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:07:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d949af7e35f89518baf238b3d635e54c94af359193b8c6c4c1eb45dfa05b38d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:07:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d949af7e35f89518baf238b3d635e54c94af359193b8c6c4c1eb45dfa05b38d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:07:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d949af7e35f89518baf238b3d635e54c94af359193b8c6c4c1eb45dfa05b38d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:07:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d949af7e35f89518baf238b3d635e54c94af359193b8c6c4c1eb45dfa05b38d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:07:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d949af7e35f89518baf238b3d635e54c94af359193b8c6c4c1eb45dfa05b38d2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:07:43 compute-0 podman[284064]: 2025-10-02 12:07:43.242088821 +0000 UTC m=+0.145894251 container init 5ddee970af178baa86077599b47d637c7cd98d5dfba7103f2bddfc504c420fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kirch, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:07:43 compute-0 podman[284064]: 2025-10-02 12:07:43.249725629 +0000 UTC m=+0.153531039 container start 5ddee970af178baa86077599b47d637c7cd98d5dfba7103f2bddfc504c420fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 12:07:43 compute-0 podman[284064]: 2025-10-02 12:07:43.258390812 +0000 UTC m=+0.162196252 container attach 5ddee970af178baa86077599b47d637c7cd98d5dfba7103f2bddfc504c420fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kirch, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 12:07:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/583747885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2688893788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2783669186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:43.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:43.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:44 compute-0 stupefied_kirch[284080]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:07:44 compute-0 stupefied_kirch[284080]: --> relative data size: 1.0
Oct 02 12:07:44 compute-0 stupefied_kirch[284080]: --> All data devices are unavailable
Oct 02 12:07:44 compute-0 systemd[1]: libpod-5ddee970af178baa86077599b47d637c7cd98d5dfba7103f2bddfc504c420fab.scope: Deactivated successfully.
Oct 02 12:07:44 compute-0 conmon[284080]: conmon 5ddee970af178baa8607 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5ddee970af178baa86077599b47d637c7cd98d5dfba7103f2bddfc504c420fab.scope/container/memory.events
Oct 02 12:07:44 compute-0 podman[284064]: 2025-10-02 12:07:44.184017246 +0000 UTC m=+1.087822666 container died 5ddee970af178baa86077599b47d637c7cd98d5dfba7103f2bddfc504c420fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:07:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-d949af7e35f89518baf238b3d635e54c94af359193b8c6c4c1eb45dfa05b38d2-merged.mount: Deactivated successfully.
Oct 02 12:07:44 compute-0 podman[284064]: 2025-10-02 12:07:44.260402945 +0000 UTC m=+1.164208355 container remove 5ddee970af178baa86077599b47d637c7cd98d5dfba7103f2bddfc504c420fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kirch, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:07:44 compute-0 systemd[1]: libpod-conmon-5ddee970af178baa86077599b47d637c7cd98d5dfba7103f2bddfc504c420fab.scope: Deactivated successfully.
Oct 02 12:07:44 compute-0 sudo[283956]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:44 compute-0 sudo[284109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:44 compute-0 sudo[284109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:44 compute-0 sudo[284109]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:44 compute-0 sudo[284134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:07:44 compute-0 sudo[284134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:44 compute-0 sudo[284134]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:44 compute-0 sudo[284160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:44 compute-0 sudo[284160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:44 compute-0 sudo[284160]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:44 compute-0 sudo[284185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:07:44 compute-0 sudo[284185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:44 compute-0 ceph-mon[73607]: pgmap v1235: 305 pgs: 305 active+clean; 357 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 4.3 MiB/s wr, 330 op/s
Oct 02 12:07:44 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3743711792' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:07:44 compute-0 nova_compute[257802]: 2025-10-02 12:07:44.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:44 compute-0 podman[284250]: 2025-10-02 12:07:44.904133284 +0000 UTC m=+0.049074299 container create dbd4720958c3bc5d123d4a32ee2608b9b70d1aa7a08a089d8deb6efdda179490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 12:07:44 compute-0 systemd[1]: Started libpod-conmon-dbd4720958c3bc5d123d4a32ee2608b9b70d1aa7a08a089d8deb6efdda179490.scope.
Oct 02 12:07:44 compute-0 podman[284250]: 2025-10-02 12:07:44.87758569 +0000 UTC m=+0.022526725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:07:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:07:45 compute-0 podman[284250]: 2025-10-02 12:07:45.019999905 +0000 UTC m=+0.164940940 container init dbd4720958c3bc5d123d4a32ee2608b9b70d1aa7a08a089d8deb6efdda179490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_liskov, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:07:45 compute-0 podman[284250]: 2025-10-02 12:07:45.029087288 +0000 UTC m=+0.174028303 container start dbd4720958c3bc5d123d4a32ee2608b9b70d1aa7a08a089d8deb6efdda179490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_liskov, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 12:07:45 compute-0 elated_liskov[284267]: 167 167
Oct 02 12:07:45 compute-0 systemd[1]: libpod-dbd4720958c3bc5d123d4a32ee2608b9b70d1aa7a08a089d8deb6efdda179490.scope: Deactivated successfully.
Oct 02 12:07:45 compute-0 podman[284250]: 2025-10-02 12:07:45.037456224 +0000 UTC m=+0.182397269 container attach dbd4720958c3bc5d123d4a32ee2608b9b70d1aa7a08a089d8deb6efdda179490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_liskov, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 12:07:45 compute-0 podman[284250]: 2025-10-02 12:07:45.038233013 +0000 UTC m=+0.183174028 container died dbd4720958c3bc5d123d4a32ee2608b9b70d1aa7a08a089d8deb6efdda179490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_liskov, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 12:07:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 408 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 5.4 MiB/s wr, 288 op/s
Oct 02 12:07:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e57002e9877458760098be42bff1f8cf733def0e354bbcb22c53131dcb91605-merged.mount: Deactivated successfully.
Oct 02 12:07:45 compute-0 podman[284250]: 2025-10-02 12:07:45.361707382 +0000 UTC m=+0.506648397 container remove dbd4720958c3bc5d123d4a32ee2608b9b70d1aa7a08a089d8deb6efdda179490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_liskov, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:07:45 compute-0 systemd[1]: libpod-conmon-dbd4720958c3bc5d123d4a32ee2608b9b70d1aa7a08a089d8deb6efdda179490.scope: Deactivated successfully.
Oct 02 12:07:45 compute-0 nova_compute[257802]: 2025-10-02 12:07:45.550 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:07:45 compute-0 nova_compute[257802]: 2025-10-02 12:07:45.551 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:07:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:45.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:45.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:45 compute-0 podman[284293]: 2025-10-02 12:07:45.499662285 +0000 UTC m=+0.021250443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:07:45 compute-0 podman[284293]: 2025-10-02 12:07:45.709763475 +0000 UTC m=+0.231351613 container create 8c7a3aae256caa81785955809e8d4b133bbac96022d0c88ff17a1b66573f96a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:07:45 compute-0 nova_compute[257802]: 2025-10-02 12:07:45.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2199587781' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:07:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2012125399' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:07:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/701690202' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:07:45 compute-0 systemd[1]: Started libpod-conmon-8c7a3aae256caa81785955809e8d4b133bbac96022d0c88ff17a1b66573f96a6.scope.
Oct 02 12:07:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:07:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ef22ad090d62e466e1161de0876eb2def82aad084ac4cadcad908fc01082191/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:07:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ef22ad090d62e466e1161de0876eb2def82aad084ac4cadcad908fc01082191/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:07:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ef22ad090d62e466e1161de0876eb2def82aad084ac4cadcad908fc01082191/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:07:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ef22ad090d62e466e1161de0876eb2def82aad084ac4cadcad908fc01082191/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:07:46 compute-0 podman[284293]: 2025-10-02 12:07:46.097257488 +0000 UTC m=+0.618845646 container init 8c7a3aae256caa81785955809e8d4b133bbac96022d0c88ff17a1b66573f96a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_albattani, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 12:07:46 compute-0 podman[284293]: 2025-10-02 12:07:46.106853975 +0000 UTC m=+0.628442113 container start 8c7a3aae256caa81785955809e8d4b133bbac96022d0c88ff17a1b66573f96a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 12:07:46 compute-0 podman[284293]: 2025-10-02 12:07:46.23428812 +0000 UTC m=+0.755876288 container attach 8c7a3aae256caa81785955809e8d4b133bbac96022d0c88ff17a1b66573f96a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]: {
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]:     "1": [
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]:         {
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]:             "devices": [
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]:                 "/dev/loop3"
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]:             ],
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]:             "lv_name": "ceph_lv0",
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]:             "lv_size": "7511998464",
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]:             "name": "ceph_lv0",
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]:             "tags": {
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]:                 "ceph.cluster_name": "ceph",
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]:                 "ceph.crush_device_class": "",
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]:                 "ceph.encrypted": "0",
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]:                 "ceph.osd_id": "1",
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]:                 "ceph.type": "block",
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]:                 "ceph.vdo": "0"
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]:             },
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]:             "type": "block",
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]:             "vg_name": "ceph_vg0"
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]:         }
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]:     ]
Oct 02 12:07:46 compute-0 vigorous_albattani[284310]: }
Oct 02 12:07:46 compute-0 systemd[1]: libpod-8c7a3aae256caa81785955809e8d4b133bbac96022d0c88ff17a1b66573f96a6.scope: Deactivated successfully.
Oct 02 12:07:46 compute-0 podman[284293]: 2025-10-02 12:07:46.924122793 +0000 UTC m=+1.445710941 container died 8c7a3aae256caa81785955809e8d4b133bbac96022d0c88ff17a1b66573f96a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_albattani, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 12:07:46 compute-0 ceph-mon[73607]: pgmap v1236: 305 pgs: 305 active+clean; 408 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 5.4 MiB/s wr, 288 op/s
Oct 02 12:07:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 445 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 6.9 MiB/s wr, 322 op/s
Oct 02 12:07:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:07:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:47.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:07:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:47.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ef22ad090d62e466e1161de0876eb2def82aad084ac4cadcad908fc01082191-merged.mount: Deactivated successfully.
Oct 02 12:07:47 compute-0 podman[284293]: 2025-10-02 12:07:47.843470102 +0000 UTC m=+2.365058240 container remove 8c7a3aae256caa81785955809e8d4b133bbac96022d0c88ff17a1b66573f96a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:07:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:07:47 compute-0 sudo[284185]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:47 compute-0 systemd[1]: libpod-conmon-8c7a3aae256caa81785955809e8d4b133bbac96022d0c88ff17a1b66573f96a6.scope: Deactivated successfully.
Oct 02 12:07:47 compute-0 sudo[284333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:47 compute-0 sudo[284333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:47 compute-0 sudo[284333]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:48 compute-0 sudo[284358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:07:48 compute-0 sudo[284358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:48 compute-0 sudo[284358]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:48 compute-0 sudo[284383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:48 compute-0 sudo[284383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:48 compute-0 sudo[284383]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:48 compute-0 sudo[284408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:07:48 compute-0 sudo[284408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:48 compute-0 podman[284475]: 2025-10-02 12:07:48.496823237 +0000 UTC m=+0.077674412 container create 1cce496e395c1b76c56718d9671498d72fbd8b485b1b3d776c427d71bd7541fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Oct 02 12:07:48 compute-0 podman[284475]: 2025-10-02 12:07:48.438669497 +0000 UTC m=+0.019520692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:07:48 compute-0 systemd[1]: Started libpod-conmon-1cce496e395c1b76c56718d9671498d72fbd8b485b1b3d776c427d71bd7541fd.scope.
Oct 02 12:07:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:07:48 compute-0 podman[284475]: 2025-10-02 12:07:48.705283446 +0000 UTC m=+0.286134661 container init 1cce496e395c1b76c56718d9671498d72fbd8b485b1b3d776c427d71bd7541fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 12:07:48 compute-0 podman[284475]: 2025-10-02 12:07:48.719277051 +0000 UTC m=+0.300128256 container start 1cce496e395c1b76c56718d9671498d72fbd8b485b1b3d776c427d71bd7541fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_stonebraker, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 12:07:48 compute-0 elastic_stonebraker[284491]: 167 167
Oct 02 12:07:48 compute-0 systemd[1]: libpod-1cce496e395c1b76c56718d9671498d72fbd8b485b1b3d776c427d71bd7541fd.scope: Deactivated successfully.
Oct 02 12:07:48 compute-0 podman[284475]: 2025-10-02 12:07:48.742253996 +0000 UTC m=+0.323105191 container attach 1cce496e395c1b76c56718d9671498d72fbd8b485b1b3d776c427d71bd7541fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_stonebraker, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:07:48 compute-0 podman[284475]: 2025-10-02 12:07:48.742738638 +0000 UTC m=+0.323589813 container died 1cce496e395c1b76c56718d9671498d72fbd8b485b1b3d776c427d71bd7541fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_stonebraker, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 12:07:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c4d70dee52006e334f2b1e4f66dc3e96e6fad550cb351012a694cbbc5f058a1-merged.mount: Deactivated successfully.
Oct 02 12:07:49 compute-0 ceph-mon[73607]: pgmap v1237: 305 pgs: 305 active+clean; 445 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 6.9 MiB/s wr, 322 op/s
Oct 02 12:07:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 522 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 11 MiB/s wr, 419 op/s
Oct 02 12:07:49 compute-0 podman[284475]: 2025-10-02 12:07:49.196486293 +0000 UTC m=+0.777337468 container remove 1cce496e395c1b76c56718d9671498d72fbd8b485b1b3d776c427d71bd7541fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:07:49 compute-0 systemd[1]: libpod-conmon-1cce496e395c1b76c56718d9671498d72fbd8b485b1b3d776c427d71bd7541fd.scope: Deactivated successfully.
Oct 02 12:07:49 compute-0 podman[284518]: 2025-10-02 12:07:49.393890189 +0000 UTC m=+0.068299091 container create af1db95a270c62ae872b53b16e2f4baa681ab6bd89289bd5ec87c70df79be0d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 12:07:49 compute-0 systemd[1]: Started libpod-conmon-af1db95a270c62ae872b53b16e2f4baa681ab6bd89289bd5ec87c70df79be0d9.scope.
Oct 02 12:07:49 compute-0 podman[284518]: 2025-10-02 12:07:49.347106678 +0000 UTC m=+0.021515600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:07:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54dd979c39b4b65021b2615324f0823033b95323bbec28762f4a157f5dbeed08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54dd979c39b4b65021b2615324f0823033b95323bbec28762f4a157f5dbeed08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54dd979c39b4b65021b2615324f0823033b95323bbec28762f4a157f5dbeed08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54dd979c39b4b65021b2615324f0823033b95323bbec28762f4a157f5dbeed08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:07:49 compute-0 podman[284518]: 2025-10-02 12:07:49.502755779 +0000 UTC m=+0.177164711 container init af1db95a270c62ae872b53b16e2f4baa681ab6bd89289bd5ec87c70df79be0d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_golick, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:07:49 compute-0 podman[284518]: 2025-10-02 12:07:49.510421767 +0000 UTC m=+0.184830669 container start af1db95a270c62ae872b53b16e2f4baa681ab6bd89289bd5ec87c70df79be0d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 12:07:49 compute-0 podman[284518]: 2025-10-02 12:07:49.529879166 +0000 UTC m=+0.204288088 container attach af1db95a270c62ae872b53b16e2f4baa681ab6bd89289bd5ec87c70df79be0d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 12:07:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:49.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:49.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:49 compute-0 nova_compute[257802]: 2025-10-02 12:07:49.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3071389385' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:07:50 compute-0 nifty_golick[284535]: {
Oct 02 12:07:50 compute-0 nifty_golick[284535]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:07:50 compute-0 nifty_golick[284535]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:07:50 compute-0 nifty_golick[284535]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:07:50 compute-0 nifty_golick[284535]:         "osd_id": 1,
Oct 02 12:07:50 compute-0 nifty_golick[284535]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:07:50 compute-0 nifty_golick[284535]:         "type": "bluestore"
Oct 02 12:07:50 compute-0 nifty_golick[284535]:     }
Oct 02 12:07:50 compute-0 nifty_golick[284535]: }
Oct 02 12:07:50 compute-0 systemd[1]: libpod-af1db95a270c62ae872b53b16e2f4baa681ab6bd89289bd5ec87c70df79be0d9.scope: Deactivated successfully.
Oct 02 12:07:50 compute-0 podman[284518]: 2025-10-02 12:07:50.377724716 +0000 UTC m=+1.052133638 container died af1db95a270c62ae872b53b16e2f4baa681ab6bd89289bd5ec87c70df79be0d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_golick, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:07:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-54dd979c39b4b65021b2615324f0823033b95323bbec28762f4a157f5dbeed08-merged.mount: Deactivated successfully.
Oct 02 12:07:50 compute-0 podman[284518]: 2025-10-02 12:07:50.628432144 +0000 UTC m=+1.302841046 container remove af1db95a270c62ae872b53b16e2f4baa681ab6bd89289bd5ec87c70df79be0d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 12:07:50 compute-0 systemd[1]: libpod-conmon-af1db95a270c62ae872b53b16e2f4baa681ab6bd89289bd5ec87c70df79be0d9.scope: Deactivated successfully.
Oct 02 12:07:50 compute-0 sudo[284408]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:07:50 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:07:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:07:50 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:07:50 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 2e0b3e6a-189b-44c9-bb1d-3605dddb6da3 does not exist
Oct 02 12:07:50 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 25988a60-2603-435f-9b3b-8beac656f524 does not exist
Oct 02 12:07:50 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev b67e54dc-a3b0-49aa-b5bf-1d0fb1009db7 does not exist
Oct 02 12:07:50 compute-0 sudo[284569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:50 compute-0 sudo[284569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:50 compute-0 sudo[284569]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:50 compute-0 nova_compute[257802]: 2025-10-02 12:07:50.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:50 compute-0 nova_compute[257802]: 2025-10-02 12:07:50.805 2 DEBUG oslo_concurrency.lockutils [None req-29b23721-7123-430b-9b73-7146b5fc742b ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Acquiring lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:07:50 compute-0 nova_compute[257802]: 2025-10-02 12:07:50.806 2 DEBUG oslo_concurrency.lockutils [None req-29b23721-7123-430b-9b73-7146b5fc742b ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:07:50 compute-0 sudo[284594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:07:50 compute-0 sudo[284594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:50 compute-0 sudo[284594]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:50 compute-0 nova_compute[257802]: 2025-10-02 12:07:50.880 2 DEBUG nova.objects.instance [None req-29b23721-7123-430b-9b73-7146b5fc742b ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Lazy-loading 'flavor' on Instance uuid 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:07:50 compute-0 nova_compute[257802]: 2025-10-02 12:07:50.965 2 DEBUG oslo_concurrency.lockutils [None req-29b23721-7123-430b-9b73-7146b5fc742b ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:07:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 540 MiB data, 674 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 11 MiB/s wr, 387 op/s
Oct 02 12:07:51 compute-0 ceph-mon[73607]: pgmap v1238: 305 pgs: 305 active+clean; 522 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 11 MiB/s wr, 419 op/s
Oct 02 12:07:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:07:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:07:51 compute-0 nova_compute[257802]: 2025-10-02 12:07:51.207 2 DEBUG oslo_concurrency.lockutils [None req-29b23721-7123-430b-9b73-7146b5fc742b ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Acquiring lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:07:51 compute-0 nova_compute[257802]: 2025-10-02 12:07:51.208 2 DEBUG oslo_concurrency.lockutils [None req-29b23721-7123-430b-9b73-7146b5fc742b ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:07:51 compute-0 nova_compute[257802]: 2025-10-02 12:07:51.208 2 INFO nova.compute.manager [None req-29b23721-7123-430b-9b73-7146b5fc742b ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Attaching volume 6ee55f1c-32cc-4a1e-b7d7-c50f2e3e5291 to /dev/vdb
Oct 02 12:07:51 compute-0 nova_compute[257802]: 2025-10-02 12:07:51.428 2 DEBUG os_brick.utils [None req-29b23721-7123-430b-9b73-7146b5fc742b ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:07:51 compute-0 nova_compute[257802]: 2025-10-02 12:07:51.429 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:07:51 compute-0 nova_compute[257802]: 2025-10-02 12:07:51.441 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:07:51 compute-0 nova_compute[257802]: 2025-10-02 12:07:51.442 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[91ad419f-b1d1-4ba6-86fc-cb7efb9f86ff]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:07:51 compute-0 nova_compute[257802]: 2025-10-02 12:07:51.443 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:07:51 compute-0 nova_compute[257802]: 2025-10-02 12:07:51.451 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:07:51 compute-0 nova_compute[257802]: 2025-10-02 12:07:51.452 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[840e6f48-d937-4f76-a083-a337b9743db9]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:07:51 compute-0 nova_compute[257802]: 2025-10-02 12:07:51.453 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:07:51 compute-0 nova_compute[257802]: 2025-10-02 12:07:51.460 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:07:51 compute-0 nova_compute[257802]: 2025-10-02 12:07:51.460 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[e8b3fdf7-ca8b-455c-84c8-32d68b0eeeaf]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:07:51 compute-0 nova_compute[257802]: 2025-10-02 12:07:51.461 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[cc8ab762-c7e1-4104-9c23-1b92ef80b76b]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:07:51 compute-0 nova_compute[257802]: 2025-10-02 12:07:51.461 2 DEBUG oslo_concurrency.processutils [None req-29b23721-7123-430b-9b73-7146b5fc742b ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:07:51 compute-0 nova_compute[257802]: 2025-10-02 12:07:51.487 2 DEBUG oslo_concurrency.processutils [None req-29b23721-7123-430b-9b73-7146b5fc742b ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] CMD "nvme version" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:07:51 compute-0 nova_compute[257802]: 2025-10-02 12:07:51.489 2 DEBUG os_brick.initiator.connectors.lightos [None req-29b23721-7123-430b-9b73-7146b5fc742b ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:07:51 compute-0 nova_compute[257802]: 2025-10-02 12:07:51.489 2 DEBUG os_brick.initiator.connectors.lightos [None req-29b23721-7123-430b-9b73-7146b5fc742b ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:07:51 compute-0 nova_compute[257802]: 2025-10-02 12:07:51.489 2 DEBUG os_brick.initiator.connectors.lightos [None req-29b23721-7123-430b-9b73-7146b5fc742b ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:07:51 compute-0 nova_compute[257802]: 2025-10-02 12:07:51.490 2 DEBUG os_brick.utils [None req-29b23721-7123-430b-9b73-7146b5fc742b ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] <== get_connector_properties: return (60ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:07:51 compute-0 nova_compute[257802]: 2025-10-02 12:07:51.490 2 DEBUG nova.virt.block_device [None req-29b23721-7123-430b-9b73-7146b5fc742b ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Updating existing volume attachment record: 546c2edf-b7e5-4754-9188-a76c1d348b2b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:07:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:51.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:51.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:52 compute-0 sudo[284627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:52 compute-0 sudo[284627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:52 compute-0 sudo[284627]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:52 compute-0 nova_compute[257802]: 2025-10-02 12:07:52.625 2 DEBUG nova.objects.instance [None req-29b23721-7123-430b-9b73-7146b5fc742b ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Lazy-loading 'flavor' on Instance uuid 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:07:52 compute-0 sudo[284652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:52 compute-0 nova_compute[257802]: 2025-10-02 12:07:52.645 2 DEBUG nova.virt.libvirt.driver [None req-29b23721-7123-430b-9b73-7146b5fc742b ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Attempting to attach volume 6ee55f1c-32cc-4a1e-b7d7-c50f2e3e5291 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 12:07:52 compute-0 sudo[284652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:52 compute-0 sudo[284652]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:52 compute-0 nova_compute[257802]: 2025-10-02 12:07:52.649 2 DEBUG nova.virt.libvirt.guest [None req-29b23721-7123-430b-9b73-7146b5fc742b ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 12:07:52 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:07:52 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-6ee55f1c-32cc-4a1e-b7d7-c50f2e3e5291">
Oct 02 12:07:52 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:07:52 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:07:52 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:07:52 compute-0 nova_compute[257802]:   </source>
Oct 02 12:07:52 compute-0 nova_compute[257802]:   <auth username="openstack">
Oct 02 12:07:52 compute-0 nova_compute[257802]:     <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:07:52 compute-0 nova_compute[257802]:   </auth>
Oct 02 12:07:52 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:07:52 compute-0 nova_compute[257802]:   <serial>6ee55f1c-32cc-4a1e-b7d7-c50f2e3e5291</serial>
Oct 02 12:07:52 compute-0 nova_compute[257802]:   <shareable/>
Oct 02 12:07:52 compute-0 nova_compute[257802]: </disk>
Oct 02 12:07:52 compute-0 nova_compute[257802]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:07:52 compute-0 ceph-mon[73607]: pgmap v1239: 305 pgs: 305 active+clean; 540 MiB data, 674 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 11 MiB/s wr, 387 op/s
Oct 02 12:07:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1382076632' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:07:52 compute-0 nova_compute[257802]: 2025-10-02 12:07:52.781 2 DEBUG nova.virt.libvirt.driver [None req-29b23721-7123-430b-9b73-7146b5fc742b ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:07:52 compute-0 nova_compute[257802]: 2025-10-02 12:07:52.781 2 DEBUG nova.virt.libvirt.driver [None req-29b23721-7123-430b-9b73-7146b5fc742b ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:07:52 compute-0 nova_compute[257802]: 2025-10-02 12:07:52.782 2 DEBUG nova.virt.libvirt.driver [None req-29b23721-7123-430b-9b73-7146b5fc742b ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:07:52 compute-0 nova_compute[257802]: 2025-10-02 12:07:52.782 2 DEBUG nova.virt.libvirt.driver [None req-29b23721-7123-430b-9b73-7146b5fc742b ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] No VIF found with MAC fa:16:3e:3c:32:bf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:07:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:07:52 compute-0 nova_compute[257802]: 2025-10-02 12:07:52.997 2 DEBUG oslo_concurrency.lockutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Acquiring lock "7be1ad09-48b1-425b-9269-d21d36e9ff26" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:07:52 compute-0 nova_compute[257802]: 2025-10-02 12:07:52.997 2 DEBUG oslo_concurrency.lockutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "7be1ad09-48b1-425b-9269-d21d36e9ff26" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:07:52 compute-0 nova_compute[257802]: 2025-10-02 12:07:52.999 2 DEBUG oslo_concurrency.lockutils [None req-29b23721-7123-430b-9b73-7146b5fc742b ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:07:53 compute-0 nova_compute[257802]: 2025-10-02 12:07:53.025 2 DEBUG nova.compute.manager [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:07:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 515 MiB data, 680 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 10 MiB/s wr, 405 op/s
Oct 02 12:07:53 compute-0 nova_compute[257802]: 2025-10-02 12:07:53.099 2 DEBUG oslo_concurrency.lockutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:07:53 compute-0 nova_compute[257802]: 2025-10-02 12:07:53.099 2 DEBUG oslo_concurrency.lockutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:07:53 compute-0 nova_compute[257802]: 2025-10-02 12:07:53.106 2 DEBUG nova.virt.hardware [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:07:53 compute-0 nova_compute[257802]: 2025-10-02 12:07:53.106 2 INFO nova.compute.claims [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:07:53 compute-0 nova_compute[257802]: 2025-10-02 12:07:53.253 2 DEBUG oslo_concurrency.processutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:07:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:53.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:53.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:07:53 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/966006598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:53 compute-0 nova_compute[257802]: 2025-10-02 12:07:53.681 2 DEBUG oslo_concurrency.processutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:07:53 compute-0 nova_compute[257802]: 2025-10-02 12:07:53.689 2 DEBUG nova.compute.provider_tree [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:07:53 compute-0 nova_compute[257802]: 2025-10-02 12:07:53.704 2 DEBUG nova.scheduler.client.report [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:07:53 compute-0 nova_compute[257802]: 2025-10-02 12:07:53.728 2 DEBUG oslo_concurrency.lockutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:07:53 compute-0 nova_compute[257802]: 2025-10-02 12:07:53.744 2 DEBUG oslo_concurrency.lockutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Acquiring lock "291b0dc2-0839-4e82-9af2-898ec4675ef3" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:07:53 compute-0 nova_compute[257802]: 2025-10-02 12:07:53.744 2 DEBUG oslo_concurrency.lockutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "291b0dc2-0839-4e82-9af2-898ec4675ef3" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:07:53 compute-0 nova_compute[257802]: 2025-10-02 12:07:53.769 2 DEBUG nova.compute.manager [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] No node specified, defaulting to compute-0.ctlplane.example.com _get_nodename /usr/lib/python3.9/site-packages/nova/compute/manager.py:10505
Oct 02 12:07:53 compute-0 nova_compute[257802]: 2025-10-02 12:07:53.828 2 DEBUG oslo_concurrency.lockutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "291b0dc2-0839-4e82-9af2-898ec4675ef3" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:07:53 compute-0 nova_compute[257802]: 2025-10-02 12:07:53.829 2 DEBUG nova.compute.manager [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:07:53 compute-0 nova_compute[257802]: 2025-10-02 12:07:53.903 2 DEBUG nova.compute.manager [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:07:53 compute-0 nova_compute[257802]: 2025-10-02 12:07:53.904 2 DEBUG nova.network.neutron [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:07:53 compute-0 nova_compute[257802]: 2025-10-02 12:07:53.951 2 INFO nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:07:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/966006598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/574804538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:53 compute-0 nova_compute[257802]: 2025-10-02 12:07:53.985 2 DEBUG nova.compute.manager [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.089 2 DEBUG nova.compute.manager [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.091 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.091 2 INFO nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Creating image(s)
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.117 2 DEBUG nova.storage.rbd_utils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] rbd image 7be1ad09-48b1-425b-9269-d21d36e9ff26_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.143 2 DEBUG nova.storage.rbd_utils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] rbd image 7be1ad09-48b1-425b-9269-d21d36e9ff26_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.011773639610087474 of space, bias 1.0, pg target 3.532091883026242 quantized to 32 (current 32)
Oct 02 12:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016194252512562814 quantized to 32 (current 32)
Oct 02 12:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8478230998743718 quantized to 32 (current 32)
Oct 02 12:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 02 12:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.0002699042085427136 quantized to 32 (current 32)
Oct 02 12:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Oct 02 12:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.170 2 DEBUG nova.storage.rbd_utils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] rbd image 7be1ad09-48b1-425b-9269-d21d36e9ff26_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.173 2 DEBUG oslo_concurrency.processutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.234 2 DEBUG oslo_concurrency.processutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.235 2 DEBUG oslo_concurrency.lockutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.236 2 DEBUG oslo_concurrency.lockutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.236 2 DEBUG oslo_concurrency.lockutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.262 2 DEBUG nova.storage.rbd_utils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] rbd image 7be1ad09-48b1-425b-9269-d21d36e9ff26_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.266 2 DEBUG oslo_concurrency.processutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 7be1ad09-48b1-425b-9269-d21d36e9ff26_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.337 2 DEBUG nova.network.neutron [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.338 2 DEBUG nova.compute.manager [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.614 2 DEBUG oslo_concurrency.processutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 7be1ad09-48b1-425b-9269-d21d36e9ff26_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.694 2 DEBUG nova.storage.rbd_utils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] resizing rbd image 7be1ad09-48b1-425b-9269-d21d36e9ff26_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.811 2 DEBUG nova.objects.instance [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lazy-loading 'migration_context' on Instance uuid 7be1ad09-48b1-425b-9269-d21d36e9ff26 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.827 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.828 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Ensure instance console log exists: /var/lib/nova/instances/7be1ad09-48b1-425b-9269-d21d36e9ff26/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.828 2 DEBUG oslo_concurrency.lockutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.829 2 DEBUG oslo_concurrency.lockutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.829 2 DEBUG oslo_concurrency.lockutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.831 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.835 2 WARNING nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.838 2 DEBUG nova.virt.libvirt.host [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.839 2 DEBUG nova.virt.libvirt.host [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.841 2 DEBUG nova.virt.libvirt.host [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.842 2 DEBUG nova.virt.libvirt.host [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.843 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.843 2 DEBUG nova.virt.hardware [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.843 2 DEBUG nova.virt.hardware [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.843 2 DEBUG nova.virt.hardware [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.844 2 DEBUG nova.virt.hardware [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.844 2 DEBUG nova.virt.hardware [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.844 2 DEBUG nova.virt.hardware [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.844 2 DEBUG nova.virt.hardware [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.844 2 DEBUG nova.virt.hardware [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.845 2 DEBUG nova.virt.hardware [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.845 2 DEBUG nova.virt.hardware [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.845 2 DEBUG nova.virt.hardware [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:07:54 compute-0 nova_compute[257802]: 2025-10-02 12:07:54.848 2 DEBUG oslo_concurrency.processutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:07:55 compute-0 ceph-mon[73607]: pgmap v1240: 305 pgs: 305 active+clean; 515 MiB data, 680 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 10 MiB/s wr, 405 op/s
Oct 02 12:07:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3063043164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 495 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 9.5 MiB/s wr, 399 op/s
Oct 02 12:07:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:07:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/121369350' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:07:55 compute-0 nova_compute[257802]: 2025-10-02 12:07:55.322 2 DEBUG oslo_concurrency.processutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:07:55 compute-0 nova_compute[257802]: 2025-10-02 12:07:55.347 2 DEBUG nova.storage.rbd_utils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] rbd image 7be1ad09-48b1-425b-9269-d21d36e9ff26_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:07:55 compute-0 nova_compute[257802]: 2025-10-02 12:07:55.350 2 DEBUG oslo_concurrency.processutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:07:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:55.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:55.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:07:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2370217626' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:07:55 compute-0 nova_compute[257802]: 2025-10-02 12:07:55.780 2 DEBUG oslo_concurrency.processutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:07:55 compute-0 nova_compute[257802]: 2025-10-02 12:07:55.782 2 DEBUG nova.objects.instance [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7be1ad09-48b1-425b-9269-d21d36e9ff26 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:07:55 compute-0 nova_compute[257802]: 2025-10-02 12:07:55.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:55 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:07:55 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:07:55 compute-0 nova_compute[257802]: 2025-10-02 12:07:55.804 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:07:55 compute-0 nova_compute[257802]:   <uuid>7be1ad09-48b1-425b-9269-d21d36e9ff26</uuid>
Oct 02 12:07:55 compute-0 nova_compute[257802]:   <name>instance-00000028</name>
Oct 02 12:07:55 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:07:55 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:07:55 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:07:55 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:       <nova:name>tempest-ServersOnMultiNodesTest-server-2057920136-1</nova:name>
Oct 02 12:07:55 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:07:54</nova:creationTime>
Oct 02 12:07:55 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:07:55 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:07:55 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:07:55 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:07:55 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:07:55 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:07:55 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:07:55 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:07:55 compute-0 nova_compute[257802]:         <nova:user uuid="199c0d9541a04c4db07e50bfba9fddb1">tempest-ServersOnMultiNodesTest-10966744-project-member</nova:user>
Oct 02 12:07:55 compute-0 nova_compute[257802]:         <nova:project uuid="62aa9c47ee2841139cd7066168f59650">tempest-ServersOnMultiNodesTest-10966744</nova:project>
Oct 02 12:07:55 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:07:55 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:       <nova:ports/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:07:55 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:07:55 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <system>
Oct 02 12:07:55 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:07:55 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:07:55 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:07:55 compute-0 nova_compute[257802]:       <entry name="serial">7be1ad09-48b1-425b-9269-d21d36e9ff26</entry>
Oct 02 12:07:55 compute-0 nova_compute[257802]:       <entry name="uuid">7be1ad09-48b1-425b-9269-d21d36e9ff26</entry>
Oct 02 12:07:55 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     </system>
Oct 02 12:07:55 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:07:55 compute-0 nova_compute[257802]:   <os>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:   </os>
Oct 02 12:07:55 compute-0 nova_compute[257802]:   <features>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:   </features>
Oct 02 12:07:55 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:07:55 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:07:55 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:07:55 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/7be1ad09-48b1-425b-9269-d21d36e9ff26_disk">
Oct 02 12:07:55 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:       </source>
Oct 02 12:07:55 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:07:55 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:07:55 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:07:55 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/7be1ad09-48b1-425b-9269-d21d36e9ff26_disk.config">
Oct 02 12:07:55 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:       </source>
Oct 02 12:07:55 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:07:55 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:07:55 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:07:55 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/7be1ad09-48b1-425b-9269-d21d36e9ff26/console.log" append="off"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <video>
Oct 02 12:07:55 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     </video>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:07:55 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:07:55 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:07:55 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:07:55 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:07:55 compute-0 nova_compute[257802]: </domain>
Oct 02 12:07:55 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:07:55 compute-0 nova_compute[257802]: 2025-10-02 12:07:55.847 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:07:55 compute-0 nova_compute[257802]: 2025-10-02 12:07:55.847 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:07:55 compute-0 nova_compute[257802]: 2025-10-02 12:07:55.847 2 INFO nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Using config drive
Oct 02 12:07:55 compute-0 nova_compute[257802]: 2025-10-02 12:07:55.872 2 DEBUG nova.storage.rbd_utils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] rbd image 7be1ad09-48b1-425b-9269-d21d36e9ff26_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:07:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/4138778764' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:07:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/4138778764' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:07:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/121369350' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:07:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/46126558' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:07:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2370217626' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:07:56 compute-0 nova_compute[257802]: 2025-10-02 12:07:56.320 2 INFO nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Creating config drive at /var/lib/nova/instances/7be1ad09-48b1-425b-9269-d21d36e9ff26/disk.config
Oct 02 12:07:56 compute-0 nova_compute[257802]: 2025-10-02 12:07:56.324 2 DEBUG oslo_concurrency.processutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7be1ad09-48b1-425b-9269-d21d36e9ff26/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvjmfhjg2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:07:56 compute-0 nova_compute[257802]: 2025-10-02 12:07:56.346 2 DEBUG oslo_concurrency.lockutils [None req-c479769c-8823-49d4-ba5e-4b25e6363b78 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Acquiring lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:07:56 compute-0 nova_compute[257802]: 2025-10-02 12:07:56.347 2 DEBUG oslo_concurrency.lockutils [None req-c479769c-8823-49d4-ba5e-4b25e6363b78 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:07:56 compute-0 nova_compute[257802]: 2025-10-02 12:07:56.361 2 INFO nova.compute.manager [None req-c479769c-8823-49d4-ba5e-4b25e6363b78 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Detaching volume 6ee55f1c-32cc-4a1e-b7d7-c50f2e3e5291
Oct 02 12:07:56 compute-0 nova_compute[257802]: 2025-10-02 12:07:56.454 2 DEBUG oslo_concurrency.processutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7be1ad09-48b1-425b-9269-d21d36e9ff26/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvjmfhjg2" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:07:56 compute-0 nova_compute[257802]: 2025-10-02 12:07:56.481 2 DEBUG nova.storage.rbd_utils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] rbd image 7be1ad09-48b1-425b-9269-d21d36e9ff26_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:07:56 compute-0 nova_compute[257802]: 2025-10-02 12:07:56.484 2 DEBUG oslo_concurrency.processutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7be1ad09-48b1-425b-9269-d21d36e9ff26/disk.config 7be1ad09-48b1-425b-9269-d21d36e9ff26_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:07:56 compute-0 nova_compute[257802]: 2025-10-02 12:07:56.506 2 INFO nova.virt.block_device [None req-c479769c-8823-49d4-ba5e-4b25e6363b78 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Attempting to driver detach volume 6ee55f1c-32cc-4a1e-b7d7-c50f2e3e5291 from mountpoint /dev/vdb
Oct 02 12:07:56 compute-0 nova_compute[257802]: 2025-10-02 12:07:56.515 2 DEBUG nova.virt.libvirt.driver [None req-c479769c-8823-49d4-ba5e-4b25e6363b78 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Attempting to detach device vdb from instance 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:07:56 compute-0 nova_compute[257802]: 2025-10-02 12:07:56.516 2 DEBUG nova.virt.libvirt.guest [None req-c479769c-8823-49d4-ba5e-4b25e6363b78 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:07:56 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:07:56 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-6ee55f1c-32cc-4a1e-b7d7-c50f2e3e5291">
Oct 02 12:07:56 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:07:56 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:07:56 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:07:56 compute-0 nova_compute[257802]:   </source>
Oct 02 12:07:56 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:07:56 compute-0 nova_compute[257802]:   <serial>6ee55f1c-32cc-4a1e-b7d7-c50f2e3e5291</serial>
Oct 02 12:07:56 compute-0 nova_compute[257802]:   <shareable/>
Oct 02 12:07:56 compute-0 nova_compute[257802]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:07:56 compute-0 nova_compute[257802]: </disk>
Oct 02 12:07:56 compute-0 nova_compute[257802]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:07:56 compute-0 nova_compute[257802]: 2025-10-02 12:07:56.538 2 INFO nova.virt.libvirt.driver [None req-c479769c-8823-49d4-ba5e-4b25e6363b78 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Successfully detached device vdb from instance 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632 from the persistent domain config.
Oct 02 12:07:56 compute-0 nova_compute[257802]: 2025-10-02 12:07:56.539 2 DEBUG nova.virt.libvirt.driver [None req-c479769c-8823-49d4-ba5e-4b25e6363b78 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 12:07:56 compute-0 nova_compute[257802]: 2025-10-02 12:07:56.540 2 DEBUG nova.virt.libvirt.guest [None req-c479769c-8823-49d4-ba5e-4b25e6363b78 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:07:56 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:07:56 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-6ee55f1c-32cc-4a1e-b7d7-c50f2e3e5291">
Oct 02 12:07:56 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:07:56 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:07:56 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:07:56 compute-0 nova_compute[257802]:   </source>
Oct 02 12:07:56 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:07:56 compute-0 nova_compute[257802]:   <serial>6ee55f1c-32cc-4a1e-b7d7-c50f2e3e5291</serial>
Oct 02 12:07:56 compute-0 nova_compute[257802]:   <shareable/>
Oct 02 12:07:56 compute-0 nova_compute[257802]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:07:56 compute-0 nova_compute[257802]: </disk>
Oct 02 12:07:56 compute-0 nova_compute[257802]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:07:56 compute-0 nova_compute[257802]: 2025-10-02 12:07:56.641 2 DEBUG nova.virt.libvirt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Received event <DeviceRemovedEvent: 1759406876.6406193, 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 12:07:56 compute-0 nova_compute[257802]: 2025-10-02 12:07:56.642 2 DEBUG nova.virt.libvirt.driver [None req-c479769c-8823-49d4-ba5e-4b25e6363b78 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 12:07:56 compute-0 nova_compute[257802]: 2025-10-02 12:07:56.644 2 INFO nova.virt.libvirt.driver [None req-c479769c-8823-49d4-ba5e-4b25e6363b78 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Successfully detached device vdb from instance 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632 from the live domain config.
Oct 02 12:07:56 compute-0 nova_compute[257802]: 2025-10-02 12:07:56.776 2 DEBUG oslo_concurrency.processutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7be1ad09-48b1-425b-9269-d21d36e9ff26/disk.config 7be1ad09-48b1-425b-9269-d21d36e9ff26_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:07:56 compute-0 nova_compute[257802]: 2025-10-02 12:07:56.776 2 INFO nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Deleting local config drive /var/lib/nova/instances/7be1ad09-48b1-425b-9269-d21d36e9ff26/disk.config because it was imported into RBD.
Oct 02 12:07:56 compute-0 systemd-machined[211836]: New machine qemu-21-instance-00000028.
Oct 02 12:07:56 compute-0 systemd[1]: Started Virtual Machine qemu-21-instance-00000028.
Oct 02 12:07:56 compute-0 nova_compute[257802]: 2025-10-02 12:07:56.961 2 DEBUG nova.objects.instance [None req-c479769c-8823-49d4-ba5e-4b25e6363b78 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Lazy-loading 'flavor' on Instance uuid 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:07:57 compute-0 nova_compute[257802]: 2025-10-02 12:07:57.008 2 DEBUG oslo_concurrency.lockutils [None req-c479769c-8823-49d4-ba5e-4b25e6363b78 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:07:57 compute-0 ceph-mon[73607]: pgmap v1241: 305 pgs: 305 active+clean; 495 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 9.5 MiB/s wr, 399 op/s
Oct 02 12:07:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/421099192' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:07:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 506 MiB data, 662 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 8.5 MiB/s wr, 405 op/s
Oct 02 12:07:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:57.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:57.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:57 compute-0 nova_compute[257802]: 2025-10-02 12:07:57.732 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406877.7324352, 7be1ad09-48b1-425b-9269-d21d36e9ff26 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:07:57 compute-0 nova_compute[257802]: 2025-10-02 12:07:57.733 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] VM Resumed (Lifecycle Event)
Oct 02 12:07:57 compute-0 nova_compute[257802]: 2025-10-02 12:07:57.734 2 DEBUG nova.compute.manager [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:07:57 compute-0 nova_compute[257802]: 2025-10-02 12:07:57.735 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:07:57 compute-0 nova_compute[257802]: 2025-10-02 12:07:57.737 2 INFO nova.virt.libvirt.driver [-] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Instance spawned successfully.
Oct 02 12:07:57 compute-0 nova_compute[257802]: 2025-10-02 12:07:57.738 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:07:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:07:57 compute-0 nova_compute[257802]: 2025-10-02 12:07:57.917 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:07:57 compute-0 nova_compute[257802]: 2025-10-02 12:07:57.920 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:07:57 compute-0 nova_compute[257802]: 2025-10-02 12:07:57.927 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:07:57 compute-0 nova_compute[257802]: 2025-10-02 12:07:57.928 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:07:57 compute-0 nova_compute[257802]: 2025-10-02 12:07:57.928 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:07:57 compute-0 nova_compute[257802]: 2025-10-02 12:07:57.928 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:07:57 compute-0 nova_compute[257802]: 2025-10-02 12:07:57.929 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:07:57 compute-0 nova_compute[257802]: 2025-10-02 12:07:57.929 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:07:58 compute-0 nova_compute[257802]: 2025-10-02 12:07:58.005 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:07:58 compute-0 nova_compute[257802]: 2025-10-02 12:07:58.005 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406877.7346272, 7be1ad09-48b1-425b-9269-d21d36e9ff26 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:07:58 compute-0 nova_compute[257802]: 2025-10-02 12:07:58.005 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] VM Started (Lifecycle Event)
Oct 02 12:07:58 compute-0 ceph-mon[73607]: pgmap v1242: 305 pgs: 305 active+clean; 506 MiB data, 662 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 8.5 MiB/s wr, 405 op/s
Oct 02 12:07:58 compute-0 nova_compute[257802]: 2025-10-02 12:07:58.168 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:07:58 compute-0 nova_compute[257802]: 2025-10-02 12:07:58.172 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:07:58 compute-0 nova_compute[257802]: 2025-10-02 12:07:58.198 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:07:58 compute-0 nova_compute[257802]: 2025-10-02 12:07:58.212 2 INFO nova.compute.manager [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Took 4.12 seconds to spawn the instance on the hypervisor.
Oct 02 12:07:58 compute-0 nova_compute[257802]: 2025-10-02 12:07:58.213 2 DEBUG nova.compute.manager [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:07:58 compute-0 nova_compute[257802]: 2025-10-02 12:07:58.264 2 INFO nova.compute.manager [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Took 5.19 seconds to build instance.
Oct 02 12:07:58 compute-0 nova_compute[257802]: 2025-10-02 12:07:58.286 2 DEBUG oslo_concurrency.lockutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "7be1ad09-48b1-425b-9269-d21d36e9ff26" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.288s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:07:59 compute-0 ovn_controller[148183]: 2025-10-02T12:07:59Z|00159|binding|INFO|Releasing lport a266984f-a69e-4d11-8c6e-e21eb33eff29 from this chassis (sb_readonly=0)
Oct 02 12:07:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 567 MiB data, 676 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 9.7 MiB/s wr, 437 op/s
Oct 02 12:07:59 compute-0 nova_compute[257802]: 2025-10-02 12:07:59.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:07:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:59.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:07:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:07:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:59.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:07:59 compute-0 nova_compute[257802]: 2025-10-02 12:07:59.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.043 2 DEBUG oslo_concurrency.lockutils [None req-2f2afd49-c3bd-4df7-8241-3a0ba8a1f428 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Acquiring lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.044 2 DEBUG oslo_concurrency.lockutils [None req-2f2afd49-c3bd-4df7-8241-3a0ba8a1f428 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.044 2 DEBUG oslo_concurrency.lockutils [None req-2f2afd49-c3bd-4df7-8241-3a0ba8a1f428 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Acquiring lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.044 2 DEBUG oslo_concurrency.lockutils [None req-2f2afd49-c3bd-4df7-8241-3a0ba8a1f428 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.044 2 DEBUG oslo_concurrency.lockutils [None req-2f2afd49-c3bd-4df7-8241-3a0ba8a1f428 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.046 2 INFO nova.compute.manager [None req-2f2afd49-c3bd-4df7-8241-3a0ba8a1f428 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Terminating instance
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.047 2 DEBUG nova.compute.manager [None req-2f2afd49-c3bd-4df7-8241-3a0ba8a1f428 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:08:00 compute-0 kernel: tap7d4e6499-09 (unregistering): left promiscuous mode
Oct 02 12:08:00 compute-0 NetworkManager[44987]: <info>  [1759406880.1681] device (tap7d4e6499-09): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:08:00 compute-0 ovn_controller[148183]: 2025-10-02T12:08:00Z|00160|binding|INFO|Releasing lport 7d4e6499-094f-4aa5-acf9-3a8487c8dba0 from this chassis (sb_readonly=0)
Oct 02 12:08:00 compute-0 ovn_controller[148183]: 2025-10-02T12:08:00Z|00161|binding|INFO|Setting lport 7d4e6499-094f-4aa5-acf9-3a8487c8dba0 down in Southbound
Oct 02 12:08:00 compute-0 ovn_controller[148183]: 2025-10-02T12:08:00Z|00162|binding|INFO|Removing iface tap7d4e6499-09 ovn-installed in OVS
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:00 compute-0 ceph-mon[73607]: pgmap v1243: 305 pgs: 305 active+clean; 567 MiB data, 676 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 9.7 MiB/s wr, 437 op/s
Oct 02 12:08:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:08:00.193 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:32:bf 10.100.0.5'], port_security=['fa:16:3e:3c:32:bf 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0392b00d-9a0f-4fdc-878a-61235e8b04c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7359a7dad3b849bfbf075b88f2a261b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '89efda0a-e365-4ab4-b56f-2cbf8e88c8e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.222'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=00463c6f-e0da-4800-9774-7f10cd7297fc, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=7d4e6499-094f-4aa5-acf9-3a8487c8dba0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:08:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:08:00.194 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 7d4e6499-094f-4aa5-acf9-3a8487c8dba0 in datapath 0392b00d-9a0f-4fdc-878a-61235e8b04c7 unbound from our chassis
Oct 02 12:08:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:08:00.196 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0392b00d-9a0f-4fdc-878a-61235e8b04c7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:08:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:08:00.198 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7c16fb08-6179-4f2e-a96d-6d55f6920c38]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:08:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:08:00.198 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0392b00d-9a0f-4fdc-878a-61235e8b04c7 namespace which is not needed anymore
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:00 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000022.scope: Deactivated successfully.
Oct 02 12:08:00 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000022.scope: Consumed 14.266s CPU time.
Oct 02 12:08:00 compute-0 systemd-machined[211836]: Machine qemu-20-instance-00000022 terminated.
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.281 2 INFO nova.virt.libvirt.driver [-] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Instance destroyed successfully.
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.281 2 DEBUG nova.objects.instance [None req-2f2afd49-c3bd-4df7-8241-3a0ba8a1f428 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Lazy-loading 'resources' on Instance uuid 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.323 2 DEBUG nova.virt.libvirt.vif [None req-2f2afd49-c3bd-4df7-8241-3a0ba8a1f428 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:07:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-173512894',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-173512894',id=34,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLWLHYAzCSRCkStdbU+GdVhWIXiwiTci8xggQ9ThyRlprkD/MENcP1zXCe9JELWxtblFvNPabWQ+ZgjaGJX29tNuXgS46PKPgWmCmmQjfV3eqKUfK1wEy2Lz1kDGxf6LzA==',key_name='tempest-keypair-377984943',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:07:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7359a7dad3b849bfbf075b88f2a261b4',ramdisk_id='',reservation_id='r-rkyghinj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-1815230933',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-1815230933-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:07:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ec17c54e24584f11a5348b68d6e7ca85',uuid=65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7d4e6499-094f-4aa5-acf9-3a8487c8dba0", "address": "fa:16:3e:3c:32:bf", "network": {"id": "0392b00d-9a0f-4fdc-878a-61235e8b04c7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-321386985-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7359a7dad3b849bfbf075b88f2a261b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d4e6499-09", "ovs_interfaceid": "7d4e6499-094f-4aa5-acf9-3a8487c8dba0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.324 2 DEBUG nova.network.os_vif_util [None req-2f2afd49-c3bd-4df7-8241-3a0ba8a1f428 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Converting VIF {"id": "7d4e6499-094f-4aa5-acf9-3a8487c8dba0", "address": "fa:16:3e:3c:32:bf", "network": {"id": "0392b00d-9a0f-4fdc-878a-61235e8b04c7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-321386985-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7359a7dad3b849bfbf075b88f2a261b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d4e6499-09", "ovs_interfaceid": "7d4e6499-094f-4aa5-acf9-3a8487c8dba0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.325 2 DEBUG nova.network.os_vif_util [None req-2f2afd49-c3bd-4df7-8241-3a0ba8a1f428 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3c:32:bf,bridge_name='br-int',has_traffic_filtering=True,id=7d4e6499-094f-4aa5-acf9-3a8487c8dba0,network=Network(0392b00d-9a0f-4fdc-878a-61235e8b04c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d4e6499-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.325 2 DEBUG os_vif [None req-2f2afd49-c3bd-4df7-8241-3a0ba8a1f428 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3c:32:bf,bridge_name='br-int',has_traffic_filtering=True,id=7d4e6499-094f-4aa5-acf9-3a8487c8dba0,network=Network(0392b00d-9a0f-4fdc-878a-61235e8b04c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d4e6499-09') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.327 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7d4e6499-09, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.335 2 INFO os_vif [None req-2f2afd49-c3bd-4df7-8241-3a0ba8a1f428 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3c:32:bf,bridge_name='br-int',has_traffic_filtering=True,id=7d4e6499-094f-4aa5-acf9-3a8487c8dba0,network=Network(0392b00d-9a0f-4fdc-878a-61235e8b04c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d4e6499-09')
Oct 02 12:08:00 compute-0 neutron-haproxy-ovnmeta-0392b00d-9a0f-4fdc-878a-61235e8b04c7[283554]: [NOTICE]   (283558) : haproxy version is 2.8.14-c23fe91
Oct 02 12:08:00 compute-0 neutron-haproxy-ovnmeta-0392b00d-9a0f-4fdc-878a-61235e8b04c7[283554]: [NOTICE]   (283558) : path to executable is /usr/sbin/haproxy
Oct 02 12:08:00 compute-0 neutron-haproxy-ovnmeta-0392b00d-9a0f-4fdc-878a-61235e8b04c7[283554]: [WARNING]  (283558) : Exiting Master process...
Oct 02 12:08:00 compute-0 neutron-haproxy-ovnmeta-0392b00d-9a0f-4fdc-878a-61235e8b04c7[283554]: [WARNING]  (283558) : Exiting Master process...
Oct 02 12:08:00 compute-0 neutron-haproxy-ovnmeta-0392b00d-9a0f-4fdc-878a-61235e8b04c7[283554]: [ALERT]    (283558) : Current worker (283560) exited with code 143 (Terminated)
Oct 02 12:08:00 compute-0 neutron-haproxy-ovnmeta-0392b00d-9a0f-4fdc-878a-61235e8b04c7[283554]: [WARNING]  (283558) : All workers exited. Exiting... (0)
Oct 02 12:08:00 compute-0 systemd[1]: libpod-eebe6994002a1d28eb72791972845c40bfd384e50de8281059a09a88244fb534.scope: Deactivated successfully.
Oct 02 12:08:00 compute-0 podman[285095]: 2025-10-02 12:08:00.388210063 +0000 UTC m=+0.091919922 container died eebe6994002a1d28eb72791972845c40bfd384e50de8281059a09a88244fb534 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0392b00d-9a0f-4fdc-878a-61235e8b04c7, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:08:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eebe6994002a1d28eb72791972845c40bfd384e50de8281059a09a88244fb534-userdata-shm.mount: Deactivated successfully.
Oct 02 12:08:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-88a2a3226cd38a43f703d2acfac1613e8c659c8e136e517c98efd6c3da39806f-merged.mount: Deactivated successfully.
Oct 02 12:08:00 compute-0 podman[285095]: 2025-10-02 12:08:00.750320883 +0000 UTC m=+0.454030732 container cleanup eebe6994002a1d28eb72791972845c40bfd384e50de8281059a09a88244fb534 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0392b00d-9a0f-4fdc-878a-61235e8b04c7, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:08:00 compute-0 systemd[1]: libpod-conmon-eebe6994002a1d28eb72791972845c40bfd384e50de8281059a09a88244fb534.scope: Deactivated successfully.
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.847 2 DEBUG nova.compute.manager [req-88584d44-7a63-4021-a93e-25ce44729a28 req-0a8d0e94-1b27-4f5d-bdc6-bbd816a635a0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Received event network-vif-unplugged-7d4e6499-094f-4aa5-acf9-3a8487c8dba0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.847 2 DEBUG oslo_concurrency.lockutils [req-88584d44-7a63-4021-a93e-25ce44729a28 req-0a8d0e94-1b27-4f5d-bdc6-bbd816a635a0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.848 2 DEBUG oslo_concurrency.lockutils [req-88584d44-7a63-4021-a93e-25ce44729a28 req-0a8d0e94-1b27-4f5d-bdc6-bbd816a635a0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.848 2 DEBUG oslo_concurrency.lockutils [req-88584d44-7a63-4021-a93e-25ce44729a28 req-0a8d0e94-1b27-4f5d-bdc6-bbd816a635a0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.848 2 DEBUG nova.compute.manager [req-88584d44-7a63-4021-a93e-25ce44729a28 req-0a8d0e94-1b27-4f5d-bdc6-bbd816a635a0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] No waiting events found dispatching network-vif-unplugged-7d4e6499-094f-4aa5-acf9-3a8487c8dba0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.848 2 DEBUG nova.compute.manager [req-88584d44-7a63-4021-a93e-25ce44729a28 req-0a8d0e94-1b27-4f5d-bdc6-bbd816a635a0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Received event network-vif-unplugged-7d4e6499-094f-4aa5-acf9-3a8487c8dba0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:08:00 compute-0 podman[285149]: 2025-10-02 12:08:00.868014489 +0000 UTC m=+0.093459661 container remove eebe6994002a1d28eb72791972845c40bfd384e50de8281059a09a88244fb534 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0392b00d-9a0f-4fdc-878a-61235e8b04c7, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 12:08:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:08:00.874 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1e892085-b5e7-42d0-9146-ecd4354affcb]: (4, ('Thu Oct  2 12:08:00 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0392b00d-9a0f-4fdc-878a-61235e8b04c7 (eebe6994002a1d28eb72791972845c40bfd384e50de8281059a09a88244fb534)\neebe6994002a1d28eb72791972845c40bfd384e50de8281059a09a88244fb534\nThu Oct  2 12:08:00 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0392b00d-9a0f-4fdc-878a-61235e8b04c7 (eebe6994002a1d28eb72791972845c40bfd384e50de8281059a09a88244fb534)\neebe6994002a1d28eb72791972845c40bfd384e50de8281059a09a88244fb534\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:08:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:08:00.876 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[bcc4c72c-539e-4ea4-b470-be38b1195aeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:08:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:08:00.877 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0392b00d-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:08:00 compute-0 kernel: tap0392b00d-90: left promiscuous mode
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:08:00.884 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[68b19922-64f5-4b5c-969c-299014a82107]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:08:00 compute-0 nova_compute[257802]: 2025-10-02 12:08:00.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:08:00.913 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[889dc7cc-9c22-4823-88d1-612caa096cb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:08:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:08:00.914 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff01d84-7407-4a9a-8927-963542e1fe31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:08:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:08:00.932 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a39f88e5-b7ca-4939-89f5-524aec6f3e28]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491357, 'reachable_time': 28991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285164, 'error': None, 'target': 'ovnmeta-0392b00d-9a0f-4fdc-878a-61235e8b04c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:08:00 compute-0 systemd[1]: run-netns-ovnmeta\x2d0392b00d\x2d9a0f\x2d4fdc\x2d878a\x2d61235e8b04c7.mount: Deactivated successfully.
Oct 02 12:08:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:08:00.937 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0392b00d-9a0f-4fdc-878a-61235e8b04c7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:08:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:08:00.937 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[63fe997a-37d7-46af-be25-dbac209ca987]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:08:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 574 MiB data, 686 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 5.8 MiB/s wr, 352 op/s
Oct 02 12:08:01 compute-0 anacron[26590]: Job `cron.monthly' started
Oct 02 12:08:01 compute-0 anacron[26590]: Job `cron.monthly' terminated
Oct 02 12:08:01 compute-0 anacron[26590]: Normal exit (3 jobs run)
Oct 02 12:08:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:01.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:01 compute-0 nova_compute[257802]: 2025-10-02 12:08:01.587 2 DEBUG oslo_concurrency.lockutils [None req-552f3bf0-3f87-4d99-874c-0b0f03712dd1 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Acquiring lock "7be1ad09-48b1-425b-9269-d21d36e9ff26" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:08:01 compute-0 nova_compute[257802]: 2025-10-02 12:08:01.589 2 DEBUG oslo_concurrency.lockutils [None req-552f3bf0-3f87-4d99-874c-0b0f03712dd1 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "7be1ad09-48b1-425b-9269-d21d36e9ff26" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:08:01 compute-0 nova_compute[257802]: 2025-10-02 12:08:01.589 2 DEBUG oslo_concurrency.lockutils [None req-552f3bf0-3f87-4d99-874c-0b0f03712dd1 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Acquiring lock "7be1ad09-48b1-425b-9269-d21d36e9ff26-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:08:01 compute-0 nova_compute[257802]: 2025-10-02 12:08:01.590 2 DEBUG oslo_concurrency.lockutils [None req-552f3bf0-3f87-4d99-874c-0b0f03712dd1 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "7be1ad09-48b1-425b-9269-d21d36e9ff26-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:08:01 compute-0 nova_compute[257802]: 2025-10-02 12:08:01.590 2 DEBUG oslo_concurrency.lockutils [None req-552f3bf0-3f87-4d99-874c-0b0f03712dd1 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "7be1ad09-48b1-425b-9269-d21d36e9ff26-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:08:01 compute-0 nova_compute[257802]: 2025-10-02 12:08:01.591 2 INFO nova.compute.manager [None req-552f3bf0-3f87-4d99-874c-0b0f03712dd1 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Terminating instance
Oct 02 12:08:01 compute-0 nova_compute[257802]: 2025-10-02 12:08:01.592 2 DEBUG oslo_concurrency.lockutils [None req-552f3bf0-3f87-4d99-874c-0b0f03712dd1 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Acquiring lock "refresh_cache-7be1ad09-48b1-425b-9269-d21d36e9ff26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:08:01 compute-0 nova_compute[257802]: 2025-10-02 12:08:01.592 2 DEBUG oslo_concurrency.lockutils [None req-552f3bf0-3f87-4d99-874c-0b0f03712dd1 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Acquired lock "refresh_cache-7be1ad09-48b1-425b-9269-d21d36e9ff26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:08:01 compute-0 nova_compute[257802]: 2025-10-02 12:08:01.593 2 DEBUG nova.network.neutron [None req-552f3bf0-3f87-4d99-874c-0b0f03712dd1 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:08:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:01.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:01 compute-0 nova_compute[257802]: 2025-10-02 12:08:01.781 2 DEBUG nova.network.neutron [None req-552f3bf0-3f87-4d99-874c-0b0f03712dd1 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:08:02 compute-0 nova_compute[257802]: 2025-10-02 12:08:02.009 2 DEBUG nova.network.neutron [None req-552f3bf0-3f87-4d99-874c-0b0f03712dd1 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:08:02 compute-0 nova_compute[257802]: 2025-10-02 12:08:02.032 2 DEBUG oslo_concurrency.lockutils [None req-552f3bf0-3f87-4d99-874c-0b0f03712dd1 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Releasing lock "refresh_cache-7be1ad09-48b1-425b-9269-d21d36e9ff26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:08:02 compute-0 nova_compute[257802]: 2025-10-02 12:08:02.032 2 DEBUG nova.compute.manager [None req-552f3bf0-3f87-4d99-874c-0b0f03712dd1 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:08:02 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000028.scope: Deactivated successfully.
Oct 02 12:08:02 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000028.scope: Consumed 5.146s CPU time.
Oct 02 12:08:02 compute-0 systemd-machined[211836]: Machine qemu-21-instance-00000028 terminated.
Oct 02 12:08:02 compute-0 nova_compute[257802]: 2025-10-02 12:08:02.099 2 INFO nova.virt.libvirt.driver [None req-2f2afd49-c3bd-4df7-8241-3a0ba8a1f428 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Deleting instance files /var/lib/nova/instances/65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632_del
Oct 02 12:08:02 compute-0 nova_compute[257802]: 2025-10-02 12:08:02.100 2 INFO nova.virt.libvirt.driver [None req-2f2afd49-c3bd-4df7-8241-3a0ba8a1f428 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Deletion of /var/lib/nova/instances/65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632_del complete
Oct 02 12:08:02 compute-0 nova_compute[257802]: 2025-10-02 12:08:02.250 2 INFO nova.virt.libvirt.driver [-] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Instance destroyed successfully.
Oct 02 12:08:02 compute-0 nova_compute[257802]: 2025-10-02 12:08:02.250 2 DEBUG nova.objects.instance [None req-552f3bf0-3f87-4d99-874c-0b0f03712dd1 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lazy-loading 'resources' on Instance uuid 7be1ad09-48b1-425b-9269-d21d36e9ff26 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:08:02 compute-0 nova_compute[257802]: 2025-10-02 12:08:02.278 2 INFO nova.compute.manager [None req-2f2afd49-c3bd-4df7-8241-3a0ba8a1f428 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Took 2.23 seconds to destroy the instance on the hypervisor.
Oct 02 12:08:02 compute-0 nova_compute[257802]: 2025-10-02 12:08:02.279 2 DEBUG oslo.service.loopingcall [None req-2f2afd49-c3bd-4df7-8241-3a0ba8a1f428 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:08:02 compute-0 nova_compute[257802]: 2025-10-02 12:08:02.279 2 DEBUG nova.compute.manager [-] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:08:02 compute-0 nova_compute[257802]: 2025-10-02 12:08:02.280 2 DEBUG nova.network.neutron [-] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:08:02 compute-0 ceph-mon[73607]: pgmap v1244: 305 pgs: 305 active+clean; 574 MiB data, 686 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 5.8 MiB/s wr, 352 op/s
Oct 02 12:08:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:08:02 compute-0 nova_compute[257802]: 2025-10-02 12:08:02.981 2 DEBUG nova.compute.manager [req-8c1efa82-2327-4247-b6fa-2b140eba63ab req-2eedca2e-fd22-4d24-9bd4-69a289496edc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Received event network-vif-plugged-7d4e6499-094f-4aa5-acf9-3a8487c8dba0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:08:02 compute-0 nova_compute[257802]: 2025-10-02 12:08:02.982 2 DEBUG oslo_concurrency.lockutils [req-8c1efa82-2327-4247-b6fa-2b140eba63ab req-2eedca2e-fd22-4d24-9bd4-69a289496edc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:08:02 compute-0 nova_compute[257802]: 2025-10-02 12:08:02.982 2 DEBUG oslo_concurrency.lockutils [req-8c1efa82-2327-4247-b6fa-2b140eba63ab req-2eedca2e-fd22-4d24-9bd4-69a289496edc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:08:02 compute-0 nova_compute[257802]: 2025-10-02 12:08:02.982 2 DEBUG oslo_concurrency.lockutils [req-8c1efa82-2327-4247-b6fa-2b140eba63ab req-2eedca2e-fd22-4d24-9bd4-69a289496edc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:08:02 compute-0 nova_compute[257802]: 2025-10-02 12:08:02.982 2 DEBUG nova.compute.manager [req-8c1efa82-2327-4247-b6fa-2b140eba63ab req-2eedca2e-fd22-4d24-9bd4-69a289496edc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] No waiting events found dispatching network-vif-plugged-7d4e6499-094f-4aa5-acf9-3a8487c8dba0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:08:02 compute-0 nova_compute[257802]: 2025-10-02 12:08:02.983 2 WARNING nova.compute.manager [req-8c1efa82-2327-4247-b6fa-2b140eba63ab req-2eedca2e-fd22-4d24-9bd4-69a289496edc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Received unexpected event network-vif-plugged-7d4e6499-094f-4aa5-acf9-3a8487c8dba0 for instance with vm_state active and task_state deleting.
Oct 02 12:08:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 564 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 6.9 MiB/s wr, 398 op/s
Oct 02 12:08:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:03.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:03 compute-0 nova_compute[257802]: 2025-10-02 12:08:03.589 2 DEBUG nova.compute.manager [req-5ebbcd30-01e6-42cc-9364-c9c1e36f87e7 req-87c479b6-8013-4cd8-a9f2-151822c89c65 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Received event network-vif-deleted-7d4e6499-094f-4aa5-acf9-3a8487c8dba0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:08:03 compute-0 nova_compute[257802]: 2025-10-02 12:08:03.589 2 INFO nova.compute.manager [req-5ebbcd30-01e6-42cc-9364-c9c1e36f87e7 req-87c479b6-8013-4cd8-a9f2-151822c89c65 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Neutron deleted interface 7d4e6499-094f-4aa5-acf9-3a8487c8dba0; detaching it from the instance and deleting it from the info cache
Oct 02 12:08:03 compute-0 nova_compute[257802]: 2025-10-02 12:08:03.589 2 DEBUG nova.network.neutron [req-5ebbcd30-01e6-42cc-9364-c9c1e36f87e7 req-87c479b6-8013-4cd8-a9f2-151822c89c65 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:08:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:03.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:03 compute-0 nova_compute[257802]: 2025-10-02 12:08:03.666 2 DEBUG nova.network.neutron [-] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:08:03 compute-0 nova_compute[257802]: 2025-10-02 12:08:03.904 2 INFO nova.compute.manager [-] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Took 1.62 seconds to deallocate network for instance.
Oct 02 12:08:03 compute-0 nova_compute[257802]: 2025-10-02 12:08:03.909 2 DEBUG nova.compute.manager [req-5ebbcd30-01e6-42cc-9364-c9c1e36f87e7 req-87c479b6-8013-4cd8-a9f2-151822c89c65 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Detach interface failed, port_id=7d4e6499-094f-4aa5-acf9-3a8487c8dba0, reason: Instance 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:08:04 compute-0 nova_compute[257802]: 2025-10-02 12:08:04.063 2 INFO nova.virt.libvirt.driver [None req-552f3bf0-3f87-4d99-874c-0b0f03712dd1 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Deleting instance files /var/lib/nova/instances/7be1ad09-48b1-425b-9269-d21d36e9ff26_del
Oct 02 12:08:04 compute-0 nova_compute[257802]: 2025-10-02 12:08:04.064 2 INFO nova.virt.libvirt.driver [None req-552f3bf0-3f87-4d99-874c-0b0f03712dd1 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Deletion of /var/lib/nova/instances/7be1ad09-48b1-425b-9269-d21d36e9ff26_del complete
Oct 02 12:08:04 compute-0 nova_compute[257802]: 2025-10-02 12:08:04.374 2 DEBUG oslo_concurrency.lockutils [None req-2f2afd49-c3bd-4df7-8241-3a0ba8a1f428 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:08:04 compute-0 nova_compute[257802]: 2025-10-02 12:08:04.374 2 DEBUG oslo_concurrency.lockutils [None req-2f2afd49-c3bd-4df7-8241-3a0ba8a1f428 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:08:04 compute-0 nova_compute[257802]: 2025-10-02 12:08:04.393 2 INFO nova.compute.manager [None req-552f3bf0-3f87-4d99-874c-0b0f03712dd1 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Took 2.36 seconds to destroy the instance on the hypervisor.
Oct 02 12:08:04 compute-0 nova_compute[257802]: 2025-10-02 12:08:04.394 2 DEBUG oslo.service.loopingcall [None req-552f3bf0-3f87-4d99-874c-0b0f03712dd1 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:08:04 compute-0 nova_compute[257802]: 2025-10-02 12:08:04.395 2 DEBUG nova.compute.manager [-] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:08:04 compute-0 nova_compute[257802]: 2025-10-02 12:08:04.396 2 DEBUG nova.network.neutron [-] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:08:04 compute-0 nova_compute[257802]: 2025-10-02 12:08:04.449 2 DEBUG oslo_concurrency.processutils [None req-2f2afd49-c3bd-4df7-8241-3a0ba8a1f428 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:08:04 compute-0 nova_compute[257802]: 2025-10-02 12:08:04.567 2 DEBUG nova.network.neutron [-] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:08:04 compute-0 nova_compute[257802]: 2025-10-02 12:08:04.588 2 DEBUG nova.network.neutron [-] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:08:04 compute-0 nova_compute[257802]: 2025-10-02 12:08:04.605 2 INFO nova.compute.manager [-] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Took 0.21 seconds to deallocate network for instance.
Oct 02 12:08:04 compute-0 nova_compute[257802]: 2025-10-02 12:08:04.647 2 DEBUG oslo_concurrency.lockutils [None req-552f3bf0-3f87-4d99-874c-0b0f03712dd1 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:08:04 compute-0 ceph-mon[73607]: pgmap v1245: 305 pgs: 305 active+clean; 564 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 6.9 MiB/s wr, 398 op/s
Oct 02 12:08:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:08:04 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2650529993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:04 compute-0 nova_compute[257802]: 2025-10-02 12:08:04.882 2 DEBUG oslo_concurrency.processutils [None req-2f2afd49-c3bd-4df7-8241-3a0ba8a1f428 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:08:04 compute-0 nova_compute[257802]: 2025-10-02 12:08:04.888 2 DEBUG nova.compute.provider_tree [None req-2f2afd49-c3bd-4df7-8241-3a0ba8a1f428 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:08:04 compute-0 nova_compute[257802]: 2025-10-02 12:08:04.916 2 DEBUG nova.scheduler.client.report [None req-2f2afd49-c3bd-4df7-8241-3a0ba8a1f428 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:08:04 compute-0 nova_compute[257802]: 2025-10-02 12:08:04.940 2 DEBUG oslo_concurrency.lockutils [None req-2f2afd49-c3bd-4df7-8241-3a0ba8a1f428 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:08:04 compute-0 nova_compute[257802]: 2025-10-02 12:08:04.944 2 DEBUG oslo_concurrency.lockutils [None req-552f3bf0-3f87-4d99-874c-0b0f03712dd1 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.296s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:08:04 compute-0 nova_compute[257802]: 2025-10-02 12:08:04.969 2 INFO nova.scheduler.client.report [None req-2f2afd49-c3bd-4df7-8241-3a0ba8a1f428 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Deleted allocations for instance 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632
Oct 02 12:08:04 compute-0 nova_compute[257802]: 2025-10-02 12:08:04.993 2 DEBUG oslo_concurrency.processutils [None req-552f3bf0-3f87-4d99-874c-0b0f03712dd1 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:08:05 compute-0 nova_compute[257802]: 2025-10-02 12:08:05.032 2 DEBUG oslo_concurrency.lockutils [None req-2f2afd49-c3bd-4df7-8241-3a0ba8a1f428 ec17c54e24584f11a5348b68d6e7ca85 7359a7dad3b849bfbf075b88f2a261b4 - - default default] Lock "65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.988s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:08:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 534 MiB data, 709 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 7.9 MiB/s wr, 379 op/s
Oct 02 12:08:05 compute-0 nova_compute[257802]: 2025-10-02 12:08:05.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:08:05 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2107244198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:05 compute-0 nova_compute[257802]: 2025-10-02 12:08:05.430 2 DEBUG oslo_concurrency.processutils [None req-552f3bf0-3f87-4d99-874c-0b0f03712dd1 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:08:05 compute-0 nova_compute[257802]: 2025-10-02 12:08:05.435 2 DEBUG nova.compute.provider_tree [None req-552f3bf0-3f87-4d99-874c-0b0f03712dd1 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:08:05 compute-0 nova_compute[257802]: 2025-10-02 12:08:05.450 2 DEBUG nova.scheduler.client.report [None req-552f3bf0-3f87-4d99-874c-0b0f03712dd1 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:08:05 compute-0 nova_compute[257802]: 2025-10-02 12:08:05.481 2 DEBUG oslo_concurrency.lockutils [None req-552f3bf0-3f87-4d99-874c-0b0f03712dd1 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.537s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:08:05 compute-0 nova_compute[257802]: 2025-10-02 12:08:05.512 2 INFO nova.scheduler.client.report [None req-552f3bf0-3f87-4d99-874c-0b0f03712dd1 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Deleted allocations for instance 7be1ad09-48b1-425b-9269-d21d36e9ff26
Oct 02 12:08:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:05.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:05 compute-0 nova_compute[257802]: 2025-10-02 12:08:05.611 2 DEBUG oslo_concurrency.lockutils [None req-552f3bf0-3f87-4d99-874c-0b0f03712dd1 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "7be1ad09-48b1-425b-9269-d21d36e9ff26" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:08:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:05.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:05 compute-0 nova_compute[257802]: 2025-10-02 12:08:05.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:05 compute-0 podman[285235]: 2025-10-02 12:08:05.948348524 +0000 UTC m=+0.092535068 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Oct 02 12:08:06 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2650529993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:06 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2107244198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 473 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 7.9 MiB/s wr, 406 op/s
Oct 02 12:08:07 compute-0 ceph-mon[73607]: pgmap v1246: 305 pgs: 305 active+clean; 534 MiB data, 709 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 7.9 MiB/s wr, 379 op/s
Oct 02 12:08:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1827984313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:07.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:08:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:07.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:08:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:08:07 compute-0 nova_compute[257802]: 2025-10-02 12:08:07.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:08 compute-0 podman[285258]: 2025-10-02 12:08:08.926498908 +0000 UTC m=+0.069802308 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 12:08:08 compute-0 ceph-mon[73607]: pgmap v1247: 305 pgs: 305 active+clean; 473 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 7.9 MiB/s wr, 406 op/s
Oct 02 12:08:08 compute-0 podman[285259]: 2025-10-02 12:08:08.941504378 +0000 UTC m=+0.077802256 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.license=GPLv2)
Oct 02 12:08:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 396 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 6.8 MiB/s wr, 408 op/s
Oct 02 12:08:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:09.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:09.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:08:09.812 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:08:09 compute-0 nova_compute[257802]: 2025-10-02 12:08:09.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:08:09.813 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:08:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/4233098128' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:08:10 compute-0 nova_compute[257802]: 2025-10-02 12:08:10.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:10 compute-0 nova_compute[257802]: 2025-10-02 12:08:10.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 367 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.8 MiB/s wr, 329 op/s
Oct 02 12:08:11 compute-0 ceph-mon[73607]: pgmap v1248: 305 pgs: 305 active+clean; 396 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 6.8 MiB/s wr, 408 op/s
Oct 02 12:08:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:08:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:11.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:08:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:08:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:11.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:08:12 compute-0 ceph-mon[73607]: pgmap v1249: 305 pgs: 305 active+clean; 367 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.8 MiB/s wr, 329 op/s
Oct 02 12:08:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:08:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:08:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:08:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:08:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:08:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:08:12 compute-0 sudo[285302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:08:12 compute-0 sudo[285302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:12 compute-0 sudo[285302]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:12 compute-0 sudo[285333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:08:12 compute-0 sudo[285333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:12 compute-0 sudo[285333]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:08:12 compute-0 podman[285326]: 2025-10-02 12:08:12.910751577 +0000 UTC m=+0.122430723 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 12:08:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 302 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 304 op/s
Oct 02 12:08:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:13.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:13.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3746483311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:14 compute-0 nova_compute[257802]: 2025-10-02 12:08:14.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:08:14.815 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:08:14 compute-0 ceph-mon[73607]: pgmap v1250: 305 pgs: 305 active+clean; 302 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 304 op/s
Oct 02 12:08:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/240215465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 240 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 767 KiB/s rd, 1.1 MiB/s wr, 196 op/s
Oct 02 12:08:15 compute-0 nova_compute[257802]: 2025-10-02 12:08:15.280 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406880.2783515, 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:08:15 compute-0 nova_compute[257802]: 2025-10-02 12:08:15.280 2 INFO nova.compute.manager [-] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] VM Stopped (Lifecycle Event)
Oct 02 12:08:15 compute-0 nova_compute[257802]: 2025-10-02 12:08:15.306 2 DEBUG nova.compute.manager [None req-29ea50bf-5216-4fa2-baf8-5a495dd2d4be - - - - - -] [instance: 65bbc6a8-390f-4a9a-b86e-8dcbcaf1b632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:08:15 compute-0 nova_compute[257802]: 2025-10-02 12:08:15.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:15.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:15.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:15 compute-0 nova_compute[257802]: 2025-10-02 12:08:15.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/984186566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 220 MiB data, 520 MiB used, 20 GiB / 21 GiB avail; 142 KiB/s rd, 95 KiB/s wr, 158 op/s
Oct 02 12:08:17 compute-0 ceph-mon[73607]: pgmap v1251: 305 pgs: 305 active+clean; 240 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 767 KiB/s rd, 1.1 MiB/s wr, 196 op/s
Oct 02 12:08:17 compute-0 nova_compute[257802]: 2025-10-02 12:08:17.249 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406882.2476234, 7be1ad09-48b1-425b-9269-d21d36e9ff26 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:08:17 compute-0 nova_compute[257802]: 2025-10-02 12:08:17.250 2 INFO nova.compute.manager [-] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] VM Stopped (Lifecycle Event)
Oct 02 12:08:17 compute-0 nova_compute[257802]: 2025-10-02 12:08:17.281 2 DEBUG nova.compute.manager [None req-bdbf1c71-e434-4f15-92c6-0b8df83b735d - - - - - -] [instance: 7be1ad09-48b1-425b-9269-d21d36e9ff26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:08:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:17.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:17.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:08:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 201 MiB data, 520 MiB used, 20 GiB / 21 GiB avail; 96 KiB/s rd, 18 KiB/s wr, 134 op/s
Oct 02 12:08:19 compute-0 ceph-mon[73607]: pgmap v1252: 305 pgs: 305 active+clean; 220 MiB data, 520 MiB used, 20 GiB / 21 GiB avail; 142 KiB/s rd, 95 KiB/s wr, 158 op/s
Oct 02 12:08:19 compute-0 nova_compute[257802]: 2025-10-02 12:08:19.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:19 compute-0 nova_compute[257802]: 2025-10-02 12:08:19.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:19.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:19.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:20 compute-0 nova_compute[257802]: 2025-10-02 12:08:20.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:20 compute-0 nova_compute[257802]: 2025-10-02 12:08:20.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 154 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 72 KiB/s rd, 8.9 KiB/s wr, 104 op/s
Oct 02 12:08:21 compute-0 ceph-mon[73607]: pgmap v1253: 305 pgs: 305 active+clean; 201 MiB data, 520 MiB used, 20 GiB / 21 GiB avail; 96 KiB/s rd, 18 KiB/s wr, 134 op/s
Oct 02 12:08:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:21.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:21.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:22 compute-0 ceph-mon[73607]: pgmap v1254: 305 pgs: 305 active+clean; 154 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 72 KiB/s rd, 8.9 KiB/s wr, 104 op/s
Oct 02 12:08:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2127436171' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:08:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2127436171' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:08:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:08:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 76 MiB data, 454 MiB used, 21 GiB / 21 GiB avail; 68 KiB/s rd, 6.7 KiB/s wr, 97 op/s
Oct 02 12:08:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2303983804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/827202599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1446007030' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:08:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1446007030' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:08:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:23.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:08:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:23.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:08:24 compute-0 ceph-mon[73607]: pgmap v1255: 305 pgs: 305 active+clean; 76 MiB data, 454 MiB used, 21 GiB / 21 GiB avail; 68 KiB/s rd, 6.7 KiB/s wr, 97 op/s
Oct 02 12:08:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 64 MiB data, 433 MiB used, 21 GiB / 21 GiB avail; 55 KiB/s rd, 6.7 KiB/s wr, 78 op/s
Oct 02 12:08:25 compute-0 nova_compute[257802]: 2025-10-02 12:08:25.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Oct 02 12:08:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Oct 02 12:08:25 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Oct 02 12:08:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:25.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:25.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:25 compute-0 nova_compute[257802]: 2025-10-02 12:08:25.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:26 compute-0 ceph-mon[73607]: pgmap v1256: 305 pgs: 305 active+clean; 64 MiB data, 433 MiB used, 21 GiB / 21 GiB avail; 55 KiB/s rd, 6.7 KiB/s wr, 78 op/s
Oct 02 12:08:26 compute-0 ceph-mon[73607]: osdmap e163: 3 total, 3 up, 3 in
Oct 02 12:08:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:08:26.926 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:08:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:08:26.926 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:08:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:08:26.926 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:08:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 62 MiB data, 427 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 6.2 KiB/s wr, 92 op/s
Oct 02 12:08:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:27.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:27.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:08:28 compute-0 ceph-mon[73607]: pgmap v1258: 305 pgs: 305 active+clean; 62 MiB data, 427 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 6.2 KiB/s wr, 92 op/s
Oct 02 12:08:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2285490029' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:08:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2285490029' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:08:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 46 MiB data, 411 MiB used, 21 GiB / 21 GiB avail; 74 KiB/s rd, 6.0 KiB/s wr, 103 op/s
Oct 02 12:08:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:29.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:08:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:29.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:08:30 compute-0 nova_compute[257802]: 2025-10-02 12:08:30.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:30 compute-0 nova_compute[257802]: 2025-10-02 12:08:30.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:30 compute-0 ceph-mon[73607]: pgmap v1259: 305 pgs: 305 active+clean; 46 MiB data, 411 MiB used, 21 GiB / 21 GiB avail; 74 KiB/s rd, 6.0 KiB/s wr, 103 op/s
Oct 02 12:08:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 41 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 72 KiB/s rd, 3.6 KiB/s wr, 98 op/s
Oct 02 12:08:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:31.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:31.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:08:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Oct 02 12:08:32 compute-0 sudo[285390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:08:32 compute-0 sudo[285390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Oct 02 12:08:32 compute-0 sudo[285390]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:32 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Oct 02 12:08:33 compute-0 sudo[285415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:08:33 compute-0 sudo[285415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:33 compute-0 sudo[285415]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 41 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 45 KiB/s rd, 4.0 KiB/s wr, 63 op/s
Oct 02 12:08:33 compute-0 ceph-mon[73607]: pgmap v1260: 305 pgs: 305 active+clean; 41 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 72 KiB/s rd, 3.6 KiB/s wr, 98 op/s
Oct 02 12:08:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:33.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:33.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:34 compute-0 ceph-mon[73607]: osdmap e164: 3 total, 3 up, 3 in
Oct 02 12:08:34 compute-0 ceph-mon[73607]: pgmap v1262: 305 pgs: 305 active+clean; 41 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 45 KiB/s rd, 4.0 KiB/s wr, 63 op/s
Oct 02 12:08:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 41 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 1.8 KiB/s wr, 43 op/s
Oct 02 12:08:35 compute-0 nova_compute[257802]: 2025-10-02 12:08:35.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:08:35 compute-0 nova_compute[257802]: 2025-10-02 12:08:35.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:35.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:08:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:35.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:08:35 compute-0 nova_compute[257802]: 2025-10-02 12:08:35.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:36 compute-0 nova_compute[257802]: 2025-10-02 12:08:36.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:08:36 compute-0 nova_compute[257802]: 2025-10-02 12:08:36.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:08:36 compute-0 nova_compute[257802]: 2025-10-02 12:08:36.097 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:08:36 compute-0 ceph-mon[73607]: pgmap v1263: 305 pgs: 305 active+clean; 41 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 1.8 KiB/s wr, 43 op/s
Oct 02 12:08:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3701618906' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:36 compute-0 podman[285442]: 2025-10-02 12:08:36.916563766 +0000 UTC m=+0.049907099 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 02 12:08:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 41 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 1.7 KiB/s wr, 41 op/s
Oct 02 12:08:37 compute-0 nova_compute[257802]: 2025-10-02 12:08:37.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:08:37 compute-0 nova_compute[257802]: 2025-10-02 12:08:37.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:08:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/920380949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:37.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:37.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:08:38 compute-0 nova_compute[257802]: 2025-10-02 12:08:38.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:08:38 compute-0 nova_compute[257802]: 2025-10-02 12:08:38.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:08:38 compute-0 ceph-mon[73607]: pgmap v1264: 305 pgs: 305 active+clean; 41 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 1.7 KiB/s wr, 41 op/s
Oct 02 12:08:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 41 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 2.4 KiB/s rd, 307 B/s wr, 4 op/s
Oct 02 12:08:39 compute-0 nova_compute[257802]: 2025-10-02 12:08:39.144 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:08:39 compute-0 nova_compute[257802]: 2025-10-02 12:08:39.144 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:08:39 compute-0 nova_compute[257802]: 2025-10-02 12:08:39.144 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:08:39 compute-0 nova_compute[257802]: 2025-10-02 12:08:39.156 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:08:39 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3749972049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:08:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:39.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:08:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:39.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:39 compute-0 podman[285464]: 2025-10-02 12:08:39.92276172 +0000 UTC m=+0.058485021 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:08:39 compute-0 podman[285463]: 2025-10-02 12:08:39.92276734 +0000 UTC m=+0.058477440 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd)
Oct 02 12:08:40 compute-0 nova_compute[257802]: 2025-10-02 12:08:40.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:40 compute-0 ceph-mon[73607]: pgmap v1265: 305 pgs: 305 active+clean; 41 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 2.4 KiB/s rd, 307 B/s wr, 4 op/s
Oct 02 12:08:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2462526934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:40 compute-0 nova_compute[257802]: 2025-10-02 12:08:40.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 41 MiB data, 407 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:41 compute-0 nova_compute[257802]: 2025-10-02 12:08:41.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:08:41 compute-0 nova_compute[257802]: 2025-10-02 12:08:41.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:08:41 compute-0 nova_compute[257802]: 2025-10-02 12:08:41.122 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:08:41 compute-0 nova_compute[257802]: 2025-10-02 12:08:41.122 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:08:41 compute-0 nova_compute[257802]: 2025-10-02 12:08:41.122 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:08:41 compute-0 nova_compute[257802]: 2025-10-02 12:08:41.123 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:08:41 compute-0 nova_compute[257802]: 2025-10-02 12:08:41.123 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:08:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:08:41 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1609910472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:41 compute-0 nova_compute[257802]: 2025-10-02 12:08:41.600 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:08:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:08:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:41.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:08:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:41.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1609910472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:41 compute-0 nova_compute[257802]: 2025-10-02 12:08:41.773 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:08:41 compute-0 nova_compute[257802]: 2025-10-02 12:08:41.775 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4705MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:08:41 compute-0 nova_compute[257802]: 2025-10-02 12:08:41.775 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:08:41 compute-0 nova_compute[257802]: 2025-10-02 12:08:41.775 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:08:42 compute-0 nova_compute[257802]: 2025-10-02 12:08:42.069 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:08:42 compute-0 nova_compute[257802]: 2025-10-02 12:08:42.069 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:08:42 compute-0 nova_compute[257802]: 2025-10-02 12:08:42.147 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing inventories for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:08:42 compute-0 nova_compute[257802]: 2025-10-02 12:08:42.220 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating ProviderTree inventory for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:08:42 compute-0 nova_compute[257802]: 2025-10-02 12:08:42.220 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating inventory in ProviderTree for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:08:42 compute-0 nova_compute[257802]: 2025-10-02 12:08:42.233 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing aggregate associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:08:42 compute-0 nova_compute[257802]: 2025-10-02 12:08:42.270 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing trait associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:08:42 compute-0 nova_compute[257802]: 2025-10-02 12:08:42.293 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:08:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:08:42
Oct 02 12:08:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:08:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:08:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'backups', '.mgr', 'volumes', 'vms', 'cephfs.cephfs.data', 'images', '.rgw.root', 'default.rgw.log', 'default.rgw.meta']
Oct 02 12:08:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:08:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:08:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:08:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:08:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:08:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:08:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:08:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:08:42 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3240915106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:42 compute-0 nova_compute[257802]: 2025-10-02 12:08:42.714 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:08:42 compute-0 nova_compute[257802]: 2025-10-02 12:08:42.720 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:08:42 compute-0 nova_compute[257802]: 2025-10-02 12:08:42.740 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:08:42 compute-0 nova_compute[257802]: 2025-10-02 12:08:42.767 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:08:42 compute-0 nova_compute[257802]: 2025-10-02 12:08:42.768 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.992s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:08:42 compute-0 nova_compute[257802]: 2025-10-02 12:08:42.768 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:08:42 compute-0 nova_compute[257802]: 2025-10-02 12:08:42.768 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:08:42 compute-0 ceph-mon[73607]: pgmap v1266: 305 pgs: 305 active+clean; 41 MiB data, 407 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3240915106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:42 compute-0 nova_compute[257802]: 2025-10-02 12:08:42.780 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:08:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:08:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:08:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:08:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:08:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:08:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:08:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 41 MiB data, 407 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:08:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:08:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:08:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:08:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:08:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:43.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:43.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:43 compute-0 podman[285551]: 2025-10-02 12:08:43.931641734 +0000 UTC m=+0.076315528 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 12:08:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Oct 02 12:08:44 compute-0 ceph-mon[73607]: pgmap v1267: 305 pgs: 305 active+clean; 41 MiB data, 407 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Oct 02 12:08:44 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Oct 02 12:08:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 41 MiB data, 407 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:45 compute-0 nova_compute[257802]: 2025-10-02 12:08:45.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:08:45 compute-0 nova_compute[257802]: 2025-10-02 12:08:45.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:08:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:45.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:08:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:45.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:45 compute-0 nova_compute[257802]: 2025-10-02 12:08:45.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:45 compute-0 ceph-mon[73607]: osdmap e165: 3 total, 3 up, 3 in
Oct 02 12:08:46 compute-0 nova_compute[257802]: 2025-10-02 12:08:46.110 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:08:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Oct 02 12:08:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 41 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 8.4 KiB/s rd, 818 B/s wr, 11 op/s
Oct 02 12:08:47 compute-0 ceph-mon[73607]: pgmap v1269: 305 pgs: 305 active+clean; 41 MiB data, 407 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Oct 02 12:08:47 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Oct 02 12:08:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:08:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:47.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:08:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:47.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:08:48 compute-0 ceph-mon[73607]: pgmap v1270: 305 pgs: 305 active+clean; 41 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 8.4 KiB/s rd, 818 B/s wr, 11 op/s
Oct 02 12:08:48 compute-0 ceph-mon[73607]: osdmap e166: 3 total, 3 up, 3 in
Oct 02 12:08:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 41 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 2.6 KiB/s wr, 21 op/s
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:08:49.549873) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406929549911, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2448, "num_deletes": 505, "total_data_size": 3629631, "memory_usage": 3702816, "flush_reason": "Manual Compaction"}
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406929613186, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3338723, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26650, "largest_seqno": 29095, "table_properties": {"data_size": 3328772, "index_size": 5677, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3205, "raw_key_size": 25702, "raw_average_key_size": 20, "raw_value_size": 3306237, "raw_average_value_size": 2630, "num_data_blocks": 247, "num_entries": 1257, "num_filter_entries": 1257, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406735, "oldest_key_time": 1759406735, "file_creation_time": 1759406929, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 63367 microseconds, and 6777 cpu microseconds.
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:08:49.613234) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3338723 bytes OK
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:08:49.613257) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:08:49.620487) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:08:49.620506) EVENT_LOG_v1 {"time_micros": 1759406929620500, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:08:49.620522) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3618638, prev total WAL file size 3618638, number of live WAL files 2.
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:08:49.621595) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3260KB)], [62(10MB)]
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406929621628, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 13998043, "oldest_snapshot_seqno": -1}
Oct 02 12:08:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:49.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:49.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5373 keys, 8582877 bytes, temperature: kUnknown
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406929688974, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 8582877, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8547444, "index_size": 20923, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 137115, "raw_average_key_size": 25, "raw_value_size": 8451167, "raw_average_value_size": 1572, "num_data_blocks": 843, "num_entries": 5373, "num_filter_entries": 5373, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759406929, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:08:49.689762) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 8582877 bytes
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:08:49.695185) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 206.4 rd, 126.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.2 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(6.8) write-amplify(2.6) OK, records in: 6385, records dropped: 1012 output_compression: NoCompression
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:08:49.695221) EVENT_LOG_v1 {"time_micros": 1759406929695205, "job": 34, "event": "compaction_finished", "compaction_time_micros": 67809, "compaction_time_cpu_micros": 19878, "output_level": 6, "num_output_files": 1, "total_output_size": 8582877, "num_input_records": 6385, "num_output_records": 5373, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406929696570, "job": 34, "event": "table_file_deletion", "file_number": 64}
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406929699665, "job": 34, "event": "table_file_deletion", "file_number": 62}
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:08:49.621499) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:08:49.699737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:08:49.699745) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:08:49.699747) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:08:49.699749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:08:49 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:08:49.699751) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:08:50 compute-0 nova_compute[257802]: 2025-10-02 12:08:50.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:50 compute-0 ceph-mon[73607]: pgmap v1272: 305 pgs: 305 active+clean; 41 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 2.6 KiB/s wr, 21 op/s
Oct 02 12:08:50 compute-0 nova_compute[257802]: 2025-10-02 12:08:50.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 41 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 3.4 KiB/s wr, 44 op/s
Oct 02 12:08:51 compute-0 sudo[285581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:08:51 compute-0 sudo[285581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:51 compute-0 sudo[285581]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:51 compute-0 sudo[285606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:08:51 compute-0 sudo[285606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:51 compute-0 sudo[285606]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:51 compute-0 sudo[285631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:08:51 compute-0 sudo[285631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:51 compute-0 sudo[285631]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:51 compute-0 sudo[285656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 12:08:51 compute-0 sudo[285656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:08:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:51.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:08:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:51.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:51 compute-0 podman[285755]: 2025-10-02 12:08:51.882284609 +0000 UTC m=+0.140674632 container exec 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 12:08:51 compute-0 podman[285755]: 2025-10-02 12:08:51.98919526 +0000 UTC m=+0.247585263 container exec_died 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:08:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 12:08:52 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:08:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 12:08:52 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:08:52 compute-0 podman[285896]: 2025-10-02 12:08:52.688824914 +0000 UTC m=+0.078503963 container exec 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 12:08:52 compute-0 podman[285918]: 2025-10-02 12:08:52.752987402 +0000 UTC m=+0.049909239 container exec_died 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 12:08:52 compute-0 ceph-mon[73607]: pgmap v1273: 305 pgs: 305 active+clean; 41 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 3.4 KiB/s wr, 44 op/s
Oct 02 12:08:52 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:08:52 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:08:52 compute-0 podman[285896]: 2025-10-02 12:08:52.762482345 +0000 UTC m=+0.152161394 container exec_died 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 12:08:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:08:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Oct 02 12:08:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Oct 02 12:08:53 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Oct 02 12:08:53 compute-0 podman[285962]: 2025-10-02 12:08:53.080734636 +0000 UTC m=+0.161731951 container exec a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, vcs-type=git, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, description=keepalived for Ceph, version=2.2.4, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9)
Oct 02 12:08:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 41 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 49 op/s
Oct 02 12:08:53 compute-0 sudo[285990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:08:53 compute-0 sudo[285990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:53 compute-0 sudo[285990]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:53 compute-0 podman[285984]: 2025-10-02 12:08:53.196009832 +0000 UTC m=+0.091689846 container exec_died a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, io.k8s.display-name=Keepalived on RHEL 9, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, version=2.2.4, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, name=keepalived, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph)
Oct 02 12:08:53 compute-0 sudo[286022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:08:53 compute-0 sudo[286022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:53 compute-0 sudo[286022]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:53 compute-0 podman[285962]: 2025-10-02 12:08:53.255550668 +0000 UTC m=+0.336547983 container exec_died a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, release=1793, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, io.buildah.version=1.28.2, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Oct 02 12:08:53 compute-0 sudo[285656]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:08:53 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:08:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:08:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:53.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:53.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:53 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:08:53 compute-0 sudo[286065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:08:53 compute-0 sudo[286065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:53 compute-0 sudo[286065]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:53 compute-0 sudo[286090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:08:53 compute-0 sudo[286090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:53 compute-0 sudo[286090]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:54 compute-0 sudo[286115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:08:54 compute-0 sudo[286115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:54 compute-0 sudo[286115]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:54 compute-0 sudo[286140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:08:54 compute-0 sudo[286140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 12:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 12:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 12:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 12:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 12:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 12:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 12:08:54 compute-0 ceph-mon[73607]: osdmap e167: 3 total, 3 up, 3 in
Oct 02 12:08:54 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:08:54 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:08:54 compute-0 sudo[286140]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:08:54 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:08:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:08:54 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:08:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:08:54 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:08:54 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 8c3b7183-a95d-441b-be2d-12be02740cd4 does not exist
Oct 02 12:08:54 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 99ed71e5-7855-4e4a-8565-696d25fd40c9 does not exist
Oct 02 12:08:54 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 0f549d0c-6467-4f01-808c-3c54a0a605bf does not exist
Oct 02 12:08:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:08:54 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:08:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:08:54 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:08:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:08:54 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:08:54 compute-0 sudo[286197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:08:54 compute-0 sudo[286197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:54 compute-0 sudo[286197]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:54 compute-0 sudo[286222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:08:54 compute-0 sudo[286222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:54 compute-0 sudo[286222]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:54 compute-0 sudo[286247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:08:54 compute-0 sudo[286247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:54 compute-0 sudo[286247]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:54 compute-0 sudo[286272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:08:54 compute-0 sudo[286272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:55 compute-0 podman[286337]: 2025-10-02 12:08:55.083577664 +0000 UTC m=+0.039141984 container create 905ba848314138e59575277b435371c6eb4342aee53ce12e95effd9f4da9931b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_merkle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 12:08:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 41 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 2.9 KiB/s wr, 35 op/s
Oct 02 12:08:55 compute-0 systemd[1]: Started libpod-conmon-905ba848314138e59575277b435371c6eb4342aee53ce12e95effd9f4da9931b.scope.
Oct 02 12:08:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:08:55 compute-0 podman[286337]: 2025-10-02 12:08:55.15901804 +0000 UTC m=+0.114582390 container init 905ba848314138e59575277b435371c6eb4342aee53ce12e95effd9f4da9931b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_merkle, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:08:55 compute-0 podman[286337]: 2025-10-02 12:08:55.068045092 +0000 UTC m=+0.023609442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:08:55 compute-0 podman[286337]: 2025-10-02 12:08:55.164894144 +0000 UTC m=+0.120458464 container start 905ba848314138e59575277b435371c6eb4342aee53ce12e95effd9f4da9931b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_merkle, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:08:55 compute-0 podman[286337]: 2025-10-02 12:08:55.168233337 +0000 UTC m=+0.123797667 container attach 905ba848314138e59575277b435371c6eb4342aee53ce12e95effd9f4da9931b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:08:55 compute-0 sleepy_merkle[286354]: 167 167
Oct 02 12:08:55 compute-0 systemd[1]: libpod-905ba848314138e59575277b435371c6eb4342aee53ce12e95effd9f4da9931b.scope: Deactivated successfully.
Oct 02 12:08:55 compute-0 podman[286337]: 2025-10-02 12:08:55.170552553 +0000 UTC m=+0.126116883 container died 905ba848314138e59575277b435371c6eb4342aee53ce12e95effd9f4da9931b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 12:08:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ba7fffd9d13cc221267166d1e4d33428cf68b899c96a3a623507c330ae7d1f5-merged.mount: Deactivated successfully.
Oct 02 12:08:55 compute-0 podman[286337]: 2025-10-02 12:08:55.2082111 +0000 UTC m=+0.163775420 container remove 905ba848314138e59575277b435371c6eb4342aee53ce12e95effd9f4da9931b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_merkle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 12:08:55 compute-0 systemd[1]: libpod-conmon-905ba848314138e59575277b435371c6eb4342aee53ce12e95effd9f4da9931b.scope: Deactivated successfully.
Oct 02 12:08:55 compute-0 podman[286379]: 2025-10-02 12:08:55.360591669 +0000 UTC m=+0.040099928 container create bea0b008bb22e8bb8bb939754680aa8d08ad7bf3fd49ae926ea60dd8862ed5bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 12:08:55 compute-0 ceph-mon[73607]: pgmap v1275: 305 pgs: 305 active+clean; 41 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 49 op/s
Oct 02 12:08:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2570955571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:08:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:08:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:08:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:08:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:08:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:08:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/4268355803' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:08:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/4268355803' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:08:55 compute-0 systemd[1]: Started libpod-conmon-bea0b008bb22e8bb8bb939754680aa8d08ad7bf3fd49ae926ea60dd8862ed5bf.scope.
Oct 02 12:08:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b894ca951183c772a02cb91e532d846af4f5100d1c8130e43d7a88b374705a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b894ca951183c772a02cb91e532d846af4f5100d1c8130e43d7a88b374705a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b894ca951183c772a02cb91e532d846af4f5100d1c8130e43d7a88b374705a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b894ca951183c772a02cb91e532d846af4f5100d1c8130e43d7a88b374705a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b894ca951183c772a02cb91e532d846af4f5100d1c8130e43d7a88b374705a0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:08:55 compute-0 podman[286379]: 2025-10-02 12:08:55.340624918 +0000 UTC m=+0.020133217 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:08:55 compute-0 podman[286379]: 2025-10-02 12:08:55.437634875 +0000 UTC m=+0.117143154 container init bea0b008bb22e8bb8bb939754680aa8d08ad7bf3fd49ae926ea60dd8862ed5bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_northcutt, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Oct 02 12:08:55 compute-0 podman[286379]: 2025-10-02 12:08:55.444134585 +0000 UTC m=+0.123642844 container start bea0b008bb22e8bb8bb939754680aa8d08ad7bf3fd49ae926ea60dd8862ed5bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:08:55 compute-0 podman[286379]: 2025-10-02 12:08:55.461537463 +0000 UTC m=+0.141045762 container attach bea0b008bb22e8bb8bb939754680aa8d08ad7bf3fd49ae926ea60dd8862ed5bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 12:08:55 compute-0 nova_compute[257802]: 2025-10-02 12:08:55.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:55.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:55.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:08:55.794 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:08:55 compute-0 nova_compute[257802]: 2025-10-02 12:08:55.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:08:55.795 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:08:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:08:55.796 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:08:55 compute-0 nova_compute[257802]: 2025-10-02 12:08:55.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:08:56 compute-0 boring_northcutt[286396]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:08:56 compute-0 boring_northcutt[286396]: --> relative data size: 1.0
Oct 02 12:08:56 compute-0 boring_northcutt[286396]: --> All data devices are unavailable
Oct 02 12:08:56 compute-0 systemd[1]: libpod-bea0b008bb22e8bb8bb939754680aa8d08ad7bf3fd49ae926ea60dd8862ed5bf.scope: Deactivated successfully.
Oct 02 12:08:56 compute-0 podman[286379]: 2025-10-02 12:08:56.280562634 +0000 UTC m=+0.960070883 container died bea0b008bb22e8bb8bb939754680aa8d08ad7bf3fd49ae926ea60dd8862ed5bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:08:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b894ca951183c772a02cb91e532d846af4f5100d1c8130e43d7a88b374705a0-merged.mount: Deactivated successfully.
Oct 02 12:08:56 compute-0 podman[286379]: 2025-10-02 12:08:56.388432268 +0000 UTC m=+1.067940527 container remove bea0b008bb22e8bb8bb939754680aa8d08ad7bf3fd49ae926ea60dd8862ed5bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_northcutt, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:08:56 compute-0 systemd[1]: libpod-conmon-bea0b008bb22e8bb8bb939754680aa8d08ad7bf3fd49ae926ea60dd8862ed5bf.scope: Deactivated successfully.
Oct 02 12:08:56 compute-0 sudo[286272]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:56 compute-0 sudo[286424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:08:56 compute-0 sudo[286424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:56 compute-0 sudo[286424]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:56 compute-0 sudo[286449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:08:56 compute-0 sudo[286449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:56 compute-0 sudo[286449]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:56 compute-0 sudo[286474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:08:56 compute-0 sudo[286474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:56 compute-0 sudo[286474]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:56 compute-0 ceph-mon[73607]: pgmap v1276: 305 pgs: 305 active+clean; 41 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 2.9 KiB/s wr, 35 op/s
Oct 02 12:08:56 compute-0 sudo[286499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:08:56 compute-0 sudo[286499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:56 compute-0 podman[286563]: 2025-10-02 12:08:56.987793725 +0000 UTC m=+0.039918343 container create 56cf0d93bafbcd994a88720639a3c2292950d1d7f22730b597d9824162bece0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 12:08:57 compute-0 systemd[1]: Started libpod-conmon-56cf0d93bafbcd994a88720639a3c2292950d1d7f22730b597d9824162bece0f.scope.
Oct 02 12:08:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:08:57 compute-0 podman[286563]: 2025-10-02 12:08:57.058464094 +0000 UTC m=+0.110588712 container init 56cf0d93bafbcd994a88720639a3c2292950d1d7f22730b597d9824162bece0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_blackwell, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:08:57 compute-0 podman[286563]: 2025-10-02 12:08:57.065278382 +0000 UTC m=+0.117403000 container start 56cf0d93bafbcd994a88720639a3c2292950d1d7f22730b597d9824162bece0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 12:08:57 compute-0 podman[286563]: 2025-10-02 12:08:56.97173155 +0000 UTC m=+0.023856188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:08:57 compute-0 relaxed_blackwell[286580]: 167 167
Oct 02 12:08:57 compute-0 systemd[1]: libpod-56cf0d93bafbcd994a88720639a3c2292950d1d7f22730b597d9824162bece0f.scope: Deactivated successfully.
Oct 02 12:08:57 compute-0 podman[286563]: 2025-10-02 12:08:57.071534485 +0000 UTC m=+0.123659123 container attach 56cf0d93bafbcd994a88720639a3c2292950d1d7f22730b597d9824162bece0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:08:57 compute-0 podman[286563]: 2025-10-02 12:08:57.072167701 +0000 UTC m=+0.124292319 container died 56cf0d93bafbcd994a88720639a3c2292950d1d7f22730b597d9824162bece0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_blackwell, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Oct 02 12:08:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 55 MiB data, 413 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 713 KiB/s wr, 31 op/s
Oct 02 12:08:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-f775f1cec8d01e8c23f181771168cb453f0d088097ec4625ae761847c8893b09-merged.mount: Deactivated successfully.
Oct 02 12:08:57 compute-0 podman[286563]: 2025-10-02 12:08:57.108902955 +0000 UTC m=+0.161027573 container remove 56cf0d93bafbcd994a88720639a3c2292950d1d7f22730b597d9824162bece0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 12:08:57 compute-0 systemd[1]: libpod-conmon-56cf0d93bafbcd994a88720639a3c2292950d1d7f22730b597d9824162bece0f.scope: Deactivated successfully.
Oct 02 12:08:57 compute-0 podman[286603]: 2025-10-02 12:08:57.260079294 +0000 UTC m=+0.037552775 container create ec5b614c6cac5bfad06e21dad1ce44a6c637a737089072433512584013056ae8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Oct 02 12:08:57 compute-0 systemd[1]: Started libpod-conmon-ec5b614c6cac5bfad06e21dad1ce44a6c637a737089072433512584013056ae8.scope.
Oct 02 12:08:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:08:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6568dc308a0c18dc1f71be70e9efa91b4aacd805e007ae5ae41b0795469c8d9c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:08:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6568dc308a0c18dc1f71be70e9efa91b4aacd805e007ae5ae41b0795469c8d9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:08:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6568dc308a0c18dc1f71be70e9efa91b4aacd805e007ae5ae41b0795469c8d9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:08:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6568dc308a0c18dc1f71be70e9efa91b4aacd805e007ae5ae41b0795469c8d9c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:08:57 compute-0 podman[286603]: 2025-10-02 12:08:57.242981234 +0000 UTC m=+0.020454735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:08:57 compute-0 podman[286603]: 2025-10-02 12:08:57.344311857 +0000 UTC m=+0.121785358 container init ec5b614c6cac5bfad06e21dad1ce44a6c637a737089072433512584013056ae8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_torvalds, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:08:57 compute-0 podman[286603]: 2025-10-02 12:08:57.350452547 +0000 UTC m=+0.127926028 container start ec5b614c6cac5bfad06e21dad1ce44a6c637a737089072433512584013056ae8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_torvalds, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 12:08:57 compute-0 podman[286603]: 2025-10-02 12:08:57.354375134 +0000 UTC m=+0.131848615 container attach ec5b614c6cac5bfad06e21dad1ce44a6c637a737089072433512584013056ae8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_torvalds, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:08:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:57.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:57.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:08:58 compute-0 happy_torvalds[286620]: {
Oct 02 12:08:58 compute-0 happy_torvalds[286620]:     "1": [
Oct 02 12:08:58 compute-0 happy_torvalds[286620]:         {
Oct 02 12:08:58 compute-0 happy_torvalds[286620]:             "devices": [
Oct 02 12:08:58 compute-0 happy_torvalds[286620]:                 "/dev/loop3"
Oct 02 12:08:58 compute-0 happy_torvalds[286620]:             ],
Oct 02 12:08:58 compute-0 happy_torvalds[286620]:             "lv_name": "ceph_lv0",
Oct 02 12:08:58 compute-0 happy_torvalds[286620]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:08:58 compute-0 happy_torvalds[286620]:             "lv_size": "7511998464",
Oct 02 12:08:58 compute-0 happy_torvalds[286620]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:08:58 compute-0 happy_torvalds[286620]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:08:58 compute-0 happy_torvalds[286620]:             "name": "ceph_lv0",
Oct 02 12:08:58 compute-0 happy_torvalds[286620]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:08:58 compute-0 happy_torvalds[286620]:             "tags": {
Oct 02 12:08:58 compute-0 happy_torvalds[286620]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:08:58 compute-0 happy_torvalds[286620]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:08:58 compute-0 happy_torvalds[286620]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:08:58 compute-0 happy_torvalds[286620]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:08:58 compute-0 happy_torvalds[286620]:                 "ceph.cluster_name": "ceph",
Oct 02 12:08:58 compute-0 happy_torvalds[286620]:                 "ceph.crush_device_class": "",
Oct 02 12:08:58 compute-0 happy_torvalds[286620]:                 "ceph.encrypted": "0",
Oct 02 12:08:58 compute-0 happy_torvalds[286620]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:08:58 compute-0 happy_torvalds[286620]:                 "ceph.osd_id": "1",
Oct 02 12:08:58 compute-0 happy_torvalds[286620]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:08:58 compute-0 happy_torvalds[286620]:                 "ceph.type": "block",
Oct 02 12:08:58 compute-0 happy_torvalds[286620]:                 "ceph.vdo": "0"
Oct 02 12:08:58 compute-0 happy_torvalds[286620]:             },
Oct 02 12:08:58 compute-0 happy_torvalds[286620]:             "type": "block",
Oct 02 12:08:58 compute-0 happy_torvalds[286620]:             "vg_name": "ceph_vg0"
Oct 02 12:08:58 compute-0 happy_torvalds[286620]:         }
Oct 02 12:08:58 compute-0 happy_torvalds[286620]:     ]
Oct 02 12:08:58 compute-0 happy_torvalds[286620]: }
Oct 02 12:08:58 compute-0 systemd[1]: libpod-ec5b614c6cac5bfad06e21dad1ce44a6c637a737089072433512584013056ae8.scope: Deactivated successfully.
Oct 02 12:08:58 compute-0 podman[286603]: 2025-10-02 12:08:58.12927377 +0000 UTC m=+0.906747251 container died ec5b614c6cac5bfad06e21dad1ce44a6c637a737089072433512584013056ae8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_torvalds, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:08:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-6568dc308a0c18dc1f71be70e9efa91b4aacd805e007ae5ae41b0795469c8d9c-merged.mount: Deactivated successfully.
Oct 02 12:08:58 compute-0 podman[286603]: 2025-10-02 12:08:58.311271337 +0000 UTC m=+1.088744818 container remove ec5b614c6cac5bfad06e21dad1ce44a6c637a737089072433512584013056ae8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct 02 12:08:58 compute-0 systemd[1]: libpod-conmon-ec5b614c6cac5bfad06e21dad1ce44a6c637a737089072433512584013056ae8.scope: Deactivated successfully.
Oct 02 12:08:58 compute-0 sudo[286499]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:58 compute-0 sudo[286641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:08:58 compute-0 sudo[286641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:58 compute-0 sudo[286641]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:58 compute-0 sudo[286667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:08:58 compute-0 sudo[286667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:58 compute-0 sudo[286667]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:58 compute-0 sudo[286692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:08:58 compute-0 sudo[286692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:58 compute-0 sudo[286692]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:58 compute-0 sudo[286717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:08:58 compute-0 sudo[286717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:58 compute-0 ceph-mon[73607]: pgmap v1277: 305 pgs: 305 active+clean; 55 MiB data, 413 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 713 KiB/s wr, 31 op/s
Oct 02 12:08:58 compute-0 podman[286784]: 2025-10-02 12:08:58.875612843 +0000 UTC m=+0.038371646 container create fdd8980b60bff3dd57b4d0cffeaed9bc4ae49b3fb8b2071402872ca3e53caf94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:08:58 compute-0 systemd[1]: Started libpod-conmon-fdd8980b60bff3dd57b4d0cffeaed9bc4ae49b3fb8b2071402872ca3e53caf94.scope.
Oct 02 12:08:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:08:58 compute-0 podman[286784]: 2025-10-02 12:08:58.857126627 +0000 UTC m=+0.019885450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:08:58 compute-0 podman[286784]: 2025-10-02 12:08:58.957769313 +0000 UTC m=+0.120528116 container init fdd8980b60bff3dd57b4d0cffeaed9bc4ae49b3fb8b2071402872ca3e53caf94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 12:08:58 compute-0 podman[286784]: 2025-10-02 12:08:58.963750831 +0000 UTC m=+0.126509634 container start fdd8980b60bff3dd57b4d0cffeaed9bc4ae49b3fb8b2071402872ca3e53caf94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 12:08:58 compute-0 funny_ptolemy[286798]: 167 167
Oct 02 12:08:58 compute-0 systemd[1]: libpod-fdd8980b60bff3dd57b4d0cffeaed9bc4ae49b3fb8b2071402872ca3e53caf94.scope: Deactivated successfully.
Oct 02 12:08:58 compute-0 podman[286784]: 2025-10-02 12:08:58.972596969 +0000 UTC m=+0.135355772 container attach fdd8980b60bff3dd57b4d0cffeaed9bc4ae49b3fb8b2071402872ca3e53caf94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 12:08:58 compute-0 podman[286784]: 2025-10-02 12:08:58.972903336 +0000 UTC m=+0.135662139 container died fdd8980b60bff3dd57b4d0cffeaed9bc4ae49b3fb8b2071402872ca3e53caf94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 12:08:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-3df1f72dcbdc7dc3c4875b881db11915cacaea0e49541c32f97d71075b0c9723-merged.mount: Deactivated successfully.
Oct 02 12:08:59 compute-0 podman[286784]: 2025-10-02 12:08:59.037059535 +0000 UTC m=+0.199818338 container remove fdd8980b60bff3dd57b4d0cffeaed9bc4ae49b3fb8b2071402872ca3e53caf94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:08:59 compute-0 systemd[1]: libpod-conmon-fdd8980b60bff3dd57b4d0cffeaed9bc4ae49b3fb8b2071402872ca3e53caf94.scope: Deactivated successfully.
Oct 02 12:08:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 88 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Oct 02 12:08:59 compute-0 podman[286824]: 2025-10-02 12:08:59.18606048 +0000 UTC m=+0.047698754 container create abca9154e127f67cec94e3198ce4c438ccd9d35df12b2677a5b62b5f3df0d4f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 12:08:59 compute-0 systemd[1]: Started libpod-conmon-abca9154e127f67cec94e3198ce4c438ccd9d35df12b2677a5b62b5f3df0d4f4.scope.
Oct 02 12:08:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93629ff3d20ee9e8c14d56acacd1e9a709d607b82298ed4d1e1e275f067dfb30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93629ff3d20ee9e8c14d56acacd1e9a709d607b82298ed4d1e1e275f067dfb30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93629ff3d20ee9e8c14d56acacd1e9a709d607b82298ed4d1e1e275f067dfb30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93629ff3d20ee9e8c14d56acacd1e9a709d607b82298ed4d1e1e275f067dfb30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:08:59 compute-0 podman[286824]: 2025-10-02 12:08:59.158210975 +0000 UTC m=+0.019849269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:08:59 compute-0 podman[286824]: 2025-10-02 12:08:59.345496193 +0000 UTC m=+0.207134477 container init abca9154e127f67cec94e3198ce4c438ccd9d35df12b2677a5b62b5f3df0d4f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_curran, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:08:59 compute-0 podman[286824]: 2025-10-02 12:08:59.351225714 +0000 UTC m=+0.212863988 container start abca9154e127f67cec94e3198ce4c438ccd9d35df12b2677a5b62b5f3df0d4f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_curran, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 12:08:59 compute-0 podman[286824]: 2025-10-02 12:08:59.388140403 +0000 UTC m=+0.249778697 container attach abca9154e127f67cec94e3198ce4c438ccd9d35df12b2677a5b62b5f3df0d4f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_curran, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:08:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:59.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:08:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:59.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:00 compute-0 naughty_curran[286840]: {
Oct 02 12:09:00 compute-0 naughty_curran[286840]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:09:00 compute-0 naughty_curran[286840]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:09:00 compute-0 naughty_curran[286840]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:09:00 compute-0 naughty_curran[286840]:         "osd_id": 1,
Oct 02 12:09:00 compute-0 naughty_curran[286840]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:09:00 compute-0 naughty_curran[286840]:         "type": "bluestore"
Oct 02 12:09:00 compute-0 naughty_curran[286840]:     }
Oct 02 12:09:00 compute-0 naughty_curran[286840]: }
Oct 02 12:09:00 compute-0 systemd[1]: libpod-abca9154e127f67cec94e3198ce4c438ccd9d35df12b2677a5b62b5f3df0d4f4.scope: Deactivated successfully.
Oct 02 12:09:00 compute-0 podman[286824]: 2025-10-02 12:09:00.163351145 +0000 UTC m=+1.024989419 container died abca9154e127f67cec94e3198ce4c438ccd9d35df12b2677a5b62b5f3df0d4f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 12:09:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-93629ff3d20ee9e8c14d56acacd1e9a709d607b82298ed4d1e1e275f067dfb30-merged.mount: Deactivated successfully.
Oct 02 12:09:00 compute-0 podman[286824]: 2025-10-02 12:09:00.388127966 +0000 UTC m=+1.249766240 container remove abca9154e127f67cec94e3198ce4c438ccd9d35df12b2677a5b62b5f3df0d4f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:09:00 compute-0 sudo[286717]: pam_unix(sudo:session): session closed for user root
Oct 02 12:09:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:09:00 compute-0 systemd[1]: libpod-conmon-abca9154e127f67cec94e3198ce4c438ccd9d35df12b2677a5b62b5f3df0d4f4.scope: Deactivated successfully.
Oct 02 12:09:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:09:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:09:00 compute-0 nova_compute[257802]: 2025-10-02 12:09:00.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:09:00 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 53611225-7abc-4dc1-8376-7a93ef66706a does not exist
Oct 02 12:09:00 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 8cc6f40f-67d5-4721-962f-669cd9085ef7 does not exist
Oct 02 12:09:00 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 3e59dc4f-34a8-45bd-884d-080ca9669d11 does not exist
Oct 02 12:09:00 compute-0 sudo[286877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:09:00 compute-0 sudo[286877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:09:00 compute-0 sudo[286877]: pam_unix(sudo:session): session closed for user root
Oct 02 12:09:00 compute-0 sudo[286902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:09:00 compute-0 sudo[286902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:09:00 compute-0 sudo[286902]: pam_unix(sudo:session): session closed for user root
Oct 02 12:09:00 compute-0 ceph-mon[73607]: pgmap v1278: 305 pgs: 305 active+clean; 88 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Oct 02 12:09:00 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:09:00 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:09:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1324999065' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:09:00 compute-0 nova_compute[257802]: 2025-10-02 12:09:00.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 88 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 2.1 MiB/s wr, 36 op/s
Oct 02 12:09:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:01.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:09:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:01.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:09:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1399266365' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:09:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 88 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 34 op/s
Oct 02 12:09:03 compute-0 ceph-mon[73607]: pgmap v1279: 305 pgs: 305 active+clean; 88 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 2.1 MiB/s wr, 36 op/s
Oct 02 12:09:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:03.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:03.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:04 compute-0 ceph-mon[73607]: pgmap v1280: 305 pgs: 305 active+clean; 88 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 34 op/s
Oct 02 12:09:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 88 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 02 12:09:05 compute-0 nova_compute[257802]: 2025-10-02 12:09:05.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:05.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:05.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:05 compute-0 nova_compute[257802]: 2025-10-02 12:09:05.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:06 compute-0 ceph-mon[73607]: pgmap v1281: 305 pgs: 305 active+clean; 88 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 02 12:09:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 88 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Oct 02 12:09:07 compute-0 ovn_controller[148183]: 2025-10-02T12:09:07Z|00163|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct 02 12:09:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:09:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:07.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:09:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:07.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:07 compute-0 podman[286930]: 2025-10-02 12:09:07.907884121 +0000 UTC m=+0.046216308 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent)
Oct 02 12:09:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:08 compute-0 ceph-mon[73607]: pgmap v1282: 305 pgs: 305 active+clean; 88 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Oct 02 12:09:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 88 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.2 MiB/s wr, 83 op/s
Oct 02 12:09:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:09.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:09.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:09 compute-0 nova_compute[257802]: 2025-10-02 12:09:09.895 2 DEBUG oslo_concurrency.lockutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Acquiring lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:09 compute-0 nova_compute[257802]: 2025-10-02 12:09:09.896 2 DEBUG oslo_concurrency.lockutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:10 compute-0 nova_compute[257802]: 2025-10-02 12:09:10.065 2 DEBUG nova.compute.manager [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:09:10 compute-0 nova_compute[257802]: 2025-10-02 12:09:10.343 2 DEBUG oslo_concurrency.lockutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:10 compute-0 nova_compute[257802]: 2025-10-02 12:09:10.343 2 DEBUG oslo_concurrency.lockutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:10 compute-0 nova_compute[257802]: 2025-10-02 12:09:10.353 2 DEBUG nova.virt.hardware [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:09:10 compute-0 nova_compute[257802]: 2025-10-02 12:09:10.353 2 INFO nova.compute.claims [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:09:10 compute-0 nova_compute[257802]: 2025-10-02 12:09:10.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:10 compute-0 nova_compute[257802]: 2025-10-02 12:09:10.648 2 DEBUG oslo_concurrency.processutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:10 compute-0 nova_compute[257802]: 2025-10-02 12:09:10.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:10 compute-0 podman[286971]: 2025-10-02 12:09:10.916776372 +0000 UTC m=+0.059501085 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 12:09:10 compute-0 podman[286972]: 2025-10-02 12:09:10.917632683 +0000 UTC m=+0.054686807 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:09:10 compute-0 ceph-mon[73607]: pgmap v1283: 305 pgs: 305 active+clean; 88 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.2 MiB/s wr, 83 op/s
Oct 02 12:09:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:09:11 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3426573679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:11 compute-0 nova_compute[257802]: 2025-10-02 12:09:11.062 2 DEBUG oslo_concurrency.processutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:11 compute-0 nova_compute[257802]: 2025-10-02 12:09:11.068 2 DEBUG nova.compute.provider_tree [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:09:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 88 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 12:09:11 compute-0 nova_compute[257802]: 2025-10-02 12:09:11.091 2 DEBUG nova.scheduler.client.report [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:09:11 compute-0 nova_compute[257802]: 2025-10-02 12:09:11.287 2 DEBUG oslo_concurrency.lockutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.943s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:09:11 compute-0 nova_compute[257802]: 2025-10-02 12:09:11.287 2 DEBUG nova.compute.manager [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:09:11 compute-0 nova_compute[257802]: 2025-10-02 12:09:11.612 2 DEBUG nova.compute.manager [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:09:11 compute-0 nova_compute[257802]: 2025-10-02 12:09:11.612 2 DEBUG nova.network.neutron [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:09:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:09:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:11.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:09:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:09:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:11.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:09:11 compute-0 nova_compute[257802]: 2025-10-02 12:09:11.750 2 INFO nova.virt.libvirt.driver [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:09:11 compute-0 nova_compute[257802]: 2025-10-02 12:09:11.808 2 DEBUG nova.compute.manager [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:09:11 compute-0 nova_compute[257802]: 2025-10-02 12:09:11.986 2 DEBUG nova.policy [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c5ca95bd9fa148c7948c062421011d76', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '95056cabad5b4f32916e46a46b10f677', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:09:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3426573679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:12 compute-0 nova_compute[257802]: 2025-10-02 12:09:12.142 2 DEBUG nova.compute.manager [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:09:12 compute-0 nova_compute[257802]: 2025-10-02 12:09:12.143 2 DEBUG nova.virt.libvirt.driver [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:09:12 compute-0 nova_compute[257802]: 2025-10-02 12:09:12.144 2 INFO nova.virt.libvirt.driver [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Creating image(s)
Oct 02 12:09:12 compute-0 nova_compute[257802]: 2025-10-02 12:09:12.218 2 DEBUG nova.storage.rbd_utils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] rbd image e8828a06-3170-4302-a5f9-2e4ec8445ab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:09:12 compute-0 nova_compute[257802]: 2025-10-02 12:09:12.244 2 DEBUG nova.storage.rbd_utils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] rbd image e8828a06-3170-4302-a5f9-2e4ec8445ab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:09:12 compute-0 nova_compute[257802]: 2025-10-02 12:09:12.272 2 DEBUG nova.storage.rbd_utils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] rbd image e8828a06-3170-4302-a5f9-2e4ec8445ab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:09:12 compute-0 nova_compute[257802]: 2025-10-02 12:09:12.276 2 DEBUG oslo_concurrency.processutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:12 compute-0 nova_compute[257802]: 2025-10-02 12:09:12.339 2 DEBUG oslo_concurrency.processutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:12 compute-0 nova_compute[257802]: 2025-10-02 12:09:12.340 2 DEBUG oslo_concurrency.lockutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:12 compute-0 nova_compute[257802]: 2025-10-02 12:09:12.340 2 DEBUG oslo_concurrency.lockutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:12 compute-0 nova_compute[257802]: 2025-10-02 12:09:12.341 2 DEBUG oslo_concurrency.lockutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:09:12 compute-0 nova_compute[257802]: 2025-10-02 12:09:12.364 2 DEBUG nova.storage.rbd_utils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] rbd image e8828a06-3170-4302-a5f9-2e4ec8445ab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:09:12 compute-0 nova_compute[257802]: 2025-10-02 12:09:12.368 2 DEBUG oslo_concurrency.processutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 e8828a06-3170-4302-a5f9-2e4ec8445ab2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:09:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:09:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:09:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:09:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:09:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:09:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:13 compute-0 nova_compute[257802]: 2025-10-02 12:09:12.999 2 DEBUG oslo_concurrency.processutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 e8828a06-3170-4302-a5f9-2e4ec8445ab2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:13 compute-0 ceph-mon[73607]: pgmap v1284: 305 pgs: 305 active+clean; 88 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 12:09:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 104 MiB data, 451 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 476 KiB/s wr, 74 op/s
Oct 02 12:09:13 compute-0 nova_compute[257802]: 2025-10-02 12:09:13.093 2 DEBUG nova.network.neutron [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Successfully created port: d83c3e72-0088-4c8f-8272-fa1f917210b2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:09:13 compute-0 nova_compute[257802]: 2025-10-02 12:09:13.099 2 DEBUG nova.storage.rbd_utils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] resizing rbd image e8828a06-3170-4302-a5f9-2e4ec8445ab2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:09:13 compute-0 nova_compute[257802]: 2025-10-02 12:09:13.217 2 DEBUG nova.objects.instance [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lazy-loading 'migration_context' on Instance uuid e8828a06-3170-4302-a5f9-2e4ec8445ab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:09:13 compute-0 nova_compute[257802]: 2025-10-02 12:09:13.267 2 DEBUG nova.virt.libvirt.driver [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:09:13 compute-0 nova_compute[257802]: 2025-10-02 12:09:13.267 2 DEBUG nova.virt.libvirt.driver [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Ensure instance console log exists: /var/lib/nova/instances/e8828a06-3170-4302-a5f9-2e4ec8445ab2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:09:13 compute-0 nova_compute[257802]: 2025-10-02 12:09:13.268 2 DEBUG oslo_concurrency.lockutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:13 compute-0 nova_compute[257802]: 2025-10-02 12:09:13.268 2 DEBUG oslo_concurrency.lockutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:13 compute-0 nova_compute[257802]: 2025-10-02 12:09:13.268 2 DEBUG oslo_concurrency.lockutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:09:13 compute-0 sudo[287180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:09:13 compute-0 sudo[287180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:09:13 compute-0 sudo[287180]: pam_unix(sudo:session): session closed for user root
Oct 02 12:09:13 compute-0 sudo[287205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:09:13 compute-0 sudo[287205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:09:13 compute-0 sudo[287205]: pam_unix(sudo:session): session closed for user root
Oct 02 12:09:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:13.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:13.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:14 compute-0 ceph-mon[73607]: pgmap v1285: 305 pgs: 305 active+clean; 104 MiB data, 451 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 476 KiB/s wr, 74 op/s
Oct 02 12:09:14 compute-0 nova_compute[257802]: 2025-10-02 12:09:14.390 2 DEBUG nova.network.neutron [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Successfully updated port: d83c3e72-0088-4c8f-8272-fa1f917210b2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:09:14 compute-0 nova_compute[257802]: 2025-10-02 12:09:14.408 2 DEBUG oslo_concurrency.lockutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Acquiring lock "refresh_cache-e8828a06-3170-4302-a5f9-2e4ec8445ab2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:09:14 compute-0 nova_compute[257802]: 2025-10-02 12:09:14.408 2 DEBUG oslo_concurrency.lockutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Acquired lock "refresh_cache-e8828a06-3170-4302-a5f9-2e4ec8445ab2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:09:14 compute-0 nova_compute[257802]: 2025-10-02 12:09:14.408 2 DEBUG nova.network.neutron [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:09:14 compute-0 nova_compute[257802]: 2025-10-02 12:09:14.487 2 DEBUG nova.compute.manager [req-73fcdefb-fa5e-4b5b-9e90-28f4a0988cab req-62fbf993-fbb9-4e0e-ae71-3a4de4acad85 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Received event network-changed-d83c3e72-0088-4c8f-8272-fa1f917210b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:09:14 compute-0 nova_compute[257802]: 2025-10-02 12:09:14.487 2 DEBUG nova.compute.manager [req-73fcdefb-fa5e-4b5b-9e90-28f4a0988cab req-62fbf993-fbb9-4e0e-ae71-3a4de4acad85 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Refreshing instance network info cache due to event network-changed-d83c3e72-0088-4c8f-8272-fa1f917210b2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:09:14 compute-0 nova_compute[257802]: 2025-10-02 12:09:14.488 2 DEBUG oslo_concurrency.lockutils [req-73fcdefb-fa5e-4b5b-9e90-28f4a0988cab req-62fbf993-fbb9-4e0e-ae71-3a4de4acad85 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-e8828a06-3170-4302-a5f9-2e4ec8445ab2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:09:14 compute-0 nova_compute[257802]: 2025-10-02 12:09:14.589 2 DEBUG nova.network.neutron [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:09:14 compute-0 podman[287231]: 2025-10-02 12:09:14.943769721 +0000 UTC m=+0.078483961 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:09:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 122 MiB data, 460 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 97 op/s
Oct 02 12:09:15 compute-0 nova_compute[257802]: 2025-10-02 12:09:15.227 2 DEBUG nova.network.neutron [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Updating instance_info_cache with network_info: [{"id": "d83c3e72-0088-4c8f-8272-fa1f917210b2", "address": "fa:16:3e:78:cb:b6", "network": {"id": "032ac993-ac13-41ba-b86e-9d12c97f66b8", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1769597820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95056cabad5b4f32916e46a46b10f677", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83c3e72-00", "ovs_interfaceid": "d83c3e72-0088-4c8f-8272-fa1f917210b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:09:15 compute-0 nova_compute[257802]: 2025-10-02 12:09:15.249 2 DEBUG oslo_concurrency.lockutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Releasing lock "refresh_cache-e8828a06-3170-4302-a5f9-2e4ec8445ab2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:09:15 compute-0 nova_compute[257802]: 2025-10-02 12:09:15.250 2 DEBUG nova.compute.manager [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Instance network_info: |[{"id": "d83c3e72-0088-4c8f-8272-fa1f917210b2", "address": "fa:16:3e:78:cb:b6", "network": {"id": "032ac993-ac13-41ba-b86e-9d12c97f66b8", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1769597820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95056cabad5b4f32916e46a46b10f677", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83c3e72-00", "ovs_interfaceid": "d83c3e72-0088-4c8f-8272-fa1f917210b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:09:15 compute-0 nova_compute[257802]: 2025-10-02 12:09:15.250 2 DEBUG oslo_concurrency.lockutils [req-73fcdefb-fa5e-4b5b-9e90-28f4a0988cab req-62fbf993-fbb9-4e0e-ae71-3a4de4acad85 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-e8828a06-3170-4302-a5f9-2e4ec8445ab2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:09:15 compute-0 nova_compute[257802]: 2025-10-02 12:09:15.251 2 DEBUG nova.network.neutron [req-73fcdefb-fa5e-4b5b-9e90-28f4a0988cab req-62fbf993-fbb9-4e0e-ae71-3a4de4acad85 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Refreshing network info cache for port d83c3e72-0088-4c8f-8272-fa1f917210b2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:09:15 compute-0 nova_compute[257802]: 2025-10-02 12:09:15.254 2 DEBUG nova.virt.libvirt.driver [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Start _get_guest_xml network_info=[{"id": "d83c3e72-0088-4c8f-8272-fa1f917210b2", "address": "fa:16:3e:78:cb:b6", "network": {"id": "032ac993-ac13-41ba-b86e-9d12c97f66b8", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1769597820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95056cabad5b4f32916e46a46b10f677", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83c3e72-00", "ovs_interfaceid": "d83c3e72-0088-4c8f-8272-fa1f917210b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:09:15 compute-0 nova_compute[257802]: 2025-10-02 12:09:15.258 2 WARNING nova.virt.libvirt.driver [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:09:15 compute-0 nova_compute[257802]: 2025-10-02 12:09:15.263 2 DEBUG nova.virt.libvirt.host [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:09:15 compute-0 nova_compute[257802]: 2025-10-02 12:09:15.264 2 DEBUG nova.virt.libvirt.host [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:09:15 compute-0 nova_compute[257802]: 2025-10-02 12:09:15.267 2 DEBUG nova.virt.libvirt.host [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:09:15 compute-0 nova_compute[257802]: 2025-10-02 12:09:15.268 2 DEBUG nova.virt.libvirt.host [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:09:15 compute-0 nova_compute[257802]: 2025-10-02 12:09:15.269 2 DEBUG nova.virt.libvirt.driver [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:09:15 compute-0 nova_compute[257802]: 2025-10-02 12:09:15.269 2 DEBUG nova.virt.hardware [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:09:15 compute-0 nova_compute[257802]: 2025-10-02 12:09:15.270 2 DEBUG nova.virt.hardware [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:09:15 compute-0 nova_compute[257802]: 2025-10-02 12:09:15.270 2 DEBUG nova.virt.hardware [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:09:15 compute-0 nova_compute[257802]: 2025-10-02 12:09:15.270 2 DEBUG nova.virt.hardware [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:09:15 compute-0 nova_compute[257802]: 2025-10-02 12:09:15.270 2 DEBUG nova.virt.hardware [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:09:15 compute-0 nova_compute[257802]: 2025-10-02 12:09:15.271 2 DEBUG nova.virt.hardware [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:09:15 compute-0 nova_compute[257802]: 2025-10-02 12:09:15.271 2 DEBUG nova.virt.hardware [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:09:15 compute-0 nova_compute[257802]: 2025-10-02 12:09:15.271 2 DEBUG nova.virt.hardware [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:09:15 compute-0 nova_compute[257802]: 2025-10-02 12:09:15.271 2 DEBUG nova.virt.hardware [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:09:15 compute-0 nova_compute[257802]: 2025-10-02 12:09:15.272 2 DEBUG nova.virt.hardware [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:09:15 compute-0 nova_compute[257802]: 2025-10-02 12:09:15.272 2 DEBUG nova.virt.hardware [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:09:15 compute-0 nova_compute[257802]: 2025-10-02 12:09:15.275 2 DEBUG oslo_concurrency.processutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2248352384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:15 compute-0 nova_compute[257802]: 2025-10-02 12:09:15.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:15.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:09:15 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3650327027' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:09:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:09:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:15.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:09:15 compute-0 nova_compute[257802]: 2025-10-02 12:09:15.737 2 DEBUG oslo_concurrency.processutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:15 compute-0 nova_compute[257802]: 2025-10-02 12:09:15.793 2 DEBUG nova.storage.rbd_utils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] rbd image e8828a06-3170-4302-a5f9-2e4ec8445ab2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:09:15 compute-0 nova_compute[257802]: 2025-10-02 12:09:15.800 2 DEBUG oslo_concurrency.processutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:15 compute-0 nova_compute[257802]: 2025-10-02 12:09:15.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:09:16 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3633625131' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.226 2 DEBUG oslo_concurrency.processutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.229 2 DEBUG nova.virt.libvirt.vif [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:09:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-1549047604',display_name='tempest-VolumesAdminNegativeTest-server-1549047604',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-1549047604',id=43,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHCHD7xEhwk17Lh2SZM6L31piL3y0EGZErKHOswnaztMMAN9IszfT3ybcRKpvtuB9DRWBAMXjj3t2j2HbYhw2ugNQv5jQz2Kx18w60DNdRIQ5S3zYPwVYZ7NxtzNxXCYvQ==',key_name='tempest-keypair-1991120415',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='95056cabad5b4f32916e46a46b10f677',ramdisk_id='',reservation_id='r-dvd5q095',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAdminNegativeTest-494036391',owner_user_name='tempest-VolumesAdminNegativeTest-494036391-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:09:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5ca95bd9fa148c7948c062421011d76',uuid=e8828a06-3170-4302-a5f9-2e4ec8445ab2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d83c3e72-0088-4c8f-8272-fa1f917210b2", "address": "fa:16:3e:78:cb:b6", "network": {"id": "032ac993-ac13-41ba-b86e-9d12c97f66b8", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1769597820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95056cabad5b4f32916e46a46b10f677", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83c3e72-00", "ovs_interfaceid": "d83c3e72-0088-4c8f-8272-fa1f917210b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.230 2 DEBUG nova.network.os_vif_util [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Converting VIF {"id": "d83c3e72-0088-4c8f-8272-fa1f917210b2", "address": "fa:16:3e:78:cb:b6", "network": {"id": "032ac993-ac13-41ba-b86e-9d12c97f66b8", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1769597820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95056cabad5b4f32916e46a46b10f677", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83c3e72-00", "ovs_interfaceid": "d83c3e72-0088-4c8f-8272-fa1f917210b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.232 2 DEBUG nova.network.os_vif_util [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:cb:b6,bridge_name='br-int',has_traffic_filtering=True,id=d83c3e72-0088-4c8f-8272-fa1f917210b2,network=Network(032ac993-ac13-41ba-b86e-9d12c97f66b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd83c3e72-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.234 2 DEBUG nova.objects.instance [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lazy-loading 'pci_devices' on Instance uuid e8828a06-3170-4302-a5f9-2e4ec8445ab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.252 2 DEBUG nova.virt.libvirt.driver [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:09:16 compute-0 nova_compute[257802]:   <uuid>e8828a06-3170-4302-a5f9-2e4ec8445ab2</uuid>
Oct 02 12:09:16 compute-0 nova_compute[257802]:   <name>instance-0000002b</name>
Oct 02 12:09:16 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:09:16 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:09:16 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:09:16 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:       <nova:name>tempest-VolumesAdminNegativeTest-server-1549047604</nova:name>
Oct 02 12:09:16 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:09:15</nova:creationTime>
Oct 02 12:09:16 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:09:16 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:09:16 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:09:16 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:09:16 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:09:16 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:09:16 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:09:16 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:09:16 compute-0 nova_compute[257802]:         <nova:user uuid="c5ca95bd9fa148c7948c062421011d76">tempest-VolumesAdminNegativeTest-494036391-project-member</nova:user>
Oct 02 12:09:16 compute-0 nova_compute[257802]:         <nova:project uuid="95056cabad5b4f32916e46a46b10f677">tempest-VolumesAdminNegativeTest-494036391</nova:project>
Oct 02 12:09:16 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:09:16 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:09:16 compute-0 nova_compute[257802]:         <nova:port uuid="d83c3e72-0088-4c8f-8272-fa1f917210b2">
Oct 02 12:09:16 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:09:16 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:09:16 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:09:16 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <system>
Oct 02 12:09:16 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:09:16 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:09:16 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:09:16 compute-0 nova_compute[257802]:       <entry name="serial">e8828a06-3170-4302-a5f9-2e4ec8445ab2</entry>
Oct 02 12:09:16 compute-0 nova_compute[257802]:       <entry name="uuid">e8828a06-3170-4302-a5f9-2e4ec8445ab2</entry>
Oct 02 12:09:16 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     </system>
Oct 02 12:09:16 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:09:16 compute-0 nova_compute[257802]:   <os>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:   </os>
Oct 02 12:09:16 compute-0 nova_compute[257802]:   <features>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:   </features>
Oct 02 12:09:16 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:09:16 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:09:16 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:09:16 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/e8828a06-3170-4302-a5f9-2e4ec8445ab2_disk">
Oct 02 12:09:16 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:       </source>
Oct 02 12:09:16 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:09:16 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:09:16 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:09:16 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/e8828a06-3170-4302-a5f9-2e4ec8445ab2_disk.config">
Oct 02 12:09:16 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:       </source>
Oct 02 12:09:16 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:09:16 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:09:16 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:09:16 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:78:cb:b6"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:       <target dev="tapd83c3e72-00"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:09:16 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/e8828a06-3170-4302-a5f9-2e4ec8445ab2/console.log" append="off"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <video>
Oct 02 12:09:16 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     </video>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:09:16 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:09:16 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:09:16 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:09:16 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:09:16 compute-0 nova_compute[257802]: </domain>
Oct 02 12:09:16 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.253 2 DEBUG nova.compute.manager [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Preparing to wait for external event network-vif-plugged-d83c3e72-0088-4c8f-8272-fa1f917210b2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.253 2 DEBUG oslo_concurrency.lockutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Acquiring lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.254 2 DEBUG oslo_concurrency.lockutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.254 2 DEBUG oslo_concurrency.lockutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.254 2 DEBUG nova.virt.libvirt.vif [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:09:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-1549047604',display_name='tempest-VolumesAdminNegativeTest-server-1549047604',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-1549047604',id=43,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHCHD7xEhwk17Lh2SZM6L31piL3y0EGZErKHOswnaztMMAN9IszfT3ybcRKpvtuB9DRWBAMXjj3t2j2HbYhw2ugNQv5jQz2Kx18w60DNdRIQ5S3zYPwVYZ7NxtzNxXCYvQ==',key_name='tempest-keypair-1991120415',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='95056cabad5b4f32916e46a46b10f677',ramdisk_id='',reservation_id='r-dvd5q095',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAdminNegativeTest-494036391',owner_user_name='tempest-VolumesAdminNegativeTest-494036391-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:09:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5ca95bd9fa148c7948c062421011d76',uuid=e8828a06-3170-4302-a5f9-2e4ec8445ab2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d83c3e72-0088-4c8f-8272-fa1f917210b2", "address": "fa:16:3e:78:cb:b6", "network": {"id": "032ac993-ac13-41ba-b86e-9d12c97f66b8", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1769597820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95056cabad5b4f32916e46a46b10f677", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83c3e72-00", "ovs_interfaceid": "d83c3e72-0088-4c8f-8272-fa1f917210b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.255 2 DEBUG nova.network.os_vif_util [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Converting VIF {"id": "d83c3e72-0088-4c8f-8272-fa1f917210b2", "address": "fa:16:3e:78:cb:b6", "network": {"id": "032ac993-ac13-41ba-b86e-9d12c97f66b8", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1769597820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95056cabad5b4f32916e46a46b10f677", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83c3e72-00", "ovs_interfaceid": "d83c3e72-0088-4c8f-8272-fa1f917210b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.255 2 DEBUG nova.network.os_vif_util [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:cb:b6,bridge_name='br-int',has_traffic_filtering=True,id=d83c3e72-0088-4c8f-8272-fa1f917210b2,network=Network(032ac993-ac13-41ba-b86e-9d12c97f66b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd83c3e72-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.256 2 DEBUG os_vif [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:cb:b6,bridge_name='br-int',has_traffic_filtering=True,id=d83c3e72-0088-4c8f-8272-fa1f917210b2,network=Network(032ac993-ac13-41ba-b86e-9d12c97f66b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd83c3e72-00') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.257 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.257 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.261 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd83c3e72-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.261 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd83c3e72-00, col_values=(('external_ids', {'iface-id': 'd83c3e72-0088-4c8f-8272-fa1f917210b2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:78:cb:b6', 'vm-uuid': 'e8828a06-3170-4302-a5f9-2e4ec8445ab2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:16 compute-0 NetworkManager[44987]: <info>  [1759406956.2643] manager: (tapd83c3e72-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.271 2 INFO os_vif [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:cb:b6,bridge_name='br-int',has_traffic_filtering=True,id=d83c3e72-0088-4c8f-8272-fa1f917210b2,network=Network(032ac993-ac13-41ba-b86e-9d12c97f66b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd83c3e72-00')
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.355 2 DEBUG nova.virt.libvirt.driver [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.356 2 DEBUG nova.virt.libvirt.driver [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.356 2 DEBUG nova.virt.libvirt.driver [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] No VIF found with MAC fa:16:3e:78:cb:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.357 2 INFO nova.virt.libvirt.driver [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Using config drive
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.393 2 DEBUG nova.storage.rbd_utils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] rbd image e8828a06-3170-4302-a5f9-2e4ec8445ab2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:09:16 compute-0 ceph-mon[73607]: pgmap v1286: 305 pgs: 305 active+clean; 122 MiB data, 460 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 97 op/s
Oct 02 12:09:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3650327027' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:09:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3633625131' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.800 2 DEBUG nova.network.neutron [req-73fcdefb-fa5e-4b5b-9e90-28f4a0988cab req-62fbf993-fbb9-4e0e-ae71-3a4de4acad85 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Updated VIF entry in instance network info cache for port d83c3e72-0088-4c8f-8272-fa1f917210b2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.802 2 DEBUG nova.network.neutron [req-73fcdefb-fa5e-4b5b-9e90-28f4a0988cab req-62fbf993-fbb9-4e0e-ae71-3a4de4acad85 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Updating instance_info_cache with network_info: [{"id": "d83c3e72-0088-4c8f-8272-fa1f917210b2", "address": "fa:16:3e:78:cb:b6", "network": {"id": "032ac993-ac13-41ba-b86e-9d12c97f66b8", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1769597820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95056cabad5b4f32916e46a46b10f677", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83c3e72-00", "ovs_interfaceid": "d83c3e72-0088-4c8f-8272-fa1f917210b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.832 2 DEBUG oslo_concurrency.lockutils [req-73fcdefb-fa5e-4b5b-9e90-28f4a0988cab req-62fbf993-fbb9-4e0e-ae71-3a4de4acad85 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-e8828a06-3170-4302-a5f9-2e4ec8445ab2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.916 2 INFO nova.virt.libvirt.driver [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Creating config drive at /var/lib/nova/instances/e8828a06-3170-4302-a5f9-2e4ec8445ab2/disk.config
Oct 02 12:09:16 compute-0 nova_compute[257802]: 2025-10-02 12:09:16.921 2 DEBUG oslo_concurrency.processutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e8828a06-3170-4302-a5f9-2e4ec8445ab2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1bp8eryt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:17 compute-0 nova_compute[257802]: 2025-10-02 12:09:17.052 2 DEBUG oslo_concurrency.processutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e8828a06-3170-4302-a5f9-2e4ec8445ab2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1bp8eryt" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 134 MiB data, 467 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Oct 02 12:09:17 compute-0 nova_compute[257802]: 2025-10-02 12:09:17.092 2 DEBUG nova.storage.rbd_utils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] rbd image e8828a06-3170-4302-a5f9-2e4ec8445ab2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:09:17 compute-0 nova_compute[257802]: 2025-10-02 12:09:17.098 2 DEBUG oslo_concurrency.processutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e8828a06-3170-4302-a5f9-2e4ec8445ab2/disk.config e8828a06-3170-4302-a5f9-2e4ec8445ab2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:17 compute-0 nova_compute[257802]: 2025-10-02 12:09:17.371 2 DEBUG oslo_concurrency.processutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e8828a06-3170-4302-a5f9-2e4ec8445ab2/disk.config e8828a06-3170-4302-a5f9-2e4ec8445ab2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:17 compute-0 nova_compute[257802]: 2025-10-02 12:09:17.372 2 INFO nova.virt.libvirt.driver [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Deleting local config drive /var/lib/nova/instances/e8828a06-3170-4302-a5f9-2e4ec8445ab2/disk.config because it was imported into RBD.
Oct 02 12:09:17 compute-0 kernel: tapd83c3e72-00: entered promiscuous mode
Oct 02 12:09:17 compute-0 ovn_controller[148183]: 2025-10-02T12:09:17Z|00164|binding|INFO|Claiming lport d83c3e72-0088-4c8f-8272-fa1f917210b2 for this chassis.
Oct 02 12:09:17 compute-0 ovn_controller[148183]: 2025-10-02T12:09:17Z|00165|binding|INFO|d83c3e72-0088-4c8f-8272-fa1f917210b2: Claiming fa:16:3e:78:cb:b6 10.100.0.4
Oct 02 12:09:17 compute-0 nova_compute[257802]: 2025-10-02 12:09:17.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:17 compute-0 NetworkManager[44987]: <info>  [1759406957.4253] manager: (tapd83c3e72-00): new Tun device (/org/freedesktop/NetworkManager/Devices/77)
Oct 02 12:09:17 compute-0 nova_compute[257802]: 2025-10-02 12:09:17.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:17.447 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:cb:b6 10.100.0.4'], port_security=['fa:16:3e:78:cb:b6 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e8828a06-3170-4302-a5f9-2e4ec8445ab2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-032ac993-ac13-41ba-b86e-9d12c97f66b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '95056cabad5b4f32916e46a46b10f677', 'neutron:revision_number': '2', 'neutron:security_group_ids': '39e57887-e909-48db-87be-7ebe79b07773', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c365843-a96d-4209-bcbf-c5f27ee9a0dd, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=d83c3e72-0088-4c8f-8272-fa1f917210b2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:17.448 158261 INFO neutron.agent.ovn.metadata.agent [-] Port d83c3e72-0088-4c8f-8272-fa1f917210b2 in datapath 032ac993-ac13-41ba-b86e-9d12c97f66b8 bound to our chassis
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:17.449 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 032ac993-ac13-41ba-b86e-9d12c97f66b8
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:17.460 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a8728eef-ee7c-4816-b64c-aaeeabcda8d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:17.461 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap032ac993-a1 in ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:17.465 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap032ac993-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:17.466 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ed76ba34-7dd2-40af-9fd7-c580c33aac89]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:17.467 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[dc5de918-84c3-4dbc-a1ef-6abce2ce25bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:09:17 compute-0 systemd-machined[211836]: New machine qemu-22-instance-0000002b.
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:17.481 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[9ca3ba26-8e2f-40a8-b175-d9c3ea80b5df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:17.501 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[99208d42-2b27-4e66-b965-4878112fc5ab]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:09:17 compute-0 nova_compute[257802]: 2025-10-02 12:09:17.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:17 compute-0 systemd[1]: Started Virtual Machine qemu-22-instance-0000002b.
Oct 02 12:09:17 compute-0 ovn_controller[148183]: 2025-10-02T12:09:17Z|00166|binding|INFO|Setting lport d83c3e72-0088-4c8f-8272-fa1f917210b2 ovn-installed in OVS
Oct 02 12:09:17 compute-0 ovn_controller[148183]: 2025-10-02T12:09:17Z|00167|binding|INFO|Setting lport d83c3e72-0088-4c8f-8272-fa1f917210b2 up in Southbound
Oct 02 12:09:17 compute-0 nova_compute[257802]: 2025-10-02 12:09:17.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:17.532 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[eda04a7f-199d-412f-bda8-c71d18cb1505]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:09:17 compute-0 systemd-udevd[287401]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:09:17 compute-0 NetworkManager[44987]: <info>  [1759406957.5380] manager: (tap032ac993-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/78)
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:17.537 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9f6c8df7-6387-423e-9bff-9d8d54ac2f4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:09:17 compute-0 systemd-udevd[287404]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:09:17 compute-0 NetworkManager[44987]: <info>  [1759406957.5523] device (tapd83c3e72-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:09:17 compute-0 NetworkManager[44987]: <info>  [1759406957.5535] device (tapd83c3e72-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:17.579 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[140dfa33-0caf-4af5-a5ac-509c866e4ee4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:17.582 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[91af741b-ff1c-432a-a49a-95846a1afa24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:09:17 compute-0 NetworkManager[44987]: <info>  [1759406957.6089] device (tap032ac993-a0): carrier: link connected
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:17.618 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[1f235ee2-1a36-4ceb-aa93-c0b47aa9eb1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:17.634 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[405c966b-315b-448d-af90-cf455dc11544]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap032ac993-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:a3:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 502523, 'reachable_time': 24629, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287429, 'error': None, 'target': 'ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:17.647 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2e6dcf52-9a04-4f32-a277-4246311229b8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe92:a367'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 502523, 'tstamp': 502523}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287430, 'error': None, 'target': 'ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:09:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:17.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:17.669 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[224b23e8-480b-4dc6-9b5f-00e7e213d928]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap032ac993-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:a3:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 502523, 'reachable_time': 24629, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287431, 'error': None, 'target': 'ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:17.697 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cb240ada-f80a-4954-b1aa-c570472a4f6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:09:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:17.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:17.773 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[372bd151-ecef-4ec8-a30d-e60b54c52eb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:17.775 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap032ac993-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:17.775 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:17.776 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap032ac993-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:09:17 compute-0 nova_compute[257802]: 2025-10-02 12:09:17.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:17 compute-0 kernel: tap032ac993-a0: entered promiscuous mode
Oct 02 12:09:17 compute-0 NetworkManager[44987]: <info>  [1759406957.7795] manager: (tap032ac993-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Oct 02 12:09:17 compute-0 nova_compute[257802]: 2025-10-02 12:09:17.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:17.783 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap032ac993-a0, col_values=(('external_ids', {'iface-id': 'c7800c5b-63a7-46df-a2e3-8e9385f2b5c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:09:17 compute-0 nova_compute[257802]: 2025-10-02 12:09:17.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:17 compute-0 ovn_controller[148183]: 2025-10-02T12:09:17Z|00168|binding|INFO|Releasing lport c7800c5b-63a7-46df-a2e3-8e9385f2b5c7 from this chassis (sb_readonly=0)
Oct 02 12:09:17 compute-0 nova_compute[257802]: 2025-10-02 12:09:17.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:17.804 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/032ac993-ac13-41ba-b86e-9d12c97f66b8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/032ac993-ac13-41ba-b86e-9d12c97f66b8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:17.805 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[434020c6-7079-4a0e-b60a-7ebfe922aa4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:17.806 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-032ac993-ac13-41ba-b86e-9d12c97f66b8
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/032ac993-ac13-41ba-b86e-9d12c97f66b8.pid.haproxy
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 032ac993-ac13-41ba-b86e-9d12c97f66b8
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:09:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:17.806 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8', 'env', 'PROCESS_TAG=haproxy-032ac993-ac13-41ba-b86e-9d12c97f66b8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/032ac993-ac13-41ba-b86e-9d12c97f66b8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:09:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:18 compute-0 podman[287506]: 2025-10-02 12:09:18.187293044 +0000 UTC m=+0.055332171 container create 00f31be34480606d2b93dc82b93919e1a6186e184e7084c075760404d089847d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 12:09:18 compute-0 systemd[1]: Started libpod-conmon-00f31be34480606d2b93dc82b93919e1a6186e184e7084c075760404d089847d.scope.
Oct 02 12:09:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:09:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1321aeefe559e706d72b3e61bd420d6aa01761b7c64dfc36906f8a940c621ac/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:09:18 compute-0 podman[287506]: 2025-10-02 12:09:18.154318624 +0000 UTC m=+0.022357771 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:09:18 compute-0 podman[287506]: 2025-10-02 12:09:18.269504997 +0000 UTC m=+0.137544124 container init 00f31be34480606d2b93dc82b93919e1a6186e184e7084c075760404d089847d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:09:18 compute-0 podman[287506]: 2025-10-02 12:09:18.275162407 +0000 UTC m=+0.143201534 container start 00f31be34480606d2b93dc82b93919e1a6186e184e7084c075760404d089847d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 12:09:18 compute-0 neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8[287521]: [NOTICE]   (287525) : New worker (287527) forked
Oct 02 12:09:18 compute-0 neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8[287521]: [NOTICE]   (287525) : Loading success.
Oct 02 12:09:18 compute-0 nova_compute[257802]: 2025-10-02 12:09:18.342 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406958.3411891, e8828a06-3170-4302-a5f9-2e4ec8445ab2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:09:18 compute-0 nova_compute[257802]: 2025-10-02 12:09:18.342 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] VM Started (Lifecycle Event)
Oct 02 12:09:18 compute-0 nova_compute[257802]: 2025-10-02 12:09:18.370 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:09:18 compute-0 nova_compute[257802]: 2025-10-02 12:09:18.374 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406958.3426132, e8828a06-3170-4302-a5f9-2e4ec8445ab2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:09:18 compute-0 nova_compute[257802]: 2025-10-02 12:09:18.374 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] VM Paused (Lifecycle Event)
Oct 02 12:09:18 compute-0 nova_compute[257802]: 2025-10-02 12:09:18.402 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:09:18 compute-0 nova_compute[257802]: 2025-10-02 12:09:18.405 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:09:18 compute-0 nova_compute[257802]: 2025-10-02 12:09:18.429 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:09:18 compute-0 ceph-mon[73607]: pgmap v1287: 305 pgs: 305 active+clean; 134 MiB data, 467 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Oct 02 12:09:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 189 MiB data, 500 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.5 MiB/s wr, 177 op/s
Oct 02 12:09:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:19.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:19.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.182 2 DEBUG nova.compute.manager [req-84b60634-fba4-4497-a1cc-f27bbdfe4204 req-dbcafe19-a5fa-4dc6-ba0f-e5a88734a2e5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Received event network-vif-plugged-d83c3e72-0088-4c8f-8272-fa1f917210b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.182 2 DEBUG oslo_concurrency.lockutils [req-84b60634-fba4-4497-a1cc-f27bbdfe4204 req-dbcafe19-a5fa-4dc6-ba0f-e5a88734a2e5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.182 2 DEBUG oslo_concurrency.lockutils [req-84b60634-fba4-4497-a1cc-f27bbdfe4204 req-dbcafe19-a5fa-4dc6-ba0f-e5a88734a2e5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.183 2 DEBUG oslo_concurrency.lockutils [req-84b60634-fba4-4497-a1cc-f27bbdfe4204 req-dbcafe19-a5fa-4dc6-ba0f-e5a88734a2e5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.183 2 DEBUG nova.compute.manager [req-84b60634-fba4-4497-a1cc-f27bbdfe4204 req-dbcafe19-a5fa-4dc6-ba0f-e5a88734a2e5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Processing event network-vif-plugged-d83c3e72-0088-4c8f-8272-fa1f917210b2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.183 2 DEBUG nova.compute.manager [req-84b60634-fba4-4497-a1cc-f27bbdfe4204 req-dbcafe19-a5fa-4dc6-ba0f-e5a88734a2e5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Received event network-vif-plugged-d83c3e72-0088-4c8f-8272-fa1f917210b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.183 2 DEBUG oslo_concurrency.lockutils [req-84b60634-fba4-4497-a1cc-f27bbdfe4204 req-dbcafe19-a5fa-4dc6-ba0f-e5a88734a2e5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.183 2 DEBUG oslo_concurrency.lockutils [req-84b60634-fba4-4497-a1cc-f27bbdfe4204 req-dbcafe19-a5fa-4dc6-ba0f-e5a88734a2e5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.183 2 DEBUG oslo_concurrency.lockutils [req-84b60634-fba4-4497-a1cc-f27bbdfe4204 req-dbcafe19-a5fa-4dc6-ba0f-e5a88734a2e5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.184 2 DEBUG nova.compute.manager [req-84b60634-fba4-4497-a1cc-f27bbdfe4204 req-dbcafe19-a5fa-4dc6-ba0f-e5a88734a2e5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] No waiting events found dispatching network-vif-plugged-d83c3e72-0088-4c8f-8272-fa1f917210b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.184 2 WARNING nova.compute.manager [req-84b60634-fba4-4497-a1cc-f27bbdfe4204 req-dbcafe19-a5fa-4dc6-ba0f-e5a88734a2e5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Received unexpected event network-vif-plugged-d83c3e72-0088-4c8f-8272-fa1f917210b2 for instance with vm_state building and task_state spawning.
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.184 2 DEBUG nova.compute.manager [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.188 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759406960.1880898, e8828a06-3170-4302-a5f9-2e4ec8445ab2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.189 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] VM Resumed (Lifecycle Event)
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.191 2 DEBUG nova.virt.libvirt.driver [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.194 2 INFO nova.virt.libvirt.driver [-] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Instance spawned successfully.
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.194 2 DEBUG nova.virt.libvirt.driver [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.214 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.220 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.224 2 DEBUG nova.virt.libvirt.driver [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.224 2 DEBUG nova.virt.libvirt.driver [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.225 2 DEBUG nova.virt.libvirt.driver [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.225 2 DEBUG nova.virt.libvirt.driver [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.226 2 DEBUG nova.virt.libvirt.driver [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.226 2 DEBUG nova.virt.libvirt.driver [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.257 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.320 2 INFO nova.compute.manager [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Took 8.18 seconds to spawn the instance on the hypervisor.
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.321 2 DEBUG nova.compute.manager [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.426 2 INFO nova.compute.manager [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Took 10.14 seconds to build instance.
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.450 2 DEBUG oslo_concurrency.lockutils [None req-fe805c3e-cccb-458a-a0ea-c4a5c95eb49c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:09:20 compute-0 nova_compute[257802]: 2025-10-02 12:09:20.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:20 compute-0 ceph-mon[73607]: pgmap v1288: 305 pgs: 305 active+clean; 189 MiB data, 500 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.5 MiB/s wr, 177 op/s
Oct 02 12:09:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1450001861' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:09:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 213 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 894 KiB/s rd, 5.7 MiB/s wr, 142 op/s
Oct 02 12:09:21 compute-0 nova_compute[257802]: 2025-10-02 12:09:21.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:21.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:09:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:21.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:09:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2175521674' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:09:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:23 compute-0 ceph-mon[73607]: pgmap v1289: 305 pgs: 305 active+clean; 213 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 894 KiB/s rd, 5.7 MiB/s wr, 142 op/s
Oct 02 12:09:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 214 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 963 KiB/s rd, 5.7 MiB/s wr, 153 op/s
Oct 02 12:09:23 compute-0 NetworkManager[44987]: <info>  [1759406963.5457] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Oct 02 12:09:23 compute-0 NetworkManager[44987]: <info>  [1759406963.5467] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Oct 02 12:09:23 compute-0 nova_compute[257802]: 2025-10-02 12:09:23.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:23.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:23.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:23 compute-0 nova_compute[257802]: 2025-10-02 12:09:23.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:23 compute-0 ovn_controller[148183]: 2025-10-02T12:09:23Z|00169|binding|INFO|Releasing lport c7800c5b-63a7-46df-a2e3-8e9385f2b5c7 from this chassis (sb_readonly=0)
Oct 02 12:09:23 compute-0 nova_compute[257802]: 2025-10-02 12:09:23.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:23 compute-0 nova_compute[257802]: 2025-10-02 12:09:23.808 2 DEBUG nova.compute.manager [req-00763543-bb93-4afa-ac8f-8679284081c1 req-d469fd9e-5eeb-4a8a-9804-6b9b22ee8e98 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Received event network-changed-d83c3e72-0088-4c8f-8272-fa1f917210b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:09:23 compute-0 nova_compute[257802]: 2025-10-02 12:09:23.808 2 DEBUG nova.compute.manager [req-00763543-bb93-4afa-ac8f-8679284081c1 req-d469fd9e-5eeb-4a8a-9804-6b9b22ee8e98 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Refreshing instance network info cache due to event network-changed-d83c3e72-0088-4c8f-8272-fa1f917210b2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:09:23 compute-0 nova_compute[257802]: 2025-10-02 12:09:23.809 2 DEBUG oslo_concurrency.lockutils [req-00763543-bb93-4afa-ac8f-8679284081c1 req-d469fd9e-5eeb-4a8a-9804-6b9b22ee8e98 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-e8828a06-3170-4302-a5f9-2e4ec8445ab2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:09:23 compute-0 nova_compute[257802]: 2025-10-02 12:09:23.809 2 DEBUG oslo_concurrency.lockutils [req-00763543-bb93-4afa-ac8f-8679284081c1 req-d469fd9e-5eeb-4a8a-9804-6b9b22ee8e98 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-e8828a06-3170-4302-a5f9-2e4ec8445ab2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:09:23 compute-0 nova_compute[257802]: 2025-10-02 12:09:23.809 2 DEBUG nova.network.neutron [req-00763543-bb93-4afa-ac8f-8679284081c1 req-d469fd9e-5eeb-4a8a-9804-6b9b22ee8e98 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Refreshing network info cache for port d83c3e72-0088-4c8f-8272-fa1f917210b2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:09:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Oct 02 12:09:24 compute-0 ceph-mon[73607]: pgmap v1290: 305 pgs: 305 active+clean; 214 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 963 KiB/s rd, 5.7 MiB/s wr, 153 op/s
Oct 02 12:09:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Oct 02 12:09:24 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Oct 02 12:09:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 214 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.4 MiB/s wr, 200 op/s
Oct 02 12:09:25 compute-0 ceph-mon[73607]: osdmap e168: 3 total, 3 up, 3 in
Oct 02 12:09:25 compute-0 nova_compute[257802]: 2025-10-02 12:09:25.442 2 DEBUG nova.network.neutron [req-00763543-bb93-4afa-ac8f-8679284081c1 req-d469fd9e-5eeb-4a8a-9804-6b9b22ee8e98 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Updated VIF entry in instance network info cache for port d83c3e72-0088-4c8f-8272-fa1f917210b2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:09:25 compute-0 nova_compute[257802]: 2025-10-02 12:09:25.442 2 DEBUG nova.network.neutron [req-00763543-bb93-4afa-ac8f-8679284081c1 req-d469fd9e-5eeb-4a8a-9804-6b9b22ee8e98 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Updating instance_info_cache with network_info: [{"id": "d83c3e72-0088-4c8f-8272-fa1f917210b2", "address": "fa:16:3e:78:cb:b6", "network": {"id": "032ac993-ac13-41ba-b86e-9d12c97f66b8", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1769597820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95056cabad5b4f32916e46a46b10f677", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83c3e72-00", "ovs_interfaceid": "d83c3e72-0088-4c8f-8272-fa1f917210b2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:09:25 compute-0 nova_compute[257802]: 2025-10-02 12:09:25.463 2 DEBUG oslo_concurrency.lockutils [req-00763543-bb93-4afa-ac8f-8679284081c1 req-d469fd9e-5eeb-4a8a-9804-6b9b22ee8e98 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-e8828a06-3170-4302-a5f9-2e4ec8445ab2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:09:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:25.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:09:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:25.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:09:25 compute-0 nova_compute[257802]: 2025-10-02 12:09:25.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:26 compute-0 ceph-mon[73607]: pgmap v1292: 305 pgs: 305 active+clean; 214 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.4 MiB/s wr, 200 op/s
Oct 02 12:09:26 compute-0 nova_compute[257802]: 2025-10-02 12:09:26.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:26.927 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:26.928 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:26.928 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:09:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 214 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 4.7 MiB/s wr, 267 op/s
Oct 02 12:09:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:27.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:27.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:28 compute-0 ceph-mon[73607]: pgmap v1293: 305 pgs: 305 active+clean; 214 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 4.7 MiB/s wr, 267 op/s
Oct 02 12:09:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 214 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 1.4 MiB/s wr, 201 op/s
Oct 02 12:09:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:29.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:29.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:30 compute-0 ceph-mon[73607]: pgmap v1294: 305 pgs: 305 active+clean; 214 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 1.4 MiB/s wr, 201 op/s
Oct 02 12:09:30 compute-0 nova_compute[257802]: 2025-10-02 12:09:30.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 214 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 30 KiB/s wr, 184 op/s
Oct 02 12:09:31 compute-0 nova_compute[257802]: 2025-10-02 12:09:31.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:31.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:31.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:32 compute-0 ceph-mon[73607]: pgmap v1295: 305 pgs: 305 active+clean; 214 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 30 KiB/s wr, 184 op/s
Oct 02 12:09:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 228 MiB data, 526 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.2 MiB/s wr, 175 op/s
Oct 02 12:09:33 compute-0 ovn_controller[148183]: 2025-10-02T12:09:33Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:78:cb:b6 10.100.0.4
Oct 02 12:09:33 compute-0 ovn_controller[148183]: 2025-10-02T12:09:33Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:78:cb:b6 10.100.0.4
Oct 02 12:09:33 compute-0 sudo[287545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:09:33 compute-0 sudo[287545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:09:33 compute-0 sudo[287545]: pam_unix(sudo:session): session closed for user root
Oct 02 12:09:33 compute-0 sudo[287570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:09:33 compute-0 sudo[287570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:09:33 compute-0 sudo[287570]: pam_unix(sudo:session): session closed for user root
Oct 02 12:09:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:33.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:33.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:34 compute-0 ceph-mon[73607]: pgmap v1296: 305 pgs: 305 active+clean; 228 MiB data, 526 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.2 MiB/s wr, 175 op/s
Oct 02 12:09:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 238 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.0 MiB/s wr, 144 op/s
Oct 02 12:09:35 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct 02 12:09:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:35.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:35.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:35 compute-0 nova_compute[257802]: 2025-10-02 12:09:35.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:36 compute-0 nova_compute[257802]: 2025-10-02 12:09:36.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:36 compute-0 ceph-mon[73607]: pgmap v1297: 305 pgs: 305 active+clean; 238 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.0 MiB/s wr, 144 op/s
Oct 02 12:09:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1242565607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:37 compute-0 nova_compute[257802]: 2025-10-02 12:09:37.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:09:37 compute-0 nova_compute[257802]: 2025-10-02 12:09:37.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:09:37 compute-0 nova_compute[257802]: 2025-10-02 12:09:37.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:09:37 compute-0 nova_compute[257802]: 2025-10-02 12:09:37.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:09:37 compute-0 nova_compute[257802]: 2025-10-02 12:09:37.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:09:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 255 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.9 MiB/s wr, 152 op/s
Oct 02 12:09:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:37.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:37.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:38 compute-0 ceph-mon[73607]: pgmap v1298: 305 pgs: 305 active+clean; 255 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.9 MiB/s wr, 152 op/s
Oct 02 12:09:38 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3815105497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:38 compute-0 podman[287598]: 2025-10-02 12:09:38.919558114 +0000 UTC m=+0.058287513 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 12:09:39 compute-0 nova_compute[257802]: 2025-10-02 12:09:39.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:09:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 278 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.2 MiB/s wr, 141 op/s
Oct 02 12:09:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:09:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:39.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:09:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:39.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:40 compute-0 nova_compute[257802]: 2025-10-02 12:09:40.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:09:40 compute-0 ovn_controller[148183]: 2025-10-02T12:09:40Z|00170|binding|INFO|Releasing lport c7800c5b-63a7-46df-a2e3-8e9385f2b5c7 from this chassis (sb_readonly=0)
Oct 02 12:09:40 compute-0 nova_compute[257802]: 2025-10-02 12:09:40.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:40 compute-0 ceph-mon[73607]: pgmap v1299: 305 pgs: 305 active+clean; 278 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.2 MiB/s wr, 141 op/s
Oct 02 12:09:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1818087645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:40 compute-0 nova_compute[257802]: 2025-10-02 12:09:40.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:41 compute-0 nova_compute[257802]: 2025-10-02 12:09:41.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:09:41 compute-0 nova_compute[257802]: 2025-10-02 12:09:41.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:09:41 compute-0 nova_compute[257802]: 2025-10-02 12:09:41.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:09:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 279 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 508 KiB/s rd, 4.3 MiB/s wr, 123 op/s
Oct 02 12:09:41 compute-0 nova_compute[257802]: 2025-10-02 12:09:41.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:41 compute-0 nova_compute[257802]: 2025-10-02 12:09:41.479 2 DEBUG oslo_concurrency.lockutils [None req-f486e3e5-11de-4092-8864-763c07b4408c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Acquiring lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:41 compute-0 nova_compute[257802]: 2025-10-02 12:09:41.480 2 DEBUG oslo_concurrency.lockutils [None req-f486e3e5-11de-4092-8864-763c07b4408c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:41 compute-0 nova_compute[257802]: 2025-10-02 12:09:41.548 2 DEBUG nova.objects.instance [None req-f486e3e5-11de-4092-8864-763c07b4408c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lazy-loading 'flavor' on Instance uuid e8828a06-3170-4302-a5f9-2e4ec8445ab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:09:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3853346440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:41 compute-0 nova_compute[257802]: 2025-10-02 12:09:41.631 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-e8828a06-3170-4302-a5f9-2e4ec8445ab2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:09:41 compute-0 nova_compute[257802]: 2025-10-02 12:09:41.631 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-e8828a06-3170-4302-a5f9-2e4ec8445ab2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:09:41 compute-0 nova_compute[257802]: 2025-10-02 12:09:41.632 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:09:41 compute-0 nova_compute[257802]: 2025-10-02 12:09:41.632 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid e8828a06-3170-4302-a5f9-2e4ec8445ab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:09:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:41.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:41 compute-0 nova_compute[257802]: 2025-10-02 12:09:41.735 2 DEBUG oslo_concurrency.lockutils [None req-f486e3e5-11de-4092-8864-763c07b4408c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:09:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:41.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:41 compute-0 podman[287618]: 2025-10-02 12:09:41.925533696 +0000 UTC m=+0.054593824 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.build-date=20251001, config_id=multipathd, io.buildah.version=1.41.3)
Oct 02 12:09:41 compute-0 podman[287619]: 2025-10-02 12:09:41.947350615 +0000 UTC m=+0.076409313 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true)
Oct 02 12:09:42 compute-0 nova_compute[257802]: 2025-10-02 12:09:42.241 2 DEBUG oslo_concurrency.lockutils [None req-f486e3e5-11de-4092-8864-763c07b4408c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Acquiring lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:42 compute-0 nova_compute[257802]: 2025-10-02 12:09:42.241 2 DEBUG oslo_concurrency.lockutils [None req-f486e3e5-11de-4092-8864-763c07b4408c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:42 compute-0 nova_compute[257802]: 2025-10-02 12:09:42.242 2 INFO nova.compute.manager [None req-f486e3e5-11de-4092-8864-763c07b4408c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Attaching volume 80038b9a-2fe1-4bab-ad9d-f3986e75d80d to /dev/vdb
Oct 02 12:09:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:09:42
Oct 02 12:09:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:09:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:09:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'vms', '.mgr', 'default.rgw.meta', 'images', 'default.rgw.log', 'volumes']
Oct 02 12:09:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:09:42 compute-0 ceph-mon[73607]: pgmap v1300: 305 pgs: 305 active+clean; 279 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 508 KiB/s rd, 4.3 MiB/s wr, 123 op/s
Oct 02 12:09:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:09:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:09:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:09:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:09:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:09:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:09:42 compute-0 nova_compute[257802]: 2025-10-02 12:09:42.719 2 DEBUG os_brick.utils [None req-f486e3e5-11de-4092-8864-763c07b4408c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:09:42 compute-0 nova_compute[257802]: 2025-10-02 12:09:42.720 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:42 compute-0 nova_compute[257802]: 2025-10-02 12:09:42.731 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:42 compute-0 nova_compute[257802]: 2025-10-02 12:09:42.731 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[2f1dc402-98ac-42dc-864f-205ab25dfb24]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:09:42 compute-0 nova_compute[257802]: 2025-10-02 12:09:42.733 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:42 compute-0 nova_compute[257802]: 2025-10-02 12:09:42.741 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:42 compute-0 nova_compute[257802]: 2025-10-02 12:09:42.741 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[c78f2930-f545-43c8-9367-a2f7cc233adb]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:09:42 compute-0 nova_compute[257802]: 2025-10-02 12:09:42.743 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:42 compute-0 nova_compute[257802]: 2025-10-02 12:09:42.750 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:42 compute-0 nova_compute[257802]: 2025-10-02 12:09:42.750 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[b962e4d3-53f7-4c68-bcc2-c74c0dacfc6f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:09:42 compute-0 nova_compute[257802]: 2025-10-02 12:09:42.752 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[824a41ea-eb7a-4db2-975c-c36d61de62f4]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:09:42 compute-0 nova_compute[257802]: 2025-10-02 12:09:42.752 2 DEBUG oslo_concurrency.processutils [None req-f486e3e5-11de-4092-8864-763c07b4408c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:42 compute-0 nova_compute[257802]: 2025-10-02 12:09:42.774 2 DEBUG oslo_concurrency.processutils [None req-f486e3e5-11de-4092-8864-763c07b4408c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] CMD "nvme version" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:42 compute-0 nova_compute[257802]: 2025-10-02 12:09:42.776 2 DEBUG os_brick.initiator.connectors.lightos [None req-f486e3e5-11de-4092-8864-763c07b4408c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:09:42 compute-0 nova_compute[257802]: 2025-10-02 12:09:42.777 2 DEBUG os_brick.initiator.connectors.lightos [None req-f486e3e5-11de-4092-8864-763c07b4408c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:09:42 compute-0 nova_compute[257802]: 2025-10-02 12:09:42.777 2 DEBUG os_brick.initiator.connectors.lightos [None req-f486e3e5-11de-4092-8864-763c07b4408c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:09:42 compute-0 nova_compute[257802]: 2025-10-02 12:09:42.777 2 DEBUG os_brick.utils [None req-f486e3e5-11de-4092-8864-763c07b4408c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] <== get_connector_properties: return (57ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:09:42 compute-0 nova_compute[257802]: 2025-10-02 12:09:42.778 2 DEBUG nova.virt.block_device [None req-f486e3e5-11de-4092-8864-763c07b4408c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Updating existing volume attachment record: 403fd515-9007-40ce-b4e1-3421f7a24d84 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:09:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:09:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:09:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:09:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:09:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:09:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 248 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 515 KiB/s rd, 4.3 MiB/s wr, 134 op/s
Oct 02 12:09:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:09:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:09:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:09:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:09:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:09:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3761977314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:09:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:43.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:43 compute-0 nova_compute[257802]: 2025-10-02 12:09:43.739 2 DEBUG nova.objects.instance [None req-f486e3e5-11de-4092-8864-763c07b4408c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lazy-loading 'flavor' on Instance uuid e8828a06-3170-4302-a5f9-2e4ec8445ab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:09:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:43.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:43 compute-0 nova_compute[257802]: 2025-10-02 12:09:43.772 2 DEBUG nova.virt.libvirt.driver [None req-f486e3e5-11de-4092-8864-763c07b4408c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Attempting to attach volume 80038b9a-2fe1-4bab-ad9d-f3986e75d80d with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 12:09:43 compute-0 nova_compute[257802]: 2025-10-02 12:09:43.774 2 DEBUG nova.virt.libvirt.guest [None req-f486e3e5-11de-4092-8864-763c07b4408c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 12:09:43 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:09:43 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-80038b9a-2fe1-4bab-ad9d-f3986e75d80d">
Oct 02 12:09:43 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:09:43 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:09:43 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:09:43 compute-0 nova_compute[257802]:   </source>
Oct 02 12:09:43 compute-0 nova_compute[257802]:   <auth username="openstack">
Oct 02 12:09:43 compute-0 nova_compute[257802]:     <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:09:43 compute-0 nova_compute[257802]:   </auth>
Oct 02 12:09:43 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:09:43 compute-0 nova_compute[257802]:   <serial>80038b9a-2fe1-4bab-ad9d-f3986e75d80d</serial>
Oct 02 12:09:43 compute-0 nova_compute[257802]: </disk>
Oct 02 12:09:43 compute-0 nova_compute[257802]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:09:43 compute-0 nova_compute[257802]: 2025-10-02 12:09:43.876 2 DEBUG nova.virt.libvirt.driver [None req-f486e3e5-11de-4092-8864-763c07b4408c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:09:43 compute-0 nova_compute[257802]: 2025-10-02 12:09:43.876 2 DEBUG nova.virt.libvirt.driver [None req-f486e3e5-11de-4092-8864-763c07b4408c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:09:43 compute-0 nova_compute[257802]: 2025-10-02 12:09:43.877 2 DEBUG nova.virt.libvirt.driver [None req-f486e3e5-11de-4092-8864-763c07b4408c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:09:43 compute-0 nova_compute[257802]: 2025-10-02 12:09:43.877 2 DEBUG nova.virt.libvirt.driver [None req-f486e3e5-11de-4092-8864-763c07b4408c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] No VIF found with MAC fa:16:3e:78:cb:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:09:44 compute-0 nova_compute[257802]: 2025-10-02 12:09:44.065 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Updating instance_info_cache with network_info: [{"id": "d83c3e72-0088-4c8f-8272-fa1f917210b2", "address": "fa:16:3e:78:cb:b6", "network": {"id": "032ac993-ac13-41ba-b86e-9d12c97f66b8", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1769597820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95056cabad5b4f32916e46a46b10f677", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83c3e72-00", "ovs_interfaceid": "d83c3e72-0088-4c8f-8272-fa1f917210b2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:09:44 compute-0 nova_compute[257802]: 2025-10-02 12:09:44.082 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-e8828a06-3170-4302-a5f9-2e4ec8445ab2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:09:44 compute-0 nova_compute[257802]: 2025-10-02 12:09:44.082 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:09:44 compute-0 nova_compute[257802]: 2025-10-02 12:09:44.085 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:09:44 compute-0 nova_compute[257802]: 2025-10-02 12:09:44.111 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:44 compute-0 nova_compute[257802]: 2025-10-02 12:09:44.111 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:44 compute-0 nova_compute[257802]: 2025-10-02 12:09:44.111 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:09:44 compute-0 nova_compute[257802]: 2025-10-02 12:09:44.111 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:09:44 compute-0 nova_compute[257802]: 2025-10-02 12:09:44.112 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:44 compute-0 nova_compute[257802]: 2025-10-02 12:09:44.468 2 DEBUG oslo_concurrency.lockutils [None req-f486e3e5-11de-4092-8864-763c07b4408c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:09:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:09:44 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3983667213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:44 compute-0 nova_compute[257802]: 2025-10-02 12:09:44.542 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Oct 02 12:09:44 compute-0 ceph-mon[73607]: pgmap v1301: 305 pgs: 305 active+clean; 248 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 515 KiB/s rd, 4.3 MiB/s wr, 134 op/s
Oct 02 12:09:44 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3983667213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:44 compute-0 nova_compute[257802]: 2025-10-02 12:09:44.628 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:09:44 compute-0 nova_compute[257802]: 2025-10-02 12:09:44.628 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:09:44 compute-0 nova_compute[257802]: 2025-10-02 12:09:44.629 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:09:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Oct 02 12:09:44 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Oct 02 12:09:44 compute-0 nova_compute[257802]: 2025-10-02 12:09:44.797 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:09:44 compute-0 nova_compute[257802]: 2025-10-02 12:09:44.800 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4509MB free_disk=20.86734390258789GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:09:44 compute-0 nova_compute[257802]: 2025-10-02 12:09:44.801 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:44 compute-0 nova_compute[257802]: 2025-10-02 12:09:44.801 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:44 compute-0 nova_compute[257802]: 2025-10-02 12:09:44.925 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance e8828a06-3170-4302-a5f9-2e4ec8445ab2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:09:44 compute-0 nova_compute[257802]: 2025-10-02 12:09:44.925 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:09:44 compute-0 nova_compute[257802]: 2025-10-02 12:09:44.926 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:09:44 compute-0 nova_compute[257802]: 2025-10-02 12:09:44.990 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 225 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 485 KiB/s rd, 3.0 MiB/s wr, 119 op/s
Oct 02 12:09:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:09:45 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2445604339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:45 compute-0 nova_compute[257802]: 2025-10-02 12:09:45.416 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:45 compute-0 nova_compute[257802]: 2025-10-02 12:09:45.421 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:09:45 compute-0 nova_compute[257802]: 2025-10-02 12:09:45.447 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:09:45 compute-0 nova_compute[257802]: 2025-10-02 12:09:45.472 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:09:45 compute-0 nova_compute[257802]: 2025-10-02 12:09:45.473 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:09:45 compute-0 ceph-mon[73607]: osdmap e169: 3 total, 3 up, 3 in
Oct 02 12:09:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3936337925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2445604339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:45.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:45.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:45 compute-0 nova_compute[257802]: 2025-10-02 12:09:45.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:45 compute-0 podman[287733]: 2025-10-02 12:09:45.929652956 +0000 UTC m=+0.073636995 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 12:09:46 compute-0 nova_compute[257802]: 2025-10-02 12:09:46.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:46 compute-0 nova_compute[257802]: 2025-10-02 12:09:46.469 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:09:46 compute-0 nova_compute[257802]: 2025-10-02 12:09:46.660 2 DEBUG oslo_concurrency.lockutils [None req-9bdbcc11-2ebb-4912-b39d-29c6ab77dd30 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Acquiring lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:46 compute-0 nova_compute[257802]: 2025-10-02 12:09:46.660 2 DEBUG oslo_concurrency.lockutils [None req-9bdbcc11-2ebb-4912-b39d-29c6ab77dd30 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:46 compute-0 ceph-mon[73607]: pgmap v1303: 305 pgs: 305 active+clean; 225 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 485 KiB/s rd, 3.0 MiB/s wr, 119 op/s
Oct 02 12:09:46 compute-0 nova_compute[257802]: 2025-10-02 12:09:46.677 2 INFO nova.compute.manager [None req-9bdbcc11-2ebb-4912-b39d-29c6ab77dd30 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Detaching volume 80038b9a-2fe1-4bab-ad9d-f3986e75d80d
Oct 02 12:09:47 compute-0 nova_compute[257802]: 2025-10-02 12:09:47.027 2 INFO nova.virt.block_device [None req-9bdbcc11-2ebb-4912-b39d-29c6ab77dd30 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Attempting to driver detach volume 80038b9a-2fe1-4bab-ad9d-f3986e75d80d from mountpoint /dev/vdb
Oct 02 12:09:47 compute-0 nova_compute[257802]: 2025-10-02 12:09:47.035 2 DEBUG nova.virt.libvirt.driver [None req-9bdbcc11-2ebb-4912-b39d-29c6ab77dd30 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Attempting to detach device vdb from instance e8828a06-3170-4302-a5f9-2e4ec8445ab2 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:09:47 compute-0 nova_compute[257802]: 2025-10-02 12:09:47.035 2 DEBUG nova.virt.libvirt.guest [None req-9bdbcc11-2ebb-4912-b39d-29c6ab77dd30 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:09:47 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:09:47 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-80038b9a-2fe1-4bab-ad9d-f3986e75d80d">
Oct 02 12:09:47 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:09:47 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:09:47 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:09:47 compute-0 nova_compute[257802]:   </source>
Oct 02 12:09:47 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:09:47 compute-0 nova_compute[257802]:   <serial>80038b9a-2fe1-4bab-ad9d-f3986e75d80d</serial>
Oct 02 12:09:47 compute-0 nova_compute[257802]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:09:47 compute-0 nova_compute[257802]: </disk>
Oct 02 12:09:47 compute-0 nova_compute[257802]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:09:47 compute-0 nova_compute[257802]: 2025-10-02 12:09:47.041 2 INFO nova.virt.libvirt.driver [None req-9bdbcc11-2ebb-4912-b39d-29c6ab77dd30 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Successfully detached device vdb from instance e8828a06-3170-4302-a5f9-2e4ec8445ab2 from the persistent domain config.
Oct 02 12:09:47 compute-0 nova_compute[257802]: 2025-10-02 12:09:47.042 2 DEBUG nova.virt.libvirt.driver [None req-9bdbcc11-2ebb-4912-b39d-29c6ab77dd30 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance e8828a06-3170-4302-a5f9-2e4ec8445ab2 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 12:09:47 compute-0 nova_compute[257802]: 2025-10-02 12:09:47.042 2 DEBUG nova.virt.libvirt.guest [None req-9bdbcc11-2ebb-4912-b39d-29c6ab77dd30 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:09:47 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:09:47 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-80038b9a-2fe1-4bab-ad9d-f3986e75d80d">
Oct 02 12:09:47 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:09:47 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:09:47 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:09:47 compute-0 nova_compute[257802]:   </source>
Oct 02 12:09:47 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:09:47 compute-0 nova_compute[257802]:   <serial>80038b9a-2fe1-4bab-ad9d-f3986e75d80d</serial>
Oct 02 12:09:47 compute-0 nova_compute[257802]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:09:47 compute-0 nova_compute[257802]: </disk>
Oct 02 12:09:47 compute-0 nova_compute[257802]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:09:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 200 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 402 KiB/s rd, 1.7 MiB/s wr, 121 op/s
Oct 02 12:09:47 compute-0 nova_compute[257802]: 2025-10-02 12:09:47.148 2 DEBUG nova.virt.libvirt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Received event <DeviceRemovedEvent: 1759406987.148459, e8828a06-3170-4302-a5f9-2e4ec8445ab2 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 12:09:47 compute-0 nova_compute[257802]: 2025-10-02 12:09:47.150 2 DEBUG nova.virt.libvirt.driver [None req-9bdbcc11-2ebb-4912-b39d-29c6ab77dd30 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance e8828a06-3170-4302-a5f9-2e4ec8445ab2 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 12:09:47 compute-0 nova_compute[257802]: 2025-10-02 12:09:47.152 2 INFO nova.virt.libvirt.driver [None req-9bdbcc11-2ebb-4912-b39d-29c6ab77dd30 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Successfully detached device vdb from instance e8828a06-3170-4302-a5f9-2e4ec8445ab2 from the live domain config.
Oct 02 12:09:47 compute-0 nova_compute[257802]: 2025-10-02 12:09:47.437 2 DEBUG nova.objects.instance [None req-9bdbcc11-2ebb-4912-b39d-29c6ab77dd30 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lazy-loading 'flavor' on Instance uuid e8828a06-3170-4302-a5f9-2e4ec8445ab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:09:47 compute-0 nova_compute[257802]: 2025-10-02 12:09:47.596 2 DEBUG oslo_concurrency.lockutils [None req-9bdbcc11-2ebb-4912-b39d-29c6ab77dd30 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:09:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:09:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:47.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:09:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:09:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:47.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:09:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:48 compute-0 nova_compute[257802]: 2025-10-02 12:09:48.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:09:48 compute-0 ceph-mon[73607]: pgmap v1304: 305 pgs: 305 active+clean; 200 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 402 KiB/s rd, 1.7 MiB/s wr, 121 op/s
Oct 02 12:09:48 compute-0 nova_compute[257802]: 2025-10-02 12:09:48.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 200 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 244 KiB/s rd, 90 KiB/s wr, 77 op/s
Oct 02 12:09:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:49.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:49.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:50 compute-0 ceph-mon[73607]: pgmap v1305: 305 pgs: 305 active+clean; 200 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 244 KiB/s rd, 90 KiB/s wr, 77 op/s
Oct 02 12:09:50 compute-0 nova_compute[257802]: 2025-10-02 12:09:50.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 200 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 35 KiB/s wr, 63 op/s
Oct 02 12:09:51 compute-0 nova_compute[257802]: 2025-10-02 12:09:51.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:51.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:51.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:52 compute-0 ceph-mon[73607]: pgmap v1306: 305 pgs: 305 active+clean; 200 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 35 KiB/s wr, 63 op/s
Oct 02 12:09:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Oct 02 12:09:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Oct 02 12:09:52 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Oct 02 12:09:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 200 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 10 KiB/s wr, 46 op/s
Oct 02 12:09:53 compute-0 sudo[287766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:09:53 compute-0 sudo[287766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:09:53 compute-0 sudo[287766]: pam_unix(sudo:session): session closed for user root
Oct 02 12:09:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:09:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:53.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:09:53 compute-0 sudo[287791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:09:53 compute-0 sudo[287791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:09:53 compute-0 sudo[287791]: pam_unix(sudo:session): session closed for user root
Oct 02 12:09:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:53.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:53 compute-0 ceph-mon[73607]: osdmap e170: 3 total, 3 up, 3 in
Oct 02 12:09:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1175183872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0043415500535082825 of space, bias 1.0, pg target 1.3024650160524847 quantized to 32 (current 32)
Oct 02 12:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Oct 02 12:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 12:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 12:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027172174530057695 quantized to 32 (current 32)
Oct 02 12:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 12:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 12:09:54 compute-0 ceph-mon[73607]: pgmap v1308: 305 pgs: 305 active+clean; 200 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 10 KiB/s wr, 46 op/s
Oct 02 12:09:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:09:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/848731468' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:09:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:09:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/848731468' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:09:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 200 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 8.8 KiB/s wr, 39 op/s
Oct 02 12:09:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:55.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:55.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:55 compute-0 nova_compute[257802]: 2025-10-02 12:09:55.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/848731468' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:09:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/848731468' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:09:56 compute-0 nova_compute[257802]: 2025-10-02 12:09:56.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:57 compute-0 ceph-mon[73607]: pgmap v1309: 305 pgs: 305 active+clean; 200 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 8.8 KiB/s wr, 39 op/s
Oct 02 12:09:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 195 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 9.3 KiB/s rd, 974 KiB/s wr, 17 op/s
Oct 02 12:09:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:57.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:57.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:58 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:58.475 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:09:58 compute-0 nova_compute[257802]: 2025-10-02 12:09:58.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:58 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:09:58.476 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:09:58 compute-0 nova_compute[257802]: 2025-10-02 12:09:58.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:09:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 167 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 12:09:59 compute-0 ceph-mon[73607]: pgmap v1310: 305 pgs: 305 active+clean; 195 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 9.3 KiB/s rd, 974 KiB/s wr, 17 op/s
Oct 02 12:09:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:59.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:09:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:59.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:00 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 02 12:10:00 compute-0 ceph-mon[73607]: pgmap v1311: 305 pgs: 305 active+clean; 167 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 12:10:00 compute-0 ceph-mon[73607]: overall HEALTH_OK
Oct 02 12:10:00 compute-0 nova_compute[257802]: 2025-10-02 12:10:00.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:01 compute-0 sudo[287820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:01 compute-0 sudo[287820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:01 compute-0 sudo[287820]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:01 compute-0 sudo[287845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:10:01 compute-0 sudo[287845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:01 compute-0 sudo[287845]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 12:10:01 compute-0 sudo[287870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:01 compute-0 sudo[287870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:01 compute-0 sudo[287870]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:01 compute-0 sudo[287895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:10:01 compute-0 sudo[287895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:01 compute-0 nova_compute[257802]: 2025-10-02 12:10:01.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/139278732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:01 compute-0 sudo[287895]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:10:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:01.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:10:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:01.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:10:01 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:10:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:10:01 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:10:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:10:01 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:10:01 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 16f6a372-bf5e-4754-b99c-dc94ba997742 does not exist
Oct 02 12:10:01 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev ef5502c8-3a94-423d-95a2-93737279e7d1 does not exist
Oct 02 12:10:01 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev a5cf1920-f82c-4945-a6ca-8d1900f84a84 does not exist
Oct 02 12:10:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:10:01 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:10:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:10:01 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:10:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:10:01 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:10:01 compute-0 sudo[287951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:02 compute-0 sudo[287951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:02 compute-0 sudo[287951]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:02 compute-0 sudo[287976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:10:02 compute-0 sudo[287976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:02 compute-0 sudo[287976]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:02 compute-0 sudo[288001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:02 compute-0 sudo[288001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:02 compute-0 sudo[288001]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:02 compute-0 sudo[288026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:10:02 compute-0 sudo[288026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:02 compute-0 podman[288092]: 2025-10-02 12:10:02.446690493 +0000 UTC m=+0.035418489 container create 457c554ee4782aa7b6a8c2aa88f086bd7b895445cb1c4db680221bea90b2c0d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_saha, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 12:10:02 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:02.478 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:02 compute-0 systemd[1]: Started libpod-conmon-457c554ee4782aa7b6a8c2aa88f086bd7b895445cb1c4db680221bea90b2c0d8.scope.
Oct 02 12:10:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:10:02 compute-0 podman[288092]: 2025-10-02 12:10:02.518173235 +0000 UTC m=+0.106901251 container init 457c554ee4782aa7b6a8c2aa88f086bd7b895445cb1c4db680221bea90b2c0d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:10:02 compute-0 podman[288092]: 2025-10-02 12:10:02.526370214 +0000 UTC m=+0.115098210 container start 457c554ee4782aa7b6a8c2aa88f086bd7b895445cb1c4db680221bea90b2c0d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_saha, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:10:02 compute-0 podman[288092]: 2025-10-02 12:10:02.431437723 +0000 UTC m=+0.020165739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:10:02 compute-0 dreamy_saha[288109]: 167 167
Oct 02 12:10:02 compute-0 systemd[1]: libpod-457c554ee4782aa7b6a8c2aa88f086bd7b895445cb1c4db680221bea90b2c0d8.scope: Deactivated successfully.
Oct 02 12:10:02 compute-0 podman[288092]: 2025-10-02 12:10:02.532278646 +0000 UTC m=+0.121006642 container attach 457c554ee4782aa7b6a8c2aa88f086bd7b895445cb1c4db680221bea90b2c0d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_saha, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 12:10:02 compute-0 podman[288092]: 2025-10-02 12:10:02.53287744 +0000 UTC m=+0.121605457 container died 457c554ee4782aa7b6a8c2aa88f086bd7b895445cb1c4db680221bea90b2c0d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 12:10:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6ce01bd96bc20b638c95ff8deed3583a496b1e37573dd22cfb8bdb76e0a7632-merged.mount: Deactivated successfully.
Oct 02 12:10:02 compute-0 podman[288092]: 2025-10-02 12:10:02.590086127 +0000 UTC m=+0.178814123 container remove 457c554ee4782aa7b6a8c2aa88f086bd7b895445cb1c4db680221bea90b2c0d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_saha, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:10:02 compute-0 systemd[1]: libpod-conmon-457c554ee4782aa7b6a8c2aa88f086bd7b895445cb1c4db680221bea90b2c0d8.scope: Deactivated successfully.
Oct 02 12:10:02 compute-0 ceph-mon[73607]: pgmap v1312: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 12:10:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:10:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:10:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:10:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:10:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:10:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:10:02 compute-0 podman[288132]: 2025-10-02 12:10:02.746660781 +0000 UTC m=+0.039423726 container create 2890ba791f6f0d48d44510b27c88e314c411bc820df16a94b600e0719724fbd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_swanson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 12:10:02 compute-0 systemd[1]: Started libpod-conmon-2890ba791f6f0d48d44510b27c88e314c411bc820df16a94b600e0719724fbd9.scope.
Oct 02 12:10:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:10:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0daf29f1760583e0e40a11c5f124bd0bc2c8b7d8368d33765f813f3cc9c7cedb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0daf29f1760583e0e40a11c5f124bd0bc2c8b7d8368d33765f813f3cc9c7cedb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0daf29f1760583e0e40a11c5f124bd0bc2c8b7d8368d33765f813f3cc9c7cedb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0daf29f1760583e0e40a11c5f124bd0bc2c8b7d8368d33765f813f3cc9c7cedb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0daf29f1760583e0e40a11c5f124bd0bc2c8b7d8368d33765f813f3cc9c7cedb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:02 compute-0 podman[288132]: 2025-10-02 12:10:02.730008338 +0000 UTC m=+0.022771293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:10:02 compute-0 podman[288132]: 2025-10-02 12:10:02.829441277 +0000 UTC m=+0.122204242 container init 2890ba791f6f0d48d44510b27c88e314c411bc820df16a94b600e0719724fbd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_swanson, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:10:02 compute-0 podman[288132]: 2025-10-02 12:10:02.837008891 +0000 UTC m=+0.129771826 container start 2890ba791f6f0d48d44510b27c88e314c411bc820df16a94b600e0719724fbd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 12:10:02 compute-0 podman[288132]: 2025-10-02 12:10:02.840522826 +0000 UTC m=+0.133285791 container attach 2890ba791f6f0d48d44510b27c88e314c411bc820df16a94b600e0719724fbd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 12:10:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 12:10:03 compute-0 eager_swanson[288148]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:10:03 compute-0 eager_swanson[288148]: --> relative data size: 1.0
Oct 02 12:10:03 compute-0 eager_swanson[288148]: --> All data devices are unavailable
Oct 02 12:10:03 compute-0 systemd[1]: libpod-2890ba791f6f0d48d44510b27c88e314c411bc820df16a94b600e0719724fbd9.scope: Deactivated successfully.
Oct 02 12:10:03 compute-0 podman[288132]: 2025-10-02 12:10:03.648320261 +0000 UTC m=+0.941083236 container died 2890ba791f6f0d48d44510b27c88e314c411bc820df16a94b600e0719724fbd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:10:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:10:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:03.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:10:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:03.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-0daf29f1760583e0e40a11c5f124bd0bc2c8b7d8368d33765f813f3cc9c7cedb-merged.mount: Deactivated successfully.
Oct 02 12:10:03 compute-0 podman[288132]: 2025-10-02 12:10:03.910185676 +0000 UTC m=+1.202948631 container remove 2890ba791f6f0d48d44510b27c88e314c411bc820df16a94b600e0719724fbd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_swanson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:10:03 compute-0 systemd[1]: libpod-conmon-2890ba791f6f0d48d44510b27c88e314c411bc820df16a94b600e0719724fbd9.scope: Deactivated successfully.
Oct 02 12:10:03 compute-0 sudo[288026]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:04 compute-0 sudo[288177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:04 compute-0 sudo[288177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:04 compute-0 sudo[288177]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:04 compute-0 sudo[288202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:10:04 compute-0 sudo[288202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:04 compute-0 sudo[288202]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:04 compute-0 sudo[288227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:04 compute-0 sudo[288227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:04 compute-0 sudo[288227]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:04 compute-0 sudo[288252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:10:04 compute-0 sudo[288252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:04 compute-0 podman[288320]: 2025-10-02 12:10:04.681282542 +0000 UTC m=+0.058803456 container create c016183996229098a7f136b68c148e758f023a1fa89815330d5ac57fd780d52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 12:10:04 compute-0 ceph-mon[73607]: pgmap v1313: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 12:10:04 compute-0 podman[288320]: 2025-10-02 12:10:04.642481512 +0000 UTC m=+0.020002326 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:10:04 compute-0 systemd[1]: Started libpod-conmon-c016183996229098a7f136b68c148e758f023a1fa89815330d5ac57fd780d52c.scope.
Oct 02 12:10:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:10:04 compute-0 podman[288320]: 2025-10-02 12:10:04.827215628 +0000 UTC m=+0.204736452 container init c016183996229098a7f136b68c148e758f023a1fa89815330d5ac57fd780d52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:10:04 compute-0 podman[288320]: 2025-10-02 12:10:04.834659038 +0000 UTC m=+0.212179832 container start c016183996229098a7f136b68c148e758f023a1fa89815330d5ac57fd780d52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 12:10:04 compute-0 admiring_wu[288336]: 167 167
Oct 02 12:10:04 compute-0 podman[288320]: 2025-10-02 12:10:04.841007883 +0000 UTC m=+0.218528687 container attach c016183996229098a7f136b68c148e758f023a1fa89815330d5ac57fd780d52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_wu, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 12:10:04 compute-0 systemd[1]: libpod-c016183996229098a7f136b68c148e758f023a1fa89815330d5ac57fd780d52c.scope: Deactivated successfully.
Oct 02 12:10:04 compute-0 conmon[288336]: conmon c016183996229098a7f1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c016183996229098a7f136b68c148e758f023a1fa89815330d5ac57fd780d52c.scope/container/memory.events
Oct 02 12:10:04 compute-0 podman[288320]: 2025-10-02 12:10:04.843497613 +0000 UTC m=+0.221018407 container died c016183996229098a7f136b68c148e758f023a1fa89815330d5ac57fd780d52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:10:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-80c9658585f76bc93ca59bf990b3d6a1a21bd8040f27ed5ef1bf50c63f4e1807-merged.mount: Deactivated successfully.
Oct 02 12:10:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 02 12:10:05 compute-0 podman[288320]: 2025-10-02 12:10:05.384691838 +0000 UTC m=+0.762212632 container remove c016183996229098a7f136b68c148e758f023a1fa89815330d5ac57fd780d52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:10:05 compute-0 systemd[1]: libpod-conmon-c016183996229098a7f136b68c148e758f023a1fa89815330d5ac57fd780d52c.scope: Deactivated successfully.
Oct 02 12:10:05 compute-0 podman[288361]: 2025-10-02 12:10:05.53047502 +0000 UTC m=+0.022044814 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:10:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:10:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:05.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:10:05 compute-0 podman[288361]: 2025-10-02 12:10:05.706502306 +0000 UTC m=+0.198072070 container create adb673a04c90a0386eaec65947ed4c5e276295733a240da72306fa512b62382c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_galileo, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:10:05 compute-0 systemd[1]: Started libpod-conmon-adb673a04c90a0386eaec65947ed4c5e276295733a240da72306fa512b62382c.scope.
Oct 02 12:10:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:05.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c83619525b14bd45e04134595b3644f98baccba21e2f6cb6b0c10acf02d69cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c83619525b14bd45e04134595b3644f98baccba21e2f6cb6b0c10acf02d69cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c83619525b14bd45e04134595b3644f98baccba21e2f6cb6b0c10acf02d69cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c83619525b14bd45e04134595b3644f98baccba21e2f6cb6b0c10acf02d69cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2391065681' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/64052507' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:05 compute-0 podman[288361]: 2025-10-02 12:10:05.829128658 +0000 UTC m=+0.320698442 container init adb673a04c90a0386eaec65947ed4c5e276295733a240da72306fa512b62382c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_galileo, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 12:10:05 compute-0 podman[288361]: 2025-10-02 12:10:05.836087937 +0000 UTC m=+0.327657701 container start adb673a04c90a0386eaec65947ed4c5e276295733a240da72306fa512b62382c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_galileo, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:10:05 compute-0 podman[288361]: 2025-10-02 12:10:05.840879872 +0000 UTC m=+0.332449656 container attach adb673a04c90a0386eaec65947ed4c5e276295733a240da72306fa512b62382c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_galileo, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:10:05 compute-0 nova_compute[257802]: 2025-10-02 12:10:05.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:06 compute-0 nova_compute[257802]: 2025-10-02 12:10:06.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:06 compute-0 nova_compute[257802]: 2025-10-02 12:10:06.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]: {
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]:     "1": [
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]:         {
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]:             "devices": [
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]:                 "/dev/loop3"
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]:             ],
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]:             "lv_name": "ceph_lv0",
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]:             "lv_size": "7511998464",
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]:             "name": "ceph_lv0",
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]:             "tags": {
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]:                 "ceph.cluster_name": "ceph",
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]:                 "ceph.crush_device_class": "",
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]:                 "ceph.encrypted": "0",
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]:                 "ceph.osd_id": "1",
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]:                 "ceph.type": "block",
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]:                 "ceph.vdo": "0"
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]:             },
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]:             "type": "block",
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]:             "vg_name": "ceph_vg0"
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]:         }
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]:     ]
Oct 02 12:10:06 compute-0 xenodochial_galileo[288378]: }
Oct 02 12:10:06 compute-0 systemd[1]: libpod-adb673a04c90a0386eaec65947ed4c5e276295733a240da72306fa512b62382c.scope: Deactivated successfully.
Oct 02 12:10:06 compute-0 podman[288361]: 2025-10-02 12:10:06.590547049 +0000 UTC m=+1.082116813 container died adb673a04c90a0386eaec65947ed4c5e276295733a240da72306fa512b62382c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_galileo, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:10:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c83619525b14bd45e04134595b3644f98baccba21e2f6cb6b0c10acf02d69cb-merged.mount: Deactivated successfully.
Oct 02 12:10:06 compute-0 podman[288361]: 2025-10-02 12:10:06.695132403 +0000 UTC m=+1.186702157 container remove adb673a04c90a0386eaec65947ed4c5e276295733a240da72306fa512b62382c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_galileo, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 12:10:06 compute-0 systemd[1]: libpod-conmon-adb673a04c90a0386eaec65947ed4c5e276295733a240da72306fa512b62382c.scope: Deactivated successfully.
Oct 02 12:10:06 compute-0 sudo[288252]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:06 compute-0 sudo[288401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:06 compute-0 sudo[288401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:06 compute-0 sudo[288401]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:06 compute-0 ceph-mon[73607]: pgmap v1314: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 02 12:10:06 compute-0 sudo[288426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:10:06 compute-0 sudo[288426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:06 compute-0 sudo[288426]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:06 compute-0 sudo[288451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:06 compute-0 sudo[288451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:06 compute-0 sudo[288451]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:06 compute-0 sudo[288476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:10:06 compute-0 sudo[288476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Oct 02 12:10:07 compute-0 podman[288543]: 2025-10-02 12:10:07.32930046 +0000 UTC m=+0.085404760 container create b0a651cc1c8ce987f9d0eb85419338cee7003c871c4e128614eab354daa0faff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_davinci, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:10:07 compute-0 podman[288543]: 2025-10-02 12:10:07.265232698 +0000 UTC m=+0.021336998 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:10:07 compute-0 systemd[1]: Started libpod-conmon-b0a651cc1c8ce987f9d0eb85419338cee7003c871c4e128614eab354daa0faff.scope.
Oct 02 12:10:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:10:07 compute-0 podman[288543]: 2025-10-02 12:10:07.483351883 +0000 UTC m=+0.239456183 container init b0a651cc1c8ce987f9d0eb85419338cee7003c871c4e128614eab354daa0faff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:10:07 compute-0 podman[288543]: 2025-10-02 12:10:07.489760499 +0000 UTC m=+0.245864779 container start b0a651cc1c8ce987f9d0eb85419338cee7003c871c4e128614eab354daa0faff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 12:10:07 compute-0 romantic_davinci[288559]: 167 167
Oct 02 12:10:07 compute-0 systemd[1]: libpod-b0a651cc1c8ce987f9d0eb85419338cee7003c871c4e128614eab354daa0faff.scope: Deactivated successfully.
Oct 02 12:10:07 compute-0 conmon[288559]: conmon b0a651cc1c8ce987f9d0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b0a651cc1c8ce987f9d0eb85419338cee7003c871c4e128614eab354daa0faff.scope/container/memory.events
Oct 02 12:10:07 compute-0 podman[288543]: 2025-10-02 12:10:07.530766743 +0000 UTC m=+0.286871053 container attach b0a651cc1c8ce987f9d0eb85419338cee7003c871c4e128614eab354daa0faff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_davinci, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:10:07 compute-0 podman[288543]: 2025-10-02 12:10:07.532125396 +0000 UTC m=+0.288229666 container died b0a651cc1c8ce987f9d0eb85419338cee7003c871c4e128614eab354daa0faff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 12:10:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:10:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:07.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:10:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-c346d8cb09b072bfe863bbd9fa7a3cfa07b5cd4dba6cdae1e062dec549e0623b-merged.mount: Deactivated successfully.
Oct 02 12:10:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:07.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:07 compute-0 podman[288543]: 2025-10-02 12:10:07.973812938 +0000 UTC m=+0.729917258 container remove b0a651cc1c8ce987f9d0eb85419338cee7003c871c4e128614eab354daa0faff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_davinci, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:10:07 compute-0 systemd[1]: libpod-conmon-b0a651cc1c8ce987f9d0eb85419338cee7003c871c4e128614eab354daa0faff.scope: Deactivated successfully.
Oct 02 12:10:08 compute-0 podman[288585]: 2025-10-02 12:10:08.16902876 +0000 UTC m=+0.079578690 container create 3239492c3d77992906da85c5e140700443ba774c9c9fa631a52e0ba66e166903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 12:10:08 compute-0 podman[288585]: 2025-10-02 12:10:08.111640609 +0000 UTC m=+0.022190559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:10:08 compute-0 systemd[1]: Started libpod-conmon-3239492c3d77992906da85c5e140700443ba774c9c9fa631a52e0ba66e166903.scope.
Oct 02 12:10:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:10:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18c096c5815d810dcf21c482d9ca32889152425dea6484845e369f5d0194c6d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18c096c5815d810dcf21c482d9ca32889152425dea6484845e369f5d0194c6d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18c096c5815d810dcf21c482d9ca32889152425dea6484845e369f5d0194c6d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18c096c5815d810dcf21c482d9ca32889152425dea6484845e369f5d0194c6d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:08 compute-0 podman[288585]: 2025-10-02 12:10:08.272159949 +0000 UTC m=+0.182709919 container init 3239492c3d77992906da85c5e140700443ba774c9c9fa631a52e0ba66e166903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 12:10:08 compute-0 podman[288585]: 2025-10-02 12:10:08.279056475 +0000 UTC m=+0.189606395 container start 3239492c3d77992906da85c5e140700443ba774c9c9fa631a52e0ba66e166903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_neumann, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 12:10:08 compute-0 podman[288585]: 2025-10-02 12:10:08.299067111 +0000 UTC m=+0.209617261 container attach 3239492c3d77992906da85c5e140700443ba774c9c9fa631a52e0ba66e166903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 12:10:08 compute-0 ovn_controller[148183]: 2025-10-02T12:10:08Z|00171|binding|INFO|Releasing lport c7800c5b-63a7-46df-a2e3-8e9385f2b5c7 from this chassis (sb_readonly=0)
Oct 02 12:10:08 compute-0 nova_compute[257802]: 2025-10-02 12:10:08.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:08 compute-0 ceph-mon[73607]: pgmap v1315: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Oct 02 12:10:09 compute-0 wizardly_neumann[288602]: {
Oct 02 12:10:09 compute-0 wizardly_neumann[288602]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:10:09 compute-0 wizardly_neumann[288602]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:10:09 compute-0 wizardly_neumann[288602]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:10:09 compute-0 wizardly_neumann[288602]:         "osd_id": 1,
Oct 02 12:10:09 compute-0 wizardly_neumann[288602]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:10:09 compute-0 wizardly_neumann[288602]:         "type": "bluestore"
Oct 02 12:10:09 compute-0 wizardly_neumann[288602]:     }
Oct 02 12:10:09 compute-0 wizardly_neumann[288602]: }
Oct 02 12:10:09 compute-0 systemd[1]: libpod-3239492c3d77992906da85c5e140700443ba774c9c9fa631a52e0ba66e166903.scope: Deactivated successfully.
Oct 02 12:10:09 compute-0 podman[288585]: 2025-10-02 12:10:09.09549291 +0000 UTC m=+1.006042830 container died 3239492c3d77992906da85c5e140700443ba774c9c9fa631a52e0ba66e166903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_neumann, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 12:10:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1013 KiB/s wr, 54 op/s
Oct 02 12:10:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-18c096c5815d810dcf21c482d9ca32889152425dea6484845e369f5d0194c6d0-merged.mount: Deactivated successfully.
Oct 02 12:10:09 compute-0 podman[288585]: 2025-10-02 12:10:09.15698387 +0000 UTC m=+1.067533790 container remove 3239492c3d77992906da85c5e140700443ba774c9c9fa631a52e0ba66e166903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_neumann, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 12:10:09 compute-0 systemd[1]: libpod-conmon-3239492c3d77992906da85c5e140700443ba774c9c9fa631a52e0ba66e166903.scope: Deactivated successfully.
Oct 02 12:10:09 compute-0 podman[288624]: 2025-10-02 12:10:09.181447963 +0000 UTC m=+0.060865997 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 12:10:09 compute-0 sudo[288476]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:10:09 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:10:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:10:09 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:10:09 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev a49e571d-db99-4ef7-a14b-57333f0ab57a does not exist
Oct 02 12:10:09 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev dd7ad940-4de8-4b0b-916d-e046bd90db01 does not exist
Oct 02 12:10:09 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev bb05568a-abfe-4472-bcb3-a5545cee34e0 does not exist
Oct 02 12:10:09 compute-0 sudo[288654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:09 compute-0 sudo[288654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:09 compute-0 sudo[288654]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:09 compute-0 sudo[288679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:10:09 compute-0 sudo[288679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:09 compute-0 sudo[288679]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:10:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:09.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:10:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:09.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:10 compute-0 ceph-mon[73607]: pgmap v1316: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1013 KiB/s wr, 54 op/s
Oct 02 12:10:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:10:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:10:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3317979224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:10 compute-0 nova_compute[257802]: 2025-10-02 12:10:10.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 15 KiB/s wr, 7 op/s
Oct 02 12:10:11 compute-0 nova_compute[257802]: 2025-10-02 12:10:11.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:11.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:11.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:12 compute-0 ceph-mon[73607]: pgmap v1317: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 15 KiB/s wr, 7 op/s
Oct 02 12:10:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:10:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:10:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:10:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:10:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:10:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:10:12 compute-0 podman[288706]: 2025-10-02 12:10:12.916737199 +0000 UTC m=+0.056269164 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 02 12:10:12 compute-0 podman[288707]: 2025-10-02 12:10:12.920755287 +0000 UTC m=+0.055570928 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:10:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 600 KiB/s rd, 15 KiB/s wr, 30 op/s
Oct 02 12:10:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:10:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:13.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:10:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:10:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:13.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:10:13 compute-0 sudo[288746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:13 compute-0 sudo[288746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:13 compute-0 sudo[288746]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:13 compute-0 sudo[288771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:13 compute-0 sudo[288771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:13 compute-0 sudo[288771]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:14 compute-0 ceph-mon[73607]: pgmap v1318: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 600 KiB/s rd, 15 KiB/s wr, 30 op/s
Oct 02 12:10:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 174 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 312 KiB/s wr, 59 op/s
Oct 02 12:10:15 compute-0 nova_compute[257802]: 2025-10-02 12:10:15.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:10:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:15.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:10:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:10:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:15.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:10:15 compute-0 nova_compute[257802]: 2025-10-02 12:10:15.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:16 compute-0 ceph-mon[73607]: pgmap v1319: 305 pgs: 305 active+clean; 174 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 312 KiB/s wr, 59 op/s
Oct 02 12:10:16 compute-0 nova_compute[257802]: 2025-10-02 12:10:16.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:17 compute-0 podman[288798]: 2025-10-02 12:10:17.020899753 +0000 UTC m=+0.151236716 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 12:10:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 197 MiB data, 543 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 89 op/s
Oct 02 12:10:17 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2082683193' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:10:17 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2082683193' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:10:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:10:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:17.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:10:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:10:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:17.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:10:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:18 compute-0 ceph-mon[73607]: pgmap v1320: 305 pgs: 305 active+clean; 197 MiB data, 543 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 89 op/s
Oct 02 12:10:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2102354119' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3144016193' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:19 compute-0 nova_compute[257802]: 2025-10-02 12:10:19.114 2 DEBUG oslo_concurrency.lockutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Acquiring lock "2b0bce41-aada-43ea-8c21-a68eeb2720c0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:19 compute-0 nova_compute[257802]: 2025-10-02 12:10:19.114 2 DEBUG oslo_concurrency.lockutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Lock "2b0bce41-aada-43ea-8c21-a68eeb2720c0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Oct 02 12:10:19 compute-0 nova_compute[257802]: 2025-10-02 12:10:19.136 2 DEBUG nova.compute.manager [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:10:19 compute-0 nova_compute[257802]: 2025-10-02 12:10:19.219 2 DEBUG oslo_concurrency.lockutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:19 compute-0 nova_compute[257802]: 2025-10-02 12:10:19.219 2 DEBUG oslo_concurrency.lockutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:19 compute-0 nova_compute[257802]: 2025-10-02 12:10:19.229 2 DEBUG nova.virt.hardware [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:10:19 compute-0 nova_compute[257802]: 2025-10-02 12:10:19.230 2 INFO nova.compute.claims [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:10:19 compute-0 nova_compute[257802]: 2025-10-02 12:10:19.383 2 DEBUG oslo_concurrency.processutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:19.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:19.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:10:19 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/406407542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:19 compute-0 nova_compute[257802]: 2025-10-02 12:10:19.888 2 DEBUG oslo_concurrency.processutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:19 compute-0 nova_compute[257802]: 2025-10-02 12:10:19.893 2 DEBUG nova.compute.provider_tree [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:10:19 compute-0 nova_compute[257802]: 2025-10-02 12:10:19.926 2 DEBUG nova.scheduler.client.report [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:10:19 compute-0 nova_compute[257802]: 2025-10-02 12:10:19.953 2 DEBUG oslo_concurrency.lockutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:19 compute-0 nova_compute[257802]: 2025-10-02 12:10:19.954 2 DEBUG nova.compute.manager [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:10:20 compute-0 nova_compute[257802]: 2025-10-02 12:10:20.010 2 DEBUG nova.compute.manager [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:10:20 compute-0 nova_compute[257802]: 2025-10-02 12:10:20.011 2 DEBUG nova.network.neutron [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:10:20 compute-0 nova_compute[257802]: 2025-10-02 12:10:20.055 2 INFO nova.virt.libvirt.driver [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:10:20 compute-0 nova_compute[257802]: 2025-10-02 12:10:20.079 2 DEBUG nova.compute.manager [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:10:20 compute-0 nova_compute[257802]: 2025-10-02 12:10:20.206 2 DEBUG nova.compute.manager [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:10:20 compute-0 nova_compute[257802]: 2025-10-02 12:10:20.207 2 DEBUG nova.virt.libvirt.driver [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:10:20 compute-0 nova_compute[257802]: 2025-10-02 12:10:20.207 2 INFO nova.virt.libvirt.driver [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Creating image(s)
Oct 02 12:10:20 compute-0 nova_compute[257802]: 2025-10-02 12:10:20.237 2 DEBUG nova.storage.rbd_utils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] rbd image 2b0bce41-aada-43ea-8c21-a68eeb2720c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:20 compute-0 nova_compute[257802]: 2025-10-02 12:10:20.270 2 DEBUG nova.storage.rbd_utils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] rbd image 2b0bce41-aada-43ea-8c21-a68eeb2720c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:20 compute-0 nova_compute[257802]: 2025-10-02 12:10:20.297 2 DEBUG nova.storage.rbd_utils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] rbd image 2b0bce41-aada-43ea-8c21-a68eeb2720c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:20 compute-0 nova_compute[257802]: 2025-10-02 12:10:20.300 2 DEBUG oslo_concurrency.processutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:20 compute-0 nova_compute[257802]: 2025-10-02 12:10:20.320 2 DEBUG nova.policy [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3c2b867915b342b5acd8026e8fc9fe00', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '07abaa757bde49eead1d80ce844ec6ba', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:10:20 compute-0 nova_compute[257802]: 2025-10-02 12:10:20.357 2 DEBUG oslo_concurrency.processutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:20 compute-0 nova_compute[257802]: 2025-10-02 12:10:20.358 2 DEBUG oslo_concurrency.lockutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:20 compute-0 nova_compute[257802]: 2025-10-02 12:10:20.358 2 DEBUG oslo_concurrency.lockutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:20 compute-0 nova_compute[257802]: 2025-10-02 12:10:20.359 2 DEBUG oslo_concurrency.lockutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:20 compute-0 nova_compute[257802]: 2025-10-02 12:10:20.382 2 DEBUG nova.storage.rbd_utils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] rbd image 2b0bce41-aada-43ea-8c21-a68eeb2720c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:20 compute-0 nova_compute[257802]: 2025-10-02 12:10:20.386 2 DEBUG oslo_concurrency.processutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 2b0bce41-aada-43ea-8c21-a68eeb2720c0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:20 compute-0 ceph-mon[73607]: pgmap v1321: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Oct 02 12:10:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/406407542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/819940276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:20 compute-0 nova_compute[257802]: 2025-10-02 12:10:20.840 2 DEBUG oslo_concurrency.processutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 2b0bce41-aada-43ea-8c21-a68eeb2720c0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:20 compute-0 nova_compute[257802]: 2025-10-02 12:10:20.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:20 compute-0 nova_compute[257802]: 2025-10-02 12:10:20.950 2 DEBUG nova.storage.rbd_utils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] resizing rbd image 2b0bce41-aada-43ea-8c21-a68eeb2720c0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:10:21 compute-0 nova_compute[257802]: 2025-10-02 12:10:21.075 2 DEBUG nova.objects.instance [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Lazy-loading 'migration_context' on Instance uuid 2b0bce41-aada-43ea-8c21-a68eeb2720c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:10:21 compute-0 nova_compute[257802]: 2025-10-02 12:10:21.091 2 DEBUG nova.virt.libvirt.driver [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:10:21 compute-0 nova_compute[257802]: 2025-10-02 12:10:21.091 2 DEBUG nova.virt.libvirt.driver [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Ensure instance console log exists: /var/lib/nova/instances/2b0bce41-aada-43ea-8c21-a68eeb2720c0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:10:21 compute-0 nova_compute[257802]: 2025-10-02 12:10:21.092 2 DEBUG oslo_concurrency.lockutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:21 compute-0 nova_compute[257802]: 2025-10-02 12:10:21.092 2 DEBUG oslo_concurrency.lockutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:21 compute-0 nova_compute[257802]: 2025-10-02 12:10:21.092 2 DEBUG oslo_concurrency.lockutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 185 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Oct 02 12:10:21 compute-0 nova_compute[257802]: 2025-10-02 12:10:21.251 2 DEBUG nova.network.neutron [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Successfully created port: da20f9f6-37f3-4158-9959-4ce9ef867a87 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:10:21 compute-0 nova_compute[257802]: 2025-10-02 12:10:21.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:10:21 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2325337185' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:10:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:10:21 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2325337185' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:10:21 compute-0 nova_compute[257802]: 2025-10-02 12:10:21.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2325337185' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:10:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2325337185' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:10:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:10:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:21.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:10:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:21.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:22 compute-0 nova_compute[257802]: 2025-10-02 12:10:22.433 2 DEBUG oslo_concurrency.lockutils [None req-79cf0fa3-326d-4436-b676-445c358140e7 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Acquiring lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:22 compute-0 nova_compute[257802]: 2025-10-02 12:10:22.433 2 DEBUG oslo_concurrency.lockutils [None req-79cf0fa3-326d-4436-b676-445c358140e7 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:22 compute-0 nova_compute[257802]: 2025-10-02 12:10:22.433 2 DEBUG oslo_concurrency.lockutils [None req-79cf0fa3-326d-4436-b676-445c358140e7 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Acquiring lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:22 compute-0 nova_compute[257802]: 2025-10-02 12:10:22.434 2 DEBUG oslo_concurrency.lockutils [None req-79cf0fa3-326d-4436-b676-445c358140e7 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:22 compute-0 nova_compute[257802]: 2025-10-02 12:10:22.434 2 DEBUG oslo_concurrency.lockutils [None req-79cf0fa3-326d-4436-b676-445c358140e7 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:22 compute-0 nova_compute[257802]: 2025-10-02 12:10:22.435 2 INFO nova.compute.manager [None req-79cf0fa3-326d-4436-b676-445c358140e7 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Terminating instance
Oct 02 12:10:22 compute-0 nova_compute[257802]: 2025-10-02 12:10:22.436 2 DEBUG nova.compute.manager [None req-79cf0fa3-326d-4436-b676-445c358140e7 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:10:22 compute-0 kernel: tapd83c3e72-00 (unregistering): left promiscuous mode
Oct 02 12:10:22 compute-0 NetworkManager[44987]: <info>  [1759407022.5005] device (tapd83c3e72-00): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:10:22 compute-0 nova_compute[257802]: 2025-10-02 12:10:22.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:22 compute-0 ovn_controller[148183]: 2025-10-02T12:10:22Z|00172|binding|INFO|Releasing lport d83c3e72-0088-4c8f-8272-fa1f917210b2 from this chassis (sb_readonly=0)
Oct 02 12:10:22 compute-0 ovn_controller[148183]: 2025-10-02T12:10:22Z|00173|binding|INFO|Setting lport d83c3e72-0088-4c8f-8272-fa1f917210b2 down in Southbound
Oct 02 12:10:22 compute-0 ovn_controller[148183]: 2025-10-02T12:10:22Z|00174|binding|INFO|Removing iface tapd83c3e72-00 ovn-installed in OVS
Oct 02 12:10:22 compute-0 nova_compute[257802]: 2025-10-02 12:10:22.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:22.569 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:cb:b6 10.100.0.4'], port_security=['fa:16:3e:78:cb:b6 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e8828a06-3170-4302-a5f9-2e4ec8445ab2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-032ac993-ac13-41ba-b86e-9d12c97f66b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '95056cabad5b4f32916e46a46b10f677', 'neutron:revision_number': '4', 'neutron:security_group_ids': '39e57887-e909-48db-87be-7ebe79b07773', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c365843-a96d-4209-bcbf-c5f27ee9a0dd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=d83c3e72-0088-4c8f-8272-fa1f917210b2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:10:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:22.571 158261 INFO neutron.agent.ovn.metadata.agent [-] Port d83c3e72-0088-4c8f-8272-fa1f917210b2 in datapath 032ac993-ac13-41ba-b86e-9d12c97f66b8 unbound from our chassis
Oct 02 12:10:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:22.573 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 032ac993-ac13-41ba-b86e-9d12c97f66b8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:10:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:22.574 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[041e63ce-fc66-419d-a5bb-0ee760a102ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:22.575 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8 namespace which is not needed anymore
Oct 02 12:10:22 compute-0 nova_compute[257802]: 2025-10-02 12:10:22.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:22 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Oct 02 12:10:22 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d0000002b.scope: Consumed 14.798s CPU time.
Oct 02 12:10:22 compute-0 systemd-machined[211836]: Machine qemu-22-instance-0000002b terminated.
Oct 02 12:10:22 compute-0 nova_compute[257802]: 2025-10-02 12:10:22.668 2 INFO nova.virt.libvirt.driver [-] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Instance destroyed successfully.
Oct 02 12:10:22 compute-0 nova_compute[257802]: 2025-10-02 12:10:22.669 2 DEBUG nova.objects.instance [None req-79cf0fa3-326d-4436-b676-445c358140e7 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lazy-loading 'resources' on Instance uuid e8828a06-3170-4302-a5f9-2e4ec8445ab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:10:22 compute-0 nova_compute[257802]: 2025-10-02 12:10:22.681 2 DEBUG nova.virt.libvirt.vif [None req-79cf0fa3-326d-4436-b676-445c358140e7 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:09:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-1549047604',display_name='tempest-VolumesAdminNegativeTest-server-1549047604',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-1549047604',id=43,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHCHD7xEhwk17Lh2SZM6L31piL3y0EGZErKHOswnaztMMAN9IszfT3ybcRKpvtuB9DRWBAMXjj3t2j2HbYhw2ugNQv5jQz2Kx18w60DNdRIQ5S3zYPwVYZ7NxtzNxXCYvQ==',key_name='tempest-keypair-1991120415',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:09:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='95056cabad5b4f32916e46a46b10f677',ramdisk_id='',reservation_id='r-dvd5q095',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesAdminNegativeTest-494036391',owner_user_name='tempest-VolumesAdminNegativeTest-494036391-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:09:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5ca95bd9fa148c7948c062421011d76',uuid=e8828a06-3170-4302-a5f9-2e4ec8445ab2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d83c3e72-0088-4c8f-8272-fa1f917210b2", "address": "fa:16:3e:78:cb:b6", "network": {"id": "032ac993-ac13-41ba-b86e-9d12c97f66b8", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1769597820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95056cabad5b4f32916e46a46b10f677", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83c3e72-00", "ovs_interfaceid": "d83c3e72-0088-4c8f-8272-fa1f917210b2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:10:22 compute-0 nova_compute[257802]: 2025-10-02 12:10:22.682 2 DEBUG nova.network.os_vif_util [None req-79cf0fa3-326d-4436-b676-445c358140e7 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Converting VIF {"id": "d83c3e72-0088-4c8f-8272-fa1f917210b2", "address": "fa:16:3e:78:cb:b6", "network": {"id": "032ac993-ac13-41ba-b86e-9d12c97f66b8", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1769597820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95056cabad5b4f32916e46a46b10f677", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83c3e72-00", "ovs_interfaceid": "d83c3e72-0088-4c8f-8272-fa1f917210b2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:10:22 compute-0 nova_compute[257802]: 2025-10-02 12:10:22.682 2 DEBUG nova.network.os_vif_util [None req-79cf0fa3-326d-4436-b676-445c358140e7 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:78:cb:b6,bridge_name='br-int',has_traffic_filtering=True,id=d83c3e72-0088-4c8f-8272-fa1f917210b2,network=Network(032ac993-ac13-41ba-b86e-9d12c97f66b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd83c3e72-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:10:22 compute-0 nova_compute[257802]: 2025-10-02 12:10:22.683 2 DEBUG os_vif [None req-79cf0fa3-326d-4436-b676-445c358140e7 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:78:cb:b6,bridge_name='br-int',has_traffic_filtering=True,id=d83c3e72-0088-4c8f-8272-fa1f917210b2,network=Network(032ac993-ac13-41ba-b86e-9d12c97f66b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd83c3e72-00') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:10:22 compute-0 nova_compute[257802]: 2025-10-02 12:10:22.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:22 compute-0 nova_compute[257802]: 2025-10-02 12:10:22.685 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd83c3e72-00, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:22 compute-0 nova_compute[257802]: 2025-10-02 12:10:22.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:22 compute-0 nova_compute[257802]: 2025-10-02 12:10:22.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:10:22 compute-0 nova_compute[257802]: 2025-10-02 12:10:22.691 2 INFO os_vif [None req-79cf0fa3-326d-4436-b676-445c358140e7 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:78:cb:b6,bridge_name='br-int',has_traffic_filtering=True,id=d83c3e72-0088-4c8f-8272-fa1f917210b2,network=Network(032ac993-ac13-41ba-b86e-9d12c97f66b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd83c3e72-00')
Oct 02 12:10:22 compute-0 neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8[287521]: [NOTICE]   (287525) : haproxy version is 2.8.14-c23fe91
Oct 02 12:10:22 compute-0 neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8[287521]: [NOTICE]   (287525) : path to executable is /usr/sbin/haproxy
Oct 02 12:10:22 compute-0 neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8[287521]: [WARNING]  (287525) : Exiting Master process...
Oct 02 12:10:22 compute-0 neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8[287521]: [WARNING]  (287525) : Exiting Master process...
Oct 02 12:10:22 compute-0 neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8[287521]: [ALERT]    (287525) : Current worker (287527) exited with code 143 (Terminated)
Oct 02 12:10:22 compute-0 neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8[287521]: [WARNING]  (287525) : All workers exited. Exiting... (0)
Oct 02 12:10:22 compute-0 systemd[1]: libpod-00f31be34480606d2b93dc82b93919e1a6186e184e7084c075760404d089847d.scope: Deactivated successfully.
Oct 02 12:10:22 compute-0 podman[289045]: 2025-10-02 12:10:22.717725681 +0000 UTC m=+0.048329283 container died 00f31be34480606d2b93dc82b93919e1a6186e184e7084c075760404d089847d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 12:10:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-00f31be34480606d2b93dc82b93919e1a6186e184e7084c075760404d089847d-userdata-shm.mount: Deactivated successfully.
Oct 02 12:10:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1321aeefe559e706d72b3e61bd420d6aa01761b7c64dfc36906f8a940c621ac-merged.mount: Deactivated successfully.
Oct 02 12:10:22 compute-0 podman[289045]: 2025-10-02 12:10:22.758721244 +0000 UTC m=+0.089324856 container cleanup 00f31be34480606d2b93dc82b93919e1a6186e184e7084c075760404d089847d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 02 12:10:22 compute-0 systemd[1]: libpod-conmon-00f31be34480606d2b93dc82b93919e1a6186e184e7084c075760404d089847d.scope: Deactivated successfully.
Oct 02 12:10:22 compute-0 podman[289096]: 2025-10-02 12:10:22.81922065 +0000 UTC m=+0.040620935 container remove 00f31be34480606d2b93dc82b93919e1a6186e184e7084c075760404d089847d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:10:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:22.824 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[28a056b3-ccbf-45ef-8936-07a79521f7c3]: (4, ('Thu Oct  2 12:10:22 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8 (00f31be34480606d2b93dc82b93919e1a6186e184e7084c075760404d089847d)\n00f31be34480606d2b93dc82b93919e1a6186e184e7084c075760404d089847d\nThu Oct  2 12:10:22 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8 (00f31be34480606d2b93dc82b93919e1a6186e184e7084c075760404d089847d)\n00f31be34480606d2b93dc82b93919e1a6186e184e7084c075760404d089847d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:22.826 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ebaf1630-a2a5-47eb-b887-103707a6a173]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:22.827 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap032ac993-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:22 compute-0 kernel: tap032ac993-a0: left promiscuous mode
Oct 02 12:10:22 compute-0 nova_compute[257802]: 2025-10-02 12:10:22.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:22.832 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[24de2201-6013-4dd2-9fb2-4833a1826d46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:22 compute-0 nova_compute[257802]: 2025-10-02 12:10:22.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:22 compute-0 nova_compute[257802]: 2025-10-02 12:10:22.862 2 DEBUG nova.network.neutron [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Successfully updated port: da20f9f6-37f3-4158-9959-4ce9ef867a87 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:10:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:22.869 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e3edd4af-4370-4e90-ad8f-0b0c47a1bf17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:22.870 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d53e01a6-b936-4ee5-b87b-131f33782047]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:22 compute-0 nova_compute[257802]: 2025-10-02 12:10:22.878 2 DEBUG oslo_concurrency.lockutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Acquiring lock "refresh_cache-2b0bce41-aada-43ea-8c21-a68eeb2720c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:10:22 compute-0 nova_compute[257802]: 2025-10-02 12:10:22.879 2 DEBUG oslo_concurrency.lockutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Acquired lock "refresh_cache-2b0bce41-aada-43ea-8c21-a68eeb2720c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:10:22 compute-0 nova_compute[257802]: 2025-10-02 12:10:22.879 2 DEBUG nova.network.neutron [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:10:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:22.884 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[eab52ab6-5f0e-480f-a104-d2caddf2f16d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 502515, 'reachable_time': 25016, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289111, 'error': None, 'target': 'ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:22 compute-0 systemd[1]: run-netns-ovnmeta\x2d032ac993\x2dac13\x2d41ba\x2db86e\x2d9d12c97f66b8.mount: Deactivated successfully.
Oct 02 12:10:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:22.886 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:10:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:22.886 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[396d7fa3-eb92-4c4c-bf56-52866e86da9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:23 compute-0 nova_compute[257802]: 2025-10-02 12:10:23.064 2 DEBUG nova.network.neutron [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:10:23 compute-0 ceph-mon[73607]: pgmap v1322: 305 pgs: 305 active+clean; 185 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Oct 02 12:10:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 191 MiB data, 561 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 179 op/s
Oct 02 12:10:23 compute-0 nova_compute[257802]: 2025-10-02 12:10:23.329 2 DEBUG nova.compute.manager [req-002f5724-dfd8-49b3-bbcc-0be4b7d4c9dc req-d90e9cac-8554-4eb5-b109-93ca272bc96f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Received event network-changed-da20f9f6-37f3-4158-9959-4ce9ef867a87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:23 compute-0 nova_compute[257802]: 2025-10-02 12:10:23.329 2 DEBUG nova.compute.manager [req-002f5724-dfd8-49b3-bbcc-0be4b7d4c9dc req-d90e9cac-8554-4eb5-b109-93ca272bc96f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Refreshing instance network info cache due to event network-changed-da20f9f6-37f3-4158-9959-4ce9ef867a87. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:10:23 compute-0 nova_compute[257802]: 2025-10-02 12:10:23.329 2 DEBUG oslo_concurrency.lockutils [req-002f5724-dfd8-49b3-bbcc-0be4b7d4c9dc req-d90e9cac-8554-4eb5-b109-93ca272bc96f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-2b0bce41-aada-43ea-8c21-a68eeb2720c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:10:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:10:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:23.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:10:23 compute-0 nova_compute[257802]: 2025-10-02 12:10:23.743 2 INFO nova.virt.libvirt.driver [None req-79cf0fa3-326d-4436-b676-445c358140e7 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Deleting instance files /var/lib/nova/instances/e8828a06-3170-4302-a5f9-2e4ec8445ab2_del
Oct 02 12:10:23 compute-0 nova_compute[257802]: 2025-10-02 12:10:23.744 2 INFO nova.virt.libvirt.driver [None req-79cf0fa3-326d-4436-b676-445c358140e7 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Deletion of /var/lib/nova/instances/e8828a06-3170-4302-a5f9-2e4ec8445ab2_del complete
Oct 02 12:10:23 compute-0 nova_compute[257802]: 2025-10-02 12:10:23.813 2 INFO nova.compute.manager [None req-79cf0fa3-326d-4436-b676-445c358140e7 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Took 1.38 seconds to destroy the instance on the hypervisor.
Oct 02 12:10:23 compute-0 nova_compute[257802]: 2025-10-02 12:10:23.814 2 DEBUG oslo.service.loopingcall [None req-79cf0fa3-326d-4436-b676-445c358140e7 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:10:23 compute-0 nova_compute[257802]: 2025-10-02 12:10:23.815 2 DEBUG nova.compute.manager [-] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:10:23 compute-0 nova_compute[257802]: 2025-10-02 12:10:23.815 2 DEBUG nova.network.neutron [-] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:10:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:23.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.002 2 DEBUG nova.network.neutron [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Updating instance_info_cache with network_info: [{"id": "da20f9f6-37f3-4158-9959-4ce9ef867a87", "address": "fa:16:3e:c7:29:c2", "network": {"id": "cbcbfca3-dfd0-418b-81d4-5015e6d4350b", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1880668607-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07abaa757bde49eead1d80ce844ec6ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda20f9f6-37", "ovs_interfaceid": "da20f9f6-37f3-4158-9959-4ce9ef867a87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.045 2 DEBUG oslo_concurrency.lockutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Releasing lock "refresh_cache-2b0bce41-aada-43ea-8c21-a68eeb2720c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.046 2 DEBUG nova.compute.manager [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Instance network_info: |[{"id": "da20f9f6-37f3-4158-9959-4ce9ef867a87", "address": "fa:16:3e:c7:29:c2", "network": {"id": "cbcbfca3-dfd0-418b-81d4-5015e6d4350b", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1880668607-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07abaa757bde49eead1d80ce844ec6ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda20f9f6-37", "ovs_interfaceid": "da20f9f6-37f3-4158-9959-4ce9ef867a87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.047 2 DEBUG oslo_concurrency.lockutils [req-002f5724-dfd8-49b3-bbcc-0be4b7d4c9dc req-d90e9cac-8554-4eb5-b109-93ca272bc96f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-2b0bce41-aada-43ea-8c21-a68eeb2720c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.048 2 DEBUG nova.network.neutron [req-002f5724-dfd8-49b3-bbcc-0be4b7d4c9dc req-d90e9cac-8554-4eb5-b109-93ca272bc96f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Refreshing network info cache for port da20f9f6-37f3-4158-9959-4ce9ef867a87 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.052 2 DEBUG nova.virt.libvirt.driver [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Start _get_guest_xml network_info=[{"id": "da20f9f6-37f3-4158-9959-4ce9ef867a87", "address": "fa:16:3e:c7:29:c2", "network": {"id": "cbcbfca3-dfd0-418b-81d4-5015e6d4350b", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1880668607-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07abaa757bde49eead1d80ce844ec6ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda20f9f6-37", "ovs_interfaceid": "da20f9f6-37f3-4158-9959-4ce9ef867a87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.059 2 WARNING nova.virt.libvirt.driver [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.076 2 DEBUG nova.virt.libvirt.host [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.077 2 DEBUG nova.virt.libvirt.host [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.082 2 DEBUG nova.virt.libvirt.host [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.082 2 DEBUG nova.virt.libvirt.host [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.087 2 DEBUG nova.virt.libvirt.driver [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.087 2 DEBUG nova.virt.hardware [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.088 2 DEBUG nova.virt.hardware [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.090 2 DEBUG nova.virt.hardware [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.090 2 DEBUG nova.virt.hardware [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.091 2 DEBUG nova.virt.hardware [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.091 2 DEBUG nova.virt.hardware [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.092 2 DEBUG nova.virt.hardware [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.093 2 DEBUG nova.virt.hardware [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.093 2 DEBUG nova.virt.hardware [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.093 2 DEBUG nova.virt.hardware [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.094 2 DEBUG nova.virt.hardware [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.097 2 DEBUG oslo_concurrency.processutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.145 2 DEBUG nova.compute.manager [req-05b822fa-ff46-4f95-b477-0f2c0ab39088 req-793b8e42-5d4b-408e-81f9-9e3a9753b591 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Received event network-vif-unplugged-d83c3e72-0088-4c8f-8272-fa1f917210b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.147 2 DEBUG oslo_concurrency.lockutils [req-05b822fa-ff46-4f95-b477-0f2c0ab39088 req-793b8e42-5d4b-408e-81f9-9e3a9753b591 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.147 2 DEBUG oslo_concurrency.lockutils [req-05b822fa-ff46-4f95-b477-0f2c0ab39088 req-793b8e42-5d4b-408e-81f9-9e3a9753b591 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.148 2 DEBUG oslo_concurrency.lockutils [req-05b822fa-ff46-4f95-b477-0f2c0ab39088 req-793b8e42-5d4b-408e-81f9-9e3a9753b591 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.149 2 DEBUG nova.compute.manager [req-05b822fa-ff46-4f95-b477-0f2c0ab39088 req-793b8e42-5d4b-408e-81f9-9e3a9753b591 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] No waiting events found dispatching network-vif-unplugged-d83c3e72-0088-4c8f-8272-fa1f917210b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.149 2 DEBUG nova.compute.manager [req-05b822fa-ff46-4f95-b477-0f2c0ab39088 req-793b8e42-5d4b-408e-81f9-9e3a9753b591 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Received event network-vif-unplugged-d83c3e72-0088-4c8f-8272-fa1f917210b2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.150 2 DEBUG nova.compute.manager [req-05b822fa-ff46-4f95-b477-0f2c0ab39088 req-793b8e42-5d4b-408e-81f9-9e3a9753b591 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Received event network-vif-plugged-d83c3e72-0088-4c8f-8272-fa1f917210b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.151 2 DEBUG oslo_concurrency.lockutils [req-05b822fa-ff46-4f95-b477-0f2c0ab39088 req-793b8e42-5d4b-408e-81f9-9e3a9753b591 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.151 2 DEBUG oslo_concurrency.lockutils [req-05b822fa-ff46-4f95-b477-0f2c0ab39088 req-793b8e42-5d4b-408e-81f9-9e3a9753b591 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.151 2 DEBUG oslo_concurrency.lockutils [req-05b822fa-ff46-4f95-b477-0f2c0ab39088 req-793b8e42-5d4b-408e-81f9-9e3a9753b591 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.152 2 DEBUG nova.compute.manager [req-05b822fa-ff46-4f95-b477-0f2c0ab39088 req-793b8e42-5d4b-408e-81f9-9e3a9753b591 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] No waiting events found dispatching network-vif-plugged-d83c3e72-0088-4c8f-8272-fa1f917210b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.152 2 WARNING nova.compute.manager [req-05b822fa-ff46-4f95-b477-0f2c0ab39088 req-793b8e42-5d4b-408e-81f9-9e3a9753b591 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Received unexpected event network-vif-plugged-d83c3e72-0088-4c8f-8272-fa1f917210b2 for instance with vm_state active and task_state deleting.
Oct 02 12:10:24 compute-0 ceph-mon[73607]: pgmap v1323: 305 pgs: 305 active+clean; 191 MiB data, 561 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 179 op/s
Oct 02 12:10:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:10:24 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/776217206' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.539 2 DEBUG oslo_concurrency.processutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.562 2 DEBUG nova.storage.rbd_utils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] rbd image 2b0bce41-aada-43ea-8c21-a68eeb2720c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.565 2 DEBUG oslo_concurrency.processutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:10:24 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2105699232' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.992 2 DEBUG oslo_concurrency.processutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.994 2 DEBUG nova.virt.libvirt.vif [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:10:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-2083638909',display_name='tempest-ImagesOneServerTestJSON-server-2083638909',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-2083638909',id=47,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='07abaa757bde49eead1d80ce844ec6ba',ramdisk_id='',reservation_id='r-20t8i3nm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-1710294689',owner_user_name='tempest-ImagesOneServerTestJSON-1710294689-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:10:20Z,user_data=None,user_id='3c2b867915b342b5acd8026e8fc9fe00',uuid=2b0bce41-aada-43ea-8c21-a68eeb2720c0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da20f9f6-37f3-4158-9959-4ce9ef867a87", "address": "fa:16:3e:c7:29:c2", "network": {"id": "cbcbfca3-dfd0-418b-81d4-5015e6d4350b", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1880668607-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07abaa757bde49eead1d80ce844ec6ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda20f9f6-37", "ovs_interfaceid": "da20f9f6-37f3-4158-9959-4ce9ef867a87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.995 2 DEBUG nova.network.os_vif_util [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Converting VIF {"id": "da20f9f6-37f3-4158-9959-4ce9ef867a87", "address": "fa:16:3e:c7:29:c2", "network": {"id": "cbcbfca3-dfd0-418b-81d4-5015e6d4350b", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1880668607-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07abaa757bde49eead1d80ce844ec6ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda20f9f6-37", "ovs_interfaceid": "da20f9f6-37f3-4158-9959-4ce9ef867a87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.996 2 DEBUG nova.network.os_vif_util [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:29:c2,bridge_name='br-int',has_traffic_filtering=True,id=da20f9f6-37f3-4158-9959-4ce9ef867a87,network=Network(cbcbfca3-dfd0-418b-81d4-5015e6d4350b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda20f9f6-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:10:24 compute-0 nova_compute[257802]: 2025-10-02 12:10:24.997 2 DEBUG nova.objects.instance [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b0bce41-aada-43ea-8c21-a68eeb2720c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:10:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 167 MiB data, 561 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 193 op/s
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.211 2 DEBUG nova.virt.libvirt.driver [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:10:25 compute-0 nova_compute[257802]:   <uuid>2b0bce41-aada-43ea-8c21-a68eeb2720c0</uuid>
Oct 02 12:10:25 compute-0 nova_compute[257802]:   <name>instance-0000002f</name>
Oct 02 12:10:25 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:10:25 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:10:25 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:10:25 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:       <nova:name>tempest-ImagesOneServerTestJSON-server-2083638909</nova:name>
Oct 02 12:10:25 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:10:24</nova:creationTime>
Oct 02 12:10:25 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:10:25 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:10:25 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:10:25 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:10:25 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:10:25 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:10:25 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:10:25 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:10:25 compute-0 nova_compute[257802]:         <nova:user uuid="3c2b867915b342b5acd8026e8fc9fe00">tempest-ImagesOneServerTestJSON-1710294689-project-member</nova:user>
Oct 02 12:10:25 compute-0 nova_compute[257802]:         <nova:project uuid="07abaa757bde49eead1d80ce844ec6ba">tempest-ImagesOneServerTestJSON-1710294689</nova:project>
Oct 02 12:10:25 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:10:25 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:10:25 compute-0 nova_compute[257802]:         <nova:port uuid="da20f9f6-37f3-4158-9959-4ce9ef867a87">
Oct 02 12:10:25 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:10:25 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:10:25 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:10:25 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <system>
Oct 02 12:10:25 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:10:25 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:10:25 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:10:25 compute-0 nova_compute[257802]:       <entry name="serial">2b0bce41-aada-43ea-8c21-a68eeb2720c0</entry>
Oct 02 12:10:25 compute-0 nova_compute[257802]:       <entry name="uuid">2b0bce41-aada-43ea-8c21-a68eeb2720c0</entry>
Oct 02 12:10:25 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     </system>
Oct 02 12:10:25 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:10:25 compute-0 nova_compute[257802]:   <os>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:   </os>
Oct 02 12:10:25 compute-0 nova_compute[257802]:   <features>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:   </features>
Oct 02 12:10:25 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:10:25 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:10:25 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:10:25 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/2b0bce41-aada-43ea-8c21-a68eeb2720c0_disk">
Oct 02 12:10:25 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:       </source>
Oct 02 12:10:25 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:10:25 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:10:25 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:10:25 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/2b0bce41-aada-43ea-8c21-a68eeb2720c0_disk.config">
Oct 02 12:10:25 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:       </source>
Oct 02 12:10:25 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:10:25 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:10:25 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:10:25 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:c7:29:c2"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:       <target dev="tapda20f9f6-37"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:10:25 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/2b0bce41-aada-43ea-8c21-a68eeb2720c0/console.log" append="off"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <video>
Oct 02 12:10:25 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     </video>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:10:25 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:10:25 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:10:25 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:10:25 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:10:25 compute-0 nova_compute[257802]: </domain>
Oct 02 12:10:25 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.213 2 DEBUG nova.compute.manager [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Preparing to wait for external event network-vif-plugged-da20f9f6-37f3-4158-9959-4ce9ef867a87 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.213 2 DEBUG oslo_concurrency.lockutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Acquiring lock "2b0bce41-aada-43ea-8c21-a68eeb2720c0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.213 2 DEBUG oslo_concurrency.lockutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Lock "2b0bce41-aada-43ea-8c21-a68eeb2720c0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.214 2 DEBUG oslo_concurrency.lockutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Lock "2b0bce41-aada-43ea-8c21-a68eeb2720c0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.214 2 DEBUG nova.virt.libvirt.vif [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:10:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-2083638909',display_name='tempest-ImagesOneServerTestJSON-server-2083638909',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-2083638909',id=47,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='07abaa757bde49eead1d80ce844ec6ba',ramdisk_id='',reservation_id='r-20t8i3nm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-1710294689',owner_user_name='tempest-ImagesOneServerTestJSON-1710294689-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:10:20Z,user_data=None,user_id='3c2b867915b342b5acd8026e8fc9fe00',uuid=2b0bce41-aada-43ea-8c21-a68eeb2720c0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da20f9f6-37f3-4158-9959-4ce9ef867a87", "address": "fa:16:3e:c7:29:c2", "network": {"id": "cbcbfca3-dfd0-418b-81d4-5015e6d4350b", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1880668607-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07abaa757bde49eead1d80ce844ec6ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda20f9f6-37", "ovs_interfaceid": "da20f9f6-37f3-4158-9959-4ce9ef867a87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.215 2 DEBUG nova.network.os_vif_util [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Converting VIF {"id": "da20f9f6-37f3-4158-9959-4ce9ef867a87", "address": "fa:16:3e:c7:29:c2", "network": {"id": "cbcbfca3-dfd0-418b-81d4-5015e6d4350b", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1880668607-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07abaa757bde49eead1d80ce844ec6ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda20f9f6-37", "ovs_interfaceid": "da20f9f6-37f3-4158-9959-4ce9ef867a87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.216 2 DEBUG nova.network.os_vif_util [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:29:c2,bridge_name='br-int',has_traffic_filtering=True,id=da20f9f6-37f3-4158-9959-4ce9ef867a87,network=Network(cbcbfca3-dfd0-418b-81d4-5015e6d4350b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda20f9f6-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.216 2 DEBUG os_vif [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:29:c2,bridge_name='br-int',has_traffic_filtering=True,id=da20f9f6-37f3-4158-9959-4ce9ef867a87,network=Network(cbcbfca3-dfd0-418b-81d4-5015e6d4350b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda20f9f6-37') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.218 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.218 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.222 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda20f9f6-37, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.223 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapda20f9f6-37, col_values=(('external_ids', {'iface-id': 'da20f9f6-37f3-4158-9959-4ce9ef867a87', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:29:c2', 'vm-uuid': '2b0bce41-aada-43ea-8c21-a68eeb2720c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:25 compute-0 NetworkManager[44987]: <info>  [1759407025.2256] manager: (tapda20f9f6-37): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.227 2 DEBUG nova.network.neutron [-] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.234 2 INFO os_vif [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:29:c2,bridge_name='br-int',has_traffic_filtering=True,id=da20f9f6-37f3-4158-9959-4ce9ef867a87,network=Network(cbcbfca3-dfd0-418b-81d4-5015e6d4350b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda20f9f6-37')
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.258 2 INFO nova.compute.manager [-] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Took 1.44 seconds to deallocate network for instance.
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.324 2 DEBUG oslo_concurrency.lockutils [None req-79cf0fa3-326d-4436-b676-445c358140e7 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.325 2 DEBUG oslo_concurrency.lockutils [None req-79cf0fa3-326d-4436-b676-445c358140e7 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.342 2 DEBUG nova.virt.libvirt.driver [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.343 2 DEBUG nova.virt.libvirt.driver [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.343 2 DEBUG nova.virt.libvirt.driver [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] No VIF found with MAC fa:16:3e:c7:29:c2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.344 2 INFO nova.virt.libvirt.driver [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Using config drive
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.376 2 DEBUG nova.storage.rbd_utils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] rbd image 2b0bce41-aada-43ea-8c21-a68eeb2720c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.488 2 DEBUG oslo_concurrency.processutils [None req-79cf0fa3-326d-4436-b676-445c358140e7 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/776217206' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2105699232' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:25.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:10:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:25.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.936 2 INFO nova.virt.libvirt.driver [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Creating config drive at /var/lib/nova/instances/2b0bce41-aada-43ea-8c21-a68eeb2720c0/disk.config
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.945 2 DEBUG oslo_concurrency.processutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2b0bce41-aada-43ea-8c21-a68eeb2720c0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8_5gtvb1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:10:25 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2039167010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.979 2 DEBUG oslo_concurrency.processutils [None req-79cf0fa3-326d-4436-b676-445c358140e7 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:25 compute-0 nova_compute[257802]: 2025-10-02 12:10:25.987 2 DEBUG nova.compute.provider_tree [None req-79cf0fa3-326d-4436-b676-445c358140e7 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:10:26 compute-0 nova_compute[257802]: 2025-10-02 12:10:26.020 2 DEBUG nova.scheduler.client.report [None req-79cf0fa3-326d-4436-b676-445c358140e7 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:10:26 compute-0 nova_compute[257802]: 2025-10-02 12:10:26.095 2 DEBUG oslo_concurrency.processutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2b0bce41-aada-43ea-8c21-a68eeb2720c0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8_5gtvb1" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:26 compute-0 nova_compute[257802]: 2025-10-02 12:10:26.136 2 DEBUG nova.storage.rbd_utils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] rbd image 2b0bce41-aada-43ea-8c21-a68eeb2720c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:26 compute-0 nova_compute[257802]: 2025-10-02 12:10:26.139 2 DEBUG oslo_concurrency.processutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2b0bce41-aada-43ea-8c21-a68eeb2720c0/disk.config 2b0bce41-aada-43ea-8c21-a68eeb2720c0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:26 compute-0 nova_compute[257802]: 2025-10-02 12:10:26.162 2 DEBUG oslo_concurrency.lockutils [None req-79cf0fa3-326d-4436-b676-445c358140e7 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:26 compute-0 nova_compute[257802]: 2025-10-02 12:10:26.204 2 INFO nova.scheduler.client.report [None req-79cf0fa3-326d-4436-b676-445c358140e7 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Deleted allocations for instance e8828a06-3170-4302-a5f9-2e4ec8445ab2
Oct 02 12:10:26 compute-0 nova_compute[257802]: 2025-10-02 12:10:26.256 2 DEBUG nova.compute.manager [req-7b146de3-2c31-476c-8e7f-eed5fbca9e1c req-5c390df8-487d-4206-bac0-83598536b940 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Received event network-vif-deleted-d83c3e72-0088-4c8f-8272-fa1f917210b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:26 compute-0 nova_compute[257802]: 2025-10-02 12:10:26.268 2 DEBUG oslo_concurrency.lockutils [None req-79cf0fa3-326d-4436-b676-445c358140e7 c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "e8828a06-3170-4302-a5f9-2e4ec8445ab2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:26 compute-0 nova_compute[257802]: 2025-10-02 12:10:26.328 2 DEBUG oslo_concurrency.processutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2b0bce41-aada-43ea-8c21-a68eeb2720c0/disk.config 2b0bce41-aada-43ea-8c21-a68eeb2720c0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:26 compute-0 nova_compute[257802]: 2025-10-02 12:10:26.329 2 INFO nova.virt.libvirt.driver [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Deleting local config drive /var/lib/nova/instances/2b0bce41-aada-43ea-8c21-a68eeb2720c0/disk.config because it was imported into RBD.
Oct 02 12:10:26 compute-0 kernel: tapda20f9f6-37: entered promiscuous mode
Oct 02 12:10:26 compute-0 NetworkManager[44987]: <info>  [1759407026.3846] manager: (tapda20f9f6-37): new Tun device (/org/freedesktop/NetworkManager/Devices/83)
Oct 02 12:10:26 compute-0 ovn_controller[148183]: 2025-10-02T12:10:26Z|00175|binding|INFO|Claiming lport da20f9f6-37f3-4158-9959-4ce9ef867a87 for this chassis.
Oct 02 12:10:26 compute-0 ovn_controller[148183]: 2025-10-02T12:10:26Z|00176|binding|INFO|da20f9f6-37f3-4158-9959-4ce9ef867a87: Claiming fa:16:3e:c7:29:c2 10.100.0.9
Oct 02 12:10:26 compute-0 nova_compute[257802]: 2025-10-02 12:10:26.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.398 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:29:c2 10.100.0.9'], port_security=['fa:16:3e:c7:29:c2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2b0bce41-aada-43ea-8c21-a68eeb2720c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cbcbfca3-dfd0-418b-81d4-5015e6d4350b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '07abaa757bde49eead1d80ce844ec6ba', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bfd02b02-57c1-43db-a4de-9009b8d1e33b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=149973b7-bcb2-4360-8df8-fdb27dc8cf49, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=da20f9f6-37f3-4158-9959-4ce9ef867a87) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.400 158261 INFO neutron.agent.ovn.metadata.agent [-] Port da20f9f6-37f3-4158-9959-4ce9ef867a87 in datapath cbcbfca3-dfd0-418b-81d4-5015e6d4350b bound to our chassis
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.401 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cbcbfca3-dfd0-418b-81d4-5015e6d4350b
Oct 02 12:10:26 compute-0 ovn_controller[148183]: 2025-10-02T12:10:26Z|00177|binding|INFO|Setting lport da20f9f6-37f3-4158-9959-4ce9ef867a87 ovn-installed in OVS
Oct 02 12:10:26 compute-0 ovn_controller[148183]: 2025-10-02T12:10:26Z|00178|binding|INFO|Setting lport da20f9f6-37f3-4158-9959-4ce9ef867a87 up in Southbound
Oct 02 12:10:26 compute-0 nova_compute[257802]: 2025-10-02 12:10:26.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:26 compute-0 nova_compute[257802]: 2025-10-02 12:10:26.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.411 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[94a61b42-8054-40dd-93cb-7184e06029f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.412 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcbcbfca3-d1 in ovnmeta-cbcbfca3-dfd0-418b-81d4-5015e6d4350b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.414 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcbcbfca3-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.414 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6fb71b8a-5065-4753-8682-9033a15c6d1c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.415 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[67f1c8c1-dfaf-41ee-8d92-07bfa3eb8112]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:26 compute-0 systemd-udevd[289274]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:10:26 compute-0 systemd-machined[211836]: New machine qemu-23-instance-0000002f.
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.426 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[b162aa6e-d351-4a18-b85c-7b8da49aacb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:26 compute-0 NetworkManager[44987]: <info>  [1759407026.4329] device (tapda20f9f6-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:10:26 compute-0 NetworkManager[44987]: <info>  [1759407026.4339] device (tapda20f9f6-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:10:26 compute-0 systemd[1]: Started Virtual Machine qemu-23-instance-0000002f.
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.444 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d4df5e57-b426-488a-87a7-48011b259bfc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.469 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[5a6ca6f1-3a81-4f7f-9687-ea43e1bc4341]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:26 compute-0 NetworkManager[44987]: <info>  [1759407026.4748] manager: (tapcbcbfca3-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/84)
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.473 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[06602658-554a-4a63-8591-f2d317ad3d86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.513 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[790e3975-ff18-4f48-b7cf-7cec61dae14a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.523 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[bfabf6d9-3d27-481b-acc2-939dab3ff383]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:26 compute-0 NetworkManager[44987]: <info>  [1759407026.5554] device (tapcbcbfca3-d0): carrier: link connected
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.565 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[aa1599a5-8917-4113-b06e-72eea2e17e5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.583 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[46912800-30ff-4e7f-bee5-d86c7e68c17f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcbcbfca3-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:eb:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 51], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509418, 'reachable_time': 28430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289306, 'error': None, 'target': 'ovnmeta-cbcbfca3-dfd0-418b-81d4-5015e6d4350b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.597 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[662e8de6-2fa4-4ff2-baa4-ba5e321df7a1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe68:ebe6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 509418, 'tstamp': 509418}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289307, 'error': None, 'target': 'ovnmeta-cbcbfca3-dfd0-418b-81d4-5015e6d4350b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:26 compute-0 ceph-mon[73607]: pgmap v1324: 305 pgs: 305 active+clean; 167 MiB data, 561 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 193 op/s
Oct 02 12:10:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2039167010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.613 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[599f1259-75c6-4d26-8233-c0e5dfd0b5e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcbcbfca3-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:eb:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 51], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509418, 'reachable_time': 28430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289308, 'error': None, 'target': 'ovnmeta-cbcbfca3-dfd0-418b-81d4-5015e6d4350b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.641 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0b8c1293-1871-4c21-8c44-0fc091424e24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.691 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5e4b048a-9326-4ba7-b304-0a900824ffe7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.693 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcbcbfca3-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.693 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.694 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcbcbfca3-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:26 compute-0 nova_compute[257802]: 2025-10-02 12:10:26.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:26 compute-0 NetworkManager[44987]: <info>  [1759407026.6964] manager: (tapcbcbfca3-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Oct 02 12:10:26 compute-0 kernel: tapcbcbfca3-d0: entered promiscuous mode
Oct 02 12:10:26 compute-0 nova_compute[257802]: 2025-10-02 12:10:26.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.699 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcbcbfca3-d0, col_values=(('external_ids', {'iface-id': 'f72e6ad3-8ad7-4a67-8c93-4692070fce56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:26 compute-0 nova_compute[257802]: 2025-10-02 12:10:26.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:26 compute-0 ovn_controller[148183]: 2025-10-02T12:10:26Z|00179|binding|INFO|Releasing lport f72e6ad3-8ad7-4a67-8c93-4692070fce56 from this chassis (sb_readonly=0)
Oct 02 12:10:26 compute-0 nova_compute[257802]: 2025-10-02 12:10:26.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:26 compute-0 nova_compute[257802]: 2025-10-02 12:10:26.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.723 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cbcbfca3-dfd0-418b-81d4-5015e6d4350b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cbcbfca3-dfd0-418b-81d4-5015e6d4350b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.724 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ddb57965-2582-4603-a837-d49d12a49972]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.724 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-cbcbfca3-dfd0-418b-81d4-5015e6d4350b
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/cbcbfca3-dfd0-418b-81d4-5015e6d4350b.pid.haproxy
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID cbcbfca3-dfd0-418b-81d4-5015e6d4350b
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.725 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cbcbfca3-dfd0-418b-81d4-5015e6d4350b', 'env', 'PROCESS_TAG=haproxy-cbcbfca3-dfd0-418b-81d4-5015e6d4350b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cbcbfca3-dfd0-418b-81d4-5015e6d4350b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:10:26 compute-0 nova_compute[257802]: 2025-10-02 12:10:26.888 2 DEBUG nova.compute.manager [req-3ce7e299-77ef-4c2d-951b-8266788cfe7f req-8875af7e-dbe3-4f38-a7f3-1a437cfa6328 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Received event network-vif-plugged-da20f9f6-37f3-4158-9959-4ce9ef867a87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:26 compute-0 nova_compute[257802]: 2025-10-02 12:10:26.889 2 DEBUG oslo_concurrency.lockutils [req-3ce7e299-77ef-4c2d-951b-8266788cfe7f req-8875af7e-dbe3-4f38-a7f3-1a437cfa6328 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "2b0bce41-aada-43ea-8c21-a68eeb2720c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:26 compute-0 nova_compute[257802]: 2025-10-02 12:10:26.889 2 DEBUG oslo_concurrency.lockutils [req-3ce7e299-77ef-4c2d-951b-8266788cfe7f req-8875af7e-dbe3-4f38-a7f3-1a437cfa6328 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2b0bce41-aada-43ea-8c21-a68eeb2720c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:26 compute-0 nova_compute[257802]: 2025-10-02 12:10:26.889 2 DEBUG oslo_concurrency.lockutils [req-3ce7e299-77ef-4c2d-951b-8266788cfe7f req-8875af7e-dbe3-4f38-a7f3-1a437cfa6328 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2b0bce41-aada-43ea-8c21-a68eeb2720c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:26 compute-0 nova_compute[257802]: 2025-10-02 12:10:26.890 2 DEBUG nova.compute.manager [req-3ce7e299-77ef-4c2d-951b-8266788cfe7f req-8875af7e-dbe3-4f38-a7f3-1a437cfa6328 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Processing event network-vif-plugged-da20f9f6-37f3-4158-9959-4ce9ef867a87 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.928 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.930 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:26.930 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:26 compute-0 nova_compute[257802]: 2025-10-02 12:10:26.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:27 compute-0 nova_compute[257802]: 2025-10-02 12:10:27.053 2 DEBUG nova.network.neutron [req-002f5724-dfd8-49b3-bbcc-0be4b7d4c9dc req-d90e9cac-8554-4eb5-b109-93ca272bc96f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Updated VIF entry in instance network info cache for port da20f9f6-37f3-4158-9959-4ce9ef867a87. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:10:27 compute-0 nova_compute[257802]: 2025-10-02 12:10:27.054 2 DEBUG nova.network.neutron [req-002f5724-dfd8-49b3-bbcc-0be4b7d4c9dc req-d90e9cac-8554-4eb5-b109-93ca272bc96f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Updating instance_info_cache with network_info: [{"id": "da20f9f6-37f3-4158-9959-4ce9ef867a87", "address": "fa:16:3e:c7:29:c2", "network": {"id": "cbcbfca3-dfd0-418b-81d4-5015e6d4350b", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1880668607-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07abaa757bde49eead1d80ce844ec6ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda20f9f6-37", "ovs_interfaceid": "da20f9f6-37f3-4158-9959-4ce9ef867a87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:10:27 compute-0 nova_compute[257802]: 2025-10-02 12:10:27.073 2 DEBUG oslo_concurrency.lockutils [req-002f5724-dfd8-49b3-bbcc-0be4b7d4c9dc req-d90e9cac-8554-4eb5-b109-93ca272bc96f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-2b0bce41-aada-43ea-8c21-a68eeb2720c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:10:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 165 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.3 MiB/s wr, 214 op/s
Oct 02 12:10:27 compute-0 podman[289340]: 2025-10-02 12:10:27.164891147 +0000 UTC m=+0.127324666 container create 83d60f55c3b68be6b61b8862d05cb16fddde25be05f1803d64cc50754cb6acec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbcbfca3-dfd0-418b-81d4-5015e6d4350b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 12:10:27 compute-0 podman[289340]: 2025-10-02 12:10:27.076638909 +0000 UTC m=+0.039072448 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:10:27 compute-0 systemd[1]: Started libpod-conmon-83d60f55c3b68be6b61b8862d05cb16fddde25be05f1803d64cc50754cb6acec.scope.
Oct 02 12:10:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:10:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/173846d4a275f10ae872f7937d3139aae156895abf5b2f8b61766973e4b5c742/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:27 compute-0 podman[289340]: 2025-10-02 12:10:27.259405047 +0000 UTC m=+0.221838556 container init 83d60f55c3b68be6b61b8862d05cb16fddde25be05f1803d64cc50754cb6acec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbcbfca3-dfd0-418b-81d4-5015e6d4350b, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:10:27 compute-0 podman[289340]: 2025-10-02 12:10:27.266282954 +0000 UTC m=+0.228716443 container start 83d60f55c3b68be6b61b8862d05cb16fddde25be05f1803d64cc50754cb6acec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbcbfca3-dfd0-418b-81d4-5015e6d4350b, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 12:10:27 compute-0 neutron-haproxy-ovnmeta-cbcbfca3-dfd0-418b-81d4-5015e6d4350b[289356]: [NOTICE]   (289360) : New worker (289376) forked
Oct 02 12:10:27 compute-0 neutron-haproxy-ovnmeta-cbcbfca3-dfd0-418b-81d4-5015e6d4350b[289356]: [NOTICE]   (289360) : Loading success.
Oct 02 12:10:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:27.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:27.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:27 compute-0 nova_compute[257802]: 2025-10-02 12:10:27.869 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407027.869135, 2b0bce41-aada-43ea-8c21-a68eeb2720c0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:10:27 compute-0 nova_compute[257802]: 2025-10-02 12:10:27.870 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] VM Started (Lifecycle Event)
Oct 02 12:10:27 compute-0 nova_compute[257802]: 2025-10-02 12:10:27.872 2 DEBUG nova.compute.manager [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:10:27 compute-0 nova_compute[257802]: 2025-10-02 12:10:27.874 2 DEBUG nova.virt.libvirt.driver [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:10:27 compute-0 nova_compute[257802]: 2025-10-02 12:10:27.878 2 INFO nova.virt.libvirt.driver [-] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Instance spawned successfully.
Oct 02 12:10:27 compute-0 nova_compute[257802]: 2025-10-02 12:10:27.878 2 DEBUG nova.virt.libvirt.driver [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:10:27 compute-0 nova_compute[257802]: 2025-10-02 12:10:27.906 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:10:27 compute-0 nova_compute[257802]: 2025-10-02 12:10:27.912 2 DEBUG nova.virt.libvirt.driver [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:27 compute-0 nova_compute[257802]: 2025-10-02 12:10:27.912 2 DEBUG nova.virt.libvirt.driver [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:27 compute-0 nova_compute[257802]: 2025-10-02 12:10:27.913 2 DEBUG nova.virt.libvirt.driver [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:27 compute-0 nova_compute[257802]: 2025-10-02 12:10:27.913 2 DEBUG nova.virt.libvirt.driver [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:27 compute-0 nova_compute[257802]: 2025-10-02 12:10:27.913 2 DEBUG nova.virt.libvirt.driver [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:27 compute-0 nova_compute[257802]: 2025-10-02 12:10:27.914 2 DEBUG nova.virt.libvirt.driver [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:27 compute-0 nova_compute[257802]: 2025-10-02 12:10:27.917 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:10:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:27 compute-0 nova_compute[257802]: 2025-10-02 12:10:27.957 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:10:27 compute-0 nova_compute[257802]: 2025-10-02 12:10:27.957 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407027.8693259, 2b0bce41-aada-43ea-8c21-a68eeb2720c0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:10:27 compute-0 nova_compute[257802]: 2025-10-02 12:10:27.957 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] VM Paused (Lifecycle Event)
Oct 02 12:10:27 compute-0 nova_compute[257802]: 2025-10-02 12:10:27.978 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:10:27 compute-0 nova_compute[257802]: 2025-10-02 12:10:27.980 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407027.8744636, 2b0bce41-aada-43ea-8c21-a68eeb2720c0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:10:27 compute-0 nova_compute[257802]: 2025-10-02 12:10:27.981 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] VM Resumed (Lifecycle Event)
Oct 02 12:10:27 compute-0 nova_compute[257802]: 2025-10-02 12:10:27.988 2 INFO nova.compute.manager [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Took 7.78 seconds to spawn the instance on the hypervisor.
Oct 02 12:10:27 compute-0 nova_compute[257802]: 2025-10-02 12:10:27.988 2 DEBUG nova.compute.manager [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:10:27 compute-0 nova_compute[257802]: 2025-10-02 12:10:27.999 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:10:28 compute-0 nova_compute[257802]: 2025-10-02 12:10:28.001 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:10:28 compute-0 nova_compute[257802]: 2025-10-02 12:10:28.048 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:10:28 compute-0 nova_compute[257802]: 2025-10-02 12:10:28.069 2 INFO nova.compute.manager [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Took 8.89 seconds to build instance.
Oct 02 12:10:28 compute-0 nova_compute[257802]: 2025-10-02 12:10:28.088 2 DEBUG oslo_concurrency.lockutils [None req-4fe111f2-8df4-4dd1-be6c-2732b5f5e433 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Lock "2b0bce41-aada-43ea-8c21-a68eeb2720c0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.974s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:28 compute-0 ceph-mon[73607]: pgmap v1325: 305 pgs: 305 active+clean; 165 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.3 MiB/s wr, 214 op/s
Oct 02 12:10:28 compute-0 nova_compute[257802]: 2025-10-02 12:10:28.994 2 DEBUG nova.compute.manager [req-cae87f4f-c946-4494-b8cf-284bcbba586d req-9d4a376d-9ade-4abc-899b-54ae621f4c80 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Received event network-vif-plugged-da20f9f6-37f3-4158-9959-4ce9ef867a87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:28 compute-0 nova_compute[257802]: 2025-10-02 12:10:28.994 2 DEBUG oslo_concurrency.lockutils [req-cae87f4f-c946-4494-b8cf-284bcbba586d req-9d4a376d-9ade-4abc-899b-54ae621f4c80 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "2b0bce41-aada-43ea-8c21-a68eeb2720c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:28 compute-0 nova_compute[257802]: 2025-10-02 12:10:28.994 2 DEBUG oslo_concurrency.lockutils [req-cae87f4f-c946-4494-b8cf-284bcbba586d req-9d4a376d-9ade-4abc-899b-54ae621f4c80 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2b0bce41-aada-43ea-8c21-a68eeb2720c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:28 compute-0 nova_compute[257802]: 2025-10-02 12:10:28.995 2 DEBUG oslo_concurrency.lockutils [req-cae87f4f-c946-4494-b8cf-284bcbba586d req-9d4a376d-9ade-4abc-899b-54ae621f4c80 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2b0bce41-aada-43ea-8c21-a68eeb2720c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:28 compute-0 nova_compute[257802]: 2025-10-02 12:10:28.995 2 DEBUG nova.compute.manager [req-cae87f4f-c946-4494-b8cf-284bcbba586d req-9d4a376d-9ade-4abc-899b-54ae621f4c80 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] No waiting events found dispatching network-vif-plugged-da20f9f6-37f3-4158-9959-4ce9ef867a87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:10:28 compute-0 nova_compute[257802]: 2025-10-02 12:10:28.995 2 WARNING nova.compute.manager [req-cae87f4f-c946-4494-b8cf-284bcbba586d req-9d4a376d-9ade-4abc-899b-54ae621f4c80 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Received unexpected event network-vif-plugged-da20f9f6-37f3-4158-9959-4ce9ef867a87 for instance with vm_state active and task_state None.
Oct 02 12:10:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 134 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 204 op/s
Oct 02 12:10:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:29.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:29.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:30 compute-0 nova_compute[257802]: 2025-10-02 12:10:30.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:30 compute-0 nova_compute[257802]: 2025-10-02 12:10:30.387 2 DEBUG nova.compute.manager [None req-273e25a3-222a-444e-a60f-af0ee0d8f697 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:10:30 compute-0 nova_compute[257802]: 2025-10-02 12:10:30.445 2 INFO nova.compute.manager [None req-273e25a3-222a-444e-a60f-af0ee0d8f697 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] instance snapshotting
Oct 02 12:10:30 compute-0 ceph-mon[73607]: pgmap v1326: 305 pgs: 305 active+clean; 134 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 204 op/s
Oct 02 12:10:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/962521600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:30 compute-0 nova_compute[257802]: 2025-10-02 12:10:30.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:30 compute-0 nova_compute[257802]: 2025-10-02 12:10:30.965 2 INFO nova.virt.libvirt.driver [None req-273e25a3-222a-444e-a60f-af0ee0d8f697 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Beginning live snapshot process
Oct 02 12:10:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 134 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 204 op/s
Oct 02 12:10:31 compute-0 nova_compute[257802]: 2025-10-02 12:10:31.176 2 DEBUG nova.virt.libvirt.imagebackend [None req-273e25a3-222a-444e-a60f-af0ee0d8f697 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] No parent info for c2d0c2bc-fe21-4689-86ae-d6728c15874c; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 12:10:31 compute-0 nova_compute[257802]: 2025-10-02 12:10:31.356 2 DEBUG nova.storage.rbd_utils [None req-273e25a3-222a-444e-a60f-af0ee0d8f697 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] creating snapshot(5b3fe4b9ccd44f148c50e843f2efc1bf) on rbd image(2b0bce41-aada-43ea-8c21-a68eeb2720c0_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:10:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:10:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:31.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:10:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Oct 02 12:10:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Oct 02 12:10:31 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Oct 02 12:10:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:31.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:31 compute-0 nova_compute[257802]: 2025-10-02 12:10:31.892 2 DEBUG nova.storage.rbd_utils [None req-273e25a3-222a-444e-a60f-af0ee0d8f697 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] cloning vms/2b0bce41-aada-43ea-8c21-a68eeb2720c0_disk@5b3fe4b9ccd44f148c50e843f2efc1bf to images/6cb20915-4f3a-4331-8e5e-9cacc75a0090 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:10:32 compute-0 nova_compute[257802]: 2025-10-02 12:10:32.036 2 DEBUG nova.storage.rbd_utils [None req-273e25a3-222a-444e-a60f-af0ee0d8f697 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] flattening images/6cb20915-4f3a-4331-8e5e-9cacc75a0090 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 12:10:32 compute-0 nova_compute[257802]: 2025-10-02 12:10:32.348 2 DEBUG nova.storage.rbd_utils [None req-273e25a3-222a-444e-a60f-af0ee0d8f697 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] removing snapshot(5b3fe4b9ccd44f148c50e843f2efc1bf) on rbd image(2b0bce41-aada-43ea-8c21-a68eeb2720c0_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:10:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Oct 02 12:10:32 compute-0 ceph-mon[73607]: pgmap v1327: 305 pgs: 305 active+clean; 134 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 204 op/s
Oct 02 12:10:32 compute-0 ceph-mon[73607]: osdmap e171: 3 total, 3 up, 3 in
Oct 02 12:10:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Oct 02 12:10:32 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Oct 02 12:10:32 compute-0 nova_compute[257802]: 2025-10-02 12:10:32.878 2 DEBUG nova.storage.rbd_utils [None req-273e25a3-222a-444e-a60f-af0ee0d8f697 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] creating snapshot(snap) on rbd image(6cb20915-4f3a-4331-8e5e-9cacc75a0090) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:10:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 161 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 3.1 MiB/s wr, 255 op/s
Oct 02 12:10:33 compute-0 ovn_controller[148183]: 2025-10-02T12:10:33Z|00180|binding|INFO|Releasing lport f72e6ad3-8ad7-4a67-8c93-4692070fce56 from this chassis (sb_readonly=0)
Oct 02 12:10:33 compute-0 nova_compute[257802]: 2025-10-02 12:10:33.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:33 compute-0 ovn_controller[148183]: 2025-10-02T12:10:33Z|00181|binding|INFO|Releasing lport f72e6ad3-8ad7-4a67-8c93-4692070fce56 from this chassis (sb_readonly=0)
Oct 02 12:10:33 compute-0 nova_compute[257802]: 2025-10-02 12:10:33.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:33.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:33.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Oct 02 12:10:33 compute-0 ceph-mon[73607]: osdmap e172: 3 total, 3 up, 3 in
Oct 02 12:10:33 compute-0 sudo[289558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:33 compute-0 sudo[289558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:33 compute-0 sudo[289558]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:34 compute-0 sudo[289583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:34 compute-0 sudo[289583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:34 compute-0 sudo[289583]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Oct 02 12:10:34 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Oct 02 12:10:34 compute-0 ceph-mon[73607]: pgmap v1330: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 161 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 3.1 MiB/s wr, 255 op/s
Oct 02 12:10:34 compute-0 ceph-mon[73607]: osdmap e173: 3 total, 3 up, 3 in
Oct 02 12:10:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 207 MiB data, 549 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 6.4 MiB/s wr, 291 op/s
Oct 02 12:10:35 compute-0 nova_compute[257802]: 2025-10-02 12:10:35.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:10:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:35.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:10:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:35.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:35 compute-0 nova_compute[257802]: 2025-10-02 12:10:35.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:36 compute-0 nova_compute[257802]: 2025-10-02 12:10:36.117 2 INFO nova.virt.libvirt.driver [None req-273e25a3-222a-444e-a60f-af0ee0d8f697 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Snapshot image upload complete
Oct 02 12:10:36 compute-0 nova_compute[257802]: 2025-10-02 12:10:36.119 2 INFO nova.compute.manager [None req-273e25a3-222a-444e-a60f-af0ee0d8f697 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Took 5.67 seconds to snapshot the instance on the hypervisor.
Oct 02 12:10:37 compute-0 nova_compute[257802]: 2025-10-02 12:10:37.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:37 compute-0 nova_compute[257802]: 2025-10-02 12:10:37.100 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:37 compute-0 nova_compute[257802]: 2025-10-02 12:10:37.100 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:10:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 244 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 10 MiB/s wr, 263 op/s
Oct 02 12:10:37 compute-0 ceph-mon[73607]: pgmap v1332: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 207 MiB data, 549 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 6.4 MiB/s wr, 291 op/s
Oct 02 12:10:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3447389853' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1647860312' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:37 compute-0 nova_compute[257802]: 2025-10-02 12:10:37.668 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407022.666567, e8828a06-3170-4302-a5f9-2e4ec8445ab2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:10:37 compute-0 nova_compute[257802]: 2025-10-02 12:10:37.669 2 INFO nova.compute.manager [-] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] VM Stopped (Lifecycle Event)
Oct 02 12:10:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:37.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:37.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:37 compute-0 nova_compute[257802]: 2025-10-02 12:10:37.909 2 DEBUG nova.compute.manager [None req-f3310a6b-b4a2-4cfc-b447-5ae9a6c15e8c - - - - - -] [instance: e8828a06-3170-4302-a5f9-2e4ec8445ab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:10:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:38 compute-0 nova_compute[257802]: 2025-10-02 12:10:38.100 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:38 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4088970629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:38 compute-0 ceph-mon[73607]: pgmap v1333: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 244 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 10 MiB/s wr, 263 op/s
Oct 02 12:10:39 compute-0 nova_compute[257802]: 2025-10-02 12:10:39.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 260 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 9.3 MiB/s wr, 312 op/s
Oct 02 12:10:39 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2592809986' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:39.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:10:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:39.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:10:39 compute-0 podman[289611]: 2025-10-02 12:10:39.939630933 +0000 UTC m=+0.080860450 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 02 12:10:40 compute-0 nova_compute[257802]: 2025-10-02 12:10:40.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:40 compute-0 nova_compute[257802]: 2025-10-02 12:10:40.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Oct 02 12:10:40 compute-0 ceph-mon[73607]: pgmap v1334: 305 pgs: 305 active+clean; 260 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 9.3 MiB/s wr, 312 op/s
Oct 02 12:10:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Oct 02 12:10:40 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Oct 02 12:10:40 compute-0 nova_compute[257802]: 2025-10-02 12:10:40.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:41 compute-0 nova_compute[257802]: 2025-10-02 12:10:41.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:41 compute-0 nova_compute[257802]: 2025-10-02 12:10:41.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:10:41 compute-0 nova_compute[257802]: 2025-10-02 12:10:41.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:10:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 263 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 7.1 MiB/s wr, 189 op/s
Oct 02 12:10:41 compute-0 nova_compute[257802]: 2025-10-02 12:10:41.486 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-2b0bce41-aada-43ea-8c21-a68eeb2720c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:10:41 compute-0 nova_compute[257802]: 2025-10-02 12:10:41.486 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-2b0bce41-aada-43ea-8c21-a68eeb2720c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:10:41 compute-0 nova_compute[257802]: 2025-10-02 12:10:41.487 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:10:41 compute-0 nova_compute[257802]: 2025-10-02 12:10:41.487 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b0bce41-aada-43ea-8c21-a68eeb2720c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:10:41 compute-0 ovn_controller[148183]: 2025-10-02T12:10:41Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c7:29:c2 10.100.0.9
Oct 02 12:10:41 compute-0 ovn_controller[148183]: 2025-10-02T12:10:41Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c7:29:c2 10.100.0.9
Oct 02 12:10:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:41.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:41.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:41 compute-0 ceph-mon[73607]: osdmap e174: 3 total, 3 up, 3 in
Oct 02 12:10:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:10:42
Oct 02 12:10:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:10:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:10:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'backups', 'images', '.rgw.root', 'volumes']
Oct 02 12:10:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:10:42 compute-0 nova_compute[257802]: 2025-10-02 12:10:42.626 2 DEBUG nova.compute.manager [None req-bf927f91-9617-49ca-99ea-e518433891a8 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:10:42 compute-0 nova_compute[257802]: 2025-10-02 12:10:42.668 2 INFO nova.compute.manager [None req-bf927f91-9617-49ca-99ea-e518433891a8 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] instance snapshotting
Oct 02 12:10:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:10:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:10:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:10:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:10:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:10:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:10:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:10:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:10:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:10:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:10:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:10:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Oct 02 12:10:43 compute-0 nova_compute[257802]: 2025-10-02 12:10:43.010 2 INFO nova.virt.libvirt.driver [None req-bf927f91-9617-49ca-99ea-e518433891a8 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Beginning live snapshot process
Oct 02 12:10:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Oct 02 12:10:43 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Oct 02 12:10:43 compute-0 ceph-mon[73607]: pgmap v1336: 305 pgs: 305 active+clean; 263 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 7.1 MiB/s wr, 189 op/s
Oct 02 12:10:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2662862687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 274 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.8 MiB/s wr, 220 op/s
Oct 02 12:10:43 compute-0 nova_compute[257802]: 2025-10-02 12:10:43.162 2 DEBUG nova.virt.libvirt.imagebackend [None req-bf927f91-9617-49ca-99ea-e518433891a8 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] No parent info for c2d0c2bc-fe21-4689-86ae-d6728c15874c; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 12:10:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:10:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:10:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:10:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:10:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:10:43 compute-0 nova_compute[257802]: 2025-10-02 12:10:43.361 2 DEBUG nova.storage.rbd_utils [None req-bf927f91-9617-49ca-99ea-e518433891a8 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] creating snapshot(2264a35fdd5747eebb6bd1b5d62691b4) on rbd image(2b0bce41-aada-43ea-8c21-a68eeb2720c0_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:10:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:10:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:43.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:10:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:10:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:43.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:10:43 compute-0 podman[289684]: 2025-10-02 12:10:43.908051689 +0000 UTC m=+0.049510400 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid)
Oct 02 12:10:43 compute-0 podman[289683]: 2025-10-02 12:10:43.910355095 +0000 UTC m=+0.054554773 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:10:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Oct 02 12:10:44 compute-0 ceph-mon[73607]: osdmap e175: 3 total, 3 up, 3 in
Oct 02 12:10:44 compute-0 ceph-mon[73607]: pgmap v1338: 305 pgs: 305 active+clean; 274 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.8 MiB/s wr, 220 op/s
Oct 02 12:10:44 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2641519561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:44 compute-0 nova_compute[257802]: 2025-10-02 12:10:44.225 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Updating instance_info_cache with network_info: [{"id": "da20f9f6-37f3-4158-9959-4ce9ef867a87", "address": "fa:16:3e:c7:29:c2", "network": {"id": "cbcbfca3-dfd0-418b-81d4-5015e6d4350b", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1880668607-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07abaa757bde49eead1d80ce844ec6ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda20f9f6-37", "ovs_interfaceid": "da20f9f6-37f3-4158-9959-4ce9ef867a87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:10:44 compute-0 nova_compute[257802]: 2025-10-02 12:10:44.244 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-2b0bce41-aada-43ea-8c21-a68eeb2720c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:10:44 compute-0 nova_compute[257802]: 2025-10-02 12:10:44.245 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:10:44 compute-0 nova_compute[257802]: 2025-10-02 12:10:44.245 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Oct 02 12:10:44 compute-0 nova_compute[257802]: 2025-10-02 12:10:44.262 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:44 compute-0 nova_compute[257802]: 2025-10-02 12:10:44.262 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:44 compute-0 nova_compute[257802]: 2025-10-02 12:10:44.262 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:44 compute-0 nova_compute[257802]: 2025-10-02 12:10:44.263 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:10:44 compute-0 nova_compute[257802]: 2025-10-02 12:10:44.263 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:44 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Oct 02 12:10:44 compute-0 nova_compute[257802]: 2025-10-02 12:10:44.582 2 DEBUG nova.storage.rbd_utils [None req-bf927f91-9617-49ca-99ea-e518433891a8 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] cloning vms/2b0bce41-aada-43ea-8c21-a68eeb2720c0_disk@2264a35fdd5747eebb6bd1b5d62691b4 to images/856e53ca-7fff-4728-88ae-8118b60bc057 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:10:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:10:44 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1813748826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:44 compute-0 nova_compute[257802]: 2025-10-02 12:10:44.688 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:44 compute-0 nova_compute[257802]: 2025-10-02 12:10:44.764 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:10:44 compute-0 nova_compute[257802]: 2025-10-02 12:10:44.764 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:10:44 compute-0 nova_compute[257802]: 2025-10-02 12:10:44.997 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:10:44 compute-0 nova_compute[257802]: 2025-10-02 12:10:44.998 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4482MB free_disk=20.885372161865234GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:10:44 compute-0 nova_compute[257802]: 2025-10-02 12:10:44.998 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:44 compute-0 nova_compute[257802]: 2025-10-02 12:10:44.998 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:45 compute-0 nova_compute[257802]: 2025-10-02 12:10:45.080 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 2b0bce41-aada-43ea-8c21-a68eeb2720c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:10:45 compute-0 nova_compute[257802]: 2025-10-02 12:10:45.080 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:10:45 compute-0 nova_compute[257802]: 2025-10-02 12:10:45.080 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:10:45 compute-0 nova_compute[257802]: 2025-10-02 12:10:45.115 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 266 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.3 MiB/s wr, 203 op/s
Oct 02 12:10:45 compute-0 nova_compute[257802]: 2025-10-02 12:10:45.204 2 DEBUG nova.storage.rbd_utils [None req-bf927f91-9617-49ca-99ea-e518433891a8 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] flattening images/856e53ca-7fff-4728-88ae-8118b60bc057 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 12:10:45 compute-0 nova_compute[257802]: 2025-10-02 12:10:45.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:45 compute-0 ceph-mon[73607]: osdmap e176: 3 total, 3 up, 3 in
Oct 02 12:10:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1813748826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:10:45 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1810253798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:45 compute-0 nova_compute[257802]: 2025-10-02 12:10:45.635 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:45 compute-0 nova_compute[257802]: 2025-10-02 12:10:45.639 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:10:45 compute-0 nova_compute[257802]: 2025-10-02 12:10:45.653 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:10:45 compute-0 nova_compute[257802]: 2025-10-02 12:10:45.676 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:10:45 compute-0 nova_compute[257802]: 2025-10-02 12:10:45.676 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:45.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:45.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:45 compute-0 nova_compute[257802]: 2025-10-02 12:10:45.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:46.036 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:10:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:46.037 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:10:46 compute-0 nova_compute[257802]: 2025-10-02 12:10:46.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:46 compute-0 ceph-mon[73607]: pgmap v1340: 305 pgs: 305 active+clean; 266 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.3 MiB/s wr, 203 op/s
Oct 02 12:10:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1810253798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:46 compute-0 nova_compute[257802]: 2025-10-02 12:10:46.672 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:46 compute-0 nova_compute[257802]: 2025-10-02 12:10:46.681 2 DEBUG nova.storage.rbd_utils [None req-bf927f91-9617-49ca-99ea-e518433891a8 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] removing snapshot(2264a35fdd5747eebb6bd1b5d62691b4) on rbd image(2b0bce41-aada-43ea-8c21-a68eeb2720c0_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:10:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:47.040 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 246 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.5 MiB/s wr, 252 op/s
Oct 02 12:10:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Oct 02 12:10:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:47.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:10:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:47.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:10:47 compute-0 podman[289840]: 2025-10-02 12:10:47.953084001 +0000 UTC m=+0.085821021 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:10:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Oct 02 12:10:48 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Oct 02 12:10:48 compute-0 nova_compute[257802]: 2025-10-02 12:10:48.138 2 DEBUG nova.storage.rbd_utils [None req-bf927f91-9617-49ca-99ea-e518433891a8 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] creating snapshot(snap) on rbd image(856e53ca-7fff-4728-88ae-8118b60bc057) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:10:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Oct 02 12:10:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:10:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6955 writes, 30K keys, 6954 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 6954 writes, 6953 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1728 writes, 7373 keys, 1728 commit groups, 1.0 writes per commit group, ingest: 11.03 MB, 0.02 MB/s
                                           Interval WAL: 1727 writes, 1727 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     70.7      0.54              0.10        17    0.032       0      0       0.0       0.0
                                             L6      1/0    8.19 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.6    111.9     91.8      1.49              0.37        16    0.093     78K   8972       0.0       0.0
                                            Sum      1/0    8.19 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.6     82.2     86.2      2.03              0.46        33    0.062     78K   8972       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.0     46.6     47.3      1.03              0.14         8    0.129     23K   2586       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    111.9     91.8      1.49              0.37        16    0.093     78K   8972       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     71.2      0.53              0.10        16    0.033       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.037, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.17 GB write, 0.07 MB/s write, 0.16 GB read, 0.07 MB/s read, 2.0 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 1.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5581be5e11f0#2 capacity: 304.00 MB usage: 17.36 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000193 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1028,16.73 MB,5.50234%) FilterBlock(34,225.80 KB,0.0725345%) IndexBlock(34,418.17 KB,0.134333%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 12:10:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 301 MiB data, 645 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 6.0 MiB/s wr, 313 op/s
Oct 02 12:10:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Oct 02 12:10:49 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Oct 02 12:10:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:49.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:49.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:49 compute-0 ceph-mon[73607]: pgmap v1341: 305 pgs: 305 active+clean; 246 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.5 MiB/s wr, 252 op/s
Oct 02 12:10:49 compute-0 ceph-mon[73607]: osdmap e177: 3 total, 3 up, 3 in
Oct 02 12:10:50 compute-0 nova_compute[257802]: 2025-10-02 12:10:50.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:50 compute-0 nova_compute[257802]: 2025-10-02 12:10:50.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:50 compute-0 nova_compute[257802]: 2025-10-02 12:10:50.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 326 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 6.8 MiB/s wr, 242 op/s
Oct 02 12:10:51 compute-0 ceph-mon[73607]: pgmap v1343: 305 pgs: 305 active+clean; 301 MiB data, 645 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 6.0 MiB/s wr, 313 op/s
Oct 02 12:10:51 compute-0 ceph-mon[73607]: osdmap e178: 3 total, 3 up, 3 in
Oct 02 12:10:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:51.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:51 compute-0 nova_compute[257802]: 2025-10-02 12:10:51.824 2 INFO nova.virt.libvirt.driver [None req-bf927f91-9617-49ca-99ea-e518433891a8 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Snapshot image upload complete
Oct 02 12:10:51 compute-0 nova_compute[257802]: 2025-10-02 12:10:51.824 2 INFO nova.compute.manager [None req-bf927f91-9617-49ca-99ea-e518433891a8 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Took 9.16 seconds to snapshot the instance on the hypervisor.
Oct 02 12:10:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:10:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:51.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:10:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Oct 02 12:10:52 compute-0 ceph-mon[73607]: pgmap v1345: 305 pgs: 305 active+clean; 326 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 6.8 MiB/s wr, 242 op/s
Oct 02 12:10:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 349 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 9.2 MiB/s rd, 6.6 MiB/s wr, 224 op/s
Oct 02 12:10:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Oct 02 12:10:53 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Oct 02 12:10:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:53.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:10:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:53.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:10:54 compute-0 sudo[289887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:54 compute-0 sudo[289887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:54 compute-0 sudo[289887]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005335379152707985 of space, bias 1.0, pg target 1.6006137458123955 quantized to 32 (current 32)
Oct 02 12:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Oct 02 12:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004322465917550716 of space, bias 1.0, pg target 1.292417309347664 quantized to 32 (current 32)
Oct 02 12:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027081297692164525 quantized to 32 (current 32)
Oct 02 12:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:10:54 compute-0 sudo[289912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:54 compute-0 sudo[289912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:54 compute-0 sudo[289912]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Oct 02 12:10:54 compute-0 ceph-mon[73607]: pgmap v1346: 305 pgs: 305 active+clean; 349 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 9.2 MiB/s rd, 6.6 MiB/s wr, 224 op/s
Oct 02 12:10:54 compute-0 ceph-mon[73607]: osdmap e179: 3 total, 3 up, 3 in
Oct 02 12:10:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Oct 02 12:10:54 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Oct 02 12:10:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 367 MiB data, 683 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 6.3 MiB/s wr, 125 op/s
Oct 02 12:10:55 compute-0 nova_compute[257802]: 2025-10-02 12:10:55.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:10:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:55.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:10:55 compute-0 ceph-mon[73607]: osdmap e180: 3 total, 3 up, 3 in
Oct 02 12:10:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1703261303' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:10:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1703261303' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:10:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:55.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:55 compute-0 nova_compute[257802]: 2025-10-02 12:10:55.892 2 DEBUG oslo_concurrency.lockutils [None req-8ae8a1b3-a74a-463b-897e-1968148428d2 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Acquiring lock "2b0bce41-aada-43ea-8c21-a68eeb2720c0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:55 compute-0 nova_compute[257802]: 2025-10-02 12:10:55.893 2 DEBUG oslo_concurrency.lockutils [None req-8ae8a1b3-a74a-463b-897e-1968148428d2 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Lock "2b0bce41-aada-43ea-8c21-a68eeb2720c0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:55 compute-0 nova_compute[257802]: 2025-10-02 12:10:55.893 2 DEBUG oslo_concurrency.lockutils [None req-8ae8a1b3-a74a-463b-897e-1968148428d2 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Acquiring lock "2b0bce41-aada-43ea-8c21-a68eeb2720c0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:55 compute-0 nova_compute[257802]: 2025-10-02 12:10:55.893 2 DEBUG oslo_concurrency.lockutils [None req-8ae8a1b3-a74a-463b-897e-1968148428d2 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Lock "2b0bce41-aada-43ea-8c21-a68eeb2720c0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:55 compute-0 nova_compute[257802]: 2025-10-02 12:10:55.894 2 DEBUG oslo_concurrency.lockutils [None req-8ae8a1b3-a74a-463b-897e-1968148428d2 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Lock "2b0bce41-aada-43ea-8c21-a68eeb2720c0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:55 compute-0 nova_compute[257802]: 2025-10-02 12:10:55.894 2 INFO nova.compute.manager [None req-8ae8a1b3-a74a-463b-897e-1968148428d2 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Terminating instance
Oct 02 12:10:55 compute-0 nova_compute[257802]: 2025-10-02 12:10:55.895 2 DEBUG nova.compute.manager [None req-8ae8a1b3-a74a-463b-897e-1968148428d2 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:10:55 compute-0 nova_compute[257802]: 2025-10-02 12:10:55.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:55 compute-0 kernel: tapda20f9f6-37 (unregistering): left promiscuous mode
Oct 02 12:10:55 compute-0 NetworkManager[44987]: <info>  [1759407055.9600] device (tapda20f9f6-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:10:55 compute-0 ovn_controller[148183]: 2025-10-02T12:10:55Z|00182|binding|INFO|Releasing lport da20f9f6-37f3-4158-9959-4ce9ef867a87 from this chassis (sb_readonly=0)
Oct 02 12:10:55 compute-0 ovn_controller[148183]: 2025-10-02T12:10:55Z|00183|binding|INFO|Setting lport da20f9f6-37f3-4158-9959-4ce9ef867a87 down in Southbound
Oct 02 12:10:55 compute-0 nova_compute[257802]: 2025-10-02 12:10:55.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:55 compute-0 ovn_controller[148183]: 2025-10-02T12:10:55Z|00184|binding|INFO|Removing iface tapda20f9f6-37 ovn-installed in OVS
Oct 02 12:10:55 compute-0 nova_compute[257802]: 2025-10-02 12:10:55.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:55.977 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:29:c2 10.100.0.9'], port_security=['fa:16:3e:c7:29:c2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2b0bce41-aada-43ea-8c21-a68eeb2720c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cbcbfca3-dfd0-418b-81d4-5015e6d4350b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '07abaa757bde49eead1d80ce844ec6ba', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bfd02b02-57c1-43db-a4de-9009b8d1e33b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=149973b7-bcb2-4360-8df8-fdb27dc8cf49, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=da20f9f6-37f3-4158-9959-4ce9ef867a87) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:10:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:55.978 158261 INFO neutron.agent.ovn.metadata.agent [-] Port da20f9f6-37f3-4158-9959-4ce9ef867a87 in datapath cbcbfca3-dfd0-418b-81d4-5015e6d4350b unbound from our chassis
Oct 02 12:10:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:55.979 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cbcbfca3-dfd0-418b-81d4-5015e6d4350b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:10:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:55.980 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[663269c3-e148-4aa5-8e6b-9c5a064fb7b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:55.981 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cbcbfca3-dfd0-418b-81d4-5015e6d4350b namespace which is not needed anymore
Oct 02 12:10:55 compute-0 nova_compute[257802]: 2025-10-02 12:10:55.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:56 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Oct 02 12:10:56 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000002f.scope: Consumed 14.967s CPU time.
Oct 02 12:10:56 compute-0 systemd-machined[211836]: Machine qemu-23-instance-0000002f terminated.
Oct 02 12:10:56 compute-0 nova_compute[257802]: 2025-10-02 12:10:56.148 2 INFO nova.virt.libvirt.driver [-] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Instance destroyed successfully.
Oct 02 12:10:56 compute-0 nova_compute[257802]: 2025-10-02 12:10:56.149 2 DEBUG nova.objects.instance [None req-8ae8a1b3-a74a-463b-897e-1968148428d2 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Lazy-loading 'resources' on Instance uuid 2b0bce41-aada-43ea-8c21-a68eeb2720c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:10:56 compute-0 nova_compute[257802]: 2025-10-02 12:10:56.163 2 DEBUG nova.virt.libvirt.vif [None req-8ae8a1b3-a74a-463b-897e-1968148428d2 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:10:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-2083638909',display_name='tempest-ImagesOneServerTestJSON-server-2083638909',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-2083638909',id=47,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:10:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='07abaa757bde49eead1d80ce844ec6ba',ramdisk_id='',reservation_id='r-20t8i3nm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerTestJSON-1710294689',owner_user_name='tempest-ImagesOneServerTestJSON-1710294689-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:10:51Z,user_data=None,user_id='3c2b867915b342b5acd8026e8fc9fe00',uuid=2b0bce41-aada-43ea-8c21-a68eeb2720c0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da20f9f6-37f3-4158-9959-4ce9ef867a87", "address": "fa:16:3e:c7:29:c2", "network": {"id": "cbcbfca3-dfd0-418b-81d4-5015e6d4350b", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1880668607-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07abaa757bde49eead1d80ce844ec6ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda20f9f6-37", "ovs_interfaceid": "da20f9f6-37f3-4158-9959-4ce9ef867a87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:10:56 compute-0 nova_compute[257802]: 2025-10-02 12:10:56.163 2 DEBUG nova.network.os_vif_util [None req-8ae8a1b3-a74a-463b-897e-1968148428d2 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Converting VIF {"id": "da20f9f6-37f3-4158-9959-4ce9ef867a87", "address": "fa:16:3e:c7:29:c2", "network": {"id": "cbcbfca3-dfd0-418b-81d4-5015e6d4350b", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1880668607-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07abaa757bde49eead1d80ce844ec6ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda20f9f6-37", "ovs_interfaceid": "da20f9f6-37f3-4158-9959-4ce9ef867a87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:10:56 compute-0 nova_compute[257802]: 2025-10-02 12:10:56.164 2 DEBUG nova.network.os_vif_util [None req-8ae8a1b3-a74a-463b-897e-1968148428d2 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c7:29:c2,bridge_name='br-int',has_traffic_filtering=True,id=da20f9f6-37f3-4158-9959-4ce9ef867a87,network=Network(cbcbfca3-dfd0-418b-81d4-5015e6d4350b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda20f9f6-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:10:56 compute-0 nova_compute[257802]: 2025-10-02 12:10:56.164 2 DEBUG os_vif [None req-8ae8a1b3-a74a-463b-897e-1968148428d2 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:29:c2,bridge_name='br-int',has_traffic_filtering=True,id=da20f9f6-37f3-4158-9959-4ce9ef867a87,network=Network(cbcbfca3-dfd0-418b-81d4-5015e6d4350b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda20f9f6-37') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:10:56 compute-0 nova_compute[257802]: 2025-10-02 12:10:56.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:56 compute-0 nova_compute[257802]: 2025-10-02 12:10:56.167 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda20f9f6-37, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:56 compute-0 nova_compute[257802]: 2025-10-02 12:10:56.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:56 compute-0 nova_compute[257802]: 2025-10-02 12:10:56.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:56 compute-0 nova_compute[257802]: 2025-10-02 12:10:56.171 2 INFO os_vif [None req-8ae8a1b3-a74a-463b-897e-1968148428d2 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:29:c2,bridge_name='br-int',has_traffic_filtering=True,id=da20f9f6-37f3-4158-9959-4ce9ef867a87,network=Network(cbcbfca3-dfd0-418b-81d4-5015e6d4350b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda20f9f6-37')
Oct 02 12:10:56 compute-0 neutron-haproxy-ovnmeta-cbcbfca3-dfd0-418b-81d4-5015e6d4350b[289356]: [NOTICE]   (289360) : haproxy version is 2.8.14-c23fe91
Oct 02 12:10:56 compute-0 neutron-haproxy-ovnmeta-cbcbfca3-dfd0-418b-81d4-5015e6d4350b[289356]: [NOTICE]   (289360) : path to executable is /usr/sbin/haproxy
Oct 02 12:10:56 compute-0 neutron-haproxy-ovnmeta-cbcbfca3-dfd0-418b-81d4-5015e6d4350b[289356]: [WARNING]  (289360) : Exiting Master process...
Oct 02 12:10:56 compute-0 neutron-haproxy-ovnmeta-cbcbfca3-dfd0-418b-81d4-5015e6d4350b[289356]: [WARNING]  (289360) : Exiting Master process...
Oct 02 12:10:56 compute-0 neutron-haproxy-ovnmeta-cbcbfca3-dfd0-418b-81d4-5015e6d4350b[289356]: [ALERT]    (289360) : Current worker (289376) exited with code 143 (Terminated)
Oct 02 12:10:56 compute-0 neutron-haproxy-ovnmeta-cbcbfca3-dfd0-418b-81d4-5015e6d4350b[289356]: [WARNING]  (289360) : All workers exited. Exiting... (0)
Oct 02 12:10:56 compute-0 systemd[1]: libpod-83d60f55c3b68be6b61b8862d05cb16fddde25be05f1803d64cc50754cb6acec.scope: Deactivated successfully.
Oct 02 12:10:56 compute-0 podman[289962]: 2025-10-02 12:10:56.184755534 +0000 UTC m=+0.100570597 container died 83d60f55c3b68be6b61b8862d05cb16fddde25be05f1803d64cc50754cb6acec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbcbfca3-dfd0-418b-81d4-5015e6d4350b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:10:56 compute-0 nova_compute[257802]: 2025-10-02 12:10:56.373 2 DEBUG nova.compute.manager [req-a4e33cc8-700d-4640-a4d2-a19070e7be56 req-bc9e2aad-294f-4e9f-8438-1d87ac1dea7d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Received event network-vif-unplugged-da20f9f6-37f3-4158-9959-4ce9ef867a87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:56 compute-0 nova_compute[257802]: 2025-10-02 12:10:56.374 2 DEBUG oslo_concurrency.lockutils [req-a4e33cc8-700d-4640-a4d2-a19070e7be56 req-bc9e2aad-294f-4e9f-8438-1d87ac1dea7d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "2b0bce41-aada-43ea-8c21-a68eeb2720c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-83d60f55c3b68be6b61b8862d05cb16fddde25be05f1803d64cc50754cb6acec-userdata-shm.mount: Deactivated successfully.
Oct 02 12:10:56 compute-0 nova_compute[257802]: 2025-10-02 12:10:56.375 2 DEBUG oslo_concurrency.lockutils [req-a4e33cc8-700d-4640-a4d2-a19070e7be56 req-bc9e2aad-294f-4e9f-8438-1d87ac1dea7d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2b0bce41-aada-43ea-8c21-a68eeb2720c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:56 compute-0 nova_compute[257802]: 2025-10-02 12:10:56.376 2 DEBUG oslo_concurrency.lockutils [req-a4e33cc8-700d-4640-a4d2-a19070e7be56 req-bc9e2aad-294f-4e9f-8438-1d87ac1dea7d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2b0bce41-aada-43ea-8c21-a68eeb2720c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:56 compute-0 nova_compute[257802]: 2025-10-02 12:10:56.376 2 DEBUG nova.compute.manager [req-a4e33cc8-700d-4640-a4d2-a19070e7be56 req-bc9e2aad-294f-4e9f-8438-1d87ac1dea7d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] No waiting events found dispatching network-vif-unplugged-da20f9f6-37f3-4158-9959-4ce9ef867a87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:10:56 compute-0 nova_compute[257802]: 2025-10-02 12:10:56.376 2 DEBUG nova.compute.manager [req-a4e33cc8-700d-4640-a4d2-a19070e7be56 req-bc9e2aad-294f-4e9f-8438-1d87ac1dea7d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Received event network-vif-unplugged-da20f9f6-37f3-4158-9959-4ce9ef867a87 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:10:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-173846d4a275f10ae872f7937d3139aae156895abf5b2f8b61766973e4b5c742-merged.mount: Deactivated successfully.
Oct 02 12:10:56 compute-0 podman[289962]: 2025-10-02 12:10:56.517854987 +0000 UTC m=+0.433670050 container cleanup 83d60f55c3b68be6b61b8862d05cb16fddde25be05f1803d64cc50754cb6acec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbcbfca3-dfd0-418b-81d4-5015e6d4350b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 12:10:56 compute-0 systemd[1]: libpod-conmon-83d60f55c3b68be6b61b8862d05cb16fddde25be05f1803d64cc50754cb6acec.scope: Deactivated successfully.
Oct 02 12:10:56 compute-0 podman[290024]: 2025-10-02 12:10:56.773217325 +0000 UTC m=+0.231552092 container remove 83d60f55c3b68be6b61b8862d05cb16fddde25be05f1803d64cc50754cb6acec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbcbfca3-dfd0-418b-81d4-5015e6d4350b, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:10:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:56.782 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0b99c577-81d1-47bb-bb25-450deeb2c257]: (4, ('Thu Oct  2 12:10:56 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cbcbfca3-dfd0-418b-81d4-5015e6d4350b (83d60f55c3b68be6b61b8862d05cb16fddde25be05f1803d64cc50754cb6acec)\n83d60f55c3b68be6b61b8862d05cb16fddde25be05f1803d64cc50754cb6acec\nThu Oct  2 12:10:56 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cbcbfca3-dfd0-418b-81d4-5015e6d4350b (83d60f55c3b68be6b61b8862d05cb16fddde25be05f1803d64cc50754cb6acec)\n83d60f55c3b68be6b61b8862d05cb16fddde25be05f1803d64cc50754cb6acec\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:56.783 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e418af5f-a922-4698-94c2-fa847fb35171]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:56.784 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcbcbfca3-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:56 compute-0 kernel: tapcbcbfca3-d0: left promiscuous mode
Oct 02 12:10:56 compute-0 nova_compute[257802]: 2025-10-02 12:10:56.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:56 compute-0 nova_compute[257802]: 2025-10-02 12:10:56.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:56.804 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c4344090-3114-423c-81c6-ad858bfa4b9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:56.838 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[41ddf0f3-a545-43ba-ac59-0f51f8cd093c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:56.839 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[24ca2186-15d1-4990-8d48-756bd606510e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:56.854 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[fb504568-8cc3-49ec-9b66-c353e0c3a8e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509409, 'reachable_time': 31095, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290040, 'error': None, 'target': 'ovnmeta-cbcbfca3-dfd0-418b-81d4-5015e6d4350b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:56 compute-0 systemd[1]: run-netns-ovnmeta\x2dcbcbfca3\x2ddfd0\x2d418b\x2d81d4\x2d5015e6d4350b.mount: Deactivated successfully.
Oct 02 12:10:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:56.860 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cbcbfca3-dfd0-418b-81d4-5015e6d4350b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:10:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:10:56.860 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[9f726607-4574-4938-a6ef-f9d713fd6c87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:56 compute-0 ceph-mon[73607]: pgmap v1349: 305 pgs: 305 active+clean; 367 MiB data, 683 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 6.3 MiB/s wr, 125 op/s
Oct 02 12:10:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 347 MiB data, 686 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.4 MiB/s wr, 104 op/s
Oct 02 12:10:57 compute-0 nova_compute[257802]: 2025-10-02 12:10:57.223 2 INFO nova.virt.libvirt.driver [None req-8ae8a1b3-a74a-463b-897e-1968148428d2 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Deleting instance files /var/lib/nova/instances/2b0bce41-aada-43ea-8c21-a68eeb2720c0_del
Oct 02 12:10:57 compute-0 nova_compute[257802]: 2025-10-02 12:10:57.224 2 INFO nova.virt.libvirt.driver [None req-8ae8a1b3-a74a-463b-897e-1968148428d2 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Deletion of /var/lib/nova/instances/2b0bce41-aada-43ea-8c21-a68eeb2720c0_del complete
Oct 02 12:10:57 compute-0 nova_compute[257802]: 2025-10-02 12:10:57.290 2 INFO nova.compute.manager [None req-8ae8a1b3-a74a-463b-897e-1968148428d2 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Took 1.40 seconds to destroy the instance on the hypervisor.
Oct 02 12:10:57 compute-0 nova_compute[257802]: 2025-10-02 12:10:57.291 2 DEBUG oslo.service.loopingcall [None req-8ae8a1b3-a74a-463b-897e-1968148428d2 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:10:57 compute-0 nova_compute[257802]: 2025-10-02 12:10:57.291 2 DEBUG nova.compute.manager [-] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:10:57 compute-0 nova_compute[257802]: 2025-10-02 12:10:57.291 2 DEBUG nova.network.neutron [-] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:10:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:57.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:10:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:57.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:10:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Oct 02 12:10:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Oct 02 12:10:58 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Oct 02 12:10:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Oct 02 12:10:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Oct 02 12:10:58 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Oct 02 12:10:58 compute-0 nova_compute[257802]: 2025-10-02 12:10:58.498 2 DEBUG nova.compute.manager [req-0dca53df-6cfd-486c-9a69-9c7e054f5026 req-d7dd713d-e731-4b9d-b208-9e1346a88fb5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Received event network-vif-plugged-da20f9f6-37f3-4158-9959-4ce9ef867a87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:58 compute-0 nova_compute[257802]: 2025-10-02 12:10:58.499 2 DEBUG oslo_concurrency.lockutils [req-0dca53df-6cfd-486c-9a69-9c7e054f5026 req-d7dd713d-e731-4b9d-b208-9e1346a88fb5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "2b0bce41-aada-43ea-8c21-a68eeb2720c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:58 compute-0 nova_compute[257802]: 2025-10-02 12:10:58.499 2 DEBUG oslo_concurrency.lockutils [req-0dca53df-6cfd-486c-9a69-9c7e054f5026 req-d7dd713d-e731-4b9d-b208-9e1346a88fb5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2b0bce41-aada-43ea-8c21-a68eeb2720c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:58 compute-0 nova_compute[257802]: 2025-10-02 12:10:58.499 2 DEBUG oslo_concurrency.lockutils [req-0dca53df-6cfd-486c-9a69-9c7e054f5026 req-d7dd713d-e731-4b9d-b208-9e1346a88fb5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2b0bce41-aada-43ea-8c21-a68eeb2720c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:58 compute-0 nova_compute[257802]: 2025-10-02 12:10:58.500 2 DEBUG nova.compute.manager [req-0dca53df-6cfd-486c-9a69-9c7e054f5026 req-d7dd713d-e731-4b9d-b208-9e1346a88fb5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] No waiting events found dispatching network-vif-plugged-da20f9f6-37f3-4158-9959-4ce9ef867a87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:10:58 compute-0 nova_compute[257802]: 2025-10-02 12:10:58.500 2 WARNING nova.compute.manager [req-0dca53df-6cfd-486c-9a69-9c7e054f5026 req-d7dd713d-e731-4b9d-b208-9e1346a88fb5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Received unexpected event network-vif-plugged-da20f9f6-37f3-4158-9959-4ce9ef867a87 for instance with vm_state active and task_state deleting.
Oct 02 12:10:58 compute-0 nova_compute[257802]: 2025-10-02 12:10:58.637 2 DEBUG nova.network.neutron [-] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:10:58 compute-0 nova_compute[257802]: 2025-10-02 12:10:58.655 2 INFO nova.compute.manager [-] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Took 1.36 seconds to deallocate network for instance.
Oct 02 12:10:58 compute-0 nova_compute[257802]: 2025-10-02 12:10:58.701 2 DEBUG oslo_concurrency.lockutils [None req-8ae8a1b3-a74a-463b-897e-1968148428d2 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:58 compute-0 nova_compute[257802]: 2025-10-02 12:10:58.701 2 DEBUG oslo_concurrency.lockutils [None req-8ae8a1b3-a74a-463b-897e-1968148428d2 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:58 compute-0 nova_compute[257802]: 2025-10-02 12:10:58.713 2 DEBUG nova.compute.manager [req-1e3f9e35-6a7a-45ca-9316-23e78aa9dd5a req-49a169fa-173e-4dd7-8975-894ab55f1d81 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Received event network-vif-deleted-da20f9f6-37f3-4158-9959-4ce9ef867a87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:58 compute-0 nova_compute[257802]: 2025-10-02 12:10:58.756 2 DEBUG oslo_concurrency.processutils [None req-8ae8a1b3-a74a-463b-897e-1968148428d2 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:59 compute-0 ceph-mon[73607]: pgmap v1350: 305 pgs: 305 active+clean; 347 MiB data, 686 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.4 MiB/s wr, 104 op/s
Oct 02 12:10:59 compute-0 ceph-mon[73607]: osdmap e181: 3 total, 3 up, 3 in
Oct 02 12:10:59 compute-0 ceph-mon[73607]: osdmap e182: 3 total, 3 up, 3 in
Oct 02 12:10:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 220 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.8 MiB/s wr, 246 op/s
Oct 02 12:10:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:10:59 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2649341966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:59 compute-0 nova_compute[257802]: 2025-10-02 12:10:59.208 2 DEBUG oslo_concurrency.processutils [None req-8ae8a1b3-a74a-463b-897e-1968148428d2 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:59 compute-0 nova_compute[257802]: 2025-10-02 12:10:59.214 2 DEBUG nova.compute.provider_tree [None req-8ae8a1b3-a74a-463b-897e-1968148428d2 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:10:59 compute-0 nova_compute[257802]: 2025-10-02 12:10:59.235 2 DEBUG nova.scheduler.client.report [None req-8ae8a1b3-a74a-463b-897e-1968148428d2 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:10:59 compute-0 nova_compute[257802]: 2025-10-02 12:10:59.266 2 DEBUG oslo_concurrency.lockutils [None req-8ae8a1b3-a74a-463b-897e-1968148428d2 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:59 compute-0 nova_compute[257802]: 2025-10-02 12:10:59.304 2 INFO nova.scheduler.client.report [None req-8ae8a1b3-a74a-463b-897e-1968148428d2 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Deleted allocations for instance 2b0bce41-aada-43ea-8c21-a68eeb2720c0
Oct 02 12:10:59 compute-0 nova_compute[257802]: 2025-10-02 12:10:59.369 2 DEBUG oslo_concurrency.lockutils [None req-8ae8a1b3-a74a-463b-897e-1968148428d2 3c2b867915b342b5acd8026e8fc9fe00 07abaa757bde49eead1d80ce844ec6ba - - default default] Lock "2b0bce41-aada-43ea-8c21-a68eeb2720c0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.476s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:10:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:59.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:10:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:10:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.002000047s ======
Oct 02 12:10:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:59.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Oct 02 12:11:00 compute-0 ceph-mon[73607]: pgmap v1353: 305 pgs: 305 active+clean; 220 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.8 MiB/s wr, 246 op/s
Oct 02 12:11:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2649341966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:01 compute-0 nova_compute[257802]: 2025-10-02 12:11:01.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 177 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 496 KiB/s wr, 158 op/s
Oct 02 12:11:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4058958109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:01 compute-0 nova_compute[257802]: 2025-10-02 12:11:01.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:01.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:01.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:02 compute-0 ceph-mon[73607]: pgmap v1354: 305 pgs: 305 active+clean; 177 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 496 KiB/s wr, 158 op/s
Oct 02 12:11:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 132 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 395 KiB/s wr, 155 op/s
Oct 02 12:11:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Oct 02 12:11:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Oct 02 12:11:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2952046040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:03 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Oct 02 12:11:03 compute-0 nova_compute[257802]: 2025-10-02 12:11:03.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:11:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:03.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:11:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:03.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:04 compute-0 ceph-mon[73607]: pgmap v1355: 305 pgs: 305 active+clean; 132 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 395 KiB/s wr, 155 op/s
Oct 02 12:11:04 compute-0 ceph-mon[73607]: osdmap e183: 3 total, 3 up, 3 in
Oct 02 12:11:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 133 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 72 KiB/s rd, 1.2 MiB/s wr, 105 op/s
Oct 02 12:11:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:05.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:11:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:05.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:11:06 compute-0 nova_compute[257802]: 2025-10-02 12:11:06.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:06 compute-0 nova_compute[257802]: 2025-10-02 12:11:06.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:06 compute-0 ceph-mon[73607]: pgmap v1357: 305 pgs: 305 active+clean; 133 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 72 KiB/s rd, 1.2 MiB/s wr, 105 op/s
Oct 02 12:11:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 142 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 80 KiB/s rd, 1.4 MiB/s wr, 116 op/s
Oct 02 12:11:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/612810927' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:11:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.003000071s ======
Oct 02 12:11:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:07.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000071s
Oct 02 12:11:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:07.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Oct 02 12:11:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Oct 02 12:11:08 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Oct 02 12:11:08 compute-0 ceph-mon[73607]: pgmap v1358: 305 pgs: 305 active+clean; 142 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 80 KiB/s rd, 1.4 MiB/s wr, 116 op/s
Oct 02 12:11:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1638063544' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:11:08 compute-0 ceph-mon[73607]: osdmap e184: 3 total, 3 up, 3 in
Oct 02 12:11:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 143 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 2.7 MiB/s wr, 98 op/s
Oct 02 12:11:09 compute-0 sudo[290069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:09 compute-0 sudo[290069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:09 compute-0 sudo[290069]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:09.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:09 compute-0 sudo[290094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:11:09 compute-0 sudo[290094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:09 compute-0 sudo[290094]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:09 compute-0 sudo[290119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:09 compute-0 sudo[290119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:11:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:09.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:11:09 compute-0 sudo[290119]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:09 compute-0 sudo[290144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:11:09 compute-0 sudo[290144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:10 compute-0 sudo[290144]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:11:10 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:11:10 compute-0 ceph-mon[73607]: pgmap v1360: 305 pgs: 305 active+clean; 143 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 2.7 MiB/s wr, 98 op/s
Oct 02 12:11:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:11:10 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:11:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:11:10 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:11:10 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 232957a9-4f02-4cba-9267-ebcc1e1362a3 does not exist
Oct 02 12:11:10 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 9f1c9acc-e51c-41d1-8776-24e6403841d3 does not exist
Oct 02 12:11:10 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 4f2954fd-6bc9-4e83-a898-317cbed382d5 does not exist
Oct 02 12:11:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:11:10 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:11:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:11:10 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:11:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:11:10 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:11:10 compute-0 sudo[290201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:10 compute-0 sudo[290201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:10 compute-0 sudo[290201]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:10 compute-0 sudo[290232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:11:10 compute-0 sudo[290232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:10 compute-0 sudo[290232]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:10 compute-0 podman[290225]: 2025-10-02 12:11:10.700912748 +0000 UTC m=+0.060332673 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:11:10 compute-0 sudo[290270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:10 compute-0 sudo[290270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:10 compute-0 sudo[290270]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:10 compute-0 sudo[290295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:11:10 compute-0 sudo[290295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:11 compute-0 nova_compute[257802]: 2025-10-02 12:11:11.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 119 MiB data, 555 MiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 2.7 MiB/s wr, 81 op/s
Oct 02 12:11:11 compute-0 podman[290361]: 2025-10-02 12:11:11.143403591 +0000 UTC m=+0.043120786 container create 859a9aad8bbbd8db7731b6a359b0606d743dbbde190862879d4352d540fb0364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 12:11:11 compute-0 nova_compute[257802]: 2025-10-02 12:11:11.144 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407056.1435142, 2b0bce41-aada-43ea-8c21-a68eeb2720c0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:11:11 compute-0 nova_compute[257802]: 2025-10-02 12:11:11.145 2 INFO nova.compute.manager [-] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] VM Stopped (Lifecycle Event)
Oct 02 12:11:11 compute-0 nova_compute[257802]: 2025-10-02 12:11:11.164 2 DEBUG nova.compute.manager [None req-e451c8a7-2f9c-42a0-9043-b6b016fed1ac - - - - - -] [instance: 2b0bce41-aada-43ea-8c21-a68eeb2720c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:11:11 compute-0 nova_compute[257802]: 2025-10-02 12:11:11.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:11 compute-0 systemd[1]: Started libpod-conmon-859a9aad8bbbd8db7731b6a359b0606d743dbbde190862879d4352d540fb0364.scope.
Oct 02 12:11:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:11:11 compute-0 podman[290361]: 2025-10-02 12:11:11.123218222 +0000 UTC m=+0.022935427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:11:11 compute-0 podman[290361]: 2025-10-02 12:11:11.234348405 +0000 UTC m=+0.134065610 container init 859a9aad8bbbd8db7731b6a359b0606d743dbbde190862879d4352d540fb0364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_turing, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 12:11:11 compute-0 podman[290361]: 2025-10-02 12:11:11.24240509 +0000 UTC m=+0.142122285 container start 859a9aad8bbbd8db7731b6a359b0606d743dbbde190862879d4352d540fb0364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_turing, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 12:11:11 compute-0 podman[290361]: 2025-10-02 12:11:11.245529866 +0000 UTC m=+0.145247081 container attach 859a9aad8bbbd8db7731b6a359b0606d743dbbde190862879d4352d540fb0364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_turing, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Oct 02 12:11:11 compute-0 gifted_turing[290377]: 167 167
Oct 02 12:11:11 compute-0 podman[290361]: 2025-10-02 12:11:11.248692052 +0000 UTC m=+0.148409237 container died 859a9aad8bbbd8db7731b6a359b0606d743dbbde190862879d4352d540fb0364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:11:11 compute-0 systemd[1]: libpod-859a9aad8bbbd8db7731b6a359b0606d743dbbde190862879d4352d540fb0364.scope: Deactivated successfully.
Oct 02 12:11:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-e59ce9ae58e58641bf36fb3060dec81ef39e7a862e7ca1d013606de1921cb376-merged.mount: Deactivated successfully.
Oct 02 12:11:11 compute-0 podman[290361]: 2025-10-02 12:11:11.289321957 +0000 UTC m=+0.189039142 container remove 859a9aad8bbbd8db7731b6a359b0606d743dbbde190862879d4352d540fb0364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:11:11 compute-0 systemd[1]: libpod-conmon-859a9aad8bbbd8db7731b6a359b0606d743dbbde190862879d4352d540fb0364.scope: Deactivated successfully.
Oct 02 12:11:11 compute-0 podman[290402]: 2025-10-02 12:11:11.438629945 +0000 UTC m=+0.038493414 container create 11fb4d95cf93edccb7abf07cd69ad3150387f35e6a84116ddfbca936c82c443e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dubinsky, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 12:11:11 compute-0 systemd[1]: Started libpod-conmon-11fb4d95cf93edccb7abf07cd69ad3150387f35e6a84116ddfbca936c82c443e.scope.
Oct 02 12:11:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/361d9a200dcf23eccce0abc704a677835f94c6e7805fc7c6cb343e50c32a7e0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/361d9a200dcf23eccce0abc704a677835f94c6e7805fc7c6cb343e50c32a7e0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/361d9a200dcf23eccce0abc704a677835f94c6e7805fc7c6cb343e50c32a7e0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/361d9a200dcf23eccce0abc704a677835f94c6e7805fc7c6cb343e50c32a7e0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/361d9a200dcf23eccce0abc704a677835f94c6e7805fc7c6cb343e50c32a7e0e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:11:11 compute-0 podman[290402]: 2025-10-02 12:11:11.422140175 +0000 UTC m=+0.022003644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:11:11 compute-0 podman[290402]: 2025-10-02 12:11:11.52796991 +0000 UTC m=+0.127833409 container init 11fb4d95cf93edccb7abf07cd69ad3150387f35e6a84116ddfbca936c82c443e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dubinsky, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 12:11:11 compute-0 podman[290402]: 2025-10-02 12:11:11.53497193 +0000 UTC m=+0.134835399 container start 11fb4d95cf93edccb7abf07cd69ad3150387f35e6a84116ddfbca936c82c443e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:11:11 compute-0 podman[290402]: 2025-10-02 12:11:11.539355976 +0000 UTC m=+0.139219445 container attach 11fb4d95cf93edccb7abf07cd69ad3150387f35e6a84116ddfbca936c82c443e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Oct 02 12:11:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:11:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:11:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:11:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:11:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:11:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:11:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:11.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:11.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:12 compute-0 vigilant_dubinsky[290419]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:11:12 compute-0 vigilant_dubinsky[290419]: --> relative data size: 1.0
Oct 02 12:11:12 compute-0 vigilant_dubinsky[290419]: --> All data devices are unavailable
Oct 02 12:11:12 compute-0 systemd[1]: libpod-11fb4d95cf93edccb7abf07cd69ad3150387f35e6a84116ddfbca936c82c443e.scope: Deactivated successfully.
Oct 02 12:11:12 compute-0 podman[290434]: 2025-10-02 12:11:12.322501443 +0000 UTC m=+0.020678172 container died 11fb4d95cf93edccb7abf07cd69ad3150387f35e6a84116ddfbca936c82c443e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:11:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-361d9a200dcf23eccce0abc704a677835f94c6e7805fc7c6cb343e50c32a7e0e-merged.mount: Deactivated successfully.
Oct 02 12:11:12 compute-0 podman[290434]: 2025-10-02 12:11:12.370483846 +0000 UTC m=+0.068660575 container remove 11fb4d95cf93edccb7abf07cd69ad3150387f35e6a84116ddfbca936c82c443e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dubinsky, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 12:11:12 compute-0 systemd[1]: libpod-conmon-11fb4d95cf93edccb7abf07cd69ad3150387f35e6a84116ddfbca936c82c443e.scope: Deactivated successfully.
Oct 02 12:11:12 compute-0 sudo[290295]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:12 compute-0 sudo[290448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:12 compute-0 sudo[290448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:12 compute-0 sudo[290448]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:12 compute-0 sudo[290473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:11:12 compute-0 sudo[290473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:12 compute-0 sudo[290473]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:12 compute-0 ceph-mon[73607]: pgmap v1361: 305 pgs: 305 active+clean; 119 MiB data, 555 MiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 2.7 MiB/s wr, 81 op/s
Oct 02 12:11:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4274527272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:12 compute-0 sudo[290498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:12 compute-0 sudo[290498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:12 compute-0 sudo[290498]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:12 compute-0 sudo[290523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:11:12 compute-0 sudo[290523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:11:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:11:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:11:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:11:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:11:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:11:13 compute-0 podman[290589]: 2025-10-02 12:11:13.003597358 +0000 UTC m=+0.043916595 container create 3d9ab2782a9a564406b0c7d8dc6c52560c9bc745b081857b23b186d5ecfb033e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:11:13 compute-0 systemd[1]: Started libpod-conmon-3d9ab2782a9a564406b0c7d8dc6c52560c9bc745b081857b23b186d5ecfb033e.scope.
Oct 02 12:11:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:11:13 compute-0 podman[290589]: 2025-10-02 12:11:12.986894284 +0000 UTC m=+0.027213571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:11:13 compute-0 podman[290589]: 2025-10-02 12:11:13.081958227 +0000 UTC m=+0.122277474 container init 3d9ab2782a9a564406b0c7d8dc6c52560c9bc745b081857b23b186d5ecfb033e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 12:11:13 compute-0 podman[290589]: 2025-10-02 12:11:13.090960965 +0000 UTC m=+0.131280222 container start 3d9ab2782a9a564406b0c7d8dc6c52560c9bc745b081857b23b186d5ecfb033e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:11:13 compute-0 vigilant_hellman[290605]: 167 167
Oct 02 12:11:13 compute-0 podman[290589]: 2025-10-02 12:11:13.096809457 +0000 UTC m=+0.137128694 container attach 3d9ab2782a9a564406b0c7d8dc6c52560c9bc745b081857b23b186d5ecfb033e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:11:13 compute-0 systemd[1]: libpod-3d9ab2782a9a564406b0c7d8dc6c52560c9bc745b081857b23b186d5ecfb033e.scope: Deactivated successfully.
Oct 02 12:11:13 compute-0 podman[290589]: 2025-10-02 12:11:13.097434122 +0000 UTC m=+0.137753359 container died 3d9ab2782a9a564406b0c7d8dc6c52560c9bc745b081857b23b186d5ecfb033e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hellman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 12:11:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-788a442265e06162fce6a3614f2a5b18afd190829eda59d327a61952c7fd78ad-merged.mount: Deactivated successfully.
Oct 02 12:11:13 compute-0 podman[290589]: 2025-10-02 12:11:13.138170029 +0000 UTC m=+0.178489276 container remove 3d9ab2782a9a564406b0c7d8dc6c52560c9bc745b081857b23b186d5ecfb033e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hellman, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 12:11:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 88 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.2 MiB/s wr, 119 op/s
Oct 02 12:11:13 compute-0 systemd[1]: libpod-conmon-3d9ab2782a9a564406b0c7d8dc6c52560c9bc745b081857b23b186d5ecfb033e.scope: Deactivated successfully.
Oct 02 12:11:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:13 compute-0 podman[290629]: 2025-10-02 12:11:13.320870466 +0000 UTC m=+0.049122461 container create aa19f34b280c86f43afc448009312188df0f4e839857da92ee9c262d255fe973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_varahamihira, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:11:13 compute-0 systemd[1]: Started libpod-conmon-aa19f34b280c86f43afc448009312188df0f4e839857da92ee9c262d255fe973.scope.
Oct 02 12:11:13 compute-0 podman[290629]: 2025-10-02 12:11:13.297619313 +0000 UTC m=+0.025871328 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:11:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:11:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2c6b4590329ba72cfccde5ba1dfc0d765bae279f0df34bcb461eb480e85255/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:11:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2c6b4590329ba72cfccde5ba1dfc0d765bae279f0df34bcb461eb480e85255/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:11:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2c6b4590329ba72cfccde5ba1dfc0d765bae279f0df34bcb461eb480e85255/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:11:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2c6b4590329ba72cfccde5ba1dfc0d765bae279f0df34bcb461eb480e85255/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:11:13 compute-0 podman[290629]: 2025-10-02 12:11:13.415435658 +0000 UTC m=+0.143687693 container init aa19f34b280c86f43afc448009312188df0f4e839857da92ee9c262d255fe973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_varahamihira, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 12:11:13 compute-0 podman[290629]: 2025-10-02 12:11:13.423663428 +0000 UTC m=+0.151915433 container start aa19f34b280c86f43afc448009312188df0f4e839857da92ee9c262d255fe973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_varahamihira, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 12:11:13 compute-0 podman[290629]: 2025-10-02 12:11:13.428192547 +0000 UTC m=+0.156444552 container attach aa19f34b280c86f43afc448009312188df0f4e839857da92ee9c262d255fe973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_varahamihira, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 12:11:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:13.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:13.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]: {
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]:     "1": [
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]:         {
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]:             "devices": [
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]:                 "/dev/loop3"
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]:             ],
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]:             "lv_name": "ceph_lv0",
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]:             "lv_size": "7511998464",
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]:             "name": "ceph_lv0",
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]:             "tags": {
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]:                 "ceph.cluster_name": "ceph",
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]:                 "ceph.crush_device_class": "",
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]:                 "ceph.encrypted": "0",
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]:                 "ceph.osd_id": "1",
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]:                 "ceph.type": "block",
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]:                 "ceph.vdo": "0"
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]:             },
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]:             "type": "block",
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]:             "vg_name": "ceph_vg0"
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]:         }
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]:     ]
Oct 02 12:11:14 compute-0 bold_varahamihira[290646]: }
Oct 02 12:11:14 compute-0 systemd[1]: libpod-aa19f34b280c86f43afc448009312188df0f4e839857da92ee9c262d255fe973.scope: Deactivated successfully.
Oct 02 12:11:14 compute-0 podman[290629]: 2025-10-02 12:11:14.129125812 +0000 UTC m=+0.857377817 container died aa19f34b280c86f43afc448009312188df0f4e839857da92ee9c262d255fe973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_varahamihira, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 12:11:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf2c6b4590329ba72cfccde5ba1dfc0d765bae279f0df34bcb461eb480e85255-merged.mount: Deactivated successfully.
Oct 02 12:11:14 compute-0 podman[290629]: 2025-10-02 12:11:14.229806692 +0000 UTC m=+0.958058687 container remove aa19f34b280c86f43afc448009312188df0f4e839857da92ee9c262d255fe973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_varahamihira, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:11:14 compute-0 systemd[1]: libpod-conmon-aa19f34b280c86f43afc448009312188df0f4e839857da92ee9c262d255fe973.scope: Deactivated successfully.
Oct 02 12:11:14 compute-0 podman[290662]: 2025-10-02 12:11:14.248316751 +0000 UTC m=+0.081618579 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 12:11:14 compute-0 podman[290656]: 2025-10-02 12:11:14.271069942 +0000 UTC m=+0.107399194 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:11:14 compute-0 sudo[290523]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:14 compute-0 sudo[290706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:14 compute-0 sudo[290706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:14 compute-0 sudo[290706]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:14 compute-0 sudo[290731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:14 compute-0 sudo[290731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:14 compute-0 sudo[290731]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:14 compute-0 sudo[290754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:14 compute-0 sudo[290754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:14 compute-0 sudo[290754]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:14 compute-0 sudo[290778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:11:14 compute-0 sudo[290778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:14 compute-0 sudo[290778]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:14 compute-0 sudo[290809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:14 compute-0 sudo[290809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:14 compute-0 sudo[290809]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:14 compute-0 sudo[290834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:11:14 compute-0 sudo[290834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:14 compute-0 ceph-mon[73607]: pgmap v1362: 305 pgs: 305 active+clean; 88 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.2 MiB/s wr, 119 op/s
Oct 02 12:11:14 compute-0 podman[290901]: 2025-10-02 12:11:14.899021009 +0000 UTC m=+0.047895912 container create 2fbec3a56f033e2f89303e74e37920a848ca28c8415e0a2ca3fd82aa7fd228bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ptolemy, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:11:14 compute-0 systemd[1]: Started libpod-conmon-2fbec3a56f033e2f89303e74e37920a848ca28c8415e0a2ca3fd82aa7fd228bb.scope.
Oct 02 12:11:14 compute-0 podman[290901]: 2025-10-02 12:11:14.881394292 +0000 UTC m=+0.030269215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:11:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:11:14 compute-0 podman[290901]: 2025-10-02 12:11:14.988591739 +0000 UTC m=+0.137466662 container init 2fbec3a56f033e2f89303e74e37920a848ca28c8415e0a2ca3fd82aa7fd228bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ptolemy, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 12:11:14 compute-0 podman[290901]: 2025-10-02 12:11:14.996503151 +0000 UTC m=+0.145378054 container start 2fbec3a56f033e2f89303e74e37920a848ca28c8415e0a2ca3fd82aa7fd228bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 12:11:15 compute-0 podman[290901]: 2025-10-02 12:11:15.000740724 +0000 UTC m=+0.149615637 container attach 2fbec3a56f033e2f89303e74e37920a848ca28c8415e0a2ca3fd82aa7fd228bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 12:11:15 compute-0 xenodochial_ptolemy[290917]: 167 167
Oct 02 12:11:15 compute-0 systemd[1]: libpod-2fbec3a56f033e2f89303e74e37920a848ca28c8415e0a2ca3fd82aa7fd228bb.scope: Deactivated successfully.
Oct 02 12:11:15 compute-0 conmon[290917]: conmon 2fbec3a56f033e2f8930 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2fbec3a56f033e2f89303e74e37920a848ca28c8415e0a2ca3fd82aa7fd228bb.scope/container/memory.events
Oct 02 12:11:15 compute-0 podman[290922]: 2025-10-02 12:11:15.042853604 +0000 UTC m=+0.023309016 container died 2fbec3a56f033e2f89303e74e37920a848ca28c8415e0a2ca3fd82aa7fd228bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:11:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-8235eabe69299170d3c104f7e04044e47f12372a2ce3a3a6e15c70c192f777bf-merged.mount: Deactivated successfully.
Oct 02 12:11:15 compute-0 podman[290922]: 2025-10-02 12:11:15.084494734 +0000 UTC m=+0.064950126 container remove 2fbec3a56f033e2f89303e74e37920a848ca28c8415e0a2ca3fd82aa7fd228bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ptolemy, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:11:15 compute-0 systemd[1]: libpod-conmon-2fbec3a56f033e2f89303e74e37920a848ca28c8415e0a2ca3fd82aa7fd228bb.scope: Deactivated successfully.
Oct 02 12:11:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 88 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.3 MiB/s wr, 137 op/s
Oct 02 12:11:15 compute-0 podman[290944]: 2025-10-02 12:11:15.257864215 +0000 UTC m=+0.054300587 container create b16ae49b26265458615aa0885ab132b34bf5de98a0a1618a8f2f57601766cc2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 12:11:15 compute-0 systemd[1]: Started libpod-conmon-b16ae49b26265458615aa0885ab132b34bf5de98a0a1618a8f2f57601766cc2d.scope.
Oct 02 12:11:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:11:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f8d999821c2e4398a31338283a886f2e4a631a041ff60b49c7245c0bb386f58/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:11:15 compute-0 podman[290944]: 2025-10-02 12:11:15.234432257 +0000 UTC m=+0.030868669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:11:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f8d999821c2e4398a31338283a886f2e4a631a041ff60b49c7245c0bb386f58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:11:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f8d999821c2e4398a31338283a886f2e4a631a041ff60b49c7245c0bb386f58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:11:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f8d999821c2e4398a31338283a886f2e4a631a041ff60b49c7245c0bb386f58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:11:15 compute-0 podman[290944]: 2025-10-02 12:11:15.345371525 +0000 UTC m=+0.141807977 container init b16ae49b26265458615aa0885ab132b34bf5de98a0a1618a8f2f57601766cc2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 12:11:15 compute-0 podman[290944]: 2025-10-02 12:11:15.361519087 +0000 UTC m=+0.157955459 container start b16ae49b26265458615aa0885ab132b34bf5de98a0a1618a8f2f57601766cc2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_napier, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:11:15 compute-0 podman[290944]: 2025-10-02 12:11:15.372890352 +0000 UTC m=+0.169326724 container attach b16ae49b26265458615aa0885ab132b34bf5de98a0a1618a8f2f57601766cc2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_napier, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 12:11:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:11:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:15.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:11:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:15.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:16 compute-0 nova_compute[257802]: 2025-10-02 12:11:16.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:16 compute-0 nova_compute[257802]: 2025-10-02 12:11:16.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:16 compute-0 youthful_napier[290960]: {
Oct 02 12:11:16 compute-0 youthful_napier[290960]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:11:16 compute-0 youthful_napier[290960]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:11:16 compute-0 youthful_napier[290960]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:11:16 compute-0 youthful_napier[290960]:         "osd_id": 1,
Oct 02 12:11:16 compute-0 youthful_napier[290960]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:11:16 compute-0 youthful_napier[290960]:         "type": "bluestore"
Oct 02 12:11:16 compute-0 youthful_napier[290960]:     }
Oct 02 12:11:16 compute-0 youthful_napier[290960]: }
Oct 02 12:11:16 compute-0 podman[290944]: 2025-10-02 12:11:16.21218038 +0000 UTC m=+1.008616772 container died b16ae49b26265458615aa0885ab132b34bf5de98a0a1618a8f2f57601766cc2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_napier, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 12:11:16 compute-0 systemd[1]: libpod-b16ae49b26265458615aa0885ab132b34bf5de98a0a1618a8f2f57601766cc2d.scope: Deactivated successfully.
Oct 02 12:11:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f8d999821c2e4398a31338283a886f2e4a631a041ff60b49c7245c0bb386f58-merged.mount: Deactivated successfully.
Oct 02 12:11:16 compute-0 podman[290944]: 2025-10-02 12:11:16.272208725 +0000 UTC m=+1.068645097 container remove b16ae49b26265458615aa0885ab132b34bf5de98a0a1618a8f2f57601766cc2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 12:11:16 compute-0 systemd[1]: libpod-conmon-b16ae49b26265458615aa0885ab132b34bf5de98a0a1618a8f2f57601766cc2d.scope: Deactivated successfully.
Oct 02 12:11:16 compute-0 sudo[290834]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:11:16 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:11:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:11:16 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:11:16 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev d4e02761-71f4-4671-88c2-035973cbba5d does not exist
Oct 02 12:11:16 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev ebf1177b-9100-4e56-8c3e-75f1367012fa does not exist
Oct 02 12:11:16 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev a37eea8f-b78f-4330-8a3e-2b4beba0670e does not exist
Oct 02 12:11:16 compute-0 sudo[290995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:16 compute-0 sudo[290995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:16 compute-0 sudo[290995]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:16 compute-0 sudo[291021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:11:16 compute-0 sudo[291021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:16 compute-0 sudo[291021]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:16 compute-0 ceph-mon[73607]: pgmap v1363: 305 pgs: 305 active+clean; 88 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.3 MiB/s wr, 137 op/s
Oct 02 12:11:16 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:11:16 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:11:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 88 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 970 KiB/s wr, 126 op/s
Oct 02 12:11:17 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2489225117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:17.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:17.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:18 compute-0 ceph-mon[73607]: pgmap v1364: 305 pgs: 305 active+clean; 88 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 970 KiB/s wr, 126 op/s
Oct 02 12:11:18 compute-0 podman[291047]: 2025-10-02 12:11:18.94402885 +0000 UTC m=+0.078262167 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 02 12:11:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 93 MiB data, 527 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 211 KiB/s wr, 113 op/s
Oct 02 12:11:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:19.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:19.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:20 compute-0 nova_compute[257802]: 2025-10-02 12:11:20.592 2 DEBUG oslo_concurrency.lockutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Acquiring lock "0b03ee08-f5c8-4897-8215-b9998393372f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:20 compute-0 nova_compute[257802]: 2025-10-02 12:11:20.593 2 DEBUG oslo_concurrency.lockutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "0b03ee08-f5c8-4897-8215-b9998393372f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:20 compute-0 nova_compute[257802]: 2025-10-02 12:11:20.611 2 DEBUG nova.compute.manager [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:11:20 compute-0 nova_compute[257802]: 2025-10-02 12:11:20.680 2 DEBUG oslo_concurrency.lockutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:20 compute-0 nova_compute[257802]: 2025-10-02 12:11:20.681 2 DEBUG oslo_concurrency.lockutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:20 compute-0 nova_compute[257802]: 2025-10-02 12:11:20.687 2 DEBUG nova.virt.hardware [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:11:20 compute-0 nova_compute[257802]: 2025-10-02 12:11:20.688 2 INFO nova.compute.claims [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:11:20 compute-0 ceph-mon[73607]: pgmap v1365: 305 pgs: 305 active+clean; 93 MiB data, 527 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 211 KiB/s wr, 113 op/s
Oct 02 12:11:20 compute-0 nova_compute[257802]: 2025-10-02 12:11:20.815 2 DEBUG oslo_concurrency.processutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:21 compute-0 nova_compute[257802]: 2025-10-02 12:11:21.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 118 MiB data, 537 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1008 KiB/s wr, 109 op/s
Oct 02 12:11:21 compute-0 nova_compute[257802]: 2025-10-02 12:11:21.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:11:21 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2511069674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:21 compute-0 nova_compute[257802]: 2025-10-02 12:11:21.265 2 DEBUG oslo_concurrency.processutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:21 compute-0 nova_compute[257802]: 2025-10-02 12:11:21.271 2 DEBUG nova.compute.provider_tree [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:11:21 compute-0 nova_compute[257802]: 2025-10-02 12:11:21.298 2 DEBUG nova.scheduler.client.report [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:11:21 compute-0 nova_compute[257802]: 2025-10-02 12:11:21.330 2 DEBUG oslo_concurrency.lockutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:21 compute-0 nova_compute[257802]: 2025-10-02 12:11:21.331 2 DEBUG nova.compute.manager [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:11:21 compute-0 nova_compute[257802]: 2025-10-02 12:11:21.392 2 DEBUG nova.compute.manager [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:11:21 compute-0 nova_compute[257802]: 2025-10-02 12:11:21.393 2 DEBUG nova.network.neutron [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:11:21 compute-0 nova_compute[257802]: 2025-10-02 12:11:21.415 2 INFO nova.virt.libvirt.driver [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:11:21 compute-0 nova_compute[257802]: 2025-10-02 12:11:21.447 2 DEBUG nova.compute.manager [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:11:21 compute-0 nova_compute[257802]: 2025-10-02 12:11:21.566 2 DEBUG nova.compute.manager [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:11:21 compute-0 nova_compute[257802]: 2025-10-02 12:11:21.568 2 DEBUG nova.virt.libvirt.driver [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:11:21 compute-0 nova_compute[257802]: 2025-10-02 12:11:21.568 2 INFO nova.virt.libvirt.driver [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Creating image(s)
Oct 02 12:11:21 compute-0 nova_compute[257802]: 2025-10-02 12:11:21.592 2 DEBUG nova.storage.rbd_utils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] rbd image 0b03ee08-f5c8-4897-8215-b9998393372f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:11:21 compute-0 nova_compute[257802]: 2025-10-02 12:11:21.623 2 DEBUG nova.storage.rbd_utils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] rbd image 0b03ee08-f5c8-4897-8215-b9998393372f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:11:21 compute-0 nova_compute[257802]: 2025-10-02 12:11:21.654 2 DEBUG nova.storage.rbd_utils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] rbd image 0b03ee08-f5c8-4897-8215-b9998393372f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:11:21 compute-0 nova_compute[257802]: 2025-10-02 12:11:21.659 2 DEBUG oslo_concurrency.processutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:21 compute-0 nova_compute[257802]: 2025-10-02 12:11:21.731 2 DEBUG oslo_concurrency.processutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:21 compute-0 nova_compute[257802]: 2025-10-02 12:11:21.732 2 DEBUG oslo_concurrency.lockutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:21 compute-0 nova_compute[257802]: 2025-10-02 12:11:21.733 2 DEBUG oslo_concurrency.lockutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:21 compute-0 nova_compute[257802]: 2025-10-02 12:11:21.733 2 DEBUG oslo_concurrency.lockutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:21 compute-0 nova_compute[257802]: 2025-10-02 12:11:21.757 2 DEBUG nova.storage.rbd_utils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] rbd image 0b03ee08-f5c8-4897-8215-b9998393372f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:11:21 compute-0 nova_compute[257802]: 2025-10-02 12:11:21.761 2 DEBUG oslo_concurrency.processutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 0b03ee08-f5c8-4897-8215-b9998393372f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:11:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:21.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:11:21 compute-0 nova_compute[257802]: 2025-10-02 12:11:21.796 2 DEBUG nova.policy [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e2a366ecf5934af989ad59e70e8c0b40', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e3110a9141bf416d96e84c98f5ec90b6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:11:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2511069674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:21.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:22 compute-0 nova_compute[257802]: 2025-10-02 12:11:22.009 2 DEBUG oslo_concurrency.processutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 0b03ee08-f5c8-4897-8215-b9998393372f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.248s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:22 compute-0 nova_compute[257802]: 2025-10-02 12:11:22.062 2 DEBUG nova.storage.rbd_utils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] resizing rbd image 0b03ee08-f5c8-4897-8215-b9998393372f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:11:22 compute-0 nova_compute[257802]: 2025-10-02 12:11:22.160 2 DEBUG nova.objects.instance [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lazy-loading 'migration_context' on Instance uuid 0b03ee08-f5c8-4897-8215-b9998393372f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:11:22 compute-0 nova_compute[257802]: 2025-10-02 12:11:22.174 2 DEBUG nova.virt.libvirt.driver [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:11:22 compute-0 nova_compute[257802]: 2025-10-02 12:11:22.175 2 DEBUG nova.virt.libvirt.driver [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Ensure instance console log exists: /var/lib/nova/instances/0b03ee08-f5c8-4897-8215-b9998393372f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:11:22 compute-0 nova_compute[257802]: 2025-10-02 12:11:22.175 2 DEBUG oslo_concurrency.lockutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:22 compute-0 nova_compute[257802]: 2025-10-02 12:11:22.176 2 DEBUG oslo_concurrency.lockutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:22 compute-0 nova_compute[257802]: 2025-10-02 12:11:22.176 2 DEBUG oslo_concurrency.lockutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:22 compute-0 nova_compute[257802]: 2025-10-02 12:11:22.695 2 DEBUG nova.network.neutron [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Successfully created port: 608719c5-5877-4237-83ea-eb3cba59c3a3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:11:22 compute-0 ceph-mon[73607]: pgmap v1366: 305 pgs: 305 active+clean; 118 MiB data, 537 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1008 KiB/s wr, 109 op/s
Oct 02 12:11:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2799465910' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:11:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 152 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 124 op/s
Oct 02 12:11:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:23.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1828815906' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:11:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:23.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:24 compute-0 nova_compute[257802]: 2025-10-02 12:11:24.032 2 DEBUG nova.network.neutron [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Successfully updated port: 608719c5-5877-4237-83ea-eb3cba59c3a3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:11:24 compute-0 nova_compute[257802]: 2025-10-02 12:11:24.050 2 DEBUG oslo_concurrency.lockutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Acquiring lock "refresh_cache-0b03ee08-f5c8-4897-8215-b9998393372f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:11:24 compute-0 nova_compute[257802]: 2025-10-02 12:11:24.051 2 DEBUG oslo_concurrency.lockutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Acquired lock "refresh_cache-0b03ee08-f5c8-4897-8215-b9998393372f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:11:24 compute-0 nova_compute[257802]: 2025-10-02 12:11:24.051 2 DEBUG nova.network.neutron [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:11:24 compute-0 nova_compute[257802]: 2025-10-02 12:11:24.362 2 DEBUG nova.compute.manager [req-4e16ce8c-ec56-4e31-9363-db4e05fc7937 req-e5a802df-99b9-4942-9787-2e0df0455f43 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Received event network-changed-608719c5-5877-4237-83ea-eb3cba59c3a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:11:24 compute-0 nova_compute[257802]: 2025-10-02 12:11:24.363 2 DEBUG nova.compute.manager [req-4e16ce8c-ec56-4e31-9363-db4e05fc7937 req-e5a802df-99b9-4942-9787-2e0df0455f43 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Refreshing instance network info cache due to event network-changed-608719c5-5877-4237-83ea-eb3cba59c3a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:11:24 compute-0 nova_compute[257802]: 2025-10-02 12:11:24.363 2 DEBUG oslo_concurrency.lockutils [req-4e16ce8c-ec56-4e31-9363-db4e05fc7937 req-e5a802df-99b9-4942-9787-2e0df0455f43 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-0b03ee08-f5c8-4897-8215-b9998393372f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:11:24 compute-0 nova_compute[257802]: 2025-10-02 12:11:24.732 2 DEBUG nova.network.neutron [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:11:24 compute-0 ceph-mon[73607]: pgmap v1367: 305 pgs: 305 active+clean; 152 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 124 op/s
Oct 02 12:11:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 177 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.8 MiB/s wr, 93 op/s
Oct 02 12:11:25 compute-0 nova_compute[257802]: 2025-10-02 12:11:25.729 2 DEBUG nova.network.neutron [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Updating instance_info_cache with network_info: [{"id": "608719c5-5877-4237-83ea-eb3cba59c3a3", "address": "fa:16:3e:6e:cf:c2", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap608719c5-58", "ovs_interfaceid": "608719c5-5877-4237-83ea-eb3cba59c3a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:11:25 compute-0 nova_compute[257802]: 2025-10-02 12:11:25.749 2 DEBUG oslo_concurrency.lockutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Releasing lock "refresh_cache-0b03ee08-f5c8-4897-8215-b9998393372f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:11:25 compute-0 nova_compute[257802]: 2025-10-02 12:11:25.749 2 DEBUG nova.compute.manager [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Instance network_info: |[{"id": "608719c5-5877-4237-83ea-eb3cba59c3a3", "address": "fa:16:3e:6e:cf:c2", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap608719c5-58", "ovs_interfaceid": "608719c5-5877-4237-83ea-eb3cba59c3a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:11:25 compute-0 nova_compute[257802]: 2025-10-02 12:11:25.750 2 DEBUG oslo_concurrency.lockutils [req-4e16ce8c-ec56-4e31-9363-db4e05fc7937 req-e5a802df-99b9-4942-9787-2e0df0455f43 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-0b03ee08-f5c8-4897-8215-b9998393372f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:11:25 compute-0 nova_compute[257802]: 2025-10-02 12:11:25.750 2 DEBUG nova.network.neutron [req-4e16ce8c-ec56-4e31-9363-db4e05fc7937 req-e5a802df-99b9-4942-9787-2e0df0455f43 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Refreshing network info cache for port 608719c5-5877-4237-83ea-eb3cba59c3a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:11:25 compute-0 nova_compute[257802]: 2025-10-02 12:11:25.752 2 DEBUG nova.virt.libvirt.driver [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Start _get_guest_xml network_info=[{"id": "608719c5-5877-4237-83ea-eb3cba59c3a3", "address": "fa:16:3e:6e:cf:c2", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap608719c5-58", "ovs_interfaceid": "608719c5-5877-4237-83ea-eb3cba59c3a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:11:25 compute-0 nova_compute[257802]: 2025-10-02 12:11:25.755 2 WARNING nova.virt.libvirt.driver [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:11:25 compute-0 nova_compute[257802]: 2025-10-02 12:11:25.760 2 DEBUG nova.virt.libvirt.host [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:11:25 compute-0 nova_compute[257802]: 2025-10-02 12:11:25.761 2 DEBUG nova.virt.libvirt.host [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:11:25 compute-0 nova_compute[257802]: 2025-10-02 12:11:25.765 2 DEBUG nova.virt.libvirt.host [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:11:25 compute-0 nova_compute[257802]: 2025-10-02 12:11:25.766 2 DEBUG nova.virt.libvirt.host [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:11:25 compute-0 nova_compute[257802]: 2025-10-02 12:11:25.767 2 DEBUG nova.virt.libvirt.driver [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:11:25 compute-0 nova_compute[257802]: 2025-10-02 12:11:25.767 2 DEBUG nova.virt.hardware [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:11:25 compute-0 nova_compute[257802]: 2025-10-02 12:11:25.768 2 DEBUG nova.virt.hardware [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:11:25 compute-0 nova_compute[257802]: 2025-10-02 12:11:25.768 2 DEBUG nova.virt.hardware [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:11:25 compute-0 nova_compute[257802]: 2025-10-02 12:11:25.768 2 DEBUG nova.virt.hardware [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:11:25 compute-0 nova_compute[257802]: 2025-10-02 12:11:25.768 2 DEBUG nova.virt.hardware [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:11:25 compute-0 nova_compute[257802]: 2025-10-02 12:11:25.768 2 DEBUG nova.virt.hardware [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:11:25 compute-0 nova_compute[257802]: 2025-10-02 12:11:25.768 2 DEBUG nova.virt.hardware [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:11:25 compute-0 nova_compute[257802]: 2025-10-02 12:11:25.769 2 DEBUG nova.virt.hardware [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:11:25 compute-0 nova_compute[257802]: 2025-10-02 12:11:25.769 2 DEBUG nova.virt.hardware [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:11:25 compute-0 nova_compute[257802]: 2025-10-02 12:11:25.769 2 DEBUG nova.virt.hardware [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:11:25 compute-0 nova_compute[257802]: 2025-10-02 12:11:25.769 2 DEBUG nova.virt.hardware [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:11:25 compute-0 nova_compute[257802]: 2025-10-02 12:11:25.772 2 DEBUG oslo_concurrency.processutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:11:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:25.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:11:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:25.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:11:26 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2225722311' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.198 2 DEBUG oslo_concurrency.processutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.228 2 DEBUG nova.storage.rbd_utils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] rbd image 0b03ee08-f5c8-4897-8215-b9998393372f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.233 2 DEBUG oslo_concurrency.processutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:26 compute-0 ceph-mon[73607]: pgmap v1368: 305 pgs: 305 active+clean; 177 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.8 MiB/s wr, 93 op/s
Oct 02 12:11:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:11:26 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1633975460' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.756 2 DEBUG oslo_concurrency.processutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.758 2 DEBUG nova.virt.libvirt.vif [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:11:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1894910822',display_name='tempest-SecurityGroupsTestJSON-server-1894910822',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1894910822',id=51,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3110a9141bf416d96e84c98f5ec90b6',ramdisk_id='',reservation_id='r-ew1di0g7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-82162709',owner_user_name='tempest-SecurityGroupsTestJSON-82
162709-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:11:21Z,user_data=None,user_id='e2a366ecf5934af989ad59e70e8c0b40',uuid=0b03ee08-f5c8-4897-8215-b9998393372f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "608719c5-5877-4237-83ea-eb3cba59c3a3", "address": "fa:16:3e:6e:cf:c2", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap608719c5-58", "ovs_interfaceid": "608719c5-5877-4237-83ea-eb3cba59c3a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.759 2 DEBUG nova.network.os_vif_util [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Converting VIF {"id": "608719c5-5877-4237-83ea-eb3cba59c3a3", "address": "fa:16:3e:6e:cf:c2", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap608719c5-58", "ovs_interfaceid": "608719c5-5877-4237-83ea-eb3cba59c3a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.760 2 DEBUG nova.network.os_vif_util [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:cf:c2,bridge_name='br-int',has_traffic_filtering=True,id=608719c5-5877-4237-83ea-eb3cba59c3a3,network=Network(b99f809e-5bc0-4d91-be9e-5e46fd286a27),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap608719c5-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.761 2 DEBUG nova.objects.instance [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0b03ee08-f5c8-4897-8215-b9998393372f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.783 2 DEBUG nova.virt.libvirt.driver [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:11:26 compute-0 nova_compute[257802]:   <uuid>0b03ee08-f5c8-4897-8215-b9998393372f</uuid>
Oct 02 12:11:26 compute-0 nova_compute[257802]:   <name>instance-00000033</name>
Oct 02 12:11:26 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:11:26 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:11:26 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:11:26 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:       <nova:name>tempest-SecurityGroupsTestJSON-server-1894910822</nova:name>
Oct 02 12:11:26 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:11:25</nova:creationTime>
Oct 02 12:11:26 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:11:26 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:11:26 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:11:26 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:11:26 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:11:26 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:11:26 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:11:26 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:11:26 compute-0 nova_compute[257802]:         <nova:user uuid="e2a366ecf5934af989ad59e70e8c0b40">tempest-SecurityGroupsTestJSON-82162709-project-member</nova:user>
Oct 02 12:11:26 compute-0 nova_compute[257802]:         <nova:project uuid="e3110a9141bf416d96e84c98f5ec90b6">tempest-SecurityGroupsTestJSON-82162709</nova:project>
Oct 02 12:11:26 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:11:26 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:11:26 compute-0 nova_compute[257802]:         <nova:port uuid="608719c5-5877-4237-83ea-eb3cba59c3a3">
Oct 02 12:11:26 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:11:26 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:11:26 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:11:26 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <system>
Oct 02 12:11:26 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:11:26 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:11:26 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:11:26 compute-0 nova_compute[257802]:       <entry name="serial">0b03ee08-f5c8-4897-8215-b9998393372f</entry>
Oct 02 12:11:26 compute-0 nova_compute[257802]:       <entry name="uuid">0b03ee08-f5c8-4897-8215-b9998393372f</entry>
Oct 02 12:11:26 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     </system>
Oct 02 12:11:26 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:11:26 compute-0 nova_compute[257802]:   <os>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:   </os>
Oct 02 12:11:26 compute-0 nova_compute[257802]:   <features>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:   </features>
Oct 02 12:11:26 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:11:26 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:11:26 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:11:26 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/0b03ee08-f5c8-4897-8215-b9998393372f_disk">
Oct 02 12:11:26 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:       </source>
Oct 02 12:11:26 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:11:26 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:11:26 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:11:26 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/0b03ee08-f5c8-4897-8215-b9998393372f_disk.config">
Oct 02 12:11:26 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:       </source>
Oct 02 12:11:26 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:11:26 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:11:26 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:11:26 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:6e:cf:c2"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:       <target dev="tap608719c5-58"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:11:26 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/0b03ee08-f5c8-4897-8215-b9998393372f/console.log" append="off"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <video>
Oct 02 12:11:26 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     </video>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:11:26 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:11:26 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:11:26 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:11:26 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:11:26 compute-0 nova_compute[257802]: </domain>
Oct 02 12:11:26 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.785 2 DEBUG nova.compute.manager [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Preparing to wait for external event network-vif-plugged-608719c5-5877-4237-83ea-eb3cba59c3a3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.785 2 DEBUG oslo_concurrency.lockutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Acquiring lock "0b03ee08-f5c8-4897-8215-b9998393372f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.786 2 DEBUG oslo_concurrency.lockutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "0b03ee08-f5c8-4897-8215-b9998393372f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.786 2 DEBUG oslo_concurrency.lockutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "0b03ee08-f5c8-4897-8215-b9998393372f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.787 2 DEBUG nova.virt.libvirt.vif [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:11:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1894910822',display_name='tempest-SecurityGroupsTestJSON-server-1894910822',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1894910822',id=51,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3110a9141bf416d96e84c98f5ec90b6',ramdisk_id='',reservation_id='r-ew1di0g7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-82162709',owner_user_name='tempest-SecurityGroupsTestJSON-82162709-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:11:21Z,user_data=None,user_id='e2a366ecf5934af989ad59e70e8c0b40',uuid=0b03ee08-f5c8-4897-8215-b9998393372f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "608719c5-5877-4237-83ea-eb3cba59c3a3", "address": "fa:16:3e:6e:cf:c2", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap608719c5-58", "ovs_interfaceid": "608719c5-5877-4237-83ea-eb3cba59c3a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.787 2 DEBUG nova.network.os_vif_util [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Converting VIF {"id": "608719c5-5877-4237-83ea-eb3cba59c3a3", "address": "fa:16:3e:6e:cf:c2", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap608719c5-58", "ovs_interfaceid": "608719c5-5877-4237-83ea-eb3cba59c3a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.787 2 DEBUG nova.network.os_vif_util [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:cf:c2,bridge_name='br-int',has_traffic_filtering=True,id=608719c5-5877-4237-83ea-eb3cba59c3a3,network=Network(b99f809e-5bc0-4d91-be9e-5e46fd286a27),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap608719c5-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.788 2 DEBUG os_vif [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:cf:c2,bridge_name='br-int',has_traffic_filtering=True,id=608719c5-5877-4237-83ea-eb3cba59c3a3,network=Network(b99f809e-5bc0-4d91-be9e-5e46fd286a27),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap608719c5-58') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.789 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.789 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.795 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap608719c5-58, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.796 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap608719c5-58, col_values=(('external_ids', {'iface-id': '608719c5-5877-4237-83ea-eb3cba59c3a3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6e:cf:c2', 'vm-uuid': '0b03ee08-f5c8-4897-8215-b9998393372f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:11:26 compute-0 NetworkManager[44987]: <info>  [1759407086.7999] manager: (tap608719c5-58): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.808 2 INFO os_vif [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:cf:c2,bridge_name='br-int',has_traffic_filtering=True,id=608719c5-5877-4237-83ea-eb3cba59c3a3,network=Network(b99f809e-5bc0-4d91-be9e-5e46fd286a27),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap608719c5-58')
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.906 2 DEBUG nova.virt.libvirt.driver [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.908 2 DEBUG nova.virt.libvirt.driver [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.908 2 DEBUG nova.virt.libvirt.driver [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] No VIF found with MAC fa:16:3e:6e:cf:c2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.909 2 INFO nova.virt.libvirt.driver [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Using config drive
Oct 02 12:11:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:26.929 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:26.929 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:26.930 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:26 compute-0 nova_compute[257802]: 2025-10-02 12:11:26.942 2 DEBUG nova.storage.rbd_utils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] rbd image 0b03ee08-f5c8-4897-8215-b9998393372f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:11:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 204 MiB data, 592 MiB used, 20 GiB / 21 GiB avail; 758 KiB/s rd, 5.6 MiB/s wr, 127 op/s
Oct 02 12:11:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2225722311' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:11:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1633975460' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:11:27 compute-0 nova_compute[257802]: 2025-10-02 12:11:27.751 2 INFO nova.virt.libvirt.driver [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Creating config drive at /var/lib/nova/instances/0b03ee08-f5c8-4897-8215-b9998393372f/disk.config
Oct 02 12:11:27 compute-0 nova_compute[257802]: 2025-10-02 12:11:27.758 2 DEBUG oslo_concurrency.processutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0b03ee08-f5c8-4897-8215-b9998393372f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp69k_dwcc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:27 compute-0 nova_compute[257802]: 2025-10-02 12:11:27.783 2 DEBUG nova.network.neutron [req-4e16ce8c-ec56-4e31-9363-db4e05fc7937 req-e5a802df-99b9-4942-9787-2e0df0455f43 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Updated VIF entry in instance network info cache for port 608719c5-5877-4237-83ea-eb3cba59c3a3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:11:27 compute-0 nova_compute[257802]: 2025-10-02 12:11:27.784 2 DEBUG nova.network.neutron [req-4e16ce8c-ec56-4e31-9363-db4e05fc7937 req-e5a802df-99b9-4942-9787-2e0df0455f43 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Updating instance_info_cache with network_info: [{"id": "608719c5-5877-4237-83ea-eb3cba59c3a3", "address": "fa:16:3e:6e:cf:c2", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap608719c5-58", "ovs_interfaceid": "608719c5-5877-4237-83ea-eb3cba59c3a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:11:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:27.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:27 compute-0 nova_compute[257802]: 2025-10-02 12:11:27.802 2 DEBUG oslo_concurrency.lockutils [req-4e16ce8c-ec56-4e31-9363-db4e05fc7937 req-e5a802df-99b9-4942-9787-2e0df0455f43 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-0b03ee08-f5c8-4897-8215-b9998393372f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:11:27 compute-0 nova_compute[257802]: 2025-10-02 12:11:27.890 2 DEBUG oslo_concurrency.processutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0b03ee08-f5c8-4897-8215-b9998393372f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp69k_dwcc" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:27.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:27 compute-0 nova_compute[257802]: 2025-10-02 12:11:27.962 2 DEBUG nova.storage.rbd_utils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] rbd image 0b03ee08-f5c8-4897-8215-b9998393372f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:11:27 compute-0 nova_compute[257802]: 2025-10-02 12:11:27.967 2 DEBUG oslo_concurrency.processutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0b03ee08-f5c8-4897-8215-b9998393372f/disk.config 0b03ee08-f5c8-4897-8215-b9998393372f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 213 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 696 KiB/s rd, 5.7 MiB/s wr, 136 op/s
Oct 02 12:11:29 compute-0 ceph-mon[73607]: pgmap v1369: 305 pgs: 305 active+clean; 204 MiB data, 592 MiB used, 20 GiB / 21 GiB avail; 758 KiB/s rd, 5.6 MiB/s wr, 127 op/s
Oct 02 12:11:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:29.425 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:11:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:29.426 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:11:29 compute-0 nova_compute[257802]: 2025-10-02 12:11:29.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:29 compute-0 nova_compute[257802]: 2025-10-02 12:11:29.620 2 DEBUG oslo_concurrency.processutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0b03ee08-f5c8-4897-8215-b9998393372f/disk.config 0b03ee08-f5c8-4897-8215-b9998393372f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.653s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:29 compute-0 nova_compute[257802]: 2025-10-02 12:11:29.621 2 INFO nova.virt.libvirt.driver [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Deleting local config drive /var/lib/nova/instances/0b03ee08-f5c8-4897-8215-b9998393372f/disk.config because it was imported into RBD.
Oct 02 12:11:29 compute-0 kernel: tap608719c5-58: entered promiscuous mode
Oct 02 12:11:29 compute-0 NetworkManager[44987]: <info>  [1759407089.6704] manager: (tap608719c5-58): new Tun device (/org/freedesktop/NetworkManager/Devices/87)
Oct 02 12:11:29 compute-0 nova_compute[257802]: 2025-10-02 12:11:29.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:29 compute-0 nova_compute[257802]: 2025-10-02 12:11:29.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:29 compute-0 ovn_controller[148183]: 2025-10-02T12:11:29Z|00185|binding|INFO|Claiming lport 608719c5-5877-4237-83ea-eb3cba59c3a3 for this chassis.
Oct 02 12:11:29 compute-0 ovn_controller[148183]: 2025-10-02T12:11:29Z|00186|binding|INFO|608719c5-5877-4237-83ea-eb3cba59c3a3: Claiming fa:16:3e:6e:cf:c2 10.100.0.3
Oct 02 12:11:29 compute-0 nova_compute[257802]: 2025-10-02 12:11:29.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:29.703 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:cf:c2 10.100.0.3'], port_security=['fa:16:3e:6e:cf:c2 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0b03ee08-f5c8-4897-8215-b9998393372f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b99f809e-5bc0-4d91-be9e-5e46fd286a27', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3110a9141bf416d96e84c98f5ec90b6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c52c7c6e-0669-4ab9-8dc6-1a7e6f706889', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d030627f-5151-46c4-9ad6-e8160d673b8b, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=608719c5-5877-4237-83ea-eb3cba59c3a3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:11:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:29.704 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 608719c5-5877-4237-83ea-eb3cba59c3a3 in datapath b99f809e-5bc0-4d91-be9e-5e46fd286a27 bound to our chassis
Oct 02 12:11:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:29.705 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b99f809e-5bc0-4d91-be9e-5e46fd286a27
Oct 02 12:11:29 compute-0 systemd-machined[211836]: New machine qemu-24-instance-00000033.
Oct 02 12:11:29 compute-0 systemd-udevd[291400]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:11:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:29.717 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e6578ec8-9db1-4064-b5ce-4cf01c746ccf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:29.719 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb99f809e-51 in ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:11:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:29.721 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb99f809e-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:11:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:29.722 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6ac0c497-5de2-46fe-9050-0896ee0c10d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:29.723 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[73a33f05-e459-4ce3-9760-fb801445bc6e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:29 compute-0 NetworkManager[44987]: <info>  [1759407089.7297] device (tap608719c5-58): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:11:29 compute-0 NetworkManager[44987]: <info>  [1759407089.7312] device (tap608719c5-58): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:11:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:29.736 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[516a3da3-fd1a-4181-bd51-230cf6e6afb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:29 compute-0 systemd[1]: Started Virtual Machine qemu-24-instance-00000033.
Oct 02 12:11:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:29.761 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[dd3c57b1-7378-4529-bcdc-6a519f2a841f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:29 compute-0 nova_compute[257802]: 2025-10-02 12:11:29.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:29 compute-0 ovn_controller[148183]: 2025-10-02T12:11:29Z|00187|binding|INFO|Setting lport 608719c5-5877-4237-83ea-eb3cba59c3a3 ovn-installed in OVS
Oct 02 12:11:29 compute-0 ovn_controller[148183]: 2025-10-02T12:11:29Z|00188|binding|INFO|Setting lport 608719c5-5877-4237-83ea-eb3cba59c3a3 up in Southbound
Oct 02 12:11:29 compute-0 nova_compute[257802]: 2025-10-02 12:11:29.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:29.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:29.799 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[4962a3ec-d4e3-49ed-847d-f0a8f36b670c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:29 compute-0 NetworkManager[44987]: <info>  [1759407089.8075] manager: (tapb99f809e-50): new Veth device (/org/freedesktop/NetworkManager/Devices/88)
Oct 02 12:11:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:29.809 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d6e5dba8-2fad-4d41-8ac5-ee62f1c92598]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:29.839 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[a0591936-048f-49ac-ac9b-5ef9df7a7b31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:29.841 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[ccad936f-8551-41eb-913b-14635cb63329]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:29 compute-0 NetworkManager[44987]: <info>  [1759407089.8606] device (tapb99f809e-50): carrier: link connected
Oct 02 12:11:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:29.865 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[2e03b39a-b2f0-4b3a-a1e5-a2d90218572a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:29.880 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[11427c0c-9b1e-4c72-b348-7d6d098cd55a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb99f809e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:e4:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 515748, 'reachable_time': 38139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291433, 'error': None, 'target': 'ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:29.898 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[16f786cf-df03-41fd-b9a6-ba5ec67049cb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefc:e494'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 515748, 'tstamp': 515748}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291434, 'error': None, 'target': 'ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:11:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:29.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:11:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:29.912 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[02f86cb6-dcff-4dc5-be60-116b36218f64]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb99f809e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:e4:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 515748, 'reachable_time': 38139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291435, 'error': None, 'target': 'ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:29.947 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3662d7ef-01cc-486f-91d8-fa04e843928e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:30.003 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8246643a-dea1-441e-89d3-96163f40d8dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:30.005 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb99f809e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:30.006 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:30.007 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb99f809e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:11:30 compute-0 nova_compute[257802]: 2025-10-02 12:11:30.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:30 compute-0 kernel: tapb99f809e-50: entered promiscuous mode
Oct 02 12:11:30 compute-0 NetworkManager[44987]: <info>  [1759407090.0104] manager: (tapb99f809e-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/89)
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:30.013 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb99f809e-50, col_values=(('external_ids', {'iface-id': '27833f83-24e9-4232-9a1b-0a4ea68e9e00'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:11:30 compute-0 ovn_controller[148183]: 2025-10-02T12:11:30Z|00189|binding|INFO|Releasing lport 27833f83-24e9-4232-9a1b-0a4ea68e9e00 from this chassis (sb_readonly=0)
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:30.036 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b99f809e-5bc0-4d91-be9e-5e46fd286a27.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b99f809e-5bc0-4d91-be9e-5e46fd286a27.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:30.037 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[88802d73-54e8-4404-ba67-6a1b8408eecc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:30.038 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-b99f809e-5bc0-4d91-be9e-5e46fd286a27
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/b99f809e-5bc0-4d91-be9e-5e46fd286a27.pid.haproxy
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID b99f809e-5bc0-4d91-be9e-5e46fd286a27
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:11:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:30.039 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27', 'env', 'PROCESS_TAG=haproxy-b99f809e-5bc0-4d91-be9e-5e46fd286a27', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b99f809e-5bc0-4d91-be9e-5e46fd286a27.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:11:30 compute-0 nova_compute[257802]: 2025-10-02 12:11:30.106 2 DEBUG nova.compute.manager [req-d5988495-a281-4ae2-935f-7c1eef3080ef req-fb6434ed-d42d-47b6-98f5-e474f1f1e277 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Received event network-vif-plugged-608719c5-5877-4237-83ea-eb3cba59c3a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:11:30 compute-0 nova_compute[257802]: 2025-10-02 12:11:30.109 2 DEBUG oslo_concurrency.lockutils [req-d5988495-a281-4ae2-935f-7c1eef3080ef req-fb6434ed-d42d-47b6-98f5-e474f1f1e277 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0b03ee08-f5c8-4897-8215-b9998393372f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:30 compute-0 nova_compute[257802]: 2025-10-02 12:11:30.110 2 DEBUG oslo_concurrency.lockutils [req-d5988495-a281-4ae2-935f-7c1eef3080ef req-fb6434ed-d42d-47b6-98f5-e474f1f1e277 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0b03ee08-f5c8-4897-8215-b9998393372f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:30 compute-0 nova_compute[257802]: 2025-10-02 12:11:30.111 2 DEBUG oslo_concurrency.lockutils [req-d5988495-a281-4ae2-935f-7c1eef3080ef req-fb6434ed-d42d-47b6-98f5-e474f1f1e277 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0b03ee08-f5c8-4897-8215-b9998393372f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:30 compute-0 nova_compute[257802]: 2025-10-02 12:11:30.112 2 DEBUG nova.compute.manager [req-d5988495-a281-4ae2-935f-7c1eef3080ef req-fb6434ed-d42d-47b6-98f5-e474f1f1e277 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Processing event network-vif-plugged-608719c5-5877-4237-83ea-eb3cba59c3a3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:11:30 compute-0 podman[291482]: 2025-10-02 12:11:30.406244071 +0000 UTC m=+0.030277875 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:11:30 compute-0 ceph-mon[73607]: pgmap v1370: 305 pgs: 305 active+clean; 213 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 696 KiB/s rd, 5.7 MiB/s wr, 136 op/s
Oct 02 12:11:30 compute-0 podman[291482]: 2025-10-02 12:11:30.832581452 +0000 UTC m=+0.456615276 container create c606a4d480195c264a79b7ebc7c4654300c5dde81d5fb63ea3a0084a754ab98e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:11:31 compute-0 systemd[1]: Started libpod-conmon-c606a4d480195c264a79b7ebc7c4654300c5dde81d5fb63ea3a0084a754ab98e.scope.
Oct 02 12:11:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 213 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 5.5 MiB/s wr, 165 op/s
Oct 02 12:11:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:11:31 compute-0 nova_compute[257802]: 2025-10-02 12:11:31.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75160e36c7dd180302c05b2c8cc01de3ad46698c7831f8ac45cc9501fe5e9117/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:11:31 compute-0 podman[291482]: 2025-10-02 12:11:31.221280061 +0000 UTC m=+0.845313945 container init c606a4d480195c264a79b7ebc7c4654300c5dde81d5fb63ea3a0084a754ab98e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 02 12:11:31 compute-0 podman[291482]: 2025-10-02 12:11:31.230638818 +0000 UTC m=+0.854672642 container start c606a4d480195c264a79b7ebc7c4654300c5dde81d5fb63ea3a0084a754ab98e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 12:11:31 compute-0 neutron-haproxy-ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27[291522]: [NOTICE]   (291526) : New worker (291528) forked
Oct 02 12:11:31 compute-0 neutron-haproxy-ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27[291522]: [NOTICE]   (291526) : Loading success.
Oct 02 12:11:31 compute-0 nova_compute[257802]: 2025-10-02 12:11:31.324 2 DEBUG nova.compute.manager [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:11:31 compute-0 nova_compute[257802]: 2025-10-02 12:11:31.326 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407091.3249047, 0b03ee08-f5c8-4897-8215-b9998393372f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:11:31 compute-0 nova_compute[257802]: 2025-10-02 12:11:31.326 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] VM Started (Lifecycle Event)
Oct 02 12:11:31 compute-0 nova_compute[257802]: 2025-10-02 12:11:31.329 2 DEBUG nova.virt.libvirt.driver [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:11:31 compute-0 nova_compute[257802]: 2025-10-02 12:11:31.333 2 INFO nova.virt.libvirt.driver [-] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Instance spawned successfully.
Oct 02 12:11:31 compute-0 nova_compute[257802]: 2025-10-02 12:11:31.333 2 DEBUG nova.virt.libvirt.driver [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:11:31 compute-0 nova_compute[257802]: 2025-10-02 12:11:31.356 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:11:31 compute-0 nova_compute[257802]: 2025-10-02 12:11:31.360 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:11:31 compute-0 nova_compute[257802]: 2025-10-02 12:11:31.368 2 DEBUG nova.virt.libvirt.driver [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:11:31 compute-0 nova_compute[257802]: 2025-10-02 12:11:31.369 2 DEBUG nova.virt.libvirt.driver [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:11:31 compute-0 nova_compute[257802]: 2025-10-02 12:11:31.370 2 DEBUG nova.virt.libvirt.driver [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:11:31 compute-0 nova_compute[257802]: 2025-10-02 12:11:31.370 2 DEBUG nova.virt.libvirt.driver [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:11:31 compute-0 nova_compute[257802]: 2025-10-02 12:11:31.370 2 DEBUG nova.virt.libvirt.driver [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:11:31 compute-0 nova_compute[257802]: 2025-10-02 12:11:31.371 2 DEBUG nova.virt.libvirt.driver [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:11:31 compute-0 nova_compute[257802]: 2025-10-02 12:11:31.381 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:11:31 compute-0 nova_compute[257802]: 2025-10-02 12:11:31.382 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407091.3250506, 0b03ee08-f5c8-4897-8215-b9998393372f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:11:31 compute-0 nova_compute[257802]: 2025-10-02 12:11:31.382 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] VM Paused (Lifecycle Event)
Oct 02 12:11:31 compute-0 nova_compute[257802]: 2025-10-02 12:11:31.415 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:11:31 compute-0 nova_compute[257802]: 2025-10-02 12:11:31.419 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407091.3283312, 0b03ee08-f5c8-4897-8215-b9998393372f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:11:31 compute-0 nova_compute[257802]: 2025-10-02 12:11:31.419 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] VM Resumed (Lifecycle Event)
Oct 02 12:11:31 compute-0 nova_compute[257802]: 2025-10-02 12:11:31.448 2 INFO nova.compute.manager [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Took 9.88 seconds to spawn the instance on the hypervisor.
Oct 02 12:11:31 compute-0 nova_compute[257802]: 2025-10-02 12:11:31.448 2 DEBUG nova.compute.manager [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:11:31 compute-0 nova_compute[257802]: 2025-10-02 12:11:31.450 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:11:31 compute-0 nova_compute[257802]: 2025-10-02 12:11:31.455 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:11:31 compute-0 nova_compute[257802]: 2025-10-02 12:11:31.488 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:11:31 compute-0 nova_compute[257802]: 2025-10-02 12:11:31.511 2 INFO nova.compute.manager [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Took 10.86 seconds to build instance.
Oct 02 12:11:31 compute-0 nova_compute[257802]: 2025-10-02 12:11:31.527 2 DEBUG oslo_concurrency.lockutils [None req-e0d7389e-4c34-4a57-a165-6d2d32d5d0e1 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "0b03ee08-f5c8-4897-8215-b9998393372f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.934s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:31.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:31 compute-0 nova_compute[257802]: 2025-10-02 12:11:31.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:31.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:32 compute-0 nova_compute[257802]: 2025-10-02 12:11:32.197 2 DEBUG nova.compute.manager [req-70d789a0-ab28-48f8-b959-53db4fb687b6 req-78279ef8-fe62-46bd-af00-6b8f2a1a00d2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Received event network-vif-plugged-608719c5-5877-4237-83ea-eb3cba59c3a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:11:32 compute-0 nova_compute[257802]: 2025-10-02 12:11:32.198 2 DEBUG oslo_concurrency.lockutils [req-70d789a0-ab28-48f8-b959-53db4fb687b6 req-78279ef8-fe62-46bd-af00-6b8f2a1a00d2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0b03ee08-f5c8-4897-8215-b9998393372f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:32 compute-0 nova_compute[257802]: 2025-10-02 12:11:32.198 2 DEBUG oslo_concurrency.lockutils [req-70d789a0-ab28-48f8-b959-53db4fb687b6 req-78279ef8-fe62-46bd-af00-6b8f2a1a00d2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0b03ee08-f5c8-4897-8215-b9998393372f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:32 compute-0 nova_compute[257802]: 2025-10-02 12:11:32.199 2 DEBUG oslo_concurrency.lockutils [req-70d789a0-ab28-48f8-b959-53db4fb687b6 req-78279ef8-fe62-46bd-af00-6b8f2a1a00d2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0b03ee08-f5c8-4897-8215-b9998393372f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:32 compute-0 nova_compute[257802]: 2025-10-02 12:11:32.199 2 DEBUG nova.compute.manager [req-70d789a0-ab28-48f8-b959-53db4fb687b6 req-78279ef8-fe62-46bd-af00-6b8f2a1a00d2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] No waiting events found dispatching network-vif-plugged-608719c5-5877-4237-83ea-eb3cba59c3a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:11:32 compute-0 nova_compute[257802]: 2025-10-02 12:11:32.200 2 WARNING nova.compute.manager [req-70d789a0-ab28-48f8-b959-53db4fb687b6 req-78279ef8-fe62-46bd-af00-6b8f2a1a00d2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Received unexpected event network-vif-plugged-608719c5-5877-4237-83ea-eb3cba59c3a3 for instance with vm_state active and task_state None.
Oct 02 12:11:33 compute-0 ceph-mon[73607]: pgmap v1371: 305 pgs: 305 active+clean; 213 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 5.5 MiB/s wr, 165 op/s
Oct 02 12:11:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 197 MiB data, 583 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.7 MiB/s wr, 189 op/s
Oct 02 12:11:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:33.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:11:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:33.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:11:34 compute-0 nova_compute[257802]: 2025-10-02 12:11:34.237 2 DEBUG nova.compute.manager [req-d6ae6a2c-2b64-4f6f-9232-0c6bf1a9311e req-f13b26b8-45d8-4f10-96ba-4dfb90b658e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Received event network-changed-608719c5-5877-4237-83ea-eb3cba59c3a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:11:34 compute-0 nova_compute[257802]: 2025-10-02 12:11:34.238 2 DEBUG nova.compute.manager [req-d6ae6a2c-2b64-4f6f-9232-0c6bf1a9311e req-f13b26b8-45d8-4f10-96ba-4dfb90b658e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Refreshing instance network info cache due to event network-changed-608719c5-5877-4237-83ea-eb3cba59c3a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:11:34 compute-0 nova_compute[257802]: 2025-10-02 12:11:34.238 2 DEBUG oslo_concurrency.lockutils [req-d6ae6a2c-2b64-4f6f-9232-0c6bf1a9311e req-f13b26b8-45d8-4f10-96ba-4dfb90b658e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-0b03ee08-f5c8-4897-8215-b9998393372f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:11:34 compute-0 nova_compute[257802]: 2025-10-02 12:11:34.238 2 DEBUG oslo_concurrency.lockutils [req-d6ae6a2c-2b64-4f6f-9232-0c6bf1a9311e req-f13b26b8-45d8-4f10-96ba-4dfb90b658e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-0b03ee08-f5c8-4897-8215-b9998393372f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:11:34 compute-0 nova_compute[257802]: 2025-10-02 12:11:34.238 2 DEBUG nova.network.neutron [req-d6ae6a2c-2b64-4f6f-9232-0c6bf1a9311e req-f13b26b8-45d8-4f10-96ba-4dfb90b658e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Refreshing network info cache for port 608719c5-5877-4237-83ea-eb3cba59c3a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:11:34 compute-0 ceph-mon[73607]: pgmap v1372: 305 pgs: 305 active+clean; 197 MiB data, 583 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.7 MiB/s wr, 189 op/s
Oct 02 12:11:34 compute-0 sudo[291539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:34 compute-0 sudo[291539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:34 compute-0 sudo[291539]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:34 compute-0 sudo[291564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:34 compute-0 sudo[291564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:34 compute-0 sudo[291564]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 192 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 192 op/s
Oct 02 12:11:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:35.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:35.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:36 compute-0 nova_compute[257802]: 2025-10-02 12:11:36.145 2 DEBUG nova.network.neutron [req-d6ae6a2c-2b64-4f6f-9232-0c6bf1a9311e req-f13b26b8-45d8-4f10-96ba-4dfb90b658e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Updated VIF entry in instance network info cache for port 608719c5-5877-4237-83ea-eb3cba59c3a3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:11:36 compute-0 nova_compute[257802]: 2025-10-02 12:11:36.146 2 DEBUG nova.network.neutron [req-d6ae6a2c-2b64-4f6f-9232-0c6bf1a9311e req-f13b26b8-45d8-4f10-96ba-4dfb90b658e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Updating instance_info_cache with network_info: [{"id": "608719c5-5877-4237-83ea-eb3cba59c3a3", "address": "fa:16:3e:6e:cf:c2", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap608719c5-58", "ovs_interfaceid": "608719c5-5877-4237-83ea-eb3cba59c3a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:11:36 compute-0 nova_compute[257802]: 2025-10-02 12:11:36.177 2 DEBUG oslo_concurrency.lockutils [req-d6ae6a2c-2b64-4f6f-9232-0c6bf1a9311e req-f13b26b8-45d8-4f10-96ba-4dfb90b658e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-0b03ee08-f5c8-4897-8215-b9998393372f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:11:36 compute-0 nova_compute[257802]: 2025-10-02 12:11:36.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:36 compute-0 nova_compute[257802]: 2025-10-02 12:11:36.398 2 DEBUG nova.compute.manager [req-dcc606dd-c4ea-442f-9bb7-126b560aa61c req-3bc14cf4-b85a-4e31-994b-4f666987692e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Received event network-changed-608719c5-5877-4237-83ea-eb3cba59c3a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:11:36 compute-0 nova_compute[257802]: 2025-10-02 12:11:36.399 2 DEBUG nova.compute.manager [req-dcc606dd-c4ea-442f-9bb7-126b560aa61c req-3bc14cf4-b85a-4e31-994b-4f666987692e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Refreshing instance network info cache due to event network-changed-608719c5-5877-4237-83ea-eb3cba59c3a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:11:36 compute-0 nova_compute[257802]: 2025-10-02 12:11:36.399 2 DEBUG oslo_concurrency.lockutils [req-dcc606dd-c4ea-442f-9bb7-126b560aa61c req-3bc14cf4-b85a-4e31-994b-4f666987692e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-0b03ee08-f5c8-4897-8215-b9998393372f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:11:36 compute-0 nova_compute[257802]: 2025-10-02 12:11:36.400 2 DEBUG oslo_concurrency.lockutils [req-dcc606dd-c4ea-442f-9bb7-126b560aa61c req-3bc14cf4-b85a-4e31-994b-4f666987692e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-0b03ee08-f5c8-4897-8215-b9998393372f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:11:36 compute-0 nova_compute[257802]: 2025-10-02 12:11:36.400 2 DEBUG nova.network.neutron [req-dcc606dd-c4ea-442f-9bb7-126b560aa61c req-3bc14cf4-b85a-4e31-994b-4f666987692e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Refreshing network info cache for port 608719c5-5877-4237-83ea-eb3cba59c3a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:11:36 compute-0 ceph-mon[73607]: pgmap v1373: 305 pgs: 305 active+clean; 192 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 192 op/s
Oct 02 12:11:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2443498065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:36 compute-0 nova_compute[257802]: 2025-10-02 12:11:36.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:37 compute-0 nova_compute[257802]: 2025-10-02 12:11:37.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:11:37 compute-0 nova_compute[257802]: 2025-10-02 12:11:37.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:11:37 compute-0 nova_compute[257802]: 2025-10-02 12:11:37.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:11:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 167 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 936 KiB/s wr, 205 op/s
Oct 02 12:11:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:11:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:37.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:11:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:37.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:38 compute-0 nova_compute[257802]: 2025-10-02 12:11:38.032 2 DEBUG nova.network.neutron [req-dcc606dd-c4ea-442f-9bb7-126b560aa61c req-3bc14cf4-b85a-4e31-994b-4f666987692e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Updated VIF entry in instance network info cache for port 608719c5-5877-4237-83ea-eb3cba59c3a3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:11:38 compute-0 nova_compute[257802]: 2025-10-02 12:11:38.032 2 DEBUG nova.network.neutron [req-dcc606dd-c4ea-442f-9bb7-126b560aa61c req-3bc14cf4-b85a-4e31-994b-4f666987692e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Updating instance_info_cache with network_info: [{"id": "608719c5-5877-4237-83ea-eb3cba59c3a3", "address": "fa:16:3e:6e:cf:c2", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap608719c5-58", "ovs_interfaceid": "608719c5-5877-4237-83ea-eb3cba59c3a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:11:38 compute-0 nova_compute[257802]: 2025-10-02 12:11:38.050 2 DEBUG oslo_concurrency.lockutils [req-dcc606dd-c4ea-442f-9bb7-126b560aa61c req-3bc14cf4-b85a-4e31-994b-4f666987692e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-0b03ee08-f5c8-4897-8215-b9998393372f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:11:38 compute-0 nova_compute[257802]: 2025-10-02 12:11:38.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:11:38 compute-0 ceph-mon[73607]: pgmap v1374: 305 pgs: 305 active+clean; 167 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 936 KiB/s wr, 205 op/s
Oct 02 12:11:38 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2360845791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 167 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 93 KiB/s wr, 152 op/s
Oct 02 12:11:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:11:39.427 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:11:39 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2049217689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:39.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:39.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Oct 02 12:11:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Oct 02 12:11:40 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Oct 02 12:11:40 compute-0 ceph-mon[73607]: pgmap v1375: 305 pgs: 305 active+clean; 167 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 93 KiB/s wr, 152 op/s
Oct 02 12:11:40 compute-0 podman[291592]: 2025-10-02 12:11:40.937988141 +0000 UTC m=+0.077787796 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent)
Oct 02 12:11:41 compute-0 nova_compute[257802]: 2025-10-02 12:11:41.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:11:41 compute-0 nova_compute[257802]: 2025-10-02 12:11:41.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:11:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 167 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 36 KiB/s wr, 125 op/s
Oct 02 12:11:41 compute-0 nova_compute[257802]: 2025-10-02 12:11:41.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Oct 02 12:11:41 compute-0 ceph-mon[73607]: osdmap e185: 3 total, 3 up, 3 in
Oct 02 12:11:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Oct 02 12:11:41 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Oct 02 12:11:41 compute-0 nova_compute[257802]: 2025-10-02 12:11:41.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:41.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:41.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:42 compute-0 nova_compute[257802]: 2025-10-02 12:11:42.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:11:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:11:42
Oct 02 12:11:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:11:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:11:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'images', '.rgw.root', 'backups', 'default.rgw.meta', 'volumes']
Oct 02 12:11:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:11:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Oct 02 12:11:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:11:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:11:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:11:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:11:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:11:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:11:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Oct 02 12:11:42 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Oct 02 12:11:42 compute-0 ceph-mon[73607]: pgmap v1377: 305 pgs: 305 active+clean; 167 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 36 KiB/s wr, 125 op/s
Oct 02 12:11:42 compute-0 ceph-mon[73607]: osdmap e186: 3 total, 3 up, 3 in
Oct 02 12:11:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:11:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:11:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:11:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:11:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:11:43 compute-0 nova_compute[257802]: 2025-10-02 12:11:43.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:11:43 compute-0 nova_compute[257802]: 2025-10-02 12:11:43.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:11:43 compute-0 nova_compute[257802]: 2025-10-02 12:11:43.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:11:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 195 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.4 MiB/s wr, 65 op/s
Oct 02 12:11:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:11:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:11:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:11:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:11:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:11:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:43.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:43 compute-0 ceph-mon[73607]: osdmap e187: 3 total, 3 up, 3 in
Oct 02 12:11:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/20976089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:43.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:44 compute-0 podman[291614]: 2025-10-02 12:11:44.916664614 +0000 UTC m=+0.055366182 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct 02 12:11:44 compute-0 podman[291613]: 2025-10-02 12:11:44.949557542 +0000 UTC m=+0.090379732 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 02 12:11:45 compute-0 nova_compute[257802]: 2025-10-02 12:11:45.079 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-0b03ee08-f5c8-4897-8215-b9998393372f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:11:45 compute-0 nova_compute[257802]: 2025-10-02 12:11:45.080 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-0b03ee08-f5c8-4897-8215-b9998393372f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:11:45 compute-0 nova_compute[257802]: 2025-10-02 12:11:45.080 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:11:45 compute-0 nova_compute[257802]: 2025-10-02 12:11:45.080 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0b03ee08-f5c8-4897-8215-b9998393372f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:11:45 compute-0 ceph-mon[73607]: pgmap v1380: 305 pgs: 305 active+clean; 195 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.4 MiB/s wr, 65 op/s
Oct 02 12:11:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 214 MiB data, 596 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 4.7 MiB/s wr, 69 op/s
Oct 02 12:11:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:45.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:45.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:46 compute-0 nova_compute[257802]: 2025-10-02 12:11:46.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1037535637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:46 compute-0 nova_compute[257802]: 2025-10-02 12:11:46.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 254 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 9.0 MiB/s wr, 159 op/s
Oct 02 12:11:47 compute-0 ovn_controller[148183]: 2025-10-02T12:11:47Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6e:cf:c2 10.100.0.3
Oct 02 12:11:47 compute-0 ovn_controller[148183]: 2025-10-02T12:11:47Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6e:cf:c2 10.100.0.3
Oct 02 12:11:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:47.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:11:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:47.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:11:48 compute-0 ceph-mon[73607]: pgmap v1381: 305 pgs: 305 active+clean; 214 MiB data, 596 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 4.7 MiB/s wr, 69 op/s
Oct 02 12:11:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3852036067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Oct 02 12:11:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Oct 02 12:11:48 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Oct 02 12:11:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 299 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 11 MiB/s wr, 230 op/s
Oct 02 12:11:49 compute-0 ceph-mon[73607]: pgmap v1382: 305 pgs: 305 active+clean; 254 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 9.0 MiB/s wr, 159 op/s
Oct 02 12:11:49 compute-0 ceph-mon[73607]: osdmap e188: 3 total, 3 up, 3 in
Oct 02 12:11:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:11:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:49.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:11:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:49.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:49 compute-0 podman[291655]: 2025-10-02 12:11:49.940966717 +0000 UTC m=+0.078833721 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:11:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Oct 02 12:11:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Oct 02 12:11:50 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Oct 02 12:11:50 compute-0 ceph-mon[73607]: pgmap v1384: 305 pgs: 305 active+clean; 299 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 11 MiB/s wr, 230 op/s
Oct 02 12:11:50 compute-0 ceph-mon[73607]: osdmap e189: 3 total, 3 up, 3 in
Oct 02 12:11:50 compute-0 nova_compute[257802]: 2025-10-02 12:11:50.836 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Updating instance_info_cache with network_info: [{"id": "608719c5-5877-4237-83ea-eb3cba59c3a3", "address": "fa:16:3e:6e:cf:c2", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap608719c5-58", "ovs_interfaceid": "608719c5-5877-4237-83ea-eb3cba59c3a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:11:50 compute-0 nova_compute[257802]: 2025-10-02 12:11:50.865 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-0b03ee08-f5c8-4897-8215-b9998393372f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:11:50 compute-0 nova_compute[257802]: 2025-10-02 12:11:50.866 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:11:50 compute-0 nova_compute[257802]: 2025-10-02 12:11:50.866 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:11:50 compute-0 nova_compute[257802]: 2025-10-02 12:11:50.902 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:50 compute-0 nova_compute[257802]: 2025-10-02 12:11:50.903 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:50 compute-0 nova_compute[257802]: 2025-10-02 12:11:50.903 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:50 compute-0 nova_compute[257802]: 2025-10-02 12:11:50.903 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:11:50 compute-0 nova_compute[257802]: 2025-10-02 12:11:50.904 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 322 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 9.9 MiB/s wr, 225 op/s
Oct 02 12:11:51 compute-0 nova_compute[257802]: 2025-10-02 12:11:51.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:11:51 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4193231085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:51 compute-0 nova_compute[257802]: 2025-10-02 12:11:51.356 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:51 compute-0 nova_compute[257802]: 2025-10-02 12:11:51.437 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:11:51 compute-0 nova_compute[257802]: 2025-10-02 12:11:51.438 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:11:51 compute-0 nova_compute[257802]: 2025-10-02 12:11:51.576 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:11:51 compute-0 nova_compute[257802]: 2025-10-02 12:11:51.577 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4456MB free_disk=20.887603759765625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:11:51 compute-0 nova_compute[257802]: 2025-10-02 12:11:51.577 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:51 compute-0 nova_compute[257802]: 2025-10-02 12:11:51.577 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:51 compute-0 nova_compute[257802]: 2025-10-02 12:11:51.687 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 0b03ee08-f5c8-4897-8215-b9998393372f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:11:51 compute-0 nova_compute[257802]: 2025-10-02 12:11:51.687 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:11:51 compute-0 nova_compute[257802]: 2025-10-02 12:11:51.687 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:11:51 compute-0 nova_compute[257802]: 2025-10-02 12:11:51.765 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:51.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:51 compute-0 nova_compute[257802]: 2025-10-02 12:11:51.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4193231085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:51.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:11:52 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/904225357' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:52 compute-0 nova_compute[257802]: 2025-10-02 12:11:52.197 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:52 compute-0 nova_compute[257802]: 2025-10-02 12:11:52.202 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:11:52 compute-0 nova_compute[257802]: 2025-10-02 12:11:52.220 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:11:52 compute-0 nova_compute[257802]: 2025-10-02 12:11:52.268 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:11:52 compute-0 nova_compute[257802]: 2025-10-02 12:11:52.269 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:52 compute-0 nova_compute[257802]: 2025-10-02 12:11:52.500 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:11:52 compute-0 nova_compute[257802]: 2025-10-02 12:11:52.520 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:11:53 compute-0 ceph-mon[73607]: pgmap v1386: 305 pgs: 305 active+clean; 322 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 9.9 MiB/s wr, 225 op/s
Oct 02 12:11:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/904225357' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 296 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 8.1 MiB/s wr, 234 op/s
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:11:53.234378) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407113234478, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2508, "num_deletes": 518, "total_data_size": 3680230, "memory_usage": 3734184, "flush_reason": "Manual Compaction"}
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407113254731, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 3596897, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29096, "largest_seqno": 31603, "table_properties": {"data_size": 3586221, "index_size": 6338, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3269, "raw_key_size": 25900, "raw_average_key_size": 20, "raw_value_size": 3562533, "raw_average_value_size": 2755, "num_data_blocks": 274, "num_entries": 1293, "num_filter_entries": 1293, "num_deletions": 518, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406930, "oldest_key_time": 1759406930, "file_creation_time": 1759407113, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 20381 microseconds, and 9583 cpu microseconds.
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:11:53.254796) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 3596897 bytes OK
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:11:53.254864) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:11:53.256493) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:11:53.256514) EVENT_LOG_v1 {"time_micros": 1759407113256506, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:11:53.256539) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 3668900, prev total WAL file size 3668900, number of live WAL files 2.
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:11:53.258162) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(3512KB)], [65(8381KB)]
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407113258202, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 12179774, "oldest_snapshot_seqno": -1}
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5614 keys, 10139275 bytes, temperature: kUnknown
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407113336497, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 10139275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10100050, "index_size": 24075, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14085, "raw_key_size": 144341, "raw_average_key_size": 25, "raw_value_size": 9997428, "raw_average_value_size": 1780, "num_data_blocks": 968, "num_entries": 5614, "num_filter_entries": 5614, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759407113, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:11:53.336941) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 10139275 bytes
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:11:53.338256) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.2 rd, 129.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 8.2 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(6.2) write-amplify(2.8) OK, records in: 6666, records dropped: 1052 output_compression: NoCompression
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:11:53.338278) EVENT_LOG_v1 {"time_micros": 1759407113338267, "job": 36, "event": "compaction_finished", "compaction_time_micros": 78462, "compaction_time_cpu_micros": 32854, "output_level": 6, "num_output_files": 1, "total_output_size": 10139275, "num_input_records": 6666, "num_output_records": 5614, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407113339147, "job": 36, "event": "table_file_deletion", "file_number": 67}
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407113341052, "job": 36, "event": "table_file_deletion", "file_number": 65}
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:11:53.258045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:11:53.341120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:11:53.341129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:11:53.341133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:11:53.341137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:11:53.341141) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:11:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:11:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:53.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:11:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:53.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004868635713288666 of space, bias 1.0, pg target 1.4605907139865997 quantized to 32 (current 32)
Oct 02 12:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Oct 02 12:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003410062465103294 of space, bias 1.0, pg target 1.019608677065885 quantized to 32 (current 32)
Oct 02 12:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027081297692164525 quantized to 32 (current 32)
Oct 02 12:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:11:54 compute-0 ceph-mon[73607]: pgmap v1387: 305 pgs: 305 active+clean; 296 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 8.1 MiB/s wr, 234 op/s
Oct 02 12:11:54 compute-0 sudo[291729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:54 compute-0 sudo[291729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:54 compute-0 sudo[291729]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:54 compute-0 sudo[291754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:54 compute-0 sudo[291754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:54 compute-0 sudo[291754]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:11:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1227284470' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:11:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:11:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1227284470' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:11:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 252 MiB data, 619 MiB used, 20 GiB / 21 GiB avail; 518 KiB/s rd, 4.3 MiB/s wr, 159 op/s
Oct 02 12:11:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2172909903' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:11:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2137773829' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:11:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1227284470' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:11:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1227284470' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:11:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:11:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:55.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:11:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:55.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:56 compute-0 nova_compute[257802]: 2025-10-02 12:11:56.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:56 compute-0 ceph-mon[73607]: pgmap v1388: 305 pgs: 305 active+clean; 252 MiB data, 619 MiB used, 20 GiB / 21 GiB avail; 518 KiB/s rd, 4.3 MiB/s wr, 159 op/s
Oct 02 12:11:56 compute-0 nova_compute[257802]: 2025-10-02 12:11:56.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 200 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 416 KiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 02 12:11:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3287134201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:57.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:57.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Oct 02 12:11:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Oct 02 12:11:58 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Oct 02 12:11:58 compute-0 ceph-mon[73607]: pgmap v1389: 305 pgs: 305 active+clean; 200 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 416 KiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 02 12:11:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3524384916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:58 compute-0 ceph-mon[73607]: osdmap e190: 3 total, 3 up, 3 in
Oct 02 12:11:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 167 MiB data, 583 MiB used, 20 GiB / 21 GiB avail; 480 KiB/s rd, 1.3 MiB/s wr, 117 op/s
Oct 02 12:11:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:59.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:11:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:11:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:59.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:12:00 compute-0 ceph-mon[73607]: pgmap v1391: 305 pgs: 305 active+clean; 167 MiB data, 583 MiB used, 20 GiB / 21 GiB avail; 480 KiB/s rd, 1.3 MiB/s wr, 117 op/s
Oct 02 12:12:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 175 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 188 KiB/s wr, 130 op/s
Oct 02 12:12:01 compute-0 nova_compute[257802]: 2025-10-02 12:12:01.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:01.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:01 compute-0 nova_compute[257802]: 2025-10-02 12:12:01.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:01.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:02 compute-0 ceph-mon[73607]: pgmap v1392: 305 pgs: 305 active+clean; 175 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 188 KiB/s wr, 130 op/s
Oct 02 12:12:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 192 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 604 KiB/s wr, 157 op/s
Oct 02 12:12:03 compute-0 nova_compute[257802]: 2025-10-02 12:12:03.456 2 DEBUG oslo_concurrency.lockutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Acquiring lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:03 compute-0 nova_compute[257802]: 2025-10-02 12:12:03.457 2 DEBUG oslo_concurrency.lockutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:03 compute-0 nova_compute[257802]: 2025-10-02 12:12:03.499 2 DEBUG nova.compute.manager [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:12:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:03 compute-0 nova_compute[257802]: 2025-10-02 12:12:03.661 2 DEBUG oslo_concurrency.lockutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:03 compute-0 nova_compute[257802]: 2025-10-02 12:12:03.661 2 DEBUG oslo_concurrency.lockutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:03 compute-0 nova_compute[257802]: 2025-10-02 12:12:03.667 2 DEBUG nova.virt.hardware [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:12:03 compute-0 nova_compute[257802]: 2025-10-02 12:12:03.668 2 INFO nova.compute.claims [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:12:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:03.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:03 compute-0 nova_compute[257802]: 2025-10-02 12:12:03.855 2 DEBUG oslo_concurrency.processutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:12:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:12:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:03.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:12:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:12:04 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2521159409' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:04 compute-0 nova_compute[257802]: 2025-10-02 12:12:04.277 2 DEBUG oslo_concurrency.processutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:12:04 compute-0 nova_compute[257802]: 2025-10-02 12:12:04.283 2 DEBUG nova.compute.provider_tree [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:12:04 compute-0 nova_compute[257802]: 2025-10-02 12:12:04.301 2 DEBUG nova.scheduler.client.report [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:12:04 compute-0 nova_compute[257802]: 2025-10-02 12:12:04.334 2 DEBUG oslo_concurrency.lockutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:04 compute-0 nova_compute[257802]: 2025-10-02 12:12:04.335 2 DEBUG nova.compute.manager [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:12:04 compute-0 nova_compute[257802]: 2025-10-02 12:12:04.424 2 DEBUG nova.compute.manager [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:12:04 compute-0 nova_compute[257802]: 2025-10-02 12:12:04.425 2 DEBUG nova.network.neutron [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:12:04 compute-0 nova_compute[257802]: 2025-10-02 12:12:04.458 2 INFO nova.virt.libvirt.driver [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:12:04 compute-0 nova_compute[257802]: 2025-10-02 12:12:04.480 2 DEBUG nova.compute.manager [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:12:04 compute-0 nova_compute[257802]: 2025-10-02 12:12:04.628 2 DEBUG nova.compute.manager [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:12:04 compute-0 nova_compute[257802]: 2025-10-02 12:12:04.631 2 DEBUG nova.virt.libvirt.driver [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:12:04 compute-0 nova_compute[257802]: 2025-10-02 12:12:04.631 2 INFO nova.virt.libvirt.driver [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Creating image(s)
Oct 02 12:12:04 compute-0 nova_compute[257802]: 2025-10-02 12:12:04.667 2 DEBUG nova.storage.rbd_utils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] rbd image 0b6d53d9-53fc-4b0d-b849-fadc57280e6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:12:04 compute-0 nova_compute[257802]: 2025-10-02 12:12:04.692 2 DEBUG nova.storage.rbd_utils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] rbd image 0b6d53d9-53fc-4b0d-b849-fadc57280e6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:12:04 compute-0 nova_compute[257802]: 2025-10-02 12:12:04.717 2 DEBUG nova.storage.rbd_utils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] rbd image 0b6d53d9-53fc-4b0d-b849-fadc57280e6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:12:04 compute-0 nova_compute[257802]: 2025-10-02 12:12:04.720 2 DEBUG oslo_concurrency.processutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:12:04 compute-0 nova_compute[257802]: 2025-10-02 12:12:04.783 2 DEBUG oslo_concurrency.processutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:12:04 compute-0 nova_compute[257802]: 2025-10-02 12:12:04.784 2 DEBUG oslo_concurrency.lockutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:04 compute-0 nova_compute[257802]: 2025-10-02 12:12:04.785 2 DEBUG oslo_concurrency.lockutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:04 compute-0 nova_compute[257802]: 2025-10-02 12:12:04.785 2 DEBUG oslo_concurrency.lockutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:04 compute-0 nova_compute[257802]: 2025-10-02 12:12:04.816 2 DEBUG nova.storage.rbd_utils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] rbd image 0b6d53d9-53fc-4b0d-b849-fadc57280e6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:12:04 compute-0 nova_compute[257802]: 2025-10-02 12:12:04.820 2 DEBUG oslo_concurrency.processutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 0b6d53d9-53fc-4b0d-b849-fadc57280e6e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:12:04 compute-0 nova_compute[257802]: 2025-10-02 12:12:04.849 2 DEBUG nova.policy [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e2a366ecf5934af989ad59e70e8c0b40', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e3110a9141bf416d96e84c98f5ec90b6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:12:05 compute-0 ceph-mon[73607]: pgmap v1393: 305 pgs: 305 active+clean; 192 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 604 KiB/s wr, 157 op/s
Oct 02 12:12:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2521159409' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 213 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 154 op/s
Oct 02 12:12:05 compute-0 nova_compute[257802]: 2025-10-02 12:12:05.436 2 DEBUG oslo_concurrency.processutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 0b6d53d9-53fc-4b0d-b849-fadc57280e6e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:12:05 compute-0 nova_compute[257802]: 2025-10-02 12:12:05.512 2 DEBUG nova.storage.rbd_utils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] resizing rbd image 0b6d53d9-53fc-4b0d-b849-fadc57280e6e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:12:05 compute-0 nova_compute[257802]: 2025-10-02 12:12:05.638 2 DEBUG nova.objects.instance [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lazy-loading 'migration_context' on Instance uuid 0b6d53d9-53fc-4b0d-b849-fadc57280e6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:12:05 compute-0 nova_compute[257802]: 2025-10-02 12:12:05.659 2 DEBUG nova.virt.libvirt.driver [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:12:05 compute-0 nova_compute[257802]: 2025-10-02 12:12:05.660 2 DEBUG nova.virt.libvirt.driver [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Ensure instance console log exists: /var/lib/nova/instances/0b6d53d9-53fc-4b0d-b849-fadc57280e6e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:12:05 compute-0 nova_compute[257802]: 2025-10-02 12:12:05.661 2 DEBUG oslo_concurrency.lockutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:05 compute-0 nova_compute[257802]: 2025-10-02 12:12:05.661 2 DEBUG oslo_concurrency.lockutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:05 compute-0 nova_compute[257802]: 2025-10-02 12:12:05.661 2 DEBUG oslo_concurrency.lockutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:05 compute-0 nova_compute[257802]: 2025-10-02 12:12:05.766 2 DEBUG nova.network.neutron [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Successfully created port: 1bb3def9-caba-4dc1-8217-e979ae761982 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:12:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:05.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:05.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:06 compute-0 ceph-mon[73607]: pgmap v1394: 305 pgs: 305 active+clean; 213 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 154 op/s
Oct 02 12:12:06 compute-0 nova_compute[257802]: 2025-10-02 12:12:06.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:06 compute-0 nova_compute[257802]: 2025-10-02 12:12:06.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 230 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.9 MiB/s wr, 146 op/s
Oct 02 12:12:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1826822819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:12:07 compute-0 nova_compute[257802]: 2025-10-02 12:12:07.419 2 DEBUG nova.network.neutron [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Successfully updated port: 1bb3def9-caba-4dc1-8217-e979ae761982 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:12:07 compute-0 nova_compute[257802]: 2025-10-02 12:12:07.461 2 DEBUG oslo_concurrency.lockutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Acquiring lock "refresh_cache-0b6d53d9-53fc-4b0d-b849-fadc57280e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:12:07 compute-0 nova_compute[257802]: 2025-10-02 12:12:07.461 2 DEBUG oslo_concurrency.lockutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Acquired lock "refresh_cache-0b6d53d9-53fc-4b0d-b849-fadc57280e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:12:07 compute-0 nova_compute[257802]: 2025-10-02 12:12:07.461 2 DEBUG nova.network.neutron [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:12:07 compute-0 nova_compute[257802]: 2025-10-02 12:12:07.598 2 DEBUG nova.compute.manager [req-6c8917ec-e29c-4b9a-8f34-1fe10be35747 req-047d8279-8302-4472-b1b0-c09b40695fac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Received event network-changed-1bb3def9-caba-4dc1-8217-e979ae761982 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:12:07 compute-0 nova_compute[257802]: 2025-10-02 12:12:07.599 2 DEBUG nova.compute.manager [req-6c8917ec-e29c-4b9a-8f34-1fe10be35747 req-047d8279-8302-4472-b1b0-c09b40695fac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Refreshing instance network info cache due to event network-changed-1bb3def9-caba-4dc1-8217-e979ae761982. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:12:07 compute-0 nova_compute[257802]: 2025-10-02 12:12:07.599 2 DEBUG oslo_concurrency.lockutils [req-6c8917ec-e29c-4b9a-8f34-1fe10be35747 req-047d8279-8302-4472-b1b0-c09b40695fac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-0b6d53d9-53fc-4b0d-b849-fadc57280e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:12:07 compute-0 nova_compute[257802]: 2025-10-02 12:12:07.761 2 DEBUG nova.network.neutron [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:12:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:07.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:07.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:08 compute-0 ceph-mon[73607]: pgmap v1395: 305 pgs: 305 active+clean; 230 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.9 MiB/s wr, 146 op/s
Oct 02 12:12:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2447159947' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:12:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 260 MiB data, 615 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.0 MiB/s wr, 144 op/s
Oct 02 12:12:09 compute-0 nova_compute[257802]: 2025-10-02 12:12:09.183 2 DEBUG nova.network.neutron [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Updating instance_info_cache with network_info: [{"id": "1bb3def9-caba-4dc1-8217-e979ae761982", "address": "fa:16:3e:9e:93:0e", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb3def9-ca", "ovs_interfaceid": "1bb3def9-caba-4dc1-8217-e979ae761982", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:12:09 compute-0 nova_compute[257802]: 2025-10-02 12:12:09.208 2 DEBUG oslo_concurrency.lockutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Releasing lock "refresh_cache-0b6d53d9-53fc-4b0d-b849-fadc57280e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:12:09 compute-0 nova_compute[257802]: 2025-10-02 12:12:09.208 2 DEBUG nova.compute.manager [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Instance network_info: |[{"id": "1bb3def9-caba-4dc1-8217-e979ae761982", "address": "fa:16:3e:9e:93:0e", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb3def9-ca", "ovs_interfaceid": "1bb3def9-caba-4dc1-8217-e979ae761982", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:12:09 compute-0 nova_compute[257802]: 2025-10-02 12:12:09.209 2 DEBUG oslo_concurrency.lockutils [req-6c8917ec-e29c-4b9a-8f34-1fe10be35747 req-047d8279-8302-4472-b1b0-c09b40695fac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-0b6d53d9-53fc-4b0d-b849-fadc57280e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:12:09 compute-0 nova_compute[257802]: 2025-10-02 12:12:09.209 2 DEBUG nova.network.neutron [req-6c8917ec-e29c-4b9a-8f34-1fe10be35747 req-047d8279-8302-4472-b1b0-c09b40695fac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Refreshing network info cache for port 1bb3def9-caba-4dc1-8217-e979ae761982 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:12:09 compute-0 nova_compute[257802]: 2025-10-02 12:12:09.211 2 DEBUG nova.virt.libvirt.driver [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Start _get_guest_xml network_info=[{"id": "1bb3def9-caba-4dc1-8217-e979ae761982", "address": "fa:16:3e:9e:93:0e", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb3def9-ca", "ovs_interfaceid": "1bb3def9-caba-4dc1-8217-e979ae761982", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:12:09 compute-0 nova_compute[257802]: 2025-10-02 12:12:09.216 2 WARNING nova.virt.libvirt.driver [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:12:09 compute-0 nova_compute[257802]: 2025-10-02 12:12:09.223 2 DEBUG nova.virt.libvirt.host [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:12:09 compute-0 nova_compute[257802]: 2025-10-02 12:12:09.223 2 DEBUG nova.virt.libvirt.host [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:12:09 compute-0 nova_compute[257802]: 2025-10-02 12:12:09.235 2 DEBUG nova.virt.libvirt.host [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:12:09 compute-0 nova_compute[257802]: 2025-10-02 12:12:09.236 2 DEBUG nova.virt.libvirt.host [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:12:09 compute-0 nova_compute[257802]: 2025-10-02 12:12:09.237 2 DEBUG nova.virt.libvirt.driver [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:12:09 compute-0 nova_compute[257802]: 2025-10-02 12:12:09.237 2 DEBUG nova.virt.hardware [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:12:09 compute-0 nova_compute[257802]: 2025-10-02 12:12:09.238 2 DEBUG nova.virt.hardware [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:12:09 compute-0 nova_compute[257802]: 2025-10-02 12:12:09.238 2 DEBUG nova.virt.hardware [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:12:09 compute-0 nova_compute[257802]: 2025-10-02 12:12:09.238 2 DEBUG nova.virt.hardware [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:12:09 compute-0 nova_compute[257802]: 2025-10-02 12:12:09.238 2 DEBUG nova.virt.hardware [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:12:09 compute-0 nova_compute[257802]: 2025-10-02 12:12:09.238 2 DEBUG nova.virt.hardware [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:12:09 compute-0 nova_compute[257802]: 2025-10-02 12:12:09.238 2 DEBUG nova.virt.hardware [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:12:09 compute-0 nova_compute[257802]: 2025-10-02 12:12:09.239 2 DEBUG nova.virt.hardware [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:12:09 compute-0 nova_compute[257802]: 2025-10-02 12:12:09.239 2 DEBUG nova.virt.hardware [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:12:09 compute-0 nova_compute[257802]: 2025-10-02 12:12:09.239 2 DEBUG nova.virt.hardware [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:12:09 compute-0 nova_compute[257802]: 2025-10-02 12:12:09.239 2 DEBUG nova.virt.hardware [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:12:09 compute-0 nova_compute[257802]: 2025-10-02 12:12:09.242 2 DEBUG oslo_concurrency.processutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:12:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:12:09 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1384478115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:12:09 compute-0 nova_compute[257802]: 2025-10-02 12:12:09.657 2 DEBUG oslo_concurrency.processutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:12:09 compute-0 nova_compute[257802]: 2025-10-02 12:12:09.682 2 DEBUG nova.storage.rbd_utils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] rbd image 0b6d53d9-53fc-4b0d-b849-fadc57280e6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:12:09 compute-0 nova_compute[257802]: 2025-10-02 12:12:09.688 2 DEBUG oslo_concurrency.processutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:12:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:09.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:09.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:12:10 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2122675690' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.140 2 DEBUG oslo_concurrency.processutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.144 2 DEBUG nova.virt.libvirt.vif [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:12:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1336832158',display_name='tempest-SecurityGroupsTestJSON-server-1336832158',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1336832158',id=54,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3110a9141bf416d96e84c98f5ec90b6',ramdisk_id='',reservation_id='r-i6f00o4z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-82162709',owner_user_name='tempest-SecurityGroupsTestJSON-82162709-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:12:04Z,user_data=None,user_id='e2a366ecf5934af989ad59e70e8c0b40',uuid=0b6d53d9-53fc-4b0d-b849-fadc57280e6e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1bb3def9-caba-4dc1-8217-e979ae761982", "address": "fa:16:3e:9e:93:0e", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb3def9-ca", "ovs_interfaceid": "1bb3def9-caba-4dc1-8217-e979ae761982", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.145 2 DEBUG nova.network.os_vif_util [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Converting VIF {"id": "1bb3def9-caba-4dc1-8217-e979ae761982", "address": "fa:16:3e:9e:93:0e", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb3def9-ca", "ovs_interfaceid": "1bb3def9-caba-4dc1-8217-e979ae761982", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.146 2 DEBUG nova.network.os_vif_util [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:93:0e,bridge_name='br-int',has_traffic_filtering=True,id=1bb3def9-caba-4dc1-8217-e979ae761982,network=Network(b99f809e-5bc0-4d91-be9e-5e46fd286a27),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb3def9-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.148 2 DEBUG nova.objects.instance [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0b6d53d9-53fc-4b0d-b849-fadc57280e6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:12:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1384478115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.189 2 DEBUG nova.virt.libvirt.driver [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:12:10 compute-0 nova_compute[257802]:   <uuid>0b6d53d9-53fc-4b0d-b849-fadc57280e6e</uuid>
Oct 02 12:12:10 compute-0 nova_compute[257802]:   <name>instance-00000036</name>
Oct 02 12:12:10 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:12:10 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:12:10 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:12:10 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:       <nova:name>tempest-SecurityGroupsTestJSON-server-1336832158</nova:name>
Oct 02 12:12:10 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:12:09</nova:creationTime>
Oct 02 12:12:10 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:12:10 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:12:10 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:12:10 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:12:10 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:12:10 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:12:10 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:12:10 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:12:10 compute-0 nova_compute[257802]:         <nova:user uuid="e2a366ecf5934af989ad59e70e8c0b40">tempest-SecurityGroupsTestJSON-82162709-project-member</nova:user>
Oct 02 12:12:10 compute-0 nova_compute[257802]:         <nova:project uuid="e3110a9141bf416d96e84c98f5ec90b6">tempest-SecurityGroupsTestJSON-82162709</nova:project>
Oct 02 12:12:10 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:12:10 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:12:10 compute-0 nova_compute[257802]:         <nova:port uuid="1bb3def9-caba-4dc1-8217-e979ae761982">
Oct 02 12:12:10 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:12:10 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:12:10 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:12:10 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <system>
Oct 02 12:12:10 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:12:10 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:12:10 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:12:10 compute-0 nova_compute[257802]:       <entry name="serial">0b6d53d9-53fc-4b0d-b849-fadc57280e6e</entry>
Oct 02 12:12:10 compute-0 nova_compute[257802]:       <entry name="uuid">0b6d53d9-53fc-4b0d-b849-fadc57280e6e</entry>
Oct 02 12:12:10 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     </system>
Oct 02 12:12:10 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:12:10 compute-0 nova_compute[257802]:   <os>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:   </os>
Oct 02 12:12:10 compute-0 nova_compute[257802]:   <features>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:   </features>
Oct 02 12:12:10 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:12:10 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:12:10 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:12:10 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/0b6d53d9-53fc-4b0d-b849-fadc57280e6e_disk">
Oct 02 12:12:10 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:       </source>
Oct 02 12:12:10 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:12:10 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:12:10 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:12:10 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/0b6d53d9-53fc-4b0d-b849-fadc57280e6e_disk.config">
Oct 02 12:12:10 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:       </source>
Oct 02 12:12:10 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:12:10 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:12:10 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:12:10 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:9e:93:0e"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:       <target dev="tap1bb3def9-ca"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:12:10 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/0b6d53d9-53fc-4b0d-b849-fadc57280e6e/console.log" append="off"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <video>
Oct 02 12:12:10 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     </video>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:12:10 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:12:10 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:12:10 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:12:10 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:12:10 compute-0 nova_compute[257802]: </domain>
Oct 02 12:12:10 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.191 2 DEBUG nova.compute.manager [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Preparing to wait for external event network-vif-plugged-1bb3def9-caba-4dc1-8217-e979ae761982 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.192 2 DEBUG oslo_concurrency.lockutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Acquiring lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.192 2 DEBUG oslo_concurrency.lockutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.192 2 DEBUG oslo_concurrency.lockutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.193 2 DEBUG nova.virt.libvirt.vif [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:12:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1336832158',display_name='tempest-SecurityGroupsTestJSON-server-1336832158',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1336832158',id=54,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3110a9141bf416d96e84c98f5ec90b6',ramdisk_id='',reservation_id='r-i6f00o4z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-82162709',owner_user_name='tempest-SecurityGroupsTestJSON-82162709-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:12:04Z,user_data=None,user_id='e2a366ecf5934af989ad59e70e8c0b40',uuid=0b6d53d9-53fc-4b0d-b849-fadc57280e6e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1bb3def9-caba-4dc1-8217-e979ae761982", "address": "fa:16:3e:9e:93:0e", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb3def9-ca", "ovs_interfaceid": "1bb3def9-caba-4dc1-8217-e979ae761982", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.193 2 DEBUG nova.network.os_vif_util [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Converting VIF {"id": "1bb3def9-caba-4dc1-8217-e979ae761982", "address": "fa:16:3e:9e:93:0e", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb3def9-ca", "ovs_interfaceid": "1bb3def9-caba-4dc1-8217-e979ae761982", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.194 2 DEBUG nova.network.os_vif_util [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:93:0e,bridge_name='br-int',has_traffic_filtering=True,id=1bb3def9-caba-4dc1-8217-e979ae761982,network=Network(b99f809e-5bc0-4d91-be9e-5e46fd286a27),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb3def9-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.194 2 DEBUG os_vif [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:93:0e,bridge_name='br-int',has_traffic_filtering=True,id=1bb3def9-caba-4dc1-8217-e979ae761982,network=Network(b99f809e-5bc0-4d91-be9e-5e46fd286a27),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb3def9-ca') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.195 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.196 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.199 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1bb3def9-ca, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.199 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1bb3def9-ca, col_values=(('external_ids', {'iface-id': '1bb3def9-caba-4dc1-8217-e979ae761982', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9e:93:0e', 'vm-uuid': '0b6d53d9-53fc-4b0d-b849-fadc57280e6e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:10 compute-0 NetworkManager[44987]: <info>  [1759407130.2019] manager: (tap1bb3def9-ca): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.211 2 INFO os_vif [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:93:0e,bridge_name='br-int',has_traffic_filtering=True,id=1bb3def9-caba-4dc1-8217-e979ae761982,network=Network(b99f809e-5bc0-4d91-be9e-5e46fd286a27),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb3def9-ca')
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.339 2 DEBUG nova.virt.libvirt.driver [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.340 2 DEBUG nova.virt.libvirt.driver [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.340 2 DEBUG nova.virt.libvirt.driver [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] No VIF found with MAC fa:16:3e:9e:93:0e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.341 2 INFO nova.virt.libvirt.driver [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Using config drive
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.379 2 DEBUG nova.storage.rbd_utils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] rbd image 0b6d53d9-53fc-4b0d-b849-fadc57280e6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.883 2 INFO nova.virt.libvirt.driver [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Creating config drive at /var/lib/nova/instances/0b6d53d9-53fc-4b0d-b849-fadc57280e6e/disk.config
Oct 02 12:12:10 compute-0 nova_compute[257802]: 2025-10-02 12:12:10.890 2 DEBUG oslo_concurrency.processutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0b6d53d9-53fc-4b0d-b849-fadc57280e6e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmdukbixn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:12:11 compute-0 nova_compute[257802]: 2025-10-02 12:12:11.024 2 DEBUG oslo_concurrency.processutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0b6d53d9-53fc-4b0d-b849-fadc57280e6e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmdukbixn" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:12:11 compute-0 nova_compute[257802]: 2025-10-02 12:12:11.056 2 DEBUG nova.storage.rbd_utils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] rbd image 0b6d53d9-53fc-4b0d-b849-fadc57280e6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:12:11 compute-0 nova_compute[257802]: 2025-10-02 12:12:11.060 2 DEBUG oslo_concurrency.processutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0b6d53d9-53fc-4b0d-b849-fadc57280e6e/disk.config 0b6d53d9-53fc-4b0d-b849-fadc57280e6e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:12:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 268 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.2 MiB/s wr, 128 op/s
Oct 02 12:12:11 compute-0 nova_compute[257802]: 2025-10-02 12:12:11.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:11 compute-0 nova_compute[257802]: 2025-10-02 12:12:11.525 2 DEBUG nova.network.neutron [req-6c8917ec-e29c-4b9a-8f34-1fe10be35747 req-047d8279-8302-4472-b1b0-c09b40695fac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Updated VIF entry in instance network info cache for port 1bb3def9-caba-4dc1-8217-e979ae761982. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:12:11 compute-0 nova_compute[257802]: 2025-10-02 12:12:11.526 2 DEBUG nova.network.neutron [req-6c8917ec-e29c-4b9a-8f34-1fe10be35747 req-047d8279-8302-4472-b1b0-c09b40695fac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Updating instance_info_cache with network_info: [{"id": "1bb3def9-caba-4dc1-8217-e979ae761982", "address": "fa:16:3e:9e:93:0e", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb3def9-ca", "ovs_interfaceid": "1bb3def9-caba-4dc1-8217-e979ae761982", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:12:11 compute-0 nova_compute[257802]: 2025-10-02 12:12:11.557 2 DEBUG oslo_concurrency.lockutils [req-6c8917ec-e29c-4b9a-8f34-1fe10be35747 req-047d8279-8302-4472-b1b0-c09b40695fac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-0b6d53d9-53fc-4b0d-b849-fadc57280e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:12:11 compute-0 ceph-mon[73607]: pgmap v1396: 305 pgs: 305 active+clean; 260 MiB data, 615 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.0 MiB/s wr, 144 op/s
Oct 02 12:12:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2122675690' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:12:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:12:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:11.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:12:11 compute-0 podman[292098]: 2025-10-02 12:12:11.911572891 +0000 UTC m=+0.048414875 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:12:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:11.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:11 compute-0 nova_compute[257802]: 2025-10-02 12:12:11.988 2 DEBUG oslo_concurrency.processutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0b6d53d9-53fc-4b0d-b849-fadc57280e6e/disk.config 0b6d53d9-53fc-4b0d-b849-fadc57280e6e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.928s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:12:11 compute-0 nova_compute[257802]: 2025-10-02 12:12:11.988 2 INFO nova.virt.libvirt.driver [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Deleting local config drive /var/lib/nova/instances/0b6d53d9-53fc-4b0d-b849-fadc57280e6e/disk.config because it was imported into RBD.
Oct 02 12:12:12 compute-0 NetworkManager[44987]: <info>  [1759407132.0351] manager: (tap1bb3def9-ca): new Tun device (/org/freedesktop/NetworkManager/Devices/91)
Oct 02 12:12:12 compute-0 kernel: tap1bb3def9-ca: entered promiscuous mode
Oct 02 12:12:12 compute-0 nova_compute[257802]: 2025-10-02 12:12:12.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:12 compute-0 ovn_controller[148183]: 2025-10-02T12:12:12Z|00190|binding|INFO|Claiming lport 1bb3def9-caba-4dc1-8217-e979ae761982 for this chassis.
Oct 02 12:12:12 compute-0 ovn_controller[148183]: 2025-10-02T12:12:12Z|00191|binding|INFO|1bb3def9-caba-4dc1-8217-e979ae761982: Claiming fa:16:3e:9e:93:0e 10.100.0.13
Oct 02 12:12:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:12.050 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:93:0e 10.100.0.13'], port_security=['fa:16:3e:9e:93:0e 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0b6d53d9-53fc-4b0d-b849-fadc57280e6e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b99f809e-5bc0-4d91-be9e-5e46fd286a27', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3110a9141bf416d96e84c98f5ec90b6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c52c7c6e-0669-4ab9-8dc6-1a7e6f706889', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d030627f-5151-46c4-9ad6-e8160d673b8b, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=1bb3def9-caba-4dc1-8217-e979ae761982) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:12:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:12.052 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 1bb3def9-caba-4dc1-8217-e979ae761982 in datapath b99f809e-5bc0-4d91-be9e-5e46fd286a27 bound to our chassis
Oct 02 12:12:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:12.053 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b99f809e-5bc0-4d91-be9e-5e46fd286a27
Oct 02 12:12:12 compute-0 ovn_controller[148183]: 2025-10-02T12:12:12Z|00192|binding|INFO|Setting lport 1bb3def9-caba-4dc1-8217-e979ae761982 ovn-installed in OVS
Oct 02 12:12:12 compute-0 ovn_controller[148183]: 2025-10-02T12:12:12Z|00193|binding|INFO|Setting lport 1bb3def9-caba-4dc1-8217-e979ae761982 up in Southbound
Oct 02 12:12:12 compute-0 nova_compute[257802]: 2025-10-02 12:12:12.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:12 compute-0 systemd-udevd[292132]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:12:12 compute-0 systemd-machined[211836]: New machine qemu-25-instance-00000036.
Oct 02 12:12:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:12.068 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[033a6d42-5744-480c-8902-f971c8f68d17]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:12 compute-0 systemd[1]: Started Virtual Machine qemu-25-instance-00000036.
Oct 02 12:12:12 compute-0 NetworkManager[44987]: <info>  [1759407132.0824] device (tap1bb3def9-ca): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:12:12 compute-0 NetworkManager[44987]: <info>  [1759407132.0836] device (tap1bb3def9-ca): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:12:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:12.099 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[f165dd74-da40-43cd-9803-ea4c67ff7b69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:12.102 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[0a43cde6-2f36-44d6-a671-91589328a2d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:12.129 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[5294bdc0-97be-4588-8c98-0614a3e1360f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:12.147 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[98a3bbab-474e-4dc5-846a-5fffb5c1237e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb99f809e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:e4:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 515748, 'reachable_time': 38139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292142, 'error': None, 'target': 'ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:12.160 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[05a37f81-bac3-4a9a-a117-1eb8c89d2137]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb99f809e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 515759, 'tstamp': 515759}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292145, 'error': None, 'target': 'ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb99f809e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 515762, 'tstamp': 515762}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292145, 'error': None, 'target': 'ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:12.161 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb99f809e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:12:12 compute-0 nova_compute[257802]: 2025-10-02 12:12:12.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:12.164 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb99f809e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:12:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:12.164 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:12:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:12.165 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb99f809e-50, col_values=(('external_ids', {'iface-id': '27833f83-24e9-4232-9a1b-0a4ea68e9e00'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:12:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:12.165 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:12:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:12:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:12:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:12:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:12:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:12:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.062 2 DEBUG nova.compute.manager [req-b2f9de24-64a2-459f-b421-a438f3ffa124 req-a67dc1b2-4269-41b1-9498-9f5ab9a97c1e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Received event network-vif-plugged-1bb3def9-caba-4dc1-8217-e979ae761982 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.063 2 DEBUG oslo_concurrency.lockutils [req-b2f9de24-64a2-459f-b421-a438f3ffa124 req-a67dc1b2-4269-41b1-9498-9f5ab9a97c1e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.063 2 DEBUG oslo_concurrency.lockutils [req-b2f9de24-64a2-459f-b421-a438f3ffa124 req-a67dc1b2-4269-41b1-9498-9f5ab9a97c1e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.063 2 DEBUG oslo_concurrency.lockutils [req-b2f9de24-64a2-459f-b421-a438f3ffa124 req-a67dc1b2-4269-41b1-9498-9f5ab9a97c1e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.063 2 DEBUG nova.compute.manager [req-b2f9de24-64a2-459f-b421-a438f3ffa124 req-a67dc1b2-4269-41b1-9498-9f5ab9a97c1e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Processing event network-vif-plugged-1bb3def9-caba-4dc1-8217-e979ae761982 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:12:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 278 MiB data, 641 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.0 MiB/s wr, 130 op/s
Oct 02 12:12:13 compute-0 ceph-mon[73607]: pgmap v1397: 305 pgs: 305 active+clean; 268 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.2 MiB/s wr, 128 op/s
Oct 02 12:12:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.757 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407133.7568934, 0b6d53d9-53fc-4b0d-b849-fadc57280e6e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.757 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] VM Started (Lifecycle Event)
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.759 2 DEBUG nova.compute.manager [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.762 2 DEBUG nova.virt.libvirt.driver [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.765 2 INFO nova.virt.libvirt.driver [-] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Instance spawned successfully.
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.765 2 DEBUG nova.virt.libvirt.driver [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.790 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.796 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.800 2 DEBUG nova.virt.libvirt.driver [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.801 2 DEBUG nova.virt.libvirt.driver [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.801 2 DEBUG nova.virt.libvirt.driver [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.802 2 DEBUG nova.virt.libvirt.driver [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.802 2 DEBUG nova.virt.libvirt.driver [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.802 2 DEBUG nova.virt.libvirt.driver [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.828 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.828 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407133.7570143, 0b6d53d9-53fc-4b0d-b849-fadc57280e6e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.828 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] VM Paused (Lifecycle Event)
Oct 02 12:12:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:12:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:13.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.857 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.860 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407133.7616742, 0b6d53d9-53fc-4b0d-b849-fadc57280e6e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.861 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] VM Resumed (Lifecycle Event)
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.885 2 INFO nova.compute.manager [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Took 9.26 seconds to spawn the instance on the hypervisor.
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.885 2 DEBUG nova.compute.manager [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.886 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.892 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.950 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:12:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:13.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:13 compute-0 nova_compute[257802]: 2025-10-02 12:12:13.989 2 INFO nova.compute.manager [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Took 10.38 seconds to build instance.
Oct 02 12:12:14 compute-0 nova_compute[257802]: 2025-10-02 12:12:14.017 2 DEBUG oslo_concurrency.lockutils [None req-e8137865-4af8-4e9a-a5c1-ad885629c137 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:14 compute-0 sudo[292192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:14 compute-0 sudo[292192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:14 compute-0 sudo[292192]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:14 compute-0 sudo[292217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:14 compute-0 sudo[292217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:14 compute-0 sudo[292217]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:15 compute-0 ceph-mon[73607]: pgmap v1398: 305 pgs: 305 active+clean; 278 MiB data, 641 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.0 MiB/s wr, 130 op/s
Oct 02 12:12:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 282 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.2 MiB/s wr, 142 op/s
Oct 02 12:12:15 compute-0 nova_compute[257802]: 2025-10-02 12:12:15.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:15 compute-0 nova_compute[257802]: 2025-10-02 12:12:15.270 2 DEBUG nova.compute.manager [req-e4f5323e-d2fd-4f47-8bff-3474f54e6b79 req-51c69fed-3689-49c6-8da6-a1b57ac1ee59 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Received event network-vif-plugged-1bb3def9-caba-4dc1-8217-e979ae761982 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:12:15 compute-0 nova_compute[257802]: 2025-10-02 12:12:15.271 2 DEBUG oslo_concurrency.lockutils [req-e4f5323e-d2fd-4f47-8bff-3474f54e6b79 req-51c69fed-3689-49c6-8da6-a1b57ac1ee59 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:15 compute-0 nova_compute[257802]: 2025-10-02 12:12:15.271 2 DEBUG oslo_concurrency.lockutils [req-e4f5323e-d2fd-4f47-8bff-3474f54e6b79 req-51c69fed-3689-49c6-8da6-a1b57ac1ee59 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:15 compute-0 nova_compute[257802]: 2025-10-02 12:12:15.271 2 DEBUG oslo_concurrency.lockutils [req-e4f5323e-d2fd-4f47-8bff-3474f54e6b79 req-51c69fed-3689-49c6-8da6-a1b57ac1ee59 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:15 compute-0 nova_compute[257802]: 2025-10-02 12:12:15.272 2 DEBUG nova.compute.manager [req-e4f5323e-d2fd-4f47-8bff-3474f54e6b79 req-51c69fed-3689-49c6-8da6-a1b57ac1ee59 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] No waiting events found dispatching network-vif-plugged-1bb3def9-caba-4dc1-8217-e979ae761982 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:12:15 compute-0 nova_compute[257802]: 2025-10-02 12:12:15.272 2 WARNING nova.compute.manager [req-e4f5323e-d2fd-4f47-8bff-3474f54e6b79 req-51c69fed-3689-49c6-8da6-a1b57ac1ee59 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Received unexpected event network-vif-plugged-1bb3def9-caba-4dc1-8217-e979ae761982 for instance with vm_state active and task_state None.
Oct 02 12:12:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:15.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:15 compute-0 podman[292243]: 2025-10-02 12:12:15.937297164 +0000 UTC m=+0.070440609 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:12:15 compute-0 podman[292242]: 2025-10-02 12:12:15.94376189 +0000 UTC m=+0.076669968 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:12:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:15.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:16 compute-0 ceph-mon[73607]: pgmap v1399: 305 pgs: 305 active+clean; 282 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.2 MiB/s wr, 142 op/s
Oct 02 12:12:16 compute-0 nova_compute[257802]: 2025-10-02 12:12:16.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:16 compute-0 sudo[292281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:16 compute-0 sudo[292281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:16 compute-0 sudo[292281]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:16 compute-0 sudo[292306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:12:16 compute-0 sudo[292306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:16 compute-0 sudo[292306]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:17 compute-0 sudo[292331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:17 compute-0 sudo[292331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:17 compute-0 sudo[292331]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:17.046 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:12:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:17.047 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:12:17 compute-0 nova_compute[257802]: 2025-10-02 12:12:17.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:17 compute-0 sudo[292356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:12:17 compute-0 sudo[292356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 287 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 182 op/s
Oct 02 12:12:17 compute-0 sudo[292356]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:17.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:17.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:18 compute-0 nova_compute[257802]: 2025-10-02 12:12:18.289 2 DEBUG nova.compute.manager [req-b1386749-5a8f-4f4f-a521-c854db5f5aa7 req-dcee3ed3-e498-4847-8967-a5e571fe6a65 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Received event network-changed-1bb3def9-caba-4dc1-8217-e979ae761982 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:12:18 compute-0 nova_compute[257802]: 2025-10-02 12:12:18.290 2 DEBUG nova.compute.manager [req-b1386749-5a8f-4f4f-a521-c854db5f5aa7 req-dcee3ed3-e498-4847-8967-a5e571fe6a65 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Refreshing instance network info cache due to event network-changed-1bb3def9-caba-4dc1-8217-e979ae761982. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:12:18 compute-0 nova_compute[257802]: 2025-10-02 12:12:18.290 2 DEBUG oslo_concurrency.lockutils [req-b1386749-5a8f-4f4f-a521-c854db5f5aa7 req-dcee3ed3-e498-4847-8967-a5e571fe6a65 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-0b6d53d9-53fc-4b0d-b849-fadc57280e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:12:18 compute-0 nova_compute[257802]: 2025-10-02 12:12:18.290 2 DEBUG oslo_concurrency.lockutils [req-b1386749-5a8f-4f4f-a521-c854db5f5aa7 req-dcee3ed3-e498-4847-8967-a5e571fe6a65 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-0b6d53d9-53fc-4b0d-b849-fadc57280e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:12:18 compute-0 nova_compute[257802]: 2025-10-02 12:12:18.290 2 DEBUG nova.network.neutron [req-b1386749-5a8f-4f4f-a521-c854db5f5aa7 req-dcee3ed3-e498-4847-8967-a5e571fe6a65 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Refreshing network info cache for port 1bb3def9-caba-4dc1-8217-e979ae761982 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:12:18 compute-0 ceph-mon[73607]: pgmap v1400: 305 pgs: 305 active+clean; 287 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 182 op/s
Oct 02 12:12:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:12:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:12:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:12:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:12:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:19.050 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:12:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 293 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.3 MiB/s wr, 223 op/s
Oct 02 12:12:19 compute-0 nova_compute[257802]: 2025-10-02 12:12:19.443 2 DEBUG oslo_concurrency.lockutils [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Acquiring lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:19 compute-0 nova_compute[257802]: 2025-10-02 12:12:19.444 2 DEBUG oslo_concurrency.lockutils [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:19 compute-0 nova_compute[257802]: 2025-10-02 12:12:19.444 2 INFO nova.compute.manager [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Rebooting instance
Oct 02 12:12:19 compute-0 nova_compute[257802]: 2025-10-02 12:12:19.462 2 DEBUG oslo_concurrency.lockutils [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Acquiring lock "refresh_cache-0b6d53d9-53fc-4b0d-b849-fadc57280e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:12:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:12:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:19.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:12:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:12:19 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:12:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:12:19 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:12:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:12:19 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:12:19 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 0e49ca69-3e66-4716-a5b3-bc8f76752c57 does not exist
Oct 02 12:12:19 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 16a9dc95-731c-40b9-b81e-ee5a6b0d89fb does not exist
Oct 02 12:12:19 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 93809609-6298-4420-a429-40cc2cf6a5be does not exist
Oct 02 12:12:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:12:19 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:12:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:12:19 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:12:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:12:19 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:12:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:12:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:12:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:12:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:12:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:19.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:19 compute-0 sudo[292412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:19 compute-0 sudo[292412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:20 compute-0 sudo[292412]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:20 compute-0 sudo[292443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:12:20 compute-0 sudo[292443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:20 compute-0 sudo[292443]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:20 compute-0 podman[292436]: 2025-10-02 12:12:20.101933093 +0000 UTC m=+0.089786727 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 12:12:20 compute-0 sudo[292484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:20 compute-0 sudo[292484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:20 compute-0 sudo[292484]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:20 compute-0 sudo[292513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:12:20 compute-0 sudo[292513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:20 compute-0 nova_compute[257802]: 2025-10-02 12:12:20.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:20 compute-0 podman[292579]: 2025-10-02 12:12:20.548821532 +0000 UTC m=+0.054200414 container create 2fbe14a9cb5be1d68ef522b62e6ccf573aac42a2ddeb9e054af0ca10912bbc1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Oct 02 12:12:20 compute-0 systemd[1]: Started libpod-conmon-2fbe14a9cb5be1d68ef522b62e6ccf573aac42a2ddeb9e054af0ca10912bbc1d.scope.
Oct 02 12:12:20 compute-0 podman[292579]: 2025-10-02 12:12:20.515788912 +0000 UTC m=+0.021167814 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:12:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:12:20 compute-0 podman[292579]: 2025-10-02 12:12:20.653642682 +0000 UTC m=+0.159021584 container init 2fbe14a9cb5be1d68ef522b62e6ccf573aac42a2ddeb9e054af0ca10912bbc1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 12:12:20 compute-0 podman[292579]: 2025-10-02 12:12:20.661061893 +0000 UTC m=+0.166440785 container start 2fbe14a9cb5be1d68ef522b62e6ccf573aac42a2ddeb9e054af0ca10912bbc1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:12:20 compute-0 systemd[1]: libpod-2fbe14a9cb5be1d68ef522b62e6ccf573aac42a2ddeb9e054af0ca10912bbc1d.scope: Deactivated successfully.
Oct 02 12:12:20 compute-0 musing_varahamihira[292595]: 167 167
Oct 02 12:12:20 compute-0 conmon[292595]: conmon 2fbe14a9cb5be1d68ef5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2fbe14a9cb5be1d68ef522b62e6ccf573aac42a2ddeb9e054af0ca10912bbc1d.scope/container/memory.events
Oct 02 12:12:20 compute-0 podman[292579]: 2025-10-02 12:12:20.681289612 +0000 UTC m=+0.186668514 container attach 2fbe14a9cb5be1d68ef522b62e6ccf573aac42a2ddeb9e054af0ca10912bbc1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_varahamihira, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 12:12:20 compute-0 podman[292579]: 2025-10-02 12:12:20.683466506 +0000 UTC m=+0.188845388 container died 2fbe14a9cb5be1d68ef522b62e6ccf573aac42a2ddeb9e054af0ca10912bbc1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_varahamihira, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:12:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f52278e004d68703d99f3680f1a4221fd5e01daca4cbd136cd80ec8890383cb-merged.mount: Deactivated successfully.
Oct 02 12:12:20 compute-0 podman[292579]: 2025-10-02 12:12:20.827008683 +0000 UTC m=+0.332387565 container remove 2fbe14a9cb5be1d68ef522b62e6ccf573aac42a2ddeb9e054af0ca10912bbc1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:12:20 compute-0 systemd[1]: libpod-conmon-2fbe14a9cb5be1d68ef522b62e6ccf573aac42a2ddeb9e054af0ca10912bbc1d.scope: Deactivated successfully.
Oct 02 12:12:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Oct 02 12:12:21 compute-0 podman[292622]: 2025-10-02 12:12:21.115692799 +0000 UTC m=+0.149151485 container create 944a101cb51321a66e7608939c1cb619833a81b97df230f5ea0b1740a32bd735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wu, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:12:21 compute-0 podman[292622]: 2025-10-02 12:12:21.059780334 +0000 UTC m=+0.093239060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:12:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 293 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 209 op/s
Oct 02 12:12:21 compute-0 systemd[1]: Started libpod-conmon-944a101cb51321a66e7608939c1cb619833a81b97df230f5ea0b1740a32bd735.scope.
Oct 02 12:12:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:12:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf66f637890bbf5d0b7c9c4b880a772bd13c864aac4fddc1fe928f8c74f1ba87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:12:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf66f637890bbf5d0b7c9c4b880a772bd13c864aac4fddc1fe928f8c74f1ba87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:12:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf66f637890bbf5d0b7c9c4b880a772bd13c864aac4fddc1fe928f8c74f1ba87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:12:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf66f637890bbf5d0b7c9c4b880a772bd13c864aac4fddc1fe928f8c74f1ba87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:12:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf66f637890bbf5d0b7c9c4b880a772bd13c864aac4fddc1fe928f8c74f1ba87/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:12:21 compute-0 nova_compute[257802]: 2025-10-02 12:12:21.224 2 DEBUG nova.network.neutron [req-b1386749-5a8f-4f4f-a521-c854db5f5aa7 req-dcee3ed3-e498-4847-8967-a5e571fe6a65 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Updated VIF entry in instance network info cache for port 1bb3def9-caba-4dc1-8217-e979ae761982. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:12:21 compute-0 nova_compute[257802]: 2025-10-02 12:12:21.226 2 DEBUG nova.network.neutron [req-b1386749-5a8f-4f4f-a521-c854db5f5aa7 req-dcee3ed3-e498-4847-8967-a5e571fe6a65 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Updating instance_info_cache with network_info: [{"id": "1bb3def9-caba-4dc1-8217-e979ae761982", "address": "fa:16:3e:9e:93:0e", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb3def9-ca", "ovs_interfaceid": "1bb3def9-caba-4dc1-8217-e979ae761982", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:12:21 compute-0 nova_compute[257802]: 2025-10-02 12:12:21.249 2 DEBUG oslo_concurrency.lockutils [req-b1386749-5a8f-4f4f-a521-c854db5f5aa7 req-dcee3ed3-e498-4847-8967-a5e571fe6a65 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-0b6d53d9-53fc-4b0d-b849-fadc57280e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:12:21 compute-0 nova_compute[257802]: 2025-10-02 12:12:21.249 2 DEBUG oslo_concurrency.lockutils [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Acquired lock "refresh_cache-0b6d53d9-53fc-4b0d-b849-fadc57280e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:12:21 compute-0 nova_compute[257802]: 2025-10-02 12:12:21.250 2 DEBUG nova.network.neutron [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:12:21 compute-0 podman[292622]: 2025-10-02 12:12:21.273475963 +0000 UTC m=+0.306934639 container init 944a101cb51321a66e7608939c1cb619833a81b97df230f5ea0b1740a32bd735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wu, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:12:21 compute-0 podman[292622]: 2025-10-02 12:12:21.280815 +0000 UTC m=+0.314273646 container start 944a101cb51321a66e7608939c1cb619833a81b97df230f5ea0b1740a32bd735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wu, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 12:12:21 compute-0 podman[292622]: 2025-10-02 12:12:21.285293209 +0000 UTC m=+0.318751875 container attach 944a101cb51321a66e7608939c1cb619833a81b97df230f5ea0b1740a32bd735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:12:21 compute-0 nova_compute[257802]: 2025-10-02 12:12:21.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:21 compute-0 ceph-mon[73607]: pgmap v1401: 305 pgs: 305 active+clean; 293 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.3 MiB/s wr, 223 op/s
Oct 02 12:12:21 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:12:21 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:12:21 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:12:21 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:12:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:12:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:21.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:12:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Oct 02 12:12:21 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Oct 02 12:12:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:21.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:22 compute-0 inspiring_wu[292639]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:12:22 compute-0 inspiring_wu[292639]: --> relative data size: 1.0
Oct 02 12:12:22 compute-0 inspiring_wu[292639]: --> All data devices are unavailable
Oct 02 12:12:22 compute-0 systemd[1]: libpod-944a101cb51321a66e7608939c1cb619833a81b97df230f5ea0b1740a32bd735.scope: Deactivated successfully.
Oct 02 12:12:22 compute-0 podman[292622]: 2025-10-02 12:12:22.090077941 +0000 UTC m=+1.123536587 container died 944a101cb51321a66e7608939c1cb619833a81b97df230f5ea0b1740a32bd735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wu, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 12:12:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf66f637890bbf5d0b7c9c4b880a772bd13c864aac4fddc1fe928f8c74f1ba87-merged.mount: Deactivated successfully.
Oct 02 12:12:22 compute-0 podman[292622]: 2025-10-02 12:12:22.3170284 +0000 UTC m=+1.350487046 container remove 944a101cb51321a66e7608939c1cb619833a81b97df230f5ea0b1740a32bd735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wu, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:12:22 compute-0 systemd[1]: libpod-conmon-944a101cb51321a66e7608939c1cb619833a81b97df230f5ea0b1740a32bd735.scope: Deactivated successfully.
Oct 02 12:12:22 compute-0 sudo[292513]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:22 compute-0 sudo[292666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:22 compute-0 sudo[292666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:22 compute-0 sudo[292666]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:22 compute-0 sudo[292692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:12:22 compute-0 sudo[292692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:22 compute-0 sudo[292692]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:22 compute-0 sudo[292717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:22 compute-0 sudo[292717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:22 compute-0 sudo[292717]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:22 compute-0 sudo[292742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:12:22 compute-0 sudo[292742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:22 compute-0 ceph-mon[73607]: pgmap v1402: 305 pgs: 305 active+clean; 293 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 209 op/s
Oct 02 12:12:22 compute-0 ceph-mon[73607]: osdmap e191: 3 total, 3 up, 3 in
Oct 02 12:12:22 compute-0 podman[292807]: 2025-10-02 12:12:22.977162887 +0000 UTC m=+0.066952743 container create ea0d71b56da37f5b6feb24e0a7666914d81212d5a7d755fa00f3e99297f31205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_swirles, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:12:23 compute-0 systemd[1]: Started libpod-conmon-ea0d71b56da37f5b6feb24e0a7666914d81212d5a7d755fa00f3e99297f31205.scope.
Oct 02 12:12:23 compute-0 podman[292807]: 2025-10-02 12:12:22.943509312 +0000 UTC m=+0.033299208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:12:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:12:23 compute-0 podman[292807]: 2025-10-02 12:12:23.061200484 +0000 UTC m=+0.150990380 container init ea0d71b56da37f5b6feb24e0a7666914d81212d5a7d755fa00f3e99297f31205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_swirles, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:12:23 compute-0 podman[292807]: 2025-10-02 12:12:23.067070586 +0000 UTC m=+0.156860442 container start ea0d71b56da37f5b6feb24e0a7666914d81212d5a7d755fa00f3e99297f31205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_swirles, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 12:12:23 compute-0 podman[292807]: 2025-10-02 12:12:23.071104994 +0000 UTC m=+0.160894890 container attach ea0d71b56da37f5b6feb24e0a7666914d81212d5a7d755fa00f3e99297f31205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:12:23 compute-0 keen_swirles[292823]: 167 167
Oct 02 12:12:23 compute-0 systemd[1]: libpod-ea0d71b56da37f5b6feb24e0a7666914d81212d5a7d755fa00f3e99297f31205.scope: Deactivated successfully.
Oct 02 12:12:23 compute-0 podman[292807]: 2025-10-02 12:12:23.075094551 +0000 UTC m=+0.164884417 container died ea0d71b56da37f5b6feb24e0a7666914d81212d5a7d755fa00f3e99297f31205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:12:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-cca6206dbe264d527b390ed6479d9bd1ba40671ee47defffe7065e4bf7f419cb-merged.mount: Deactivated successfully.
Oct 02 12:12:23 compute-0 podman[292807]: 2025-10-02 12:12:23.122176841 +0000 UTC m=+0.211966687 container remove ea0d71b56da37f5b6feb24e0a7666914d81212d5a7d755fa00f3e99297f31205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_swirles, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 12:12:23 compute-0 systemd[1]: libpod-conmon-ea0d71b56da37f5b6feb24e0a7666914d81212d5a7d755fa00f3e99297f31205.scope: Deactivated successfully.
Oct 02 12:12:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 297 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.0 MiB/s wr, 203 op/s
Oct 02 12:12:23 compute-0 podman[292846]: 2025-10-02 12:12:23.315155317 +0000 UTC m=+0.066224925 container create b97475083e53f5cc65f788ea30987fe6d55823420ba86d36e5e6c4b3c373010c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 12:12:23 compute-0 systemd[1]: Started libpod-conmon-b97475083e53f5cc65f788ea30987fe6d55823420ba86d36e5e6c4b3c373010c.scope.
Oct 02 12:12:23 compute-0 podman[292846]: 2025-10-02 12:12:23.277999238 +0000 UTC m=+0.029068936 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:12:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca7fee19311abbea19bccccdc68e23e5e6545170dac5e3d79ba687876daf0f0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca7fee19311abbea19bccccdc68e23e5e6545170dac5e3d79ba687876daf0f0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca7fee19311abbea19bccccdc68e23e5e6545170dac5e3d79ba687876daf0f0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca7fee19311abbea19bccccdc68e23e5e6545170dac5e3d79ba687876daf0f0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:12:23 compute-0 podman[292846]: 2025-10-02 12:12:23.424723672 +0000 UTC m=+0.175793280 container init b97475083e53f5cc65f788ea30987fe6d55823420ba86d36e5e6c4b3c373010c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_greider, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:12:23 compute-0 podman[292846]: 2025-10-02 12:12:23.433933106 +0000 UTC m=+0.185002714 container start b97475083e53f5cc65f788ea30987fe6d55823420ba86d36e5e6c4b3c373010c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:12:23 compute-0 podman[292846]: 2025-10-02 12:12:23.437725958 +0000 UTC m=+0.188795576 container attach b97475083e53f5cc65f788ea30987fe6d55823420ba86d36e5e6c4b3c373010c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_greider, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 12:12:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:23.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Oct 02 12:12:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:23.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Oct 02 12:12:24 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Oct 02 12:12:24 compute-0 heuristic_greider[292862]: {
Oct 02 12:12:24 compute-0 heuristic_greider[292862]:     "1": [
Oct 02 12:12:24 compute-0 heuristic_greider[292862]:         {
Oct 02 12:12:24 compute-0 heuristic_greider[292862]:             "devices": [
Oct 02 12:12:24 compute-0 heuristic_greider[292862]:                 "/dev/loop3"
Oct 02 12:12:24 compute-0 heuristic_greider[292862]:             ],
Oct 02 12:12:24 compute-0 heuristic_greider[292862]:             "lv_name": "ceph_lv0",
Oct 02 12:12:24 compute-0 heuristic_greider[292862]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:12:24 compute-0 heuristic_greider[292862]:             "lv_size": "7511998464",
Oct 02 12:12:24 compute-0 heuristic_greider[292862]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:12:24 compute-0 heuristic_greider[292862]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:12:24 compute-0 heuristic_greider[292862]:             "name": "ceph_lv0",
Oct 02 12:12:24 compute-0 heuristic_greider[292862]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:12:24 compute-0 heuristic_greider[292862]:             "tags": {
Oct 02 12:12:24 compute-0 heuristic_greider[292862]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:12:24 compute-0 heuristic_greider[292862]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:12:24 compute-0 heuristic_greider[292862]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:12:24 compute-0 heuristic_greider[292862]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:12:24 compute-0 heuristic_greider[292862]:                 "ceph.cluster_name": "ceph",
Oct 02 12:12:24 compute-0 heuristic_greider[292862]:                 "ceph.crush_device_class": "",
Oct 02 12:12:24 compute-0 heuristic_greider[292862]:                 "ceph.encrypted": "0",
Oct 02 12:12:24 compute-0 heuristic_greider[292862]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:12:24 compute-0 heuristic_greider[292862]:                 "ceph.osd_id": "1",
Oct 02 12:12:24 compute-0 heuristic_greider[292862]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:12:24 compute-0 heuristic_greider[292862]:                 "ceph.type": "block",
Oct 02 12:12:24 compute-0 heuristic_greider[292862]:                 "ceph.vdo": "0"
Oct 02 12:12:24 compute-0 heuristic_greider[292862]:             },
Oct 02 12:12:24 compute-0 heuristic_greider[292862]:             "type": "block",
Oct 02 12:12:24 compute-0 heuristic_greider[292862]:             "vg_name": "ceph_vg0"
Oct 02 12:12:24 compute-0 heuristic_greider[292862]:         }
Oct 02 12:12:24 compute-0 heuristic_greider[292862]:     ]
Oct 02 12:12:24 compute-0 heuristic_greider[292862]: }
Oct 02 12:12:24 compute-0 systemd[1]: libpod-b97475083e53f5cc65f788ea30987fe6d55823420ba86d36e5e6c4b3c373010c.scope: Deactivated successfully.
Oct 02 12:12:24 compute-0 podman[292846]: 2025-10-02 12:12:24.159761434 +0000 UTC m=+0.910831032 container died b97475083e53f5cc65f788ea30987fe6d55823420ba86d36e5e6c4b3c373010c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 12:12:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca7fee19311abbea19bccccdc68e23e5e6545170dac5e3d79ba687876daf0f0d-merged.mount: Deactivated successfully.
Oct 02 12:12:24 compute-0 podman[292846]: 2025-10-02 12:12:24.22684152 +0000 UTC m=+0.977911128 container remove b97475083e53f5cc65f788ea30987fe6d55823420ba86d36e5e6c4b3c373010c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_greider, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 12:12:24 compute-0 systemd[1]: libpod-conmon-b97475083e53f5cc65f788ea30987fe6d55823420ba86d36e5e6c4b3c373010c.scope: Deactivated successfully.
Oct 02 12:12:24 compute-0 sudo[292742]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:24 compute-0 sudo[292885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:24 compute-0 sudo[292885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:24 compute-0 sudo[292885]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.375 2 DEBUG nova.network.neutron [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Updating instance_info_cache with network_info: [{"id": "1bb3def9-caba-4dc1-8217-e979ae761982", "address": "fa:16:3e:9e:93:0e", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb3def9-ca", "ovs_interfaceid": "1bb3def9-caba-4dc1-8217-e979ae761982", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:12:24 compute-0 sudo[292910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:12:24 compute-0 sudo[292910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:24 compute-0 sudo[292910]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.428 2 DEBUG oslo_concurrency.lockutils [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Releasing lock "refresh_cache-0b6d53d9-53fc-4b0d-b849-fadc57280e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.429 2 DEBUG nova.compute.manager [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:12:24 compute-0 sudo[292936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:24 compute-0 sudo[292936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:24 compute-0 sudo[292936]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:24 compute-0 sudo[292961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:12:24 compute-0 sudo[292961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:24 compute-0 kernel: tap1bb3def9-ca (unregistering): left promiscuous mode
Oct 02 12:12:24 compute-0 NetworkManager[44987]: <info>  [1759407144.6723] device (tap1bb3def9-ca): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:12:24 compute-0 ovn_controller[148183]: 2025-10-02T12:12:24Z|00194|binding|INFO|Releasing lport 1bb3def9-caba-4dc1-8217-e979ae761982 from this chassis (sb_readonly=0)
Oct 02 12:12:24 compute-0 ovn_controller[148183]: 2025-10-02T12:12:24Z|00195|binding|INFO|Setting lport 1bb3def9-caba-4dc1-8217-e979ae761982 down in Southbound
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:24 compute-0 ovn_controller[148183]: 2025-10-02T12:12:24Z|00196|binding|INFO|Removing iface tap1bb3def9-ca ovn-installed in OVS
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:24.689 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:93:0e 10.100.0.13'], port_security=['fa:16:3e:9e:93:0e 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0b6d53d9-53fc-4b0d-b849-fadc57280e6e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b99f809e-5bc0-4d91-be9e-5e46fd286a27', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3110a9141bf416d96e84c98f5ec90b6', 'neutron:revision_number': '5', 'neutron:security_group_ids': '18deabc2-ff14-4e97-938f-3c48233efc80 c52c7c6e-0669-4ab9-8dc6-1a7e6f706889', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d030627f-5151-46c4-9ad6-e8160d673b8b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=1bb3def9-caba-4dc1-8217-e979ae761982) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:12:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:24.690 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 1bb3def9-caba-4dc1-8217-e979ae761982 in datapath b99f809e-5bc0-4d91-be9e-5e46fd286a27 unbound from our chassis
Oct 02 12:12:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:24.692 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b99f809e-5bc0-4d91-be9e-5e46fd286a27
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:24.707 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[97f56beb-b8e2-474c-9450-c06805f397c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:24 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000036.scope: Deactivated successfully.
Oct 02 12:12:24 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000036.scope: Consumed 12.649s CPU time.
Oct 02 12:12:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:24.730 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[fb1328ce-b75f-4240-838c-e939a8616a03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:24 compute-0 systemd-machined[211836]: Machine qemu-25-instance-00000036 terminated.
Oct 02 12:12:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:24.732 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[4ec209c7-ac67-4e11-82ae-1fa4d09539a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:24.756 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[940273a5-c59c-42bf-baee-87b8dfc042c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:24.772 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e24a5596-4b09-494c-9f13-54827f974e1e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb99f809e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:e4:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 515748, 'reachable_time': 38139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293024, 'error': None, 'target': 'ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:24.787 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f001f1ad-1155-4c60-a591-794d3ba311eb]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb99f809e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 515759, 'tstamp': 515759}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293035, 'error': None, 'target': 'ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb99f809e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 515762, 'tstamp': 515762}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293035, 'error': None, 'target': 'ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:24.788 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb99f809e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:24.795 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb99f809e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:12:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:24.797 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:12:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:24.797 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb99f809e-50, col_values=(('external_ids', {'iface-id': '27833f83-24e9-4232-9a1b-0a4ea68e9e00'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:12:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:24.797 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.850 2 INFO nova.virt.libvirt.driver [-] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Instance destroyed successfully.
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.850 2 DEBUG nova.objects.instance [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lazy-loading 'resources' on Instance uuid 0b6d53d9-53fc-4b0d-b849-fadc57280e6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:12:24 compute-0 podman[293039]: 2025-10-02 12:12:24.850279268 +0000 UTC m=+0.038683209 container create 62270df76ab46e64d984f4442e353905c0fb6243a9738f76c880daa013b24c63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:12:24 compute-0 systemd[1]: Started libpod-conmon-62270df76ab46e64d984f4442e353905c0fb6243a9738f76c880daa013b24c63.scope.
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.901 2 DEBUG nova.virt.libvirt.vif [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:12:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1336832158',display_name='tempest-SecurityGroupsTestJSON-server-1336832158',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1336832158',id=54,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:12:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3110a9141bf416d96e84c98f5ec90b6',ramdisk_id='',reservation_id='r-i6f00o4z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-82162709',owner_user_name='tempest-SecurityGroupsTestJSON-82162709-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:12:24Z,user_data=None,user_id='e2a366ecf5934af989ad59e70e8c0b40',uuid=0b6d53d9-53fc-4b0d-b849-fadc57280e6e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1bb3def9-caba-4dc1-8217-e979ae761982", "address": "fa:16:3e:9e:93:0e", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb3def9-ca", "ovs_interfaceid": "1bb3def9-caba-4dc1-8217-e979ae761982", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.901 2 DEBUG nova.network.os_vif_util [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Converting VIF {"id": "1bb3def9-caba-4dc1-8217-e979ae761982", "address": "fa:16:3e:9e:93:0e", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb3def9-ca", "ovs_interfaceid": "1bb3def9-caba-4dc1-8217-e979ae761982", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.902 2 DEBUG nova.network.os_vif_util [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9e:93:0e,bridge_name='br-int',has_traffic_filtering=True,id=1bb3def9-caba-4dc1-8217-e979ae761982,network=Network(b99f809e-5bc0-4d91-be9e-5e46fd286a27),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb3def9-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.902 2 DEBUG os_vif [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:93:0e,bridge_name='br-int',has_traffic_filtering=True,id=1bb3def9-caba-4dc1-8217-e979ae761982,network=Network(b99f809e-5bc0-4d91-be9e-5e46fd286a27),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb3def9-ca') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.903 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1bb3def9-ca, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.909 2 INFO os_vif [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:93:0e,bridge_name='br-int',has_traffic_filtering=True,id=1bb3def9-caba-4dc1-8217-e979ae761982,network=Network(b99f809e-5bc0-4d91-be9e-5e46fd286a27),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb3def9-ca')
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.914 2 DEBUG nova.virt.libvirt.driver [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Start _get_guest_xml network_info=[{"id": "1bb3def9-caba-4dc1-8217-e979ae761982", "address": "fa:16:3e:9e:93:0e", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb3def9-ca", "ovs_interfaceid": "1bb3def9-caba-4dc1-8217-e979ae761982", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.917 2 WARNING nova.virt.libvirt.driver [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:12:24 compute-0 podman[293039]: 2025-10-02 12:12:24.923907262 +0000 UTC m=+0.112311223 container init 62270df76ab46e64d984f4442e353905c0fb6243a9738f76c880daa013b24c63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chaplygin, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.923 2 DEBUG nova.virt.libvirt.host [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.925 2 DEBUG nova.virt.libvirt.host [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:12:24 compute-0 podman[293039]: 2025-10-02 12:12:24.832811654 +0000 UTC m=+0.021215615 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.930 2 DEBUG nova.virt.libvirt.host [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.931 2 DEBUG nova.virt.libvirt.host [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:12:24 compute-0 podman[293039]: 2025-10-02 12:12:24.931941637 +0000 UTC m=+0.120345578 container start 62270df76ab46e64d984f4442e353905c0fb6243a9738f76c880daa013b24c63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chaplygin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.932 2 DEBUG nova.virt.libvirt.driver [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.932 2 DEBUG nova.virt.hardware [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.933 2 DEBUG nova.virt.hardware [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.933 2 DEBUG nova.virt.hardware [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.933 2 DEBUG nova.virt.hardware [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.934 2 DEBUG nova.virt.hardware [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.934 2 DEBUG nova.virt.hardware [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.934 2 DEBUG nova.virt.hardware [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.935 2 DEBUG nova.virt.hardware [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:12:24 compute-0 podman[293039]: 2025-10-02 12:12:24.935184015 +0000 UTC m=+0.123588006 container attach 62270df76ab46e64d984f4442e353905c0fb6243a9738f76c880daa013b24c63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chaplygin, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.935 2 DEBUG nova.virt.hardware [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.935 2 DEBUG nova.virt.hardware [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.935 2 DEBUG nova.virt.hardware [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.936 2 DEBUG nova.objects.instance [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0b6d53d9-53fc-4b0d-b849-fadc57280e6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:12:24 compute-0 systemd[1]: libpod-62270df76ab46e64d984f4442e353905c0fb6243a9738f76c880daa013b24c63.scope: Deactivated successfully.
Oct 02 12:12:24 compute-0 upbeat_chaplygin[293066]: 167 167
Oct 02 12:12:24 compute-0 conmon[293066]: conmon 62270df76ab46e64d984 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-62270df76ab46e64d984f4442e353905c0fb6243a9738f76c880daa013b24c63.scope/container/memory.events
Oct 02 12:12:24 compute-0 podman[293039]: 2025-10-02 12:12:24.939574491 +0000 UTC m=+0.127978442 container died 62270df76ab46e64d984f4442e353905c0fb6243a9738f76c880daa013b24c63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chaplygin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:12:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b6444539a95ebceaaded58442fe16572f404a6fbb5c3366a266d2221b7c1e1f-merged.mount: Deactivated successfully.
Oct 02 12:12:24 compute-0 nova_compute[257802]: 2025-10-02 12:12:24.968 2 DEBUG oslo_concurrency.processutils [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:12:24 compute-0 podman[293039]: 2025-10-02 12:12:24.973611367 +0000 UTC m=+0.162015308 container remove 62270df76ab46e64d984f4442e353905c0fb6243a9738f76c880daa013b24c63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 12:12:24 compute-0 systemd[1]: libpod-conmon-62270df76ab46e64d984f4442e353905c0fb6243a9738f76c880daa013b24c63.scope: Deactivated successfully.
Oct 02 12:12:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Oct 02 12:12:25 compute-0 ceph-mon[73607]: pgmap v1404: 305 pgs: 305 active+clean; 297 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.0 MiB/s wr, 203 op/s
Oct 02 12:12:25 compute-0 ceph-mon[73607]: osdmap e192: 3 total, 3 up, 3 in
Oct 02 12:12:25 compute-0 podman[293101]: 2025-10-02 12:12:25.159308936 +0000 UTC m=+0.047320978 container create be1eeb612a846bc1f875bd373d085eb7fbdca0d06b9d48582d426fd0c4635d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Oct 02 12:12:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 292 MiB data, 655 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.3 MiB/s wr, 133 op/s
Oct 02 12:12:25 compute-0 systemd[1]: Started libpod-conmon-be1eeb612a846bc1f875bd373d085eb7fbdca0d06b9d48582d426fd0c4635d27.scope.
Oct 02 12:12:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:12:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0039d5ec5834658f5a4804e3e40bf5a418cfb455ebec2ec527ccab134ac0c103/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:12:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0039d5ec5834658f5a4804e3e40bf5a418cfb455ebec2ec527ccab134ac0c103/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:12:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0039d5ec5834658f5a4804e3e40bf5a418cfb455ebec2ec527ccab134ac0c103/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:12:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0039d5ec5834658f5a4804e3e40bf5a418cfb455ebec2ec527ccab134ac0c103/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:12:25 compute-0 podman[293101]: 2025-10-02 12:12:25.226406762 +0000 UTC m=+0.114418804 container init be1eeb612a846bc1f875bd373d085eb7fbdca0d06b9d48582d426fd0c4635d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 12:12:25 compute-0 podman[293101]: 2025-10-02 12:12:25.233240498 +0000 UTC m=+0.121252520 container start be1eeb612a846bc1f875bd373d085eb7fbdca0d06b9d48582d426fd0c4635d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_einstein, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 12:12:25 compute-0 podman[293101]: 2025-10-02 12:12:25.237524281 +0000 UTC m=+0.125536303 container attach be1eeb612a846bc1f875bd373d085eb7fbdca0d06b9d48582d426fd0c4635d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct 02 12:12:25 compute-0 podman[293101]: 2025-10-02 12:12:25.143992395 +0000 UTC m=+0.032004427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:12:25 compute-0 nova_compute[257802]: 2025-10-02 12:12:25.257 2 DEBUG nova.compute.manager [req-bb18b398-49a1-47d5-86ac-dec929846f81 req-60f996b5-4662-4547-8a6b-7dab6ff2a8b8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Received event network-vif-unplugged-1bb3def9-caba-4dc1-8217-e979ae761982 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:12:25 compute-0 nova_compute[257802]: 2025-10-02 12:12:25.260 2 DEBUG oslo_concurrency.lockutils [req-bb18b398-49a1-47d5-86ac-dec929846f81 req-60f996b5-4662-4547-8a6b-7dab6ff2a8b8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:25 compute-0 nova_compute[257802]: 2025-10-02 12:12:25.260 2 DEBUG oslo_concurrency.lockutils [req-bb18b398-49a1-47d5-86ac-dec929846f81 req-60f996b5-4662-4547-8a6b-7dab6ff2a8b8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:25 compute-0 nova_compute[257802]: 2025-10-02 12:12:25.260 2 DEBUG oslo_concurrency.lockutils [req-bb18b398-49a1-47d5-86ac-dec929846f81 req-60f996b5-4662-4547-8a6b-7dab6ff2a8b8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:25 compute-0 nova_compute[257802]: 2025-10-02 12:12:25.261 2 DEBUG nova.compute.manager [req-bb18b398-49a1-47d5-86ac-dec929846f81 req-60f996b5-4662-4547-8a6b-7dab6ff2a8b8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] No waiting events found dispatching network-vif-unplugged-1bb3def9-caba-4dc1-8217-e979ae761982 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:12:25 compute-0 nova_compute[257802]: 2025-10-02 12:12:25.261 2 WARNING nova.compute.manager [req-bb18b398-49a1-47d5-86ac-dec929846f81 req-60f996b5-4662-4547-8a6b-7dab6ff2a8b8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Received unexpected event network-vif-unplugged-1bb3def9-caba-4dc1-8217-e979ae761982 for instance with vm_state active and task_state reboot_started_hard.
Oct 02 12:12:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Oct 02 12:12:25 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Oct 02 12:12:25 compute-0 nova_compute[257802]: 2025-10-02 12:12:25.675 2 DEBUG oslo_concurrency.processutils [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.707s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:12:25 compute-0 nova_compute[257802]: 2025-10-02 12:12:25.711 2 DEBUG oslo_concurrency.processutils [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:12:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:25.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:25.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:26 compute-0 condescending_einstein[293124]: {
Oct 02 12:12:26 compute-0 condescending_einstein[293124]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:12:26 compute-0 condescending_einstein[293124]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:12:26 compute-0 condescending_einstein[293124]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:12:26 compute-0 condescending_einstein[293124]:         "osd_id": 1,
Oct 02 12:12:26 compute-0 condescending_einstein[293124]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:12:26 compute-0 condescending_einstein[293124]:         "type": "bluestore"
Oct 02 12:12:26 compute-0 condescending_einstein[293124]:     }
Oct 02 12:12:26 compute-0 condescending_einstein[293124]: }
Oct 02 12:12:26 compute-0 systemd[1]: libpod-be1eeb612a846bc1f875bd373d085eb7fbdca0d06b9d48582d426fd0c4635d27.scope: Deactivated successfully.
Oct 02 12:12:26 compute-0 podman[293101]: 2025-10-02 12:12:26.080713124 +0000 UTC m=+0.968725146 container died be1eeb612a846bc1f875bd373d085eb7fbdca0d06b9d48582d426fd0c4635d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_einstein, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:12:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:12:26 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4170124711' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:12:26 compute-0 nova_compute[257802]: 2025-10-02 12:12:26.147 2 DEBUG oslo_concurrency.processutils [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:12:26 compute-0 nova_compute[257802]: 2025-10-02 12:12:26.149 2 DEBUG nova.virt.libvirt.vif [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:12:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1336832158',display_name='tempest-SecurityGroupsTestJSON-server-1336832158',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1336832158',id=54,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:12:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3110a9141bf416d96e84c98f5ec90b6',ramdisk_id='',reservation_id='r-i6f00o4z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-82162709',owner_user_name='tempest-SecurityGroupsTestJSON-82162709-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:12:24Z,user_data=None,user_id='e2a366ecf5934af989ad59e70e8c0b40',uuid=0b6d53d9-53fc-4b0d-b849-fadc57280e6e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1bb3def9-caba-4dc1-8217-e979ae761982", "address": "fa:16:3e:9e:93:0e", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb3def9-ca", "ovs_interfaceid": "1bb3def9-caba-4dc1-8217-e979ae761982", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:12:26 compute-0 nova_compute[257802]: 2025-10-02 12:12:26.149 2 DEBUG nova.network.os_vif_util [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Converting VIF {"id": "1bb3def9-caba-4dc1-8217-e979ae761982", "address": "fa:16:3e:9e:93:0e", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb3def9-ca", "ovs_interfaceid": "1bb3def9-caba-4dc1-8217-e979ae761982", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:12:26 compute-0 nova_compute[257802]: 2025-10-02 12:12:26.150 2 DEBUG nova.network.os_vif_util [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9e:93:0e,bridge_name='br-int',has_traffic_filtering=True,id=1bb3def9-caba-4dc1-8217-e979ae761982,network=Network(b99f809e-5bc0-4d91-be9e-5e46fd286a27),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb3def9-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:12:26 compute-0 nova_compute[257802]: 2025-10-02 12:12:26.151 2 DEBUG nova.objects.instance [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0b6d53d9-53fc-4b0d-b849-fadc57280e6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:12:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-0039d5ec5834658f5a4804e3e40bf5a418cfb455ebec2ec527ccab134ac0c103-merged.mount: Deactivated successfully.
Oct 02 12:12:26 compute-0 nova_compute[257802]: 2025-10-02 12:12:26.179 2 DEBUG nova.virt.libvirt.driver [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:12:26 compute-0 nova_compute[257802]:   <uuid>0b6d53d9-53fc-4b0d-b849-fadc57280e6e</uuid>
Oct 02 12:12:26 compute-0 nova_compute[257802]:   <name>instance-00000036</name>
Oct 02 12:12:26 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:12:26 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:12:26 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:12:26 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:       <nova:name>tempest-SecurityGroupsTestJSON-server-1336832158</nova:name>
Oct 02 12:12:26 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:12:24</nova:creationTime>
Oct 02 12:12:26 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:12:26 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:12:26 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:12:26 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:12:26 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:12:26 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:12:26 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:12:26 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:12:26 compute-0 nova_compute[257802]:         <nova:user uuid="e2a366ecf5934af989ad59e70e8c0b40">tempest-SecurityGroupsTestJSON-82162709-project-member</nova:user>
Oct 02 12:12:26 compute-0 nova_compute[257802]:         <nova:project uuid="e3110a9141bf416d96e84c98f5ec90b6">tempest-SecurityGroupsTestJSON-82162709</nova:project>
Oct 02 12:12:26 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:12:26 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:12:26 compute-0 nova_compute[257802]:         <nova:port uuid="1bb3def9-caba-4dc1-8217-e979ae761982">
Oct 02 12:12:26 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:12:26 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:12:26 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:12:26 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <system>
Oct 02 12:12:26 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:12:26 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:12:26 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:12:26 compute-0 nova_compute[257802]:       <entry name="serial">0b6d53d9-53fc-4b0d-b849-fadc57280e6e</entry>
Oct 02 12:12:26 compute-0 nova_compute[257802]:       <entry name="uuid">0b6d53d9-53fc-4b0d-b849-fadc57280e6e</entry>
Oct 02 12:12:26 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     </system>
Oct 02 12:12:26 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:12:26 compute-0 nova_compute[257802]:   <os>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:   </os>
Oct 02 12:12:26 compute-0 nova_compute[257802]:   <features>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:   </features>
Oct 02 12:12:26 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:12:26 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:12:26 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:12:26 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/0b6d53d9-53fc-4b0d-b849-fadc57280e6e_disk">
Oct 02 12:12:26 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:       </source>
Oct 02 12:12:26 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:12:26 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:12:26 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:12:26 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/0b6d53d9-53fc-4b0d-b849-fadc57280e6e_disk.config">
Oct 02 12:12:26 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:       </source>
Oct 02 12:12:26 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:12:26 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:12:26 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:12:26 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:9e:93:0e"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:       <target dev="tap1bb3def9-ca"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:12:26 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/0b6d53d9-53fc-4b0d-b849-fadc57280e6e/console.log" append="off"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <video>
Oct 02 12:12:26 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     </video>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <input type="keyboard" bus="usb"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:12:26 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:12:26 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:12:26 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:12:26 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:12:26 compute-0 nova_compute[257802]: </domain>
Oct 02 12:12:26 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:12:26 compute-0 nova_compute[257802]: 2025-10-02 12:12:26.183 2 DEBUG nova.virt.libvirt.driver [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] skipping disk for instance-00000036 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:12:26 compute-0 nova_compute[257802]: 2025-10-02 12:12:26.184 2 DEBUG nova.virt.libvirt.driver [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] skipping disk for instance-00000036 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:12:26 compute-0 nova_compute[257802]: 2025-10-02 12:12:26.185 2 DEBUG nova.virt.libvirt.vif [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:12:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1336832158',display_name='tempest-SecurityGroupsTestJSON-server-1336832158',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1336832158',id=54,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:12:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='e3110a9141bf416d96e84c98f5ec90b6',ramdisk_id='',reservation_id='r-i6f00o4z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-82162709',owner_user_name='tempest-SecurityGroupsTestJSON-82162709-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:12:24Z,user_data=None,user_id='e2a366ecf5934af989ad59e70e8c0b40',uuid=0b6d53d9-53fc-4b0d-b849-fadc57280e6e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1bb3def9-caba-4dc1-8217-e979ae761982", "address": "fa:16:3e:9e:93:0e", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb3def9-ca", "ovs_interfaceid": "1bb3def9-caba-4dc1-8217-e979ae761982", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:12:26 compute-0 nova_compute[257802]: 2025-10-02 12:12:26.186 2 DEBUG nova.network.os_vif_util [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Converting VIF {"id": "1bb3def9-caba-4dc1-8217-e979ae761982", "address": "fa:16:3e:9e:93:0e", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb3def9-ca", "ovs_interfaceid": "1bb3def9-caba-4dc1-8217-e979ae761982", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:12:26 compute-0 nova_compute[257802]: 2025-10-02 12:12:26.187 2 DEBUG nova.network.os_vif_util [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9e:93:0e,bridge_name='br-int',has_traffic_filtering=True,id=1bb3def9-caba-4dc1-8217-e979ae761982,network=Network(b99f809e-5bc0-4d91-be9e-5e46fd286a27),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb3def9-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:12:26 compute-0 nova_compute[257802]: 2025-10-02 12:12:26.188 2 DEBUG os_vif [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:93:0e,bridge_name='br-int',has_traffic_filtering=True,id=1bb3def9-caba-4dc1-8217-e979ae761982,network=Network(b99f809e-5bc0-4d91-be9e-5e46fd286a27),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb3def9-ca') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:12:26 compute-0 nova_compute[257802]: 2025-10-02 12:12:26.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:26 compute-0 nova_compute[257802]: 2025-10-02 12:12:26.190 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:12:26 compute-0 nova_compute[257802]: 2025-10-02 12:12:26.191 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:12:26 compute-0 nova_compute[257802]: 2025-10-02 12:12:26.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:26 compute-0 nova_compute[257802]: 2025-10-02 12:12:26.195 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1bb3def9-ca, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:12:26 compute-0 nova_compute[257802]: 2025-10-02 12:12:26.196 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1bb3def9-ca, col_values=(('external_ids', {'iface-id': '1bb3def9-caba-4dc1-8217-e979ae761982', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9e:93:0e', 'vm-uuid': '0b6d53d9-53fc-4b0d-b849-fadc57280e6e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:12:26 compute-0 podman[293101]: 2025-10-02 12:12:26.200855925 +0000 UTC m=+1.088867947 container remove be1eeb612a846bc1f875bd373d085eb7fbdca0d06b9d48582d426fd0c4635d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:12:26 compute-0 NetworkManager[44987]: <info>  [1759407146.2020] manager: (tap1bb3def9-ca): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/92)
Oct 02 12:12:26 compute-0 nova_compute[257802]: 2025-10-02 12:12:26.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:26 compute-0 nova_compute[257802]: 2025-10-02 12:12:26.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:12:26 compute-0 nova_compute[257802]: 2025-10-02 12:12:26.210 2 INFO os_vif [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:93:0e,bridge_name='br-int',has_traffic_filtering=True,id=1bb3def9-caba-4dc1-8217-e979ae761982,network=Network(b99f809e-5bc0-4d91-be9e-5e46fd286a27),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb3def9-ca')
Oct 02 12:12:26 compute-0 systemd[1]: libpod-conmon-be1eeb612a846bc1f875bd373d085eb7fbdca0d06b9d48582d426fd0c4635d27.scope: Deactivated successfully.
Oct 02 12:12:26 compute-0 sudo[292961]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:12:26 compute-0 kernel: tap1bb3def9-ca: entered promiscuous mode
Oct 02 12:12:26 compute-0 systemd-udevd[293002]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:12:26 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:12:26 compute-0 NetworkManager[44987]: <info>  [1759407146.3194] manager: (tap1bb3def9-ca): new Tun device (/org/freedesktop/NetworkManager/Devices/93)
Oct 02 12:12:26 compute-0 ovn_controller[148183]: 2025-10-02T12:12:26Z|00197|binding|INFO|Claiming lport 1bb3def9-caba-4dc1-8217-e979ae761982 for this chassis.
Oct 02 12:12:26 compute-0 ovn_controller[148183]: 2025-10-02T12:12:26Z|00198|binding|INFO|1bb3def9-caba-4dc1-8217-e979ae761982: Claiming fa:16:3e:9e:93:0e 10.100.0.13
Oct 02 12:12:26 compute-0 nova_compute[257802]: 2025-10-02 12:12:26.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:12:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:26.335 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:93:0e 10.100.0.13'], port_security=['fa:16:3e:9e:93:0e 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0b6d53d9-53fc-4b0d-b849-fadc57280e6e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b99f809e-5bc0-4d91-be9e-5e46fd286a27', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3110a9141bf416d96e84c98f5ec90b6', 'neutron:revision_number': '6', 'neutron:security_group_ids': '18deabc2-ff14-4e97-938f-3c48233efc80 c52c7c6e-0669-4ab9-8dc6-1a7e6f706889', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d030627f-5151-46c4-9ad6-e8160d673b8b, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=1bb3def9-caba-4dc1-8217-e979ae761982) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:12:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:26.336 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 1bb3def9-caba-4dc1-8217-e979ae761982 in datapath b99f809e-5bc0-4d91-be9e-5e46fd286a27 bound to our chassis
Oct 02 12:12:26 compute-0 NetworkManager[44987]: <info>  [1759407146.3398] device (tap1bb3def9-ca): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:12:26 compute-0 NetworkManager[44987]: <info>  [1759407146.3432] device (tap1bb3def9-ca): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:12:26 compute-0 nova_compute[257802]: 2025-10-02 12:12:26.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:26.341 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b99f809e-5bc0-4d91-be9e-5e46fd286a27
Oct 02 12:12:26 compute-0 ovn_controller[148183]: 2025-10-02T12:12:26Z|00199|binding|INFO|Setting lport 1bb3def9-caba-4dc1-8217-e979ae761982 ovn-installed in OVS
Oct 02 12:12:26 compute-0 ovn_controller[148183]: 2025-10-02T12:12:26Z|00200|binding|INFO|Setting lport 1bb3def9-caba-4dc1-8217-e979ae761982 up in Southbound
Oct 02 12:12:26 compute-0 nova_compute[257802]: 2025-10-02 12:12:26.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:26 compute-0 systemd-machined[211836]: New machine qemu-26-instance-00000036.
Oct 02 12:12:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:26.368 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[139df4ab-4488-41f8-b384-20062ea1aacc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:26 compute-0 nova_compute[257802]: 2025-10-02 12:12:26.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:26 compute-0 systemd[1]: Started Virtual Machine qemu-26-instance-00000036.
Oct 02 12:12:26 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:12:26 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 73cb51a9-2d08-4bf5-9e85-7bc3cc0539a8 does not exist
Oct 02 12:12:26 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev b8949d77-71f3-44a9-9dc3-2a85564d182d does not exist
Oct 02 12:12:26 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 3de346d5-d72a-46e8-89bf-af75ac306238 does not exist
Oct 02 12:12:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:26.416 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[8f6b7ce7-7029-4723-9daa-85d61e758c94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:26.419 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[c347ac12-ed9b-4b3a-ad9f-473739a4a171]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:26.457 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[bc7b4e66-3b11-4927-a3df-2fae082b42df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:26 compute-0 sudo[293222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:26 compute-0 sudo[293222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:26 compute-0 sudo[293222]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:26.479 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[68e1ffc7-8a5b-4942-a068-bc3f9c31d967]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb99f809e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:e4:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 515748, 'reachable_time': 38139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293252, 'error': None, 'target': 'ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:26.501 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[241888bf-5102-4ece-8b5f-f8a25033d692]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb99f809e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 515759, 'tstamp': 515759}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293257, 'error': None, 'target': 'ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb99f809e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 515762, 'tstamp': 515762}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293257, 'error': None, 'target': 'ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:26.503 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb99f809e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:12:26 compute-0 nova_compute[257802]: 2025-10-02 12:12:26.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:26.505 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb99f809e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:12:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:26.506 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:12:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:26.506 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb99f809e-50, col_values=(('external_ids', {'iface-id': '27833f83-24e9-4232-9a1b-0a4ea68e9e00'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:12:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:26.506 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:12:26 compute-0 sudo[293256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:12:26 compute-0 sudo[293256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:26 compute-0 sudo[293256]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:26 compute-0 ceph-mon[73607]: pgmap v1406: 305 pgs: 305 active+clean; 292 MiB data, 655 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.3 MiB/s wr, 133 op/s
Oct 02 12:12:26 compute-0 ceph-mon[73607]: osdmap e193: 3 total, 3 up, 3 in
Oct 02 12:12:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/552808507' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:12:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4170124711' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:12:26 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:12:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:26.929 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:26.930 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:26.930 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 283 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.1 MiB/s wr, 127 op/s
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.360 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Removed pending event for 0b6d53d9-53fc-4b0d-b849-fadc57280e6e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.361 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407147.360099, 0b6d53d9-53fc-4b0d-b849-fadc57280e6e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.361 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] VM Resumed (Lifecycle Event)
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.365 2 DEBUG nova.compute.manager [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.370 2 INFO nova.virt.libvirt.driver [-] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Instance rebooted successfully.
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.370 2 DEBUG nova.compute.manager [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.384 2 DEBUG nova.compute.manager [req-96acf276-e9a3-455e-bd8c-8d2344b90e48 req-032cdc1c-2502-4a66-a9c7-587ed0cd9f95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Received event network-vif-plugged-1bb3def9-caba-4dc1-8217-e979ae761982 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.384 2 DEBUG oslo_concurrency.lockutils [req-96acf276-e9a3-455e-bd8c-8d2344b90e48 req-032cdc1c-2502-4a66-a9c7-587ed0cd9f95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.384 2 DEBUG oslo_concurrency.lockutils [req-96acf276-e9a3-455e-bd8c-8d2344b90e48 req-032cdc1c-2502-4a66-a9c7-587ed0cd9f95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.385 2 DEBUG oslo_concurrency.lockutils [req-96acf276-e9a3-455e-bd8c-8d2344b90e48 req-032cdc1c-2502-4a66-a9c7-587ed0cd9f95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.385 2 DEBUG nova.compute.manager [req-96acf276-e9a3-455e-bd8c-8d2344b90e48 req-032cdc1c-2502-4a66-a9c7-587ed0cd9f95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] No waiting events found dispatching network-vif-plugged-1bb3def9-caba-4dc1-8217-e979ae761982 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.385 2 WARNING nova.compute.manager [req-96acf276-e9a3-455e-bd8c-8d2344b90e48 req-032cdc1c-2502-4a66-a9c7-587ed0cd9f95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Received unexpected event network-vif-plugged-1bb3def9-caba-4dc1-8217-e979ae761982 for instance with vm_state active and task_state reboot_started_hard.
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.385 2 DEBUG nova.compute.manager [req-96acf276-e9a3-455e-bd8c-8d2344b90e48 req-032cdc1c-2502-4a66-a9c7-587ed0cd9f95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Received event network-vif-plugged-1bb3def9-caba-4dc1-8217-e979ae761982 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.385 2 DEBUG oslo_concurrency.lockutils [req-96acf276-e9a3-455e-bd8c-8d2344b90e48 req-032cdc1c-2502-4a66-a9c7-587ed0cd9f95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.386 2 DEBUG oslo_concurrency.lockutils [req-96acf276-e9a3-455e-bd8c-8d2344b90e48 req-032cdc1c-2502-4a66-a9c7-587ed0cd9f95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.386 2 DEBUG oslo_concurrency.lockutils [req-96acf276-e9a3-455e-bd8c-8d2344b90e48 req-032cdc1c-2502-4a66-a9c7-587ed0cd9f95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.386 2 DEBUG nova.compute.manager [req-96acf276-e9a3-455e-bd8c-8d2344b90e48 req-032cdc1c-2502-4a66-a9c7-587ed0cd9f95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] No waiting events found dispatching network-vif-plugged-1bb3def9-caba-4dc1-8217-e979ae761982 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.386 2 WARNING nova.compute.manager [req-96acf276-e9a3-455e-bd8c-8d2344b90e48 req-032cdc1c-2502-4a66-a9c7-587ed0cd9f95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Received unexpected event network-vif-plugged-1bb3def9-caba-4dc1-8217-e979ae761982 for instance with vm_state active and task_state reboot_started_hard.
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.386 2 DEBUG nova.compute.manager [req-96acf276-e9a3-455e-bd8c-8d2344b90e48 req-032cdc1c-2502-4a66-a9c7-587ed0cd9f95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Received event network-vif-plugged-1bb3def9-caba-4dc1-8217-e979ae761982 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.387 2 DEBUG oslo_concurrency.lockutils [req-96acf276-e9a3-455e-bd8c-8d2344b90e48 req-032cdc1c-2502-4a66-a9c7-587ed0cd9f95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.387 2 DEBUG oslo_concurrency.lockutils [req-96acf276-e9a3-455e-bd8c-8d2344b90e48 req-032cdc1c-2502-4a66-a9c7-587ed0cd9f95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.387 2 DEBUG oslo_concurrency.lockutils [req-96acf276-e9a3-455e-bd8c-8d2344b90e48 req-032cdc1c-2502-4a66-a9c7-587ed0cd9f95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.387 2 DEBUG nova.compute.manager [req-96acf276-e9a3-455e-bd8c-8d2344b90e48 req-032cdc1c-2502-4a66-a9c7-587ed0cd9f95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] No waiting events found dispatching network-vif-plugged-1bb3def9-caba-4dc1-8217-e979ae761982 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.387 2 WARNING nova.compute.manager [req-96acf276-e9a3-455e-bd8c-8d2344b90e48 req-032cdc1c-2502-4a66-a9c7-587ed0cd9f95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Received unexpected event network-vif-plugged-1bb3def9-caba-4dc1-8217-e979ae761982 for instance with vm_state active and task_state reboot_started_hard.
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.415 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.420 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.467 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.468 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407147.3654015, 0b6d53d9-53fc-4b0d-b849-fadc57280e6e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.468 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] VM Started (Lifecycle Event)
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.488 2 DEBUG oslo_concurrency.lockutils [None req-ccb33bb0-51f6-423f-824a-dfd0a85681c8 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 8.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.507 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:12:27 compute-0 nova_compute[257802]: 2025-10-02 12:12:27.512 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:12:27 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:12:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/991575295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:27.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:27.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:28 compute-0 ceph-mon[73607]: pgmap v1408: 305 pgs: 305 active+clean; 283 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.1 MiB/s wr, 127 op/s
Oct 02 12:12:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 260 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.0 MiB/s wr, 179 op/s
Oct 02 12:12:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:29.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:12:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:29.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:12:30 compute-0 nova_compute[257802]: 2025-10-02 12:12:30.362 2 DEBUG nova.compute.manager [req-62253098-571a-4a3f-a64a-0b47d07e927a req-6b538cc2-38bc-406a-80f5-c91b2e79b095 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Received event network-changed-1bb3def9-caba-4dc1-8217-e979ae761982 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:12:30 compute-0 nova_compute[257802]: 2025-10-02 12:12:30.362 2 DEBUG nova.compute.manager [req-62253098-571a-4a3f-a64a-0b47d07e927a req-6b538cc2-38bc-406a-80f5-c91b2e79b095 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Refreshing instance network info cache due to event network-changed-1bb3def9-caba-4dc1-8217-e979ae761982. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:12:30 compute-0 nova_compute[257802]: 2025-10-02 12:12:30.363 2 DEBUG oslo_concurrency.lockutils [req-62253098-571a-4a3f-a64a-0b47d07e927a req-6b538cc2-38bc-406a-80f5-c91b2e79b095 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-0b6d53d9-53fc-4b0d-b849-fadc57280e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:12:30 compute-0 nova_compute[257802]: 2025-10-02 12:12:30.363 2 DEBUG oslo_concurrency.lockutils [req-62253098-571a-4a3f-a64a-0b47d07e927a req-6b538cc2-38bc-406a-80f5-c91b2e79b095 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-0b6d53d9-53fc-4b0d-b849-fadc57280e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:12:30 compute-0 nova_compute[257802]: 2025-10-02 12:12:30.363 2 DEBUG nova.network.neutron [req-62253098-571a-4a3f-a64a-0b47d07e927a req-6b538cc2-38bc-406a-80f5-c91b2e79b095 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Refreshing network info cache for port 1bb3def9-caba-4dc1-8217-e979ae761982 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:12:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Oct 02 12:12:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Oct 02 12:12:30 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Oct 02 12:12:30 compute-0 ceph-mon[73607]: pgmap v1409: 305 pgs: 305 active+clean; 260 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.0 MiB/s wr, 179 op/s
Oct 02 12:12:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 260 MiB data, 606 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.6 MiB/s wr, 196 op/s
Oct 02 12:12:31 compute-0 nova_compute[257802]: 2025-10-02 12:12:31.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:31 compute-0 nova_compute[257802]: 2025-10-02 12:12:31.249 2 DEBUG oslo_concurrency.lockutils [None req-4eaadd16-fcfb-45e6-99f3-80b5f26fdf55 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Acquiring lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:31 compute-0 nova_compute[257802]: 2025-10-02 12:12:31.250 2 DEBUG oslo_concurrency.lockutils [None req-4eaadd16-fcfb-45e6-99f3-80b5f26fdf55 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:31 compute-0 nova_compute[257802]: 2025-10-02 12:12:31.251 2 DEBUG oslo_concurrency.lockutils [None req-4eaadd16-fcfb-45e6-99f3-80b5f26fdf55 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Acquiring lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:31 compute-0 nova_compute[257802]: 2025-10-02 12:12:31.251 2 DEBUG oslo_concurrency.lockutils [None req-4eaadd16-fcfb-45e6-99f3-80b5f26fdf55 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:31 compute-0 nova_compute[257802]: 2025-10-02 12:12:31.252 2 DEBUG oslo_concurrency.lockutils [None req-4eaadd16-fcfb-45e6-99f3-80b5f26fdf55 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:31 compute-0 nova_compute[257802]: 2025-10-02 12:12:31.254 2 INFO nova.compute.manager [None req-4eaadd16-fcfb-45e6-99f3-80b5f26fdf55 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Terminating instance
Oct 02 12:12:31 compute-0 nova_compute[257802]: 2025-10-02 12:12:31.256 2 DEBUG nova.compute.manager [None req-4eaadd16-fcfb-45e6-99f3-80b5f26fdf55 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:12:31 compute-0 kernel: tap1bb3def9-ca (unregistering): left promiscuous mode
Oct 02 12:12:31 compute-0 NetworkManager[44987]: <info>  [1759407151.3061] device (tap1bb3def9-ca): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:12:31 compute-0 ovn_controller[148183]: 2025-10-02T12:12:31Z|00201|binding|INFO|Releasing lport 1bb3def9-caba-4dc1-8217-e979ae761982 from this chassis (sb_readonly=0)
Oct 02 12:12:31 compute-0 ovn_controller[148183]: 2025-10-02T12:12:31Z|00202|binding|INFO|Setting lport 1bb3def9-caba-4dc1-8217-e979ae761982 down in Southbound
Oct 02 12:12:31 compute-0 ovn_controller[148183]: 2025-10-02T12:12:31Z|00203|binding|INFO|Removing iface tap1bb3def9-ca ovn-installed in OVS
Oct 02 12:12:31 compute-0 nova_compute[257802]: 2025-10-02 12:12:31.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:31.326 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:93:0e 10.100.0.13'], port_security=['fa:16:3e:9e:93:0e 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0b6d53d9-53fc-4b0d-b849-fadc57280e6e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b99f809e-5bc0-4d91-be9e-5e46fd286a27', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3110a9141bf416d96e84c98f5ec90b6', 'neutron:revision_number': '8', 'neutron:security_group_ids': '18deabc2-ff14-4e97-938f-3c48233efc80 49af8dfb-d6bf-4a11-bc25-afb1962b24b2 c52c7c6e-0669-4ab9-8dc6-1a7e6f706889', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d030627f-5151-46c4-9ad6-e8160d673b8b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=1bb3def9-caba-4dc1-8217-e979ae761982) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:12:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:31.328 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 1bb3def9-caba-4dc1-8217-e979ae761982 in datapath b99f809e-5bc0-4d91-be9e-5e46fd286a27 unbound from our chassis
Oct 02 12:12:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:31.329 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b99f809e-5bc0-4d91-be9e-5e46fd286a27
Oct 02 12:12:31 compute-0 nova_compute[257802]: 2025-10-02 12:12:31.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:31.350 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[43488866-346d-4757-9e25-ff4f84a194b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:31 compute-0 nova_compute[257802]: 2025-10-02 12:12:31.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:31 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000036.scope: Deactivated successfully.
Oct 02 12:12:31 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000036.scope: Consumed 4.908s CPU time.
Oct 02 12:12:31 compute-0 systemd-machined[211836]: Machine qemu-26-instance-00000036 terminated.
Oct 02 12:12:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:31.389 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[e14cb0ac-b3ae-416b-bf45-6aeaebe05947]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:31.394 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[4c400eab-f85d-437a-9804-0aa3487ae462]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:31.432 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[9b0c7c72-1070-4168-a724-5d32953b2ab6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:31.451 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[29f502c5-56a6-4ec3-be65-e4615c39f079]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb99f809e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:e4:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 515748, 'reachable_time': 38139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293337, 'error': None, 'target': 'ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:31.466 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1605c2a6-6c0a-40a3-b38d-03a5dc196523]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb99f809e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 515759, 'tstamp': 515759}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293338, 'error': None, 'target': 'ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb99f809e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 515762, 'tstamp': 515762}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293338, 'error': None, 'target': 'ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:31.468 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb99f809e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:12:31 compute-0 nova_compute[257802]: 2025-10-02 12:12:31.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:31 compute-0 nova_compute[257802]: 2025-10-02 12:12:31.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:31.478 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb99f809e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:12:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:31.478 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:12:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:31.479 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb99f809e-50, col_values=(('external_ids', {'iface-id': '27833f83-24e9-4232-9a1b-0a4ea68e9e00'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:12:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:31.479 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:12:31 compute-0 nova_compute[257802]: 2025-10-02 12:12:31.499 2 INFO nova.virt.libvirt.driver [-] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Instance destroyed successfully.
Oct 02 12:12:31 compute-0 nova_compute[257802]: 2025-10-02 12:12:31.500 2 DEBUG nova.objects.instance [None req-4eaadd16-fcfb-45e6-99f3-80b5f26fdf55 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lazy-loading 'resources' on Instance uuid 0b6d53d9-53fc-4b0d-b849-fadc57280e6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:12:31 compute-0 nova_compute[257802]: 2025-10-02 12:12:31.518 2 DEBUG nova.virt.libvirt.vif [None req-4eaadd16-fcfb-45e6-99f3-80b5f26fdf55 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:12:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1336832158',display_name='tempest-SecurityGroupsTestJSON-server-1336832158',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1336832158',id=54,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:12:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3110a9141bf416d96e84c98f5ec90b6',ramdisk_id='',reservation_id='r-i6f00o4z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-82162709',owner_user_name='tempest-SecurityGroupsTestJSON-82162709-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:12:27Z,user_data=None,user_id='e2a366ecf5934af989ad59e70e8c0b40',uuid=0b6d53d9-53fc-4b0d-b849-fadc57280e6e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1bb3def9-caba-4dc1-8217-e979ae761982", "address": "fa:16:3e:9e:93:0e", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb3def9-ca", "ovs_interfaceid": "1bb3def9-caba-4dc1-8217-e979ae761982", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:12:31 compute-0 nova_compute[257802]: 2025-10-02 12:12:31.520 2 DEBUG nova.network.os_vif_util [None req-4eaadd16-fcfb-45e6-99f3-80b5f26fdf55 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Converting VIF {"id": "1bb3def9-caba-4dc1-8217-e979ae761982", "address": "fa:16:3e:9e:93:0e", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb3def9-ca", "ovs_interfaceid": "1bb3def9-caba-4dc1-8217-e979ae761982", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:12:31 compute-0 nova_compute[257802]: 2025-10-02 12:12:31.521 2 DEBUG nova.network.os_vif_util [None req-4eaadd16-fcfb-45e6-99f3-80b5f26fdf55 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9e:93:0e,bridge_name='br-int',has_traffic_filtering=True,id=1bb3def9-caba-4dc1-8217-e979ae761982,network=Network(b99f809e-5bc0-4d91-be9e-5e46fd286a27),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb3def9-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:12:31 compute-0 nova_compute[257802]: 2025-10-02 12:12:31.521 2 DEBUG os_vif [None req-4eaadd16-fcfb-45e6-99f3-80b5f26fdf55 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:93:0e,bridge_name='br-int',has_traffic_filtering=True,id=1bb3def9-caba-4dc1-8217-e979ae761982,network=Network(b99f809e-5bc0-4d91-be9e-5e46fd286a27),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb3def9-ca') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:12:31 compute-0 nova_compute[257802]: 2025-10-02 12:12:31.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:31 compute-0 nova_compute[257802]: 2025-10-02 12:12:31.524 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1bb3def9-ca, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:12:31 compute-0 nova_compute[257802]: 2025-10-02 12:12:31.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:31 compute-0 nova_compute[257802]: 2025-10-02 12:12:31.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:31 compute-0 nova_compute[257802]: 2025-10-02 12:12:31.529 2 INFO os_vif [None req-4eaadd16-fcfb-45e6-99f3-80b5f26fdf55 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:93:0e,bridge_name='br-int',has_traffic_filtering=True,id=1bb3def9-caba-4dc1-8217-e979ae761982,network=Network(b99f809e-5bc0-4d91-be9e-5e46fd286a27),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb3def9-ca')
Oct 02 12:12:31 compute-0 nova_compute[257802]: 2025-10-02 12:12:31.789 2 DEBUG nova.compute.manager [req-54a618e7-d1fa-41ca-9181-91d7b93dde3f req-01112b0a-e794-4ca6-9c07-2b346744057d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Received event network-vif-unplugged-1bb3def9-caba-4dc1-8217-e979ae761982 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:12:31 compute-0 nova_compute[257802]: 2025-10-02 12:12:31.789 2 DEBUG oslo_concurrency.lockutils [req-54a618e7-d1fa-41ca-9181-91d7b93dde3f req-01112b0a-e794-4ca6-9c07-2b346744057d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:31 compute-0 nova_compute[257802]: 2025-10-02 12:12:31.789 2 DEBUG oslo_concurrency.lockutils [req-54a618e7-d1fa-41ca-9181-91d7b93dde3f req-01112b0a-e794-4ca6-9c07-2b346744057d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:31 compute-0 nova_compute[257802]: 2025-10-02 12:12:31.789 2 DEBUG oslo_concurrency.lockutils [req-54a618e7-d1fa-41ca-9181-91d7b93dde3f req-01112b0a-e794-4ca6-9c07-2b346744057d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:31 compute-0 nova_compute[257802]: 2025-10-02 12:12:31.790 2 DEBUG nova.compute.manager [req-54a618e7-d1fa-41ca-9181-91d7b93dde3f req-01112b0a-e794-4ca6-9c07-2b346744057d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] No waiting events found dispatching network-vif-unplugged-1bb3def9-caba-4dc1-8217-e979ae761982 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:12:31 compute-0 nova_compute[257802]: 2025-10-02 12:12:31.790 2 DEBUG nova.compute.manager [req-54a618e7-d1fa-41ca-9181-91d7b93dde3f req-01112b0a-e794-4ca6-9c07-2b346744057d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Received event network-vif-unplugged-1bb3def9-caba-4dc1-8217-e979ae761982 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:12:31 compute-0 ceph-mon[73607]: osdmap e194: 3 total, 3 up, 3 in
Oct 02 12:12:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:31.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:12:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:32.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:12:32 compute-0 nova_compute[257802]: 2025-10-02 12:12:32.050 2 INFO nova.virt.libvirt.driver [None req-4eaadd16-fcfb-45e6-99f3-80b5f26fdf55 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Deleting instance files /var/lib/nova/instances/0b6d53d9-53fc-4b0d-b849-fadc57280e6e_del
Oct 02 12:12:32 compute-0 nova_compute[257802]: 2025-10-02 12:12:32.051 2 INFO nova.virt.libvirt.driver [None req-4eaadd16-fcfb-45e6-99f3-80b5f26fdf55 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Deletion of /var/lib/nova/instances/0b6d53d9-53fc-4b0d-b849-fadc57280e6e_del complete
Oct 02 12:12:32 compute-0 nova_compute[257802]: 2025-10-02 12:12:32.057 2 DEBUG nova.network.neutron [req-62253098-571a-4a3f-a64a-0b47d07e927a req-6b538cc2-38bc-406a-80f5-c91b2e79b095 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Updated VIF entry in instance network info cache for port 1bb3def9-caba-4dc1-8217-e979ae761982. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:12:32 compute-0 nova_compute[257802]: 2025-10-02 12:12:32.058 2 DEBUG nova.network.neutron [req-62253098-571a-4a3f-a64a-0b47d07e927a req-6b538cc2-38bc-406a-80f5-c91b2e79b095 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Updating instance_info_cache with network_info: [{"id": "1bb3def9-caba-4dc1-8217-e979ae761982", "address": "fa:16:3e:9e:93:0e", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb3def9-ca", "ovs_interfaceid": "1bb3def9-caba-4dc1-8217-e979ae761982", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:12:32 compute-0 nova_compute[257802]: 2025-10-02 12:12:32.125 2 DEBUG oslo_concurrency.lockutils [req-62253098-571a-4a3f-a64a-0b47d07e927a req-6b538cc2-38bc-406a-80f5-c91b2e79b095 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-0b6d53d9-53fc-4b0d-b849-fadc57280e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:12:32 compute-0 nova_compute[257802]: 2025-10-02 12:12:32.156 2 INFO nova.compute.manager [None req-4eaadd16-fcfb-45e6-99f3-80b5f26fdf55 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Took 0.90 seconds to destroy the instance on the hypervisor.
Oct 02 12:12:32 compute-0 nova_compute[257802]: 2025-10-02 12:12:32.157 2 DEBUG oslo.service.loopingcall [None req-4eaadd16-fcfb-45e6-99f3-80b5f26fdf55 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:12:32 compute-0 nova_compute[257802]: 2025-10-02 12:12:32.157 2 DEBUG nova.compute.manager [-] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:12:32 compute-0 nova_compute[257802]: 2025-10-02 12:12:32.158 2 DEBUG nova.network.neutron [-] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:12:32 compute-0 ceph-mon[73607]: pgmap v1411: 305 pgs: 305 active+clean; 260 MiB data, 606 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.6 MiB/s wr, 196 op/s
Oct 02 12:12:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 214 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.4 MiB/s wr, 242 op/s
Oct 02 12:12:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Oct 02 12:12:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Oct 02 12:12:33 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Oct 02 12:12:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:33.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:33 compute-0 nova_compute[257802]: 2025-10-02 12:12:33.990 2 DEBUG nova.compute.manager [req-b4535df1-f4c9-458c-a90e-026341d5476e req-2b966b21-a019-4a26-b483-511e07ec4313 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Received event network-vif-plugged-1bb3def9-caba-4dc1-8217-e979ae761982 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:12:33 compute-0 nova_compute[257802]: 2025-10-02 12:12:33.991 2 DEBUG oslo_concurrency.lockutils [req-b4535df1-f4c9-458c-a90e-026341d5476e req-2b966b21-a019-4a26-b483-511e07ec4313 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:33 compute-0 nova_compute[257802]: 2025-10-02 12:12:33.991 2 DEBUG oslo_concurrency.lockutils [req-b4535df1-f4c9-458c-a90e-026341d5476e req-2b966b21-a019-4a26-b483-511e07ec4313 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:33 compute-0 nova_compute[257802]: 2025-10-02 12:12:33.991 2 DEBUG oslo_concurrency.lockutils [req-b4535df1-f4c9-458c-a90e-026341d5476e req-2b966b21-a019-4a26-b483-511e07ec4313 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:33 compute-0 nova_compute[257802]: 2025-10-02 12:12:33.991 2 DEBUG nova.compute.manager [req-b4535df1-f4c9-458c-a90e-026341d5476e req-2b966b21-a019-4a26-b483-511e07ec4313 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] No waiting events found dispatching network-vif-plugged-1bb3def9-caba-4dc1-8217-e979ae761982 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:12:33 compute-0 nova_compute[257802]: 2025-10-02 12:12:33.992 2 WARNING nova.compute.manager [req-b4535df1-f4c9-458c-a90e-026341d5476e req-2b966b21-a019-4a26-b483-511e07ec4313 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Received unexpected event network-vif-plugged-1bb3def9-caba-4dc1-8217-e979ae761982 for instance with vm_state active and task_state deleting.
Oct 02 12:12:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:12:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:34.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:12:34 compute-0 nova_compute[257802]: 2025-10-02 12:12:34.318 2 DEBUG nova.network.neutron [-] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:12:34 compute-0 nova_compute[257802]: 2025-10-02 12:12:34.352 2 INFO nova.compute.manager [-] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Took 2.19 seconds to deallocate network for instance.
Oct 02 12:12:34 compute-0 nova_compute[257802]: 2025-10-02 12:12:34.461 2 DEBUG nova.compute.manager [req-79c5d4c8-2e99-4b08-af68-a401b81872c0 req-4bb6ba58-256e-4aba-9299-b78d4f2f8582 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Received event network-vif-deleted-1bb3def9-caba-4dc1-8217-e979ae761982 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:12:34 compute-0 nova_compute[257802]: 2025-10-02 12:12:34.630 2 DEBUG oslo_concurrency.lockutils [None req-4eaadd16-fcfb-45e6-99f3-80b5f26fdf55 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:34 compute-0 nova_compute[257802]: 2025-10-02 12:12:34.631 2 DEBUG oslo_concurrency.lockutils [None req-4eaadd16-fcfb-45e6-99f3-80b5f26fdf55 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:34 compute-0 ceph-mon[73607]: pgmap v1412: 305 pgs: 305 active+clean; 214 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.4 MiB/s wr, 242 op/s
Oct 02 12:12:34 compute-0 ceph-mon[73607]: osdmap e195: 3 total, 3 up, 3 in
Oct 02 12:12:34 compute-0 nova_compute[257802]: 2025-10-02 12:12:34.716 2 DEBUG oslo_concurrency.processutils [None req-4eaadd16-fcfb-45e6-99f3-80b5f26fdf55 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:12:35 compute-0 sudo[293391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:35 compute-0 sudo[293391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:35 compute-0 sudo[293391]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:35 compute-0 sudo[293416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:35 compute-0 sudo[293416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:35 compute-0 sudo[293416]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:12:35 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2735599487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 164 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 332 KiB/s wr, 222 op/s
Oct 02 12:12:35 compute-0 nova_compute[257802]: 2025-10-02 12:12:35.179 2 DEBUG oslo_concurrency.processutils [None req-4eaadd16-fcfb-45e6-99f3-80b5f26fdf55 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:12:35 compute-0 nova_compute[257802]: 2025-10-02 12:12:35.185 2 DEBUG nova.compute.provider_tree [None req-4eaadd16-fcfb-45e6-99f3-80b5f26fdf55 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:12:35 compute-0 nova_compute[257802]: 2025-10-02 12:12:35.218 2 DEBUG nova.scheduler.client.report [None req-4eaadd16-fcfb-45e6-99f3-80b5f26fdf55 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:12:35 compute-0 nova_compute[257802]: 2025-10-02 12:12:35.255 2 DEBUG oslo_concurrency.lockutils [None req-4eaadd16-fcfb-45e6-99f3-80b5f26fdf55 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:35 compute-0 nova_compute[257802]: 2025-10-02 12:12:35.319 2 INFO nova.scheduler.client.report [None req-4eaadd16-fcfb-45e6-99f3-80b5f26fdf55 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Deleted allocations for instance 0b6d53d9-53fc-4b0d-b849-fadc57280e6e
Oct 02 12:12:35 compute-0 nova_compute[257802]: 2025-10-02 12:12:35.422 2 DEBUG oslo_concurrency.lockutils [None req-4eaadd16-fcfb-45e6-99f3-80b5f26fdf55 e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "0b6d53d9-53fc-4b0d-b849-fadc57280e6e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:35 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3094586200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:35 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2735599487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:12:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:35.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:12:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:12:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:36.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:12:36 compute-0 nova_compute[257802]: 2025-10-02 12:12:36.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:36 compute-0 nova_compute[257802]: 2025-10-02 12:12:36.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:36 compute-0 ceph-mon[73607]: pgmap v1414: 305 pgs: 305 active+clean; 164 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 332 KiB/s wr, 222 op/s
Oct 02 12:12:37 compute-0 nova_compute[257802]: 2025-10-02 12:12:37.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:12:37 compute-0 nova_compute[257802]: 2025-10-02 12:12:37.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:12:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 121 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.4 KiB/s wr, 196 op/s
Oct 02 12:12:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:37.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:12:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:38.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:12:38 compute-0 nova_compute[257802]: 2025-10-02 12:12:38.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:12:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Oct 02 12:12:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Oct 02 12:12:38 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Oct 02 12:12:39 compute-0 ceph-mon[73607]: pgmap v1415: 305 pgs: 305 active+clean; 121 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.4 KiB/s wr, 196 op/s
Oct 02 12:12:39 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/728985912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:39 compute-0 ceph-mon[73607]: osdmap e196: 3 total, 3 up, 3 in
Oct 02 12:12:39 compute-0 nova_compute[257802]: 2025-10-02 12:12:39.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:12:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 138 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.2 MiB/s wr, 161 op/s
Oct 02 12:12:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:12:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:39.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:12:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:40.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2811533309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:41 compute-0 ceph-mon[73607]: pgmap v1417: 305 pgs: 305 active+clean; 138 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.2 MiB/s wr, 161 op/s
Oct 02 12:12:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3874037984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:41 compute-0 nova_compute[257802]: 2025-10-02 12:12:41.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:12:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 154 MiB data, 548 MiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Oct 02 12:12:41 compute-0 nova_compute[257802]: 2025-10-02 12:12:41.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:41 compute-0 nova_compute[257802]: 2025-10-02 12:12:41.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:41.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:42.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:42 compute-0 nova_compute[257802]: 2025-10-02 12:12:42.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:12:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:12:42
Oct 02 12:12:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:12:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:12:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'backups', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'volumes', 'default.rgw.meta', 'images', 'default.rgw.control']
Oct 02 12:12:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:12:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:12:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:12:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:12:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:12:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:12:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:12:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:12:42 compute-0 podman[293447]: 2025-10-02 12:12:42.920916114 +0000 UTC m=+0.055276491 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 12:12:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:12:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:12:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:12:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:12:43 compute-0 nova_compute[257802]: 2025-10-02 12:12:43.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:12:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 167 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 2.2 MiB/s wr, 89 op/s
Oct 02 12:12:43 compute-0 ceph-mon[73607]: pgmap v1418: 305 pgs: 305 active+clean; 154 MiB data, 548 MiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Oct 02 12:12:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:12:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:12:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:12:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:12:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:12:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:43.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:12:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:44.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:12:44 compute-0 ceph-mon[73607]: pgmap v1419: 305 pgs: 305 active+clean; 167 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 2.2 MiB/s wr, 89 op/s
Oct 02 12:12:45 compute-0 nova_compute[257802]: 2025-10-02 12:12:45.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:12:45 compute-0 nova_compute[257802]: 2025-10-02 12:12:45.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:12:45 compute-0 nova_compute[257802]: 2025-10-02 12:12:45.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:12:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 167 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 12:12:45 compute-0 nova_compute[257802]: 2025-10-02 12:12:45.568 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-0b03ee08-f5c8-4897-8215-b9998393372f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:12:45 compute-0 nova_compute[257802]: 2025-10-02 12:12:45.569 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-0b03ee08-f5c8-4897-8215-b9998393372f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:12:45 compute-0 nova_compute[257802]: 2025-10-02 12:12:45.569 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:12:45 compute-0 nova_compute[257802]: 2025-10-02 12:12:45.569 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0b03ee08-f5c8-4897-8215-b9998393372f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:12:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:12:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:45.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:12:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:46.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:46 compute-0 ceph-mon[73607]: pgmap v1420: 305 pgs: 305 active+clean; 167 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 12:12:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1393102552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1524387760' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:12:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1341746596' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:12:46 compute-0 nova_compute[257802]: 2025-10-02 12:12:46.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:46 compute-0 nova_compute[257802]: 2025-10-02 12:12:46.497 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407151.4963436, 0b6d53d9-53fc-4b0d-b849-fadc57280e6e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:12:46 compute-0 nova_compute[257802]: 2025-10-02 12:12:46.498 2 INFO nova.compute.manager [-] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] VM Stopped (Lifecycle Event)
Oct 02 12:12:46 compute-0 nova_compute[257802]: 2025-10-02 12:12:46.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:46 compute-0 nova_compute[257802]: 2025-10-02 12:12:46.531 2 DEBUG nova.compute.manager [None req-166cb269-2ac4-47de-9451-f9af962cc915 - - - - - -] [instance: 0b6d53d9-53fc-4b0d-b849-fadc57280e6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:12:46 compute-0 podman[293471]: 2025-10-02 12:12:46.930766595 +0000 UTC m=+0.073150045 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Oct 02 12:12:46 compute-0 podman[293472]: 2025-10-02 12:12:46.948736051 +0000 UTC m=+0.076630179 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.007 2 DEBUG oslo_concurrency.lockutils [None req-bd0a7497-52d9-4259-ba1e-325f558091ad e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Acquiring lock "0b03ee08-f5c8-4897-8215-b9998393372f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.008 2 DEBUG oslo_concurrency.lockutils [None req-bd0a7497-52d9-4259-ba1e-325f558091ad e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "0b03ee08-f5c8-4897-8215-b9998393372f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.008 2 DEBUG oslo_concurrency.lockutils [None req-bd0a7497-52d9-4259-ba1e-325f558091ad e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Acquiring lock "0b03ee08-f5c8-4897-8215-b9998393372f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.008 2 DEBUG oslo_concurrency.lockutils [None req-bd0a7497-52d9-4259-ba1e-325f558091ad e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "0b03ee08-f5c8-4897-8215-b9998393372f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.008 2 DEBUG oslo_concurrency.lockutils [None req-bd0a7497-52d9-4259-ba1e-325f558091ad e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "0b03ee08-f5c8-4897-8215-b9998393372f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.009 2 INFO nova.compute.manager [None req-bd0a7497-52d9-4259-ba1e-325f558091ad e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Terminating instance
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.010 2 DEBUG nova.compute.manager [None req-bd0a7497-52d9-4259-ba1e-325f558091ad e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:12:47 compute-0 kernel: tap608719c5-58 (unregistering): left promiscuous mode
Oct 02 12:12:47 compute-0 NetworkManager[44987]: <info>  [1759407167.0833] device (tap608719c5-58): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:47 compute-0 ovn_controller[148183]: 2025-10-02T12:12:47Z|00204|binding|INFO|Releasing lport 608719c5-5877-4237-83ea-eb3cba59c3a3 from this chassis (sb_readonly=0)
Oct 02 12:12:47 compute-0 ovn_controller[148183]: 2025-10-02T12:12:47Z|00205|binding|INFO|Setting lport 608719c5-5877-4237-83ea-eb3cba59c3a3 down in Southbound
Oct 02 12:12:47 compute-0 ovn_controller[148183]: 2025-10-02T12:12:47Z|00206|binding|INFO|Removing iface tap608719c5-58 ovn-installed in OVS
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:47.095 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:cf:c2 10.100.0.3'], port_security=['fa:16:3e:6e:cf:c2 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0b03ee08-f5c8-4897-8215-b9998393372f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b99f809e-5bc0-4d91-be9e-5e46fd286a27', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3110a9141bf416d96e84c98f5ec90b6', 'neutron:revision_number': '6', 'neutron:security_group_ids': '48637675-22bd-4b89-a98d-4a9ddd5187c2 9fea2f9b-cde2-4117-93cd-822060828542 c52c7c6e-0669-4ab9-8dc6-1a7e6f706889', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d030627f-5151-46c4-9ad6-e8160d673b8b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=608719c5-5877-4237-83ea-eb3cba59c3a3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:12:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:47.097 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 608719c5-5877-4237-83ea-eb3cba59c3a3 in datapath b99f809e-5bc0-4d91-be9e-5e46fd286a27 unbound from our chassis
Oct 02 12:12:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:47.098 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b99f809e-5bc0-4d91-be9e-5e46fd286a27, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:12:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:47.099 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e743ce92-f516-4564-8f6c-11ff321f94d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:47.100 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27 namespace which is not needed anymore
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:47 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000033.scope: Deactivated successfully.
Oct 02 12:12:47 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000033.scope: Consumed 16.707s CPU time.
Oct 02 12:12:47 compute-0 systemd-machined[211836]: Machine qemu-24-instance-00000033 terminated.
Oct 02 12:12:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 167 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.252 2 INFO nova.virt.libvirt.driver [-] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Instance destroyed successfully.
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.257 2 DEBUG nova.objects.instance [None req-bd0a7497-52d9-4259-ba1e-325f558091ad e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lazy-loading 'resources' on Instance uuid 0b03ee08-f5c8-4897-8215-b9998393372f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:12:47 compute-0 neutron-haproxy-ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27[291522]: [NOTICE]   (291526) : haproxy version is 2.8.14-c23fe91
Oct 02 12:12:47 compute-0 neutron-haproxy-ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27[291522]: [NOTICE]   (291526) : path to executable is /usr/sbin/haproxy
Oct 02 12:12:47 compute-0 neutron-haproxy-ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27[291522]: [WARNING]  (291526) : Exiting Master process...
Oct 02 12:12:47 compute-0 neutron-haproxy-ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27[291522]: [ALERT]    (291526) : Current worker (291528) exited with code 143 (Terminated)
Oct 02 12:12:47 compute-0 neutron-haproxy-ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27[291522]: [WARNING]  (291526) : All workers exited. Exiting... (0)
Oct 02 12:12:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2819117286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:47 compute-0 systemd[1]: libpod-c606a4d480195c264a79b7ebc7c4654300c5dde81d5fb63ea3a0084a754ab98e.scope: Deactivated successfully.
Oct 02 12:12:47 compute-0 podman[293532]: 2025-10-02 12:12:47.273951681 +0000 UTC m=+0.068658064 container died c606a4d480195c264a79b7ebc7c4654300c5dde81d5fb63ea3a0084a754ab98e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 12:12:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c606a4d480195c264a79b7ebc7c4654300c5dde81d5fb63ea3a0084a754ab98e-userdata-shm.mount: Deactivated successfully.
Oct 02 12:12:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-75160e36c7dd180302c05b2c8cc01de3ad46698c7831f8ac45cc9501fe5e9117-merged.mount: Deactivated successfully.
Oct 02 12:12:47 compute-0 podman[293532]: 2025-10-02 12:12:47.323523402 +0000 UTC m=+0.118229765 container cleanup c606a4d480195c264a79b7ebc7c4654300c5dde81d5fb63ea3a0084a754ab98e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 12:12:47 compute-0 systemd[1]: libpod-conmon-c606a4d480195c264a79b7ebc7c4654300c5dde81d5fb63ea3a0084a754ab98e.scope: Deactivated successfully.
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.356 2 DEBUG nova.virt.libvirt.vif [None req-bd0a7497-52d9-4259-ba1e-325f558091ad e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:11:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1894910822',display_name='tempest-SecurityGroupsTestJSON-server-1894910822',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1894910822',id=51,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:11:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3110a9141bf416d96e84c98f5ec90b6',ramdisk_id='',reservation_id='r-ew1di0g7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-82162709',owner_user_name='tempest-SecurityGroupsTestJSON-82162709-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:11:31Z,user_data=None,user_id='e2a366ecf5934af989ad59e70e8c0b40',uuid=0b03ee08-f5c8-4897-8215-b9998393372f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "608719c5-5877-4237-83ea-eb3cba59c3a3", "address": "fa:16:3e:6e:cf:c2", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap608719c5-58", "ovs_interfaceid": "608719c5-5877-4237-83ea-eb3cba59c3a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.357 2 DEBUG nova.network.os_vif_util [None req-bd0a7497-52d9-4259-ba1e-325f558091ad e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Converting VIF {"id": "608719c5-5877-4237-83ea-eb3cba59c3a3", "address": "fa:16:3e:6e:cf:c2", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap608719c5-58", "ovs_interfaceid": "608719c5-5877-4237-83ea-eb3cba59c3a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.358 2 DEBUG nova.network.os_vif_util [None req-bd0a7497-52d9-4259-ba1e-325f558091ad e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6e:cf:c2,bridge_name='br-int',has_traffic_filtering=True,id=608719c5-5877-4237-83ea-eb3cba59c3a3,network=Network(b99f809e-5bc0-4d91-be9e-5e46fd286a27),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap608719c5-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.358 2 DEBUG os_vif [None req-bd0a7497-52d9-4259-ba1e-325f558091ad e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:cf:c2,bridge_name='br-int',has_traffic_filtering=True,id=608719c5-5877-4237-83ea-eb3cba59c3a3,network=Network(b99f809e-5bc0-4d91-be9e-5e46fd286a27),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap608719c5-58') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.361 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap608719c5-58, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.367 2 INFO os_vif [None req-bd0a7497-52d9-4259-ba1e-325f558091ad e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:cf:c2,bridge_name='br-int',has_traffic_filtering=True,id=608719c5-5877-4237-83ea-eb3cba59c3a3,network=Network(b99f809e-5bc0-4d91-be9e-5e46fd286a27),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap608719c5-58')
Oct 02 12:12:47 compute-0 podman[293571]: 2025-10-02 12:12:47.410793037 +0000 UTC m=+0.060924397 container remove c606a4d480195c264a79b7ebc7c4654300c5dde81d5fb63ea3a0084a754ab98e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:12:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:47.422 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[99c61ef0-893a-45d5-bb12-8c558468a775]: (4, ('Thu Oct  2 12:12:47 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27 (c606a4d480195c264a79b7ebc7c4654300c5dde81d5fb63ea3a0084a754ab98e)\nc606a4d480195c264a79b7ebc7c4654300c5dde81d5fb63ea3a0084a754ab98e\nThu Oct  2 12:12:47 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27 (c606a4d480195c264a79b7ebc7c4654300c5dde81d5fb63ea3a0084a754ab98e)\nc606a4d480195c264a79b7ebc7c4654300c5dde81d5fb63ea3a0084a754ab98e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:47.423 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ede562bb-5f40-4b49-b56a-7b9c39ce1b9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:47.425 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb99f809e-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:47 compute-0 kernel: tapb99f809e-50: left promiscuous mode
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:47.446 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[61669a8d-cf90-4c6d-bfa8-f9a2482fa551]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:47.481 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6a250171-0216-4db3-a339-2f97d1b057ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:47.482 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[add78daa-8215-43cc-a572-8817c448ba9a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:47.500 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c54d6515-c670-4f5c-9447-f108b98527af]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 515741, 'reachable_time': 21329, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293604, 'error': None, 'target': 'ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:47.502 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b99f809e-5bc0-4d91-be9e-5e46fd286a27 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:12:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:47.502 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[2deb89a8-3389-4c08-bdd1-aa51dd200131]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:47 compute-0 systemd[1]: run-netns-ovnmeta\x2db99f809e\x2d5bc0\x2d4d91\x2dbe9e\x2d5e46fd286a27.mount: Deactivated successfully.
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.702 2 DEBUG nova.compute.manager [req-1b468208-2a1d-43d7-b8f6-f796e357365d req-49268318-a3c2-43e4-ad3a-8ea18b608de2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Received event network-vif-unplugged-608719c5-5877-4237-83ea-eb3cba59c3a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.703 2 DEBUG oslo_concurrency.lockutils [req-1b468208-2a1d-43d7-b8f6-f796e357365d req-49268318-a3c2-43e4-ad3a-8ea18b608de2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0b03ee08-f5c8-4897-8215-b9998393372f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.703 2 DEBUG oslo_concurrency.lockutils [req-1b468208-2a1d-43d7-b8f6-f796e357365d req-49268318-a3c2-43e4-ad3a-8ea18b608de2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0b03ee08-f5c8-4897-8215-b9998393372f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.704 2 DEBUG oslo_concurrency.lockutils [req-1b468208-2a1d-43d7-b8f6-f796e357365d req-49268318-a3c2-43e4-ad3a-8ea18b608de2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0b03ee08-f5c8-4897-8215-b9998393372f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.704 2 DEBUG nova.compute.manager [req-1b468208-2a1d-43d7-b8f6-f796e357365d req-49268318-a3c2-43e4-ad3a-8ea18b608de2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] No waiting events found dispatching network-vif-unplugged-608719c5-5877-4237-83ea-eb3cba59c3a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.704 2 DEBUG nova.compute.manager [req-1b468208-2a1d-43d7-b8f6-f796e357365d req-49268318-a3c2-43e4-ad3a-8ea18b608de2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Received event network-vif-unplugged-608719c5-5877-4237-83ea-eb3cba59c3a3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:12:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:47.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.915 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Updating instance_info_cache with network_info: [{"id": "608719c5-5877-4237-83ea-eb3cba59c3a3", "address": "fa:16:3e:6e:cf:c2", "network": {"id": "b99f809e-5bc0-4d91-be9e-5e46fd286a27", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-262900778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3110a9141bf416d96e84c98f5ec90b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap608719c5-58", "ovs_interfaceid": "608719c5-5877-4237-83ea-eb3cba59c3a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.961 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-0b03ee08-f5c8-4897-8215-b9998393372f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.962 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.962 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.995 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.995 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.996 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.996 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:12:47 compute-0 nova_compute[257802]: 2025-10-02 12:12:47.996 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:12:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:48.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:12:48 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1946215598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:48 compute-0 nova_compute[257802]: 2025-10-02 12:12:48.468 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:12:48 compute-0 ceph-mon[73607]: pgmap v1421: 305 pgs: 305 active+clean; 167 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Oct 02 12:12:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:48 compute-0 nova_compute[257802]: 2025-10-02 12:12:48.639 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:12:48 compute-0 nova_compute[257802]: 2025-10-02 12:12:48.640 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:12:48 compute-0 nova_compute[257802]: 2025-10-02 12:12:48.839 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:12:48 compute-0 nova_compute[257802]: 2025-10-02 12:12:48.840 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4642MB free_disk=20.922027587890625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:12:48 compute-0 nova_compute[257802]: 2025-10-02 12:12:48.840 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:48 compute-0 nova_compute[257802]: 2025-10-02 12:12:48.840 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:48 compute-0 nova_compute[257802]: 2025-10-02 12:12:48.955 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 0b03ee08-f5c8-4897-8215-b9998393372f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:12:48 compute-0 nova_compute[257802]: 2025-10-02 12:12:48.956 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:12:48 compute-0 nova_compute[257802]: 2025-10-02 12:12:48.956 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:12:49 compute-0 nova_compute[257802]: 2025-10-02 12:12:49.030 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:12:49 compute-0 nova_compute[257802]: 2025-10-02 12:12:49.143 2 INFO nova.virt.libvirt.driver [None req-bd0a7497-52d9-4259-ba1e-325f558091ad e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Deleting instance files /var/lib/nova/instances/0b03ee08-f5c8-4897-8215-b9998393372f_del
Oct 02 12:12:49 compute-0 nova_compute[257802]: 2025-10-02 12:12:49.144 2 INFO nova.virt.libvirt.driver [None req-bd0a7497-52d9-4259-ba1e-325f558091ad e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Deletion of /var/lib/nova/instances/0b03ee08-f5c8-4897-8215-b9998393372f_del complete
Oct 02 12:12:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 120 MiB data, 530 MiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 2.0 MiB/s wr, 53 op/s
Oct 02 12:12:49 compute-0 nova_compute[257802]: 2025-10-02 12:12:49.206 2 INFO nova.compute.manager [None req-bd0a7497-52d9-4259-ba1e-325f558091ad e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Took 2.20 seconds to destroy the instance on the hypervisor.
Oct 02 12:12:49 compute-0 nova_compute[257802]: 2025-10-02 12:12:49.207 2 DEBUG oslo.service.loopingcall [None req-bd0a7497-52d9-4259-ba1e-325f558091ad e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:12:49 compute-0 nova_compute[257802]: 2025-10-02 12:12:49.207 2 DEBUG nova.compute.manager [-] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:12:49 compute-0 nova_compute[257802]: 2025-10-02 12:12:49.207 2 DEBUG nova.network.neutron [-] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:12:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:12:49 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1335536330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:49 compute-0 nova_compute[257802]: 2025-10-02 12:12:49.527 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:12:49 compute-0 nova_compute[257802]: 2025-10-02 12:12:49.534 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:12:49 compute-0 nova_compute[257802]: 2025-10-02 12:12:49.573 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:12:49 compute-0 nova_compute[257802]: 2025-10-02 12:12:49.600 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:12:49 compute-0 nova_compute[257802]: 2025-10-02 12:12:49.601 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1946215598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1335536330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:49 compute-0 nova_compute[257802]: 2025-10-02 12:12:49.822 2 DEBUG nova.compute.manager [req-b51bf769-727e-4768-a585-4791a60e77ea req-6f361d54-e6d5-469f-9d7c-b83ef337a465 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Received event network-vif-plugged-608719c5-5877-4237-83ea-eb3cba59c3a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:12:49 compute-0 nova_compute[257802]: 2025-10-02 12:12:49.823 2 DEBUG oslo_concurrency.lockutils [req-b51bf769-727e-4768-a585-4791a60e77ea req-6f361d54-e6d5-469f-9d7c-b83ef337a465 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0b03ee08-f5c8-4897-8215-b9998393372f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:49 compute-0 nova_compute[257802]: 2025-10-02 12:12:49.823 2 DEBUG oslo_concurrency.lockutils [req-b51bf769-727e-4768-a585-4791a60e77ea req-6f361d54-e6d5-469f-9d7c-b83ef337a465 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0b03ee08-f5c8-4897-8215-b9998393372f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:49 compute-0 nova_compute[257802]: 2025-10-02 12:12:49.824 2 DEBUG oslo_concurrency.lockutils [req-b51bf769-727e-4768-a585-4791a60e77ea req-6f361d54-e6d5-469f-9d7c-b83ef337a465 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0b03ee08-f5c8-4897-8215-b9998393372f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:49 compute-0 nova_compute[257802]: 2025-10-02 12:12:49.824 2 DEBUG nova.compute.manager [req-b51bf769-727e-4768-a585-4791a60e77ea req-6f361d54-e6d5-469f-9d7c-b83ef337a465 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] No waiting events found dispatching network-vif-plugged-608719c5-5877-4237-83ea-eb3cba59c3a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:12:49 compute-0 nova_compute[257802]: 2025-10-02 12:12:49.824 2 WARNING nova.compute.manager [req-b51bf769-727e-4768-a585-4791a60e77ea req-6f361d54-e6d5-469f-9d7c-b83ef337a465 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Received unexpected event network-vif-plugged-608719c5-5877-4237-83ea-eb3cba59c3a3 for instance with vm_state active and task_state deleting.
Oct 02 12:12:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:49.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:12:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:50.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:12:50 compute-0 nova_compute[257802]: 2025-10-02 12:12:50.660 2 DEBUG nova.network.neutron [-] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:12:50 compute-0 nova_compute[257802]: 2025-10-02 12:12:50.677 2 INFO nova.compute.manager [-] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Took 1.47 seconds to deallocate network for instance.
Oct 02 12:12:50 compute-0 nova_compute[257802]: 2025-10-02 12:12:50.750 2 DEBUG oslo_concurrency.lockutils [None req-bd0a7497-52d9-4259-ba1e-325f558091ad e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:50 compute-0 nova_compute[257802]: 2025-10-02 12:12:50.751 2 DEBUG oslo_concurrency.lockutils [None req-bd0a7497-52d9-4259-ba1e-325f558091ad e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:50 compute-0 ceph-mon[73607]: pgmap v1422: 305 pgs: 305 active+clean; 120 MiB data, 530 MiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 2.0 MiB/s wr, 53 op/s
Oct 02 12:12:50 compute-0 nova_compute[257802]: 2025-10-02 12:12:50.818 2 DEBUG oslo_concurrency.processutils [None req-bd0a7497-52d9-4259-ba1e-325f558091ad e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:12:50 compute-0 nova_compute[257802]: 2025-10-02 12:12:50.858 2 DEBUG nova.compute.manager [req-7bbd927a-45f0-4fd1-82c6-b70c0ff8aa23 req-69964772-4af5-4ee9-92f6-f81b6570e592 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Received event network-vif-deleted-608719c5-5877-4237-83ea-eb3cba59c3a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:12:51 compute-0 podman[293654]: 2025-10-02 12:12:51.008751225 +0000 UTC m=+0.134152852 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:12:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 88 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 1.0 MiB/s wr, 63 op/s
Oct 02 12:12:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:12:51 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3957051304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:51 compute-0 nova_compute[257802]: 2025-10-02 12:12:51.307 2 DEBUG oslo_concurrency.processutils [None req-bd0a7497-52d9-4259-ba1e-325f558091ad e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:12:51 compute-0 nova_compute[257802]: 2025-10-02 12:12:51.317 2 DEBUG nova.compute.provider_tree [None req-bd0a7497-52d9-4259-ba1e-325f558091ad e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:12:51 compute-0 nova_compute[257802]: 2025-10-02 12:12:51.375 2 DEBUG nova.scheduler.client.report [None req-bd0a7497-52d9-4259-ba1e-325f558091ad e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:12:51 compute-0 nova_compute[257802]: 2025-10-02 12:12:51.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:51 compute-0 nova_compute[257802]: 2025-10-02 12:12:51.405 2 DEBUG oslo_concurrency.lockutils [None req-bd0a7497-52d9-4259-ba1e-325f558091ad e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:51 compute-0 nova_compute[257802]: 2025-10-02 12:12:51.468 2 INFO nova.scheduler.client.report [None req-bd0a7497-52d9-4259-ba1e-325f558091ad e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Deleted allocations for instance 0b03ee08-f5c8-4897-8215-b9998393372f
Oct 02 12:12:51 compute-0 nova_compute[257802]: 2025-10-02 12:12:51.588 2 DEBUG oslo_concurrency.lockutils [None req-bd0a7497-52d9-4259-ba1e-325f558091ad e2a366ecf5934af989ad59e70e8c0b40 e3110a9141bf416d96e84c98f5ec90b6 - - default default] Lock "0b03ee08-f5c8-4897-8215-b9998393372f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:12:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:51.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:12:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:52.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3957051304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:52 compute-0 nova_compute[257802]: 2025-10-02 12:12:52.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 88 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 645 KiB/s rd, 585 KiB/s wr, 79 op/s
Oct 02 12:12:53 compute-0 ceph-mon[73607]: pgmap v1423: 305 pgs: 305 active+clean; 88 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 1.0 MiB/s wr, 63 op/s
Oct 02 12:12:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:53.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:54.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009958283896333519 of space, bias 1.0, pg target 0.2987485168900056 quantized to 32 (current 32)
Oct 02 12:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 12:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 12:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 12:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 12:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 12:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 12:12:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:54.457 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:12:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:54.458 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:12:54 compute-0 nova_compute[257802]: 2025-10-02 12:12:54.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:54 compute-0 ceph-mon[73607]: pgmap v1424: 305 pgs: 305 active+clean; 88 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 645 KiB/s rd, 585 KiB/s wr, 79 op/s
Oct 02 12:12:54 compute-0 nova_compute[257802]: 2025-10-02 12:12:54.737 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:12:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 88 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 14 KiB/s wr, 83 op/s
Oct 02 12:12:55 compute-0 sudo[293703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:55 compute-0 sudo[293703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:55 compute-0 sudo[293703]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:55 compute-0 sudo[293728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:55 compute-0 sudo[293728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:55 compute-0 sudo[293728]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Oct 02 12:12:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Oct 02 12:12:55 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Oct 02 12:12:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1688446545' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:12:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1688446545' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:12:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:55.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:56.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:56 compute-0 nova_compute[257802]: 2025-10-02 12:12:56.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Oct 02 12:12:56 compute-0 ceph-mon[73607]: pgmap v1425: 305 pgs: 305 active+clean; 88 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 14 KiB/s wr, 83 op/s
Oct 02 12:12:56 compute-0 ceph-mon[73607]: osdmap e197: 3 total, 3 up, 3 in
Oct 02 12:12:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Oct 02 12:12:56 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Oct 02 12:12:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 88 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.7 KiB/s wr, 123 op/s
Oct 02 12:12:57 compute-0 nova_compute[257802]: 2025-10-02 12:12:57.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:57.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Oct 02 12:12:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Oct 02 12:12:57 compute-0 ceph-mon[73607]: osdmap e198: 3 total, 3 up, 3 in
Oct 02 12:12:58 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Oct 02 12:12:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:58.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:58 compute-0 nova_compute[257802]: 2025-10-02 12:12:58.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:58 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:12:58.459 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:12:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:12:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 19K writes, 77K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s
                                           Cumulative WAL: 19K writes, 6245 syncs, 3.15 writes per sync, written: 0.07 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 9197 writes, 34K keys, 9197 commit groups, 1.0 writes per commit group, ingest: 36.18 MB, 0.06 MB/s
                                           Interval WAL: 9197 writes, 3472 syncs, 2.65 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 12:12:59 compute-0 ceph-mon[73607]: pgmap v1428: 305 pgs: 305 active+clean; 88 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.7 KiB/s wr, 123 op/s
Oct 02 12:12:59 compute-0 ceph-mon[73607]: osdmap e199: 3 total, 3 up, 3 in
Oct 02 12:12:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 112 MiB data, 530 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 2.0 MiB/s wr, 162 op/s
Oct 02 12:12:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:12:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:12:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:59.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:13:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:00.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:00 compute-0 ceph-mon[73607]: pgmap v1430: 305 pgs: 305 active+clean; 112 MiB data, 530 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 2.0 MiB/s wr, 162 op/s
Oct 02 12:13:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 3.5 MiB/s wr, 140 op/s
Oct 02 12:13:01 compute-0 nova_compute[257802]: 2025-10-02 12:13:01.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3142733455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:13:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:01.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:13:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:02.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:02 compute-0 nova_compute[257802]: 2025-10-02 12:13:02.251 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407167.2496452, 0b03ee08-f5c8-4897-8215-b9998393372f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:13:02 compute-0 nova_compute[257802]: 2025-10-02 12:13:02.251 2 INFO nova.compute.manager [-] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] VM Stopped (Lifecycle Event)
Oct 02 12:13:02 compute-0 nova_compute[257802]: 2025-10-02 12:13:02.284 2 DEBUG nova.compute.manager [None req-9fa95afc-01aa-4438-9820-5649ac74bb4e - - - - - -] [instance: 0b03ee08-f5c8-4897-8215-b9998393372f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:02 compute-0 nova_compute[257802]: 2025-10-02 12:13:02.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:02 compute-0 ceph-mon[73607]: pgmap v1431: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 3.5 MiB/s wr, 140 op/s
Oct 02 12:13:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 173 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.8 MiB/s wr, 163 op/s
Oct 02 12:13:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Oct 02 12:13:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Oct 02 12:13:03 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Oct 02 12:13:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:03.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:04.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:04 compute-0 ceph-mon[73607]: pgmap v1432: 305 pgs: 305 active+clean; 173 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.8 MiB/s wr, 163 op/s
Oct 02 12:13:04 compute-0 ceph-mon[73607]: osdmap e200: 3 total, 3 up, 3 in
Oct 02 12:13:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 189 MiB data, 567 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 7.1 MiB/s wr, 187 op/s
Oct 02 12:13:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:05.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:13:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:06.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:13:06 compute-0 nova_compute[257802]: 2025-10-02 12:13:06.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:06 compute-0 nova_compute[257802]: 2025-10-02 12:13:06.443 2 DEBUG oslo_concurrency.lockutils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "c5000eb8-3b94-4233-a49b-079f204f3543" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:06 compute-0 nova_compute[257802]: 2025-10-02 12:13:06.443 2 DEBUG oslo_concurrency.lockutils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "c5000eb8-3b94-4233-a49b-079f204f3543" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:06 compute-0 nova_compute[257802]: 2025-10-02 12:13:06.465 2 DEBUG nova.compute.manager [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:13:06 compute-0 nova_compute[257802]: 2025-10-02 12:13:06.598 2 DEBUG oslo_concurrency.lockutils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:06 compute-0 nova_compute[257802]: 2025-10-02 12:13:06.599 2 DEBUG oslo_concurrency.lockutils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:06 compute-0 nova_compute[257802]: 2025-10-02 12:13:06.605 2 DEBUG nova.virt.hardware [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:13:06 compute-0 nova_compute[257802]: 2025-10-02 12:13:06.605 2 INFO nova.compute.claims [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:13:06 compute-0 nova_compute[257802]: 2025-10-02 12:13:06.744 2 DEBUG oslo_concurrency.processutils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:06 compute-0 ceph-mon[73607]: pgmap v1434: 305 pgs: 305 active+clean; 189 MiB data, 567 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 7.1 MiB/s wr, 187 op/s
Oct 02 12:13:06 compute-0 ceph-mgr[73901]: [devicehealth INFO root] Check health
Oct 02 12:13:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 209 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 6.3 MiB/s wr, 141 op/s
Oct 02 12:13:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:13:07 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/741015868' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:07 compute-0 nova_compute[257802]: 2025-10-02 12:13:07.203 2 DEBUG oslo_concurrency.processutils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:07 compute-0 nova_compute[257802]: 2025-10-02 12:13:07.209 2 DEBUG nova.compute.provider_tree [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:13:07 compute-0 nova_compute[257802]: 2025-10-02 12:13:07.230 2 DEBUG nova.scheduler.client.report [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:13:07 compute-0 nova_compute[257802]: 2025-10-02 12:13:07.266 2 DEBUG oslo_concurrency.lockutils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:07 compute-0 nova_compute[257802]: 2025-10-02 12:13:07.267 2 DEBUG nova.compute.manager [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:13:07 compute-0 nova_compute[257802]: 2025-10-02 12:13:07.365 2 DEBUG nova.compute.manager [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:13:07 compute-0 nova_compute[257802]: 2025-10-02 12:13:07.365 2 DEBUG nova.network.neutron [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:13:07 compute-0 nova_compute[257802]: 2025-10-02 12:13:07.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:07 compute-0 nova_compute[257802]: 2025-10-02 12:13:07.407 2 INFO nova.virt.libvirt.driver [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:13:07 compute-0 nova_compute[257802]: 2025-10-02 12:13:07.428 2 DEBUG nova.compute.manager [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:13:07 compute-0 nova_compute[257802]: 2025-10-02 12:13:07.533 2 DEBUG nova.compute.manager [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:13:07 compute-0 nova_compute[257802]: 2025-10-02 12:13:07.536 2 DEBUG nova.virt.libvirt.driver [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:13:07 compute-0 nova_compute[257802]: 2025-10-02 12:13:07.537 2 INFO nova.virt.libvirt.driver [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Creating image(s)
Oct 02 12:13:07 compute-0 nova_compute[257802]: 2025-10-02 12:13:07.577 2 DEBUG nova.storage.rbd_utils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] rbd image c5000eb8-3b94-4233-a49b-079f204f3543_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:07 compute-0 nova_compute[257802]: 2025-10-02 12:13:07.609 2 DEBUG nova.storage.rbd_utils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] rbd image c5000eb8-3b94-4233-a49b-079f204f3543_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:07 compute-0 nova_compute[257802]: 2025-10-02 12:13:07.642 2 DEBUG nova.storage.rbd_utils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] rbd image c5000eb8-3b94-4233-a49b-079f204f3543_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:07 compute-0 nova_compute[257802]: 2025-10-02 12:13:07.646 2 DEBUG oslo_concurrency.lockutils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "6a3bc2ab8006c2e766aa5a77df45d7f8e46f7dd5" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:07 compute-0 nova_compute[257802]: 2025-10-02 12:13:07.647 2 DEBUG oslo_concurrency.lockutils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "6a3bc2ab8006c2e766aa5a77df45d7f8e46f7dd5" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:07 compute-0 nova_compute[257802]: 2025-10-02 12:13:07.673 2 DEBUG nova.policy [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0df47040f1ff4ce69a6fbdfd9eba4955', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '55d20ae21b6d4f0abfff3bccc371ee7a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:13:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/741015868' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:07.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:13:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:08.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:13:08 compute-0 nova_compute[257802]: 2025-10-02 12:13:08.136 2 DEBUG nova.virt.libvirt.imagebackend [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Image locations are: [{'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/11a037e0-306c-4ca3-91ab-08e92bb1fae5/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/11a037e0-306c-4ca3-91ab-08e92bb1fae5/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 02 12:13:08 compute-0 nova_compute[257802]: 2025-10-02 12:13:08.204 2 DEBUG nova.virt.libvirt.imagebackend [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Selected location: {'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/11a037e0-306c-4ca3-91ab-08e92bb1fae5/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 02 12:13:08 compute-0 nova_compute[257802]: 2025-10-02 12:13:08.205 2 DEBUG nova.storage.rbd_utils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] cloning images/11a037e0-306c-4ca3-91ab-08e92bb1fae5@snap to None/c5000eb8-3b94-4233-a49b-079f204f3543_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:13:08 compute-0 nova_compute[257802]: 2025-10-02 12:13:08.347 2 DEBUG oslo_concurrency.lockutils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "6a3bc2ab8006c2e766aa5a77df45d7f8e46f7dd5" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:08 compute-0 nova_compute[257802]: 2025-10-02 12:13:08.438 2 DEBUG nova.network.neutron [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Successfully created port: 3c995553-cbd9-4c64-8558-dee5d872f6ac _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:13:08 compute-0 nova_compute[257802]: 2025-10-02 12:13:08.499 2 DEBUG nova.objects.instance [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lazy-loading 'migration_context' on Instance uuid c5000eb8-3b94-4233-a49b-079f204f3543 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:13:08 compute-0 nova_compute[257802]: 2025-10-02 12:13:08.516 2 DEBUG nova.virt.libvirt.driver [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:13:08 compute-0 nova_compute[257802]: 2025-10-02 12:13:08.516 2 DEBUG nova.virt.libvirt.driver [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Ensure instance console log exists: /var/lib/nova/instances/c5000eb8-3b94-4233-a49b-079f204f3543/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:13:08 compute-0 nova_compute[257802]: 2025-10-02 12:13:08.517 2 DEBUG oslo_concurrency.lockutils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:08 compute-0 nova_compute[257802]: 2025-10-02 12:13:08.517 2 DEBUG oslo_concurrency.lockutils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:08 compute-0 nova_compute[257802]: 2025-10-02 12:13:08.517 2 DEBUG oslo_concurrency.lockutils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:08 compute-0 ceph-mon[73607]: pgmap v1435: 305 pgs: 305 active+clean; 209 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 6.3 MiB/s wr, 141 op/s
Oct 02 12:13:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3968065184' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3230038949' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 213 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 917 KiB/s rd, 5.6 MiB/s wr, 151 op/s
Oct 02 12:13:09 compute-0 nova_compute[257802]: 2025-10-02 12:13:09.771 2 DEBUG nova.network.neutron [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Successfully updated port: 3c995553-cbd9-4c64-8558-dee5d872f6ac _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:13:09 compute-0 nova_compute[257802]: 2025-10-02 12:13:09.793 2 DEBUG oslo_concurrency.lockutils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "refresh_cache-c5000eb8-3b94-4233-a49b-079f204f3543" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:13:09 compute-0 nova_compute[257802]: 2025-10-02 12:13:09.793 2 DEBUG oslo_concurrency.lockutils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquired lock "refresh_cache-c5000eb8-3b94-4233-a49b-079f204f3543" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:13:09 compute-0 nova_compute[257802]: 2025-10-02 12:13:09.793 2 DEBUG nova.network.neutron [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:13:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:09.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:09 compute-0 nova_compute[257802]: 2025-10-02 12:13:09.945 2 DEBUG nova.compute.manager [req-37be731d-39ef-43e0-817e-3811f52fc553 req-a7793a6c-3385-4ab7-90b3-9e00cfab3b75 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Received event network-changed-3c995553-cbd9-4c64-8558-dee5d872f6ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:09 compute-0 nova_compute[257802]: 2025-10-02 12:13:09.945 2 DEBUG nova.compute.manager [req-37be731d-39ef-43e0-817e-3811f52fc553 req-a7793a6c-3385-4ab7-90b3-9e00cfab3b75 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Refreshing instance network info cache due to event network-changed-3c995553-cbd9-4c64-8558-dee5d872f6ac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:13:09 compute-0 nova_compute[257802]: 2025-10-02 12:13:09.945 2 DEBUG oslo_concurrency.lockutils [req-37be731d-39ef-43e0-817e-3811f52fc553 req-a7793a6c-3385-4ab7-90b3-9e00cfab3b75 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-c5000eb8-3b94-4233-a49b-079f204f3543" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:13:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:13:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:10.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:13:10 compute-0 nova_compute[257802]: 2025-10-02 12:13:10.144 2 DEBUG nova.network.neutron [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:13:10 compute-0 nova_compute[257802]: 2025-10-02 12:13:10.829 2 DEBUG nova.network.neutron [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Updating instance_info_cache with network_info: [{"id": "3c995553-cbd9-4c64-8558-dee5d872f6ac", "address": "fa:16:3e:89:75:4c", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c995553-cb", "ovs_interfaceid": "3c995553-cbd9-4c64-8558-dee5d872f6ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:13:10 compute-0 nova_compute[257802]: 2025-10-02 12:13:10.861 2 DEBUG oslo_concurrency.lockutils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Releasing lock "refresh_cache-c5000eb8-3b94-4233-a49b-079f204f3543" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:13:10 compute-0 nova_compute[257802]: 2025-10-02 12:13:10.862 2 DEBUG nova.compute.manager [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Instance network_info: |[{"id": "3c995553-cbd9-4c64-8558-dee5d872f6ac", "address": "fa:16:3e:89:75:4c", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c995553-cb", "ovs_interfaceid": "3c995553-cbd9-4c64-8558-dee5d872f6ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:13:10 compute-0 nova_compute[257802]: 2025-10-02 12:13:10.862 2 DEBUG oslo_concurrency.lockutils [req-37be731d-39ef-43e0-817e-3811f52fc553 req-a7793a6c-3385-4ab7-90b3-9e00cfab3b75 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-c5000eb8-3b94-4233-a49b-079f204f3543" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:13:10 compute-0 nova_compute[257802]: 2025-10-02 12:13:10.863 2 DEBUG nova.network.neutron [req-37be731d-39ef-43e0-817e-3811f52fc553 req-a7793a6c-3385-4ab7-90b3-9e00cfab3b75 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Refreshing network info cache for port 3c995553-cbd9-4c64-8558-dee5d872f6ac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:13:10 compute-0 nova_compute[257802]: 2025-10-02 12:13:10.866 2 DEBUG nova.virt.libvirt.driver [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Start _get_guest_xml network_info=[{"id": "3c995553-cbd9-4c64-8558-dee5d872f6ac", "address": "fa:16:3e:89:75:4c", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c995553-cb", "ovs_interfaceid": "3c995553-cbd9-4c64-8558-dee5d872f6ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T12:12:54Z,direct_url=<?>,disk_format='raw',id=11a037e0-306c-4ca3-91ab-08e92bb1fae5,min_disk=1,min_ram=0,name='tempest-test-snap-900781146',owner='55d20ae21b6d4f0abfff3bccc371ee7a',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T12:12:59Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': '11a037e0-306c-4ca3-91ab-08e92bb1fae5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:13:10 compute-0 nova_compute[257802]: 2025-10-02 12:13:10.870 2 WARNING nova.virt.libvirt.driver [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:13:10 compute-0 nova_compute[257802]: 2025-10-02 12:13:10.874 2 DEBUG nova.virt.libvirt.host [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:13:10 compute-0 nova_compute[257802]: 2025-10-02 12:13:10.874 2 DEBUG nova.virt.libvirt.host [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:13:10 compute-0 nova_compute[257802]: 2025-10-02 12:13:10.878 2 DEBUG nova.virt.libvirt.host [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:13:10 compute-0 nova_compute[257802]: 2025-10-02 12:13:10.879 2 DEBUG nova.virt.libvirt.host [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:13:10 compute-0 nova_compute[257802]: 2025-10-02 12:13:10.880 2 DEBUG nova.virt.libvirt.driver [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:13:10 compute-0 nova_compute[257802]: 2025-10-02 12:13:10.880 2 DEBUG nova.virt.hardware [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T12:12:54Z,direct_url=<?>,disk_format='raw',id=11a037e0-306c-4ca3-91ab-08e92bb1fae5,min_disk=1,min_ram=0,name='tempest-test-snap-900781146',owner='55d20ae21b6d4f0abfff3bccc371ee7a',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T12:12:59Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:13:10 compute-0 nova_compute[257802]: 2025-10-02 12:13:10.881 2 DEBUG nova.virt.hardware [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:13:10 compute-0 nova_compute[257802]: 2025-10-02 12:13:10.881 2 DEBUG nova.virt.hardware [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:13:10 compute-0 nova_compute[257802]: 2025-10-02 12:13:10.882 2 DEBUG nova.virt.hardware [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:13:10 compute-0 nova_compute[257802]: 2025-10-02 12:13:10.882 2 DEBUG nova.virt.hardware [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:13:10 compute-0 nova_compute[257802]: 2025-10-02 12:13:10.882 2 DEBUG nova.virt.hardware [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:13:10 compute-0 nova_compute[257802]: 2025-10-02 12:13:10.883 2 DEBUG nova.virt.hardware [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:13:10 compute-0 nova_compute[257802]: 2025-10-02 12:13:10.883 2 DEBUG nova.virt.hardware [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:13:10 compute-0 nova_compute[257802]: 2025-10-02 12:13:10.883 2 DEBUG nova.virt.hardware [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:13:10 compute-0 nova_compute[257802]: 2025-10-02 12:13:10.884 2 DEBUG nova.virt.hardware [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:13:10 compute-0 nova_compute[257802]: 2025-10-02 12:13:10.884 2 DEBUG nova.virt.hardware [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:13:10 compute-0 nova_compute[257802]: 2025-10-02 12:13:10.887 2 DEBUG oslo_concurrency.processutils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:10 compute-0 ceph-mon[73607]: pgmap v1436: 305 pgs: 305 active+clean; 213 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 917 KiB/s rd, 5.6 MiB/s wr, 151 op/s
Oct 02 12:13:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 213 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 374 KiB/s rd, 4.7 MiB/s wr, 142 op/s
Oct 02 12:13:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:13:11 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2390124373' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.307 2 DEBUG oslo_concurrency.processutils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.336 2 DEBUG nova.storage.rbd_utils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] rbd image c5000eb8-3b94-4233-a49b-079f204f3543_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.341 2 DEBUG oslo_concurrency.processutils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:13:11 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1613105974' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.806 2 DEBUG oslo_concurrency.processutils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.807 2 DEBUG nova.virt.libvirt.vif [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:13:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-506902227',display_name='tempest-ImagesTestJSON-server-506902227',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-506902227',id=57,image_ref='11a037e0-306c-4ca3-91ab-08e92bb1fae5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='55d20ae21b6d4f0abfff3bccc371ee7a',ramdisk_id='',reservation_id='r-v1bbfj18',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='d8adf6f4-e7d3-4a21-87f7-4b2396126258',image_min_disk='1',image_min_ram='0',image_owner_id='55d20ae21b6d4f0abfff3bccc371ee7a',image_owner_project_name='tempest-ImagesTestJSON-2116266493',image_owner_user_name='tempest-ImagesTestJSON-2116266493-project-member',image_user_id='0df47040f1ff4ce69a6fbdfd9eba4955',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-2116266493',owner_user_name='tempest-ImagesTestJSON-2116266493-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:13:07Z,user_data=None,user_id='0df47040f1ff4ce69a6fbdfd9eba4955',uuid=c5000eb8-3b94-4233-a49b-079f204f3543,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c995553-cbd9-4c64-8558-dee5d872f6ac", "address": "fa:16:3e:89:75:4c", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c995553-cb", "ovs_interfaceid": "3c995553-cbd9-4c64-8558-dee5d872f6ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.808 2 DEBUG nova.network.os_vif_util [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Converting VIF {"id": "3c995553-cbd9-4c64-8558-dee5d872f6ac", "address": "fa:16:3e:89:75:4c", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c995553-cb", "ovs_interfaceid": "3c995553-cbd9-4c64-8558-dee5d872f6ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.808 2 DEBUG nova.network.os_vif_util [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:75:4c,bridge_name='br-int',has_traffic_filtering=True,id=3c995553-cbd9-4c64-8558-dee5d872f6ac,network=Network(6d00de8e-203c-4e94-b60f-36ba9ccef805),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c995553-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.810 2 DEBUG nova.objects.instance [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lazy-loading 'pci_devices' on Instance uuid c5000eb8-3b94-4233-a49b-079f204f3543 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.834 2 DEBUG nova.virt.libvirt.driver [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:13:11 compute-0 nova_compute[257802]:   <uuid>c5000eb8-3b94-4233-a49b-079f204f3543</uuid>
Oct 02 12:13:11 compute-0 nova_compute[257802]:   <name>instance-00000039</name>
Oct 02 12:13:11 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:13:11 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:13:11 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:13:11 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:       <nova:name>tempest-ImagesTestJSON-server-506902227</nova:name>
Oct 02 12:13:11 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:13:10</nova:creationTime>
Oct 02 12:13:11 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:13:11 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:13:11 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:13:11 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:13:11 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:13:11 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:13:11 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:13:11 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:13:11 compute-0 nova_compute[257802]:         <nova:user uuid="0df47040f1ff4ce69a6fbdfd9eba4955">tempest-ImagesTestJSON-2116266493-project-member</nova:user>
Oct 02 12:13:11 compute-0 nova_compute[257802]:         <nova:project uuid="55d20ae21b6d4f0abfff3bccc371ee7a">tempest-ImagesTestJSON-2116266493</nova:project>
Oct 02 12:13:11 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:13:11 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="11a037e0-306c-4ca3-91ab-08e92bb1fae5"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:13:11 compute-0 nova_compute[257802]:         <nova:port uuid="3c995553-cbd9-4c64-8558-dee5d872f6ac">
Oct 02 12:13:11 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:13:11 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:13:11 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:13:11 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <system>
Oct 02 12:13:11 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:13:11 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:13:11 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:13:11 compute-0 nova_compute[257802]:       <entry name="serial">c5000eb8-3b94-4233-a49b-079f204f3543</entry>
Oct 02 12:13:11 compute-0 nova_compute[257802]:       <entry name="uuid">c5000eb8-3b94-4233-a49b-079f204f3543</entry>
Oct 02 12:13:11 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     </system>
Oct 02 12:13:11 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:13:11 compute-0 nova_compute[257802]:   <os>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:   </os>
Oct 02 12:13:11 compute-0 nova_compute[257802]:   <features>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:   </features>
Oct 02 12:13:11 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:13:11 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:13:11 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:13:11 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/c5000eb8-3b94-4233-a49b-079f204f3543_disk">
Oct 02 12:13:11 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:       </source>
Oct 02 12:13:11 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:13:11 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:13:11 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:13:11 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/c5000eb8-3b94-4233-a49b-079f204f3543_disk.config">
Oct 02 12:13:11 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:       </source>
Oct 02 12:13:11 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:13:11 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:13:11 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:13:11 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:89:75:4c"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:       <target dev="tap3c995553-cb"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:13:11 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/c5000eb8-3b94-4233-a49b-079f204f3543/console.log" append="off"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <video>
Oct 02 12:13:11 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     </video>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <input type="keyboard" bus="usb"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:13:11 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:13:11 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:13:11 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:13:11 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:13:11 compute-0 nova_compute[257802]: </domain>
Oct 02 12:13:11 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.835 2 DEBUG nova.compute.manager [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Preparing to wait for external event network-vif-plugged-3c995553-cbd9-4c64-8558-dee5d872f6ac prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.835 2 DEBUG oslo_concurrency.lockutils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "c5000eb8-3b94-4233-a49b-079f204f3543-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.835 2 DEBUG oslo_concurrency.lockutils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "c5000eb8-3b94-4233-a49b-079f204f3543-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.836 2 DEBUG oslo_concurrency.lockutils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "c5000eb8-3b94-4233-a49b-079f204f3543-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.836 2 DEBUG nova.virt.libvirt.vif [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:13:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-506902227',display_name='tempest-ImagesTestJSON-server-506902227',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-506902227',id=57,image_ref='11a037e0-306c-4ca3-91ab-08e92bb1fae5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='55d20ae21b6d4f0abfff3bccc371ee7a',ramdisk_id='',reservation_id='r-v1bbfj18',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='d8adf6f4-e7d3-4a21-87f7-4b2396126258',image_min_disk='1',image_min_ram='0',image_owner_id='55d20ae21b6d4f0abfff3bccc371ee7a',image_owner_project_name='tempest-ImagesTestJSON-2116266493',image_owner_user_name='tempest-ImagesTestJSON-2116266493-project-member',image_user_id='0df47040f1ff4ce69a6fbdfd9eba4955',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-2116266493',owner_user_name='tempest-ImagesTestJSON-2116266493-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:13:07Z,user_data=None,user_id='0df47040f1ff4ce69a6fbdfd9eba4955',uuid=c5000eb8-3b94-4233-a49b-079f204f3543,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c995553-cbd9-4c64-8558-dee5d872f6ac", "address": "fa:16:3e:89:75:4c", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c995553-cb", "ovs_interfaceid": "3c995553-cbd9-4c64-8558-dee5d872f6ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.837 2 DEBUG nova.network.os_vif_util [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Converting VIF {"id": "3c995553-cbd9-4c64-8558-dee5d872f6ac", "address": "fa:16:3e:89:75:4c", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c995553-cb", "ovs_interfaceid": "3c995553-cbd9-4c64-8558-dee5d872f6ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.837 2 DEBUG nova.network.os_vif_util [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:75:4c,bridge_name='br-int',has_traffic_filtering=True,id=3c995553-cbd9-4c64-8558-dee5d872f6ac,network=Network(6d00de8e-203c-4e94-b60f-36ba9ccef805),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c995553-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.837 2 DEBUG os_vif [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:75:4c,bridge_name='br-int',has_traffic_filtering=True,id=3c995553-cbd9-4c64-8558-dee5d872f6ac,network=Network(6d00de8e-203c-4e94-b60f-36ba9ccef805),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c995553-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.838 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.839 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.841 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c995553-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.841 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3c995553-cb, col_values=(('external_ids', {'iface-id': '3c995553-cbd9-4c64-8558-dee5d872f6ac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:89:75:4c', 'vm-uuid': 'c5000eb8-3b94-4233-a49b-079f204f3543'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:11 compute-0 NetworkManager[44987]: <info>  [1759407191.8434] manager: (tap3c995553-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/94)
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.848 2 INFO os_vif [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:75:4c,bridge_name='br-int',has_traffic_filtering=True,id=3c995553-cbd9-4c64-8558-dee5d872f6ac,network=Network(6d00de8e-203c-4e94-b60f-36ba9ccef805),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c995553-cb')
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.900 2 DEBUG nova.virt.libvirt.driver [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:13:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.900 2 DEBUG nova.virt.libvirt.driver [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:13:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.900 2 DEBUG nova.virt.libvirt.driver [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] No VIF found with MAC fa:16:3e:89:75:4c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:13:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:11.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.901 2 INFO nova.virt.libvirt.driver [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Using config drive
Oct 02 12:13:11 compute-0 nova_compute[257802]: 2025-10-02 12:13:11.934 2 DEBUG nova.storage.rbd_utils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] rbd image c5000eb8-3b94-4233-a49b-079f204f3543_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2390124373' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1613105974' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:12.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:12 compute-0 nova_compute[257802]: 2025-10-02 12:13:12.306 2 DEBUG nova.network.neutron [req-37be731d-39ef-43e0-817e-3811f52fc553 req-a7793a6c-3385-4ab7-90b3-9e00cfab3b75 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Updated VIF entry in instance network info cache for port 3c995553-cbd9-4c64-8558-dee5d872f6ac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:13:12 compute-0 nova_compute[257802]: 2025-10-02 12:13:12.307 2 DEBUG nova.network.neutron [req-37be731d-39ef-43e0-817e-3811f52fc553 req-a7793a6c-3385-4ab7-90b3-9e00cfab3b75 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Updating instance_info_cache with network_info: [{"id": "3c995553-cbd9-4c64-8558-dee5d872f6ac", "address": "fa:16:3e:89:75:4c", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c995553-cb", "ovs_interfaceid": "3c995553-cbd9-4c64-8558-dee5d872f6ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:13:12 compute-0 nova_compute[257802]: 2025-10-02 12:13:12.331 2 DEBUG oslo_concurrency.lockutils [req-37be731d-39ef-43e0-817e-3811f52fc553 req-a7793a6c-3385-4ab7-90b3-9e00cfab3b75 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-c5000eb8-3b94-4233-a49b-079f204f3543" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:13:12 compute-0 nova_compute[257802]: 2025-10-02 12:13:12.421 2 INFO nova.virt.libvirt.driver [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Creating config drive at /var/lib/nova/instances/c5000eb8-3b94-4233-a49b-079f204f3543/disk.config
Oct 02 12:13:12 compute-0 nova_compute[257802]: 2025-10-02 12:13:12.426 2 DEBUG oslo_concurrency.processutils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c5000eb8-3b94-4233-a49b-079f204f3543/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz1abhwbl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:12 compute-0 nova_compute[257802]: 2025-10-02 12:13:12.553 2 DEBUG oslo_concurrency.processutils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c5000eb8-3b94-4233-a49b-079f204f3543/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz1abhwbl" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:12 compute-0 nova_compute[257802]: 2025-10-02 12:13:12.588 2 DEBUG nova.storage.rbd_utils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] rbd image c5000eb8-3b94-4233-a49b-079f204f3543_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:12 compute-0 nova_compute[257802]: 2025-10-02 12:13:12.592 2 DEBUG oslo_concurrency.processutils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c5000eb8-3b94-4233-a49b-079f204f3543/disk.config c5000eb8-3b94-4233-a49b-079f204f3543_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:13:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:13:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:13:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:13:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:13:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:13:12 compute-0 nova_compute[257802]: 2025-10-02 12:13:12.848 2 DEBUG oslo_concurrency.processutils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c5000eb8-3b94-4233-a49b-079f204f3543/disk.config c5000eb8-3b94-4233-a49b-079f204f3543_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.256s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:12 compute-0 nova_compute[257802]: 2025-10-02 12:13:12.849 2 INFO nova.virt.libvirt.driver [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Deleting local config drive /var/lib/nova/instances/c5000eb8-3b94-4233-a49b-079f204f3543/disk.config because it was imported into RBD.
Oct 02 12:13:12 compute-0 kernel: tap3c995553-cb: entered promiscuous mode
Oct 02 12:13:12 compute-0 NetworkManager[44987]: <info>  [1759407192.9022] manager: (tap3c995553-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/95)
Oct 02 12:13:12 compute-0 ovn_controller[148183]: 2025-10-02T12:13:12Z|00207|binding|INFO|Claiming lport 3c995553-cbd9-4c64-8558-dee5d872f6ac for this chassis.
Oct 02 12:13:12 compute-0 nova_compute[257802]: 2025-10-02 12:13:12.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:12 compute-0 ovn_controller[148183]: 2025-10-02T12:13:12Z|00208|binding|INFO|3c995553-cbd9-4c64-8558-dee5d872f6ac: Claiming fa:16:3e:89:75:4c 10.100.0.12
Oct 02 12:13:12 compute-0 nova_compute[257802]: 2025-10-02 12:13:12.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:12 compute-0 systemd-udevd[294096]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:13:12 compute-0 systemd-machined[211836]: New machine qemu-27-instance-00000039.
Oct 02 12:13:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:12.930 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:75:4c 10.100.0.12'], port_security=['fa:16:3e:89:75:4c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c5000eb8-3b94-4233-a49b-079f204f3543', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '55d20ae21b6d4f0abfff3bccc371ee7a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0a015800-2f8b-4fd4-818b-829a4dcb7912', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9678136b-02f9-4c61-b96e-15935f11dca7, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=3c995553-cbd9-4c64-8558-dee5d872f6ac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:13:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:12.932 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 3c995553-cbd9-4c64-8558-dee5d872f6ac in datapath 6d00de8e-203c-4e94-b60f-36ba9ccef805 bound to our chassis
Oct 02 12:13:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:12.933 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d00de8e-203c-4e94-b60f-36ba9ccef805
Oct 02 12:13:12 compute-0 NetworkManager[44987]: <info>  [1759407192.9433] device (tap3c995553-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:13:12 compute-0 systemd[1]: Started Virtual Machine qemu-27-instance-00000039.
Oct 02 12:13:12 compute-0 NetworkManager[44987]: <info>  [1759407192.9443] device (tap3c995553-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:13:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:12.943 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1f9b589a-1dfd-48e0-adbf-d6fc8b1e21d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:12.944 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6d00de8e-21 in ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:13:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:12.945 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6d00de8e-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:13:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:12.946 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8e5d8a52-64c7-412d-b323-b83fd8164f9a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:12.947 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e5298acf-6c71-435b-969b-2e5caeab9332]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:12 compute-0 ceph-mon[73607]: pgmap v1437: 305 pgs: 305 active+clean; 213 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 374 KiB/s rd, 4.7 MiB/s wr, 142 op/s
Oct 02 12:13:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:12.959 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[dd587eb0-5249-4197-a72c-026f7d9aa24c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:12.974 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8485fb30-470e-4d29-a68b-833dd06c7c60]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:12 compute-0 nova_compute[257802]: 2025-10-02 12:13:12.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:12 compute-0 ovn_controller[148183]: 2025-10-02T12:13:12Z|00209|binding|INFO|Setting lport 3c995553-cbd9-4c64-8558-dee5d872f6ac ovn-installed in OVS
Oct 02 12:13:12 compute-0 ovn_controller[148183]: 2025-10-02T12:13:12Z|00210|binding|INFO|Setting lport 3c995553-cbd9-4c64-8558-dee5d872f6ac up in Southbound
Oct 02 12:13:12 compute-0 nova_compute[257802]: 2025-10-02 12:13:12.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:13.006 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[86053ef2-9b8a-442a-bc95-182f10e99972]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:13.011 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8e30081e-9e82-4ea8-8bbb-128ca382a029]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:13 compute-0 systemd-udevd[294099]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:13:13 compute-0 NetworkManager[44987]: <info>  [1759407193.0128] manager: (tap6d00de8e-20): new Veth device (/org/freedesktop/NetworkManager/Devices/96)
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:13.042 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[746d961e-c980-4ca9-b4a4-c0fca1e0aff8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:13.045 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[428b7672-d363-4f9d-9bbf-0827fcff8d9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:13 compute-0 podman[294100]: 2025-10-02 12:13:13.057736319 +0000 UTC m=+0.091200311 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:13:13 compute-0 NetworkManager[44987]: <info>  [1759407193.0821] device (tap6d00de8e-20): carrier: link connected
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:13.091 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[7fcf7d29-43a9-460b-a84f-fd9fd7014bb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:13.106 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[fe7ea524-30f8-4d74-8c5d-390e31b76d0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d00de8e-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:28:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526070, 'reachable_time': 26166, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294147, 'error': None, 'target': 'ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:13.125 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7318b0a9-6555-4857-9dc8-ffa866de81ba]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe87:28f2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 526070, 'tstamp': 526070}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294148, 'error': None, 'target': 'ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:13.143 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[eaed3935-8712-4503-968d-4281c16c1338]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d00de8e-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:28:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526070, 'reachable_time': 26166, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294149, 'error': None, 'target': 'ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 214 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.5 MiB/s wr, 147 op/s
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:13.186 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c670e951-5e2a-4909-b4c5-a4f7ed880dd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:13 compute-0 nova_compute[257802]: 2025-10-02 12:13:13.244 2 DEBUG nova.compute.manager [req-bb0b64b7-f367-445e-a190-80183e2d9b64 req-20c4462f-f263-4b52-83cf-287a4173c719 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Received event network-vif-plugged-3c995553-cbd9-4c64-8558-dee5d872f6ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:13 compute-0 nova_compute[257802]: 2025-10-02 12:13:13.245 2 DEBUG oslo_concurrency.lockutils [req-bb0b64b7-f367-445e-a190-80183e2d9b64 req-20c4462f-f263-4b52-83cf-287a4173c719 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "c5000eb8-3b94-4233-a49b-079f204f3543-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:13 compute-0 nova_compute[257802]: 2025-10-02 12:13:13.246 2 DEBUG oslo_concurrency.lockutils [req-bb0b64b7-f367-445e-a190-80183e2d9b64 req-20c4462f-f263-4b52-83cf-287a4173c719 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c5000eb8-3b94-4233-a49b-079f204f3543-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:13 compute-0 nova_compute[257802]: 2025-10-02 12:13:13.246 2 DEBUG oslo_concurrency.lockutils [req-bb0b64b7-f367-445e-a190-80183e2d9b64 req-20c4462f-f263-4b52-83cf-287a4173c719 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c5000eb8-3b94-4233-a49b-079f204f3543-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:13 compute-0 nova_compute[257802]: 2025-10-02 12:13:13.247 2 DEBUG nova.compute.manager [req-bb0b64b7-f367-445e-a190-80183e2d9b64 req-20c4462f-f263-4b52-83cf-287a4173c719 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Processing event network-vif-plugged-3c995553-cbd9-4c64-8558-dee5d872f6ac _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:13.248 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff167a5-2091-4084-b049-c8510da8a58e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:13.250 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d00de8e-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:13.250 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:13.250 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d00de8e-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:13 compute-0 nova_compute[257802]: 2025-10-02 12:13:13.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:13 compute-0 NetworkManager[44987]: <info>  [1759407193.2540] manager: (tap6d00de8e-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/97)
Oct 02 12:13:13 compute-0 kernel: tap6d00de8e-20: entered promiscuous mode
Oct 02 12:13:13 compute-0 nova_compute[257802]: 2025-10-02 12:13:13.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:13.258 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d00de8e-20, col_values=(('external_ids', {'iface-id': '4d0b2163-acbb-4b6a-b6d8-84f8212e1e02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:13 compute-0 nova_compute[257802]: 2025-10-02 12:13:13.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:13 compute-0 ovn_controller[148183]: 2025-10-02T12:13:13Z|00211|binding|INFO|Releasing lport 4d0b2163-acbb-4b6a-b6d8-84f8212e1e02 from this chassis (sb_readonly=0)
Oct 02 12:13:13 compute-0 nova_compute[257802]: 2025-10-02 12:13:13.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:13.278 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6d00de8e-203c-4e94-b60f-36ba9ccef805.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6d00de8e-203c-4e94-b60f-36ba9ccef805.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:13.279 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4d722ec1-a31f-456a-ba55-f90dbefa7e7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:13.280 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-6d00de8e-203c-4e94-b60f-36ba9ccef805
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/6d00de8e-203c-4e94-b60f-36ba9ccef805.pid.haproxy
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 6d00de8e-203c-4e94-b60f-36ba9ccef805
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:13:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:13.281 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'env', 'PROCESS_TAG=haproxy-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6d00de8e-203c-4e94-b60f-36ba9ccef805.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:13:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:13 compute-0 podman[294223]: 2025-10-02 12:13:13.633329446 +0000 UTC m=+0.057628367 container create 1102c333f8c17a29d0271f393d0d5e2fee2d8d8b551269ee8558af9f4f8555b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 12:13:13 compute-0 systemd[1]: Started libpod-conmon-1102c333f8c17a29d0271f393d0d5e2fee2d8d8b551269ee8558af9f4f8555b9.scope.
Oct 02 12:13:13 compute-0 podman[294223]: 2025-10-02 12:13:13.603211857 +0000 UTC m=+0.027510798 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:13:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18bbc3d5af56f4be8c24799b920b5a94eb786d1c2985ec71cabcc7febc04327c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:13 compute-0 podman[294223]: 2025-10-02 12:13:13.719282709 +0000 UTC m=+0.143581650 container init 1102c333f8c17a29d0271f393d0d5e2fee2d8d8b551269ee8558af9f4f8555b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:13:13 compute-0 podman[294223]: 2025-10-02 12:13:13.724378833 +0000 UTC m=+0.148677754 container start 1102c333f8c17a29d0271f393d0d5e2fee2d8d8b551269ee8558af9f4f8555b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:13:13 compute-0 neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805[294239]: [NOTICE]   (294243) : New worker (294245) forked
Oct 02 12:13:13 compute-0 neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805[294239]: [NOTICE]   (294243) : Loading success.
Oct 02 12:13:13 compute-0 nova_compute[257802]: 2025-10-02 12:13:13.868 2 DEBUG nova.compute.manager [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:13:13 compute-0 nova_compute[257802]: 2025-10-02 12:13:13.869 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407193.8683896, c5000eb8-3b94-4233-a49b-079f204f3543 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:13:13 compute-0 nova_compute[257802]: 2025-10-02 12:13:13.870 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] VM Started (Lifecycle Event)
Oct 02 12:13:13 compute-0 nova_compute[257802]: 2025-10-02 12:13:13.872 2 DEBUG nova.virt.libvirt.driver [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:13:13 compute-0 nova_compute[257802]: 2025-10-02 12:13:13.874 2 INFO nova.virt.libvirt.driver [-] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Instance spawned successfully.
Oct 02 12:13:13 compute-0 nova_compute[257802]: 2025-10-02 12:13:13.875 2 INFO nova.compute.manager [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Took 6.34 seconds to spawn the instance on the hypervisor.
Oct 02 12:13:13 compute-0 nova_compute[257802]: 2025-10-02 12:13:13.875 2 DEBUG nova.compute.manager [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:13 compute-0 nova_compute[257802]: 2025-10-02 12:13:13.888 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:13 compute-0 nova_compute[257802]: 2025-10-02 12:13:13.891 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:13:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:13:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:13.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:13:13 compute-0 nova_compute[257802]: 2025-10-02 12:13:13.912 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:13:13 compute-0 nova_compute[257802]: 2025-10-02 12:13:13.912 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407193.86999, c5000eb8-3b94-4233-a49b-079f204f3543 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:13:13 compute-0 nova_compute[257802]: 2025-10-02 12:13:13.913 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] VM Paused (Lifecycle Event)
Oct 02 12:13:14 compute-0 nova_compute[257802]: 2025-10-02 12:13:14.031 2 INFO nova.compute.manager [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Took 7.48 seconds to build instance.
Oct 02 12:13:14 compute-0 nova_compute[257802]: 2025-10-02 12:13:14.051 2 DEBUG oslo_concurrency.lockutils [None req-79339a34-0610-4518-936e-843ea3b889b0 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "c5000eb8-3b94-4233-a49b-079f204f3543" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:14 compute-0 nova_compute[257802]: 2025-10-02 12:13:14.053 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:14 compute-0 nova_compute[257802]: 2025-10-02 12:13:14.056 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407193.8729649, c5000eb8-3b94-4233-a49b-079f204f3543 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:13:14 compute-0 nova_compute[257802]: 2025-10-02 12:13:14.056 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] VM Resumed (Lifecycle Event)
Oct 02 12:13:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:14.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:14 compute-0 nova_compute[257802]: 2025-10-02 12:13:14.075 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:14 compute-0 nova_compute[257802]: 2025-10-02 12:13:14.078 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:13:14 compute-0 ceph-mon[73607]: pgmap v1438: 305 pgs: 305 active+clean; 214 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.5 MiB/s wr, 147 op/s
Oct 02 12:13:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 214 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 141 op/s
Oct 02 12:13:15 compute-0 nova_compute[257802]: 2025-10-02 12:13:15.329 2 DEBUG nova.compute.manager [req-de2b25ee-5c08-4690-b0eb-542a6e20dad1 req-d1d143e2-c453-484f-839a-ccbd2050047f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Received event network-vif-plugged-3c995553-cbd9-4c64-8558-dee5d872f6ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:15 compute-0 nova_compute[257802]: 2025-10-02 12:13:15.330 2 DEBUG oslo_concurrency.lockutils [req-de2b25ee-5c08-4690-b0eb-542a6e20dad1 req-d1d143e2-c453-484f-839a-ccbd2050047f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "c5000eb8-3b94-4233-a49b-079f204f3543-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:15 compute-0 nova_compute[257802]: 2025-10-02 12:13:15.330 2 DEBUG oslo_concurrency.lockutils [req-de2b25ee-5c08-4690-b0eb-542a6e20dad1 req-d1d143e2-c453-484f-839a-ccbd2050047f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c5000eb8-3b94-4233-a49b-079f204f3543-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:15 compute-0 nova_compute[257802]: 2025-10-02 12:13:15.330 2 DEBUG oslo_concurrency.lockutils [req-de2b25ee-5c08-4690-b0eb-542a6e20dad1 req-d1d143e2-c453-484f-839a-ccbd2050047f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c5000eb8-3b94-4233-a49b-079f204f3543-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:15 compute-0 nova_compute[257802]: 2025-10-02 12:13:15.331 2 DEBUG nova.compute.manager [req-de2b25ee-5c08-4690-b0eb-542a6e20dad1 req-d1d143e2-c453-484f-839a-ccbd2050047f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] No waiting events found dispatching network-vif-plugged-3c995553-cbd9-4c64-8558-dee5d872f6ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:13:15 compute-0 nova_compute[257802]: 2025-10-02 12:13:15.331 2 WARNING nova.compute.manager [req-de2b25ee-5c08-4690-b0eb-542a6e20dad1 req-d1d143e2-c453-484f-839a-ccbd2050047f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Received unexpected event network-vif-plugged-3c995553-cbd9-4c64-8558-dee5d872f6ac for instance with vm_state active and task_state deleting.
Oct 02 12:13:15 compute-0 nova_compute[257802]: 2025-10-02 12:13:15.353 2 DEBUG oslo_concurrency.lockutils [None req-a438b754-1064-49dc-b8c6-5b9ef57f65bc 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "c5000eb8-3b94-4233-a49b-079f204f3543" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:15 compute-0 nova_compute[257802]: 2025-10-02 12:13:15.353 2 DEBUG oslo_concurrency.lockutils [None req-a438b754-1064-49dc-b8c6-5b9ef57f65bc 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "c5000eb8-3b94-4233-a49b-079f204f3543" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:15 compute-0 nova_compute[257802]: 2025-10-02 12:13:15.354 2 DEBUG oslo_concurrency.lockutils [None req-a438b754-1064-49dc-b8c6-5b9ef57f65bc 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "c5000eb8-3b94-4233-a49b-079f204f3543-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:15 compute-0 nova_compute[257802]: 2025-10-02 12:13:15.354 2 DEBUG oslo_concurrency.lockutils [None req-a438b754-1064-49dc-b8c6-5b9ef57f65bc 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "c5000eb8-3b94-4233-a49b-079f204f3543-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:15 compute-0 nova_compute[257802]: 2025-10-02 12:13:15.354 2 DEBUG oslo_concurrency.lockutils [None req-a438b754-1064-49dc-b8c6-5b9ef57f65bc 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "c5000eb8-3b94-4233-a49b-079f204f3543-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:15 compute-0 nova_compute[257802]: 2025-10-02 12:13:15.356 2 INFO nova.compute.manager [None req-a438b754-1064-49dc-b8c6-5b9ef57f65bc 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Terminating instance
Oct 02 12:13:15 compute-0 nova_compute[257802]: 2025-10-02 12:13:15.356 2 DEBUG nova.compute.manager [None req-a438b754-1064-49dc-b8c6-5b9ef57f65bc 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:13:15 compute-0 kernel: tap3c995553-cb (unregistering): left promiscuous mode
Oct 02 12:13:15 compute-0 NetworkManager[44987]: <info>  [1759407195.4035] device (tap3c995553-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:13:15 compute-0 ovn_controller[148183]: 2025-10-02T12:13:15Z|00212|binding|INFO|Releasing lport 3c995553-cbd9-4c64-8558-dee5d872f6ac from this chassis (sb_readonly=0)
Oct 02 12:13:15 compute-0 ovn_controller[148183]: 2025-10-02T12:13:15Z|00213|binding|INFO|Setting lport 3c995553-cbd9-4c64-8558-dee5d872f6ac down in Southbound
Oct 02 12:13:15 compute-0 ovn_controller[148183]: 2025-10-02T12:13:15Z|00214|binding|INFO|Removing iface tap3c995553-cb ovn-installed in OVS
Oct 02 12:13:15 compute-0 nova_compute[257802]: 2025-10-02 12:13:15.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:15.421 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:75:4c 10.100.0.12'], port_security=['fa:16:3e:89:75:4c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c5000eb8-3b94-4233-a49b-079f204f3543', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '55d20ae21b6d4f0abfff3bccc371ee7a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a015800-2f8b-4fd4-818b-829a4dcb7912', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9678136b-02f9-4c61-b96e-15935f11dca7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=3c995553-cbd9-4c64-8558-dee5d872f6ac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:13:15 compute-0 sudo[294255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:15.423 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 3c995553-cbd9-4c64-8558-dee5d872f6ac in datapath 6d00de8e-203c-4e94-b60f-36ba9ccef805 unbound from our chassis
Oct 02 12:13:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:15.425 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6d00de8e-203c-4e94-b60f-36ba9ccef805, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:13:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:15.427 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[dbfac7c3-1fb7-496e-bda8-020eaba26de0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:15.427 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805 namespace which is not needed anymore
Oct 02 12:13:15 compute-0 sudo[294255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:15 compute-0 sudo[294255]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:15 compute-0 nova_compute[257802]: 2025-10-02 12:13:15.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:15 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000039.scope: Deactivated successfully.
Oct 02 12:13:15 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000039.scope: Consumed 2.516s CPU time.
Oct 02 12:13:15 compute-0 systemd-machined[211836]: Machine qemu-27-instance-00000039 terminated.
Oct 02 12:13:15 compute-0 sudo[294286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:15 compute-0 sudo[294286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:15 compute-0 sudo[294286]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:15 compute-0 NetworkManager[44987]: <info>  [1759407195.5727] manager: (tap3c995553-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/98)
Oct 02 12:13:15 compute-0 nova_compute[257802]: 2025-10-02 12:13:15.594 2 INFO nova.virt.libvirt.driver [-] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Instance destroyed successfully.
Oct 02 12:13:15 compute-0 nova_compute[257802]: 2025-10-02 12:13:15.595 2 DEBUG nova.objects.instance [None req-a438b754-1064-49dc-b8c6-5b9ef57f65bc 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lazy-loading 'resources' on Instance uuid c5000eb8-3b94-4233-a49b-079f204f3543 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:13:15 compute-0 nova_compute[257802]: 2025-10-02 12:13:15.612 2 DEBUG nova.virt.libvirt.vif [None req-a438b754-1064-49dc-b8c6-5b9ef57f65bc 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:13:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-506902227',display_name='tempest-ImagesTestJSON-server-506902227',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-506902227',id=57,image_ref='11a037e0-306c-4ca3-91ab-08e92bb1fae5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:13:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='55d20ae21b6d4f0abfff3bccc371ee7a',ramdisk_id='',reservation_id='r-v1bbfj18',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='d8adf6f4-e7d3-4a21-87f7-4b2396126258',image_min_disk='1',image_min_ram='0',image_owner_id='55d20ae21b6d4f0abfff3bccc371ee7a',image_owner_project_name='tempest-ImagesTestJSON-2116266493',image_owner_user_name='tempest-ImagesTestJSON-2116266493-project-member',image_user_id='0df47040f1ff4ce69a6fbdfd9eba4955',owner_project_name='tempest-ImagesTestJSON-2116266493',owner_user_name='tempest-ImagesTestJSON-2116266493-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:13:13Z,user_data=None,user_id='0df47040f1ff4ce69a6fbdfd9eba4955',uuid=c5000eb8-3b94-4233-a49b-079f204f3543,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3c995553-cbd9-4c64-8558-dee5d872f6ac", "address": "fa:16:3e:89:75:4c", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c995553-cb", "ovs_interfaceid": "3c995553-cbd9-4c64-8558-dee5d872f6ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:13:15 compute-0 nova_compute[257802]: 2025-10-02 12:13:15.613 2 DEBUG nova.network.os_vif_util [None req-a438b754-1064-49dc-b8c6-5b9ef57f65bc 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Converting VIF {"id": "3c995553-cbd9-4c64-8558-dee5d872f6ac", "address": "fa:16:3e:89:75:4c", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c995553-cb", "ovs_interfaceid": "3c995553-cbd9-4c64-8558-dee5d872f6ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:13:15 compute-0 nova_compute[257802]: 2025-10-02 12:13:15.613 2 DEBUG nova.network.os_vif_util [None req-a438b754-1064-49dc-b8c6-5b9ef57f65bc 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:75:4c,bridge_name='br-int',has_traffic_filtering=True,id=3c995553-cbd9-4c64-8558-dee5d872f6ac,network=Network(6d00de8e-203c-4e94-b60f-36ba9ccef805),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c995553-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:13:15 compute-0 nova_compute[257802]: 2025-10-02 12:13:15.614 2 DEBUG os_vif [None req-a438b754-1064-49dc-b8c6-5b9ef57f65bc 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:75:4c,bridge_name='br-int',has_traffic_filtering=True,id=3c995553-cbd9-4c64-8558-dee5d872f6ac,network=Network(6d00de8e-203c-4e94-b60f-36ba9ccef805),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c995553-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:13:15 compute-0 nova_compute[257802]: 2025-10-02 12:13:15.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:15 compute-0 nova_compute[257802]: 2025-10-02 12:13:15.615 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c995553-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:15 compute-0 nova_compute[257802]: 2025-10-02 12:13:15.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:15 compute-0 nova_compute[257802]: 2025-10-02 12:13:15.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:13:15 compute-0 nova_compute[257802]: 2025-10-02 12:13:15.621 2 INFO os_vif [None req-a438b754-1064-49dc-b8c6-5b9ef57f65bc 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:75:4c,bridge_name='br-int',has_traffic_filtering=True,id=3c995553-cbd9-4c64-8558-dee5d872f6ac,network=Network(6d00de8e-203c-4e94-b60f-36ba9ccef805),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c995553-cb')
Oct 02 12:13:15 compute-0 neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805[294239]: [NOTICE]   (294243) : haproxy version is 2.8.14-c23fe91
Oct 02 12:13:15 compute-0 neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805[294239]: [NOTICE]   (294243) : path to executable is /usr/sbin/haproxy
Oct 02 12:13:15 compute-0 neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805[294239]: [WARNING]  (294243) : Exiting Master process...
Oct 02 12:13:15 compute-0 neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805[294239]: [ALERT]    (294243) : Current worker (294245) exited with code 143 (Terminated)
Oct 02 12:13:15 compute-0 neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805[294239]: [WARNING]  (294243) : All workers exited. Exiting... (0)
Oct 02 12:13:15 compute-0 systemd[1]: libpod-1102c333f8c17a29d0271f393d0d5e2fee2d8d8b551269ee8558af9f4f8555b9.scope: Deactivated successfully.
Oct 02 12:13:15 compute-0 podman[294327]: 2025-10-02 12:13:15.884099659 +0000 UTC m=+0.365115569 container died 1102c333f8c17a29d0271f393d0d5e2fee2d8d8b551269ee8558af9f4f8555b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:13:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:15.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1102c333f8c17a29d0271f393d0d5e2fee2d8d8b551269ee8558af9f4f8555b9-userdata-shm.mount: Deactivated successfully.
Oct 02 12:13:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-18bbc3d5af56f4be8c24799b920b5a94eb786d1c2985ec71cabcc7febc04327c-merged.mount: Deactivated successfully.
Oct 02 12:13:15 compute-0 podman[294327]: 2025-10-02 12:13:15.980090615 +0000 UTC m=+0.461106545 container cleanup 1102c333f8c17a29d0271f393d0d5e2fee2d8d8b551269ee8558af9f4f8555b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:13:15 compute-0 systemd[1]: libpod-conmon-1102c333f8c17a29d0271f393d0d5e2fee2d8d8b551269ee8558af9f4f8555b9.scope: Deactivated successfully.
Oct 02 12:13:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:13:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:16.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:13:16 compute-0 podman[294388]: 2025-10-02 12:13:16.072713249 +0000 UTC m=+0.059346659 container remove 1102c333f8c17a29d0271f393d0d5e2fee2d8d8b551269ee8558af9f4f8555b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 02 12:13:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:16.080 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[116cdbc2-d1fb-4d52-b04b-77ec0dde31a3]: (4, ('Thu Oct  2 12:13:15 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805 (1102c333f8c17a29d0271f393d0d5e2fee2d8d8b551269ee8558af9f4f8555b9)\n1102c333f8c17a29d0271f393d0d5e2fee2d8d8b551269ee8558af9f4f8555b9\nThu Oct  2 12:13:15 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805 (1102c333f8c17a29d0271f393d0d5e2fee2d8d8b551269ee8558af9f4f8555b9)\n1102c333f8c17a29d0271f393d0d5e2fee2d8d8b551269ee8558af9f4f8555b9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:16.083 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5497dfc5-b548-4ad7-8ed8-d605232c17bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:16.085 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d00de8e-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:16 compute-0 nova_compute[257802]: 2025-10-02 12:13:16.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:16 compute-0 kernel: tap6d00de8e-20: left promiscuous mode
Oct 02 12:13:16 compute-0 nova_compute[257802]: 2025-10-02 12:13:16.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:16.114 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1dc468e6-4e8d-4d35-bb85-456e04fb10a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:16.144 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2075a203-41a5-47ba-badc-d0e0112c8735]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:16.146 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c958cdd6-3816-4f51-8e11-cb3b622c8e05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:16.174 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[769ef578-8a89-40dd-96eb-e58383821b44]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526062, 'reachable_time': 34464, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294403, 'error': None, 'target': 'ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:16 compute-0 systemd[1]: run-netns-ovnmeta\x2d6d00de8e\x2d203c\x2d4e94\x2db60f\x2d36ba9ccef805.mount: Deactivated successfully.
Oct 02 12:13:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:16.177 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:13:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:16.178 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[fc05963d-b459-40d7-a66e-d2a7661b5624]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:16 compute-0 nova_compute[257802]: 2025-10-02 12:13:16.227 2 INFO nova.virt.libvirt.driver [None req-a438b754-1064-49dc-b8c6-5b9ef57f65bc 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Deleting instance files /var/lib/nova/instances/c5000eb8-3b94-4233-a49b-079f204f3543_del
Oct 02 12:13:16 compute-0 nova_compute[257802]: 2025-10-02 12:13:16.227 2 INFO nova.virt.libvirt.driver [None req-a438b754-1064-49dc-b8c6-5b9ef57f65bc 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Deletion of /var/lib/nova/instances/c5000eb8-3b94-4233-a49b-079f204f3543_del complete
Oct 02 12:13:16 compute-0 nova_compute[257802]: 2025-10-02 12:13:16.283 2 INFO nova.compute.manager [None req-a438b754-1064-49dc-b8c6-5b9ef57f65bc 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Took 0.93 seconds to destroy the instance on the hypervisor.
Oct 02 12:13:16 compute-0 nova_compute[257802]: 2025-10-02 12:13:16.283 2 DEBUG oslo.service.loopingcall [None req-a438b754-1064-49dc-b8c6-5b9ef57f65bc 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:13:16 compute-0 nova_compute[257802]: 2025-10-02 12:13:16.284 2 DEBUG nova.compute.manager [-] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:13:16 compute-0 nova_compute[257802]: 2025-10-02 12:13:16.284 2 DEBUG nova.network.neutron [-] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:13:16 compute-0 nova_compute[257802]: 2025-10-02 12:13:16.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:16 compute-0 nova_compute[257802]: 2025-10-02 12:13:16.834 2 DEBUG oslo_concurrency.lockutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Acquiring lock "20f3c85c-395d-4603-b03a-6625d537159d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:16 compute-0 nova_compute[257802]: 2025-10-02 12:13:16.835 2 DEBUG oslo_concurrency.lockutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:16 compute-0 nova_compute[257802]: 2025-10-02 12:13:16.842 2 DEBUG nova.network.neutron [-] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:13:16 compute-0 nova_compute[257802]: 2025-10-02 12:13:16.855 2 DEBUG nova.compute.manager [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:13:16 compute-0 nova_compute[257802]: 2025-10-02 12:13:16.864 2 INFO nova.compute.manager [-] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Took 0.58 seconds to deallocate network for instance.
Oct 02 12:13:16 compute-0 nova_compute[257802]: 2025-10-02 12:13:16.918 2 DEBUG nova.compute.manager [req-53a3309f-f7b2-4930-8c2b-5f3bac4a95e9 req-07c0bb6e-7fd3-457e-915c-b926a3270328 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Received event network-vif-deleted-3c995553-cbd9-4c64-8558-dee5d872f6ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:16 compute-0 nova_compute[257802]: 2025-10-02 12:13:16.932 2 DEBUG oslo_concurrency.lockutils [None req-a438b754-1064-49dc-b8c6-5b9ef57f65bc 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:16 compute-0 nova_compute[257802]: 2025-10-02 12:13:16.933 2 DEBUG oslo_concurrency.lockutils [None req-a438b754-1064-49dc-b8c6-5b9ef57f65bc 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:16 compute-0 nova_compute[257802]: 2025-10-02 12:13:16.950 2 DEBUG oslo_concurrency.lockutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:16 compute-0 ceph-mon[73607]: pgmap v1439: 305 pgs: 305 active+clean; 214 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 141 op/s
Oct 02 12:13:16 compute-0 nova_compute[257802]: 2025-10-02 12:13:16.994 2 DEBUG oslo_concurrency.processutils [None req-a438b754-1064-49dc-b8c6-5b9ef57f65bc 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 214 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 975 KiB/s wr, 166 op/s
Oct 02 12:13:17 compute-0 nova_compute[257802]: 2025-10-02 12:13:17.422 2 DEBUG nova.compute.manager [req-7c6f5a57-1b44-43b2-943e-2ecd4c59f59d req-d22b0a94-806b-4eed-b02e-e4be49d0c287 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Received event network-vif-unplugged-3c995553-cbd9-4c64-8558-dee5d872f6ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:17 compute-0 nova_compute[257802]: 2025-10-02 12:13:17.423 2 DEBUG oslo_concurrency.lockutils [req-7c6f5a57-1b44-43b2-943e-2ecd4c59f59d req-d22b0a94-806b-4eed-b02e-e4be49d0c287 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "c5000eb8-3b94-4233-a49b-079f204f3543-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:17 compute-0 nova_compute[257802]: 2025-10-02 12:13:17.423 2 DEBUG oslo_concurrency.lockutils [req-7c6f5a57-1b44-43b2-943e-2ecd4c59f59d req-d22b0a94-806b-4eed-b02e-e4be49d0c287 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c5000eb8-3b94-4233-a49b-079f204f3543-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:17 compute-0 nova_compute[257802]: 2025-10-02 12:13:17.424 2 DEBUG oslo_concurrency.lockutils [req-7c6f5a57-1b44-43b2-943e-2ecd4c59f59d req-d22b0a94-806b-4eed-b02e-e4be49d0c287 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c5000eb8-3b94-4233-a49b-079f204f3543-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:17 compute-0 nova_compute[257802]: 2025-10-02 12:13:17.424 2 DEBUG nova.compute.manager [req-7c6f5a57-1b44-43b2-943e-2ecd4c59f59d req-d22b0a94-806b-4eed-b02e-e4be49d0c287 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] No waiting events found dispatching network-vif-unplugged-3c995553-cbd9-4c64-8558-dee5d872f6ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:13:17 compute-0 nova_compute[257802]: 2025-10-02 12:13:17.424 2 WARNING nova.compute.manager [req-7c6f5a57-1b44-43b2-943e-2ecd4c59f59d req-d22b0a94-806b-4eed-b02e-e4be49d0c287 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Received unexpected event network-vif-unplugged-3c995553-cbd9-4c64-8558-dee5d872f6ac for instance with vm_state deleted and task_state None.
Oct 02 12:13:17 compute-0 nova_compute[257802]: 2025-10-02 12:13:17.424 2 DEBUG nova.compute.manager [req-7c6f5a57-1b44-43b2-943e-2ecd4c59f59d req-d22b0a94-806b-4eed-b02e-e4be49d0c287 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Received event network-vif-plugged-3c995553-cbd9-4c64-8558-dee5d872f6ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:17 compute-0 nova_compute[257802]: 2025-10-02 12:13:17.425 2 DEBUG oslo_concurrency.lockutils [req-7c6f5a57-1b44-43b2-943e-2ecd4c59f59d req-d22b0a94-806b-4eed-b02e-e4be49d0c287 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "c5000eb8-3b94-4233-a49b-079f204f3543-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:17 compute-0 nova_compute[257802]: 2025-10-02 12:13:17.425 2 DEBUG oslo_concurrency.lockutils [req-7c6f5a57-1b44-43b2-943e-2ecd4c59f59d req-d22b0a94-806b-4eed-b02e-e4be49d0c287 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c5000eb8-3b94-4233-a49b-079f204f3543-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:17 compute-0 nova_compute[257802]: 2025-10-02 12:13:17.425 2 DEBUG oslo_concurrency.lockutils [req-7c6f5a57-1b44-43b2-943e-2ecd4c59f59d req-d22b0a94-806b-4eed-b02e-e4be49d0c287 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c5000eb8-3b94-4233-a49b-079f204f3543-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:17 compute-0 nova_compute[257802]: 2025-10-02 12:13:17.426 2 DEBUG nova.compute.manager [req-7c6f5a57-1b44-43b2-943e-2ecd4c59f59d req-d22b0a94-806b-4eed-b02e-e4be49d0c287 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] No waiting events found dispatching network-vif-plugged-3c995553-cbd9-4c64-8558-dee5d872f6ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:13:17 compute-0 nova_compute[257802]: 2025-10-02 12:13:17.426 2 WARNING nova.compute.manager [req-7c6f5a57-1b44-43b2-943e-2ecd4c59f59d req-d22b0a94-806b-4eed-b02e-e4be49d0c287 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Received unexpected event network-vif-plugged-3c995553-cbd9-4c64-8558-dee5d872f6ac for instance with vm_state deleted and task_state None.
Oct 02 12:13:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:13:17 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1471379931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:17 compute-0 nova_compute[257802]: 2025-10-02 12:13:17.462 2 DEBUG oslo_concurrency.processutils [None req-a438b754-1064-49dc-b8c6-5b9ef57f65bc 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:17 compute-0 nova_compute[257802]: 2025-10-02 12:13:17.468 2 DEBUG nova.compute.provider_tree [None req-a438b754-1064-49dc-b8c6-5b9ef57f65bc 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:13:17 compute-0 nova_compute[257802]: 2025-10-02 12:13:17.486 2 DEBUG nova.scheduler.client.report [None req-a438b754-1064-49dc-b8c6-5b9ef57f65bc 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:13:17 compute-0 nova_compute[257802]: 2025-10-02 12:13:17.509 2 DEBUG oslo_concurrency.lockutils [None req-a438b754-1064-49dc-b8c6-5b9ef57f65bc 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:17 compute-0 nova_compute[257802]: 2025-10-02 12:13:17.512 2 DEBUG oslo_concurrency.lockutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:17 compute-0 nova_compute[257802]: 2025-10-02 12:13:17.518 2 DEBUG nova.virt.hardware [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:13:17 compute-0 nova_compute[257802]: 2025-10-02 12:13:17.519 2 INFO nova.compute.claims [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:13:17 compute-0 nova_compute[257802]: 2025-10-02 12:13:17.534 2 INFO nova.scheduler.client.report [None req-a438b754-1064-49dc-b8c6-5b9ef57f65bc 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Deleted allocations for instance c5000eb8-3b94-4233-a49b-079f204f3543
Oct 02 12:13:17 compute-0 nova_compute[257802]: 2025-10-02 12:13:17.605 2 DEBUG oslo_concurrency.lockutils [None req-a438b754-1064-49dc-b8c6-5b9ef57f65bc 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "c5000eb8-3b94-4233-a49b-079f204f3543" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.251s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:17 compute-0 nova_compute[257802]: 2025-10-02 12:13:17.642 2 DEBUG oslo_concurrency.processutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:17.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:17 compute-0 podman[294450]: 2025-10-02 12:13:17.939528298 +0000 UTC m=+0.072911578 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=iscsid)
Oct 02 12:13:17 compute-0 podman[294449]: 2025-10-02 12:13:17.944436106 +0000 UTC m=+0.077818746 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible)
Oct 02 12:13:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1471379931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:18.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:13:18 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2287781459' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:18 compute-0 nova_compute[257802]: 2025-10-02 12:13:18.207 2 DEBUG oslo_concurrency.processutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:18 compute-0 nova_compute[257802]: 2025-10-02 12:13:18.213 2 DEBUG nova.compute.provider_tree [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:13:18 compute-0 nova_compute[257802]: 2025-10-02 12:13:18.231 2 DEBUG nova.scheduler.client.report [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:13:18 compute-0 nova_compute[257802]: 2025-10-02 12:13:18.254 2 DEBUG oslo_concurrency.lockutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:18 compute-0 nova_compute[257802]: 2025-10-02 12:13:18.256 2 DEBUG nova.compute.manager [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:13:18 compute-0 nova_compute[257802]: 2025-10-02 12:13:18.302 2 DEBUG nova.compute.manager [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:13:18 compute-0 nova_compute[257802]: 2025-10-02 12:13:18.303 2 DEBUG nova.network.neutron [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:13:18 compute-0 nova_compute[257802]: 2025-10-02 12:13:18.330 2 INFO nova.virt.libvirt.driver [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:13:18 compute-0 nova_compute[257802]: 2025-10-02 12:13:18.356 2 DEBUG nova.compute.manager [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:13:18 compute-0 nova_compute[257802]: 2025-10-02 12:13:18.434 2 DEBUG nova.compute.manager [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:13:18 compute-0 nova_compute[257802]: 2025-10-02 12:13:18.436 2 DEBUG nova.virt.libvirt.driver [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:13:18 compute-0 nova_compute[257802]: 2025-10-02 12:13:18.437 2 INFO nova.virt.libvirt.driver [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Creating image(s)
Oct 02 12:13:18 compute-0 nova_compute[257802]: 2025-10-02 12:13:18.472 2 DEBUG nova.storage.rbd_utils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] rbd image 20f3c85c-395d-4603-b03a-6625d537159d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:18 compute-0 nova_compute[257802]: 2025-10-02 12:13:18.502 2 DEBUG nova.storage.rbd_utils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] rbd image 20f3c85c-395d-4603-b03a-6625d537159d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:18 compute-0 nova_compute[257802]: 2025-10-02 12:13:18.532 2 DEBUG nova.storage.rbd_utils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] rbd image 20f3c85c-395d-4603-b03a-6625d537159d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:18 compute-0 nova_compute[257802]: 2025-10-02 12:13:18.539 2 DEBUG oslo_concurrency.processutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:18 compute-0 nova_compute[257802]: 2025-10-02 12:13:18.618 2 DEBUG oslo_concurrency.processutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:18 compute-0 nova_compute[257802]: 2025-10-02 12:13:18.619 2 DEBUG oslo_concurrency.lockutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:18 compute-0 nova_compute[257802]: 2025-10-02 12:13:18.620 2 DEBUG oslo_concurrency.lockutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:18 compute-0 nova_compute[257802]: 2025-10-02 12:13:18.621 2 DEBUG oslo_concurrency.lockutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:18 compute-0 nova_compute[257802]: 2025-10-02 12:13:18.651 2 DEBUG nova.storage.rbd_utils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] rbd image 20f3c85c-395d-4603-b03a-6625d537159d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:18 compute-0 nova_compute[257802]: 2025-10-02 12:13:18.657 2 DEBUG oslo_concurrency.processutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 20f3c85c-395d-4603-b03a-6625d537159d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:18 compute-0 nova_compute[257802]: 2025-10-02 12:13:18.869 2 DEBUG nova.policy [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6529c3301c674317b5af53daaa0ee15a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7f7fda5b2b6844ad82818a60ed39c0b0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:13:18 compute-0 nova_compute[257802]: 2025-10-02 12:13:18.992 2 DEBUG oslo_concurrency.processutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 20f3c85c-395d-4603-b03a-6625d537159d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.335s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:19 compute-0 ceph-mon[73607]: pgmap v1440: 305 pgs: 305 active+clean; 214 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 975 KiB/s wr, 166 op/s
Oct 02 12:13:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2287781459' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Oct 02 12:13:19 compute-0 nova_compute[257802]: 2025-10-02 12:13:19.099 2 DEBUG nova.storage.rbd_utils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] resizing rbd image 20f3c85c-395d-4603-b03a-6625d537159d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:13:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Oct 02 12:13:19 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Oct 02 12:13:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 228 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 608 KiB/s wr, 205 op/s
Oct 02 12:13:19 compute-0 nova_compute[257802]: 2025-10-02 12:13:19.228 2 DEBUG nova.objects.instance [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Lazy-loading 'migration_context' on Instance uuid 20f3c85c-395d-4603-b03a-6625d537159d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:13:19 compute-0 nova_compute[257802]: 2025-10-02 12:13:19.243 2 DEBUG nova.virt.libvirt.driver [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:13:19 compute-0 nova_compute[257802]: 2025-10-02 12:13:19.243 2 DEBUG nova.virt.libvirt.driver [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Ensure instance console log exists: /var/lib/nova/instances/20f3c85c-395d-4603-b03a-6625d537159d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:13:19 compute-0 nova_compute[257802]: 2025-10-02 12:13:19.243 2 DEBUG oslo_concurrency.lockutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:19 compute-0 nova_compute[257802]: 2025-10-02 12:13:19.244 2 DEBUG oslo_concurrency.lockutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:19 compute-0 nova_compute[257802]: 2025-10-02 12:13:19.244 2 DEBUG oslo_concurrency.lockutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:19.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:19 compute-0 nova_compute[257802]: 2025-10-02 12:13:19.933 2 DEBUG nova.network.neutron [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Successfully created port: b45fb4b5-77cd-42f8-91b9-473e20188e70 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:13:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:13:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:20.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:13:20 compute-0 ceph-mon[73607]: osdmap e201: 3 total, 3 up, 3 in
Oct 02 12:13:20 compute-0 ceph-mon[73607]: pgmap v1442: 305 pgs: 305 active+clean; 228 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 608 KiB/s wr, 205 op/s
Oct 02 12:13:20 compute-0 nova_compute[257802]: 2025-10-02 12:13:20.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:20 compute-0 nova_compute[257802]: 2025-10-02 12:13:20.932 2 DEBUG nova.network.neutron [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Successfully updated port: b45fb4b5-77cd-42f8-91b9-473e20188e70 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:13:20 compute-0 nova_compute[257802]: 2025-10-02 12:13:20.952 2 DEBUG oslo_concurrency.lockutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Acquiring lock "refresh_cache-20f3c85c-395d-4603-b03a-6625d537159d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:13:20 compute-0 nova_compute[257802]: 2025-10-02 12:13:20.952 2 DEBUG oslo_concurrency.lockutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Acquired lock "refresh_cache-20f3c85c-395d-4603-b03a-6625d537159d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:13:20 compute-0 nova_compute[257802]: 2025-10-02 12:13:20.953 2 DEBUG nova.network.neutron [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:13:21 compute-0 nova_compute[257802]: 2025-10-02 12:13:21.067 2 DEBUG nova.compute.manager [req-8dfb3821-9b67-4cd0-aee8-fd60f160639a req-69988002-d18e-4644-9742-7d890ef5b29b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Received event network-changed-b45fb4b5-77cd-42f8-91b9-473e20188e70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:21 compute-0 nova_compute[257802]: 2025-10-02 12:13:21.067 2 DEBUG nova.compute.manager [req-8dfb3821-9b67-4cd0-aee8-fd60f160639a req-69988002-d18e-4644-9742-7d890ef5b29b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Refreshing instance network info cache due to event network-changed-b45fb4b5-77cd-42f8-91b9-473e20188e70. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:13:21 compute-0 nova_compute[257802]: 2025-10-02 12:13:21.068 2 DEBUG oslo_concurrency.lockutils [req-8dfb3821-9b67-4cd0-aee8-fd60f160639a req-69988002-d18e-4644-9742-7d890ef5b29b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-20f3c85c-395d-4603-b03a-6625d537159d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:13:21 compute-0 nova_compute[257802]: 2025-10-02 12:13:21.178 2 DEBUG nova.network.neutron [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:13:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 210 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 634 KiB/s wr, 214 op/s
Oct 02 12:13:21 compute-0 nova_compute[257802]: 2025-10-02 12:13:21.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:13:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:21.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:13:21 compute-0 podman[294659]: 2025-10-02 12:13:21.960762211 +0000 UTC m=+0.100232590 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:13:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:22.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:22 compute-0 nova_compute[257802]: 2025-10-02 12:13:22.128 2 DEBUG nova.network.neutron [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Updating instance_info_cache with network_info: [{"id": "b45fb4b5-77cd-42f8-91b9-473e20188e70", "address": "fa:16:3e:46:cb:79", "network": {"id": "5e557191-367e-4f3a-8238-d1a8a649138a", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1604492819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f7fda5b2b6844ad82818a60ed39c0b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb45fb4b5-77", "ovs_interfaceid": "b45fb4b5-77cd-42f8-91b9-473e20188e70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:13:22 compute-0 nova_compute[257802]: 2025-10-02 12:13:22.165 2 DEBUG oslo_concurrency.lockutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Releasing lock "refresh_cache-20f3c85c-395d-4603-b03a-6625d537159d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:13:22 compute-0 nova_compute[257802]: 2025-10-02 12:13:22.165 2 DEBUG nova.compute.manager [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Instance network_info: |[{"id": "b45fb4b5-77cd-42f8-91b9-473e20188e70", "address": "fa:16:3e:46:cb:79", "network": {"id": "5e557191-367e-4f3a-8238-d1a8a649138a", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1604492819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f7fda5b2b6844ad82818a60ed39c0b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb45fb4b5-77", "ovs_interfaceid": "b45fb4b5-77cd-42f8-91b9-473e20188e70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:13:22 compute-0 nova_compute[257802]: 2025-10-02 12:13:22.166 2 DEBUG oslo_concurrency.lockutils [req-8dfb3821-9b67-4cd0-aee8-fd60f160639a req-69988002-d18e-4644-9742-7d890ef5b29b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-20f3c85c-395d-4603-b03a-6625d537159d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:13:22 compute-0 nova_compute[257802]: 2025-10-02 12:13:22.166 2 DEBUG nova.network.neutron [req-8dfb3821-9b67-4cd0-aee8-fd60f160639a req-69988002-d18e-4644-9742-7d890ef5b29b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Refreshing network info cache for port b45fb4b5-77cd-42f8-91b9-473e20188e70 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:13:22 compute-0 nova_compute[257802]: 2025-10-02 12:13:22.172 2 DEBUG nova.virt.libvirt.driver [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Start _get_guest_xml network_info=[{"id": "b45fb4b5-77cd-42f8-91b9-473e20188e70", "address": "fa:16:3e:46:cb:79", "network": {"id": "5e557191-367e-4f3a-8238-d1a8a649138a", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1604492819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f7fda5b2b6844ad82818a60ed39c0b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb45fb4b5-77", "ovs_interfaceid": "b45fb4b5-77cd-42f8-91b9-473e20188e70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:13:22 compute-0 nova_compute[257802]: 2025-10-02 12:13:22.179 2 WARNING nova.virt.libvirt.driver [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:13:22 compute-0 nova_compute[257802]: 2025-10-02 12:13:22.185 2 DEBUG nova.virt.libvirt.host [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:13:22 compute-0 nova_compute[257802]: 2025-10-02 12:13:22.186 2 DEBUG nova.virt.libvirt.host [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:13:22 compute-0 nova_compute[257802]: 2025-10-02 12:13:22.194 2 DEBUG nova.virt.libvirt.host [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:13:22 compute-0 nova_compute[257802]: 2025-10-02 12:13:22.195 2 DEBUG nova.virt.libvirt.host [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:13:22 compute-0 nova_compute[257802]: 2025-10-02 12:13:22.197 2 DEBUG nova.virt.libvirt.driver [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:13:22 compute-0 nova_compute[257802]: 2025-10-02 12:13:22.197 2 DEBUG nova.virt.hardware [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:13:22 compute-0 nova_compute[257802]: 2025-10-02 12:13:22.198 2 DEBUG nova.virt.hardware [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:13:22 compute-0 nova_compute[257802]: 2025-10-02 12:13:22.199 2 DEBUG nova.virt.hardware [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:13:22 compute-0 nova_compute[257802]: 2025-10-02 12:13:22.199 2 DEBUG nova.virt.hardware [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:13:22 compute-0 nova_compute[257802]: 2025-10-02 12:13:22.200 2 DEBUG nova.virt.hardware [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:13:22 compute-0 nova_compute[257802]: 2025-10-02 12:13:22.200 2 DEBUG nova.virt.hardware [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:13:22 compute-0 nova_compute[257802]: 2025-10-02 12:13:22.200 2 DEBUG nova.virt.hardware [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:13:22 compute-0 nova_compute[257802]: 2025-10-02 12:13:22.201 2 DEBUG nova.virt.hardware [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:13:22 compute-0 nova_compute[257802]: 2025-10-02 12:13:22.201 2 DEBUG nova.virt.hardware [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:13:22 compute-0 nova_compute[257802]: 2025-10-02 12:13:22.202 2 DEBUG nova.virt.hardware [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:13:22 compute-0 nova_compute[257802]: 2025-10-02 12:13:22.202 2 DEBUG nova.virt.hardware [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:13:22 compute-0 nova_compute[257802]: 2025-10-02 12:13:22.207 2 DEBUG oslo_concurrency.processutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:13:22 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2878227164' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:22 compute-0 nova_compute[257802]: 2025-10-02 12:13:22.684 2 DEBUG oslo_concurrency.processutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:22 compute-0 nova_compute[257802]: 2025-10-02 12:13:22.732 2 DEBUG nova.storage.rbd_utils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] rbd image 20f3c85c-395d-4603-b03a-6625d537159d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:22 compute-0 nova_compute[257802]: 2025-10-02 12:13:22.736 2 DEBUG oslo_concurrency.processutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:22 compute-0 ceph-mon[73607]: pgmap v1443: 305 pgs: 305 active+clean; 210 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 634 KiB/s wr, 214 op/s
Oct 02 12:13:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/746776297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2878227164' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:13:23 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/659249451' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.181 2 DEBUG oslo_concurrency.processutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.183 2 DEBUG nova.virt.libvirt.vif [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:13:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1269708573',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1269708573',id=58,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7f7fda5b2b6844ad82818a60ed39c0b0',ramdisk_id='',reservation_id='r-0wtd0h88',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1440782177',owner_user_name='tempest-AttachInterfacesV270Test-1440782177-project-member'},tags=TagLi
st,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:13:18Z,user_data=None,user_id='6529c3301c674317b5af53daaa0ee15a',uuid=20f3c85c-395d-4603-b03a-6625d537159d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b45fb4b5-77cd-42f8-91b9-473e20188e70", "address": "fa:16:3e:46:cb:79", "network": {"id": "5e557191-367e-4f3a-8238-d1a8a649138a", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1604492819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f7fda5b2b6844ad82818a60ed39c0b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb45fb4b5-77", "ovs_interfaceid": "b45fb4b5-77cd-42f8-91b9-473e20188e70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.183 2 DEBUG nova.network.os_vif_util [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Converting VIF {"id": "b45fb4b5-77cd-42f8-91b9-473e20188e70", "address": "fa:16:3e:46:cb:79", "network": {"id": "5e557191-367e-4f3a-8238-d1a8a649138a", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1604492819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f7fda5b2b6844ad82818a60ed39c0b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb45fb4b5-77", "ovs_interfaceid": "b45fb4b5-77cd-42f8-91b9-473e20188e70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.184 2 DEBUG nova.network.os_vif_util [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:cb:79,bridge_name='br-int',has_traffic_filtering=True,id=b45fb4b5-77cd-42f8-91b9-473e20188e70,network=Network(5e557191-367e-4f3a-8238-d1a8a649138a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb45fb4b5-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.186 2 DEBUG nova.objects.instance [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 20f3c85c-395d-4603-b03a-6625d537159d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:13:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 191 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.9 MiB/s wr, 232 op/s
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.209 2 DEBUG nova.virt.libvirt.driver [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:13:23 compute-0 nova_compute[257802]:   <uuid>20f3c85c-395d-4603-b03a-6625d537159d</uuid>
Oct 02 12:13:23 compute-0 nova_compute[257802]:   <name>instance-0000003a</name>
Oct 02 12:13:23 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:13:23 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:13:23 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:13:23 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:       <nova:name>tempest-AttachInterfacesV270Test-server-1269708573</nova:name>
Oct 02 12:13:23 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:13:22</nova:creationTime>
Oct 02 12:13:23 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:13:23 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:13:23 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:13:23 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:13:23 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:13:23 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:13:23 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:13:23 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:13:23 compute-0 nova_compute[257802]:         <nova:user uuid="6529c3301c674317b5af53daaa0ee15a">tempest-AttachInterfacesV270Test-1440782177-project-member</nova:user>
Oct 02 12:13:23 compute-0 nova_compute[257802]:         <nova:project uuid="7f7fda5b2b6844ad82818a60ed39c0b0">tempest-AttachInterfacesV270Test-1440782177</nova:project>
Oct 02 12:13:23 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:13:23 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:13:23 compute-0 nova_compute[257802]:         <nova:port uuid="b45fb4b5-77cd-42f8-91b9-473e20188e70">
Oct 02 12:13:23 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:13:23 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:13:23 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:13:23 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <system>
Oct 02 12:13:23 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:13:23 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:13:23 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:13:23 compute-0 nova_compute[257802]:       <entry name="serial">20f3c85c-395d-4603-b03a-6625d537159d</entry>
Oct 02 12:13:23 compute-0 nova_compute[257802]:       <entry name="uuid">20f3c85c-395d-4603-b03a-6625d537159d</entry>
Oct 02 12:13:23 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     </system>
Oct 02 12:13:23 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:13:23 compute-0 nova_compute[257802]:   <os>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:   </os>
Oct 02 12:13:23 compute-0 nova_compute[257802]:   <features>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:   </features>
Oct 02 12:13:23 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:13:23 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:13:23 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:13:23 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/20f3c85c-395d-4603-b03a-6625d537159d_disk">
Oct 02 12:13:23 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:       </source>
Oct 02 12:13:23 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:13:23 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:13:23 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:13:23 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/20f3c85c-395d-4603-b03a-6625d537159d_disk.config">
Oct 02 12:13:23 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:       </source>
Oct 02 12:13:23 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:13:23 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:13:23 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:13:23 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:46:cb:79"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:       <target dev="tapb45fb4b5-77"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:13:23 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/20f3c85c-395d-4603-b03a-6625d537159d/console.log" append="off"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <video>
Oct 02 12:13:23 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     </video>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:13:23 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:13:23 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:13:23 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:13:23 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:13:23 compute-0 nova_compute[257802]: </domain>
Oct 02 12:13:23 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.211 2 DEBUG nova.compute.manager [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Preparing to wait for external event network-vif-plugged-b45fb4b5-77cd-42f8-91b9-473e20188e70 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.212 2 DEBUG oslo_concurrency.lockutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Acquiring lock "20f3c85c-395d-4603-b03a-6625d537159d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.212 2 DEBUG oslo_concurrency.lockutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.212 2 DEBUG oslo_concurrency.lockutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.213 2 DEBUG nova.virt.libvirt.vif [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:13:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1269708573',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1269708573',id=58,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7f7fda5b2b6844ad82818a60ed39c0b0',ramdisk_id='',reservation_id='r-0wtd0h88',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1440782177',owner_user_name='tempest-AttachInterfacesV270Test-1440782177-project-member'},
tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:13:18Z,user_data=None,user_id='6529c3301c674317b5af53daaa0ee15a',uuid=20f3c85c-395d-4603-b03a-6625d537159d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b45fb4b5-77cd-42f8-91b9-473e20188e70", "address": "fa:16:3e:46:cb:79", "network": {"id": "5e557191-367e-4f3a-8238-d1a8a649138a", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1604492819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f7fda5b2b6844ad82818a60ed39c0b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb45fb4b5-77", "ovs_interfaceid": "b45fb4b5-77cd-42f8-91b9-473e20188e70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.214 2 DEBUG nova.network.os_vif_util [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Converting VIF {"id": "b45fb4b5-77cd-42f8-91b9-473e20188e70", "address": "fa:16:3e:46:cb:79", "network": {"id": "5e557191-367e-4f3a-8238-d1a8a649138a", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1604492819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f7fda5b2b6844ad82818a60ed39c0b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb45fb4b5-77", "ovs_interfaceid": "b45fb4b5-77cd-42f8-91b9-473e20188e70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.214 2 DEBUG nova.network.os_vif_util [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:cb:79,bridge_name='br-int',has_traffic_filtering=True,id=b45fb4b5-77cd-42f8-91b9-473e20188e70,network=Network(5e557191-367e-4f3a-8238-d1a8a649138a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb45fb4b5-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.215 2 DEBUG os_vif [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:cb:79,bridge_name='br-int',has_traffic_filtering=True,id=b45fb4b5-77cd-42f8-91b9-473e20188e70,network=Network(5e557191-367e-4f3a-8238-d1a8a649138a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb45fb4b5-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.220 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.220 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.226 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb45fb4b5-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.227 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb45fb4b5-77, col_values=(('external_ids', {'iface-id': 'b45fb4b5-77cd-42f8-91b9-473e20188e70', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:46:cb:79', 'vm-uuid': '20f3c85c-395d-4603-b03a-6625d537159d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:23 compute-0 NetworkManager[44987]: <info>  [1759407203.2323] manager: (tapb45fb4b5-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/99)
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.243 2 INFO os_vif [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:cb:79,bridge_name='br-int',has_traffic_filtering=True,id=b45fb4b5-77cd-42f8-91b9-473e20188e70,network=Network(5e557191-367e-4f3a-8238-d1a8a649138a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb45fb4b5-77')
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.314 2 DEBUG nova.virt.libvirt.driver [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.314 2 DEBUG nova.virt.libvirt.driver [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.315 2 DEBUG nova.virt.libvirt.driver [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] No VIF found with MAC fa:16:3e:46:cb:79, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.315 2 INFO nova.virt.libvirt.driver [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Using config drive
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.338 2 DEBUG nova.storage.rbd_utils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] rbd image 20f3c85c-395d-4603-b03a-6625d537159d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.794 2 DEBUG nova.network.neutron [req-8dfb3821-9b67-4cd0-aee8-fd60f160639a req-69988002-d18e-4644-9742-7d890ef5b29b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Updated VIF entry in instance network info cache for port b45fb4b5-77cd-42f8-91b9-473e20188e70. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.795 2 DEBUG nova.network.neutron [req-8dfb3821-9b67-4cd0-aee8-fd60f160639a req-69988002-d18e-4644-9742-7d890ef5b29b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Updating instance_info_cache with network_info: [{"id": "b45fb4b5-77cd-42f8-91b9-473e20188e70", "address": "fa:16:3e:46:cb:79", "network": {"id": "5e557191-367e-4f3a-8238-d1a8a649138a", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1604492819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f7fda5b2b6844ad82818a60ed39c0b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb45fb4b5-77", "ovs_interfaceid": "b45fb4b5-77cd-42f8-91b9-473e20188e70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.815 2 DEBUG oslo_concurrency.lockutils [req-8dfb3821-9b67-4cd0-aee8-fd60f160639a req-69988002-d18e-4644-9742-7d890ef5b29b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-20f3c85c-395d-4603-b03a-6625d537159d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.879 2 INFO nova.virt.libvirt.driver [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Creating config drive at /var/lib/nova/instances/20f3c85c-395d-4603-b03a-6625d537159d/disk.config
Oct 02 12:13:23 compute-0 nova_compute[257802]: 2025-10-02 12:13:23.883 2 DEBUG oslo_concurrency.processutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/20f3c85c-395d-4603-b03a-6625d537159d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi0x4js4k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:23.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/659249451' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/984207681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:24 compute-0 nova_compute[257802]: 2025-10-02 12:13:24.035 2 DEBUG oslo_concurrency.processutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/20f3c85c-395d-4603-b03a-6625d537159d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi0x4js4k" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:24 compute-0 nova_compute[257802]: 2025-10-02 12:13:24.080 2 DEBUG nova.storage.rbd_utils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] rbd image 20f3c85c-395d-4603-b03a-6625d537159d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:24.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:24 compute-0 nova_compute[257802]: 2025-10-02 12:13:24.085 2 DEBUG oslo_concurrency.processutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/20f3c85c-395d-4603-b03a-6625d537159d/disk.config 20f3c85c-395d-4603-b03a-6625d537159d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:25 compute-0 ceph-mon[73607]: pgmap v1444: 305 pgs: 305 active+clean; 191 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.9 MiB/s wr, 232 op/s
Oct 02 12:13:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 160 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.3 MiB/s wr, 229 op/s
Oct 02 12:13:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:25.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:13:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:26.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:13:26 compute-0 nova_compute[257802]: 2025-10-02 12:13:26.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:26 compute-0 nova_compute[257802]: 2025-10-02 12:13:26.541 2 DEBUG oslo_concurrency.processutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/20f3c85c-395d-4603-b03a-6625d537159d/disk.config 20f3c85c-395d-4603-b03a-6625d537159d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:26 compute-0 nova_compute[257802]: 2025-10-02 12:13:26.542 2 INFO nova.virt.libvirt.driver [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Deleting local config drive /var/lib/nova/instances/20f3c85c-395d-4603-b03a-6625d537159d/disk.config because it was imported into RBD.
Oct 02 12:13:26 compute-0 kernel: tapb45fb4b5-77: entered promiscuous mode
Oct 02 12:13:26 compute-0 NetworkManager[44987]: <info>  [1759407206.5962] manager: (tapb45fb4b5-77): new Tun device (/org/freedesktop/NetworkManager/Devices/100)
Oct 02 12:13:26 compute-0 ovn_controller[148183]: 2025-10-02T12:13:26Z|00215|binding|INFO|Claiming lport b45fb4b5-77cd-42f8-91b9-473e20188e70 for this chassis.
Oct 02 12:13:26 compute-0 ovn_controller[148183]: 2025-10-02T12:13:26Z|00216|binding|INFO|b45fb4b5-77cd-42f8-91b9-473e20188e70: Claiming fa:16:3e:46:cb:79 10.100.0.13
Oct 02 12:13:26 compute-0 nova_compute[257802]: 2025-10-02 12:13:26.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.606 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:cb:79 10.100.0.13'], port_security=['fa:16:3e:46:cb:79 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '20f3c85c-395d-4603-b03a-6625d537159d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e557191-367e-4f3a-8238-d1a8a649138a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f7fda5b2b6844ad82818a60ed39c0b0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '94a43b3a-a68b-43ea-b370-f2677c8855ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8b92627-1959-4b74-9379-e1c3dcad0904, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=b45fb4b5-77cd-42f8-91b9-473e20188e70) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.607 158261 INFO neutron.agent.ovn.metadata.agent [-] Port b45fb4b5-77cd-42f8-91b9-473e20188e70 in datapath 5e557191-367e-4f3a-8238-d1a8a649138a bound to our chassis
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.608 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e557191-367e-4f3a-8238-d1a8a649138a
Oct 02 12:13:26 compute-0 systemd-udevd[294822]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.618 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5ad0b028-f2ef-4874-9914-a62390e415ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.619 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5e557191-31 in ovnmeta-5e557191-367e-4f3a-8238-d1a8a649138a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.621 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5e557191-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.621 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a89208ad-13b7-4b41-af14-a6999c5e8dfe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.621 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[54d2850b-2b84-4db7-82cc-85c6ce741c0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:26 compute-0 NetworkManager[44987]: <info>  [1759407206.6321] device (tapb45fb4b5-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:13:26 compute-0 NetworkManager[44987]: <info>  [1759407206.6332] device (tapb45fb4b5-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.633 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[37cc05c1-3d74-49cf-beea-66940af02b9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:26 compute-0 systemd-machined[211836]: New machine qemu-28-instance-0000003a.
Oct 02 12:13:26 compute-0 systemd[1]: Started Virtual Machine qemu-28-instance-0000003a.
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.656 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[bdd3260a-b8c2-448d-9f46-4448d0d24d4f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:26 compute-0 nova_compute[257802]: 2025-10-02 12:13:26.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:26 compute-0 nova_compute[257802]: 2025-10-02 12:13:26.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:26 compute-0 ovn_controller[148183]: 2025-10-02T12:13:26Z|00217|binding|INFO|Setting lport b45fb4b5-77cd-42f8-91b9-473e20188e70 ovn-installed in OVS
Oct 02 12:13:26 compute-0 ovn_controller[148183]: 2025-10-02T12:13:26Z|00218|binding|INFO|Setting lport b45fb4b5-77cd-42f8-91b9-473e20188e70 up in Southbound
Oct 02 12:13:26 compute-0 nova_compute[257802]: 2025-10-02 12:13:26.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.687 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[1dba7d87-059b-4e67-b0c1-e93f74f9e8e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:26 compute-0 NetworkManager[44987]: <info>  [1759407206.6923] manager: (tap5e557191-30): new Veth device (/org/freedesktop/NetworkManager/Devices/101)
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.692 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a8d1430f-6260-4d24-96a3-3976fb558bd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.721 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[939e40dc-ec47-4829-843f-9556f47129f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.724 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[5283537e-3472-42eb-abdf-211fe48fc83c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:26 compute-0 NetworkManager[44987]: <info>  [1759407206.7420] device (tap5e557191-30): carrier: link connected
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.746 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[0013fa40-8747-4a79-b5da-b94f61d254b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.760 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[da45cbf0-324c-4b4b-8be7-de0b0e86a947]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e557191-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:42:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527436, 'reachable_time': 22710, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294857, 'error': None, 'target': 'ovnmeta-5e557191-367e-4f3a-8238-d1a8a649138a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.772 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d107fec2-5186-4fbf-b417-e351fbdba3a8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9a:4222'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527436, 'tstamp': 527436}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294858, 'error': None, 'target': 'ovnmeta-5e557191-367e-4f3a-8238-d1a8a649138a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.786 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a2dacf17-f04b-4efd-bbea-02d4e8df47d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e557191-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:42:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527436, 'reachable_time': 22710, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294859, 'error': None, 'target': 'ovnmeta-5e557191-367e-4f3a-8238-d1a8a649138a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.819 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[50e13899-ab54-460c-a5e1-9572d8628d97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.870 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9fb18bc8-a269-4742-bec4-b92e16861f5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.871 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e557191-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.872 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.872 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e557191-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:26 compute-0 kernel: tap5e557191-30: entered promiscuous mode
Oct 02 12:13:26 compute-0 nova_compute[257802]: 2025-10-02 12:13:26.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:26 compute-0 NetworkManager[44987]: <info>  [1759407206.8749] manager: (tap5e557191-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Oct 02 12:13:26 compute-0 nova_compute[257802]: 2025-10-02 12:13:26.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.878 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e557191-30, col_values=(('external_ids', {'iface-id': 'dfd60aa0-b29c-4076-9b8b-209670b07c07'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:26 compute-0 nova_compute[257802]: 2025-10-02 12:13:26.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:26 compute-0 ovn_controller[148183]: 2025-10-02T12:13:26Z|00219|binding|INFO|Releasing lport dfd60aa0-b29c-4076-9b8b-209670b07c07 from this chassis (sb_readonly=0)
Oct 02 12:13:26 compute-0 nova_compute[257802]: 2025-10-02 12:13:26.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.894 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5e557191-367e-4f3a-8238-d1a8a649138a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5e557191-367e-4f3a-8238-d1a8a649138a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.895 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5d7b2158-be58-4548-a153-84eaf36f7e1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.896 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-5e557191-367e-4f3a-8238-d1a8a649138a
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/5e557191-367e-4f3a-8238-d1a8a649138a.pid.haproxy
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 5e557191-367e-4f3a-8238-d1a8a649138a
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.896 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5e557191-367e-4f3a-8238-d1a8a649138a', 'env', 'PROCESS_TAG=haproxy-5e557191-367e-4f3a-8238-d1a8a649138a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5e557191-367e-4f3a-8238-d1a8a649138a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:13:26 compute-0 sudo[294865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:26 compute-0 sudo[294865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:26 compute-0 sudo[294865]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.931 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.932 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:26.932 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:26 compute-0 sudo[294893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:13:26 compute-0 sudo[294893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:26 compute-0 sudo[294893]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:27 compute-0 sudo[294919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:27 compute-0 sudo[294919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:27 compute-0 sudo[294919]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:27 compute-0 ceph-mon[73607]: pgmap v1445: 305 pgs: 305 active+clean; 160 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.3 MiB/s wr, 229 op/s
Oct 02 12:13:27 compute-0 sudo[294944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:13:27 compute-0 sudo[294944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 157 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 4.2 MiB/s wr, 192 op/s
Oct 02 12:13:27 compute-0 podman[294992]: 2025-10-02 12:13:27.305124399 +0000 UTC m=+0.111088843 container create 942769ebd24865a2fd83542aa65f891574f14af15d7b413385f77f0996031fac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5e557191-367e-4f3a-8238-d1a8a649138a, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:13:27 compute-0 podman[294992]: 2025-10-02 12:13:27.216692065 +0000 UTC m=+0.022656529 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:13:27 compute-0 nova_compute[257802]: 2025-10-02 12:13:27.384 2 DEBUG nova.compute.manager [req-474ac086-425c-455a-8589-e20b2549a927 req-d110230c-e644-4f6a-a92c-71b184b8c88a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Received event network-vif-plugged-b45fb4b5-77cd-42f8-91b9-473e20188e70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:27 compute-0 nova_compute[257802]: 2025-10-02 12:13:27.385 2 DEBUG oslo_concurrency.lockutils [req-474ac086-425c-455a-8589-e20b2549a927 req-d110230c-e644-4f6a-a92c-71b184b8c88a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "20f3c85c-395d-4603-b03a-6625d537159d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:27 compute-0 nova_compute[257802]: 2025-10-02 12:13:27.385 2 DEBUG oslo_concurrency.lockutils [req-474ac086-425c-455a-8589-e20b2549a927 req-d110230c-e644-4f6a-a92c-71b184b8c88a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:27 compute-0 nova_compute[257802]: 2025-10-02 12:13:27.385 2 DEBUG oslo_concurrency.lockutils [req-474ac086-425c-455a-8589-e20b2549a927 req-d110230c-e644-4f6a-a92c-71b184b8c88a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:27 compute-0 nova_compute[257802]: 2025-10-02 12:13:27.386 2 DEBUG nova.compute.manager [req-474ac086-425c-455a-8589-e20b2549a927 req-d110230c-e644-4f6a-a92c-71b184b8c88a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Processing event network-vif-plugged-b45fb4b5-77cd-42f8-91b9-473e20188e70 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:13:27 compute-0 systemd[1]: Started libpod-conmon-942769ebd24865a2fd83542aa65f891574f14af15d7b413385f77f0996031fac.scope.
Oct 02 12:13:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:13:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b07aeab6496e114cc3c1a2c26c5778188d7e5ff54f0ba9b33f0987ef014dd2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:27 compute-0 podman[294992]: 2025-10-02 12:13:27.499898899 +0000 UTC m=+0.305863353 container init 942769ebd24865a2fd83542aa65f891574f14af15d7b413385f77f0996031fac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5e557191-367e-4f3a-8238-d1a8a649138a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:13:27 compute-0 podman[294992]: 2025-10-02 12:13:27.50572305 +0000 UTC m=+0.311687504 container start 942769ebd24865a2fd83542aa65f891574f14af15d7b413385f77f0996031fac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5e557191-367e-4f3a-8238-d1a8a649138a, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:13:27 compute-0 neutron-haproxy-ovnmeta-5e557191-367e-4f3a-8238-d1a8a649138a[295059]: [NOTICE]   (295080) : New worker (295083) forked
Oct 02 12:13:27 compute-0 neutron-haproxy-ovnmeta-5e557191-367e-4f3a-8238-d1a8a649138a[295059]: [NOTICE]   (295080) : Loading success.
Oct 02 12:13:27 compute-0 sudo[294944]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:13:27 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:13:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:13:27 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:13:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:13:27 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:13:27 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 10a0ec3b-e61d-4707-bdcb-957b74dbed6d does not exist
Oct 02 12:13:27 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 7e59b9f7-15f8-429c-8bb4-8f7b0cfaf0c0 does not exist
Oct 02 12:13:27 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 6bcc3115-c49c-4835-b4aa-640e770397f2 does not exist
Oct 02 12:13:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:13:27 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:13:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:13:27 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:13:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:13:27 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:13:27 compute-0 sudo[295092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:27 compute-0 sudo[295092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:27 compute-0 sudo[295092]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:27 compute-0 sudo[295117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:13:27 compute-0 sudo[295117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:27 compute-0 sudo[295117]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:27 compute-0 nova_compute[257802]: 2025-10-02 12:13:27.908 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407207.907598, 20f3c85c-395d-4603-b03a-6625d537159d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:13:27 compute-0 nova_compute[257802]: 2025-10-02 12:13:27.908 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] VM Started (Lifecycle Event)
Oct 02 12:13:27 compute-0 nova_compute[257802]: 2025-10-02 12:13:27.910 2 DEBUG nova.compute.manager [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:13:27 compute-0 nova_compute[257802]: 2025-10-02 12:13:27.914 2 DEBUG nova.virt.libvirt.driver [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:13:27 compute-0 nova_compute[257802]: 2025-10-02 12:13:27.917 2 INFO nova.virt.libvirt.driver [-] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Instance spawned successfully.
Oct 02 12:13:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:27 compute-0 nova_compute[257802]: 2025-10-02 12:13:27.917 2 DEBUG nova.virt.libvirt.driver [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:13:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:27.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:27 compute-0 sudo[295142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:27 compute-0 sudo[295142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:27 compute-0 sudo[295142]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:27 compute-0 nova_compute[257802]: 2025-10-02 12:13:27.953 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:27 compute-0 nova_compute[257802]: 2025-10-02 12:13:27.958 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:13:27 compute-0 nova_compute[257802]: 2025-10-02 12:13:27.961 2 DEBUG nova.virt.libvirt.driver [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:13:27 compute-0 nova_compute[257802]: 2025-10-02 12:13:27.961 2 DEBUG nova.virt.libvirt.driver [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:13:27 compute-0 nova_compute[257802]: 2025-10-02 12:13:27.962 2 DEBUG nova.virt.libvirt.driver [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:13:27 compute-0 nova_compute[257802]: 2025-10-02 12:13:27.962 2 DEBUG nova.virt.libvirt.driver [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:13:27 compute-0 nova_compute[257802]: 2025-10-02 12:13:27.962 2 DEBUG nova.virt.libvirt.driver [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:13:27 compute-0 nova_compute[257802]: 2025-10-02 12:13:27.963 2 DEBUG nova.virt.libvirt.driver [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:13:27 compute-0 sudo[295167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:13:27 compute-0 sudo[295167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:27 compute-0 nova_compute[257802]: 2025-10-02 12:13:27.993 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:13:27 compute-0 nova_compute[257802]: 2025-10-02 12:13:27.993 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407207.9078512, 20f3c85c-395d-4603-b03a-6625d537159d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:13:27 compute-0 nova_compute[257802]: 2025-10-02 12:13:27.994 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] VM Paused (Lifecycle Event)
Oct 02 12:13:28 compute-0 nova_compute[257802]: 2025-10-02 12:13:28.019 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:28 compute-0 nova_compute[257802]: 2025-10-02 12:13:28.022 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407207.9126294, 20f3c85c-395d-4603-b03a-6625d537159d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:13:28 compute-0 nova_compute[257802]: 2025-10-02 12:13:28.022 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] VM Resumed (Lifecycle Event)
Oct 02 12:13:28 compute-0 nova_compute[257802]: 2025-10-02 12:13:28.031 2 INFO nova.compute.manager [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Took 9.60 seconds to spawn the instance on the hypervisor.
Oct 02 12:13:28 compute-0 nova_compute[257802]: 2025-10-02 12:13:28.031 2 DEBUG nova.compute.manager [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:28 compute-0 nova_compute[257802]: 2025-10-02 12:13:28.042 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:28 compute-0 nova_compute[257802]: 2025-10-02 12:13:28.045 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:13:28 compute-0 nova_compute[257802]: 2025-10-02 12:13:28.067 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:13:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:28.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:28 compute-0 nova_compute[257802]: 2025-10-02 12:13:28.092 2 INFO nova.compute.manager [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Took 11.17 seconds to build instance.
Oct 02 12:13:28 compute-0 nova_compute[257802]: 2025-10-02 12:13:28.124 2 DEBUG oslo_concurrency.lockutils [None req-e3b39233-3ee7-434a-84e0-ce86f23907b0 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.290s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:13:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:13:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:13:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:13:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:13:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:13:28 compute-0 nova_compute[257802]: 2025-10-02 12:13:28.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:28 compute-0 podman[295231]: 2025-10-02 12:13:28.272732757 +0000 UTC m=+0.020505698 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:13:28 compute-0 podman[295231]: 2025-10-02 12:13:28.37932677 +0000 UTC m=+0.127099731 container create 826558c04d6869633df2df19610bd8f4cb644950d4ca358b4bb128d8b06492e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:13:28 compute-0 systemd[1]: Started libpod-conmon-826558c04d6869633df2df19610bd8f4cb644950d4ca358b4bb128d8b06492e5.scope.
Oct 02 12:13:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:13:28 compute-0 podman[295231]: 2025-10-02 12:13:28.591250555 +0000 UTC m=+0.339023496 container init 826558c04d6869633df2df19610bd8f4cb644950d4ca358b4bb128d8b06492e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 12:13:28 compute-0 podman[295231]: 2025-10-02 12:13:28.602897198 +0000 UTC m=+0.350670119 container start 826558c04d6869633df2df19610bd8f4cb644950d4ca358b4bb128d8b06492e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_gauss, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Oct 02 12:13:28 compute-0 podman[295231]: 2025-10-02 12:13:28.606364571 +0000 UTC m=+0.354137492 container attach 826558c04d6869633df2df19610bd8f4cb644950d4ca358b4bb128d8b06492e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 12:13:28 compute-0 vigilant_gauss[295248]: 167 167
Oct 02 12:13:28 compute-0 systemd[1]: libpod-826558c04d6869633df2df19610bd8f4cb644950d4ca358b4bb128d8b06492e5.scope: Deactivated successfully.
Oct 02 12:13:28 compute-0 conmon[295248]: conmon 826558c04d6869633df2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-826558c04d6869633df2df19610bd8f4cb644950d4ca358b4bb128d8b06492e5.scope/container/memory.events
Oct 02 12:13:28 compute-0 podman[295231]: 2025-10-02 12:13:28.612180022 +0000 UTC m=+0.359952943 container died 826558c04d6869633df2df19610bd8f4cb644950d4ca358b4bb128d8b06492e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_gauss, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 12:13:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Oct 02 12:13:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-05b83d9dd62301c7c7670f8741a9a8c3a6d9263f950a1341549409294e9889bb-merged.mount: Deactivated successfully.
Oct 02 12:13:28 compute-0 podman[295231]: 2025-10-02 12:13:28.678283934 +0000 UTC m=+0.426056865 container remove 826558c04d6869633df2df19610bd8f4cb644950d4ca358b4bb128d8b06492e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 12:13:28 compute-0 systemd[1]: libpod-conmon-826558c04d6869633df2df19610bd8f4cb644950d4ca358b4bb128d8b06492e5.scope: Deactivated successfully.
Oct 02 12:13:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Oct 02 12:13:28 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Oct 02 12:13:28 compute-0 podman[295271]: 2025-10-02 12:13:28.888905188 +0000 UTC m=+0.054263366 container create 35fb00da3da186168eb2bc853fa787862a46b220e6578e91a6899b32685b0d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chebyshev, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:13:28 compute-0 systemd[1]: Started libpod-conmon-35fb00da3da186168eb2bc853fa787862a46b220e6578e91a6899b32685b0d4e.scope.
Oct 02 12:13:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/933c53ca81306d64eb1f3edbb32f140f5fb29312cc9dd5a30d62a850b038c6ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/933c53ca81306d64eb1f3edbb32f140f5fb29312cc9dd5a30d62a850b038c6ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/933c53ca81306d64eb1f3edbb32f140f5fb29312cc9dd5a30d62a850b038c6ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:28 compute-0 podman[295271]: 2025-10-02 12:13:28.869277213 +0000 UTC m=+0.034635411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/933c53ca81306d64eb1f3edbb32f140f5fb29312cc9dd5a30d62a850b038c6ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/933c53ca81306d64eb1f3edbb32f140f5fb29312cc9dd5a30d62a850b038c6ca/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:28 compute-0 nova_compute[257802]: 2025-10-02 12:13:28.966 2 DEBUG oslo_concurrency.lockutils [None req-56c7f95a-d726-477f-94c6-1bc6c3394b36 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Acquiring lock "interface-20f3c85c-395d-4603-b03a-6625d537159d-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:28 compute-0 nova_compute[257802]: 2025-10-02 12:13:28.967 2 DEBUG oslo_concurrency.lockutils [None req-56c7f95a-d726-477f-94c6-1bc6c3394b36 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Lock "interface-20f3c85c-395d-4603-b03a-6625d537159d-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:28 compute-0 nova_compute[257802]: 2025-10-02 12:13:28.968 2 DEBUG nova.objects.instance [None req-56c7f95a-d726-477f-94c6-1bc6c3394b36 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Lazy-loading 'flavor' on Instance uuid 20f3c85c-395d-4603-b03a-6625d537159d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:13:28 compute-0 podman[295271]: 2025-10-02 12:13:28.988804809 +0000 UTC m=+0.154163007 container init 35fb00da3da186168eb2bc853fa787862a46b220e6578e91a6899b32685b0d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chebyshev, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 12:13:28 compute-0 nova_compute[257802]: 2025-10-02 12:13:28.995 2 DEBUG nova.objects.instance [None req-56c7f95a-d726-477f-94c6-1bc6c3394b36 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Lazy-loading 'pci_requests' on Instance uuid 20f3c85c-395d-4603-b03a-6625d537159d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:13:28 compute-0 podman[295271]: 2025-10-02 12:13:28.998136955 +0000 UTC m=+0.163495133 container start 35fb00da3da186168eb2bc853fa787862a46b220e6578e91a6899b32685b0d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chebyshev, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct 02 12:13:29 compute-0 podman[295271]: 2025-10-02 12:13:29.002212664 +0000 UTC m=+0.167570872 container attach 35fb00da3da186168eb2bc853fa787862a46b220e6578e91a6899b32685b0d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chebyshev, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 12:13:29 compute-0 nova_compute[257802]: 2025-10-02 12:13:29.015 2 DEBUG nova.network.neutron [None req-56c7f95a-d726-477f-94c6-1bc6c3394b36 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:13:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 202 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 880 KiB/s rd, 5.4 MiB/s wr, 233 op/s
Oct 02 12:13:29 compute-0 ceph-mon[73607]: pgmap v1446: 305 pgs: 305 active+clean; 157 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 4.2 MiB/s wr, 192 op/s
Oct 02 12:13:29 compute-0 ceph-mon[73607]: osdmap e202: 3 total, 3 up, 3 in
Oct 02 12:13:29 compute-0 nova_compute[257802]: 2025-10-02 12:13:29.479 2 DEBUG nova.compute.manager [req-130c3a3d-96dd-4ead-9614-927a1e2a88e6 req-f064a205-6d19-43ee-9b5a-3b8503f2d961 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Received event network-vif-plugged-b45fb4b5-77cd-42f8-91b9-473e20188e70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:29 compute-0 nova_compute[257802]: 2025-10-02 12:13:29.482 2 DEBUG oslo_concurrency.lockutils [req-130c3a3d-96dd-4ead-9614-927a1e2a88e6 req-f064a205-6d19-43ee-9b5a-3b8503f2d961 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "20f3c85c-395d-4603-b03a-6625d537159d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:29 compute-0 nova_compute[257802]: 2025-10-02 12:13:29.482 2 DEBUG oslo_concurrency.lockutils [req-130c3a3d-96dd-4ead-9614-927a1e2a88e6 req-f064a205-6d19-43ee-9b5a-3b8503f2d961 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:29 compute-0 nova_compute[257802]: 2025-10-02 12:13:29.483 2 DEBUG oslo_concurrency.lockutils [req-130c3a3d-96dd-4ead-9614-927a1e2a88e6 req-f064a205-6d19-43ee-9b5a-3b8503f2d961 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:29 compute-0 nova_compute[257802]: 2025-10-02 12:13:29.483 2 DEBUG nova.compute.manager [req-130c3a3d-96dd-4ead-9614-927a1e2a88e6 req-f064a205-6d19-43ee-9b5a-3b8503f2d961 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] No waiting events found dispatching network-vif-plugged-b45fb4b5-77cd-42f8-91b9-473e20188e70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:13:29 compute-0 nova_compute[257802]: 2025-10-02 12:13:29.483 2 WARNING nova.compute.manager [req-130c3a3d-96dd-4ead-9614-927a1e2a88e6 req-f064a205-6d19-43ee-9b5a-3b8503f2d961 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Received unexpected event network-vif-plugged-b45fb4b5-77cd-42f8-91b9-473e20188e70 for instance with vm_state active and task_state None.
Oct 02 12:13:29 compute-0 nova_compute[257802]: 2025-10-02 12:13:29.590 2 DEBUG nova.policy [None req-56c7f95a-d726-477f-94c6-1bc6c3394b36 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6529c3301c674317b5af53daaa0ee15a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7f7fda5b2b6844ad82818a60ed39c0b0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:13:29 compute-0 pedantic_chebyshev[295287]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:13:29 compute-0 pedantic_chebyshev[295287]: --> relative data size: 1.0
Oct 02 12:13:29 compute-0 pedantic_chebyshev[295287]: --> All data devices are unavailable
Oct 02 12:13:29 compute-0 systemd[1]: libpod-35fb00da3da186168eb2bc853fa787862a46b220e6578e91a6899b32685b0d4e.scope: Deactivated successfully.
Oct 02 12:13:29 compute-0 podman[295302]: 2025-10-02 12:13:29.883318136 +0000 UTC m=+0.026364761 container died 35fb00da3da186168eb2bc853fa787862a46b220e6578e91a6899b32685b0d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:13:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:29.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:30.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-933c53ca81306d64eb1f3edbb32f140f5fb29312cc9dd5a30d62a850b038c6ca-merged.mount: Deactivated successfully.
Oct 02 12:13:30 compute-0 nova_compute[257802]: 2025-10-02 12:13:30.428 2 DEBUG nova.network.neutron [None req-56c7f95a-d726-477f-94c6-1bc6c3394b36 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Successfully created port: 9dcf053a-b635-450a-9c27-8dfb456f8bdd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:13:30 compute-0 ceph-mon[73607]: pgmap v1448: 305 pgs: 305 active+clean; 202 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 880 KiB/s rd, 5.4 MiB/s wr, 233 op/s
Oct 02 12:13:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/361221018' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/680505641' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:30 compute-0 nova_compute[257802]: 2025-10-02 12:13:30.593 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407195.5924327, c5000eb8-3b94-4233-a49b-079f204f3543 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:13:30 compute-0 nova_compute[257802]: 2025-10-02 12:13:30.594 2 INFO nova.compute.manager [-] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] VM Stopped (Lifecycle Event)
Oct 02 12:13:30 compute-0 nova_compute[257802]: 2025-10-02 12:13:30.616 2 DEBUG nova.compute.manager [None req-f5a5c850-3650-40b0-b918-297a22e5a699 - - - - - -] [instance: c5000eb8-3b94-4233-a49b-079f204f3543] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:30 compute-0 podman[295302]: 2025-10-02 12:13:30.64855553 +0000 UTC m=+0.791602135 container remove 35fb00da3da186168eb2bc853fa787862a46b220e6578e91a6899b32685b0d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chebyshev, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 12:13:30 compute-0 systemd[1]: libpod-conmon-35fb00da3da186168eb2bc853fa787862a46b220e6578e91a6899b32685b0d4e.scope: Deactivated successfully.
Oct 02 12:13:30 compute-0 sudo[295167]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:30 compute-0 sudo[295318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:30 compute-0 sudo[295318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:30 compute-0 sudo[295318]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:30 compute-0 sudo[295343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:13:30 compute-0 sudo[295343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:30 compute-0 sudo[295343]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:30 compute-0 sudo[295368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:30 compute-0 sudo[295368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:30 compute-0 sudo[295368]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:31 compute-0 sudo[295393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:13:31 compute-0 sudo[295393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 213 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 6.2 MiB/s wr, 224 op/s
Oct 02 12:13:31 compute-0 nova_compute[257802]: 2025-10-02 12:13:31.207 2 DEBUG nova.network.neutron [None req-56c7f95a-d726-477f-94c6-1bc6c3394b36 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Successfully updated port: 9dcf053a-b635-450a-9c27-8dfb456f8bdd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:13:31 compute-0 nova_compute[257802]: 2025-10-02 12:13:31.225 2 DEBUG oslo_concurrency.lockutils [None req-56c7f95a-d726-477f-94c6-1bc6c3394b36 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Acquiring lock "refresh_cache-20f3c85c-395d-4603-b03a-6625d537159d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:13:31 compute-0 nova_compute[257802]: 2025-10-02 12:13:31.225 2 DEBUG oslo_concurrency.lockutils [None req-56c7f95a-d726-477f-94c6-1bc6c3394b36 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Acquired lock "refresh_cache-20f3c85c-395d-4603-b03a-6625d537159d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:13:31 compute-0 nova_compute[257802]: 2025-10-02 12:13:31.225 2 DEBUG nova.network.neutron [None req-56c7f95a-d726-477f-94c6-1bc6c3394b36 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:13:31 compute-0 nova_compute[257802]: 2025-10-02 12:13:31.335 2 DEBUG nova.compute.manager [req-1777cc5f-50c0-46a2-b517-9c43f5fd2ec4 req-fffb3192-ab2b-471a-98d8-d4f39354b2f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Received event network-changed-9dcf053a-b635-450a-9c27-8dfb456f8bdd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:31 compute-0 nova_compute[257802]: 2025-10-02 12:13:31.336 2 DEBUG nova.compute.manager [req-1777cc5f-50c0-46a2-b517-9c43f5fd2ec4 req-fffb3192-ab2b-471a-98d8-d4f39354b2f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Refreshing instance network info cache due to event network-changed-9dcf053a-b635-450a-9c27-8dfb456f8bdd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:13:31 compute-0 nova_compute[257802]: 2025-10-02 12:13:31.336 2 DEBUG oslo_concurrency.lockutils [req-1777cc5f-50c0-46a2-b517-9c43f5fd2ec4 req-fffb3192-ab2b-471a-98d8-d4f39354b2f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-20f3c85c-395d-4603-b03a-6625d537159d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:13:31 compute-0 nova_compute[257802]: 2025-10-02 12:13:31.435 2 WARNING nova.network.neutron [None req-56c7f95a-d726-477f-94c6-1bc6c3394b36 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] 5e557191-367e-4f3a-8238-d1a8a649138a already exists in list: networks containing: ['5e557191-367e-4f3a-8238-d1a8a649138a']. ignoring it
Oct 02 12:13:31 compute-0 podman[295458]: 2025-10-02 12:13:31.472420364 +0000 UTC m=+0.071320890 container create 0be5812858d055bb5be34c87377b14c32048611b63b9abe4f0776fa87595aeef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_jepsen, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 12:13:31 compute-0 podman[295458]: 2025-10-02 12:13:31.426338567 +0000 UTC m=+0.025239113 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:13:31 compute-0 nova_compute[257802]: 2025-10-02 12:13:31.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:31 compute-0 systemd[1]: Started libpod-conmon-0be5812858d055bb5be34c87377b14c32048611b63b9abe4f0776fa87595aeef.scope.
Oct 02 12:13:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:13:31 compute-0 podman[295458]: 2025-10-02 12:13:31.664025937 +0000 UTC m=+0.262926553 container init 0be5812858d055bb5be34c87377b14c32048611b63b9abe4f0776fa87595aeef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 12:13:31 compute-0 podman[295458]: 2025-10-02 12:13:31.67327655 +0000 UTC m=+0.272177086 container start 0be5812858d055bb5be34c87377b14c32048611b63b9abe4f0776fa87595aeef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 12:13:31 compute-0 angry_jepsen[295475]: 167 167
Oct 02 12:13:31 compute-0 systemd[1]: libpod-0be5812858d055bb5be34c87377b14c32048611b63b9abe4f0776fa87595aeef.scope: Deactivated successfully.
Oct 02 12:13:31 compute-0 podman[295458]: 2025-10-02 12:13:31.710924363 +0000 UTC m=+0.309824989 container attach 0be5812858d055bb5be34c87377b14c32048611b63b9abe4f0776fa87595aeef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 12:13:31 compute-0 podman[295458]: 2025-10-02 12:13:31.711414085 +0000 UTC m=+0.310314661 container died 0be5812858d055bb5be34c87377b14c32048611b63b9abe4f0776fa87595aeef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:13:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6ce8f7a0d28a8230bbe6193779e7c112d48bc34d5b1b22696e2230425979be5-merged.mount: Deactivated successfully.
Oct 02 12:13:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:31.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:32 compute-0 podman[295458]: 2025-10-02 12:13:32.019235625 +0000 UTC m=+0.618136151 container remove 0be5812858d055bb5be34c87377b14c32048611b63b9abe4f0776fa87595aeef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:13:32 compute-0 systemd[1]: libpod-conmon-0be5812858d055bb5be34c87377b14c32048611b63b9abe4f0776fa87595aeef.scope: Deactivated successfully.
Oct 02 12:13:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:32.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:32 compute-0 podman[295499]: 2025-10-02 12:13:32.244137724 +0000 UTC m=+0.080581293 container create b36ea21331a5cac65b29239f6eccbc3830e74028863c4f3ba83116c01d14638e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_feistel, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:13:32 compute-0 podman[295499]: 2025-10-02 12:13:32.186287762 +0000 UTC m=+0.022731371 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:13:32 compute-0 systemd[1]: Started libpod-conmon-b36ea21331a5cac65b29239f6eccbc3830e74028863c4f3ba83116c01d14638e.scope.
Oct 02 12:13:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13eb50094b3d8e0bc45b4bd01af37290cf695b5183d9af2dbac6bce4038f1bd9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13eb50094b3d8e0bc45b4bd01af37290cf695b5183d9af2dbac6bce4038f1bd9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13eb50094b3d8e0bc45b4bd01af37290cf695b5183d9af2dbac6bce4038f1bd9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13eb50094b3d8e0bc45b4bd01af37290cf695b5183d9af2dbac6bce4038f1bd9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:32 compute-0 podman[295499]: 2025-10-02 12:13:32.453282573 +0000 UTC m=+0.289726142 container init b36ea21331a5cac65b29239f6eccbc3830e74028863c4f3ba83116c01d14638e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_feistel, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:13:32 compute-0 podman[295499]: 2025-10-02 12:13:32.459592105 +0000 UTC m=+0.296035674 container start b36ea21331a5cac65b29239f6eccbc3830e74028863c4f3ba83116c01d14638e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:13:32 compute-0 podman[295499]: 2025-10-02 12:13:32.52995488 +0000 UTC m=+0.366398449 container attach b36ea21331a5cac65b29239f6eccbc3830e74028863c4f3ba83116c01d14638e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_feistel, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Oct 02 12:13:32 compute-0 ceph-mon[73607]: pgmap v1449: 305 pgs: 305 active+clean; 213 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 6.2 MiB/s wr, 224 op/s
Oct 02 12:13:33 compute-0 fervent_feistel[295515]: {
Oct 02 12:13:33 compute-0 fervent_feistel[295515]:     "1": [
Oct 02 12:13:33 compute-0 fervent_feistel[295515]:         {
Oct 02 12:13:33 compute-0 fervent_feistel[295515]:             "devices": [
Oct 02 12:13:33 compute-0 fervent_feistel[295515]:                 "/dev/loop3"
Oct 02 12:13:33 compute-0 fervent_feistel[295515]:             ],
Oct 02 12:13:33 compute-0 fervent_feistel[295515]:             "lv_name": "ceph_lv0",
Oct 02 12:13:33 compute-0 fervent_feistel[295515]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:13:33 compute-0 fervent_feistel[295515]:             "lv_size": "7511998464",
Oct 02 12:13:33 compute-0 fervent_feistel[295515]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:13:33 compute-0 fervent_feistel[295515]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:13:33 compute-0 fervent_feistel[295515]:             "name": "ceph_lv0",
Oct 02 12:13:33 compute-0 fervent_feistel[295515]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:13:33 compute-0 fervent_feistel[295515]:             "tags": {
Oct 02 12:13:33 compute-0 fervent_feistel[295515]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:13:33 compute-0 fervent_feistel[295515]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:13:33 compute-0 fervent_feistel[295515]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:13:33 compute-0 fervent_feistel[295515]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:13:33 compute-0 fervent_feistel[295515]:                 "ceph.cluster_name": "ceph",
Oct 02 12:13:33 compute-0 fervent_feistel[295515]:                 "ceph.crush_device_class": "",
Oct 02 12:13:33 compute-0 fervent_feistel[295515]:                 "ceph.encrypted": "0",
Oct 02 12:13:33 compute-0 fervent_feistel[295515]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:13:33 compute-0 fervent_feistel[295515]:                 "ceph.osd_id": "1",
Oct 02 12:13:33 compute-0 fervent_feistel[295515]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:13:33 compute-0 fervent_feistel[295515]:                 "ceph.type": "block",
Oct 02 12:13:33 compute-0 fervent_feistel[295515]:                 "ceph.vdo": "0"
Oct 02 12:13:33 compute-0 fervent_feistel[295515]:             },
Oct 02 12:13:33 compute-0 fervent_feistel[295515]:             "type": "block",
Oct 02 12:13:33 compute-0 fervent_feistel[295515]:             "vg_name": "ceph_vg0"
Oct 02 12:13:33 compute-0 fervent_feistel[295515]:         }
Oct 02 12:13:33 compute-0 fervent_feistel[295515]:     ]
Oct 02 12:13:33 compute-0 fervent_feistel[295515]: }
Oct 02 12:13:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 214 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.0 MiB/s wr, 201 op/s
Oct 02 12:13:33 compute-0 systemd[1]: libpod-b36ea21331a5cac65b29239f6eccbc3830e74028863c4f3ba83116c01d14638e.scope: Deactivated successfully.
Oct 02 12:13:33 compute-0 podman[295499]: 2025-10-02 12:13:33.215922053 +0000 UTC m=+1.052365612 container died b36ea21331a5cac65b29239f6eccbc3830e74028863c4f3ba83116c01d14638e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_feistel, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:13:33 compute-0 nova_compute[257802]: 2025-10-02 12:13:33.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-13eb50094b3d8e0bc45b4bd01af37290cf695b5183d9af2dbac6bce4038f1bd9-merged.mount: Deactivated successfully.
Oct 02 12:13:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:33 compute-0 podman[295499]: 2025-10-02 12:13:33.63712376 +0000 UTC m=+1.473567339 container remove b36ea21331a5cac65b29239f6eccbc3830e74028863c4f3ba83116c01d14638e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_feistel, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 12:13:33 compute-0 systemd[1]: libpod-conmon-b36ea21331a5cac65b29239f6eccbc3830e74028863c4f3ba83116c01d14638e.scope: Deactivated successfully.
Oct 02 12:13:33 compute-0 sudo[295393]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:33 compute-0 sudo[295540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:33 compute-0 sudo[295540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:33 compute-0 sudo[295540]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:33 compute-0 sudo[295565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:13:33 compute-0 sudo[295565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:33 compute-0 sudo[295565]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:33 compute-0 sudo[295590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:33 compute-0 sudo[295590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:33 compute-0 sudo[295590]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:33 compute-0 sudo[295615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:13:33 compute-0 sudo[295615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:13:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:33.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:13:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:13:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:34.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:13:34 compute-0 podman[295680]: 2025-10-02 12:13:34.201173528 +0000 UTC m=+0.024011942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:13:34 compute-0 podman[295680]: 2025-10-02 12:13:34.371948857 +0000 UTC m=+0.194787251 container create bc4ff561c30735a554bc816d5bbd46c77da7d61fc5dbd0cbbe5612ea5258a71d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 12:13:34 compute-0 systemd[1]: Started libpod-conmon-bc4ff561c30735a554bc816d5bbd46c77da7d61fc5dbd0cbbe5612ea5258a71d.scope.
Oct 02 12:13:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:13:34 compute-0 podman[295680]: 2025-10-02 12:13:34.502655534 +0000 UTC m=+0.325493948 container init bc4ff561c30735a554bc816d5bbd46c77da7d61fc5dbd0cbbe5612ea5258a71d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_merkle, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 12:13:34 compute-0 podman[295680]: 2025-10-02 12:13:34.509847568 +0000 UTC m=+0.332685962 container start bc4ff561c30735a554bc816d5bbd46c77da7d61fc5dbd0cbbe5612ea5258a71d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Oct 02 12:13:34 compute-0 relaxed_merkle[295697]: 167 167
Oct 02 12:13:34 compute-0 systemd[1]: libpod-bc4ff561c30735a554bc816d5bbd46c77da7d61fc5dbd0cbbe5612ea5258a71d.scope: Deactivated successfully.
Oct 02 12:13:34 compute-0 podman[295680]: 2025-10-02 12:13:34.529211157 +0000 UTC m=+0.352049571 container attach bc4ff561c30735a554bc816d5bbd46c77da7d61fc5dbd0cbbe5612ea5258a71d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 12:13:34 compute-0 podman[295680]: 2025-10-02 12:13:34.529662349 +0000 UTC m=+0.352500743 container died bc4ff561c30735a554bc816d5bbd46c77da7d61fc5dbd0cbbe5612ea5258a71d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:13:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b6ece9df2be853e549584c19cc79683db21d4c07fd03f15b4531430ba168bf8-merged.mount: Deactivated successfully.
Oct 02 12:13:34 compute-0 podman[295680]: 2025-10-02 12:13:34.700281532 +0000 UTC m=+0.523119936 container remove bc4ff561c30735a554bc816d5bbd46c77da7d61fc5dbd0cbbe5612ea5258a71d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_merkle, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 12:13:34 compute-0 systemd[1]: libpod-conmon-bc4ff561c30735a554bc816d5bbd46c77da7d61fc5dbd0cbbe5612ea5258a71d.scope: Deactivated successfully.
Oct 02 12:13:34 compute-0 podman[295720]: 2025-10-02 12:13:34.85408711 +0000 UTC m=+0.029310562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:13:35 compute-0 podman[295720]: 2025-10-02 12:13:35.041083281 +0000 UTC m=+0.216306733 container create 667b6dd9a1af5c0bd9645932de4694cf5fa35fd20737e97944116986616f454f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cori, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Oct 02 12:13:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:35.133 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:13:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:35.137 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:35 compute-0 ceph-mon[73607]: pgmap v1450: 305 pgs: 305 active+clean; 214 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.0 MiB/s wr, 201 op/s
Oct 02 12:13:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 214 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.6 MiB/s wr, 218 op/s
Oct 02 12:13:35 compute-0 systemd[1]: Started libpod-conmon-667b6dd9a1af5c0bd9645932de4694cf5fa35fd20737e97944116986616f454f.scope.
Oct 02 12:13:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:13:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0654024aad7b8d5fd9ad88f4951b53fa27f46f2be43e556ed4cebd34a74badc2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0654024aad7b8d5fd9ad88f4951b53fa27f46f2be43e556ed4cebd34a74badc2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0654024aad7b8d5fd9ad88f4951b53fa27f46f2be43e556ed4cebd34a74badc2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0654024aad7b8d5fd9ad88f4951b53fa27f46f2be43e556ed4cebd34a74badc2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.255 2 DEBUG nova.network.neutron [None req-56c7f95a-d726-477f-94c6-1bc6c3394b36 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Updating instance_info_cache with network_info: [{"id": "b45fb4b5-77cd-42f8-91b9-473e20188e70", "address": "fa:16:3e:46:cb:79", "network": {"id": "5e557191-367e-4f3a-8238-d1a8a649138a", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1604492819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f7fda5b2b6844ad82818a60ed39c0b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb45fb4b5-77", "ovs_interfaceid": "b45fb4b5-77cd-42f8-91b9-473e20188e70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9dcf053a-b635-450a-9c27-8dfb456f8bdd", "address": "fa:16:3e:9a:74:4f", "network": {"id": "5e557191-367e-4f3a-8238-d1a8a649138a", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1604492819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f7fda5b2b6844ad82818a60ed39c0b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dcf053a-b6", "ovs_interfaceid": "9dcf053a-b635-450a-9c27-8dfb456f8bdd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.273 2 DEBUG oslo_concurrency.lockutils [None req-56c7f95a-d726-477f-94c6-1bc6c3394b36 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Releasing lock "refresh_cache-20f3c85c-395d-4603-b03a-6625d537159d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.274 2 DEBUG oslo_concurrency.lockutils [req-1777cc5f-50c0-46a2-b517-9c43f5fd2ec4 req-fffb3192-ab2b-471a-98d8-d4f39354b2f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-20f3c85c-395d-4603-b03a-6625d537159d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.274 2 DEBUG nova.network.neutron [req-1777cc5f-50c0-46a2-b517-9c43f5fd2ec4 req-fffb3192-ab2b-471a-98d8-d4f39354b2f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Refreshing network info cache for port 9dcf053a-b635-450a-9c27-8dfb456f8bdd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.278 2 DEBUG nova.virt.libvirt.vif [None req-56c7f95a-d726-477f-94c6-1bc6c3394b36 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:13:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1269708573',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1269708573',id=58,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:13:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7f7fda5b2b6844ad82818a60ed39c0b0',ramdisk_id='',reservation_id='r-0wtd0h88',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-1440782177',owner_user_name='tempest-AttachInterfacesV270Test-1440782177-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:13:28Z,user_data=None,user_id='6529c3301c674317b5af53daaa0ee15a',uuid=20f3c85c-395d-4603-b03a-6625d537159d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9dcf053a-b635-450a-9c27-8dfb456f8bdd", "address": "fa:16:3e:9a:74:4f", "network": {"id": "5e557191-367e-4f3a-8238-d1a8a649138a", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1604492819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f7fda5b2b6844ad82818a60ed39c0b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dcf053a-b6", "ovs_interfaceid": "9dcf053a-b635-450a-9c27-8dfb456f8bdd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.278 2 DEBUG nova.network.os_vif_util [None req-56c7f95a-d726-477f-94c6-1bc6c3394b36 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Converting VIF {"id": "9dcf053a-b635-450a-9c27-8dfb456f8bdd", "address": "fa:16:3e:9a:74:4f", "network": {"id": "5e557191-367e-4f3a-8238-d1a8a649138a", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1604492819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f7fda5b2b6844ad82818a60ed39c0b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dcf053a-b6", "ovs_interfaceid": "9dcf053a-b635-450a-9c27-8dfb456f8bdd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.279 2 DEBUG nova.network.os_vif_util [None req-56c7f95a-d726-477f-94c6-1bc6c3394b36 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:74:4f,bridge_name='br-int',has_traffic_filtering=True,id=9dcf053a-b635-450a-9c27-8dfb456f8bdd,network=Network(5e557191-367e-4f3a-8238-d1a8a649138a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dcf053a-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.279 2 DEBUG os_vif [None req-56c7f95a-d726-477f-94c6-1bc6c3394b36 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:74:4f,bridge_name='br-int',has_traffic_filtering=True,id=9dcf053a-b635-450a-9c27-8dfb456f8bdd,network=Network(5e557191-367e-4f3a-8238-d1a8a649138a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dcf053a-b6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.280 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.281 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.283 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9dcf053a-b6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.283 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9dcf053a-b6, col_values=(('external_ids', {'iface-id': '9dcf053a-b635-450a-9c27-8dfb456f8bdd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9a:74:4f', 'vm-uuid': '20f3c85c-395d-4603-b03a-6625d537159d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:35 compute-0 NetworkManager[44987]: <info>  [1759407215.2862] manager: (tap9dcf053a-b6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.293 2 INFO os_vif [None req-56c7f95a-d726-477f-94c6-1bc6c3394b36 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:74:4f,bridge_name='br-int',has_traffic_filtering=True,id=9dcf053a-b635-450a-9c27-8dfb456f8bdd,network=Network(5e557191-367e-4f3a-8238-d1a8a649138a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dcf053a-b6')
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.294 2 DEBUG nova.virt.libvirt.vif [None req-56c7f95a-d726-477f-94c6-1bc6c3394b36 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:13:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1269708573',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1269708573',id=58,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:13:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7f7fda5b2b6844ad82818a60ed39c0b0',ramdisk_id='',reservation_id='r-0wtd0h88',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-1440782177',owner_user_name='tempest-AttachInterfacesV270Test-1440782177-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:13:28Z,user_data=None,user_id='6529c3301c674317b5af53daaa0ee15a',uuid=20f3c85c-395d-4603-b03a-6625d537159d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9dcf053a-b635-450a-9c27-8dfb456f8bdd", "address": "fa:16:3e:9a:74:4f", "network": {"id": "5e557191-367e-4f3a-8238-d1a8a649138a", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1604492819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f7fda5b2b6844ad82818a60ed39c0b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dcf053a-b6", "ovs_interfaceid": "9dcf053a-b635-450a-9c27-8dfb456f8bdd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.294 2 DEBUG nova.network.os_vif_util [None req-56c7f95a-d726-477f-94c6-1bc6c3394b36 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Converting VIF {"id": "9dcf053a-b635-450a-9c27-8dfb456f8bdd", "address": "fa:16:3e:9a:74:4f", "network": {"id": "5e557191-367e-4f3a-8238-d1a8a649138a", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1604492819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f7fda5b2b6844ad82818a60ed39c0b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dcf053a-b6", "ovs_interfaceid": "9dcf053a-b635-450a-9c27-8dfb456f8bdd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.295 2 DEBUG nova.network.os_vif_util [None req-56c7f95a-d726-477f-94c6-1bc6c3394b36 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:74:4f,bridge_name='br-int',has_traffic_filtering=True,id=9dcf053a-b635-450a-9c27-8dfb456f8bdd,network=Network(5e557191-367e-4f3a-8238-d1a8a649138a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dcf053a-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.298 2 DEBUG nova.virt.libvirt.guest [None req-56c7f95a-d726-477f-94c6-1bc6c3394b36 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] attach device xml: <interface type="ethernet">
Oct 02 12:13:35 compute-0 nova_compute[257802]:   <mac address="fa:16:3e:9a:74:4f"/>
Oct 02 12:13:35 compute-0 nova_compute[257802]:   <model type="virtio"/>
Oct 02 12:13:35 compute-0 nova_compute[257802]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:13:35 compute-0 nova_compute[257802]:   <mtu size="1442"/>
Oct 02 12:13:35 compute-0 nova_compute[257802]:   <target dev="tap9dcf053a-b6"/>
Oct 02 12:13:35 compute-0 nova_compute[257802]: </interface>
Oct 02 12:13:35 compute-0 nova_compute[257802]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:13:35 compute-0 NetworkManager[44987]: <info>  [1759407215.3217] manager: (tap9dcf053a-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/104)
Oct 02 12:13:35 compute-0 kernel: tap9dcf053a-b6: entered promiscuous mode
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:35 compute-0 ovn_controller[148183]: 2025-10-02T12:13:35Z|00220|binding|INFO|Claiming lport 9dcf053a-b635-450a-9c27-8dfb456f8bdd for this chassis.
Oct 02 12:13:35 compute-0 ovn_controller[148183]: 2025-10-02T12:13:35Z|00221|binding|INFO|9dcf053a-b635-450a-9c27-8dfb456f8bdd: Claiming fa:16:3e:9a:74:4f 10.100.0.12
Oct 02 12:13:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:35.335 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:74:4f 10.100.0.12'], port_security=['fa:16:3e:9a:74:4f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '20f3c85c-395d-4603-b03a-6625d537159d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e557191-367e-4f3a-8238-d1a8a649138a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f7fda5b2b6844ad82818a60ed39c0b0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '94a43b3a-a68b-43ea-b370-f2677c8855ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8b92627-1959-4b74-9379-e1c3dcad0904, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=9dcf053a-b635-450a-9c27-8dfb456f8bdd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:13:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:35.336 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 9dcf053a-b635-450a-9c27-8dfb456f8bdd in datapath 5e557191-367e-4f3a-8238-d1a8a649138a bound to our chassis
Oct 02 12:13:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:35.338 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e557191-367e-4f3a-8238-d1a8a649138a
Oct 02 12:13:35 compute-0 ovn_controller[148183]: 2025-10-02T12:13:35Z|00222|binding|INFO|Setting lport 9dcf053a-b635-450a-9c27-8dfb456f8bdd ovn-installed in OVS
Oct 02 12:13:35 compute-0 ovn_controller[148183]: 2025-10-02T12:13:35Z|00223|binding|INFO|Setting lport 9dcf053a-b635-450a-9c27-8dfb456f8bdd up in Southbound
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:35.364 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ccb3e5dd-2a7a-4713-ba61-5e16b00ab3bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:35 compute-0 systemd-udevd[295748]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:13:35 compute-0 NetworkManager[44987]: <info>  [1759407215.3941] device (tap9dcf053a-b6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:13:35 compute-0 NetworkManager[44987]: <info>  [1759407215.3953] device (tap9dcf053a-b6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:13:35 compute-0 podman[295720]: 2025-10-02 12:13:35.396916274 +0000 UTC m=+0.572139816 container init 667b6dd9a1af5c0bd9645932de4694cf5fa35fd20737e97944116986616f454f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cori, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:13:35 compute-0 podman[295720]: 2025-10-02 12:13:35.408981617 +0000 UTC m=+0.584205079 container start 667b6dd9a1af5c0bd9645932de4694cf5fa35fd20737e97944116986616f454f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 12:13:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:35.415 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[972fd873-4715-4e15-aead-1273e44ac2d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:35.418 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[33bac2f3-314a-4ae4-b6cd-d62a938c92b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:35.446 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[15601031-ddc2-4817-a49e-1ddd0c12ebae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:35.463 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d2deee23-9e4f-4558-8194-3da019035508]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e557191-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:42:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527436, 'reachable_time': 22710, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295757, 'error': None, 'target': 'ovnmeta-5e557191-367e-4f3a-8238-d1a8a649138a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:35.493 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1fc05c69-df8b-4e4f-8f7f-bacd11ddca59]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e557191-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527446, 'tstamp': 527446}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295758, 'error': None, 'target': 'ovnmeta-5e557191-367e-4f3a-8238-d1a8a649138a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e557191-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527448, 'tstamp': 527448}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295758, 'error': None, 'target': 'ovnmeta-5e557191-367e-4f3a-8238-d1a8a649138a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:35.494 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e557191-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:35.497 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e557191-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:35.497 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:13:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:35.498 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e557191-30, col_values=(('external_ids', {'iface-id': 'dfd60aa0-b29c-4076-9b8b-209670b07c07'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:35.498 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:13:35 compute-0 podman[295720]: 2025-10-02 12:13:35.525238954 +0000 UTC m=+0.700462496 container attach 667b6dd9a1af5c0bd9645932de4694cf5fa35fd20737e97944116986616f454f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cori, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.570 2 DEBUG nova.virt.libvirt.driver [None req-56c7f95a-d726-477f-94c6-1bc6c3394b36 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.571 2 DEBUG nova.virt.libvirt.driver [None req-56c7f95a-d726-477f-94c6-1bc6c3394b36 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.572 2 DEBUG nova.virt.libvirt.driver [None req-56c7f95a-d726-477f-94c6-1bc6c3394b36 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] No VIF found with MAC fa:16:3e:46:cb:79, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.573 2 DEBUG nova.virt.libvirt.driver [None req-56c7f95a-d726-477f-94c6-1bc6c3394b36 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] No VIF found with MAC fa:16:3e:9a:74:4f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:13:35 compute-0 sudo[295759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:35 compute-0 sudo[295759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:35 compute-0 sudo[295759]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.610 2 DEBUG nova.virt.libvirt.guest [None req-56c7f95a-d726-477f-94c6-1bc6c3394b36 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:13:35 compute-0 nova_compute[257802]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:13:35 compute-0 nova_compute[257802]:   <nova:name>tempest-AttachInterfacesV270Test-server-1269708573</nova:name>
Oct 02 12:13:35 compute-0 nova_compute[257802]:   <nova:creationTime>2025-10-02 12:13:35</nova:creationTime>
Oct 02 12:13:35 compute-0 nova_compute[257802]:   <nova:flavor name="m1.nano">
Oct 02 12:13:35 compute-0 nova_compute[257802]:     <nova:memory>128</nova:memory>
Oct 02 12:13:35 compute-0 nova_compute[257802]:     <nova:disk>1</nova:disk>
Oct 02 12:13:35 compute-0 nova_compute[257802]:     <nova:swap>0</nova:swap>
Oct 02 12:13:35 compute-0 nova_compute[257802]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:13:35 compute-0 nova_compute[257802]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:13:35 compute-0 nova_compute[257802]:   </nova:flavor>
Oct 02 12:13:35 compute-0 nova_compute[257802]:   <nova:owner>
Oct 02 12:13:35 compute-0 nova_compute[257802]:     <nova:user uuid="6529c3301c674317b5af53daaa0ee15a">tempest-AttachInterfacesV270Test-1440782177-project-member</nova:user>
Oct 02 12:13:35 compute-0 nova_compute[257802]:     <nova:project uuid="7f7fda5b2b6844ad82818a60ed39c0b0">tempest-AttachInterfacesV270Test-1440782177</nova:project>
Oct 02 12:13:35 compute-0 nova_compute[257802]:   </nova:owner>
Oct 02 12:13:35 compute-0 nova_compute[257802]:   <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:13:35 compute-0 nova_compute[257802]:   <nova:ports>
Oct 02 12:13:35 compute-0 nova_compute[257802]:     <nova:port uuid="b45fb4b5-77cd-42f8-91b9-473e20188e70">
Oct 02 12:13:35 compute-0 nova_compute[257802]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 12:13:35 compute-0 nova_compute[257802]:     </nova:port>
Oct 02 12:13:35 compute-0 nova_compute[257802]:     <nova:port uuid="9dcf053a-b635-450a-9c27-8dfb456f8bdd">
Oct 02 12:13:35 compute-0 nova_compute[257802]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:13:35 compute-0 nova_compute[257802]:     </nova:port>
Oct 02 12:13:35 compute-0 nova_compute[257802]:   </nova:ports>
Oct 02 12:13:35 compute-0 nova_compute[257802]: </nova:instance>
Oct 02 12:13:35 compute-0 nova_compute[257802]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 12:13:35 compute-0 nova_compute[257802]: 2025-10-02 12:13:35.649 2 DEBUG oslo_concurrency.lockutils [None req-56c7f95a-d726-477f-94c6-1bc6c3394b36 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Lock "interface-20f3c85c-395d-4603-b03a-6625d537159d-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:35 compute-0 sudo[295784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:35 compute-0 sudo[295784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:35 compute-0 sudo[295784]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:35.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:36.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:36 compute-0 ceph-mon[73607]: pgmap v1451: 305 pgs: 305 active+clean; 214 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.6 MiB/s wr, 218 op/s
Oct 02 12:13:36 compute-0 funny_cori[295738]: {
Oct 02 12:13:36 compute-0 funny_cori[295738]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:13:36 compute-0 funny_cori[295738]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:13:36 compute-0 funny_cori[295738]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:13:36 compute-0 funny_cori[295738]:         "osd_id": 1,
Oct 02 12:13:36 compute-0 funny_cori[295738]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:13:36 compute-0 funny_cori[295738]:         "type": "bluestore"
Oct 02 12:13:36 compute-0 funny_cori[295738]:     }
Oct 02 12:13:36 compute-0 funny_cori[295738]: }
Oct 02 12:13:36 compute-0 systemd[1]: libpod-667b6dd9a1af5c0bd9645932de4694cf5fa35fd20737e97944116986616f454f.scope: Deactivated successfully.
Oct 02 12:13:36 compute-0 podman[295720]: 2025-10-02 12:13:36.37726627 +0000 UTC m=+1.552489732 container died 667b6dd9a1af5c0bd9645932de4694cf5fa35fd20737e97944116986616f454f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cori, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 12:13:36 compute-0 nova_compute[257802]: 2025-10-02 12:13:36.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-0654024aad7b8d5fd9ad88f4951b53fa27f46f2be43e556ed4cebd34a74badc2-merged.mount: Deactivated successfully.
Oct 02 12:13:36 compute-0 nova_compute[257802]: 2025-10-02 12:13:36.796 2 DEBUG nova.network.neutron [req-1777cc5f-50c0-46a2-b517-9c43f5fd2ec4 req-fffb3192-ab2b-471a-98d8-d4f39354b2f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Updated VIF entry in instance network info cache for port 9dcf053a-b635-450a-9c27-8dfb456f8bdd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:13:36 compute-0 nova_compute[257802]: 2025-10-02 12:13:36.797 2 DEBUG nova.network.neutron [req-1777cc5f-50c0-46a2-b517-9c43f5fd2ec4 req-fffb3192-ab2b-471a-98d8-d4f39354b2f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Updating instance_info_cache with network_info: [{"id": "b45fb4b5-77cd-42f8-91b9-473e20188e70", "address": "fa:16:3e:46:cb:79", "network": {"id": "5e557191-367e-4f3a-8238-d1a8a649138a", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1604492819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f7fda5b2b6844ad82818a60ed39c0b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb45fb4b5-77", "ovs_interfaceid": "b45fb4b5-77cd-42f8-91b9-473e20188e70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9dcf053a-b635-450a-9c27-8dfb456f8bdd", "address": "fa:16:3e:9a:74:4f", "network": {"id": "5e557191-367e-4f3a-8238-d1a8a649138a", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1604492819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"7f7fda5b2b6844ad82818a60ed39c0b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dcf053a-b6", "ovs_interfaceid": "9dcf053a-b635-450a-9c27-8dfb456f8bdd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:13:36 compute-0 nova_compute[257802]: 2025-10-02 12:13:36.813 2 DEBUG oslo_concurrency.lockutils [req-1777cc5f-50c0-46a2-b517-9c43f5fd2ec4 req-fffb3192-ab2b-471a-98d8-d4f39354b2f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-20f3c85c-395d-4603-b03a-6625d537159d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:13:37 compute-0 podman[295720]: 2025-10-02 12:13:37.104315718 +0000 UTC m=+2.279539180 container remove 667b6dd9a1af5c0bd9645932de4694cf5fa35fd20737e97944116986616f454f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cori, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:13:37 compute-0 systemd[1]: libpod-conmon-667b6dd9a1af5c0bd9645932de4694cf5fa35fd20737e97944116986616f454f.scope: Deactivated successfully.
Oct 02 12:13:37 compute-0 sudo[295615]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:13:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 214 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.7 MiB/s wr, 231 op/s
Oct 02 12:13:37 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:13:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:13:37 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:13:37 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 17a69e14-5ad9-4432-bf9a-ca1d43fb9c94 does not exist
Oct 02 12:13:37 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 7c088745-8a70-4437-924f-b3fa6e6bdebf does not exist
Oct 02 12:13:37 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev ac520e9d-e92d-4519-89b5-5c0b8e53763d does not exist
Oct 02 12:13:37 compute-0 sudo[295840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:37 compute-0 sudo[295840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:37 compute-0 sudo[295840]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:37 compute-0 sudo[295865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:13:37 compute-0 sudo[295865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:37 compute-0 sudo[295865]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:37.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:38 compute-0 nova_compute[257802]: 2025-10-02 12:13:38.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:38 compute-0 nova_compute[257802]: 2025-10-02 12:13:38.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:13:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:13:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:38.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:13:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Oct 02 12:13:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Oct 02 12:13:38 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Oct 02 12:13:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:38 compute-0 ceph-mon[73607]: pgmap v1452: 305 pgs: 305 active+clean; 214 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.7 MiB/s wr, 231 op/s
Oct 02 12:13:38 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:13:38 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:13:39 compute-0 nova_compute[257802]: 2025-10-02 12:13:39.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 169 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 910 KiB/s wr, 184 op/s
Oct 02 12:13:39 compute-0 nova_compute[257802]: 2025-10-02 12:13:39.362 2 DEBUG nova.compute.manager [req-74ca69fc-f54f-40f0-a40b-3b0f71041da8 req-2f2c747b-d3c6-4521-b0fb-7540eec91bdc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Received event network-vif-plugged-9dcf053a-b635-450a-9c27-8dfb456f8bdd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:39 compute-0 nova_compute[257802]: 2025-10-02 12:13:39.363 2 DEBUG oslo_concurrency.lockutils [req-74ca69fc-f54f-40f0-a40b-3b0f71041da8 req-2f2c747b-d3c6-4521-b0fb-7540eec91bdc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "20f3c85c-395d-4603-b03a-6625d537159d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:39 compute-0 nova_compute[257802]: 2025-10-02 12:13:39.364 2 DEBUG oslo_concurrency.lockutils [req-74ca69fc-f54f-40f0-a40b-3b0f71041da8 req-2f2c747b-d3c6-4521-b0fb-7540eec91bdc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:39 compute-0 nova_compute[257802]: 2025-10-02 12:13:39.365 2 DEBUG oslo_concurrency.lockutils [req-74ca69fc-f54f-40f0-a40b-3b0f71041da8 req-2f2c747b-d3c6-4521-b0fb-7540eec91bdc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:39 compute-0 nova_compute[257802]: 2025-10-02 12:13:39.365 2 DEBUG nova.compute.manager [req-74ca69fc-f54f-40f0-a40b-3b0f71041da8 req-2f2c747b-d3c6-4521-b0fb-7540eec91bdc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] No waiting events found dispatching network-vif-plugged-9dcf053a-b635-450a-9c27-8dfb456f8bdd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:13:39 compute-0 nova_compute[257802]: 2025-10-02 12:13:39.366 2 WARNING nova.compute.manager [req-74ca69fc-f54f-40f0-a40b-3b0f71041da8 req-2f2c747b-d3c6-4521-b0fb-7540eec91bdc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Received unexpected event network-vif-plugged-9dcf053a-b635-450a-9c27-8dfb456f8bdd for instance with vm_state active and task_state None.
Oct 02 12:13:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:39.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:39 compute-0 ceph-mon[73607]: osdmap e203: 3 total, 3 up, 3 in
Oct 02 12:13:40 compute-0 nova_compute[257802]: 2025-10-02 12:13:40.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:13:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:40.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:13:40 compute-0 nova_compute[257802]: 2025-10-02 12:13:40.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:41 compute-0 ceph-mon[73607]: pgmap v1454: 305 pgs: 305 active+clean; 169 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 910 KiB/s wr, 184 op/s
Oct 02 12:13:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1446482444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:41 compute-0 nova_compute[257802]: 2025-10-02 12:13:41.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:41.140 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 134 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 43 KiB/s wr, 192 op/s
Oct 02 12:13:41 compute-0 nova_compute[257802]: 2025-10-02 12:13:41.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:41 compute-0 nova_compute[257802]: 2025-10-02 12:13:41.572 2 DEBUG nova.compute.manager [req-b735deff-5c4c-4971-9c38-c6e218e384a1 req-f4524eec-6afb-46ba-b28f-035b4beb8604 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Received event network-vif-plugged-9dcf053a-b635-450a-9c27-8dfb456f8bdd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:41 compute-0 nova_compute[257802]: 2025-10-02 12:13:41.573 2 DEBUG oslo_concurrency.lockutils [req-b735deff-5c4c-4971-9c38-c6e218e384a1 req-f4524eec-6afb-46ba-b28f-035b4beb8604 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "20f3c85c-395d-4603-b03a-6625d537159d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:41 compute-0 nova_compute[257802]: 2025-10-02 12:13:41.573 2 DEBUG oslo_concurrency.lockutils [req-b735deff-5c4c-4971-9c38-c6e218e384a1 req-f4524eec-6afb-46ba-b28f-035b4beb8604 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:41 compute-0 nova_compute[257802]: 2025-10-02 12:13:41.573 2 DEBUG oslo_concurrency.lockutils [req-b735deff-5c4c-4971-9c38-c6e218e384a1 req-f4524eec-6afb-46ba-b28f-035b4beb8604 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:41 compute-0 nova_compute[257802]: 2025-10-02 12:13:41.574 2 DEBUG nova.compute.manager [req-b735deff-5c4c-4971-9c38-c6e218e384a1 req-f4524eec-6afb-46ba-b28f-035b4beb8604 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] No waiting events found dispatching network-vif-plugged-9dcf053a-b635-450a-9c27-8dfb456f8bdd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:13:41 compute-0 nova_compute[257802]: 2025-10-02 12:13:41.574 2 WARNING nova.compute.manager [req-b735deff-5c4c-4971-9c38-c6e218e384a1 req-f4524eec-6afb-46ba-b28f-035b4beb8604 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Received unexpected event network-vif-plugged-9dcf053a-b635-450a-9c27-8dfb456f8bdd for instance with vm_state active and task_state None.
Oct 02 12:13:41 compute-0 nova_compute[257802]: 2025-10-02 12:13:41.784 2 DEBUG oslo_concurrency.lockutils [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Acquiring lock "20f3c85c-395d-4603-b03a-6625d537159d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:41 compute-0 nova_compute[257802]: 2025-10-02 12:13:41.785 2 DEBUG oslo_concurrency.lockutils [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:41 compute-0 nova_compute[257802]: 2025-10-02 12:13:41.786 2 DEBUG oslo_concurrency.lockutils [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Acquiring lock "20f3c85c-395d-4603-b03a-6625d537159d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:41 compute-0 nova_compute[257802]: 2025-10-02 12:13:41.786 2 DEBUG oslo_concurrency.lockutils [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:41 compute-0 nova_compute[257802]: 2025-10-02 12:13:41.787 2 DEBUG oslo_concurrency.lockutils [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:41 compute-0 nova_compute[257802]: 2025-10-02 12:13:41.789 2 INFO nova.compute.manager [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Terminating instance
Oct 02 12:13:41 compute-0 nova_compute[257802]: 2025-10-02 12:13:41.790 2 DEBUG nova.compute.manager [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:13:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:41.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:42 compute-0 kernel: tapb45fb4b5-77 (unregistering): left promiscuous mode
Oct 02 12:13:42 compute-0 NetworkManager[44987]: <info>  [1759407222.0301] device (tapb45fb4b5-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:13:42 compute-0 ovn_controller[148183]: 2025-10-02T12:13:42Z|00224|binding|INFO|Releasing lport b45fb4b5-77cd-42f8-91b9-473e20188e70 from this chassis (sb_readonly=0)
Oct 02 12:13:42 compute-0 nova_compute[257802]: 2025-10-02 12:13:42.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:42 compute-0 kernel: tap9dcf053a-b6 (unregistering): left promiscuous mode
Oct 02 12:13:42 compute-0 ovn_controller[148183]: 2025-10-02T12:13:42Z|00225|binding|INFO|Setting lport b45fb4b5-77cd-42f8-91b9-473e20188e70 down in Southbound
Oct 02 12:13:42 compute-0 ovn_controller[148183]: 2025-10-02T12:13:42Z|00226|binding|INFO|Removing iface tapb45fb4b5-77 ovn-installed in OVS
Oct 02 12:13:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:42.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:42 compute-0 NetworkManager[44987]: <info>  [1759407222.1139] device (tap9dcf053a-b6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:13:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:42.114 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:cb:79 10.100.0.13'], port_security=['fa:16:3e:46:cb:79 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '20f3c85c-395d-4603-b03a-6625d537159d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e557191-367e-4f3a-8238-d1a8a649138a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f7fda5b2b6844ad82818a60ed39c0b0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '94a43b3a-a68b-43ea-b370-f2677c8855ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8b92627-1959-4b74-9379-e1c3dcad0904, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=b45fb4b5-77cd-42f8-91b9-473e20188e70) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:13:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:42.115 158261 INFO neutron.agent.ovn.metadata.agent [-] Port b45fb4b5-77cd-42f8-91b9-473e20188e70 in datapath 5e557191-367e-4f3a-8238-d1a8a649138a unbound from our chassis
Oct 02 12:13:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:42.116 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e557191-367e-4f3a-8238-d1a8a649138a
Oct 02 12:13:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:42.135 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[13797aa7-f3db-4aa9-a08f-af6bf565e887]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:42 compute-0 ovn_controller[148183]: 2025-10-02T12:13:42Z|00227|binding|INFO|Releasing lport 9dcf053a-b635-450a-9c27-8dfb456f8bdd from this chassis (sb_readonly=0)
Oct 02 12:13:42 compute-0 ovn_controller[148183]: 2025-10-02T12:13:42Z|00228|binding|INFO|Setting lport 9dcf053a-b635-450a-9c27-8dfb456f8bdd down in Southbound
Oct 02 12:13:42 compute-0 nova_compute[257802]: 2025-10-02 12:13:42.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:42 compute-0 ovn_controller[148183]: 2025-10-02T12:13:42Z|00229|binding|INFO|Removing iface tap9dcf053a-b6 ovn-installed in OVS
Oct 02 12:13:42 compute-0 nova_compute[257802]: 2025-10-02 12:13:42.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:42.164 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:74:4f 10.100.0.12'], port_security=['fa:16:3e:9a:74:4f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '20f3c85c-395d-4603-b03a-6625d537159d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e557191-367e-4f3a-8238-d1a8a649138a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f7fda5b2b6844ad82818a60ed39c0b0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '94a43b3a-a68b-43ea-b370-f2677c8855ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8b92627-1959-4b74-9379-e1c3dcad0904, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=9dcf053a-b635-450a-9c27-8dfb456f8bdd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:13:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:42.175 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[9fd14415-7e46-47af-aaef-26ca0f00cae9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:42 compute-0 nova_compute[257802]: 2025-10-02 12:13:42.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:42.179 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[f5592643-3c11-42ea-bbc2-48ec19fc8e65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:42 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Oct 02 12:13:42 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000003a.scope: Consumed 13.105s CPU time.
Oct 02 12:13:42 compute-0 systemd-machined[211836]: Machine qemu-28-instance-0000003a terminated.
Oct 02 12:13:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:42.217 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[65ed0e20-a1f9-4d8a-ad2c-8440408b24d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:42.236 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7199edb4-1c3f-48a2-aac1-919f3ffbf98c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e557191-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:42:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527436, 'reachable_time': 22710, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295910, 'error': None, 'target': 'ovnmeta-5e557191-367e-4f3a-8238-d1a8a649138a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:42.259 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[00363d9e-17dd-44f0-b051-59daff9a3b70]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e557191-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527446, 'tstamp': 527446}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295911, 'error': None, 'target': 'ovnmeta-5e557191-367e-4f3a-8238-d1a8a649138a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e557191-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527448, 'tstamp': 527448}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295911, 'error': None, 'target': 'ovnmeta-5e557191-367e-4f3a-8238-d1a8a649138a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:42.261 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e557191-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:42 compute-0 nova_compute[257802]: 2025-10-02 12:13:42.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:42 compute-0 nova_compute[257802]: 2025-10-02 12:13:42.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:42.272 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e557191-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:42.272 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:13:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:42.273 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e557191-30, col_values=(('external_ids', {'iface-id': 'dfd60aa0-b29c-4076-9b8b-209670b07c07'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:42.273 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:13:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:42.274 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 9dcf053a-b635-450a-9c27-8dfb456f8bdd in datapath 5e557191-367e-4f3a-8238-d1a8a649138a unbound from our chassis
Oct 02 12:13:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:42.276 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5e557191-367e-4f3a-8238-d1a8a649138a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:13:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:42.277 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1f84b9cd-eed7-4950-80b5-6bd36538d372]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:42.277 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5e557191-367e-4f3a-8238-d1a8a649138a namespace which is not needed anymore
Oct 02 12:13:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:13:42
Oct 02 12:13:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:13:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:13:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', '.rgw.root', 'volumes', 'backups', 'images']
Oct 02 12:13:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:13:42 compute-0 sshd-session[295918]: Invalid user partimag from 167.99.55.34 port 52644
Oct 02 12:13:42 compute-0 systemd-udevd[295896]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:13:42 compute-0 kernel: tapb45fb4b5-77: entered promiscuous mode
Oct 02 12:13:42 compute-0 NetworkManager[44987]: <info>  [1759407222.4107] manager: (tapb45fb4b5-77): new Tun device (/org/freedesktop/NetworkManager/Devices/105)
Oct 02 12:13:42 compute-0 kernel: tapb45fb4b5-77 (unregistering): left promiscuous mode
Oct 02 12:13:42 compute-0 ovn_controller[148183]: 2025-10-02T12:13:42Z|00230|binding|INFO|Claiming lport b45fb4b5-77cd-42f8-91b9-473e20188e70 for this chassis.
Oct 02 12:13:42 compute-0 nova_compute[257802]: 2025-10-02 12:13:42.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:42 compute-0 ovn_controller[148183]: 2025-10-02T12:13:42Z|00231|binding|INFO|b45fb4b5-77cd-42f8-91b9-473e20188e70: Claiming fa:16:3e:46:cb:79 10.100.0.13
Oct 02 12:13:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:42.434 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:cb:79 10.100.0.13'], port_security=['fa:16:3e:46:cb:79 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '20f3c85c-395d-4603-b03a-6625d537159d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e557191-367e-4f3a-8238-d1a8a649138a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f7fda5b2b6844ad82818a60ed39c0b0', 'neutron:revision_number': '5', 'neutron:security_group_ids': '94a43b3a-a68b-43ea-b370-f2677c8855ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8b92627-1959-4b74-9379-e1c3dcad0904, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=b45fb4b5-77cd-42f8-91b9-473e20188e70) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:13:42 compute-0 sshd-session[295918]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 12:13:42 compute-0 sshd-session[295918]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=167.99.55.34
Oct 02 12:13:42 compute-0 ovn_controller[148183]: 2025-10-02T12:13:42Z|00232|binding|INFO|Setting lport b45fb4b5-77cd-42f8-91b9-473e20188e70 ovn-installed in OVS
Oct 02 12:13:42 compute-0 ovn_controller[148183]: 2025-10-02T12:13:42Z|00233|binding|INFO|Setting lport b45fb4b5-77cd-42f8-91b9-473e20188e70 up in Southbound
Oct 02 12:13:42 compute-0 ovn_controller[148183]: 2025-10-02T12:13:42Z|00234|binding|INFO|Releasing lport b45fb4b5-77cd-42f8-91b9-473e20188e70 from this chassis (sb_readonly=1)
Oct 02 12:13:42 compute-0 ovn_controller[148183]: 2025-10-02T12:13:42Z|00235|if_status|INFO|Not setting lport b45fb4b5-77cd-42f8-91b9-473e20188e70 down as sb is readonly
Oct 02 12:13:42 compute-0 ovn_controller[148183]: 2025-10-02T12:13:42Z|00236|binding|INFO|Removing iface tapb45fb4b5-77 ovn-installed in OVS
Oct 02 12:13:42 compute-0 nova_compute[257802]: 2025-10-02 12:13:42.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:42 compute-0 ovn_controller[148183]: 2025-10-02T12:13:42Z|00237|binding|INFO|Releasing lport b45fb4b5-77cd-42f8-91b9-473e20188e70 from this chassis (sb_readonly=0)
Oct 02 12:13:42 compute-0 ovn_controller[148183]: 2025-10-02T12:13:42Z|00238|binding|INFO|Setting lport b45fb4b5-77cd-42f8-91b9-473e20188e70 down in Southbound
Oct 02 12:13:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:42.464 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:cb:79 10.100.0.13'], port_security=['fa:16:3e:46:cb:79 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '20f3c85c-395d-4603-b03a-6625d537159d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e557191-367e-4f3a-8238-d1a8a649138a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f7fda5b2b6844ad82818a60ed39c0b0', 'neutron:revision_number': '5', 'neutron:security_group_ids': '94a43b3a-a68b-43ea-b370-f2677c8855ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8b92627-1959-4b74-9379-e1c3dcad0904, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=b45fb4b5-77cd-42f8-91b9-473e20188e70) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:13:42 compute-0 nova_compute[257802]: 2025-10-02 12:13:42.466 2 INFO nova.virt.libvirt.driver [-] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Instance destroyed successfully.
Oct 02 12:13:42 compute-0 nova_compute[257802]: 2025-10-02 12:13:42.466 2 DEBUG nova.objects.instance [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Lazy-loading 'resources' on Instance uuid 20f3c85c-395d-4603-b03a-6625d537159d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:13:42 compute-0 nova_compute[257802]: 2025-10-02 12:13:42.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:42 compute-0 nova_compute[257802]: 2025-10-02 12:13:42.481 2 DEBUG nova.virt.libvirt.vif [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:13:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1269708573',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1269708573',id=58,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:13:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7f7fda5b2b6844ad82818a60ed39c0b0',ramdisk_id='',reservation_id='r-0wtd0h88',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-1440782177',owner_user_name='tempest-AttachInterfacesV270Test-1440782177-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:13:28Z,user_data=None,user_id='6529c3301c674317b5af53daaa0ee15a',uuid=20f3c85c-395d-4603-b03a-6625d537159d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b45fb4b5-77cd-42f8-91b9-473e20188e70", "address": "fa:16:3e:46:cb:79", "network": {"id": "5e557191-367e-4f3a-8238-d1a8a649138a", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1604492819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f7fda5b2b6844ad82818a60ed39c0b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb45fb4b5-77", "ovs_interfaceid": "b45fb4b5-77cd-42f8-91b9-473e20188e70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:13:42 compute-0 nova_compute[257802]: 2025-10-02 12:13:42.481 2 DEBUG nova.network.os_vif_util [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Converting VIF {"id": "b45fb4b5-77cd-42f8-91b9-473e20188e70", "address": "fa:16:3e:46:cb:79", "network": {"id": "5e557191-367e-4f3a-8238-d1a8a649138a", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1604492819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f7fda5b2b6844ad82818a60ed39c0b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb45fb4b5-77", "ovs_interfaceid": "b45fb4b5-77cd-42f8-91b9-473e20188e70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:13:42 compute-0 nova_compute[257802]: 2025-10-02 12:13:42.482 2 DEBUG nova.network.os_vif_util [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:46:cb:79,bridge_name='br-int',has_traffic_filtering=True,id=b45fb4b5-77cd-42f8-91b9-473e20188e70,network=Network(5e557191-367e-4f3a-8238-d1a8a649138a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb45fb4b5-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:13:42 compute-0 nova_compute[257802]: 2025-10-02 12:13:42.482 2 DEBUG os_vif [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:cb:79,bridge_name='br-int',has_traffic_filtering=True,id=b45fb4b5-77cd-42f8-91b9-473e20188e70,network=Network(5e557191-367e-4f3a-8238-d1a8a649138a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb45fb4b5-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:13:42 compute-0 nova_compute[257802]: 2025-10-02 12:13:42.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:42 compute-0 nova_compute[257802]: 2025-10-02 12:13:42.483 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb45fb4b5-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:42 compute-0 nova_compute[257802]: 2025-10-02 12:13:42.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:42 compute-0 nova_compute[257802]: 2025-10-02 12:13:42.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:13:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1195908143' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:42 compute-0 nova_compute[257802]: 2025-10-02 12:13:42.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:42 compute-0 nova_compute[257802]: 2025-10-02 12:13:42.490 2 INFO os_vif [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:cb:79,bridge_name='br-int',has_traffic_filtering=True,id=b45fb4b5-77cd-42f8-91b9-473e20188e70,network=Network(5e557191-367e-4f3a-8238-d1a8a649138a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb45fb4b5-77')
Oct 02 12:13:42 compute-0 nova_compute[257802]: 2025-10-02 12:13:42.491 2 DEBUG nova.virt.libvirt.vif [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:13:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1269708573',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1269708573',id=58,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:13:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7f7fda5b2b6844ad82818a60ed39c0b0',ramdisk_id='',reservation_id='r-0wtd0h88',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_na
me='tempest-AttachInterfacesV270Test-1440782177',owner_user_name='tempest-AttachInterfacesV270Test-1440782177-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:13:28Z,user_data=None,user_id='6529c3301c674317b5af53daaa0ee15a',uuid=20f3c85c-395d-4603-b03a-6625d537159d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9dcf053a-b635-450a-9c27-8dfb456f8bdd", "address": "fa:16:3e:9a:74:4f", "network": {"id": "5e557191-367e-4f3a-8238-d1a8a649138a", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1604492819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f7fda5b2b6844ad82818a60ed39c0b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dcf053a-b6", "ovs_interfaceid": "9dcf053a-b635-450a-9c27-8dfb456f8bdd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:13:42 compute-0 nova_compute[257802]: 2025-10-02 12:13:42.491 2 DEBUG nova.network.os_vif_util [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Converting VIF {"id": "9dcf053a-b635-450a-9c27-8dfb456f8bdd", "address": "fa:16:3e:9a:74:4f", "network": {"id": "5e557191-367e-4f3a-8238-d1a8a649138a", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1604492819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f7fda5b2b6844ad82818a60ed39c0b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dcf053a-b6", "ovs_interfaceid": "9dcf053a-b635-450a-9c27-8dfb456f8bdd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:13:42 compute-0 nova_compute[257802]: 2025-10-02 12:13:42.492 2 DEBUG nova.network.os_vif_util [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:74:4f,bridge_name='br-int',has_traffic_filtering=True,id=9dcf053a-b635-450a-9c27-8dfb456f8bdd,network=Network(5e557191-367e-4f3a-8238-d1a8a649138a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dcf053a-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:13:42 compute-0 nova_compute[257802]: 2025-10-02 12:13:42.492 2 DEBUG os_vif [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:74:4f,bridge_name='br-int',has_traffic_filtering=True,id=9dcf053a-b635-450a-9c27-8dfb456f8bdd,network=Network(5e557191-367e-4f3a-8238-d1a8a649138a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dcf053a-b6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:13:42 compute-0 nova_compute[257802]: 2025-10-02 12:13:42.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:42 compute-0 nova_compute[257802]: 2025-10-02 12:13:42.493 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9dcf053a-b6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:42 compute-0 nova_compute[257802]: 2025-10-02 12:13:42.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:42 compute-0 nova_compute[257802]: 2025-10-02 12:13:42.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:42 compute-0 nova_compute[257802]: 2025-10-02 12:13:42.496 2 INFO os_vif [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:74:4f,bridge_name='br-int',has_traffic_filtering=True,id=9dcf053a-b635-450a-9c27-8dfb456f8bdd,network=Network(5e557191-367e-4f3a-8238-d1a8a649138a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dcf053a-b6')
Oct 02 12:13:42 compute-0 neutron-haproxy-ovnmeta-5e557191-367e-4f3a-8238-d1a8a649138a[295059]: [NOTICE]   (295080) : haproxy version is 2.8.14-c23fe91
Oct 02 12:13:42 compute-0 neutron-haproxy-ovnmeta-5e557191-367e-4f3a-8238-d1a8a649138a[295059]: [NOTICE]   (295080) : path to executable is /usr/sbin/haproxy
Oct 02 12:13:42 compute-0 neutron-haproxy-ovnmeta-5e557191-367e-4f3a-8238-d1a8a649138a[295059]: [WARNING]  (295080) : Exiting Master process...
Oct 02 12:13:42 compute-0 neutron-haproxy-ovnmeta-5e557191-367e-4f3a-8238-d1a8a649138a[295059]: [WARNING]  (295080) : Exiting Master process...
Oct 02 12:13:42 compute-0 neutron-haproxy-ovnmeta-5e557191-367e-4f3a-8238-d1a8a649138a[295059]: [ALERT]    (295080) : Current worker (295083) exited with code 143 (Terminated)
Oct 02 12:13:42 compute-0 neutron-haproxy-ovnmeta-5e557191-367e-4f3a-8238-d1a8a649138a[295059]: [WARNING]  (295080) : All workers exited. Exiting... (0)
Oct 02 12:13:42 compute-0 systemd[1]: libpod-942769ebd24865a2fd83542aa65f891574f14af15d7b413385f77f0996031fac.scope: Deactivated successfully.
Oct 02 12:13:42 compute-0 podman[295933]: 2025-10-02 12:13:42.513851666 +0000 UTC m=+0.106312928 container died 942769ebd24865a2fd83542aa65f891574f14af15d7b413385f77f0996031fac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5e557191-367e-4f3a-8238-d1a8a649138a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 12:13:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:13:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:13:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:13:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:13:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:13:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:13:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-942769ebd24865a2fd83542aa65f891574f14af15d7b413385f77f0996031fac-userdata-shm.mount: Deactivated successfully.
Oct 02 12:13:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-92b07aeab6496e114cc3c1a2c26c5778188d7e5ff54f0ba9b33f0987ef014dd2-merged.mount: Deactivated successfully.
Oct 02 12:13:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:13:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:13:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:13:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:13:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:13:43 compute-0 podman[295933]: 2025-10-02 12:13:43.030575607 +0000 UTC m=+0.623036849 container cleanup 942769ebd24865a2fd83542aa65f891574f14af15d7b413385f77f0996031fac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5e557191-367e-4f3a-8238-d1a8a649138a, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:13:43 compute-0 systemd[1]: libpod-conmon-942769ebd24865a2fd83542aa65f891574f14af15d7b413385f77f0996031fac.scope: Deactivated successfully.
Oct 02 12:13:43 compute-0 nova_compute[257802]: 2025-10-02 12:13:43.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:43 compute-0 nova_compute[257802]: 2025-10-02 12:13:43.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:43 compute-0 nova_compute[257802]: 2025-10-02 12:13:43.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:43 compute-0 nova_compute[257802]: 2025-10-02 12:13:43.097 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:13:43 compute-0 podman[295998]: 2025-10-02 12:13:43.111280973 +0000 UTC m=+0.054150294 container remove 942769ebd24865a2fd83542aa65f891574f14af15d7b413385f77f0996031fac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5e557191-367e-4f3a-8238-d1a8a649138a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:13:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:43.122 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ba5e8548-e7b1-43be-94b9-562c2e4df52f]: (4, ('Thu Oct  2 12:13:42 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5e557191-367e-4f3a-8238-d1a8a649138a (942769ebd24865a2fd83542aa65f891574f14af15d7b413385f77f0996031fac)\n942769ebd24865a2fd83542aa65f891574f14af15d7b413385f77f0996031fac\nThu Oct  2 12:13:43 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5e557191-367e-4f3a-8238-d1a8a649138a (942769ebd24865a2fd83542aa65f891574f14af15d7b413385f77f0996031fac)\n942769ebd24865a2fd83542aa65f891574f14af15d7b413385f77f0996031fac\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:43.125 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[18003a3f-c7cd-4093-b281-e8dc7e22b8ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:43.126 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e557191-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:43 compute-0 nova_compute[257802]: 2025-10-02 12:13:43.128 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:13:43 compute-0 nova_compute[257802]: 2025-10-02 12:13:43.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:43 compute-0 kernel: tap5e557191-30: left promiscuous mode
Oct 02 12:13:43 compute-0 nova_compute[257802]: 2025-10-02 12:13:43.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:43.137 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[741fa986-f388-48c5-8d9e-57c1a38cac3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:43 compute-0 nova_compute[257802]: 2025-10-02 12:13:43.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:43.168 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[169f2138-b170-4c8b-a918-def5c477c606]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:43.170 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c724b884-37cb-4e4e-9854-8206d071c011]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 164 MiB data, 592 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 169 op/s
Oct 02 12:13:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:43.199 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8987b6e4-04e8-45d3-ac48-6a9e1fe29e9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527430, 'reachable_time': 39089, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296015, 'error': None, 'target': 'ovnmeta-5e557191-367e-4f3a-8238-d1a8a649138a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d5e557191\x2d367e\x2d4f3a\x2d8238\x2dd1a8a649138a.mount: Deactivated successfully.
Oct 02 12:13:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:43.207 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5e557191-367e-4f3a-8238-d1a8a649138a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:13:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:43.208 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[02101522-71af-458d-9d20-ea4a8a354e06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:43.209 158261 INFO neutron.agent.ovn.metadata.agent [-] Port b45fb4b5-77cd-42f8-91b9-473e20188e70 in datapath 5e557191-367e-4f3a-8238-d1a8a649138a unbound from our chassis
Oct 02 12:13:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:43.211 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5e557191-367e-4f3a-8238-d1a8a649138a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:13:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:43.212 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[aa74322d-21aa-4369-b5f8-558725c59a95]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:43.213 158261 INFO neutron.agent.ovn.metadata.agent [-] Port b45fb4b5-77cd-42f8-91b9-473e20188e70 in datapath 5e557191-367e-4f3a-8238-d1a8a649138a unbound from our chassis
Oct 02 12:13:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:43.216 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5e557191-367e-4f3a-8238-d1a8a649138a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:13:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:13:43.216 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[96dd75c9-aa9f-4833-9fb0-1636965c91b1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:13:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:13:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:13:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:13:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:13:43 compute-0 podman[296012]: 2025-10-02 12:13:43.256971763 +0000 UTC m=+0.076944825 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base 
Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:13:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Oct 02 12:13:43 compute-0 ceph-mon[73607]: pgmap v1455: 305 pgs: 305 active+clean; 134 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 43 KiB/s wr, 192 op/s
Oct 02 12:13:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/284285891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1124511012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:43 compute-0 nova_compute[257802]: 2025-10-02 12:13:43.661 2 DEBUG nova.compute.manager [req-1e351b31-7bbc-4995-872a-060021ea3d28 req-ff4bf2f3-7834-4904-a93a-49f884cc69bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Received event network-vif-unplugged-9dcf053a-b635-450a-9c27-8dfb456f8bdd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:43 compute-0 nova_compute[257802]: 2025-10-02 12:13:43.661 2 DEBUG oslo_concurrency.lockutils [req-1e351b31-7bbc-4995-872a-060021ea3d28 req-ff4bf2f3-7834-4904-a93a-49f884cc69bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "20f3c85c-395d-4603-b03a-6625d537159d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:43 compute-0 nova_compute[257802]: 2025-10-02 12:13:43.661 2 DEBUG oslo_concurrency.lockutils [req-1e351b31-7bbc-4995-872a-060021ea3d28 req-ff4bf2f3-7834-4904-a93a-49f884cc69bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:43 compute-0 nova_compute[257802]: 2025-10-02 12:13:43.661 2 DEBUG oslo_concurrency.lockutils [req-1e351b31-7bbc-4995-872a-060021ea3d28 req-ff4bf2f3-7834-4904-a93a-49f884cc69bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:43 compute-0 nova_compute[257802]: 2025-10-02 12:13:43.662 2 DEBUG nova.compute.manager [req-1e351b31-7bbc-4995-872a-060021ea3d28 req-ff4bf2f3-7834-4904-a93a-49f884cc69bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] No waiting events found dispatching network-vif-unplugged-9dcf053a-b635-450a-9c27-8dfb456f8bdd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:13:43 compute-0 nova_compute[257802]: 2025-10-02 12:13:43.662 2 DEBUG nova.compute.manager [req-1e351b31-7bbc-4995-872a-060021ea3d28 req-ff4bf2f3-7834-4904-a93a-49f884cc69bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Received event network-vif-unplugged-9dcf053a-b635-450a-9c27-8dfb456f8bdd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:13:43 compute-0 nova_compute[257802]: 2025-10-02 12:13:43.662 2 DEBUG nova.compute.manager [req-1e351b31-7bbc-4995-872a-060021ea3d28 req-ff4bf2f3-7834-4904-a93a-49f884cc69bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Received event network-vif-plugged-9dcf053a-b635-450a-9c27-8dfb456f8bdd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:43 compute-0 nova_compute[257802]: 2025-10-02 12:13:43.662 2 DEBUG oslo_concurrency.lockutils [req-1e351b31-7bbc-4995-872a-060021ea3d28 req-ff4bf2f3-7834-4904-a93a-49f884cc69bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "20f3c85c-395d-4603-b03a-6625d537159d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:43 compute-0 nova_compute[257802]: 2025-10-02 12:13:43.662 2 DEBUG oslo_concurrency.lockutils [req-1e351b31-7bbc-4995-872a-060021ea3d28 req-ff4bf2f3-7834-4904-a93a-49f884cc69bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:43 compute-0 nova_compute[257802]: 2025-10-02 12:13:43.662 2 DEBUG oslo_concurrency.lockutils [req-1e351b31-7bbc-4995-872a-060021ea3d28 req-ff4bf2f3-7834-4904-a93a-49f884cc69bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:43 compute-0 nova_compute[257802]: 2025-10-02 12:13:43.662 2 DEBUG nova.compute.manager [req-1e351b31-7bbc-4995-872a-060021ea3d28 req-ff4bf2f3-7834-4904-a93a-49f884cc69bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] No waiting events found dispatching network-vif-plugged-9dcf053a-b635-450a-9c27-8dfb456f8bdd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:13:43 compute-0 nova_compute[257802]: 2025-10-02 12:13:43.663 2 WARNING nova.compute.manager [req-1e351b31-7bbc-4995-872a-060021ea3d28 req-ff4bf2f3-7834-4904-a93a-49f884cc69bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Received unexpected event network-vif-plugged-9dcf053a-b635-450a-9c27-8dfb456f8bdd for instance with vm_state active and task_state deleting.
Oct 02 12:13:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Oct 02 12:13:43 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Oct 02 12:13:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:13:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:43.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:13:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:13:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:44.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.156 2 DEBUG nova.compute.manager [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Received event network-vif-unplugged-b45fb4b5-77cd-42f8-91b9-473e20188e70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.156 2 DEBUG oslo_concurrency.lockutils [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "20f3c85c-395d-4603-b03a-6625d537159d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.157 2 DEBUG oslo_concurrency.lockutils [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.157 2 DEBUG oslo_concurrency.lockutils [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.157 2 DEBUG nova.compute.manager [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] No waiting events found dispatching network-vif-unplugged-b45fb4b5-77cd-42f8-91b9-473e20188e70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.158 2 DEBUG nova.compute.manager [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Received event network-vif-unplugged-b45fb4b5-77cd-42f8-91b9-473e20188e70 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.158 2 DEBUG nova.compute.manager [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Received event network-vif-plugged-b45fb4b5-77cd-42f8-91b9-473e20188e70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.158 2 DEBUG oslo_concurrency.lockutils [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "20f3c85c-395d-4603-b03a-6625d537159d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.159 2 DEBUG oslo_concurrency.lockutils [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.159 2 DEBUG oslo_concurrency.lockutils [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.160 2 DEBUG nova.compute.manager [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] No waiting events found dispatching network-vif-plugged-b45fb4b5-77cd-42f8-91b9-473e20188e70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.160 2 WARNING nova.compute.manager [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Received unexpected event network-vif-plugged-b45fb4b5-77cd-42f8-91b9-473e20188e70 for instance with vm_state active and task_state deleting.
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.160 2 DEBUG nova.compute.manager [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Received event network-vif-plugged-b45fb4b5-77cd-42f8-91b9-473e20188e70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.161 2 DEBUG oslo_concurrency.lockutils [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "20f3c85c-395d-4603-b03a-6625d537159d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.161 2 DEBUG oslo_concurrency.lockutils [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.161 2 DEBUG oslo_concurrency.lockutils [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.162 2 DEBUG nova.compute.manager [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] No waiting events found dispatching network-vif-plugged-b45fb4b5-77cd-42f8-91b9-473e20188e70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.162 2 WARNING nova.compute.manager [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Received unexpected event network-vif-plugged-b45fb4b5-77cd-42f8-91b9-473e20188e70 for instance with vm_state active and task_state deleting.
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.162 2 DEBUG nova.compute.manager [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Received event network-vif-plugged-b45fb4b5-77cd-42f8-91b9-473e20188e70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.163 2 DEBUG oslo_concurrency.lockutils [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "20f3c85c-395d-4603-b03a-6625d537159d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.163 2 DEBUG oslo_concurrency.lockutils [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.164 2 DEBUG oslo_concurrency.lockutils [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.164 2 DEBUG nova.compute.manager [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] No waiting events found dispatching network-vif-plugged-b45fb4b5-77cd-42f8-91b9-473e20188e70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.164 2 WARNING nova.compute.manager [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Received unexpected event network-vif-plugged-b45fb4b5-77cd-42f8-91b9-473e20188e70 for instance with vm_state active and task_state deleting.
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.165 2 DEBUG nova.compute.manager [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Received event network-vif-unplugged-b45fb4b5-77cd-42f8-91b9-473e20188e70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.165 2 DEBUG oslo_concurrency.lockutils [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "20f3c85c-395d-4603-b03a-6625d537159d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.166 2 DEBUG oslo_concurrency.lockutils [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.166 2 DEBUG oslo_concurrency.lockutils [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.166 2 DEBUG nova.compute.manager [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] No waiting events found dispatching network-vif-unplugged-b45fb4b5-77cd-42f8-91b9-473e20188e70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.167 2 DEBUG nova.compute.manager [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Received event network-vif-unplugged-b45fb4b5-77cd-42f8-91b9-473e20188e70 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.167 2 DEBUG nova.compute.manager [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Received event network-vif-plugged-b45fb4b5-77cd-42f8-91b9-473e20188e70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.168 2 DEBUG oslo_concurrency.lockutils [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "20f3c85c-395d-4603-b03a-6625d537159d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.168 2 DEBUG oslo_concurrency.lockutils [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.169 2 DEBUG oslo_concurrency.lockutils [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.169 2 DEBUG nova.compute.manager [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] No waiting events found dispatching network-vif-plugged-b45fb4b5-77cd-42f8-91b9-473e20188e70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:13:44 compute-0 nova_compute[257802]: 2025-10-02 12:13:44.170 2 WARNING nova.compute.manager [req-f1d2edf6-b950-47f0-b342-b52a10fae173 req-9d7d1574-f02c-4a36-a958-0b89e24c29aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Received unexpected event network-vif-plugged-b45fb4b5-77cd-42f8-91b9-473e20188e70 for instance with vm_state active and task_state deleting.
Oct 02 12:13:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Oct 02 12:13:45 compute-0 sshd-session[295918]: Failed password for invalid user partimag from 167.99.55.34 port 52644 ssh2
Oct 02 12:13:45 compute-0 nova_compute[257802]: 2025-10-02 12:13:45.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:45 compute-0 nova_compute[257802]: 2025-10-02 12:13:45.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:13:45 compute-0 ceph-mon[73607]: pgmap v1456: 305 pgs: 305 active+clean; 164 MiB data, 592 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 169 op/s
Oct 02 12:13:45 compute-0 ceph-mon[73607]: osdmap e204: 3 total, 3 up, 3 in
Oct 02 12:13:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 165 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.3 MiB/s wr, 197 op/s
Oct 02 12:13:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Oct 02 12:13:45 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Oct 02 12:13:45 compute-0 sshd-session[295918]: Connection closed by invalid user partimag 167.99.55.34 port 52644 [preauth]
Oct 02 12:13:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:45.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:13:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:46.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:13:46 compute-0 nova_compute[257802]: 2025-10-02 12:13:46.166 2 INFO nova.virt.libvirt.driver [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Deleting instance files /var/lib/nova/instances/20f3c85c-395d-4603-b03a-6625d537159d_del
Oct 02 12:13:46 compute-0 nova_compute[257802]: 2025-10-02 12:13:46.167 2 INFO nova.virt.libvirt.driver [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Deletion of /var/lib/nova/instances/20f3c85c-395d-4603-b03a-6625d537159d_del complete
Oct 02 12:13:46 compute-0 nova_compute[257802]: 2025-10-02 12:13:46.235 2 INFO nova.compute.manager [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Took 4.44 seconds to destroy the instance on the hypervisor.
Oct 02 12:13:46 compute-0 nova_compute[257802]: 2025-10-02 12:13:46.236 2 DEBUG oslo.service.loopingcall [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:13:46 compute-0 nova_compute[257802]: 2025-10-02 12:13:46.236 2 DEBUG nova.compute.manager [-] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:13:46 compute-0 nova_compute[257802]: 2025-10-02 12:13:46.236 2 DEBUG nova.network.neutron [-] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:13:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/428096603' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:46 compute-0 ceph-mon[73607]: osdmap e205: 3 total, 3 up, 3 in
Oct 02 12:13:46 compute-0 nova_compute[257802]: 2025-10-02 12:13:46.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:47 compute-0 nova_compute[257802]: 2025-10-02 12:13:47.112 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:47 compute-0 nova_compute[257802]: 2025-10-02 12:13:47.134 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:47 compute-0 nova_compute[257802]: 2025-10-02 12:13:47.134 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:13:47 compute-0 nova_compute[257802]: 2025-10-02 12:13:47.134 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:13:47 compute-0 nova_compute[257802]: 2025-10-02 12:13:47.149 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 02 12:13:47 compute-0 nova_compute[257802]: 2025-10-02 12:13:47.149 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:13:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 200 MiB data, 615 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 7.7 MiB/s wr, 177 op/s
Oct 02 12:13:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Oct 02 12:13:47 compute-0 ceph-mon[73607]: pgmap v1458: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 165 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.3 MiB/s wr, 197 op/s
Oct 02 12:13:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2718261622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3643459010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1876584961' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:47 compute-0 nova_compute[257802]: 2025-10-02 12:13:47.479 2 DEBUG nova.compute.manager [req-6c523f4a-3182-4618-aa63-6beb181a36f0 req-ba7c9d98-a813-4ff1-a90a-a63e7a8d76e9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Received event network-vif-deleted-9dcf053a-b635-450a-9c27-8dfb456f8bdd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:47 compute-0 nova_compute[257802]: 2025-10-02 12:13:47.480 2 INFO nova.compute.manager [req-6c523f4a-3182-4618-aa63-6beb181a36f0 req-ba7c9d98-a813-4ff1-a90a-a63e7a8d76e9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Neutron deleted interface 9dcf053a-b635-450a-9c27-8dfb456f8bdd; detaching it from the instance and deleting it from the info cache
Oct 02 12:13:47 compute-0 nova_compute[257802]: 2025-10-02 12:13:47.480 2 DEBUG nova.network.neutron [req-6c523f4a-3182-4618-aa63-6beb181a36f0 req-ba7c9d98-a813-4ff1-a90a-a63e7a8d76e9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Updating instance_info_cache with network_info: [{"id": "b45fb4b5-77cd-42f8-91b9-473e20188e70", "address": "fa:16:3e:46:cb:79", "network": {"id": "5e557191-367e-4f3a-8238-d1a8a649138a", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1604492819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f7fda5b2b6844ad82818a60ed39c0b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb45fb4b5-77", "ovs_interfaceid": "b45fb4b5-77cd-42f8-91b9-473e20188e70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:13:47 compute-0 nova_compute[257802]: 2025-10-02 12:13:47.508 2 DEBUG nova.compute.manager [req-6c523f4a-3182-4618-aa63-6beb181a36f0 req-ba7c9d98-a813-4ff1-a90a-a63e7a8d76e9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Detach interface failed, port_id=9dcf053a-b635-450a-9c27-8dfb456f8bdd, reason: Instance 20f3c85c-395d-4603-b03a-6625d537159d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:13:47 compute-0 nova_compute[257802]: 2025-10-02 12:13:47.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Oct 02 12:13:47 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Oct 02 12:13:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:13:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:47.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:13:48 compute-0 nova_compute[257802]: 2025-10-02 12:13:48.095 2 DEBUG nova.network.neutron [-] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:13:48 compute-0 nova_compute[257802]: 2025-10-02 12:13:48.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:48.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:48 compute-0 nova_compute[257802]: 2025-10-02 12:13:48.133 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:48 compute-0 nova_compute[257802]: 2025-10-02 12:13:48.134 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:48 compute-0 nova_compute[257802]: 2025-10-02 12:13:48.134 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:48 compute-0 nova_compute[257802]: 2025-10-02 12:13:48.134 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:13:48 compute-0 nova_compute[257802]: 2025-10-02 12:13:48.135 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:48 compute-0 nova_compute[257802]: 2025-10-02 12:13:48.159 2 INFO nova.compute.manager [-] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Took 1.92 seconds to deallocate network for instance.
Oct 02 12:13:48 compute-0 nova_compute[257802]: 2025-10-02 12:13:48.204 2 DEBUG oslo_concurrency.lockutils [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:48 compute-0 nova_compute[257802]: 2025-10-02 12:13:48.205 2 DEBUG oslo_concurrency.lockutils [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:48 compute-0 nova_compute[257802]: 2025-10-02 12:13:48.304 2 DEBUG nova.scheduler.client.report [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Refreshing inventories for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:13:48 compute-0 nova_compute[257802]: 2025-10-02 12:13:48.457 2 DEBUG nova.scheduler.client.report [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Updating ProviderTree inventory for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:13:48 compute-0 nova_compute[257802]: 2025-10-02 12:13:48.458 2 DEBUG nova.compute.provider_tree [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Updating inventory in ProviderTree for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:13:48 compute-0 nova_compute[257802]: 2025-10-02 12:13:48.576 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:48 compute-0 nova_compute[257802]: 2025-10-02 12:13:48.588 2 DEBUG nova.scheduler.client.report [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Refreshing aggregate associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:13:48 compute-0 nova_compute[257802]: 2025-10-02 12:13:48.612 2 DEBUG nova.scheduler.client.report [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Refreshing trait associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:13:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:48 compute-0 nova_compute[257802]: 2025-10-02 12:13:48.646 2 DEBUG oslo_concurrency.processutils [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:48 compute-0 podman[296061]: 2025-10-02 12:13:48.698708976 +0000 UTC m=+0.073998391 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 12:13:48 compute-0 podman[296060]: 2025-10-02 12:13:48.719804785 +0000 UTC m=+0.097998471 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:13:48 compute-0 nova_compute[257802]: 2025-10-02 12:13:48.852 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:13:48 compute-0 nova_compute[257802]: 2025-10-02 12:13:48.854 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4616MB free_disk=20.927169799804688GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:13:48 compute-0 nova_compute[257802]: 2025-10-02 12:13:48.854 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:48 compute-0 ceph-mon[73607]: pgmap v1460: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 200 MiB data, 615 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 7.7 MiB/s wr, 177 op/s
Oct 02 12:13:48 compute-0 ceph-mon[73607]: osdmap e206: 3 total, 3 up, 3 in
Oct 02 12:13:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/912660896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:13:49 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/87011448' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:49 compute-0 nova_compute[257802]: 2025-10-02 12:13:49.136 2 DEBUG oslo_concurrency.processutils [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:49 compute-0 nova_compute[257802]: 2025-10-02 12:13:49.143 2 DEBUG nova.compute.provider_tree [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:13:49 compute-0 nova_compute[257802]: 2025-10-02 12:13:49.157 2 DEBUG nova.scheduler.client.report [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:13:49 compute-0 nova_compute[257802]: 2025-10-02 12:13:49.183 2 DEBUG oslo_concurrency.lockutils [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.978s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:49 compute-0 nova_compute[257802]: 2025-10-02 12:13:49.185 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.331s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 206 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 12 MiB/s wr, 299 op/s
Oct 02 12:13:49 compute-0 nova_compute[257802]: 2025-10-02 12:13:49.232 2 INFO nova.scheduler.client.report [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Deleted allocations for instance 20f3c85c-395d-4603-b03a-6625d537159d
Oct 02 12:13:49 compute-0 nova_compute[257802]: 2025-10-02 12:13:49.258 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:13:49 compute-0 nova_compute[257802]: 2025-10-02 12:13:49.259 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:13:49 compute-0 nova_compute[257802]: 2025-10-02 12:13:49.274 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:49 compute-0 nova_compute[257802]: 2025-10-02 12:13:49.317 2 DEBUG oslo_concurrency.lockutils [None req-9459a49c-31a4-421f-9e35-6b18ac39724a 6529c3301c674317b5af53daaa0ee15a 7f7fda5b2b6844ad82818a60ed39c0b0 - - default default] Lock "20f3c85c-395d-4603-b03a-6625d537159d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:49 compute-0 nova_compute[257802]: 2025-10-02 12:13:49.592 2 DEBUG nova.compute.manager [req-2bd921a7-88f8-4ca8-a7d0-328ecc00d981 req-8ed6158b-1f5d-4b31-b702-c155e515c95a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Received event network-vif-deleted-b45fb4b5-77cd-42f8-91b9-473e20188e70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:13:49 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2766426989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:49 compute-0 nova_compute[257802]: 2025-10-02 12:13:49.730 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:49 compute-0 nova_compute[257802]: 2025-10-02 12:13:49.738 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:13:49 compute-0 nova_compute[257802]: 2025-10-02 12:13:49.758 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:13:49 compute-0 nova_compute[257802]: 2025-10-02 12:13:49.778 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:13:49 compute-0 nova_compute[257802]: 2025-10-02 12:13:49.779 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:49.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/87011448' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2766426989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:50.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 206 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 10 MiB/s wr, 259 op/s
Oct 02 12:13:51 compute-0 ceph-mon[73607]: pgmap v1462: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 206 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 12 MiB/s wr, 299 op/s
Oct 02 12:13:51 compute-0 nova_compute[257802]: 2025-10-02 12:13:51.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:51.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:52.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:52 compute-0 nova_compute[257802]: 2025-10-02 12:13:52.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:52 compute-0 ceph-mon[73607]: pgmap v1463: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 206 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 10 MiB/s wr, 259 op/s
Oct 02 12:13:52 compute-0 nova_compute[257802]: 2025-10-02 12:13:52.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:53 compute-0 podman[296147]: 2025-10-02 12:13:53.027764955 +0000 UTC m=+0.139008019 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:13:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 184 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 7.1 MiB/s wr, 181 op/s
Oct 02 12:13:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Oct 02 12:13:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Oct 02 12:13:53 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Oct 02 12:13:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.002000048s ======
Oct 02 12:13:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:53.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Oct 02 12:13:54 compute-0 nova_compute[257802]: 2025-10-02 12:13:54.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:54 compute-0 nova_compute[257802]: 2025-10-02 12:13:54.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:54.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003126344977200819 of space, bias 1.0, pg target 0.9379034931602457 quantized to 32 (current 32)
Oct 02 12:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 12:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002532373964731063 of space, bias 1.0, pg target 0.7597121894193188 quantized to 32 (current 32)
Oct 02 12:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 12:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 12:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 12:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 12:13:54 compute-0 ceph-mon[73607]: pgmap v1464: 305 pgs: 305 active+clean; 184 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 7.1 MiB/s wr, 181 op/s
Oct 02 12:13:54 compute-0 ceph-mon[73607]: osdmap e207: 3 total, 3 up, 3 in
Oct 02 12:13:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 180 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 787 KiB/s rd, 3.9 MiB/s wr, 188 op/s
Oct 02 12:13:55 compute-0 sudo[296174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:55 compute-0 sudo[296174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:55 compute-0 sudo[296174]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:55 compute-0 sudo[296199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:55 compute-0 sudo[296199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:55 compute-0 sudo[296199]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:55.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2933500776' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:13:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2933500776' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:13:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:13:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:56.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:13:56 compute-0 nova_compute[257802]: 2025-10-02 12:13:56.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:57 compute-0 ceph-mon[73607]: pgmap v1466: 305 pgs: 305 active+clean; 180 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 787 KiB/s rd, 3.9 MiB/s wr, 188 op/s
Oct 02 12:13:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 167 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.4 MiB/s wr, 186 op/s
Oct 02 12:13:57 compute-0 nova_compute[257802]: 2025-10-02 12:13:57.463 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407222.4615052, 20f3c85c-395d-4603-b03a-6625d537159d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:13:57 compute-0 nova_compute[257802]: 2025-10-02 12:13:57.463 2 INFO nova.compute.manager [-] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] VM Stopped (Lifecycle Event)
Oct 02 12:13:57 compute-0 nova_compute[257802]: 2025-10-02 12:13:57.514 2 DEBUG nova.compute.manager [None req-e736e896-3063-4a38-8fc2-f7232f9a3625 - - - - - -] [instance: 20f3c85c-395d-4603-b03a-6625d537159d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:57 compute-0 nova_compute[257802]: 2025-10-02 12:13:57.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:13:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:57.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:13:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:58.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:58 compute-0 ceph-mon[73607]: pgmap v1467: 305 pgs: 305 active+clean; 167 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.4 MiB/s wr, 186 op/s
Oct 02 12:13:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Oct 02 12:13:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Oct 02 12:13:58 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Oct 02 12:13:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 110 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 157 KiB/s wr, 184 op/s
Oct 02 12:13:59 compute-0 ceph-mon[73607]: osdmap e208: 3 total, 3 up, 3 in
Oct 02 12:13:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:13:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:59.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:00.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:00 compute-0 ceph-mon[73607]: pgmap v1469: 305 pgs: 305 active+clean; 110 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 157 KiB/s wr, 184 op/s
Oct 02 12:14:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3227020014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 88 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 125 KiB/s wr, 172 op/s
Oct 02 12:14:01 compute-0 nova_compute[257802]: 2025-10-02 12:14:01.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:14:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:01.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:14:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:02.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:02 compute-0 nova_compute[257802]: 2025-10-02 12:14:02.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:02 compute-0 ceph-mon[73607]: pgmap v1470: 305 pgs: 305 active+clean; 88 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 125 KiB/s wr, 172 op/s
Oct 02 12:14:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 88 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 105 KiB/s wr, 149 op/s
Oct 02 12:14:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:14:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:03.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:14:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:04.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:04 compute-0 ceph-mon[73607]: pgmap v1471: 305 pgs: 305 active+clean; 88 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 105 KiB/s wr, 149 op/s
Oct 02 12:14:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 88 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.8 KiB/s wr, 107 op/s
Oct 02 12:14:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:14:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:05.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:14:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:06.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:06 compute-0 nova_compute[257802]: 2025-10-02 12:14:06.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:06 compute-0 ceph-mon[73607]: pgmap v1472: 305 pgs: 305 active+clean; 88 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.8 KiB/s wr, 107 op/s
Oct 02 12:14:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 89 MiB data, 561 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 502 KiB/s wr, 98 op/s
Oct 02 12:14:07 compute-0 nova_compute[257802]: 2025-10-02 12:14:07.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:14:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:07.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:14:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:14:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:08.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:14:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:08 compute-0 nova_compute[257802]: 2025-10-02 12:14:08.652 2 DEBUG oslo_concurrency.lockutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Acquiring lock "e9189687-089b-4ee0-9659-211d431f7c3c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:08 compute-0 nova_compute[257802]: 2025-10-02 12:14:08.652 2 DEBUG oslo_concurrency.lockutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Lock "e9189687-089b-4ee0-9659-211d431f7c3c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:08 compute-0 nova_compute[257802]: 2025-10-02 12:14:08.674 2 DEBUG nova.compute.manager [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:14:08 compute-0 nova_compute[257802]: 2025-10-02 12:14:08.758 2 DEBUG oslo_concurrency.lockutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:08 compute-0 nova_compute[257802]: 2025-10-02 12:14:08.759 2 DEBUG oslo_concurrency.lockutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:08 compute-0 nova_compute[257802]: 2025-10-02 12:14:08.765 2 DEBUG nova.virt.hardware [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:14:08 compute-0 nova_compute[257802]: 2025-10-02 12:14:08.766 2 INFO nova.compute.claims [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:14:08 compute-0 nova_compute[257802]: 2025-10-02 12:14:08.863 2 DEBUG oslo_concurrency.processutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:08 compute-0 ceph-mon[73607]: pgmap v1473: 305 pgs: 305 active+clean; 89 MiB data, 561 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 502 KiB/s wr, 98 op/s
Oct 02 12:14:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 120 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 369 KiB/s rd, 2.4 MiB/s wr, 88 op/s
Oct 02 12:14:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:14:09 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3845043967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:09 compute-0 nova_compute[257802]: 2025-10-02 12:14:09.301 2 DEBUG oslo_concurrency.processutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:09 compute-0 nova_compute[257802]: 2025-10-02 12:14:09.308 2 DEBUG nova.compute.provider_tree [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:14:09 compute-0 nova_compute[257802]: 2025-10-02 12:14:09.326 2 DEBUG nova.scheduler.client.report [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:14:09 compute-0 nova_compute[257802]: 2025-10-02 12:14:09.351 2 DEBUG oslo_concurrency.lockutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:09 compute-0 nova_compute[257802]: 2025-10-02 12:14:09.352 2 DEBUG nova.compute.manager [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:14:09 compute-0 nova_compute[257802]: 2025-10-02 12:14:09.395 2 DEBUG nova.compute.manager [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:14:09 compute-0 nova_compute[257802]: 2025-10-02 12:14:09.395 2 DEBUG nova.network.neutron [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:14:09 compute-0 nova_compute[257802]: 2025-10-02 12:14:09.425 2 INFO nova.virt.libvirt.driver [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:14:09 compute-0 nova_compute[257802]: 2025-10-02 12:14:09.451 2 DEBUG nova.compute.manager [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:14:09 compute-0 nova_compute[257802]: 2025-10-02 12:14:09.548 2 DEBUG nova.compute.manager [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:14:09 compute-0 nova_compute[257802]: 2025-10-02 12:14:09.549 2 DEBUG nova.virt.libvirt.driver [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:14:09 compute-0 nova_compute[257802]: 2025-10-02 12:14:09.550 2 INFO nova.virt.libvirt.driver [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Creating image(s)
Oct 02 12:14:09 compute-0 nova_compute[257802]: 2025-10-02 12:14:09.575 2 DEBUG nova.storage.rbd_utils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] rbd image e9189687-089b-4ee0-9659-211d431f7c3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:09 compute-0 nova_compute[257802]: 2025-10-02 12:14:09.603 2 DEBUG nova.storage.rbd_utils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] rbd image e9189687-089b-4ee0-9659-211d431f7c3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:09 compute-0 nova_compute[257802]: 2025-10-02 12:14:09.630 2 DEBUG nova.storage.rbd_utils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] rbd image e9189687-089b-4ee0-9659-211d431f7c3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:09 compute-0 nova_compute[257802]: 2025-10-02 12:14:09.634 2 DEBUG oslo_concurrency.processutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:09 compute-0 nova_compute[257802]: 2025-10-02 12:14:09.724 2 DEBUG oslo_concurrency.processutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:09 compute-0 nova_compute[257802]: 2025-10-02 12:14:09.726 2 DEBUG oslo_concurrency.lockutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:09 compute-0 nova_compute[257802]: 2025-10-02 12:14:09.727 2 DEBUG oslo_concurrency.lockutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:09 compute-0 nova_compute[257802]: 2025-10-02 12:14:09.727 2 DEBUG oslo_concurrency.lockutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:09 compute-0 nova_compute[257802]: 2025-10-02 12:14:09.754 2 DEBUG nova.storage.rbd_utils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] rbd image e9189687-089b-4ee0-9659-211d431f7c3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:09 compute-0 nova_compute[257802]: 2025-10-02 12:14:09.758 2 DEBUG oslo_concurrency.processutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 e9189687-089b-4ee0-9659-211d431f7c3c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:09 compute-0 nova_compute[257802]: 2025-10-02 12:14:09.903 2 DEBUG nova.policy [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1c085e57b7d944c7a372e604d34c96a3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '635effeb91ed4a7fb29066f894e25c5a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:14:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:09.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3845043967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:10.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:10 compute-0 nova_compute[257802]: 2025-10-02 12:14:10.509 2 DEBUG oslo_concurrency.processutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 e9189687-089b-4ee0-9659-211d431f7c3c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.752s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:10 compute-0 nova_compute[257802]: 2025-10-02 12:14:10.599 2 DEBUG nova.storage.rbd_utils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] resizing rbd image e9189687-089b-4ee0-9659-211d431f7c3c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:14:10 compute-0 nova_compute[257802]: 2025-10-02 12:14:10.894 2 DEBUG nova.objects.instance [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Lazy-loading 'migration_context' on Instance uuid e9189687-089b-4ee0-9659-211d431f7c3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:14:10 compute-0 nova_compute[257802]: 2025-10-02 12:14:10.924 2 DEBUG nova.virt.libvirt.driver [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:14:10 compute-0 nova_compute[257802]: 2025-10-02 12:14:10.924 2 DEBUG nova.virt.libvirt.driver [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Ensure instance console log exists: /var/lib/nova/instances/e9189687-089b-4ee0-9659-211d431f7c3c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:14:10 compute-0 nova_compute[257802]: 2025-10-02 12:14:10.925 2 DEBUG oslo_concurrency.lockutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:10 compute-0 nova_compute[257802]: 2025-10-02 12:14:10.926 2 DEBUG oslo_concurrency.lockutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:10 compute-0 nova_compute[257802]: 2025-10-02 12:14:10.926 2 DEBUG oslo_concurrency.lockutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:11 compute-0 ceph-mon[73607]: pgmap v1474: 305 pgs: 305 active+clean; 120 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 369 KiB/s rd, 2.4 MiB/s wr, 88 op/s
Oct 02 12:14:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 140 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 333 KiB/s rd, 3.0 MiB/s wr, 71 op/s
Oct 02 12:14:11 compute-0 nova_compute[257802]: 2025-10-02 12:14:11.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:11 compute-0 nova_compute[257802]: 2025-10-02 12:14:11.840 2 DEBUG nova.network.neutron [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Successfully created port: 238499d2-a655-4052-88a6-ec2f299f80c9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:14:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:14:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:11.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:14:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:14:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:12.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:14:12 compute-0 ceph-mon[73607]: pgmap v1475: 305 pgs: 305 active+clean; 140 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 333 KiB/s rd, 3.0 MiB/s wr, 71 op/s
Oct 02 12:14:12 compute-0 nova_compute[257802]: 2025-10-02 12:14:12.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:12 compute-0 nova_compute[257802]: 2025-10-02 12:14:12.654 2 DEBUG nova.network.neutron [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Successfully updated port: 238499d2-a655-4052-88a6-ec2f299f80c9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:14:12 compute-0 nova_compute[257802]: 2025-10-02 12:14:12.668 2 DEBUG oslo_concurrency.lockutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Acquiring lock "refresh_cache-e9189687-089b-4ee0-9659-211d431f7c3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:14:12 compute-0 nova_compute[257802]: 2025-10-02 12:14:12.668 2 DEBUG oslo_concurrency.lockutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Acquired lock "refresh_cache-e9189687-089b-4ee0-9659-211d431f7c3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:14:12 compute-0 nova_compute[257802]: 2025-10-02 12:14:12.668 2 DEBUG nova.network.neutron [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:14:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:14:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:14:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:14:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:14:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:14:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:14:12 compute-0 nova_compute[257802]: 2025-10-02 12:14:12.784 2 DEBUG nova.compute.manager [req-5cc1aa53-6680-48d2-8dbf-904e3ba03616 req-01a3aa0f-79da-4c22-b3a7-aed492382eb1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Received event network-changed-238499d2-a655-4052-88a6-ec2f299f80c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:12 compute-0 nova_compute[257802]: 2025-10-02 12:14:12.785 2 DEBUG nova.compute.manager [req-5cc1aa53-6680-48d2-8dbf-904e3ba03616 req-01a3aa0f-79da-4c22-b3a7-aed492382eb1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Refreshing instance network info cache due to event network-changed-238499d2-a655-4052-88a6-ec2f299f80c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:14:12 compute-0 nova_compute[257802]: 2025-10-02 12:14:12.785 2 DEBUG oslo_concurrency.lockutils [req-5cc1aa53-6680-48d2-8dbf-904e3ba03616 req-01a3aa0f-79da-4c22-b3a7-aed492382eb1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-e9189687-089b-4ee0-9659-211d431f7c3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:14:12 compute-0 nova_compute[257802]: 2025-10-02 12:14:12.839 2 DEBUG nova.network.neutron [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:14:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 160 MiB data, 590 MiB used, 20 GiB / 21 GiB avail; 339 KiB/s rd, 3.4 MiB/s wr, 83 op/s
Oct 02 12:14:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3971733623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:13 compute-0 nova_compute[257802]: 2025-10-02 12:14:13.697 2 DEBUG nova.network.neutron [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Updating instance_info_cache with network_info: [{"id": "238499d2-a655-4052-88a6-ec2f299f80c9", "address": "fa:16:3e:8c:61:1b", "network": {"id": "65340dfd-fd85-4050-a877-6f149aa218e1", "bridge": "br-int", "label": "tempest-ServersTestJSON-1305142537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "635effeb91ed4a7fb29066f894e25c5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap238499d2-a6", "ovs_interfaceid": "238499d2-a655-4052-88a6-ec2f299f80c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:14:13 compute-0 nova_compute[257802]: 2025-10-02 12:14:13.717 2 DEBUG oslo_concurrency.lockutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Releasing lock "refresh_cache-e9189687-089b-4ee0-9659-211d431f7c3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:14:13 compute-0 nova_compute[257802]: 2025-10-02 12:14:13.718 2 DEBUG nova.compute.manager [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Instance network_info: |[{"id": "238499d2-a655-4052-88a6-ec2f299f80c9", "address": "fa:16:3e:8c:61:1b", "network": {"id": "65340dfd-fd85-4050-a877-6f149aa218e1", "bridge": "br-int", "label": "tempest-ServersTestJSON-1305142537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "635effeb91ed4a7fb29066f894e25c5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap238499d2-a6", "ovs_interfaceid": "238499d2-a655-4052-88a6-ec2f299f80c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:14:13 compute-0 nova_compute[257802]: 2025-10-02 12:14:13.718 2 DEBUG oslo_concurrency.lockutils [req-5cc1aa53-6680-48d2-8dbf-904e3ba03616 req-01a3aa0f-79da-4c22-b3a7-aed492382eb1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-e9189687-089b-4ee0-9659-211d431f7c3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:14:13 compute-0 nova_compute[257802]: 2025-10-02 12:14:13.718 2 DEBUG nova.network.neutron [req-5cc1aa53-6680-48d2-8dbf-904e3ba03616 req-01a3aa0f-79da-4c22-b3a7-aed492382eb1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Refreshing network info cache for port 238499d2-a655-4052-88a6-ec2f299f80c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:14:13 compute-0 nova_compute[257802]: 2025-10-02 12:14:13.720 2 DEBUG nova.virt.libvirt.driver [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Start _get_guest_xml network_info=[{"id": "238499d2-a655-4052-88a6-ec2f299f80c9", "address": "fa:16:3e:8c:61:1b", "network": {"id": "65340dfd-fd85-4050-a877-6f149aa218e1", "bridge": "br-int", "label": "tempest-ServersTestJSON-1305142537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "635effeb91ed4a7fb29066f894e25c5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap238499d2-a6", "ovs_interfaceid": "238499d2-a655-4052-88a6-ec2f299f80c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:14:13 compute-0 nova_compute[257802]: 2025-10-02 12:14:13.725 2 WARNING nova.virt.libvirt.driver [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:14:13 compute-0 nova_compute[257802]: 2025-10-02 12:14:13.729 2 DEBUG nova.virt.libvirt.host [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:14:13 compute-0 nova_compute[257802]: 2025-10-02 12:14:13.730 2 DEBUG nova.virt.libvirt.host [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:14:13 compute-0 nova_compute[257802]: 2025-10-02 12:14:13.734 2 DEBUG nova.virt.libvirt.host [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:14:13 compute-0 nova_compute[257802]: 2025-10-02 12:14:13.734 2 DEBUG nova.virt.libvirt.host [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:14:13 compute-0 nova_compute[257802]: 2025-10-02 12:14:13.735 2 DEBUG nova.virt.libvirt.driver [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:14:13 compute-0 nova_compute[257802]: 2025-10-02 12:14:13.736 2 DEBUG nova.virt.hardware [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:14:13 compute-0 nova_compute[257802]: 2025-10-02 12:14:13.736 2 DEBUG nova.virt.hardware [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:14:13 compute-0 nova_compute[257802]: 2025-10-02 12:14:13.736 2 DEBUG nova.virt.hardware [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:14:13 compute-0 nova_compute[257802]: 2025-10-02 12:14:13.737 2 DEBUG nova.virt.hardware [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:14:13 compute-0 nova_compute[257802]: 2025-10-02 12:14:13.737 2 DEBUG nova.virt.hardware [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:14:13 compute-0 nova_compute[257802]: 2025-10-02 12:14:13.737 2 DEBUG nova.virt.hardware [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:14:13 compute-0 nova_compute[257802]: 2025-10-02 12:14:13.737 2 DEBUG nova.virt.hardware [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:14:13 compute-0 nova_compute[257802]: 2025-10-02 12:14:13.737 2 DEBUG nova.virt.hardware [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:14:13 compute-0 nova_compute[257802]: 2025-10-02 12:14:13.737 2 DEBUG nova.virt.hardware [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:14:13 compute-0 nova_compute[257802]: 2025-10-02 12:14:13.738 2 DEBUG nova.virt.hardware [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:14:13 compute-0 nova_compute[257802]: 2025-10-02 12:14:13.738 2 DEBUG nova.virt.hardware [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:14:13 compute-0 nova_compute[257802]: 2025-10-02 12:14:13.740 2 DEBUG oslo_concurrency.processutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:13 compute-0 podman[296424]: 2025-10-02 12:14:13.919244912 +0000 UTC m=+0.054006439 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 02 12:14:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:14:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:13.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:14:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:14:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/827095765' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:14.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.163 2 DEBUG oslo_concurrency.processutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.191 2 DEBUG nova.storage.rbd_utils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] rbd image e9189687-089b-4ee0-9659-211d431f7c3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.196 2 DEBUG oslo_concurrency.processutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:14 compute-0 ceph-mon[73607]: pgmap v1476: 305 pgs: 305 active+clean; 160 MiB data, 590 MiB used, 20 GiB / 21 GiB avail; 339 KiB/s rd, 3.4 MiB/s wr, 83 op/s
Oct 02 12:14:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/827095765' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:14:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1257980047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.669 2 DEBUG oslo_concurrency.processutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.671 2 DEBUG nova.virt.libvirt.vif [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1324947878',display_name='tempest-ServersTestJSON-server-1324947878',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1324947878',id=61,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKPODLc73vcAGq+JixmgPJdhoD25wYT7PMF03wlPuV2ANGBasaLYR/ZYH1MuTT69HiEl5/O3OIWiNfd9wOXVscHdio0hIEDT6Zh/fsDlQOJGWeXnVHabv9+qTcPA1+2X0Q==',key_name='tempest-keypair-1726160167',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='635effeb91ed4a7fb29066f894e25c5a',ramdisk_id='',reservation_id='r-5m9hoxnd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-230117370',owner_user_name='tempest-ServersTestJSON-230117370-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:14:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1c085e57b7d944c7a372e604d34c96a3',uuid=e9189687-089b-4ee0-9659-211d431f7c3c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "238499d2-a655-4052-88a6-ec2f299f80c9", "address": "fa:16:3e:8c:61:1b", "network": {"id": "65340dfd-fd85-4050-a877-6f149aa218e1", "bridge": "br-int", "label": "tempest-ServersTestJSON-1305142537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "635effeb91ed4a7fb29066f894e25c5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap238499d2-a6", "ovs_interfaceid": "238499d2-a655-4052-88a6-ec2f299f80c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.671 2 DEBUG nova.network.os_vif_util [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Converting VIF {"id": "238499d2-a655-4052-88a6-ec2f299f80c9", "address": "fa:16:3e:8c:61:1b", "network": {"id": "65340dfd-fd85-4050-a877-6f149aa218e1", "bridge": "br-int", "label": "tempest-ServersTestJSON-1305142537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "635effeb91ed4a7fb29066f894e25c5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap238499d2-a6", "ovs_interfaceid": "238499d2-a655-4052-88a6-ec2f299f80c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.672 2 DEBUG nova.network.os_vif_util [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:61:1b,bridge_name='br-int',has_traffic_filtering=True,id=238499d2-a655-4052-88a6-ec2f299f80c9,network=Network(65340dfd-fd85-4050-a877-6f149aa218e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap238499d2-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.673 2 DEBUG nova.objects.instance [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Lazy-loading 'pci_devices' on Instance uuid e9189687-089b-4ee0-9659-211d431f7c3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.696 2 DEBUG nova.virt.libvirt.driver [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:14:14 compute-0 nova_compute[257802]:   <uuid>e9189687-089b-4ee0-9659-211d431f7c3c</uuid>
Oct 02 12:14:14 compute-0 nova_compute[257802]:   <name>instance-0000003d</name>
Oct 02 12:14:14 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:14:14 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:14:14 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:14:14 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:       <nova:name>tempest-ServersTestJSON-server-1324947878</nova:name>
Oct 02 12:14:14 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:14:13</nova:creationTime>
Oct 02 12:14:14 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:14:14 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:14:14 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:14:14 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:14:14 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:14:14 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:14:14 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:14:14 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:14:14 compute-0 nova_compute[257802]:         <nova:user uuid="1c085e57b7d944c7a372e604d34c96a3">tempest-ServersTestJSON-230117370-project-member</nova:user>
Oct 02 12:14:14 compute-0 nova_compute[257802]:         <nova:project uuid="635effeb91ed4a7fb29066f894e25c5a">tempest-ServersTestJSON-230117370</nova:project>
Oct 02 12:14:14 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:14:14 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:14:14 compute-0 nova_compute[257802]:         <nova:port uuid="238499d2-a655-4052-88a6-ec2f299f80c9">
Oct 02 12:14:14 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:14:14 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:14:14 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:14:14 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <system>
Oct 02 12:14:14 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:14:14 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:14:14 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:14:14 compute-0 nova_compute[257802]:       <entry name="serial">e9189687-089b-4ee0-9659-211d431f7c3c</entry>
Oct 02 12:14:14 compute-0 nova_compute[257802]:       <entry name="uuid">e9189687-089b-4ee0-9659-211d431f7c3c</entry>
Oct 02 12:14:14 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     </system>
Oct 02 12:14:14 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:14:14 compute-0 nova_compute[257802]:   <os>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:   </os>
Oct 02 12:14:14 compute-0 nova_compute[257802]:   <features>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:   </features>
Oct 02 12:14:14 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:14:14 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:14:14 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:14:14 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/e9189687-089b-4ee0-9659-211d431f7c3c_disk">
Oct 02 12:14:14 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:       </source>
Oct 02 12:14:14 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:14:14 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:14:14 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:14:14 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/e9189687-089b-4ee0-9659-211d431f7c3c_disk.config">
Oct 02 12:14:14 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:       </source>
Oct 02 12:14:14 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:14:14 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:14:14 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:14:14 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:8c:61:1b"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:       <target dev="tap238499d2-a6"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:14:14 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/e9189687-089b-4ee0-9659-211d431f7c3c/console.log" append="off"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <video>
Oct 02 12:14:14 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     </video>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:14:14 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:14:14 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:14:14 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:14:14 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:14:14 compute-0 nova_compute[257802]: </domain>
Oct 02 12:14:14 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.698 2 DEBUG nova.compute.manager [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Preparing to wait for external event network-vif-plugged-238499d2-a655-4052-88a6-ec2f299f80c9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.698 2 DEBUG oslo_concurrency.lockutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Acquiring lock "e9189687-089b-4ee0-9659-211d431f7c3c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.699 2 DEBUG oslo_concurrency.lockutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Lock "e9189687-089b-4ee0-9659-211d431f7c3c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.700 2 DEBUG oslo_concurrency.lockutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Lock "e9189687-089b-4ee0-9659-211d431f7c3c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.701 2 DEBUG nova.virt.libvirt.vif [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1324947878',display_name='tempest-ServersTestJSON-server-1324947878',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1324947878',id=61,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKPODLc73vcAGq+JixmgPJdhoD25wYT7PMF03wlPuV2ANGBasaLYR/ZYH1MuTT69HiEl5/O3OIWiNfd9wOXVscHdio0hIEDT6Zh/fsDlQOJGWeXnVHabv9+qTcPA1+2X0Q==',key_name='tempest-keypair-1726160167',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='635effeb91ed4a7fb29066f894e25c5a',ramdisk_id='',reservation_id='r-5m9hoxnd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-230117370',owner_user_name='tempest-ServersTestJSON-230117370-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:14:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1c085e57b7d944c7a372e604d34c96a3',uuid=e9189687-089b-4ee0-9659-211d431f7c3c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "238499d2-a655-4052-88a6-ec2f299f80c9", "address": "fa:16:3e:8c:61:1b", "network": {"id": "65340dfd-fd85-4050-a877-6f149aa218e1", "bridge": "br-int", "label": "tempest-ServersTestJSON-1305142537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "635effeb91ed4a7fb29066f894e25c5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap238499d2-a6", "ovs_interfaceid": "238499d2-a655-4052-88a6-ec2f299f80c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.701 2 DEBUG nova.network.os_vif_util [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Converting VIF {"id": "238499d2-a655-4052-88a6-ec2f299f80c9", "address": "fa:16:3e:8c:61:1b", "network": {"id": "65340dfd-fd85-4050-a877-6f149aa218e1", "bridge": "br-int", "label": "tempest-ServersTestJSON-1305142537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "635effeb91ed4a7fb29066f894e25c5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap238499d2-a6", "ovs_interfaceid": "238499d2-a655-4052-88a6-ec2f299f80c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.702 2 DEBUG nova.network.os_vif_util [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:61:1b,bridge_name='br-int',has_traffic_filtering=True,id=238499d2-a655-4052-88a6-ec2f299f80c9,network=Network(65340dfd-fd85-4050-a877-6f149aa218e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap238499d2-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.702 2 DEBUG os_vif [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:61:1b,bridge_name='br-int',has_traffic_filtering=True,id=238499d2-a655-4052-88a6-ec2f299f80c9,network=Network(65340dfd-fd85-4050-a877-6f149aa218e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap238499d2-a6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.704 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.705 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.710 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap238499d2-a6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.711 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap238499d2-a6, col_values=(('external_ids', {'iface-id': '238499d2-a655-4052-88a6-ec2f299f80c9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:61:1b', 'vm-uuid': 'e9189687-089b-4ee0-9659-211d431f7c3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:14 compute-0 NetworkManager[44987]: <info>  [1759407254.7144] manager: (tap238499d2-a6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/106)
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.721 2 INFO os_vif [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:61:1b,bridge_name='br-int',has_traffic_filtering=True,id=238499d2-a655-4052-88a6-ec2f299f80c9,network=Network(65340dfd-fd85-4050-a877-6f149aa218e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap238499d2-a6')
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.778 2 DEBUG nova.virt.libvirt.driver [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.780 2 DEBUG nova.virt.libvirt.driver [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.780 2 DEBUG nova.virt.libvirt.driver [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] No VIF found with MAC fa:16:3e:8c:61:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.780 2 INFO nova.virt.libvirt.driver [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Using config drive
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.803 2 DEBUG nova.storage.rbd_utils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] rbd image e9189687-089b-4ee0-9659-211d431f7c3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:14 compute-0 nova_compute[257802]: 2025-10-02 12:14:14.971 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:14:15 compute-0 nova_compute[257802]: 2025-10-02 12:14:15.000 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Triggering sync for uuid e9189687-089b-4ee0-9659-211d431f7c3c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 12:14:15 compute-0 nova_compute[257802]: 2025-10-02 12:14:15.001 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "e9189687-089b-4ee0-9659-211d431f7c3c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 183 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 4.7 MiB/s wr, 93 op/s
Oct 02 12:14:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1257980047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:15 compute-0 nova_compute[257802]: 2025-10-02 12:14:15.757 2 INFO nova.virt.libvirt.driver [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Creating config drive at /var/lib/nova/instances/e9189687-089b-4ee0-9659-211d431f7c3c/disk.config
Oct 02 12:14:15 compute-0 nova_compute[257802]: 2025-10-02 12:14:15.762 2 DEBUG oslo_concurrency.processutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e9189687-089b-4ee0-9659-211d431f7c3c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjtkhgmzi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:15 compute-0 nova_compute[257802]: 2025-10-02 12:14:15.896 2 DEBUG oslo_concurrency.processutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e9189687-089b-4ee0-9659-211d431f7c3c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjtkhgmzi" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:15 compute-0 sudo[296526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:14:15 compute-0 nova_compute[257802]: 2025-10-02 12:14:15.927 2 DEBUG nova.storage.rbd_utils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] rbd image e9189687-089b-4ee0-9659-211d431f7c3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:15 compute-0 sudo[296526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:15 compute-0 sudo[296526]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:15 compute-0 nova_compute[257802]: 2025-10-02 12:14:15.932 2 DEBUG oslo_concurrency.processutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e9189687-089b-4ee0-9659-211d431f7c3c/disk.config e9189687-089b-4ee0-9659-211d431f7c3c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:14:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:15.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:14:15 compute-0 sudo[296569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:14:15 compute-0 sudo[296569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:15 compute-0 sudo[296569]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:16 compute-0 nova_compute[257802]: 2025-10-02 12:14:16.069 2 DEBUG nova.network.neutron [req-5cc1aa53-6680-48d2-8dbf-904e3ba03616 req-01a3aa0f-79da-4c22-b3a7-aed492382eb1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Updated VIF entry in instance network info cache for port 238499d2-a655-4052-88a6-ec2f299f80c9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:14:16 compute-0 nova_compute[257802]: 2025-10-02 12:14:16.070 2 DEBUG nova.network.neutron [req-5cc1aa53-6680-48d2-8dbf-904e3ba03616 req-01a3aa0f-79da-4c22-b3a7-aed492382eb1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Updating instance_info_cache with network_info: [{"id": "238499d2-a655-4052-88a6-ec2f299f80c9", "address": "fa:16:3e:8c:61:1b", "network": {"id": "65340dfd-fd85-4050-a877-6f149aa218e1", "bridge": "br-int", "label": "tempest-ServersTestJSON-1305142537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "635effeb91ed4a7fb29066f894e25c5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap238499d2-a6", "ovs_interfaceid": "238499d2-a655-4052-88a6-ec2f299f80c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:14:16 compute-0 nova_compute[257802]: 2025-10-02 12:14:16.093 2 DEBUG oslo_concurrency.lockutils [req-5cc1aa53-6680-48d2-8dbf-904e3ba03616 req-01a3aa0f-79da-4c22-b3a7-aed492382eb1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-e9189687-089b-4ee0-9659-211d431f7c3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:14:16 compute-0 nova_compute[257802]: 2025-10-02 12:14:16.145 2 DEBUG oslo_concurrency.processutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e9189687-089b-4ee0-9659-211d431f7c3c/disk.config e9189687-089b-4ee0-9659-211d431f7c3c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:16 compute-0 nova_compute[257802]: 2025-10-02 12:14:16.145 2 INFO nova.virt.libvirt.driver [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Deleting local config drive /var/lib/nova/instances/e9189687-089b-4ee0-9659-211d431f7c3c/disk.config because it was imported into RBD.
Oct 02 12:14:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:16.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:16 compute-0 NetworkManager[44987]: <info>  [1759407256.1973] manager: (tap238499d2-a6): new Tun device (/org/freedesktop/NetworkManager/Devices/107)
Oct 02 12:14:16 compute-0 kernel: tap238499d2-a6: entered promiscuous mode
Oct 02 12:14:16 compute-0 ovn_controller[148183]: 2025-10-02T12:14:16Z|00239|binding|INFO|Claiming lport 238499d2-a655-4052-88a6-ec2f299f80c9 for this chassis.
Oct 02 12:14:16 compute-0 ovn_controller[148183]: 2025-10-02T12:14:16Z|00240|binding|INFO|238499d2-a655-4052-88a6-ec2f299f80c9: Claiming fa:16:3e:8c:61:1b 10.100.0.14
Oct 02 12:14:16 compute-0 nova_compute[257802]: 2025-10-02 12:14:16.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:16 compute-0 nova_compute[257802]: 2025-10-02 12:14:16.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:16.213 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:61:1b 10.100.0.14'], port_security=['fa:16:3e:8c:61:1b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e9189687-089b-4ee0-9659-211d431f7c3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65340dfd-fd85-4050-a877-6f149aa218e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '635effeb91ed4a7fb29066f894e25c5a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a096b700-3108-403f-99c0-1986d777ddd9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=699c1c42-9e1c-42d3-a4ae-f989c55bc6e5, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=238499d2-a655-4052-88a6-ec2f299f80c9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:16.214 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 238499d2-a655-4052-88a6-ec2f299f80c9 in datapath 65340dfd-fd85-4050-a877-6f149aa218e1 bound to our chassis
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:16.216 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 65340dfd-fd85-4050-a877-6f149aa218e1
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:16.227 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1f00b99f-2a1d-491a-a893-6d4db6847c55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:16.227 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap65340dfd-f1 in ovnmeta-65340dfd-fd85-4050-a877-6f149aa218e1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:16.229 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap65340dfd-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:16.229 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ef11f99c-0f97-48e5-96d0-935208352a37]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:16.229 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[db8907cf-a917-4b43-b274-02a9462b50e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:16 compute-0 systemd-udevd[296624]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:16.244 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[d720b4a0-5af5-4978-84af-8bda61c368d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:16 compute-0 NetworkManager[44987]: <info>  [1759407256.2501] device (tap238499d2-a6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:14:16 compute-0 NetworkManager[44987]: <info>  [1759407256.2507] device (tap238499d2-a6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:14:16 compute-0 systemd-machined[211836]: New machine qemu-29-instance-0000003d.
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:16.269 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1956108f-5cd1-4e62-b19d-ce32ca6c7475]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:16.295 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[68160a28-5e45-4e70-aec5-609bb4c107df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:16 compute-0 NetworkManager[44987]: <info>  [1759407256.3020] manager: (tap65340dfd-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/108)
Oct 02 12:14:16 compute-0 systemd-udevd[296631]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:16.302 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[33001eae-2513-473f-b009-1159ce29ed73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:16 compute-0 systemd[1]: Started Virtual Machine qemu-29-instance-0000003d.
Oct 02 12:14:16 compute-0 nova_compute[257802]: 2025-10-02 12:14:16.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:16 compute-0 ovn_controller[148183]: 2025-10-02T12:14:16Z|00241|binding|INFO|Setting lport 238499d2-a655-4052-88a6-ec2f299f80c9 ovn-installed in OVS
Oct 02 12:14:16 compute-0 ovn_controller[148183]: 2025-10-02T12:14:16Z|00242|binding|INFO|Setting lport 238499d2-a655-4052-88a6-ec2f299f80c9 up in Southbound
Oct 02 12:14:16 compute-0 nova_compute[257802]: 2025-10-02 12:14:16.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:16.330 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[3d123936-dc9d-441a-8d72-c79da6bef4fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:16.333 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[15b863d6-4e41-4340-ab24-494944e86640]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:16 compute-0 NetworkManager[44987]: <info>  [1759407256.3541] device (tap65340dfd-f0): carrier: link connected
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:16.360 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[8d2e0dad-5e7d-49d3-85a2-98ce9b463482]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:16.380 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[68cff358-8e08-42b4-9089-0821125c4876]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65340dfd-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:f4:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532397, 'reachable_time': 21414, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296658, 'error': None, 'target': 'ovnmeta-65340dfd-fd85-4050-a877-6f149aa218e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:16.397 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[62277eef-fa10-42c0-a063-dbe7fadf24d3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5c:f43b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 532397, 'tstamp': 532397}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296660, 'error': None, 'target': 'ovnmeta-65340dfd-fd85-4050-a877-6f149aa218e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:16.416 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[da44bf1a-46e2-4697-b9de-fb9d8e92b55f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65340dfd-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:f4:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532397, 'reachable_time': 21414, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 296662, 'error': None, 'target': 'ovnmeta-65340dfd-fd85-4050-a877-6f149aa218e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:16.447 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2d1f9108-2a25-4d0e-8f4a-5f83758c430e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:16.513 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e5b2259a-2eff-4ccd-b475-8659c689396c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:16.514 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65340dfd-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:16.515 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:16.515 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap65340dfd-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:16 compute-0 NetworkManager[44987]: <info>  [1759407256.5179] manager: (tap65340dfd-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Oct 02 12:14:16 compute-0 nova_compute[257802]: 2025-10-02 12:14:16.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:16 compute-0 kernel: tap65340dfd-f0: entered promiscuous mode
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:16.520 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap65340dfd-f0, col_values=(('external_ids', {'iface-id': '6db828a5-284b-44ff-863c-428cc2e73574'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:16 compute-0 ovn_controller[148183]: 2025-10-02T12:14:16Z|00243|binding|INFO|Releasing lport 6db828a5-284b-44ff-863c-428cc2e73574 from this chassis (sb_readonly=0)
Oct 02 12:14:16 compute-0 nova_compute[257802]: 2025-10-02 12:14:16.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:16.546 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/65340dfd-fd85-4050-a877-6f149aa218e1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/65340dfd-fd85-4050-a877-6f149aa218e1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:16.547 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[fd4da6d0-b8fa-46c7-bc2e-757bd076df23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:16.548 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-65340dfd-fd85-4050-a877-6f149aa218e1
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/65340dfd-fd85-4050-a877-6f149aa218e1.pid.haproxy
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 65340dfd-fd85-4050-a877-6f149aa218e1
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:14:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:16.549 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-65340dfd-fd85-4050-a877-6f149aa218e1', 'env', 'PROCESS_TAG=haproxy-65340dfd-fd85-4050-a877-6f149aa218e1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/65340dfd-fd85-4050-a877-6f149aa218e1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:14:16 compute-0 nova_compute[257802]: 2025-10-02 12:14:16.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:16 compute-0 ceph-mon[73607]: pgmap v1477: 305 pgs: 305 active+clean; 183 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 4.7 MiB/s wr, 93 op/s
Oct 02 12:14:16 compute-0 podman[296712]: 2025-10-02 12:14:16.953127158 +0000 UTC m=+0.064576089 container create bc6cb1427f564e6c549429cc9894d1c1d3b62cff2b075a5094dd89976e53ef20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65340dfd-fd85-4050-a877-6f149aa218e1, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:14:16 compute-0 systemd[1]: Started libpod-conmon-bc6cb1427f564e6c549429cc9894d1c1d3b62cff2b075a5094dd89976e53ef20.scope.
Oct 02 12:14:17 compute-0 podman[296712]: 2025-10-02 12:14:16.919586163 +0000 UTC m=+0.031035124 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:14:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c91eecf33932f3e524ed726da427d4db1f2ea01c28fe711e151cf6cd7732298/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:14:17 compute-0 podman[296712]: 2025-10-02 12:14:17.06057698 +0000 UTC m=+0.172025921 container init bc6cb1427f564e6c549429cc9894d1c1d3b62cff2b075a5094dd89976e53ef20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65340dfd-fd85-4050-a877-6f149aa218e1, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 12:14:17 compute-0 podman[296712]: 2025-10-02 12:14:17.072391351 +0000 UTC m=+0.183840272 container start bc6cb1427f564e6c549429cc9894d1c1d3b62cff2b075a5094dd89976e53ef20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65340dfd-fd85-4050-a877-6f149aa218e1, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:14:17 compute-0 neutron-haproxy-ovnmeta-65340dfd-fd85-4050-a877-6f149aa218e1[296750]: [NOTICE]   (296755) : New worker (296757) forked
Oct 02 12:14:17 compute-0 neutron-haproxy-ovnmeta-65340dfd-fd85-4050-a877-6f149aa218e1[296750]: [NOTICE]   (296755) : Loading success.
Oct 02 12:14:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 200 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 362 KiB/s rd, 5.2 MiB/s wr, 118 op/s
Oct 02 12:14:17 compute-0 nova_compute[257802]: 2025-10-02 12:14:17.410 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407257.409808, e9189687-089b-4ee0-9659-211d431f7c3c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:14:17 compute-0 nova_compute[257802]: 2025-10-02 12:14:17.410 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] VM Started (Lifecycle Event)
Oct 02 12:14:17 compute-0 nova_compute[257802]: 2025-10-02 12:14:17.449 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:14:17 compute-0 nova_compute[257802]: 2025-10-02 12:14:17.453 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407257.4108176, e9189687-089b-4ee0-9659-211d431f7c3c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:14:17 compute-0 nova_compute[257802]: 2025-10-02 12:14:17.453 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] VM Paused (Lifecycle Event)
Oct 02 12:14:17 compute-0 nova_compute[257802]: 2025-10-02 12:14:17.474 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:14:17 compute-0 nova_compute[257802]: 2025-10-02 12:14:17.477 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:14:17 compute-0 nova_compute[257802]: 2025-10-02 12:14:17.501 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:14:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:14:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:17.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:14:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:18.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:18 compute-0 nova_compute[257802]: 2025-10-02 12:14:18.364 2 DEBUG nova.compute.manager [req-14ac2f5f-4d36-4010-a9b9-1bc4349d5844 req-59d53f8f-e265-4a95-985d-7718b486b317 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Received event network-vif-plugged-238499d2-a655-4052-88a6-ec2f299f80c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:18 compute-0 nova_compute[257802]: 2025-10-02 12:14:18.365 2 DEBUG oslo_concurrency.lockutils [req-14ac2f5f-4d36-4010-a9b9-1bc4349d5844 req-59d53f8f-e265-4a95-985d-7718b486b317 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e9189687-089b-4ee0-9659-211d431f7c3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:18 compute-0 nova_compute[257802]: 2025-10-02 12:14:18.365 2 DEBUG oslo_concurrency.lockutils [req-14ac2f5f-4d36-4010-a9b9-1bc4349d5844 req-59d53f8f-e265-4a95-985d-7718b486b317 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e9189687-089b-4ee0-9659-211d431f7c3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:18 compute-0 nova_compute[257802]: 2025-10-02 12:14:18.365 2 DEBUG oslo_concurrency.lockutils [req-14ac2f5f-4d36-4010-a9b9-1bc4349d5844 req-59d53f8f-e265-4a95-985d-7718b486b317 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e9189687-089b-4ee0-9659-211d431f7c3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:18 compute-0 nova_compute[257802]: 2025-10-02 12:14:18.365 2 DEBUG nova.compute.manager [req-14ac2f5f-4d36-4010-a9b9-1bc4349d5844 req-59d53f8f-e265-4a95-985d-7718b486b317 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Processing event network-vif-plugged-238499d2-a655-4052-88a6-ec2f299f80c9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:14:18 compute-0 nova_compute[257802]: 2025-10-02 12:14:18.366 2 DEBUG nova.compute.manager [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:14:18 compute-0 nova_compute[257802]: 2025-10-02 12:14:18.370 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407258.3702767, e9189687-089b-4ee0-9659-211d431f7c3c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:14:18 compute-0 nova_compute[257802]: 2025-10-02 12:14:18.370 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] VM Resumed (Lifecycle Event)
Oct 02 12:14:18 compute-0 nova_compute[257802]: 2025-10-02 12:14:18.372 2 DEBUG nova.virt.libvirt.driver [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:14:18 compute-0 nova_compute[257802]: 2025-10-02 12:14:18.377 2 INFO nova.virt.libvirt.driver [-] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Instance spawned successfully.
Oct 02 12:14:18 compute-0 nova_compute[257802]: 2025-10-02 12:14:18.378 2 DEBUG nova.virt.libvirt.driver [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:14:18 compute-0 nova_compute[257802]: 2025-10-02 12:14:18.401 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:14:18 compute-0 nova_compute[257802]: 2025-10-02 12:14:18.406 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:14:18 compute-0 nova_compute[257802]: 2025-10-02 12:14:18.410 2 DEBUG nova.virt.libvirt.driver [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:14:18 compute-0 nova_compute[257802]: 2025-10-02 12:14:18.410 2 DEBUG nova.virt.libvirt.driver [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:14:18 compute-0 nova_compute[257802]: 2025-10-02 12:14:18.410 2 DEBUG nova.virt.libvirt.driver [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:14:18 compute-0 nova_compute[257802]: 2025-10-02 12:14:18.411 2 DEBUG nova.virt.libvirt.driver [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:14:18 compute-0 nova_compute[257802]: 2025-10-02 12:14:18.411 2 DEBUG nova.virt.libvirt.driver [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:14:18 compute-0 nova_compute[257802]: 2025-10-02 12:14:18.412 2 DEBUG nova.virt.libvirt.driver [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:14:18 compute-0 nova_compute[257802]: 2025-10-02 12:14:18.444 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:14:18 compute-0 nova_compute[257802]: 2025-10-02 12:14:18.515 2 INFO nova.compute.manager [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Took 8.97 seconds to spawn the instance on the hypervisor.
Oct 02 12:14:18 compute-0 nova_compute[257802]: 2025-10-02 12:14:18.516 2 DEBUG nova.compute.manager [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:14:18 compute-0 nova_compute[257802]: 2025-10-02 12:14:18.607 2 INFO nova.compute.manager [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Took 9.88 seconds to build instance.
Oct 02 12:14:18 compute-0 nova_compute[257802]: 2025-10-02 12:14:18.624 2 DEBUG oslo_concurrency.lockutils [None req-ec0e53a2-393b-46d2-8cac-6934252dc75f 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Lock "e9189687-089b-4ee0-9659-211d431f7c3c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.971s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:18 compute-0 nova_compute[257802]: 2025-10-02 12:14:18.624 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "e9189687-089b-4ee0-9659-211d431f7c3c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 3.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:18 compute-0 nova_compute[257802]: 2025-10-02 12:14:18.624 2 INFO nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:14:18 compute-0 nova_compute[257802]: 2025-10-02 12:14:18.625 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "e9189687-089b-4ee0-9659-211d431f7c3c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:18 compute-0 ceph-mon[73607]: pgmap v1478: 305 pgs: 305 active+clean; 200 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 362 KiB/s rd, 5.2 MiB/s wr, 118 op/s
Oct 02 12:14:18 compute-0 podman[296768]: 2025-10-02 12:14:18.929911535 +0000 UTC m=+0.063559765 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 12:14:18 compute-0 podman[296767]: 2025-10-02 12:14:18.930642852 +0000 UTC m=+0.063759348 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 12:14:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 213 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 278 KiB/s rd, 5.3 MiB/s wr, 112 op/s
Oct 02 12:14:19 compute-0 nova_compute[257802]: 2025-10-02 12:14:19.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:19.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:14:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:20.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:14:20 compute-0 nova_compute[257802]: 2025-10-02 12:14:20.501 2 DEBUG nova.compute.manager [req-07fdd191-5b7c-4c45-9a5e-0e2097ac5039 req-50eadd96-fa32-475d-a445-50c414b30b5d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Received event network-vif-plugged-238499d2-a655-4052-88a6-ec2f299f80c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:20 compute-0 nova_compute[257802]: 2025-10-02 12:14:20.501 2 DEBUG oslo_concurrency.lockutils [req-07fdd191-5b7c-4c45-9a5e-0e2097ac5039 req-50eadd96-fa32-475d-a445-50c414b30b5d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e9189687-089b-4ee0-9659-211d431f7c3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:20 compute-0 nova_compute[257802]: 2025-10-02 12:14:20.501 2 DEBUG oslo_concurrency.lockutils [req-07fdd191-5b7c-4c45-9a5e-0e2097ac5039 req-50eadd96-fa32-475d-a445-50c414b30b5d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e9189687-089b-4ee0-9659-211d431f7c3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:20 compute-0 nova_compute[257802]: 2025-10-02 12:14:20.502 2 DEBUG oslo_concurrency.lockutils [req-07fdd191-5b7c-4c45-9a5e-0e2097ac5039 req-50eadd96-fa32-475d-a445-50c414b30b5d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e9189687-089b-4ee0-9659-211d431f7c3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:20 compute-0 nova_compute[257802]: 2025-10-02 12:14:20.502 2 DEBUG nova.compute.manager [req-07fdd191-5b7c-4c45-9a5e-0e2097ac5039 req-50eadd96-fa32-475d-a445-50c414b30b5d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] No waiting events found dispatching network-vif-plugged-238499d2-a655-4052-88a6-ec2f299f80c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:14:20 compute-0 nova_compute[257802]: 2025-10-02 12:14:20.502 2 WARNING nova.compute.manager [req-07fdd191-5b7c-4c45-9a5e-0e2097ac5039 req-50eadd96-fa32-475d-a445-50c414b30b5d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Received unexpected event network-vif-plugged-238499d2-a655-4052-88a6-ec2f299f80c9 for instance with vm_state active and task_state None.
Oct 02 12:14:20 compute-0 NetworkManager[44987]: <info>  [1759407260.8156] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Oct 02 12:14:20 compute-0 nova_compute[257802]: 2025-10-02 12:14:20.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:20 compute-0 NetworkManager[44987]: <info>  [1759407260.8166] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Oct 02 12:14:20 compute-0 nova_compute[257802]: 2025-10-02 12:14:20.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:21 compute-0 ovn_controller[148183]: 2025-10-02T12:14:21Z|00244|binding|INFO|Releasing lport 6db828a5-284b-44ff-863c-428cc2e73574 from this chassis (sb_readonly=0)
Oct 02 12:14:21 compute-0 nova_compute[257802]: 2025-10-02 12:14:21.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:21 compute-0 ceph-mon[73607]: pgmap v1479: 305 pgs: 305 active+clean; 213 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 278 KiB/s rd, 5.3 MiB/s wr, 112 op/s
Oct 02 12:14:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 213 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.6 MiB/s wr, 102 op/s
Oct 02 12:14:21 compute-0 nova_compute[257802]: 2025-10-02 12:14:21.283 2 DEBUG nova.compute.manager [req-03facd7f-0d60-42d1-82c0-298639e1ec75 req-fb54462c-27d0-469f-9357-35ac5e6383df d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Received event network-changed-238499d2-a655-4052-88a6-ec2f299f80c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:21 compute-0 nova_compute[257802]: 2025-10-02 12:14:21.284 2 DEBUG nova.compute.manager [req-03facd7f-0d60-42d1-82c0-298639e1ec75 req-fb54462c-27d0-469f-9357-35ac5e6383df d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Refreshing instance network info cache due to event network-changed-238499d2-a655-4052-88a6-ec2f299f80c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:14:21 compute-0 nova_compute[257802]: 2025-10-02 12:14:21.284 2 DEBUG oslo_concurrency.lockutils [req-03facd7f-0d60-42d1-82c0-298639e1ec75 req-fb54462c-27d0-469f-9357-35ac5e6383df d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-e9189687-089b-4ee0-9659-211d431f7c3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:14:21 compute-0 nova_compute[257802]: 2025-10-02 12:14:21.285 2 DEBUG oslo_concurrency.lockutils [req-03facd7f-0d60-42d1-82c0-298639e1ec75 req-fb54462c-27d0-469f-9357-35ac5e6383df d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-e9189687-089b-4ee0-9659-211d431f7c3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:14:21 compute-0 nova_compute[257802]: 2025-10-02 12:14:21.285 2 DEBUG nova.network.neutron [req-03facd7f-0d60-42d1-82c0-298639e1ec75 req-fb54462c-27d0-469f-9357-35ac5e6383df d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Refreshing network info cache for port 238499d2-a655-4052-88a6-ec2f299f80c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:14:21 compute-0 nova_compute[257802]: 2025-10-02 12:14:21.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:21.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:22.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:22 compute-0 ceph-mon[73607]: pgmap v1480: 305 pgs: 305 active+clean; 213 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.6 MiB/s wr, 102 op/s
Oct 02 12:14:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 213 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.7 MiB/s wr, 112 op/s
Oct 02 12:14:23 compute-0 nova_compute[257802]: 2025-10-02 12:14:23.447 2 DEBUG nova.network.neutron [req-03facd7f-0d60-42d1-82c0-298639e1ec75 req-fb54462c-27d0-469f-9357-35ac5e6383df d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Updated VIF entry in instance network info cache for port 238499d2-a655-4052-88a6-ec2f299f80c9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:14:23 compute-0 nova_compute[257802]: 2025-10-02 12:14:23.448 2 DEBUG nova.network.neutron [req-03facd7f-0d60-42d1-82c0-298639e1ec75 req-fb54462c-27d0-469f-9357-35ac5e6383df d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Updating instance_info_cache with network_info: [{"id": "238499d2-a655-4052-88a6-ec2f299f80c9", "address": "fa:16:3e:8c:61:1b", "network": {"id": "65340dfd-fd85-4050-a877-6f149aa218e1", "bridge": "br-int", "label": "tempest-ServersTestJSON-1305142537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "635effeb91ed4a7fb29066f894e25c5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap238499d2-a6", "ovs_interfaceid": "238499d2-a655-4052-88a6-ec2f299f80c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:14:23 compute-0 nova_compute[257802]: 2025-10-02 12:14:23.509 2 DEBUG oslo_concurrency.lockutils [req-03facd7f-0d60-42d1-82c0-298639e1ec75 req-fb54462c-27d0-469f-9357-35ac5e6383df d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-e9189687-089b-4ee0-9659-211d431f7c3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:14:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:23.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:23 compute-0 podman[296810]: 2025-10-02 12:14:23.984276291 +0000 UTC m=+0.108181052 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:14:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:24.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:24 compute-0 ceph-mon[73607]: pgmap v1481: 305 pgs: 305 active+clean; 213 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.7 MiB/s wr, 112 op/s
Oct 02 12:14:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3758610623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:24 compute-0 nova_compute[257802]: 2025-10-02 12:14:24.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 213 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 113 op/s
Oct 02 12:14:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:25.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:26.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:26 compute-0 nova_compute[257802]: 2025-10-02 12:14:26.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:26 compute-0 ceph-mon[73607]: pgmap v1482: 305 pgs: 305 active+clean; 213 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 113 op/s
Oct 02 12:14:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:26.932 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:26.933 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:26.933 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 204 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 105 op/s
Oct 02 12:14:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:27.519 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:14:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:27.519 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:14:27 compute-0 nova_compute[257802]: 2025-10-02 12:14:27.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1707467713' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4252505577' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:14:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:27.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:14:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:28.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:28 compute-0 ceph-mon[73607]: pgmap v1483: 305 pgs: 305 active+clean; 204 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 105 op/s
Oct 02 12:14:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3309648743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3712668963' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3614499518' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 176 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 128 op/s
Oct 02 12:14:29 compute-0 nova_compute[257802]: 2025-10-02 12:14:29.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:14:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:29.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:14:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:30.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:30 compute-0 ceph-mon[73607]: pgmap v1484: 305 pgs: 305 active+clean; 176 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 128 op/s
Oct 02 12:14:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 185 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 132 op/s
Oct 02 12:14:31 compute-0 ovn_controller[148183]: 2025-10-02T12:14:31Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8c:61:1b 10.100.0.14
Oct 02 12:14:31 compute-0 ovn_controller[148183]: 2025-10-02T12:14:31Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8c:61:1b 10.100.0.14
Oct 02 12:14:31 compute-0 nova_compute[257802]: 2025-10-02 12:14:31.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:31.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:32.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 184 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.4 MiB/s wr, 192 op/s
Oct 02 12:14:33 compute-0 ceph-mon[73607]: pgmap v1485: 305 pgs: 305 active+clean; 185 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 132 op/s
Oct 02 12:14:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:33.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:34.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:34 compute-0 ceph-mon[73607]: pgmap v1486: 305 pgs: 305 active+clean; 184 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.4 MiB/s wr, 192 op/s
Oct 02 12:14:34 compute-0 nova_compute[257802]: 2025-10-02 12:14:34.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 184 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 207 op/s
Oct 02 12:14:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:35.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:36 compute-0 sudo[296842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:14:36 compute-0 sudo[296842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:36 compute-0 sudo[296842]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:36.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:36 compute-0 sudo[296867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:14:36 compute-0 sudo[296867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:36 compute-0 sudo[296867]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:36 compute-0 nova_compute[257802]: 2025-10-02 12:14:36.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:36 compute-0 ceph-mon[73607]: pgmap v1487: 305 pgs: 305 active+clean; 184 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 207 op/s
Oct 02 12:14:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2864317009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1618711799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 167 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 229 op/s
Oct 02 12:14:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:37.522 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:37.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:38 compute-0 sudo[296893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:14:38 compute-0 sudo[296893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:38 compute-0 sudo[296893]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:38 compute-0 radosgw[92027]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct 02 12:14:38 compute-0 sudo[296918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:14:38 compute-0 sudo[296918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:38 compute-0 sudo[296918]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:14:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:38.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:14:38 compute-0 sudo[296943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:14:38 compute-0 sudo[296943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:38 compute-0 sudo[296943]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:38 compute-0 radosgw[92027]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Oct 02 12:14:38 compute-0 sudo[296968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 12:14:38 compute-0 sudo[296968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 12:14:38 compute-0 sudo[296968]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:14:38 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:14:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 12:14:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:38 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:14:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:14:38 compute-0 nova_compute[257802]: 2025-10-02 12:14:38.868 2 DEBUG oslo_concurrency.lockutils [None req-507428f1-a4b2-4424-bf87-415cf3e66f4a 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Acquiring lock "e9189687-089b-4ee0-9659-211d431f7c3c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:38 compute-0 nova_compute[257802]: 2025-10-02 12:14:38.868 2 DEBUG oslo_concurrency.lockutils [None req-507428f1-a4b2-4424-bf87-415cf3e66f4a 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Lock "e9189687-089b-4ee0-9659-211d431f7c3c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:38 compute-0 nova_compute[257802]: 2025-10-02 12:14:38.868 2 DEBUG oslo_concurrency.lockutils [None req-507428f1-a4b2-4424-bf87-415cf3e66f4a 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Acquiring lock "e9189687-089b-4ee0-9659-211d431f7c3c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:38 compute-0 nova_compute[257802]: 2025-10-02 12:14:38.869 2 DEBUG oslo_concurrency.lockutils [None req-507428f1-a4b2-4424-bf87-415cf3e66f4a 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Lock "e9189687-089b-4ee0-9659-211d431f7c3c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:38 compute-0 nova_compute[257802]: 2025-10-02 12:14:38.869 2 DEBUG oslo_concurrency.lockutils [None req-507428f1-a4b2-4424-bf87-415cf3e66f4a 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Lock "e9189687-089b-4ee0-9659-211d431f7c3c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:38 compute-0 nova_compute[257802]: 2025-10-02 12:14:38.870 2 INFO nova.compute.manager [None req-507428f1-a4b2-4424-bf87-415cf3e66f4a 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Terminating instance
Oct 02 12:14:38 compute-0 nova_compute[257802]: 2025-10-02 12:14:38.871 2 DEBUG nova.compute.manager [None req-507428f1-a4b2-4424-bf87-415cf3e66f4a 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:14:38 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Oct 02 12:14:38 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:38.958219) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:14:38 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Oct 02 12:14:38 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407278958317, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2003, "num_deletes": 258, "total_data_size": 3287864, "memory_usage": 3354800, "flush_reason": "Manual Compaction"}
Oct 02 12:14:38 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Oct 02 12:14:38 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:14:38 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407278977401, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 2055007, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31604, "largest_seqno": 33606, "table_properties": {"data_size": 2047946, "index_size": 3815, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 18673, "raw_average_key_size": 21, "raw_value_size": 2032262, "raw_average_value_size": 2352, "num_data_blocks": 166, "num_entries": 864, "num_filter_entries": 864, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407114, "oldest_key_time": 1759407114, "file_creation_time": 1759407278, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:14:38 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 19207 microseconds, and 10750 cpu microseconds.
Oct 02 12:14:38 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:14:38 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:38.977444) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 2055007 bytes OK
Oct 02 12:14:38 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:38.977467) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Oct 02 12:14:38 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:38.979500) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Oct 02 12:14:38 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:38.979513) EVENT_LOG_v1 {"time_micros": 1759407278979509, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:14:38 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:38.979531) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:14:38 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3279489, prev total WAL file size 3302018, number of live WAL files 2.
Oct 02 12:14:38 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:14:38 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:38.980562) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303035' seq:72057594037927935, type:22 .. '6D6772737461740031323537' seq:0, type:0; will stop at (end)
Oct 02 12:14:38 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:14:38 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(2006KB)], [68(9901KB)]
Oct 02 12:14:38 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407278980622, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 12194282, "oldest_snapshot_seqno": -1}
Oct 02 12:14:39 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6022 keys, 9495941 bytes, temperature: kUnknown
Oct 02 12:14:39 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407279077709, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 9495941, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9456069, "index_size": 23698, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15109, "raw_key_size": 153292, "raw_average_key_size": 25, "raw_value_size": 9348393, "raw_average_value_size": 1552, "num_data_blocks": 957, "num_entries": 6022, "num_filter_entries": 6022, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759407278, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:14:39 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:14:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:39.078172) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 9495941 bytes
Oct 02 12:14:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:39.080783) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.4 rd, 97.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.7 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(10.6) write-amplify(4.6) OK, records in: 6478, records dropped: 456 output_compression: NoCompression
Oct 02 12:14:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:39.080819) EVENT_LOG_v1 {"time_micros": 1759407279080803, "job": 38, "event": "compaction_finished", "compaction_time_micros": 97266, "compaction_time_cpu_micros": 21065, "output_level": 6, "num_output_files": 1, "total_output_size": 9495941, "num_input_records": 6478, "num_output_records": 6022, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:14:39 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:14:39 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407279081698, "job": 38, "event": "table_file_deletion", "file_number": 70}
Oct 02 12:14:39 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:14:39 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407279085906, "job": 38, "event": "table_file_deletion", "file_number": 68}
Oct 02 12:14:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:38.980435) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:14:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:39.086031) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:14:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:39.086042) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:14:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:39.086046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:14:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:39.086049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:14:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:39.086053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:14:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 195 MiB data, 615 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.8 MiB/s wr, 250 op/s
Oct 02 12:14:39 compute-0 ceph-mon[73607]: pgmap v1488: 305 pgs: 305 active+clean; 167 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 229 op/s
Oct 02 12:14:39 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:14:39 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:14:39 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:14:39 compute-0 sudo[297014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:14:39 compute-0 sudo[297014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:39 compute-0 sudo[297014]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:39 compute-0 sudo[297039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:14:39 compute-0 sudo[297039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:39 compute-0 sudo[297039]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:39 compute-0 sudo[297064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:14:39 compute-0 sudo[297064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:39 compute-0 sudo[297064]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:39 compute-0 sudo[297089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:14:39 compute-0 sudo[297089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:39 compute-0 nova_compute[257802]: 2025-10-02 12:14:39.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:39.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:40 compute-0 nova_compute[257802]: 2025-10-02 12:14:40.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:14:40 compute-0 nova_compute[257802]: 2025-10-02 12:14:40.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:14:40 compute-0 nova_compute[257802]: 2025-10-02 12:14:40.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:14:40 compute-0 nova_compute[257802]: 2025-10-02 12:14:40.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:14:40 compute-0 kernel: tap238499d2-a6 (unregistering): left promiscuous mode
Oct 02 12:14:40 compute-0 NetworkManager[44987]: <info>  [1759407280.1373] device (tap238499d2-a6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:14:40 compute-0 ovn_controller[148183]: 2025-10-02T12:14:40Z|00245|binding|INFO|Releasing lport 238499d2-a655-4052-88a6-ec2f299f80c9 from this chassis (sb_readonly=0)
Oct 02 12:14:40 compute-0 ovn_controller[148183]: 2025-10-02T12:14:40Z|00246|binding|INFO|Setting lport 238499d2-a655-4052-88a6-ec2f299f80c9 down in Southbound
Oct 02 12:14:40 compute-0 nova_compute[257802]: 2025-10-02 12:14:40.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:40 compute-0 sudo[297089]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:40 compute-0 ovn_controller[148183]: 2025-10-02T12:14:40Z|00247|binding|INFO|Removing iface tap238499d2-a6 ovn-installed in OVS
Oct 02 12:14:40 compute-0 nova_compute[257802]: 2025-10-02 12:14:40.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:14:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:40.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:14:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:40.196 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:61:1b 10.100.0.14'], port_security=['fa:16:3e:8c:61:1b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e9189687-089b-4ee0-9659-211d431f7c3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65340dfd-fd85-4050-a877-6f149aa218e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '635effeb91ed4a7fb29066f894e25c5a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a096b700-3108-403f-99c0-1986d777ddd9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=699c1c42-9e1c-42d3-a4ae-f989c55bc6e5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=238499d2-a655-4052-88a6-ec2f299f80c9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:14:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:40.197 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 238499d2-a655-4052-88a6-ec2f299f80c9 in datapath 65340dfd-fd85-4050-a877-6f149aa218e1 unbound from our chassis
Oct 02 12:14:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:40.198 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 65340dfd-fd85-4050-a877-6f149aa218e1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:14:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:40.199 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8d179dc0-7eae-4e9b-9b5c-1681f7f42178]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:40.200 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-65340dfd-fd85-4050-a877-6f149aa218e1 namespace which is not needed anymore
Oct 02 12:14:40 compute-0 nova_compute[257802]: 2025-10-02 12:14:40.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:14:40 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:14:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:14:40 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:14:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:14:40 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000003d.scope: Deactivated successfully.
Oct 02 12:14:40 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000003d.scope: Consumed 14.315s CPU time.
Oct 02 12:14:40 compute-0 systemd-machined[211836]: Machine qemu-29-instance-0000003d terminated.
Oct 02 12:14:40 compute-0 nova_compute[257802]: 2025-10-02 12:14:40.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:40 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:14:40 compute-0 nova_compute[257802]: 2025-10-02 12:14:40.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:40 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 353f64cb-a0e9-4abb-a28d-e50d780f9ce8 does not exist
Oct 02 12:14:40 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 925ec4d9-dde9-4e0b-b3f9-f60bce0633cf does not exist
Oct 02 12:14:40 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev e1899f39-86d4-4830-a001-70ef98b5c04e does not exist
Oct 02 12:14:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:14:40 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:14:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:14:40 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:14:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:14:40 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:14:40 compute-0 nova_compute[257802]: 2025-10-02 12:14:40.310 2 INFO nova.virt.libvirt.driver [-] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Instance destroyed successfully.
Oct 02 12:14:40 compute-0 nova_compute[257802]: 2025-10-02 12:14:40.311 2 DEBUG nova.objects.instance [None req-507428f1-a4b2-4424-bf87-415cf3e66f4a 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Lazy-loading 'resources' on Instance uuid e9189687-089b-4ee0-9659-211d431f7c3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:14:40 compute-0 neutron-haproxy-ovnmeta-65340dfd-fd85-4050-a877-6f149aa218e1[296750]: [NOTICE]   (296755) : haproxy version is 2.8.14-c23fe91
Oct 02 12:14:40 compute-0 neutron-haproxy-ovnmeta-65340dfd-fd85-4050-a877-6f149aa218e1[296750]: [NOTICE]   (296755) : path to executable is /usr/sbin/haproxy
Oct 02 12:14:40 compute-0 neutron-haproxy-ovnmeta-65340dfd-fd85-4050-a877-6f149aa218e1[296750]: [WARNING]  (296755) : Exiting Master process...
Oct 02 12:14:40 compute-0 neutron-haproxy-ovnmeta-65340dfd-fd85-4050-a877-6f149aa218e1[296750]: [WARNING]  (296755) : Exiting Master process...
Oct 02 12:14:40 compute-0 nova_compute[257802]: 2025-10-02 12:14:40.327 2 DEBUG nova.virt.libvirt.vif [None req-507428f1-a4b2-4424-bf87-415cf3e66f4a 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1324947878',display_name='tempest-ServersTestJSON-server-1324947878',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1324947878',id=61,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKPODLc73vcAGq+JixmgPJdhoD25wYT7PMF03wlPuV2ANGBasaLYR/ZYH1MuTT69HiEl5/O3OIWiNfd9wOXVscHdio0hIEDT6Zh/fsDlQOJGWeXnVHabv9+qTcPA1+2X0Q==',key_name='tempest-keypair-1726160167',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:14:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='635effeb91ed4a7fb29066f894e25c5a',ramdisk_id='',reservation_id='r-5m9hoxnd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-230117370',owner_user_name='tempest-ServersTestJSON-230117370-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:14:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1c085e57b7d944c7a372e604d34c96a3',uuid=e9189687-089b-4ee0-9659-211d431f7c3c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "238499d2-a655-4052-88a6-ec2f299f80c9", "address": "fa:16:3e:8c:61:1b", "network": {"id": "65340dfd-fd85-4050-a877-6f149aa218e1", "bridge": "br-int", "label": "tempest-ServersTestJSON-1305142537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "635effeb91ed4a7fb29066f894e25c5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap238499d2-a6", "ovs_interfaceid": "238499d2-a655-4052-88a6-ec2f299f80c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:14:40 compute-0 nova_compute[257802]: 2025-10-02 12:14:40.327 2 DEBUG nova.network.os_vif_util [None req-507428f1-a4b2-4424-bf87-415cf3e66f4a 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Converting VIF {"id": "238499d2-a655-4052-88a6-ec2f299f80c9", "address": "fa:16:3e:8c:61:1b", "network": {"id": "65340dfd-fd85-4050-a877-6f149aa218e1", "bridge": "br-int", "label": "tempest-ServersTestJSON-1305142537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "635effeb91ed4a7fb29066f894e25c5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap238499d2-a6", "ovs_interfaceid": "238499d2-a655-4052-88a6-ec2f299f80c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:14:40 compute-0 nova_compute[257802]: 2025-10-02 12:14:40.328 2 DEBUG nova.network.os_vif_util [None req-507428f1-a4b2-4424-bf87-415cf3e66f4a 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8c:61:1b,bridge_name='br-int',has_traffic_filtering=True,id=238499d2-a655-4052-88a6-ec2f299f80c9,network=Network(65340dfd-fd85-4050-a877-6f149aa218e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap238499d2-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:14:40 compute-0 nova_compute[257802]: 2025-10-02 12:14:40.328 2 DEBUG os_vif [None req-507428f1-a4b2-4424-bf87-415cf3e66f4a 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8c:61:1b,bridge_name='br-int',has_traffic_filtering=True,id=238499d2-a655-4052-88a6-ec2f299f80c9,network=Network(65340dfd-fd85-4050-a877-6f149aa218e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap238499d2-a6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:14:40 compute-0 neutron-haproxy-ovnmeta-65340dfd-fd85-4050-a877-6f149aa218e1[296750]: [ALERT]    (296755) : Current worker (296757) exited with code 143 (Terminated)
Oct 02 12:14:40 compute-0 neutron-haproxy-ovnmeta-65340dfd-fd85-4050-a877-6f149aa218e1[296750]: [WARNING]  (296755) : All workers exited. Exiting... (0)
Oct 02 12:14:40 compute-0 nova_compute[257802]: 2025-10-02 12:14:40.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:40 compute-0 systemd[1]: libpod-bc6cb1427f564e6c549429cc9894d1c1d3b62cff2b075a5094dd89976e53ef20.scope: Deactivated successfully.
Oct 02 12:14:40 compute-0 nova_compute[257802]: 2025-10-02 12:14:40.330 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap238499d2-a6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:40 compute-0 nova_compute[257802]: 2025-10-02 12:14:40.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:40 compute-0 nova_compute[257802]: 2025-10-02 12:14:40.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:40 compute-0 nova_compute[257802]: 2025-10-02 12:14:40.334 2 INFO os_vif [None req-507428f1-a4b2-4424-bf87-415cf3e66f4a 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8c:61:1b,bridge_name='br-int',has_traffic_filtering=True,id=238499d2-a655-4052-88a6-ec2f299f80c9,network=Network(65340dfd-fd85-4050-a877-6f149aa218e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap238499d2-a6')
Oct 02 12:14:40 compute-0 podman[297169]: 2025-10-02 12:14:40.338316049 +0000 UTC m=+0.054470360 container died bc6cb1427f564e6c549429cc9894d1c1d3b62cff2b075a5094dd89976e53ef20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65340dfd-fd85-4050-a877-6f149aa218e1, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:14:40 compute-0 sudo[297193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:14:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bc6cb1427f564e6c549429cc9894d1c1d3b62cff2b075a5094dd89976e53ef20-userdata-shm.mount: Deactivated successfully.
Oct 02 12:14:40 compute-0 sudo[297193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c91eecf33932f3e524ed726da427d4db1f2ea01c28fe711e151cf6cd7732298-merged.mount: Deactivated successfully.
Oct 02 12:14:40 compute-0 sudo[297193]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:40 compute-0 podman[297169]: 2025-10-02 12:14:40.386134825 +0000 UTC m=+0.102289136 container cleanup bc6cb1427f564e6c549429cc9894d1c1d3b62cff2b075a5094dd89976e53ef20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65340dfd-fd85-4050-a877-6f149aa218e1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:14:40 compute-0 systemd[1]: libpod-conmon-bc6cb1427f564e6c549429cc9894d1c1d3b62cff2b075a5094dd89976e53ef20.scope: Deactivated successfully.
Oct 02 12:14:40 compute-0 nova_compute[257802]: 2025-10-02 12:14:40.418 2 DEBUG nova.compute.manager [req-7f9d8969-4054-4f23-80ef-167fadfb1b77 req-a3a92ae6-999d-4002-96e2-9ff225e799a0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Received event network-vif-unplugged-238499d2-a655-4052-88a6-ec2f299f80c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:40 compute-0 nova_compute[257802]: 2025-10-02 12:14:40.418 2 DEBUG oslo_concurrency.lockutils [req-7f9d8969-4054-4f23-80ef-167fadfb1b77 req-a3a92ae6-999d-4002-96e2-9ff225e799a0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e9189687-089b-4ee0-9659-211d431f7c3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:40 compute-0 nova_compute[257802]: 2025-10-02 12:14:40.419 2 DEBUG oslo_concurrency.lockutils [req-7f9d8969-4054-4f23-80ef-167fadfb1b77 req-a3a92ae6-999d-4002-96e2-9ff225e799a0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e9189687-089b-4ee0-9659-211d431f7c3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:40 compute-0 nova_compute[257802]: 2025-10-02 12:14:40.419 2 DEBUG oslo_concurrency.lockutils [req-7f9d8969-4054-4f23-80ef-167fadfb1b77 req-a3a92ae6-999d-4002-96e2-9ff225e799a0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e9189687-089b-4ee0-9659-211d431f7c3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:40 compute-0 nova_compute[257802]: 2025-10-02 12:14:40.419 2 DEBUG nova.compute.manager [req-7f9d8969-4054-4f23-80ef-167fadfb1b77 req-a3a92ae6-999d-4002-96e2-9ff225e799a0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] No waiting events found dispatching network-vif-unplugged-238499d2-a655-4052-88a6-ec2f299f80c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:14:40 compute-0 nova_compute[257802]: 2025-10-02 12:14:40.419 2 DEBUG nova.compute.manager [req-7f9d8969-4054-4f23-80ef-167fadfb1b77 req-a3a92ae6-999d-4002-96e2-9ff225e799a0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Received event network-vif-unplugged-238499d2-a655-4052-88a6-ec2f299f80c9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:14:40 compute-0 sudo[297247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:14:40 compute-0 sudo[297247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:40 compute-0 sudo[297247]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:40 compute-0 podman[297260]: 2025-10-02 12:14:40.46315799 +0000 UTC m=+0.054427280 container remove bc6cb1427f564e6c549429cc9894d1c1d3b62cff2b075a5094dd89976e53ef20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65340dfd-fd85-4050-a877-6f149aa218e1, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 12:14:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:40.469 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6473f98e-4df6-4af7-9552-fcc9074cc372]: (4, ('Thu Oct  2 12:14:40 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-65340dfd-fd85-4050-a877-6f149aa218e1 (bc6cb1427f564e6c549429cc9894d1c1d3b62cff2b075a5094dd89976e53ef20)\nbc6cb1427f564e6c549429cc9894d1c1d3b62cff2b075a5094dd89976e53ef20\nThu Oct  2 12:14:40 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-65340dfd-fd85-4050-a877-6f149aa218e1 (bc6cb1427f564e6c549429cc9894d1c1d3b62cff2b075a5094dd89976e53ef20)\nbc6cb1427f564e6c549429cc9894d1c1d3b62cff2b075a5094dd89976e53ef20\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:40.471 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2c1d7d18-4410-429d-9404-c0df507f2f6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:40.472 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65340dfd-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:40 compute-0 kernel: tap65340dfd-f0: left promiscuous mode
Oct 02 12:14:40 compute-0 nova_compute[257802]: 2025-10-02 12:14:40.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:40 compute-0 nova_compute[257802]: 2025-10-02 12:14:40.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:40.480 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2f26c776-18b4-4e11-8232-8cb61947bf21]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:40 compute-0 nova_compute[257802]: 2025-10-02 12:14:40.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:40 compute-0 sudo[297292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:14:40 compute-0 sudo[297292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:40.513 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9f4a1294-5d8e-4adb-a111-6c691e386fef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:40.515 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[83632586-cee2-4692-9ee0-c0c7b75cfcf8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:40 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:14:40 compute-0 ceph-mon[73607]: pgmap v1489: 305 pgs: 305 active+clean; 195 MiB data, 615 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.8 MiB/s wr, 250 op/s
Oct 02 12:14:40 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:14:40 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:14:40 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:14:40 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:14:40 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:14:40 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:14:40 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:14:40 compute-0 sudo[297292]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:40.531 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d6206742-c41f-4d7e-9619-e939952f9483]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532391, 'reachable_time': 42884, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297319, 'error': None, 'target': 'ovnmeta-65340dfd-fd85-4050-a877-6f149aa218e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:40.533 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-65340dfd-fd85-4050-a877-6f149aa218e1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:14:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:14:40.533 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[d03c955c-b0c2-4bd7-a237-1d0923a197e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:40 compute-0 systemd[1]: run-netns-ovnmeta\x2d65340dfd\x2dfd85\x2d4050\x2da877\x2d6f149aa218e1.mount: Deactivated successfully.
Oct 02 12:14:40 compute-0 sudo[297320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:14:40 compute-0 sudo[297320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:40 compute-0 podman[297386]: 2025-10-02 12:14:40.94254698 +0000 UTC m=+0.040890627 container create cad5f93b7531aeb2b93c56e586c3b52617be74bc5de17494ba9df98ae40cd278 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_sutherland, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:14:40 compute-0 systemd[1]: Started libpod-conmon-cad5f93b7531aeb2b93c56e586c3b52617be74bc5de17494ba9df98ae40cd278.scope.
Oct 02 12:14:41 compute-0 podman[297386]: 2025-10-02 12:14:40.924317001 +0000 UTC m=+0.022660648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:14:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:14:41 compute-0 podman[297386]: 2025-10-02 12:14:41.057442076 +0000 UTC m=+0.155785753 container init cad5f93b7531aeb2b93c56e586c3b52617be74bc5de17494ba9df98ae40cd278 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_sutherland, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 12:14:41 compute-0 podman[297386]: 2025-10-02 12:14:41.075203922 +0000 UTC m=+0.173547559 container start cad5f93b7531aeb2b93c56e586c3b52617be74bc5de17494ba9df98ae40cd278 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_sutherland, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:14:41 compute-0 podman[297386]: 2025-10-02 12:14:41.079773335 +0000 UTC m=+0.178117012 container attach cad5f93b7531aeb2b93c56e586c3b52617be74bc5de17494ba9df98ae40cd278 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:14:41 compute-0 funny_sutherland[297402]: 167 167
Oct 02 12:14:41 compute-0 systemd[1]: libpod-cad5f93b7531aeb2b93c56e586c3b52617be74bc5de17494ba9df98ae40cd278.scope: Deactivated successfully.
Oct 02 12:14:41 compute-0 podman[297386]: 2025-10-02 12:14:41.087020373 +0000 UTC m=+0.185364000 container died cad5f93b7531aeb2b93c56e586c3b52617be74bc5de17494ba9df98ae40cd278 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_sutherland, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 12:14:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-098edeea0440eca284f8d89d186c98adae1a22bf06335c4062d436a2d99a4030-merged.mount: Deactivated successfully.
Oct 02 12:14:41 compute-0 podman[297386]: 2025-10-02 12:14:41.144336023 +0000 UTC m=+0.242679640 container remove cad5f93b7531aeb2b93c56e586c3b52617be74bc5de17494ba9df98ae40cd278 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:14:41 compute-0 systemd[1]: libpod-conmon-cad5f93b7531aeb2b93c56e586c3b52617be74bc5de17494ba9df98ae40cd278.scope: Deactivated successfully.
Oct 02 12:14:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 213 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.0 MiB/s wr, 220 op/s
Oct 02 12:14:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Oct 02 12:14:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Oct 02 12:14:41 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Oct 02 12:14:41 compute-0 podman[297425]: 2025-10-02 12:14:41.357121746 +0000 UTC m=+0.052091753 container create 33b781626f05f39f213eece8339777b964edede4b1cb70c5a2f53282d6c1a1f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Oct 02 12:14:41 compute-0 systemd[1]: Started libpod-conmon-33b781626f05f39f213eece8339777b964edede4b1cb70c5a2f53282d6c1a1f0.scope.
Oct 02 12:14:41 compute-0 podman[297425]: 2025-10-02 12:14:41.330357177 +0000 UTC m=+0.025327194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:14:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b85cf4c59b213cfdfee04690814922685e2eae10d7dffc13addb4d2849747df6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b85cf4c59b213cfdfee04690814922685e2eae10d7dffc13addb4d2849747df6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b85cf4c59b213cfdfee04690814922685e2eae10d7dffc13addb4d2849747df6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b85cf4c59b213cfdfee04690814922685e2eae10d7dffc13addb4d2849747df6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b85cf4c59b213cfdfee04690814922685e2eae10d7dffc13addb4d2849747df6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:14:41 compute-0 podman[297425]: 2025-10-02 12:14:41.4776394 +0000 UTC m=+0.172609407 container init 33b781626f05f39f213eece8339777b964edede4b1cb70c5a2f53282d6c1a1f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_kilby, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:14:41 compute-0 podman[297425]: 2025-10-02 12:14:41.497182731 +0000 UTC m=+0.192152708 container start 33b781626f05f39f213eece8339777b964edede4b1cb70c5a2f53282d6c1a1f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:14:41 compute-0 podman[297425]: 2025-10-02 12:14:41.501060915 +0000 UTC m=+0.196030942 container attach 33b781626f05f39f213eece8339777b964edede4b1cb70c5a2f53282d6c1a1f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_kilby, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:14:41 compute-0 nova_compute[257802]: 2025-10-02 12:14:41.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:14:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:41.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:14:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:42.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:42 compute-0 sweet_kilby[297441]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:14:42 compute-0 sweet_kilby[297441]: --> relative data size: 1.0
Oct 02 12:14:42 compute-0 sweet_kilby[297441]: --> All data devices are unavailable
Oct 02 12:14:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:14:42
Oct 02 12:14:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:14:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:14:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['vms', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', 'volumes', '.mgr', '.rgw.root', 'images']
Oct 02 12:14:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:14:42 compute-0 systemd[1]: libpod-33b781626f05f39f213eece8339777b964edede4b1cb70c5a2f53282d6c1a1f0.scope: Deactivated successfully.
Oct 02 12:14:42 compute-0 podman[297425]: 2025-10-02 12:14:42.401966013 +0000 UTC m=+1.096936030 container died 33b781626f05f39f213eece8339777b964edede4b1cb70c5a2f53282d6c1a1f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_kilby, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 12:14:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-b85cf4c59b213cfdfee04690814922685e2eae10d7dffc13addb4d2849747df6-merged.mount: Deactivated successfully.
Oct 02 12:14:42 compute-0 podman[297425]: 2025-10-02 12:14:42.485019355 +0000 UTC m=+1.179989332 container remove 33b781626f05f39f213eece8339777b964edede4b1cb70c5a2f53282d6c1a1f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_kilby, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:14:42 compute-0 systemd[1]: libpod-conmon-33b781626f05f39f213eece8339777b964edede4b1cb70c5a2f53282d6c1a1f0.scope: Deactivated successfully.
Oct 02 12:14:42 compute-0 sudo[297320]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:42 compute-0 nova_compute[257802]: 2025-10-02 12:14:42.542 2 DEBUG nova.compute.manager [req-1f60d570-6c6f-40b8-98e5-167a4c9141af req-117b938f-a9b8-44be-9412-8fbbbd3a4898 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Received event network-vif-plugged-238499d2-a655-4052-88a6-ec2f299f80c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:42 compute-0 nova_compute[257802]: 2025-10-02 12:14:42.542 2 DEBUG oslo_concurrency.lockutils [req-1f60d570-6c6f-40b8-98e5-167a4c9141af req-117b938f-a9b8-44be-9412-8fbbbd3a4898 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e9189687-089b-4ee0-9659-211d431f7c3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:42 compute-0 nova_compute[257802]: 2025-10-02 12:14:42.543 2 DEBUG oslo_concurrency.lockutils [req-1f60d570-6c6f-40b8-98e5-167a4c9141af req-117b938f-a9b8-44be-9412-8fbbbd3a4898 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e9189687-089b-4ee0-9659-211d431f7c3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:42 compute-0 nova_compute[257802]: 2025-10-02 12:14:42.543 2 DEBUG oslo_concurrency.lockutils [req-1f60d570-6c6f-40b8-98e5-167a4c9141af req-117b938f-a9b8-44be-9412-8fbbbd3a4898 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e9189687-089b-4ee0-9659-211d431f7c3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:42 compute-0 nova_compute[257802]: 2025-10-02 12:14:42.543 2 DEBUG nova.compute.manager [req-1f60d570-6c6f-40b8-98e5-167a4c9141af req-117b938f-a9b8-44be-9412-8fbbbd3a4898 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] No waiting events found dispatching network-vif-plugged-238499d2-a655-4052-88a6-ec2f299f80c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:14:42 compute-0 nova_compute[257802]: 2025-10-02 12:14:42.543 2 WARNING nova.compute.manager [req-1f60d570-6c6f-40b8-98e5-167a4c9141af req-117b938f-a9b8-44be-9412-8fbbbd3a4898 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Received unexpected event network-vif-plugged-238499d2-a655-4052-88a6-ec2f299f80c9 for instance with vm_state active and task_state deleting.
Oct 02 12:14:42 compute-0 ceph-mon[73607]: pgmap v1490: 305 pgs: 305 active+clean; 213 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.0 MiB/s wr, 220 op/s
Oct 02 12:14:42 compute-0 ceph-mon[73607]: osdmap e209: 3 total, 3 up, 3 in
Oct 02 12:14:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2550400584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/409931445' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:42 compute-0 sudo[297469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:14:42 compute-0 sudo[297469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:42 compute-0 sudo[297469]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:42 compute-0 sudo[297494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:14:42 compute-0 sudo[297494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:42 compute-0 sudo[297494]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:14:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:14:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:14:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:14:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:14:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:14:42 compute-0 sudo[297519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:14:42 compute-0 sudo[297519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:42 compute-0 sudo[297519]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:42 compute-0 sudo[297544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:14:42 compute-0 sudo[297544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:14:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:14:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:14:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:14:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:14:43 compute-0 nova_compute[257802]: 2025-10-02 12:14:43.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:14:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 213 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.7 MiB/s wr, 175 op/s
Oct 02 12:14:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:14:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:14:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:14:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:14:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:14:43 compute-0 podman[297611]: 2025-10-02 12:14:43.293029917 +0000 UTC m=+0.062782565 container create 59ad43d5d53178a3b946e2606a50f8422662c6b7fc6920324d7a4d46d45139b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:14:43 compute-0 systemd[1]: Started libpod-conmon-59ad43d5d53178a3b946e2606a50f8422662c6b7fc6920324d7a4d46d45139b6.scope.
Oct 02 12:14:43 compute-0 podman[297611]: 2025-10-02 12:14:43.263709686 +0000 UTC m=+0.033462354 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:14:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:14:43 compute-0 podman[297611]: 2025-10-02 12:14:43.396049631 +0000 UTC m=+0.165802249 container init 59ad43d5d53178a3b946e2606a50f8422662c6b7fc6920324d7a4d46d45139b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_cori, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 12:14:43 compute-0 podman[297611]: 2025-10-02 12:14:43.410452945 +0000 UTC m=+0.180205563 container start 59ad43d5d53178a3b946e2606a50f8422662c6b7fc6920324d7a4d46d45139b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_cori, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 12:14:43 compute-0 tender_cori[297627]: 167 167
Oct 02 12:14:43 compute-0 systemd[1]: libpod-59ad43d5d53178a3b946e2606a50f8422662c6b7fc6920324d7a4d46d45139b6.scope: Deactivated successfully.
Oct 02 12:14:43 compute-0 podman[297611]: 2025-10-02 12:14:43.495075097 +0000 UTC m=+0.264827725 container attach 59ad43d5d53178a3b946e2606a50f8422662c6b7fc6920324d7a4d46d45139b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_cori, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:14:43 compute-0 podman[297611]: 2025-10-02 12:14:43.495451886 +0000 UTC m=+0.265204494 container died 59ad43d5d53178a3b946e2606a50f8422662c6b7fc6920324d7a4d46d45139b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:14:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8e7bd8b67d4407a3bcc232531e6deaa18026703c591ea85840364ddcae398ee-merged.mount: Deactivated successfully.
Oct 02 12:14:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:43 compute-0 podman[297611]: 2025-10-02 12:14:43.736196367 +0000 UTC m=+0.505948975 container remove 59ad43d5d53178a3b946e2606a50f8422662c6b7fc6920324d7a4d46d45139b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 12:14:43 compute-0 systemd[1]: libpod-conmon-59ad43d5d53178a3b946e2606a50f8422662c6b7fc6920324d7a4d46d45139b6.scope: Deactivated successfully.
Oct 02 12:14:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2913259531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1650383396' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:43.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:44 compute-0 podman[297651]: 2025-10-02 12:14:43.941620158 +0000 UTC m=+0.043206843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:14:44 compute-0 podman[297651]: 2025-10-02 12:14:44.075072521 +0000 UTC m=+0.176659116 container create e158389459c666057ec397fcfd24ea017b8bc1d06c4a9da8b706126b606bd953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:14:44 compute-0 nova_compute[257802]: 2025-10-02 12:14:44.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:14:44 compute-0 nova_compute[257802]: 2025-10-02 12:14:44.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:14:44 compute-0 systemd[1]: Started libpod-conmon-e158389459c666057ec397fcfd24ea017b8bc1d06c4a9da8b706126b606bd953.scope.
Oct 02 12:14:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/238193baacdd6b7c37a931020d75bba7baf19c3375d592250c10111cb39d877a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/238193baacdd6b7c37a931020d75bba7baf19c3375d592250c10111cb39d877a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/238193baacdd6b7c37a931020d75bba7baf19c3375d592250c10111cb39d877a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/238193baacdd6b7c37a931020d75bba7baf19c3375d592250c10111cb39d877a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:14:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:44.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:44 compute-0 podman[297651]: 2025-10-02 12:14:44.214595002 +0000 UTC m=+0.316181597 container init e158389459c666057ec397fcfd24ea017b8bc1d06c4a9da8b706126b606bd953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_roentgen, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 12:14:44 compute-0 podman[297665]: 2025-10-02 12:14:44.216444988 +0000 UTC m=+0.086177691 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent)
Oct 02 12:14:44 compute-0 podman[297651]: 2025-10-02 12:14:44.227442019 +0000 UTC m=+0.329028614 container start e158389459c666057ec397fcfd24ea017b8bc1d06c4a9da8b706126b606bd953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_roentgen, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 12:14:44 compute-0 podman[297651]: 2025-10-02 12:14:44.23155025 +0000 UTC m=+0.333136865 container attach e158389459c666057ec397fcfd24ea017b8bc1d06c4a9da8b706126b606bd953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_roentgen, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Oct 02 12:14:44 compute-0 ceph-mon[73607]: pgmap v1492: 305 pgs: 305 active+clean; 213 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.7 MiB/s wr, 175 op/s
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]: {
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]:     "1": [
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]:         {
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]:             "devices": [
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]:                 "/dev/loop3"
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]:             ],
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]:             "lv_name": "ceph_lv0",
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]:             "lv_size": "7511998464",
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]:             "name": "ceph_lv0",
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]:             "tags": {
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]:                 "ceph.cluster_name": "ceph",
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]:                 "ceph.crush_device_class": "",
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]:                 "ceph.encrypted": "0",
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]:                 "ceph.osd_id": "1",
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]:                 "ceph.type": "block",
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]:                 "ceph.vdo": "0"
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]:             },
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]:             "type": "block",
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]:             "vg_name": "ceph_vg0"
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]:         }
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]:     ]
Oct 02 12:14:45 compute-0 priceless_roentgen[297681]: }
Oct 02 12:14:45 compute-0 systemd[1]: libpod-e158389459c666057ec397fcfd24ea017b8bc1d06c4a9da8b706126b606bd953.scope: Deactivated successfully.
Oct 02 12:14:45 compute-0 podman[297651]: 2025-10-02 12:14:45.093739774 +0000 UTC m=+1.195326369 container died e158389459c666057ec397fcfd24ea017b8bc1d06c4a9da8b706126b606bd953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_roentgen, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:14:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-238193baacdd6b7c37a931020d75bba7baf19c3375d592250c10111cb39d877a-merged.mount: Deactivated successfully.
Oct 02 12:14:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 192 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 163 op/s
Oct 02 12:14:45 compute-0 nova_compute[257802]: 2025-10-02 12:14:45.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:45 compute-0 podman[297651]: 2025-10-02 12:14:45.5749921 +0000 UTC m=+1.676578685 container remove e158389459c666057ec397fcfd24ea017b8bc1d06c4a9da8b706126b606bd953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_roentgen, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 12:14:45 compute-0 systemd[1]: libpod-conmon-e158389459c666057ec397fcfd24ea017b8bc1d06c4a9da8b706126b606bd953.scope: Deactivated successfully.
Oct 02 12:14:45 compute-0 sudo[297544]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:45 compute-0 sudo[297714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:14:45 compute-0 sudo[297714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:45 compute-0 sudo[297714]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:45 compute-0 sudo[297739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:14:45 compute-0 sudo[297739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:45 compute-0 sudo[297739]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:45 compute-0 sudo[297764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:14:45 compute-0 sudo[297764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:45 compute-0 sudo[297764]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:45 compute-0 sudo[297789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:14:45 compute-0 sudo[297789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:14:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:45.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:14:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:46.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:46 compute-0 podman[297855]: 2025-10-02 12:14:46.489157673 +0000 UTC m=+0.079849505 container create 6e029a05d03ce0b50035550dbae239c03e4a513f90ce27df3acb75a7e15f1391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 12:14:46 compute-0 systemd[1]: Started libpod-conmon-6e029a05d03ce0b50035550dbae239c03e4a513f90ce27df3acb75a7e15f1391.scope.
Oct 02 12:14:46 compute-0 podman[297855]: 2025-10-02 12:14:46.464608788 +0000 UTC m=+0.055300660 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:14:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:14:46 compute-0 podman[297855]: 2025-10-02 12:14:46.576580282 +0000 UTC m=+0.167272144 container init 6e029a05d03ce0b50035550dbae239c03e4a513f90ce27df3acb75a7e15f1391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:14:46 compute-0 podman[297855]: 2025-10-02 12:14:46.58460824 +0000 UTC m=+0.175300122 container start 6e029a05d03ce0b50035550dbae239c03e4a513f90ce27df3acb75a7e15f1391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Oct 02 12:14:46 compute-0 nova_compute[257802]: 2025-10-02 12:14:46.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:46 compute-0 podman[297855]: 2025-10-02 12:14:46.590064644 +0000 UTC m=+0.180756476 container attach 6e029a05d03ce0b50035550dbae239c03e4a513f90ce27df3acb75a7e15f1391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_rhodes, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:14:46 compute-0 systemd[1]: libpod-6e029a05d03ce0b50035550dbae239c03e4a513f90ce27df3acb75a7e15f1391.scope: Deactivated successfully.
Oct 02 12:14:46 compute-0 priceless_rhodes[297873]: 167 167
Oct 02 12:14:46 compute-0 conmon[297873]: conmon 6e029a05d03ce0b50035 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6e029a05d03ce0b50035550dbae239c03e4a513f90ce27df3acb75a7e15f1391.scope/container/memory.events
Oct 02 12:14:46 compute-0 podman[297855]: 2025-10-02 12:14:46.595969399 +0000 UTC m=+0.186661271 container died 6e029a05d03ce0b50035550dbae239c03e4a513f90ce27df3acb75a7e15f1391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_rhodes, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:14:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-0159a29e2c5ebe0341da2f03a9d93e288ef12894fca0ae18afe07d6e61603474-merged.mount: Deactivated successfully.
Oct 02 12:14:46 compute-0 podman[297855]: 2025-10-02 12:14:46.640652198 +0000 UTC m=+0.231344070 container remove 6e029a05d03ce0b50035550dbae239c03e4a513f90ce27df3acb75a7e15f1391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 12:14:46 compute-0 systemd[1]: libpod-conmon-6e029a05d03ce0b50035550dbae239c03e4a513f90ce27df3acb75a7e15f1391.scope: Deactivated successfully.
Oct 02 12:14:46 compute-0 podman[297897]: 2025-10-02 12:14:46.860541716 +0000 UTC m=+0.075286413 container create a54ed5532df6b6a7e96f49a27b0bc08e12581a687e42d6efdf344a9f817bf139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Oct 02 12:14:46 compute-0 systemd[1]: Started libpod-conmon-a54ed5532df6b6a7e96f49a27b0bc08e12581a687e42d6efdf344a9f817bf139.scope.
Oct 02 12:14:46 compute-0 podman[297897]: 2025-10-02 12:14:46.830647731 +0000 UTC m=+0.045392468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:14:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:14:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90331e7a6ff74fe28145babacdb31b6af6df36d6cfb658fc24d2b4d26cc74be2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:14:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90331e7a6ff74fe28145babacdb31b6af6df36d6cfb658fc24d2b4d26cc74be2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:14:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90331e7a6ff74fe28145babacdb31b6af6df36d6cfb658fc24d2b4d26cc74be2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:14:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90331e7a6ff74fe28145babacdb31b6af6df36d6cfb658fc24d2b4d26cc74be2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:14:46 compute-0 podman[297897]: 2025-10-02 12:14:46.965802735 +0000 UTC m=+0.180547432 container init a54ed5532df6b6a7e96f49a27b0bc08e12581a687e42d6efdf344a9f817bf139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 12:14:46 compute-0 podman[297897]: 2025-10-02 12:14:46.972721455 +0000 UTC m=+0.187466112 container start a54ed5532df6b6a7e96f49a27b0bc08e12581a687e42d6efdf344a9f817bf139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_payne, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 12:14:46 compute-0 podman[297897]: 2025-10-02 12:14:46.978258591 +0000 UTC m=+0.193003258 container attach a54ed5532df6b6a7e96f49a27b0bc08e12581a687e42d6efdf344a9f817bf139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_payne, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:14:47 compute-0 ceph-mon[73607]: pgmap v1493: 305 pgs: 305 active+clean; 192 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 163 op/s
Oct 02 12:14:47 compute-0 nova_compute[257802]: 2025-10-02 12:14:47.008 2 INFO nova.virt.libvirt.driver [None req-507428f1-a4b2-4424-bf87-415cf3e66f4a 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Deleting instance files /var/lib/nova/instances/e9189687-089b-4ee0-9659-211d431f7c3c_del
Oct 02 12:14:47 compute-0 nova_compute[257802]: 2025-10-02 12:14:47.010 2 INFO nova.virt.libvirt.driver [None req-507428f1-a4b2-4424-bf87-415cf3e66f4a 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Deletion of /var/lib/nova/instances/e9189687-089b-4ee0-9659-211d431f7c3c_del complete
Oct 02 12:14:47 compute-0 nova_compute[257802]: 2025-10-02 12:14:47.085 2 INFO nova.compute.manager [None req-507428f1-a4b2-4424-bf87-415cf3e66f4a 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Took 8.21 seconds to destroy the instance on the hypervisor.
Oct 02 12:14:47 compute-0 nova_compute[257802]: 2025-10-02 12:14:47.085 2 DEBUG oslo.service.loopingcall [None req-507428f1-a4b2-4424-bf87-415cf3e66f4a 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:14:47 compute-0 nova_compute[257802]: 2025-10-02 12:14:47.086 2 DEBUG nova.compute.manager [-] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:14:47 compute-0 nova_compute[257802]: 2025-10-02 12:14:47.086 2 DEBUG nova.network.neutron [-] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:14:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 171 MiB data, 612 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.2 MiB/s wr, 144 op/s
Oct 02 12:14:47 compute-0 nifty_payne[297914]: {
Oct 02 12:14:47 compute-0 nifty_payne[297914]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:14:47 compute-0 nifty_payne[297914]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:14:47 compute-0 nifty_payne[297914]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:14:47 compute-0 nifty_payne[297914]:         "osd_id": 1,
Oct 02 12:14:47 compute-0 nifty_payne[297914]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:14:47 compute-0 nifty_payne[297914]:         "type": "bluestore"
Oct 02 12:14:47 compute-0 nifty_payne[297914]:     }
Oct 02 12:14:47 compute-0 nifty_payne[297914]: }
Oct 02 12:14:47 compute-0 systemd[1]: libpod-a54ed5532df6b6a7e96f49a27b0bc08e12581a687e42d6efdf344a9f817bf139.scope: Deactivated successfully.
Oct 02 12:14:47 compute-0 podman[297897]: 2025-10-02 12:14:47.875003796 +0000 UTC m=+1.089748503 container died a54ed5532df6b6a7e96f49a27b0bc08e12581a687e42d6efdf344a9f817bf139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_payne, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 12:14:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-90331e7a6ff74fe28145babacdb31b6af6df36d6cfb658fc24d2b4d26cc74be2-merged.mount: Deactivated successfully.
Oct 02 12:14:47 compute-0 podman[297897]: 2025-10-02 12:14:47.947868998 +0000 UTC m=+1.162613655 container remove a54ed5532df6b6a7e96f49a27b0bc08e12581a687e42d6efdf344a9f817bf139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:14:47 compute-0 systemd[1]: libpod-conmon-a54ed5532df6b6a7e96f49a27b0bc08e12581a687e42d6efdf344a9f817bf139.scope: Deactivated successfully.
Oct 02 12:14:47 compute-0 sudo[297789]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:14:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:14:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:47.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:14:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Oct 02 12:14:48 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:14:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:14:48 compute-0 nova_compute[257802]: 2025-10-02 12:14:48.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:14:48 compute-0 nova_compute[257802]: 2025-10-02 12:14:48.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:14:48 compute-0 nova_compute[257802]: 2025-10-02 12:14:48.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:14:48 compute-0 nova_compute[257802]: 2025-10-02 12:14:48.174 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 02 12:14:48 compute-0 nova_compute[257802]: 2025-10-02 12:14:48.174 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:14:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Oct 02 12:14:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1652653518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:48 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Oct 02 12:14:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:48.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:48 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:14:48 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev ccfcead0-8046-4542-a2f2-d85014b6ea19 does not exist
Oct 02 12:14:48 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev c9a2d688-a88e-421e-a081-3be3c0fae52b does not exist
Oct 02 12:14:48 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 785720d2-699d-422f-8c7e-0cec6c23fa1e does not exist
Oct 02 12:14:48 compute-0 sudo[297951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:14:48 compute-0 sudo[297951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:48 compute-0 sudo[297951]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:48 compute-0 sudo[297976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:14:48 compute-0 sudo[297976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:48 compute-0 sudo[297976]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:48 compute-0 nova_compute[257802]: 2025-10-02 12:14:48.977 2 DEBUG nova.network.neutron [-] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:14:49 compute-0 nova_compute[257802]: 2025-10-02 12:14:49.039 2 INFO nova.compute.manager [-] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Took 1.95 seconds to deallocate network for instance.
Oct 02 12:14:49 compute-0 nova_compute[257802]: 2025-10-02 12:14:49.175 2 DEBUG nova.compute.manager [req-fc3573a6-245f-46a3-b08e-209ef58f43b8 req-20a17f2a-04e4-4d22-9347-eef4b6c8c985 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Received event network-vif-deleted-238499d2-a655-4052-88a6-ec2f299f80c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:49 compute-0 nova_compute[257802]: 2025-10-02 12:14:49.175 2 INFO nova.compute.manager [req-fc3573a6-245f-46a3-b08e-209ef58f43b8 req-20a17f2a-04e4-4d22-9347-eef4b6c8c985 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Neutron deleted interface 238499d2-a655-4052-88a6-ec2f299f80c9; detaching it from the instance and deleting it from the info cache
Oct 02 12:14:49 compute-0 nova_compute[257802]: 2025-10-02 12:14:49.176 2 DEBUG nova.network.neutron [req-fc3573a6-245f-46a3-b08e-209ef58f43b8 req-20a17f2a-04e4-4d22-9347-eef4b6c8c985 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:14:49 compute-0 nova_compute[257802]: 2025-10-02 12:14:49.220 2 DEBUG oslo_concurrency.lockutils [None req-507428f1-a4b2-4424-bf87-415cf3e66f4a 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:49 compute-0 nova_compute[257802]: 2025-10-02 12:14:49.220 2 DEBUG oslo_concurrency.lockutils [None req-507428f1-a4b2-4424-bf87-415cf3e66f4a 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 184 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.8 MiB/s wr, 273 op/s
Oct 02 12:14:49 compute-0 nova_compute[257802]: 2025-10-02 12:14:49.234 2 DEBUG nova.compute.manager [req-fc3573a6-245f-46a3-b08e-209ef58f43b8 req-20a17f2a-04e4-4d22-9347-eef4b6c8c985 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Detach interface failed, port_id=238499d2-a655-4052-88a6-ec2f299f80c9, reason: Instance e9189687-089b-4ee0-9659-211d431f7c3c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:14:49 compute-0 nova_compute[257802]: 2025-10-02 12:14:49.279 2 DEBUG oslo_concurrency.processutils [None req-507428f1-a4b2-4424-bf87-415cf3e66f4a 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:49 compute-0 ceph-mon[73607]: pgmap v1494: 305 pgs: 305 active+clean; 171 MiB data, 612 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.2 MiB/s wr, 144 op/s
Oct 02 12:14:49 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:14:49 compute-0 ceph-mon[73607]: osdmap e210: 3 total, 3 up, 3 in
Oct 02 12:14:49 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:14:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:14:49 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/953912691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:49 compute-0 nova_compute[257802]: 2025-10-02 12:14:49.727 2 DEBUG oslo_concurrency.processutils [None req-507428f1-a4b2-4424-bf87-415cf3e66f4a 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:49 compute-0 nova_compute[257802]: 2025-10-02 12:14:49.733 2 DEBUG nova.compute.provider_tree [None req-507428f1-a4b2-4424-bf87-415cf3e66f4a 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:14:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Oct 02 12:14:49 compute-0 nova_compute[257802]: 2025-10-02 12:14:49.801 2 DEBUG nova.scheduler.client.report [None req-507428f1-a4b2-4424-bf87-415cf3e66f4a 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:14:49 compute-0 nova_compute[257802]: 2025-10-02 12:14:49.827 2 DEBUG oslo_concurrency.lockutils [None req-507428f1-a4b2-4424-bf87-415cf3e66f4a 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:49 compute-0 nova_compute[257802]: 2025-10-02 12:14:49.879 2 INFO nova.scheduler.client.report [None req-507428f1-a4b2-4424-bf87-415cf3e66f4a 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Deleted allocations for instance e9189687-089b-4ee0-9659-211d431f7c3c
Oct 02 12:14:49 compute-0 podman[298024]: 2025-10-02 12:14:49.928037539 +0000 UTC m=+0.062238191 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible)
Oct 02 12:14:49 compute-0 podman[298023]: 2025-10-02 12:14:49.933574445 +0000 UTC m=+0.067815488 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Oct 02 12:14:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:14:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:49.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:14:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Oct 02 12:14:50 compute-0 nova_compute[257802]: 2025-10-02 12:14:50.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:14:50 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Oct 02 12:14:50 compute-0 nova_compute[257802]: 2025-10-02 12:14:50.177 2 DEBUG oslo_concurrency.lockutils [None req-507428f1-a4b2-4424-bf87-415cf3e66f4a 1c085e57b7d944c7a372e604d34c96a3 635effeb91ed4a7fb29066f894e25c5a - - default default] Lock "e9189687-089b-4ee0-9659-211d431f7c3c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.309s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:50 compute-0 nova_compute[257802]: 2025-10-02 12:14:50.181 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:50 compute-0 nova_compute[257802]: 2025-10-02 12:14:50.181 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:50 compute-0 nova_compute[257802]: 2025-10-02 12:14:50.182 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:50 compute-0 nova_compute[257802]: 2025-10-02 12:14:50.182 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:14:50 compute-0 nova_compute[257802]: 2025-10-02 12:14:50.183 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:50.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:50 compute-0 nova_compute[257802]: 2025-10-02 12:14:50.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:50 compute-0 ceph-mon[73607]: pgmap v1496: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 184 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.8 MiB/s wr, 273 op/s
Oct 02 12:14:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/520319809' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2811471746' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/953912691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:50 compute-0 ceph-mon[73607]: osdmap e211: 3 total, 3 up, 3 in
Oct 02 12:14:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:14:50 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3135714315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:50 compute-0 nova_compute[257802]: 2025-10-02 12:14:50.640 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:50 compute-0 nova_compute[257802]: 2025-10-02 12:14:50.803 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:14:50 compute-0 nova_compute[257802]: 2025-10-02 12:14:50.804 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4617MB free_disk=20.930648803710938GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:14:50 compute-0 nova_compute[257802]: 2025-10-02 12:14:50.805 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:50 compute-0 nova_compute[257802]: 2025-10-02 12:14:50.805 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:50 compute-0 nova_compute[257802]: 2025-10-02 12:14:50.901 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:14:50 compute-0 nova_compute[257802]: 2025-10-02 12:14:50.901 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:14:50 compute-0 nova_compute[257802]: 2025-10-02 12:14:50.935 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 208 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.7 MiB/s wr, 304 op/s
Oct 02 12:14:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:14:51 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3927349182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:51 compute-0 nova_compute[257802]: 2025-10-02 12:14:51.369 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:51 compute-0 nova_compute[257802]: 2025-10-02 12:14:51.375 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:14:51 compute-0 nova_compute[257802]: 2025-10-02 12:14:51.406 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:14:51 compute-0 nova_compute[257802]: 2025-10-02 12:14:51.462 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:14:51 compute-0 nova_compute[257802]: 2025-10-02 12:14:51.462 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:51 compute-0 nova_compute[257802]: 2025-10-02 12:14:51.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3135714315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3927349182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:51.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:52.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:53 compute-0 ceph-mon[73607]: pgmap v1498: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 208 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.7 MiB/s wr, 304 op/s
Oct 02 12:14:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 245 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 7.6 MiB/s wr, 364 op/s
Oct 02 12:14:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:54.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:54.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:14:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:14:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:14:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:14:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0038161001768099757 of space, bias 1.0, pg target 1.1448300530429927 quantized to 32 (current 32)
Oct 02 12:14:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:14:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Oct 02 12:14:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:14:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:14:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:14:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002892064489112228 of space, bias 1.0, pg target 0.8647272822445562 quantized to 32 (current 32)
Oct 02 12:14:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:14:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 12:14:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:14:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:14:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:14:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027172174530057695 quantized to 32 (current 32)
Oct 02 12:14:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:14:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 12:14:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:14:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:14:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:14:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 12:14:54 compute-0 ceph-mon[73607]: pgmap v1499: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 245 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 7.6 MiB/s wr, 364 op/s
Oct 02 12:14:54 compute-0 nova_compute[257802]: 2025-10-02 12:14:54.462 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:14:54 compute-0 podman[298108]: 2025-10-02 12:14:54.964373062 +0000 UTC m=+0.102549043 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:14:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 252 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 7.9 MiB/s wr, 381 op/s
Oct 02 12:14:55 compute-0 nova_compute[257802]: 2025-10-02 12:14:55.309 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407280.3079915, e9189687-089b-4ee0-9659-211d431f7c3c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:14:55 compute-0 nova_compute[257802]: 2025-10-02 12:14:55.309 2 INFO nova.compute.manager [-] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] VM Stopped (Lifecycle Event)
Oct 02 12:14:55 compute-0 nova_compute[257802]: 2025-10-02 12:14:55.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:55 compute-0 nova_compute[257802]: 2025-10-02 12:14:55.389 2 DEBUG nova.compute.manager [None req-b81061ee-d426-4abc-8fc9-d331d267a8db - - - - - -] [instance: e9189687-089b-4ee0-9659-211d431f7c3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:14:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Oct 02 12:14:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:56.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/299602845' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:14:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/299602845' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:14:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:56.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:56 compute-0 sudo[298134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:14:56 compute-0 sudo[298134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:56 compute-0 sudo[298134]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Oct 02 12:14:56 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Oct 02 12:14:56 compute-0 sudo[298159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:14:56 compute-0 sudo[298159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:56 compute-0 sudo[298159]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:56 compute-0 nova_compute[257802]: 2025-10-02 12:14:56.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:57 compute-0 ceph-mon[73607]: pgmap v1500: 305 pgs: 305 active+clean; 252 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 7.9 MiB/s wr, 381 op/s
Oct 02 12:14:57 compute-0 ceph-mon[73607]: osdmap e212: 3 total, 3 up, 3 in
Oct 02 12:14:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 260 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.8 MiB/s wr, 239 op/s
Oct 02 12:14:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:58.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:58 compute-0 nova_compute[257802]: 2025-10-02 12:14:58.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:14:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:58.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:58 compute-0 nova_compute[257802]: 2025-10-02 12:14:58.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:58 compute-0 ceph-mon[73607]: pgmap v1502: 305 pgs: 305 active+clean; 260 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.8 MiB/s wr, 239 op/s
Oct 02 12:14:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Oct 02 12:14:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Oct 02 12:14:58 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Oct 02 12:14:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 260 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.8 MiB/s wr, 210 op/s
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:59.294284) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407299294359, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 528, "num_deletes": 252, "total_data_size": 535937, "memory_usage": 547232, "flush_reason": "Manual Compaction"}
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407299302653, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 520019, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33607, "largest_seqno": 34134, "table_properties": {"data_size": 517003, "index_size": 988, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7610, "raw_average_key_size": 19, "raw_value_size": 510696, "raw_average_value_size": 1336, "num_data_blocks": 43, "num_entries": 382, "num_filter_entries": 382, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407278, "oldest_key_time": 1759407278, "file_creation_time": 1759407299, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 8428 microseconds, and 4434 cpu microseconds.
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:59.302713) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 520019 bytes OK
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:59.302741) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:59.306068) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:59.306102) EVENT_LOG_v1 {"time_micros": 1759407299306091, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:59.306131) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 532841, prev total WAL file size 532841, number of live WAL files 2.
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:59.307199) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(507KB)], [71(9273KB)]
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407299307249, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 10015960, "oldest_snapshot_seqno": -1}
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5883 keys, 8184651 bytes, temperature: kUnknown
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407299387484, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 8184651, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8146843, "index_size": 22018, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14725, "raw_key_size": 151238, "raw_average_key_size": 25, "raw_value_size": 8042639, "raw_average_value_size": 1367, "num_data_blocks": 880, "num_entries": 5883, "num_filter_entries": 5883, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759407299, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:59.388509) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 8184651 bytes
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:59.390937) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 123.6 rd, 101.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.1 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(35.0) write-amplify(15.7) OK, records in: 6404, records dropped: 521 output_compression: NoCompression
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:59.390970) EVENT_LOG_v1 {"time_micros": 1759407299390956, "job": 40, "event": "compaction_finished", "compaction_time_micros": 81045, "compaction_time_cpu_micros": 38099, "output_level": 6, "num_output_files": 1, "total_output_size": 8184651, "num_input_records": 6404, "num_output_records": 5883, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407299391548, "job": 40, "event": "table_file_deletion", "file_number": 73}
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407299395748, "job": 40, "event": "table_file_deletion", "file_number": 71}
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:59.307071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:59.395884) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:59.395893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:59.395896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:59.395900) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:14:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:14:59.395903) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:15:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:15:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:00.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:15:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:15:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:00.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:15:00 compute-0 nova_compute[257802]: 2025-10-02 12:15:00.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:00 compute-0 ceph-mon[73607]: osdmap e213: 3 total, 3 up, 3 in
Oct 02 12:15:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 305 active+clean; 244 MiB data, 646 MiB used, 20 GiB / 21 GiB avail; 752 KiB/s rd, 932 KiB/s wr, 137 op/s
Oct 02 12:15:01 compute-0 nova_compute[257802]: 2025-10-02 12:15:01.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:01 compute-0 ceph-mon[73607]: pgmap v1504: 305 pgs: 305 active+clean; 260 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.8 MiB/s wr, 210 op/s
Oct 02 12:15:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3679141303' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:02.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:02.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:02 compute-0 ceph-mon[73607]: pgmap v1505: 305 pgs: 305 active+clean; 244 MiB data, 646 MiB used, 20 GiB / 21 GiB avail; 752 KiB/s rd, 932 KiB/s wr, 137 op/s
Oct 02 12:15:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3454801387' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 252 MiB data, 655 MiB used, 20 GiB / 21 GiB avail; 94 KiB/s rd, 2.1 MiB/s wr, 112 op/s
Oct 02 12:15:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Oct 02 12:15:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Oct 02 12:15:03 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Oct 02 12:15:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:04.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:15:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:04.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:15:04 compute-0 ceph-mon[73607]: pgmap v1506: 305 pgs: 305 active+clean; 252 MiB data, 655 MiB used, 20 GiB / 21 GiB avail; 94 KiB/s rd, 2.1 MiB/s wr, 112 op/s
Oct 02 12:15:04 compute-0 ceph-mon[73607]: osdmap e214: 3 total, 3 up, 3 in
Oct 02 12:15:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 233 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 191 KiB/s rd, 2.4 MiB/s wr, 146 op/s
Oct 02 12:15:05 compute-0 nova_compute[257802]: 2025-10-02 12:15:05.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:06.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:06.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:06 compute-0 nova_compute[257802]: 2025-10-02 12:15:06.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:06 compute-0 ceph-mon[73607]: pgmap v1508: 305 pgs: 305 active+clean; 233 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 191 KiB/s rd, 2.4 MiB/s wr, 146 op/s
Oct 02 12:15:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 235 MiB data, 675 MiB used, 20 GiB / 21 GiB avail; 276 KiB/s rd, 3.0 MiB/s wr, 177 op/s
Oct 02 12:15:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:08.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:15:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:08.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:15:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:08 compute-0 ceph-mon[73607]: pgmap v1509: 305 pgs: 305 active+clean; 235 MiB data, 675 MiB used, 20 GiB / 21 GiB avail; 276 KiB/s rd, 3.0 MiB/s wr, 177 op/s
Oct 02 12:15:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 188 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 247 op/s
Oct 02 12:15:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1171195345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:15:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:10.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:15:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:10.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:10 compute-0 nova_compute[257802]: 2025-10-02 12:15:10.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:10 compute-0 ceph-mon[73607]: pgmap v1510: 305 pgs: 305 active+clean; 188 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 247 op/s
Oct 02 12:15:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 160 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.6 MiB/s wr, 257 op/s
Oct 02 12:15:11 compute-0 nova_compute[257802]: 2025-10-02 12:15:11.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:12.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:12.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:15:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:15:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:15:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:15:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:15:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:15:12 compute-0 ceph-mon[73607]: pgmap v1511: 305 pgs: 305 active+clean; 160 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.6 MiB/s wr, 257 op/s
Oct 02 12:15:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 127 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.3 MiB/s wr, 259 op/s
Oct 02 12:15:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:14.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:15:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:14.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:15:14 compute-0 podman[298196]: 2025-10-02 12:15:14.954491599 +0000 UTC m=+0.089544184 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 02 12:15:15 compute-0 ceph-mon[73607]: pgmap v1512: 305 pgs: 305 active+clean; 127 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.3 MiB/s wr, 259 op/s
Oct 02 12:15:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2995499838' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 121 MiB data, 619 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.1 MiB/s wr, 236 op/s
Oct 02 12:15:15 compute-0 nova_compute[257802]: 2025-10-02 12:15:15.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:16.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:15:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:16.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:15:16 compute-0 ceph-mon[73607]: pgmap v1513: 305 pgs: 305 active+clean; 121 MiB data, 619 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.1 MiB/s wr, 236 op/s
Oct 02 12:15:16 compute-0 sudo[298217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:16 compute-0 sudo[298217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:16 compute-0 sudo[298217]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:16 compute-0 sudo[298242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:16 compute-0 sudo[298242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:16 compute-0 sudo[298242]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:16 compute-0 nova_compute[257802]: 2025-10-02 12:15:16.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 121 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 577 KiB/s wr, 193 op/s
Oct 02 12:15:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:15:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:18.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:15:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:15:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:18.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:15:18 compute-0 ceph-mon[73607]: pgmap v1514: 305 pgs: 305 active+clean; 121 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 577 KiB/s wr, 193 op/s
Oct 02 12:15:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 121 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 102 KiB/s wr, 162 op/s
Oct 02 12:15:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3477163817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:19 compute-0 nova_compute[257802]: 2025-10-02 12:15:19.856 2 DEBUG oslo_concurrency.lockutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Acquiring lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:19 compute-0 nova_compute[257802]: 2025-10-02 12:15:19.857 2 DEBUG oslo_concurrency.lockutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:19 compute-0 nova_compute[257802]: 2025-10-02 12:15:19.879 2 DEBUG nova.compute.manager [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:15:19 compute-0 nova_compute[257802]: 2025-10-02 12:15:19.976 2 DEBUG oslo_concurrency.lockutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:19 compute-0 nova_compute[257802]: 2025-10-02 12:15:19.976 2 DEBUG oslo_concurrency.lockutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:19 compute-0 nova_compute[257802]: 2025-10-02 12:15:19.982 2 DEBUG nova.virt.hardware [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:15:19 compute-0 nova_compute[257802]: 2025-10-02 12:15:19.983 2 INFO nova.compute.claims [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:15:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:15:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:20.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:15:20 compute-0 nova_compute[257802]: 2025-10-02 12:15:20.105 2 DEBUG oslo_concurrency.processutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:20.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:20 compute-0 nova_compute[257802]: 2025-10-02 12:15:20.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:15:20 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3218584264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:20 compute-0 ceph-mon[73607]: pgmap v1515: 305 pgs: 305 active+clean; 121 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 102 KiB/s wr, 162 op/s
Oct 02 12:15:20 compute-0 nova_compute[257802]: 2025-10-02 12:15:20.577 2 DEBUG oslo_concurrency.processutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:20 compute-0 nova_compute[257802]: 2025-10-02 12:15:20.584 2 DEBUG nova.compute.provider_tree [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:15:20 compute-0 nova_compute[257802]: 2025-10-02 12:15:20.631 2 DEBUG nova.scheduler.client.report [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:15:20 compute-0 nova_compute[257802]: 2025-10-02 12:15:20.661 2 DEBUG oslo_concurrency.lockutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:20 compute-0 nova_compute[257802]: 2025-10-02 12:15:20.662 2 DEBUG nova.compute.manager [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:15:20 compute-0 nova_compute[257802]: 2025-10-02 12:15:20.727 2 DEBUG nova.compute.manager [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:15:20 compute-0 nova_compute[257802]: 2025-10-02 12:15:20.728 2 DEBUG nova.network.neutron [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:15:20 compute-0 nova_compute[257802]: 2025-10-02 12:15:20.750 2 INFO nova.virt.libvirt.driver [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:15:20 compute-0 nova_compute[257802]: 2025-10-02 12:15:20.783 2 DEBUG nova.compute.manager [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:15:20 compute-0 nova_compute[257802]: 2025-10-02 12:15:20.879 2 DEBUG nova.compute.manager [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:15:20 compute-0 nova_compute[257802]: 2025-10-02 12:15:20.880 2 DEBUG nova.virt.libvirt.driver [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:15:20 compute-0 nova_compute[257802]: 2025-10-02 12:15:20.881 2 INFO nova.virt.libvirt.driver [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Creating image(s)
Oct 02 12:15:20 compute-0 nova_compute[257802]: 2025-10-02 12:15:20.919 2 DEBUG nova.storage.rbd_utils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] rbd image ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:20 compute-0 podman[298292]: 2025-10-02 12:15:20.920499816 +0000 UTC m=+0.056186213 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 12:15:20 compute-0 podman[298291]: 2025-10-02 12:15:20.935401563 +0000 UTC m=+0.075382365 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd)
Oct 02 12:15:20 compute-0 nova_compute[257802]: 2025-10-02 12:15:20.951 2 DEBUG nova.storage.rbd_utils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] rbd image ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:20 compute-0 nova_compute[257802]: 2025-10-02 12:15:20.976 2 DEBUG nova.storage.rbd_utils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] rbd image ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:20 compute-0 nova_compute[257802]: 2025-10-02 12:15:20.979 2 DEBUG oslo_concurrency.processutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:21 compute-0 nova_compute[257802]: 2025-10-02 12:15:21.040 2 DEBUG oslo_concurrency.processutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:21 compute-0 nova_compute[257802]: 2025-10-02 12:15:21.041 2 DEBUG oslo_concurrency.lockutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:21 compute-0 nova_compute[257802]: 2025-10-02 12:15:21.042 2 DEBUG oslo_concurrency.lockutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:21 compute-0 nova_compute[257802]: 2025-10-02 12:15:21.042 2 DEBUG oslo_concurrency.lockutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:21 compute-0 nova_compute[257802]: 2025-10-02 12:15:21.072 2 DEBUG nova.storage.rbd_utils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] rbd image ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:21 compute-0 nova_compute[257802]: 2025-10-02 12:15:21.076 2 DEBUG oslo_concurrency.processutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:21 compute-0 nova_compute[257802]: 2025-10-02 12:15:21.136 2 DEBUG nova.policy [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '51229018510440858be9691ef4a0965f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a0b70ca4a75844a9a91b4e116bc58df9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:15:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 129 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 554 KiB/s rd, 405 KiB/s wr, 63 op/s
Oct 02 12:15:21 compute-0 nova_compute[257802]: 2025-10-02 12:15:21.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3218584264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:21 compute-0 nova_compute[257802]: 2025-10-02 12:15:21.990 2 DEBUG oslo_concurrency.processutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.914s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:22.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:22 compute-0 nova_compute[257802]: 2025-10-02 12:15:22.060 2 DEBUG nova.storage.rbd_utils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] resizing rbd image ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:15:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:22.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:22 compute-0 nova_compute[257802]: 2025-10-02 12:15:22.725 2 DEBUG nova.objects.instance [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Lazy-loading 'migration_context' on Instance uuid ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:22 compute-0 nova_compute[257802]: 2025-10-02 12:15:22.746 2 DEBUG nova.virt.libvirt.driver [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:15:22 compute-0 nova_compute[257802]: 2025-10-02 12:15:22.747 2 DEBUG nova.virt.libvirt.driver [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Ensure instance console log exists: /var/lib/nova/instances/ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:15:22 compute-0 nova_compute[257802]: 2025-10-02 12:15:22.747 2 DEBUG oslo_concurrency.lockutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:22 compute-0 nova_compute[257802]: 2025-10-02 12:15:22.747 2 DEBUG oslo_concurrency.lockutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:22 compute-0 nova_compute[257802]: 2025-10-02 12:15:22.748 2 DEBUG oslo_concurrency.lockutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:23 compute-0 ceph-mon[73607]: pgmap v1516: 305 pgs: 305 active+clean; 129 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 554 KiB/s rd, 405 KiB/s wr, 63 op/s
Oct 02 12:15:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 171 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 165 KiB/s rd, 1.7 MiB/s wr, 47 op/s
Oct 02 12:15:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:23 compute-0 nova_compute[257802]: 2025-10-02 12:15:23.766 2 DEBUG nova.network.neutron [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Successfully created port: 4b13d695-1222-496f-b948-e5b9f432d54f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:15:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:24.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:24 compute-0 ceph-mon[73607]: pgmap v1517: 305 pgs: 305 active+clean; 171 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 165 KiB/s rd, 1.7 MiB/s wr, 47 op/s
Oct 02 12:15:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:24.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:25 compute-0 nova_compute[257802]: 2025-10-02 12:15:25.154 2 DEBUG nova.network.neutron [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Successfully updated port: 4b13d695-1222-496f-b948-e5b9f432d54f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:15:25 compute-0 nova_compute[257802]: 2025-10-02 12:15:25.183 2 DEBUG oslo_concurrency.lockutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Acquiring lock "refresh_cache-ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:15:25 compute-0 nova_compute[257802]: 2025-10-02 12:15:25.183 2 DEBUG oslo_concurrency.lockutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Acquired lock "refresh_cache-ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:15:25 compute-0 nova_compute[257802]: 2025-10-02 12:15:25.183 2 DEBUG nova.network.neutron [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:15:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/550724469' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1953675138' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 202 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 3.3 MiB/s wr, 42 op/s
Oct 02 12:15:25 compute-0 nova_compute[257802]: 2025-10-02 12:15:25.299 2 DEBUG nova.compute.manager [req-d136bce5-90a3-4e56-b601-90655186dd8d req-a0405800-105d-47e9-9d24-d3379d4860f4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Received event network-changed-4b13d695-1222-496f-b948-e5b9f432d54f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:15:25 compute-0 nova_compute[257802]: 2025-10-02 12:15:25.299 2 DEBUG nova.compute.manager [req-d136bce5-90a3-4e56-b601-90655186dd8d req-a0405800-105d-47e9-9d24-d3379d4860f4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Refreshing instance network info cache due to event network-changed-4b13d695-1222-496f-b948-e5b9f432d54f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:15:25 compute-0 nova_compute[257802]: 2025-10-02 12:15:25.300 2 DEBUG oslo_concurrency.lockutils [req-d136bce5-90a3-4e56-b601-90655186dd8d req-a0405800-105d-47e9-9d24-d3379d4860f4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:15:25 compute-0 nova_compute[257802]: 2025-10-02 12:15:25.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:25 compute-0 nova_compute[257802]: 2025-10-02 12:15:25.592 2 DEBUG nova.network.neutron [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:15:25 compute-0 podman[298497]: 2025-10-02 12:15:25.987748969 +0000 UTC m=+0.120210717 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:15:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:26.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:26 compute-0 ceph-mon[73607]: pgmap v1518: 305 pgs: 305 active+clean; 202 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 3.3 MiB/s wr, 42 op/s
Oct 02 12:15:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:26.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:26 compute-0 nova_compute[257802]: 2025-10-02 12:15:26.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:26 compute-0 nova_compute[257802]: 2025-10-02 12:15:26.911 2 DEBUG nova.network.neutron [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Updating instance_info_cache with network_info: [{"id": "4b13d695-1222-496f-b948-e5b9f432d54f", "address": "fa:16:3e:08:96:d3", "network": {"id": "efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1657089235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0b70ca4a75844a9a91b4e116bc58df9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b13d695-12", "ovs_interfaceid": "4b13d695-1222-496f-b948-e5b9f432d54f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:15:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:26.932 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:26.933 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:26.933 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:26 compute-0 nova_compute[257802]: 2025-10-02 12:15:26.935 2 DEBUG oslo_concurrency.lockutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Releasing lock "refresh_cache-ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:15:26 compute-0 nova_compute[257802]: 2025-10-02 12:15:26.936 2 DEBUG nova.compute.manager [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Instance network_info: |[{"id": "4b13d695-1222-496f-b948-e5b9f432d54f", "address": "fa:16:3e:08:96:d3", "network": {"id": "efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1657089235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0b70ca4a75844a9a91b4e116bc58df9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b13d695-12", "ovs_interfaceid": "4b13d695-1222-496f-b948-e5b9f432d54f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:15:26 compute-0 nova_compute[257802]: 2025-10-02 12:15:26.936 2 DEBUG oslo_concurrency.lockutils [req-d136bce5-90a3-4e56-b601-90655186dd8d req-a0405800-105d-47e9-9d24-d3379d4860f4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:15:26 compute-0 nova_compute[257802]: 2025-10-02 12:15:26.937 2 DEBUG nova.network.neutron [req-d136bce5-90a3-4e56-b601-90655186dd8d req-a0405800-105d-47e9-9d24-d3379d4860f4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Refreshing network info cache for port 4b13d695-1222-496f-b948-e5b9f432d54f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:15:26 compute-0 nova_compute[257802]: 2025-10-02 12:15:26.941 2 DEBUG nova.virt.libvirt.driver [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Start _get_guest_xml network_info=[{"id": "4b13d695-1222-496f-b948-e5b9f432d54f", "address": "fa:16:3e:08:96:d3", "network": {"id": "efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1657089235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0b70ca4a75844a9a91b4e116bc58df9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b13d695-12", "ovs_interfaceid": "4b13d695-1222-496f-b948-e5b9f432d54f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:15:26 compute-0 nova_compute[257802]: 2025-10-02 12:15:26.945 2 WARNING nova.virt.libvirt.driver [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:15:26 compute-0 nova_compute[257802]: 2025-10-02 12:15:26.950 2 DEBUG nova.virt.libvirt.host [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:15:26 compute-0 nova_compute[257802]: 2025-10-02 12:15:26.951 2 DEBUG nova.virt.libvirt.host [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:15:26 compute-0 nova_compute[257802]: 2025-10-02 12:15:26.954 2 DEBUG nova.virt.libvirt.host [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:15:26 compute-0 nova_compute[257802]: 2025-10-02 12:15:26.955 2 DEBUG nova.virt.libvirt.host [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:15:26 compute-0 nova_compute[257802]: 2025-10-02 12:15:26.956 2 DEBUG nova.virt.libvirt.driver [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:15:26 compute-0 nova_compute[257802]: 2025-10-02 12:15:26.957 2 DEBUG nova.virt.hardware [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:15:26 compute-0 nova_compute[257802]: 2025-10-02 12:15:26.958 2 DEBUG nova.virt.hardware [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:15:26 compute-0 nova_compute[257802]: 2025-10-02 12:15:26.958 2 DEBUG nova.virt.hardware [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:15:26 compute-0 nova_compute[257802]: 2025-10-02 12:15:26.958 2 DEBUG nova.virt.hardware [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:15:26 compute-0 nova_compute[257802]: 2025-10-02 12:15:26.959 2 DEBUG nova.virt.hardware [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:15:26 compute-0 nova_compute[257802]: 2025-10-02 12:15:26.959 2 DEBUG nova.virt.hardware [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:15:26 compute-0 nova_compute[257802]: 2025-10-02 12:15:26.959 2 DEBUG nova.virt.hardware [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:15:26 compute-0 nova_compute[257802]: 2025-10-02 12:15:26.960 2 DEBUG nova.virt.hardware [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:15:26 compute-0 nova_compute[257802]: 2025-10-02 12:15:26.960 2 DEBUG nova.virt.hardware [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:15:26 compute-0 nova_compute[257802]: 2025-10-02 12:15:26.960 2 DEBUG nova.virt.hardware [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:15:26 compute-0 nova_compute[257802]: 2025-10-02 12:15:26.961 2 DEBUG nova.virt.hardware [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:15:26 compute-0 nova_compute[257802]: 2025-10-02 12:15:26.964 2 DEBUG oslo_concurrency.processutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 213 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 3.6 MiB/s wr, 55 op/s
Oct 02 12:15:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:15:27 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3879648673' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:27 compute-0 nova_compute[257802]: 2025-10-02 12:15:27.400 2 DEBUG oslo_concurrency.processutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:27 compute-0 nova_compute[257802]: 2025-10-02 12:15:27.433 2 DEBUG nova.storage.rbd_utils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] rbd image ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:27 compute-0 nova_compute[257802]: 2025-10-02 12:15:27.438 2 DEBUG oslo_concurrency.processutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:15:27 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1568885786' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:27 compute-0 nova_compute[257802]: 2025-10-02 12:15:27.914 2 DEBUG oslo_concurrency.processutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:27 compute-0 nova_compute[257802]: 2025-10-02 12:15:27.918 2 DEBUG nova.virt.libvirt.vif [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:15:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1422524610',display_name='tempest-InstanceActionsTestJSON-server-1422524610',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1422524610',id=67,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a0b70ca4a75844a9a91b4e116bc58df9',ramdisk_id='',reservation_id='r-6fnsbyuy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-662595495',owner_user_name='tempest-InstanceActionsTestJSON-662595495-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:15:20Z,user_data=None,user_id='51229018510440858be9691ef4a0965f',uuid=ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b13d695-1222-496f-b948-e5b9f432d54f", "address": "fa:16:3e:08:96:d3", "network": {"id": "efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1657089235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0b70ca4a75844a9a91b4e116bc58df9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b13d695-12", "ovs_interfaceid": "4b13d695-1222-496f-b948-e5b9f432d54f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:15:27 compute-0 nova_compute[257802]: 2025-10-02 12:15:27.919 2 DEBUG nova.network.os_vif_util [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Converting VIF {"id": "4b13d695-1222-496f-b948-e5b9f432d54f", "address": "fa:16:3e:08:96:d3", "network": {"id": "efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1657089235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0b70ca4a75844a9a91b4e116bc58df9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b13d695-12", "ovs_interfaceid": "4b13d695-1222-496f-b948-e5b9f432d54f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:15:27 compute-0 nova_compute[257802]: 2025-10-02 12:15:27.921 2 DEBUG nova.network.os_vif_util [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:96:d3,bridge_name='br-int',has_traffic_filtering=True,id=4b13d695-1222-496f-b948-e5b9f432d54f,network=Network(efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b13d695-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:15:27 compute-0 nova_compute[257802]: 2025-10-02 12:15:27.923 2 DEBUG nova.objects.instance [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Lazy-loading 'pci_devices' on Instance uuid ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:28.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:28.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:28 compute-0 ceph-mon[73607]: pgmap v1519: 305 pgs: 305 active+clean; 213 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 3.6 MiB/s wr, 55 op/s
Oct 02 12:15:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3879648673' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1568885786' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:28.753 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:15:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:28.754 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:15:28 compute-0 nova_compute[257802]: 2025-10-02 12:15:28.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:28 compute-0 nova_compute[257802]: 2025-10-02 12:15:28.960 2 DEBUG nova.virt.libvirt.driver [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:15:28 compute-0 nova_compute[257802]:   <uuid>ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a</uuid>
Oct 02 12:15:28 compute-0 nova_compute[257802]:   <name>instance-00000043</name>
Oct 02 12:15:28 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:15:28 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:15:28 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:15:28 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:       <nova:name>tempest-InstanceActionsTestJSON-server-1422524610</nova:name>
Oct 02 12:15:28 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:15:26</nova:creationTime>
Oct 02 12:15:28 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:15:28 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:15:28 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:15:28 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:15:28 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:15:28 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:15:28 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:15:28 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:15:28 compute-0 nova_compute[257802]:         <nova:user uuid="51229018510440858be9691ef4a0965f">tempest-InstanceActionsTestJSON-662595495-project-member</nova:user>
Oct 02 12:15:28 compute-0 nova_compute[257802]:         <nova:project uuid="a0b70ca4a75844a9a91b4e116bc58df9">tempest-InstanceActionsTestJSON-662595495</nova:project>
Oct 02 12:15:28 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:15:28 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:15:28 compute-0 nova_compute[257802]:         <nova:port uuid="4b13d695-1222-496f-b948-e5b9f432d54f">
Oct 02 12:15:28 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:15:28 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:15:28 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:15:28 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <system>
Oct 02 12:15:28 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:15:28 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:15:28 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:15:28 compute-0 nova_compute[257802]:       <entry name="serial">ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a</entry>
Oct 02 12:15:28 compute-0 nova_compute[257802]:       <entry name="uuid">ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a</entry>
Oct 02 12:15:28 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     </system>
Oct 02 12:15:28 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:15:28 compute-0 nova_compute[257802]:   <os>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:   </os>
Oct 02 12:15:28 compute-0 nova_compute[257802]:   <features>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:   </features>
Oct 02 12:15:28 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:15:28 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:15:28 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:15:28 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a_disk">
Oct 02 12:15:28 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:       </source>
Oct 02 12:15:28 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:15:28 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:15:28 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:15:28 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a_disk.config">
Oct 02 12:15:28 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:       </source>
Oct 02 12:15:28 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:15:28 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:15:28 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:15:28 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:08:96:d3"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:       <target dev="tap4b13d695-12"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:15:28 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a/console.log" append="off"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <video>
Oct 02 12:15:28 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     </video>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:15:28 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:15:28 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:15:28 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:15:28 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:15:28 compute-0 nova_compute[257802]: </domain>
Oct 02 12:15:28 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:15:28 compute-0 nova_compute[257802]: 2025-10-02 12:15:28.962 2 DEBUG nova.compute.manager [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Preparing to wait for external event network-vif-plugged-4b13d695-1222-496f-b948-e5b9f432d54f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:15:28 compute-0 nova_compute[257802]: 2025-10-02 12:15:28.962 2 DEBUG oslo_concurrency.lockutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Acquiring lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:28 compute-0 nova_compute[257802]: 2025-10-02 12:15:28.962 2 DEBUG oslo_concurrency.lockutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:28 compute-0 nova_compute[257802]: 2025-10-02 12:15:28.963 2 DEBUG oslo_concurrency.lockutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:28 compute-0 nova_compute[257802]: 2025-10-02 12:15:28.963 2 DEBUG nova.virt.libvirt.vif [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:15:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1422524610',display_name='tempest-InstanceActionsTestJSON-server-1422524610',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1422524610',id=67,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a0b70ca4a75844a9a91b4e116bc58df9',ramdisk_id='',reservation_id='r-6fnsbyuy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-662595495',owner_user_name='tempest-InstanceActionsTestJSON-662595495-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:15:20Z,user_data=None,user_id='51229018510440858be9691ef4a0965f',uuid=ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b13d695-1222-496f-b948-e5b9f432d54f", "address": "fa:16:3e:08:96:d3", "network": {"id": "efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1657089235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0b70ca4a75844a9a91b4e116bc58df9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b13d695-12", "ovs_interfaceid": "4b13d695-1222-496f-b948-e5b9f432d54f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:15:28 compute-0 nova_compute[257802]: 2025-10-02 12:15:28.964 2 DEBUG nova.network.os_vif_util [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Converting VIF {"id": "4b13d695-1222-496f-b948-e5b9f432d54f", "address": "fa:16:3e:08:96:d3", "network": {"id": "efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1657089235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0b70ca4a75844a9a91b4e116bc58df9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b13d695-12", "ovs_interfaceid": "4b13d695-1222-496f-b948-e5b9f432d54f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:15:28 compute-0 nova_compute[257802]: 2025-10-02 12:15:28.964 2 DEBUG nova.network.os_vif_util [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:96:d3,bridge_name='br-int',has_traffic_filtering=True,id=4b13d695-1222-496f-b948-e5b9f432d54f,network=Network(efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b13d695-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:15:28 compute-0 nova_compute[257802]: 2025-10-02 12:15:28.965 2 DEBUG os_vif [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:96:d3,bridge_name='br-int',has_traffic_filtering=True,id=4b13d695-1222-496f-b948-e5b9f432d54f,network=Network(efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b13d695-12') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:15:28 compute-0 nova_compute[257802]: 2025-10-02 12:15:28.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:28 compute-0 nova_compute[257802]: 2025-10-02 12:15:28.966 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:28 compute-0 nova_compute[257802]: 2025-10-02 12:15:28.967 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:15:28 compute-0 nova_compute[257802]: 2025-10-02 12:15:28.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:28 compute-0 nova_compute[257802]: 2025-10-02 12:15:28.976 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4b13d695-12, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:28 compute-0 nova_compute[257802]: 2025-10-02 12:15:28.977 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4b13d695-12, col_values=(('external_ids', {'iface-id': '4b13d695-1222-496f-b948-e5b9f432d54f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:08:96:d3', 'vm-uuid': 'ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:28 compute-0 nova_compute[257802]: 2025-10-02 12:15:28.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:28 compute-0 nova_compute[257802]: 2025-10-02 12:15:28.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:15:28 compute-0 NetworkManager[44987]: <info>  [1759407328.9806] manager: (tap4b13d695-12): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/112)
Oct 02 12:15:28 compute-0 nova_compute[257802]: 2025-10-02 12:15:28.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:28 compute-0 nova_compute[257802]: 2025-10-02 12:15:28.986 2 INFO os_vif [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:96:d3,bridge_name='br-int',has_traffic_filtering=True,id=4b13d695-1222-496f-b948-e5b9f432d54f,network=Network(efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b13d695-12')
Oct 02 12:15:29 compute-0 nova_compute[257802]: 2025-10-02 12:15:29.057 2 DEBUG nova.virt.libvirt.driver [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:15:29 compute-0 nova_compute[257802]: 2025-10-02 12:15:29.057 2 DEBUG nova.virt.libvirt.driver [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:15:29 compute-0 nova_compute[257802]: 2025-10-02 12:15:29.057 2 DEBUG nova.virt.libvirt.driver [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] No VIF found with MAC fa:16:3e:08:96:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:15:29 compute-0 nova_compute[257802]: 2025-10-02 12:15:29.058 2 INFO nova.virt.libvirt.driver [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Using config drive
Oct 02 12:15:29 compute-0 nova_compute[257802]: 2025-10-02 12:15:29.084 2 DEBUG nova.storage.rbd_utils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] rbd image ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 305 active+clean; 213 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.6 MiB/s wr, 102 op/s
Oct 02 12:15:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:15:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:30.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:15:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:30.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:30 compute-0 ceph-mon[73607]: pgmap v1520: 305 pgs: 305 active+clean; 213 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.6 MiB/s wr, 102 op/s
Oct 02 12:15:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:30.755 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:30 compute-0 nova_compute[257802]: 2025-10-02 12:15:30.993 2 INFO nova.virt.libvirt.driver [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Creating config drive at /var/lib/nova/instances/ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a/disk.config
Oct 02 12:15:31 compute-0 nova_compute[257802]: 2025-10-02 12:15:31.003 2 DEBUG oslo_concurrency.processutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph6pye75v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:31 compute-0 nova_compute[257802]: 2025-10-02 12:15:31.036 2 DEBUG nova.network.neutron [req-d136bce5-90a3-4e56-b601-90655186dd8d req-a0405800-105d-47e9-9d24-d3379d4860f4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Updated VIF entry in instance network info cache for port 4b13d695-1222-496f-b948-e5b9f432d54f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:15:31 compute-0 nova_compute[257802]: 2025-10-02 12:15:31.037 2 DEBUG nova.network.neutron [req-d136bce5-90a3-4e56-b601-90655186dd8d req-a0405800-105d-47e9-9d24-d3379d4860f4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Updating instance_info_cache with network_info: [{"id": "4b13d695-1222-496f-b948-e5b9f432d54f", "address": "fa:16:3e:08:96:d3", "network": {"id": "efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1657089235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0b70ca4a75844a9a91b4e116bc58df9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b13d695-12", "ovs_interfaceid": "4b13d695-1222-496f-b948-e5b9f432d54f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:15:31 compute-0 nova_compute[257802]: 2025-10-02 12:15:31.071 2 DEBUG oslo_concurrency.lockutils [req-d136bce5-90a3-4e56-b601-90655186dd8d req-a0405800-105d-47e9-9d24-d3379d4860f4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:15:31 compute-0 nova_compute[257802]: 2025-10-02 12:15:31.139 2 DEBUG oslo_concurrency.processutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph6pye75v" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:31 compute-0 nova_compute[257802]: 2025-10-02 12:15:31.174 2 DEBUG nova.storage.rbd_utils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] rbd image ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:31 compute-0 nova_compute[257802]: 2025-10-02 12:15:31.178 2 DEBUG oslo_concurrency.processutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a/disk.config ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 213 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 115 op/s
Oct 02 12:15:31 compute-0 nova_compute[257802]: 2025-10-02 12:15:31.350 2 DEBUG oslo_concurrency.processutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a/disk.config ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:31 compute-0 nova_compute[257802]: 2025-10-02 12:15:31.351 2 INFO nova.virt.libvirt.driver [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Deleting local config drive /var/lib/nova/instances/ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a/disk.config because it was imported into RBD.
Oct 02 12:15:31 compute-0 kernel: tap4b13d695-12: entered promiscuous mode
Oct 02 12:15:31 compute-0 NetworkManager[44987]: <info>  [1759407331.4173] manager: (tap4b13d695-12): new Tun device (/org/freedesktop/NetworkManager/Devices/113)
Oct 02 12:15:31 compute-0 nova_compute[257802]: 2025-10-02 12:15:31.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:31 compute-0 ovn_controller[148183]: 2025-10-02T12:15:31Z|00248|binding|INFO|Claiming lport 4b13d695-1222-496f-b948-e5b9f432d54f for this chassis.
Oct 02 12:15:31 compute-0 ovn_controller[148183]: 2025-10-02T12:15:31Z|00249|binding|INFO|4b13d695-1222-496f-b948-e5b9f432d54f: Claiming fa:16:3e:08:96:d3 10.100.0.3
Oct 02 12:15:31 compute-0 nova_compute[257802]: 2025-10-02 12:15:31.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:31 compute-0 nova_compute[257802]: 2025-10-02 12:15:31.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:31.448 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:96:d3 10.100.0.3'], port_security=['fa:16:3e:08:96:d3 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a0b70ca4a75844a9a91b4e116bc58df9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fec22877-9c15-42d1-9b7b-0131e34b8da5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f3c414c-ec09-487b-9da3-ac33f7c8b0de, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=4b13d695-1222-496f-b948-e5b9f432d54f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:31.450 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 4b13d695-1222-496f-b948-e5b9f432d54f in datapath efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea bound to our chassis
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:31.451 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea
Oct 02 12:15:31 compute-0 systemd-machined[211836]: New machine qemu-30-instance-00000043.
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:31.465 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d1a4322b-1964-4ca1-baad-ed30dd9caf9a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:31.466 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapefbb222c-a1 in ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:31.468 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapefbb222c-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:31.469 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[46a00d52-7be1-4c07-b039-00995e433815]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:31.470 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[33954274-c43e-4860-9c4d-386a258e33c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:31 compute-0 systemd[1]: Started Virtual Machine qemu-30-instance-00000043.
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:31.484 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[e5b2daf2-f98a-47a5-a05a-72c18fbdef7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:31 compute-0 ovn_controller[148183]: 2025-10-02T12:15:31Z|00250|binding|INFO|Setting lport 4b13d695-1222-496f-b948-e5b9f432d54f ovn-installed in OVS
Oct 02 12:15:31 compute-0 ovn_controller[148183]: 2025-10-02T12:15:31Z|00251|binding|INFO|Setting lport 4b13d695-1222-496f-b948-e5b9f432d54f up in Southbound
Oct 02 12:15:31 compute-0 nova_compute[257802]: 2025-10-02 12:15:31.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:31 compute-0 systemd-udevd[298665]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:31.515 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[81ae8c2e-d943-450b-90cd-51a8fe75bee5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:31 compute-0 NetworkManager[44987]: <info>  [1759407331.5238] device (tap4b13d695-12): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:15:31 compute-0 NetworkManager[44987]: <info>  [1759407331.5251] device (tap4b13d695-12): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:31.544 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[9fcfec35-e4ef-4ca5-b732-da5e7c9584ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:31.548 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9cb86eae-64dd-4b33-bb6d-5aaf444067a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:31 compute-0 NetworkManager[44987]: <info>  [1759407331.5515] manager: (tapefbb222c-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/114)
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:31.582 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[790fc440-6a47-4501-a42d-960f8a7d513c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:31.587 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[06b3df0c-a3dc-42e8-a0ab-ddd3601d0af4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:31 compute-0 NetworkManager[44987]: <info>  [1759407331.6041] device (tapefbb222c-a0): carrier: link connected
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:31.610 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[b049e0ea-9ce7-4dfa-b959-eee414157c14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:31.623 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e5baf510-a617-42d5-9459-70d4f6b8b48c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapefbb222c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b0:3f:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539922, 'reachable_time': 31848, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298695, 'error': None, 'target': 'ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:31.637 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d8a4f25b-4814-4bb1-9e90-193053ffb047]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb0:3f37'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 539922, 'tstamp': 539922}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298696, 'error': None, 'target': 'ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:31.653 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0879ed69-676a-445c-bf6c-23da53128a56]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapefbb222c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b0:3f:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539922, 'reachable_time': 31848, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298697, 'error': None, 'target': 'ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:31.682 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3ac3b399-cf9d-44a0-8275-531e56b5085a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:31 compute-0 nova_compute[257802]: 2025-10-02 12:15:31.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:31.734 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[34219867-b9ab-420e-acfc-fd83a8612d23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:31.735 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapefbb222c-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:31.736 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:31.736 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapefbb222c-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:31 compute-0 NetworkManager[44987]: <info>  [1759407331.7383] manager: (tapefbb222c-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Oct 02 12:15:31 compute-0 kernel: tapefbb222c-a0: entered promiscuous mode
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:31.740 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapefbb222c-a0, col_values=(('external_ids', {'iface-id': 'a84bf0d3-c6c3-4760-a48a-4db671c03c75'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:31 compute-0 nova_compute[257802]: 2025-10-02 12:15:31.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:31 compute-0 ovn_controller[148183]: 2025-10-02T12:15:31Z|00252|binding|INFO|Releasing lport a84bf0d3-c6c3-4760-a48a-4db671c03c75 from this chassis (sb_readonly=0)
Oct 02 12:15:31 compute-0 nova_compute[257802]: 2025-10-02 12:15:31.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:31.756 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:15:31 compute-0 nova_compute[257802]: 2025-10-02 12:15:31.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:31.758 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3bd4c453-82c8-408d-999d-19927e5f73b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:31.759 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea.pid.haproxy
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:15:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:31.761 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea', 'env', 'PROCESS_TAG=haproxy-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:15:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:15:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:32.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:15:32 compute-0 podman[298730]: 2025-10-02 12:15:32.128528114 +0000 UTC m=+0.064780704 container create 1365f836c73fe364516caf2d3034a7b84bf3a9a3b966f591c3e98130c822a188 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:15:32 compute-0 nova_compute[257802]: 2025-10-02 12:15:32.139 2 DEBUG nova.compute.manager [req-80d06423-577f-433c-af0b-4301f1ca94cd req-40ea4796-eab9-4575-abd9-1cedd05eebb7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Received event network-vif-plugged-4b13d695-1222-496f-b948-e5b9f432d54f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:15:32 compute-0 nova_compute[257802]: 2025-10-02 12:15:32.139 2 DEBUG oslo_concurrency.lockutils [req-80d06423-577f-433c-af0b-4301f1ca94cd req-40ea4796-eab9-4575-abd9-1cedd05eebb7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:32 compute-0 nova_compute[257802]: 2025-10-02 12:15:32.140 2 DEBUG oslo_concurrency.lockutils [req-80d06423-577f-433c-af0b-4301f1ca94cd req-40ea4796-eab9-4575-abd9-1cedd05eebb7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:32 compute-0 nova_compute[257802]: 2025-10-02 12:15:32.140 2 DEBUG oslo_concurrency.lockutils [req-80d06423-577f-433c-af0b-4301f1ca94cd req-40ea4796-eab9-4575-abd9-1cedd05eebb7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:32 compute-0 nova_compute[257802]: 2025-10-02 12:15:32.140 2 DEBUG nova.compute.manager [req-80d06423-577f-433c-af0b-4301f1ca94cd req-40ea4796-eab9-4575-abd9-1cedd05eebb7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Processing event network-vif-plugged-4b13d695-1222-496f-b948-e5b9f432d54f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:15:32 compute-0 systemd[1]: Started libpod-conmon-1365f836c73fe364516caf2d3034a7b84bf3a9a3b966f591c3e98130c822a188.scope.
Oct 02 12:15:32 compute-0 podman[298730]: 2025-10-02 12:15:32.088608903 +0000 UTC m=+0.024861583 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:15:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:15:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/668cee259c4ef7e67bd461e094d7a11450cc97f99ebaec6408ba1a22a3a5f2e2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:15:32 compute-0 podman[298730]: 2025-10-02 12:15:32.213690759 +0000 UTC m=+0.149943379 container init 1365f836c73fe364516caf2d3034a7b84bf3a9a3b966f591c3e98130c822a188 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 12:15:32 compute-0 podman[298730]: 2025-10-02 12:15:32.219385379 +0000 UTC m=+0.155637969 container start 1365f836c73fe364516caf2d3034a7b84bf3a9a3b966f591c3e98130c822a188 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 12:15:32 compute-0 neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea[298748]: [NOTICE]   (298767) : New worker (298785) forked
Oct 02 12:15:32 compute-0 neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea[298748]: [NOTICE]   (298767) : Loading success.
Oct 02 12:15:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:32.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:32 compute-0 ceph-mon[73607]: pgmap v1521: 305 pgs: 305 active+clean; 213 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 115 op/s
Oct 02 12:15:32 compute-0 nova_compute[257802]: 2025-10-02 12:15:32.771 2 DEBUG nova.compute.manager [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:15:32 compute-0 nova_compute[257802]: 2025-10-02 12:15:32.772 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407332.7715433, ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:15:32 compute-0 nova_compute[257802]: 2025-10-02 12:15:32.772 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] VM Started (Lifecycle Event)
Oct 02 12:15:32 compute-0 nova_compute[257802]: 2025-10-02 12:15:32.775 2 DEBUG nova.virt.libvirt.driver [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:15:32 compute-0 nova_compute[257802]: 2025-10-02 12:15:32.779 2 INFO nova.virt.libvirt.driver [-] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Instance spawned successfully.
Oct 02 12:15:32 compute-0 nova_compute[257802]: 2025-10-02 12:15:32.779 2 DEBUG nova.virt.libvirt.driver [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:15:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 214 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.2 MiB/s wr, 121 op/s
Oct 02 12:15:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:33 compute-0 nova_compute[257802]: 2025-10-02 12:15:33.709 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:33 compute-0 nova_compute[257802]: 2025-10-02 12:15:33.715 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:15:33 compute-0 nova_compute[257802]: 2025-10-02 12:15:33.718 2 DEBUG nova.virt.libvirt.driver [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:33 compute-0 nova_compute[257802]: 2025-10-02 12:15:33.719 2 DEBUG nova.virt.libvirt.driver [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:33 compute-0 nova_compute[257802]: 2025-10-02 12:15:33.719 2 DEBUG nova.virt.libvirt.driver [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:33 compute-0 nova_compute[257802]: 2025-10-02 12:15:33.720 2 DEBUG nova.virt.libvirt.driver [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:33 compute-0 nova_compute[257802]: 2025-10-02 12:15:33.720 2 DEBUG nova.virt.libvirt.driver [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:33 compute-0 nova_compute[257802]: 2025-10-02 12:15:33.721 2 DEBUG nova.virt.libvirt.driver [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:33 compute-0 nova_compute[257802]: 2025-10-02 12:15:33.767 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:15:33 compute-0 nova_compute[257802]: 2025-10-02 12:15:33.769 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407332.7716339, ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:15:33 compute-0 nova_compute[257802]: 2025-10-02 12:15:33.769 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] VM Paused (Lifecycle Event)
Oct 02 12:15:33 compute-0 nova_compute[257802]: 2025-10-02 12:15:33.812 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:33 compute-0 nova_compute[257802]: 2025-10-02 12:15:33.815 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407332.7742724, ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:15:33 compute-0 nova_compute[257802]: 2025-10-02 12:15:33.816 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] VM Resumed (Lifecycle Event)
Oct 02 12:15:33 compute-0 nova_compute[257802]: 2025-10-02 12:15:33.849 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:33 compute-0 nova_compute[257802]: 2025-10-02 12:15:33.853 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:15:33 compute-0 nova_compute[257802]: 2025-10-02 12:15:33.867 2 INFO nova.compute.manager [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Took 12.99 seconds to spawn the instance on the hypervisor.
Oct 02 12:15:33 compute-0 nova_compute[257802]: 2025-10-02 12:15:33.868 2 DEBUG nova.compute.manager [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:33 compute-0 nova_compute[257802]: 2025-10-02 12:15:33.879 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:15:33 compute-0 nova_compute[257802]: 2025-10-02 12:15:33.962 2 INFO nova.compute.manager [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Took 14.02 seconds to build instance.
Oct 02 12:15:33 compute-0 nova_compute[257802]: 2025-10-02 12:15:33.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:33 compute-0 nova_compute[257802]: 2025-10-02 12:15:33.989 2 DEBUG oslo_concurrency.lockutils [None req-dd0a7b7c-b8c7-434e-9ab7-99c84e949806 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:34.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:34 compute-0 nova_compute[257802]: 2025-10-02 12:15:34.252 2 DEBUG nova.compute.manager [req-fb621d7f-6352-47c8-94e6-7c21761528a4 req-0314b783-bc69-4975-aa0f-76eb9d07385e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Received event network-vif-plugged-4b13d695-1222-496f-b948-e5b9f432d54f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:15:34 compute-0 nova_compute[257802]: 2025-10-02 12:15:34.252 2 DEBUG oslo_concurrency.lockutils [req-fb621d7f-6352-47c8-94e6-7c21761528a4 req-0314b783-bc69-4975-aa0f-76eb9d07385e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:34 compute-0 nova_compute[257802]: 2025-10-02 12:15:34.252 2 DEBUG oslo_concurrency.lockutils [req-fb621d7f-6352-47c8-94e6-7c21761528a4 req-0314b783-bc69-4975-aa0f-76eb9d07385e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:34 compute-0 nova_compute[257802]: 2025-10-02 12:15:34.253 2 DEBUG oslo_concurrency.lockutils [req-fb621d7f-6352-47c8-94e6-7c21761528a4 req-0314b783-bc69-4975-aa0f-76eb9d07385e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:34 compute-0 nova_compute[257802]: 2025-10-02 12:15:34.253 2 DEBUG nova.compute.manager [req-fb621d7f-6352-47c8-94e6-7c21761528a4 req-0314b783-bc69-4975-aa0f-76eb9d07385e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] No waiting events found dispatching network-vif-plugged-4b13d695-1222-496f-b948-e5b9f432d54f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:15:34 compute-0 nova_compute[257802]: 2025-10-02 12:15:34.253 2 WARNING nova.compute.manager [req-fb621d7f-6352-47c8-94e6-7c21761528a4 req-0314b783-bc69-4975-aa0f-76eb9d07385e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Received unexpected event network-vif-plugged-4b13d695-1222-496f-b948-e5b9f432d54f for instance with vm_state active and task_state None.
Oct 02 12:15:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:15:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:34.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:15:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Oct 02 12:15:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Oct 02 12:15:34 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Oct 02 12:15:34 compute-0 ceph-mon[73607]: pgmap v1522: 305 pgs: 305 active+clean; 214 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.2 MiB/s wr, 121 op/s
Oct 02 12:15:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 214 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 347 KiB/s wr, 140 op/s
Oct 02 12:15:35 compute-0 ceph-mon[73607]: osdmap e215: 3 total, 3 up, 3 in
Oct 02 12:15:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:36.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:36.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:36 compute-0 nova_compute[257802]: 2025-10-02 12:15:36.328 2 DEBUG oslo_concurrency.lockutils [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Acquiring lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:36 compute-0 nova_compute[257802]: 2025-10-02 12:15:36.329 2 DEBUG oslo_concurrency.lockutils [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:36 compute-0 nova_compute[257802]: 2025-10-02 12:15:36.329 2 INFO nova.compute.manager [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Rebooting instance
Oct 02 12:15:36 compute-0 nova_compute[257802]: 2025-10-02 12:15:36.354 2 DEBUG oslo_concurrency.lockutils [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Acquiring lock "refresh_cache-ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:15:36 compute-0 nova_compute[257802]: 2025-10-02 12:15:36.355 2 DEBUG oslo_concurrency.lockutils [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Acquired lock "refresh_cache-ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:15:36 compute-0 nova_compute[257802]: 2025-10-02 12:15:36.355 2 DEBUG nova.network.neutron [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:15:36 compute-0 sudo[298805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:36 compute-0 sudo[298805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:36 compute-0 sudo[298805]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:36 compute-0 nova_compute[257802]: 2025-10-02 12:15:36.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:36 compute-0 sudo[298830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:36 compute-0 sudo[298830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:36 compute-0 sudo[298830]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:37 compute-0 ceph-mon[73607]: pgmap v1524: 305 pgs: 305 active+clean; 214 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 347 KiB/s wr, 140 op/s
Oct 02 12:15:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Oct 02 12:15:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Oct 02 12:15:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 305 active+clean; 214 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 32 KiB/s wr, 134 op/s
Oct 02 12:15:37 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Oct 02 12:15:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:38.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:38.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:38 compute-0 ceph-mon[73607]: pgmap v1525: 305 pgs: 305 active+clean; 214 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 32 KiB/s wr, 134 op/s
Oct 02 12:15:38 compute-0 ceph-mon[73607]: osdmap e216: 3 total, 3 up, 3 in
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.507 2 DEBUG nova.network.neutron [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Updating instance_info_cache with network_info: [{"id": "4b13d695-1222-496f-b948-e5b9f432d54f", "address": "fa:16:3e:08:96:d3", "network": {"id": "efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1657089235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0b70ca4a75844a9a91b4e116bc58df9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b13d695-12", "ovs_interfaceid": "4b13d695-1222-496f-b948-e5b9f432d54f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.545 2 DEBUG oslo_concurrency.lockutils [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Releasing lock "refresh_cache-ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.547 2 DEBUG nova.compute.manager [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:38 compute-0 kernel: tap4b13d695-12 (unregistering): left promiscuous mode
Oct 02 12:15:38 compute-0 NetworkManager[44987]: <info>  [1759407338.7294] device (tap4b13d695-12): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:15:38 compute-0 ovn_controller[148183]: 2025-10-02T12:15:38Z|00253|binding|INFO|Releasing lport 4b13d695-1222-496f-b948-e5b9f432d54f from this chassis (sb_readonly=0)
Oct 02 12:15:38 compute-0 ovn_controller[148183]: 2025-10-02T12:15:38Z|00254|binding|INFO|Setting lport 4b13d695-1222-496f-b948-e5b9f432d54f down in Southbound
Oct 02 12:15:38 compute-0 ovn_controller[148183]: 2025-10-02T12:15:38Z|00255|binding|INFO|Removing iface tap4b13d695-12 ovn-installed in OVS
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:38.749 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:96:d3 10.100.0.3'], port_security=['fa:16:3e:08:96:d3 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a0b70ca4a75844a9a91b4e116bc58df9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fec22877-9c15-42d1-9b7b-0131e34b8da5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f3c414c-ec09-487b-9da3-ac33f7c8b0de, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=4b13d695-1222-496f-b948-e5b9f432d54f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:15:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:38.751 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 4b13d695-1222-496f-b948-e5b9f432d54f in datapath efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea unbound from our chassis
Oct 02 12:15:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:38.753 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:15:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:38.754 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1450e20d-d430-4b9c-b43a-fcc7c01dbfea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:38.755 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea namespace which is not needed anymore
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:38 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000043.scope: Deactivated successfully.
Oct 02 12:15:38 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000043.scope: Consumed 7.355s CPU time.
Oct 02 12:15:38 compute-0 systemd-machined[211836]: Machine qemu-30-instance-00000043 terminated.
Oct 02 12:15:38 compute-0 neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea[298748]: [NOTICE]   (298767) : haproxy version is 2.8.14-c23fe91
Oct 02 12:15:38 compute-0 neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea[298748]: [NOTICE]   (298767) : path to executable is /usr/sbin/haproxy
Oct 02 12:15:38 compute-0 neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea[298748]: [WARNING]  (298767) : Exiting Master process...
Oct 02 12:15:38 compute-0 neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea[298748]: [ALERT]    (298767) : Current worker (298785) exited with code 143 (Terminated)
Oct 02 12:15:38 compute-0 neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea[298748]: [WARNING]  (298767) : All workers exited. Exiting... (0)
Oct 02 12:15:38 compute-0 systemd[1]: libpod-1365f836c73fe364516caf2d3034a7b84bf3a9a3b966f591c3e98130c822a188.scope: Deactivated successfully.
Oct 02 12:15:38 compute-0 podman[298880]: 2025-10-02 12:15:38.899379178 +0000 UTC m=+0.047218612 container died 1365f836c73fe364516caf2d3034a7b84bf3a9a3b966f591c3e98130c822a188 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.906 2 INFO nova.virt.libvirt.driver [-] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Instance destroyed successfully.
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.907 2 DEBUG nova.objects.instance [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Lazy-loading 'resources' on Instance uuid ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.929 2 DEBUG nova.virt.libvirt.vif [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:15:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1422524610',display_name='tempest-InstanceActionsTestJSON-server-1422524610',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1422524610',id=67,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:15:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a0b70ca4a75844a9a91b4e116bc58df9',ramdisk_id='',reservation_id='r-6fnsbyuy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-662595495',owner_user_name='tempest-InstanceActionsTestJSON-662595495-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:15:38Z,user_data=None,user_id='51229018510440858be9691ef4a0965f',uuid=ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4b13d695-1222-496f-b948-e5b9f432d54f", "address": "fa:16:3e:08:96:d3", "network": {"id": "efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1657089235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0b70ca4a75844a9a91b4e116bc58df9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b13d695-12", "ovs_interfaceid": "4b13d695-1222-496f-b948-e5b9f432d54f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.930 2 DEBUG nova.network.os_vif_util [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Converting VIF {"id": "4b13d695-1222-496f-b948-e5b9f432d54f", "address": "fa:16:3e:08:96:d3", "network": {"id": "efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1657089235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0b70ca4a75844a9a91b4e116bc58df9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b13d695-12", "ovs_interfaceid": "4b13d695-1222-496f-b948-e5b9f432d54f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.932 2 DEBUG nova.network.os_vif_util [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:08:96:d3,bridge_name='br-int',has_traffic_filtering=True,id=4b13d695-1222-496f-b948-e5b9f432d54f,network=Network(efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b13d695-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.932 2 DEBUG os_vif [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:08:96:d3,bridge_name='br-int',has_traffic_filtering=True,id=4b13d695-1222-496f-b948-e5b9f432d54f,network=Network(efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b13d695-12') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.935 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b13d695-12, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.940 2 INFO os_vif [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:08:96:d3,bridge_name='br-int',has_traffic_filtering=True,id=4b13d695-1222-496f-b948-e5b9f432d54f,network=Network(efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b13d695-12')
Oct 02 12:15:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1365f836c73fe364516caf2d3034a7b84bf3a9a3b966f591c3e98130c822a188-userdata-shm.mount: Deactivated successfully.
Oct 02 12:15:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-668cee259c4ef7e67bd461e094d7a11450cc97f99ebaec6408ba1a22a3a5f2e2-merged.mount: Deactivated successfully.
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.951 2 DEBUG nova.virt.libvirt.driver [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Start _get_guest_xml network_info=[{"id": "4b13d695-1222-496f-b948-e5b9f432d54f", "address": "fa:16:3e:08:96:d3", "network": {"id": "efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1657089235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0b70ca4a75844a9a91b4e116bc58df9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b13d695-12", "ovs_interfaceid": "4b13d695-1222-496f-b948-e5b9f432d54f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.958 2 WARNING nova.virt.libvirt.driver [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:15:38 compute-0 podman[298880]: 2025-10-02 12:15:38.96043037 +0000 UTC m=+0.108269784 container cleanup 1365f836c73fe364516caf2d3034a7b84bf3a9a3b966f591c3e98130c822a188 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.964 2 DEBUG nova.virt.libvirt.host [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.965 2 DEBUG nova.virt.libvirt.host [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.968 2 DEBUG nova.virt.libvirt.host [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.969 2 DEBUG nova.virt.libvirt.host [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:15:38 compute-0 systemd[1]: libpod-conmon-1365f836c73fe364516caf2d3034a7b84bf3a9a3b966f591c3e98130c822a188.scope: Deactivated successfully.
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.970 2 DEBUG nova.virt.libvirt.driver [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.970 2 DEBUG nova.virt.hardware [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.971 2 DEBUG nova.virt.hardware [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.971 2 DEBUG nova.virt.hardware [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.971 2 DEBUG nova.virt.hardware [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.972 2 DEBUG nova.virt.hardware [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.972 2 DEBUG nova.virt.hardware [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.972 2 DEBUG nova.virt.hardware [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.973 2 DEBUG nova.virt.hardware [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.973 2 DEBUG nova.virt.hardware [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.973 2 DEBUG nova.virt.hardware [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.973 2 DEBUG nova.virt.hardware [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.974 2 DEBUG nova.objects.instance [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Lazy-loading 'vcpu_model' on Instance uuid ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:38 compute-0 nova_compute[257802]: 2025-10-02 12:15:38.990 2 DEBUG oslo_concurrency.processutils [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:39 compute-0 podman[298919]: 2025-10-02 12:15:39.036162203 +0000 UTC m=+0.049613392 container remove 1365f836c73fe364516caf2d3034a7b84bf3a9a3b966f591c3e98130c822a188 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:15:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:39.042 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[dd046827-ee90-45f9-bf68-5972485d10c4]: (4, ('Thu Oct  2 12:15:38 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea (1365f836c73fe364516caf2d3034a7b84bf3a9a3b966f591c3e98130c822a188)\n1365f836c73fe364516caf2d3034a7b84bf3a9a3b966f591c3e98130c822a188\nThu Oct  2 12:15:38 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea (1365f836c73fe364516caf2d3034a7b84bf3a9a3b966f591c3e98130c822a188)\n1365f836c73fe364516caf2d3034a7b84bf3a9a3b966f591c3e98130c822a188\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:39.044 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ea5c1ce5-2840-4115-8f93-36cd7cd549b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:39.045 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapefbb222c-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:39 compute-0 nova_compute[257802]: 2025-10-02 12:15:39.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:39 compute-0 kernel: tapefbb222c-a0: left promiscuous mode
Oct 02 12:15:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:39.051 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9ce38730-c3d1-4ce4-8e6e-bc1e59df258f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:39 compute-0 nova_compute[257802]: 2025-10-02 12:15:39.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:39.081 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0041d8b7-94b6-48fb-a460-cc8dcfa8b7e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:39.083 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a9d4ee6d-8ebc-4208-8bd5-9171d85c8245]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:39 compute-0 nova_compute[257802]: 2025-10-02 12:15:39.092 2 DEBUG nova.compute.manager [req-794fdede-45e6-4dbd-9f14-3d3b903d7252 req-3a38f76c-0e55-4a52-b2a6-e3e64ee1e53a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Received event network-vif-unplugged-4b13d695-1222-496f-b948-e5b9f432d54f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:15:39 compute-0 nova_compute[257802]: 2025-10-02 12:15:39.093 2 DEBUG oslo_concurrency.lockutils [req-794fdede-45e6-4dbd-9f14-3d3b903d7252 req-3a38f76c-0e55-4a52-b2a6-e3e64ee1e53a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:39 compute-0 nova_compute[257802]: 2025-10-02 12:15:39.093 2 DEBUG oslo_concurrency.lockutils [req-794fdede-45e6-4dbd-9f14-3d3b903d7252 req-3a38f76c-0e55-4a52-b2a6-e3e64ee1e53a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:39 compute-0 nova_compute[257802]: 2025-10-02 12:15:39.093 2 DEBUG oslo_concurrency.lockutils [req-794fdede-45e6-4dbd-9f14-3d3b903d7252 req-3a38f76c-0e55-4a52-b2a6-e3e64ee1e53a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:39 compute-0 nova_compute[257802]: 2025-10-02 12:15:39.094 2 DEBUG nova.compute.manager [req-794fdede-45e6-4dbd-9f14-3d3b903d7252 req-3a38f76c-0e55-4a52-b2a6-e3e64ee1e53a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] No waiting events found dispatching network-vif-unplugged-4b13d695-1222-496f-b948-e5b9f432d54f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:15:39 compute-0 nova_compute[257802]: 2025-10-02 12:15:39.094 2 WARNING nova.compute.manager [req-794fdede-45e6-4dbd-9f14-3d3b903d7252 req-3a38f76c-0e55-4a52-b2a6-e3e64ee1e53a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Received unexpected event network-vif-unplugged-4b13d695-1222-496f-b948-e5b9f432d54f for instance with vm_state active and task_state reboot_started_hard.
Oct 02 12:15:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:39.104 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[904bf284-07d0-4462-835e-5da987becb1e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539916, 'reachable_time': 19669, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298934, 'error': None, 'target': 'ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:39.107 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:15:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:39.107 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[7ea3cf1e-080d-42a6-b538-07e4d60ce173]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:39 compute-0 systemd[1]: run-netns-ovnmeta\x2defbb222c\x2daa63\x2d45fb\x2da3ac\x2d4d28b5fbd5ea.mount: Deactivated successfully.
Oct 02 12:15:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1527: 305 pgs: 305 active+clean; 244 MiB data, 662 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 1.6 MiB/s wr, 184 op/s
Oct 02 12:15:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:15:39 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3408733536' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:39 compute-0 nova_compute[257802]: 2025-10-02 12:15:39.505 2 DEBUG oslo_concurrency.processutils [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:39 compute-0 nova_compute[257802]: 2025-10-02 12:15:39.558 2 DEBUG oslo_concurrency.processutils [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Oct 02 12:15:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Oct 02 12:15:39 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Oct 02 12:15:39 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3408733536' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:15:39 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1969141937' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.018 2 DEBUG oslo_concurrency.processutils [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.020 2 DEBUG nova.virt.libvirt.vif [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:15:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1422524610',display_name='tempest-InstanceActionsTestJSON-server-1422524610',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1422524610',id=67,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:15:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a0b70ca4a75844a9a91b4e116bc58df9',ramdisk_id='',reservation_id='r-6fnsbyuy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-662595495',owner_user_name='tempest-InstanceActionsTestJSON-662595495-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:15:38Z,user_data=None,user_id='51229018510440858be9691ef4a0965f',uuid=ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4b13d695-1222-496f-b948-e5b9f432d54f", "address": "fa:16:3e:08:96:d3", "network": {"id": "efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1657089235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0b70ca4a75844a9a91b4e116bc58df9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b13d695-12", "ovs_interfaceid": "4b13d695-1222-496f-b948-e5b9f432d54f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.021 2 DEBUG nova.network.os_vif_util [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Converting VIF {"id": "4b13d695-1222-496f-b948-e5b9f432d54f", "address": "fa:16:3e:08:96:d3", "network": {"id": "efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1657089235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0b70ca4a75844a9a91b4e116bc58df9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b13d695-12", "ovs_interfaceid": "4b13d695-1222-496f-b948-e5b9f432d54f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.023 2 DEBUG nova.network.os_vif_util [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:08:96:d3,bridge_name='br-int',has_traffic_filtering=True,id=4b13d695-1222-496f-b948-e5b9f432d54f,network=Network(efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b13d695-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.024 2 DEBUG nova.objects.instance [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Lazy-loading 'pci_devices' on Instance uuid ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.045 2 DEBUG nova.virt.libvirt.driver [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:15:40 compute-0 nova_compute[257802]:   <uuid>ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a</uuid>
Oct 02 12:15:40 compute-0 nova_compute[257802]:   <name>instance-00000043</name>
Oct 02 12:15:40 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:15:40 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:15:40 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:15:40 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:       <nova:name>tempest-InstanceActionsTestJSON-server-1422524610</nova:name>
Oct 02 12:15:40 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:15:38</nova:creationTime>
Oct 02 12:15:40 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:15:40 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:15:40 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:15:40 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:15:40 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:15:40 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:15:40 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:15:40 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:15:40 compute-0 nova_compute[257802]:         <nova:user uuid="51229018510440858be9691ef4a0965f">tempest-InstanceActionsTestJSON-662595495-project-member</nova:user>
Oct 02 12:15:40 compute-0 nova_compute[257802]:         <nova:project uuid="a0b70ca4a75844a9a91b4e116bc58df9">tempest-InstanceActionsTestJSON-662595495</nova:project>
Oct 02 12:15:40 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:15:40 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:15:40 compute-0 nova_compute[257802]:         <nova:port uuid="4b13d695-1222-496f-b948-e5b9f432d54f">
Oct 02 12:15:40 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:15:40 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:15:40 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:15:40 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <system>
Oct 02 12:15:40 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:15:40 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:15:40 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:15:40 compute-0 nova_compute[257802]:       <entry name="serial">ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a</entry>
Oct 02 12:15:40 compute-0 nova_compute[257802]:       <entry name="uuid">ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a</entry>
Oct 02 12:15:40 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     </system>
Oct 02 12:15:40 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:15:40 compute-0 nova_compute[257802]:   <os>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:   </os>
Oct 02 12:15:40 compute-0 nova_compute[257802]:   <features>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:   </features>
Oct 02 12:15:40 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:15:40 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:15:40 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:15:40 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a_disk">
Oct 02 12:15:40 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:       </source>
Oct 02 12:15:40 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:15:40 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:15:40 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:15:40 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a_disk.config">
Oct 02 12:15:40 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:       </source>
Oct 02 12:15:40 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:15:40 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:15:40 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:15:40 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:08:96:d3"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:       <target dev="tap4b13d695-12"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:15:40 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a/console.log" append="off"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <video>
Oct 02 12:15:40 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     </video>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <input type="keyboard" bus="usb"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:15:40 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:15:40 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:15:40 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:15:40 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:15:40 compute-0 nova_compute[257802]: </domain>
Oct 02 12:15:40 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:15:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.050 2 DEBUG nova.virt.libvirt.driver [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] skipping disk for instance-00000043 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.051 2 DEBUG nova.virt.libvirt.driver [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] skipping disk for instance-00000043 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.052 2 DEBUG nova.virt.libvirt.vif [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:15:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1422524610',display_name='tempest-InstanceActionsTestJSON-server-1422524610',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1422524610',id=67,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:15:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='a0b70ca4a75844a9a91b4e116bc58df9',ramdisk_id='',reservation_id='r-6fnsbyuy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio
',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-662595495',owner_user_name='tempest-InstanceActionsTestJSON-662595495-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:15:38Z,user_data=None,user_id='51229018510440858be9691ef4a0965f',uuid=ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4b13d695-1222-496f-b948-e5b9f432d54f", "address": "fa:16:3e:08:96:d3", "network": {"id": "efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1657089235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0b70ca4a75844a9a91b4e116bc58df9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b13d695-12", "ovs_interfaceid": "4b13d695-1222-496f-b948-e5b9f432d54f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.052 2 DEBUG nova.network.os_vif_util [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Converting VIF {"id": "4b13d695-1222-496f-b948-e5b9f432d54f", "address": "fa:16:3e:08:96:d3", "network": {"id": "efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1657089235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0b70ca4a75844a9a91b4e116bc58df9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b13d695-12", "ovs_interfaceid": "4b13d695-1222-496f-b948-e5b9f432d54f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.052 2 DEBUG nova.network.os_vif_util [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:08:96:d3,bridge_name='br-int',has_traffic_filtering=True,id=4b13d695-1222-496f-b948-e5b9f432d54f,network=Network(efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b13d695-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.053 2 DEBUG os_vif [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:08:96:d3,bridge_name='br-int',has_traffic_filtering=True,id=4b13d695-1222-496f-b948-e5b9f432d54f,network=Network(efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b13d695-12') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:40.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.054 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.055 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.058 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4b13d695-12, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.058 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4b13d695-12, col_values=(('external_ids', {'iface-id': '4b13d695-1222-496f-b948-e5b9f432d54f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:08:96:d3', 'vm-uuid': 'ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:40 compute-0 NetworkManager[44987]: <info>  [1759407340.0603] manager: (tap4b13d695-12): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.067 2 INFO os_vif [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:08:96:d3,bridge_name='br-int',has_traffic_filtering=True,id=4b13d695-1222-496f-b948-e5b9f432d54f,network=Network(efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b13d695-12')
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.097 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:15:40 compute-0 kernel: tap4b13d695-12: entered promiscuous mode
Oct 02 12:15:40 compute-0 NetworkManager[44987]: <info>  [1759407340.1832] manager: (tap4b13d695-12): new Tun device (/org/freedesktop/NetworkManager/Devices/117)
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:40 compute-0 ovn_controller[148183]: 2025-10-02T12:15:40Z|00256|binding|INFO|Claiming lport 4b13d695-1222-496f-b948-e5b9f432d54f for this chassis.
Oct 02 12:15:40 compute-0 ovn_controller[148183]: 2025-10-02T12:15:40Z|00257|binding|INFO|4b13d695-1222-496f-b948-e5b9f432d54f: Claiming fa:16:3e:08:96:d3 10.100.0.3
Oct 02 12:15:40 compute-0 systemd-udevd[298861]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:40.193 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:96:d3 10.100.0.3'], port_security=['fa:16:3e:08:96:d3 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a0b70ca4a75844a9a91b4e116bc58df9', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'fec22877-9c15-42d1-9b7b-0131e34b8da5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f3c414c-ec09-487b-9da3-ac33f7c8b0de, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=4b13d695-1222-496f-b948-e5b9f432d54f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:40.195 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 4b13d695-1222-496f-b948-e5b9f432d54f in datapath efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea bound to our chassis
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:40.197 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea
Oct 02 12:15:40 compute-0 NetworkManager[44987]: <info>  [1759407340.2040] device (tap4b13d695-12): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:15:40 compute-0 NetworkManager[44987]: <info>  [1759407340.2051] device (tap4b13d695-12): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:15:40 compute-0 ovn_controller[148183]: 2025-10-02T12:15:40Z|00258|binding|INFO|Setting lport 4b13d695-1222-496f-b948-e5b9f432d54f ovn-installed in OVS
Oct 02 12:15:40 compute-0 ovn_controller[148183]: 2025-10-02T12:15:40Z|00259|binding|INFO|Setting lport 4b13d695-1222-496f-b948-e5b9f432d54f up in Southbound
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:40.209 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[bb5f118f-c313-49aa-8536-a46ab74d9aed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:40.209 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapefbb222c-a1 in ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:40.212 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapefbb222c-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:40.212 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[72074355-2655-4372-b417-8b5d69ec4761]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:40.214 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9be36226-9346-4690-9822-a2ce8d34c6b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:40.229 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[95cf2804-77be-4d7c-980b-b15dfe767fb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:40 compute-0 systemd-machined[211836]: New machine qemu-31-instance-00000043.
Oct 02 12:15:40 compute-0 systemd[1]: Started Virtual Machine qemu-31-instance-00000043.
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:40.267 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[57f441ae-53a3-4861-8e78-ec88e42d61b2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:40.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:40.309 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[54705b7f-b0da-4e99-b8a4-b5da4f577cbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:40 compute-0 NetworkManager[44987]: <info>  [1759407340.3159] manager: (tapefbb222c-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/118)
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:40.316 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ec2d169a-1ee3-4ba4-b15c-0878371f99fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:40.351 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[0dcd581a-a249-4ce3-9866-320f1eff8438]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:40.356 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[113dfe82-4c10-4552-89be-f74cff2f8082]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:40 compute-0 NetworkManager[44987]: <info>  [1759407340.3822] device (tapefbb222c-a0): carrier: link connected
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:40.386 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[d5a554a1-42de-438a-b2ab-63418836f536]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:40.406 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a9d86c-41e8-4fb3-9595-7b0b5678e542]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapefbb222c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b0:3f:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540800, 'reachable_time': 27632, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299041, 'error': None, 'target': 'ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:40.422 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[061f4739-31e4-4cac-abb6-051ab17890f8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb0:3f37'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540800, 'tstamp': 540800}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299043, 'error': None, 'target': 'ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:40.446 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ed5b938d-34b9-43fb-9d94-97f9613c0997]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapefbb222c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b0:3f:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540800, 'reachable_time': 27632, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299044, 'error': None, 'target': 'ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:40.478 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7948d230-6097-487c-aea3-f43a4c6efc9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:40.548 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5f36bbdc-e6f6-48e8-b108-de26fb8e6356]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:40.553 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapefbb222c-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:40.554 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:40.554 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapefbb222c-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:40 compute-0 kernel: tapefbb222c-a0: entered promiscuous mode
Oct 02 12:15:40 compute-0 NetworkManager[44987]: <info>  [1759407340.5577] manager: (tapefbb222c-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/119)
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:40.564 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapefbb222c-a0, col_values=(('external_ids', {'iface-id': 'a84bf0d3-c6c3-4760-a48a-4db671c03c75'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:40.566 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:15:40 compute-0 ovn_controller[148183]: 2025-10-02T12:15:40Z|00260|binding|INFO|Releasing lport a84bf0d3-c6c3-4760-a48a-4db671c03c75 from this chassis (sb_readonly=0)
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:40.568 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e6374313-e5bc-441d-b98c-6b1f739c780f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:40.569 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea.pid.haproxy
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:15:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:40.569 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea', 'env', 'PROCESS_TAG=haproxy-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:15:40 compute-0 nova_compute[257802]: 2025-10-02 12:15:40.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:41 compute-0 ceph-mon[73607]: pgmap v1527: 305 pgs: 305 active+clean; 244 MiB data, 662 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 1.6 MiB/s wr, 184 op/s
Oct 02 12:15:41 compute-0 ceph-mon[73607]: osdmap e217: 3 total, 3 up, 3 in
Oct 02 12:15:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1969141937' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:41 compute-0 podman[299095]: 2025-10-02 12:15:40.933351502 +0000 UTC m=+0.024848593 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:41 compute-0 podman[299095]: 2025-10-02 12:15:41.114500746 +0000 UTC m=+0.205997817 container create 3f167b671431785d69bed0bf9927e2e44739d92730c68a5171a504ebd474006f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:15:41 compute-0 systemd[1]: Started libpod-conmon-3f167b671431785d69bed0bf9927e2e44739d92730c68a5171a504ebd474006f.scope.
Oct 02 12:15:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:15:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/296be99239005561b86d2b0643a560569ac81a202dde97603b98afca9692d7ce/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.182 2 DEBUG nova.compute.manager [req-1c13ae72-af98-4b72-88b3-1a32a3fe264e req-a2f10a4c-a8b4-4d30-90db-916dae05a93a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Received event network-vif-plugged-4b13d695-1222-496f-b948-e5b9f432d54f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.184 2 DEBUG oslo_concurrency.lockutils [req-1c13ae72-af98-4b72-88b3-1a32a3fe264e req-a2f10a4c-a8b4-4d30-90db-916dae05a93a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.185 2 DEBUG oslo_concurrency.lockutils [req-1c13ae72-af98-4b72-88b3-1a32a3fe264e req-a2f10a4c-a8b4-4d30-90db-916dae05a93a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.185 2 DEBUG oslo_concurrency.lockutils [req-1c13ae72-af98-4b72-88b3-1a32a3fe264e req-a2f10a4c-a8b4-4d30-90db-916dae05a93a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.185 2 DEBUG nova.compute.manager [req-1c13ae72-af98-4b72-88b3-1a32a3fe264e req-a2f10a4c-a8b4-4d30-90db-916dae05a93a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] No waiting events found dispatching network-vif-plugged-4b13d695-1222-496f-b948-e5b9f432d54f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.186 2 WARNING nova.compute.manager [req-1c13ae72-af98-4b72-88b3-1a32a3fe264e req-a2f10a4c-a8b4-4d30-90db-916dae05a93a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Received unexpected event network-vif-plugged-4b13d695-1222-496f-b948-e5b9f432d54f for instance with vm_state active and task_state reboot_started_hard.
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.186 2 DEBUG nova.compute.manager [req-1c13ae72-af98-4b72-88b3-1a32a3fe264e req-a2f10a4c-a8b4-4d30-90db-916dae05a93a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Received event network-vif-plugged-4b13d695-1222-496f-b948-e5b9f432d54f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.186 2 DEBUG oslo_concurrency.lockutils [req-1c13ae72-af98-4b72-88b3-1a32a3fe264e req-a2f10a4c-a8b4-4d30-90db-916dae05a93a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.187 2 DEBUG oslo_concurrency.lockutils [req-1c13ae72-af98-4b72-88b3-1a32a3fe264e req-a2f10a4c-a8b4-4d30-90db-916dae05a93a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.187 2 DEBUG oslo_concurrency.lockutils [req-1c13ae72-af98-4b72-88b3-1a32a3fe264e req-a2f10a4c-a8b4-4d30-90db-916dae05a93a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.187 2 DEBUG nova.compute.manager [req-1c13ae72-af98-4b72-88b3-1a32a3fe264e req-a2f10a4c-a8b4-4d30-90db-916dae05a93a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] No waiting events found dispatching network-vif-plugged-4b13d695-1222-496f-b948-e5b9f432d54f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.188 2 WARNING nova.compute.manager [req-1c13ae72-af98-4b72-88b3-1a32a3fe264e req-a2f10a4c-a8b4-4d30-90db-916dae05a93a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Received unexpected event network-vif-plugged-4b13d695-1222-496f-b948-e5b9f432d54f for instance with vm_state active and task_state reboot_started_hard.
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.188 2 DEBUG nova.compute.manager [req-1c13ae72-af98-4b72-88b3-1a32a3fe264e req-a2f10a4c-a8b4-4d30-90db-916dae05a93a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Received event network-vif-plugged-4b13d695-1222-496f-b948-e5b9f432d54f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.188 2 DEBUG oslo_concurrency.lockutils [req-1c13ae72-af98-4b72-88b3-1a32a3fe264e req-a2f10a4c-a8b4-4d30-90db-916dae05a93a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.189 2 DEBUG oslo_concurrency.lockutils [req-1c13ae72-af98-4b72-88b3-1a32a3fe264e req-a2f10a4c-a8b4-4d30-90db-916dae05a93a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.189 2 DEBUG oslo_concurrency.lockutils [req-1c13ae72-af98-4b72-88b3-1a32a3fe264e req-a2f10a4c-a8b4-4d30-90db-916dae05a93a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.189 2 DEBUG nova.compute.manager [req-1c13ae72-af98-4b72-88b3-1a32a3fe264e req-a2f10a4c-a8b4-4d30-90db-916dae05a93a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] No waiting events found dispatching network-vif-plugged-4b13d695-1222-496f-b948-e5b9f432d54f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.190 2 WARNING nova.compute.manager [req-1c13ae72-af98-4b72-88b3-1a32a3fe264e req-a2f10a4c-a8b4-4d30-90db-916dae05a93a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Received unexpected event network-vif-plugged-4b13d695-1222-496f-b948-e5b9f432d54f for instance with vm_state active and task_state reboot_started_hard.
Oct 02 12:15:41 compute-0 podman[299095]: 2025-10-02 12:15:41.195744514 +0000 UTC m=+0.287241605 container init 3f167b671431785d69bed0bf9927e2e44739d92730c68a5171a504ebd474006f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 12:15:41 compute-0 podman[299095]: 2025-10-02 12:15:41.201307511 +0000 UTC m=+0.292804582 container start 3f167b671431785d69bed0bf9927e2e44739d92730c68a5171a504ebd474006f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:15:41 compute-0 neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea[299130]: [NOTICE]   (299139) : New worker (299141) forked
Oct 02 12:15:41 compute-0 neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea[299130]: [NOTICE]   (299139) : Loading success.
Oct 02 12:15:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1529: 305 pgs: 305 active+clean; 266 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 4.4 MiB/s wr, 229 op/s
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.643 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Removed pending event for ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.644 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407341.6435697, ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.644 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] VM Resumed (Lifecycle Event)
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.646 2 DEBUG nova.compute.manager [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.649 2 INFO nova.virt.libvirt.driver [-] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Instance rebooted successfully.
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.649 2 DEBUG nova.compute.manager [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.667 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.670 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.686 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.687 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407341.6465364, ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.687 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] VM Started (Lifecycle Event)
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.713 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.715 2 DEBUG oslo_concurrency.lockutils [None req-0b8c8607-8956-4fe1-8d5a-36c1d76906e0 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.386s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:41 compute-0 nova_compute[257802]: 2025-10-02 12:15:41.718 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:15:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Oct 02 12:15:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:42.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Oct 02 12:15:42 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Oct 02 12:15:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:15:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:42.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:15:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:15:42
Oct 02 12:15:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:15:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:15:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'backups', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'volumes']
Oct 02 12:15:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:15:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:15:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:15:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:15:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:15:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:15:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:15:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:15:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:15:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:15:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:15:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:15:43 compute-0 ceph-mon[73607]: pgmap v1529: 305 pgs: 305 active+clean; 266 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 4.4 MiB/s wr, 229 op/s
Oct 02 12:15:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3157430924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:43 compute-0 ceph-mon[73607]: osdmap e218: 3 total, 3 up, 3 in
Oct 02 12:15:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/397792814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 305 active+clean; 271 MiB data, 685 MiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 7.4 MiB/s wr, 334 op/s
Oct 02 12:15:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:15:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:15:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:15:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:15:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:15:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Oct 02 12:15:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Oct 02 12:15:43 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Oct 02 12:15:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:15:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:44.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.095 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:44.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.399 2 DEBUG oslo_concurrency.lockutils [None req-33c2f9e9-c469-444f-926b-009a3a117b4d 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Acquiring lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.400 2 DEBUG oslo_concurrency.lockutils [None req-33c2f9e9-c469-444f-926b-009a3a117b4d 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.400 2 DEBUG oslo_concurrency.lockutils [None req-33c2f9e9-c469-444f-926b-009a3a117b4d 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Acquiring lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.400 2 DEBUG oslo_concurrency.lockutils [None req-33c2f9e9-c469-444f-926b-009a3a117b4d 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.400 2 DEBUG oslo_concurrency.lockutils [None req-33c2f9e9-c469-444f-926b-009a3a117b4d 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.401 2 INFO nova.compute.manager [None req-33c2f9e9-c469-444f-926b-009a3a117b4d 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Terminating instance
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.402 2 DEBUG nova.compute.manager [None req-33c2f9e9-c469-444f-926b-009a3a117b4d 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:15:44 compute-0 kernel: tap4b13d695-12 (unregistering): left promiscuous mode
Oct 02 12:15:44 compute-0 NetworkManager[44987]: <info>  [1759407344.4473] device (tap4b13d695-12): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:15:44 compute-0 ovn_controller[148183]: 2025-10-02T12:15:44Z|00261|binding|INFO|Releasing lport 4b13d695-1222-496f-b948-e5b9f432d54f from this chassis (sb_readonly=0)
Oct 02 12:15:44 compute-0 ovn_controller[148183]: 2025-10-02T12:15:44Z|00262|binding|INFO|Setting lport 4b13d695-1222-496f-b948-e5b9f432d54f down in Southbound
Oct 02 12:15:44 compute-0 ovn_controller[148183]: 2025-10-02T12:15:44Z|00263|binding|INFO|Removing iface tap4b13d695-12 ovn-installed in OVS
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:44.461 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:96:d3 10.100.0.3'], port_security=['fa:16:3e:08:96:d3 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a0b70ca4a75844a9a91b4e116bc58df9', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'fec22877-9c15-42d1-9b7b-0131e34b8da5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f3c414c-ec09-487b-9da3-ac33f7c8b0de, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=4b13d695-1222-496f-b948-e5b9f432d54f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:15:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:44.462 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 4b13d695-1222-496f-b948-e5b9f432d54f in datapath efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea unbound from our chassis
Oct 02 12:15:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:44.463 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:15:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:44.464 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d5cc9293-dd97-453f-8184-1faee6acca64]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:44.464 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea namespace which is not needed anymore
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:44 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000043.scope: Deactivated successfully.
Oct 02 12:15:44 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000043.scope: Consumed 4.045s CPU time.
Oct 02 12:15:44 compute-0 systemd-machined[211836]: Machine qemu-31-instance-00000043 terminated.
Oct 02 12:15:44 compute-0 neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea[299130]: [NOTICE]   (299139) : haproxy version is 2.8.14-c23fe91
Oct 02 12:15:44 compute-0 neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea[299130]: [NOTICE]   (299139) : path to executable is /usr/sbin/haproxy
Oct 02 12:15:44 compute-0 neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea[299130]: [WARNING]  (299139) : Exiting Master process...
Oct 02 12:15:44 compute-0 neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea[299130]: [WARNING]  (299139) : Exiting Master process...
Oct 02 12:15:44 compute-0 neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea[299130]: [ALERT]    (299139) : Current worker (299141) exited with code 143 (Terminated)
Oct 02 12:15:44 compute-0 neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea[299130]: [WARNING]  (299139) : All workers exited. Exiting... (0)
Oct 02 12:15:44 compute-0 systemd[1]: libpod-3f167b671431785d69bed0bf9927e2e44739d92730c68a5171a504ebd474006f.scope: Deactivated successfully.
Oct 02 12:15:44 compute-0 podman[299176]: 2025-10-02 12:15:44.62162153 +0000 UTC m=+0.059351390 container died 3f167b671431785d69bed0bf9927e2e44739d92730c68a5171a504ebd474006f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.638 2 INFO nova.virt.libvirt.driver [-] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Instance destroyed successfully.
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.639 2 DEBUG nova.objects.instance [None req-33c2f9e9-c469-444f-926b-009a3a117b4d 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Lazy-loading 'resources' on Instance uuid ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.653 2 DEBUG nova.virt.libvirt.vif [None req-33c2f9e9-c469-444f-926b-009a3a117b4d 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:15:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1422524610',display_name='tempest-InstanceActionsTestJSON-server-1422524610',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1422524610',id=67,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:15:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a0b70ca4a75844a9a91b4e116bc58df9',ramdisk_id='',reservation_id='r-6fnsbyuy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-662595495',owner_user_name='tempest-InstanceActionsTestJSON-662595495-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:15:41Z,user_data=None,user_id='51229018510440858be9691ef4a0965f',uuid=ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4b13d695-1222-496f-b948-e5b9f432d54f", "address": "fa:16:3e:08:96:d3", "network": {"id": "efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1657089235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0b70ca4a75844a9a91b4e116bc58df9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b13d695-12", "ovs_interfaceid": "4b13d695-1222-496f-b948-e5b9f432d54f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.654 2 DEBUG nova.network.os_vif_util [None req-33c2f9e9-c469-444f-926b-009a3a117b4d 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Converting VIF {"id": "4b13d695-1222-496f-b948-e5b9f432d54f", "address": "fa:16:3e:08:96:d3", "network": {"id": "efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1657089235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0b70ca4a75844a9a91b4e116bc58df9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b13d695-12", "ovs_interfaceid": "4b13d695-1222-496f-b948-e5b9f432d54f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.655 2 DEBUG nova.network.os_vif_util [None req-33c2f9e9-c469-444f-926b-009a3a117b4d 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:08:96:d3,bridge_name='br-int',has_traffic_filtering=True,id=4b13d695-1222-496f-b948-e5b9f432d54f,network=Network(efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b13d695-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.655 2 DEBUG os_vif [None req-33c2f9e9-c469-444f-926b-009a3a117b4d 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:08:96:d3,bridge_name='br-int',has_traffic_filtering=True,id=4b13d695-1222-496f-b948-e5b9f432d54f,network=Network(efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b13d695-12') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.658 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b13d695-12, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.665 2 INFO os_vif [None req-33c2f9e9-c469-444f-926b-009a3a117b4d 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:08:96:d3,bridge_name='br-int',has_traffic_filtering=True,id=4b13d695-1222-496f-b948-e5b9f432d54f,network=Network(efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b13d695-12')
Oct 02 12:15:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3f167b671431785d69bed0bf9927e2e44739d92730c68a5171a504ebd474006f-userdata-shm.mount: Deactivated successfully.
Oct 02 12:15:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-296be99239005561b86d2b0643a560569ac81a202dde97603b98afca9692d7ce-merged.mount: Deactivated successfully.
Oct 02 12:15:44 compute-0 podman[299176]: 2025-10-02 12:15:44.707568793 +0000 UTC m=+0.145298663 container cleanup 3f167b671431785d69bed0bf9927e2e44739d92730c68a5171a504ebd474006f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:15:44 compute-0 systemd[1]: libpod-conmon-3f167b671431785d69bed0bf9927e2e44739d92730c68a5171a504ebd474006f.scope: Deactivated successfully.
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.724 2 DEBUG nova.compute.manager [req-4cb9d44b-a9bb-4fa8-b1af-7f5435914971 req-5ce2aaec-3f39-4cd3-a8d0-3beb9cde939e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Received event network-vif-unplugged-4b13d695-1222-496f-b948-e5b9f432d54f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.725 2 DEBUG oslo_concurrency.lockutils [req-4cb9d44b-a9bb-4fa8-b1af-7f5435914971 req-5ce2aaec-3f39-4cd3-a8d0-3beb9cde939e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.726 2 DEBUG oslo_concurrency.lockutils [req-4cb9d44b-a9bb-4fa8-b1af-7f5435914971 req-5ce2aaec-3f39-4cd3-a8d0-3beb9cde939e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.726 2 DEBUG oslo_concurrency.lockutils [req-4cb9d44b-a9bb-4fa8-b1af-7f5435914971 req-5ce2aaec-3f39-4cd3-a8d0-3beb9cde939e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.726 2 DEBUG nova.compute.manager [req-4cb9d44b-a9bb-4fa8-b1af-7f5435914971 req-5ce2aaec-3f39-4cd3-a8d0-3beb9cde939e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] No waiting events found dispatching network-vif-unplugged-4b13d695-1222-496f-b948-e5b9f432d54f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.727 2 DEBUG nova.compute.manager [req-4cb9d44b-a9bb-4fa8-b1af-7f5435914971 req-5ce2aaec-3f39-4cd3-a8d0-3beb9cde939e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Received event network-vif-unplugged-4b13d695-1222-496f-b948-e5b9f432d54f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:15:44 compute-0 ceph-mon[73607]: pgmap v1531: 305 pgs: 305 active+clean; 271 MiB data, 685 MiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 7.4 MiB/s wr, 334 op/s
Oct 02 12:15:44 compute-0 ceph-mon[73607]: osdmap e219: 3 total, 3 up, 3 in
Oct 02 12:15:44 compute-0 podman[299233]: 2025-10-02 12:15:44.827715298 +0000 UTC m=+0.085669327 container remove 3f167b671431785d69bed0bf9927e2e44739d92730c68a5171a504ebd474006f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:15:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:44.839 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7cb412ac-a09c-4587-ada3-b1539e4b4020]: (4, ('Thu Oct  2 12:15:44 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea (3f167b671431785d69bed0bf9927e2e44739d92730c68a5171a504ebd474006f)\n3f167b671431785d69bed0bf9927e2e44739d92730c68a5171a504ebd474006f\nThu Oct  2 12:15:44 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea (3f167b671431785d69bed0bf9927e2e44739d92730c68a5171a504ebd474006f)\n3f167b671431785d69bed0bf9927e2e44739d92730c68a5171a504ebd474006f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:44.842 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[90a5680c-96dd-4c33-a3ca-4fc66eb82c00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:44.845 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapefbb222c-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:44 compute-0 kernel: tapefbb222c-a0: left promiscuous mode
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:44.856 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9e7e397b-9731-45ce-ae96-991678571bff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:44 compute-0 nova_compute[257802]: 2025-10-02 12:15:44.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:44.891 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4eef2c70-316f-4926-ae3a-765f030b66a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:44.893 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6f30c4e4-011e-4170-9ac3-5974915a5621]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:44.911 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[61215e73-fa20-472d-a12f-6ae50f40ad59]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540792, 'reachable_time': 16040, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299249, 'error': None, 'target': 'ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:44 compute-0 systemd[1]: run-netns-ovnmeta\x2defbb222c\x2daa63\x2d45fb\x2da3ac\x2d4d28b5fbd5ea.mount: Deactivated successfully.
Oct 02 12:15:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:44.914 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-efbb222c-aa63-45fb-a3ac-4d28b5fbd5ea deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:15:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:15:44.915 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[051c4d2d-0010-409f-9ea5-2e9420f52a6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1533: 305 pgs: 305 active+clean; 262 MiB data, 687 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.6 MiB/s wr, 270 op/s
Oct 02 12:15:45 compute-0 nova_compute[257802]: 2025-10-02 12:15:45.305 2 INFO nova.virt.libvirt.driver [None req-33c2f9e9-c469-444f-926b-009a3a117b4d 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Deleting instance files /var/lib/nova/instances/ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a_del
Oct 02 12:15:45 compute-0 nova_compute[257802]: 2025-10-02 12:15:45.307 2 INFO nova.virt.libvirt.driver [None req-33c2f9e9-c469-444f-926b-009a3a117b4d 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Deletion of /var/lib/nova/instances/ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a_del complete
Oct 02 12:15:45 compute-0 nova_compute[257802]: 2025-10-02 12:15:45.390 2 INFO nova.compute.manager [None req-33c2f9e9-c469-444f-926b-009a3a117b4d 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Took 0.99 seconds to destroy the instance on the hypervisor.
Oct 02 12:15:45 compute-0 nova_compute[257802]: 2025-10-02 12:15:45.391 2 DEBUG oslo.service.loopingcall [None req-33c2f9e9-c469-444f-926b-009a3a117b4d 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:15:45 compute-0 nova_compute[257802]: 2025-10-02 12:15:45.391 2 DEBUG nova.compute.manager [-] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:15:45 compute-0 nova_compute[257802]: 2025-10-02 12:15:45.392 2 DEBUG nova.network.neutron [-] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:15:45 compute-0 podman[299251]: 2025-10-02 12:15:45.980443438 +0000 UTC m=+0.094823172 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:15:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:46.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:46.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:46 compute-0 nova_compute[257802]: 2025-10-02 12:15:46.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:46 compute-0 ceph-mon[73607]: pgmap v1533: 305 pgs: 305 active+clean; 262 MiB data, 687 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.6 MiB/s wr, 270 op/s
Oct 02 12:15:47 compute-0 nova_compute[257802]: 2025-10-02 12:15:47.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:47 compute-0 nova_compute[257802]: 2025-10-02 12:15:47.191 2 DEBUG nova.compute.manager [req-2c2e5c1a-ee52-43e8-8abf-9535c3403be9 req-c75c2077-c1a9-498f-9028-f0b6f64d9d15 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Received event network-vif-plugged-4b13d695-1222-496f-b948-e5b9f432d54f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:15:47 compute-0 nova_compute[257802]: 2025-10-02 12:15:47.192 2 DEBUG oslo_concurrency.lockutils [req-2c2e5c1a-ee52-43e8-8abf-9535c3403be9 req-c75c2077-c1a9-498f-9028-f0b6f64d9d15 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:47 compute-0 nova_compute[257802]: 2025-10-02 12:15:47.192 2 DEBUG oslo_concurrency.lockutils [req-2c2e5c1a-ee52-43e8-8abf-9535c3403be9 req-c75c2077-c1a9-498f-9028-f0b6f64d9d15 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:47 compute-0 nova_compute[257802]: 2025-10-02 12:15:47.192 2 DEBUG oslo_concurrency.lockutils [req-2c2e5c1a-ee52-43e8-8abf-9535c3403be9 req-c75c2077-c1a9-498f-9028-f0b6f64d9d15 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:47 compute-0 nova_compute[257802]: 2025-10-02 12:15:47.192 2 DEBUG nova.compute.manager [req-2c2e5c1a-ee52-43e8-8abf-9535c3403be9 req-c75c2077-c1a9-498f-9028-f0b6f64d9d15 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] No waiting events found dispatching network-vif-plugged-4b13d695-1222-496f-b948-e5b9f432d54f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:15:47 compute-0 nova_compute[257802]: 2025-10-02 12:15:47.193 2 WARNING nova.compute.manager [req-2c2e5c1a-ee52-43e8-8abf-9535c3403be9 req-c75c2077-c1a9-498f-9028-f0b6f64d9d15 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Received unexpected event network-vif-plugged-4b13d695-1222-496f-b948-e5b9f432d54f for instance with vm_state active and task_state deleting.
Oct 02 12:15:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 305 active+clean; 221 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.5 MiB/s wr, 319 op/s
Oct 02 12:15:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:48.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:48 compute-0 nova_compute[257802]: 2025-10-02 12:15:48.274 2 DEBUG nova.network.neutron [-] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:15:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:48.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:48 compute-0 nova_compute[257802]: 2025-10-02 12:15:48.299 2 INFO nova.compute.manager [-] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Took 2.91 seconds to deallocate network for instance.
Oct 02 12:15:48 compute-0 nova_compute[257802]: 2025-10-02 12:15:48.351 2 DEBUG oslo_concurrency.lockutils [None req-33c2f9e9-c469-444f-926b-009a3a117b4d 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:48 compute-0 nova_compute[257802]: 2025-10-02 12:15:48.352 2 DEBUG oslo_concurrency.lockutils [None req-33c2f9e9-c469-444f-926b-009a3a117b4d 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:48 compute-0 nova_compute[257802]: 2025-10-02 12:15:48.406 2 DEBUG oslo_concurrency.processutils [None req-33c2f9e9-c469-444f-926b-009a3a117b4d 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Oct 02 12:15:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Oct 02 12:15:48 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Oct 02 12:15:48 compute-0 ceph-mon[73607]: pgmap v1534: 305 pgs: 305 active+clean; 221 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.5 MiB/s wr, 319 op/s
Oct 02 12:15:48 compute-0 ceph-mon[73607]: osdmap e220: 3 total, 3 up, 3 in
Oct 02 12:15:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:15:48 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/924843913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:48 compute-0 nova_compute[257802]: 2025-10-02 12:15:48.837 2 DEBUG oslo_concurrency.processutils [None req-33c2f9e9-c469-444f-926b-009a3a117b4d 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:48 compute-0 nova_compute[257802]: 2025-10-02 12:15:48.847 2 DEBUG nova.compute.provider_tree [None req-33c2f9e9-c469-444f-926b-009a3a117b4d 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:15:48 compute-0 nova_compute[257802]: 2025-10-02 12:15:48.869 2 DEBUG nova.scheduler.client.report [None req-33c2f9e9-c469-444f-926b-009a3a117b4d 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:15:48 compute-0 nova_compute[257802]: 2025-10-02 12:15:48.892 2 DEBUG oslo_concurrency.lockutils [None req-33c2f9e9-c469-444f-926b-009a3a117b4d 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:48 compute-0 nova_compute[257802]: 2025-10-02 12:15:48.947 2 INFO nova.scheduler.client.report [None req-33c2f9e9-c469-444f-926b-009a3a117b4d 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Deleted allocations for instance ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a
Oct 02 12:15:49 compute-0 nova_compute[257802]: 2025-10-02 12:15:49.030 2 DEBUG oslo_concurrency.lockutils [None req-33c2f9e9-c469-444f-926b-009a3a117b4d 51229018510440858be9691ef4a0965f a0b70ca4a75844a9a91b4e116bc58df9 - - default default] Lock "ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:49 compute-0 nova_compute[257802]: 2025-10-02 12:15:49.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:49 compute-0 nova_compute[257802]: 2025-10-02 12:15:49.097 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:15:49 compute-0 nova_compute[257802]: 2025-10-02 12:15:49.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:15:49 compute-0 nova_compute[257802]: 2025-10-02 12:15:49.115 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:15:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1536: 305 pgs: 305 active+clean; 121 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.7 MiB/s wr, 338 op/s
Oct 02 12:15:49 compute-0 sudo[299294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:49 compute-0 sudo[299294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:49 compute-0 sudo[299294]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:49 compute-0 sudo[299319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:15:49 compute-0 sudo[299319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:49 compute-0 sudo[299319]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:49 compute-0 sudo[299344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:49 compute-0 sudo[299344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:49 compute-0 sudo[299344]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:49 compute-0 sudo[299369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:15:49 compute-0 sudo[299369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:49 compute-0 nova_compute[257802]: 2025-10-02 12:15:49.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/924843913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3264171307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:15:49 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:15:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:15:49 compute-0 sudo[299369]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:49 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:15:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 12:15:50 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 12:15:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:15:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:50.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:15:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Oct 02 12:15:50 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 12:15:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:50.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:50 compute-0 nova_compute[257802]: 2025-10-02 12:15:50.604 2 DEBUG nova.compute.manager [req-f319d67f-5e8f-4baa-941e-ca6d4b41c26e req-1b4b9ae9-1e90-438b-bdb2-435e9822756a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Received event network-vif-deleted-4b13d695-1222-496f-b948-e5b9f432d54f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:15:50 compute-0 ceph-mon[73607]: pgmap v1536: 305 pgs: 305 active+clean; 121 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.7 MiB/s wr, 338 op/s
Oct 02 12:15:50 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:15:50 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:15:50 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 12:15:50 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 12:15:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Oct 02 12:15:50 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 12:15:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:15:50 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:15:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:15:50 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:15:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:15:51 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:15:51 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 9be93c6c-17bf-40e6-8c2d-19f8bd9cae2f does not exist
Oct 02 12:15:51 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 7f033175-4fd2-465c-bcf5-709bef25af96 does not exist
Oct 02 12:15:51 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 2a033c4b-9cdc-466e-a63e-618cccb5f55d does not exist
Oct 02 12:15:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:15:51 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:15:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:15:51 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:15:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:15:51 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:15:51 compute-0 nova_compute[257802]: 2025-10-02 12:15:51.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:51 compute-0 sudo[299426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:51 compute-0 sudo[299426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:51 compute-0 sudo[299426]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:51 compute-0 nova_compute[257802]: 2025-10-02 12:15:51.145 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:51 compute-0 nova_compute[257802]: 2025-10-02 12:15:51.146 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:51 compute-0 nova_compute[257802]: 2025-10-02 12:15:51.146 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:51 compute-0 nova_compute[257802]: 2025-10-02 12:15:51.146 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:15:51 compute-0 nova_compute[257802]: 2025-10-02 12:15:51.147 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:51 compute-0 podman[299450]: 2025-10-02 12:15:51.20535697 +0000 UTC m=+0.076259856 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001)
Oct 02 12:15:51 compute-0 podman[299451]: 2025-10-02 12:15:51.206194911 +0000 UTC m=+0.077807115 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 12:15:51 compute-0 sudo[299462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:15:51 compute-0 sudo[299462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:51 compute-0 sudo[299462]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 305 active+clean; 121 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 296 KiB/s wr, 205 op/s
Oct 02 12:15:51 compute-0 sudo[299515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:51 compute-0 sudo[299515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:51 compute-0 sudo[299515]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:51 compute-0 sudo[299542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:15:51 compute-0 sudo[299542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:15:51 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1573775697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:51 compute-0 nova_compute[257802]: 2025-10-02 12:15:51.628 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:51 compute-0 podman[299628]: 2025-10-02 12:15:51.70549543 +0000 UTC m=+0.050046181 container create 495a654c6e67d2a5138ebe910690f70805b0804664692d4c5acf835660f21f9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:15:51 compute-0 nova_compute[257802]: 2025-10-02 12:15:51.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:51 compute-0 systemd[1]: Started libpod-conmon-495a654c6e67d2a5138ebe910690f70805b0804664692d4c5acf835660f21f9b.scope.
Oct 02 12:15:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:15:51 compute-0 podman[299628]: 2025-10-02 12:15:51.687679462 +0000 UTC m=+0.032230243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:15:51 compute-0 podman[299628]: 2025-10-02 12:15:51.79045032 +0000 UTC m=+0.135001081 container init 495a654c6e67d2a5138ebe910690f70805b0804664692d4c5acf835660f21f9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_poincare, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:15:51 compute-0 podman[299628]: 2025-10-02 12:15:51.7969667 +0000 UTC m=+0.141517441 container start 495a654c6e67d2a5138ebe910690f70805b0804664692d4c5acf835660f21f9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:15:51 compute-0 blissful_poincare[299644]: 167 167
Oct 02 12:15:51 compute-0 podman[299628]: 2025-10-02 12:15:51.803151872 +0000 UTC m=+0.147702633 container attach 495a654c6e67d2a5138ebe910690f70805b0804664692d4c5acf835660f21f9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_poincare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 12:15:51 compute-0 systemd[1]: libpod-495a654c6e67d2a5138ebe910690f70805b0804664692d4c5acf835660f21f9b.scope: Deactivated successfully.
Oct 02 12:15:51 compute-0 podman[299628]: 2025-10-02 12:15:51.803596053 +0000 UTC m=+0.148146794 container died 495a654c6e67d2a5138ebe910690f70805b0804664692d4c5acf835660f21f9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 12:15:51 compute-0 nova_compute[257802]: 2025-10-02 12:15:51.808 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:15:51 compute-0 nova_compute[257802]: 2025-10-02 12:15:51.809 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4611MB free_disk=20.942665100097656GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:15:51 compute-0 nova_compute[257802]: 2025-10-02 12:15:51.810 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:51 compute-0 nova_compute[257802]: 2025-10-02 12:15:51.810 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-a84d87e0afd85fe3d86017f71d2bf8d0de7043e3974f0b9be38ead5c1d997041-merged.mount: Deactivated successfully.
Oct 02 12:15:51 compute-0 podman[299628]: 2025-10-02 12:15:51.838498661 +0000 UTC m=+0.183049402 container remove 495a654c6e67d2a5138ebe910690f70805b0804664692d4c5acf835660f21f9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_poincare, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:15:51 compute-0 systemd[1]: libpod-conmon-495a654c6e67d2a5138ebe910690f70805b0804664692d4c5acf835660f21f9b.scope: Deactivated successfully.
Oct 02 12:15:51 compute-0 nova_compute[257802]: 2025-10-02 12:15:51.867 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:15:51 compute-0 nova_compute[257802]: 2025-10-02 12:15:51.867 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:15:51 compute-0 nova_compute[257802]: 2025-10-02 12:15:51.908 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 12:15:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:15:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:15:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:15:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:15:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:15:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:15:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3937590854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1573775697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:51 compute-0 podman[299670]: 2025-10-02 12:15:51.993178056 +0000 UTC m=+0.045594193 container create 09c089c1b5f5ec7bf4175ccfebed7a74a7b6b647eeeb7161d5f8c2d131646d04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:15:52 compute-0 systemd[1]: Started libpod-conmon-09c089c1b5f5ec7bf4175ccfebed7a74a7b6b647eeeb7161d5f8c2d131646d04.scope.
Oct 02 12:15:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:15:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8b995f9c9e0c6437af33a3fccbf7dfdeefe511458e1f2ab3012a8bef9a1e7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:15:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8b995f9c9e0c6437af33a3fccbf7dfdeefe511458e1f2ab3012a8bef9a1e7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:15:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8b995f9c9e0c6437af33a3fccbf7dfdeefe511458e1f2ab3012a8bef9a1e7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:15:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8b995f9c9e0c6437af33a3fccbf7dfdeefe511458e1f2ab3012a8bef9a1e7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:15:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8b995f9c9e0c6437af33a3fccbf7dfdeefe511458e1f2ab3012a8bef9a1e7f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:15:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:15:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:52.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:15:52 compute-0 podman[299670]: 2025-10-02 12:15:51.974344612 +0000 UTC m=+0.026760739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:15:52 compute-0 podman[299670]: 2025-10-02 12:15:52.087923086 +0000 UTC m=+0.140339193 container init 09c089c1b5f5ec7bf4175ccfebed7a74a7b6b647eeeb7161d5f8c2d131646d04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:15:52 compute-0 podman[299670]: 2025-10-02 12:15:52.09988787 +0000 UTC m=+0.152303967 container start 09c089c1b5f5ec7bf4175ccfebed7a74a7b6b647eeeb7161d5f8c2d131646d04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:15:52 compute-0 podman[299670]: 2025-10-02 12:15:52.103453418 +0000 UTC m=+0.155869605 container attach 09c089c1b5f5ec7bf4175ccfebed7a74a7b6b647eeeb7161d5f8c2d131646d04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 12:15:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:52.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:15:52 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/960292307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:52 compute-0 nova_compute[257802]: 2025-10-02 12:15:52.325 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:52 compute-0 nova_compute[257802]: 2025-10-02 12:15:52.336 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:15:52 compute-0 nova_compute[257802]: 2025-10-02 12:15:52.369 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:15:52 compute-0 nova_compute[257802]: 2025-10-02 12:15:52.402 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:15:52 compute-0 nova_compute[257802]: 2025-10-02 12:15:52.402 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:52 compute-0 ceph-mon[73607]: pgmap v1537: 305 pgs: 305 active+clean; 121 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 296 KiB/s wr, 205 op/s
Oct 02 12:15:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/225370352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/960292307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:52 compute-0 elated_napier[299705]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:15:52 compute-0 elated_napier[299705]: --> relative data size: 1.0
Oct 02 12:15:52 compute-0 elated_napier[299705]: --> All data devices are unavailable
Oct 02 12:15:53 compute-0 systemd[1]: libpod-09c089c1b5f5ec7bf4175ccfebed7a74a7b6b647eeeb7161d5f8c2d131646d04.scope: Deactivated successfully.
Oct 02 12:15:53 compute-0 conmon[299705]: conmon 09c089c1b5f5ec7bf417 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-09c089c1b5f5ec7bf4175ccfebed7a74a7b6b647eeeb7161d5f8c2d131646d04.scope/container/memory.events
Oct 02 12:15:53 compute-0 podman[299670]: 2025-10-02 12:15:53.002440397 +0000 UTC m=+1.054856514 container died 09c089c1b5f5ec7bf4175ccfebed7a74a7b6b647eeeb7161d5f8c2d131646d04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:15:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e8b995f9c9e0c6437af33a3fccbf7dfdeefe511458e1f2ab3012a8bef9a1e7f-merged.mount: Deactivated successfully.
Oct 02 12:15:53 compute-0 podman[299670]: 2025-10-02 12:15:53.084632778 +0000 UTC m=+1.137048885 container remove 09c089c1b5f5ec7bf4175ccfebed7a74a7b6b647eeeb7161d5f8c2d131646d04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:15:53 compute-0 systemd[1]: libpod-conmon-09c089c1b5f5ec7bf4175ccfebed7a74a7b6b647eeeb7161d5f8c2d131646d04.scope: Deactivated successfully.
Oct 02 12:15:53 compute-0 sudo[299542]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:53 compute-0 sudo[299737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:53 compute-0 sudo[299737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:53 compute-0 sudo[299737]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 305 active+clean; 121 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 249 KiB/s wr, 172 op/s
Oct 02 12:15:53 compute-0 sudo[299762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:15:53 compute-0 sudo[299762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:53 compute-0 sudo[299762]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:53 compute-0 sudo[299787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:53 compute-0 sudo[299787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:53 compute-0 sudo[299787]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:53 compute-0 sudo[299812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:15:53 compute-0 sudo[299812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:53 compute-0 podman[299877]: 2025-10-02 12:15:53.847371557 +0000 UTC m=+0.037894443 container create 4dfa16957766af888d4200a47e0129599f5300f3c64fd9bf4f5279df96393d40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_gagarin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:15:53 compute-0 nova_compute[257802]: 2025-10-02 12:15:53.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:53 compute-0 systemd[1]: Started libpod-conmon-4dfa16957766af888d4200a47e0129599f5300f3c64fd9bf4f5279df96393d40.scope.
Oct 02 12:15:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:15:53 compute-0 podman[299877]: 2025-10-02 12:15:53.832524013 +0000 UTC m=+0.023046929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:15:53 compute-0 podman[299877]: 2025-10-02 12:15:53.929459596 +0000 UTC m=+0.119982512 container init 4dfa16957766af888d4200a47e0129599f5300f3c64fd9bf4f5279df96393d40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_gagarin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Oct 02 12:15:53 compute-0 podman[299877]: 2025-10-02 12:15:53.935440433 +0000 UTC m=+0.125963319 container start 4dfa16957766af888d4200a47e0129599f5300f3c64fd9bf4f5279df96393d40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_gagarin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:15:53 compute-0 podman[299877]: 2025-10-02 12:15:53.938317034 +0000 UTC m=+0.128839920 container attach 4dfa16957766af888d4200a47e0129599f5300f3c64fd9bf4f5279df96393d40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:15:53 compute-0 festive_gagarin[299893]: 167 167
Oct 02 12:15:53 compute-0 systemd[1]: libpod-4dfa16957766af888d4200a47e0129599f5300f3c64fd9bf4f5279df96393d40.scope: Deactivated successfully.
Oct 02 12:15:53 compute-0 podman[299877]: 2025-10-02 12:15:53.942169269 +0000 UTC m=+0.132692155 container died 4dfa16957766af888d4200a47e0129599f5300f3c64fd9bf4f5279df96393d40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:15:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-f06031f5d759c044f45416ead4f82cb3ce0e1915658c68b30da1f545dd9a6b87-merged.mount: Deactivated successfully.
Oct 02 12:15:53 compute-0 podman[299877]: 2025-10-02 12:15:53.977854026 +0000 UTC m=+0.168376912 container remove 4dfa16957766af888d4200a47e0129599f5300f3c64fd9bf4f5279df96393d40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_gagarin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 12:15:53 compute-0 systemd[1]: libpod-conmon-4dfa16957766af888d4200a47e0129599f5300f3c64fd9bf4f5279df96393d40.scope: Deactivated successfully.
Oct 02 12:15:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:15:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:54.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:15:54 compute-0 podman[299917]: 2025-10-02 12:15:54.133485053 +0000 UTC m=+0.042152887 container create ae330c49ac305076b1c394696a415ebff33c982a7d13e8cdfdacc254ef83c6dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:15:54 compute-0 systemd[1]: Started libpod-conmon-ae330c49ac305076b1c394696a415ebff33c982a7d13e8cdfdacc254ef83c6dc.scope.
Oct 02 12:15:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:15:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bdf1e2af28c0cec8cab266edb24eb8ac6677d6fc6d10827a238e0dc4df8a928/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:15:54 compute-0 podman[299917]: 2025-10-02 12:15:54.117153592 +0000 UTC m=+0.025821466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:15:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bdf1e2af28c0cec8cab266edb24eb8ac6677d6fc6d10827a238e0dc4df8a928/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:15:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bdf1e2af28c0cec8cab266edb24eb8ac6677d6fc6d10827a238e0dc4df8a928/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:15:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bdf1e2af28c0cec8cab266edb24eb8ac6677d6fc6d10827a238e0dc4df8a928/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:15:54 compute-0 podman[299917]: 2025-10-02 12:15:54.236538758 +0000 UTC m=+0.145206642 container init ae330c49ac305076b1c394696a415ebff33c982a7d13e8cdfdacc254ef83c6dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:15:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:15:54 compute-0 podman[299917]: 2025-10-02 12:15:54.245595572 +0000 UTC m=+0.154263406 container start ae330c49ac305076b1c394696a415ebff33c982a7d13e8cdfdacc254ef83c6dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 12:15:54 compute-0 podman[299917]: 2025-10-02 12:15:54.248887752 +0000 UTC m=+0.157555586 container attach ae330c49ac305076b1c394696a415ebff33c982a7d13e8cdfdacc254ef83c6dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jemison, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:15:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:15:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:15:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:15:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00217504623813512 of space, bias 1.0, pg target 0.652513871440536 quantized to 32 (current 32)
Oct 02 12:15:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:15:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 12:15:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:15:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:15:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:15:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 12:15:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:15:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 12:15:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:15:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:15:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:15:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 12:15:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:15:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 12:15:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:15:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:15:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:15:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 12:15:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:54.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:54 compute-0 nova_compute[257802]: 2025-10-02 12:15:54.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:54 compute-0 ceph-mon[73607]: pgmap v1538: 305 pgs: 305 active+clean; 121 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 249 KiB/s wr, 172 op/s
Oct 02 12:15:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3542693089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:55 compute-0 condescending_jemison[299933]: {
Oct 02 12:15:55 compute-0 condescending_jemison[299933]:     "1": [
Oct 02 12:15:55 compute-0 condescending_jemison[299933]:         {
Oct 02 12:15:55 compute-0 condescending_jemison[299933]:             "devices": [
Oct 02 12:15:55 compute-0 condescending_jemison[299933]:                 "/dev/loop3"
Oct 02 12:15:55 compute-0 condescending_jemison[299933]:             ],
Oct 02 12:15:55 compute-0 condescending_jemison[299933]:             "lv_name": "ceph_lv0",
Oct 02 12:15:55 compute-0 condescending_jemison[299933]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:15:55 compute-0 condescending_jemison[299933]:             "lv_size": "7511998464",
Oct 02 12:15:55 compute-0 condescending_jemison[299933]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:15:55 compute-0 condescending_jemison[299933]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:15:55 compute-0 condescending_jemison[299933]:             "name": "ceph_lv0",
Oct 02 12:15:55 compute-0 condescending_jemison[299933]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:15:55 compute-0 condescending_jemison[299933]:             "tags": {
Oct 02 12:15:55 compute-0 condescending_jemison[299933]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:15:55 compute-0 condescending_jemison[299933]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:15:55 compute-0 condescending_jemison[299933]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:15:55 compute-0 condescending_jemison[299933]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:15:55 compute-0 condescending_jemison[299933]:                 "ceph.cluster_name": "ceph",
Oct 02 12:15:55 compute-0 condescending_jemison[299933]:                 "ceph.crush_device_class": "",
Oct 02 12:15:55 compute-0 condescending_jemison[299933]:                 "ceph.encrypted": "0",
Oct 02 12:15:55 compute-0 condescending_jemison[299933]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:15:55 compute-0 condescending_jemison[299933]:                 "ceph.osd_id": "1",
Oct 02 12:15:55 compute-0 condescending_jemison[299933]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:15:55 compute-0 condescending_jemison[299933]:                 "ceph.type": "block",
Oct 02 12:15:55 compute-0 condescending_jemison[299933]:                 "ceph.vdo": "0"
Oct 02 12:15:55 compute-0 condescending_jemison[299933]:             },
Oct 02 12:15:55 compute-0 condescending_jemison[299933]:             "type": "block",
Oct 02 12:15:55 compute-0 condescending_jemison[299933]:             "vg_name": "ceph_vg0"
Oct 02 12:15:55 compute-0 condescending_jemison[299933]:         }
Oct 02 12:15:55 compute-0 condescending_jemison[299933]:     ]
Oct 02 12:15:55 compute-0 condescending_jemison[299933]: }
Oct 02 12:15:55 compute-0 systemd[1]: libpod-ae330c49ac305076b1c394696a415ebff33c982a7d13e8cdfdacc254ef83c6dc.scope: Deactivated successfully.
Oct 02 12:15:55 compute-0 podman[299943]: 2025-10-02 12:15:55.084984775 +0000 UTC m=+0.022079364 container died ae330c49ac305076b1c394696a415ebff33c982a7d13e8cdfdacc254ef83c6dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jemison, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:15:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bdf1e2af28c0cec8cab266edb24eb8ac6677d6fc6d10827a238e0dc4df8a928-merged.mount: Deactivated successfully.
Oct 02 12:15:55 compute-0 podman[299943]: 2025-10-02 12:15:55.159898408 +0000 UTC m=+0.096992997 container remove ae330c49ac305076b1c394696a415ebff33c982a7d13e8cdfdacc254ef83c6dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jemison, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:15:55 compute-0 systemd[1]: libpod-conmon-ae330c49ac305076b1c394696a415ebff33c982a7d13e8cdfdacc254ef83c6dc.scope: Deactivated successfully.
Oct 02 12:15:55 compute-0 sudo[299812]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1539: 305 pgs: 305 active+clean; 121 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 550 KiB/s rd, 33 KiB/s wr, 117 op/s
Oct 02 12:15:55 compute-0 sudo[299958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:55 compute-0 sudo[299958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:55 compute-0 sudo[299958]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:55 compute-0 sudo[299983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:15:55 compute-0 sudo[299983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:55 compute-0 sudo[299983]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:55 compute-0 sudo[300008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:55 compute-0 sudo[300008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:55 compute-0 sudo[300008]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:55 compute-0 sudo[300033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:15:55 compute-0 sudo[300033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:55 compute-0 podman[300098]: 2025-10-02 12:15:55.900618875 +0000 UTC m=+0.050645277 container create 695a9780c8c830347e9200e81f6ab0a023b413e09eb8e82138461f2acbff4077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_feynman, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 12:15:55 compute-0 systemd[1]: Started libpod-conmon-695a9780c8c830347e9200e81f6ab0a023b413e09eb8e82138461f2acbff4077.scope.
Oct 02 12:15:55 compute-0 podman[300098]: 2025-10-02 12:15:55.880502749 +0000 UTC m=+0.030529191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:15:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:15:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/413423865' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:15:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/413423865' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:15:56 compute-0 podman[300098]: 2025-10-02 12:15:56.003687389 +0000 UTC m=+0.153713811 container init 695a9780c8c830347e9200e81f6ab0a023b413e09eb8e82138461f2acbff4077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_feynman, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:15:56 compute-0 podman[300098]: 2025-10-02 12:15:56.014094696 +0000 UTC m=+0.164121108 container start 695a9780c8c830347e9200e81f6ab0a023b413e09eb8e82138461f2acbff4077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 12:15:56 compute-0 stoic_feynman[300115]: 167 167
Oct 02 12:15:56 compute-0 systemd[1]: libpod-695a9780c8c830347e9200e81f6ab0a023b413e09eb8e82138461f2acbff4077.scope: Deactivated successfully.
Oct 02 12:15:56 compute-0 podman[300098]: 2025-10-02 12:15:56.027687949 +0000 UTC m=+0.177714371 container attach 695a9780c8c830347e9200e81f6ab0a023b413e09eb8e82138461f2acbff4077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:15:56 compute-0 podman[300098]: 2025-10-02 12:15:56.028151861 +0000 UTC m=+0.178178263 container died 695a9780c8c830347e9200e81f6ab0a023b413e09eb8e82138461f2acbff4077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 12:15:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:15:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:56.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:15:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-947f0eb09eaac26748c26df34894c12c77147c6dd4d477e9dc55a7e540708193-merged.mount: Deactivated successfully.
Oct 02 12:15:56 compute-0 podman[300098]: 2025-10-02 12:15:56.098929821 +0000 UTC m=+0.248956213 container remove 695a9780c8c830347e9200e81f6ab0a023b413e09eb8e82138461f2acbff4077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_feynman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:15:56 compute-0 systemd[1]: libpod-conmon-695a9780c8c830347e9200e81f6ab0a023b413e09eb8e82138461f2acbff4077.scope: Deactivated successfully.
Oct 02 12:15:56 compute-0 podman[300121]: 2025-10-02 12:15:56.205426381 +0000 UTC m=+0.140329692 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 12:15:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:56.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:56 compute-0 podman[300168]: 2025-10-02 12:15:56.318672406 +0000 UTC m=+0.054075031 container create f4272c4de92d5aba46fd0774462efd3abf154b9ee05b32d4f20f02c241212851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_faraday, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:15:56 compute-0 systemd[1]: Started libpod-conmon-f4272c4de92d5aba46fd0774462efd3abf154b9ee05b32d4f20f02c241212851.scope.
Oct 02 12:15:56 compute-0 podman[300168]: 2025-10-02 12:15:56.295367923 +0000 UTC m=+0.030770558 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:15:56 compute-0 nova_compute[257802]: 2025-10-02 12:15:56.403 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:15:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e5d283162a109774ba843695ab2130c56a220b6db26ed7edc7868282e75f56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:15:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e5d283162a109774ba843695ab2130c56a220b6db26ed7edc7868282e75f56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:15:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e5d283162a109774ba843695ab2130c56a220b6db26ed7edc7868282e75f56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:15:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e5d283162a109774ba843695ab2130c56a220b6db26ed7edc7868282e75f56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:15:56 compute-0 podman[300168]: 2025-10-02 12:15:56.433364187 +0000 UTC m=+0.168766842 container init f4272c4de92d5aba46fd0774462efd3abf154b9ee05b32d4f20f02c241212851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:15:56 compute-0 podman[300168]: 2025-10-02 12:15:56.44774612 +0000 UTC m=+0.183148745 container start f4272c4de92d5aba46fd0774462efd3abf154b9ee05b32d4f20f02c241212851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_faraday, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 12:15:56 compute-0 podman[300168]: 2025-10-02 12:15:56.452061036 +0000 UTC m=+0.187463691 container attach f4272c4de92d5aba46fd0774462efd3abf154b9ee05b32d4f20f02c241212851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_faraday, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:15:56 compute-0 nova_compute[257802]: 2025-10-02 12:15:56.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:56 compute-0 sudo[300190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:56 compute-0 sudo[300190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:56 compute-0 sudo[300190]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:56 compute-0 sudo[300215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:56 compute-0 sudo[300215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:56 compute-0 sudo[300215]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:57 compute-0 ceph-mon[73607]: pgmap v1539: 305 pgs: 305 active+clean; 121 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 550 KiB/s rd, 33 KiB/s wr, 117 op/s
Oct 02 12:15:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1540: 305 pgs: 305 active+clean; 140 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 506 KiB/s wr, 54 op/s
Oct 02 12:15:57 compute-0 stoic_faraday[300184]: {
Oct 02 12:15:57 compute-0 stoic_faraday[300184]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:15:57 compute-0 stoic_faraday[300184]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:15:57 compute-0 stoic_faraday[300184]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:15:57 compute-0 stoic_faraday[300184]:         "osd_id": 1,
Oct 02 12:15:57 compute-0 stoic_faraday[300184]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:15:57 compute-0 stoic_faraday[300184]:         "type": "bluestore"
Oct 02 12:15:57 compute-0 stoic_faraday[300184]:     }
Oct 02 12:15:57 compute-0 stoic_faraday[300184]: }
Oct 02 12:15:57 compute-0 systemd[1]: libpod-f4272c4de92d5aba46fd0774462efd3abf154b9ee05b32d4f20f02c241212851.scope: Deactivated successfully.
Oct 02 12:15:57 compute-0 podman[300168]: 2025-10-02 12:15:57.3097393 +0000 UTC m=+1.045141925 container died f4272c4de92d5aba46fd0774462efd3abf154b9ee05b32d4f20f02c241212851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 12:15:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-10e5d283162a109774ba843695ab2130c56a220b6db26ed7edc7868282e75f56-merged.mount: Deactivated successfully.
Oct 02 12:15:57 compute-0 podman[300168]: 2025-10-02 12:15:57.362267292 +0000 UTC m=+1.097669907 container remove f4272c4de92d5aba46fd0774462efd3abf154b9ee05b32d4f20f02c241212851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:15:57 compute-0 systemd[1]: libpod-conmon-f4272c4de92d5aba46fd0774462efd3abf154b9ee05b32d4f20f02c241212851.scope: Deactivated successfully.
Oct 02 12:15:57 compute-0 sudo[300033]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:15:57 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:15:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:15:57 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:15:57 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 5e5c702b-e56e-40f9-ac11-8c065e1962c8 does not exist
Oct 02 12:15:57 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 36367e24-1240-48c6-9160-dd652b717c11 does not exist
Oct 02 12:15:57 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 77af5441-3f61-404b-b830-1c1e298155b8 does not exist
Oct 02 12:15:57 compute-0 sudo[300268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:57 compute-0 sudo[300268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:57 compute-0 sudo[300268]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:57 compute-0 sudo[300293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:15:57 compute-0 sudo[300293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:57 compute-0 sudo[300293]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:15:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:58.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:15:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:15:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:58.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:58 compute-0 ceph-mon[73607]: pgmap v1540: 305 pgs: 305 active+clean; 140 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 506 KiB/s wr, 54 op/s
Oct 02 12:15:58 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:15:58 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:15:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3453082838' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 167 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 2.0 MiB/s wr, 41 op/s
Oct 02 12:15:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1066816460' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:59 compute-0 nova_compute[257802]: 2025-10-02 12:15:59.637 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407344.6360757, ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:15:59 compute-0 nova_compute[257802]: 2025-10-02 12:15:59.638 2 INFO nova.compute.manager [-] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] VM Stopped (Lifecycle Event)
Oct 02 12:15:59 compute-0 nova_compute[257802]: 2025-10-02 12:15:59.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:59 compute-0 nova_compute[257802]: 2025-10-02 12:15:59.672 2 DEBUG nova.compute.manager [None req-7dd16860-970e-4326-9c1b-238fddd150ee - - - - - -] [instance: ae9a5b8d-9dd3-48c0-9d35-e4b1d26a452a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:59 compute-0 nova_compute[257802]: 2025-10-02 12:15:59.850 2 DEBUG oslo_concurrency.lockutils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Acquiring lock "5ce6ada0-03ae-4871-998a-8781162b5d4f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:59 compute-0 nova_compute[257802]: 2025-10-02 12:15:59.851 2 DEBUG oslo_concurrency.lockutils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Lock "5ce6ada0-03ae-4871-998a-8781162b5d4f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:59 compute-0 nova_compute[257802]: 2025-10-02 12:15:59.896 2 DEBUG nova.compute.manager [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:16:00 compute-0 nova_compute[257802]: 2025-10-02 12:16:00.029 2 DEBUG oslo_concurrency.lockutils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:00 compute-0 nova_compute[257802]: 2025-10-02 12:16:00.030 2 DEBUG oslo_concurrency.lockutils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:00 compute-0 nova_compute[257802]: 2025-10-02 12:16:00.037 2 DEBUG nova.virt.hardware [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:16:00 compute-0 nova_compute[257802]: 2025-10-02 12:16:00.037 2 INFO nova.compute.claims [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:16:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:16:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:00.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:16:00 compute-0 nova_compute[257802]: 2025-10-02 12:16:00.176 2 DEBUG oslo_concurrency.processutils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:00.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:16:00 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2166802244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:00 compute-0 nova_compute[257802]: 2025-10-02 12:16:00.625 2 DEBUG oslo_concurrency.processutils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:00 compute-0 nova_compute[257802]: 2025-10-02 12:16:00.634 2 DEBUG nova.compute.provider_tree [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:16:00 compute-0 ceph-mon[73607]: pgmap v1541: 305 pgs: 305 active+clean; 167 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 2.0 MiB/s wr, 41 op/s
Oct 02 12:16:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2166802244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:00 compute-0 nova_compute[257802]: 2025-10-02 12:16:00.652 2 DEBUG nova.scheduler.client.report [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:16:00 compute-0 nova_compute[257802]: 2025-10-02 12:16:00.677 2 DEBUG oslo_concurrency.lockutils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:00 compute-0 nova_compute[257802]: 2025-10-02 12:16:00.678 2 DEBUG nova.compute.manager [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:16:00 compute-0 nova_compute[257802]: 2025-10-02 12:16:00.738 2 DEBUG nova.compute.manager [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 02 12:16:00 compute-0 nova_compute[257802]: 2025-10-02 12:16:00.796 2 INFO nova.virt.libvirt.driver [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:16:00 compute-0 nova_compute[257802]: 2025-10-02 12:16:00.824 2 DEBUG nova.compute.manager [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:16:00 compute-0 nova_compute[257802]: 2025-10-02 12:16:00.959 2 DEBUG nova.compute.manager [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:16:00 compute-0 nova_compute[257802]: 2025-10-02 12:16:00.962 2 DEBUG nova.virt.libvirt.driver [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:16:00 compute-0 nova_compute[257802]: 2025-10-02 12:16:00.963 2 INFO nova.virt.libvirt.driver [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Creating image(s)
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.007 2 DEBUG nova.storage.rbd_utils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] rbd image 5ce6ada0-03ae-4871-998a-8781162b5d4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.052 2 DEBUG nova.storage.rbd_utils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] rbd image 5ce6ada0-03ae-4871-998a-8781162b5d4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.109 2 DEBUG nova.storage.rbd_utils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] rbd image 5ce6ada0-03ae-4871-998a-8781162b5d4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.114 2 DEBUG oslo_concurrency.processutils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.192 2 DEBUG oslo_concurrency.processutils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.194 2 DEBUG oslo_concurrency.lockutils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.195 2 DEBUG oslo_concurrency.lockutils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.195 2 DEBUG oslo_concurrency.lockutils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.233 2 DEBUG nova.storage.rbd_utils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] rbd image 5ce6ada0-03ae-4871-998a-8781162b5d4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.238 2 DEBUG oslo_concurrency.processutils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 5ce6ada0-03ae-4871-998a-8781162b5d4f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1542: 305 pgs: 305 active+clean; 167 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.542 2 DEBUG oslo_concurrency.processutils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 5ce6ada0-03ae-4871-998a-8781162b5d4f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.628 2 DEBUG nova.storage.rbd_utils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] resizing rbd image 5ce6ada0-03ae-4871-998a-8781162b5d4f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.730 2 DEBUG nova.objects.instance [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Lazy-loading 'migration_context' on Instance uuid 5ce6ada0-03ae-4871-998a-8781162b5d4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.743 2 DEBUG nova.virt.libvirt.driver [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.744 2 DEBUG nova.virt.libvirt.driver [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Ensure instance console log exists: /var/lib/nova/instances/5ce6ada0-03ae-4871-998a-8781162b5d4f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.744 2 DEBUG oslo_concurrency.lockutils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.745 2 DEBUG oslo_concurrency.lockutils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.745 2 DEBUG oslo_concurrency.lockutils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.746 2 DEBUG nova.virt.libvirt.driver [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.750 2 WARNING nova.virt.libvirt.driver [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.754 2 DEBUG nova.virt.libvirt.host [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.755 2 DEBUG nova.virt.libvirt.host [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.757 2 DEBUG nova.virt.libvirt.host [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.758 2 DEBUG nova.virt.libvirt.host [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.759 2 DEBUG nova.virt.libvirt.driver [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.759 2 DEBUG nova.virt.hardware [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.760 2 DEBUG nova.virt.hardware [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.760 2 DEBUG nova.virt.hardware [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.760 2 DEBUG nova.virt.hardware [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.760 2 DEBUG nova.virt.hardware [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.761 2 DEBUG nova.virt.hardware [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.761 2 DEBUG nova.virt.hardware [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.761 2 DEBUG nova.virt.hardware [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.762 2 DEBUG nova.virt.hardware [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.762 2 DEBUG nova.virt.hardware [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.762 2 DEBUG nova.virt.hardware [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:16:01 compute-0 nova_compute[257802]: 2025-10-02 12:16:01.765 2 DEBUG oslo_concurrency.processutils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:02.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:16:02 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1632641500' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:02 compute-0 nova_compute[257802]: 2025-10-02 12:16:02.220 2 DEBUG oslo_concurrency.processutils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:02 compute-0 nova_compute[257802]: 2025-10-02 12:16:02.249 2 DEBUG nova.storage.rbd_utils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] rbd image 5ce6ada0-03ae-4871-998a-8781162b5d4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:02 compute-0 nova_compute[257802]: 2025-10-02 12:16:02.252 2 DEBUG oslo_concurrency.processutils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:16:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:02.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:16:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:16:02 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/730470288' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:02 compute-0 ceph-mon[73607]: pgmap v1542: 305 pgs: 305 active+clean; 167 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 02 12:16:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1632641500' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/730470288' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:02 compute-0 nova_compute[257802]: 2025-10-02 12:16:02.657 2 DEBUG oslo_concurrency.processutils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:02 compute-0 nova_compute[257802]: 2025-10-02 12:16:02.660 2 DEBUG nova.objects.instance [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5ce6ada0-03ae-4871-998a-8781162b5d4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:16:02 compute-0 nova_compute[257802]: 2025-10-02 12:16:02.680 2 DEBUG nova.virt.libvirt.driver [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:16:02 compute-0 nova_compute[257802]:   <uuid>5ce6ada0-03ae-4871-998a-8781162b5d4f</uuid>
Oct 02 12:16:02 compute-0 nova_compute[257802]:   <name>instance-00000045</name>
Oct 02 12:16:02 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:16:02 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:16:02 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:16:02 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:       <nova:name>tempest-ServersAaction247Test-server-229731182</nova:name>
Oct 02 12:16:02 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:16:01</nova:creationTime>
Oct 02 12:16:02 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:16:02 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:16:02 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:16:02 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:16:02 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:16:02 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:16:02 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:16:02 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:16:02 compute-0 nova_compute[257802]:         <nova:user uuid="a77fa2684bba41bb93c60fc0f0279f5f">tempest-ServersAaction247Test-1223521410-project-member</nova:user>
Oct 02 12:16:02 compute-0 nova_compute[257802]:         <nova:project uuid="6796bbb960be4c70b7177bff719e7ac0">tempest-ServersAaction247Test-1223521410</nova:project>
Oct 02 12:16:02 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:16:02 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:       <nova:ports/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:16:02 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:16:02 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <system>
Oct 02 12:16:02 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:16:02 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:16:02 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:16:02 compute-0 nova_compute[257802]:       <entry name="serial">5ce6ada0-03ae-4871-998a-8781162b5d4f</entry>
Oct 02 12:16:02 compute-0 nova_compute[257802]:       <entry name="uuid">5ce6ada0-03ae-4871-998a-8781162b5d4f</entry>
Oct 02 12:16:02 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     </system>
Oct 02 12:16:02 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:16:02 compute-0 nova_compute[257802]:   <os>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:   </os>
Oct 02 12:16:02 compute-0 nova_compute[257802]:   <features>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:   </features>
Oct 02 12:16:02 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:16:02 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:16:02 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:16:02 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/5ce6ada0-03ae-4871-998a-8781162b5d4f_disk">
Oct 02 12:16:02 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:       </source>
Oct 02 12:16:02 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:16:02 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:16:02 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:16:02 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/5ce6ada0-03ae-4871-998a-8781162b5d4f_disk.config">
Oct 02 12:16:02 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:       </source>
Oct 02 12:16:02 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:16:02 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:16:02 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:16:02 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/5ce6ada0-03ae-4871-998a-8781162b5d4f/console.log" append="off"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <video>
Oct 02 12:16:02 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     </video>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:16:02 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:16:02 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:16:02 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:16:02 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:16:02 compute-0 nova_compute[257802]: </domain>
Oct 02 12:16:02 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:16:02 compute-0 nova_compute[257802]: 2025-10-02 12:16:02.760 2 DEBUG nova.virt.libvirt.driver [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:16:02 compute-0 nova_compute[257802]: 2025-10-02 12:16:02.760 2 DEBUG nova.virt.libvirt.driver [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:16:02 compute-0 nova_compute[257802]: 2025-10-02 12:16:02.761 2 INFO nova.virt.libvirt.driver [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Using config drive
Oct 02 12:16:02 compute-0 nova_compute[257802]: 2025-10-02 12:16:02.783 2 DEBUG nova.storage.rbd_utils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] rbd image 5ce6ada0-03ae-4871-998a-8781162b5d4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:03 compute-0 nova_compute[257802]: 2025-10-02 12:16:03.049 2 INFO nova.virt.libvirt.driver [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Creating config drive at /var/lib/nova/instances/5ce6ada0-03ae-4871-998a-8781162b5d4f/disk.config
Oct 02 12:16:03 compute-0 nova_compute[257802]: 2025-10-02 12:16:03.056 2 DEBUG oslo_concurrency.processutils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5ce6ada0-03ae-4871-998a-8781162b5d4f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphe8x8ft7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:03 compute-0 nova_compute[257802]: 2025-10-02 12:16:03.191 2 DEBUG oslo_concurrency.processutils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5ce6ada0-03ae-4871-998a-8781162b5d4f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphe8x8ft7" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:03 compute-0 nova_compute[257802]: 2025-10-02 12:16:03.219 2 DEBUG nova.storage.rbd_utils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] rbd image 5ce6ada0-03ae-4871-998a-8781162b5d4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:03 compute-0 nova_compute[257802]: 2025-10-02 12:16:03.222 2 DEBUG oslo_concurrency.processutils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5ce6ada0-03ae-4871-998a-8781162b5d4f/disk.config 5ce6ada0-03ae-4871-998a-8781162b5d4f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 305 active+clean; 195 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.8 MiB/s wr, 55 op/s
Oct 02 12:16:03 compute-0 nova_compute[257802]: 2025-10-02 12:16:03.508 2 DEBUG oslo_concurrency.processutils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5ce6ada0-03ae-4871-998a-8781162b5d4f/disk.config 5ce6ada0-03ae-4871-998a-8781162b5d4f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:03 compute-0 nova_compute[257802]: 2025-10-02 12:16:03.509 2 INFO nova.virt.libvirt.driver [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Deleting local config drive /var/lib/nova/instances/5ce6ada0-03ae-4871-998a-8781162b5d4f/disk.config because it was imported into RBD.
Oct 02 12:16:03 compute-0 systemd-machined[211836]: New machine qemu-32-instance-00000045.
Oct 02 12:16:03 compute-0 systemd[1]: Started Virtual Machine qemu-32-instance-00000045.
Oct 02 12:16:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:04.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:04.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:04 compute-0 nova_compute[257802]: 2025-10-02 12:16:04.662 2 DEBUG nova.compute.manager [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:16:04 compute-0 nova_compute[257802]: 2025-10-02 12:16:04.663 2 DEBUG nova.virt.libvirt.driver [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:16:04 compute-0 nova_compute[257802]: 2025-10-02 12:16:04.663 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407364.6619797, 5ce6ada0-03ae-4871-998a-8781162b5d4f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:16:04 compute-0 nova_compute[257802]: 2025-10-02 12:16:04.665 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] VM Resumed (Lifecycle Event)
Oct 02 12:16:04 compute-0 nova_compute[257802]: 2025-10-02 12:16:04.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:04 compute-0 nova_compute[257802]: 2025-10-02 12:16:04.672 2 INFO nova.virt.libvirt.driver [-] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Instance spawned successfully.
Oct 02 12:16:04 compute-0 nova_compute[257802]: 2025-10-02 12:16:04.672 2 DEBUG nova.virt.libvirt.driver [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:16:04 compute-0 nova_compute[257802]: 2025-10-02 12:16:04.697 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:04 compute-0 nova_compute[257802]: 2025-10-02 12:16:04.702 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:16:04 compute-0 nova_compute[257802]: 2025-10-02 12:16:04.705 2 DEBUG nova.virt.libvirt.driver [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:16:04 compute-0 nova_compute[257802]: 2025-10-02 12:16:04.705 2 DEBUG nova.virt.libvirt.driver [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:16:04 compute-0 nova_compute[257802]: 2025-10-02 12:16:04.706 2 DEBUG nova.virt.libvirt.driver [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:16:04 compute-0 nova_compute[257802]: 2025-10-02 12:16:04.706 2 DEBUG nova.virt.libvirt.driver [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:16:04 compute-0 nova_compute[257802]: 2025-10-02 12:16:04.706 2 DEBUG nova.virt.libvirt.driver [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:16:04 compute-0 nova_compute[257802]: 2025-10-02 12:16:04.707 2 DEBUG nova.virt.libvirt.driver [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:16:04 compute-0 nova_compute[257802]: 2025-10-02 12:16:04.750 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:16:04 compute-0 nova_compute[257802]: 2025-10-02 12:16:04.750 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407364.664059, 5ce6ada0-03ae-4871-998a-8781162b5d4f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:16:04 compute-0 nova_compute[257802]: 2025-10-02 12:16:04.751 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] VM Started (Lifecycle Event)
Oct 02 12:16:04 compute-0 nova_compute[257802]: 2025-10-02 12:16:04.782 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:04 compute-0 nova_compute[257802]: 2025-10-02 12:16:04.785 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:16:04 compute-0 nova_compute[257802]: 2025-10-02 12:16:04.798 2 INFO nova.compute.manager [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Took 3.84 seconds to spawn the instance on the hypervisor.
Oct 02 12:16:04 compute-0 nova_compute[257802]: 2025-10-02 12:16:04.798 2 DEBUG nova.compute.manager [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:04 compute-0 nova_compute[257802]: 2025-10-02 12:16:04.835 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:16:04 compute-0 ceph-mon[73607]: pgmap v1543: 305 pgs: 305 active+clean; 195 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.8 MiB/s wr, 55 op/s
Oct 02 12:16:05 compute-0 nova_compute[257802]: 2025-10-02 12:16:05.044 2 INFO nova.compute.manager [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Took 5.06 seconds to build instance.
Oct 02 12:16:05 compute-0 nova_compute[257802]: 2025-10-02 12:16:05.068 2 DEBUG oslo_concurrency.lockutils [None req-52235a42-bdce-48fa-855e-360398f16c22 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Lock "5ce6ada0-03ae-4871-998a-8781162b5d4f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.217s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1544: 305 pgs: 305 active+clean; 198 MiB data, 661 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.9 MiB/s wr, 90 op/s
Oct 02 12:16:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:06.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:16:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:06.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:16:06 compute-0 nova_compute[257802]: 2025-10-02 12:16:06.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 305 active+clean; 213 MiB data, 669 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.6 MiB/s wr, 123 op/s
Oct 02 12:16:07 compute-0 ceph-mon[73607]: pgmap v1544: 305 pgs: 305 active+clean; 198 MiB data, 661 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.9 MiB/s wr, 90 op/s
Oct 02 12:16:07 compute-0 nova_compute[257802]: 2025-10-02 12:16:07.633 2 DEBUG nova.compute.manager [None req-764e2f96-8a61-4bde-bc50-e4c1eedde7b5 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:07 compute-0 nova_compute[257802]: 2025-10-02 12:16:07.684 2 INFO nova.compute.manager [None req-764e2f96-8a61-4bde-bc50-e4c1eedde7b5 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] instance snapshotting
Oct 02 12:16:07 compute-0 nova_compute[257802]: 2025-10-02 12:16:07.685 2 DEBUG nova.objects.instance [None req-764e2f96-8a61-4bde-bc50-e4c1eedde7b5 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Lazy-loading 'flavor' on Instance uuid 5ce6ada0-03ae-4871-998a-8781162b5d4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:16:07 compute-0 nova_compute[257802]: 2025-10-02 12:16:07.766 2 DEBUG oslo_concurrency.lockutils [None req-386eda36-5846-4d0a-bf41-60a19cfeb132 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Acquiring lock "5ce6ada0-03ae-4871-998a-8781162b5d4f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:07 compute-0 nova_compute[257802]: 2025-10-02 12:16:07.767 2 DEBUG oslo_concurrency.lockutils [None req-386eda36-5846-4d0a-bf41-60a19cfeb132 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Lock "5ce6ada0-03ae-4871-998a-8781162b5d4f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:07 compute-0 nova_compute[257802]: 2025-10-02 12:16:07.767 2 DEBUG oslo_concurrency.lockutils [None req-386eda36-5846-4d0a-bf41-60a19cfeb132 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Acquiring lock "5ce6ada0-03ae-4871-998a-8781162b5d4f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:07 compute-0 nova_compute[257802]: 2025-10-02 12:16:07.768 2 DEBUG oslo_concurrency.lockutils [None req-386eda36-5846-4d0a-bf41-60a19cfeb132 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Lock "5ce6ada0-03ae-4871-998a-8781162b5d4f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:07 compute-0 nova_compute[257802]: 2025-10-02 12:16:07.768 2 DEBUG oslo_concurrency.lockutils [None req-386eda36-5846-4d0a-bf41-60a19cfeb132 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Lock "5ce6ada0-03ae-4871-998a-8781162b5d4f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:07 compute-0 nova_compute[257802]: 2025-10-02 12:16:07.770 2 INFO nova.compute.manager [None req-386eda36-5846-4d0a-bf41-60a19cfeb132 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Terminating instance
Oct 02 12:16:07 compute-0 nova_compute[257802]: 2025-10-02 12:16:07.771 2 DEBUG oslo_concurrency.lockutils [None req-386eda36-5846-4d0a-bf41-60a19cfeb132 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Acquiring lock "refresh_cache-5ce6ada0-03ae-4871-998a-8781162b5d4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:16:07 compute-0 nova_compute[257802]: 2025-10-02 12:16:07.771 2 DEBUG oslo_concurrency.lockutils [None req-386eda36-5846-4d0a-bf41-60a19cfeb132 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Acquired lock "refresh_cache-5ce6ada0-03ae-4871-998a-8781162b5d4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:16:07 compute-0 nova_compute[257802]: 2025-10-02 12:16:07.771 2 DEBUG nova.network.neutron [None req-386eda36-5846-4d0a-bf41-60a19cfeb132 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:16:08 compute-0 nova_compute[257802]: 2025-10-02 12:16:08.023 2 DEBUG nova.network.neutron [None req-386eda36-5846-4d0a-bf41-60a19cfeb132 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:16:08 compute-0 nova_compute[257802]: 2025-10-02 12:16:08.062 2 INFO nova.virt.libvirt.driver [None req-764e2f96-8a61-4bde-bc50-e4c1eedde7b5 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Beginning live snapshot process
Oct 02 12:16:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:08.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:08 compute-0 nova_compute[257802]: 2025-10-02 12:16:08.117 2 DEBUG nova.compute.manager [None req-764e2f96-8a61-4bde-bc50-e4c1eedde7b5 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Instance disappeared during snapshot _snapshot_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:4390
Oct 02 12:16:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:16:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:08.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:16:08 compute-0 nova_compute[257802]: 2025-10-02 12:16:08.426 2 DEBUG nova.network.neutron [None req-386eda36-5846-4d0a-bf41-60a19cfeb132 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:16:08 compute-0 nova_compute[257802]: 2025-10-02 12:16:08.440 2 DEBUG oslo_concurrency.lockutils [None req-386eda36-5846-4d0a-bf41-60a19cfeb132 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Releasing lock "refresh_cache-5ce6ada0-03ae-4871-998a-8781162b5d4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:16:08 compute-0 nova_compute[257802]: 2025-10-02 12:16:08.440 2 DEBUG nova.compute.manager [None req-386eda36-5846-4d0a-bf41-60a19cfeb132 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:16:08 compute-0 ceph-mon[73607]: pgmap v1545: 305 pgs: 305 active+clean; 213 MiB data, 669 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.6 MiB/s wr, 123 op/s
Oct 02 12:16:08 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d00000045.scope: Deactivated successfully.
Oct 02 12:16:08 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d00000045.scope: Consumed 4.917s CPU time.
Oct 02 12:16:08 compute-0 systemd-machined[211836]: Machine qemu-32-instance-00000045 terminated.
Oct 02 12:16:08 compute-0 nova_compute[257802]: 2025-10-02 12:16:08.662 2 INFO nova.virt.libvirt.driver [-] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Instance destroyed successfully.
Oct 02 12:16:08 compute-0 nova_compute[257802]: 2025-10-02 12:16:08.663 2 DEBUG nova.objects.instance [None req-386eda36-5846-4d0a-bf41-60a19cfeb132 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Lazy-loading 'resources' on Instance uuid 5ce6ada0-03ae-4871-998a-8781162b5d4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:16:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:08 compute-0 nova_compute[257802]: 2025-10-02 12:16:08.945 2 DEBUG nova.compute.manager [None req-764e2f96-8a61-4bde-bc50-e4c1eedde7b5 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Found 0 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Oct 02 12:16:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1546: 305 pgs: 305 active+clean; 158 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 4.4 MiB/s wr, 241 op/s
Oct 02 12:16:09 compute-0 nova_compute[257802]: 2025-10-02 12:16:09.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3560707684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:16:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:10.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:16:10 compute-0 nova_compute[257802]: 2025-10-02 12:16:10.169 2 INFO nova.virt.libvirt.driver [None req-386eda36-5846-4d0a-bf41-60a19cfeb132 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Deleting instance files /var/lib/nova/instances/5ce6ada0-03ae-4871-998a-8781162b5d4f_del
Oct 02 12:16:10 compute-0 nova_compute[257802]: 2025-10-02 12:16:10.169 2 INFO nova.virt.libvirt.driver [None req-386eda36-5846-4d0a-bf41-60a19cfeb132 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Deletion of /var/lib/nova/instances/5ce6ada0-03ae-4871-998a-8781162b5d4f_del complete
Oct 02 12:16:10 compute-0 nova_compute[257802]: 2025-10-02 12:16:10.249 2 INFO nova.compute.manager [None req-386eda36-5846-4d0a-bf41-60a19cfeb132 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Took 1.81 seconds to destroy the instance on the hypervisor.
Oct 02 12:16:10 compute-0 nova_compute[257802]: 2025-10-02 12:16:10.250 2 DEBUG oslo.service.loopingcall [None req-386eda36-5846-4d0a-bf41-60a19cfeb132 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:16:10 compute-0 nova_compute[257802]: 2025-10-02 12:16:10.250 2 DEBUG nova.compute.manager [-] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:16:10 compute-0 nova_compute[257802]: 2025-10-02 12:16:10.250 2 DEBUG nova.network.neutron [-] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:16:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:10.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:10 compute-0 nova_compute[257802]: 2025-10-02 12:16:10.383 2 DEBUG nova.network.neutron [-] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:16:10 compute-0 nova_compute[257802]: 2025-10-02 12:16:10.401 2 DEBUG nova.network.neutron [-] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:16:10 compute-0 nova_compute[257802]: 2025-10-02 12:16:10.419 2 INFO nova.compute.manager [-] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Took 0.17 seconds to deallocate network for instance.
Oct 02 12:16:10 compute-0 nova_compute[257802]: 2025-10-02 12:16:10.479 2 DEBUG oslo_concurrency.lockutils [None req-386eda36-5846-4d0a-bf41-60a19cfeb132 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:10 compute-0 nova_compute[257802]: 2025-10-02 12:16:10.480 2 DEBUG oslo_concurrency.lockutils [None req-386eda36-5846-4d0a-bf41-60a19cfeb132 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:10 compute-0 nova_compute[257802]: 2025-10-02 12:16:10.534 2 DEBUG oslo_concurrency.processutils [None req-386eda36-5846-4d0a-bf41-60a19cfeb132 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:10 compute-0 ceph-mon[73607]: pgmap v1546: 305 pgs: 305 active+clean; 158 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 4.4 MiB/s wr, 241 op/s
Oct 02 12:16:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:16:11 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3298619943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:11 compute-0 nova_compute[257802]: 2025-10-02 12:16:11.028 2 DEBUG oslo_concurrency.processutils [None req-386eda36-5846-4d0a-bf41-60a19cfeb132 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:11 compute-0 nova_compute[257802]: 2025-10-02 12:16:11.035 2 DEBUG nova.compute.provider_tree [None req-386eda36-5846-4d0a-bf41-60a19cfeb132 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:16:11 compute-0 nova_compute[257802]: 2025-10-02 12:16:11.049 2 DEBUG nova.scheduler.client.report [None req-386eda36-5846-4d0a-bf41-60a19cfeb132 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:16:11 compute-0 nova_compute[257802]: 2025-10-02 12:16:11.067 2 DEBUG oslo_concurrency.lockutils [None req-386eda36-5846-4d0a-bf41-60a19cfeb132 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:11 compute-0 nova_compute[257802]: 2025-10-02 12:16:11.093 2 INFO nova.scheduler.client.report [None req-386eda36-5846-4d0a-bf41-60a19cfeb132 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Deleted allocations for instance 5ce6ada0-03ae-4871-998a-8781162b5d4f
Oct 02 12:16:11 compute-0 nova_compute[257802]: 2025-10-02 12:16:11.165 2 DEBUG oslo_concurrency.lockutils [None req-386eda36-5846-4d0a-bf41-60a19cfeb132 a77fa2684bba41bb93c60fc0f0279f5f 6796bbb960be4c70b7177bff719e7ac0 - - default default] Lock "5ce6ada0-03ae-4871-998a-8781162b5d4f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.398s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1547: 305 pgs: 305 active+clean; 116 MiB data, 612 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.6 MiB/s wr, 282 op/s
Oct 02 12:16:11 compute-0 nova_compute[257802]: 2025-10-02 12:16:11.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3298619943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3663886084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:12.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:12.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:16:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:16:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:16:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:16:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:16:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:16:13 compute-0 ceph-mon[73607]: pgmap v1547: 305 pgs: 305 active+clean; 116 MiB data, 612 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.6 MiB/s wr, 282 op/s
Oct 02 12:16:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1548: 305 pgs: 305 active+clean; 88 MiB data, 606 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.6 MiB/s wr, 295 op/s
Oct 02 12:16:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:14.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:14.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:14 compute-0 nova_compute[257802]: 2025-10-02 12:16:14.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:15 compute-0 ceph-mon[73607]: pgmap v1548: 305 pgs: 305 active+clean; 88 MiB data, 606 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.6 MiB/s wr, 295 op/s
Oct 02 12:16:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1549: 305 pgs: 305 active+clean; 88 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 2.5 MiB/s wr, 269 op/s
Oct 02 12:16:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:16.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:16.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:16 compute-0 nova_compute[257802]: 2025-10-02 12:16:16.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:16 compute-0 podman[300739]: 2025-10-02 12:16:16.938611929 +0000 UTC m=+0.075355474 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 12:16:16 compute-0 sudo[300756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:16 compute-0 sudo[300756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:16 compute-0 sudo[300756]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:17 compute-0 sudo[300783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:17 compute-0 sudo[300783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:17 compute-0 sudo[300783]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:17 compute-0 ceph-mon[73607]: pgmap v1549: 305 pgs: 305 active+clean; 88 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 2.5 MiB/s wr, 269 op/s
Oct 02 12:16:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1550: 305 pgs: 305 active+clean; 88 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.4 MiB/s wr, 234 op/s
Oct 02 12:16:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:16:17.861 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:16:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:16:17.862 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:16:17 compute-0 nova_compute[257802]: 2025-10-02 12:16:17.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:18.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:18.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:19 compute-0 ceph-mon[73607]: pgmap v1550: 305 pgs: 305 active+clean; 88 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.4 MiB/s wr, 234 op/s
Oct 02 12:16:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 88 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 202 op/s
Oct 02 12:16:19 compute-0 nova_compute[257802]: 2025-10-02 12:16:19.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:16:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:20.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:16:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:16:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:20.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:16:21 compute-0 ceph-mon[73607]: pgmap v1551: 305 pgs: 305 active+clean; 88 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 202 op/s
Oct 02 12:16:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 305 active+clean; 88 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 645 KiB/s rd, 609 KiB/s wr, 72 op/s
Oct 02 12:16:21 compute-0 nova_compute[257802]: 2025-10-02 12:16:21.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:21 compute-0 podman[300810]: 2025-10-02 12:16:21.936759604 +0000 UTC m=+0.069031368 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:16:21 compute-0 podman[300811]: 2025-10-02 12:16:21.959638927 +0000 UTC m=+0.092840714 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 12:16:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:22.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:16:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:22.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:16:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Oct 02 12:16:23 compute-0 ceph-mon[73607]: pgmap v1552: 305 pgs: 305 active+clean; 88 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 645 KiB/s rd, 609 KiB/s wr, 72 op/s
Oct 02 12:16:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Oct 02 12:16:23 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Oct 02 12:16:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 88 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 1.3 KiB/s rd, 614 B/s wr, 2 op/s
Oct 02 12:16:23 compute-0 nova_compute[257802]: 2025-10-02 12:16:23.661 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407368.659598, 5ce6ada0-03ae-4871-998a-8781162b5d4f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:16:23 compute-0 nova_compute[257802]: 2025-10-02 12:16:23.661 2 INFO nova.compute.manager [-] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] VM Stopped (Lifecycle Event)
Oct 02 12:16:23 compute-0 nova_compute[257802]: 2025-10-02 12:16:23.688 2 DEBUG nova.compute.manager [None req-c0b084c3-1802-424d-8a29-6179df2bb88a - - - - - -] [instance: 5ce6ada0-03ae-4871-998a-8781162b5d4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:16:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:24.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:16:24 compute-0 ceph-mon[73607]: osdmap e221: 3 total, 3 up, 3 in
Oct 02 12:16:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/758705718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:24.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:24 compute-0 nova_compute[257802]: 2025-10-02 12:16:24.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:25 compute-0 ceph-mon[73607]: pgmap v1554: 305 pgs: 305 active+clean; 88 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 1.3 KiB/s rd, 614 B/s wr, 2 op/s
Oct 02 12:16:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1555: 305 pgs: 305 active+clean; 88 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 2.1 KiB/s rd, 818 B/s wr, 3 op/s
Oct 02 12:16:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:26.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Oct 02 12:16:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Oct 02 12:16:26 compute-0 ceph-mon[73607]: pgmap v1555: 305 pgs: 305 active+clean; 88 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 2.1 KiB/s rd, 818 B/s wr, 3 op/s
Oct 02 12:16:26 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Oct 02 12:16:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:26.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:26 compute-0 nova_compute[257802]: 2025-10-02 12:16:26.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:16:26.863 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:16:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:16:26.933 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:16:26.933 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:16:26.934 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:26 compute-0 podman[300851]: 2025-10-02 12:16:26.940664829 +0000 UTC m=+0.085407561 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 12:16:27 compute-0 ceph-mon[73607]: osdmap e222: 3 total, 3 up, 3 in
Oct 02 12:16:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 305 active+clean; 96 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 345 KiB/s wr, 35 op/s
Oct 02 12:16:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:28.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:28 compute-0 ceph-mon[73607]: pgmap v1557: 305 pgs: 305 active+clean; 96 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 345 KiB/s wr, 35 op/s
Oct 02 12:16:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:28.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 305 active+clean; 134 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 2.7 MiB/s wr, 87 op/s
Oct 02 12:16:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Oct 02 12:16:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Oct 02 12:16:29 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Oct 02 12:16:29 compute-0 nova_compute[257802]: 2025-10-02 12:16:29.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:16:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:30.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:16:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:30.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:30 compute-0 ceph-mon[73607]: pgmap v1558: 305 pgs: 305 active+clean; 134 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 2.7 MiB/s wr, 87 op/s
Oct 02 12:16:30 compute-0 ceph-mon[73607]: osdmap e223: 3 total, 3 up, 3 in
Oct 02 12:16:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4277439159' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/435369187' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1560: 305 pgs: 305 active+clean; 134 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 2.7 MiB/s wr, 89 op/s
Oct 02 12:16:31 compute-0 nova_compute[257802]: 2025-10-02 12:16:31.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:31 compute-0 nova_compute[257802]: 2025-10-02 12:16:31.894 2 DEBUG oslo_concurrency.lockutils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Acquiring lock "4ef4cd21-73d9-4ceb-8bd5-316a831e8e92" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:31 compute-0 nova_compute[257802]: 2025-10-02 12:16:31.895 2 DEBUG oslo_concurrency.lockutils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lock "4ef4cd21-73d9-4ceb-8bd5-316a831e8e92" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:31 compute-0 nova_compute[257802]: 2025-10-02 12:16:31.916 2 DEBUG nova.compute.manager [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:16:32 compute-0 nova_compute[257802]: 2025-10-02 12:16:32.005 2 DEBUG oslo_concurrency.lockutils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:32 compute-0 nova_compute[257802]: 2025-10-02 12:16:32.006 2 DEBUG oslo_concurrency.lockutils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:16:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:32.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:16:32 compute-0 nova_compute[257802]: 2025-10-02 12:16:32.193 2 DEBUG nova.virt.hardware [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:16:32 compute-0 nova_compute[257802]: 2025-10-02 12:16:32.194 2 INFO nova.compute.claims [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:16:32 compute-0 nova_compute[257802]: 2025-10-02 12:16:32.346 2 DEBUG oslo_concurrency.processutils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:32.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:32 compute-0 ceph-mon[73607]: pgmap v1560: 305 pgs: 305 active+clean; 134 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 2.7 MiB/s wr, 89 op/s
Oct 02 12:16:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:16:32 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1976765985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:32 compute-0 nova_compute[257802]: 2025-10-02 12:16:32.805 2 DEBUG oslo_concurrency.processutils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:32 compute-0 nova_compute[257802]: 2025-10-02 12:16:32.811 2 DEBUG nova.compute.provider_tree [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:16:32 compute-0 nova_compute[257802]: 2025-10-02 12:16:32.853 2 DEBUG nova.scheduler.client.report [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:16:32 compute-0 nova_compute[257802]: 2025-10-02 12:16:32.894 2 DEBUG oslo_concurrency.lockutils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.888s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:32 compute-0 nova_compute[257802]: 2025-10-02 12:16:32.894 2 DEBUG nova.compute.manager [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:16:32 compute-0 nova_compute[257802]: 2025-10-02 12:16:32.960 2 DEBUG nova.compute.manager [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:16:32 compute-0 nova_compute[257802]: 2025-10-02 12:16:32.960 2 DEBUG nova.network.neutron [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:16:32 compute-0 nova_compute[257802]: 2025-10-02 12:16:32.991 2 INFO nova.virt.libvirt.driver [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.008 2 DEBUG nova.compute.manager [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.093 2 DEBUG nova.compute.manager [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.094 2 DEBUG nova.virt.libvirt.driver [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.094 2 INFO nova.virt.libvirt.driver [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Creating image(s)
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.117 2 DEBUG nova.storage.rbd_utils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] rbd image 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.141 2 DEBUG nova.storage.rbd_utils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] rbd image 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.163 2 DEBUG nova.storage.rbd_utils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] rbd image 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.166 2 DEBUG oslo_concurrency.processutils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.229 2 DEBUG oslo_concurrency.processutils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.229 2 DEBUG oslo_concurrency.lockutils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.230 2 DEBUG oslo_concurrency.lockutils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.230 2 DEBUG oslo_concurrency.lockutils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.254 2 DEBUG nova.storage.rbd_utils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] rbd image 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.257 2 DEBUG oslo_concurrency.processutils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1561: 305 pgs: 305 active+clean; 134 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 2.7 MiB/s wr, 96 op/s
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.461 2 DEBUG nova.network.neutron [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.461 2 DEBUG nova.compute.manager [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.519 2 DEBUG oslo_concurrency.processutils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1976765985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.599 2 DEBUG nova.storage.rbd_utils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] resizing rbd image 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:16:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.704 2 DEBUG nova.objects.instance [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lazy-loading 'migration_context' on Instance uuid 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.719 2 DEBUG nova.virt.libvirt.driver [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.720 2 DEBUG nova.virt.libvirt.driver [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Ensure instance console log exists: /var/lib/nova/instances/4ef4cd21-73d9-4ceb-8bd5-316a831e8e92/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.720 2 DEBUG oslo_concurrency.lockutils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.721 2 DEBUG oslo_concurrency.lockutils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.721 2 DEBUG oslo_concurrency.lockutils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.723 2 DEBUG nova.virt.libvirt.driver [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.727 2 WARNING nova.virt.libvirt.driver [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.731 2 DEBUG nova.virt.libvirt.host [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.732 2 DEBUG nova.virt.libvirt.host [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.735 2 DEBUG nova.virt.libvirt.host [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.735 2 DEBUG nova.virt.libvirt.host [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.736 2 DEBUG nova.virt.libvirt.driver [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.737 2 DEBUG nova.virt.hardware [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.737 2 DEBUG nova.virt.hardware [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.738 2 DEBUG nova.virt.hardware [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.738 2 DEBUG nova.virt.hardware [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.738 2 DEBUG nova.virt.hardware [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.739 2 DEBUG nova.virt.hardware [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.739 2 DEBUG nova.virt.hardware [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.739 2 DEBUG nova.virt.hardware [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.740 2 DEBUG nova.virt.hardware [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.740 2 DEBUG nova.virt.hardware [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.740 2 DEBUG nova.virt.hardware [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:16:33 compute-0 nova_compute[257802]: 2025-10-02 12:16:33.743 2 DEBUG oslo_concurrency.processutils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:16:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:34.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:16:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:16:34 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/736048632' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:34 compute-0 nova_compute[257802]: 2025-10-02 12:16:34.210 2 DEBUG oslo_concurrency.processutils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:34 compute-0 nova_compute[257802]: 2025-10-02 12:16:34.241 2 DEBUG nova.storage.rbd_utils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] rbd image 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:34 compute-0 nova_compute[257802]: 2025-10-02 12:16:34.245 2 DEBUG oslo_concurrency.processutils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:34.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:34 compute-0 ceph-mon[73607]: pgmap v1561: 305 pgs: 305 active+clean; 134 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 2.7 MiB/s wr, 96 op/s
Oct 02 12:16:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/736048632' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/37436492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:16:34 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/690857582' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:34 compute-0 nova_compute[257802]: 2025-10-02 12:16:34.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:34 compute-0 nova_compute[257802]: 2025-10-02 12:16:34.680 2 DEBUG oslo_concurrency.processutils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:34 compute-0 nova_compute[257802]: 2025-10-02 12:16:34.682 2 DEBUG nova.objects.instance [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:16:34 compute-0 nova_compute[257802]: 2025-10-02 12:16:34.696 2 DEBUG nova.virt.libvirt.driver [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:16:34 compute-0 nova_compute[257802]:   <uuid>4ef4cd21-73d9-4ceb-8bd5-316a831e8e92</uuid>
Oct 02 12:16:34 compute-0 nova_compute[257802]:   <name>instance-00000047</name>
Oct 02 12:16:34 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:16:34 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:16:34 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:16:34 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:       <nova:name>tempest-ListImageFiltersTestJSON-server-1159358313</nova:name>
Oct 02 12:16:34 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:16:33</nova:creationTime>
Oct 02 12:16:34 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:16:34 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:16:34 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:16:34 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:16:34 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:16:34 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:16:34 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:16:34 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:16:34 compute-0 nova_compute[257802]:         <nova:user uuid="03f516d263c8402682568b55b658f885">tempest-ListImageFiltersTestJSON-713501412-project-member</nova:user>
Oct 02 12:16:34 compute-0 nova_compute[257802]:         <nova:project uuid="1f9bd65bc7864ca18e1c478ef7e03926">tempest-ListImageFiltersTestJSON-713501412</nova:project>
Oct 02 12:16:34 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:16:34 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:       <nova:ports/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:16:34 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:16:34 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <system>
Oct 02 12:16:34 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:16:34 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:16:34 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:16:34 compute-0 nova_compute[257802]:       <entry name="serial">4ef4cd21-73d9-4ceb-8bd5-316a831e8e92</entry>
Oct 02 12:16:34 compute-0 nova_compute[257802]:       <entry name="uuid">4ef4cd21-73d9-4ceb-8bd5-316a831e8e92</entry>
Oct 02 12:16:34 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     </system>
Oct 02 12:16:34 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:16:34 compute-0 nova_compute[257802]:   <os>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:   </os>
Oct 02 12:16:34 compute-0 nova_compute[257802]:   <features>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:   </features>
Oct 02 12:16:34 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:16:34 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:16:34 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:16:34 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/4ef4cd21-73d9-4ceb-8bd5-316a831e8e92_disk">
Oct 02 12:16:34 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:       </source>
Oct 02 12:16:34 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:16:34 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:16:34 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:16:34 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/4ef4cd21-73d9-4ceb-8bd5-316a831e8e92_disk.config">
Oct 02 12:16:34 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:       </source>
Oct 02 12:16:34 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:16:34 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:16:34 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:16:34 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/4ef4cd21-73d9-4ceb-8bd5-316a831e8e92/console.log" append="off"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <video>
Oct 02 12:16:34 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     </video>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:16:34 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:16:34 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:16:34 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:16:34 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:16:34 compute-0 nova_compute[257802]: </domain>
Oct 02 12:16:34 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:16:34 compute-0 nova_compute[257802]: 2025-10-02 12:16:34.783 2 DEBUG nova.virt.libvirt.driver [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:16:34 compute-0 nova_compute[257802]: 2025-10-02 12:16:34.783 2 DEBUG nova.virt.libvirt.driver [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:16:34 compute-0 nova_compute[257802]: 2025-10-02 12:16:34.784 2 INFO nova.virt.libvirt.driver [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Using config drive
Oct 02 12:16:34 compute-0 nova_compute[257802]: 2025-10-02 12:16:34.808 2 DEBUG nova.storage.rbd_utils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] rbd image 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:35 compute-0 nova_compute[257802]: 2025-10-02 12:16:35.091 2 INFO nova.virt.libvirt.driver [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Creating config drive at /var/lib/nova/instances/4ef4cd21-73d9-4ceb-8bd5-316a831e8e92/disk.config
Oct 02 12:16:35 compute-0 nova_compute[257802]: 2025-10-02 12:16:35.096 2 DEBUG oslo_concurrency.processutils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4ef4cd21-73d9-4ceb-8bd5-316a831e8e92/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmea6uamg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:35 compute-0 nova_compute[257802]: 2025-10-02 12:16:35.227 2 DEBUG oslo_concurrency.processutils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4ef4cd21-73d9-4ceb-8bd5-316a831e8e92/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmea6uamg" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:35 compute-0 nova_compute[257802]: 2025-10-02 12:16:35.257 2 DEBUG nova.storage.rbd_utils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] rbd image 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:35 compute-0 nova_compute[257802]: 2025-10-02 12:16:35.260 2 DEBUG oslo_concurrency.processutils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4ef4cd21-73d9-4ceb-8bd5-316a831e8e92/disk.config 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1562: 305 pgs: 305 active+clean; 139 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 710 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Oct 02 12:16:35 compute-0 nova_compute[257802]: 2025-10-02 12:16:35.466 2 DEBUG oslo_concurrency.processutils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4ef4cd21-73d9-4ceb-8bd5-316a831e8e92/disk.config 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:35 compute-0 nova_compute[257802]: 2025-10-02 12:16:35.467 2 INFO nova.virt.libvirt.driver [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Deleting local config drive /var/lib/nova/instances/4ef4cd21-73d9-4ceb-8bd5-316a831e8e92/disk.config because it was imported into RBD.
Oct 02 12:16:35 compute-0 virtqemud[257280]: End of file while reading data: Input/output error
Oct 02 12:16:35 compute-0 virtqemud[257280]: End of file while reading data: Input/output error
Oct 02 12:16:35 compute-0 systemd-machined[211836]: New machine qemu-33-instance-00000047.
Oct 02 12:16:35 compute-0 systemd[1]: Started Virtual Machine qemu-33-instance-00000047.
Oct 02 12:16:35 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/690857582' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:36.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:16:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:36.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:16:36 compute-0 ceph-mon[73607]: pgmap v1562: 305 pgs: 305 active+clean; 139 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 710 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Oct 02 12:16:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/148503303' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1814412693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:36 compute-0 nova_compute[257802]: 2025-10-02 12:16:36.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:37 compute-0 sudo[301245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:37 compute-0 sudo[301245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:37 compute-0 sudo[301245]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:37 compute-0 sudo[301273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:37 compute-0 sudo[301273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:37 compute-0 sudo[301273]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 163 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 1014 KiB/s rd, 2.9 MiB/s wr, 137 op/s
Oct 02 12:16:37 compute-0 nova_compute[257802]: 2025-10-02 12:16:37.545 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407397.5445623, 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:16:37 compute-0 nova_compute[257802]: 2025-10-02 12:16:37.546 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] VM Resumed (Lifecycle Event)
Oct 02 12:16:37 compute-0 nova_compute[257802]: 2025-10-02 12:16:37.550 2 DEBUG nova.compute.manager [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:16:37 compute-0 nova_compute[257802]: 2025-10-02 12:16:37.551 2 DEBUG nova.virt.libvirt.driver [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:16:37 compute-0 nova_compute[257802]: 2025-10-02 12:16:37.557 2 INFO nova.virt.libvirt.driver [-] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Instance spawned successfully.
Oct 02 12:16:37 compute-0 nova_compute[257802]: 2025-10-02 12:16:37.557 2 DEBUG nova.virt.libvirt.driver [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:16:37 compute-0 nova_compute[257802]: 2025-10-02 12:16:37.574 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:37 compute-0 nova_compute[257802]: 2025-10-02 12:16:37.585 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:16:37 compute-0 nova_compute[257802]: 2025-10-02 12:16:37.589 2 DEBUG nova.virt.libvirt.driver [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:16:37 compute-0 nova_compute[257802]: 2025-10-02 12:16:37.589 2 DEBUG nova.virt.libvirt.driver [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:16:37 compute-0 nova_compute[257802]: 2025-10-02 12:16:37.590 2 DEBUG nova.virt.libvirt.driver [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:16:37 compute-0 nova_compute[257802]: 2025-10-02 12:16:37.590 2 DEBUG nova.virt.libvirt.driver [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:16:37 compute-0 nova_compute[257802]: 2025-10-02 12:16:37.591 2 DEBUG nova.virt.libvirt.driver [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:16:37 compute-0 nova_compute[257802]: 2025-10-02 12:16:37.591 2 DEBUG nova.virt.libvirt.driver [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:16:37 compute-0 nova_compute[257802]: 2025-10-02 12:16:37.626 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:16:37 compute-0 nova_compute[257802]: 2025-10-02 12:16:37.627 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407397.5451436, 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:16:37 compute-0 nova_compute[257802]: 2025-10-02 12:16:37.627 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] VM Started (Lifecycle Event)
Oct 02 12:16:37 compute-0 nova_compute[257802]: 2025-10-02 12:16:37.658 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:37 compute-0 nova_compute[257802]: 2025-10-02 12:16:37.664 2 INFO nova.compute.manager [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Took 4.57 seconds to spawn the instance on the hypervisor.
Oct 02 12:16:37 compute-0 nova_compute[257802]: 2025-10-02 12:16:37.664 2 DEBUG nova.compute.manager [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:37 compute-0 nova_compute[257802]: 2025-10-02 12:16:37.665 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:16:37 compute-0 nova_compute[257802]: 2025-10-02 12:16:37.697 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:16:37 compute-0 nova_compute[257802]: 2025-10-02 12:16:37.737 2 INFO nova.compute.manager [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Took 5.77 seconds to build instance.
Oct 02 12:16:37 compute-0 nova_compute[257802]: 2025-10-02 12:16:37.760 2 DEBUG oslo_concurrency.lockutils [None req-e379f075-18fe-4b63-b43f-61e4ef2c918f 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lock "4ef4cd21-73d9-4ceb-8bd5-316a831e8e92" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.865s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:38 compute-0 nova_compute[257802]: 2025-10-02 12:16:38.067 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:38 compute-0 nova_compute[257802]: 2025-10-02 12:16:38.067 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:38 compute-0 nova_compute[257802]: 2025-10-02 12:16:38.084 2 DEBUG nova.compute.manager [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:16:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:16:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:38.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:16:38 compute-0 nova_compute[257802]: 2025-10-02 12:16:38.201 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:38 compute-0 nova_compute[257802]: 2025-10-02 12:16:38.202 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:38 compute-0 nova_compute[257802]: 2025-10-02 12:16:38.212 2 DEBUG nova.virt.hardware [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:16:38 compute-0 nova_compute[257802]: 2025-10-02 12:16:38.212 2 INFO nova.compute.claims [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:16:38 compute-0 nova_compute[257802]: 2025-10-02 12:16:38.344 2 DEBUG oslo_concurrency.processutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:38.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:16:38 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/310851050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:38 compute-0 nova_compute[257802]: 2025-10-02 12:16:38.792 2 DEBUG oslo_concurrency.processutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:38 compute-0 nova_compute[257802]: 2025-10-02 12:16:38.797 2 DEBUG nova.compute.provider_tree [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:16:38 compute-0 ceph-mon[73607]: pgmap v1563: 305 pgs: 305 active+clean; 163 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 1014 KiB/s rd, 2.9 MiB/s wr, 137 op/s
Oct 02 12:16:38 compute-0 nova_compute[257802]: 2025-10-02 12:16:38.817 2 DEBUG nova.scheduler.client.report [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:16:38 compute-0 nova_compute[257802]: 2025-10-02 12:16:38.837 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:38 compute-0 nova_compute[257802]: 2025-10-02 12:16:38.838 2 DEBUG nova.compute.manager [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:16:38 compute-0 nova_compute[257802]: 2025-10-02 12:16:38.877 2 DEBUG nova.compute.manager [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:16:38 compute-0 nova_compute[257802]: 2025-10-02 12:16:38.878 2 DEBUG nova.network.neutron [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:16:38 compute-0 nova_compute[257802]: 2025-10-02 12:16:38.895 2 INFO nova.virt.libvirt.driver [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:16:38 compute-0 nova_compute[257802]: 2025-10-02 12:16:38.996 2 DEBUG nova.compute.manager [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:16:39 compute-0 nova_compute[257802]: 2025-10-02 12:16:39.077 2 INFO nova.virt.block_device [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Booting with volume 7c616c2f-d398-4818-acf9-c1deefd666a5 at /dev/vda
Oct 02 12:16:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1564: 305 pgs: 305 active+clean; 227 MiB data, 669 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 188 op/s
Oct 02 12:16:39 compute-0 nova_compute[257802]: 2025-10-02 12:16:39.340 2 DEBUG os_brick.utils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:16:39 compute-0 nova_compute[257802]: 2025-10-02 12:16:39.342 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:39 compute-0 nova_compute[257802]: 2025-10-02 12:16:39.354 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:39 compute-0 nova_compute[257802]: 2025-10-02 12:16:39.354 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[3d5a7a08-7887-405e-970b-e0f369605410]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:39 compute-0 nova_compute[257802]: 2025-10-02 12:16:39.356 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:39 compute-0 nova_compute[257802]: 2025-10-02 12:16:39.363 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:39 compute-0 nova_compute[257802]: 2025-10-02 12:16:39.364 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[e2dec2dd-9d12-4a23-823a-b351ab81350d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:39 compute-0 nova_compute[257802]: 2025-10-02 12:16:39.366 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:39 compute-0 nova_compute[257802]: 2025-10-02 12:16:39.375 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:39 compute-0 nova_compute[257802]: 2025-10-02 12:16:39.376 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[ea20cead-41c5-4128-93ac-4285d58cf224]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:39 compute-0 nova_compute[257802]: 2025-10-02 12:16:39.378 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[af86b9a4-08dc-4250-a9cf-c767e8557cfd]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:39 compute-0 nova_compute[257802]: 2025-10-02 12:16:39.379 2 DEBUG oslo_concurrency.processutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:39 compute-0 nova_compute[257802]: 2025-10-02 12:16:39.412 2 DEBUG oslo_concurrency.processutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:39 compute-0 nova_compute[257802]: 2025-10-02 12:16:39.416 2 DEBUG os_brick.initiator.connectors.lightos [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:16:39 compute-0 nova_compute[257802]: 2025-10-02 12:16:39.417 2 DEBUG os_brick.initiator.connectors.lightos [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:16:39 compute-0 nova_compute[257802]: 2025-10-02 12:16:39.417 2 DEBUG os_brick.initiator.connectors.lightos [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:16:39 compute-0 nova_compute[257802]: 2025-10-02 12:16:39.418 2 DEBUG os_brick.utils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] <== get_connector_properties: return (76ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:16:39 compute-0 nova_compute[257802]: 2025-10-02 12:16:39.419 2 DEBUG nova.virt.block_device [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Updating existing volume attachment record: 66f30441-268c-44a2-b67b-424f14a0be77 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:16:39 compute-0 nova_compute[257802]: 2025-10-02 12:16:39.618 2 DEBUG nova.policy [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd4b073f3365d481cabadfd39389c66ba', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a9d3eca266284ae9950c491e566b2523', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:16:39 compute-0 nova_compute[257802]: 2025-10-02 12:16:39.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:39 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/310851050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:39 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/751861772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:39 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Oct 02 12:16:40 compute-0 nova_compute[257802]: 2025-10-02 12:16:40.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:16:40 compute-0 nova_compute[257802]: 2025-10-02 12:16:40.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:16:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:16:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:40.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:16:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:16:40 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2835534404' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:16:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:40.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:16:40 compute-0 nova_compute[257802]: 2025-10-02 12:16:40.403 2 DEBUG nova.compute.manager [None req-55250bcd-662b-48bb-ab98-a125a7b78724 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:40 compute-0 nova_compute[257802]: 2025-10-02 12:16:40.471 2 INFO nova.compute.manager [None req-55250bcd-662b-48bb-ab98-a125a7b78724 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] instance snapshotting
Oct 02 12:16:40 compute-0 nova_compute[257802]: 2025-10-02 12:16:40.586 2 DEBUG nova.network.neutron [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Successfully created port: 1b5f1c6a-6b73-4b1d-8560-6a38d7e59732 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:16:40 compute-0 nova_compute[257802]: 2025-10-02 12:16:40.639 2 INFO nova.virt.block_device [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Booting with volume 8c49b0fe-d42a-4dc2-8485-1c114233c6fc at /dev/vdb
Oct 02 12:16:40 compute-0 nova_compute[257802]: 2025-10-02 12:16:40.749 2 INFO nova.virt.libvirt.driver [None req-55250bcd-662b-48bb-ab98-a125a7b78724 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Beginning live snapshot process
Oct 02 12:16:40 compute-0 nova_compute[257802]: 2025-10-02 12:16:40.773 2 DEBUG os_brick.utils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:16:40 compute-0 nova_compute[257802]: 2025-10-02 12:16:40.774 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:40 compute-0 nova_compute[257802]: 2025-10-02 12:16:40.784 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:40 compute-0 nova_compute[257802]: 2025-10-02 12:16:40.785 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[c34ea5c4-c659-4f2c-9b83-c8850e531c7b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:40 compute-0 nova_compute[257802]: 2025-10-02 12:16:40.786 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:40 compute-0 nova_compute[257802]: 2025-10-02 12:16:40.792 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:40 compute-0 nova_compute[257802]: 2025-10-02 12:16:40.792 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[2e134a5d-7f92-4a95-9269-d96b77e82136]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:40 compute-0 nova_compute[257802]: 2025-10-02 12:16:40.793 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:40 compute-0 nova_compute[257802]: 2025-10-02 12:16:40.801 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:40 compute-0 nova_compute[257802]: 2025-10-02 12:16:40.801 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[6e5c2fb0-1fd1-41d6-a751-9ba41194ca43]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:40 compute-0 nova_compute[257802]: 2025-10-02 12:16:40.802 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[094494e3-f868-41de-983c-7af549d491e9]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:40 compute-0 nova_compute[257802]: 2025-10-02 12:16:40.805 2 DEBUG oslo_concurrency.processutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:40 compute-0 nova_compute[257802]: 2025-10-02 12:16:40.831 2 DEBUG oslo_concurrency.processutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] CMD "nvme version" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:40 compute-0 nova_compute[257802]: 2025-10-02 12:16:40.834 2 DEBUG os_brick.initiator.connectors.lightos [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:16:40 compute-0 nova_compute[257802]: 2025-10-02 12:16:40.834 2 DEBUG os_brick.initiator.connectors.lightos [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:16:40 compute-0 nova_compute[257802]: 2025-10-02 12:16:40.834 2 DEBUG os_brick.initiator.connectors.lightos [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:16:40 compute-0 nova_compute[257802]: 2025-10-02 12:16:40.835 2 DEBUG os_brick.utils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] <== get_connector_properties: return (61ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:16:40 compute-0 nova_compute[257802]: 2025-10-02 12:16:40.835 2 DEBUG nova.virt.block_device [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Updating existing volume attachment record: 291b04fe-1a16-460c-ae11-af9f070c1703 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:16:40 compute-0 nova_compute[257802]: 2025-10-02 12:16:40.901 2 DEBUG nova.virt.libvirt.imagebackend [None req-55250bcd-662b-48bb-ab98-a125a7b78724 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] No parent info for c2d0c2bc-fe21-4689-86ae-d6728c15874c; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 12:16:40 compute-0 ceph-mon[73607]: pgmap v1564: 305 pgs: 305 active+clean; 227 MiB data, 669 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 188 op/s
Oct 02 12:16:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2835534404' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:41 compute-0 nova_compute[257802]: 2025-10-02 12:16:41.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:16:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1565: 305 pgs: 305 active+clean; 250 MiB data, 699 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.6 MiB/s wr, 197 op/s
Oct 02 12:16:41 compute-0 nova_compute[257802]: 2025-10-02 12:16:41.278 2 DEBUG nova.storage.rbd_utils [None req-55250bcd-662b-48bb-ab98-a125a7b78724 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] creating snapshot(31d53385bd214a47b3a4c67f56b3ea4b) on rbd image(4ef4cd21-73d9-4ceb-8bd5-316a831e8e92_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:16:41 compute-0 nova_compute[257802]: 2025-10-02 12:16:41.502 2 DEBUG nova.network.neutron [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Successfully created port: c23e360e-b27a-4c79-a3ce-5c8850ccc846 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:16:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:16:41 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2996305155' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:41 compute-0 nova_compute[257802]: 2025-10-02 12:16:41.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Oct 02 12:16:42 compute-0 nova_compute[257802]: 2025-10-02 12:16:42.016 2 INFO nova.virt.block_device [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Booting with volume 5566db4c-b0ba-4d2b-84a6-1fa1272fec0f at /dev/vdc
Oct 02 12:16:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Oct 02 12:16:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2996305155' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:42 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Oct 02 12:16:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:42.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:42 compute-0 nova_compute[257802]: 2025-10-02 12:16:42.139 2 DEBUG nova.storage.rbd_utils [None req-55250bcd-662b-48bb-ab98-a125a7b78724 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] cloning vms/4ef4cd21-73d9-4ceb-8bd5-316a831e8e92_disk@31d53385bd214a47b3a4c67f56b3ea4b to images/3242b862-74d4-4558-9399-2a866969329a clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:16:42 compute-0 nova_compute[257802]: 2025-10-02 12:16:42.191 2 DEBUG nova.network.neutron [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Successfully created port: eea85d66-4089-4708-9cb0-a7d2ab18e6ec _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:16:42 compute-0 nova_compute[257802]: 2025-10-02 12:16:42.194 2 DEBUG os_brick.utils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:16:42 compute-0 nova_compute[257802]: 2025-10-02 12:16:42.196 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:42 compute-0 nova_compute[257802]: 2025-10-02 12:16:42.210 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:42 compute-0 nova_compute[257802]: 2025-10-02 12:16:42.210 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[70c9ae20-60fe-486d-b417-0e1f9134d59f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:42 compute-0 nova_compute[257802]: 2025-10-02 12:16:42.211 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:42 compute-0 nova_compute[257802]: 2025-10-02 12:16:42.220 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:42 compute-0 nova_compute[257802]: 2025-10-02 12:16:42.221 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[95383e27-bbb5-4a04-bd89-bd77dd0497d4]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:42 compute-0 nova_compute[257802]: 2025-10-02 12:16:42.223 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:42 compute-0 nova_compute[257802]: 2025-10-02 12:16:42.232 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:42 compute-0 nova_compute[257802]: 2025-10-02 12:16:42.232 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[e3ed91ae-4530-4490-94d7-d955ac0ea916]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:42 compute-0 nova_compute[257802]: 2025-10-02 12:16:42.234 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[f8b8afaf-4bf8-460c-a08b-b15ab8eb2ffd]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:42 compute-0 nova_compute[257802]: 2025-10-02 12:16:42.235 2 DEBUG oslo_concurrency.processutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:42 compute-0 nova_compute[257802]: 2025-10-02 12:16:42.261 2 DEBUG oslo_concurrency.processutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:42 compute-0 nova_compute[257802]: 2025-10-02 12:16:42.263 2 DEBUG os_brick.initiator.connectors.lightos [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:16:42 compute-0 nova_compute[257802]: 2025-10-02 12:16:42.264 2 DEBUG os_brick.initiator.connectors.lightos [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:16:42 compute-0 nova_compute[257802]: 2025-10-02 12:16:42.264 2 DEBUG os_brick.initiator.connectors.lightos [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:16:42 compute-0 nova_compute[257802]: 2025-10-02 12:16:42.264 2 DEBUG os_brick.utils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] <== get_connector_properties: return (69ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:16:42 compute-0 nova_compute[257802]: 2025-10-02 12:16:42.265 2 DEBUG nova.virt.block_device [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Updating existing volume attachment record: 1c5135d3-fcf3-4e66-8457-e1dbd53ff6a6 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:16:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:16:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:42.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:16:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:16:42
Oct 02 12:16:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:16:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:16:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'images', 'vms', 'backups', '.mgr', 'default.rgw.log', '.rgw.root']
Oct 02 12:16:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:16:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:16:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:16:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:16:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:16:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:16:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:16:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:16:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:16:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:16:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:16:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:16:43 compute-0 nova_compute[257802]: 2025-10-02 12:16:43.007 2 DEBUG nova.network.neutron [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Successfully created port: 7eba4256-8a01-49a2-aec1-98d48eda4d3f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:16:43 compute-0 nova_compute[257802]: 2025-10-02 12:16:43.037 2 DEBUG nova.storage.rbd_utils [None req-55250bcd-662b-48bb-ab98-a125a7b78724 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] flattening images/3242b862-74d4-4558-9399-2a866969329a flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 12:16:43 compute-0 ceph-mon[73607]: pgmap v1565: 305 pgs: 305 active+clean; 250 MiB data, 699 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.6 MiB/s wr, 197 op/s
Oct 02 12:16:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2394753011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:43 compute-0 ceph-mon[73607]: osdmap e224: 3 total, 3 up, 3 in
Oct 02 12:16:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1734109519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2421507239' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:43 compute-0 nova_compute[257802]: 2025-10-02 12:16:43.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:16:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:16:43 compute-0 ceph-mgr[73901]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3158772141
Oct 02 12:16:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:16:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 305 active+clean; 258 MiB data, 702 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 5.8 MiB/s wr, 336 op/s
Oct 02 12:16:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:16:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:16:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:16:43 compute-0 nova_compute[257802]: 2025-10-02 12:16:43.413 2 DEBUG nova.compute.manager [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:16:43 compute-0 nova_compute[257802]: 2025-10-02 12:16:43.417 2 DEBUG nova.virt.libvirt.driver [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:16:43 compute-0 nova_compute[257802]: 2025-10-02 12:16:43.418 2 INFO nova.virt.libvirt.driver [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Creating image(s)
Oct 02 12:16:43 compute-0 nova_compute[257802]: 2025-10-02 12:16:43.419 2 DEBUG nova.virt.libvirt.driver [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:16:43 compute-0 nova_compute[257802]: 2025-10-02 12:16:43.420 2 DEBUG nova.virt.libvirt.driver [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Ensure instance console log exists: /var/lib/nova/instances/d7e14705-7aeb-440f-8e7e-926cc5b5ab6f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:16:43 compute-0 nova_compute[257802]: 2025-10-02 12:16:43.421 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:43 compute-0 nova_compute[257802]: 2025-10-02 12:16:43.421 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:43 compute-0 nova_compute[257802]: 2025-10-02 12:16:43.422 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:43 compute-0 nova_compute[257802]: 2025-10-02 12:16:43.652 2 DEBUG nova.network.neutron [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Successfully created port: 137dbaa1-1ef7-4aea-bc14-9f99b461690c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:16:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:44.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:44 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1354439005' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:44 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3515920537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:44.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:44 compute-0 nova_compute[257802]: 2025-10-02 12:16:44.450 2 DEBUG nova.storage.rbd_utils [None req-55250bcd-662b-48bb-ab98-a125a7b78724 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] removing snapshot(31d53385bd214a47b3a4c67f56b3ea4b) on rbd image(4ef4cd21-73d9-4ceb-8bd5-316a831e8e92_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:16:44 compute-0 nova_compute[257802]: 2025-10-02 12:16:44.634 2 DEBUG nova.network.neutron [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Successfully updated port: 1b5f1c6a-6b73-4b1d-8560-6a38d7e59732 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:16:44 compute-0 nova_compute[257802]: 2025-10-02 12:16:44.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:44 compute-0 nova_compute[257802]: 2025-10-02 12:16:44.822 2 DEBUG nova.compute.manager [req-07c74473-9789-4c9a-bfa6-411797eade3e req-6288398f-9417-4f94-bc65-2ea4a893ce36 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-changed-1b5f1c6a-6b73-4b1d-8560-6a38d7e59732 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:16:44 compute-0 nova_compute[257802]: 2025-10-02 12:16:44.823 2 DEBUG nova.compute.manager [req-07c74473-9789-4c9a-bfa6-411797eade3e req-6288398f-9417-4f94-bc65-2ea4a893ce36 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Refreshing instance network info cache due to event network-changed-1b5f1c6a-6b73-4b1d-8560-6a38d7e59732. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:16:44 compute-0 nova_compute[257802]: 2025-10-02 12:16:44.823 2 DEBUG oslo_concurrency.lockutils [req-07c74473-9789-4c9a-bfa6-411797eade3e req-6288398f-9417-4f94-bc65-2ea4a893ce36 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:16:44 compute-0 nova_compute[257802]: 2025-10-02 12:16:44.824 2 DEBUG oslo_concurrency.lockutils [req-07c74473-9789-4c9a-bfa6-411797eade3e req-6288398f-9417-4f94-bc65-2ea4a893ce36 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:16:44 compute-0 nova_compute[257802]: 2025-10-02 12:16:44.824 2 DEBUG nova.network.neutron [req-07c74473-9789-4c9a-bfa6-411797eade3e req-6288398f-9417-4f94-bc65-2ea4a893ce36 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Refreshing network info cache for port 1b5f1c6a-6b73-4b1d-8560-6a38d7e59732 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:16:45 compute-0 nova_compute[257802]: 2025-10-02 12:16:45.051 2 DEBUG nova.network.neutron [req-07c74473-9789-4c9a-bfa6-411797eade3e req-6288398f-9417-4f94-bc65-2ea4a893ce36 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:16:45 compute-0 nova_compute[257802]: 2025-10-02 12:16:45.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:16:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Oct 02 12:16:45 compute-0 ceph-mon[73607]: pgmap v1567: 305 pgs: 305 active+clean; 258 MiB data, 702 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 5.8 MiB/s wr, 336 op/s
Oct 02 12:16:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1568: 305 pgs: 305 active+clean; 281 MiB data, 708 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 6.9 MiB/s wr, 347 op/s
Oct 02 12:16:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Oct 02 12:16:45 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Oct 02 12:16:45 compute-0 nova_compute[257802]: 2025-10-02 12:16:45.332 2 DEBUG nova.storage.rbd_utils [None req-55250bcd-662b-48bb-ab98-a125a7b78724 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] creating snapshot(snap) on rbd image(3242b862-74d4-4558-9399-2a866969329a) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:16:45 compute-0 nova_compute[257802]: 2025-10-02 12:16:45.498 2 DEBUG nova.network.neutron [req-07c74473-9789-4c9a-bfa6-411797eade3e req-6288398f-9417-4f94-bc65-2ea4a893ce36 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:16:45 compute-0 nova_compute[257802]: 2025-10-02 12:16:45.518 2 DEBUG oslo_concurrency.lockutils [req-07c74473-9789-4c9a-bfa6-411797eade3e req-6288398f-9417-4f94-bc65-2ea4a893ce36 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:16:46 compute-0 nova_compute[257802]: 2025-10-02 12:16:46.043 2 DEBUG nova.network.neutron [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Successfully updated port: d56785b1-aeee-49be-af2d-71e8fc0a6c9b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:16:46 compute-0 nova_compute[257802]: 2025-10-02 12:16:46.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:16:46 compute-0 nova_compute[257802]: 2025-10-02 12:16:46.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:16:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:16:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:46.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:16:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Oct 02 12:16:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:46.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Oct 02 12:16:46 compute-0 ceph-mon[73607]: pgmap v1568: 305 pgs: 305 active+clean; 281 MiB data, 708 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 6.9 MiB/s wr, 347 op/s
Oct 02 12:16:46 compute-0 ceph-mon[73607]: osdmap e225: 3 total, 3 up, 3 in
Oct 02 12:16:46 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Oct 02 12:16:46 compute-0 nova_compute[257802]: 2025-10-02 12:16:46.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:46 compute-0 nova_compute[257802]: 2025-10-02 12:16:46.822 2 DEBUG nova.network.neutron [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Successfully updated port: 4a1bf892-fd3f-40df-95f2-779ffc41ade1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:16:46 compute-0 nova_compute[257802]: 2025-10-02 12:16:46.921 2 DEBUG nova.compute.manager [req-ef4715aa-c04c-4bb5-ace5-735fb94ed577 req-d494d450-0940-4103-9a34-f3723a7e6cfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-changed-d56785b1-aeee-49be-af2d-71e8fc0a6c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:16:46 compute-0 nova_compute[257802]: 2025-10-02 12:16:46.922 2 DEBUG nova.compute.manager [req-ef4715aa-c04c-4bb5-ace5-735fb94ed577 req-d494d450-0940-4103-9a34-f3723a7e6cfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Refreshing instance network info cache due to event network-changed-d56785b1-aeee-49be-af2d-71e8fc0a6c9b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:16:46 compute-0 nova_compute[257802]: 2025-10-02 12:16:46.922 2 DEBUG oslo_concurrency.lockutils [req-ef4715aa-c04c-4bb5-ace5-735fb94ed577 req-d494d450-0940-4103-9a34-f3723a7e6cfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:16:46 compute-0 nova_compute[257802]: 2025-10-02 12:16:46.923 2 DEBUG oslo_concurrency.lockutils [req-ef4715aa-c04c-4bb5-ace5-735fb94ed577 req-d494d450-0940-4103-9a34-f3723a7e6cfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:16:46 compute-0 nova_compute[257802]: 2025-10-02 12:16:46.923 2 DEBUG nova.network.neutron [req-ef4715aa-c04c-4bb5-ace5-735fb94ed577 req-d494d450-0940-4103-9a34-f3723a7e6cfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Refreshing network info cache for port d56785b1-aeee-49be-af2d-71e8fc0a6c9b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:16:47 compute-0 nova_compute[257802]: 2025-10-02 12:16:47.242 2 DEBUG nova.network.neutron [req-ef4715aa-c04c-4bb5-ace5-735fb94ed577 req-d494d450-0940-4103-9a34-f3723a7e6cfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:16:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1571: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 313 MiB data, 727 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 5.7 MiB/s wr, 369 op/s
Oct 02 12:16:47 compute-0 ceph-mon[73607]: osdmap e226: 3 total, 3 up, 3 in
Oct 02 12:16:47 compute-0 ovn_controller[148183]: 2025-10-02T12:16:47Z|00264|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct 02 12:16:47 compute-0 nova_compute[257802]: 2025-10-02 12:16:47.885 2 DEBUG nova.network.neutron [req-ef4715aa-c04c-4bb5-ace5-735fb94ed577 req-d494d450-0940-4103-9a34-f3723a7e6cfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:16:47 compute-0 nova_compute[257802]: 2025-10-02 12:16:47.906 2 DEBUG oslo_concurrency.lockutils [req-ef4715aa-c04c-4bb5-ace5-735fb94ed577 req-d494d450-0940-4103-9a34-f3723a7e6cfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:16:47 compute-0 nova_compute[257802]: 2025-10-02 12:16:47.906 2 DEBUG nova.compute.manager [req-ef4715aa-c04c-4bb5-ace5-735fb94ed577 req-d494d450-0940-4103-9a34-f3723a7e6cfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-changed-4a1bf892-fd3f-40df-95f2-779ffc41ade1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:16:47 compute-0 nova_compute[257802]: 2025-10-02 12:16:47.907 2 DEBUG nova.compute.manager [req-ef4715aa-c04c-4bb5-ace5-735fb94ed577 req-d494d450-0940-4103-9a34-f3723a7e6cfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Refreshing instance network info cache due to event network-changed-4a1bf892-fd3f-40df-95f2-779ffc41ade1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:16:47 compute-0 nova_compute[257802]: 2025-10-02 12:16:47.907 2 DEBUG oslo_concurrency.lockutils [req-ef4715aa-c04c-4bb5-ace5-735fb94ed577 req-d494d450-0940-4103-9a34-f3723a7e6cfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:16:47 compute-0 nova_compute[257802]: 2025-10-02 12:16:47.907 2 DEBUG oslo_concurrency.lockutils [req-ef4715aa-c04c-4bb5-ace5-735fb94ed577 req-d494d450-0940-4103-9a34-f3723a7e6cfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:16:47 compute-0 nova_compute[257802]: 2025-10-02 12:16:47.907 2 DEBUG nova.network.neutron [req-ef4715aa-c04c-4bb5-ace5-735fb94ed577 req-d494d450-0940-4103-9a34-f3723a7e6cfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Refreshing network info cache for port 4a1bf892-fd3f-40df-95f2-779ffc41ade1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:16:47 compute-0 podman[301488]: 2025-10-02 12:16:47.92157748 +0000 UTC m=+0.057271410 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Oct 02 12:16:48 compute-0 nova_compute[257802]: 2025-10-02 12:16:48.089 2 DEBUG nova.network.neutron [req-ef4715aa-c04c-4bb5-ace5-735fb94ed577 req-d494d450-0940-4103-9a34-f3723a7e6cfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:16:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:16:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:48.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:16:48 compute-0 nova_compute[257802]: 2025-10-02 12:16:48.175 2 DEBUG nova.network.neutron [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Successfully updated port: c23e360e-b27a-4c79-a3ce-5c8850ccc846 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:16:48 compute-0 nova_compute[257802]: 2025-10-02 12:16:48.248 2 INFO nova.virt.libvirt.driver [None req-55250bcd-662b-48bb-ab98-a125a7b78724 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Snapshot image upload complete
Oct 02 12:16:48 compute-0 nova_compute[257802]: 2025-10-02 12:16:48.249 2 INFO nova.compute.manager [None req-55250bcd-662b-48bb-ab98-a125a7b78724 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Took 7.78 seconds to snapshot the instance on the hypervisor.
Oct 02 12:16:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:48.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:48 compute-0 nova_compute[257802]: 2025-10-02 12:16:48.537 2 DEBUG nova.network.neutron [req-ef4715aa-c04c-4bb5-ace5-735fb94ed577 req-d494d450-0940-4103-9a34-f3723a7e6cfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:16:48 compute-0 ceph-mon[73607]: pgmap v1571: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 313 MiB data, 727 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 5.7 MiB/s wr, 369 op/s
Oct 02 12:16:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4044166331' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:48 compute-0 nova_compute[257802]: 2025-10-02 12:16:48.560 2 DEBUG oslo_concurrency.lockutils [req-ef4715aa-c04c-4bb5-ace5-735fb94ed577 req-d494d450-0940-4103-9a34-f3723a7e6cfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:16:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:49 compute-0 nova_compute[257802]: 2025-10-02 12:16:49.015 2 DEBUG nova.compute.manager [req-756856ea-e493-4192-8268-92e69353cc6f req-91931deb-9c2b-481e-ab80-b7fb4c10e93e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-changed-c23e360e-b27a-4c79-a3ce-5c8850ccc846 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:16:49 compute-0 nova_compute[257802]: 2025-10-02 12:16:49.016 2 DEBUG nova.compute.manager [req-756856ea-e493-4192-8268-92e69353cc6f req-91931deb-9c2b-481e-ab80-b7fb4c10e93e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Refreshing instance network info cache due to event network-changed-c23e360e-b27a-4c79-a3ce-5c8850ccc846. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:16:49 compute-0 nova_compute[257802]: 2025-10-02 12:16:49.017 2 DEBUG oslo_concurrency.lockutils [req-756856ea-e493-4192-8268-92e69353cc6f req-91931deb-9c2b-481e-ab80-b7fb4c10e93e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:16:49 compute-0 nova_compute[257802]: 2025-10-02 12:16:49.017 2 DEBUG oslo_concurrency.lockutils [req-756856ea-e493-4192-8268-92e69353cc6f req-91931deb-9c2b-481e-ab80-b7fb4c10e93e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:16:49 compute-0 nova_compute[257802]: 2025-10-02 12:16:49.018 2 DEBUG nova.network.neutron [req-756856ea-e493-4192-8268-92e69353cc6f req-91931deb-9c2b-481e-ab80-b7fb4c10e93e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Refreshing network info cache for port c23e360e-b27a-4c79-a3ce-5c8850ccc846 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:16:49 compute-0 nova_compute[257802]: 2025-10-02 12:16:49.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:16:49 compute-0 nova_compute[257802]: 2025-10-02 12:16:49.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:16:49 compute-0 nova_compute[257802]: 2025-10-02 12:16:49.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:16:49 compute-0 nova_compute[257802]: 2025-10-02 12:16:49.131 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 12:16:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1572: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 362 MiB data, 754 MiB used, 20 GiB / 21 GiB avail; 9.5 MiB/s rd, 8.6 MiB/s wr, 457 op/s
Oct 02 12:16:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1989877382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:49 compute-0 nova_compute[257802]: 2025-10-02 12:16:49.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:49 compute-0 nova_compute[257802]: 2025-10-02 12:16:49.934 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-4ef4cd21-73d9-4ceb-8bd5-316a831e8e92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:16:49 compute-0 nova_compute[257802]: 2025-10-02 12:16:49.935 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-4ef4cd21-73d9-4ceb-8bd5-316a831e8e92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:16:49 compute-0 nova_compute[257802]: 2025-10-02 12:16:49.935 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:16:49 compute-0 nova_compute[257802]: 2025-10-02 12:16:49.935 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:16:49 compute-0 nova_compute[257802]: 2025-10-02 12:16:49.993 2 DEBUG nova.network.neutron [req-756856ea-e493-4192-8268-92e69353cc6f req-91931deb-9c2b-481e-ab80-b7fb4c10e93e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:16:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:16:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:50.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:16:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:16:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:50.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:16:50 compute-0 nova_compute[257802]: 2025-10-02 12:16:50.424 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:16:50 compute-0 nova_compute[257802]: 2025-10-02 12:16:50.452 2 DEBUG nova.network.neutron [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Successfully updated port: eea85d66-4089-4708-9cb0-a7d2ab18e6ec _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:16:50 compute-0 ceph-mon[73607]: pgmap v1572: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 362 MiB data, 754 MiB used, 20 GiB / 21 GiB avail; 9.5 MiB/s rd, 8.6 MiB/s wr, 457 op/s
Oct 02 12:16:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/164442424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:50 compute-0 nova_compute[257802]: 2025-10-02 12:16:50.634 2 DEBUG nova.network.neutron [req-756856ea-e493-4192-8268-92e69353cc6f req-91931deb-9c2b-481e-ab80-b7fb4c10e93e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:16:50 compute-0 nova_compute[257802]: 2025-10-02 12:16:50.645 2 DEBUG oslo_concurrency.lockutils [req-756856ea-e493-4192-8268-92e69353cc6f req-91931deb-9c2b-481e-ab80-b7fb4c10e93e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:16:50 compute-0 nova_compute[257802]: 2025-10-02 12:16:50.737 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:16:50 compute-0 nova_compute[257802]: 2025-10-02 12:16:50.759 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-4ef4cd21-73d9-4ceb-8bd5-316a831e8e92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:16:50 compute-0 nova_compute[257802]: 2025-10-02 12:16:50.759 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:16:51 compute-0 nova_compute[257802]: 2025-10-02 12:16:51.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:16:51 compute-0 nova_compute[257802]: 2025-10-02 12:16:51.138 2 DEBUG nova.compute.manager [req-6232244a-9118-48ab-93a9-62fccc3e2cb1 req-74024dcc-f3c8-4442-b21d-224a2009caf8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-changed-eea85d66-4089-4708-9cb0-a7d2ab18e6ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:16:51 compute-0 nova_compute[257802]: 2025-10-02 12:16:51.138 2 DEBUG nova.compute.manager [req-6232244a-9118-48ab-93a9-62fccc3e2cb1 req-74024dcc-f3c8-4442-b21d-224a2009caf8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Refreshing instance network info cache due to event network-changed-eea85d66-4089-4708-9cb0-a7d2ab18e6ec. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:16:51 compute-0 nova_compute[257802]: 2025-10-02 12:16:51.138 2 DEBUG oslo_concurrency.lockutils [req-6232244a-9118-48ab-93a9-62fccc3e2cb1 req-74024dcc-f3c8-4442-b21d-224a2009caf8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:16:51 compute-0 nova_compute[257802]: 2025-10-02 12:16:51.139 2 DEBUG oslo_concurrency.lockutils [req-6232244a-9118-48ab-93a9-62fccc3e2cb1 req-74024dcc-f3c8-4442-b21d-224a2009caf8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:16:51 compute-0 nova_compute[257802]: 2025-10-02 12:16:51.139 2 DEBUG nova.network.neutron [req-6232244a-9118-48ab-93a9-62fccc3e2cb1 req-74024dcc-f3c8-4442-b21d-224a2009caf8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Refreshing network info cache for port eea85d66-4089-4708-9cb0-a7d2ab18e6ec _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:16:51 compute-0 nova_compute[257802]: 2025-10-02 12:16:51.142 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:51 compute-0 nova_compute[257802]: 2025-10-02 12:16:51.142 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:51 compute-0 nova_compute[257802]: 2025-10-02 12:16:51.143 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:51 compute-0 nova_compute[257802]: 2025-10-02 12:16:51.143 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:16:51 compute-0 nova_compute[257802]: 2025-10-02 12:16:51.143 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1573: 305 pgs: 305 active+clean; 380 MiB data, 769 MiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 9.2 MiB/s wr, 410 op/s
Oct 02 12:16:51 compute-0 nova_compute[257802]: 2025-10-02 12:16:51.450 2 DEBUG nova.network.neutron [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Successfully updated port: 7eba4256-8a01-49a2-aec1-98d48eda4d3f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:16:51 compute-0 nova_compute[257802]: 2025-10-02 12:16:51.455 2 DEBUG nova.network.neutron [req-6232244a-9118-48ab-93a9-62fccc3e2cb1 req-74024dcc-f3c8-4442-b21d-224a2009caf8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:16:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:16:51 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/518681732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Oct 02 12:16:51 compute-0 nova_compute[257802]: 2025-10-02 12:16:51.598 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/518681732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Oct 02 12:16:51 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Oct 02 12:16:51 compute-0 nova_compute[257802]: 2025-10-02 12:16:51.659 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000047 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:16:51 compute-0 nova_compute[257802]: 2025-10-02 12:16:51.659 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000047 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:16:51 compute-0 nova_compute[257802]: 2025-10-02 12:16:51.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:51 compute-0 nova_compute[257802]: 2025-10-02 12:16:51.765 2 DEBUG nova.network.neutron [req-6232244a-9118-48ab-93a9-62fccc3e2cb1 req-74024dcc-f3c8-4442-b21d-224a2009caf8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:16:51 compute-0 nova_compute[257802]: 2025-10-02 12:16:51.794 2 DEBUG oslo_concurrency.lockutils [req-6232244a-9118-48ab-93a9-62fccc3e2cb1 req-74024dcc-f3c8-4442-b21d-224a2009caf8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:16:51 compute-0 nova_compute[257802]: 2025-10-02 12:16:51.804 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:16:51 compute-0 nova_compute[257802]: 2025-10-02 12:16:51.805 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4411MB free_disk=20.874622344970703GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:16:51 compute-0 nova_compute[257802]: 2025-10-02 12:16:51.805 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:51 compute-0 nova_compute[257802]: 2025-10-02 12:16:51.806 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:51 compute-0 nova_compute[257802]: 2025-10-02 12:16:51.886 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:16:51 compute-0 nova_compute[257802]: 2025-10-02 12:16:51.886 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance d7e14705-7aeb-440f-8e7e-926cc5b5ab6f actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:16:51 compute-0 nova_compute[257802]: 2025-10-02 12:16:51.886 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:16:51 compute-0 nova_compute[257802]: 2025-10-02 12:16:51.887 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:16:51 compute-0 nova_compute[257802]: 2025-10-02 12:16:51.949 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:52.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:16:52 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/219888247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:52 compute-0 nova_compute[257802]: 2025-10-02 12:16:52.381 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:52.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:52 compute-0 nova_compute[257802]: 2025-10-02 12:16:52.386 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:16:52 compute-0 nova_compute[257802]: 2025-10-02 12:16:52.399 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:16:52 compute-0 nova_compute[257802]: 2025-10-02 12:16:52.413 2 DEBUG nova.network.neutron [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Successfully updated port: 137dbaa1-1ef7-4aea-bc14-9f99b461690c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:16:52 compute-0 nova_compute[257802]: 2025-10-02 12:16:52.418 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:16:52 compute-0 nova_compute[257802]: 2025-10-02 12:16:52.418 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:52 compute-0 nova_compute[257802]: 2025-10-02 12:16:52.423 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Acquiring lock "refresh_cache-d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:16:52 compute-0 nova_compute[257802]: 2025-10-02 12:16:52.423 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Acquired lock "refresh_cache-d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:16:52 compute-0 nova_compute[257802]: 2025-10-02 12:16:52.423 2 DEBUG nova.network.neutron [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:16:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Oct 02 12:16:52 compute-0 ceph-mon[73607]: pgmap v1573: 305 pgs: 305 active+clean; 380 MiB data, 769 MiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 9.2 MiB/s wr, 410 op/s
Oct 02 12:16:52 compute-0 ceph-mon[73607]: osdmap e227: 3 total, 3 up, 3 in
Oct 02 12:16:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/219888247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:52 compute-0 nova_compute[257802]: 2025-10-02 12:16:52.630 2 DEBUG nova.network.neutron [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:16:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Oct 02 12:16:52 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Oct 02 12:16:52 compute-0 podman[301554]: 2025-10-02 12:16:52.926741065 +0000 UTC m=+0.068991528 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:16:52 compute-0 podman[301555]: 2025-10-02 12:16:52.948719645 +0000 UTC m=+0.076362678 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:16:53 compute-0 nova_compute[257802]: 2025-10-02 12:16:53.237 2 DEBUG nova.compute.manager [req-0ca3a373-8c95-4cba-a4e8-ffd0f111e73d req-1f0b83e7-4199-4249-9060-eb41dbe89f99 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-changed-7eba4256-8a01-49a2-aec1-98d48eda4d3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:16:53 compute-0 nova_compute[257802]: 2025-10-02 12:16:53.237 2 DEBUG nova.compute.manager [req-0ca3a373-8c95-4cba-a4e8-ffd0f111e73d req-1f0b83e7-4199-4249-9060-eb41dbe89f99 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Refreshing instance network info cache due to event network-changed-7eba4256-8a01-49a2-aec1-98d48eda4d3f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:16:53 compute-0 nova_compute[257802]: 2025-10-02 12:16:53.237 2 DEBUG oslo_concurrency.lockutils [req-0ca3a373-8c95-4cba-a4e8-ffd0f111e73d req-1f0b83e7-4199-4249-9060-eb41dbe89f99 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:16:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1576: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 459 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 14 MiB/s wr, 475 op/s
Oct 02 12:16:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Oct 02 12:16:53 compute-0 ceph-mon[73607]: osdmap e228: 3 total, 3 up, 3 in
Oct 02 12:16:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1216961707' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Oct 02 12:16:53 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Oct 02 12:16:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Oct 02 12:16:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Oct 02 12:16:53 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Oct 02 12:16:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:54.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:16:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:16:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:16:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:16:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00809748976363298 of space, bias 1.0, pg target 2.429246929089894 quantized to 32 (current 32)
Oct 02 12:16:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:16:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009896487646566163 of space, bias 1.0, pg target 0.29491533186767166 quantized to 32 (current 32)
Oct 02 12:16:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:16:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:16:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:16:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0032150407709845524 of space, bias 1.0, pg target 0.9580821497533967 quantized to 32 (current 32)
Oct 02 12:16:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:16:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:16:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:16:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:16:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:16:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027081297692164525 quantized to 32 (current 32)
Oct 02 12:16:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:16:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:16:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:16:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:16:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:16:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:16:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:54.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:54 compute-0 nova_compute[257802]: 2025-10-02 12:16:54.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:54 compute-0 ceph-mon[73607]: pgmap v1576: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 459 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 14 MiB/s wr, 475 op/s
Oct 02 12:16:54 compute-0 ceph-mon[73607]: osdmap e229: 3 total, 3 up, 3 in
Oct 02 12:16:54 compute-0 ceph-mon[73607]: osdmap e230: 3 total, 3 up, 3 in
Oct 02 12:16:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/325269327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 501 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 18 MiB/s wr, 447 op/s
Oct 02 12:16:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/4088404689' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:16:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/4088404689' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:16:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:56.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:16:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:56.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:16:56 compute-0 ceph-mon[73607]: pgmap v1579: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 501 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 18 MiB/s wr, 447 op/s
Oct 02 12:16:56 compute-0 nova_compute[257802]: 2025-10-02 12:16:56.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1580: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 544 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 9.5 MiB/s rd, 17 MiB/s wr, 382 op/s
Oct 02 12:16:57 compute-0 sudo[301593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:57 compute-0 sudo[301593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:57 compute-0 sudo[301593]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:57 compute-0 sudo[301624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:57 compute-0 sudo[301624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:57 compute-0 sudo[301624]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:57 compute-0 podman[301617]: 2025-10-02 12:16:57.441712997 +0000 UTC m=+0.095076000 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:16:57 compute-0 nova_compute[257802]: 2025-10-02 12:16:57.677 2 DEBUG nova.compute.manager [None req-d0de3252-298b-4dd7-9550-1f7275f97e61 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:57 compute-0 nova_compute[257802]: 2025-10-02 12:16:57.720 2 INFO nova.compute.manager [None req-d0de3252-298b-4dd7-9550-1f7275f97e61 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] instance snapshotting
Oct 02 12:16:58 compute-0 nova_compute[257802]: 2025-10-02 12:16:58.027 2 INFO nova.virt.libvirt.driver [None req-d0de3252-298b-4dd7-9550-1f7275f97e61 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Beginning live snapshot process
Oct 02 12:16:58 compute-0 sudo[301670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:58 compute-0 sudo[301670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:58 compute-0 sudo[301670]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:58 compute-0 sudo[301695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:16:58 compute-0 sudo[301695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:58 compute-0 sudo[301695]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:58.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:58 compute-0 sudo[301720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:58 compute-0 sudo[301720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:58 compute-0 sudo[301720]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:58 compute-0 nova_compute[257802]: 2025-10-02 12:16:58.183 2 DEBUG nova.virt.libvirt.imagebackend [None req-d0de3252-298b-4dd7-9550-1f7275f97e61 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] No parent info for c2d0c2bc-fe21-4689-86ae-d6728c15874c; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 12:16:58 compute-0 sudo[301776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:16:58 compute-0 sudo[301776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:16:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:58.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:58 compute-0 nova_compute[257802]: 2025-10-02 12:16:58.418 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:16:58 compute-0 nova_compute[257802]: 2025-10-02 12:16:58.442 2 DEBUG nova.storage.rbd_utils [None req-d0de3252-298b-4dd7-9550-1f7275f97e61 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] creating snapshot(8239563c95b949789091c8b50da46556) on rbd image(4ef4cd21-73d9-4ceb-8bd5-316a831e8e92_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:16:58 compute-0 sudo[301776]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Oct 02 12:16:58 compute-0 ceph-mon[73607]: pgmap v1580: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 544 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 9.5 MiB/s rd, 17 MiB/s wr, 382 op/s
Oct 02 12:16:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Oct 02 12:16:58 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Oct 02 12:16:58 compute-0 nova_compute[257802]: 2025-10-02 12:16:58.887 2 DEBUG nova.storage.rbd_utils [None req-d0de3252-298b-4dd7-9550-1f7275f97e61 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] cloning vms/4ef4cd21-73d9-4ceb-8bd5-316a831e8e92_disk@8239563c95b949789091c8b50da46556 to images/fdf855eb-5dbd-44e1-9b72-cac174cddd41 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:16:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:16:58 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:16:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:16:58 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:16:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:16:58 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:16:58 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 09a0ee4c-1e4f-4ce2-af1e-32468fc3100f does not exist
Oct 02 12:16:58 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 5c06a04e-8def-4b20-a9fc-7196d8f0a735 does not exist
Oct 02 12:16:58 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev cdfc99d2-9ce9-436d-aa9e-97aabcffd534 does not exist
Oct 02 12:16:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:16:58 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:16:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:16:58 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:16:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:16:58 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:16:59 compute-0 sudo[301885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:59 compute-0 sudo[301885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:59 compute-0 sudo[301885]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:59 compute-0 nova_compute[257802]: 2025-10-02 12:16:59.077 2 DEBUG nova.storage.rbd_utils [None req-d0de3252-298b-4dd7-9550-1f7275f97e61 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] flattening images/fdf855eb-5dbd-44e1-9b72-cac174cddd41 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 12:16:59 compute-0 sudo[301913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:16:59 compute-0 sudo[301913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:59 compute-0 sudo[301913]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:59 compute-0 sudo[301956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:59 compute-0 sudo[301956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:59 compute-0 sudo[301956]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:59 compute-0 sudo[301981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:16:59 compute-0 sudo[301981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1582: 305 pgs: 305 active+clean; 545 MiB data, 934 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 8.8 MiB/s wr, 234 op/s
Oct 02 12:16:59 compute-0 nova_compute[257802]: 2025-10-02 12:16:59.507 2 DEBUG nova.storage.rbd_utils [None req-d0de3252-298b-4dd7-9550-1f7275f97e61 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] removing snapshot(8239563c95b949789091c8b50da46556) on rbd image(4ef4cd21-73d9-4ceb-8bd5-316a831e8e92_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:16:59 compute-0 podman[302061]: 2025-10-02 12:16:59.52371049 +0000 UTC m=+0.048993446 container create 57203317fbf4d844b32c1290c69358b493fc710458ceee6feeff9f7995922f3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hoover, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:16:59 compute-0 systemd[1]: Started libpod-conmon-57203317fbf4d844b32c1290c69358b493fc710458ceee6feeff9f7995922f3e.scope.
Oct 02 12:16:59 compute-0 podman[302061]: 2025-10-02 12:16:59.498391968 +0000 UTC m=+0.023674944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:16:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:16:59 compute-0 podman[302061]: 2025-10-02 12:16:59.61271231 +0000 UTC m=+0.137995286 container init 57203317fbf4d844b32c1290c69358b493fc710458ceee6feeff9f7995922f3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Oct 02 12:16:59 compute-0 podman[302061]: 2025-10-02 12:16:59.625785811 +0000 UTC m=+0.151068777 container start 57203317fbf4d844b32c1290c69358b493fc710458ceee6feeff9f7995922f3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hoover, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 12:16:59 compute-0 podman[302061]: 2025-10-02 12:16:59.629104392 +0000 UTC m=+0.154387348 container attach 57203317fbf4d844b32c1290c69358b493fc710458ceee6feeff9f7995922f3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hoover, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:16:59 compute-0 systemd[1]: libpod-57203317fbf4d844b32c1290c69358b493fc710458ceee6feeff9f7995922f3e.scope: Deactivated successfully.
Oct 02 12:16:59 compute-0 great_hoover[302082]: 167 167
Oct 02 12:16:59 compute-0 conmon[302082]: conmon 57203317fbf4d844b32c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-57203317fbf4d844b32c1290c69358b493fc710458ceee6feeff9f7995922f3e.scope/container/memory.events
Oct 02 12:16:59 compute-0 podman[302061]: 2025-10-02 12:16:59.633118961 +0000 UTC m=+0.158401937 container died 57203317fbf4d844b32c1290c69358b493fc710458ceee6feeff9f7995922f3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:16:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-d42592388f05c5d33e8539aa755ec0aab58bf718c098769e8052d5b33c57fd16-merged.mount: Deactivated successfully.
Oct 02 12:16:59 compute-0 podman[302061]: 2025-10-02 12:16:59.674763515 +0000 UTC m=+0.200046471 container remove 57203317fbf4d844b32c1290c69358b493fc710458ceee6feeff9f7995922f3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hoover, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 12:16:59 compute-0 systemd[1]: libpod-conmon-57203317fbf4d844b32c1290c69358b493fc710458ceee6feeff9f7995922f3e.scope: Deactivated successfully.
Oct 02 12:16:59 compute-0 nova_compute[257802]: 2025-10-02 12:16:59.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:59 compute-0 ceph-mon[73607]: osdmap e231: 3 total, 3 up, 3 in
Oct 02 12:16:59 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:16:59 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:16:59 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:16:59 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:16:59 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:16:59 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:16:59 compute-0 podman[302105]: 2025-10-02 12:16:59.833894599 +0000 UTC m=+0.047114900 container create 928129e85000c0f47954503e9966d5ae52f4f4cc16e1c6db9df42671b9db088c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:16:59 compute-0 systemd[1]: Started libpod-conmon-928129e85000c0f47954503e9966d5ae52f4f4cc16e1c6db9df42671b9db088c.scope.
Oct 02 12:16:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:16:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6c767ed68b737b5c03f6b13e5c9fff96c6eceae240d347a2189aca8b58c21de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:16:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6c767ed68b737b5c03f6b13e5c9fff96c6eceae240d347a2189aca8b58c21de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:16:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6c767ed68b737b5c03f6b13e5c9fff96c6eceae240d347a2189aca8b58c21de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:16:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6c767ed68b737b5c03f6b13e5c9fff96c6eceae240d347a2189aca8b58c21de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:16:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6c767ed68b737b5c03f6b13e5c9fff96c6eceae240d347a2189aca8b58c21de/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:16:59 compute-0 podman[302105]: 2025-10-02 12:16:59.809224242 +0000 UTC m=+0.022444543 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:16:59 compute-0 podman[302105]: 2025-10-02 12:16:59.910408621 +0000 UTC m=+0.123628962 container init 928129e85000c0f47954503e9966d5ae52f4f4cc16e1c6db9df42671b9db088c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 12:16:59 compute-0 podman[302105]: 2025-10-02 12:16:59.916010948 +0000 UTC m=+0.129231249 container start 928129e85000c0f47954503e9966d5ae52f4f4cc16e1c6db9df42671b9db088c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_babbage, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:16:59 compute-0 podman[302105]: 2025-10-02 12:16:59.924260781 +0000 UTC m=+0.137481102 container attach 928129e85000c0f47954503e9966d5ae52f4f4cc16e1c6db9df42671b9db088c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 12:16:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Oct 02 12:16:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Oct 02 12:16:59 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Oct 02 12:16:59 compute-0 nova_compute[257802]: 2025-10-02 12:16:59.980 2 DEBUG nova.storage.rbd_utils [None req-d0de3252-298b-4dd7-9550-1f7275f97e61 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] creating snapshot(snap) on rbd image(fdf855eb-5dbd-44e1-9b72-cac174cddd41) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:17:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:00.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:17:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:00.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:17:00 compute-0 charming_babbage[302121]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:17:00 compute-0 charming_babbage[302121]: --> relative data size: 1.0
Oct 02 12:17:00 compute-0 charming_babbage[302121]: --> All data devices are unavailable
Oct 02 12:17:00 compute-0 systemd[1]: libpod-928129e85000c0f47954503e9966d5ae52f4f4cc16e1c6db9df42671b9db088c.scope: Deactivated successfully.
Oct 02 12:17:00 compute-0 podman[302105]: 2025-10-02 12:17:00.73740445 +0000 UTC m=+0.950624801 container died 928129e85000c0f47954503e9966d5ae52f4f4cc16e1c6db9df42671b9db088c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_babbage, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:17:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6c767ed68b737b5c03f6b13e5c9fff96c6eceae240d347a2189aca8b58c21de-merged.mount: Deactivated successfully.
Oct 02 12:17:00 compute-0 podman[302105]: 2025-10-02 12:17:00.812357123 +0000 UTC m=+1.025577424 container remove 928129e85000c0f47954503e9966d5ae52f4f4cc16e1c6db9df42671b9db088c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:17:00 compute-0 systemd[1]: libpod-conmon-928129e85000c0f47954503e9966d5ae52f4f4cc16e1c6db9df42671b9db088c.scope: Deactivated successfully.
Oct 02 12:17:00 compute-0 sudo[301981]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:00 compute-0 sudo[302168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:17:00 compute-0 sudo[302168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:00 compute-0 sudo[302168]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Oct 02 12:17:00 compute-0 ceph-mon[73607]: pgmap v1582: 305 pgs: 305 active+clean; 545 MiB data, 934 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 8.8 MiB/s wr, 234 op/s
Oct 02 12:17:00 compute-0 ceph-mon[73607]: osdmap e232: 3 total, 3 up, 3 in
Oct 02 12:17:00 compute-0 sudo[302193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:17:00 compute-0 sudo[302193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:00 compute-0 sudo[302193]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:01 compute-0 sudo[302218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:17:01 compute-0 sudo[302218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Oct 02 12:17:01 compute-0 sudo[302218]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:01 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Oct 02 12:17:01 compute-0 sudo[302243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:17:01 compute-0 sudo[302243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 596 MiB data, 963 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 9.6 MiB/s wr, 228 op/s
Oct 02 12:17:01 compute-0 podman[302308]: 2025-10-02 12:17:01.540298436 +0000 UTC m=+0.047701944 container create d32c0bdcf84b01a6d60cf832ac0866e8b9d359ed986a321d3262c8d0f8b61029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:17:01 compute-0 systemd[1]: Started libpod-conmon-d32c0bdcf84b01a6d60cf832ac0866e8b9d359ed986a321d3262c8d0f8b61029.scope.
Oct 02 12:17:01 compute-0 podman[302308]: 2025-10-02 12:17:01.518432578 +0000 UTC m=+0.025836137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:17:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:17:01 compute-0 podman[302308]: 2025-10-02 12:17:01.635338593 +0000 UTC m=+0.142742141 container init d32c0bdcf84b01a6d60cf832ac0866e8b9d359ed986a321d3262c8d0f8b61029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 12:17:01 compute-0 podman[302308]: 2025-10-02 12:17:01.64410889 +0000 UTC m=+0.151512398 container start d32c0bdcf84b01a6d60cf832ac0866e8b9d359ed986a321d3262c8d0f8b61029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_dhawan, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:17:01 compute-0 podman[302308]: 2025-10-02 12:17:01.647915332 +0000 UTC m=+0.155318920 container attach d32c0bdcf84b01a6d60cf832ac0866e8b9d359ed986a321d3262c8d0f8b61029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 12:17:01 compute-0 serene_dhawan[302324]: 167 167
Oct 02 12:17:01 compute-0 systemd[1]: libpod-d32c0bdcf84b01a6d60cf832ac0866e8b9d359ed986a321d3262c8d0f8b61029.scope: Deactivated successfully.
Oct 02 12:17:01 compute-0 podman[302308]: 2025-10-02 12:17:01.651292856 +0000 UTC m=+0.158696374 container died d32c0bdcf84b01a6d60cf832ac0866e8b9d359ed986a321d3262c8d0f8b61029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_dhawan, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:17:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad1dbfd75198b878278c5eb1c429f9a42dc1d0aeeaea0cafb5c15d8bb67cd91a-merged.mount: Deactivated successfully.
Oct 02 12:17:01 compute-0 podman[302308]: 2025-10-02 12:17:01.695543844 +0000 UTC m=+0.202947352 container remove d32c0bdcf84b01a6d60cf832ac0866e8b9d359ed986a321d3262c8d0f8b61029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 12:17:01 compute-0 systemd[1]: libpod-conmon-d32c0bdcf84b01a6d60cf832ac0866e8b9d359ed986a321d3262c8d0f8b61029.scope: Deactivated successfully.
Oct 02 12:17:01 compute-0 nova_compute[257802]: 2025-10-02 12:17:01.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:01 compute-0 podman[302348]: 2025-10-02 12:17:01.889680809 +0000 UTC m=+0.051405076 container create 8670a4b523955bec65b95a78310cef50740e45a02008626331fdb93f402966cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_feynman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 12:17:01 compute-0 systemd[1]: Started libpod-conmon-8670a4b523955bec65b95a78310cef50740e45a02008626331fdb93f402966cc.scope.
Oct 02 12:17:01 compute-0 podman[302348]: 2025-10-02 12:17:01.868243412 +0000 UTC m=+0.029967709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:17:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:17:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94da2af5a25042e2946341361f329db07931e93422de39712e40273b0256bb01/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:17:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94da2af5a25042e2946341361f329db07931e93422de39712e40273b0256bb01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:17:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94da2af5a25042e2946341361f329db07931e93422de39712e40273b0256bb01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:17:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94da2af5a25042e2946341361f329db07931e93422de39712e40273b0256bb01/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:17:01 compute-0 podman[302348]: 2025-10-02 12:17:01.98570099 +0000 UTC m=+0.147425287 container init 8670a4b523955bec65b95a78310cef50740e45a02008626331fdb93f402966cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_feynman, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:17:01 compute-0 podman[302348]: 2025-10-02 12:17:01.993244036 +0000 UTC m=+0.154968303 container start 8670a4b523955bec65b95a78310cef50740e45a02008626331fdb93f402966cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_feynman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 12:17:01 compute-0 podman[302348]: 2025-10-02 12:17:01.996463035 +0000 UTC m=+0.158187322 container attach 8670a4b523955bec65b95a78310cef50740e45a02008626331fdb93f402966cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_feynman, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:17:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:17:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:02.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:17:02 compute-0 ceph-mon[73607]: osdmap e233: 3 total, 3 up, 3 in
Oct 02 12:17:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:02.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:02 compute-0 funny_feynman[302364]: {
Oct 02 12:17:02 compute-0 funny_feynman[302364]:     "1": [
Oct 02 12:17:02 compute-0 funny_feynman[302364]:         {
Oct 02 12:17:02 compute-0 funny_feynman[302364]:             "devices": [
Oct 02 12:17:02 compute-0 funny_feynman[302364]:                 "/dev/loop3"
Oct 02 12:17:02 compute-0 funny_feynman[302364]:             ],
Oct 02 12:17:02 compute-0 funny_feynman[302364]:             "lv_name": "ceph_lv0",
Oct 02 12:17:02 compute-0 funny_feynman[302364]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:17:02 compute-0 funny_feynman[302364]:             "lv_size": "7511998464",
Oct 02 12:17:02 compute-0 funny_feynman[302364]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:17:02 compute-0 funny_feynman[302364]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:17:02 compute-0 funny_feynman[302364]:             "name": "ceph_lv0",
Oct 02 12:17:02 compute-0 funny_feynman[302364]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:17:02 compute-0 funny_feynman[302364]:             "tags": {
Oct 02 12:17:02 compute-0 funny_feynman[302364]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:17:02 compute-0 funny_feynman[302364]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:17:02 compute-0 funny_feynman[302364]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:17:02 compute-0 funny_feynman[302364]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:17:02 compute-0 funny_feynman[302364]:                 "ceph.cluster_name": "ceph",
Oct 02 12:17:02 compute-0 funny_feynman[302364]:                 "ceph.crush_device_class": "",
Oct 02 12:17:02 compute-0 funny_feynman[302364]:                 "ceph.encrypted": "0",
Oct 02 12:17:02 compute-0 funny_feynman[302364]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:17:02 compute-0 funny_feynman[302364]:                 "ceph.osd_id": "1",
Oct 02 12:17:02 compute-0 funny_feynman[302364]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:17:02 compute-0 funny_feynman[302364]:                 "ceph.type": "block",
Oct 02 12:17:02 compute-0 funny_feynman[302364]:                 "ceph.vdo": "0"
Oct 02 12:17:02 compute-0 funny_feynman[302364]:             },
Oct 02 12:17:02 compute-0 funny_feynman[302364]:             "type": "block",
Oct 02 12:17:02 compute-0 funny_feynman[302364]:             "vg_name": "ceph_vg0"
Oct 02 12:17:02 compute-0 funny_feynman[302364]:         }
Oct 02 12:17:02 compute-0 funny_feynman[302364]:     ]
Oct 02 12:17:02 compute-0 funny_feynman[302364]: }
Oct 02 12:17:02 compute-0 systemd[1]: libpod-8670a4b523955bec65b95a78310cef50740e45a02008626331fdb93f402966cc.scope: Deactivated successfully.
Oct 02 12:17:02 compute-0 podman[302348]: 2025-10-02 12:17:02.821758632 +0000 UTC m=+0.983482919 container died 8670a4b523955bec65b95a78310cef50740e45a02008626331fdb93f402966cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_feynman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 12:17:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-94da2af5a25042e2946341361f329db07931e93422de39712e40273b0256bb01-merged.mount: Deactivated successfully.
Oct 02 12:17:02 compute-0 podman[302348]: 2025-10-02 12:17:02.887376985 +0000 UTC m=+1.049101252 container remove 8670a4b523955bec65b95a78310cef50740e45a02008626331fdb93f402966cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_feynman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 12:17:02 compute-0 systemd[1]: libpod-conmon-8670a4b523955bec65b95a78310cef50740e45a02008626331fdb93f402966cc.scope: Deactivated successfully.
Oct 02 12:17:02 compute-0 sudo[302243]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:02 compute-0 sudo[302385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:17:02 compute-0 sudo[302385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:02 compute-0 sudo[302385]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:03 compute-0 sudo[302410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:17:03 compute-0 sudo[302410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:03 compute-0 sudo[302410]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.073 2 INFO nova.virt.libvirt.driver [None req-d0de3252-298b-4dd7-9550-1f7275f97e61 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Snapshot image upload complete
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.073 2 INFO nova.compute.manager [None req-d0de3252-298b-4dd7-9550-1f7275f97e61 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Took 5.35 seconds to snapshot the instance on the hypervisor.
Oct 02 12:17:03 compute-0 sudo[302435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:17:03 compute-0 sudo[302435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:03 compute-0 sudo[302435]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.165 2 DEBUG nova.network.neutron [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Updating instance_info_cache with network_info: [{"id": "1b5f1c6a-6b73-4b1d-8560-6a38d7e59732", "address": "fa:16:3e:c2:83:d4", "network": {"id": "18c9b866-0d06-4d2c-a073-d724f233a261", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-215265968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b5f1c6a-6b", "ovs_interfaceid": "1b5f1c6a-6b73-4b1d-8560-6a38d7e59732", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "address": "fa:16:3e:86:0a:06", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd56785b1-ae", "ovs_interfaceid": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "address": "fa:16:3e:3b:4c:02", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a1bf892-fd", "ovs_interfaceid": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "c23e360e-b27a-4c79-a3ce-5c8850ccc846", "address": "fa:16:3e:1d:5e:56", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.222", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc23e360e-b2", "ovs_interfaceid": "c23e360e-b27a-4c79-a3ce-5c8850ccc846", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "address": "fa:16:3e:11:f2:18", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeea85d66-40", "ovs_interfaceid": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "address": "fa:16:3e:ee:27:7c", "network": {"id": "8f6e5e22-d751-48cd-82a1-be33a11e92d9", "bridge": "br-int", "label": "tempest-device-tagging-net2-1945824075", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7eba4256-8a", "ovs_interfaceid": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "137dbaa1-1ef7-4aea-bc14-9f99b461690c", "address": "fa:16:3e:4f:43:a5", "network": {"id": "8f6e5e22-d751-48cd-82a1-be33a11e92d9", "bridge": "br-int", "label": "tempest-device-tagging-net2-1945824075", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap137dbaa1-1e", "ovs_interfaceid": "137dbaa1-1ef7-4aea-bc14-9f99b461690c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:17:03 compute-0 sudo[302460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:17:03 compute-0 ceph-mon[73607]: pgmap v1585: 305 pgs: 305 active+clean; 596 MiB data, 963 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 9.6 MiB/s wr, 228 op/s
Oct 02 12:17:03 compute-0 sudo[302460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.199 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Releasing lock "refresh_cache-d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.200 2 DEBUG nova.compute.manager [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Instance network_info: |[{"id": "1b5f1c6a-6b73-4b1d-8560-6a38d7e59732", "address": "fa:16:3e:c2:83:d4", "network": {"id": "18c9b866-0d06-4d2c-a073-d724f233a261", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-215265968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b5f1c6a-6b", "ovs_interfaceid": "1b5f1c6a-6b73-4b1d-8560-6a38d7e59732", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "address": "fa:16:3e:86:0a:06", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd56785b1-ae", "ovs_interfaceid": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "address": "fa:16:3e:3b:4c:02", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a1bf892-fd", "ovs_interfaceid": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "c23e360e-b27a-4c79-a3ce-5c8850ccc846", "address": "fa:16:3e:1d:5e:56", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.222", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc23e360e-b2", "ovs_interfaceid": "c23e360e-b27a-4c79-a3ce-5c8850ccc846", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "address": "fa:16:3e:11:f2:18", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeea85d66-40", "ovs_interfaceid": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "address": "fa:16:3e:ee:27:7c", "network": {"id": "8f6e5e22-d751-48cd-82a1-be33a11e92d9", "bridge": "br-int", "label": "tempest-device-tagging-net2-1945824075", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7eba4256-8a", "ovs_interfaceid": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "137dbaa1-1ef7-4aea-bc14-9f99b461690c", "address": "fa:16:3e:4f:43:a5", "network": {"id": "8f6e5e22-d751-48cd-82a1-be33a11e92d9", "bridge": "br-int", "label": "tempest-device-tagging-net2-1945824075", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap137dbaa1-1e", "ovs_interfaceid": "137dbaa1-1ef7-4aea-bc14-9f99b461690c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.200 2 DEBUG oslo_concurrency.lockutils [req-0ca3a373-8c95-4cba-a4e8-ffd0f111e73d req-1f0b83e7-4199-4249-9060-eb41dbe89f99 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.200 2 DEBUG nova.network.neutron [req-0ca3a373-8c95-4cba-a4e8-ffd0f111e73d req-1f0b83e7-4199-4249-9060-eb41dbe89f99 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Refreshing network info cache for port 7eba4256-8a01-49a2-aec1-98d48eda4d3f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.207 2 DEBUG nova.virt.libvirt.driver [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Start _get_guest_xml network_info=[{"id": "1b5f1c6a-6b73-4b1d-8560-6a38d7e59732", "address": "fa:16:3e:c2:83:d4", "network": {"id": "18c9b866-0d06-4d2c-a073-d724f233a261", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-215265968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b5f1c6a-6b", "ovs_interfaceid": "1b5f1c6a-6b73-4b1d-8560-6a38d7e59732", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "address": "fa:16:3e:86:0a:06", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd56785b1-ae", "ovs_interfaceid": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "address": "fa:16:3e:3b:4c:02", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a1bf892-fd", "ovs_interfaceid": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "c23e360e-b27a-4c79-a3ce-5c8850ccc846", "address": "fa:16:3e:1d:5e:56", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.222", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc23e360e-b2", "ovs_interfaceid": "c23e360e-b27a-4c79-a3ce-5c8850ccc846", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "address": "fa:16:3e:11:f2:18", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeea85d66-40", "ovs_interfaceid": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "address": "fa:16:3e:ee:27:7c", "network": {"id": "8f6e5e22-d751-48cd-82a1-be33a11e92d9", "bridge": "br-int", "label": "tempest-device-tagging-net2-1945824075", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7eba4256-8a", "ovs_interfaceid": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "137dbaa1-1ef7-4aea-bc14-9f99b461690c", "address": "fa:16:3e:4f:43:a5", "network": {"id": "8f6e5e22-d751-48cd-82a1-be33a11e92d9", "bridge": "br-int", "label": "tempest-device-tagging-net2-1945824075", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap137dbaa1-1e", "ovs_interfaceid": "137dbaa1-1ef7-4aea-bc14-9f99b461690c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk', 'boot_index': '2'}, '/dev/vdc': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk', 'boot_index': '3'}, 
'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_
Oct 02 12:17:03 compute-0 nova_compute[257802]: ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'guest_format': None, 'attachment_id': '66f30441-268c-44a2-b67b-424f14a0be77', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-7c616c2f-d398-4818-acf9-c1deefd666a5', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '7c616c2f-d398-4818-acf9-c1deefd666a5', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'd7e14705-7aeb-440f-8e7e-926cc5b5ab6f', 'attached_at': '', 'detached_at': '', 'volume_id': '7c616c2f-d398-4818-acf9-c1deefd666a5', 'serial': '7c616c2f-d398-4818-acf9-c1deefd666a5'}, 'device_type': 'disk', 'volume_type': None}, {'boot_index': 1, 'guest_format': None, 'attachment_id': '291b04fe-1a16-460c-ae11-af9f070c1703', 'mount_device': '/dev/vdb', 'disk_bus': 'virtio', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-8c49b0fe-d42a-4dc2-8485-1c114233c6fc', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '8c49b0fe-d42a-4dc2-8485-1c114233c6fc', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': 
False}, 'status': 'reserved', 'instance': 'd7e14705-7aeb-440f-8e7e-926cc5b5ab6f', 'attached_at': '', 'detached_at': '', 'volume_id': '8c49b0fe-d42a-4dc2-8485-1c114233c6fc', 'serial': '8c49b0fe-d42a-4dc2-8485-1c114233c6fc'}, 'device_type': 'disk', 'volume_type': None}, {'boot_index': 2, 'guest_format': None, 'attachment_id': '1c5135d3-fcf3-4e66-8457-e1dbd53ff6a6', 'mount_device': '/dev/vdc', 'disk_bus': 'virtio', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-5566db4c-b0ba-4d2b-84a6-1fa1272fec0f', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '5566db4c-b0ba-4d2b-84a6-1fa1272fec0f', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'd7e14705-7aeb-440f-8e7e-926cc5b5ab6f', 'attached_at': '', 'detached_at': '', 'volume_id': '5566db4c-b0ba-4d2b-84a6-1fa1272fec0f', 'serial': '5566db4c-b0ba-4d2b-84a6-1fa1272fec0f'}, 'device_type': 'disk', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.214 2 WARNING nova.virt.libvirt.driver [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.229 2 DEBUG nova.virt.libvirt.host [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.231 2 DEBUG nova.virt.libvirt.host [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.237 2 DEBUG nova.virt.libvirt.host [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.238 2 DEBUG nova.virt.libvirt.host [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.239 2 DEBUG nova.virt.libvirt.driver [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.239 2 DEBUG nova.virt.hardware [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.239 2 DEBUG nova.virt.hardware [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.239 2 DEBUG nova.virt.hardware [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.239 2 DEBUG nova.virt.hardware [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.240 2 DEBUG nova.virt.hardware [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.240 2 DEBUG nova.virt.hardware [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.240 2 DEBUG nova.virt.hardware [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.240 2 DEBUG nova.virt.hardware [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.240 2 DEBUG nova.virt.hardware [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.240 2 DEBUG nova.virt.hardware [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.240 2 DEBUG nova.virt.hardware [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.269 2 DEBUG nova.storage.rbd_utils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] rbd image d7e14705-7aeb-440f-8e7e-926cc5b5ab6f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.274 2 DEBUG oslo_concurrency.processutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:17:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1586: 305 pgs: 305 active+clean; 630 MiB data, 984 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 9.2 MiB/s wr, 367 op/s
Oct 02 12:17:03 compute-0 rsyslogd[1007]: message too long (8192) with configured size 8096, begin of message is: 2025-10-02 12:17:03.207 2 DEBUG nova.virt.libvirt.driver [None req-e73feff1-f090 [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct 02 12:17:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Oct 02 12:17:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Oct 02 12:17:03 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Oct 02 12:17:03 compute-0 podman[302561]: 2025-10-02 12:17:03.53528914 +0000 UTC m=+0.053240720 container create 4e22d5d9a7fa873d10c5a516ee6e37a148c4f196033ca3dcc39ca7be46474857 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:17:03 compute-0 systemd[1]: Started libpod-conmon-4e22d5d9a7fa873d10c5a516ee6e37a148c4f196033ca3dcc39ca7be46474857.scope.
Oct 02 12:17:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:17:03 compute-0 podman[302561]: 2025-10-02 12:17:03.51206664 +0000 UTC m=+0.030018280 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:17:03 compute-0 podman[302561]: 2025-10-02 12:17:03.622227378 +0000 UTC m=+0.140178958 container init 4e22d5d9a7fa873d10c5a516ee6e37a148c4f196033ca3dcc39ca7be46474857 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 12:17:03 compute-0 podman[302561]: 2025-10-02 12:17:03.635247329 +0000 UTC m=+0.153198879 container start 4e22d5d9a7fa873d10c5a516ee6e37a148c4f196033ca3dcc39ca7be46474857 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:17:03 compute-0 podman[302561]: 2025-10-02 12:17:03.638612282 +0000 UTC m=+0.156563832 container attach 4e22d5d9a7fa873d10c5a516ee6e37a148c4f196033ca3dcc39ca7be46474857 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dhawan, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:17:03 compute-0 nostalgic_dhawan[302577]: 167 167
Oct 02 12:17:03 compute-0 systemd[1]: libpod-4e22d5d9a7fa873d10c5a516ee6e37a148c4f196033ca3dcc39ca7be46474857.scope: Deactivated successfully.
Oct 02 12:17:03 compute-0 podman[302561]: 2025-10-02 12:17:03.642670362 +0000 UTC m=+0.160621912 container died 4e22d5d9a7fa873d10c5a516ee6e37a148c4f196033ca3dcc39ca7be46474857 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:17:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-3dca0af4b842c5f93b99d2548de5106df89a48d429f9fb9220d00ab04acf5503-merged.mount: Deactivated successfully.
Oct 02 12:17:03 compute-0 podman[302561]: 2025-10-02 12:17:03.689511203 +0000 UTC m=+0.207462733 container remove 4e22d5d9a7fa873d10c5a516ee6e37a148c4f196033ca3dcc39ca7be46474857 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dhawan, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:17:03 compute-0 systemd[1]: libpod-conmon-4e22d5d9a7fa873d10c5a516ee6e37a148c4f196033ca3dcc39ca7be46474857.scope: Deactivated successfully.
Oct 02 12:17:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:17:03 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/284290376' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Oct 02 12:17:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.745 2 DEBUG oslo_concurrency.processutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:17:03 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.848 2 DEBUG nova.virt.libvirt.vif [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1586743939',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1586743939',id=73,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCWM4YUMOoSm6GBNS5HDC9AWc4kDdtYuM7ALyCkO4vM9Maq4z2BEgsAZS104YCyjVd+8u5++0So1UucHRWrrFc68q06aupZKWB7gDCQ0BUAh1k4LbTy9YWCguazjF74qNw==',key_name='tempest-keypair-1315632604',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9d3eca266284ae9950c491e566b2523',ramdisk_id='',reservation_id='r-tm18varp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_
min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1070421781',owner_user_name='tempest-TaggedBootDevicesTest-1070421781-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:16:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d4b073f3365d481cabadfd39389c66ba',uuid=d7e14705-7aeb-440f-8e7e-926cc5b5ab6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1b5f1c6a-6b73-4b1d-8560-6a38d7e59732", "address": "fa:16:3e:c2:83:d4", "network": {"id": "18c9b866-0d06-4d2c-a073-d724f233a261", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-215265968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b5f1c6a-6b", "ovs_interfaceid": "1b5f1c6a-6b73-4b1d-8560-6a38d7e59732", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.849 2 DEBUG nova.network.os_vif_util [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converting VIF {"id": "1b5f1c6a-6b73-4b1d-8560-6a38d7e59732", "address": "fa:16:3e:c2:83:d4", "network": {"id": "18c9b866-0d06-4d2c-a073-d724f233a261", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-215265968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b5f1c6a-6b", "ovs_interfaceid": "1b5f1c6a-6b73-4b1d-8560-6a38d7e59732", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.850 2 DEBUG nova.network.os_vif_util [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:83:d4,bridge_name='br-int',has_traffic_filtering=True,id=1b5f1c6a-6b73-4b1d-8560-6a38d7e59732,network=Network(18c9b866-0d06-4d2c-a073-d724f233a261),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b5f1c6a-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.851 2 DEBUG nova.virt.libvirt.vif [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1586743939',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1586743939',id=73,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCWM4YUMOoSm6GBNS5HDC9AWc4kDdtYuM7ALyCkO4vM9Maq4z2BEgsAZS104YCyjVd+8u5++0So1UucHRWrrFc68q06aupZKWB7gDCQ0BUAh1k4LbTy9YWCguazjF74qNw==',key_name='tempest-keypair-1315632604',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9d3eca266284ae9950c491e566b2523',ramdisk_id='',reservation_id='r-tm18varp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_
min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1070421781',owner_user_name='tempest-TaggedBootDevicesTest-1070421781-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:16:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d4b073f3365d481cabadfd39389c66ba',uuid=d7e14705-7aeb-440f-8e7e-926cc5b5ab6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "address": "fa:16:3e:86:0a:06", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd56785b1-ae", "ovs_interfaceid": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.851 2 DEBUG nova.network.os_vif_util [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converting VIF {"id": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "address": "fa:16:3e:86:0a:06", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd56785b1-ae", "ovs_interfaceid": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.851 2 DEBUG nova.network.os_vif_util [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:0a:06,bridge_name='br-int',has_traffic_filtering=True,id=d56785b1-aeee-49be-af2d-71e8fc0a6c9b,network=Network(12d2f684-17ee-45ff-8e0b-9e61463b4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd56785b1-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.852 2 DEBUG nova.virt.libvirt.vif [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1586743939',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1586743939',id=73,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCWM4YUMOoSm6GBNS5HDC9AWc4kDdtYuM7ALyCkO4vM9Maq4z2BEgsAZS104YCyjVd+8u5++0So1UucHRWrrFc68q06aupZKWB7gDCQ0BUAh1k4LbTy9YWCguazjF74qNw==',key_name='tempest-keypair-1315632604',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9d3eca266284ae9950c491e566b2523',ramdisk_id='',reservation_id='r-tm18varp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_
min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1070421781',owner_user_name='tempest-TaggedBootDevicesTest-1070421781-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:16:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d4b073f3365d481cabadfd39389c66ba',uuid=d7e14705-7aeb-440f-8e7e-926cc5b5ab6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "address": "fa:16:3e:3b:4c:02", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a1bf892-fd", "ovs_interfaceid": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.852 2 DEBUG nova.network.os_vif_util [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converting VIF {"id": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "address": "fa:16:3e:3b:4c:02", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a1bf892-fd", "ovs_interfaceid": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.853 2 DEBUG nova.network.os_vif_util [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:4c:02,bridge_name='br-int',has_traffic_filtering=True,id=4a1bf892-fd3f-40df-95f2-779ffc41ade1,network=Network(12d2f684-17ee-45ff-8e0b-9e61463b4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4a1bf892-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.854 2 DEBUG nova.virt.libvirt.vif [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1586743939',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1586743939',id=73,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCWM4YUMOoSm6GBNS5HDC9AWc4kDdtYuM7ALyCkO4vM9Maq4z2BEgsAZS104YCyjVd+8u5++0So1UucHRWrrFc68q06aupZKWB7gDCQ0BUAh1k4LbTy9YWCguazjF74qNw==',key_name='tempest-keypair-1315632604',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9d3eca266284ae9950c491e566b2523',ramdisk_id='',reservation_id='r-tm18varp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_
min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1070421781',owner_user_name='tempest-TaggedBootDevicesTest-1070421781-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:16:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d4b073f3365d481cabadfd39389c66ba',uuid=d7e14705-7aeb-440f-8e7e-926cc5b5ab6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c23e360e-b27a-4c79-a3ce-5c8850ccc846", "address": "fa:16:3e:1d:5e:56", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.222", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc23e360e-b2", "ovs_interfaceid": "c23e360e-b27a-4c79-a3ce-5c8850ccc846", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.854 2 DEBUG nova.network.os_vif_util [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converting VIF {"id": "c23e360e-b27a-4c79-a3ce-5c8850ccc846", "address": "fa:16:3e:1d:5e:56", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.222", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc23e360e-b2", "ovs_interfaceid": "c23e360e-b27a-4c79-a3ce-5c8850ccc846", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.854 2 DEBUG nova.network.os_vif_util [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:5e:56,bridge_name='br-int',has_traffic_filtering=True,id=c23e360e-b27a-4c79-a3ce-5c8850ccc846,network=Network(12d2f684-17ee-45ff-8e0b-9e61463b4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc23e360e-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.855 2 DEBUG nova.virt.libvirt.vif [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1586743939',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1586743939',id=73,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCWM4YUMOoSm6GBNS5HDC9AWc4kDdtYuM7ALyCkO4vM9Maq4z2BEgsAZS104YCyjVd+8u5++0So1UucHRWrrFc68q06aupZKWB7gDCQ0BUAh1k4LbTy9YWCguazjF74qNw==',key_name='tempest-keypair-1315632604',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9d3eca266284ae9950c491e566b2523',ramdisk_id='',reservation_id='r-tm18varp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_
min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1070421781',owner_user_name='tempest-TaggedBootDevicesTest-1070421781-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:16:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d4b073f3365d481cabadfd39389c66ba',uuid=d7e14705-7aeb-440f-8e7e-926cc5b5ab6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "address": "fa:16:3e:11:f2:18", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeea85d66-40", "ovs_interfaceid": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.855 2 DEBUG nova.network.os_vif_util [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converting VIF {"id": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "address": "fa:16:3e:11:f2:18", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeea85d66-40", "ovs_interfaceid": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.856 2 DEBUG nova.network.os_vif_util [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:f2:18,bridge_name='br-int',has_traffic_filtering=True,id=eea85d66-4089-4708-9cb0-a7d2ab18e6ec,network=Network(12d2f684-17ee-45ff-8e0b-9e61463b4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeea85d66-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.856 2 DEBUG nova.virt.libvirt.vif [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1586743939',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1586743939',id=73,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCWM4YUMOoSm6GBNS5HDC9AWc4kDdtYuM7ALyCkO4vM9Maq4z2BEgsAZS104YCyjVd+8u5++0So1UucHRWrrFc68q06aupZKWB7gDCQ0BUAh1k4LbTy9YWCguazjF74qNw==',key_name='tempest-keypair-1315632604',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9d3eca266284ae9950c491e566b2523',ramdisk_id='',reservation_id='r-tm18varp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_
min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1070421781',owner_user_name='tempest-TaggedBootDevicesTest-1070421781-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:16:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d4b073f3365d481cabadfd39389c66ba',uuid=d7e14705-7aeb-440f-8e7e-926cc5b5ab6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "address": "fa:16:3e:ee:27:7c", "network": {"id": "8f6e5e22-d751-48cd-82a1-be33a11e92d9", "bridge": "br-int", "label": "tempest-device-tagging-net2-1945824075", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7eba4256-8a", "ovs_interfaceid": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.856 2 DEBUG nova.network.os_vif_util [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converting VIF {"id": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "address": "fa:16:3e:ee:27:7c", "network": {"id": "8f6e5e22-d751-48cd-82a1-be33a11e92d9", "bridge": "br-int", "label": "tempest-device-tagging-net2-1945824075", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7eba4256-8a", "ovs_interfaceid": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.857 2 DEBUG nova.network.os_vif_util [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:27:7c,bridge_name='br-int',has_traffic_filtering=True,id=7eba4256-8a01-49a2-aec1-98d48eda4d3f,network=Network(8f6e5e22-d751-48cd-82a1-be33a11e92d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7eba4256-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.857 2 DEBUG nova.virt.libvirt.vif [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1586743939',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1586743939',id=73,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCWM4YUMOoSm6GBNS5HDC9AWc4kDdtYuM7ALyCkO4vM9Maq4z2BEgsAZS104YCyjVd+8u5++0So1UucHRWrrFc68q06aupZKWB7gDCQ0BUAh1k4LbTy9YWCguazjF74qNw==',key_name='tempest-keypair-1315632604',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9d3eca266284ae9950c491e566b2523',ramdisk_id='',reservation_id='r-tm18varp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_
min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1070421781',owner_user_name='tempest-TaggedBootDevicesTest-1070421781-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:16:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d4b073f3365d481cabadfd39389c66ba',uuid=d7e14705-7aeb-440f-8e7e-926cc5b5ab6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "137dbaa1-1ef7-4aea-bc14-9f99b461690c", "address": "fa:16:3e:4f:43:a5", "network": {"id": "8f6e5e22-d751-48cd-82a1-be33a11e92d9", "bridge": "br-int", "label": "tempest-device-tagging-net2-1945824075", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap137dbaa1-1e", "ovs_interfaceid": "137dbaa1-1ef7-4aea-bc14-9f99b461690c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.858 2 DEBUG nova.network.os_vif_util [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converting VIF {"id": "137dbaa1-1ef7-4aea-bc14-9f99b461690c", "address": "fa:16:3e:4f:43:a5", "network": {"id": "8f6e5e22-d751-48cd-82a1-be33a11e92d9", "bridge": "br-int", "label": "tempest-device-tagging-net2-1945824075", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap137dbaa1-1e", "ovs_interfaceid": "137dbaa1-1ef7-4aea-bc14-9f99b461690c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.858 2 DEBUG nova.network.os_vif_util [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:43:a5,bridge_name='br-int',has_traffic_filtering=True,id=137dbaa1-1ef7-4aea-bc14-9f99b461690c,network=Network(8f6e5e22-d751-48cd-82a1-be33a11e92d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap137dbaa1-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.859 2 DEBUG nova.objects.instance [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Lazy-loading 'pci_devices' on Instance uuid d7e14705-7aeb-440f-8e7e-926cc5b5ab6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.878 2 DEBUG nova.virt.libvirt.driver [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:17:03 compute-0 nova_compute[257802]:   <uuid>d7e14705-7aeb-440f-8e7e-926cc5b5ab6f</uuid>
Oct 02 12:17:03 compute-0 nova_compute[257802]:   <name>instance-00000049</name>
Oct 02 12:17:03 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:17:03 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:17:03 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <nova:name>tempest-device-tagging-server-1586743939</nova:name>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:17:03</nova:creationTime>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:17:03 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:17:03 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:17:03 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:17:03 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:17:03 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:17:03 compute-0 nova_compute[257802]:         <nova:user uuid="d4b073f3365d481cabadfd39389c66ba">tempest-TaggedBootDevicesTest-1070421781-project-member</nova:user>
Oct 02 12:17:03 compute-0 nova_compute[257802]:         <nova:project uuid="a9d3eca266284ae9950c491e566b2523">tempest-TaggedBootDevicesTest-1070421781</nova:project>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:17:03 compute-0 nova_compute[257802]:         <nova:port uuid="1b5f1c6a-6b73-4b1d-8560-6a38d7e59732">
Oct 02 12:17:03 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:17:03 compute-0 nova_compute[257802]:         <nova:port uuid="d56785b1-aeee-49be-af2d-71e8fc0a6c9b">
Oct 02 12:17:03 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.1.1.24" ipVersion="4"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:17:03 compute-0 nova_compute[257802]:         <nova:port uuid="4a1bf892-fd3f-40df-95f2-779ffc41ade1">
Oct 02 12:17:03 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.1.1.128" ipVersion="4"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:17:03 compute-0 nova_compute[257802]:         <nova:port uuid="c23e360e-b27a-4c79-a3ce-5c8850ccc846">
Oct 02 12:17:03 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.1.1.222" ipVersion="4"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:17:03 compute-0 nova_compute[257802]:         <nova:port uuid="eea85d66-4089-4708-9cb0-a7d2ab18e6ec">
Oct 02 12:17:03 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.1.1.9" ipVersion="4"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:17:03 compute-0 nova_compute[257802]:         <nova:port uuid="7eba4256-8a01-49a2-aec1-98d48eda4d3f">
Oct 02 12:17:03 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.2.2.100" ipVersion="4"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:17:03 compute-0 nova_compute[257802]:         <nova:port uuid="137dbaa1-1ef7-4aea-bc14-9f99b461690c">
Oct 02 12:17:03 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.2.2.200" ipVersion="4"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:17:03 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:17:03 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <system>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <entry name="serial">d7e14705-7aeb-440f-8e7e-926cc5b5ab6f</entry>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <entry name="uuid">d7e14705-7aeb-440f-8e7e-926cc5b5ab6f</entry>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     </system>
Oct 02 12:17:03 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:17:03 compute-0 nova_compute[257802]:   <os>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:   </os>
Oct 02 12:17:03 compute-0 nova_compute[257802]:   <features>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:   </features>
Oct 02 12:17:03 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:17:03 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:17:03 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/d7e14705-7aeb-440f-8e7e-926cc5b5ab6f_disk.config">
Oct 02 12:17:03 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       </source>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:17:03 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <source protocol="rbd" name="volumes/volume-7c616c2f-d398-4818-acf9-c1deefd666a5">
Oct 02 12:17:03 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       </source>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:17:03 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <serial>7c616c2f-d398-4818-acf9-c1deefd666a5</serial>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <source protocol="rbd" name="volumes/volume-8c49b0fe-d42a-4dc2-8485-1c114233c6fc">
Oct 02 12:17:03 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       </source>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:17:03 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <target dev="vdb" bus="virtio"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <serial>8c49b0fe-d42a-4dc2-8485-1c114233c6fc</serial>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <source protocol="rbd" name="volumes/volume-5566db4c-b0ba-4d2b-84a6-1fa1272fec0f">
Oct 02 12:17:03 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       </source>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:17:03 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <target dev="vdc" bus="virtio"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <serial>5566db4c-b0ba-4d2b-84a6-1fa1272fec0f</serial>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:c2:83:d4"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <target dev="tap1b5f1c6a-6b"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:86:0a:06"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <target dev="tapd56785b1-ae"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:3b:4c:02"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <target dev="tap4a1bf892-fd"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:1d:5e:56"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <target dev="tapc23e360e-b2"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:11:f2:18"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <target dev="tapeea85d66-40"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:ee:27:7c"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <target dev="tap7eba4256-8a"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:4f:43:a5"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <target dev="tap137dbaa1-1e"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/d7e14705-7aeb-440f-8e7e-926cc5b5ab6f/console.log" append="off"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <video>
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     </video>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:17:03 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:17:03 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:17:03 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:17:03 compute-0 nova_compute[257802]: </domain>
Oct 02 12:17:03 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.879 2 DEBUG nova.compute.manager [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Preparing to wait for external event network-vif-plugged-1b5f1c6a-6b73-4b1d-8560-6a38d7e59732 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.880 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.880 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.880 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.880 2 DEBUG nova.compute.manager [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Preparing to wait for external event network-vif-plugged-d56785b1-aeee-49be-af2d-71e8fc0a6c9b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.881 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.881 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.881 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.881 2 DEBUG nova.compute.manager [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Preparing to wait for external event network-vif-plugged-4a1bf892-fd3f-40df-95f2-779ffc41ade1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.881 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.881 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.882 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.882 2 DEBUG nova.compute.manager [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Preparing to wait for external event network-vif-plugged-c23e360e-b27a-4c79-a3ce-5c8850ccc846 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.882 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.882 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.882 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.882 2 DEBUG nova.compute.manager [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Preparing to wait for external event network-vif-plugged-eea85d66-4089-4708-9cb0-a7d2ab18e6ec prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.882 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.883 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.883 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.883 2 DEBUG nova.compute.manager [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Preparing to wait for external event network-vif-plugged-7eba4256-8a01-49a2-aec1-98d48eda4d3f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.883 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.883 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.883 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.883 2 DEBUG nova.compute.manager [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Preparing to wait for external event network-vif-plugged-137dbaa1-1ef7-4aea-bc14-9f99b461690c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.884 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.884 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.884 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.884 2 DEBUG nova.virt.libvirt.vif [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1586743939',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1586743939',id=73,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCWM4YUMOoSm6GBNS5HDC9AWc4kDdtYuM7ALyCkO4vM9Maq4z2BEgsAZS104YCyjVd+8u5++0So1UucHRWrrFc68q06aupZKWB7gDCQ0BUAh1k4LbTy9YWCguazjF74qNw==',key_name='tempest-keypair-1315632604',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9d3eca266284ae9950c491e566b2523',ramdisk_id='',reservation_id='r-tm18varp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1070421781',owner_user_name='tempest-TaggedBootDevicesTest-1070421781-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:16:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d4b073f3365d481cabadfd39389c66ba',uuid=d7e14705-7aeb-440f-8e7e-926cc5b5ab6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1b5f1c6a-6b73-4b1d-8560-6a38d7e59732", "address": "fa:16:3e:c2:83:d4", "network": {"id": "18c9b866-0d06-4d2c-a073-d724f233a261", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-215265968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b5f1c6a-6b", "ovs_interfaceid": "1b5f1c6a-6b73-4b1d-8560-6a38d7e59732", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.885 2 DEBUG nova.network.os_vif_util [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converting VIF {"id": "1b5f1c6a-6b73-4b1d-8560-6a38d7e59732", "address": "fa:16:3e:c2:83:d4", "network": {"id": "18c9b866-0d06-4d2c-a073-d724f233a261", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-215265968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b5f1c6a-6b", "ovs_interfaceid": "1b5f1c6a-6b73-4b1d-8560-6a38d7e59732", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.885 2 DEBUG nova.network.os_vif_util [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:83:d4,bridge_name='br-int',has_traffic_filtering=True,id=1b5f1c6a-6b73-4b1d-8560-6a38d7e59732,network=Network(18c9b866-0d06-4d2c-a073-d724f233a261),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b5f1c6a-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.886 2 DEBUG os_vif [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:83:d4,bridge_name='br-int',has_traffic_filtering=True,id=1b5f1c6a-6b73-4b1d-8560-6a38d7e59732,network=Network(18c9b866-0d06-4d2c-a073-d724f233a261),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b5f1c6a-6b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.887 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.887 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.890 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1b5f1c6a-6b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.891 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1b5f1c6a-6b, col_values=(('external_ids', {'iface-id': '1b5f1c6a-6b73-4b1d-8560-6a38d7e59732', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c2:83:d4', 'vm-uuid': 'd7e14705-7aeb-440f-8e7e-926cc5b5ab6f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:03 compute-0 podman[302603]: 2025-10-02 12:17:03.891598074 +0000 UTC m=+0.060042698 container create 92942f848bf4c7e45f5921e54fa98fc6c18f0312ef4e23897ad9d5b9fbede065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_faraday, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 12:17:03 compute-0 NetworkManager[44987]: <info>  [1759407423.8933] manager: (tap1b5f1c6a-6b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/120)
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.901 2 INFO os_vif [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:83:d4,bridge_name='br-int',has_traffic_filtering=True,id=1b5f1c6a-6b73-4b1d-8560-6a38d7e59732,network=Network(18c9b866-0d06-4d2c-a073-d724f233a261),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b5f1c6a-6b')
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.901 2 DEBUG nova.virt.libvirt.vif [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1586743939',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1586743939',id=73,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCWM4YUMOoSm6GBNS5HDC9AWc4kDdtYuM7ALyCkO4vM9Maq4z2BEgsAZS104YCyjVd+8u5++0So1UucHRWrrFc68q06aupZKWB7gDCQ0BUAh1k4LbTy9YWCguazjF74qNw==',key_name='tempest-keypair-1315632604',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9d3eca266284ae9950c491e566b2523',ramdisk_id='',reservation_id='r-tm18varp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1070421781',owner_user_name='tempest-TaggedBootDevicesTest-1070421781-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:16:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d4b073f3365d481cabadfd39389c66ba',uuid=d7e14705-7aeb-440f-8e7e-926cc5b5ab6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "address": "fa:16:3e:86:0a:06", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd56785b1-ae", "ovs_interfaceid": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.902 2 DEBUG nova.network.os_vif_util [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converting VIF {"id": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "address": "fa:16:3e:86:0a:06", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd56785b1-ae", "ovs_interfaceid": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.902 2 DEBUG nova.network.os_vif_util [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:0a:06,bridge_name='br-int',has_traffic_filtering=True,id=d56785b1-aeee-49be-af2d-71e8fc0a6c9b,network=Network(12d2f684-17ee-45ff-8e0b-9e61463b4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd56785b1-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.902 2 DEBUG os_vif [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:0a:06,bridge_name='br-int',has_traffic_filtering=True,id=d56785b1-aeee-49be-af2d-71e8fc0a6c9b,network=Network(12d2f684-17ee-45ff-8e0b-9e61463b4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd56785b1-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.903 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.903 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.905 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd56785b1-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.905 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd56785b1-ae, col_values=(('external_ids', {'iface-id': 'd56785b1-aeee-49be-af2d-71e8fc0a6c9b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:86:0a:06', 'vm-uuid': 'd7e14705-7aeb-440f-8e7e-926cc5b5ab6f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:03 compute-0 NetworkManager[44987]: <info>  [1759407423.9075] manager: (tapd56785b1-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/121)
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.913 2 INFO os_vif [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:0a:06,bridge_name='br-int',has_traffic_filtering=True,id=d56785b1-aeee-49be-af2d-71e8fc0a6c9b,network=Network(12d2f684-17ee-45ff-8e0b-9e61463b4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd56785b1-ae')
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.914 2 DEBUG nova.virt.libvirt.vif [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1586743939',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1586743939',id=73,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCWM4YUMOoSm6GBNS5HDC9AWc4kDdtYuM7ALyCkO4vM9Maq4z2BEgsAZS104YCyjVd+8u5++0So1UucHRWrrFc68q06aupZKWB7gDCQ0BUAh1k4LbTy9YWCguazjF74qNw==',key_name='tempest-keypair-1315632604',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9d3eca266284ae9950c491e566b2523',ramdisk_id='',reservation_id='r-tm18varp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1070421781',owner_user_name='tempest-TaggedBootDevicesTest-1070421781-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:16:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d4b073f3365d481cabadfd39389c66ba',uuid=d7e14705-7aeb-440f-8e7e-926cc5b5ab6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "address": "fa:16:3e:3b:4c:02", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a1bf892-fd", "ovs_interfaceid": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.914 2 DEBUG nova.network.os_vif_util [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converting VIF {"id": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "address": "fa:16:3e:3b:4c:02", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a1bf892-fd", "ovs_interfaceid": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.915 2 DEBUG nova.network.os_vif_util [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:4c:02,bridge_name='br-int',has_traffic_filtering=True,id=4a1bf892-fd3f-40df-95f2-779ffc41ade1,network=Network(12d2f684-17ee-45ff-8e0b-9e61463b4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4a1bf892-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.915 2 DEBUG os_vif [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:4c:02,bridge_name='br-int',has_traffic_filtering=True,id=4a1bf892-fd3f-40df-95f2-779ffc41ade1,network=Network(12d2f684-17ee-45ff-8e0b-9e61463b4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4a1bf892-fd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.916 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.916 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.918 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a1bf892-fd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.919 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4a1bf892-fd, col_values=(('external_ids', {'iface-id': '4a1bf892-fd3f-40df-95f2-779ffc41ade1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3b:4c:02', 'vm-uuid': 'd7e14705-7aeb-440f-8e7e-926cc5b5ab6f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:03 compute-0 NetworkManager[44987]: <info>  [1759407423.9205] manager: (tap4a1bf892-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/122)
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.930 2 INFO os_vif [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:4c:02,bridge_name='br-int',has_traffic_filtering=True,id=4a1bf892-fd3f-40df-95f2-779ffc41ade1,network=Network(12d2f684-17ee-45ff-8e0b-9e61463b4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4a1bf892-fd')
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.931 2 DEBUG nova.virt.libvirt.vif [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1586743939',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1586743939',id=73,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCWM4YUMOoSm6GBNS5HDC9AWc4kDdtYuM7ALyCkO4vM9Maq4z2BEgsAZS104YCyjVd+8u5++0So1UucHRWrrFc68q06aupZKWB7gDCQ0BUAh1k4LbTy9YWCguazjF74qNw==',key_name='tempest-keypair-1315632604',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9d3eca266284ae9950c491e566b2523',ramdisk_id='',reservation_id='r-tm18varp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1070421781',owner_user_name='tempest-TaggedBootDevicesTest-1070421781-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:16:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d4b073f3365d481cabadfd39389c66ba',uuid=d7e14705-7aeb-440f-8e7e-926cc5b5ab6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c23e360e-b27a-4c79-a3ce-5c8850ccc846", "address": "fa:16:3e:1d:5e:56", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.222", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc23e360e-b2", "ovs_interfaceid": "c23e360e-b27a-4c79-a3ce-5c8850ccc846", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.931 2 DEBUG nova.network.os_vif_util [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converting VIF {"id": "c23e360e-b27a-4c79-a3ce-5c8850ccc846", "address": "fa:16:3e:1d:5e:56", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.222", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc23e360e-b2", "ovs_interfaceid": "c23e360e-b27a-4c79-a3ce-5c8850ccc846", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.932 2 DEBUG nova.network.os_vif_util [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:5e:56,bridge_name='br-int',has_traffic_filtering=True,id=c23e360e-b27a-4c79-a3ce-5c8850ccc846,network=Network(12d2f684-17ee-45ff-8e0b-9e61463b4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc23e360e-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.932 2 DEBUG os_vif [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:5e:56,bridge_name='br-int',has_traffic_filtering=True,id=c23e360e-b27a-4c79-a3ce-5c8850ccc846,network=Network(12d2f684-17ee-45ff-8e0b-9e61463b4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc23e360e-b2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.933 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.933 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.935 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc23e360e-b2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.935 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc23e360e-b2, col_values=(('external_ids', {'iface-id': 'c23e360e-b27a-4c79-a3ce-5c8850ccc846', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1d:5e:56', 'vm-uuid': 'd7e14705-7aeb-440f-8e7e-926cc5b5ab6f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:03 compute-0 NetworkManager[44987]: <info>  [1759407423.9373] manager: (tapc23e360e-b2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/123)
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:17:03 compute-0 systemd[1]: Started libpod-conmon-92942f848bf4c7e45f5921e54fa98fc6c18f0312ef4e23897ad9d5b9fbede065.scope.
Oct 02 12:17:03 compute-0 podman[302603]: 2025-10-02 12:17:03.85566939 +0000 UTC m=+0.024114034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.953 2 INFO os_vif [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:5e:56,bridge_name='br-int',has_traffic_filtering=True,id=c23e360e-b27a-4c79-a3ce-5c8850ccc846,network=Network(12d2f684-17ee-45ff-8e0b-9e61463b4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc23e360e-b2')
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.954 2 DEBUG nova.virt.libvirt.vif [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1586743939',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1586743939',id=73,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCWM4YUMOoSm6GBNS5HDC9AWc4kDdtYuM7ALyCkO4vM9Maq4z2BEgsAZS104YCyjVd+8u5++0So1UucHRWrrFc68q06aupZKWB7gDCQ0BUAh1k4LbTy9YWCguazjF74qNw==',key_name='tempest-keypair-1315632604',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9d3eca266284ae9950c491e566b2523',ramdisk_id='',reservation_id='r-tm18varp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1070421781',owner_user_name='tempest-TaggedBootDevicesTest-1070421781-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:16:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d4b073f3365d481cabadfd39389c66ba',uuid=d7e14705-7aeb-440f-8e7e-926cc5b5ab6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "address": "fa:16:3e:11:f2:18", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeea85d66-40", "ovs_interfaceid": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.955 2 DEBUG nova.network.os_vif_util [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converting VIF {"id": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "address": "fa:16:3e:11:f2:18", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeea85d66-40", "ovs_interfaceid": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.955 2 DEBUG nova.network.os_vif_util [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:f2:18,bridge_name='br-int',has_traffic_filtering=True,id=eea85d66-4089-4708-9cb0-a7d2ab18e6ec,network=Network(12d2f684-17ee-45ff-8e0b-9e61463b4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeea85d66-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.956 2 DEBUG os_vif [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:f2:18,bridge_name='br-int',has_traffic_filtering=True,id=eea85d66-4089-4708-9cb0-a7d2ab18e6ec,network=Network(12d2f684-17ee-45ff-8e0b-9e61463b4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeea85d66-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.957 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.957 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.959 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeea85d66-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.960 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeea85d66-40, col_values=(('external_ids', {'iface-id': 'eea85d66-4089-4708-9cb0-a7d2ab18e6ec', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:11:f2:18', 'vm-uuid': 'd7e14705-7aeb-440f-8e7e-926cc5b5ab6f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:03 compute-0 NetworkManager[44987]: <info>  [1759407423.9633] manager: (tapeea85d66-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/124)
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:17:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:17:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/725c035d6c0bfe62ddba75564a1fbb6c64dac87f7335c9c69782bdccc55b8a2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:17:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/725c035d6c0bfe62ddba75564a1fbb6c64dac87f7335c9c69782bdccc55b8a2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:17:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/725c035d6c0bfe62ddba75564a1fbb6c64dac87f7335c9c69782bdccc55b8a2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:17:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/725c035d6c0bfe62ddba75564a1fbb6c64dac87f7335c9c69782bdccc55b8a2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.983 2 INFO os_vif [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:f2:18,bridge_name='br-int',has_traffic_filtering=True,id=eea85d66-4089-4708-9cb0-a7d2ab18e6ec,network=Network(12d2f684-17ee-45ff-8e0b-9e61463b4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeea85d66-40')
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.984 2 DEBUG nova.virt.libvirt.vif [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1586743939',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1586743939',id=73,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCWM4YUMOoSm6GBNS5HDC9AWc4kDdtYuM7ALyCkO4vM9Maq4z2BEgsAZS104YCyjVd+8u5++0So1UucHRWrrFc68q06aupZKWB7gDCQ0BUAh1k4LbTy9YWCguazjF74qNw==',key_name='tempest-keypair-1315632604',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9d3eca266284ae9950c491e566b2523',ramdisk_id='',reservation_id='r-tm18varp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1070421781',owner_user_name='tempest-TaggedBootDevicesTest-1070421781-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:16:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d4b073f3365d481cabadfd39389c66ba',uuid=d7e14705-7aeb-440f-8e7e-926cc5b5ab6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "address": "fa:16:3e:ee:27:7c", "network": {"id": "8f6e5e22-d751-48cd-82a1-be33a11e92d9", "bridge": "br-int", "label": "tempest-device-tagging-net2-1945824075", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7eba4256-8a", "ovs_interfaceid": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.985 2 DEBUG nova.network.os_vif_util [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converting VIF {"id": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "address": "fa:16:3e:ee:27:7c", "network": {"id": "8f6e5e22-d751-48cd-82a1-be33a11e92d9", "bridge": "br-int", "label": "tempest-device-tagging-net2-1945824075", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7eba4256-8a", "ovs_interfaceid": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.985 2 DEBUG nova.network.os_vif_util [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:27:7c,bridge_name='br-int',has_traffic_filtering=True,id=7eba4256-8a01-49a2-aec1-98d48eda4d3f,network=Network(8f6e5e22-d751-48cd-82a1-be33a11e92d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7eba4256-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.986 2 DEBUG os_vif [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:27:7c,bridge_name='br-int',has_traffic_filtering=True,id=7eba4256-8a01-49a2-aec1-98d48eda4d3f,network=Network(8f6e5e22-d751-48cd-82a1-be33a11e92d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7eba4256-8a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.987 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.987 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.990 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7eba4256-8a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.991 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7eba4256-8a, col_values=(('external_ids', {'iface-id': '7eba4256-8a01-49a2-aec1-98d48eda4d3f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ee:27:7c', 'vm-uuid': 'd7e14705-7aeb-440f-8e7e-926cc5b5ab6f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:03 compute-0 NetworkManager[44987]: <info>  [1759407423.9935] manager: (tap7eba4256-8a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/125)
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:03 compute-0 nova_compute[257802]: 2025-10-02 12:17:03.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:17:04 compute-0 podman[302603]: 2025-10-02 12:17:04.003396434 +0000 UTC m=+0.171841078 container init 92942f848bf4c7e45f5921e54fa98fc6c18f0312ef4e23897ad9d5b9fbede065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:17:04 compute-0 nova_compute[257802]: 2025-10-02 12:17:04.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:04 compute-0 podman[302603]: 2025-10-02 12:17:04.011493972 +0000 UTC m=+0.179938596 container start 92942f848bf4c7e45f5921e54fa98fc6c18f0312ef4e23897ad9d5b9fbede065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_faraday, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 12:17:04 compute-0 nova_compute[257802]: 2025-10-02 12:17:04.012 2 INFO os_vif [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:27:7c,bridge_name='br-int',has_traffic_filtering=True,id=7eba4256-8a01-49a2-aec1-98d48eda4d3f,network=Network(8f6e5e22-d751-48cd-82a1-be33a11e92d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7eba4256-8a')
Oct 02 12:17:04 compute-0 nova_compute[257802]: 2025-10-02 12:17:04.013 2 DEBUG nova.virt.libvirt.vif [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1586743939',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1586743939',id=73,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCWM4YUMOoSm6GBNS5HDC9AWc4kDdtYuM7ALyCkO4vM9Maq4z2BEgsAZS104YCyjVd+8u5++0So1UucHRWrrFc68q06aupZKWB7gDCQ0BUAh1k4LbTy9YWCguazjF74qNw==',key_name='tempest-keypair-1315632604',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9d3eca266284ae9950c491e566b2523',ramdisk_id='',reservation_id='r-tm18varp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1070421781',owner_user_name='tempest-TaggedBootDevicesTest-1070421781-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:16:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d4b073f3365d481cabadfd39389c66ba',uuid=d7e14705-7aeb-440f-8e7e-926cc5b5ab6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "137dbaa1-1ef7-4aea-bc14-9f99b461690c", "address": "fa:16:3e:4f:43:a5", "network": {"id": "8f6e5e22-d751-48cd-82a1-be33a11e92d9", "bridge": "br-int", "label": "tempest-device-tagging-net2-1945824075", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap137dbaa1-1e", "ovs_interfaceid": "137dbaa1-1ef7-4aea-bc14-9f99b461690c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:17:04 compute-0 nova_compute[257802]: 2025-10-02 12:17:04.014 2 DEBUG nova.network.os_vif_util [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converting VIF {"id": "137dbaa1-1ef7-4aea-bc14-9f99b461690c", "address": "fa:16:3e:4f:43:a5", "network": {"id": "8f6e5e22-d751-48cd-82a1-be33a11e92d9", "bridge": "br-int", "label": "tempest-device-tagging-net2-1945824075", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap137dbaa1-1e", "ovs_interfaceid": "137dbaa1-1ef7-4aea-bc14-9f99b461690c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:17:04 compute-0 nova_compute[257802]: 2025-10-02 12:17:04.014 2 DEBUG nova.network.os_vif_util [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:43:a5,bridge_name='br-int',has_traffic_filtering=True,id=137dbaa1-1ef7-4aea-bc14-9f99b461690c,network=Network(8f6e5e22-d751-48cd-82a1-be33a11e92d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap137dbaa1-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:17:04 compute-0 nova_compute[257802]: 2025-10-02 12:17:04.015 2 DEBUG os_vif [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:43:a5,bridge_name='br-int',has_traffic_filtering=True,id=137dbaa1-1ef7-4aea-bc14-9f99b461690c,network=Network(8f6e5e22-d751-48cd-82a1-be33a11e92d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap137dbaa1-1e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:17:04 compute-0 nova_compute[257802]: 2025-10-02 12:17:04.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:04 compute-0 nova_compute[257802]: 2025-10-02 12:17:04.016 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:04 compute-0 nova_compute[257802]: 2025-10-02 12:17:04.016 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:17:04 compute-0 nova_compute[257802]: 2025-10-02 12:17:04.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:04 compute-0 nova_compute[257802]: 2025-10-02 12:17:04.018 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap137dbaa1-1e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:04 compute-0 nova_compute[257802]: 2025-10-02 12:17:04.018 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap137dbaa1-1e, col_values=(('external_ids', {'iface-id': '137dbaa1-1ef7-4aea-bc14-9f99b461690c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4f:43:a5', 'vm-uuid': 'd7e14705-7aeb-440f-8e7e-926cc5b5ab6f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:04 compute-0 nova_compute[257802]: 2025-10-02 12:17:04.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:04 compute-0 NetworkManager[44987]: <info>  [1759407424.0209] manager: (tap137dbaa1-1e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/126)
Oct 02 12:17:04 compute-0 podman[302603]: 2025-10-02 12:17:04.021614292 +0000 UTC m=+0.190058916 container attach 92942f848bf4c7e45f5921e54fa98fc6c18f0312ef4e23897ad9d5b9fbede065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_faraday, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 12:17:04 compute-0 nova_compute[257802]: 2025-10-02 12:17:04.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:17:04 compute-0 nova_compute[257802]: 2025-10-02 12:17:04.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:04 compute-0 nova_compute[257802]: 2025-10-02 12:17:04.039 2 INFO os_vif [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:43:a5,bridge_name='br-int',has_traffic_filtering=True,id=137dbaa1-1ef7-4aea-bc14-9f99b461690c,network=Network(8f6e5e22-d751-48cd-82a1-be33a11e92d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap137dbaa1-1e')
Oct 02 12:17:04 compute-0 nova_compute[257802]: 2025-10-02 12:17:04.135 2 DEBUG nova.virt.libvirt.driver [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:17:04 compute-0 nova_compute[257802]: 2025-10-02 12:17:04.135 2 DEBUG nova.virt.libvirt.driver [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:17:04 compute-0 nova_compute[257802]: 2025-10-02 12:17:04.136 2 DEBUG nova.virt.libvirt.driver [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] No VIF found with MAC fa:16:3e:c2:83:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:17:04 compute-0 nova_compute[257802]: 2025-10-02 12:17:04.136 2 DEBUG nova.virt.libvirt.driver [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] No VIF found with MAC fa:16:3e:11:f2:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:17:04 compute-0 nova_compute[257802]: 2025-10-02 12:17:04.137 2 INFO nova.virt.libvirt.driver [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Using config drive
Oct 02 12:17:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:17:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:04.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:17:04 compute-0 nova_compute[257802]: 2025-10-02 12:17:04.168 2 DEBUG nova.storage.rbd_utils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] rbd image d7e14705-7aeb-440f-8e7e-926cc5b5ab6f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:17:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:04.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:04 compute-0 ceph-mon[73607]: pgmap v1586: 305 pgs: 305 active+clean; 630 MiB data, 984 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 9.2 MiB/s wr, 367 op/s
Oct 02 12:17:04 compute-0 ceph-mon[73607]: osdmap e234: 3 total, 3 up, 3 in
Oct 02 12:17:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/284290376' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:04 compute-0 ceph-mon[73607]: osdmap e235: 3 total, 3 up, 3 in
Oct 02 12:17:04 compute-0 vibrant_faraday[302628]: {
Oct 02 12:17:04 compute-0 vibrant_faraday[302628]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:17:04 compute-0 vibrant_faraday[302628]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:17:04 compute-0 vibrant_faraday[302628]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:17:04 compute-0 vibrant_faraday[302628]:         "osd_id": 1,
Oct 02 12:17:04 compute-0 vibrant_faraday[302628]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:17:04 compute-0 vibrant_faraday[302628]:         "type": "bluestore"
Oct 02 12:17:04 compute-0 vibrant_faraday[302628]:     }
Oct 02 12:17:04 compute-0 vibrant_faraday[302628]: }
Oct 02 12:17:04 compute-0 systemd[1]: libpod-92942f848bf4c7e45f5921e54fa98fc6c18f0312ef4e23897ad9d5b9fbede065.scope: Deactivated successfully.
Oct 02 12:17:04 compute-0 podman[302689]: 2025-10-02 12:17:04.900698011 +0000 UTC m=+0.024365550 container died 92942f848bf4c7e45f5921e54fa98fc6c18f0312ef4e23897ad9d5b9fbede065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_faraday, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 12:17:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-725c035d6c0bfe62ddba75564a1fbb6c64dac87f7335c9c69782bdccc55b8a2e-merged.mount: Deactivated successfully.
Oct 02 12:17:05 compute-0 nova_compute[257802]: 2025-10-02 12:17:05.008 2 INFO nova.virt.libvirt.driver [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Creating config drive at /var/lib/nova/instances/d7e14705-7aeb-440f-8e7e-926cc5b5ab6f/disk.config
Oct 02 12:17:05 compute-0 podman[302689]: 2025-10-02 12:17:05.00964586 +0000 UTC m=+0.133313439 container remove 92942f848bf4c7e45f5921e54fa98fc6c18f0312ef4e23897ad9d5b9fbede065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_faraday, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:17:05 compute-0 nova_compute[257802]: 2025-10-02 12:17:05.014 2 DEBUG oslo_concurrency.processutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d7e14705-7aeb-440f-8e7e-926cc5b5ab6f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcar7n_32 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:17:05 compute-0 systemd[1]: libpod-conmon-92942f848bf4c7e45f5921e54fa98fc6c18f0312ef4e23897ad9d5b9fbede065.scope: Deactivated successfully.
Oct 02 12:17:05 compute-0 sudo[302460]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:17:05 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:17:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:17:05 compute-0 nova_compute[257802]: 2025-10-02 12:17:05.166 2 DEBUG oslo_concurrency.processutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d7e14705-7aeb-440f-8e7e-926cc5b5ab6f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcar7n_32" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:17:05 compute-0 nova_compute[257802]: 2025-10-02 12:17:05.197 2 DEBUG nova.storage.rbd_utils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] rbd image d7e14705-7aeb-440f-8e7e-926cc5b5ab6f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:17:05 compute-0 nova_compute[257802]: 2025-10-02 12:17:05.201 2 DEBUG oslo_concurrency.processutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d7e14705-7aeb-440f-8e7e-926cc5b5ab6f/disk.config d7e14705-7aeb-440f-8e7e-926cc5b5ab6f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:17:05 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:17:05 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 5ec675d1-6b64-48f9-aba1-666e8b620530 does not exist
Oct 02 12:17:05 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 33a36e4f-dd70-4296-8ce7-762f32ba0934 does not exist
Oct 02 12:17:05 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 5713bb3c-6b84-4b76-9678-748badcc2462 does not exist
Oct 02 12:17:05 compute-0 nova_compute[257802]: 2025-10-02 12:17:05.281 2 DEBUG nova.network.neutron [req-0ca3a373-8c95-4cba-a4e8-ffd0f111e73d req-1f0b83e7-4199-4249-9060-eb41dbe89f99 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Updated VIF entry in instance network info cache for port 7eba4256-8a01-49a2-aec1-98d48eda4d3f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:17:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1589: 305 pgs: 305 active+clean; 656 MiB data, 1002 MiB used, 20 GiB / 21 GiB avail; 14 MiB/s rd, 13 MiB/s wr, 487 op/s
Oct 02 12:17:05 compute-0 nova_compute[257802]: 2025-10-02 12:17:05.283 2 DEBUG nova.network.neutron [req-0ca3a373-8c95-4cba-a4e8-ffd0f111e73d req-1f0b83e7-4199-4249-9060-eb41dbe89f99 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Updating instance_info_cache with network_info: [{"id": "1b5f1c6a-6b73-4b1d-8560-6a38d7e59732", "address": "fa:16:3e:c2:83:d4", "network": {"id": "18c9b866-0d06-4d2c-a073-d724f233a261", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-215265968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b5f1c6a-6b", "ovs_interfaceid": "1b5f1c6a-6b73-4b1d-8560-6a38d7e59732", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "address": "fa:16:3e:86:0a:06", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd56785b1-ae", "ovs_interfaceid": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "address": "fa:16:3e:3b:4c:02", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a1bf892-fd", "ovs_interfaceid": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "c23e360e-b27a-4c79-a3ce-5c8850ccc846", "address": "fa:16:3e:1d:5e:56", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.222", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc23e360e-b2", "ovs_interfaceid": "c23e360e-b27a-4c79-a3ce-5c8850ccc846", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "address": "fa:16:3e:11:f2:18", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeea85d66-40", "ovs_interfaceid": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "address": "fa:16:3e:ee:27:7c", "network": {"id": "8f6e5e22-d751-48cd-82a1-be33a11e92d9", "bridge": "br-int", "label": "tempest-device-tagging-net2-1945824075", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7eba4256-8a", "ovs_interfaceid": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "137dbaa1-1ef7-4aea-bc14-9f99b461690c", "address": "fa:16:3e:4f:43:a5", "network": {"id": "8f6e5e22-d751-48cd-82a1-be33a11e92d9", "bridge": "br-int", "label": "tempest-device-tagging-net2-1945824075", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap137dbaa1-1e", "ovs_interfaceid": "137dbaa1-1ef7-4aea-bc14-9f99b461690c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:17:05 compute-0 sudo[302726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:17:05 compute-0 sudo[302726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:05 compute-0 sudo[302726]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:05 compute-0 nova_compute[257802]: 2025-10-02 12:17:05.309 2 DEBUG oslo_concurrency.lockutils [req-0ca3a373-8c95-4cba-a4e8-ffd0f111e73d req-1f0b83e7-4199-4249-9060-eb41dbe89f99 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:17:05 compute-0 nova_compute[257802]: 2025-10-02 12:17:05.310 2 DEBUG nova.compute.manager [req-0ca3a373-8c95-4cba-a4e8-ffd0f111e73d req-1f0b83e7-4199-4249-9060-eb41dbe89f99 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-changed-137dbaa1-1ef7-4aea-bc14-9f99b461690c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:05 compute-0 nova_compute[257802]: 2025-10-02 12:17:05.310 2 DEBUG nova.compute.manager [req-0ca3a373-8c95-4cba-a4e8-ffd0f111e73d req-1f0b83e7-4199-4249-9060-eb41dbe89f99 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Refreshing instance network info cache due to event network-changed-137dbaa1-1ef7-4aea-bc14-9f99b461690c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:17:05 compute-0 nova_compute[257802]: 2025-10-02 12:17:05.310 2 DEBUG oslo_concurrency.lockutils [req-0ca3a373-8c95-4cba-a4e8-ffd0f111e73d req-1f0b83e7-4199-4249-9060-eb41dbe89f99 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:17:05 compute-0 nova_compute[257802]: 2025-10-02 12:17:05.311 2 DEBUG oslo_concurrency.lockutils [req-0ca3a373-8c95-4cba-a4e8-ffd0f111e73d req-1f0b83e7-4199-4249-9060-eb41dbe89f99 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:17:05 compute-0 nova_compute[257802]: 2025-10-02 12:17:05.311 2 DEBUG nova.network.neutron [req-0ca3a373-8c95-4cba-a4e8-ffd0f111e73d req-1f0b83e7-4199-4249-9060-eb41dbe89f99 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Refreshing network info cache for port 137dbaa1-1ef7-4aea-bc14-9f99b461690c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:17:05 compute-0 sudo[302766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:17:05 compute-0 sudo[302766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:05 compute-0 sudo[302766]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:05 compute-0 nova_compute[257802]: 2025-10-02 12:17:05.377 2 DEBUG oslo_concurrency.processutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d7e14705-7aeb-440f-8e7e-926cc5b5ab6f/disk.config d7e14705-7aeb-440f-8e7e-926cc5b5ab6f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:17:05 compute-0 nova_compute[257802]: 2025-10-02 12:17:05.378 2 INFO nova.virt.libvirt.driver [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Deleting local config drive /var/lib/nova/instances/d7e14705-7aeb-440f-8e7e-926cc5b5ab6f/disk.config because it was imported into RBD.
Oct 02 12:17:05 compute-0 NetworkManager[44987]: <info>  [1759407425.4257] manager: (tap1b5f1c6a-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/127)
Oct 02 12:17:05 compute-0 kernel: tap1b5f1c6a-6b: entered promiscuous mode
Oct 02 12:17:05 compute-0 NetworkManager[44987]: <info>  [1759407425.4397] manager: (tapd56785b1-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/128)
Oct 02 12:17:05 compute-0 ovn_controller[148183]: 2025-10-02T12:17:05Z|00265|binding|INFO|Claiming lport 1b5f1c6a-6b73-4b1d-8560-6a38d7e59732 for this chassis.
Oct 02 12:17:05 compute-0 ovn_controller[148183]: 2025-10-02T12:17:05Z|00266|binding|INFO|1b5f1c6a-6b73-4b1d-8560-6a38d7e59732: Claiming fa:16:3e:c2:83:d4 10.100.0.5
Oct 02 12:17:05 compute-0 nova_compute[257802]: 2025-10-02 12:17:05.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:05 compute-0 kernel: tapd56785b1-ae: entered promiscuous mode
Oct 02 12:17:05 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:17:05 compute-0 NetworkManager[44987]: <info>  [1759407425.4543] manager: (tap4a1bf892-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/129)
Oct 02 12:17:05 compute-0 systemd-udevd[302820]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:17:05 compute-0 kernel: tap4a1bf892-fd: entered promiscuous mode
Oct 02 12:17:05 compute-0 systemd-udevd[302818]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:17:05 compute-0 systemd-udevd[302816]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:17:05 compute-0 NetworkManager[44987]: <info>  [1759407425.4658] manager: (tapc23e360e-b2): new Tun device (/org/freedesktop/NetworkManager/Devices/130)
Oct 02 12:17:05 compute-0 systemd-udevd[302829]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.469 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:83:d4 10.100.0.5'], port_security=['fa:16:3e:c2:83:d4 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'd7e14705-7aeb-440f-8e7e-926cc5b5ab6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18c9b866-0d06-4d2c-a073-d724f233a261', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9d3eca266284ae9950c491e566b2523', 'neutron:revision_number': '2', 'neutron:security_group_ids': '746f60f4-bb48-4286-9b73-3fc07720fb36', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=76a7d478-dac8-494b-afd5-42c66456bac5, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=1b5f1c6a-6b73-4b1d-8560-6a38d7e59732) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.473 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 1b5f1c6a-6b73-4b1d-8560-6a38d7e59732 in datapath 18c9b866-0d06-4d2c-a073-d724f233a261 bound to our chassis
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.475 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 18c9b866-0d06-4d2c-a073-d724f233a261
Oct 02 12:17:05 compute-0 NetworkManager[44987]: <info>  [1759407425.4763] device (tap1b5f1c6a-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:17:05 compute-0 NetworkManager[44987]: <info>  [1759407425.4772] device (tapd56785b1-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:17:05 compute-0 NetworkManager[44987]: <info>  [1759407425.4785] device (tap1b5f1c6a-6b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:17:05 compute-0 NetworkManager[44987]: <info>  [1759407425.4792] device (tapd56785b1-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:17:05 compute-0 NetworkManager[44987]: <info>  [1759407425.4797] device (tap4a1bf892-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:17:05 compute-0 NetworkManager[44987]: <info>  [1759407425.4814] device (tap4a1bf892-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:17:05 compute-0 NetworkManager[44987]: <info>  [1759407425.4874] manager: (tapeea85d66-40): new Tun device (/org/freedesktop/NetworkManager/Devices/131)
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.487 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[afedf1f5-0d7e-4b2e-92d2-e3c8ee09292f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.488 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap18c9b866-01 in ovnmeta-18c9b866-0d06-4d2c-a073-d724f233a261 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.490 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap18c9b866-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.490 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5e589006-c9aa-47c9-8f9f-ebfa847e9d85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.490 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a3a06ead-d748-4015-a8e5-c8eeb58a18df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:05 compute-0 NetworkManager[44987]: <info>  [1759407425.5047] manager: (tap7eba4256-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/132)
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.505 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[2c627a3d-1c56-45f4-987e-d9e07fa49dcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:05 compute-0 NetworkManager[44987]: <info>  [1759407425.5200] manager: (tap137dbaa1-1e): new Tun device (/org/freedesktop/NetworkManager/Devices/133)
Oct 02 12:17:05 compute-0 systemd-udevd[302831]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.536 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2e701643-3af7-4c70-b273-a17174e18393]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:05 compute-0 kernel: tapc23e360e-b2: entered promiscuous mode
Oct 02 12:17:05 compute-0 kernel: tap7eba4256-8a: entered promiscuous mode
Oct 02 12:17:05 compute-0 kernel: tap137dbaa1-1e: entered promiscuous mode
Oct 02 12:17:05 compute-0 ovn_controller[148183]: 2025-10-02T12:17:05Z|00267|binding|INFO|Claiming lport 4a1bf892-fd3f-40df-95f2-779ffc41ade1 for this chassis.
Oct 02 12:17:05 compute-0 ovn_controller[148183]: 2025-10-02T12:17:05Z|00268|binding|INFO|4a1bf892-fd3f-40df-95f2-779ffc41ade1: Claiming fa:16:3e:3b:4c:02 10.1.1.128
Oct 02 12:17:05 compute-0 ovn_controller[148183]: 2025-10-02T12:17:05Z|00269|binding|INFO|Claiming lport d56785b1-aeee-49be-af2d-71e8fc0a6c9b for this chassis.
Oct 02 12:17:05 compute-0 ovn_controller[148183]: 2025-10-02T12:17:05Z|00270|binding|INFO|d56785b1-aeee-49be-af2d-71e8fc0a6c9b: Claiming fa:16:3e:86:0a:06 10.1.1.24
Oct 02 12:17:05 compute-0 nova_compute[257802]: 2025-10-02 12:17:05.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:05 compute-0 kernel: tapeea85d66-40: entered promiscuous mode
Oct 02 12:17:05 compute-0 NetworkManager[44987]: <info>  [1759407425.5540] device (tapc23e360e-b2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:17:05 compute-0 NetworkManager[44987]: <info>  [1759407425.5553] device (tap7eba4256-8a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:17:05 compute-0 NetworkManager[44987]: <info>  [1759407425.5564] device (tap137dbaa1-1e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:17:05 compute-0 systemd-machined[211836]: New machine qemu-34-instance-00000049.
Oct 02 12:17:05 compute-0 NetworkManager[44987]: <info>  [1759407425.5579] device (tapeea85d66-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:17:05 compute-0 nova_compute[257802]: 2025-10-02 12:17:05.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:05 compute-0 NetworkManager[44987]: <info>  [1759407425.5600] device (tapc23e360e-b2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:17:05 compute-0 NetworkManager[44987]: <info>  [1759407425.5604] device (tap7eba4256-8a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:17:05 compute-0 NetworkManager[44987]: <info>  [1759407425.5608] device (tap137dbaa1-1e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:17:05 compute-0 ovn_controller[148183]: 2025-10-02T12:17:05Z|00271|binding|INFO|Claiming lport c23e360e-b27a-4c79-a3ce-5c8850ccc846 for this chassis.
Oct 02 12:17:05 compute-0 NetworkManager[44987]: <info>  [1759407425.5613] device (tapeea85d66-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:17:05 compute-0 ovn_controller[148183]: 2025-10-02T12:17:05Z|00272|binding|INFO|c23e360e-b27a-4c79-a3ce-5c8850ccc846: Claiming fa:16:3e:1d:5e:56 10.1.1.222
Oct 02 12:17:05 compute-0 ovn_controller[148183]: 2025-10-02T12:17:05Z|00273|binding|INFO|Claiming lport 7eba4256-8a01-49a2-aec1-98d48eda4d3f for this chassis.
Oct 02 12:17:05 compute-0 ovn_controller[148183]: 2025-10-02T12:17:05Z|00274|binding|INFO|7eba4256-8a01-49a2-aec1-98d48eda4d3f: Claiming fa:16:3e:ee:27:7c 10.2.2.100
Oct 02 12:17:05 compute-0 ovn_controller[148183]: 2025-10-02T12:17:05Z|00275|binding|INFO|Claiming lport 137dbaa1-1ef7-4aea-bc14-9f99b461690c for this chassis.
Oct 02 12:17:05 compute-0 ovn_controller[148183]: 2025-10-02T12:17:05Z|00276|binding|INFO|137dbaa1-1ef7-4aea-bc14-9f99b461690c: Claiming fa:16:3e:4f:43:a5 10.2.2.200
Oct 02 12:17:05 compute-0 ovn_controller[148183]: 2025-10-02T12:17:05Z|00277|binding|INFO|Claiming lport eea85d66-4089-4708-9cb0-a7d2ab18e6ec for this chassis.
Oct 02 12:17:05 compute-0 ovn_controller[148183]: 2025-10-02T12:17:05Z|00278|binding|INFO|eea85d66-4089-4708-9cb0-a7d2ab18e6ec: Claiming fa:16:3e:11:f2:18 10.1.1.9
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.562 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:0a:06 10.1.1.24'], port_security=['fa:16:3e:86:0a:06 10.1.1.24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest-1251303844', 'neutron:cidrs': '10.1.1.24/24', 'neutron:device_id': 'd7e14705-7aeb-440f-8e7e-926cc5b5ab6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-12d2f684-17ee-45ff-8e0b-9e61463b4f7c', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest-1251303844', 'neutron:project_id': 'a9d3eca266284ae9950c491e566b2523', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5fe43832-bbac-4905-ae6c-ce5bd827e90a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1c0792c7-0535-44fc-9646-b20d704d1986, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=d56785b1-aeee-49be-af2d-71e8fc0a6c9b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.564 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:4c:02 10.1.1.128'], port_security=['fa:16:3e:3b:4c:02 10.1.1.128'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest-1253942574', 'neutron:cidrs': '10.1.1.128/24', 'neutron:device_id': 'd7e14705-7aeb-440f-8e7e-926cc5b5ab6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-12d2f684-17ee-45ff-8e0b-9e61463b4f7c', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest-1253942574', 'neutron:project_id': 'a9d3eca266284ae9950c491e566b2523', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5fe43832-bbac-4905-ae6c-ce5bd827e90a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1c0792c7-0535-44fc-9646-b20d704d1986, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=4a1bf892-fd3f-40df-95f2-779ffc41ade1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:17:05 compute-0 systemd[1]: Started Virtual Machine qemu-34-instance-00000049.
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.567 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[21550208-1b4f-4915-8c1f-d4266e6a6bf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:05 compute-0 ovn_controller[148183]: 2025-10-02T12:17:05Z|00279|binding|INFO|Setting lport 1b5f1c6a-6b73-4b1d-8560-6a38d7e59732 ovn-installed in OVS
Oct 02 12:17:05 compute-0 nova_compute[257802]: 2025-10-02 12:17:05.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.573 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8e492a79-bed7-48e1-b378-1d99e256ecb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:05 compute-0 NetworkManager[44987]: <info>  [1759407425.5750] manager: (tap18c9b866-00): new Veth device (/org/freedesktop/NetworkManager/Devices/134)
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.575 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:27:7c 10.2.2.100'], port_security=['fa:16:3e:ee:27:7c 10.2.2.100'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.100/24', 'neutron:device_id': 'd7e14705-7aeb-440f-8e7e-926cc5b5ab6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f6e5e22-d751-48cd-82a1-be33a11e92d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9d3eca266284ae9950c491e566b2523', 'neutron:revision_number': '2', 'neutron:security_group_ids': '746f60f4-bb48-4286-9b73-3fc07720fb36', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82780a24-ba32-4888-a78c-cf80dc7ecc16, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=7eba4256-8a01-49a2-aec1-98d48eda4d3f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:17:05 compute-0 ovn_controller[148183]: 2025-10-02T12:17:05Z|00280|binding|INFO|Setting lport 1b5f1c6a-6b73-4b1d-8560-6a38d7e59732 up in Southbound
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.576 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:43:a5 10.2.2.200'], port_security=['fa:16:3e:4f:43:a5 10.2.2.200'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.200/24', 'neutron:device_id': 'd7e14705-7aeb-440f-8e7e-926cc5b5ab6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f6e5e22-d751-48cd-82a1-be33a11e92d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9d3eca266284ae9950c491e566b2523', 'neutron:revision_number': '2', 'neutron:security_group_ids': '746f60f4-bb48-4286-9b73-3fc07720fb36', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82780a24-ba32-4888-a78c-cf80dc7ecc16, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=137dbaa1-1ef7-4aea-bc14-9f99b461690c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.577 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:f2:18 10.1.1.9'], port_security=['fa:16:3e:11:f2:18 10.1.1.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.9/24', 'neutron:device_id': 'd7e14705-7aeb-440f-8e7e-926cc5b5ab6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-12d2f684-17ee-45ff-8e0b-9e61463b4f7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9d3eca266284ae9950c491e566b2523', 'neutron:revision_number': '2', 'neutron:security_group_ids': '746f60f4-bb48-4286-9b73-3fc07720fb36', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1c0792c7-0535-44fc-9646-b20d704d1986, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=eea85d66-4089-4708-9cb0-a7d2ab18e6ec) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.579 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:5e:56 10.1.1.222'], port_security=['fa:16:3e:1d:5e:56 10.1.1.222'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.222/24', 'neutron:device_id': 'd7e14705-7aeb-440f-8e7e-926cc5b5ab6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-12d2f684-17ee-45ff-8e0b-9e61463b4f7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9d3eca266284ae9950c491e566b2523', 'neutron:revision_number': '2', 'neutron:security_group_ids': '746f60f4-bb48-4286-9b73-3fc07720fb36', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1c0792c7-0535-44fc-9646-b20d704d1986, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=c23e360e-b27a-4c79-a3ce-5c8850ccc846) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.614 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[4ecc3f98-f22a-479b-828f-980dee782ce3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.617 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[4bb0ce20-d54c-42da-a4b9-d05da7dcfead]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:05 compute-0 NetworkManager[44987]: <info>  [1759407425.6431] device (tap18c9b866-00): carrier: link connected
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.652 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[f88cec46-5817-44a7-ad89-767125184053]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.701 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e14e357b-ac3e-448c-b5d9-7bb8a4e2fc31]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18c9b866-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:58:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 84], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 549326, 'reachable_time': 29407, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302874, 'error': None, 'target': 'ovnmeta-18c9b866-0d06-4d2c-a073-d724f233a261', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:05 compute-0 ovn_controller[148183]: 2025-10-02T12:17:05Z|00281|binding|INFO|Setting lport c23e360e-b27a-4c79-a3ce-5c8850ccc846 ovn-installed in OVS
Oct 02 12:17:05 compute-0 ovn_controller[148183]: 2025-10-02T12:17:05Z|00282|binding|INFO|Setting lport c23e360e-b27a-4c79-a3ce-5c8850ccc846 up in Southbound
Oct 02 12:17:05 compute-0 ovn_controller[148183]: 2025-10-02T12:17:05Z|00283|binding|INFO|Setting lport 4a1bf892-fd3f-40df-95f2-779ffc41ade1 ovn-installed in OVS
Oct 02 12:17:05 compute-0 ovn_controller[148183]: 2025-10-02T12:17:05Z|00284|binding|INFO|Setting lport 4a1bf892-fd3f-40df-95f2-779ffc41ade1 up in Southbound
Oct 02 12:17:05 compute-0 ovn_controller[148183]: 2025-10-02T12:17:05Z|00285|binding|INFO|Setting lport 7eba4256-8a01-49a2-aec1-98d48eda4d3f ovn-installed in OVS
Oct 02 12:17:05 compute-0 ovn_controller[148183]: 2025-10-02T12:17:05Z|00286|binding|INFO|Setting lport 7eba4256-8a01-49a2-aec1-98d48eda4d3f up in Southbound
Oct 02 12:17:05 compute-0 ovn_controller[148183]: 2025-10-02T12:17:05Z|00287|binding|INFO|Setting lport d56785b1-aeee-49be-af2d-71e8fc0a6c9b ovn-installed in OVS
Oct 02 12:17:05 compute-0 ovn_controller[148183]: 2025-10-02T12:17:05Z|00288|binding|INFO|Setting lport d56785b1-aeee-49be-af2d-71e8fc0a6c9b up in Southbound
Oct 02 12:17:05 compute-0 ovn_controller[148183]: 2025-10-02T12:17:05Z|00289|binding|INFO|Setting lport eea85d66-4089-4708-9cb0-a7d2ab18e6ec ovn-installed in OVS
Oct 02 12:17:05 compute-0 ovn_controller[148183]: 2025-10-02T12:17:05Z|00290|binding|INFO|Setting lport eea85d66-4089-4708-9cb0-a7d2ab18e6ec up in Southbound
Oct 02 12:17:05 compute-0 ovn_controller[148183]: 2025-10-02T12:17:05Z|00291|binding|INFO|Setting lport 137dbaa1-1ef7-4aea-bc14-9f99b461690c ovn-installed in OVS
Oct 02 12:17:05 compute-0 ovn_controller[148183]: 2025-10-02T12:17:05Z|00292|binding|INFO|Setting lport 137dbaa1-1ef7-4aea-bc14-9f99b461690c up in Southbound
Oct 02 12:17:05 compute-0 nova_compute[257802]: 2025-10-02 12:17:05.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.721 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[544e6792-7e1c-4436-90cd-c50a4dc3ebcd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe55:5818'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 549326, 'tstamp': 549326}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302876, 'error': None, 'target': 'ovnmeta-18c9b866-0d06-4d2c-a073-d724f233a261', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.738 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1dbfbbaf-b323-4049-8e35-6244f3af25e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18c9b866-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:58:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 84], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 549326, 'reachable_time': 29407, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302877, 'error': None, 'target': 'ovnmeta-18c9b866-0d06-4d2c-a073-d724f233a261', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.769 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d4d64a4f-c69c-4b55-8c35-cc4b4d18b638]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.848 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[da65e99c-31b9-4da2-b4f3-f6f8c058af9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.850 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18c9b866-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.850 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.851 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18c9b866-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:05 compute-0 nova_compute[257802]: 2025-10-02 12:17:05.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:05 compute-0 NetworkManager[44987]: <info>  [1759407425.8532] manager: (tap18c9b866-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/135)
Oct 02 12:17:05 compute-0 kernel: tap18c9b866-00: entered promiscuous mode
Oct 02 12:17:05 compute-0 nova_compute[257802]: 2025-10-02 12:17:05.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.855 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap18c9b866-00, col_values=(('external_ids', {'iface-id': '0f9e5149-7f4c-4549-8a41-1857fb606a88'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:05 compute-0 nova_compute[257802]: 2025-10-02 12:17:05.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:05 compute-0 ovn_controller[148183]: 2025-10-02T12:17:05Z|00293|binding|INFO|Releasing lport 0f9e5149-7f4c-4549-8a41-1857fb606a88 from this chassis (sb_readonly=0)
Oct 02 12:17:05 compute-0 nova_compute[257802]: 2025-10-02 12:17:05.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.873 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/18c9b866-0d06-4d2c-a073-d724f233a261.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/18c9b866-0d06-4d2c-a073-d724f233a261.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.874 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7fb33d77-fbb6-4992-ac9e-8f2c9639ec9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.875 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-18c9b866-0d06-4d2c-a073-d724f233a261
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/18c9b866-0d06-4d2c-a073-d724f233a261.pid.haproxy
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 18c9b866-0d06-4d2c-a073-d724f233a261
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:17:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:05.876 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-18c9b866-0d06-4d2c-a073-d724f233a261', 'env', 'PROCESS_TAG=haproxy-18c9b866-0d06-4d2c-a073-d724f233a261', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/18c9b866-0d06-4d2c-a073-d724f233a261.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:17:06 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:17:06 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:17:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:17:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:06.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:17:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Oct 02 12:17:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Oct 02 12:17:06 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Oct 02 12:17:06 compute-0 podman[302961]: 2025-10-02 12:17:06.309771526 +0000 UTC m=+0.063415341 container create 01fd1965aa0d10972d3b65eb0fc0c8afe95d9f2205ef5f0341955e6bda207e41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18c9b866-0d06-4d2c-a073-d724f233a261, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:17:06 compute-0 systemd[1]: Started libpod-conmon-01fd1965aa0d10972d3b65eb0fc0c8afe95d9f2205ef5f0341955e6bda207e41.scope.
Oct 02 12:17:06 compute-0 podman[302961]: 2025-10-02 12:17:06.28188788 +0000 UTC m=+0.035531715 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:17:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:17:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ef7665c785d1366f04882cd1d45e647d79ee1a4e4358d43246ee9df09eaf35b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:17:06 compute-0 podman[302961]: 2025-10-02 12:17:06.402026525 +0000 UTC m=+0.155670360 container init 01fd1965aa0d10972d3b65eb0fc0c8afe95d9f2205ef5f0341955e6bda207e41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18c9b866-0d06-4d2c-a073-d724f233a261, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 12:17:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:06.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:06 compute-0 podman[302961]: 2025-10-02 12:17:06.409455037 +0000 UTC m=+0.163098853 container start 01fd1965aa0d10972d3b65eb0fc0c8afe95d9f2205ef5f0341955e6bda207e41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18c9b866-0d06-4d2c-a073-d724f233a261, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct 02 12:17:06 compute-0 neutron-haproxy-ovnmeta-18c9b866-0d06-4d2c-a073-d724f233a261[303004]: [NOTICE]   (303010) : New worker (303012) forked
Oct 02 12:17:06 compute-0 neutron-haproxy-ovnmeta-18c9b866-0d06-4d2c-a073-d724f233a261[303004]: [NOTICE]   (303010) : Loading success.
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:06.498 158261 INFO neutron.agent.ovn.metadata.agent [-] Port d56785b1-aeee-49be-af2d-71e8fc0a6c9b in datapath 12d2f684-17ee-45ff-8e0b-9e61463b4f7c unbound from our chassis
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:06.501 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 12d2f684-17ee-45ff-8e0b-9e61463b4f7c
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:06.515 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[238fc731-b41d-43be-aa14-830ad671a4a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:06.516 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap12d2f684-11 in ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:06.519 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap12d2f684-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:06.519 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1b1a1aca-5d1a-4a1a-847a-76cc4ba487c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:06.519 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9487ca81-6855-4696-805a-22514e76be1b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:06.531 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[a19442b8-735c-40fd-9789-57d9f1e43cad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:06.555 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[732e6e94-d64b-4f41-91c9-3ad4b63ff225]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:06 compute-0 nova_compute[257802]: 2025-10-02 12:17:06.564 2 DEBUG nova.compute.manager [req-85a5b4eb-a30d-47fc-9bc4-7ed31bb1faaf req-20418290-4ef0-4475-aaf2-73512c4425bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-plugged-c23e360e-b27a-4c79-a3ce-5c8850ccc846 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:06 compute-0 nova_compute[257802]: 2025-10-02 12:17:06.564 2 DEBUG oslo_concurrency.lockutils [req-85a5b4eb-a30d-47fc-9bc4-7ed31bb1faaf req-20418290-4ef0-4475-aaf2-73512c4425bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:06 compute-0 nova_compute[257802]: 2025-10-02 12:17:06.564 2 DEBUG oslo_concurrency.lockutils [req-85a5b4eb-a30d-47fc-9bc4-7ed31bb1faaf req-20418290-4ef0-4475-aaf2-73512c4425bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:06 compute-0 nova_compute[257802]: 2025-10-02 12:17:06.565 2 DEBUG oslo_concurrency.lockutils [req-85a5b4eb-a30d-47fc-9bc4-7ed31bb1faaf req-20418290-4ef0-4475-aaf2-73512c4425bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:06 compute-0 nova_compute[257802]: 2025-10-02 12:17:06.565 2 DEBUG nova.compute.manager [req-85a5b4eb-a30d-47fc-9bc4-7ed31bb1faaf req-20418290-4ef0-4475-aaf2-73512c4425bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Processing event network-vif-plugged-c23e360e-b27a-4c79-a3ce-5c8850ccc846 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:17:06 compute-0 nova_compute[257802]: 2025-10-02 12:17:06.587 2 DEBUG nova.compute.manager [req-6e860400-2f79-4d96-8edd-89f68523ddea req-86961fc1-1721-4f0f-950a-61dc3d36da93 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-plugged-4a1bf892-fd3f-40df-95f2-779ffc41ade1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:06 compute-0 nova_compute[257802]: 2025-10-02 12:17:06.587 2 DEBUG oslo_concurrency.lockutils [req-6e860400-2f79-4d96-8edd-89f68523ddea req-86961fc1-1721-4f0f-950a-61dc3d36da93 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:06 compute-0 nova_compute[257802]: 2025-10-02 12:17:06.587 2 DEBUG oslo_concurrency.lockutils [req-6e860400-2f79-4d96-8edd-89f68523ddea req-86961fc1-1721-4f0f-950a-61dc3d36da93 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:06 compute-0 nova_compute[257802]: 2025-10-02 12:17:06.587 2 DEBUG oslo_concurrency.lockutils [req-6e860400-2f79-4d96-8edd-89f68523ddea req-86961fc1-1721-4f0f-950a-61dc3d36da93 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:06 compute-0 nova_compute[257802]: 2025-10-02 12:17:06.588 2 DEBUG nova.compute.manager [req-6e860400-2f79-4d96-8edd-89f68523ddea req-86961fc1-1721-4f0f-950a-61dc3d36da93 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Processing event network-vif-plugged-4a1bf892-fd3f-40df-95f2-779ffc41ade1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:06.594 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[3fb15024-5f9b-4d61-91d0-3f99fa553114]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:06 compute-0 NetworkManager[44987]: <info>  [1759407426.6013] manager: (tap12d2f684-10): new Veth device (/org/freedesktop/NetworkManager/Devices/136)
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:06.600 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[684ea107-a42a-4632-aaa4-f5848d996985]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:06 compute-0 systemd-udevd[302864]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:06.640 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[79300abb-2957-45d3-b765-aee67ec08b9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:06.644 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[f9be9512-587e-49b0-a30a-d4e7377e9799]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:06 compute-0 NetworkManager[44987]: <info>  [1759407426.6698] device (tap12d2f684-10): carrier: link connected
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:06.677 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[cde8e1c9-bb34-4b7e-ac38-d91aa5097fa1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:06.706 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4a1303e0-180d-49e3-a9b1-c51fc513cde8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap12d2f684-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d1:a1:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 549429, 'reachable_time': 44206, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303031, 'error': None, 'target': 'ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:06.725 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f35457c7-36c2-4189-bb45-ba42e1b4bc1a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed1:a1ab'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 549429, 'tstamp': 549429}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303032, 'error': None, 'target': 'ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:06.753 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[feb29b7e-6b3e-4c15-a0a2-38f8ff93a03c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap12d2f684-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d1:a1:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 549429, 'reachable_time': 44206, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303033, 'error': None, 'target': 'ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:06 compute-0 nova_compute[257802]: 2025-10-02 12:17:06.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:06.823 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e011f09e-1d51-44e1-9871-41351d993c37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:06 compute-0 nova_compute[257802]: 2025-10-02 12:17:06.842 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407426.842219, d7e14705-7aeb-440f-8e7e-926cc5b5ab6f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:17:06 compute-0 nova_compute[257802]: 2025-10-02 12:17:06.843 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] VM Started (Lifecycle Event)
Oct 02 12:17:06 compute-0 nova_compute[257802]: 2025-10-02 12:17:06.861 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:17:06 compute-0 nova_compute[257802]: 2025-10-02 12:17:06.865 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407426.8424523, d7e14705-7aeb-440f-8e7e-926cc5b5ab6f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:17:06 compute-0 nova_compute[257802]: 2025-10-02 12:17:06.865 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] VM Paused (Lifecycle Event)
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:06.885 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9c24c496-33fc-47d2-86a8-ca01c34aa11f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:06.887 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12d2f684-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:06.887 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:06.888 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12d2f684-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:06 compute-0 NetworkManager[44987]: <info>  [1759407426.8909] manager: (tap12d2f684-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/137)
Oct 02 12:17:06 compute-0 kernel: tap12d2f684-10: entered promiscuous mode
Oct 02 12:17:06 compute-0 nova_compute[257802]: 2025-10-02 12:17:06.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:06.897 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap12d2f684-10, col_values=(('external_ids', {'iface-id': '1bd07b5c-4a44-47f4-bd3d-0ddce3cee0ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:06 compute-0 nova_compute[257802]: 2025-10-02 12:17:06.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:06 compute-0 ovn_controller[148183]: 2025-10-02T12:17:06Z|00294|binding|INFO|Releasing lport 1bd07b5c-4a44-47f4-bd3d-0ddce3cee0ae from this chassis (sb_readonly=0)
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:06.899 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/12d2f684-17ee-45ff-8e0b-9e61463b4f7c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/12d2f684-17ee-45ff-8e0b-9e61463b4f7c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:17:06 compute-0 nova_compute[257802]: 2025-10-02 12:17:06.900 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:17:06 compute-0 nova_compute[257802]: 2025-10-02 12:17:06.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:06.900 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[da31ec7e-758b-4e92-8ee4-3bacf37d96f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:06.901 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-12d2f684-17ee-45ff-8e0b-9e61463b4f7c
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/12d2f684-17ee-45ff-8e0b-9e61463b4f7c.pid.haproxy
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 12d2f684-17ee-45ff-8e0b-9e61463b4f7c
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:17:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:06.901 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c', 'env', 'PROCESS_TAG=haproxy-12d2f684-17ee-45ff-8e0b-9e61463b4f7c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/12d2f684-17ee-45ff-8e0b-9e61463b4f7c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:17:06 compute-0 nova_compute[257802]: 2025-10-02 12:17:06.903 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:17:06 compute-0 nova_compute[257802]: 2025-10-02 12:17:06.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:06 compute-0 nova_compute[257802]: 2025-10-02 12:17:06.926 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:17:07 compute-0 ceph-mon[73607]: pgmap v1589: 305 pgs: 305 active+clean; 656 MiB data, 1002 MiB used, 20 GiB / 21 GiB avail; 14 MiB/s rd, 13 MiB/s wr, 487 op/s
Oct 02 12:17:07 compute-0 ceph-mon[73607]: osdmap e236: 3 total, 3 up, 3 in
Oct 02 12:17:07 compute-0 podman[303066]: 2025-10-02 12:17:07.247398335 +0000 UTC m=+0.055179617 container create 77df440c4b10d02e30ba2b066391c8dd177254cc9194d21a55e155bed4877e17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:17:07 compute-0 nova_compute[257802]: 2025-10-02 12:17:07.261 2 DEBUG nova.compute.manager [req-82d6c64a-192e-4835-af6e-130be3954eb7 req-3c79348f-cd94-4955-9968-f3ff538e6d5c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-plugged-1b5f1c6a-6b73-4b1d-8560-6a38d7e59732 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:07 compute-0 nova_compute[257802]: 2025-10-02 12:17:07.261 2 DEBUG oslo_concurrency.lockutils [req-82d6c64a-192e-4835-af6e-130be3954eb7 req-3c79348f-cd94-4955-9968-f3ff538e6d5c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:07 compute-0 nova_compute[257802]: 2025-10-02 12:17:07.262 2 DEBUG oslo_concurrency.lockutils [req-82d6c64a-192e-4835-af6e-130be3954eb7 req-3c79348f-cd94-4955-9968-f3ff538e6d5c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:07 compute-0 nova_compute[257802]: 2025-10-02 12:17:07.262 2 DEBUG oslo_concurrency.lockutils [req-82d6c64a-192e-4835-af6e-130be3954eb7 req-3c79348f-cd94-4955-9968-f3ff538e6d5c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:07 compute-0 nova_compute[257802]: 2025-10-02 12:17:07.262 2 DEBUG nova.compute.manager [req-82d6c64a-192e-4835-af6e-130be3954eb7 req-3c79348f-cd94-4955-9968-f3ff538e6d5c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Processing event network-vif-plugged-1b5f1c6a-6b73-4b1d-8560-6a38d7e59732 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:17:07 compute-0 nova_compute[257802]: 2025-10-02 12:17:07.262 2 DEBUG nova.compute.manager [req-82d6c64a-192e-4835-af6e-130be3954eb7 req-3c79348f-cd94-4955-9968-f3ff538e6d5c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-plugged-1b5f1c6a-6b73-4b1d-8560-6a38d7e59732 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:07 compute-0 nova_compute[257802]: 2025-10-02 12:17:07.262 2 DEBUG oslo_concurrency.lockutils [req-82d6c64a-192e-4835-af6e-130be3954eb7 req-3c79348f-cd94-4955-9968-f3ff538e6d5c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:07 compute-0 nova_compute[257802]: 2025-10-02 12:17:07.262 2 DEBUG oslo_concurrency.lockutils [req-82d6c64a-192e-4835-af6e-130be3954eb7 req-3c79348f-cd94-4955-9968-f3ff538e6d5c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:07 compute-0 nova_compute[257802]: 2025-10-02 12:17:07.263 2 DEBUG oslo_concurrency.lockutils [req-82d6c64a-192e-4835-af6e-130be3954eb7 req-3c79348f-cd94-4955-9968-f3ff538e6d5c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:07 compute-0 nova_compute[257802]: 2025-10-02 12:17:07.263 2 DEBUG nova.compute.manager [req-82d6c64a-192e-4835-af6e-130be3954eb7 req-3c79348f-cd94-4955-9968-f3ff538e6d5c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] No event matching network-vif-plugged-1b5f1c6a-6b73-4b1d-8560-6a38d7e59732 in dict_keys([('network-vif-plugged', 'd56785b1-aeee-49be-af2d-71e8fc0a6c9b'), ('network-vif-plugged', 'eea85d66-4089-4708-9cb0-a7d2ab18e6ec'), ('network-vif-plugged', '7eba4256-8a01-49a2-aec1-98d48eda4d3f'), ('network-vif-plugged', '137dbaa1-1ef7-4aea-bc14-9f99b461690c')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 02 12:17:07 compute-0 nova_compute[257802]: 2025-10-02 12:17:07.263 2 WARNING nova.compute.manager [req-82d6c64a-192e-4835-af6e-130be3954eb7 req-3c79348f-cd94-4955-9968-f3ff538e6d5c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received unexpected event network-vif-plugged-1b5f1c6a-6b73-4b1d-8560-6a38d7e59732 for instance with vm_state building and task_state spawning.
Oct 02 12:17:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Oct 02 12:17:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Oct 02 12:17:07 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Oct 02 12:17:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1592: 305 pgs: 305 active+clean; 671 MiB data, 1011 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 6.6 MiB/s wr, 253 op/s
Oct 02 12:17:07 compute-0 systemd[1]: Started libpod-conmon-77df440c4b10d02e30ba2b066391c8dd177254cc9194d21a55e155bed4877e17.scope.
Oct 02 12:17:07 compute-0 podman[303066]: 2025-10-02 12:17:07.218338681 +0000 UTC m=+0.026120023 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:17:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:17:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/327582ce23201baa3800f67f27ab9384fcf9a9ad8316dbe8161716d1ce1ea7ef/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:17:07 compute-0 podman[303066]: 2025-10-02 12:17:07.345710294 +0000 UTC m=+0.153491676 container init 77df440c4b10d02e30ba2b066391c8dd177254cc9194d21a55e155bed4877e17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 12:17:07 compute-0 podman[303066]: 2025-10-02 12:17:07.355214497 +0000 UTC m=+0.162995809 container start 77df440c4b10d02e30ba2b066391c8dd177254cc9194d21a55e155bed4877e17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 02 12:17:07 compute-0 neutron-haproxy-ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c[303081]: [NOTICE]   (303085) : New worker (303087) forked
Oct 02 12:17:07 compute-0 neutron-haproxy-ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c[303081]: [NOTICE]   (303085) : Loading success.
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.418 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 4a1bf892-fd3f-40df-95f2-779ffc41ade1 in datapath 12d2f684-17ee-45ff-8e0b-9e61463b4f7c unbound from our chassis
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.420 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 12d2f684-17ee-45ff-8e0b-9e61463b4f7c
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.443 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ef448f3a-c654-4f82-acf0-c5f442ad418a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.493 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[47456ab5-623c-474b-8494-34156f1afd63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.500 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[4d8dd8c5-c8db-40d2-a2a7-4d3be0b7c67e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.553 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[3e24afee-1292-4066-9703-9a0eb2d99602]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.581 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4af44cb2-94ea-4f16-9c22-31f13db47dc4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap12d2f684-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d1:a1:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 6, 'rx_bytes': 306, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 6, 'rx_bytes': 306, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 549429, 'reachable_time': 44206, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303101, 'error': None, 'target': 'ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.613 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a68fe9-f8ec-4cd6-8aa2-b71f3a0ed609]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap12d2f684-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 549447, 'tstamp': 549447}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303102, 'error': None, 'target': 'ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap12d2f684-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 549450, 'tstamp': 549450}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303102, 'error': None, 'target': 'ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.615 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12d2f684-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:07 compute-0 nova_compute[257802]: 2025-10-02 12:17:07.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:07 compute-0 nova_compute[257802]: 2025-10-02 12:17:07.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.620 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12d2f684-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.621 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.622 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap12d2f684-10, col_values=(('external_ids', {'iface-id': '1bd07b5c-4a44-47f4-bd3d-0ddce3cee0ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.623 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.625 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 7eba4256-8a01-49a2-aec1-98d48eda4d3f in datapath 8f6e5e22-d751-48cd-82a1-be33a11e92d9 unbound from our chassis
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.630 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f6e5e22-d751-48cd-82a1-be33a11e92d9
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.652 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2b1d706f-fc9b-4a0b-8011-dc74d41870b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.654 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8f6e5e22-d1 in ovnmeta-8f6e5e22-d751-48cd-82a1-be33a11e92d9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.657 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8f6e5e22-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.657 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d2d0e01e-9aa7-4140-9cc7-a6cf480ccf7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.660 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d4e81607-9fc6-426f-b08c-f9f59759ba97]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.682 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[86901206-22fb-4be1-ab00-9d9eaa09ce70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.714 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6d10c2be-f533-4862-8cfa-36c5e94ca2e4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.759 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[f53e719f-054a-4b61-8d72-d8fd907382e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.769 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8fa36df1-3313-4ab0-b46f-84a91ce6c4ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:07 compute-0 NetworkManager[44987]: <info>  [1759407427.7711] manager: (tap8f6e5e22-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/138)
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.806 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[b44c928a-1bc6-40e9-a3d0-74f06a69ee49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.808 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[c3114601-829a-4ff5-8abb-5b21e1e65e59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:07 compute-0 NetworkManager[44987]: <info>  [1759407427.8405] device (tap8f6e5e22-d0): carrier: link connected
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.847 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[9a3419ce-170d-47c6-8abc-bb0a7fd758f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.869 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4bb14bfd-d731-4fe8-9c67-b73cb728902b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f6e5e22-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:a7:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 86], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 549546, 'reachable_time': 24230, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303113, 'error': None, 'target': 'ovnmeta-8f6e5e22-d751-48cd-82a1-be33a11e92d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:07 compute-0 nova_compute[257802]: 2025-10-02 12:17:07.884 2 DEBUG nova.network.neutron [req-0ca3a373-8c95-4cba-a4e8-ffd0f111e73d req-1f0b83e7-4199-4249-9060-eb41dbe89f99 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Updated VIF entry in instance network info cache for port 137dbaa1-1ef7-4aea-bc14-9f99b461690c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:17:07 compute-0 nova_compute[257802]: 2025-10-02 12:17:07.885 2 DEBUG nova.network.neutron [req-0ca3a373-8c95-4cba-a4e8-ffd0f111e73d req-1f0b83e7-4199-4249-9060-eb41dbe89f99 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Updating instance_info_cache with network_info: [{"id": "1b5f1c6a-6b73-4b1d-8560-6a38d7e59732", "address": "fa:16:3e:c2:83:d4", "network": {"id": "18c9b866-0d06-4d2c-a073-d724f233a261", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-215265968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b5f1c6a-6b", "ovs_interfaceid": "1b5f1c6a-6b73-4b1d-8560-6a38d7e59732", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "address": "fa:16:3e:86:0a:06", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd56785b1-ae", "ovs_interfaceid": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "address": "fa:16:3e:3b:4c:02", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a1bf892-fd", "ovs_interfaceid": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "c23e360e-b27a-4c79-a3ce-5c8850ccc846", "address": "fa:16:3e:1d:5e:56", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.222", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc23e360e-b2", "ovs_interfaceid": "c23e360e-b27a-4c79-a3ce-5c8850ccc846", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "address": "fa:16:3e:11:f2:18", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeea85d66-40", "ovs_interfaceid": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "address": "fa:16:3e:ee:27:7c", "network": {"id": "8f6e5e22-d751-48cd-82a1-be33a11e92d9", "bridge": "br-int", "label": "tempest-device-tagging-net2-1945824075", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7eba4256-8a", "ovs_interfaceid": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "137dbaa1-1ef7-4aea-bc14-9f99b461690c", "address": "fa:16:3e:4f:43:a5", "network": {"id": "8f6e5e22-d751-48cd-82a1-be33a11e92d9", "bridge": "br-int", "label": "tempest-device-tagging-net2-1945824075", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap137dbaa1-1e", "ovs_interfaceid": "137dbaa1-1ef7-4aea-bc14-9f99b461690c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.889 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d67ae676-1d0f-4550-9c01-98068507d7fa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2d:a735'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 549546, 'tstamp': 549546}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303114, 'error': None, 'target': 'ovnmeta-8f6e5e22-d751-48cd-82a1-be33a11e92d9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.919 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f20c6b92-1d3d-44ca-a6c0-2b97b65e5d25]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f6e5e22-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:a7:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 86], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 549546, 'reachable_time': 24230, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303115, 'error': None, 'target': 'ovnmeta-8f6e5e22-d751-48cd-82a1-be33a11e92d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:07 compute-0 nova_compute[257802]: 2025-10-02 12:17:07.925 2 DEBUG oslo_concurrency.lockutils [req-0ca3a373-8c95-4cba-a4e8-ffd0f111e73d req-1f0b83e7-4199-4249-9060-eb41dbe89f99 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:17:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:07.956 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[76481d68-8187-4bf8-97d8-dd3dd8d301bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.023 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c8cda503-5c68-444b-af8e-1e55ca89da94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.026 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f6e5e22-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.027 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.027 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f6e5e22-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:08 compute-0 NetworkManager[44987]: <info>  [1759407428.0301] manager: (tap8f6e5e22-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/139)
Oct 02 12:17:08 compute-0 kernel: tap8f6e5e22-d0: entered promiscuous mode
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.034 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f6e5e22-d0, col_values=(('external_ids', {'iface-id': 'b8f144d1-47bb-4eb6-881b-3e350b7e079a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:08 compute-0 ovn_controller[148183]: 2025-10-02T12:17:08Z|00295|binding|INFO|Releasing lport b8f144d1-47bb-4eb6-881b-3e350b7e079a from this chassis (sb_readonly=0)
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.073 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8f6e5e22-d751-48cd-82a1-be33a11e92d9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8f6e5e22-d751-48cd-82a1-be33a11e92d9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.074 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c59dcdcb-6e75-4a03-9ef9-2086317a0f11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.075 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-8f6e5e22-d751-48cd-82a1-be33a11e92d9
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/8f6e5e22-d751-48cd-82a1-be33a11e92d9.pid.haproxy
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 8f6e5e22-d751-48cd-82a1-be33a11e92d9
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.076 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8f6e5e22-d751-48cd-82a1-be33a11e92d9', 'env', 'PROCESS_TAG=haproxy-8f6e5e22-d751-48cd-82a1-be33a11e92d9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8f6e5e22-d751-48cd-82a1-be33a11e92d9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:17:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:17:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:08.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:17:08 compute-0 ceph-mon[73607]: osdmap e237: 3 total, 3 up, 3 in
Oct 02 12:17:08 compute-0 ceph-mon[73607]: pgmap v1592: 305 pgs: 305 active+clean; 671 MiB data, 1011 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 6.6 MiB/s wr, 253 op/s
Oct 02 12:17:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:08.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:08 compute-0 podman[303149]: 2025-10-02 12:17:08.515472832 +0000 UTC m=+0.064175509 container create 021dd0c938eefcafd10d2fdf0a48f2bb334bce8dc0666c22dfb6a986d617b10e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8f6e5e22-d751-48cd-82a1-be33a11e92d9, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 12:17:08 compute-0 systemd[1]: Started libpod-conmon-021dd0c938eefcafd10d2fdf0a48f2bb334bce8dc0666c22dfb6a986d617b10e.scope.
Oct 02 12:17:08 compute-0 podman[303149]: 2025-10-02 12:17:08.479045857 +0000 UTC m=+0.027748584 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:17:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:17:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc7e1b1365c6cd4aa80ce03b6a91593ac328a484868386802d55eb343712feb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:17:08 compute-0 podman[303149]: 2025-10-02 12:17:08.614135429 +0000 UTC m=+0.162838106 container init 021dd0c938eefcafd10d2fdf0a48f2bb334bce8dc0666c22dfb6a986d617b10e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8f6e5e22-d751-48cd-82a1-be33a11e92d9, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:17:08 compute-0 podman[303149]: 2025-10-02 12:17:08.619182643 +0000 UTC m=+0.167885330 container start 021dd0c938eefcafd10d2fdf0a48f2bb334bce8dc0666c22dfb6a986d617b10e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8f6e5e22-d751-48cd-82a1-be33a11e92d9, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 12:17:08 compute-0 neutron-haproxy-ovnmeta-8f6e5e22-d751-48cd-82a1-be33a11e92d9[303165]: [NOTICE]   (303169) : New worker (303171) forked
Oct 02 12:17:08 compute-0 neutron-haproxy-ovnmeta-8f6e5e22-d751-48cd-82a1-be33a11e92d9[303165]: [NOTICE]   (303169) : Loading success.
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.675 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 137dbaa1-1ef7-4aea-bc14-9f99b461690c in datapath 8f6e5e22-d751-48cd-82a1-be33a11e92d9 unbound from our chassis
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.678 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f6e5e22-d751-48cd-82a1-be33a11e92d9
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.691 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1bf22502-d744-498b-ad46-59af8758bb1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.696 2 DEBUG nova.compute.manager [req-10cc4826-9875-4a3d-bd8d-4ca79454e9e6 req-c1d444a6-935b-4cfa-ac71-8707999e6bf9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-plugged-c23e360e-b27a-4c79-a3ce-5c8850ccc846 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.697 2 DEBUG oslo_concurrency.lockutils [req-10cc4826-9875-4a3d-bd8d-4ca79454e9e6 req-c1d444a6-935b-4cfa-ac71-8707999e6bf9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.697 2 DEBUG oslo_concurrency.lockutils [req-10cc4826-9875-4a3d-bd8d-4ca79454e9e6 req-c1d444a6-935b-4cfa-ac71-8707999e6bf9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.697 2 DEBUG oslo_concurrency.lockutils [req-10cc4826-9875-4a3d-bd8d-4ca79454e9e6 req-c1d444a6-935b-4cfa-ac71-8707999e6bf9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.697 2 DEBUG nova.compute.manager [req-10cc4826-9875-4a3d-bd8d-4ca79454e9e6 req-c1d444a6-935b-4cfa-ac71-8707999e6bf9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] No event matching network-vif-plugged-c23e360e-b27a-4c79-a3ce-5c8850ccc846 in dict_keys([('network-vif-plugged', 'd56785b1-aeee-49be-af2d-71e8fc0a6c9b'), ('network-vif-plugged', 'eea85d66-4089-4708-9cb0-a7d2ab18e6ec'), ('network-vif-plugged', '7eba4256-8a01-49a2-aec1-98d48eda4d3f'), ('network-vif-plugged', '137dbaa1-1ef7-4aea-bc14-9f99b461690c')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.697 2 WARNING nova.compute.manager [req-10cc4826-9875-4a3d-bd8d-4ca79454e9e6 req-c1d444a6-935b-4cfa-ac71-8707999e6bf9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received unexpected event network-vif-plugged-c23e360e-b27a-4c79-a3ce-5c8850ccc846 for instance with vm_state building and task_state spawning.
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.697 2 DEBUG nova.compute.manager [req-10cc4826-9875-4a3d-bd8d-4ca79454e9e6 req-c1d444a6-935b-4cfa-ac71-8707999e6bf9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-plugged-137dbaa1-1ef7-4aea-bc14-9f99b461690c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.698 2 DEBUG oslo_concurrency.lockutils [req-10cc4826-9875-4a3d-bd8d-4ca79454e9e6 req-c1d444a6-935b-4cfa-ac71-8707999e6bf9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.698 2 DEBUG oslo_concurrency.lockutils [req-10cc4826-9875-4a3d-bd8d-4ca79454e9e6 req-c1d444a6-935b-4cfa-ac71-8707999e6bf9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.698 2 DEBUG oslo_concurrency.lockutils [req-10cc4826-9875-4a3d-bd8d-4ca79454e9e6 req-c1d444a6-935b-4cfa-ac71-8707999e6bf9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.698 2 DEBUG nova.compute.manager [req-10cc4826-9875-4a3d-bd8d-4ca79454e9e6 req-c1d444a6-935b-4cfa-ac71-8707999e6bf9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Processing event network-vif-plugged-137dbaa1-1ef7-4aea-bc14-9f99b461690c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.698 2 DEBUG nova.compute.manager [req-10cc4826-9875-4a3d-bd8d-4ca79454e9e6 req-c1d444a6-935b-4cfa-ac71-8707999e6bf9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-plugged-137dbaa1-1ef7-4aea-bc14-9f99b461690c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.698 2 DEBUG oslo_concurrency.lockutils [req-10cc4826-9875-4a3d-bd8d-4ca79454e9e6 req-c1d444a6-935b-4cfa-ac71-8707999e6bf9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.698 2 DEBUG oslo_concurrency.lockutils [req-10cc4826-9875-4a3d-bd8d-4ca79454e9e6 req-c1d444a6-935b-4cfa-ac71-8707999e6bf9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.699 2 DEBUG oslo_concurrency.lockutils [req-10cc4826-9875-4a3d-bd8d-4ca79454e9e6 req-c1d444a6-935b-4cfa-ac71-8707999e6bf9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.699 2 DEBUG nova.compute.manager [req-10cc4826-9875-4a3d-bd8d-4ca79454e9e6 req-c1d444a6-935b-4cfa-ac71-8707999e6bf9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] No event matching network-vif-plugged-137dbaa1-1ef7-4aea-bc14-9f99b461690c in dict_keys([('network-vif-plugged', 'd56785b1-aeee-49be-af2d-71e8fc0a6c9b'), ('network-vif-plugged', 'eea85d66-4089-4708-9cb0-a7d2ab18e6ec'), ('network-vif-plugged', '7eba4256-8a01-49a2-aec1-98d48eda4d3f')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.699 2 WARNING nova.compute.manager [req-10cc4826-9875-4a3d-bd8d-4ca79454e9e6 req-c1d444a6-935b-4cfa-ac71-8707999e6bf9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received unexpected event network-vif-plugged-137dbaa1-1ef7-4aea-bc14-9f99b461690c for instance with vm_state building and task_state spawning.
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.708 2 DEBUG nova.compute.manager [req-4db39ab0-1f7a-4d8f-8480-d65faae8e5a3 req-bd1451d7-8141-42dc-9cb5-cb31fd79350f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-plugged-4a1bf892-fd3f-40df-95f2-779ffc41ade1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.709 2 DEBUG oslo_concurrency.lockutils [req-4db39ab0-1f7a-4d8f-8480-d65faae8e5a3 req-bd1451d7-8141-42dc-9cb5-cb31fd79350f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.709 2 DEBUG oslo_concurrency.lockutils [req-4db39ab0-1f7a-4d8f-8480-d65faae8e5a3 req-bd1451d7-8141-42dc-9cb5-cb31fd79350f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.709 2 DEBUG oslo_concurrency.lockutils [req-4db39ab0-1f7a-4d8f-8480-d65faae8e5a3 req-bd1451d7-8141-42dc-9cb5-cb31fd79350f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.709 2 DEBUG nova.compute.manager [req-4db39ab0-1f7a-4d8f-8480-d65faae8e5a3 req-bd1451d7-8141-42dc-9cb5-cb31fd79350f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] No event matching network-vif-plugged-4a1bf892-fd3f-40df-95f2-779ffc41ade1 in dict_keys([('network-vif-plugged', 'd56785b1-aeee-49be-af2d-71e8fc0a6c9b'), ('network-vif-plugged', 'eea85d66-4089-4708-9cb0-a7d2ab18e6ec'), ('network-vif-plugged', '7eba4256-8a01-49a2-aec1-98d48eda4d3f')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.709 2 WARNING nova.compute.manager [req-4db39ab0-1f7a-4d8f-8480-d65faae8e5a3 req-bd1451d7-8141-42dc-9cb5-cb31fd79350f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received unexpected event network-vif-plugged-4a1bf892-fd3f-40df-95f2-779ffc41ade1 for instance with vm_state building and task_state spawning.
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.710 2 DEBUG nova.compute.manager [req-4db39ab0-1f7a-4d8f-8480-d65faae8e5a3 req-bd1451d7-8141-42dc-9cb5-cb31fd79350f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-plugged-d56785b1-aeee-49be-af2d-71e8fc0a6c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.710 2 DEBUG oslo_concurrency.lockutils [req-4db39ab0-1f7a-4d8f-8480-d65faae8e5a3 req-bd1451d7-8141-42dc-9cb5-cb31fd79350f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.710 2 DEBUG oslo_concurrency.lockutils [req-4db39ab0-1f7a-4d8f-8480-d65faae8e5a3 req-bd1451d7-8141-42dc-9cb5-cb31fd79350f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.710 2 DEBUG oslo_concurrency.lockutils [req-4db39ab0-1f7a-4d8f-8480-d65faae8e5a3 req-bd1451d7-8141-42dc-9cb5-cb31fd79350f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.710 2 DEBUG nova.compute.manager [req-4db39ab0-1f7a-4d8f-8480-d65faae8e5a3 req-bd1451d7-8141-42dc-9cb5-cb31fd79350f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Processing event network-vif-plugged-d56785b1-aeee-49be-af2d-71e8fc0a6c9b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.710 2 DEBUG nova.compute.manager [req-4db39ab0-1f7a-4d8f-8480-d65faae8e5a3 req-bd1451d7-8141-42dc-9cb5-cb31fd79350f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-plugged-d56785b1-aeee-49be-af2d-71e8fc0a6c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.710 2 DEBUG oslo_concurrency.lockutils [req-4db39ab0-1f7a-4d8f-8480-d65faae8e5a3 req-bd1451d7-8141-42dc-9cb5-cb31fd79350f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.711 2 DEBUG oslo_concurrency.lockutils [req-4db39ab0-1f7a-4d8f-8480-d65faae8e5a3 req-bd1451d7-8141-42dc-9cb5-cb31fd79350f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.711 2 DEBUG oslo_concurrency.lockutils [req-4db39ab0-1f7a-4d8f-8480-d65faae8e5a3 req-bd1451d7-8141-42dc-9cb5-cb31fd79350f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.711 2 DEBUG nova.compute.manager [req-4db39ab0-1f7a-4d8f-8480-d65faae8e5a3 req-bd1451d7-8141-42dc-9cb5-cb31fd79350f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] No event matching network-vif-plugged-d56785b1-aeee-49be-af2d-71e8fc0a6c9b in dict_keys([('network-vif-plugged', 'eea85d66-4089-4708-9cb0-a7d2ab18e6ec'), ('network-vif-plugged', '7eba4256-8a01-49a2-aec1-98d48eda4d3f')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.711 2 WARNING nova.compute.manager [req-4db39ab0-1f7a-4d8f-8480-d65faae8e5a3 req-bd1451d7-8141-42dc-9cb5-cb31fd79350f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received unexpected event network-vif-plugged-d56785b1-aeee-49be-af2d-71e8fc0a6c9b for instance with vm_state building and task_state spawning.
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.728 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[0e53c2a2-576b-41c9-9a81-1970380530cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.734 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[14b9101f-f648-428f-bf66-8f0ea6b2413a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Oct 02 12:17:08 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.767 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[63356c12-8d15-4fe4-b4ed-94958d1c9bcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.784 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1a5c56ad-c48e-48df-876b-8a00acb3649b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f6e5e22-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:a7:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 86], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 549546, 'reachable_time': 24230, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303185, 'error': None, 'target': 'ovnmeta-8f6e5e22-d751-48cd-82a1-be33a11e92d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.801 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[dcbc24d9-9a45-4948-8b4b-3f044c0a67f1]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.2.2.2'], ['IFA_LOCAL', '10.2.2.2'], ['IFA_BROADCAST', '10.2.2.255'], ['IFA_LABEL', 'tap8f6e5e22-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 549561, 'tstamp': 549561}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303186, 'error': None, 'target': 'ovnmeta-8f6e5e22-d751-48cd-82a1-be33a11e92d9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f6e5e22-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 549564, 'tstamp': 549564}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303186, 'error': None, 'target': 'ovnmeta-8f6e5e22-d751-48cd-82a1-be33a11e92d9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.804 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f6e5e22-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:08 compute-0 nova_compute[257802]: 2025-10-02 12:17:08.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.809 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f6e5e22-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.809 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.810 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f6e5e22-d0, col_values=(('external_ids', {'iface-id': 'b8f144d1-47bb-4eb6-881b-3e350b7e079a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.810 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.811 158261 INFO neutron.agent.ovn.metadata.agent [-] Port eea85d66-4089-4708-9cb0-a7d2ab18e6ec in datapath 12d2f684-17ee-45ff-8e0b-9e61463b4f7c unbound from our chassis
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.813 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 12d2f684-17ee-45ff-8e0b-9e61463b4f7c
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.830 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[bcf76d15-a839-4e2e-9a62-67bfeb6a1476]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.875 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[897307f8-e70d-4938-a5b1-55576b496e20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.880 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[e9a73318-cd02-4241-be88-1a206df049d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.930 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[6c02967b-232d-4183-af4e-a401abf0c710]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.955 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3751718b-bed5-4b5a-87f5-35e565bc3fd9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap12d2f684-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d1:a1:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 8, 'rx_bytes': 702, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 8, 'rx_bytes': 702, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 549429, 'reachable_time': 44206, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 7, 'inoctets': 604, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 7, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 604, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 7, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303192, 'error': None, 'target': 'ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.973 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ff383c74-1ab1-49b0-9c9c-56c16bef3571]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap12d2f684-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 549447, 'tstamp': 549447}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303193, 'error': None, 'target': 'ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap12d2f684-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 549450, 'tstamp': 549450}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303193, 'error': None, 'target': 'ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:08.975 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12d2f684-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:09.014 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12d2f684-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:09.015 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:17:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:09.015 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap12d2f684-10, col_values=(('external_ids', {'iface-id': '1bd07b5c-4a44-47f4-bd3d-0ddce3cee0ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:09.015 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:17:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:09.016 158261 INFO neutron.agent.ovn.metadata.agent [-] Port c23e360e-b27a-4c79-a3ce-5c8850ccc846 in datapath 12d2f684-17ee-45ff-8e0b-9e61463b4f7c unbound from our chassis
Oct 02 12:17:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:09.018 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 12d2f684-17ee-45ff-8e0b-9e61463b4f7c
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:09.033 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2c55e6da-fb96-4389-849f-12d39d47dc33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:09.063 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[f424b3a1-9fc3-48cd-9383-cb3e487df6ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:09.066 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[86739ebf-43cc-43cb-85c6-eca4b0ca7642]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:09.100 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[19133a4e-baae-45c6-84de-1d3540c38f56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:09.124 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[21ea052c-3981-4766-a870-09db69423a0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap12d2f684-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d1:a1:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 832, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 832, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 549429, 'reachable_time': 44206, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303199, 'error': None, 'target': 'ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:09.142 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1d1a521f-3478-450f-a699-b44af170f9e1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap12d2f684-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 549447, 'tstamp': 549447}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303200, 'error': None, 'target': 'ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap12d2f684-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 549450, 'tstamp': 549450}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303200, 'error': None, 'target': 'ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:09.143 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12d2f684-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:09.146 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12d2f684-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:09.146 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:17:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:09.147 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap12d2f684-10, col_values=(('external_ids', {'iface-id': '1bd07b5c-4a44-47f4-bd3d-0ddce3cee0ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:09.147 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:17:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1594: 305 pgs: 305 active+clean; 702 MiB data, 1023 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 7.0 MiB/s wr, 281 op/s
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.436 2 DEBUG nova.compute.manager [req-44a23c20-1660-4599-aa4e-20f0e5c86107 req-755929b5-f160-40dd-99dc-265ae045efa9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-plugged-7eba4256-8a01-49a2-aec1-98d48eda4d3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.436 2 DEBUG oslo_concurrency.lockutils [req-44a23c20-1660-4599-aa4e-20f0e5c86107 req-755929b5-f160-40dd-99dc-265ae045efa9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.436 2 DEBUG oslo_concurrency.lockutils [req-44a23c20-1660-4599-aa4e-20f0e5c86107 req-755929b5-f160-40dd-99dc-265ae045efa9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.437 2 DEBUG oslo_concurrency.lockutils [req-44a23c20-1660-4599-aa4e-20f0e5c86107 req-755929b5-f160-40dd-99dc-265ae045efa9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.437 2 DEBUG nova.compute.manager [req-44a23c20-1660-4599-aa4e-20f0e5c86107 req-755929b5-f160-40dd-99dc-265ae045efa9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Processing event network-vif-plugged-7eba4256-8a01-49a2-aec1-98d48eda4d3f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.437 2 DEBUG nova.compute.manager [req-44a23c20-1660-4599-aa4e-20f0e5c86107 req-755929b5-f160-40dd-99dc-265ae045efa9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-plugged-7eba4256-8a01-49a2-aec1-98d48eda4d3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.437 2 DEBUG oslo_concurrency.lockutils [req-44a23c20-1660-4599-aa4e-20f0e5c86107 req-755929b5-f160-40dd-99dc-265ae045efa9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.438 2 DEBUG oslo_concurrency.lockutils [req-44a23c20-1660-4599-aa4e-20f0e5c86107 req-755929b5-f160-40dd-99dc-265ae045efa9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.438 2 DEBUG oslo_concurrency.lockutils [req-44a23c20-1660-4599-aa4e-20f0e5c86107 req-755929b5-f160-40dd-99dc-265ae045efa9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.438 2 DEBUG nova.compute.manager [req-44a23c20-1660-4599-aa4e-20f0e5c86107 req-755929b5-f160-40dd-99dc-265ae045efa9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] No event matching network-vif-plugged-7eba4256-8a01-49a2-aec1-98d48eda4d3f in dict_keys([('network-vif-plugged', 'eea85d66-4089-4708-9cb0-a7d2ab18e6ec')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.439 2 WARNING nova.compute.manager [req-44a23c20-1660-4599-aa4e-20f0e5c86107 req-755929b5-f160-40dd-99dc-265ae045efa9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received unexpected event network-vif-plugged-7eba4256-8a01-49a2-aec1-98d48eda4d3f for instance with vm_state building and task_state spawning.
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.439 2 DEBUG nova.compute.manager [req-44a23c20-1660-4599-aa4e-20f0e5c86107 req-755929b5-f160-40dd-99dc-265ae045efa9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-plugged-eea85d66-4089-4708-9cb0-a7d2ab18e6ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.439 2 DEBUG oslo_concurrency.lockutils [req-44a23c20-1660-4599-aa4e-20f0e5c86107 req-755929b5-f160-40dd-99dc-265ae045efa9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.439 2 DEBUG oslo_concurrency.lockutils [req-44a23c20-1660-4599-aa4e-20f0e5c86107 req-755929b5-f160-40dd-99dc-265ae045efa9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.440 2 DEBUG oslo_concurrency.lockutils [req-44a23c20-1660-4599-aa4e-20f0e5c86107 req-755929b5-f160-40dd-99dc-265ae045efa9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.440 2 DEBUG nova.compute.manager [req-44a23c20-1660-4599-aa4e-20f0e5c86107 req-755929b5-f160-40dd-99dc-265ae045efa9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Processing event network-vif-plugged-eea85d66-4089-4708-9cb0-a7d2ab18e6ec _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.440 2 DEBUG nova.compute.manager [req-44a23c20-1660-4599-aa4e-20f0e5c86107 req-755929b5-f160-40dd-99dc-265ae045efa9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-plugged-eea85d66-4089-4708-9cb0-a7d2ab18e6ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.440 2 DEBUG oslo_concurrency.lockutils [req-44a23c20-1660-4599-aa4e-20f0e5c86107 req-755929b5-f160-40dd-99dc-265ae045efa9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.440 2 DEBUG oslo_concurrency.lockutils [req-44a23c20-1660-4599-aa4e-20f0e5c86107 req-755929b5-f160-40dd-99dc-265ae045efa9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.441 2 DEBUG oslo_concurrency.lockutils [req-44a23c20-1660-4599-aa4e-20f0e5c86107 req-755929b5-f160-40dd-99dc-265ae045efa9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.441 2 DEBUG nova.compute.manager [req-44a23c20-1660-4599-aa4e-20f0e5c86107 req-755929b5-f160-40dd-99dc-265ae045efa9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] No waiting events found dispatching network-vif-plugged-eea85d66-4089-4708-9cb0-a7d2ab18e6ec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.441 2 WARNING nova.compute.manager [req-44a23c20-1660-4599-aa4e-20f0e5c86107 req-755929b5-f160-40dd-99dc-265ae045efa9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received unexpected event network-vif-plugged-eea85d66-4089-4708-9cb0-a7d2ab18e6ec for instance with vm_state building and task_state spawning.
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.442 2 DEBUG nova.compute.manager [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.447 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407429.4471729, d7e14705-7aeb-440f-8e7e-926cc5b5ab6f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.447 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] VM Resumed (Lifecycle Event)
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.449 2 DEBUG nova.virt.libvirt.driver [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.455 2 INFO nova.virt.libvirt.driver [-] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Instance spawned successfully.
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.456 2 DEBUG nova.virt.libvirt.driver [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.476 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.480 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.491 2 DEBUG nova.virt.libvirt.driver [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.491 2 DEBUG nova.virt.libvirt.driver [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.492 2 DEBUG nova.virt.libvirt.driver [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.492 2 DEBUG nova.virt.libvirt.driver [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.493 2 DEBUG nova.virt.libvirt.driver [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.493 2 DEBUG nova.virt.libvirt.driver [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.538 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.591 2 INFO nova.compute.manager [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Took 26.18 seconds to spawn the instance on the hypervisor.
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.592 2 DEBUG nova.compute.manager [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.663 2 INFO nova.compute.manager [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Took 31.49 seconds to build instance.
Oct 02 12:17:09 compute-0 nova_compute[257802]: 2025-10-02 12:17:09.685 2 DEBUG oslo_concurrency.lockutils [None req-e73feff1-f090-4632-9662-0b38f1ad44fb d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 31.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:09 compute-0 ceph-mon[73607]: osdmap e238: 3 total, 3 up, 3 in
Oct 02 12:17:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:17:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:10.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:17:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:17:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:10.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:17:10 compute-0 ceph-mon[73607]: pgmap v1594: 305 pgs: 305 active+clean; 702 MiB data, 1023 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 7.0 MiB/s wr, 281 op/s
Oct 02 12:17:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 305 active+clean; 702 MiB data, 1023 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 159 op/s
Oct 02 12:17:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Oct 02 12:17:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Oct 02 12:17:11 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Oct 02 12:17:11 compute-0 nova_compute[257802]: 2025-10-02 12:17:11.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:17:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:12.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:17:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:12.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:17:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:17:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:17:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:17:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:17:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:17:12 compute-0 ceph-mon[73607]: pgmap v1595: 305 pgs: 305 active+clean; 702 MiB data, 1023 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 159 op/s
Oct 02 12:17:12 compute-0 ceph-mon[73607]: osdmap e239: 3 total, 3 up, 3 in
Oct 02 12:17:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Oct 02 12:17:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Oct 02 12:17:12 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Oct 02 12:17:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1598: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 703 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.9 MiB/s wr, 223 op/s
Oct 02 12:17:13 compute-0 nova_compute[257802]: 2025-10-02 12:17:13.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:13 compute-0 NetworkManager[44987]: <info>  [1759407433.4080] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/140)
Oct 02 12:17:13 compute-0 NetworkManager[44987]: <info>  [1759407433.4087] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/141)
Oct 02 12:17:13 compute-0 nova_compute[257802]: 2025-10-02 12:17:13.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:13 compute-0 ovn_controller[148183]: 2025-10-02T12:17:13Z|00296|binding|INFO|Releasing lport b8f144d1-47bb-4eb6-881b-3e350b7e079a from this chassis (sb_readonly=0)
Oct 02 12:17:13 compute-0 ovn_controller[148183]: 2025-10-02T12:17:13Z|00297|binding|INFO|Releasing lport 1bd07b5c-4a44-47f4-bd3d-0ddce3cee0ae from this chassis (sb_readonly=0)
Oct 02 12:17:13 compute-0 ovn_controller[148183]: 2025-10-02T12:17:13Z|00298|binding|INFO|Releasing lport 0f9e5149-7f4c-4549-8a41-1857fb606a88 from this chassis (sb_readonly=0)
Oct 02 12:17:13 compute-0 nova_compute[257802]: 2025-10-02 12:17:13.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Oct 02 12:17:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Oct 02 12:17:13 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Oct 02 12:17:13 compute-0 nova_compute[257802]: 2025-10-02 12:17:13.784 2 DEBUG nova.compute.manager [req-061de624-ac77-4c6f-b0dc-2da4dc9ebbf2 req-4b9e4953-6ae9-48fb-ad29-c114e6430b70 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-changed-1b5f1c6a-6b73-4b1d-8560-6a38d7e59732 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:13 compute-0 nova_compute[257802]: 2025-10-02 12:17:13.784 2 DEBUG nova.compute.manager [req-061de624-ac77-4c6f-b0dc-2da4dc9ebbf2 req-4b9e4953-6ae9-48fb-ad29-c114e6430b70 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Refreshing instance network info cache due to event network-changed-1b5f1c6a-6b73-4b1d-8560-6a38d7e59732. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:17:13 compute-0 nova_compute[257802]: 2025-10-02 12:17:13.785 2 DEBUG oslo_concurrency.lockutils [req-061de624-ac77-4c6f-b0dc-2da4dc9ebbf2 req-4b9e4953-6ae9-48fb-ad29-c114e6430b70 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:17:13 compute-0 nova_compute[257802]: 2025-10-02 12:17:13.785 2 DEBUG oslo_concurrency.lockutils [req-061de624-ac77-4c6f-b0dc-2da4dc9ebbf2 req-4b9e4953-6ae9-48fb-ad29-c114e6430b70 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:17:13 compute-0 nova_compute[257802]: 2025-10-02 12:17:13.785 2 DEBUG nova.network.neutron [req-061de624-ac77-4c6f-b0dc-2da4dc9ebbf2 req-4b9e4953-6ae9-48fb-ad29-c114e6430b70 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Refreshing network info cache for port 1b5f1c6a-6b73-4b1d-8560-6a38d7e59732 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:17:13 compute-0 ceph-mon[73607]: osdmap e240: 3 total, 3 up, 3 in
Oct 02 12:17:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2573772718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:13 compute-0 ceph-mon[73607]: osdmap e241: 3 total, 3 up, 3 in
Oct 02 12:17:14 compute-0 nova_compute[257802]: 2025-10-02 12:17:14.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:17:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:14.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:17:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:14.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Oct 02 12:17:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Oct 02 12:17:15 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Oct 02 12:17:15 compute-0 ceph-mon[73607]: pgmap v1598: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 703 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.9 MiB/s wr, 223 op/s
Oct 02 12:17:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 652 MiB data, 1007 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.5 MiB/s wr, 355 op/s
Oct 02 12:17:15 compute-0 nova_compute[257802]: 2025-10-02 12:17:15.542 2 DEBUG nova.network.neutron [req-061de624-ac77-4c6f-b0dc-2da4dc9ebbf2 req-4b9e4953-6ae9-48fb-ad29-c114e6430b70 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Updated VIF entry in instance network info cache for port 1b5f1c6a-6b73-4b1d-8560-6a38d7e59732. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:17:15 compute-0 nova_compute[257802]: 2025-10-02 12:17:15.542 2 DEBUG nova.network.neutron [req-061de624-ac77-4c6f-b0dc-2da4dc9ebbf2 req-4b9e4953-6ae9-48fb-ad29-c114e6430b70 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Updating instance_info_cache with network_info: [{"id": "1b5f1c6a-6b73-4b1d-8560-6a38d7e59732", "address": "fa:16:3e:c2:83:d4", "network": {"id": "18c9b866-0d06-4d2c-a073-d724f233a261", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-215265968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b5f1c6a-6b", "ovs_interfaceid": "1b5f1c6a-6b73-4b1d-8560-6a38d7e59732", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "address": "fa:16:3e:86:0a:06", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd56785b1-ae", "ovs_interfaceid": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "address": "fa:16:3e:3b:4c:02", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a1bf892-fd", "ovs_interfaceid": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "c23e360e-b27a-4c79-a3ce-5c8850ccc846", "address": "fa:16:3e:1d:5e:56", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.222", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc23e360e-b2", "ovs_interfaceid": "c23e360e-b27a-4c79-a3ce-5c8850ccc846", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "address": "fa:16:3e:11:f2:18", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeea85d66-40", "ovs_interfaceid": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "address": "fa:16:3e:ee:27:7c", "network": {"id": "8f6e5e22-d751-48cd-82a1-be33a11e92d9", "bridge": "br-int", "label": "tempest-device-tagging-net2-1945824075", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7eba4256-8a", "ovs_interfaceid": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "137dbaa1-1ef7-4aea-bc14-9f99b461690c", "address": "fa:16:3e:4f:43:a5", "network": {"id": "8f6e5e22-d751-48cd-82a1-be33a11e92d9", "bridge": "br-int", "label": "tempest-device-tagging-net2-1945824075", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap137dbaa1-1e", "ovs_interfaceid": "137dbaa1-1ef7-4aea-bc14-9f99b461690c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:17:15 compute-0 nova_compute[257802]: 2025-10-02 12:17:15.571 2 DEBUG oslo_concurrency.lockutils [req-061de624-ac77-4c6f-b0dc-2da4dc9ebbf2 req-4b9e4953-6ae9-48fb-ad29-c114e6430b70 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:17:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:17:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:16.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:17:16 compute-0 ceph-mon[73607]: osdmap e242: 3 total, 3 up, 3 in
Oct 02 12:17:16 compute-0 ceph-mon[73607]: pgmap v1601: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 652 MiB data, 1007 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.5 MiB/s wr, 355 op/s
Oct 02 12:17:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:17:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:16.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:17:16 compute-0 nova_compute[257802]: 2025-10-02 12:17:16.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 2 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 300 active+clean; 590 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 4.6 MiB/s wr, 365 op/s
Oct 02 12:17:17 compute-0 sudo[303207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:17:17 compute-0 sudo[303207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:17 compute-0 sudo[303207]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:17 compute-0 sudo[303232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:17:17 compute-0 sudo[303232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:17 compute-0 sudo[303232]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:17:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:18.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:17:18 compute-0 ceph-mon[73607]: pgmap v1602: 305 pgs: 2 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 300 active+clean; 590 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 4.6 MiB/s wr, 365 op/s
Oct 02 12:17:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:18.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:17:18 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3428326141' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Oct 02 12:17:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Oct 02 12:17:18 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Oct 02 12:17:18 compute-0 podman[303258]: 2025-10-02 12:17:18.912730881 +0000 UTC m=+0.051513868 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 02 12:17:19 compute-0 nova_compute[257802]: 2025-10-02 12:17:19.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 495 MiB data, 923 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.5 MiB/s wr, 288 op/s
Oct 02 12:17:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3428326141' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:19 compute-0 ceph-mon[73607]: osdmap e243: 3 total, 3 up, 3 in
Oct 02 12:17:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2249381316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:19.806 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:17:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:19.807 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:17:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:19.808 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:19 compute-0 nova_compute[257802]: 2025-10-02 12:17:19.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:19 compute-0 nova_compute[257802]: 2025-10-02 12:17:19.950 2 DEBUG oslo_concurrency.lockutils [None req-b9560c98-5123-450f-927e-b7e8dc62a93d 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Acquiring lock "4ef4cd21-73d9-4ceb-8bd5-316a831e8e92" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:19 compute-0 nova_compute[257802]: 2025-10-02 12:17:19.951 2 DEBUG oslo_concurrency.lockutils [None req-b9560c98-5123-450f-927e-b7e8dc62a93d 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lock "4ef4cd21-73d9-4ceb-8bd5-316a831e8e92" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:19 compute-0 nova_compute[257802]: 2025-10-02 12:17:19.952 2 DEBUG oslo_concurrency.lockutils [None req-b9560c98-5123-450f-927e-b7e8dc62a93d 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Acquiring lock "4ef4cd21-73d9-4ceb-8bd5-316a831e8e92-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:19 compute-0 nova_compute[257802]: 2025-10-02 12:17:19.952 2 DEBUG oslo_concurrency.lockutils [None req-b9560c98-5123-450f-927e-b7e8dc62a93d 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lock "4ef4cd21-73d9-4ceb-8bd5-316a831e8e92-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:19 compute-0 nova_compute[257802]: 2025-10-02 12:17:19.953 2 DEBUG oslo_concurrency.lockutils [None req-b9560c98-5123-450f-927e-b7e8dc62a93d 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lock "4ef4cd21-73d9-4ceb-8bd5-316a831e8e92-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:19 compute-0 nova_compute[257802]: 2025-10-02 12:17:19.954 2 INFO nova.compute.manager [None req-b9560c98-5123-450f-927e-b7e8dc62a93d 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Terminating instance
Oct 02 12:17:19 compute-0 nova_compute[257802]: 2025-10-02 12:17:19.956 2 DEBUG oslo_concurrency.lockutils [None req-b9560c98-5123-450f-927e-b7e8dc62a93d 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Acquiring lock "refresh_cache-4ef4cd21-73d9-4ceb-8bd5-316a831e8e92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:17:19 compute-0 nova_compute[257802]: 2025-10-02 12:17:19.956 2 DEBUG oslo_concurrency.lockutils [None req-b9560c98-5123-450f-927e-b7e8dc62a93d 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Acquired lock "refresh_cache-4ef4cd21-73d9-4ceb-8bd5-316a831e8e92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:17:19 compute-0 nova_compute[257802]: 2025-10-02 12:17:19.957 2 DEBUG nova.network.neutron [None req-b9560c98-5123-450f-927e-b7e8dc62a93d 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:17:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:20.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:20.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:20 compute-0 ceph-mon[73607]: pgmap v1604: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 495 MiB data, 923 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.5 MiB/s wr, 288 op/s
Oct 02 12:17:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2212109734' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:20 compute-0 nova_compute[257802]: 2025-10-02 12:17:20.976 2 DEBUG nova.network.neutron [None req-b9560c98-5123-450f-927e-b7e8dc62a93d 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:17:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1605: 305 pgs: 305 active+clean; 451 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.2 MiB/s wr, 251 op/s
Oct 02 12:17:21 compute-0 nova_compute[257802]: 2025-10-02 12:17:21.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:17:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:22.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:17:22 compute-0 nova_compute[257802]: 2025-10-02 12:17:22.189 2 DEBUG nova.network.neutron [None req-b9560c98-5123-450f-927e-b7e8dc62a93d 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:17:22 compute-0 nova_compute[257802]: 2025-10-02 12:17:22.223 2 DEBUG oslo_concurrency.lockutils [None req-b9560c98-5123-450f-927e-b7e8dc62a93d 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Releasing lock "refresh_cache-4ef4cd21-73d9-4ceb-8bd5-316a831e8e92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:17:22 compute-0 nova_compute[257802]: 2025-10-02 12:17:22.224 2 DEBUG nova.compute.manager [None req-b9560c98-5123-450f-927e-b7e8dc62a93d 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:17:22 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d00000047.scope: Deactivated successfully.
Oct 02 12:17:22 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d00000047.scope: Consumed 16.035s CPU time.
Oct 02 12:17:22 compute-0 systemd-machined[211836]: Machine qemu-33-instance-00000047 terminated.
Oct 02 12:17:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:22.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:22 compute-0 nova_compute[257802]: 2025-10-02 12:17:22.452 2 INFO nova.virt.libvirt.driver [-] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Instance destroyed successfully.
Oct 02 12:17:22 compute-0 nova_compute[257802]: 2025-10-02 12:17:22.452 2 DEBUG nova.objects.instance [None req-b9560c98-5123-450f-927e-b7e8dc62a93d 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lazy-loading 'resources' on Instance uuid 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:17:22 compute-0 ceph-mon[73607]: pgmap v1605: 305 pgs: 305 active+clean; 451 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.2 MiB/s wr, 251 op/s
Oct 02 12:17:23 compute-0 nova_compute[257802]: 2025-10-02 12:17:23.002 2 INFO nova.virt.libvirt.driver [None req-b9560c98-5123-450f-927e-b7e8dc62a93d 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Deleting instance files /var/lib/nova/instances/4ef4cd21-73d9-4ceb-8bd5-316a831e8e92_del
Oct 02 12:17:23 compute-0 nova_compute[257802]: 2025-10-02 12:17:23.003 2 INFO nova.virt.libvirt.driver [None req-b9560c98-5123-450f-927e-b7e8dc62a93d 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Deletion of /var/lib/nova/instances/4ef4cd21-73d9-4ceb-8bd5-316a831e8e92_del complete
Oct 02 12:17:23 compute-0 nova_compute[257802]: 2025-10-02 12:17:23.055 2 INFO nova.compute.manager [None req-b9560c98-5123-450f-927e-b7e8dc62a93d 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Took 0.83 seconds to destroy the instance on the hypervisor.
Oct 02 12:17:23 compute-0 nova_compute[257802]: 2025-10-02 12:17:23.056 2 DEBUG oslo.service.loopingcall [None req-b9560c98-5123-450f-927e-b7e8dc62a93d 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:17:23 compute-0 nova_compute[257802]: 2025-10-02 12:17:23.056 2 DEBUG nova.compute.manager [-] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:17:23 compute-0 nova_compute[257802]: 2025-10-02 12:17:23.056 2 DEBUG nova.network.neutron [-] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:17:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 305 active+clean; 418 MiB data, 889 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.0 MiB/s wr, 194 op/s
Oct 02 12:17:23 compute-0 ovn_controller[148183]: 2025-10-02T12:17:23Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:11:f2:18 10.1.1.9
Oct 02 12:17:23 compute-0 ovn_controller[148183]: 2025-10-02T12:17:23Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:11:f2:18 10.1.1.9
Oct 02 12:17:23 compute-0 ovn_controller[148183]: 2025-10-02T12:17:23Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4f:43:a5 10.2.2.200
Oct 02 12:17:23 compute-0 ovn_controller[148183]: 2025-10-02T12:17:23Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4f:43:a5 10.2.2.200
Oct 02 12:17:23 compute-0 ovn_controller[148183]: 2025-10-02T12:17:23Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1d:5e:56 10.1.1.222
Oct 02 12:17:23 compute-0 ovn_controller[148183]: 2025-10-02T12:17:23Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1d:5e:56 10.1.1.222
Oct 02 12:17:23 compute-0 nova_compute[257802]: 2025-10-02 12:17:23.721 2 DEBUG nova.network.neutron [-] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:17:23 compute-0 nova_compute[257802]: 2025-10-02 12:17:23.738 2 DEBUG nova.network.neutron [-] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:17:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Oct 02 12:17:23 compute-0 nova_compute[257802]: 2025-10-02 12:17:23.755 2 INFO nova.compute.manager [-] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Took 0.70 seconds to deallocate network for instance.
Oct 02 12:17:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Oct 02 12:17:23 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Oct 02 12:17:23 compute-0 nova_compute[257802]: 2025-10-02 12:17:23.805 2 DEBUG oslo_concurrency.lockutils [None req-b9560c98-5123-450f-927e-b7e8dc62a93d 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:23 compute-0 nova_compute[257802]: 2025-10-02 12:17:23.805 2 DEBUG oslo_concurrency.lockutils [None req-b9560c98-5123-450f-927e-b7e8dc62a93d 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:23 compute-0 nova_compute[257802]: 2025-10-02 12:17:23.869 2 DEBUG oslo_concurrency.processutils [None req-b9560c98-5123-450f-927e-b7e8dc62a93d 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:17:23 compute-0 podman[303304]: 2025-10-02 12:17:23.969256431 +0000 UTC m=+0.086565199 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:17:24 compute-0 podman[303303]: 2025-10-02 12:17:24.009714516 +0000 UTC m=+0.125735302 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible)
Oct 02 12:17:24 compute-0 nova_compute[257802]: 2025-10-02 12:17:24.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:24.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:24 compute-0 ovn_controller[148183]: 2025-10-02T12:17:24Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3b:4c:02 10.1.1.128
Oct 02 12:17:24 compute-0 ovn_controller[148183]: 2025-10-02T12:17:24Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3b:4c:02 10.1.1.128
Oct 02 12:17:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:17:24 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1610354956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:24 compute-0 nova_compute[257802]: 2025-10-02 12:17:24.332 2 DEBUG oslo_concurrency.processutils [None req-b9560c98-5123-450f-927e-b7e8dc62a93d 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:17:24 compute-0 nova_compute[257802]: 2025-10-02 12:17:24.337 2 DEBUG nova.compute.provider_tree [None req-b9560c98-5123-450f-927e-b7e8dc62a93d 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:17:24 compute-0 nova_compute[257802]: 2025-10-02 12:17:24.372 2 DEBUG nova.scheduler.client.report [None req-b9560c98-5123-450f-927e-b7e8dc62a93d 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:17:24 compute-0 nova_compute[257802]: 2025-10-02 12:17:24.398 2 DEBUG oslo_concurrency.lockutils [None req-b9560c98-5123-450f-927e-b7e8dc62a93d 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:24 compute-0 nova_compute[257802]: 2025-10-02 12:17:24.425 2 INFO nova.scheduler.client.report [None req-b9560c98-5123-450f-927e-b7e8dc62a93d 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Deleted allocations for instance 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92
Oct 02 12:17:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:17:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:24.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:17:24 compute-0 ovn_controller[148183]: 2025-10-02T12:17:24Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ee:27:7c 10.2.2.100
Oct 02 12:17:24 compute-0 ovn_controller[148183]: 2025-10-02T12:17:24Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ee:27:7c 10.2.2.100
Oct 02 12:17:24 compute-0 nova_compute[257802]: 2025-10-02 12:17:24.504 2 DEBUG oslo_concurrency.lockutils [None req-b9560c98-5123-450f-927e-b7e8dc62a93d 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lock "4ef4cd21-73d9-4ceb-8bd5-316a831e8e92" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:24 compute-0 ceph-mon[73607]: pgmap v1606: 305 pgs: 305 active+clean; 418 MiB data, 889 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.0 MiB/s wr, 194 op/s
Oct 02 12:17:24 compute-0 ceph-mon[73607]: osdmap e244: 3 total, 3 up, 3 in
Oct 02 12:17:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1610354956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:24 compute-0 ovn_controller[148183]: 2025-10-02T12:17:24Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:86:0a:06 10.1.1.24
Oct 02 12:17:24 compute-0 ovn_controller[148183]: 2025-10-02T12:17:24Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:86:0a:06 10.1.1.24
Oct 02 12:17:24 compute-0 ovn_controller[148183]: 2025-10-02T12:17:24Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c2:83:d4 10.100.0.5
Oct 02 12:17:24 compute-0 ovn_controller[148183]: 2025-10-02T12:17:24Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c2:83:d4 10.100.0.5
Oct 02 12:17:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1608: 305 pgs: 305 active+clean; 417 MiB data, 892 MiB used, 20 GiB / 21 GiB avail; 374 KiB/s rd, 2.6 MiB/s wr, 144 op/s
Oct 02 12:17:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Oct 02 12:17:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Oct 02 12:17:25 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Oct 02 12:17:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:17:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:26.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:17:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:17:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:26.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:17:26 compute-0 nova_compute[257802]: 2025-10-02 12:17:26.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:26 compute-0 ceph-mon[73607]: pgmap v1608: 305 pgs: 305 active+clean; 417 MiB data, 892 MiB used, 20 GiB / 21 GiB avail; 374 KiB/s rd, 2.6 MiB/s wr, 144 op/s
Oct 02 12:17:26 compute-0 ceph-mon[73607]: osdmap e245: 3 total, 3 up, 3 in
Oct 02 12:17:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:26.934 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:26.934 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:26.935 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 366 MiB data, 860 MiB used, 20 GiB / 21 GiB avail; 540 KiB/s rd, 3.2 MiB/s wr, 186 op/s
Oct 02 12:17:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Oct 02 12:17:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Oct 02 12:17:27 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Oct 02 12:17:27 compute-0 podman[303366]: 2025-10-02 12:17:27.957965969 +0000 UTC m=+0.087413630 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.build-date=20251001)
Oct 02 12:17:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:17:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:28.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:17:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:28.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Oct 02 12:17:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Oct 02 12:17:28 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Oct 02 12:17:28 compute-0 ceph-mon[73607]: pgmap v1610: 305 pgs: 305 active+clean; 366 MiB data, 860 MiB used, 20 GiB / 21 GiB avail; 540 KiB/s rd, 3.2 MiB/s wr, 186 op/s
Oct 02 12:17:28 compute-0 ceph-mon[73607]: osdmap e246: 3 total, 3 up, 3 in
Oct 02 12:17:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3409883207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:29 compute-0 nova_compute[257802]: 2025-10-02 12:17:29.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 4 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 292 active+clean; 326 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 802 KiB/s rd, 2.4 MiB/s wr, 290 op/s
Oct 02 12:17:29 compute-0 ceph-mon[73607]: osdmap e247: 3 total, 3 up, 3 in
Oct 02 12:17:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:17:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:30.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:17:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:30.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:30 compute-0 ceph-mon[73607]: pgmap v1613: 305 pgs: 4 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 292 active+clean; 326 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 802 KiB/s rd, 2.4 MiB/s wr, 290 op/s
Oct 02 12:17:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 4 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 292 active+clean; 307 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 565 KiB/s rd, 945 KiB/s wr, 289 op/s
Oct 02 12:17:31 compute-0 nova_compute[257802]: 2025-10-02 12:17:31.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:32.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:17:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:32.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:17:33 compute-0 ceph-mon[73607]: pgmap v1614: 305 pgs: 4 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 292 active+clean; 307 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 565 KiB/s rd, 945 KiB/s wr, 289 op/s
Oct 02 12:17:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/638047331' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 268 MiB data, 769 MiB used, 20 GiB / 21 GiB avail; 238 KiB/s rd, 107 KiB/s wr, 167 op/s
Oct 02 12:17:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Oct 02 12:17:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Oct 02 12:17:33 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Oct 02 12:17:34 compute-0 nova_compute[257802]: 2025-10-02 12:17:34.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:34.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:17:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:34.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:17:34 compute-0 ceph-mon[73607]: pgmap v1615: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 268 MiB data, 769 MiB used, 20 GiB / 21 GiB avail; 238 KiB/s rd, 107 KiB/s wr, 167 op/s
Oct 02 12:17:34 compute-0 ceph-mon[73607]: osdmap e248: 3 total, 3 up, 3 in
Oct 02 12:17:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1617: 305 pgs: 305 active+clean; 247 MiB data, 757 MiB used, 20 GiB / 21 GiB avail; 255 KiB/s rd, 113 KiB/s wr, 190 op/s
Oct 02 12:17:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:36.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:36.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:36 compute-0 ceph-mon[73607]: pgmap v1617: 305 pgs: 305 active+clean; 247 MiB data, 757 MiB used, 20 GiB / 21 GiB avail; 255 KiB/s rd, 113 KiB/s wr, 190 op/s
Oct 02 12:17:36 compute-0 nova_compute[257802]: 2025-10-02 12:17:36.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 247 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 84 KiB/s rd, 28 KiB/s wr, 118 op/s
Oct 02 12:17:37 compute-0 nova_compute[257802]: 2025-10-02 12:17:37.451 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407442.4496677, 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:17:37 compute-0 nova_compute[257802]: 2025-10-02 12:17:37.451 2 INFO nova.compute.manager [-] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] VM Stopped (Lifecycle Event)
Oct 02 12:17:37 compute-0 sudo[303398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:17:37 compute-0 sudo[303398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:37 compute-0 sudo[303398]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:37 compute-0 sudo[303423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:17:37 compute-0 sudo[303423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:37 compute-0 sudo[303423]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:38.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:38 compute-0 nova_compute[257802]: 2025-10-02 12:17:38.363 2 DEBUG nova.compute.manager [None req-0bf7800c-c7c7-4d1d-8599-244c9ecfb433 - - - - - -] [instance: 4ef4cd21-73d9-4ceb-8bd5-316a831e8e92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:17:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:38.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Oct 02 12:17:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Oct 02 12:17:38 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Oct 02 12:17:38 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Oct 02 12:17:38 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:38.802188) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:17:38 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Oct 02 12:17:38 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458802250, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 2163, "num_deletes": 268, "total_data_size": 3427998, "memory_usage": 3476480, "flush_reason": "Manual Compaction"}
Oct 02 12:17:38 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Oct 02 12:17:38 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458947369, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 3369958, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34135, "largest_seqno": 36297, "table_properties": {"data_size": 3359936, "index_size": 6388, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21114, "raw_average_key_size": 20, "raw_value_size": 3339827, "raw_average_value_size": 3319, "num_data_blocks": 275, "num_entries": 1006, "num_filter_entries": 1006, "num_deletions": 268, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407299, "oldest_key_time": 1759407299, "file_creation_time": 1759407458, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:17:38 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 145221 microseconds, and 8028 cpu microseconds.
Oct 02 12:17:38 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:17:38 compute-0 ceph-mon[73607]: pgmap v1618: 305 pgs: 305 active+clean; 247 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 84 KiB/s rd, 28 KiB/s wr, 118 op/s
Oct 02 12:17:38 compute-0 ceph-mon[73607]: osdmap e249: 3 total, 3 up, 3 in
Oct 02 12:17:38 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:38.947415) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 3369958 bytes OK
Oct 02 12:17:38 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:38.947436) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Oct 02 12:17:38 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:38.974532) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Oct 02 12:17:38 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:38.974576) EVENT_LOG_v1 {"time_micros": 1759407458974566, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:17:38 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:38.974600) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:17:38 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 3418925, prev total WAL file size 3420986, number of live WAL files 2.
Oct 02 12:17:38 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:17:38 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:38.975587) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303034' seq:72057594037927935, type:22 .. '6C6F676D0031323535' seq:0, type:0; will stop at (end)
Oct 02 12:17:38 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:17:38 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(3290KB)], [74(7992KB)]
Oct 02 12:17:38 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458975723, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 11554609, "oldest_snapshot_seqno": -1}
Oct 02 12:17:39 compute-0 nova_compute[257802]: 2025-10-02 12:17:39.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:39 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6344 keys, 11404094 bytes, temperature: kUnknown
Oct 02 12:17:39 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407459194200, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 11404094, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11359289, "index_size": 27863, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15877, "raw_key_size": 162239, "raw_average_key_size": 25, "raw_value_size": 11243136, "raw_average_value_size": 1772, "num_data_blocks": 1122, "num_entries": 6344, "num_filter_entries": 6344, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759407458, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:17:39 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:17:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:39.194472) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 11404094 bytes
Oct 02 12:17:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:39.207887) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 52.9 rd, 52.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.8 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(6.8) write-amplify(3.4) OK, records in: 6889, records dropped: 545 output_compression: NoCompression
Oct 02 12:17:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:39.207914) EVENT_LOG_v1 {"time_micros": 1759407459207902, "job": 42, "event": "compaction_finished", "compaction_time_micros": 218555, "compaction_time_cpu_micros": 42299, "output_level": 6, "num_output_files": 1, "total_output_size": 11404094, "num_input_records": 6889, "num_output_records": 6344, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:17:39 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:17:39 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407459208755, "job": 42, "event": "table_file_deletion", "file_number": 76}
Oct 02 12:17:39 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:17:39 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407459210897, "job": 42, "event": "table_file_deletion", "file_number": 74}
Oct 02 12:17:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:38.975399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:39.210944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:39.210948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:39.210949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:39.210951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:39.210953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 305 active+clean; 247 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 31 KiB/s wr, 56 op/s
Oct 02 12:17:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:40.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:17:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:40.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:17:40 compute-0 ceph-mon[73607]: pgmap v1620: 305 pgs: 305 active+clean; 247 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 31 KiB/s wr, 56 op/s
Oct 02 12:17:41 compute-0 nova_compute[257802]: 2025-10-02 12:17:41.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:17:41 compute-0 nova_compute[257802]: 2025-10-02 12:17:41.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:17:41 compute-0 nova_compute[257802]: 2025-10-02 12:17:41.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:17:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 305 active+clean; 247 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 10 KiB/s wr, 18 op/s
Oct 02 12:17:41 compute-0 nova_compute[257802]: 2025-10-02 12:17:41.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:42.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:17:42
Oct 02 12:17:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:17:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:17:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'default.rgw.log', '.mgr', 'images', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta']
Oct 02 12:17:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:17:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:42.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:17:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:17:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:17:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:17:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:17:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:17:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:17:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:17:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:17:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:17:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:17:43 compute-0 ceph-mon[73607]: pgmap v1621: 305 pgs: 305 active+clean; 247 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 10 KiB/s wr, 18 op/s
Oct 02 12:17:43 compute-0 nova_compute[257802]: 2025-10-02 12:17:43.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:17:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:17:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:17:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:17:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:17:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:17:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 305 active+clean; 247 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 11 KiB/s wr, 15 op/s
Oct 02 12:17:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:44 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1166877677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:44 compute-0 nova_compute[257802]: 2025-10-02 12:17:44.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:44.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:44 compute-0 ovn_controller[148183]: 2025-10-02T12:17:44Z|00299|binding|INFO|Releasing lport b8f144d1-47bb-4eb6-881b-3e350b7e079a from this chassis (sb_readonly=0)
Oct 02 12:17:44 compute-0 ovn_controller[148183]: 2025-10-02T12:17:44Z|00300|binding|INFO|Releasing lport 1bd07b5c-4a44-47f4-bd3d-0ddce3cee0ae from this chassis (sb_readonly=0)
Oct 02 12:17:44 compute-0 ovn_controller[148183]: 2025-10-02T12:17:44Z|00301|binding|INFO|Releasing lport 0f9e5149-7f4c-4549-8a41-1857fb606a88 from this chassis (sb_readonly=0)
Oct 02 12:17:44 compute-0 nova_compute[257802]: 2025-10-02 12:17:44.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:44.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:45 compute-0 ceph-mon[73607]: pgmap v1622: 305 pgs: 305 active+clean; 247 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 11 KiB/s wr, 15 op/s
Oct 02 12:17:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3984017574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1623: 305 pgs: 305 active+clean; 247 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 8.0 KiB/s wr, 0 op/s
Oct 02 12:17:46 compute-0 nova_compute[257802]: 2025-10-02 12:17:46.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:17:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:46.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:46.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:46 compute-0 nova_compute[257802]: 2025-10-02 12:17:46.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:47 compute-0 nova_compute[257802]: 2025-10-02 12:17:47.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:17:47 compute-0 nova_compute[257802]: 2025-10-02 12:17:47.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:17:47 compute-0 ceph-mon[73607]: pgmap v1623: 305 pgs: 305 active+clean; 247 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 8.0 KiB/s wr, 0 op/s
Oct 02 12:17:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 247 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 8.0 KiB/s wr, 0 op/s
Oct 02 12:17:48 compute-0 nova_compute[257802]: 2025-10-02 12:17:48.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:17:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:48.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:48 compute-0 ceph-mon[73607]: pgmap v1624: 305 pgs: 305 active+clean; 247 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 8.0 KiB/s wr, 0 op/s
Oct 02 12:17:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:48.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:48.705 158368 DEBUG eventlet.wsgi.server [-] (158368) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 02 12:17:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:48.707 158368 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0
Oct 02 12:17:48 compute-0 ovn_metadata_agent[158256]: Accept: */*
Oct 02 12:17:48 compute-0 ovn_metadata_agent[158256]: Connection: close
Oct 02 12:17:48 compute-0 ovn_metadata_agent[158256]: Content-Type: text/plain
Oct 02 12:17:48 compute-0 ovn_metadata_agent[158256]: Host: 169.254.169.254
Oct 02 12:17:48 compute-0 ovn_metadata_agent[158256]: User-Agent: curl/7.84.0
Oct 02 12:17:48 compute-0 ovn_metadata_agent[158256]: X-Forwarded-For: 10.100.0.5
Oct 02 12:17:48 compute-0 ovn_metadata_agent[158256]: X-Ovn-Network-Id: 18c9b866-0d06-4d2c-a073-d724f233a261 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 02 12:17:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:49 compute-0 nova_compute[257802]: 2025-10-02 12:17:49.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 305 active+clean; 247 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 777 B/s rd, 6.1 KiB/s wr, 1 op/s
Oct 02 12:17:49 compute-0 podman[303454]: 2025-10-02 12:17:49.930007757 +0000 UTC m=+0.063575654 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Oct 02 12:17:50 compute-0 nova_compute[257802]: 2025-10-02 12:17:50.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:17:50 compute-0 nova_compute[257802]: 2025-10-02 12:17:50.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:17:50 compute-0 nova_compute[257802]: 2025-10-02 12:17:50.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:17:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:17:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:50.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:17:50 compute-0 nova_compute[257802]: 2025-10-02 12:17:50.452 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:17:50 compute-0 nova_compute[257802]: 2025-10-02 12:17:50.452 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:17:50 compute-0 nova_compute[257802]: 2025-10-02 12:17:50.453 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:17:50 compute-0 nova_compute[257802]: 2025-10-02 12:17:50.453 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid d7e14705-7aeb-440f-8e7e-926cc5b5ab6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:17:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:50.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:50 compute-0 ceph-mon[73607]: pgmap v1625: 305 pgs: 305 active+clean; 247 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 777 B/s rd, 6.1 KiB/s wr, 1 op/s
Oct 02 12:17:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:50.719 158368 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 02 12:17:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:50.719 158368 INFO eventlet.wsgi.server [-] 10.100.0.5,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 2552 time: 2.0127718
Oct 02 12:17:50 compute-0 haproxy-metadata-proxy-18c9b866-0d06-4d2c-a073-d724f233a261[303012]: 10.100.0.5:44308 [02/Oct/2025:12:17:48.704] listener listener/metadata 0/0/0/2015/2015 200 2536 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Oct 02 12:17:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 305 active+clean; 247 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 4.3 KiB/s wr, 0 op/s
Oct 02 12:17:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2033746805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:51 compute-0 nova_compute[257802]: 2025-10-02 12:17:51.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:17:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:52.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:17:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:52.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:52 compute-0 ceph-mon[73607]: pgmap v1626: 305 pgs: 305 active+clean; 247 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 4.3 KiB/s wr, 0 op/s
Oct 02 12:17:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3221452715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:52 compute-0 nova_compute[257802]: 2025-10-02 12:17:52.858 2 DEBUG oslo_concurrency.lockutils [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:52 compute-0 nova_compute[257802]: 2025-10-02 12:17:52.859 2 DEBUG oslo_concurrency.lockutils [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:52 compute-0 nova_compute[257802]: 2025-10-02 12:17:52.859 2 DEBUG oslo_concurrency.lockutils [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:52 compute-0 nova_compute[257802]: 2025-10-02 12:17:52.859 2 DEBUG oslo_concurrency.lockutils [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:52 compute-0 nova_compute[257802]: 2025-10-02 12:17:52.860 2 DEBUG oslo_concurrency.lockutils [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:52 compute-0 nova_compute[257802]: 2025-10-02 12:17:52.861 2 INFO nova.compute.manager [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Terminating instance
Oct 02 12:17:52 compute-0 nova_compute[257802]: 2025-10-02 12:17:52.862 2 DEBUG nova.compute.manager [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:17:53 compute-0 kernel: tap1b5f1c6a-6b (unregistering): left promiscuous mode
Oct 02 12:17:53 compute-0 NetworkManager[44987]: <info>  [1759407473.0118] device (tap1b5f1c6a-6b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:17:53 compute-0 ovn_controller[148183]: 2025-10-02T12:17:53Z|00302|binding|INFO|Releasing lport 1b5f1c6a-6b73-4b1d-8560-6a38d7e59732 from this chassis (sb_readonly=0)
Oct 02 12:17:53 compute-0 ovn_controller[148183]: 2025-10-02T12:17:53Z|00303|binding|INFO|Setting lport 1b5f1c6a-6b73-4b1d-8560-6a38d7e59732 down in Southbound
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 ovn_controller[148183]: 2025-10-02T12:17:53Z|00304|binding|INFO|Removing iface tap1b5f1c6a-6b ovn-installed in OVS
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.040 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:83:d4 10.100.0.5'], port_security=['fa:16:3e:c2:83:d4 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'd7e14705-7aeb-440f-8e7e-926cc5b5ab6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18c9b866-0d06-4d2c-a073-d724f233a261', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9d3eca266284ae9950c491e566b2523', 'neutron:revision_number': '4', 'neutron:security_group_ids': '746f60f4-bb48-4286-9b73-3fc07720fb36', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=76a7d478-dac8-494b-afd5-42c66456bac5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=1b5f1c6a-6b73-4b1d-8560-6a38d7e59732) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.043 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 1b5f1c6a-6b73-4b1d-8560-6a38d7e59732 in datapath 18c9b866-0d06-4d2c-a073-d724f233a261 unbound from our chassis
Oct 02 12:17:53 compute-0 kernel: tapd56785b1-ae (unregistering): left promiscuous mode
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.048 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 18c9b866-0d06-4d2c-a073-d724f233a261, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:17:53 compute-0 NetworkManager[44987]: <info>  [1759407473.0513] device (tapd56785b1-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.049 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[86d0f973-de78-438f-90b1-5bfc922926fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.054 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-18c9b866-0d06-4d2c-a073-d724f233a261 namespace which is not needed anymore
Oct 02 12:17:53 compute-0 ovn_controller[148183]: 2025-10-02T12:17:53Z|00305|binding|INFO|Releasing lport d56785b1-aeee-49be-af2d-71e8fc0a6c9b from this chassis (sb_readonly=0)
Oct 02 12:17:53 compute-0 ovn_controller[148183]: 2025-10-02T12:17:53Z|00306|binding|INFO|Setting lport d56785b1-aeee-49be-af2d-71e8fc0a6c9b down in Southbound
Oct 02 12:17:53 compute-0 ovn_controller[148183]: 2025-10-02T12:17:53Z|00307|binding|INFO|Removing iface tapd56785b1-ae ovn-installed in OVS
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.073 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:0a:06 10.1.1.24'], port_security=['fa:16:3e:86:0a:06 10.1.1.24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest-1251303844', 'neutron:cidrs': '10.1.1.24/24', 'neutron:device_id': 'd7e14705-7aeb-440f-8e7e-926cc5b5ab6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-12d2f684-17ee-45ff-8e0b-9e61463b4f7c', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest-1251303844', 'neutron:project_id': 'a9d3eca266284ae9950c491e566b2523', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5fe43832-bbac-4905-ae6c-ce5bd827e90a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1c0792c7-0535-44fc-9646-b20d704d1986, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=d56785b1-aeee-49be-af2d-71e8fc0a6c9b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:17:53 compute-0 kernel: tap4a1bf892-fd (unregistering): left promiscuous mode
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 NetworkManager[44987]: <info>  [1759407473.0962] device (tap4a1bf892-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:17:53 compute-0 ovn_controller[148183]: 2025-10-02T12:17:53Z|00308|binding|INFO|Releasing lport 4a1bf892-fd3f-40df-95f2-779ffc41ade1 from this chassis (sb_readonly=0)
Oct 02 12:17:53 compute-0 ovn_controller[148183]: 2025-10-02T12:17:53Z|00309|binding|INFO|Setting lport 4a1bf892-fd3f-40df-95f2-779ffc41ade1 down in Southbound
Oct 02 12:17:53 compute-0 ovn_controller[148183]: 2025-10-02T12:17:53Z|00310|binding|INFO|Removing iface tap4a1bf892-fd ovn-installed in OVS
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 kernel: tapc23e360e-b2 (unregistering): left promiscuous mode
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 NetworkManager[44987]: <info>  [1759407473.1209] device (tapc23e360e-b2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.125 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:4c:02 10.1.1.128'], port_security=['fa:16:3e:3b:4c:02 10.1.1.128'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest-1253942574', 'neutron:cidrs': '10.1.1.128/24', 'neutron:device_id': 'd7e14705-7aeb-440f-8e7e-926cc5b5ab6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-12d2f684-17ee-45ff-8e0b-9e61463b4f7c', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest-1253942574', 'neutron:project_id': 'a9d3eca266284ae9950c491e566b2523', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5fe43832-bbac-4905-ae6c-ce5bd827e90a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1c0792c7-0535-44fc-9646-b20d704d1986, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=4a1bf892-fd3f-40df-95f2-779ffc41ade1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:17:53 compute-0 kernel: tapeea85d66-40 (unregistering): left promiscuous mode
Oct 02 12:17:53 compute-0 NetworkManager[44987]: <info>  [1759407473.1461] device (tapeea85d66-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:17:53 compute-0 ovn_controller[148183]: 2025-10-02T12:17:53Z|00311|binding|INFO|Releasing lport c23e360e-b27a-4c79-a3ce-5c8850ccc846 from this chassis (sb_readonly=0)
Oct 02 12:17:53 compute-0 ovn_controller[148183]: 2025-10-02T12:17:53Z|00312|binding|INFO|Setting lport c23e360e-b27a-4c79-a3ce-5c8850ccc846 down in Southbound
Oct 02 12:17:53 compute-0 ovn_controller[148183]: 2025-10-02T12:17:53Z|00313|binding|INFO|Releasing lport eea85d66-4089-4708-9cb0-a7d2ab18e6ec from this chassis (sb_readonly=0)
Oct 02 12:17:53 compute-0 ovn_controller[148183]: 2025-10-02T12:17:53Z|00314|binding|INFO|Setting lport eea85d66-4089-4708-9cb0-a7d2ab18e6ec down in Southbound
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 ovn_controller[148183]: 2025-10-02T12:17:53Z|00315|binding|INFO|Removing iface tapc23e360e-b2 ovn-installed in OVS
Oct 02 12:17:53 compute-0 ovn_controller[148183]: 2025-10-02T12:17:53Z|00316|binding|INFO|Removing iface tapeea85d66-40 ovn-installed in OVS
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.171 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:f2:18 10.1.1.9'], port_security=['fa:16:3e:11:f2:18 10.1.1.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.9/24', 'neutron:device_id': 'd7e14705-7aeb-440f-8e7e-926cc5b5ab6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-12d2f684-17ee-45ff-8e0b-9e61463b4f7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9d3eca266284ae9950c491e566b2523', 'neutron:revision_number': '4', 'neutron:security_group_ids': '746f60f4-bb48-4286-9b73-3fc07720fb36', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1c0792c7-0535-44fc-9646-b20d704d1986, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=eea85d66-4089-4708-9cb0-a7d2ab18e6ec) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.173 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:5e:56 10.1.1.222'], port_security=['fa:16:3e:1d:5e:56 10.1.1.222'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.222/24', 'neutron:device_id': 'd7e14705-7aeb-440f-8e7e-926cc5b5ab6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-12d2f684-17ee-45ff-8e0b-9e61463b4f7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9d3eca266284ae9950c491e566b2523', 'neutron:revision_number': '4', 'neutron:security_group_ids': '746f60f4-bb48-4286-9b73-3fc07720fb36', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1c0792c7-0535-44fc-9646-b20d704d1986, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=c23e360e-b27a-4c79-a3ce-5c8850ccc846) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:17:53 compute-0 kernel: tap7eba4256-8a (unregistering): left promiscuous mode
Oct 02 12:17:53 compute-0 NetworkManager[44987]: <info>  [1759407473.1814] device (tap7eba4256-8a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 ovn_controller[148183]: 2025-10-02T12:17:53Z|00317|binding|INFO|Releasing lport 7eba4256-8a01-49a2-aec1-98d48eda4d3f from this chassis (sb_readonly=0)
Oct 02 12:17:53 compute-0 ovn_controller[148183]: 2025-10-02T12:17:53Z|00318|binding|INFO|Setting lport 7eba4256-8a01-49a2-aec1-98d48eda4d3f down in Southbound
Oct 02 12:17:53 compute-0 ovn_controller[148183]: 2025-10-02T12:17:53Z|00319|binding|INFO|Removing iface tap7eba4256-8a ovn-installed in OVS
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.209 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:27:7c 10.2.2.100'], port_security=['fa:16:3e:ee:27:7c 10.2.2.100'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.100/24', 'neutron:device_id': 'd7e14705-7aeb-440f-8e7e-926cc5b5ab6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f6e5e22-d751-48cd-82a1-be33a11e92d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9d3eca266284ae9950c491e566b2523', 'neutron:revision_number': '4', 'neutron:security_group_ids': '746f60f4-bb48-4286-9b73-3fc07720fb36', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82780a24-ba32-4888-a78c-cf80dc7ecc16, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=7eba4256-8a01-49a2-aec1-98d48eda4d3f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:17:53 compute-0 kernel: tap137dbaa1-1e (unregistering): left promiscuous mode
Oct 02 12:17:53 compute-0 NetworkManager[44987]: <info>  [1759407473.2181] device (tap137dbaa1-1e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 ovn_controller[148183]: 2025-10-02T12:17:53Z|00320|binding|INFO|Releasing lport 137dbaa1-1ef7-4aea-bc14-9f99b461690c from this chassis (sb_readonly=0)
Oct 02 12:17:53 compute-0 ovn_controller[148183]: 2025-10-02T12:17:53Z|00321|binding|INFO|Setting lport 137dbaa1-1ef7-4aea-bc14-9f99b461690c down in Southbound
Oct 02 12:17:53 compute-0 ovn_controller[148183]: 2025-10-02T12:17:53Z|00322|binding|INFO|Removing iface tap137dbaa1-1e ovn-installed in OVS
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 neutron-haproxy-ovnmeta-18c9b866-0d06-4d2c-a073-d724f233a261[303004]: [NOTICE]   (303010) : haproxy version is 2.8.14-c23fe91
Oct 02 12:17:53 compute-0 neutron-haproxy-ovnmeta-18c9b866-0d06-4d2c-a073-d724f233a261[303004]: [NOTICE]   (303010) : path to executable is /usr/sbin/haproxy
Oct 02 12:17:53 compute-0 neutron-haproxy-ovnmeta-18c9b866-0d06-4d2c-a073-d724f233a261[303004]: [WARNING]  (303010) : Exiting Master process...
Oct 02 12:17:53 compute-0 neutron-haproxy-ovnmeta-18c9b866-0d06-4d2c-a073-d724f233a261[303004]: [ALERT]    (303010) : Current worker (303012) exited with code 143 (Terminated)
Oct 02 12:17:53 compute-0 neutron-haproxy-ovnmeta-18c9b866-0d06-4d2c-a073-d724f233a261[303004]: [WARNING]  (303010) : All workers exited. Exiting... (0)
Oct 02 12:17:53 compute-0 systemd[1]: libpod-01fd1965aa0d10972d3b65eb0fc0c8afe95d9f2205ef5f0341955e6bda207e41.scope: Deactivated successfully.
Oct 02 12:17:53 compute-0 conmon[303004]: conmon 01fd1965aa0d10972d3b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-01fd1965aa0d10972d3b65eb0fc0c8afe95d9f2205ef5f0341955e6bda207e41.scope/container/memory.events
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.254 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:43:a5 10.2.2.200'], port_security=['fa:16:3e:4f:43:a5 10.2.2.200'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.200/24', 'neutron:device_id': 'd7e14705-7aeb-440f-8e7e-926cc5b5ab6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f6e5e22-d751-48cd-82a1-be33a11e92d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9d3eca266284ae9950c491e566b2523', 'neutron:revision_number': '4', 'neutron:security_group_ids': '746f60f4-bb48-4286-9b73-3fc07720fb36', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82780a24-ba32-4888-a78c-cf80dc7ecc16, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=137dbaa1-1ef7-4aea-bc14-9f99b461690c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:17:53 compute-0 podman[303517]: 2025-10-02 12:17:53.254779147 +0000 UTC m=+0.065228716 container died 01fd1965aa0d10972d3b65eb0fc0c8afe95d9f2205ef5f0341955e6bda207e41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18c9b866-0d06-4d2c-a073-d724f233a261, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000049.scope: Deactivated successfully.
Oct 02 12:17:53 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000049.scope: Consumed 16.944s CPU time.
Oct 02 12:17:53 compute-0 systemd-machined[211836]: Machine qemu-34-instance-00000049 terminated.
Oct 02 12:17:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-01fd1965aa0d10972d3b65eb0fc0c8afe95d9f2205ef5f0341955e6bda207e41-userdata-shm.mount: Deactivated successfully.
Oct 02 12:17:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ef7665c785d1366f04882cd1d45e647d79ee1a4e4358d43246ee9df09eaf35b-merged.mount: Deactivated successfully.
Oct 02 12:17:53 compute-0 podman[303517]: 2025-10-02 12:17:53.30006007 +0000 UTC m=+0.110509629 container cleanup 01fd1965aa0d10972d3b65eb0fc0c8afe95d9f2205ef5f0341955e6bda207e41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18c9b866-0d06-4d2c-a073-d724f233a261, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:17:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1627: 305 pgs: 305 active+clean; 247 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 2.8 KiB/s rd, 6.0 KiB/s wr, 1 op/s
Oct 02 12:17:53 compute-0 systemd[1]: libpod-conmon-01fd1965aa0d10972d3b65eb0fc0c8afe95d9f2205ef5f0341955e6bda207e41.scope: Deactivated successfully.
Oct 02 12:17:53 compute-0 podman[303572]: 2025-10-02 12:17:53.358744793 +0000 UTC m=+0.039446201 container remove 01fd1965aa0d10972d3b65eb0fc0c8afe95d9f2205ef5f0341955e6bda207e41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18c9b866-0d06-4d2c-a073-d724f233a261, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.363 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4a17acce-6133-4b50-a086-e4a78c3dadfc]: (4, ('Thu Oct  2 12:17:53 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-18c9b866-0d06-4d2c-a073-d724f233a261 (01fd1965aa0d10972d3b65eb0fc0c8afe95d9f2205ef5f0341955e6bda207e41)\n01fd1965aa0d10972d3b65eb0fc0c8afe95d9f2205ef5f0341955e6bda207e41\nThu Oct  2 12:17:53 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-18c9b866-0d06-4d2c-a073-d724f233a261 (01fd1965aa0d10972d3b65eb0fc0c8afe95d9f2205ef5f0341955e6bda207e41)\n01fd1965aa0d10972d3b65eb0fc0c8afe95d9f2205ef5f0341955e6bda207e41\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.367 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d1425283-2412-4446-9e29-3b1d777d2fa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.368 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18c9b866-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 kernel: tap18c9b866-00: left promiscuous mode
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.418 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[567725bc-de13-4059-a86f-23360f7a5373]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.459 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8ad6f339-f093-47af-addb-51fe0e140ff8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.460 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[264918fa-be0e-4d7e-83e5-d760b62a4954]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.479 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ba168cb9-1e02-4f60-8b88-d233d26ee0bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 549318, 'reachable_time': 21792, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303608, 'error': None, 'target': 'ovnmeta-18c9b866-0d06-4d2c-a073-d724f233a261', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d18c9b866\x2d0d06\x2d4d2c\x2da073\x2dd724f233a261.mount: Deactivated successfully.
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.481 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-18c9b866-0d06-4d2c-a073-d724f233a261 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.481 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[d610f415-2836-4527-9f58-8748e73cd9d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.483 158261 INFO neutron.agent.ovn.metadata.agent [-] Port d56785b1-aeee-49be-af2d-71e8fc0a6c9b in datapath 12d2f684-17ee-45ff-8e0b-9e61463b4f7c unbound from our chassis
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.484 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 12d2f684-17ee-45ff-8e0b-9e61463b4f7c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.485 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5399b401-55a0-448e-9642-b9d87e94da5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.485 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c namespace which is not needed anymore
Oct 02 12:17:53 compute-0 NetworkManager[44987]: <info>  [1759407473.5050] manager: (tapd56785b1-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/142)
Oct 02 12:17:53 compute-0 NetworkManager[44987]: <info>  [1759407473.5243] manager: (tap4a1bf892-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/143)
Oct 02 12:17:53 compute-0 NetworkManager[44987]: <info>  [1759407473.5366] manager: (tapc23e360e-b2): new Tun device (/org/freedesktop/NetworkManager/Devices/144)
Oct 02 12:17:53 compute-0 NetworkManager[44987]: <info>  [1759407473.5821] manager: (tap137dbaa1-1e): new Tun device (/org/freedesktop/NetworkManager/Devices/145)
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.600 2 INFO nova.virt.libvirt.driver [-] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Instance destroyed successfully.
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.600 2 DEBUG nova.objects.instance [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Lazy-loading 'resources' on Instance uuid d7e14705-7aeb-440f-8e7e-926cc5b5ab6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:17:53 compute-0 neutron-haproxy-ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c[303081]: [NOTICE]   (303085) : haproxy version is 2.8.14-c23fe91
Oct 02 12:17:53 compute-0 neutron-haproxy-ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c[303081]: [NOTICE]   (303085) : path to executable is /usr/sbin/haproxy
Oct 02 12:17:53 compute-0 neutron-haproxy-ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c[303081]: [WARNING]  (303085) : Exiting Master process...
Oct 02 12:17:53 compute-0 neutron-haproxy-ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c[303081]: [ALERT]    (303085) : Current worker (303087) exited with code 143 (Terminated)
Oct 02 12:17:53 compute-0 neutron-haproxy-ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c[303081]: [WARNING]  (303085) : All workers exited. Exiting... (0)
Oct 02 12:17:53 compute-0 systemd[1]: libpod-77df440c4b10d02e30ba2b066391c8dd177254cc9194d21a55e155bed4877e17.scope: Deactivated successfully.
Oct 02 12:17:53 compute-0 podman[303705]: 2025-10-02 12:17:53.662164876 +0000 UTC m=+0.042814544 container died 77df440c4b10d02e30ba2b066391c8dd177254cc9194d21a55e155bed4877e17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:17:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-77df440c4b10d02e30ba2b066391c8dd177254cc9194d21a55e155bed4877e17-userdata-shm.mount: Deactivated successfully.
Oct 02 12:17:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-327582ce23201baa3800f67f27ab9384fcf9a9ad8316dbe8161716d1ce1ea7ef-merged.mount: Deactivated successfully.
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.688 2 DEBUG nova.virt.libvirt.vif [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1586743939',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1586743939',id=73,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCWM4YUMOoSm6GBNS5HDC9AWc4kDdtYuM7ALyCkO4vM9Maq4z2BEgsAZS104YCyjVd+8u5++0So1UucHRWrrFc68q06aupZKWB7gDCQ0BUAh1k4LbTy9YWCguazjF74qNw==',key_name='tempest-keypair-1315632604',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:17:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a9d3eca266284ae9950c491e566b2523',ramdisk_id='',reservation_id='r-tm18varp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='u
sb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-1070421781',owner_user_name='tempest-TaggedBootDevicesTest-1070421781-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:17:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d4b073f3365d481cabadfd39389c66ba',uuid=d7e14705-7aeb-440f-8e7e-926cc5b5ab6f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b5f1c6a-6b73-4b1d-8560-6a38d7e59732", "address": "fa:16:3e:c2:83:d4", "network": {"id": "18c9b866-0d06-4d2c-a073-d724f233a261", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-215265968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b5f1c6a-6b", "ovs_interfaceid": "1b5f1c6a-6b73-4b1d-8560-6a38d7e59732", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.689 2 DEBUG nova.network.os_vif_util [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converting VIF {"id": "1b5f1c6a-6b73-4b1d-8560-6a38d7e59732", "address": "fa:16:3e:c2:83:d4", "network": {"id": "18c9b866-0d06-4d2c-a073-d724f233a261", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-215265968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b5f1c6a-6b", "ovs_interfaceid": "1b5f1c6a-6b73-4b1d-8560-6a38d7e59732", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.689 2 DEBUG nova.network.os_vif_util [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c2:83:d4,bridge_name='br-int',has_traffic_filtering=True,id=1b5f1c6a-6b73-4b1d-8560-6a38d7e59732,network=Network(18c9b866-0d06-4d2c-a073-d724f233a261),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b5f1c6a-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.690 2 DEBUG os_vif [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:83:d4,bridge_name='br-int',has_traffic_filtering=True,id=1b5f1c6a-6b73-4b1d-8560-6a38d7e59732,network=Network(18c9b866-0d06-4d2c-a073-d724f233a261),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b5f1c6a-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.692 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b5f1c6a-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 podman[303705]: 2025-10-02 12:17:53.694587463 +0000 UTC m=+0.075237111 container cleanup 77df440c4b10d02e30ba2b066391c8dd177254cc9194d21a55e155bed4877e17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:17:53 compute-0 systemd[1]: libpod-conmon-77df440c4b10d02e30ba2b066391c8dd177254cc9194d21a55e155bed4877e17.scope: Deactivated successfully.
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.713 2 INFO os_vif [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:83:d4,bridge_name='br-int',has_traffic_filtering=True,id=1b5f1c6a-6b73-4b1d-8560-6a38d7e59732,network=Network(18c9b866-0d06-4d2c-a073-d724f233a261),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b5f1c6a-6b')
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.714 2 DEBUG nova.virt.libvirt.vif [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1586743939',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1586743939',id=73,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCWM4YUMOoSm6GBNS5HDC9AWc4kDdtYuM7ALyCkO4vM9Maq4z2BEgsAZS104YCyjVd+8u5++0So1UucHRWrrFc68q06aupZKWB7gDCQ0BUAh1k4LbTy9YWCguazjF74qNw==',key_name='tempest-keypair-1315632604',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:17:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a9d3eca266284ae9950c491e566b2523',ramdisk_id='',reservation_id='r-tm18varp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='u
sb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-1070421781',owner_user_name='tempest-TaggedBootDevicesTest-1070421781-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:17:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d4b073f3365d481cabadfd39389c66ba',uuid=d7e14705-7aeb-440f-8e7e-926cc5b5ab6f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "address": "fa:16:3e:86:0a:06", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd56785b1-ae", "ovs_interfaceid": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.714 2 DEBUG nova.network.os_vif_util [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converting VIF {"id": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "address": "fa:16:3e:86:0a:06", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd56785b1-ae", "ovs_interfaceid": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.715 2 DEBUG nova.network.os_vif_util [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:0a:06,bridge_name='br-int',has_traffic_filtering=True,id=d56785b1-aeee-49be-af2d-71e8fc0a6c9b,network=Network(12d2f684-17ee-45ff-8e0b-9e61463b4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd56785b1-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.716 2 DEBUG os_vif [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:0a:06,bridge_name='br-int',has_traffic_filtering=True,id=d56785b1-aeee-49be-af2d-71e8fc0a6c9b,network=Network(12d2f684-17ee-45ff-8e0b-9e61463b4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd56785b1-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.717 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd56785b1-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.734 2 INFO os_vif [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:0a:06,bridge_name='br-int',has_traffic_filtering=True,id=d56785b1-aeee-49be-af2d-71e8fc0a6c9b,network=Network(12d2f684-17ee-45ff-8e0b-9e61463b4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd56785b1-ae')
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.735 2 DEBUG nova.virt.libvirt.vif [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1586743939',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1586743939',id=73,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCWM4YUMOoSm6GBNS5HDC9AWc4kDdtYuM7ALyCkO4vM9Maq4z2BEgsAZS104YCyjVd+8u5++0So1UucHRWrrFc68q06aupZKWB7gDCQ0BUAh1k4LbTy9YWCguazjF74qNw==',key_name='tempest-keypair-1315632604',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:17:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a9d3eca266284ae9950c491e566b2523',ramdisk_id='',reservation_id='r-tm18varp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='u
sb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-1070421781',owner_user_name='tempest-TaggedBootDevicesTest-1070421781-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:17:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d4b073f3365d481cabadfd39389c66ba',uuid=d7e14705-7aeb-440f-8e7e-926cc5b5ab6f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "address": "fa:16:3e:3b:4c:02", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a1bf892-fd", "ovs_interfaceid": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.735 2 DEBUG nova.network.os_vif_util [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converting VIF {"id": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "address": "fa:16:3e:3b:4c:02", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a1bf892-fd", "ovs_interfaceid": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.736 2 DEBUG nova.network.os_vif_util [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:4c:02,bridge_name='br-int',has_traffic_filtering=True,id=4a1bf892-fd3f-40df-95f2-779ffc41ade1,network=Network(12d2f684-17ee-45ff-8e0b-9e61463b4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4a1bf892-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.736 2 DEBUG os_vif [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:4c:02,bridge_name='br-int',has_traffic_filtering=True,id=4a1bf892-fd3f-40df-95f2-779ffc41ade1,network=Network(12d2f684-17ee-45ff-8e0b-9e61463b4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4a1bf892-fd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.738 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a1bf892-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.752 2 INFO os_vif [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:4c:02,bridge_name='br-int',has_traffic_filtering=True,id=4a1bf892-fd3f-40df-95f2-779ffc41ade1,network=Network(12d2f684-17ee-45ff-8e0b-9e61463b4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4a1bf892-fd')
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.753 2 DEBUG nova.virt.libvirt.vif [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1586743939',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1586743939',id=73,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCWM4YUMOoSm6GBNS5HDC9AWc4kDdtYuM7ALyCkO4vM9Maq4z2BEgsAZS104YCyjVd+8u5++0So1UucHRWrrFc68q06aupZKWB7gDCQ0BUAh1k4LbTy9YWCguazjF74qNw==',key_name='tempest-keypair-1315632604',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:17:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a9d3eca266284ae9950c491e566b2523',ramdisk_id='',reservation_id='r-tm18varp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='u
sb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-1070421781',owner_user_name='tempest-TaggedBootDevicesTest-1070421781-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:17:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d4b073f3365d481cabadfd39389c66ba',uuid=d7e14705-7aeb-440f-8e7e-926cc5b5ab6f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c23e360e-b27a-4c79-a3ce-5c8850ccc846", "address": "fa:16:3e:1d:5e:56", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.222", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc23e360e-b2", "ovs_interfaceid": "c23e360e-b27a-4c79-a3ce-5c8850ccc846", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.754 2 DEBUG nova.network.os_vif_util [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converting VIF {"id": "c23e360e-b27a-4c79-a3ce-5c8850ccc846", "address": "fa:16:3e:1d:5e:56", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.222", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc23e360e-b2", "ovs_interfaceid": "c23e360e-b27a-4c79-a3ce-5c8850ccc846", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.754 2 DEBUG nova.network.os_vif_util [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:5e:56,bridge_name='br-int',has_traffic_filtering=True,id=c23e360e-b27a-4c79-a3ce-5c8850ccc846,network=Network(12d2f684-17ee-45ff-8e0b-9e61463b4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc23e360e-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.754 2 DEBUG os_vif [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:5e:56,bridge_name='br-int',has_traffic_filtering=True,id=c23e360e-b27a-4c79-a3ce-5c8850ccc846,network=Network(12d2f684-17ee-45ff-8e0b-9e61463b4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc23e360e-b2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.756 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc23e360e-b2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.767 2 INFO os_vif [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:5e:56,bridge_name='br-int',has_traffic_filtering=True,id=c23e360e-b27a-4c79-a3ce-5c8850ccc846,network=Network(12d2f684-17ee-45ff-8e0b-9e61463b4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc23e360e-b2')
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.769 2 DEBUG nova.virt.libvirt.vif [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1586743939',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1586743939',id=73,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCWM4YUMOoSm6GBNS5HDC9AWc4kDdtYuM7ALyCkO4vM9Maq4z2BEgsAZS104YCyjVd+8u5++0So1UucHRWrrFc68q06aupZKWB7gDCQ0BUAh1k4LbTy9YWCguazjF74qNw==',key_name='tempest-keypair-1315632604',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:17:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a9d3eca266284ae9950c491e566b2523',ramdisk_id='',reservation_id='r-tm18varp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='u
sb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-1070421781',owner_user_name='tempest-TaggedBootDevicesTest-1070421781-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:17:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d4b073f3365d481cabadfd39389c66ba',uuid=d7e14705-7aeb-440f-8e7e-926cc5b5ab6f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "address": "fa:16:3e:11:f2:18", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeea85d66-40", "ovs_interfaceid": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.769 2 DEBUG nova.network.os_vif_util [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converting VIF {"id": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "address": "fa:16:3e:11:f2:18", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeea85d66-40", "ovs_interfaceid": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.770 2 DEBUG nova.network.os_vif_util [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:f2:18,bridge_name='br-int',has_traffic_filtering=True,id=eea85d66-4089-4708-9cb0-a7d2ab18e6ec,network=Network(12d2f684-17ee-45ff-8e0b-9e61463b4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeea85d66-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.770 2 DEBUG os_vif [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:f2:18,bridge_name='br-int',has_traffic_filtering=True,id=eea85d66-4089-4708-9cb0-a7d2ab18e6ec,network=Network(12d2f684-17ee-45ff-8e0b-9e61463b4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeea85d66-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:17:53 compute-0 podman[303748]: 2025-10-02 12:17:53.771943826 +0000 UTC m=+0.050213667 container remove 77df440c4b10d02e30ba2b066391c8dd177254cc9194d21a55e155bed4877e17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.772 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeea85d66-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.777 2 DEBUG nova.compute.manager [req-59888099-f5fb-46cf-8feb-13b89bef495a req-bf05189b-7e10-4587-8b13-2235468123f5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-unplugged-1b5f1c6a-6b73-4b1d-8560-6a38d7e59732 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.778 2 DEBUG oslo_concurrency.lockutils [req-59888099-f5fb-46cf-8feb-13b89bef495a req-bf05189b-7e10-4587-8b13-2235468123f5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.778 2 DEBUG oslo_concurrency.lockutils [req-59888099-f5fb-46cf-8feb-13b89bef495a req-bf05189b-7e10-4587-8b13-2235468123f5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.778 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4d9c4bd0-6f85-4b43-8311-00e7663ec61c]: (4, ('Thu Oct  2 12:17:53 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c (77df440c4b10d02e30ba2b066391c8dd177254cc9194d21a55e155bed4877e17)\n77df440c4b10d02e30ba2b066391c8dd177254cc9194d21a55e155bed4877e17\nThu Oct  2 12:17:53 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c (77df440c4b10d02e30ba2b066391c8dd177254cc9194d21a55e155bed4877e17)\n77df440c4b10d02e30ba2b066391c8dd177254cc9194d21a55e155bed4877e17\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.779 2 DEBUG oslo_concurrency.lockutils [req-59888099-f5fb-46cf-8feb-13b89bef495a req-bf05189b-7e10-4587-8b13-2235468123f5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.779 2 DEBUG nova.compute.manager [req-59888099-f5fb-46cf-8feb-13b89bef495a req-bf05189b-7e10-4587-8b13-2235468123f5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] No waiting events found dispatching network-vif-unplugged-1b5f1c6a-6b73-4b1d-8560-6a38d7e59732 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.779 2 DEBUG nova.compute.manager [req-59888099-f5fb-46cf-8feb-13b89bef495a req-bf05189b-7e10-4587-8b13-2235468123f5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-unplugged-1b5f1c6a-6b73-4b1d-8560-6a38d7e59732 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.779 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2785e710-ff42-4ffc-afff-90af6f91808a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.780 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12d2f684-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:53 compute-0 kernel: tap12d2f684-10: left promiscuous mode
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.784 2 DEBUG nova.compute.manager [req-757dcfc8-1bf4-4178-85f5-8255044538af req-79a1cb53-3129-4631-84ce-f97fe545053c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-unplugged-d56785b1-aeee-49be-af2d-71e8fc0a6c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.784 2 DEBUG oslo_concurrency.lockutils [req-757dcfc8-1bf4-4178-85f5-8255044538af req-79a1cb53-3129-4631-84ce-f97fe545053c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.785 2 DEBUG oslo_concurrency.lockutils [req-757dcfc8-1bf4-4178-85f5-8255044538af req-79a1cb53-3129-4631-84ce-f97fe545053c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.785 2 DEBUG oslo_concurrency.lockutils [req-757dcfc8-1bf4-4178-85f5-8255044538af req-79a1cb53-3129-4631-84ce-f97fe545053c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.785 2 DEBUG nova.compute.manager [req-757dcfc8-1bf4-4178-85f5-8255044538af req-79a1cb53-3129-4631-84ce-f97fe545053c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] No waiting events found dispatching network-vif-unplugged-d56785b1-aeee-49be-af2d-71e8fc0a6c9b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.785 2 DEBUG nova.compute.manager [req-757dcfc8-1bf4-4178-85f5-8255044538af req-79a1cb53-3129-4631-84ce-f97fe545053c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-unplugged-d56785b1-aeee-49be-af2d-71e8fc0a6c9b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.788 2 INFO os_vif [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:f2:18,bridge_name='br-int',has_traffic_filtering=True,id=eea85d66-4089-4708-9cb0-a7d2ab18e6ec,network=Network(12d2f684-17ee-45ff-8e0b-9e61463b4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeea85d66-40')
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.789 2 DEBUG nova.virt.libvirt.vif [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1586743939',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1586743939',id=73,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCWM4YUMOoSm6GBNS5HDC9AWc4kDdtYuM7ALyCkO4vM9Maq4z2BEgsAZS104YCyjVd+8u5++0So1UucHRWrrFc68q06aupZKWB7gDCQ0BUAh1k4LbTy9YWCguazjF74qNw==',key_name='tempest-keypair-1315632604',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:17:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a9d3eca266284ae9950c491e566b2523',ramdisk_id='',reservation_id='r-tm18varp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='u
sb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-1070421781',owner_user_name='tempest-TaggedBootDevicesTest-1070421781-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:17:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d4b073f3365d481cabadfd39389c66ba',uuid=d7e14705-7aeb-440f-8e7e-926cc5b5ab6f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "address": "fa:16:3e:ee:27:7c", "network": {"id": "8f6e5e22-d751-48cd-82a1-be33a11e92d9", "bridge": "br-int", "label": "tempest-device-tagging-net2-1945824075", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7eba4256-8a", "ovs_interfaceid": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.789 2 DEBUG nova.network.os_vif_util [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converting VIF {"id": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "address": "fa:16:3e:ee:27:7c", "network": {"id": "8f6e5e22-d751-48cd-82a1-be33a11e92d9", "bridge": "br-int", "label": "tempest-device-tagging-net2-1945824075", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7eba4256-8a", "ovs_interfaceid": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.790 2 DEBUG nova.network.os_vif_util [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:27:7c,bridge_name='br-int',has_traffic_filtering=True,id=7eba4256-8a01-49a2-aec1-98d48eda4d3f,network=Network(8f6e5e22-d751-48cd-82a1-be33a11e92d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7eba4256-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.790 2 DEBUG os_vif [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:27:7c,bridge_name='br-int',has_traffic_filtering=True,id=7eba4256-8a01-49a2-aec1-98d48eda4d3f,network=Network(8f6e5e22-d751-48cd-82a1-be33a11e92d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7eba4256-8a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.792 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7eba4256-8a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.804 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e0caedc3-6165-4723-80c3-acca0d0c7e8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.812 2 INFO os_vif [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:27:7c,bridge_name='br-int',has_traffic_filtering=True,id=7eba4256-8a01-49a2-aec1-98d48eda4d3f,network=Network(8f6e5e22-d751-48cd-82a1-be33a11e92d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7eba4256-8a')
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.813 2 DEBUG nova.virt.libvirt.vif [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1586743939',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1586743939',id=73,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCWM4YUMOoSm6GBNS5HDC9AWc4kDdtYuM7ALyCkO4vM9Maq4z2BEgsAZS104YCyjVd+8u5++0So1UucHRWrrFc68q06aupZKWB7gDCQ0BUAh1k4LbTy9YWCguazjF74qNw==',key_name='tempest-keypair-1315632604',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:17:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a9d3eca266284ae9950c491e566b2523',ramdisk_id='',reservation_id='r-tm18varp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='u
sb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-1070421781',owner_user_name='tempest-TaggedBootDevicesTest-1070421781-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:17:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d4b073f3365d481cabadfd39389c66ba',uuid=d7e14705-7aeb-440f-8e7e-926cc5b5ab6f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "137dbaa1-1ef7-4aea-bc14-9f99b461690c", "address": "fa:16:3e:4f:43:a5", "network": {"id": "8f6e5e22-d751-48cd-82a1-be33a11e92d9", "bridge": "br-int", "label": "tempest-device-tagging-net2-1945824075", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap137dbaa1-1e", "ovs_interfaceid": "137dbaa1-1ef7-4aea-bc14-9f99b461690c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.814 2 DEBUG nova.network.os_vif_util [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converting VIF {"id": "137dbaa1-1ef7-4aea-bc14-9f99b461690c", "address": "fa:16:3e:4f:43:a5", "network": {"id": "8f6e5e22-d751-48cd-82a1-be33a11e92d9", "bridge": "br-int", "label": "tempest-device-tagging-net2-1945824075", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap137dbaa1-1e", "ovs_interfaceid": "137dbaa1-1ef7-4aea-bc14-9f99b461690c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.815 2 DEBUG nova.network.os_vif_util [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:43:a5,bridge_name='br-int',has_traffic_filtering=True,id=137dbaa1-1ef7-4aea-bc14-9f99b461690c,network=Network(8f6e5e22-d751-48cd-82a1-be33a11e92d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap137dbaa1-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.816 2 DEBUG os_vif [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:43:a5,bridge_name='br-int',has_traffic_filtering=True,id=137dbaa1-1ef7-4aea-bc14-9f99b461690c,network=Network(8f6e5e22-d751-48cd-82a1-be33a11e92d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap137dbaa1-1e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.817 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap137dbaa1-1e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.858 2 DEBUG nova.compute.manager [req-47a6ff3c-aa49-4c6b-a9dc-b00c652a3542 req-0370bfb6-3423-4325-a788-d7fb780b7fae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-unplugged-c23e360e-b27a-4c79-a3ce-5c8850ccc846 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.858 2 DEBUG oslo_concurrency.lockutils [req-47a6ff3c-aa49-4c6b-a9dc-b00c652a3542 req-0370bfb6-3423-4325-a788-d7fb780b7fae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.859 2 DEBUG oslo_concurrency.lockutils [req-47a6ff3c-aa49-4c6b-a9dc-b00c652a3542 req-0370bfb6-3423-4325-a788-d7fb780b7fae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.859 2 DEBUG oslo_concurrency.lockutils [req-47a6ff3c-aa49-4c6b-a9dc-b00c652a3542 req-0370bfb6-3423-4325-a788-d7fb780b7fae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.859 2 DEBUG nova.compute.manager [req-47a6ff3c-aa49-4c6b-a9dc-b00c652a3542 req-0370bfb6-3423-4325-a788-d7fb780b7fae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] No waiting events found dispatching network-vif-unplugged-c23e360e-b27a-4c79-a3ce-5c8850ccc846 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.859 2 DEBUG nova.compute.manager [req-47a6ff3c-aa49-4c6b-a9dc-b00c652a3542 req-0370bfb6-3423-4325-a788-d7fb780b7fae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-unplugged-c23e360e-b27a-4c79-a3ce-5c8850ccc846 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:17:53 compute-0 nova_compute[257802]: 2025-10-02 12:17:53.880 2 INFO os_vif [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:43:a5,bridge_name='br-int',has_traffic_filtering=True,id=137dbaa1-1ef7-4aea-bc14-9f99b461690c,network=Network(8f6e5e22-d751-48cd-82a1-be33a11e92d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap137dbaa1-1e')
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.892 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[fa39305b-979d-4007-8d63-f8143297b848]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.893 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[aae27a61-5f88-4129-bc84-c053cb2438ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.907 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2f8cbd93-6cb9-4669-a914-c56b946ca555]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 549421, 'reachable_time': 30013, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303795, 'error': None, 'target': 'ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.910 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-12d2f684-17ee-45ff-8e0b-9e61463b4f7c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.910 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[6dfe58d9-b89c-4385-98f8-b28246e7a292]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.911 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 4a1bf892-fd3f-40df-95f2-779ffc41ade1 in datapath 12d2f684-17ee-45ff-8e0b-9e61463b4f7c unbound from our chassis
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.912 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 12d2f684-17ee-45ff-8e0b-9e61463b4f7c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.914 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb7f3f8-3551-4b9d-b598-14b66bd395e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.915 158261 INFO neutron.agent.ovn.metadata.agent [-] Port eea85d66-4089-4708-9cb0-a7d2ab18e6ec in datapath 12d2f684-17ee-45ff-8e0b-9e61463b4f7c unbound from our chassis
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.916 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 12d2f684-17ee-45ff-8e0b-9e61463b4f7c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.916 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[76ca77c6-bead-4423-97d1-c6c5252ae84e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.917 158261 INFO neutron.agent.ovn.metadata.agent [-] Port c23e360e-b27a-4c79-a3ce-5c8850ccc846 in datapath 12d2f684-17ee-45ff-8e0b-9e61463b4f7c unbound from our chassis
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.918 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 12d2f684-17ee-45ff-8e0b-9e61463b4f7c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.919 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[92716f25-e2b1-4db0-8cf5-4e4007ae5096]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.920 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 7eba4256-8a01-49a2-aec1-98d48eda4d3f in datapath 8f6e5e22-d751-48cd-82a1-be33a11e92d9 unbound from our chassis
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.921 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f6e5e22-d751-48cd-82a1-be33a11e92d9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.922 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6b3d2618-0cc0-47c2-bc38-4d3cff731f4f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:53.922 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8f6e5e22-d751-48cd-82a1-be33a11e92d9 namespace which is not needed anymore
Oct 02 12:17:54 compute-0 neutron-haproxy-ovnmeta-8f6e5e22-d751-48cd-82a1-be33a11e92d9[303165]: [NOTICE]   (303169) : haproxy version is 2.8.14-c23fe91
Oct 02 12:17:54 compute-0 neutron-haproxy-ovnmeta-8f6e5e22-d751-48cd-82a1-be33a11e92d9[303165]: [NOTICE]   (303169) : path to executable is /usr/sbin/haproxy
Oct 02 12:17:54 compute-0 neutron-haproxy-ovnmeta-8f6e5e22-d751-48cd-82a1-be33a11e92d9[303165]: [WARNING]  (303169) : Exiting Master process...
Oct 02 12:17:54 compute-0 neutron-haproxy-ovnmeta-8f6e5e22-d751-48cd-82a1-be33a11e92d9[303165]: [WARNING]  (303169) : Exiting Master process...
Oct 02 12:17:54 compute-0 neutron-haproxy-ovnmeta-8f6e5e22-d751-48cd-82a1-be33a11e92d9[303165]: [ALERT]    (303169) : Current worker (303171) exited with code 143 (Terminated)
Oct 02 12:17:54 compute-0 neutron-haproxy-ovnmeta-8f6e5e22-d751-48cd-82a1-be33a11e92d9[303165]: [WARNING]  (303169) : All workers exited. Exiting... (0)
Oct 02 12:17:54 compute-0 systemd[1]: libpod-021dd0c938eefcafd10d2fdf0a48f2bb334bce8dc0666c22dfb6a986d617b10e.scope: Deactivated successfully.
Oct 02 12:17:54 compute-0 podman[303816]: 2025-10-02 12:17:54.044744485 +0000 UTC m=+0.045450449 container died 021dd0c938eefcafd10d2fdf0a48f2bb334bce8dc0666c22dfb6a986d617b10e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8f6e5e22-d751-48cd-82a1-be33a11e92d9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:17:54 compute-0 podman[303816]: 2025-10-02 12:17:54.118283443 +0000 UTC m=+0.118989397 container cleanup 021dd0c938eefcafd10d2fdf0a48f2bb334bce8dc0666c22dfb6a986d617b10e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8f6e5e22-d751-48cd-82a1-be33a11e92d9, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:17:54 compute-0 systemd[1]: libpod-conmon-021dd0c938eefcafd10d2fdf0a48f2bb334bce8dc0666c22dfb6a986d617b10e.scope: Deactivated successfully.
Oct 02 12:17:54 compute-0 podman[303840]: 2025-10-02 12:17:54.164378097 +0000 UTC m=+0.092973917 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid)
Oct 02 12:17:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:54.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:54 compute-0 nova_compute[257802]: 2025-10-02 12:17:54.204 2 INFO nova.virt.libvirt.driver [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Deleting instance files /var/lib/nova/instances/d7e14705-7aeb-440f-8e7e-926cc5b5ab6f_del
Oct 02 12:17:54 compute-0 nova_compute[257802]: 2025-10-02 12:17:54.205 2 INFO nova.virt.libvirt.driver [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Deletion of /var/lib/nova/instances/d7e14705-7aeb-440f-8e7e-926cc5b5ab6f_del complete
Oct 02 12:17:54 compute-0 podman[303832]: 2025-10-02 12:17:54.211582078 +0000 UTC m=+0.144385662 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 12:17:54 compute-0 podman[303865]: 2025-10-02 12:17:54.219940844 +0000 UTC m=+0.083442134 container remove 021dd0c938eefcafd10d2fdf0a48f2bb334bce8dc0666c22dfb6a986d617b10e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8f6e5e22-d751-48cd-82a1-be33a11e92d9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:17:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:54.225 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4a2906a5-9078-4184-acc3-e0db521666c3]: (4, ('Thu Oct  2 12:17:53 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8f6e5e22-d751-48cd-82a1-be33a11e92d9 (021dd0c938eefcafd10d2fdf0a48f2bb334bce8dc0666c22dfb6a986d617b10e)\n021dd0c938eefcafd10d2fdf0a48f2bb334bce8dc0666c22dfb6a986d617b10e\nThu Oct  2 12:17:54 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8f6e5e22-d751-48cd-82a1-be33a11e92d9 (021dd0c938eefcafd10d2fdf0a48f2bb334bce8dc0666c22dfb6a986d617b10e)\n021dd0c938eefcafd10d2fdf0a48f2bb334bce8dc0666c22dfb6a986d617b10e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:54.227 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[bbea7d65-e28f-447a-a3db-e33376ef85b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:54.228 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f6e5e22-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:54 compute-0 nova_compute[257802]: 2025-10-02 12:17:54.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:54 compute-0 kernel: tap8f6e5e22-d0: left promiscuous mode
Oct 02 12:17:54 compute-0 nova_compute[257802]: 2025-10-02 12:17:54.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:54.251 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cef43c38-5d72-469c-a53c-3188b6a04765]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:17:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:17:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:17:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:17:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021834069072212917 of space, bias 1.0, pg target 0.6550220721663875 quantized to 32 (current 32)
Oct 02 12:17:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:17:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002170865903592034 of space, bias 1.0, pg target 0.6512597710776102 quantized to 32 (current 32)
Oct 02 12:17:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:17:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:17:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:17:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002892064489112228 of space, bias 1.0, pg target 0.8676193467336684 quantized to 32 (current 32)
Oct 02 12:17:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:17:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 12:17:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:17:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:17:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:17:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 12:17:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:17:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 12:17:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:17:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:17:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:17:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 12:17:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:54.280 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[47dbc102-bf50-492d-963a-fe5553c65f47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cc7e1b1365c6cd4aa80ce03b6a91593ac328a484868386802d55eb343712feb-merged.mount: Deactivated successfully.
Oct 02 12:17:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-021dd0c938eefcafd10d2fdf0a48f2bb334bce8dc0666c22dfb6a986d617b10e-userdata-shm.mount: Deactivated successfully.
Oct 02 12:17:54 compute-0 systemd[1]: run-netns-ovnmeta\x2d12d2f684\x2d17ee\x2d45ff\x2d8e0b\x2d9e61463b4f7c.mount: Deactivated successfully.
Oct 02 12:17:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:54.282 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a6cab503-a62d-43f7-8350-001aef43dad4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:54.305 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1c19e4e5-0fdd-4ec0-88ae-be8c377c6a75]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 549537, 'reachable_time': 33983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303898, 'error': None, 'target': 'ovnmeta-8f6e5e22-d751-48cd-82a1-be33a11e92d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:54.307 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8f6e5e22-d751-48cd-82a1-be33a11e92d9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:17:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:54.307 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[828fbb83-1518-44ee-8b7e-0a4bd4a261bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:54.308 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 137dbaa1-1ef7-4aea-bc14-9f99b461690c in datapath 8f6e5e22-d751-48cd-82a1-be33a11e92d9 unbound from our chassis
Oct 02 12:17:54 compute-0 systemd[1]: run-netns-ovnmeta\x2d8f6e5e22\x2dd751\x2d48cd\x2d82a1\x2dbe33a11e92d9.mount: Deactivated successfully.
Oct 02 12:17:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:54.310 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f6e5e22-d751-48cd-82a1-be33a11e92d9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:17:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:17:54.311 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9527851a-edb5-4880-9396-39a3694487b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:54 compute-0 nova_compute[257802]: 2025-10-02 12:17:54.328 2 INFO nova.compute.manager [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Took 1.47 seconds to destroy the instance on the hypervisor.
Oct 02 12:17:54 compute-0 nova_compute[257802]: 2025-10-02 12:17:54.329 2 DEBUG oslo.service.loopingcall [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:17:54 compute-0 nova_compute[257802]: 2025-10-02 12:17:54.329 2 DEBUG nova.compute.manager [-] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:17:54 compute-0 nova_compute[257802]: 2025-10-02 12:17:54.329 2 DEBUG nova.network.neutron [-] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:17:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:54.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:54 compute-0 ceph-mon[73607]: pgmap v1627: 305 pgs: 305 active+clean; 247 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 2.8 KiB/s rd, 6.0 KiB/s wr, 1 op/s
Oct 02 12:17:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 305 active+clean; 247 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 3.2 KiB/s rd, 4.3 KiB/s wr, 1 op/s
Oct 02 12:17:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:17:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1469933789' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:17:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:17:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1469933789' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:55.760296) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407475760362, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 406, "num_deletes": 251, "total_data_size": 311415, "memory_usage": 319176, "flush_reason": "Manual Compaction"}
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Oct 02 12:17:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1469933789' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:17:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1469933789' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407475765472, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 308298, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36298, "largest_seqno": 36703, "table_properties": {"data_size": 305884, "index_size": 514, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5950, "raw_average_key_size": 18, "raw_value_size": 301121, "raw_average_value_size": 949, "num_data_blocks": 23, "num_entries": 317, "num_filter_entries": 317, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407458, "oldest_key_time": 1759407458, "file_creation_time": 1759407475, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 5200 microseconds, and 1897 cpu microseconds.
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:55.765511) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 308298 bytes OK
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:55.765527) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:55.768697) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:55.768712) EVENT_LOG_v1 {"time_micros": 1759407475768706, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:55.768728) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 308862, prev total WAL file size 308862, number of live WAL files 2.
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:55.769333) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(301KB)], [77(10MB)]
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407475769410, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 11712392, "oldest_snapshot_seqno": -1}
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6151 keys, 9867115 bytes, temperature: kUnknown
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407475851912, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 9867115, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9824948, "index_size": 25673, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15429, "raw_key_size": 158935, "raw_average_key_size": 25, "raw_value_size": 9713448, "raw_average_value_size": 1579, "num_data_blocks": 1023, "num_entries": 6151, "num_filter_entries": 6151, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759407475, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:55.852193) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 9867115 bytes
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:55.854559) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 142.1 rd, 119.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.9 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(70.0) write-amplify(32.0) OK, records in: 6661, records dropped: 510 output_compression: NoCompression
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:55.854586) EVENT_LOG_v1 {"time_micros": 1759407475854574, "job": 44, "event": "compaction_finished", "compaction_time_micros": 82430, "compaction_time_cpu_micros": 33122, "output_level": 6, "num_output_files": 1, "total_output_size": 9867115, "num_input_records": 6661, "num_output_records": 6151, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407475854786, "job": 44, "event": "table_file_deletion", "file_number": 79}
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407475857381, "job": 44, "event": "table_file_deletion", "file_number": 77}
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:55.769265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:55.857512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:55.857523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:55.857526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:55.857529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:55 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:17:55.857533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.105 2 DEBUG nova.compute.manager [req-52f6dfdc-1958-43d4-8348-79bccf8dbf24 req-629104f2-45ec-4e5b-b6d5-6b414304140f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-plugged-1b5f1c6a-6b73-4b1d-8560-6a38d7e59732 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.106 2 DEBUG oslo_concurrency.lockutils [req-52f6dfdc-1958-43d4-8348-79bccf8dbf24 req-629104f2-45ec-4e5b-b6d5-6b414304140f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.106 2 DEBUG oslo_concurrency.lockutils [req-52f6dfdc-1958-43d4-8348-79bccf8dbf24 req-629104f2-45ec-4e5b-b6d5-6b414304140f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.106 2 DEBUG oslo_concurrency.lockutils [req-52f6dfdc-1958-43d4-8348-79bccf8dbf24 req-629104f2-45ec-4e5b-b6d5-6b414304140f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.107 2 DEBUG nova.compute.manager [req-52f6dfdc-1958-43d4-8348-79bccf8dbf24 req-629104f2-45ec-4e5b-b6d5-6b414304140f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] No waiting events found dispatching network-vif-plugged-1b5f1c6a-6b73-4b1d-8560-6a38d7e59732 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.107 2 WARNING nova.compute.manager [req-52f6dfdc-1958-43d4-8348-79bccf8dbf24 req-629104f2-45ec-4e5b-b6d5-6b414304140f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received unexpected event network-vif-plugged-1b5f1c6a-6b73-4b1d-8560-6a38d7e59732 for instance with vm_state active and task_state deleting.
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.107 2 DEBUG nova.compute.manager [req-52f6dfdc-1958-43d4-8348-79bccf8dbf24 req-629104f2-45ec-4e5b-b6d5-6b414304140f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-unplugged-eea85d66-4089-4708-9cb0-a7d2ab18e6ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.108 2 DEBUG oslo_concurrency.lockutils [req-52f6dfdc-1958-43d4-8348-79bccf8dbf24 req-629104f2-45ec-4e5b-b6d5-6b414304140f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.108 2 DEBUG oslo_concurrency.lockutils [req-52f6dfdc-1958-43d4-8348-79bccf8dbf24 req-629104f2-45ec-4e5b-b6d5-6b414304140f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.108 2 DEBUG oslo_concurrency.lockutils [req-52f6dfdc-1958-43d4-8348-79bccf8dbf24 req-629104f2-45ec-4e5b-b6d5-6b414304140f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.108 2 DEBUG nova.compute.manager [req-52f6dfdc-1958-43d4-8348-79bccf8dbf24 req-629104f2-45ec-4e5b-b6d5-6b414304140f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] No waiting events found dispatching network-vif-unplugged-eea85d66-4089-4708-9cb0-a7d2ab18e6ec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.109 2 DEBUG nova.compute.manager [req-52f6dfdc-1958-43d4-8348-79bccf8dbf24 req-629104f2-45ec-4e5b-b6d5-6b414304140f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-unplugged-eea85d66-4089-4708-9cb0-a7d2ab18e6ec for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:17:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:17:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:56.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.345 2 DEBUG nova.compute.manager [req-1fa1aee6-218e-40a2-978f-98a5cf4d755b req-c7b7ffb0-6f79-4ad3-acc0-98a9b96659bc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-plugged-d56785b1-aeee-49be-af2d-71e8fc0a6c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.345 2 DEBUG oslo_concurrency.lockutils [req-1fa1aee6-218e-40a2-978f-98a5cf4d755b req-c7b7ffb0-6f79-4ad3-acc0-98a9b96659bc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.346 2 DEBUG oslo_concurrency.lockutils [req-1fa1aee6-218e-40a2-978f-98a5cf4d755b req-c7b7ffb0-6f79-4ad3-acc0-98a9b96659bc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.346 2 DEBUG oslo_concurrency.lockutils [req-1fa1aee6-218e-40a2-978f-98a5cf4d755b req-c7b7ffb0-6f79-4ad3-acc0-98a9b96659bc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.346 2 DEBUG nova.compute.manager [req-1fa1aee6-218e-40a2-978f-98a5cf4d755b req-c7b7ffb0-6f79-4ad3-acc0-98a9b96659bc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] No waiting events found dispatching network-vif-plugged-d56785b1-aeee-49be-af2d-71e8fc0a6c9b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.347 2 WARNING nova.compute.manager [req-1fa1aee6-218e-40a2-978f-98a5cf4d755b req-c7b7ffb0-6f79-4ad3-acc0-98a9b96659bc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received unexpected event network-vif-plugged-d56785b1-aeee-49be-af2d-71e8fc0a6c9b for instance with vm_state active and task_state deleting.
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.347 2 DEBUG nova.compute.manager [req-1fa1aee6-218e-40a2-978f-98a5cf4d755b req-c7b7ffb0-6f79-4ad3-acc0-98a9b96659bc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-unplugged-4a1bf892-fd3f-40df-95f2-779ffc41ade1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.347 2 DEBUG oslo_concurrency.lockutils [req-1fa1aee6-218e-40a2-978f-98a5cf4d755b req-c7b7ffb0-6f79-4ad3-acc0-98a9b96659bc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.348 2 DEBUG oslo_concurrency.lockutils [req-1fa1aee6-218e-40a2-978f-98a5cf4d755b req-c7b7ffb0-6f79-4ad3-acc0-98a9b96659bc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.348 2 DEBUG oslo_concurrency.lockutils [req-1fa1aee6-218e-40a2-978f-98a5cf4d755b req-c7b7ffb0-6f79-4ad3-acc0-98a9b96659bc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.349 2 DEBUG nova.compute.manager [req-1fa1aee6-218e-40a2-978f-98a5cf4d755b req-c7b7ffb0-6f79-4ad3-acc0-98a9b96659bc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] No waiting events found dispatching network-vif-unplugged-4a1bf892-fd3f-40df-95f2-779ffc41ade1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.349 2 DEBUG nova.compute.manager [req-1fa1aee6-218e-40a2-978f-98a5cf4d755b req-c7b7ffb0-6f79-4ad3-acc0-98a9b96659bc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-unplugged-4a1bf892-fd3f-40df-95f2-779ffc41ade1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:17:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:56.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.486 2 DEBUG nova.compute.manager [req-26a162e7-6047-48a6-b003-23e7c8823248 req-612810ae-7073-4426-af1a-db43943873a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-plugged-c23e360e-b27a-4c79-a3ce-5c8850ccc846 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.487 2 DEBUG oslo_concurrency.lockutils [req-26a162e7-6047-48a6-b003-23e7c8823248 req-612810ae-7073-4426-af1a-db43943873a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.487 2 DEBUG oslo_concurrency.lockutils [req-26a162e7-6047-48a6-b003-23e7c8823248 req-612810ae-7073-4426-af1a-db43943873a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.487 2 DEBUG oslo_concurrency.lockutils [req-26a162e7-6047-48a6-b003-23e7c8823248 req-612810ae-7073-4426-af1a-db43943873a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.488 2 DEBUG nova.compute.manager [req-26a162e7-6047-48a6-b003-23e7c8823248 req-612810ae-7073-4426-af1a-db43943873a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] No waiting events found dispatching network-vif-plugged-c23e360e-b27a-4c79-a3ce-5c8850ccc846 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.488 2 WARNING nova.compute.manager [req-26a162e7-6047-48a6-b003-23e7c8823248 req-612810ae-7073-4426-af1a-db43943873a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received unexpected event network-vif-plugged-c23e360e-b27a-4c79-a3ce-5c8850ccc846 for instance with vm_state active and task_state deleting.
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.488 2 DEBUG nova.compute.manager [req-26a162e7-6047-48a6-b003-23e7c8823248 req-612810ae-7073-4426-af1a-db43943873a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-unplugged-137dbaa1-1ef7-4aea-bc14-9f99b461690c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.489 2 DEBUG oslo_concurrency.lockutils [req-26a162e7-6047-48a6-b003-23e7c8823248 req-612810ae-7073-4426-af1a-db43943873a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.489 2 DEBUG oslo_concurrency.lockutils [req-26a162e7-6047-48a6-b003-23e7c8823248 req-612810ae-7073-4426-af1a-db43943873a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.489 2 DEBUG oslo_concurrency.lockutils [req-26a162e7-6047-48a6-b003-23e7c8823248 req-612810ae-7073-4426-af1a-db43943873a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.489 2 DEBUG nova.compute.manager [req-26a162e7-6047-48a6-b003-23e7c8823248 req-612810ae-7073-4426-af1a-db43943873a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] No waiting events found dispatching network-vif-unplugged-137dbaa1-1ef7-4aea-bc14-9f99b461690c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.490 2 DEBUG nova.compute.manager [req-26a162e7-6047-48a6-b003-23e7c8823248 req-612810ae-7073-4426-af1a-db43943873a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-unplugged-137dbaa1-1ef7-4aea-bc14-9f99b461690c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:17:56 compute-0 ceph-mon[73607]: pgmap v1628: 305 pgs: 305 active+clean; 247 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 3.2 KiB/s rd, 4.3 KiB/s wr, 1 op/s
Oct 02 12:17:56 compute-0 nova_compute[257802]: 2025-10-02 12:17:56.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 305 active+clean; 246 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 6.2 KiB/s rd, 3.6 KiB/s wr, 3 op/s
Oct 02 12:17:57 compute-0 sudo[303901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:17:57 compute-0 sudo[303901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:57 compute-0 sudo[303901]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:58 compute-0 sudo[303926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:17:58 compute-0 sudo[303926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:58 compute-0 sudo[303926]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:58 compute-0 podman[303950]: 2025-10-02 12:17:58.099883716 +0000 UTC m=+0.084226043 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 02 12:17:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:58.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:58 compute-0 nova_compute[257802]: 2025-10-02 12:17:58.289 2 DEBUG nova.compute.manager [req-64b63226-6409-43f0-a9b4-1d6ade65b52e req-701fafe1-0193-48ec-b378-9a5c0cb0e143 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-plugged-eea85d66-4089-4708-9cb0-a7d2ab18e6ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:58 compute-0 nova_compute[257802]: 2025-10-02 12:17:58.290 2 DEBUG oslo_concurrency.lockutils [req-64b63226-6409-43f0-a9b4-1d6ade65b52e req-701fafe1-0193-48ec-b378-9a5c0cb0e143 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:58 compute-0 nova_compute[257802]: 2025-10-02 12:17:58.290 2 DEBUG oslo_concurrency.lockutils [req-64b63226-6409-43f0-a9b4-1d6ade65b52e req-701fafe1-0193-48ec-b378-9a5c0cb0e143 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:58 compute-0 nova_compute[257802]: 2025-10-02 12:17:58.290 2 DEBUG oslo_concurrency.lockutils [req-64b63226-6409-43f0-a9b4-1d6ade65b52e req-701fafe1-0193-48ec-b378-9a5c0cb0e143 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:58 compute-0 nova_compute[257802]: 2025-10-02 12:17:58.291 2 DEBUG nova.compute.manager [req-64b63226-6409-43f0-a9b4-1d6ade65b52e req-701fafe1-0193-48ec-b378-9a5c0cb0e143 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] No waiting events found dispatching network-vif-plugged-eea85d66-4089-4708-9cb0-a7d2ab18e6ec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:17:58 compute-0 nova_compute[257802]: 2025-10-02 12:17:58.291 2 WARNING nova.compute.manager [req-64b63226-6409-43f0-a9b4-1d6ade65b52e req-701fafe1-0193-48ec-b378-9a5c0cb0e143 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received unexpected event network-vif-plugged-eea85d66-4089-4708-9cb0-a7d2ab18e6ec for instance with vm_state active and task_state deleting.
Oct 02 12:17:58 compute-0 nova_compute[257802]: 2025-10-02 12:17:58.291 2 DEBUG nova.compute.manager [req-64b63226-6409-43f0-a9b4-1d6ade65b52e req-701fafe1-0193-48ec-b378-9a5c0cb0e143 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-unplugged-7eba4256-8a01-49a2-aec1-98d48eda4d3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:58 compute-0 nova_compute[257802]: 2025-10-02 12:17:58.292 2 DEBUG oslo_concurrency.lockutils [req-64b63226-6409-43f0-a9b4-1d6ade65b52e req-701fafe1-0193-48ec-b378-9a5c0cb0e143 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:58 compute-0 nova_compute[257802]: 2025-10-02 12:17:58.292 2 DEBUG oslo_concurrency.lockutils [req-64b63226-6409-43f0-a9b4-1d6ade65b52e req-701fafe1-0193-48ec-b378-9a5c0cb0e143 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:58 compute-0 nova_compute[257802]: 2025-10-02 12:17:58.292 2 DEBUG oslo_concurrency.lockutils [req-64b63226-6409-43f0-a9b4-1d6ade65b52e req-701fafe1-0193-48ec-b378-9a5c0cb0e143 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:58 compute-0 nova_compute[257802]: 2025-10-02 12:17:58.292 2 DEBUG nova.compute.manager [req-64b63226-6409-43f0-a9b4-1d6ade65b52e req-701fafe1-0193-48ec-b378-9a5c0cb0e143 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] No waiting events found dispatching network-vif-unplugged-7eba4256-8a01-49a2-aec1-98d48eda4d3f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:17:58 compute-0 nova_compute[257802]: 2025-10-02 12:17:58.293 2 DEBUG nova.compute.manager [req-64b63226-6409-43f0-a9b4-1d6ade65b52e req-701fafe1-0193-48ec-b378-9a5c0cb0e143 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-unplugged-7eba4256-8a01-49a2-aec1-98d48eda4d3f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:17:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:17:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:58.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:58 compute-0 ceph-mon[73607]: pgmap v1629: 305 pgs: 305 active+clean; 246 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 6.2 KiB/s rd, 3.6 KiB/s wr, 3 op/s
Oct 02 12:17:58 compute-0 nova_compute[257802]: 2025-10-02 12:17:58.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 305 active+clean; 246 MiB data, 749 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 12 KiB/s wr, 16 op/s
Oct 02 12:17:59 compute-0 nova_compute[257802]: 2025-10-02 12:17:59.328 2 DEBUG nova.compute.manager [req-60e9bf43-d693-448c-8098-1b51ebc58429 req-afc4271d-4af7-43fa-bf3c-7348391ff23b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-plugged-4a1bf892-fd3f-40df-95f2-779ffc41ade1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:59 compute-0 nova_compute[257802]: 2025-10-02 12:17:59.328 2 DEBUG oslo_concurrency.lockutils [req-60e9bf43-d693-448c-8098-1b51ebc58429 req-afc4271d-4af7-43fa-bf3c-7348391ff23b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:59 compute-0 nova_compute[257802]: 2025-10-02 12:17:59.329 2 DEBUG oslo_concurrency.lockutils [req-60e9bf43-d693-448c-8098-1b51ebc58429 req-afc4271d-4af7-43fa-bf3c-7348391ff23b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:59 compute-0 nova_compute[257802]: 2025-10-02 12:17:59.329 2 DEBUG oslo_concurrency.lockutils [req-60e9bf43-d693-448c-8098-1b51ebc58429 req-afc4271d-4af7-43fa-bf3c-7348391ff23b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:59 compute-0 nova_compute[257802]: 2025-10-02 12:17:59.329 2 DEBUG nova.compute.manager [req-60e9bf43-d693-448c-8098-1b51ebc58429 req-afc4271d-4af7-43fa-bf3c-7348391ff23b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] No waiting events found dispatching network-vif-plugged-4a1bf892-fd3f-40df-95f2-779ffc41ade1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:17:59 compute-0 nova_compute[257802]: 2025-10-02 12:17:59.329 2 WARNING nova.compute.manager [req-60e9bf43-d693-448c-8098-1b51ebc58429 req-afc4271d-4af7-43fa-bf3c-7348391ff23b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received unexpected event network-vif-plugged-4a1bf892-fd3f-40df-95f2-779ffc41ade1 for instance with vm_state active and task_state deleting.
Oct 02 12:17:59 compute-0 nova_compute[257802]: 2025-10-02 12:17:59.438 2 DEBUG nova.compute.manager [req-d9d97fba-dafe-4b89-82df-ead67b2aa7dd req-7ed1cb48-1569-4ff7-9abd-763fe7df42e7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-plugged-137dbaa1-1ef7-4aea-bc14-9f99b461690c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:59 compute-0 nova_compute[257802]: 2025-10-02 12:17:59.439 2 DEBUG oslo_concurrency.lockutils [req-d9d97fba-dafe-4b89-82df-ead67b2aa7dd req-7ed1cb48-1569-4ff7-9abd-763fe7df42e7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:59 compute-0 nova_compute[257802]: 2025-10-02 12:17:59.439 2 DEBUG oslo_concurrency.lockutils [req-d9d97fba-dafe-4b89-82df-ead67b2aa7dd req-7ed1cb48-1569-4ff7-9abd-763fe7df42e7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:59 compute-0 nova_compute[257802]: 2025-10-02 12:17:59.439 2 DEBUG oslo_concurrency.lockutils [req-d9d97fba-dafe-4b89-82df-ead67b2aa7dd req-7ed1cb48-1569-4ff7-9abd-763fe7df42e7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:59 compute-0 nova_compute[257802]: 2025-10-02 12:17:59.440 2 DEBUG nova.compute.manager [req-d9d97fba-dafe-4b89-82df-ead67b2aa7dd req-7ed1cb48-1569-4ff7-9abd-763fe7df42e7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] No waiting events found dispatching network-vif-plugged-137dbaa1-1ef7-4aea-bc14-9f99b461690c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:17:59 compute-0 nova_compute[257802]: 2025-10-02 12:17:59.440 2 WARNING nova.compute.manager [req-d9d97fba-dafe-4b89-82df-ead67b2aa7dd req-7ed1cb48-1569-4ff7-9abd-763fe7df42e7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received unexpected event network-vif-plugged-137dbaa1-1ef7-4aea-bc14-9f99b461690c for instance with vm_state active and task_state deleting.
Oct 02 12:18:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:00.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:18:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:00.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:18:00 compute-0 nova_compute[257802]: 2025-10-02 12:18:00.484 2 DEBUG nova.compute.manager [req-61e21559-3218-472a-84f4-4c5280c21bb2 req-7a3c7a3e-83a6-4dc9-95aa-0cb3fb038ae1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-plugged-7eba4256-8a01-49a2-aec1-98d48eda4d3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:00 compute-0 nova_compute[257802]: 2025-10-02 12:18:00.484 2 DEBUG oslo_concurrency.lockutils [req-61e21559-3218-472a-84f4-4c5280c21bb2 req-7a3c7a3e-83a6-4dc9-95aa-0cb3fb038ae1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:00 compute-0 nova_compute[257802]: 2025-10-02 12:18:00.484 2 DEBUG oslo_concurrency.lockutils [req-61e21559-3218-472a-84f4-4c5280c21bb2 req-7a3c7a3e-83a6-4dc9-95aa-0cb3fb038ae1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:00 compute-0 nova_compute[257802]: 2025-10-02 12:18:00.484 2 DEBUG oslo_concurrency.lockutils [req-61e21559-3218-472a-84f4-4c5280c21bb2 req-7a3c7a3e-83a6-4dc9-95aa-0cb3fb038ae1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:00 compute-0 nova_compute[257802]: 2025-10-02 12:18:00.485 2 DEBUG nova.compute.manager [req-61e21559-3218-472a-84f4-4c5280c21bb2 req-7a3c7a3e-83a6-4dc9-95aa-0cb3fb038ae1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] No waiting events found dispatching network-vif-plugged-7eba4256-8a01-49a2-aec1-98d48eda4d3f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:18:00 compute-0 nova_compute[257802]: 2025-10-02 12:18:00.485 2 WARNING nova.compute.manager [req-61e21559-3218-472a-84f4-4c5280c21bb2 req-7a3c7a3e-83a6-4dc9-95aa-0cb3fb038ae1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received unexpected event network-vif-plugged-7eba4256-8a01-49a2-aec1-98d48eda4d3f for instance with vm_state active and task_state deleting.
Oct 02 12:18:00 compute-0 ceph-mon[73607]: pgmap v1630: 305 pgs: 305 active+clean; 246 MiB data, 749 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 12 KiB/s wr, 16 op/s
Oct 02 12:18:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 305 active+clean; 246 MiB data, 749 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 10 KiB/s wr, 15 op/s
Oct 02 12:18:01 compute-0 nova_compute[257802]: 2025-10-02 12:18:01.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:02.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:02 compute-0 nova_compute[257802]: 2025-10-02 12:18:02.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:18:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:02.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:18:02 compute-0 ceph-mon[73607]: pgmap v1631: 305 pgs: 305 active+clean; 246 MiB data, 749 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 10 KiB/s wr, 15 op/s
Oct 02 12:18:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 305 active+clean; 246 MiB data, 749 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 10 KiB/s wr, 15 op/s
Oct 02 12:18:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:03 compute-0 nova_compute[257802]: 2025-10-02 12:18:03.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:03 compute-0 nova_compute[257802]: 2025-10-02 12:18:03.983 2 DEBUG nova.compute.manager [req-27dafc52-ff8d-460e-b04c-a6963a6bb63c req-db499631-77fb-4df8-833b-4951728ca5ee d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-deleted-c23e360e-b27a-4c79-a3ce-5c8850ccc846 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:03 compute-0 nova_compute[257802]: 2025-10-02 12:18:03.984 2 INFO nova.compute.manager [req-27dafc52-ff8d-460e-b04c-a6963a6bb63c req-db499631-77fb-4df8-833b-4951728ca5ee d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Neutron deleted interface c23e360e-b27a-4c79-a3ce-5c8850ccc846; detaching it from the instance and deleting it from the info cache
Oct 02 12:18:03 compute-0 nova_compute[257802]: 2025-10-02 12:18:03.984 2 DEBUG nova.network.neutron [req-27dafc52-ff8d-460e-b04c-a6963a6bb63c req-db499631-77fb-4df8-833b-4951728ca5ee d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Updating instance_info_cache with network_info: [{"id": "1b5f1c6a-6b73-4b1d-8560-6a38d7e59732", "address": "fa:16:3e:c2:83:d4", "network": {"id": "18c9b866-0d06-4d2c-a073-d724f233a261", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-215265968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b5f1c6a-6b", "ovs_interfaceid": "1b5f1c6a-6b73-4b1d-8560-6a38d7e59732", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "address": "fa:16:3e:86:0a:06", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd56785b1-ae", "ovs_interfaceid": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "address": "fa:16:3e:3b:4c:02", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a1bf892-fd", "ovs_interfaceid": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "address": "fa:16:3e:11:f2:18", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeea85d66-40", "ovs_interfaceid": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "address": "fa:16:3e:ee:27:7c", "network": {"id": "8f6e5e22-d751-48cd-82a1-be33a11e92d9", "bridge": "br-int", "label": "tempest-device-tagging-net2-1945824075", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7eba4256-8a", "ovs_interfaceid": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "137dbaa1-1ef7-4aea-bc14-9f99b461690c", "address": "fa:16:3e:4f:43:a5", "network": {"id": "8f6e5e22-d751-48cd-82a1-be33a11e92d9", "bridge": "br-int", "label": "tempest-device-tagging-net2-1945824075", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap137dbaa1-1e", "ovs_interfaceid": "137dbaa1-1ef7-4aea-bc14-9f99b461690c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:18:04 compute-0 nova_compute[257802]: 2025-10-02 12:18:04.059 2 DEBUG nova.compute.manager [req-27dafc52-ff8d-460e-b04c-a6963a6bb63c req-db499631-77fb-4df8-833b-4951728ca5ee d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Detach interface failed, port_id=c23e360e-b27a-4c79-a3ce-5c8850ccc846, reason: Instance d7e14705-7aeb-440f-8e7e-926cc5b5ab6f could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:18:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:04.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:04.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:04 compute-0 ceph-mon[73607]: pgmap v1632: 305 pgs: 305 active+clean; 246 MiB data, 749 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 10 KiB/s wr, 15 op/s
Oct 02 12:18:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 246 MiB data, 749 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 8.7 KiB/s wr, 15 op/s
Oct 02 12:18:05 compute-0 sudo[303982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:18:05 compute-0 sudo[303982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:05 compute-0 sudo[303982]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:05 compute-0 sudo[304007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:18:05 compute-0 sudo[304007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:05 compute-0 sudo[304007]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:05 compute-0 sudo[304032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:18:05 compute-0 sudo[304032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:05 compute-0 sudo[304032]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:05 compute-0 sudo[304057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:18:05 compute-0 sudo[304057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:06 compute-0 nova_compute[257802]: 2025-10-02 12:18:06.114 2 DEBUG nova.compute.manager [req-1a7a26d7-0cd7-438d-a4bd-3b1cfccffe39 req-51abb65a-15e2-4ea4-83cb-a169f2f9ed96 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-deleted-137dbaa1-1ef7-4aea-bc14-9f99b461690c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:06 compute-0 nova_compute[257802]: 2025-10-02 12:18:06.114 2 INFO nova.compute.manager [req-1a7a26d7-0cd7-438d-a4bd-3b1cfccffe39 req-51abb65a-15e2-4ea4-83cb-a169f2f9ed96 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Neutron deleted interface 137dbaa1-1ef7-4aea-bc14-9f99b461690c; detaching it from the instance and deleting it from the info cache
Oct 02 12:18:06 compute-0 nova_compute[257802]: 2025-10-02 12:18:06.114 2 DEBUG nova.network.neutron [req-1a7a26d7-0cd7-438d-a4bd-3b1cfccffe39 req-51abb65a-15e2-4ea4-83cb-a169f2f9ed96 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Updating instance_info_cache with network_info: [{"id": "1b5f1c6a-6b73-4b1d-8560-6a38d7e59732", "address": "fa:16:3e:c2:83:d4", "network": {"id": "18c9b866-0d06-4d2c-a073-d724f233a261", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-215265968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b5f1c6a-6b", "ovs_interfaceid": "1b5f1c6a-6b73-4b1d-8560-6a38d7e59732", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "address": "fa:16:3e:86:0a:06", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd56785b1-ae", "ovs_interfaceid": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "address": "fa:16:3e:3b:4c:02", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a1bf892-fd", "ovs_interfaceid": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "address": "fa:16:3e:11:f2:18", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeea85d66-40", "ovs_interfaceid": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "address": "fa:16:3e:ee:27:7c", "network": {"id": "8f6e5e22-d751-48cd-82a1-be33a11e92d9", "bridge": "br-int", "label": "tempest-device-tagging-net2-1945824075", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7eba4256-8a", "ovs_interfaceid": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:18:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:18:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:06.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:18:06 compute-0 nova_compute[257802]: 2025-10-02 12:18:06.223 2 DEBUG nova.compute.manager [req-1a7a26d7-0cd7-438d-a4bd-3b1cfccffe39 req-51abb65a-15e2-4ea4-83cb-a169f2f9ed96 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Detach interface failed, port_id=137dbaa1-1ef7-4aea-bc14-9f99b461690c, reason: Instance d7e14705-7aeb-440f-8e7e-926cc5b5ab6f could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:18:06 compute-0 sudo[304057]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:06.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:18:06 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:18:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:18:06 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:18:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:18:06 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:18:06 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 2916ee6c-d2b1-4d2e-9942-e9f67539bc12 does not exist
Oct 02 12:18:06 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 6c285063-160d-4d7f-8b9a-4551929e8e0d does not exist
Oct 02 12:18:06 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev f97d7e9a-356c-42d9-990c-6a861547717d does not exist
Oct 02 12:18:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:18:06 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:18:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:18:06 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:18:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:18:06 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:18:06 compute-0 sudo[304115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:18:06 compute-0 sudo[304115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:06 compute-0 sudo[304115]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:06 compute-0 nova_compute[257802]: 2025-10-02 12:18:06.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:06 compute-0 sudo[304140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:18:06 compute-0 sudo[304140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:06 compute-0 sudo[304140]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:06 compute-0 ceph-mon[73607]: pgmap v1633: 305 pgs: 305 active+clean; 246 MiB data, 749 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 8.7 KiB/s wr, 15 op/s
Oct 02 12:18:06 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:18:06 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:18:06 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:18:06 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:18:06 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:18:06 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:18:07 compute-0 sudo[304165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:18:07 compute-0 sudo[304165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:07 compute-0 sudo[304165]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:07 compute-0 sudo[304190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:18:07 compute-0 sudo[304190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1634: 305 pgs: 305 active+clean; 246 MiB data, 749 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 8.7 KiB/s wr, 14 op/s
Oct 02 12:18:07 compute-0 podman[304257]: 2025-10-02 12:18:07.487187087 +0000 UTC m=+0.022828781 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:18:07 compute-0 podman[304257]: 2025-10-02 12:18:07.631710887 +0000 UTC m=+0.167352591 container create aaab65b9aebc285e9517ff6389714605b6331cd995aa7ae01b9717067b786346 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_pike, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 12:18:07 compute-0 systemd[1]: Started libpod-conmon-aaab65b9aebc285e9517ff6389714605b6331cd995aa7ae01b9717067b786346.scope.
Oct 02 12:18:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:18:07 compute-0 podman[304257]: 2025-10-02 12:18:07.939741162 +0000 UTC m=+0.475382926 container init aaab65b9aebc285e9517ff6389714605b6331cd995aa7ae01b9717067b786346 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 12:18:07 compute-0 podman[304257]: 2025-10-02 12:18:07.948742392 +0000 UTC m=+0.484384076 container start aaab65b9aebc285e9517ff6389714605b6331cd995aa7ae01b9717067b786346 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_pike, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 12:18:07 compute-0 musing_pike[304273]: 167 167
Oct 02 12:18:07 compute-0 systemd[1]: libpod-aaab65b9aebc285e9517ff6389714605b6331cd995aa7ae01b9717067b786346.scope: Deactivated successfully.
Oct 02 12:18:07 compute-0 conmon[304273]: conmon aaab65b9aebc285e9517 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aaab65b9aebc285e9517ff6389714605b6331cd995aa7ae01b9717067b786346.scope/container/memory.events
Oct 02 12:18:08 compute-0 podman[304257]: 2025-10-02 12:18:08.036210075 +0000 UTC m=+0.571851809 container attach aaab65b9aebc285e9517ff6389714605b6331cd995aa7ae01b9717067b786346 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_pike, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:18:08 compute-0 podman[304257]: 2025-10-02 12:18:08.037501657 +0000 UTC m=+0.573143341 container died aaab65b9aebc285e9517ff6389714605b6331cd995aa7ae01b9717067b786346 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 12:18:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-7656dbafd5751af6dfe2bbeaabd9a410fe4eead5f83950a4eba8a69f4913659b-merged.mount: Deactivated successfully.
Oct 02 12:18:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:08.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:08 compute-0 podman[304257]: 2025-10-02 12:18:08.289197221 +0000 UTC m=+0.824838905 container remove aaab65b9aebc285e9517ff6389714605b6331cd995aa7ae01b9717067b786346 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_pike, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:18:08 compute-0 systemd[1]: libpod-conmon-aaab65b9aebc285e9517ff6389714605b6331cd995aa7ae01b9717067b786346.scope: Deactivated successfully.
Oct 02 12:18:08 compute-0 podman[304301]: 2025-10-02 12:18:08.477567646 +0000 UTC m=+0.048792096 container create a827c6a6e3c6da4186171cf4975fa15d8185f9176053724a61b63d5e6978a579 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_easley, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:18:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:08.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:08 compute-0 nova_compute[257802]: 2025-10-02 12:18:08.510 2 DEBUG nova.compute.manager [req-40283d59-7bad-40e7-bdc7-2ce1a880b768 req-bda74a36-f5fa-4bef-8423-53780b006c47 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-deleted-1b5f1c6a-6b73-4b1d-8560-6a38d7e59732 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:08 compute-0 nova_compute[257802]: 2025-10-02 12:18:08.511 2 INFO nova.compute.manager [req-40283d59-7bad-40e7-bdc7-2ce1a880b768 req-bda74a36-f5fa-4bef-8423-53780b006c47 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Neutron deleted interface 1b5f1c6a-6b73-4b1d-8560-6a38d7e59732; detaching it from the instance and deleting it from the info cache
Oct 02 12:18:08 compute-0 nova_compute[257802]: 2025-10-02 12:18:08.511 2 DEBUG nova.network.neutron [req-40283d59-7bad-40e7-bdc7-2ce1a880b768 req-bda74a36-f5fa-4bef-8423-53780b006c47 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Updating instance_info_cache with network_info: [{"id": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "address": "fa:16:3e:86:0a:06", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd56785b1-ae", "ovs_interfaceid": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "address": "fa:16:3e:3b:4c:02", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a1bf892-fd", "ovs_interfaceid": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "address": "fa:16:3e:11:f2:18", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeea85d66-40", "ovs_interfaceid": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "address": "fa:16:3e:ee:27:7c", "network": {"id": "8f6e5e22-d751-48cd-82a1-be33a11e92d9", "bridge": "br-int", "label": "tempest-device-tagging-net2-1945824075", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7eba4256-8a", "ovs_interfaceid": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:18:08 compute-0 systemd[1]: Started libpod-conmon-a827c6a6e3c6da4186171cf4975fa15d8185f9176053724a61b63d5e6978a579.scope.
Oct 02 12:18:08 compute-0 podman[304301]: 2025-10-02 12:18:08.451296592 +0000 UTC m=+0.022521062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:18:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4253544c249169ec233acce66796c978dfc2cdcfdc8660d26ce12a5481d9a5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4253544c249169ec233acce66796c978dfc2cdcfdc8660d26ce12a5481d9a5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4253544c249169ec233acce66796c978dfc2cdcfdc8660d26ce12a5481d9a5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4253544c249169ec233acce66796c978dfc2cdcfdc8660d26ce12a5481d9a5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4253544c249169ec233acce66796c978dfc2cdcfdc8660d26ce12a5481d9a5b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:18:08 compute-0 nova_compute[257802]: 2025-10-02 12:18:08.584 2 DEBUG nova.compute.manager [req-40283d59-7bad-40e7-bdc7-2ce1a880b768 req-bda74a36-f5fa-4bef-8423-53780b006c47 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Detach interface failed, port_id=1b5f1c6a-6b73-4b1d-8560-6a38d7e59732, reason: Instance d7e14705-7aeb-440f-8e7e-926cc5b5ab6f could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:18:08 compute-0 nova_compute[257802]: 2025-10-02 12:18:08.585 2 DEBUG nova.compute.manager [req-40283d59-7bad-40e7-bdc7-2ce1a880b768 req-bda74a36-f5fa-4bef-8423-53780b006c47 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-deleted-7eba4256-8a01-49a2-aec1-98d48eda4d3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:08 compute-0 nova_compute[257802]: 2025-10-02 12:18:08.585 2 INFO nova.compute.manager [req-40283d59-7bad-40e7-bdc7-2ce1a880b768 req-bda74a36-f5fa-4bef-8423-53780b006c47 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Neutron deleted interface 7eba4256-8a01-49a2-aec1-98d48eda4d3f; detaching it from the instance and deleting it from the info cache
Oct 02 12:18:08 compute-0 nova_compute[257802]: 2025-10-02 12:18:08.585 2 DEBUG nova.network.neutron [req-40283d59-7bad-40e7-bdc7-2ce1a880b768 req-bda74a36-f5fa-4bef-8423-53780b006c47 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Updating instance_info_cache with network_info: [{"id": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "address": "fa:16:3e:86:0a:06", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd56785b1-ae", "ovs_interfaceid": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "address": "fa:16:3e:3b:4c:02", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a1bf892-fd", "ovs_interfaceid": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "address": "fa:16:3e:11:f2:18", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeea85d66-40", "ovs_interfaceid": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:18:08 compute-0 nova_compute[257802]: 2025-10-02 12:18:08.598 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407473.5980191, d7e14705-7aeb-440f-8e7e-926cc5b5ab6f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:18:08 compute-0 nova_compute[257802]: 2025-10-02 12:18:08.599 2 INFO nova.compute.manager [-] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] VM Stopped (Lifecycle Event)
Oct 02 12:18:08 compute-0 nova_compute[257802]: 2025-10-02 12:18:08.628 2 DEBUG nova.compute.manager [req-40283d59-7bad-40e7-bdc7-2ce1a880b768 req-bda74a36-f5fa-4bef-8423-53780b006c47 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Detach interface failed, port_id=7eba4256-8a01-49a2-aec1-98d48eda4d3f, reason: Instance d7e14705-7aeb-440f-8e7e-926cc5b5ab6f could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:18:08 compute-0 nova_compute[257802]: 2025-10-02 12:18:08.629 2 DEBUG nova.compute.manager [None req-8d444097-f939-4765-ba4a-70efdd29f426 - - - - - -] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:18:08 compute-0 podman[304301]: 2025-10-02 12:18:08.698724733 +0000 UTC m=+0.269949213 container init a827c6a6e3c6da4186171cf4975fa15d8185f9176053724a61b63d5e6978a579 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_easley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct 02 12:18:08 compute-0 podman[304301]: 2025-10-02 12:18:08.712955102 +0000 UTC m=+0.284179562 container start a827c6a6e3c6da4186171cf4975fa15d8185f9176053724a61b63d5e6978a579 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Oct 02 12:18:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:08 compute-0 podman[304301]: 2025-10-02 12:18:08.84067705 +0000 UTC m=+0.411901590 container attach a827c6a6e3c6da4186171cf4975fa15d8185f9176053724a61b63d5e6978a579 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:18:08 compute-0 nova_compute[257802]: 2025-10-02 12:18:08.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:09 compute-0 ceph-mon[73607]: pgmap v1634: 305 pgs: 305 active+clean; 246 MiB data, 749 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 8.7 KiB/s wr, 14 op/s
Oct 02 12:18:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 305 active+clean; 246 MiB data, 749 MiB used, 20 GiB / 21 GiB avail; 8.5 KiB/s rd, 8.4 KiB/s wr, 12 op/s
Oct 02 12:18:09 compute-0 recursing_easley[304318]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:18:09 compute-0 recursing_easley[304318]: --> relative data size: 1.0
Oct 02 12:18:09 compute-0 recursing_easley[304318]: --> All data devices are unavailable
Oct 02 12:18:09 compute-0 systemd[1]: libpod-a827c6a6e3c6da4186171cf4975fa15d8185f9176053724a61b63d5e6978a579.scope: Deactivated successfully.
Oct 02 12:18:09 compute-0 podman[304301]: 2025-10-02 12:18:09.549476312 +0000 UTC m=+1.120700762 container died a827c6a6e3c6da4186171cf4975fa15d8185f9176053724a61b63d5e6978a579 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_easley, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 12:18:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4253544c249169ec233acce66796c978dfc2cdcfdc8660d26ce12a5481d9a5b-merged.mount: Deactivated successfully.
Oct 02 12:18:10 compute-0 podman[304301]: 2025-10-02 12:18:10.023032203 +0000 UTC m=+1.594256653 container remove a827c6a6e3c6da4186171cf4975fa15d8185f9176053724a61b63d5e6978a579 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:18:10 compute-0 systemd[1]: libpod-conmon-a827c6a6e3c6da4186171cf4975fa15d8185f9176053724a61b63d5e6978a579.scope: Deactivated successfully.
Oct 02 12:18:10 compute-0 sudo[304190]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:10 compute-0 sudo[304346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:18:10 compute-0 sudo[304346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:10 compute-0 sudo[304346]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:18:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:10.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:18:10 compute-0 sudo[304371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:18:10 compute-0 sudo[304371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:10 compute-0 sudo[304371]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:10 compute-0 sudo[304396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:18:10 compute-0 sudo[304396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:10 compute-0 sudo[304396]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:10 compute-0 sudo[304421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:18:10 compute-0 sudo[304421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:18:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:10.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:18:10 compute-0 podman[304487]: 2025-10-02 12:18:10.718140209 +0000 UTC m=+0.063101046 container create 3130bb04f711cfb020d941d6abc961fc38b4ba942f055e2eeffa029daa915532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 12:18:10 compute-0 systemd[1]: Started libpod-conmon-3130bb04f711cfb020d941d6abc961fc38b4ba942f055e2eeffa029daa915532.scope.
Oct 02 12:18:10 compute-0 podman[304487]: 2025-10-02 12:18:10.681561063 +0000 UTC m=+0.026521930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:18:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:18:10 compute-0 podman[304487]: 2025-10-02 12:18:10.820511417 +0000 UTC m=+0.165472284 container init 3130bb04f711cfb020d941d6abc961fc38b4ba942f055e2eeffa029daa915532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 12:18:10 compute-0 podman[304487]: 2025-10-02 12:18:10.827326394 +0000 UTC m=+0.172287231 container start 3130bb04f711cfb020d941d6abc961fc38b4ba942f055e2eeffa029daa915532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_diffie, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 12:18:10 compute-0 zealous_diffie[304503]: 167 167
Oct 02 12:18:10 compute-0 systemd[1]: libpod-3130bb04f711cfb020d941d6abc961fc38b4ba942f055e2eeffa029daa915532.scope: Deactivated successfully.
Oct 02 12:18:10 compute-0 podman[304487]: 2025-10-02 12:18:10.846676238 +0000 UTC m=+0.191637095 container attach 3130bb04f711cfb020d941d6abc961fc38b4ba942f055e2eeffa029daa915532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 12:18:10 compute-0 podman[304487]: 2025-10-02 12:18:10.847076417 +0000 UTC m=+0.192037254 container died 3130bb04f711cfb020d941d6abc961fc38b4ba942f055e2eeffa029daa915532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 12:18:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-75f0a23cae9af0af19dd893572f29cb7c1664f5cea76e1600916c500c7bb9b17-merged.mount: Deactivated successfully.
Oct 02 12:18:10 compute-0 podman[304487]: 2025-10-02 12:18:10.888938503 +0000 UTC m=+0.233899340 container remove 3130bb04f711cfb020d941d6abc961fc38b4ba942f055e2eeffa029daa915532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 12:18:10 compute-0 systemd[1]: libpod-conmon-3130bb04f711cfb020d941d6abc961fc38b4ba942f055e2eeffa029daa915532.scope: Deactivated successfully.
Oct 02 12:18:11 compute-0 podman[304528]: 2025-10-02 12:18:11.045091068 +0000 UTC m=+0.044682135 container create f31b6d2fc2d5b9d6e3942e744ab71bac66b0c4d40629995041338cc553db4a7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_booth, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:18:11 compute-0 systemd[1]: Started libpod-conmon-f31b6d2fc2d5b9d6e3942e744ab71bac66b0c4d40629995041338cc553db4a7c.scope.
Oct 02 12:18:11 compute-0 podman[304528]: 2025-10-02 12:18:11.023999222 +0000 UTC m=+0.023590319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:18:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:18:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88127f0433b57f871610ef2ac9cc0183329d0410036faee9c7232cb403786d8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:18:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88127f0433b57f871610ef2ac9cc0183329d0410036faee9c7232cb403786d8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:18:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88127f0433b57f871610ef2ac9cc0183329d0410036faee9c7232cb403786d8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:18:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88127f0433b57f871610ef2ac9cc0183329d0410036faee9c7232cb403786d8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:18:11 compute-0 podman[304528]: 2025-10-02 12:18:11.141371586 +0000 UTC m=+0.140962643 container init f31b6d2fc2d5b9d6e3942e744ab71bac66b0c4d40629995041338cc553db4a7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_booth, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:18:11 compute-0 ceph-mon[73607]: pgmap v1635: 305 pgs: 305 active+clean; 246 MiB data, 749 MiB used, 20 GiB / 21 GiB avail; 8.5 KiB/s rd, 8.4 KiB/s wr, 12 op/s
Oct 02 12:18:11 compute-0 podman[304528]: 2025-10-02 12:18:11.15048632 +0000 UTC m=+0.150077347 container start f31b6d2fc2d5b9d6e3942e744ab71bac66b0c4d40629995041338cc553db4a7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_booth, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 12:18:11 compute-0 podman[304528]: 2025-10-02 12:18:11.155595775 +0000 UTC m=+0.155186832 container attach f31b6d2fc2d5b9d6e3942e744ab71bac66b0c4d40629995041338cc553db4a7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_booth, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:18:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1636: 305 pgs: 305 active+clean; 246 MiB data, 749 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Oct 02 12:18:11 compute-0 happy_booth[304544]: {
Oct 02 12:18:11 compute-0 happy_booth[304544]:     "1": [
Oct 02 12:18:11 compute-0 happy_booth[304544]:         {
Oct 02 12:18:11 compute-0 happy_booth[304544]:             "devices": [
Oct 02 12:18:11 compute-0 happy_booth[304544]:                 "/dev/loop3"
Oct 02 12:18:11 compute-0 happy_booth[304544]:             ],
Oct 02 12:18:11 compute-0 happy_booth[304544]:             "lv_name": "ceph_lv0",
Oct 02 12:18:11 compute-0 happy_booth[304544]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:18:11 compute-0 happy_booth[304544]:             "lv_size": "7511998464",
Oct 02 12:18:11 compute-0 happy_booth[304544]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:18:11 compute-0 happy_booth[304544]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:18:11 compute-0 happy_booth[304544]:             "name": "ceph_lv0",
Oct 02 12:18:11 compute-0 happy_booth[304544]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:18:11 compute-0 happy_booth[304544]:             "tags": {
Oct 02 12:18:11 compute-0 happy_booth[304544]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:18:11 compute-0 happy_booth[304544]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:18:11 compute-0 happy_booth[304544]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:18:11 compute-0 happy_booth[304544]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:18:11 compute-0 happy_booth[304544]:                 "ceph.cluster_name": "ceph",
Oct 02 12:18:11 compute-0 happy_booth[304544]:                 "ceph.crush_device_class": "",
Oct 02 12:18:11 compute-0 happy_booth[304544]:                 "ceph.encrypted": "0",
Oct 02 12:18:11 compute-0 happy_booth[304544]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:18:11 compute-0 happy_booth[304544]:                 "ceph.osd_id": "1",
Oct 02 12:18:11 compute-0 happy_booth[304544]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:18:11 compute-0 happy_booth[304544]:                 "ceph.type": "block",
Oct 02 12:18:11 compute-0 happy_booth[304544]:                 "ceph.vdo": "0"
Oct 02 12:18:11 compute-0 happy_booth[304544]:             },
Oct 02 12:18:11 compute-0 happy_booth[304544]:             "type": "block",
Oct 02 12:18:11 compute-0 happy_booth[304544]:             "vg_name": "ceph_vg0"
Oct 02 12:18:11 compute-0 happy_booth[304544]:         }
Oct 02 12:18:11 compute-0 happy_booth[304544]:     ]
Oct 02 12:18:11 compute-0 happy_booth[304544]: }
Oct 02 12:18:11 compute-0 nova_compute[257802]: 2025-10-02 12:18:11.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:11 compute-0 systemd[1]: libpod-f31b6d2fc2d5b9d6e3942e744ab71bac66b0c4d40629995041338cc553db4a7c.scope: Deactivated successfully.
Oct 02 12:18:11 compute-0 podman[304528]: 2025-10-02 12:18:11.935451668 +0000 UTC m=+0.935042725 container died f31b6d2fc2d5b9d6e3942e744ab71bac66b0c4d40629995041338cc553db4a7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_booth, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:18:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-88127f0433b57f871610ef2ac9cc0183329d0410036faee9c7232cb403786d8b-merged.mount: Deactivated successfully.
Oct 02 12:18:12 compute-0 podman[304528]: 2025-10-02 12:18:12.022939511 +0000 UTC m=+1.022530578 container remove f31b6d2fc2d5b9d6e3942e744ab71bac66b0c4d40629995041338cc553db4a7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_booth, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 12:18:12 compute-0 systemd[1]: libpod-conmon-f31b6d2fc2d5b9d6e3942e744ab71bac66b0c4d40629995041338cc553db4a7c.scope: Deactivated successfully.
Oct 02 12:18:12 compute-0 sudo[304421]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:12 compute-0 sudo[304567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:18:12 compute-0 sudo[304567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:12 compute-0 sudo[304567]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:12.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:12 compute-0 sudo[304592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:18:12 compute-0 sudo[304592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:12 compute-0 sudo[304592]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:12 compute-0 sudo[304617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:18:12 compute-0 sudo[304617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:12 compute-0 sudo[304617]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:12 compute-0 nova_compute[257802]: 2025-10-02 12:18:12.438 2 DEBUG nova.network.neutron [-] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:18:12 compute-0 sudo[304642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:18:12 compute-0 sudo[304642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:12 compute-0 nova_compute[257802]: 2025-10-02 12:18:12.458 2 INFO nova.compute.manager [-] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Took 18.13 seconds to deallocate network for instance.
Oct 02 12:18:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:12.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:12 compute-0 nova_compute[257802]: 2025-10-02 12:18:12.546 2 DEBUG nova.compute.manager [req-f4bc9479-7423-4d2c-9261-a78d1e2986df req-f879a6e1-bebf-4781-b975-5d93cd3ee509 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Received event network-vif-deleted-eea85d66-4089-4708-9cb0-a7d2ab18e6ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:12 compute-0 nova_compute[257802]: 2025-10-02 12:18:12.650 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Updating instance_info_cache with network_info: [{"id": "1b5f1c6a-6b73-4b1d-8560-6a38d7e59732", "address": "fa:16:3e:c2:83:d4", "network": {"id": "18c9b866-0d06-4d2c-a073-d724f233a261", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-215265968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b5f1c6a-6b", "ovs_interfaceid": "1b5f1c6a-6b73-4b1d-8560-6a38d7e59732", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "address": "fa:16:3e:86:0a:06", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd56785b1-ae", "ovs_interfaceid": "d56785b1-aeee-49be-af2d-71e8fc0a6c9b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "address": "fa:16:3e:3b:4c:02", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a1bf892-fd", "ovs_interfaceid": "4a1bf892-fd3f-40df-95f2-779ffc41ade1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "c23e360e-b27a-4c79-a3ce-5c8850ccc846", "address": "fa:16:3e:1d:5e:56", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.222", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc23e360e-b2", "ovs_interfaceid": "c23e360e-b27a-4c79-a3ce-5c8850ccc846", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "address": "fa:16:3e:11:f2:18", "network": {"id": "12d2f684-17ee-45ff-8e0b-9e61463b4f7c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1285182711", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeea85d66-40", "ovs_interfaceid": "eea85d66-4089-4708-9cb0-a7d2ab18e6ec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "address": "fa:16:3e:ee:27:7c", "network": {"id": "8f6e5e22-d751-48cd-82a1-be33a11e92d9", "bridge": "br-int", "label": "tempest-device-tagging-net2-1945824075", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7eba4256-8a", "ovs_interfaceid": "7eba4256-8a01-49a2-aec1-98d48eda4d3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "137dbaa1-1ef7-4aea-bc14-9f99b461690c", "address": "fa:16:3e:4f:43:a5", "network": {"id": "8f6e5e22-d751-48cd-82a1-be33a11e92d9", "bridge": "br-int", "label": "tempest-device-tagging-net2-1945824075", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9d3eca266284ae9950c491e566b2523", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap137dbaa1-1e", "ovs_interfaceid": "137dbaa1-1ef7-4aea-bc14-9f99b461690c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:18:12 compute-0 nova_compute[257802]: 2025-10-02 12:18:12.680 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:18:12 compute-0 nova_compute[257802]: 2025-10-02 12:18:12.680 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:18:12 compute-0 nova_compute[257802]: 2025-10-02 12:18:12.680 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:12 compute-0 nova_compute[257802]: 2025-10-02 12:18:12.680 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:18:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:18:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:18:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:18:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:18:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:18:12 compute-0 nova_compute[257802]: 2025-10-02 12:18:12.711 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:12 compute-0 nova_compute[257802]: 2025-10-02 12:18:12.711 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:12 compute-0 nova_compute[257802]: 2025-10-02 12:18:12.711 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:12 compute-0 nova_compute[257802]: 2025-10-02 12:18:12.711 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:18:12 compute-0 nova_compute[257802]: 2025-10-02 12:18:12.712 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:12 compute-0 podman[304709]: 2025-10-02 12:18:12.890708957 +0000 UTC m=+0.058612976 container create eaf5650e0553ee63ac274c17d924b75c6d0e8a5660aca73db287ac02beafa3d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_khayyam, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:18:12 compute-0 systemd[1]: Started libpod-conmon-eaf5650e0553ee63ac274c17d924b75c6d0e8a5660aca73db287ac02beafa3d6.scope.
Oct 02 12:18:12 compute-0 podman[304709]: 2025-10-02 12:18:12.863338386 +0000 UTC m=+0.031242425 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:18:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:18:13 compute-0 podman[304709]: 2025-10-02 12:18:13.026470703 +0000 UTC m=+0.194374782 container init eaf5650e0553ee63ac274c17d924b75c6d0e8a5660aca73db287ac02beafa3d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_khayyam, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:18:13 compute-0 podman[304709]: 2025-10-02 12:18:13.035415772 +0000 UTC m=+0.203319831 container start eaf5650e0553ee63ac274c17d924b75c6d0e8a5660aca73db287ac02beafa3d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:18:13 compute-0 mystifying_khayyam[304742]: 167 167
Oct 02 12:18:13 compute-0 podman[304709]: 2025-10-02 12:18:13.048356318 +0000 UTC m=+0.216260347 container attach eaf5650e0553ee63ac274c17d924b75c6d0e8a5660aca73db287ac02beafa3d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 12:18:13 compute-0 systemd[1]: libpod-eaf5650e0553ee63ac274c17d924b75c6d0e8a5660aca73db287ac02beafa3d6.scope: Deactivated successfully.
Oct 02 12:18:13 compute-0 podman[304709]: 2025-10-02 12:18:13.063760196 +0000 UTC m=+0.231664215 container died eaf5650e0553ee63ac274c17d924b75c6d0e8a5660aca73db287ac02beafa3d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_khayyam, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:18:13 compute-0 nova_compute[257802]: 2025-10-02 12:18:13.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c562a222064b9333737efde3fede21cb216e3dd9e44343b6b4fd28581d9fb14-merged.mount: Deactivated successfully.
Oct 02 12:18:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:18:13 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2704865594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:13 compute-0 podman[304709]: 2025-10-02 12:18:13.162270789 +0000 UTC m=+0.330174808 container remove eaf5650e0553ee63ac274c17d924b75c6d0e8a5660aca73db287ac02beafa3d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_khayyam, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:18:13 compute-0 nova_compute[257802]: 2025-10-02 12:18:13.171 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:13 compute-0 ceph-mon[73607]: pgmap v1636: 305 pgs: 305 active+clean; 246 MiB data, 749 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Oct 02 12:18:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2704865594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:13 compute-0 systemd[1]: libpod-conmon-eaf5650e0553ee63ac274c17d924b75c6d0e8a5660aca73db287ac02beafa3d6.scope: Deactivated successfully.
Oct 02 12:18:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 305 active+clean; 246 MiB data, 749 MiB used, 20 GiB / 21 GiB avail
Oct 02 12:18:13 compute-0 nova_compute[257802]: 2025-10-02 12:18:13.379 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:18:13 compute-0 nova_compute[257802]: 2025-10-02 12:18:13.380 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4595MB free_disk=20.94263458251953GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:18:13 compute-0 nova_compute[257802]: 2025-10-02 12:18:13.380 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:13 compute-0 nova_compute[257802]: 2025-10-02 12:18:13.380 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:13 compute-0 podman[304773]: 2025-10-02 12:18:13.405031276 +0000 UTC m=+0.116415863 container create d274c4791b01d56859519db16387729c753e954b0a373171b753414006ba053d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 12:18:13 compute-0 podman[304773]: 2025-10-02 12:18:13.321060249 +0000 UTC m=+0.032444856 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:18:13 compute-0 nova_compute[257802]: 2025-10-02 12:18:13.470 2 INFO nova.compute.manager [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] [instance: d7e14705-7aeb-440f-8e7e-926cc5b5ab6f] Took 1.01 seconds to detach 3 volumes for instance.
Oct 02 12:18:13 compute-0 nova_compute[257802]: 2025-10-02 12:18:13.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:13 compute-0 nova_compute[257802]: 2025-10-02 12:18:13.500 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance d7e14705-7aeb-440f-8e7e-926cc5b5ab6f actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:18:13 compute-0 nova_compute[257802]: 2025-10-02 12:18:13.500 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:18:13 compute-0 nova_compute[257802]: 2025-10-02 12:18:13.500 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:18:13 compute-0 systemd[1]: Started libpod-conmon-d274c4791b01d56859519db16387729c753e954b0a373171b753414006ba053d.scope.
Oct 02 12:18:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:18:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b7fd9cdba2c6e3a6d5945df3987a5c08ef678d4807d77b9e3ce111cb5e1f8a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:18:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b7fd9cdba2c6e3a6d5945df3987a5c08ef678d4807d77b9e3ce111cb5e1f8a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:18:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b7fd9cdba2c6e3a6d5945df3987a5c08ef678d4807d77b9e3ce111cb5e1f8a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:18:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b7fd9cdba2c6e3a6d5945df3987a5c08ef678d4807d77b9e3ce111cb5e1f8a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:18:13 compute-0 nova_compute[257802]: 2025-10-02 12:18:13.592 2 DEBUG oslo_concurrency.lockutils [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:13 compute-0 nova_compute[257802]: 2025-10-02 12:18:13.595 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:13 compute-0 podman[304773]: 2025-10-02 12:18:13.628312445 +0000 UTC m=+0.339697072 container init d274c4791b01d56859519db16387729c753e954b0a373171b753414006ba053d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:18:13 compute-0 podman[304773]: 2025-10-02 12:18:13.63708573 +0000 UTC m=+0.348470367 container start d274c4791b01d56859519db16387729c753e954b0a373171b753414006ba053d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_noether, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:18:13 compute-0 podman[304773]: 2025-10-02 12:18:13.732329773 +0000 UTC m=+0.443714390 container attach d274c4791b01d56859519db16387729c753e954b0a373171b753414006ba053d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_noether, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:18:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:13 compute-0 nova_compute[257802]: 2025-10-02 12:18:13.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:18:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1403190340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:14 compute-0 nova_compute[257802]: 2025-10-02 12:18:14.113 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:14 compute-0 nova_compute[257802]: 2025-10-02 12:18:14.119 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:18:14 compute-0 nova_compute[257802]: 2025-10-02 12:18:14.158 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:18:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:18:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:14.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:18:14 compute-0 ceph-mon[73607]: pgmap v1637: 305 pgs: 305 active+clean; 246 MiB data, 749 MiB used, 20 GiB / 21 GiB avail
Oct 02 12:18:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1403190340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:14 compute-0 nova_compute[257802]: 2025-10-02 12:18:14.226 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:18:14 compute-0 nova_compute[257802]: 2025-10-02 12:18:14.227 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:14 compute-0 nova_compute[257802]: 2025-10-02 12:18:14.227 2 DEBUG oslo_concurrency.lockutils [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:14 compute-0 nova_compute[257802]: 2025-10-02 12:18:14.286 2 DEBUG oslo_concurrency.processutils [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:14 compute-0 friendly_noether[304790]: {
Oct 02 12:18:14 compute-0 friendly_noether[304790]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:18:14 compute-0 friendly_noether[304790]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:18:14 compute-0 friendly_noether[304790]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:18:14 compute-0 friendly_noether[304790]:         "osd_id": 1,
Oct 02 12:18:14 compute-0 friendly_noether[304790]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:18:14 compute-0 friendly_noether[304790]:         "type": "bluestore"
Oct 02 12:18:14 compute-0 friendly_noether[304790]:     }
Oct 02 12:18:14 compute-0 friendly_noether[304790]: }
Oct 02 12:18:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:14.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:14 compute-0 systemd[1]: libpod-d274c4791b01d56859519db16387729c753e954b0a373171b753414006ba053d.scope: Deactivated successfully.
Oct 02 12:18:14 compute-0 podman[304773]: 2025-10-02 12:18:14.513877917 +0000 UTC m=+1.225262504 container died d274c4791b01d56859519db16387729c753e954b0a373171b753414006ba053d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_noether, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 12:18:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b7fd9cdba2c6e3a6d5945df3987a5c08ef678d4807d77b9e3ce111cb5e1f8a2-merged.mount: Deactivated successfully.
Oct 02 12:18:14 compute-0 podman[304773]: 2025-10-02 12:18:14.614513342 +0000 UTC m=+1.325897929 container remove d274c4791b01d56859519db16387729c753e954b0a373171b753414006ba053d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_noether, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 12:18:14 compute-0 systemd[1]: libpod-conmon-d274c4791b01d56859519db16387729c753e954b0a373171b753414006ba053d.scope: Deactivated successfully.
Oct 02 12:18:14 compute-0 sudo[304642]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:18:14 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:18:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:18:14 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:18:14 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 26136e1b-2ee6-44f4-ab43-47bb983d15b5 does not exist
Oct 02 12:18:14 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 0abd4617-dad3-4e4d-a1c9-96ae2f10fe42 does not exist
Oct 02 12:18:14 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 1626a48e-ef28-49c7-a669-8aed9ddceb24 does not exist
Oct 02 12:18:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:18:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2074671851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:14 compute-0 nova_compute[257802]: 2025-10-02 12:18:14.718 2 DEBUG oslo_concurrency.processutils [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:14 compute-0 nova_compute[257802]: 2025-10-02 12:18:14.725 2 DEBUG nova.compute.provider_tree [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:18:14 compute-0 sudo[304867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:18:14 compute-0 sudo[304867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:14 compute-0 sudo[304867]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:14 compute-0 nova_compute[257802]: 2025-10-02 12:18:14.753 2 DEBUG nova.scheduler.client.report [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:18:14 compute-0 nova_compute[257802]: 2025-10-02 12:18:14.786 2 DEBUG oslo_concurrency.lockutils [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:14 compute-0 sudo[304894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:18:14 compute-0 sudo[304894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:14 compute-0 sudo[304894]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:14 compute-0 nova_compute[257802]: 2025-10-02 12:18:14.852 2 INFO nova.scheduler.client.report [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Deleted allocations for instance d7e14705-7aeb-440f-8e7e-926cc5b5ab6f
Oct 02 12:18:14 compute-0 nova_compute[257802]: 2025-10-02 12:18:14.975 2 DEBUG oslo_concurrency.lockutils [None req-bfe3e00d-1b79-4245-9fd3-d7331da2a16a d4b073f3365d481cabadfd39389c66ba a9d3eca266284ae9950c491e566b2523 - - default default] Lock "d7e14705-7aeb-440f-8e7e-926cc5b5ab6f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 22.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 305 active+clean; 246 MiB data, 749 MiB used, 20 GiB / 21 GiB avail
Oct 02 12:18:15 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:18:15 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:18:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2074671851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:18:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:16.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:18:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:16.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:16 compute-0 ceph-mon[73607]: pgmap v1638: 305 pgs: 305 active+clean; 246 MiB data, 749 MiB used, 20 GiB / 21 GiB avail
Oct 02 12:18:16 compute-0 nova_compute[257802]: 2025-10-02 12:18:16.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 246 MiB data, 749 MiB used, 20 GiB / 21 GiB avail
Oct 02 12:18:18 compute-0 sudo[304921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:18:18 compute-0 sudo[304921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:18 compute-0 sudo[304921]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:18 compute-0 sudo[304946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:18:18 compute-0 sudo[304946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:18 compute-0 sudo[304946]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:18.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:18.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:18 compute-0 ceph-mon[73607]: pgmap v1639: 305 pgs: 305 active+clean; 246 MiB data, 749 MiB used, 20 GiB / 21 GiB avail
Oct 02 12:18:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:18 compute-0 nova_compute[257802]: 2025-10-02 12:18:18.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:19 compute-0 nova_compute[257802]: 2025-10-02 12:18:19.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1640: 305 pgs: 305 active+clean; 246 MiB data, 749 MiB used, 20 GiB / 21 GiB avail
Oct 02 12:18:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:20.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:20.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:20 compute-0 ceph-mon[73607]: pgmap v1640: 305 pgs: 305 active+clean; 246 MiB data, 749 MiB used, 20 GiB / 21 GiB avail
Oct 02 12:18:20 compute-0 podman[304973]: 2025-10-02 12:18:20.919011413 +0000 UTC m=+0.058755811 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 12:18:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1641: 305 pgs: 305 active+clean; 246 MiB data, 749 MiB used, 20 GiB / 21 GiB avail
Oct 02 12:18:21 compute-0 nova_compute[257802]: 2025-10-02 12:18:21.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:18:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:22.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:18:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:22.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:22 compute-0 nova_compute[257802]: 2025-10-02 12:18:22.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:22 compute-0 ceph-mon[73607]: pgmap v1641: 305 pgs: 305 active+clean; 246 MiB data, 749 MiB used, 20 GiB / 21 GiB avail
Oct 02 12:18:23 compute-0 nova_compute[257802]: 2025-10-02 12:18:23.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 305 active+clean; 246 MiB data, 749 MiB used, 20 GiB / 21 GiB avail
Oct 02 12:18:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1481952339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:23 compute-0 nova_compute[257802]: 2025-10-02 12:18:23.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:18:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:24.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:18:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:24.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:24 compute-0 ceph-mon[73607]: pgmap v1642: 305 pgs: 305 active+clean; 246 MiB data, 749 MiB used, 20 GiB / 21 GiB avail
Oct 02 12:18:24 compute-0 podman[304996]: 2025-10-02 12:18:24.911958082 +0000 UTC m=+0.047831343 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:18:24 compute-0 podman[304995]: 2025-10-02 12:18:24.917552989 +0000 UTC m=+0.056597418 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 12:18:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1643: 305 pgs: 305 active+clean; 258 MiB data, 754 MiB used, 20 GiB / 21 GiB avail; 1.4 KiB/s rd, 370 KiB/s wr, 2 op/s
Oct 02 12:18:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2452682330' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:18:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2452682330' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:18:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:18:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:26.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:18:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:26.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:26.935 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:26.935 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:26.936 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:26 compute-0 nova_compute[257802]: 2025-10-02 12:18:26.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:26 compute-0 ceph-mon[73607]: pgmap v1643: 305 pgs: 305 active+clean; 258 MiB data, 754 MiB used, 20 GiB / 21 GiB avail; 1.4 KiB/s rd, 370 KiB/s wr, 2 op/s
Oct 02 12:18:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3737825568' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:18:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3737825568' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:18:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 258 MiB data, 754 MiB used, 20 GiB / 21 GiB avail; 1.4 KiB/s rd, 370 KiB/s wr, 2 op/s
Oct 02 12:18:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:28.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:28.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:28.525 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:18:28 compute-0 nova_compute[257802]: 2025-10-02 12:18:28.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:28.526 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:18:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:28 compute-0 nova_compute[257802]: 2025-10-02 12:18:28.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:28 compute-0 podman[305033]: 2025-10-02 12:18:28.930805414 +0000 UTC m=+0.071923903 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller)
Oct 02 12:18:28 compute-0 ceph-mon[73607]: pgmap v1644: 305 pgs: 305 active+clean; 258 MiB data, 754 MiB used, 20 GiB / 21 GiB avail; 1.4 KiB/s rd, 370 KiB/s wr, 2 op/s
Oct 02 12:18:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3420369335' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3194550672' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:18:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3194550672' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:18:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3055176312' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1645: 305 pgs: 305 active+clean; 271 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 58 op/s
Oct 02 12:18:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:18:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:30.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:18:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:30.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:31 compute-0 ceph-mon[73607]: pgmap v1645: 305 pgs: 305 active+clean; 271 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 58 op/s
Oct 02 12:18:31 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3744318202' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:31 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2878355219' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 243 MiB data, 744 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 76 op/s
Oct 02 12:18:31 compute-0 nova_compute[257802]: 2025-10-02 12:18:31.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:32.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:32.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:33 compute-0 ceph-mon[73607]: pgmap v1646: 305 pgs: 305 active+clean; 243 MiB data, 744 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 76 op/s
Oct 02 12:18:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 305 active+clean; 214 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Oct 02 12:18:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:33 compute-0 nova_compute[257802]: 2025-10-02 12:18:33.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:34.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:34.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:35 compute-0 ceph-mon[73607]: pgmap v1647: 305 pgs: 305 active+clean; 214 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Oct 02 12:18:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 305 active+clean; 244 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.7 MiB/s wr, 199 op/s
Oct 02 12:18:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:36.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:36 compute-0 ceph-mon[73607]: pgmap v1648: 305 pgs: 305 active+clean; 244 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.7 MiB/s wr, 199 op/s
Oct 02 12:18:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:36.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:36 compute-0 nova_compute[257802]: 2025-10-02 12:18:36.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 305 active+clean; 244 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.4 MiB/s wr, 197 op/s
Oct 02 12:18:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:37.528 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:38.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:38 compute-0 sudo[305063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:18:38 compute-0 sudo[305063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:38 compute-0 sudo[305063]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:38 compute-0 nova_compute[257802]: 2025-10-02 12:18:38.310 2 DEBUG oslo_concurrency.lockutils [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Acquiring lock "fab74f5b-b97e-42c7-89ac-6cc796bf5b74" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:38 compute-0 nova_compute[257802]: 2025-10-02 12:18:38.311 2 DEBUG oslo_concurrency.lockutils [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Lock "fab74f5b-b97e-42c7-89ac-6cc796bf5b74" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:38 compute-0 sudo[305088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:18:38 compute-0 nova_compute[257802]: 2025-10-02 12:18:38.332 2 DEBUG nova.compute.manager [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:18:38 compute-0 sudo[305088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:38 compute-0 sudo[305088]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:38 compute-0 nova_compute[257802]: 2025-10-02 12:18:38.407 2 DEBUG oslo_concurrency.lockutils [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:38 compute-0 nova_compute[257802]: 2025-10-02 12:18:38.407 2 DEBUG oslo_concurrency.lockutils [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:38 compute-0 nova_compute[257802]: 2025-10-02 12:18:38.415 2 DEBUG nova.virt.hardware [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:18:38 compute-0 nova_compute[257802]: 2025-10-02 12:18:38.415 2 INFO nova.compute.claims [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:18:38 compute-0 ceph-mon[73607]: pgmap v1649: 305 pgs: 305 active+clean; 244 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.4 MiB/s wr, 197 op/s
Oct 02 12:18:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:38.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:38 compute-0 nova_compute[257802]: 2025-10-02 12:18:38.559 2 DEBUG oslo_concurrency.processutils [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:38 compute-0 nova_compute[257802]: 2025-10-02 12:18:38.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:18:38 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2313861960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:38 compute-0 nova_compute[257802]: 2025-10-02 12:18:38.980 2 DEBUG oslo_concurrency.processutils [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:38 compute-0 nova_compute[257802]: 2025-10-02 12:18:38.986 2 DEBUG nova.compute.provider_tree [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:18:39 compute-0 nova_compute[257802]: 2025-10-02 12:18:39.021 2 DEBUG nova.scheduler.client.report [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:18:39 compute-0 nova_compute[257802]: 2025-10-02 12:18:39.045 2 DEBUG oslo_concurrency.lockutils [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:39 compute-0 nova_compute[257802]: 2025-10-02 12:18:39.046 2 DEBUG nova.compute.manager [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:18:39 compute-0 nova_compute[257802]: 2025-10-02 12:18:39.089 2 DEBUG nova.compute.manager [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:18:39 compute-0 nova_compute[257802]: 2025-10-02 12:18:39.089 2 DEBUG nova.network.neutron [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:18:39 compute-0 nova_compute[257802]: 2025-10-02 12:18:39.141 2 INFO nova.virt.libvirt.driver [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:18:39 compute-0 nova_compute[257802]: 2025-10-02 12:18:39.158 2 DEBUG nova.compute.manager [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:18:39 compute-0 nova_compute[257802]: 2025-10-02 12:18:39.197 2 INFO nova.virt.block_device [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Booting with volume 6418770c-0612-4553-8861-921425b3c82e at /dev/vda
Oct 02 12:18:39 compute-0 nova_compute[257802]: 2025-10-02 12:18:39.294 2 DEBUG os_brick.utils [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:18:39 compute-0 nova_compute[257802]: 2025-10-02 12:18:39.295 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:39 compute-0 nova_compute[257802]: 2025-10-02 12:18:39.306 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:39 compute-0 nova_compute[257802]: 2025-10-02 12:18:39.307 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd39532-eee6-48ac-875a-f8356c1c6ea4]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:39 compute-0 nova_compute[257802]: 2025-10-02 12:18:39.308 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:39 compute-0 nova_compute[257802]: 2025-10-02 12:18:39.315 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:39 compute-0 nova_compute[257802]: 2025-10-02 12:18:39.315 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[a1256f6a-2187-436e-b96f-e063dacf78b8]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:39 compute-0 nova_compute[257802]: 2025-10-02 12:18:39.317 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 260 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.2 MiB/s wr, 255 op/s
Oct 02 12:18:39 compute-0 nova_compute[257802]: 2025-10-02 12:18:39.324 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:39 compute-0 nova_compute[257802]: 2025-10-02 12:18:39.324 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[39679fe6-f927-4cc3-94f8-314d8cd8adbc]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:39 compute-0 nova_compute[257802]: 2025-10-02 12:18:39.326 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[c09c0586-a4fc-4d52-bec8-704ef40f9cbc]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:39 compute-0 nova_compute[257802]: 2025-10-02 12:18:39.326 2 DEBUG oslo_concurrency.processutils [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:39 compute-0 nova_compute[257802]: 2025-10-02 12:18:39.354 2 DEBUG oslo_concurrency.processutils [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:39 compute-0 nova_compute[257802]: 2025-10-02 12:18:39.356 2 DEBUG os_brick.initiator.connectors.lightos [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:18:39 compute-0 nova_compute[257802]: 2025-10-02 12:18:39.357 2 DEBUG os_brick.initiator.connectors.lightos [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:18:39 compute-0 nova_compute[257802]: 2025-10-02 12:18:39.357 2 DEBUG os_brick.initiator.connectors.lightos [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:18:39 compute-0 nova_compute[257802]: 2025-10-02 12:18:39.358 2 DEBUG os_brick.utils [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] <== get_connector_properties: return (62ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:18:39 compute-0 nova_compute[257802]: 2025-10-02 12:18:39.358 2 DEBUG nova.virt.block_device [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Updating existing volume attachment record: 1b8757e1-8403-4389-af1f-b49cc389a462 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:18:39 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2313861960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:18:40 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4255823465' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:40 compute-0 nova_compute[257802]: 2025-10-02 12:18:40.153 2 DEBUG nova.policy [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e268197f7054a098b565874c3fdd76c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a07342ec339549a4b091ddbcfedae271', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:18:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:18:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:40.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:18:40 compute-0 nova_compute[257802]: 2025-10-02 12:18:40.428 2 DEBUG nova.compute.manager [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:18:40 compute-0 nova_compute[257802]: 2025-10-02 12:18:40.429 2 DEBUG nova.virt.libvirt.driver [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:18:40 compute-0 nova_compute[257802]: 2025-10-02 12:18:40.430 2 INFO nova.virt.libvirt.driver [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Creating image(s)
Oct 02 12:18:40 compute-0 nova_compute[257802]: 2025-10-02 12:18:40.430 2 DEBUG nova.virt.libvirt.driver [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:18:40 compute-0 nova_compute[257802]: 2025-10-02 12:18:40.430 2 DEBUG nova.virt.libvirt.driver [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Ensure instance console log exists: /var/lib/nova/instances/fab74f5b-b97e-42c7-89ac-6cc796bf5b74/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:18:40 compute-0 nova_compute[257802]: 2025-10-02 12:18:40.431 2 DEBUG oslo_concurrency.lockutils [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:40 compute-0 nova_compute[257802]: 2025-10-02 12:18:40.431 2 DEBUG oslo_concurrency.lockutils [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:40 compute-0 nova_compute[257802]: 2025-10-02 12:18:40.431 2 DEBUG oslo_concurrency.lockutils [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:40.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:40 compute-0 ceph-mon[73607]: pgmap v1650: 305 pgs: 305 active+clean; 260 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.2 MiB/s wr, 255 op/s
Oct 02 12:18:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/4255823465' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:41 compute-0 nova_compute[257802]: 2025-10-02 12:18:41.156 2 DEBUG nova.network.neutron [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Successfully created port: 29d9f254-ad8e-43f0-81ab-f104655d2f97 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:18:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 260 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 226 op/s
Oct 02 12:18:41 compute-0 nova_compute[257802]: 2025-10-02 12:18:41.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:42.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:18:42
Oct 02 12:18:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:18:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:18:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'images', 'volumes', 'backups', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'default.rgw.log']
Oct 02 12:18:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:18:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:42.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:42 compute-0 ceph-mon[73607]: pgmap v1651: 305 pgs: 305 active+clean; 260 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 226 op/s
Oct 02 12:18:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:18:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:18:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:18:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:18:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:18:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:18:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:18:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:18:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:18:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:18:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:18:43 compute-0 nova_compute[257802]: 2025-10-02 12:18:43.146 2 DEBUG nova.network.neutron [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Successfully updated port: 29d9f254-ad8e-43f0-81ab-f104655d2f97 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:18:43 compute-0 nova_compute[257802]: 2025-10-02 12:18:43.168 2 DEBUG oslo_concurrency.lockutils [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Acquiring lock "refresh_cache-fab74f5b-b97e-42c7-89ac-6cc796bf5b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:18:43 compute-0 nova_compute[257802]: 2025-10-02 12:18:43.169 2 DEBUG oslo_concurrency.lockutils [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Acquired lock "refresh_cache-fab74f5b-b97e-42c7-89ac-6cc796bf5b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:18:43 compute-0 nova_compute[257802]: 2025-10-02 12:18:43.169 2 DEBUG nova.network.neutron [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:18:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:18:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:18:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:18:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:18:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:18:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 260 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 210 op/s
Oct 02 12:18:43 compute-0 nova_compute[257802]: 2025-10-02 12:18:43.644 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:43 compute-0 nova_compute[257802]: 2025-10-02 12:18:43.645 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:43 compute-0 nova_compute[257802]: 2025-10-02 12:18:43.645 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:18:43 compute-0 nova_compute[257802]: 2025-10-02 12:18:43.761 2 DEBUG nova.network.neutron [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:18:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:43 compute-0 nova_compute[257802]: 2025-10-02 12:18:43.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:44 compute-0 nova_compute[257802]: 2025-10-02 12:18:44.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:18:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:44.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:18:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:44.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:44 compute-0 ceph-mon[73607]: pgmap v1652: 305 pgs: 305 active+clean; 260 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 210 op/s
Oct 02 12:18:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1653: 305 pgs: 305 active+clean; 271 MiB data, 760 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.0 MiB/s wr, 229 op/s
Oct 02 12:18:45 compute-0 nova_compute[257802]: 2025-10-02 12:18:45.698 2 DEBUG nova.network.neutron [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Updating instance_info_cache with network_info: [{"id": "29d9f254-ad8e-43f0-81ab-f104655d2f97", "address": "fa:16:3e:6f:fc:0f", "network": {"id": "09bdffbb-7025-43ba-8bc7-0c1b73b98426", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-2093507142-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a07342ec339549a4b091ddbcfedae271", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29d9f254-ad", "ovs_interfaceid": "29d9f254-ad8e-43f0-81ab-f104655d2f97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:18:45 compute-0 nova_compute[257802]: 2025-10-02 12:18:45.722 2 DEBUG nova.compute.manager [req-61092fda-4b94-419d-970a-140f114f4903 req-d2eba3f0-ce62-4dbf-abb7-5512bad4f6d8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Received event network-changed-29d9f254-ad8e-43f0-81ab-f104655d2f97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:45 compute-0 nova_compute[257802]: 2025-10-02 12:18:45.723 2 DEBUG nova.compute.manager [req-61092fda-4b94-419d-970a-140f114f4903 req-d2eba3f0-ce62-4dbf-abb7-5512bad4f6d8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Refreshing instance network info cache due to event network-changed-29d9f254-ad8e-43f0-81ab-f104655d2f97. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:18:45 compute-0 nova_compute[257802]: 2025-10-02 12:18:45.723 2 DEBUG oslo_concurrency.lockutils [req-61092fda-4b94-419d-970a-140f114f4903 req-d2eba3f0-ce62-4dbf-abb7-5512bad4f6d8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-fab74f5b-b97e-42c7-89ac-6cc796bf5b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:18:45 compute-0 nova_compute[257802]: 2025-10-02 12:18:45.736 2 DEBUG oslo_concurrency.lockutils [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Releasing lock "refresh_cache-fab74f5b-b97e-42c7-89ac-6cc796bf5b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:18:45 compute-0 nova_compute[257802]: 2025-10-02 12:18:45.737 2 DEBUG nova.compute.manager [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Instance network_info: |[{"id": "29d9f254-ad8e-43f0-81ab-f104655d2f97", "address": "fa:16:3e:6f:fc:0f", "network": {"id": "09bdffbb-7025-43ba-8bc7-0c1b73b98426", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-2093507142-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a07342ec339549a4b091ddbcfedae271", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29d9f254-ad", "ovs_interfaceid": "29d9f254-ad8e-43f0-81ab-f104655d2f97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:18:45 compute-0 nova_compute[257802]: 2025-10-02 12:18:45.737 2 DEBUG oslo_concurrency.lockutils [req-61092fda-4b94-419d-970a-140f114f4903 req-d2eba3f0-ce62-4dbf-abb7-5512bad4f6d8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-fab74f5b-b97e-42c7-89ac-6cc796bf5b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:18:45 compute-0 nova_compute[257802]: 2025-10-02 12:18:45.738 2 DEBUG nova.network.neutron [req-61092fda-4b94-419d-970a-140f114f4903 req-d2eba3f0-ce62-4dbf-abb7-5512bad4f6d8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Refreshing network info cache for port 29d9f254-ad8e-43f0-81ab-f104655d2f97 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:18:45 compute-0 nova_compute[257802]: 2025-10-02 12:18:45.742 2 DEBUG nova.virt.libvirt.driver [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Start _get_guest_xml network_info=[{"id": "29d9f254-ad8e-43f0-81ab-f104655d2f97", "address": "fa:16:3e:6f:fc:0f", "network": {"id": "09bdffbb-7025-43ba-8bc7-0c1b73b98426", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-2093507142-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a07342ec339549a4b091ddbcfedae271", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29d9f254-ad", "ovs_interfaceid": "29d9f254-ad8e-43f0-81ab-f104655d2f97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'guest_format': None, 'attachment_id': '1b8757e1-8403-4389-af1f-b49cc389a462', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'delete_on_termination': True, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-6418770c-0612-4553-8861-921425b3c82e', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '6418770c-0612-4553-8861-921425b3c82e', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'fab74f5b-b97e-42c7-89ac-6cc796bf5b74', 'attached_at': '', 'detached_at': '', 'volume_id': '6418770c-0612-4553-8861-921425b3c82e', 'serial': '6418770c-0612-4553-8861-921425b3c82e'}, 'device_type': 'disk', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:18:45 compute-0 nova_compute[257802]: 2025-10-02 12:18:45.748 2 WARNING nova.virt.libvirt.driver [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:18:45 compute-0 nova_compute[257802]: 2025-10-02 12:18:45.753 2 DEBUG nova.virt.libvirt.host [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:18:45 compute-0 nova_compute[257802]: 2025-10-02 12:18:45.755 2 DEBUG nova.virt.libvirt.host [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:18:45 compute-0 nova_compute[257802]: 2025-10-02 12:18:45.760 2 DEBUG nova.virt.libvirt.host [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:18:45 compute-0 nova_compute[257802]: 2025-10-02 12:18:45.761 2 DEBUG nova.virt.libvirt.host [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:18:45 compute-0 nova_compute[257802]: 2025-10-02 12:18:45.763 2 DEBUG nova.virt.libvirt.driver [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:18:45 compute-0 nova_compute[257802]: 2025-10-02 12:18:45.764 2 DEBUG nova.virt.hardware [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:18:45 compute-0 nova_compute[257802]: 2025-10-02 12:18:45.764 2 DEBUG nova.virt.hardware [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:18:45 compute-0 nova_compute[257802]: 2025-10-02 12:18:45.765 2 DEBUG nova.virt.hardware [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:18:45 compute-0 nova_compute[257802]: 2025-10-02 12:18:45.765 2 DEBUG nova.virt.hardware [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:18:45 compute-0 nova_compute[257802]: 2025-10-02 12:18:45.765 2 DEBUG nova.virt.hardware [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:18:45 compute-0 nova_compute[257802]: 2025-10-02 12:18:45.765 2 DEBUG nova.virt.hardware [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:18:45 compute-0 nova_compute[257802]: 2025-10-02 12:18:45.765 2 DEBUG nova.virt.hardware [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:18:45 compute-0 nova_compute[257802]: 2025-10-02 12:18:45.766 2 DEBUG nova.virt.hardware [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:18:45 compute-0 nova_compute[257802]: 2025-10-02 12:18:45.766 2 DEBUG nova.virt.hardware [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:18:45 compute-0 nova_compute[257802]: 2025-10-02 12:18:45.766 2 DEBUG nova.virt.hardware [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:18:45 compute-0 nova_compute[257802]: 2025-10-02 12:18:45.766 2 DEBUG nova.virt.hardware [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:18:45 compute-0 nova_compute[257802]: 2025-10-02 12:18:45.816 2 DEBUG nova.storage.rbd_utils [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] rbd image fab74f5b-b97e-42c7-89ac-6cc796bf5b74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:18:45 compute-0 nova_compute[257802]: 2025-10-02 12:18:45.822 2 DEBUG oslo_concurrency.processutils [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/942433783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.120 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:18:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:18:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:46.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:18:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:18:46 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/286069963' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.296 2 DEBUG oslo_concurrency.processutils [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.325 2 DEBUG nova.virt.libvirt.vif [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:18:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestBootFromVolume-server-772132433',display_name='tempest-ServersTestBootFromVolume-server-772132433',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestbootfromvolume-server-772132433',id=78,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJSxeBsrnH5GThsMacmN5Gh9iZZlU/CvAh8lf0P9OkxKZVjcsvsd4aA81Qv0/kY9JJpl22sAXkmrW+qBOwAs5WJkJD3lG7aZC8qJ+6aVZ8N0Ln3qVDvI+VB/wcXZWr0e5w==',key_name='tempest-keypair-1254580889',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a07342ec339549a4b091ddbcfedae271',ramdisk_id='',reservation_id='r-lskvlc0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServersTestBootFromVolume-2081000680',owner_user_name='tempest-ServersTestBootFromVolume-2081000680-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:18:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0e268197f7054a098b565874c3fdd76c',uuid=fab74f5b-b97e-42c7-89ac-6cc796bf5b74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29d9f254-ad8e-43f0-81ab-f104655d2f97", "address": "fa:16:3e:6f:fc:0f", "network": {"id": "09bdffbb-7025-43ba-8bc7-0c1b73b98426", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-2093507142-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a07342ec339549a4b091ddbcfedae271", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29d9f254-ad", "ovs_interfaceid": "29d9f254-ad8e-43f0-81ab-f104655d2f97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.326 2 DEBUG nova.network.os_vif_util [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Converting VIF {"id": "29d9f254-ad8e-43f0-81ab-f104655d2f97", "address": "fa:16:3e:6f:fc:0f", "network": {"id": "09bdffbb-7025-43ba-8bc7-0c1b73b98426", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-2093507142-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a07342ec339549a4b091ddbcfedae271", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29d9f254-ad", "ovs_interfaceid": "29d9f254-ad8e-43f0-81ab-f104655d2f97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.326 2 DEBUG nova.network.os_vif_util [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:fc:0f,bridge_name='br-int',has_traffic_filtering=True,id=29d9f254-ad8e-43f0-81ab-f104655d2f97,network=Network(09bdffbb-7025-43ba-8bc7-0c1b73b98426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29d9f254-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.328 2 DEBUG nova.objects.instance [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Lazy-loading 'pci_devices' on Instance uuid fab74f5b-b97e-42c7-89ac-6cc796bf5b74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.346 2 DEBUG nova.virt.libvirt.driver [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:18:46 compute-0 nova_compute[257802]:   <uuid>fab74f5b-b97e-42c7-89ac-6cc796bf5b74</uuid>
Oct 02 12:18:46 compute-0 nova_compute[257802]:   <name>instance-0000004e</name>
Oct 02 12:18:46 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:18:46 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:18:46 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:18:46 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:       <nova:name>tempest-ServersTestBootFromVolume-server-772132433</nova:name>
Oct 02 12:18:46 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:18:45</nova:creationTime>
Oct 02 12:18:46 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:18:46 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:18:46 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:18:46 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:18:46 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:18:46 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:18:46 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:18:46 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:18:46 compute-0 nova_compute[257802]:         <nova:user uuid="0e268197f7054a098b565874c3fdd76c">tempest-ServersTestBootFromVolume-2081000680-project-member</nova:user>
Oct 02 12:18:46 compute-0 nova_compute[257802]:         <nova:project uuid="a07342ec339549a4b091ddbcfedae271">tempest-ServersTestBootFromVolume-2081000680</nova:project>
Oct 02 12:18:46 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:18:46 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:18:46 compute-0 nova_compute[257802]:         <nova:port uuid="29d9f254-ad8e-43f0-81ab-f104655d2f97">
Oct 02 12:18:46 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:18:46 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:18:46 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:18:46 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <system>
Oct 02 12:18:46 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:18:46 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:18:46 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:18:46 compute-0 nova_compute[257802]:       <entry name="serial">fab74f5b-b97e-42c7-89ac-6cc796bf5b74</entry>
Oct 02 12:18:46 compute-0 nova_compute[257802]:       <entry name="uuid">fab74f5b-b97e-42c7-89ac-6cc796bf5b74</entry>
Oct 02 12:18:46 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     </system>
Oct 02 12:18:46 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:18:46 compute-0 nova_compute[257802]:   <os>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:   </os>
Oct 02 12:18:46 compute-0 nova_compute[257802]:   <features>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:   </features>
Oct 02 12:18:46 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:18:46 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:18:46 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:18:46 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/fab74f5b-b97e-42c7-89ac-6cc796bf5b74_disk.config">
Oct 02 12:18:46 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:       </source>
Oct 02 12:18:46 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:18:46 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:18:46 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:18:46 compute-0 nova_compute[257802]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:       <source protocol="rbd" name="volumes/volume-6418770c-0612-4553-8861-921425b3c82e">
Oct 02 12:18:46 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:       </source>
Oct 02 12:18:46 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:18:46 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:18:46 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:       <serial>6418770c-0612-4553-8861-921425b3c82e</serial>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:18:46 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:6f:fc:0f"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:       <target dev="tap29d9f254-ad"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:18:46 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/fab74f5b-b97e-42c7-89ac-6cc796bf5b74/console.log" append="off"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <video>
Oct 02 12:18:46 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     </video>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:18:46 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:18:46 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:18:46 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:18:46 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:18:46 compute-0 nova_compute[257802]: </domain>
Oct 02 12:18:46 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.348 2 DEBUG nova.compute.manager [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Preparing to wait for external event network-vif-plugged-29d9f254-ad8e-43f0-81ab-f104655d2f97 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.348 2 DEBUG oslo_concurrency.lockutils [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Acquiring lock "fab74f5b-b97e-42c7-89ac-6cc796bf5b74-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.348 2 DEBUG oslo_concurrency.lockutils [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Lock "fab74f5b-b97e-42c7-89ac-6cc796bf5b74-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.348 2 DEBUG oslo_concurrency.lockutils [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Lock "fab74f5b-b97e-42c7-89ac-6cc796bf5b74-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.349 2 DEBUG nova.virt.libvirt.vif [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:18:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestBootFromVolume-server-772132433',display_name='tempest-ServersTestBootFromVolume-server-772132433',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestbootfromvolume-server-772132433',id=78,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJSxeBsrnH5GThsMacmN5Gh9iZZlU/CvAh8lf0P9OkxKZVjcsvsd4aA81Qv0/kY9JJpl22sAXkmrW+qBOwAs5WJkJD3lG7aZC8qJ+6aVZ8N0Ln3qVDvI+VB/wcXZWr0e5w==',key_name='tempest-keypair-1254580889',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a07342ec339549a4b091ddbcfedae271',ramdisk_id='',reservation_id='r-lskvlc0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServersTestBootFromVolume-2081000680',owner_user_name='tempest-ServersTestBootFromVolume-2081000680-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:18:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0e268197f7054a098b565874c3fdd76c',uuid=fab74f5b-b97e-42c7-89ac-6cc796bf5b74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29d9f254-ad8e-43f0-81ab-f104655d2f97", "address": "fa:16:3e:6f:fc:0f", "network": {"id": "09bdffbb-7025-43ba-8bc7-0c1b73b98426", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-2093507142-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a07342ec339549a4b091ddbcfedae271", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29d9f254-ad", "ovs_interfaceid": "29d9f254-ad8e-43f0-81ab-f104655d2f97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.350 2 DEBUG nova.network.os_vif_util [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Converting VIF {"id": "29d9f254-ad8e-43f0-81ab-f104655d2f97", "address": "fa:16:3e:6f:fc:0f", "network": {"id": "09bdffbb-7025-43ba-8bc7-0c1b73b98426", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-2093507142-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a07342ec339549a4b091ddbcfedae271", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29d9f254-ad", "ovs_interfaceid": "29d9f254-ad8e-43f0-81ab-f104655d2f97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.350 2 DEBUG nova.network.os_vif_util [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:fc:0f,bridge_name='br-int',has_traffic_filtering=True,id=29d9f254-ad8e-43f0-81ab-f104655d2f97,network=Network(09bdffbb-7025-43ba-8bc7-0c1b73b98426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29d9f254-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.351 2 DEBUG os_vif [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:fc:0f,bridge_name='br-int',has_traffic_filtering=True,id=29d9f254-ad8e-43f0-81ab-f104655d2f97,network=Network(09bdffbb-7025-43ba-8bc7-0c1b73b98426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29d9f254-ad') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.355 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.355 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.359 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap29d9f254-ad, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.360 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap29d9f254-ad, col_values=(('external_ids', {'iface-id': '29d9f254-ad8e-43f0-81ab-f104655d2f97', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6f:fc:0f', 'vm-uuid': 'fab74f5b-b97e-42c7-89ac-6cc796bf5b74'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:46 compute-0 NetworkManager[44987]: <info>  [1759407526.4143] manager: (tap29d9f254-ad): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/146)
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.422 2 INFO os_vif [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:fc:0f,bridge_name='br-int',has_traffic_filtering=True,id=29d9f254-ad8e-43f0-81ab-f104655d2f97,network=Network(09bdffbb-7025-43ba-8bc7-0c1b73b98426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29d9f254-ad')
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.465 2 DEBUG nova.virt.libvirt.driver [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.466 2 DEBUG nova.virt.libvirt.driver [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.466 2 DEBUG nova.virt.libvirt.driver [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] No VIF found with MAC fa:16:3e:6f:fc:0f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.466 2 INFO nova.virt.libvirt.driver [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Using config drive
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.489 2 DEBUG nova.storage.rbd_utils [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] rbd image fab74f5b-b97e-42c7-89ac-6cc796bf5b74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:18:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:46.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:46 compute-0 ceph-mon[73607]: pgmap v1653: 305 pgs: 305 active+clean; 271 MiB data, 760 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.0 MiB/s wr, 229 op/s
Oct 02 12:18:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/286069963' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1135563654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:46 compute-0 nova_compute[257802]: 2025-10-02 12:18:46.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:47 compute-0 nova_compute[257802]: 2025-10-02 12:18:47.116 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 305 active+clean; 271 MiB data, 760 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 02 12:18:47 compute-0 nova_compute[257802]: 2025-10-02 12:18:47.367 2 INFO nova.virt.libvirt.driver [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Creating config drive at /var/lib/nova/instances/fab74f5b-b97e-42c7-89ac-6cc796bf5b74/disk.config
Oct 02 12:18:47 compute-0 nova_compute[257802]: 2025-10-02 12:18:47.372 2 DEBUG oslo_concurrency.processutils [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fab74f5b-b97e-42c7-89ac-6cc796bf5b74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3ou11grh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:47 compute-0 nova_compute[257802]: 2025-10-02 12:18:47.521 2 DEBUG oslo_concurrency.processutils [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fab74f5b-b97e-42c7-89ac-6cc796bf5b74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3ou11grh" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:47 compute-0 nova_compute[257802]: 2025-10-02 12:18:47.545 2 DEBUG nova.storage.rbd_utils [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] rbd image fab74f5b-b97e-42c7-89ac-6cc796bf5b74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:18:47 compute-0 nova_compute[257802]: 2025-10-02 12:18:47.549 2 DEBUG oslo_concurrency.processutils [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fab74f5b-b97e-42c7-89ac-6cc796bf5b74/disk.config fab74f5b-b97e-42c7-89ac-6cc796bf5b74_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:47 compute-0 nova_compute[257802]: 2025-10-02 12:18:47.775 2 DEBUG oslo_concurrency.processutils [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fab74f5b-b97e-42c7-89ac-6cc796bf5b74/disk.config fab74f5b-b97e-42c7-89ac-6cc796bf5b74_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.226s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:47 compute-0 nova_compute[257802]: 2025-10-02 12:18:47.776 2 INFO nova.virt.libvirt.driver [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Deleting local config drive /var/lib/nova/instances/fab74f5b-b97e-42c7-89ac-6cc796bf5b74/disk.config because it was imported into RBD.
Oct 02 12:18:47 compute-0 kernel: tap29d9f254-ad: entered promiscuous mode
Oct 02 12:18:47 compute-0 NetworkManager[44987]: <info>  [1759407527.8367] manager: (tap29d9f254-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/147)
Oct 02 12:18:47 compute-0 ovn_controller[148183]: 2025-10-02T12:18:47Z|00323|binding|INFO|Claiming lport 29d9f254-ad8e-43f0-81ab-f104655d2f97 for this chassis.
Oct 02 12:18:47 compute-0 ovn_controller[148183]: 2025-10-02T12:18:47Z|00324|binding|INFO|29d9f254-ad8e-43f0-81ab-f104655d2f97: Claiming fa:16:3e:6f:fc:0f 10.100.0.4
Oct 02 12:18:47 compute-0 nova_compute[257802]: 2025-10-02 12:18:47.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:47.855 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:fc:0f 10.100.0.4'], port_security=['fa:16:3e:6f:fc:0f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'fab74f5b-b97e-42c7-89ac-6cc796bf5b74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09bdffbb-7025-43ba-8bc7-0c1b73b98426', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a07342ec339549a4b091ddbcfedae271', 'neutron:revision_number': '2', 'neutron:security_group_ids': '30912c5c-8f90-4a42-9ed3-ef0100e87b28', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39ac99e8-48cc-4487-b968-24fcd53497f9, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=29d9f254-ad8e-43f0-81ab-f104655d2f97) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:47.858 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 29d9f254-ad8e-43f0-81ab-f104655d2f97 in datapath 09bdffbb-7025-43ba-8bc7-0c1b73b98426 bound to our chassis
Oct 02 12:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:47.860 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 09bdffbb-7025-43ba-8bc7-0c1b73b98426
Oct 02 12:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:47.871 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cea6c659-dea3-4f52-98a5-de02285eaee6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:47.873 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap09bdffbb-71 in ovnmeta-09bdffbb-7025-43ba-8bc7-0c1b73b98426 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:18:47 compute-0 systemd-udevd[305261]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:47.875 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap09bdffbb-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:18:47 compute-0 systemd-machined[211836]: New machine qemu-35-instance-0000004e.
Oct 02 12:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:47.875 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2f613327-deff-49d1-acc0-23b4e749b7f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:47.879 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[df02e6ff-8061-467f-b8d0-a9a1669928c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:47 compute-0 NetworkManager[44987]: <info>  [1759407527.8872] device (tap29d9f254-ad): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:18:47 compute-0 NetworkManager[44987]: <info>  [1759407527.8886] device (tap29d9f254-ad): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:18:47 compute-0 systemd[1]: Started Virtual Machine qemu-35-instance-0000004e.
Oct 02 12:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:47.892 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[e96c25db-7579-4d42-b95b-cbf173f5451b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:47 compute-0 nova_compute[257802]: 2025-10-02 12:18:47.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:47.922 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f17c6120-2981-4062-99de-2f2f9788960b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:47 compute-0 ovn_controller[148183]: 2025-10-02T12:18:47Z|00325|binding|INFO|Setting lport 29d9f254-ad8e-43f0-81ab-f104655d2f97 ovn-installed in OVS
Oct 02 12:18:47 compute-0 ovn_controller[148183]: 2025-10-02T12:18:47Z|00326|binding|INFO|Setting lport 29d9f254-ad8e-43f0-81ab-f104655d2f97 up in Southbound
Oct 02 12:18:47 compute-0 nova_compute[257802]: 2025-10-02 12:18:47.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:47.956 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[55e1f9f0-f0cb-43ad-b706-038424550424]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:47 compute-0 systemd-udevd[305264]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:18:47 compute-0 NetworkManager[44987]: <info>  [1759407527.9637] manager: (tap09bdffbb-70): new Veth device (/org/freedesktop/NetworkManager/Devices/148)
Oct 02 12:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:47.962 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2ae1db2c-e877-4f01-b5d0-23bf129787bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:47 compute-0 nova_compute[257802]: 2025-10-02 12:18:47.967 2 DEBUG nova.network.neutron [req-61092fda-4b94-419d-970a-140f114f4903 req-d2eba3f0-ce62-4dbf-abb7-5512bad4f6d8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Updated VIF entry in instance network info cache for port 29d9f254-ad8e-43f0-81ab-f104655d2f97. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:18:47 compute-0 nova_compute[257802]: 2025-10-02 12:18:47.968 2 DEBUG nova.network.neutron [req-61092fda-4b94-419d-970a-140f114f4903 req-d2eba3f0-ce62-4dbf-abb7-5512bad4f6d8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Updating instance_info_cache with network_info: [{"id": "29d9f254-ad8e-43f0-81ab-f104655d2f97", "address": "fa:16:3e:6f:fc:0f", "network": {"id": "09bdffbb-7025-43ba-8bc7-0c1b73b98426", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-2093507142-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a07342ec339549a4b091ddbcfedae271", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29d9f254-ad", "ovs_interfaceid": "29d9f254-ad8e-43f0-81ab-f104655d2f97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:18:47 compute-0 nova_compute[257802]: 2025-10-02 12:18:47.987 2 DEBUG oslo_concurrency.lockutils [req-61092fda-4b94-419d-970a-140f114f4903 req-d2eba3f0-ce62-4dbf-abb7-5512bad4f6d8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-fab74f5b-b97e-42c7-89ac-6cc796bf5b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:48.003 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[a57bd779-58b5-4bab-9d93-c2858e118084]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:48.006 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[2e0af98d-7c7d-4c5a-a95e-a3dd411bb0ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:48 compute-0 NetworkManager[44987]: <info>  [1759407528.0312] device (tap09bdffbb-70): carrier: link connected
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:48.039 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[743a79e3-df16-42ec-9c93-5af71bdc7fb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:48.060 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[431622be-9438-4fb4-9e54-0087af09a221]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap09bdffbb-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:55:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 559565, 'reachable_time': 33713, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305293, 'error': None, 'target': 'ovnmeta-09bdffbb-7025-43ba-8bc7-0c1b73b98426', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:48.077 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b2b8ccb5-f205-4fd5-9c82-ace8d761e659]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe59:55a3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 559565, 'tstamp': 559565}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305294, 'error': None, 'target': 'ovnmeta-09bdffbb-7025-43ba-8bc7-0c1b73b98426', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:48.096 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[16e11818-65cf-4153-b7a9-509da94c89ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap09bdffbb-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:55:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 559565, 'reachable_time': 33713, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305295, 'error': None, 'target': 'ovnmeta-09bdffbb-7025-43ba-8bc7-0c1b73b98426', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:48 compute-0 nova_compute[257802]: 2025-10-02 12:18:48.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:48.132 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ed055650-dc3e-4158-a5d3-e62ad4fbdd43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:48 compute-0 nova_compute[257802]: 2025-10-02 12:18:48.169 2 DEBUG nova.compute.manager [req-d6f420a7-9a79-4367-9d3b-5fad4247612a req-cb72bc4d-59a9-4e77-aeb1-15a588609e37 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Received event network-vif-plugged-29d9f254-ad8e-43f0-81ab-f104655d2f97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:48 compute-0 nova_compute[257802]: 2025-10-02 12:18:48.169 2 DEBUG oslo_concurrency.lockutils [req-d6f420a7-9a79-4367-9d3b-5fad4247612a req-cb72bc4d-59a9-4e77-aeb1-15a588609e37 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fab74f5b-b97e-42c7-89ac-6cc796bf5b74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:48 compute-0 nova_compute[257802]: 2025-10-02 12:18:48.170 2 DEBUG oslo_concurrency.lockutils [req-d6f420a7-9a79-4367-9d3b-5fad4247612a req-cb72bc4d-59a9-4e77-aeb1-15a588609e37 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fab74f5b-b97e-42c7-89ac-6cc796bf5b74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:48 compute-0 nova_compute[257802]: 2025-10-02 12:18:48.170 2 DEBUG oslo_concurrency.lockutils [req-d6f420a7-9a79-4367-9d3b-5fad4247612a req-cb72bc4d-59a9-4e77-aeb1-15a588609e37 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fab74f5b-b97e-42c7-89ac-6cc796bf5b74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:48 compute-0 nova_compute[257802]: 2025-10-02 12:18:48.170 2 DEBUG nova.compute.manager [req-d6f420a7-9a79-4367-9d3b-5fad4247612a req-cb72bc4d-59a9-4e77-aeb1-15a588609e37 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Processing event network-vif-plugged-29d9f254-ad8e-43f0-81ab-f104655d2f97 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:48.197 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4ca0db8a-c6f8-4244-8396-6c2b3869b697]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:48.199 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09bdffbb-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:48.199 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:48.199 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09bdffbb-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:48 compute-0 kernel: tap09bdffbb-70: entered promiscuous mode
Oct 02 12:18:48 compute-0 nova_compute[257802]: 2025-10-02 12:18:48.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:48 compute-0 NetworkManager[44987]: <info>  [1759407528.2022] manager: (tap09bdffbb-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/149)
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:48.205 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap09bdffbb-70, col_values=(('external_ids', {'iface-id': '80f72fa9-c415-4ae0-baae-90b592fde178'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:48 compute-0 nova_compute[257802]: 2025-10-02 12:18:48.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:48 compute-0 ovn_controller[148183]: 2025-10-02T12:18:48Z|00327|binding|INFO|Releasing lport 80f72fa9-c415-4ae0-baae-90b592fde178 from this chassis (sb_readonly=0)
Oct 02 12:18:48 compute-0 nova_compute[257802]: 2025-10-02 12:18:48.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:48.228 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/09bdffbb-7025-43ba-8bc7-0c1b73b98426.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/09bdffbb-7025-43ba-8bc7-0c1b73b98426.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:48.229 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[13168987-7ccf-4af6-86a8-121dcb5a4316]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:48.230 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-09bdffbb-7025-43ba-8bc7-0c1b73b98426
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/09bdffbb-7025-43ba-8bc7-0c1b73b98426.pid.haproxy
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 09bdffbb-7025-43ba-8bc7-0c1b73b98426
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:18:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:18:48.230 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-09bdffbb-7025-43ba-8bc7-0c1b73b98426', 'env', 'PROCESS_TAG=haproxy-09bdffbb-7025-43ba-8bc7-0c1b73b98426', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/09bdffbb-7025-43ba-8bc7-0c1b73b98426.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:18:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:18:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:48.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:18:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:48.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:48 compute-0 podman[305370]: 2025-10-02 12:18:48.640452271 +0000 UTC m=+0.027024224 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:18:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:48 compute-0 podman[305370]: 2025-10-02 12:18:48.946558359 +0000 UTC m=+0.333130292 container create 3f841ed814405dd13ec07a6343ee89d5492a7ed556ee68bb045d165bd578ef4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-09bdffbb-7025-43ba-8bc7-0c1b73b98426, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 02 12:18:49 compute-0 nova_compute[257802]: 2025-10-02 12:18:48.998 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407528.9978852, fab74f5b-b97e-42c7-89ac-6cc796bf5b74 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:18:49 compute-0 nova_compute[257802]: 2025-10-02 12:18:49.001 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] VM Started (Lifecycle Event)
Oct 02 12:18:49 compute-0 nova_compute[257802]: 2025-10-02 12:18:49.003 2 DEBUG nova.compute.manager [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:18:49 compute-0 nova_compute[257802]: 2025-10-02 12:18:49.008 2 DEBUG nova.virt.libvirt.driver [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:18:49 compute-0 nova_compute[257802]: 2025-10-02 12:18:49.012 2 INFO nova.virt.libvirt.driver [-] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Instance spawned successfully.
Oct 02 12:18:49 compute-0 nova_compute[257802]: 2025-10-02 12:18:49.012 2 DEBUG nova.virt.libvirt.driver [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:18:49 compute-0 nova_compute[257802]: 2025-10-02 12:18:49.029 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:18:49 compute-0 nova_compute[257802]: 2025-10-02 12:18:49.036 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:18:49 compute-0 nova_compute[257802]: 2025-10-02 12:18:49.041 2 DEBUG nova.virt.libvirt.driver [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:18:49 compute-0 nova_compute[257802]: 2025-10-02 12:18:49.042 2 DEBUG nova.virt.libvirt.driver [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:18:49 compute-0 nova_compute[257802]: 2025-10-02 12:18:49.042 2 DEBUG nova.virt.libvirt.driver [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:18:49 compute-0 nova_compute[257802]: 2025-10-02 12:18:49.043 2 DEBUG nova.virt.libvirt.driver [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:18:49 compute-0 nova_compute[257802]: 2025-10-02 12:18:49.043 2 DEBUG nova.virt.libvirt.driver [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:18:49 compute-0 nova_compute[257802]: 2025-10-02 12:18:49.043 2 DEBUG nova.virt.libvirt.driver [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:18:49 compute-0 nova_compute[257802]: 2025-10-02 12:18:49.071 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:18:49 compute-0 nova_compute[257802]: 2025-10-02 12:18:49.071 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407528.998031, fab74f5b-b97e-42c7-89ac-6cc796bf5b74 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:18:49 compute-0 nova_compute[257802]: 2025-10-02 12:18:49.072 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] VM Paused (Lifecycle Event)
Oct 02 12:18:49 compute-0 nova_compute[257802]: 2025-10-02 12:18:49.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:49 compute-0 nova_compute[257802]: 2025-10-02 12:18:49.107 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:18:49 compute-0 nova_compute[257802]: 2025-10-02 12:18:49.110 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407529.006568, fab74f5b-b97e-42c7-89ac-6cc796bf5b74 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:18:49 compute-0 nova_compute[257802]: 2025-10-02 12:18:49.110 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] VM Resumed (Lifecycle Event)
Oct 02 12:18:49 compute-0 nova_compute[257802]: 2025-10-02 12:18:49.129 2 INFO nova.compute.manager [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Took 8.70 seconds to spawn the instance on the hypervisor.
Oct 02 12:18:49 compute-0 nova_compute[257802]: 2025-10-02 12:18:49.130 2 DEBUG nova.compute.manager [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:18:49 compute-0 nova_compute[257802]: 2025-10-02 12:18:49.144 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:18:49 compute-0 ceph-mon[73607]: pgmap v1654: 305 pgs: 305 active+clean; 271 MiB data, 760 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 02 12:18:49 compute-0 nova_compute[257802]: 2025-10-02 12:18:49.151 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:18:49 compute-0 nova_compute[257802]: 2025-10-02 12:18:49.181 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:18:49 compute-0 systemd[1]: Started libpod-conmon-3f841ed814405dd13ec07a6343ee89d5492a7ed556ee68bb045d165bd578ef4e.scope.
Oct 02 12:18:49 compute-0 nova_compute[257802]: 2025-10-02 12:18:49.211 2 INFO nova.compute.manager [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Took 10.83 seconds to build instance.
Oct 02 12:18:49 compute-0 nova_compute[257802]: 2025-10-02 12:18:49.238 2 DEBUG oslo_concurrency.lockutils [None req-500b7c14-c611-43a1-95de-160cb1e79961 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Lock "fab74f5b-b97e-42c7-89ac-6cc796bf5b74" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:18:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2f52cd0fac15d3c5869e5ec3bab9f9abd6103226a1066c2ee5bd10860287e8f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:18:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 290 MiB data, 770 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.0 MiB/s wr, 186 op/s
Oct 02 12:18:49 compute-0 podman[305370]: 2025-10-02 12:18:49.351153839 +0000 UTC m=+0.737725802 container init 3f841ed814405dd13ec07a6343ee89d5492a7ed556ee68bb045d165bd578ef4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-09bdffbb-7025-43ba-8bc7-0c1b73b98426, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:18:49 compute-0 podman[305370]: 2025-10-02 12:18:49.360200391 +0000 UTC m=+0.746772324 container start 3f841ed814405dd13ec07a6343ee89d5492a7ed556ee68bb045d165bd578ef4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-09bdffbb-7025-43ba-8bc7-0c1b73b98426, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:18:49 compute-0 neutron-haproxy-ovnmeta-09bdffbb-7025-43ba-8bc7-0c1b73b98426[305386]: [NOTICE]   (305390) : New worker (305392) forked
Oct 02 12:18:49 compute-0 neutron-haproxy-ovnmeta-09bdffbb-7025-43ba-8bc7-0c1b73b98426[305386]: [NOTICE]   (305390) : Loading success.
Oct 02 12:18:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:50.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:50 compute-0 nova_compute[257802]: 2025-10-02 12:18:50.260 2 DEBUG nova.compute.manager [req-60c795ff-d433-421e-ae68-78b12bd1e2a0 req-47852a37-87ff-47f2-b049-0d99c05a7144 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Received event network-vif-plugged-29d9f254-ad8e-43f0-81ab-f104655d2f97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:50 compute-0 nova_compute[257802]: 2025-10-02 12:18:50.261 2 DEBUG oslo_concurrency.lockutils [req-60c795ff-d433-421e-ae68-78b12bd1e2a0 req-47852a37-87ff-47f2-b049-0d99c05a7144 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fab74f5b-b97e-42c7-89ac-6cc796bf5b74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:50 compute-0 nova_compute[257802]: 2025-10-02 12:18:50.261 2 DEBUG oslo_concurrency.lockutils [req-60c795ff-d433-421e-ae68-78b12bd1e2a0 req-47852a37-87ff-47f2-b049-0d99c05a7144 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fab74f5b-b97e-42c7-89ac-6cc796bf5b74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:50 compute-0 nova_compute[257802]: 2025-10-02 12:18:50.262 2 DEBUG oslo_concurrency.lockutils [req-60c795ff-d433-421e-ae68-78b12bd1e2a0 req-47852a37-87ff-47f2-b049-0d99c05a7144 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fab74f5b-b97e-42c7-89ac-6cc796bf5b74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:50 compute-0 nova_compute[257802]: 2025-10-02 12:18:50.262 2 DEBUG nova.compute.manager [req-60c795ff-d433-421e-ae68-78b12bd1e2a0 req-47852a37-87ff-47f2-b049-0d99c05a7144 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] No waiting events found dispatching network-vif-plugged-29d9f254-ad8e-43f0-81ab-f104655d2f97 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:18:50 compute-0 nova_compute[257802]: 2025-10-02 12:18:50.263 2 WARNING nova.compute.manager [req-60c795ff-d433-421e-ae68-78b12bd1e2a0 req-47852a37-87ff-47f2-b049-0d99c05a7144 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Received unexpected event network-vif-plugged-29d9f254-ad8e-43f0-81ab-f104655d2f97 for instance with vm_state active and task_state None.
Oct 02 12:18:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1625404945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:50.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:51 compute-0 ceph-mon[73607]: pgmap v1655: 305 pgs: 305 active+clean; 290 MiB data, 770 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.0 MiB/s wr, 186 op/s
Oct 02 12:18:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1656: 305 pgs: 305 active+clean; 293 MiB data, 770 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Oct 02 12:18:51 compute-0 nova_compute[257802]: 2025-10-02 12:18:51.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:51 compute-0 podman[305402]: 2025-10-02 12:18:51.904504225 +0000 UTC m=+0.045161667 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:18:51 compute-0 nova_compute[257802]: 2025-10-02 12:18:51.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:52 compute-0 nova_compute[257802]: 2025-10-02 12:18:52.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:52 compute-0 nova_compute[257802]: 2025-10-02 12:18:52.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:18:52 compute-0 nova_compute[257802]: 2025-10-02 12:18:52.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:18:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:52.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:52 compute-0 nova_compute[257802]: 2025-10-02 12:18:52.278 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-fab74f5b-b97e-42c7-89ac-6cc796bf5b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:18:52 compute-0 nova_compute[257802]: 2025-10-02 12:18:52.278 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-fab74f5b-b97e-42c7-89ac-6cc796bf5b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:18:52 compute-0 nova_compute[257802]: 2025-10-02 12:18:52.279 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:18:52 compute-0 nova_compute[257802]: 2025-10-02 12:18:52.279 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid fab74f5b-b97e-42c7-89ac-6cc796bf5b74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:18:52 compute-0 ceph-mon[73607]: pgmap v1656: 305 pgs: 305 active+clean; 293 MiB data, 770 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Oct 02 12:18:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:52.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:52 compute-0 nova_compute[257802]: 2025-10-02 12:18:52.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:52 compute-0 NetworkManager[44987]: <info>  [1759407532.8436] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/150)
Oct 02 12:18:52 compute-0 NetworkManager[44987]: <info>  [1759407532.8445] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/151)
Oct 02 12:18:52 compute-0 nova_compute[257802]: 2025-10-02 12:18:52.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:52 compute-0 ovn_controller[148183]: 2025-10-02T12:18:52Z|00328|binding|INFO|Releasing lport 80f72fa9-c415-4ae0-baae-90b592fde178 from this chassis (sb_readonly=0)
Oct 02 12:18:52 compute-0 nova_compute[257802]: 2025-10-02 12:18:52.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 305 active+clean; 271 MiB data, 777 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.7 MiB/s wr, 174 op/s
Oct 02 12:18:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/132418034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:53 compute-0 nova_compute[257802]: 2025-10-02 12:18:53.784 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Updating instance_info_cache with network_info: [{"id": "29d9f254-ad8e-43f0-81ab-f104655d2f97", "address": "fa:16:3e:6f:fc:0f", "network": {"id": "09bdffbb-7025-43ba-8bc7-0c1b73b98426", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-2093507142-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a07342ec339549a4b091ddbcfedae271", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29d9f254-ad", "ovs_interfaceid": "29d9f254-ad8e-43f0-81ab-f104655d2f97", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:18:53 compute-0 nova_compute[257802]: 2025-10-02 12:18:53.805 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-fab74f5b-b97e-42c7-89ac-6cc796bf5b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:18:53 compute-0 nova_compute[257802]: 2025-10-02 12:18:53.805 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:18:53 compute-0 nova_compute[257802]: 2025-10-02 12:18:53.806 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:53 compute-0 nova_compute[257802]: 2025-10-02 12:18:53.831 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:53 compute-0 nova_compute[257802]: 2025-10-02 12:18:53.832 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:53 compute-0 nova_compute[257802]: 2025-10-02 12:18:53.832 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:53 compute-0 nova_compute[257802]: 2025-10-02 12:18:53.832 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:18:53 compute-0 nova_compute[257802]: 2025-10-02 12:18:53.833 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:54.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:54 compute-0 nova_compute[257802]: 2025-10-02 12:18:54.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:18:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:18:54 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3629569520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:18:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:18:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:18:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0038871658640424345 of space, bias 1.0, pg target 1.1661497592127303 quantized to 32 (current 32)
Oct 02 12:18:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:18:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009892852573050437 of space, bias 1.0, pg target 0.2957962919342081 quantized to 32 (current 32)
Oct 02 12:18:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:18:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:18:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:18:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002892064489112228 of space, bias 1.0, pg target 0.8647272822445562 quantized to 32 (current 32)
Oct 02 12:18:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:18:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 12:18:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:18:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:18:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:18:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027172174530057695 quantized to 32 (current 32)
Oct 02 12:18:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:18:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 12:18:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:18:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:18:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:18:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 12:18:54 compute-0 nova_compute[257802]: 2025-10-02 12:18:54.302 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:54 compute-0 nova_compute[257802]: 2025-10-02 12:18:54.395 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:18:54 compute-0 nova_compute[257802]: 2025-10-02 12:18:54.396 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:18:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:54.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:54 compute-0 ceph-mon[73607]: pgmap v1657: 305 pgs: 305 active+clean; 271 MiB data, 777 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.7 MiB/s wr, 174 op/s
Oct 02 12:18:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3629569520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:54 compute-0 nova_compute[257802]: 2025-10-02 12:18:54.606 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:18:54 compute-0 nova_compute[257802]: 2025-10-02 12:18:54.608 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4447MB free_disk=20.906696319580078GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:18:54 compute-0 nova_compute[257802]: 2025-10-02 12:18:54.608 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:54 compute-0 nova_compute[257802]: 2025-10-02 12:18:54.609 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:54 compute-0 nova_compute[257802]: 2025-10-02 12:18:54.722 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance fab74f5b-b97e-42c7-89ac-6cc796bf5b74 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:18:54 compute-0 nova_compute[257802]: 2025-10-02 12:18:54.723 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:18:54 compute-0 nova_compute[257802]: 2025-10-02 12:18:54.723 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:18:54 compute-0 nova_compute[257802]: 2025-10-02 12:18:54.771 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing inventories for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:18:54 compute-0 nova_compute[257802]: 2025-10-02 12:18:54.890 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating ProviderTree inventory for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:18:54 compute-0 nova_compute[257802]: 2025-10-02 12:18:54.891 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating inventory in ProviderTree for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:18:54 compute-0 nova_compute[257802]: 2025-10-02 12:18:54.913 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing aggregate associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:18:54 compute-0 nova_compute[257802]: 2025-10-02 12:18:54.941 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing trait associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:18:55 compute-0 nova_compute[257802]: 2025-10-02 12:18:54.999 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:55 compute-0 nova_compute[257802]: 2025-10-02 12:18:55.032 2 DEBUG nova.compute.manager [req-10cf60dc-f7e0-49e4-b78d-e1e625395ed1 req-1dba071c-e975-4a8f-aa9d-7bf12ddc1fa8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Received event network-changed-29d9f254-ad8e-43f0-81ab-f104655d2f97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:55 compute-0 nova_compute[257802]: 2025-10-02 12:18:55.033 2 DEBUG nova.compute.manager [req-10cf60dc-f7e0-49e4-b78d-e1e625395ed1 req-1dba071c-e975-4a8f-aa9d-7bf12ddc1fa8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Refreshing instance network info cache due to event network-changed-29d9f254-ad8e-43f0-81ab-f104655d2f97. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:18:55 compute-0 nova_compute[257802]: 2025-10-02 12:18:55.033 2 DEBUG oslo_concurrency.lockutils [req-10cf60dc-f7e0-49e4-b78d-e1e625395ed1 req-1dba071c-e975-4a8f-aa9d-7bf12ddc1fa8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-fab74f5b-b97e-42c7-89ac-6cc796bf5b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:18:55 compute-0 nova_compute[257802]: 2025-10-02 12:18:55.033 2 DEBUG oslo_concurrency.lockutils [req-10cf60dc-f7e0-49e4-b78d-e1e625395ed1 req-1dba071c-e975-4a8f-aa9d-7bf12ddc1fa8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-fab74f5b-b97e-42c7-89ac-6cc796bf5b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:18:55 compute-0 nova_compute[257802]: 2025-10-02 12:18:55.034 2 DEBUG nova.network.neutron [req-10cf60dc-f7e0-49e4-b78d-e1e625395ed1 req-1dba071c-e975-4a8f-aa9d-7bf12ddc1fa8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Refreshing network info cache for port 29d9f254-ad8e-43f0-81ab-f104655d2f97 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:18:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:18:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3631690753' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:18:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:18:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3631690753' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:18:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 305 active+clean; 260 MiB data, 772 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 218 op/s
Oct 02 12:18:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:18:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/545556443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:55 compute-0 nova_compute[257802]: 2025-10-02 12:18:55.460 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:55 compute-0 nova_compute[257802]: 2025-10-02 12:18:55.466 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:18:55 compute-0 nova_compute[257802]: 2025-10-02 12:18:55.499 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:18:55 compute-0 nova_compute[257802]: 2025-10-02 12:18:55.524 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:18:55 compute-0 nova_compute[257802]: 2025-10-02 12:18:55.524 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.915s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:55 compute-0 nova_compute[257802]: 2025-10-02 12:18:55.525 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1918085884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3631690753' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:18:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3631690753' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:18:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/545556443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:55 compute-0 podman[305469]: 2025-10-02 12:18:55.923802319 +0000 UTC m=+0.061754553 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:18:55 compute-0 podman[305470]: 2025-10-02 12:18:55.923907922 +0000 UTC m=+0.061521788 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid)
Oct 02 12:18:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:56.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:56 compute-0 nova_compute[257802]: 2025-10-02 12:18:56.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:56 compute-0 nova_compute[257802]: 2025-10-02 12:18:56.521 2 DEBUG nova.network.neutron [req-10cf60dc-f7e0-49e4-b78d-e1e625395ed1 req-1dba071c-e975-4a8f-aa9d-7bf12ddc1fa8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Updated VIF entry in instance network info cache for port 29d9f254-ad8e-43f0-81ab-f104655d2f97. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:18:56 compute-0 nova_compute[257802]: 2025-10-02 12:18:56.522 2 DEBUG nova.network.neutron [req-10cf60dc-f7e0-49e4-b78d-e1e625395ed1 req-1dba071c-e975-4a8f-aa9d-7bf12ddc1fa8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Updating instance_info_cache with network_info: [{"id": "29d9f254-ad8e-43f0-81ab-f104655d2f97", "address": "fa:16:3e:6f:fc:0f", "network": {"id": "09bdffbb-7025-43ba-8bc7-0c1b73b98426", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-2093507142-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a07342ec339549a4b091ddbcfedae271", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29d9f254-ad", "ovs_interfaceid": "29d9f254-ad8e-43f0-81ab-f104655d2f97", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:18:56 compute-0 nova_compute[257802]: 2025-10-02 12:18:56.535 2 DEBUG oslo_concurrency.lockutils [req-10cf60dc-f7e0-49e4-b78d-e1e625395ed1 req-1dba071c-e975-4a8f-aa9d-7bf12ddc1fa8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-fab74f5b-b97e-42c7-89ac-6cc796bf5b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:18:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:56.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:56 compute-0 ceph-mon[73607]: pgmap v1658: 305 pgs: 305 active+clean; 260 MiB data, 772 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 218 op/s
Oct 02 12:18:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1652796999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2916340495' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:56 compute-0 nova_compute[257802]: 2025-10-02 12:18:56.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:57 compute-0 nova_compute[257802]: 2025-10-02 12:18:57.113 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:57 compute-0 nova_compute[257802]: 2025-10-02 12:18:57.113 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:18:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1659: 305 pgs: 305 active+clean; 260 MiB data, 772 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 177 op/s
Oct 02 12:18:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/844289913' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:58.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:58 compute-0 sudo[305508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:18:58 compute-0 sudo[305508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:58 compute-0 sudo[305508]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:58 compute-0 sudo[305533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:18:58 compute-0 sudo[305533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:58 compute-0 sudo[305533]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:18:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:58.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:58 compute-0 ceph-mon[73607]: pgmap v1659: 305 pgs: 305 active+clean; 260 MiB data, 772 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 177 op/s
Oct 02 12:18:59 compute-0 nova_compute[257802]: 2025-10-02 12:18:59.121 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 260 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 185 op/s
Oct 02 12:18:59 compute-0 nova_compute[257802]: 2025-10-02 12:18:59.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:59 compute-0 podman[305558]: 2025-10-02 12:18:59.981261288 +0000 UTC m=+0.105963017 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 12:19:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:00.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:19:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:00.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:19:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1661: 305 pgs: 305 active+clean; 260 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Oct 02 12:19:01 compute-0 nova_compute[257802]: 2025-10-02 12:19:01.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:01 compute-0 ceph-mon[73607]: pgmap v1660: 305 pgs: 305 active+clean; 260 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 185 op/s
Oct 02 12:19:01 compute-0 ovn_controller[148183]: 2025-10-02T12:19:01Z|00329|binding|INFO|Releasing lport 80f72fa9-c415-4ae0-baae-90b592fde178 from this chassis (sb_readonly=0)
Oct 02 12:19:01 compute-0 nova_compute[257802]: 2025-10-02 12:19:01.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:02 compute-0 nova_compute[257802]: 2025-10-02 12:19:01.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:02.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:02.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:02 compute-0 ceph-mon[73607]: pgmap v1661: 305 pgs: 305 active+clean; 260 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Oct 02 12:19:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 270 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 148 op/s
Oct 02 12:19:03 compute-0 ovn_controller[148183]: 2025-10-02T12:19:03Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6f:fc:0f 10.100.0.4
Oct 02 12:19:03 compute-0 ovn_controller[148183]: 2025-10-02T12:19:03Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6f:fc:0f 10.100.0.4
Oct 02 12:19:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4013145950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:04.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:04.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:04 compute-0 ceph-mon[73607]: pgmap v1662: 305 pgs: 305 active+clean; 270 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 148 op/s
Oct 02 12:19:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 295 MiB data, 775 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.7 MiB/s wr, 174 op/s
Oct 02 12:19:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Oct 02 12:19:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Oct 02 12:19:05 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Oct 02 12:19:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:06.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:06 compute-0 nova_compute[257802]: 2025-10-02 12:19:06.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:06.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Oct 02 12:19:07 compute-0 nova_compute[257802]: 2025-10-02 12:19:07.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:07 compute-0 ceph-mon[73607]: pgmap v1663: 305 pgs: 305 active+clean; 295 MiB data, 775 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.7 MiB/s wr, 174 op/s
Oct 02 12:19:07 compute-0 ceph-mon[73607]: osdmap e250: 3 total, 3 up, 3 in
Oct 02 12:19:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1779518239' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Oct 02 12:19:07 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Oct 02 12:19:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 295 MiB data, 775 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.7 MiB/s wr, 182 op/s
Oct 02 12:19:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Oct 02 12:19:08 compute-0 ceph-mon[73607]: osdmap e251: 3 total, 3 up, 3 in
Oct 02 12:19:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/946818258' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Oct 02 12:19:08 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Oct 02 12:19:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:08.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:08.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:09 compute-0 nova_compute[257802]: 2025-10-02 12:19:09.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 305 active+clean; 374 MiB data, 809 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 9.0 MiB/s wr, 359 op/s
Oct 02 12:19:09 compute-0 ceph-mon[73607]: pgmap v1666: 305 pgs: 305 active+clean; 295 MiB data, 775 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.7 MiB/s wr, 182 op/s
Oct 02 12:19:09 compute-0 ceph-mon[73607]: osdmap e252: 3 total, 3 up, 3 in
Oct 02 12:19:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:10.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:19:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:10.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:19:10 compute-0 ceph-mon[73607]: pgmap v1668: 305 pgs: 305 active+clean; 374 MiB data, 809 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 9.0 MiB/s wr, 359 op/s
Oct 02 12:19:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 305 active+clean; 386 MiB data, 813 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 6.5 MiB/s wr, 224 op/s
Oct 02 12:19:11 compute-0 nova_compute[257802]: 2025-10-02 12:19:11.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:12 compute-0 nova_compute[257802]: 2025-10-02 12:19:12.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:12.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:12.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:19:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:19:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:19:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:19:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:19:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:19:12 compute-0 ceph-mon[73607]: pgmap v1669: 305 pgs: 305 active+clean; 386 MiB data, 813 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 6.5 MiB/s wr, 224 op/s
Oct 02 12:19:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 386 MiB data, 813 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 5.3 MiB/s wr, 243 op/s
Oct 02 12:19:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Oct 02 12:19:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Oct 02 12:19:13 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Oct 02 12:19:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:14.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:14.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:14 compute-0 nova_compute[257802]: 2025-10-02 12:19:14.703 2 DEBUG oslo_concurrency.lockutils [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Acquiring lock "ac0f45a4-0e95-492b-be4f-14fe19840399" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:14 compute-0 nova_compute[257802]: 2025-10-02 12:19:14.703 2 DEBUG oslo_concurrency.lockutils [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:14 compute-0 nova_compute[257802]: 2025-10-02 12:19:14.749 2 DEBUG nova.compute.manager [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:19:14 compute-0 nova_compute[257802]: 2025-10-02 12:19:14.841 2 DEBUG oslo_concurrency.lockutils [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:14 compute-0 nova_compute[257802]: 2025-10-02 12:19:14.841 2 DEBUG oslo_concurrency.lockutils [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:14 compute-0 nova_compute[257802]: 2025-10-02 12:19:14.848 2 DEBUG nova.virt.hardware [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:19:14 compute-0 nova_compute[257802]: 2025-10-02 12:19:14.848 2 INFO nova.compute.claims [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:19:14 compute-0 ceph-mon[73607]: pgmap v1670: 305 pgs: 305 active+clean; 386 MiB data, 813 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 5.3 MiB/s wr, 243 op/s
Oct 02 12:19:14 compute-0 ceph-mon[73607]: osdmap e253: 3 total, 3 up, 3 in
Oct 02 12:19:15 compute-0 nova_compute[257802]: 2025-10-02 12:19:15.002 2 DEBUG oslo_concurrency.processutils [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:15 compute-0 sudo[305604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:19:15 compute-0 sudo[305604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:15 compute-0 sudo[305604]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:15 compute-0 sudo[305638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:19:15 compute-0 sudo[305638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:15 compute-0 sudo[305638]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:15 compute-0 sudo[305663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:19:15 compute-0 sudo[305663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:15 compute-0 sudo[305663]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 305 active+clean; 380 MiB data, 820 MiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 6.4 MiB/s wr, 313 op/s
Oct 02 12:19:15 compute-0 sudo[305688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 12:19:15 compute-0 sudo[305688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:19:15 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2720491723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:15 compute-0 nova_compute[257802]: 2025-10-02 12:19:15.421 2 DEBUG oslo_concurrency.processutils [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:15 compute-0 nova_compute[257802]: 2025-10-02 12:19:15.427 2 DEBUG nova.compute.provider_tree [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:19:15 compute-0 nova_compute[257802]: 2025-10-02 12:19:15.447 2 DEBUG nova.scheduler.client.report [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:19:15 compute-0 nova_compute[257802]: 2025-10-02 12:19:15.483 2 DEBUG oslo_concurrency.lockutils [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:15 compute-0 nova_compute[257802]: 2025-10-02 12:19:15.484 2 DEBUG nova.compute.manager [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:19:15 compute-0 nova_compute[257802]: 2025-10-02 12:19:15.574 2 DEBUG nova.compute.manager [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:19:15 compute-0 nova_compute[257802]: 2025-10-02 12:19:15.578 2 DEBUG nova.network.neutron [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:19:15 compute-0 nova_compute[257802]: 2025-10-02 12:19:15.605 2 INFO nova.virt.libvirt.driver [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:19:15 compute-0 nova_compute[257802]: 2025-10-02 12:19:15.642 2 DEBUG nova.compute.manager [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:19:15 compute-0 nova_compute[257802]: 2025-10-02 12:19:15.716 2 INFO nova.virt.block_device [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Booting with volume-backed-image c2d0c2bc-fe21-4689-86ae-d6728c15874c at /dev/vda
Oct 02 12:19:15 compute-0 nova_compute[257802]: 2025-10-02 12:19:15.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:16 compute-0 nova_compute[257802]: 2025-10-02 12:19:16.101 2 DEBUG nova.policy [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '69d8e29c6d3747e98a5985a584f4c814', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8efba404696b40fbbaa6431b934b87f1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:19:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:16.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2720491723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:16 compute-0 nova_compute[257802]: 2025-10-02 12:19:16.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 12:19:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:16.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:16 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:19:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 12:19:16 compute-0 podman[305784]: 2025-10-02 12:19:16.611638429 +0000 UTC m=+0.826922816 container exec 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 12:19:16 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:19:16 compute-0 podman[305784]: 2025-10-02 12:19:16.744088554 +0000 UTC m=+0.959372911 container exec_died 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 12:19:17 compute-0 nova_compute[257802]: 2025-10-02 12:19:17.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 380 MiB data, 820 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.4 MiB/s wr, 199 op/s
Oct 02 12:19:17 compute-0 podman[305922]: 2025-10-02 12:19:17.348649423 +0000 UTC m=+0.114703691 container exec 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 12:19:17 compute-0 nova_compute[257802]: 2025-10-02 12:19:17.362 2 DEBUG nova.network.neutron [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Successfully created port: fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:19:17 compute-0 ceph-mon[73607]: pgmap v1672: 305 pgs: 305 active+clean; 380 MiB data, 820 MiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 6.4 MiB/s wr, 313 op/s
Oct 02 12:19:17 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3391873375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:17 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:19:17 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:19:17 compute-0 podman[305943]: 2025-10-02 12:19:17.470023165 +0000 UTC m=+0.106614642 container exec_died 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 12:19:17 compute-0 podman[305922]: 2025-10-02 12:19:17.479978699 +0000 UTC m=+0.246032927 container exec_died 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 12:19:17 compute-0 podman[305987]: 2025-10-02 12:19:17.80248465 +0000 UTC m=+0.154822044 container exec a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, build-date=2023-02-22T09:23:20, release=1793, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=keepalived, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, architecture=x86_64, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2)
Oct 02 12:19:17 compute-0 podman[306007]: 2025-10-02 12:19:17.893064938 +0000 UTC m=+0.072277981 container exec_died a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, description=keepalived for Ceph, vcs-type=git, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, architecture=x86_64, vendor=Red Hat, Inc., name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 02 12:19:17 compute-0 podman[305987]: 2025-10-02 12:19:17.937456585 +0000 UTC m=+0.289793949 container exec_died a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, name=keepalived, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, release=1793, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, version=2.2.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Oct 02 12:19:18 compute-0 sudo[305688]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:19:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:19:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:19:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:19:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:18.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:18 compute-0 sudo[306038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:19:18 compute-0 sudo[306038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:18 compute-0 sudo[306038]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:18 compute-0 sudo[306063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:19:18 compute-0 sudo[306063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:18 compute-0 sudo[306063]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:18 compute-0 sudo[306088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:19:18 compute-0 sudo[306088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:18 compute-0 sudo[306088]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:18 compute-0 sudo[306113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:19:18 compute-0 sudo[306113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:18 compute-0 nova_compute[257802]: 2025-10-02 12:19:18.565 2 DEBUG nova.network.neutron [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Successfully updated port: fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:19:18 compute-0 nova_compute[257802]: 2025-10-02 12:19:18.584 2 DEBUG oslo_concurrency.lockutils [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Acquiring lock "refresh_cache-ac0f45a4-0e95-492b-be4f-14fe19840399" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:19:18 compute-0 nova_compute[257802]: 2025-10-02 12:19:18.585 2 DEBUG oslo_concurrency.lockutils [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Acquired lock "refresh_cache-ac0f45a4-0e95-492b-be4f-14fe19840399" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:19:18 compute-0 nova_compute[257802]: 2025-10-02 12:19:18.585 2 DEBUG nova.network.neutron [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:19:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:18.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:18 compute-0 sudo[306153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:19:18 compute-0 sudo[306153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:18 compute-0 sudo[306153]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:18 compute-0 nova_compute[257802]: 2025-10-02 12:19:18.683 2 DEBUG nova.compute.manager [req-5c22e803-3844-4ef0-902b-969bf3d500de req-e01e56f8-afbd-499c-8c92-66f383b64e14 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Received event network-changed-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:19:18 compute-0 nova_compute[257802]: 2025-10-02 12:19:18.683 2 DEBUG nova.compute.manager [req-5c22e803-3844-4ef0-902b-969bf3d500de req-e01e56f8-afbd-499c-8c92-66f383b64e14 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Refreshing instance network info cache due to event network-changed-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:19:18 compute-0 nova_compute[257802]: 2025-10-02 12:19:18.684 2 DEBUG oslo_concurrency.lockutils [req-5c22e803-3844-4ef0-902b-969bf3d500de req-e01e56f8-afbd-499c-8c92-66f383b64e14 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-ac0f45a4-0e95-492b-be4f-14fe19840399" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:19:18 compute-0 sudo[306179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:19:18 compute-0 sudo[306179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:18 compute-0 sudo[306179]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:18 compute-0 nova_compute[257802]: 2025-10-02 12:19:18.764 2 DEBUG nova.network.neutron [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:19:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:18 compute-0 sudo[306113]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:19:18 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:19:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:19:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:19:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:19:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:19:18 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 26bfd230-a84c-4656-8cd2-920d08d9634e does not exist
Oct 02 12:19:18 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 6108a34e-9e76-47c8-9cfe-6117b9219626 does not exist
Oct 02 12:19:18 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 3aba5283-ccb8-428e-b906-1a8ff6c50291 does not exist
Oct 02 12:19:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:19:18 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:19:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:19:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:19:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:19:18 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:19:19 compute-0 sudo[306220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:19:19 compute-0 sudo[306220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:19 compute-0 sudo[306220]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:19 compute-0 sudo[306245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:19:19 compute-0 sudo[306245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:19 compute-0 sudo[306245]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:19 compute-0 sudo[306270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:19:19 compute-0 sudo[306270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:19 compute-0 sudo[306270]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:19 compute-0 sudo[306295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:19:19 compute-0 sudo[306295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:19 compute-0 ceph-mon[73607]: pgmap v1673: 305 pgs: 305 active+clean; 380 MiB data, 820 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.4 MiB/s wr, 199 op/s
Oct 02 12:19:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:19:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:19:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:19:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:19:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:19:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:19:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:19:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:19:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 371 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.0 MiB/s wr, 213 op/s
Oct 02 12:19:19 compute-0 podman[306360]: 2025-10-02 12:19:19.5253093 +0000 UTC m=+0.026567522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:19:19 compute-0 nova_compute[257802]: 2025-10-02 12:19:19.682 2 DEBUG nova.network.neutron [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Updating instance_info_cache with network_info: [{"id": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "address": "fa:16:3e:26:5c:23", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcce5b3f-0d", "ovs_interfaceid": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:19:19 compute-0 nova_compute[257802]: 2025-10-02 12:19:19.715 2 DEBUG oslo_concurrency.lockutils [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Releasing lock "refresh_cache-ac0f45a4-0e95-492b-be4f-14fe19840399" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:19:19 compute-0 nova_compute[257802]: 2025-10-02 12:19:19.715 2 DEBUG nova.compute.manager [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Instance network_info: |[{"id": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "address": "fa:16:3e:26:5c:23", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcce5b3f-0d", "ovs_interfaceid": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:19:19 compute-0 nova_compute[257802]: 2025-10-02 12:19:19.715 2 DEBUG oslo_concurrency.lockutils [req-5c22e803-3844-4ef0-902b-969bf3d500de req-e01e56f8-afbd-499c-8c92-66f383b64e14 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-ac0f45a4-0e95-492b-be4f-14fe19840399" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:19:19 compute-0 nova_compute[257802]: 2025-10-02 12:19:19.716 2 DEBUG nova.network.neutron [req-5c22e803-3844-4ef0-902b-969bf3d500de req-e01e56f8-afbd-499c-8c92-66f383b64e14 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Refreshing network info cache for port fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:19:19 compute-0 podman[306360]: 2025-10-02 12:19:19.787068492 +0000 UTC m=+0.288326614 container create ad42bc7727d4acd533fc18485080d529e8e0f54d600972e5cee475630bddad1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 12:19:20 compute-0 systemd[1]: Started libpod-conmon-ad42bc7727d4acd533fc18485080d529e8e0f54d600972e5cee475630bddad1a.scope.
Oct 02 12:19:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:19:20 compute-0 podman[306360]: 2025-10-02 12:19:20.17238814 +0000 UTC m=+0.673646282 container init ad42bc7727d4acd533fc18485080d529e8e0f54d600972e5cee475630bddad1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_benz, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Oct 02 12:19:20 compute-0 podman[306360]: 2025-10-02 12:19:20.178195912 +0000 UTC m=+0.679454034 container start ad42bc7727d4acd533fc18485080d529e8e0f54d600972e5cee475630bddad1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:19:20 compute-0 eager_benz[306377]: 167 167
Oct 02 12:19:20 compute-0 systemd[1]: libpod-ad42bc7727d4acd533fc18485080d529e8e0f54d600972e5cee475630bddad1a.scope: Deactivated successfully.
Oct 02 12:19:20 compute-0 podman[306360]: 2025-10-02 12:19:20.259369681 +0000 UTC m=+0.760627803 container attach ad42bc7727d4acd533fc18485080d529e8e0f54d600972e5cee475630bddad1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_benz, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 12:19:20 compute-0 podman[306360]: 2025-10-02 12:19:20.260013597 +0000 UTC m=+0.761271719 container died ad42bc7727d4acd533fc18485080d529e8e0f54d600972e5cee475630bddad1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Oct 02 12:19:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:20.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:20 compute-0 ceph-mon[73607]: pgmap v1674: 305 pgs: 305 active+clean; 371 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.0 MiB/s wr, 213 op/s
Oct 02 12:19:20 compute-0 ovn_controller[148183]: 2025-10-02T12:19:20Z|00330|binding|INFO|Releasing lport 80f72fa9-c415-4ae0-baae-90b592fde178 from this chassis (sb_readonly=0)
Oct 02 12:19:20 compute-0 nova_compute[257802]: 2025-10-02 12:19:20.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:20.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e316f9772904382e7f06f73aa4e9b5e9b953e20b4a87c319c246844ed09a56e-merged.mount: Deactivated successfully.
Oct 02 12:19:20 compute-0 podman[306360]: 2025-10-02 12:19:20.925206311 +0000 UTC m=+1.426464433 container remove ad42bc7727d4acd533fc18485080d529e8e0f54d600972e5cee475630bddad1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_benz, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 12:19:20 compute-0 systemd[1]: libpod-conmon-ad42bc7727d4acd533fc18485080d529e8e0f54d600972e5cee475630bddad1a.scope: Deactivated successfully.
Oct 02 12:19:21 compute-0 podman[306402]: 2025-10-02 12:19:21.099611504 +0000 UTC m=+0.024800899 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:19:21 compute-0 nova_compute[257802]: 2025-10-02 12:19:21.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 305 active+clean; 372 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 190 op/s
Oct 02 12:19:21 compute-0 nova_compute[257802]: 2025-10-02 12:19:21.353 2 DEBUG nova.network.neutron [req-5c22e803-3844-4ef0-902b-969bf3d500de req-e01e56f8-afbd-499c-8c92-66f383b64e14 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Updated VIF entry in instance network info cache for port fcce5b3f-0d8b-4461-8c10-1b4d34cc8894. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:19:21 compute-0 nova_compute[257802]: 2025-10-02 12:19:21.354 2 DEBUG nova.network.neutron [req-5c22e803-3844-4ef0-902b-969bf3d500de req-e01e56f8-afbd-499c-8c92-66f383b64e14 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Updating instance_info_cache with network_info: [{"id": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "address": "fa:16:3e:26:5c:23", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcce5b3f-0d", "ovs_interfaceid": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:19:21 compute-0 nova_compute[257802]: 2025-10-02 12:19:21.380 2 DEBUG oslo_concurrency.lockutils [req-5c22e803-3844-4ef0-902b-969bf3d500de req-e01e56f8-afbd-499c-8c92-66f383b64e14 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-ac0f45a4-0e95-492b-be4f-14fe19840399" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:19:21 compute-0 nova_compute[257802]: 2025-10-02 12:19:21.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:22 compute-0 nova_compute[257802]: 2025-10-02 12:19:22.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:22.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:22 compute-0 podman[306402]: 2025-10-02 12:19:22.54751735 +0000 UTC m=+1.472706735 container create 7e0fdfa17224a10bf1242e977408272cef4268eca547b65af38855d8c717ce52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kowalevski, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct 02 12:19:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:22.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:22 compute-0 systemd[1]: Started libpod-conmon-7e0fdfa17224a10bf1242e977408272cef4268eca547b65af38855d8c717ce52.scope.
Oct 02 12:19:22 compute-0 podman[306417]: 2025-10-02 12:19:22.927513818 +0000 UTC m=+0.326115090 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible)
Oct 02 12:19:22 compute-0 ceph-mon[73607]: pgmap v1675: 305 pgs: 305 active+clean; 372 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 190 op/s
Oct 02 12:19:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:19:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038a6cd12cb71b350baeb09cd52412b842b3ec785e0f061ff49abab556ad6089/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:19:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038a6cd12cb71b350baeb09cd52412b842b3ec785e0f061ff49abab556ad6089/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:19:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038a6cd12cb71b350baeb09cd52412b842b3ec785e0f061ff49abab556ad6089/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:19:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038a6cd12cb71b350baeb09cd52412b842b3ec785e0f061ff49abab556ad6089/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:19:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038a6cd12cb71b350baeb09cd52412b842b3ec785e0f061ff49abab556ad6089/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:19:23 compute-0 podman[306402]: 2025-10-02 12:19:23.30155181 +0000 UTC m=+2.226741225 container init 7e0fdfa17224a10bf1242e977408272cef4268eca547b65af38855d8c717ce52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kowalevski, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 12:19:23 compute-0 podman[306402]: 2025-10-02 12:19:23.315602814 +0000 UTC m=+2.240792199 container start 7e0fdfa17224a10bf1242e977408272cef4268eca547b65af38855d8c717ce52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kowalevski, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 12:19:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 305 active+clean; 372 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.6 MiB/s wr, 153 op/s
Oct 02 12:19:23 compute-0 podman[306402]: 2025-10-02 12:19:23.652988068 +0000 UTC m=+2.578177443 container attach 7e0fdfa17224a10bf1242e977408272cef4268eca547b65af38855d8c717ce52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 12:19:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:24.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:24 compute-0 nova_compute[257802]: 2025-10-02 12:19:24.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:24 compute-0 laughing_kowalevski[306439]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:19:24 compute-0 laughing_kowalevski[306439]: --> relative data size: 1.0
Oct 02 12:19:24 compute-0 laughing_kowalevski[306439]: --> All data devices are unavailable
Oct 02 12:19:24 compute-0 systemd[1]: libpod-7e0fdfa17224a10bf1242e977408272cef4268eca547b65af38855d8c717ce52.scope: Deactivated successfully.
Oct 02 12:19:24 compute-0 podman[306402]: 2025-10-02 12:19:24.41932033 +0000 UTC m=+3.344509725 container died 7e0fdfa17224a10bf1242e977408272cef4268eca547b65af38855d8c717ce52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:19:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:24.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:25 compute-0 ceph-mon[73607]: pgmap v1676: 305 pgs: 305 active+clean; 372 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.6 MiB/s wr, 153 op/s
Oct 02 12:19:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-038a6cd12cb71b350baeb09cd52412b842b3ec785e0f061ff49abab556ad6089-merged.mount: Deactivated successfully.
Oct 02 12:19:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1677: 305 pgs: 305 active+clean; 372 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.2 MiB/s wr, 134 op/s
Oct 02 12:19:25 compute-0 podman[306402]: 2025-10-02 12:19:25.761797734 +0000 UTC m=+4.686987129 container remove 7e0fdfa17224a10bf1242e977408272cef4268eca547b65af38855d8c717ce52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kowalevski, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:19:25 compute-0 sudo[306295]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:25 compute-0 systemd[1]: libpod-conmon-7e0fdfa17224a10bf1242e977408272cef4268eca547b65af38855d8c717ce52.scope: Deactivated successfully.
Oct 02 12:19:25 compute-0 sudo[306471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:19:25 compute-0 sudo[306471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:25 compute-0 sudo[306471]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:25 compute-0 sudo[306496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:19:25 compute-0 sudo[306496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:25 compute-0 sudo[306496]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:26 compute-0 sudo[306533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:19:26 compute-0 podman[306521]: 2025-10-02 12:19:26.022563573 +0000 UTC m=+0.055621354 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid)
Oct 02 12:19:26 compute-0 sudo[306533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:26 compute-0 podman[306520]: 2025-10-02 12:19:26.027721168 +0000 UTC m=+0.062960863 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:19:26 compute-0 sudo[306533]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:26 compute-0 sudo[306587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:19:26 compute-0 sudo[306587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:26.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:26 compute-0 nova_compute[257802]: 2025-10-02 12:19:26.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:26 compute-0 podman[306655]: 2025-10-02 12:19:26.385343059 +0000 UTC m=+0.019738714 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:19:26 compute-0 ceph-mon[73607]: pgmap v1677: 305 pgs: 305 active+clean; 372 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.2 MiB/s wr, 134 op/s
Oct 02 12:19:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:26.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:26 compute-0 podman[306655]: 2025-10-02 12:19:26.850536514 +0000 UTC m=+0.484932139 container create 4aa6f06aa1ff5071a7145529c0c49c480b948cf8deb726e6293788ec380e4ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 12:19:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:26.936 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:26.938 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:26.939 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:27 compute-0 nova_compute[257802]: 2025-10-02 12:19:27.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:27 compute-0 nova_compute[257802]: 2025-10-02 12:19:27.022 2 DEBUG oslo_concurrency.lockutils [None req-2e142f1b-cff2-4a14-b1c2-8a101135e463 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Acquiring lock "fab74f5b-b97e-42c7-89ac-6cc796bf5b74" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:27 compute-0 nova_compute[257802]: 2025-10-02 12:19:27.023 2 DEBUG oslo_concurrency.lockutils [None req-2e142f1b-cff2-4a14-b1c2-8a101135e463 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Lock "fab74f5b-b97e-42c7-89ac-6cc796bf5b74" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:27 compute-0 nova_compute[257802]: 2025-10-02 12:19:27.023 2 DEBUG oslo_concurrency.lockutils [None req-2e142f1b-cff2-4a14-b1c2-8a101135e463 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Acquiring lock "fab74f5b-b97e-42c7-89ac-6cc796bf5b74-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:27 compute-0 nova_compute[257802]: 2025-10-02 12:19:27.024 2 DEBUG oslo_concurrency.lockutils [None req-2e142f1b-cff2-4a14-b1c2-8a101135e463 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Lock "fab74f5b-b97e-42c7-89ac-6cc796bf5b74-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:27 compute-0 nova_compute[257802]: 2025-10-02 12:19:27.024 2 DEBUG oslo_concurrency.lockutils [None req-2e142f1b-cff2-4a14-b1c2-8a101135e463 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Lock "fab74f5b-b97e-42c7-89ac-6cc796bf5b74-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:27 compute-0 nova_compute[257802]: 2025-10-02 12:19:27.026 2 INFO nova.compute.manager [None req-2e142f1b-cff2-4a14-b1c2-8a101135e463 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Terminating instance
Oct 02 12:19:27 compute-0 nova_compute[257802]: 2025-10-02 12:19:27.028 2 DEBUG nova.compute.manager [None req-2e142f1b-cff2-4a14-b1c2-8a101135e463 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:19:27 compute-0 systemd[1]: Started libpod-conmon-4aa6f06aa1ff5071a7145529c0c49c480b948cf8deb726e6293788ec380e4ed4.scope.
Oct 02 12:19:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:19:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 372 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 69 op/s
Oct 02 12:19:27 compute-0 podman[306655]: 2025-10-02 12:19:27.365388886 +0000 UTC m=+0.999784561 container init 4aa6f06aa1ff5071a7145529c0c49c480b948cf8deb726e6293788ec380e4ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_driscoll, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 12:19:27 compute-0 podman[306655]: 2025-10-02 12:19:27.374774455 +0000 UTC m=+1.009170060 container start 4aa6f06aa1ff5071a7145529c0c49c480b948cf8deb726e6293788ec380e4ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 12:19:27 compute-0 tender_driscoll[306673]: 167 167
Oct 02 12:19:27 compute-0 systemd[1]: libpod-4aa6f06aa1ff5071a7145529c0c49c480b948cf8deb726e6293788ec380e4ed4.scope: Deactivated successfully.
Oct 02 12:19:27 compute-0 podman[306655]: 2025-10-02 12:19:27.679205273 +0000 UTC m=+1.313600908 container attach 4aa6f06aa1ff5071a7145529c0c49c480b948cf8deb726e6293788ec380e4ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_driscoll, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 12:19:27 compute-0 podman[306655]: 2025-10-02 12:19:27.680049304 +0000 UTC m=+1.314444929 container died 4aa6f06aa1ff5071a7145529c0c49c480b948cf8deb726e6293788ec380e4ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 12:19:27 compute-0 kernel: tap29d9f254-ad (unregistering): left promiscuous mode
Oct 02 12:19:27 compute-0 NetworkManager[44987]: <info>  [1759407567.9904] device (tap29d9f254-ad): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:19:28 compute-0 nova_compute[257802]: 2025-10-02 12:19:28.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:28 compute-0 ovn_controller[148183]: 2025-10-02T12:19:28Z|00331|binding|INFO|Releasing lport 29d9f254-ad8e-43f0-81ab-f104655d2f97 from this chassis (sb_readonly=0)
Oct 02 12:19:28 compute-0 ovn_controller[148183]: 2025-10-02T12:19:28Z|00332|binding|INFO|Setting lport 29d9f254-ad8e-43f0-81ab-f104655d2f97 down in Southbound
Oct 02 12:19:28 compute-0 ovn_controller[148183]: 2025-10-02T12:19:28Z|00333|binding|INFO|Removing iface tap29d9f254-ad ovn-installed in OVS
Oct 02 12:19:28 compute-0 nova_compute[257802]: 2025-10-02 12:19:28.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:28.033 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:fc:0f 10.100.0.4'], port_security=['fa:16:3e:6f:fc:0f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'fab74f5b-b97e-42c7-89ac-6cc796bf5b74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09bdffbb-7025-43ba-8bc7-0c1b73b98426', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a07342ec339549a4b091ddbcfedae271', 'neutron:revision_number': '4', 'neutron:security_group_ids': '30912c5c-8f90-4a42-9ed3-ef0100e87b28', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.172'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39ac99e8-48cc-4487-b968-24fcd53497f9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=29d9f254-ad8e-43f0-81ab-f104655d2f97) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:19:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:28.034 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 29d9f254-ad8e-43f0-81ab-f104655d2f97 in datapath 09bdffbb-7025-43ba-8bc7-0c1b73b98426 unbound from our chassis
Oct 02 12:19:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:28.037 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 09bdffbb-7025-43ba-8bc7-0c1b73b98426, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:19:28 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000004e.scope: Deactivated successfully.
Oct 02 12:19:28 compute-0 nova_compute[257802]: 2025-10-02 12:19:28.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:28 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000004e.scope: Consumed 15.143s CPU time.
Oct 02 12:19:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:28.040 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4ca99de9-aef6-4b1b-bc8b-76b3e919871a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:19:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:28.041 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-09bdffbb-7025-43ba-8bc7-0c1b73b98426 namespace which is not needed anymore
Oct 02 12:19:28 compute-0 systemd-machined[211836]: Machine qemu-35-instance-0000004e terminated.
Oct 02 12:19:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:28.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:28 compute-0 nova_compute[257802]: 2025-10-02 12:19:28.287 2 INFO nova.virt.libvirt.driver [-] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Instance destroyed successfully.
Oct 02 12:19:28 compute-0 nova_compute[257802]: 2025-10-02 12:19:28.288 2 DEBUG nova.objects.instance [None req-2e142f1b-cff2-4a14-b1c2-8a101135e463 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Lazy-loading 'resources' on Instance uuid fab74f5b-b97e-42c7-89ac-6cc796bf5b74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:19:28 compute-0 nova_compute[257802]: 2025-10-02 12:19:28.319 2 DEBUG nova.virt.libvirt.vif [None req-2e142f1b-cff2-4a14-b1c2-8a101135e463 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:18:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestBootFromVolume-server-772132433',display_name='tempest-ServersTestBootFromVolume-server-772132433',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestbootfromvolume-server-772132433',id=78,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJSxeBsrnH5GThsMacmN5Gh9iZZlU/CvAh8lf0P9OkxKZVjcsvsd4aA81Qv0/kY9JJpl22sAXkmrW+qBOwAs5WJkJD3lG7aZC8qJ+6aVZ8N0Ln3qVDvI+VB/wcXZWr0e5w==',key_name='tempest-keypair-1254580889',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:18:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a07342ec339549a4b091ddbcfedae271',ramdisk_id='',reservation_id='r-lskvlc0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-ServersTestBootFromVolume-2081000680',owner_user_name='tempest-ServersTestBootFromVolume-2081000680-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:18:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0e268197f7054a098b565874c3fdd76c',uuid=fab74f5b-b97e-42c7-89ac-6cc796bf5b74,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "29d9f254-ad8e-43f0-81ab-f104655d2f97", "address": "fa:16:3e:6f:fc:0f", "network": {"id": "09bdffbb-7025-43ba-8bc7-0c1b73b98426", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-2093507142-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", 
"type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a07342ec339549a4b091ddbcfedae271", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29d9f254-ad", "ovs_interfaceid": "29d9f254-ad8e-43f0-81ab-f104655d2f97", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:19:28 compute-0 nova_compute[257802]: 2025-10-02 12:19:28.319 2 DEBUG nova.network.os_vif_util [None req-2e142f1b-cff2-4a14-b1c2-8a101135e463 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Converting VIF {"id": "29d9f254-ad8e-43f0-81ab-f104655d2f97", "address": "fa:16:3e:6f:fc:0f", "network": {"id": "09bdffbb-7025-43ba-8bc7-0c1b73b98426", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-2093507142-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a07342ec339549a4b091ddbcfedae271", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29d9f254-ad", "ovs_interfaceid": "29d9f254-ad8e-43f0-81ab-f104655d2f97", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:19:28 compute-0 nova_compute[257802]: 2025-10-02 12:19:28.321 2 DEBUG nova.network.os_vif_util [None req-2e142f1b-cff2-4a14-b1c2-8a101135e463 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6f:fc:0f,bridge_name='br-int',has_traffic_filtering=True,id=29d9f254-ad8e-43f0-81ab-f104655d2f97,network=Network(09bdffbb-7025-43ba-8bc7-0c1b73b98426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29d9f254-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:19:28 compute-0 nova_compute[257802]: 2025-10-02 12:19:28.321 2 DEBUG os_vif [None req-2e142f1b-cff2-4a14-b1c2-8a101135e463 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6f:fc:0f,bridge_name='br-int',has_traffic_filtering=True,id=29d9f254-ad8e-43f0-81ab-f104655d2f97,network=Network(09bdffbb-7025-43ba-8bc7-0c1b73b98426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29d9f254-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:19:28 compute-0 nova_compute[257802]: 2025-10-02 12:19:28.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:28 compute-0 nova_compute[257802]: 2025-10-02 12:19:28.325 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29d9f254-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:19:28 compute-0 nova_compute[257802]: 2025-10-02 12:19:28.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:28 compute-0 nova_compute[257802]: 2025-10-02 12:19:28.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:28 compute-0 nova_compute[257802]: 2025-10-02 12:19:28.335 2 INFO os_vif [None req-2e142f1b-cff2-4a14-b1c2-8a101135e463 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6f:fc:0f,bridge_name='br-int',has_traffic_filtering=True,id=29d9f254-ad8e-43f0-81ab-f104655d2f97,network=Network(09bdffbb-7025-43ba-8bc7-0c1b73b98426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29d9f254-ad')
Oct 02 12:19:28 compute-0 nova_compute[257802]: 2025-10-02 12:19:28.401 2 DEBUG nova.compute.manager [req-937060a8-bdee-4c4e-be4d-62d430c33666 req-2b2477d3-70c4-4ac0-9784-a260f8377f23 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Received event network-vif-unplugged-29d9f254-ad8e-43f0-81ab-f104655d2f97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:19:28 compute-0 nova_compute[257802]: 2025-10-02 12:19:28.402 2 DEBUG oslo_concurrency.lockutils [req-937060a8-bdee-4c4e-be4d-62d430c33666 req-2b2477d3-70c4-4ac0-9784-a260f8377f23 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fab74f5b-b97e-42c7-89ac-6cc796bf5b74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:28 compute-0 nova_compute[257802]: 2025-10-02 12:19:28.402 2 DEBUG oslo_concurrency.lockutils [req-937060a8-bdee-4c4e-be4d-62d430c33666 req-2b2477d3-70c4-4ac0-9784-a260f8377f23 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fab74f5b-b97e-42c7-89ac-6cc796bf5b74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:28 compute-0 nova_compute[257802]: 2025-10-02 12:19:28.402 2 DEBUG oslo_concurrency.lockutils [req-937060a8-bdee-4c4e-be4d-62d430c33666 req-2b2477d3-70c4-4ac0-9784-a260f8377f23 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fab74f5b-b97e-42c7-89ac-6cc796bf5b74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:28 compute-0 nova_compute[257802]: 2025-10-02 12:19:28.403 2 DEBUG nova.compute.manager [req-937060a8-bdee-4c4e-be4d-62d430c33666 req-2b2477d3-70c4-4ac0-9784-a260f8377f23 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] No waiting events found dispatching network-vif-unplugged-29d9f254-ad8e-43f0-81ab-f104655d2f97 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:19:28 compute-0 nova_compute[257802]: 2025-10-02 12:19:28.403 2 DEBUG nova.compute.manager [req-937060a8-bdee-4c4e-be4d-62d430c33666 req-2b2477d3-70c4-4ac0-9784-a260f8377f23 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Received event network-vif-unplugged-29d9f254-ad8e-43f0-81ab-f104655d2f97 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:19:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a5251cf328a0ea2d6c08acb34068515ec4ba999f1e1d961a6e4143821eaa13d-merged.mount: Deactivated successfully.
Oct 02 12:19:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:19:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:28.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:19:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:28 compute-0 nova_compute[257802]: 2025-10-02 12:19:28.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:29 compute-0 ceph-mon[73607]: pgmap v1678: 305 pgs: 305 active+clean; 372 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 69 op/s
Oct 02 12:19:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 305 active+clean; 372 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.1 MiB/s wr, 82 op/s
Oct 02 12:19:29 compute-0 podman[306655]: 2025-10-02 12:19:29.481237665 +0000 UTC m=+3.115633260 container remove 4aa6f06aa1ff5071a7145529c0c49c480b948cf8deb726e6293788ec380e4ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:19:29 compute-0 systemd[1]: libpod-conmon-4aa6f06aa1ff5071a7145529c0c49c480b948cf8deb726e6293788ec380e4ed4.scope: Deactivated successfully.
Oct 02 12:19:30 compute-0 neutron-haproxy-ovnmeta-09bdffbb-7025-43ba-8bc7-0c1b73b98426[305386]: [NOTICE]   (305390) : haproxy version is 2.8.14-c23fe91
Oct 02 12:19:30 compute-0 neutron-haproxy-ovnmeta-09bdffbb-7025-43ba-8bc7-0c1b73b98426[305386]: [NOTICE]   (305390) : path to executable is /usr/sbin/haproxy
Oct 02 12:19:30 compute-0 neutron-haproxy-ovnmeta-09bdffbb-7025-43ba-8bc7-0c1b73b98426[305386]: [WARNING]  (305390) : Exiting Master process...
Oct 02 12:19:30 compute-0 neutron-haproxy-ovnmeta-09bdffbb-7025-43ba-8bc7-0c1b73b98426[305386]: [WARNING]  (305390) : Exiting Master process...
Oct 02 12:19:30 compute-0 neutron-haproxy-ovnmeta-09bdffbb-7025-43ba-8bc7-0c1b73b98426[305386]: [ALERT]    (305390) : Current worker (305392) exited with code 143 (Terminated)
Oct 02 12:19:30 compute-0 neutron-haproxy-ovnmeta-09bdffbb-7025-43ba-8bc7-0c1b73b98426[305386]: [WARNING]  (305390) : All workers exited. Exiting... (0)
Oct 02 12:19:30 compute-0 systemd[1]: libpod-3f841ed814405dd13ec07a6343ee89d5492a7ed556ee68bb045d165bd578ef4e.scope: Deactivated successfully.
Oct 02 12:19:30 compute-0 podman[306746]: 2025-10-02 12:19:30.088772487 +0000 UTC m=+0.496197477 container died 3f841ed814405dd13ec07a6343ee89d5492a7ed556ee68bb045d165bd578ef4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-09bdffbb-7025-43ba-8bc7-0c1b73b98426, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 12:19:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:30.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:30 compute-0 nova_compute[257802]: 2025-10-02 12:19:30.517 2 DEBUG nova.compute.manager [req-1c354894-90f7-40e5-834e-21ccc2370071 req-b0d03f9a-553f-4c86-9784-0582fd414ce3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Received event network-vif-plugged-29d9f254-ad8e-43f0-81ab-f104655d2f97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:19:30 compute-0 nova_compute[257802]: 2025-10-02 12:19:30.518 2 DEBUG oslo_concurrency.lockutils [req-1c354894-90f7-40e5-834e-21ccc2370071 req-b0d03f9a-553f-4c86-9784-0582fd414ce3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fab74f5b-b97e-42c7-89ac-6cc796bf5b74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:30 compute-0 nova_compute[257802]: 2025-10-02 12:19:30.518 2 DEBUG oslo_concurrency.lockutils [req-1c354894-90f7-40e5-834e-21ccc2370071 req-b0d03f9a-553f-4c86-9784-0582fd414ce3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fab74f5b-b97e-42c7-89ac-6cc796bf5b74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:30 compute-0 nova_compute[257802]: 2025-10-02 12:19:30.518 2 DEBUG oslo_concurrency.lockutils [req-1c354894-90f7-40e5-834e-21ccc2370071 req-b0d03f9a-553f-4c86-9784-0582fd414ce3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fab74f5b-b97e-42c7-89ac-6cc796bf5b74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:30 compute-0 nova_compute[257802]: 2025-10-02 12:19:30.518 2 DEBUG nova.compute.manager [req-1c354894-90f7-40e5-834e-21ccc2370071 req-b0d03f9a-553f-4c86-9784-0582fd414ce3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] No waiting events found dispatching network-vif-plugged-29d9f254-ad8e-43f0-81ab-f104655d2f97 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:19:30 compute-0 nova_compute[257802]: 2025-10-02 12:19:30.518 2 WARNING nova.compute.manager [req-1c354894-90f7-40e5-834e-21ccc2370071 req-b0d03f9a-553f-4c86-9784-0582fd414ce3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Received unexpected event network-vif-plugged-29d9f254-ad8e-43f0-81ab-f104655d2f97 for instance with vm_state active and task_state deleting.
Oct 02 12:19:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:19:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:30.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:19:30 compute-0 ceph-mon[73607]: pgmap v1679: 305 pgs: 305 active+clean; 372 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.1 MiB/s wr, 82 op/s
Oct 02 12:19:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 372 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 19 KiB/s wr, 34 op/s
Oct 02 12:19:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3f841ed814405dd13ec07a6343ee89d5492a7ed556ee68bb045d165bd578ef4e-userdata-shm.mount: Deactivated successfully.
Oct 02 12:19:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2f52cd0fac15d3c5869e5ec3bab9f9abd6103226a1066c2ee5bd10860287e8f-merged.mount: Deactivated successfully.
Oct 02 12:19:31 compute-0 podman[306779]: 2025-10-02 12:19:31.588573024 +0000 UTC m=+1.461837629 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:19:31 compute-0 podman[306762]: 2025-10-02 12:19:31.792985661 +0000 UTC m=+2.147218638 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:19:32 compute-0 nova_compute[257802]: 2025-10-02 12:19:32.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:32 compute-0 podman[306746]: 2025-10-02 12:19:32.196464585 +0000 UTC m=+2.603889585 container cleanup 3f841ed814405dd13ec07a6343ee89d5492a7ed556ee68bb045d165bd578ef4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-09bdffbb-7025-43ba-8bc7-0c1b73b98426, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:19:32 compute-0 systemd[1]: libpod-conmon-3f841ed814405dd13ec07a6343ee89d5492a7ed556ee68bb045d165bd578ef4e.scope: Deactivated successfully.
Oct 02 12:19:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:32.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:32.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:32 compute-0 podman[306762]: 2025-10-02 12:19:32.691978653 +0000 UTC m=+3.046211580 container create b0548063612d0264daeff0e409004a0491b712f63b3feffcb7ffcfd6313f18d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kepler, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 12:19:32 compute-0 nova_compute[257802]: 2025-10-02 12:19:32.817 2 INFO nova.virt.libvirt.driver [None req-2e142f1b-cff2-4a14-b1c2-8a101135e463 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Deleting instance files /var/lib/nova/instances/fab74f5b-b97e-42c7-89ac-6cc796bf5b74_del
Oct 02 12:19:32 compute-0 nova_compute[257802]: 2025-10-02 12:19:32.817 2 INFO nova.virt.libvirt.driver [None req-2e142f1b-cff2-4a14-b1c2-8a101135e463 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Deletion of /var/lib/nova/instances/fab74f5b-b97e-42c7-89ac-6cc796bf5b74_del complete
Oct 02 12:19:33 compute-0 systemd[1]: Started libpod-conmon-b0548063612d0264daeff0e409004a0491b712f63b3feffcb7ffcfd6313f18d8.scope.
Oct 02 12:19:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:19:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6af0d913f1bb6809c9f7bad3626dd450678481d1b78227f877092fdcd77a5f3a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:19:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6af0d913f1bb6809c9f7bad3626dd450678481d1b78227f877092fdcd77a5f3a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:19:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6af0d913f1bb6809c9f7bad3626dd450678481d1b78227f877092fdcd77a5f3a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:19:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6af0d913f1bb6809c9f7bad3626dd450678481d1b78227f877092fdcd77a5f3a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:19:33 compute-0 nova_compute[257802]: 2025-10-02 12:19:33.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 372 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 14 KiB/s wr, 30 op/s
Oct 02 12:19:33 compute-0 ceph-mon[73607]: pgmap v1680: 305 pgs: 305 active+clean; 372 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 19 KiB/s wr, 34 op/s
Oct 02 12:19:33 compute-0 podman[306762]: 2025-10-02 12:19:33.410630437 +0000 UTC m=+3.764863354 container init b0548063612d0264daeff0e409004a0491b712f63b3feffcb7ffcfd6313f18d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kepler, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 12:19:33 compute-0 podman[306762]: 2025-10-02 12:19:33.423611995 +0000 UTC m=+3.777844912 container start b0548063612d0264daeff0e409004a0491b712f63b3feffcb7ffcfd6313f18d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:19:33 compute-0 podman[306762]: 2025-10-02 12:19:33.471654342 +0000 UTC m=+3.825887269 container attach b0548063612d0264daeff0e409004a0491b712f63b3feffcb7ffcfd6313f18d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Oct 02 12:19:33 compute-0 podman[306823]: 2025-10-02 12:19:33.516404558 +0000 UTC m=+1.278727634 container remove 3f841ed814405dd13ec07a6343ee89d5492a7ed556ee68bb045d165bd578ef4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-09bdffbb-7025-43ba-8bc7-0c1b73b98426, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 12:19:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:33.528 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[51a1a8bf-6704-4efa-8aab-e867e95779d0]: (4, ('Thu Oct  2 12:19:29 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-09bdffbb-7025-43ba-8bc7-0c1b73b98426 (3f841ed814405dd13ec07a6343ee89d5492a7ed556ee68bb045d165bd578ef4e)\n3f841ed814405dd13ec07a6343ee89d5492a7ed556ee68bb045d165bd578ef4e\nThu Oct  2 12:19:32 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-09bdffbb-7025-43ba-8bc7-0c1b73b98426 (3f841ed814405dd13ec07a6343ee89d5492a7ed556ee68bb045d165bd578ef4e)\n3f841ed814405dd13ec07a6343ee89d5492a7ed556ee68bb045d165bd578ef4e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:19:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:33.530 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c202d7ac-35ab-48e3-96a1-2c3e2a0936b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:19:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:33.531 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09bdffbb-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:19:33 compute-0 nova_compute[257802]: 2025-10-02 12:19:33.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:33 compute-0 kernel: tap09bdffbb-70: left promiscuous mode
Oct 02 12:19:33 compute-0 nova_compute[257802]: 2025-10-02 12:19:33.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:33.548 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5f490e33-675b-45a8-b724-6dc687886d57]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:19:33 compute-0 nova_compute[257802]: 2025-10-02 12:19:33.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:33.585 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b03b67ce-14d2-46b6-83ee-1ef01bf991d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:19:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:33.586 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[45bf9463-c5d7-4b00-8d97-412d683aec82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:19:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:33.600 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c6c02c-20fd-4555-9136-f3d5bcbeaa98]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 559557, 'reachable_time': 40535, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306851, 'error': None, 'target': 'ovnmeta-09bdffbb-7025-43ba-8bc7-0c1b73b98426', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:19:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:33.603 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-09bdffbb-7025-43ba-8bc7-0c1b73b98426 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:19:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:33.603 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[86373f4b-bbd0-4f02-9686-6cc4217110d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:19:33 compute-0 systemd[1]: run-netns-ovnmeta\x2d09bdffbb\x2d7025\x2d43ba\x2d8bc7\x2d0c1b73b98426.mount: Deactivated successfully.
Oct 02 12:19:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:33 compute-0 nova_compute[257802]: 2025-10-02 12:19:33.910 2 INFO nova.compute.manager [None req-2e142f1b-cff2-4a14-b1c2-8a101135e463 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Took 6.88 seconds to destroy the instance on the hypervisor.
Oct 02 12:19:33 compute-0 nova_compute[257802]: 2025-10-02 12:19:33.911 2 DEBUG oslo.service.loopingcall [None req-2e142f1b-cff2-4a14-b1c2-8a101135e463 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:19:33 compute-0 nova_compute[257802]: 2025-10-02 12:19:33.912 2 DEBUG nova.compute.manager [-] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:19:33 compute-0 nova_compute[257802]: 2025-10-02 12:19:33.912 2 DEBUG nova.network.neutron [-] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]: {
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]:     "1": [
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]:         {
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]:             "devices": [
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]:                 "/dev/loop3"
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]:             ],
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]:             "lv_name": "ceph_lv0",
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]:             "lv_size": "7511998464",
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]:             "name": "ceph_lv0",
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]:             "tags": {
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]:                 "ceph.cluster_name": "ceph",
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]:                 "ceph.crush_device_class": "",
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]:                 "ceph.encrypted": "0",
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]:                 "ceph.osd_id": "1",
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]:                 "ceph.type": "block",
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]:                 "ceph.vdo": "0"
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]:             },
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]:             "type": "block",
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]:             "vg_name": "ceph_vg0"
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]:         }
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]:     ]
Oct 02 12:19:34 compute-0 optimistic_kepler[306843]: }
Oct 02 12:19:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:34.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:34 compute-0 systemd[1]: libpod-b0548063612d0264daeff0e409004a0491b712f63b3feffcb7ffcfd6313f18d8.scope: Deactivated successfully.
Oct 02 12:19:34 compute-0 podman[306762]: 2025-10-02 12:19:34.300769401 +0000 UTC m=+4.655002308 container died b0548063612d0264daeff0e409004a0491b712f63b3feffcb7ffcfd6313f18d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kepler, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:19:34 compute-0 ceph-mon[73607]: pgmap v1681: 305 pgs: 305 active+clean; 372 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 14 KiB/s wr, 30 op/s
Oct 02 12:19:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:19:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:34.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:19:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-6af0d913f1bb6809c9f7bad3626dd450678481d1b78227f877092fdcd77a5f3a-merged.mount: Deactivated successfully.
Oct 02 12:19:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 305 active+clean; 372 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 36 op/s
Oct 02 12:19:35 compute-0 podman[306762]: 2025-10-02 12:19:35.38913438 +0000 UTC m=+5.743367277 container remove b0548063612d0264daeff0e409004a0491b712f63b3feffcb7ffcfd6313f18d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 12:19:35 compute-0 sudo[306587]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:35 compute-0 systemd[1]: libpod-conmon-b0548063612d0264daeff0e409004a0491b712f63b3feffcb7ffcfd6313f18d8.scope: Deactivated successfully.
Oct 02 12:19:35 compute-0 sudo[306873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:19:35 compute-0 sudo[306873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:35 compute-0 sudo[306873]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:35 compute-0 sudo[306898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:19:35 compute-0 sudo[306898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:35 compute-0 sudo[306898]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:35 compute-0 sudo[306923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:19:35 compute-0 sudo[306923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:35 compute-0 sudo[306923]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:35 compute-0 sudo[306948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:19:35 compute-0 sudo[306948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:36 compute-0 podman[307014]: 2025-10-02 12:19:36.260407823 +0000 UTC m=+0.113971213 container create ee06d5f0453f865f1ad33e702ac51f6c7b7f6edc044a05d6e5af23ebc61c755d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:19:36 compute-0 podman[307014]: 2025-10-02 12:19:36.178489146 +0000 UTC m=+0.032052586 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:19:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:36.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:36 compute-0 systemd[1]: Started libpod-conmon-ee06d5f0453f865f1ad33e702ac51f6c7b7f6edc044a05d6e5af23ebc61c755d.scope.
Oct 02 12:19:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:19:36 compute-0 podman[307014]: 2025-10-02 12:19:36.437124631 +0000 UTC m=+0.290688041 container init ee06d5f0453f865f1ad33e702ac51f6c7b7f6edc044a05d6e5af23ebc61c755d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 12:19:36 compute-0 podman[307014]: 2025-10-02 12:19:36.4460898 +0000 UTC m=+0.299653190 container start ee06d5f0453f865f1ad33e702ac51f6c7b7f6edc044a05d6e5af23ebc61c755d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_nightingale, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 12:19:36 compute-0 great_nightingale[307030]: 167 167
Oct 02 12:19:36 compute-0 systemd[1]: libpod-ee06d5f0453f865f1ad33e702ac51f6c7b7f6edc044a05d6e5af23ebc61c755d.scope: Deactivated successfully.
Oct 02 12:19:36 compute-0 podman[307014]: 2025-10-02 12:19:36.53996346 +0000 UTC m=+0.393526860 container attach ee06d5f0453f865f1ad33e702ac51f6c7b7f6edc044a05d6e5af23ebc61c755d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_nightingale, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 12:19:36 compute-0 podman[307014]: 2025-10-02 12:19:36.541020716 +0000 UTC m=+0.394584116 container died ee06d5f0453f865f1ad33e702ac51f6c7b7f6edc044a05d6e5af23ebc61c755d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_nightingale, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:19:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:36.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-91f9a9172a53402dd1ab6aeb820eefc3daf6a7f29fd98619bef77a18bfe76ae0-merged.mount: Deactivated successfully.
Oct 02 12:19:36 compute-0 ceph-mon[73607]: pgmap v1682: 305 pgs: 305 active+clean; 372 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 36 op/s
Oct 02 12:19:36 compute-0 podman[307014]: 2025-10-02 12:19:36.966295443 +0000 UTC m=+0.819858863 container remove ee06d5f0453f865f1ad33e702ac51f6c7b7f6edc044a05d6e5af23ebc61c755d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_nightingale, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:19:37 compute-0 nova_compute[257802]: 2025-10-02 12:19:37.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:37 compute-0 systemd[1]: libpod-conmon-ee06d5f0453f865f1ad33e702ac51f6c7b7f6edc044a05d6e5af23ebc61c755d.scope: Deactivated successfully.
Oct 02 12:19:37 compute-0 podman[307055]: 2025-10-02 12:19:37.131289695 +0000 UTC m=+0.023081247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:19:37 compute-0 podman[307055]: 2025-10-02 12:19:37.243503813 +0000 UTC m=+0.135295315 container create 7c5fac38c3adc644fb717707be9c1119479158a545b51b799c5133b5f64c4f22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_meninsky, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 12:19:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 305 active+clean; 372 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 13 KiB/s wr, 35 op/s
Oct 02 12:19:37 compute-0 systemd[1]: Started libpod-conmon-7c5fac38c3adc644fb717707be9c1119479158a545b51b799c5133b5f64c4f22.scope.
Oct 02 12:19:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:19:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc83e9c6c8ed5d7bc0fef15341ba083c25c3ec4ed54600fe884af75d34b49ba3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:19:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc83e9c6c8ed5d7bc0fef15341ba083c25c3ec4ed54600fe884af75d34b49ba3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:19:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc83e9c6c8ed5d7bc0fef15341ba083c25c3ec4ed54600fe884af75d34b49ba3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:19:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc83e9c6c8ed5d7bc0fef15341ba083c25c3ec4ed54600fe884af75d34b49ba3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:19:37 compute-0 nova_compute[257802]: 2025-10-02 12:19:37.404 2 DEBUG nova.network.neutron [-] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:19:37 compute-0 podman[307055]: 2025-10-02 12:19:37.418985991 +0000 UTC m=+0.310777513 container init 7c5fac38c3adc644fb717707be9c1119479158a545b51b799c5133b5f64c4f22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_meninsky, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:19:37 compute-0 podman[307055]: 2025-10-02 12:19:37.43078351 +0000 UTC m=+0.322575002 container start 7c5fac38c3adc644fb717707be9c1119479158a545b51b799c5133b5f64c4f22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_meninsky, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:19:37 compute-0 podman[307055]: 2025-10-02 12:19:37.467817977 +0000 UTC m=+0.359609589 container attach 7c5fac38c3adc644fb717707be9c1119479158a545b51b799c5133b5f64c4f22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 12:19:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:37.578 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:19:37 compute-0 nova_compute[257802]: 2025-10-02 12:19:37.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:37.580 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:19:37 compute-0 nova_compute[257802]: 2025-10-02 12:19:37.857 2 DEBUG nova.compute.manager [req-06769c5d-f30f-44d1-871f-1a3b2ecd5227 req-f6ccfe6a-8d60-48f7-b940-871f1d01cb1d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Received event network-vif-deleted-29d9f254-ad8e-43f0-81ab-f104655d2f97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:19:37 compute-0 nova_compute[257802]: 2025-10-02 12:19:37.858 2 INFO nova.compute.manager [req-06769c5d-f30f-44d1-871f-1a3b2ecd5227 req-f6ccfe6a-8d60-48f7-b940-871f1d01cb1d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Neutron deleted interface 29d9f254-ad8e-43f0-81ab-f104655d2f97; detaching it from the instance and deleting it from the info cache
Oct 02 12:19:37 compute-0 nova_compute[257802]: 2025-10-02 12:19:37.858 2 DEBUG nova.network.neutron [req-06769c5d-f30f-44d1-871f-1a3b2ecd5227 req-f6ccfe6a-8d60-48f7-b940-871f1d01cb1d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:19:38 compute-0 nova_compute[257802]: 2025-10-02 12:19:38.170 2 INFO nova.compute.manager [-] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Took 4.26 seconds to deallocate network for instance.
Oct 02 12:19:38 compute-0 priceless_meninsky[307072]: {
Oct 02 12:19:38 compute-0 priceless_meninsky[307072]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:19:38 compute-0 priceless_meninsky[307072]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:19:38 compute-0 priceless_meninsky[307072]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:19:38 compute-0 priceless_meninsky[307072]:         "osd_id": 1,
Oct 02 12:19:38 compute-0 priceless_meninsky[307072]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:19:38 compute-0 priceless_meninsky[307072]:         "type": "bluestore"
Oct 02 12:19:38 compute-0 priceless_meninsky[307072]:     }
Oct 02 12:19:38 compute-0 priceless_meninsky[307072]: }
Oct 02 12:19:38 compute-0 systemd[1]: libpod-7c5fac38c3adc644fb717707be9c1119479158a545b51b799c5133b5f64c4f22.scope: Deactivated successfully.
Oct 02 12:19:38 compute-0 podman[307055]: 2025-10-02 12:19:38.282981665 +0000 UTC m=+1.174773187 container died 7c5fac38c3adc644fb717707be9c1119479158a545b51b799c5133b5f64c4f22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:19:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:38.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:38 compute-0 nova_compute[257802]: 2025-10-02 12:19:38.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc83e9c6c8ed5d7bc0fef15341ba083c25c3ec4ed54600fe884af75d34b49ba3-merged.mount: Deactivated successfully.
Oct 02 12:19:38 compute-0 nova_compute[257802]: 2025-10-02 12:19:38.474 2 DEBUG nova.compute.manager [req-06769c5d-f30f-44d1-871f-1a3b2ecd5227 req-f6ccfe6a-8d60-48f7-b940-871f1d01cb1d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Detach interface failed, port_id=29d9f254-ad8e-43f0-81ab-f104655d2f97, reason: Instance fab74f5b-b97e-42c7-89ac-6cc796bf5b74 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:19:38 compute-0 podman[307055]: 2025-10-02 12:19:38.525042524 +0000 UTC m=+1.416834036 container remove 7c5fac38c3adc644fb717707be9c1119479158a545b51b799c5133b5f64c4f22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_meninsky, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:19:38 compute-0 systemd[1]: libpod-conmon-7c5fac38c3adc644fb717707be9c1119479158a545b51b799c5133b5f64c4f22.scope: Deactivated successfully.
Oct 02 12:19:38 compute-0 sudo[306948]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:19:38 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:19:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:19:38 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:19:38 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 22cafa75-ef24-4c59-afab-d5afcbb504d2 does not exist
Oct 02 12:19:38 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev db46bd48-5777-4689-88f2-d2424bacea03 does not exist
Oct 02 12:19:38 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 3134cf05-c0cd-4cf4-b1c6-6045360f6f0e does not exist
Oct 02 12:19:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:38.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:38 compute-0 sudo[307106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:19:38 compute-0 sudo[307106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:38 compute-0 sudo[307106]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:38 compute-0 sudo[307131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:19:38 compute-0 sudo[307131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:38 compute-0 sudo[307131]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:38 compute-0 sudo[307156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:19:38 compute-0 sudo[307156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:38 compute-0 sudo[307156]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:38 compute-0 ceph-mon[73607]: pgmap v1683: 305 pgs: 305 active+clean; 372 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 13 KiB/s wr, 35 op/s
Oct 02 12:19:38 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:19:38 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:19:38 compute-0 sudo[307181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:19:38 compute-0 sudo[307181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:38 compute-0 sudo[307181]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:38 compute-0 nova_compute[257802]: 2025-10-02 12:19:38.984 2 INFO nova.compute.manager [None req-2e142f1b-cff2-4a14-b1c2-8a101135e463 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Took 0.81 seconds to detach 1 volumes for instance.
Oct 02 12:19:38 compute-0 nova_compute[257802]: 2025-10-02 12:19:38.986 2 DEBUG nova.compute.manager [None req-2e142f1b-cff2-4a14-b1c2-8a101135e463 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Deleting volume: 6418770c-0612-4553-8861-921425b3c82e _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Oct 02 12:19:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 447 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.1 MiB/s wr, 74 op/s
Oct 02 12:19:39 compute-0 nova_compute[257802]: 2025-10-02 12:19:39.671 2 DEBUG oslo_concurrency.lockutils [None req-2e142f1b-cff2-4a14-b1c2-8a101135e463 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:39 compute-0 nova_compute[257802]: 2025-10-02 12:19:39.671 2 DEBUG oslo_concurrency.lockutils [None req-2e142f1b-cff2-4a14-b1c2-8a101135e463 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:39 compute-0 nova_compute[257802]: 2025-10-02 12:19:39.739 2 DEBUG oslo_concurrency.processutils [None req-2e142f1b-cff2-4a14-b1c2-8a101135e463 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:19:40 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/914282701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:40 compute-0 nova_compute[257802]: 2025-10-02 12:19:40.170 2 DEBUG oslo_concurrency.processutils [None req-2e142f1b-cff2-4a14-b1c2-8a101135e463 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:40 compute-0 nova_compute[257802]: 2025-10-02 12:19:40.178 2 DEBUG nova.compute.provider_tree [None req-2e142f1b-cff2-4a14-b1c2-8a101135e463 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:19:40 compute-0 nova_compute[257802]: 2025-10-02 12:19:40.231 2 DEBUG nova.scheduler.client.report [None req-2e142f1b-cff2-4a14-b1c2-8a101135e463 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:19:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.003000071s ======
Oct 02 12:19:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:40.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000071s
Oct 02 12:19:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:40.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:40 compute-0 nova_compute[257802]: 2025-10-02 12:19:40.721 2 DEBUG oslo_concurrency.lockutils [None req-2e142f1b-cff2-4a14-b1c2-8a101135e463 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:40 compute-0 ceph-mon[73607]: pgmap v1684: 305 pgs: 305 active+clean; 447 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.1 MiB/s wr, 74 op/s
Oct 02 12:19:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/914282701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3169995338' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:19:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3169995338' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:19:41 compute-0 nova_compute[257802]: 2025-10-02 12:19:41.202 2 INFO nova.scheduler.client.report [None req-2e142f1b-cff2-4a14-b1c2-8a101135e463 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Deleted allocations for instance fab74f5b-b97e-42c7-89ac-6cc796bf5b74
Oct 02 12:19:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 454 MiB data, 855 MiB used, 20 GiB / 21 GiB avail; 739 KiB/s rd, 3.5 MiB/s wr, 87 op/s
Oct 02 12:19:41 compute-0 nova_compute[257802]: 2025-10-02 12:19:41.448 2 DEBUG os_brick.utils [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:19:41 compute-0 nova_compute[257802]: 2025-10-02 12:19:41.449 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:41 compute-0 nova_compute[257802]: 2025-10-02 12:19:41.473 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:41 compute-0 nova_compute[257802]: 2025-10-02 12:19:41.473 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[99dbe62b-6f3f-4cbe-a05c-e3efdb3a82cd]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:19:41 compute-0 nova_compute[257802]: 2025-10-02 12:19:41.475 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:41 compute-0 nova_compute[257802]: 2025-10-02 12:19:41.483 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:41 compute-0 nova_compute[257802]: 2025-10-02 12:19:41.483 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[57e60f9e-661d-478f-94bc-077ae554cc4a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:19:41 compute-0 nova_compute[257802]: 2025-10-02 12:19:41.485 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:41 compute-0 nova_compute[257802]: 2025-10-02 12:19:41.495 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:41 compute-0 nova_compute[257802]: 2025-10-02 12:19:41.495 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[9062d04f-661b-4c25-b70e-948b1ce4c8f1]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:19:41 compute-0 nova_compute[257802]: 2025-10-02 12:19:41.497 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[38b4fd9f-fdba-451b-ab52-ca32b35dfebe]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:19:41 compute-0 nova_compute[257802]: 2025-10-02 12:19:41.497 2 DEBUG oslo_concurrency.processutils [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:41 compute-0 nova_compute[257802]: 2025-10-02 12:19:41.524 2 DEBUG oslo_concurrency.processutils [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:41 compute-0 nova_compute[257802]: 2025-10-02 12:19:41.526 2 DEBUG os_brick.initiator.connectors.lightos [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:19:41 compute-0 nova_compute[257802]: 2025-10-02 12:19:41.527 2 DEBUG os_brick.initiator.connectors.lightos [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:19:41 compute-0 nova_compute[257802]: 2025-10-02 12:19:41.527 2 DEBUG os_brick.initiator.connectors.lightos [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:19:41 compute-0 nova_compute[257802]: 2025-10-02 12:19:41.527 2 DEBUG os_brick.utils [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] <== get_connector_properties: return (79ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:19:41 compute-0 nova_compute[257802]: 2025-10-02 12:19:41.527 2 DEBUG nova.virt.block_device [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Updating existing volume attachment record: d9ca43b3-656f-4879-87cc-759e42fbf324 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:19:41 compute-0 nova_compute[257802]: 2025-10-02 12:19:41.691 2 DEBUG oslo_concurrency.lockutils [None req-2e142f1b-cff2-4a14-b1c2-8a101135e463 0e268197f7054a098b565874c3fdd76c a07342ec339549a4b091ddbcfedae271 - - default default] Lock "fab74f5b-b97e-42c7-89ac-6cc796bf5b74" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 14.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:41 compute-0 nova_compute[257802]: 2025-10-02 12:19:41.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:42 compute-0 nova_compute[257802]: 2025-10-02 12:19:42.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:42.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:19:42 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/606115242' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:19:42
Oct 02 12:19:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:19:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:19:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'vms', 'default.rgw.control', 'default.rgw.log', 'backups', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'images']
Oct 02 12:19:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:19:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:19:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:42.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:19:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:19:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:19:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:19:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:19:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:19:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:19:42 compute-0 ceph-mon[73607]: pgmap v1685: 305 pgs: 305 active+clean; 454 MiB data, 855 MiB used, 20 GiB / 21 GiB avail; 739 KiB/s rd, 3.5 MiB/s wr, 87 op/s
Oct 02 12:19:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/606115242' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:19:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:19:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:19:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:19:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.097 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.283 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407568.2818198, fab74f5b-b97e-42c7-89ac-6cc796bf5b74 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.284 2 INFO nova.compute.manager [-] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] VM Stopped (Lifecycle Event)
Oct 02 12:19:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:19:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:19:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:19:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:19:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 420 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 733 KiB/s rd, 3.5 MiB/s wr, 80 op/s
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.417 2 DEBUG nova.compute.manager [None req-fc4fa253-38ee-4557-8f94-04943cfe123d - - - - - -] [instance: fab74f5b-b97e-42c7-89ac-6cc796bf5b74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.684 2 DEBUG nova.compute.manager [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.686 2 DEBUG nova.virt.libvirt.driver [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.686 2 INFO nova.virt.libvirt.driver [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Creating image(s)
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.687 2 DEBUG nova.virt.libvirt.driver [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.687 2 DEBUG nova.virt.libvirt.driver [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Ensure instance console log exists: /var/lib/nova/instances/ac0f45a4-0e95-492b-be4f-14fe19840399/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.687 2 DEBUG oslo_concurrency.lockutils [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.688 2 DEBUG oslo_concurrency.lockutils [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.688 2 DEBUG oslo_concurrency.lockutils [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.690 2 DEBUG nova.virt.libvirt.driver [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Start _get_guest_xml network_info=[{"id": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "address": "fa:16:3e:26:5c:23", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcce5b3f-0d", "ovs_interfaceid": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'guest_format': None, 'attachment_id': 'd9ca43b3-656f-4879-87cc-759e42fbf324', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-a1b4f371-5f32-4b45-87d9-56f18142a917', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'a1b4f371-5f32-4b45-87d9-56f18142a917', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'ac0f45a4-0e95-492b-be4f-14fe19840399', 'attached_at': '', 'detached_at': '', 'volume_id': 'a1b4f371-5f32-4b45-87d9-56f18142a917', 'serial': 'a1b4f371-5f32-4b45-87d9-56f18142a917'}, 'device_type': 'disk', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.693 2 WARNING nova.virt.libvirt.driver [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.697 2 DEBUG nova.virt.libvirt.host [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.697 2 DEBUG nova.virt.libvirt.host [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.699 2 DEBUG nova.virt.libvirt.host [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.700 2 DEBUG nova.virt.libvirt.host [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.700 2 DEBUG nova.virt.libvirt.driver [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.701 2 DEBUG nova.virt.hardware [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.701 2 DEBUG nova.virt.hardware [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.701 2 DEBUG nova.virt.hardware [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.702 2 DEBUG nova.virt.hardware [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.702 2 DEBUG nova.virt.hardware [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.702 2 DEBUG nova.virt.hardware [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.702 2 DEBUG nova.virt.hardware [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.702 2 DEBUG nova.virt.hardware [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.703 2 DEBUG nova.virt.hardware [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.703 2 DEBUG nova.virt.hardware [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.703 2 DEBUG nova.virt.hardware [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.728 2 DEBUG nova.storage.rbd_utils [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] rbd image ac0f45a4-0e95-492b-be4f-14fe19840399_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:43 compute-0 nova_compute[257802]: 2025-10-02 12:19:43.731 2 DEBUG oslo_concurrency.processutils [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.100 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:19:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:19:44 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2768865811' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.139 2 DEBUG oslo_concurrency.processutils [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.205 2 DEBUG nova.virt.libvirt.vif [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:19:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1594332990',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1594332990',id=81,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8efba404696b40fbbaa6431b934b87f1',ramdisk_id='',reservation_id='r-5b8jgtxs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-153154373',owner_user_name='tempest-ServerBootFromVolumeStableRes
cueTest-153154373-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:19:15Z,user_data=None,user_id='69d8e29c6d3747e98a5985a584f4c814',uuid=ac0f45a4-0e95-492b-be4f-14fe19840399,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "address": "fa:16:3e:26:5c:23", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcce5b3f-0d", "ovs_interfaceid": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.206 2 DEBUG nova.network.os_vif_util [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Converting VIF {"id": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "address": "fa:16:3e:26:5c:23", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcce5b3f-0d", "ovs_interfaceid": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.207 2 DEBUG nova.network.os_vif_util [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:5c:23,bridge_name='br-int',has_traffic_filtering=True,id=fcce5b3f-0d8b-4461-8c10-1b4d34cc8894,network=Network(f1725bd8-7d9d-45cc-b992-0cd3db0e30f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcce5b3f-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.208 2 DEBUG nova.objects.instance [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lazy-loading 'pci_devices' on Instance uuid ac0f45a4-0e95-492b-be4f-14fe19840399 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.277 2 DEBUG nova.virt.libvirt.driver [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:19:44 compute-0 nova_compute[257802]:   <uuid>ac0f45a4-0e95-492b-be4f-14fe19840399</uuid>
Oct 02 12:19:44 compute-0 nova_compute[257802]:   <name>instance-00000051</name>
Oct 02 12:19:44 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:19:44 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:19:44 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-1594332990</nova:name>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:19:43</nova:creationTime>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:19:44 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:19:44 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:19:44 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:19:44 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:19:44 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:19:44 compute-0 nova_compute[257802]:         <nova:user uuid="69d8e29c6d3747e98a5985a584f4c814">tempest-ServerBootFromVolumeStableRescueTest-153154373-project-member</nova:user>
Oct 02 12:19:44 compute-0 nova_compute[257802]:         <nova:project uuid="8efba404696b40fbbaa6431b934b87f1">tempest-ServerBootFromVolumeStableRescueTest-153154373</nova:project>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:19:44 compute-0 nova_compute[257802]:         <nova:port uuid="fcce5b3f-0d8b-4461-8c10-1b4d34cc8894">
Oct 02 12:19:44 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:19:44 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:19:44 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <system>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <entry name="serial">ac0f45a4-0e95-492b-be4f-14fe19840399</entry>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <entry name="uuid">ac0f45a4-0e95-492b-be4f-14fe19840399</entry>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     </system>
Oct 02 12:19:44 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:19:44 compute-0 nova_compute[257802]:   <os>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:   </os>
Oct 02 12:19:44 compute-0 nova_compute[257802]:   <features>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:   </features>
Oct 02 12:19:44 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:19:44 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:19:44 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/ac0f45a4-0e95-492b-be4f-14fe19840399_disk.config">
Oct 02 12:19:44 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       </source>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:19:44 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <source protocol="rbd" name="volumes/volume-a1b4f371-5f32-4b45-87d9-56f18142a917">
Oct 02 12:19:44 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       </source>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:19:44 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <serial>a1b4f371-5f32-4b45-87d9-56f18142a917</serial>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:26:5c:23"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <target dev="tapfcce5b3f-0d"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/ac0f45a4-0e95-492b-be4f-14fe19840399/console.log" append="off"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <video>
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     </video>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:19:44 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:19:44 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:19:44 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:19:44 compute-0 nova_compute[257802]: </domain>
Oct 02 12:19:44 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.279 2 DEBUG nova.compute.manager [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Preparing to wait for external event network-vif-plugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.279 2 DEBUG oslo_concurrency.lockutils [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Acquiring lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.279 2 DEBUG oslo_concurrency.lockutils [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.279 2 DEBUG oslo_concurrency.lockutils [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.280 2 DEBUG nova.virt.libvirt.vif [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:19:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1594332990',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1594332990',id=81,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8efba404696b40fbbaa6431b934b87f1',ramdisk_id='',reservation_id='r-5b8jgtxs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-153154373',owner_user_name='tempest-ServerBootFromVolum
eStableRescueTest-153154373-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:19:15Z,user_data=None,user_id='69d8e29c6d3747e98a5985a584f4c814',uuid=ac0f45a4-0e95-492b-be4f-14fe19840399,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "address": "fa:16:3e:26:5c:23", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcce5b3f-0d", "ovs_interfaceid": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.280 2 DEBUG nova.network.os_vif_util [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Converting VIF {"id": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "address": "fa:16:3e:26:5c:23", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcce5b3f-0d", "ovs_interfaceid": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.280 2 DEBUG nova.network.os_vif_util [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:5c:23,bridge_name='br-int',has_traffic_filtering=True,id=fcce5b3f-0d8b-4461-8c10-1b4d34cc8894,network=Network(f1725bd8-7d9d-45cc-b992-0cd3db0e30f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcce5b3f-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.281 2 DEBUG os_vif [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:5c:23,bridge_name='br-int',has_traffic_filtering=True,id=fcce5b3f-0d8b-4461-8c10-1b4d34cc8894,network=Network(f1725bd8-7d9d-45cc-b992-0cd3db0e30f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcce5b3f-0d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.282 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.282 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.284 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfcce5b3f-0d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.285 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfcce5b3f-0d, col_values=(('external_ids', {'iface-id': 'fcce5b3f-0d8b-4461-8c10-1b4d34cc8894', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:5c:23', 'vm-uuid': 'ac0f45a4-0e95-492b-be4f-14fe19840399'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:19:44 compute-0 NetworkManager[44987]: <info>  [1759407584.2869] manager: (tapfcce5b3f-0d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/152)
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.293 2 INFO os_vif [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:5c:23,bridge_name='br-int',has_traffic_filtering=True,id=fcce5b3f-0d8b-4461-8c10-1b4d34cc8894,network=Network(f1725bd8-7d9d-45cc-b992-0cd3db0e30f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcce5b3f-0d')
Oct 02 12:19:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:44.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.412 2 DEBUG nova.virt.libvirt.driver [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.412 2 DEBUG nova.virt.libvirt.driver [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.412 2 DEBUG nova.virt.libvirt.driver [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] No VIF found with MAC fa:16:3e:26:5c:23, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.413 2 INFO nova.virt.libvirt.driver [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Using config drive
Oct 02 12:19:44 compute-0 nova_compute[257802]: 2025-10-02 12:19:44.434 2 DEBUG nova.storage.rbd_utils [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] rbd image ac0f45a4-0e95-492b-be4f-14fe19840399_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:44.582 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:19:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:44.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:44 compute-0 ceph-mon[73607]: pgmap v1686: 305 pgs: 305 active+clean; 420 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 733 KiB/s rd, 3.5 MiB/s wr, 80 op/s
Oct 02 12:19:44 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2768865811' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1687: 305 pgs: 305 active+clean; 385 MiB data, 823 MiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 3.5 MiB/s wr, 84 op/s
Oct 02 12:19:45 compute-0 nova_compute[257802]: 2025-10-02 12:19:45.573 2 INFO nova.virt.libvirt.driver [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Creating config drive at /var/lib/nova/instances/ac0f45a4-0e95-492b-be4f-14fe19840399/disk.config
Oct 02 12:19:45 compute-0 nova_compute[257802]: 2025-10-02 12:19:45.578 2 DEBUG oslo_concurrency.processutils [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ac0f45a4-0e95-492b-be4f-14fe19840399/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzuj21yp1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:45 compute-0 nova_compute[257802]: 2025-10-02 12:19:45.708 2 DEBUG oslo_concurrency.processutils [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ac0f45a4-0e95-492b-be4f-14fe19840399/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzuj21yp1" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:45 compute-0 nova_compute[257802]: 2025-10-02 12:19:45.733 2 DEBUG nova.storage.rbd_utils [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] rbd image ac0f45a4-0e95-492b-be4f-14fe19840399_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:45 compute-0 nova_compute[257802]: 2025-10-02 12:19:45.736 2 DEBUG oslo_concurrency.processutils [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ac0f45a4-0e95-492b-be4f-14fe19840399/disk.config ac0f45a4-0e95-492b-be4f-14fe19840399_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:45 compute-0 nova_compute[257802]: 2025-10-02 12:19:45.899 2 DEBUG oslo_concurrency.processutils [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ac0f45a4-0e95-492b-be4f-14fe19840399/disk.config ac0f45a4-0e95-492b-be4f-14fe19840399_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:45 compute-0 nova_compute[257802]: 2025-10-02 12:19:45.900 2 INFO nova.virt.libvirt.driver [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Deleting local config drive /var/lib/nova/instances/ac0f45a4-0e95-492b-be4f-14fe19840399/disk.config because it was imported into RBD.
Oct 02 12:19:45 compute-0 kernel: tapfcce5b3f-0d: entered promiscuous mode
Oct 02 12:19:45 compute-0 NetworkManager[44987]: <info>  [1759407585.9536] manager: (tapfcce5b3f-0d): new Tun device (/org/freedesktop/NetworkManager/Devices/153)
Oct 02 12:19:45 compute-0 ovn_controller[148183]: 2025-10-02T12:19:45Z|00334|binding|INFO|Claiming lport fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 for this chassis.
Oct 02 12:19:45 compute-0 ovn_controller[148183]: 2025-10-02T12:19:45Z|00335|binding|INFO|fcce5b3f-0d8b-4461-8c10-1b4d34cc8894: Claiming fa:16:3e:26:5c:23 10.100.0.12
Oct 02 12:19:45 compute-0 nova_compute[257802]: 2025-10-02 12:19:45.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1846814930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:45.961 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:5c:23 10.100.0.12'], port_security=['fa:16:3e:26:5c:23 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'ac0f45a4-0e95-492b-be4f-14fe19840399', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8efba404696b40fbbaa6431b934b87f1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3d4d8d91-6fd2-4ab6-a30c-6640fa44e7f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a5d3722e-d182-43fd-9a86-fa7ed68becec, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=fcce5b3f-0d8b-4461-8c10-1b4d34cc8894) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:19:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:45.962 158261 INFO neutron.agent.ovn.metadata.agent [-] Port fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 in datapath f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 bound to our chassis
Oct 02 12:19:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:45.963 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f1725bd8-7d9d-45cc-b992-0cd3db0e30f0
Oct 02 12:19:45 compute-0 ovn_controller[148183]: 2025-10-02T12:19:45Z|00336|binding|INFO|Setting lport fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 ovn-installed in OVS
Oct 02 12:19:45 compute-0 ovn_controller[148183]: 2025-10-02T12:19:45Z|00337|binding|INFO|Setting lport fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 up in Southbound
Oct 02 12:19:45 compute-0 nova_compute[257802]: 2025-10-02 12:19:45.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:45.974 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[666256ac-4559-4988-8b5f-1a1b89122cfb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:19:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:45.975 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf1725bd8-71 in ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:19:45 compute-0 nova_compute[257802]: 2025-10-02 12:19:45.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:45.978 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf1725bd8-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:19:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:45.978 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[273b0450-5a97-49a1-9783-11832864723e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:19:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:45.978 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a54a627b-8f59-4cb6-8e53-7eee26002707]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:19:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:45.990 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[f075b652-b5cc-4c04-9cfa-32f6eda3a283]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:19:45 compute-0 systemd-machined[211836]: New machine qemu-36-instance-00000051.
Oct 02 12:19:46 compute-0 systemd-udevd[307355]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:46.002 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[bab71fd0-a5b8-4e88-bdf5-15d35a366e68]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:19:46 compute-0 systemd[1]: Started Virtual Machine qemu-36-instance-00000051.
Oct 02 12:19:46 compute-0 NetworkManager[44987]: <info>  [1759407586.0168] device (tapfcce5b3f-0d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:19:46 compute-0 NetworkManager[44987]: <info>  [1759407586.0178] device (tapfcce5b3f-0d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:46.028 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[6df0a85d-7b53-4934-a16f-702bd0f7da4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:19:46 compute-0 systemd-udevd[307359]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:46.033 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[67e5609a-130b-4483-a472-3c25bde9297a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:19:46 compute-0 NetworkManager[44987]: <info>  [1759407586.0346] manager: (tapf1725bd8-70): new Veth device (/org/freedesktop/NetworkManager/Devices/154)
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:46.057 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed46df3-93ec-4fca-a447-ba9b794f5f23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:46.060 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[42876689-06a2-49dd-8cfc-41fbe3ef2096]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:19:46 compute-0 NetworkManager[44987]: <info>  [1759407586.0816] device (tapf1725bd8-70): carrier: link connected
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:46.088 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[71f5efbf-c1e2-4dd4-b24c-584c33b9bb13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:46.102 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2d1f0182-9dc5-4505-91e3-1cb35cad8871]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf1725bd8-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:76:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 98], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565370, 'reachable_time': 35486, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307385, 'error': None, 'target': 'ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:46.114 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d11755de-e3c8-4435-b221-547c0487959d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef7:76f0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565370, 'tstamp': 565370}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307386, 'error': None, 'target': 'ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:46.128 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[99b3e874-685b-4e49-8a7e-826ad6dc5c28]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf1725bd8-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:76:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 98], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565370, 'reachable_time': 35486, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307387, 'error': None, 'target': 'ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:46.152 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[099e7796-bf66-483e-9387-c855c99bf69c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:46.198 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b6fe97e1-2e65-49ca-993e-00239ed4614c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:46.199 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1725bd8-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:46.199 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:46.199 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf1725bd8-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:19:46 compute-0 nova_compute[257802]: 2025-10-02 12:19:46.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:46 compute-0 NetworkManager[44987]: <info>  [1759407586.2031] manager: (tapf1725bd8-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/155)
Oct 02 12:19:46 compute-0 kernel: tapf1725bd8-70: entered promiscuous mode
Oct 02 12:19:46 compute-0 nova_compute[257802]: 2025-10-02 12:19:46.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:46.205 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf1725bd8-70, col_values=(('external_ids', {'iface-id': '421cd6e3-75aa-44e1-b552-d119c4fcd629'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:19:46 compute-0 ovn_controller[148183]: 2025-10-02T12:19:46Z|00338|binding|INFO|Releasing lport 421cd6e3-75aa-44e1-b552-d119c4fcd629 from this chassis (sb_readonly=0)
Oct 02 12:19:46 compute-0 nova_compute[257802]: 2025-10-02 12:19:46.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:46 compute-0 nova_compute[257802]: 2025-10-02 12:19:46.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:46.210 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f1725bd8-7d9d-45cc-b992-0cd3db0e30f0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f1725bd8-7d9d-45cc-b992-0cd3db0e30f0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:46.210 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f02982c2-5361-4b42-a057-8bd28ebbd62f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:46.211 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/f1725bd8-7d9d-45cc-b992-0cd3db0e30f0.pid.haproxy
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID f1725bd8-7d9d-45cc-b992-0cd3db0e30f0
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:19:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:19:46.211 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'env', 'PROCESS_TAG=haproxy-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f1725bd8-7d9d-45cc-b992-0cd3db0e30f0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:19:46 compute-0 nova_compute[257802]: 2025-10-02 12:19:46.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:19:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:46.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:19:46 compute-0 nova_compute[257802]: 2025-10-02 12:19:46.380 2 DEBUG nova.compute.manager [req-886c5819-2f39-4474-be1d-9ff2a1a33d04 req-75f210ae-611a-4035-9dab-f3145b82f63f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Received event network-vif-plugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:19:46 compute-0 nova_compute[257802]: 2025-10-02 12:19:46.380 2 DEBUG oslo_concurrency.lockutils [req-886c5819-2f39-4474-be1d-9ff2a1a33d04 req-75f210ae-611a-4035-9dab-f3145b82f63f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:46 compute-0 nova_compute[257802]: 2025-10-02 12:19:46.381 2 DEBUG oslo_concurrency.lockutils [req-886c5819-2f39-4474-be1d-9ff2a1a33d04 req-75f210ae-611a-4035-9dab-f3145b82f63f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:46 compute-0 nova_compute[257802]: 2025-10-02 12:19:46.381 2 DEBUG oslo_concurrency.lockutils [req-886c5819-2f39-4474-be1d-9ff2a1a33d04 req-75f210ae-611a-4035-9dab-f3145b82f63f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:46 compute-0 nova_compute[257802]: 2025-10-02 12:19:46.381 2 DEBUG nova.compute.manager [req-886c5819-2f39-4474-be1d-9ff2a1a33d04 req-75f210ae-611a-4035-9dab-f3145b82f63f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Processing event network-vif-plugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:19:46 compute-0 podman[307420]: 2025-10-02 12:19:46.549620498 +0000 UTC m=+0.051652326 container create a49ce628bb0b07d1f9fff8e35a17ee2993d169fd222f11b077d71e26ef70d688 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 02 12:19:46 compute-0 systemd[1]: Started libpod-conmon-a49ce628bb0b07d1f9fff8e35a17ee2993d169fd222f11b077d71e26ef70d688.scope.
Oct 02 12:19:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:19:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3dc466208792f464efafc15dc26cd9ce9d7ae0cc55e8518fc7bf7d419d18948/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:19:46 compute-0 podman[307420]: 2025-10-02 12:19:46.520856704 +0000 UTC m=+0.022888552 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:19:46 compute-0 podman[307420]: 2025-10-02 12:19:46.631666278 +0000 UTC m=+0.133698176 container init a49ce628bb0b07d1f9fff8e35a17ee2993d169fd222f11b077d71e26ef70d688 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 02 12:19:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:19:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:46.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:19:46 compute-0 podman[307420]: 2025-10-02 12:19:46.640667948 +0000 UTC m=+0.142699776 container start a49ce628bb0b07d1f9fff8e35a17ee2993d169fd222f11b077d71e26ef70d688 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:19:46 compute-0 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[307436]: [NOTICE]   (307455) : New worker (307464) forked
Oct 02 12:19:46 compute-0 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[307436]: [NOTICE]   (307455) : Loading success.
Oct 02 12:19:46 compute-0 ceph-mon[73607]: pgmap v1687: 305 pgs: 305 active+clean; 385 MiB data, 823 MiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 3.5 MiB/s wr, 84 op/s
Oct 02 12:19:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2271156186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:47 compute-0 nova_compute[257802]: 2025-10-02 12:19:47.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:47 compute-0 nova_compute[257802]: 2025-10-02 12:19:47.213 2 DEBUG nova.compute.manager [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:19:47 compute-0 nova_compute[257802]: 2025-10-02 12:19:47.215 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407587.2131267, ac0f45a4-0e95-492b-be4f-14fe19840399 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:19:47 compute-0 nova_compute[257802]: 2025-10-02 12:19:47.215 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] VM Started (Lifecycle Event)
Oct 02 12:19:47 compute-0 nova_compute[257802]: 2025-10-02 12:19:47.220 2 DEBUG nova.virt.libvirt.driver [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:19:47 compute-0 nova_compute[257802]: 2025-10-02 12:19:47.224 2 INFO nova.virt.libvirt.driver [-] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Instance spawned successfully.
Oct 02 12:19:47 compute-0 nova_compute[257802]: 2025-10-02 12:19:47.224 2 DEBUG nova.virt.libvirt.driver [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:19:47 compute-0 nova_compute[257802]: 2025-10-02 12:19:47.250 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:19:47 compute-0 nova_compute[257802]: 2025-10-02 12:19:47.254 2 DEBUG nova.virt.libvirt.driver [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:19:47 compute-0 nova_compute[257802]: 2025-10-02 12:19:47.254 2 DEBUG nova.virt.libvirt.driver [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:19:47 compute-0 nova_compute[257802]: 2025-10-02 12:19:47.255 2 DEBUG nova.virt.libvirt.driver [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:19:47 compute-0 nova_compute[257802]: 2025-10-02 12:19:47.255 2 DEBUG nova.virt.libvirt.driver [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:19:47 compute-0 nova_compute[257802]: 2025-10-02 12:19:47.255 2 DEBUG nova.virt.libvirt.driver [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:19:47 compute-0 nova_compute[257802]: 2025-10-02 12:19:47.256 2 DEBUG nova.virt.libvirt.driver [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:19:47 compute-0 nova_compute[257802]: 2025-10-02 12:19:47.260 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:19:47 compute-0 nova_compute[257802]: 2025-10-02 12:19:47.308 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:19:47 compute-0 nova_compute[257802]: 2025-10-02 12:19:47.308 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407587.213415, ac0f45a4-0e95-492b-be4f-14fe19840399 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:19:47 compute-0 nova_compute[257802]: 2025-10-02 12:19:47.309 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] VM Paused (Lifecycle Event)
Oct 02 12:19:47 compute-0 nova_compute[257802]: 2025-10-02 12:19:47.337 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:19:47 compute-0 nova_compute[257802]: 2025-10-02 12:19:47.340 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407587.219677, ac0f45a4-0e95-492b-be4f-14fe19840399 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:19:47 compute-0 nova_compute[257802]: 2025-10-02 12:19:47.340 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] VM Resumed (Lifecycle Event)
Oct 02 12:19:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 385 MiB data, 823 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 3.5 MiB/s wr, 72 op/s
Oct 02 12:19:47 compute-0 nova_compute[257802]: 2025-10-02 12:19:47.369 2 INFO nova.compute.manager [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Took 3.68 seconds to spawn the instance on the hypervisor.
Oct 02 12:19:47 compute-0 nova_compute[257802]: 2025-10-02 12:19:47.369 2 DEBUG nova.compute.manager [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:19:47 compute-0 nova_compute[257802]: 2025-10-02 12:19:47.371 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:19:47 compute-0 nova_compute[257802]: 2025-10-02 12:19:47.376 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:19:47 compute-0 nova_compute[257802]: 2025-10-02 12:19:47.418 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:19:47 compute-0 nova_compute[257802]: 2025-10-02 12:19:47.446 2 INFO nova.compute.manager [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Took 32.65 seconds to build instance.
Oct 02 12:19:47 compute-0 nova_compute[257802]: 2025-10-02 12:19:47.461 2 DEBUG oslo_concurrency.lockutils [None req-40d5315f-a6ca-4456-ba6a-865a015fbdfb 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 32.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:48 compute-0 nova_compute[257802]: 2025-10-02 12:19:48.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:19:48 compute-0 nova_compute[257802]: 2025-10-02 12:19:48.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:19:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:48.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:48.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:48 compute-0 ceph-mon[73607]: pgmap v1688: 305 pgs: 305 active+clean; 385 MiB data, 823 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 3.5 MiB/s wr, 72 op/s
Oct 02 12:19:49 compute-0 nova_compute[257802]: 2025-10-02 12:19:49.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 305 active+clean; 385 MiB data, 813 MiB used, 20 GiB / 21 GiB avail; 854 KiB/s rd, 3.6 MiB/s wr, 104 op/s
Oct 02 12:19:49 compute-0 nova_compute[257802]: 2025-10-02 12:19:49.382 2 DEBUG nova.compute.manager [req-505b68f9-b7f6-4db2-91ca-4d21f387c143 req-507b0819-a65e-49ff-9f98-adc691697213 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Received event network-vif-plugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:19:49 compute-0 nova_compute[257802]: 2025-10-02 12:19:49.382 2 DEBUG oslo_concurrency.lockutils [req-505b68f9-b7f6-4db2-91ca-4d21f387c143 req-507b0819-a65e-49ff-9f98-adc691697213 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:49 compute-0 nova_compute[257802]: 2025-10-02 12:19:49.383 2 DEBUG oslo_concurrency.lockutils [req-505b68f9-b7f6-4db2-91ca-4d21f387c143 req-507b0819-a65e-49ff-9f98-adc691697213 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:49 compute-0 nova_compute[257802]: 2025-10-02 12:19:49.383 2 DEBUG oslo_concurrency.lockutils [req-505b68f9-b7f6-4db2-91ca-4d21f387c143 req-507b0819-a65e-49ff-9f98-adc691697213 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:49 compute-0 nova_compute[257802]: 2025-10-02 12:19:49.383 2 DEBUG nova.compute.manager [req-505b68f9-b7f6-4db2-91ca-4d21f387c143 req-507b0819-a65e-49ff-9f98-adc691697213 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] No waiting events found dispatching network-vif-plugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:19:49 compute-0 nova_compute[257802]: 2025-10-02 12:19:49.383 2 WARNING nova.compute.manager [req-505b68f9-b7f6-4db2-91ca-4d21f387c143 req-507b0819-a65e-49ff-9f98-adc691697213 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Received unexpected event network-vif-plugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 for instance with vm_state active and task_state None.
Oct 02 12:19:50 compute-0 nova_compute[257802]: 2025-10-02 12:19:50.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:19:50 compute-0 nova_compute[257802]: 2025-10-02 12:19:50.131 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:19:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:50.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:50 compute-0 nova_compute[257802]: 2025-10-02 12:19:50.373 2 INFO nova.compute.manager [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Rescuing
Oct 02 12:19:50 compute-0 nova_compute[257802]: 2025-10-02 12:19:50.374 2 DEBUG oslo_concurrency.lockutils [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Acquiring lock "refresh_cache-ac0f45a4-0e95-492b-be4f-14fe19840399" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:19:50 compute-0 nova_compute[257802]: 2025-10-02 12:19:50.374 2 DEBUG oslo_concurrency.lockutils [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Acquired lock "refresh_cache-ac0f45a4-0e95-492b-be4f-14fe19840399" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:19:50 compute-0 nova_compute[257802]: 2025-10-02 12:19:50.374 2 DEBUG nova.network.neutron [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:19:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:50.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:50 compute-0 ceph-mon[73607]: pgmap v1689: 305 pgs: 305 active+clean; 385 MiB data, 813 MiB used, 20 GiB / 21 GiB avail; 854 KiB/s rd, 3.6 MiB/s wr, 104 op/s
Oct 02 12:19:51 compute-0 ovn_controller[148183]: 2025-10-02T12:19:51Z|00339|binding|INFO|Releasing lport 421cd6e3-75aa-44e1-b552-d119c4fcd629 from this chassis (sb_readonly=0)
Oct 02 12:19:51 compute-0 nova_compute[257802]: 2025-10-02 12:19:51.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 305 active+clean; 385 MiB data, 813 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 456 KiB/s wr, 84 op/s
Oct 02 12:19:51 compute-0 nova_compute[257802]: 2025-10-02 12:19:51.487 2 DEBUG nova.network.neutron [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Updating instance_info_cache with network_info: [{"id": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "address": "fa:16:3e:26:5c:23", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcce5b3f-0d", "ovs_interfaceid": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:19:51 compute-0 nova_compute[257802]: 2025-10-02 12:19:51.506 2 DEBUG oslo_concurrency.lockutils [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Releasing lock "refresh_cache-ac0f45a4-0e95-492b-be4f-14fe19840399" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:19:51 compute-0 ovn_controller[148183]: 2025-10-02T12:19:51Z|00340|binding|INFO|Releasing lport 421cd6e3-75aa-44e1-b552-d119c4fcd629 from this chassis (sb_readonly=0)
Oct 02 12:19:51 compute-0 nova_compute[257802]: 2025-10-02 12:19:51.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:51 compute-0 nova_compute[257802]: 2025-10-02 12:19:51.829 2 DEBUG nova.virt.libvirt.driver [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:19:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3427342206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:52 compute-0 nova_compute[257802]: 2025-10-02 12:19:52.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:52.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:52.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:53 compute-0 ceph-mon[73607]: pgmap v1690: 305 pgs: 305 active+clean; 385 MiB data, 813 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 456 KiB/s wr, 84 op/s
Oct 02 12:19:53 compute-0 nova_compute[257802]: 2025-10-02 12:19:53.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:19:53 compute-0 nova_compute[257802]: 2025-10-02 12:19:53.125 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:53 compute-0 nova_compute[257802]: 2025-10-02 12:19:53.126 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:53 compute-0 nova_compute[257802]: 2025-10-02 12:19:53.126 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:53 compute-0 nova_compute[257802]: 2025-10-02 12:19:53.126 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:19:53 compute-0 nova_compute[257802]: 2025-10-02 12:19:53.126 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 415 MiB data, 813 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 985 KiB/s wr, 83 op/s
Oct 02 12:19:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:19:53 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1209983335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:53 compute-0 nova_compute[257802]: 2025-10-02 12:19:53.542 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:53 compute-0 podman[307520]: 2025-10-02 12:19:53.633678325 +0000 UTC m=+0.054643780 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 12:19:53 compute-0 nova_compute[257802]: 2025-10-02 12:19:53.648 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:19:53 compute-0 nova_compute[257802]: 2025-10-02 12:19:53.648 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:19:53 compute-0 nova_compute[257802]: 2025-10-02 12:19:53.808 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:19:53 compute-0 nova_compute[257802]: 2025-10-02 12:19:53.809 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4419MB free_disk=20.896987915039062GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:19:53 compute-0 nova_compute[257802]: 2025-10-02 12:19:53.810 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:53 compute-0 nova_compute[257802]: 2025-10-02 12:19:53.810 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:53 compute-0 nova_compute[257802]: 2025-10-02 12:19:53.882 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance ac0f45a4-0e95-492b-be4f-14fe19840399 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:19:53 compute-0 nova_compute[257802]: 2025-10-02 12:19:53.882 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:19:53 compute-0 nova_compute[257802]: 2025-10-02 12:19:53.882 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:19:53 compute-0 nova_compute[257802]: 2025-10-02 12:19:53.952 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1209983335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:19:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:19:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:19:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:19:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004878268658105342 of space, bias 1.0, pg target 1.4634805974316025 quantized to 32 (current 32)
Oct 02 12:19:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:19:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.001978752268285874 of space, bias 1.0, pg target 0.5916469282174763 quantized to 32 (current 32)
Oct 02 12:19:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:19:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:19:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:19:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003880986239065699 of space, bias 1.0, pg target 1.160414885480644 quantized to 32 (current 32)
Oct 02 12:19:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:19:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:19:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:19:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:19:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:19:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027081297692164525 quantized to 32 (current 32)
Oct 02 12:19:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:19:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:19:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:19:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:19:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:19:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:19:54 compute-0 nova_compute[257802]: 2025-10-02 12:19:54.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:54.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:19:54 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/989558787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:54 compute-0 nova_compute[257802]: 2025-10-02 12:19:54.450 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:54 compute-0 nova_compute[257802]: 2025-10-02 12:19:54.456 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:19:54 compute-0 nova_compute[257802]: 2025-10-02 12:19:54.472 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:19:54 compute-0 nova_compute[257802]: 2025-10-02 12:19:54.508 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:19:54 compute-0 nova_compute[257802]: 2025-10-02 12:19:54.508 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:54.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:19:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1506022889' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:19:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:19:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1506022889' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:19:55 compute-0 ceph-mon[73607]: pgmap v1691: 305 pgs: 305 active+clean; 415 MiB data, 813 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 985 KiB/s wr, 83 op/s
Oct 02 12:19:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/989558787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1506022889' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:19:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1506022889' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:19:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 305 active+clean; 432 MiB data, 827 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Oct 02 12:19:55 compute-0 nova_compute[257802]: 2025-10-02 12:19:55.509 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:19:55 compute-0 nova_compute[257802]: 2025-10-02 12:19:55.509 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:19:55 compute-0 nova_compute[257802]: 2025-10-02 12:19:55.510 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:19:55 compute-0 nova_compute[257802]: 2025-10-02 12:19:55.528 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-ac0f45a4-0e95-492b-be4f-14fe19840399" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:19:55 compute-0 nova_compute[257802]: 2025-10-02 12:19:55.528 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-ac0f45a4-0e95-492b-be4f-14fe19840399" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:19:55 compute-0 nova_compute[257802]: 2025-10-02 12:19:55.529 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:19:55 compute-0 nova_compute[257802]: 2025-10-02 12:19:55.529 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid ac0f45a4-0e95-492b-be4f-14fe19840399 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:19:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1336774177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:56.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:56.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:56 compute-0 podman[307564]: 2025-10-02 12:19:56.909130948 +0000 UTC m=+0.052448136 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 12:19:56 compute-0 podman[307565]: 2025-10-02 12:19:56.91288412 +0000 UTC m=+0.053720457 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:19:57 compute-0 nova_compute[257802]: 2025-10-02 12:19:57.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:57 compute-0 ceph-mon[73607]: pgmap v1692: 305 pgs: 305 active+clean; 432 MiB data, 827 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Oct 02 12:19:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2857518082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1621902664' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 305 active+clean; 432 MiB data, 827 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 02 12:19:57 compute-0 nova_compute[257802]: 2025-10-02 12:19:57.457 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Updating instance_info_cache with network_info: [{"id": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "address": "fa:16:3e:26:5c:23", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcce5b3f-0d", "ovs_interfaceid": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:19:57 compute-0 nova_compute[257802]: 2025-10-02 12:19:57.472 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-ac0f45a4-0e95-492b-be4f-14fe19840399" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:19:57 compute-0 nova_compute[257802]: 2025-10-02 12:19:57.472 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:19:58 compute-0 ceph-mon[73607]: pgmap v1693: 305 pgs: 305 active+clean; 432 MiB data, 827 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 02 12:19:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3806474939' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:19:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:58.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:19:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:19:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:58.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:58 compute-0 sudo[307603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:19:58 compute-0 sudo[307603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:58 compute-0 sudo[307603]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:59 compute-0 sudo[307628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:19:59 compute-0 sudo[307628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:59 compute-0 sudo[307628]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:59 compute-0 nova_compute[257802]: 2025-10-02 12:19:59.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:19:59 compute-0 nova_compute[257802]: 2025-10-02 12:19:59.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 305 active+clean; 432 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 106 op/s
Oct 02 12:19:59 compute-0 ovn_controller[148183]: 2025-10-02T12:19:59Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:26:5c:23 10.100.0.12
Oct 02 12:19:59 compute-0 ovn_controller[148183]: 2025-10-02T12:19:59Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:26:5c:23 10.100.0.12
Oct 02 12:20:00 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 02 12:20:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:00.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:00 compute-0 ceph-mon[73607]: pgmap v1694: 305 pgs: 305 active+clean; 432 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 106 op/s
Oct 02 12:20:00 compute-0 ceph-mon[73607]: overall HEALTH_OK
Oct 02 12:20:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:00.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 305 active+clean; 437 MiB data, 845 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.4 MiB/s wr, 87 op/s
Oct 02 12:20:01 compute-0 nova_compute[257802]: 2025-10-02 12:20:01.880 2 DEBUG nova.virt.libvirt.driver [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:20:01 compute-0 podman[307654]: 2025-10-02 12:20:01.947792512 +0000 UTC m=+0.083260391 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 12:20:02 compute-0 nova_compute[257802]: 2025-10-02 12:20:02.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:02.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:02.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:02 compute-0 ceph-mon[73607]: pgmap v1695: 305 pgs: 305 active+clean; 437 MiB data, 845 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.4 MiB/s wr, 87 op/s
Oct 02 12:20:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1696: 305 pgs: 305 active+clean; 455 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 145 op/s
Oct 02 12:20:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:04 compute-0 nova_compute[257802]: 2025-10-02 12:20:04.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:04.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:04 compute-0 kernel: tapfcce5b3f-0d (unregistering): left promiscuous mode
Oct 02 12:20:04 compute-0 NetworkManager[44987]: <info>  [1759407604.6402] device (tapfcce5b3f-0d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:20:04 compute-0 ovn_controller[148183]: 2025-10-02T12:20:04Z|00341|binding|INFO|Releasing lport fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 from this chassis (sb_readonly=0)
Oct 02 12:20:04 compute-0 ovn_controller[148183]: 2025-10-02T12:20:04Z|00342|binding|INFO|Setting lport fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 down in Southbound
Oct 02 12:20:04 compute-0 ovn_controller[148183]: 2025-10-02T12:20:04Z|00343|binding|INFO|Removing iface tapfcce5b3f-0d ovn-installed in OVS
Oct 02 12:20:04 compute-0 nova_compute[257802]: 2025-10-02 12:20:04.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:04 compute-0 nova_compute[257802]: 2025-10-02 12:20:04.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:04 compute-0 nova_compute[257802]: 2025-10-02 12:20:04.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:04.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:04 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:04.659 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:5c:23 10.100.0.12'], port_security=['fa:16:3e:26:5c:23 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'ac0f45a4-0e95-492b-be4f-14fe19840399', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8efba404696b40fbbaa6431b934b87f1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3d4d8d91-6fd2-4ab6-a30c-6640fa44e7f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a5d3722e-d182-43fd-9a86-fa7ed68becec, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=fcce5b3f-0d8b-4461-8c10-1b4d34cc8894) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:20:04 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:04.660 158261 INFO neutron.agent.ovn.metadata.agent [-] Port fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 in datapath f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 unbound from our chassis
Oct 02 12:20:04 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:04.662 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:20:04 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:04.663 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[90808055-b68d-4b05-986d-b126befcbfda]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:04 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:04.664 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 namespace which is not needed anymore
Oct 02 12:20:04 compute-0 nova_compute[257802]: 2025-10-02 12:20:04.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:04 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000051.scope: Deactivated successfully.
Oct 02 12:20:04 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000051.scope: Consumed 14.207s CPU time.
Oct 02 12:20:04 compute-0 systemd-machined[211836]: Machine qemu-36-instance-00000051 terminated.
Oct 02 12:20:04 compute-0 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[307436]: [NOTICE]   (307455) : haproxy version is 2.8.14-c23fe91
Oct 02 12:20:04 compute-0 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[307436]: [NOTICE]   (307455) : path to executable is /usr/sbin/haproxy
Oct 02 12:20:04 compute-0 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[307436]: [WARNING]  (307455) : Exiting Master process...
Oct 02 12:20:04 compute-0 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[307436]: [ALERT]    (307455) : Current worker (307464) exited with code 143 (Terminated)
Oct 02 12:20:04 compute-0 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[307436]: [WARNING]  (307455) : All workers exited. Exiting... (0)
Oct 02 12:20:04 compute-0 systemd[1]: libpod-a49ce628bb0b07d1f9fff8e35a17ee2993d169fd222f11b077d71e26ef70d688.scope: Deactivated successfully.
Oct 02 12:20:04 compute-0 conmon[307436]: conmon a49ce628bb0b07d1f9ff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a49ce628bb0b07d1f9fff8e35a17ee2993d169fd222f11b077d71e26ef70d688.scope/container/memory.events
Oct 02 12:20:04 compute-0 podman[307705]: 2025-10-02 12:20:04.80376846 +0000 UTC m=+0.044263155 container died a49ce628bb0b07d1f9fff8e35a17ee2993d169fd222f11b077d71e26ef70d688 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:20:04 compute-0 ceph-mon[73607]: pgmap v1696: 305 pgs: 305 active+clean; 455 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 145 op/s
Oct 02 12:20:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a49ce628bb0b07d1f9fff8e35a17ee2993d169fd222f11b077d71e26ef70d688-userdata-shm.mount: Deactivated successfully.
Oct 02 12:20:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3dc466208792f464efafc15dc26cd9ce9d7ae0cc55e8518fc7bf7d419d18948-merged.mount: Deactivated successfully.
Oct 02 12:20:04 compute-0 podman[307705]: 2025-10-02 12:20:04.858205814 +0000 UTC m=+0.098700509 container cleanup a49ce628bb0b07d1f9fff8e35a17ee2993d169fd222f11b077d71e26ef70d688 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:20:04 compute-0 systemd[1]: libpod-conmon-a49ce628bb0b07d1f9fff8e35a17ee2993d169fd222f11b077d71e26ef70d688.scope: Deactivated successfully.
Oct 02 12:20:04 compute-0 nova_compute[257802]: 2025-10-02 12:20:04.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:04 compute-0 nova_compute[257802]: 2025-10-02 12:20:04.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:04 compute-0 nova_compute[257802]: 2025-10-02 12:20:04.893 2 INFO nova.virt.libvirt.driver [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Instance shutdown successfully after 13 seconds.
Oct 02 12:20:04 compute-0 nova_compute[257802]: 2025-10-02 12:20:04.899 2 INFO nova.virt.libvirt.driver [-] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Instance destroyed successfully.
Oct 02 12:20:04 compute-0 nova_compute[257802]: 2025-10-02 12:20:04.899 2 DEBUG nova.objects.instance [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lazy-loading 'numa_topology' on Instance uuid ac0f45a4-0e95-492b-be4f-14fe19840399 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:04 compute-0 nova_compute[257802]: 2025-10-02 12:20:04.919 2 INFO nova.virt.libvirt.driver [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Attempting a stable device rescue
Oct 02 12:20:04 compute-0 podman[307736]: 2025-10-02 12:20:04.942671372 +0000 UTC m=+0.058304679 container remove a49ce628bb0b07d1f9fff8e35a17ee2993d169fd222f11b077d71e26ef70d688 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:20:04 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:04.947 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ef8ad05b-a7ea-4a9a-ab2f-be53bcd3389d]: (4, ('Thu Oct  2 12:20:04 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 (a49ce628bb0b07d1f9fff8e35a17ee2993d169fd222f11b077d71e26ef70d688)\na49ce628bb0b07d1f9fff8e35a17ee2993d169fd222f11b077d71e26ef70d688\nThu Oct  2 12:20:04 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 (a49ce628bb0b07d1f9fff8e35a17ee2993d169fd222f11b077d71e26ef70d688)\na49ce628bb0b07d1f9fff8e35a17ee2993d169fd222f11b077d71e26ef70d688\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:04 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:04.949 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[50657ebe-d9c0-4969-a714-ac04946770d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:04 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:04.950 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1725bd8-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:04 compute-0 nova_compute[257802]: 2025-10-02 12:20:04.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:04 compute-0 kernel: tapf1725bd8-70: left promiscuous mode
Oct 02 12:20:04 compute-0 nova_compute[257802]: 2025-10-02 12:20:04.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:04 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:04.972 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6fa0a8e5-d3a5-49eb-b7d9-3d493351cd77]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:05.011 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1e682d35-47c9-4ba8-b460-e87dcc8c9b3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:05.012 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3381e2f9-2fad-4b0e-a62d-e667846e759b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:05.030 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[945bf55b-924b-4c19-9506-ce3308f1e6e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565365, 'reachable_time': 18417, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307763, 'error': None, 'target': 'ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:05.032 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:20:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:05.032 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[7fbcf7a6-cbcf-4535-893b-a9e5138c29cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:05 compute-0 systemd[1]: run-netns-ovnmeta\x2df1725bd8\x2d7d9d\x2d45cc\x2db992\x2d0cd3db0e30f0.mount: Deactivated successfully.
Oct 02 12:20:05 compute-0 nova_compute[257802]: 2025-10-02 12:20:05.224 2 DEBUG nova.virt.libvirt.driver [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Oct 02 12:20:05 compute-0 nova_compute[257802]: 2025-10-02 12:20:05.228 2 DEBUG nova.virt.libvirt.driver [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 12:20:05 compute-0 nova_compute[257802]: 2025-10-02 12:20:05.228 2 INFO nova.virt.libvirt.driver [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Creating image(s)
Oct 02 12:20:05 compute-0 nova_compute[257802]: 2025-10-02 12:20:05.257 2 DEBUG nova.storage.rbd_utils [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] rbd image ac0f45a4-0e95-492b-be4f-14fe19840399_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:05 compute-0 nova_compute[257802]: 2025-10-02 12:20:05.261 2 DEBUG nova.objects.instance [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid ac0f45a4-0e95-492b-be4f-14fe19840399 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:05 compute-0 nova_compute[257802]: 2025-10-02 12:20:05.298 2 DEBUG nova.storage.rbd_utils [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] rbd image ac0f45a4-0e95-492b-be4f-14fe19840399_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:05 compute-0 nova_compute[257802]: 2025-10-02 12:20:05.325 2 DEBUG nova.storage.rbd_utils [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] rbd image ac0f45a4-0e95-492b-be4f-14fe19840399_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:05 compute-0 nova_compute[257802]: 2025-10-02 12:20:05.329 2 DEBUG oslo_concurrency.lockutils [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Acquiring lock "30bcbc6b5296f8c345f9f917fe970fe88eb5d586" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:05 compute-0 nova_compute[257802]: 2025-10-02 12:20:05.329 2 DEBUG oslo_concurrency.lockutils [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "30bcbc6b5296f8c345f9f917fe970fe88eb5d586" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 305 active+clean; 465 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.0 MiB/s wr, 168 op/s
Oct 02 12:20:05 compute-0 nova_compute[257802]: 2025-10-02 12:20:05.587 2 DEBUG nova.virt.libvirt.imagebackend [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Image locations are: [{'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/8aa41919-15c5-43d9-ac12-d18997b6c8f0/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/8aa41919-15c5-43d9-ac12-d18997b6c8f0/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 02 12:20:05 compute-0 nova_compute[257802]: 2025-10-02 12:20:05.642 2 DEBUG nova.virt.libvirt.imagebackend [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Selected location: {'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/8aa41919-15c5-43d9-ac12-d18997b6c8f0/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 02 12:20:05 compute-0 nova_compute[257802]: 2025-10-02 12:20:05.642 2 DEBUG nova.storage.rbd_utils [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] cloning images/8aa41919-15c5-43d9-ac12-d18997b6c8f0@snap to None/ac0f45a4-0e95-492b-be4f-14fe19840399_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.156 2 DEBUG oslo_concurrency.lockutils [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "30bcbc6b5296f8c345f9f917fe970fe88eb5d586" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.193 2 DEBUG nova.objects.instance [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lazy-loading 'migration_context' on Instance uuid ac0f45a4-0e95-492b-be4f-14fe19840399 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.222 2 DEBUG nova.virt.libvirt.driver [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.225 2 DEBUG nova.virt.libvirt.driver [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Start _get_guest_xml network_info=[{"id": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "address": "fa:16:3e:26:5c:23", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "vif_mac": "fa:16:3e:26:5c:23"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcce5b3f-0d", "ovs_interfaceid": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '8aa41919-15c5-43d9-ac12-d18997b6c8f0', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'guest_format': None, 'attachment_id': 'd9ca43b3-656f-4879-87cc-759e42fbf324', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-a1b4f371-5f32-4b45-87d9-56f18142a917', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'a1b4f371-5f32-4b45-87d9-56f18142a917', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'ac0f45a4-0e95-492b-be4f-14fe19840399', 'attached_at': '', 'detached_at': '', 'volume_id': 'a1b4f371-5f32-4b45-87d9-56f18142a917', 'serial': 'a1b4f371-5f32-4b45-87d9-56f18142a917'}, 'device_type': 'disk', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.225 2 DEBUG nova.objects.instance [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lazy-loading 'resources' on Instance uuid ac0f45a4-0e95-492b-be4f-14fe19840399 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.242 2 WARNING nova.virt.libvirt.driver [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.248 2 DEBUG nova.virt.libvirt.host [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.249 2 DEBUG nova.virt.libvirt.host [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.251 2 DEBUG nova.virt.libvirt.host [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.252 2 DEBUG nova.virt.libvirt.host [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.253 2 DEBUG nova.virt.libvirt.driver [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.253 2 DEBUG nova.virt.hardware [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.254 2 DEBUG nova.virt.hardware [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.254 2 DEBUG nova.virt.hardware [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.254 2 DEBUG nova.virt.hardware [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.254 2 DEBUG nova.virt.hardware [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.255 2 DEBUG nova.virt.hardware [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.255 2 DEBUG nova.virt.hardware [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.255 2 DEBUG nova.virt.hardware [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.255 2 DEBUG nova.virt.hardware [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.255 2 DEBUG nova.virt.hardware [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.256 2 DEBUG nova.virt.hardware [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.256 2 DEBUG nova.objects.instance [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid ac0f45a4-0e95-492b-be4f-14fe19840399 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:20:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:06.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.314 2 DEBUG oslo_concurrency.processutils [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.417 2 DEBUG nova.compute.manager [req-4d3bbfdb-d458-45a8-8280-31355d74d7bf req-526340e5-333e-429f-8b7d-fff693ffd40c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Received event network-vif-unplugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.417 2 DEBUG oslo_concurrency.lockutils [req-4d3bbfdb-d458-45a8-8280-31355d74d7bf req-526340e5-333e-429f-8b7d-fff693ffd40c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.419 2 DEBUG oslo_concurrency.lockutils [req-4d3bbfdb-d458-45a8-8280-31355d74d7bf req-526340e5-333e-429f-8b7d-fff693ffd40c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.419 2 DEBUG oslo_concurrency.lockutils [req-4d3bbfdb-d458-45a8-8280-31355d74d7bf req-526340e5-333e-429f-8b7d-fff693ffd40c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.419 2 DEBUG nova.compute.manager [req-4d3bbfdb-d458-45a8-8280-31355d74d7bf req-526340e5-333e-429f-8b7d-fff693ffd40c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] No waiting events found dispatching network-vif-unplugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.421 2 WARNING nova.compute.manager [req-4d3bbfdb-d458-45a8-8280-31355d74d7bf req-526340e5-333e-429f-8b7d-fff693ffd40c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Received unexpected event network-vif-unplugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 for instance with vm_state active and task_state rescuing.
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.421 2 DEBUG nova.compute.manager [req-4d3bbfdb-d458-45a8-8280-31355d74d7bf req-526340e5-333e-429f-8b7d-fff693ffd40c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Received event network-vif-plugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.421 2 DEBUG oslo_concurrency.lockutils [req-4d3bbfdb-d458-45a8-8280-31355d74d7bf req-526340e5-333e-429f-8b7d-fff693ffd40c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.422 2 DEBUG oslo_concurrency.lockutils [req-4d3bbfdb-d458-45a8-8280-31355d74d7bf req-526340e5-333e-429f-8b7d-fff693ffd40c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.422 2 DEBUG oslo_concurrency.lockutils [req-4d3bbfdb-d458-45a8-8280-31355d74d7bf req-526340e5-333e-429f-8b7d-fff693ffd40c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.423 2 DEBUG nova.compute.manager [req-4d3bbfdb-d458-45a8-8280-31355d74d7bf req-526340e5-333e-429f-8b7d-fff693ffd40c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] No waiting events found dispatching network-vif-plugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.423 2 WARNING nova.compute.manager [req-4d3bbfdb-d458-45a8-8280-31355d74d7bf req-526340e5-333e-429f-8b7d-fff693ffd40c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Received unexpected event network-vif-plugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 for instance with vm_state active and task_state rescuing.
Oct 02 12:20:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:20:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:06.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:20:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:20:06 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4019987123' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.726 2 DEBUG oslo_concurrency.processutils [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:06 compute-0 nova_compute[257802]: 2025-10-02 12:20:06.751 2 DEBUG oslo_concurrency.processutils [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:07 compute-0 ceph-mon[73607]: pgmap v1697: 305 pgs: 305 active+clean; 465 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.0 MiB/s wr, 168 op/s
Oct 02 12:20:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4019987123' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:07 compute-0 nova_compute[257802]: 2025-10-02 12:20:07.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:20:07 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3856428545' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:07 compute-0 nova_compute[257802]: 2025-10-02 12:20:07.172 2 DEBUG oslo_concurrency.processutils [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:07 compute-0 nova_compute[257802]: 2025-10-02 12:20:07.173 2 DEBUG nova.virt.libvirt.vif [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:19:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1594332990',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1594332990',id=81,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:19:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8efba404696b40fbbaa6431b934b87f1',ramdisk_id='',reservation_id='r-5b8jgtxs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min
_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-153154373',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-153154373-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:19:47Z,user_data=None,user_id='69d8e29c6d3747e98a5985a584f4c814',uuid=ac0f45a4-0e95-492b-be4f-14fe19840399,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "address": "fa:16:3e:26:5c:23", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "vif_mac": "fa:16:3e:26:5c:23"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcce5b3f-0d", "ovs_interfaceid": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:20:07 compute-0 nova_compute[257802]: 2025-10-02 12:20:07.174 2 DEBUG nova.network.os_vif_util [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Converting VIF {"id": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "address": "fa:16:3e:26:5c:23", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "vif_mac": "fa:16:3e:26:5c:23"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcce5b3f-0d", "ovs_interfaceid": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:20:07 compute-0 nova_compute[257802]: 2025-10-02 12:20:07.175 2 DEBUG nova.network.os_vif_util [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:5c:23,bridge_name='br-int',has_traffic_filtering=True,id=fcce5b3f-0d8b-4461-8c10-1b4d34cc8894,network=Network(f1725bd8-7d9d-45cc-b992-0cd3db0e30f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcce5b3f-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:20:07 compute-0 nova_compute[257802]: 2025-10-02 12:20:07.176 2 DEBUG nova.objects.instance [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lazy-loading 'pci_devices' on Instance uuid ac0f45a4-0e95-492b-be4f-14fe19840399 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:07 compute-0 nova_compute[257802]: 2025-10-02 12:20:07.199 2 DEBUG nova.virt.libvirt.driver [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:20:07 compute-0 nova_compute[257802]:   <uuid>ac0f45a4-0e95-492b-be4f-14fe19840399</uuid>
Oct 02 12:20:07 compute-0 nova_compute[257802]:   <name>instance-00000051</name>
Oct 02 12:20:07 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:20:07 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:20:07 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-1594332990</nova:name>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:20:06</nova:creationTime>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:20:07 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:20:07 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:20:07 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:20:07 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:20:07 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:20:07 compute-0 nova_compute[257802]:         <nova:user uuid="69d8e29c6d3747e98a5985a584f4c814">tempest-ServerBootFromVolumeStableRescueTest-153154373-project-member</nova:user>
Oct 02 12:20:07 compute-0 nova_compute[257802]:         <nova:project uuid="8efba404696b40fbbaa6431b934b87f1">tempest-ServerBootFromVolumeStableRescueTest-153154373</nova:project>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:20:07 compute-0 nova_compute[257802]:         <nova:port uuid="fcce5b3f-0d8b-4461-8c10-1b4d34cc8894">
Oct 02 12:20:07 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:20:07 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:20:07 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <system>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <entry name="serial">ac0f45a4-0e95-492b-be4f-14fe19840399</entry>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <entry name="uuid">ac0f45a4-0e95-492b-be4f-14fe19840399</entry>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     </system>
Oct 02 12:20:07 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:20:07 compute-0 nova_compute[257802]:   <os>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:   </os>
Oct 02 12:20:07 compute-0 nova_compute[257802]:   <features>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:   </features>
Oct 02 12:20:07 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:20:07 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:20:07 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/ac0f45a4-0e95-492b-be4f-14fe19840399_disk.config">
Oct 02 12:20:07 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       </source>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:20:07 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <source protocol="rbd" name="volumes/volume-a1b4f371-5f32-4b45-87d9-56f18142a917">
Oct 02 12:20:07 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       </source>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:20:07 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <serial>a1b4f371-5f32-4b45-87d9-56f18142a917</serial>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/ac0f45a4-0e95-492b-be4f-14fe19840399_disk.rescue">
Oct 02 12:20:07 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       </source>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:20:07 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <target dev="vdb" bus="virtio"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <boot order="1"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:26:5c:23"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <target dev="tapfcce5b3f-0d"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/ac0f45a4-0e95-492b-be4f-14fe19840399/console.log" append="off"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <video>
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     </video>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:20:07 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:20:07 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:20:07 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:20:07 compute-0 nova_compute[257802]: </domain>
Oct 02 12:20:07 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:20:07 compute-0 nova_compute[257802]: 2025-10-02 12:20:07.205 2 INFO nova.virt.libvirt.driver [-] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Instance destroyed successfully.
Oct 02 12:20:07 compute-0 nova_compute[257802]: 2025-10-02 12:20:07.250 2 DEBUG nova.virt.libvirt.driver [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:20:07 compute-0 nova_compute[257802]: 2025-10-02 12:20:07.250 2 DEBUG nova.virt.libvirt.driver [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:20:07 compute-0 nova_compute[257802]: 2025-10-02 12:20:07.250 2 DEBUG nova.virt.libvirt.driver [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:20:07 compute-0 nova_compute[257802]: 2025-10-02 12:20:07.250 2 DEBUG nova.virt.libvirt.driver [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] No VIF found with MAC fa:16:3e:26:5c:23, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:20:07 compute-0 nova_compute[257802]: 2025-10-02 12:20:07.251 2 INFO nova.virt.libvirt.driver [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Using config drive
Oct 02 12:20:07 compute-0 nova_compute[257802]: 2025-10-02 12:20:07.270 2 DEBUG nova.storage.rbd_utils [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] rbd image ac0f45a4-0e95-492b-be4f-14fe19840399_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:07 compute-0 nova_compute[257802]: 2025-10-02 12:20:07.292 2 DEBUG nova.objects.instance [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lazy-loading 'ec2_ids' on Instance uuid ac0f45a4-0e95-492b-be4f-14fe19840399 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:07 compute-0 nova_compute[257802]: 2025-10-02 12:20:07.335 2 DEBUG nova.objects.instance [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lazy-loading 'keypairs' on Instance uuid ac0f45a4-0e95-492b-be4f-14fe19840399 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 305 active+clean; 465 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 143 op/s
Oct 02 12:20:07 compute-0 nova_compute[257802]: 2025-10-02 12:20:07.640 2 INFO nova.virt.libvirt.driver [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Creating config drive at /var/lib/nova/instances/ac0f45a4-0e95-492b-be4f-14fe19840399/disk.config.rescue
Oct 02 12:20:07 compute-0 nova_compute[257802]: 2025-10-02 12:20:07.645 2 DEBUG oslo_concurrency.processutils [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ac0f45a4-0e95-492b-be4f-14fe19840399/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4o76moi9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:07 compute-0 nova_compute[257802]: 2025-10-02 12:20:07.775 2 DEBUG oslo_concurrency.processutils [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ac0f45a4-0e95-492b-be4f-14fe19840399/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4o76moi9" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:07 compute-0 nova_compute[257802]: 2025-10-02 12:20:07.806 2 DEBUG nova.storage.rbd_utils [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] rbd image ac0f45a4-0e95-492b-be4f-14fe19840399_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:07 compute-0 nova_compute[257802]: 2025-10-02 12:20:07.810 2 DEBUG oslo_concurrency.processutils [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ac0f45a4-0e95-492b-be4f-14fe19840399/disk.config.rescue ac0f45a4-0e95-492b-be4f-14fe19840399_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3856428545' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:08 compute-0 nova_compute[257802]: 2025-10-02 12:20:08.246 2 DEBUG oslo_concurrency.processutils [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ac0f45a4-0e95-492b-be4f-14fe19840399/disk.config.rescue ac0f45a4-0e95-492b-be4f-14fe19840399_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:08 compute-0 nova_compute[257802]: 2025-10-02 12:20:08.247 2 INFO nova.virt.libvirt.driver [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Deleting local config drive /var/lib/nova/instances/ac0f45a4-0e95-492b-be4f-14fe19840399/disk.config.rescue because it was imported into RBD.
Oct 02 12:20:08 compute-0 kernel: tapfcce5b3f-0d: entered promiscuous mode
Oct 02 12:20:08 compute-0 NetworkManager[44987]: <info>  [1759407608.3006] manager: (tapfcce5b3f-0d): new Tun device (/org/freedesktop/NetworkManager/Devices/156)
Oct 02 12:20:08 compute-0 ovn_controller[148183]: 2025-10-02T12:20:08Z|00344|binding|INFO|Claiming lport fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 for this chassis.
Oct 02 12:20:08 compute-0 ovn_controller[148183]: 2025-10-02T12:20:08Z|00345|binding|INFO|fcce5b3f-0d8b-4461-8c10-1b4d34cc8894: Claiming fa:16:3e:26:5c:23 10.100.0.12
Oct 02 12:20:08 compute-0 nova_compute[257802]: 2025-10-02 12:20:08.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:08.310 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:5c:23 10.100.0.12'], port_security=['fa:16:3e:26:5c:23 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'ac0f45a4-0e95-492b-be4f-14fe19840399', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8efba404696b40fbbaa6431b934b87f1', 'neutron:revision_number': '5', 'neutron:security_group_ids': '3d4d8d91-6fd2-4ab6-a30c-6640fa44e7f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a5d3722e-d182-43fd-9a86-fa7ed68becec, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=fcce5b3f-0d8b-4461-8c10-1b4d34cc8894) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:20:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:08.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:08.312 158261 INFO neutron.agent.ovn.metadata.agent [-] Port fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 in datapath f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 bound to our chassis
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:08.314 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f1725bd8-7d9d-45cc-b992-0cd3db0e30f0
Oct 02 12:20:08 compute-0 ovn_controller[148183]: 2025-10-02T12:20:08Z|00346|binding|INFO|Setting lport fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 ovn-installed in OVS
Oct 02 12:20:08 compute-0 ovn_controller[148183]: 2025-10-02T12:20:08Z|00347|binding|INFO|Setting lport fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 up in Southbound
Oct 02 12:20:08 compute-0 nova_compute[257802]: 2025-10-02 12:20:08.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:08.325 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f97bdf-0ada-4946-a997-eb1c563d4f81]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:08.326 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf1725bd8-71 in ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:08.328 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf1725bd8-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:08.328 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b51f9aed-9b4f-4185-a9b4-7a6915371219]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:08.329 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[33ee49b6-7dae-47cf-98b4-1e489781b50a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:08 compute-0 systemd-udevd[308042]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:20:08 compute-0 systemd-machined[211836]: New machine qemu-37-instance-00000051.
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:08.344 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[7294ee0d-70ab-4eef-8d85-b312f33d781a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:08 compute-0 NetworkManager[44987]: <info>  [1759407608.3587] device (tapfcce5b3f-0d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:20:08 compute-0 NetworkManager[44987]: <info>  [1759407608.3597] device (tapfcce5b3f-0d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:20:08 compute-0 systemd[1]: Started Virtual Machine qemu-37-instance-00000051.
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:08.364 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7998cf9d-4cd9-4131-9cb4-0a0db6e48757]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:08.391 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[58a7aa80-f794-4bd3-9b62-357492e5ed7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:08.396 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4e7dc041-a613-4721-93c9-c45a293c757f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:08 compute-0 systemd-udevd[308045]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:20:08 compute-0 NetworkManager[44987]: <info>  [1759407608.3975] manager: (tapf1725bd8-70): new Veth device (/org/freedesktop/NetworkManager/Devices/157)
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:08.430 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[47609d1e-e62e-4a91-8350-11a870f83c2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:08.433 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[f490e3a9-ec79-4b70-a409-1800c23a0c17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:08 compute-0 NetworkManager[44987]: <info>  [1759407608.4519] device (tapf1725bd8-70): carrier: link connected
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:08.458 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[b8591eb9-ce4b-4ba3-8143-7671530ad748]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:08.474 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2b89d9dc-4188-49be-95bf-925564408197]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf1725bd8-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:76:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 101], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 567607, 'reachable_time': 43947, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308074, 'error': None, 'target': 'ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:08.490 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4a2d4757-647c-44aa-aa07-0912c28d576b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef7:76f0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 567607, 'tstamp': 567607}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308075, 'error': None, 'target': 'ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:08.508 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5245a901-f760-492b-8309-ad274bc35590]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf1725bd8-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:76:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 101], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 567607, 'reachable_time': 43947, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308076, 'error': None, 'target': 'ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:08.536 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e641b1ec-eed9-4821-b35a-c459a1b315b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:08.592 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[01200a02-aa15-41a2-bb8e-29e7f8ba9a08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:08.594 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1725bd8-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:08.594 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:08.595 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf1725bd8-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:08 compute-0 nova_compute[257802]: 2025-10-02 12:20:08.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:08 compute-0 NetworkManager[44987]: <info>  [1759407608.5974] manager: (tapf1725bd8-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/158)
Oct 02 12:20:08 compute-0 kernel: tapf1725bd8-70: entered promiscuous mode
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:08.601 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf1725bd8-70, col_values=(('external_ids', {'iface-id': '421cd6e3-75aa-44e1-b552-d119c4fcd629'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:08 compute-0 ovn_controller[148183]: 2025-10-02T12:20:08Z|00348|binding|INFO|Releasing lport 421cd6e3-75aa-44e1-b552-d119c4fcd629 from this chassis (sb_readonly=0)
Oct 02 12:20:08 compute-0 nova_compute[257802]: 2025-10-02 12:20:08.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:08.605 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f1725bd8-7d9d-45cc-b992-0cd3db0e30f0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f1725bd8-7d9d-45cc-b992-0cd3db0e30f0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:08.606 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c47d36-42b1-41ac-8f77-cfb4ae39f82d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:08.606 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/f1725bd8-7d9d-45cc-b992-0cd3db0e30f0.pid.haproxy
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID f1725bd8-7d9d-45cc-b992-0cd3db0e30f0
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:20:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:08.607 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'env', 'PROCESS_TAG=haproxy-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f1725bd8-7d9d-45cc-b992-0cd3db0e30f0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:20:08 compute-0 nova_compute[257802]: 2025-10-02 12:20:08.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:08.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:09 compute-0 podman[308164]: 2025-10-02 12:20:09.00211812 +0000 UTC m=+0.085588718 container create b92d76f2a5159a8f2dcac65493085e36d83c896e55e66b441479027d4ce54fb2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:20:09 compute-0 podman[308164]: 2025-10-02 12:20:08.946372864 +0000 UTC m=+0.029843512 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:20:09 compute-0 systemd[1]: Started libpod-conmon-b92d76f2a5159a8f2dcac65493085e36d83c896e55e66b441479027d4ce54fb2.scope.
Oct 02 12:20:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:20:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fef89f4f4fbf02ef3c52f8b27eba1c813605cb8f0f7b2f58a395dc62c42f7705/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:09 compute-0 podman[308164]: 2025-10-02 12:20:09.121162836 +0000 UTC m=+0.204633444 container init b92d76f2a5159a8f2dcac65493085e36d83c896e55e66b441479027d4ce54fb2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 12:20:09 compute-0 podman[308164]: 2025-10-02 12:20:09.12784742 +0000 UTC m=+0.211318018 container start b92d76f2a5159a8f2dcac65493085e36d83c896e55e66b441479027d4ce54fb2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:20:09 compute-0 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[308183]: [NOTICE]   (308187) : New worker (308189) forked
Oct 02 12:20:09 compute-0 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[308183]: [NOTICE]   (308187) : Loading success.
Oct 02 12:20:09 compute-0 ceph-mon[73607]: pgmap v1698: 305 pgs: 305 active+clean; 465 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 143 op/s
Oct 02 12:20:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1183189716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 305 active+clean; 465 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 152 op/s
Oct 02 12:20:09 compute-0 nova_compute[257802]: 2025-10-02 12:20:09.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:09 compute-0 nova_compute[257802]: 2025-10-02 12:20:09.485 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Removed pending event for ac0f45a4-0e95-492b-be4f-14fe19840399 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:20:09 compute-0 nova_compute[257802]: 2025-10-02 12:20:09.486 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407609.4852293, ac0f45a4-0e95-492b-be4f-14fe19840399 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:20:09 compute-0 nova_compute[257802]: 2025-10-02 12:20:09.486 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] VM Resumed (Lifecycle Event)
Oct 02 12:20:09 compute-0 nova_compute[257802]: 2025-10-02 12:20:09.491 2 DEBUG nova.compute.manager [None req-f8326bfb-5b5b-4064-9a12-5db416904d46 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:09 compute-0 nova_compute[257802]: 2025-10-02 12:20:09.523 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:09 compute-0 nova_compute[257802]: 2025-10-02 12:20:09.525 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:20:09 compute-0 nova_compute[257802]: 2025-10-02 12:20:09.553 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] During sync_power_state the instance has a pending task (rescuing). Skip.
Oct 02 12:20:09 compute-0 nova_compute[257802]: 2025-10-02 12:20:09.554 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407609.485364, ac0f45a4-0e95-492b-be4f-14fe19840399 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:20:09 compute-0 nova_compute[257802]: 2025-10-02 12:20:09.554 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] VM Started (Lifecycle Event)
Oct 02 12:20:09 compute-0 nova_compute[257802]: 2025-10-02 12:20:09.575 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:09 compute-0 nova_compute[257802]: 2025-10-02 12:20:09.578 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:20:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:20:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:10.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:20:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3938907317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:10 compute-0 ceph-mon[73607]: pgmap v1699: 305 pgs: 305 active+clean; 465 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 152 op/s
Oct 02 12:20:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:10.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:11 compute-0 nova_compute[257802]: 2025-10-02 12:20:11.034 2 DEBUG nova.compute.manager [req-f28d13de-5460-4533-96a3-ebabc748e717 req-a6780e57-91e0-4f9d-a147-c185eedebaaa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Received event network-vif-plugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:11 compute-0 nova_compute[257802]: 2025-10-02 12:20:11.034 2 DEBUG oslo_concurrency.lockutils [req-f28d13de-5460-4533-96a3-ebabc748e717 req-a6780e57-91e0-4f9d-a147-c185eedebaaa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:11 compute-0 nova_compute[257802]: 2025-10-02 12:20:11.034 2 DEBUG oslo_concurrency.lockutils [req-f28d13de-5460-4533-96a3-ebabc748e717 req-a6780e57-91e0-4f9d-a147-c185eedebaaa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:11 compute-0 nova_compute[257802]: 2025-10-02 12:20:11.035 2 DEBUG oslo_concurrency.lockutils [req-f28d13de-5460-4533-96a3-ebabc748e717 req-a6780e57-91e0-4f9d-a147-c185eedebaaa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:11 compute-0 nova_compute[257802]: 2025-10-02 12:20:11.035 2 DEBUG nova.compute.manager [req-f28d13de-5460-4533-96a3-ebabc748e717 req-a6780e57-91e0-4f9d-a147-c185eedebaaa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] No waiting events found dispatching network-vif-plugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:20:11 compute-0 nova_compute[257802]: 2025-10-02 12:20:11.035 2 WARNING nova.compute.manager [req-f28d13de-5460-4533-96a3-ebabc748e717 req-a6780e57-91e0-4f9d-a147-c185eedebaaa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Received unexpected event network-vif-plugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 for instance with vm_state rescued and task_state None.
Oct 02 12:20:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 305 active+clean; 472 MiB data, 866 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 184 op/s
Oct 02 12:20:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1847149126' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:12 compute-0 nova_compute[257802]: 2025-10-02 12:20:12.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:12.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:20:12 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3907116082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:12 compute-0 nova_compute[257802]: 2025-10-02 12:20:12.656 2 INFO nova.compute.manager [None req-43b26788-d341-4e6e-9bdc-cd8913af1324 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Unrescuing
Oct 02 12:20:12 compute-0 nova_compute[257802]: 2025-10-02 12:20:12.657 2 DEBUG oslo_concurrency.lockutils [None req-43b26788-d341-4e6e-9bdc-cd8913af1324 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Acquiring lock "refresh_cache-ac0f45a4-0e95-492b-be4f-14fe19840399" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:20:12 compute-0 nova_compute[257802]: 2025-10-02 12:20:12.657 2 DEBUG oslo_concurrency.lockutils [None req-43b26788-d341-4e6e-9bdc-cd8913af1324 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Acquired lock "refresh_cache-ac0f45a4-0e95-492b-be4f-14fe19840399" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:20:12 compute-0 nova_compute[257802]: 2025-10-02 12:20:12.657 2 DEBUG nova.network.neutron [None req-43b26788-d341-4e6e-9bdc-cd8913af1324 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:20:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:12.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:20:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:20:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:20:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:20:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:20:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:20:13 compute-0 ceph-mon[73607]: pgmap v1700: 305 pgs: 305 active+clean; 472 MiB data, 866 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 184 op/s
Oct 02 12:20:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3907116082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:13 compute-0 nova_compute[257802]: 2025-10-02 12:20:13.183 2 DEBUG nova.compute.manager [req-16f082bc-8570-4092-b29d-9ca05d242c5e req-3d800a8b-71e4-465c-b2fd-e2163bfb6c28 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Received event network-vif-plugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:13 compute-0 nova_compute[257802]: 2025-10-02 12:20:13.184 2 DEBUG oslo_concurrency.lockutils [req-16f082bc-8570-4092-b29d-9ca05d242c5e req-3d800a8b-71e4-465c-b2fd-e2163bfb6c28 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:13 compute-0 nova_compute[257802]: 2025-10-02 12:20:13.184 2 DEBUG oslo_concurrency.lockutils [req-16f082bc-8570-4092-b29d-9ca05d242c5e req-3d800a8b-71e4-465c-b2fd-e2163bfb6c28 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:13 compute-0 nova_compute[257802]: 2025-10-02 12:20:13.185 2 DEBUG oslo_concurrency.lockutils [req-16f082bc-8570-4092-b29d-9ca05d242c5e req-3d800a8b-71e4-465c-b2fd-e2163bfb6c28 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:13 compute-0 nova_compute[257802]: 2025-10-02 12:20:13.185 2 DEBUG nova.compute.manager [req-16f082bc-8570-4092-b29d-9ca05d242c5e req-3d800a8b-71e4-465c-b2fd-e2163bfb6c28 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] No waiting events found dispatching network-vif-plugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:20:13 compute-0 nova_compute[257802]: 2025-10-02 12:20:13.186 2 WARNING nova.compute.manager [req-16f082bc-8570-4092-b29d-9ca05d242c5e req-3d800a8b-71e4-465c-b2fd-e2163bfb6c28 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Received unexpected event network-vif-plugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 for instance with vm_state rescued and task_state unrescuing.
Oct 02 12:20:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1701: 305 pgs: 305 active+clean; 516 MiB data, 897 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.4 MiB/s wr, 229 op/s
Oct 02 12:20:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:14 compute-0 nova_compute[257802]: 2025-10-02 12:20:14.101 2 DEBUG nova.network.neutron [None req-43b26788-d341-4e6e-9bdc-cd8913af1324 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Updating instance_info_cache with network_info: [{"id": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "address": "fa:16:3e:26:5c:23", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcce5b3f-0d", "ovs_interfaceid": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:20:14 compute-0 nova_compute[257802]: 2025-10-02 12:20:14.156 2 DEBUG oslo_concurrency.lockutils [None req-43b26788-d341-4e6e-9bdc-cd8913af1324 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Releasing lock "refresh_cache-ac0f45a4-0e95-492b-be4f-14fe19840399" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:20:14 compute-0 nova_compute[257802]: 2025-10-02 12:20:14.157 2 DEBUG nova.objects.instance [None req-43b26788-d341-4e6e-9bdc-cd8913af1324 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lazy-loading 'flavor' on Instance uuid ac0f45a4-0e95-492b-be4f-14fe19840399 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1957980323' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:14.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:14 compute-0 kernel: tapfcce5b3f-0d (unregistering): left promiscuous mode
Oct 02 12:20:14 compute-0 NetworkManager[44987]: <info>  [1759407614.3595] device (tapfcce5b3f-0d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:20:14 compute-0 nova_compute[257802]: 2025-10-02 12:20:14.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:14 compute-0 nova_compute[257802]: 2025-10-02 12:20:14.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:20:14 compute-0 nova_compute[257802]: 2025-10-02 12:20:14.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:14 compute-0 ovn_controller[148183]: 2025-10-02T12:20:14Z|00349|binding|INFO|Releasing lport fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 from this chassis (sb_readonly=0)
Oct 02 12:20:14 compute-0 ovn_controller[148183]: 2025-10-02T12:20:14Z|00350|binding|INFO|Setting lport fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 down in Southbound
Oct 02 12:20:14 compute-0 ovn_controller[148183]: 2025-10-02T12:20:14Z|00351|binding|INFO|Removing iface tapfcce5b3f-0d ovn-installed in OVS
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:14.533 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:5c:23 10.100.0.12'], port_security=['fa:16:3e:26:5c:23 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'ac0f45a4-0e95-492b-be4f-14fe19840399', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8efba404696b40fbbaa6431b934b87f1', 'neutron:revision_number': '6', 'neutron:security_group_ids': '3d4d8d91-6fd2-4ab6-a30c-6640fa44e7f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a5d3722e-d182-43fd-9a86-fa7ed68becec, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=fcce5b3f-0d8b-4461-8c10-1b4d34cc8894) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:14.534 158261 INFO neutron.agent.ovn.metadata.agent [-] Port fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 in datapath f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 unbound from our chassis
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:14.535 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:14.536 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7b83b6bc-e71d-4fe1-8a4d-15f54671b85c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:14.536 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 namespace which is not needed anymore
Oct 02 12:20:14 compute-0 nova_compute[257802]: 2025-10-02 12:20:14.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:14 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000051.scope: Deactivated successfully.
Oct 02 12:20:14 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000051.scope: Consumed 5.806s CPU time.
Oct 02 12:20:14 compute-0 systemd-machined[211836]: Machine qemu-37-instance-00000051 terminated.
Oct 02 12:20:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:14.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:14 compute-0 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[308183]: [NOTICE]   (308187) : haproxy version is 2.8.14-c23fe91
Oct 02 12:20:14 compute-0 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[308183]: [NOTICE]   (308187) : path to executable is /usr/sbin/haproxy
Oct 02 12:20:14 compute-0 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[308183]: [WARNING]  (308187) : Exiting Master process...
Oct 02 12:20:14 compute-0 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[308183]: [ALERT]    (308187) : Current worker (308189) exited with code 143 (Terminated)
Oct 02 12:20:14 compute-0 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[308183]: [WARNING]  (308187) : All workers exited. Exiting... (0)
Oct 02 12:20:14 compute-0 systemd[1]: libpod-b92d76f2a5159a8f2dcac65493085e36d83c896e55e66b441479027d4ce54fb2.scope: Deactivated successfully.
Oct 02 12:20:14 compute-0 podman[308226]: 2025-10-02 12:20:14.706373288 +0000 UTC m=+0.094994647 container died b92d76f2a5159a8f2dcac65493085e36d83c896e55e66b441479027d4ce54fb2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:20:14 compute-0 nova_compute[257802]: 2025-10-02 12:20:14.756 2 INFO nova.virt.libvirt.driver [-] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Instance destroyed successfully.
Oct 02 12:20:14 compute-0 nova_compute[257802]: 2025-10-02 12:20:14.756 2 DEBUG nova.objects.instance [None req-43b26788-d341-4e6e-9bdc-cd8913af1324 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lazy-loading 'numa_topology' on Instance uuid ac0f45a4-0e95-492b-be4f-14fe19840399 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b92d76f2a5159a8f2dcac65493085e36d83c896e55e66b441479027d4ce54fb2-userdata-shm.mount: Deactivated successfully.
Oct 02 12:20:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-fef89f4f4fbf02ef3c52f8b27eba1c813605cb8f0f7b2f58a395dc62c42f7705-merged.mount: Deactivated successfully.
Oct 02 12:20:14 compute-0 kernel: tapfcce5b3f-0d: entered promiscuous mode
Oct 02 12:20:14 compute-0 NetworkManager[44987]: <info>  [1759407614.8581] manager: (tapfcce5b3f-0d): new Tun device (/org/freedesktop/NetworkManager/Devices/159)
Oct 02 12:20:14 compute-0 nova_compute[257802]: 2025-10-02 12:20:14.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:14 compute-0 systemd-udevd[308205]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:20:14 compute-0 ovn_controller[148183]: 2025-10-02T12:20:14Z|00352|binding|INFO|Claiming lport fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 for this chassis.
Oct 02 12:20:14 compute-0 ovn_controller[148183]: 2025-10-02T12:20:14Z|00353|binding|INFO|fcce5b3f-0d8b-4461-8c10-1b4d34cc8894: Claiming fa:16:3e:26:5c:23 10.100.0.12
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:14.869 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:5c:23 10.100.0.12'], port_security=['fa:16:3e:26:5c:23 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'ac0f45a4-0e95-492b-be4f-14fe19840399', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8efba404696b40fbbaa6431b934b87f1', 'neutron:revision_number': '6', 'neutron:security_group_ids': '3d4d8d91-6fd2-4ab6-a30c-6640fa44e7f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a5d3722e-d182-43fd-9a86-fa7ed68becec, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=fcce5b3f-0d8b-4461-8c10-1b4d34cc8894) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:20:14 compute-0 NetworkManager[44987]: <info>  [1759407614.8765] device (tapfcce5b3f-0d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:20:14 compute-0 nova_compute[257802]: 2025-10-02 12:20:14.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:14 compute-0 NetworkManager[44987]: <info>  [1759407614.8775] device (tapfcce5b3f-0d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:20:14 compute-0 ovn_controller[148183]: 2025-10-02T12:20:14Z|00354|binding|INFO|Setting lport fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 ovn-installed in OVS
Oct 02 12:20:14 compute-0 ovn_controller[148183]: 2025-10-02T12:20:14Z|00355|binding|INFO|Setting lport fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 up in Southbound
Oct 02 12:20:14 compute-0 nova_compute[257802]: 2025-10-02 12:20:14.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:14 compute-0 systemd-machined[211836]: New machine qemu-38-instance-00000051.
Oct 02 12:20:14 compute-0 systemd[1]: Started Virtual Machine qemu-38-instance-00000051.
Oct 02 12:20:14 compute-0 podman[308226]: 2025-10-02 12:20:14.928902619 +0000 UTC m=+0.317523978 container cleanup b92d76f2a5159a8f2dcac65493085e36d83c896e55e66b441479027d4ce54fb2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:20:14 compute-0 systemd[1]: libpod-conmon-b92d76f2a5159a8f2dcac65493085e36d83c896e55e66b441479027d4ce54fb2.scope: Deactivated successfully.
Oct 02 12:20:15 compute-0 podman[308287]: 2025-10-02 12:20:15.011773059 +0000 UTC m=+0.059229322 container remove b92d76f2a5159a8f2dcac65493085e36d83c896e55e66b441479027d4ce54fb2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.022 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ceced51f-bb25-497d-9e7c-e12a073b7e9f]: (4, ('Thu Oct  2 12:20:14 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 (b92d76f2a5159a8f2dcac65493085e36d83c896e55e66b441479027d4ce54fb2)\nb92d76f2a5159a8f2dcac65493085e36d83c896e55e66b441479027d4ce54fb2\nThu Oct  2 12:20:14 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 (b92d76f2a5159a8f2dcac65493085e36d83c896e55e66b441479027d4ce54fb2)\nb92d76f2a5159a8f2dcac65493085e36d83c896e55e66b441479027d4ce54fb2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.024 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d977fb9c-1d99-46a4-b4d0-06b17a68bcff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.025 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1725bd8-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:15 compute-0 nova_compute[257802]: 2025-10-02 12:20:15.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:15 compute-0 kernel: tapf1725bd8-70: left promiscuous mode
Oct 02 12:20:15 compute-0 nova_compute[257802]: 2025-10-02 12:20:15.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.045 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[81b3cf9e-b8eb-4a7d-b2ac-14edf6066aa1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.074 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cfb2faac-f42a-4427-bf01-de34b7fec5c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.075 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4b3dbec1-d47c-4c21-a4ea-db28ed72d055]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.090 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6f463941-b0de-4531-b02e-f40648932241]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 567601, 'reachable_time': 30179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308303, 'error': None, 'target': 'ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.092 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.092 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[e0146257-468f-4986-b9f4-4120791d9311]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.093 158261 INFO neutron.agent.ovn.metadata.agent [-] Port fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 in datapath f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 unbound from our chassis
Oct 02 12:20:15 compute-0 systemd[1]: run-netns-ovnmeta\x2df1725bd8\x2d7d9d\x2d45cc\x2db992\x2d0cd3db0e30f0.mount: Deactivated successfully.
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.094 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f1725bd8-7d9d-45cc-b992-0cd3db0e30f0
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.105 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[81839fdb-66dd-417a-96d9-2b928e3acb52]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.105 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf1725bd8-71 in ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.107 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf1725bd8-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.107 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8b47aaee-27e5-4b8a-a744-406c359c098e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.108 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a884a37a-7759-4086-af54-03d26a5f5fe2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.117 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[962ab8c6-81ab-4440-9c20-de697bee3a65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.134 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[817b4b94-489d-4838-8feb-c2df5ca6f6ae]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.160 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[265823a5-acc3-41ca-8375-715c98d0bcb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.165 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9a791546-e147-4954-a1a7-09db9249b0d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:15 compute-0 NetworkManager[44987]: <info>  [1759407615.1666] manager: (tapf1725bd8-70): new Veth device (/org/freedesktop/NetworkManager/Devices/160)
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.197 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[13fce39c-9672-446e-a1fe-a434557a9660]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.201 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[c9892a97-96e6-491a-8578-3af25a56d4e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:15 compute-0 NetworkManager[44987]: <info>  [1759407615.2243] device (tapf1725bd8-70): carrier: link connected
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.230 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[b4da5fda-a816-437b-b2d6-c06dfe370d3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.246 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[421f3825-f156-4a18-9b5e-0a4371e096b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf1725bd8-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:76:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568284, 'reachable_time': 21187, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308342, 'error': None, 'target': 'ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.261 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7d872921-fb8c-4654-8adc-601981f4b8dd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef7:76f0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568284, 'tstamp': 568284}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308346, 'error': None, 'target': 'ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:15 compute-0 ceph-mon[73607]: pgmap v1701: 305 pgs: 305 active+clean; 516 MiB data, 897 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.4 MiB/s wr, 229 op/s
Oct 02 12:20:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2471755034' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.288 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ddbb0220-319f-4527-9124-8ca1a37fefa6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf1725bd8-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:76:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568284, 'reachable_time': 21187, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308352, 'error': None, 'target': 'ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.316 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb2a7f5-4803-4cc7-9db4-285d1c4f702e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.363 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5017c245-96fe-4e3e-a772-7669c33c517c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.364 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1725bd8-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.365 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.365 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf1725bd8-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:15 compute-0 nova_compute[257802]: 2025-10-02 12:20:15.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:15 compute-0 NetworkManager[44987]: <info>  [1759407615.3674] manager: (tapf1725bd8-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/161)
Oct 02 12:20:15 compute-0 kernel: tapf1725bd8-70: entered promiscuous mode
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.369 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf1725bd8-70, col_values=(('external_ids', {'iface-id': '421cd6e3-75aa-44e1-b552-d119c4fcd629'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 305 active+clean; 538 MiB data, 910 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.2 MiB/s wr, 219 op/s
Oct 02 12:20:15 compute-0 ovn_controller[148183]: 2025-10-02T12:20:15Z|00356|binding|INFO|Releasing lport 421cd6e3-75aa-44e1-b552-d119c4fcd629 from this chassis (sb_readonly=0)
Oct 02 12:20:15 compute-0 nova_compute[257802]: 2025-10-02 12:20:15.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.372 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f1725bd8-7d9d-45cc-b992-0cd3db0e30f0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f1725bd8-7d9d-45cc-b992-0cd3db0e30f0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.375 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0d696370-5ea0-4cb2-b335-ff05afde9573]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.376 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/f1725bd8-7d9d-45cc-b992-0cd3db0e30f0.pid.haproxy
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID f1725bd8-7d9d-45cc-b992-0cd3db0e30f0
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:20:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:15.378 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'env', 'PROCESS_TAG=haproxy-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f1725bd8-7d9d-45cc-b992-0cd3db0e30f0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:20:15 compute-0 nova_compute[257802]: 2025-10-02 12:20:15.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:15 compute-0 podman[308403]: 2025-10-02 12:20:15.746503896 +0000 UTC m=+0.077037468 container create 0b28834df4fe85deb5aafce0420a7fed7d948d045e8a4789d1faee7c8893a8b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 12:20:15 compute-0 systemd[1]: Started libpod-conmon-0b28834df4fe85deb5aafce0420a7fed7d948d045e8a4789d1faee7c8893a8b4.scope.
Oct 02 12:20:15 compute-0 podman[308403]: 2025-10-02 12:20:15.69646419 +0000 UTC m=+0.026997792 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:20:15 compute-0 nova_compute[257802]: 2025-10-02 12:20:15.796 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Removed pending event for ac0f45a4-0e95-492b-be4f-14fe19840399 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:20:15 compute-0 nova_compute[257802]: 2025-10-02 12:20:15.797 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407615.7963753, ac0f45a4-0e95-492b-be4f-14fe19840399 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:20:15 compute-0 nova_compute[257802]: 2025-10-02 12:20:15.797 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] VM Resumed (Lifecycle Event)
Oct 02 12:20:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:20:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15f82b7d5b0c4373632360ed0a550114a6239fdb86d46a9a138eed091392d159/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:15 compute-0 podman[308403]: 2025-10-02 12:20:15.832285417 +0000 UTC m=+0.162819019 container init 0b28834df4fe85deb5aafce0420a7fed7d948d045e8a4789d1faee7c8893a8b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 12:20:15 compute-0 nova_compute[257802]: 2025-10-02 12:20:15.832 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:15 compute-0 nova_compute[257802]: 2025-10-02 12:20:15.838 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:20:15 compute-0 podman[308403]: 2025-10-02 12:20:15.840777155 +0000 UTC m=+0.171310727 container start 0b28834df4fe85deb5aafce0420a7fed7d948d045e8a4789d1faee7c8893a8b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 12:20:15 compute-0 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[308418]: [NOTICE]   (308438) : New worker (308443) forked
Oct 02 12:20:15 compute-0 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[308418]: [NOTICE]   (308438) : Loading success.
Oct 02 12:20:15 compute-0 nova_compute[257802]: 2025-10-02 12:20:15.892 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] During sync_power_state the instance has a pending task (unrescuing). Skip.
Oct 02 12:20:15 compute-0 nova_compute[257802]: 2025-10-02 12:20:15.892 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407615.7979758, ac0f45a4-0e95-492b-be4f-14fe19840399 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:20:15 compute-0 nova_compute[257802]: 2025-10-02 12:20:15.893 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] VM Started (Lifecycle Event)
Oct 02 12:20:15 compute-0 nova_compute[257802]: 2025-10-02 12:20:15.950 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:15 compute-0 nova_compute[257802]: 2025-10-02 12:20:15.953 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:20:16 compute-0 nova_compute[257802]: 2025-10-02 12:20:16.014 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] During sync_power_state the instance has a pending task (unrescuing). Skip.
Oct 02 12:20:16 compute-0 ceph-mon[73607]: pgmap v1702: 305 pgs: 305 active+clean; 538 MiB data, 910 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.2 MiB/s wr, 219 op/s
Oct 02 12:20:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1087788064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:16.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:16 compute-0 nova_compute[257802]: 2025-10-02 12:20:16.571 2 DEBUG nova.compute.manager [None req-43b26788-d341-4e6e-9bdc-cd8913af1324 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:16.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 538 MiB data, 910 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 170 op/s
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.642 2 DEBUG nova.compute.manager [req-a375555b-6fd6-4f7d-af82-f69a46e619bd req-b4b3d140-3bf6-4c2d-b258-461e85833f19 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Received event network-vif-unplugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.643 2 DEBUG oslo_concurrency.lockutils [req-a375555b-6fd6-4f7d-af82-f69a46e619bd req-b4b3d140-3bf6-4c2d-b258-461e85833f19 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.643 2 DEBUG oslo_concurrency.lockutils [req-a375555b-6fd6-4f7d-af82-f69a46e619bd req-b4b3d140-3bf6-4c2d-b258-461e85833f19 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.644 2 DEBUG oslo_concurrency.lockutils [req-a375555b-6fd6-4f7d-af82-f69a46e619bd req-b4b3d140-3bf6-4c2d-b258-461e85833f19 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.644 2 DEBUG nova.compute.manager [req-a375555b-6fd6-4f7d-af82-f69a46e619bd req-b4b3d140-3bf6-4c2d-b258-461e85833f19 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] No waiting events found dispatching network-vif-unplugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.644 2 WARNING nova.compute.manager [req-a375555b-6fd6-4f7d-af82-f69a46e619bd req-b4b3d140-3bf6-4c2d-b258-461e85833f19 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Received unexpected event network-vif-unplugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 for instance with vm_state active and task_state None.
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.644 2 DEBUG nova.compute.manager [req-a375555b-6fd6-4f7d-af82-f69a46e619bd req-b4b3d140-3bf6-4c2d-b258-461e85833f19 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Received event network-vif-plugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.644 2 DEBUG oslo_concurrency.lockutils [req-a375555b-6fd6-4f7d-af82-f69a46e619bd req-b4b3d140-3bf6-4c2d-b258-461e85833f19 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.645 2 DEBUG oslo_concurrency.lockutils [req-a375555b-6fd6-4f7d-af82-f69a46e619bd req-b4b3d140-3bf6-4c2d-b258-461e85833f19 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.645 2 DEBUG oslo_concurrency.lockutils [req-a375555b-6fd6-4f7d-af82-f69a46e619bd req-b4b3d140-3bf6-4c2d-b258-461e85833f19 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.645 2 DEBUG nova.compute.manager [req-a375555b-6fd6-4f7d-af82-f69a46e619bd req-b4b3d140-3bf6-4c2d-b258-461e85833f19 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] No waiting events found dispatching network-vif-plugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.645 2 WARNING nova.compute.manager [req-a375555b-6fd6-4f7d-af82-f69a46e619bd req-b4b3d140-3bf6-4c2d-b258-461e85833f19 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Received unexpected event network-vif-plugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 for instance with vm_state active and task_state None.
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.646 2 DEBUG nova.compute.manager [req-a375555b-6fd6-4f7d-af82-f69a46e619bd req-b4b3d140-3bf6-4c2d-b258-461e85833f19 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Received event network-vif-plugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.646 2 DEBUG oslo_concurrency.lockutils [req-a375555b-6fd6-4f7d-af82-f69a46e619bd req-b4b3d140-3bf6-4c2d-b258-461e85833f19 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.646 2 DEBUG oslo_concurrency.lockutils [req-a375555b-6fd6-4f7d-af82-f69a46e619bd req-b4b3d140-3bf6-4c2d-b258-461e85833f19 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.646 2 DEBUG oslo_concurrency.lockutils [req-a375555b-6fd6-4f7d-af82-f69a46e619bd req-b4b3d140-3bf6-4c2d-b258-461e85833f19 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.647 2 DEBUG nova.compute.manager [req-a375555b-6fd6-4f7d-af82-f69a46e619bd req-b4b3d140-3bf6-4c2d-b258-461e85833f19 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] No waiting events found dispatching network-vif-plugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.647 2 WARNING nova.compute.manager [req-a375555b-6fd6-4f7d-af82-f69a46e619bd req-b4b3d140-3bf6-4c2d-b258-461e85833f19 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Received unexpected event network-vif-plugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 for instance with vm_state active and task_state None.
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.647 2 DEBUG nova.compute.manager [req-a375555b-6fd6-4f7d-af82-f69a46e619bd req-b4b3d140-3bf6-4c2d-b258-461e85833f19 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Received event network-vif-plugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.647 2 DEBUG oslo_concurrency.lockutils [req-a375555b-6fd6-4f7d-af82-f69a46e619bd req-b4b3d140-3bf6-4c2d-b258-461e85833f19 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.648 2 DEBUG oslo_concurrency.lockutils [req-a375555b-6fd6-4f7d-af82-f69a46e619bd req-b4b3d140-3bf6-4c2d-b258-461e85833f19 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.648 2 DEBUG oslo_concurrency.lockutils [req-a375555b-6fd6-4f7d-af82-f69a46e619bd req-b4b3d140-3bf6-4c2d-b258-461e85833f19 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.648 2 DEBUG nova.compute.manager [req-a375555b-6fd6-4f7d-af82-f69a46e619bd req-b4b3d140-3bf6-4c2d-b258-461e85833f19 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] No waiting events found dispatching network-vif-plugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.648 2 WARNING nova.compute.manager [req-a375555b-6fd6-4f7d-af82-f69a46e619bd req-b4b3d140-3bf6-4c2d-b258-461e85833f19 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Received unexpected event network-vif-plugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 for instance with vm_state active and task_state None.
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.814 2 DEBUG oslo_concurrency.lockutils [None req-908d15ee-841a-4fb6-aee3-ae6768a1ca7b 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Acquiring lock "ac0f45a4-0e95-492b-be4f-14fe19840399" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.814 2 DEBUG oslo_concurrency.lockutils [None req-908d15ee-841a-4fb6-aee3-ae6768a1ca7b 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.815 2 DEBUG oslo_concurrency.lockutils [None req-908d15ee-841a-4fb6-aee3-ae6768a1ca7b 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Acquiring lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.815 2 DEBUG oslo_concurrency.lockutils [None req-908d15ee-841a-4fb6-aee3-ae6768a1ca7b 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.815 2 DEBUG oslo_concurrency.lockutils [None req-908d15ee-841a-4fb6-aee3-ae6768a1ca7b 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.816 2 INFO nova.compute.manager [None req-908d15ee-841a-4fb6-aee3-ae6768a1ca7b 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Terminating instance
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.817 2 DEBUG nova.compute.manager [None req-908d15ee-841a-4fb6-aee3-ae6768a1ca7b 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:20:17 compute-0 kernel: tapfcce5b3f-0d (unregistering): left promiscuous mode
Oct 02 12:20:17 compute-0 NetworkManager[44987]: <info>  [1759407617.8656] device (tapfcce5b3f-0d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:17 compute-0 ovn_controller[148183]: 2025-10-02T12:20:17Z|00357|binding|INFO|Releasing lport fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 from this chassis (sb_readonly=0)
Oct 02 12:20:17 compute-0 ovn_controller[148183]: 2025-10-02T12:20:17Z|00358|binding|INFO|Setting lport fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 down in Southbound
Oct 02 12:20:17 compute-0 ovn_controller[148183]: 2025-10-02T12:20:17Z|00359|binding|INFO|Removing iface tapfcce5b3f-0d ovn-installed in OVS
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:17 compute-0 nova_compute[257802]: 2025-10-02 12:20:17.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:17.923 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:5c:23 10.100.0.12'], port_security=['fa:16:3e:26:5c:23 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'ac0f45a4-0e95-492b-be4f-14fe19840399', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8efba404696b40fbbaa6431b934b87f1', 'neutron:revision_number': '8', 'neutron:security_group_ids': '3d4d8d91-6fd2-4ab6-a30c-6640fa44e7f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a5d3722e-d182-43fd-9a86-fa7ed68becec, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=fcce5b3f-0d8b-4461-8c10-1b4d34cc8894) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:20:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:17.924 158261 INFO neutron.agent.ovn.metadata.agent [-] Port fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 in datapath f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 unbound from our chassis
Oct 02 12:20:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:17.926 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:20:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:17.926 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[51f3c755-39ad-4393-844d-d2b1a8e28e08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:17.927 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 namespace which is not needed anymore
Oct 02 12:20:17 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000051.scope: Deactivated successfully.
Oct 02 12:20:17 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000051.scope: Consumed 2.917s CPU time.
Oct 02 12:20:17 compute-0 systemd-machined[211836]: Machine qemu-38-instance-00000051 terminated.
Oct 02 12:20:18 compute-0 nova_compute[257802]: 2025-10-02 12:20:18.061 2 INFO nova.virt.libvirt.driver [-] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Instance destroyed successfully.
Oct 02 12:20:18 compute-0 nova_compute[257802]: 2025-10-02 12:20:18.062 2 DEBUG nova.objects.instance [None req-908d15ee-841a-4fb6-aee3-ae6768a1ca7b 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lazy-loading 'resources' on Instance uuid ac0f45a4-0e95-492b-be4f-14fe19840399 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:18 compute-0 nova_compute[257802]: 2025-10-02 12:20:18.085 2 DEBUG nova.virt.libvirt.vif [None req-908d15ee-841a-4fb6-aee3-ae6768a1ca7b 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:19:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1594332990',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1594332990',id=81,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:20:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8efba404696b40fbbaa6431b934b87f1',ramdisk_id='',reservation_id='r-5b8jgtxs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_
ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-153154373',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-153154373-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:20:16Z,user_data=None,user_id='69d8e29c6d3747e98a5985a584f4c814',uuid=ac0f45a4-0e95-492b-be4f-14fe19840399,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "address": "fa:16:3e:26:5c:23", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcce5b3f-0d", "ovs_interfaceid": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:20:18 compute-0 nova_compute[257802]: 2025-10-02 12:20:18.086 2 DEBUG nova.network.os_vif_util [None req-908d15ee-841a-4fb6-aee3-ae6768a1ca7b 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Converting VIF {"id": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "address": "fa:16:3e:26:5c:23", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcce5b3f-0d", "ovs_interfaceid": "fcce5b3f-0d8b-4461-8c10-1b4d34cc8894", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:20:18 compute-0 nova_compute[257802]: 2025-10-02 12:20:18.086 2 DEBUG nova.network.os_vif_util [None req-908d15ee-841a-4fb6-aee3-ae6768a1ca7b 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:5c:23,bridge_name='br-int',has_traffic_filtering=True,id=fcce5b3f-0d8b-4461-8c10-1b4d34cc8894,network=Network(f1725bd8-7d9d-45cc-b992-0cd3db0e30f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcce5b3f-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:20:18 compute-0 nova_compute[257802]: 2025-10-02 12:20:18.086 2 DEBUG os_vif [None req-908d15ee-841a-4fb6-aee3-ae6768a1ca7b 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:5c:23,bridge_name='br-int',has_traffic_filtering=True,id=fcce5b3f-0d8b-4461-8c10-1b4d34cc8894,network=Network(f1725bd8-7d9d-45cc-b992-0cd3db0e30f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcce5b3f-0d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:20:18 compute-0 nova_compute[257802]: 2025-10-02 12:20:18.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:18 compute-0 nova_compute[257802]: 2025-10-02 12:20:18.088 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfcce5b3f-0d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:18 compute-0 nova_compute[257802]: 2025-10-02 12:20:18.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:18 compute-0 nova_compute[257802]: 2025-10-02 12:20:18.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:18 compute-0 nova_compute[257802]: 2025-10-02 12:20:18.094 2 INFO os_vif [None req-908d15ee-841a-4fb6-aee3-ae6768a1ca7b 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:5c:23,bridge_name='br-int',has_traffic_filtering=True,id=fcce5b3f-0d8b-4461-8c10-1b4d34cc8894,network=Network(f1725bd8-7d9d-45cc-b992-0cd3db0e30f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcce5b3f-0d')
Oct 02 12:20:18 compute-0 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[308418]: [NOTICE]   (308438) : haproxy version is 2.8.14-c23fe91
Oct 02 12:20:18 compute-0 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[308418]: [NOTICE]   (308438) : path to executable is /usr/sbin/haproxy
Oct 02 12:20:18 compute-0 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[308418]: [WARNING]  (308438) : Exiting Master process...
Oct 02 12:20:18 compute-0 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[308418]: [WARNING]  (308438) : Exiting Master process...
Oct 02 12:20:18 compute-0 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[308418]: [ALERT]    (308438) : Current worker (308443) exited with code 143 (Terminated)
Oct 02 12:20:18 compute-0 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[308418]: [WARNING]  (308438) : All workers exited. Exiting... (0)
Oct 02 12:20:18 compute-0 systemd[1]: libpod-0b28834df4fe85deb5aafce0420a7fed7d948d045e8a4789d1faee7c8893a8b4.scope: Deactivated successfully.
Oct 02 12:20:18 compute-0 podman[308478]: 2025-10-02 12:20:18.107728966 +0000 UTC m=+0.109205686 container died 0b28834df4fe85deb5aafce0420a7fed7d948d045e8a4789d1faee7c8893a8b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 12:20:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:18.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0b28834df4fe85deb5aafce0420a7fed7d948d045e8a4789d1faee7c8893a8b4-userdata-shm.mount: Deactivated successfully.
Oct 02 12:20:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-15f82b7d5b0c4373632360ed0a550114a6239fdb86d46a9a138eed091392d159-merged.mount: Deactivated successfully.
Oct 02 12:20:18 compute-0 ceph-mon[73607]: pgmap v1703: 305 pgs: 305 active+clean; 538 MiB data, 910 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 170 op/s
Oct 02 12:20:18 compute-0 podman[308478]: 2025-10-02 12:20:18.624454753 +0000 UTC m=+0.625931443 container cleanup 0b28834df4fe85deb5aafce0420a7fed7d948d045e8a4789d1faee7c8893a8b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:20:18 compute-0 systemd[1]: libpod-conmon-0b28834df4fe85deb5aafce0420a7fed7d948d045e8a4789d1faee7c8893a8b4.scope: Deactivated successfully.
Oct 02 12:20:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:18.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:19 compute-0 podman[308537]: 2025-10-02 12:20:19.054944098 +0000 UTC m=+0.406179331 container remove 0b28834df4fe85deb5aafce0420a7fed7d948d045e8a4789d1faee7c8893a8b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:20:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:19.061 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[373823f1-5b1d-48f4-97c7-4aad08d2ce9e]: (4, ('Thu Oct  2 12:20:17 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 (0b28834df4fe85deb5aafce0420a7fed7d948d045e8a4789d1faee7c8893a8b4)\n0b28834df4fe85deb5aafce0420a7fed7d948d045e8a4789d1faee7c8893a8b4\nThu Oct  2 12:20:18 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 (0b28834df4fe85deb5aafce0420a7fed7d948d045e8a4789d1faee7c8893a8b4)\n0b28834df4fe85deb5aafce0420a7fed7d948d045e8a4789d1faee7c8893a8b4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:19.063 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ec0e88f9-32c0-4482-a900-638f79c3f500]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:19.065 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1725bd8-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:19 compute-0 nova_compute[257802]: 2025-10-02 12:20:19.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:19 compute-0 kernel: tapf1725bd8-70: left promiscuous mode
Oct 02 12:20:19 compute-0 nova_compute[257802]: 2025-10-02 12:20:19.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:19.105 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8ad61174-77ea-47c6-8601-0c425df70bdf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:19.144 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba49934-5e88-414b-b0d0-cbd3024d2f4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:19.145 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cf7fed88-e469-4782-ad48-9a674a07f7b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:19.167 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9811cd0a-ebec-435c-a4be-4242c31d2373]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568278, 'reachable_time': 30822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308577, 'error': None, 'target': 'ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:19.170 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:20:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:19.170 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[375245b7-a32b-4797-b0d5-652f3b09f5a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:19 compute-0 systemd[1]: run-netns-ovnmeta\x2df1725bd8\x2d7d9d\x2d45cc\x2db992\x2d0cd3db0e30f0.mount: Deactivated successfully.
Oct 02 12:20:19 compute-0 sudo[308552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:19 compute-0 sudo[308552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:19 compute-0 sudo[308552]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:19 compute-0 sudo[308580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:19 compute-0 sudo[308580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:19 compute-0 sudo[308580]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 544 MiB data, 911 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 264 op/s
Oct 02 12:20:19 compute-0 nova_compute[257802]: 2025-10-02 12:20:19.600 2 INFO nova.virt.libvirt.driver [None req-908d15ee-841a-4fb6-aee3-ae6768a1ca7b 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Deleting instance files /var/lib/nova/instances/ac0f45a4-0e95-492b-be4f-14fe19840399_del
Oct 02 12:20:19 compute-0 nova_compute[257802]: 2025-10-02 12:20:19.601 2 INFO nova.virt.libvirt.driver [None req-908d15ee-841a-4fb6-aee3-ae6768a1ca7b 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Deletion of /var/lib/nova/instances/ac0f45a4-0e95-492b-be4f-14fe19840399_del complete
Oct 02 12:20:19 compute-0 nova_compute[257802]: 2025-10-02 12:20:19.668 2 INFO nova.compute.manager [None req-908d15ee-841a-4fb6-aee3-ae6768a1ca7b 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Took 1.85 seconds to destroy the instance on the hypervisor.
Oct 02 12:20:19 compute-0 nova_compute[257802]: 2025-10-02 12:20:19.669 2 DEBUG oslo.service.loopingcall [None req-908d15ee-841a-4fb6-aee3-ae6768a1ca7b 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:20:19 compute-0 nova_compute[257802]: 2025-10-02 12:20:19.669 2 DEBUG nova.compute.manager [-] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:20:19 compute-0 nova_compute[257802]: 2025-10-02 12:20:19.670 2 DEBUG nova.network.neutron [-] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:20:19 compute-0 nova_compute[257802]: 2025-10-02 12:20:19.983 2 DEBUG nova.compute.manager [req-21e2ef43-b51d-4246-bcbc-3a11a55b2b08 req-5fb8d25f-0045-47c7-b311-9b33612d48e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Received event network-vif-unplugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:19 compute-0 nova_compute[257802]: 2025-10-02 12:20:19.984 2 DEBUG oslo_concurrency.lockutils [req-21e2ef43-b51d-4246-bcbc-3a11a55b2b08 req-5fb8d25f-0045-47c7-b311-9b33612d48e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:19 compute-0 nova_compute[257802]: 2025-10-02 12:20:19.984 2 DEBUG oslo_concurrency.lockutils [req-21e2ef43-b51d-4246-bcbc-3a11a55b2b08 req-5fb8d25f-0045-47c7-b311-9b33612d48e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:19 compute-0 nova_compute[257802]: 2025-10-02 12:20:19.984 2 DEBUG oslo_concurrency.lockutils [req-21e2ef43-b51d-4246-bcbc-3a11a55b2b08 req-5fb8d25f-0045-47c7-b311-9b33612d48e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:19 compute-0 nova_compute[257802]: 2025-10-02 12:20:19.984 2 DEBUG nova.compute.manager [req-21e2ef43-b51d-4246-bcbc-3a11a55b2b08 req-5fb8d25f-0045-47c7-b311-9b33612d48e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] No waiting events found dispatching network-vif-unplugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:20:19 compute-0 nova_compute[257802]: 2025-10-02 12:20:19.984 2 DEBUG nova.compute.manager [req-21e2ef43-b51d-4246-bcbc-3a11a55b2b08 req-5fb8d25f-0045-47c7-b311-9b33612d48e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Received event network-vif-unplugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:20:19 compute-0 nova_compute[257802]: 2025-10-02 12:20:19.985 2 DEBUG nova.compute.manager [req-21e2ef43-b51d-4246-bcbc-3a11a55b2b08 req-5fb8d25f-0045-47c7-b311-9b33612d48e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Received event network-vif-plugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:19 compute-0 nova_compute[257802]: 2025-10-02 12:20:19.985 2 DEBUG oslo_concurrency.lockutils [req-21e2ef43-b51d-4246-bcbc-3a11a55b2b08 req-5fb8d25f-0045-47c7-b311-9b33612d48e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:19 compute-0 nova_compute[257802]: 2025-10-02 12:20:19.985 2 DEBUG oslo_concurrency.lockutils [req-21e2ef43-b51d-4246-bcbc-3a11a55b2b08 req-5fb8d25f-0045-47c7-b311-9b33612d48e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:19 compute-0 nova_compute[257802]: 2025-10-02 12:20:19.985 2 DEBUG oslo_concurrency.lockutils [req-21e2ef43-b51d-4246-bcbc-3a11a55b2b08 req-5fb8d25f-0045-47c7-b311-9b33612d48e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:19 compute-0 nova_compute[257802]: 2025-10-02 12:20:19.985 2 DEBUG nova.compute.manager [req-21e2ef43-b51d-4246-bcbc-3a11a55b2b08 req-5fb8d25f-0045-47c7-b311-9b33612d48e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] No waiting events found dispatching network-vif-plugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:20:19 compute-0 nova_compute[257802]: 2025-10-02 12:20:19.986 2 WARNING nova.compute.manager [req-21e2ef43-b51d-4246-bcbc-3a11a55b2b08 req-5fb8d25f-0045-47c7-b311-9b33612d48e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Received unexpected event network-vif-plugged-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 for instance with vm_state active and task_state deleting.
Oct 02 12:20:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:20:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:20.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:20:20 compute-0 nova_compute[257802]: 2025-10-02 12:20:20.537 2 DEBUG nova.network.neutron [-] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:20:20 compute-0 nova_compute[257802]: 2025-10-02 12:20:20.563 2 INFO nova.compute.manager [-] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Took 0.89 seconds to deallocate network for instance.
Oct 02 12:20:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:20.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:20 compute-0 nova_compute[257802]: 2025-10-02 12:20:20.724 2 INFO nova.compute.manager [None req-908d15ee-841a-4fb6-aee3-ae6768a1ca7b 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Took 0.16 seconds to detach 1 volumes for instance.
Oct 02 12:20:20 compute-0 ceph-mon[73607]: pgmap v1704: 305 pgs: 305 active+clean; 544 MiB data, 911 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 264 op/s
Oct 02 12:20:20 compute-0 nova_compute[257802]: 2025-10-02 12:20:20.772 2 DEBUG oslo_concurrency.lockutils [None req-908d15ee-841a-4fb6-aee3-ae6768a1ca7b 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:20 compute-0 nova_compute[257802]: 2025-10-02 12:20:20.773 2 DEBUG oslo_concurrency.lockutils [None req-908d15ee-841a-4fb6-aee3-ae6768a1ca7b 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:20 compute-0 nova_compute[257802]: 2025-10-02 12:20:20.830 2 DEBUG oslo_concurrency.processutils [None req-908d15ee-841a-4fb6-aee3-ae6768a1ca7b 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:20:21 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/696431819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:21 compute-0 nova_compute[257802]: 2025-10-02 12:20:21.233 2 DEBUG oslo_concurrency.processutils [None req-908d15ee-841a-4fb6-aee3-ae6768a1ca7b 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:21 compute-0 nova_compute[257802]: 2025-10-02 12:20:21.239 2 DEBUG nova.compute.provider_tree [None req-908d15ee-841a-4fb6-aee3-ae6768a1ca7b 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:20:21 compute-0 nova_compute[257802]: 2025-10-02 12:20:21.256 2 DEBUG nova.scheduler.client.report [None req-908d15ee-841a-4fb6-aee3-ae6768a1ca7b 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:20:21 compute-0 nova_compute[257802]: 2025-10-02 12:20:21.313 2 DEBUG oslo_concurrency.lockutils [None req-908d15ee-841a-4fb6-aee3-ae6768a1ca7b 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:21 compute-0 nova_compute[257802]: 2025-10-02 12:20:21.341 2 INFO nova.scheduler.client.report [None req-908d15ee-841a-4fb6-aee3-ae6768a1ca7b 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Deleted allocations for instance ac0f45a4-0e95-492b-be4f-14fe19840399
Oct 02 12:20:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 305 active+clean; 544 MiB data, 911 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.9 MiB/s wr, 300 op/s
Oct 02 12:20:21 compute-0 nova_compute[257802]: 2025-10-02 12:20:21.418 2 DEBUG oslo_concurrency.lockutils [None req-908d15ee-841a-4fb6-aee3-ae6768a1ca7b 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "ac0f45a4-0e95-492b-be4f-14fe19840399" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/696431819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:22 compute-0 nova_compute[257802]: 2025-10-02 12:20:22.087 2 DEBUG nova.compute.manager [req-c4d485f4-7a23-4f63-b641-68ea2c1c3572 req-2f5f7ca6-21fb-4878-b8be-69d814d52943 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Received event network-vif-deleted-fcce5b3f-0d8b-4461-8c10-1b4d34cc8894 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:22 compute-0 nova_compute[257802]: 2025-10-02 12:20:22.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:22.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:22.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:22 compute-0 ceph-mon[73607]: pgmap v1705: 305 pgs: 305 active+clean; 544 MiB data, 911 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.9 MiB/s wr, 300 op/s
Oct 02 12:20:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2938752026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:23 compute-0 nova_compute[257802]: 2025-10-02 12:20:23.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 509 MiB data, 891 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 3.8 MiB/s wr, 341 op/s
Oct 02 12:20:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Oct 02 12:20:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Oct 02 12:20:23 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Oct 02 12:20:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:23 compute-0 podman[308630]: 2025-10-02 12:20:23.929914701 +0000 UTC m=+0.068572370 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible)
Oct 02 12:20:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:20:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:24.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:20:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:20:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:24.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:20:24 compute-0 ceph-mon[73607]: pgmap v1706: 305 pgs: 305 active+clean; 509 MiB data, 891 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 3.8 MiB/s wr, 341 op/s
Oct 02 12:20:24 compute-0 ceph-mon[73607]: osdmap e254: 3 total, 3 up, 3 in
Oct 02 12:20:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 497 MiB data, 889 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 94 KiB/s wr, 289 op/s
Oct 02 12:20:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:26.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:26 compute-0 nova_compute[257802]: 2025-10-02 12:20:26.360 2 DEBUG oslo_concurrency.lockutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "e7e38e6b-74d9-470a-ad54-222ee4a47e1f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:26 compute-0 nova_compute[257802]: 2025-10-02 12:20:26.360 2 DEBUG oslo_concurrency.lockutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "e7e38e6b-74d9-470a-ad54-222ee4a47e1f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:26 compute-0 nova_compute[257802]: 2025-10-02 12:20:26.379 2 DEBUG nova.compute.manager [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:20:26 compute-0 nova_compute[257802]: 2025-10-02 12:20:26.455 2 DEBUG oslo_concurrency.lockutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:26 compute-0 nova_compute[257802]: 2025-10-02 12:20:26.455 2 DEBUG oslo_concurrency.lockutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:26 compute-0 nova_compute[257802]: 2025-10-02 12:20:26.462 2 DEBUG nova.virt.hardware [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:20:26 compute-0 nova_compute[257802]: 2025-10-02 12:20:26.462 2 INFO nova.compute.claims [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:20:26 compute-0 nova_compute[257802]: 2025-10-02 12:20:26.578 2 DEBUG oslo_concurrency.processutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:26.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:26 compute-0 ceph-mon[73607]: pgmap v1708: 305 pgs: 305 active+clean; 497 MiB data, 889 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 94 KiB/s wr, 289 op/s
Oct 02 12:20:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:26.937 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:26.938 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:26.938 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:20:26 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3228654194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:27 compute-0 nova_compute[257802]: 2025-10-02 12:20:27.006 2 DEBUG oslo_concurrency.processutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:27 compute-0 nova_compute[257802]: 2025-10-02 12:20:27.013 2 DEBUG nova.compute.provider_tree [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:20:27 compute-0 nova_compute[257802]: 2025-10-02 12:20:27.029 2 DEBUG nova.scheduler.client.report [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:20:27 compute-0 nova_compute[257802]: 2025-10-02 12:20:27.053 2 DEBUG oslo_concurrency.lockutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:27 compute-0 nova_compute[257802]: 2025-10-02 12:20:27.054 2 DEBUG nova.compute.manager [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:20:27 compute-0 nova_compute[257802]: 2025-10-02 12:20:27.099 2 DEBUG nova.compute.manager [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:20:27 compute-0 nova_compute[257802]: 2025-10-02 12:20:27.099 2 DEBUG nova.network.neutron [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:20:27 compute-0 nova_compute[257802]: 2025-10-02 12:20:27.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:27 compute-0 nova_compute[257802]: 2025-10-02 12:20:27.125 2 INFO nova.virt.libvirt.driver [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:20:27 compute-0 nova_compute[257802]: 2025-10-02 12:20:27.147 2 DEBUG nova.compute.manager [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:20:27 compute-0 nova_compute[257802]: 2025-10-02 12:20:27.280 2 DEBUG nova.compute.manager [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:20:27 compute-0 nova_compute[257802]: 2025-10-02 12:20:27.281 2 DEBUG nova.virt.libvirt.driver [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:20:27 compute-0 nova_compute[257802]: 2025-10-02 12:20:27.281 2 INFO nova.virt.libvirt.driver [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Creating image(s)
Oct 02 12:20:27 compute-0 nova_compute[257802]: 2025-10-02 12:20:27.304 2 DEBUG nova.storage.rbd_utils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image e7e38e6b-74d9-470a-ad54-222ee4a47e1f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:27 compute-0 nova_compute[257802]: 2025-10-02 12:20:27.329 2 DEBUG nova.storage.rbd_utils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image e7e38e6b-74d9-470a-ad54-222ee4a47e1f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:27 compute-0 nova_compute[257802]: 2025-10-02 12:20:27.354 2 DEBUG nova.storage.rbd_utils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image e7e38e6b-74d9-470a-ad54-222ee4a47e1f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:27 compute-0 nova_compute[257802]: 2025-10-02 12:20:27.357 2 DEBUG oslo_concurrency.processutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 497 MiB data, 889 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 94 KiB/s wr, 289 op/s
Oct 02 12:20:27 compute-0 nova_compute[257802]: 2025-10-02 12:20:27.419 2 DEBUG oslo_concurrency.processutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:27 compute-0 nova_compute[257802]: 2025-10-02 12:20:27.420 2 DEBUG oslo_concurrency.lockutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:27 compute-0 nova_compute[257802]: 2025-10-02 12:20:27.421 2 DEBUG oslo_concurrency.lockutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:27 compute-0 nova_compute[257802]: 2025-10-02 12:20:27.421 2 DEBUG oslo_concurrency.lockutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:27 compute-0 nova_compute[257802]: 2025-10-02 12:20:27.449 2 DEBUG nova.storage.rbd_utils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image e7e38e6b-74d9-470a-ad54-222ee4a47e1f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:27 compute-0 nova_compute[257802]: 2025-10-02 12:20:27.453 2 DEBUG oslo_concurrency.processutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 e7e38e6b-74d9-470a-ad54-222ee4a47e1f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:27 compute-0 nova_compute[257802]: 2025-10-02 12:20:27.631 2 DEBUG nova.policy [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a9f7faffac7240869a0196df1ddda7e5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1c2c11ebecb14f3188f35ea473c4ca02', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:20:27 compute-0 nova_compute[257802]: 2025-10-02 12:20:27.833 2 DEBUG oslo_concurrency.processutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 e7e38e6b-74d9-470a-ad54-222ee4a47e1f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.381s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:27 compute-0 podman[308769]: 2025-10-02 12:20:27.911242934 +0000 UTC m=+0.051551323 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 12:20:27 compute-0 podman[308785]: 2025-10-02 12:20:27.91556937 +0000 UTC m=+0.055390828 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:20:27 compute-0 nova_compute[257802]: 2025-10-02 12:20:27.924 2 DEBUG nova.storage.rbd_utils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] resizing rbd image e7e38e6b-74d9-470a-ad54-222ee4a47e1f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:20:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3228654194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:28 compute-0 nova_compute[257802]: 2025-10-02 12:20:28.059 2 DEBUG nova.objects.instance [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lazy-loading 'migration_context' on Instance uuid e7e38e6b-74d9-470a-ad54-222ee4a47e1f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:28 compute-0 nova_compute[257802]: 2025-10-02 12:20:28.074 2 DEBUG nova.virt.libvirt.driver [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:20:28 compute-0 nova_compute[257802]: 2025-10-02 12:20:28.075 2 DEBUG nova.virt.libvirt.driver [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Ensure instance console log exists: /var/lib/nova/instances/e7e38e6b-74d9-470a-ad54-222ee4a47e1f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:20:28 compute-0 nova_compute[257802]: 2025-10-02 12:20:28.075 2 DEBUG oslo_concurrency.lockutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:28 compute-0 nova_compute[257802]: 2025-10-02 12:20:28.075 2 DEBUG oslo_concurrency.lockutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:28 compute-0 nova_compute[257802]: 2025-10-02 12:20:28.076 2 DEBUG oslo_concurrency.lockutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:28 compute-0 nova_compute[257802]: 2025-10-02 12:20:28.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:28.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:28 compute-0 nova_compute[257802]: 2025-10-02 12:20:28.545 2 DEBUG nova.network.neutron [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Successfully created port: b181aa8d-06f5-47f8-a6c3-d383f70cc4ba _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:20:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:28.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:28 compute-0 ceph-mon[73607]: pgmap v1709: 305 pgs: 305 active+clean; 497 MiB data, 889 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 94 KiB/s wr, 289 op/s
Oct 02 12:20:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/25203635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 305 active+clean; 436 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.4 MiB/s wr, 217 op/s
Oct 02 12:20:29 compute-0 nova_compute[257802]: 2025-10-02 12:20:29.701 2 DEBUG nova.network.neutron [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Successfully updated port: b181aa8d-06f5-47f8-a6c3-d383f70cc4ba _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:20:29 compute-0 nova_compute[257802]: 2025-10-02 12:20:29.733 2 DEBUG oslo_concurrency.lockutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "refresh_cache-e7e38e6b-74d9-470a-ad54-222ee4a47e1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:20:29 compute-0 nova_compute[257802]: 2025-10-02 12:20:29.733 2 DEBUG oslo_concurrency.lockutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquired lock "refresh_cache-e7e38e6b-74d9-470a-ad54-222ee4a47e1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:20:29 compute-0 nova_compute[257802]: 2025-10-02 12:20:29.733 2 DEBUG nova.network.neutron [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:20:30 compute-0 nova_compute[257802]: 2025-10-02 12:20:30.199 2 DEBUG nova.network.neutron [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:20:30 compute-0 nova_compute[257802]: 2025-10-02 12:20:30.230 2 DEBUG nova.compute.manager [req-90f5dea7-c9f9-47f3-ba86-6938d42721c7 req-89630f63-3038-464b-bc86-c96dde5d9874 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Received event network-changed-b181aa8d-06f5-47f8-a6c3-d383f70cc4ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:30 compute-0 nova_compute[257802]: 2025-10-02 12:20:30.231 2 DEBUG nova.compute.manager [req-90f5dea7-c9f9-47f3-ba86-6938d42721c7 req-89630f63-3038-464b-bc86-c96dde5d9874 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Refreshing instance network info cache due to event network-changed-b181aa8d-06f5-47f8-a6c3-d383f70cc4ba. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:20:30 compute-0 nova_compute[257802]: 2025-10-02 12:20:30.231 2 DEBUG oslo_concurrency.lockutils [req-90f5dea7-c9f9-47f3-ba86-6938d42721c7 req-89630f63-3038-464b-bc86-c96dde5d9874 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-e7e38e6b-74d9-470a-ad54-222ee4a47e1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:20:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:30.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:30.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:30 compute-0 ceph-mon[73607]: pgmap v1710: 305 pgs: 305 active+clean; 436 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.4 MiB/s wr, 217 op/s
Oct 02 12:20:31 compute-0 nova_compute[257802]: 2025-10-02 12:20:31.201 2 DEBUG nova.network.neutron [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Updating instance_info_cache with network_info: [{"id": "b181aa8d-06f5-47f8-a6c3-d383f70cc4ba", "address": "fa:16:3e:8d:47:99", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb181aa8d-06", "ovs_interfaceid": "b181aa8d-06f5-47f8-a6c3-d383f70cc4ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:20:31 compute-0 nova_compute[257802]: 2025-10-02 12:20:31.282 2 DEBUG oslo_concurrency.lockutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Releasing lock "refresh_cache-e7e38e6b-74d9-470a-ad54-222ee4a47e1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:20:31 compute-0 nova_compute[257802]: 2025-10-02 12:20:31.283 2 DEBUG nova.compute.manager [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Instance network_info: |[{"id": "b181aa8d-06f5-47f8-a6c3-d383f70cc4ba", "address": "fa:16:3e:8d:47:99", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb181aa8d-06", "ovs_interfaceid": "b181aa8d-06f5-47f8-a6c3-d383f70cc4ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:20:31 compute-0 nova_compute[257802]: 2025-10-02 12:20:31.283 2 DEBUG oslo_concurrency.lockutils [req-90f5dea7-c9f9-47f3-ba86-6938d42721c7 req-89630f63-3038-464b-bc86-c96dde5d9874 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-e7e38e6b-74d9-470a-ad54-222ee4a47e1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:20:31 compute-0 nova_compute[257802]: 2025-10-02 12:20:31.284 2 DEBUG nova.network.neutron [req-90f5dea7-c9f9-47f3-ba86-6938d42721c7 req-89630f63-3038-464b-bc86-c96dde5d9874 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Refreshing network info cache for port b181aa8d-06f5-47f8-a6c3-d383f70cc4ba _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:20:31 compute-0 nova_compute[257802]: 2025-10-02 12:20:31.287 2 DEBUG nova.virt.libvirt.driver [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Start _get_guest_xml network_info=[{"id": "b181aa8d-06f5-47f8-a6c3-d383f70cc4ba", "address": "fa:16:3e:8d:47:99", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb181aa8d-06", "ovs_interfaceid": "b181aa8d-06f5-47f8-a6c3-d383f70cc4ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:20:31 compute-0 nova_compute[257802]: 2025-10-02 12:20:31.292 2 WARNING nova.virt.libvirt.driver [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:20:31 compute-0 nova_compute[257802]: 2025-10-02 12:20:31.296 2 DEBUG nova.virt.libvirt.host [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:20:31 compute-0 nova_compute[257802]: 2025-10-02 12:20:31.297 2 DEBUG nova.virt.libvirt.host [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:20:31 compute-0 nova_compute[257802]: 2025-10-02 12:20:31.299 2 DEBUG nova.virt.libvirt.host [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:20:31 compute-0 nova_compute[257802]: 2025-10-02 12:20:31.300 2 DEBUG nova.virt.libvirt.host [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:20:31 compute-0 nova_compute[257802]: 2025-10-02 12:20:31.301 2 DEBUG nova.virt.libvirt.driver [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:20:31 compute-0 nova_compute[257802]: 2025-10-02 12:20:31.301 2 DEBUG nova.virt.hardware [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:20:31 compute-0 nova_compute[257802]: 2025-10-02 12:20:31.302 2 DEBUG nova.virt.hardware [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:20:31 compute-0 nova_compute[257802]: 2025-10-02 12:20:31.302 2 DEBUG nova.virt.hardware [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:20:31 compute-0 nova_compute[257802]: 2025-10-02 12:20:31.302 2 DEBUG nova.virt.hardware [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:20:31 compute-0 nova_compute[257802]: 2025-10-02 12:20:31.303 2 DEBUG nova.virt.hardware [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:20:31 compute-0 nova_compute[257802]: 2025-10-02 12:20:31.303 2 DEBUG nova.virt.hardware [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:20:31 compute-0 nova_compute[257802]: 2025-10-02 12:20:31.303 2 DEBUG nova.virt.hardware [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:20:31 compute-0 nova_compute[257802]: 2025-10-02 12:20:31.303 2 DEBUG nova.virt.hardware [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:20:31 compute-0 nova_compute[257802]: 2025-10-02 12:20:31.304 2 DEBUG nova.virt.hardware [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:20:31 compute-0 nova_compute[257802]: 2025-10-02 12:20:31.304 2 DEBUG nova.virt.hardware [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:20:31 compute-0 nova_compute[257802]: 2025-10-02 12:20:31.304 2 DEBUG nova.virt.hardware [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:20:31 compute-0 nova_compute[257802]: 2025-10-02 12:20:31.307 2 DEBUG oslo_concurrency.processutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 305 active+clean; 418 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 206 op/s
Oct 02 12:20:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:20:31 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2518281206' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:31 compute-0 nova_compute[257802]: 2025-10-02 12:20:31.736 2 DEBUG oslo_concurrency.processutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:31 compute-0 nova_compute[257802]: 2025-10-02 12:20:31.770 2 DEBUG nova.storage.rbd_utils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image e7e38e6b-74d9-470a-ad54-222ee4a47e1f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:31 compute-0 nova_compute[257802]: 2025-10-02 12:20:31.775 2 DEBUG oslo_concurrency.processutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:32 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2518281206' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:32 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3978992816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:20:32 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2458697372' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.285 2 DEBUG oslo_concurrency.processutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.287 2 DEBUG nova.virt.libvirt.vif [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:20:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-382007051',display_name='tempest-DeleteServersTestJSON-server-382007051',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-382007051',id=85,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c2c11ebecb14f3188f35ea473c4ca02',ramdisk_id='',reservation_id='r-2t1406ri',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1602490521',owner_user_name='tempest-DeleteServersTestJSON-16024905
21-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:20:27Z,user_data=None,user_id='a9f7faffac7240869a0196df1ddda7e5',uuid=e7e38e6b-74d9-470a-ad54-222ee4a47e1f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b181aa8d-06f5-47f8-a6c3-d383f70cc4ba", "address": "fa:16:3e:8d:47:99", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb181aa8d-06", "ovs_interfaceid": "b181aa8d-06f5-47f8-a6c3-d383f70cc4ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.287 2 DEBUG nova.network.os_vif_util [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converting VIF {"id": "b181aa8d-06f5-47f8-a6c3-d383f70cc4ba", "address": "fa:16:3e:8d:47:99", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb181aa8d-06", "ovs_interfaceid": "b181aa8d-06f5-47f8-a6c3-d383f70cc4ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.288 2 DEBUG nova.network.os_vif_util [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:47:99,bridge_name='br-int',has_traffic_filtering=True,id=b181aa8d-06f5-47f8-a6c3-d383f70cc4ba,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb181aa8d-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.289 2 DEBUG nova.objects.instance [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lazy-loading 'pci_devices' on Instance uuid e7e38e6b-74d9-470a-ad54-222ee4a47e1f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.311 2 DEBUG nova.virt.libvirt.driver [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:20:32 compute-0 nova_compute[257802]:   <uuid>e7e38e6b-74d9-470a-ad54-222ee4a47e1f</uuid>
Oct 02 12:20:32 compute-0 nova_compute[257802]:   <name>instance-00000055</name>
Oct 02 12:20:32 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:20:32 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:20:32 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:20:32 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:       <nova:name>tempest-DeleteServersTestJSON-server-382007051</nova:name>
Oct 02 12:20:32 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:20:31</nova:creationTime>
Oct 02 12:20:32 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:20:32 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:20:32 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:20:32 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:20:32 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:20:32 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:20:32 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:20:32 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:20:32 compute-0 nova_compute[257802]:         <nova:user uuid="a9f7faffac7240869a0196df1ddda7e5">tempest-DeleteServersTestJSON-1602490521-project-member</nova:user>
Oct 02 12:20:32 compute-0 nova_compute[257802]:         <nova:project uuid="1c2c11ebecb14f3188f35ea473c4ca02">tempest-DeleteServersTestJSON-1602490521</nova:project>
Oct 02 12:20:32 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:20:32 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:20:32 compute-0 nova_compute[257802]:         <nova:port uuid="b181aa8d-06f5-47f8-a6c3-d383f70cc4ba">
Oct 02 12:20:32 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:20:32 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:20:32 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:20:32 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <system>
Oct 02 12:20:32 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:20:32 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:20:32 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:20:32 compute-0 nova_compute[257802]:       <entry name="serial">e7e38e6b-74d9-470a-ad54-222ee4a47e1f</entry>
Oct 02 12:20:32 compute-0 nova_compute[257802]:       <entry name="uuid">e7e38e6b-74d9-470a-ad54-222ee4a47e1f</entry>
Oct 02 12:20:32 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     </system>
Oct 02 12:20:32 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:20:32 compute-0 nova_compute[257802]:   <os>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:   </os>
Oct 02 12:20:32 compute-0 nova_compute[257802]:   <features>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:   </features>
Oct 02 12:20:32 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:20:32 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:20:32 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:20:32 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/e7e38e6b-74d9-470a-ad54-222ee4a47e1f_disk">
Oct 02 12:20:32 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:       </source>
Oct 02 12:20:32 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:20:32 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:20:32 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:20:32 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/e7e38e6b-74d9-470a-ad54-222ee4a47e1f_disk.config">
Oct 02 12:20:32 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:       </source>
Oct 02 12:20:32 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:20:32 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:20:32 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:20:32 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:8d:47:99"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:       <target dev="tapb181aa8d-06"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:20:32 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/e7e38e6b-74d9-470a-ad54-222ee4a47e1f/console.log" append="off"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <video>
Oct 02 12:20:32 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     </video>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:20:32 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:20:32 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:20:32 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:20:32 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:20:32 compute-0 nova_compute[257802]: </domain>
Oct 02 12:20:32 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.313 2 DEBUG nova.compute.manager [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Preparing to wait for external event network-vif-plugged-b181aa8d-06f5-47f8-a6c3-d383f70cc4ba prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.313 2 DEBUG oslo_concurrency.lockutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "e7e38e6b-74d9-470a-ad54-222ee4a47e1f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.313 2 DEBUG oslo_concurrency.lockutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "e7e38e6b-74d9-470a-ad54-222ee4a47e1f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.314 2 DEBUG oslo_concurrency.lockutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "e7e38e6b-74d9-470a-ad54-222ee4a47e1f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.314 2 DEBUG nova.virt.libvirt.vif [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:20:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-382007051',display_name='tempest-DeleteServersTestJSON-server-382007051',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-382007051',id=85,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c2c11ebecb14f3188f35ea473c4ca02',ramdisk_id='',reservation_id='r-2t1406ri',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1602490521',owner_user_name='tempest-DeleteServersTestJSON-1602490521-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:20:27Z,user_data=None,user_id='a9f7faffac7240869a0196df1ddda7e5',uuid=e7e38e6b-74d9-470a-ad54-222ee4a47e1f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b181aa8d-06f5-47f8-a6c3-d383f70cc4ba", "address": "fa:16:3e:8d:47:99", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb181aa8d-06", "ovs_interfaceid": "b181aa8d-06f5-47f8-a6c3-d383f70cc4ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.314 2 DEBUG nova.network.os_vif_util [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converting VIF {"id": "b181aa8d-06f5-47f8-a6c3-d383f70cc4ba", "address": "fa:16:3e:8d:47:99", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb181aa8d-06", "ovs_interfaceid": "b181aa8d-06f5-47f8-a6c3-d383f70cc4ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.315 2 DEBUG nova.network.os_vif_util [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:47:99,bridge_name='br-int',has_traffic_filtering=True,id=b181aa8d-06f5-47f8-a6c3-d383f70cc4ba,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb181aa8d-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.315 2 DEBUG os_vif [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:47:99,bridge_name='br-int',has_traffic_filtering=True,id=b181aa8d-06f5-47f8-a6c3-d383f70cc4ba,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb181aa8d-06') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.316 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.316 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.318 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb181aa8d-06, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.319 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb181aa8d-06, col_values=(('external_ids', {'iface-id': 'b181aa8d-06f5-47f8-a6c3-d383f70cc4ba', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8d:47:99', 'vm-uuid': 'e7e38e6b-74d9-470a-ad54-222ee4a47e1f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:32 compute-0 NetworkManager[44987]: <info>  [1759407632.3217] manager: (tapb181aa8d-06): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/162)
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.328 2 INFO os_vif [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:47:99,bridge_name='br-int',has_traffic_filtering=True,id=b181aa8d-06f5-47f8-a6c3-d383f70cc4ba,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb181aa8d-06')
Oct 02 12:20:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:32.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.394 2 DEBUG nova.virt.libvirt.driver [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.394 2 DEBUG nova.virt.libvirt.driver [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.394 2 DEBUG nova.virt.libvirt.driver [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] No VIF found with MAC fa:16:3e:8d:47:99, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.395 2 INFO nova.virt.libvirt.driver [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Using config drive
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.420 2 DEBUG nova.storage.rbd_utils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image e7e38e6b-74d9-470a-ad54-222ee4a47e1f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.424 2 DEBUG nova.network.neutron [req-90f5dea7-c9f9-47f3-ba86-6938d42721c7 req-89630f63-3038-464b-bc86-c96dde5d9874 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Updated VIF entry in instance network info cache for port b181aa8d-06f5-47f8-a6c3-d383f70cc4ba. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.425 2 DEBUG nova.network.neutron [req-90f5dea7-c9f9-47f3-ba86-6938d42721c7 req-89630f63-3038-464b-bc86-c96dde5d9874 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Updating instance_info_cache with network_info: [{"id": "b181aa8d-06f5-47f8-a6c3-d383f70cc4ba", "address": "fa:16:3e:8d:47:99", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb181aa8d-06", "ovs_interfaceid": "b181aa8d-06f5-47f8-a6c3-d383f70cc4ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:20:32 compute-0 podman[308946]: 2025-10-02 12:20:32.432530465 +0000 UTC m=+0.072195079 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.build-date=20251001)
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.446 2 DEBUG oslo_concurrency.lockutils [req-90f5dea7-c9f9-47f3-ba86-6938d42721c7 req-89630f63-3038-464b-bc86-c96dde5d9874 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-e7e38e6b-74d9-470a-ad54-222ee4a47e1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:20:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:32.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.701 2 INFO nova.virt.libvirt.driver [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Creating config drive at /var/lib/nova/instances/e7e38e6b-74d9-470a-ad54-222ee4a47e1f/disk.config
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.705 2 DEBUG oslo_concurrency.processutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e7e38e6b-74d9-470a-ad54-222ee4a47e1f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn67qs41u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.838 2 DEBUG oslo_concurrency.processutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e7e38e6b-74d9-470a-ad54-222ee4a47e1f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn67qs41u" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.871 2 DEBUG nova.storage.rbd_utils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image e7e38e6b-74d9-470a-ad54-222ee4a47e1f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:32 compute-0 nova_compute[257802]: 2025-10-02 12:20:32.875 2 DEBUG oslo_concurrency.processutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e7e38e6b-74d9-470a-ad54-222ee4a47e1f/disk.config e7e38e6b-74d9-470a-ad54-222ee4a47e1f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:33 compute-0 nova_compute[257802]: 2025-10-02 12:20:33.059 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407618.0573363, ac0f45a4-0e95-492b-be4f-14fe19840399 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:20:33 compute-0 nova_compute[257802]: 2025-10-02 12:20:33.060 2 INFO nova.compute.manager [-] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] VM Stopped (Lifecycle Event)
Oct 02 12:20:33 compute-0 nova_compute[257802]: 2025-10-02 12:20:33.085 2 DEBUG nova.compute.manager [None req-fef465ba-e1e6-4d60-bf3d-eabf3ac73994 - - - - - -] [instance: ac0f45a4-0e95-492b-be4f-14fe19840399] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:33 compute-0 nova_compute[257802]: 2025-10-02 12:20:33.125 2 DEBUG oslo_concurrency.processutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e7e38e6b-74d9-470a-ad54-222ee4a47e1f/disk.config e7e38e6b-74d9-470a-ad54-222ee4a47e1f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.250s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:33 compute-0 nova_compute[257802]: 2025-10-02 12:20:33.125 2 INFO nova.virt.libvirt.driver [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Deleting local config drive /var/lib/nova/instances/e7e38e6b-74d9-470a-ad54-222ee4a47e1f/disk.config because it was imported into RBD.
Oct 02 12:20:33 compute-0 ceph-mon[73607]: pgmap v1711: 305 pgs: 305 active+clean; 418 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 206 op/s
Oct 02 12:20:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2458697372' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:33 compute-0 kernel: tapb181aa8d-06: entered promiscuous mode
Oct 02 12:20:33 compute-0 NetworkManager[44987]: <info>  [1759407633.1885] manager: (tapb181aa8d-06): new Tun device (/org/freedesktop/NetworkManager/Devices/163)
Oct 02 12:20:33 compute-0 nova_compute[257802]: 2025-10-02 12:20:33.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:33 compute-0 ovn_controller[148183]: 2025-10-02T12:20:33Z|00360|binding|INFO|Claiming lport b181aa8d-06f5-47f8-a6c3-d383f70cc4ba for this chassis.
Oct 02 12:20:33 compute-0 ovn_controller[148183]: 2025-10-02T12:20:33Z|00361|binding|INFO|b181aa8d-06f5-47f8-a6c3-d383f70cc4ba: Claiming fa:16:3e:8d:47:99 10.100.0.11
Oct 02 12:20:33 compute-0 nova_compute[257802]: 2025-10-02 12:20:33.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:33.211 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:47:99 10.100.0.11'], port_security=['fa:16:3e:8d:47:99 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e7e38e6b-74d9-470a-ad54-222ee4a47e1f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7754c79a-cca5-48c7-9169-831eaad23ccc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c2c11ebecb14f3188f35ea473c4ca02', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3c0d053f-a096-4f8c-8162-5ef19e29b5d7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=45b5774e-2213-45dd-ab74-f2a3868d167c, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=b181aa8d-06f5-47f8-a6c3-d383f70cc4ba) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:33.213 158261 INFO neutron.agent.ovn.metadata.agent [-] Port b181aa8d-06f5-47f8-a6c3-d383f70cc4ba in datapath 7754c79a-cca5-48c7-9169-831eaad23ccc bound to our chassis
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:33.216 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7754c79a-cca5-48c7-9169-831eaad23ccc
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:33.234 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5a05e741-debc-4136-8ab7-b0cfa912a158]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:33.235 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7754c79a-c1 in ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:20:33 compute-0 systemd-udevd[309043]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:33.239 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7754c79a-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:33.239 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2f04c01a-eab9-43eb-a854-cec6928ff7eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:33 compute-0 systemd-machined[211836]: New machine qemu-39-instance-00000055.
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:33.241 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[adff51ff-d25d-4970-8f3e-b23891c5736a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:33 compute-0 NetworkManager[44987]: <info>  [1759407633.2575] device (tapb181aa8d-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:20:33 compute-0 NetworkManager[44987]: <info>  [1759407633.2585] device (tapb181aa8d-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:33.256 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[df7d4a4e-dc23-4eed-86a0-7a1614c2ecb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:33 compute-0 systemd[1]: Started Virtual Machine qemu-39-instance-00000055.
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:33.289 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ac13d466-f0a9-45d1-b91f-a14af08ec82b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:33 compute-0 ovn_controller[148183]: 2025-10-02T12:20:33Z|00362|binding|INFO|Setting lport b181aa8d-06f5-47f8-a6c3-d383f70cc4ba ovn-installed in OVS
Oct 02 12:20:33 compute-0 ovn_controller[148183]: 2025-10-02T12:20:33Z|00363|binding|INFO|Setting lport b181aa8d-06f5-47f8-a6c3-d383f70cc4ba up in Southbound
Oct 02 12:20:33 compute-0 nova_compute[257802]: 2025-10-02 12:20:33.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:33.330 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[7a6df93c-8f52-4363-9e59-517948ab2bab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:33 compute-0 NetworkManager[44987]: <info>  [1759407633.3384] manager: (tap7754c79a-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/164)
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:33.337 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cc3d08bb-c0b5-46d6-af03-9a5eedcf760a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:33 compute-0 systemd-udevd[309046]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:20:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 418 MiB data, 844 MiB used, 20 GiB / 21 GiB avail; 79 KiB/s rd, 2.1 MiB/s wr, 118 op/s
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:33.383 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[4c7cd580-d2b8-4d47-9887-92857d86de30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:33.387 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[f4c8d00b-a771-4e3a-8062-1d341f3d2ae5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:33 compute-0 NetworkManager[44987]: <info>  [1759407633.4071] device (tap7754c79a-c0): carrier: link connected
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:33.412 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[9d6e3464-2413-4708-a675-291e7a2c91cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:33.430 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[17f128f6-56bd-42db-a41e-484a6ab6e01a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7754c79a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:b0:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 107], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570103, 'reachable_time': 17954, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309077, 'error': None, 'target': 'ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:33.447 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6e45939c-ebe6-44e0-8538-8f19ab4d1b5e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:b018'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570103, 'tstamp': 570103}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309078, 'error': None, 'target': 'ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:33.465 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[434e40e2-fc01-44ec-a24e-c4e1e9d61096]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7754c79a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:b0:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 107], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570103, 'reachable_time': 17954, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309079, 'error': None, 'target': 'ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:33 compute-0 nova_compute[257802]: 2025-10-02 12:20:33.493 2 DEBUG nova.compute.manager [req-97d5b6cf-3e54-4eda-9a56-c0e7ca97a0fb req-289a92e3-7328-4010-9cde-c7d1376c93fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Received event network-vif-plugged-b181aa8d-06f5-47f8-a6c3-d383f70cc4ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:33 compute-0 nova_compute[257802]: 2025-10-02 12:20:33.493 2 DEBUG oslo_concurrency.lockutils [req-97d5b6cf-3e54-4eda-9a56-c0e7ca97a0fb req-289a92e3-7328-4010-9cde-c7d1376c93fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e7e38e6b-74d9-470a-ad54-222ee4a47e1f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:33 compute-0 nova_compute[257802]: 2025-10-02 12:20:33.494 2 DEBUG oslo_concurrency.lockutils [req-97d5b6cf-3e54-4eda-9a56-c0e7ca97a0fb req-289a92e3-7328-4010-9cde-c7d1376c93fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e7e38e6b-74d9-470a-ad54-222ee4a47e1f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:33 compute-0 nova_compute[257802]: 2025-10-02 12:20:33.494 2 DEBUG oslo_concurrency.lockutils [req-97d5b6cf-3e54-4eda-9a56-c0e7ca97a0fb req-289a92e3-7328-4010-9cde-c7d1376c93fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e7e38e6b-74d9-470a-ad54-222ee4a47e1f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:33 compute-0 nova_compute[257802]: 2025-10-02 12:20:33.494 2 DEBUG nova.compute.manager [req-97d5b6cf-3e54-4eda-9a56-c0e7ca97a0fb req-289a92e3-7328-4010-9cde-c7d1376c93fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Processing event network-vif-plugged-b181aa8d-06f5-47f8-a6c3-d383f70cc4ba _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:33.495 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cfafa457-3cce-4e74-9140-c8c71987f982]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:33.544 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b47356f0-bd9a-4f47-a6a7-75c2401755bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:33.545 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7754c79a-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:33.545 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:33.545 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7754c79a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:33 compute-0 nova_compute[257802]: 2025-10-02 12:20:33.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:33 compute-0 kernel: tap7754c79a-c0: entered promiscuous mode
Oct 02 12:20:33 compute-0 NetworkManager[44987]: <info>  [1759407633.5491] manager: (tap7754c79a-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/165)
Oct 02 12:20:33 compute-0 nova_compute[257802]: 2025-10-02 12:20:33.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:33.555 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7754c79a-c0, col_values=(('external_ids', {'iface-id': 'b1ce5636-6283-470c-ab5e-aac212c1256d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:33 compute-0 nova_compute[257802]: 2025-10-02 12:20:33.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:33 compute-0 ovn_controller[148183]: 2025-10-02T12:20:33Z|00364|binding|INFO|Releasing lport b1ce5636-6283-470c-ab5e-aac212c1256d from this chassis (sb_readonly=0)
Oct 02 12:20:33 compute-0 nova_compute[257802]: 2025-10-02 12:20:33.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:33.570 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7754c79a-cca5-48c7-9169-831eaad23ccc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7754c79a-cca5-48c7-9169-831eaad23ccc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:33.571 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[aa2edeab-6e48-42eb-8134-40c029658285]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:33.572 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-7754c79a-cca5-48c7-9169-831eaad23ccc
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/7754c79a-cca5-48c7-9169-831eaad23ccc.pid.haproxy
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 7754c79a-cca5-48c7-9169-831eaad23ccc
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:33.572 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc', 'env', 'PROCESS_TAG=haproxy-7754c79a-cca5-48c7-9169-831eaad23ccc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7754c79a-cca5-48c7-9169-831eaad23ccc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:20:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Oct 02 12:20:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Oct 02 12:20:33 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Oct 02 12:20:34 compute-0 podman[309130]: 2025-10-02 12:20:34.019560529 +0000 UTC m=+0.087375051 container create 8c5188c4c18b98d1b0f99617d145b02f8c95aaff8218dd998a0e6c53c789c90b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:20:34 compute-0 podman[309130]: 2025-10-02 12:20:33.953644435 +0000 UTC m=+0.021458987 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:20:34 compute-0 systemd[1]: Started libpod-conmon-8c5188c4c18b98d1b0f99617d145b02f8c95aaff8218dd998a0e6c53c789c90b.scope.
Oct 02 12:20:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:20:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ff381cc6de8d648bf8b0901bfc812940c837da02e550e88ee8a8e0e2c733953/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:34 compute-0 podman[309130]: 2025-10-02 12:20:34.117754784 +0000 UTC m=+0.185569336 container init 8c5188c4c18b98d1b0f99617d145b02f8c95aaff8218dd998a0e6c53c789c90b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 12:20:34 compute-0 podman[309130]: 2025-10-02 12:20:34.123323141 +0000 UTC m=+0.191137663 container start 8c5188c4c18b98d1b0f99617d145b02f8c95aaff8218dd998a0e6c53c789c90b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:20:34 compute-0 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[309169]: [NOTICE]   (309173) : New worker (309175) forked
Oct 02 12:20:34 compute-0 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[309169]: [NOTICE]   (309173) : Loading success.
Oct 02 12:20:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:20:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:34.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:20:34 compute-0 nova_compute[257802]: 2025-10-02 12:20:34.476 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407634.4752734, e7e38e6b-74d9-470a-ad54-222ee4a47e1f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:20:34 compute-0 nova_compute[257802]: 2025-10-02 12:20:34.477 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] VM Started (Lifecycle Event)
Oct 02 12:20:34 compute-0 nova_compute[257802]: 2025-10-02 12:20:34.481 2 DEBUG nova.compute.manager [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:20:34 compute-0 nova_compute[257802]: 2025-10-02 12:20:34.486 2 DEBUG nova.virt.libvirt.driver [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:20:34 compute-0 nova_compute[257802]: 2025-10-02 12:20:34.491 2 INFO nova.virt.libvirt.driver [-] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Instance spawned successfully.
Oct 02 12:20:34 compute-0 nova_compute[257802]: 2025-10-02 12:20:34.491 2 DEBUG nova.virt.libvirt.driver [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:20:34 compute-0 nova_compute[257802]: 2025-10-02 12:20:34.500 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:34 compute-0 nova_compute[257802]: 2025-10-02 12:20:34.504 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:20:34 compute-0 nova_compute[257802]: 2025-10-02 12:20:34.515 2 DEBUG nova.virt.libvirt.driver [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:34 compute-0 nova_compute[257802]: 2025-10-02 12:20:34.516 2 DEBUG nova.virt.libvirt.driver [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:34 compute-0 nova_compute[257802]: 2025-10-02 12:20:34.516 2 DEBUG nova.virt.libvirt.driver [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:34 compute-0 nova_compute[257802]: 2025-10-02 12:20:34.517 2 DEBUG nova.virt.libvirt.driver [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:34 compute-0 nova_compute[257802]: 2025-10-02 12:20:34.517 2 DEBUG nova.virt.libvirt.driver [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:34 compute-0 nova_compute[257802]: 2025-10-02 12:20:34.517 2 DEBUG nova.virt.libvirt.driver [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:34 compute-0 nova_compute[257802]: 2025-10-02 12:20:34.544 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:20:34 compute-0 nova_compute[257802]: 2025-10-02 12:20:34.545 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407634.4756172, e7e38e6b-74d9-470a-ad54-222ee4a47e1f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:20:34 compute-0 nova_compute[257802]: 2025-10-02 12:20:34.545 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] VM Paused (Lifecycle Event)
Oct 02 12:20:34 compute-0 nova_compute[257802]: 2025-10-02 12:20:34.598 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:34 compute-0 nova_compute[257802]: 2025-10-02 12:20:34.602 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407634.4842343, e7e38e6b-74d9-470a-ad54-222ee4a47e1f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:20:34 compute-0 nova_compute[257802]: 2025-10-02 12:20:34.602 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] VM Resumed (Lifecycle Event)
Oct 02 12:20:34 compute-0 nova_compute[257802]: 2025-10-02 12:20:34.626 2 INFO nova.compute.manager [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Took 7.35 seconds to spawn the instance on the hypervisor.
Oct 02 12:20:34 compute-0 nova_compute[257802]: 2025-10-02 12:20:34.626 2 DEBUG nova.compute.manager [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:34 compute-0 nova_compute[257802]: 2025-10-02 12:20:34.628 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:34 compute-0 nova_compute[257802]: 2025-10-02 12:20:34.635 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:20:34 compute-0 nova_compute[257802]: 2025-10-02 12:20:34.672 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:20:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:34.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:34 compute-0 nova_compute[257802]: 2025-10-02 12:20:34.729 2 INFO nova.compute.manager [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Took 8.30 seconds to build instance.
Oct 02 12:20:34 compute-0 nova_compute[257802]: 2025-10-02 12:20:34.750 2 DEBUG oslo_concurrency.lockutils [None req-5f4cd0c7-57fc-465c-8858-83fe5ee9e7e9 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "e7e38e6b-74d9-470a-ad54-222ee4a47e1f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.390s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:34 compute-0 ceph-mon[73607]: pgmap v1712: 305 pgs: 305 active+clean; 418 MiB data, 844 MiB used, 20 GiB / 21 GiB avail; 79 KiB/s rd, 2.1 MiB/s wr, 118 op/s
Oct 02 12:20:34 compute-0 ceph-mon[73607]: osdmap e255: 3 total, 3 up, 3 in
Oct 02 12:20:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1608713711' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 404 MiB data, 843 MiB used, 20 GiB / 21 GiB avail; 83 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Oct 02 12:20:35 compute-0 nova_compute[257802]: 2025-10-02 12:20:35.581 2 DEBUG nova.compute.manager [req-d1619aa5-e29a-44da-b3f1-2995923c5638 req-b27b29d3-005a-40ab-943d-da79cb035440 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Received event network-vif-plugged-b181aa8d-06f5-47f8-a6c3-d383f70cc4ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:35 compute-0 nova_compute[257802]: 2025-10-02 12:20:35.581 2 DEBUG oslo_concurrency.lockutils [req-d1619aa5-e29a-44da-b3f1-2995923c5638 req-b27b29d3-005a-40ab-943d-da79cb035440 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e7e38e6b-74d9-470a-ad54-222ee4a47e1f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:35 compute-0 nova_compute[257802]: 2025-10-02 12:20:35.582 2 DEBUG oslo_concurrency.lockutils [req-d1619aa5-e29a-44da-b3f1-2995923c5638 req-b27b29d3-005a-40ab-943d-da79cb035440 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e7e38e6b-74d9-470a-ad54-222ee4a47e1f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:35 compute-0 nova_compute[257802]: 2025-10-02 12:20:35.583 2 DEBUG oslo_concurrency.lockutils [req-d1619aa5-e29a-44da-b3f1-2995923c5638 req-b27b29d3-005a-40ab-943d-da79cb035440 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e7e38e6b-74d9-470a-ad54-222ee4a47e1f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:35 compute-0 nova_compute[257802]: 2025-10-02 12:20:35.583 2 DEBUG nova.compute.manager [req-d1619aa5-e29a-44da-b3f1-2995923c5638 req-b27b29d3-005a-40ab-943d-da79cb035440 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] No waiting events found dispatching network-vif-plugged-b181aa8d-06f5-47f8-a6c3-d383f70cc4ba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:20:35 compute-0 nova_compute[257802]: 2025-10-02 12:20:35.583 2 WARNING nova.compute.manager [req-d1619aa5-e29a-44da-b3f1-2995923c5638 req-b27b29d3-005a-40ab-943d-da79cb035440 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Received unexpected event network-vif-plugged-b181aa8d-06f5-47f8-a6c3-d383f70cc4ba for instance with vm_state active and task_state None.
Oct 02 12:20:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:36.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:36.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:36 compute-0 ceph-mon[73607]: pgmap v1714: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 404 MiB data, 843 MiB used, 20 GiB / 21 GiB avail; 83 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Oct 02 12:20:37 compute-0 nova_compute[257802]: 2025-10-02 12:20:37.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:37 compute-0 nova_compute[257802]: 2025-10-02 12:20:37.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 404 MiB data, 843 MiB used, 20 GiB / 21 GiB avail; 83 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Oct 02 12:20:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Oct 02 12:20:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1064520525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Oct 02 12:20:37 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Oct 02 12:20:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:20:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:38.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:20:38 compute-0 nova_compute[257802]: 2025-10-02 12:20:38.365 2 DEBUG oslo_concurrency.lockutils [None req-baf9db43-30d8-4b6f-b83d-4ae80267b6df a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "e7e38e6b-74d9-470a-ad54-222ee4a47e1f" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:38 compute-0 nova_compute[257802]: 2025-10-02 12:20:38.365 2 DEBUG oslo_concurrency.lockutils [None req-baf9db43-30d8-4b6f-b83d-4ae80267b6df a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "e7e38e6b-74d9-470a-ad54-222ee4a47e1f" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:38 compute-0 nova_compute[257802]: 2025-10-02 12:20:38.386 2 DEBUG nova.objects.instance [None req-baf9db43-30d8-4b6f-b83d-4ae80267b6df a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lazy-loading 'flavor' on Instance uuid e7e38e6b-74d9-470a-ad54-222ee4a47e1f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:38 compute-0 nova_compute[257802]: 2025-10-02 12:20:38.432 2 DEBUG oslo_concurrency.lockutils [None req-baf9db43-30d8-4b6f-b83d-4ae80267b6df a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "e7e38e6b-74d9-470a-ad54-222ee4a47e1f" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:38 compute-0 nova_compute[257802]: 2025-10-02 12:20:38.699 2 DEBUG oslo_concurrency.lockutils [None req-baf9db43-30d8-4b6f-b83d-4ae80267b6df a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "e7e38e6b-74d9-470a-ad54-222ee4a47e1f" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:38 compute-0 nova_compute[257802]: 2025-10-02 12:20:38.700 2 DEBUG oslo_concurrency.lockutils [None req-baf9db43-30d8-4b6f-b83d-4ae80267b6df a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "e7e38e6b-74d9-470a-ad54-222ee4a47e1f" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:38 compute-0 nova_compute[257802]: 2025-10-02 12:20:38.700 2 INFO nova.compute.manager [None req-baf9db43-30d8-4b6f-b83d-4ae80267b6df a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Attaching volume 041421bd-9b54-4471-8009-d216600b627c to /dev/vdb
Oct 02 12:20:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:38.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:38 compute-0 nova_compute[257802]: 2025-10-02 12:20:38.872 2 DEBUG os_brick.utils [None req-baf9db43-30d8-4b6f-b83d-4ae80267b6df a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:20:38 compute-0 nova_compute[257802]: 2025-10-02 12:20:38.873 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:38 compute-0 nova_compute[257802]: 2025-10-02 12:20:38.885 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:38 compute-0 nova_compute[257802]: 2025-10-02 12:20:38.886 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[c9265a76-613c-4c71-b15a-83d5b9e3fcfd]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:38 compute-0 nova_compute[257802]: 2025-10-02 12:20:38.888 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:38 compute-0 nova_compute[257802]: 2025-10-02 12:20:38.896 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:38 compute-0 nova_compute[257802]: 2025-10-02 12:20:38.896 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[6f233378-4faa-45f3-8a68-5ebfaf3c566c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:38 compute-0 nova_compute[257802]: 2025-10-02 12:20:38.898 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:38 compute-0 nova_compute[257802]: 2025-10-02 12:20:38.906 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:38 compute-0 nova_compute[257802]: 2025-10-02 12:20:38.907 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd88c60-e128-45ca-adf8-4828654bf369]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:38 compute-0 nova_compute[257802]: 2025-10-02 12:20:38.908 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[d703783d-0071-4922-af29-4a9dfba4b7fe]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:38 compute-0 nova_compute[257802]: 2025-10-02 12:20:38.909 2 DEBUG oslo_concurrency.processutils [None req-baf9db43-30d8-4b6f-b83d-4ae80267b6df a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:38 compute-0 nova_compute[257802]: 2025-10-02 12:20:38.939 2 DEBUG oslo_concurrency.processutils [None req-baf9db43-30d8-4b6f-b83d-4ae80267b6df a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:38 compute-0 nova_compute[257802]: 2025-10-02 12:20:38.943 2 DEBUG os_brick.initiator.connectors.lightos [None req-baf9db43-30d8-4b6f-b83d-4ae80267b6df a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:20:38 compute-0 nova_compute[257802]: 2025-10-02 12:20:38.944 2 DEBUG os_brick.initiator.connectors.lightos [None req-baf9db43-30d8-4b6f-b83d-4ae80267b6df a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:20:38 compute-0 nova_compute[257802]: 2025-10-02 12:20:38.944 2 DEBUG os_brick.initiator.connectors.lightos [None req-baf9db43-30d8-4b6f-b83d-4ae80267b6df a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:20:38 compute-0 nova_compute[257802]: 2025-10-02 12:20:38.945 2 DEBUG os_brick.utils [None req-baf9db43-30d8-4b6f-b83d-4ae80267b6df a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] <== get_connector_properties: return (72ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:20:38 compute-0 nova_compute[257802]: 2025-10-02 12:20:38.945 2 DEBUG nova.virt.block_device [None req-baf9db43-30d8-4b6f-b83d-4ae80267b6df a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Updating existing volume attachment record: f0d86544-7d7f-4db7-abaf-e2ece5e411a7 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:20:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Oct 02 12:20:38 compute-0 ceph-mon[73607]: pgmap v1715: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 404 MiB data, 843 MiB used, 20 GiB / 21 GiB avail; 83 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Oct 02 12:20:38 compute-0 ceph-mon[73607]: osdmap e256: 3 total, 3 up, 3 in
Oct 02 12:20:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Oct 02 12:20:39 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Oct 02 12:20:39 compute-0 sudo[309194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:39 compute-0 sudo[309194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:39 compute-0 sudo[309194]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:39 compute-0 sudo[309219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:20:39 compute-0 sudo[309219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:39 compute-0 sudo[309219]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:39 compute-0 sudo[309244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:39 compute-0 sudo[309244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:39 compute-0 sudo[309244]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:39 compute-0 sudo[309249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:39 compute-0 sudo[309249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:39 compute-0 sudo[309249]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 333 MiB data, 807 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.2 MiB/s wr, 207 op/s
Oct 02 12:20:39 compute-0 sudo[309293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:20:39 compute-0 sudo[309293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:39 compute-0 sudo[309307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:39 compute-0 sudo[309307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:39 compute-0 sudo[309307]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:39 compute-0 nova_compute[257802]: 2025-10-02 12:20:39.740 2 DEBUG nova.objects.instance [None req-baf9db43-30d8-4b6f-b83d-4ae80267b6df a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lazy-loading 'flavor' on Instance uuid e7e38e6b-74d9-470a-ad54-222ee4a47e1f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:39 compute-0 nova_compute[257802]: 2025-10-02 12:20:39.774 2 DEBUG nova.virt.libvirt.driver [None req-baf9db43-30d8-4b6f-b83d-4ae80267b6df a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Attempting to attach volume 041421bd-9b54-4471-8009-d216600b627c with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 12:20:39 compute-0 nova_compute[257802]: 2025-10-02 12:20:39.777 2 DEBUG nova.virt.libvirt.guest [None req-baf9db43-30d8-4b6f-b83d-4ae80267b6df a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 12:20:39 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:20:39 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-041421bd-9b54-4471-8009-d216600b627c">
Oct 02 12:20:39 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:20:39 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:20:39 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:20:39 compute-0 nova_compute[257802]:   </source>
Oct 02 12:20:39 compute-0 nova_compute[257802]:   <auth username="openstack">
Oct 02 12:20:39 compute-0 nova_compute[257802]:     <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:20:39 compute-0 nova_compute[257802]:   </auth>
Oct 02 12:20:39 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:20:39 compute-0 nova_compute[257802]:   <serial>041421bd-9b54-4471-8009-d216600b627c</serial>
Oct 02 12:20:39 compute-0 nova_compute[257802]: </disk>
Oct 02 12:20:39 compute-0 nova_compute[257802]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:20:39 compute-0 sudo[309293]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:39 compute-0 nova_compute[257802]: 2025-10-02 12:20:39.928 2 DEBUG nova.virt.libvirt.driver [None req-baf9db43-30d8-4b6f-b83d-4ae80267b6df a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:20:39 compute-0 nova_compute[257802]: 2025-10-02 12:20:39.930 2 DEBUG nova.virt.libvirt.driver [None req-baf9db43-30d8-4b6f-b83d-4ae80267b6df a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:20:39 compute-0 nova_compute[257802]: 2025-10-02 12:20:39.931 2 DEBUG nova.virt.libvirt.driver [None req-baf9db43-30d8-4b6f-b83d-4ae80267b6df a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:20:39 compute-0 nova_compute[257802]: 2025-10-02 12:20:39.931 2 DEBUG nova.virt.libvirt.driver [None req-baf9db43-30d8-4b6f-b83d-4ae80267b6df a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] No VIF found with MAC fa:16:3e:8d:47:99, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:20:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Oct 02 12:20:40 compute-0 ceph-mon[73607]: osdmap e257: 3 total, 3 up, 3 in
Oct 02 12:20:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1583138155' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Oct 02 12:20:40 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Oct 02 12:20:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:20:40 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:20:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:20:40 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:20:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:20:40 compute-0 nova_compute[257802]: 2025-10-02 12:20:40.164 2 DEBUG oslo_concurrency.lockutils [None req-baf9db43-30d8-4b6f-b83d-4ae80267b6df a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "e7e38e6b-74d9-470a-ad54-222ee4a47e1f" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.464s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:40 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:20:40 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 0f13ddf5-88eb-4460-a717-96c1dc9da562 does not exist
Oct 02 12:20:40 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev be40f521-d158-4b70-8957-a6f74da18902 does not exist
Oct 02 12:20:40 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev fb2bda05-fcbf-4624-867b-71c179e76c14 does not exist
Oct 02 12:20:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:20:40 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:20:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:20:40 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:20:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:20:40 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:20:40 compute-0 sudo[309394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:40 compute-0 sudo[309394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:40 compute-0 sudo[309394]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:40 compute-0 sudo[309419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:20:40 compute-0 sudo[309419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:40 compute-0 sudo[309419]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:40 compute-0 sudo[309444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:40 compute-0 sudo[309444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:40.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:40 compute-0 sudo[309444]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:40 compute-0 sudo[309469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:20:40 compute-0 sudo[309469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:20:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:40.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:20:40 compute-0 podman[309534]: 2025-10-02 12:20:40.725087333 +0000 UTC m=+0.051908572 container create 28f8f6036cb093c39e7dc757b4a141ad0d1ee6c91beab5b512874a6478011908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:20:40 compute-0 systemd[1]: Started libpod-conmon-28f8f6036cb093c39e7dc757b4a141ad0d1ee6c91beab5b512874a6478011908.scope.
Oct 02 12:20:40 compute-0 podman[309534]: 2025-10-02 12:20:40.698489282 +0000 UTC m=+0.025310551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:20:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:20:40 compute-0 podman[309534]: 2025-10-02 12:20:40.834229117 +0000 UTC m=+0.161050376 container init 28f8f6036cb093c39e7dc757b4a141ad0d1ee6c91beab5b512874a6478011908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:20:40 compute-0 podman[309534]: 2025-10-02 12:20:40.846062047 +0000 UTC m=+0.172883306 container start 28f8f6036cb093c39e7dc757b4a141ad0d1ee6c91beab5b512874a6478011908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 12:20:40 compute-0 thirsty_franklin[309551]: 167 167
Oct 02 12:20:40 compute-0 systemd[1]: libpod-28f8f6036cb093c39e7dc757b4a141ad0d1ee6c91beab5b512874a6478011908.scope: Deactivated successfully.
Oct 02 12:20:40 compute-0 podman[309534]: 2025-10-02 12:20:40.860581312 +0000 UTC m=+0.187402551 container attach 28f8f6036cb093c39e7dc757b4a141ad0d1ee6c91beab5b512874a6478011908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_franklin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:20:40 compute-0 podman[309534]: 2025-10-02 12:20:40.862428557 +0000 UTC m=+0.189249796 container died 28f8f6036cb093c39e7dc757b4a141ad0d1ee6c91beab5b512874a6478011908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:20:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-2713223d17c3fface837afc868c5983113d990ebf1568fdccf8c72d7e49e61ce-merged.mount: Deactivated successfully.
Oct 02 12:20:40 compute-0 podman[309534]: 2025-10-02 12:20:40.961214907 +0000 UTC m=+0.288036146 container remove 28f8f6036cb093c39e7dc757b4a141ad0d1ee6c91beab5b512874a6478011908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_franklin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:20:40 compute-0 systemd[1]: libpod-conmon-28f8f6036cb093c39e7dc757b4a141ad0d1ee6c91beab5b512874a6478011908.scope: Deactivated successfully.
Oct 02 12:20:41 compute-0 ceph-mon[73607]: pgmap v1718: 305 pgs: 305 active+clean; 333 MiB data, 807 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.2 MiB/s wr, 207 op/s
Oct 02 12:20:41 compute-0 ceph-mon[73607]: osdmap e258: 3 total, 3 up, 3 in
Oct 02 12:20:41 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:20:41 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:20:41 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:20:41 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:20:41 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:20:41 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:20:41 compute-0 podman[309576]: 2025-10-02 12:20:41.13891083 +0000 UTC m=+0.047486944 container create e636c7362f98961533c008105f1b6a69fa262cbabddb30446d3b34076caef8de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:20:41 compute-0 systemd[1]: Started libpod-conmon-e636c7362f98961533c008105f1b6a69fa262cbabddb30446d3b34076caef8de.scope.
Oct 02 12:20:41 compute-0 podman[309576]: 2025-10-02 12:20:41.115893906 +0000 UTC m=+0.024470040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:20:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a98a604680aab2e5b7e293c2273487d355d526310aa62c6dc9d10fdf8ea48837/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a98a604680aab2e5b7e293c2273487d355d526310aa62c6dc9d10fdf8ea48837/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a98a604680aab2e5b7e293c2273487d355d526310aa62c6dc9d10fdf8ea48837/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a98a604680aab2e5b7e293c2273487d355d526310aa62c6dc9d10fdf8ea48837/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a98a604680aab2e5b7e293c2273487d355d526310aa62c6dc9d10fdf8ea48837/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:41 compute-0 podman[309576]: 2025-10-02 12:20:41.25811168 +0000 UTC m=+0.166687794 container init e636c7362f98961533c008105f1b6a69fa262cbabddb30446d3b34076caef8de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chebyshev, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:20:41 compute-0 podman[309576]: 2025-10-02 12:20:41.267808818 +0000 UTC m=+0.176384922 container start e636c7362f98961533c008105f1b6a69fa262cbabddb30446d3b34076caef8de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 12:20:41 compute-0 podman[309576]: 2025-10-02 12:20:41.27571173 +0000 UTC m=+0.184287894 container attach e636c7362f98961533c008105f1b6a69fa262cbabddb30446d3b34076caef8de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:20:41 compute-0 nova_compute[257802]: 2025-10-02 12:20:41.304 2 DEBUG oslo_concurrency.lockutils [None req-fb04c047-c790-410d-a005-549ad77da823 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "e7e38e6b-74d9-470a-ad54-222ee4a47e1f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:41 compute-0 nova_compute[257802]: 2025-10-02 12:20:41.307 2 DEBUG oslo_concurrency.lockutils [None req-fb04c047-c790-410d-a005-549ad77da823 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "e7e38e6b-74d9-470a-ad54-222ee4a47e1f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:41 compute-0 nova_compute[257802]: 2025-10-02 12:20:41.307 2 DEBUG oslo_concurrency.lockutils [None req-fb04c047-c790-410d-a005-549ad77da823 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "e7e38e6b-74d9-470a-ad54-222ee4a47e1f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:41 compute-0 nova_compute[257802]: 2025-10-02 12:20:41.308 2 DEBUG oslo_concurrency.lockutils [None req-fb04c047-c790-410d-a005-549ad77da823 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "e7e38e6b-74d9-470a-ad54-222ee4a47e1f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:41 compute-0 nova_compute[257802]: 2025-10-02 12:20:41.308 2 DEBUG oslo_concurrency.lockutils [None req-fb04c047-c790-410d-a005-549ad77da823 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "e7e38e6b-74d9-470a-ad54-222ee4a47e1f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:41 compute-0 nova_compute[257802]: 2025-10-02 12:20:41.310 2 INFO nova.compute.manager [None req-fb04c047-c790-410d-a005-549ad77da823 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Terminating instance
Oct 02 12:20:41 compute-0 nova_compute[257802]: 2025-10-02 12:20:41.311 2 DEBUG nova.compute.manager [None req-fb04c047-c790-410d-a005-549ad77da823 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:20:41 compute-0 kernel: tapb181aa8d-06 (unregistering): left promiscuous mode
Oct 02 12:20:41 compute-0 NetworkManager[44987]: <info>  [1759407641.3735] device (tapb181aa8d-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:20:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 305 active+clean; 357 MiB data, 814 MiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 6.5 MiB/s wr, 335 op/s
Oct 02 12:20:41 compute-0 ovn_controller[148183]: 2025-10-02T12:20:41Z|00365|binding|INFO|Releasing lport b181aa8d-06f5-47f8-a6c3-d383f70cc4ba from this chassis (sb_readonly=0)
Oct 02 12:20:41 compute-0 nova_compute[257802]: 2025-10-02 12:20:41.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:41 compute-0 ovn_controller[148183]: 2025-10-02T12:20:41Z|00366|binding|INFO|Setting lport b181aa8d-06f5-47f8-a6c3-d383f70cc4ba down in Southbound
Oct 02 12:20:41 compute-0 ovn_controller[148183]: 2025-10-02T12:20:41Z|00367|binding|INFO|Removing iface tapb181aa8d-06 ovn-installed in OVS
Oct 02 12:20:41 compute-0 nova_compute[257802]: 2025-10-02 12:20:41.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:41.400 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:47:99 10.100.0.11'], port_security=['fa:16:3e:8d:47:99 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e7e38e6b-74d9-470a-ad54-222ee4a47e1f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7754c79a-cca5-48c7-9169-831eaad23ccc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c2c11ebecb14f3188f35ea473c4ca02', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3c0d053f-a096-4f8c-8162-5ef19e29b5d7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=45b5774e-2213-45dd-ab74-f2a3868d167c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=b181aa8d-06f5-47f8-a6c3-d383f70cc4ba) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:20:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:41.401 158261 INFO neutron.agent.ovn.metadata.agent [-] Port b181aa8d-06f5-47f8-a6c3-d383f70cc4ba in datapath 7754c79a-cca5-48c7-9169-831eaad23ccc unbound from our chassis
Oct 02 12:20:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:41.403 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7754c79a-cca5-48c7-9169-831eaad23ccc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:20:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:41.405 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4eb8ff7d-7a6b-4337-beb9-7789a44f85a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:41.407 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc namespace which is not needed anymore
Oct 02 12:20:41 compute-0 nova_compute[257802]: 2025-10-02 12:20:41.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:41 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000055.scope: Deactivated successfully.
Oct 02 12:20:41 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000055.scope: Consumed 8.050s CPU time.
Oct 02 12:20:41 compute-0 systemd-machined[211836]: Machine qemu-39-instance-00000055 terminated.
Oct 02 12:20:41 compute-0 nova_compute[257802]: 2025-10-02 12:20:41.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:41 compute-0 nova_compute[257802]: 2025-10-02 12:20:41.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:41 compute-0 nova_compute[257802]: 2025-10-02 12:20:41.549 2 INFO nova.virt.libvirt.driver [-] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Instance destroyed successfully.
Oct 02 12:20:41 compute-0 nova_compute[257802]: 2025-10-02 12:20:41.550 2 DEBUG nova.objects.instance [None req-fb04c047-c790-410d-a005-549ad77da823 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lazy-loading 'resources' on Instance uuid e7e38e6b-74d9-470a-ad54-222ee4a47e1f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:41 compute-0 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[309169]: [NOTICE]   (309173) : haproxy version is 2.8.14-c23fe91
Oct 02 12:20:41 compute-0 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[309169]: [NOTICE]   (309173) : path to executable is /usr/sbin/haproxy
Oct 02 12:20:41 compute-0 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[309169]: [WARNING]  (309173) : Exiting Master process...
Oct 02 12:20:41 compute-0 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[309169]: [WARNING]  (309173) : Exiting Master process...
Oct 02 12:20:41 compute-0 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[309169]: [ALERT]    (309173) : Current worker (309175) exited with code 143 (Terminated)
Oct 02 12:20:41 compute-0 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[309169]: [WARNING]  (309173) : All workers exited. Exiting... (0)
Oct 02 12:20:41 compute-0 nova_compute[257802]: 2025-10-02 12:20:41.563 2 DEBUG nova.virt.libvirt.vif [None req-fb04c047-c790-410d-a005-549ad77da823 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:20:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-382007051',display_name='tempest-DeleteServersTestJSON-server-382007051',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-382007051',id=85,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:20:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1c2c11ebecb14f3188f35ea473c4ca02',ramdisk_id='',reservation_id='r-2t1406ri',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1
',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1602490521',owner_user_name='tempest-DeleteServersTestJSON-1602490521-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:20:34Z,user_data=None,user_id='a9f7faffac7240869a0196df1ddda7e5',uuid=e7e38e6b-74d9-470a-ad54-222ee4a47e1f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b181aa8d-06f5-47f8-a6c3-d383f70cc4ba", "address": "fa:16:3e:8d:47:99", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb181aa8d-06", "ovs_interfaceid": "b181aa8d-06f5-47f8-a6c3-d383f70cc4ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:20:41 compute-0 nova_compute[257802]: 2025-10-02 12:20:41.563 2 DEBUG nova.network.os_vif_util [None req-fb04c047-c790-410d-a005-549ad77da823 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converting VIF {"id": "b181aa8d-06f5-47f8-a6c3-d383f70cc4ba", "address": "fa:16:3e:8d:47:99", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb181aa8d-06", "ovs_interfaceid": "b181aa8d-06f5-47f8-a6c3-d383f70cc4ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:20:41 compute-0 nova_compute[257802]: 2025-10-02 12:20:41.564 2 DEBUG nova.network.os_vif_util [None req-fb04c047-c790-410d-a005-549ad77da823 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:47:99,bridge_name='br-int',has_traffic_filtering=True,id=b181aa8d-06f5-47f8-a6c3-d383f70cc4ba,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb181aa8d-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:20:41 compute-0 nova_compute[257802]: 2025-10-02 12:20:41.564 2 DEBUG os_vif [None req-fb04c047-c790-410d-a005-549ad77da823 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:47:99,bridge_name='br-int',has_traffic_filtering=True,id=b181aa8d-06f5-47f8-a6c3-d383f70cc4ba,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb181aa8d-06') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:20:41 compute-0 nova_compute[257802]: 2025-10-02 12:20:41.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:41 compute-0 nova_compute[257802]: 2025-10-02 12:20:41.566 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb181aa8d-06, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:41 compute-0 systemd[1]: libpod-8c5188c4c18b98d1b0f99617d145b02f8c95aaff8218dd998a0e6c53c789c90b.scope: Deactivated successfully.
Oct 02 12:20:41 compute-0 nova_compute[257802]: 2025-10-02 12:20:41.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:41 compute-0 nova_compute[257802]: 2025-10-02 12:20:41.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:41 compute-0 nova_compute[257802]: 2025-10-02 12:20:41.573 2 INFO os_vif [None req-fb04c047-c790-410d-a005-549ad77da823 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:47:99,bridge_name='br-int',has_traffic_filtering=True,id=b181aa8d-06f5-47f8-a6c3-d383f70cc4ba,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb181aa8d-06')
Oct 02 12:20:41 compute-0 podman[309622]: 2025-10-02 12:20:41.57734008 +0000 UTC m=+0.065426844 container died 8c5188c4c18b98d1b0f99617d145b02f8c95aaff8218dd998a0e6c53c789c90b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:20:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8c5188c4c18b98d1b0f99617d145b02f8c95aaff8218dd998a0e6c53c789c90b-userdata-shm.mount: Deactivated successfully.
Oct 02 12:20:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ff381cc6de8d648bf8b0901bfc812940c837da02e550e88ee8a8e0e2c733953-merged.mount: Deactivated successfully.
Oct 02 12:20:41 compute-0 podman[309622]: 2025-10-02 12:20:41.776726903 +0000 UTC m=+0.264813667 container cleanup 8c5188c4c18b98d1b0f99617d145b02f8c95aaff8218dd998a0e6c53c789c90b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:20:41 compute-0 systemd[1]: libpod-conmon-8c5188c4c18b98d1b0f99617d145b02f8c95aaff8218dd998a0e6c53c789c90b.scope: Deactivated successfully.
Oct 02 12:20:41 compute-0 podman[309677]: 2025-10-02 12:20:41.944958254 +0000 UTC m=+0.147399831 container remove 8c5188c4c18b98d1b0f99617d145b02f8c95aaff8218dd998a0e6c53c789c90b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:20:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:41.954 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e4ae604a-b8be-4cad-a76c-b7904aa57557]: (4, ('Thu Oct  2 12:20:41 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc (8c5188c4c18b98d1b0f99617d145b02f8c95aaff8218dd998a0e6c53c789c90b)\n8c5188c4c18b98d1b0f99617d145b02f8c95aaff8218dd998a0e6c53c789c90b\nThu Oct  2 12:20:41 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc (8c5188c4c18b98d1b0f99617d145b02f8c95aaff8218dd998a0e6c53c789c90b)\n8c5188c4c18b98d1b0f99617d145b02f8c95aaff8218dd998a0e6c53c789c90b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:41.956 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1199c619-e50b-4702-94a5-4652140e56ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:41.957 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7754c79a-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:41 compute-0 nova_compute[257802]: 2025-10-02 12:20:41.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:41 compute-0 kernel: tap7754c79a-c0: left promiscuous mode
Oct 02 12:20:41 compute-0 nova_compute[257802]: 2025-10-02 12:20:41.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:41.989 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9d782e49-7e35-4701-a4b9-f08168189ff1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:42.013 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[511d5b17-aeaf-4cf2-bf39-4e07c7c952bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:42.014 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[301689b5-c7ac-4cb2-a321-c9eb7375804c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:42.031 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d0ad6c55-f78e-4444-8e0c-3511c5a72214]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570095, 'reachable_time': 18832, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309693, 'error': None, 'target': 'ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:42.035 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:20:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:42.035 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[099a8ebf-be34-4b2e-939b-12878807d63a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:42 compute-0 systemd[1]: run-netns-ovnmeta\x2d7754c79a\x2dcca5\x2d48c7\x2d9169\x2d831eaad23ccc.mount: Deactivated successfully.
Oct 02 12:20:42 compute-0 nova_compute[257802]: 2025-10-02 12:20:42.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:42 compute-0 epic_chebyshev[309593]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:20:42 compute-0 epic_chebyshev[309593]: --> relative data size: 1.0
Oct 02 12:20:42 compute-0 epic_chebyshev[309593]: --> All data devices are unavailable
Oct 02 12:20:42 compute-0 systemd[1]: libpod-e636c7362f98961533c008105f1b6a69fa262cbabddb30446d3b34076caef8de.scope: Deactivated successfully.
Oct 02 12:20:42 compute-0 podman[309576]: 2025-10-02 12:20:42.214251271 +0000 UTC m=+1.122827375 container died e636c7362f98961533c008105f1b6a69fa262cbabddb30446d3b34076caef8de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chebyshev, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:20:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-a98a604680aab2e5b7e293c2273487d355d526310aa62c6dc9d10fdf8ea48837-merged.mount: Deactivated successfully.
Oct 02 12:20:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:42.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:42 compute-0 podman[309576]: 2025-10-02 12:20:42.373487491 +0000 UTC m=+1.282063595 container remove e636c7362f98961533c008105f1b6a69fa262cbabddb30446d3b34076caef8de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chebyshev, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 12:20:42 compute-0 systemd[1]: libpod-conmon-e636c7362f98961533c008105f1b6a69fa262cbabddb30446d3b34076caef8de.scope: Deactivated successfully.
Oct 02 12:20:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:20:42
Oct 02 12:20:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:20:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:20:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'default.rgw.meta', 'backups', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log']
Oct 02 12:20:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:20:42 compute-0 nova_compute[257802]: 2025-10-02 12:20:42.420 2 INFO nova.virt.libvirt.driver [None req-fb04c047-c790-410d-a005-549ad77da823 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Deleting instance files /var/lib/nova/instances/e7e38e6b-74d9-470a-ad54-222ee4a47e1f_del
Oct 02 12:20:42 compute-0 nova_compute[257802]: 2025-10-02 12:20:42.422 2 INFO nova.virt.libvirt.driver [None req-fb04c047-c790-410d-a005-549ad77da823 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Deletion of /var/lib/nova/instances/e7e38e6b-74d9-470a-ad54-222ee4a47e1f_del complete
Oct 02 12:20:42 compute-0 sudo[309469]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:42 compute-0 nova_compute[257802]: 2025-10-02 12:20:42.474 2 INFO nova.compute.manager [None req-fb04c047-c790-410d-a005-549ad77da823 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Took 1.16 seconds to destroy the instance on the hypervisor.
Oct 02 12:20:42 compute-0 nova_compute[257802]: 2025-10-02 12:20:42.475 2 DEBUG oslo.service.loopingcall [None req-fb04c047-c790-410d-a005-549ad77da823 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:20:42 compute-0 nova_compute[257802]: 2025-10-02 12:20:42.476 2 DEBUG nova.compute.manager [-] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:20:42 compute-0 nova_compute[257802]: 2025-10-02 12:20:42.477 2 DEBUG nova.network.neutron [-] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:20:42 compute-0 sudo[309718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:42 compute-0 sudo[309718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:42 compute-0 sudo[309718]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:42 compute-0 sudo[309744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:20:42 compute-0 sudo[309744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:42 compute-0 sudo[309744]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:20:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:20:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:20:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:20:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:20:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:20:42 compute-0 sudo[309769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:42 compute-0 sudo[309769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:42 compute-0 sudo[309769]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:42.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:42 compute-0 sudo[309794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:20:42 compute-0 sudo[309794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:20:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:20:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:20:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:20:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:20:43 compute-0 ceph-mon[73607]: pgmap v1720: 305 pgs: 305 active+clean; 357 MiB data, 814 MiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 6.5 MiB/s wr, 335 op/s
Oct 02 12:20:43 compute-0 podman[309859]: 2025-10-02 12:20:43.160241003 +0000 UTC m=+0.064262836 container create f8164229181ad5262cde0fb0475fbc0c8bdc48820637b66f4ce8ab05dc222ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct 02 12:20:43 compute-0 systemd[1]: Started libpod-conmon-f8164229181ad5262cde0fb0475fbc0c8bdc48820637b66f4ce8ab05dc222ca2.scope.
Oct 02 12:20:43 compute-0 podman[309859]: 2025-10-02 12:20:43.124120439 +0000 UTC m=+0.028142272 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:20:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:20:43 compute-0 podman[309859]: 2025-10-02 12:20:43.297013863 +0000 UTC m=+0.201035686 container init f8164229181ad5262cde0fb0475fbc0c8bdc48820637b66f4ce8ab05dc222ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jepsen, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:20:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:20:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:20:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:20:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:20:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:20:43 compute-0 podman[309859]: 2025-10-02 12:20:43.309164071 +0000 UTC m=+0.213185904 container start f8164229181ad5262cde0fb0475fbc0c8bdc48820637b66f4ce8ab05dc222ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 12:20:43 compute-0 stoic_jepsen[309875]: 167 167
Oct 02 12:20:43 compute-0 systemd[1]: libpod-f8164229181ad5262cde0fb0475fbc0c8bdc48820637b66f4ce8ab05dc222ca2.scope: Deactivated successfully.
Oct 02 12:20:43 compute-0 podman[309859]: 2025-10-02 12:20:43.320375456 +0000 UTC m=+0.224397289 container attach f8164229181ad5262cde0fb0475fbc0c8bdc48820637b66f4ce8ab05dc222ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jepsen, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:20:43 compute-0 podman[309859]: 2025-10-02 12:20:43.321976055 +0000 UTC m=+0.225997888 container died f8164229181ad5262cde0fb0475fbc0c8bdc48820637b66f4ce8ab05dc222ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jepsen, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:20:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 346 MiB data, 808 MiB used, 20 GiB / 21 GiB avail; 14 MiB/s rd, 7.8 MiB/s wr, 530 op/s
Oct 02 12:20:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b87d21e7a5a163936bb5276b5446f2ce11df5def7bbf60302f86372331dfabb-merged.mount: Deactivated successfully.
Oct 02 12:20:43 compute-0 nova_compute[257802]: 2025-10-02 12:20:43.405 2 DEBUG nova.compute.manager [req-b78f707b-efab-4f55-bf26-f61d86aec3d5 req-d59a02e6-a838-4d32-bf18-54d2a69126b5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Received event network-vif-unplugged-b181aa8d-06f5-47f8-a6c3-d383f70cc4ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:43 compute-0 nova_compute[257802]: 2025-10-02 12:20:43.407 2 DEBUG oslo_concurrency.lockutils [req-b78f707b-efab-4f55-bf26-f61d86aec3d5 req-d59a02e6-a838-4d32-bf18-54d2a69126b5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e7e38e6b-74d9-470a-ad54-222ee4a47e1f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:43 compute-0 nova_compute[257802]: 2025-10-02 12:20:43.408 2 DEBUG oslo_concurrency.lockutils [req-b78f707b-efab-4f55-bf26-f61d86aec3d5 req-d59a02e6-a838-4d32-bf18-54d2a69126b5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e7e38e6b-74d9-470a-ad54-222ee4a47e1f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:43 compute-0 nova_compute[257802]: 2025-10-02 12:20:43.408 2 DEBUG oslo_concurrency.lockutils [req-b78f707b-efab-4f55-bf26-f61d86aec3d5 req-d59a02e6-a838-4d32-bf18-54d2a69126b5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e7e38e6b-74d9-470a-ad54-222ee4a47e1f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:43 compute-0 nova_compute[257802]: 2025-10-02 12:20:43.408 2 DEBUG nova.compute.manager [req-b78f707b-efab-4f55-bf26-f61d86aec3d5 req-d59a02e6-a838-4d32-bf18-54d2a69126b5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] No waiting events found dispatching network-vif-unplugged-b181aa8d-06f5-47f8-a6c3-d383f70cc4ba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:20:43 compute-0 nova_compute[257802]: 2025-10-02 12:20:43.409 2 DEBUG nova.compute.manager [req-b78f707b-efab-4f55-bf26-f61d86aec3d5 req-d59a02e6-a838-4d32-bf18-54d2a69126b5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Received event network-vif-unplugged-b181aa8d-06f5-47f8-a6c3-d383f70cc4ba for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:20:43 compute-0 podman[309859]: 2025-10-02 12:20:43.419307969 +0000 UTC m=+0.323329782 container remove f8164229181ad5262cde0fb0475fbc0c8bdc48820637b66f4ce8ab05dc222ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 12:20:43 compute-0 systemd[1]: libpod-conmon-f8164229181ad5262cde0fb0475fbc0c8bdc48820637b66f4ce8ab05dc222ca2.scope: Deactivated successfully.
Oct 02 12:20:43 compute-0 podman[309899]: 2025-10-02 12:20:43.657150545 +0000 UTC m=+0.069210186 container create 2ed6b81eba202e4d4dc0828f4fe0095674a9e86a9372f8199e5ccdddf5fa117b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wing, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 12:20:43 compute-0 systemd[1]: Started libpod-conmon-2ed6b81eba202e4d4dc0828f4fe0095674a9e86a9372f8199e5ccdddf5fa117b.scope.
Oct 02 12:20:43 compute-0 podman[309899]: 2025-10-02 12:20:43.633410643 +0000 UTC m=+0.045470304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:20:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:20:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d10c9e1f73287a2b449978bafacc8408de60cb38bbf3e415f7447ae46691969b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d10c9e1f73287a2b449978bafacc8408de60cb38bbf3e415f7447ae46691969b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d10c9e1f73287a2b449978bafacc8408de60cb38bbf3e415f7447ae46691969b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d10c9e1f73287a2b449978bafacc8408de60cb38bbf3e415f7447ae46691969b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:43 compute-0 podman[309899]: 2025-10-02 12:20:43.768636766 +0000 UTC m=+0.180696427 container init 2ed6b81eba202e4d4dc0828f4fe0095674a9e86a9372f8199e5ccdddf5fa117b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wing, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 12:20:43 compute-0 podman[309899]: 2025-10-02 12:20:43.780375143 +0000 UTC m=+0.192434804 container start 2ed6b81eba202e4d4dc0828f4fe0095674a9e86a9372f8199e5ccdddf5fa117b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:20:43 compute-0 podman[309899]: 2025-10-02 12:20:43.796866597 +0000 UTC m=+0.208926228 container attach 2ed6b81eba202e4d4dc0828f4fe0095674a9e86a9372f8199e5ccdddf5fa117b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wing, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 12:20:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Oct 02 12:20:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Oct 02 12:20:43 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Oct 02 12:20:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:44.132 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:20:44 compute-0 nova_compute[257802]: 2025-10-02 12:20:44.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:44.136 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:20:44 compute-0 nova_compute[257802]: 2025-10-02 12:20:44.345 2 DEBUG nova.network.neutron [-] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:20:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:44.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:44 compute-0 nova_compute[257802]: 2025-10-02 12:20:44.374 2 INFO nova.compute.manager [-] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Took 1.90 seconds to deallocate network for instance.
Oct 02 12:20:44 compute-0 nova_compute[257802]: 2025-10-02 12:20:44.545 2 INFO nova.compute.manager [None req-fb04c047-c790-410d-a005-549ad77da823 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Took 0.17 seconds to detach 1 volumes for instance.
Oct 02 12:20:44 compute-0 gracious_wing[309915]: {
Oct 02 12:20:44 compute-0 gracious_wing[309915]:     "1": [
Oct 02 12:20:44 compute-0 gracious_wing[309915]:         {
Oct 02 12:20:44 compute-0 gracious_wing[309915]:             "devices": [
Oct 02 12:20:44 compute-0 gracious_wing[309915]:                 "/dev/loop3"
Oct 02 12:20:44 compute-0 gracious_wing[309915]:             ],
Oct 02 12:20:44 compute-0 gracious_wing[309915]:             "lv_name": "ceph_lv0",
Oct 02 12:20:44 compute-0 gracious_wing[309915]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:20:44 compute-0 gracious_wing[309915]:             "lv_size": "7511998464",
Oct 02 12:20:44 compute-0 gracious_wing[309915]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:20:44 compute-0 gracious_wing[309915]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:20:44 compute-0 gracious_wing[309915]:             "name": "ceph_lv0",
Oct 02 12:20:44 compute-0 gracious_wing[309915]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:20:44 compute-0 gracious_wing[309915]:             "tags": {
Oct 02 12:20:44 compute-0 gracious_wing[309915]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:20:44 compute-0 gracious_wing[309915]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:20:44 compute-0 gracious_wing[309915]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:20:44 compute-0 gracious_wing[309915]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:20:44 compute-0 gracious_wing[309915]:                 "ceph.cluster_name": "ceph",
Oct 02 12:20:44 compute-0 gracious_wing[309915]:                 "ceph.crush_device_class": "",
Oct 02 12:20:44 compute-0 gracious_wing[309915]:                 "ceph.encrypted": "0",
Oct 02 12:20:44 compute-0 gracious_wing[309915]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:20:44 compute-0 gracious_wing[309915]:                 "ceph.osd_id": "1",
Oct 02 12:20:44 compute-0 gracious_wing[309915]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:20:44 compute-0 gracious_wing[309915]:                 "ceph.type": "block",
Oct 02 12:20:44 compute-0 gracious_wing[309915]:                 "ceph.vdo": "0"
Oct 02 12:20:44 compute-0 gracious_wing[309915]:             },
Oct 02 12:20:44 compute-0 gracious_wing[309915]:             "type": "block",
Oct 02 12:20:44 compute-0 gracious_wing[309915]:             "vg_name": "ceph_vg0"
Oct 02 12:20:44 compute-0 gracious_wing[309915]:         }
Oct 02 12:20:44 compute-0 gracious_wing[309915]:     ]
Oct 02 12:20:44 compute-0 gracious_wing[309915]: }
Oct 02 12:20:44 compute-0 nova_compute[257802]: 2025-10-02 12:20:44.589 2 DEBUG oslo_concurrency.lockutils [None req-fb04c047-c790-410d-a005-549ad77da823 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:44 compute-0 nova_compute[257802]: 2025-10-02 12:20:44.589 2 DEBUG oslo_concurrency.lockutils [None req-fb04c047-c790-410d-a005-549ad77da823 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:44 compute-0 systemd[1]: libpod-2ed6b81eba202e4d4dc0828f4fe0095674a9e86a9372f8199e5ccdddf5fa117b.scope: Deactivated successfully.
Oct 02 12:20:44 compute-0 podman[309899]: 2025-10-02 12:20:44.594319662 +0000 UTC m=+1.006379293 container died 2ed6b81eba202e4d4dc0828f4fe0095674a9e86a9372f8199e5ccdddf5fa117b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wing, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 12:20:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-d10c9e1f73287a2b449978bafacc8408de60cb38bbf3e415f7447ae46691969b-merged.mount: Deactivated successfully.
Oct 02 12:20:44 compute-0 nova_compute[257802]: 2025-10-02 12:20:44.634 2 DEBUG oslo_concurrency.processutils [None req-fb04c047-c790-410d-a005-549ad77da823 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:44 compute-0 podman[309899]: 2025-10-02 12:20:44.687839692 +0000 UTC m=+1.099899323 container remove 2ed6b81eba202e4d4dc0828f4fe0095674a9e86a9372f8199e5ccdddf5fa117b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:20:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:44 compute-0 systemd[1]: libpod-conmon-2ed6b81eba202e4d4dc0828f4fe0095674a9e86a9372f8199e5ccdddf5fa117b.scope: Deactivated successfully.
Oct 02 12:20:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:20:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:44.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:20:44 compute-0 sudo[309794]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:44 compute-0 sudo[309938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:44 compute-0 sudo[309938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:44 compute-0 sudo[309938]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:44 compute-0 sudo[309982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:20:44 compute-0 sudo[309982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:44 compute-0 sudo[309982]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:44 compute-0 sudo[310007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:44 compute-0 sudo[310007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:44 compute-0 sudo[310007]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:44 compute-0 sudo[310032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:20:44 compute-0 nova_compute[257802]: 2025-10-02 12:20:44.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:44 compute-0 sudo[310032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:44 compute-0 ceph-mon[73607]: pgmap v1721: 305 pgs: 305 active+clean; 346 MiB data, 808 MiB used, 20 GiB / 21 GiB avail; 14 MiB/s rd, 7.8 MiB/s wr, 530 op/s
Oct 02 12:20:44 compute-0 ceph-mon[73607]: osdmap e259: 3 total, 3 up, 3 in
Oct 02 12:20:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:20:45 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4224385576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:45 compute-0 nova_compute[257802]: 2025-10-02 12:20:45.077 2 DEBUG oslo_concurrency.processutils [None req-fb04c047-c790-410d-a005-549ad77da823 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:45 compute-0 nova_compute[257802]: 2025-10-02 12:20:45.083 2 DEBUG nova.compute.provider_tree [None req-fb04c047-c790-410d-a005-549ad77da823 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:20:45 compute-0 nova_compute[257802]: 2025-10-02 12:20:45.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:45 compute-0 nova_compute[257802]: 2025-10-02 12:20:45.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:45 compute-0 nova_compute[257802]: 2025-10-02 12:20:45.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:20:45 compute-0 nova_compute[257802]: 2025-10-02 12:20:45.103 2 DEBUG nova.scheduler.client.report [None req-fb04c047-c790-410d-a005-549ad77da823 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:20:45 compute-0 nova_compute[257802]: 2025-10-02 12:20:45.132 2 DEBUG oslo_concurrency.lockutils [None req-fb04c047-c790-410d-a005-549ad77da823 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:45 compute-0 nova_compute[257802]: 2025-10-02 12:20:45.152 2 INFO nova.scheduler.client.report [None req-fb04c047-c790-410d-a005-549ad77da823 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Deleted allocations for instance e7e38e6b-74d9-470a-ad54-222ee4a47e1f
Oct 02 12:20:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Oct 02 12:20:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Oct 02 12:20:45 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Oct 02 12:20:45 compute-0 nova_compute[257802]: 2025-10-02 12:20:45.224 2 DEBUG oslo_concurrency.lockutils [None req-fb04c047-c790-410d-a005-549ad77da823 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "e7e38e6b-74d9-470a-ad54-222ee4a47e1f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:45 compute-0 podman[310101]: 2025-10-02 12:20:45.273376495 +0000 UTC m=+0.049442272 container create a658d6bc2a32a43f60c83720d6a98e988a44c6cbd187c54a521fb754aff46917 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 12:20:45 compute-0 systemd[1]: Started libpod-conmon-a658d6bc2a32a43f60c83720d6a98e988a44c6cbd187c54a521fb754aff46917.scope.
Oct 02 12:20:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:20:45 compute-0 podman[310101]: 2025-10-02 12:20:45.246787264 +0000 UTC m=+0.022852941 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:20:45 compute-0 podman[310101]: 2025-10-02 12:20:45.365359148 +0000 UTC m=+0.141424815 container init a658d6bc2a32a43f60c83720d6a98e988a44c6cbd187c54a521fb754aff46917 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_dijkstra, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 12:20:45 compute-0 podman[310101]: 2025-10-02 12:20:45.371424517 +0000 UTC m=+0.147490164 container start a658d6bc2a32a43f60c83720d6a98e988a44c6cbd187c54a521fb754aff46917 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_dijkstra, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:20:45 compute-0 laughing_dijkstra[310116]: 167 167
Oct 02 12:20:45 compute-0 systemd[1]: libpod-a658d6bc2a32a43f60c83720d6a98e988a44c6cbd187c54a521fb754aff46917.scope: Deactivated successfully.
Oct 02 12:20:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 305 active+clean; 326 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 4.7 MiB/s wr, 439 op/s
Oct 02 12:20:45 compute-0 podman[310101]: 2025-10-02 12:20:45.402404856 +0000 UTC m=+0.178470523 container attach a658d6bc2a32a43f60c83720d6a98e988a44c6cbd187c54a521fb754aff46917 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 12:20:45 compute-0 podman[310101]: 2025-10-02 12:20:45.40299869 +0000 UTC m=+0.179064327 container died a658d6bc2a32a43f60c83720d6a98e988a44c6cbd187c54a521fb754aff46917 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 12:20:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f5c79699bd78c6fe1803641450c612589c61a44e6f7c4bf69f15bc829dc562e-merged.mount: Deactivated successfully.
Oct 02 12:20:45 compute-0 podman[310101]: 2025-10-02 12:20:45.491113618 +0000 UTC m=+0.267179275 container remove a658d6bc2a32a43f60c83720d6a98e988a44c6cbd187c54a521fb754aff46917 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_dijkstra, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Oct 02 12:20:45 compute-0 nova_compute[257802]: 2025-10-02 12:20:45.491 2 DEBUG nova.compute.manager [req-74de4726-186c-4c97-8181-b7ad03df07b2 req-b11ef674-48f4-4962-894a-ebfd17e6fa95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Received event network-vif-plugged-b181aa8d-06f5-47f8-a6c3-d383f70cc4ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:45 compute-0 nova_compute[257802]: 2025-10-02 12:20:45.494 2 DEBUG oslo_concurrency.lockutils [req-74de4726-186c-4c97-8181-b7ad03df07b2 req-b11ef674-48f4-4962-894a-ebfd17e6fa95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e7e38e6b-74d9-470a-ad54-222ee4a47e1f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:45 compute-0 nova_compute[257802]: 2025-10-02 12:20:45.495 2 DEBUG oslo_concurrency.lockutils [req-74de4726-186c-4c97-8181-b7ad03df07b2 req-b11ef674-48f4-4962-894a-ebfd17e6fa95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e7e38e6b-74d9-470a-ad54-222ee4a47e1f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:45 compute-0 nova_compute[257802]: 2025-10-02 12:20:45.495 2 DEBUG oslo_concurrency.lockutils [req-74de4726-186c-4c97-8181-b7ad03df07b2 req-b11ef674-48f4-4962-894a-ebfd17e6fa95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e7e38e6b-74d9-470a-ad54-222ee4a47e1f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:45 compute-0 nova_compute[257802]: 2025-10-02 12:20:45.495 2 DEBUG nova.compute.manager [req-74de4726-186c-4c97-8181-b7ad03df07b2 req-b11ef674-48f4-4962-894a-ebfd17e6fa95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] No waiting events found dispatching network-vif-plugged-b181aa8d-06f5-47f8-a6c3-d383f70cc4ba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:20:45 compute-0 nova_compute[257802]: 2025-10-02 12:20:45.495 2 WARNING nova.compute.manager [req-74de4726-186c-4c97-8181-b7ad03df07b2 req-b11ef674-48f4-4962-894a-ebfd17e6fa95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Received unexpected event network-vif-plugged-b181aa8d-06f5-47f8-a6c3-d383f70cc4ba for instance with vm_state deleted and task_state None.
Oct 02 12:20:45 compute-0 nova_compute[257802]: 2025-10-02 12:20:45.496 2 DEBUG nova.compute.manager [req-74de4726-186c-4c97-8181-b7ad03df07b2 req-b11ef674-48f4-4962-894a-ebfd17e6fa95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Received event network-vif-deleted-b181aa8d-06f5-47f8-a6c3-d383f70cc4ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:45 compute-0 systemd[1]: libpod-conmon-a658d6bc2a32a43f60c83720d6a98e988a44c6cbd187c54a521fb754aff46917.scope: Deactivated successfully.
Oct 02 12:20:45 compute-0 podman[310142]: 2025-10-02 12:20:45.677455813 +0000 UTC m=+0.055033229 container create 89736b3e06889d146eedfa8053fa7341859fba8b56134c991f97eb0aadd71e53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:20:45 compute-0 podman[310142]: 2025-10-02 12:20:45.644248009 +0000 UTC m=+0.021825455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:20:45 compute-0 systemd[1]: Started libpod-conmon-89736b3e06889d146eedfa8053fa7341859fba8b56134c991f97eb0aadd71e53.scope.
Oct 02 12:20:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:20:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6436b5d8af93334d48d46f391b4eb71e36d516962af3e4e3e4484a13bdfb1b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6436b5d8af93334d48d46f391b4eb71e36d516962af3e4e3e4484a13bdfb1b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6436b5d8af93334d48d46f391b4eb71e36d516962af3e4e3e4484a13bdfb1b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6436b5d8af93334d48d46f391b4eb71e36d516962af3e4e3e4484a13bdfb1b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:45 compute-0 podman[310142]: 2025-10-02 12:20:45.777962855 +0000 UTC m=+0.155540301 container init 89736b3e06889d146eedfa8053fa7341859fba8b56134c991f97eb0aadd71e53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 12:20:45 compute-0 podman[310142]: 2025-10-02 12:20:45.784030513 +0000 UTC m=+0.161607929 container start 89736b3e06889d146eedfa8053fa7341859fba8b56134c991f97eb0aadd71e53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 12:20:45 compute-0 podman[310142]: 2025-10-02 12:20:45.787031957 +0000 UTC m=+0.164609373 container attach 89736b3e06889d146eedfa8053fa7341859fba8b56134c991f97eb0aadd71e53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 12:20:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1346637038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4224385576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:45 compute-0 ceph-mon[73607]: osdmap e260: 3 total, 3 up, 3 in
Oct 02 12:20:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2418439772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:46 compute-0 nova_compute[257802]: 2025-10-02 12:20:46.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:46.138 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Oct 02 12:20:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Oct 02 12:20:46 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Oct 02 12:20:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:46.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:46 compute-0 nova_compute[257802]: 2025-10-02 12:20:46.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:46 compute-0 pensive_chatterjee[310158]: {
Oct 02 12:20:46 compute-0 pensive_chatterjee[310158]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:20:46 compute-0 pensive_chatterjee[310158]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:20:46 compute-0 pensive_chatterjee[310158]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:20:46 compute-0 pensive_chatterjee[310158]:         "osd_id": 1,
Oct 02 12:20:46 compute-0 pensive_chatterjee[310158]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:20:46 compute-0 pensive_chatterjee[310158]:         "type": "bluestore"
Oct 02 12:20:46 compute-0 pensive_chatterjee[310158]:     }
Oct 02 12:20:46 compute-0 pensive_chatterjee[310158]: }
Oct 02 12:20:46 compute-0 systemd[1]: libpod-89736b3e06889d146eedfa8053fa7341859fba8b56134c991f97eb0aadd71e53.scope: Deactivated successfully.
Oct 02 12:20:46 compute-0 podman[310180]: 2025-10-02 12:20:46.649465352 +0000 UTC m=+0.023823194 container died 89736b3e06889d146eedfa8053fa7341859fba8b56134c991f97eb0aadd71e53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chatterjee, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 12:20:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:46.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6436b5d8af93334d48d46f391b4eb71e36d516962af3e4e3e4484a13bdfb1b1-merged.mount: Deactivated successfully.
Oct 02 12:20:46 compute-0 podman[310180]: 2025-10-02 12:20:46.901359122 +0000 UTC m=+0.275716964 container remove 89736b3e06889d146eedfa8053fa7341859fba8b56134c991f97eb0aadd71e53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chatterjee, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:20:46 compute-0 systemd[1]: libpod-conmon-89736b3e06889d146eedfa8053fa7341859fba8b56134c991f97eb0aadd71e53.scope: Deactivated successfully.
Oct 02 12:20:46 compute-0 sudo[310032]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:20:47 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:20:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:20:47 compute-0 nova_compute[257802]: 2025-10-02 12:20:47.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:47 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:20:47 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev d5a5bef5-8055-4769-8cff-873bac7f8c1c does not exist
Oct 02 12:20:47 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 369286b8-a289-4b4f-8c5e-2ff03443c63f does not exist
Oct 02 12:20:47 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev f8d9a09d-dc08-4d9a-9bfa-86b2e61831ef does not exist
Oct 02 12:20:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Oct 02 12:20:47 compute-0 sudo[310193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:47 compute-0 sudo[310193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:47 compute-0 sudo[310193]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:47 compute-0 ceph-mon[73607]: pgmap v1724: 305 pgs: 305 active+clean; 326 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 4.7 MiB/s wr, 439 op/s
Oct 02 12:20:47 compute-0 ceph-mon[73607]: osdmap e261: 3 total, 3 up, 3 in
Oct 02 12:20:47 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:20:47 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:20:47 compute-0 sudo[310218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:20:47 compute-0 sudo[310218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:47 compute-0 sudo[310218]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Oct 02 12:20:47 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Oct 02 12:20:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 305 active+clean; 326 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.5 KiB/s wr, 120 op/s
Oct 02 12:20:48 compute-0 nova_compute[257802]: 2025-10-02 12:20:48.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:20:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:48.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:20:48 compute-0 ceph-mon[73607]: osdmap e262: 3 total, 3 up, 3 in
Oct 02 12:20:48 compute-0 ceph-mon[73607]: pgmap v1727: 305 pgs: 305 active+clean; 326 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.5 KiB/s wr, 120 op/s
Oct 02 12:20:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:48.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Oct 02 12:20:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Oct 02 12:20:49 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Oct 02 12:20:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:20:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 8797 writes, 38K keys, 8796 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 8797 writes, 8796 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1842 writes, 8125 keys, 1842 commit groups, 1.0 writes per commit group, ingest: 11.66 MB, 0.02 MB/s
                                           Interval WAL: 1843 writes, 1843 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     64.4      0.74              0.13        22    0.034       0      0       0.0       0.0
                                             L6      1/0    9.41 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   3.9    108.3     89.7      2.05              0.53        21    0.098    112K    12K       0.0       0.0
                                            Sum      1/0    9.41 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   4.9     79.6     83.0      2.79              0.66        43    0.065    112K    12K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.0     72.7     74.3      0.76              0.20        10    0.076     33K   3084       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    108.3     89.7      2.05              0.53        21    0.098    112K    12K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     64.7      0.73              0.13        21    0.035       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.046, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.23 GB write, 0.08 MB/s write, 0.22 GB read, 0.07 MB/s read, 2.8 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5581be5e11f0#2 capacity: 304.00 MB usage: 25.03 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000193 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1473,24.17 MB,7.95067%) FilterBlock(44,316.23 KB,0.101586%) IndexBlock(44,567.52 KB,0.182307%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 12:20:49 compute-0 nova_compute[257802]: 2025-10-02 12:20:49.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 373 MiB data, 832 MiB used, 20 GiB / 21 GiB avail; 8.1 MiB/s rd, 6.5 MiB/s wr, 138 op/s
Oct 02 12:20:49 compute-0 ceph-mon[73607]: osdmap e263: 3 total, 3 up, 3 in
Oct 02 12:20:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:20:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:50.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:20:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.002000048s ======
Oct 02 12:20:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:50.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Oct 02 12:20:51 compute-0 ceph-mon[73607]: pgmap v1729: 305 pgs: 305 active+clean; 373 MiB data, 832 MiB used, 20 GiB / 21 GiB avail; 8.1 MiB/s rd, 6.5 MiB/s wr, 138 op/s
Oct 02 12:20:51 compute-0 nova_compute[257802]: 2025-10-02 12:20:51.095 2 DEBUG oslo_concurrency.lockutils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Acquiring lock "5ae8a491-7025-40ff-a6f8-240f71af3e41" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:51 compute-0 nova_compute[257802]: 2025-10-02 12:20:51.096 2 DEBUG oslo_concurrency.lockutils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lock "5ae8a491-7025-40ff-a6f8-240f71af3e41" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:51 compute-0 nova_compute[257802]: 2025-10-02 12:20:51.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:51 compute-0 nova_compute[257802]: 2025-10-02 12:20:51.123 2 DEBUG nova.compute.manager [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:20:51 compute-0 nova_compute[257802]: 2025-10-02 12:20:51.213 2 DEBUG oslo_concurrency.lockutils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:51 compute-0 nova_compute[257802]: 2025-10-02 12:20:51.213 2 DEBUG oslo_concurrency.lockutils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:51 compute-0 nova_compute[257802]: 2025-10-02 12:20:51.221 2 DEBUG nova.virt.hardware [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:20:51 compute-0 nova_compute[257802]: 2025-10-02 12:20:51.222 2 INFO nova.compute.claims [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:20:51 compute-0 nova_compute[257802]: 2025-10-02 12:20:51.371 2 DEBUG oslo_concurrency.processutils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1730: 305 pgs: 305 active+clean; 405 MiB data, 851 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 154 op/s
Oct 02 12:20:51 compute-0 nova_compute[257802]: 2025-10-02 12:20:51.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:20:51 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1995336236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:51 compute-0 nova_compute[257802]: 2025-10-02 12:20:51.787 2 DEBUG oslo_concurrency.processutils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:51 compute-0 nova_compute[257802]: 2025-10-02 12:20:51.792 2 DEBUG nova.compute.provider_tree [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:20:51 compute-0 nova_compute[257802]: 2025-10-02 12:20:51.809 2 DEBUG nova.scheduler.client.report [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:20:51 compute-0 nova_compute[257802]: 2025-10-02 12:20:51.877 2 DEBUG oslo_concurrency.lockutils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:51 compute-0 nova_compute[257802]: 2025-10-02 12:20:51.878 2 DEBUG nova.compute.manager [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:20:51 compute-0 nova_compute[257802]: 2025-10-02 12:20:51.893 2 DEBUG oslo_concurrency.lockutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "098f39eb-751f-4a70-8e8b-593f9d203681" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:51 compute-0 nova_compute[257802]: 2025-10-02 12:20:51.894 2 DEBUG oslo_concurrency.lockutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "098f39eb-751f-4a70-8e8b-593f9d203681" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:51 compute-0 nova_compute[257802]: 2025-10-02 12:20:51.921 2 DEBUG nova.compute.manager [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:20:52 compute-0 nova_compute[257802]: 2025-10-02 12:20:52.039 2 DEBUG nova.compute.manager [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 02 12:20:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Oct 02 12:20:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1995336236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:52 compute-0 nova_compute[257802]: 2025-10-02 12:20:52.090 2 INFO nova.virt.libvirt.driver [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:20:52 compute-0 nova_compute[257802]: 2025-10-02 12:20:52.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Oct 02 12:20:52 compute-0 nova_compute[257802]: 2025-10-02 12:20:52.142 2 DEBUG nova.compute.manager [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:20:52 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Oct 02 12:20:52 compute-0 nova_compute[257802]: 2025-10-02 12:20:52.162 2 DEBUG oslo_concurrency.lockutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:52 compute-0 nova_compute[257802]: 2025-10-02 12:20:52.163 2 DEBUG oslo_concurrency.lockutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:52 compute-0 nova_compute[257802]: 2025-10-02 12:20:52.169 2 DEBUG nova.virt.hardware [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:20:52 compute-0 nova_compute[257802]: 2025-10-02 12:20:52.170 2 INFO nova.compute.claims [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:20:52 compute-0 nova_compute[257802]: 2025-10-02 12:20:52.281 2 DEBUG nova.compute.manager [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:20:52 compute-0 nova_compute[257802]: 2025-10-02 12:20:52.282 2 DEBUG nova.virt.libvirt.driver [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:20:52 compute-0 nova_compute[257802]: 2025-10-02 12:20:52.282 2 INFO nova.virt.libvirt.driver [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Creating image(s)
Oct 02 12:20:52 compute-0 nova_compute[257802]: 2025-10-02 12:20:52.306 2 DEBUG nova.storage.rbd_utils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] rbd image 5ae8a491-7025-40ff-a6f8-240f71af3e41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:52 compute-0 nova_compute[257802]: 2025-10-02 12:20:52.337 2 DEBUG nova.storage.rbd_utils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] rbd image 5ae8a491-7025-40ff-a6f8-240f71af3e41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:20:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:52.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:20:52 compute-0 nova_compute[257802]: 2025-10-02 12:20:52.369 2 DEBUG nova.storage.rbd_utils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] rbd image 5ae8a491-7025-40ff-a6f8-240f71af3e41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:52 compute-0 nova_compute[257802]: 2025-10-02 12:20:52.374 2 DEBUG oslo_concurrency.processutils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:52 compute-0 nova_compute[257802]: 2025-10-02 12:20:52.473 2 DEBUG oslo_concurrency.processutils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:52 compute-0 nova_compute[257802]: 2025-10-02 12:20:52.474 2 DEBUG oslo_concurrency.lockutils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:52 compute-0 nova_compute[257802]: 2025-10-02 12:20:52.475 2 DEBUG oslo_concurrency.lockutils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:52 compute-0 nova_compute[257802]: 2025-10-02 12:20:52.475 2 DEBUG oslo_concurrency.lockutils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:52 compute-0 nova_compute[257802]: 2025-10-02 12:20:52.511 2 DEBUG nova.storage.rbd_utils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] rbd image 5ae8a491-7025-40ff-a6f8-240f71af3e41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:52 compute-0 nova_compute[257802]: 2025-10-02 12:20:52.516 2 DEBUG oslo_concurrency.processutils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 5ae8a491-7025-40ff-a6f8-240f71af3e41_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:52 compute-0 nova_compute[257802]: 2025-10-02 12:20:52.573 2 DEBUG oslo_concurrency.processutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:52.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:20:52 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2626327134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:53 compute-0 nova_compute[257802]: 2025-10-02 12:20:53.002 2 DEBUG oslo_concurrency.processutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:53 compute-0 nova_compute[257802]: 2025-10-02 12:20:53.009 2 DEBUG nova.compute.provider_tree [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:20:53 compute-0 nova_compute[257802]: 2025-10-02 12:20:53.033 2 DEBUG nova.scheduler.client.report [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:20:53 compute-0 nova_compute[257802]: 2025-10-02 12:20:53.123 2 DEBUG oslo_concurrency.lockutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:53 compute-0 nova_compute[257802]: 2025-10-02 12:20:53.124 2 DEBUG nova.compute.manager [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:20:53 compute-0 nova_compute[257802]: 2025-10-02 12:20:53.168 2 DEBUG nova.compute.manager [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:20:53 compute-0 nova_compute[257802]: 2025-10-02 12:20:53.169 2 DEBUG nova.network.neutron [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:20:53 compute-0 nova_compute[257802]: 2025-10-02 12:20:53.191 2 INFO nova.virt.libvirt.driver [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:20:53 compute-0 nova_compute[257802]: 2025-10-02 12:20:53.209 2 DEBUG nova.compute.manager [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:20:53 compute-0 ceph-mon[73607]: pgmap v1730: 305 pgs: 305 active+clean; 405 MiB data, 851 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 154 op/s
Oct 02 12:20:53 compute-0 ceph-mon[73607]: osdmap e264: 3 total, 3 up, 3 in
Oct 02 12:20:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2626327134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:53 compute-0 nova_compute[257802]: 2025-10-02 12:20:53.300 2 DEBUG nova.compute.manager [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:20:53 compute-0 nova_compute[257802]: 2025-10-02 12:20:53.301 2 DEBUG nova.virt.libvirt.driver [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:20:53 compute-0 nova_compute[257802]: 2025-10-02 12:20:53.301 2 INFO nova.virt.libvirt.driver [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Creating image(s)
Oct 02 12:20:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1732: 305 pgs: 305 active+clean; 416 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 9.3 MiB/s wr, 205 op/s
Oct 02 12:20:53 compute-0 nova_compute[257802]: 2025-10-02 12:20:53.426 2 DEBUG nova.storage.rbd_utils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image 098f39eb-751f-4a70-8e8b-593f9d203681_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:53 compute-0 nova_compute[257802]: 2025-10-02 12:20:53.459 2 DEBUG nova.storage.rbd_utils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image 098f39eb-751f-4a70-8e8b-593f9d203681_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:53 compute-0 nova_compute[257802]: 2025-10-02 12:20:53.493 2 DEBUG nova.storage.rbd_utils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image 098f39eb-751f-4a70-8e8b-593f9d203681_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:53 compute-0 nova_compute[257802]: 2025-10-02 12:20:53.497 2 DEBUG oslo_concurrency.processutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:53 compute-0 nova_compute[257802]: 2025-10-02 12:20:53.524 2 DEBUG nova.policy [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a9f7faffac7240869a0196df1ddda7e5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1c2c11ebecb14f3188f35ea473c4ca02', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:20:53 compute-0 nova_compute[257802]: 2025-10-02 12:20:53.564 2 DEBUG oslo_concurrency.processutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:53 compute-0 nova_compute[257802]: 2025-10-02 12:20:53.565 2 DEBUG oslo_concurrency.lockutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:53 compute-0 nova_compute[257802]: 2025-10-02 12:20:53.565 2 DEBUG oslo_concurrency.lockutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:53 compute-0 nova_compute[257802]: 2025-10-02 12:20:53.565 2 DEBUG oslo_concurrency.lockutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:53 compute-0 nova_compute[257802]: 2025-10-02 12:20:53.593 2 DEBUG nova.storage.rbd_utils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image 098f39eb-751f-4a70-8e8b-593f9d203681_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:53 compute-0 nova_compute[257802]: 2025-10-02 12:20:53.598 2 DEBUG oslo_concurrency.processutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 098f39eb-751f-4a70-8e8b-593f9d203681_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:53 compute-0 nova_compute[257802]: 2025-10-02 12:20:53.720 2 DEBUG oslo_concurrency.processutils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 5ae8a491-7025-40ff-a6f8-240f71af3e41_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:53 compute-0 nova_compute[257802]: 2025-10-02 12:20:53.839 2 DEBUG nova.storage.rbd_utils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] resizing rbd image 5ae8a491-7025-40ff-a6f8-240f71af3e41_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:20:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Oct 02 12:20:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Oct 02 12:20:53 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Oct 02 12:20:53 compute-0 nova_compute[257802]: 2025-10-02 12:20:53.988 2 DEBUG oslo_concurrency.processutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 098f39eb-751f-4a70-8e8b-593f9d203681_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.390s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.059 2 DEBUG nova.storage.rbd_utils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] resizing rbd image 098f39eb-751f-4a70-8e8b-593f9d203681_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.147 2 DEBUG nova.objects.instance [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lazy-loading 'migration_context' on Instance uuid 5ae8a491-7025-40ff-a6f8-240f71af3e41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.169 2 DEBUG nova.virt.libvirt.driver [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.170 2 DEBUG nova.virt.libvirt.driver [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Ensure instance console log exists: /var/lib/nova/instances/5ae8a491-7025-40ff-a6f8-240f71af3e41/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.170 2 DEBUG oslo_concurrency.lockutils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.170 2 DEBUG oslo_concurrency.lockutils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.171 2 DEBUG oslo_concurrency.lockutils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.172 2 DEBUG nova.virt.libvirt.driver [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.179 2 WARNING nova.virt.libvirt.driver [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.186 2 DEBUG nova.virt.libvirt.host [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.187 2 DEBUG nova.virt.libvirt.host [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.190 2 DEBUG nova.virt.libvirt.host [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.191 2 DEBUG nova.virt.libvirt.host [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.192 2 DEBUG nova.virt.libvirt.driver [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.192 2 DEBUG nova.virt.hardware [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.193 2 DEBUG nova.virt.hardware [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.193 2 DEBUG nova.virt.hardware [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.194 2 DEBUG nova.virt.hardware [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.194 2 DEBUG nova.virt.hardware [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.194 2 DEBUG nova.virt.hardware [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.194 2 DEBUG nova.virt.hardware [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.195 2 DEBUG nova.virt.hardware [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.195 2 DEBUG nova.virt.hardware [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.195 2 DEBUG nova.virt.hardware [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.195 2 DEBUG nova.virt.hardware [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.199 2 DEBUG oslo_concurrency.processutils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.260 2 DEBUG nova.network.neutron [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Successfully created port: 2a565172-9810-4239-9e57-a527a2124db7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.269 2 DEBUG nova.objects.instance [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lazy-loading 'migration_context' on Instance uuid 098f39eb-751f-4a70-8e8b-593f9d203681 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.286 2 DEBUG nova.virt.libvirt.driver [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.286 2 DEBUG nova.virt.libvirt.driver [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Ensure instance console log exists: /var/lib/nova/instances/098f39eb-751f-4a70-8e8b-593f9d203681/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.286 2 DEBUG oslo_concurrency.lockutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.287 2 DEBUG oslo_concurrency.lockutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.287 2 DEBUG oslo_concurrency.lockutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:20:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:20:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:20:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:20:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021812258631118556 of space, bias 1.0, pg target 0.6543677589335567 quantized to 32 (current 32)
Oct 02 12:20:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:20:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0034096989577517217 of space, bias 1.0, pg target 1.0229096873255166 quantized to 32 (current 32)
Oct 02 12:20:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:20:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:20:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:20:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.006420266843476643 of space, bias 1.0, pg target 1.9196597861995162 quantized to 32 (current 32)
Oct 02 12:20:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:20:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:20:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:20:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:20:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:20:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027081297692164525 quantized to 32 (current 32)
Oct 02 12:20:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:20:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:20:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:20:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:20:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:20:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:20:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:54.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:20:54 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4156551144' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.630 2 DEBUG oslo_concurrency.processutils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.660 2 DEBUG nova.storage.rbd_utils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] rbd image 5ae8a491-7025-40ff-a6f8-240f71af3e41_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:54 compute-0 nova_compute[257802]: 2025-10-02 12:20:54.664 2 DEBUG oslo_concurrency.processutils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:54.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Oct 02 12:20:54 compute-0 podman[310683]: 2025-10-02 12:20:54.93793604 +0000 UTC m=+0.077003947 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct 02 12:20:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Oct 02 12:20:55 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.138 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.138 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.139 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.139 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:20:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2970872343' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:20:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1707071716' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:20:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:20:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1707071716' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.175 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.176 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.176 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.176 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.176 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:55 compute-0 ceph-mon[73607]: pgmap v1732: 305 pgs: 305 active+clean; 416 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 9.3 MiB/s wr, 205 op/s
Oct 02 12:20:55 compute-0 ceph-mon[73607]: osdmap e265: 3 total, 3 up, 3 in
Oct 02 12:20:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4156551144' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.203 2 DEBUG oslo_concurrency.processutils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.205 2 DEBUG nova.objects.instance [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lazy-loading 'pci_devices' on Instance uuid 5ae8a491-7025-40ff-a6f8-240f71af3e41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.222 2 DEBUG nova.virt.libvirt.driver [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:20:55 compute-0 nova_compute[257802]:   <uuid>5ae8a491-7025-40ff-a6f8-240f71af3e41</uuid>
Oct 02 12:20:55 compute-0 nova_compute[257802]:   <name>instance-00000056</name>
Oct 02 12:20:55 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:20:55 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:20:55 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:20:55 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:       <nova:name>tempest-ServerShowV257Test-server-1672132577</nova:name>
Oct 02 12:20:55 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:20:54</nova:creationTime>
Oct 02 12:20:55 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:20:55 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:20:55 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:20:55 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:20:55 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:20:55 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:20:55 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:20:55 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:20:55 compute-0 nova_compute[257802]:         <nova:user uuid="a804670c0de24489af33ba77885d5ee6">tempest-ServerShowV257Test-364849187-project-member</nova:user>
Oct 02 12:20:55 compute-0 nova_compute[257802]:         <nova:project uuid="a7f4317b8b6c47689bbb12f29d6cab5a">tempest-ServerShowV257Test-364849187</nova:project>
Oct 02 12:20:55 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:20:55 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:       <nova:ports/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:20:55 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:20:55 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <system>
Oct 02 12:20:55 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:20:55 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:20:55 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:20:55 compute-0 nova_compute[257802]:       <entry name="serial">5ae8a491-7025-40ff-a6f8-240f71af3e41</entry>
Oct 02 12:20:55 compute-0 nova_compute[257802]:       <entry name="uuid">5ae8a491-7025-40ff-a6f8-240f71af3e41</entry>
Oct 02 12:20:55 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     </system>
Oct 02 12:20:55 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:20:55 compute-0 nova_compute[257802]:   <os>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:   </os>
Oct 02 12:20:55 compute-0 nova_compute[257802]:   <features>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:   </features>
Oct 02 12:20:55 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:20:55 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:20:55 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:20:55 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/5ae8a491-7025-40ff-a6f8-240f71af3e41_disk">
Oct 02 12:20:55 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:       </source>
Oct 02 12:20:55 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:20:55 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:20:55 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:20:55 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/5ae8a491-7025-40ff-a6f8-240f71af3e41_disk.config">
Oct 02 12:20:55 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:       </source>
Oct 02 12:20:55 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:20:55 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:20:55 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:20:55 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/5ae8a491-7025-40ff-a6f8-240f71af3e41/console.log" append="off"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <video>
Oct 02 12:20:55 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     </video>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:20:55 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:20:55 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:20:55 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:20:55 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:20:55 compute-0 nova_compute[257802]: </domain>
Oct 02 12:20:55 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.316 2 DEBUG nova.virt.libvirt.driver [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.318 2 DEBUG nova.virt.libvirt.driver [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.318 2 INFO nova.virt.libvirt.driver [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Using config drive
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.345 2 DEBUG nova.storage.rbd_utils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] rbd image 5ae8a491-7025-40ff-a6f8-240f71af3e41_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 553 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 9.1 MiB/s rd, 19 MiB/s wr, 426 op/s
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.468 2 DEBUG nova.network.neutron [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Successfully updated port: 2a565172-9810-4239-9e57-a527a2124db7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.489 2 DEBUG oslo_concurrency.lockutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "refresh_cache-098f39eb-751f-4a70-8e8b-593f9d203681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.490 2 DEBUG oslo_concurrency.lockutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquired lock "refresh_cache-098f39eb-751f-4a70-8e8b-593f9d203681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.490 2 DEBUG nova.network.neutron [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.546 2 INFO nova.virt.libvirt.driver [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Creating config drive at /var/lib/nova/instances/5ae8a491-7025-40ff-a6f8-240f71af3e41/disk.config
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.551 2 DEBUG oslo_concurrency.processutils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5ae8a491-7025-40ff-a6f8-240f71af3e41/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa82ib539 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.578 2 DEBUG nova.compute.manager [req-086b1b50-5ad4-4d59-abd8-ef660a1c5843 req-c670c7f0-25f0-48ec-b69d-a5663841d9c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Received event network-changed-2a565172-9810-4239-9e57-a527a2124db7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.578 2 DEBUG nova.compute.manager [req-086b1b50-5ad4-4d59-abd8-ef660a1c5843 req-c670c7f0-25f0-48ec-b69d-a5663841d9c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Refreshing instance network info cache due to event network-changed-2a565172-9810-4239-9e57-a527a2124db7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.579 2 DEBUG oslo_concurrency.lockutils [req-086b1b50-5ad4-4d59-abd8-ef660a1c5843 req-c670c7f0-25f0-48ec-b69d-a5663841d9c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-098f39eb-751f-4a70-8e8b-593f9d203681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:20:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:20:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1361607671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.610 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.651 2 DEBUG nova.network.neutron [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.666 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.667 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.683 2 DEBUG oslo_concurrency.processutils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5ae8a491-7025-40ff-a6f8-240f71af3e41/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa82ib539" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.717 2 DEBUG nova.storage.rbd_utils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] rbd image 5ae8a491-7025-40ff-a6f8-240f71af3e41_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.722 2 DEBUG oslo_concurrency.processutils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5ae8a491-7025-40ff-a6f8-240f71af3e41/disk.config 5ae8a491-7025-40ff-a6f8-240f71af3e41_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.895 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.896 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4530MB free_disk=20.942501068115234GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.896 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:55 compute-0 nova_compute[257802]: 2025-10-02 12:20:55.897 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.027 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 5ae8a491-7025-40ff-a6f8-240f71af3e41 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.028 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 098f39eb-751f-4a70-8e8b-593f9d203681 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.028 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.028 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.076 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.186 2 DEBUG oslo_concurrency.processutils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5ae8a491-7025-40ff-a6f8-240f71af3e41/disk.config 5ae8a491-7025-40ff-a6f8-240f71af3e41_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.188 2 INFO nova.virt.libvirt.driver [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Deleting local config drive /var/lib/nova/instances/5ae8a491-7025-40ff-a6f8-240f71af3e41/disk.config because it was imported into RBD.
Oct 02 12:20:56 compute-0 ceph-mon[73607]: osdmap e266: 3 total, 3 up, 3 in
Oct 02 12:20:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2970872343' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1707071716' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:20:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1707071716' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:20:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1361607671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:56 compute-0 systemd-machined[211836]: New machine qemu-40-instance-00000056.
Oct 02 12:20:56 compute-0 systemd[1]: Started Virtual Machine qemu-40-instance-00000056.
Oct 02 12:20:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:20:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:56.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:20:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:20:56 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2023584445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.548 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407641.547749, e7e38e6b-74d9-470a-ad54-222ee4a47e1f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.550 2 INFO nova.compute.manager [-] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] VM Stopped (Lifecycle Event)
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.553 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.560 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.672 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.679 2 DEBUG nova.compute.manager [None req-f9af157c-3ad0-4d54-8dd1-387dd4962135 - - - - - -] [instance: e7e38e6b-74d9-470a-ad54-222ee4a47e1f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.716 2 DEBUG nova.network.neutron [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Updating instance_info_cache with network_info: [{"id": "2a565172-9810-4239-9e57-a527a2124db7", "address": "fa:16:3e:65:09:7b", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a565172-98", "ovs_interfaceid": "2a565172-9810-4239-9e57-a527a2124db7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:20:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:56.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.777 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.778 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.778 2 DEBUG oslo_concurrency.lockutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Releasing lock "refresh_cache-098f39eb-751f-4a70-8e8b-593f9d203681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.778 2 DEBUG nova.compute.manager [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Instance network_info: |[{"id": "2a565172-9810-4239-9e57-a527a2124db7", "address": "fa:16:3e:65:09:7b", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a565172-98", "ovs_interfaceid": "2a565172-9810-4239-9e57-a527a2124db7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.779 2 DEBUG oslo_concurrency.lockutils [req-086b1b50-5ad4-4d59-abd8-ef660a1c5843 req-c670c7f0-25f0-48ec-b69d-a5663841d9c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-098f39eb-751f-4a70-8e8b-593f9d203681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.779 2 DEBUG nova.network.neutron [req-086b1b50-5ad4-4d59-abd8-ef660a1c5843 req-c670c7f0-25f0-48ec-b69d-a5663841d9c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Refreshing network info cache for port 2a565172-9810-4239-9e57-a527a2124db7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.781 2 DEBUG nova.virt.libvirt.driver [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Start _get_guest_xml network_info=[{"id": "2a565172-9810-4239-9e57-a527a2124db7", "address": "fa:16:3e:65:09:7b", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a565172-98", "ovs_interfaceid": "2a565172-9810-4239-9e57-a527a2124db7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.786 2 WARNING nova.virt.libvirt.driver [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.792 2 DEBUG nova.virt.libvirt.host [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.792 2 DEBUG nova.virt.libvirt.host [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.803 2 DEBUG nova.virt.libvirt.host [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.804 2 DEBUG nova.virt.libvirt.host [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.804 2 DEBUG nova.virt.libvirt.driver [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.805 2 DEBUG nova.virt.hardware [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.805 2 DEBUG nova.virt.hardware [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.805 2 DEBUG nova.virt.hardware [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.805 2 DEBUG nova.virt.hardware [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.806 2 DEBUG nova.virt.hardware [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.806 2 DEBUG nova.virt.hardware [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.806 2 DEBUG nova.virt.hardware [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.806 2 DEBUG nova.virt.hardware [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.806 2 DEBUG nova.virt.hardware [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.807 2 DEBUG nova.virt.hardware [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.807 2 DEBUG nova.virt.hardware [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:20:56 compute-0 nova_compute[257802]: 2025-10-02 12:20:56.809 2 DEBUG oslo_concurrency.processutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:57 compute-0 nova_compute[257802]: 2025-10-02 12:20:57.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:20:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1736: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 553 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 15 MiB/s wr, 368 op/s
Oct 02 12:20:57 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3007430529' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:57 compute-0 nova_compute[257802]: 2025-10-02 12:20:57.403 2 DEBUG oslo_concurrency.processutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:57 compute-0 nova_compute[257802]: 2025-10-02 12:20:57.427 2 DEBUG nova.storage.rbd_utils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image 098f39eb-751f-4a70-8e8b-593f9d203681_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:57 compute-0 nova_compute[257802]: 2025-10-02 12:20:57.432 2 DEBUG oslo_concurrency.processutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:57 compute-0 ceph-mon[73607]: pgmap v1735: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 553 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 9.1 MiB/s rd, 19 MiB/s wr, 426 op/s
Oct 02 12:20:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2023584445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:57 compute-0 nova_compute[257802]: 2025-10-02 12:20:57.455 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407657.4290059, 5ae8a491-7025-40ff-a6f8-240f71af3e41 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:20:57 compute-0 nova_compute[257802]: 2025-10-02 12:20:57.456 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] VM Resumed (Lifecycle Event)
Oct 02 12:20:57 compute-0 nova_compute[257802]: 2025-10-02 12:20:57.458 2 DEBUG nova.compute.manager [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:20:57 compute-0 nova_compute[257802]: 2025-10-02 12:20:57.459 2 DEBUG nova.virt.libvirt.driver [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:20:57 compute-0 nova_compute[257802]: 2025-10-02 12:20:57.462 2 INFO nova.virt.libvirt.driver [-] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Instance spawned successfully.
Oct 02 12:20:57 compute-0 nova_compute[257802]: 2025-10-02 12:20:57.463 2 DEBUG nova.virt.libvirt.driver [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:20:57 compute-0 nova_compute[257802]: 2025-10-02 12:20:57.516 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:57 compute-0 nova_compute[257802]: 2025-10-02 12:20:57.520 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:20:57 compute-0 nova_compute[257802]: 2025-10-02 12:20:57.535 2 DEBUG nova.virt.libvirt.driver [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:57 compute-0 nova_compute[257802]: 2025-10-02 12:20:57.536 2 DEBUG nova.virt.libvirt.driver [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:57 compute-0 nova_compute[257802]: 2025-10-02 12:20:57.536 2 DEBUG nova.virt.libvirt.driver [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:57 compute-0 nova_compute[257802]: 2025-10-02 12:20:57.537 2 DEBUG nova.virt.libvirt.driver [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:57 compute-0 nova_compute[257802]: 2025-10-02 12:20:57.538 2 DEBUG nova.virt.libvirt.driver [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:57 compute-0 nova_compute[257802]: 2025-10-02 12:20:57.538 2 DEBUG nova.virt.libvirt.driver [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:57 compute-0 nova_compute[257802]: 2025-10-02 12:20:57.572 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:20:57 compute-0 nova_compute[257802]: 2025-10-02 12:20:57.573 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407657.4290822, 5ae8a491-7025-40ff-a6f8-240f71af3e41 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:20:57 compute-0 nova_compute[257802]: 2025-10-02 12:20:57.573 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] VM Started (Lifecycle Event)
Oct 02 12:20:57 compute-0 nova_compute[257802]: 2025-10-02 12:20:57.610 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:57 compute-0 nova_compute[257802]: 2025-10-02 12:20:57.614 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:20:57 compute-0 nova_compute[257802]: 2025-10-02 12:20:57.638 2 INFO nova.compute.manager [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Took 5.36 seconds to spawn the instance on the hypervisor.
Oct 02 12:20:57 compute-0 nova_compute[257802]: 2025-10-02 12:20:57.639 2 DEBUG nova.compute.manager [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:57 compute-0 nova_compute[257802]: 2025-10-02 12:20:57.661 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:20:57 compute-0 nova_compute[257802]: 2025-10-02 12:20:57.728 2 INFO nova.compute.manager [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Took 6.54 seconds to build instance.
Oct 02 12:20:57 compute-0 nova_compute[257802]: 2025-10-02 12:20:57.752 2 DEBUG oslo_concurrency.lockutils [None req-b9b1a7d6-a2be-48a4-acf7-d5892fc597ef a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lock "5ae8a491-7025-40ff-a6f8-240f71af3e41" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:20:58 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2043938446' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.037 2 DEBUG oslo_concurrency.processutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.038 2 DEBUG nova.virt.libvirt.vif [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:20:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1340405804',display_name='tempest-DeleteServersTestJSON-server-1340405804',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1340405804',id=87,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c2c11ebecb14f3188f35ea473c4ca02',ramdisk_id='',reservation_id='r-9hnbw0h5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1602490521',owner_user_name='tempest-DeleteServersTestJSON-1602490521-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:20:53Z,user_data=None,user_id='a9f7faffac7240869a0196df1ddda7e5',uuid=098f39eb-751f-4a70-8e8b-593f9d203681,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a565172-9810-4239-9e57-a527a2124db7", "address": "fa:16:3e:65:09:7b", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a565172-98", "ovs_interfaceid": "2a565172-9810-4239-9e57-a527a2124db7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.039 2 DEBUG nova.network.os_vif_util [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converting VIF {"id": "2a565172-9810-4239-9e57-a527a2124db7", "address": "fa:16:3e:65:09:7b", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a565172-98", "ovs_interfaceid": "2a565172-9810-4239-9e57-a527a2124db7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.040 2 DEBUG nova.network.os_vif_util [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:09:7b,bridge_name='br-int',has_traffic_filtering=True,id=2a565172-9810-4239-9e57-a527a2124db7,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a565172-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.040 2 DEBUG nova.objects.instance [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lazy-loading 'pci_devices' on Instance uuid 098f39eb-751f-4a70-8e8b-593f9d203681 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.070 2 DEBUG nova.virt.libvirt.driver [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:20:58 compute-0 nova_compute[257802]:   <uuid>098f39eb-751f-4a70-8e8b-593f9d203681</uuid>
Oct 02 12:20:58 compute-0 nova_compute[257802]:   <name>instance-00000057</name>
Oct 02 12:20:58 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:20:58 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:20:58 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:20:58 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:       <nova:name>tempest-DeleteServersTestJSON-server-1340405804</nova:name>
Oct 02 12:20:58 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:20:56</nova:creationTime>
Oct 02 12:20:58 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:20:58 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:20:58 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:20:58 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:20:58 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:20:58 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:20:58 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:20:58 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:20:58 compute-0 nova_compute[257802]:         <nova:user uuid="a9f7faffac7240869a0196df1ddda7e5">tempest-DeleteServersTestJSON-1602490521-project-member</nova:user>
Oct 02 12:20:58 compute-0 nova_compute[257802]:         <nova:project uuid="1c2c11ebecb14f3188f35ea473c4ca02">tempest-DeleteServersTestJSON-1602490521</nova:project>
Oct 02 12:20:58 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:20:58 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:20:58 compute-0 nova_compute[257802]:         <nova:port uuid="2a565172-9810-4239-9e57-a527a2124db7">
Oct 02 12:20:58 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:20:58 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:20:58 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:20:58 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <system>
Oct 02 12:20:58 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:20:58 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:20:58 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:20:58 compute-0 nova_compute[257802]:       <entry name="serial">098f39eb-751f-4a70-8e8b-593f9d203681</entry>
Oct 02 12:20:58 compute-0 nova_compute[257802]:       <entry name="uuid">098f39eb-751f-4a70-8e8b-593f9d203681</entry>
Oct 02 12:20:58 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     </system>
Oct 02 12:20:58 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:20:58 compute-0 nova_compute[257802]:   <os>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:   </os>
Oct 02 12:20:58 compute-0 nova_compute[257802]:   <features>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:   </features>
Oct 02 12:20:58 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:20:58 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:20:58 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:20:58 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/098f39eb-751f-4a70-8e8b-593f9d203681_disk">
Oct 02 12:20:58 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:       </source>
Oct 02 12:20:58 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:20:58 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:20:58 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:20:58 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/098f39eb-751f-4a70-8e8b-593f9d203681_disk.config">
Oct 02 12:20:58 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:       </source>
Oct 02 12:20:58 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:20:58 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:20:58 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:20:58 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:65:09:7b"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:       <target dev="tap2a565172-98"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:20:58 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/098f39eb-751f-4a70-8e8b-593f9d203681/console.log" append="off"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <video>
Oct 02 12:20:58 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     </video>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:20:58 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:20:58 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:20:58 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:20:58 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:20:58 compute-0 nova_compute[257802]: </domain>
Oct 02 12:20:58 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.077 2 DEBUG nova.compute.manager [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Preparing to wait for external event network-vif-plugged-2a565172-9810-4239-9e57-a527a2124db7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.077 2 DEBUG oslo_concurrency.lockutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "098f39eb-751f-4a70-8e8b-593f9d203681-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.078 2 DEBUG oslo_concurrency.lockutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "098f39eb-751f-4a70-8e8b-593f9d203681-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.078 2 DEBUG oslo_concurrency.lockutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "098f39eb-751f-4a70-8e8b-593f9d203681-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.079 2 DEBUG nova.virt.libvirt.vif [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:20:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1340405804',display_name='tempest-DeleteServersTestJSON-server-1340405804',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1340405804',id=87,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c2c11ebecb14f3188f35ea473c4ca02',ramdisk_id='',reservation_id='r-9hnbw0h5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1602490521',owner_user_name='tempest-DeleteServersTestJSON-1602490521-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:20:53Z,user_data=None,user_id='a9f7faffac7240869a0196df1ddda7e5',uuid=098f39eb-751f-4a70-8e8b-593f9d203681,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a565172-9810-4239-9e57-a527a2124db7", "address": "fa:16:3e:65:09:7b", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a565172-98", "ovs_interfaceid": "2a565172-9810-4239-9e57-a527a2124db7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.079 2 DEBUG nova.network.os_vif_util [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converting VIF {"id": "2a565172-9810-4239-9e57-a527a2124db7", "address": "fa:16:3e:65:09:7b", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a565172-98", "ovs_interfaceid": "2a565172-9810-4239-9e57-a527a2124db7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.080 2 DEBUG nova.network.os_vif_util [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:09:7b,bridge_name='br-int',has_traffic_filtering=True,id=2a565172-9810-4239-9e57-a527a2124db7,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a565172-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.081 2 DEBUG os_vif [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:09:7b,bridge_name='br-int',has_traffic_filtering=True,id=2a565172-9810-4239-9e57-a527a2124db7,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a565172-98') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.082 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.083 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.086 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a565172-98, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.086 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2a565172-98, col_values=(('external_ids', {'iface-id': '2a565172-9810-4239-9e57-a527a2124db7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:09:7b', 'vm-uuid': '098f39eb-751f-4a70-8e8b-593f9d203681'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:58 compute-0 NetworkManager[44987]: <info>  [1759407658.0894] manager: (tap2a565172-98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/166)
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.096 2 INFO os_vif [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:09:7b,bridge_name='br-int',has_traffic_filtering=True,id=2a565172-9810-4239-9e57-a527a2124db7,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a565172-98')
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.161 2 DEBUG nova.virt.libvirt.driver [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.162 2 DEBUG nova.virt.libvirt.driver [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.162 2 DEBUG nova.virt.libvirt.driver [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] No VIF found with MAC fa:16:3e:65:09:7b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.163 2 INFO nova.virt.libvirt.driver [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Using config drive
Oct 02 12:20:58 compute-0 podman[310931]: 2025-10-02 12:20:58.188984396 +0000 UTC m=+0.064661295 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.204 2 DEBUG nova.storage.rbd_utils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image 098f39eb-751f-4a70-8e8b-593f9d203681_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:58 compute-0 podman[310932]: 2025-10-02 12:20:58.205336067 +0000 UTC m=+0.071852172 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:20:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:20:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:58.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.433 2 DEBUG nova.network.neutron [req-086b1b50-5ad4-4d59-abd8-ef660a1c5843 req-c670c7f0-25f0-48ec-b69d-a5663841d9c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Updated VIF entry in instance network info cache for port 2a565172-9810-4239-9e57-a527a2124db7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.433 2 DEBUG nova.network.neutron [req-086b1b50-5ad4-4d59-abd8-ef660a1c5843 req-c670c7f0-25f0-48ec-b69d-a5663841d9c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Updating instance_info_cache with network_info: [{"id": "2a565172-9810-4239-9e57-a527a2124db7", "address": "fa:16:3e:65:09:7b", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a565172-98", "ovs_interfaceid": "2a565172-9810-4239-9e57-a527a2124db7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:20:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Oct 02 12:20:58 compute-0 ceph-mon[73607]: pgmap v1736: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 553 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 15 MiB/s wr, 368 op/s
Oct 02 12:20:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3007430529' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/644672697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2043938446' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1653184868' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.458 2 DEBUG oslo_concurrency.lockutils [req-086b1b50-5ad4-4d59-abd8-ef660a1c5843 req-c670c7f0-25f0-48ec-b69d-a5663841d9c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-098f39eb-751f-4a70-8e8b-593f9d203681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:20:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Oct 02 12:20:58 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.629 2 INFO nova.virt.libvirt.driver [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Creating config drive at /var/lib/nova/instances/098f39eb-751f-4a70-8e8b-593f9d203681/disk.config
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.634 2 DEBUG oslo_concurrency.processutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/098f39eb-751f-4a70-8e8b-593f9d203681/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz2pgm56e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:20:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:58.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.770 2 DEBUG oslo_concurrency.processutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/098f39eb-751f-4a70-8e8b-593f9d203681/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz2pgm56e" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.795 2 DEBUG nova.storage.rbd_utils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image 098f39eb-751f-4a70-8e8b-593f9d203681_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.799 2 DEBUG oslo_concurrency.processutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/098f39eb-751f-4a70-8e8b-593f9d203681/disk.config 098f39eb-751f-4a70-8e8b-593f9d203681_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.989 2 DEBUG oslo_concurrency.processutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/098f39eb-751f-4a70-8e8b-593f9d203681/disk.config 098f39eb-751f-4a70-8e8b-593f9d203681_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:58 compute-0 nova_compute[257802]: 2025-10-02 12:20:58.989 2 INFO nova.virt.libvirt.driver [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Deleting local config drive /var/lib/nova/instances/098f39eb-751f-4a70-8e8b-593f9d203681/disk.config because it was imported into RBD.
Oct 02 12:20:59 compute-0 kernel: tap2a565172-98: entered promiscuous mode
Oct 02 12:20:59 compute-0 systemd-udevd[310865]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:20:59 compute-0 NetworkManager[44987]: <info>  [1759407659.0595] manager: (tap2a565172-98): new Tun device (/org/freedesktop/NetworkManager/Devices/167)
Oct 02 12:20:59 compute-0 ovn_controller[148183]: 2025-10-02T12:20:59Z|00368|binding|INFO|Claiming lport 2a565172-9810-4239-9e57-a527a2124db7 for this chassis.
Oct 02 12:20:59 compute-0 ovn_controller[148183]: 2025-10-02T12:20:59Z|00369|binding|INFO|2a565172-9810-4239-9e57-a527a2124db7: Claiming fa:16:3e:65:09:7b 10.100.0.10
Oct 02 12:20:59 compute-0 nova_compute[257802]: 2025-10-02 12:20:59.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:59 compute-0 NetworkManager[44987]: <info>  [1759407659.0742] device (tap2a565172-98): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:20:59 compute-0 NetworkManager[44987]: <info>  [1759407659.0761] device (tap2a565172-98): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:59.086 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:09:7b 10.100.0.10'], port_security=['fa:16:3e:65:09:7b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '098f39eb-751f-4a70-8e8b-593f9d203681', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7754c79a-cca5-48c7-9169-831eaad23ccc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c2c11ebecb14f3188f35ea473c4ca02', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3c0d053f-a096-4f8c-8162-5ef19e29b5d7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=45b5774e-2213-45dd-ab74-f2a3868d167c, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=2a565172-9810-4239-9e57-a527a2124db7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:59.088 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 2a565172-9810-4239-9e57-a527a2124db7 in datapath 7754c79a-cca5-48c7-9169-831eaad23ccc bound to our chassis
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:59.089 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7754c79a-cca5-48c7-9169-831eaad23ccc
Oct 02 12:20:59 compute-0 systemd-machined[211836]: New machine qemu-41-instance-00000057.
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:59.102 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5fa323bb-de03-417f-ad69-4c0a2a0fd6ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:59.104 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7754c79a-c1 in ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:59.106 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7754c79a-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:59.106 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[45854c78-63a0-45f5-8f57-0e4c34c93cad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:59.107 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5456bdec-56db-4602-88b8-8330b34a1186]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:59 compute-0 systemd[1]: Started Virtual Machine qemu-41-instance-00000057.
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:59.126 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[5dcffba0-434a-4ff2-ac6c-188071bc0811]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:59 compute-0 nova_compute[257802]: 2025-10-02 12:20:59.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:59 compute-0 ovn_controller[148183]: 2025-10-02T12:20:59Z|00370|binding|INFO|Setting lport 2a565172-9810-4239-9e57-a527a2124db7 ovn-installed in OVS
Oct 02 12:20:59 compute-0 ovn_controller[148183]: 2025-10-02T12:20:59Z|00371|binding|INFO|Setting lport 2a565172-9810-4239-9e57-a527a2124db7 up in Southbound
Oct 02 12:20:59 compute-0 nova_compute[257802]: 2025-10-02 12:20:59.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:59.153 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2bedc2cb-bb2e-4470-b50e-38703b48b042]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:59.190 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[e043dd3b-cd22-455c-856d-dbc7f9132e86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:59 compute-0 NetworkManager[44987]: <info>  [1759407659.2037] manager: (tap7754c79a-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/168)
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:59.201 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[317f4e33-265e-4383-80ef-e362c97878d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:59.239 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[4e914679-b555-46d4-81c6-340cc28ab5c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:59.242 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[5485da08-358c-4f92-b817-d4978e4cf751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:59 compute-0 NetworkManager[44987]: <info>  [1759407659.2682] device (tap7754c79a-c0): carrier: link connected
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:59.275 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[9ab2a9b4-ecb8-4b6b-9ce6-5f7008a33c35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:59.293 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[23d9e7e6-f013-48b9-8ac8-6eadb62c01bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7754c79a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:b0:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 110], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572689, 'reachable_time': 24382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311073, 'error': None, 'target': 'ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:59.306 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8d545c5f-38ad-4104-81f4-43c16b074946]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:b018'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 572689, 'tstamp': 572689}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311074, 'error': None, 'target': 'ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:59.322 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[56b25724-d24d-407c-b1af-5f12ad24d49e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7754c79a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:b0:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 110], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572689, 'reachable_time': 24382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311075, 'error': None, 'target': 'ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:59.352 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1562dfad-870b-42a1-98c2-7a0673560d75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 305 active+clean; 582 MiB data, 987 MiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 18 MiB/s wr, 452 op/s
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:59.405 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[772861bf-2e2a-4c4e-8cd6-d811025bf266]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:59.406 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7754c79a-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:59.407 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:59.407 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7754c79a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:59 compute-0 NetworkManager[44987]: <info>  [1759407659.4101] manager: (tap7754c79a-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/169)
Oct 02 12:20:59 compute-0 kernel: tap7754c79a-c0: entered promiscuous mode
Oct 02 12:20:59 compute-0 nova_compute[257802]: 2025-10-02 12:20:59.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:59.412 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7754c79a-c0, col_values=(('external_ids', {'iface-id': 'b1ce5636-6283-470c-ab5e-aac212c1256d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:59 compute-0 ovn_controller[148183]: 2025-10-02T12:20:59Z|00372|binding|INFO|Releasing lport b1ce5636-6283-470c-ab5e-aac212c1256d from this chassis (sb_readonly=0)
Oct 02 12:20:59 compute-0 nova_compute[257802]: 2025-10-02 12:20:59.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:59.430 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7754c79a-cca5-48c7-9169-831eaad23ccc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7754c79a-cca5-48c7-9169-831eaad23ccc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:59.431 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[433ac8d0-c99f-4fc1-a6be-05531c5bb04d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:59.432 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-7754c79a-cca5-48c7-9169-831eaad23ccc
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/7754c79a-cca5-48c7-9169-831eaad23ccc.pid.haproxy
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 7754c79a-cca5-48c7-9169-831eaad23ccc
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:20:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:20:59.433 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc', 'env', 'PROCESS_TAG=haproxy-7754c79a-cca5-48c7-9169-831eaad23ccc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7754c79a-cca5-48c7-9169-831eaad23ccc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:20:59 compute-0 ceph-mon[73607]: osdmap e267: 3 total, 3 up, 3 in
Oct 02 12:20:59 compute-0 sudo[311085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:59 compute-0 sudo[311085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:59 compute-0 sudo[311085]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:59 compute-0 sudo[311110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:59 compute-0 sudo[311110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:59 compute-0 sudo[311110]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:59 compute-0 nova_compute[257802]: 2025-10-02 12:20:59.621 2 DEBUG nova.compute.manager [req-e72345a2-af7b-468f-8dbe-8c97a8d86647 req-ab307ed9-25f7-42fb-80e1-b0dd324ec092 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Received event network-vif-plugged-2a565172-9810-4239-9e57-a527a2124db7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:59 compute-0 nova_compute[257802]: 2025-10-02 12:20:59.623 2 DEBUG oslo_concurrency.lockutils [req-e72345a2-af7b-468f-8dbe-8c97a8d86647 req-ab307ed9-25f7-42fb-80e1-b0dd324ec092 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "098f39eb-751f-4a70-8e8b-593f9d203681-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:59 compute-0 nova_compute[257802]: 2025-10-02 12:20:59.623 2 DEBUG oslo_concurrency.lockutils [req-e72345a2-af7b-468f-8dbe-8c97a8d86647 req-ab307ed9-25f7-42fb-80e1-b0dd324ec092 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "098f39eb-751f-4a70-8e8b-593f9d203681-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:59 compute-0 nova_compute[257802]: 2025-10-02 12:20:59.623 2 DEBUG oslo_concurrency.lockutils [req-e72345a2-af7b-468f-8dbe-8c97a8d86647 req-ab307ed9-25f7-42fb-80e1-b0dd324ec092 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "098f39eb-751f-4a70-8e8b-593f9d203681-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:59 compute-0 nova_compute[257802]: 2025-10-02 12:20:59.624 2 DEBUG nova.compute.manager [req-e72345a2-af7b-468f-8dbe-8c97a8d86647 req-ab307ed9-25f7-42fb-80e1-b0dd324ec092 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Processing event network-vif-plugged-2a565172-9810-4239-9e57-a527a2124db7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:20:59 compute-0 podman[311157]: 2025-10-02 12:20:59.783238738 +0000 UTC m=+0.022217836 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:20:59 compute-0 podman[311157]: 2025-10-02 12:20:59.888502116 +0000 UTC m=+0.127481204 container create 9b8883539b4d4fe82a7f138e17195ac56aff976c66344fc4a8daeb736800abf9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:20:59 compute-0 systemd[1]: Started libpod-conmon-9b8883539b4d4fe82a7f138e17195ac56aff976c66344fc4a8daeb736800abf9.scope.
Oct 02 12:20:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:20:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a7fbed21fe337a88c3b4d5c1e4202f99d2b46ac17f23e78845bc9c856f054f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:59 compute-0 podman[311157]: 2025-10-02 12:20:59.974932564 +0000 UTC m=+0.213911632 container init 9b8883539b4d4fe82a7f138e17195ac56aff976c66344fc4a8daeb736800abf9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 02 12:20:59 compute-0 podman[311157]: 2025-10-02 12:20:59.98012527 +0000 UTC m=+0.219104338 container start 9b8883539b4d4fe82a7f138e17195ac56aff976c66344fc4a8daeb736800abf9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:20:59 compute-0 nova_compute[257802]: 2025-10-02 12:20:59.989 2 INFO nova.compute.manager [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Rebuilding instance
Oct 02 12:21:00 compute-0 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[311173]: [NOTICE]   (311177) : New worker (311179) forked
Oct 02 12:21:00 compute-0 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[311173]: [NOTICE]   (311177) : Loading success.
Oct 02 12:21:00 compute-0 nova_compute[257802]: 2025-10-02 12:21:00.244 2 DEBUG nova.objects.instance [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lazy-loading 'trusted_certs' on Instance uuid 5ae8a491-7025-40ff-a6f8-240f71af3e41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:21:00 compute-0 nova_compute[257802]: 2025-10-02 12:21:00.304 2 DEBUG nova.compute.manager [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:21:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:21:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:00.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:21:00 compute-0 nova_compute[257802]: 2025-10-02 12:21:00.428 2 DEBUG nova.objects.instance [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lazy-loading 'pci_requests' on Instance uuid 5ae8a491-7025-40ff-a6f8-240f71af3e41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:21:00 compute-0 nova_compute[257802]: 2025-10-02 12:21:00.447 2 DEBUG nova.objects.instance [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lazy-loading 'pci_devices' on Instance uuid 5ae8a491-7025-40ff-a6f8-240f71af3e41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:21:00 compute-0 nova_compute[257802]: 2025-10-02 12:21:00.487 2 DEBUG nova.objects.instance [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lazy-loading 'resources' on Instance uuid 5ae8a491-7025-40ff-a6f8-240f71af3e41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:21:00 compute-0 ceph-mon[73607]: pgmap v1738: 305 pgs: 305 active+clean; 582 MiB data, 987 MiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 18 MiB/s wr, 452 op/s
Oct 02 12:21:00 compute-0 nova_compute[257802]: 2025-10-02 12:21:00.578 2 DEBUG nova.objects.instance [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lazy-loading 'migration_context' on Instance uuid 5ae8a491-7025-40ff-a6f8-240f71af3e41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:21:00 compute-0 nova_compute[257802]: 2025-10-02 12:21:00.592 2 DEBUG nova.objects.instance [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:21:00 compute-0 nova_compute[257802]: 2025-10-02 12:21:00.595 2 DEBUG nova.virt.libvirt.driver [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:21:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:00.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:00 compute-0 nova_compute[257802]: 2025-10-02 12:21:00.747 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407660.746586, 098f39eb-751f-4a70-8e8b-593f9d203681 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:21:00 compute-0 nova_compute[257802]: 2025-10-02 12:21:00.747 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] VM Started (Lifecycle Event)
Oct 02 12:21:00 compute-0 nova_compute[257802]: 2025-10-02 12:21:00.749 2 DEBUG nova.compute.manager [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:21:00 compute-0 nova_compute[257802]: 2025-10-02 12:21:00.753 2 DEBUG nova.virt.libvirt.driver [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:21:00 compute-0 nova_compute[257802]: 2025-10-02 12:21:00.756 2 INFO nova.virt.libvirt.driver [-] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Instance spawned successfully.
Oct 02 12:21:00 compute-0 nova_compute[257802]: 2025-10-02 12:21:00.756 2 DEBUG nova.virt.libvirt.driver [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:21:00 compute-0 nova_compute[257802]: 2025-10-02 12:21:00.789 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:21:00 compute-0 nova_compute[257802]: 2025-10-02 12:21:00.794 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:21:00 compute-0 nova_compute[257802]: 2025-10-02 12:21:00.798 2 DEBUG nova.virt.libvirt.driver [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:21:00 compute-0 nova_compute[257802]: 2025-10-02 12:21:00.799 2 DEBUG nova.virt.libvirt.driver [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:21:00 compute-0 nova_compute[257802]: 2025-10-02 12:21:00.800 2 DEBUG nova.virt.libvirt.driver [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:21:00 compute-0 nova_compute[257802]: 2025-10-02 12:21:00.800 2 DEBUG nova.virt.libvirt.driver [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:21:00 compute-0 nova_compute[257802]: 2025-10-02 12:21:00.801 2 DEBUG nova.virt.libvirt.driver [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:21:00 compute-0 nova_compute[257802]: 2025-10-02 12:21:00.802 2 DEBUG nova.virt.libvirt.driver [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:21:00 compute-0 nova_compute[257802]: 2025-10-02 12:21:00.935 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:21:00 compute-0 nova_compute[257802]: 2025-10-02 12:21:00.936 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407660.7466934, 098f39eb-751f-4a70-8e8b-593f9d203681 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:21:00 compute-0 nova_compute[257802]: 2025-10-02 12:21:00.936 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] VM Paused (Lifecycle Event)
Oct 02 12:21:01 compute-0 nova_compute[257802]: 2025-10-02 12:21:01.043 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:21:01 compute-0 nova_compute[257802]: 2025-10-02 12:21:01.046 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407660.7524202, 098f39eb-751f-4a70-8e8b-593f9d203681 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:21:01 compute-0 nova_compute[257802]: 2025-10-02 12:21:01.047 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] VM Resumed (Lifecycle Event)
Oct 02 12:21:01 compute-0 nova_compute[257802]: 2025-10-02 12:21:01.068 2 INFO nova.compute.manager [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Took 7.77 seconds to spawn the instance on the hypervisor.
Oct 02 12:21:01 compute-0 nova_compute[257802]: 2025-10-02 12:21:01.069 2 DEBUG nova.compute.manager [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:21:01 compute-0 nova_compute[257802]: 2025-10-02 12:21:01.086 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:21:01 compute-0 nova_compute[257802]: 2025-10-02 12:21:01.088 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:21:01 compute-0 nova_compute[257802]: 2025-10-02 12:21:01.199 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:21:01 compute-0 nova_compute[257802]: 2025-10-02 12:21:01.334 2 INFO nova.compute.manager [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Took 9.20 seconds to build instance.
Oct 02 12:21:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1739: 305 pgs: 305 active+clean; 546 MiB data, 967 MiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 14 MiB/s wr, 401 op/s
Oct 02 12:21:01 compute-0 nova_compute[257802]: 2025-10-02 12:21:01.386 2 DEBUG oslo_concurrency.lockutils [None req-ad445bdf-cef8-42b2-8256-f85d4d43283f a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "098f39eb-751f-4a70-8e8b-593f9d203681" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.493s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:01 compute-0 nova_compute[257802]: 2025-10-02 12:21:01.840 2 DEBUG nova.compute.manager [req-0d30a60e-2de5-4b1c-9123-02d35d7ce70a req-f4e4c14a-38a0-44dd-92d3-2dbda38cc4b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Received event network-vif-plugged-2a565172-9810-4239-9e57-a527a2124db7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:21:01 compute-0 nova_compute[257802]: 2025-10-02 12:21:01.840 2 DEBUG oslo_concurrency.lockutils [req-0d30a60e-2de5-4b1c-9123-02d35d7ce70a req-f4e4c14a-38a0-44dd-92d3-2dbda38cc4b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "098f39eb-751f-4a70-8e8b-593f9d203681-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:01 compute-0 nova_compute[257802]: 2025-10-02 12:21:01.840 2 DEBUG oslo_concurrency.lockutils [req-0d30a60e-2de5-4b1c-9123-02d35d7ce70a req-f4e4c14a-38a0-44dd-92d3-2dbda38cc4b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "098f39eb-751f-4a70-8e8b-593f9d203681-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:01 compute-0 nova_compute[257802]: 2025-10-02 12:21:01.841 2 DEBUG oslo_concurrency.lockutils [req-0d30a60e-2de5-4b1c-9123-02d35d7ce70a req-f4e4c14a-38a0-44dd-92d3-2dbda38cc4b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "098f39eb-751f-4a70-8e8b-593f9d203681-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:01 compute-0 nova_compute[257802]: 2025-10-02 12:21:01.841 2 DEBUG nova.compute.manager [req-0d30a60e-2de5-4b1c-9123-02d35d7ce70a req-f4e4c14a-38a0-44dd-92d3-2dbda38cc4b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] No waiting events found dispatching network-vif-plugged-2a565172-9810-4239-9e57-a527a2124db7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:21:01 compute-0 nova_compute[257802]: 2025-10-02 12:21:01.841 2 WARNING nova.compute.manager [req-0d30a60e-2de5-4b1c-9123-02d35d7ce70a req-f4e4c14a-38a0-44dd-92d3-2dbda38cc4b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Received unexpected event network-vif-plugged-2a565172-9810-4239-9e57-a527a2124db7 for instance with vm_state active and task_state None.
Oct 02 12:21:02 compute-0 nova_compute[257802]: 2025-10-02 12:21:02.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:21:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:02.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:21:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Oct 02 12:21:02 compute-0 ceph-mon[73607]: pgmap v1739: 305 pgs: 305 active+clean; 546 MiB data, 967 MiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 14 MiB/s wr, 401 op/s
Oct 02 12:21:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Oct 02 12:21:02 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Oct 02 12:21:02 compute-0 nova_compute[257802]: 2025-10-02 12:21:02.737 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:21:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:02.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:02 compute-0 podman[311233]: 2025-10-02 12:21:02.959390778 +0000 UTC m=+0.096116184 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:21:03 compute-0 nova_compute[257802]: 2025-10-02 12:21:03.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1741: 305 pgs: 305 active+clean; 517 MiB data, 943 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 2.8 MiB/s wr, 263 op/s
Oct 02 12:21:03 compute-0 nova_compute[257802]: 2025-10-02 12:21:03.504 2 INFO nova.compute.manager [None req-8ad8dcda-8e43-46f3-bf71-0dfbf90a6530 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Pausing
Oct 02 12:21:03 compute-0 nova_compute[257802]: 2025-10-02 12:21:03.505 2 DEBUG nova.objects.instance [None req-8ad8dcda-8e43-46f3-bf71-0dfbf90a6530 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lazy-loading 'flavor' on Instance uuid 098f39eb-751f-4a70-8e8b-593f9d203681 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:21:03 compute-0 nova_compute[257802]: 2025-10-02 12:21:03.606 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407663.6059358, 098f39eb-751f-4a70-8e8b-593f9d203681 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:21:03 compute-0 nova_compute[257802]: 2025-10-02 12:21:03.606 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] VM Paused (Lifecycle Event)
Oct 02 12:21:03 compute-0 nova_compute[257802]: 2025-10-02 12:21:03.608 2 DEBUG nova.compute.manager [None req-8ad8dcda-8e43-46f3-bf71-0dfbf90a6530 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:21:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Oct 02 12:21:03 compute-0 ceph-mon[73607]: osdmap e268: 3 total, 3 up, 3 in
Oct 02 12:21:03 compute-0 nova_compute[257802]: 2025-10-02 12:21:03.810 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:21:03 compute-0 nova_compute[257802]: 2025-10-02 12:21:03.814 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:21:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Oct 02 12:21:03 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Oct 02 12:21:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Oct 02 12:21:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Oct 02 12:21:04 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Oct 02 12:21:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:04.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:21:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:04.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:21:04 compute-0 ceph-mon[73607]: pgmap v1741: 305 pgs: 305 active+clean; 517 MiB data, 943 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 2.8 MiB/s wr, 263 op/s
Oct 02 12:21:04 compute-0 ceph-mon[73607]: osdmap e269: 3 total, 3 up, 3 in
Oct 02 12:21:04 compute-0 ceph-mon[73607]: osdmap e270: 3 total, 3 up, 3 in
Oct 02 12:21:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 420 MiB data, 887 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 58 KiB/s wr, 319 op/s
Oct 02 12:21:06 compute-0 nova_compute[257802]: 2025-10-02 12:21:06.195 2 DEBUG oslo_concurrency.lockutils [None req-40e77e68-01b4-4cff-8d9a-ff5d3bea41c0 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "098f39eb-751f-4a70-8e8b-593f9d203681" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:06 compute-0 nova_compute[257802]: 2025-10-02 12:21:06.196 2 DEBUG oslo_concurrency.lockutils [None req-40e77e68-01b4-4cff-8d9a-ff5d3bea41c0 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "098f39eb-751f-4a70-8e8b-593f9d203681" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:06 compute-0 nova_compute[257802]: 2025-10-02 12:21:06.197 2 DEBUG oslo_concurrency.lockutils [None req-40e77e68-01b4-4cff-8d9a-ff5d3bea41c0 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "098f39eb-751f-4a70-8e8b-593f9d203681-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:06 compute-0 nova_compute[257802]: 2025-10-02 12:21:06.197 2 DEBUG oslo_concurrency.lockutils [None req-40e77e68-01b4-4cff-8d9a-ff5d3bea41c0 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "098f39eb-751f-4a70-8e8b-593f9d203681-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:06 compute-0 nova_compute[257802]: 2025-10-02 12:21:06.197 2 DEBUG oslo_concurrency.lockutils [None req-40e77e68-01b4-4cff-8d9a-ff5d3bea41c0 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "098f39eb-751f-4a70-8e8b-593f9d203681-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:06 compute-0 nova_compute[257802]: 2025-10-02 12:21:06.198 2 INFO nova.compute.manager [None req-40e77e68-01b4-4cff-8d9a-ff5d3bea41c0 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Terminating instance
Oct 02 12:21:06 compute-0 nova_compute[257802]: 2025-10-02 12:21:06.200 2 DEBUG nova.compute.manager [None req-40e77e68-01b4-4cff-8d9a-ff5d3bea41c0 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:21:06 compute-0 kernel: tap2a565172-98 (unregistering): left promiscuous mode
Oct 02 12:21:06 compute-0 NetworkManager[44987]: <info>  [1759407666.2767] device (tap2a565172-98): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:21:06 compute-0 ovn_controller[148183]: 2025-10-02T12:21:06Z|00373|binding|INFO|Releasing lport 2a565172-9810-4239-9e57-a527a2124db7 from this chassis (sb_readonly=0)
Oct 02 12:21:06 compute-0 ovn_controller[148183]: 2025-10-02T12:21:06Z|00374|binding|INFO|Setting lport 2a565172-9810-4239-9e57-a527a2124db7 down in Southbound
Oct 02 12:21:06 compute-0 ovn_controller[148183]: 2025-10-02T12:21:06Z|00375|binding|INFO|Removing iface tap2a565172-98 ovn-installed in OVS
Oct 02 12:21:06 compute-0 nova_compute[257802]: 2025-10-02 12:21:06.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:06 compute-0 nova_compute[257802]: 2025-10-02 12:21:06.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:06 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000057.scope: Deactivated successfully.
Oct 02 12:21:06 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000057.scope: Consumed 4.497s CPU time.
Oct 02 12:21:06 compute-0 systemd-machined[211836]: Machine qemu-41-instance-00000057 terminated.
Oct 02 12:21:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:06.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:21:06.368 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:09:7b 10.100.0.10'], port_security=['fa:16:3e:65:09:7b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '098f39eb-751f-4a70-8e8b-593f9d203681', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7754c79a-cca5-48c7-9169-831eaad23ccc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c2c11ebecb14f3188f35ea473c4ca02', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3c0d053f-a096-4f8c-8162-5ef19e29b5d7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=45b5774e-2213-45dd-ab74-f2a3868d167c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=2a565172-9810-4239-9e57-a527a2124db7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:21:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:21:06.370 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 2a565172-9810-4239-9e57-a527a2124db7 in datapath 7754c79a-cca5-48c7-9169-831eaad23ccc unbound from our chassis
Oct 02 12:21:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:21:06.371 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7754c79a-cca5-48c7-9169-831eaad23ccc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:21:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:21:06.372 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5b4b4958-0d5e-40b9-ab3f-34d3f537cf72]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:21:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:21:06.372 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc namespace which is not needed anymore
Oct 02 12:21:06 compute-0 nova_compute[257802]: 2025-10-02 12:21:06.432 2 INFO nova.virt.libvirt.driver [-] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Instance destroyed successfully.
Oct 02 12:21:06 compute-0 nova_compute[257802]: 2025-10-02 12:21:06.433 2 DEBUG nova.objects.instance [None req-40e77e68-01b4-4cff-8d9a-ff5d3bea41c0 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lazy-loading 'resources' on Instance uuid 098f39eb-751f-4a70-8e8b-593f9d203681 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:21:06 compute-0 nova_compute[257802]: 2025-10-02 12:21:06.508 2 DEBUG nova.virt.libvirt.vif [None req-40e77e68-01b4-4cff-8d9a-ff5d3bea41c0 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:20:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1340405804',display_name='tempest-DeleteServersTestJSON-server-1340405804',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1340405804',id=87,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:21:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='1c2c11ebecb14f3188f35ea473c4ca02',ramdisk_id='',reservation_id='r-9hnbw0h5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1602490521',owner_user_name='tempest-DeleteServersTestJSON-1602490521-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:21:03Z,user_data=None,user_id='a9f7faffac7240869a0196df1ddda7e5',uuid=098f39eb-751f-4a70-8e8b-593f9d203681,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "2a565172-9810-4239-9e57-a527a2124db7", "address": "fa:16:3e:65:09:7b", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a565172-98", "ovs_interfaceid": "2a565172-9810-4239-9e57-a527a2124db7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:21:06 compute-0 nova_compute[257802]: 2025-10-02 12:21:06.509 2 DEBUG nova.network.os_vif_util [None req-40e77e68-01b4-4cff-8d9a-ff5d3bea41c0 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converting VIF {"id": "2a565172-9810-4239-9e57-a527a2124db7", "address": "fa:16:3e:65:09:7b", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a565172-98", "ovs_interfaceid": "2a565172-9810-4239-9e57-a527a2124db7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:21:06 compute-0 nova_compute[257802]: 2025-10-02 12:21:06.509 2 DEBUG nova.network.os_vif_util [None req-40e77e68-01b4-4cff-8d9a-ff5d3bea41c0 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:09:7b,bridge_name='br-int',has_traffic_filtering=True,id=2a565172-9810-4239-9e57-a527a2124db7,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a565172-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:21:06 compute-0 nova_compute[257802]: 2025-10-02 12:21:06.510 2 DEBUG os_vif [None req-40e77e68-01b4-4cff-8d9a-ff5d3bea41c0 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:09:7b,bridge_name='br-int',has_traffic_filtering=True,id=2a565172-9810-4239-9e57-a527a2124db7,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a565172-98') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:21:06 compute-0 nova_compute[257802]: 2025-10-02 12:21:06.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:06 compute-0 nova_compute[257802]: 2025-10-02 12:21:06.511 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a565172-98, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:21:06 compute-0 nova_compute[257802]: 2025-10-02 12:21:06.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:06 compute-0 nova_compute[257802]: 2025-10-02 12:21:06.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:21:06 compute-0 nova_compute[257802]: 2025-10-02 12:21:06.517 2 INFO os_vif [None req-40e77e68-01b4-4cff-8d9a-ff5d3bea41c0 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:09:7b,bridge_name='br-int',has_traffic_filtering=True,id=2a565172-9810-4239-9e57-a527a2124db7,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a565172-98')
Oct 02 12:21:06 compute-0 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[311173]: [NOTICE]   (311177) : haproxy version is 2.8.14-c23fe91
Oct 02 12:21:06 compute-0 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[311173]: [NOTICE]   (311177) : path to executable is /usr/sbin/haproxy
Oct 02 12:21:06 compute-0 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[311173]: [ALERT]    (311177) : Current worker (311179) exited with code 143 (Terminated)
Oct 02 12:21:06 compute-0 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[311173]: [WARNING]  (311177) : All workers exited. Exiting... (0)
Oct 02 12:21:06 compute-0 systemd[1]: libpod-9b8883539b4d4fe82a7f138e17195ac56aff976c66344fc4a8daeb736800abf9.scope: Deactivated successfully.
Oct 02 12:21:06 compute-0 podman[311294]: 2025-10-02 12:21:06.557083936 +0000 UTC m=+0.087924325 container died 9b8883539b4d4fe82a7f138e17195ac56aff976c66344fc4a8daeb736800abf9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 12:21:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9b8883539b4d4fe82a7f138e17195ac56aff976c66344fc4a8daeb736800abf9-userdata-shm.mount: Deactivated successfully.
Oct 02 12:21:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-18a7fbed21fe337a88c3b4d5c1e4202f99d2b46ac17f23e78845bc9c856f054f-merged.mount: Deactivated successfully.
Oct 02 12:21:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:06.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:07 compute-0 podman[311294]: 2025-10-02 12:21:07.016639532 +0000 UTC m=+0.547479921 container cleanup 9b8883539b4d4fe82a7f138e17195ac56aff976c66344fc4a8daeb736800abf9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct 02 12:21:07 compute-0 systemd[1]: libpod-conmon-9b8883539b4d4fe82a7f138e17195ac56aff976c66344fc4a8daeb736800abf9.scope: Deactivated successfully.
Oct 02 12:21:07 compute-0 ceph-mon[73607]: pgmap v1744: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 420 MiB data, 887 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 58 KiB/s wr, 319 op/s
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:21:07.096150) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407667096184, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2264, "num_deletes": 261, "total_data_size": 3798086, "memory_usage": 3857808, "flush_reason": "Manual Compaction"}
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Oct 02 12:21:07 compute-0 podman[311344]: 2025-10-02 12:21:07.102601048 +0000 UTC m=+0.065828304 container remove 9b8883539b4d4fe82a7f138e17195ac56aff976c66344fc4a8daeb736800abf9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:21:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:21:07.109 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b2639031-54c2-422f-af85-38a5cedad738]: (4, ('Thu Oct  2 12:21:06 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc (9b8883539b4d4fe82a7f138e17195ac56aff976c66344fc4a8daeb736800abf9)\n9b8883539b4d4fe82a7f138e17195ac56aff976c66344fc4a8daeb736800abf9\nThu Oct  2 12:21:07 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc (9b8883539b4d4fe82a7f138e17195ac56aff976c66344fc4a8daeb736800abf9)\n9b8883539b4d4fe82a7f138e17195ac56aff976c66344fc4a8daeb736800abf9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:21:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:21:07.110 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[334589dd-a86f-4205-ade3-a3083f3b3724]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:21:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:21:07.111 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7754c79a-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:21:07 compute-0 nova_compute[257802]: 2025-10-02 12:21:07.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:07 compute-0 kernel: tap7754c79a-c0: left promiscuous mode
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407667118935, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 3715692, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36704, "largest_seqno": 38967, "table_properties": {"data_size": 3705427, "index_size": 6503, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22161, "raw_average_key_size": 21, "raw_value_size": 3684699, "raw_average_value_size": 3509, "num_data_blocks": 281, "num_entries": 1050, "num_filter_entries": 1050, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407476, "oldest_key_time": 1759407476, "file_creation_time": 1759407667, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 22967 microseconds, and 7147 cpu microseconds.
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:21:07.119111) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 3715692 bytes OK
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:21:07.119171) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:21:07.124588) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:21:07.124612) EVENT_LOG_v1 {"time_micros": 1759407667124605, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:21:07.124632) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 3788709, prev total WAL file size 3788709, number of live WAL files 2.
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:21:07.125795) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(3628KB)], [80(9635KB)]
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407667125848, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 13582807, "oldest_snapshot_seqno": -1}
Oct 02 12:21:07 compute-0 nova_compute[257802]: 2025-10-02 12:21:07.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:21:07.137 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[82c48f04-2d76-4417-b0bb-debfc07e4bd4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:21:07 compute-0 nova_compute[257802]: 2025-10-02 12:21:07.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:21:07.167 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[53e1a064-787c-45d3-a238-7e7498673c8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:21:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:21:07.167 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[62d5f459-de92-42f5-a749-3791ec17a8cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:21:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:21:07.181 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6a8b4bfe-9f51-4e46-80a1-70ffc71226c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572681, 'reachable_time': 41238, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311359, 'error': None, 'target': 'ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:21:07 compute-0 systemd[1]: run-netns-ovnmeta\x2d7754c79a\x2dcca5\x2d48c7\x2d9169\x2d831eaad23ccc.mount: Deactivated successfully.
Oct 02 12:21:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:21:07.185 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:21:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:21:07.185 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[8d1918ee-dafb-4ed9-bb11-ac3a37d00498]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6667 keys, 11644346 bytes, temperature: kUnknown
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407667195261, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 11644346, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11597141, "index_size": 29429, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 170807, "raw_average_key_size": 25, "raw_value_size": 11475076, "raw_average_value_size": 1721, "num_data_blocks": 1177, "num_entries": 6667, "num_filter_entries": 6667, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759407667, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:21:07.195472) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 11644346 bytes
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:21:07.200188) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.5 rd, 167.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 9.4 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(6.8) write-amplify(3.1) OK, records in: 7201, records dropped: 534 output_compression: NoCompression
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:21:07.200210) EVENT_LOG_v1 {"time_micros": 1759407667200201, "job": 46, "event": "compaction_finished", "compaction_time_micros": 69477, "compaction_time_cpu_micros": 24092, "output_level": 6, "num_output_files": 1, "total_output_size": 11644346, "num_input_records": 7201, "num_output_records": 6667, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407667200896, "job": 46, "event": "table_file_deletion", "file_number": 82}
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407667202535, "job": 46, "event": "table_file_deletion", "file_number": 80}
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:21:07.125699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:21:07.202622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:21:07.202628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:21:07.202630) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:21:07.202632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:21:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:21:07.202634) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:21:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1745: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 420 MiB data, 887 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 29 KiB/s wr, 275 op/s
Oct 02 12:21:07 compute-0 nova_compute[257802]: 2025-10-02 12:21:07.790 2 INFO nova.virt.libvirt.driver [None req-40e77e68-01b4-4cff-8d9a-ff5d3bea41c0 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Deleting instance files /var/lib/nova/instances/098f39eb-751f-4a70-8e8b-593f9d203681_del
Oct 02 12:21:07 compute-0 nova_compute[257802]: 2025-10-02 12:21:07.791 2 INFO nova.virt.libvirt.driver [None req-40e77e68-01b4-4cff-8d9a-ff5d3bea41c0 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Deletion of /var/lib/nova/instances/098f39eb-751f-4a70-8e8b-593f9d203681_del complete
Oct 02 12:21:07 compute-0 nova_compute[257802]: 2025-10-02 12:21:07.920 2 INFO nova.compute.manager [None req-40e77e68-01b4-4cff-8d9a-ff5d3bea41c0 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Took 1.72 seconds to destroy the instance on the hypervisor.
Oct 02 12:21:07 compute-0 nova_compute[257802]: 2025-10-02 12:21:07.921 2 DEBUG oslo.service.loopingcall [None req-40e77e68-01b4-4cff-8d9a-ff5d3bea41c0 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:21:07 compute-0 nova_compute[257802]: 2025-10-02 12:21:07.921 2 DEBUG nova.compute.manager [-] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:21:07 compute-0 nova_compute[257802]: 2025-10-02 12:21:07.922 2 DEBUG nova.network.neutron [-] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:21:08 compute-0 nova_compute[257802]: 2025-10-02 12:21:08.317 2 DEBUG nova.compute.manager [req-13b0a4ea-19d1-4f85-9663-cb7f6c5ec349 req-c18bb391-8fa9-49ee-88ba-6838423413ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Received event network-vif-unplugged-2a565172-9810-4239-9e57-a527a2124db7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:21:08 compute-0 nova_compute[257802]: 2025-10-02 12:21:08.318 2 DEBUG oslo_concurrency.lockutils [req-13b0a4ea-19d1-4f85-9663-cb7f6c5ec349 req-c18bb391-8fa9-49ee-88ba-6838423413ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "098f39eb-751f-4a70-8e8b-593f9d203681-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:08 compute-0 nova_compute[257802]: 2025-10-02 12:21:08.318 2 DEBUG oslo_concurrency.lockutils [req-13b0a4ea-19d1-4f85-9663-cb7f6c5ec349 req-c18bb391-8fa9-49ee-88ba-6838423413ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "098f39eb-751f-4a70-8e8b-593f9d203681-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:08 compute-0 nova_compute[257802]: 2025-10-02 12:21:08.318 2 DEBUG oslo_concurrency.lockutils [req-13b0a4ea-19d1-4f85-9663-cb7f6c5ec349 req-c18bb391-8fa9-49ee-88ba-6838423413ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "098f39eb-751f-4a70-8e8b-593f9d203681-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:08 compute-0 nova_compute[257802]: 2025-10-02 12:21:08.318 2 DEBUG nova.compute.manager [req-13b0a4ea-19d1-4f85-9663-cb7f6c5ec349 req-c18bb391-8fa9-49ee-88ba-6838423413ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] No waiting events found dispatching network-vif-unplugged-2a565172-9810-4239-9e57-a527a2124db7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:21:08 compute-0 nova_compute[257802]: 2025-10-02 12:21:08.319 2 DEBUG nova.compute.manager [req-13b0a4ea-19d1-4f85-9663-cb7f6c5ec349 req-c18bb391-8fa9-49ee-88ba-6838423413ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Received event network-vif-unplugged-2a565172-9810-4239-9e57-a527a2124db7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:21:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:08.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:08.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:08 compute-0 nova_compute[257802]: 2025-10-02 12:21:08.800 2 DEBUG nova.network.neutron [-] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:21:08 compute-0 nova_compute[257802]: 2025-10-02 12:21:08.808 2 DEBUG nova.compute.manager [req-fa443f90-bf6a-446a-b0b4-735b3e949d67 req-ac889c2c-e1b1-4d20-879a-50dd0a7092f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Received event network-vif-deleted-2a565172-9810-4239-9e57-a527a2124db7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:21:08 compute-0 nova_compute[257802]: 2025-10-02 12:21:08.809 2 INFO nova.compute.manager [req-fa443f90-bf6a-446a-b0b4-735b3e949d67 req-ac889c2c-e1b1-4d20-879a-50dd0a7092f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Neutron deleted interface 2a565172-9810-4239-9e57-a527a2124db7; detaching it from the instance and deleting it from the info cache
Oct 02 12:21:08 compute-0 nova_compute[257802]: 2025-10-02 12:21:08.809 2 DEBUG nova.network.neutron [req-fa443f90-bf6a-446a-b0b4-735b3e949d67 req-ac889c2c-e1b1-4d20-879a-50dd0a7092f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:21:08 compute-0 nova_compute[257802]: 2025-10-02 12:21:08.928 2 INFO nova.compute.manager [-] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Took 1.01 seconds to deallocate network for instance.
Oct 02 12:21:08 compute-0 nova_compute[257802]: 2025-10-02 12:21:08.933 2 DEBUG nova.compute.manager [req-fa443f90-bf6a-446a-b0b4-735b3e949d67 req-ac889c2c-e1b1-4d20-879a-50dd0a7092f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Detach interface failed, port_id=2a565172-9810-4239-9e57-a527a2124db7, reason: Instance 098f39eb-751f-4a70-8e8b-593f9d203681 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:21:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Oct 02 12:21:08 compute-0 nova_compute[257802]: 2025-10-02 12:21:08.989 2 DEBUG oslo_concurrency.lockutils [None req-40e77e68-01b4-4cff-8d9a-ff5d3bea41c0 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:08 compute-0 nova_compute[257802]: 2025-10-02 12:21:08.990 2 DEBUG oslo_concurrency.lockutils [None req-40e77e68-01b4-4cff-8d9a-ff5d3bea41c0 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Oct 02 12:21:09 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Oct 02 12:21:09 compute-0 nova_compute[257802]: 2025-10-02 12:21:09.056 2 DEBUG oslo_concurrency.processutils [None req-40e77e68-01b4-4cff-8d9a-ff5d3bea41c0 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:09 compute-0 ceph-mon[73607]: pgmap v1745: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 420 MiB data, 887 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 29 KiB/s wr, 275 op/s
Oct 02 12:21:09 compute-0 ceph-mon[73607]: osdmap e271: 3 total, 3 up, 3 in
Oct 02 12:21:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1747: 305 pgs: 305 active+clean; 343 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.7 KiB/s wr, 193 op/s
Oct 02 12:21:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:21:09 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/209375839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:09 compute-0 nova_compute[257802]: 2025-10-02 12:21:09.491 2 DEBUG oslo_concurrency.processutils [None req-40e77e68-01b4-4cff-8d9a-ff5d3bea41c0 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:09 compute-0 nova_compute[257802]: 2025-10-02 12:21:09.498 2 DEBUG nova.compute.provider_tree [None req-40e77e68-01b4-4cff-8d9a-ff5d3bea41c0 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:21:09 compute-0 nova_compute[257802]: 2025-10-02 12:21:09.542 2 DEBUG nova.scheduler.client.report [None req-40e77e68-01b4-4cff-8d9a-ff5d3bea41c0 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:21:09 compute-0 nova_compute[257802]: 2025-10-02 12:21:09.636 2 DEBUG oslo_concurrency.lockutils [None req-40e77e68-01b4-4cff-8d9a-ff5d3bea41c0 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:09 compute-0 nova_compute[257802]: 2025-10-02 12:21:09.686 2 INFO nova.scheduler.client.report [None req-40e77e68-01b4-4cff-8d9a-ff5d3bea41c0 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Deleted allocations for instance 098f39eb-751f-4a70-8e8b-593f9d203681
Oct 02 12:21:09 compute-0 nova_compute[257802]: 2025-10-02 12:21:09.821 2 DEBUG oslo_concurrency.lockutils [None req-40e77e68-01b4-4cff-8d9a-ff5d3bea41c0 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "098f39eb-751f-4a70-8e8b-593f9d203681" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:10 compute-0 ceph-mon[73607]: pgmap v1747: 305 pgs: 305 active+clean; 343 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.7 KiB/s wr, 193 op/s
Oct 02 12:21:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/209375839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:10.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:10 compute-0 nova_compute[257802]: 2025-10-02 12:21:10.447 2 DEBUG nova.compute.manager [req-62eab6c7-1801-4c05-a97d-ec4cc89abefe req-974680e7-82fd-4389-b3fe-90c32a5dbbe1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Received event network-vif-plugged-2a565172-9810-4239-9e57-a527a2124db7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:21:10 compute-0 nova_compute[257802]: 2025-10-02 12:21:10.447 2 DEBUG oslo_concurrency.lockutils [req-62eab6c7-1801-4c05-a97d-ec4cc89abefe req-974680e7-82fd-4389-b3fe-90c32a5dbbe1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "098f39eb-751f-4a70-8e8b-593f9d203681-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:10 compute-0 nova_compute[257802]: 2025-10-02 12:21:10.448 2 DEBUG oslo_concurrency.lockutils [req-62eab6c7-1801-4c05-a97d-ec4cc89abefe req-974680e7-82fd-4389-b3fe-90c32a5dbbe1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "098f39eb-751f-4a70-8e8b-593f9d203681-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:10 compute-0 nova_compute[257802]: 2025-10-02 12:21:10.448 2 DEBUG oslo_concurrency.lockutils [req-62eab6c7-1801-4c05-a97d-ec4cc89abefe req-974680e7-82fd-4389-b3fe-90c32a5dbbe1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "098f39eb-751f-4a70-8e8b-593f9d203681-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:10 compute-0 nova_compute[257802]: 2025-10-02 12:21:10.449 2 DEBUG nova.compute.manager [req-62eab6c7-1801-4c05-a97d-ec4cc89abefe req-974680e7-82fd-4389-b3fe-90c32a5dbbe1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] No waiting events found dispatching network-vif-plugged-2a565172-9810-4239-9e57-a527a2124db7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:21:10 compute-0 nova_compute[257802]: 2025-10-02 12:21:10.450 2 WARNING nova.compute.manager [req-62eab6c7-1801-4c05-a97d-ec4cc89abefe req-974680e7-82fd-4389-b3fe-90c32a5dbbe1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Received unexpected event network-vif-plugged-2a565172-9810-4239-9e57-a527a2124db7 for instance with vm_state deleted and task_state None.
Oct 02 12:21:10 compute-0 nova_compute[257802]: 2025-10-02 12:21:10.635 2 DEBUG nova.virt.libvirt.driver [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:21:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:21:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:10.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:21:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1748: 305 pgs: 305 active+clean; 332 MiB data, 814 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 662 KiB/s wr, 168 op/s
Oct 02 12:21:11 compute-0 nova_compute[257802]: 2025-10-02 12:21:11.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:12 compute-0 nova_compute[257802]: 2025-10-02 12:21:12.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:12.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:12 compute-0 ceph-mon[73607]: pgmap v1748: 305 pgs: 305 active+clean; 332 MiB data, 814 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 662 KiB/s wr, 168 op/s
Oct 02 12:21:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:21:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:21:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:21:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:21:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:21:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:21:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:12.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:12 compute-0 sshd-session[311386]: Invalid user hadoop from 167.99.55.34 port 59936
Oct 02 12:21:12 compute-0 sshd-session[311386]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 12:21:12 compute-0 sshd-session[311386]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=167.99.55.34
Oct 02 12:21:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1749: 305 pgs: 305 active+clean; 340 MiB data, 842 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 180 op/s
Oct 02 12:21:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2728080054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Oct 02 12:21:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Oct 02 12:21:14 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Oct 02 12:21:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:14.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:14.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:14 compute-0 ceph-mon[73607]: pgmap v1749: 305 pgs: 305 active+clean; 340 MiB data, 842 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 180 op/s
Oct 02 12:21:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/639431983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:14 compute-0 ceph-mon[73607]: osdmap e272: 3 total, 3 up, 3 in
Oct 02 12:21:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 305 active+clean; 391 MiB data, 853 MiB used, 20 GiB / 21 GiB avail; 557 KiB/s rd, 5.2 MiB/s wr, 208 op/s
Oct 02 12:21:15 compute-0 sshd-session[311386]: Failed password for invalid user hadoop from 167.99.55.34 port 59936 ssh2
Oct 02 12:21:16 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000056.scope: Deactivated successfully.
Oct 02 12:21:16 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000056.scope: Consumed 14.693s CPU time.
Oct 02 12:21:16 compute-0 systemd-machined[211836]: Machine qemu-40-instance-00000056 terminated.
Oct 02 12:21:16 compute-0 sshd-session[311386]: Connection closed by invalid user hadoop 167.99.55.34 port 59936 [preauth]
Oct 02 12:21:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:21:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:16.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:21:16 compute-0 nova_compute[257802]: 2025-10-02 12:21:16.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:16 compute-0 nova_compute[257802]: 2025-10-02 12:21:16.662 2 INFO nova.virt.libvirt.driver [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Instance shutdown successfully after 16 seconds.
Oct 02 12:21:16 compute-0 nova_compute[257802]: 2025-10-02 12:21:16.672 2 INFO nova.virt.libvirt.driver [-] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Instance destroyed successfully.
Oct 02 12:21:16 compute-0 nova_compute[257802]: 2025-10-02 12:21:16.679 2 INFO nova.virt.libvirt.driver [-] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Instance destroyed successfully.
Oct 02 12:21:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:21:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:16.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:21:16 compute-0 ceph-mon[73607]: pgmap v1751: 305 pgs: 305 active+clean; 391 MiB data, 853 MiB used, 20 GiB / 21 GiB avail; 557 KiB/s rd, 5.2 MiB/s wr, 208 op/s
Oct 02 12:21:17 compute-0 nova_compute[257802]: 2025-10-02 12:21:17.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1752: 305 pgs: 305 active+clean; 391 MiB data, 853 MiB used, 20 GiB / 21 GiB avail; 500 KiB/s rd, 5.0 MiB/s wr, 153 op/s
Oct 02 12:21:17 compute-0 nova_compute[257802]: 2025-10-02 12:21:17.981 2 INFO nova.virt.libvirt.driver [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Deleting instance files /var/lib/nova/instances/5ae8a491-7025-40ff-a6f8-240f71af3e41_del
Oct 02 12:21:17 compute-0 nova_compute[257802]: 2025-10-02 12:21:17.983 2 INFO nova.virt.libvirt.driver [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Deletion of /var/lib/nova/instances/5ae8a491-7025-40ff-a6f8-240f71af3e41_del complete
Oct 02 12:21:18 compute-0 nova_compute[257802]: 2025-10-02 12:21:18.145 2 DEBUG nova.virt.libvirt.driver [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:21:18 compute-0 nova_compute[257802]: 2025-10-02 12:21:18.146 2 INFO nova.virt.libvirt.driver [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Creating image(s)
Oct 02 12:21:18 compute-0 nova_compute[257802]: 2025-10-02 12:21:18.188 2 DEBUG nova.storage.rbd_utils [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] rbd image 5ae8a491-7025-40ff-a6f8-240f71af3e41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:21:18 compute-0 nova_compute[257802]: 2025-10-02 12:21:18.229 2 DEBUG nova.storage.rbd_utils [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] rbd image 5ae8a491-7025-40ff-a6f8-240f71af3e41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:21:18 compute-0 nova_compute[257802]: 2025-10-02 12:21:18.275 2 DEBUG nova.storage.rbd_utils [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] rbd image 5ae8a491-7025-40ff-a6f8-240f71af3e41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:21:18 compute-0 nova_compute[257802]: 2025-10-02 12:21:18.279 2 DEBUG oslo_concurrency.processutils [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:18 compute-0 nova_compute[257802]: 2025-10-02 12:21:18.358 2 DEBUG oslo_concurrency.processutils [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:18 compute-0 nova_compute[257802]: 2025-10-02 12:21:18.359 2 DEBUG oslo_concurrency.lockutils [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Acquiring lock "5133c8c7459ce4fa1cf043a638fc1b5c66ed8609" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:18 compute-0 nova_compute[257802]: 2025-10-02 12:21:18.360 2 DEBUG oslo_concurrency.lockutils [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lock "5133c8c7459ce4fa1cf043a638fc1b5c66ed8609" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:18 compute-0 nova_compute[257802]: 2025-10-02 12:21:18.360 2 DEBUG oslo_concurrency.lockutils [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lock "5133c8c7459ce4fa1cf043a638fc1b5c66ed8609" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:18.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:18 compute-0 nova_compute[257802]: 2025-10-02 12:21:18.385 2 DEBUG nova.storage.rbd_utils [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] rbd image 5ae8a491-7025-40ff-a6f8-240f71af3e41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:21:18 compute-0 nova_compute[257802]: 2025-10-02 12:21:18.388 2 DEBUG oslo_concurrency.processutils [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609 5ae8a491-7025-40ff-a6f8-240f71af3e41_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:18.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:18 compute-0 nova_compute[257802]: 2025-10-02 12:21:18.797 2 DEBUG oslo_concurrency.processutils [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609 5ae8a491-7025-40ff-a6f8-240f71af3e41_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:18 compute-0 nova_compute[257802]: 2025-10-02 12:21:18.888 2 DEBUG nova.storage.rbd_utils [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] resizing rbd image 5ae8a491-7025-40ff-a6f8-240f71af3e41_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:21:19 compute-0 ceph-mon[73607]: pgmap v1752: 305 pgs: 305 active+clean; 391 MiB data, 853 MiB used, 20 GiB / 21 GiB avail; 500 KiB/s rd, 5.0 MiB/s wr, 153 op/s
Oct 02 12:21:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3603085431' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4182312384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2073604288' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4082954703' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:19 compute-0 nova_compute[257802]: 2025-10-02 12:21:19.044 2 DEBUG nova.virt.libvirt.driver [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:21:19 compute-0 nova_compute[257802]: 2025-10-02 12:21:19.044 2 DEBUG nova.virt.libvirt.driver [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Ensure instance console log exists: /var/lib/nova/instances/5ae8a491-7025-40ff-a6f8-240f71af3e41/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:21:19 compute-0 nova_compute[257802]: 2025-10-02 12:21:19.045 2 DEBUG oslo_concurrency.lockutils [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:19 compute-0 nova_compute[257802]: 2025-10-02 12:21:19.045 2 DEBUG oslo_concurrency.lockutils [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:19 compute-0 nova_compute[257802]: 2025-10-02 12:21:19.046 2 DEBUG oslo_concurrency.lockutils [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:19 compute-0 nova_compute[257802]: 2025-10-02 12:21:19.047 2 DEBUG nova.virt.libvirt.driver [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:47Z,direct_url=<?>,disk_format='qcow2',id=db05f54c-61f8-42d6-a1e2-da3219a77b12,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:49Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:21:19 compute-0 nova_compute[257802]: 2025-10-02 12:21:19.051 2 WARNING nova.virt.libvirt.driver [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 02 12:21:19 compute-0 nova_compute[257802]: 2025-10-02 12:21:19.055 2 DEBUG nova.virt.libvirt.host [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:21:19 compute-0 nova_compute[257802]: 2025-10-02 12:21:19.056 2 DEBUG nova.virt.libvirt.host [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:21:19 compute-0 nova_compute[257802]: 2025-10-02 12:21:19.058 2 DEBUG nova.virt.libvirt.host [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:21:19 compute-0 nova_compute[257802]: 2025-10-02 12:21:19.059 2 DEBUG nova.virt.libvirt.host [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:21:19 compute-0 nova_compute[257802]: 2025-10-02 12:21:19.060 2 DEBUG nova.virt.libvirt.driver [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:21:19 compute-0 nova_compute[257802]: 2025-10-02 12:21:19.060 2 DEBUG nova.virt.hardware [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:47Z,direct_url=<?>,disk_format='qcow2',id=db05f54c-61f8-42d6-a1e2-da3219a77b12,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:49Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:21:19 compute-0 nova_compute[257802]: 2025-10-02 12:21:19.061 2 DEBUG nova.virt.hardware [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:21:19 compute-0 nova_compute[257802]: 2025-10-02 12:21:19.061 2 DEBUG nova.virt.hardware [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:21:19 compute-0 nova_compute[257802]: 2025-10-02 12:21:19.061 2 DEBUG nova.virt.hardware [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:21:19 compute-0 nova_compute[257802]: 2025-10-02 12:21:19.062 2 DEBUG nova.virt.hardware [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:21:19 compute-0 nova_compute[257802]: 2025-10-02 12:21:19.062 2 DEBUG nova.virt.hardware [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:21:19 compute-0 nova_compute[257802]: 2025-10-02 12:21:19.062 2 DEBUG nova.virt.hardware [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:21:19 compute-0 nova_compute[257802]: 2025-10-02 12:21:19.062 2 DEBUG nova.virt.hardware [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:21:19 compute-0 nova_compute[257802]: 2025-10-02 12:21:19.063 2 DEBUG nova.virt.hardware [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:21:19 compute-0 nova_compute[257802]: 2025-10-02 12:21:19.063 2 DEBUG nova.virt.hardware [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:21:19 compute-0 nova_compute[257802]: 2025-10-02 12:21:19.063 2 DEBUG nova.virt.hardware [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:21:19 compute-0 nova_compute[257802]: 2025-10-02 12:21:19.063 2 DEBUG nova.objects.instance [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5ae8a491-7025-40ff-a6f8-240f71af3e41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:21:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:19 compute-0 nova_compute[257802]: 2025-10-02 12:21:19.208 2 DEBUG oslo_concurrency.processutils [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 305 active+clean; 409 MiB data, 849 MiB used, 20 GiB / 21 GiB avail; 456 KiB/s rd, 7.5 MiB/s wr, 183 op/s
Oct 02 12:21:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:21:19 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3408879300' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:19 compute-0 nova_compute[257802]: 2025-10-02 12:21:19.628 2 DEBUG oslo_concurrency.processutils [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:19 compute-0 nova_compute[257802]: 2025-10-02 12:21:19.661 2 DEBUG nova.storage.rbd_utils [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] rbd image 5ae8a491-7025-40ff-a6f8-240f71af3e41_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:21:19 compute-0 nova_compute[257802]: 2025-10-02 12:21:19.668 2 DEBUG oslo_concurrency.processutils [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:19 compute-0 sudo[311620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:19 compute-0 sudo[311620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:19 compute-0 sudo[311620]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:19 compute-0 sudo[311646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:19 compute-0 sudo[311646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:19 compute-0 sudo[311646]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:21:20 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2576773563' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3408879300' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:20 compute-0 nova_compute[257802]: 2025-10-02 12:21:20.121 2 DEBUG oslo_concurrency.processutils [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:20 compute-0 nova_compute[257802]: 2025-10-02 12:21:20.125 2 DEBUG nova.virt.libvirt.driver [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:21:20 compute-0 nova_compute[257802]:   <uuid>5ae8a491-7025-40ff-a6f8-240f71af3e41</uuid>
Oct 02 12:21:20 compute-0 nova_compute[257802]:   <name>instance-00000056</name>
Oct 02 12:21:20 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:21:20 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:21:20 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:21:20 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:       <nova:name>tempest-ServerShowV257Test-server-1672132577</nova:name>
Oct 02 12:21:20 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:21:19</nova:creationTime>
Oct 02 12:21:20 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:21:20 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:21:20 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:21:20 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:21:20 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:21:20 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:21:20 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:21:20 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:21:20 compute-0 nova_compute[257802]:         <nova:user uuid="a804670c0de24489af33ba77885d5ee6">tempest-ServerShowV257Test-364849187-project-member</nova:user>
Oct 02 12:21:20 compute-0 nova_compute[257802]:         <nova:project uuid="a7f4317b8b6c47689bbb12f29d6cab5a">tempest-ServerShowV257Test-364849187</nova:project>
Oct 02 12:21:20 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:21:20 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="db05f54c-61f8-42d6-a1e2-da3219a77b12"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:       <nova:ports/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:21:20 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:21:20 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <system>
Oct 02 12:21:20 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:21:20 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:21:20 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:21:20 compute-0 nova_compute[257802]:       <entry name="serial">5ae8a491-7025-40ff-a6f8-240f71af3e41</entry>
Oct 02 12:21:20 compute-0 nova_compute[257802]:       <entry name="uuid">5ae8a491-7025-40ff-a6f8-240f71af3e41</entry>
Oct 02 12:21:20 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     </system>
Oct 02 12:21:20 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:21:20 compute-0 nova_compute[257802]:   <os>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:   </os>
Oct 02 12:21:20 compute-0 nova_compute[257802]:   <features>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:   </features>
Oct 02 12:21:20 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:21:20 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:21:20 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:21:20 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/5ae8a491-7025-40ff-a6f8-240f71af3e41_disk">
Oct 02 12:21:20 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:       </source>
Oct 02 12:21:20 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:21:20 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:21:20 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:21:20 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/5ae8a491-7025-40ff-a6f8-240f71af3e41_disk.config">
Oct 02 12:21:20 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:       </source>
Oct 02 12:21:20 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:21:20 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:21:20 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:21:20 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/5ae8a491-7025-40ff-a6f8-240f71af3e41/console.log" append="off"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <video>
Oct 02 12:21:20 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     </video>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:21:20 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:21:20 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:21:20 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:21:20 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:21:20 compute-0 nova_compute[257802]: </domain>
Oct 02 12:21:20 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:21:20 compute-0 nova_compute[257802]: 2025-10-02 12:21:20.255 2 DEBUG nova.virt.libvirt.driver [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:21:20 compute-0 nova_compute[257802]: 2025-10-02 12:21:20.256 2 DEBUG nova.virt.libvirt.driver [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:21:20 compute-0 nova_compute[257802]: 2025-10-02 12:21:20.256 2 INFO nova.virt.libvirt.driver [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Using config drive
Oct 02 12:21:20 compute-0 nova_compute[257802]: 2025-10-02 12:21:20.283 2 DEBUG nova.storage.rbd_utils [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] rbd image 5ae8a491-7025-40ff-a6f8-240f71af3e41_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:21:20 compute-0 nova_compute[257802]: 2025-10-02 12:21:20.370 2 DEBUG nova.objects.instance [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lazy-loading 'ec2_ids' on Instance uuid 5ae8a491-7025-40ff-a6f8-240f71af3e41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:21:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:20.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:21:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:20.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:21:20 compute-0 nova_compute[257802]: 2025-10-02 12:21:20.931 2 DEBUG nova.objects.instance [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lazy-loading 'keypairs' on Instance uuid 5ae8a491-7025-40ff-a6f8-240f71af3e41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:21:21 compute-0 ceph-mon[73607]: pgmap v1753: 305 pgs: 305 active+clean; 409 MiB data, 849 MiB used, 20 GiB / 21 GiB avail; 456 KiB/s rd, 7.5 MiB/s wr, 183 op/s
Oct 02 12:21:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2576773563' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:21 compute-0 nova_compute[257802]: 2025-10-02 12:21:21.239 2 INFO nova.virt.libvirt.driver [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Creating config drive at /var/lib/nova/instances/5ae8a491-7025-40ff-a6f8-240f71af3e41/disk.config
Oct 02 12:21:21 compute-0 nova_compute[257802]: 2025-10-02 12:21:21.252 2 DEBUG oslo_concurrency.processutils [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5ae8a491-7025-40ff-a6f8-240f71af3e41/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt3k4xwvb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 400 MiB data, 875 MiB used, 20 GiB / 21 GiB avail; 455 KiB/s rd, 7.8 MiB/s wr, 197 op/s
Oct 02 12:21:21 compute-0 nova_compute[257802]: 2025-10-02 12:21:21.414 2 DEBUG oslo_concurrency.processutils [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5ae8a491-7025-40ff-a6f8-240f71af3e41/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt3k4xwvb" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:21 compute-0 nova_compute[257802]: 2025-10-02 12:21:21.448 2 DEBUG nova.storage.rbd_utils [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] rbd image 5ae8a491-7025-40ff-a6f8-240f71af3e41_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:21:21 compute-0 nova_compute[257802]: 2025-10-02 12:21:21.453 2 DEBUG oslo_concurrency.processutils [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5ae8a491-7025-40ff-a6f8-240f71af3e41/disk.config 5ae8a491-7025-40ff-a6f8-240f71af3e41_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:21 compute-0 nova_compute[257802]: 2025-10-02 12:21:21.483 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407666.4306564, 098f39eb-751f-4a70-8e8b-593f9d203681 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:21:21 compute-0 nova_compute[257802]: 2025-10-02 12:21:21.484 2 INFO nova.compute.manager [-] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] VM Stopped (Lifecycle Event)
Oct 02 12:21:21 compute-0 nova_compute[257802]: 2025-10-02 12:21:21.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:21 compute-0 nova_compute[257802]: 2025-10-02 12:21:21.558 2 DEBUG nova.compute.manager [None req-cbe8544a-18b9-4543-9452-55a9329a66f4 - - - - - -] [instance: 098f39eb-751f-4a70-8e8b-593f9d203681] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:21:22 compute-0 nova_compute[257802]: 2025-10-02 12:21:22.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:22 compute-0 nova_compute[257802]: 2025-10-02 12:21:22.175 2 DEBUG oslo_concurrency.processutils [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5ae8a491-7025-40ff-a6f8-240f71af3e41/disk.config 5ae8a491-7025-40ff-a6f8-240f71af3e41_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.722s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:22 compute-0 nova_compute[257802]: 2025-10-02 12:21:22.175 2 INFO nova.virt.libvirt.driver [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Deleting local config drive /var/lib/nova/instances/5ae8a491-7025-40ff-a6f8-240f71af3e41/disk.config because it was imported into RBD.
Oct 02 12:21:22 compute-0 systemd-machined[211836]: New machine qemu-42-instance-00000056.
Oct 02 12:21:22 compute-0 systemd[1]: Started Virtual Machine qemu-42-instance-00000056.
Oct 02 12:21:22 compute-0 ceph-mon[73607]: pgmap v1754: 305 pgs: 305 active+clean; 400 MiB data, 875 MiB used, 20 GiB / 21 GiB avail; 455 KiB/s rd, 7.8 MiB/s wr, 197 op/s
Oct 02 12:21:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:22.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:22.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 305 active+clean; 418 MiB data, 882 MiB used, 20 GiB / 21 GiB avail; 175 KiB/s rd, 7.5 MiB/s wr, 189 op/s
Oct 02 12:21:23 compute-0 nova_compute[257802]: 2025-10-02 12:21:23.803 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Removed pending event for 5ae8a491-7025-40ff-a6f8-240f71af3e41 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:21:23 compute-0 nova_compute[257802]: 2025-10-02 12:21:23.804 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407683.8030858, 5ae8a491-7025-40ff-a6f8-240f71af3e41 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:21:23 compute-0 nova_compute[257802]: 2025-10-02 12:21:23.804 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] VM Resumed (Lifecycle Event)
Oct 02 12:21:23 compute-0 nova_compute[257802]: 2025-10-02 12:21:23.806 2 DEBUG nova.compute.manager [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:21:23 compute-0 nova_compute[257802]: 2025-10-02 12:21:23.806 2 DEBUG nova.virt.libvirt.driver [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:21:23 compute-0 nova_compute[257802]: 2025-10-02 12:21:23.809 2 INFO nova.virt.libvirt.driver [-] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Instance spawned successfully.
Oct 02 12:21:23 compute-0 nova_compute[257802]: 2025-10-02 12:21:23.809 2 DEBUG nova.virt.libvirt.driver [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:21:23 compute-0 nova_compute[257802]: 2025-10-02 12:21:23.864 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:21:23 compute-0 nova_compute[257802]: 2025-10-02 12:21:23.874 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:21:23 compute-0 nova_compute[257802]: 2025-10-02 12:21:23.878 2 DEBUG nova.virt.libvirt.driver [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:21:23 compute-0 nova_compute[257802]: 2025-10-02 12:21:23.878 2 DEBUG nova.virt.libvirt.driver [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:21:23 compute-0 nova_compute[257802]: 2025-10-02 12:21:23.879 2 DEBUG nova.virt.libvirt.driver [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:21:23 compute-0 nova_compute[257802]: 2025-10-02 12:21:23.879 2 DEBUG nova.virt.libvirt.driver [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:21:23 compute-0 nova_compute[257802]: 2025-10-02 12:21:23.880 2 DEBUG nova.virt.libvirt.driver [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:21:23 compute-0 nova_compute[257802]: 2025-10-02 12:21:23.881 2 DEBUG nova.virt.libvirt.driver [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:21:23 compute-0 nova_compute[257802]: 2025-10-02 12:21:23.913 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 12:21:23 compute-0 nova_compute[257802]: 2025-10-02 12:21:23.914 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407683.804652, 5ae8a491-7025-40ff-a6f8-240f71af3e41 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:21:23 compute-0 nova_compute[257802]: 2025-10-02 12:21:23.914 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] VM Started (Lifecycle Event)
Oct 02 12:21:23 compute-0 nova_compute[257802]: 2025-10-02 12:21:23.981 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:21:23 compute-0 nova_compute[257802]: 2025-10-02 12:21:23.985 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:21:24 compute-0 nova_compute[257802]: 2025-10-02 12:21:23.999 2 DEBUG nova.compute.manager [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:21:24 compute-0 nova_compute[257802]: 2025-10-02 12:21:24.033 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 12:21:24 compute-0 nova_compute[257802]: 2025-10-02 12:21:24.061 2 DEBUG oslo_concurrency.lockutils [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:24 compute-0 nova_compute[257802]: 2025-10-02 12:21:24.062 2 DEBUG oslo_concurrency.lockutils [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:24 compute-0 nova_compute[257802]: 2025-10-02 12:21:24.062 2 DEBUG nova.objects.instance [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:21:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:24 compute-0 nova_compute[257802]: 2025-10-02 12:21:24.131 2 DEBUG oslo_concurrency.lockutils [None req-def89d54-0565-495c-893c-4dcd192dc1c1 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:21:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:24.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:21:24 compute-0 nova_compute[257802]: 2025-10-02 12:21:24.680 2 DEBUG oslo_concurrency.lockutils [None req-5cfa2d88-ce77-4cc9-8396-6593abfa3b64 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Acquiring lock "5ae8a491-7025-40ff-a6f8-240f71af3e41" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:24 compute-0 nova_compute[257802]: 2025-10-02 12:21:24.680 2 DEBUG oslo_concurrency.lockutils [None req-5cfa2d88-ce77-4cc9-8396-6593abfa3b64 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lock "5ae8a491-7025-40ff-a6f8-240f71af3e41" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:24 compute-0 nova_compute[257802]: 2025-10-02 12:21:24.680 2 DEBUG oslo_concurrency.lockutils [None req-5cfa2d88-ce77-4cc9-8396-6593abfa3b64 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Acquiring lock "5ae8a491-7025-40ff-a6f8-240f71af3e41-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:24 compute-0 nova_compute[257802]: 2025-10-02 12:21:24.681 2 DEBUG oslo_concurrency.lockutils [None req-5cfa2d88-ce77-4cc9-8396-6593abfa3b64 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lock "5ae8a491-7025-40ff-a6f8-240f71af3e41-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:24 compute-0 nova_compute[257802]: 2025-10-02 12:21:24.681 2 DEBUG oslo_concurrency.lockutils [None req-5cfa2d88-ce77-4cc9-8396-6593abfa3b64 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lock "5ae8a491-7025-40ff-a6f8-240f71af3e41-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:24 compute-0 nova_compute[257802]: 2025-10-02 12:21:24.682 2 INFO nova.compute.manager [None req-5cfa2d88-ce77-4cc9-8396-6593abfa3b64 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Terminating instance
Oct 02 12:21:24 compute-0 nova_compute[257802]: 2025-10-02 12:21:24.683 2 DEBUG oslo_concurrency.lockutils [None req-5cfa2d88-ce77-4cc9-8396-6593abfa3b64 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Acquiring lock "refresh_cache-5ae8a491-7025-40ff-a6f8-240f71af3e41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:21:24 compute-0 nova_compute[257802]: 2025-10-02 12:21:24.683 2 DEBUG oslo_concurrency.lockutils [None req-5cfa2d88-ce77-4cc9-8396-6593abfa3b64 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Acquired lock "refresh_cache-5ae8a491-7025-40ff-a6f8-240f71af3e41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:21:24 compute-0 nova_compute[257802]: 2025-10-02 12:21:24.683 2 DEBUG nova.network.neutron [None req-5cfa2d88-ce77-4cc9-8396-6593abfa3b64 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:21:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:24.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:24 compute-0 nova_compute[257802]: 2025-10-02 12:21:24.878 2 DEBUG nova.network.neutron [None req-5cfa2d88-ce77-4cc9-8396-6593abfa3b64 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:21:25 compute-0 nova_compute[257802]: 2025-10-02 12:21:25.231 2 DEBUG nova.network.neutron [None req-5cfa2d88-ce77-4cc9-8396-6593abfa3b64 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:21:25 compute-0 nova_compute[257802]: 2025-10-02 12:21:25.247 2 DEBUG oslo_concurrency.lockutils [None req-5cfa2d88-ce77-4cc9-8396-6593abfa3b64 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Releasing lock "refresh_cache-5ae8a491-7025-40ff-a6f8-240f71af3e41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:21:25 compute-0 nova_compute[257802]: 2025-10-02 12:21:25.248 2 DEBUG nova.compute.manager [None req-5cfa2d88-ce77-4cc9-8396-6593abfa3b64 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:21:25 compute-0 ceph-mon[73607]: pgmap v1755: 305 pgs: 305 active+clean; 418 MiB data, 882 MiB used, 20 GiB / 21 GiB avail; 175 KiB/s rd, 7.5 MiB/s wr, 189 op/s
Oct 02 12:21:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1756: 305 pgs: 305 active+clean; 419 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 6.7 MiB/s wr, 255 op/s
Oct 02 12:21:25 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000056.scope: Deactivated successfully.
Oct 02 12:21:25 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000056.scope: Consumed 2.994s CPU time.
Oct 02 12:21:25 compute-0 systemd-machined[211836]: Machine qemu-42-instance-00000056 terminated.
Oct 02 12:21:25 compute-0 podman[311811]: 2025-10-02 12:21:25.55488597 +0000 UTC m=+0.067559385 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 12:21:25 compute-0 nova_compute[257802]: 2025-10-02 12:21:25.677 2 INFO nova.virt.libvirt.driver [-] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Instance destroyed successfully.
Oct 02 12:21:25 compute-0 nova_compute[257802]: 2025-10-02 12:21:25.678 2 DEBUG nova.objects.instance [None req-5cfa2d88-ce77-4cc9-8396-6593abfa3b64 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lazy-loading 'resources' on Instance uuid 5ae8a491-7025-40ff-a6f8-240f71af3e41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:21:26 compute-0 ceph-mon[73607]: pgmap v1756: 305 pgs: 305 active+clean; 419 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 6.7 MiB/s wr, 255 op/s
Oct 02 12:21:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:26.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:26 compute-0 nova_compute[257802]: 2025-10-02 12:21:26.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:26.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:21:26.937 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:21:26.938 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:21:26.938 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:27 compute-0 nova_compute[257802]: 2025-10-02 12:21:27.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 305 active+clean; 419 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.0 MiB/s wr, 187 op/s
Oct 02 12:21:27 compute-0 nova_compute[257802]: 2025-10-02 12:21:27.830 2 INFO nova.virt.libvirt.driver [None req-5cfa2d88-ce77-4cc9-8396-6593abfa3b64 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Deleting instance files /var/lib/nova/instances/5ae8a491-7025-40ff-a6f8-240f71af3e41_del
Oct 02 12:21:27 compute-0 nova_compute[257802]: 2025-10-02 12:21:27.831 2 INFO nova.virt.libvirt.driver [None req-5cfa2d88-ce77-4cc9-8396-6593abfa3b64 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Deletion of /var/lib/nova/instances/5ae8a491-7025-40ff-a6f8-240f71af3e41_del complete
Oct 02 12:21:28 compute-0 nova_compute[257802]: 2025-10-02 12:21:28.033 2 INFO nova.compute.manager [None req-5cfa2d88-ce77-4cc9-8396-6593abfa3b64 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Took 2.78 seconds to destroy the instance on the hypervisor.
Oct 02 12:21:28 compute-0 nova_compute[257802]: 2025-10-02 12:21:28.033 2 DEBUG oslo.service.loopingcall [None req-5cfa2d88-ce77-4cc9-8396-6593abfa3b64 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:21:28 compute-0 nova_compute[257802]: 2025-10-02 12:21:28.034 2 DEBUG nova.compute.manager [-] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:21:28 compute-0 nova_compute[257802]: 2025-10-02 12:21:28.034 2 DEBUG nova.network.neutron [-] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:21:28 compute-0 nova_compute[257802]: 2025-10-02 12:21:28.166 2 DEBUG nova.network.neutron [-] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:21:28 compute-0 nova_compute[257802]: 2025-10-02 12:21:28.192 2 DEBUG nova.network.neutron [-] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:21:28 compute-0 nova_compute[257802]: 2025-10-02 12:21:28.320 2 INFO nova.compute.manager [-] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Took 0.29 seconds to deallocate network for instance.
Oct 02 12:21:28 compute-0 nova_compute[257802]: 2025-10-02 12:21:28.378 2 DEBUG oslo_concurrency.lockutils [None req-5cfa2d88-ce77-4cc9-8396-6593abfa3b64 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:28 compute-0 nova_compute[257802]: 2025-10-02 12:21:28.379 2 DEBUG oslo_concurrency.lockutils [None req-5cfa2d88-ce77-4cc9-8396-6593abfa3b64 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:21:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:28.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:21:28 compute-0 nova_compute[257802]: 2025-10-02 12:21:28.432 2 DEBUG oslo_concurrency.processutils [None req-5cfa2d88-ce77-4cc9-8396-6593abfa3b64 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:28 compute-0 ceph-mon[73607]: pgmap v1757: 305 pgs: 305 active+clean; 419 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.0 MiB/s wr, 187 op/s
Oct 02 12:21:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:21:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:28.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:21:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:21:28 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4091713333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:28 compute-0 nova_compute[257802]: 2025-10-02 12:21:28.864 2 DEBUG oslo_concurrency.processutils [None req-5cfa2d88-ce77-4cc9-8396-6593abfa3b64 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:28 compute-0 nova_compute[257802]: 2025-10-02 12:21:28.873 2 DEBUG nova.compute.provider_tree [None req-5cfa2d88-ce77-4cc9-8396-6593abfa3b64 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:21:28 compute-0 nova_compute[257802]: 2025-10-02 12:21:28.894 2 DEBUG nova.scheduler.client.report [None req-5cfa2d88-ce77-4cc9-8396-6593abfa3b64 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:21:28 compute-0 podman[311874]: 2025-10-02 12:21:28.920874761 +0000 UTC m=+0.064772498 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Oct 02 12:21:28 compute-0 podman[311876]: 2025-10-02 12:21:28.922578582 +0000 UTC m=+0.062455860 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:21:28 compute-0 nova_compute[257802]: 2025-10-02 12:21:28.942 2 DEBUG oslo_concurrency.lockutils [None req-5cfa2d88-ce77-4cc9-8396-6593abfa3b64 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:28 compute-0 nova_compute[257802]: 2025-10-02 12:21:28.994 2 INFO nova.scheduler.client.report [None req-5cfa2d88-ce77-4cc9-8396-6593abfa3b64 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Deleted allocations for instance 5ae8a491-7025-40ff-a6f8-240f71af3e41
Oct 02 12:21:29 compute-0 nova_compute[257802]: 2025-10-02 12:21:29.082 2 DEBUG oslo_concurrency.lockutils [None req-5cfa2d88-ce77-4cc9-8396-6593abfa3b64 a804670c0de24489af33ba77885d5ee6 a7f4317b8b6c47689bbb12f29d6cab5a - - default default] Lock "5ae8a491-7025-40ff-a6f8-240f71af3e41" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.402s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1758: 305 pgs: 305 active+clean; 390 MiB data, 875 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 4.0 MiB/s wr, 299 op/s
Oct 02 12:21:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4091713333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:30.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:30.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:30 compute-0 ceph-mon[73607]: pgmap v1758: 305 pgs: 305 active+clean; 390 MiB data, 875 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 4.0 MiB/s wr, 299 op/s
Oct 02 12:21:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/296445127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 372 MiB data, 867 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 1.3 MiB/s wr, 283 op/s
Oct 02 12:21:31 compute-0 nova_compute[257802]: 2025-10-02 12:21:31.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:32 compute-0 nova_compute[257802]: 2025-10-02 12:21:32.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:21:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:32.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:21:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:32.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:33 compute-0 ceph-mon[73607]: pgmap v1759: 305 pgs: 305 active+clean; 372 MiB data, 867 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 1.3 MiB/s wr, 283 op/s
Oct 02 12:21:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1760: 305 pgs: 305 active+clean; 372 MiB data, 867 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 612 KiB/s wr, 264 op/s
Oct 02 12:21:33 compute-0 podman[311916]: 2025-10-02 12:21:33.935019893 +0000 UTC m=+0.072947228 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 12:21:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:34.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:34.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:35 compute-0 ceph-mon[73607]: pgmap v1760: 305 pgs: 305 active+clean; 372 MiB data, 867 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 612 KiB/s wr, 264 op/s
Oct 02 12:21:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 305 active+clean; 372 MiB data, 867 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 256 KiB/s wr, 244 op/s
Oct 02 12:21:36 compute-0 ceph-mon[73607]: pgmap v1761: 305 pgs: 305 active+clean; 372 MiB data, 867 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 256 KiB/s wr, 244 op/s
Oct 02 12:21:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:36.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:36 compute-0 nova_compute[257802]: 2025-10-02 12:21:36.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:36.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:37 compute-0 nova_compute[257802]: 2025-10-02 12:21:37.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1762: 305 pgs: 305 active+clean; 372 MiB data, 867 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 217 KiB/s wr, 163 op/s
Oct 02 12:21:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:38.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:38 compute-0 ceph-mon[73607]: pgmap v1762: 305 pgs: 305 active+clean; 372 MiB data, 867 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 217 KiB/s wr, 163 op/s
Oct 02 12:21:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:38.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 406 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.0 MiB/s wr, 228 op/s
Oct 02 12:21:39 compute-0 sudo[311946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:39 compute-0 sudo[311946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:39 compute-0 sudo[311946]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:40 compute-0 nova_compute[257802]: 2025-10-02 12:21:40.035 2 DEBUG nova.compute.manager [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Stashing vm_state: stopped _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Oct 02 12:21:40 compute-0 sudo[311971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:40 compute-0 sudo[311971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:40 compute-0 sudo[311971]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:40 compute-0 nova_compute[257802]: 2025-10-02 12:21:40.153 2 DEBUG oslo_concurrency.lockutils [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:40 compute-0 nova_compute[257802]: 2025-10-02 12:21:40.154 2 DEBUG oslo_concurrency.lockutils [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:40 compute-0 nova_compute[257802]: 2025-10-02 12:21:40.179 2 DEBUG nova.objects.instance [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'pci_requests' on Instance uuid ef69bc17-6b51-491e-82e9-c4106abb8d74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:21:40 compute-0 nova_compute[257802]: 2025-10-02 12:21:40.195 2 DEBUG nova.virt.hardware [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:21:40 compute-0 nova_compute[257802]: 2025-10-02 12:21:40.196 2 INFO nova.compute.claims [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:21:40 compute-0 nova_compute[257802]: 2025-10-02 12:21:40.196 2 DEBUG nova.objects.instance [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'resources' on Instance uuid ef69bc17-6b51-491e-82e9-c4106abb8d74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:21:40 compute-0 nova_compute[257802]: 2025-10-02 12:21:40.216 2 DEBUG nova.objects.instance [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'pci_devices' on Instance uuid ef69bc17-6b51-491e-82e9-c4106abb8d74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:21:40 compute-0 nova_compute[257802]: 2025-10-02 12:21:40.259 2 INFO nova.compute.resource_tracker [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Updating resource usage from migration e9f22ec2-887a-497c-978d-a9e444a3e4bd
Oct 02 12:21:40 compute-0 nova_compute[257802]: 2025-10-02 12:21:40.259 2 DEBUG nova.compute.resource_tracker [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Starting to track incoming migration e9f22ec2-887a-497c-978d-a9e444a3e4bd with flavor eb3a53f1-304b-4cb0-acc3-abffce0fb181 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Oct 02 12:21:40 compute-0 nova_compute[257802]: 2025-10-02 12:21:40.318 2 DEBUG oslo_concurrency.processutils [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:40.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:40 compute-0 nova_compute[257802]: 2025-10-02 12:21:40.676 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407685.6745877, 5ae8a491-7025-40ff-a6f8-240f71af3e41 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:21:40 compute-0 nova_compute[257802]: 2025-10-02 12:21:40.677 2 INFO nova.compute.manager [-] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] VM Stopped (Lifecycle Event)
Oct 02 12:21:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:21:40 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2538432175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:40 compute-0 nova_compute[257802]: 2025-10-02 12:21:40.705 2 DEBUG nova.compute.manager [None req-aa36e150-2162-44ed-8acb-e0e244bf75d9 - - - - - -] [instance: 5ae8a491-7025-40ff-a6f8-240f71af3e41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:21:40 compute-0 ceph-mon[73607]: pgmap v1763: 305 pgs: 305 active+clean; 406 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.0 MiB/s wr, 228 op/s
Oct 02 12:21:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2127937119' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:21:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2127937119' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:21:40 compute-0 nova_compute[257802]: 2025-10-02 12:21:40.716 2 DEBUG oslo_concurrency.processutils [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:40 compute-0 nova_compute[257802]: 2025-10-02 12:21:40.724 2 DEBUG nova.compute.provider_tree [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:21:40 compute-0 nova_compute[257802]: 2025-10-02 12:21:40.745 2 DEBUG nova.scheduler.client.report [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:21:40 compute-0 nova_compute[257802]: 2025-10-02 12:21:40.773 2 DEBUG oslo_concurrency.lockutils [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:40 compute-0 nova_compute[257802]: 2025-10-02 12:21:40.774 2 INFO nova.compute.manager [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Migrating
Oct 02 12:21:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:40.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 305 active+clean; 426 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.2 MiB/s wr, 149 op/s
Oct 02 12:21:41 compute-0 nova_compute[257802]: 2025-10-02 12:21:41.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2538432175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:42 compute-0 nova_compute[257802]: 2025-10-02 12:21:42.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:42.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:21:42
Oct 02 12:21:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:21:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:21:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['volumes', 'backups', 'images', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'vms']
Oct 02 12:21:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:21:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:21:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:21:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:21:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:21:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:21:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:21:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:21:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:42.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:21:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Oct 02 12:21:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:21:42 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:21:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:21:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:21:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:21:43 compute-0 ceph-mon[73607]: pgmap v1764: 305 pgs: 305 active+clean; 426 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.2 MiB/s wr, 149 op/s
Oct 02 12:21:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Oct 02 12:21:43 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Oct 02 12:21:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:21:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:21:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:21:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:21:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:21:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 426 MiB data, 952 MiB used, 20 GiB / 21 GiB avail; 703 KiB/s rd, 5.0 MiB/s wr, 147 op/s
Oct 02 12:21:43 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Oct 02 12:21:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:44 compute-0 ceph-mon[73607]: osdmap e273: 3 total, 3 up, 3 in
Oct 02 12:21:44 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2453946287' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:21:44 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2453946287' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:21:44 compute-0 sshd-session[312020]: Accepted publickey for nova from 192.168.122.101 port 43030 ssh2: ECDSA SHA256:RlBMWn3An7DGjBe9yfwGQtrEA9dOakLcJHFiZKvkVOc
Oct 02 12:21:44 compute-0 systemd[1]: Created slice User Slice of UID 42436.
Oct 02 12:21:44 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct 02 12:21:44 compute-0 systemd-logind[789]: New session 60 of user nova.
Oct 02 12:21:44 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct 02 12:21:44 compute-0 systemd[1]: Starting User Manager for UID 42436...
Oct 02 12:21:44 compute-0 systemd[312024]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:21:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:44.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:44 compute-0 systemd[312024]: Queued start job for default target Main User Target.
Oct 02 12:21:44 compute-0 systemd[312024]: Created slice User Application Slice.
Oct 02 12:21:44 compute-0 systemd[312024]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 02 12:21:44 compute-0 systemd[312024]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 12:21:44 compute-0 systemd[312024]: Reached target Paths.
Oct 02 12:21:44 compute-0 systemd[312024]: Reached target Timers.
Oct 02 12:21:44 compute-0 systemd[312024]: Starting D-Bus User Message Bus Socket...
Oct 02 12:21:44 compute-0 systemd[312024]: Starting Create User's Volatile Files and Directories...
Oct 02 12:21:44 compute-0 systemd[312024]: Finished Create User's Volatile Files and Directories.
Oct 02 12:21:44 compute-0 systemd[312024]: Listening on D-Bus User Message Bus Socket.
Oct 02 12:21:44 compute-0 systemd[312024]: Reached target Sockets.
Oct 02 12:21:44 compute-0 systemd[312024]: Reached target Basic System.
Oct 02 12:21:44 compute-0 systemd[312024]: Reached target Main User Target.
Oct 02 12:21:44 compute-0 systemd[312024]: Startup finished in 130ms.
Oct 02 12:21:44 compute-0 systemd[1]: Started User Manager for UID 42436.
Oct 02 12:21:44 compute-0 systemd[1]: Started Session 60 of User nova.
Oct 02 12:21:44 compute-0 sshd-session[312020]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:21:44 compute-0 sshd-session[312040]: Received disconnect from 192.168.122.101 port 43030:11: disconnected by user
Oct 02 12:21:44 compute-0 sshd-session[312040]: Disconnected from user nova 192.168.122.101 port 43030
Oct 02 12:21:44 compute-0 sshd-session[312020]: pam_unix(sshd:session): session closed for user nova
Oct 02 12:21:44 compute-0 systemd[1]: session-60.scope: Deactivated successfully.
Oct 02 12:21:44 compute-0 systemd-logind[789]: Session 60 logged out. Waiting for processes to exit.
Oct 02 12:21:44 compute-0 systemd-logind[789]: Removed session 60.
Oct 02 12:21:44 compute-0 sshd-session[312042]: Accepted publickey for nova from 192.168.122.101 port 43034 ssh2: ECDSA SHA256:RlBMWn3An7DGjBe9yfwGQtrEA9dOakLcJHFiZKvkVOc
Oct 02 12:21:44 compute-0 systemd-logind[789]: New session 62 of user nova.
Oct 02 12:21:44 compute-0 systemd[1]: Started Session 62 of User nova.
Oct 02 12:21:44 compute-0 sshd-session[312042]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:21:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:44.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:44 compute-0 sshd-session[312045]: Received disconnect from 192.168.122.101 port 43034:11: disconnected by user
Oct 02 12:21:44 compute-0 sshd-session[312045]: Disconnected from user nova 192.168.122.101 port 43034
Oct 02 12:21:44 compute-0 sshd-session[312042]: pam_unix(sshd:session): session closed for user nova
Oct 02 12:21:44 compute-0 systemd[1]: session-62.scope: Deactivated successfully.
Oct 02 12:21:44 compute-0 systemd-logind[789]: Session 62 logged out. Waiting for processes to exit.
Oct 02 12:21:44 compute-0 systemd-logind[789]: Removed session 62.
Oct 02 12:21:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Oct 02 12:21:45 compute-0 ceph-mon[73607]: pgmap v1766: 305 pgs: 305 active+clean; 426 MiB data, 952 MiB used, 20 GiB / 21 GiB avail; 703 KiB/s rd, 5.0 MiB/s wr, 147 op/s
Oct 02 12:21:45 compute-0 nova_compute[257802]: 2025-10-02 12:21:45.349 2 INFO nova.network.neutron [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Updating port ce91d40e-1bd9-4703-95b5-a23942c1592e with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 02 12:21:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 305 active+clean; 497 MiB data, 1000 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 8.5 MiB/s wr, 256 op/s
Oct 02 12:21:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Oct 02 12:21:45 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Oct 02 12:21:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:21:46.076 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:21:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:21:46.076 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:21:46 compute-0 nova_compute[257802]: 2025-10-02 12:21:46.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:46 compute-0 nova_compute[257802]: 2025-10-02 12:21:46.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:21:46 compute-0 nova_compute[257802]: 2025-10-02 12:21:46.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:21:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:21:46 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/788338658' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:21:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:21:46 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/788338658' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:21:46 compute-0 nova_compute[257802]: 2025-10-02 12:21:46.341 2 DEBUG oslo_concurrency.lockutils [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:21:46 compute-0 nova_compute[257802]: 2025-10-02 12:21:46.342 2 DEBUG oslo_concurrency.lockutils [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquired lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:21:46 compute-0 nova_compute[257802]: 2025-10-02 12:21:46.343 2 DEBUG nova.network.neutron [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:21:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:46.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Oct 02 12:21:46 compute-0 nova_compute[257802]: 2025-10-02 12:21:46.505 2 DEBUG nova.compute.manager [req-3ab6d3ca-ddad-458d-994c-4e17d01b5ddc req-6ca5cc58-69fb-4b7c-a6a7-318cb9fb6529 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Received event network-changed-ce91d40e-1bd9-4703-95b5-a23942c1592e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:21:46 compute-0 nova_compute[257802]: 2025-10-02 12:21:46.505 2 DEBUG nova.compute.manager [req-3ab6d3ca-ddad-458d-994c-4e17d01b5ddc req-6ca5cc58-69fb-4b7c-a6a7-318cb9fb6529 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Refreshing instance network info cache due to event network-changed-ce91d40e-1bd9-4703-95b5-a23942c1592e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:21:46 compute-0 nova_compute[257802]: 2025-10-02 12:21:46.505 2 DEBUG oslo_concurrency.lockutils [req-3ab6d3ca-ddad-458d-994c-4e17d01b5ddc req-6ca5cc58-69fb-4b7c-a6a7-318cb9fb6529 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:21:46 compute-0 ceph-mon[73607]: pgmap v1767: 305 pgs: 305 active+clean; 497 MiB data, 1000 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 8.5 MiB/s wr, 256 op/s
Oct 02 12:21:46 compute-0 ceph-mon[73607]: osdmap e274: 3 total, 3 up, 3 in
Oct 02 12:21:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2895835696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/788338658' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:21:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/788338658' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:21:46 compute-0 nova_compute[257802]: 2025-10-02 12:21:46.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Oct 02 12:21:46 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Oct 02 12:21:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:46.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:47 compute-0 nova_compute[257802]: 2025-10-02 12:21:47.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:21:47 compute-0 nova_compute[257802]: 2025-10-02 12:21:47.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:21:47 compute-0 nova_compute[257802]: 2025-10-02 12:21:47.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 497 MiB data, 1000 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 6.3 MiB/s wr, 231 op/s
Oct 02 12:21:47 compute-0 sudo[312048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:47 compute-0 sudo[312048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:47 compute-0 sudo[312048]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:47 compute-0 ceph-mon[73607]: osdmap e275: 3 total, 3 up, 3 in
Oct 02 12:21:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3059860970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:47 compute-0 sudo[312073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:21:47 compute-0 sudo[312073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:47 compute-0 sudo[312073]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:47 compute-0 sudo[312098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:47 compute-0 sudo[312098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:47 compute-0 sudo[312098]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:47 compute-0 sudo[312123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:21:47 compute-0 sudo[312123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:48 compute-0 sudo[312123]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:48.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:48 compute-0 nova_compute[257802]: 2025-10-02 12:21:48.448 2 DEBUG nova.network.neutron [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Updating instance_info_cache with network_info: [{"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:21:48 compute-0 nova_compute[257802]: 2025-10-02 12:21:48.470 2 DEBUG oslo_concurrency.lockutils [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Releasing lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:21:48 compute-0 nova_compute[257802]: 2025-10-02 12:21:48.473 2 DEBUG oslo_concurrency.lockutils [req-3ab6d3ca-ddad-458d-994c-4e17d01b5ddc req-6ca5cc58-69fb-4b7c-a6a7-318cb9fb6529 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:21:48 compute-0 nova_compute[257802]: 2025-10-02 12:21:48.473 2 DEBUG nova.network.neutron [req-3ab6d3ca-ddad-458d-994c-4e17d01b5ddc req-6ca5cc58-69fb-4b7c-a6a7-318cb9fb6529 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Refreshing network info cache for port ce91d40e-1bd9-4703-95b5-a23942c1592e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:21:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:21:48 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:21:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:21:48 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:21:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:21:48 compute-0 nova_compute[257802]: 2025-10-02 12:21:48.599 2 DEBUG nova.virt.libvirt.driver [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Oct 02 12:21:48 compute-0 nova_compute[257802]: 2025-10-02 12:21:48.601 2 DEBUG nova.virt.libvirt.driver [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 12:21:48 compute-0 nova_compute[257802]: 2025-10-02 12:21:48.601 2 INFO nova.virt.libvirt.driver [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Creating image(s)
Oct 02 12:21:48 compute-0 nova_compute[257802]: 2025-10-02 12:21:48.647 2 DEBUG nova.storage.rbd_utils [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] creating snapshot(nova-resize) on rbd image(ef69bc17-6b51-491e-82e9-c4106abb8d74_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:21:48 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:21:48 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev f85d7239-e8a2-4b97-abed-74116f2fd528 does not exist
Oct 02 12:21:48 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 535e670e-9687-4f4c-92cd-f4b687168dd7 does not exist
Oct 02 12:21:48 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 2b6082c9-a168-46b2-a135-985595dd80ac does not exist
Oct 02 12:21:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:21:48 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:21:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:21:48 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:21:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:21:48 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:21:48 compute-0 sudo[312216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:48 compute-0 sudo[312216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:48 compute-0 sudo[312216]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:48.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:48 compute-0 sudo[312241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:21:48 compute-0 sudo[312241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:48 compute-0 sudo[312241]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:48 compute-0 sudo[312266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:48 compute-0 sudo[312266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:48 compute-0 sudo[312266]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:48 compute-0 sudo[312291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:21:48 compute-0 sudo[312291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:49 compute-0 ceph-mon[73607]: pgmap v1770: 305 pgs: 305 active+clean; 497 MiB data, 1000 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 6.3 MiB/s wr, 231 op/s
Oct 02 12:21:49 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:21:49 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:21:49 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:21:49 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:21:49 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:21:49 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:21:49 compute-0 nova_compute[257802]: 2025-10-02 12:21:49.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:21:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 305 active+clean; 450 MiB data, 974 MiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 7.6 MiB/s wr, 260 op/s
Oct 02 12:21:49 compute-0 podman[312356]: 2025-10-02 12:21:49.353878014 +0000 UTC m=+0.026236924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:21:49 compute-0 podman[312356]: 2025-10-02 12:21:49.505043848 +0000 UTC m=+0.177402678 container create b87dc6f243060c43feacd59bb987a2a3258b51b9ca435e8724e3a17338e932b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:21:49 compute-0 systemd[1]: Started libpod-conmon-b87dc6f243060c43feacd59bb987a2a3258b51b9ca435e8724e3a17338e932b6.scope.
Oct 02 12:21:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:21:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Oct 02 12:21:49 compute-0 podman[312356]: 2025-10-02 12:21:49.784098253 +0000 UTC m=+0.456457173 container init b87dc6f243060c43feacd59bb987a2a3258b51b9ca435e8724e3a17338e932b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 12:21:49 compute-0 podman[312356]: 2025-10-02 12:21:49.796577448 +0000 UTC m=+0.468936278 container start b87dc6f243060c43feacd59bb987a2a3258b51b9ca435e8724e3a17338e932b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:21:49 compute-0 vigilant_wozniak[312372]: 167 167
Oct 02 12:21:49 compute-0 systemd[1]: libpod-b87dc6f243060c43feacd59bb987a2a3258b51b9ca435e8724e3a17338e932b6.scope: Deactivated successfully.
Oct 02 12:21:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Oct 02 12:21:49 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Oct 02 12:21:49 compute-0 podman[312356]: 2025-10-02 12:21:49.935572653 +0000 UTC m=+0.607931533 container attach b87dc6f243060c43feacd59bb987a2a3258b51b9ca435e8724e3a17338e932b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 12:21:49 compute-0 podman[312356]: 2025-10-02 12:21:49.936488896 +0000 UTC m=+0.608847736 container died b87dc6f243060c43feacd59bb987a2a3258b51b9ca435e8724e3a17338e932b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 12:21:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:21:50.078 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:21:50 compute-0 nova_compute[257802]: 2025-10-02 12:21:50.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:21:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d2d860007f4e2ca7deb97035e89b8162d6e0ee689eb76ef375beafa11bf0933-merged.mount: Deactivated successfully.
Oct 02 12:21:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:21:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:50.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:21:50 compute-0 nova_compute[257802]: 2025-10-02 12:21:50.463 2 DEBUG nova.objects.instance [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'trusted_certs' on Instance uuid ef69bc17-6b51-491e-82e9-c4106abb8d74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:21:50 compute-0 podman[312356]: 2025-10-02 12:21:50.511216613 +0000 UTC m=+1.183575473 container remove b87dc6f243060c43feacd59bb987a2a3258b51b9ca435e8724e3a17338e932b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 12:21:50 compute-0 nova_compute[257802]: 2025-10-02 12:21:50.576 2 DEBUG nova.virt.libvirt.driver [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:21:50 compute-0 nova_compute[257802]: 2025-10-02 12:21:50.576 2 DEBUG nova.virt.libvirt.driver [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Ensure instance console log exists: /var/lib/nova/instances/ef69bc17-6b51-491e-82e9-c4106abb8d74/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:21:50 compute-0 nova_compute[257802]: 2025-10-02 12:21:50.577 2 DEBUG oslo_concurrency.lockutils [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:50 compute-0 nova_compute[257802]: 2025-10-02 12:21:50.577 2 DEBUG oslo_concurrency.lockutils [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:50 compute-0 nova_compute[257802]: 2025-10-02 12:21:50.577 2 DEBUG oslo_concurrency.lockutils [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:50 compute-0 nova_compute[257802]: 2025-10-02 12:21:50.580 2 DEBUG nova.virt.libvirt.driver [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Start _get_guest_xml network_info=[{"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-2080271662-network", "vif_mac": "fa:16:3e:18:4b:71"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:21:50 compute-0 nova_compute[257802]: 2025-10-02 12:21:50.586 2 WARNING nova.virt.libvirt.driver [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:21:50 compute-0 nova_compute[257802]: 2025-10-02 12:21:50.591 2 DEBUG nova.virt.libvirt.host [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:21:50 compute-0 nova_compute[257802]: 2025-10-02 12:21:50.591 2 DEBUG nova.virt.libvirt.host [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:21:50 compute-0 nova_compute[257802]: 2025-10-02 12:21:50.597 2 DEBUG nova.virt.libvirt.host [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:21:50 compute-0 nova_compute[257802]: 2025-10-02 12:21:50.598 2 DEBUG nova.virt.libvirt.host [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:21:50 compute-0 systemd[1]: libpod-conmon-b87dc6f243060c43feacd59bb987a2a3258b51b9ca435e8724e3a17338e932b6.scope: Deactivated successfully.
Oct 02 12:21:50 compute-0 nova_compute[257802]: 2025-10-02 12:21:50.599 2 DEBUG nova.virt.libvirt.driver [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:21:50 compute-0 nova_compute[257802]: 2025-10-02 12:21:50.601 2 DEBUG nova.virt.hardware [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:39Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='eb3a53f1-304b-4cb0-acc3-abffce0fb181',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:21:50 compute-0 nova_compute[257802]: 2025-10-02 12:21:50.601 2 DEBUG nova.virt.hardware [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:21:50 compute-0 nova_compute[257802]: 2025-10-02 12:21:50.601 2 DEBUG nova.virt.hardware [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:21:50 compute-0 nova_compute[257802]: 2025-10-02 12:21:50.602 2 DEBUG nova.virt.hardware [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:21:50 compute-0 nova_compute[257802]: 2025-10-02 12:21:50.602 2 DEBUG nova.virt.hardware [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:21:50 compute-0 nova_compute[257802]: 2025-10-02 12:21:50.602 2 DEBUG nova.virt.hardware [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:21:50 compute-0 nova_compute[257802]: 2025-10-02 12:21:50.602 2 DEBUG nova.virt.hardware [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:21:50 compute-0 nova_compute[257802]: 2025-10-02 12:21:50.603 2 DEBUG nova.virt.hardware [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:21:50 compute-0 nova_compute[257802]: 2025-10-02 12:21:50.603 2 DEBUG nova.virt.hardware [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:21:50 compute-0 nova_compute[257802]: 2025-10-02 12:21:50.603 2 DEBUG nova.virt.hardware [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:21:50 compute-0 nova_compute[257802]: 2025-10-02 12:21:50.603 2 DEBUG nova.virt.hardware [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:21:50 compute-0 nova_compute[257802]: 2025-10-02 12:21:50.604 2 DEBUG nova.objects.instance [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'vcpu_model' on Instance uuid ef69bc17-6b51-491e-82e9-c4106abb8d74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:21:50 compute-0 nova_compute[257802]: 2025-10-02 12:21:50.619 2 DEBUG oslo_concurrency.processutils [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:50 compute-0 podman[312435]: 2025-10-02 12:21:50.740146152 +0000 UTC m=+0.080842902 container create 60788bec190523a155204a20d50c3d6d332f29c1d548004b4760aecb3f69efd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_driscoll, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 12:21:50 compute-0 podman[312435]: 2025-10-02 12:21:50.680540151 +0000 UTC m=+0.021236891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:21:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:50.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:50 compute-0 systemd[1]: Started libpod-conmon-60788bec190523a155204a20d50c3d6d332f29c1d548004b4760aecb3f69efd3.scope.
Oct 02 12:21:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:21:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c46e4115f4a5db58af04b990718c046eb5e4234985046510f6f942bed0827231/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:21:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c46e4115f4a5db58af04b990718c046eb5e4234985046510f6f942bed0827231/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:21:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c46e4115f4a5db58af04b990718c046eb5e4234985046510f6f942bed0827231/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:21:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c46e4115f4a5db58af04b990718c046eb5e4234985046510f6f942bed0827231/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:21:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c46e4115f4a5db58af04b990718c046eb5e4234985046510f6f942bed0827231/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:21:50 compute-0 ceph-mon[73607]: pgmap v1771: 305 pgs: 305 active+clean; 450 MiB data, 974 MiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 7.6 MiB/s wr, 260 op/s
Oct 02 12:21:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3635761677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:50 compute-0 ceph-mon[73607]: osdmap e276: 3 total, 3 up, 3 in
Oct 02 12:21:50 compute-0 podman[312435]: 2025-10-02 12:21:50.946515996 +0000 UTC m=+0.287212766 container init 60788bec190523a155204a20d50c3d6d332f29c1d548004b4760aecb3f69efd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:21:50 compute-0 podman[312435]: 2025-10-02 12:21:50.957367582 +0000 UTC m=+0.298064342 container start 60788bec190523a155204a20d50c3d6d332f29c1d548004b4760aecb3f69efd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_driscoll, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:21:50 compute-0 podman[312435]: 2025-10-02 12:21:50.981084283 +0000 UTC m=+0.321781063 container attach 60788bec190523a155204a20d50c3d6d332f29c1d548004b4760aecb3f69efd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_driscoll, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:21:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:21:51 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3141000901' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.125 2 DEBUG oslo_concurrency.processutils [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.168 2 DEBUG oslo_concurrency.processutils [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.260 2 DEBUG nova.network.neutron [req-3ab6d3ca-ddad-458d-994c-4e17d01b5ddc req-6ca5cc58-69fb-4b7c-a6a7-318cb9fb6529 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Updated VIF entry in instance network info cache for port ce91d40e-1bd9-4703-95b5-a23942c1592e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.262 2 DEBUG nova.network.neutron [req-3ab6d3ca-ddad-458d-994c-4e17d01b5ddc req-6ca5cc58-69fb-4b7c-a6a7-318cb9fb6529 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Updating instance_info_cache with network_info: [{"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.294 2 DEBUG oslo_concurrency.lockutils [req-3ab6d3ca-ddad-458d-994c-4e17d01b5ddc req-6ca5cc58-69fb-4b7c-a6a7-318cb9fb6529 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:21:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 305 active+clean; 437 MiB data, 970 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.7 MiB/s wr, 90 op/s
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:21:51 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/133005930' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.668 2 DEBUG oslo_concurrency.processutils [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.669 2 DEBUG nova.virt.libvirt.vif [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:19:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-457871717',display_name='tempest-ServerActionsTestOtherB-server-457871717',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-457871717',id=82,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGD2jbBFmRg2ZrnheVnZyLwDISk/dFTNtp10+sWyF/q+rC4Q86cvBQSRgacxSPIqXVpmiVTqI66cLDPhvjcnRFXyQqHRS/RWGvUZk+wm1wfft8CveiGko+Vh4vSox2iOrA==',key_name='tempest-keypair-1336245373',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:19:59Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='10fff81da7a54740a53a0771ce916329',ramdisk_id='',reservation_id='r-30ee9i0h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='stopped',owner_project_name='tempest-ServerActionsTestOtherB-1686489955',owner_user_name='tempest-ServerActionsTestOtherB-1686489955-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:21:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='25468893d71641a385711fd2982bb00b',uuid=ef69bc17-6b51-491e-82e9-c4106abb8d74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-2080271662-network", "vif_mac": "fa:16:3e:18:4b:71"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.670 2 DEBUG nova.network.os_vif_util [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converting VIF {"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-2080271662-network", "vif_mac": "fa:16:3e:18:4b:71"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.670 2 DEBUG nova.network.os_vif_util [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:4b:71,bridge_name='br-int',has_traffic_filtering=True,id=ce91d40e-1bd9-4703-95b5-a23942c1592e,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce91d40e-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.673 2 DEBUG nova.virt.libvirt.driver [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:21:51 compute-0 nova_compute[257802]:   <uuid>ef69bc17-6b51-491e-82e9-c4106abb8d74</uuid>
Oct 02 12:21:51 compute-0 nova_compute[257802]:   <name>instance-00000052</name>
Oct 02 12:21:51 compute-0 nova_compute[257802]:   <memory>196608</memory>
Oct 02 12:21:51 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:21:51 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:21:51 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:       <nova:name>tempest-ServerActionsTestOtherB-server-457871717</nova:name>
Oct 02 12:21:51 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:21:50</nova:creationTime>
Oct 02 12:21:51 compute-0 nova_compute[257802]:       <nova:flavor name="m1.micro">
Oct 02 12:21:51 compute-0 nova_compute[257802]:         <nova:memory>192</nova:memory>
Oct 02 12:21:51 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:21:51 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:21:51 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:21:51 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:21:51 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:21:51 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:21:51 compute-0 nova_compute[257802]:         <nova:user uuid="25468893d71641a385711fd2982bb00b">tempest-ServerActionsTestOtherB-1686489955-project-member</nova:user>
Oct 02 12:21:51 compute-0 nova_compute[257802]:         <nova:project uuid="10fff81da7a54740a53a0771ce916329">tempest-ServerActionsTestOtherB-1686489955</nova:project>
Oct 02 12:21:51 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:21:51 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:21:51 compute-0 nova_compute[257802]:         <nova:port uuid="ce91d40e-1bd9-4703-95b5-a23942c1592e">
Oct 02 12:21:51 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:21:51 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:21:51 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:21:51 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <system>
Oct 02 12:21:51 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:21:51 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:21:51 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:21:51 compute-0 nova_compute[257802]:       <entry name="serial">ef69bc17-6b51-491e-82e9-c4106abb8d74</entry>
Oct 02 12:21:51 compute-0 nova_compute[257802]:       <entry name="uuid">ef69bc17-6b51-491e-82e9-c4106abb8d74</entry>
Oct 02 12:21:51 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     </system>
Oct 02 12:21:51 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:21:51 compute-0 nova_compute[257802]:   <os>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:   </os>
Oct 02 12:21:51 compute-0 nova_compute[257802]:   <features>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:   </features>
Oct 02 12:21:51 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:21:51 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:21:51 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:21:51 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/ef69bc17-6b51-491e-82e9-c4106abb8d74_disk">
Oct 02 12:21:51 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:       </source>
Oct 02 12:21:51 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:21:51 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:21:51 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:21:51 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/ef69bc17-6b51-491e-82e9-c4106abb8d74_disk.config">
Oct 02 12:21:51 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:       </source>
Oct 02 12:21:51 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:21:51 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:21:51 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:21:51 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:18:4b:71"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:       <target dev="tapce91d40e-1b"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:21:51 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/ef69bc17-6b51-491e-82e9-c4106abb8d74/console.log" append="off"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <video>
Oct 02 12:21:51 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     </video>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:21:51 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:21:51 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:21:51 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:21:51 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:21:51 compute-0 nova_compute[257802]: </domain>
Oct 02 12:21:51 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.674 2 DEBUG nova.virt.libvirt.vif [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:19:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-457871717',display_name='tempest-ServerActionsTestOtherB-server-457871717',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-457871717',id=82,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGD2jbBFmRg2ZrnheVnZyLwDISk/dFTNtp10+sWyF/q+rC4Q86cvBQSRgacxSPIqXVpmiVTqI66cLDPhvjcnRFXyQqHRS/RWGvUZk+wm1wfft8CveiGko+Vh4vSox2iOrA==',key_name='tempest-keypair-1336245373',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:19:59Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='10fff81da7a54740a53a0771ce916329',ramdisk_id='',reservation_id='r-30ee9i0h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='stopped',owner_project_name='tempest-ServerActionsTestOtherB-1686489955',owner_user_name='tempest-ServerActionsTestOtherB-1686489955-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:21:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='25468893d71641a385711fd2982bb00b',uuid=ef69bc17-6b51-491e-82e9-c4106abb8d74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-2080271662-network", "vif_mac": "fa:16:3e:18:4b:71"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.675 2 DEBUG nova.network.os_vif_util [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converting VIF {"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-2080271662-network", "vif_mac": "fa:16:3e:18:4b:71"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.675 2 DEBUG nova.network.os_vif_util [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:4b:71,bridge_name='br-int',has_traffic_filtering=True,id=ce91d40e-1bd9-4703-95b5-a23942c1592e,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce91d40e-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.675 2 DEBUG os_vif [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:4b:71,bridge_name='br-int',has_traffic_filtering=True,id=ce91d40e-1bd9-4703-95b5-a23942c1592e,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce91d40e-1b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.676 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.676 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.679 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce91d40e-1b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.679 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapce91d40e-1b, col_values=(('external_ids', {'iface-id': 'ce91d40e-1bd9-4703-95b5-a23942c1592e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:18:4b:71', 'vm-uuid': 'ef69bc17-6b51-491e-82e9-c4106abb8d74'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:51 compute-0 NetworkManager[44987]: <info>  [1759407711.6820] manager: (tapce91d40e-1b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/170)
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.691 2 INFO os_vif [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:4b:71,bridge_name='br-int',has_traffic_filtering=True,id=ce91d40e-1bd9-4703-95b5-a23942c1592e,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce91d40e-1b')
Oct 02 12:21:51 compute-0 vibrant_driscoll[312470]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:21:51 compute-0 vibrant_driscoll[312470]: --> relative data size: 1.0
Oct 02 12:21:51 compute-0 vibrant_driscoll[312470]: --> All data devices are unavailable
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.783 2 DEBUG nova.virt.libvirt.driver [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.786 2 DEBUG nova.virt.libvirt.driver [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.786 2 DEBUG nova.virt.libvirt.driver [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] No VIF found with MAC fa:16:3e:18:4b:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.787 2 INFO nova.virt.libvirt.driver [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Using config drive
Oct 02 12:21:51 compute-0 systemd[1]: libpod-60788bec190523a155204a20d50c3d6d332f29c1d548004b4760aecb3f69efd3.scope: Deactivated successfully.
Oct 02 12:21:51 compute-0 conmon[312470]: conmon 60788bec190523a15520 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-60788bec190523a155204a20d50c3d6d332f29c1d548004b4760aecb3f69efd3.scope/container/memory.events
Oct 02 12:21:51 compute-0 podman[312435]: 2025-10-02 12:21:51.80357898 +0000 UTC m=+1.144275710 container died 60788bec190523a155204a20d50c3d6d332f29c1d548004b4760aecb3f69efd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_driscoll, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.828 2 DEBUG nova.compute.manager [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:21:51 compute-0 nova_compute[257802]: 2025-10-02 12:21:51.829 2 DEBUG nova.virt.libvirt.driver [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Oct 02 12:21:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-c46e4115f4a5db58af04b990718c046eb5e4234985046510f6f942bed0827231-merged.mount: Deactivated successfully.
Oct 02 12:21:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3141000901' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/133005930' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:52 compute-0 podman[312435]: 2025-10-02 12:21:52.130296914 +0000 UTC m=+1.470993634 container remove 60788bec190523a155204a20d50c3d6d332f29c1d548004b4760aecb3f69efd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_driscoll, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 12:21:52 compute-0 sudo[312291]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:52 compute-0 nova_compute[257802]: 2025-10-02 12:21:52.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:52 compute-0 systemd[1]: libpod-conmon-60788bec190523a155204a20d50c3d6d332f29c1d548004b4760aecb3f69efd3.scope: Deactivated successfully.
Oct 02 12:21:52 compute-0 sudo[312561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:52 compute-0 sudo[312561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:52 compute-0 sudo[312561]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:52 compute-0 sudo[312586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:21:52 compute-0 sudo[312586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:52 compute-0 sudo[312586]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:52.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:52 compute-0 sudo[312611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:52 compute-0 sudo[312611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:52 compute-0 sudo[312611]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:52 compute-0 sudo[312636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:21:52 compute-0 sudo[312636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:52.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:52 compute-0 podman[312704]: 2025-10-02 12:21:52.947954172 +0000 UTC m=+0.073829009 container create b523d5ff01d59c5541ac806b1876349f89631950a9f4a93e94ad8df89f032490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ride, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct 02 12:21:52 compute-0 podman[312704]: 2025-10-02 12:21:52.901262318 +0000 UTC m=+0.027137155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:21:53 compute-0 systemd[1]: Started libpod-conmon-b523d5ff01d59c5541ac806b1876349f89631950a9f4a93e94ad8df89f032490.scope.
Oct 02 12:21:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:21:53 compute-0 podman[312704]: 2025-10-02 12:21:53.070622347 +0000 UTC m=+0.196497274 container init b523d5ff01d59c5541ac806b1876349f89631950a9f4a93e94ad8df89f032490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ride, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 12:21:53 compute-0 podman[312704]: 2025-10-02 12:21:53.082306804 +0000 UTC m=+0.208181621 container start b523d5ff01d59c5541ac806b1876349f89631950a9f4a93e94ad8df89f032490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:21:53 compute-0 objective_ride[312720]: 167 167
Oct 02 12:21:53 compute-0 systemd[1]: libpod-b523d5ff01d59c5541ac806b1876349f89631950a9f4a93e94ad8df89f032490.scope: Deactivated successfully.
Oct 02 12:21:53 compute-0 conmon[312720]: conmon b523d5ff01d59c5541ac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b523d5ff01d59c5541ac806b1876349f89631950a9f4a93e94ad8df89f032490.scope/container/memory.events
Oct 02 12:21:53 compute-0 podman[312704]: 2025-10-02 12:21:53.098850788 +0000 UTC m=+0.224725625 container attach b523d5ff01d59c5541ac806b1876349f89631950a9f4a93e94ad8df89f032490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ride, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 12:21:53 compute-0 podman[312704]: 2025-10-02 12:21:53.09930223 +0000 UTC m=+0.225177047 container died b523d5ff01d59c5541ac806b1876349f89631950a9f4a93e94ad8df89f032490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:21:53 compute-0 ceph-mon[73607]: pgmap v1773: 305 pgs: 305 active+clean; 437 MiB data, 970 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.7 MiB/s wr, 90 op/s
Oct 02 12:21:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-15a42c4d8ee5e9f51ad0adc80d7caac024f6f4e6944df06d1fb8c61c7102ac8e-merged.mount: Deactivated successfully.
Oct 02 12:21:53 compute-0 podman[312704]: 2025-10-02 12:21:53.280393195 +0000 UTC m=+0.406268022 container remove b523d5ff01d59c5541ac806b1876349f89631950a9f4a93e94ad8df89f032490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:21:53 compute-0 systemd[1]: libpod-conmon-b523d5ff01d59c5541ac806b1876349f89631950a9f4a93e94ad8df89f032490.scope: Deactivated successfully.
Oct 02 12:21:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1774: 305 pgs: 305 active+clean; 454 MiB data, 975 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.9 MiB/s wr, 97 op/s
Oct 02 12:21:53 compute-0 podman[312746]: 2025-10-02 12:21:53.533414393 +0000 UTC m=+0.102148283 container create 5f389037bbb8fd79b53490b587bf3b72a3c9770e24d16f5652b9ac5b5cf03aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bhaskara, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:21:53 compute-0 podman[312746]: 2025-10-02 12:21:53.454627614 +0000 UTC m=+0.023361514 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:21:53 compute-0 systemd[1]: Started libpod-conmon-5f389037bbb8fd79b53490b587bf3b72a3c9770e24d16f5652b9ac5b5cf03aee.scope.
Oct 02 12:21:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:21:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72853347faa109afc6c3e5361ae4173805912ed05665612fdd196c9bd1d5cb78/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:21:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72853347faa109afc6c3e5361ae4173805912ed05665612fdd196c9bd1d5cb78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:21:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72853347faa109afc6c3e5361ae4173805912ed05665612fdd196c9bd1d5cb78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:21:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72853347faa109afc6c3e5361ae4173805912ed05665612fdd196c9bd1d5cb78/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:21:53 compute-0 podman[312746]: 2025-10-02 12:21:53.892069579 +0000 UTC m=+0.460803569 container init 5f389037bbb8fd79b53490b587bf3b72a3c9770e24d16f5652b9ac5b5cf03aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bhaskara, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:21:53 compute-0 podman[312746]: 2025-10-02 12:21:53.905225651 +0000 UTC m=+0.473959541 container start 5f389037bbb8fd79b53490b587bf3b72a3c9770e24d16f5652b9ac5b5cf03aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bhaskara, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 12:21:53 compute-0 podman[312746]: 2025-10-02 12:21:53.926453201 +0000 UTC m=+0.495187121 container attach 5f389037bbb8fd79b53490b587bf3b72a3c9770e24d16f5652b9ac5b5cf03aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct 02 12:21:54 compute-0 nova_compute[257802]: 2025-10-02 12:21:54.095 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:21:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Oct 02 12:21:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Oct 02 12:21:54 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Oct 02 12:21:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:21:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:21:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:21:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:21:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005992963951702959 of space, bias 1.0, pg target 1.7978891855108878 quantized to 32 (current 32)
Oct 02 12:21:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:21:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6464260320700727 quantized to 32 (current 32)
Oct 02 12:21:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:21:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:21:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:21:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004063466929555184 of space, bias 1.0, pg target 1.214976611937 quantized to 32 (current 32)
Oct 02 12:21:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:21:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:21:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:21:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:21:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:21:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027081297692164525 quantized to 32 (current 32)
Oct 02 12:21:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:21:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:21:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:21:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:21:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:21:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:21:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:54.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]: {
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]:     "1": [
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]:         {
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]:             "devices": [
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]:                 "/dev/loop3"
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]:             ],
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]:             "lv_name": "ceph_lv0",
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]:             "lv_size": "7511998464",
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]:             "name": "ceph_lv0",
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]:             "tags": {
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]:                 "ceph.cluster_name": "ceph",
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]:                 "ceph.crush_device_class": "",
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]:                 "ceph.encrypted": "0",
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]:                 "ceph.osd_id": "1",
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]:                 "ceph.type": "block",
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]:                 "ceph.vdo": "0"
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]:             },
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]:             "type": "block",
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]:             "vg_name": "ceph_vg0"
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]:         }
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]:     ]
Oct 02 12:21:54 compute-0 musing_bhaskara[312763]: }
Oct 02 12:21:54 compute-0 systemd[1]: libpod-5f389037bbb8fd79b53490b587bf3b72a3c9770e24d16f5652b9ac5b5cf03aee.scope: Deactivated successfully.
Oct 02 12:21:54 compute-0 podman[312746]: 2025-10-02 12:21:54.730052616 +0000 UTC m=+1.298786556 container died 5f389037bbb8fd79b53490b587bf3b72a3c9770e24d16f5652b9ac5b5cf03aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:21:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:21:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:54.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:21:54 compute-0 systemd[1]: Stopping User Manager for UID 42436...
Oct 02 12:21:54 compute-0 systemd[312024]: Activating special unit Exit the Session...
Oct 02 12:21:54 compute-0 systemd[312024]: Stopped target Main User Target.
Oct 02 12:21:54 compute-0 systemd[312024]: Stopped target Basic System.
Oct 02 12:21:54 compute-0 systemd[312024]: Stopped target Paths.
Oct 02 12:21:54 compute-0 systemd[312024]: Stopped target Sockets.
Oct 02 12:21:54 compute-0 systemd[312024]: Stopped target Timers.
Oct 02 12:21:54 compute-0 systemd[312024]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 02 12:21:54 compute-0 systemd[312024]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 12:21:54 compute-0 systemd[312024]: Closed D-Bus User Message Bus Socket.
Oct 02 12:21:54 compute-0 systemd[312024]: Stopped Create User's Volatile Files and Directories.
Oct 02 12:21:54 compute-0 systemd[312024]: Removed slice User Application Slice.
Oct 02 12:21:54 compute-0 systemd[312024]: Reached target Shutdown.
Oct 02 12:21:54 compute-0 systemd[312024]: Finished Exit the Session.
Oct 02 12:21:54 compute-0 systemd[312024]: Reached target Exit the Session.
Oct 02 12:21:54 compute-0 systemd[1]: user@42436.service: Deactivated successfully.
Oct 02 12:21:54 compute-0 systemd[1]: Stopped User Manager for UID 42436.
Oct 02 12:21:54 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct 02 12:21:54 compute-0 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct 02 12:21:54 compute-0 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct 02 12:21:54 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct 02 12:21:54 compute-0 systemd[1]: Removed slice User Slice of UID 42436.
Oct 02 12:21:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-72853347faa109afc6c3e5361ae4173805912ed05665612fdd196c9bd1d5cb78-merged.mount: Deactivated successfully.
Oct 02 12:21:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:21:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3754314976' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:21:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:21:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3754314976' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:21:55 compute-0 ceph-mon[73607]: pgmap v1774: 305 pgs: 305 active+clean; 454 MiB data, 975 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.9 MiB/s wr, 97 op/s
Oct 02 12:21:55 compute-0 ceph-mon[73607]: osdmap e277: 3 total, 3 up, 3 in
Oct 02 12:21:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/286001587' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/131269951' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1776: 305 pgs: 305 active+clean; 429 MiB data, 982 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Oct 02 12:21:55 compute-0 podman[312746]: 2025-10-02 12:21:55.465667174 +0000 UTC m=+2.034401084 container remove 5f389037bbb8fd79b53490b587bf3b72a3c9770e24d16f5652b9ac5b5cf03aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 12:21:55 compute-0 systemd[1]: libpod-conmon-5f389037bbb8fd79b53490b587bf3b72a3c9770e24d16f5652b9ac5b5cf03aee.scope: Deactivated successfully.
Oct 02 12:21:55 compute-0 sudo[312636]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:55 compute-0 sudo[312785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:55 compute-0 sudo[312785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:55 compute-0 sudo[312785]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:55 compute-0 sudo[312811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:21:55 compute-0 sudo[312811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:55 compute-0 sudo[312811]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:55 compute-0 podman[312809]: 2025-10-02 12:21:55.769903177 +0000 UTC m=+0.109401251 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:21:55 compute-0 sudo[312853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:55 compute-0 sudo[312853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:55 compute-0 sudo[312853]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:55 compute-0 sudo[312880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:21:55 compute-0 sudo[312880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:56 compute-0 nova_compute[257802]: 2025-10-02 12:21:56.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:21:56 compute-0 nova_compute[257802]: 2025-10-02 12:21:56.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:21:56 compute-0 nova_compute[257802]: 2025-10-02 12:21:56.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:21:56 compute-0 nova_compute[257802]: 2025-10-02 12:21:56.249 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:21:56 compute-0 nova_compute[257802]: 2025-10-02 12:21:56.249 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:21:56 compute-0 nova_compute[257802]: 2025-10-02 12:21:56.250 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:21:56 compute-0 nova_compute[257802]: 2025-10-02 12:21:56.250 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid ef69bc17-6b51-491e-82e9-c4106abb8d74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:21:56 compute-0 podman[312946]: 2025-10-02 12:21:56.354942008 +0000 UTC m=+0.116686109 container create a7151d71bcbe5b5a616acb9c079efa1e2a8e71bdaa6e905d3d7e16f1c1d3f1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_fermat, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 12:21:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Oct 02 12:21:56 compute-0 podman[312946]: 2025-10-02 12:21:56.267054224 +0000 UTC m=+0.028798375 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:21:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:56.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3754314976' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:21:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3754314976' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:21:56 compute-0 ceph-mon[73607]: pgmap v1776: 305 pgs: 305 active+clean; 429 MiB data, 982 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Oct 02 12:21:56 compute-0 systemd[1]: Started libpod-conmon-a7151d71bcbe5b5a616acb9c079efa1e2a8e71bdaa6e905d3d7e16f1c1d3f1f1.scope.
Oct 02 12:21:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:21:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Oct 02 12:21:56 compute-0 podman[312946]: 2025-10-02 12:21:56.673176773 +0000 UTC m=+0.434920934 container init a7151d71bcbe5b5a616acb9c079efa1e2a8e71bdaa6e905d3d7e16f1c1d3f1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_fermat, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:21:56 compute-0 nova_compute[257802]: 2025-10-02 12:21:56.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:56 compute-0 podman[312946]: 2025-10-02 12:21:56.686433408 +0000 UTC m=+0.448177539 container start a7151d71bcbe5b5a616acb9c079efa1e2a8e71bdaa6e905d3d7e16f1c1d3f1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:21:56 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Oct 02 12:21:56 compute-0 sad_fermat[312962]: 167 167
Oct 02 12:21:56 compute-0 systemd[1]: libpod-a7151d71bcbe5b5a616acb9c079efa1e2a8e71bdaa6e905d3d7e16f1c1d3f1f1.scope: Deactivated successfully.
Oct 02 12:21:56 compute-0 podman[312946]: 2025-10-02 12:21:56.751169933 +0000 UTC m=+0.512914094 container attach a7151d71bcbe5b5a616acb9c079efa1e2a8e71bdaa6e905d3d7e16f1c1d3f1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_fermat, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:21:56 compute-0 podman[312946]: 2025-10-02 12:21:56.753021869 +0000 UTC m=+0.514766010 container died a7151d71bcbe5b5a616acb9c079efa1e2a8e71bdaa6e905d3d7e16f1c1d3f1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 12:21:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:21:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:56.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:21:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-baa2ccdaa693293f1567319246a29b06353da48de595f1a22439e3f091d09371-merged.mount: Deactivated successfully.
Oct 02 12:21:57 compute-0 podman[312946]: 2025-10-02 12:21:57.207960733 +0000 UTC m=+0.969704864 container remove a7151d71bcbe5b5a616acb9c079efa1e2a8e71bdaa6e905d3d7e16f1c1d3f1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_fermat, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 12:21:57 compute-0 nova_compute[257802]: 2025-10-02 12:21:57.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:57 compute-0 systemd[1]: libpod-conmon-a7151d71bcbe5b5a616acb9c079efa1e2a8e71bdaa6e905d3d7e16f1c1d3f1f1.scope: Deactivated successfully.
Oct 02 12:21:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 305 active+clean; 429 MiB data, 982 MiB used, 20 GiB / 21 GiB avail; 83 KiB/s rd, 2.8 MiB/s wr, 117 op/s
Oct 02 12:21:57 compute-0 podman[312987]: 2025-10-02 12:21:57.493344003 +0000 UTC m=+0.067613067 container create ea546aafc89092ddc96c65e07279e7d63a2983208b581ad036f08f73de69d5b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_grothendieck, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 12:21:57 compute-0 podman[312987]: 2025-10-02 12:21:57.454583044 +0000 UTC m=+0.028852118 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:21:57 compute-0 systemd[1]: Started libpod-conmon-ea546aafc89092ddc96c65e07279e7d63a2983208b581ad036f08f73de69d5b8.scope.
Oct 02 12:21:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:21:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f14a5194c56c3551605f5657c1c2d832b88b93e4584f3c0c0e487476b3c75b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:21:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f14a5194c56c3551605f5657c1c2d832b88b93e4584f3c0c0e487476b3c75b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:21:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f14a5194c56c3551605f5657c1c2d832b88b93e4584f3c0c0e487476b3c75b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:21:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f14a5194c56c3551605f5657c1c2d832b88b93e4584f3c0c0e487476b3c75b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:21:57 compute-0 podman[312987]: 2025-10-02 12:21:57.6324501 +0000 UTC m=+0.206719144 container init ea546aafc89092ddc96c65e07279e7d63a2983208b581ad036f08f73de69d5b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_grothendieck, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:21:57 compute-0 podman[312987]: 2025-10-02 12:21:57.643264896 +0000 UTC m=+0.217533930 container start ea546aafc89092ddc96c65e07279e7d63a2983208b581ad036f08f73de69d5b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_grothendieck, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 12:21:57 compute-0 podman[312987]: 2025-10-02 12:21:57.740312502 +0000 UTC m=+0.314581546 container attach ea546aafc89092ddc96c65e07279e7d63a2983208b581ad036f08f73de69d5b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_grothendieck, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:21:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2824117336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:57 compute-0 ceph-mon[73607]: osdmap e278: 3 total, 3 up, 3 in
Oct 02 12:21:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/472983939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:58 compute-0 nova_compute[257802]: 2025-10-02 12:21:58.306 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Updating instance_info_cache with network_info: [{"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:21:58 compute-0 nova_compute[257802]: 2025-10-02 12:21:58.327 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:21:58 compute-0 nova_compute[257802]: 2025-10-02 12:21:58.327 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:21:58 compute-0 nova_compute[257802]: 2025-10-02 12:21:58.327 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:21:58 compute-0 nova_compute[257802]: 2025-10-02 12:21:58.355 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:58 compute-0 nova_compute[257802]: 2025-10-02 12:21:58.355 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:58 compute-0 nova_compute[257802]: 2025-10-02 12:21:58.355 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:58 compute-0 nova_compute[257802]: 2025-10-02 12:21:58.356 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:21:58 compute-0 nova_compute[257802]: 2025-10-02 12:21:58.356 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:58.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:58 compute-0 bold_grothendieck[313004]: {
Oct 02 12:21:58 compute-0 bold_grothendieck[313004]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:21:58 compute-0 bold_grothendieck[313004]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:21:58 compute-0 bold_grothendieck[313004]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:21:58 compute-0 bold_grothendieck[313004]:         "osd_id": 1,
Oct 02 12:21:58 compute-0 bold_grothendieck[313004]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:21:58 compute-0 bold_grothendieck[313004]:         "type": "bluestore"
Oct 02 12:21:58 compute-0 bold_grothendieck[313004]:     }
Oct 02 12:21:58 compute-0 bold_grothendieck[313004]: }
Oct 02 12:21:58 compute-0 systemd[1]: libpod-ea546aafc89092ddc96c65e07279e7d63a2983208b581ad036f08f73de69d5b8.scope: Deactivated successfully.
Oct 02 12:21:58 compute-0 podman[312987]: 2025-10-02 12:21:58.506253626 +0000 UTC m=+1.080522650 container died ea546aafc89092ddc96c65e07279e7d63a2983208b581ad036f08f73de69d5b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_grothendieck, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:21:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:21:58 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3483554121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:58 compute-0 nova_compute[257802]: 2025-10-02 12:21:58.788 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:21:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:58.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:58 compute-0 nova_compute[257802]: 2025-10-02 12:21:58.839 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:21:58 compute-0 nova_compute[257802]: 2025-10-02 12:21:58.839 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:21:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-84f14a5194c56c3551605f5657c1c2d832b88b93e4584f3c0c0e487476b3c75b-merged.mount: Deactivated successfully.
Oct 02 12:21:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Oct 02 12:21:58 compute-0 ceph-mon[73607]: pgmap v1778: 305 pgs: 305 active+clean; 429 MiB data, 982 MiB used, 20 GiB / 21 GiB avail; 83 KiB/s rd, 2.8 MiB/s wr, 117 op/s
Oct 02 12:21:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2586325224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3916135186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3483554121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:58 compute-0 nova_compute[257802]: 2025-10-02 12:21:58.977 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:21:58 compute-0 nova_compute[257802]: 2025-10-02 12:21:58.978 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4476MB free_disk=20.863990783691406GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:21:58 compute-0 nova_compute[257802]: 2025-10-02 12:21:58.978 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:58 compute-0 nova_compute[257802]: 2025-10-02 12:21:58.978 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:59 compute-0 nova_compute[257802]: 2025-10-02 12:21:59.066 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance ef69bc17-6b51-491e-82e9-c4106abb8d74 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:21:59 compute-0 nova_compute[257802]: 2025-10-02 12:21:59.067 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:21:59 compute-0 nova_compute[257802]: 2025-10-02 12:21:59.067 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=704MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:21:59 compute-0 nova_compute[257802]: 2025-10-02 12:21:59.113 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Oct 02 12:21:59 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Oct 02 12:21:59 compute-0 podman[312987]: 2025-10-02 12:21:59.194720999 +0000 UTC m=+1.768990053 container remove ea546aafc89092ddc96c65e07279e7d63a2983208b581ad036f08f73de69d5b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 12:21:59 compute-0 systemd[1]: libpod-conmon-ea546aafc89092ddc96c65e07279e7d63a2983208b581ad036f08f73de69d5b8.scope: Deactivated successfully.
Oct 02 12:21:59 compute-0 sudo[312880]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:21:59 compute-0 podman[313063]: 2025-10-02 12:21:59.340854199 +0000 UTC m=+0.076716740 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 12:21:59 compute-0 podman[313062]: 2025-10-02 12:21:59.350209418 +0000 UTC m=+0.085933566 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:21:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1780: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 405 MiB data, 945 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.8 MiB/s wr, 169 op/s
Oct 02 12:21:59 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:21:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:21:59 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:21:59 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev a1b54b06-0534-4f3a-8303-64f8f54f15c4 does not exist
Oct 02 12:21:59 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev ede9ee09-9975-4780-9e46-483bf612d3d4 does not exist
Oct 02 12:21:59 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 15fca44f-e4f0-4bc0-8bda-2d2b471b4928 does not exist
Oct 02 12:21:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:21:59 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2834554826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:59 compute-0 nova_compute[257802]: 2025-10-02 12:21:59.692 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:59 compute-0 nova_compute[257802]: 2025-10-02 12:21:59.700 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:21:59 compute-0 sudo[313122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:59 compute-0 sudo[313122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:59 compute-0 nova_compute[257802]: 2025-10-02 12:21:59.719 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:21:59 compute-0 sudo[313122]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:59 compute-0 nova_compute[257802]: 2025-10-02 12:21:59.741 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:21:59 compute-0 nova_compute[257802]: 2025-10-02 12:21:59.741 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:59 compute-0 sudo[313149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:21:59 compute-0 sudo[313149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:59 compute-0 sudo[313149]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:00 compute-0 sudo[313174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:22:00 compute-0 sudo[313174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:00 compute-0 sudo[313174]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:00 compute-0 sudo[313199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:22:00 compute-0 sudo[313199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:00 compute-0 sudo[313199]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:00 compute-0 ceph-mon[73607]: osdmap e279: 3 total, 3 up, 3 in
Oct 02 12:22:00 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:22:00 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:22:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2834554826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:22:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:00.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:22:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:22:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:00.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:22:00 compute-0 nova_compute[257802]: 2025-10-02 12:22:00.986 2 DEBUG nova.objects.instance [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'flavor' on Instance uuid ef69bc17-6b51-491e-82e9-c4106abb8d74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:01 compute-0 nova_compute[257802]: 2025-10-02 12:22:01.019 2 DEBUG oslo_concurrency.lockutils [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:22:01 compute-0 nova_compute[257802]: 2025-10-02 12:22:01.019 2 DEBUG oslo_concurrency.lockutils [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquired lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:22:01 compute-0 nova_compute[257802]: 2025-10-02 12:22:01.019 2 DEBUG nova.network.neutron [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:22:01 compute-0 nova_compute[257802]: 2025-10-02 12:22:01.020 2 DEBUG nova.objects.instance [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'info_cache' on Instance uuid ef69bc17-6b51-491e-82e9-c4106abb8d74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 405 MiB data, 945 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.2 MiB/s wr, 188 op/s
Oct 02 12:22:01 compute-0 ceph-mon[73607]: pgmap v1780: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 405 MiB data, 945 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.8 MiB/s wr, 169 op/s
Oct 02 12:22:01 compute-0 nova_compute[257802]: 2025-10-02 12:22:01.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:02 compute-0 nova_compute[257802]: 2025-10-02 12:22:02.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:02.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:02 compute-0 ceph-mon[73607]: pgmap v1781: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 405 MiB data, 945 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.2 MiB/s wr, 188 op/s
Oct 02 12:22:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:22:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:02.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:22:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1782: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 384 MiB data, 935 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 21 KiB/s wr, 142 op/s
Oct 02 12:22:03 compute-0 nova_compute[257802]: 2025-10-02 12:22:03.906 2 DEBUG nova.network.neutron [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Updating instance_info_cache with network_info: [{"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:22:03 compute-0 nova_compute[257802]: 2025-10-02 12:22:03.936 2 DEBUG oslo_concurrency.lockutils [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Releasing lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:22:03 compute-0 nova_compute[257802]: 2025-10-02 12:22:03.984 2 INFO nova.virt.libvirt.driver [-] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Instance destroyed successfully.
Oct 02 12:22:03 compute-0 nova_compute[257802]: 2025-10-02 12:22:03.984 2 DEBUG nova.objects.instance [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'numa_topology' on Instance uuid ef69bc17-6b51-491e-82e9-c4106abb8d74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.026 2 DEBUG nova.objects.instance [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'resources' on Instance uuid ef69bc17-6b51-491e-82e9-c4106abb8d74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.040 2 DEBUG nova.virt.libvirt.vif [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:19:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-457871717',display_name='tempest-ServerActionsTestOtherB-server-457871717',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-457871717',id=82,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGD2jbBFmRg2ZrnheVnZyLwDISk/dFTNtp10+sWyF/q+rC4Q86cvBQSRgacxSPIqXVpmiVTqI66cLDPhvjcnRFXyQqHRS/RWGvUZk+wm1wfft8CveiGko+Vh4vSox2iOrA==',key_name='tempest-keypair-1336245373',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:21:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='10fff81da7a54740a53a0771ce916329',ramdisk_id='',reservation_id='r-30ee9i0h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1686489955',owner_user_name='tempest-ServerActionsTestOtherB-1686489955-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:21:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='25468893d71641a385711fd2982bb00b',uuid=ef69bc17-6b51-491e-82e9-c4106abb8d74,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.041 2 DEBUG nova.network.os_vif_util [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converting VIF {"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.042 2 DEBUG nova.network.os_vif_util [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:4b:71,bridge_name='br-int',has_traffic_filtering=True,id=ce91d40e-1bd9-4703-95b5-a23942c1592e,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce91d40e-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.043 2 DEBUG os_vif [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:4b:71,bridge_name='br-int',has_traffic_filtering=True,id=ce91d40e-1bd9-4703-95b5-a23942c1592e,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce91d40e-1b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.046 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce91d40e-1b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.053 2 INFO os_vif [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:4b:71,bridge_name='br-int',has_traffic_filtering=True,id=ce91d40e-1bd9-4703-95b5-a23942c1592e,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce91d40e-1b')
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.065 2 DEBUG nova.virt.libvirt.driver [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Start _get_guest_xml network_info=[{"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.070 2 WARNING nova.virt.libvirt.driver [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.080 2 DEBUG nova.virt.libvirt.host [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.081 2 DEBUG nova.virt.libvirt.host [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.087 2 DEBUG nova.virt.libvirt.host [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.088 2 DEBUG nova.virt.libvirt.host [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.089 2 DEBUG nova.virt.libvirt.driver [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.090 2 DEBUG nova.virt.hardware [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:39Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='eb3a53f1-304b-4cb0-acc3-abffce0fb181',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.091 2 DEBUG nova.virt.hardware [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.091 2 DEBUG nova.virt.hardware [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.092 2 DEBUG nova.virt.hardware [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.092 2 DEBUG nova.virt.hardware [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.093 2 DEBUG nova.virt.hardware [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.093 2 DEBUG nova.virt.hardware [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.093 2 DEBUG nova.virt.hardware [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.094 2 DEBUG nova.virt.hardware [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.094 2 DEBUG nova.virt.hardware [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.095 2 DEBUG nova.virt.hardware [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.095 2 DEBUG nova.objects.instance [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'vcpu_model' on Instance uuid ef69bc17-6b51-491e-82e9-c4106abb8d74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.126 2 DEBUG oslo_concurrency.processutils [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Oct 02 12:22:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:22:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:04.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.513 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:22:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Oct 02 12:22:04 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Oct 02 12:22:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:22:04 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/970443417' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.580 2 DEBUG oslo_concurrency.processutils [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:04 compute-0 nova_compute[257802]: 2025-10-02 12:22:04.639 2 DEBUG oslo_concurrency.processutils [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:04 compute-0 ceph-mon[73607]: pgmap v1782: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 384 MiB data, 935 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 21 KiB/s wr, 142 op/s
Oct 02 12:22:04 compute-0 ceph-mon[73607]: osdmap e280: 3 total, 3 up, 3 in
Oct 02 12:22:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/970443417' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:22:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:04.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:22:04 compute-0 podman[313287]: 2025-10-02 12:22:04.936722421 +0000 UTC m=+0.079349764 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:22:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:22:05 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/513723174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.044 2 DEBUG oslo_concurrency.processutils [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.046 2 DEBUG nova.virt.libvirt.vif [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:19:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-457871717',display_name='tempest-ServerActionsTestOtherB-server-457871717',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-457871717',id=82,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGD2jbBFmRg2ZrnheVnZyLwDISk/dFTNtp10+sWyF/q+rC4Q86cvBQSRgacxSPIqXVpmiVTqI66cLDPhvjcnRFXyQqHRS/RWGvUZk+wm1wfft8CveiGko+Vh4vSox2iOrA==',key_name='tempest-keypair-1336245373',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:21:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='10fff81da7a54740a53a0771ce916329',ramdisk_id='',reservation_id='r-30ee9i0h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1686489955',owner_user_name='tempest-ServerActionsTestOtherB-1686489955-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:21:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='25468893d71641a385711fd2982bb00b',uuid=ef69bc17-6b51-491e-82e9-c4106abb8d74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.047 2 DEBUG nova.network.os_vif_util [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converting VIF {"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.048 2 DEBUG nova.network.os_vif_util [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:4b:71,bridge_name='br-int',has_traffic_filtering=True,id=ce91d40e-1bd9-4703-95b5-a23942c1592e,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce91d40e-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.049 2 DEBUG nova.objects.instance [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'pci_devices' on Instance uuid ef69bc17-6b51-491e-82e9-c4106abb8d74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.073 2 DEBUG nova.virt.libvirt.driver [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:22:05 compute-0 nova_compute[257802]:   <uuid>ef69bc17-6b51-491e-82e9-c4106abb8d74</uuid>
Oct 02 12:22:05 compute-0 nova_compute[257802]:   <name>instance-00000052</name>
Oct 02 12:22:05 compute-0 nova_compute[257802]:   <memory>196608</memory>
Oct 02 12:22:05 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:22:05 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:22:05 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:       <nova:name>tempest-ServerActionsTestOtherB-server-457871717</nova:name>
Oct 02 12:22:05 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:22:04</nova:creationTime>
Oct 02 12:22:05 compute-0 nova_compute[257802]:       <nova:flavor name="m1.micro">
Oct 02 12:22:05 compute-0 nova_compute[257802]:         <nova:memory>192</nova:memory>
Oct 02 12:22:05 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:22:05 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:22:05 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:22:05 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:22:05 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:22:05 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:22:05 compute-0 nova_compute[257802]:         <nova:user uuid="25468893d71641a385711fd2982bb00b">tempest-ServerActionsTestOtherB-1686489955-project-member</nova:user>
Oct 02 12:22:05 compute-0 nova_compute[257802]:         <nova:project uuid="10fff81da7a54740a53a0771ce916329">tempest-ServerActionsTestOtherB-1686489955</nova:project>
Oct 02 12:22:05 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:22:05 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:22:05 compute-0 nova_compute[257802]:         <nova:port uuid="ce91d40e-1bd9-4703-95b5-a23942c1592e">
Oct 02 12:22:05 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:22:05 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:22:05 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:22:05 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <system>
Oct 02 12:22:05 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:22:05 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:22:05 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:22:05 compute-0 nova_compute[257802]:       <entry name="serial">ef69bc17-6b51-491e-82e9-c4106abb8d74</entry>
Oct 02 12:22:05 compute-0 nova_compute[257802]:       <entry name="uuid">ef69bc17-6b51-491e-82e9-c4106abb8d74</entry>
Oct 02 12:22:05 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     </system>
Oct 02 12:22:05 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:22:05 compute-0 nova_compute[257802]:   <os>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:   </os>
Oct 02 12:22:05 compute-0 nova_compute[257802]:   <features>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:   </features>
Oct 02 12:22:05 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:22:05 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:22:05 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:22:05 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/ef69bc17-6b51-491e-82e9-c4106abb8d74_disk">
Oct 02 12:22:05 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:       </source>
Oct 02 12:22:05 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:22:05 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:22:05 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:22:05 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/ef69bc17-6b51-491e-82e9-c4106abb8d74_disk.config">
Oct 02 12:22:05 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:       </source>
Oct 02 12:22:05 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:22:05 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:22:05 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:22:05 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:18:4b:71"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:       <target dev="tapce91d40e-1b"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:22:05 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/ef69bc17-6b51-491e-82e9-c4106abb8d74/console.log" append="off"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <video>
Oct 02 12:22:05 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     </video>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <input type="keyboard" bus="usb"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:22:05 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:22:05 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:22:05 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:22:05 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:22:05 compute-0 nova_compute[257802]: </domain>
Oct 02 12:22:05 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.075 2 DEBUG nova.virt.libvirt.driver [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.075 2 DEBUG nova.virt.libvirt.driver [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.076 2 DEBUG nova.virt.libvirt.vif [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:19:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-457871717',display_name='tempest-ServerActionsTestOtherB-server-457871717',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-457871717',id=82,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGD2jbBFmRg2ZrnheVnZyLwDISk/dFTNtp10+sWyF/q+rC4Q86cvBQSRgacxSPIqXVpmiVTqI66cLDPhvjcnRFXyQqHRS/RWGvUZk+wm1wfft8CveiGko+Vh4vSox2iOrA==',key_name='tempest-keypair-1336245373',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:21:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='10fff81da7a54740a53a0771ce916329',ramdisk_id='',reservation_id='r-30ee9i0h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1686489955',owner_user_name='tempest-ServerActionsTestOtherB-1686489955-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:21:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='25468893d71641a385711fd2982bb00b',uuid=ef69bc17-6b51-491e-82e9-c4106abb8d74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.077 2 DEBUG nova.network.os_vif_util [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converting VIF {"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.077 2 DEBUG nova.network.os_vif_util [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:4b:71,bridge_name='br-int',has_traffic_filtering=True,id=ce91d40e-1bd9-4703-95b5-a23942c1592e,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce91d40e-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.078 2 DEBUG os_vif [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:4b:71,bridge_name='br-int',has_traffic_filtering=True,id=ce91d40e-1bd9-4703-95b5-a23942c1592e,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce91d40e-1b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.079 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.080 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.082 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce91d40e-1b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.083 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapce91d40e-1b, col_values=(('external_ids', {'iface-id': 'ce91d40e-1bd9-4703-95b5-a23942c1592e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:18:4b:71', 'vm-uuid': 'ef69bc17-6b51-491e-82e9-c4106abb8d74'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:05 compute-0 NetworkManager[44987]: <info>  [1759407725.0854] manager: (tapce91d40e-1b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/171)
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.092 2 INFO os_vif [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:4b:71,bridge_name='br-int',has_traffic_filtering=True,id=ce91d40e-1bd9-4703-95b5-a23942c1592e,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce91d40e-1b')
Oct 02 12:22:05 compute-0 NetworkManager[44987]: <info>  [1759407725.2010] manager: (tapce91d40e-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/172)
Oct 02 12:22:05 compute-0 kernel: tapce91d40e-1b: entered promiscuous mode
Oct 02 12:22:05 compute-0 ovn_controller[148183]: 2025-10-02T12:22:05Z|00376|binding|INFO|Claiming lport ce91d40e-1bd9-4703-95b5-a23942c1592e for this chassis.
Oct 02 12:22:05 compute-0 ovn_controller[148183]: 2025-10-02T12:22:05Z|00377|binding|INFO|ce91d40e-1bd9-4703-95b5-a23942c1592e: Claiming fa:16:3e:18:4b:71 10.100.0.4
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:05 compute-0 NetworkManager[44987]: <info>  [1759407725.2237] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/173)
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:05 compute-0 NetworkManager[44987]: <info>  [1759407725.2247] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/174)
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:05.233 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:4b:71 10.100.0.4'], port_security=['fa:16:3e:18:4b:71 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'ef69bc17-6b51-491e-82e9-c4106abb8d74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4035a600-4a5e-41ee-a619-d81e2c993b79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '10fff81da7a54740a53a0771ce916329', 'neutron:revision_number': '6', 'neutron:security_group_ids': '32af0a94-4565-470d-9918-1bc97e347f8f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5dc7931-b785-4336-99b8-936a17be87c3, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=ce91d40e-1bd9-4703-95b5-a23942c1592e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:05.234 158261 INFO neutron.agent.ovn.metadata.agent [-] Port ce91d40e-1bd9-4703-95b5-a23942c1592e in datapath 4035a600-4a5e-41ee-a619-d81e2c993b79 bound to our chassis
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:05.236 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4035a600-4a5e-41ee-a619-d81e2c993b79
Oct 02 12:22:05 compute-0 systemd-udevd[313328]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:22:05 compute-0 systemd-machined[211836]: New machine qemu-43-instance-00000052.
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:05.248 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3ae35300-74d6-4852-8b13-3a5711018194]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:05.249 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4035a600-41 in ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:05.252 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4035a600-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:05.253 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0cdecbe2-32f3-43b8-9ce2-0ecd2cc33e11]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:05.253 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[284b1a8f-f345-43d9-a38b-776d780bb85a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:05 compute-0 NetworkManager[44987]: <info>  [1759407725.2635] device (tapce91d40e-1b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:22:05 compute-0 NetworkManager[44987]: <info>  [1759407725.2648] device (tapce91d40e-1b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:22:05 compute-0 systemd[1]: Started Virtual Machine qemu-43-instance-00000052.
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:05.273 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[14915fc9-3a14-4633-b7df-14c1b864a689]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:05.301 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[24ef191d-cae1-474f-9d74-80a350e0bf7e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:05.328 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[36ace719-65ed-405b-b6f0-4b0c3e2da206]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:05.336 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[86f8bcd7-aa3a-44fc-9c92-b1bb43584aca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:05 compute-0 NetworkManager[44987]: <info>  [1759407725.3385] manager: (tap4035a600-40): new Veth device (/org/freedesktop/NetworkManager/Devices/175)
Oct 02 12:22:05 compute-0 systemd-udevd[313333]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:05.374 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[0922d218-6169-4ba0-92bd-ae050a9e4426]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:05.378 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[5f568f8d-e36e-4ad8-aaa4-d29c0db52881]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:05 compute-0 NetworkManager[44987]: <info>  [1759407725.4029] device (tap4035a600-40): carrier: link connected
Oct 02 12:22:05 compute-0 ovn_controller[148183]: 2025-10-02T12:22:05Z|00378|binding|INFO|Setting lport ce91d40e-1bd9-4703-95b5-a23942c1592e ovn-installed in OVS
Oct 02 12:22:05 compute-0 ovn_controller[148183]: 2025-10-02T12:22:05Z|00379|binding|INFO|Setting lport ce91d40e-1bd9-4703-95b5-a23942c1592e up in Southbound
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:05.409 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[3f6e2dbd-3423-496c-bf9d-97188366807f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 326 MiB data, 903 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 23 KiB/s wr, 161 op/s
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:05.429 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3f024ebe-6bdf-4983-a540-a3ed7ab5583f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4035a600-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:fb:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 113], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 579302, 'reachable_time': 38016, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313362, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:05.445 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6b076fa8-4277-482d-b9eb-bc4d5f287bab]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed0:fb3f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 579302, 'tstamp': 579302}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313363, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:05.462 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c6eaacb3-1567-4dce-b46e-c5d050c55858]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4035a600-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:fb:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 113], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 579302, 'reachable_time': 38016, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313364, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:05.490 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[90288642-cde1-4d80-bce1-d00b13aab3cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:05.542 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2a26431c-6005-4d04-ba09-78a4c802f330]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:05.543 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4035a600-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:05.543 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:05.544 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4035a600-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:05 compute-0 kernel: tap4035a600-40: entered promiscuous mode
Oct 02 12:22:05 compute-0 NetworkManager[44987]: <info>  [1759407725.5460] manager: (tap4035a600-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/176)
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:05.547 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4035a600-40, col_values=(('external_ids', {'iface-id': '1befa812-080f-4694-ba8b-9130fe81621d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:05 compute-0 ovn_controller[148183]: 2025-10-02T12:22:05Z|00380|binding|INFO|Releasing lport 1befa812-080f-4694-ba8b-9130fe81621d from this chassis (sb_readonly=0)
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:05.564 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4035a600-4a5e-41ee-a619-d81e2c993b79.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4035a600-4a5e-41ee-a619-d81e2c993b79.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:05.566 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c92068b3-07f6-4782-9a69-a20d03f240b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:05.566 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-4035a600-4a5e-41ee-a619-d81e2c993b79
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/4035a600-4a5e-41ee-a619-d81e2c993b79.pid.haproxy
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 4035a600-4a5e-41ee-a619-d81e2c993b79
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:22:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:05.567 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'env', 'PROCESS_TAG=haproxy-4035a600-4a5e-41ee-a619-d81e2c993b79', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4035a600-4a5e-41ee-a619-d81e2c993b79.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.741 2 DEBUG nova.compute.manager [req-95a3c14f-889f-4bd1-87b4-a82825720561 req-8a0feaa2-aede-41e0-964c-5a3e65698177 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Received event network-vif-plugged-ce91d40e-1bd9-4703-95b5-a23942c1592e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.742 2 DEBUG oslo_concurrency.lockutils [req-95a3c14f-889f-4bd1-87b4-a82825720561 req-8a0feaa2-aede-41e0-964c-5a3e65698177 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.742 2 DEBUG oslo_concurrency.lockutils [req-95a3c14f-889f-4bd1-87b4-a82825720561 req-8a0feaa2-aede-41e0-964c-5a3e65698177 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.743 2 DEBUG oslo_concurrency.lockutils [req-95a3c14f-889f-4bd1-87b4-a82825720561 req-8a0feaa2-aede-41e0-964c-5a3e65698177 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.744 2 DEBUG nova.compute.manager [req-95a3c14f-889f-4bd1-87b4-a82825720561 req-8a0feaa2-aede-41e0-964c-5a3e65698177 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] No waiting events found dispatching network-vif-plugged-ce91d40e-1bd9-4703-95b5-a23942c1592e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:22:05 compute-0 nova_compute[257802]: 2025-10-02 12:22:05.744 2 WARNING nova.compute.manager [req-95a3c14f-889f-4bd1-87b4-a82825720561 req-8a0feaa2-aede-41e0-964c-5a3e65698177 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Received unexpected event network-vif-plugged-ce91d40e-1bd9-4703-95b5-a23942c1592e for instance with vm_state stopped and task_state powering-on.
Oct 02 12:22:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1100569601' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/513723174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:06 compute-0 podman[313430]: 2025-10-02 12:22:05.924280532 +0000 UTC m=+0.026017299 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:22:06 compute-0 podman[313430]: 2025-10-02 12:22:06.031097088 +0000 UTC m=+0.132833845 container create b1f89445e5a69eda6d3f548c82bd6c7917feebf3085f966c295c39e398400bd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:22:06 compute-0 systemd[1]: Started libpod-conmon-b1f89445e5a69eda6d3f548c82bd6c7917feebf3085f966c295c39e398400bd2.scope.
Oct 02 12:22:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:22:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db9f5aaec66b7a0b75dce0a1fb07861e4bd799959460e2442501f8cb2c1577ee/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:22:06 compute-0 podman[313430]: 2025-10-02 12:22:06.246942965 +0000 UTC m=+0.348679682 container init b1f89445e5a69eda6d3f548c82bd6c7917feebf3085f966c295c39e398400bd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:22:06 compute-0 podman[313430]: 2025-10-02 12:22:06.25201947 +0000 UTC m=+0.353756187 container start b1f89445e5a69eda6d3f548c82bd6c7917feebf3085f966c295c39e398400bd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 12:22:06 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[313453]: [NOTICE]   (313457) : New worker (313459) forked
Oct 02 12:22:06 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[313453]: [NOTICE]   (313457) : Loading success.
Oct 02 12:22:06 compute-0 nova_compute[257802]: 2025-10-02 12:22:06.412 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407726.411853, ef69bc17-6b51-491e-82e9-c4106abb8d74 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:22:06 compute-0 nova_compute[257802]: 2025-10-02 12:22:06.413 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] VM Resumed (Lifecycle Event)
Oct 02 12:22:06 compute-0 nova_compute[257802]: 2025-10-02 12:22:06.416 2 DEBUG nova.compute.manager [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:22:06 compute-0 nova_compute[257802]: 2025-10-02 12:22:06.419 2 INFO nova.virt.libvirt.driver [-] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Instance rebooted successfully.
Oct 02 12:22:06 compute-0 nova_compute[257802]: 2025-10-02 12:22:06.420 2 DEBUG nova.compute.manager [None req-00172869-7b49-4b4b-8350-19b390b68d9b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:06.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:06 compute-0 nova_compute[257802]: 2025-10-02 12:22:06.448 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:06 compute-0 nova_compute[257802]: 2025-10-02 12:22:06.450 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:22:06 compute-0 nova_compute[257802]: 2025-10-02 12:22:06.479 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407726.4154053, ef69bc17-6b51-491e-82e9-c4106abb8d74 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:22:06 compute-0 nova_compute[257802]: 2025-10-02 12:22:06.480 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] VM Started (Lifecycle Event)
Oct 02 12:22:06 compute-0 nova_compute[257802]: 2025-10-02 12:22:06.507 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:06 compute-0 nova_compute[257802]: 2025-10-02 12:22:06.511 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:22:06 compute-0 nova_compute[257802]: 2025-10-02 12:22:06.817 2 DEBUG oslo_concurrency.lockutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "12549ae0-14ff-4982-be0e-4ada2a821895" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:06 compute-0 nova_compute[257802]: 2025-10-02 12:22:06.818 2 DEBUG oslo_concurrency.lockutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "12549ae0-14ff-4982-be0e-4ada2a821895" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:06 compute-0 nova_compute[257802]: 2025-10-02 12:22:06.836 2 DEBUG nova.compute.manager [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:22:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:06.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:06 compute-0 ceph-mon[73607]: pgmap v1784: 305 pgs: 305 active+clean; 326 MiB data, 903 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 23 KiB/s wr, 161 op/s
Oct 02 12:22:06 compute-0 nova_compute[257802]: 2025-10-02 12:22:06.939 2 DEBUG oslo_concurrency.lockutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:06 compute-0 nova_compute[257802]: 2025-10-02 12:22:06.940 2 DEBUG oslo_concurrency.lockutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:06 compute-0 nova_compute[257802]: 2025-10-02 12:22:06.949 2 DEBUG nova.virt.hardware [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:22:06 compute-0 nova_compute[257802]: 2025-10-02 12:22:06.949 2 INFO nova.compute.claims [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:22:07 compute-0 nova_compute[257802]: 2025-10-02 12:22:07.102 2 DEBUG oslo_concurrency.processutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:07 compute-0 nova_compute[257802]: 2025-10-02 12:22:07.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 326 MiB data, 903 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.6 KiB/s wr, 146 op/s
Oct 02 12:22:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:22:07 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/485364595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:07 compute-0 nova_compute[257802]: 2025-10-02 12:22:07.574 2 DEBUG oslo_concurrency.processutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:07 compute-0 nova_compute[257802]: 2025-10-02 12:22:07.580 2 DEBUG nova.compute.provider_tree [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:22:07 compute-0 nova_compute[257802]: 2025-10-02 12:22:07.594 2 DEBUG nova.scheduler.client.report [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:22:07 compute-0 nova_compute[257802]: 2025-10-02 12:22:07.620 2 DEBUG oslo_concurrency.lockutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:07 compute-0 nova_compute[257802]: 2025-10-02 12:22:07.621 2 DEBUG nova.compute.manager [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:22:07 compute-0 nova_compute[257802]: 2025-10-02 12:22:07.671 2 DEBUG nova.compute.manager [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:22:07 compute-0 nova_compute[257802]: 2025-10-02 12:22:07.671 2 DEBUG nova.network.neutron [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:22:07 compute-0 nova_compute[257802]: 2025-10-02 12:22:07.689 2 INFO nova.virt.libvirt.driver [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:22:07 compute-0 nova_compute[257802]: 2025-10-02 12:22:07.705 2 DEBUG nova.compute.manager [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:22:07 compute-0 nova_compute[257802]: 2025-10-02 12:22:07.799 2 DEBUG nova.compute.manager [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:22:07 compute-0 nova_compute[257802]: 2025-10-02 12:22:07.800 2 DEBUG nova.virt.libvirt.driver [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:22:07 compute-0 nova_compute[257802]: 2025-10-02 12:22:07.801 2 INFO nova.virt.libvirt.driver [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Creating image(s)
Oct 02 12:22:07 compute-0 nova_compute[257802]: 2025-10-02 12:22:07.825 2 DEBUG nova.storage.rbd_utils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image 12549ae0-14ff-4982-be0e-4ada2a821895_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:07 compute-0 nova_compute[257802]: 2025-10-02 12:22:07.852 2 DEBUG nova.storage.rbd_utils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image 12549ae0-14ff-4982-be0e-4ada2a821895_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:07 compute-0 nova_compute[257802]: 2025-10-02 12:22:07.879 2 DEBUG nova.storage.rbd_utils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image 12549ae0-14ff-4982-be0e-4ada2a821895_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:07 compute-0 nova_compute[257802]: 2025-10-02 12:22:07.883 2 DEBUG oslo_concurrency.processutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:07 compute-0 nova_compute[257802]: 2025-10-02 12:22:07.914 2 DEBUG nova.compute.manager [req-e7b74262-a56b-49d4-8a58-f3ee8fa97c5f req-fedb106e-c56c-4e22-b972-68d343fa701b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Received event network-vif-plugged-ce91d40e-1bd9-4703-95b5-a23942c1592e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:07 compute-0 nova_compute[257802]: 2025-10-02 12:22:07.915 2 DEBUG oslo_concurrency.lockutils [req-e7b74262-a56b-49d4-8a58-f3ee8fa97c5f req-fedb106e-c56c-4e22-b972-68d343fa701b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:07 compute-0 nova_compute[257802]: 2025-10-02 12:22:07.915 2 DEBUG oslo_concurrency.lockutils [req-e7b74262-a56b-49d4-8a58-f3ee8fa97c5f req-fedb106e-c56c-4e22-b972-68d343fa701b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:07 compute-0 nova_compute[257802]: 2025-10-02 12:22:07.915 2 DEBUG oslo_concurrency.lockutils [req-e7b74262-a56b-49d4-8a58-f3ee8fa97c5f req-fedb106e-c56c-4e22-b972-68d343fa701b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:07 compute-0 nova_compute[257802]: 2025-10-02 12:22:07.916 2 DEBUG nova.compute.manager [req-e7b74262-a56b-49d4-8a58-f3ee8fa97c5f req-fedb106e-c56c-4e22-b972-68d343fa701b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] No waiting events found dispatching network-vif-plugged-ce91d40e-1bd9-4703-95b5-a23942c1592e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:22:07 compute-0 nova_compute[257802]: 2025-10-02 12:22:07.916 2 WARNING nova.compute.manager [req-e7b74262-a56b-49d4-8a58-f3ee8fa97c5f req-fedb106e-c56c-4e22-b972-68d343fa701b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Received unexpected event network-vif-plugged-ce91d40e-1bd9-4703-95b5-a23942c1592e for instance with vm_state active and task_state None.
Oct 02 12:22:07 compute-0 nova_compute[257802]: 2025-10-02 12:22:07.946 2 DEBUG oslo_concurrency.processutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:07 compute-0 nova_compute[257802]: 2025-10-02 12:22:07.947 2 DEBUG oslo_concurrency.lockutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:07 compute-0 nova_compute[257802]: 2025-10-02 12:22:07.947 2 DEBUG oslo_concurrency.lockutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:07 compute-0 nova_compute[257802]: 2025-10-02 12:22:07.948 2 DEBUG oslo_concurrency.lockutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:07 compute-0 nova_compute[257802]: 2025-10-02 12:22:07.975 2 DEBUG nova.storage.rbd_utils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image 12549ae0-14ff-4982-be0e-4ada2a821895_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:07 compute-0 nova_compute[257802]: 2025-10-02 12:22:07.979 2 DEBUG oslo_concurrency.processutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 12549ae0-14ff-4982-be0e-4ada2a821895_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/485364595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.095 2 DEBUG nova.policy [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a9f7faffac7240869a0196df1ddda7e5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1c2c11ebecb14f3188f35ea473c4ca02', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.284 2 DEBUG oslo_concurrency.processutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 12549ae0-14ff-4982-be0e-4ada2a821895_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.319 2 DEBUG oslo_concurrency.lockutils [None req-a5399729-a9b6-42d2-8d9e-cdfbf10b1132 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "ef69bc17-6b51-491e-82e9-c4106abb8d74" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.320 2 DEBUG oslo_concurrency.lockutils [None req-a5399729-a9b6-42d2-8d9e-cdfbf10b1132 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "ef69bc17-6b51-491e-82e9-c4106abb8d74" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.320 2 DEBUG oslo_concurrency.lockutils [None req-a5399729-a9b6-42d2-8d9e-cdfbf10b1132 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.320 2 DEBUG oslo_concurrency.lockutils [None req-a5399729-a9b6-42d2-8d9e-cdfbf10b1132 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.321 2 DEBUG oslo_concurrency.lockutils [None req-a5399729-a9b6-42d2-8d9e-cdfbf10b1132 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.322 2 INFO nova.compute.manager [None req-a5399729-a9b6-42d2-8d9e-cdfbf10b1132 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Terminating instance
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.323 2 DEBUG nova.compute.manager [None req-a5399729-a9b6-42d2-8d9e-cdfbf10b1132 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.364 2 DEBUG nova.storage.rbd_utils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] resizing rbd image 12549ae0-14ff-4982-be0e-4ada2a821895_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:22:08 compute-0 kernel: tapce91d40e-1b (unregistering): left promiscuous mode
Oct 02 12:22:08 compute-0 NetworkManager[44987]: <info>  [1759407728.4014] device (tapce91d40e-1b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:08 compute-0 ovn_controller[148183]: 2025-10-02T12:22:08Z|00381|binding|INFO|Releasing lport ce91d40e-1bd9-4703-95b5-a23942c1592e from this chassis (sb_readonly=0)
Oct 02 12:22:08 compute-0 ovn_controller[148183]: 2025-10-02T12:22:08Z|00382|binding|INFO|Setting lport ce91d40e-1bd9-4703-95b5-a23942c1592e down in Southbound
Oct 02 12:22:08 compute-0 ovn_controller[148183]: 2025-10-02T12:22:08Z|00383|binding|INFO|Removing iface tapce91d40e-1b ovn-installed in OVS
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:08.417 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:4b:71 10.100.0.4'], port_security=['fa:16:3e:18:4b:71 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'ef69bc17-6b51-491e-82e9-c4106abb8d74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4035a600-4a5e-41ee-a619-d81e2c993b79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '10fff81da7a54740a53a0771ce916329', 'neutron:revision_number': '8', 'neutron:security_group_ids': '32af0a94-4565-470d-9918-1bc97e347f8f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.188', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5dc7931-b785-4336-99b8-936a17be87c3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=ce91d40e-1bd9-4703-95b5-a23942c1592e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:22:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:08.418 158261 INFO neutron.agent.ovn.metadata.agent [-] Port ce91d40e-1bd9-4703-95b5-a23942c1592e in datapath 4035a600-4a5e-41ee-a619-d81e2c993b79 unbound from our chassis
Oct 02 12:22:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:08.420 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4035a600-4a5e-41ee-a619-d81e2c993b79, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:22:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:08.421 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2ef94fd1-c4d6-4f13-bacc-082fde26d273]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:08.421 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 namespace which is not needed anymore
Oct 02 12:22:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:08.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:08 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000052.scope: Deactivated successfully.
Oct 02 12:22:08 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000052.scope: Consumed 2.858s CPU time.
Oct 02 12:22:08 compute-0 systemd-machined[211836]: Machine qemu-43-instance-00000052 terminated.
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.506 2 DEBUG nova.objects.instance [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lazy-loading 'migration_context' on Instance uuid 12549ae0-14ff-4982-be0e-4ada2a821895 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.541 2 DEBUG nova.virt.libvirt.driver [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.542 2 DEBUG nova.virt.libvirt.driver [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Ensure instance console log exists: /var/lib/nova/instances/12549ae0-14ff-4982-be0e-4ada2a821895/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.543 2 DEBUG oslo_concurrency.lockutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.543 2 DEBUG oslo_concurrency.lockutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.543 2 DEBUG oslo_concurrency.lockutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:08 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[313453]: [NOTICE]   (313457) : haproxy version is 2.8.14-c23fe91
Oct 02 12:22:08 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[313453]: [NOTICE]   (313457) : path to executable is /usr/sbin/haproxy
Oct 02 12:22:08 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[313453]: [WARNING]  (313457) : Exiting Master process...
Oct 02 12:22:08 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[313453]: [ALERT]    (313457) : Current worker (313459) exited with code 143 (Terminated)
Oct 02 12:22:08 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[313453]: [WARNING]  (313457) : All workers exited. Exiting... (0)
Oct 02 12:22:08 compute-0 systemd[1]: libpod-b1f89445e5a69eda6d3f548c82bd6c7917feebf3085f966c295c39e398400bd2.scope: Deactivated successfully.
Oct 02 12:22:08 compute-0 podman[313680]: 2025-10-02 12:22:08.557098054 +0000 UTC m=+0.047822733 container died b1f89445e5a69eda6d3f548c82bd6c7917feebf3085f966c295c39e398400bd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.591 2 INFO nova.virt.libvirt.driver [-] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Instance destroyed successfully.
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.592 2 DEBUG nova.objects.instance [None req-a5399729-a9b6-42d2-8d9e-cdfbf10b1132 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'resources' on Instance uuid ef69bc17-6b51-491e-82e9-c4106abb8d74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.606 2 DEBUG nova.virt.libvirt.vif [None req-a5399729-a9b6-42d2-8d9e-cdfbf10b1132 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:19:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-457871717',display_name='tempest-ServerActionsTestOtherB-server-457871717',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-457871717',id=82,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGD2jbBFmRg2ZrnheVnZyLwDISk/dFTNtp10+sWyF/q+rC4Q86cvBQSRgacxSPIqXVpmiVTqI66cLDPhvjcnRFXyQqHRS/RWGvUZk+wm1wfft8CveiGko+Vh4vSox2iOrA==',key_name='tempest-keypair-1336245373',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:21:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='10fff81da7a54740a53a0771ce916329',ramdisk_id='',reservation_id='r-30ee9i0h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1686489955',owner_user_name='tempest-ServerActionsTestOtherB-1686489955-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:22:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='25468893d71641a385711fd2982bb00b',uuid=ef69bc17-6b51-491e-82e9-c4106abb8d74,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.607 2 DEBUG nova.network.os_vif_util [None req-a5399729-a9b6-42d2-8d9e-cdfbf10b1132 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converting VIF {"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.608 2 DEBUG nova.network.os_vif_util [None req-a5399729-a9b6-42d2-8d9e-cdfbf10b1132 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:4b:71,bridge_name='br-int',has_traffic_filtering=True,id=ce91d40e-1bd9-4703-95b5-a23942c1592e,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce91d40e-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.608 2 DEBUG os_vif [None req-a5399729-a9b6-42d2-8d9e-cdfbf10b1132 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:4b:71,bridge_name='br-int',has_traffic_filtering=True,id=ce91d40e-1bd9-4703-95b5-a23942c1592e,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce91d40e-1b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.609 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce91d40e-1b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.615 2 INFO os_vif [None req-a5399729-a9b6-42d2-8d9e-cdfbf10b1132 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:4b:71,bridge_name='br-int',has_traffic_filtering=True,id=ce91d40e-1bd9-4703-95b5-a23942c1592e,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce91d40e-1b')
Oct 02 12:22:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b1f89445e5a69eda6d3f548c82bd6c7917feebf3085f966c295c39e398400bd2-userdata-shm.mount: Deactivated successfully.
Oct 02 12:22:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-db9f5aaec66b7a0b75dce0a1fb07861e4bd799959460e2442501f8cb2c1577ee-merged.mount: Deactivated successfully.
Oct 02 12:22:08 compute-0 podman[313680]: 2025-10-02 12:22:08.64227692 +0000 UTC m=+0.133001579 container cleanup b1f89445e5a69eda6d3f548c82bd6c7917feebf3085f966c295c39e398400bd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 12:22:08 compute-0 systemd[1]: libpod-conmon-b1f89445e5a69eda6d3f548c82bd6c7917feebf3085f966c295c39e398400bd2.scope: Deactivated successfully.
Oct 02 12:22:08 compute-0 podman[313736]: 2025-10-02 12:22:08.715043203 +0000 UTC m=+0.050441647 container remove b1f89445e5a69eda6d3f548c82bd6c7917feebf3085f966c295c39e398400bd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:22:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:08.721 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[72f2a225-32cd-4dab-9389-42fc8322b17f]: (4, ('Thu Oct  2 12:22:08 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 (b1f89445e5a69eda6d3f548c82bd6c7917feebf3085f966c295c39e398400bd2)\nb1f89445e5a69eda6d3f548c82bd6c7917feebf3085f966c295c39e398400bd2\nThu Oct  2 12:22:08 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 (b1f89445e5a69eda6d3f548c82bd6c7917feebf3085f966c295c39e398400bd2)\nb1f89445e5a69eda6d3f548c82bd6c7917feebf3085f966c295c39e398400bd2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:08.722 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f6499b23-dc33-4fe2-9d99-9f65599ed0c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:08.723 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4035a600-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:08 compute-0 kernel: tap4035a600-40: left promiscuous mode
Oct 02 12:22:08 compute-0 nova_compute[257802]: 2025-10-02 12:22:08.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:08.744 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6dbceaa1-ba30-4536-a283-9d6cc2fd58f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:08.768 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9ffcced4-7ff6-46d3-98e9-3e8a2d1447e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:08.769 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2ef72588-29d6-4e98-88a9-69d362a0966f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:08.783 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[351d4060-c0ea-4c1a-bf7d-7042c95fb197]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 579294, 'reachable_time': 43537, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313753, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:08 compute-0 systemd[1]: run-netns-ovnmeta\x2d4035a600\x2d4a5e\x2d41ee\x2da619\x2dd81e2c993b79.mount: Deactivated successfully.
Oct 02 12:22:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:08.785 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:22:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:08.785 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[1184e9df-28b4-4e3d-895a-19e8563aaa73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:08.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:09 compute-0 ceph-mon[73607]: pgmap v1785: 305 pgs: 305 active+clean; 326 MiB data, 903 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.6 KiB/s wr, 146 op/s
Oct 02 12:22:09 compute-0 nova_compute[257802]: 2025-10-02 12:22:09.107 2 DEBUG nova.network.neutron [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Successfully created port: d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:22:09 compute-0 nova_compute[257802]: 2025-10-02 12:22:09.186 2 INFO nova.virt.libvirt.driver [None req-a5399729-a9b6-42d2-8d9e-cdfbf10b1132 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Deleting instance files /var/lib/nova/instances/ef69bc17-6b51-491e-82e9-c4106abb8d74_del
Oct 02 12:22:09 compute-0 nova_compute[257802]: 2025-10-02 12:22:09.187 2 INFO nova.virt.libvirt.driver [None req-a5399729-a9b6-42d2-8d9e-cdfbf10b1132 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Deletion of /var/lib/nova/instances/ef69bc17-6b51-491e-82e9-c4106abb8d74_del complete
Oct 02 12:22:09 compute-0 nova_compute[257802]: 2025-10-02 12:22:09.264 2 INFO nova.compute.manager [None req-a5399729-a9b6-42d2-8d9e-cdfbf10b1132 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Took 0.94 seconds to destroy the instance on the hypervisor.
Oct 02 12:22:09 compute-0 nova_compute[257802]: 2025-10-02 12:22:09.265 2 DEBUG oslo.service.loopingcall [None req-a5399729-a9b6-42d2-8d9e-cdfbf10b1132 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:22:09 compute-0 nova_compute[257802]: 2025-10-02 12:22:09.265 2 DEBUG nova.compute.manager [-] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:22:09 compute-0 nova_compute[257802]: 2025-10-02 12:22:09.265 2 DEBUG nova.network.neutron [-] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:22:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 305 active+clean; 351 MiB data, 920 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.7 MiB/s wr, 178 op/s
Oct 02 12:22:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Oct 02 12:22:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Oct 02 12:22:09 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Oct 02 12:22:09 compute-0 nova_compute[257802]: 2025-10-02 12:22:09.950 2 DEBUG nova.compute.manager [req-966c8de1-7ba7-4c02-8088-88776bf80fda req-cc0d4b8e-4a27-4fc9-b8ee-20c2c7a264d3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Received event network-vif-unplugged-ce91d40e-1bd9-4703-95b5-a23942c1592e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:09 compute-0 nova_compute[257802]: 2025-10-02 12:22:09.951 2 DEBUG oslo_concurrency.lockutils [req-966c8de1-7ba7-4c02-8088-88776bf80fda req-cc0d4b8e-4a27-4fc9-b8ee-20c2c7a264d3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:09 compute-0 nova_compute[257802]: 2025-10-02 12:22:09.952 2 DEBUG oslo_concurrency.lockutils [req-966c8de1-7ba7-4c02-8088-88776bf80fda req-cc0d4b8e-4a27-4fc9-b8ee-20c2c7a264d3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:09 compute-0 nova_compute[257802]: 2025-10-02 12:22:09.953 2 DEBUG oslo_concurrency.lockutils [req-966c8de1-7ba7-4c02-8088-88776bf80fda req-cc0d4b8e-4a27-4fc9-b8ee-20c2c7a264d3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:09 compute-0 nova_compute[257802]: 2025-10-02 12:22:09.953 2 DEBUG nova.compute.manager [req-966c8de1-7ba7-4c02-8088-88776bf80fda req-cc0d4b8e-4a27-4fc9-b8ee-20c2c7a264d3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] No waiting events found dispatching network-vif-unplugged-ce91d40e-1bd9-4703-95b5-a23942c1592e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:22:09 compute-0 nova_compute[257802]: 2025-10-02 12:22:09.954 2 DEBUG nova.compute.manager [req-966c8de1-7ba7-4c02-8088-88776bf80fda req-cc0d4b8e-4a27-4fc9-b8ee-20c2c7a264d3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Received event network-vif-unplugged-ce91d40e-1bd9-4703-95b5-a23942c1592e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:22:09 compute-0 nova_compute[257802]: 2025-10-02 12:22:09.954 2 DEBUG nova.compute.manager [req-966c8de1-7ba7-4c02-8088-88776bf80fda req-cc0d4b8e-4a27-4fc9-b8ee-20c2c7a264d3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Received event network-vif-plugged-ce91d40e-1bd9-4703-95b5-a23942c1592e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:09 compute-0 nova_compute[257802]: 2025-10-02 12:22:09.954 2 DEBUG oslo_concurrency.lockutils [req-966c8de1-7ba7-4c02-8088-88776bf80fda req-cc0d4b8e-4a27-4fc9-b8ee-20c2c7a264d3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:09 compute-0 nova_compute[257802]: 2025-10-02 12:22:09.955 2 DEBUG oslo_concurrency.lockutils [req-966c8de1-7ba7-4c02-8088-88776bf80fda req-cc0d4b8e-4a27-4fc9-b8ee-20c2c7a264d3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:09 compute-0 nova_compute[257802]: 2025-10-02 12:22:09.955 2 DEBUG oslo_concurrency.lockutils [req-966c8de1-7ba7-4c02-8088-88776bf80fda req-cc0d4b8e-4a27-4fc9-b8ee-20c2c7a264d3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:09 compute-0 nova_compute[257802]: 2025-10-02 12:22:09.956 2 DEBUG nova.compute.manager [req-966c8de1-7ba7-4c02-8088-88776bf80fda req-cc0d4b8e-4a27-4fc9-b8ee-20c2c7a264d3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] No waiting events found dispatching network-vif-plugged-ce91d40e-1bd9-4703-95b5-a23942c1592e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:22:09 compute-0 nova_compute[257802]: 2025-10-02 12:22:09.956 2 WARNING nova.compute.manager [req-966c8de1-7ba7-4c02-8088-88776bf80fda req-cc0d4b8e-4a27-4fc9-b8ee-20c2c7a264d3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Received unexpected event network-vif-plugged-ce91d40e-1bd9-4703-95b5-a23942c1592e for instance with vm_state active and task_state deleting.
Oct 02 12:22:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:10.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:10 compute-0 nova_compute[257802]: 2025-10-02 12:22:10.526 2 DEBUG nova.network.neutron [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Successfully updated port: d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:22:10 compute-0 nova_compute[257802]: 2025-10-02 12:22:10.549 2 DEBUG oslo_concurrency.lockutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "refresh_cache-12549ae0-14ff-4982-be0e-4ada2a821895" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:22:10 compute-0 nova_compute[257802]: 2025-10-02 12:22:10.549 2 DEBUG oslo_concurrency.lockutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquired lock "refresh_cache-12549ae0-14ff-4982-be0e-4ada2a821895" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:22:10 compute-0 nova_compute[257802]: 2025-10-02 12:22:10.549 2 DEBUG nova.network.neutron [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:22:10 compute-0 ceph-mon[73607]: pgmap v1786: 305 pgs: 305 active+clean; 351 MiB data, 920 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.7 MiB/s wr, 178 op/s
Oct 02 12:22:10 compute-0 ceph-mon[73607]: osdmap e281: 3 total, 3 up, 3 in
Oct 02 12:22:10 compute-0 nova_compute[257802]: 2025-10-02 12:22:10.618 2 DEBUG nova.network.neutron [-] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:22:10 compute-0 nova_compute[257802]: 2025-10-02 12:22:10.639 2 INFO nova.compute.manager [-] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Took 1.37 seconds to deallocate network for instance.
Oct 02 12:22:10 compute-0 nova_compute[257802]: 2025-10-02 12:22:10.704 2 DEBUG oslo_concurrency.lockutils [None req-a5399729-a9b6-42d2-8d9e-cdfbf10b1132 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:10 compute-0 nova_compute[257802]: 2025-10-02 12:22:10.704 2 DEBUG oslo_concurrency.lockutils [None req-a5399729-a9b6-42d2-8d9e-cdfbf10b1132 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:10 compute-0 nova_compute[257802]: 2025-10-02 12:22:10.770 2 DEBUG nova.compute.manager [req-27259be4-67f4-48b7-b58d-f48bfe7d1eac req-18a20125-0431-4f67-86a6-24adde68776c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Received event network-vif-deleted-ce91d40e-1bd9-4703-95b5-a23942c1592e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:10 compute-0 nova_compute[257802]: 2025-10-02 12:22:10.772 2 DEBUG oslo_concurrency.processutils [None req-a5399729-a9b6-42d2-8d9e-cdfbf10b1132 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:10.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:22:11 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/322797692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:11 compute-0 nova_compute[257802]: 2025-10-02 12:22:11.179 2 DEBUG oslo_concurrency.processutils [None req-a5399729-a9b6-42d2-8d9e-cdfbf10b1132 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:11 compute-0 nova_compute[257802]: 2025-10-02 12:22:11.186 2 DEBUG nova.compute.provider_tree [None req-a5399729-a9b6-42d2-8d9e-cdfbf10b1132 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:22:11 compute-0 nova_compute[257802]: 2025-10-02 12:22:11.200 2 DEBUG nova.scheduler.client.report [None req-a5399729-a9b6-42d2-8d9e-cdfbf10b1132 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:22:11 compute-0 nova_compute[257802]: 2025-10-02 12:22:11.226 2 DEBUG oslo_concurrency.lockutils [None req-a5399729-a9b6-42d2-8d9e-cdfbf10b1132 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.522s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:11 compute-0 nova_compute[257802]: 2025-10-02 12:22:11.262 2 INFO nova.scheduler.client.report [None req-a5399729-a9b6-42d2-8d9e-cdfbf10b1132 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Deleted allocations for instance ef69bc17-6b51-491e-82e9-c4106abb8d74
Oct 02 12:22:11 compute-0 nova_compute[257802]: 2025-10-02 12:22:11.356 2 DEBUG nova.network.neutron [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:22:11 compute-0 nova_compute[257802]: 2025-10-02 12:22:11.373 2 DEBUG oslo_concurrency.lockutils [None req-a5399729-a9b6-42d2-8d9e-cdfbf10b1132 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "ef69bc17-6b51-491e-82e9-c4106abb8d74" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:11 compute-0 nova_compute[257802]: 2025-10-02 12:22:11.408 2 DEBUG nova.compute.manager [req-9b76152e-a8b9-436c-a7a7-a4135840746c req-4c1266c4-0220-444d-97e4-97ddb9b29e1a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Received event network-changed-d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:11 compute-0 nova_compute[257802]: 2025-10-02 12:22:11.409 2 DEBUG nova.compute.manager [req-9b76152e-a8b9-436c-a7a7-a4135840746c req-4c1266c4-0220-444d-97e4-97ddb9b29e1a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Refreshing instance network info cache due to event network-changed-d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:22:11 compute-0 nova_compute[257802]: 2025-10-02 12:22:11.409 2 DEBUG oslo_concurrency.lockutils [req-9b76152e-a8b9-436c-a7a7-a4135840746c req-4c1266c4-0220-444d-97e4-97ddb9b29e1a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-12549ae0-14ff-4982-be0e-4ada2a821895" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:22:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1788: 305 pgs: 305 active+clean; 361 MiB data, 920 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.7 MiB/s wr, 221 op/s
Oct 02 12:22:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/72021545' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/322797692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1675394595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:12 compute-0 nova_compute[257802]: 2025-10-02 12:22:12.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:12.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:12 compute-0 ceph-mon[73607]: pgmap v1788: 305 pgs: 305 active+clean; 361 MiB data, 920 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.7 MiB/s wr, 221 op/s
Oct 02 12:22:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:22:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:22:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:22:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:22:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:22:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:22:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:12.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 305 active+clean; 356 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 6.8 MiB/s wr, 261 op/s
Oct 02 12:22:13 compute-0 nova_compute[257802]: 2025-10-02 12:22:13.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:13 compute-0 nova_compute[257802]: 2025-10-02 12:22:13.821 2 DEBUG nova.network.neutron [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Updating instance_info_cache with network_info: [{"id": "d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f", "address": "fa:16:3e:96:7b:db", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92a78aa-54", "ovs_interfaceid": "d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:22:13 compute-0 nova_compute[257802]: 2025-10-02 12:22:13.854 2 DEBUG oslo_concurrency.lockutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Releasing lock "refresh_cache-12549ae0-14ff-4982-be0e-4ada2a821895" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:22:13 compute-0 nova_compute[257802]: 2025-10-02 12:22:13.854 2 DEBUG nova.compute.manager [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Instance network_info: |[{"id": "d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f", "address": "fa:16:3e:96:7b:db", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92a78aa-54", "ovs_interfaceid": "d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:22:13 compute-0 nova_compute[257802]: 2025-10-02 12:22:13.855 2 DEBUG oslo_concurrency.lockutils [req-9b76152e-a8b9-436c-a7a7-a4135840746c req-4c1266c4-0220-444d-97e4-97ddb9b29e1a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-12549ae0-14ff-4982-be0e-4ada2a821895" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:22:13 compute-0 nova_compute[257802]: 2025-10-02 12:22:13.855 2 DEBUG nova.network.neutron [req-9b76152e-a8b9-436c-a7a7-a4135840746c req-4c1266c4-0220-444d-97e4-97ddb9b29e1a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Refreshing network info cache for port d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:22:13 compute-0 nova_compute[257802]: 2025-10-02 12:22:13.858 2 DEBUG nova.virt.libvirt.driver [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Start _get_guest_xml network_info=[{"id": "d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f", "address": "fa:16:3e:96:7b:db", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92a78aa-54", "ovs_interfaceid": "d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:22:13 compute-0 nova_compute[257802]: 2025-10-02 12:22:13.862 2 WARNING nova.virt.libvirt.driver [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:22:13 compute-0 nova_compute[257802]: 2025-10-02 12:22:13.865 2 DEBUG nova.virt.libvirt.host [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:22:13 compute-0 nova_compute[257802]: 2025-10-02 12:22:13.865 2 DEBUG nova.virt.libvirt.host [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:22:13 compute-0 nova_compute[257802]: 2025-10-02 12:22:13.872 2 DEBUG nova.virt.libvirt.host [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:22:13 compute-0 nova_compute[257802]: 2025-10-02 12:22:13.873 2 DEBUG nova.virt.libvirt.host [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:22:13 compute-0 nova_compute[257802]: 2025-10-02 12:22:13.874 2 DEBUG nova.virt.libvirt.driver [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:22:13 compute-0 nova_compute[257802]: 2025-10-02 12:22:13.874 2 DEBUG nova.virt.hardware [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:22:13 compute-0 nova_compute[257802]: 2025-10-02 12:22:13.874 2 DEBUG nova.virt.hardware [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:22:13 compute-0 nova_compute[257802]: 2025-10-02 12:22:13.874 2 DEBUG nova.virt.hardware [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:22:13 compute-0 nova_compute[257802]: 2025-10-02 12:22:13.875 2 DEBUG nova.virt.hardware [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:22:13 compute-0 nova_compute[257802]: 2025-10-02 12:22:13.875 2 DEBUG nova.virt.hardware [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:22:13 compute-0 nova_compute[257802]: 2025-10-02 12:22:13.875 2 DEBUG nova.virt.hardware [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:22:13 compute-0 nova_compute[257802]: 2025-10-02 12:22:13.875 2 DEBUG nova.virt.hardware [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:22:13 compute-0 nova_compute[257802]: 2025-10-02 12:22:13.875 2 DEBUG nova.virt.hardware [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:22:13 compute-0 nova_compute[257802]: 2025-10-02 12:22:13.876 2 DEBUG nova.virt.hardware [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:22:13 compute-0 nova_compute[257802]: 2025-10-02 12:22:13.876 2 DEBUG nova.virt.hardware [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:22:13 compute-0 nova_compute[257802]: 2025-10-02 12:22:13.876 2 DEBUG nova.virt.hardware [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:22:13 compute-0 nova_compute[257802]: 2025-10-02 12:22:13.879 2 DEBUG oslo_concurrency.processutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:22:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2502042578' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.285 2 DEBUG oslo_concurrency.processutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.311 2 DEBUG nova.storage.rbd_utils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image 12549ae0-14ff-4982-be0e-4ada2a821895_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.314 2 DEBUG oslo_concurrency.processutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:14.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:14 compute-0 ceph-mon[73607]: pgmap v1789: 305 pgs: 305 active+clean; 356 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 6.8 MiB/s wr, 261 op/s
Oct 02 12:22:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2502042578' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:22:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/641635675' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.772 2 DEBUG oslo_concurrency.processutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.774 2 DEBUG nova.virt.libvirt.vif [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:22:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1694975376',display_name='tempest-DeleteServersTestJSON-server-1694975376',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1694975376',id=92,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c2c11ebecb14f3188f35ea473c4ca02',ramdisk_id='',reservation_id='r-218tqu1t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1602490521',owner_user_name='tempest-DeleteServersTestJSON-1602490521-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:07Z,user_data=None,user_id='a9f7faffac7240869a0196df1ddda7e5',uuid=12549ae0-14ff-4982-be0e-4ada2a821895,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f", "address": "fa:16:3e:96:7b:db", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92a78aa-54", "ovs_interfaceid": "d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.774 2 DEBUG nova.network.os_vif_util [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converting VIF {"id": "d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f", "address": "fa:16:3e:96:7b:db", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92a78aa-54", "ovs_interfaceid": "d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.775 2 DEBUG nova.network.os_vif_util [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:7b:db,bridge_name='br-int',has_traffic_filtering=True,id=d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd92a78aa-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.776 2 DEBUG nova.objects.instance [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lazy-loading 'pci_devices' on Instance uuid 12549ae0-14ff-4982-be0e-4ada2a821895 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.801 2 DEBUG nova.virt.libvirt.driver [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:22:14 compute-0 nova_compute[257802]:   <uuid>12549ae0-14ff-4982-be0e-4ada2a821895</uuid>
Oct 02 12:22:14 compute-0 nova_compute[257802]:   <name>instance-0000005c</name>
Oct 02 12:22:14 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:22:14 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:22:14 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:22:14 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:       <nova:name>tempest-DeleteServersTestJSON-server-1694975376</nova:name>
Oct 02 12:22:14 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:22:13</nova:creationTime>
Oct 02 12:22:14 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:22:14 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:22:14 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:22:14 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:22:14 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:22:14 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:22:14 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:22:14 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:22:14 compute-0 nova_compute[257802]:         <nova:user uuid="a9f7faffac7240869a0196df1ddda7e5">tempest-DeleteServersTestJSON-1602490521-project-member</nova:user>
Oct 02 12:22:14 compute-0 nova_compute[257802]:         <nova:project uuid="1c2c11ebecb14f3188f35ea473c4ca02">tempest-DeleteServersTestJSON-1602490521</nova:project>
Oct 02 12:22:14 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:22:14 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:22:14 compute-0 nova_compute[257802]:         <nova:port uuid="d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f">
Oct 02 12:22:14 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:22:14 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:22:14 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:22:14 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <system>
Oct 02 12:22:14 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:22:14 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:22:14 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:22:14 compute-0 nova_compute[257802]:       <entry name="serial">12549ae0-14ff-4982-be0e-4ada2a821895</entry>
Oct 02 12:22:14 compute-0 nova_compute[257802]:       <entry name="uuid">12549ae0-14ff-4982-be0e-4ada2a821895</entry>
Oct 02 12:22:14 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     </system>
Oct 02 12:22:14 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:22:14 compute-0 nova_compute[257802]:   <os>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:   </os>
Oct 02 12:22:14 compute-0 nova_compute[257802]:   <features>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:   </features>
Oct 02 12:22:14 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:22:14 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:22:14 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:22:14 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/12549ae0-14ff-4982-be0e-4ada2a821895_disk">
Oct 02 12:22:14 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:       </source>
Oct 02 12:22:14 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:22:14 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:22:14 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:22:14 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/12549ae0-14ff-4982-be0e-4ada2a821895_disk.config">
Oct 02 12:22:14 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:       </source>
Oct 02 12:22:14 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:22:14 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:22:14 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:22:14 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:96:7b:db"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:       <target dev="tapd92a78aa-54"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:22:14 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/12549ae0-14ff-4982-be0e-4ada2a821895/console.log" append="off"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <video>
Oct 02 12:22:14 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     </video>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:22:14 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:22:14 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:22:14 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:22:14 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:22:14 compute-0 nova_compute[257802]: </domain>
Oct 02 12:22:14 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.803 2 DEBUG nova.compute.manager [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Preparing to wait for external event network-vif-plugged-d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.803 2 DEBUG oslo_concurrency.lockutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "12549ae0-14ff-4982-be0e-4ada2a821895-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.803 2 DEBUG oslo_concurrency.lockutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "12549ae0-14ff-4982-be0e-4ada2a821895-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.803 2 DEBUG oslo_concurrency.lockutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "12549ae0-14ff-4982-be0e-4ada2a821895-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.804 2 DEBUG nova.virt.libvirt.vif [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:22:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1694975376',display_name='tempest-DeleteServersTestJSON-server-1694975376',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1694975376',id=92,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c2c11ebecb14f3188f35ea473c4ca02',ramdisk_id='',reservation_id='r-218tqu1t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1602490521',owner_user_name='tempest-DeleteServersTest
JSON-1602490521-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:07Z,user_data=None,user_id='a9f7faffac7240869a0196df1ddda7e5',uuid=12549ae0-14ff-4982-be0e-4ada2a821895,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f", "address": "fa:16:3e:96:7b:db", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92a78aa-54", "ovs_interfaceid": "d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.804 2 DEBUG nova.network.os_vif_util [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converting VIF {"id": "d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f", "address": "fa:16:3e:96:7b:db", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92a78aa-54", "ovs_interfaceid": "d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.805 2 DEBUG nova.network.os_vif_util [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:7b:db,bridge_name='br-int',has_traffic_filtering=True,id=d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd92a78aa-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.805 2 DEBUG os_vif [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:7b:db,bridge_name='br-int',has_traffic_filtering=True,id=d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd92a78aa-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.806 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.806 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.810 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd92a78aa-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.810 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd92a78aa-54, col_values=(('external_ids', {'iface-id': 'd92a78aa-54f0-4abe-9e9c-8c7790e8cc7f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:96:7b:db', 'vm-uuid': '12549ae0-14ff-4982-be0e-4ada2a821895'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:14 compute-0 NetworkManager[44987]: <info>  [1759407734.8125] manager: (tapd92a78aa-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/177)
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.817 2 INFO os_vif [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:7b:db,bridge_name='br-int',has_traffic_filtering=True,id=d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd92a78aa-54')
Oct 02 12:22:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:14.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.872 2 DEBUG nova.virt.libvirt.driver [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.873 2 DEBUG nova.virt.libvirt.driver [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.873 2 DEBUG nova.virt.libvirt.driver [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] No VIF found with MAC fa:16:3e:96:7b:db, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.874 2 INFO nova.virt.libvirt.driver [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Using config drive
Oct 02 12:22:14 compute-0 nova_compute[257802]: 2025-10-02 12:22:14.896 2 DEBUG nova.storage.rbd_utils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image 12549ae0-14ff-4982-be0e-4ada2a821895_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 305 active+clean; 372 MiB data, 924 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.8 MiB/s wr, 264 op/s
Oct 02 12:22:15 compute-0 nova_compute[257802]: 2025-10-02 12:22:15.478 2 INFO nova.virt.libvirt.driver [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Creating config drive at /var/lib/nova/instances/12549ae0-14ff-4982-be0e-4ada2a821895/disk.config
Oct 02 12:22:15 compute-0 nova_compute[257802]: 2025-10-02 12:22:15.483 2 DEBUG oslo_concurrency.processutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/12549ae0-14ff-4982-be0e-4ada2a821895/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6rqgyzb0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:15 compute-0 nova_compute[257802]: 2025-10-02 12:22:15.618 2 DEBUG oslo_concurrency.processutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/12549ae0-14ff-4982-be0e-4ada2a821895/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6rqgyzb0" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:15 compute-0 nova_compute[257802]: 2025-10-02 12:22:15.657 2 DEBUG nova.storage.rbd_utils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image 12549ae0-14ff-4982-be0e-4ada2a821895_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:15 compute-0 nova_compute[257802]: 2025-10-02 12:22:15.662 2 DEBUG oslo_concurrency.processutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/12549ae0-14ff-4982-be0e-4ada2a821895/disk.config 12549ae0-14ff-4982-be0e-4ada2a821895_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/641635675' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:15 compute-0 nova_compute[257802]: 2025-10-02 12:22:15.693 2 DEBUG nova.network.neutron [req-9b76152e-a8b9-436c-a7a7-a4135840746c req-4c1266c4-0220-444d-97e4-97ddb9b29e1a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Updated VIF entry in instance network info cache for port d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:22:15 compute-0 nova_compute[257802]: 2025-10-02 12:22:15.694 2 DEBUG nova.network.neutron [req-9b76152e-a8b9-436c-a7a7-a4135840746c req-4c1266c4-0220-444d-97e4-97ddb9b29e1a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Updating instance_info_cache with network_info: [{"id": "d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f", "address": "fa:16:3e:96:7b:db", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92a78aa-54", "ovs_interfaceid": "d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:22:15 compute-0 nova_compute[257802]: 2025-10-02 12:22:15.710 2 DEBUG oslo_concurrency.lockutils [req-9b76152e-a8b9-436c-a7a7-a4135840746c req-4c1266c4-0220-444d-97e4-97ddb9b29e1a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-12549ae0-14ff-4982-be0e-4ada2a821895" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:22:15 compute-0 nova_compute[257802]: 2025-10-02 12:22:15.848 2 DEBUG oslo_concurrency.processutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/12549ae0-14ff-4982-be0e-4ada2a821895/disk.config 12549ae0-14ff-4982-be0e-4ada2a821895_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:15 compute-0 nova_compute[257802]: 2025-10-02 12:22:15.849 2 INFO nova.virt.libvirt.driver [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Deleting local config drive /var/lib/nova/instances/12549ae0-14ff-4982-be0e-4ada2a821895/disk.config because it was imported into RBD.
Oct 02 12:22:15 compute-0 kernel: tapd92a78aa-54: entered promiscuous mode
Oct 02 12:22:15 compute-0 ovn_controller[148183]: 2025-10-02T12:22:15Z|00384|binding|INFO|Claiming lport d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f for this chassis.
Oct 02 12:22:15 compute-0 ovn_controller[148183]: 2025-10-02T12:22:15Z|00385|binding|INFO|d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f: Claiming fa:16:3e:96:7b:db 10.100.0.6
Oct 02 12:22:15 compute-0 nova_compute[257802]: 2025-10-02 12:22:15.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:15 compute-0 NetworkManager[44987]: <info>  [1759407735.9062] manager: (tapd92a78aa-54): new Tun device (/org/freedesktop/NetworkManager/Devices/178)
Oct 02 12:22:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:15.907 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:7b:db 10.100.0.6'], port_security=['fa:16:3e:96:7b:db 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '12549ae0-14ff-4982-be0e-4ada2a821895', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7754c79a-cca5-48c7-9169-831eaad23ccc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c2c11ebecb14f3188f35ea473c4ca02', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3c0d053f-a096-4f8c-8162-5ef19e29b5d7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=45b5774e-2213-45dd-ab74-f2a3868d167c, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:22:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:15.908 158261 INFO neutron.agent.ovn.metadata.agent [-] Port d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f in datapath 7754c79a-cca5-48c7-9169-831eaad23ccc bound to our chassis
Oct 02 12:22:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:15.909 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7754c79a-cca5-48c7-9169-831eaad23ccc
Oct 02 12:22:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:15.919 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d047d0cd-84a3-4437-9307-ae5967a381ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:15.920 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7754c79a-c1 in ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:22:15 compute-0 ovn_controller[148183]: 2025-10-02T12:22:15Z|00386|binding|INFO|Setting lport d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f ovn-installed in OVS
Oct 02 12:22:15 compute-0 ovn_controller[148183]: 2025-10-02T12:22:15Z|00387|binding|INFO|Setting lport d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f up in Southbound
Oct 02 12:22:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:15.923 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7754c79a-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:22:15 compute-0 nova_compute[257802]: 2025-10-02 12:22:15.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:15 compute-0 nova_compute[257802]: 2025-10-02 12:22:15.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:15.923 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ea9e82c9-a288-4121-9d91-00da248da649]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:15.924 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[22ee7f0d-4cf4-4271-ae2a-fe2cae3dd631]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:15.935 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[bcc220b0-01e9-4d6a-a08e-5296ec976ea0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:15 compute-0 systemd-machined[211836]: New machine qemu-44-instance-0000005c.
Oct 02 12:22:15 compute-0 systemd[1]: Started Virtual Machine qemu-44-instance-0000005c.
Oct 02 12:22:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:15.963 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4b3b0dab-728e-443f-b6f7-8c45cdee7106]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:15 compute-0 systemd-udevd[313920]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:22:15 compute-0 NetworkManager[44987]: <info>  [1759407735.9855] device (tapd92a78aa-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:22:15 compute-0 NetworkManager[44987]: <info>  [1759407735.9866] device (tapd92a78aa-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:22:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:15.992 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[0e72f44b-5eaa-42df-a6a9-20ea488b020d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:15 compute-0 NetworkManager[44987]: <info>  [1759407735.9982] manager: (tap7754c79a-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/179)
Oct 02 12:22:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:15.997 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cc339e7c-8953-4eee-afc7-8756b5cabed1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:16.025 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[5f96c872-2ecf-4616-87b2-dd02d48c99fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:16.028 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[72d2a351-0072-4fce-808b-de668a0230bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:16 compute-0 NetworkManager[44987]: <info>  [1759407736.0482] device (tap7754c79a-c0): carrier: link connected
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:16.053 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[a4868ea1-868f-41f3-9faa-1f274b218552]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:16.068 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c505b420-6d45-4f2e-8b02-dcfbaa36b7a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7754c79a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:b0:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 116], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580367, 'reachable_time': 22026, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313948, 'error': None, 'target': 'ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:16.082 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ad3ff6c6-bd2f-438c-82e3-58c2c869bbda]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:b018'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 580367, 'tstamp': 580367}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313949, 'error': None, 'target': 'ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:16.097 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[297e951a-87fc-4e63-b13f-5db09c67c315]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7754c79a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:b0:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 116], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580367, 'reachable_time': 22026, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313950, 'error': None, 'target': 'ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:16.126 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[49dcb75c-9da5-48e6-9ac3-4f2be3930c3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:16.182 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c78fe3d7-f537-4bc0-b770-c94ed084cb78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:16.183 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7754c79a-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:16.184 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:16.184 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7754c79a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:16 compute-0 kernel: tap7754c79a-c0: entered promiscuous mode
Oct 02 12:22:16 compute-0 NetworkManager[44987]: <info>  [1759407736.1865] manager: (tap7754c79a-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/180)
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:16.191 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7754c79a-c0, col_values=(('external_ids', {'iface-id': 'b1ce5636-6283-470c-ab5e-aac212c1256d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:16 compute-0 ovn_controller[148183]: 2025-10-02T12:22:16Z|00388|binding|INFO|Releasing lport b1ce5636-6283-470c-ab5e-aac212c1256d from this chassis (sb_readonly=0)
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:16.196 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7754c79a-cca5-48c7-9169-831eaad23ccc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7754c79a-cca5-48c7-9169-831eaad23ccc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:16.197 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1fcaf91e-a86d-49da-af9e-4bd3d4277c1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:16.198 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-7754c79a-cca5-48c7-9169-831eaad23ccc
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/7754c79a-cca5-48c7-9169-831eaad23ccc.pid.haproxy
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 7754c79a-cca5-48c7-9169-831eaad23ccc
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:22:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:16.200 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc', 'env', 'PROCESS_TAG=haproxy-7754c79a-cca5-48c7-9169-831eaad23ccc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7754c79a-cca5-48c7-9169-831eaad23ccc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:16.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.495 2 DEBUG nova.compute.manager [req-3fea7e06-5186-4123-b767-850090c16db7 req-cc5f1019-8f2c-4282-8e4d-5720d357d342 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Received event network-vif-plugged-d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.496 2 DEBUG oslo_concurrency.lockutils [req-3fea7e06-5186-4123-b767-850090c16db7 req-cc5f1019-8f2c-4282-8e4d-5720d357d342 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "12549ae0-14ff-4982-be0e-4ada2a821895-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.496 2 DEBUG oslo_concurrency.lockutils [req-3fea7e06-5186-4123-b767-850090c16db7 req-cc5f1019-8f2c-4282-8e4d-5720d357d342 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "12549ae0-14ff-4982-be0e-4ada2a821895-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.496 2 DEBUG oslo_concurrency.lockutils [req-3fea7e06-5186-4123-b767-850090c16db7 req-cc5f1019-8f2c-4282-8e4d-5720d357d342 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "12549ae0-14ff-4982-be0e-4ada2a821895-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.496 2 DEBUG nova.compute.manager [req-3fea7e06-5186-4123-b767-850090c16db7 req-cc5f1019-8f2c-4282-8e4d-5720d357d342 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Processing event network-vif-plugged-d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:22:16 compute-0 podman[314024]: 2025-10-02 12:22:16.584597169 +0000 UTC m=+0.079141449 container create 50f0de6ebe16624343f49564200a3e8f42c1d45ea9eb4921e3fbf2cfe57cef69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 12:22:16 compute-0 podman[314024]: 2025-10-02 12:22:16.53888896 +0000 UTC m=+0.033433260 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:22:16 compute-0 systemd[1]: Started libpod-conmon-50f0de6ebe16624343f49564200a3e8f42c1d45ea9eb4921e3fbf2cfe57cef69.scope.
Oct 02 12:22:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:22:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cbaf3741b1a7ca7bea6ba09b1cdae028ddec0db76cae3bd625c65f2ce60647b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.672 2 DEBUG oslo_concurrency.lockutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.673 2 DEBUG oslo_concurrency.lockutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:16 compute-0 podman[314024]: 2025-10-02 12:22:16.687656793 +0000 UTC m=+0.182201083 container init 50f0de6ebe16624343f49564200a3e8f42c1d45ea9eb4921e3fbf2cfe57cef69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 12:22:16 compute-0 podman[314024]: 2025-10-02 12:22:16.699188306 +0000 UTC m=+0.193732596 container start 50f0de6ebe16624343f49564200a3e8f42c1d45ea9eb4921e3fbf2cfe57cef69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.700 2 DEBUG nova.compute.manager [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:22:16 compute-0 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[314040]: [NOTICE]   (314044) : New worker (314046) forked
Oct 02 12:22:16 compute-0 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[314040]: [NOTICE]   (314044) : Loading success.
Oct 02 12:22:16 compute-0 ceph-mon[73607]: pgmap v1790: 305 pgs: 305 active+clean; 372 MiB data, 924 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.8 MiB/s wr, 264 op/s
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.767 2 DEBUG oslo_concurrency.lockutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.767 2 DEBUG oslo_concurrency.lockutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.775 2 DEBUG nova.virt.hardware [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.775 2 INFO nova.compute.claims [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:22:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:16.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.870 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407736.87005, 12549ae0-14ff-4982-be0e-4ada2a821895 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.870 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] VM Started (Lifecycle Event)
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.874 2 DEBUG nova.compute.manager [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.879 2 DEBUG nova.virt.libvirt.driver [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.882 2 INFO nova.virt.libvirt.driver [-] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Instance spawned successfully.
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.883 2 DEBUG nova.virt.libvirt.driver [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.910 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.915 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.918 2 DEBUG nova.virt.libvirt.driver [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.919 2 DEBUG nova.virt.libvirt.driver [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.919 2 DEBUG nova.virt.libvirt.driver [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.920 2 DEBUG nova.virt.libvirt.driver [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.920 2 DEBUG nova.virt.libvirt.driver [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.920 2 DEBUG nova.virt.libvirt.driver [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.941 2 DEBUG oslo_concurrency.processutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.978 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.979 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407736.8701527, 12549ae0-14ff-4982-be0e-4ada2a821895 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.979 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] VM Paused (Lifecycle Event)
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.988 2 INFO nova.compute.manager [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Took 9.19 seconds to spawn the instance on the hypervisor.
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.988 2 DEBUG nova.compute.manager [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:16 compute-0 nova_compute[257802]: 2025-10-02 12:22:16.997 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.001 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407736.876808, 12549ae0-14ff-4982-be0e-4ada2a821895 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.001 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] VM Resumed (Lifecycle Event)
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.022 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.025 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.045 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.055 2 INFO nova.compute.manager [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Took 10.16 seconds to build instance.
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.078 2 DEBUG oslo_concurrency.lockutils [None req-fce22f81-53bc-4ba4-9110-a3d84215bcb6 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "12549ae0-14ff-4982-be0e-4ada2a821895" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.260s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:22:17 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1567125073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.390 2 DEBUG oslo_concurrency.processutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.395 2 DEBUG nova.compute.provider_tree [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.408 2 DEBUG nova.scheduler.client.report [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:22:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 305 active+clean; 372 MiB data, 924 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.8 MiB/s wr, 264 op/s
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.432 2 DEBUG oslo_concurrency.lockutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.433 2 DEBUG nova.compute.manager [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.475 2 DEBUG nova.compute.manager [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.475 2 DEBUG nova.network.neutron [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.589 2 INFO nova.virt.libvirt.driver [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.606 2 DEBUG nova.compute.manager [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.653 2 DEBUG nova.policy [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '25468893d71641a385711fd2982bb00b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '10fff81da7a54740a53a0771ce916329', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.700 2 DEBUG nova.compute.manager [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.702 2 DEBUG nova.virt.libvirt.driver [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.702 2 INFO nova.virt.libvirt.driver [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Creating image(s)
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.733 2 DEBUG nova.storage.rbd_utils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image 9ae72f32-b9fd-44eb-b10d-79119ad2ca85_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:17 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1567125073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.763 2 DEBUG nova.storage.rbd_utils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image 9ae72f32-b9fd-44eb-b10d-79119ad2ca85_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.791 2 DEBUG nova.storage.rbd_utils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image 9ae72f32-b9fd-44eb-b10d-79119ad2ca85_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.795 2 DEBUG oslo_concurrency.processutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.857 2 DEBUG oslo_concurrency.processutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.858 2 DEBUG oslo_concurrency.lockutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.859 2 DEBUG oslo_concurrency.lockutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.859 2 DEBUG oslo_concurrency.lockutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.884 2 DEBUG nova.storage.rbd_utils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image 9ae72f32-b9fd-44eb-b10d-79119ad2ca85_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:17 compute-0 nova_compute[257802]: 2025-10-02 12:22:17.889 2 DEBUG oslo_concurrency.processutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 9ae72f32-b9fd-44eb-b10d-79119ad2ca85_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.090 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Acquiring lock "6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.091 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.109 2 DEBUG nova.compute.manager [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.202 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.203 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.211 2 DEBUG nova.virt.hardware [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.211 2 INFO nova.compute.claims [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.214 2 DEBUG oslo_concurrency.processutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 9ae72f32-b9fd-44eb-b10d-79119ad2ca85_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.326s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.284 2 DEBUG nova.storage.rbd_utils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] resizing rbd image 9ae72f32-b9fd-44eb-b10d-79119ad2ca85_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:22:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:18.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.445 2 DEBUG oslo_concurrency.processutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.522 2 DEBUG nova.objects.instance [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'migration_context' on Instance uuid 9ae72f32-b9fd-44eb-b10d-79119ad2ca85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.541 2 DEBUG nova.virt.libvirt.driver [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.541 2 DEBUG nova.virt.libvirt.driver [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Ensure instance console log exists: /var/lib/nova/instances/9ae72f32-b9fd-44eb-b10d-79119ad2ca85/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.542 2 DEBUG oslo_concurrency.lockutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.542 2 DEBUG oslo_concurrency.lockutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.542 2 DEBUG oslo_concurrency.lockutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.769 2 DEBUG nova.compute.manager [req-e4ab4dc3-d037-46f4-9f81-173ac27508fb req-9956391c-64fd-4311-a13a-ec91d631c7d9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Received event network-vif-plugged-d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.769 2 DEBUG oslo_concurrency.lockutils [req-e4ab4dc3-d037-46f4-9f81-173ac27508fb req-9956391c-64fd-4311-a13a-ec91d631c7d9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "12549ae0-14ff-4982-be0e-4ada2a821895-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.770 2 DEBUG oslo_concurrency.lockutils [req-e4ab4dc3-d037-46f4-9f81-173ac27508fb req-9956391c-64fd-4311-a13a-ec91d631c7d9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "12549ae0-14ff-4982-be0e-4ada2a821895-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.770 2 DEBUG oslo_concurrency.lockutils [req-e4ab4dc3-d037-46f4-9f81-173ac27508fb req-9956391c-64fd-4311-a13a-ec91d631c7d9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "12549ae0-14ff-4982-be0e-4ada2a821895-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.770 2 DEBUG nova.compute.manager [req-e4ab4dc3-d037-46f4-9f81-173ac27508fb req-9956391c-64fd-4311-a13a-ec91d631c7d9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] No waiting events found dispatching network-vif-plugged-d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.770 2 WARNING nova.compute.manager [req-e4ab4dc3-d037-46f4-9f81-173ac27508fb req-9956391c-64fd-4311-a13a-ec91d631c7d9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Received unexpected event network-vif-plugged-d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f for instance with vm_state active and task_state None.
Oct 02 12:22:18 compute-0 ceph-mon[73607]: pgmap v1791: 305 pgs: 305 active+clean; 372 MiB data, 924 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.8 MiB/s wr, 264 op/s
Oct 02 12:22:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:18.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:22:18 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4253065339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.898 2 DEBUG oslo_concurrency.processutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.904 2 DEBUG nova.compute.provider_tree [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.919 2 DEBUG nova.network.neutron [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Successfully created port: 7da3e5f9-b358-4404-825b-b1ad43bc54ac _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.925 2 DEBUG nova.scheduler.client.report [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.928 2 DEBUG oslo_concurrency.lockutils [None req-7c9731da-d448-4494-b5e8-4837b2b99b0c a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "12549ae0-14ff-4982-be0e-4ada2a821895" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.929 2 DEBUG oslo_concurrency.lockutils [None req-7c9731da-d448-4494-b5e8-4837b2b99b0c a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "12549ae0-14ff-4982-be0e-4ada2a821895" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.929 2 DEBUG nova.compute.manager [None req-7c9731da-d448-4494-b5e8-4837b2b99b0c a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.933 2 DEBUG nova.compute.manager [None req-7c9731da-d448-4494-b5e8-4837b2b99b0c a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.934 2 DEBUG nova.objects.instance [None req-7c9731da-d448-4494-b5e8-4837b2b99b0c a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lazy-loading 'flavor' on Instance uuid 12549ae0-14ff-4982-be0e-4ada2a821895 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.973 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.974 2 DEBUG nova.compute.manager [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:22:18 compute-0 nova_compute[257802]: 2025-10-02 12:22:18.979 2 DEBUG nova.virt.libvirt.driver [None req-7c9731da-d448-4494-b5e8-4837b2b99b0c a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:22:19 compute-0 nova_compute[257802]: 2025-10-02 12:22:19.065 2 DEBUG nova.compute.manager [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:22:19 compute-0 nova_compute[257802]: 2025-10-02 12:22:19.065 2 DEBUG nova.network.neutron [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:22:19 compute-0 nova_compute[257802]: 2025-10-02 12:22:19.091 2 INFO nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:22:19 compute-0 nova_compute[257802]: 2025-10-02 12:22:19.116 2 DEBUG nova.compute.manager [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:22:19 compute-0 nova_compute[257802]: 2025-10-02 12:22:19.200 2 DEBUG nova.compute.manager [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:22:19 compute-0 nova_compute[257802]: 2025-10-02 12:22:19.201 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:22:19 compute-0 nova_compute[257802]: 2025-10-02 12:22:19.202 2 INFO nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Creating image(s)
Oct 02 12:22:19 compute-0 nova_compute[257802]: 2025-10-02 12:22:19.227 2 DEBUG nova.storage.rbd_utils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] rbd image 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:19 compute-0 nova_compute[257802]: 2025-10-02 12:22:19.252 2 DEBUG nova.storage.rbd_utils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] rbd image 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:19 compute-0 nova_compute[257802]: 2025-10-02 12:22:19.276 2 DEBUG nova.storage.rbd_utils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] rbd image 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:19 compute-0 nova_compute[257802]: 2025-10-02 12:22:19.279 2 DEBUG oslo_concurrency.processutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:19 compute-0 nova_compute[257802]: 2025-10-02 12:22:19.343 2 DEBUG oslo_concurrency.processutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:19 compute-0 nova_compute[257802]: 2025-10-02 12:22:19.344 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:19 compute-0 nova_compute[257802]: 2025-10-02 12:22:19.344 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:19 compute-0 nova_compute[257802]: 2025-10-02 12:22:19.345 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:19 compute-0 nova_compute[257802]: 2025-10-02 12:22:19.369 2 DEBUG nova.storage.rbd_utils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] rbd image 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:19 compute-0 nova_compute[257802]: 2025-10-02 12:22:19.373 2 DEBUG oslo_concurrency.processutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:19 compute-0 nova_compute[257802]: 2025-10-02 12:22:19.397 2 DEBUG nova.policy [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '93e805bcb0e047ca9d45c653f5ec913d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '91d108e807094b0fa8e63a923d2269ee', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:22:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 388 MiB data, 927 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.4 MiB/s wr, 216 op/s
Oct 02 12:22:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:19 compute-0 nova_compute[257802]: 2025-10-02 12:22:19.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4253065339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1522284553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/922221350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:20 compute-0 nova_compute[257802]: 2025-10-02 12:22:20.132 2 DEBUG nova.network.neutron [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Successfully updated port: 7da3e5f9-b358-4404-825b-b1ad43bc54ac _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:22:20 compute-0 nova_compute[257802]: 2025-10-02 12:22:20.146 2 DEBUG oslo_concurrency.lockutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "refresh_cache-9ae72f32-b9fd-44eb-b10d-79119ad2ca85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:22:20 compute-0 nova_compute[257802]: 2025-10-02 12:22:20.147 2 DEBUG oslo_concurrency.lockutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquired lock "refresh_cache-9ae72f32-b9fd-44eb-b10d-79119ad2ca85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:22:20 compute-0 nova_compute[257802]: 2025-10-02 12:22:20.147 2 DEBUG nova.network.neutron [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:22:20 compute-0 nova_compute[257802]: 2025-10-02 12:22:20.225 2 DEBUG nova.compute.manager [req-33922968-f2b1-42ec-8de2-49d81ec25cb4 req-dbbf8176-b066-463a-bd3c-fb6b43a3f26d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Received event network-changed-7da3e5f9-b358-4404-825b-b1ad43bc54ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:20 compute-0 nova_compute[257802]: 2025-10-02 12:22:20.225 2 DEBUG nova.compute.manager [req-33922968-f2b1-42ec-8de2-49d81ec25cb4 req-dbbf8176-b066-463a-bd3c-fb6b43a3f26d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Refreshing instance network info cache due to event network-changed-7da3e5f9-b358-4404-825b-b1ad43bc54ac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:22:20 compute-0 nova_compute[257802]: 2025-10-02 12:22:20.225 2 DEBUG oslo_concurrency.lockutils [req-33922968-f2b1-42ec-8de2-49d81ec25cb4 req-dbbf8176-b066-463a-bd3c-fb6b43a3f26d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-9ae72f32-b9fd-44eb-b10d-79119ad2ca85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:22:20 compute-0 nova_compute[257802]: 2025-10-02 12:22:20.285 2 DEBUG nova.network.neutron [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:22:20 compute-0 sudo[314360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:22:20 compute-0 sudo[314360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:20 compute-0 sudo[314360]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:20 compute-0 sudo[314385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:22:20 compute-0 sudo[314385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:20 compute-0 sudo[314385]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:22:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:20.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:22:20 compute-0 nova_compute[257802]: 2025-10-02 12:22:20.730 2 DEBUG oslo_concurrency.processutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.357s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:20 compute-0 nova_compute[257802]: 2025-10-02 12:22:20.805 2 DEBUG nova.storage.rbd_utils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] resizing rbd image 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:22:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:22:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:20.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.022 2 DEBUG nova.network.neutron [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Successfully created port: 81b397b3-6d28-4163-a930-341d2b78d96a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:22:21 compute-0 ceph-mon[73607]: pgmap v1792: 305 pgs: 305 active+clean; 388 MiB data, 927 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.4 MiB/s wr, 216 op/s
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.151 2 DEBUG nova.objects.instance [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lazy-loading 'migration_context' on Instance uuid 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.165 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.165 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Ensure instance console log exists: /var/lib/nova/instances/6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.166 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.166 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.167 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1793: 305 pgs: 305 active+clean; 425 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.4 MiB/s wr, 245 op/s
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.549 2 DEBUG nova.network.neutron [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Updating instance_info_cache with network_info: [{"id": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "address": "fa:16:3e:fc:ac:3b", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da3e5f9-b3", "ovs_interfaceid": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.809 2 DEBUG oslo_concurrency.lockutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Releasing lock "refresh_cache-9ae72f32-b9fd-44eb-b10d-79119ad2ca85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.810 2 DEBUG nova.compute.manager [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Instance network_info: |[{"id": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "address": "fa:16:3e:fc:ac:3b", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da3e5f9-b3", "ovs_interfaceid": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.810 2 DEBUG oslo_concurrency.lockutils [req-33922968-f2b1-42ec-8de2-49d81ec25cb4 req-dbbf8176-b066-463a-bd3c-fb6b43a3f26d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-9ae72f32-b9fd-44eb-b10d-79119ad2ca85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.810 2 DEBUG nova.network.neutron [req-33922968-f2b1-42ec-8de2-49d81ec25cb4 req-dbbf8176-b066-463a-bd3c-fb6b43a3f26d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Refreshing network info cache for port 7da3e5f9-b358-4404-825b-b1ad43bc54ac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.812 2 DEBUG nova.virt.libvirt.driver [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Start _get_guest_xml network_info=[{"id": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "address": "fa:16:3e:fc:ac:3b", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da3e5f9-b3", "ovs_interfaceid": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.816 2 WARNING nova.virt.libvirt.driver [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.820 2 DEBUG nova.virt.libvirt.host [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.820 2 DEBUG nova.virt.libvirt.host [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.823 2 DEBUG nova.virt.libvirt.host [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.823 2 DEBUG nova.virt.libvirt.host [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.824 2 DEBUG nova.virt.libvirt.driver [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.824 2 DEBUG nova.virt.hardware [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.825 2 DEBUG nova.virt.hardware [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.825 2 DEBUG nova.virt.hardware [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.825 2 DEBUG nova.virt.hardware [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.825 2 DEBUG nova.virt.hardware [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.825 2 DEBUG nova.virt.hardware [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.826 2 DEBUG nova.virt.hardware [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.826 2 DEBUG nova.virt.hardware [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.826 2 DEBUG nova.virt.hardware [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.826 2 DEBUG nova.virt.hardware [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.826 2 DEBUG nova.virt.hardware [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:22:21 compute-0 nova_compute[257802]: 2025-10-02 12:22:21.829 2 DEBUG oslo_concurrency.processutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:22:22 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3363589560' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.250 2 DEBUG oslo_concurrency.processutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.280 2 DEBUG nova.storage.rbd_utils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image 9ae72f32-b9fd-44eb-b10d-79119ad2ca85_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.287 2 DEBUG oslo_concurrency.processutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:22:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:22.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:22:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:22:22 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1651179448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.772 2 DEBUG oslo_concurrency.processutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.773 2 DEBUG nova.virt.libvirt.vif [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:22:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1174011223',display_name='tempest-ServerActionsTestOtherB-server-1174011223',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1174011223',id=93,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGD2jbBFmRg2ZrnheVnZyLwDISk/dFTNtp10+sWyF/q+rC4Q86cvBQSRgacxSPIqXVpmiVTqI66cLDPhvjcnRFXyQqHRS/RWGvUZk+wm1wfft8CveiGko+Vh4vSox2iOrA==',key_name='tempest-keypair-1336245373',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='10fff81da7a54740a53a0771ce916329',ramdisk_id='',reservation_id='r-12h270gm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1686489955',owner_user_name='tempest-ServerActionsTestOtherB-1686489955-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='25468893d71641a385711fd2982bb00b',uuid=9ae72f32-b9fd-44eb-b10d-79119ad2ca85,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "address": "fa:16:3e:fc:ac:3b", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da3e5f9-b3", "ovs_interfaceid": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.774 2 DEBUG nova.network.os_vif_util [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converting VIF {"id": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "address": "fa:16:3e:fc:ac:3b", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da3e5f9-b3", "ovs_interfaceid": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.775 2 DEBUG nova.network.os_vif_util [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ac:3b,bridge_name='br-int',has_traffic_filtering=True,id=7da3e5f9-b358-4404-825b-b1ad43bc54ac,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da3e5f9-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.777 2 DEBUG nova.objects.instance [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9ae72f32-b9fd-44eb-b10d-79119ad2ca85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.815 2 DEBUG nova.virt.libvirt.driver [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:22:22 compute-0 nova_compute[257802]:   <uuid>9ae72f32-b9fd-44eb-b10d-79119ad2ca85</uuid>
Oct 02 12:22:22 compute-0 nova_compute[257802]:   <name>instance-0000005d</name>
Oct 02 12:22:22 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:22:22 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:22:22 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:22:22 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:       <nova:name>tempest-ServerActionsTestOtherB-server-1174011223</nova:name>
Oct 02 12:22:22 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:22:21</nova:creationTime>
Oct 02 12:22:22 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:22:22 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:22:22 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:22:22 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:22:22 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:22:22 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:22:22 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:22:22 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:22:22 compute-0 nova_compute[257802]:         <nova:user uuid="25468893d71641a385711fd2982bb00b">tempest-ServerActionsTestOtherB-1686489955-project-member</nova:user>
Oct 02 12:22:22 compute-0 nova_compute[257802]:         <nova:project uuid="10fff81da7a54740a53a0771ce916329">tempest-ServerActionsTestOtherB-1686489955</nova:project>
Oct 02 12:22:22 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:22:22 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:22:22 compute-0 nova_compute[257802]:         <nova:port uuid="7da3e5f9-b358-4404-825b-b1ad43bc54ac">
Oct 02 12:22:22 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:22:22 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:22:22 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:22:22 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <system>
Oct 02 12:22:22 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:22:22 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:22:22 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:22:22 compute-0 nova_compute[257802]:       <entry name="serial">9ae72f32-b9fd-44eb-b10d-79119ad2ca85</entry>
Oct 02 12:22:22 compute-0 nova_compute[257802]:       <entry name="uuid">9ae72f32-b9fd-44eb-b10d-79119ad2ca85</entry>
Oct 02 12:22:22 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     </system>
Oct 02 12:22:22 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:22:22 compute-0 nova_compute[257802]:   <os>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:   </os>
Oct 02 12:22:22 compute-0 nova_compute[257802]:   <features>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:   </features>
Oct 02 12:22:22 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:22:22 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:22:22 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:22:22 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/9ae72f32-b9fd-44eb-b10d-79119ad2ca85_disk">
Oct 02 12:22:22 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:       </source>
Oct 02 12:22:22 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:22:22 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:22:22 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:22:22 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/9ae72f32-b9fd-44eb-b10d-79119ad2ca85_disk.config">
Oct 02 12:22:22 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:       </source>
Oct 02 12:22:22 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:22:22 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:22:22 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:22:22 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:fc:ac:3b"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:       <target dev="tap7da3e5f9-b3"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:22:22 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/9ae72f32-b9fd-44eb-b10d-79119ad2ca85/console.log" append="off"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <video>
Oct 02 12:22:22 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     </video>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:22:22 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:22:22 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:22:22 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:22:22 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:22:22 compute-0 nova_compute[257802]: </domain>
Oct 02 12:22:22 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.817 2 DEBUG nova.compute.manager [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Preparing to wait for external event network-vif-plugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.817 2 DEBUG oslo_concurrency.lockutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.818 2 DEBUG oslo_concurrency.lockutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.818 2 DEBUG oslo_concurrency.lockutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.819 2 DEBUG nova.virt.libvirt.vif [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:22:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1174011223',display_name='tempest-ServerActionsTestOtherB-server-1174011223',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1174011223',id=93,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGD2jbBFmRg2ZrnheVnZyLwDISk/dFTNtp10+sWyF/q+rC4Q86cvBQSRgacxSPIqXVpmiVTqI66cLDPhvjcnRFXyQqHRS/RWGvUZk+wm1wfft8CveiGko+Vh4vSox2iOrA==',key_name='tempest-keypair-1336245373',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='10fff81da7a54740a53a0771ce916329',ramdisk_id='',reservation_id='r-12h270gm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1686489955',owner_user_name='tempest-ServerActionsTestOtherB-1686489955-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='25468893d71641a385711fd2982bb00b',uuid=9ae72f32-b9fd-44eb-b10d-79119ad2ca85,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "address": "fa:16:3e:fc:ac:3b", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da3e5f9-b3", "ovs_interfaceid": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.820 2 DEBUG nova.network.os_vif_util [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converting VIF {"id": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "address": "fa:16:3e:fc:ac:3b", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da3e5f9-b3", "ovs_interfaceid": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.820 2 DEBUG nova.network.os_vif_util [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ac:3b,bridge_name='br-int',has_traffic_filtering=True,id=7da3e5f9-b358-4404-825b-b1ad43bc54ac,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da3e5f9-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.821 2 DEBUG os_vif [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ac:3b,bridge_name='br-int',has_traffic_filtering=True,id=7da3e5f9-b358-4404-825b-b1ad43bc54ac,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da3e5f9-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.822 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.823 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.827 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7da3e5f9-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.828 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7da3e5f9-b3, col_values=(('external_ids', {'iface-id': '7da3e5f9-b358-4404-825b-b1ad43bc54ac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:ac:3b', 'vm-uuid': '9ae72f32-b9fd-44eb-b10d-79119ad2ca85'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:22 compute-0 NetworkManager[44987]: <info>  [1759407742.8312] manager: (tap7da3e5f9-b3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/181)
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.839 2 INFO os_vif [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ac:3b,bridge_name='br-int',has_traffic_filtering=True,id=7da3e5f9-b358-4404-825b-b1ad43bc54ac,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da3e5f9-b3')
Oct 02 12:22:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:22:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:22.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.918 2 DEBUG nova.network.neutron [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Successfully updated port: 81b397b3-6d28-4163-a930-341d2b78d96a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.941 2 DEBUG nova.virt.libvirt.driver [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.942 2 DEBUG nova.virt.libvirt.driver [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.942 2 DEBUG nova.virt.libvirt.driver [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] No VIF found with MAC fa:16:3e:fc:ac:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.943 2 INFO nova.virt.libvirt.driver [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Using config drive
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.978 2 DEBUG nova.storage.rbd_utils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image 9ae72f32-b9fd-44eb-b10d-79119ad2ca85_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.987 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Acquiring lock "refresh_cache-6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.988 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Acquired lock "refresh_cache-6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:22:22 compute-0 nova_compute[257802]: 2025-10-02 12:22:22.988 2 DEBUG nova.network.neutron [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:22:23 compute-0 ceph-mon[73607]: pgmap v1793: 305 pgs: 305 active+clean; 425 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.4 MiB/s wr, 245 op/s
Oct 02 12:22:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3363589560' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1651179448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:23 compute-0 nova_compute[257802]: 2025-10-02 12:22:23.374 2 DEBUG nova.network.neutron [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:22:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 305 active+clean; 482 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 7.4 MiB/s wr, 287 op/s
Oct 02 12:22:23 compute-0 nova_compute[257802]: 2025-10-02 12:22:23.481 2 INFO nova.virt.libvirt.driver [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Creating config drive at /var/lib/nova/instances/9ae72f32-b9fd-44eb-b10d-79119ad2ca85/disk.config
Oct 02 12:22:23 compute-0 nova_compute[257802]: 2025-10-02 12:22:23.487 2 DEBUG oslo_concurrency.processutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9ae72f32-b9fd-44eb-b10d-79119ad2ca85/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyzl5vtca execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:23 compute-0 nova_compute[257802]: 2025-10-02 12:22:23.590 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407728.5884619, ef69bc17-6b51-491e-82e9-c4106abb8d74 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:22:23 compute-0 nova_compute[257802]: 2025-10-02 12:22:23.591 2 INFO nova.compute.manager [-] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] VM Stopped (Lifecycle Event)
Oct 02 12:22:23 compute-0 nova_compute[257802]: 2025-10-02 12:22:23.614 2 DEBUG nova.compute.manager [None req-80e1bae7-f1f1-4cbe-bef3-28a89d222ea6 - - - - - -] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:23 compute-0 nova_compute[257802]: 2025-10-02 12:22:23.642 2 DEBUG oslo_concurrency.processutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9ae72f32-b9fd-44eb-b10d-79119ad2ca85/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyzl5vtca" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:23 compute-0 nova_compute[257802]: 2025-10-02 12:22:23.675 2 DEBUG nova.storage.rbd_utils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image 9ae72f32-b9fd-44eb-b10d-79119ad2ca85_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:23 compute-0 nova_compute[257802]: 2025-10-02 12:22:23.679 2 DEBUG oslo_concurrency.processutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9ae72f32-b9fd-44eb-b10d-79119ad2ca85/disk.config 9ae72f32-b9fd-44eb-b10d-79119ad2ca85_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:23 compute-0 nova_compute[257802]: 2025-10-02 12:22:23.708 2 DEBUG nova.network.neutron [req-33922968-f2b1-42ec-8de2-49d81ec25cb4 req-dbbf8176-b066-463a-bd3c-fb6b43a3f26d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Updated VIF entry in instance network info cache for port 7da3e5f9-b358-4404-825b-b1ad43bc54ac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:22:23 compute-0 nova_compute[257802]: 2025-10-02 12:22:23.709 2 DEBUG nova.network.neutron [req-33922968-f2b1-42ec-8de2-49d81ec25cb4 req-dbbf8176-b066-463a-bd3c-fb6b43a3f26d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Updating instance_info_cache with network_info: [{"id": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "address": "fa:16:3e:fc:ac:3b", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da3e5f9-b3", "ovs_interfaceid": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:22:23 compute-0 nova_compute[257802]: 2025-10-02 12:22:23.726 2 DEBUG oslo_concurrency.lockutils [req-33922968-f2b1-42ec-8de2-49d81ec25cb4 req-dbbf8176-b066-463a-bd3c-fb6b43a3f26d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-9ae72f32-b9fd-44eb-b10d-79119ad2ca85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:22:23 compute-0 nova_compute[257802]: 2025-10-02 12:22:23.897 2 DEBUG oslo_concurrency.processutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9ae72f32-b9fd-44eb-b10d-79119ad2ca85/disk.config 9ae72f32-b9fd-44eb-b10d-79119ad2ca85_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.218s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:23 compute-0 nova_compute[257802]: 2025-10-02 12:22:23.897 2 INFO nova.virt.libvirt.driver [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Deleting local config drive /var/lib/nova/instances/9ae72f32-b9fd-44eb-b10d-79119ad2ca85/disk.config because it was imported into RBD.
Oct 02 12:22:23 compute-0 nova_compute[257802]: 2025-10-02 12:22:23.924 2 DEBUG nova.compute.manager [req-35edda68-f146-41bb-a5b3-17c034233c3b req-1b398710-4ebf-498c-957e-fca31e971b6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Received event network-changed-81b397b3-6d28-4163-a930-341d2b78d96a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:23 compute-0 nova_compute[257802]: 2025-10-02 12:22:23.925 2 DEBUG nova.compute.manager [req-35edda68-f146-41bb-a5b3-17c034233c3b req-1b398710-4ebf-498c-957e-fca31e971b6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Refreshing instance network info cache due to event network-changed-81b397b3-6d28-4163-a930-341d2b78d96a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:22:23 compute-0 nova_compute[257802]: 2025-10-02 12:22:23.925 2 DEBUG oslo_concurrency.lockutils [req-35edda68-f146-41bb-a5b3-17c034233c3b req-1b398710-4ebf-498c-957e-fca31e971b6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:22:23 compute-0 NetworkManager[44987]: <info>  [1759407743.9602] manager: (tap7da3e5f9-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/182)
Oct 02 12:22:23 compute-0 kernel: tap7da3e5f9-b3: entered promiscuous mode
Oct 02 12:22:23 compute-0 ovn_controller[148183]: 2025-10-02T12:22:23Z|00389|binding|INFO|Claiming lport 7da3e5f9-b358-4404-825b-b1ad43bc54ac for this chassis.
Oct 02 12:22:23 compute-0 ovn_controller[148183]: 2025-10-02T12:22:23Z|00390|binding|INFO|7da3e5f9-b358-4404-825b-b1ad43bc54ac: Claiming fa:16:3e:fc:ac:3b 10.100.0.3
Oct 02 12:22:23 compute-0 nova_compute[257802]: 2025-10-02 12:22:23.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:23.989 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:ac:3b 10.100.0.3'], port_security=['fa:16:3e:fc:ac:3b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9ae72f32-b9fd-44eb-b10d-79119ad2ca85', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4035a600-4a5e-41ee-a619-d81e2c993b79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '10fff81da7a54740a53a0771ce916329', 'neutron:revision_number': '2', 'neutron:security_group_ids': '32af0a94-4565-470d-9918-1bc97e347f8f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5dc7931-b785-4336-99b8-936a17be87c3, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=7da3e5f9-b358-4404-825b-b1ad43bc54ac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:22:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:23.991 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 7da3e5f9-b358-4404-825b-b1ad43bc54ac in datapath 4035a600-4a5e-41ee-a619-d81e2c993b79 bound to our chassis
Oct 02 12:22:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:23.993 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4035a600-4a5e-41ee-a619-d81e2c993b79
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:24.009 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[be444941-553e-4439-9d29-3ab901d16b5f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:24.010 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4035a600-41 in ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:22:24 compute-0 systemd-udevd[314621]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:24.014 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4035a600-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:24.014 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[02884768-9ce4-43f7-8ac8-8980d3b34132]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:24.015 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ffa46c9e-8387-42c0-8f0a-b140d4e7e0a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:24 compute-0 systemd-machined[211836]: New machine qemu-45-instance-0000005d.
Oct 02 12:22:24 compute-0 ovn_controller[148183]: 2025-10-02T12:22:24Z|00391|binding|INFO|Setting lport 7da3e5f9-b358-4404-825b-b1ad43bc54ac ovn-installed in OVS
Oct 02 12:22:24 compute-0 ovn_controller[148183]: 2025-10-02T12:22:24Z|00392|binding|INFO|Setting lport 7da3e5f9-b358-4404-825b-b1ad43bc54ac up in Southbound
Oct 02 12:22:24 compute-0 NetworkManager[44987]: <info>  [1759407744.0235] device (tap7da3e5f9-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:22:24 compute-0 NetworkManager[44987]: <info>  [1759407744.0243] device (tap7da3e5f9-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:22:24 compute-0 nova_compute[257802]: 2025-10-02 12:22:24.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:24.034 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[ae48afb4-9b17-430a-a942-3187cb3d6012]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:24 compute-0 systemd[1]: Started Virtual Machine qemu-45-instance-0000005d.
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:24.071 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[13afc8ab-a136-43f2-a232-808117885631]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:24.108 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[64a75fcc-aa30-4dce-8c92-098a3c38632d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:24 compute-0 NetworkManager[44987]: <info>  [1759407744.1166] manager: (tap4035a600-40): new Veth device (/org/freedesktop/NetworkManager/Devices/183)
Oct 02 12:22:24 compute-0 systemd-udevd[314624]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:24.115 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2568c6aa-f999-4adb-bc48-c0d3d6c1b0ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2198849570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/376438984' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3974811453' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:24.170 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[6eb693af-920b-4ddd-b71b-c4a730fd704d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:24.174 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[bdc540e0-7d03-4ec1-8847-0f59cb705bde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:24 compute-0 NetworkManager[44987]: <info>  [1759407744.2062] device (tap4035a600-40): carrier: link connected
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:24.211 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[cc0eceb2-d8cf-444b-b38f-cb43e6a9bf01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:24.229 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f1b493c8-ab76-435f-aa0b-da5e9c01771d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4035a600-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:fb:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581183, 'reachable_time': 43739, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314653, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:24.256 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6b0550d0-93ed-4a59-8a07-0404848357b1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed0:fb3f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 581183, 'tstamp': 581183}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314654, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:24.287 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6819301e-4561-49fa-ad52-9e6bfdbd5172]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4035a600-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:fb:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581183, 'reachable_time': 43739, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314655, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:24.326 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a24c3bac-0d77-4bfa-8e14-dad29ba50d18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:24.402 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[63619c1d-69b6-4fb0-844c-76f2b6078bac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:24.403 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4035a600-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:24.403 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:24.404 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4035a600-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:24 compute-0 NetworkManager[44987]: <info>  [1759407744.4064] manager: (tap4035a600-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/184)
Oct 02 12:22:24 compute-0 kernel: tap4035a600-40: entered promiscuous mode
Oct 02 12:22:24 compute-0 nova_compute[257802]: 2025-10-02 12:22:24.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:24.417 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4035a600-40, col_values=(('external_ids', {'iface-id': '1befa812-080f-4694-ba8b-9130fe81621d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:24 compute-0 ovn_controller[148183]: 2025-10-02T12:22:24Z|00393|binding|INFO|Releasing lport 1befa812-080f-4694-ba8b-9130fe81621d from this chassis (sb_readonly=0)
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:24.419 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4035a600-4a5e-41ee-a619-d81e2c993b79.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4035a600-4a5e-41ee-a619-d81e2c993b79.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:24.420 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a6dcac-79ee-4b7e-a98d-16e6a4f5e869]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:24.421 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-4035a600-4a5e-41ee-a619-d81e2c993b79
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/4035a600-4a5e-41ee-a619-d81e2c993b79.pid.haproxy
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 4035a600-4a5e-41ee-a619-d81e2c993b79
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:24.421 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'env', 'PROCESS_TAG=haproxy-4035a600-4a5e-41ee-a619-d81e2c993b79', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4035a600-4a5e-41ee-a619-d81e2c993b79.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:22:24 compute-0 nova_compute[257802]: 2025-10-02 12:22:24.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:24 compute-0 nova_compute[257802]: 2025-10-02 12:22:24.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:24.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:24 compute-0 nova_compute[257802]: 2025-10-02 12:22:24.696 2 DEBUG nova.network.neutron [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Updating instance_info_cache with network_info: [{"id": "81b397b3-6d28-4163-a930-341d2b78d96a", "address": "fa:16:3e:f8:30:54", "network": {"id": "81df0bd4-1de1-409c-8730-5d718bbb9ab0", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-681141919-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "91d108e807094b0fa8e63a923d2269ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81b397b3-6d", "ovs_interfaceid": "81b397b3-6d28-4163-a930-341d2b78d96a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:22:24 compute-0 nova_compute[257802]: 2025-10-02 12:22:24.721 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Releasing lock "refresh_cache-6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:22:24 compute-0 nova_compute[257802]: 2025-10-02 12:22:24.721 2 DEBUG nova.compute.manager [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Instance network_info: |[{"id": "81b397b3-6d28-4163-a930-341d2b78d96a", "address": "fa:16:3e:f8:30:54", "network": {"id": "81df0bd4-1de1-409c-8730-5d718bbb9ab0", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-681141919-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "91d108e807094b0fa8e63a923d2269ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81b397b3-6d", "ovs_interfaceid": "81b397b3-6d28-4163-a930-341d2b78d96a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:22:24 compute-0 nova_compute[257802]: 2025-10-02 12:22:24.721 2 DEBUG oslo_concurrency.lockutils [req-35edda68-f146-41bb-a5b3-17c034233c3b req-1b398710-4ebf-498c-957e-fca31e971b6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:22:24 compute-0 nova_compute[257802]: 2025-10-02 12:22:24.721 2 DEBUG nova.network.neutron [req-35edda68-f146-41bb-a5b3-17c034233c3b req-1b398710-4ebf-498c-957e-fca31e971b6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Refreshing network info cache for port 81b397b3-6d28-4163-a930-341d2b78d96a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:22:24 compute-0 nova_compute[257802]: 2025-10-02 12:22:24.724 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Start _get_guest_xml network_info=[{"id": "81b397b3-6d28-4163-a930-341d2b78d96a", "address": "fa:16:3e:f8:30:54", "network": {"id": "81df0bd4-1de1-409c-8730-5d718bbb9ab0", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-681141919-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "91d108e807094b0fa8e63a923d2269ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81b397b3-6d", "ovs_interfaceid": "81b397b3-6d28-4163-a930-341d2b78d96a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:22:24 compute-0 nova_compute[257802]: 2025-10-02 12:22:24.728 2 WARNING nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:22:24 compute-0 nova_compute[257802]: 2025-10-02 12:22:24.736 2 DEBUG nova.virt.libvirt.host [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:22:24 compute-0 nova_compute[257802]: 2025-10-02 12:22:24.736 2 DEBUG nova.virt.libvirt.host [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:22:24 compute-0 nova_compute[257802]: 2025-10-02 12:22:24.739 2 DEBUG nova.virt.libvirt.host [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:22:24 compute-0 nova_compute[257802]: 2025-10-02 12:22:24.740 2 DEBUG nova.virt.libvirt.host [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:22:24 compute-0 nova_compute[257802]: 2025-10-02 12:22:24.741 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:22:24 compute-0 nova_compute[257802]: 2025-10-02 12:22:24.741 2 DEBUG nova.virt.hardware [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:22:24 compute-0 nova_compute[257802]: 2025-10-02 12:22:24.742 2 DEBUG nova.virt.hardware [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:22:24 compute-0 nova_compute[257802]: 2025-10-02 12:22:24.742 2 DEBUG nova.virt.hardware [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:22:24 compute-0 nova_compute[257802]: 2025-10-02 12:22:24.742 2 DEBUG nova.virt.hardware [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:22:24 compute-0 nova_compute[257802]: 2025-10-02 12:22:24.742 2 DEBUG nova.virt.hardware [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:22:24 compute-0 nova_compute[257802]: 2025-10-02 12:22:24.742 2 DEBUG nova.virt.hardware [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:22:24 compute-0 nova_compute[257802]: 2025-10-02 12:22:24.743 2 DEBUG nova.virt.hardware [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:22:24 compute-0 nova_compute[257802]: 2025-10-02 12:22:24.743 2 DEBUG nova.virt.hardware [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:22:24 compute-0 nova_compute[257802]: 2025-10-02 12:22:24.743 2 DEBUG nova.virt.hardware [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:22:24 compute-0 nova_compute[257802]: 2025-10-02 12:22:24.743 2 DEBUG nova.virt.hardware [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:22:24 compute-0 nova_compute[257802]: 2025-10-02 12:22:24.743 2 DEBUG nova.virt.hardware [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:22:24 compute-0 nova_compute[257802]: 2025-10-02 12:22:24.746 2 DEBUG oslo_concurrency.processutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:24 compute-0 podman[314730]: 2025-10-02 12:22:24.79762342 +0000 UTC m=+0.082886111 container create 3f7110bc50f59364756fec7b555423eb8e757eb86df56ab37f6a88553a177947 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 02 12:22:24 compute-0 podman[314730]: 2025-10-02 12:22:24.759642739 +0000 UTC m=+0.044905490 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:22:24 compute-0 systemd[1]: Started libpod-conmon-3f7110bc50f59364756fec7b555423eb8e757eb86df56ab37f6a88553a177947.scope.
Oct 02 12:22:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:24.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:22:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8bd282c0d6e6d22d52f0312e8fbd4a403fe5b610d6d99bd867ad480c5eb66fc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:22:24 compute-0 podman[314730]: 2025-10-02 12:22:24.900878479 +0000 UTC m=+0.186141170 container init 3f7110bc50f59364756fec7b555423eb8e757eb86df56ab37f6a88553a177947 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:22:24 compute-0 podman[314730]: 2025-10-02 12:22:24.907711707 +0000 UTC m=+0.192974378 container start 3f7110bc50f59364756fec7b555423eb8e757eb86df56ab37f6a88553a177947 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 12:22:24 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[314746]: [NOTICE]   (314769) : New worker (314771) forked
Oct 02 12:22:24 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[314746]: [NOTICE]   (314769) : Loading success.
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.024 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407745.0244286, 9ae72f32-b9fd-44eb-b10d-79119ad2ca85 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.025 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] VM Started (Lifecycle Event)
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.047 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.049 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407745.0272107, 9ae72f32-b9fd-44eb-b10d-79119ad2ca85 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.050 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] VM Paused (Lifecycle Event)
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.065 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.067 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.083 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:22:25 compute-0 ceph-mon[73607]: pgmap v1794: 305 pgs: 305 active+clean; 482 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 7.4 MiB/s wr, 287 op/s
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.232 2 DEBUG oslo_concurrency.processutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.260 2 DEBUG nova.storage.rbd_utils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] rbd image 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.264 2 DEBUG oslo_concurrency.processutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 305 active+clean; 511 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 7.8 MiB/s wr, 311 op/s
Oct 02 12:22:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:22:25 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3999755845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.693 2 DEBUG oslo_concurrency.processutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.694 2 DEBUG nova.virt.libvirt.vif [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:22:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-478094513',display_name='tempest-ListServersNegativeTestJSON-server-478094513-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-478094513-1',id=94,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='91d108e807094b0fa8e63a923d2269ee',ramdisk_id='',reservation_id='r-jgpyr0ft',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1913538533',owner_user_name='tempest-ListServersNegativeTestJSON-1913538533-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:19Z,user_data=None,user_id='93e805bcb0e047ca9d45c653f5ec913d',uuid=6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "81b397b3-6d28-4163-a930-341d2b78d96a", "address": "fa:16:3e:f8:30:54", "network": {"id": "81df0bd4-1de1-409c-8730-5d718bbb9ab0", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-681141919-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "91d108e807094b0fa8e63a923d2269ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81b397b3-6d", "ovs_interfaceid": "81b397b3-6d28-4163-a930-341d2b78d96a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.694 2 DEBUG nova.network.os_vif_util [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Converting VIF {"id": "81b397b3-6d28-4163-a930-341d2b78d96a", "address": "fa:16:3e:f8:30:54", "network": {"id": "81df0bd4-1de1-409c-8730-5d718bbb9ab0", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-681141919-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "91d108e807094b0fa8e63a923d2269ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81b397b3-6d", "ovs_interfaceid": "81b397b3-6d28-4163-a930-341d2b78d96a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.695 2 DEBUG nova.network.os_vif_util [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:30:54,bridge_name='br-int',has_traffic_filtering=True,id=81b397b3-6d28-4163-a930-341d2b78d96a,network=Network(81df0bd4-1de1-409c-8730-5d718bbb9ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81b397b3-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.696 2 DEBUG nova.objects.instance [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lazy-loading 'pci_devices' on Instance uuid 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.715 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:22:25 compute-0 nova_compute[257802]:   <uuid>6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c</uuid>
Oct 02 12:22:25 compute-0 nova_compute[257802]:   <name>instance-0000005e</name>
Oct 02 12:22:25 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:22:25 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:22:25 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:22:25 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:       <nova:name>tempest-ListServersNegativeTestJSON-server-478094513-1</nova:name>
Oct 02 12:22:25 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:22:24</nova:creationTime>
Oct 02 12:22:25 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:22:25 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:22:25 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:22:25 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:22:25 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:22:25 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:22:25 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:22:25 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:22:25 compute-0 nova_compute[257802]:         <nova:user uuid="93e805bcb0e047ca9d45c653f5ec913d">tempest-ListServersNegativeTestJSON-1913538533-project-member</nova:user>
Oct 02 12:22:25 compute-0 nova_compute[257802]:         <nova:project uuid="91d108e807094b0fa8e63a923d2269ee">tempest-ListServersNegativeTestJSON-1913538533</nova:project>
Oct 02 12:22:25 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:22:25 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:22:25 compute-0 nova_compute[257802]:         <nova:port uuid="81b397b3-6d28-4163-a930-341d2b78d96a">
Oct 02 12:22:25 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:22:25 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:22:25 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:22:25 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <system>
Oct 02 12:22:25 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:22:25 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:22:25 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:22:25 compute-0 nova_compute[257802]:       <entry name="serial">6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c</entry>
Oct 02 12:22:25 compute-0 nova_compute[257802]:       <entry name="uuid">6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c</entry>
Oct 02 12:22:25 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     </system>
Oct 02 12:22:25 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:22:25 compute-0 nova_compute[257802]:   <os>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:   </os>
Oct 02 12:22:25 compute-0 nova_compute[257802]:   <features>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:   </features>
Oct 02 12:22:25 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:22:25 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:22:25 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:22:25 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c_disk">
Oct 02 12:22:25 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:       </source>
Oct 02 12:22:25 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:22:25 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:22:25 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:22:25 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c_disk.config">
Oct 02 12:22:25 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:       </source>
Oct 02 12:22:25 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:22:25 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:22:25 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:22:25 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:f8:30:54"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:       <target dev="tap81b397b3-6d"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:22:25 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c/console.log" append="off"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <video>
Oct 02 12:22:25 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     </video>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:22:25 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:22:25 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:22:25 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:22:25 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:22:25 compute-0 nova_compute[257802]: </domain>
Oct 02 12:22:25 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.717 2 DEBUG nova.compute.manager [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Preparing to wait for external event network-vif-plugged-81b397b3-6d28-4163-a930-341d2b78d96a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.717 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Acquiring lock "6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.718 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.718 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.719 2 DEBUG nova.virt.libvirt.vif [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:22:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-478094513',display_name='tempest-ListServersNegativeTestJSON-server-478094513-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-478094513-1',id=94,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='91d108e807094b0fa8e63a923d2269ee',ramdisk_id='',reservation_id='r-jgpyr0ft',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1913538533',owner_user_name='tempest-ListServersNegativeTestJSON-1913538533-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:19Z,user_data=None,user_id='93e805bcb0e047ca9d45c653f5ec913d',uuid=6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "81b397b3-6d28-4163-a930-341d2b78d96a", "address": "fa:16:3e:f8:30:54", "network": {"id": "81df0bd4-1de1-409c-8730-5d718bbb9ab0", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-681141919-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "91d108e807094b0fa8e63a923d2269ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81b397b3-6d", "ovs_interfaceid": "81b397b3-6d28-4163-a930-341d2b78d96a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.719 2 DEBUG nova.network.os_vif_util [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Converting VIF {"id": "81b397b3-6d28-4163-a930-341d2b78d96a", "address": "fa:16:3e:f8:30:54", "network": {"id": "81df0bd4-1de1-409c-8730-5d718bbb9ab0", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-681141919-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "91d108e807094b0fa8e63a923d2269ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81b397b3-6d", "ovs_interfaceid": "81b397b3-6d28-4163-a930-341d2b78d96a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.720 2 DEBUG nova.network.os_vif_util [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:30:54,bridge_name='br-int',has_traffic_filtering=True,id=81b397b3-6d28-4163-a930-341d2b78d96a,network=Network(81df0bd4-1de1-409c-8730-5d718bbb9ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81b397b3-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.720 2 DEBUG os_vif [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:30:54,bridge_name='br-int',has_traffic_filtering=True,id=81b397b3-6d28-4163-a930-341d2b78d96a,network=Network(81df0bd4-1de1-409c-8730-5d718bbb9ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81b397b3-6d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.721 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.722 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.724 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81b397b3-6d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.725 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap81b397b3-6d, col_values=(('external_ids', {'iface-id': '81b397b3-6d28-4163-a930-341d2b78d96a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:30:54', 'vm-uuid': '6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:25 compute-0 NetworkManager[44987]: <info>  [1759407745.7273] manager: (tap81b397b3-6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/185)
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.734 2 INFO os_vif [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:30:54,bridge_name='br-int',has_traffic_filtering=True,id=81b397b3-6d28-4163-a930-341d2b78d96a,network=Network(81df0bd4-1de1-409c-8730-5d718bbb9ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81b397b3-6d')
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.794 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.794 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.794 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] No VIF found with MAC fa:16:3e:f8:30:54, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.795 2 INFO nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Using config drive
Oct 02 12:22:25 compute-0 nova_compute[257802]: 2025-10-02 12:22:25.818 2 DEBUG nova.storage.rbd_utils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] rbd image 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:25 compute-0 podman[314843]: 2025-10-02 12:22:25.925823186 +0000 UTC m=+0.059586501 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 12:22:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:26.047 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:22:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:26.048 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.196 2 DEBUG nova.compute.manager [req-3ff59012-a00a-4cb3-9d09-0ef7cef8efd4 req-0000eb39-b2e0-4bb7-894b-2a321ee7e4e0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Received event network-vif-plugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.197 2 DEBUG oslo_concurrency.lockutils [req-3ff59012-a00a-4cb3-9d09-0ef7cef8efd4 req-0000eb39-b2e0-4bb7-894b-2a321ee7e4e0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.197 2 DEBUG oslo_concurrency.lockutils [req-3ff59012-a00a-4cb3-9d09-0ef7cef8efd4 req-0000eb39-b2e0-4bb7-894b-2a321ee7e4e0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.197 2 DEBUG oslo_concurrency.lockutils [req-3ff59012-a00a-4cb3-9d09-0ef7cef8efd4 req-0000eb39-b2e0-4bb7-894b-2a321ee7e4e0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.197 2 DEBUG nova.compute.manager [req-3ff59012-a00a-4cb3-9d09-0ef7cef8efd4 req-0000eb39-b2e0-4bb7-894b-2a321ee7e4e0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Processing event network-vif-plugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.197 2 DEBUG nova.compute.manager [req-3ff59012-a00a-4cb3-9d09-0ef7cef8efd4 req-0000eb39-b2e0-4bb7-894b-2a321ee7e4e0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Received event network-vif-plugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.198 2 DEBUG oslo_concurrency.lockutils [req-3ff59012-a00a-4cb3-9d09-0ef7cef8efd4 req-0000eb39-b2e0-4bb7-894b-2a321ee7e4e0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.198 2 DEBUG oslo_concurrency.lockutils [req-3ff59012-a00a-4cb3-9d09-0ef7cef8efd4 req-0000eb39-b2e0-4bb7-894b-2a321ee7e4e0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.198 2 DEBUG oslo_concurrency.lockutils [req-3ff59012-a00a-4cb3-9d09-0ef7cef8efd4 req-0000eb39-b2e0-4bb7-894b-2a321ee7e4e0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.198 2 DEBUG nova.compute.manager [req-3ff59012-a00a-4cb3-9d09-0ef7cef8efd4 req-0000eb39-b2e0-4bb7-894b-2a321ee7e4e0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] No waiting events found dispatching network-vif-plugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.198 2 WARNING nova.compute.manager [req-3ff59012-a00a-4cb3-9d09-0ef7cef8efd4 req-0000eb39-b2e0-4bb7-894b-2a321ee7e4e0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Received unexpected event network-vif-plugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac for instance with vm_state building and task_state spawning.
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.199 2 DEBUG nova.compute.manager [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.203 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407746.2031832, 9ae72f32-b9fd-44eb-b10d-79119ad2ca85 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.204 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] VM Resumed (Lifecycle Event)
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.205 2 DEBUG nova.virt.libvirt.driver [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.208 2 INFO nova.virt.libvirt.driver [-] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Instance spawned successfully.
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.209 2 DEBUG nova.virt.libvirt.driver [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:22:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/684442568' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/193472829' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3999755845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1046833467' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.225 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.228 2 DEBUG nova.virt.libvirt.driver [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.228 2 DEBUG nova.virt.libvirt.driver [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.229 2 DEBUG nova.virt.libvirt.driver [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.229 2 DEBUG nova.virt.libvirt.driver [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.229 2 DEBUG nova.virt.libvirt.driver [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.232 2 DEBUG nova.virt.libvirt.driver [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.237 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.269 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.318 2 INFO nova.compute.manager [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Took 8.62 seconds to spawn the instance on the hypervisor.
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.318 2 DEBUG nova.compute.manager [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.407 2 INFO nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Creating config drive at /var/lib/nova/instances/6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c/disk.config
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.412 2 DEBUG oslo_concurrency.processutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1gjkui6z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.438 2 DEBUG nova.network.neutron [req-35edda68-f146-41bb-a5b3-17c034233c3b req-1b398710-4ebf-498c-957e-fca31e971b6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Updated VIF entry in instance network info cache for port 81b397b3-6d28-4163-a930-341d2b78d96a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.439 2 DEBUG nova.network.neutron [req-35edda68-f146-41bb-a5b3-17c034233c3b req-1b398710-4ebf-498c-957e-fca31e971b6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Updating instance_info_cache with network_info: [{"id": "81b397b3-6d28-4163-a930-341d2b78d96a", "address": "fa:16:3e:f8:30:54", "network": {"id": "81df0bd4-1de1-409c-8730-5d718bbb9ab0", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-681141919-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "91d108e807094b0fa8e63a923d2269ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81b397b3-6d", "ovs_interfaceid": "81b397b3-6d28-4163-a930-341d2b78d96a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:22:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:26.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.452 2 INFO nova.compute.manager [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Took 9.71 seconds to build instance.
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.474 2 DEBUG oslo_concurrency.lockutils [req-35edda68-f146-41bb-a5b3-17c034233c3b req-1b398710-4ebf-498c-957e-fca31e971b6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.475 2 DEBUG oslo_concurrency.lockutils [None req-d496c7ac-f816-4cc2-bbdd-7ccf6ba4e25b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.801s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.544 2 DEBUG oslo_concurrency.processutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1gjkui6z" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.578 2 DEBUG nova.storage.rbd_utils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] rbd image 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:26 compute-0 nova_compute[257802]: 2025-10-02 12:22:26.584 2 DEBUG oslo_concurrency.processutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c/disk.config 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:26.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:26.939 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:26.939 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:26.940 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:27 compute-0 nova_compute[257802]: 2025-10-02 12:22:27.087 2 DEBUG oslo_concurrency.processutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c/disk.config 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:27 compute-0 nova_compute[257802]: 2025-10-02 12:22:27.089 2 INFO nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Deleting local config drive /var/lib/nova/instances/6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c/disk.config because it was imported into RBD.
Oct 02 12:22:27 compute-0 kernel: tap81b397b3-6d: entered promiscuous mode
Oct 02 12:22:27 compute-0 NetworkManager[44987]: <info>  [1759407747.1450] manager: (tap81b397b3-6d): new Tun device (/org/freedesktop/NetworkManager/Devices/186)
Oct 02 12:22:27 compute-0 ovn_controller[148183]: 2025-10-02T12:22:27Z|00394|binding|INFO|Claiming lport 81b397b3-6d28-4163-a930-341d2b78d96a for this chassis.
Oct 02 12:22:27 compute-0 ovn_controller[148183]: 2025-10-02T12:22:27Z|00395|binding|INFO|81b397b3-6d28-4163-a930-341d2b78d96a: Claiming fa:16:3e:f8:30:54 10.100.0.6
Oct 02 12:22:27 compute-0 nova_compute[257802]: 2025-10-02 12:22:27.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:27.154 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:30:54 10.100.0.6'], port_security=['fa:16:3e:f8:30:54 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81df0bd4-1de1-409c-8730-5d718bbb9ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '91d108e807094b0fa8e63a923d2269ee', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1f5f828a-9c59-4be3-90d2-ef448e431573', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=91243388-a708-4071-bc5f-90666534b8e0, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=81b397b3-6d28-4163-a930-341d2b78d96a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:27.156 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 81b397b3-6d28-4163-a930-341d2b78d96a in datapath 81df0bd4-1de1-409c-8730-5d718bbb9ab0 bound to our chassis
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:27.160 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 81df0bd4-1de1-409c-8730-5d718bbb9ab0
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:27.174 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[17893460-bb9d-4f80-8a4b-a5cd035fda1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:27.175 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap81df0bd4-11 in ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:27.176 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap81df0bd4-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:27.176 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7b252ffe-a1ad-4a15-98eb-162b83f9fe72]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:27 compute-0 ovn_controller[148183]: 2025-10-02T12:22:27Z|00396|binding|INFO|Setting lport 81b397b3-6d28-4163-a930-341d2b78d96a ovn-installed in OVS
Oct 02 12:22:27 compute-0 ovn_controller[148183]: 2025-10-02T12:22:27Z|00397|binding|INFO|Setting lport 81b397b3-6d28-4163-a930-341d2b78d96a up in Southbound
Oct 02 12:22:27 compute-0 systemd-udevd[314915]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:27.180 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9f040ae7-394f-476d-9afa-90da959fb928]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:27 compute-0 nova_compute[257802]: 2025-10-02 12:22:27.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:27.191 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[b4e7660f-5411-4364-a914-e9cb7906a9b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:27 compute-0 NetworkManager[44987]: <info>  [1759407747.1945] device (tap81b397b3-6d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:22:27 compute-0 NetworkManager[44987]: <info>  [1759407747.1953] device (tap81b397b3-6d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:22:27 compute-0 systemd-machined[211836]: New machine qemu-46-instance-0000005e.
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:27.205 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d8aa0373-8754-4f98-862c-f9175f13c235]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:27 compute-0 systemd[1]: Started Virtual Machine qemu-46-instance-0000005e.
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:27.232 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[9928019e-0688-4107-b4da-475a97cfd838]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:27 compute-0 systemd-udevd[314921]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:22:27 compute-0 NetworkManager[44987]: <info>  [1759407747.2390] manager: (tap81df0bd4-10): new Veth device (/org/freedesktop/NetworkManager/Devices/187)
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:27.241 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c07c15ae-3ac5-4a0d-ab00-c85a5c228cf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:27 compute-0 ceph-mon[73607]: pgmap v1795: 305 pgs: 305 active+clean; 511 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 7.8 MiB/s wr, 311 op/s
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:27.276 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[f0a76c1d-70ab-4f0f-b3d0-d3e04cb05e7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:27.281 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[502bb30c-3379-4919-8959-88c4e63090d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:27 compute-0 nova_compute[257802]: 2025-10-02 12:22:27.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:27 compute-0 NetworkManager[44987]: <info>  [1759407747.3043] device (tap81df0bd4-10): carrier: link connected
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:27.312 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[00c3dbfb-a014-4f34-87d0-1c5df4658f68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:27.331 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[472d57d4-20c9-4606-b655-5796d97bddae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81df0bd4-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:05:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581492, 'reachable_time': 29206, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314950, 'error': None, 'target': 'ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:27.345 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b9fc359b-d560-4aca-8feb-521bddbcc7b5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe33:5cc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 581492, 'tstamp': 581492}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314951, 'error': None, 'target': 'ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:27.371 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5d2ef19f-0f3b-49cc-8fc2-0ce7876224d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81df0bd4-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:05:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581492, 'reachable_time': 29206, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314952, 'error': None, 'target': 'ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:27.399 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7e3f0437-48b8-42ad-95be-7ce6ad12f6c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 305 active+clean; 511 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 7.1 MiB/s wr, 281 op/s
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:27.460 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4a2ea6ef-5d23-43d4-9f28-b4c45312895a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:27.462 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81df0bd4-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:27.462 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:27.462 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81df0bd4-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:27 compute-0 NetworkManager[44987]: <info>  [1759407747.4651] manager: (tap81df0bd4-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/188)
Oct 02 12:22:27 compute-0 kernel: tap81df0bd4-10: entered promiscuous mode
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:27.467 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap81df0bd4-10, col_values=(('external_ids', {'iface-id': '158c9d13-b7ad-4d55-8f96-3c408ed5e2d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:27 compute-0 ovn_controller[148183]: 2025-10-02T12:22:27Z|00398|binding|INFO|Releasing lport 158c9d13-b7ad-4d55-8f96-3c408ed5e2d5 from this chassis (sb_readonly=0)
Oct 02 12:22:27 compute-0 nova_compute[257802]: 2025-10-02 12:22:27.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:27 compute-0 nova_compute[257802]: 2025-10-02 12:22:27.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:27 compute-0 nova_compute[257802]: 2025-10-02 12:22:27.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:27.487 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/81df0bd4-1de1-409c-8730-5d718bbb9ab0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/81df0bd4-1de1-409c-8730-5d718bbb9ab0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:27.488 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8879cf7d-3f1f-45e3-82a4-992e415e3b82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:27.489 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-81df0bd4-1de1-409c-8730-5d718bbb9ab0
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/81df0bd4-1de1-409c-8730-5d718bbb9ab0.pid.haproxy
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 81df0bd4-1de1-409c-8730-5d718bbb9ab0
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:27.490 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0', 'env', 'PROCESS_TAG=haproxy-81df0bd4-1de1-409c-8730-5d718bbb9ab0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/81df0bd4-1de1-409c-8730-5d718bbb9ab0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:22:27 compute-0 nova_compute[257802]: 2025-10-02 12:22:27.627 2 DEBUG nova.compute.manager [req-9154a972-4243-4958-a250-01bd10da029d req-c8334926-3972-4631-ba6b-051897e198d8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Received event network-vif-plugged-81b397b3-6d28-4163-a930-341d2b78d96a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:27 compute-0 nova_compute[257802]: 2025-10-02 12:22:27.628 2 DEBUG oslo_concurrency.lockutils [req-9154a972-4243-4958-a250-01bd10da029d req-c8334926-3972-4631-ba6b-051897e198d8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:27 compute-0 nova_compute[257802]: 2025-10-02 12:22:27.629 2 DEBUG oslo_concurrency.lockutils [req-9154a972-4243-4958-a250-01bd10da029d req-c8334926-3972-4631-ba6b-051897e198d8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:27 compute-0 nova_compute[257802]: 2025-10-02 12:22:27.630 2 DEBUG oslo_concurrency.lockutils [req-9154a972-4243-4958-a250-01bd10da029d req-c8334926-3972-4631-ba6b-051897e198d8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:27 compute-0 nova_compute[257802]: 2025-10-02 12:22:27.630 2 DEBUG nova.compute.manager [req-9154a972-4243-4958-a250-01bd10da029d req-c8334926-3972-4631-ba6b-051897e198d8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Processing event network-vif-plugged-81b397b3-6d28-4163-a930-341d2b78d96a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:22:27 compute-0 podman[315026]: 2025-10-02 12:22:27.929288361 +0000 UTC m=+0.051541323 container create a2fddf92f838b787649b31f8fbd3dced01cbe168cf0c86ea1e99cde9b240dc9d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:22:27 compute-0 systemd[1]: Started libpod-conmon-a2fddf92f838b787649b31f8fbd3dced01cbe168cf0c86ea1e99cde9b240dc9d.scope.
Oct 02 12:22:28 compute-0 podman[315026]: 2025-10-02 12:22:27.904178096 +0000 UTC m=+0.026431078 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:22:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:22:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7163de237b7261123fdce283dee8b70b43c57a272fd84e44973fef0e44f4b0b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:22:28 compute-0 podman[315026]: 2025-10-02 12:22:28.036280022 +0000 UTC m=+0.158533004 container init a2fddf92f838b787649b31f8fbd3dced01cbe168cf0c86ea1e99cde9b240dc9d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 02 12:22:28 compute-0 podman[315026]: 2025-10-02 12:22:28.042576657 +0000 UTC m=+0.164829619 container start a2fddf92f838b787649b31f8fbd3dced01cbe168cf0c86ea1e99cde9b240dc9d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:22:28 compute-0 neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0[315040]: [NOTICE]   (315044) : New worker (315046) forked
Oct 02 12:22:28 compute-0 neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0[315040]: [NOTICE]   (315044) : Loading success.
Oct 02 12:22:28 compute-0 nova_compute[257802]: 2025-10-02 12:22:28.203 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407748.2031562, 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:22:28 compute-0 nova_compute[257802]: 2025-10-02 12:22:28.204 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] VM Started (Lifecycle Event)
Oct 02 12:22:28 compute-0 nova_compute[257802]: 2025-10-02 12:22:28.208 2 DEBUG nova.compute.manager [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:22:28 compute-0 nova_compute[257802]: 2025-10-02 12:22:28.212 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:22:28 compute-0 nova_compute[257802]: 2025-10-02 12:22:28.215 2 INFO nova.virt.libvirt.driver [-] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Instance spawned successfully.
Oct 02 12:22:28 compute-0 nova_compute[257802]: 2025-10-02 12:22:28.216 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:22:28 compute-0 nova_compute[257802]: 2025-10-02 12:22:28.224 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:28 compute-0 nova_compute[257802]: 2025-10-02 12:22:28.227 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:22:28 compute-0 nova_compute[257802]: 2025-10-02 12:22:28.236 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:28 compute-0 nova_compute[257802]: 2025-10-02 12:22:28.237 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:28 compute-0 nova_compute[257802]: 2025-10-02 12:22:28.237 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:28 compute-0 nova_compute[257802]: 2025-10-02 12:22:28.238 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:28 compute-0 nova_compute[257802]: 2025-10-02 12:22:28.238 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:28 compute-0 nova_compute[257802]: 2025-10-02 12:22:28.239 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:28 compute-0 nova_compute[257802]: 2025-10-02 12:22:28.263 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:22:28 compute-0 nova_compute[257802]: 2025-10-02 12:22:28.263 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407748.2033427, 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:22:28 compute-0 nova_compute[257802]: 2025-10-02 12:22:28.263 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] VM Paused (Lifecycle Event)
Oct 02 12:22:28 compute-0 ceph-mon[73607]: pgmap v1796: 305 pgs: 305 active+clean; 511 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 7.1 MiB/s wr, 281 op/s
Oct 02 12:22:28 compute-0 nova_compute[257802]: 2025-10-02 12:22:28.294 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:28 compute-0 nova_compute[257802]: 2025-10-02 12:22:28.297 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407748.2119033, 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:22:28 compute-0 nova_compute[257802]: 2025-10-02 12:22:28.297 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] VM Resumed (Lifecycle Event)
Oct 02 12:22:28 compute-0 nova_compute[257802]: 2025-10-02 12:22:28.317 2 INFO nova.compute.manager [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Took 9.12 seconds to spawn the instance on the hypervisor.
Oct 02 12:22:28 compute-0 nova_compute[257802]: 2025-10-02 12:22:28.317 2 DEBUG nova.compute.manager [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:28 compute-0 nova_compute[257802]: 2025-10-02 12:22:28.320 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:28 compute-0 nova_compute[257802]: 2025-10-02 12:22:28.327 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:22:28 compute-0 nova_compute[257802]: 2025-10-02 12:22:28.361 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:22:28 compute-0 nova_compute[257802]: 2025-10-02 12:22:28.415 2 INFO nova.compute.manager [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Took 10.24 seconds to build instance.
Oct 02 12:22:28 compute-0 nova_compute[257802]: 2025-10-02 12:22:28.438 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.347s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:28.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:28.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:29 compute-0 nova_compute[257802]: 2025-10-02 12:22:29.065 2 DEBUG nova.virt.libvirt.driver [None req-7c9731da-d448-4494-b5e8-4837b2b99b0c a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:22:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 305 active+clean; 511 MiB data, 992 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 7.2 MiB/s wr, 372 op/s
Oct 02 12:22:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:29 compute-0 nova_compute[257802]: 2025-10-02 12:22:29.726 2 DEBUG nova.compute.manager [req-4b3b1cad-36ff-49d3-a045-ee9ca885f6f8 req-f1c29906-9002-4269-a0e8-e1f9c6f0b3b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Received event network-vif-plugged-81b397b3-6d28-4163-a930-341d2b78d96a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:29 compute-0 nova_compute[257802]: 2025-10-02 12:22:29.728 2 DEBUG oslo_concurrency.lockutils [req-4b3b1cad-36ff-49d3-a045-ee9ca885f6f8 req-f1c29906-9002-4269-a0e8-e1f9c6f0b3b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:29 compute-0 nova_compute[257802]: 2025-10-02 12:22:29.728 2 DEBUG oslo_concurrency.lockutils [req-4b3b1cad-36ff-49d3-a045-ee9ca885f6f8 req-f1c29906-9002-4269-a0e8-e1f9c6f0b3b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:29 compute-0 nova_compute[257802]: 2025-10-02 12:22:29.728 2 DEBUG oslo_concurrency.lockutils [req-4b3b1cad-36ff-49d3-a045-ee9ca885f6f8 req-f1c29906-9002-4269-a0e8-e1f9c6f0b3b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:29 compute-0 nova_compute[257802]: 2025-10-02 12:22:29.728 2 DEBUG nova.compute.manager [req-4b3b1cad-36ff-49d3-a045-ee9ca885f6f8 req-f1c29906-9002-4269-a0e8-e1f9c6f0b3b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] No waiting events found dispatching network-vif-plugged-81b397b3-6d28-4163-a930-341d2b78d96a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:22:29 compute-0 nova_compute[257802]: 2025-10-02 12:22:29.729 2 WARNING nova.compute.manager [req-4b3b1cad-36ff-49d3-a045-ee9ca885f6f8 req-f1c29906-9002-4269-a0e8-e1f9c6f0b3b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Received unexpected event network-vif-plugged-81b397b3-6d28-4163-a930-341d2b78d96a for instance with vm_state active and task_state None.
Oct 02 12:22:29 compute-0 nova_compute[257802]: 2025-10-02 12:22:29.729 2 DEBUG nova.compute.manager [req-4b3b1cad-36ff-49d3-a045-ee9ca885f6f8 req-f1c29906-9002-4269-a0e8-e1f9c6f0b3b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Received event network-changed-7da3e5f9-b358-4404-825b-b1ad43bc54ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:29 compute-0 nova_compute[257802]: 2025-10-02 12:22:29.729 2 DEBUG nova.compute.manager [req-4b3b1cad-36ff-49d3-a045-ee9ca885f6f8 req-f1c29906-9002-4269-a0e8-e1f9c6f0b3b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Refreshing instance network info cache due to event network-changed-7da3e5f9-b358-4404-825b-b1ad43bc54ac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:22:29 compute-0 nova_compute[257802]: 2025-10-02 12:22:29.730 2 DEBUG oslo_concurrency.lockutils [req-4b3b1cad-36ff-49d3-a045-ee9ca885f6f8 req-f1c29906-9002-4269-a0e8-e1f9c6f0b3b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-9ae72f32-b9fd-44eb-b10d-79119ad2ca85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:22:29 compute-0 nova_compute[257802]: 2025-10-02 12:22:29.730 2 DEBUG oslo_concurrency.lockutils [req-4b3b1cad-36ff-49d3-a045-ee9ca885f6f8 req-f1c29906-9002-4269-a0e8-e1f9c6f0b3b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-9ae72f32-b9fd-44eb-b10d-79119ad2ca85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:22:29 compute-0 nova_compute[257802]: 2025-10-02 12:22:29.730 2 DEBUG nova.network.neutron [req-4b3b1cad-36ff-49d3-a045-ee9ca885f6f8 req-f1c29906-9002-4269-a0e8-e1f9c6f0b3b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Refreshing network info cache for port 7da3e5f9-b358-4404-825b-b1ad43bc54ac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:22:29 compute-0 podman[315057]: 2025-10-02 12:22:29.92593038 +0000 UTC m=+0.058762461 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:22:29 compute-0 podman[315056]: 2025-10-02 12:22:29.931694751 +0000 UTC m=+0.066882450 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 02 12:22:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:30.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:30 compute-0 ceph-mon[73607]: pgmap v1797: 305 pgs: 305 active+clean; 511 MiB data, 992 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 7.2 MiB/s wr, 372 op/s
Oct 02 12:22:30 compute-0 ovn_controller[148183]: 2025-10-02T12:22:30Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:96:7b:db 10.100.0.6
Oct 02 12:22:30 compute-0 ovn_controller[148183]: 2025-10-02T12:22:30Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:96:7b:db 10.100.0.6
Oct 02 12:22:30 compute-0 nova_compute[257802]: 2025-10-02 12:22:30.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:30.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:31.050 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1798: 305 pgs: 305 active+clean; 519 MiB data, 998 MiB used, 20 GiB / 21 GiB avail; 7.5 MiB/s rd, 7.4 MiB/s wr, 441 op/s
Oct 02 12:22:32 compute-0 nova_compute[257802]: 2025-10-02 12:22:32.229 2 DEBUG nova.network.neutron [req-4b3b1cad-36ff-49d3-a045-ee9ca885f6f8 req-f1c29906-9002-4269-a0e8-e1f9c6f0b3b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Updated VIF entry in instance network info cache for port 7da3e5f9-b358-4404-825b-b1ad43bc54ac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:22:32 compute-0 nova_compute[257802]: 2025-10-02 12:22:32.230 2 DEBUG nova.network.neutron [req-4b3b1cad-36ff-49d3-a045-ee9ca885f6f8 req-f1c29906-9002-4269-a0e8-e1f9c6f0b3b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Updating instance_info_cache with network_info: [{"id": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "address": "fa:16:3e:fc:ac:3b", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da3e5f9-b3", "ovs_interfaceid": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:22:32 compute-0 nova_compute[257802]: 2025-10-02 12:22:32.253 2 DEBUG oslo_concurrency.lockutils [req-4b3b1cad-36ff-49d3-a045-ee9ca885f6f8 req-f1c29906-9002-4269-a0e8-e1f9c6f0b3b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-9ae72f32-b9fd-44eb-b10d-79119ad2ca85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:22:32 compute-0 nova_compute[257802]: 2025-10-02 12:22:32.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:32.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:32 compute-0 ceph-mon[73607]: pgmap v1798: 305 pgs: 305 active+clean; 519 MiB data, 998 MiB used, 20 GiB / 21 GiB avail; 7.5 MiB/s rd, 7.4 MiB/s wr, 441 op/s
Oct 02 12:22:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:32.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 539 MiB data, 1014 MiB used, 20 GiB / 21 GiB avail; 9.0 MiB/s rd, 6.9 MiB/s wr, 480 op/s
Oct 02 12:22:34 compute-0 nova_compute[257802]: 2025-10-02 12:22:34.330 2 DEBUG oslo_concurrency.lockutils [None req-976e601c-7e88-4ebb-af00-a59aacc59e66 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Acquiring lock "6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:34 compute-0 nova_compute[257802]: 2025-10-02 12:22:34.330 2 DEBUG oslo_concurrency.lockutils [None req-976e601c-7e88-4ebb-af00-a59aacc59e66 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:34 compute-0 nova_compute[257802]: 2025-10-02 12:22:34.331 2 DEBUG oslo_concurrency.lockutils [None req-976e601c-7e88-4ebb-af00-a59aacc59e66 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Acquiring lock "6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:34 compute-0 nova_compute[257802]: 2025-10-02 12:22:34.331 2 DEBUG oslo_concurrency.lockutils [None req-976e601c-7e88-4ebb-af00-a59aacc59e66 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:34 compute-0 nova_compute[257802]: 2025-10-02 12:22:34.331 2 DEBUG oslo_concurrency.lockutils [None req-976e601c-7e88-4ebb-af00-a59aacc59e66 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:34 compute-0 nova_compute[257802]: 2025-10-02 12:22:34.332 2 INFO nova.compute.manager [None req-976e601c-7e88-4ebb-af00-a59aacc59e66 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Terminating instance
Oct 02 12:22:34 compute-0 nova_compute[257802]: 2025-10-02 12:22:34.333 2 DEBUG nova.compute.manager [None req-976e601c-7e88-4ebb-af00-a59aacc59e66 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:22:34 compute-0 kernel: tap81b397b3-6d (unregistering): left promiscuous mode
Oct 02 12:22:34 compute-0 NetworkManager[44987]: <info>  [1759407754.3821] device (tap81b397b3-6d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:22:34 compute-0 nova_compute[257802]: 2025-10-02 12:22:34.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:34 compute-0 ovn_controller[148183]: 2025-10-02T12:22:34Z|00399|binding|INFO|Releasing lport 81b397b3-6d28-4163-a930-341d2b78d96a from this chassis (sb_readonly=0)
Oct 02 12:22:34 compute-0 ovn_controller[148183]: 2025-10-02T12:22:34Z|00400|binding|INFO|Setting lport 81b397b3-6d28-4163-a930-341d2b78d96a down in Southbound
Oct 02 12:22:34 compute-0 ovn_controller[148183]: 2025-10-02T12:22:34Z|00401|binding|INFO|Removing iface tap81b397b3-6d ovn-installed in OVS
Oct 02 12:22:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:34.433 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:30:54 10.100.0.6'], port_security=['fa:16:3e:f8:30:54 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81df0bd4-1de1-409c-8730-5d718bbb9ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '91d108e807094b0fa8e63a923d2269ee', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1f5f828a-9c59-4be3-90d2-ef448e431573', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=91243388-a708-4071-bc5f-90666534b8e0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=81b397b3-6d28-4163-a930-341d2b78d96a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:22:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:34.434 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 81b397b3-6d28-4163-a930-341d2b78d96a in datapath 81df0bd4-1de1-409c-8730-5d718bbb9ab0 unbound from our chassis
Oct 02 12:22:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:34.435 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 81df0bd4-1de1-409c-8730-5d718bbb9ab0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:22:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:34.437 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[76d3a0c4-8b9d-433d-97c8-1df4fc9b9e3e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:34.437 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0 namespace which is not needed anymore
Oct 02 12:22:34 compute-0 nova_compute[257802]: 2025-10-02 12:22:34.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:34.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:34 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Oct 02 12:22:34 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000005e.scope: Consumed 6.774s CPU time.
Oct 02 12:22:34 compute-0 systemd-machined[211836]: Machine qemu-46-instance-0000005e terminated.
Oct 02 12:22:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:34 compute-0 nova_compute[257802]: 2025-10-02 12:22:34.563 2 INFO nova.virt.libvirt.driver [-] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Instance destroyed successfully.
Oct 02 12:22:34 compute-0 nova_compute[257802]: 2025-10-02 12:22:34.563 2 DEBUG nova.objects.instance [None req-976e601c-7e88-4ebb-af00-a59aacc59e66 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lazy-loading 'resources' on Instance uuid 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:34 compute-0 neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0[315040]: [NOTICE]   (315044) : haproxy version is 2.8.14-c23fe91
Oct 02 12:22:34 compute-0 neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0[315040]: [NOTICE]   (315044) : path to executable is /usr/sbin/haproxy
Oct 02 12:22:34 compute-0 neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0[315040]: [WARNING]  (315044) : Exiting Master process...
Oct 02 12:22:34 compute-0 neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0[315040]: [WARNING]  (315044) : Exiting Master process...
Oct 02 12:22:34 compute-0 neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0[315040]: [ALERT]    (315044) : Current worker (315046) exited with code 143 (Terminated)
Oct 02 12:22:34 compute-0 neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0[315040]: [WARNING]  (315044) : All workers exited. Exiting... (0)
Oct 02 12:22:34 compute-0 systemd[1]: libpod-a2fddf92f838b787649b31f8fbd3dced01cbe168cf0c86ea1e99cde9b240dc9d.scope: Deactivated successfully.
Oct 02 12:22:34 compute-0 podman[315123]: 2025-10-02 12:22:34.575558134 +0000 UTC m=+0.052446447 container died a2fddf92f838b787649b31f8fbd3dced01cbe168cf0c86ea1e99cde9b240dc9d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 02 12:22:34 compute-0 nova_compute[257802]: 2025-10-02 12:22:34.579 2 DEBUG nova.virt.libvirt.vif [None req-976e601c-7e88-4ebb-af00-a59aacc59e66 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-478094513',display_name='tempest-ListServersNegativeTestJSON-server-478094513-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-478094513-1',id=94,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:22:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='91d108e807094b0fa8e63a923d2269ee',ramdisk_id='',reservation_id='r-jgpyr0ft',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-1913538533',owner_user_name='tempest-ListServersNegativeTestJSON-1913538533-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:22:28Z,user_data=None,user_id='93e805bcb0e047ca9d45c653f5ec913d',uuid=6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "81b397b3-6d28-4163-a930-341d2b78d96a", "address": "fa:16:3e:f8:30:54", "network": {"id": "81df0bd4-1de1-409c-8730-5d718bbb9ab0", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-681141919-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "91d108e807094b0fa8e63a923d2269ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81b397b3-6d", "ovs_interfaceid": "81b397b3-6d28-4163-a930-341d2b78d96a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:22:34 compute-0 nova_compute[257802]: 2025-10-02 12:22:34.580 2 DEBUG nova.network.os_vif_util [None req-976e601c-7e88-4ebb-af00-a59aacc59e66 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Converting VIF {"id": "81b397b3-6d28-4163-a930-341d2b78d96a", "address": "fa:16:3e:f8:30:54", "network": {"id": "81df0bd4-1de1-409c-8730-5d718bbb9ab0", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-681141919-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "91d108e807094b0fa8e63a923d2269ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81b397b3-6d", "ovs_interfaceid": "81b397b3-6d28-4163-a930-341d2b78d96a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:22:34 compute-0 nova_compute[257802]: 2025-10-02 12:22:34.580 2 DEBUG nova.network.os_vif_util [None req-976e601c-7e88-4ebb-af00-a59aacc59e66 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:30:54,bridge_name='br-int',has_traffic_filtering=True,id=81b397b3-6d28-4163-a930-341d2b78d96a,network=Network(81df0bd4-1de1-409c-8730-5d718bbb9ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81b397b3-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:22:34 compute-0 nova_compute[257802]: 2025-10-02 12:22:34.584 2 DEBUG os_vif [None req-976e601c-7e88-4ebb-af00-a59aacc59e66 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:30:54,bridge_name='br-int',has_traffic_filtering=True,id=81b397b3-6d28-4163-a930-341d2b78d96a,network=Network(81df0bd4-1de1-409c-8730-5d718bbb9ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81b397b3-6d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:22:34 compute-0 nova_compute[257802]: 2025-10-02 12:22:34.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:34 compute-0 nova_compute[257802]: 2025-10-02 12:22:34.586 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81b397b3-6d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:34 compute-0 nova_compute[257802]: 2025-10-02 12:22:34.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:34 compute-0 nova_compute[257802]: 2025-10-02 12:22:34.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:34 compute-0 nova_compute[257802]: 2025-10-02 12:22:34.591 2 INFO os_vif [None req-976e601c-7e88-4ebb-af00-a59aacc59e66 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:30:54,bridge_name='br-int',has_traffic_filtering=True,id=81b397b3-6d28-4163-a930-341d2b78d96a,network=Network(81df0bd4-1de1-409c-8730-5d718bbb9ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81b397b3-6d')
Oct 02 12:22:34 compute-0 ceph-mon[73607]: pgmap v1799: 305 pgs: 305 active+clean; 539 MiB data, 1014 MiB used, 20 GiB / 21 GiB avail; 9.0 MiB/s rd, 6.9 MiB/s wr, 480 op/s
Oct 02 12:22:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a2fddf92f838b787649b31f8fbd3dced01cbe168cf0c86ea1e99cde9b240dc9d-userdata-shm.mount: Deactivated successfully.
Oct 02 12:22:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7163de237b7261123fdce283dee8b70b43c57a272fd84e44973fef0e44f4b0b-merged.mount: Deactivated successfully.
Oct 02 12:22:34 compute-0 podman[315123]: 2025-10-02 12:22:34.612614501 +0000 UTC m=+0.089502814 container cleanup a2fddf92f838b787649b31f8fbd3dced01cbe168cf0c86ea1e99cde9b240dc9d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 12:22:34 compute-0 systemd[1]: libpod-conmon-a2fddf92f838b787649b31f8fbd3dced01cbe168cf0c86ea1e99cde9b240dc9d.scope: Deactivated successfully.
Oct 02 12:22:34 compute-0 nova_compute[257802]: 2025-10-02 12:22:34.666 2 DEBUG nova.compute.manager [req-9590cd70-e7ee-424b-bf18-10dfb88eccd8 req-7c4f8626-6ce8-462b-969f-e3016f3598ea d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Received event network-vif-unplugged-81b397b3-6d28-4163-a930-341d2b78d96a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:34 compute-0 nova_compute[257802]: 2025-10-02 12:22:34.667 2 DEBUG oslo_concurrency.lockutils [req-9590cd70-e7ee-424b-bf18-10dfb88eccd8 req-7c4f8626-6ce8-462b-969f-e3016f3598ea d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:34 compute-0 nova_compute[257802]: 2025-10-02 12:22:34.667 2 DEBUG oslo_concurrency.lockutils [req-9590cd70-e7ee-424b-bf18-10dfb88eccd8 req-7c4f8626-6ce8-462b-969f-e3016f3598ea d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:34 compute-0 nova_compute[257802]: 2025-10-02 12:22:34.668 2 DEBUG oslo_concurrency.lockutils [req-9590cd70-e7ee-424b-bf18-10dfb88eccd8 req-7c4f8626-6ce8-462b-969f-e3016f3598ea d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:34 compute-0 nova_compute[257802]: 2025-10-02 12:22:34.668 2 DEBUG nova.compute.manager [req-9590cd70-e7ee-424b-bf18-10dfb88eccd8 req-7c4f8626-6ce8-462b-969f-e3016f3598ea d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] No waiting events found dispatching network-vif-unplugged-81b397b3-6d28-4163-a930-341d2b78d96a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:22:34 compute-0 nova_compute[257802]: 2025-10-02 12:22:34.668 2 DEBUG nova.compute.manager [req-9590cd70-e7ee-424b-bf18-10dfb88eccd8 req-7c4f8626-6ce8-462b-969f-e3016f3598ea d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Received event network-vif-unplugged-81b397b3-6d28-4163-a930-341d2b78d96a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:22:34 compute-0 podman[315178]: 2025-10-02 12:22:34.677121021 +0000 UTC m=+0.043736972 container remove a2fddf92f838b787649b31f8fbd3dced01cbe168cf0c86ea1e99cde9b240dc9d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:22:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:34.682 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9a74a65f-ef42-4dfa-bc3e-5fbc8bca3d4a]: (4, ('Thu Oct  2 12:22:34 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0 (a2fddf92f838b787649b31f8fbd3dced01cbe168cf0c86ea1e99cde9b240dc9d)\na2fddf92f838b787649b31f8fbd3dced01cbe168cf0c86ea1e99cde9b240dc9d\nThu Oct  2 12:22:34 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0 (a2fddf92f838b787649b31f8fbd3dced01cbe168cf0c86ea1e99cde9b240dc9d)\na2fddf92f838b787649b31f8fbd3dced01cbe168cf0c86ea1e99cde9b240dc9d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:34.685 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[38c25179-7ded-4270-b2fe-0ac98f8ff573]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:34.686 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81df0bd4-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:34 compute-0 nova_compute[257802]: 2025-10-02 12:22:34.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:34 compute-0 kernel: tap81df0bd4-10: left promiscuous mode
Oct 02 12:22:34 compute-0 nova_compute[257802]: 2025-10-02 12:22:34.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:34.709 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[41730aaf-8925-45ff-b50e-38addbe324e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:34.734 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[be101acc-b142-46bd-a857-e3a86106120e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:34.736 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[671a8714-056f-4069-9b4e-bbe0b55410fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:34.751 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e0226644-5587-4f92-b32a-7f5f9e4c640b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581485, 'reachable_time': 34215, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315199, 'error': None, 'target': 'ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:34.753 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:22:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:34.753 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[93448ae6-7c2d-47a6-b6a8-839322a01a01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:34 compute-0 systemd[1]: run-netns-ovnmeta\x2d81df0bd4\x2d1de1\x2d409c\x2d8730\x2d5d718bbb9ab0.mount: Deactivated successfully.
Oct 02 12:22:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:34.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:35 compute-0 nova_compute[257802]: 2025-10-02 12:22:35.061 2 INFO nova.virt.libvirt.driver [None req-976e601c-7e88-4ebb-af00-a59aacc59e66 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Deleting instance files /var/lib/nova/instances/6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c_del
Oct 02 12:22:35 compute-0 nova_compute[257802]: 2025-10-02 12:22:35.062 2 INFO nova.virt.libvirt.driver [None req-976e601c-7e88-4ebb-af00-a59aacc59e66 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Deletion of /var/lib/nova/instances/6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c_del complete
Oct 02 12:22:35 compute-0 nova_compute[257802]: 2025-10-02 12:22:35.121 2 INFO nova.compute.manager [None req-976e601c-7e88-4ebb-af00-a59aacc59e66 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Took 0.79 seconds to destroy the instance on the hypervisor.
Oct 02 12:22:35 compute-0 nova_compute[257802]: 2025-10-02 12:22:35.122 2 DEBUG oslo.service.loopingcall [None req-976e601c-7e88-4ebb-af00-a59aacc59e66 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:22:35 compute-0 nova_compute[257802]: 2025-10-02 12:22:35.123 2 DEBUG nova.compute.manager [-] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:22:35 compute-0 nova_compute[257802]: 2025-10-02 12:22:35.123 2 DEBUG nova.network.neutron [-] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:22:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 527 MiB data, 1017 MiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 4.4 MiB/s wr, 449 op/s
Oct 02 12:22:35 compute-0 nova_compute[257802]: 2025-10-02 12:22:35.725 2 DEBUG nova.network.neutron [-] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:22:35 compute-0 nova_compute[257802]: 2025-10-02 12:22:35.751 2 INFO nova.compute.manager [-] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Took 0.63 seconds to deallocate network for instance.
Oct 02 12:22:35 compute-0 nova_compute[257802]: 2025-10-02 12:22:35.808 2 DEBUG oslo_concurrency.lockutils [None req-976e601c-7e88-4ebb-af00-a59aacc59e66 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:35 compute-0 nova_compute[257802]: 2025-10-02 12:22:35.809 2 DEBUG oslo_concurrency.lockutils [None req-976e601c-7e88-4ebb-af00-a59aacc59e66 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:35 compute-0 nova_compute[257802]: 2025-10-02 12:22:35.891 2 DEBUG oslo_concurrency.processutils [None req-976e601c-7e88-4ebb-af00-a59aacc59e66 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:36 compute-0 podman[315201]: 2025-10-02 12:22:36.009109709 +0000 UTC m=+0.146205503 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:22:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:22:36 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2595857861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:36 compute-0 nova_compute[257802]: 2025-10-02 12:22:36.382 2 DEBUG oslo_concurrency.processutils [None req-976e601c-7e88-4ebb-af00-a59aacc59e66 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:36 compute-0 nova_compute[257802]: 2025-10-02 12:22:36.389 2 DEBUG nova.compute.provider_tree [None req-976e601c-7e88-4ebb-af00-a59aacc59e66 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:22:36 compute-0 nova_compute[257802]: 2025-10-02 12:22:36.411 2 DEBUG nova.scheduler.client.report [None req-976e601c-7e88-4ebb-af00-a59aacc59e66 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:22:36 compute-0 nova_compute[257802]: 2025-10-02 12:22:36.437 2 DEBUG oslo_concurrency.lockutils [None req-976e601c-7e88-4ebb-af00-a59aacc59e66 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:36.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:36 compute-0 nova_compute[257802]: 2025-10-02 12:22:36.474 2 INFO nova.scheduler.client.report [None req-976e601c-7e88-4ebb-af00-a59aacc59e66 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Deleted allocations for instance 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c
Oct 02 12:22:36 compute-0 nova_compute[257802]: 2025-10-02 12:22:36.567 2 DEBUG oslo_concurrency.lockutils [None req-976e601c-7e88-4ebb-af00-a59aacc59e66 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:36 compute-0 ceph-mon[73607]: pgmap v1800: 305 pgs: 305 active+clean; 527 MiB data, 1017 MiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 4.4 MiB/s wr, 449 op/s
Oct 02 12:22:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2595857861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:36 compute-0 nova_compute[257802]: 2025-10-02 12:22:36.761 2 DEBUG nova.compute.manager [req-7218941f-86bc-4b66-9fb6-c57f11a3dcdb req-8df54304-ad33-4134-bb41-e2b974837cc1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Received event network-vif-plugged-81b397b3-6d28-4163-a930-341d2b78d96a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:36 compute-0 nova_compute[257802]: 2025-10-02 12:22:36.761 2 DEBUG oslo_concurrency.lockutils [req-7218941f-86bc-4b66-9fb6-c57f11a3dcdb req-8df54304-ad33-4134-bb41-e2b974837cc1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:36 compute-0 nova_compute[257802]: 2025-10-02 12:22:36.762 2 DEBUG oslo_concurrency.lockutils [req-7218941f-86bc-4b66-9fb6-c57f11a3dcdb req-8df54304-ad33-4134-bb41-e2b974837cc1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:36 compute-0 nova_compute[257802]: 2025-10-02 12:22:36.762 2 DEBUG oslo_concurrency.lockutils [req-7218941f-86bc-4b66-9fb6-c57f11a3dcdb req-8df54304-ad33-4134-bb41-e2b974837cc1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:36 compute-0 nova_compute[257802]: 2025-10-02 12:22:36.762 2 DEBUG nova.compute.manager [req-7218941f-86bc-4b66-9fb6-c57f11a3dcdb req-8df54304-ad33-4134-bb41-e2b974837cc1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] No waiting events found dispatching network-vif-plugged-81b397b3-6d28-4163-a930-341d2b78d96a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:22:36 compute-0 nova_compute[257802]: 2025-10-02 12:22:36.763 2 WARNING nova.compute.manager [req-7218941f-86bc-4b66-9fb6-c57f11a3dcdb req-8df54304-ad33-4134-bb41-e2b974837cc1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Received unexpected event network-vif-plugged-81b397b3-6d28-4163-a930-341d2b78d96a for instance with vm_state deleted and task_state None.
Oct 02 12:22:36 compute-0 nova_compute[257802]: 2025-10-02 12:22:36.763 2 DEBUG nova.compute.manager [req-7218941f-86bc-4b66-9fb6-c57f11a3dcdb req-8df54304-ad33-4134-bb41-e2b974837cc1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Received event network-vif-deleted-81b397b3-6d28-4163-a930-341d2b78d96a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:36.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:37 compute-0 nova_compute[257802]: 2025-10-02 12:22:37.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 305 active+clean; 527 MiB data, 1017 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 2.2 MiB/s wr, 369 op/s
Oct 02 12:22:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:38.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:38 compute-0 ceph-mon[73607]: pgmap v1801: 305 pgs: 305 active+clean; 527 MiB data, 1017 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 2.2 MiB/s wr, 369 op/s
Oct 02 12:22:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:38.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:39 compute-0 ovn_controller[148183]: 2025-10-02T12:22:39Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fc:ac:3b 10.100.0.3
Oct 02 12:22:39 compute-0 ovn_controller[148183]: 2025-10-02T12:22:39Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fc:ac:3b 10.100.0.3
Oct 02 12:22:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 305 active+clean; 503 MiB data, 1017 MiB used, 20 GiB / 21 GiB avail; 8.1 MiB/s rd, 3.1 MiB/s wr, 403 op/s
Oct 02 12:22:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:39 compute-0 nova_compute[257802]: 2025-10-02 12:22:39.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:40 compute-0 nova_compute[257802]: 2025-10-02 12:22:40.111 2 DEBUG nova.virt.libvirt.driver [None req-7c9731da-d448-4494-b5e8-4837b2b99b0c a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:22:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:22:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:40.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:22:40 compute-0 sudo[315251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:22:40 compute-0 sudo[315251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:40 compute-0 sudo[315251]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:40 compute-0 sudo[315277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:22:40 compute-0 sudo[315277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:40 compute-0 sudo[315277]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:40 compute-0 ceph-mon[73607]: pgmap v1802: 305 pgs: 305 active+clean; 503 MiB data, 1017 MiB used, 20 GiB / 21 GiB avail; 8.1 MiB/s rd, 3.1 MiB/s wr, 403 op/s
Oct 02 12:22:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:40.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 305 active+clean; 535 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 4.7 MiB/s wr, 347 op/s
Oct 02 12:22:42 compute-0 nova_compute[257802]: 2025-10-02 12:22:42.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:22:42
Oct 02 12:22:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:22:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:22:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', '.rgw.root']
Oct 02 12:22:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:22:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:22:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:42.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:22:42 compute-0 kernel: tapd92a78aa-54 (unregistering): left promiscuous mode
Oct 02 12:22:42 compute-0 NetworkManager[44987]: <info>  [1759407762.5178] device (tapd92a78aa-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:22:42 compute-0 nova_compute[257802]: 2025-10-02 12:22:42.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:42 compute-0 ovn_controller[148183]: 2025-10-02T12:22:42Z|00402|binding|INFO|Releasing lport d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f from this chassis (sb_readonly=0)
Oct 02 12:22:42 compute-0 ovn_controller[148183]: 2025-10-02T12:22:42Z|00403|binding|INFO|Setting lport d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f down in Southbound
Oct 02 12:22:42 compute-0 ovn_controller[148183]: 2025-10-02T12:22:42Z|00404|binding|INFO|Removing iface tapd92a78aa-54 ovn-installed in OVS
Oct 02 12:22:42 compute-0 nova_compute[257802]: 2025-10-02 12:22:42.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:42.543 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:7b:db 10.100.0.6'], port_security=['fa:16:3e:96:7b:db 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '12549ae0-14ff-4982-be0e-4ada2a821895', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7754c79a-cca5-48c7-9169-831eaad23ccc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c2c11ebecb14f3188f35ea473c4ca02', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3c0d053f-a096-4f8c-8162-5ef19e29b5d7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=45b5774e-2213-45dd-ab74-f2a3868d167c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:22:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:42.545 158261 INFO neutron.agent.ovn.metadata.agent [-] Port d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f in datapath 7754c79a-cca5-48c7-9169-831eaad23ccc unbound from our chassis
Oct 02 12:22:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:42.548 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7754c79a-cca5-48c7-9169-831eaad23ccc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:22:42 compute-0 nova_compute[257802]: 2025-10-02 12:22:42.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:42.550 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6cd37628-3353-4715-bafe-6bdc2bb3dd1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:42.551 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc namespace which is not needed anymore
Oct 02 12:22:42 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Oct 02 12:22:42 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d0000005c.scope: Consumed 14.440s CPU time.
Oct 02 12:22:42 compute-0 systemd-machined[211836]: Machine qemu-44-instance-0000005c terminated.
Oct 02 12:22:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:22:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:22:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:22:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:22:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:22:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:22:42 compute-0 nova_compute[257802]: 2025-10-02 12:22:42.828 2 DEBUG nova.compute.manager [req-3a42c3a2-9fef-4f37-be0b-96d7eb1c7fac req-81a89d09-67c5-41d0-b3f7-b0f6ac41c5c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Received event network-vif-unplugged-d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:42 compute-0 nova_compute[257802]: 2025-10-02 12:22:42.828 2 DEBUG oslo_concurrency.lockutils [req-3a42c3a2-9fef-4f37-be0b-96d7eb1c7fac req-81a89d09-67c5-41d0-b3f7-b0f6ac41c5c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "12549ae0-14ff-4982-be0e-4ada2a821895-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:42 compute-0 nova_compute[257802]: 2025-10-02 12:22:42.829 2 DEBUG oslo_concurrency.lockutils [req-3a42c3a2-9fef-4f37-be0b-96d7eb1c7fac req-81a89d09-67c5-41d0-b3f7-b0f6ac41c5c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "12549ae0-14ff-4982-be0e-4ada2a821895-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:42 compute-0 nova_compute[257802]: 2025-10-02 12:22:42.829 2 DEBUG oslo_concurrency.lockutils [req-3a42c3a2-9fef-4f37-be0b-96d7eb1c7fac req-81a89d09-67c5-41d0-b3f7-b0f6ac41c5c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "12549ae0-14ff-4982-be0e-4ada2a821895-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:42 compute-0 nova_compute[257802]: 2025-10-02 12:22:42.829 2 DEBUG nova.compute.manager [req-3a42c3a2-9fef-4f37-be0b-96d7eb1c7fac req-81a89d09-67c5-41d0-b3f7-b0f6ac41c5c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] No waiting events found dispatching network-vif-unplugged-d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:22:42 compute-0 nova_compute[257802]: 2025-10-02 12:22:42.829 2 WARNING nova.compute.manager [req-3a42c3a2-9fef-4f37-be0b-96d7eb1c7fac req-81a89d09-67c5-41d0-b3f7-b0f6ac41c5c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Received unexpected event network-vif-unplugged-d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f for instance with vm_state active and task_state powering-off.
Oct 02 12:22:42 compute-0 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[314040]: [NOTICE]   (314044) : haproxy version is 2.8.14-c23fe91
Oct 02 12:22:42 compute-0 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[314040]: [NOTICE]   (314044) : path to executable is /usr/sbin/haproxy
Oct 02 12:22:42 compute-0 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[314040]: [WARNING]  (314044) : Exiting Master process...
Oct 02 12:22:42 compute-0 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[314040]: [WARNING]  (314044) : Exiting Master process...
Oct 02 12:22:42 compute-0 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[314040]: [ALERT]    (314044) : Current worker (314046) exited with code 143 (Terminated)
Oct 02 12:22:42 compute-0 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[314040]: [WARNING]  (314044) : All workers exited. Exiting... (0)
Oct 02 12:22:42 compute-0 systemd[1]: libpod-50f0de6ebe16624343f49564200a3e8f42c1d45ea9eb4921e3fbf2cfe57cef69.scope: Deactivated successfully.
Oct 02 12:22:42 compute-0 podman[315326]: 2025-10-02 12:22:42.857613434 +0000 UTC m=+0.217589981 container died 50f0de6ebe16624343f49564200a3e8f42c1d45ea9eb4921e3fbf2cfe57cef69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:22:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:42.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:42 compute-0 ceph-mon[73607]: pgmap v1803: 305 pgs: 305 active+clean; 535 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 4.7 MiB/s wr, 347 op/s
Oct 02 12:22:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/514671987' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1013036572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:22:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:22:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:22:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:22:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:22:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-50f0de6ebe16624343f49564200a3e8f42c1d45ea9eb4921e3fbf2cfe57cef69-userdata-shm.mount: Deactivated successfully.
Oct 02 12:22:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cbaf3741b1a7ca7bea6ba09b1cdae028ddec0db76cae3bd625c65f2ce60647b-merged.mount: Deactivated successfully.
Oct 02 12:22:43 compute-0 nova_compute[257802]: 2025-10-02 12:22:43.132 2 INFO nova.virt.libvirt.driver [None req-7c9731da-d448-4494-b5e8-4837b2b99b0c a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Instance shutdown successfully after 24 seconds.
Oct 02 12:22:43 compute-0 nova_compute[257802]: 2025-10-02 12:22:43.140 2 INFO nova.virt.libvirt.driver [-] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Instance destroyed successfully.
Oct 02 12:22:43 compute-0 nova_compute[257802]: 2025-10-02 12:22:43.140 2 DEBUG nova.objects.instance [None req-7c9731da-d448-4494-b5e8-4837b2b99b0c a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lazy-loading 'numa_topology' on Instance uuid 12549ae0-14ff-4982-be0e-4ada2a821895 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:43 compute-0 nova_compute[257802]: 2025-10-02 12:22:43.163 2 DEBUG nova.compute.manager [None req-7c9731da-d448-4494-b5e8-4837b2b99b0c a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:43 compute-0 nova_compute[257802]: 2025-10-02 12:22:43.229 2 DEBUG oslo_concurrency.lockutils [None req-7c9731da-d448-4494-b5e8-4837b2b99b0c a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "12549ae0-14ff-4982-be0e-4ada2a821895" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 24.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:43 compute-0 podman[315326]: 2025-10-02 12:22:43.271476801 +0000 UTC m=+0.631453348 container cleanup 50f0de6ebe16624343f49564200a3e8f42c1d45ea9eb4921e3fbf2cfe57cef69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:22:43 compute-0 systemd[1]: libpod-conmon-50f0de6ebe16624343f49564200a3e8f42c1d45ea9eb4921e3fbf2cfe57cef69.scope: Deactivated successfully.
Oct 02 12:22:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:22:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:22:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:22:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:22:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:22:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 503 MiB data, 1018 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 5.9 MiB/s wr, 333 op/s
Oct 02 12:22:43 compute-0 podman[315367]: 2025-10-02 12:22:43.523097535 +0000 UTC m=+0.218015201 container remove 50f0de6ebe16624343f49564200a3e8f42c1d45ea9eb4921e3fbf2cfe57cef69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 12:22:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:43.533 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8d8d1914-507f-4181-b22e-ee40ad51fec7]: (4, ('Thu Oct  2 12:22:42 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc (50f0de6ebe16624343f49564200a3e8f42c1d45ea9eb4921e3fbf2cfe57cef69)\n50f0de6ebe16624343f49564200a3e8f42c1d45ea9eb4921e3fbf2cfe57cef69\nThu Oct  2 12:22:43 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc (50f0de6ebe16624343f49564200a3e8f42c1d45ea9eb4921e3fbf2cfe57cef69)\n50f0de6ebe16624343f49564200a3e8f42c1d45ea9eb4921e3fbf2cfe57cef69\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:43.535 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5288752b-8d17-431b-af7a-06d2652a3961]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:43.536 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7754c79a-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:43 compute-0 nova_compute[257802]: 2025-10-02 12:22:43.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:43 compute-0 kernel: tap7754c79a-c0: left promiscuous mode
Oct 02 12:22:43 compute-0 nova_compute[257802]: 2025-10-02 12:22:43.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:43.600 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8dd52a63-55c1-4032-992c-68bc3d093771]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:43.631 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[25fa48a4-e8be-4896-abca-6118d280ba2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:43.633 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[33115931-2182-4c8b-86cc-96de60cb2568]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:43.647 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c5fd2c82-0196-4958-8f08-92c95f4a9bec]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580361, 'reachable_time': 19565, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315385, 'error': None, 'target': 'ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:43.649 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:22:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:22:43.649 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[1d446d2e-78f1-4e38-882f-d053afc8ccba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d7754c79a\x2dcca5\x2d48c7\x2d9169\x2d831eaad23ccc.mount: Deactivated successfully.
Oct 02 12:22:44 compute-0 nova_compute[257802]: 2025-10-02 12:22:44.138 2 DEBUG oslo_concurrency.lockutils [None req-56bb6b0c-b272-46ba-b1fc-ed326e66138b a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "12549ae0-14ff-4982-be0e-4ada2a821895" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:44 compute-0 nova_compute[257802]: 2025-10-02 12:22:44.138 2 DEBUG oslo_concurrency.lockutils [None req-56bb6b0c-b272-46ba-b1fc-ed326e66138b a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "12549ae0-14ff-4982-be0e-4ada2a821895" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:44 compute-0 nova_compute[257802]: 2025-10-02 12:22:44.139 2 DEBUG oslo_concurrency.lockutils [None req-56bb6b0c-b272-46ba-b1fc-ed326e66138b a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "12549ae0-14ff-4982-be0e-4ada2a821895-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:44 compute-0 nova_compute[257802]: 2025-10-02 12:22:44.139 2 DEBUG oslo_concurrency.lockutils [None req-56bb6b0c-b272-46ba-b1fc-ed326e66138b a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "12549ae0-14ff-4982-be0e-4ada2a821895-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:44 compute-0 nova_compute[257802]: 2025-10-02 12:22:44.140 2 DEBUG oslo_concurrency.lockutils [None req-56bb6b0c-b272-46ba-b1fc-ed326e66138b a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "12549ae0-14ff-4982-be0e-4ada2a821895-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:44 compute-0 nova_compute[257802]: 2025-10-02 12:22:44.142 2 INFO nova.compute.manager [None req-56bb6b0c-b272-46ba-b1fc-ed326e66138b a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Terminating instance
Oct 02 12:22:44 compute-0 nova_compute[257802]: 2025-10-02 12:22:44.144 2 DEBUG nova.compute.manager [None req-56bb6b0c-b272-46ba-b1fc-ed326e66138b a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:22:44 compute-0 nova_compute[257802]: 2025-10-02 12:22:44.154 2 INFO nova.virt.libvirt.driver [-] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Instance destroyed successfully.
Oct 02 12:22:44 compute-0 nova_compute[257802]: 2025-10-02 12:22:44.154 2 DEBUG nova.objects.instance [None req-56bb6b0c-b272-46ba-b1fc-ed326e66138b a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lazy-loading 'resources' on Instance uuid 12549ae0-14ff-4982-be0e-4ada2a821895 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:44 compute-0 nova_compute[257802]: 2025-10-02 12:22:44.173 2 DEBUG nova.virt.libvirt.vif [None req-56bb6b0c-b272-46ba-b1fc-ed326e66138b a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1694975376',display_name='tempest-DeleteServersTestJSON-server-1694975376',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1694975376',id=92,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:22:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='1c2c11ebecb14f3188f35ea473c4ca02',ramdisk_id='',reservation_id='r-218tqu1t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1602490521',owner_user_name='tempest-DeleteServersTestJSON-1602490521-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:22:43Z,user_data=None,user_id='a9f7faffac7240869a0196df1ddda7e5',uuid=12549ae0-14ff-4982-be0e-4ada2a821895,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f", "address": "fa:16:3e:96:7b:db", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92a78aa-54", "ovs_interfaceid": "d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:22:44 compute-0 nova_compute[257802]: 2025-10-02 12:22:44.174 2 DEBUG nova.network.os_vif_util [None req-56bb6b0c-b272-46ba-b1fc-ed326e66138b a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converting VIF {"id": "d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f", "address": "fa:16:3e:96:7b:db", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92a78aa-54", "ovs_interfaceid": "d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:22:44 compute-0 nova_compute[257802]: 2025-10-02 12:22:44.176 2 DEBUG nova.network.os_vif_util [None req-56bb6b0c-b272-46ba-b1fc-ed326e66138b a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:7b:db,bridge_name='br-int',has_traffic_filtering=True,id=d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd92a78aa-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:22:44 compute-0 nova_compute[257802]: 2025-10-02 12:22:44.176 2 DEBUG os_vif [None req-56bb6b0c-b272-46ba-b1fc-ed326e66138b a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:7b:db,bridge_name='br-int',has_traffic_filtering=True,id=d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd92a78aa-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:22:44 compute-0 nova_compute[257802]: 2025-10-02 12:22:44.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:44 compute-0 nova_compute[257802]: 2025-10-02 12:22:44.180 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd92a78aa-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:44 compute-0 nova_compute[257802]: 2025-10-02 12:22:44.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:44 compute-0 nova_compute[257802]: 2025-10-02 12:22:44.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:22:44 compute-0 nova_compute[257802]: 2025-10-02 12:22:44.188 2 INFO os_vif [None req-56bb6b0c-b272-46ba-b1fc-ed326e66138b a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:7b:db,bridge_name='br-int',has_traffic_filtering=True,id=d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd92a78aa-54')
Oct 02 12:22:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:44.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:44.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:44 compute-0 nova_compute[257802]: 2025-10-02 12:22:44.924 2 INFO nova.virt.libvirt.driver [None req-56bb6b0c-b272-46ba-b1fc-ed326e66138b a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Deleting instance files /var/lib/nova/instances/12549ae0-14ff-4982-be0e-4ada2a821895_del
Oct 02 12:22:44 compute-0 nova_compute[257802]: 2025-10-02 12:22:44.926 2 INFO nova.virt.libvirt.driver [None req-56bb6b0c-b272-46ba-b1fc-ed326e66138b a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Deletion of /var/lib/nova/instances/12549ae0-14ff-4982-be0e-4ada2a821895_del complete
Oct 02 12:22:44 compute-0 nova_compute[257802]: 2025-10-02 12:22:44.961 2 DEBUG nova.compute.manager [req-dab9a31b-77fd-4016-89f6-ae203372474c req-11e780cf-f62f-45b5-b134-b360d2f296e1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Received event network-vif-plugged-d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:44 compute-0 nova_compute[257802]: 2025-10-02 12:22:44.962 2 DEBUG oslo_concurrency.lockutils [req-dab9a31b-77fd-4016-89f6-ae203372474c req-11e780cf-f62f-45b5-b134-b360d2f296e1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "12549ae0-14ff-4982-be0e-4ada2a821895-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:44 compute-0 nova_compute[257802]: 2025-10-02 12:22:44.962 2 DEBUG oslo_concurrency.lockutils [req-dab9a31b-77fd-4016-89f6-ae203372474c req-11e780cf-f62f-45b5-b134-b360d2f296e1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "12549ae0-14ff-4982-be0e-4ada2a821895-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:44 compute-0 nova_compute[257802]: 2025-10-02 12:22:44.962 2 DEBUG oslo_concurrency.lockutils [req-dab9a31b-77fd-4016-89f6-ae203372474c req-11e780cf-f62f-45b5-b134-b360d2f296e1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "12549ae0-14ff-4982-be0e-4ada2a821895-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:44 compute-0 nova_compute[257802]: 2025-10-02 12:22:44.963 2 DEBUG nova.compute.manager [req-dab9a31b-77fd-4016-89f6-ae203372474c req-11e780cf-f62f-45b5-b134-b360d2f296e1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] No waiting events found dispatching network-vif-plugged-d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:22:44 compute-0 nova_compute[257802]: 2025-10-02 12:22:44.963 2 WARNING nova.compute.manager [req-dab9a31b-77fd-4016-89f6-ae203372474c req-11e780cf-f62f-45b5-b134-b360d2f296e1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Received unexpected event network-vif-plugged-d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f for instance with vm_state stopped and task_state deleting.
Oct 02 12:22:45 compute-0 nova_compute[257802]: 2025-10-02 12:22:45.025 2 INFO nova.compute.manager [None req-56bb6b0c-b272-46ba-b1fc-ed326e66138b a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Took 0.88 seconds to destroy the instance on the hypervisor.
Oct 02 12:22:45 compute-0 nova_compute[257802]: 2025-10-02 12:22:45.025 2 DEBUG oslo.service.loopingcall [None req-56bb6b0c-b272-46ba-b1fc-ed326e66138b a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:22:45 compute-0 nova_compute[257802]: 2025-10-02 12:22:45.026 2 DEBUG nova.compute.manager [-] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:22:45 compute-0 nova_compute[257802]: 2025-10-02 12:22:45.026 2 DEBUG nova.network.neutron [-] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:22:45 compute-0 ceph-mon[73607]: pgmap v1804: 305 pgs: 305 active+clean; 503 MiB data, 1018 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 5.9 MiB/s wr, 333 op/s
Oct 02 12:22:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 305 active+clean; 426 MiB data, 988 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.6 MiB/s wr, 274 op/s
Oct 02 12:22:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4184840037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:46 compute-0 nova_compute[257802]: 2025-10-02 12:22:46.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:22:46 compute-0 nova_compute[257802]: 2025-10-02 12:22:46.188 2 DEBUG nova.network.neutron [-] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:22:46 compute-0 nova_compute[257802]: 2025-10-02 12:22:46.209 2 INFO nova.compute.manager [-] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Took 1.18 seconds to deallocate network for instance.
Oct 02 12:22:46 compute-0 nova_compute[257802]: 2025-10-02 12:22:46.257 2 DEBUG nova.compute.manager [req-91e5d2a2-18f8-4ecc-920c-721acda63efc req-1c0d09a1-b71a-4d86-a0c2-ac3e1003d63f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Received event network-vif-deleted-d92a78aa-54f0-4abe-9e9c-8c7790e8cc7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:46 compute-0 nova_compute[257802]: 2025-10-02 12:22:46.267 2 DEBUG oslo_concurrency.lockutils [None req-56bb6b0c-b272-46ba-b1fc-ed326e66138b a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:46 compute-0 nova_compute[257802]: 2025-10-02 12:22:46.268 2 DEBUG oslo_concurrency.lockutils [None req-56bb6b0c-b272-46ba-b1fc-ed326e66138b a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:46 compute-0 nova_compute[257802]: 2025-10-02 12:22:46.345 2 DEBUG oslo_concurrency.processutils [None req-56bb6b0c-b272-46ba-b1fc-ed326e66138b a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:22:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:46.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:22:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:22:46 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1894482923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:46 compute-0 nova_compute[257802]: 2025-10-02 12:22:46.768 2 DEBUG oslo_concurrency.processutils [None req-56bb6b0c-b272-46ba-b1fc-ed326e66138b a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:46 compute-0 nova_compute[257802]: 2025-10-02 12:22:46.774 2 DEBUG nova.compute.provider_tree [None req-56bb6b0c-b272-46ba-b1fc-ed326e66138b a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:22:46 compute-0 nova_compute[257802]: 2025-10-02 12:22:46.800 2 DEBUG nova.scheduler.client.report [None req-56bb6b0c-b272-46ba-b1fc-ed326e66138b a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:22:46 compute-0 nova_compute[257802]: 2025-10-02 12:22:46.820 2 DEBUG oslo_concurrency.lockutils [None req-56bb6b0c-b272-46ba-b1fc-ed326e66138b a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:46 compute-0 nova_compute[257802]: 2025-10-02 12:22:46.855 2 INFO nova.scheduler.client.report [None req-56bb6b0c-b272-46ba-b1fc-ed326e66138b a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Deleted allocations for instance 12549ae0-14ff-4982-be0e-4ada2a821895
Oct 02 12:22:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:46.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:46 compute-0 nova_compute[257802]: 2025-10-02 12:22:46.960 2 DEBUG oslo_concurrency.lockutils [None req-56bb6b0c-b272-46ba-b1fc-ed326e66138b a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "12549ae0-14ff-4982-be0e-4ada2a821895" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:47 compute-0 ceph-mon[73607]: pgmap v1805: 305 pgs: 305 active+clean; 426 MiB data, 988 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.6 MiB/s wr, 274 op/s
Oct 02 12:22:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1894482923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3969393644' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:47 compute-0 nova_compute[257802]: 2025-10-02 12:22:47.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 426 MiB data, 988 MiB used, 20 GiB / 21 GiB avail; 584 KiB/s rd, 4.3 MiB/s wr, 204 op/s
Oct 02 12:22:48 compute-0 nova_compute[257802]: 2025-10-02 12:22:48.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:22:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:48.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:48.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:49 compute-0 nova_compute[257802]: 2025-10-02 12:22:49.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:22:49 compute-0 nova_compute[257802]: 2025-10-02 12:22:49.097 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:22:49 compute-0 nova_compute[257802]: 2025-10-02 12:22:49.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 305 active+clean; 358 MiB data, 949 MiB used, 20 GiB / 21 GiB avail; 592 KiB/s rd, 4.3 MiB/s wr, 215 op/s
Oct 02 12:22:49 compute-0 ceph-mon[73607]: pgmap v1806: 305 pgs: 305 active+clean; 426 MiB data, 988 MiB used, 20 GiB / 21 GiB avail; 584 KiB/s rd, 4.3 MiB/s wr, 204 op/s
Oct 02 12:22:49 compute-0 nova_compute[257802]: 2025-10-02 12:22:49.562 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407754.560611, 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:22:49 compute-0 nova_compute[257802]: 2025-10-02 12:22:49.562 2 INFO nova.compute.manager [-] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] VM Stopped (Lifecycle Event)
Oct 02 12:22:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:49 compute-0 ovn_controller[148183]: 2025-10-02T12:22:49Z|00405|binding|INFO|Releasing lport 1befa812-080f-4694-ba8b-9130fe81621d from this chassis (sb_readonly=0)
Oct 02 12:22:49 compute-0 nova_compute[257802]: 2025-10-02 12:22:49.627 2 DEBUG nova.compute.manager [None req-5885fa74-2027-43cb-8117-a5d1b388a89e - - - - - -] [instance: 6745ea5e-e009-4cb2-9c89-8fa54b9b8d2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:49 compute-0 nova_compute[257802]: 2025-10-02 12:22:49.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:50 compute-0 nova_compute[257802]: 2025-10-02 12:22:50.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:22:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:50.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:50.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:51 compute-0 ceph-mon[73607]: pgmap v1807: 305 pgs: 305 active+clean; 358 MiB data, 949 MiB used, 20 GiB / 21 GiB avail; 592 KiB/s rd, 4.3 MiB/s wr, 215 op/s
Oct 02 12:22:51 compute-0 nova_compute[257802]: 2025-10-02 12:22:51.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:22:51 compute-0 nova_compute[257802]: 2025-10-02 12:22:51.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:22:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1808: 305 pgs: 305 active+clean; 358 MiB data, 932 MiB used, 20 GiB / 21 GiB avail; 482 KiB/s rd, 3.4 MiB/s wr, 181 op/s
Oct 02 12:22:52 compute-0 nova_compute[257802]: 2025-10-02 12:22:52.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:22:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:52.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:22:52 compute-0 ceph-mon[73607]: pgmap v1808: 305 pgs: 305 active+clean; 358 MiB data, 932 MiB used, 20 GiB / 21 GiB avail; 482 KiB/s rd, 3.4 MiB/s wr, 181 op/s
Oct 02 12:22:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:52.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 305 active+clean; 358 MiB data, 932 MiB used, 20 GiB / 21 GiB avail; 365 KiB/s rd, 1.7 MiB/s wr, 147 op/s
Oct 02 12:22:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/477719117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:54 compute-0 nova_compute[257802]: 2025-10-02 12:22:54.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:22:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:22:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:22:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:22:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0065065998394751535 of space, bias 1.0, pg target 1.951979951842546 quantized to 32 (current 32)
Oct 02 12:22:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:22:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002162686988181649 of space, bias 1.0, pg target 0.646643409466313 quantized to 32 (current 32)
Oct 02 12:22:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:22:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:22:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:22:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 12:22:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:22:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 12:22:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:22:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:22:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:22:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027172174530057695 quantized to 32 (current 32)
Oct 02 12:22:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:22:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 12:22:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:22:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:22:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:22:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 12:22:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:54.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:22:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:54.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:22:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 305 active+clean; 358 MiB data, 932 MiB used, 20 GiB / 21 GiB avail; 72 KiB/s rd, 74 KiB/s wr, 52 op/s
Oct 02 12:22:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:56.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:56.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:57 compute-0 nova_compute[257802]: 2025-10-02 12:22:57.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:22:57 compute-0 nova_compute[257802]: 2025-10-02 12:22:57.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:22:57 compute-0 nova_compute[257802]: 2025-10-02 12:22:57.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:22:57 compute-0 nova_compute[257802]: 2025-10-02 12:22:57.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:57 compute-0 nova_compute[257802]: 2025-10-02 12:22:57.369 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-9ae72f32-b9fd-44eb-b10d-79119ad2ca85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:22:57 compute-0 nova_compute[257802]: 2025-10-02 12:22:57.370 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-9ae72f32-b9fd-44eb-b10d-79119ad2ca85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:22:57 compute-0 nova_compute[257802]: 2025-10-02 12:22:57.370 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:22:57 compute-0 nova_compute[257802]: 2025-10-02 12:22:57.371 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9ae72f32-b9fd-44eb-b10d-79119ad2ca85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 305 active+clean; 358 MiB data, 932 MiB used, 20 GiB / 21 GiB avail; 7.3 KiB/s rd, 13 KiB/s wr, 11 op/s
Oct 02 12:22:57 compute-0 nova_compute[257802]: 2025-10-02 12:22:57.765 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407762.7640762, 12549ae0-14ff-4982-be0e-4ada2a821895 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:22:57 compute-0 nova_compute[257802]: 2025-10-02 12:22:57.766 2 INFO nova.compute.manager [-] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] VM Stopped (Lifecycle Event)
Oct 02 12:22:57 compute-0 nova_compute[257802]: 2025-10-02 12:22:57.833 2 DEBUG nova.compute.manager [None req-cba3580d-a7ed-4d7a-b466-4da9ffc51b29 - - - - - -] [instance: 12549ae0-14ff-4982-be0e-4ada2a821895] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:58.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:22:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 30K writes, 116K keys, 30K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.04 MB/s
                                           Cumulative WAL: 30K writes, 10K syncs, 2.90 writes per sync, written: 0.11 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 36.72 MB, 0.06 MB/s
                                           Interval WAL: 10K writes, 4102 syncs, 2.53 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 12:22:58 compute-0 nova_compute[257802]: 2025-10-02 12:22:58.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:22:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:22:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:58.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:22:58 compute-0 nova_compute[257802]: 2025-10-02 12:22:58.953 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Updating instance_info_cache with network_info: [{"id": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "address": "fa:16:3e:fc:ac:3b", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da3e5f9-b3", "ovs_interfaceid": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:22:58 compute-0 nova_compute[257802]: 2025-10-02 12:22:58.997 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-9ae72f32-b9fd-44eb-b10d-79119ad2ca85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:22:58 compute-0 nova_compute[257802]: 2025-10-02 12:22:58.997 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:22:59 compute-0 nova_compute[257802]: 2025-10-02 12:22:59.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:22:59 compute-0 nova_compute[257802]: 2025-10-02 12:22:59.120 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:59 compute-0 nova_compute[257802]: 2025-10-02 12:22:59.120 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:59 compute-0 nova_compute[257802]: 2025-10-02 12:22:59.121 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:59 compute-0 nova_compute[257802]: 2025-10-02 12:22:59.121 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:22:59 compute-0 nova_compute[257802]: 2025-10-02 12:22:59.122 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:59 compute-0 nova_compute[257802]: 2025-10-02 12:22:59.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 305 active+clean; 391 MiB data, 948 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 1.3 MiB/s wr, 25 op/s
Oct 02 12:23:00 compute-0 sudo[315464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:23:00 compute-0 sudo[315464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:00 compute-0 sudo[315464]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:00 compute-0 sudo[315501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:23:00 compute-0 sudo[315501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:00 compute-0 sudo[315501]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:23:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:00.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:23:00 compute-0 sudo[315536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:23:00 compute-0 sudo[315536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:00 compute-0 sudo[315536]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:00 compute-0 sudo[315561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:23:00 compute-0 sudo[315561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:00 compute-0 ceph-mds[95441]: mds.beacon.cephfs.compute-0.odxjnj missed beacon ack from the monitors
Oct 02 12:23:00 compute-0 sudo[315587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:23:00 compute-0 sudo[315587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:00 compute-0 sudo[315587]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:00 compute-0 sudo[315619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:23:00 compute-0 sudo[315619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:00 compute-0 sudo[315619]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:00.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 305 active+clean; 391 MiB data, 948 MiB used, 20 GiB / 21 GiB avail; 6.1 KiB/s rd, 1.3 MiB/s wr, 13 op/s
Oct 02 12:23:01 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for submit_transact, latency = 6.928152084s
Oct 02 12:23:01 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for throttle_transact, latency = 6.927358627s
Oct 02 12:23:01 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _txc_state_proc slow aio_wait, txc = 0x55bcd60d6000, latency = 6.959425926s
Oct 02 12:23:01 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _txc_state_proc slow aio_wait, txc = 0x55bcd52bfb00, latency = 6.954778671s
Oct 02 12:23:01 compute-0 podman[315434]: 2025-10-02 12:23:01.838952758 +0000 UTC m=+4.966602851 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 02 12:23:01 compute-0 podman[315488]: 2025-10-02 12:23:01.846770108 +0000 UTC m=+1.535112673 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:23:01 compute-0 podman[315489]: 2025-10-02 12:23:01.851761381 +0000 UTC m=+1.532678315 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible)
Oct 02 12:23:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:01 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_flush, latency = 7.006064415s
Oct 02 12:23:01 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 7.141632080s
Oct 02 12:23:01 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.143228531s, txc = 0x55bcd546cf00
Oct 02 12:23:01 compute-0 ceph-mon[73607]: pgmap v1809: 305 pgs: 305 active+clean; 358 MiB data, 932 MiB used, 20 GiB / 21 GiB avail; 365 KiB/s rd, 1.7 MiB/s wr, 147 op/s
Oct 02 12:23:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2234734662' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:23:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2234734662' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:23:02 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.260053635s, txc = 0x55bcd6807500
Oct 02 12:23:02 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.213824749s, txc = 0x55bcd60d6000
Oct 02 12:23:02 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.206254959s, txc = 0x55bcd52bfb00
Oct 02 12:23:02 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.182034492s, txc = 0x55bcd612d200
Oct 02 12:23:02 compute-0 sudo[315561]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:23:02 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1671313389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:02 compute-0 nova_compute[257802]: 2025-10-02 12:23:02.264 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:02 compute-0 nova_compute[257802]: 2025-10-02 12:23:02.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:02 compute-0 nova_compute[257802]: 2025-10-02 12:23:02.382 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:23:02 compute-0 nova_compute[257802]: 2025-10-02 12:23:02.382 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:23:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:02.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:02 compute-0 nova_compute[257802]: 2025-10-02 12:23:02.576 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:23:02 compute-0 nova_compute[257802]: 2025-10-02 12:23:02.577 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4312MB free_disk=20.85171890258789GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:23:02 compute-0 nova_compute[257802]: 2025-10-02 12:23:02.577 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:02 compute-0 nova_compute[257802]: 2025-10-02 12:23:02.577 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:02 compute-0 nova_compute[257802]: 2025-10-02 12:23:02.634 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 9ae72f32-b9fd-44eb-b10d-79119ad2ca85 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:23:02 compute-0 nova_compute[257802]: 2025-10-02 12:23:02.635 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:23:02 compute-0 nova_compute[257802]: 2025-10-02 12:23:02.635 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:23:02 compute-0 nova_compute[257802]: 2025-10-02 12:23:02.673 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:23:02 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:23:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:23:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:02.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:02 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:23:03 compute-0 ceph-mon[73607]: pgmap v1810: 305 pgs: 305 active+clean; 358 MiB data, 932 MiB used, 20 GiB / 21 GiB avail; 72 KiB/s rd, 74 KiB/s wr, 52 op/s
Oct 02 12:23:03 compute-0 ceph-mon[73607]: pgmap v1811: 305 pgs: 305 active+clean; 358 MiB data, 932 MiB used, 20 GiB / 21 GiB avail; 7.3 KiB/s rd, 13 KiB/s wr, 11 op/s
Oct 02 12:23:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/885307893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2726592695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:03 compute-0 ceph-mon[73607]: pgmap v1812: 305 pgs: 305 active+clean; 391 MiB data, 948 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 1.3 MiB/s wr, 25 op/s
Oct 02 12:23:03 compute-0 ceph-mon[73607]: pgmap v1813: 305 pgs: 305 active+clean; 391 MiB data, 948 MiB used, 20 GiB / 21 GiB avail; 6.1 KiB/s rd, 1.3 MiB/s wr, 13 op/s
Oct 02 12:23:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1671313389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:23:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:23:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:23:03 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3678170210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:03 compute-0 nova_compute[257802]: 2025-10-02 12:23:03.116 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:03 compute-0 nova_compute[257802]: 2025-10-02 12:23:03.124 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:23:03 compute-0 nova_compute[257802]: 2025-10-02 12:23:03.155 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:23:03 compute-0 nova_compute[257802]: 2025-10-02 12:23:03.246 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:23:03 compute-0 nova_compute[257802]: 2025-10-02 12:23:03.247 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:03 compute-0 nova_compute[257802]: 2025-10-02 12:23:03.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 305 active+clean; 391 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 8.7 KiB/s rd, 1.3 MiB/s wr, 17 op/s
Oct 02 12:23:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:23:03 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:23:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:23:03 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:23:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:23:03 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:23:03 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev f460dbf9-2305-4952-96ac-0d90cdb1b473 does not exist
Oct 02 12:23:03 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 263606dd-cf40-4b0f-8580-8a66e41346f5 does not exist
Oct 02 12:23:03 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 8bc59a71-521e-4c74-b9cb-72b940d918bd does not exist
Oct 02 12:23:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:23:03 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:23:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:23:03 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:23:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:23:03 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:23:04 compute-0 sudo[315720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:23:04 compute-0 sudo[315720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:04 compute-0 sudo[315720]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3678170210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4172738426' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1954220450' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:04 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:23:04 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:23:04 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:23:04 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:23:04 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:23:04 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:23:04 compute-0 sudo[315745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:23:04 compute-0 sudo[315745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:04 compute-0 sudo[315745]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:04 compute-0 sudo[315770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:23:04 compute-0 sudo[315770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:04 compute-0 sudo[315770]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:04 compute-0 sudo[315795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:23:04 compute-0 sudo[315795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:04 compute-0 nova_compute[257802]: 2025-10-02 12:23:04.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:04 compute-0 nova_compute[257802]: 2025-10-02 12:23:04.368 2 DEBUG oslo_concurrency.lockutils [None req-6a8744b5-0660-4684-ab51-894ba5475d2a 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:04 compute-0 nova_compute[257802]: 2025-10-02 12:23:04.369 2 DEBUG oslo_concurrency.lockutils [None req-6a8744b5-0660-4684-ab51-894ba5475d2a 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:04 compute-0 nova_compute[257802]: 2025-10-02 12:23:04.392 2 DEBUG nova.objects.instance [None req-6a8744b5-0660-4684-ab51-894ba5475d2a 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'flavor' on Instance uuid 9ae72f32-b9fd-44eb-b10d-79119ad2ca85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:23:04 compute-0 nova_compute[257802]: 2025-10-02 12:23:04.452 2 DEBUG oslo_concurrency.lockutils [None req-6a8744b5-0660-4684-ab51-894ba5475d2a 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:04.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:04 compute-0 podman[315863]: 2025-10-02 12:23:04.615234372 +0000 UTC m=+0.022716977 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:23:04 compute-0 podman[315863]: 2025-10-02 12:23:04.717042606 +0000 UTC m=+0.124525201 container create 72813f13ba40f68220838032169ba27ee0ba457cc3c56335afae8117b0e90de1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 12:23:04 compute-0 nova_compute[257802]: 2025-10-02 12:23:04.733 2 DEBUG oslo_concurrency.lockutils [None req-6a8744b5-0660-4684-ab51-894ba5475d2a 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:04 compute-0 nova_compute[257802]: 2025-10-02 12:23:04.734 2 DEBUG oslo_concurrency.lockutils [None req-6a8744b5-0660-4684-ab51-894ba5475d2a 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:04 compute-0 nova_compute[257802]: 2025-10-02 12:23:04.735 2 INFO nova.compute.manager [None req-6a8744b5-0660-4684-ab51-894ba5475d2a 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Attaching volume 64167daa-7aad-468b-9a08-b3195fe87186 to /dev/vdb
Oct 02 12:23:04 compute-0 systemd[1]: Started libpod-conmon-72813f13ba40f68220838032169ba27ee0ba457cc3c56335afae8117b0e90de1.scope.
Oct 02 12:23:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:23:04 compute-0 podman[315863]: 2025-10-02 12:23:04.88496592 +0000 UTC m=+0.292448525 container init 72813f13ba40f68220838032169ba27ee0ba457cc3c56335afae8117b0e90de1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_gauss, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:23:04 compute-0 podman[315863]: 2025-10-02 12:23:04.892986777 +0000 UTC m=+0.300469362 container start 72813f13ba40f68220838032169ba27ee0ba457cc3c56335afae8117b0e90de1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_gauss, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 12:23:04 compute-0 friendly_gauss[315879]: 167 167
Oct 02 12:23:04 compute-0 systemd[1]: libpod-72813f13ba40f68220838032169ba27ee0ba457cc3c56335afae8117b0e90de1.scope: Deactivated successfully.
Oct 02 12:23:04 compute-0 nova_compute[257802]: 2025-10-02 12:23:04.906 2 DEBUG os_brick.utils [None req-6a8744b5-0660-4684-ab51-894ba5475d2a 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:23:04 compute-0 nova_compute[257802]: 2025-10-02 12:23:04.907 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:04 compute-0 podman[315863]: 2025-10-02 12:23:04.911668864 +0000 UTC m=+0.319151449 container attach 72813f13ba40f68220838032169ba27ee0ba457cc3c56335afae8117b0e90de1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_gauss, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 12:23:04 compute-0 podman[315863]: 2025-10-02 12:23:04.911984401 +0000 UTC m=+0.319466986 container died 72813f13ba40f68220838032169ba27ee0ba457cc3c56335afae8117b0e90de1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_gauss, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:23:04 compute-0 nova_compute[257802]: 2025-10-02 12:23:04.918 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:04 compute-0 nova_compute[257802]: 2025-10-02 12:23:04.918 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[acb81f92-738f-4a62-8261-f3bd1e497eb1]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:04 compute-0 nova_compute[257802]: 2025-10-02 12:23:04.919 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:04.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:04 compute-0 nova_compute[257802]: 2025-10-02 12:23:04.928 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:04 compute-0 nova_compute[257802]: 2025-10-02 12:23:04.928 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[6912a8b9-ceb3-4e2b-a8a1-dce200e0f99b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:04 compute-0 nova_compute[257802]: 2025-10-02 12:23:04.930 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:04 compute-0 nova_compute[257802]: 2025-10-02 12:23:04.938 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:04 compute-0 nova_compute[257802]: 2025-10-02 12:23:04.938 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[aac4db5c-cf2b-4934-8eb3-34c17d044ce4]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:04 compute-0 nova_compute[257802]: 2025-10-02 12:23:04.939 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[b846b7d6-2ea3-4125-9b3c-93bd92ca2db9]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:04 compute-0 nova_compute[257802]: 2025-10-02 12:23:04.940 2 DEBUG oslo_concurrency.processutils [None req-6a8744b5-0660-4684-ab51-894ba5475d2a 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:04 compute-0 nova_compute[257802]: 2025-10-02 12:23:04.964 2 DEBUG oslo_concurrency.processutils [None req-6a8744b5-0660-4684-ab51-894ba5475d2a 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "nvme version" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:04 compute-0 nova_compute[257802]: 2025-10-02 12:23:04.966 2 DEBUG os_brick.initiator.connectors.lightos [None req-6a8744b5-0660-4684-ab51-894ba5475d2a 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:23:04 compute-0 nova_compute[257802]: 2025-10-02 12:23:04.967 2 DEBUG os_brick.initiator.connectors.lightos [None req-6a8744b5-0660-4684-ab51-894ba5475d2a 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:23:04 compute-0 nova_compute[257802]: 2025-10-02 12:23:04.967 2 DEBUG os_brick.initiator.connectors.lightos [None req-6a8744b5-0660-4684-ab51-894ba5475d2a 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:23:04 compute-0 nova_compute[257802]: 2025-10-02 12:23:04.967 2 DEBUG os_brick.utils [None req-6a8744b5-0660-4684-ab51-894ba5475d2a 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] <== get_connector_properties: return (60ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:23:04 compute-0 nova_compute[257802]: 2025-10-02 12:23:04.968 2 DEBUG nova.virt.block_device [None req-6a8744b5-0660-4684-ab51-894ba5475d2a 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Updating existing volume attachment record: 87e5f9cc-9f59-4da9-a34b-13e28384afb3 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:23:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-03a6b7504a86ab5302d057d43a6541b1640d06b0a1dc546118cf7b9318f27145-merged.mount: Deactivated successfully.
Oct 02 12:23:05 compute-0 ceph-mon[73607]: pgmap v1814: 305 pgs: 305 active+clean; 391 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 8.7 KiB/s rd, 1.3 MiB/s wr, 17 op/s
Oct 02 12:23:05 compute-0 podman[315863]: 2025-10-02 12:23:05.287751706 +0000 UTC m=+0.695234291 container remove 72813f13ba40f68220838032169ba27ee0ba457cc3c56335afae8117b0e90de1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_gauss, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 12:23:05 compute-0 systemd[1]: libpod-conmon-72813f13ba40f68220838032169ba27ee0ba457cc3c56335afae8117b0e90de1.scope: Deactivated successfully.
Oct 02 12:23:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 305 active+clean; 391 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 9.3 KiB/s rd, 1.3 MiB/s wr, 18 op/s
Oct 02 12:23:05 compute-0 podman[315911]: 2025-10-02 12:23:05.465263345 +0000 UTC m=+0.036309941 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:23:05 compute-0 podman[315911]: 2025-10-02 12:23:05.60306424 +0000 UTC m=+0.174110806 container create f204d55b5e79b2ee200dbf948e149445f9c639fa5e8ee1d976ad2ce7ec4da40e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_heisenberg, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:23:05 compute-0 nova_compute[257802]: 2025-10-02 12:23:05.684 2 DEBUG nova.objects.instance [None req-6a8744b5-0660-4684-ab51-894ba5475d2a 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'flavor' on Instance uuid 9ae72f32-b9fd-44eb-b10d-79119ad2ca85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:23:05 compute-0 nova_compute[257802]: 2025-10-02 12:23:05.709 2 DEBUG nova.virt.libvirt.driver [None req-6a8744b5-0660-4684-ab51-894ba5475d2a 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Attempting to attach volume 64167daa-7aad-468b-9a08-b3195fe87186 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 12:23:05 compute-0 nova_compute[257802]: 2025-10-02 12:23:05.713 2 DEBUG nova.virt.libvirt.guest [None req-6a8744b5-0660-4684-ab51-894ba5475d2a 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 12:23:05 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:23:05 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-64167daa-7aad-468b-9a08-b3195fe87186">
Oct 02 12:23:05 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:23:05 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:23:05 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:23:05 compute-0 nova_compute[257802]:   </source>
Oct 02 12:23:05 compute-0 nova_compute[257802]:   <auth username="openstack">
Oct 02 12:23:05 compute-0 nova_compute[257802]:     <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:23:05 compute-0 nova_compute[257802]:   </auth>
Oct 02 12:23:05 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:23:05 compute-0 nova_compute[257802]:   <serial>64167daa-7aad-468b-9a08-b3195fe87186</serial>
Oct 02 12:23:05 compute-0 nova_compute[257802]: </disk>
Oct 02 12:23:05 compute-0 nova_compute[257802]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:23:05 compute-0 systemd[1]: Started libpod-conmon-f204d55b5e79b2ee200dbf948e149445f9c639fa5e8ee1d976ad2ce7ec4da40e.scope.
Oct 02 12:23:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:23:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/424e0c6220264ac216e55e991e97d6d94851195d6180690545248c2970ccc5ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:23:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/424e0c6220264ac216e55e991e97d6d94851195d6180690545248c2970ccc5ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:23:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/424e0c6220264ac216e55e991e97d6d94851195d6180690545248c2970ccc5ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:23:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/424e0c6220264ac216e55e991e97d6d94851195d6180690545248c2970ccc5ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:23:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/424e0c6220264ac216e55e991e97d6d94851195d6180690545248c2970ccc5ab/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:23:05 compute-0 podman[315911]: 2025-10-02 12:23:05.829347623 +0000 UTC m=+0.400394179 container init f204d55b5e79b2ee200dbf948e149445f9c639fa5e8ee1d976ad2ce7ec4da40e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:23:05 compute-0 podman[315911]: 2025-10-02 12:23:05.838678561 +0000 UTC m=+0.409725087 container start f204d55b5e79b2ee200dbf948e149445f9c639fa5e8ee1d976ad2ce7ec4da40e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_heisenberg, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:23:05 compute-0 podman[315911]: 2025-10-02 12:23:05.871768152 +0000 UTC m=+0.442814708 container attach f204d55b5e79b2ee200dbf948e149445f9c639fa5e8ee1d976ad2ce7ec4da40e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 12:23:05 compute-0 nova_compute[257802]: 2025-10-02 12:23:05.871 2 DEBUG nova.virt.libvirt.driver [None req-6a8744b5-0660-4684-ab51-894ba5475d2a 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:23:05 compute-0 nova_compute[257802]: 2025-10-02 12:23:05.872 2 DEBUG nova.virt.libvirt.driver [None req-6a8744b5-0660-4684-ab51-894ba5475d2a 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:23:05 compute-0 nova_compute[257802]: 2025-10-02 12:23:05.873 2 DEBUG nova.virt.libvirt.driver [None req-6a8744b5-0660-4684-ab51-894ba5475d2a 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:23:05 compute-0 nova_compute[257802]: 2025-10-02 12:23:05.873 2 DEBUG nova.virt.libvirt.driver [None req-6a8744b5-0660-4684-ab51-894ba5475d2a 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] No VIF found with MAC fa:16:3e:fc:ac:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:23:06 compute-0 nova_compute[257802]: 2025-10-02 12:23:06.195 2 DEBUG oslo_concurrency.lockutils [None req-6a8744b5-0660-4684-ab51-894ba5475d2a 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.461s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:06 compute-0 ceph-mon[73607]: pgmap v1815: 305 pgs: 305 active+clean; 391 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 9.3 KiB/s rd, 1.3 MiB/s wr, 18 op/s
Oct 02 12:23:06 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3539772317' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:23:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:06.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:23:06 compute-0 boring_heisenberg[315929]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:23:06 compute-0 boring_heisenberg[315929]: --> relative data size: 1.0
Oct 02 12:23:06 compute-0 boring_heisenberg[315929]: --> All data devices are unavailable
Oct 02 12:23:06 compute-0 systemd[1]: libpod-f204d55b5e79b2ee200dbf948e149445f9c639fa5e8ee1d976ad2ce7ec4da40e.scope: Deactivated successfully.
Oct 02 12:23:06 compute-0 podman[315911]: 2025-10-02 12:23:06.646666553 +0000 UTC m=+1.217713089 container died f204d55b5e79b2ee200dbf948e149445f9c639fa5e8ee1d976ad2ce7ec4da40e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:23:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-424e0c6220264ac216e55e991e97d6d94851195d6180690545248c2970ccc5ab-merged.mount: Deactivated successfully.
Oct 02 12:23:06 compute-0 podman[315963]: 2025-10-02 12:23:06.897714053 +0000 UTC m=+0.217715454 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 02 12:23:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:06.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:06 compute-0 ceph-mgr[73901]: [devicehealth INFO root] Check health
Oct 02 12:23:06 compute-0 podman[315911]: 2025-10-02 12:23:06.96615258 +0000 UTC m=+1.537199106 container remove f204d55b5e79b2ee200dbf948e149445f9c639fa5e8ee1d976ad2ce7ec4da40e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 12:23:06 compute-0 systemd[1]: libpod-conmon-f204d55b5e79b2ee200dbf948e149445f9c639fa5e8ee1d976ad2ce7ec4da40e.scope: Deactivated successfully.
Oct 02 12:23:06 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Oct 02 12:23:06 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:06.990080) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:23:06 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Oct 02 12:23:06 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407786990162, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1465, "num_deletes": 255, "total_data_size": 2235162, "memory_usage": 2277880, "flush_reason": "Manual Compaction"}
Oct 02 12:23:06 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Oct 02 12:23:06 compute-0 sudo[315795]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:07 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407787015948, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 1425644, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38968, "largest_seqno": 40432, "table_properties": {"data_size": 1420038, "index_size": 2746, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 15248, "raw_average_key_size": 21, "raw_value_size": 1407664, "raw_average_value_size": 2005, "num_data_blocks": 119, "num_entries": 702, "num_filter_entries": 702, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407668, "oldest_key_time": 1759407668, "file_creation_time": 1759407786, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:23:07 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 25887 microseconds, and 6595 cpu microseconds.
Oct 02 12:23:07 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:23:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:07.015986) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 1425644 bytes OK
Oct 02 12:23:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:07.016004) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Oct 02 12:23:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:07.037081) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Oct 02 12:23:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:07.037115) EVENT_LOG_v1 {"time_micros": 1759407787037105, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:23:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:07.037138) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:23:07 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 2228684, prev total WAL file size 2228684, number of live WAL files 2.
Oct 02 12:23:07 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:23:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:07.038415) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323536' seq:72057594037927935, type:22 .. '6D6772737461740031353038' seq:0, type:0; will stop at (end)
Oct 02 12:23:07 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:23:07 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(1392KB)], [83(11MB)]
Oct 02 12:23:07 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407787038446, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 13069990, "oldest_snapshot_seqno": -1}
Oct 02 12:23:07 compute-0 sudo[316002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:23:07 compute-0 sudo[316002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:07 compute-0 sudo[316002]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:07 compute-0 sudo[316027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:23:07 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6892 keys, 9976937 bytes, temperature: kUnknown
Oct 02 12:23:07 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407787150014, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 9976937, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9931239, "index_size": 27346, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17285, "raw_key_size": 176062, "raw_average_key_size": 25, "raw_value_size": 9808303, "raw_average_value_size": 1423, "num_data_blocks": 1092, "num_entries": 6892, "num_filter_entries": 6892, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759407787, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:23:07 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:23:07 compute-0 sudo[316027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:07 compute-0 sudo[316027]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:07.150393) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 9976937 bytes
Oct 02 12:23:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:07.166856) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 116.9 rd, 89.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 11.1 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(16.2) write-amplify(7.0) OK, records in: 7369, records dropped: 477 output_compression: NoCompression
Oct 02 12:23:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:07.166886) EVENT_LOG_v1 {"time_micros": 1759407787166875, "job": 48, "event": "compaction_finished", "compaction_time_micros": 111810, "compaction_time_cpu_micros": 21892, "output_level": 6, "num_output_files": 1, "total_output_size": 9976937, "num_input_records": 7369, "num_output_records": 6892, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:23:07 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:23:07 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407787167199, "job": 48, "event": "table_file_deletion", "file_number": 85}
Oct 02 12:23:07 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:23:07 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407787168952, "job": 48, "event": "table_file_deletion", "file_number": 83}
Oct 02 12:23:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:07.038312) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:23:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:07.169041) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:23:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:07.169048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:23:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:07.169051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:23:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:07.169054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:23:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:07.169056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:23:07 compute-0 sudo[316052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:23:07 compute-0 sudo[316052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:07 compute-0 sudo[316052]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:07 compute-0 sudo[316077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:23:07 compute-0 sudo[316077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:07 compute-0 nova_compute[257802]: 2025-10-02 12:23:07.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 405 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 02 12:23:07 compute-0 podman[316142]: 2025-10-02 12:23:07.655815423 +0000 UTC m=+0.036193238 container create 55355ccf21f10216f634641b101817128bff0aba910f2375c85e7174fe67e33f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_tesla, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 12:23:07 compute-0 systemd[1]: Started libpod-conmon-55355ccf21f10216f634641b101817128bff0aba910f2375c85e7174fe67e33f.scope.
Oct 02 12:23:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:23:07 compute-0 podman[316142]: 2025-10-02 12:23:07.640502118 +0000 UTC m=+0.020879963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:23:07 compute-0 podman[316142]: 2025-10-02 12:23:07.739295098 +0000 UTC m=+0.119672943 container init 55355ccf21f10216f634641b101817128bff0aba910f2375c85e7174fe67e33f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 12:23:07 compute-0 podman[316142]: 2025-10-02 12:23:07.74752811 +0000 UTC m=+0.127905925 container start 55355ccf21f10216f634641b101817128bff0aba910f2375c85e7174fe67e33f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_tesla, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 12:23:07 compute-0 hungry_tesla[316159]: 167 167
Oct 02 12:23:07 compute-0 podman[316142]: 2025-10-02 12:23:07.752907781 +0000 UTC m=+0.133285596 container attach 55355ccf21f10216f634641b101817128bff0aba910f2375c85e7174fe67e33f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_tesla, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 12:23:07 compute-0 systemd[1]: libpod-55355ccf21f10216f634641b101817128bff0aba910f2375c85e7174fe67e33f.scope: Deactivated successfully.
Oct 02 12:23:07 compute-0 podman[316142]: 2025-10-02 12:23:07.754238253 +0000 UTC m=+0.134616068 container died 55355ccf21f10216f634641b101817128bff0aba910f2375c85e7174fe67e33f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 12:23:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-03f4f4c8d1979f89c5d2adb50daebde2375d19c92a15b4c84e8bdedf732054ed-merged.mount: Deactivated successfully.
Oct 02 12:23:07 compute-0 podman[316142]: 2025-10-02 12:23:07.862730211 +0000 UTC m=+0.243108036 container remove 55355ccf21f10216f634641b101817128bff0aba910f2375c85e7174fe67e33f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 12:23:07 compute-0 systemd[1]: libpod-conmon-55355ccf21f10216f634641b101817128bff0aba910f2375c85e7174fe67e33f.scope: Deactivated successfully.
Oct 02 12:23:08 compute-0 podman[316183]: 2025-10-02 12:23:08.078968458 +0000 UTC m=+0.086589782 container create 9577350ee4ff333fb29f39b9d81d26bf54c8120c3b3e2714eeb1fa5c0346dde8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 12:23:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/773209993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:08 compute-0 podman[316183]: 2025-10-02 12:23:08.017712838 +0000 UTC m=+0.025334192 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:23:08 compute-0 systemd[1]: Started libpod-conmon-9577350ee4ff333fb29f39b9d81d26bf54c8120c3b3e2714eeb1fa5c0346dde8.scope.
Oct 02 12:23:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:23:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4574ea472d17d5ad0a404665ebcc3ec86a92505c5ba91370c5bc87df6a151dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:23:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4574ea472d17d5ad0a404665ebcc3ec86a92505c5ba91370c5bc87df6a151dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:23:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4574ea472d17d5ad0a404665ebcc3ec86a92505c5ba91370c5bc87df6a151dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:23:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4574ea472d17d5ad0a404665ebcc3ec86a92505c5ba91370c5bc87df6a151dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:23:08 compute-0 podman[316183]: 2025-10-02 12:23:08.172583551 +0000 UTC m=+0.180204865 container init 9577350ee4ff333fb29f39b9d81d26bf54c8120c3b3e2714eeb1fa5c0346dde8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_kowalevski, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 12:23:08 compute-0 podman[316183]: 2025-10-02 12:23:08.179103921 +0000 UTC m=+0.186725255 container start 9577350ee4ff333fb29f39b9d81d26bf54c8120c3b3e2714eeb1fa5c0346dde8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_kowalevski, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:23:08 compute-0 podman[316183]: 2025-10-02 12:23:08.185035216 +0000 UTC m=+0.192656550 container attach 9577350ee4ff333fb29f39b9d81d26bf54c8120c3b3e2714eeb1fa5c0346dde8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Oct 02 12:23:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:08.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]: {
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]:     "1": [
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]:         {
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]:             "devices": [
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]:                 "/dev/loop3"
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]:             ],
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]:             "lv_name": "ceph_lv0",
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]:             "lv_size": "7511998464",
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]:             "name": "ceph_lv0",
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]:             "tags": {
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]:                 "ceph.cluster_name": "ceph",
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]:                 "ceph.crush_device_class": "",
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]:                 "ceph.encrypted": "0",
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]:                 "ceph.osd_id": "1",
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]:                 "ceph.type": "block",
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]:                 "ceph.vdo": "0"
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]:             },
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]:             "type": "block",
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]:             "vg_name": "ceph_vg0"
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]:         }
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]:     ]
Oct 02 12:23:08 compute-0 wonderful_kowalevski[316199]: }
Oct 02 12:23:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:08.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:08 compute-0 systemd[1]: libpod-9577350ee4ff333fb29f39b9d81d26bf54c8120c3b3e2714eeb1fa5c0346dde8.scope: Deactivated successfully.
Oct 02 12:23:08 compute-0 podman[316183]: 2025-10-02 12:23:08.956966515 +0000 UTC m=+0.964587889 container died 9577350ee4ff333fb29f39b9d81d26bf54c8120c3b3e2714eeb1fa5c0346dde8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_kowalevski, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:23:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4574ea472d17d5ad0a404665ebcc3ec86a92505c5ba91370c5bc87df6a151dc-merged.mount: Deactivated successfully.
Oct 02 12:23:09 compute-0 podman[316183]: 2025-10-02 12:23:09.030034735 +0000 UTC m=+1.037656059 container remove 9577350ee4ff333fb29f39b9d81d26bf54c8120c3b3e2714eeb1fa5c0346dde8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 12:23:09 compute-0 sudo[316077]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:09 compute-0 systemd[1]: libpod-conmon-9577350ee4ff333fb29f39b9d81d26bf54c8120c3b3e2714eeb1fa5c0346dde8.scope: Deactivated successfully.
Oct 02 12:23:09 compute-0 ceph-mon[73607]: pgmap v1816: 305 pgs: 305 active+clean; 405 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 02 12:23:09 compute-0 sudo[316221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:23:09 compute-0 sudo[316221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:09 compute-0 sudo[316221]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:09 compute-0 sudo[316246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:23:09 compute-0 sudo[316246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:09 compute-0 sudo[316246]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:09 compute-0 sudo[316271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:23:09 compute-0 sudo[316271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:09 compute-0 sudo[316271]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:09 compute-0 nova_compute[257802]: 2025-10-02 12:23:09.249 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:23:09 compute-0 sudo[316296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:23:09 compute-0 sudo[316296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:09 compute-0 nova_compute[257802]: 2025-10-02 12:23:09.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 405 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Oct 02 12:23:09 compute-0 podman[316362]: 2025-10-02 12:23:09.607326666 +0000 UTC m=+0.046316095 container create bfa9d37de5ae565db3366416072d26dee189fb3e4d039d654b6f59156bfaee2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 12:23:09 compute-0 systemd[1]: Started libpod-conmon-bfa9d37de5ae565db3366416072d26dee189fb3e4d039d654b6f59156bfaee2e.scope.
Oct 02 12:23:09 compute-0 podman[316362]: 2025-10-02 12:23:09.585695356 +0000 UTC m=+0.024684795 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:23:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:23:09 compute-0 podman[316362]: 2025-10-02 12:23:09.70137724 +0000 UTC m=+0.140366759 container init bfa9d37de5ae565db3366416072d26dee189fb3e4d039d654b6f59156bfaee2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_wilbur, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:23:09 compute-0 podman[316362]: 2025-10-02 12:23:09.711386185 +0000 UTC m=+0.150375604 container start bfa9d37de5ae565db3366416072d26dee189fb3e4d039d654b6f59156bfaee2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:23:09 compute-0 podman[316362]: 2025-10-02 12:23:09.716235064 +0000 UTC m=+0.155224513 container attach bfa9d37de5ae565db3366416072d26dee189fb3e4d039d654b6f59156bfaee2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_wilbur, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 12:23:09 compute-0 laughing_wilbur[316378]: 167 167
Oct 02 12:23:09 compute-0 systemd[1]: libpod-bfa9d37de5ae565db3366416072d26dee189fb3e4d039d654b6f59156bfaee2e.scope: Deactivated successfully.
Oct 02 12:23:09 compute-0 conmon[316378]: conmon bfa9d37de5ae565db336 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bfa9d37de5ae565db3366416072d26dee189fb3e4d039d654b6f59156bfaee2e.scope/container/memory.events
Oct 02 12:23:09 compute-0 podman[316362]: 2025-10-02 12:23:09.719405931 +0000 UTC m=+0.158395360 container died bfa9d37de5ae565db3366416072d26dee189fb3e4d039d654b6f59156bfaee2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_wilbur, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:23:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-6889e4e9616a3d14f48d00fc9e9318e43d4a00165e296d5cf8e71e7d80f03e4b-merged.mount: Deactivated successfully.
Oct 02 12:23:09 compute-0 podman[316362]: 2025-10-02 12:23:09.76181417 +0000 UTC m=+0.200803589 container remove bfa9d37de5ae565db3366416072d26dee189fb3e4d039d654b6f59156bfaee2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:23:09 compute-0 systemd[1]: libpod-conmon-bfa9d37de5ae565db3366416072d26dee189fb3e4d039d654b6f59156bfaee2e.scope: Deactivated successfully.
Oct 02 12:23:09 compute-0 podman[316402]: 2025-10-02 12:23:09.921070141 +0000 UTC m=+0.038583816 container create 4d639df57a0cd62f83fa36cde4c5137d21cdda69602ab25e961479f89f7dc5b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_saha, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:23:09 compute-0 systemd[1]: Started libpod-conmon-4d639df57a0cd62f83fa36cde4c5137d21cdda69602ab25e961479f89f7dc5b6.scope.
Oct 02 12:23:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:23:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff107c1576af79d4ff6c5f2bbabe980219f1f154d33b2d3c3f952a351e566d09/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:23:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff107c1576af79d4ff6c5f2bbabe980219f1f154d33b2d3c3f952a351e566d09/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:23:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff107c1576af79d4ff6c5f2bbabe980219f1f154d33b2d3c3f952a351e566d09/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:23:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff107c1576af79d4ff6c5f2bbabe980219f1f154d33b2d3c3f952a351e566d09/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:23:09 compute-0 podman[316402]: 2025-10-02 12:23:09.997211376 +0000 UTC m=+0.114725101 container init 4d639df57a0cd62f83fa36cde4c5137d21cdda69602ab25e961479f89f7dc5b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_saha, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 12:23:10 compute-0 podman[316402]: 2025-10-02 12:23:09.906963136 +0000 UTC m=+0.024476821 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:23:10 compute-0 podman[316402]: 2025-10-02 12:23:10.004480905 +0000 UTC m=+0.121994590 container start 4d639df57a0cd62f83fa36cde4c5137d21cdda69602ab25e961479f89f7dc5b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_saha, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Oct 02 12:23:10 compute-0 podman[316402]: 2025-10-02 12:23:10.015164156 +0000 UTC m=+0.132678091 container attach 4d639df57a0cd62f83fa36cde4c5137d21cdda69602ab25e961479f89f7dc5b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:23:10 compute-0 nova_compute[257802]: 2025-10-02 12:23:10.088 2 DEBUG nova.compute.manager [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Oct 02 12:23:10 compute-0 nova_compute[257802]: 2025-10-02 12:23:10.160 2 DEBUG oslo_concurrency.lockutils [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:10 compute-0 nova_compute[257802]: 2025-10-02 12:23:10.161 2 DEBUG oslo_concurrency.lockutils [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:10 compute-0 nova_compute[257802]: 2025-10-02 12:23:10.181 2 DEBUG nova.objects.instance [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'pci_requests' on Instance uuid 9ae72f32-b9fd-44eb-b10d-79119ad2ca85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:23:10 compute-0 nova_compute[257802]: 2025-10-02 12:23:10.194 2 DEBUG nova.virt.hardware [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:23:10 compute-0 nova_compute[257802]: 2025-10-02 12:23:10.194 2 INFO nova.compute.claims [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:23:10 compute-0 nova_compute[257802]: 2025-10-02 12:23:10.195 2 DEBUG nova.objects.instance [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'resources' on Instance uuid 9ae72f32-b9fd-44eb-b10d-79119ad2ca85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:23:10 compute-0 nova_compute[257802]: 2025-10-02 12:23:10.210 2 DEBUG nova.objects.instance [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9ae72f32-b9fd-44eb-b10d-79119ad2ca85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:23:10 compute-0 nova_compute[257802]: 2025-10-02 12:23:10.251 2 INFO nova.compute.resource_tracker [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Updating resource usage from migration 9704bf50-9913-401b-9376-cacd38ba7ca8
Oct 02 12:23:10 compute-0 nova_compute[257802]: 2025-10-02 12:23:10.316 2 DEBUG oslo_concurrency.processutils [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:23:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:10.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:23:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:23:10 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/297865785' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:10 compute-0 nova_compute[257802]: 2025-10-02 12:23:10.796 2 DEBUG oslo_concurrency.processutils [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:10 compute-0 nova_compute[257802]: 2025-10-02 12:23:10.808 2 DEBUG nova.compute.provider_tree [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:23:10 compute-0 nova_compute[257802]: 2025-10-02 12:23:10.828 2 DEBUG nova.scheduler.client.report [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:23:10 compute-0 happy_saha[316419]: {
Oct 02 12:23:10 compute-0 happy_saha[316419]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:23:10 compute-0 happy_saha[316419]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:23:10 compute-0 happy_saha[316419]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:23:10 compute-0 happy_saha[316419]:         "osd_id": 1,
Oct 02 12:23:10 compute-0 happy_saha[316419]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:23:10 compute-0 happy_saha[316419]:         "type": "bluestore"
Oct 02 12:23:10 compute-0 happy_saha[316419]:     }
Oct 02 12:23:10 compute-0 happy_saha[316419]: }
Oct 02 12:23:10 compute-0 nova_compute[257802]: 2025-10-02 12:23:10.850 2 DEBUG oslo_concurrency.lockutils [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:10 compute-0 nova_compute[257802]: 2025-10-02 12:23:10.852 2 INFO nova.compute.manager [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Migrating
Oct 02 12:23:10 compute-0 systemd[1]: libpod-4d639df57a0cd62f83fa36cde4c5137d21cdda69602ab25e961479f89f7dc5b6.scope: Deactivated successfully.
Oct 02 12:23:10 compute-0 podman[316402]: 2025-10-02 12:23:10.861428106 +0000 UTC m=+0.978941791 container died 4d639df57a0cd62f83fa36cde4c5137d21cdda69602ab25e961479f89f7dc5b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:23:10 compute-0 nova_compute[257802]: 2025-10-02 12:23:10.896 2 DEBUG oslo_concurrency.lockutils [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "refresh_cache-9ae72f32-b9fd-44eb-b10d-79119ad2ca85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:23:10 compute-0 nova_compute[257802]: 2025-10-02 12:23:10.898 2 DEBUG oslo_concurrency.lockutils [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquired lock "refresh_cache-9ae72f32-b9fd-44eb-b10d-79119ad2ca85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:23:10 compute-0 nova_compute[257802]: 2025-10-02 12:23:10.898 2 DEBUG nova.network.neutron [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:23:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff107c1576af79d4ff6c5f2bbabe980219f1f154d33b2d3c3f952a351e566d09-merged.mount: Deactivated successfully.
Oct 02 12:23:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:10.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:11 compute-0 podman[316402]: 2025-10-02 12:23:11.030576308 +0000 UTC m=+1.148089993 container remove 4d639df57a0cd62f83fa36cde4c5137d21cdda69602ab25e961479f89f7dc5b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_saha, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 12:23:11 compute-0 systemd[1]: libpod-conmon-4d639df57a0cd62f83fa36cde4c5137d21cdda69602ab25e961479f89f7dc5b6.scope: Deactivated successfully.
Oct 02 12:23:11 compute-0 sudo[316296]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:23:11 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:23:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:23:11 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:23:11 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 9489f9cd-3410-4086-8a4e-45ff3dce812e does not exist
Oct 02 12:23:11 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev ab22e14b-4a5e-4a56-8fb5-b759f40ee912 does not exist
Oct 02 12:23:11 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev f5dd34f0-a256-4ccb-b851-1b4a2c92f857 does not exist
Oct 02 12:23:11 compute-0 ceph-mon[73607]: pgmap v1817: 305 pgs: 305 active+clean; 405 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Oct 02 12:23:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/297865785' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:23:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:23:11 compute-0 sudo[316475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:23:11 compute-0 sudo[316475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:11 compute-0 sudo[316475]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:11 compute-0 sudo[316500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:23:11 compute-0 sudo[316500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:11 compute-0 sudo[316500]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 305 active+clean; 405 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 489 KiB/s wr, 29 op/s
Oct 02 12:23:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:12 compute-0 nova_compute[257802]: 2025-10-02 12:23:12.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:12 compute-0 ceph-mon[73607]: pgmap v1818: 305 pgs: 305 active+clean; 405 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 489 KiB/s wr, 29 op/s
Oct 02 12:23:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:12.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:23:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:23:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:23:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:23:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:23:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:23:12 compute-0 nova_compute[257802]: 2025-10-02 12:23:12.852 2 DEBUG nova.network.neutron [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Updating instance_info_cache with network_info: [{"id": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "address": "fa:16:3e:fc:ac:3b", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da3e5f9-b3", "ovs_interfaceid": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:23:12 compute-0 nova_compute[257802]: 2025-10-02 12:23:12.869 2 DEBUG oslo_concurrency.lockutils [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Releasing lock "refresh_cache-9ae72f32-b9fd-44eb-b10d-79119ad2ca85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:23:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:23:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:12.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:23:12 compute-0 nova_compute[257802]: 2025-10-02 12:23:12.986 2 DEBUG nova.virt.libvirt.driver [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Oct 02 12:23:12 compute-0 nova_compute[257802]: 2025-10-02 12:23:12.991 2 DEBUG nova.virt.libvirt.driver [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:23:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1819: 305 pgs: 305 active+clean; 442 MiB data, 965 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.5 MiB/s wr, 106 op/s
Oct 02 12:23:14 compute-0 nova_compute[257802]: 2025-10-02 12:23:14.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:14.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:14 compute-0 ceph-mon[73607]: pgmap v1819: 305 pgs: 305 active+clean; 442 MiB data, 965 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.5 MiB/s wr, 106 op/s
Oct 02 12:23:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3162333235' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:14.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:15 compute-0 kernel: tap7da3e5f9-b3 (unregistering): left promiscuous mode
Oct 02 12:23:15 compute-0 NetworkManager[44987]: <info>  [1759407795.3213] device (tap7da3e5f9-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:23:15 compute-0 ovn_controller[148183]: 2025-10-02T12:23:15Z|00406|binding|INFO|Releasing lport 7da3e5f9-b358-4404-825b-b1ad43bc54ac from this chassis (sb_readonly=0)
Oct 02 12:23:15 compute-0 ovn_controller[148183]: 2025-10-02T12:23:15Z|00407|binding|INFO|Setting lport 7da3e5f9-b358-4404-825b-b1ad43bc54ac down in Southbound
Oct 02 12:23:15 compute-0 nova_compute[257802]: 2025-10-02 12:23:15.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:15 compute-0 ovn_controller[148183]: 2025-10-02T12:23:15Z|00408|binding|INFO|Removing iface tap7da3e5f9-b3 ovn-installed in OVS
Oct 02 12:23:15 compute-0 nova_compute[257802]: 2025-10-02 12:23:15.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:15.341 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:ac:3b 10.100.0.3'], port_security=['fa:16:3e:fc:ac:3b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9ae72f32-b9fd-44eb-b10d-79119ad2ca85', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4035a600-4a5e-41ee-a619-d81e2c993b79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '10fff81da7a54740a53a0771ce916329', 'neutron:revision_number': '4', 'neutron:security_group_ids': '32af0a94-4565-470d-9918-1bc97e347f8f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5dc7931-b785-4336-99b8-936a17be87c3, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=7da3e5f9-b358-4404-825b-b1ad43bc54ac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:23:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:15.342 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 7da3e5f9-b358-4404-825b-b1ad43bc54ac in datapath 4035a600-4a5e-41ee-a619-d81e2c993b79 unbound from our chassis
Oct 02 12:23:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:15.343 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4035a600-4a5e-41ee-a619-d81e2c993b79, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:23:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:15.344 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[43a95be5-d56a-4e28-b615-ff366d4da6cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:15.345 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 namespace which is not needed anymore
Oct 02 12:23:15 compute-0 nova_compute[257802]: 2025-10-02 12:23:15.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:15 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Oct 02 12:23:15 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d0000005d.scope: Consumed 16.288s CPU time.
Oct 02 12:23:15 compute-0 systemd-machined[211836]: Machine qemu-45-instance-0000005d terminated.
Oct 02 12:23:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 305 active+clean; 451 MiB data, 975 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 118 op/s
Oct 02 12:23:15 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[314746]: [NOTICE]   (314769) : haproxy version is 2.8.14-c23fe91
Oct 02 12:23:15 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[314746]: [NOTICE]   (314769) : path to executable is /usr/sbin/haproxy
Oct 02 12:23:15 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[314746]: [WARNING]  (314769) : Exiting Master process...
Oct 02 12:23:15 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[314746]: [ALERT]    (314769) : Current worker (314771) exited with code 143 (Terminated)
Oct 02 12:23:15 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[314746]: [WARNING]  (314769) : All workers exited. Exiting... (0)
Oct 02 12:23:15 compute-0 systemd[1]: libpod-3f7110bc50f59364756fec7b555423eb8e757eb86df56ab37f6a88553a177947.scope: Deactivated successfully.
Oct 02 12:23:15 compute-0 podman[316551]: 2025-10-02 12:23:15.486855427 +0000 UTC m=+0.043703312 container died 3f7110bc50f59364756fec7b555423eb8e757eb86df56ab37f6a88553a177947 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:23:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3f7110bc50f59364756fec7b555423eb8e757eb86df56ab37f6a88553a177947-userdata-shm.mount: Deactivated successfully.
Oct 02 12:23:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8bd282c0d6e6d22d52f0312e8fbd4a403fe5b610d6d99bd867ad480c5eb66fc-merged.mount: Deactivated successfully.
Oct 02 12:23:15 compute-0 podman[316551]: 2025-10-02 12:23:15.535440517 +0000 UTC m=+0.092288402 container cleanup 3f7110bc50f59364756fec7b555423eb8e757eb86df56ab37f6a88553a177947 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 12:23:15 compute-0 systemd[1]: libpod-conmon-3f7110bc50f59364756fec7b555423eb8e757eb86df56ab37f6a88553a177947.scope: Deactivated successfully.
Oct 02 12:23:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2882623300' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:15 compute-0 nova_compute[257802]: 2025-10-02 12:23:15.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:15 compute-0 nova_compute[257802]: 2025-10-02 12:23:15.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:15 compute-0 podman[316582]: 2025-10-02 12:23:15.628735892 +0000 UTC m=+0.062849000 container remove 3f7110bc50f59364756fec7b555423eb8e757eb86df56ab37f6a88553a177947 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 12:23:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:15.635 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f671f451-0fe1-4a13-90d5-9be23c4cd9ce]: (4, ('Thu Oct  2 12:23:15 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 (3f7110bc50f59364756fec7b555423eb8e757eb86df56ab37f6a88553a177947)\n3f7110bc50f59364756fec7b555423eb8e757eb86df56ab37f6a88553a177947\nThu Oct  2 12:23:15 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 (3f7110bc50f59364756fec7b555423eb8e757eb86df56ab37f6a88553a177947)\n3f7110bc50f59364756fec7b555423eb8e757eb86df56ab37f6a88553a177947\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:15.637 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[130df933-f76c-4dbe-b6f3-9e898e3e7be7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:15.638 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4035a600-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:15 compute-0 kernel: tap4035a600-40: left promiscuous mode
Oct 02 12:23:15 compute-0 nova_compute[257802]: 2025-10-02 12:23:15.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:15 compute-0 nova_compute[257802]: 2025-10-02 12:23:15.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:15.660 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[192cc0d0-588b-4e9a-abdd-66cdc82a7914]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:15 compute-0 nova_compute[257802]: 2025-10-02 12:23:15.697 2 DEBUG nova.compute.manager [req-cdb565e0-dc29-455d-a5c7-a1148f901a14 req-6a063831-0c58-418d-ac5b-9e2dfbe37187 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Received event network-vif-unplugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:15 compute-0 nova_compute[257802]: 2025-10-02 12:23:15.698 2 DEBUG oslo_concurrency.lockutils [req-cdb565e0-dc29-455d-a5c7-a1148f901a14 req-6a063831-0c58-418d-ac5b-9e2dfbe37187 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:15 compute-0 nova_compute[257802]: 2025-10-02 12:23:15.698 2 DEBUG oslo_concurrency.lockutils [req-cdb565e0-dc29-455d-a5c7-a1148f901a14 req-6a063831-0c58-418d-ac5b-9e2dfbe37187 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:15 compute-0 nova_compute[257802]: 2025-10-02 12:23:15.699 2 DEBUG oslo_concurrency.lockutils [req-cdb565e0-dc29-455d-a5c7-a1148f901a14 req-6a063831-0c58-418d-ac5b-9e2dfbe37187 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:15 compute-0 nova_compute[257802]: 2025-10-02 12:23:15.699 2 DEBUG nova.compute.manager [req-cdb565e0-dc29-455d-a5c7-a1148f901a14 req-6a063831-0c58-418d-ac5b-9e2dfbe37187 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] No waiting events found dispatching network-vif-unplugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:23:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:15.697 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3fdbf244-12a2-4d25-84d6-8868035fd9ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:15 compute-0 nova_compute[257802]: 2025-10-02 12:23:15.699 2 WARNING nova.compute.manager [req-cdb565e0-dc29-455d-a5c7-a1148f901a14 req-6a063831-0c58-418d-ac5b-9e2dfbe37187 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Received unexpected event network-vif-unplugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac for instance with vm_state active and task_state resize_migrating.
Oct 02 12:23:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:15.699 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ca4332e1-2a8e-4a01-90cc-32a92db66ba4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:15.720 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[67c581e8-15ec-446f-b3e5-78dac1de1c86]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581172, 'reachable_time': 31102, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316609, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:15.723 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:23:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:15.723 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[6f4f95ab-d6f6-4300-96b5-d33a2098abc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:15 compute-0 systemd[1]: run-netns-ovnmeta\x2d4035a600\x2d4a5e\x2d41ee\x2da619\x2dd81e2c993b79.mount: Deactivated successfully.
Oct 02 12:23:16 compute-0 nova_compute[257802]: 2025-10-02 12:23:16.009 2 INFO nova.virt.libvirt.driver [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Instance shutdown successfully after 3 seconds.
Oct 02 12:23:16 compute-0 nova_compute[257802]: 2025-10-02 12:23:16.018 2 INFO nova.virt.libvirt.driver [-] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Instance destroyed successfully.
Oct 02 12:23:16 compute-0 nova_compute[257802]: 2025-10-02 12:23:16.019 2 DEBUG nova.virt.libvirt.vif [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1174011223',display_name='tempest-ServerActionsTestOtherB-server-1174011223',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1174011223',id=93,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGD2jbBFmRg2ZrnheVnZyLwDISk/dFTNtp10+sWyF/q+rC4Q86cvBQSRgacxSPIqXVpmiVTqI66cLDPhvjcnRFXyQqHRS/RWGvUZk+wm1wfft8CveiGko+Vh4vSox2iOrA==',key_name='tempest-keypair-1336245373',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:22:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='10fff81da7a54740a53a0771ce916329',ramdisk_id='',reservation_id='r-12h270gm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-1686489955',owner_user_name='tempest-ServerActionsTestOtherB-1686489955-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:23:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='25468893d71641a385711fd2982bb00b',uuid=9ae72f32-b9fd-44eb-b10d-79119ad2ca85,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "address": "fa:16:3e:fc:ac:3b", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-2080271662-network", "vif_mac": "fa:16:3e:fc:ac:3b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da3e5f9-b3", "ovs_interfaceid": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:23:16 compute-0 nova_compute[257802]: 2025-10-02 12:23:16.020 2 DEBUG nova.network.os_vif_util [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converting VIF {"id": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "address": "fa:16:3e:fc:ac:3b", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-2080271662-network", "vif_mac": "fa:16:3e:fc:ac:3b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da3e5f9-b3", "ovs_interfaceid": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:23:16 compute-0 nova_compute[257802]: 2025-10-02 12:23:16.021 2 DEBUG nova.network.os_vif_util [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fc:ac:3b,bridge_name='br-int',has_traffic_filtering=True,id=7da3e5f9-b358-4404-825b-b1ad43bc54ac,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da3e5f9-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:23:16 compute-0 nova_compute[257802]: 2025-10-02 12:23:16.022 2 DEBUG os_vif [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:ac:3b,bridge_name='br-int',has_traffic_filtering=True,id=7da3e5f9-b358-4404-825b-b1ad43bc54ac,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da3e5f9-b3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:23:16 compute-0 nova_compute[257802]: 2025-10-02 12:23:16.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:16 compute-0 nova_compute[257802]: 2025-10-02 12:23:16.026 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7da3e5f9-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:16 compute-0 nova_compute[257802]: 2025-10-02 12:23:16.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:16 compute-0 nova_compute[257802]: 2025-10-02 12:23:16.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:16 compute-0 nova_compute[257802]: 2025-10-02 12:23:16.036 2 INFO os_vif [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:ac:3b,bridge_name='br-int',has_traffic_filtering=True,id=7da3e5f9-b358-4404-825b-b1ad43bc54ac,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da3e5f9-b3')
Oct 02 12:23:16 compute-0 nova_compute[257802]: 2025-10-02 12:23:16.045 2 DEBUG nova.virt.libvirt.driver [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:23:16 compute-0 nova_compute[257802]: 2025-10-02 12:23:16.046 2 DEBUG nova.virt.libvirt.driver [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:23:16 compute-0 nova_compute[257802]: 2025-10-02 12:23:16.046 2 DEBUG nova.virt.libvirt.driver [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:23:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.002000048s ======
Oct 02 12:23:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:16.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Oct 02 12:23:16 compute-0 ceph-mon[73607]: pgmap v1820: 305 pgs: 305 active+clean; 451 MiB data, 975 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 118 op/s
Oct 02 12:23:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:16.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:17 compute-0 nova_compute[257802]: 2025-10-02 12:23:17.143 2 DEBUG nova.network.neutron [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Port 7da3e5f9-b358-4404-825b-b1ad43bc54ac binding to destination host compute-0.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171
Oct 02 12:23:17 compute-0 nova_compute[257802]: 2025-10-02 12:23:17.263 2 DEBUG oslo_concurrency.lockutils [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:17 compute-0 nova_compute[257802]: 2025-10-02 12:23:17.263 2 DEBUG oslo_concurrency.lockutils [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:17 compute-0 nova_compute[257802]: 2025-10-02 12:23:17.264 2 DEBUG oslo_concurrency.lockutils [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:17 compute-0 nova_compute[257802]: 2025-10-02 12:23:17.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1821: 305 pgs: 305 active+clean; 438 MiB data, 969 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 141 op/s
Oct 02 12:23:17 compute-0 nova_compute[257802]: 2025-10-02 12:23:17.495 2 DEBUG oslo_concurrency.lockutils [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "refresh_cache-9ae72f32-b9fd-44eb-b10d-79119ad2ca85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:23:17 compute-0 nova_compute[257802]: 2025-10-02 12:23:17.496 2 DEBUG oslo_concurrency.lockutils [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquired lock "refresh_cache-9ae72f32-b9fd-44eb-b10d-79119ad2ca85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:23:17 compute-0 nova_compute[257802]: 2025-10-02 12:23:17.496 2 DEBUG nova.network.neutron [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:23:17 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/515492389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:17 compute-0 nova_compute[257802]: 2025-10-02 12:23:17.834 2 DEBUG nova.compute.manager [req-115a234b-dce9-4ee4-88d9-986ce821a676 req-fa5583c8-43da-4a9f-95ab-e2e2dca797f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Received event network-vif-plugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:17 compute-0 nova_compute[257802]: 2025-10-02 12:23:17.835 2 DEBUG oslo_concurrency.lockutils [req-115a234b-dce9-4ee4-88d9-986ce821a676 req-fa5583c8-43da-4a9f-95ab-e2e2dca797f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:17 compute-0 nova_compute[257802]: 2025-10-02 12:23:17.835 2 DEBUG oslo_concurrency.lockutils [req-115a234b-dce9-4ee4-88d9-986ce821a676 req-fa5583c8-43da-4a9f-95ab-e2e2dca797f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:17 compute-0 nova_compute[257802]: 2025-10-02 12:23:17.836 2 DEBUG oslo_concurrency.lockutils [req-115a234b-dce9-4ee4-88d9-986ce821a676 req-fa5583c8-43da-4a9f-95ab-e2e2dca797f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:17 compute-0 nova_compute[257802]: 2025-10-02 12:23:17.836 2 DEBUG nova.compute.manager [req-115a234b-dce9-4ee4-88d9-986ce821a676 req-fa5583c8-43da-4a9f-95ab-e2e2dca797f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] No waiting events found dispatching network-vif-plugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:23:17 compute-0 nova_compute[257802]: 2025-10-02 12:23:17.836 2 WARNING nova.compute.manager [req-115a234b-dce9-4ee4-88d9-986ce821a676 req-fa5583c8-43da-4a9f-95ab-e2e2dca797f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Received unexpected event network-vif-plugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac for instance with vm_state active and task_state resize_migrated.
Oct 02 12:23:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:18.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:18 compute-0 ceph-mon[73607]: pgmap v1821: 305 pgs: 305 active+clean; 438 MiB data, 969 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 141 op/s
Oct 02 12:23:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:18.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:19 compute-0 nova_compute[257802]: 2025-10-02 12:23:19.127 2 DEBUG nova.network.neutron [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Updating instance_info_cache with network_info: [{"id": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "address": "fa:16:3e:fc:ac:3b", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da3e5f9-b3", "ovs_interfaceid": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:23:19 compute-0 nova_compute[257802]: 2025-10-02 12:23:19.153 2 DEBUG oslo_concurrency.lockutils [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Releasing lock "refresh_cache-9ae72f32-b9fd-44eb-b10d-79119ad2ca85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:23:19 compute-0 nova_compute[257802]: 2025-10-02 12:23:19.222 2 DEBUG os_brick.utils [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:23:19 compute-0 nova_compute[257802]: 2025-10-02 12:23:19.223 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:19 compute-0 nova_compute[257802]: 2025-10-02 12:23:19.233 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:19 compute-0 nova_compute[257802]: 2025-10-02 12:23:19.234 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[fdebfdf3-3aef-474e-9561-acb02457a36f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:19 compute-0 nova_compute[257802]: 2025-10-02 12:23:19.235 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:19 compute-0 nova_compute[257802]: 2025-10-02 12:23:19.243 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:19 compute-0 nova_compute[257802]: 2025-10-02 12:23:19.243 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[2b3eb4e0-80af-4a66-bfbc-51715e450ced]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:19 compute-0 nova_compute[257802]: 2025-10-02 12:23:19.244 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:19 compute-0 nova_compute[257802]: 2025-10-02 12:23:19.252 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:19 compute-0 nova_compute[257802]: 2025-10-02 12:23:19.253 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[13572331-eb8b-4bce-988e-bfed30fb737a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:19 compute-0 nova_compute[257802]: 2025-10-02 12:23:19.254 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[14fa7c21-f30e-4ee2-930b-eb8b9c6e18db]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:19 compute-0 nova_compute[257802]: 2025-10-02 12:23:19.254 2 DEBUG oslo_concurrency.processutils [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:19 compute-0 nova_compute[257802]: 2025-10-02 12:23:19.284 2 DEBUG oslo_concurrency.processutils [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:19 compute-0 nova_compute[257802]: 2025-10-02 12:23:19.286 2 DEBUG os_brick.initiator.connectors.lightos [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:23:19 compute-0 nova_compute[257802]: 2025-10-02 12:23:19.287 2 DEBUG os_brick.initiator.connectors.lightos [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:23:19 compute-0 nova_compute[257802]: 2025-10-02 12:23:19.287 2 DEBUG os_brick.initiator.connectors.lightos [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:23:19 compute-0 nova_compute[257802]: 2025-10-02 12:23:19.287 2 DEBUG os_brick.utils [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] <== get_connector_properties: return (65ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:23:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 305 active+clean; 405 MiB data, 957 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 152 op/s
Oct 02 12:23:20 compute-0 nova_compute[257802]: 2025-10-02 12:23:20.485 2 DEBUG nova.virt.libvirt.driver [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Oct 02 12:23:20 compute-0 nova_compute[257802]: 2025-10-02 12:23:20.487 2 DEBUG nova.virt.libvirt.driver [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 12:23:20 compute-0 nova_compute[257802]: 2025-10-02 12:23:20.488 2 INFO nova.virt.libvirt.driver [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Creating image(s)
Oct 02 12:23:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:20.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:20 compute-0 nova_compute[257802]: 2025-10-02 12:23:20.517 2 DEBUG nova.storage.rbd_utils [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] creating snapshot(nova-resize) on rbd image(9ae72f32-b9fd-44eb-b10d-79119ad2ca85_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:23:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Oct 02 12:23:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Oct 02 12:23:20 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Oct 02 12:23:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:20.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:21 compute-0 sudo[316656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:23:21 compute-0 sudo[316656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:21 compute-0 sudo[316656]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:21 compute-0 nova_compute[257802]: 2025-10-02 12:23:21.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:21 compute-0 ceph-mon[73607]: pgmap v1822: 305 pgs: 305 active+clean; 405 MiB data, 957 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 152 op/s
Oct 02 12:23:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1346144995' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:21 compute-0 sudo[316681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:23:21 compute-0 sudo[316681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:21 compute-0 sudo[316681]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1824: 305 pgs: 305 active+clean; 405 MiB data, 957 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.2 MiB/s wr, 176 op/s
Oct 02 12:23:21 compute-0 nova_compute[257802]: 2025-10-02 12:23:21.639 2 DEBUG nova.objects.instance [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9ae72f32-b9fd-44eb-b10d-79119ad2ca85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:23:21 compute-0 nova_compute[257802]: 2025-10-02 12:23:21.728 2 DEBUG nova.virt.libvirt.driver [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:23:21 compute-0 nova_compute[257802]: 2025-10-02 12:23:21.728 2 DEBUG nova.virt.libvirt.driver [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Ensure instance console log exists: /var/lib/nova/instances/9ae72f32-b9fd-44eb-b10d-79119ad2ca85/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:23:21 compute-0 nova_compute[257802]: 2025-10-02 12:23:21.729 2 DEBUG oslo_concurrency.lockutils [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:21 compute-0 nova_compute[257802]: 2025-10-02 12:23:21.729 2 DEBUG oslo_concurrency.lockutils [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:21 compute-0 nova_compute[257802]: 2025-10-02 12:23:21.729 2 DEBUG oslo_concurrency.lockutils [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:21 compute-0 nova_compute[257802]: 2025-10-02 12:23:21.732 2 DEBUG nova.virt.libvirt.driver [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Start _get_guest_xml network_info=[{"id": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "address": "fa:16:3e:fc:ac:3b", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-2080271662-network", "vif_mac": "fa:16:3e:fc:ac:3b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da3e5f9-b3", "ovs_interfaceid": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [{'boot_index': None, 'guest_format': None, 'attachment_id': '4828b9c9-25a6-4b60-9892-2c42f1deb7c8', 'mount_device': '/dev/vdb', 'disk_bus': 'virtio', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-64167daa-7aad-468b-9a08-b3195fe87186', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '64167daa-7aad-468b-9a08-b3195fe87186', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': '9ae72f32-b9fd-44eb-b10d-79119ad2ca85', 'attached_at': '2025-10-02T12:23:20.000000', 'detached_at': '', 'volume_id': '64167daa-7aad-468b-9a08-b3195fe87186', 'serial': '64167daa-7aad-468b-9a08-b3195fe87186'}, 'device_type': 'disk', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:23:21 compute-0 nova_compute[257802]: 2025-10-02 12:23:21.736 2 WARNING nova.virt.libvirt.driver [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:23:21 compute-0 nova_compute[257802]: 2025-10-02 12:23:21.741 2 DEBUG nova.virt.libvirt.host [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:23:21 compute-0 nova_compute[257802]: 2025-10-02 12:23:21.742 2 DEBUG nova.virt.libvirt.host [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:23:21 compute-0 nova_compute[257802]: 2025-10-02 12:23:21.744 2 DEBUG nova.virt.libvirt.host [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:23:21 compute-0 nova_compute[257802]: 2025-10-02 12:23:21.744 2 DEBUG nova.virt.libvirt.host [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:23:21 compute-0 nova_compute[257802]: 2025-10-02 12:23:21.745 2 DEBUG nova.virt.libvirt.driver [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:23:21 compute-0 nova_compute[257802]: 2025-10-02 12:23:21.746 2 DEBUG nova.virt.hardware [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:39Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='eb3a53f1-304b-4cb0-acc3-abffce0fb181',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:23:21 compute-0 nova_compute[257802]: 2025-10-02 12:23:21.746 2 DEBUG nova.virt.hardware [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:23:21 compute-0 nova_compute[257802]: 2025-10-02 12:23:21.746 2 DEBUG nova.virt.hardware [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:23:21 compute-0 nova_compute[257802]: 2025-10-02 12:23:21.746 2 DEBUG nova.virt.hardware [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:23:21 compute-0 nova_compute[257802]: 2025-10-02 12:23:21.747 2 DEBUG nova.virt.hardware [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:23:21 compute-0 nova_compute[257802]: 2025-10-02 12:23:21.747 2 DEBUG nova.virt.hardware [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:23:21 compute-0 nova_compute[257802]: 2025-10-02 12:23:21.747 2 DEBUG nova.virt.hardware [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:23:21 compute-0 nova_compute[257802]: 2025-10-02 12:23:21.747 2 DEBUG nova.virt.hardware [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:23:21 compute-0 nova_compute[257802]: 2025-10-02 12:23:21.748 2 DEBUG nova.virt.hardware [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:23:21 compute-0 nova_compute[257802]: 2025-10-02 12:23:21.748 2 DEBUG nova.virt.hardware [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:23:21 compute-0 nova_compute[257802]: 2025-10-02 12:23:21.748 2 DEBUG nova.virt.hardware [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:23:21 compute-0 nova_compute[257802]: 2025-10-02 12:23:21.748 2 DEBUG nova.objects.instance [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9ae72f32-b9fd-44eb-b10d-79119ad2ca85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:23:21 compute-0 nova_compute[257802]: 2025-10-02 12:23:21.767 2 DEBUG oslo_concurrency.processutils [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:22 compute-0 ceph-mon[73607]: osdmap e282: 3 total, 3 up, 3 in
Oct 02 12:23:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/770534632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:23:22 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1925030' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.188 2 DEBUG oslo_concurrency.processutils [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.238 2 DEBUG oslo_concurrency.processutils [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:22.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:23:22 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2788175752' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.716 2 DEBUG oslo_concurrency.processutils [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.740 2 DEBUG nova.virt.libvirt.vif [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1174011223',display_name='tempest-ServerActionsTestOtherB-server-1174011223',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1174011223',id=93,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGD2jbBFmRg2ZrnheVnZyLwDISk/dFTNtp10+sWyF/q+rC4Q86cvBQSRgacxSPIqXVpmiVTqI66cLDPhvjcnRFXyQqHRS/RWGvUZk+wm1wfft8CveiGko+Vh4vSox2iOrA==',key_name='tempest-keypair-1336245373',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:22:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='10fff81da7a54740a53a0771ce916329',ramdisk_id='',reservation_id='r-12h270gm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-1686489955',owner_user_name='tempest-ServerActionsTestOtherB-1686489955-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:23:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='25468893d71641a385711fd2982bb00b',uuid=9ae72f32-b9fd-44eb-b10d-79119ad2ca85,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "address": "fa:16:3e:fc:ac:3b", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-2080271662-network", "vif_mac": "fa:16:3e:fc:ac:3b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da3e5f9-b3", "ovs_interfaceid": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.740 2 DEBUG nova.network.os_vif_util [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converting VIF {"id": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "address": "fa:16:3e:fc:ac:3b", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-2080271662-network", "vif_mac": "fa:16:3e:fc:ac:3b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da3e5f9-b3", "ovs_interfaceid": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.741 2 DEBUG nova.network.os_vif_util [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ac:3b,bridge_name='br-int',has_traffic_filtering=True,id=7da3e5f9-b358-4404-825b-b1ad43bc54ac,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da3e5f9-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.743 2 DEBUG nova.virt.libvirt.driver [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:23:22 compute-0 nova_compute[257802]:   <uuid>9ae72f32-b9fd-44eb-b10d-79119ad2ca85</uuid>
Oct 02 12:23:22 compute-0 nova_compute[257802]:   <name>instance-0000005d</name>
Oct 02 12:23:22 compute-0 nova_compute[257802]:   <memory>196608</memory>
Oct 02 12:23:22 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:23:22 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <nova:name>tempest-ServerActionsTestOtherB-server-1174011223</nova:name>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:23:21</nova:creationTime>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <nova:flavor name="m1.micro">
Oct 02 12:23:22 compute-0 nova_compute[257802]:         <nova:memory>192</nova:memory>
Oct 02 12:23:22 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:23:22 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:23:22 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:23:22 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:23:22 compute-0 nova_compute[257802]:         <nova:user uuid="25468893d71641a385711fd2982bb00b">tempest-ServerActionsTestOtherB-1686489955-project-member</nova:user>
Oct 02 12:23:22 compute-0 nova_compute[257802]:         <nova:project uuid="10fff81da7a54740a53a0771ce916329">tempest-ServerActionsTestOtherB-1686489955</nova:project>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:23:22 compute-0 nova_compute[257802]:         <nova:port uuid="7da3e5f9-b358-4404-825b-b1ad43bc54ac">
Oct 02 12:23:22 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:23:22 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:23:22 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <system>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <entry name="serial">9ae72f32-b9fd-44eb-b10d-79119ad2ca85</entry>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <entry name="uuid">9ae72f32-b9fd-44eb-b10d-79119ad2ca85</entry>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     </system>
Oct 02 12:23:22 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:23:22 compute-0 nova_compute[257802]:   <os>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:   </os>
Oct 02 12:23:22 compute-0 nova_compute[257802]:   <features>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:   </features>
Oct 02 12:23:22 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:23:22 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:23:22 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/9ae72f32-b9fd-44eb-b10d-79119ad2ca85_disk">
Oct 02 12:23:22 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       </source>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:23:22 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/9ae72f32-b9fd-44eb-b10d-79119ad2ca85_disk.config">
Oct 02 12:23:22 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       </source>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:23:22 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <source protocol="rbd" name="volumes/volume-64167daa-7aad-468b-9a08-b3195fe87186">
Oct 02 12:23:22 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       </source>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:23:22 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <target dev="vdb" bus="virtio"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <serial>64167daa-7aad-468b-9a08-b3195fe87186</serial>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:fc:ac:3b"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <target dev="tap7da3e5f9-b3"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/9ae72f32-b9fd-44eb-b10d-79119ad2ca85/console.log" append="off"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <video>
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     </video>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:23:22 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:23:22 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:23:22 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:23:22 compute-0 nova_compute[257802]: </domain>
Oct 02 12:23:22 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.744 2 DEBUG nova.virt.libvirt.vif [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1174011223',display_name='tempest-ServerActionsTestOtherB-server-1174011223',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1174011223',id=93,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGD2jbBFmRg2ZrnheVnZyLwDISk/dFTNtp10+sWyF/q+rC4Q86cvBQSRgacxSPIqXVpmiVTqI66cLDPhvjcnRFXyQqHRS/RWGvUZk+wm1wfft8CveiGko+Vh4vSox2iOrA==',key_name='tempest-keypair-1336245373',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:22:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='10fff81da7a54740a53a0771ce916329',ramdisk_id='',reservation_id='r-12h270gm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-1686489955',owner_user_name='tempest-ServerActionsTestOtherB-1686489955-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:23:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='25468893d71641a385711fd2982bb00b',uuid=9ae72f32-b9fd-44eb-b10d-79119ad2ca85,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "address": "fa:16:3e:fc:ac:3b", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-2080271662-network", "vif_mac": "fa:16:3e:fc:ac:3b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da3e5f9-b3", "ovs_interfaceid": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.745 2 DEBUG nova.network.os_vif_util [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converting VIF {"id": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "address": "fa:16:3e:fc:ac:3b", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-2080271662-network", "vif_mac": "fa:16:3e:fc:ac:3b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da3e5f9-b3", "ovs_interfaceid": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.745 2 DEBUG nova.network.os_vif_util [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ac:3b,bridge_name='br-int',has_traffic_filtering=True,id=7da3e5f9-b358-4404-825b-b1ad43bc54ac,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da3e5f9-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.746 2 DEBUG os_vif [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ac:3b,bridge_name='br-int',has_traffic_filtering=True,id=7da3e5f9-b358-4404-825b-b1ad43bc54ac,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da3e5f9-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.747 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.747 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.750 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7da3e5f9-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.750 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7da3e5f9-b3, col_values=(('external_ids', {'iface-id': '7da3e5f9-b358-4404-825b-b1ad43bc54ac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:ac:3b', 'vm-uuid': '9ae72f32-b9fd-44eb-b10d-79119ad2ca85'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:22 compute-0 NetworkManager[44987]: <info>  [1759407802.7532] manager: (tap7da3e5f9-b3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/189)
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.762 2 INFO os_vif [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ac:3b,bridge_name='br-int',has_traffic_filtering=True,id=7da3e5f9-b358-4404-825b-b1ad43bc54ac,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da3e5f9-b3')
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.847 2 DEBUG nova.virt.libvirt.driver [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.847 2 DEBUG nova.virt.libvirt.driver [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.848 2 DEBUG nova.virt.libvirt.driver [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.848 2 DEBUG nova.virt.libvirt.driver [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] No VIF found with MAC fa:16:3e:fc:ac:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.848 2 INFO nova.virt.libvirt.driver [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Using config drive
Oct 02 12:23:22 compute-0 kernel: tap7da3e5f9-b3: entered promiscuous mode
Oct 02 12:23:22 compute-0 NetworkManager[44987]: <info>  [1759407802.9355] manager: (tap7da3e5f9-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/190)
Oct 02 12:23:22 compute-0 ovn_controller[148183]: 2025-10-02T12:23:22Z|00409|binding|INFO|Claiming lport 7da3e5f9-b358-4404-825b-b1ad43bc54ac for this chassis.
Oct 02 12:23:22 compute-0 ovn_controller[148183]: 2025-10-02T12:23:22Z|00410|binding|INFO|7da3e5f9-b358-4404-825b-b1ad43bc54ac: Claiming fa:16:3e:fc:ac:3b 10.100.0.3
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:22.943 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:ac:3b 10.100.0.3'], port_security=['fa:16:3e:fc:ac:3b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9ae72f32-b9fd-44eb-b10d-79119ad2ca85', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4035a600-4a5e-41ee-a619-d81e2c993b79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '10fff81da7a54740a53a0771ce916329', 'neutron:revision_number': '5', 'neutron:security_group_ids': '32af0a94-4565-470d-9918-1bc97e347f8f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5dc7931-b785-4336-99b8-936a17be87c3, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=7da3e5f9-b358-4404-825b-b1ad43bc54ac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:23:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:22.945 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 7da3e5f9-b358-4404-825b-b1ad43bc54ac in datapath 4035a600-4a5e-41ee-a619-d81e2c993b79 bound to our chassis
Oct 02 12:23:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:22.946 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4035a600-4a5e-41ee-a619-d81e2c993b79
Oct 02 12:23:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:22.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:22 compute-0 ovn_controller[148183]: 2025-10-02T12:23:22Z|00411|binding|INFO|Setting lport 7da3e5f9-b358-4404-825b-b1ad43bc54ac ovn-installed in OVS
Oct 02 12:23:22 compute-0 ovn_controller[148183]: 2025-10-02T12:23:22Z|00412|binding|INFO|Setting lport 7da3e5f9-b358-4404-825b-b1ad43bc54ac up in Southbound
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:22 compute-0 nova_compute[257802]: 2025-10-02 12:23:22.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:22.962 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[00e9dd35-0b17-4587-8f78-424d18b5661b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:22.963 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4035a600-41 in ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:23:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:22.965 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4035a600-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:23:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:22.965 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7478b823-7e73-432a-ba2d-b7d47ab0ece8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:22.965 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3982fbc4-a759-4b5a-8bda-9d2708e6a06b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:22 compute-0 systemd-udevd[316838]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:23:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:22.976 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[5d33e9ee-6b7e-4992-8f28-63bc80f75e78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:22 compute-0 NetworkManager[44987]: <info>  [1759407802.9805] device (tap7da3e5f9-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:23:22 compute-0 NetworkManager[44987]: <info>  [1759407802.9813] device (tap7da3e5f9-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:23:22 compute-0 systemd-machined[211836]: New machine qemu-47-instance-0000005d.
Oct 02 12:23:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:22.998 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[85637430-0464-4c3f-849f-94fccbb208f1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:23 compute-0 systemd[1]: Started Virtual Machine qemu-47-instance-0000005d.
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:23.023 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[7ddbb328-0ee2-40d0-8035-ab71d3bad53c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:23 compute-0 NetworkManager[44987]: <info>  [1759407803.0297] manager: (tap4035a600-40): new Veth device (/org/freedesktop/NetworkManager/Devices/191)
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:23.029 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f8236388-1eb3-413f-b67a-d0d4e3dd9d12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:23.064 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[7bde169f-0dee-4c08-991b-954941f3939a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:23.068 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[aea17aa5-a5c4-420d-bc10-09d51e7876e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:23 compute-0 NetworkManager[44987]: <info>  [1759407803.0869] device (tap4035a600-40): carrier: link connected
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:23.091 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[f0a25002-41f6-469c-93a3-64a2b5fe8f1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:23.109 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c825d2a0-4784-4cef-901b-4eda9c9c89f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4035a600-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:fb:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587071, 'reachable_time': 35616, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316870, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:23.123 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[312443ec-552e-4346-b020-fc6aacf14be3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed0:fb3f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 587071, 'tstamp': 587071}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316871, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:23.139 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9220a24b-f8db-420a-9ce5-98cf0fa1a530]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4035a600-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:fb:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587071, 'reachable_time': 35616, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316872, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:23.164 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[77165e3b-891d-45be-835c-6868b2672516]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:23.213 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[abbbc507-6775-4eb5-898d-58aefeb2c37e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:23.215 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4035a600-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:23.215 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:23.215 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4035a600-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:23 compute-0 kernel: tap4035a600-40: entered promiscuous mode
Oct 02 12:23:23 compute-0 NetworkManager[44987]: <info>  [1759407803.2176] manager: (tap4035a600-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/192)
Oct 02 12:23:23 compute-0 nova_compute[257802]: 2025-10-02 12:23:23.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:23 compute-0 nova_compute[257802]: 2025-10-02 12:23:23.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:23.219 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4035a600-40, col_values=(('external_ids', {'iface-id': '1befa812-080f-4694-ba8b-9130fe81621d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:23 compute-0 ovn_controller[148183]: 2025-10-02T12:23:23Z|00413|binding|INFO|Releasing lport 1befa812-080f-4694-ba8b-9130fe81621d from this chassis (sb_readonly=0)
Oct 02 12:23:23 compute-0 nova_compute[257802]: 2025-10-02 12:23:23.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:23.221 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4035a600-4a5e-41ee-a619-d81e2c993b79.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4035a600-4a5e-41ee-a619-d81e2c993b79.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:23.222 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[936226ce-059e-45dd-89b5-d1214ece1036]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:23.222 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-4035a600-4a5e-41ee-a619-d81e2c993b79
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/4035a600-4a5e-41ee-a619-d81e2c993b79.pid.haproxy
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 4035a600-4a5e-41ee-a619-d81e2c993b79
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:23:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:23.223 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'env', 'PROCESS_TAG=haproxy-4035a600-4a5e-41ee-a619-d81e2c993b79', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4035a600-4a5e-41ee-a619-d81e2c993b79.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:23:23 compute-0 ceph-mon[73607]: pgmap v1824: 305 pgs: 305 active+clean; 405 MiB data, 957 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.2 MiB/s wr, 176 op/s
Oct 02 12:23:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1925030' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2788175752' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:23 compute-0 nova_compute[257802]: 2025-10-02 12:23:23.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 305 active+clean; 384 MiB data, 959 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.0 MiB/s wr, 175 op/s
Oct 02 12:23:23 compute-0 podman[316903]: 2025-10-02 12:23:23.558605848 +0000 UTC m=+0.028846838 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:23:23 compute-0 podman[316903]: 2025-10-02 12:23:23.826374437 +0000 UTC m=+0.296615447 container create 5833de18c822a02bdf011aeceea758ea0c1fd9302e0a748769b3f4bce7660b06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:23:23 compute-0 systemd[1]: Started libpod-conmon-5833de18c822a02bdf011aeceea758ea0c1fd9302e0a748769b3f4bce7660b06.scope.
Oct 02 12:23:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:23:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c81255922bfeeecc598b31f7b6c16f69731eb6b674d7ac01126306182e0b4c5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:23:24 compute-0 podman[316903]: 2025-10-02 12:23:24.061570588 +0000 UTC m=+0.531811628 container init 5833de18c822a02bdf011aeceea758ea0c1fd9302e0a748769b3f4bce7660b06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:23:24 compute-0 podman[316903]: 2025-10-02 12:23:24.067165095 +0000 UTC m=+0.537406105 container start 5833de18c822a02bdf011aeceea758ea0c1fd9302e0a748769b3f4bce7660b06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:23:24 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[316918]: [NOTICE]   (316922) : New worker (316924) forked
Oct 02 12:23:24 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[316918]: [NOTICE]   (316922) : Loading success.
Oct 02 12:23:24 compute-0 ceph-mon[73607]: pgmap v1825: 305 pgs: 305 active+clean; 384 MiB data, 959 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.0 MiB/s wr, 175 op/s
Oct 02 12:23:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:24.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:24.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:25 compute-0 nova_compute[257802]: 2025-10-02 12:23:25.157 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Removed pending event for 9ae72f32-b9fd-44eb-b10d-79119ad2ca85 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:23:25 compute-0 nova_compute[257802]: 2025-10-02 12:23:25.158 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407805.1573493, 9ae72f32-b9fd-44eb-b10d-79119ad2ca85 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:23:25 compute-0 nova_compute[257802]: 2025-10-02 12:23:25.158 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] VM Resumed (Lifecycle Event)
Oct 02 12:23:25 compute-0 nova_compute[257802]: 2025-10-02 12:23:25.161 2 DEBUG nova.compute.manager [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:23:25 compute-0 nova_compute[257802]: 2025-10-02 12:23:25.166 2 INFO nova.virt.libvirt.driver [-] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Instance running successfully.
Oct 02 12:23:25 compute-0 virtqemud[257280]: argument unsupported: QEMU guest agent is not configured
Oct 02 12:23:25 compute-0 nova_compute[257802]: 2025-10-02 12:23:25.168 2 DEBUG nova.virt.libvirt.guest [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 12:23:25 compute-0 nova_compute[257802]: 2025-10-02 12:23:25.168 2 DEBUG nova.virt.libvirt.driver [None req-3c4152e1-9e81-4e42-90dc-984548d567da 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Oct 02 12:23:25 compute-0 nova_compute[257802]: 2025-10-02 12:23:25.180 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:23:25 compute-0 nova_compute[257802]: 2025-10-02 12:23:25.184 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:23:25 compute-0 nova_compute[257802]: 2025-10-02 12:23:25.207 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] During sync_power_state the instance has a pending task (resize_finish). Skip.
Oct 02 12:23:25 compute-0 nova_compute[257802]: 2025-10-02 12:23:25.207 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407805.1611114, 9ae72f32-b9fd-44eb-b10d-79119ad2ca85 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:23:25 compute-0 nova_compute[257802]: 2025-10-02 12:23:25.207 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] VM Started (Lifecycle Event)
Oct 02 12:23:25 compute-0 nova_compute[257802]: 2025-10-02 12:23:25.239 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:23:25 compute-0 nova_compute[257802]: 2025-10-02 12:23:25.243 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:23:25 compute-0 nova_compute[257802]: 2025-10-02 12:23:25.441 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] During sync_power_state the instance has a pending task (resize_finish). Skip.
Oct 02 12:23:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 305 active+clean; 383 MiB data, 947 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.1 MiB/s wr, 167 op/s
Oct 02 12:23:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1689512058' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:25 compute-0 nova_compute[257802]: 2025-10-02 12:23:25.750 2 DEBUG nova.compute.manager [req-b7fd1f85-dc9c-4354-a7f1-32cec3552e3d req-5b1a2c9f-dd8f-4883-b49d-d5c4b13966e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Received event network-vif-plugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:25 compute-0 nova_compute[257802]: 2025-10-02 12:23:25.750 2 DEBUG oslo_concurrency.lockutils [req-b7fd1f85-dc9c-4354-a7f1-32cec3552e3d req-5b1a2c9f-dd8f-4883-b49d-d5c4b13966e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:25 compute-0 nova_compute[257802]: 2025-10-02 12:23:25.751 2 DEBUG oslo_concurrency.lockutils [req-b7fd1f85-dc9c-4354-a7f1-32cec3552e3d req-5b1a2c9f-dd8f-4883-b49d-d5c4b13966e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:25 compute-0 nova_compute[257802]: 2025-10-02 12:23:25.751 2 DEBUG oslo_concurrency.lockutils [req-b7fd1f85-dc9c-4354-a7f1-32cec3552e3d req-5b1a2c9f-dd8f-4883-b49d-d5c4b13966e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:25 compute-0 nova_compute[257802]: 2025-10-02 12:23:25.751 2 DEBUG nova.compute.manager [req-b7fd1f85-dc9c-4354-a7f1-32cec3552e3d req-5b1a2c9f-dd8f-4883-b49d-d5c4b13966e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] No waiting events found dispatching network-vif-plugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:23:25 compute-0 nova_compute[257802]: 2025-10-02 12:23:25.752 2 WARNING nova.compute.manager [req-b7fd1f85-dc9c-4354-a7f1-32cec3552e3d req-5b1a2c9f-dd8f-4883-b49d-d5c4b13966e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Received unexpected event network-vif-plugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac for instance with vm_state resized and task_state None.
Oct 02 12:23:25 compute-0 nova_compute[257802]: 2025-10-02 12:23:25.752 2 DEBUG nova.compute.manager [req-b7fd1f85-dc9c-4354-a7f1-32cec3552e3d req-5b1a2c9f-dd8f-4883-b49d-d5c4b13966e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Received event network-vif-plugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:25 compute-0 nova_compute[257802]: 2025-10-02 12:23:25.752 2 DEBUG oslo_concurrency.lockutils [req-b7fd1f85-dc9c-4354-a7f1-32cec3552e3d req-5b1a2c9f-dd8f-4883-b49d-d5c4b13966e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:25 compute-0 nova_compute[257802]: 2025-10-02 12:23:25.753 2 DEBUG oslo_concurrency.lockutils [req-b7fd1f85-dc9c-4354-a7f1-32cec3552e3d req-5b1a2c9f-dd8f-4883-b49d-d5c4b13966e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:25 compute-0 nova_compute[257802]: 2025-10-02 12:23:25.753 2 DEBUG oslo_concurrency.lockutils [req-b7fd1f85-dc9c-4354-a7f1-32cec3552e3d req-5b1a2c9f-dd8f-4883-b49d-d5c4b13966e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:25 compute-0 nova_compute[257802]: 2025-10-02 12:23:25.753 2 DEBUG nova.compute.manager [req-b7fd1f85-dc9c-4354-a7f1-32cec3552e3d req-5b1a2c9f-dd8f-4883-b49d-d5c4b13966e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] No waiting events found dispatching network-vif-plugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:23:25 compute-0 nova_compute[257802]: 2025-10-02 12:23:25.753 2 WARNING nova.compute.manager [req-b7fd1f85-dc9c-4354-a7f1-32cec3552e3d req-5b1a2c9f-dd8f-4883-b49d-d5c4b13966e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Received unexpected event network-vif-plugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac for instance with vm_state resized and task_state None.
Oct 02 12:23:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:26.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:26 compute-0 ceph-mon[73607]: pgmap v1826: 305 pgs: 305 active+clean; 383 MiB data, 947 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.1 MiB/s wr, 167 op/s
Oct 02 12:23:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:26.940 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:26.941 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:26.941 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:26.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:27 compute-0 nova_compute[257802]: 2025-10-02 12:23:27.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 405 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.1 MiB/s wr, 211 op/s
Oct 02 12:23:27 compute-0 nova_compute[257802]: 2025-10-02 12:23:27.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/393264518' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:28 compute-0 nova_compute[257802]: 2025-10-02 12:23:28.312 2 DEBUG nova.network.neutron [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Port 7da3e5f9-b358-4404-825b-b1ad43bc54ac binding to destination host compute-0.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171
Oct 02 12:23:28 compute-0 nova_compute[257802]: 2025-10-02 12:23:28.313 2 DEBUG oslo_concurrency.lockutils [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "refresh_cache-9ae72f32-b9fd-44eb-b10d-79119ad2ca85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:23:28 compute-0 nova_compute[257802]: 2025-10-02 12:23:28.313 2 DEBUG oslo_concurrency.lockutils [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquired lock "refresh_cache-9ae72f32-b9fd-44eb-b10d-79119ad2ca85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:23:28 compute-0 nova_compute[257802]: 2025-10-02 12:23:28.313 2 DEBUG nova.network.neutron [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:23:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:23:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:28.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:23:28 compute-0 ceph-mon[73607]: pgmap v1827: 305 pgs: 305 active+clean; 405 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.1 MiB/s wr, 211 op/s
Oct 02 12:23:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/138954372' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:28.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 405 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.1 MiB/s wr, 226 op/s
Oct 02 12:23:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:30.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:30 compute-0 ceph-mon[73607]: pgmap v1828: 305 pgs: 305 active+clean; 405 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.1 MiB/s wr, 226 op/s
Oct 02 12:23:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:30.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 405 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.0 MiB/s wr, 215 op/s
Oct 02 12:23:31 compute-0 nova_compute[257802]: 2025-10-02 12:23:31.617 2 DEBUG nova.network.neutron [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Updating instance_info_cache with network_info: [{"id": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "address": "fa:16:3e:fc:ac:3b", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da3e5f9-b3", "ovs_interfaceid": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:23:31 compute-0 nova_compute[257802]: 2025-10-02 12:23:31.648 2 DEBUG oslo_concurrency.lockutils [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Releasing lock "refresh_cache-9ae72f32-b9fd-44eb-b10d-79119ad2ca85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:23:31 compute-0 kernel: tap7da3e5f9-b3 (unregistering): left promiscuous mode
Oct 02 12:23:31 compute-0 NetworkManager[44987]: <info>  [1759407811.7469] device (tap7da3e5f9-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:23:31 compute-0 nova_compute[257802]: 2025-10-02 12:23:31.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:31 compute-0 ovn_controller[148183]: 2025-10-02T12:23:31Z|00414|binding|INFO|Releasing lport 7da3e5f9-b358-4404-825b-b1ad43bc54ac from this chassis (sb_readonly=0)
Oct 02 12:23:31 compute-0 ovn_controller[148183]: 2025-10-02T12:23:31Z|00415|binding|INFO|Setting lport 7da3e5f9-b358-4404-825b-b1ad43bc54ac down in Southbound
Oct 02 12:23:31 compute-0 ovn_controller[148183]: 2025-10-02T12:23:31Z|00416|binding|INFO|Removing iface tap7da3e5f9-b3 ovn-installed in OVS
Oct 02 12:23:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:31.759 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:ac:3b 10.100.0.3'], port_security=['fa:16:3e:fc:ac:3b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9ae72f32-b9fd-44eb-b10d-79119ad2ca85', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4035a600-4a5e-41ee-a619-d81e2c993b79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '10fff81da7a54740a53a0771ce916329', 'neutron:revision_number': '6', 'neutron:security_group_ids': '32af0a94-4565-470d-9918-1bc97e347f8f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.188', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5dc7931-b785-4336-99b8-936a17be87c3, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=7da3e5f9-b358-4404-825b-b1ad43bc54ac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:23:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:31.760 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 7da3e5f9-b358-4404-825b-b1ad43bc54ac in datapath 4035a600-4a5e-41ee-a619-d81e2c993b79 unbound from our chassis
Oct 02 12:23:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:31.761 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4035a600-4a5e-41ee-a619-d81e2c993b79, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:23:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:31.762 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5a50c6ac-bd48-489e-a9a2-68a97123407b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:31.763 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 namespace which is not needed anymore
Oct 02 12:23:31 compute-0 nova_compute[257802]: 2025-10-02 12:23:31.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:31 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Oct 02 12:23:31 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000005d.scope: Consumed 8.556s CPU time.
Oct 02 12:23:31 compute-0 systemd-machined[211836]: Machine qemu-47-instance-0000005d terminated.
Oct 02 12:23:31 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[316918]: [NOTICE]   (316922) : haproxy version is 2.8.14-c23fe91
Oct 02 12:23:31 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[316918]: [NOTICE]   (316922) : path to executable is /usr/sbin/haproxy
Oct 02 12:23:31 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[316918]: [WARNING]  (316922) : Exiting Master process...
Oct 02 12:23:31 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[316918]: [ALERT]    (316922) : Current worker (316924) exited with code 143 (Terminated)
Oct 02 12:23:31 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[316918]: [WARNING]  (316922) : All workers exited. Exiting... (0)
Oct 02 12:23:31 compute-0 nova_compute[257802]: 2025-10-02 12:23:31.902 2 INFO nova.virt.libvirt.driver [-] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Instance destroyed successfully.
Oct 02 12:23:31 compute-0 nova_compute[257802]: 2025-10-02 12:23:31.902 2 DEBUG nova.objects.instance [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'resources' on Instance uuid 9ae72f32-b9fd-44eb-b10d-79119ad2ca85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:23:31 compute-0 systemd[1]: libpod-5833de18c822a02bdf011aeceea758ea0c1fd9302e0a748769b3f4bce7660b06.scope: Deactivated successfully.
Oct 02 12:23:31 compute-0 conmon[316918]: conmon 5833de18c822a02bdf01 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5833de18c822a02bdf011aeceea758ea0c1fd9302e0a748769b3f4bce7660b06.scope/container/memory.events
Oct 02 12:23:31 compute-0 nova_compute[257802]: 2025-10-02 12:23:31.916 2 DEBUG nova.virt.libvirt.vif [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1174011223',display_name='tempest-ServerActionsTestOtherB-server-1174011223',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1174011223',id=93,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGD2jbBFmRg2ZrnheVnZyLwDISk/dFTNtp10+sWyF/q+rC4Q86cvBQSRgacxSPIqXVpmiVTqI66cLDPhvjcnRFXyQqHRS/RWGvUZk+wm1wfft8CveiGko+Vh4vSox2iOrA==',key_name='tempest-keypair-1336245373',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:23:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='10fff81da7a54740a53a0771ce916329',ramdisk_id='',reservation_id='r-12h270gm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-1686489955',owner_user_name='tempest-ServerActionsTestOtherB-1686489955-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:23:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='25468893d71641a385711fd2982bb00b',uuid=9ae72f32-b9fd-44eb-b10d-79119ad2ca85,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "address": "fa:16:3e:fc:ac:3b", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", 
"subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da3e5f9-b3", "ovs_interfaceid": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:23:31 compute-0 nova_compute[257802]: 2025-10-02 12:23:31.916 2 DEBUG nova.network.os_vif_util [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converting VIF {"id": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "address": "fa:16:3e:fc:ac:3b", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da3e5f9-b3", "ovs_interfaceid": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:23:31 compute-0 nova_compute[257802]: 2025-10-02 12:23:31.917 2 DEBUG nova.network.os_vif_util [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fc:ac:3b,bridge_name='br-int',has_traffic_filtering=True,id=7da3e5f9-b358-4404-825b-b1ad43bc54ac,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da3e5f9-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:23:31 compute-0 nova_compute[257802]: 2025-10-02 12:23:31.917 2 DEBUG os_vif [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:ac:3b,bridge_name='br-int',has_traffic_filtering=True,id=7da3e5f9-b358-4404-825b-b1ad43bc54ac,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da3e5f9-b3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:23:31 compute-0 nova_compute[257802]: 2025-10-02 12:23:31.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:31 compute-0 nova_compute[257802]: 2025-10-02 12:23:31.919 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7da3e5f9-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:31 compute-0 podman[317021]: 2025-10-02 12:23:31.914718393 +0000 UTC m=+0.058126164 container died 5833de18c822a02bdf011aeceea758ea0c1fd9302e0a748769b3f4bce7660b06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:23:31 compute-0 nova_compute[257802]: 2025-10-02 12:23:31.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:23:31 compute-0 nova_compute[257802]: 2025-10-02 12:23:31.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:31 compute-0 nova_compute[257802]: 2025-10-02 12:23:31.935 2 INFO os_vif [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:ac:3b,bridge_name='br-int',has_traffic_filtering=True,id=7da3e5f9-b358-4404-825b-b1ad43bc54ac,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da3e5f9-b3')
Oct 02 12:23:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5833de18c822a02bdf011aeceea758ea0c1fd9302e0a748769b3f4bce7660b06-userdata-shm.mount: Deactivated successfully.
Oct 02 12:23:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c81255922bfeeecc598b31f7b6c16f69731eb6b674d7ac01126306182e0b4c5-merged.mount: Deactivated successfully.
Oct 02 12:23:31 compute-0 podman[317021]: 2025-10-02 12:23:31.974312194 +0000 UTC m=+0.117719965 container cleanup 5833de18c822a02bdf011aeceea758ea0c1fd9302e0a748769b3f4bce7660b06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:23:31 compute-0 systemd[1]: libpod-conmon-5833de18c822a02bdf011aeceea758ea0c1fd9302e0a748769b3f4bce7660b06.scope: Deactivated successfully.
Oct 02 12:23:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:32 compute-0 podman[317043]: 2025-10-02 12:23:32.010097389 +0000 UTC m=+0.076121575 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:23:32 compute-0 podman[317051]: 2025-10-02 12:23:32.021633773 +0000 UTC m=+0.083298782 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 12:23:32 compute-0 podman[317089]: 2025-10-02 12:23:32.048645344 +0000 UTC m=+0.051928783 container remove 5833de18c822a02bdf011aeceea758ea0c1fd9302e0a748769b3f4bce7660b06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 12:23:32 compute-0 podman[317052]: 2025-10-02 12:23:32.04967661 +0000 UTC m=+0.109488314 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:23:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:32.056 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[81fe7918-0973-437d-aa9d-bbc89ba1c909]: (4, ('Thu Oct  2 12:23:31 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 (5833de18c822a02bdf011aeceea758ea0c1fd9302e0a748769b3f4bce7660b06)\n5833de18c822a02bdf011aeceea758ea0c1fd9302e0a748769b3f4bce7660b06\nThu Oct  2 12:23:31 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 (5833de18c822a02bdf011aeceea758ea0c1fd9302e0a748769b3f4bce7660b06)\n5833de18c822a02bdf011aeceea758ea0c1fd9302e0a748769b3f4bce7660b06\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:32.057 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[70f07301-a321-48d9-9414-90d2ff2d8041]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:32.058 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4035a600-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:32 compute-0 nova_compute[257802]: 2025-10-02 12:23:32.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:32 compute-0 kernel: tap4035a600-40: left promiscuous mode
Oct 02 12:23:32 compute-0 nova_compute[257802]: 2025-10-02 12:23:32.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:32.078 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9c150f85-a298-4a6c-89eb-20ce218d30d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:32.103 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[381abfe5-81f1-4ee0-b6a2-e3d8882e5471]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:32.104 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[299e8765-6baf-4ba4-a1c5-233ef5bcc17b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:32.118 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9370794c-78cc-4a18-9b7a-d6f35f7d0ba2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587064, 'reachable_time': 23817, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317127, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d4035a600\x2d4a5e\x2d41ee\x2da619\x2dd81e2c993b79.mount: Deactivated successfully.
Oct 02 12:23:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:32.121 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:23:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:32.121 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0c9f4c-806c-4c44-a78f-25335e65ddb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:32 compute-0 nova_compute[257802]: 2025-10-02 12:23:32.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:32 compute-0 nova_compute[257802]: 2025-10-02 12:23:32.400 2 DEBUG oslo_concurrency.lockutils [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:32 compute-0 nova_compute[257802]: 2025-10-02 12:23:32.401 2 DEBUG oslo_concurrency.lockutils [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:32 compute-0 nova_compute[257802]: 2025-10-02 12:23:32.433 2 DEBUG nova.objects.instance [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'migration_context' on Instance uuid 9ae72f32-b9fd-44eb-b10d-79119ad2ca85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:23:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:32.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:32 compute-0 nova_compute[257802]: 2025-10-02 12:23:32.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:32 compute-0 nova_compute[257802]: 2025-10-02 12:23:32.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:32 compute-0 nova_compute[257802]: 2025-10-02 12:23:32.567 2 DEBUG oslo_concurrency.processutils [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:32 compute-0 nova_compute[257802]: 2025-10-02 12:23:32.646 2 DEBUG nova.compute.manager [req-55139921-08a3-451f-95a8-13c18c91cdab req-10f3acc3-e999-459d-86e3-7fa67bfb61da d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Received event network-vif-unplugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:32 compute-0 nova_compute[257802]: 2025-10-02 12:23:32.647 2 DEBUG oslo_concurrency.lockutils [req-55139921-08a3-451f-95a8-13c18c91cdab req-10f3acc3-e999-459d-86e3-7fa67bfb61da d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:32 compute-0 nova_compute[257802]: 2025-10-02 12:23:32.647 2 DEBUG oslo_concurrency.lockutils [req-55139921-08a3-451f-95a8-13c18c91cdab req-10f3acc3-e999-459d-86e3-7fa67bfb61da d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:32 compute-0 nova_compute[257802]: 2025-10-02 12:23:32.648 2 DEBUG oslo_concurrency.lockutils [req-55139921-08a3-451f-95a8-13c18c91cdab req-10f3acc3-e999-459d-86e3-7fa67bfb61da d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:32 compute-0 nova_compute[257802]: 2025-10-02 12:23:32.648 2 DEBUG nova.compute.manager [req-55139921-08a3-451f-95a8-13c18c91cdab req-10f3acc3-e999-459d-86e3-7fa67bfb61da d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] No waiting events found dispatching network-vif-unplugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:23:32 compute-0 nova_compute[257802]: 2025-10-02 12:23:32.648 2 WARNING nova.compute.manager [req-55139921-08a3-451f-95a8-13c18c91cdab req-10f3acc3-e999-459d-86e3-7fa67bfb61da d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Received unexpected event network-vif-unplugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac for instance with vm_state resized and task_state resize_reverting.
Oct 02 12:23:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:32.736 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:23:32 compute-0 nova_compute[257802]: 2025-10-02 12:23:32.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:32.737 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:23:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:32.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:23:32 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3758464942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:32 compute-0 ceph-mon[73607]: pgmap v1829: 305 pgs: 305 active+clean; 405 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.0 MiB/s wr, 215 op/s
Oct 02 12:23:32 compute-0 nova_compute[257802]: 2025-10-02 12:23:32.981 2 DEBUG oslo_concurrency.processutils [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:32 compute-0 nova_compute[257802]: 2025-10-02 12:23:32.988 2 DEBUG nova.compute.provider_tree [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:23:33 compute-0 nova_compute[257802]: 2025-10-02 12:23:33.009 2 DEBUG nova.scheduler.client.report [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:23:33 compute-0 nova_compute[257802]: 2025-10-02 12:23:33.075 2 DEBUG oslo_concurrency.lockutils [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:33 compute-0 nova_compute[257802]: 2025-10-02 12:23:33.268 2 INFO nova.compute.manager [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Swapping old allocation on dict_keys(['a293a24c-b5ed-43d1-8783-f02da4f75ad4']) held by migration 9704bf50-9913-401b-9376-cacd38ba7ca8 for instance
Oct 02 12:23:33 compute-0 nova_compute[257802]: 2025-10-02 12:23:33.295 2 DEBUG nova.scheduler.client.report [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Overwriting current allocation {'allocations': {'a293a24c-b5ed-43d1-8783-f02da4f75ad4': {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}, 'generation': 55}}, 'project_id': '10fff81da7a54740a53a0771ce916329', 'user_id': '25468893d71641a385711fd2982bb00b', 'consumer_generation': 1} on consumer 9ae72f32-b9fd-44eb-b10d-79119ad2ca85 move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018
Oct 02 12:23:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 305 active+clean; 405 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 1.8 MiB/s wr, 239 op/s
Oct 02 12:23:33 compute-0 nova_compute[257802]: 2025-10-02 12:23:33.550 2 DEBUG oslo_concurrency.lockutils [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "refresh_cache-9ae72f32-b9fd-44eb-b10d-79119ad2ca85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:23:33 compute-0 nova_compute[257802]: 2025-10-02 12:23:33.551 2 DEBUG oslo_concurrency.lockutils [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquired lock "refresh_cache-9ae72f32-b9fd-44eb-b10d-79119ad2ca85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:23:33 compute-0 nova_compute[257802]: 2025-10-02 12:23:33.551 2 DEBUG nova.network.neutron [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:23:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:33.738 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3758464942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:34.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:34 compute-0 nova_compute[257802]: 2025-10-02 12:23:34.756 2 DEBUG nova.compute.manager [req-dd1c769b-7604-4e40-8bf5-c8954100828e req-a1f2d0de-8c9b-4779-bf43-052f9bc48cbf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Received event network-vif-plugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:34 compute-0 nova_compute[257802]: 2025-10-02 12:23:34.756 2 DEBUG oslo_concurrency.lockutils [req-dd1c769b-7604-4e40-8bf5-c8954100828e req-a1f2d0de-8c9b-4779-bf43-052f9bc48cbf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:34 compute-0 nova_compute[257802]: 2025-10-02 12:23:34.757 2 DEBUG oslo_concurrency.lockutils [req-dd1c769b-7604-4e40-8bf5-c8954100828e req-a1f2d0de-8c9b-4779-bf43-052f9bc48cbf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:34 compute-0 nova_compute[257802]: 2025-10-02 12:23:34.757 2 DEBUG oslo_concurrency.lockutils [req-dd1c769b-7604-4e40-8bf5-c8954100828e req-a1f2d0de-8c9b-4779-bf43-052f9bc48cbf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:34 compute-0 nova_compute[257802]: 2025-10-02 12:23:34.757 2 DEBUG nova.compute.manager [req-dd1c769b-7604-4e40-8bf5-c8954100828e req-a1f2d0de-8c9b-4779-bf43-052f9bc48cbf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] No waiting events found dispatching network-vif-plugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:23:34 compute-0 nova_compute[257802]: 2025-10-02 12:23:34.757 2 WARNING nova.compute.manager [req-dd1c769b-7604-4e40-8bf5-c8954100828e req-a1f2d0de-8c9b-4779-bf43-052f9bc48cbf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Received unexpected event network-vif-plugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac for instance with vm_state resized and task_state resize_reverting.
Oct 02 12:23:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:34.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:35 compute-0 ceph-mon[73607]: pgmap v1830: 305 pgs: 305 active+clean; 405 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 1.8 MiB/s wr, 239 op/s
Oct 02 12:23:35 compute-0 nova_compute[257802]: 2025-10-02 12:23:35.316 2 DEBUG nova.network.neutron [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Updating instance_info_cache with network_info: [{"id": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "address": "fa:16:3e:fc:ac:3b", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da3e5f9-b3", "ovs_interfaceid": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:23:35 compute-0 nova_compute[257802]: 2025-10-02 12:23:35.384 2 DEBUG oslo_concurrency.lockutils [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Releasing lock "refresh_cache-9ae72f32-b9fd-44eb-b10d-79119ad2ca85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:23:35 compute-0 nova_compute[257802]: 2025-10-02 12:23:35.385 2 DEBUG os_brick.utils [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:23:35 compute-0 nova_compute[257802]: 2025-10-02 12:23:35.386 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:35 compute-0 nova_compute[257802]: 2025-10-02 12:23:35.398 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:35 compute-0 nova_compute[257802]: 2025-10-02 12:23:35.398 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[4505a89d-86ee-4905-ae77-bb2bcf7f015c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:35 compute-0 nova_compute[257802]: 2025-10-02 12:23:35.400 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:35 compute-0 nova_compute[257802]: 2025-10-02 12:23:35.408 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:35 compute-0 nova_compute[257802]: 2025-10-02 12:23:35.408 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[46a3a12a-9e9a-4753-803f-e1dcd23a1be9]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:35 compute-0 nova_compute[257802]: 2025-10-02 12:23:35.410 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:35 compute-0 nova_compute[257802]: 2025-10-02 12:23:35.419 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:35 compute-0 nova_compute[257802]: 2025-10-02 12:23:35.419 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[ece827ef-beda-4944-b632-cb2e77eb1870]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:35 compute-0 nova_compute[257802]: 2025-10-02 12:23:35.421 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[97845289-0a73-4077-a9ce-c45f1c45fa30]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:35 compute-0 nova_compute[257802]: 2025-10-02 12:23:35.421 2 DEBUG oslo_concurrency.processutils [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:35 compute-0 nova_compute[257802]: 2025-10-02 12:23:35.445 2 DEBUG oslo_concurrency.processutils [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:35 compute-0 nova_compute[257802]: 2025-10-02 12:23:35.449 2 DEBUG os_brick.initiator.connectors.lightos [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:23:35 compute-0 nova_compute[257802]: 2025-10-02 12:23:35.449 2 DEBUG os_brick.initiator.connectors.lightos [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:23:35 compute-0 nova_compute[257802]: 2025-10-02 12:23:35.449 2 DEBUG os_brick.initiator.connectors.lightos [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:23:35 compute-0 nova_compute[257802]: 2025-10-02 12:23:35.450 2 DEBUG os_brick.utils [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] <== get_connector_properties: return (64ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:23:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 305 active+clean; 405 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 958 KiB/s wr, 172 op/s
Oct 02 12:23:35 compute-0 nova_compute[257802]: 2025-10-02 12:23:35.501 2 DEBUG nova.compute.manager [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Oct 02 12:23:35 compute-0 nova_compute[257802]: 2025-10-02 12:23:35.575 2 DEBUG oslo_concurrency.lockutils [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:35 compute-0 nova_compute[257802]: 2025-10-02 12:23:35.576 2 DEBUG oslo_concurrency.lockutils [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:35 compute-0 nova_compute[257802]: 2025-10-02 12:23:35.597 2 DEBUG nova.objects.instance [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lazy-loading 'pci_requests' on Instance uuid e1e08b12-a046-4fbd-a025-dff040cca766 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:23:35 compute-0 nova_compute[257802]: 2025-10-02 12:23:35.613 2 DEBUG nova.virt.hardware [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:23:35 compute-0 nova_compute[257802]: 2025-10-02 12:23:35.613 2 INFO nova.compute.claims [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:23:35 compute-0 nova_compute[257802]: 2025-10-02 12:23:35.613 2 DEBUG nova.objects.instance [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lazy-loading 'resources' on Instance uuid e1e08b12-a046-4fbd-a025-dff040cca766 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:23:35 compute-0 nova_compute[257802]: 2025-10-02 12:23:35.626 2 DEBUG nova.objects.instance [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lazy-loading 'pci_devices' on Instance uuid e1e08b12-a046-4fbd-a025-dff040cca766 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:23:35 compute-0 nova_compute[257802]: 2025-10-02 12:23:35.705 2 INFO nova.compute.resource_tracker [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Updating resource usage from migration c0e29430-3f1f-4f18-bd83-d892eb2d6811
Oct 02 12:23:35 compute-0 nova_compute[257802]: 2025-10-02 12:23:35.706 2 DEBUG nova.compute.resource_tracker [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Starting to track incoming migration c0e29430-3f1f-4f18-bd83-d892eb2d6811 with flavor eb3a53f1-304b-4cb0-acc3-abffce0fb181 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Oct 02 12:23:35 compute-0 nova_compute[257802]: 2025-10-02 12:23:35.777 2 DEBUG oslo_concurrency.processutils [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:23:36 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/170411895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:36 compute-0 nova_compute[257802]: 2025-10-02 12:23:36.190 2 DEBUG oslo_concurrency.processutils [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:36 compute-0 nova_compute[257802]: 2025-10-02 12:23:36.196 2 DEBUG nova.compute.provider_tree [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:23:36 compute-0 nova_compute[257802]: 2025-10-02 12:23:36.213 2 DEBUG nova.scheduler.client.report [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:23:36 compute-0 nova_compute[257802]: 2025-10-02 12:23:36.240 2 DEBUG oslo_concurrency.lockutils [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:36 compute-0 nova_compute[257802]: 2025-10-02 12:23:36.240 2 INFO nova.compute.manager [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Migrating
Oct 02 12:23:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Oct 02 12:23:36 compute-0 ceph-mon[73607]: pgmap v1831: 305 pgs: 305 active+clean; 405 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 958 KiB/s wr, 172 op/s
Oct 02 12:23:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/170411895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Oct 02 12:23:36 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Oct 02 12:23:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:36.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:36 compute-0 nova_compute[257802]: 2025-10-02 12:23:36.747 2 DEBUG nova.virt.libvirt.driver [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843
Oct 02 12:23:36 compute-0 nova_compute[257802]: 2025-10-02 12:23:36.852 2 DEBUG nova.storage.rbd_utils [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rolling back rbd image(9ae72f32-b9fd-44eb-b10d-79119ad2ca85_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505
Oct 02 12:23:36 compute-0 nova_compute[257802]: 2025-10-02 12:23:36.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:36.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:36 compute-0 nova_compute[257802]: 2025-10-02 12:23:36.998 2 DEBUG nova.storage.rbd_utils [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] removing snapshot(nova-resize) on rbd image(9ae72f32-b9fd-44eb-b10d-79119ad2ca85_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:23:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Oct 02 12:23:37 compute-0 ceph-mon[73607]: osdmap e283: 3 total, 3 up, 3 in
Oct 02 12:23:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3837116973' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Oct 02 12:23:37 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Oct 02 12:23:37 compute-0 nova_compute[257802]: 2025-10-02 12:23:37.441 2 DEBUG nova.virt.libvirt.driver [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Start _get_guest_xml network_info=[{"id": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "address": "fa:16:3e:fc:ac:3b", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da3e5f9-b3", "ovs_interfaceid": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [{'boot_index': None, 'guest_format': None, 'attachment_id': '6a7b7ebb-fc14-4d46-b6b9-7e3b3004988c', 'mount_device': '/dev/vdb', 'disk_bus': 'virtio', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-64167daa-7aad-468b-9a08-b3195fe87186', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '64167daa-7aad-468b-9a08-b3195fe87186', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': '9ae72f32-b9fd-44eb-b10d-79119ad2ca85', 'attached_at': '2025-10-02T12:23:36.000000', 'detached_at': '', 'volume_id': '64167daa-7aad-468b-9a08-b3195fe87186', 'serial': '64167daa-7aad-468b-9a08-b3195fe87186'}, 'device_type': 'disk', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:23:37 compute-0 nova_compute[257802]: 2025-10-02 12:23:37.445 2 WARNING nova.virt.libvirt.driver [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:23:37 compute-0 nova_compute[257802]: 2025-10-02 12:23:37.450 2 DEBUG nova.virt.libvirt.host [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:23:37 compute-0 nova_compute[257802]: 2025-10-02 12:23:37.451 2 DEBUG nova.virt.libvirt.host [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:23:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 415 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 774 KiB/s wr, 122 op/s
Oct 02 12:23:37 compute-0 nova_compute[257802]: 2025-10-02 12:23:37.454 2 DEBUG nova.virt.libvirt.host [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:23:37 compute-0 nova_compute[257802]: 2025-10-02 12:23:37.454 2 DEBUG nova.virt.libvirt.host [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:23:37 compute-0 nova_compute[257802]: 2025-10-02 12:23:37.455 2 DEBUG nova.virt.libvirt.driver [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:23:37 compute-0 nova_compute[257802]: 2025-10-02 12:23:37.455 2 DEBUG nova.virt.hardware [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:23:37 compute-0 nova_compute[257802]: 2025-10-02 12:23:37.456 2 DEBUG nova.virt.hardware [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:23:37 compute-0 nova_compute[257802]: 2025-10-02 12:23:37.456 2 DEBUG nova.virt.hardware [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:23:37 compute-0 nova_compute[257802]: 2025-10-02 12:23:37.456 2 DEBUG nova.virt.hardware [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:23:37 compute-0 nova_compute[257802]: 2025-10-02 12:23:37.456 2 DEBUG nova.virt.hardware [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:23:37 compute-0 nova_compute[257802]: 2025-10-02 12:23:37.457 2 DEBUG nova.virt.hardware [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:23:37 compute-0 nova_compute[257802]: 2025-10-02 12:23:37.457 2 DEBUG nova.virt.hardware [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:23:37 compute-0 nova_compute[257802]: 2025-10-02 12:23:37.457 2 DEBUG nova.virt.hardware [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:23:37 compute-0 nova_compute[257802]: 2025-10-02 12:23:37.457 2 DEBUG nova.virt.hardware [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:23:37 compute-0 nova_compute[257802]: 2025-10-02 12:23:37.457 2 DEBUG nova.virt.hardware [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:23:37 compute-0 nova_compute[257802]: 2025-10-02 12:23:37.458 2 DEBUG nova.virt.hardware [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:23:37 compute-0 nova_compute[257802]: 2025-10-02 12:23:37.458 2 DEBUG nova.objects.instance [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9ae72f32-b9fd-44eb-b10d-79119ad2ca85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:23:37 compute-0 nova_compute[257802]: 2025-10-02 12:23:37.480 2 DEBUG oslo_concurrency.processutils [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:37 compute-0 nova_compute[257802]: 2025-10-02 12:23:37.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:23:37 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1680674664' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:37 compute-0 nova_compute[257802]: 2025-10-02 12:23:37.915 2 DEBUG oslo_concurrency.processutils [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:37 compute-0 nova_compute[257802]: 2025-10-02 12:23:37.963 2 DEBUG oslo_concurrency.processutils [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:38 compute-0 podman[317256]: 2025-10-02 12:23:38.00467131 +0000 UTC m=+0.137255843 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 12:23:38 compute-0 sshd-session[317304]: Accepted publickey for nova from 192.168.122.101 port 45332 ssh2: ECDSA SHA256:RlBMWn3An7DGjBe9yfwGQtrEA9dOakLcJHFiZKvkVOc
Oct 02 12:23:38 compute-0 systemd[1]: Created slice User Slice of UID 42436.
Oct 02 12:23:38 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct 02 12:23:38 compute-0 systemd-logind[789]: New session 63 of user nova.
Oct 02 12:23:38 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct 02 12:23:38 compute-0 systemd[1]: Starting User Manager for UID 42436...
Oct 02 12:23:38 compute-0 systemd[317327]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:23:38 compute-0 systemd[317327]: Queued start job for default target Main User Target.
Oct 02 12:23:38 compute-0 systemd[317327]: Created slice User Application Slice.
Oct 02 12:23:38 compute-0 systemd[317327]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 02 12:23:38 compute-0 systemd[317327]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 12:23:38 compute-0 systemd[317327]: Reached target Paths.
Oct 02 12:23:38 compute-0 systemd[317327]: Reached target Timers.
Oct 02 12:23:38 compute-0 systemd[317327]: Starting D-Bus User Message Bus Socket...
Oct 02 12:23:38 compute-0 systemd[317327]: Starting Create User's Volatile Files and Directories...
Oct 02 12:23:38 compute-0 systemd[317327]: Listening on D-Bus User Message Bus Socket.
Oct 02 12:23:38 compute-0 systemd[317327]: Finished Create User's Volatile Files and Directories.
Oct 02 12:23:38 compute-0 systemd[317327]: Reached target Sockets.
Oct 02 12:23:38 compute-0 systemd[317327]: Reached target Basic System.
Oct 02 12:23:38 compute-0 systemd[317327]: Reached target Main User Target.
Oct 02 12:23:38 compute-0 systemd[317327]: Startup finished in 139ms.
Oct 02 12:23:38 compute-0 systemd[1]: Started User Manager for UID 42436.
Oct 02 12:23:38 compute-0 systemd[1]: Started Session 63 of User nova.
Oct 02 12:23:38 compute-0 sshd-session[317304]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:23:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:23:38 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1750275137' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.391 2 DEBUG oslo_concurrency.processutils [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Oct 02 12:23:38 compute-0 sshd-session[317342]: Received disconnect from 192.168.122.101 port 45332:11: disconnected by user
Oct 02 12:23:38 compute-0 sshd-session[317342]: Disconnected from user nova 192.168.122.101 port 45332
Oct 02 12:23:38 compute-0 sshd-session[317304]: pam_unix(sshd:session): session closed for user nova
Oct 02 12:23:38 compute-0 systemd[1]: session-63.scope: Deactivated successfully.
Oct 02 12:23:38 compute-0 systemd-logind[789]: Session 63 logged out. Waiting for processes to exit.
Oct 02 12:23:38 compute-0 systemd-logind[789]: Removed session 63.
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.422 2 DEBUG nova.virt.libvirt.vif [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1174011223',display_name='tempest-ServerActionsTestOtherB-server-1174011223',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1174011223',id=93,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGD2jbBFmRg2ZrnheVnZyLwDISk/dFTNtp10+sWyF/q+rC4Q86cvBQSRgacxSPIqXVpmiVTqI66cLDPhvjcnRFXyQqHRS/RWGvUZk+wm1wfft8CveiGko+Vh4vSox2iOrA==',key_name='tempest-keypair-1336245373',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:23:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='10fff81da7a54740a53a0771ce916329',ramdisk_id='',reservation_id='r-12h270gm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-1686489955',owner_user_name='tempest-ServerActionsTestOtherB-1686489955-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:23:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='25468893d71641a385711fd2982bb00b',uuid=9ae72f32-b9fd-44eb-b10d-79119ad2ca85,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "address": "fa:16:3e:fc:ac:3b", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da3e5f9-b3", "ovs_interfaceid": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.423 2 DEBUG nova.network.os_vif_util [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converting VIF {"id": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "address": "fa:16:3e:fc:ac:3b", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da3e5f9-b3", "ovs_interfaceid": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.423 2 DEBUG nova.network.os_vif_util [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ac:3b,bridge_name='br-int',has_traffic_filtering=True,id=7da3e5f9-b358-4404-825b-b1ad43bc54ac,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da3e5f9-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.426 2 DEBUG nova.virt.libvirt.driver [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:23:38 compute-0 nova_compute[257802]:   <uuid>9ae72f32-b9fd-44eb-b10d-79119ad2ca85</uuid>
Oct 02 12:23:38 compute-0 nova_compute[257802]:   <name>instance-0000005d</name>
Oct 02 12:23:38 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:23:38 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:23:38 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <nova:name>tempest-ServerActionsTestOtherB-server-1174011223</nova:name>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:23:37</nova:creationTime>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:23:38 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:23:38 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:23:38 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:23:38 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:23:38 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:23:38 compute-0 nova_compute[257802]:         <nova:user uuid="25468893d71641a385711fd2982bb00b">tempest-ServerActionsTestOtherB-1686489955-project-member</nova:user>
Oct 02 12:23:38 compute-0 nova_compute[257802]:         <nova:project uuid="10fff81da7a54740a53a0771ce916329">tempest-ServerActionsTestOtherB-1686489955</nova:project>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:23:38 compute-0 nova_compute[257802]:         <nova:port uuid="7da3e5f9-b358-4404-825b-b1ad43bc54ac">
Oct 02 12:23:38 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:23:38 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:23:38 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <system>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <entry name="serial">9ae72f32-b9fd-44eb-b10d-79119ad2ca85</entry>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <entry name="uuid">9ae72f32-b9fd-44eb-b10d-79119ad2ca85</entry>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     </system>
Oct 02 12:23:38 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:23:38 compute-0 nova_compute[257802]:   <os>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:   </os>
Oct 02 12:23:38 compute-0 nova_compute[257802]:   <features>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:   </features>
Oct 02 12:23:38 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:23:38 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:23:38 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/9ae72f32-b9fd-44eb-b10d-79119ad2ca85_disk">
Oct 02 12:23:38 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       </source>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:23:38 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/9ae72f32-b9fd-44eb-b10d-79119ad2ca85_disk.config">
Oct 02 12:23:38 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       </source>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:23:38 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <source protocol="rbd" name="volumes/volume-64167daa-7aad-468b-9a08-b3195fe87186">
Oct 02 12:23:38 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       </source>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:23:38 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <target dev="vdb" bus="virtio"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <serial>64167daa-7aad-468b-9a08-b3195fe87186</serial>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:fc:ac:3b"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <target dev="tap7da3e5f9-b3"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/9ae72f32-b9fd-44eb-b10d-79119ad2ca85/console.log" append="off"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <video>
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     </video>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <input type="keyboard" bus="usb"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:23:38 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:23:38 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:23:38 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:23:38 compute-0 nova_compute[257802]: </domain>
Oct 02 12:23:38 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.427 2 DEBUG nova.compute.manager [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Preparing to wait for external event network-vif-plugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.428 2 DEBUG oslo_concurrency.lockutils [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.428 2 DEBUG oslo_concurrency.lockutils [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.428 2 DEBUG oslo_concurrency.lockutils [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.429 2 DEBUG nova.virt.libvirt.vif [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1174011223',display_name='tempest-ServerActionsTestOtherB-server-1174011223',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1174011223',id=93,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGD2jbBFmRg2ZrnheVnZyLwDISk/dFTNtp10+sWyF/q+rC4Q86cvBQSRgacxSPIqXVpmiVTqI66cLDPhvjcnRFXyQqHRS/RWGvUZk+wm1wfft8CveiGko+Vh4vSox2iOrA==',key_name='tempest-keypair-1336245373',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:23:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='10fff81da7a54740a53a0771ce916329',ramdisk_id='',reservation_id='r-12h270gm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-1686489955',owner_user_name='tempest-ServerActionsTestOtherB-1686489955-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:23:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='25468893d71641a385711fd2982bb00b',uuid=9ae72f32-b9fd-44eb-b10d-79119ad2ca85,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "address": "fa:16:3e:fc:ac:3b", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da3e5f9-b3", "ovs_interfaceid": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.429 2 DEBUG nova.network.os_vif_util [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converting VIF {"id": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "address": "fa:16:3e:fc:ac:3b", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da3e5f9-b3", "ovs_interfaceid": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.429 2 DEBUG nova.network.os_vif_util [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ac:3b,bridge_name='br-int',has_traffic_filtering=True,id=7da3e5f9-b358-4404-825b-b1ad43bc54ac,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da3e5f9-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.430 2 DEBUG os_vif [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ac:3b,bridge_name='br-int',has_traffic_filtering=True,id=7da3e5f9-b358-4404-825b-b1ad43bc54ac,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da3e5f9-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.431 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.431 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.434 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7da3e5f9-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.435 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7da3e5f9-b3, col_values=(('external_ids', {'iface-id': '7da3e5f9-b358-4404-825b-b1ad43bc54ac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:ac:3b', 'vm-uuid': '9ae72f32-b9fd-44eb-b10d-79119ad2ca85'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:38 compute-0 NetworkManager[44987]: <info>  [1759407818.4369] manager: (tap7da3e5f9-b3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/193)
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.442 2 INFO os_vif [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ac:3b,bridge_name='br-int',has_traffic_filtering=True,id=7da3e5f9-b358-4404-825b-b1ad43bc54ac,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da3e5f9-b3')
Oct 02 12:23:38 compute-0 ceph-mon[73607]: osdmap e284: 3 total, 3 up, 3 in
Oct 02 12:23:38 compute-0 ceph-mon[73607]: pgmap v1834: 305 pgs: 305 active+clean; 415 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 774 KiB/s wr, 122 op/s
Oct 02 12:23:38 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1680674664' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:38 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1750275137' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:38 compute-0 kernel: tap7da3e5f9-b3: entered promiscuous mode
Oct 02 12:23:38 compute-0 NetworkManager[44987]: <info>  [1759407818.5155] manager: (tap7da3e5f9-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/194)
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:38.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:38 compute-0 ovn_controller[148183]: 2025-10-02T12:23:38Z|00417|binding|INFO|Claiming lport 7da3e5f9-b358-4404-825b-b1ad43bc54ac for this chassis.
Oct 02 12:23:38 compute-0 ovn_controller[148183]: 2025-10-02T12:23:38Z|00418|binding|INFO|7da3e5f9-b358-4404-825b-b1ad43bc54ac: Claiming fa:16:3e:fc:ac:3b 10.100.0.3
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:38 compute-0 NetworkManager[44987]: <info>  [1759407818.5324] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/195)
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:38 compute-0 NetworkManager[44987]: <info>  [1759407818.5329] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/196)
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:38.540 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:ac:3b 10.100.0.3'], port_security=['fa:16:3e:fc:ac:3b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9ae72f32-b9fd-44eb-b10d-79119ad2ca85', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4035a600-4a5e-41ee-a619-d81e2c993b79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '10fff81da7a54740a53a0771ce916329', 'neutron:revision_number': '7', 'neutron:security_group_ids': '32af0a94-4565-470d-9918-1bc97e347f8f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5dc7931-b785-4336-99b8-936a17be87c3, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=7da3e5f9-b358-4404-825b-b1ad43bc54ac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:38.541 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 7da3e5f9-b358-4404-825b-b1ad43bc54ac in datapath 4035a600-4a5e-41ee-a619-d81e2c993b79 bound to our chassis
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:38.542 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4035a600-4a5e-41ee-a619-d81e2c993b79
Oct 02 12:23:38 compute-0 systemd-udevd[317364]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:23:38 compute-0 sshd-session[317349]: Accepted publickey for nova from 192.168.122.101 port 45340 ssh2: ECDSA SHA256:RlBMWn3An7DGjBe9yfwGQtrEA9dOakLcJHFiZKvkVOc
Oct 02 12:23:38 compute-0 systemd-machined[211836]: New machine qemu-48-instance-0000005d.
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:38.561 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[74a6c909-09b9-47f0-9e12-6ee5939d5cbd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:38.562 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4035a600-41 in ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:38.564 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4035a600-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:38.564 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3e3efa9d-9895-4901-b748-573f800132c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:38 compute-0 NetworkManager[44987]: <info>  [1759407818.5665] device (tap7da3e5f9-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:23:38 compute-0 systemd[1]: Started Virtual Machine qemu-48-instance-0000005d.
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:38.565 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[862c194e-4266-4269-9e51-1f01d1b7bdb3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:38 compute-0 NetworkManager[44987]: <info>  [1759407818.5706] device (tap7da3e5f9-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:38.578 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[66f52e87-a0f9-43a9-8bd9-0dcd758bced8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:38 compute-0 systemd-logind[789]: New session 65 of user nova.
Oct 02 12:23:38 compute-0 systemd[1]: Started Session 65 of User nova.
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:38.604 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1daea0fe-de18-4ad7-ae98-39094f0ed2f7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:38 compute-0 sshd-session[317349]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:38.635 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[f8b938d1-1093-4880-8cf2-cfa188f4c722]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:38 compute-0 systemd-udevd[317368]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:23:38 compute-0 NetworkManager[44987]: <info>  [1759407818.6485] manager: (tap4035a600-40): new Veth device (/org/freedesktop/NetworkManager/Devices/197)
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:38.649 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e049af1a-316c-44d6-9259-776b7b0c186c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:38 compute-0 sshd-session[317373]: Received disconnect from 192.168.122.101 port 45340:11: disconnected by user
Oct 02 12:23:38 compute-0 sshd-session[317373]: Disconnected from user nova 192.168.122.101 port 45340
Oct 02 12:23:38 compute-0 sshd-session[317349]: pam_unix(sshd:session): session closed for user nova
Oct 02 12:23:38 compute-0 systemd[1]: session-65.scope: Deactivated successfully.
Oct 02 12:23:38 compute-0 systemd-logind[789]: Session 65 logged out. Waiting for processes to exit.
Oct 02 12:23:38 compute-0 systemd-logind[789]: Removed session 65.
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:38.687 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[2d3e7065-28eb-4972-aa6a-73de074047ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:38.690 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[c92a2c36-af72-4cda-ac0e-026cfd12bf48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Oct 02 12:23:38 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Oct 02 12:23:38 compute-0 NetworkManager[44987]: <info>  [1759407818.7114] device (tap4035a600-40): carrier: link connected
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:38.716 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[047486ae-1081-414e-b98a-64f166fcb7f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:38.729 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[04a1c76d-d08f-4a17-8a5d-c5952b829eb0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4035a600-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:fb:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588633, 'reachable_time': 36225, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317400, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:38.752 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d2f88123-1e95-4e95-85ca-febd6cb13cdb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed0:fb3f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588633, 'tstamp': 588633}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317401, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:38 compute-0 ovn_controller[148183]: 2025-10-02T12:23:38Z|00419|binding|INFO|Setting lport 7da3e5f9-b358-4404-825b-b1ad43bc54ac ovn-installed in OVS
Oct 02 12:23:38 compute-0 ovn_controller[148183]: 2025-10-02T12:23:38Z|00420|binding|INFO|Setting lport 7da3e5f9-b358-4404-825b-b1ad43bc54ac up in Southbound
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:38.772 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d71d2282-895d-4b2e-a5a1-bbd14429400c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4035a600-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:fb:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588633, 'reachable_time': 36225, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317402, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:38.799 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d29cc49f-8297-46e4-bf2d-9f47f824bde3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:38.852 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[36f37c4c-5ee2-4867-9434-9cf723338867]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:38.853 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4035a600-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:38.854 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:38.855 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4035a600-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:38 compute-0 NetworkManager[44987]: <info>  [1759407818.8573] manager: (tap4035a600-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/198)
Oct 02 12:23:38 compute-0 kernel: tap4035a600-40: entered promiscuous mode
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:38.862 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4035a600-40, col_values=(('external_ids', {'iface-id': '1befa812-080f-4694-ba8b-9130fe81621d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:38 compute-0 ovn_controller[148183]: 2025-10-02T12:23:38Z|00421|binding|INFO|Releasing lport 1befa812-080f-4694-ba8b-9130fe81621d from this chassis (sb_readonly=0)
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:38.864 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4035a600-4a5e-41ee-a619-d81e2c993b79.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4035a600-4a5e-41ee-a619-d81e2c993b79.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:38.865 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0612bd6a-96ad-48fc-bcb6-81ac301aebb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:38.866 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-4035a600-4a5e-41ee-a619-d81e2c993b79
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/4035a600-4a5e-41ee-a619-d81e2c993b79.pid.haproxy
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 4035a600-4a5e-41ee-a619-d81e2c993b79
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:23:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:38.868 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'env', 'PROCESS_TAG=haproxy-4035a600-4a5e-41ee-a619-d81e2c993b79', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4035a600-4a5e-41ee-a619-d81e2c993b79.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:23:38 compute-0 nova_compute[257802]: 2025-10-02 12:23:38.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:23:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:38.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:23:39 compute-0 nova_compute[257802]: 2025-10-02 12:23:39.155 2 DEBUG nova.compute.manager [req-de449487-3041-4ede-a0f3-52df1c1d3ac8 req-560e412c-eb98-4848-95ff-6cb2e03aae27 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Received event network-vif-plugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:39 compute-0 nova_compute[257802]: 2025-10-02 12:23:39.156 2 DEBUG oslo_concurrency.lockutils [req-de449487-3041-4ede-a0f3-52df1c1d3ac8 req-560e412c-eb98-4848-95ff-6cb2e03aae27 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:39 compute-0 nova_compute[257802]: 2025-10-02 12:23:39.157 2 DEBUG oslo_concurrency.lockutils [req-de449487-3041-4ede-a0f3-52df1c1d3ac8 req-560e412c-eb98-4848-95ff-6cb2e03aae27 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:39 compute-0 nova_compute[257802]: 2025-10-02 12:23:39.157 2 DEBUG oslo_concurrency.lockutils [req-de449487-3041-4ede-a0f3-52df1c1d3ac8 req-560e412c-eb98-4848-95ff-6cb2e03aae27 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:39 compute-0 nova_compute[257802]: 2025-10-02 12:23:39.157 2 DEBUG nova.compute.manager [req-de449487-3041-4ede-a0f3-52df1c1d3ac8 req-560e412c-eb98-4848-95ff-6cb2e03aae27 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Processing event network-vif-plugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:23:39 compute-0 podman[317494]: 2025-10-02 12:23:39.207895143 +0000 UTC m=+0.040294988 container create 8c6f5c1dba5913209708050db266ac31f734c84b1628387daced5e7f7d291bb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:23:39 compute-0 systemd[1]: Started libpod-conmon-8c6f5c1dba5913209708050db266ac31f734c84b1628387daced5e7f7d291bb9.scope.
Oct 02 12:23:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:23:39 compute-0 podman[317494]: 2025-10-02 12:23:39.18654576 +0000 UTC m=+0.018945625 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:23:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ee4949522f9268e0afc03426b6b10b05bb6641b5bfebcc88ee89b54bad4b2be/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:23:39 compute-0 podman[317494]: 2025-10-02 12:23:39.305298929 +0000 UTC m=+0.137698774 container init 8c6f5c1dba5913209708050db266ac31f734c84b1628387daced5e7f7d291bb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:23:39 compute-0 podman[317494]: 2025-10-02 12:23:39.31676222 +0000 UTC m=+0.149162065 container start 8c6f5c1dba5913209708050db266ac31f734c84b1628387daced5e7f7d291bb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 12:23:39 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[317509]: [NOTICE]   (317513) : New worker (317515) forked
Oct 02 12:23:39 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[317509]: [NOTICE]   (317513) : Loading success.
Oct 02 12:23:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 2 active+clean+snaptrim, 12 active+clean+snaptrim_wait, 291 active+clean; 470 MiB data, 988 MiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 6.6 MiB/s wr, 201 op/s
Oct 02 12:23:39 compute-0 nova_compute[257802]: 2025-10-02 12:23:39.540 2 DEBUG nova.compute.manager [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:23:39 compute-0 nova_compute[257802]: 2025-10-02 12:23:39.542 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Removed pending event for 9ae72f32-b9fd-44eb-b10d-79119ad2ca85 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:23:39 compute-0 nova_compute[257802]: 2025-10-02 12:23:39.542 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407819.5394785, 9ae72f32-b9fd-44eb-b10d-79119ad2ca85 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:23:39 compute-0 nova_compute[257802]: 2025-10-02 12:23:39.543 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] VM Started (Lifecycle Event)
Oct 02 12:23:39 compute-0 nova_compute[257802]: 2025-10-02 12:23:39.549 2 INFO nova.virt.libvirt.driver [-] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Instance running successfully.
Oct 02 12:23:39 compute-0 nova_compute[257802]: 2025-10-02 12:23:39.549 2 DEBUG nova.virt.libvirt.driver [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887
Oct 02 12:23:39 compute-0 nova_compute[257802]: 2025-10-02 12:23:39.591 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:23:39 compute-0 nova_compute[257802]: 2025-10-02 12:23:39.593 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:23:39 compute-0 nova_compute[257802]: 2025-10-02 12:23:39.627 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Oct 02 12:23:39 compute-0 nova_compute[257802]: 2025-10-02 12:23:39.628 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407819.5407555, 9ae72f32-b9fd-44eb-b10d-79119ad2ca85 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:23:39 compute-0 nova_compute[257802]: 2025-10-02 12:23:39.628 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] VM Paused (Lifecycle Event)
Oct 02 12:23:39 compute-0 nova_compute[257802]: 2025-10-02 12:23:39.660 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:23:39 compute-0 nova_compute[257802]: 2025-10-02 12:23:39.664 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407819.5451367, 9ae72f32-b9fd-44eb-b10d-79119ad2ca85 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:23:39 compute-0 nova_compute[257802]: 2025-10-02 12:23:39.665 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] VM Resumed (Lifecycle Event)
Oct 02 12:23:39 compute-0 nova_compute[257802]: 2025-10-02 12:23:39.684 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:23:39 compute-0 nova_compute[257802]: 2025-10-02 12:23:39.688 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:23:39 compute-0 nova_compute[257802]: 2025-10-02 12:23:39.709 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Oct 02 12:23:39 compute-0 ceph-mon[73607]: osdmap e285: 3 total, 3 up, 3 in
Oct 02 12:23:40 compute-0 nova_compute[257802]: 2025-10-02 12:23:40.034 2 INFO nova.compute.manager [None req-09b74de0-1eb3-413d-b3c2-e2c96c21b8f5 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Updating instance to original state: 'active'
Oct 02 12:23:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:40.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:40 compute-0 ceph-mon[73607]: pgmap v1836: 305 pgs: 2 active+clean+snaptrim, 12 active+clean+snaptrim_wait, 291 active+clean; 470 MiB data, 988 MiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 6.6 MiB/s wr, 201 op/s
Oct 02 12:23:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:23:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:40.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:23:41 compute-0 nova_compute[257802]: 2025-10-02 12:23:41.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:41 compute-0 nova_compute[257802]: 2025-10-02 12:23:41.260 2 DEBUG nova.compute.manager [req-663cdfab-5199-480e-b251-46c41dd9d82b req-ebdf4098-d754-4fbd-b617-2f94b88408da d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Received event network-vif-plugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:41 compute-0 nova_compute[257802]: 2025-10-02 12:23:41.261 2 DEBUG oslo_concurrency.lockutils [req-663cdfab-5199-480e-b251-46c41dd9d82b req-ebdf4098-d754-4fbd-b617-2f94b88408da d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:41 compute-0 nova_compute[257802]: 2025-10-02 12:23:41.261 2 DEBUG oslo_concurrency.lockutils [req-663cdfab-5199-480e-b251-46c41dd9d82b req-ebdf4098-d754-4fbd-b617-2f94b88408da d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:41 compute-0 nova_compute[257802]: 2025-10-02 12:23:41.262 2 DEBUG oslo_concurrency.lockutils [req-663cdfab-5199-480e-b251-46c41dd9d82b req-ebdf4098-d754-4fbd-b617-2f94b88408da d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:41 compute-0 nova_compute[257802]: 2025-10-02 12:23:41.262 2 DEBUG nova.compute.manager [req-663cdfab-5199-480e-b251-46c41dd9d82b req-ebdf4098-d754-4fbd-b617-2f94b88408da d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] No waiting events found dispatching network-vif-plugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:23:41 compute-0 nova_compute[257802]: 2025-10-02 12:23:41.263 2 WARNING nova.compute.manager [req-663cdfab-5199-480e-b251-46c41dd9d82b req-ebdf4098-d754-4fbd-b617-2f94b88408da d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Received unexpected event network-vif-plugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac for instance with vm_state active and task_state None.
Oct 02 12:23:41 compute-0 sudo[317525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:23:41 compute-0 sudo[317525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:41 compute-0 sudo[317525]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:41 compute-0 sudo[317550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:23:41 compute-0 sudo[317550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:41 compute-0 sudo[317550]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1837: 305 pgs: 2 active+clean+snaptrim, 12 active+clean+snaptrim_wait, 291 active+clean; 470 MiB data, 988 MiB used, 20 GiB / 21 GiB avail; 8.9 MiB/s rd, 6.6 MiB/s wr, 184 op/s
Oct 02 12:23:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:23:42
Oct 02 12:23:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:23:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:23:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'images', 'vms', 'default.rgw.control', 'default.rgw.log', 'backups', '.mgr']
Oct 02 12:23:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:23:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:42.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:42 compute-0 nova_compute[257802]: 2025-10-02 12:23:42.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:23:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:23:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:23:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:23:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:23:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:23:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:42.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:42 compute-0 ceph-mon[73607]: pgmap v1837: 305 pgs: 2 active+clean+snaptrim, 12 active+clean+snaptrim_wait, 291 active+clean; 470 MiB data, 988 MiB used, 20 GiB / 21 GiB avail; 8.9 MiB/s rd, 6.6 MiB/s wr, 184 op/s
Oct 02 12:23:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:23:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:23:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:23:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:23:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:23:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:23:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:23:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:23:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:23:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:23:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1838: 305 pgs: 305 active+clean; 487 MiB data, 1000 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 7.2 MiB/s wr, 297 op/s
Oct 02 12:23:43 compute-0 nova_compute[257802]: 2025-10-02 12:23:43.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:43 compute-0 nova_compute[257802]: 2025-10-02 12:23:43.670 2 DEBUG oslo_concurrency.lockutils [None req-c67148ea-294b-4545-8521-6691d80fcff7 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:43 compute-0 nova_compute[257802]: 2025-10-02 12:23:43.671 2 DEBUG oslo_concurrency.lockutils [None req-c67148ea-294b-4545-8521-6691d80fcff7 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:43 compute-0 nova_compute[257802]: 2025-10-02 12:23:43.671 2 DEBUG oslo_concurrency.lockutils [None req-c67148ea-294b-4545-8521-6691d80fcff7 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:43 compute-0 nova_compute[257802]: 2025-10-02 12:23:43.671 2 DEBUG oslo_concurrency.lockutils [None req-c67148ea-294b-4545-8521-6691d80fcff7 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:43 compute-0 nova_compute[257802]: 2025-10-02 12:23:43.671 2 DEBUG oslo_concurrency.lockutils [None req-c67148ea-294b-4545-8521-6691d80fcff7 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:43 compute-0 nova_compute[257802]: 2025-10-02 12:23:43.672 2 INFO nova.compute.manager [None req-c67148ea-294b-4545-8521-6691d80fcff7 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Terminating instance
Oct 02 12:23:43 compute-0 nova_compute[257802]: 2025-10-02 12:23:43.673 2 DEBUG nova.compute.manager [None req-c67148ea-294b-4545-8521-6691d80fcff7 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:23:43 compute-0 kernel: tap7da3e5f9-b3 (unregistering): left promiscuous mode
Oct 02 12:23:43 compute-0 NetworkManager[44987]: <info>  [1759407823.7727] device (tap7da3e5f9-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:23:43 compute-0 nova_compute[257802]: 2025-10-02 12:23:43.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:43 compute-0 ovn_controller[148183]: 2025-10-02T12:23:43Z|00422|binding|INFO|Releasing lport 7da3e5f9-b358-4404-825b-b1ad43bc54ac from this chassis (sb_readonly=0)
Oct 02 12:23:43 compute-0 ovn_controller[148183]: 2025-10-02T12:23:43Z|00423|binding|INFO|Setting lport 7da3e5f9-b358-4404-825b-b1ad43bc54ac down in Southbound
Oct 02 12:23:43 compute-0 ovn_controller[148183]: 2025-10-02T12:23:43Z|00424|binding|INFO|Removing iface tap7da3e5f9-b3 ovn-installed in OVS
Oct 02 12:23:43 compute-0 nova_compute[257802]: 2025-10-02 12:23:43.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:43.798 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:ac:3b 10.100.0.3'], port_security=['fa:16:3e:fc:ac:3b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9ae72f32-b9fd-44eb-b10d-79119ad2ca85', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4035a600-4a5e-41ee-a619-d81e2c993b79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '10fff81da7a54740a53a0771ce916329', 'neutron:revision_number': '8', 'neutron:security_group_ids': '32af0a94-4565-470d-9918-1bc97e347f8f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.188', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5dc7931-b785-4336-99b8-936a17be87c3, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=7da3e5f9-b358-4404-825b-b1ad43bc54ac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:23:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:43.800 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 7da3e5f9-b358-4404-825b-b1ad43bc54ac in datapath 4035a600-4a5e-41ee-a619-d81e2c993b79 unbound from our chassis
Oct 02 12:23:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:43.803 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4035a600-4a5e-41ee-a619-d81e2c993b79, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:23:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:43.804 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[588090cf-aeb2-4144-a364-25352933f0bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:43.805 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 namespace which is not needed anymore
Oct 02 12:23:43 compute-0 nova_compute[257802]: 2025-10-02 12:23:43.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:43 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Oct 02 12:23:43 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000005d.scope: Consumed 5.166s CPU time.
Oct 02 12:23:43 compute-0 systemd-machined[211836]: Machine qemu-48-instance-0000005d terminated.
Oct 02 12:23:43 compute-0 nova_compute[257802]: 2025-10-02 12:23:43.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:43 compute-0 nova_compute[257802]: 2025-10-02 12:23:43.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:43 compute-0 nova_compute[257802]: 2025-10-02 12:23:43.916 2 INFO nova.virt.libvirt.driver [-] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Instance destroyed successfully.
Oct 02 12:23:43 compute-0 nova_compute[257802]: 2025-10-02 12:23:43.916 2 DEBUG nova.objects.instance [None req-c67148ea-294b-4545-8521-6691d80fcff7 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'resources' on Instance uuid 9ae72f32-b9fd-44eb-b10d-79119ad2ca85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:23:43 compute-0 nova_compute[257802]: 2025-10-02 12:23:43.934 2 DEBUG nova.virt.libvirt.vif [None req-c67148ea-294b-4545-8521-6691d80fcff7 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1174011223',display_name='tempest-ServerActionsTestOtherB-server-1174011223',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1174011223',id=93,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGD2jbBFmRg2ZrnheVnZyLwDISk/dFTNtp10+sWyF/q+rC4Q86cvBQSRgacxSPIqXVpmiVTqI66cLDPhvjcnRFXyQqHRS/RWGvUZk+wm1wfft8CveiGko+Vh4vSox2iOrA==',key_name='tempest-keypair-1336245373',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:23:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='10fff81da7a54740a53a0771ce916329',ramdisk_id='',reservation_id='r-12h270gm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1686489955',owner_user_name='tempest-ServerActionsTestOtherB-1686489955-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:23:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='25468893d71641a385711fd2982bb00b',uuid=9ae72f32-b9fd-44eb-b10d-79119ad2ca85,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "address": "fa:16:3e:fc:ac:3b", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da3e5f9-b3", "ovs_interfaceid": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:23:43 compute-0 nova_compute[257802]: 2025-10-02 12:23:43.935 2 DEBUG nova.network.os_vif_util [None req-c67148ea-294b-4545-8521-6691d80fcff7 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converting VIF {"id": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "address": "fa:16:3e:fc:ac:3b", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da3e5f9-b3", "ovs_interfaceid": "7da3e5f9-b358-4404-825b-b1ad43bc54ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:23:43 compute-0 nova_compute[257802]: 2025-10-02 12:23:43.936 2 DEBUG nova.network.os_vif_util [None req-c67148ea-294b-4545-8521-6691d80fcff7 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ac:3b,bridge_name='br-int',has_traffic_filtering=True,id=7da3e5f9-b358-4404-825b-b1ad43bc54ac,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da3e5f9-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:23:43 compute-0 nova_compute[257802]: 2025-10-02 12:23:43.936 2 DEBUG os_vif [None req-c67148ea-294b-4545-8521-6691d80fcff7 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ac:3b,bridge_name='br-int',has_traffic_filtering=True,id=7da3e5f9-b358-4404-825b-b1ad43bc54ac,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da3e5f9-b3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:23:43 compute-0 nova_compute[257802]: 2025-10-02 12:23:43.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:43 compute-0 nova_compute[257802]: 2025-10-02 12:23:43.939 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7da3e5f9-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:43 compute-0 nova_compute[257802]: 2025-10-02 12:23:43.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:43 compute-0 nova_compute[257802]: 2025-10-02 12:23:43.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:23:43 compute-0 nova_compute[257802]: 2025-10-02 12:23:43.947 2 INFO os_vif [None req-c67148ea-294b-4545-8521-6691d80fcff7 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ac:3b,bridge_name='br-int',has_traffic_filtering=True,id=7da3e5f9-b358-4404-825b-b1ad43bc54ac,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da3e5f9-b3')
Oct 02 12:23:43 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[317509]: [NOTICE]   (317513) : haproxy version is 2.8.14-c23fe91
Oct 02 12:23:43 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[317509]: [NOTICE]   (317513) : path to executable is /usr/sbin/haproxy
Oct 02 12:23:43 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[317509]: [WARNING]  (317513) : Exiting Master process...
Oct 02 12:23:43 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[317509]: [ALERT]    (317513) : Current worker (317515) exited with code 143 (Terminated)
Oct 02 12:23:43 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[317509]: [WARNING]  (317513) : All workers exited. Exiting... (0)
Oct 02 12:23:43 compute-0 systemd[1]: libpod-8c6f5c1dba5913209708050db266ac31f734c84b1628387daced5e7f7d291bb9.scope: Deactivated successfully.
Oct 02 12:23:43 compute-0 podman[317607]: 2025-10-02 12:23:43.974501732 +0000 UTC m=+0.053047519 container died 8c6f5c1dba5913209708050db266ac31f734c84b1628387daced5e7f7d291bb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:23:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8c6f5c1dba5913209708050db266ac31f734c84b1628387daced5e7f7d291bb9-userdata-shm.mount: Deactivated successfully.
Oct 02 12:23:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ee4949522f9268e0afc03426b6b10b05bb6641b5bfebcc88ee89b54bad4b2be-merged.mount: Deactivated successfully.
Oct 02 12:23:44 compute-0 podman[317607]: 2025-10-02 12:23:44.037591198 +0000 UTC m=+0.116136975 container cleanup 8c6f5c1dba5913209708050db266ac31f734c84b1628387daced5e7f7d291bb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:23:44 compute-0 systemd[1]: libpod-conmon-8c6f5c1dba5913209708050db266ac31f734c84b1628387daced5e7f7d291bb9.scope: Deactivated successfully.
Oct 02 12:23:44 compute-0 podman[317658]: 2025-10-02 12:23:44.12745367 +0000 UTC m=+0.055168083 container remove 8c6f5c1dba5913209708050db266ac31f734c84b1628387daced5e7f7d291bb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:23:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:44.137 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8c2980ea-5ed4-41ce-b027-56c0d18ef43d]: (4, ('Thu Oct  2 12:23:43 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 (8c6f5c1dba5913209708050db266ac31f734c84b1628387daced5e7f7d291bb9)\n8c6f5c1dba5913209708050db266ac31f734c84b1628387daced5e7f7d291bb9\nThu Oct  2 12:23:44 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 (8c6f5c1dba5913209708050db266ac31f734c84b1628387daced5e7f7d291bb9)\n8c6f5c1dba5913209708050db266ac31f734c84b1628387daced5e7f7d291bb9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:44.139 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f1934537-9a5d-4258-bd8e-17c72ca45a51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:44.141 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4035a600-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:44 compute-0 kernel: tap4035a600-40: left promiscuous mode
Oct 02 12:23:44 compute-0 nova_compute[257802]: 2025-10-02 12:23:44.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:44.152 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f9e45de1-9cd5-4e65-8664-f61356973733]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:44 compute-0 nova_compute[257802]: 2025-10-02 12:23:44.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:44.183 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d6a2fb23-78de-4230-8e80-3edc8b563db2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:44.184 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[49b3938d-af2e-47e0-a4fc-c109b03b220c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:44.202 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f82f96c8-4f7b-4b8f-9b3e-17fd6dceceb8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588625, 'reachable_time': 26990, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317673, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:44 compute-0 systemd[1]: run-netns-ovnmeta\x2d4035a600\x2d4a5e\x2d41ee\x2da619\x2dd81e2c993b79.mount: Deactivated successfully.
Oct 02 12:23:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:44.205 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:23:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:23:44.205 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[d71b3bc8-115e-4e90-8cba-c7f3450cdcf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:44 compute-0 nova_compute[257802]: 2025-10-02 12:23:44.222 2 DEBUG nova.compute.manager [req-615bda8a-7392-408e-aec1-8bd4b0e7bc1c req-75cdf175-c4bc-4629-ae97-709f5bd457dc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Received event network-vif-unplugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:44 compute-0 nova_compute[257802]: 2025-10-02 12:23:44.222 2 DEBUG oslo_concurrency.lockutils [req-615bda8a-7392-408e-aec1-8bd4b0e7bc1c req-75cdf175-c4bc-4629-ae97-709f5bd457dc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:44 compute-0 nova_compute[257802]: 2025-10-02 12:23:44.223 2 DEBUG oslo_concurrency.lockutils [req-615bda8a-7392-408e-aec1-8bd4b0e7bc1c req-75cdf175-c4bc-4629-ae97-709f5bd457dc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:44 compute-0 nova_compute[257802]: 2025-10-02 12:23:44.223 2 DEBUG oslo_concurrency.lockutils [req-615bda8a-7392-408e-aec1-8bd4b0e7bc1c req-75cdf175-c4bc-4629-ae97-709f5bd457dc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:44 compute-0 nova_compute[257802]: 2025-10-02 12:23:44.223 2 DEBUG nova.compute.manager [req-615bda8a-7392-408e-aec1-8bd4b0e7bc1c req-75cdf175-c4bc-4629-ae97-709f5bd457dc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] No waiting events found dispatching network-vif-unplugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:23:44 compute-0 nova_compute[257802]: 2025-10-02 12:23:44.223 2 DEBUG nova.compute.manager [req-615bda8a-7392-408e-aec1-8bd4b0e7bc1c req-75cdf175-c4bc-4629-ae97-709f5bd457dc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Received event network-vif-unplugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:23:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:44.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:44 compute-0 nova_compute[257802]: 2025-10-02 12:23:44.793 2 INFO nova.virt.libvirt.driver [None req-c67148ea-294b-4545-8521-6691d80fcff7 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Deleting instance files /var/lib/nova/instances/9ae72f32-b9fd-44eb-b10d-79119ad2ca85_del
Oct 02 12:23:44 compute-0 nova_compute[257802]: 2025-10-02 12:23:44.793 2 INFO nova.virt.libvirt.driver [None req-c67148ea-294b-4545-8521-6691d80fcff7 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Deletion of /var/lib/nova/instances/9ae72f32-b9fd-44eb-b10d-79119ad2ca85_del complete
Oct 02 12:23:44 compute-0 nova_compute[257802]: 2025-10-02 12:23:44.870 2 INFO nova.compute.manager [None req-c67148ea-294b-4545-8521-6691d80fcff7 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Took 1.20 seconds to destroy the instance on the hypervisor.
Oct 02 12:23:44 compute-0 nova_compute[257802]: 2025-10-02 12:23:44.870 2 DEBUG oslo.service.loopingcall [None req-c67148ea-294b-4545-8521-6691d80fcff7 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:23:44 compute-0 nova_compute[257802]: 2025-10-02 12:23:44.871 2 DEBUG nova.compute.manager [-] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:23:44 compute-0 nova_compute[257802]: 2025-10-02 12:23:44.871 2 DEBUG nova.network.neutron [-] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:23:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:44.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:45 compute-0 nova_compute[257802]: 2025-10-02 12:23:45.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:45 compute-0 ceph-mon[73607]: pgmap v1838: 305 pgs: 305 active+clean; 487 MiB data, 1000 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 7.2 MiB/s wr, 297 op/s
Oct 02 12:23:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1839: 305 pgs: 305 active+clean; 493 MiB data, 1007 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 6.5 MiB/s wr, 259 op/s
Oct 02 12:23:46 compute-0 nova_compute[257802]: 2025-10-02 12:23:46.079 2 DEBUG nova.network.neutron [-] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:23:46 compute-0 nova_compute[257802]: 2025-10-02 12:23:46.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:23:46 compute-0 nova_compute[257802]: 2025-10-02 12:23:46.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:23:46 compute-0 nova_compute[257802]: 2025-10-02 12:23:46.138 2 INFO nova.compute.manager [-] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Took 1.27 seconds to deallocate network for instance.
Oct 02 12:23:46 compute-0 nova_compute[257802]: 2025-10-02 12:23:46.154 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:23:46 compute-0 nova_compute[257802]: 2025-10-02 12:23:46.366 2 INFO nova.compute.manager [None req-c67148ea-294b-4545-8521-6691d80fcff7 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Took 0.23 seconds to detach 1 volumes for instance.
Oct 02 12:23:46 compute-0 nova_compute[257802]: 2025-10-02 12:23:46.426 2 DEBUG oslo_concurrency.lockutils [None req-c67148ea-294b-4545-8521-6691d80fcff7 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:46 compute-0 nova_compute[257802]: 2025-10-02 12:23:46.426 2 DEBUG oslo_concurrency.lockutils [None req-c67148ea-294b-4545-8521-6691d80fcff7 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:46 compute-0 nova_compute[257802]: 2025-10-02 12:23:46.432 2 DEBUG oslo_concurrency.lockutils [None req-c67148ea-294b-4545-8521-6691d80fcff7 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:46 compute-0 nova_compute[257802]: 2025-10-02 12:23:46.465 2 INFO nova.scheduler.client.report [None req-c67148ea-294b-4545-8521-6691d80fcff7 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Deleted allocations for instance 9ae72f32-b9fd-44eb-b10d-79119ad2ca85
Oct 02 12:23:46 compute-0 nova_compute[257802]: 2025-10-02 12:23:46.491 2 DEBUG nova.compute.manager [req-eae16048-6e71-4698-a321-8b38ee6329bb req-5631ce79-b09f-4812-ae1e-b756d6abd6d2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Received event network-vif-plugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:46 compute-0 nova_compute[257802]: 2025-10-02 12:23:46.491 2 DEBUG oslo_concurrency.lockutils [req-eae16048-6e71-4698-a321-8b38ee6329bb req-5631ce79-b09f-4812-ae1e-b756d6abd6d2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:46 compute-0 nova_compute[257802]: 2025-10-02 12:23:46.492 2 DEBUG oslo_concurrency.lockutils [req-eae16048-6e71-4698-a321-8b38ee6329bb req-5631ce79-b09f-4812-ae1e-b756d6abd6d2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:46 compute-0 nova_compute[257802]: 2025-10-02 12:23:46.492 2 DEBUG oslo_concurrency.lockutils [req-eae16048-6e71-4698-a321-8b38ee6329bb req-5631ce79-b09f-4812-ae1e-b756d6abd6d2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:46 compute-0 nova_compute[257802]: 2025-10-02 12:23:46.492 2 DEBUG nova.compute.manager [req-eae16048-6e71-4698-a321-8b38ee6329bb req-5631ce79-b09f-4812-ae1e-b756d6abd6d2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] No waiting events found dispatching network-vif-plugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:23:46 compute-0 nova_compute[257802]: 2025-10-02 12:23:46.492 2 WARNING nova.compute.manager [req-eae16048-6e71-4698-a321-8b38ee6329bb req-5631ce79-b09f-4812-ae1e-b756d6abd6d2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Received unexpected event network-vif-plugged-7da3e5f9-b358-4404-825b-b1ad43bc54ac for instance with vm_state deleted and task_state None.
Oct 02 12:23:46 compute-0 nova_compute[257802]: 2025-10-02 12:23:46.493 2 DEBUG nova.compute.manager [req-eae16048-6e71-4698-a321-8b38ee6329bb req-5631ce79-b09f-4812-ae1e-b756d6abd6d2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Received event network-vif-deleted-7da3e5f9-b358-4404-825b-b1ad43bc54ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:46.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:46 compute-0 nova_compute[257802]: 2025-10-02 12:23:46.540 2 DEBUG oslo_concurrency.lockutils [None req-c67148ea-294b-4545-8521-6691d80fcff7 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "9ae72f32-b9fd-44eb-b10d-79119ad2ca85" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:46.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Oct 02 12:23:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Oct 02 12:23:47 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Oct 02 12:23:47 compute-0 ceph-mon[73607]: pgmap v1839: 305 pgs: 305 active+clean; 493 MiB data, 1007 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 6.5 MiB/s wr, 259 op/s
Oct 02 12:23:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3496810732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:47 compute-0 ceph-mon[73607]: osdmap e286: 3 total, 3 up, 3 in
Oct 02 12:23:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 449 MiB data, 978 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 5.2 MiB/s wr, 285 op/s
Oct 02 12:23:47 compute-0 nova_compute[257802]: 2025-10-02 12:23:47.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:48 compute-0 nova_compute[257802]: 2025-10-02 12:23:48.155 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:23:48 compute-0 nova_compute[257802]: 2025-10-02 12:23:48.156 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:23:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3789223055' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:48.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:48 compute-0 systemd[1]: Stopping User Manager for UID 42436...
Oct 02 12:23:48 compute-0 systemd[317327]: Activating special unit Exit the Session...
Oct 02 12:23:48 compute-0 systemd[317327]: Stopped target Main User Target.
Oct 02 12:23:48 compute-0 systemd[317327]: Stopped target Basic System.
Oct 02 12:23:48 compute-0 systemd[317327]: Stopped target Paths.
Oct 02 12:23:48 compute-0 systemd[317327]: Stopped target Sockets.
Oct 02 12:23:48 compute-0 systemd[317327]: Stopped target Timers.
Oct 02 12:23:48 compute-0 systemd[317327]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 02 12:23:48 compute-0 systemd[317327]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 12:23:48 compute-0 systemd[317327]: Closed D-Bus User Message Bus Socket.
Oct 02 12:23:48 compute-0 systemd[317327]: Stopped Create User's Volatile Files and Directories.
Oct 02 12:23:48 compute-0 systemd[317327]: Removed slice User Application Slice.
Oct 02 12:23:48 compute-0 systemd[317327]: Reached target Shutdown.
Oct 02 12:23:48 compute-0 systemd[317327]: Finished Exit the Session.
Oct 02 12:23:48 compute-0 systemd[317327]: Reached target Exit the Session.
Oct 02 12:23:48 compute-0 systemd[1]: user@42436.service: Deactivated successfully.
Oct 02 12:23:48 compute-0 systemd[1]: Stopped User Manager for UID 42436.
Oct 02 12:23:48 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct 02 12:23:48 compute-0 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct 02 12:23:48 compute-0 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct 02 12:23:48 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct 02 12:23:48 compute-0 systemd[1]: Removed slice User Slice of UID 42436.
Oct 02 12:23:48 compute-0 nova_compute[257802]: 2025-10-02 12:23:48.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:23:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:48.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:23:49 compute-0 ceph-mon[73607]: pgmap v1841: 305 pgs: 305 active+clean; 449 MiB data, 978 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 5.2 MiB/s wr, 285 op/s
Oct 02 12:23:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/52643392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1842: 305 pgs: 305 active+clean; 358 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.3 MiB/s wr, 263 op/s
Oct 02 12:23:50 compute-0 nova_compute[257802]: 2025-10-02 12:23:50.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:23:50 compute-0 nova_compute[257802]: 2025-10-02 12:23:50.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:23:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:50.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:50.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:51 compute-0 nova_compute[257802]: 2025-10-02 12:23:51.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:23:51 compute-0 ceph-mon[73607]: pgmap v1842: 305 pgs: 305 active+clean; 358 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.3 MiB/s wr, 263 op/s
Oct 02 12:23:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3708690614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1161124527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:51 compute-0 nova_compute[257802]: 2025-10-02 12:23:51.388 2 DEBUG nova.compute.manager [req-f617b69a-342d-4ec4-82df-8ad14ff05b32 req-71627f87-efec-4181-b551-220845caf64b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Received event network-vif-unplugged-a399f412-7fee-426a-9fed-b593373c0a90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:51 compute-0 nova_compute[257802]: 2025-10-02 12:23:51.389 2 DEBUG oslo_concurrency.lockutils [req-f617b69a-342d-4ec4-82df-8ad14ff05b32 req-71627f87-efec-4181-b551-220845caf64b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:51 compute-0 nova_compute[257802]: 2025-10-02 12:23:51.389 2 DEBUG oslo_concurrency.lockutils [req-f617b69a-342d-4ec4-82df-8ad14ff05b32 req-71627f87-efec-4181-b551-220845caf64b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:51 compute-0 nova_compute[257802]: 2025-10-02 12:23:51.390 2 DEBUG oslo_concurrency.lockutils [req-f617b69a-342d-4ec4-82df-8ad14ff05b32 req-71627f87-efec-4181-b551-220845caf64b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:51 compute-0 nova_compute[257802]: 2025-10-02 12:23:51.390 2 DEBUG nova.compute.manager [req-f617b69a-342d-4ec4-82df-8ad14ff05b32 req-71627f87-efec-4181-b551-220845caf64b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] No waiting events found dispatching network-vif-unplugged-a399f412-7fee-426a-9fed-b593373c0a90 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:23:51 compute-0 nova_compute[257802]: 2025-10-02 12:23:51.390 2 WARNING nova.compute.manager [req-f617b69a-342d-4ec4-82df-8ad14ff05b32 req-71627f87-efec-4181-b551-220845caf64b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Received unexpected event network-vif-unplugged-a399f412-7fee-426a-9fed-b593373c0a90 for instance with vm_state active and task_state resize_migrating.
Oct 02 12:23:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 305 active+clean; 358 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.3 MiB/s wr, 263 op/s
Oct 02 12:23:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:52 compute-0 nova_compute[257802]: 2025-10-02 12:23:52.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:23:52 compute-0 nova_compute[257802]: 2025-10-02 12:23:52.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:23:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/450895416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:52 compute-0 nova_compute[257802]: 2025-10-02 12:23:52.365 2 INFO nova.network.neutron [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Updating port a399f412-7fee-426a-9fed-b593373c0a90 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 02 12:23:52 compute-0 nova_compute[257802]: 2025-10-02 12:23:52.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:52.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:52.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:53 compute-0 ceph-mon[73607]: pgmap v1843: 305 pgs: 305 active+clean; 358 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.3 MiB/s wr, 263 op/s
Oct 02 12:23:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 305 active+clean; 397 MiB data, 944 MiB used, 20 GiB / 21 GiB avail; 999 KiB/s rd, 3.5 MiB/s wr, 177 op/s
Oct 02 12:23:53 compute-0 nova_compute[257802]: 2025-10-02 12:23:53.745 2 DEBUG nova.compute.manager [req-4b478065-79a7-47b8-af5d-9e4a31338838 req-23febb75-7fb5-49ca-be95-c443d47b003d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Received event network-vif-plugged-a399f412-7fee-426a-9fed-b593373c0a90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:53 compute-0 nova_compute[257802]: 2025-10-02 12:23:53.745 2 DEBUG oslo_concurrency.lockutils [req-4b478065-79a7-47b8-af5d-9e4a31338838 req-23febb75-7fb5-49ca-be95-c443d47b003d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:53 compute-0 nova_compute[257802]: 2025-10-02 12:23:53.745 2 DEBUG oslo_concurrency.lockutils [req-4b478065-79a7-47b8-af5d-9e4a31338838 req-23febb75-7fb5-49ca-be95-c443d47b003d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:53 compute-0 nova_compute[257802]: 2025-10-02 12:23:53.745 2 DEBUG oslo_concurrency.lockutils [req-4b478065-79a7-47b8-af5d-9e4a31338838 req-23febb75-7fb5-49ca-be95-c443d47b003d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:53 compute-0 nova_compute[257802]: 2025-10-02 12:23:53.746 2 DEBUG nova.compute.manager [req-4b478065-79a7-47b8-af5d-9e4a31338838 req-23febb75-7fb5-49ca-be95-c443d47b003d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] No waiting events found dispatching network-vif-plugged-a399f412-7fee-426a-9fed-b593373c0a90 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:23:53 compute-0 nova_compute[257802]: 2025-10-02 12:23:53.746 2 WARNING nova.compute.manager [req-4b478065-79a7-47b8-af5d-9e4a31338838 req-23febb75-7fb5-49ca-be95-c443d47b003d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Received unexpected event network-vif-plugged-a399f412-7fee-426a-9fed-b593373c0a90 for instance with vm_state active and task_state resize_migrated.
Oct 02 12:23:53 compute-0 nova_compute[257802]: 2025-10-02 12:23:53.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:54 compute-0 nova_compute[257802]: 2025-10-02 12:23:54.012 2 DEBUG oslo_concurrency.lockutils [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "refresh_cache-e1e08b12-a046-4fbd-a025-dff040cca766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:23:54 compute-0 nova_compute[257802]: 2025-10-02 12:23:54.012 2 DEBUG oslo_concurrency.lockutils [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquired lock "refresh_cache-e1e08b12-a046-4fbd-a025-dff040cca766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:23:54 compute-0 nova_compute[257802]: 2025-10-02 12:23:54.013 2 DEBUG nova.network.neutron [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:23:54 compute-0 nova_compute[257802]: 2025-10-02 12:23:54.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:23:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:23:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:23:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:23:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:23:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005021490554624977 of space, bias 1.0, pg target 1.5064471663874932 quantized to 32 (current 32)
Oct 02 12:23:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:23:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002163050495533222 of space, bias 1.0, pg target 0.6467520981644334 quantized to 32 (current 32)
Oct 02 12:23:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:23:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:23:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:23:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004066738495719337 of space, bias 1.0, pg target 1.215954810220082 quantized to 32 (current 32)
Oct 02 12:23:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:23:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:23:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:23:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:23:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:23:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027081297692164525 quantized to 32 (current 32)
Oct 02 12:23:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:23:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:23:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:23:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:23:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:23:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:23:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:54.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:54.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:23:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3119879302' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:23:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:23:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3119879302' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:23:55 compute-0 ceph-mon[73607]: pgmap v1844: 305 pgs: 305 active+clean; 397 MiB data, 944 MiB used, 20 GiB / 21 GiB avail; 999 KiB/s rd, 3.5 MiB/s wr, 177 op/s
Oct 02 12:23:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3119879302' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:23:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3119879302' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:23:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1845: 305 pgs: 305 active+clean; 420 MiB data, 958 MiB used, 20 GiB / 21 GiB avail; 526 KiB/s rd, 4.3 MiB/s wr, 169 op/s
Oct 02 12:23:55 compute-0 nova_compute[257802]: 2025-10-02 12:23:55.983 2 DEBUG nova.compute.manager [req-0e787e3c-b9c4-473b-ace0-5364588aa154 req-3410c0d5-8811-42e4-9364-8f277c692122 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Received event network-changed-a399f412-7fee-426a-9fed-b593373c0a90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:55 compute-0 nova_compute[257802]: 2025-10-02 12:23:55.983 2 DEBUG nova.compute.manager [req-0e787e3c-b9c4-473b-ace0-5364588aa154 req-3410c0d5-8811-42e4-9364-8f277c692122 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Refreshing instance network info cache due to event network-changed-a399f412-7fee-426a-9fed-b593373c0a90. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:23:55 compute-0 nova_compute[257802]: 2025-10-02 12:23:55.983 2 DEBUG oslo_concurrency.lockutils [req-0e787e3c-b9c4-473b-ace0-5364588aa154 req-3410c0d5-8811-42e4-9364-8f277c692122 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-e1e08b12-a046-4fbd-a025-dff040cca766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:23:56 compute-0 ceph-mon[73607]: pgmap v1845: 305 pgs: 305 active+clean; 420 MiB data, 958 MiB used, 20 GiB / 21 GiB avail; 526 KiB/s rd, 4.3 MiB/s wr, 169 op/s
Oct 02 12:23:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2161220600' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/834154144' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:56.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:56.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:57.002361) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407837002477, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 784, "num_deletes": 257, "total_data_size": 984448, "memory_usage": 1000040, "flush_reason": "Manual Compaction"}
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407837010407, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 972642, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40433, "largest_seqno": 41216, "table_properties": {"data_size": 968677, "index_size": 1681, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9264, "raw_average_key_size": 19, "raw_value_size": 960429, "raw_average_value_size": 2017, "num_data_blocks": 73, "num_entries": 476, "num_filter_entries": 476, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407787, "oldest_key_time": 1759407787, "file_creation_time": 1759407837, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 8090 microseconds, and 2855 cpu microseconds.
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:57.010448) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 972642 bytes OK
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:57.010467) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:57.013002) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:57.013056) EVENT_LOG_v1 {"time_micros": 1759407837013044, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:57.013098) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 980491, prev total WAL file size 980491, number of live WAL files 2.
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:57.013916) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323534' seq:72057594037927935, type:22 .. '6C6F676D0031353036' seq:0, type:0; will stop at (end)
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(949KB)], [86(9743KB)]
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407837013978, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 10949579, "oldest_snapshot_seqno": -1}
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6838 keys, 10814160 bytes, temperature: kUnknown
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407837105073, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 10814160, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10767545, "index_size": 28391, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17157, "raw_key_size": 175996, "raw_average_key_size": 25, "raw_value_size": 10644333, "raw_average_value_size": 1556, "num_data_blocks": 1133, "num_entries": 6838, "num_filter_entries": 6838, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759407837, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:57.105317) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 10814160 bytes
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:57.107460) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.1 rd, 118.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 9.5 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(22.4) write-amplify(11.1) OK, records in: 7368, records dropped: 530 output_compression: NoCompression
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:57.107477) EVENT_LOG_v1 {"time_micros": 1759407837107469, "job": 50, "event": "compaction_finished", "compaction_time_micros": 91165, "compaction_time_cpu_micros": 39947, "output_level": 6, "num_output_files": 1, "total_output_size": 10814160, "num_input_records": 7368, "num_output_records": 6838, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407837107724, "job": 50, "event": "table_file_deletion", "file_number": 88}
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407837109546, "job": 50, "event": "table_file_deletion", "file_number": 86}
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:57.013746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:57.109574) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:57.109578) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:57.109580) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:57.109582) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:23:57 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:23:57.109584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:23:57 compute-0 nova_compute[257802]: 2025-10-02 12:23:57.124 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:23:57 compute-0 nova_compute[257802]: 2025-10-02 12:23:57.125 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:23:57 compute-0 nova_compute[257802]: 2025-10-02 12:23:57.125 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:23:57 compute-0 nova_compute[257802]: 2025-10-02 12:23:57.143 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-e1e08b12-a046-4fbd-a025-dff040cca766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:23:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 305 active+clean; 476 MiB data, 986 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 6.0 MiB/s wr, 132 op/s
Oct 02 12:23:57 compute-0 nova_compute[257802]: 2025-10-02 12:23:57.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:23:58 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3300163746' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:58 compute-0 nova_compute[257802]: 2025-10-02 12:23:58.079 2 DEBUG nova.network.neutron [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Updating instance_info_cache with network_info: [{"id": "a399f412-7fee-426a-9fed-b593373c0a90", "address": "fa:16:3e:cd:48:e8", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa399f412-7f", "ovs_interfaceid": "a399f412-7fee-426a-9fed-b593373c0a90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:23:58 compute-0 nova_compute[257802]: 2025-10-02 12:23:58.100 2 DEBUG oslo_concurrency.lockutils [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Releasing lock "refresh_cache-e1e08b12-a046-4fbd-a025-dff040cca766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:23:58 compute-0 nova_compute[257802]: 2025-10-02 12:23:58.103 2 DEBUG oslo_concurrency.lockutils [req-0e787e3c-b9c4-473b-ace0-5364588aa154 req-3410c0d5-8811-42e4-9364-8f277c692122 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-e1e08b12-a046-4fbd-a025-dff040cca766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:23:58 compute-0 nova_compute[257802]: 2025-10-02 12:23:58.104 2 DEBUG nova.network.neutron [req-0e787e3c-b9c4-473b-ace0-5364588aa154 req-3410c0d5-8811-42e4-9364-8f277c692122 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Refreshing network info cache for port a399f412-7fee-426a-9fed-b593373c0a90 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:23:58 compute-0 nova_compute[257802]: 2025-10-02 12:23:58.257 2 DEBUG nova.virt.libvirt.driver [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Oct 02 12:23:58 compute-0 nova_compute[257802]: 2025-10-02 12:23:58.259 2 DEBUG nova.virt.libvirt.driver [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 12:23:58 compute-0 nova_compute[257802]: 2025-10-02 12:23:58.260 2 INFO nova.virt.libvirt.driver [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Creating image(s)
Oct 02 12:23:58 compute-0 nova_compute[257802]: 2025-10-02 12:23:58.297 2 DEBUG nova.storage.rbd_utils [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] creating snapshot(nova-resize) on rbd image(e1e08b12-a046-4fbd-a025-dff040cca766_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:23:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:58.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:58 compute-0 nova_compute[257802]: 2025-10-02 12:23:58.914 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407823.9106379, 9ae72f32-b9fd-44eb-b10d-79119ad2ca85 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:23:58 compute-0 nova_compute[257802]: 2025-10-02 12:23:58.914 2 INFO nova.compute.manager [-] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] VM Stopped (Lifecycle Event)
Oct 02 12:23:58 compute-0 nova_compute[257802]: 2025-10-02 12:23:58.939 2 DEBUG nova.compute.manager [None req-2b3f7372-2a1e-4721-94c7-b2fc969b57a1 - - - - - -] [instance: 9ae72f32-b9fd-44eb-b10d-79119ad2ca85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:23:58 compute-0 nova_compute[257802]: 2025-10-02 12:23:58.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:23:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:58.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Oct 02 12:23:59 compute-0 ceph-mon[73607]: pgmap v1846: 305 pgs: 305 active+clean; 476 MiB data, 986 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 6.0 MiB/s wr, 132 op/s
Oct 02 12:23:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3300163746' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1692804411' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3779338660' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Oct 02 12:23:59 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Oct 02 12:23:59 compute-0 nova_compute[257802]: 2025-10-02 12:23:59.109 2 DEBUG nova.objects.instance [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lazy-loading 'trusted_certs' on Instance uuid e1e08b12-a046-4fbd-a025-dff040cca766 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:23:59 compute-0 nova_compute[257802]: 2025-10-02 12:23:59.269 2 DEBUG nova.virt.libvirt.driver [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:23:59 compute-0 nova_compute[257802]: 2025-10-02 12:23:59.270 2 DEBUG nova.virt.libvirt.driver [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Ensure instance console log exists: /var/lib/nova/instances/e1e08b12-a046-4fbd-a025-dff040cca766/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:23:59 compute-0 nova_compute[257802]: 2025-10-02 12:23:59.270 2 DEBUG oslo_concurrency.lockutils [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:59 compute-0 nova_compute[257802]: 2025-10-02 12:23:59.271 2 DEBUG oslo_concurrency.lockutils [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:59 compute-0 nova_compute[257802]: 2025-10-02 12:23:59.271 2 DEBUG oslo_concurrency.lockutils [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:59 compute-0 nova_compute[257802]: 2025-10-02 12:23:59.276 2 DEBUG nova.virt.libvirt.driver [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Start _get_guest_xml network_info=[{"id": "a399f412-7fee-426a-9fed-b593373c0a90", "address": "fa:16:3e:cd:48:e8", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-484493292-network", "vif_mac": "fa:16:3e:cd:48:e8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa399f412-7f", "ovs_interfaceid": "a399f412-7fee-426a-9fed-b593373c0a90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:23:59 compute-0 nova_compute[257802]: 2025-10-02 12:23:59.280 2 WARNING nova.virt.libvirt.driver [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:23:59 compute-0 nova_compute[257802]: 2025-10-02 12:23:59.284 2 DEBUG nova.virt.libvirt.host [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:23:59 compute-0 nova_compute[257802]: 2025-10-02 12:23:59.284 2 DEBUG nova.virt.libvirt.host [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:23:59 compute-0 nova_compute[257802]: 2025-10-02 12:23:59.287 2 DEBUG nova.virt.libvirt.host [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:23:59 compute-0 nova_compute[257802]: 2025-10-02 12:23:59.287 2 DEBUG nova.virt.libvirt.host [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:23:59 compute-0 nova_compute[257802]: 2025-10-02 12:23:59.288 2 DEBUG nova.virt.libvirt.driver [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:23:59 compute-0 nova_compute[257802]: 2025-10-02 12:23:59.289 2 DEBUG nova.virt.hardware [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:39Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='eb3a53f1-304b-4cb0-acc3-abffce0fb181',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:23:59 compute-0 nova_compute[257802]: 2025-10-02 12:23:59.289 2 DEBUG nova.virt.hardware [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:23:59 compute-0 nova_compute[257802]: 2025-10-02 12:23:59.290 2 DEBUG nova.virt.hardware [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:23:59 compute-0 nova_compute[257802]: 2025-10-02 12:23:59.290 2 DEBUG nova.virt.hardware [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:23:59 compute-0 nova_compute[257802]: 2025-10-02 12:23:59.290 2 DEBUG nova.virt.hardware [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:23:59 compute-0 nova_compute[257802]: 2025-10-02 12:23:59.291 2 DEBUG nova.virt.hardware [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:23:59 compute-0 nova_compute[257802]: 2025-10-02 12:23:59.291 2 DEBUG nova.virt.hardware [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:23:59 compute-0 nova_compute[257802]: 2025-10-02 12:23:59.292 2 DEBUG nova.virt.hardware [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:23:59 compute-0 nova_compute[257802]: 2025-10-02 12:23:59.292 2 DEBUG nova.virt.hardware [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:23:59 compute-0 nova_compute[257802]: 2025-10-02 12:23:59.292 2 DEBUG nova.virt.hardware [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:23:59 compute-0 nova_compute[257802]: 2025-10-02 12:23:59.293 2 DEBUG nova.virt.hardware [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:23:59 compute-0 nova_compute[257802]: 2025-10-02 12:23:59.293 2 DEBUG nova.objects.instance [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lazy-loading 'vcpu_model' on Instance uuid e1e08b12-a046-4fbd-a025-dff040cca766 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:23:59 compute-0 nova_compute[257802]: 2025-10-02 12:23:59.312 2 DEBUG oslo_concurrency.processutils [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 305 active+clean; 530 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 8.9 MiB/s wr, 159 op/s
Oct 02 12:23:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:23:59 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4007056460' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:59 compute-0 nova_compute[257802]: 2025-10-02 12:23:59.739 2 DEBUG oslo_concurrency.processutils [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:59 compute-0 nova_compute[257802]: 2025-10-02 12:23:59.795 2 DEBUG oslo_concurrency.processutils [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:24:00 compute-0 ceph-mon[73607]: osdmap e287: 3 total, 3 up, 3 in
Oct 02 12:24:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2618027857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:24:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3772221847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4007056460' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:24:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:24:00 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1358872205' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:24:00 compute-0 nova_compute[257802]: 2025-10-02 12:24:00.222 2 DEBUG oslo_concurrency.processutils [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:24:00 compute-0 nova_compute[257802]: 2025-10-02 12:24:00.224 2 DEBUG nova.virt.libvirt.vif [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:23:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-850761329',display_name='tempest-DeleteServersTestJSON-server-850761329',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-850761329',id=99,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:23:30Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1c2c11ebecb14f3188f35ea473c4ca02',ramdisk_id='',reservation_id='r-biy5z6a6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_mod
el='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-1602490521',owner_user_name='tempest-DeleteServersTestJSON-1602490521-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:23:52Z,user_data=None,user_id='a9f7faffac7240869a0196df1ddda7e5',uuid=e1e08b12-a046-4fbd-a025-dff040cca766,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a399f412-7fee-426a-9fed-b593373c0a90", "address": "fa:16:3e:cd:48:e8", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-484493292-network", "vif_mac": "fa:16:3e:cd:48:e8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa399f412-7f", "ovs_interfaceid": "a399f412-7fee-426a-9fed-b593373c0a90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:24:00 compute-0 nova_compute[257802]: 2025-10-02 12:24:00.224 2 DEBUG nova.network.os_vif_util [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converting VIF {"id": "a399f412-7fee-426a-9fed-b593373c0a90", "address": "fa:16:3e:cd:48:e8", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-484493292-network", "vif_mac": "fa:16:3e:cd:48:e8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa399f412-7f", "ovs_interfaceid": "a399f412-7fee-426a-9fed-b593373c0a90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:24:00 compute-0 nova_compute[257802]: 2025-10-02 12:24:00.225 2 DEBUG nova.network.os_vif_util [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:48:e8,bridge_name='br-int',has_traffic_filtering=True,id=a399f412-7fee-426a-9fed-b593373c0a90,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa399f412-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:24:00 compute-0 nova_compute[257802]: 2025-10-02 12:24:00.228 2 DEBUG nova.virt.libvirt.driver [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:24:00 compute-0 nova_compute[257802]:   <uuid>e1e08b12-a046-4fbd-a025-dff040cca766</uuid>
Oct 02 12:24:00 compute-0 nova_compute[257802]:   <name>instance-00000063</name>
Oct 02 12:24:00 compute-0 nova_compute[257802]:   <memory>196608</memory>
Oct 02 12:24:00 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:24:00 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:24:00 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:       <nova:name>tempest-DeleteServersTestJSON-server-850761329</nova:name>
Oct 02 12:24:00 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:23:59</nova:creationTime>
Oct 02 12:24:00 compute-0 nova_compute[257802]:       <nova:flavor name="m1.micro">
Oct 02 12:24:00 compute-0 nova_compute[257802]:         <nova:memory>192</nova:memory>
Oct 02 12:24:00 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:24:00 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:24:00 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:24:00 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:24:00 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:24:00 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:24:00 compute-0 nova_compute[257802]:         <nova:user uuid="a9f7faffac7240869a0196df1ddda7e5">tempest-DeleteServersTestJSON-1602490521-project-member</nova:user>
Oct 02 12:24:00 compute-0 nova_compute[257802]:         <nova:project uuid="1c2c11ebecb14f3188f35ea473c4ca02">tempest-DeleteServersTestJSON-1602490521</nova:project>
Oct 02 12:24:00 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:24:00 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:24:00 compute-0 nova_compute[257802]:         <nova:port uuid="a399f412-7fee-426a-9fed-b593373c0a90">
Oct 02 12:24:00 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:24:00 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:24:00 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:24:00 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <system>
Oct 02 12:24:00 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:24:00 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:24:00 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:24:00 compute-0 nova_compute[257802]:       <entry name="serial">e1e08b12-a046-4fbd-a025-dff040cca766</entry>
Oct 02 12:24:00 compute-0 nova_compute[257802]:       <entry name="uuid">e1e08b12-a046-4fbd-a025-dff040cca766</entry>
Oct 02 12:24:00 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     </system>
Oct 02 12:24:00 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:24:00 compute-0 nova_compute[257802]:   <os>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:   </os>
Oct 02 12:24:00 compute-0 nova_compute[257802]:   <features>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:   </features>
Oct 02 12:24:00 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:24:00 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:24:00 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:24:00 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/e1e08b12-a046-4fbd-a025-dff040cca766_disk">
Oct 02 12:24:00 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:       </source>
Oct 02 12:24:00 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:24:00 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:24:00 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:24:00 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/e1e08b12-a046-4fbd-a025-dff040cca766_disk.config">
Oct 02 12:24:00 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:       </source>
Oct 02 12:24:00 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:24:00 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:24:00 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:24:00 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:cd:48:e8"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:       <target dev="tapa399f412-7f"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:24:00 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/e1e08b12-a046-4fbd-a025-dff040cca766/console.log" append="off"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <video>
Oct 02 12:24:00 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     </video>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:24:00 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:24:00 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:24:00 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:24:00 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:24:00 compute-0 nova_compute[257802]: </domain>
Oct 02 12:24:00 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:24:00 compute-0 nova_compute[257802]: 2025-10-02 12:24:00.230 2 DEBUG nova.virt.libvirt.vif [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:23:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-850761329',display_name='tempest-DeleteServersTestJSON-server-850761329',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-850761329',id=99,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:23:30Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1c2c11ebecb14f3188f35ea473c4ca02',ramdisk_id='',reservation_id='r-biy5z6a6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-1602490521',owner_user_name='tempest-DeleteServersTestJSON-1602490521-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:23:52Z,user_data=None,user_id='a9f7faffac7240869a0196df1ddda7e5',uuid=e1e08b12-a046-4fbd-a025-dff040cca766,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a399f412-7fee-426a-9fed-b593373c0a90", "address": "fa:16:3e:cd:48:e8", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-484493292-network", "vif_mac": "fa:16:3e:cd:48:e8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa399f412-7f", "ovs_interfaceid": "a399f412-7fee-426a-9fed-b593373c0a90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:24:00 compute-0 nova_compute[257802]: 2025-10-02 12:24:00.230 2 DEBUG nova.network.os_vif_util [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converting VIF {"id": "a399f412-7fee-426a-9fed-b593373c0a90", "address": "fa:16:3e:cd:48:e8", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-484493292-network", "vif_mac": "fa:16:3e:cd:48:e8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa399f412-7f", "ovs_interfaceid": "a399f412-7fee-426a-9fed-b593373c0a90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:24:00 compute-0 nova_compute[257802]: 2025-10-02 12:24:00.230 2 DEBUG nova.network.os_vif_util [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:48:e8,bridge_name='br-int',has_traffic_filtering=True,id=a399f412-7fee-426a-9fed-b593373c0a90,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa399f412-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:24:00 compute-0 nova_compute[257802]: 2025-10-02 12:24:00.231 2 DEBUG os_vif [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:48:e8,bridge_name='br-int',has_traffic_filtering=True,id=a399f412-7fee-426a-9fed-b593373c0a90,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa399f412-7f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:24:00 compute-0 nova_compute[257802]: 2025-10-02 12:24:00.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:00 compute-0 nova_compute[257802]: 2025-10-02 12:24:00.232 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:24:00 compute-0 nova_compute[257802]: 2025-10-02 12:24:00.232 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:24:00 compute-0 nova_compute[257802]: 2025-10-02 12:24:00.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:00 compute-0 nova_compute[257802]: 2025-10-02 12:24:00.235 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa399f412-7f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:24:00 compute-0 nova_compute[257802]: 2025-10-02 12:24:00.236 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa399f412-7f, col_values=(('external_ids', {'iface-id': 'a399f412-7fee-426a-9fed-b593373c0a90', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cd:48:e8', 'vm-uuid': 'e1e08b12-a046-4fbd-a025-dff040cca766'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:24:00 compute-0 nova_compute[257802]: 2025-10-02 12:24:00.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:00 compute-0 NetworkManager[44987]: <info>  [1759407840.2878] manager: (tapa399f412-7f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/199)
Oct 02 12:24:00 compute-0 nova_compute[257802]: 2025-10-02 12:24:00.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:24:00 compute-0 nova_compute[257802]: 2025-10-02 12:24:00.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:00 compute-0 nova_compute[257802]: 2025-10-02 12:24:00.297 2 INFO os_vif [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:48:e8,bridge_name='br-int',has_traffic_filtering=True,id=a399f412-7fee-426a-9fed-b593373c0a90,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa399f412-7f')
Oct 02 12:24:00 compute-0 nova_compute[257802]: 2025-10-02 12:24:00.368 2 DEBUG nova.virt.libvirt.driver [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:24:00 compute-0 nova_compute[257802]: 2025-10-02 12:24:00.369 2 DEBUG nova.virt.libvirt.driver [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:24:00 compute-0 nova_compute[257802]: 2025-10-02 12:24:00.369 2 DEBUG nova.virt.libvirt.driver [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] No VIF found with MAC fa:16:3e:cd:48:e8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:24:00 compute-0 nova_compute[257802]: 2025-10-02 12:24:00.370 2 INFO nova.virt.libvirt.driver [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Using config drive
Oct 02 12:24:00 compute-0 kernel: tapa399f412-7f: entered promiscuous mode
Oct 02 12:24:00 compute-0 NetworkManager[44987]: <info>  [1759407840.4915] manager: (tapa399f412-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/200)
Oct 02 12:24:00 compute-0 ovn_controller[148183]: 2025-10-02T12:24:00Z|00425|binding|INFO|Claiming lport a399f412-7fee-426a-9fed-b593373c0a90 for this chassis.
Oct 02 12:24:00 compute-0 ovn_controller[148183]: 2025-10-02T12:24:00Z|00426|binding|INFO|a399f412-7fee-426a-9fed-b593373c0a90: Claiming fa:16:3e:cd:48:e8 10.100.0.3
Oct 02 12:24:00 compute-0 nova_compute[257802]: 2025-10-02 12:24:00.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:00.504 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:48:e8 10.100.0.3'], port_security=['fa:16:3e:cd:48:e8 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e1e08b12-a046-4fbd-a025-dff040cca766', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7754c79a-cca5-48c7-9169-831eaad23ccc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c2c11ebecb14f3188f35ea473c4ca02', 'neutron:revision_number': '6', 'neutron:security_group_ids': '3c0d053f-a096-4f8c-8162-5ef19e29b5d7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=45b5774e-2213-45dd-ab74-f2a3868d167c, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=a399f412-7fee-426a-9fed-b593373c0a90) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:00.505 158261 INFO neutron.agent.ovn.metadata.agent [-] Port a399f412-7fee-426a-9fed-b593373c0a90 in datapath 7754c79a-cca5-48c7-9169-831eaad23ccc bound to our chassis
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:00.506 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7754c79a-cca5-48c7-9169-831eaad23ccc
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:00.519 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[62628b89-12ac-4d94-b9ee-04eafe0e79cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:00.520 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7754c79a-c1 in ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:00.525 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7754c79a-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:00.525 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e8156332-a135-476e-a43d-547c37010a1e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:00 compute-0 systemd-udevd[317852]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:00.527 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[95687263-5d76-498b-8b8a-6dc7c12ac919]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:00 compute-0 ovn_controller[148183]: 2025-10-02T12:24:00Z|00427|binding|INFO|Setting lport a399f412-7fee-426a-9fed-b593373c0a90 ovn-installed in OVS
Oct 02 12:24:00 compute-0 ovn_controller[148183]: 2025-10-02T12:24:00Z|00428|binding|INFO|Setting lport a399f412-7fee-426a-9fed-b593373c0a90 up in Southbound
Oct 02 12:24:00 compute-0 nova_compute[257802]: 2025-10-02 12:24:00.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:00 compute-0 nova_compute[257802]: 2025-10-02 12:24:00.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:00.541 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[dc45b2c9-5d43-4820-a1da-e95d76633c0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:00.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:00 compute-0 systemd-machined[211836]: New machine qemu-49-instance-00000063.
Oct 02 12:24:00 compute-0 NetworkManager[44987]: <info>  [1759407840.5549] device (tapa399f412-7f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:24:00 compute-0 NetworkManager[44987]: <info>  [1759407840.5555] device (tapa399f412-7f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:24:00 compute-0 systemd[1]: Started Virtual Machine qemu-49-instance-00000063.
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:00.566 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a6979632-dff4-42f1-9c17-c30b62e3b025]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:00.592 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[80ff8dc2-d997-492c-8cff-81bf0b434ecd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:00.597 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f29d26d0-8a86-41cf-be76-912814b724be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:00 compute-0 NetworkManager[44987]: <info>  [1759407840.5989] manager: (tap7754c79a-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/201)
Oct 02 12:24:00 compute-0 systemd-udevd[317858]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:00.634 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[93fee1ce-4707-4a31-852b-774057f5f701]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:00.638 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[b6a2937e-8bdd-46b7-ac1c-8b173615596b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:00 compute-0 NetworkManager[44987]: <info>  [1759407840.6630] device (tap7754c79a-c0): carrier: link connected
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:00.670 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[7413d336-b7eb-4343-a799-3c059bf49a8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:00.688 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c7f80ed0-8334-4204-8ebe-091b0e7e3c69]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7754c79a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:b0:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 590828, 'reachable_time': 41402, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317886, 'error': None, 'target': 'ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:00.702 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[28d4a648-9885-435f-8dcd-d379909bee9e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:b018'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 590828, 'tstamp': 590828}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317887, 'error': None, 'target': 'ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:00.716 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[13f2d63b-5ca1-4904-9aad-975a05b38c1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7754c79a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:b0:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 590828, 'reachable_time': 41402, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317888, 'error': None, 'target': 'ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:00.755 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3017b49b-32c2-4d27-8324-6f69aab6d74a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:00.824 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9b8dcb19-08e8-4621-a0ad-57d61fe0ff11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:00.826 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7754c79a-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:00.826 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:00.826 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7754c79a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:24:00 compute-0 NetworkManager[44987]: <info>  [1759407840.8285] manager: (tap7754c79a-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/202)
Oct 02 12:24:00 compute-0 nova_compute[257802]: 2025-10-02 12:24:00.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:00 compute-0 kernel: tap7754c79a-c0: entered promiscuous mode
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:00.831 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7754c79a-c0, col_values=(('external_ids', {'iface-id': 'b1ce5636-6283-470c-ab5e-aac212c1256d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:24:00 compute-0 nova_compute[257802]: 2025-10-02 12:24:00.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:00 compute-0 ovn_controller[148183]: 2025-10-02T12:24:00Z|00429|binding|INFO|Releasing lport b1ce5636-6283-470c-ab5e-aac212c1256d from this chassis (sb_readonly=0)
Oct 02 12:24:00 compute-0 nova_compute[257802]: 2025-10-02 12:24:00.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:00.852 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7754c79a-cca5-48c7-9169-831eaad23ccc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7754c79a-cca5-48c7-9169-831eaad23ccc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:00.853 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[562ec508-3015-41cb-90bd-6d4e7fa429d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:00.854 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-7754c79a-cca5-48c7-9169-831eaad23ccc
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/7754c79a-cca5-48c7-9169-831eaad23ccc.pid.haproxy
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 7754c79a-cca5-48c7-9169-831eaad23ccc
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:24:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:00.854 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc', 'env', 'PROCESS_TAG=haproxy-7754c79a-cca5-48c7-9169-831eaad23ccc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7754c79a-cca5-48c7-9169-831eaad23ccc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:24:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:01.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:01 compute-0 ceph-mon[73607]: pgmap v1848: 305 pgs: 305 active+clean; 530 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 8.9 MiB/s wr, 159 op/s
Oct 02 12:24:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1358872205' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:24:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/76348305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:01 compute-0 podman[317963]: 2025-10-02 12:24:01.25438293 +0000 UTC m=+0.085340592 container create 1914a50a084e23cc21b7679cffce1d70ddaccefd2177a70e66d754f5332a7be1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 12:24:01 compute-0 podman[317963]: 2025-10-02 12:24:01.190566976 +0000 UTC m=+0.021524658 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:24:01 compute-0 systemd[1]: Started libpod-conmon-1914a50a084e23cc21b7679cffce1d70ddaccefd2177a70e66d754f5332a7be1.scope.
Oct 02 12:24:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:24:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91282673f8865f4a900078baeb04bab1b87393ede70b2d45baae73ada963c7c1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:24:01 compute-0 podman[317963]: 2025-10-02 12:24:01.361945225 +0000 UTC m=+0.192902897 container init 1914a50a084e23cc21b7679cffce1d70ddaccefd2177a70e66d754f5332a7be1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:24:01 compute-0 podman[317963]: 2025-10-02 12:24:01.368149786 +0000 UTC m=+0.199107438 container start 1914a50a084e23cc21b7679cffce1d70ddaccefd2177a70e66d754f5332a7be1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:24:01 compute-0 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[317978]: [NOTICE]   (317983) : New worker (317989) forked
Oct 02 12:24:01 compute-0 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[317978]: [NOTICE]   (317983) : Loading success.
Oct 02 12:24:01 compute-0 sudo[317984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:24:01 compute-0 sudo[317984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:01 compute-0 sudo[317984]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1849: 305 pgs: 305 active+clean; 530 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 8.9 MiB/s wr, 159 op/s
Oct 02 12:24:01 compute-0 sudo[318019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:24:01 compute-0 sudo[318019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:01 compute-0 sudo[318019]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:01 compute-0 nova_compute[257802]: 2025-10-02 12:24:01.557 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407841.5567315, e1e08b12-a046-4fbd-a025-dff040cca766 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:24:01 compute-0 nova_compute[257802]: 2025-10-02 12:24:01.557 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] VM Resumed (Lifecycle Event)
Oct 02 12:24:01 compute-0 nova_compute[257802]: 2025-10-02 12:24:01.559 2 DEBUG nova.compute.manager [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:24:01 compute-0 nova_compute[257802]: 2025-10-02 12:24:01.562 2 INFO nova.virt.libvirt.driver [-] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Instance running successfully.
Oct 02 12:24:01 compute-0 virtqemud[257280]: argument unsupported: QEMU guest agent is not configured
Oct 02 12:24:01 compute-0 nova_compute[257802]: 2025-10-02 12:24:01.565 2 DEBUG nova.virt.libvirt.guest [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 12:24:01 compute-0 nova_compute[257802]: 2025-10-02 12:24:01.565 2 DEBUG nova.virt.libvirt.driver [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Oct 02 12:24:01 compute-0 nova_compute[257802]: 2025-10-02 12:24:01.606 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:24:01 compute-0 nova_compute[257802]: 2025-10-02 12:24:01.613 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:24:01 compute-0 nova_compute[257802]: 2025-10-02 12:24:01.674 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] During sync_power_state the instance has a pending task (resize_finish). Skip.
Oct 02 12:24:01 compute-0 nova_compute[257802]: 2025-10-02 12:24:01.675 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407841.5588179, e1e08b12-a046-4fbd-a025-dff040cca766 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:24:01 compute-0 nova_compute[257802]: 2025-10-02 12:24:01.675 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] VM Started (Lifecycle Event)
Oct 02 12:24:01 compute-0 nova_compute[257802]: 2025-10-02 12:24:01.701 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:24:01 compute-0 nova_compute[257802]: 2025-10-02 12:24:01.707 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:24:01 compute-0 nova_compute[257802]: 2025-10-02 12:24:01.920 2 DEBUG nova.network.neutron [req-0e787e3c-b9c4-473b-ace0-5364588aa154 req-3410c0d5-8811-42e4-9364-8f277c692122 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Updated VIF entry in instance network info cache for port a399f412-7fee-426a-9fed-b593373c0a90. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:24:01 compute-0 nova_compute[257802]: 2025-10-02 12:24:01.921 2 DEBUG nova.network.neutron [req-0e787e3c-b9c4-473b-ace0-5364588aa154 req-3410c0d5-8811-42e4-9364-8f277c692122 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Updating instance_info_cache with network_info: [{"id": "a399f412-7fee-426a-9fed-b593373c0a90", "address": "fa:16:3e:cd:48:e8", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa399f412-7f", "ovs_interfaceid": "a399f412-7fee-426a-9fed-b593373c0a90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:24:01 compute-0 nova_compute[257802]: 2025-10-02 12:24:01.945 2 DEBUG oslo_concurrency.lockutils [req-0e787e3c-b9c4-473b-ace0-5364588aa154 req-3410c0d5-8811-42e4-9364-8f277c692122 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-e1e08b12-a046-4fbd-a025-dff040cca766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:24:01 compute-0 nova_compute[257802]: 2025-10-02 12:24:01.945 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-e1e08b12-a046-4fbd-a025-dff040cca766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:24:01 compute-0 nova_compute[257802]: 2025-10-02 12:24:01.945 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:24:01 compute-0 nova_compute[257802]: 2025-10-02 12:24:01.946 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid e1e08b12-a046-4fbd-a025-dff040cca766 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:24:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Oct 02 12:24:02 compute-0 nova_compute[257802]: 2025-10-02 12:24:02.247 2 DEBUG nova.compute.manager [req-aea282db-b4a7-4c90-91f1-1636ebf71dc7 req-6d3f617c-38be-47f2-b449-bc4501b41bbd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Received event network-vif-plugged-a399f412-7fee-426a-9fed-b593373c0a90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:24:02 compute-0 nova_compute[257802]: 2025-10-02 12:24:02.248 2 DEBUG oslo_concurrency.lockutils [req-aea282db-b4a7-4c90-91f1-1636ebf71dc7 req-6d3f617c-38be-47f2-b449-bc4501b41bbd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:02 compute-0 nova_compute[257802]: 2025-10-02 12:24:02.248 2 DEBUG oslo_concurrency.lockutils [req-aea282db-b4a7-4c90-91f1-1636ebf71dc7 req-6d3f617c-38be-47f2-b449-bc4501b41bbd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:02 compute-0 nova_compute[257802]: 2025-10-02 12:24:02.248 2 DEBUG oslo_concurrency.lockutils [req-aea282db-b4a7-4c90-91f1-1636ebf71dc7 req-6d3f617c-38be-47f2-b449-bc4501b41bbd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:02 compute-0 nova_compute[257802]: 2025-10-02 12:24:02.248 2 DEBUG nova.compute.manager [req-aea282db-b4a7-4c90-91f1-1636ebf71dc7 req-6d3f617c-38be-47f2-b449-bc4501b41bbd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] No waiting events found dispatching network-vif-plugged-a399f412-7fee-426a-9fed-b593373c0a90 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:24:02 compute-0 nova_compute[257802]: 2025-10-02 12:24:02.248 2 WARNING nova.compute.manager [req-aea282db-b4a7-4c90-91f1-1636ebf71dc7 req-6d3f617c-38be-47f2-b449-bc4501b41bbd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Received unexpected event network-vif-plugged-a399f412-7fee-426a-9fed-b593373c0a90 for instance with vm_state resized and task_state None.
Oct 02 12:24:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Oct 02 12:24:02 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Oct 02 12:24:02 compute-0 nova_compute[257802]: 2025-10-02 12:24:02.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:02.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:02 compute-0 podman[318045]: 2025-10-02 12:24:02.933788497 +0000 UTC m=+0.067481584 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:24:02 compute-0 podman[318047]: 2025-10-02 12:24:02.95474086 +0000 UTC m=+0.076456623 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:24:02 compute-0 podman[318046]: 2025-10-02 12:24:02.956520304 +0000 UTC m=+0.091158643 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:24:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:03.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:03 compute-0 ceph-mon[73607]: pgmap v1849: 305 pgs: 305 active+clean; 530 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 8.9 MiB/s wr, 159 op/s
Oct 02 12:24:03 compute-0 ceph-mon[73607]: osdmap e288: 3 total, 3 up, 3 in
Oct 02 12:24:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 305 active+clean; 530 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 9.4 MiB/s rd, 7.6 MiB/s wr, 352 op/s
Oct 02 12:24:04 compute-0 nova_compute[257802]: 2025-10-02 12:24:04.205 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Updating instance_info_cache with network_info: [{"id": "a399f412-7fee-426a-9fed-b593373c0a90", "address": "fa:16:3e:cd:48:e8", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa399f412-7f", "ovs_interfaceid": "a399f412-7fee-426a-9fed-b593373c0a90", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:24:04 compute-0 nova_compute[257802]: 2025-10-02 12:24:04.235 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-e1e08b12-a046-4fbd-a025-dff040cca766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:24:04 compute-0 nova_compute[257802]: 2025-10-02 12:24:04.236 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:24:04 compute-0 nova_compute[257802]: 2025-10-02 12:24:04.237 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:04 compute-0 nova_compute[257802]: 2025-10-02 12:24:04.258 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:04 compute-0 nova_compute[257802]: 2025-10-02 12:24:04.258 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:04 compute-0 nova_compute[257802]: 2025-10-02 12:24:04.259 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:04 compute-0 nova_compute[257802]: 2025-10-02 12:24:04.259 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:24:04 compute-0 nova_compute[257802]: 2025-10-02 12:24:04.259 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:24:04 compute-0 ceph-mon[73607]: pgmap v1851: 305 pgs: 305 active+clean; 530 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 9.4 MiB/s rd, 7.6 MiB/s wr, 352 op/s
Oct 02 12:24:04 compute-0 nova_compute[257802]: 2025-10-02 12:24:04.528 2 DEBUG nova.compute.manager [req-d52f5a9f-1479-484b-b477-3238f5b5dd39 req-31b26d69-095f-475e-af1f-64bf469b3686 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Received event network-vif-plugged-a399f412-7fee-426a-9fed-b593373c0a90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:24:04 compute-0 nova_compute[257802]: 2025-10-02 12:24:04.529 2 DEBUG oslo_concurrency.lockutils [req-d52f5a9f-1479-484b-b477-3238f5b5dd39 req-31b26d69-095f-475e-af1f-64bf469b3686 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:04 compute-0 nova_compute[257802]: 2025-10-02 12:24:04.530 2 DEBUG oslo_concurrency.lockutils [req-d52f5a9f-1479-484b-b477-3238f5b5dd39 req-31b26d69-095f-475e-af1f-64bf469b3686 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:04 compute-0 nova_compute[257802]: 2025-10-02 12:24:04.530 2 DEBUG oslo_concurrency.lockutils [req-d52f5a9f-1479-484b-b477-3238f5b5dd39 req-31b26d69-095f-475e-af1f-64bf469b3686 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:04 compute-0 nova_compute[257802]: 2025-10-02 12:24:04.530 2 DEBUG nova.compute.manager [req-d52f5a9f-1479-484b-b477-3238f5b5dd39 req-31b26d69-095f-475e-af1f-64bf469b3686 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] No waiting events found dispatching network-vif-plugged-a399f412-7fee-426a-9fed-b593373c0a90 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:24:04 compute-0 nova_compute[257802]: 2025-10-02 12:24:04.531 2 WARNING nova.compute.manager [req-d52f5a9f-1479-484b-b477-3238f5b5dd39 req-31b26d69-095f-475e-af1f-64bf469b3686 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Received unexpected event network-vif-plugged-a399f412-7fee-426a-9fed-b593373c0a90 for instance with vm_state resized and task_state deleting.
Oct 02 12:24:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:04.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:24:04 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/651899732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:04 compute-0 nova_compute[257802]: 2025-10-02 12:24:04.719 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:24:04 compute-0 nova_compute[257802]: 2025-10-02 12:24:04.779 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:24:04 compute-0 nova_compute[257802]: 2025-10-02 12:24:04.780 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:24:04 compute-0 nova_compute[257802]: 2025-10-02 12:24:04.962 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:24:04 compute-0 nova_compute[257802]: 2025-10-02 12:24:04.963 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4324MB free_disk=20.810211181640625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:24:04 compute-0 nova_compute[257802]: 2025-10-02 12:24:04.964 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:04 compute-0 nova_compute[257802]: 2025-10-02 12:24:04.964 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:05.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:05 compute-0 nova_compute[257802]: 2025-10-02 12:24:05.012 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Applying migration context for instance e1e08b12-a046-4fbd-a025-dff040cca766 as it has an incoming, in-progress migration c0e29430-3f1f-4f18-bd83-d892eb2d6811. Migration status is finished _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950
Oct 02 12:24:05 compute-0 nova_compute[257802]: 2025-10-02 12:24:05.013 2 INFO nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Updating resource usage from migration c0e29430-3f1f-4f18-bd83-d892eb2d6811
Oct 02 12:24:05 compute-0 nova_compute[257802]: 2025-10-02 12:24:05.094 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance e1e08b12-a046-4fbd-a025-dff040cca766 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 192, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:24:05 compute-0 nova_compute[257802]: 2025-10-02 12:24:05.095 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:24:05 compute-0 nova_compute[257802]: 2025-10-02 12:24:05.095 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=704MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:24:05 compute-0 nova_compute[257802]: 2025-10-02 12:24:05.114 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing inventories for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:24:05 compute-0 nova_compute[257802]: 2025-10-02 12:24:05.186 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating ProviderTree inventory for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:24:05 compute-0 nova_compute[257802]: 2025-10-02 12:24:05.186 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating inventory in ProviderTree for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:24:05 compute-0 nova_compute[257802]: 2025-10-02 12:24:05.204 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing aggregate associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:24:05 compute-0 nova_compute[257802]: 2025-10-02 12:24:05.234 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing trait associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:24:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/651899732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:05 compute-0 nova_compute[257802]: 2025-10-02 12:24:05.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:05 compute-0 nova_compute[257802]: 2025-10-02 12:24:05.294 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:24:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 504 MiB data, 1007 MiB used, 20 GiB / 21 GiB avail; 9.7 MiB/s rd, 4.0 MiB/s wr, 371 op/s
Oct 02 12:24:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:24:05 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1901020911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:05 compute-0 nova_compute[257802]: 2025-10-02 12:24:05.743 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:24:05 compute-0 nova_compute[257802]: 2025-10-02 12:24:05.749 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:24:05 compute-0 nova_compute[257802]: 2025-10-02 12:24:05.768 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:24:05 compute-0 nova_compute[257802]: 2025-10-02 12:24:05.800 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:24:05 compute-0 nova_compute[257802]: 2025-10-02 12:24:05.801 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:05 compute-0 nova_compute[257802]: 2025-10-02 12:24:05.801 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:05 compute-0 nova_compute[257802]: 2025-10-02 12:24:05.801 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:24:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Oct 02 12:24:06 compute-0 ceph-mon[73607]: pgmap v1852: 305 pgs: 305 active+clean; 504 MiB data, 1007 MiB used, 20 GiB / 21 GiB avail; 9.7 MiB/s rd, 4.0 MiB/s wr, 371 op/s
Oct 02 12:24:06 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1901020911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Oct 02 12:24:06 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Oct 02 12:24:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:06.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:06 compute-0 nova_compute[257802]: 2025-10-02 12:24:06.786 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:06 compute-0 nova_compute[257802]: 2025-10-02 12:24:06.806 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:06 compute-0 nova_compute[257802]: 2025-10-02 12:24:06.987 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:07.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:07 compute-0 nova_compute[257802]: 2025-10-02 12:24:07.233 2 DEBUG oslo_concurrency.lockutils [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "e1e08b12-a046-4fbd-a025-dff040cca766" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:07 compute-0 nova_compute[257802]: 2025-10-02 12:24:07.234 2 DEBUG oslo_concurrency.lockutils [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:07 compute-0 nova_compute[257802]: 2025-10-02 12:24:07.235 2 DEBUG oslo_concurrency.lockutils [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:07 compute-0 nova_compute[257802]: 2025-10-02 12:24:07.235 2 DEBUG oslo_concurrency.lockutils [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:07 compute-0 nova_compute[257802]: 2025-10-02 12:24:07.236 2 DEBUG oslo_concurrency.lockutils [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:07 compute-0 nova_compute[257802]: 2025-10-02 12:24:07.238 2 INFO nova.compute.manager [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Terminating instance
Oct 02 12:24:07 compute-0 nova_compute[257802]: 2025-10-02 12:24:07.240 2 DEBUG nova.compute.manager [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:24:07 compute-0 kernel: tapa399f412-7f (unregistering): left promiscuous mode
Oct 02 12:24:07 compute-0 NetworkManager[44987]: <info>  [1759407847.3015] device (tapa399f412-7f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:24:07 compute-0 ovn_controller[148183]: 2025-10-02T12:24:07Z|00430|binding|INFO|Releasing lport a399f412-7fee-426a-9fed-b593373c0a90 from this chassis (sb_readonly=0)
Oct 02 12:24:07 compute-0 ovn_controller[148183]: 2025-10-02T12:24:07Z|00431|binding|INFO|Setting lport a399f412-7fee-426a-9fed-b593373c0a90 down in Southbound
Oct 02 12:24:07 compute-0 ovn_controller[148183]: 2025-10-02T12:24:07Z|00432|binding|INFO|Removing iface tapa399f412-7f ovn-installed in OVS
Oct 02 12:24:07 compute-0 nova_compute[257802]: 2025-10-02 12:24:07.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:07.328 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:48:e8 10.100.0.3'], port_security=['fa:16:3e:cd:48:e8 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e1e08b12-a046-4fbd-a025-dff040cca766', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7754c79a-cca5-48c7-9169-831eaad23ccc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c2c11ebecb14f3188f35ea473c4ca02', 'neutron:revision_number': '8', 'neutron:security_group_ids': '3c0d053f-a096-4f8c-8162-5ef19e29b5d7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=45b5774e-2213-45dd-ab74-f2a3868d167c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=a399f412-7fee-426a-9fed-b593373c0a90) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:24:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:07.330 158261 INFO neutron.agent.ovn.metadata.agent [-] Port a399f412-7fee-426a-9fed-b593373c0a90 in datapath 7754c79a-cca5-48c7-9169-831eaad23ccc unbound from our chassis
Oct 02 12:24:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:07.331 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7754c79a-cca5-48c7-9169-831eaad23ccc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:24:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:07.332 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cccaf841-f13f-4a68-b5f7-07ecf8d1589d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:07.333 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc namespace which is not needed anymore
Oct 02 12:24:07 compute-0 ceph-mon[73607]: osdmap e289: 3 total, 3 up, 3 in
Oct 02 12:24:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2470768981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:07 compute-0 nova_compute[257802]: 2025-10-02 12:24:07.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:07 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d00000063.scope: Deactivated successfully.
Oct 02 12:24:07 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d00000063.scope: Consumed 6.663s CPU time.
Oct 02 12:24:07 compute-0 systemd-machined[211836]: Machine qemu-49-instance-00000063 terminated.
Oct 02 12:24:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 480 MiB data, 995 MiB used, 20 GiB / 21 GiB avail; 9.1 MiB/s rd, 65 KiB/s wr, 422 op/s
Oct 02 12:24:07 compute-0 nova_compute[257802]: 2025-10-02 12:24:07.485 2 INFO nova.virt.libvirt.driver [-] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Instance destroyed successfully.
Oct 02 12:24:07 compute-0 nova_compute[257802]: 2025-10-02 12:24:07.486 2 DEBUG nova.objects.instance [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lazy-loading 'resources' on Instance uuid e1e08b12-a046-4fbd-a025-dff040cca766 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:24:07 compute-0 nova_compute[257802]: 2025-10-02 12:24:07.503 2 DEBUG nova.virt.libvirt.vif [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:23:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-850761329',display_name='tempest-DeleteServersTestJSON-server-850761329',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-850761329',id=99,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:24:01Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1c2c11ebecb14f3188f35ea473c4ca02',ramdisk_id='',reservation_id='r-biy5z6a6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-1602490521',owner_user_name='tempest-DeleteServersTestJSON-1602490521-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:24:01Z,user_data=None,user_id='a9f7faffac7240869a0196df1ddda7e5',uuid=e1e08b12-a046-4fbd-a025-dff040cca766,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "a399f412-7fee-426a-9fed-b593373c0a90", "address": "fa:16:3e:cd:48:e8", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa399f412-7f", "ovs_interfaceid": "a399f412-7fee-426a-9fed-b593373c0a90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:24:07 compute-0 nova_compute[257802]: 2025-10-02 12:24:07.503 2 DEBUG nova.network.os_vif_util [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converting VIF {"id": "a399f412-7fee-426a-9fed-b593373c0a90", "address": "fa:16:3e:cd:48:e8", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa399f412-7f", "ovs_interfaceid": "a399f412-7fee-426a-9fed-b593373c0a90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:24:07 compute-0 nova_compute[257802]: 2025-10-02 12:24:07.504 2 DEBUG nova.network.os_vif_util [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:48:e8,bridge_name='br-int',has_traffic_filtering=True,id=a399f412-7fee-426a-9fed-b593373c0a90,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa399f412-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:24:07 compute-0 nova_compute[257802]: 2025-10-02 12:24:07.504 2 DEBUG os_vif [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:48:e8,bridge_name='br-int',has_traffic_filtering=True,id=a399f412-7fee-426a-9fed-b593373c0a90,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa399f412-7f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:24:07 compute-0 nova_compute[257802]: 2025-10-02 12:24:07.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:07 compute-0 nova_compute[257802]: 2025-10-02 12:24:07.506 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa399f412-7f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:24:07 compute-0 nova_compute[257802]: 2025-10-02 12:24:07.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:07 compute-0 nova_compute[257802]: 2025-10-02 12:24:07.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:24:07 compute-0 nova_compute[257802]: 2025-10-02 12:24:07.512 2 INFO os_vif [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:48:e8,bridge_name='br-int',has_traffic_filtering=True,id=a399f412-7fee-426a-9fed-b593373c0a90,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa399f412-7f')
Oct 02 12:24:07 compute-0 nova_compute[257802]: 2025-10-02 12:24:07.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:07 compute-0 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[317978]: [NOTICE]   (317983) : haproxy version is 2.8.14-c23fe91
Oct 02 12:24:07 compute-0 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[317978]: [NOTICE]   (317983) : path to executable is /usr/sbin/haproxy
Oct 02 12:24:07 compute-0 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[317978]: [WARNING]  (317983) : Exiting Master process...
Oct 02 12:24:07 compute-0 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[317978]: [WARNING]  (317983) : Exiting Master process...
Oct 02 12:24:07 compute-0 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[317978]: [ALERT]    (317983) : Current worker (317989) exited with code 143 (Terminated)
Oct 02 12:24:07 compute-0 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[317978]: [WARNING]  (317983) : All workers exited. Exiting... (0)
Oct 02 12:24:07 compute-0 systemd[1]: libpod-1914a50a084e23cc21b7679cffce1d70ddaccefd2177a70e66d754f5332a7be1.scope: Deactivated successfully.
Oct 02 12:24:07 compute-0 podman[318173]: 2025-10-02 12:24:07.605888321 +0000 UTC m=+0.196005282 container died 1914a50a084e23cc21b7679cffce1d70ddaccefd2177a70e66d754f5332a7be1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:24:07 compute-0 nova_compute[257802]: 2025-10-02 12:24:07.836 2 DEBUG nova.compute.manager [req-72a6ad6f-eac8-4ea4-948d-a955a8d0ae91 req-242e0b6b-ed90-4a9a-8868-6ff3616bd010 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Received event network-vif-unplugged-a399f412-7fee-426a-9fed-b593373c0a90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:24:07 compute-0 nova_compute[257802]: 2025-10-02 12:24:07.836 2 DEBUG oslo_concurrency.lockutils [req-72a6ad6f-eac8-4ea4-948d-a955a8d0ae91 req-242e0b6b-ed90-4a9a-8868-6ff3616bd010 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:07 compute-0 nova_compute[257802]: 2025-10-02 12:24:07.837 2 DEBUG oslo_concurrency.lockutils [req-72a6ad6f-eac8-4ea4-948d-a955a8d0ae91 req-242e0b6b-ed90-4a9a-8868-6ff3616bd010 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:07 compute-0 nova_compute[257802]: 2025-10-02 12:24:07.837 2 DEBUG oslo_concurrency.lockutils [req-72a6ad6f-eac8-4ea4-948d-a955a8d0ae91 req-242e0b6b-ed90-4a9a-8868-6ff3616bd010 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1914a50a084e23cc21b7679cffce1d70ddaccefd2177a70e66d754f5332a7be1-userdata-shm.mount: Deactivated successfully.
Oct 02 12:24:07 compute-0 nova_compute[257802]: 2025-10-02 12:24:07.837 2 DEBUG nova.compute.manager [req-72a6ad6f-eac8-4ea4-948d-a955a8d0ae91 req-242e0b6b-ed90-4a9a-8868-6ff3616bd010 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] No waiting events found dispatching network-vif-unplugged-a399f412-7fee-426a-9fed-b593373c0a90 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:24:07 compute-0 nova_compute[257802]: 2025-10-02 12:24:07.840 2 WARNING nova.compute.manager [req-72a6ad6f-eac8-4ea4-948d-a955a8d0ae91 req-242e0b6b-ed90-4a9a-8868-6ff3616bd010 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Received unexpected event network-vif-unplugged-a399f412-7fee-426a-9fed-b593373c0a90 for instance with vm_state active and task_state None.
Oct 02 12:24:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-91282673f8865f4a900078baeb04bab1b87393ede70b2d45baae73ada963c7c1-merged.mount: Deactivated successfully.
Oct 02 12:24:08 compute-0 podman[318173]: 2025-10-02 12:24:08.079646347 +0000 UTC m=+0.669763308 container cleanup 1914a50a084e23cc21b7679cffce1d70ddaccefd2177a70e66d754f5332a7be1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:24:08 compute-0 systemd[1]: libpod-conmon-1914a50a084e23cc21b7679cffce1d70ddaccefd2177a70e66d754f5332a7be1.scope: Deactivated successfully.
Oct 02 12:24:08 compute-0 ceph-mon[73607]: pgmap v1854: 305 pgs: 305 active+clean; 480 MiB data, 995 MiB used, 20 GiB / 21 GiB avail; 9.1 MiB/s rd, 65 KiB/s wr, 422 op/s
Oct 02 12:24:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:08.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:08 compute-0 podman[318233]: 2025-10-02 12:24:08.701063538 +0000 UTC m=+0.586087177 container remove 1914a50a084e23cc21b7679cffce1d70ddaccefd2177a70e66d754f5332a7be1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 12:24:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:08.712 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7115d807-f9bb-411c-a59d-df6f161102a4]: (4, ('Thu Oct  2 12:24:07 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc (1914a50a084e23cc21b7679cffce1d70ddaccefd2177a70e66d754f5332a7be1)\n1914a50a084e23cc21b7679cffce1d70ddaccefd2177a70e66d754f5332a7be1\nThu Oct  2 12:24:08 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc (1914a50a084e23cc21b7679cffce1d70ddaccefd2177a70e66d754f5332a7be1)\n1914a50a084e23cc21b7679cffce1d70ddaccefd2177a70e66d754f5332a7be1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:08.715 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5209abb9-3b52-47e5-8dfc-8494e8dffde1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:08.716 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7754c79a-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:24:08 compute-0 nova_compute[257802]: 2025-10-02 12:24:08.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:08 compute-0 kernel: tap7754c79a-c0: left promiscuous mode
Oct 02 12:24:08 compute-0 nova_compute[257802]: 2025-10-02 12:24:08.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:08.760 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ee5e85ce-16d0-47f4-8018-9852bf35828c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:08.797 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[db298588-e6c4-40d3-af60-b31fb00428e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:08.798 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[86502664-e035-4bf1-8ccb-bbcb4998c9d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:08.819 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c2616be1-138d-431a-b9fb-66fb1fb8b435]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 590821, 'reachable_time': 18856, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318273, 'error': None, 'target': 'ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:08.822 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:24:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:08.822 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[18864ba9-40c5-4e44-ab1c-9a5adfde1609]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:08 compute-0 systemd[1]: run-netns-ovnmeta\x2d7754c79a\x2dcca5\x2d48c7\x2d9169\x2d831eaad23ccc.mount: Deactivated successfully.
Oct 02 12:24:08 compute-0 podman[318234]: 2025-10-02 12:24:08.834744333 +0000 UTC m=+0.706946218 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 12:24:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:09.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 305 active+clean; 433 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 65 KiB/s wr, 509 op/s
Oct 02 12:24:09 compute-0 nova_compute[257802]: 2025-10-02 12:24:09.471 2 INFO nova.virt.libvirt.driver [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Deleting instance files /var/lib/nova/instances/e1e08b12-a046-4fbd-a025-dff040cca766_del
Oct 02 12:24:09 compute-0 nova_compute[257802]: 2025-10-02 12:24:09.472 2 INFO nova.virt.libvirt.driver [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Deletion of /var/lib/nova/instances/e1e08b12-a046-4fbd-a025-dff040cca766_del complete
Oct 02 12:24:09 compute-0 nova_compute[257802]: 2025-10-02 12:24:09.580 2 INFO nova.compute.manager [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Took 2.34 seconds to destroy the instance on the hypervisor.
Oct 02 12:24:09 compute-0 nova_compute[257802]: 2025-10-02 12:24:09.581 2 DEBUG oslo.service.loopingcall [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:24:09 compute-0 nova_compute[257802]: 2025-10-02 12:24:09.581 2 DEBUG nova.compute.manager [-] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:24:09 compute-0 nova_compute[257802]: 2025-10-02 12:24:09.581 2 DEBUG nova.network.neutron [-] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:24:09 compute-0 nova_compute[257802]: 2025-10-02 12:24:09.972 2 DEBUG nova.compute.manager [req-1ac140b9-6bab-4b35-8383-b1a5e6e53525 req-2d949028-e610-434e-87c0-8056b4b329ee d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Received event network-vif-plugged-a399f412-7fee-426a-9fed-b593373c0a90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:24:09 compute-0 nova_compute[257802]: 2025-10-02 12:24:09.973 2 DEBUG oslo_concurrency.lockutils [req-1ac140b9-6bab-4b35-8383-b1a5e6e53525 req-2d949028-e610-434e-87c0-8056b4b329ee d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:09 compute-0 nova_compute[257802]: 2025-10-02 12:24:09.973 2 DEBUG oslo_concurrency.lockutils [req-1ac140b9-6bab-4b35-8383-b1a5e6e53525 req-2d949028-e610-434e-87c0-8056b4b329ee d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:09 compute-0 nova_compute[257802]: 2025-10-02 12:24:09.974 2 DEBUG oslo_concurrency.lockutils [req-1ac140b9-6bab-4b35-8383-b1a5e6e53525 req-2d949028-e610-434e-87c0-8056b4b329ee d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:09 compute-0 nova_compute[257802]: 2025-10-02 12:24:09.974 2 DEBUG nova.compute.manager [req-1ac140b9-6bab-4b35-8383-b1a5e6e53525 req-2d949028-e610-434e-87c0-8056b4b329ee d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] No waiting events found dispatching network-vif-plugged-a399f412-7fee-426a-9fed-b593373c0a90 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:24:09 compute-0 nova_compute[257802]: 2025-10-02 12:24:09.974 2 WARNING nova.compute.manager [req-1ac140b9-6bab-4b35-8383-b1a5e6e53525 req-2d949028-e610-434e-87c0-8056b4b329ee d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Received unexpected event network-vif-plugged-a399f412-7fee-426a-9fed-b593373c0a90 for instance with vm_state active and task_state None.
Oct 02 12:24:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:10.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:10 compute-0 ceph-mon[73607]: pgmap v1855: 305 pgs: 305 active+clean; 433 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 65 KiB/s wr, 509 op/s
Oct 02 12:24:10 compute-0 nova_compute[257802]: 2025-10-02 12:24:10.871 2 DEBUG nova.network.neutron [-] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:24:10 compute-0 nova_compute[257802]: 2025-10-02 12:24:10.923 2 INFO nova.compute.manager [-] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Took 1.34 seconds to deallocate network for instance.
Oct 02 12:24:11 compute-0 nova_compute[257802]: 2025-10-02 12:24:11.012 2 DEBUG oslo_concurrency.lockutils [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:11 compute-0 nova_compute[257802]: 2025-10-02 12:24:11.013 2 DEBUG oslo_concurrency.lockutils [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:11.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:11 compute-0 nova_compute[257802]: 2025-10-02 12:24:11.064 2 DEBUG oslo_concurrency.processutils [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:24:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 433 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 9.9 MiB/s rd, 41 KiB/s wr, 407 op/s
Oct 02 12:24:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:24:11 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3873132068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:11 compute-0 nova_compute[257802]: 2025-10-02 12:24:11.535 2 DEBUG oslo_concurrency.processutils [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:24:11 compute-0 nova_compute[257802]: 2025-10-02 12:24:11.544 2 DEBUG nova.compute.provider_tree [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:24:11 compute-0 nova_compute[257802]: 2025-10-02 12:24:11.568 2 DEBUG nova.scheduler.client.report [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:24:11 compute-0 nova_compute[257802]: 2025-10-02 12:24:11.603 2 DEBUG oslo_concurrency.lockutils [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:11 compute-0 sudo[318301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:24:11 compute-0 sudo[318301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:11 compute-0 sudo[318301]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:11 compute-0 nova_compute[257802]: 2025-10-02 12:24:11.648 2 INFO nova.scheduler.client.report [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Deleted allocations for instance e1e08b12-a046-4fbd-a025-dff040cca766
Oct 02 12:24:11 compute-0 sudo[318326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:24:11 compute-0 sudo[318326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:11 compute-0 sudo[318326]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:11 compute-0 nova_compute[257802]: 2025-10-02 12:24:11.736 2 DEBUG oslo_concurrency.lockutils [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.502s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3873132068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:11 compute-0 sudo[318351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:24:11 compute-0 sudo[318351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:11 compute-0 sudo[318351]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:11 compute-0 sudo[318376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:24:11 compute-0 sudo[318376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Oct 02 12:24:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Oct 02 12:24:12 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Oct 02 12:24:12 compute-0 sudo[318376]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:12 compute-0 nova_compute[257802]: 2025-10-02 12:24:12.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:24:12 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:24:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:24:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:24:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:24:12 compute-0 nova_compute[257802]: 2025-10-02 12:24:12.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:24:12 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 046afd56-6ed7-4861-8feb-cbe1155bb759 does not exist
Oct 02 12:24:12 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 5c402aa1-ed7a-4950-bc76-d076b6efa161 does not exist
Oct 02 12:24:12 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 5d2775b1-2f29-41ae-bc5e-b9fdfcfb9376 does not exist
Oct 02 12:24:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:24:12 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:24:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:24:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:24:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:12.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:24:12 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:24:12 compute-0 sudo[318433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:24:12 compute-0 sudo[318433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:12 compute-0 sudo[318433]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:12 compute-0 sudo[318458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:24:12 compute-0 sudo[318458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:12 compute-0 sudo[318458]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:24:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:24:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:24:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:24:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:24:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:24:12 compute-0 sudo[318483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:24:12 compute-0 sudo[318483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:12 compute-0 sudo[318483]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:12 compute-0 sudo[318508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:24:12 compute-0 sudo[318508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:13.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:13 compute-0 ceph-mon[73607]: pgmap v1856: 305 pgs: 305 active+clean; 433 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 9.9 MiB/s rd, 41 KiB/s wr, 407 op/s
Oct 02 12:24:13 compute-0 ceph-mon[73607]: osdmap e290: 3 total, 3 up, 3 in
Oct 02 12:24:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:24:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:24:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:24:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:24:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:24:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:24:13 compute-0 nova_compute[257802]: 2025-10-02 12:24:13.068 2 DEBUG nova.compute.manager [req-cc9627d1-b5d5-4726-b4d2-5d125bdffe20 req-bf4ecbcf-2ff3-4dcd-ad07-3ca20cf2e3f7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Received event network-vif-deleted-a399f412-7fee-426a-9fed-b593373c0a90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:24:13 compute-0 podman[318573]: 2025-10-02 12:24:13.084558032 +0000 UTC m=+0.047059823 container create d27ad9979b8bda8b890282fdaf8a691ec195d70b027a6ccd1744a27ab79678b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:24:13 compute-0 systemd[1]: Started libpod-conmon-d27ad9979b8bda8b890282fdaf8a691ec195d70b027a6ccd1744a27ab79678b5.scope.
Oct 02 12:24:13 compute-0 podman[318573]: 2025-10-02 12:24:13.057991421 +0000 UTC m=+0.020493242 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:24:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:24:13 compute-0 podman[318573]: 2025-10-02 12:24:13.208142019 +0000 UTC m=+0.170643810 container init d27ad9979b8bda8b890282fdaf8a691ec195d70b027a6ccd1744a27ab79678b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_dewdney, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 12:24:13 compute-0 podman[318573]: 2025-10-02 12:24:13.216772931 +0000 UTC m=+0.179274732 container start d27ad9979b8bda8b890282fdaf8a691ec195d70b027a6ccd1744a27ab79678b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_dewdney, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:24:13 compute-0 elated_dewdney[318589]: 167 167
Oct 02 12:24:13 compute-0 systemd[1]: libpod-d27ad9979b8bda8b890282fdaf8a691ec195d70b027a6ccd1744a27ab79678b5.scope: Deactivated successfully.
Oct 02 12:24:13 compute-0 conmon[318589]: conmon d27ad9979b8bda8b8902 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d27ad9979b8bda8b890282fdaf8a691ec195d70b027a6ccd1744a27ab79678b5.scope/container/memory.events
Oct 02 12:24:13 compute-0 podman[318573]: 2025-10-02 12:24:13.227821692 +0000 UTC m=+0.190323513 container attach d27ad9979b8bda8b890282fdaf8a691ec195d70b027a6ccd1744a27ab79678b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_dewdney, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 12:24:13 compute-0 podman[318573]: 2025-10-02 12:24:13.229041271 +0000 UTC m=+0.191543092 container died d27ad9979b8bda8b890282fdaf8a691ec195d70b027a6ccd1744a27ab79678b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_dewdney, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:24:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bc25b8ad2c27a7ab549f05cb18ab30e8e4193b8eb0d106114d8ee7b8dd1e1f6-merged.mount: Deactivated successfully.
Oct 02 12:24:13 compute-0 podman[318573]: 2025-10-02 12:24:13.308504358 +0000 UTC m=+0.271006149 container remove d27ad9979b8bda8b890282fdaf8a691ec195d70b027a6ccd1744a27ab79678b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_dewdney, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:24:13 compute-0 systemd[1]: libpod-conmon-d27ad9979b8bda8b890282fdaf8a691ec195d70b027a6ccd1744a27ab79678b5.scope: Deactivated successfully.
Oct 02 12:24:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 372 MiB data, 944 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 24 KiB/s wr, 265 op/s
Oct 02 12:24:13 compute-0 podman[318614]: 2025-10-02 12:24:13.536278777 +0000 UTC m=+0.057723974 container create 4cc35711c251e40989aa9442b8dfc3644bce1397e033f8b13bb1fa128969d047 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_joliot, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:24:13 compute-0 systemd[1]: Started libpod-conmon-4cc35711c251e40989aa9442b8dfc3644bce1397e033f8b13bb1fa128969d047.scope.
Oct 02 12:24:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:24:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fce1c038095e71813350aa02f6a704e3d4484cc9de2ed74bffeffd74718b3c61/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:24:13 compute-0 podman[318614]: 2025-10-02 12:24:13.518337228 +0000 UTC m=+0.039782435 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:24:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fce1c038095e71813350aa02f6a704e3d4484cc9de2ed74bffeffd74718b3c61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:24:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fce1c038095e71813350aa02f6a704e3d4484cc9de2ed74bffeffd74718b3c61/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:24:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fce1c038095e71813350aa02f6a704e3d4484cc9de2ed74bffeffd74718b3c61/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:24:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fce1c038095e71813350aa02f6a704e3d4484cc9de2ed74bffeffd74718b3c61/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:24:13 compute-0 podman[318614]: 2025-10-02 12:24:13.631571722 +0000 UTC m=+0.153016929 container init 4cc35711c251e40989aa9442b8dfc3644bce1397e033f8b13bb1fa128969d047 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_joliot, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:24:13 compute-0 podman[318614]: 2025-10-02 12:24:13.63927796 +0000 UTC m=+0.160723147 container start 4cc35711c251e40989aa9442b8dfc3644bce1397e033f8b13bb1fa128969d047 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_joliot, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:24:13 compute-0 podman[318614]: 2025-10-02 12:24:13.642839898 +0000 UTC m=+0.164285095 container attach 4cc35711c251e40989aa9442b8dfc3644bce1397e033f8b13bb1fa128969d047 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_joliot, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:24:14 compute-0 clever_joliot[318630]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:24:14 compute-0 clever_joliot[318630]: --> relative data size: 1.0
Oct 02 12:24:14 compute-0 clever_joliot[318630]: --> All data devices are unavailable
Oct 02 12:24:14 compute-0 systemd[1]: libpod-4cc35711c251e40989aa9442b8dfc3644bce1397e033f8b13bb1fa128969d047.scope: Deactivated successfully.
Oct 02 12:24:14 compute-0 podman[318614]: 2025-10-02 12:24:14.493200867 +0000 UTC m=+1.014646054 container died 4cc35711c251e40989aa9442b8dfc3644bce1397e033f8b13bb1fa128969d047 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_joliot, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 12:24:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:14.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-fce1c038095e71813350aa02f6a704e3d4484cc9de2ed74bffeffd74718b3c61-merged.mount: Deactivated successfully.
Oct 02 12:24:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:15.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:15 compute-0 ceph-mon[73607]: pgmap v1858: 305 pgs: 305 active+clean; 372 MiB data, 944 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 24 KiB/s wr, 265 op/s
Oct 02 12:24:15 compute-0 podman[318614]: 2025-10-02 12:24:15.414769171 +0000 UTC m=+1.936214358 container remove 4cc35711c251e40989aa9442b8dfc3644bce1397e033f8b13bb1fa128969d047 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 12:24:15 compute-0 systemd[1]: libpod-conmon-4cc35711c251e40989aa9442b8dfc3644bce1397e033f8b13bb1fa128969d047.scope: Deactivated successfully.
Oct 02 12:24:15 compute-0 sudo[318508]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 305 active+clean; 372 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 21 KiB/s wr, 233 op/s
Oct 02 12:24:15 compute-0 sudo[318658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:24:15 compute-0 sudo[318658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:15 compute-0 sudo[318658]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:15 compute-0 sudo[318683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:24:15 compute-0 sudo[318683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:15 compute-0 sudo[318683]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:15 compute-0 sudo[318709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:24:15 compute-0 sudo[318709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:15 compute-0 sudo[318709]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:15 compute-0 sudo[318734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:24:15 compute-0 sudo[318734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:16 compute-0 podman[318799]: 2025-10-02 12:24:16.053415365 +0000 UTC m=+0.041607230 container create cc8ff6e446388e73021a1cb5fd668c592897efd9152a45eb409791c900172ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_driscoll, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 12:24:16 compute-0 systemd[1]: Started libpod-conmon-cc8ff6e446388e73021a1cb5fd668c592897efd9152a45eb409791c900172ab1.scope.
Oct 02 12:24:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:24:16 compute-0 podman[318799]: 2025-10-02 12:24:16.0352361 +0000 UTC m=+0.023427985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:24:16 compute-0 podman[318799]: 2025-10-02 12:24:16.132582514 +0000 UTC m=+0.120774399 container init cc8ff6e446388e73021a1cb5fd668c592897efd9152a45eb409791c900172ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_driscoll, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:24:16 compute-0 podman[318799]: 2025-10-02 12:24:16.140098059 +0000 UTC m=+0.128289924 container start cc8ff6e446388e73021a1cb5fd668c592897efd9152a45eb409791c900172ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_driscoll, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 12:24:16 compute-0 podman[318799]: 2025-10-02 12:24:16.145034619 +0000 UTC m=+0.133226484 container attach cc8ff6e446388e73021a1cb5fd668c592897efd9152a45eb409791c900172ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_driscoll, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 12:24:16 compute-0 relaxed_driscoll[318815]: 167 167
Oct 02 12:24:16 compute-0 podman[318799]: 2025-10-02 12:24:16.148261468 +0000 UTC m=+0.136453333 container died cc8ff6e446388e73021a1cb5fd668c592897efd9152a45eb409791c900172ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 12:24:16 compute-0 systemd[1]: libpod-cc8ff6e446388e73021a1cb5fd668c592897efd9152a45eb409791c900172ab1.scope: Deactivated successfully.
Oct 02 12:24:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d3d37af9e2869bbe075cc313d2cfb170de63684d1e9510fd21c82d9eceb7155-merged.mount: Deactivated successfully.
Oct 02 12:24:16 compute-0 podman[318799]: 2025-10-02 12:24:16.189819956 +0000 UTC m=+0.178011831 container remove cc8ff6e446388e73021a1cb5fd668c592897efd9152a45eb409791c900172ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 12:24:16 compute-0 systemd[1]: libpod-conmon-cc8ff6e446388e73021a1cb5fd668c592897efd9152a45eb409791c900172ab1.scope: Deactivated successfully.
Oct 02 12:24:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/10777761' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:24:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/10777761' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:24:16 compute-0 ceph-mon[73607]: pgmap v1859: 305 pgs: 305 active+clean; 372 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 21 KiB/s wr, 233 op/s
Oct 02 12:24:16 compute-0 podman[318840]: 2025-10-02 12:24:16.377234507 +0000 UTC m=+0.048835157 container create 5225a95b663404948b9998a2a78e8758d059eb7c961bf6cf1f875c45aa7021c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cray, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Oct 02 12:24:16 compute-0 systemd[1]: Started libpod-conmon-5225a95b663404948b9998a2a78e8758d059eb7c961bf6cf1f875c45aa7021c2.scope.
Oct 02 12:24:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:24:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daf983973acc52d5bae6304035c41dff7d8234da84ca4fb370fdd0fbd0850cf3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:24:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daf983973acc52d5bae6304035c41dff7d8234da84ca4fb370fdd0fbd0850cf3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:24:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daf983973acc52d5bae6304035c41dff7d8234da84ca4fb370fdd0fbd0850cf3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:24:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daf983973acc52d5bae6304035c41dff7d8234da84ca4fb370fdd0fbd0850cf3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:24:16 compute-0 podman[318840]: 2025-10-02 12:24:16.353051345 +0000 UTC m=+0.024652015 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:24:16 compute-0 podman[318840]: 2025-10-02 12:24:16.457638676 +0000 UTC m=+0.129239346 container init 5225a95b663404948b9998a2a78e8758d059eb7c961bf6cf1f875c45aa7021c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:24:16 compute-0 podman[318840]: 2025-10-02 12:24:16.471319321 +0000 UTC m=+0.142919961 container start 5225a95b663404948b9998a2a78e8758d059eb7c961bf6cf1f875c45aa7021c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cray, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 12:24:16 compute-0 podman[318840]: 2025-10-02 12:24:16.475886963 +0000 UTC m=+0.147487633 container attach 5225a95b663404948b9998a2a78e8758d059eb7c961bf6cf1f875c45aa7021c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cray, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 12:24:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:16.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:17.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:17 compute-0 competent_cray[318856]: {
Oct 02 12:24:17 compute-0 competent_cray[318856]:     "1": [
Oct 02 12:24:17 compute-0 competent_cray[318856]:         {
Oct 02 12:24:17 compute-0 competent_cray[318856]:             "devices": [
Oct 02 12:24:17 compute-0 competent_cray[318856]:                 "/dev/loop3"
Oct 02 12:24:17 compute-0 competent_cray[318856]:             ],
Oct 02 12:24:17 compute-0 competent_cray[318856]:             "lv_name": "ceph_lv0",
Oct 02 12:24:17 compute-0 competent_cray[318856]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:24:17 compute-0 competent_cray[318856]:             "lv_size": "7511998464",
Oct 02 12:24:17 compute-0 competent_cray[318856]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:24:17 compute-0 competent_cray[318856]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:24:17 compute-0 competent_cray[318856]:             "name": "ceph_lv0",
Oct 02 12:24:17 compute-0 competent_cray[318856]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:24:17 compute-0 competent_cray[318856]:             "tags": {
Oct 02 12:24:17 compute-0 competent_cray[318856]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:24:17 compute-0 competent_cray[318856]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:24:17 compute-0 competent_cray[318856]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:24:17 compute-0 competent_cray[318856]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:24:17 compute-0 competent_cray[318856]:                 "ceph.cluster_name": "ceph",
Oct 02 12:24:17 compute-0 competent_cray[318856]:                 "ceph.crush_device_class": "",
Oct 02 12:24:17 compute-0 competent_cray[318856]:                 "ceph.encrypted": "0",
Oct 02 12:24:17 compute-0 competent_cray[318856]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:24:17 compute-0 competent_cray[318856]:                 "ceph.osd_id": "1",
Oct 02 12:24:17 compute-0 competent_cray[318856]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:24:17 compute-0 competent_cray[318856]:                 "ceph.type": "block",
Oct 02 12:24:17 compute-0 competent_cray[318856]:                 "ceph.vdo": "0"
Oct 02 12:24:17 compute-0 competent_cray[318856]:             },
Oct 02 12:24:17 compute-0 competent_cray[318856]:             "type": "block",
Oct 02 12:24:17 compute-0 competent_cray[318856]:             "vg_name": "ceph_vg0"
Oct 02 12:24:17 compute-0 competent_cray[318856]:         }
Oct 02 12:24:17 compute-0 competent_cray[318856]:     ]
Oct 02 12:24:17 compute-0 competent_cray[318856]: }
Oct 02 12:24:17 compute-0 systemd[1]: libpod-5225a95b663404948b9998a2a78e8758d059eb7c961bf6cf1f875c45aa7021c2.scope: Deactivated successfully.
Oct 02 12:24:17 compute-0 podman[318840]: 2025-10-02 12:24:17.244957063 +0000 UTC m=+0.916557713 container died 5225a95b663404948b9998a2a78e8758d059eb7c961bf6cf1f875c45aa7021c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:24:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-daf983973acc52d5bae6304035c41dff7d8234da84ca4fb370fdd0fbd0850cf3-merged.mount: Deactivated successfully.
Oct 02 12:24:17 compute-0 podman[318840]: 2025-10-02 12:24:17.309244207 +0000 UTC m=+0.980844867 container remove 5225a95b663404948b9998a2a78e8758d059eb7c961bf6cf1f875c45aa7021c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 12:24:17 compute-0 systemd[1]: libpod-conmon-5225a95b663404948b9998a2a78e8758d059eb7c961bf6cf1f875c45aa7021c2.scope: Deactivated successfully.
Oct 02 12:24:17 compute-0 sudo[318734]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:17 compute-0 sudo[318880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:24:17 compute-0 sudo[318880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:17 compute-0 sudo[318880]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:17 compute-0 sudo[318905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:24:17 compute-0 sudo[318905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:17 compute-0 sudo[318905]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 305 active+clean; 386 MiB data, 944 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.3 MiB/s wr, 160 op/s
Oct 02 12:24:17 compute-0 sudo[318930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:24:17 compute-0 sudo[318930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:17 compute-0 sudo[318930]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:17 compute-0 nova_compute[257802]: 2025-10-02 12:24:17.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:17 compute-0 nova_compute[257802]: 2025-10-02 12:24:17.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:17 compute-0 sudo[318955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:24:17 compute-0 sudo[318955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:17 compute-0 podman[319021]: 2025-10-02 12:24:17.913961369 +0000 UTC m=+0.038449292 container create 40d8e9e75fa45cef2258305f7109022431e7dcf143dd730c4106507d24bda6d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_nash, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:24:17 compute-0 systemd[1]: Started libpod-conmon-40d8e9e75fa45cef2258305f7109022431e7dcf143dd730c4106507d24bda6d2.scope.
Oct 02 12:24:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:24:17 compute-0 podman[319021]: 2025-10-02 12:24:17.990046674 +0000 UTC m=+0.114534617 container init 40d8e9e75fa45cef2258305f7109022431e7dcf143dd730c4106507d24bda6d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:24:17 compute-0 podman[319021]: 2025-10-02 12:24:17.896594234 +0000 UTC m=+0.021082167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:24:17 compute-0 podman[319021]: 2025-10-02 12:24:17.996278536 +0000 UTC m=+0.120766449 container start 40d8e9e75fa45cef2258305f7109022431e7dcf143dd730c4106507d24bda6d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_nash, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 12:24:17 compute-0 podman[319021]: 2025-10-02 12:24:17.99970766 +0000 UTC m=+0.124195603 container attach 40d8e9e75fa45cef2258305f7109022431e7dcf143dd730c4106507d24bda6d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:24:18 compute-0 zen_nash[319037]: 167 167
Oct 02 12:24:18 compute-0 systemd[1]: libpod-40d8e9e75fa45cef2258305f7109022431e7dcf143dd730c4106507d24bda6d2.scope: Deactivated successfully.
Oct 02 12:24:18 compute-0 podman[319021]: 2025-10-02 12:24:18.000658883 +0000 UTC m=+0.125146796 container died 40d8e9e75fa45cef2258305f7109022431e7dcf143dd730c4106507d24bda6d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 12:24:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2f84c9e4d4dab9ee93bfde89bbcc27f2decdf81cb119648f5d418420cc55a2e-merged.mount: Deactivated successfully.
Oct 02 12:24:18 compute-0 podman[319021]: 2025-10-02 12:24:18.040209232 +0000 UTC m=+0.164697145 container remove 40d8e9e75fa45cef2258305f7109022431e7dcf143dd730c4106507d24bda6d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 12:24:18 compute-0 systemd[1]: libpod-conmon-40d8e9e75fa45cef2258305f7109022431e7dcf143dd730c4106507d24bda6d2.scope: Deactivated successfully.
Oct 02 12:24:18 compute-0 podman[319061]: 2025-10-02 12:24:18.223254506 +0000 UTC m=+0.047482025 container create 89d693423f94db04327a8f1974cb5e7262dc62b134de59a778caf2b90ef2d2f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hugle, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:24:18 compute-0 systemd[1]: Started libpod-conmon-89d693423f94db04327a8f1974cb5e7262dc62b134de59a778caf2b90ef2d2f4.scope.
Oct 02 12:24:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:24:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36371149c439a69d8f1d9e39eb4c34b40a1e558f81a2d6aa99bedd399f8e23de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:24:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36371149c439a69d8f1d9e39eb4c34b40a1e558f81a2d6aa99bedd399f8e23de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:24:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36371149c439a69d8f1d9e39eb4c34b40a1e558f81a2d6aa99bedd399f8e23de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:24:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36371149c439a69d8f1d9e39eb4c34b40a1e558f81a2d6aa99bedd399f8e23de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:24:18 compute-0 podman[319061]: 2025-10-02 12:24:18.293668771 +0000 UTC m=+0.117896310 container init 89d693423f94db04327a8f1974cb5e7262dc62b134de59a778caf2b90ef2d2f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hugle, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:24:18 compute-0 podman[319061]: 2025-10-02 12:24:18.201456792 +0000 UTC m=+0.025684361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:24:18 compute-0 podman[319061]: 2025-10-02 12:24:18.302961388 +0000 UTC m=+0.127188907 container start 89d693423f94db04327a8f1974cb5e7262dc62b134de59a778caf2b90ef2d2f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hugle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:24:18 compute-0 podman[319061]: 2025-10-02 12:24:18.314761808 +0000 UTC m=+0.138989347 container attach 89d693423f94db04327a8f1974cb5e7262dc62b134de59a778caf2b90ef2d2f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hugle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 12:24:18 compute-0 ceph-mon[73607]: pgmap v1860: 305 pgs: 305 active+clean; 386 MiB data, 944 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.3 MiB/s wr, 160 op/s
Oct 02 12:24:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:18.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:19.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:19 compute-0 interesting_hugle[319077]: {
Oct 02 12:24:19 compute-0 interesting_hugle[319077]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:24:19 compute-0 interesting_hugle[319077]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:24:19 compute-0 interesting_hugle[319077]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:24:19 compute-0 interesting_hugle[319077]:         "osd_id": 1,
Oct 02 12:24:19 compute-0 interesting_hugle[319077]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:24:19 compute-0 interesting_hugle[319077]:         "type": "bluestore"
Oct 02 12:24:19 compute-0 interesting_hugle[319077]:     }
Oct 02 12:24:19 compute-0 interesting_hugle[319077]: }
Oct 02 12:24:19 compute-0 systemd[1]: libpod-89d693423f94db04327a8f1974cb5e7262dc62b134de59a778caf2b90ef2d2f4.scope: Deactivated successfully.
Oct 02 12:24:19 compute-0 podman[319061]: 2025-10-02 12:24:19.132640412 +0000 UTC m=+0.956867941 container died 89d693423f94db04327a8f1974cb5e7262dc62b134de59a778caf2b90ef2d2f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 12:24:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-36371149c439a69d8f1d9e39eb4c34b40a1e558f81a2d6aa99bedd399f8e23de-merged.mount: Deactivated successfully.
Oct 02 12:24:19 compute-0 podman[319061]: 2025-10-02 12:24:19.190521239 +0000 UTC m=+1.014748758 container remove 89d693423f94db04327a8f1974cb5e7262dc62b134de59a778caf2b90ef2d2f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:24:19 compute-0 systemd[1]: libpod-conmon-89d693423f94db04327a8f1974cb5e7262dc62b134de59a778caf2b90ef2d2f4.scope: Deactivated successfully.
Oct 02 12:24:19 compute-0 sudo[318955]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:24:19 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:24:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:24:19 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:24:19 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev afc85b80-a082-4a8b-9375-2e015d632f51 does not exist
Oct 02 12:24:19 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 21c573eb-93cc-4587-a532-14cb6a1977f3 does not exist
Oct 02 12:24:19 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 185d8dd8-7552-4abc-9e2e-95632138cc9f does not exist
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:24:19.264402) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407859264455, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 526, "num_deletes": 253, "total_data_size": 503287, "memory_usage": 514168, "flush_reason": "Manual Compaction"}
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407859271964, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 497678, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41217, "largest_seqno": 41742, "table_properties": {"data_size": 494701, "index_size": 949, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7504, "raw_average_key_size": 19, "raw_value_size": 488510, "raw_average_value_size": 1295, "num_data_blocks": 41, "num_entries": 377, "num_filter_entries": 377, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407838, "oldest_key_time": 1759407838, "file_creation_time": 1759407859, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 7596 microseconds, and 2190 cpu microseconds.
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:24:19.271998) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 497678 bytes OK
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:24:19.272016) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:24:19.275368) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:24:19.275382) EVENT_LOG_v1 {"time_micros": 1759407859275377, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:24:19.275396) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 500240, prev total WAL file size 500240, number of live WAL files 2.
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:24:19.275772) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(486KB)], [89(10MB)]
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407859275813, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 11311838, "oldest_snapshot_seqno": -1}
Oct 02 12:24:19 compute-0 nova_compute[257802]: 2025-10-02 12:24:19.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:19 compute-0 sudo[319111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:24:19 compute-0 sudo[319111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:19 compute-0 sudo[319111]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6693 keys, 9424450 bytes, temperature: kUnknown
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407859328522, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 9424450, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9380178, "index_size": 26432, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16773, "raw_key_size": 173766, "raw_average_key_size": 25, "raw_value_size": 9260754, "raw_average_value_size": 1383, "num_data_blocks": 1042, "num_entries": 6693, "num_filter_entries": 6693, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759407859, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:24:19.328726) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 9424450 bytes
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:24:19.330049) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 214.3 rd, 178.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.3 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(41.7) write-amplify(18.9) OK, records in: 7215, records dropped: 522 output_compression: NoCompression
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:24:19.330064) EVENT_LOG_v1 {"time_micros": 1759407859330057, "job": 52, "event": "compaction_finished", "compaction_time_micros": 52783, "compaction_time_cpu_micros": 19984, "output_level": 6, "num_output_files": 1, "total_output_size": 9424450, "num_input_records": 7215, "num_output_records": 6693, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407859330339, "job": 52, "event": "table_file_deletion", "file_number": 91}
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407859331965, "job": 52, "event": "table_file_deletion", "file_number": 89}
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:24:19.275721) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:24:19.332010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:24:19.332014) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:24:19.332015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:24:19.332017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:24:19 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:24:19.332018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:24:19 compute-0 sudo[319136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:24:19 compute-0 sudo[319136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:19 compute-0 sudo[319136]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 430 MiB data, 980 MiB used, 20 GiB / 21 GiB avail; 749 KiB/s rd, 5.0 MiB/s wr, 184 op/s
Oct 02 12:24:19 compute-0 nova_compute[257802]: 2025-10-02 12:24:19.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:20 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:24:20 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:24:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:24:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:20.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:24:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:21.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:21 compute-0 ceph-mon[73607]: pgmap v1861: 305 pgs: 305 active+clean; 430 MiB data, 980 MiB used, 20 GiB / 21 GiB avail; 749 KiB/s rd, 5.0 MiB/s wr, 184 op/s
Oct 02 12:24:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 305 active+clean; 430 MiB data, 980 MiB used, 20 GiB / 21 GiB avail; 749 KiB/s rd, 5.0 MiB/s wr, 184 op/s
Oct 02 12:24:21 compute-0 sudo[319163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:24:21 compute-0 sudo[319163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:21 compute-0 sudo[319163]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:21 compute-0 sudo[319188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:24:21 compute-0 sudo[319188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:21 compute-0 sudo[319188]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:22 compute-0 ceph-mon[73607]: pgmap v1862: 305 pgs: 305 active+clean; 430 MiB data, 980 MiB used, 20 GiB / 21 GiB avail; 749 KiB/s rd, 5.0 MiB/s wr, 184 op/s
Oct 02 12:24:22 compute-0 nova_compute[257802]: 2025-10-02 12:24:22.485 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407847.4838212, e1e08b12-a046-4fbd-a025-dff040cca766 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:24:22 compute-0 nova_compute[257802]: 2025-10-02 12:24:22.485 2 INFO nova.compute.manager [-] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] VM Stopped (Lifecycle Event)
Oct 02 12:24:22 compute-0 nova_compute[257802]: 2025-10-02 12:24:22.503 2 DEBUG nova.compute.manager [None req-600e0916-4e12-498a-9b5a-ccbd76164e8e - - - - - -] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:24:22 compute-0 nova_compute[257802]: 2025-10-02 12:24:22.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:22 compute-0 nova_compute[257802]: 2025-10-02 12:24:22.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:22.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:23.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 305 active+clean; 438 MiB data, 987 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.5 MiB/s wr, 205 op/s
Oct 02 12:24:24 compute-0 ceph-mon[73607]: pgmap v1863: 305 pgs: 305 active+clean; 438 MiB data, 987 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.5 MiB/s wr, 205 op/s
Oct 02 12:24:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:24:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:24.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:24:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:25.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 438 MiB data, 987 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.3 MiB/s wr, 189 op/s
Oct 02 12:24:25 compute-0 nova_compute[257802]: 2025-10-02 12:24:25.977 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:26.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:26 compute-0 ceph-mon[73607]: pgmap v1864: 305 pgs: 305 active+clean; 438 MiB data, 987 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.3 MiB/s wr, 189 op/s
Oct 02 12:24:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:26.941 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:26.941 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:26.942 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:27.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1865: 305 pgs: 305 active+clean; 438 MiB data, 987 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.3 MiB/s wr, 188 op/s
Oct 02 12:24:27 compute-0 nova_compute[257802]: 2025-10-02 12:24:27.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:27 compute-0 nova_compute[257802]: 2025-10-02 12:24:27.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:28.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:28 compute-0 ceph-mon[73607]: pgmap v1865: 305 pgs: 305 active+clean; 438 MiB data, 987 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.3 MiB/s wr, 188 op/s
Oct 02 12:24:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3511363623' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:24:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3511363623' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:24:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:29.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:24:29 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2188206189' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:24:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:24:29 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2188206189' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:24:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 440 MiB data, 987 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.3 MiB/s wr, 144 op/s
Oct 02 12:24:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2188206189' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:24:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2188206189' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:24:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:30.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:30 compute-0 ceph-mon[73607]: pgmap v1866: 305 pgs: 305 active+clean; 440 MiB data, 987 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.3 MiB/s wr, 144 op/s
Oct 02 12:24:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:31.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 440 MiB data, 987 MiB used, 20 GiB / 21 GiB avail; 579 KiB/s rd, 118 KiB/s wr, 66 op/s
Oct 02 12:24:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:32 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:24:32 compute-0 nova_compute[257802]: 2025-10-02 12:24:32.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:32 compute-0 nova_compute[257802]: 2025-10-02 12:24:32.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:32.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:32 compute-0 ceph-mon[73607]: pgmap v1867: 305 pgs: 305 active+clean; 440 MiB data, 987 MiB used, 20 GiB / 21 GiB avail; 579 KiB/s rd, 118 KiB/s wr, 66 op/s
Oct 02 12:24:32 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2167894051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:33.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:33 compute-0 nova_compute[257802]: 2025-10-02 12:24:33.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:33.164 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:24:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:33.164 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:24:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 372 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 616 KiB/s rd, 124 KiB/s wr, 117 op/s
Oct 02 12:24:33 compute-0 podman[319221]: 2025-10-02 12:24:33.92480518 +0000 UTC m=+0.061655261 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 02 12:24:33 compute-0 podman[319222]: 2025-10-02 12:24:33.92725252 +0000 UTC m=+0.062805339 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 12:24:33 compute-0 podman[319220]: 2025-10-02 12:24:33.949814202 +0000 UTC m=+0.086456718 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:24:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:34.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:34 compute-0 ceph-mon[73607]: pgmap v1868: 305 pgs: 305 active+clean; 372 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 616 KiB/s rd, 124 KiB/s wr, 117 op/s
Oct 02 12:24:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:35.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 358 MiB data, 944 MiB used, 20 GiB / 21 GiB avail; 102 KiB/s rd, 41 KiB/s wr, 62 op/s
Oct 02 12:24:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:36.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:36 compute-0 ceph-mon[73607]: pgmap v1869: 305 pgs: 305 active+clean; 358 MiB data, 944 MiB used, 20 GiB / 21 GiB avail; 102 KiB/s rd, 41 KiB/s wr, 62 op/s
Oct 02 12:24:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:37.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 305 active+clean; 358 MiB data, 944 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 28 KiB/s wr, 56 op/s
Oct 02 12:24:37 compute-0 nova_compute[257802]: 2025-10-02 12:24:37.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:37 compute-0 nova_compute[257802]: 2025-10-02 12:24:37.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:24:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:38.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:24:38 compute-0 ceph-mon[73607]: pgmap v1870: 305 pgs: 305 active+clean; 358 MiB data, 944 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 28 KiB/s wr, 56 op/s
Oct 02 12:24:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:39.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:39.167 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:24:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 358 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 29 KiB/s wr, 74 op/s
Oct 02 12:24:39 compute-0 podman[319277]: 2025-10-02 12:24:39.950621706 +0000 UTC m=+0.091437651 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:24:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:40.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:40 compute-0 ceph-mon[73607]: pgmap v1871: 305 pgs: 305 active+clean; 358 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 29 KiB/s wr, 74 op/s
Oct 02 12:24:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:41.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 358 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 7.3 KiB/s wr, 69 op/s
Oct 02 12:24:41 compute-0 sudo[319304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:24:41 compute-0 sudo[319304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:41 compute-0 sudo[319304]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:41 compute-0 sudo[319329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:24:41 compute-0 sudo[319329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:41 compute-0 sudo[319329]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:42 compute-0 nova_compute[257802]: 2025-10-02 12:24:42.417 2 DEBUG oslo_concurrency.lockutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:42 compute-0 nova_compute[257802]: 2025-10-02 12:24:42.419 2 DEBUG oslo_concurrency.lockutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:24:42
Oct 02 12:24:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:24:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:24:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', 'images', 'default.rgw.meta', '.mgr', 'volumes']
Oct 02 12:24:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:24:42 compute-0 nova_compute[257802]: 2025-10-02 12:24:42.505 2 DEBUG nova.compute.manager [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:24:42 compute-0 nova_compute[257802]: 2025-10-02 12:24:42.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:42 compute-0 nova_compute[257802]: 2025-10-02 12:24:42.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:42.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:42 compute-0 nova_compute[257802]: 2025-10-02 12:24:42.648 2 DEBUG oslo_concurrency.lockutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:42 compute-0 nova_compute[257802]: 2025-10-02 12:24:42.649 2 DEBUG oslo_concurrency.lockutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:42 compute-0 nova_compute[257802]: 2025-10-02 12:24:42.657 2 DEBUG nova.virt.hardware [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:24:42 compute-0 nova_compute[257802]: 2025-10-02 12:24:42.657 2 INFO nova.compute.claims [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:24:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:24:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:24:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:24:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:24:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:24:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:24:42 compute-0 nova_compute[257802]: 2025-10-02 12:24:42.776 2 DEBUG oslo_concurrency.processutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:24:42 compute-0 ceph-mon[73607]: pgmap v1872: 305 pgs: 305 active+clean; 358 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 7.3 KiB/s wr, 69 op/s
Oct 02 12:24:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:24:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:24:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:24:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:24:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:24:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:24:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:43.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:24:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:24:43 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1272240967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:43 compute-0 nova_compute[257802]: 2025-10-02 12:24:43.194 2 DEBUG oslo_concurrency.processutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:24:43 compute-0 nova_compute[257802]: 2025-10-02 12:24:43.204 2 DEBUG nova.compute.provider_tree [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:24:43 compute-0 nova_compute[257802]: 2025-10-02 12:24:43.257 2 DEBUG nova.scheduler.client.report [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:24:43 compute-0 nova_compute[257802]: 2025-10-02 12:24:43.319 2 DEBUG oslo_concurrency.lockutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:43 compute-0 nova_compute[257802]: 2025-10-02 12:24:43.320 2 DEBUG nova.compute.manager [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:24:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:24:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:24:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:24:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:24:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:24:43 compute-0 nova_compute[257802]: 2025-10-02 12:24:43.446 2 DEBUG nova.compute.manager [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:24:43 compute-0 nova_compute[257802]: 2025-10-02 12:24:43.447 2 DEBUG nova.network.neutron [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:24:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 358 MiB data, 945 MiB used, 20 GiB / 21 GiB avail; 110 KiB/s rd, 7.3 KiB/s wr, 172 op/s
Oct 02 12:24:43 compute-0 nova_compute[257802]: 2025-10-02 12:24:43.527 2 INFO nova.virt.libvirt.driver [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:24:43 compute-0 nova_compute[257802]: 2025-10-02 12:24:43.619 2 DEBUG nova.compute.manager [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:24:43 compute-0 nova_compute[257802]: 2025-10-02 12:24:43.927 2 DEBUG nova.compute.manager [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:24:43 compute-0 nova_compute[257802]: 2025-10-02 12:24:43.929 2 DEBUG nova.virt.libvirt.driver [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:24:43 compute-0 nova_compute[257802]: 2025-10-02 12:24:43.930 2 INFO nova.virt.libvirt.driver [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Creating image(s)
Oct 02 12:24:43 compute-0 nova_compute[257802]: 2025-10-02 12:24:43.967 2 DEBUG nova.storage.rbd_utils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:24:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1272240967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:44 compute-0 nova_compute[257802]: 2025-10-02 12:24:44.018 2 DEBUG nova.storage.rbd_utils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:24:44 compute-0 nova_compute[257802]: 2025-10-02 12:24:44.061 2 DEBUG nova.storage.rbd_utils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:24:44 compute-0 nova_compute[257802]: 2025-10-02 12:24:44.066 2 DEBUG oslo_concurrency.processutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:24:44 compute-0 nova_compute[257802]: 2025-10-02 12:24:44.163 2 DEBUG oslo_concurrency.processutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:24:44 compute-0 nova_compute[257802]: 2025-10-02 12:24:44.164 2 DEBUG oslo_concurrency.lockutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:44 compute-0 nova_compute[257802]: 2025-10-02 12:24:44.164 2 DEBUG oslo_concurrency.lockutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:44 compute-0 nova_compute[257802]: 2025-10-02 12:24:44.165 2 DEBUG oslo_concurrency.lockutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:44 compute-0 nova_compute[257802]: 2025-10-02 12:24:44.201 2 DEBUG nova.storage.rbd_utils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:24:44 compute-0 nova_compute[257802]: 2025-10-02 12:24:44.206 2 DEBUG oslo_concurrency.processutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:24:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:44.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:44 compute-0 nova_compute[257802]: 2025-10-02 12:24:44.840 2 DEBUG oslo_concurrency.processutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:24:44 compute-0 nova_compute[257802]: 2025-10-02 12:24:44.940 2 DEBUG nova.storage.rbd_utils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] resizing rbd image 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:24:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:45.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:45 compute-0 ceph-mon[73607]: pgmap v1873: 305 pgs: 305 active+clean; 358 MiB data, 945 MiB used, 20 GiB / 21 GiB avail; 110 KiB/s rd, 7.3 KiB/s wr, 172 op/s
Oct 02 12:24:45 compute-0 nova_compute[257802]: 2025-10-02 12:24:45.165 2 DEBUG nova.objects.instance [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'migration_context' on Instance uuid 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:24:45 compute-0 nova_compute[257802]: 2025-10-02 12:24:45.200 2 DEBUG nova.virt.libvirt.driver [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:24:45 compute-0 nova_compute[257802]: 2025-10-02 12:24:45.201 2 DEBUG nova.virt.libvirt.driver [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Ensure instance console log exists: /var/lib/nova/instances/4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:24:45 compute-0 nova_compute[257802]: 2025-10-02 12:24:45.201 2 DEBUG oslo_concurrency.lockutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:45 compute-0 nova_compute[257802]: 2025-10-02 12:24:45.202 2 DEBUG oslo_concurrency.lockutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:45 compute-0 nova_compute[257802]: 2025-10-02 12:24:45.202 2 DEBUG oslo_concurrency.lockutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:45 compute-0 nova_compute[257802]: 2025-10-02 12:24:45.429 2 DEBUG nova.policy [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '25468893d71641a385711fd2982bb00b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '10fff81da7a54740a53a0771ce916329', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:24:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 358 MiB data, 945 MiB used, 20 GiB / 21 GiB avail; 108 KiB/s rd, 3.7 KiB/s wr, 180 op/s
Oct 02 12:24:46 compute-0 nova_compute[257802]: 2025-10-02 12:24:46.265 2 DEBUG nova.network.neutron [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Successfully created port: 530b91ba-7eee-4225-b3cd-59e896f5b3d1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:24:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:24:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:46.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:24:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:47.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:47 compute-0 ceph-mon[73607]: pgmap v1874: 305 pgs: 305 active+clean; 358 MiB data, 945 MiB used, 20 GiB / 21 GiB avail; 108 KiB/s rd, 3.7 KiB/s wr, 180 op/s
Oct 02 12:24:47 compute-0 nova_compute[257802]: 2025-10-02 12:24:47.442 2 DEBUG nova.network.neutron [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Successfully updated port: 530b91ba-7eee-4225-b3cd-59e896f5b3d1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:24:47 compute-0 nova_compute[257802]: 2025-10-02 12:24:47.462 2 DEBUG oslo_concurrency.lockutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "refresh_cache-4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:24:47 compute-0 nova_compute[257802]: 2025-10-02 12:24:47.462 2 DEBUG oslo_concurrency.lockutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquired lock "refresh_cache-4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:24:47 compute-0 nova_compute[257802]: 2025-10-02 12:24:47.463 2 DEBUG nova.network.neutron [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:24:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 383 MiB data, 957 MiB used, 20 GiB / 21 GiB avail; 109 KiB/s rd, 1.0 MiB/s wr, 182 op/s
Oct 02 12:24:47 compute-0 nova_compute[257802]: 2025-10-02 12:24:47.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:47 compute-0 nova_compute[257802]: 2025-10-02 12:24:47.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:47 compute-0 nova_compute[257802]: 2025-10-02 12:24:47.563 2 DEBUG nova.compute.manager [req-1a65c0c8-2869-4cc8-b79b-4e8c2a0e7e2e req-d63ea579-323b-4a4c-b1cb-d3497a0f9d64 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Received event network-changed-530b91ba-7eee-4225-b3cd-59e896f5b3d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:24:47 compute-0 nova_compute[257802]: 2025-10-02 12:24:47.563 2 DEBUG nova.compute.manager [req-1a65c0c8-2869-4cc8-b79b-4e8c2a0e7e2e req-d63ea579-323b-4a4c-b1cb-d3497a0f9d64 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Refreshing instance network info cache due to event network-changed-530b91ba-7eee-4225-b3cd-59e896f5b3d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:24:47 compute-0 nova_compute[257802]: 2025-10-02 12:24:47.563 2 DEBUG oslo_concurrency.lockutils [req-1a65c0c8-2869-4cc8-b79b-4e8c2a0e7e2e req-d63ea579-323b-4a4c-b1cb-d3497a0f9d64 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:24:47 compute-0 nova_compute[257802]: 2025-10-02 12:24:47.606 2 DEBUG nova.network.neutron [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:24:48 compute-0 nova_compute[257802]: 2025-10-02 12:24:48.118 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3330506762' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:24:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1133758493' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:24:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2262425354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:48.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:48 compute-0 nova_compute[257802]: 2025-10-02 12:24:48.712 2 DEBUG nova.network.neutron [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Updating instance_info_cache with network_info: [{"id": "530b91ba-7eee-4225-b3cd-59e896f5b3d1", "address": "fa:16:3e:44:00:30", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap530b91ba-7e", "ovs_interfaceid": "530b91ba-7eee-4225-b3cd-59e896f5b3d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:24:48 compute-0 nova_compute[257802]: 2025-10-02 12:24:48.731 2 DEBUG oslo_concurrency.lockutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Releasing lock "refresh_cache-4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:24:48 compute-0 nova_compute[257802]: 2025-10-02 12:24:48.732 2 DEBUG nova.compute.manager [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Instance network_info: |[{"id": "530b91ba-7eee-4225-b3cd-59e896f5b3d1", "address": "fa:16:3e:44:00:30", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap530b91ba-7e", "ovs_interfaceid": "530b91ba-7eee-4225-b3cd-59e896f5b3d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:24:48 compute-0 nova_compute[257802]: 2025-10-02 12:24:48.733 2 DEBUG oslo_concurrency.lockutils [req-1a65c0c8-2869-4cc8-b79b-4e8c2a0e7e2e req-d63ea579-323b-4a4c-b1cb-d3497a0f9d64 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:24:48 compute-0 nova_compute[257802]: 2025-10-02 12:24:48.734 2 DEBUG nova.network.neutron [req-1a65c0c8-2869-4cc8-b79b-4e8c2a0e7e2e req-d63ea579-323b-4a4c-b1cb-d3497a0f9d64 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Refreshing network info cache for port 530b91ba-7eee-4225-b3cd-59e896f5b3d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:24:48 compute-0 nova_compute[257802]: 2025-10-02 12:24:48.739 2 DEBUG nova.virt.libvirt.driver [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Start _get_guest_xml network_info=[{"id": "530b91ba-7eee-4225-b3cd-59e896f5b3d1", "address": "fa:16:3e:44:00:30", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap530b91ba-7e", "ovs_interfaceid": "530b91ba-7eee-4225-b3cd-59e896f5b3d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:24:48 compute-0 nova_compute[257802]: 2025-10-02 12:24:48.746 2 WARNING nova.virt.libvirt.driver [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:24:48 compute-0 nova_compute[257802]: 2025-10-02 12:24:48.750 2 DEBUG nova.virt.libvirt.host [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:24:48 compute-0 nova_compute[257802]: 2025-10-02 12:24:48.751 2 DEBUG nova.virt.libvirt.host [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:24:48 compute-0 nova_compute[257802]: 2025-10-02 12:24:48.757 2 DEBUG nova.virt.libvirt.host [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:24:48 compute-0 nova_compute[257802]: 2025-10-02 12:24:48.758 2 DEBUG nova.virt.libvirt.host [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:24:48 compute-0 nova_compute[257802]: 2025-10-02 12:24:48.759 2 DEBUG nova.virt.libvirt.driver [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:24:48 compute-0 nova_compute[257802]: 2025-10-02 12:24:48.759 2 DEBUG nova.virt.hardware [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:24:48 compute-0 nova_compute[257802]: 2025-10-02 12:24:48.760 2 DEBUG nova.virt.hardware [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:24:48 compute-0 nova_compute[257802]: 2025-10-02 12:24:48.760 2 DEBUG nova.virt.hardware [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:24:48 compute-0 nova_compute[257802]: 2025-10-02 12:24:48.760 2 DEBUG nova.virt.hardware [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:24:48 compute-0 nova_compute[257802]: 2025-10-02 12:24:48.760 2 DEBUG nova.virt.hardware [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:24:48 compute-0 nova_compute[257802]: 2025-10-02 12:24:48.760 2 DEBUG nova.virt.hardware [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:24:48 compute-0 nova_compute[257802]: 2025-10-02 12:24:48.761 2 DEBUG nova.virt.hardware [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:24:48 compute-0 nova_compute[257802]: 2025-10-02 12:24:48.761 2 DEBUG nova.virt.hardware [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:24:48 compute-0 nova_compute[257802]: 2025-10-02 12:24:48.761 2 DEBUG nova.virt.hardware [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:24:48 compute-0 nova_compute[257802]: 2025-10-02 12:24:48.762 2 DEBUG nova.virt.hardware [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:24:48 compute-0 nova_compute[257802]: 2025-10-02 12:24:48.762 2 DEBUG nova.virt.hardware [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:24:48 compute-0 nova_compute[257802]: 2025-10-02 12:24:48.764 2 DEBUG oslo_concurrency.processutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:24:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:49.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:24:49 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2623757825' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.180 2 DEBUG oslo_concurrency.processutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.211 2 DEBUG nova.storage.rbd_utils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.215 2 DEBUG oslo_concurrency.processutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:24:49 compute-0 ceph-mon[73607]: pgmap v1875: 305 pgs: 305 active+clean; 383 MiB data, 957 MiB used, 20 GiB / 21 GiB avail; 109 KiB/s rd, 1.0 MiB/s wr, 182 op/s
Oct 02 12:24:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2623757825' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:24:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 305 active+clean; 405 MiB data, 966 MiB used, 20 GiB / 21 GiB avail; 128 KiB/s rd, 1.8 MiB/s wr, 210 op/s
Oct 02 12:24:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:24:49 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3781000534' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.619 2 DEBUG oslo_concurrency.processutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.621 2 DEBUG nova.virt.libvirt.vif [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:24:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1375297496',display_name='tempest-ServerActionsTestOtherB-server-1375297496',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1375297496',id=102,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='10fff81da7a54740a53a0771ce916329',ramdisk_id='',reservation_id='r-afvu73c5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1686489955',owner_user_name='tempest-ServerActionsTestO
therB-1686489955-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:24:43Z,user_data=None,user_id='25468893d71641a385711fd2982bb00b',uuid=4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "530b91ba-7eee-4225-b3cd-59e896f5b3d1", "address": "fa:16:3e:44:00:30", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap530b91ba-7e", "ovs_interfaceid": "530b91ba-7eee-4225-b3cd-59e896f5b3d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.621 2 DEBUG nova.network.os_vif_util [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converting VIF {"id": "530b91ba-7eee-4225-b3cd-59e896f5b3d1", "address": "fa:16:3e:44:00:30", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap530b91ba-7e", "ovs_interfaceid": "530b91ba-7eee-4225-b3cd-59e896f5b3d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.622 2 DEBUG nova.network.os_vif_util [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:00:30,bridge_name='br-int',has_traffic_filtering=True,id=530b91ba-7eee-4225-b3cd-59e896f5b3d1,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap530b91ba-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.623 2 DEBUG nova.objects.instance [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.640 2 DEBUG nova.virt.libvirt.driver [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:24:49 compute-0 nova_compute[257802]:   <uuid>4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158</uuid>
Oct 02 12:24:49 compute-0 nova_compute[257802]:   <name>instance-00000066</name>
Oct 02 12:24:49 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:24:49 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:24:49 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:24:49 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:       <nova:name>tempest-ServerActionsTestOtherB-server-1375297496</nova:name>
Oct 02 12:24:49 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:24:48</nova:creationTime>
Oct 02 12:24:49 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:24:49 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:24:49 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:24:49 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:24:49 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:24:49 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:24:49 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:24:49 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:24:49 compute-0 nova_compute[257802]:         <nova:user uuid="25468893d71641a385711fd2982bb00b">tempest-ServerActionsTestOtherB-1686489955-project-member</nova:user>
Oct 02 12:24:49 compute-0 nova_compute[257802]:         <nova:project uuid="10fff81da7a54740a53a0771ce916329">tempest-ServerActionsTestOtherB-1686489955</nova:project>
Oct 02 12:24:49 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:24:49 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:24:49 compute-0 nova_compute[257802]:         <nova:port uuid="530b91ba-7eee-4225-b3cd-59e896f5b3d1">
Oct 02 12:24:49 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:24:49 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:24:49 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:24:49 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <system>
Oct 02 12:24:49 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:24:49 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:24:49 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:24:49 compute-0 nova_compute[257802]:       <entry name="serial">4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158</entry>
Oct 02 12:24:49 compute-0 nova_compute[257802]:       <entry name="uuid">4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158</entry>
Oct 02 12:24:49 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     </system>
Oct 02 12:24:49 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:24:49 compute-0 nova_compute[257802]:   <os>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:   </os>
Oct 02 12:24:49 compute-0 nova_compute[257802]:   <features>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:   </features>
Oct 02 12:24:49 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:24:49 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:24:49 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:24:49 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158_disk">
Oct 02 12:24:49 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:       </source>
Oct 02 12:24:49 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:24:49 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:24:49 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:24:49 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158_disk.config">
Oct 02 12:24:49 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:       </source>
Oct 02 12:24:49 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:24:49 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:24:49 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:24:49 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:44:00:30"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:       <target dev="tap530b91ba-7e"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:24:49 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158/console.log" append="off"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <video>
Oct 02 12:24:49 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     </video>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:24:49 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:24:49 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:24:49 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:24:49 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:24:49 compute-0 nova_compute[257802]: </domain>
Oct 02 12:24:49 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.641 2 DEBUG nova.compute.manager [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Preparing to wait for external event network-vif-plugged-530b91ba-7eee-4225-b3cd-59e896f5b3d1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.642 2 DEBUG oslo_concurrency.lockutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.642 2 DEBUG oslo_concurrency.lockutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.642 2 DEBUG oslo_concurrency.lockutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.643 2 DEBUG nova.virt.libvirt.vif [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:24:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1375297496',display_name='tempest-ServerActionsTestOtherB-server-1375297496',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1375297496',id=102,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='10fff81da7a54740a53a0771ce916329',ramdisk_id='',reservation_id='r-afvu73c5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1686489955',owner_user_name='tempest-ServerActionsTestOtherB-1686489955-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:24:43Z,user_data=None,user_id='25468893d71641a385711fd2982bb00b',uuid=4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "530b91ba-7eee-4225-b3cd-59e896f5b3d1", "address": "fa:16:3e:44:00:30", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap530b91ba-7e", "ovs_interfaceid": "530b91ba-7eee-4225-b3cd-59e896f5b3d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.643 2 DEBUG nova.network.os_vif_util [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converting VIF {"id": "530b91ba-7eee-4225-b3cd-59e896f5b3d1", "address": "fa:16:3e:44:00:30", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap530b91ba-7e", "ovs_interfaceid": "530b91ba-7eee-4225-b3cd-59e896f5b3d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.644 2 DEBUG nova.network.os_vif_util [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:00:30,bridge_name='br-int',has_traffic_filtering=True,id=530b91ba-7eee-4225-b3cd-59e896f5b3d1,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap530b91ba-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.644 2 DEBUG os_vif [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:00:30,bridge_name='br-int',has_traffic_filtering=True,id=530b91ba-7eee-4225-b3cd-59e896f5b3d1,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap530b91ba-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.645 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.645 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.648 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap530b91ba-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.648 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap530b91ba-7e, col_values=(('external_ids', {'iface-id': '530b91ba-7eee-4225-b3cd-59e896f5b3d1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:44:00:30', 'vm-uuid': '4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:49 compute-0 NetworkManager[44987]: <info>  [1759407889.6512] manager: (tap530b91ba-7e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/203)
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.658 2 INFO os_vif [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:00:30,bridge_name='br-int',has_traffic_filtering=True,id=530b91ba-7eee-4225-b3cd-59e896f5b3d1,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap530b91ba-7e')
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.715 2 DEBUG nova.virt.libvirt.driver [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.716 2 DEBUG nova.virt.libvirt.driver [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.716 2 DEBUG nova.virt.libvirt.driver [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] No VIF found with MAC fa:16:3e:44:00:30, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.717 2 INFO nova.virt.libvirt.driver [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Using config drive
Oct 02 12:24:49 compute-0 nova_compute[257802]: 2025-10-02 12:24:49.753 2 DEBUG nova.storage.rbd_utils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:24:50 compute-0 nova_compute[257802]: 2025-10-02 12:24:50.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:50 compute-0 nova_compute[257802]: 2025-10-02 12:24:50.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:50 compute-0 nova_compute[257802]: 2025-10-02 12:24:50.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:24:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2525089251' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3781000534' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:24:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:50.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:24:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:51.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:24:51 compute-0 ceph-mon[73607]: pgmap v1876: 305 pgs: 305 active+clean; 405 MiB data, 966 MiB used, 20 GiB / 21 GiB avail; 128 KiB/s rd, 1.8 MiB/s wr, 210 op/s
Oct 02 12:24:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 405 MiB data, 966 MiB used, 20 GiB / 21 GiB avail; 117 KiB/s rd, 1.8 MiB/s wr, 192 op/s
Oct 02 12:24:51 compute-0 nova_compute[257802]: 2025-10-02 12:24:51.554 2 INFO nova.virt.libvirt.driver [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Creating config drive at /var/lib/nova/instances/4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158/disk.config
Oct 02 12:24:51 compute-0 nova_compute[257802]: 2025-10-02 12:24:51.559 2 DEBUG oslo_concurrency.processutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9nqhrzin execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:24:51 compute-0 nova_compute[257802]: 2025-10-02 12:24:51.688 2 DEBUG oslo_concurrency.processutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9nqhrzin" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:24:51 compute-0 nova_compute[257802]: 2025-10-02 12:24:51.732 2 DEBUG nova.storage.rbd_utils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:24:51 compute-0 nova_compute[257802]: 2025-10-02 12:24:51.738 2 DEBUG oslo_concurrency.processutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158/disk.config 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:24:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:52 compute-0 nova_compute[257802]: 2025-10-02 12:24:52.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:52 compute-0 nova_compute[257802]: 2025-10-02 12:24:52.251 2 DEBUG oslo_concurrency.processutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158/disk.config 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:24:52 compute-0 nova_compute[257802]: 2025-10-02 12:24:52.252 2 INFO nova.virt.libvirt.driver [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Deleting local config drive /var/lib/nova/instances/4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158/disk.config because it was imported into RBD.
Oct 02 12:24:52 compute-0 kernel: tap530b91ba-7e: entered promiscuous mode
Oct 02 12:24:52 compute-0 nova_compute[257802]: 2025-10-02 12:24:52.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:52 compute-0 NetworkManager[44987]: <info>  [1759407892.3441] manager: (tap530b91ba-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/204)
Oct 02 12:24:52 compute-0 ovn_controller[148183]: 2025-10-02T12:24:52Z|00433|binding|INFO|Claiming lport 530b91ba-7eee-4225-b3cd-59e896f5b3d1 for this chassis.
Oct 02 12:24:52 compute-0 ovn_controller[148183]: 2025-10-02T12:24:52Z|00434|binding|INFO|530b91ba-7eee-4225-b3cd-59e896f5b3d1: Claiming fa:16:3e:44:00:30 10.100.0.5
Oct 02 12:24:52 compute-0 ceph-mon[73607]: pgmap v1877: 305 pgs: 305 active+clean; 405 MiB data, 966 MiB used, 20 GiB / 21 GiB avail; 117 KiB/s rd, 1.8 MiB/s wr, 192 op/s
Oct 02 12:24:52 compute-0 nova_compute[257802]: 2025-10-02 12:24:52.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:52.388 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:00:30 10.100.0.5'], port_security=['fa:16:3e:44:00:30 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4035a600-4a5e-41ee-a619-d81e2c993b79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '10fff81da7a54740a53a0771ce916329', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3fd9677a-ce06-4783-a778-d114830d9fab', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5dc7931-b785-4336-99b8-936a17be87c3, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=530b91ba-7eee-4225-b3cd-59e896f5b3d1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:52.391 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 530b91ba-7eee-4225-b3cd-59e896f5b3d1 in datapath 4035a600-4a5e-41ee-a619-d81e2c993b79 bound to our chassis
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:52.394 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4035a600-4a5e-41ee-a619-d81e2c993b79
Oct 02 12:24:52 compute-0 NetworkManager[44987]: <info>  [1759407892.4148] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/205)
Oct 02 12:24:52 compute-0 nova_compute[257802]: 2025-10-02 12:24:52.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:52.413 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8f5e655a-0665-455c-bc39-322eac16aa20]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:52.416 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4035a600-41 in ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:24:52 compute-0 NetworkManager[44987]: <info>  [1759407892.4163] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/206)
Oct 02 12:24:52 compute-0 systemd-machined[211836]: New machine qemu-50-instance-00000066.
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:52.418 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4035a600-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:52.418 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[195356c1-d93c-48a7-be3b-aecf083bc97c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:52.421 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e190fb18-bc11-45ee-a0a0-8de3d371fc50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:52.434 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[abb5a3ca-f5f1-4bdf-ac00-552ceaabe2df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:52 compute-0 systemd[1]: Started Virtual Machine qemu-50-instance-00000066.
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:52.461 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4156610b-9ad8-4867-9c3a-07c34eedad72]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:52 compute-0 systemd-udevd[319688]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:24:52 compute-0 NetworkManager[44987]: <info>  [1759407892.5078] device (tap530b91ba-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:24:52 compute-0 NetworkManager[44987]: <info>  [1759407892.5086] device (tap530b91ba-7e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:52.507 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[b5496192-3bae-4f73-b0d1-f96931ce3b26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:52.512 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[06a5d05f-ffaa-4a37-b228-cbb55d99f1d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:52 compute-0 NetworkManager[44987]: <info>  [1759407892.5143] manager: (tap4035a600-40): new Veth device (/org/freedesktop/NetworkManager/Devices/207)
Oct 02 12:24:52 compute-0 systemd-udevd[319691]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:52.555 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[93f7a61f-c292-4298-86f6-6ff91bfe3cba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:52.558 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[96bdc1da-3243-4b15-a33d-b4e4ff2aaaad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:52 compute-0 NetworkManager[44987]: <info>  [1759407892.5801] device (tap4035a600-40): carrier: link connected
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:52.588 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[a0f94e64-025f-412c-8405-d511bec5f498]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:52.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:52.607 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[db6ce6d3-6aa8-4186-9ded-3a296b3f1718]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4035a600-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:fb:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596020, 'reachable_time': 43970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319717, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:52.625 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3aee874d-882e-4c5d-9c19-79c6953dafec]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed0:fb3f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 596020, 'tstamp': 596020}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319718, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:52.648 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9762229c-d2a9-4e09-92c0-c2052454f669]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4035a600-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:fb:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596020, 'reachable_time': 43970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 319719, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:52 compute-0 nova_compute[257802]: 2025-10-02 12:24:52.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:52 compute-0 nova_compute[257802]: 2025-10-02 12:24:52.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:52.688 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[85a49015-6b69-4bf2-b806-897f2f067415]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:52 compute-0 nova_compute[257802]: 2025-10-02 12:24:52.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:52 compute-0 ovn_controller[148183]: 2025-10-02T12:24:52Z|00435|binding|INFO|Setting lport 530b91ba-7eee-4225-b3cd-59e896f5b3d1 ovn-installed in OVS
Oct 02 12:24:52 compute-0 ovn_controller[148183]: 2025-10-02T12:24:52Z|00436|binding|INFO|Setting lport 530b91ba-7eee-4225-b3cd-59e896f5b3d1 up in Southbound
Oct 02 12:24:52 compute-0 nova_compute[257802]: 2025-10-02 12:24:52.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:52.754 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[07ef9a25-559a-4824-b32b-0982da847400]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:52.756 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4035a600-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:52.757 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:52.757 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4035a600-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:24:52 compute-0 kernel: tap4035a600-40: entered promiscuous mode
Oct 02 12:24:52 compute-0 nova_compute[257802]: 2025-10-02 12:24:52.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:52 compute-0 NetworkManager[44987]: <info>  [1759407892.7597] manager: (tap4035a600-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/208)
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:52.762 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4035a600-40, col_values=(('external_ids', {'iface-id': '1befa812-080f-4694-ba8b-9130fe81621d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:24:52 compute-0 nova_compute[257802]: 2025-10-02 12:24:52.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:52 compute-0 ovn_controller[148183]: 2025-10-02T12:24:52Z|00437|binding|INFO|Releasing lport 1befa812-080f-4694-ba8b-9130fe81621d from this chassis (sb_readonly=0)
Oct 02 12:24:52 compute-0 nova_compute[257802]: 2025-10-02 12:24:52.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:52.765 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4035a600-4a5e-41ee-a619-d81e2c993b79.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4035a600-4a5e-41ee-a619-d81e2c993b79.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:52.766 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b2d5902c-39ff-4f3c-892f-ee3ba9677ed7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:52.767 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-4035a600-4a5e-41ee-a619-d81e2c993b79
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/4035a600-4a5e-41ee-a619-d81e2c993b79.pid.haproxy
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 4035a600-4a5e-41ee-a619-d81e2c993b79
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:24:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:52.768 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'env', 'PROCESS_TAG=haproxy-4035a600-4a5e-41ee-a619-d81e2c993b79', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4035a600-4a5e-41ee-a619-d81e2c993b79.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:24:52 compute-0 nova_compute[257802]: 2025-10-02 12:24:52.774 2 DEBUG nova.network.neutron [req-1a65c0c8-2869-4cc8-b79b-4e8c2a0e7e2e req-d63ea579-323b-4a4c-b1cb-d3497a0f9d64 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Updated VIF entry in instance network info cache for port 530b91ba-7eee-4225-b3cd-59e896f5b3d1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:24:52 compute-0 nova_compute[257802]: 2025-10-02 12:24:52.775 2 DEBUG nova.network.neutron [req-1a65c0c8-2869-4cc8-b79b-4e8c2a0e7e2e req-d63ea579-323b-4a4c-b1cb-d3497a0f9d64 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Updating instance_info_cache with network_info: [{"id": "530b91ba-7eee-4225-b3cd-59e896f5b3d1", "address": "fa:16:3e:44:00:30", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap530b91ba-7e", "ovs_interfaceid": "530b91ba-7eee-4225-b3cd-59e896f5b3d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:24:52 compute-0 nova_compute[257802]: 2025-10-02 12:24:52.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:52 compute-0 nova_compute[257802]: 2025-10-02 12:24:52.797 2 DEBUG oslo_concurrency.lockutils [req-1a65c0c8-2869-4cc8-b79b-4e8c2a0e7e2e req-d63ea579-323b-4a4c-b1cb-d3497a0f9d64 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:24:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:53.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:53 compute-0 podman[319789]: 2025-10-02 12:24:53.165029167 +0000 UTC m=+0.087300330 container create 861e2fc0fc00f96f7ab5e80ddda74fadf76aaa83648a4f93c29412092a7028ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.165 2 DEBUG nova.compute.manager [req-673a470e-96db-4a92-8738-ac183f502e1e req-a9d6e4d7-782a-41d8-bf2b-8be0f5f8dea1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Received event network-vif-plugged-530b91ba-7eee-4225-b3cd-59e896f5b3d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.166 2 DEBUG oslo_concurrency.lockutils [req-673a470e-96db-4a92-8738-ac183f502e1e req-a9d6e4d7-782a-41d8-bf2b-8be0f5f8dea1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.166 2 DEBUG oslo_concurrency.lockutils [req-673a470e-96db-4a92-8738-ac183f502e1e req-a9d6e4d7-782a-41d8-bf2b-8be0f5f8dea1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.166 2 DEBUG oslo_concurrency.lockutils [req-673a470e-96db-4a92-8738-ac183f502e1e req-a9d6e4d7-782a-41d8-bf2b-8be0f5f8dea1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.167 2 DEBUG nova.compute.manager [req-673a470e-96db-4a92-8738-ac183f502e1e req-a9d6e4d7-782a-41d8-bf2b-8be0f5f8dea1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Processing event network-vif-plugged-530b91ba-7eee-4225-b3cd-59e896f5b3d1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:24:53 compute-0 podman[319789]: 2025-10-02 12:24:53.119886201 +0000 UTC m=+0.042157374 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:24:53 compute-0 systemd[1]: Started libpod-conmon-861e2fc0fc00f96f7ab5e80ddda74fadf76aaa83648a4f93c29412092a7028ac.scope.
Oct 02 12:24:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:24:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f397fd896f6558f93d56c786eb9b96e751398102a6978d4a41ea4ac3e41ef2e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:24:53 compute-0 podman[319789]: 2025-10-02 12:24:53.28231265 +0000 UTC m=+0.204583853 container init 861e2fc0fc00f96f7ab5e80ddda74fadf76aaa83648a4f93c29412092a7028ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:24:53 compute-0 podman[319789]: 2025-10-02 12:24:53.287924207 +0000 UTC m=+0.210195360 container start 861e2fc0fc00f96f7ab5e80ddda74fadf76aaa83648a4f93c29412092a7028ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:24:53 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[319808]: [NOTICE]   (319812) : New worker (319814) forked
Oct 02 12:24:53 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[319808]: [NOTICE]   (319812) : Loading success.
Oct 02 12:24:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 305 active+clean; 405 MiB data, 966 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 254 op/s
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.556 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407893.5557756, 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.558 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] VM Started (Lifecycle Event)
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.560 2 DEBUG nova.compute.manager [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.564 2 DEBUG nova.virt.libvirt.driver [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.568 2 INFO nova.virt.libvirt.driver [-] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Instance spawned successfully.
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.569 2 DEBUG nova.virt.libvirt.driver [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.576 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.581 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.591 2 DEBUG nova.virt.libvirt.driver [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.591 2 DEBUG nova.virt.libvirt.driver [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.592 2 DEBUG nova.virt.libvirt.driver [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.592 2 DEBUG nova.virt.libvirt.driver [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.593 2 DEBUG nova.virt.libvirt.driver [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.593 2 DEBUG nova.virt.libvirt.driver [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.614 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.615 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407893.556046, 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.615 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] VM Paused (Lifecycle Event)
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.645 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.648 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407893.5634582, 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.648 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] VM Resumed (Lifecycle Event)
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.660 2 INFO nova.compute.manager [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Took 9.73 seconds to spawn the instance on the hypervisor.
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.661 2 DEBUG nova.compute.manager [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.670 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.673 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.695 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.726 2 INFO nova.compute.manager [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Took 11.11 seconds to build instance.
Oct 02 12:24:53 compute-0 nova_compute[257802]: 2025-10-02 12:24:53.742 2 DEBUG oslo_concurrency.lockutils [None req-8cd90146-ffa8-4243-b16c-3291cedac861 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.323s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:54 compute-0 nova_compute[257802]: 2025-10-02 12:24:54.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:54 compute-0 nova_compute[257802]: 2025-10-02 12:24:54.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:24:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:24:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:24:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:24:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0075035187511632235 of space, bias 1.0, pg target 2.251055625348967 quantized to 32 (current 32)
Oct 02 12:24:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:24:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002162323480830076 of space, bias 1.0, pg target 0.6443723972873627 quantized to 32 (current 32)
Oct 02 12:24:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:24:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:24:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:24:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Oct 02 12:24:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:24:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:24:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:24:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:24:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:24:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027081297692164525 quantized to 32 (current 32)
Oct 02 12:24:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:24:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:24:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:24:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:24:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:24:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:24:54 compute-0 ceph-mon[73607]: pgmap v1878: 305 pgs: 305 active+clean; 405 MiB data, 966 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 254 op/s
Oct 02 12:24:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:54.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:54 compute-0 nova_compute[257802]: 2025-10-02 12:24:54.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:54 compute-0 nova_compute[257802]: 2025-10-02 12:24:54.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:55 compute-0 nova_compute[257802]: 2025-10-02 12:24:55.002 2 INFO nova.compute.manager [None req-01e30ab1-d7be-4e02-92b5-04034032793b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Pausing
Oct 02 12:24:55 compute-0 nova_compute[257802]: 2025-10-02 12:24:55.003 2 DEBUG nova.objects.instance [None req-01e30ab1-d7be-4e02-92b5-04034032793b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'flavor' on Instance uuid 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:24:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:24:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1362203460' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:24:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:24:55 compute-0 nova_compute[257802]: 2025-10-02 12:24:55.050 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407895.0500927, 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:24:55 compute-0 nova_compute[257802]: 2025-10-02 12:24:55.050 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] VM Paused (Lifecycle Event)
Oct 02 12:24:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1362203460' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:24:55 compute-0 nova_compute[257802]: 2025-10-02 12:24:55.052 2 DEBUG nova.compute.manager [None req-01e30ab1-d7be-4e02-92b5-04034032793b 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:24:55 compute-0 nova_compute[257802]: 2025-10-02 12:24:55.074 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:24:55 compute-0 nova_compute[257802]: 2025-10-02 12:24:55.076 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:24:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:55.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:55 compute-0 nova_compute[257802]: 2025-10-02 12:24:55.099 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] During sync_power_state the instance has a pending task (pausing). Skip.
Oct 02 12:24:55 compute-0 nova_compute[257802]: 2025-10-02 12:24:55.246 2 DEBUG nova.compute.manager [req-f7c3be0c-c1f5-4efb-8ed1-46df7566cd61 req-b315ec77-eece-40ae-93fd-adfe3ad0fe18 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Received event network-vif-plugged-530b91ba-7eee-4225-b3cd-59e896f5b3d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:24:55 compute-0 nova_compute[257802]: 2025-10-02 12:24:55.247 2 DEBUG oslo_concurrency.lockutils [req-f7c3be0c-c1f5-4efb-8ed1-46df7566cd61 req-b315ec77-eece-40ae-93fd-adfe3ad0fe18 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:55 compute-0 nova_compute[257802]: 2025-10-02 12:24:55.247 2 DEBUG oslo_concurrency.lockutils [req-f7c3be0c-c1f5-4efb-8ed1-46df7566cd61 req-b315ec77-eece-40ae-93fd-adfe3ad0fe18 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:55 compute-0 nova_compute[257802]: 2025-10-02 12:24:55.247 2 DEBUG oslo_concurrency.lockutils [req-f7c3be0c-c1f5-4efb-8ed1-46df7566cd61 req-b315ec77-eece-40ae-93fd-adfe3ad0fe18 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:55 compute-0 nova_compute[257802]: 2025-10-02 12:24:55.247 2 DEBUG nova.compute.manager [req-f7c3be0c-c1f5-4efb-8ed1-46df7566cd61 req-b315ec77-eece-40ae-93fd-adfe3ad0fe18 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] No waiting events found dispatching network-vif-plugged-530b91ba-7eee-4225-b3cd-59e896f5b3d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:24:55 compute-0 nova_compute[257802]: 2025-10-02 12:24:55.248 2 WARNING nova.compute.manager [req-f7c3be0c-c1f5-4efb-8ed1-46df7566cd61 req-b315ec77-eece-40ae-93fd-adfe3ad0fe18 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Received unexpected event network-vif-plugged-530b91ba-7eee-4225-b3cd-59e896f5b3d1 for instance with vm_state paused and task_state None.
Oct 02 12:24:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 405 MiB data, 966 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 167 op/s
Oct 02 12:24:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2609753080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1362203460' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:24:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1362203460' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:24:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:24:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:56.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:24:56 compute-0 ceph-mon[73607]: pgmap v1879: 305 pgs: 305 active+clean; 405 MiB data, 966 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 167 op/s
Oct 02 12:24:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:57.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 305 active+clean; 422 MiB data, 976 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.6 MiB/s wr, 153 op/s
Oct 02 12:24:57 compute-0 nova_compute[257802]: 2025-10-02 12:24:57.603 2 DEBUG oslo_concurrency.lockutils [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:57 compute-0 nova_compute[257802]: 2025-10-02 12:24:57.604 2 DEBUG oslo_concurrency.lockutils [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:57 compute-0 nova_compute[257802]: 2025-10-02 12:24:57.604 2 INFO nova.compute.manager [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Shelving
Oct 02 12:24:57 compute-0 kernel: tap530b91ba-7e (unregistering): left promiscuous mode
Oct 02 12:24:57 compute-0 NetworkManager[44987]: <info>  [1759407897.6709] device (tap530b91ba-7e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:24:57 compute-0 nova_compute[257802]: 2025-10-02 12:24:57.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:57 compute-0 nova_compute[257802]: 2025-10-02 12:24:57.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:57 compute-0 nova_compute[257802]: 2025-10-02 12:24:57.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:57 compute-0 ovn_controller[148183]: 2025-10-02T12:24:57Z|00438|binding|INFO|Releasing lport 530b91ba-7eee-4225-b3cd-59e896f5b3d1 from this chassis (sb_readonly=0)
Oct 02 12:24:57 compute-0 ovn_controller[148183]: 2025-10-02T12:24:57Z|00439|binding|INFO|Setting lport 530b91ba-7eee-4225-b3cd-59e896f5b3d1 down in Southbound
Oct 02 12:24:57 compute-0 ovn_controller[148183]: 2025-10-02T12:24:57Z|00440|binding|INFO|Removing iface tap530b91ba-7e ovn-installed in OVS
Oct 02 12:24:57 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:57.698 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:00:30 10.100.0.5'], port_security=['fa:16:3e:44:00:30 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4035a600-4a5e-41ee-a619-d81e2c993b79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '10fff81da7a54740a53a0771ce916329', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3fd9677a-ce06-4783-a778-d114830d9fab', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5dc7931-b785-4336-99b8-936a17be87c3, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=530b91ba-7eee-4225-b3cd-59e896f5b3d1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:24:57 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:57.700 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 530b91ba-7eee-4225-b3cd-59e896f5b3d1 in datapath 4035a600-4a5e-41ee-a619-d81e2c993b79 unbound from our chassis
Oct 02 12:24:57 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:57.702 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4035a600-4a5e-41ee-a619-d81e2c993b79, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:24:57 compute-0 nova_compute[257802]: 2025-10-02 12:24:57.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:57 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:57.704 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9bf9d416-dade-49b6-ad07-23acbd08e4b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:57 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:57.705 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 namespace which is not needed anymore
Oct 02 12:24:57 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d00000066.scope: Deactivated successfully.
Oct 02 12:24:57 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d00000066.scope: Consumed 2.595s CPU time.
Oct 02 12:24:57 compute-0 systemd-machined[211836]: Machine qemu-50-instance-00000066 terminated.
Oct 02 12:24:57 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[319808]: [NOTICE]   (319812) : haproxy version is 2.8.14-c23fe91
Oct 02 12:24:57 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[319808]: [NOTICE]   (319812) : path to executable is /usr/sbin/haproxy
Oct 02 12:24:57 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[319808]: [WARNING]  (319812) : Exiting Master process...
Oct 02 12:24:57 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[319808]: [ALERT]    (319812) : Current worker (319814) exited with code 143 (Terminated)
Oct 02 12:24:57 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[319808]: [WARNING]  (319812) : All workers exited. Exiting... (0)
Oct 02 12:24:57 compute-0 systemd[1]: libpod-861e2fc0fc00f96f7ab5e80ddda74fadf76aaa83648a4f93c29412092a7028ac.scope: Deactivated successfully.
Oct 02 12:24:57 compute-0 podman[319850]: 2025-10-02 12:24:57.852043296 +0000 UTC m=+0.047481894 container died 861e2fc0fc00f96f7ab5e80ddda74fadf76aaa83648a4f93c29412092a7028ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:24:57 compute-0 nova_compute[257802]: 2025-10-02 12:24:57.860 2 INFO nova.virt.libvirt.driver [-] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Instance destroyed successfully.
Oct 02 12:24:57 compute-0 nova_compute[257802]: 2025-10-02 12:24:57.861 2 DEBUG nova.objects.instance [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'numa_topology' on Instance uuid 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:24:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-861e2fc0fc00f96f7ab5e80ddda74fadf76aaa83648a4f93c29412092a7028ac-userdata-shm.mount: Deactivated successfully.
Oct 02 12:24:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f397fd896f6558f93d56c786eb9b96e751398102a6978d4a41ea4ac3e41ef2e-merged.mount: Deactivated successfully.
Oct 02 12:24:58 compute-0 podman[319850]: 2025-10-02 12:24:58.054042974 +0000 UTC m=+0.249481602 container cleanup 861e2fc0fc00f96f7ab5e80ddda74fadf76aaa83648a4f93c29412092a7028ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:24:58 compute-0 systemd[1]: libpod-conmon-861e2fc0fc00f96f7ab5e80ddda74fadf76aaa83648a4f93c29412092a7028ac.scope: Deactivated successfully.
Oct 02 12:24:58 compute-0 podman[319890]: 2025-10-02 12:24:58.139766274 +0000 UTC m=+0.065818473 container remove 861e2fc0fc00f96f7ab5e80ddda74fadf76aaa83648a4f93c29412092a7028ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:24:58 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:58.146 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[38601403-acd4-4881-b554-a9ece406388a]: (4, ('Thu Oct  2 12:24:57 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 (861e2fc0fc00f96f7ab5e80ddda74fadf76aaa83648a4f93c29412092a7028ac)\n861e2fc0fc00f96f7ab5e80ddda74fadf76aaa83648a4f93c29412092a7028ac\nThu Oct  2 12:24:58 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 (861e2fc0fc00f96f7ab5e80ddda74fadf76aaa83648a4f93c29412092a7028ac)\n861e2fc0fc00f96f7ab5e80ddda74fadf76aaa83648a4f93c29412092a7028ac\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:58 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:58.148 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[dc95c8e4-4c93-4123-bb9c-99870f7c32ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:58 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:58.149 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4035a600-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:24:58 compute-0 nova_compute[257802]: 2025-10-02 12:24:58.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:58 compute-0 kernel: tap4035a600-40: left promiscuous mode
Oct 02 12:24:58 compute-0 nova_compute[257802]: 2025-10-02 12:24:58.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:58 compute-0 nova_compute[257802]: 2025-10-02 12:24:58.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:58 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:58.192 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[fe625c92-2dab-4256-ae81-4ed5f4a0f36a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:58 compute-0 nova_compute[257802]: 2025-10-02 12:24:58.192 2 INFO nova.virt.libvirt.driver [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Beginning cold snapshot process
Oct 02 12:24:58 compute-0 nova_compute[257802]: 2025-10-02 12:24:58.203 2 DEBUG nova.compute.manager [req-9313c972-e27d-456b-9d3b-8c89f15ff75b req-60301a4c-4190-44de-a415-8c81cacb1ac1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Received event network-vif-unplugged-530b91ba-7eee-4225-b3cd-59e896f5b3d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:24:58 compute-0 nova_compute[257802]: 2025-10-02 12:24:58.204 2 DEBUG oslo_concurrency.lockutils [req-9313c972-e27d-456b-9d3b-8c89f15ff75b req-60301a4c-4190-44de-a415-8c81cacb1ac1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:58 compute-0 nova_compute[257802]: 2025-10-02 12:24:58.206 2 DEBUG oslo_concurrency.lockutils [req-9313c972-e27d-456b-9d3b-8c89f15ff75b req-60301a4c-4190-44de-a415-8c81cacb1ac1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:58 compute-0 nova_compute[257802]: 2025-10-02 12:24:58.206 2 DEBUG oslo_concurrency.lockutils [req-9313c972-e27d-456b-9d3b-8c89f15ff75b req-60301a4c-4190-44de-a415-8c81cacb1ac1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:58 compute-0 nova_compute[257802]: 2025-10-02 12:24:58.206 2 DEBUG nova.compute.manager [req-9313c972-e27d-456b-9d3b-8c89f15ff75b req-60301a4c-4190-44de-a415-8c81cacb1ac1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] No waiting events found dispatching network-vif-unplugged-530b91ba-7eee-4225-b3cd-59e896f5b3d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:24:58 compute-0 nova_compute[257802]: 2025-10-02 12:24:58.207 2 WARNING nova.compute.manager [req-9313c972-e27d-456b-9d3b-8c89f15ff75b req-60301a4c-4190-44de-a415-8c81cacb1ac1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Received unexpected event network-vif-unplugged-530b91ba-7eee-4225-b3cd-59e896f5b3d1 for instance with vm_state paused and task_state shelving.
Oct 02 12:24:58 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:58.224 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c08f913c-8c3d-4c73-a46d-348ea63bc640]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:58 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:58.226 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[268813ed-d09b-4c33-a4d8-7a1e7b9fbf89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:58 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:58.248 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[aac4c7a3-7ca3-4702-b4c3-f49e3a1600fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596012, 'reachable_time': 44216, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319909, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:58 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:58.252 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:24:58 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:24:58.252 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[7a9284ea-eb1f-4f00-aed0-7c398e8b6df5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:58 compute-0 systemd[1]: run-netns-ovnmeta\x2d4035a600\x2d4a5e\x2d41ee\x2da619\x2dd81e2c993b79.mount: Deactivated successfully.
Oct 02 12:24:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:58.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:58 compute-0 ceph-mon[73607]: pgmap v1880: 305 pgs: 305 active+clean; 422 MiB data, 976 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.6 MiB/s wr, 153 op/s
Oct 02 12:24:58 compute-0 nova_compute[257802]: 2025-10-02 12:24:58.680 2 DEBUG nova.virt.libvirt.imagebackend [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] No parent info for c2d0c2bc-fe21-4689-86ae-d6728c15874c; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 12:24:58 compute-0 nova_compute[257802]: 2025-10-02 12:24:58.890 2 DEBUG nova.storage.rbd_utils [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] creating snapshot(3cb9850deb4946d09091b1895de9e773) on rbd image(4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:24:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:24:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:59.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:59 compute-0 nova_compute[257802]: 2025-10-02 12:24:59.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:59 compute-0 nova_compute[257802]: 2025-10-02 12:24:59.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:24:59 compute-0 nova_compute[257802]: 2025-10-02 12:24:59.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:24:59 compute-0 nova_compute[257802]: 2025-10-02 12:24:59.120 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:24:59 compute-0 nova_compute[257802]: 2025-10-02 12:24:59.120 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:24:59 compute-0 nova_compute[257802]: 2025-10-02 12:24:59.121 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:24:59 compute-0 nova_compute[257802]: 2025-10-02 12:24:59.121 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:24:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 451 MiB data, 987 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 202 op/s
Oct 02 12:24:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Oct 02 12:24:59 compute-0 nova_compute[257802]: 2025-10-02 12:24:59.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Oct 02 12:24:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2713298449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:59 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Oct 02 12:24:59 compute-0 nova_compute[257802]: 2025-10-02 12:24:59.883 2 DEBUG nova.storage.rbd_utils [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] cloning vms/4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158_disk@3cb9850deb4946d09091b1895de9e773 to images/cf521338-6de6-477f-b445-df9c97d57576 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:25:00 compute-0 nova_compute[257802]: 2025-10-02 12:25:00.042 2 DEBUG nova.storage.rbd_utils [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] flattening images/cf521338-6de6-477f-b445-df9c97d57576 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 12:25:00 compute-0 nova_compute[257802]: 2025-10-02 12:25:00.371 2 DEBUG nova.storage.rbd_utils [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] removing snapshot(3cb9850deb4946d09091b1895de9e773) on rbd image(4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:25:00 compute-0 nova_compute[257802]: 2025-10-02 12:25:00.383 2 DEBUG nova.compute.manager [req-d33115dc-5279-4168-8f18-aa9c9c3e4bc7 req-a2fe24bc-5b66-4af1-8afc-b04488c55863 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Received event network-vif-plugged-530b91ba-7eee-4225-b3cd-59e896f5b3d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:25:00 compute-0 nova_compute[257802]: 2025-10-02 12:25:00.383 2 DEBUG oslo_concurrency.lockutils [req-d33115dc-5279-4168-8f18-aa9c9c3e4bc7 req-a2fe24bc-5b66-4af1-8afc-b04488c55863 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:00 compute-0 nova_compute[257802]: 2025-10-02 12:25:00.384 2 DEBUG oslo_concurrency.lockutils [req-d33115dc-5279-4168-8f18-aa9c9c3e4bc7 req-a2fe24bc-5b66-4af1-8afc-b04488c55863 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:00 compute-0 nova_compute[257802]: 2025-10-02 12:25:00.384 2 DEBUG oslo_concurrency.lockutils [req-d33115dc-5279-4168-8f18-aa9c9c3e4bc7 req-a2fe24bc-5b66-4af1-8afc-b04488c55863 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:00 compute-0 nova_compute[257802]: 2025-10-02 12:25:00.385 2 DEBUG nova.compute.manager [req-d33115dc-5279-4168-8f18-aa9c9c3e4bc7 req-a2fe24bc-5b66-4af1-8afc-b04488c55863 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] No waiting events found dispatching network-vif-plugged-530b91ba-7eee-4225-b3cd-59e896f5b3d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:25:00 compute-0 nova_compute[257802]: 2025-10-02 12:25:00.385 2 WARNING nova.compute.manager [req-d33115dc-5279-4168-8f18-aa9c9c3e4bc7 req-a2fe24bc-5b66-4af1-8afc-b04488c55863 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Received unexpected event network-vif-plugged-530b91ba-7eee-4225-b3cd-59e896f5b3d1 for instance with vm_state paused and task_state shelving_image_uploading.
Oct 02 12:25:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:00.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Oct 02 12:25:00 compute-0 ceph-mon[73607]: pgmap v1881: 305 pgs: 305 active+clean; 451 MiB data, 987 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 202 op/s
Oct 02 12:25:00 compute-0 ceph-mon[73607]: osdmap e291: 3 total, 3 up, 3 in
Oct 02 12:25:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1937089493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Oct 02 12:25:00 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Oct 02 12:25:00 compute-0 nova_compute[257802]: 2025-10-02 12:25:00.875 2 DEBUG nova.storage.rbd_utils [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] creating snapshot(snap) on rbd image(cf521338-6de6-477f-b445-df9c97d57576) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:25:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:01.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:01 compute-0 nova_compute[257802]: 2025-10-02 12:25:01.179 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Updating instance_info_cache with network_info: [{"id": "530b91ba-7eee-4225-b3cd-59e896f5b3d1", "address": "fa:16:3e:44:00:30", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap530b91ba-7e", "ovs_interfaceid": "530b91ba-7eee-4225-b3cd-59e896f5b3d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:25:01 compute-0 nova_compute[257802]: 2025-10-02 12:25:01.300 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:25:01 compute-0 nova_compute[257802]: 2025-10-02 12:25:01.301 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:25:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 305 active+clean; 451 MiB data, 987 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.7 MiB/s wr, 166 op/s
Oct 02 12:25:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Oct 02 12:25:01 compute-0 ceph-mon[73607]: osdmap e292: 3 total, 3 up, 3 in
Oct 02 12:25:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4110028255' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:25:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2996827895' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:25:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Oct 02 12:25:01 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Oct 02 12:25:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:02 compute-0 sudo[320053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:02 compute-0 sudo[320053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:02 compute-0 sudo[320053]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:02 compute-0 sudo[320078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:02 compute-0 sudo[320078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:02 compute-0 sudo[320078]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:02.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:02 compute-0 nova_compute[257802]: 2025-10-02 12:25:02.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:02 compute-0 nova_compute[257802]: 2025-10-02 12:25:02.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:02 compute-0 ceph-mon[73607]: pgmap v1884: 305 pgs: 305 active+clean; 451 MiB data, 987 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.7 MiB/s wr, 166 op/s
Oct 02 12:25:02 compute-0 ceph-mon[73607]: osdmap e293: 3 total, 3 up, 3 in
Oct 02 12:25:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:25:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:03.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:25:03 compute-0 nova_compute[257802]: 2025-10-02 12:25:03.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:03 compute-0 nova_compute[257802]: 2025-10-02 12:25:03.209 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:03 compute-0 nova_compute[257802]: 2025-10-02 12:25:03.210 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:03 compute-0 nova_compute[257802]: 2025-10-02 12:25:03.210 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:03 compute-0 nova_compute[257802]: 2025-10-02 12:25:03.210 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:25:03 compute-0 nova_compute[257802]: 2025-10-02 12:25:03.210 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:25:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 489 MiB data, 1005 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 4.9 MiB/s wr, 232 op/s
Oct 02 12:25:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:25:03 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/510253597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:03 compute-0 nova_compute[257802]: 2025-10-02 12:25:03.669 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:25:03 compute-0 nova_compute[257802]: 2025-10-02 12:25:03.765 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:25:03 compute-0 nova_compute[257802]: 2025-10-02 12:25:03.766 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:25:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/510253597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:03 compute-0 nova_compute[257802]: 2025-10-02 12:25:03.967 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:25:03 compute-0 nova_compute[257802]: 2025-10-02 12:25:03.969 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4517MB free_disk=20.80990219116211GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:25:03 compute-0 nova_compute[257802]: 2025-10-02 12:25:03.970 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:03 compute-0 nova_compute[257802]: 2025-10-02 12:25:03.970 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:04 compute-0 nova_compute[257802]: 2025-10-02 12:25:04.086 2 INFO nova.virt.libvirt.driver [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Snapshot image upload complete
Oct 02 12:25:04 compute-0 nova_compute[257802]: 2025-10-02 12:25:04.087 2 DEBUG nova.compute.manager [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:25:04 compute-0 nova_compute[257802]: 2025-10-02 12:25:04.089 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:25:04 compute-0 nova_compute[257802]: 2025-10-02 12:25:04.090 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:25:04 compute-0 nova_compute[257802]: 2025-10-02 12:25:04.090 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:25:04 compute-0 nova_compute[257802]: 2025-10-02 12:25:04.159 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:25:04 compute-0 nova_compute[257802]: 2025-10-02 12:25:04.203 2 INFO nova.compute.manager [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Shelve offloading
Oct 02 12:25:04 compute-0 nova_compute[257802]: 2025-10-02 12:25:04.214 2 INFO nova.virt.libvirt.driver [-] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Instance destroyed successfully.
Oct 02 12:25:04 compute-0 nova_compute[257802]: 2025-10-02 12:25:04.215 2 DEBUG nova.compute.manager [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:25:04 compute-0 nova_compute[257802]: 2025-10-02 12:25:04.219 2 DEBUG oslo_concurrency.lockutils [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "refresh_cache-4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:25:04 compute-0 nova_compute[257802]: 2025-10-02 12:25:04.219 2 DEBUG oslo_concurrency.lockutils [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquired lock "refresh_cache-4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:25:04 compute-0 nova_compute[257802]: 2025-10-02 12:25:04.220 2 DEBUG nova.network.neutron [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:25:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:25:04 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3789793926' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:04 compute-0 nova_compute[257802]: 2025-10-02 12:25:04.599 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:25:04 compute-0 nova_compute[257802]: 2025-10-02 12:25:04.605 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:25:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:04.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:04 compute-0 nova_compute[257802]: 2025-10-02 12:25:04.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:04 compute-0 nova_compute[257802]: 2025-10-02 12:25:04.728 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:25:04 compute-0 podman[320151]: 2025-10-02 12:25:04.93602122 +0000 UTC m=+0.064387238 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:25:04 compute-0 podman[320150]: 2025-10-02 12:25:04.953574041 +0000 UTC m=+0.079430207 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:25:04 compute-0 podman[320152]: 2025-10-02 12:25:04.954184295 +0000 UTC m=+0.068957069 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 02 12:25:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:05 compute-0 ceph-mon[73607]: pgmap v1886: 305 pgs: 305 active+clean; 489 MiB data, 1005 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 4.9 MiB/s wr, 232 op/s
Oct 02 12:25:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:05.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3789793926' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:05 compute-0 nova_compute[257802]: 2025-10-02 12:25:05.223 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:25:05 compute-0 nova_compute[257802]: 2025-10-02 12:25:05.223 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.253s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 305 active+clean; 498 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 3.6 MiB/s wr, 194 op/s
Oct 02 12:25:05 compute-0 nova_compute[257802]: 2025-10-02 12:25:05.679 2 DEBUG nova.network.neutron [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Updating instance_info_cache with network_info: [{"id": "530b91ba-7eee-4225-b3cd-59e896f5b3d1", "address": "fa:16:3e:44:00:30", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap530b91ba-7e", "ovs_interfaceid": "530b91ba-7eee-4225-b3cd-59e896f5b3d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:25:05 compute-0 nova_compute[257802]: 2025-10-02 12:25:05.967 2 DEBUG oslo_concurrency.lockutils [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Releasing lock "refresh_cache-4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:25:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:06.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Oct 02 12:25:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:07.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Oct 02 12:25:07 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Oct 02 12:25:07 compute-0 ceph-mon[73607]: pgmap v1887: 305 pgs: 305 active+clean; 498 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 3.6 MiB/s wr, 194 op/s
Oct 02 12:25:07 compute-0 ceph-mon[73607]: osdmap e294: 3 total, 3 up, 3 in
Oct 02 12:25:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1889: 305 pgs: 305 active+clean; 498 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.3 MiB/s wr, 258 op/s
Oct 02 12:25:07 compute-0 nova_compute[257802]: 2025-10-02 12:25:07.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:08 compute-0 nova_compute[257802]: 2025-10-02 12:25:08.223 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:08 compute-0 ceph-mon[73607]: pgmap v1889: 305 pgs: 305 active+clean; 498 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.3 MiB/s wr, 258 op/s
Oct 02 12:25:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:08.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:08 compute-0 nova_compute[257802]: 2025-10-02 12:25:08.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:09.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:09 compute-0 nova_compute[257802]: 2025-10-02 12:25:09.335 2 INFO nova.virt.libvirt.driver [-] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Instance destroyed successfully.
Oct 02 12:25:09 compute-0 nova_compute[257802]: 2025-10-02 12:25:09.336 2 DEBUG nova.objects.instance [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'resources' on Instance uuid 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:25:09 compute-0 nova_compute[257802]: 2025-10-02 12:25:09.442 2 DEBUG nova.virt.libvirt.vif [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:24:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1375297496',display_name='tempest-ServerActionsTestOtherB-server-1375297496',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1375297496',id=102,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:24:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='10fff81da7a54740a53a0771ce916329',ramdisk_id='',reservation_id='r-afvu73c5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1686489955',owner_user_name='tempest-ServerActionsTestOtherB-1686489955-project-member',shelved_at='2025-10-02T12:25:04.087254',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='cf521338-6de6-477f-b445-df9c97d57576'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:24:58Z,user_data=None,user_id='25468893d71641a385711fd2982bb00b',uuid=4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "530b91ba-7eee-4225-b3cd-59e896f5b3d1", "address": "fa:16:3e:44:00:30", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap530b91ba-7e", "ovs_interfaceid": "530b91ba-7eee-4225-b3cd-59e896f5b3d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:25:09 compute-0 nova_compute[257802]: 2025-10-02 12:25:09.443 2 DEBUG nova.network.os_vif_util [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converting VIF {"id": "530b91ba-7eee-4225-b3cd-59e896f5b3d1", "address": "fa:16:3e:44:00:30", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap530b91ba-7e", "ovs_interfaceid": "530b91ba-7eee-4225-b3cd-59e896f5b3d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:25:09 compute-0 nova_compute[257802]: 2025-10-02 12:25:09.445 2 DEBUG nova.network.os_vif_util [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:00:30,bridge_name='br-int',has_traffic_filtering=True,id=530b91ba-7eee-4225-b3cd-59e896f5b3d1,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap530b91ba-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:25:09 compute-0 nova_compute[257802]: 2025-10-02 12:25:09.446 2 DEBUG os_vif [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:00:30,bridge_name='br-int',has_traffic_filtering=True,id=530b91ba-7eee-4225-b3cd-59e896f5b3d1,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap530b91ba-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:25:09 compute-0 nova_compute[257802]: 2025-10-02 12:25:09.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:09 compute-0 nova_compute[257802]: 2025-10-02 12:25:09.449 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap530b91ba-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:25:09 compute-0 nova_compute[257802]: 2025-10-02 12:25:09.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:09 compute-0 nova_compute[257802]: 2025-10-02 12:25:09.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:09 compute-0 nova_compute[257802]: 2025-10-02 12:25:09.455 2 INFO os_vif [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:00:30,bridge_name='br-int',has_traffic_filtering=True,id=530b91ba-7eee-4225-b3cd-59e896f5b3d1,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap530b91ba-7e')
Oct 02 12:25:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 500 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 2.7 MiB/s wr, 271 op/s
Oct 02 12:25:09 compute-0 nova_compute[257802]: 2025-10-02 12:25:09.493 2 DEBUG nova.compute.manager [req-af2f6780-8742-47e7-b8bb-94b8f35dcfa6 req-d6a08a0e-7969-45b7-9bd6-7f0a7c5b0b6d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Received event network-changed-530b91ba-7eee-4225-b3cd-59e896f5b3d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:25:09 compute-0 nova_compute[257802]: 2025-10-02 12:25:09.493 2 DEBUG nova.compute.manager [req-af2f6780-8742-47e7-b8bb-94b8f35dcfa6 req-d6a08a0e-7969-45b7-9bd6-7f0a7c5b0b6d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Refreshing instance network info cache due to event network-changed-530b91ba-7eee-4225-b3cd-59e896f5b3d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:25:09 compute-0 nova_compute[257802]: 2025-10-02 12:25:09.494 2 DEBUG oslo_concurrency.lockutils [req-af2f6780-8742-47e7-b8bb-94b8f35dcfa6 req-d6a08a0e-7969-45b7-9bd6-7f0a7c5b0b6d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:25:09 compute-0 nova_compute[257802]: 2025-10-02 12:25:09.494 2 DEBUG oslo_concurrency.lockutils [req-af2f6780-8742-47e7-b8bb-94b8f35dcfa6 req-d6a08a0e-7969-45b7-9bd6-7f0a7c5b0b6d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:25:09 compute-0 nova_compute[257802]: 2025-10-02 12:25:09.494 2 DEBUG nova.network.neutron [req-af2f6780-8742-47e7-b8bb-94b8f35dcfa6 req-d6a08a0e-7969-45b7-9bd6-7f0a7c5b0b6d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Refreshing network info cache for port 530b91ba-7eee-4225-b3cd-59e896f5b3d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:25:10 compute-0 nova_compute[257802]: 2025-10-02 12:25:10.435 2 INFO nova.virt.libvirt.driver [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Deleting instance files /var/lib/nova/instances/4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158_del
Oct 02 12:25:10 compute-0 nova_compute[257802]: 2025-10-02 12:25:10.436 2 INFO nova.virt.libvirt.driver [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Deletion of /var/lib/nova/instances/4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158_del complete
Oct 02 12:25:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:10.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:10 compute-0 nova_compute[257802]: 2025-10-02 12:25:10.686 2 INFO nova.scheduler.client.report [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Deleted allocations for instance 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158
Oct 02 12:25:10 compute-0 ceph-mon[73607]: pgmap v1890: 305 pgs: 305 active+clean; 500 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 2.7 MiB/s wr, 271 op/s
Oct 02 12:25:10 compute-0 nova_compute[257802]: 2025-10-02 12:25:10.848 2 DEBUG oslo_concurrency.lockutils [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:10 compute-0 nova_compute[257802]: 2025-10-02 12:25:10.849 2 DEBUG oslo_concurrency.lockutils [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:10 compute-0 nova_compute[257802]: 2025-10-02 12:25:10.888 2 DEBUG oslo_concurrency.processutils [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:25:10 compute-0 podman[320227]: 2025-10-02 12:25:10.975625224 +0000 UTC m=+0.114723241 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:25:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:11.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:25:11 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2310746216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:11 compute-0 nova_compute[257802]: 2025-10-02 12:25:11.389 2 DEBUG oslo_concurrency.processutils [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:25:11 compute-0 nova_compute[257802]: 2025-10-02 12:25:11.396 2 DEBUG nova.compute.provider_tree [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:25:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 305 active+clean; 500 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 519 KiB/s wr, 190 op/s
Oct 02 12:25:11 compute-0 nova_compute[257802]: 2025-10-02 12:25:11.621 2 DEBUG nova.scheduler.client.report [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:25:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2310746216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:11 compute-0 nova_compute[257802]: 2025-10-02 12:25:11.852 2 DEBUG oslo_concurrency.lockutils [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:12 compute-0 nova_compute[257802]: 2025-10-02 12:25:12.267 2 DEBUG oslo_concurrency.lockutils [None req-7ecec5c8-1137-457d-9ff1-b4a649cc5405 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 14.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:12.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:25:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:25:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:25:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:25:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:25:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:25:12 compute-0 nova_compute[257802]: 2025-10-02 12:25:12.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:12 compute-0 nova_compute[257802]: 2025-10-02 12:25:12.859 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407897.8582985, 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:25:12 compute-0 nova_compute[257802]: 2025-10-02 12:25:12.860 2 INFO nova.compute.manager [-] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] VM Stopped (Lifecycle Event)
Oct 02 12:25:12 compute-0 ceph-mon[73607]: pgmap v1891: 305 pgs: 305 active+clean; 500 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 519 KiB/s wr, 190 op/s
Oct 02 12:25:12 compute-0 nova_compute[257802]: 2025-10-02 12:25:12.993 2 DEBUG nova.compute.manager [None req-b2747fa8-4d32-4ed2-8df2-1c9f20981427 - - - - - -] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:25:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:25:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:13.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:25:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 305 active+clean; 463 MiB data, 989 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 383 KiB/s wr, 164 op/s
Oct 02 12:25:13 compute-0 nova_compute[257802]: 2025-10-02 12:25:13.639 2 DEBUG nova.network.neutron [req-af2f6780-8742-47e7-b8bb-94b8f35dcfa6 req-d6a08a0e-7969-45b7-9bd6-7f0a7c5b0b6d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Updated VIF entry in instance network info cache for port 530b91ba-7eee-4225-b3cd-59e896f5b3d1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:25:13 compute-0 nova_compute[257802]: 2025-10-02 12:25:13.640 2 DEBUG nova.network.neutron [req-af2f6780-8742-47e7-b8bb-94b8f35dcfa6 req-d6a08a0e-7969-45b7-9bd6-7f0a7c5b0b6d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158] Updating instance_info_cache with network_info: [{"id": "530b91ba-7eee-4225-b3cd-59e896f5b3d1", "address": "fa:16:3e:44:00:30", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": null, "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap530b91ba-7e", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:25:13 compute-0 nova_compute[257802]: 2025-10-02 12:25:13.814 2 DEBUG oslo_concurrency.lockutils [req-af2f6780-8742-47e7-b8bb-94b8f35dcfa6 req-d6a08a0e-7969-45b7-9bd6-7f0a7c5b0b6d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-4a3ab1f2-5ef4-4738-a6dc-0a5ae24af158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:25:14 compute-0 nova_compute[257802]: 2025-10-02 12:25:14.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:14.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:15.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:15 compute-0 ceph-mon[73607]: pgmap v1892: 305 pgs: 305 active+clean; 463 MiB data, 989 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 383 KiB/s wr, 164 op/s
Oct 02 12:25:15 compute-0 nova_compute[257802]: 2025-10-02 12:25:15.479 2 DEBUG oslo_concurrency.lockutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Acquiring lock "c2a73310-9eb1-4b57-9fa0-92190f32a5d4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:15 compute-0 nova_compute[257802]: 2025-10-02 12:25:15.480 2 DEBUG oslo_concurrency.lockutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Lock "c2a73310-9eb1-4b57-9fa0-92190f32a5d4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1893: 305 pgs: 305 active+clean; 453 MiB data, 988 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 30 KiB/s wr, 134 op/s
Oct 02 12:25:15 compute-0 nova_compute[257802]: 2025-10-02 12:25:15.855 2 DEBUG nova.compute.manager [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:25:16 compute-0 nova_compute[257802]: 2025-10-02 12:25:16.143 2 DEBUG oslo_concurrency.lockutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:16 compute-0 nova_compute[257802]: 2025-10-02 12:25:16.144 2 DEBUG oslo_concurrency.lockutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:16 compute-0 nova_compute[257802]: 2025-10-02 12:25:16.153 2 DEBUG nova.virt.hardware [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:25:16 compute-0 nova_compute[257802]: 2025-10-02 12:25:16.153 2 INFO nova.compute.claims [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:25:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:16.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:16 compute-0 nova_compute[257802]: 2025-10-02 12:25:16.731 2 DEBUG oslo_concurrency.processutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:25:16 compute-0 ceph-mon[73607]: pgmap v1893: 305 pgs: 305 active+clean; 453 MiB data, 988 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 30 KiB/s wr, 134 op/s
Oct 02 12:25:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:17.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:25:17 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1232993325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:17 compute-0 nova_compute[257802]: 2025-10-02 12:25:17.223 2 DEBUG oslo_concurrency.processutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:25:17 compute-0 nova_compute[257802]: 2025-10-02 12:25:17.230 2 DEBUG nova.compute.provider_tree [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:25:17 compute-0 nova_compute[257802]: 2025-10-02 12:25:17.340 2 DEBUG nova.scheduler.client.report [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:25:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 305 active+clean; 456 MiB data, 990 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 197 KiB/s wr, 78 op/s
Oct 02 12:25:17 compute-0 nova_compute[257802]: 2025-10-02 12:25:17.610 2 DEBUG oslo_concurrency.lockutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.466s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:17 compute-0 nova_compute[257802]: 2025-10-02 12:25:17.612 2 DEBUG nova.compute.manager [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:25:17 compute-0 nova_compute[257802]: 2025-10-02 12:25:17.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:17 compute-0 nova_compute[257802]: 2025-10-02 12:25:17.861 2 DEBUG nova.compute.manager [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:25:17 compute-0 nova_compute[257802]: 2025-10-02 12:25:17.861 2 DEBUG nova.network.neutron [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:25:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1232993325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:18 compute-0 nova_compute[257802]: 2025-10-02 12:25:18.064 2 INFO nova.virt.libvirt.driver [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:25:18 compute-0 nova_compute[257802]: 2025-10-02 12:25:18.189 2 DEBUG nova.policy [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '49ce34df164c4ce7a673d8c2ff42451a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bb169657a27d4e129e8479b6c03d6093', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:25:18 compute-0 nova_compute[257802]: 2025-10-02 12:25:18.399 2 DEBUG nova.compute.manager [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:25:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:18.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:19 compute-0 nova_compute[257802]: 2025-10-02 12:25:19.084 2 DEBUG nova.compute.manager [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:25:19 compute-0 nova_compute[257802]: 2025-10-02 12:25:19.086 2 DEBUG nova.virt.libvirt.driver [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:25:19 compute-0 nova_compute[257802]: 2025-10-02 12:25:19.086 2 INFO nova.virt.libvirt.driver [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Creating image(s)
Oct 02 12:25:19 compute-0 nova_compute[257802]: 2025-10-02 12:25:19.111 2 DEBUG nova.storage.rbd_utils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] rbd image c2a73310-9eb1-4b57-9fa0-92190f32a5d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:25:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:19.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:19 compute-0 nova_compute[257802]: 2025-10-02 12:25:19.135 2 DEBUG nova.storage.rbd_utils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] rbd image c2a73310-9eb1-4b57-9fa0-92190f32a5d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:25:19 compute-0 nova_compute[257802]: 2025-10-02 12:25:19.160 2 DEBUG nova.storage.rbd_utils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] rbd image c2a73310-9eb1-4b57-9fa0-92190f32a5d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:25:19 compute-0 nova_compute[257802]: 2025-10-02 12:25:19.163 2 DEBUG oslo_concurrency.processutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:25:19 compute-0 nova_compute[257802]: 2025-10-02 12:25:19.221 2 DEBUG oslo_concurrency.processutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:25:19 compute-0 nova_compute[257802]: 2025-10-02 12:25:19.222 2 DEBUG oslo_concurrency.lockutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:19 compute-0 nova_compute[257802]: 2025-10-02 12:25:19.223 2 DEBUG oslo_concurrency.lockutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:19 compute-0 nova_compute[257802]: 2025-10-02 12:25:19.223 2 DEBUG oslo_concurrency.lockutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:19 compute-0 nova_compute[257802]: 2025-10-02 12:25:19.302 2 DEBUG nova.storage.rbd_utils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] rbd image c2a73310-9eb1-4b57-9fa0-92190f32a5d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:25:19 compute-0 nova_compute[257802]: 2025-10-02 12:25:19.305 2 DEBUG oslo_concurrency.processutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 c2a73310-9eb1-4b57-9fa0-92190f32a5d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:25:19 compute-0 ceph-mon[73607]: pgmap v1894: 305 pgs: 305 active+clean; 456 MiB data, 990 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 197 KiB/s wr, 78 op/s
Oct 02 12:25:19 compute-0 nova_compute[257802]: 2025-10-02 12:25:19.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 305 active+clean; 476 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.7 MiB/s wr, 120 op/s
Oct 02 12:25:19 compute-0 nova_compute[257802]: 2025-10-02 12:25:19.710 2 DEBUG nova.network.neutron [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Successfully created port: 9f5480ea-1866-4f99-81bd-19bff4c25882 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:25:19 compute-0 sudo[320397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:19 compute-0 sudo[320397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:19 compute-0 sudo[320397]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:20 compute-0 sudo[320422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:25:20 compute-0 sudo[320422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:20 compute-0 sudo[320422]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:20 compute-0 sudo[320447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:20 compute-0 sudo[320447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:20 compute-0 sudo[320447]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:20 compute-0 nova_compute[257802]: 2025-10-02 12:25:20.197 2 DEBUG oslo_concurrency.processutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 c2a73310-9eb1-4b57-9fa0-92190f32a5d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.891s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:25:20 compute-0 sudo[320472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 12:25:20 compute-0 sudo[320472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:20 compute-0 nova_compute[257802]: 2025-10-02 12:25:20.454 2 DEBUG nova.storage.rbd_utils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] resizing rbd image c2a73310-9eb1-4b57-9fa0-92190f32a5d4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:25:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 12:25:20 compute-0 sudo[320472]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:25:20 compute-0 ceph-mon[73607]: pgmap v1895: 305 pgs: 305 active+clean; 476 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.7 MiB/s wr, 120 op/s
Oct 02 12:25:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2698005824' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:25:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/25624865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:25:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:25:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:20.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:25:20 compute-0 nova_compute[257802]: 2025-10-02 12:25:20.763 2 DEBUG nova.network.neutron [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Successfully updated port: 9f5480ea-1866-4f99-81bd-19bff4c25882 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:25:20 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:25:20 compute-0 nova_compute[257802]: 2025-10-02 12:25:20.774 2 DEBUG nova.objects.instance [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Lazy-loading 'migration_context' on Instance uuid c2a73310-9eb1-4b57-9fa0-92190f32a5d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:25:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 12:25:20 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:25:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:25:20 compute-0 nova_compute[257802]: 2025-10-02 12:25:20.929 2 DEBUG oslo_concurrency.lockutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Acquiring lock "refresh_cache-c2a73310-9eb1-4b57-9fa0-92190f32a5d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:25:20 compute-0 nova_compute[257802]: 2025-10-02 12:25:20.929 2 DEBUG oslo_concurrency.lockutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Acquired lock "refresh_cache-c2a73310-9eb1-4b57-9fa0-92190f32a5d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:25:20 compute-0 nova_compute[257802]: 2025-10-02 12:25:20.929 2 DEBUG nova.network.neutron [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:25:20 compute-0 nova_compute[257802]: 2025-10-02 12:25:20.970 2 DEBUG nova.compute.manager [req-c443a6cb-b693-4a5d-975d-2b1c8306a094 req-2e1ff7c6-9700-43f7-80c6-017f0e7bf09f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Received event network-changed-9f5480ea-1866-4f99-81bd-19bff4c25882 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:25:20 compute-0 nova_compute[257802]: 2025-10-02 12:25:20.970 2 DEBUG nova.compute.manager [req-c443a6cb-b693-4a5d-975d-2b1c8306a094 req-2e1ff7c6-9700-43f7-80c6-017f0e7bf09f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Refreshing instance network info cache due to event network-changed-9f5480ea-1866-4f99-81bd-19bff4c25882. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:25:20 compute-0 nova_compute[257802]: 2025-10-02 12:25:20.971 2 DEBUG oslo_concurrency.lockutils [req-c443a6cb-b693-4a5d-975d-2b1c8306a094 req-2e1ff7c6-9700-43f7-80c6-017f0e7bf09f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-c2a73310-9eb1-4b57-9fa0-92190f32a5d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:25:21 compute-0 nova_compute[257802]: 2025-10-02 12:25:21.002 2 DEBUG nova.virt.libvirt.driver [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:25:21 compute-0 nova_compute[257802]: 2025-10-02 12:25:21.002 2 DEBUG nova.virt.libvirt.driver [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Ensure instance console log exists: /var/lib/nova/instances/c2a73310-9eb1-4b57-9fa0-92190f32a5d4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:25:21 compute-0 nova_compute[257802]: 2025-10-02 12:25:21.003 2 DEBUG oslo_concurrency.lockutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:21 compute-0 nova_compute[257802]: 2025-10-02 12:25:21.004 2 DEBUG oslo_concurrency.lockutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:21 compute-0 nova_compute[257802]: 2025-10-02 12:25:21.004 2 DEBUG oslo_concurrency.lockutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:21 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:25:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:21.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:21 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:25:21 compute-0 nova_compute[257802]: 2025-10-02 12:25:21.191 2 DEBUG nova.network.neutron [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:25:21 compute-0 sudo[320590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:21 compute-0 sudo[320590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:21 compute-0 sudo[320590]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:21 compute-0 sudo[320615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:25:21 compute-0 sudo[320615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:21 compute-0 sudo[320615]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:21 compute-0 sudo[320640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:21 compute-0 sudo[320640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:21 compute-0 sudo[320640]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:21 compute-0 sudo[320665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:25:21 compute-0 sudo[320665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 305 active+clean; 476 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 318 KiB/s rd, 1.7 MiB/s wr, 82 op/s
Oct 02 12:25:21 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:25:21 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:25:21 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:25:21 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:25:22 compute-0 sudo[320665]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:25:22 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:25:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:25:22 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:25:22 compute-0 nova_compute[257802]: 2025-10-02 12:25:22.075 2 DEBUG nova.network.neutron [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Updating instance_info_cache with network_info: [{"id": "9f5480ea-1866-4f99-81bd-19bff4c25882", "address": "fa:16:3e:d2:e6:ba", "network": {"id": "8afce80e-86e3-4c7b-a554-68fd52bf1d40", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1101810385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb169657a27d4e129e8479b6c03d6093", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f5480ea-18", "ovs_interfaceid": "9f5480ea-1866-4f99-81bd-19bff4c25882", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:25:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:25:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:22 compute-0 nova_compute[257802]: 2025-10-02 12:25:22.199 2 DEBUG oslo_concurrency.lockutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Releasing lock "refresh_cache-c2a73310-9eb1-4b57-9fa0-92190f32a5d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:25:22 compute-0 nova_compute[257802]: 2025-10-02 12:25:22.199 2 DEBUG nova.compute.manager [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Instance network_info: |[{"id": "9f5480ea-1866-4f99-81bd-19bff4c25882", "address": "fa:16:3e:d2:e6:ba", "network": {"id": "8afce80e-86e3-4c7b-a554-68fd52bf1d40", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1101810385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb169657a27d4e129e8479b6c03d6093", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f5480ea-18", "ovs_interfaceid": "9f5480ea-1866-4f99-81bd-19bff4c25882", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:25:22 compute-0 nova_compute[257802]: 2025-10-02 12:25:22.200 2 DEBUG oslo_concurrency.lockutils [req-c443a6cb-b693-4a5d-975d-2b1c8306a094 req-2e1ff7c6-9700-43f7-80c6-017f0e7bf09f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-c2a73310-9eb1-4b57-9fa0-92190f32a5d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:25:22 compute-0 nova_compute[257802]: 2025-10-02 12:25:22.201 2 DEBUG nova.network.neutron [req-c443a6cb-b693-4a5d-975d-2b1c8306a094 req-2e1ff7c6-9700-43f7-80c6-017f0e7bf09f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Refreshing network info cache for port 9f5480ea-1866-4f99-81bd-19bff4c25882 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:25:22 compute-0 nova_compute[257802]: 2025-10-02 12:25:22.207 2 DEBUG nova.virt.libvirt.driver [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Start _get_guest_xml network_info=[{"id": "9f5480ea-1866-4f99-81bd-19bff4c25882", "address": "fa:16:3e:d2:e6:ba", "network": {"id": "8afce80e-86e3-4c7b-a554-68fd52bf1d40", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1101810385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb169657a27d4e129e8479b6c03d6093", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f5480ea-18", "ovs_interfaceid": "9f5480ea-1866-4f99-81bd-19bff4c25882", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:25:22 compute-0 nova_compute[257802]: 2025-10-02 12:25:22.214 2 WARNING nova.virt.libvirt.driver [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:25:22 compute-0 sudo[320720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:22 compute-0 sudo[320720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:22 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:25:22 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 00368493-bafc-4cb9-9dbf-ab9ba12e899a does not exist
Oct 02 12:25:22 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev f8b28289-222f-4946-9e87-bb6907b91f38 does not exist
Oct 02 12:25:22 compute-0 sudo[320720]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:22 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev dc09a217-516e-4032-a4de-e3d0ca37e6d6 does not exist
Oct 02 12:25:22 compute-0 nova_compute[257802]: 2025-10-02 12:25:22.228 2 DEBUG nova.virt.libvirt.host [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:25:22 compute-0 nova_compute[257802]: 2025-10-02 12:25:22.230 2 DEBUG nova.virt.libvirt.host [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:25:22 compute-0 nova_compute[257802]: 2025-10-02 12:25:22.237 2 DEBUG nova.virt.libvirt.host [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:25:22 compute-0 nova_compute[257802]: 2025-10-02 12:25:22.238 2 DEBUG nova.virt.libvirt.host [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:25:22 compute-0 nova_compute[257802]: 2025-10-02 12:25:22.239 2 DEBUG nova.virt.libvirt.driver [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:25:22 compute-0 nova_compute[257802]: 2025-10-02 12:25:22.240 2 DEBUG nova.virt.hardware [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:25:22 compute-0 nova_compute[257802]: 2025-10-02 12:25:22.240 2 DEBUG nova.virt.hardware [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:25:22 compute-0 nova_compute[257802]: 2025-10-02 12:25:22.241 2 DEBUG nova.virt.hardware [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:25:22 compute-0 nova_compute[257802]: 2025-10-02 12:25:22.241 2 DEBUG nova.virt.hardware [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:25:22 compute-0 nova_compute[257802]: 2025-10-02 12:25:22.241 2 DEBUG nova.virt.hardware [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:25:22 compute-0 nova_compute[257802]: 2025-10-02 12:25:22.242 2 DEBUG nova.virt.hardware [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:25:22 compute-0 nova_compute[257802]: 2025-10-02 12:25:22.242 2 DEBUG nova.virt.hardware [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:25:22 compute-0 nova_compute[257802]: 2025-10-02 12:25:22.242 2 DEBUG nova.virt.hardware [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:25:22 compute-0 nova_compute[257802]: 2025-10-02 12:25:22.243 2 DEBUG nova.virt.hardware [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:25:22 compute-0 nova_compute[257802]: 2025-10-02 12:25:22.243 2 DEBUG nova.virt.hardware [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:25:22 compute-0 nova_compute[257802]: 2025-10-02 12:25:22.243 2 DEBUG nova.virt.hardware [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:25:22 compute-0 nova_compute[257802]: 2025-10-02 12:25:22.249 2 DEBUG oslo_concurrency.processutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:25:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:25:22 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:25:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:25:22 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:25:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:25:22 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:25:22 compute-0 sudo[320745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:22 compute-0 sudo[320745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:22 compute-0 sudo[320745]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:22 compute-0 sudo[320763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:22 compute-0 sudo[320763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:22 compute-0 sudo[320763]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:22 compute-0 sudo[320796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:25:22 compute-0 sudo[320796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:22 compute-0 sudo[320796]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:22 compute-0 sudo[320840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:22 compute-0 sudo[320840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:22 compute-0 sudo[320840]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:22 compute-0 sudo[320865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:25:22 compute-0 sudo[320865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:22.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:25:22 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1024544825' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:25:22 compute-0 nova_compute[257802]: 2025-10-02 12:25:22.685 2 DEBUG oslo_concurrency.processutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:25:22 compute-0 nova_compute[257802]: 2025-10-02 12:25:22.723 2 DEBUG nova.storage.rbd_utils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] rbd image c2a73310-9eb1-4b57-9fa0-92190f32a5d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:25:22 compute-0 nova_compute[257802]: 2025-10-02 12:25:22.736 2 DEBUG oslo_concurrency.processutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:25:22 compute-0 nova_compute[257802]: 2025-10-02 12:25:22.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:22 compute-0 ceph-mon[73607]: pgmap v1896: 305 pgs: 305 active+clean; 476 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 318 KiB/s rd, 1.7 MiB/s wr, 82 op/s
Oct 02 12:25:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:25:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:25:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:25:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:25:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:25:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:25:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1024544825' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:25:22 compute-0 podman[320962]: 2025-10-02 12:25:22.993982908 +0000 UTC m=+0.095531991 container create da050f175770b266c0d75877558bc424aee789adba50f1cb6da2c792caaf590d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:25:23 compute-0 podman[320962]: 2025-10-02 12:25:22.939219767 +0000 UTC m=+0.040768870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:25:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:23.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:23 compute-0 systemd[1]: Started libpod-conmon-da050f175770b266c0d75877558bc424aee789adba50f1cb6da2c792caaf590d.scope.
Oct 02 12:25:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:25:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:25:23 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/502696595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:25:23 compute-0 nova_compute[257802]: 2025-10-02 12:25:23.264 2 DEBUG oslo_concurrency.processutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:25:23 compute-0 nova_compute[257802]: 2025-10-02 12:25:23.267 2 DEBUG nova.virt.libvirt.vif [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:25:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-599010257',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-599010257',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-599010257',id=104,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bb169657a27d4e129e8479b6c03d6093',ramdisk_id='',reservation_id='r-yyvz79uu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-2043219092',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-2043219092-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:25:18Z,user_data=None,user_id='49ce34df164c4ce7a673d8c2ff42451a',uuid=c2a73310-9eb1-4b57-9fa0-92190f32a5d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9f5480ea-1866-4f99-81bd-19bff4c25882", "address": "fa:16:3e:d2:e6:ba", "network": {"id": "8afce80e-86e3-4c7b-a554-68fd52bf1d40", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1101810385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb169657a27d4e129e8479b6c03d6093", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f5480ea-18", "ovs_interfaceid": "9f5480ea-1866-4f99-81bd-19bff4c25882", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:25:23 compute-0 nova_compute[257802]: 2025-10-02 12:25:23.267 2 DEBUG nova.network.os_vif_util [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Converting VIF {"id": "9f5480ea-1866-4f99-81bd-19bff4c25882", "address": "fa:16:3e:d2:e6:ba", "network": {"id": "8afce80e-86e3-4c7b-a554-68fd52bf1d40", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1101810385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb169657a27d4e129e8479b6c03d6093", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f5480ea-18", "ovs_interfaceid": "9f5480ea-1866-4f99-81bd-19bff4c25882", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:25:23 compute-0 nova_compute[257802]: 2025-10-02 12:25:23.268 2 DEBUG nova.network.os_vif_util [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:e6:ba,bridge_name='br-int',has_traffic_filtering=True,id=9f5480ea-1866-4f99-81bd-19bff4c25882,network=Network(8afce80e-86e3-4c7b-a554-68fd52bf1d40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f5480ea-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:25:23 compute-0 nova_compute[257802]: 2025-10-02 12:25:23.269 2 DEBUG nova.objects.instance [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Lazy-loading 'pci_devices' on Instance uuid c2a73310-9eb1-4b57-9fa0-92190f32a5d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:25:23 compute-0 nova_compute[257802]: 2025-10-02 12:25:23.285 2 DEBUG nova.virt.libvirt.driver [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:25:23 compute-0 nova_compute[257802]:   <uuid>c2a73310-9eb1-4b57-9fa0-92190f32a5d4</uuid>
Oct 02 12:25:23 compute-0 nova_compute[257802]:   <name>instance-00000068</name>
Oct 02 12:25:23 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:25:23 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:25:23 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:25:23 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:       <nova:name>tempest-ServersNegativeTestMultiTenantJSON-server-599010257</nova:name>
Oct 02 12:25:23 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:25:22</nova:creationTime>
Oct 02 12:25:23 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:25:23 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:25:23 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:25:23 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:25:23 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:25:23 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:25:23 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:25:23 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:25:23 compute-0 nova_compute[257802]:         <nova:user uuid="49ce34df164c4ce7a673d8c2ff42451a">tempest-ServersNegativeTestMultiTenantJSON-2043219092-project-member</nova:user>
Oct 02 12:25:23 compute-0 nova_compute[257802]:         <nova:project uuid="bb169657a27d4e129e8479b6c03d6093">tempest-ServersNegativeTestMultiTenantJSON-2043219092</nova:project>
Oct 02 12:25:23 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:25:23 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:25:23 compute-0 nova_compute[257802]:         <nova:port uuid="9f5480ea-1866-4f99-81bd-19bff4c25882">
Oct 02 12:25:23 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:25:23 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:25:23 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:25:23 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <system>
Oct 02 12:25:23 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:25:23 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:25:23 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:25:23 compute-0 nova_compute[257802]:       <entry name="serial">c2a73310-9eb1-4b57-9fa0-92190f32a5d4</entry>
Oct 02 12:25:23 compute-0 nova_compute[257802]:       <entry name="uuid">c2a73310-9eb1-4b57-9fa0-92190f32a5d4</entry>
Oct 02 12:25:23 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     </system>
Oct 02 12:25:23 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:25:23 compute-0 nova_compute[257802]:   <os>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:   </os>
Oct 02 12:25:23 compute-0 nova_compute[257802]:   <features>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:   </features>
Oct 02 12:25:23 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:25:23 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:25:23 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:25:23 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/c2a73310-9eb1-4b57-9fa0-92190f32a5d4_disk">
Oct 02 12:25:23 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:       </source>
Oct 02 12:25:23 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:25:23 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:25:23 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:25:23 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/c2a73310-9eb1-4b57-9fa0-92190f32a5d4_disk.config">
Oct 02 12:25:23 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:       </source>
Oct 02 12:25:23 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:25:23 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:25:23 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:25:23 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:d2:e6:ba"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:       <target dev="tap9f5480ea-18"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:25:23 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/c2a73310-9eb1-4b57-9fa0-92190f32a5d4/console.log" append="off"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <video>
Oct 02 12:25:23 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     </video>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:25:23 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:25:23 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:25:23 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:25:23 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:25:23 compute-0 nova_compute[257802]: </domain>
Oct 02 12:25:23 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:25:23 compute-0 nova_compute[257802]: 2025-10-02 12:25:23.286 2 DEBUG nova.compute.manager [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Preparing to wait for external event network-vif-plugged-9f5480ea-1866-4f99-81bd-19bff4c25882 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:25:23 compute-0 nova_compute[257802]: 2025-10-02 12:25:23.286 2 DEBUG oslo_concurrency.lockutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Acquiring lock "c2a73310-9eb1-4b57-9fa0-92190f32a5d4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:23 compute-0 nova_compute[257802]: 2025-10-02 12:25:23.286 2 DEBUG oslo_concurrency.lockutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Lock "c2a73310-9eb1-4b57-9fa0-92190f32a5d4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:23 compute-0 nova_compute[257802]: 2025-10-02 12:25:23.287 2 DEBUG oslo_concurrency.lockutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Lock "c2a73310-9eb1-4b57-9fa0-92190f32a5d4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:23 compute-0 nova_compute[257802]: 2025-10-02 12:25:23.287 2 DEBUG nova.virt.libvirt.vif [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:25:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-599010257',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-599010257',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-599010257',id=104,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bb169657a27d4e129e8479b6c03d6093',ramdisk_id='',reservation_id='r-yyvz79uu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-2043219092',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-2043219092-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:25:18Z,user_data=None,user_id='49ce34df164c4ce7a673d8c2ff42451a',uuid=c2a73310-9eb1-4b57-9fa0-92190f32a5d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9f5480ea-1866-4f99-81bd-19bff4c25882", "address": "fa:16:3e:d2:e6:ba", "network": {"id": "8afce80e-86e3-4c7b-a554-68fd52bf1d40", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1101810385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb169657a27d4e129e8479b6c03d6093", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f5480ea-18", "ovs_interfaceid": "9f5480ea-1866-4f99-81bd-19bff4c25882", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:25:23 compute-0 nova_compute[257802]: 2025-10-02 12:25:23.287 2 DEBUG nova.network.os_vif_util [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Converting VIF {"id": "9f5480ea-1866-4f99-81bd-19bff4c25882", "address": "fa:16:3e:d2:e6:ba", "network": {"id": "8afce80e-86e3-4c7b-a554-68fd52bf1d40", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1101810385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb169657a27d4e129e8479b6c03d6093", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f5480ea-18", "ovs_interfaceid": "9f5480ea-1866-4f99-81bd-19bff4c25882", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:25:23 compute-0 nova_compute[257802]: 2025-10-02 12:25:23.288 2 DEBUG nova.network.os_vif_util [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:e6:ba,bridge_name='br-int',has_traffic_filtering=True,id=9f5480ea-1866-4f99-81bd-19bff4c25882,network=Network(8afce80e-86e3-4c7b-a554-68fd52bf1d40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f5480ea-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:25:23 compute-0 nova_compute[257802]: 2025-10-02 12:25:23.288 2 DEBUG os_vif [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:e6:ba,bridge_name='br-int',has_traffic_filtering=True,id=9f5480ea-1866-4f99-81bd-19bff4c25882,network=Network(8afce80e-86e3-4c7b-a554-68fd52bf1d40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f5480ea-18') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:25:23 compute-0 nova_compute[257802]: 2025-10-02 12:25:23.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:23 compute-0 nova_compute[257802]: 2025-10-02 12:25:23.289 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:25:23 compute-0 nova_compute[257802]: 2025-10-02 12:25:23.289 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:25:23 compute-0 nova_compute[257802]: 2025-10-02 12:25:23.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:23 compute-0 nova_compute[257802]: 2025-10-02 12:25:23.292 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f5480ea-18, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:25:23 compute-0 nova_compute[257802]: 2025-10-02 12:25:23.292 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9f5480ea-18, col_values=(('external_ids', {'iface-id': '9f5480ea-1866-4f99-81bd-19bff4c25882', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d2:e6:ba', 'vm-uuid': 'c2a73310-9eb1-4b57-9fa0-92190f32a5d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:25:23 compute-0 nova_compute[257802]: 2025-10-02 12:25:23.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:23 compute-0 NetworkManager[44987]: <info>  [1759407923.2953] manager: (tap9f5480ea-18): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/209)
Oct 02 12:25:23 compute-0 nova_compute[257802]: 2025-10-02 12:25:23.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:25:23 compute-0 nova_compute[257802]: 2025-10-02 12:25:23.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:23 compute-0 nova_compute[257802]: 2025-10-02 12:25:23.305 2 INFO os_vif [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:e6:ba,bridge_name='br-int',has_traffic_filtering=True,id=9f5480ea-1866-4f99-81bd-19bff4c25882,network=Network(8afce80e-86e3-4c7b-a554-68fd52bf1d40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f5480ea-18')
Oct 02 12:25:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Oct 02 12:25:23 compute-0 podman[320962]: 2025-10-02 12:25:23.451747591 +0000 UTC m=+0.553296714 container init da050f175770b266c0d75877558bc424aee789adba50f1cb6da2c792caaf590d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wilbur, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:25:23 compute-0 podman[320962]: 2025-10-02 12:25:23.466511423 +0000 UTC m=+0.568060546 container start da050f175770b266c0d75877558bc424aee789adba50f1cb6da2c792caaf590d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:25:23 compute-0 systemd[1]: libpod-da050f175770b266c0d75877558bc424aee789adba50f1cb6da2c792caaf590d.scope: Deactivated successfully.
Oct 02 12:25:23 compute-0 tender_wilbur[320986]: 167 167
Oct 02 12:25:23 compute-0 conmon[320986]: conmon da050f175770b266c0d7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-da050f175770b266c0d75877558bc424aee789adba50f1cb6da2c792caaf590d.scope/container/memory.events
Oct 02 12:25:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 305 active+clean; 513 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 375 KiB/s rd, 3.2 MiB/s wr, 126 op/s
Oct 02 12:25:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Oct 02 12:25:23 compute-0 nova_compute[257802]: 2025-10-02 12:25:23.585 2 DEBUG nova.virt.libvirt.driver [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:25:23 compute-0 nova_compute[257802]: 2025-10-02 12:25:23.588 2 DEBUG nova.virt.libvirt.driver [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:25:23 compute-0 nova_compute[257802]: 2025-10-02 12:25:23.588 2 DEBUG nova.virt.libvirt.driver [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] No VIF found with MAC fa:16:3e:d2:e6:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:25:23 compute-0 nova_compute[257802]: 2025-10-02 12:25:23.589 2 INFO nova.virt.libvirt.driver [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Using config drive
Oct 02 12:25:23 compute-0 podman[320962]: 2025-10-02 12:25:23.590952341 +0000 UTC m=+0.692501464 container attach da050f175770b266c0d75877558bc424aee789adba50f1cb6da2c792caaf590d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:25:23 compute-0 podman[320962]: 2025-10-02 12:25:23.59418474 +0000 UTC m=+0.695733853 container died da050f175770b266c0d75877558bc424aee789adba50f1cb6da2c792caaf590d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 12:25:23 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Oct 02 12:25:23 compute-0 nova_compute[257802]: 2025-10-02 12:25:23.736 2 DEBUG nova.storage.rbd_utils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] rbd image c2a73310-9eb1-4b57-9fa0-92190f32a5d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:25:24 compute-0 nova_compute[257802]: 2025-10-02 12:25:24.082 2 INFO nova.virt.libvirt.driver [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Creating config drive at /var/lib/nova/instances/c2a73310-9eb1-4b57-9fa0-92190f32a5d4/disk.config
Oct 02 12:25:24 compute-0 nova_compute[257802]: 2025-10-02 12:25:24.086 2 DEBUG oslo_concurrency.processutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c2a73310-9eb1-4b57-9fa0-92190f32a5d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdeknyas1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:25:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/502696595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:25:24 compute-0 ceph-mon[73607]: osdmap e295: 3 total, 3 up, 3 in
Oct 02 12:25:24 compute-0 nova_compute[257802]: 2025-10-02 12:25:24.215 2 DEBUG oslo_concurrency.processutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c2a73310-9eb1-4b57-9fa0-92190f32a5d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdeknyas1" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:25:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ecafcddffe0655d263a2f2c10fe6faa42af496ab785c1534edd8dc3440b0841-merged.mount: Deactivated successfully.
Oct 02 12:25:24 compute-0 nova_compute[257802]: 2025-10-02 12:25:24.542 2 DEBUG nova.storage.rbd_utils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] rbd image c2a73310-9eb1-4b57-9fa0-92190f32a5d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:25:24 compute-0 nova_compute[257802]: 2025-10-02 12:25:24.549 2 DEBUG oslo_concurrency.processutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c2a73310-9eb1-4b57-9fa0-92190f32a5d4/disk.config c2a73310-9eb1-4b57-9fa0-92190f32a5d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:25:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:24.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:24 compute-0 nova_compute[257802]: 2025-10-02 12:25:24.757 2 DEBUG oslo_concurrency.processutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c2a73310-9eb1-4b57-9fa0-92190f32a5d4/disk.config c2a73310-9eb1-4b57-9fa0-92190f32a5d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:25:24 compute-0 nova_compute[257802]: 2025-10-02 12:25:24.759 2 INFO nova.virt.libvirt.driver [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Deleting local config drive /var/lib/nova/instances/c2a73310-9eb1-4b57-9fa0-92190f32a5d4/disk.config because it was imported into RBD.
Oct 02 12:25:24 compute-0 kernel: tap9f5480ea-18: entered promiscuous mode
Oct 02 12:25:24 compute-0 nova_compute[257802]: 2025-10-02 12:25:24.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:24 compute-0 ovn_controller[148183]: 2025-10-02T12:25:24Z|00441|binding|INFO|Claiming lport 9f5480ea-1866-4f99-81bd-19bff4c25882 for this chassis.
Oct 02 12:25:24 compute-0 ovn_controller[148183]: 2025-10-02T12:25:24Z|00442|binding|INFO|9f5480ea-1866-4f99-81bd-19bff4c25882: Claiming fa:16:3e:d2:e6:ba 10.100.0.10
Oct 02 12:25:24 compute-0 NetworkManager[44987]: <info>  [1759407924.8243] manager: (tap9f5480ea-18): new Tun device (/org/freedesktop/NetworkManager/Devices/210)
Oct 02 12:25:24 compute-0 ovn_controller[148183]: 2025-10-02T12:25:24Z|00443|binding|INFO|Setting lport 9f5480ea-1866-4f99-81bd-19bff4c25882 ovn-installed in OVS
Oct 02 12:25:24 compute-0 nova_compute[257802]: 2025-10-02 12:25:24.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:24 compute-0 ovn_controller[148183]: 2025-10-02T12:25:24Z|00444|binding|INFO|Setting lport 9f5480ea-1866-4f99-81bd-19bff4c25882 up in Southbound
Oct 02 12:25:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:24.842 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:e6:ba 10.100.0.10'], port_security=['fa:16:3e:d2:e6:ba 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c2a73310-9eb1-4b57-9fa0-92190f32a5d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8afce80e-86e3-4c7b-a554-68fd52bf1d40', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bb169657a27d4e129e8479b6c03d6093', 'neutron:revision_number': '2', 'neutron:security_group_ids': '20680fd9-94bc-4386-a20e-65de1b26fa5f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd2a9f71-55b9-4501-a937-2e22f46eca55, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=9f5480ea-1866-4f99-81bd-19bff4c25882) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:25:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:24.843 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 9f5480ea-1866-4f99-81bd-19bff4c25882 in datapath 8afce80e-86e3-4c7b-a554-68fd52bf1d40 bound to our chassis
Oct 02 12:25:24 compute-0 nova_compute[257802]: 2025-10-02 12:25:24.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:24.844 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8afce80e-86e3-4c7b-a554-68fd52bf1d40
Oct 02 12:25:24 compute-0 nova_compute[257802]: 2025-10-02 12:25:24.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:24 compute-0 systemd-udevd[321078]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:25:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:24.856 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[60caf452-2473-4844-aecc-e1afa79b6046]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:24.857 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8afce80e-81 in ovnmeta-8afce80e-86e3-4c7b-a554-68fd52bf1d40 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:25:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:24.859 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8afce80e-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:25:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:24.859 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[bc9ed8bb-e047-4f3f-b7d8-52f65f4c4903]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:24.860 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[44d00d8a-5778-4584-a72c-92e3a61e71a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:24 compute-0 systemd-machined[211836]: New machine qemu-51-instance-00000068.
Oct 02 12:25:24 compute-0 NetworkManager[44987]: <info>  [1759407924.8687] device (tap9f5480ea-18): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:25:24 compute-0 NetworkManager[44987]: <info>  [1759407924.8701] device (tap9f5480ea-18): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:25:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:24.871 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[a7980bad-fab2-49b5-a7ad-ca2374eade55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:24 compute-0 podman[320962]: 2025-10-02 12:25:24.876054139 +0000 UTC m=+1.977603222 container remove da050f175770b266c0d75877558bc424aee789adba50f1cb6da2c792caaf590d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wilbur, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 12:25:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:24.884 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[38d2610c-8137-40c6-a82e-df64e8c8c790]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:24 compute-0 systemd[1]: Started Virtual Machine qemu-51-instance-00000068.
Oct 02 12:25:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:24.909 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[9bda1c09-d7e9-477a-bd48-5899c602304c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:24.914 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a3dc7461-c38b-4325-8381-0e7064fa0a8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:24 compute-0 NetworkManager[44987]: <info>  [1759407924.9159] manager: (tap8afce80e-80): new Veth device (/org/freedesktop/NetworkManager/Devices/211)
Oct 02 12:25:24 compute-0 systemd-udevd[321082]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:25:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:24.955 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[b06f6353-d475-4198-9372-d009bb3e982d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:24.959 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[8efa7e57-99b8-4553-9527-f74b86b0c4bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:24 compute-0 systemd[1]: libpod-conmon-da050f175770b266c0d75877558bc424aee789adba50f1cb6da2c792caaf590d.scope: Deactivated successfully.
Oct 02 12:25:24 compute-0 NetworkManager[44987]: <info>  [1759407924.9835] device (tap8afce80e-80): carrier: link connected
Oct 02 12:25:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:24.988 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[298d1050-b604-4fc1-b0aa-e45ee4881551]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:25.007 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0d56ab9d-8ad3-4de0-afb2-025a09768b5e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8afce80e-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:73:49'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 137], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599260, 'reachable_time': 22130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321117, 'error': None, 'target': 'ovnmeta-8afce80e-86e3-4c7b-a554-68fd52bf1d40', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:25.024 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0a447db9-e481-48a4-9425-33f8d7c1bbea]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4b:7349'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 599260, 'tstamp': 599260}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321121, 'error': None, 'target': 'ovnmeta-8afce80e-86e3-4c7b-a554-68fd52bf1d40', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:25.041 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[acc214e0-1f82-4266-8abb-124d9639491b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8afce80e-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:73:49'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 137], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599260, 'reachable_time': 22130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 321132, 'error': None, 'target': 'ovnmeta-8afce80e-86e3-4c7b-a554-68fd52bf1d40', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:25.071 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4d9b9a33-1397-4468-b24f-00d04e60d830]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:25.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:25.129 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[66f86391-4730-42f1-830a-3dcd4cc311fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:25.130 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8afce80e-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:25.131 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:25.131 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8afce80e-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:25:25 compute-0 nova_compute[257802]: 2025-10-02 12:25:25.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:25 compute-0 kernel: tap8afce80e-80: entered promiscuous mode
Oct 02 12:25:25 compute-0 NetworkManager[44987]: <info>  [1759407925.1341] manager: (tap8afce80e-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/212)
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:25.136 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8afce80e-80, col_values=(('external_ids', {'iface-id': 'e5a167e2-9aa9-416f-a3cf-0864f497a6b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:25:25 compute-0 podman[321119]: 2025-10-02 12:25:25.039002361 +0000 UTC m=+0.025516316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:25:25 compute-0 nova_compute[257802]: 2025-10-02 12:25:25.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:25 compute-0 nova_compute[257802]: 2025-10-02 12:25:25.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:25 compute-0 ovn_controller[148183]: 2025-10-02T12:25:25Z|00445|binding|INFO|Releasing lport e5a167e2-9aa9-416f-a3cf-0864f497a6b7 from this chassis (sb_readonly=0)
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:25.140 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8afce80e-86e3-4c7b-a554-68fd52bf1d40.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8afce80e-86e3-4c7b-a554-68fd52bf1d40.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:25.142 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[84551009-5dbc-4f37-aa71-63c7beeac27b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:25.143 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-8afce80e-86e3-4c7b-a554-68fd52bf1d40
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/8afce80e-86e3-4c7b-a554-68fd52bf1d40.pid.haproxy
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 8afce80e-86e3-4c7b-a554-68fd52bf1d40
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:25:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:25.145 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8afce80e-86e3-4c7b-a554-68fd52bf1d40', 'env', 'PROCESS_TAG=haproxy-8afce80e-86e3-4c7b-a554-68fd52bf1d40', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8afce80e-86e3-4c7b-a554-68fd52bf1d40.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:25:25 compute-0 nova_compute[257802]: 2025-10-02 12:25:25.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:25 compute-0 nova_compute[257802]: 2025-10-02 12:25:25.446 2 DEBUG nova.compute.manager [req-56de873c-55b3-48ae-a683-1eb32d118b11 req-e417945f-1889-41c0-8475-f1578beebc0d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Received event network-vif-plugged-9f5480ea-1866-4f99-81bd-19bff4c25882 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:25:25 compute-0 nova_compute[257802]: 2025-10-02 12:25:25.447 2 DEBUG oslo_concurrency.lockutils [req-56de873c-55b3-48ae-a683-1eb32d118b11 req-e417945f-1889-41c0-8475-f1578beebc0d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "c2a73310-9eb1-4b57-9fa0-92190f32a5d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:25 compute-0 nova_compute[257802]: 2025-10-02 12:25:25.447 2 DEBUG oslo_concurrency.lockutils [req-56de873c-55b3-48ae-a683-1eb32d118b11 req-e417945f-1889-41c0-8475-f1578beebc0d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c2a73310-9eb1-4b57-9fa0-92190f32a5d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:25 compute-0 nova_compute[257802]: 2025-10-02 12:25:25.447 2 DEBUG oslo_concurrency.lockutils [req-56de873c-55b3-48ae-a683-1eb32d118b11 req-e417945f-1889-41c0-8475-f1578beebc0d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c2a73310-9eb1-4b57-9fa0-92190f32a5d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:25 compute-0 nova_compute[257802]: 2025-10-02 12:25:25.447 2 DEBUG nova.compute.manager [req-56de873c-55b3-48ae-a683-1eb32d118b11 req-e417945f-1889-41c0-8475-f1578beebc0d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Processing event network-vif-plugged-9f5480ea-1866-4f99-81bd-19bff4c25882 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:25:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 305 active+clean; 532 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 428 KiB/s rd, 4.7 MiB/s wr, 119 op/s
Oct 02 12:25:25 compute-0 podman[321119]: 2025-10-02 12:25:25.551176717 +0000 UTC m=+0.537690652 container create d72aa2b50026ffa8b586e0819814cf1c508ae16195f03f5a34073ddf51155bd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 12:25:25 compute-0 ceph-mon[73607]: pgmap v1897: 305 pgs: 305 active+clean; 513 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 375 KiB/s rd, 3.2 MiB/s wr, 126 op/s
Oct 02 12:25:25 compute-0 systemd[1]: Started libpod-conmon-d72aa2b50026ffa8b586e0819814cf1c508ae16195f03f5a34073ddf51155bd8.scope.
Oct 02 12:25:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:25:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836469a21904e609eaa225542e626b416e3ce12ea110c3d6ea4841c3f9126e6c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:25:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836469a21904e609eaa225542e626b416e3ce12ea110c3d6ea4841c3f9126e6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:25:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836469a21904e609eaa225542e626b416e3ce12ea110c3d6ea4841c3f9126e6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:25:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836469a21904e609eaa225542e626b416e3ce12ea110c3d6ea4841c3f9126e6c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:25:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836469a21904e609eaa225542e626b416e3ce12ea110c3d6ea4841c3f9126e6c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:25:25 compute-0 nova_compute[257802]: 2025-10-02 12:25:25.868 2 DEBUG nova.network.neutron [req-c443a6cb-b693-4a5d-975d-2b1c8306a094 req-2e1ff7c6-9700-43f7-80c6-017f0e7bf09f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Updated VIF entry in instance network info cache for port 9f5480ea-1866-4f99-81bd-19bff4c25882. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:25:25 compute-0 nova_compute[257802]: 2025-10-02 12:25:25.869 2 DEBUG nova.network.neutron [req-c443a6cb-b693-4a5d-975d-2b1c8306a094 req-2e1ff7c6-9700-43f7-80c6-017f0e7bf09f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Updating instance_info_cache with network_info: [{"id": "9f5480ea-1866-4f99-81bd-19bff4c25882", "address": "fa:16:3e:d2:e6:ba", "network": {"id": "8afce80e-86e3-4c7b-a554-68fd52bf1d40", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1101810385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb169657a27d4e129e8479b6c03d6093", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f5480ea-18", "ovs_interfaceid": "9f5480ea-1866-4f99-81bd-19bff4c25882", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:25:25 compute-0 nova_compute[257802]: 2025-10-02 12:25:25.903 2 DEBUG oslo_concurrency.lockutils [req-c443a6cb-b693-4a5d-975d-2b1c8306a094 req-2e1ff7c6-9700-43f7-80c6-017f0e7bf09f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-c2a73310-9eb1-4b57-9fa0-92190f32a5d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:25:26 compute-0 podman[321119]: 2025-10-02 12:25:26.053585634 +0000 UTC m=+1.040099589 container init d72aa2b50026ffa8b586e0819814cf1c508ae16195f03f5a34073ddf51155bd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 12:25:26 compute-0 podman[321119]: 2025-10-02 12:25:26.063369173 +0000 UTC m=+1.049883108 container start d72aa2b50026ffa8b586e0819814cf1c508ae16195f03f5a34073ddf51155bd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tharp, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:25:26 compute-0 nova_compute[257802]: 2025-10-02 12:25:26.160 2 DEBUG nova.compute.manager [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:25:26 compute-0 nova_compute[257802]: 2025-10-02 12:25:26.161 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407926.1599636, c2a73310-9eb1-4b57-9fa0-92190f32a5d4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:25:26 compute-0 nova_compute[257802]: 2025-10-02 12:25:26.162 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] VM Started (Lifecycle Event)
Oct 02 12:25:26 compute-0 nova_compute[257802]: 2025-10-02 12:25:26.165 2 DEBUG nova.virt.libvirt.driver [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:25:26 compute-0 nova_compute[257802]: 2025-10-02 12:25:26.170 2 INFO nova.virt.libvirt.driver [-] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Instance spawned successfully.
Oct 02 12:25:26 compute-0 nova_compute[257802]: 2025-10-02 12:25:26.170 2 DEBUG nova.virt.libvirt.driver [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:25:26 compute-0 nova_compute[257802]: 2025-10-02 12:25:26.190 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:25:26 compute-0 nova_compute[257802]: 2025-10-02 12:25:26.197 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:25:26 compute-0 nova_compute[257802]: 2025-10-02 12:25:26.200 2 DEBUG nova.virt.libvirt.driver [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:25:26 compute-0 nova_compute[257802]: 2025-10-02 12:25:26.201 2 DEBUG nova.virt.libvirt.driver [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:25:26 compute-0 nova_compute[257802]: 2025-10-02 12:25:26.201 2 DEBUG nova.virt.libvirt.driver [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:25:26 compute-0 nova_compute[257802]: 2025-10-02 12:25:26.201 2 DEBUG nova.virt.libvirt.driver [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:25:26 compute-0 nova_compute[257802]: 2025-10-02 12:25:26.202 2 DEBUG nova.virt.libvirt.driver [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:25:26 compute-0 nova_compute[257802]: 2025-10-02 12:25:26.202 2 DEBUG nova.virt.libvirt.driver [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:25:26 compute-0 nova_compute[257802]: 2025-10-02 12:25:26.227 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:25:26 compute-0 nova_compute[257802]: 2025-10-02 12:25:26.227 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407926.1612053, c2a73310-9eb1-4b57-9fa0-92190f32a5d4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:25:26 compute-0 nova_compute[257802]: 2025-10-02 12:25:26.227 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] VM Paused (Lifecycle Event)
Oct 02 12:25:26 compute-0 nova_compute[257802]: 2025-10-02 12:25:26.259 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:25:26 compute-0 nova_compute[257802]: 2025-10-02 12:25:26.262 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407926.16485, c2a73310-9eb1-4b57-9fa0-92190f32a5d4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:25:26 compute-0 nova_compute[257802]: 2025-10-02 12:25:26.262 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] VM Resumed (Lifecycle Event)
Oct 02 12:25:26 compute-0 nova_compute[257802]: 2025-10-02 12:25:26.268 2 INFO nova.compute.manager [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Took 7.18 seconds to spawn the instance on the hypervisor.
Oct 02 12:25:26 compute-0 nova_compute[257802]: 2025-10-02 12:25:26.268 2 DEBUG nova.compute.manager [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:25:26 compute-0 podman[321119]: 2025-10-02 12:25:26.273666455 +0000 UTC m=+1.260180420 container attach d72aa2b50026ffa8b586e0819814cf1c508ae16195f03f5a34073ddf51155bd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:25:26 compute-0 nova_compute[257802]: 2025-10-02 12:25:26.283 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:25:26 compute-0 nova_compute[257802]: 2025-10-02 12:25:26.286 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:25:26 compute-0 nova_compute[257802]: 2025-10-02 12:25:26.330 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:25:26 compute-0 nova_compute[257802]: 2025-10-02 12:25:26.361 2 INFO nova.compute.manager [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Took 10.24 seconds to build instance.
Oct 02 12:25:26 compute-0 nova_compute[257802]: 2025-10-02 12:25:26.380 2 DEBUG oslo_concurrency.lockutils [None req-13716b5b-34f6-4670-ab85-982bac3e91c0 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Lock "c2a73310-9eb1-4b57-9fa0-92190f32a5d4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:26.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:26 compute-0 podman[321213]: 2025-10-02 12:25:26.569686255 +0000 UTC m=+0.227458682 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:25:26 compute-0 podman[321213]: 2025-10-02 12:25:26.857879065 +0000 UTC m=+0.515651482 container create b934284e7f33ccf0a4cb5d7a1c4687d1390c3795f09ce33fab147d03e18f6589 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8afce80e-86e3-4c7b-a554-68fd52bf1d40, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:25:26 compute-0 systemd[1]: Started libpod-conmon-b934284e7f33ccf0a4cb5d7a1c4687d1390c3795f09ce33fab147d03e18f6589.scope.
Oct 02 12:25:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:26.944 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:26.946 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:26.947 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:25:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f05850ddb4871cdde67311d432f4c29a17c25456588f0f6df44cc8226c521a6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:25:27 compute-0 gracious_tharp[321203]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:25:27 compute-0 gracious_tharp[321203]: --> relative data size: 1.0
Oct 02 12:25:27 compute-0 gracious_tharp[321203]: --> All data devices are unavailable
Oct 02 12:25:27 compute-0 systemd[1]: libpod-d72aa2b50026ffa8b586e0819814cf1c508ae16195f03f5a34073ddf51155bd8.scope: Deactivated successfully.
Oct 02 12:25:27 compute-0 conmon[321203]: conmon d72aa2b50026ffa8b586 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d72aa2b50026ffa8b586e0819814cf1c508ae16195f03f5a34073ddf51155bd8.scope/container/memory.events
Oct 02 12:25:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:27.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:27 compute-0 podman[321119]: 2025-10-02 12:25:27.148682618 +0000 UTC m=+2.135196553 container died d72aa2b50026ffa8b586e0819814cf1c508ae16195f03f5a34073ddf51155bd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tharp, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Oct 02 12:25:27 compute-0 podman[321213]: 2025-10-02 12:25:27.15776799 +0000 UTC m=+0.815540417 container init b934284e7f33ccf0a4cb5d7a1c4687d1390c3795f09ce33fab147d03e18f6589 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8afce80e-86e3-4c7b-a554-68fd52bf1d40, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 12:25:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:27 compute-0 podman[321213]: 2025-10-02 12:25:27.171284232 +0000 UTC m=+0.829056649 container start b934284e7f33ccf0a4cb5d7a1c4687d1390c3795f09ce33fab147d03e18f6589 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8afce80e-86e3-4c7b-a554-68fd52bf1d40, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 12:25:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Oct 02 12:25:27 compute-0 neutron-haproxy-ovnmeta-8afce80e-86e3-4c7b-a554-68fd52bf1d40[321229]: [NOTICE]   (321254) : New worker (321256) forked
Oct 02 12:25:27 compute-0 neutron-haproxy-ovnmeta-8afce80e-86e3-4c7b-a554-68fd52bf1d40[321229]: [NOTICE]   (321254) : Loading success.
Oct 02 12:25:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Oct 02 12:25:27 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Oct 02 12:25:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 305 active+clean; 556 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.3 MiB/s wr, 132 op/s
Oct 02 12:25:27 compute-0 ceph-mon[73607]: pgmap v1899: 305 pgs: 305 active+clean; 532 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 428 KiB/s rd, 4.7 MiB/s wr, 119 op/s
Oct 02 12:25:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-836469a21904e609eaa225542e626b416e3ce12ea110c3d6ea4841c3f9126e6c-merged.mount: Deactivated successfully.
Oct 02 12:25:27 compute-0 nova_compute[257802]: 2025-10-02 12:25:27.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:27 compute-0 nova_compute[257802]: 2025-10-02 12:25:27.895 2 DEBUG nova.compute.manager [req-d76c14b4-f584-4a86-8d7a-587b9764898a req-433b154b-2bf4-4483-82d2-adb2a958dada d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Received event network-vif-plugged-9f5480ea-1866-4f99-81bd-19bff4c25882 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:25:27 compute-0 nova_compute[257802]: 2025-10-02 12:25:27.896 2 DEBUG oslo_concurrency.lockutils [req-d76c14b4-f584-4a86-8d7a-587b9764898a req-433b154b-2bf4-4483-82d2-adb2a958dada d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "c2a73310-9eb1-4b57-9fa0-92190f32a5d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:27 compute-0 nova_compute[257802]: 2025-10-02 12:25:27.896 2 DEBUG oslo_concurrency.lockutils [req-d76c14b4-f584-4a86-8d7a-587b9764898a req-433b154b-2bf4-4483-82d2-adb2a958dada d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c2a73310-9eb1-4b57-9fa0-92190f32a5d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:27 compute-0 nova_compute[257802]: 2025-10-02 12:25:27.896 2 DEBUG oslo_concurrency.lockutils [req-d76c14b4-f584-4a86-8d7a-587b9764898a req-433b154b-2bf4-4483-82d2-adb2a958dada d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c2a73310-9eb1-4b57-9fa0-92190f32a5d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:27 compute-0 nova_compute[257802]: 2025-10-02 12:25:27.896 2 DEBUG nova.compute.manager [req-d76c14b4-f584-4a86-8d7a-587b9764898a req-433b154b-2bf4-4483-82d2-adb2a958dada d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] No waiting events found dispatching network-vif-plugged-9f5480ea-1866-4f99-81bd-19bff4c25882 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:25:27 compute-0 nova_compute[257802]: 2025-10-02 12:25:27.897 2 WARNING nova.compute.manager [req-d76c14b4-f584-4a86-8d7a-587b9764898a req-433b154b-2bf4-4483-82d2-adb2a958dada d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Received unexpected event network-vif-plugged-9f5480ea-1866-4f99-81bd-19bff4c25882 for instance with vm_state active and task_state None.
Oct 02 12:25:28 compute-0 podman[321119]: 2025-10-02 12:25:28.032987789 +0000 UTC m=+3.019501724 container remove d72aa2b50026ffa8b586e0819814cf1c508ae16195f03f5a34073ddf51155bd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tharp, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 12:25:28 compute-0 sudo[320865]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:28 compute-0 systemd[1]: libpod-conmon-d72aa2b50026ffa8b586e0819814cf1c508ae16195f03f5a34073ddf51155bd8.scope: Deactivated successfully.
Oct 02 12:25:28 compute-0 sudo[321266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:28 compute-0 sudo[321266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:28 compute-0 sudo[321266]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:28 compute-0 sudo[321291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:25:28 compute-0 sudo[321291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:28 compute-0 sudo[321291]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:28 compute-0 sudo[321316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:28 compute-0 sudo[321316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:28 compute-0 sudo[321316]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:28 compute-0 nova_compute[257802]: 2025-10-02 12:25:28.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:28 compute-0 sudo[321341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:25:28 compute-0 sudo[321341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:28.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Oct 02 12:25:28 compute-0 ceph-mon[73607]: osdmap e296: 3 total, 3 up, 3 in
Oct 02 12:25:28 compute-0 ceph-mon[73607]: pgmap v1901: 305 pgs: 305 active+clean; 556 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.3 MiB/s wr, 132 op/s
Oct 02 12:25:28 compute-0 podman[321407]: 2025-10-02 12:25:28.695375316 +0000 UTC m=+0.031024192 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:25:28 compute-0 podman[321407]: 2025-10-02 12:25:28.794086223 +0000 UTC m=+0.129735099 container create 48f9a9eb680fa2a612dceb82323d0e3463bfb0b73560f40f0a0c530781666801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_turing, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 12:25:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Oct 02 12:25:28 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Oct 02 12:25:28 compute-0 systemd[1]: Started libpod-conmon-48f9a9eb680fa2a612dceb82323d0e3463bfb0b73560f40f0a0c530781666801.scope.
Oct 02 12:25:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:25:29 compute-0 podman[321407]: 2025-10-02 12:25:29.019544086 +0000 UTC m=+0.355192962 container init 48f9a9eb680fa2a612dceb82323d0e3463bfb0b73560f40f0a0c530781666801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 12:25:29 compute-0 podman[321407]: 2025-10-02 12:25:29.026196669 +0000 UTC m=+0.361845525 container start 48f9a9eb680fa2a612dceb82323d0e3463bfb0b73560f40f0a0c530781666801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 12:25:29 compute-0 romantic_turing[321423]: 167 167
Oct 02 12:25:29 compute-0 systemd[1]: libpod-48f9a9eb680fa2a612dceb82323d0e3463bfb0b73560f40f0a0c530781666801.scope: Deactivated successfully.
Oct 02 12:25:29 compute-0 conmon[321423]: conmon 48f9a9eb680fa2a612dc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-48f9a9eb680fa2a612dceb82323d0e3463bfb0b73560f40f0a0c530781666801.scope/container/memory.events
Oct 02 12:25:29 compute-0 podman[321407]: 2025-10-02 12:25:29.107733186 +0000 UTC m=+0.443382092 container attach 48f9a9eb680fa2a612dceb82323d0e3463bfb0b73560f40f0a0c530781666801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:25:29 compute-0 podman[321407]: 2025-10-02 12:25:29.108448134 +0000 UTC m=+0.444097000 container died 48f9a9eb680fa2a612dceb82323d0e3463bfb0b73560f40f0a0c530781666801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_turing, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 12:25:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:29.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c780b68784a54d757c2ad07ea885b546c39fa8408b608718de2bce400848c717-merged.mount: Deactivated successfully.
Oct 02 12:25:29 compute-0 podman[321407]: 2025-10-02 12:25:29.350729529 +0000 UTC m=+0.686378415 container remove 48f9a9eb680fa2a612dceb82323d0e3463bfb0b73560f40f0a0c530781666801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_turing, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:25:29 compute-0 systemd[1]: libpod-conmon-48f9a9eb680fa2a612dceb82323d0e3463bfb0b73560f40f0a0c530781666801.scope: Deactivated successfully.
Oct 02 12:25:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1903: 305 pgs: 305 active+clean; 612 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 15 MiB/s rd, 9.3 MiB/s wr, 395 op/s
Oct 02 12:25:29 compute-0 podman[321448]: 2025-10-02 12:25:29.541194414 +0000 UTC m=+0.027629848 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:25:29 compute-0 podman[321448]: 2025-10-02 12:25:29.644198757 +0000 UTC m=+0.130634091 container create f67f9e01c58e4217685958633ef3bcebf2a2082c4aa9c981b638253303a89c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:25:29 compute-0 systemd[1]: Started libpod-conmon-f67f9e01c58e4217685958633ef3bcebf2a2082c4aa9c981b638253303a89c27.scope.
Oct 02 12:25:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:25:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e054ca4a18ab6fa4c729b993b0d0ef377cedb68408348bd88d0d191d92a3499/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:25:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e054ca4a18ab6fa4c729b993b0d0ef377cedb68408348bd88d0d191d92a3499/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:25:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e054ca4a18ab6fa4c729b993b0d0ef377cedb68408348bd88d0d191d92a3499/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:25:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e054ca4a18ab6fa4c729b993b0d0ef377cedb68408348bd88d0d191d92a3499/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:25:29 compute-0 podman[321448]: 2025-10-02 12:25:29.844952625 +0000 UTC m=+0.331388049 container init f67f9e01c58e4217685958633ef3bcebf2a2082c4aa9c981b638253303a89c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 12:25:29 compute-0 podman[321448]: 2025-10-02 12:25:29.853932274 +0000 UTC m=+0.340367648 container start f67f9e01c58e4217685958633ef3bcebf2a2082c4aa9c981b638253303a89c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mayer, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 12:25:29 compute-0 ceph-mon[73607]: osdmap e297: 3 total, 3 up, 3 in
Oct 02 12:25:30 compute-0 podman[321448]: 2025-10-02 12:25:29.99990209 +0000 UTC m=+0.486337514 container attach f67f9e01c58e4217685958633ef3bcebf2a2082c4aa9c981b638253303a89c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mayer, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:25:30 compute-0 clever_mayer[321465]: {
Oct 02 12:25:30 compute-0 clever_mayer[321465]:     "1": [
Oct 02 12:25:30 compute-0 clever_mayer[321465]:         {
Oct 02 12:25:30 compute-0 clever_mayer[321465]:             "devices": [
Oct 02 12:25:30 compute-0 clever_mayer[321465]:                 "/dev/loop3"
Oct 02 12:25:30 compute-0 clever_mayer[321465]:             ],
Oct 02 12:25:30 compute-0 clever_mayer[321465]:             "lv_name": "ceph_lv0",
Oct 02 12:25:30 compute-0 clever_mayer[321465]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:25:30 compute-0 clever_mayer[321465]:             "lv_size": "7511998464",
Oct 02 12:25:30 compute-0 clever_mayer[321465]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:25:30 compute-0 clever_mayer[321465]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:25:30 compute-0 clever_mayer[321465]:             "name": "ceph_lv0",
Oct 02 12:25:30 compute-0 clever_mayer[321465]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:25:30 compute-0 clever_mayer[321465]:             "tags": {
Oct 02 12:25:30 compute-0 clever_mayer[321465]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:25:30 compute-0 clever_mayer[321465]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:25:30 compute-0 clever_mayer[321465]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:25:30 compute-0 clever_mayer[321465]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:25:30 compute-0 clever_mayer[321465]:                 "ceph.cluster_name": "ceph",
Oct 02 12:25:30 compute-0 clever_mayer[321465]:                 "ceph.crush_device_class": "",
Oct 02 12:25:30 compute-0 clever_mayer[321465]:                 "ceph.encrypted": "0",
Oct 02 12:25:30 compute-0 clever_mayer[321465]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:25:30 compute-0 clever_mayer[321465]:                 "ceph.osd_id": "1",
Oct 02 12:25:30 compute-0 clever_mayer[321465]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:25:30 compute-0 clever_mayer[321465]:                 "ceph.type": "block",
Oct 02 12:25:30 compute-0 clever_mayer[321465]:                 "ceph.vdo": "0"
Oct 02 12:25:30 compute-0 clever_mayer[321465]:             },
Oct 02 12:25:30 compute-0 clever_mayer[321465]:             "type": "block",
Oct 02 12:25:30 compute-0 clever_mayer[321465]:             "vg_name": "ceph_vg0"
Oct 02 12:25:30 compute-0 clever_mayer[321465]:         }
Oct 02 12:25:30 compute-0 clever_mayer[321465]:     ]
Oct 02 12:25:30 compute-0 clever_mayer[321465]: }
Oct 02 12:25:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:30.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:30 compute-0 systemd[1]: libpod-f67f9e01c58e4217685958633ef3bcebf2a2082c4aa9c981b638253303a89c27.scope: Deactivated successfully.
Oct 02 12:25:30 compute-0 podman[321448]: 2025-10-02 12:25:30.649657716 +0000 UTC m=+1.136093070 container died f67f9e01c58e4217685958633ef3bcebf2a2082c4aa9c981b638253303a89c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mayer, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 12:25:30 compute-0 nova_compute[257802]: 2025-10-02 12:25:30.773 2 DEBUG oslo_concurrency.lockutils [None req-8fa65fd8-4aba-4517-97fc-ddb0b46e589e 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Acquiring lock "c2a73310-9eb1-4b57-9fa0-92190f32a5d4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:30 compute-0 nova_compute[257802]: 2025-10-02 12:25:30.777 2 DEBUG oslo_concurrency.lockutils [None req-8fa65fd8-4aba-4517-97fc-ddb0b46e589e 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Lock "c2a73310-9eb1-4b57-9fa0-92190f32a5d4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:30 compute-0 nova_compute[257802]: 2025-10-02 12:25:30.777 2 DEBUG oslo_concurrency.lockutils [None req-8fa65fd8-4aba-4517-97fc-ddb0b46e589e 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Acquiring lock "c2a73310-9eb1-4b57-9fa0-92190f32a5d4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:30 compute-0 nova_compute[257802]: 2025-10-02 12:25:30.778 2 DEBUG oslo_concurrency.lockutils [None req-8fa65fd8-4aba-4517-97fc-ddb0b46e589e 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Lock "c2a73310-9eb1-4b57-9fa0-92190f32a5d4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:30 compute-0 nova_compute[257802]: 2025-10-02 12:25:30.778 2 DEBUG oslo_concurrency.lockutils [None req-8fa65fd8-4aba-4517-97fc-ddb0b46e589e 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Lock "c2a73310-9eb1-4b57-9fa0-92190f32a5d4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:30 compute-0 nova_compute[257802]: 2025-10-02 12:25:30.780 2 INFO nova.compute.manager [None req-8fa65fd8-4aba-4517-97fc-ddb0b46e589e 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Terminating instance
Oct 02 12:25:30 compute-0 nova_compute[257802]: 2025-10-02 12:25:30.781 2 DEBUG nova.compute.manager [None req-8fa65fd8-4aba-4517-97fc-ddb0b46e589e 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:25:30 compute-0 kernel: tap9f5480ea-18 (unregistering): left promiscuous mode
Oct 02 12:25:30 compute-0 NetworkManager[44987]: <info>  [1759407930.8367] device (tap9f5480ea-18): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:25:30 compute-0 nova_compute[257802]: 2025-10-02 12:25:30.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:30 compute-0 ovn_controller[148183]: 2025-10-02T12:25:30Z|00446|binding|INFO|Releasing lport 9f5480ea-1866-4f99-81bd-19bff4c25882 from this chassis (sb_readonly=0)
Oct 02 12:25:30 compute-0 ovn_controller[148183]: 2025-10-02T12:25:30Z|00447|binding|INFO|Setting lport 9f5480ea-1866-4f99-81bd-19bff4c25882 down in Southbound
Oct 02 12:25:30 compute-0 ovn_controller[148183]: 2025-10-02T12:25:30Z|00448|binding|INFO|Removing iface tap9f5480ea-18 ovn-installed in OVS
Oct 02 12:25:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e054ca4a18ab6fa4c729b993b0d0ef377cedb68408348bd88d0d191d92a3499-merged.mount: Deactivated successfully.
Oct 02 12:25:30 compute-0 nova_compute[257802]: 2025-10-02 12:25:30.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:30 compute-0 nova_compute[257802]: 2025-10-02 12:25:30.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:30 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d00000068.scope: Deactivated successfully.
Oct 02 12:25:30 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d00000068.scope: Consumed 5.428s CPU time.
Oct 02 12:25:30 compute-0 systemd-machined[211836]: Machine qemu-51-instance-00000068 terminated.
Oct 02 12:25:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:30.960 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:e6:ba 10.100.0.10'], port_security=['fa:16:3e:d2:e6:ba 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c2a73310-9eb1-4b57-9fa0-92190f32a5d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8afce80e-86e3-4c7b-a554-68fd52bf1d40', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bb169657a27d4e129e8479b6c03d6093', 'neutron:revision_number': '4', 'neutron:security_group_ids': '20680fd9-94bc-4386-a20e-65de1b26fa5f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd2a9f71-55b9-4501-a937-2e22f46eca55, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=9f5480ea-1866-4f99-81bd-19bff4c25882) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:25:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:30.962 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 9f5480ea-1866-4f99-81bd-19bff4c25882 in datapath 8afce80e-86e3-4c7b-a554-68fd52bf1d40 unbound from our chassis
Oct 02 12:25:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:30.964 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8afce80e-86e3-4c7b-a554-68fd52bf1d40, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:25:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:30.965 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e8a91c1e-010c-48ce-bf88-ab32ad7e0826]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:30.965 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8afce80e-86e3-4c7b-a554-68fd52bf1d40 namespace which is not needed anymore
Oct 02 12:25:31 compute-0 nova_compute[257802]: 2025-10-02 12:25:31.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:31 compute-0 nova_compute[257802]: 2025-10-02 12:25:31.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:31 compute-0 nova_compute[257802]: 2025-10-02 12:25:31.017 2 INFO nova.virt.libvirt.driver [-] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Instance destroyed successfully.
Oct 02 12:25:31 compute-0 nova_compute[257802]: 2025-10-02 12:25:31.018 2 DEBUG nova.objects.instance [None req-8fa65fd8-4aba-4517-97fc-ddb0b46e589e 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Lazy-loading 'resources' on Instance uuid c2a73310-9eb1-4b57-9fa0-92190f32a5d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:25:31 compute-0 nova_compute[257802]: 2025-10-02 12:25:31.064 2 DEBUG nova.virt.libvirt.vif [None req-8fa65fd8-4aba-4517-97fc-ddb0b46e589e 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:25:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-599010257',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-599010257',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-599010257',id=104,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:25:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bb169657a27d4e129e8479b6c03d6093',ramdisk_id='',reservation_id='r-yyvz79uu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image
_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-2043219092',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-2043219092-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:25:26Z,user_data=None,user_id='49ce34df164c4ce7a673d8c2ff42451a',uuid=c2a73310-9eb1-4b57-9fa0-92190f32a5d4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9f5480ea-1866-4f99-81bd-19bff4c25882", "address": "fa:16:3e:d2:e6:ba", "network": {"id": "8afce80e-86e3-4c7b-a554-68fd52bf1d40", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1101810385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb169657a27d4e129e8479b6c03d6093", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f5480ea-18", "ovs_interfaceid": "9f5480ea-1866-4f99-81bd-19bff4c25882", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:25:31 compute-0 nova_compute[257802]: 2025-10-02 12:25:31.064 2 DEBUG nova.network.os_vif_util [None req-8fa65fd8-4aba-4517-97fc-ddb0b46e589e 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Converting VIF {"id": "9f5480ea-1866-4f99-81bd-19bff4c25882", "address": "fa:16:3e:d2:e6:ba", "network": {"id": "8afce80e-86e3-4c7b-a554-68fd52bf1d40", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1101810385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb169657a27d4e129e8479b6c03d6093", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f5480ea-18", "ovs_interfaceid": "9f5480ea-1866-4f99-81bd-19bff4c25882", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:25:31 compute-0 nova_compute[257802]: 2025-10-02 12:25:31.065 2 DEBUG nova.network.os_vif_util [None req-8fa65fd8-4aba-4517-97fc-ddb0b46e589e 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:e6:ba,bridge_name='br-int',has_traffic_filtering=True,id=9f5480ea-1866-4f99-81bd-19bff4c25882,network=Network(8afce80e-86e3-4c7b-a554-68fd52bf1d40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f5480ea-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:25:31 compute-0 nova_compute[257802]: 2025-10-02 12:25:31.065 2 DEBUG os_vif [None req-8fa65fd8-4aba-4517-97fc-ddb0b46e589e 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:e6:ba,bridge_name='br-int',has_traffic_filtering=True,id=9f5480ea-1866-4f99-81bd-19bff4c25882,network=Network(8afce80e-86e3-4c7b-a554-68fd52bf1d40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f5480ea-18') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:25:31 compute-0 nova_compute[257802]: 2025-10-02 12:25:31.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:31 compute-0 nova_compute[257802]: 2025-10-02 12:25:31.067 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f5480ea-18, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:25:31 compute-0 nova_compute[257802]: 2025-10-02 12:25:31.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:31 compute-0 nova_compute[257802]: 2025-10-02 12:25:31.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:31 compute-0 nova_compute[257802]: 2025-10-02 12:25:31.072 2 INFO os_vif [None req-8fa65fd8-4aba-4517-97fc-ddb0b46e589e 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:e6:ba,bridge_name='br-int',has_traffic_filtering=True,id=9f5480ea-1866-4f99-81bd-19bff4c25882,network=Network(8afce80e-86e3-4c7b-a554-68fd52bf1d40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f5480ea-18')
Oct 02 12:25:31 compute-0 podman[321448]: 2025-10-02 12:25:31.115932887 +0000 UTC m=+1.602368241 container remove f67f9e01c58e4217685958633ef3bcebf2a2082c4aa9c981b638253303a89c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mayer, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:25:31 compute-0 systemd[1]: libpod-conmon-f67f9e01c58e4217685958633ef3bcebf2a2082c4aa9c981b638253303a89c27.scope: Deactivated successfully.
Oct 02 12:25:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:25:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:31.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:25:31 compute-0 sudo[321341]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:31 compute-0 ceph-mon[73607]: pgmap v1903: 305 pgs: 305 active+clean; 612 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 15 MiB/s rd, 9.3 MiB/s wr, 395 op/s
Oct 02 12:25:31 compute-0 sudo[321540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:31 compute-0 sudo[321540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:31 compute-0 sudo[321540]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:31 compute-0 sudo[321573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:25:31 compute-0 sudo[321573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:31 compute-0 sudo[321573]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:31 compute-0 sudo[321598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:31 compute-0 sudo[321598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:31 compute-0 sudo[321598]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:31 compute-0 nova_compute[257802]: 2025-10-02 12:25:31.456 2 DEBUG nova.compute.manager [req-ad5613db-a84d-46d4-a4bb-198992e0cd62 req-0366fbfd-a13d-4fef-8649-815dc64bda9a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Received event network-vif-unplugged-9f5480ea-1866-4f99-81bd-19bff4c25882 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:25:31 compute-0 nova_compute[257802]: 2025-10-02 12:25:31.456 2 DEBUG oslo_concurrency.lockutils [req-ad5613db-a84d-46d4-a4bb-198992e0cd62 req-0366fbfd-a13d-4fef-8649-815dc64bda9a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "c2a73310-9eb1-4b57-9fa0-92190f32a5d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:31 compute-0 nova_compute[257802]: 2025-10-02 12:25:31.457 2 DEBUG oslo_concurrency.lockutils [req-ad5613db-a84d-46d4-a4bb-198992e0cd62 req-0366fbfd-a13d-4fef-8649-815dc64bda9a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c2a73310-9eb1-4b57-9fa0-92190f32a5d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:31 compute-0 neutron-haproxy-ovnmeta-8afce80e-86e3-4c7b-a554-68fd52bf1d40[321229]: [NOTICE]   (321254) : haproxy version is 2.8.14-c23fe91
Oct 02 12:25:31 compute-0 neutron-haproxy-ovnmeta-8afce80e-86e3-4c7b-a554-68fd52bf1d40[321229]: [NOTICE]   (321254) : path to executable is /usr/sbin/haproxy
Oct 02 12:25:31 compute-0 neutron-haproxy-ovnmeta-8afce80e-86e3-4c7b-a554-68fd52bf1d40[321229]: [WARNING]  (321254) : Exiting Master process...
Oct 02 12:25:31 compute-0 nova_compute[257802]: 2025-10-02 12:25:31.457 2 DEBUG oslo_concurrency.lockutils [req-ad5613db-a84d-46d4-a4bb-198992e0cd62 req-0366fbfd-a13d-4fef-8649-815dc64bda9a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c2a73310-9eb1-4b57-9fa0-92190f32a5d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:31 compute-0 nova_compute[257802]: 2025-10-02 12:25:31.459 2 DEBUG nova.compute.manager [req-ad5613db-a84d-46d4-a4bb-198992e0cd62 req-0366fbfd-a13d-4fef-8649-815dc64bda9a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] No waiting events found dispatching network-vif-unplugged-9f5480ea-1866-4f99-81bd-19bff4c25882 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:25:31 compute-0 nova_compute[257802]: 2025-10-02 12:25:31.459 2 DEBUG nova.compute.manager [req-ad5613db-a84d-46d4-a4bb-198992e0cd62 req-0366fbfd-a13d-4fef-8649-815dc64bda9a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Received event network-vif-unplugged-9f5480ea-1866-4f99-81bd-19bff4c25882 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:25:31 compute-0 neutron-haproxy-ovnmeta-8afce80e-86e3-4c7b-a554-68fd52bf1d40[321229]: [ALERT]    (321254) : Current worker (321256) exited with code 143 (Terminated)
Oct 02 12:25:31 compute-0 neutron-haproxy-ovnmeta-8afce80e-86e3-4c7b-a554-68fd52bf1d40[321229]: [WARNING]  (321254) : All workers exited. Exiting... (0)
Oct 02 12:25:31 compute-0 systemd[1]: libpod-b934284e7f33ccf0a4cb5d7a1c4687d1390c3795f09ce33fab147d03e18f6589.scope: Deactivated successfully.
Oct 02 12:25:31 compute-0 podman[321539]: 2025-10-02 12:25:31.469961999 +0000 UTC m=+0.269408010 container died b934284e7f33ccf0a4cb5d7a1c4687d1390c3795f09ce33fab147d03e18f6589 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8afce80e-86e3-4c7b-a554-68fd52bf1d40, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 12:25:31 compute-0 sudo[321623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:25:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 305 active+clean; 612 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 7.2 MiB/s wr, 304 op/s
Oct 02 12:25:31 compute-0 sudo[321623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b934284e7f33ccf0a4cb5d7a1c4687d1390c3795f09ce33fab147d03e18f6589-userdata-shm.mount: Deactivated successfully.
Oct 02 12:25:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f05850ddb4871cdde67311d432f4c29a17c25456588f0f6df44cc8226c521a6-merged.mount: Deactivated successfully.
Oct 02 12:25:31 compute-0 podman[321539]: 2025-10-02 12:25:31.773753511 +0000 UTC m=+0.573199522 container cleanup b934284e7f33ccf0a4cb5d7a1c4687d1390c3795f09ce33fab147d03e18f6589 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8afce80e-86e3-4c7b-a554-68fd52bf1d40, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 12:25:31 compute-0 systemd[1]: libpod-conmon-b934284e7f33ccf0a4cb5d7a1c4687d1390c3795f09ce33fab147d03e18f6589.scope: Deactivated successfully.
Oct 02 12:25:32 compute-0 podman[321676]: 2025-10-02 12:25:32.004403621 +0000 UTC m=+0.203644670 container remove b934284e7f33ccf0a4cb5d7a1c4687d1390c3795f09ce33fab147d03e18f6589 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8afce80e-86e3-4c7b-a554-68fd52bf1d40, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:25:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:32.014 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[18f11c43-d2f5-453e-97b6-3648fb20e833]: (4, ('Thu Oct  2 12:25:31 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8afce80e-86e3-4c7b-a554-68fd52bf1d40 (b934284e7f33ccf0a4cb5d7a1c4687d1390c3795f09ce33fab147d03e18f6589)\nb934284e7f33ccf0a4cb5d7a1c4687d1390c3795f09ce33fab147d03e18f6589\nThu Oct  2 12:25:31 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8afce80e-86e3-4c7b-a554-68fd52bf1d40 (b934284e7f33ccf0a4cb5d7a1c4687d1390c3795f09ce33fab147d03e18f6589)\nb934284e7f33ccf0a4cb5d7a1c4687d1390c3795f09ce33fab147d03e18f6589\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:32.016 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[979efb1d-b7d5-41dc-bac0-52dd91caeb6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:32.017 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8afce80e-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:25:32 compute-0 nova_compute[257802]: 2025-10-02 12:25:32.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:32 compute-0 kernel: tap8afce80e-80: left promiscuous mode
Oct 02 12:25:32 compute-0 nova_compute[257802]: 2025-10-02 12:25:32.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:32.086 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b063d4f7-b1d9-4283-a3d6-a403de8363e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:32.114 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9074c91c-0b98-4328-a884-d8c283efac9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:32.115 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[33ea71b6-d7fc-4657-965c-6bd043fb6638]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:32.150 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d1ee81e3-57e9-49c1-ab41-76abe9464b93]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599252, 'reachable_time': 28827, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321711, 'error': None, 'target': 'ovnmeta-8afce80e-86e3-4c7b-a554-68fd52bf1d40', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d8afce80e\x2d86e3\x2d4c7b\x2da554\x2d68fd52bf1d40.mount: Deactivated successfully.
Oct 02 12:25:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:32.153 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8afce80e-86e3-4c7b-a554-68fd52bf1d40 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:25:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:32.153 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[af6350e8-b6d3-4020-a6af-7644a58a73b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:32 compute-0 podman[321718]: 2025-10-02 12:25:32.262659357 +0000 UTC m=+0.069450742 container create 917152aaa2865210739af2be247cb34443d23c43a89ceab609e9d72c4f0414d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct 02 12:25:32 compute-0 nova_compute[257802]: 2025-10-02 12:25:32.285 2 INFO nova.virt.libvirt.driver [None req-8fa65fd8-4aba-4517-97fc-ddb0b46e589e 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Deleting instance files /var/lib/nova/instances/c2a73310-9eb1-4b57-9fa0-92190f32a5d4_del
Oct 02 12:25:32 compute-0 nova_compute[257802]: 2025-10-02 12:25:32.287 2 INFO nova.virt.libvirt.driver [None req-8fa65fd8-4aba-4517-97fc-ddb0b46e589e 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Deletion of /var/lib/nova/instances/c2a73310-9eb1-4b57-9fa0-92190f32a5d4_del complete
Oct 02 12:25:32 compute-0 podman[321718]: 2025-10-02 12:25:32.22155704 +0000 UTC m=+0.028348445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:25:32 compute-0 systemd[1]: Started libpod-conmon-917152aaa2865210739af2be247cb34443d23c43a89ceab609e9d72c4f0414d6.scope.
Oct 02 12:25:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:25:32 compute-0 podman[321718]: 2025-10-02 12:25:32.401733104 +0000 UTC m=+0.208524479 container init 917152aaa2865210739af2be247cb34443d23c43a89ceab609e9d72c4f0414d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 12:25:32 compute-0 nova_compute[257802]: 2025-10-02 12:25:32.404 2 INFO nova.compute.manager [None req-8fa65fd8-4aba-4517-97fc-ddb0b46e589e 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Took 1.62 seconds to destroy the instance on the hypervisor.
Oct 02 12:25:32 compute-0 nova_compute[257802]: 2025-10-02 12:25:32.405 2 DEBUG oslo.service.loopingcall [None req-8fa65fd8-4aba-4517-97fc-ddb0b46e589e 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:25:32 compute-0 nova_compute[257802]: 2025-10-02 12:25:32.405 2 DEBUG nova.compute.manager [-] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:25:32 compute-0 nova_compute[257802]: 2025-10-02 12:25:32.406 2 DEBUG nova.network.neutron [-] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:25:32 compute-0 podman[321718]: 2025-10-02 12:25:32.411724849 +0000 UTC m=+0.218516234 container start 917152aaa2865210739af2be247cb34443d23c43a89ceab609e9d72c4f0414d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_williamson, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:25:32 compute-0 brave_williamson[321734]: 167 167
Oct 02 12:25:32 compute-0 systemd[1]: libpod-917152aaa2865210739af2be247cb34443d23c43a89ceab609e9d72c4f0414d6.scope: Deactivated successfully.
Oct 02 12:25:32 compute-0 conmon[321734]: conmon 917152aaa2865210739a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-917152aaa2865210739af2be247cb34443d23c43a89ceab609e9d72c4f0414d6.scope/container/memory.events
Oct 02 12:25:32 compute-0 podman[321718]: 2025-10-02 12:25:32.436175677 +0000 UTC m=+0.242967062 container attach 917152aaa2865210739af2be247cb34443d23c43a89ceab609e9d72c4f0414d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 12:25:32 compute-0 podman[321718]: 2025-10-02 12:25:32.437676734 +0000 UTC m=+0.244468129 container died 917152aaa2865210739af2be247cb34443d23c43a89ceab609e9d72c4f0414d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_williamson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:25:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fab520099a55c0315c7aca66a0aa4fac5365ffcd09355d079ff31ec2d031d31-merged.mount: Deactivated successfully.
Oct 02 12:25:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:32.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:32 compute-0 podman[321718]: 2025-10-02 12:25:32.667416811 +0000 UTC m=+0.474208166 container remove 917152aaa2865210739af2be247cb34443d23c43a89ceab609e9d72c4f0414d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_williamson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:25:32 compute-0 systemd[1]: libpod-conmon-917152aaa2865210739af2be247cb34443d23c43a89ceab609e9d72c4f0414d6.scope: Deactivated successfully.
Oct 02 12:25:32 compute-0 nova_compute[257802]: 2025-10-02 12:25:32.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:32 compute-0 podman[321761]: 2025-10-02 12:25:32.810297361 +0000 UTC m=+0.024568342 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:25:32 compute-0 podman[321761]: 2025-10-02 12:25:32.943882574 +0000 UTC m=+0.158153535 container create 40e21dc7828cf1724b94e4838e8f50cd534e871eb30d895fda7cff9a67f1d96f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kirch, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 12:25:33 compute-0 systemd[1]: Started libpod-conmon-40e21dc7828cf1724b94e4838e8f50cd534e871eb30d895fda7cff9a67f1d96f.scope.
Oct 02 12:25:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:25:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a3f6b3be224312a1b39def71cbad25c0144fb3711204a193b7ee6236af0f772/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:25:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a3f6b3be224312a1b39def71cbad25c0144fb3711204a193b7ee6236af0f772/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:25:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a3f6b3be224312a1b39def71cbad25c0144fb3711204a193b7ee6236af0f772/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:25:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a3f6b3be224312a1b39def71cbad25c0144fb3711204a193b7ee6236af0f772/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:25:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:25:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:33.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:25:33 compute-0 podman[321761]: 2025-10-02 12:25:33.270262888 +0000 UTC m=+0.484533939 container init 40e21dc7828cf1724b94e4838e8f50cd534e871eb30d895fda7cff9a67f1d96f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kirch, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 12:25:33 compute-0 podman[321761]: 2025-10-02 12:25:33.283557384 +0000 UTC m=+0.497828375 container start 40e21dc7828cf1724b94e4838e8f50cd534e871eb30d895fda7cff9a67f1d96f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kirch, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:25:33 compute-0 ceph-mon[73607]: pgmap v1904: 305 pgs: 305 active+clean; 612 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 7.2 MiB/s wr, 304 op/s
Oct 02 12:25:33 compute-0 podman[321761]: 2025-10-02 12:25:33.326713511 +0000 UTC m=+0.540984472 container attach 40e21dc7828cf1724b94e4838e8f50cd534e871eb30d895fda7cff9a67f1d96f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kirch, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 12:25:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 305 active+clean; 584 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 5.9 MiB/s wr, 356 op/s
Oct 02 12:25:34 compute-0 stoic_kirch[321778]: {
Oct 02 12:25:34 compute-0 stoic_kirch[321778]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:25:34 compute-0 stoic_kirch[321778]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:25:34 compute-0 stoic_kirch[321778]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:25:34 compute-0 stoic_kirch[321778]:         "osd_id": 1,
Oct 02 12:25:34 compute-0 stoic_kirch[321778]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:25:34 compute-0 stoic_kirch[321778]:         "type": "bluestore"
Oct 02 12:25:34 compute-0 stoic_kirch[321778]:     }
Oct 02 12:25:34 compute-0 stoic_kirch[321778]: }
Oct 02 12:25:34 compute-0 systemd[1]: libpod-40e21dc7828cf1724b94e4838e8f50cd534e871eb30d895fda7cff9a67f1d96f.scope: Deactivated successfully.
Oct 02 12:25:34 compute-0 podman[321761]: 2025-10-02 12:25:34.237628925 +0000 UTC m=+1.451899886 container died 40e21dc7828cf1724b94e4838e8f50cd534e871eb30d895fda7cff9a67f1d96f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kirch, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:25:34 compute-0 ceph-mon[73607]: pgmap v1905: 305 pgs: 305 active+clean; 584 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 5.9 MiB/s wr, 356 op/s
Oct 02 12:25:34 compute-0 nova_compute[257802]: 2025-10-02 12:25:34.562 2 DEBUG nova.network.neutron [-] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:25:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a3f6b3be224312a1b39def71cbad25c0144fb3711204a193b7ee6236af0f772-merged.mount: Deactivated successfully.
Oct 02 12:25:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:34.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:34 compute-0 nova_compute[257802]: 2025-10-02 12:25:34.713 2 DEBUG nova.compute.manager [req-d38b0e3b-d426-4e49-b0f0-c93e18a1185d req-f6d9b283-5acd-4939-9cef-fc4e5922433a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Received event network-vif-deleted-9f5480ea-1866-4f99-81bd-19bff4c25882 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:25:34 compute-0 nova_compute[257802]: 2025-10-02 12:25:34.713 2 INFO nova.compute.manager [req-d38b0e3b-d426-4e49-b0f0-c93e18a1185d req-f6d9b283-5acd-4939-9cef-fc4e5922433a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Neutron deleted interface 9f5480ea-1866-4f99-81bd-19bff4c25882; detaching it from the instance and deleting it from the info cache
Oct 02 12:25:34 compute-0 nova_compute[257802]: 2025-10-02 12:25:34.714 2 DEBUG nova.network.neutron [req-d38b0e3b-d426-4e49-b0f0-c93e18a1185d req-f6d9b283-5acd-4939-9cef-fc4e5922433a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:25:34 compute-0 nova_compute[257802]: 2025-10-02 12:25:34.750 2 INFO nova.compute.manager [-] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Took 2.34 seconds to deallocate network for instance.
Oct 02 12:25:34 compute-0 nova_compute[257802]: 2025-10-02 12:25:34.763 2 DEBUG nova.compute.manager [req-d38b0e3b-d426-4e49-b0f0-c93e18a1185d req-f6d9b283-5acd-4939-9cef-fc4e5922433a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Detach interface failed, port_id=9f5480ea-1866-4f99-81bd-19bff4c25882, reason: Instance c2a73310-9eb1-4b57-9fa0-92190f32a5d4 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:25:34 compute-0 nova_compute[257802]: 2025-10-02 12:25:34.878 2 DEBUG oslo_concurrency.lockutils [None req-8fa65fd8-4aba-4517-97fc-ddb0b46e589e 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:34 compute-0 nova_compute[257802]: 2025-10-02 12:25:34.879 2 DEBUG oslo_concurrency.lockutils [None req-8fa65fd8-4aba-4517-97fc-ddb0b46e589e 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:34 compute-0 podman[321761]: 2025-10-02 12:25:34.93187201 +0000 UTC m=+2.146142971 container remove 40e21dc7828cf1724b94e4838e8f50cd534e871eb30d895fda7cff9a67f1d96f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kirch, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:25:34 compute-0 sudo[321623]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:25:34 compute-0 nova_compute[257802]: 2025-10-02 12:25:34.966 2 DEBUG oslo_concurrency.processutils [None req-8fa65fd8-4aba-4517-97fc-ddb0b46e589e 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:25:35 compute-0 systemd[1]: libpod-conmon-40e21dc7828cf1724b94e4838e8f50cd534e871eb30d895fda7cff9a67f1d96f.scope: Deactivated successfully.
Oct 02 12:25:35 compute-0 podman[321815]: 2025-10-02 12:25:35.104759265 +0000 UTC m=+0.061344014 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:25:35 compute-0 podman[321816]: 2025-10-02 12:25:35.107884391 +0000 UTC m=+0.064559732 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:25:35 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:25:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:25:35 compute-0 podman[321817]: 2025-10-02 12:25:35.129413019 +0000 UTC m=+0.074099646 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:25:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:35.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:35 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:25:35 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 887c1543-5129-45dd-8b8c-3ff5e3d1733e does not exist
Oct 02 12:25:35 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev ecfd5c84-e96f-4a1b-bcb3-1d6f2805342f does not exist
Oct 02 12:25:35 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev dbb2faeb-e436-4d1d-bd30-548e11aed136 does not exist
Oct 02 12:25:35 compute-0 sudo[321889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:35 compute-0 sudo[321889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:35 compute-0 sudo[321889]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:35 compute-0 sudo[321914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:25:35 compute-0 sudo[321914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:35 compute-0 sudo[321914]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:25:35 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2323400518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:35 compute-0 nova_compute[257802]: 2025-10-02 12:25:35.446 2 DEBUG oslo_concurrency.processutils [None req-8fa65fd8-4aba-4517-97fc-ddb0b46e589e 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:25:35 compute-0 nova_compute[257802]: 2025-10-02 12:25:35.449 2 DEBUG nova.compute.manager [req-9c4625db-9a79-40fe-be9f-d93e4ef7a779 req-7c804a83-04c3-4cdb-8ebf-353d1eb2de4e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Received event network-vif-plugged-9f5480ea-1866-4f99-81bd-19bff4c25882 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:25:35 compute-0 nova_compute[257802]: 2025-10-02 12:25:35.450 2 DEBUG oslo_concurrency.lockutils [req-9c4625db-9a79-40fe-be9f-d93e4ef7a779 req-7c804a83-04c3-4cdb-8ebf-353d1eb2de4e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "c2a73310-9eb1-4b57-9fa0-92190f32a5d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:35 compute-0 nova_compute[257802]: 2025-10-02 12:25:35.450 2 DEBUG oslo_concurrency.lockutils [req-9c4625db-9a79-40fe-be9f-d93e4ef7a779 req-7c804a83-04c3-4cdb-8ebf-353d1eb2de4e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c2a73310-9eb1-4b57-9fa0-92190f32a5d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:35 compute-0 nova_compute[257802]: 2025-10-02 12:25:35.450 2 DEBUG oslo_concurrency.lockutils [req-9c4625db-9a79-40fe-be9f-d93e4ef7a779 req-7c804a83-04c3-4cdb-8ebf-353d1eb2de4e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c2a73310-9eb1-4b57-9fa0-92190f32a5d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:35 compute-0 nova_compute[257802]: 2025-10-02 12:25:35.450 2 DEBUG nova.compute.manager [req-9c4625db-9a79-40fe-be9f-d93e4ef7a779 req-7c804a83-04c3-4cdb-8ebf-353d1eb2de4e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] No waiting events found dispatching network-vif-plugged-9f5480ea-1866-4f99-81bd-19bff4c25882 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:25:35 compute-0 nova_compute[257802]: 2025-10-02 12:25:35.451 2 WARNING nova.compute.manager [req-9c4625db-9a79-40fe-be9f-d93e4ef7a779 req-7c804a83-04c3-4cdb-8ebf-353d1eb2de4e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Received unexpected event network-vif-plugged-9f5480ea-1866-4f99-81bd-19bff4c25882 for instance with vm_state deleted and task_state None.
Oct 02 12:25:35 compute-0 nova_compute[257802]: 2025-10-02 12:25:35.456 2 DEBUG nova.compute.provider_tree [None req-8fa65fd8-4aba-4517-97fc-ddb0b46e589e 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:25:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1906: 305 pgs: 305 active+clean; 565 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 3.9 MiB/s wr, 302 op/s
Oct 02 12:25:35 compute-0 nova_compute[257802]: 2025-10-02 12:25:35.527 2 DEBUG nova.scheduler.client.report [None req-8fa65fd8-4aba-4517-97fc-ddb0b46e589e 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:25:35 compute-0 nova_compute[257802]: 2025-10-02 12:25:35.581 2 DEBUG oslo_concurrency.lockutils [None req-8fa65fd8-4aba-4517-97fc-ddb0b46e589e 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:35 compute-0 nova_compute[257802]: 2025-10-02 12:25:35.614 2 INFO nova.scheduler.client.report [None req-8fa65fd8-4aba-4517-97fc-ddb0b46e589e 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Deleted allocations for instance c2a73310-9eb1-4b57-9fa0-92190f32a5d4
Oct 02 12:25:35 compute-0 nova_compute[257802]: 2025-10-02 12:25:35.690 2 DEBUG oslo_concurrency.lockutils [None req-8fa65fd8-4aba-4517-97fc-ddb0b46e589e 49ce34df164c4ce7a673d8c2ff42451a bb169657a27d4e129e8479b6c03d6093 - - default default] Lock "c2a73310-9eb1-4b57-9fa0-92190f32a5d4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:36 compute-0 nova_compute[257802]: 2025-10-02 12:25:36.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:36.185 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:25:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:36.186 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:25:36 compute-0 nova_compute[257802]: 2025-10-02 12:25:36.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:36 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:25:36 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:25:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2323400518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:36.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:25:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:37.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:25:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Oct 02 12:25:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:37.187 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:25:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Oct 02 12:25:37 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Oct 02 12:25:37 compute-0 ceph-mon[73607]: pgmap v1906: 305 pgs: 305 active+clean; 565 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 3.9 MiB/s wr, 302 op/s
Oct 02 12:25:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 305 active+clean; 565 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.5 MiB/s wr, 153 op/s
Oct 02 12:25:37 compute-0 nova_compute[257802]: 2025-10-02 12:25:37.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:38 compute-0 ceph-mon[73607]: osdmap e298: 3 total, 3 up, 3 in
Oct 02 12:25:38 compute-0 ceph-mon[73607]: pgmap v1908: 305 pgs: 305 active+clean; 565 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.5 MiB/s wr, 153 op/s
Oct 02 12:25:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:38.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:39.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1909: 305 pgs: 305 active+clean; 513 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 623 KiB/s rd, 4.1 KiB/s wr, 90 op/s
Oct 02 12:25:39 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/144315044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:25:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:40.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:25:40 compute-0 ceph-mon[73607]: pgmap v1909: 305 pgs: 305 active+clean; 513 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 623 KiB/s rd, 4.1 KiB/s wr, 90 op/s
Oct 02 12:25:41 compute-0 nova_compute[257802]: 2025-10-02 12:25:41.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:25:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:41.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:25:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1910: 305 pgs: 305 active+clean; 513 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 623 KiB/s rd, 4.1 KiB/s wr, 90 op/s
Oct 02 12:25:41 compute-0 podman[321944]: 2025-10-02 12:25:41.941771381 +0000 UTC m=+0.082588824 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 12:25:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2687855842' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:25:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:42 compute-0 sudo[321970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:42 compute-0 sudo[321970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:42 compute-0 sudo[321970]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:25:42
Oct 02 12:25:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:25:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:25:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'backups', 'vms', 'default.rgw.log', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'images']
Oct 02 12:25:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:25:42 compute-0 sudo[321995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:42 compute-0 sudo[321995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:42 compute-0 sudo[321995]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:42.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:25:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:25:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:25:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:25:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:25:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:25:42 compute-0 nova_compute[257802]: 2025-10-02 12:25:42.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:43 compute-0 ceph-mon[73607]: pgmap v1910: 305 pgs: 305 active+clean; 513 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 623 KiB/s rd, 4.1 KiB/s wr, 90 op/s
Oct 02 12:25:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:25:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:25:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:25:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:25:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:25:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:43.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:25:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:25:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:25:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:25:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:25:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1911: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 539 KiB/s rd, 10 KiB/s wr, 88 op/s
Oct 02 12:25:43 compute-0 nova_compute[257802]: 2025-10-02 12:25:43.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:43 compute-0 nova_compute[257802]: 2025-10-02 12:25:43.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:44 compute-0 nova_compute[257802]: 2025-10-02 12:25:44.064 2 DEBUG oslo_concurrency.lockutils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "7b4bdbc9-7451-4500-8794-c8edef50d6a4" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:44 compute-0 nova_compute[257802]: 2025-10-02 12:25:44.065 2 DEBUG oslo_concurrency.lockutils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "7b4bdbc9-7451-4500-8794-c8edef50d6a4" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:44 compute-0 nova_compute[257802]: 2025-10-02 12:25:44.065 2 INFO nova.compute.manager [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Unshelving
Oct 02 12:25:44 compute-0 nova_compute[257802]: 2025-10-02 12:25:44.223 2 DEBUG oslo_concurrency.lockutils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:44 compute-0 nova_compute[257802]: 2025-10-02 12:25:44.224 2 DEBUG oslo_concurrency.lockutils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:44 compute-0 nova_compute[257802]: 2025-10-02 12:25:44.255 2 DEBUG nova.objects.instance [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'pci_requests' on Instance uuid 7b4bdbc9-7451-4500-8794-c8edef50d6a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:25:44 compute-0 nova_compute[257802]: 2025-10-02 12:25:44.284 2 DEBUG nova.objects.instance [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'numa_topology' on Instance uuid 7b4bdbc9-7451-4500-8794-c8edef50d6a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:25:44 compute-0 nova_compute[257802]: 2025-10-02 12:25:44.299 2 DEBUG nova.virt.hardware [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:25:44 compute-0 nova_compute[257802]: 2025-10-02 12:25:44.299 2 INFO nova.compute.claims [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:25:44 compute-0 nova_compute[257802]: 2025-10-02 12:25:44.402 2 DEBUG oslo_concurrency.processutils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:25:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:44.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:25:44 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/89990503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:44 compute-0 nova_compute[257802]: 2025-10-02 12:25:44.805 2 DEBUG oslo_concurrency.processutils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:25:44 compute-0 nova_compute[257802]: 2025-10-02 12:25:44.812 2 DEBUG nova.compute.provider_tree [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:25:44 compute-0 nova_compute[257802]: 2025-10-02 12:25:44.863 2 DEBUG nova.scheduler.client.report [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:25:44 compute-0 nova_compute[257802]: 2025-10-02 12:25:44.890 2 DEBUG oslo_concurrency.lockutils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:45 compute-0 ceph-mon[73607]: pgmap v1911: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 539 KiB/s rd, 10 KiB/s wr, 88 op/s
Oct 02 12:25:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/89990503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:25:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:45.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:25:45 compute-0 nova_compute[257802]: 2025-10-02 12:25:45.221 2 INFO nova.network.neutron [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Updating port 9d6e67d8-8c6a-4b95-b332-80f8674a0ebb with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 02 12:25:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 665 KiB/s rd, 14 KiB/s wr, 90 op/s
Oct 02 12:25:46 compute-0 nova_compute[257802]: 2025-10-02 12:25:46.016 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407931.014661, c2a73310-9eb1-4b57-9fa0-92190f32a5d4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:25:46 compute-0 nova_compute[257802]: 2025-10-02 12:25:46.017 2 INFO nova.compute.manager [-] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] VM Stopped (Lifecycle Event)
Oct 02 12:25:46 compute-0 nova_compute[257802]: 2025-10-02 12:25:46.036 2 DEBUG nova.compute.manager [None req-c90e09f0-293a-4b35-9397-b1e4fe65002f - - - - - -] [instance: c2a73310-9eb1-4b57-9fa0-92190f32a5d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:25:46 compute-0 nova_compute[257802]: 2025-10-02 12:25:46.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:46.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:46 compute-0 nova_compute[257802]: 2025-10-02 12:25:46.677 2 DEBUG oslo_concurrency.lockutils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "refresh_cache-7b4bdbc9-7451-4500-8794-c8edef50d6a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:25:46 compute-0 nova_compute[257802]: 2025-10-02 12:25:46.678 2 DEBUG oslo_concurrency.lockutils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquired lock "refresh_cache-7b4bdbc9-7451-4500-8794-c8edef50d6a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:25:46 compute-0 nova_compute[257802]: 2025-10-02 12:25:46.678 2 DEBUG nova.network.neutron [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:25:46 compute-0 nova_compute[257802]: 2025-10-02 12:25:46.818 2 DEBUG nova.compute.manager [req-b7c68ddd-e7c2-4272-9f12-e1dc1a520b1f req-181e5f4c-94e2-4a86-bd86-5474b970f276 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Received event network-changed-9d6e67d8-8c6a-4b95-b332-80f8674a0ebb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:25:46 compute-0 nova_compute[257802]: 2025-10-02 12:25:46.819 2 DEBUG nova.compute.manager [req-b7c68ddd-e7c2-4272-9f12-e1dc1a520b1f req-181e5f4c-94e2-4a86-bd86-5474b970f276 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Refreshing instance network info cache due to event network-changed-9d6e67d8-8c6a-4b95-b332-80f8674a0ebb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:25:46 compute-0 nova_compute[257802]: 2025-10-02 12:25:46.819 2 DEBUG oslo_concurrency.lockutils [req-b7c68ddd-e7c2-4272-9f12-e1dc1a520b1f req-181e5f4c-94e2-4a86-bd86-5474b970f276 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-7b4bdbc9-7451-4500-8794-c8edef50d6a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:25:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:25:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:47.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:25:47 compute-0 ceph-mon[73607]: pgmap v1912: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 665 KiB/s rd, 14 KiB/s wr, 90 op/s
Oct 02 12:25:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1913: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 657 KiB/s rd, 26 KiB/s wr, 87 op/s
Oct 02 12:25:47 compute-0 nova_compute[257802]: 2025-10-02 12:25:47.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2938540743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:48.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:49 compute-0 nova_compute[257802]: 2025-10-02 12:25:49.030 2 DEBUG nova.network.neutron [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Updating instance_info_cache with network_info: [{"id": "9d6e67d8-8c6a-4b95-b332-80f8674a0ebb", "address": "fa:16:3e:6d:91:e2", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d6e67d8-8c", "ovs_interfaceid": "9d6e67d8-8c6a-4b95-b332-80f8674a0ebb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:25:49 compute-0 nova_compute[257802]: 2025-10-02 12:25:49.063 2 DEBUG oslo_concurrency.lockutils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Releasing lock "refresh_cache-7b4bdbc9-7451-4500-8794-c8edef50d6a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:25:49 compute-0 nova_compute[257802]: 2025-10-02 12:25:49.064 2 DEBUG nova.virt.libvirt.driver [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:25:49 compute-0 nova_compute[257802]: 2025-10-02 12:25:49.065 2 INFO nova.virt.libvirt.driver [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Creating image(s)
Oct 02 12:25:49 compute-0 nova_compute[257802]: 2025-10-02 12:25:49.087 2 DEBUG nova.storage.rbd_utils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image 7b4bdbc9-7451-4500-8794-c8edef50d6a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:25:49 compute-0 nova_compute[257802]: 2025-10-02 12:25:49.091 2 DEBUG nova.objects.instance [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 7b4bdbc9-7451-4500-8794-c8edef50d6a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:25:49 compute-0 nova_compute[257802]: 2025-10-02 12:25:49.092 2 DEBUG oslo_concurrency.lockutils [req-b7c68ddd-e7c2-4272-9f12-e1dc1a520b1f req-181e5f4c-94e2-4a86-bd86-5474b970f276 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-7b4bdbc9-7451-4500-8794-c8edef50d6a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:25:49 compute-0 nova_compute[257802]: 2025-10-02 12:25:49.092 2 DEBUG nova.network.neutron [req-b7c68ddd-e7c2-4272-9f12-e1dc1a520b1f req-181e5f4c-94e2-4a86-bd86-5474b970f276 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Refreshing network info cache for port 9d6e67d8-8c6a-4b95-b332-80f8674a0ebb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:25:49 compute-0 nova_compute[257802]: 2025-10-02 12:25:49.133 2 DEBUG nova.storage.rbd_utils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image 7b4bdbc9-7451-4500-8794-c8edef50d6a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:25:49 compute-0 nova_compute[257802]: 2025-10-02 12:25:49.155 2 DEBUG nova.storage.rbd_utils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image 7b4bdbc9-7451-4500-8794-c8edef50d6a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:25:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:49.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:49 compute-0 nova_compute[257802]: 2025-10-02 12:25:49.160 2 DEBUG oslo_concurrency.lockutils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "7cbab39e3aed4995ab266f1538a4292a3519f394" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:49 compute-0 nova_compute[257802]: 2025-10-02 12:25:49.161 2 DEBUG oslo_concurrency.lockutils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "7cbab39e3aed4995ab266f1538a4292a3519f394" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:49 compute-0 ceph-mon[73607]: pgmap v1913: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 657 KiB/s rd, 26 KiB/s wr, 87 op/s
Oct 02 12:25:49 compute-0 nova_compute[257802]: 2025-10-02 12:25:49.439 2 DEBUG nova.virt.libvirt.imagebackend [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Image locations are: [{'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/cf693858-d747-42e7-8f75-d6c36d36cc6c/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/cf693858-d747-42e7-8f75-d6c36d36cc6c/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 02 12:25:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 555 KiB/s rd, 28 KiB/s wr, 74 op/s
Oct 02 12:25:49 compute-0 nova_compute[257802]: 2025-10-02 12:25:49.526 2 DEBUG nova.virt.libvirt.imagebackend [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Selected location: {'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/cf693858-d747-42e7-8f75-d6c36d36cc6c/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 02 12:25:49 compute-0 nova_compute[257802]: 2025-10-02 12:25:49.527 2 DEBUG nova.storage.rbd_utils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] cloning images/cf693858-d747-42e7-8f75-d6c36d36cc6c@snap to None/7b4bdbc9-7451-4500-8794-c8edef50d6a4_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:25:49 compute-0 nova_compute[257802]: 2025-10-02 12:25:49.812 2 DEBUG oslo_concurrency.lockutils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "7cbab39e3aed4995ab266f1538a4292a3519f394" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:49 compute-0 nova_compute[257802]: 2025-10-02 12:25:49.985 2 DEBUG nova.objects.instance [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'migration_context' on Instance uuid 7b4bdbc9-7451-4500-8794-c8edef50d6a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:25:50 compute-0 nova_compute[257802]: 2025-10-02 12:25:50.064 2 DEBUG nova.storage.rbd_utils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] flattening vms/7b4bdbc9-7451-4500-8794-c8edef50d6a4_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 12:25:50 compute-0 nova_compute[257802]: 2025-10-02 12:25:50.124 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:50 compute-0 nova_compute[257802]: 2025-10-02 12:25:50.124 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:50 compute-0 nova_compute[257802]: 2025-10-02 12:25:50.125 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:25:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1143002270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:50.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:50 compute-0 nova_compute[257802]: 2025-10-02 12:25:50.836 2 DEBUG nova.virt.libvirt.driver [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Image rbd:vms/7b4bdbc9-7451-4500-8794-c8edef50d6a4_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Oct 02 12:25:50 compute-0 nova_compute[257802]: 2025-10-02 12:25:50.837 2 DEBUG nova.virt.libvirt.driver [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:25:50 compute-0 nova_compute[257802]: 2025-10-02 12:25:50.837 2 DEBUG nova.virt.libvirt.driver [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Ensure instance console log exists: /var/lib/nova/instances/7b4bdbc9-7451-4500-8794-c8edef50d6a4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:25:50 compute-0 nova_compute[257802]: 2025-10-02 12:25:50.838 2 DEBUG oslo_concurrency.lockutils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:50 compute-0 nova_compute[257802]: 2025-10-02 12:25:50.838 2 DEBUG oslo_concurrency.lockutils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:50 compute-0 nova_compute[257802]: 2025-10-02 12:25:50.838 2 DEBUG oslo_concurrency.lockutils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:50 compute-0 nova_compute[257802]: 2025-10-02 12:25:50.841 2 DEBUG nova.virt.libvirt.driver [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Start _get_guest_xml network_info=[{"id": "9d6e67d8-8c6a-4b95-b332-80f8674a0ebb", "address": "fa:16:3e:6d:91:e2", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d6e67d8-8c", "ovs_interfaceid": "9d6e67d8-8c6a-4b95-b332-80f8674a0ebb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T12:25:16Z,direct_url=<?>,disk_format='raw',id=cf693858-d747-42e7-8f75-d6c36d36cc6c,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-1305802395-shelved',owner='10fff81da7a54740a53a0771ce916329',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T12:25:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:25:50 compute-0 nova_compute[257802]: 2025-10-02 12:25:50.844 2 WARNING nova.virt.libvirt.driver [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:25:50 compute-0 nova_compute[257802]: 2025-10-02 12:25:50.849 2 DEBUG nova.virt.libvirt.host [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:25:50 compute-0 nova_compute[257802]: 2025-10-02 12:25:50.850 2 DEBUG nova.virt.libvirt.host [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:25:50 compute-0 nova_compute[257802]: 2025-10-02 12:25:50.853 2 DEBUG nova.virt.libvirt.host [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:25:50 compute-0 nova_compute[257802]: 2025-10-02 12:25:50.853 2 DEBUG nova.virt.libvirt.host [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:25:50 compute-0 nova_compute[257802]: 2025-10-02 12:25:50.855 2 DEBUG nova.virt.libvirt.driver [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:25:50 compute-0 nova_compute[257802]: 2025-10-02 12:25:50.855 2 DEBUG nova.virt.hardware [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T12:25:16Z,direct_url=<?>,disk_format='raw',id=cf693858-d747-42e7-8f75-d6c36d36cc6c,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-1305802395-shelved',owner='10fff81da7a54740a53a0771ce916329',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T12:25:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:25:50 compute-0 nova_compute[257802]: 2025-10-02 12:25:50.855 2 DEBUG nova.virt.hardware [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:25:50 compute-0 nova_compute[257802]: 2025-10-02 12:25:50.856 2 DEBUG nova.virt.hardware [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:25:50 compute-0 nova_compute[257802]: 2025-10-02 12:25:50.856 2 DEBUG nova.virt.hardware [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:25:50 compute-0 nova_compute[257802]: 2025-10-02 12:25:50.856 2 DEBUG nova.virt.hardware [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:25:50 compute-0 nova_compute[257802]: 2025-10-02 12:25:50.856 2 DEBUG nova.virt.hardware [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:25:50 compute-0 nova_compute[257802]: 2025-10-02 12:25:50.856 2 DEBUG nova.virt.hardware [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:25:50 compute-0 nova_compute[257802]: 2025-10-02 12:25:50.857 2 DEBUG nova.virt.hardware [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:25:50 compute-0 nova_compute[257802]: 2025-10-02 12:25:50.857 2 DEBUG nova.virt.hardware [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:25:50 compute-0 nova_compute[257802]: 2025-10-02 12:25:50.857 2 DEBUG nova.virt.hardware [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:25:50 compute-0 nova_compute[257802]: 2025-10-02 12:25:50.857 2 DEBUG nova.virt.hardware [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:25:50 compute-0 nova_compute[257802]: 2025-10-02 12:25:50.858 2 DEBUG nova.objects.instance [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7b4bdbc9-7451-4500-8794-c8edef50d6a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:25:50 compute-0 nova_compute[257802]: 2025-10-02 12:25:50.876 2 DEBUG oslo_concurrency.processutils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.064 2 DEBUG nova.network.neutron [req-b7c68ddd-e7c2-4272-9f12-e1dc1a520b1f req-181e5f4c-94e2-4a86-bd86-5474b970f276 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Updated VIF entry in instance network info cache for port 9d6e67d8-8c6a-4b95-b332-80f8674a0ebb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.065 2 DEBUG nova.network.neutron [req-b7c68ddd-e7c2-4272-9f12-e1dc1a520b1f req-181e5f4c-94e2-4a86-bd86-5474b970f276 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Updating instance_info_cache with network_info: [{"id": "9d6e67d8-8c6a-4b95-b332-80f8674a0ebb", "address": "fa:16:3e:6d:91:e2", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d6e67d8-8c", "ovs_interfaceid": "9d6e67d8-8c6a-4b95-b332-80f8674a0ebb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.099 2 DEBUG oslo_concurrency.lockutils [req-b7c68ddd-e7c2-4272-9f12-e1dc1a520b1f req-181e5f4c-94e2-4a86-bd86-5474b970f276 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-7b4bdbc9-7451-4500-8794-c8edef50d6a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:25:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:51.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:25:51 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1264861206' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.336 2 DEBUG oslo_concurrency.processutils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.365 2 DEBUG nova.storage.rbd_utils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image 7b4bdbc9-7451-4500-8794-c8edef50d6a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.370 2 DEBUG oslo_concurrency.processutils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:25:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1915: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 538 KiB/s rd, 28 KiB/s wr, 53 op/s
Oct 02 12:25:51 compute-0 ceph-mon[73607]: pgmap v1914: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 555 KiB/s rd, 28 KiB/s wr, 74 op/s
Oct 02 12:25:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1264861206' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:25:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:25:51 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2461500262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.792 2 DEBUG oslo_concurrency.processutils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.793 2 DEBUG nova.virt.libvirt.vif [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:23:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1305802395',display_name='tempest-ServerActionsTestOtherB-server-1305802395',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1305802395',id=101,image_ref='cf693858-d747-42e7-8f75-d6c36d36cc6c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-1336245373',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:24:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='10fff81da7a54740a53a0771ce916329',ramdisk_id='',reservation_id='r-bt70f33h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_
hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1686489955',owner_user_name='tempest-ServerActionsTestOtherB-1686489955-project-member',shelved_at='2025-10-02T12:25:31.322938',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='cf693858-d747-42e7-8f75-d6c36d36cc6c'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:25:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='25468893d71641a385711fd2982bb00b',uuid=7b4bdbc9-7451-4500-8794-c8edef50d6a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "9d6e67d8-8c6a-4b95-b332-80f8674a0ebb", "address": "fa:16:3e:6d:91:e2", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d6e67d8-8c", "ovs_interfaceid": "9d6e67d8-8c6a-4b95-b332-80f8674a0ebb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.794 2 DEBUG nova.network.os_vif_util [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converting VIF {"id": "9d6e67d8-8c6a-4b95-b332-80f8674a0ebb", "address": "fa:16:3e:6d:91:e2", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d6e67d8-8c", "ovs_interfaceid": "9d6e67d8-8c6a-4b95-b332-80f8674a0ebb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.795 2 DEBUG nova.network.os_vif_util [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:91:e2,bridge_name='br-int',has_traffic_filtering=True,id=9d6e67d8-8c6a-4b95-b332-80f8674a0ebb,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d6e67d8-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.796 2 DEBUG nova.objects.instance [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7b4bdbc9-7451-4500-8794-c8edef50d6a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.892 2 DEBUG nova.virt.libvirt.driver [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:25:51 compute-0 nova_compute[257802]:   <uuid>7b4bdbc9-7451-4500-8794-c8edef50d6a4</uuid>
Oct 02 12:25:51 compute-0 nova_compute[257802]:   <name>instance-00000065</name>
Oct 02 12:25:51 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:25:51 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:25:51 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:25:51 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:       <nova:name>tempest-ServerActionsTestOtherB-server-1305802395</nova:name>
Oct 02 12:25:51 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:25:50</nova:creationTime>
Oct 02 12:25:51 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:25:51 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:25:51 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:25:51 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:25:51 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:25:51 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:25:51 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:25:51 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:25:51 compute-0 nova_compute[257802]:         <nova:user uuid="25468893d71641a385711fd2982bb00b">tempest-ServerActionsTestOtherB-1686489955-project-member</nova:user>
Oct 02 12:25:51 compute-0 nova_compute[257802]:         <nova:project uuid="10fff81da7a54740a53a0771ce916329">tempest-ServerActionsTestOtherB-1686489955</nova:project>
Oct 02 12:25:51 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:25:51 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="cf693858-d747-42e7-8f75-d6c36d36cc6c"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:25:51 compute-0 nova_compute[257802]:         <nova:port uuid="9d6e67d8-8c6a-4b95-b332-80f8674a0ebb">
Oct 02 12:25:51 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:25:51 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:25:51 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:25:51 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <system>
Oct 02 12:25:51 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:25:51 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:25:51 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:25:51 compute-0 nova_compute[257802]:       <entry name="serial">7b4bdbc9-7451-4500-8794-c8edef50d6a4</entry>
Oct 02 12:25:51 compute-0 nova_compute[257802]:       <entry name="uuid">7b4bdbc9-7451-4500-8794-c8edef50d6a4</entry>
Oct 02 12:25:51 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     </system>
Oct 02 12:25:51 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:25:51 compute-0 nova_compute[257802]:   <os>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:   </os>
Oct 02 12:25:51 compute-0 nova_compute[257802]:   <features>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:   </features>
Oct 02 12:25:51 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:25:51 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:25:51 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:25:51 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/7b4bdbc9-7451-4500-8794-c8edef50d6a4_disk">
Oct 02 12:25:51 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:       </source>
Oct 02 12:25:51 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:25:51 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:25:51 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:25:51 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/7b4bdbc9-7451-4500-8794-c8edef50d6a4_disk.config">
Oct 02 12:25:51 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:       </source>
Oct 02 12:25:51 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:25:51 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:25:51 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:25:51 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:6d:91:e2"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:       <target dev="tap9d6e67d8-8c"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:25:51 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/7b4bdbc9-7451-4500-8794-c8edef50d6a4/console.log" append="off"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <video>
Oct 02 12:25:51 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     </video>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <input type="keyboard" bus="usb"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:25:51 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:25:51 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:25:51 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:25:51 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:25:51 compute-0 nova_compute[257802]: </domain>
Oct 02 12:25:51 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.894 2 DEBUG nova.compute.manager [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Preparing to wait for external event network-vif-plugged-9d6e67d8-8c6a-4b95-b332-80f8674a0ebb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.895 2 DEBUG oslo_concurrency.lockutils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "7b4bdbc9-7451-4500-8794-c8edef50d6a4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.896 2 DEBUG oslo_concurrency.lockutils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "7b4bdbc9-7451-4500-8794-c8edef50d6a4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.896 2 DEBUG oslo_concurrency.lockutils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "7b4bdbc9-7451-4500-8794-c8edef50d6a4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.898 2 DEBUG nova.virt.libvirt.vif [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:23:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1305802395',display_name='tempest-ServerActionsTestOtherB-server-1305802395',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1305802395',id=101,image_ref='cf693858-d747-42e7-8f75-d6c36d36cc6c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-1336245373',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:24:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='10fff81da7a54740a53a0771ce916329',ramdisk_id='',reservation_id='r-bt70f33h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virt
io',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1686489955',owner_user_name='tempest-ServerActionsTestOtherB-1686489955-project-member',shelved_at='2025-10-02T12:25:31.322938',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='cf693858-d747-42e7-8f75-d6c36d36cc6c'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:25:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='25468893d71641a385711fd2982bb00b',uuid=7b4bdbc9-7451-4500-8794-c8edef50d6a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "9d6e67d8-8c6a-4b95-b332-80f8674a0ebb", "address": "fa:16:3e:6d:91:e2", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d6e67d8-8c", "ovs_interfaceid": "9d6e67d8-8c6a-4b95-b332-80f8674a0ebb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.899 2 DEBUG nova.network.os_vif_util [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converting VIF {"id": "9d6e67d8-8c6a-4b95-b332-80f8674a0ebb", "address": "fa:16:3e:6d:91:e2", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d6e67d8-8c", "ovs_interfaceid": "9d6e67d8-8c6a-4b95-b332-80f8674a0ebb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.900 2 DEBUG nova.network.os_vif_util [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:91:e2,bridge_name='br-int',has_traffic_filtering=True,id=9d6e67d8-8c6a-4b95-b332-80f8674a0ebb,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d6e67d8-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.902 2 DEBUG os_vif [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:91:e2,bridge_name='br-int',has_traffic_filtering=True,id=9d6e67d8-8c6a-4b95-b332-80f8674a0ebb,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d6e67d8-8c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.904 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.906 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.911 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d6e67d8-8c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.912 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9d6e67d8-8c, col_values=(('external_ids', {'iface-id': '9d6e67d8-8c6a-4b95-b332-80f8674a0ebb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6d:91:e2', 'vm-uuid': '7b4bdbc9-7451-4500-8794-c8edef50d6a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:51 compute-0 NetworkManager[44987]: <info>  [1759407951.9301] manager: (tap9d6e67d8-8c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/213)
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:51 compute-0 nova_compute[257802]: 2025-10-02 12:25:51.938 2 INFO os_vif [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:91:e2,bridge_name='br-int',has_traffic_filtering=True,id=9d6e67d8-8c6a-4b95-b332-80f8674a0ebb,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d6e67d8-8c')
Oct 02 12:25:52 compute-0 nova_compute[257802]: 2025-10-02 12:25:52.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:52 compute-0 nova_compute[257802]: 2025-10-02 12:25:52.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:52 compute-0 nova_compute[257802]: 2025-10-02 12:25:52.190 2 DEBUG nova.virt.libvirt.driver [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:25:52 compute-0 nova_compute[257802]: 2025-10-02 12:25:52.191 2 DEBUG nova.virt.libvirt.driver [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:25:52 compute-0 nova_compute[257802]: 2025-10-02 12:25:52.191 2 DEBUG nova.virt.libvirt.driver [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] No VIF found with MAC fa:16:3e:6d:91:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:25:52 compute-0 nova_compute[257802]: 2025-10-02 12:25:52.192 2 INFO nova.virt.libvirt.driver [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Using config drive
Oct 02 12:25:52 compute-0 nova_compute[257802]: 2025-10-02 12:25:52.235 2 DEBUG nova.storage.rbd_utils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image 7b4bdbc9-7451-4500-8794-c8edef50d6a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:25:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:52 compute-0 nova_compute[257802]: 2025-10-02 12:25:52.338 2 DEBUG nova.objects.instance [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 7b4bdbc9-7451-4500-8794-c8edef50d6a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:25:52 compute-0 nova_compute[257802]: 2025-10-02 12:25:52.443 2 DEBUG nova.objects.instance [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'keypairs' on Instance uuid 7b4bdbc9-7451-4500-8794-c8edef50d6a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:25:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:25:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:52.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:25:52 compute-0 ceph-mon[73607]: pgmap v1915: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 538 KiB/s rd, 28 KiB/s wr, 53 op/s
Oct 02 12:25:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2461500262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:25:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2681305090' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:25:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3189314125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:25:52 compute-0 nova_compute[257802]: 2025-10-02 12:25:52.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:52 compute-0 nova_compute[257802]: 2025-10-02 12:25:52.851 2 INFO nova.virt.libvirt.driver [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Creating config drive at /var/lib/nova/instances/7b4bdbc9-7451-4500-8794-c8edef50d6a4/disk.config
Oct 02 12:25:52 compute-0 nova_compute[257802]: 2025-10-02 12:25:52.863 2 DEBUG oslo_concurrency.processutils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7b4bdbc9-7451-4500-8794-c8edef50d6a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1emy_k32 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:25:53 compute-0 nova_compute[257802]: 2025-10-02 12:25:53.021 2 DEBUG oslo_concurrency.processutils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7b4bdbc9-7451-4500-8794-c8edef50d6a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1emy_k32" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:25:53 compute-0 nova_compute[257802]: 2025-10-02 12:25:53.072 2 DEBUG nova.storage.rbd_utils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image 7b4bdbc9-7451-4500-8794-c8edef50d6a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:25:53 compute-0 nova_compute[257802]: 2025-10-02 12:25:53.078 2 DEBUG oslo_concurrency.processutils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7b4bdbc9-7451-4500-8794-c8edef50d6a4/disk.config 7b4bdbc9-7451-4500-8794-c8edef50d6a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:25:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:53.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:53 compute-0 nova_compute[257802]: 2025-10-02 12:25:53.447 2 DEBUG oslo_concurrency.processutils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7b4bdbc9-7451-4500-8794-c8edef50d6a4/disk.config 7b4bdbc9-7451-4500-8794-c8edef50d6a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.369s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:25:53 compute-0 nova_compute[257802]: 2025-10-02 12:25:53.449 2 INFO nova.virt.libvirt.driver [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Deleting local config drive /var/lib/nova/instances/7b4bdbc9-7451-4500-8794-c8edef50d6a4/disk.config because it was imported into RBD.
Oct 02 12:25:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 305 active+clean; 505 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.5 MiB/s wr, 109 op/s
Oct 02 12:25:53 compute-0 kernel: tap9d6e67d8-8c: entered promiscuous mode
Oct 02 12:25:53 compute-0 NetworkManager[44987]: <info>  [1759407953.5215] manager: (tap9d6e67d8-8c): new Tun device (/org/freedesktop/NetworkManager/Devices/214)
Oct 02 12:25:53 compute-0 nova_compute[257802]: 2025-10-02 12:25:53.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:53 compute-0 ovn_controller[148183]: 2025-10-02T12:25:53Z|00449|binding|INFO|Claiming lport 9d6e67d8-8c6a-4b95-b332-80f8674a0ebb for this chassis.
Oct 02 12:25:53 compute-0 ovn_controller[148183]: 2025-10-02T12:25:53Z|00450|binding|INFO|9d6e67d8-8c6a-4b95-b332-80f8674a0ebb: Claiming fa:16:3e:6d:91:e2 10.100.0.10
Oct 02 12:25:53 compute-0 nova_compute[257802]: 2025-10-02 12:25:53.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:53 compute-0 NetworkManager[44987]: <info>  [1759407953.5532] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/215)
Oct 02 12:25:53 compute-0 NetworkManager[44987]: <info>  [1759407953.5544] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/216)
Oct 02 12:25:53 compute-0 nova_compute[257802]: 2025-10-02 12:25:53.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:53 compute-0 systemd-machined[211836]: New machine qemu-52-instance-00000065.
Oct 02 12:25:53 compute-0 systemd[1]: Started Virtual Machine qemu-52-instance-00000065.
Oct 02 12:25:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:53.618 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:91:e2 10.100.0.10'], port_security=['fa:16:3e:6d:91:e2 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7b4bdbc9-7451-4500-8794-c8edef50d6a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4035a600-4a5e-41ee-a619-d81e2c993b79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '10fff81da7a54740a53a0771ce916329', 'neutron:revision_number': '7', 'neutron:security_group_ids': '32af0a94-4565-470d-9918-1bc97e347f8f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5dc7931-b785-4336-99b8-936a17be87c3, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=9d6e67d8-8c6a-4b95-b332-80f8674a0ebb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:25:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:53.620 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 9d6e67d8-8c6a-4b95-b332-80f8674a0ebb in datapath 4035a600-4a5e-41ee-a619-d81e2c993b79 bound to our chassis
Oct 02 12:25:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:53.623 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4035a600-4a5e-41ee-a619-d81e2c993b79
Oct 02 12:25:53 compute-0 systemd-udevd[322399]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:25:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:53.639 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[50fde43b-0707-4d2b-9a27-67e30e12b937]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:53.640 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4035a600-41 in ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:25:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:53.643 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4035a600-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:25:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:53.643 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6c5c4b58-9d1d-463c-9429-c3c50f59e6d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:53.644 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5d7fdd45-397d-4376-a606-a506a0728fc8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:53 compute-0 NetworkManager[44987]: <info>  [1759407953.6717] device (tap9d6e67d8-8c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:25:53 compute-0 NetworkManager[44987]: <info>  [1759407953.6731] device (tap9d6e67d8-8c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:25:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:53.665 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[1088c4c4-a6a7-489e-8b70-5ef6d09a629d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:53.703 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0437020f-bd0c-4fe1-9c08-0152cce2ac9d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:53.747 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[a5204e76-21d2-489b-ae3d-4164450dce91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:53 compute-0 NetworkManager[44987]: <info>  [1759407953.7604] manager: (tap4035a600-40): new Veth device (/org/freedesktop/NetworkManager/Devices/217)
Oct 02 12:25:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:53.758 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5cef7ed5-9e41-4487-a9bc-0c5d385f4d66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:53 compute-0 systemd-udevd[322402]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:25:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:53.809 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[b90433cf-e6ce-4774-ba29-66619824236a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:53.814 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[e774981f-87e0-43c7-af2b-961dabe6bfb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:53 compute-0 nova_compute[257802]: 2025-10-02 12:25:53.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:53 compute-0 nova_compute[257802]: 2025-10-02 12:25:53.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:53 compute-0 NetworkManager[44987]: <info>  [1759407953.8477] device (tap4035a600-40): carrier: link connected
Oct 02 12:25:53 compute-0 ovn_controller[148183]: 2025-10-02T12:25:53Z|00451|binding|INFO|Setting lport 9d6e67d8-8c6a-4b95-b332-80f8674a0ebb ovn-installed in OVS
Oct 02 12:25:53 compute-0 ovn_controller[148183]: 2025-10-02T12:25:53Z|00452|binding|INFO|Setting lport 9d6e67d8-8c6a-4b95-b332-80f8674a0ebb up in Southbound
Oct 02 12:25:53 compute-0 nova_compute[257802]: 2025-10-02 12:25:53.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:53.856 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[15e090a3-ba28-4248-85e6-50c943b2bc8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:53.879 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5160e556-e8d6-449b-aab8-4f404a202013]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4035a600-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:fb:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 140], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 602147, 'reachable_time': 19736, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322431, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:53.902 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[36802e8d-d4f1-4af8-9515-48f7b266cb1f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed0:fb3f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 602147, 'tstamp': 602147}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322432, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:53.929 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[563e12b3-f293-4c86-8bf7-506d22a6f778]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4035a600-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:fb:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 140], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 602147, 'reachable_time': 19736, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322433, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:53.972 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[48a78438-cb1c-427e-8a7d-68573039c83f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:54.057 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f67e4880-19fe-40c2-a3ce-a4c559761c8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:54.060 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4035a600-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:54.061 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:54.061 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4035a600-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:25:54 compute-0 nova_compute[257802]: 2025-10-02 12:25:54.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:54 compute-0 kernel: tap4035a600-40: entered promiscuous mode
Oct 02 12:25:54 compute-0 NetworkManager[44987]: <info>  [1759407954.0676] manager: (tap4035a600-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:54.068 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4035a600-40, col_values=(('external_ids', {'iface-id': '1befa812-080f-4694-ba8b-9130fe81621d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:25:54 compute-0 ovn_controller[148183]: 2025-10-02T12:25:54Z|00453|binding|INFO|Releasing lport 1befa812-080f-4694-ba8b-9130fe81621d from this chassis (sb_readonly=1)
Oct 02 12:25:54 compute-0 nova_compute[257802]: 2025-10-02 12:25:54.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:54 compute-0 nova_compute[257802]: 2025-10-02 12:25:54.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:54 compute-0 nova_compute[257802]: 2025-10-02 12:25:54.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:54 compute-0 nova_compute[257802]: 2025-10-02 12:25:54.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:54.143 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4035a600-4a5e-41ee-a619-d81e2c993b79.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4035a600-4a5e-41ee-a619-d81e2c993b79.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:54.144 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4b9e3d37-ab62-42bf-b004-026f74a3f08e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:54.145 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-4035a600-4a5e-41ee-a619-d81e2c993b79
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/4035a600-4a5e-41ee-a619-d81e2c993b79.pid.haproxy
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 4035a600-4a5e-41ee-a619-d81e2c993b79
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:25:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:25:54.145 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'env', 'PROCESS_TAG=haproxy-4035a600-4a5e-41ee-a619-d81e2c993b79', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4035a600-4a5e-41ee-a619-d81e2c993b79.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:25:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:25:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:25:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:25:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:25:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0071022066350269865 of space, bias 1.0, pg target 2.130661990508096 quantized to 32 (current 32)
Oct 02 12:25:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:25:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002162323480830076 of space, bias 1.0, pg target 0.6443723972873627 quantized to 32 (current 32)
Oct 02 12:25:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:25:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:25:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:25:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.005059658826540108 of space, bias 1.0, pg target 1.5077783303089523 quantized to 32 (current 32)
Oct 02 12:25:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:25:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 02 12:25:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:25:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:25:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:25:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.0002699042085427136 quantized to 32 (current 32)
Oct 02 12:25:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:25:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Oct 02 12:25:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:25:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:25:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:25:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Oct 02 12:25:54 compute-0 podman[322507]: 2025-10-02 12:25:54.53546788 +0000 UTC m=+0.022431531 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:25:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:54.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:54 compute-0 podman[322507]: 2025-10-02 12:25:54.71920053 +0000 UTC m=+0.206164141 container create d659afc8949607b1588893d422d627fdeff00c651a3474982f4d3337b540c09f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:25:54 compute-0 nova_compute[257802]: 2025-10-02 12:25:54.765 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407954.7637992, 7b4bdbc9-7451-4500-8794-c8edef50d6a4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:25:54 compute-0 nova_compute[257802]: 2025-10-02 12:25:54.766 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] VM Started (Lifecycle Event)
Oct 02 12:25:54 compute-0 ceph-mon[73607]: pgmap v1916: 305 pgs: 305 active+clean; 505 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.5 MiB/s wr, 109 op/s
Oct 02 12:25:54 compute-0 systemd[1]: Started libpod-conmon-d659afc8949607b1588893d422d627fdeff00c651a3474982f4d3337b540c09f.scope.
Oct 02 12:25:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:25:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cef1694da96858276089c2aa659d73c68b1325c1f96811d00c258da119bce06b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:25:54 compute-0 podman[322507]: 2025-10-02 12:25:54.905441262 +0000 UTC m=+0.392404893 container init d659afc8949607b1588893d422d627fdeff00c651a3474982f4d3337b540c09f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:25:54 compute-0 podman[322507]: 2025-10-02 12:25:54.911712086 +0000 UTC m=+0.398675697 container start d659afc8949607b1588893d422d627fdeff00c651a3474982f4d3337b540c09f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:25:54 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[322523]: [NOTICE]   (322527) : New worker (322529) forked
Oct 02 12:25:54 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[322523]: [NOTICE]   (322527) : Loading success.
Oct 02 12:25:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:25:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3050220245' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:25:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:25:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3050220245' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:25:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:55.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 513 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 112 op/s
Oct 02 12:25:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3050220245' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:25:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3050220245' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:25:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:56.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:56 compute-0 nova_compute[257802]: 2025-10-02 12:25:56.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:57.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:57 compute-0 ceph-mon[73607]: pgmap v1917: 305 pgs: 305 active+clean; 513 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 112 op/s
Oct 02 12:25:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.9 MiB/s wr, 143 op/s
Oct 02 12:25:57 compute-0 nova_compute[257802]: 2025-10-02 12:25:57.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:58 compute-0 nova_compute[257802]: 2025-10-02 12:25:58.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:58.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:58 compute-0 ceph-mon[73607]: pgmap v1918: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.9 MiB/s wr, 143 op/s
Oct 02 12:25:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:25:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:59.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 3.9 MiB/s wr, 188 op/s
Oct 02 12:26:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:00.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:01 compute-0 ceph-mon[73607]: pgmap v1919: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 3.9 MiB/s wr, 188 op/s
Oct 02 12:26:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:01.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 3.9 MiB/s wr, 187 op/s
Oct 02 12:26:01 compute-0 nova_compute[257802]: 2025-10-02 12:26:01.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:02 compute-0 ceph-mon[73607]: pgmap v1920: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 3.9 MiB/s wr, 187 op/s
Oct 02 12:26:02 compute-0 sudo[322541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:02 compute-0 sudo[322541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:02 compute-0 sudo[322541]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:02 compute-0 sudo[322567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:02 compute-0 sudo[322567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:02 compute-0 sudo[322567]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:02.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:02 compute-0 nova_compute[257802]: 2025-10-02 12:26:02.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:03.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1921: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 3.9 MiB/s wr, 187 op/s
Oct 02 12:26:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:26:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:04.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:26:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:05.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:05 compute-0 ceph-mon[73607]: pgmap v1921: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 3.9 MiB/s wr, 187 op/s
Oct 02 12:26:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.4 MiB/s wr, 130 op/s
Oct 02 12:26:05 compute-0 podman[322594]: 2025-10-02 12:26:05.96560406 +0000 UTC m=+0.088184431 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct 02 12:26:05 compute-0 podman[322595]: 2025-10-02 12:26:05.975298787 +0000 UTC m=+0.091600074 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 12:26:05 compute-0 podman[322593]: 2025-10-02 12:26:05.988690366 +0000 UTC m=+0.113362938 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct 02 12:26:06 compute-0 nova_compute[257802]: 2025-10-02 12:26:06.251 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:26:06 compute-0 nova_compute[257802]: 2025-10-02 12:26:06.261 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407954.764673, 7b4bdbc9-7451-4500-8794-c8edef50d6a4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:26:06 compute-0 nova_compute[257802]: 2025-10-02 12:26:06.261 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] VM Paused (Lifecycle Event)
Oct 02 12:26:06 compute-0 nova_compute[257802]: 2025-10-02 12:26:06.263 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:06 compute-0 nova_compute[257802]: 2025-10-02 12:26:06.264 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:26:06 compute-0 nova_compute[257802]: 2025-10-02 12:26:06.264 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:26:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:06.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:06 compute-0 ceph-mon[73607]: pgmap v1922: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.4 MiB/s wr, 130 op/s
Oct 02 12:26:06 compute-0 nova_compute[257802]: 2025-10-02 12:26:06.814 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-7b4bdbc9-7451-4500-8794-c8edef50d6a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:26:06 compute-0 nova_compute[257802]: 2025-10-02 12:26:06.815 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-7b4bdbc9-7451-4500-8794-c8edef50d6a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:26:06 compute-0 nova_compute[257802]: 2025-10-02 12:26:06.815 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:26:06 compute-0 nova_compute[257802]: 2025-10-02 12:26:06.815 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7b4bdbc9-7451-4500-8794-c8edef50d6a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:26:06 compute-0 nova_compute[257802]: 2025-10-02 12:26:06.817 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:26:06 compute-0 nova_compute[257802]: 2025-10-02 12:26:06.821 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:26:06 compute-0 nova_compute[257802]: 2025-10-02 12:26:06.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:06 compute-0 nova_compute[257802]: 2025-10-02 12:26:06.994 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:26:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:07.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 20 KiB/s wr, 98 op/s
Oct 02 12:26:07 compute-0 nova_compute[257802]: 2025-10-02 12:26:07.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/627826706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:26:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:08.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:26:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:09.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:09 compute-0 ceph-mon[73607]: pgmap v1923: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 20 KiB/s wr, 98 op/s
Oct 02 12:26:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 10 KiB/s wr, 84 op/s
Oct 02 12:26:09 compute-0 nova_compute[257802]: 2025-10-02 12:26:09.592 2 DEBUG nova.compute.manager [req-8b560af1-dbf6-4263-a5f9-e2d62f16a892 req-1969c925-1ed0-4dc2-980e-23771fd69a55 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Received event network-vif-plugged-9d6e67d8-8c6a-4b95-b332-80f8674a0ebb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:26:09 compute-0 nova_compute[257802]: 2025-10-02 12:26:09.593 2 DEBUG oslo_concurrency.lockutils [req-8b560af1-dbf6-4263-a5f9-e2d62f16a892 req-1969c925-1ed0-4dc2-980e-23771fd69a55 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "7b4bdbc9-7451-4500-8794-c8edef50d6a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:09 compute-0 nova_compute[257802]: 2025-10-02 12:26:09.593 2 DEBUG oslo_concurrency.lockutils [req-8b560af1-dbf6-4263-a5f9-e2d62f16a892 req-1969c925-1ed0-4dc2-980e-23771fd69a55 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b4bdbc9-7451-4500-8794-c8edef50d6a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:09 compute-0 nova_compute[257802]: 2025-10-02 12:26:09.594 2 DEBUG oslo_concurrency.lockutils [req-8b560af1-dbf6-4263-a5f9-e2d62f16a892 req-1969c925-1ed0-4dc2-980e-23771fd69a55 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b4bdbc9-7451-4500-8794-c8edef50d6a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:09 compute-0 nova_compute[257802]: 2025-10-02 12:26:09.594 2 DEBUG nova.compute.manager [req-8b560af1-dbf6-4263-a5f9-e2d62f16a892 req-1969c925-1ed0-4dc2-980e-23771fd69a55 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Processing event network-vif-plugged-9d6e67d8-8c6a-4b95-b332-80f8674a0ebb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:26:09 compute-0 nova_compute[257802]: 2025-10-02 12:26:09.595 2 DEBUG nova.compute.manager [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Instance event wait completed in 14 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:26:09 compute-0 nova_compute[257802]: 2025-10-02 12:26:09.598 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759407969.5977232, 7b4bdbc9-7451-4500-8794-c8edef50d6a4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:26:09 compute-0 nova_compute[257802]: 2025-10-02 12:26:09.599 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] VM Resumed (Lifecycle Event)
Oct 02 12:26:09 compute-0 nova_compute[257802]: 2025-10-02 12:26:09.600 2 DEBUG nova.virt.libvirt.driver [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:26:09 compute-0 nova_compute[257802]: 2025-10-02 12:26:09.603 2 INFO nova.virt.libvirt.driver [-] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Instance spawned successfully.
Oct 02 12:26:09 compute-0 nova_compute[257802]: 2025-10-02 12:26:09.655 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:26:09 compute-0 nova_compute[257802]: 2025-10-02 12:26:09.658 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:26:09 compute-0 nova_compute[257802]: 2025-10-02 12:26:09.740 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:26:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Oct 02 12:26:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:26:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:10.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:26:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4083597903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:10 compute-0 ceph-mon[73607]: pgmap v1924: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 10 KiB/s wr, 84 op/s
Oct 02 12:26:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:26:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:11.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:26:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Oct 02 12:26:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1925: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 515 KiB/s rd, 9.6 KiB/s wr, 38 op/s
Oct 02 12:26:11 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Oct 02 12:26:11 compute-0 nova_compute[257802]: 2025-10-02 12:26:11.585 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Updating instance_info_cache with network_info: [{"id": "9d6e67d8-8c6a-4b95-b332-80f8674a0ebb", "address": "fa:16:3e:6d:91:e2", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d6e67d8-8c", "ovs_interfaceid": "9d6e67d8-8c6a-4b95-b332-80f8674a0ebb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:26:11 compute-0 nova_compute[257802]: 2025-10-02 12:26:11.673 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-7b4bdbc9-7451-4500-8794-c8edef50d6a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:26:11 compute-0 nova_compute[257802]: 2025-10-02 12:26:11.674 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:26:11 compute-0 nova_compute[257802]: 2025-10-02 12:26:11.675 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:11 compute-0 nova_compute[257802]: 2025-10-02 12:26:11.676 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:11 compute-0 nova_compute[257802]: 2025-10-02 12:26:11.712 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:11 compute-0 nova_compute[257802]: 2025-10-02 12:26:11.714 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:11 compute-0 nova_compute[257802]: 2025-10-02 12:26:11.715 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:11 compute-0 nova_compute[257802]: 2025-10-02 12:26:11.715 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:26:11 compute-0 nova_compute[257802]: 2025-10-02 12:26:11.716 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:26:11 compute-0 nova_compute[257802]: 2025-10-02 12:26:11.773 2 DEBUG nova.compute.manager [req-d0546524-2ded-4734-9bf4-ba99714638b1 req-381b1beb-c1c6-493a-9409-5ddd9af06f5c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Received event network-vif-plugged-9d6e67d8-8c6a-4b95-b332-80f8674a0ebb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:26:11 compute-0 nova_compute[257802]: 2025-10-02 12:26:11.774 2 DEBUG oslo_concurrency.lockutils [req-d0546524-2ded-4734-9bf4-ba99714638b1 req-381b1beb-c1c6-493a-9409-5ddd9af06f5c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "7b4bdbc9-7451-4500-8794-c8edef50d6a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:11 compute-0 nova_compute[257802]: 2025-10-02 12:26:11.775 2 DEBUG oslo_concurrency.lockutils [req-d0546524-2ded-4734-9bf4-ba99714638b1 req-381b1beb-c1c6-493a-9409-5ddd9af06f5c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b4bdbc9-7451-4500-8794-c8edef50d6a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:11 compute-0 nova_compute[257802]: 2025-10-02 12:26:11.775 2 DEBUG oslo_concurrency.lockutils [req-d0546524-2ded-4734-9bf4-ba99714638b1 req-381b1beb-c1c6-493a-9409-5ddd9af06f5c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b4bdbc9-7451-4500-8794-c8edef50d6a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:11 compute-0 nova_compute[257802]: 2025-10-02 12:26:11.775 2 DEBUG nova.compute.manager [req-d0546524-2ded-4734-9bf4-ba99714638b1 req-381b1beb-c1c6-493a-9409-5ddd9af06f5c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] No waiting events found dispatching network-vif-plugged-9d6e67d8-8c6a-4b95-b332-80f8674a0ebb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:26:11 compute-0 nova_compute[257802]: 2025-10-02 12:26:11.776 2 WARNING nova.compute.manager [req-d0546524-2ded-4734-9bf4-ba99714638b1 req-381b1beb-c1c6-493a-9409-5ddd9af06f5c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Received unexpected event network-vif-plugged-9d6e67d8-8c6a-4b95-b332-80f8674a0ebb for instance with vm_state shelved_offloaded and task_state spawning.
Oct 02 12:26:11 compute-0 nova_compute[257802]: 2025-10-02 12:26:11.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1562785191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:12 compute-0 ceph-mon[73607]: osdmap e299: 3 total, 3 up, 3 in
Oct 02 12:26:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:26:12 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3381680503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:12 compute-0 nova_compute[257802]: 2025-10-02 12:26:12.345 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:26:12 compute-0 podman[322678]: 2025-10-02 12:26:12.49903403 +0000 UTC m=+0.106243224 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct 02 12:26:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:12.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:12 compute-0 nova_compute[257802]: 2025-10-02 12:26:12.694 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:26:12 compute-0 nova_compute[257802]: 2025-10-02 12:26:12.695 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:26:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:26:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:26:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:26:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:26:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:26:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:26:12 compute-0 nova_compute[257802]: 2025-10-02 12:26:12.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:12 compute-0 nova_compute[257802]: 2025-10-02 12:26:12.827 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:26:12 compute-0 nova_compute[257802]: 2025-10-02 12:26:12.828 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4291MB free_disk=20.851421356201172GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:26:12 compute-0 nova_compute[257802]: 2025-10-02 12:26:12.829 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:12 compute-0 nova_compute[257802]: 2025-10-02 12:26:12.829 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:13 compute-0 ceph-mon[73607]: pgmap v1925: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 515 KiB/s rd, 9.6 KiB/s wr, 38 op/s
Oct 02 12:26:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3381680503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:13.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:13 compute-0 nova_compute[257802]: 2025-10-02 12:26:13.220 2 DEBUG nova.compute.manager [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:26:13 compute-0 nova_compute[257802]: 2025-10-02 12:26:13.334 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 7b4bdbc9-7451-4500-8794-c8edef50d6a4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:26:13 compute-0 nova_compute[257802]: 2025-10-02 12:26:13.334 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:26:13 compute-0 nova_compute[257802]: 2025-10-02 12:26:13.335 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:26:13 compute-0 nova_compute[257802]: 2025-10-02 12:26:13.367 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:26:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 305 active+clean; 458 MiB data, 1012 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 106 op/s
Oct 02 12:26:13 compute-0 nova_compute[257802]: 2025-10-02 12:26:13.664 2 DEBUG oslo_concurrency.lockutils [None req-f802e7bb-f7b1-4c54-aae9-cc405e0c9c5d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "7b4bdbc9-7451-4500-8794-c8edef50d6a4" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 29.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:26:13 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2159768168' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:13 compute-0 nova_compute[257802]: 2025-10-02 12:26:13.820 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:26:13 compute-0 nova_compute[257802]: 2025-10-02 12:26:13.827 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:26:14 compute-0 nova_compute[257802]: 2025-10-02 12:26:14.173 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:26:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2159768168' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:14.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:14 compute-0 nova_compute[257802]: 2025-10-02 12:26:14.972 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:26:14 compute-0 nova_compute[257802]: 2025-10-02 12:26:14.973 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.144s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:26:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:15.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:26:15 compute-0 ceph-mon[73607]: pgmap v1927: 305 pgs: 305 active+clean; 458 MiB data, 1012 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 106 op/s
Oct 02 12:26:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1928: 305 pgs: 305 active+clean; 431 MiB data, 999 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 13 KiB/s wr, 156 op/s
Oct 02 12:26:16 compute-0 ceph-mon[73607]: pgmap v1928: 305 pgs: 305 active+clean; 431 MiB data, 999 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 13 KiB/s wr, 156 op/s
Oct 02 12:26:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:16.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:16 compute-0 nova_compute[257802]: 2025-10-02 12:26:16.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:17.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Oct 02 12:26:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Oct 02 12:26:17 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Oct 02 12:26:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1930: 305 pgs: 305 active+clean; 407 MiB data, 984 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 17 KiB/s wr, 144 op/s
Oct 02 12:26:17 compute-0 nova_compute[257802]: 2025-10-02 12:26:17.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:26:17 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1711866365' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:26:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:26:17 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1711866365' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:26:18 compute-0 ceph-mon[73607]: osdmap e300: 3 total, 3 up, 3 in
Oct 02 12:26:18 compute-0 ceph-mon[73607]: pgmap v1930: 305 pgs: 305 active+clean; 407 MiB data, 984 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 17 KiB/s wr, 144 op/s
Oct 02 12:26:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1711866365' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:26:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1711866365' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:26:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:18.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:26:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:19.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:26:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 305 active+clean; 387 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 18 KiB/s wr, 171 op/s
Oct 02 12:26:20 compute-0 ceph-mon[73607]: pgmap v1931: 305 pgs: 305 active+clean; 387 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 18 KiB/s wr, 171 op/s
Oct 02 12:26:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:20.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:21.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 305 active+clean; 387 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 15 KiB/s wr, 138 op/s
Oct 02 12:26:21 compute-0 nova_compute[257802]: 2025-10-02 12:26:21.703 2 DEBUG oslo_concurrency.lockutils [None req-cdb6965f-12db-4ecf-97b1-3a2b6ac11d2f 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "7b4bdbc9-7451-4500-8794-c8edef50d6a4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:21 compute-0 nova_compute[257802]: 2025-10-02 12:26:21.704 2 DEBUG oslo_concurrency.lockutils [None req-cdb6965f-12db-4ecf-97b1-3a2b6ac11d2f 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "7b4bdbc9-7451-4500-8794-c8edef50d6a4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:21 compute-0 nova_compute[257802]: 2025-10-02 12:26:21.704 2 DEBUG oslo_concurrency.lockutils [None req-cdb6965f-12db-4ecf-97b1-3a2b6ac11d2f 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "7b4bdbc9-7451-4500-8794-c8edef50d6a4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:21 compute-0 nova_compute[257802]: 2025-10-02 12:26:21.705 2 DEBUG oslo_concurrency.lockutils [None req-cdb6965f-12db-4ecf-97b1-3a2b6ac11d2f 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "7b4bdbc9-7451-4500-8794-c8edef50d6a4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:21 compute-0 nova_compute[257802]: 2025-10-02 12:26:21.705 2 DEBUG oslo_concurrency.lockutils [None req-cdb6965f-12db-4ecf-97b1-3a2b6ac11d2f 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "7b4bdbc9-7451-4500-8794-c8edef50d6a4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:21 compute-0 nova_compute[257802]: 2025-10-02 12:26:21.706 2 INFO nova.compute.manager [None req-cdb6965f-12db-4ecf-97b1-3a2b6ac11d2f 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Terminating instance
Oct 02 12:26:21 compute-0 nova_compute[257802]: 2025-10-02 12:26:21.708 2 DEBUG nova.compute.manager [None req-cdb6965f-12db-4ecf-97b1-3a2b6ac11d2f 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:26:21 compute-0 kernel: tap9d6e67d8-8c (unregistering): left promiscuous mode
Oct 02 12:26:21 compute-0 NetworkManager[44987]: <info>  [1759407981.7524] device (tap9d6e67d8-8c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:26:21 compute-0 nova_compute[257802]: 2025-10-02 12:26:21.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:21 compute-0 ovn_controller[148183]: 2025-10-02T12:26:21Z|00454|binding|INFO|Releasing lport 9d6e67d8-8c6a-4b95-b332-80f8674a0ebb from this chassis (sb_readonly=0)
Oct 02 12:26:21 compute-0 ovn_controller[148183]: 2025-10-02T12:26:21Z|00455|binding|INFO|Setting lport 9d6e67d8-8c6a-4b95-b332-80f8674a0ebb down in Southbound
Oct 02 12:26:21 compute-0 nova_compute[257802]: 2025-10-02 12:26:21.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:21 compute-0 ovn_controller[148183]: 2025-10-02T12:26:21Z|00456|binding|INFO|Removing iface tap9d6e67d8-8c ovn-installed in OVS
Oct 02 12:26:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:21.778 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:91:e2 10.100.0.10'], port_security=['fa:16:3e:6d:91:e2 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7b4bdbc9-7451-4500-8794-c8edef50d6a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4035a600-4a5e-41ee-a619-d81e2c993b79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '10fff81da7a54740a53a0771ce916329', 'neutron:revision_number': '9', 'neutron:security_group_ids': '32af0a94-4565-470d-9918-1bc97e347f8f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.188', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5dc7931-b785-4336-99b8-936a17be87c3, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=9d6e67d8-8c6a-4b95-b332-80f8674a0ebb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:26:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:21.779 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 9d6e67d8-8c6a-4b95-b332-80f8674a0ebb in datapath 4035a600-4a5e-41ee-a619-d81e2c993b79 unbound from our chassis
Oct 02 12:26:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:21.780 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4035a600-4a5e-41ee-a619-d81e2c993b79, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:26:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:21.781 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e62c6c5d-b962-4b9d-828a-10356ba96c73]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:21.782 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 namespace which is not needed anymore
Oct 02 12:26:21 compute-0 nova_compute[257802]: 2025-10-02 12:26:21.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:21 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000065.scope: Deactivated successfully.
Oct 02 12:26:21 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000065.scope: Consumed 13.303s CPU time.
Oct 02 12:26:21 compute-0 systemd-machined[211836]: Machine qemu-52-instance-00000065 terminated.
Oct 02 12:26:21 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[322523]: [NOTICE]   (322527) : haproxy version is 2.8.14-c23fe91
Oct 02 12:26:21 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[322523]: [NOTICE]   (322527) : path to executable is /usr/sbin/haproxy
Oct 02 12:26:21 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[322523]: [WARNING]  (322527) : Exiting Master process...
Oct 02 12:26:21 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[322523]: [ALERT]    (322527) : Current worker (322529) exited with code 143 (Terminated)
Oct 02 12:26:21 compute-0 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[322523]: [WARNING]  (322527) : All workers exited. Exiting... (0)
Oct 02 12:26:21 compute-0 systemd[1]: libpod-d659afc8949607b1588893d422d627fdeff00c651a3474982f4d3337b540c09f.scope: Deactivated successfully.
Oct 02 12:26:21 compute-0 conmon[322523]: conmon d659afc8949607b15888 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d659afc8949607b1588893d422d627fdeff00c651a3474982f4d3337b540c09f.scope/container/memory.events
Oct 02 12:26:21 compute-0 podman[322758]: 2025-10-02 12:26:21.933168932 +0000 UTC m=+0.051571865 container died d659afc8949607b1588893d422d627fdeff00c651a3474982f4d3337b540c09f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:26:21 compute-0 nova_compute[257802]: 2025-10-02 12:26:21.945 2 INFO nova.virt.libvirt.driver [-] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Instance destroyed successfully.
Oct 02 12:26:21 compute-0 nova_compute[257802]: 2025-10-02 12:26:21.945 2 DEBUG nova.objects.instance [None req-cdb6965f-12db-4ecf-97b1-3a2b6ac11d2f 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'resources' on Instance uuid 7b4bdbc9-7451-4500-8794-c8edef50d6a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:26:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d659afc8949607b1588893d422d627fdeff00c651a3474982f4d3337b540c09f-userdata-shm.mount: Deactivated successfully.
Oct 02 12:26:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-cef1694da96858276089c2aa659d73c68b1325c1f96811d00c258da119bce06b-merged.mount: Deactivated successfully.
Oct 02 12:26:21 compute-0 nova_compute[257802]: 2025-10-02 12:26:21.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:21 compute-0 podman[322758]: 2025-10-02 12:26:21.985855412 +0000 UTC m=+0.104258335 container cleanup d659afc8949607b1588893d422d627fdeff00c651a3474982f4d3337b540c09f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:26:21 compute-0 nova_compute[257802]: 2025-10-02 12:26:21.996 2 DEBUG nova.virt.libvirt.vif [None req-cdb6965f-12db-4ecf-97b1-3a2b6ac11d2f 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:23:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1305802395',display_name='tempest-ServerActionsTestOtherB-server-1305802395',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1305802395',id=101,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGD2jbBFmRg2ZrnheVnZyLwDISk/dFTNtp10+sWyF/q+rC4Q86cvBQSRgacxSPIqXVpmiVTqI66cLDPhvjcnRFXyQqHRS/RWGvUZk+wm1wfft8CveiGko+Vh4vSox2iOrA==',key_name='tempest-keypair-1336245373',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:26:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='10fff81da7a54740a53a0771ce916329',ramdisk_id='',reservation_id='r-bt70f33h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1686489955',owner_user_name='tempest-ServerActionsTestOtherB-1686489955-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:26:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='25468893d71641a385711fd2982bb00b',uuid=7b4bdbc9-7451-4500-8794-c8edef50d6a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9d6e67d8-8c6a-4b95-b332-80f8674a0ebb", "address": "fa:16:3e:6d:91:e2", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d6e67d8-8c", "ovs_interfaceid": "9d6e67d8-8c6a-4b95-b332-80f8674a0ebb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:26:21 compute-0 nova_compute[257802]: 2025-10-02 12:26:21.997 2 DEBUG nova.network.os_vif_util [None req-cdb6965f-12db-4ecf-97b1-3a2b6ac11d2f 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converting VIF {"id": "9d6e67d8-8c6a-4b95-b332-80f8674a0ebb", "address": "fa:16:3e:6d:91:e2", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d6e67d8-8c", "ovs_interfaceid": "9d6e67d8-8c6a-4b95-b332-80f8674a0ebb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:26:21 compute-0 systemd[1]: libpod-conmon-d659afc8949607b1588893d422d627fdeff00c651a3474982f4d3337b540c09f.scope: Deactivated successfully.
Oct 02 12:26:21 compute-0 nova_compute[257802]: 2025-10-02 12:26:21.997 2 DEBUG nova.network.os_vif_util [None req-cdb6965f-12db-4ecf-97b1-3a2b6ac11d2f 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6d:91:e2,bridge_name='br-int',has_traffic_filtering=True,id=9d6e67d8-8c6a-4b95-b332-80f8674a0ebb,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d6e67d8-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:26:21 compute-0 nova_compute[257802]: 2025-10-02 12:26:21.998 2 DEBUG os_vif [None req-cdb6965f-12db-4ecf-97b1-3a2b6ac11d2f 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:91:e2,bridge_name='br-int',has_traffic_filtering=True,id=9d6e67d8-8c6a-4b95-b332-80f8674a0ebb,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d6e67d8-8c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:26:21 compute-0 nova_compute[257802]: 2025-10-02 12:26:21.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:22 compute-0 nova_compute[257802]: 2025-10-02 12:26:22.000 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d6e67d8-8c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:26:22 compute-0 nova_compute[257802]: 2025-10-02 12:26:22.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:22 compute-0 nova_compute[257802]: 2025-10-02 12:26:22.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:22 compute-0 nova_compute[257802]: 2025-10-02 12:26:22.004 2 INFO os_vif [None req-cdb6965f-12db-4ecf-97b1-3a2b6ac11d2f 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:91:e2,bridge_name='br-int',has_traffic_filtering=True,id=9d6e67d8-8c6a-4b95-b332-80f8674a0ebb,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d6e67d8-8c')
Oct 02 12:26:22 compute-0 podman[322800]: 2025-10-02 12:26:22.052355501 +0000 UTC m=+0.044672815 container remove d659afc8949607b1588893d422d627fdeff00c651a3474982f4d3337b540c09f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct 02 12:26:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:22.058 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3e6a4fd0-42a8-43e4-a28a-2fd8406205d7]: (4, ('Thu Oct  2 12:26:21 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 (d659afc8949607b1588893d422d627fdeff00c651a3474982f4d3337b540c09f)\nd659afc8949607b1588893d422d627fdeff00c651a3474982f4d3337b540c09f\nThu Oct  2 12:26:21 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 (d659afc8949607b1588893d422d627fdeff00c651a3474982f4d3337b540c09f)\nd659afc8949607b1588893d422d627fdeff00c651a3474982f4d3337b540c09f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:22.059 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7d3a6b8e-165e-4003-827f-dde1c4e5e0ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:22.060 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4035a600-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:26:22 compute-0 nova_compute[257802]: 2025-10-02 12:26:22.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:22 compute-0 kernel: tap4035a600-40: left promiscuous mode
Oct 02 12:26:22 compute-0 nova_compute[257802]: 2025-10-02 12:26:22.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:22.079 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7cbf9d55-904a-4f0b-9f6a-c29d7febddd5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:22 compute-0 nova_compute[257802]: 2025-10-02 12:26:22.106 2 DEBUG nova.compute.manager [req-bf1e4605-228b-4511-bef7-3c167339d1fd req-ba08707f-0868-454e-870e-445f29a0fb4d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Received event network-vif-unplugged-9d6e67d8-8c6a-4b95-b332-80f8674a0ebb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:26:22 compute-0 nova_compute[257802]: 2025-10-02 12:26:22.106 2 DEBUG oslo_concurrency.lockutils [req-bf1e4605-228b-4511-bef7-3c167339d1fd req-ba08707f-0868-454e-870e-445f29a0fb4d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "7b4bdbc9-7451-4500-8794-c8edef50d6a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:22 compute-0 nova_compute[257802]: 2025-10-02 12:26:22.107 2 DEBUG oslo_concurrency.lockutils [req-bf1e4605-228b-4511-bef7-3c167339d1fd req-ba08707f-0868-454e-870e-445f29a0fb4d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b4bdbc9-7451-4500-8794-c8edef50d6a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:22 compute-0 nova_compute[257802]: 2025-10-02 12:26:22.107 2 DEBUG oslo_concurrency.lockutils [req-bf1e4605-228b-4511-bef7-3c167339d1fd req-ba08707f-0868-454e-870e-445f29a0fb4d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b4bdbc9-7451-4500-8794-c8edef50d6a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:22 compute-0 nova_compute[257802]: 2025-10-02 12:26:22.107 2 DEBUG nova.compute.manager [req-bf1e4605-228b-4511-bef7-3c167339d1fd req-ba08707f-0868-454e-870e-445f29a0fb4d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] No waiting events found dispatching network-vif-unplugged-9d6e67d8-8c6a-4b95-b332-80f8674a0ebb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:26:22 compute-0 nova_compute[257802]: 2025-10-02 12:26:22.107 2 DEBUG nova.compute.manager [req-bf1e4605-228b-4511-bef7-3c167339d1fd req-ba08707f-0868-454e-870e-445f29a0fb4d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Received event network-vif-unplugged-9d6e67d8-8c6a-4b95-b332-80f8674a0ebb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:26:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:22.108 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4c78a276-36cc-4d54-80f4-54ddf65734e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:22.109 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7be6b036-6320-4dc6-8ec5-72a9690e194e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:22.123 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5e9958f0-9e84-4a59-b6ab-854ba1a57660]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 602136, 'reachable_time': 29755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322834, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:22 compute-0 systemd[1]: run-netns-ovnmeta\x2d4035a600\x2d4a5e\x2d41ee\x2da619\x2dd81e2c993b79.mount: Deactivated successfully.
Oct 02 12:26:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:22.127 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:26:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:22.127 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[a41a4f39-66c8-4eaa-b17d-9d7494bad061]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Oct 02 12:26:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Oct 02 12:26:22 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Oct 02 12:26:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:26:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:22.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:26:22 compute-0 nova_compute[257802]: 2025-10-02 12:26:22.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:22 compute-0 ceph-mon[73607]: pgmap v1932: 305 pgs: 305 active+clean; 387 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 15 KiB/s wr, 138 op/s
Oct 02 12:26:22 compute-0 ceph-mon[73607]: osdmap e301: 3 total, 3 up, 3 in
Oct 02 12:26:22 compute-0 sudo[322837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:22 compute-0 sudo[322837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:22 compute-0 sudo[322837]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:22 compute-0 sudo[322862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:22 compute-0 sudo[322862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:22 compute-0 sudo[322862]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:23 compute-0 nova_compute[257802]: 2025-10-02 12:26:23.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:23 compute-0 nova_compute[257802]: 2025-10-02 12:26:23.152 2 INFO nova.virt.libvirt.driver [None req-cdb6965f-12db-4ecf-97b1-3a2b6ac11d2f 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Deleting instance files /var/lib/nova/instances/7b4bdbc9-7451-4500-8794-c8edef50d6a4_del
Oct 02 12:26:23 compute-0 nova_compute[257802]: 2025-10-02 12:26:23.153 2 INFO nova.virt.libvirt.driver [None req-cdb6965f-12db-4ecf-97b1-3a2b6ac11d2f 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Deletion of /var/lib/nova/instances/7b4bdbc9-7451-4500-8794-c8edef50d6a4_del complete
Oct 02 12:26:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:23.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:23 compute-0 nova_compute[257802]: 2025-10-02 12:26:23.210 2 INFO nova.compute.manager [None req-cdb6965f-12db-4ecf-97b1-3a2b6ac11d2f 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Took 1.50 seconds to destroy the instance on the hypervisor.
Oct 02 12:26:23 compute-0 nova_compute[257802]: 2025-10-02 12:26:23.210 2 DEBUG oslo.service.loopingcall [None req-cdb6965f-12db-4ecf-97b1-3a2b6ac11d2f 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:26:23 compute-0 nova_compute[257802]: 2025-10-02 12:26:23.210 2 DEBUG nova.compute.manager [-] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:26:23 compute-0 nova_compute[257802]: 2025-10-02 12:26:23.211 2 DEBUG nova.network.neutron [-] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:26:23 compute-0 nova_compute[257802]: 2025-10-02 12:26:23.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 337 MiB data, 967 MiB used, 20 GiB / 21 GiB avail; 110 KiB/s rd, 23 KiB/s wr, 96 op/s
Oct 02 12:26:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2806093705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:24 compute-0 nova_compute[257802]: 2025-10-02 12:26:24.237 2 DEBUG nova.compute.manager [req-38649fba-8588-4ee6-8715-ba520677062a req-7a904078-de65-4dc2-a897-ddb6be80f9b8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Received event network-vif-plugged-9d6e67d8-8c6a-4b95-b332-80f8674a0ebb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:26:24 compute-0 nova_compute[257802]: 2025-10-02 12:26:24.237 2 DEBUG oslo_concurrency.lockutils [req-38649fba-8588-4ee6-8715-ba520677062a req-7a904078-de65-4dc2-a897-ddb6be80f9b8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "7b4bdbc9-7451-4500-8794-c8edef50d6a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:24 compute-0 nova_compute[257802]: 2025-10-02 12:26:24.237 2 DEBUG oslo_concurrency.lockutils [req-38649fba-8588-4ee6-8715-ba520677062a req-7a904078-de65-4dc2-a897-ddb6be80f9b8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b4bdbc9-7451-4500-8794-c8edef50d6a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:24 compute-0 nova_compute[257802]: 2025-10-02 12:26:24.237 2 DEBUG oslo_concurrency.lockutils [req-38649fba-8588-4ee6-8715-ba520677062a req-7a904078-de65-4dc2-a897-ddb6be80f9b8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b4bdbc9-7451-4500-8794-c8edef50d6a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:24 compute-0 nova_compute[257802]: 2025-10-02 12:26:24.238 2 DEBUG nova.compute.manager [req-38649fba-8588-4ee6-8715-ba520677062a req-7a904078-de65-4dc2-a897-ddb6be80f9b8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] No waiting events found dispatching network-vif-plugged-9d6e67d8-8c6a-4b95-b332-80f8674a0ebb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:26:24 compute-0 nova_compute[257802]: 2025-10-02 12:26:24.238 2 WARNING nova.compute.manager [req-38649fba-8588-4ee6-8715-ba520677062a req-7a904078-de65-4dc2-a897-ddb6be80f9b8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Received unexpected event network-vif-plugged-9d6e67d8-8c6a-4b95-b332-80f8674a0ebb for instance with vm_state active and task_state deleting.
Oct 02 12:26:24 compute-0 nova_compute[257802]: 2025-10-02 12:26:24.441 2 DEBUG nova.network.neutron [-] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:26:24 compute-0 nova_compute[257802]: 2025-10-02 12:26:24.474 2 INFO nova.compute.manager [-] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Took 1.26 seconds to deallocate network for instance.
Oct 02 12:26:24 compute-0 nova_compute[257802]: 2025-10-02 12:26:24.602 2 DEBUG oslo_concurrency.lockutils [None req-cdb6965f-12db-4ecf-97b1-3a2b6ac11d2f 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:24 compute-0 nova_compute[257802]: 2025-10-02 12:26:24.604 2 DEBUG oslo_concurrency.lockutils [None req-cdb6965f-12db-4ecf-97b1-3a2b6ac11d2f 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:24 compute-0 nova_compute[257802]: 2025-10-02 12:26:24.605 2 DEBUG nova.compute.manager [req-b055d511-4288-49ec-930e-b405f5401593 req-733556a5-e046-42d7-b6e5-a2dbda2f811f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Received event network-vif-deleted-9d6e67d8-8c6a-4b95-b332-80f8674a0ebb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:26:24 compute-0 nova_compute[257802]: 2025-10-02 12:26:24.656 2 DEBUG oslo_concurrency.processutils [None req-cdb6965f-12db-4ecf-97b1-3a2b6ac11d2f 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:26:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:24.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:26:25 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/710644610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:25 compute-0 nova_compute[257802]: 2025-10-02 12:26:25.073 2 DEBUG oslo_concurrency.processutils [None req-cdb6965f-12db-4ecf-97b1-3a2b6ac11d2f 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:26:25 compute-0 nova_compute[257802]: 2025-10-02 12:26:25.079 2 DEBUG nova.compute.provider_tree [None req-cdb6965f-12db-4ecf-97b1-3a2b6ac11d2f 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:26:25 compute-0 nova_compute[257802]: 2025-10-02 12:26:25.095 2 DEBUG nova.scheduler.client.report [None req-cdb6965f-12db-4ecf-97b1-3a2b6ac11d2f 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:26:25 compute-0 nova_compute[257802]: 2025-10-02 12:26:25.120 2 DEBUG oslo_concurrency.lockutils [None req-cdb6965f-12db-4ecf-97b1-3a2b6ac11d2f 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.516s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:25 compute-0 nova_compute[257802]: 2025-10-02 12:26:25.143 2 INFO nova.scheduler.client.report [None req-cdb6965f-12db-4ecf-97b1-3a2b6ac11d2f 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Deleted allocations for instance 7b4bdbc9-7451-4500-8794-c8edef50d6a4
Oct 02 12:26:25 compute-0 ceph-mon[73607]: pgmap v1934: 305 pgs: 305 active+clean; 337 MiB data, 967 MiB used, 20 GiB / 21 GiB avail; 110 KiB/s rd, 23 KiB/s wr, 96 op/s
Oct 02 12:26:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/710644610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:25.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:25 compute-0 nova_compute[257802]: 2025-10-02 12:26:25.222 2 DEBUG oslo_concurrency.lockutils [None req-cdb6965f-12db-4ecf-97b1-3a2b6ac11d2f 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "7b4bdbc9-7451-4500-8794-c8edef50d6a4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.518s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 305 active+clean; 315 MiB data, 952 MiB used, 20 GiB / 21 GiB avail; 130 KiB/s rd, 269 KiB/s wr, 108 op/s
Oct 02 12:26:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:26.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:26.945 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:26.945 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:26.946 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:27 compute-0 nova_compute[257802]: 2025-10-02 12:26:27.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:27 compute-0 ceph-mon[73607]: pgmap v1935: 305 pgs: 305 active+clean; 315 MiB data, 952 MiB used, 20 GiB / 21 GiB avail; 130 KiB/s rd, 269 KiB/s wr, 108 op/s
Oct 02 12:26:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2256351142' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:26:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:27.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Oct 02 12:26:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Oct 02 12:26:27 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Oct 02 12:26:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:26:27 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3016839429' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:26:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:26:27 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3016839429' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:26:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1937: 305 pgs: 305 active+clean; 305 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 115 KiB/s rd, 982 KiB/s wr, 90 op/s
Oct 02 12:26:27 compute-0 nova_compute[257802]: 2025-10-02 12:26:27.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:28 compute-0 ceph-mon[73607]: osdmap e302: 3 total, 3 up, 3 in
Oct 02 12:26:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3016839429' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:26:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3016839429' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:26:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/49128928' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:26:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:26:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:28.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:26:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:26:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:29.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:26:29 compute-0 ceph-mon[73607]: pgmap v1937: 305 pgs: 305 active+clean; 305 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 115 KiB/s rd, 982 KiB/s wr, 90 op/s
Oct 02 12:26:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1938: 305 pgs: 305 active+clean; 327 MiB data, 955 MiB used, 20 GiB / 21 GiB avail; 144 KiB/s rd, 2.7 MiB/s wr, 129 op/s
Oct 02 12:26:30 compute-0 ceph-mon[73607]: pgmap v1938: 305 pgs: 305 active+clean; 327 MiB data, 955 MiB used, 20 GiB / 21 GiB avail; 144 KiB/s rd, 2.7 MiB/s wr, 129 op/s
Oct 02 12:26:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:26:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:30.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:26:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:31.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 305 active+clean; 327 MiB data, 955 MiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 2.4 MiB/s wr, 91 op/s
Oct 02 12:26:32 compute-0 nova_compute[257802]: 2025-10-02 12:26:32.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:32 compute-0 ceph-mon[73607]: pgmap v1939: 305 pgs: 305 active+clean; 327 MiB data, 955 MiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 2.4 MiB/s wr, 91 op/s
Oct 02 12:26:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:32.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:32 compute-0 nova_compute[257802]: 2025-10-02 12:26:32.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:26:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:33.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:26:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1940: 305 pgs: 305 active+clean; 267 MiB data, 918 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 161 op/s
Oct 02 12:26:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2143074098' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:34 compute-0 ceph-mon[73607]: pgmap v1940: 305 pgs: 305 active+clean; 267 MiB data, 918 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 161 op/s
Oct 02 12:26:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:34.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:35.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1941: 305 pgs: 305 active+clean; 248 MiB data, 913 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 162 op/s
Oct 02 12:26:35 compute-0 sudo[322916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:35 compute-0 sudo[322916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:35 compute-0 sudo[322916]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:35 compute-0 sudo[322941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:26:35 compute-0 sudo[322941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:35 compute-0 sudo[322941]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:35 compute-0 sudo[322966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:35 compute-0 sudo[322966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:35 compute-0 sudo[322966]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:35 compute-0 sudo[322991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:26:35 compute-0 sudo[322991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:36 compute-0 sudo[322991]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:26:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Oct 02 12:26:36 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 12:26:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 12:26:36 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 12:26:36 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:26:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:26:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:36.432 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:26:36 compute-0 nova_compute[257802]: 2025-10-02 12:26:36.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:36.434 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:26:36 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:26:36 compute-0 ceph-mon[73607]: pgmap v1941: 305 pgs: 305 active+clean; 248 MiB data, 913 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 162 op/s
Oct 02 12:26:36 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 12:26:36 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 12:26:36 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:26:36 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:26:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:36.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:36 compute-0 podman[323047]: 2025-10-02 12:26:36.920392042 +0000 UTC m=+0.052018235 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:26:36 compute-0 podman[323048]: 2025-10-02 12:26:36.927122647 +0000 UTC m=+0.057272854 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct 02 12:26:36 compute-0 nova_compute[257802]: 2025-10-02 12:26:36.944 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407981.9425793, 7b4bdbc9-7451-4500-8794-c8edef50d6a4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:26:36 compute-0 nova_compute[257802]: 2025-10-02 12:26:36.944 2 INFO nova.compute.manager [-] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] VM Stopped (Lifecycle Event)
Oct 02 12:26:36 compute-0 podman[323049]: 2025-10-02 12:26:36.956792904 +0000 UTC m=+0.081442766 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:26:36 compute-0 nova_compute[257802]: 2025-10-02 12:26:36.976 2 DEBUG nova.compute.manager [None req-c7bc4a82-5655-4855-9461-7ca24b84759a - - - - - -] [instance: 7b4bdbc9-7451-4500-8794-c8edef50d6a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:26:37 compute-0 nova_compute[257802]: 2025-10-02 12:26:37.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:37.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Oct 02 12:26:37 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 12:26:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:26:37 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:26:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:26:37 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:26:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:26:37 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:26:37 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 5f95168f-c01d-475d-9b7f-001c8e092a61 does not exist
Oct 02 12:26:37 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 0f0e238b-5a33-4ab8-879d-c4d4dea2d2f2 does not exist
Oct 02 12:26:37 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev dd2b126a-f33d-41a9-a2f5-dcfe6399f15b does not exist
Oct 02 12:26:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:26:37 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:26:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:26:37 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:26:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:26:37 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:26:37 compute-0 sudo[323106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:37 compute-0 sudo[323106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:37 compute-0 sudo[323106]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:37 compute-0 sudo[323131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:26:37 compute-0 sudo[323131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:37 compute-0 sudo[323131]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 305 active+clean; 248 MiB data, 913 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.4 MiB/s wr, 152 op/s
Oct 02 12:26:37 compute-0 sudo[323156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:37 compute-0 sudo[323156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:37 compute-0 sudo[323156]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:37 compute-0 sudo[323181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:26:37 compute-0 sudo[323181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:37 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 12:26:37 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:26:37 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:26:37 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:26:37 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:26:37 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:26:37 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:26:37 compute-0 nova_compute[257802]: 2025-10-02 12:26:37.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:37 compute-0 podman[323247]: 2025-10-02 12:26:37.970124905 +0000 UTC m=+0.042714887 container create a408e647f4071ade03ab008dfe61b1061d3bd3376369c35ef05051775b15b01e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhabha, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 12:26:38 compute-0 systemd[1]: Started libpod-conmon-a408e647f4071ade03ab008dfe61b1061d3bd3376369c35ef05051775b15b01e.scope.
Oct 02 12:26:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:26:38 compute-0 podman[323247]: 2025-10-02 12:26:37.949315806 +0000 UTC m=+0.021905828 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:26:38 compute-0 podman[323247]: 2025-10-02 12:26:38.058495631 +0000 UTC m=+0.131085643 container init a408e647f4071ade03ab008dfe61b1061d3bd3376369c35ef05051775b15b01e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhabha, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:26:38 compute-0 podman[323247]: 2025-10-02 12:26:38.067656485 +0000 UTC m=+0.140246467 container start a408e647f4071ade03ab008dfe61b1061d3bd3376369c35ef05051775b15b01e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhabha, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 12:26:38 compute-0 interesting_bhabha[323264]: 167 167
Oct 02 12:26:38 compute-0 systemd[1]: libpod-a408e647f4071ade03ab008dfe61b1061d3bd3376369c35ef05051775b15b01e.scope: Deactivated successfully.
Oct 02 12:26:38 compute-0 podman[323247]: 2025-10-02 12:26:38.075151198 +0000 UTC m=+0.147741190 container attach a408e647f4071ade03ab008dfe61b1061d3bd3376369c35ef05051775b15b01e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhabha, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 12:26:38 compute-0 podman[323247]: 2025-10-02 12:26:38.076361598 +0000 UTC m=+0.148951600 container died a408e647f4071ade03ab008dfe61b1061d3bd3376369c35ef05051775b15b01e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:26:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d0677ffa4ad5bda9d2d3e4cd7dcdb9ef71c52b8c6265993a4f093faddc2a074-merged.mount: Deactivated successfully.
Oct 02 12:26:38 compute-0 podman[323247]: 2025-10-02 12:26:38.120505949 +0000 UTC m=+0.193095931 container remove a408e647f4071ade03ab008dfe61b1061d3bd3376369c35ef05051775b15b01e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhabha, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:26:38 compute-0 systemd[1]: libpod-conmon-a408e647f4071ade03ab008dfe61b1061d3bd3376369c35ef05051775b15b01e.scope: Deactivated successfully.
Oct 02 12:26:38 compute-0 podman[323286]: 2025-10-02 12:26:38.305771977 +0000 UTC m=+0.046351496 container create 17548726b0fdeb13424f7aa626639a7a92c3dcdc33fdfd0e47fc8d86e762341e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_franklin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:26:38 compute-0 systemd[1]: Started libpod-conmon-17548726b0fdeb13424f7aa626639a7a92c3dcdc33fdfd0e47fc8d86e762341e.scope.
Oct 02 12:26:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:26:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3e26616866c5793f96235ea7685977a3e7370ef9c60a60cd1e001ab6f6ed06d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:26:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3e26616866c5793f96235ea7685977a3e7370ef9c60a60cd1e001ab6f6ed06d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:26:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3e26616866c5793f96235ea7685977a3e7370ef9c60a60cd1e001ab6f6ed06d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:26:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3e26616866c5793f96235ea7685977a3e7370ef9c60a60cd1e001ab6f6ed06d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:26:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3e26616866c5793f96235ea7685977a3e7370ef9c60a60cd1e001ab6f6ed06d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:26:38 compute-0 podman[323286]: 2025-10-02 12:26:38.287353916 +0000 UTC m=+0.027933455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:26:38 compute-0 podman[323286]: 2025-10-02 12:26:38.406932386 +0000 UTC m=+0.147511935 container init 17548726b0fdeb13424f7aa626639a7a92c3dcdc33fdfd0e47fc8d86e762341e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 12:26:38 compute-0 podman[323286]: 2025-10-02 12:26:38.414916473 +0000 UTC m=+0.155495992 container start 17548726b0fdeb13424f7aa626639a7a92c3dcdc33fdfd0e47fc8d86e762341e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:26:38 compute-0 podman[323286]: 2025-10-02 12:26:38.418259616 +0000 UTC m=+0.158839145 container attach 17548726b0fdeb13424f7aa626639a7a92c3dcdc33fdfd0e47fc8d86e762341e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:26:38 compute-0 ceph-mon[73607]: pgmap v1942: 305 pgs: 305 active+clean; 248 MiB data, 913 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.4 MiB/s wr, 152 op/s
Oct 02 12:26:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:38.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:39 compute-0 fervent_franklin[323302]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:26:39 compute-0 fervent_franklin[323302]: --> relative data size: 1.0
Oct 02 12:26:39 compute-0 fervent_franklin[323302]: --> All data devices are unavailable
Oct 02 12:26:39 compute-0 systemd[1]: libpod-17548726b0fdeb13424f7aa626639a7a92c3dcdc33fdfd0e47fc8d86e762341e.scope: Deactivated successfully.
Oct 02 12:26:39 compute-0 podman[323286]: 2025-10-02 12:26:39.175452791 +0000 UTC m=+0.916032320 container died 17548726b0fdeb13424f7aa626639a7a92c3dcdc33fdfd0e47fc8d86e762341e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_franklin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:26:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3e26616866c5793f96235ea7685977a3e7370ef9c60a60cd1e001ab6f6ed06d-merged.mount: Deactivated successfully.
Oct 02 12:26:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:39.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:39 compute-0 podman[323286]: 2025-10-02 12:26:39.238665625 +0000 UTC m=+0.979245144 container remove 17548726b0fdeb13424f7aa626639a7a92c3dcdc33fdfd0e47fc8d86e762341e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 12:26:39 compute-0 systemd[1]: libpod-conmon-17548726b0fdeb13424f7aa626639a7a92c3dcdc33fdfd0e47fc8d86e762341e.scope: Deactivated successfully.
Oct 02 12:26:39 compute-0 sudo[323181]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:39 compute-0 sudo[323332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:39 compute-0 sudo[323332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:39 compute-0 sudo[323332]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:39 compute-0 sudo[323357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:26:39 compute-0 sudo[323357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:39 compute-0 sudo[323357]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:39 compute-0 sudo[323382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:39 compute-0 sudo[323382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:39 compute-0 sudo[323382]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:39 compute-0 sudo[323407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:26:39 compute-0 sudo[323407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 305 active+clean; 248 MiB data, 913 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 129 op/s
Oct 02 12:26:39 compute-0 podman[323473]: 2025-10-02 12:26:39.814281978 +0000 UTC m=+0.035632878 container create f0b18ee3cc8beff37a55508293adf0862fd409dfd460fd11d8d1e6a1c6a33aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chaum, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:26:39 compute-0 systemd[1]: Started libpod-conmon-f0b18ee3cc8beff37a55508293adf0862fd409dfd460fd11d8d1e6a1c6a33aa4.scope.
Oct 02 12:26:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:26:39 compute-0 podman[323473]: 2025-10-02 12:26:39.798073959 +0000 UTC m=+0.019424879 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:26:39 compute-0 podman[323473]: 2025-10-02 12:26:39.899165694 +0000 UTC m=+0.120516614 container init f0b18ee3cc8beff37a55508293adf0862fd409dfd460fd11d8d1e6a1c6a33aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:26:39 compute-0 podman[323473]: 2025-10-02 12:26:39.906408132 +0000 UTC m=+0.127759022 container start f0b18ee3cc8beff37a55508293adf0862fd409dfd460fd11d8d1e6a1c6a33aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chaum, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:26:39 compute-0 podman[323473]: 2025-10-02 12:26:39.909264232 +0000 UTC m=+0.130615152 container attach f0b18ee3cc8beff37a55508293adf0862fd409dfd460fd11d8d1e6a1c6a33aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chaum, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:26:39 compute-0 objective_chaum[323489]: 167 167
Oct 02 12:26:39 compute-0 systemd[1]: libpod-f0b18ee3cc8beff37a55508293adf0862fd409dfd460fd11d8d1e6a1c6a33aa4.scope: Deactivated successfully.
Oct 02 12:26:39 compute-0 podman[323473]: 2025-10-02 12:26:39.911809715 +0000 UTC m=+0.133160615 container died f0b18ee3cc8beff37a55508293adf0862fd409dfd460fd11d8d1e6a1c6a33aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:26:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f48a1a1d0e193d4f39476aac82adff616bba03b891c9fa8f3d1c10bc20d1fa3-merged.mount: Deactivated successfully.
Oct 02 12:26:39 compute-0 podman[323473]: 2025-10-02 12:26:39.961491997 +0000 UTC m=+0.182842887 container remove f0b18ee3cc8beff37a55508293adf0862fd409dfd460fd11d8d1e6a1c6a33aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chaum, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:26:39 compute-0 systemd[1]: libpod-conmon-f0b18ee3cc8beff37a55508293adf0862fd409dfd460fd11d8d1e6a1c6a33aa4.scope: Deactivated successfully.
Oct 02 12:26:40 compute-0 podman[323514]: 2025-10-02 12:26:40.108311967 +0000 UTC m=+0.045024249 container create aaed308d2a35c63660084d9ed9df33cc7a4ff920673422e5f0c288a950d2733e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_johnson, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 12:26:40 compute-0 systemd[1]: Started libpod-conmon-aaed308d2a35c63660084d9ed9df33cc7a4ff920673422e5f0c288a950d2733e.scope.
Oct 02 12:26:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:26:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb6613ae1fb191523e1870965126b9551ae65ebd47fd7e7018cf87a4db8d347d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:26:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb6613ae1fb191523e1870965126b9551ae65ebd47fd7e7018cf87a4db8d347d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:26:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb6613ae1fb191523e1870965126b9551ae65ebd47fd7e7018cf87a4db8d347d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:26:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb6613ae1fb191523e1870965126b9551ae65ebd47fd7e7018cf87a4db8d347d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:26:40 compute-0 podman[323514]: 2025-10-02 12:26:40.175784205 +0000 UTC m=+0.112496537 container init aaed308d2a35c63660084d9ed9df33cc7a4ff920673422e5f0c288a950d2733e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_johnson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:26:40 compute-0 podman[323514]: 2025-10-02 12:26:40.086539751 +0000 UTC m=+0.023252083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:26:40 compute-0 podman[323514]: 2025-10-02 12:26:40.181530626 +0000 UTC m=+0.118242908 container start aaed308d2a35c63660084d9ed9df33cc7a4ff920673422e5f0c288a950d2733e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_johnson, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 12:26:40 compute-0 podman[323514]: 2025-10-02 12:26:40.185361171 +0000 UTC m=+0.122073503 container attach aaed308d2a35c63660084d9ed9df33cc7a4ff920673422e5f0c288a950d2733e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 12:26:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:40.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:40 compute-0 ceph-mon[73607]: pgmap v1943: 305 pgs: 305 active+clean; 248 MiB data, 913 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 129 op/s
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]: {
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]:     "1": [
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]:         {
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]:             "devices": [
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]:                 "/dev/loop3"
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]:             ],
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]:             "lv_name": "ceph_lv0",
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]:             "lv_size": "7511998464",
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]:             "name": "ceph_lv0",
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]:             "tags": {
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]:                 "ceph.cluster_name": "ceph",
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]:                 "ceph.crush_device_class": "",
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]:                 "ceph.encrypted": "0",
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]:                 "ceph.osd_id": "1",
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]:                 "ceph.type": "block",
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]:                 "ceph.vdo": "0"
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]:             },
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]:             "type": "block",
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]:             "vg_name": "ceph_vg0"
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]:         }
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]:     ]
Oct 02 12:26:40 compute-0 compassionate_johnson[323530]: }
Oct 02 12:26:40 compute-0 systemd[1]: libpod-aaed308d2a35c63660084d9ed9df33cc7a4ff920673422e5f0c288a950d2733e.scope: Deactivated successfully.
Oct 02 12:26:40 compute-0 podman[323514]: 2025-10-02 12:26:40.914388944 +0000 UTC m=+0.851101246 container died aaed308d2a35c63660084d9ed9df33cc7a4ff920673422e5f0c288a950d2733e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:26:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb6613ae1fb191523e1870965126b9551ae65ebd47fd7e7018cf87a4db8d347d-merged.mount: Deactivated successfully.
Oct 02 12:26:40 compute-0 podman[323514]: 2025-10-02 12:26:40.972403901 +0000 UTC m=+0.909116183 container remove aaed308d2a35c63660084d9ed9df33cc7a4ff920673422e5f0c288a950d2733e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_johnson, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:26:40 compute-0 systemd[1]: libpod-conmon-aaed308d2a35c63660084d9ed9df33cc7a4ff920673422e5f0c288a950d2733e.scope: Deactivated successfully.
Oct 02 12:26:41 compute-0 sudo[323407]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:41 compute-0 sudo[323552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:41 compute-0 sudo[323552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:41 compute-0 sudo[323552]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:41 compute-0 sudo[323577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:26:41 compute-0 sudo[323577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:41 compute-0 sudo[323577]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:41 compute-0 sudo[323602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:41 compute-0 sudo[323602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:41 compute-0 sudo[323602]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:41 compute-0 sudo[323627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:26:41 compute-0 sudo[323627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:41.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:41 compute-0 podman[323692]: 2025-10-02 12:26:41.489398911 +0000 UTC m=+0.033662748 container create 0800b9ff15570f92ecf91ad987d517278e9699a7da59905ded0072c8253287a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lamarr, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:26:41 compute-0 systemd[1]: Started libpod-conmon-0800b9ff15570f92ecf91ad987d517278e9699a7da59905ded0072c8253287a0.scope.
Oct 02 12:26:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1944: 305 pgs: 305 active+clean; 248 MiB data, 913 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 KiB/s wr, 104 op/s
Oct 02 12:26:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:26:41 compute-0 podman[323692]: 2025-10-02 12:26:41.554991974 +0000 UTC m=+0.099255831 container init 0800b9ff15570f92ecf91ad987d517278e9699a7da59905ded0072c8253287a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 12:26:41 compute-0 podman[323692]: 2025-10-02 12:26:41.561298868 +0000 UTC m=+0.105562705 container start 0800b9ff15570f92ecf91ad987d517278e9699a7da59905ded0072c8253287a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lamarr, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:26:41 compute-0 hardcore_lamarr[323708]: 167 167
Oct 02 12:26:41 compute-0 systemd[1]: libpod-0800b9ff15570f92ecf91ad987d517278e9699a7da59905ded0072c8253287a0.scope: Deactivated successfully.
Oct 02 12:26:41 compute-0 podman[323692]: 2025-10-02 12:26:41.566245431 +0000 UTC m=+0.110509288 container attach 0800b9ff15570f92ecf91ad987d517278e9699a7da59905ded0072c8253287a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lamarr, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 12:26:41 compute-0 conmon[323708]: conmon 0800b9ff15570f92ecf9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0800b9ff15570f92ecf91ad987d517278e9699a7da59905ded0072c8253287a0.scope/container/memory.events
Oct 02 12:26:41 compute-0 podman[323692]: 2025-10-02 12:26:41.567782848 +0000 UTC m=+0.112046685 container died 0800b9ff15570f92ecf91ad987d517278e9699a7da59905ded0072c8253287a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 12:26:41 compute-0 podman[323692]: 2025-10-02 12:26:41.475745615 +0000 UTC m=+0.020009462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:26:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-5846fa22049fc597c0329a7c25652b30bb8b0065384d305cfeb3b9160fd1b60f-merged.mount: Deactivated successfully.
Oct 02 12:26:41 compute-0 podman[323692]: 2025-10-02 12:26:41.618087565 +0000 UTC m=+0.162351402 container remove 0800b9ff15570f92ecf91ad987d517278e9699a7da59905ded0072c8253287a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lamarr, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 12:26:41 compute-0 systemd[1]: libpod-conmon-0800b9ff15570f92ecf91ad987d517278e9699a7da59905ded0072c8253287a0.scope: Deactivated successfully.
Oct 02 12:26:41 compute-0 podman[323731]: 2025-10-02 12:26:41.805052611 +0000 UTC m=+0.066414093 container create 12694f6138b82a41b65718901df7d83870733f4245d10ca47892b367f23c619e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 12:26:41 compute-0 podman[323731]: 2025-10-02 12:26:41.758955689 +0000 UTC m=+0.020317191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:26:41 compute-0 systemd[1]: Started libpod-conmon-12694f6138b82a41b65718901df7d83870733f4245d10ca47892b367f23c619e.scope.
Oct 02 12:26:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dffb14e76b8e706eaa192bc2e83e48097273a54e11a84b959fd7beae5c997ca3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dffb14e76b8e706eaa192bc2e83e48097273a54e11a84b959fd7beae5c997ca3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dffb14e76b8e706eaa192bc2e83e48097273a54e11a84b959fd7beae5c997ca3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dffb14e76b8e706eaa192bc2e83e48097273a54e11a84b959fd7beae5c997ca3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:26:41 compute-0 podman[323731]: 2025-10-02 12:26:41.974301443 +0000 UTC m=+0.235662965 container init 12694f6138b82a41b65718901df7d83870733f4245d10ca47892b367f23c619e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 12:26:41 compute-0 podman[323731]: 2025-10-02 12:26:41.986012371 +0000 UTC m=+0.247373853 container start 12694f6138b82a41b65718901df7d83870733f4245d10ca47892b367f23c619e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_visvesvaraya, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 12:26:42 compute-0 nova_compute[257802]: 2025-10-02 12:26:42.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:42 compute-0 podman[323731]: 2025-10-02 12:26:42.071032291 +0000 UTC m=+0.332393793 container attach 12694f6138b82a41b65718901df7d83870733f4245d10ca47892b367f23c619e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_visvesvaraya, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:26:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:26:42
Oct 02 12:26:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:26:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:26:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['.rgw.root', 'images', 'default.rgw.log', 'default.rgw.control', 'vms', 'volumes', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta']
Oct 02 12:26:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:26:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:26:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:26:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:26:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:26:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:26:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:26:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:42.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:42 compute-0 nova_compute[257802]: 2025-10-02 12:26:42.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:42 compute-0 heuristic_visvesvaraya[323747]: {
Oct 02 12:26:42 compute-0 heuristic_visvesvaraya[323747]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:26:42 compute-0 heuristic_visvesvaraya[323747]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:26:42 compute-0 heuristic_visvesvaraya[323747]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:26:42 compute-0 heuristic_visvesvaraya[323747]:         "osd_id": 1,
Oct 02 12:26:42 compute-0 heuristic_visvesvaraya[323747]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:26:42 compute-0 heuristic_visvesvaraya[323747]:         "type": "bluestore"
Oct 02 12:26:42 compute-0 heuristic_visvesvaraya[323747]:     }
Oct 02 12:26:42 compute-0 heuristic_visvesvaraya[323747]: }
Oct 02 12:26:42 compute-0 systemd[1]: libpod-12694f6138b82a41b65718901df7d83870733f4245d10ca47892b367f23c619e.scope: Deactivated successfully.
Oct 02 12:26:42 compute-0 podman[323770]: 2025-10-02 12:26:42.836713775 +0000 UTC m=+0.022367720 container died 12694f6138b82a41b65718901df7d83870733f4245d10ca47892b367f23c619e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_visvesvaraya, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:26:42 compute-0 ceph-mon[73607]: pgmap v1944: 305 pgs: 305 active+clean; 248 MiB data, 913 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 KiB/s wr, 104 op/s
Oct 02 12:26:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-dffb14e76b8e706eaa192bc2e83e48097273a54e11a84b959fd7beae5c997ca3-merged.mount: Deactivated successfully.
Oct 02 12:26:42 compute-0 sudo[323795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:42 compute-0 sudo[323795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:42 compute-0 sudo[323795]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:26:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:26:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:26:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:26:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:26:43 compute-0 sudo[323820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:43 compute-0 sudo[323820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:43 compute-0 sudo[323820]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:43 compute-0 podman[323770]: 2025-10-02 12:26:43.152751666 +0000 UTC m=+0.338405601 container remove 12694f6138b82a41b65718901df7d83870733f4245d10ca47892b367f23c619e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_visvesvaraya, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 12:26:43 compute-0 systemd[1]: libpod-conmon-12694f6138b82a41b65718901df7d83870733f4245d10ca47892b367f23c619e.scope: Deactivated successfully.
Oct 02 12:26:43 compute-0 podman[323769]: 2025-10-02 12:26:43.168014521 +0000 UTC m=+0.334964486 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller)
Oct 02 12:26:43 compute-0 sudo[323627]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:26:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:43.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:26:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:26:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:26:43 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 98d23054-3823-480c-9801-a3b3a3553c5e does not exist
Oct 02 12:26:43 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev aafa9765-c216-4c24-9574-75ffbbc6ee61 does not exist
Oct 02 12:26:43 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 0cdcf7e6-5fdb-4037-af35-cfbace9d11b0 does not exist
Oct 02 12:26:43 compute-0 sudo[323861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:43 compute-0 sudo[323861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:43 compute-0 sudo[323861]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:26:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:26:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:26:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:26:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:26:43 compute-0 sudo[323886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:26:43 compute-0 sudo[323886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:43 compute-0 sudo[323886]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1945: 305 pgs: 305 active+clean; 259 MiB data, 928 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 127 op/s
Oct 02 12:26:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:26:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:26:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:26:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:44.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:26:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:45.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:45 compute-0 ceph-mon[73607]: pgmap v1945: 305 pgs: 305 active+clean; 259 MiB data, 928 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 127 op/s
Oct 02 12:26:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:45.435 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:26:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 305 active+clean; 270 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 369 KiB/s rd, 2.0 MiB/s wr, 57 op/s
Oct 02 12:26:46 compute-0 ceph-mon[73607]: pgmap v1946: 305 pgs: 305 active+clean; 270 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 369 KiB/s rd, 2.0 MiB/s wr, 57 op/s
Oct 02 12:26:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:26:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:46.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:26:47 compute-0 nova_compute[257802]: 2025-10-02 12:26:47.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:26:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:47.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:26:47 compute-0 nova_compute[257802]: 2025-10-02 12:26:47.242 2 DEBUG oslo_concurrency.lockutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "e33492a6-9075-43ae-aa4d-5d5911d0f896" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:47 compute-0 nova_compute[257802]: 2025-10-02 12:26:47.242 2 DEBUG oslo_concurrency.lockutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "e33492a6-9075-43ae-aa4d-5d5911d0f896" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:47 compute-0 nova_compute[257802]: 2025-10-02 12:26:47.275 2 DEBUG nova.compute.manager [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:26:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:47 compute-0 nova_compute[257802]: 2025-10-02 12:26:47.414 2 DEBUG oslo_concurrency.lockutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:47 compute-0 nova_compute[257802]: 2025-10-02 12:26:47.414 2 DEBUG oslo_concurrency.lockutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:47 compute-0 nova_compute[257802]: 2025-10-02 12:26:47.428 2 DEBUG nova.virt.hardware [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:26:47 compute-0 nova_compute[257802]: 2025-10-02 12:26:47.428 2 INFO nova.compute.claims [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:26:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 305 active+clean; 276 MiB data, 938 MiB used, 20 GiB / 21 GiB avail; 300 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Oct 02 12:26:47 compute-0 nova_compute[257802]: 2025-10-02 12:26:47.530 2 DEBUG oslo_concurrency.processutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:26:47 compute-0 nova_compute[257802]: 2025-10-02 12:26:47.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:26:48 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/921931455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:48 compute-0 nova_compute[257802]: 2025-10-02 12:26:48.059 2 DEBUG oslo_concurrency.processutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:26:48 compute-0 nova_compute[257802]: 2025-10-02 12:26:48.065 2 DEBUG nova.compute.provider_tree [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:26:48 compute-0 nova_compute[257802]: 2025-10-02 12:26:48.083 2 DEBUG nova.scheduler.client.report [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:26:48 compute-0 nova_compute[257802]: 2025-10-02 12:26:48.108 2 DEBUG oslo_concurrency.lockutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:48 compute-0 nova_compute[257802]: 2025-10-02 12:26:48.109 2 DEBUG nova.compute.manager [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:26:48 compute-0 ceph-osd[83986]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Oct 02 12:26:48 compute-0 nova_compute[257802]: 2025-10-02 12:26:48.178 2 DEBUG nova.compute.manager [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:26:48 compute-0 nova_compute[257802]: 2025-10-02 12:26:48.179 2 DEBUG nova.network.neutron [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:26:48 compute-0 nova_compute[257802]: 2025-10-02 12:26:48.212 2 INFO nova.virt.libvirt.driver [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:26:48 compute-0 nova_compute[257802]: 2025-10-02 12:26:48.238 2 DEBUG nova.compute.manager [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:26:48 compute-0 nova_compute[257802]: 2025-10-02 12:26:48.388 2 DEBUG nova.compute.manager [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:26:48 compute-0 nova_compute[257802]: 2025-10-02 12:26:48.390 2 DEBUG nova.virt.libvirt.driver [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:26:48 compute-0 nova_compute[257802]: 2025-10-02 12:26:48.391 2 INFO nova.virt.libvirt.driver [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Creating image(s)
Oct 02 12:26:48 compute-0 nova_compute[257802]: 2025-10-02 12:26:48.426 2 DEBUG nova.storage.rbd_utils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image e33492a6-9075-43ae-aa4d-5d5911d0f896_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:26:48 compute-0 nova_compute[257802]: 2025-10-02 12:26:48.464 2 DEBUG nova.storage.rbd_utils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image e33492a6-9075-43ae-aa4d-5d5911d0f896_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:26:48 compute-0 nova_compute[257802]: 2025-10-02 12:26:48.490 2 DEBUG nova.storage.rbd_utils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image e33492a6-9075-43ae-aa4d-5d5911d0f896_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:26:48 compute-0 nova_compute[257802]: 2025-10-02 12:26:48.494 2 DEBUG oslo_concurrency.processutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:26:48 compute-0 nova_compute[257802]: 2025-10-02 12:26:48.522 2 DEBUG nova.policy [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4a89b71e2513413e922ee6d5d06362b1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '27a1729bf10548219b90df46839849f5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:26:48 compute-0 nova_compute[257802]: 2025-10-02 12:26:48.558 2 DEBUG oslo_concurrency.processutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:26:48 compute-0 nova_compute[257802]: 2025-10-02 12:26:48.559 2 DEBUG oslo_concurrency.lockutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:48 compute-0 nova_compute[257802]: 2025-10-02 12:26:48.560 2 DEBUG oslo_concurrency.lockutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:48 compute-0 nova_compute[257802]: 2025-10-02 12:26:48.560 2 DEBUG oslo_concurrency.lockutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:48 compute-0 nova_compute[257802]: 2025-10-02 12:26:48.587 2 DEBUG nova.storage.rbd_utils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image e33492a6-9075-43ae-aa4d-5d5911d0f896_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:26:48 compute-0 nova_compute[257802]: 2025-10-02 12:26:48.591 2 DEBUG oslo_concurrency.processutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 e33492a6-9075-43ae-aa4d-5d5911d0f896_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:26:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:48.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:48 compute-0 ceph-mon[73607]: pgmap v1947: 305 pgs: 305 active+clean; 276 MiB data, 938 MiB used, 20 GiB / 21 GiB avail; 300 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Oct 02 12:26:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/921931455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:48 compute-0 nova_compute[257802]: 2025-10-02 12:26:48.888 2 DEBUG oslo_concurrency.processutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 e33492a6-9075-43ae-aa4d-5d5911d0f896_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.297s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:26:48 compute-0 nova_compute[257802]: 2025-10-02 12:26:48.965 2 DEBUG nova.storage.rbd_utils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] resizing rbd image e33492a6-9075-43ae-aa4d-5d5911d0f896_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:26:49 compute-0 nova_compute[257802]: 2025-10-02 12:26:49.048 2 DEBUG nova.objects.instance [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'migration_context' on Instance uuid e33492a6-9075-43ae-aa4d-5d5911d0f896 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:26:49 compute-0 nova_compute[257802]: 2025-10-02 12:26:49.079 2 DEBUG nova.virt.libvirt.driver [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:26:49 compute-0 nova_compute[257802]: 2025-10-02 12:26:49.080 2 DEBUG nova.virt.libvirt.driver [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Ensure instance console log exists: /var/lib/nova/instances/e33492a6-9075-43ae-aa4d-5d5911d0f896/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:26:49 compute-0 nova_compute[257802]: 2025-10-02 12:26:49.081 2 DEBUG oslo_concurrency.lockutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:49 compute-0 nova_compute[257802]: 2025-10-02 12:26:49.081 2 DEBUG oslo_concurrency.lockutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:49 compute-0 nova_compute[257802]: 2025-10-02 12:26:49.081 2 DEBUG oslo_concurrency.lockutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:26:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:49.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:26:49 compute-0 nova_compute[257802]: 2025-10-02 12:26:49.281 2 DEBUG nova.network.neutron [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Successfully created port: 98f492d6-f27f-4d70-bf01-b54dd63403df _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:26:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 276 MiB data, 933 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 88 op/s
Oct 02 12:26:50 compute-0 nova_compute[257802]: 2025-10-02 12:26:50.237 2 DEBUG nova.network.neutron [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Successfully updated port: 98f492d6-f27f-4d70-bf01-b54dd63403df _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:26:50 compute-0 nova_compute[257802]: 2025-10-02 12:26:50.261 2 DEBUG oslo_concurrency.lockutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "refresh_cache-e33492a6-9075-43ae-aa4d-5d5911d0f896" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:26:50 compute-0 nova_compute[257802]: 2025-10-02 12:26:50.262 2 DEBUG oslo_concurrency.lockutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquired lock "refresh_cache-e33492a6-9075-43ae-aa4d-5d5911d0f896" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:26:50 compute-0 nova_compute[257802]: 2025-10-02 12:26:50.262 2 DEBUG nova.network.neutron [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:26:50 compute-0 nova_compute[257802]: 2025-10-02 12:26:50.358 2 DEBUG nova.compute.manager [req-ff6b58a1-461d-4a75-9446-5b556ea1ee19 req-4126776a-7e4a-47bc-aed7-3fb36685e267 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Received event network-changed-98f492d6-f27f-4d70-bf01-b54dd63403df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:26:50 compute-0 nova_compute[257802]: 2025-10-02 12:26:50.358 2 DEBUG nova.compute.manager [req-ff6b58a1-461d-4a75-9446-5b556ea1ee19 req-4126776a-7e4a-47bc-aed7-3fb36685e267 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Refreshing instance network info cache due to event network-changed-98f492d6-f27f-4d70-bf01-b54dd63403df. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:26:50 compute-0 nova_compute[257802]: 2025-10-02 12:26:50.359 2 DEBUG oslo_concurrency.lockutils [req-ff6b58a1-461d-4a75-9446-5b556ea1ee19 req-4126776a-7e4a-47bc-aed7-3fb36685e267 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-e33492a6-9075-43ae-aa4d-5d5911d0f896" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:26:50 compute-0 nova_compute[257802]: 2025-10-02 12:26:50.489 2 DEBUG nova.network.neutron [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:26:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:26:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:50.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:26:50 compute-0 ceph-mon[73607]: pgmap v1948: 305 pgs: 305 active+clean; 276 MiB data, 933 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 88 op/s
Oct 02 12:26:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/351298520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:51.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 305 active+clean; 276 MiB data, 933 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 88 op/s
Oct 02 12:26:51 compute-0 nova_compute[257802]: 2025-10-02 12:26:51.705 2 DEBUG nova.network.neutron [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Updating instance_info_cache with network_info: [{"id": "98f492d6-f27f-4d70-bf01-b54dd63403df", "address": "fa:16:3e:f5:69:69", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f492d6-f2", "ovs_interfaceid": "98f492d6-f27f-4d70-bf01-b54dd63403df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:26:51 compute-0 nova_compute[257802]: 2025-10-02 12:26:51.757 2 DEBUG oslo_concurrency.lockutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Releasing lock "refresh_cache-e33492a6-9075-43ae-aa4d-5d5911d0f896" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:26:51 compute-0 nova_compute[257802]: 2025-10-02 12:26:51.757 2 DEBUG nova.compute.manager [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Instance network_info: |[{"id": "98f492d6-f27f-4d70-bf01-b54dd63403df", "address": "fa:16:3e:f5:69:69", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f492d6-f2", "ovs_interfaceid": "98f492d6-f27f-4d70-bf01-b54dd63403df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:26:51 compute-0 nova_compute[257802]: 2025-10-02 12:26:51.757 2 DEBUG oslo_concurrency.lockutils [req-ff6b58a1-461d-4a75-9446-5b556ea1ee19 req-4126776a-7e4a-47bc-aed7-3fb36685e267 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-e33492a6-9075-43ae-aa4d-5d5911d0f896" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:26:51 compute-0 nova_compute[257802]: 2025-10-02 12:26:51.758 2 DEBUG nova.network.neutron [req-ff6b58a1-461d-4a75-9446-5b556ea1ee19 req-4126776a-7e4a-47bc-aed7-3fb36685e267 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Refreshing network info cache for port 98f492d6-f27f-4d70-bf01-b54dd63403df _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:26:51 compute-0 nova_compute[257802]: 2025-10-02 12:26:51.760 2 DEBUG nova.virt.libvirt.driver [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Start _get_guest_xml network_info=[{"id": "98f492d6-f27f-4d70-bf01-b54dd63403df", "address": "fa:16:3e:f5:69:69", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f492d6-f2", "ovs_interfaceid": "98f492d6-f27f-4d70-bf01-b54dd63403df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:26:51 compute-0 nova_compute[257802]: 2025-10-02 12:26:51.765 2 WARNING nova.virt.libvirt.driver [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:26:51 compute-0 nova_compute[257802]: 2025-10-02 12:26:51.768 2 DEBUG nova.virt.libvirt.host [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:26:51 compute-0 nova_compute[257802]: 2025-10-02 12:26:51.769 2 DEBUG nova.virt.libvirt.host [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:26:51 compute-0 nova_compute[257802]: 2025-10-02 12:26:51.771 2 DEBUG nova.virt.libvirt.host [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:26:51 compute-0 nova_compute[257802]: 2025-10-02 12:26:51.772 2 DEBUG nova.virt.libvirt.host [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:26:51 compute-0 nova_compute[257802]: 2025-10-02 12:26:51.773 2 DEBUG nova.virt.libvirt.driver [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:26:51 compute-0 nova_compute[257802]: 2025-10-02 12:26:51.773 2 DEBUG nova.virt.hardware [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:26:51 compute-0 nova_compute[257802]: 2025-10-02 12:26:51.773 2 DEBUG nova.virt.hardware [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:26:51 compute-0 nova_compute[257802]: 2025-10-02 12:26:51.774 2 DEBUG nova.virt.hardware [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:26:51 compute-0 nova_compute[257802]: 2025-10-02 12:26:51.774 2 DEBUG nova.virt.hardware [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:26:51 compute-0 nova_compute[257802]: 2025-10-02 12:26:51.774 2 DEBUG nova.virt.hardware [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:26:51 compute-0 nova_compute[257802]: 2025-10-02 12:26:51.774 2 DEBUG nova.virt.hardware [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:26:51 compute-0 nova_compute[257802]: 2025-10-02 12:26:51.774 2 DEBUG nova.virt.hardware [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:26:51 compute-0 nova_compute[257802]: 2025-10-02 12:26:51.775 2 DEBUG nova.virt.hardware [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:26:51 compute-0 nova_compute[257802]: 2025-10-02 12:26:51.775 2 DEBUG nova.virt.hardware [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:26:51 compute-0 nova_compute[257802]: 2025-10-02 12:26:51.775 2 DEBUG nova.virt.hardware [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:26:51 compute-0 nova_compute[257802]: 2025-10-02 12:26:51.775 2 DEBUG nova.virt.hardware [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:26:51 compute-0 nova_compute[257802]: 2025-10-02 12:26:51.778 2 DEBUG oslo_concurrency.processutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/277064801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:26:52 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2579845261' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.262 2 DEBUG oslo_concurrency.processutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:26:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.285 2 DEBUG nova.storage.rbd_utils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image e33492a6-9075-43ae-aa4d-5d5911d0f896_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.290 2 DEBUG oslo_concurrency.processutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:26:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:26:52 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3654786546' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:26:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:52.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.731 2 DEBUG oslo_concurrency.processutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.732 2 DEBUG nova.virt.libvirt.vif [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:26:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1278141758',display_name='tempest-ServerDiskConfigTestJSON-server-1278141758',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1278141758',id=106,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='27a1729bf10548219b90df46839849f5',ramdisk_id='',reservation_id='r-nqm0w6lf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1123059068',owner_user_name='tempest-ServerDiskConf
igTestJSON-1123059068-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:26:48Z,user_data=None,user_id='4a89b71e2513413e922ee6d5d06362b1',uuid=e33492a6-9075-43ae-aa4d-5d5911d0f896,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "98f492d6-f27f-4d70-bf01-b54dd63403df", "address": "fa:16:3e:f5:69:69", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f492d6-f2", "ovs_interfaceid": "98f492d6-f27f-4d70-bf01-b54dd63403df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.733 2 DEBUG nova.network.os_vif_util [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converting VIF {"id": "98f492d6-f27f-4d70-bf01-b54dd63403df", "address": "fa:16:3e:f5:69:69", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f492d6-f2", "ovs_interfaceid": "98f492d6-f27f-4d70-bf01-b54dd63403df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.733 2 DEBUG nova.network.os_vif_util [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:69:69,bridge_name='br-int',has_traffic_filtering=True,id=98f492d6-f27f-4d70-bf01-b54dd63403df,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98f492d6-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.734 2 DEBUG nova.objects.instance [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'pci_devices' on Instance uuid e33492a6-9075-43ae-aa4d-5d5911d0f896 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.766 2 DEBUG nova.virt.libvirt.driver [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:26:52 compute-0 nova_compute[257802]:   <uuid>e33492a6-9075-43ae-aa4d-5d5911d0f896</uuid>
Oct 02 12:26:52 compute-0 nova_compute[257802]:   <name>instance-0000006a</name>
Oct 02 12:26:52 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:26:52 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:26:52 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:26:52 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1278141758</nova:name>
Oct 02 12:26:52 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:26:51</nova:creationTime>
Oct 02 12:26:52 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:26:52 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:26:52 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:26:52 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:26:52 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:26:52 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:26:52 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:26:52 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:26:52 compute-0 nova_compute[257802]:         <nova:user uuid="4a89b71e2513413e922ee6d5d06362b1">tempest-ServerDiskConfigTestJSON-1123059068-project-member</nova:user>
Oct 02 12:26:52 compute-0 nova_compute[257802]:         <nova:project uuid="27a1729bf10548219b90df46839849f5">tempest-ServerDiskConfigTestJSON-1123059068</nova:project>
Oct 02 12:26:52 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:26:52 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:26:52 compute-0 nova_compute[257802]:         <nova:port uuid="98f492d6-f27f-4d70-bf01-b54dd63403df">
Oct 02 12:26:52 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:26:52 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:26:52 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:26:52 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <system>
Oct 02 12:26:52 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:26:52 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:26:52 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:26:52 compute-0 nova_compute[257802]:       <entry name="serial">e33492a6-9075-43ae-aa4d-5d5911d0f896</entry>
Oct 02 12:26:52 compute-0 nova_compute[257802]:       <entry name="uuid">e33492a6-9075-43ae-aa4d-5d5911d0f896</entry>
Oct 02 12:26:52 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     </system>
Oct 02 12:26:52 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:26:52 compute-0 nova_compute[257802]:   <os>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:   </os>
Oct 02 12:26:52 compute-0 nova_compute[257802]:   <features>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:   </features>
Oct 02 12:26:52 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:26:52 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:26:52 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:26:52 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/e33492a6-9075-43ae-aa4d-5d5911d0f896_disk">
Oct 02 12:26:52 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:       </source>
Oct 02 12:26:52 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:26:52 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:26:52 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:26:52 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/e33492a6-9075-43ae-aa4d-5d5911d0f896_disk.config">
Oct 02 12:26:52 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:       </source>
Oct 02 12:26:52 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:26:52 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:26:52 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:26:52 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:f5:69:69"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:       <target dev="tap98f492d6-f2"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:26:52 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/e33492a6-9075-43ae-aa4d-5d5911d0f896/console.log" append="off"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <video>
Oct 02 12:26:52 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     </video>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:26:52 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:26:52 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:26:52 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:26:52 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:26:52 compute-0 nova_compute[257802]: </domain>
Oct 02 12:26:52 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.767 2 DEBUG nova.compute.manager [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Preparing to wait for external event network-vif-plugged-98f492d6-f27f-4d70-bf01-b54dd63403df prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.767 2 DEBUG oslo_concurrency.lockutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.767 2 DEBUG oslo_concurrency.lockutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.768 2 DEBUG oslo_concurrency.lockutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.768 2 DEBUG nova.virt.libvirt.vif [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:26:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1278141758',display_name='tempest-ServerDiskConfigTestJSON-server-1278141758',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1278141758',id=106,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='27a1729bf10548219b90df46839849f5',ramdisk_id='',reservation_id='r-nqm0w6lf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1123059068',owner_user_name='tempest-ServerDiskConfigTestJSON-1123059068-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:26:48Z,user_data=None,user_id='4a89b71e2513413e922ee6d5d06362b1',uuid=e33492a6-9075-43ae-aa4d-5d5911d0f896,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "98f492d6-f27f-4d70-bf01-b54dd63403df", "address": "fa:16:3e:f5:69:69", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f492d6-f2", "ovs_interfaceid": "98f492d6-f27f-4d70-bf01-b54dd63403df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.768 2 DEBUG nova.network.os_vif_util [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converting VIF {"id": "98f492d6-f27f-4d70-bf01-b54dd63403df", "address": "fa:16:3e:f5:69:69", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f492d6-f2", "ovs_interfaceid": "98f492d6-f27f-4d70-bf01-b54dd63403df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.769 2 DEBUG nova.network.os_vif_util [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:69:69,bridge_name='br-int',has_traffic_filtering=True,id=98f492d6-f27f-4d70-bf01-b54dd63403df,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98f492d6-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.769 2 DEBUG os_vif [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:69:69,bridge_name='br-int',has_traffic_filtering=True,id=98f492d6-f27f-4d70-bf01-b54dd63403df,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98f492d6-f2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.770 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.771 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.774 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98f492d6-f2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.775 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap98f492d6-f2, col_values=(('external_ids', {'iface-id': '98f492d6-f27f-4d70-bf01-b54dd63403df', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f5:69:69', 'vm-uuid': 'e33492a6-9075-43ae-aa4d-5d5911d0f896'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:52 compute-0 NetworkManager[44987]: <info>  [1759408012.7781] manager: (tap98f492d6-f2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/219)
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.784 2 INFO os_vif [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:69:69,bridge_name='br-int',has_traffic_filtering=True,id=98f492d6-f27f-4d70-bf01-b54dd63403df,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98f492d6-f2')
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.951 2 DEBUG nova.virt.libvirt.driver [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.952 2 DEBUG nova.virt.libvirt.driver [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.952 2 DEBUG nova.virt.libvirt.driver [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] No VIF found with MAC fa:16:3e:f5:69:69, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.953 2 INFO nova.virt.libvirt.driver [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Using config drive
Oct 02 12:26:52 compute-0 nova_compute[257802]: 2025-10-02 12:26:52.974 2 DEBUG nova.storage.rbd_utils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image e33492a6-9075-43ae-aa4d-5d5911d0f896_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:26:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:53.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:53 compute-0 ceph-mon[73607]: pgmap v1949: 305 pgs: 305 active+clean; 276 MiB data, 933 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 88 op/s
Oct 02 12:26:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2579845261' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:26:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3654786546' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:26:53 compute-0 nova_compute[257802]: 2025-10-02 12:26:53.394 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:53 compute-0 nova_compute[257802]: 2025-10-02 12:26:53.395 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:53 compute-0 nova_compute[257802]: 2025-10-02 12:26:53.395 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:53 compute-0 nova_compute[257802]: 2025-10-02 12:26:53.395 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:26:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 305 active+clean; 269 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.5 MiB/s wr, 146 op/s
Oct 02 12:26:54 compute-0 nova_compute[257802]: 2025-10-02 12:26:54.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:54 compute-0 nova_compute[257802]: 2025-10-02 12:26:54.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:26:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:26:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:26:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:26:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0034798558766052485 of space, bias 1.0, pg target 1.0439567629815745 quantized to 32 (current 32)
Oct 02 12:26:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:26:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021617782198027173 of space, bias 1.0, pg target 0.6463716877210125 quantized to 32 (current 32)
Oct 02 12:26:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:26:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:26:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:26:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 12:26:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:26:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 12:26:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:26:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:26:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:26:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027172174530057695 quantized to 32 (current 32)
Oct 02 12:26:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:26:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 12:26:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:26:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:26:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:26:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 12:26:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:54.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:54 compute-0 nova_compute[257802]: 2025-10-02 12:26:54.984 2 INFO nova.virt.libvirt.driver [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Creating config drive at /var/lib/nova/instances/e33492a6-9075-43ae-aa4d-5d5911d0f896/disk.config
Oct 02 12:26:54 compute-0 nova_compute[257802]: 2025-10-02 12:26:54.994 2 DEBUG oslo_concurrency.processutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e33492a6-9075-43ae-aa4d-5d5911d0f896/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpusp1uvln execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:26:55 compute-0 nova_compute[257802]: 2025-10-02 12:26:55.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:55 compute-0 nova_compute[257802]: 2025-10-02 12:26:55.132 2 DEBUG oslo_concurrency.processutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e33492a6-9075-43ae-aa4d-5d5911d0f896/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpusp1uvln" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:26:55 compute-0 nova_compute[257802]: 2025-10-02 12:26:55.163 2 DEBUG nova.storage.rbd_utils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image e33492a6-9075-43ae-aa4d-5d5911d0f896_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:26:55 compute-0 nova_compute[257802]: 2025-10-02 12:26:55.167 2 DEBUG oslo_concurrency.processutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e33492a6-9075-43ae-aa4d-5d5911d0f896/disk.config e33492a6-9075-43ae-aa4d-5d5911d0f896_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:26:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:55.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:55 compute-0 ceph-mon[73607]: pgmap v1950: 305 pgs: 305 active+clean; 269 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.5 MiB/s wr, 146 op/s
Oct 02 12:26:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3392361648' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:26:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2361208000' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:26:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2132485405' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:26:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2132485405' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:26:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 305 active+clean; 280 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 125 op/s
Oct 02 12:26:55 compute-0 nova_compute[257802]: 2025-10-02 12:26:55.645 2 DEBUG oslo_concurrency.processutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e33492a6-9075-43ae-aa4d-5d5911d0f896/disk.config e33492a6-9075-43ae-aa4d-5d5911d0f896_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:26:55 compute-0 nova_compute[257802]: 2025-10-02 12:26:55.646 2 INFO nova.virt.libvirt.driver [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Deleting local config drive /var/lib/nova/instances/e33492a6-9075-43ae-aa4d-5d5911d0f896/disk.config because it was imported into RBD.
Oct 02 12:26:55 compute-0 NetworkManager[44987]: <info>  [1759408015.7207] manager: (tap98f492d6-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/220)
Oct 02 12:26:55 compute-0 kernel: tap98f492d6-f2: entered promiscuous mode
Oct 02 12:26:55 compute-0 nova_compute[257802]: 2025-10-02 12:26:55.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:55 compute-0 nova_compute[257802]: 2025-10-02 12:26:55.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:55 compute-0 ovn_controller[148183]: 2025-10-02T12:26:55Z|00457|binding|INFO|Claiming lport 98f492d6-f27f-4d70-bf01-b54dd63403df for this chassis.
Oct 02 12:26:55 compute-0 ovn_controller[148183]: 2025-10-02T12:26:55Z|00458|binding|INFO|98f492d6-f27f-4d70-bf01-b54dd63403df: Claiming fa:16:3e:f5:69:69 10.100.0.12
Oct 02 12:26:55 compute-0 nova_compute[257802]: 2025-10-02 12:26:55.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:55 compute-0 nova_compute[257802]: 2025-10-02 12:26:55.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:55 compute-0 systemd-machined[211836]: New machine qemu-53-instance-0000006a.
Oct 02 12:26:55 compute-0 systemd[1]: Started Virtual Machine qemu-53-instance-0000006a.
Oct 02 12:26:55 compute-0 systemd-udevd[324241]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:26:55 compute-0 NetworkManager[44987]: <info>  [1759408015.8444] device (tap98f492d6-f2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:26:55 compute-0 NetworkManager[44987]: <info>  [1759408015.8454] device (tap98f492d6-f2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:26:55 compute-0 ovn_controller[148183]: 2025-10-02T12:26:55Z|00459|binding|INFO|Setting lport 98f492d6-f27f-4d70-bf01-b54dd63403df ovn-installed in OVS
Oct 02 12:26:55 compute-0 nova_compute[257802]: 2025-10-02 12:26:55.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:56 compute-0 ovn_controller[148183]: 2025-10-02T12:26:56Z|00460|binding|INFO|Setting lport 98f492d6-f27f-4d70-bf01-b54dd63403df up in Southbound
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:56.080 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:69:69 10.100.0.12'], port_security=['fa:16:3e:f5:69:69 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e33492a6-9075-43ae-aa4d-5d5911d0f896', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27a1729bf10548219b90df46839849f5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '19f6d4f0-1655-4062-a124-10140844bfae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f7e0b23-d51b-4498-9dd8-e3096f69c99c, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=98f492d6-f27f-4d70-bf01-b54dd63403df) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:56.081 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 98f492d6-f27f-4d70-bf01-b54dd63403df in datapath 247d774d-0cc8-4ef2-a9b8-c756adae0874 bound to our chassis
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:56.082 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 247d774d-0cc8-4ef2-a9b8-c756adae0874
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:56.094 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b620ba5b-1930-4b86-8287-c3ae4d9742b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:56.094 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap247d774d-01 in ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:56.098 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap247d774d-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:56.098 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[972b714c-805c-4b77-943b-b059c52a9a7e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:56.099 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ff32a8ac-3eb9-4ae8-8536-2681a07fda0e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:56.113 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[2edda0af-23a5-49e4-9e5e-fcdff0234842]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:56.132 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e4005e55-df2d-474e-9e17-0f546aa9f901]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:56.171 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[690bc4b2-2a7a-4fd0-8db4-d2406b424694]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:56 compute-0 systemd-udevd[324248]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:56.175 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c37386eb-ec00-45ac-92eb-8b2904ae8fd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:56 compute-0 NetworkManager[44987]: <info>  [1759408016.1765] manager: (tap247d774d-00): new Veth device (/org/freedesktop/NetworkManager/Devices/221)
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:56.215 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[4e997d14-a4de-4e43-baa4-196d0003e939]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:56.218 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[21dd1d71-a3f7-4f93-9e74-e9f8809c741b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:56 compute-0 NetworkManager[44987]: <info>  [1759408016.2422] device (tap247d774d-00): carrier: link connected
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:56.248 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[33eebbae-1448-4dab-ab33-fc33ec20d256]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:56 compute-0 ceph-osd[83986]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:56.263 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[eacd02de-2cea-41c0-b4fc-250a30a7eee5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap247d774d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:ab:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 143], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 608386, 'reachable_time': 25733, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324316, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:56.278 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f7ab49be-6900-4d1f-ad29-dfe7059d82c3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1b:ab18'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 608386, 'tstamp': 608386}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324317, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:56.293 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e6bc6381-5799-4e52-904a-74939692cd5e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap247d774d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:ab:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 143], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 608386, 'reachable_time': 25733, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324318, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:56.327 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a56b60f1-4890-4ae3-9b13-1753a4b91abb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:56.396 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5a7609a5-4793-4737-86d8-812341e4d7cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:56.399 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap247d774d-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:56.399 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:56.400 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap247d774d-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:26:56 compute-0 kernel: tap247d774d-00: entered promiscuous mode
Oct 02 12:26:56 compute-0 nova_compute[257802]: 2025-10-02 12:26:56.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:56 compute-0 NetworkManager[44987]: <info>  [1759408016.4363] manager: (tap247d774d-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/222)
Oct 02 12:26:56 compute-0 nova_compute[257802]: 2025-10-02 12:26:56.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:56.437 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap247d774d-00, col_values=(('external_ids', {'iface-id': '04584168-a51c-41f9-9206-d39db8a81566'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:26:56 compute-0 nova_compute[257802]: 2025-10-02 12:26:56.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:56 compute-0 ovn_controller[148183]: 2025-10-02T12:26:56Z|00461|binding|INFO|Releasing lport 04584168-a51c-41f9-9206-d39db8a81566 from this chassis (sb_readonly=0)
Oct 02 12:26:56 compute-0 nova_compute[257802]: 2025-10-02 12:26:56.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:56.457 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/247d774d-0cc8-4ef2-a9b8-c756adae0874.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/247d774d-0cc8-4ef2-a9b8-c756adae0874.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:56.459 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4881ef68-c233-4b10-a39c-b7310802c085]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:56.460 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-247d774d-0cc8-4ef2-a9b8-c756adae0874
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/247d774d-0cc8-4ef2-a9b8-c756adae0874.pid.haproxy
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 247d774d-0cc8-4ef2-a9b8-c756adae0874
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:26:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:26:56.462 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'env', 'PROCESS_TAG=haproxy-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/247d774d-0cc8-4ef2-a9b8-c756adae0874.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:26:56 compute-0 ceph-mon[73607]: pgmap v1951: 305 pgs: 305 active+clean; 280 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 125 op/s
Oct 02 12:26:56 compute-0 nova_compute[257802]: 2025-10-02 12:26:56.670 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408016.6699927, e33492a6-9075-43ae-aa4d-5d5911d0f896 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:26:56 compute-0 nova_compute[257802]: 2025-10-02 12:26:56.671 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] VM Started (Lifecycle Event)
Oct 02 12:26:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:56.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:56 compute-0 podman[324351]: 2025-10-02 12:26:56.830355063 +0000 UTC m=+0.060325545 container create ad0a511659c6c5ee41a096dac92c5a351f110151cbd57d89b39bcd158f4cc64d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 12:26:56 compute-0 nova_compute[257802]: 2025-10-02 12:26:56.861 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:26:56 compute-0 nova_compute[257802]: 2025-10-02 12:26:56.867 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408016.6702206, e33492a6-9075-43ae-aa4d-5d5911d0f896 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:26:56 compute-0 nova_compute[257802]: 2025-10-02 12:26:56.867 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] VM Paused (Lifecycle Event)
Oct 02 12:26:56 compute-0 systemd[1]: Started libpod-conmon-ad0a511659c6c5ee41a096dac92c5a351f110151cbd57d89b39bcd158f4cc64d.scope.
Oct 02 12:26:56 compute-0 podman[324351]: 2025-10-02 12:26:56.791391244 +0000 UTC m=+0.021361756 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:26:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b6ea05f66e151b3f317e9a5f88b080db7e4c94fedb6a2ffb8750ebffc5c9b87/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:26:56 compute-0 podman[324351]: 2025-10-02 12:26:56.920107769 +0000 UTC m=+0.150078281 container init ad0a511659c6c5ee41a096dac92c5a351f110151cbd57d89b39bcd158f4cc64d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:26:56 compute-0 podman[324351]: 2025-10-02 12:26:56.925714807 +0000 UTC m=+0.155685309 container start ad0a511659c6c5ee41a096dac92c5a351f110151cbd57d89b39bcd158f4cc64d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:26:56 compute-0 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[324367]: [NOTICE]   (324371) : New worker (324373) forked
Oct 02 12:26:56 compute-0 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[324367]: [NOTICE]   (324371) : Loading success.
Oct 02 12:26:56 compute-0 nova_compute[257802]: 2025-10-02 12:26:56.983 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:26:56 compute-0 nova_compute[257802]: 2025-10-02 12:26:56.987 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:26:57 compute-0 nova_compute[257802]: 2025-10-02 12:26:57.060 2 DEBUG nova.network.neutron [req-ff6b58a1-461d-4a75-9446-5b556ea1ee19 req-4126776a-7e4a-47bc-aed7-3fb36685e267 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Updated VIF entry in instance network info cache for port 98f492d6-f27f-4d70-bf01-b54dd63403df. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:26:57 compute-0 nova_compute[257802]: 2025-10-02 12:26:57.061 2 DEBUG nova.network.neutron [req-ff6b58a1-461d-4a75-9446-5b556ea1ee19 req-4126776a-7e4a-47bc-aed7-3fb36685e267 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Updating instance_info_cache with network_info: [{"id": "98f492d6-f27f-4d70-bf01-b54dd63403df", "address": "fa:16:3e:f5:69:69", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f492d6-f2", "ovs_interfaceid": "98f492d6-f27f-4d70-bf01-b54dd63403df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:26:57 compute-0 nova_compute[257802]: 2025-10-02 12:26:57.121 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:26:57 compute-0 nova_compute[257802]: 2025-10-02 12:26:57.206 2 DEBUG oslo_concurrency.lockutils [req-ff6b58a1-461d-4a75-9446-5b556ea1ee19 req-4126776a-7e4a-47bc-aed7-3fb36685e267 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-e33492a6-9075-43ae-aa4d-5d5911d0f896" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:26:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:57.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:57 compute-0 nova_compute[257802]: 2025-10-02 12:26:57.425 2 DEBUG nova.compute.manager [req-38731119-5b4d-4252-b5b1-7754763618e1 req-4c3c79d4-a128-4743-83d2-2b5a297a9be2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Received event network-vif-plugged-98f492d6-f27f-4d70-bf01-b54dd63403df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:26:57 compute-0 nova_compute[257802]: 2025-10-02 12:26:57.426 2 DEBUG oslo_concurrency.lockutils [req-38731119-5b4d-4252-b5b1-7754763618e1 req-4c3c79d4-a128-4743-83d2-2b5a297a9be2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:57 compute-0 nova_compute[257802]: 2025-10-02 12:26:57.426 2 DEBUG oslo_concurrency.lockutils [req-38731119-5b4d-4252-b5b1-7754763618e1 req-4c3c79d4-a128-4743-83d2-2b5a297a9be2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:57 compute-0 nova_compute[257802]: 2025-10-02 12:26:57.427 2 DEBUG oslo_concurrency.lockutils [req-38731119-5b4d-4252-b5b1-7754763618e1 req-4c3c79d4-a128-4743-83d2-2b5a297a9be2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:57 compute-0 nova_compute[257802]: 2025-10-02 12:26:57.427 2 DEBUG nova.compute.manager [req-38731119-5b4d-4252-b5b1-7754763618e1 req-4c3c79d4-a128-4743-83d2-2b5a297a9be2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Processing event network-vif-plugged-98f492d6-f27f-4d70-bf01-b54dd63403df _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:26:57 compute-0 nova_compute[257802]: 2025-10-02 12:26:57.428 2 DEBUG nova.compute.manager [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:26:57 compute-0 nova_compute[257802]: 2025-10-02 12:26:57.431 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408017.4316888, e33492a6-9075-43ae-aa4d-5d5911d0f896 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:26:57 compute-0 nova_compute[257802]: 2025-10-02 12:26:57.432 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] VM Resumed (Lifecycle Event)
Oct 02 12:26:57 compute-0 nova_compute[257802]: 2025-10-02 12:26:57.433 2 DEBUG nova.virt.libvirt.driver [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:26:57 compute-0 nova_compute[257802]: 2025-10-02 12:26:57.436 2 INFO nova.virt.libvirt.driver [-] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Instance spawned successfully.
Oct 02 12:26:57 compute-0 nova_compute[257802]: 2025-10-02 12:26:57.436 2 DEBUG nova.virt.libvirt.driver [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:26:57 compute-0 nova_compute[257802]: 2025-10-02 12:26:57.480 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:26:57 compute-0 nova_compute[257802]: 2025-10-02 12:26:57.485 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:26:57 compute-0 nova_compute[257802]: 2025-10-02 12:26:57.489 2 DEBUG nova.virt.libvirt.driver [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:26:57 compute-0 nova_compute[257802]: 2025-10-02 12:26:57.489 2 DEBUG nova.virt.libvirt.driver [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:26:57 compute-0 nova_compute[257802]: 2025-10-02 12:26:57.489 2 DEBUG nova.virt.libvirt.driver [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:26:57 compute-0 nova_compute[257802]: 2025-10-02 12:26:57.490 2 DEBUG nova.virt.libvirt.driver [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:26:57 compute-0 nova_compute[257802]: 2025-10-02 12:26:57.490 2 DEBUG nova.virt.libvirt.driver [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:26:57 compute-0 nova_compute[257802]: 2025-10-02 12:26:57.491 2 DEBUG nova.virt.libvirt.driver [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:26:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 305 active+clean; 303 MiB data, 945 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.8 MiB/s wr, 157 op/s
Oct 02 12:26:57 compute-0 nova_compute[257802]: 2025-10-02 12:26:57.564 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:26:57 compute-0 nova_compute[257802]: 2025-10-02 12:26:57.646 2 INFO nova.compute.manager [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Took 9.26 seconds to spawn the instance on the hypervisor.
Oct 02 12:26:57 compute-0 nova_compute[257802]: 2025-10-02 12:26:57.646 2 DEBUG nova.compute.manager [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:26:57 compute-0 nova_compute[257802]: 2025-10-02 12:26:57.733 2 INFO nova.compute.manager [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Took 10.41 seconds to build instance.
Oct 02 12:26:57 compute-0 nova_compute[257802]: 2025-10-02 12:26:57.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:57 compute-0 nova_compute[257802]: 2025-10-02 12:26:57.824 2 DEBUG oslo_concurrency.lockutils [None req-ad5477dd-36da-47d4-98bf-bb781e914548 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "e33492a6-9075-43ae-aa4d-5d5911d0f896" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:58 compute-0 ceph-mon[73607]: pgmap v1952: 305 pgs: 305 active+clean; 303 MiB data, 945 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.8 MiB/s wr, 157 op/s
Oct 02 12:26:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:58.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:26:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:26:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:59.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:26:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 305 active+clean; 341 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 5.4 MiB/s wr, 179 op/s
Oct 02 12:26:59 compute-0 nova_compute[257802]: 2025-10-02 12:26:59.785 2 DEBUG nova.compute.manager [req-e5de1a65-24ab-455e-8288-6a9149b0ade9 req-364b212a-198b-424e-96ea-0b4cc43a3e19 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Received event network-vif-plugged-98f492d6-f27f-4d70-bf01-b54dd63403df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:26:59 compute-0 nova_compute[257802]: 2025-10-02 12:26:59.787 2 DEBUG oslo_concurrency.lockutils [req-e5de1a65-24ab-455e-8288-6a9149b0ade9 req-364b212a-198b-424e-96ea-0b4cc43a3e19 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:59 compute-0 nova_compute[257802]: 2025-10-02 12:26:59.788 2 DEBUG oslo_concurrency.lockutils [req-e5de1a65-24ab-455e-8288-6a9149b0ade9 req-364b212a-198b-424e-96ea-0b4cc43a3e19 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:59 compute-0 nova_compute[257802]: 2025-10-02 12:26:59.788 2 DEBUG oslo_concurrency.lockutils [req-e5de1a65-24ab-455e-8288-6a9149b0ade9 req-364b212a-198b-424e-96ea-0b4cc43a3e19 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:59 compute-0 nova_compute[257802]: 2025-10-02 12:26:59.789 2 DEBUG nova.compute.manager [req-e5de1a65-24ab-455e-8288-6a9149b0ade9 req-364b212a-198b-424e-96ea-0b4cc43a3e19 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] No waiting events found dispatching network-vif-plugged-98f492d6-f27f-4d70-bf01-b54dd63403df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:26:59 compute-0 nova_compute[257802]: 2025-10-02 12:26:59.790 2 WARNING nova.compute.manager [req-e5de1a65-24ab-455e-8288-6a9149b0ade9 req-364b212a-198b-424e-96ea-0b4cc43a3e19 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Received unexpected event network-vif-plugged-98f492d6-f27f-4d70-bf01-b54dd63403df for instance with vm_state active and task_state None.
Oct 02 12:27:00 compute-0 nova_compute[257802]: 2025-10-02 12:27:00.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:27:00 compute-0 nova_compute[257802]: 2025-10-02 12:27:00.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:27:00 compute-0 nova_compute[257802]: 2025-10-02 12:27:00.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:27:00 compute-0 nova_compute[257802]: 2025-10-02 12:27:00.587 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-e33492a6-9075-43ae-aa4d-5d5911d0f896" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:27:00 compute-0 nova_compute[257802]: 2025-10-02 12:27:00.588 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-e33492a6-9075-43ae-aa4d-5d5911d0f896" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:27:00 compute-0 nova_compute[257802]: 2025-10-02 12:27:00.588 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:27:00 compute-0 nova_compute[257802]: 2025-10-02 12:27:00.589 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid e33492a6-9075-43ae-aa4d-5d5911d0f896 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:27:00 compute-0 ceph-mon[73607]: pgmap v1953: 305 pgs: 305 active+clean; 341 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 5.4 MiB/s wr, 179 op/s
Oct 02 12:27:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:00.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:27:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:01.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:27:01 compute-0 nova_compute[257802]: 2025-10-02 12:27:01.448 2 INFO nova.compute.manager [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Rebuilding instance
Oct 02 12:27:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 341 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.8 MiB/s wr, 146 op/s
Oct 02 12:27:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3932410540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:01 compute-0 nova_compute[257802]: 2025-10-02 12:27:01.917 2 DEBUG nova.objects.instance [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'trusted_certs' on Instance uuid e33492a6-9075-43ae-aa4d-5d5911d0f896 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:27:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:02 compute-0 ceph-mon[73607]: pgmap v1954: 305 pgs: 305 active+clean; 341 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.8 MiB/s wr, 146 op/s
Oct 02 12:27:02 compute-0 nova_compute[257802]: 2025-10-02 12:27:02.718 2 DEBUG nova.compute.manager [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:27:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:02.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:02 compute-0 nova_compute[257802]: 2025-10-02 12:27:02.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:27:02 compute-0 nova_compute[257802]: 2025-10-02 12:27:02.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:02 compute-0 nova_compute[257802]: 2025-10-02 12:27:02.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 02 12:27:02 compute-0 nova_compute[257802]: 2025-10-02 12:27:02.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 12:27:02 compute-0 nova_compute[257802]: 2025-10-02 12:27:02.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 12:27:02 compute-0 nova_compute[257802]: 2025-10-02 12:27:02.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:03 compute-0 sudo[324385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:27:03 compute-0 sudo[324385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:03 compute-0 sudo[324385]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:03 compute-0 sudo[324410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:27:03 compute-0 sudo[324410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:03 compute-0 sudo[324410]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:03.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:03 compute-0 nova_compute[257802]: 2025-10-02 12:27:03.320 2 DEBUG nova.objects.instance [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'pci_requests' on Instance uuid e33492a6-9075-43ae-aa4d-5d5911d0f896 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:27:03 compute-0 nova_compute[257802]: 2025-10-02 12:27:03.371 2 DEBUG nova.objects.instance [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'pci_devices' on Instance uuid e33492a6-9075-43ae-aa4d-5d5911d0f896 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:27:03 compute-0 nova_compute[257802]: 2025-10-02 12:27:03.400 2 DEBUG nova.objects.instance [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'resources' on Instance uuid e33492a6-9075-43ae-aa4d-5d5911d0f896 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:27:03 compute-0 nova_compute[257802]: 2025-10-02 12:27:03.418 2 DEBUG nova.objects.instance [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'migration_context' on Instance uuid e33492a6-9075-43ae-aa4d-5d5911d0f896 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:27:03 compute-0 nova_compute[257802]: 2025-10-02 12:27:03.434 2 DEBUG nova.objects.instance [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:27:03 compute-0 nova_compute[257802]: 2025-10-02 12:27:03.438 2 DEBUG nova.virt.libvirt.driver [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:27:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 341 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.9 MiB/s wr, 234 op/s
Oct 02 12:27:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/414385312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:04 compute-0 nova_compute[257802]: 2025-10-02 12:27:04.725 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Updating instance_info_cache with network_info: [{"id": "98f492d6-f27f-4d70-bf01-b54dd63403df", "address": "fa:16:3e:f5:69:69", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f492d6-f2", "ovs_interfaceid": "98f492d6-f27f-4d70-bf01-b54dd63403df", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:27:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:04.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:04 compute-0 ceph-mon[73607]: pgmap v1955: 305 pgs: 305 active+clean; 341 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.9 MiB/s wr, 234 op/s
Oct 02 12:27:05 compute-0 nova_compute[257802]: 2025-10-02 12:27:05.019 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-e33492a6-9075-43ae-aa4d-5d5911d0f896" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:27:05 compute-0 nova_compute[257802]: 2025-10-02 12:27:05.020 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:27:05 compute-0 nova_compute[257802]: 2025-10-02 12:27:05.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:27:05 compute-0 nova_compute[257802]: 2025-10-02 12:27:05.184 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:05 compute-0 nova_compute[257802]: 2025-10-02 12:27:05.185 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:05 compute-0 nova_compute[257802]: 2025-10-02 12:27:05.185 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:05 compute-0 nova_compute[257802]: 2025-10-02 12:27:05.185 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:27:05 compute-0 nova_compute[257802]: 2025-10-02 12:27:05.186 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:05.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1956: 305 pgs: 305 active+clean; 341 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 3.0 MiB/s wr, 200 op/s
Oct 02 12:27:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:27:05 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3587212929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:05 compute-0 nova_compute[257802]: 2025-10-02 12:27:05.594 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:05 compute-0 nova_compute[257802]: 2025-10-02 12:27:05.701 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:27:05 compute-0 nova_compute[257802]: 2025-10-02 12:27:05.702 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:27:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3587212929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:05 compute-0 nova_compute[257802]: 2025-10-02 12:27:05.845 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:27:05 compute-0 nova_compute[257802]: 2025-10-02 12:27:05.847 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4241MB free_disk=20.900768280029297GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:27:05 compute-0 nova_compute[257802]: 2025-10-02 12:27:05.847 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:05 compute-0 nova_compute[257802]: 2025-10-02 12:27:05.847 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:06.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:06 compute-0 ceph-mon[73607]: pgmap v1956: 305 pgs: 305 active+clean; 341 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 3.0 MiB/s wr, 200 op/s
Oct 02 12:27:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:07.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 305 active+clean; 341 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.3 MiB/s wr, 198 op/s
Oct 02 12:27:07 compute-0 nova_compute[257802]: 2025-10-02 12:27:07.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:07 compute-0 nova_compute[257802]: 2025-10-02 12:27:07.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:07 compute-0 podman[324462]: 2025-10-02 12:27:07.92533574 +0000 UTC m=+0.056345656 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:27:07 compute-0 podman[324461]: 2025-10-02 12:27:07.926682773 +0000 UTC m=+0.058380787 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 12:27:07 compute-0 podman[324460]: 2025-10-02 12:27:07.959696484 +0000 UTC m=+0.082407176 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 12:27:08 compute-0 nova_compute[257802]: 2025-10-02 12:27:08.605 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance e33492a6-9075-43ae-aa4d-5d5911d0f896 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:27:08 compute-0 nova_compute[257802]: 2025-10-02 12:27:08.605 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:27:08 compute-0 nova_compute[257802]: 2025-10-02 12:27:08.606 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:27:08 compute-0 nova_compute[257802]: 2025-10-02 12:27:08.640 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:08.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:27:09 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1136321941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:09 compute-0 nova_compute[257802]: 2025-10-02 12:27:09.115 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:09 compute-0 nova_compute[257802]: 2025-10-02 12:27:09.121 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:27:09 compute-0 ceph-mon[73607]: pgmap v1957: 305 pgs: 305 active+clean; 341 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.3 MiB/s wr, 198 op/s
Oct 02 12:27:09 compute-0 nova_compute[257802]: 2025-10-02 12:27:09.144 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:27:09 compute-0 nova_compute[257802]: 2025-10-02 12:27:09.194 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:27:09 compute-0 nova_compute[257802]: 2025-10-02 12:27:09.195 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.347s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:09.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 341 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.7 MiB/s wr, 151 op/s
Oct 02 12:27:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1136321941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/742325516' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:10.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:11 compute-0 ovn_controller[148183]: 2025-10-02T12:27:11Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f5:69:69 10.100.0.12
Oct 02 12:27:11 compute-0 ovn_controller[148183]: 2025-10-02T12:27:11Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f5:69:69 10.100.0.12
Oct 02 12:27:11 compute-0 ceph-mon[73607]: pgmap v1958: 305 pgs: 305 active+clean; 341 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.7 MiB/s wr, 151 op/s
Oct 02 12:27:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:11.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 305 active+clean; 341 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 16 KiB/s wr, 116 op/s
Oct 02 12:27:12 compute-0 nova_compute[257802]: 2025-10-02 12:27:12.196 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:27:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1725604606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:27:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:27:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:27:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:27:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:27:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:27:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:27:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:12.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:12 compute-0 nova_compute[257802]: 2025-10-02 12:27:12.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:13.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:13 compute-0 nova_compute[257802]: 2025-10-02 12:27:13.532 2 DEBUG nova.virt.libvirt.driver [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:27:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 305 active+clean; 331 MiB data, 986 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 1.5 MiB/s wr, 179 op/s
Oct 02 12:27:13 compute-0 ceph-mon[73607]: pgmap v1959: 305 pgs: 305 active+clean; 341 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 16 KiB/s wr, 116 op/s
Oct 02 12:27:13 compute-0 podman[324538]: 2025-10-02 12:27:13.93575587 +0000 UTC m=+0.080791697 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 12:27:14 compute-0 ceph-mon[73607]: pgmap v1960: 305 pgs: 305 active+clean; 331 MiB data, 986 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 1.5 MiB/s wr, 179 op/s
Oct 02 12:27:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:14.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:27:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:15.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:27:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 305 active+clean; 326 MiB data, 985 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Oct 02 12:27:15 compute-0 kernel: tap98f492d6-f2 (unregistering): left promiscuous mode
Oct 02 12:27:15 compute-0 NetworkManager[44987]: <info>  [1759408035.8345] device (tap98f492d6-f2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:27:15 compute-0 ovn_controller[148183]: 2025-10-02T12:27:15Z|00462|binding|INFO|Releasing lport 98f492d6-f27f-4d70-bf01-b54dd63403df from this chassis (sb_readonly=0)
Oct 02 12:27:15 compute-0 ovn_controller[148183]: 2025-10-02T12:27:15Z|00463|binding|INFO|Setting lport 98f492d6-f27f-4d70-bf01-b54dd63403df down in Southbound
Oct 02 12:27:15 compute-0 ovn_controller[148183]: 2025-10-02T12:27:15Z|00464|binding|INFO|Removing iface tap98f492d6-f2 ovn-installed in OVS
Oct 02 12:27:15 compute-0 nova_compute[257802]: 2025-10-02 12:27:15.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:15 compute-0 nova_compute[257802]: 2025-10-02 12:27:15.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:15 compute-0 nova_compute[257802]: 2025-10-02 12:27:15.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:15 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Oct 02 12:27:15 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000006a.scope: Consumed 13.712s CPU time.
Oct 02 12:27:15 compute-0 systemd-machined[211836]: Machine qemu-53-instance-0000006a terminated.
Oct 02 12:27:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:16.031 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:69:69 10.100.0.12'], port_security=['fa:16:3e:f5:69:69 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e33492a6-9075-43ae-aa4d-5d5911d0f896', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27a1729bf10548219b90df46839849f5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '19f6d4f0-1655-4062-a124-10140844bfae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f7e0b23-d51b-4498-9dd8-e3096f69c99c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=98f492d6-f27f-4d70-bf01-b54dd63403df) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:27:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:16.032 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 98f492d6-f27f-4d70-bf01-b54dd63403df in datapath 247d774d-0cc8-4ef2-a9b8-c756adae0874 unbound from our chassis
Oct 02 12:27:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:16.034 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 247d774d-0cc8-4ef2-a9b8-c756adae0874, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:27:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:16.035 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[dbc93944-0444-4e47-b39c-cb227c4be88b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:16.035 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 namespace which is not needed anymore
Oct 02 12:27:16 compute-0 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[324367]: [NOTICE]   (324371) : haproxy version is 2.8.14-c23fe91
Oct 02 12:27:16 compute-0 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[324367]: [NOTICE]   (324371) : path to executable is /usr/sbin/haproxy
Oct 02 12:27:16 compute-0 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[324367]: [WARNING]  (324371) : Exiting Master process...
Oct 02 12:27:16 compute-0 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[324367]: [ALERT]    (324371) : Current worker (324373) exited with code 143 (Terminated)
Oct 02 12:27:16 compute-0 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[324367]: [WARNING]  (324371) : All workers exited. Exiting... (0)
Oct 02 12:27:16 compute-0 systemd[1]: libpod-ad0a511659c6c5ee41a096dac92c5a351f110151cbd57d89b39bcd158f4cc64d.scope: Deactivated successfully.
Oct 02 12:27:16 compute-0 podman[324597]: 2025-10-02 12:27:16.184256591 +0000 UTC m=+0.057701320 container died ad0a511659c6c5ee41a096dac92c5a351f110151cbd57d89b39bcd158f4cc64d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:27:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ad0a511659c6c5ee41a096dac92c5a351f110151cbd57d89b39bcd158f4cc64d-userdata-shm.mount: Deactivated successfully.
Oct 02 12:27:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b6ea05f66e151b3f317e9a5f88b080db7e4c94fedb6a2ffb8750ebffc5c9b87-merged.mount: Deactivated successfully.
Oct 02 12:27:16 compute-0 podman[324597]: 2025-10-02 12:27:16.372329075 +0000 UTC m=+0.245773804 container cleanup ad0a511659c6c5ee41a096dac92c5a351f110151cbd57d89b39bcd158f4cc64d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 02 12:27:16 compute-0 systemd[1]: libpod-conmon-ad0a511659c6c5ee41a096dac92c5a351f110151cbd57d89b39bcd158f4cc64d.scope: Deactivated successfully.
Oct 02 12:27:16 compute-0 podman[324627]: 2025-10-02 12:27:16.4945958 +0000 UTC m=+0.102834608 container remove ad0a511659c6c5ee41a096dac92c5a351f110151cbd57d89b39bcd158f4cc64d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 02 12:27:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:16.500 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1eb57041-be78-46c2-b203-e9c0ec566732]: (4, ('Thu Oct  2 12:27:16 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 (ad0a511659c6c5ee41a096dac92c5a351f110151cbd57d89b39bcd158f4cc64d)\nad0a511659c6c5ee41a096dac92c5a351f110151cbd57d89b39bcd158f4cc64d\nThu Oct  2 12:27:16 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 (ad0a511659c6c5ee41a096dac92c5a351f110151cbd57d89b39bcd158f4cc64d)\nad0a511659c6c5ee41a096dac92c5a351f110151cbd57d89b39bcd158f4cc64d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:16.502 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4428d62e-ab33-4989-b4f2-f9a0ba548e45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:16.502 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap247d774d-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:16 compute-0 nova_compute[257802]: 2025-10-02 12:27:16.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:16 compute-0 kernel: tap247d774d-00: left promiscuous mode
Oct 02 12:27:16 compute-0 nova_compute[257802]: 2025-10-02 12:27:16.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:16.526 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[57f8c116-cc88-4167-b6bf-1dd06c79e752]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:16 compute-0 nova_compute[257802]: 2025-10-02 12:27:16.544 2 INFO nova.virt.libvirt.driver [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Instance shutdown successfully after 13 seconds.
Oct 02 12:27:16 compute-0 nova_compute[257802]: 2025-10-02 12:27:16.549 2 INFO nova.virt.libvirt.driver [-] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Instance destroyed successfully.
Oct 02 12:27:16 compute-0 nova_compute[257802]: 2025-10-02 12:27:16.555 2 INFO nova.virt.libvirt.driver [-] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Instance destroyed successfully.
Oct 02 12:27:16 compute-0 nova_compute[257802]: 2025-10-02 12:27:16.555 2 DEBUG nova.virt.libvirt.vif [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:26:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1278141758',display_name='tempest-ServerDiskConfigTestJSON-server-1278141758',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1278141758',id=106,image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:26:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='27a1729bf10548219b90df46839849f5',ramdisk_id='',reservation_id='r-nqm0w6lf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1123059068',owner_user_name='tempest-ServerDiskConfigTestJSON-1123059068-project-member
'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:27:00Z,user_data=None,user_id='4a89b71e2513413e922ee6d5d06362b1',uuid=e33492a6-9075-43ae-aa4d-5d5911d0f896,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "98f492d6-f27f-4d70-bf01-b54dd63403df", "address": "fa:16:3e:f5:69:69", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f492d6-f2", "ovs_interfaceid": "98f492d6-f27f-4d70-bf01-b54dd63403df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:27:16 compute-0 nova_compute[257802]: 2025-10-02 12:27:16.556 2 DEBUG nova.network.os_vif_util [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converting VIF {"id": "98f492d6-f27f-4d70-bf01-b54dd63403df", "address": "fa:16:3e:f5:69:69", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f492d6-f2", "ovs_interfaceid": "98f492d6-f27f-4d70-bf01-b54dd63403df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:27:16 compute-0 nova_compute[257802]: 2025-10-02 12:27:16.556 2 DEBUG nova.network.os_vif_util [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:69:69,bridge_name='br-int',has_traffic_filtering=True,id=98f492d6-f27f-4d70-bf01-b54dd63403df,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98f492d6-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:27:16 compute-0 nova_compute[257802]: 2025-10-02 12:27:16.557 2 DEBUG os_vif [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:69:69,bridge_name='br-int',has_traffic_filtering=True,id=98f492d6-f27f-4d70-bf01-b54dd63403df,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98f492d6-f2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:27:16 compute-0 nova_compute[257802]: 2025-10-02 12:27:16.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:16 compute-0 nova_compute[257802]: 2025-10-02 12:27:16.558 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98f492d6-f2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:16 compute-0 nova_compute[257802]: 2025-10-02 12:27:16.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:16 compute-0 nova_compute[257802]: 2025-10-02 12:27:16.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:16 compute-0 nova_compute[257802]: 2025-10-02 12:27:16.563 2 INFO os_vif [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:69:69,bridge_name='br-int',has_traffic_filtering=True,id=98f492d6-f27f-4d70-bf01-b54dd63403df,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98f492d6-f2')
Oct 02 12:27:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:16.563 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4684838e-feb1-4ce1-be66-5fe0e9eacac6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:16.564 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d30ebcf1-e88f-410d-935f-a976512b817f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:16.580 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ae52bcb4-46cc-4215-aa7b-d5f85ce0af02]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 608378, 'reachable_time': 19222, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324647, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:16.583 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:27:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:16.583 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[dd24d73f-7ade-4b42-b027-03f655adf2ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:16 compute-0 systemd[1]: run-netns-ovnmeta\x2d247d774d\x2d0cc8\x2d4ef2\x2da9b8\x2dc756adae0874.mount: Deactivated successfully.
Oct 02 12:27:16 compute-0 ceph-mon[73607]: pgmap v1961: 305 pgs: 305 active+clean; 326 MiB data, 985 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Oct 02 12:27:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2976274772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:16.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:16 compute-0 nova_compute[257802]: 2025-10-02 12:27:16.808 2 DEBUG nova.compute.manager [req-27ec32c3-e696-4bd5-b868-92b89cd0b183 req-7d9368d6-cd6f-4f44-be30-6aa84ec3b651 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Received event network-vif-unplugged-98f492d6-f27f-4d70-bf01-b54dd63403df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:27:16 compute-0 nova_compute[257802]: 2025-10-02 12:27:16.810 2 DEBUG oslo_concurrency.lockutils [req-27ec32c3-e696-4bd5-b868-92b89cd0b183 req-7d9368d6-cd6f-4f44-be30-6aa84ec3b651 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:16 compute-0 nova_compute[257802]: 2025-10-02 12:27:16.811 2 DEBUG oslo_concurrency.lockutils [req-27ec32c3-e696-4bd5-b868-92b89cd0b183 req-7d9368d6-cd6f-4f44-be30-6aa84ec3b651 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:16 compute-0 nova_compute[257802]: 2025-10-02 12:27:16.811 2 DEBUG oslo_concurrency.lockutils [req-27ec32c3-e696-4bd5-b868-92b89cd0b183 req-7d9368d6-cd6f-4f44-be30-6aa84ec3b651 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:16 compute-0 nova_compute[257802]: 2025-10-02 12:27:16.812 2 DEBUG nova.compute.manager [req-27ec32c3-e696-4bd5-b868-92b89cd0b183 req-7d9368d6-cd6f-4f44-be30-6aa84ec3b651 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] No waiting events found dispatching network-vif-unplugged-98f492d6-f27f-4d70-bf01-b54dd63403df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:27:16 compute-0 nova_compute[257802]: 2025-10-02 12:27:16.813 2 WARNING nova.compute.manager [req-27ec32c3-e696-4bd5-b868-92b89cd0b183 req-7d9368d6-cd6f-4f44-be30-6aa84ec3b651 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Received unexpected event network-vif-unplugged-98f492d6-f27f-4d70-bf01-b54dd63403df for instance with vm_state active and task_state rebuilding.
Oct 02 12:27:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:17.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1962: 305 pgs: 305 active+clean; 327 MiB data, 985 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 106 op/s
Oct 02 12:27:17 compute-0 nova_compute[257802]: 2025-10-02 12:27:17.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:18 compute-0 nova_compute[257802]: 2025-10-02 12:27:18.724 2 INFO nova.virt.libvirt.driver [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Deleting instance files /var/lib/nova/instances/e33492a6-9075-43ae-aa4d-5d5911d0f896_del
Oct 02 12:27:18 compute-0 nova_compute[257802]: 2025-10-02 12:27:18.725 2 INFO nova.virt.libvirt.driver [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Deletion of /var/lib/nova/instances/e33492a6-9075-43ae-aa4d-5d5911d0f896_del complete
Oct 02 12:27:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:18.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:18 compute-0 ceph-mon[73607]: pgmap v1962: 305 pgs: 305 active+clean; 327 MiB data, 985 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 106 op/s
Oct 02 12:27:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:19.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 280 MiB data, 959 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 120 op/s
Oct 02 12:27:20 compute-0 nova_compute[257802]: 2025-10-02 12:27:20.302 2 DEBUG nova.virt.libvirt.driver [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:27:20 compute-0 nova_compute[257802]: 2025-10-02 12:27:20.303 2 INFO nova.virt.libvirt.driver [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Creating image(s)
Oct 02 12:27:20 compute-0 nova_compute[257802]: 2025-10-02 12:27:20.329 2 DEBUG nova.storage.rbd_utils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image e33492a6-9075-43ae-aa4d-5d5911d0f896_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:27:20 compute-0 nova_compute[257802]: 2025-10-02 12:27:20.356 2 DEBUG nova.storage.rbd_utils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image e33492a6-9075-43ae-aa4d-5d5911d0f896_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:27:20 compute-0 nova_compute[257802]: 2025-10-02 12:27:20.380 2 DEBUG nova.storage.rbd_utils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image e33492a6-9075-43ae-aa4d-5d5911d0f896_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:27:20 compute-0 nova_compute[257802]: 2025-10-02 12:27:20.384 2 DEBUG oslo_concurrency.processutils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:20 compute-0 nova_compute[257802]: 2025-10-02 12:27:20.410 2 DEBUG nova.compute.manager [req-a5296fc7-4e2d-4085-a27b-2059aa7b6638 req-fa19492a-5a46-474c-a383-ca6055e8085d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Received event network-vif-plugged-98f492d6-f27f-4d70-bf01-b54dd63403df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:27:20 compute-0 nova_compute[257802]: 2025-10-02 12:27:20.411 2 DEBUG oslo_concurrency.lockutils [req-a5296fc7-4e2d-4085-a27b-2059aa7b6638 req-fa19492a-5a46-474c-a383-ca6055e8085d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:20 compute-0 nova_compute[257802]: 2025-10-02 12:27:20.411 2 DEBUG oslo_concurrency.lockutils [req-a5296fc7-4e2d-4085-a27b-2059aa7b6638 req-fa19492a-5a46-474c-a383-ca6055e8085d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:20 compute-0 nova_compute[257802]: 2025-10-02 12:27:20.411 2 DEBUG oslo_concurrency.lockutils [req-a5296fc7-4e2d-4085-a27b-2059aa7b6638 req-fa19492a-5a46-474c-a383-ca6055e8085d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:20 compute-0 nova_compute[257802]: 2025-10-02 12:27:20.412 2 DEBUG nova.compute.manager [req-a5296fc7-4e2d-4085-a27b-2059aa7b6638 req-fa19492a-5a46-474c-a383-ca6055e8085d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] No waiting events found dispatching network-vif-plugged-98f492d6-f27f-4d70-bf01-b54dd63403df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:27:20 compute-0 nova_compute[257802]: 2025-10-02 12:27:20.412 2 WARNING nova.compute.manager [req-a5296fc7-4e2d-4085-a27b-2059aa7b6638 req-fa19492a-5a46-474c-a383-ca6055e8085d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Received unexpected event network-vif-plugged-98f492d6-f27f-4d70-bf01-b54dd63403df for instance with vm_state active and task_state rebuild_spawning.
Oct 02 12:27:20 compute-0 nova_compute[257802]: 2025-10-02 12:27:20.445 2 DEBUG oslo_concurrency.processutils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:20 compute-0 nova_compute[257802]: 2025-10-02 12:27:20.446 2 DEBUG oslo_concurrency.lockutils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "5133c8c7459ce4fa1cf043a638fc1b5c66ed8609" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:20 compute-0 nova_compute[257802]: 2025-10-02 12:27:20.446 2 DEBUG oslo_concurrency.lockutils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "5133c8c7459ce4fa1cf043a638fc1b5c66ed8609" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:20 compute-0 nova_compute[257802]: 2025-10-02 12:27:20.447 2 DEBUG oslo_concurrency.lockutils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "5133c8c7459ce4fa1cf043a638fc1b5c66ed8609" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:20 compute-0 nova_compute[257802]: 2025-10-02 12:27:20.467 2 DEBUG nova.storage.rbd_utils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image e33492a6-9075-43ae-aa4d-5d5911d0f896_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:27:20 compute-0 nova_compute[257802]: 2025-10-02 12:27:20.471 2 DEBUG oslo_concurrency.processutils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609 e33492a6-9075-43ae-aa4d-5d5911d0f896_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:27:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:20.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:27:20 compute-0 ceph-mon[73607]: pgmap v1963: 305 pgs: 305 active+clean; 280 MiB data, 959 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 120 op/s
Oct 02 12:27:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1016150118' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.250 2 DEBUG oslo_concurrency.processutils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609 e33492a6-9075-43ae-aa4d-5d5911d0f896_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.779s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:21.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.312 2 DEBUG nova.storage.rbd_utils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] resizing rbd image e33492a6-9075-43ae-aa4d-5d5911d0f896_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.406 2 DEBUG nova.virt.libvirt.driver [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.406 2 DEBUG nova.virt.libvirt.driver [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Ensure instance console log exists: /var/lib/nova/instances/e33492a6-9075-43ae-aa4d-5d5911d0f896/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.407 2 DEBUG oslo_concurrency.lockutils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.407 2 DEBUG oslo_concurrency.lockutils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.407 2 DEBUG oslo_concurrency.lockutils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.409 2 DEBUG nova.virt.libvirt.driver [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Start _get_guest_xml network_info=[{"id": "98f492d6-f27f-4d70-bf01-b54dd63403df", "address": "fa:16:3e:f5:69:69", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f492d6-f2", "ovs_interfaceid": "98f492d6-f27f-4d70-bf01-b54dd63403df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:47Z,direct_url=<?>,disk_format='qcow2',id=db05f54c-61f8-42d6-a1e2-da3219a77b12,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:49Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.413 2 WARNING nova.virt.libvirt.driver [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.418 2 DEBUG nova.virt.libvirt.host [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.418 2 DEBUG nova.virt.libvirt.host [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.421 2 DEBUG nova.virt.libvirt.host [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.421 2 DEBUG nova.virt.libvirt.host [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.422 2 DEBUG nova.virt.libvirt.driver [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.422 2 DEBUG nova.virt.hardware [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:47Z,direct_url=<?>,disk_format='qcow2',id=db05f54c-61f8-42d6-a1e2-da3219a77b12,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:49Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.422 2 DEBUG nova.virt.hardware [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.423 2 DEBUG nova.virt.hardware [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.423 2 DEBUG nova.virt.hardware [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.423 2 DEBUG nova.virt.hardware [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.423 2 DEBUG nova.virt.hardware [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.423 2 DEBUG nova.virt.hardware [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.424 2 DEBUG nova.virt.hardware [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.424 2 DEBUG nova.virt.hardware [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.424 2 DEBUG nova.virt.hardware [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.424 2 DEBUG nova.virt.hardware [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.424 2 DEBUG nova.objects.instance [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'vcpu_model' on Instance uuid e33492a6-9075-43ae-aa4d-5d5911d0f896 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.454 2 DEBUG oslo_concurrency.processutils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 305 active+clean; 280 MiB data, 959 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 116 op/s
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:27:21 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3131693636' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.856 2 DEBUG oslo_concurrency.processutils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.889 2 DEBUG nova.storage.rbd_utils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image e33492a6-9075-43ae-aa4d-5d5911d0f896_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:27:21 compute-0 nova_compute[257802]: 2025-10-02 12:27:21.892 2 DEBUG oslo_concurrency.processutils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3131693636' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:27:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:27:22 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2652102332' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:27:22 compute-0 nova_compute[257802]: 2025-10-02 12:27:22.316 2 DEBUG oslo_concurrency.processutils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:22 compute-0 nova_compute[257802]: 2025-10-02 12:27:22.317 2 DEBUG nova.virt.libvirt.vif [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:26:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1278141758',display_name='tempest-ServerDiskConfigTestJSON-server-1278141758',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1278141758',id=106,image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:26:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='27a1729bf10548219b90df46839849f5',ramdisk_id='',reservation_id='r-nqm0w6lf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1123059068',owner_user_name='tempest-ServerDiskConfigTestJSON-1123059068-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:27:19Z,user_data=None,user_id='4a89b71e2513413e922ee6d5d06362b1',uuid=e33492a6-9075-43ae-aa4d-5d5911d0f896,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "98f492d6-f27f-4d70-bf01-b54dd63403df", "address": "fa:16:3e:f5:69:69", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f492d6-f2", "ovs_interfaceid": "98f492d6-f27f-4d70-bf01-b54dd63403df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:27:22 compute-0 nova_compute[257802]: 2025-10-02 12:27:22.317 2 DEBUG nova.network.os_vif_util [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converting VIF {"id": "98f492d6-f27f-4d70-bf01-b54dd63403df", "address": "fa:16:3e:f5:69:69", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f492d6-f2", "ovs_interfaceid": "98f492d6-f27f-4d70-bf01-b54dd63403df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:27:22 compute-0 nova_compute[257802]: 2025-10-02 12:27:22.318 2 DEBUG nova.network.os_vif_util [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:69:69,bridge_name='br-int',has_traffic_filtering=True,id=98f492d6-f27f-4d70-bf01-b54dd63403df,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98f492d6-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:27:22 compute-0 nova_compute[257802]: 2025-10-02 12:27:22.320 2 DEBUG nova.virt.libvirt.driver [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:27:22 compute-0 nova_compute[257802]:   <uuid>e33492a6-9075-43ae-aa4d-5d5911d0f896</uuid>
Oct 02 12:27:22 compute-0 nova_compute[257802]:   <name>instance-0000006a</name>
Oct 02 12:27:22 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:27:22 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:27:22 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:27:22 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1278141758</nova:name>
Oct 02 12:27:22 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:27:21</nova:creationTime>
Oct 02 12:27:22 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:27:22 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:27:22 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:27:22 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:27:22 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:27:22 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:27:22 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:27:22 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:27:22 compute-0 nova_compute[257802]:         <nova:user uuid="4a89b71e2513413e922ee6d5d06362b1">tempest-ServerDiskConfigTestJSON-1123059068-project-member</nova:user>
Oct 02 12:27:22 compute-0 nova_compute[257802]:         <nova:project uuid="27a1729bf10548219b90df46839849f5">tempest-ServerDiskConfigTestJSON-1123059068</nova:project>
Oct 02 12:27:22 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:27:22 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="db05f54c-61f8-42d6-a1e2-da3219a77b12"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:27:22 compute-0 nova_compute[257802]:         <nova:port uuid="98f492d6-f27f-4d70-bf01-b54dd63403df">
Oct 02 12:27:22 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:27:22 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:27:22 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:27:22 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <system>
Oct 02 12:27:22 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:27:22 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:27:22 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:27:22 compute-0 nova_compute[257802]:       <entry name="serial">e33492a6-9075-43ae-aa4d-5d5911d0f896</entry>
Oct 02 12:27:22 compute-0 nova_compute[257802]:       <entry name="uuid">e33492a6-9075-43ae-aa4d-5d5911d0f896</entry>
Oct 02 12:27:22 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     </system>
Oct 02 12:27:22 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:27:22 compute-0 nova_compute[257802]:   <os>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:   </os>
Oct 02 12:27:22 compute-0 nova_compute[257802]:   <features>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:   </features>
Oct 02 12:27:22 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:27:22 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:27:22 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:27:22 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/e33492a6-9075-43ae-aa4d-5d5911d0f896_disk">
Oct 02 12:27:22 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:       </source>
Oct 02 12:27:22 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:27:22 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:27:22 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:27:22 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/e33492a6-9075-43ae-aa4d-5d5911d0f896_disk.config">
Oct 02 12:27:22 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:       </source>
Oct 02 12:27:22 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:27:22 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:27:22 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:27:22 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:f5:69:69"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:       <target dev="tap98f492d6-f2"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:27:22 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/e33492a6-9075-43ae-aa4d-5d5911d0f896/console.log" append="off"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <video>
Oct 02 12:27:22 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     </video>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:27:22 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:27:22 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:27:22 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:27:22 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:27:22 compute-0 nova_compute[257802]: </domain>
Oct 02 12:27:22 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:27:22 compute-0 nova_compute[257802]: 2025-10-02 12:27:22.322 2 DEBUG nova.compute.manager [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Preparing to wait for external event network-vif-plugged-98f492d6-f27f-4d70-bf01-b54dd63403df prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:27:22 compute-0 nova_compute[257802]: 2025-10-02 12:27:22.322 2 DEBUG oslo_concurrency.lockutils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:22 compute-0 nova_compute[257802]: 2025-10-02 12:27:22.322 2 DEBUG oslo_concurrency.lockutils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:22 compute-0 nova_compute[257802]: 2025-10-02 12:27:22.322 2 DEBUG oslo_concurrency.lockutils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:22 compute-0 nova_compute[257802]: 2025-10-02 12:27:22.323 2 DEBUG nova.virt.libvirt.vif [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:26:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1278141758',display_name='tempest-ServerDiskConfigTestJSON-server-1278141758',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1278141758',id=106,image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:26:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='27a1729bf10548219b90df46839849f5',ramdisk_id='',reservation_id='r-nqm0w6lf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1123059068',owner_user_name='tempest-ServerDiskConfigTestJSON-1123059068-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:27:19Z,user_data=None,user_id='4a89b71e2513413e922ee6d5d06362b1',uuid=e33492a6-9075-43ae-aa4d-5d5911d0f896,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "98f492d6-f27f-4d70-bf01-b54dd63403df", "address": "fa:16:3e:f5:69:69", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f492d6-f2", "ovs_interfaceid": "98f492d6-f27f-4d70-bf01-b54dd63403df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:27:22 compute-0 nova_compute[257802]: 2025-10-02 12:27:22.323 2 DEBUG nova.network.os_vif_util [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converting VIF {"id": "98f492d6-f27f-4d70-bf01-b54dd63403df", "address": "fa:16:3e:f5:69:69", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f492d6-f2", "ovs_interfaceid": "98f492d6-f27f-4d70-bf01-b54dd63403df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:27:22 compute-0 nova_compute[257802]: 2025-10-02 12:27:22.323 2 DEBUG nova.network.os_vif_util [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:69:69,bridge_name='br-int',has_traffic_filtering=True,id=98f492d6-f27f-4d70-bf01-b54dd63403df,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98f492d6-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:27:22 compute-0 nova_compute[257802]: 2025-10-02 12:27:22.324 2 DEBUG os_vif [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:69:69,bridge_name='br-int',has_traffic_filtering=True,id=98f492d6-f27f-4d70-bf01-b54dd63403df,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98f492d6-f2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:27:22 compute-0 nova_compute[257802]: 2025-10-02 12:27:22.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:22 compute-0 nova_compute[257802]: 2025-10-02 12:27:22.325 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:22 compute-0 nova_compute[257802]: 2025-10-02 12:27:22.325 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:27:22 compute-0 nova_compute[257802]: 2025-10-02 12:27:22.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:22 compute-0 nova_compute[257802]: 2025-10-02 12:27:22.327 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98f492d6-f2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:22 compute-0 nova_compute[257802]: 2025-10-02 12:27:22.328 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap98f492d6-f2, col_values=(('external_ids', {'iface-id': '98f492d6-f27f-4d70-bf01-b54dd63403df', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f5:69:69', 'vm-uuid': 'e33492a6-9075-43ae-aa4d-5d5911d0f896'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:22 compute-0 nova_compute[257802]: 2025-10-02 12:27:22.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:22 compute-0 NetworkManager[44987]: <info>  [1759408042.3307] manager: (tap98f492d6-f2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/223)
Oct 02 12:27:22 compute-0 nova_compute[257802]: 2025-10-02 12:27:22.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:27:22 compute-0 nova_compute[257802]: 2025-10-02 12:27:22.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:22 compute-0 nova_compute[257802]: 2025-10-02 12:27:22.334 2 INFO os_vif [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:69:69,bridge_name='br-int',has_traffic_filtering=True,id=98f492d6-f27f-4d70-bf01-b54dd63403df,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98f492d6-f2')
Oct 02 12:27:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:22.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:22 compute-0 nova_compute[257802]: 2025-10-02 12:27:22.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:23 compute-0 ceph-mon[73607]: pgmap v1964: 305 pgs: 305 active+clean; 280 MiB data, 959 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 116 op/s
Oct 02 12:27:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2652102332' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:27:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:23.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:23 compute-0 sudo[324901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:27:23 compute-0 sudo[324901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:23 compute-0 sudo[324901]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:23 compute-0 sudo[324926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:27:23 compute-0 sudo[324926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:23 compute-0 sudo[324926]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1965: 305 pgs: 305 active+clean; 279 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.5 MiB/s wr, 161 op/s
Oct 02 12:27:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:27:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:24.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:27:25 compute-0 ceph-mon[73607]: pgmap v1965: 305 pgs: 305 active+clean; 279 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.5 MiB/s wr, 161 op/s
Oct 02 12:27:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:25.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 305 active+clean; 295 MiB data, 960 MiB used, 20 GiB / 21 GiB avail; 518 KiB/s rd, 2.4 MiB/s wr, 109 op/s
Oct 02 12:27:26 compute-0 nova_compute[257802]: 2025-10-02 12:27:26.553 2 DEBUG nova.virt.libvirt.driver [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:27:26 compute-0 nova_compute[257802]: 2025-10-02 12:27:26.553 2 DEBUG nova.virt.libvirt.driver [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:27:26 compute-0 nova_compute[257802]: 2025-10-02 12:27:26.554 2 DEBUG nova.virt.libvirt.driver [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] No VIF found with MAC fa:16:3e:f5:69:69, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:27:26 compute-0 nova_compute[257802]: 2025-10-02 12:27:26.555 2 INFO nova.virt.libvirt.driver [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Using config drive
Oct 02 12:27:26 compute-0 nova_compute[257802]: 2025-10-02 12:27:26.587 2 DEBUG nova.storage.rbd_utils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image e33492a6-9075-43ae-aa4d-5d5911d0f896_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:27:26 compute-0 nova_compute[257802]: 2025-10-02 12:27:26.620 2 DEBUG nova.objects.instance [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'ec2_ids' on Instance uuid e33492a6-9075-43ae-aa4d-5d5911d0f896 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:27:26 compute-0 nova_compute[257802]: 2025-10-02 12:27:26.689 2 DEBUG nova.objects.instance [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'keypairs' on Instance uuid e33492a6-9075-43ae-aa4d-5d5911d0f896 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:27:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:26.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:26.946 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:26.947 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:26.947 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:27 compute-0 nova_compute[257802]: 2025-10-02 12:27:27.189 2 INFO nova.virt.libvirt.driver [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Creating config drive at /var/lib/nova/instances/e33492a6-9075-43ae-aa4d-5d5911d0f896/disk.config
Oct 02 12:27:27 compute-0 nova_compute[257802]: 2025-10-02 12:27:27.193 2 DEBUG oslo_concurrency.processutils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e33492a6-9075-43ae-aa4d-5d5911d0f896/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3xmwwe54 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:27.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:27 compute-0 nova_compute[257802]: 2025-10-02 12:27:27.322 2 DEBUG oslo_concurrency.processutils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e33492a6-9075-43ae-aa4d-5d5911d0f896/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3xmwwe54" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:27 compute-0 nova_compute[257802]: 2025-10-02 12:27:27.350 2 DEBUG nova.storage.rbd_utils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image e33492a6-9075-43ae-aa4d-5d5911d0f896_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:27:27 compute-0 nova_compute[257802]: 2025-10-02 12:27:27.354 2 DEBUG oslo_concurrency.processutils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e33492a6-9075-43ae-aa4d-5d5911d0f896/disk.config e33492a6-9075-43ae-aa4d-5d5911d0f896_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:27 compute-0 nova_compute[257802]: 2025-10-02 12:27:27.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:27 compute-0 ceph-mon[73607]: pgmap v1966: 305 pgs: 305 active+clean; 295 MiB data, 960 MiB used, 20 GiB / 21 GiB avail; 518 KiB/s rd, 2.4 MiB/s wr, 109 op/s
Oct 02 12:27:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 305 active+clean; 307 MiB data, 962 MiB used, 20 GiB / 21 GiB avail; 62 KiB/s rd, 2.0 MiB/s wr, 94 op/s
Oct 02 12:27:27 compute-0 nova_compute[257802]: 2025-10-02 12:27:27.733 2 DEBUG oslo_concurrency.processutils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e33492a6-9075-43ae-aa4d-5d5911d0f896/disk.config e33492a6-9075-43ae-aa4d-5d5911d0f896_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.379s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:27 compute-0 nova_compute[257802]: 2025-10-02 12:27:27.734 2 INFO nova.virt.libvirt.driver [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Deleting local config drive /var/lib/nova/instances/e33492a6-9075-43ae-aa4d-5d5911d0f896/disk.config because it was imported into RBD.
Oct 02 12:27:27 compute-0 kernel: tap98f492d6-f2: entered promiscuous mode
Oct 02 12:27:27 compute-0 NetworkManager[44987]: <info>  [1759408047.7922] manager: (tap98f492d6-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/224)
Oct 02 12:27:27 compute-0 ovn_controller[148183]: 2025-10-02T12:27:27Z|00465|binding|INFO|Claiming lport 98f492d6-f27f-4d70-bf01-b54dd63403df for this chassis.
Oct 02 12:27:27 compute-0 ovn_controller[148183]: 2025-10-02T12:27:27Z|00466|binding|INFO|98f492d6-f27f-4d70-bf01-b54dd63403df: Claiming fa:16:3e:f5:69:69 10.100.0.12
Oct 02 12:27:27 compute-0 nova_compute[257802]: 2025-10-02 12:27:27.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:27 compute-0 ovn_controller[148183]: 2025-10-02T12:27:27Z|00467|binding|INFO|Setting lport 98f492d6-f27f-4d70-bf01-b54dd63403df ovn-installed in OVS
Oct 02 12:27:27 compute-0 nova_compute[257802]: 2025-10-02 12:27:27.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:27 compute-0 nova_compute[257802]: 2025-10-02 12:27:27.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:27 compute-0 nova_compute[257802]: 2025-10-02 12:27:27.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:27 compute-0 systemd-udevd[325024]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:27:27 compute-0 systemd-machined[211836]: New machine qemu-54-instance-0000006a.
Oct 02 12:27:27 compute-0 NetworkManager[44987]: <info>  [1759408047.8387] device (tap98f492d6-f2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:27:27 compute-0 NetworkManager[44987]: <info>  [1759408047.8396] device (tap98f492d6-f2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:27:27 compute-0 ovn_controller[148183]: 2025-10-02T12:27:27Z|00468|binding|INFO|Setting lport 98f492d6-f27f-4d70-bf01-b54dd63403df up in Southbound
Oct 02 12:27:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:27.839 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:69:69 10.100.0.12'], port_security=['fa:16:3e:f5:69:69 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e33492a6-9075-43ae-aa4d-5d5911d0f896', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27a1729bf10548219b90df46839849f5', 'neutron:revision_number': '5', 'neutron:security_group_ids': '19f6d4f0-1655-4062-a124-10140844bfae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f7e0b23-d51b-4498-9dd8-e3096f69c99c, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=98f492d6-f27f-4d70-bf01-b54dd63403df) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:27:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:27.840 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 98f492d6-f27f-4d70-bf01-b54dd63403df in datapath 247d774d-0cc8-4ef2-a9b8-c756adae0874 bound to our chassis
Oct 02 12:27:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:27.841 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 247d774d-0cc8-4ef2-a9b8-c756adae0874
Oct 02 12:27:27 compute-0 systemd[1]: Started Virtual Machine qemu-54-instance-0000006a.
Oct 02 12:27:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:27.852 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c86e1291-36f9-48d1-aa4d-08a1a9405751]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:27.853 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap247d774d-01 in ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:27:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:27.855 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap247d774d-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:27:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:27.855 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[15931baf-3aeb-4bc4-8cef-2b3c23bb0225]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:27.856 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[22a527fb-5cd2-4763-bbfa-b6a2d1f9ad0a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:27.875 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[600396ea-18a1-4bcb-a683-c216035e3582]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:27.892 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[703e8e81-24a2-45f5-bd4d-d2b45c0ac6c7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:27.927 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[59517bd0-8996-44dd-8a73-adbc9cbf002f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:27.934 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ab4cec15-51ba-45bb-847e-73bde6550695]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:27 compute-0 NetworkManager[44987]: <info>  [1759408047.9383] manager: (tap247d774d-00): new Veth device (/org/freedesktop/NetworkManager/Devices/225)
Oct 02 12:27:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:27.975 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[93524fa0-8463-47fe-bb12-41996e955d20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:27.980 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[139e1f82-4819-4e3f-b00b-e221e687125a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:28 compute-0 NetworkManager[44987]: <info>  [1759408048.0045] device (tap247d774d-00): carrier: link connected
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:28.014 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[d0454749-f9da-434c-9b12-98441fecac4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:28.037 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5a77fb86-0d56-4b5a-b09f-f4ef7faddf31]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap247d774d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:ab:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611562, 'reachable_time': 30720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325057, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:28.054 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4aefd8a2-8bf9-4ed8-9919-78b4d3af0078]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1b:ab18'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611562, 'tstamp': 611562}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325058, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:28.076 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[fb2f6bd7-75aa-4d12-bd0c-81c23fd508f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap247d774d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:ab:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611562, 'reachable_time': 30720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 325059, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:28.109 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c1d879-5d69-47db-8484-225e74abdc22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:28.183 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[010119c5-06f2-4deb-b004-ca4c0e8afdea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:28.185 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap247d774d-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:28.186 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:28.187 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap247d774d-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:28 compute-0 nova_compute[257802]: 2025-10-02 12:27:28.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:28 compute-0 NetworkManager[44987]: <info>  [1759408048.1906] manager: (tap247d774d-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/226)
Oct 02 12:27:28 compute-0 kernel: tap247d774d-00: entered promiscuous mode
Oct 02 12:27:28 compute-0 nova_compute[257802]: 2025-10-02 12:27:28.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:28.195 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap247d774d-00, col_values=(('external_ids', {'iface-id': '04584168-a51c-41f9-9206-d39db8a81566'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:28 compute-0 nova_compute[257802]: 2025-10-02 12:27:28.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:28 compute-0 ovn_controller[148183]: 2025-10-02T12:27:28Z|00469|binding|INFO|Releasing lport 04584168-a51c-41f9-9206-d39db8a81566 from this chassis (sb_readonly=0)
Oct 02 12:27:28 compute-0 nova_compute[257802]: 2025-10-02 12:27:28.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:28.227 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/247d774d-0cc8-4ef2-a9b8-c756adae0874.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/247d774d-0cc8-4ef2-a9b8-c756adae0874.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:28.228 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ac60e929-0186-4de4-8e29-5fb52227cb13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:28.229 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-247d774d-0cc8-4ef2-a9b8-c756adae0874
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/247d774d-0cc8-4ef2-a9b8-c756adae0874.pid.haproxy
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 247d774d-0cc8-4ef2-a9b8-c756adae0874
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:27:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:28.231 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'env', 'PROCESS_TAG=haproxy-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/247d774d-0cc8-4ef2-a9b8-c756adae0874.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:27:28 compute-0 ceph-mon[73607]: pgmap v1967: 305 pgs: 305 active+clean; 307 MiB data, 962 MiB used, 20 GiB / 21 GiB avail; 62 KiB/s rd, 2.0 MiB/s wr, 94 op/s
Oct 02 12:27:28 compute-0 podman[325133]: 2025-10-02 12:27:28.58507459 +0000 UTC m=+0.058373305 container create 1cfe2ce8bdb409fbb5ea4a5a4e49b262250b539e8580d1c2d73a70ee7d9e7953 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 12:27:28 compute-0 systemd[1]: Started libpod-conmon-1cfe2ce8bdb409fbb5ea4a5a4e49b262250b539e8580d1c2d73a70ee7d9e7953.scope.
Oct 02 12:27:28 compute-0 podman[325133]: 2025-10-02 12:27:28.548006009 +0000 UTC m=+0.021304734 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:27:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:27:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/225b0f70fec0badc436ea645d11e0d643daabe07938ef109c9b4c6dc07839c05/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:27:28 compute-0 podman[325133]: 2025-10-02 12:27:28.68510852 +0000 UTC m=+0.158407275 container init 1cfe2ce8bdb409fbb5ea4a5a4e49b262250b539e8580d1c2d73a70ee7d9e7953 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 02 12:27:28 compute-0 podman[325133]: 2025-10-02 12:27:28.690053211 +0000 UTC m=+0.163351936 container start 1cfe2ce8bdb409fbb5ea4a5a4e49b262250b539e8580d1c2d73a70ee7d9e7953 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:27:28 compute-0 nova_compute[257802]: 2025-10-02 12:27:28.706 2 DEBUG nova.compute.manager [req-6b24f93c-0105-4fcc-a608-11251885f20b req-a00f45cd-d397-4528-8a0b-8a2c81ed4c7c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Received event network-vif-plugged-98f492d6-f27f-4d70-bf01-b54dd63403df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:27:28 compute-0 nova_compute[257802]: 2025-10-02 12:27:28.706 2 DEBUG oslo_concurrency.lockutils [req-6b24f93c-0105-4fcc-a608-11251885f20b req-a00f45cd-d397-4528-8a0b-8a2c81ed4c7c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:28 compute-0 nova_compute[257802]: 2025-10-02 12:27:28.707 2 DEBUG oslo_concurrency.lockutils [req-6b24f93c-0105-4fcc-a608-11251885f20b req-a00f45cd-d397-4528-8a0b-8a2c81ed4c7c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:28 compute-0 nova_compute[257802]: 2025-10-02 12:27:28.707 2 DEBUG oslo_concurrency.lockutils [req-6b24f93c-0105-4fcc-a608-11251885f20b req-a00f45cd-d397-4528-8a0b-8a2c81ed4c7c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:28 compute-0 nova_compute[257802]: 2025-10-02 12:27:28.707 2 DEBUG nova.compute.manager [req-6b24f93c-0105-4fcc-a608-11251885f20b req-a00f45cd-d397-4528-8a0b-8a2c81ed4c7c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Processing event network-vif-plugged-98f492d6-f27f-4d70-bf01-b54dd63403df _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:27:28 compute-0 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[325149]: [NOTICE]   (325153) : New worker (325155) forked
Oct 02 12:27:28 compute-0 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[325149]: [NOTICE]   (325153) : Loading success.
Oct 02 12:27:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:28.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:28 compute-0 nova_compute[257802]: 2025-10-02 12:27:28.968 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Removed pending event for e33492a6-9075-43ae-aa4d-5d5911d0f896 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:27:28 compute-0 nova_compute[257802]: 2025-10-02 12:27:28.968 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408048.9677289, e33492a6-9075-43ae-aa4d-5d5911d0f896 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:27:28 compute-0 nova_compute[257802]: 2025-10-02 12:27:28.969 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] VM Started (Lifecycle Event)
Oct 02 12:27:28 compute-0 nova_compute[257802]: 2025-10-02 12:27:28.971 2 DEBUG nova.compute.manager [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:27:28 compute-0 nova_compute[257802]: 2025-10-02 12:27:28.974 2 DEBUG nova.virt.libvirt.driver [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:27:28 compute-0 nova_compute[257802]: 2025-10-02 12:27:28.977 2 INFO nova.virt.libvirt.driver [-] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Instance spawned successfully.
Oct 02 12:27:28 compute-0 nova_compute[257802]: 2025-10-02 12:27:28.977 2 DEBUG nova.virt.libvirt.driver [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:27:29 compute-0 nova_compute[257802]: 2025-10-02 12:27:29.017 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:27:29 compute-0 nova_compute[257802]: 2025-10-02 12:27:29.021 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:27:29 compute-0 nova_compute[257802]: 2025-10-02 12:27:29.084 2 DEBUG nova.virt.libvirt.driver [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:27:29 compute-0 nova_compute[257802]: 2025-10-02 12:27:29.084 2 DEBUG nova.virt.libvirt.driver [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:27:29 compute-0 nova_compute[257802]: 2025-10-02 12:27:29.085 2 DEBUG nova.virt.libvirt.driver [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:27:29 compute-0 nova_compute[257802]: 2025-10-02 12:27:29.085 2 DEBUG nova.virt.libvirt.driver [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:27:29 compute-0 nova_compute[257802]: 2025-10-02 12:27:29.086 2 DEBUG nova.virt.libvirt.driver [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:27:29 compute-0 nova_compute[257802]: 2025-10-02 12:27:29.086 2 DEBUG nova.virt.libvirt.driver [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:27:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:27:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:29.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:27:29 compute-0 nova_compute[257802]: 2025-10-02 12:27:29.450 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 12:27:29 compute-0 nova_compute[257802]: 2025-10-02 12:27:29.452 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408048.9680295, e33492a6-9075-43ae-aa4d-5d5911d0f896 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:27:29 compute-0 nova_compute[257802]: 2025-10-02 12:27:29.452 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] VM Paused (Lifecycle Event)
Oct 02 12:27:29 compute-0 nova_compute[257802]: 2025-10-02 12:27:29.538 2 DEBUG nova.compute.manager [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:27:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 305 active+clean; 341 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 153 op/s
Oct 02 12:27:29 compute-0 nova_compute[257802]: 2025-10-02 12:27:29.582 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:27:29 compute-0 nova_compute[257802]: 2025-10-02 12:27:29.587 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408048.975069, e33492a6-9075-43ae-aa4d-5d5911d0f896 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:27:29 compute-0 nova_compute[257802]: 2025-10-02 12:27:29.587 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] VM Resumed (Lifecycle Event)
Oct 02 12:27:29 compute-0 nova_compute[257802]: 2025-10-02 12:27:29.635 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:27:29 compute-0 nova_compute[257802]: 2025-10-02 12:27:29.639 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:27:30 compute-0 nova_compute[257802]: 2025-10-02 12:27:30.061 2 DEBUG oslo_concurrency.lockutils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:30 compute-0 nova_compute[257802]: 2025-10-02 12:27:30.062 2 DEBUG oslo_concurrency.lockutils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:30 compute-0 nova_compute[257802]: 2025-10-02 12:27:30.062 2 DEBUG nova.objects.instance [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:27:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:30.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:30 compute-0 ceph-mon[73607]: pgmap v1968: 305 pgs: 305 active+clean; 341 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 153 op/s
Oct 02 12:27:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:27:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:31.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:27:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1969: 305 pgs: 305 active+clean; 341 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 139 op/s
Oct 02 12:27:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:32 compute-0 nova_compute[257802]: 2025-10-02 12:27:32.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:32.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:32 compute-0 nova_compute[257802]: 2025-10-02 12:27:32.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:33.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:33 compute-0 ceph-mon[73607]: pgmap v1969: 305 pgs: 305 active+clean; 341 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 139 op/s
Oct 02 12:27:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 305 active+clean; 341 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Oct 02 12:27:33 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Oct 02 12:27:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:27:33.814997) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:27:33 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Oct 02 12:27:33 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408053815068, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 2188, "num_deletes": 255, "total_data_size": 3685239, "memory_usage": 3735056, "flush_reason": "Manual Compaction"}
Oct 02 12:27:33 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Oct 02 12:27:33 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408053947428, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 3616185, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41744, "largest_seqno": 43930, "table_properties": {"data_size": 3606419, "index_size": 6132, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20964, "raw_average_key_size": 20, "raw_value_size": 3586573, "raw_average_value_size": 3554, "num_data_blocks": 265, "num_entries": 1009, "num_filter_entries": 1009, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407860, "oldest_key_time": 1759407860, "file_creation_time": 1759408053, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:27:33 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 132696 microseconds, and 6780 cpu microseconds.
Oct 02 12:27:33 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:27:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:27:33.947692) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 3616185 bytes OK
Oct 02 12:27:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:27:33.947722) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Oct 02 12:27:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:27:33.993637) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Oct 02 12:27:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:27:33.993703) EVENT_LOG_v1 {"time_micros": 1759408053993686, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:27:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:27:33.993736) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:27:33 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 3676304, prev total WAL file size 3676304, number of live WAL files 2.
Oct 02 12:27:33 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:27:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:27:33.995487) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Oct 02 12:27:33 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:27:33 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(3531KB)], [92(9203KB)]
Oct 02 12:27:33 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408053995551, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 13040635, "oldest_snapshot_seqno": -1}
Oct 02 12:27:34 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 7173 keys, 11105692 bytes, temperature: kUnknown
Oct 02 12:27:34 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408054135530, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 11105692, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11056833, "index_size": 29803, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17989, "raw_key_size": 184662, "raw_average_key_size": 25, "raw_value_size": 10927747, "raw_average_value_size": 1523, "num_data_blocks": 1179, "num_entries": 7173, "num_filter_entries": 7173, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759408053, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:27:34 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:27:34 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:27:34.135967) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 11105692 bytes
Oct 02 12:27:34 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:27:34.190591) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 93.1 rd, 79.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 9.0 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 7702, records dropped: 529 output_compression: NoCompression
Oct 02 12:27:34 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:27:34.190635) EVENT_LOG_v1 {"time_micros": 1759408054190616, "job": 54, "event": "compaction_finished", "compaction_time_micros": 140094, "compaction_time_cpu_micros": 45346, "output_level": 6, "num_output_files": 1, "total_output_size": 11105692, "num_input_records": 7702, "num_output_records": 7173, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:27:34 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:27:34 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408054191940, "job": 54, "event": "table_file_deletion", "file_number": 94}
Oct 02 12:27:34 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:27:34 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408054195246, "job": 54, "event": "table_file_deletion", "file_number": 92}
Oct 02 12:27:34 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:27:33.995381) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:27:34 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:27:34.195297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:27:34 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:27:34.195304) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:27:34 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:27:34.195308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:27:34 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:27:34.195312) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:27:34 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:27:34.195316) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:27:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:34.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:34 compute-0 ceph-mon[73607]: pgmap v1970: 305 pgs: 305 active+clean; 341 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Oct 02 12:27:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:27:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:35.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:27:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 341 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.3 MiB/s wr, 172 op/s
Oct 02 12:27:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:36.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:37.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:37 compute-0 ceph-mon[73607]: pgmap v1971: 305 pgs: 305 active+clean; 341 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.3 MiB/s wr, 172 op/s
Oct 02 12:27:37 compute-0 nova_compute[257802]: 2025-10-02 12:27:37.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 305 active+clean; 341 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 161 op/s
Oct 02 12:27:37 compute-0 nova_compute[257802]: 2025-10-02 12:27:37.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:38.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:38 compute-0 podman[325169]: 2025-10-02 12:27:38.9524709 +0000 UTC m=+0.079407493 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 12:27:38 compute-0 podman[325171]: 2025-10-02 12:27:38.958313704 +0000 UTC m=+0.081734820 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:27:38 compute-0 podman[325170]: 2025-10-02 12:27:38.982389066 +0000 UTC m=+0.111259646 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 12:27:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:39.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 344 MiB data, 1002 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.0 MiB/s wr, 155 op/s
Oct 02 12:27:39 compute-0 ceph-mon[73607]: pgmap v1972: 305 pgs: 305 active+clean; 341 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 161 op/s
Oct 02 12:27:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:40.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:41 compute-0 nova_compute[257802]: 2025-10-02 12:27:41.164 2 DEBUG oslo_concurrency.lockutils [None req-14f4f472-b7a2-4c80-b7ba-e8defca42eb6 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 11.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:41 compute-0 ceph-mon[73607]: pgmap v1973: 305 pgs: 305 active+clean; 344 MiB data, 1002 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.0 MiB/s wr, 155 op/s
Oct 02 12:27:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:41.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 305 active+clean; 344 MiB data, 1002 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 485 KiB/s wr, 87 op/s
Oct 02 12:27:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:42 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Oct 02 12:27:42 compute-0 nova_compute[257802]: 2025-10-02 12:27:42.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:27:42
Oct 02 12:27:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:27:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:27:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'vms', 'volumes', 'images', 'backups', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', '.mgr']
Oct 02 12:27:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:27:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:27:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:27:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:27:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:27:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:27:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:27:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:27:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:42.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:27:42 compute-0 nova_compute[257802]: 2025-10-02 12:27:42.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:42 compute-0 nova_compute[257802]: 2025-10-02 12:27:42.992 2 DEBUG nova.compute.manager [req-6f5655d9-a44b-499e-a156-1a22f3f3d770 req-0ec8d013-9306-4223-a7e9-f332e938728a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Received event network-vif-plugged-98f492d6-f27f-4d70-bf01-b54dd63403df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:27:42 compute-0 nova_compute[257802]: 2025-10-02 12:27:42.992 2 DEBUG oslo_concurrency.lockutils [req-6f5655d9-a44b-499e-a156-1a22f3f3d770 req-0ec8d013-9306-4223-a7e9-f332e938728a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:42 compute-0 nova_compute[257802]: 2025-10-02 12:27:42.993 2 DEBUG oslo_concurrency.lockutils [req-6f5655d9-a44b-499e-a156-1a22f3f3d770 req-0ec8d013-9306-4223-a7e9-f332e938728a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:42 compute-0 nova_compute[257802]: 2025-10-02 12:27:42.993 2 DEBUG oslo_concurrency.lockutils [req-6f5655d9-a44b-499e-a156-1a22f3f3d770 req-0ec8d013-9306-4223-a7e9-f332e938728a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:42 compute-0 nova_compute[257802]: 2025-10-02 12:27:42.993 2 DEBUG nova.compute.manager [req-6f5655d9-a44b-499e-a156-1a22f3f3d770 req-0ec8d013-9306-4223-a7e9-f332e938728a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] No waiting events found dispatching network-vif-plugged-98f492d6-f27f-4d70-bf01-b54dd63403df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:27:42 compute-0 nova_compute[257802]: 2025-10-02 12:27:42.994 2 WARNING nova.compute.manager [req-6f5655d9-a44b-499e-a156-1a22f3f3d770 req-0ec8d013-9306-4223-a7e9-f332e938728a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Received unexpected event network-vif-plugged-98f492d6-f27f-4d70-bf01-b54dd63403df for instance with vm_state active and task_state None.
Oct 02 12:27:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:27:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:27:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:27:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:27:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:27:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:43.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:27:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:27:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:27:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:27:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:27:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 305 active+clean; 361 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Oct 02 12:27:43 compute-0 sudo[325223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:27:43 compute-0 sudo[325223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:43 compute-0 sudo[325223]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:43 compute-0 sudo[325248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:27:43 compute-0 sudo[325248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:43 compute-0 sudo[325248]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:43 compute-0 ceph-mon[73607]: pgmap v1974: 305 pgs: 305 active+clean; 344 MiB data, 1002 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 485 KiB/s wr, 87 op/s
Oct 02 12:27:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1536085439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:43 compute-0 sudo[325273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:27:43 compute-0 sudo[325273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:43 compute-0 sudo[325273]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:43 compute-0 sudo[325298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:27:43 compute-0 sudo[325298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:43 compute-0 sudo[325298]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:43 compute-0 sudo[325323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:27:43 compute-0 sudo[325323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:43 compute-0 sudo[325323]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:44 compute-0 sudo[325349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:27:44 compute-0 sudo[325349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:44 compute-0 podman[325347]: 2025-10-02 12:27:44.092165833 +0000 UTC m=+0.112264730 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:27:44 compute-0 sudo[325349]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:44.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:27:44 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:27:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:27:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:27:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:27:44 compute-0 ceph-mon[73607]: pgmap v1975: 305 pgs: 305 active+clean; 361 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Oct 02 12:27:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:45.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 305 active+clean; 365 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 572 KiB/s rd, 2.7 MiB/s wr, 51 op/s
Oct 02 12:27:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:45.668 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:27:45 compute-0 nova_compute[257802]: 2025-10-02 12:27:45.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:45.671 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:27:45 compute-0 ovn_controller[148183]: 2025-10-02T12:27:45Z|00470|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Oct 02 12:27:45 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:27:45 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev a1b197a8-64dc-4d36-b8af-8d869fe2a183 does not exist
Oct 02 12:27:45 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev ff91399a-8a8c-4721-9224-c9cdbc8b5f6c does not exist
Oct 02 12:27:45 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev b70b020b-dfc9-45bf-9d03-6adf0640b88f does not exist
Oct 02 12:27:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:27:45 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:27:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:27:45 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:27:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:27:45 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:27:45 compute-0 sudo[325431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:27:45 compute-0 sudo[325431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:45 compute-0 sudo[325431]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:46 compute-0 sudo[325456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:27:46 compute-0 sudo[325456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:46 compute-0 sudo[325456]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:46 compute-0 sudo[325481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:27:46 compute-0 sudo[325481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:46 compute-0 sudo[325481]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:46 compute-0 sudo[325506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:27:46 compute-0 sudo[325506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:27:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:27:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:27:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:27:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:27:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:27:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:46.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:46 compute-0 podman[325573]: 2025-10-02 12:27:46.710518177 +0000 UTC m=+0.060116949 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:27:47 compute-0 podman[325573]: 2025-10-02 12:27:47.26421581 +0000 UTC m=+0.613814532 container create 3cf0b4a77419d854c74b0e9d5cff5273158a06e125065cb219db53006a0bf9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 12:27:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:47.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:47 compute-0 nova_compute[257802]: 2025-10-02 12:27:47.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 305 active+clean; 379 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 98 KiB/s rd, 3.7 MiB/s wr, 44 op/s
Oct 02 12:27:47 compute-0 nova_compute[257802]: 2025-10-02 12:27:47.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:48 compute-0 systemd[1]: Started libpod-conmon-3cf0b4a77419d854c74b0e9d5cff5273158a06e125065cb219db53006a0bf9a9.scope.
Oct 02 12:27:48 compute-0 nova_compute[257802]: 2025-10-02 12:27:48.031 2 DEBUG oslo_concurrency.lockutils [None req-b682b149-24c1-43b0-8f8b-98c5e9cb1218 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "e33492a6-9075-43ae-aa4d-5d5911d0f896" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:48 compute-0 nova_compute[257802]: 2025-10-02 12:27:48.032 2 DEBUG oslo_concurrency.lockutils [None req-b682b149-24c1-43b0-8f8b-98c5e9cb1218 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "e33492a6-9075-43ae-aa4d-5d5911d0f896" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:48 compute-0 nova_compute[257802]: 2025-10-02 12:27:48.032 2 DEBUG oslo_concurrency.lockutils [None req-b682b149-24c1-43b0-8f8b-98c5e9cb1218 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:48 compute-0 nova_compute[257802]: 2025-10-02 12:27:48.033 2 DEBUG oslo_concurrency.lockutils [None req-b682b149-24c1-43b0-8f8b-98c5e9cb1218 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:48 compute-0 nova_compute[257802]: 2025-10-02 12:27:48.033 2 DEBUG oslo_concurrency.lockutils [None req-b682b149-24c1-43b0-8f8b-98c5e9cb1218 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:48 compute-0 nova_compute[257802]: 2025-10-02 12:27:48.035 2 INFO nova.compute.manager [None req-b682b149-24c1-43b0-8f8b-98c5e9cb1218 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Terminating instance
Oct 02 12:27:48 compute-0 nova_compute[257802]: 2025-10-02 12:27:48.037 2 DEBUG nova.compute.manager [None req-b682b149-24c1-43b0-8f8b-98c5e9cb1218 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:27:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:27:48 compute-0 ceph-mon[73607]: pgmap v1976: 305 pgs: 305 active+clean; 365 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 572 KiB/s rd, 2.7 MiB/s wr, 51 op/s
Oct 02 12:27:48 compute-0 podman[325573]: 2025-10-02 12:27:48.586228532 +0000 UTC m=+1.935827304 container init 3cf0b4a77419d854c74b0e9d5cff5273158a06e125065cb219db53006a0bf9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 12:27:48 compute-0 podman[325573]: 2025-10-02 12:27:48.59344168 +0000 UTC m=+1.943040362 container start 3cf0b4a77419d854c74b0e9d5cff5273158a06e125065cb219db53006a0bf9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:27:48 compute-0 quizzical_greider[325589]: 167 167
Oct 02 12:27:48 compute-0 systemd[1]: libpod-3cf0b4a77419d854c74b0e9d5cff5273158a06e125065cb219db53006a0bf9a9.scope: Deactivated successfully.
Oct 02 12:27:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:48.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:49 compute-0 kernel: tap98f492d6-f2 (unregistering): left promiscuous mode
Oct 02 12:27:49 compute-0 NetworkManager[44987]: <info>  [1759408069.1113] device (tap98f492d6-f2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:27:49 compute-0 ovn_controller[148183]: 2025-10-02T12:27:49Z|00471|binding|INFO|Releasing lport 98f492d6-f27f-4d70-bf01-b54dd63403df from this chassis (sb_readonly=0)
Oct 02 12:27:49 compute-0 ovn_controller[148183]: 2025-10-02T12:27:49Z|00472|binding|INFO|Setting lport 98f492d6-f27f-4d70-bf01-b54dd63403df down in Southbound
Oct 02 12:27:49 compute-0 nova_compute[257802]: 2025-10-02 12:27:49.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:49 compute-0 ovn_controller[148183]: 2025-10-02T12:27:49Z|00473|binding|INFO|Removing iface tap98f492d6-f2 ovn-installed in OVS
Oct 02 12:27:49 compute-0 nova_compute[257802]: 2025-10-02 12:27:49.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:49 compute-0 nova_compute[257802]: 2025-10-02 12:27:49.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:49 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Oct 02 12:27:49 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d0000006a.scope: Consumed 13.147s CPU time.
Oct 02 12:27:49 compute-0 systemd-machined[211836]: Machine qemu-54-instance-0000006a terminated.
Oct 02 12:27:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:49.266 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:69:69 10.100.0.12'], port_security=['fa:16:3e:f5:69:69 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e33492a6-9075-43ae-aa4d-5d5911d0f896', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27a1729bf10548219b90df46839849f5', 'neutron:revision_number': '6', 'neutron:security_group_ids': '19f6d4f0-1655-4062-a124-10140844bfae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f7e0b23-d51b-4498-9dd8-e3096f69c99c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=98f492d6-f27f-4d70-bf01-b54dd63403df) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:27:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:49.268 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 98f492d6-f27f-4d70-bf01-b54dd63403df in datapath 247d774d-0cc8-4ef2-a9b8-c756adae0874 unbound from our chassis
Oct 02 12:27:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:49.269 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 247d774d-0cc8-4ef2-a9b8-c756adae0874, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:27:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:49.270 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2d3bd7b4-8f2b-4a0f-b7e0-ad661c20bfbf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:49.271 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 namespace which is not needed anymore
Oct 02 12:27:49 compute-0 nova_compute[257802]: 2025-10-02 12:27:49.279 2 INFO nova.virt.libvirt.driver [-] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Instance destroyed successfully.
Oct 02 12:27:49 compute-0 nova_compute[257802]: 2025-10-02 12:27:49.279 2 DEBUG nova.objects.instance [None req-b682b149-24c1-43b0-8f8b-98c5e9cb1218 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'resources' on Instance uuid e33492a6-9075-43ae-aa4d-5d5911d0f896 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:27:49 compute-0 podman[325573]: 2025-10-02 12:27:49.314355085 +0000 UTC m=+2.663953817 container attach 3cf0b4a77419d854c74b0e9d5cff5273158a06e125065cb219db53006a0bf9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_greider, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:27:49 compute-0 podman[325573]: 2025-10-02 12:27:49.315453992 +0000 UTC m=+2.665052684 container died 3cf0b4a77419d854c74b0e9d5cff5273158a06e125065cb219db53006a0bf9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 12:27:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:49.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:49 compute-0 nova_compute[257802]: 2025-10-02 12:27:49.459 2 DEBUG nova.virt.libvirt.vif [None req-b682b149-24c1-43b0-8f8b-98c5e9cb1218 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:26:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1278141758',display_name='tempest-ServerDiskConfigTestJSON-server-1278141758',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1278141758',id=106,image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:27:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='27a1729bf10548219b90df46839849f5',ramdisk_id='',reservation_id='r-nqm0w6lf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1123059068',owner_user_name='tempest-ServerDiskConfigTestJSON-1123059068-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:27:41Z,user_data=None,user_id='4a89b71e2513413e922ee6d5d06362b1',uuid=e33492a6-9075-43ae-aa4d-5d5911d0f896,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "98f492d6-f27f-4d70-bf01-b54dd63403df", "address": "fa:16:3e:f5:69:69", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f492d6-f2", "ovs_interfaceid": "98f492d6-f27f-4d70-bf01-b54dd63403df", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:27:49 compute-0 nova_compute[257802]: 2025-10-02 12:27:49.461 2 DEBUG nova.network.os_vif_util [None req-b682b149-24c1-43b0-8f8b-98c5e9cb1218 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converting VIF {"id": "98f492d6-f27f-4d70-bf01-b54dd63403df", "address": "fa:16:3e:f5:69:69", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f492d6-f2", "ovs_interfaceid": "98f492d6-f27f-4d70-bf01-b54dd63403df", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:27:49 compute-0 nova_compute[257802]: 2025-10-02 12:27:49.462 2 DEBUG nova.network.os_vif_util [None req-b682b149-24c1-43b0-8f8b-98c5e9cb1218 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f5:69:69,bridge_name='br-int',has_traffic_filtering=True,id=98f492d6-f27f-4d70-bf01-b54dd63403df,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98f492d6-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:27:49 compute-0 nova_compute[257802]: 2025-10-02 12:27:49.462 2 DEBUG os_vif [None req-b682b149-24c1-43b0-8f8b-98c5e9cb1218 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f5:69:69,bridge_name='br-int',has_traffic_filtering=True,id=98f492d6-f27f-4d70-bf01-b54dd63403df,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98f492d6-f2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:27:49 compute-0 nova_compute[257802]: 2025-10-02 12:27:49.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:49 compute-0 nova_compute[257802]: 2025-10-02 12:27:49.464 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98f492d6-f2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:49 compute-0 nova_compute[257802]: 2025-10-02 12:27:49.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:49 compute-0 nova_compute[257802]: 2025-10-02 12:27:49.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:49 compute-0 nova_compute[257802]: 2025-10-02 12:27:49.471 2 INFO os_vif [None req-b682b149-24c1-43b0-8f8b-98c5e9cb1218 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f5:69:69,bridge_name='br-int',has_traffic_filtering=True,id=98f492d6-f27f-4d70-bf01-b54dd63403df,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98f492d6-f2')
Oct 02 12:27:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 383 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 266 KiB/s rd, 4.1 MiB/s wr, 63 op/s
Oct 02 12:27:49 compute-0 ceph-mon[73607]: pgmap v1977: 305 pgs: 305 active+clean; 379 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 98 KiB/s rd, 3.7 MiB/s wr, 44 op/s
Oct 02 12:27:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1556605539' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-e181428d473178f481e06ad9b23c5edaff0ae7e5d8ef2aa4c7445a807bb43398-merged.mount: Deactivated successfully.
Oct 02 12:27:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:50.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:51 compute-0 nova_compute[257802]: 2025-10-02 12:27:51.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:27:51 compute-0 nova_compute[257802]: 2025-10-02 12:27:51.100 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:27:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:27:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:51.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:27:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 383 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 246 KiB/s rd, 3.6 MiB/s wr, 54 op/s
Oct 02 12:27:51 compute-0 ceph-mon[73607]: pgmap v1978: 305 pgs: 305 active+clean; 383 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 266 KiB/s rd, 4.1 MiB/s wr, 63 op/s
Oct 02 12:27:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2369965828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:52 compute-0 nova_compute[257802]: 2025-10-02 12:27:52.185 2 DEBUG nova.compute.manager [req-4d47f506-bae4-4ed2-ba93-476758356f67 req-325dd93e-ab45-45d7-9457-9bb1b1e56948 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Received event network-vif-unplugged-98f492d6-f27f-4d70-bf01-b54dd63403df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:27:52 compute-0 nova_compute[257802]: 2025-10-02 12:27:52.185 2 DEBUG oslo_concurrency.lockutils [req-4d47f506-bae4-4ed2-ba93-476758356f67 req-325dd93e-ab45-45d7-9457-9bb1b1e56948 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:52 compute-0 nova_compute[257802]: 2025-10-02 12:27:52.186 2 DEBUG oslo_concurrency.lockutils [req-4d47f506-bae4-4ed2-ba93-476758356f67 req-325dd93e-ab45-45d7-9457-9bb1b1e56948 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:52 compute-0 nova_compute[257802]: 2025-10-02 12:27:52.186 2 DEBUG oslo_concurrency.lockutils [req-4d47f506-bae4-4ed2-ba93-476758356f67 req-325dd93e-ab45-45d7-9457-9bb1b1e56948 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:52 compute-0 nova_compute[257802]: 2025-10-02 12:27:52.186 2 DEBUG nova.compute.manager [req-4d47f506-bae4-4ed2-ba93-476758356f67 req-325dd93e-ab45-45d7-9457-9bb1b1e56948 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] No waiting events found dispatching network-vif-unplugged-98f492d6-f27f-4d70-bf01-b54dd63403df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:27:52 compute-0 nova_compute[257802]: 2025-10-02 12:27:52.187 2 DEBUG nova.compute.manager [req-4d47f506-bae4-4ed2-ba93-476758356f67 req-325dd93e-ab45-45d7-9457-9bb1b1e56948 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Received event network-vif-unplugged-98f492d6-f27f-4d70-bf01-b54dd63403df for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:27:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:52.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:52 compute-0 nova_compute[257802]: 2025-10-02 12:27:52.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:53 compute-0 nova_compute[257802]: 2025-10-02 12:27:53.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:27:53 compute-0 nova_compute[257802]: 2025-10-02 12:27:53.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:27:53 compute-0 podman[325573]: 2025-10-02 12:27:53.176198802 +0000 UTC m=+6.525797514 container remove 3cf0b4a77419d854c74b0e9d5cff5273158a06e125065cb219db53006a0bf9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 12:27:53 compute-0 systemd[1]: libpod-conmon-3cf0b4a77419d854c74b0e9d5cff5273158a06e125065cb219db53006a0bf9a9.scope: Deactivated successfully.
Oct 02 12:27:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:53.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 388 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 287 KiB/s rd, 3.6 MiB/s wr, 67 op/s
Oct 02 12:27:53 compute-0 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[325149]: [NOTICE]   (325153) : haproxy version is 2.8.14-c23fe91
Oct 02 12:27:53 compute-0 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[325149]: [NOTICE]   (325153) : path to executable is /usr/sbin/haproxy
Oct 02 12:27:53 compute-0 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[325149]: [WARNING]  (325153) : Exiting Master process...
Oct 02 12:27:53 compute-0 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[325149]: [ALERT]    (325153) : Current worker (325155) exited with code 143 (Terminated)
Oct 02 12:27:53 compute-0 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[325149]: [WARNING]  (325153) : All workers exited. Exiting... (0)
Oct 02 12:27:53 compute-0 systemd[1]: libpod-1cfe2ce8bdb409fbb5ea4a5a4e49b262250b539e8580d1c2d73a70ee7d9e7953.scope: Deactivated successfully.
Oct 02 12:27:53 compute-0 podman[325666]: 2025-10-02 12:27:53.847534276 +0000 UTC m=+0.568271372 container died 1cfe2ce8bdb409fbb5ea4a5a4e49b262250b539e8580d1c2d73a70ee7d9e7953 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:27:54 compute-0 ceph-mon[73607]: pgmap v1979: 305 pgs: 305 active+clean; 383 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 246 KiB/s rd, 3.6 MiB/s wr, 54 op/s
Oct 02 12:27:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4062921269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2321297802' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:27:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:27:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:27:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:27:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:27:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0043113789433277495 of space, bias 1.0, pg target 1.293413682998325 quantized to 32 (current 32)
Oct 02 12:27:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:27:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.005286669167597246 of space, bias 1.0, pg target 1.5807140811115765 quantized to 32 (current 32)
Oct 02 12:27:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:27:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:27:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:27:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Oct 02 12:27:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:27:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:27:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:27:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:27:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:27:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027081297692164525 quantized to 32 (current 32)
Oct 02 12:27:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:27:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:27:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:27:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:27:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:27:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:27:54 compute-0 nova_compute[257802]: 2025-10-02 12:27:54.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:54 compute-0 nova_compute[257802]: 2025-10-02 12:27:54.570 2 DEBUG nova.compute.manager [req-5a41b2a0-4b6a-48fc-8f35-8903b4c0a461 req-3e47f25b-987d-4494-842e-5aee74ea96d5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Received event network-vif-plugged-98f492d6-f27f-4d70-bf01-b54dd63403df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:27:54 compute-0 nova_compute[257802]: 2025-10-02 12:27:54.570 2 DEBUG oslo_concurrency.lockutils [req-5a41b2a0-4b6a-48fc-8f35-8903b4c0a461 req-3e47f25b-987d-4494-842e-5aee74ea96d5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:54 compute-0 nova_compute[257802]: 2025-10-02 12:27:54.571 2 DEBUG oslo_concurrency.lockutils [req-5a41b2a0-4b6a-48fc-8f35-8903b4c0a461 req-3e47f25b-987d-4494-842e-5aee74ea96d5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:54 compute-0 nova_compute[257802]: 2025-10-02 12:27:54.571 2 DEBUG oslo_concurrency.lockutils [req-5a41b2a0-4b6a-48fc-8f35-8903b4c0a461 req-3e47f25b-987d-4494-842e-5aee74ea96d5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e33492a6-9075-43ae-aa4d-5d5911d0f896-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:54 compute-0 nova_compute[257802]: 2025-10-02 12:27:54.571 2 DEBUG nova.compute.manager [req-5a41b2a0-4b6a-48fc-8f35-8903b4c0a461 req-3e47f25b-987d-4494-842e-5aee74ea96d5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] No waiting events found dispatching network-vif-plugged-98f492d6-f27f-4d70-bf01-b54dd63403df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:27:54 compute-0 nova_compute[257802]: 2025-10-02 12:27:54.572 2 WARNING nova.compute.manager [req-5a41b2a0-4b6a-48fc-8f35-8903b4c0a461 req-3e47f25b-987d-4494-842e-5aee74ea96d5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Received unexpected event network-vif-plugged-98f492d6-f27f-4d70-bf01-b54dd63403df for instance with vm_state active and task_state deleting.
Oct 02 12:27:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:27:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:54.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:27:55 compute-0 nova_compute[257802]: 2025-10-02 12:27:55.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:27:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:55.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-225b0f70fec0badc436ea645d11e0d643daabe07938ef109c9b4c6dc07839c05-merged.mount: Deactivated successfully.
Oct 02 12:27:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1cfe2ce8bdb409fbb5ea4a5a4e49b262250b539e8580d1c2d73a70ee7d9e7953-userdata-shm.mount: Deactivated successfully.
Oct 02 12:27:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 388 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 253 KiB/s rd, 2.0 MiB/s wr, 47 op/s
Oct 02 12:27:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:55.673 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:56 compute-0 nova_compute[257802]: 2025-10-02 12:27:56.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:27:56 compute-0 podman[325683]: 2025-10-02 12:27:56.062970845 +0000 UTC m=+2.697006109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:27:56 compute-0 podman[325666]: 2025-10-02 12:27:56.290665473 +0000 UTC m=+3.011402559 container cleanup 1cfe2ce8bdb409fbb5ea4a5a4e49b262250b539e8580d1c2d73a70ee7d9e7953 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:27:56 compute-0 systemd[1]: libpod-conmon-1cfe2ce8bdb409fbb5ea4a5a4e49b262250b539e8580d1c2d73a70ee7d9e7953.scope: Deactivated successfully.
Oct 02 12:27:56 compute-0 ceph-mon[73607]: pgmap v1980: 305 pgs: 305 active+clean; 388 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 287 KiB/s rd, 3.6 MiB/s wr, 67 op/s
Oct 02 12:27:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2827393438' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:27:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2827393438' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:27:56 compute-0 podman[325683]: 2025-10-02 12:27:56.485149644 +0000 UTC m=+3.119184878 container create 1c1bcea233a8a71611b543dfff54422f075d912cb74dd96b9fa03f037e36417b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:27:56 compute-0 systemd[1]: Started libpod-conmon-1c1bcea233a8a71611b543dfff54422f075d912cb74dd96b9fa03f037e36417b.scope.
Oct 02 12:27:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:27:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:56.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:27:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:27:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b383c3db3167ea45f620f6bef711d1d1c425405e679548b9ceabd86b6abd13f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:27:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b383c3db3167ea45f620f6bef711d1d1c425405e679548b9ceabd86b6abd13f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:27:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b383c3db3167ea45f620f6bef711d1d1c425405e679548b9ceabd86b6abd13f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:27:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b383c3db3167ea45f620f6bef711d1d1c425405e679548b9ceabd86b6abd13f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:27:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b383c3db3167ea45f620f6bef711d1d1c425405e679548b9ceabd86b6abd13f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:27:57 compute-0 nova_compute[257802]: 2025-10-02 12:27:57.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:27:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:27:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:57.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:27:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 305 active+clean; 363 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 251 KiB/s rd, 1.4 MiB/s wr, 44 op/s
Oct 02 12:27:57 compute-0 nova_compute[257802]: 2025-10-02 12:27:57.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:58 compute-0 podman[325683]: 2025-10-02 12:27:58.138999214 +0000 UTC m=+4.773034478 container init 1c1bcea233a8a71611b543dfff54422f075d912cb74dd96b9fa03f037e36417b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_buck, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:27:58 compute-0 podman[325683]: 2025-10-02 12:27:58.153522062 +0000 UTC m=+4.787557266 container start 1c1bcea233a8a71611b543dfff54422f075d912cb74dd96b9fa03f037e36417b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_buck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 12:27:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:58 compute-0 podman[325683]: 2025-10-02 12:27:58.380186714 +0000 UTC m=+5.014221938 container attach 1c1bcea233a8a71611b543dfff54422f075d912cb74dd96b9fa03f037e36417b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_buck, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 12:27:58 compute-0 podman[325717]: 2025-10-02 12:27:58.502508462 +0000 UTC m=+2.173968219 container remove 1cfe2ce8bdb409fbb5ea4a5a4e49b262250b539e8580d1c2d73a70ee7d9e7953 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 12:27:58 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:58.513 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[adee8f76-bf9d-4060-8a2c-86dc707be079]: (4, ('Thu Oct  2 12:27:53 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 (1cfe2ce8bdb409fbb5ea4a5a4e49b262250b539e8580d1c2d73a70ee7d9e7953)\n1cfe2ce8bdb409fbb5ea4a5a4e49b262250b539e8580d1c2d73a70ee7d9e7953\nThu Oct  2 12:27:56 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 (1cfe2ce8bdb409fbb5ea4a5a4e49b262250b539e8580d1c2d73a70ee7d9e7953)\n1cfe2ce8bdb409fbb5ea4a5a4e49b262250b539e8580d1c2d73a70ee7d9e7953\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:58 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:58.516 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[eb604daf-e917-4012-9d2e-e7e9d9598547]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:58 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:58.517 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap247d774d-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:58 compute-0 nova_compute[257802]: 2025-10-02 12:27:58.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:58 compute-0 kernel: tap247d774d-00: left promiscuous mode
Oct 02 12:27:58 compute-0 nova_compute[257802]: 2025-10-02 12:27:58.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:58 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:58.524 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[dac674fd-5424-4d8f-87f3-ce64777c61a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:58 compute-0 nova_compute[257802]: 2025-10-02 12:27:58.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:58 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:58.563 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[79f39cd8-f47b-4608-9160-cd0c652b0c3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:58 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:58.564 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c117af40-4f06-4a28-8859-ab77140ec4ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:58 compute-0 ceph-mon[73607]: pgmap v1981: 305 pgs: 305 active+clean; 388 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 253 KiB/s rd, 2.0 MiB/s wr, 47 op/s
Oct 02 12:27:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3074040757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:58 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:58.589 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7fb3a64d-a31f-4c6c-8115-736ab915d122]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611554, 'reachable_time': 21978, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325740, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:58 compute-0 systemd[1]: run-netns-ovnmeta\x2d247d774d\x2d0cc8\x2d4ef2\x2da9b8\x2dc756adae0874.mount: Deactivated successfully.
Oct 02 12:27:58 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:58.595 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:27:58 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:27:58.596 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[ef554693-e2ee-4fcd-827f-bef99659ec9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:58.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:59 compute-0 magical_buck[325731]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:27:59 compute-0 magical_buck[325731]: --> relative data size: 1.0
Oct 02 12:27:59 compute-0 magical_buck[325731]: --> All data devices are unavailable
Oct 02 12:27:59 compute-0 systemd[1]: libpod-1c1bcea233a8a71611b543dfff54422f075d912cb74dd96b9fa03f037e36417b.scope: Deactivated successfully.
Oct 02 12:27:59 compute-0 podman[325683]: 2025-10-02 12:27:59.173500378 +0000 UTC m=+5.807535582 container died 1c1bcea233a8a71611b543dfff54422f075d912cb74dd96b9fa03f037e36417b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_buck, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 12:27:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:27:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:59.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:59 compute-0 nova_compute[257802]: 2025-10-02 12:27:59.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b383c3db3167ea45f620f6bef711d1d1c425405e679548b9ceabd86b6abd13f-merged.mount: Deactivated successfully.
Oct 02 12:27:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 348 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 251 KiB/s rd, 1.6 MiB/s wr, 61 op/s
Oct 02 12:27:59 compute-0 ceph-mon[73607]: pgmap v1982: 305 pgs: 305 active+clean; 363 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 251 KiB/s rd, 1.4 MiB/s wr, 44 op/s
Oct 02 12:27:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3970481960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/894747619' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:27:59 compute-0 podman[325683]: 2025-10-02 12:27:59.830277886 +0000 UTC m=+6.464313090 container remove 1c1bcea233a8a71611b543dfff54422f075d912cb74dd96b9fa03f037e36417b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:27:59 compute-0 systemd[1]: libpod-conmon-1c1bcea233a8a71611b543dfff54422f075d912cb74dd96b9fa03f037e36417b.scope: Deactivated successfully.
Oct 02 12:27:59 compute-0 sudo[325506]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:59 compute-0 sudo[325769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:27:59 compute-0 sudo[325769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:59 compute-0 sudo[325769]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:00 compute-0 sudo[325794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:28:00 compute-0 sudo[325794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:28:00 compute-0 sudo[325794]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:00 compute-0 sudo[325819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:28:00 compute-0 sudo[325819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:28:00 compute-0 sudo[325819]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:00 compute-0 sudo[325844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:28:00 compute-0 sudo[325844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:28:00 compute-0 nova_compute[257802]: 2025-10-02 12:28:00.342 2 INFO nova.virt.libvirt.driver [None req-b682b149-24c1-43b0-8f8b-98c5e9cb1218 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Deleting instance files /var/lib/nova/instances/e33492a6-9075-43ae-aa4d-5d5911d0f896_del
Oct 02 12:28:00 compute-0 nova_compute[257802]: 2025-10-02 12:28:00.342 2 INFO nova.virt.libvirt.driver [None req-b682b149-24c1-43b0-8f8b-98c5e9cb1218 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Deletion of /var/lib/nova/instances/e33492a6-9075-43ae-aa4d-5d5911d0f896_del complete
Oct 02 12:28:00 compute-0 nova_compute[257802]: 2025-10-02 12:28:00.480 2 INFO nova.compute.manager [None req-b682b149-24c1-43b0-8f8b-98c5e9cb1218 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Took 12.44 seconds to destroy the instance on the hypervisor.
Oct 02 12:28:00 compute-0 nova_compute[257802]: 2025-10-02 12:28:00.480 2 DEBUG oslo.service.loopingcall [None req-b682b149-24c1-43b0-8f8b-98c5e9cb1218 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:28:00 compute-0 nova_compute[257802]: 2025-10-02 12:28:00.481 2 DEBUG nova.compute.manager [-] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:28:00 compute-0 nova_compute[257802]: 2025-10-02 12:28:00.481 2 DEBUG nova.network.neutron [-] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:28:00 compute-0 podman[325908]: 2025-10-02 12:28:00.565667986 +0000 UTC m=+0.087871311 container create 6fa8e794f41475df34c8cd9596f26d50f9f1229d3723c83fd15e52fac88ddc8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 12:28:00 compute-0 podman[325908]: 2025-10-02 12:28:00.505331812 +0000 UTC m=+0.027535187 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:28:00 compute-0 systemd[1]: Started libpod-conmon-6fa8e794f41475df34c8cd9596f26d50f9f1229d3723c83fd15e52fac88ddc8d.scope.
Oct 02 12:28:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:28:00 compute-0 podman[325908]: 2025-10-02 12:28:00.720427821 +0000 UTC m=+0.242631176 container init 6fa8e794f41475df34c8cd9596f26d50f9f1229d3723c83fd15e52fac88ddc8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_sammet, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:28:00 compute-0 podman[325908]: 2025-10-02 12:28:00.727231778 +0000 UTC m=+0.249435103 container start 6fa8e794f41475df34c8cd9596f26d50f9f1229d3723c83fd15e52fac88ddc8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_sammet, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:28:00 compute-0 festive_sammet[325925]: 167 167
Oct 02 12:28:00 compute-0 systemd[1]: libpod-6fa8e794f41475df34c8cd9596f26d50f9f1229d3723c83fd15e52fac88ddc8d.scope: Deactivated successfully.
Oct 02 12:28:00 compute-0 podman[325908]: 2025-10-02 12:28:00.770665126 +0000 UTC m=+0.292868451 container attach 6fa8e794f41475df34c8cd9596f26d50f9f1229d3723c83fd15e52fac88ddc8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_sammet, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:28:00 compute-0 podman[325908]: 2025-10-02 12:28:00.771285501 +0000 UTC m=+0.293488826 container died 6fa8e794f41475df34c8cd9596f26d50f9f1229d3723c83fd15e52fac88ddc8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:28:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:00.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a6ea2d9a816cb0b3f16f7f2b6e0aba6537dbca9610f505c9ee589aaf6aa0de5-merged.mount: Deactivated successfully.
Oct 02 12:28:00 compute-0 podman[325908]: 2025-10-02 12:28:00.967979058 +0000 UTC m=+0.490182383 container remove 6fa8e794f41475df34c8cd9596f26d50f9f1229d3723c83fd15e52fac88ddc8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_sammet, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 12:28:00 compute-0 systemd[1]: libpod-conmon-6fa8e794f41475df34c8cd9596f26d50f9f1229d3723c83fd15e52fac88ddc8d.scope: Deactivated successfully.
Oct 02 12:28:01 compute-0 nova_compute[257802]: 2025-10-02 12:28:01.092 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:28:01 compute-0 podman[325951]: 2025-10-02 12:28:01.20687966 +0000 UTC m=+0.093033328 container create 02e8e099a5486dfb4b0d0f2e1f44c830ab39610540a943f4ee91a40650f83070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 12:28:01 compute-0 podman[325951]: 2025-10-02 12:28:01.139020182 +0000 UTC m=+0.025173880 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:28:01 compute-0 ceph-mon[73607]: pgmap v1983: 305 pgs: 305 active+clean; 348 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 251 KiB/s rd, 1.6 MiB/s wr, 61 op/s
Oct 02 12:28:01 compute-0 systemd[1]: Started libpod-conmon-02e8e099a5486dfb4b0d0f2e1f44c830ab39610540a943f4ee91a40650f83070.scope.
Oct 02 12:28:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bf7ad06572bd1d5d9dde0dd97be0d631cac86c63e361b66d3c6faf8cc138e56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bf7ad06572bd1d5d9dde0dd97be0d631cac86c63e361b66d3c6faf8cc138e56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bf7ad06572bd1d5d9dde0dd97be0d631cac86c63e361b66d3c6faf8cc138e56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bf7ad06572bd1d5d9dde0dd97be0d631cac86c63e361b66d3c6faf8cc138e56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:28:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:01.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:01 compute-0 podman[325951]: 2025-10-02 12:28:01.399277601 +0000 UTC m=+0.285431299 container init 02e8e099a5486dfb4b0d0f2e1f44c830ab39610540a943f4ee91a40650f83070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:28:01 compute-0 podman[325951]: 2025-10-02 12:28:01.406230862 +0000 UTC m=+0.292384530 container start 02e8e099a5486dfb4b0d0f2e1f44c830ab39610540a943f4ee91a40650f83070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:28:01 compute-0 podman[325951]: 2025-10-02 12:28:01.443121569 +0000 UTC m=+0.329275267 container attach 02e8e099a5486dfb4b0d0f2e1f44c830ab39610540a943f4ee91a40650f83070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 12:28:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 348 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 84 KiB/s rd, 1.3 MiB/s wr, 41 op/s
Oct 02 12:28:02 compute-0 nova_compute[257802]: 2025-10-02 12:28:02.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:28:02 compute-0 nova_compute[257802]: 2025-10-02 12:28:02.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:28:02 compute-0 nova_compute[257802]: 2025-10-02 12:28:02.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:28:02 compute-0 nova_compute[257802]: 2025-10-02 12:28:02.125 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 02 12:28:02 compute-0 nova_compute[257802]: 2025-10-02 12:28:02.126 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:28:02 compute-0 competent_khayyam[325967]: {
Oct 02 12:28:02 compute-0 competent_khayyam[325967]:     "1": [
Oct 02 12:28:02 compute-0 competent_khayyam[325967]:         {
Oct 02 12:28:02 compute-0 competent_khayyam[325967]:             "devices": [
Oct 02 12:28:02 compute-0 competent_khayyam[325967]:                 "/dev/loop3"
Oct 02 12:28:02 compute-0 competent_khayyam[325967]:             ],
Oct 02 12:28:02 compute-0 competent_khayyam[325967]:             "lv_name": "ceph_lv0",
Oct 02 12:28:02 compute-0 competent_khayyam[325967]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:28:02 compute-0 competent_khayyam[325967]:             "lv_size": "7511998464",
Oct 02 12:28:02 compute-0 competent_khayyam[325967]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:28:02 compute-0 competent_khayyam[325967]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:28:02 compute-0 competent_khayyam[325967]:             "name": "ceph_lv0",
Oct 02 12:28:02 compute-0 competent_khayyam[325967]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:28:02 compute-0 competent_khayyam[325967]:             "tags": {
Oct 02 12:28:02 compute-0 competent_khayyam[325967]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:28:02 compute-0 competent_khayyam[325967]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:28:02 compute-0 competent_khayyam[325967]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:28:02 compute-0 competent_khayyam[325967]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:28:02 compute-0 competent_khayyam[325967]:                 "ceph.cluster_name": "ceph",
Oct 02 12:28:02 compute-0 competent_khayyam[325967]:                 "ceph.crush_device_class": "",
Oct 02 12:28:02 compute-0 competent_khayyam[325967]:                 "ceph.encrypted": "0",
Oct 02 12:28:02 compute-0 competent_khayyam[325967]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:28:02 compute-0 competent_khayyam[325967]:                 "ceph.osd_id": "1",
Oct 02 12:28:02 compute-0 competent_khayyam[325967]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:28:02 compute-0 competent_khayyam[325967]:                 "ceph.type": "block",
Oct 02 12:28:02 compute-0 competent_khayyam[325967]:                 "ceph.vdo": "0"
Oct 02 12:28:02 compute-0 competent_khayyam[325967]:             },
Oct 02 12:28:02 compute-0 competent_khayyam[325967]:             "type": "block",
Oct 02 12:28:02 compute-0 competent_khayyam[325967]:             "vg_name": "ceph_vg0"
Oct 02 12:28:02 compute-0 competent_khayyam[325967]:         }
Oct 02 12:28:02 compute-0 competent_khayyam[325967]:     ]
Oct 02 12:28:02 compute-0 competent_khayyam[325967]: }
Oct 02 12:28:02 compute-0 systemd[1]: libpod-02e8e099a5486dfb4b0d0f2e1f44c830ab39610540a943f4ee91a40650f83070.scope: Deactivated successfully.
Oct 02 12:28:02 compute-0 podman[325951]: 2025-10-02 12:28:02.256088846 +0000 UTC m=+1.142242514 container died 02e8e099a5486dfb4b0d0f2e1f44c830ab39610540a943f4ee91a40650f83070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 12:28:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bf7ad06572bd1d5d9dde0dd97be0d631cac86c63e361b66d3c6faf8cc138e56-merged.mount: Deactivated successfully.
Oct 02 12:28:02 compute-0 nova_compute[257802]: 2025-10-02 12:28:02.388 2 DEBUG nova.network.neutron [-] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:28:02 compute-0 ceph-mon[73607]: pgmap v1984: 305 pgs: 305 active+clean; 348 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 84 KiB/s rd, 1.3 MiB/s wr, 41 op/s
Oct 02 12:28:02 compute-0 podman[325951]: 2025-10-02 12:28:02.481903688 +0000 UTC m=+1.368057366 container remove 02e8e099a5486dfb4b0d0f2e1f44c830ab39610540a943f4ee91a40650f83070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct 02 12:28:02 compute-0 systemd[1]: libpod-conmon-02e8e099a5486dfb4b0d0f2e1f44c830ab39610540a943f4ee91a40650f83070.scope: Deactivated successfully.
Oct 02 12:28:02 compute-0 sudo[325844]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:02 compute-0 sudo[325990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:28:02 compute-0 sudo[325990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:28:02 compute-0 sudo[325990]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:02 compute-0 sudo[326016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:28:02 compute-0 sudo[326016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:28:02 compute-0 sudo[326016]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:02 compute-0 nova_compute[257802]: 2025-10-02 12:28:02.647 2 INFO nova.compute.manager [-] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Took 2.17 seconds to deallocate network for instance.
Oct 02 12:28:02 compute-0 sudo[326041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:28:02 compute-0 sudo[326041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:28:02 compute-0 sudo[326041]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:02 compute-0 nova_compute[257802]: 2025-10-02 12:28:02.705 2 DEBUG oslo_concurrency.lockutils [None req-b682b149-24c1-43b0-8f8b-98c5e9cb1218 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:02 compute-0 nova_compute[257802]: 2025-10-02 12:28:02.705 2 DEBUG oslo_concurrency.lockutils [None req-b682b149-24c1-43b0-8f8b-98c5e9cb1218 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:02 compute-0 sudo[326066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:28:02 compute-0 sudo[326066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:28:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:02.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:02 compute-0 nova_compute[257802]: 2025-10-02 12:28:02.824 2 DEBUG oslo_concurrency.processutils [None req-b682b149-24c1-43b0-8f8b-98c5e9cb1218 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:02 compute-0 nova_compute[257802]: 2025-10-02 12:28:02.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:03 compute-0 podman[326150]: 2025-10-02 12:28:03.115027384 +0000 UTC m=+0.085129545 container create f0dcb8062c4b92e7ba78211b37b2fc60c50e4bbbfada119d91e06245a86cba41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_blackburn, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 12:28:03 compute-0 nova_compute[257802]: 2025-10-02 12:28:03.131 2 DEBUG nova.compute.manager [req-d3cde13e-e275-47ba-845c-10e518e818c8 req-aa732012-e0aa-4226-ae03-7d964485b405 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Received event network-vif-deleted-98f492d6-f27f-4d70-bf01-b54dd63403df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:28:03 compute-0 podman[326150]: 2025-10-02 12:28:03.055145181 +0000 UTC m=+0.025247372 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:28:03 compute-0 systemd[1]: Started libpod-conmon-f0dcb8062c4b92e7ba78211b37b2fc60c50e4bbbfada119d91e06245a86cba41.scope.
Oct 02 12:28:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:28:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:28:03 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/751243029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:03 compute-0 podman[326150]: 2025-10-02 12:28:03.234346467 +0000 UTC m=+0.204448638 container init f0dcb8062c4b92e7ba78211b37b2fc60c50e4bbbfada119d91e06245a86cba41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_blackburn, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:28:03 compute-0 nova_compute[257802]: 2025-10-02 12:28:03.238 2 DEBUG oslo_concurrency.processutils [None req-b682b149-24c1-43b0-8f8b-98c5e9cb1218 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:03 compute-0 podman[326150]: 2025-10-02 12:28:03.244424626 +0000 UTC m=+0.214526787 container start f0dcb8062c4b92e7ba78211b37b2fc60c50e4bbbfada119d91e06245a86cba41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 12:28:03 compute-0 nova_compute[257802]: 2025-10-02 12:28:03.246 2 DEBUG nova.compute.provider_tree [None req-b682b149-24c1-43b0-8f8b-98c5e9cb1218 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:28:03 compute-0 podman[326150]: 2025-10-02 12:28:03.248354151 +0000 UTC m=+0.218456332 container attach f0dcb8062c4b92e7ba78211b37b2fc60c50e4bbbfada119d91e06245a86cba41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_blackburn, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:28:03 compute-0 adoring_blackburn[326163]: 167 167
Oct 02 12:28:03 compute-0 systemd[1]: libpod-f0dcb8062c4b92e7ba78211b37b2fc60c50e4bbbfada119d91e06245a86cba41.scope: Deactivated successfully.
Oct 02 12:28:03 compute-0 podman[326150]: 2025-10-02 12:28:03.251574961 +0000 UTC m=+0.221677132 container died f0dcb8062c4b92e7ba78211b37b2fc60c50e4bbbfada119d91e06245a86cba41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_blackburn, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:28:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-81dee39467db5f4d6e3a7224634e8b7205e69d5f0eb63dcb02ed8ec3822be345-merged.mount: Deactivated successfully.
Oct 02 12:28:03 compute-0 nova_compute[257802]: 2025-10-02 12:28:03.286 2 DEBUG nova.scheduler.client.report [None req-b682b149-24c1-43b0-8f8b-98c5e9cb1218 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:28:03 compute-0 podman[326150]: 2025-10-02 12:28:03.29058426 +0000 UTC m=+0.260686421 container remove f0dcb8062c4b92e7ba78211b37b2fc60c50e4bbbfada119d91e06245a86cba41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:28:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:03 compute-0 systemd[1]: libpod-conmon-f0dcb8062c4b92e7ba78211b37b2fc60c50e4bbbfada119d91e06245a86cba41.scope: Deactivated successfully.
Oct 02 12:28:03 compute-0 nova_compute[257802]: 2025-10-02 12:28:03.356 2 DEBUG oslo_concurrency.lockutils [None req-b682b149-24c1-43b0-8f8b-98c5e9cb1218 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:03.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:03 compute-0 nova_compute[257802]: 2025-10-02 12:28:03.405 2 INFO nova.scheduler.client.report [None req-b682b149-24c1-43b0-8f8b-98c5e9cb1218 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Deleted allocations for instance e33492a6-9075-43ae-aa4d-5d5911d0f896
Oct 02 12:28:03 compute-0 podman[326188]: 2025-10-02 12:28:03.462211889 +0000 UTC m=+0.043739216 container create 894c13d5f7fd2cf620291e265bf15ecd413ac5a59e3907059dcaf1dbf170ab53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:28:03 compute-0 systemd[1]: Started libpod-conmon-894c13d5f7fd2cf620291e265bf15ecd413ac5a59e3907059dcaf1dbf170ab53.scope.
Oct 02 12:28:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2427195761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/751243029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:28:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed3025a7d7d6e8d928f36e027686d6b46501f824949c0f6ce3d784d4161da52b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:28:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed3025a7d7d6e8d928f36e027686d6b46501f824949c0f6ce3d784d4161da52b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:28:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed3025a7d7d6e8d928f36e027686d6b46501f824949c0f6ce3d784d4161da52b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:28:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed3025a7d7d6e8d928f36e027686d6b46501f824949c0f6ce3d784d4161da52b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:28:03 compute-0 podman[326188]: 2025-10-02 12:28:03.442980146 +0000 UTC m=+0.024507503 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:28:03 compute-0 podman[326188]: 2025-10-02 12:28:03.547774333 +0000 UTC m=+0.129301670 container init 894c13d5f7fd2cf620291e265bf15ecd413ac5a59e3907059dcaf1dbf170ab53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 12:28:03 compute-0 podman[326188]: 2025-10-02 12:28:03.556534228 +0000 UTC m=+0.138061555 container start 894c13d5f7fd2cf620291e265bf15ecd413ac5a59e3907059dcaf1dbf170ab53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hugle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:28:03 compute-0 podman[326188]: 2025-10-02 12:28:03.560003034 +0000 UTC m=+0.141530381 container attach 894c13d5f7fd2cf620291e265bf15ecd413ac5a59e3907059dcaf1dbf170ab53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 12:28:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 398 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 154 KiB/s rd, 3.2 MiB/s wr, 93 op/s
Oct 02 12:28:03 compute-0 nova_compute[257802]: 2025-10-02 12:28:03.605 2 DEBUG oslo_concurrency.lockutils [None req-b682b149-24c1-43b0-8f8b-98c5e9cb1218 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "e33492a6-9075-43ae-aa4d-5d5911d0f896" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 15.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:03 compute-0 sudo[326209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:28:03 compute-0 sudo[326209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:28:03 compute-0 sudo[326209]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:03 compute-0 sudo[326234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:28:03 compute-0 sudo[326234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:28:03 compute-0 sudo[326234]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:04 compute-0 nova_compute[257802]: 2025-10-02 12:28:04.277 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408069.2770379, e33492a6-9075-43ae-aa4d-5d5911d0f896 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:28:04 compute-0 nova_compute[257802]: 2025-10-02 12:28:04.277 2 INFO nova.compute.manager [-] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] VM Stopped (Lifecycle Event)
Oct 02 12:28:04 compute-0 nova_compute[257802]: 2025-10-02 12:28:04.313 2 DEBUG nova.compute.manager [None req-25eb9074-15c3-4e7e-9a1d-cdb03d87bb87 - - - - - -] [instance: e33492a6-9075-43ae-aa4d-5d5911d0f896] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:28:04 compute-0 great_hugle[326204]: {
Oct 02 12:28:04 compute-0 great_hugle[326204]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:28:04 compute-0 great_hugle[326204]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:28:04 compute-0 great_hugle[326204]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:28:04 compute-0 great_hugle[326204]:         "osd_id": 1,
Oct 02 12:28:04 compute-0 great_hugle[326204]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:28:04 compute-0 great_hugle[326204]:         "type": "bluestore"
Oct 02 12:28:04 compute-0 great_hugle[326204]:     }
Oct 02 12:28:04 compute-0 great_hugle[326204]: }
Oct 02 12:28:04 compute-0 systemd[1]: libpod-894c13d5f7fd2cf620291e265bf15ecd413ac5a59e3907059dcaf1dbf170ab53.scope: Deactivated successfully.
Oct 02 12:28:04 compute-0 podman[326275]: 2025-10-02 12:28:04.420628183 +0000 UTC m=+0.023380276 container died 894c13d5f7fd2cf620291e265bf15ecd413ac5a59e3907059dcaf1dbf170ab53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hugle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:28:04 compute-0 nova_compute[257802]: 2025-10-02 12:28:04.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed3025a7d7d6e8d928f36e027686d6b46501f824949c0f6ce3d784d4161da52b-merged.mount: Deactivated successfully.
Oct 02 12:28:04 compute-0 ceph-mon[73607]: pgmap v1985: 305 pgs: 305 active+clean; 398 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 154 KiB/s rd, 3.2 MiB/s wr, 93 op/s
Oct 02 12:28:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2548813839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:04.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:04 compute-0 podman[326275]: 2025-10-02 12:28:04.855891484 +0000 UTC m=+0.458643577 container remove 894c13d5f7fd2cf620291e265bf15ecd413ac5a59e3907059dcaf1dbf170ab53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hugle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 12:28:04 compute-0 systemd[1]: libpod-conmon-894c13d5f7fd2cf620291e265bf15ecd413ac5a59e3907059dcaf1dbf170ab53.scope: Deactivated successfully.
Oct 02 12:28:04 compute-0 sudo[326066]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:28:05 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:28:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:28:05 compute-0 nova_compute[257802]: 2025-10-02 12:28:05.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:28:05 compute-0 nova_compute[257802]: 2025-10-02 12:28:05.130 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:05 compute-0 nova_compute[257802]: 2025-10-02 12:28:05.130 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:05 compute-0 nova_compute[257802]: 2025-10-02 12:28:05.130 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:05 compute-0 nova_compute[257802]: 2025-10-02 12:28:05.131 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:28:05 compute-0 nova_compute[257802]: 2025-10-02 12:28:05.131 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:05 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:28:05 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 2f5daee4-8302-4c4d-b120-daeb91e68175 does not exist
Oct 02 12:28:05 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 3df2fe32-6f62-44a1-9777-955e4625a541 does not exist
Oct 02 12:28:05 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 5d5ac906-070e-40ee-ae0b-1ee18f3ccdba does not exist
Oct 02 12:28:05 compute-0 sudo[326293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:28:05 compute-0 sudo[326293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:28:05 compute-0 sudo[326293]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:05 compute-0 sudo[326336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:28:05 compute-0 sudo[326336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:28:05 compute-0 sudo[326336]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:05.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 420 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 834 KiB/s rd, 3.6 MiB/s wr, 118 op/s
Oct 02 12:28:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:28:05 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/975972253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:05 compute-0 nova_compute[257802]: 2025-10-02 12:28:05.582 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:05 compute-0 nova_compute[257802]: 2025-10-02 12:28:05.740 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:28:05 compute-0 nova_compute[257802]: 2025-10-02 12:28:05.741 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4385MB free_disk=20.905776977539062GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:28:05 compute-0 nova_compute[257802]: 2025-10-02 12:28:05.741 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:05 compute-0 nova_compute[257802]: 2025-10-02 12:28:05.742 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:05 compute-0 nova_compute[257802]: 2025-10-02 12:28:05.837 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:28:05 compute-0 nova_compute[257802]: 2025-10-02 12:28:05.837 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:28:05 compute-0 nova_compute[257802]: 2025-10-02 12:28:05.862 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:28:06 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1629700483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:06 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:28:06 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:28:06 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/975972253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:06 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/879769157' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:06 compute-0 nova_compute[257802]: 2025-10-02 12:28:06.282 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:06 compute-0 nova_compute[257802]: 2025-10-02 12:28:06.287 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:28:06 compute-0 nova_compute[257802]: 2025-10-02 12:28:06.354 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:28:06 compute-0 nova_compute[257802]: 2025-10-02 12:28:06.393 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:28:06 compute-0 nova_compute[257802]: 2025-10-02 12:28:06.393 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:28:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:06.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:28:07 compute-0 ceph-mon[73607]: pgmap v1986: 305 pgs: 305 active+clean; 420 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 834 KiB/s rd, 3.6 MiB/s wr, 118 op/s
Oct 02 12:28:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4023743274' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1629700483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:07.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 420 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 126 op/s
Oct 02 12:28:07 compute-0 nova_compute[257802]: 2025-10-02 12:28:07.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:08.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2071055452' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/647394247' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:09.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:09 compute-0 nova_compute[257802]: 2025-10-02 12:28:09.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 420 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 157 op/s
Oct 02 12:28:09 compute-0 podman[326389]: 2025-10-02 12:28:09.917056757 +0000 UTC m=+0.052919112 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:28:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Oct 02 12:28:09 compute-0 podman[326391]: 2025-10-02 12:28:09.924895051 +0000 UTC m=+0.057447034 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:28:09 compute-0 podman[326390]: 2025-10-02 12:28:09.952645423 +0000 UTC m=+0.088507488 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:28:10 compute-0 ceph-mon[73607]: pgmap v1987: 305 pgs: 305 active+clean; 420 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 126 op/s
Oct 02 12:28:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Oct 02 12:28:10 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Oct 02 12:28:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:10.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:11 compute-0 ceph-mon[73607]: pgmap v1988: 305 pgs: 305 active+clean; 420 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 157 op/s
Oct 02 12:28:11 compute-0 ceph-mon[73607]: osdmap e303: 3 total, 3 up, 3 in
Oct 02 12:28:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:28:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:11.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:28:11 compute-0 nova_compute[257802]: 2025-10-02 12:28:11.393 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:28:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 305 active+clean; 420 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.9 MiB/s wr, 158 op/s
Oct 02 12:28:12 compute-0 ceph-mon[73607]: pgmap v1990: 305 pgs: 305 active+clean; 420 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.9 MiB/s wr, 158 op/s
Oct 02 12:28:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:28:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:28:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:28:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:28:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:28:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:28:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:12.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:12 compute-0 nova_compute[257802]: 2025-10-02 12:28:12.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:13.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 305 active+clean; 420 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 550 KiB/s wr, 175 op/s
Oct 02 12:28:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2221379576' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2193085479' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:14 compute-0 nova_compute[257802]: 2025-10-02 12:28:14.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:14.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:14 compute-0 podman[326452]: 2025-10-02 12:28:14.935731037 +0000 UTC m=+0.078725088 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 12:28:14 compute-0 ceph-mon[73607]: pgmap v1991: 305 pgs: 305 active+clean; 420 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 550 KiB/s wr, 175 op/s
Oct 02 12:28:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:28:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:15.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:28:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 421 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 341 KiB/s wr, 168 op/s
Oct 02 12:28:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:28:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:16.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:28:17 compute-0 ceph-mon[73607]: pgmap v1992: 305 pgs: 305 active+clean; 421 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 341 KiB/s wr, 168 op/s
Oct 02 12:28:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:17.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 305 active+clean; 440 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 1.7 MiB/s wr, 239 op/s
Oct 02 12:28:17 compute-0 nova_compute[257802]: 2025-10-02 12:28:17.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:18 compute-0 ceph-mon[73607]: pgmap v1993: 305 pgs: 305 active+clean; 440 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 1.7 MiB/s wr, 239 op/s
Oct 02 12:28:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:18.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:19.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:19 compute-0 nova_compute[257802]: 2025-10-02 12:28:19.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 453 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 2.6 MiB/s wr, 344 op/s
Oct 02 12:28:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:20.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:21 compute-0 ceph-mon[73607]: pgmap v1994: 305 pgs: 305 active+clean; 453 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 2.6 MiB/s wr, 344 op/s
Oct 02 12:28:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:21.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 305 active+clean; 453 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 2.3 MiB/s wr, 310 op/s
Oct 02 12:28:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:28:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:22.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:28:22 compute-0 nova_compute[257802]: 2025-10-02 12:28:22.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:22 compute-0 ceph-mon[73607]: pgmap v1995: 305 pgs: 305 active+clean; 453 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 2.3 MiB/s wr, 310 op/s
Oct 02 12:28:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:23.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 305 active+clean; 469 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 3.5 MiB/s wr, 308 op/s
Oct 02 12:28:23 compute-0 sudo[326484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:28:23 compute-0 sudo[326484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:28:23 compute-0 sudo[326484]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:23 compute-0 sudo[326509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:28:23 compute-0 sudo[326509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:28:23 compute-0 sudo[326509]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:24 compute-0 nova_compute[257802]: 2025-10-02 12:28:24.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:28:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:24.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:28:25 compute-0 nova_compute[257802]: 2025-10-02 12:28:25.160 2 DEBUG oslo_concurrency.lockutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "24caf505-35fd-40c1-9bcc-1f83580b142b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:25 compute-0 nova_compute[257802]: 2025-10-02 12:28:25.160 2 DEBUG oslo_concurrency.lockutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "24caf505-35fd-40c1-9bcc-1f83580b142b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Oct 02 12:28:25 compute-0 nova_compute[257802]: 2025-10-02 12:28:25.183 2 DEBUG nova.compute.manager [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:28:25 compute-0 ceph-mon[73607]: pgmap v1996: 305 pgs: 305 active+clean; 469 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 3.5 MiB/s wr, 308 op/s
Oct 02 12:28:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Oct 02 12:28:25 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Oct 02 12:28:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:25.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:25 compute-0 nova_compute[257802]: 2025-10-02 12:28:25.545 2 DEBUG oslo_concurrency.lockutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:25 compute-0 nova_compute[257802]: 2025-10-02 12:28:25.545 2 DEBUG oslo_concurrency.lockutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:25 compute-0 nova_compute[257802]: 2025-10-02 12:28:25.554 2 DEBUG nova.virt.hardware [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:28:25 compute-0 nova_compute[257802]: 2025-10-02 12:28:25.555 2 INFO nova.compute.claims [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:28:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1998: 305 pgs: 305 active+clean; 470 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 4.2 MiB/s wr, 260 op/s
Oct 02 12:28:25 compute-0 nova_compute[257802]: 2025-10-02 12:28:25.792 2 DEBUG oslo_concurrency.processutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:28:26 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/457865936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:26 compute-0 nova_compute[257802]: 2025-10-02 12:28:26.199 2 DEBUG oslo_concurrency.processutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:26 compute-0 nova_compute[257802]: 2025-10-02 12:28:26.205 2 DEBUG nova.compute.provider_tree [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:28:26 compute-0 nova_compute[257802]: 2025-10-02 12:28:26.339 2 DEBUG nova.scheduler.client.report [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:28:26 compute-0 ceph-mon[73607]: osdmap e304: 3 total, 3 up, 3 in
Oct 02 12:28:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2618760061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/457865936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:26 compute-0 nova_compute[257802]: 2025-10-02 12:28:26.753 2 DEBUG oslo_concurrency.lockutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.208s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:26 compute-0 nova_compute[257802]: 2025-10-02 12:28:26.754 2 DEBUG nova.compute.manager [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:28:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:26.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:26.947 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:26.947 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:26.948 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:27 compute-0 nova_compute[257802]: 2025-10-02 12:28:27.256 2 DEBUG nova.compute.manager [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:28:27 compute-0 nova_compute[257802]: 2025-10-02 12:28:27.257 2 DEBUG nova.network.neutron [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:28:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:28:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:27.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:28:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 305 active+clean; 483 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 216 op/s
Oct 02 12:28:27 compute-0 nova_compute[257802]: 2025-10-02 12:28:27.615 2 INFO nova.virt.libvirt.driver [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:28:27 compute-0 nova_compute[257802]: 2025-10-02 12:28:27.708 2 DEBUG nova.policy [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '22d56fcd2a4b4851bfd126ae4548ee9b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5533aaac08cd4856af72ef4992bb5e76', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:28:27 compute-0 nova_compute[257802]: 2025-10-02 12:28:27.802 2 DEBUG nova.compute.manager [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:28:27 compute-0 nova_compute[257802]: 2025-10-02 12:28:27.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:28 compute-0 ceph-mon[73607]: pgmap v1998: 305 pgs: 305 active+clean; 470 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 4.2 MiB/s wr, 260 op/s
Oct 02 12:28:28 compute-0 nova_compute[257802]: 2025-10-02 12:28:28.340 2 DEBUG nova.compute.manager [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:28:28 compute-0 nova_compute[257802]: 2025-10-02 12:28:28.342 2 DEBUG nova.virt.libvirt.driver [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:28:28 compute-0 nova_compute[257802]: 2025-10-02 12:28:28.343 2 INFO nova.virt.libvirt.driver [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Creating image(s)
Oct 02 12:28:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:28 compute-0 nova_compute[257802]: 2025-10-02 12:28:28.402 2 DEBUG nova.storage.rbd_utils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] rbd image 24caf505-35fd-40c1-9bcc-1f83580b142b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:28 compute-0 nova_compute[257802]: 2025-10-02 12:28:28.435 2 DEBUG nova.storage.rbd_utils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] rbd image 24caf505-35fd-40c1-9bcc-1f83580b142b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:28 compute-0 nova_compute[257802]: 2025-10-02 12:28:28.469 2 DEBUG nova.storage.rbd_utils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] rbd image 24caf505-35fd-40c1-9bcc-1f83580b142b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:28 compute-0 nova_compute[257802]: 2025-10-02 12:28:28.473 2 DEBUG oslo_concurrency.processutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:28 compute-0 nova_compute[257802]: 2025-10-02 12:28:28.557 2 DEBUG oslo_concurrency.processutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:28 compute-0 nova_compute[257802]: 2025-10-02 12:28:28.558 2 DEBUG oslo_concurrency.lockutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:28 compute-0 nova_compute[257802]: 2025-10-02 12:28:28.558 2 DEBUG oslo_concurrency.lockutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:28 compute-0 nova_compute[257802]: 2025-10-02 12:28:28.559 2 DEBUG oslo_concurrency.lockutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:28 compute-0 nova_compute[257802]: 2025-10-02 12:28:28.627 2 DEBUG nova.storage.rbd_utils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] rbd image 24caf505-35fd-40c1-9bcc-1f83580b142b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:28 compute-0 nova_compute[257802]: 2025-10-02 12:28:28.635 2 DEBUG oslo_concurrency.processutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 24caf505-35fd-40c1-9bcc-1f83580b142b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:28:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:28.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:28:29 compute-0 ceph-mon[73607]: pgmap v1999: 305 pgs: 305 active+clean; 483 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 216 op/s
Oct 02 12:28:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:29.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:29 compute-0 nova_compute[257802]: 2025-10-02 12:28:29.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 305 active+clean; 504 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 941 KiB/s rd, 4.9 MiB/s wr, 144 op/s
Oct 02 12:28:29 compute-0 nova_compute[257802]: 2025-10-02 12:28:29.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:29.697 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:28:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:29.698 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:28:29 compute-0 nova_compute[257802]: 2025-10-02 12:28:29.972 2 DEBUG nova.network.neutron [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Successfully created port: f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:28:30 compute-0 ceph-mon[73607]: pgmap v2000: 305 pgs: 305 active+clean; 504 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 941 KiB/s rd, 4.9 MiB/s wr, 144 op/s
Oct 02 12:28:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:30.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:30 compute-0 nova_compute[257802]: 2025-10-02 12:28:30.960 2 DEBUG oslo_concurrency.processutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 24caf505-35fd-40c1-9bcc-1f83580b142b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.325s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:31 compute-0 nova_compute[257802]: 2025-10-02 12:28:31.042 2 DEBUG nova.storage.rbd_utils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] resizing rbd image 24caf505-35fd-40c1-9bcc-1f83580b142b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:28:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:28:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:31.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:28:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 305 active+clean; 504 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 941 KiB/s rd, 4.9 MiB/s wr, 144 op/s
Oct 02 12:28:31 compute-0 nova_compute[257802]: 2025-10-02 12:28:31.628 2 DEBUG nova.network.neutron [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Successfully updated port: f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:28:31 compute-0 nova_compute[257802]: 2025-10-02 12:28:31.682 2 DEBUG oslo_concurrency.lockutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "refresh_cache-24caf505-35fd-40c1-9bcc-1f83580b142b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:28:31 compute-0 nova_compute[257802]: 2025-10-02 12:28:31.682 2 DEBUG oslo_concurrency.lockutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquired lock "refresh_cache-24caf505-35fd-40c1-9bcc-1f83580b142b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:28:31 compute-0 nova_compute[257802]: 2025-10-02 12:28:31.682 2 DEBUG nova.network.neutron [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:28:31 compute-0 nova_compute[257802]: 2025-10-02 12:28:31.799 2 DEBUG nova.compute.manager [req-c94ac67a-7c38-4f51-9c9b-50af31f31fa5 req-172224b2-6b9e-409e-b7a7-d1f47acf341b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Received event network-changed-f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:28:31 compute-0 nova_compute[257802]: 2025-10-02 12:28:31.799 2 DEBUG nova.compute.manager [req-c94ac67a-7c38-4f51-9c9b-50af31f31fa5 req-172224b2-6b9e-409e-b7a7-d1f47acf341b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Refreshing instance network info cache due to event network-changed-f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:28:31 compute-0 nova_compute[257802]: 2025-10-02 12:28:31.800 2 DEBUG oslo_concurrency.lockutils [req-c94ac67a-7c38-4f51-9c9b-50af31f31fa5 req-172224b2-6b9e-409e-b7a7-d1f47acf341b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-24caf505-35fd-40c1-9bcc-1f83580b142b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:28:31 compute-0 nova_compute[257802]: 2025-10-02 12:28:31.929 2 DEBUG nova.network.neutron [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:28:32 compute-0 ovn_controller[148183]: 2025-10-02T12:28:32Z|00474|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Oct 02 12:28:32 compute-0 nova_compute[257802]: 2025-10-02 12:28:32.248 2 DEBUG nova.objects.instance [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lazy-loading 'migration_context' on Instance uuid 24caf505-35fd-40c1-9bcc-1f83580b142b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:28:32 compute-0 nova_compute[257802]: 2025-10-02 12:28:32.335 2 DEBUG nova.virt.libvirt.driver [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:28:32 compute-0 nova_compute[257802]: 2025-10-02 12:28:32.336 2 DEBUG nova.virt.libvirt.driver [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Ensure instance console log exists: /var/lib/nova/instances/24caf505-35fd-40c1-9bcc-1f83580b142b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:28:32 compute-0 nova_compute[257802]: 2025-10-02 12:28:32.337 2 DEBUG oslo_concurrency.lockutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:32 compute-0 nova_compute[257802]: 2025-10-02 12:28:32.338 2 DEBUG oslo_concurrency.lockutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:32 compute-0 nova_compute[257802]: 2025-10-02 12:28:32.339 2 DEBUG oslo_concurrency.lockutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:28:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:32.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:28:32 compute-0 nova_compute[257802]: 2025-10-02 12:28:32.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:32 compute-0 ceph-mon[73607]: pgmap v2001: 305 pgs: 305 active+clean; 504 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 941 KiB/s rd, 4.9 MiB/s wr, 144 op/s
Oct 02 12:28:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Oct 02 12:28:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:33.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Oct 02 12:28:33 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Oct 02 12:28:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 305 active+clean; 552 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 6.3 MiB/s wr, 229 op/s
Oct 02 12:28:34 compute-0 nova_compute[257802]: 2025-10-02 12:28:34.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:28:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:34.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:28:34 compute-0 ceph-mon[73607]: osdmap e305: 3 total, 3 up, 3 in
Oct 02 12:28:34 compute-0 ceph-mon[73607]: pgmap v2003: 305 pgs: 305 active+clean; 552 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 6.3 MiB/s wr, 229 op/s
Oct 02 12:28:35 compute-0 nova_compute[257802]: 2025-10-02 12:28:35.219 2 DEBUG nova.network.neutron [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Updating instance_info_cache with network_info: [{"id": "f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90", "address": "fa:16:3e:76:07:a0", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f0fd9c-fb", "ovs_interfaceid": "f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:28:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:28:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:35.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:28:35 compute-0 nova_compute[257802]: 2025-10-02 12:28:35.509 2 DEBUG oslo_concurrency.lockutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Releasing lock "refresh_cache-24caf505-35fd-40c1-9bcc-1f83580b142b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:28:35 compute-0 nova_compute[257802]: 2025-10-02 12:28:35.510 2 DEBUG nova.compute.manager [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Instance network_info: |[{"id": "f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90", "address": "fa:16:3e:76:07:a0", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f0fd9c-fb", "ovs_interfaceid": "f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:28:35 compute-0 nova_compute[257802]: 2025-10-02 12:28:35.510 2 DEBUG oslo_concurrency.lockutils [req-c94ac67a-7c38-4f51-9c9b-50af31f31fa5 req-172224b2-6b9e-409e-b7a7-d1f47acf341b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-24caf505-35fd-40c1-9bcc-1f83580b142b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:28:35 compute-0 nova_compute[257802]: 2025-10-02 12:28:35.510 2 DEBUG nova.network.neutron [req-c94ac67a-7c38-4f51-9c9b-50af31f31fa5 req-172224b2-6b9e-409e-b7a7-d1f47acf341b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Refreshing network info cache for port f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:28:35 compute-0 nova_compute[257802]: 2025-10-02 12:28:35.514 2 DEBUG nova.virt.libvirt.driver [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Start _get_guest_xml network_info=[{"id": "f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90", "address": "fa:16:3e:76:07:a0", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f0fd9c-fb", "ovs_interfaceid": "f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:28:35 compute-0 nova_compute[257802]: 2025-10-02 12:28:35.522 2 WARNING nova.virt.libvirt.driver [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:28:35 compute-0 nova_compute[257802]: 2025-10-02 12:28:35.531 2 DEBUG nova.virt.libvirt.host [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:28:35 compute-0 nova_compute[257802]: 2025-10-02 12:28:35.532 2 DEBUG nova.virt.libvirt.host [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:28:35 compute-0 nova_compute[257802]: 2025-10-02 12:28:35.536 2 DEBUG nova.virt.libvirt.host [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:28:35 compute-0 nova_compute[257802]: 2025-10-02 12:28:35.537 2 DEBUG nova.virt.libvirt.host [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:28:35 compute-0 nova_compute[257802]: 2025-10-02 12:28:35.538 2 DEBUG nova.virt.libvirt.driver [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:28:35 compute-0 nova_compute[257802]: 2025-10-02 12:28:35.539 2 DEBUG nova.virt.hardware [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:28:35 compute-0 nova_compute[257802]: 2025-10-02 12:28:35.539 2 DEBUG nova.virt.hardware [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:28:35 compute-0 nova_compute[257802]: 2025-10-02 12:28:35.539 2 DEBUG nova.virt.hardware [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:28:35 compute-0 nova_compute[257802]: 2025-10-02 12:28:35.540 2 DEBUG nova.virt.hardware [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:28:35 compute-0 nova_compute[257802]: 2025-10-02 12:28:35.540 2 DEBUG nova.virt.hardware [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:28:35 compute-0 nova_compute[257802]: 2025-10-02 12:28:35.540 2 DEBUG nova.virt.hardware [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:28:35 compute-0 nova_compute[257802]: 2025-10-02 12:28:35.540 2 DEBUG nova.virt.hardware [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:28:35 compute-0 nova_compute[257802]: 2025-10-02 12:28:35.540 2 DEBUG nova.virt.hardware [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:28:35 compute-0 nova_compute[257802]: 2025-10-02 12:28:35.541 2 DEBUG nova.virt.hardware [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:28:35 compute-0 nova_compute[257802]: 2025-10-02 12:28:35.541 2 DEBUG nova.virt.hardware [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:28:35 compute-0 nova_compute[257802]: 2025-10-02 12:28:35.541 2 DEBUG nova.virt.hardware [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:28:35 compute-0 nova_compute[257802]: 2025-10-02 12:28:35.545 2 DEBUG oslo_concurrency.processutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 561 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 5.3 MiB/s wr, 216 op/s
Oct 02 12:28:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:28:36 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2546635974' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.023 2 DEBUG oslo_concurrency.processutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.056 2 DEBUG nova.storage.rbd_utils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] rbd image 24caf505-35fd-40c1-9bcc-1f83580b142b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.063 2 DEBUG oslo_concurrency.processutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2546635974' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:28:36 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2520734327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.505 2 DEBUG oslo_concurrency.processutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.509 2 DEBUG nova.virt.libvirt.vif [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:28:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=111,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBEmvr2XQXDnaV0WQDbbXt57cEK6okdC4PHEYdjpQBx2HU9OQgvgRTm3sGWmsa/AInUTPV9ABsCq2lJ9PCqfb1WP51XCZeB9QBIxafEy8h788huF0550ajkopZIwmSLpiA==',key_name='tempest-keypair-425033456',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5533aaac08cd4856af72ef4992bb5e76',ramdisk_id='',reservation_id='r-cjcmnpsc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-1564585024',owner_user_name='tempest-AttachVolumeMultiAttachTest-1564585024-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:28:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='22d56fcd2a4b4851bfd126ae4548ee9b',uuid=24caf505-35fd-40c1-9bcc-1f83580b142b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90", "address": "fa:16:3e:76:07:a0", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f0fd9c-fb", "ovs_interfaceid": "f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.509 2 DEBUG nova.network.os_vif_util [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Converting VIF {"id": "f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90", "address": "fa:16:3e:76:07:a0", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f0fd9c-fb", "ovs_interfaceid": "f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.510 2 DEBUG nova.network.os_vif_util [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:07:a0,bridge_name='br-int',has_traffic_filtering=True,id=f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f0fd9c-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.512 2 DEBUG nova.objects.instance [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lazy-loading 'pci_devices' on Instance uuid 24caf505-35fd-40c1-9bcc-1f83580b142b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.663 2 DEBUG nova.virt.libvirt.driver [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:28:36 compute-0 nova_compute[257802]:   <uuid>24caf505-35fd-40c1-9bcc-1f83580b142b</uuid>
Oct 02 12:28:36 compute-0 nova_compute[257802]:   <name>instance-0000006f</name>
Oct 02 12:28:36 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:28:36 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:28:36 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:28:36 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:       <nova:name>multiattach-server-1</nova:name>
Oct 02 12:28:36 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:28:35</nova:creationTime>
Oct 02 12:28:36 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:28:36 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:28:36 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:28:36 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:28:36 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:28:36 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:28:36 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:28:36 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:28:36 compute-0 nova_compute[257802]:         <nova:user uuid="22d56fcd2a4b4851bfd126ae4548ee9b">tempest-AttachVolumeMultiAttachTest-1564585024-project-member</nova:user>
Oct 02 12:28:36 compute-0 nova_compute[257802]:         <nova:project uuid="5533aaac08cd4856af72ef4992bb5e76">tempest-AttachVolumeMultiAttachTest-1564585024</nova:project>
Oct 02 12:28:36 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:28:36 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:28:36 compute-0 nova_compute[257802]:         <nova:port uuid="f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90">
Oct 02 12:28:36 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:28:36 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:28:36 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:28:36 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <system>
Oct 02 12:28:36 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:28:36 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:28:36 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:28:36 compute-0 nova_compute[257802]:       <entry name="serial">24caf505-35fd-40c1-9bcc-1f83580b142b</entry>
Oct 02 12:28:36 compute-0 nova_compute[257802]:       <entry name="uuid">24caf505-35fd-40c1-9bcc-1f83580b142b</entry>
Oct 02 12:28:36 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     </system>
Oct 02 12:28:36 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:28:36 compute-0 nova_compute[257802]:   <os>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:   </os>
Oct 02 12:28:36 compute-0 nova_compute[257802]:   <features>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:   </features>
Oct 02 12:28:36 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:28:36 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:28:36 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:28:36 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/24caf505-35fd-40c1-9bcc-1f83580b142b_disk">
Oct 02 12:28:36 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:       </source>
Oct 02 12:28:36 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:28:36 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:28:36 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:28:36 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/24caf505-35fd-40c1-9bcc-1f83580b142b_disk.config">
Oct 02 12:28:36 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:       </source>
Oct 02 12:28:36 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:28:36 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:28:36 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:28:36 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:76:07:a0"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:       <target dev="tapf0f0fd9c-fb"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:28:36 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/24caf505-35fd-40c1-9bcc-1f83580b142b/console.log" append="off"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <video>
Oct 02 12:28:36 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     </video>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:28:36 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:28:36 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:28:36 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:28:36 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:28:36 compute-0 nova_compute[257802]: </domain>
Oct 02 12:28:36 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.667 2 DEBUG nova.compute.manager [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Preparing to wait for external event network-vif-plugged-f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.667 2 DEBUG oslo_concurrency.lockutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "24caf505-35fd-40c1-9bcc-1f83580b142b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.668 2 DEBUG oslo_concurrency.lockutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "24caf505-35fd-40c1-9bcc-1f83580b142b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.668 2 DEBUG oslo_concurrency.lockutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "24caf505-35fd-40c1-9bcc-1f83580b142b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.669 2 DEBUG nova.virt.libvirt.vif [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:28:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=111,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBEmvr2XQXDnaV0WQDbbXt57cEK6okdC4PHEYdjpQBx2HU9OQgvgRTm3sGWmsa/AInUTPV9ABsCq2lJ9PCqfb1WP51XCZeB9QBIxafEy8h788huF0550ajkopZIwmSLpiA==',key_name='tempest-keypair-425033456',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5533aaac08cd4856af72ef4992bb5e76',ramdisk_id='',reservation_id='r-cjcmnpsc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-1564585024',owner_user_name='tempest-AttachVolumeMultiAttachTest-1564585024-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:28:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='22d56fcd2a4b4851bfd126ae4548ee9b',uuid=24caf505-35fd-40c1-9bcc-1f83580b142b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90", "address": "fa:16:3e:76:07:a0", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f0fd9c-fb", "ovs_interfaceid": "f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.670 2 DEBUG nova.network.os_vif_util [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Converting VIF {"id": "f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90", "address": "fa:16:3e:76:07:a0", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f0fd9c-fb", "ovs_interfaceid": "f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.671 2 DEBUG nova.network.os_vif_util [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:07:a0,bridge_name='br-int',has_traffic_filtering=True,id=f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f0fd9c-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.672 2 DEBUG os_vif [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:07:a0,bridge_name='br-int',has_traffic_filtering=True,id=f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f0fd9c-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.674 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.675 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.680 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0f0fd9c-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.681 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf0f0fd9c-fb, col_values=(('external_ids', {'iface-id': 'f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:07:a0', 'vm-uuid': '24caf505-35fd-40c1-9bcc-1f83580b142b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:36 compute-0 NetworkManager[44987]: <info>  [1759408116.6849] manager: (tapf0f0fd9c-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/227)
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.692 2 INFO os_vif [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:07:a0,bridge_name='br-int',has_traffic_filtering=True,id=f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f0fd9c-fb')
Oct 02 12:28:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:36.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.981 2 DEBUG nova.virt.libvirt.driver [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.981 2 DEBUG nova.virt.libvirt.driver [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.982 2 DEBUG nova.virt.libvirt.driver [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] No VIF found with MAC fa:16:3e:76:07:a0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:28:36 compute-0 nova_compute[257802]: 2025-10-02 12:28:36.982 2 INFO nova.virt.libvirt.driver [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Using config drive
Oct 02 12:28:37 compute-0 nova_compute[257802]: 2025-10-02 12:28:37.010 2 DEBUG nova.storage.rbd_utils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] rbd image 24caf505-35fd-40c1-9bcc-1f83580b142b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:37 compute-0 ceph-mon[73607]: pgmap v2004: 305 pgs: 305 active+clean; 561 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 5.3 MiB/s wr, 216 op/s
Oct 02 12:28:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2520734327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:28:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:37.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:28:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 522 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.6 MiB/s wr, 208 op/s
Oct 02 12:28:37 compute-0 nova_compute[257802]: 2025-10-02 12:28:37.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:38 compute-0 nova_compute[257802]: 2025-10-02 12:28:38.406 2 INFO nova.virt.libvirt.driver [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Creating config drive at /var/lib/nova/instances/24caf505-35fd-40c1-9bcc-1f83580b142b/disk.config
Oct 02 12:28:38 compute-0 nova_compute[257802]: 2025-10-02 12:28:38.420 2 DEBUG oslo_concurrency.processutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/24caf505-35fd-40c1-9bcc-1f83580b142b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqgvdphk_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:38 compute-0 nova_compute[257802]: 2025-10-02 12:28:38.566 2 DEBUG oslo_concurrency.processutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/24caf505-35fd-40c1-9bcc-1f83580b142b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqgvdphk_" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:38 compute-0 nova_compute[257802]: 2025-10-02 12:28:38.593 2 DEBUG nova.storage.rbd_utils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] rbd image 24caf505-35fd-40c1-9bcc-1f83580b142b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:38 compute-0 nova_compute[257802]: 2025-10-02 12:28:38.597 2 DEBUG oslo_concurrency.processutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/24caf505-35fd-40c1-9bcc-1f83580b142b/disk.config 24caf505-35fd-40c1-9bcc-1f83580b142b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:38.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:38 compute-0 ceph-mon[73607]: pgmap v2005: 305 pgs: 305 active+clean; 522 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.6 MiB/s wr, 208 op/s
Oct 02 12:28:39 compute-0 nova_compute[257802]: 2025-10-02 12:28:39.355 2 DEBUG nova.network.neutron [req-c94ac67a-7c38-4f51-9c9b-50af31f31fa5 req-172224b2-6b9e-409e-b7a7-d1f47acf341b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Updated VIF entry in instance network info cache for port f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:28:39 compute-0 nova_compute[257802]: 2025-10-02 12:28:39.356 2 DEBUG nova.network.neutron [req-c94ac67a-7c38-4f51-9c9b-50af31f31fa5 req-172224b2-6b9e-409e-b7a7-d1f47acf341b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Updating instance_info_cache with network_info: [{"id": "f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90", "address": "fa:16:3e:76:07:a0", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f0fd9c-fb", "ovs_interfaceid": "f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:28:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:28:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:39.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:28:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 657 KiB/s rd, 2.4 MiB/s wr, 192 op/s
Oct 02 12:28:39 compute-0 nova_compute[257802]: 2025-10-02 12:28:39.643 2 DEBUG oslo_concurrency.lockutils [req-c94ac67a-7c38-4f51-9c9b-50af31f31fa5 req-172224b2-6b9e-409e-b7a7-d1f47acf341b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-24caf505-35fd-40c1-9bcc-1f83580b142b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:28:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:39.700 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:28:39 compute-0 nova_compute[257802]: 2025-10-02 12:28:39.766 2 DEBUG oslo_concurrency.processutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/24caf505-35fd-40c1-9bcc-1f83580b142b/disk.config 24caf505-35fd-40c1-9bcc-1f83580b142b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:39 compute-0 nova_compute[257802]: 2025-10-02 12:28:39.767 2 INFO nova.virt.libvirt.driver [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Deleting local config drive /var/lib/nova/instances/24caf505-35fd-40c1-9bcc-1f83580b142b/disk.config because it was imported into RBD.
Oct 02 12:28:39 compute-0 kernel: tapf0f0fd9c-fb: entered promiscuous mode
Oct 02 12:28:39 compute-0 NetworkManager[44987]: <info>  [1759408119.8695] manager: (tapf0f0fd9c-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/228)
Oct 02 12:28:39 compute-0 nova_compute[257802]: 2025-10-02 12:28:39.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:39 compute-0 ovn_controller[148183]: 2025-10-02T12:28:39Z|00475|binding|INFO|Claiming lport f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90 for this chassis.
Oct 02 12:28:39 compute-0 ovn_controller[148183]: 2025-10-02T12:28:39Z|00476|binding|INFO|f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90: Claiming fa:16:3e:76:07:a0 10.100.0.14
Oct 02 12:28:39 compute-0 nova_compute[257802]: 2025-10-02 12:28:39.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:39 compute-0 nova_compute[257802]: 2025-10-02 12:28:39.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:39 compute-0 nova_compute[257802]: 2025-10-02 12:28:39.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:39 compute-0 nova_compute[257802]: 2025-10-02 12:28:39.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:39 compute-0 NetworkManager[44987]: <info>  [1759408119.9077] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/229)
Oct 02 12:28:39 compute-0 NetworkManager[44987]: <info>  [1759408119.9096] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/230)
Oct 02 12:28:39 compute-0 systemd-udevd[326863]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:28:39 compute-0 NetworkManager[44987]: <info>  [1759408119.9347] device (tapf0f0fd9c-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:28:39 compute-0 NetworkManager[44987]: <info>  [1759408119.9398] device (tapf0f0fd9c-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:28:39 compute-0 systemd-machined[211836]: New machine qemu-55-instance-0000006f.
Oct 02 12:28:39 compute-0 systemd[1]: Started Virtual Machine qemu-55-instance-0000006f.
Oct 02 12:28:40 compute-0 podman[326867]: 2025-10-02 12:28:40.050206268 +0000 UTC m=+0.078932592 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:28:40 compute-0 podman[326869]: 2025-10-02 12:28:40.05678895 +0000 UTC m=+0.074193846 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 12:28:40 compute-0 podman[326870]: 2025-10-02 12:28:40.060546612 +0000 UTC m=+0.081863943 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001)
Oct 02 12:28:40 compute-0 nova_compute[257802]: 2025-10-02 12:28:40.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:40 compute-0 nova_compute[257802]: 2025-10-02 12:28:40.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:40 compute-0 ceph-mon[73607]: pgmap v2006: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 657 KiB/s rd, 2.4 MiB/s wr, 192 op/s
Oct 02 12:28:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:28:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:40.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:41.135 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:07:a0 10.100.0.14'], port_security=['fa:16:3e:76:07:a0 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '24caf505-35fd-40c1-9bcc-1f83580b142b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-585473f8-52e4-4e55-96df-8a236d361126', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5533aaac08cd4856af72ef4992bb5e76', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0a7e36b3-799e-47d8-a152-7f7146431afe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec297f04-3bda-490f-87d3-1f684caf96fd, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:41.137 158261 INFO neutron.agent.ovn.metadata.agent [-] Port f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90 in datapath 585473f8-52e4-4e55-96df-8a236d361126 bound to our chassis
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:41.138 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 585473f8-52e4-4e55-96df-8a236d361126
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:41.154 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[db389f2d-197c-4159-baab-7767d9b7a99e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:41.155 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap585473f8-51 in ovnmeta-585473f8-52e4-4e55-96df-8a236d361126 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:28:41 compute-0 ovn_controller[148183]: 2025-10-02T12:28:41Z|00477|binding|INFO|Setting lport f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90 ovn-installed in OVS
Oct 02 12:28:41 compute-0 ovn_controller[148183]: 2025-10-02T12:28:41Z|00478|binding|INFO|Setting lport f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90 up in Southbound
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:41.159 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap585473f8-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:41.159 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[173c77e0-9172-4833-b380-ac785c7189ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:41.160 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f4694645-b4e7-43c4-8b6e-b2e97ff1a5ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:41 compute-0 nova_compute[257802]: 2025-10-02 12:28:41.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:41 compute-0 nova_compute[257802]: 2025-10-02 12:28:41.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:41.182 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[d9e6ece0-b95b-4200-b3bd-5da09a89b509]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:41.208 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[174f6508-d280-4b9a-81c3-ae3c7f1c22be]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:41.233 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[12e59e19-ad6f-4ed2-b13b-282db9abde4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:41 compute-0 NetworkManager[44987]: <info>  [1759408121.2412] manager: (tap585473f8-50): new Veth device (/org/freedesktop/NetworkManager/Devices/231)
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:41.241 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9e4255ab-3284-4c31-b770-d32640c7b04e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:41.274 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[10804b02-8985-4703-946b-0aa8d2d8ee98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:41.278 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[30110fd2-16ca-4547-bc2c-fc36ac5dc49e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:41 compute-0 NetworkManager[44987]: <info>  [1759408121.2970] device (tap585473f8-50): carrier: link connected
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:41.300 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[9a5c27d1-3fee-45d9-b8e2-171d73698715]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:41.315 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[73e200bd-e5c5-4852-be65-fc9c4e98e13a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap585473f8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:8e:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618892, 'reachable_time': 17909, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326995, 'error': None, 'target': 'ovnmeta-585473f8-52e4-4e55-96df-8a236d361126', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:41.328 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3f7a5dbd-9ca7-43ba-9d81-86a43a10ad42]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef3:8e12'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618892, 'tstamp': 618892}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326996, 'error': None, 'target': 'ovnmeta-585473f8-52e4-4e55-96df-8a236d361126', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:41.349 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f909667e-ead1-421b-8b2d-7cdde2ee779e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap585473f8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:8e:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618892, 'reachable_time': 17909, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 326997, 'error': None, 'target': 'ovnmeta-585473f8-52e4-4e55-96df-8a236d361126', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:41.378 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c58b2e9c-dacc-4b2a-b3f7-01e2a0b0665c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:41 compute-0 nova_compute[257802]: 2025-10-02 12:28:41.378 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408121.378394, 24caf505-35fd-40c1-9bcc-1f83580b142b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:28:41 compute-0 nova_compute[257802]: 2025-10-02 12:28:41.379 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] VM Started (Lifecycle Event)
Oct 02 12:28:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:41.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:41.429 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[120895ee-d0ed-446d-b435-57345c2ab6e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:41.431 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap585473f8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:41.431 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:41.431 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap585473f8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:28:41 compute-0 nova_compute[257802]: 2025-10-02 12:28:41.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:41 compute-0 NetworkManager[44987]: <info>  [1759408121.4338] manager: (tap585473f8-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/232)
Oct 02 12:28:41 compute-0 kernel: tap585473f8-50: entered promiscuous mode
Oct 02 12:28:41 compute-0 nova_compute[257802]: 2025-10-02 12:28:41.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:41.436 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap585473f8-50, col_values=(('external_ids', {'iface-id': '02b7597d-2fc1-4c56-8603-4dcb0c716c82'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:28:41 compute-0 nova_compute[257802]: 2025-10-02 12:28:41.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:41 compute-0 ovn_controller[148183]: 2025-10-02T12:28:41Z|00479|binding|INFO|Releasing lport 02b7597d-2fc1-4c56-8603-4dcb0c716c82 from this chassis (sb_readonly=0)
Oct 02 12:28:41 compute-0 nova_compute[257802]: 2025-10-02 12:28:41.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:41 compute-0 nova_compute[257802]: 2025-10-02 12:28:41.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:41.454 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/585473f8-52e4-4e55-96df-8a236d361126.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/585473f8-52e4-4e55-96df-8a236d361126.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:41.454 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[83ea2461-84df-4002-a351-a2c3c1f6da77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:41.455 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-585473f8-52e4-4e55-96df-8a236d361126
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/585473f8-52e4-4e55-96df-8a236d361126.pid.haproxy
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 585473f8-52e4-4e55-96df-8a236d361126
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:28:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:28:41.456 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-585473f8-52e4-4e55-96df-8a236d361126', 'env', 'PROCESS_TAG=haproxy-585473f8-52e4-4e55-96df-8a236d361126', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/585473f8-52e4-4e55-96df-8a236d361126.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:28:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 657 KiB/s rd, 2.5 MiB/s wr, 192 op/s
Oct 02 12:28:41 compute-0 nova_compute[257802]: 2025-10-02 12:28:41.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:41 compute-0 nova_compute[257802]: 2025-10-02 12:28:41.688 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:28:41 compute-0 nova_compute[257802]: 2025-10-02 12:28:41.693 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408121.3792725, 24caf505-35fd-40c1-9bcc-1f83580b142b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:28:41 compute-0 nova_compute[257802]: 2025-10-02 12:28:41.694 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] VM Paused (Lifecycle Event)
Oct 02 12:28:41 compute-0 podman[327030]: 2025-10-02 12:28:41.774092281 +0000 UTC m=+0.024540655 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:28:42 compute-0 podman[327030]: 2025-10-02 12:28:42.054756571 +0000 UTC m=+0.305204865 container create fe6010cddc0ab9ea08e296fed7305e663c5cad5cd5b6d3f1ba1873e417b52c5b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:28:42 compute-0 systemd[1]: Started libpod-conmon-fe6010cddc0ab9ea08e296fed7305e663c5cad5cd5b6d3f1ba1873e417b52c5b.scope.
Oct 02 12:28:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:28:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a03850c8d19234866014556142b1266693fd29e6cd4414a41be97ccf7f205b39/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:28:42 compute-0 nova_compute[257802]: 2025-10-02 12:28:42.225 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:28:42 compute-0 nova_compute[257802]: 2025-10-02 12:28:42.232 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:28:42 compute-0 podman[327030]: 2025-10-02 12:28:42.246811933 +0000 UTC m=+0.497260307 container init fe6010cddc0ab9ea08e296fed7305e663c5cad5cd5b6d3f1ba1873e417b52c5b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:28:42 compute-0 podman[327030]: 2025-10-02 12:28:42.253854776 +0000 UTC m=+0.504303070 container start fe6010cddc0ab9ea08e296fed7305e663c5cad5cd5b6d3f1ba1873e417b52c5b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:28:42 compute-0 neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126[327045]: [NOTICE]   (327049) : New worker (327051) forked
Oct 02 12:28:42 compute-0 neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126[327045]: [NOTICE]   (327049) : Loading success.
Oct 02 12:28:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:28:42
Oct 02 12:28:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:28:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:28:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', '.mgr', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'vms', 'images', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data']
Oct 02 12:28:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:28:42 compute-0 nova_compute[257802]: 2025-10-02 12:28:42.679 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:28:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:28:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:28:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:28:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:28:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:28:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:28:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:42.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:42 compute-0 nova_compute[257802]: 2025-10-02 12:28:42.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:28:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:28:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:28:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:28:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:28:43 compute-0 ceph-mon[73607]: pgmap v2007: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 657 KiB/s rd, 2.5 MiB/s wr, 192 op/s
Oct 02 12:28:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4001133659' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:28:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:28:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:28:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:28:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:28:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:28:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:43.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:28:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 116 KiB/s rd, 2.3 MiB/s wr, 147 op/s
Oct 02 12:28:44 compute-0 sudo[327062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:28:44 compute-0 sudo[327062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:28:44 compute-0 sshd-session[327061]: Invalid user fatima from 167.99.55.34 port 38918
Oct 02 12:28:44 compute-0 sudo[327062]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:44 compute-0 sshd-session[327061]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 12:28:44 compute-0 sshd-session[327061]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=167.99.55.34
Oct 02 12:28:44 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/529136277' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:44 compute-0 sudo[327088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:28:44 compute-0 sudo[327088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:28:44 compute-0 sudo[327088]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:28:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:44.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:28:45 compute-0 ceph-mon[73607]: pgmap v2008: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 116 KiB/s rd, 2.3 MiB/s wr, 147 op/s
Oct 02 12:28:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:45.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 104 KiB/s rd, 1.9 MiB/s wr, 135 op/s
Oct 02 12:28:45 compute-0 podman[327114]: 2025-10-02 12:28:45.968960133 +0000 UTC m=+0.088759263 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 12:28:46 compute-0 ceph-mon[73607]: pgmap v2009: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 104 KiB/s rd, 1.9 MiB/s wr, 135 op/s
Oct 02 12:28:46 compute-0 sshd-session[327061]: Failed password for invalid user fatima from 167.99.55.34 port 38918 ssh2
Oct 02 12:28:46 compute-0 nova_compute[257802]: 2025-10-02 12:28:46.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:46.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:47.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 763 KiB/s rd, 1.8 MiB/s wr, 119 op/s
Oct 02 12:28:47 compute-0 nova_compute[257802]: 2025-10-02 12:28:47.731 2 DEBUG nova.compute.manager [req-c2b81c22-ea13-46af-82b9-25cf03fb7e7b req-e0520b68-fd5c-43e2-8919-e382d116f58e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Received event network-vif-plugged-f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:28:47 compute-0 nova_compute[257802]: 2025-10-02 12:28:47.732 2 DEBUG oslo_concurrency.lockutils [req-c2b81c22-ea13-46af-82b9-25cf03fb7e7b req-e0520b68-fd5c-43e2-8919-e382d116f58e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "24caf505-35fd-40c1-9bcc-1f83580b142b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:47 compute-0 nova_compute[257802]: 2025-10-02 12:28:47.732 2 DEBUG oslo_concurrency.lockutils [req-c2b81c22-ea13-46af-82b9-25cf03fb7e7b req-e0520b68-fd5c-43e2-8919-e382d116f58e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "24caf505-35fd-40c1-9bcc-1f83580b142b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:47 compute-0 nova_compute[257802]: 2025-10-02 12:28:47.732 2 DEBUG oslo_concurrency.lockutils [req-c2b81c22-ea13-46af-82b9-25cf03fb7e7b req-e0520b68-fd5c-43e2-8919-e382d116f58e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "24caf505-35fd-40c1-9bcc-1f83580b142b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:47 compute-0 nova_compute[257802]: 2025-10-02 12:28:47.732 2 DEBUG nova.compute.manager [req-c2b81c22-ea13-46af-82b9-25cf03fb7e7b req-e0520b68-fd5c-43e2-8919-e382d116f58e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Processing event network-vif-plugged-f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:28:47 compute-0 nova_compute[257802]: 2025-10-02 12:28:47.733 2 DEBUG nova.compute.manager [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:28:47 compute-0 nova_compute[257802]: 2025-10-02 12:28:47.738 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408127.7387476, 24caf505-35fd-40c1-9bcc-1f83580b142b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:28:47 compute-0 nova_compute[257802]: 2025-10-02 12:28:47.739 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] VM Resumed (Lifecycle Event)
Oct 02 12:28:47 compute-0 nova_compute[257802]: 2025-10-02 12:28:47.741 2 DEBUG nova.virt.libvirt.driver [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:28:47 compute-0 nova_compute[257802]: 2025-10-02 12:28:47.744 2 INFO nova.virt.libvirt.driver [-] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Instance spawned successfully.
Oct 02 12:28:47 compute-0 nova_compute[257802]: 2025-10-02 12:28:47.745 2 DEBUG nova.virt.libvirt.driver [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:28:47 compute-0 nova_compute[257802]: 2025-10-02 12:28:47.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:47 compute-0 sshd-session[327061]: Connection closed by invalid user fatima 167.99.55.34 port 38918 [preauth]
Oct 02 12:28:47 compute-0 nova_compute[257802]: 2025-10-02 12:28:47.980 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:28:47 compute-0 nova_compute[257802]: 2025-10-02 12:28:47.985 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:28:47 compute-0 nova_compute[257802]: 2025-10-02 12:28:47.992 2 DEBUG nova.virt.libvirt.driver [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:28:47 compute-0 nova_compute[257802]: 2025-10-02 12:28:47.992 2 DEBUG nova.virt.libvirt.driver [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:28:47 compute-0 nova_compute[257802]: 2025-10-02 12:28:47.993 2 DEBUG nova.virt.libvirt.driver [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:28:47 compute-0 nova_compute[257802]: 2025-10-02 12:28:47.993 2 DEBUG nova.virt.libvirt.driver [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:28:47 compute-0 nova_compute[257802]: 2025-10-02 12:28:47.994 2 DEBUG nova.virt.libvirt.driver [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:28:47 compute-0 nova_compute[257802]: 2025-10-02 12:28:47.994 2 DEBUG nova.virt.libvirt.driver [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:28:48 compute-0 nova_compute[257802]: 2025-10-02 12:28:48.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:28:48 compute-0 nova_compute[257802]: 2025-10-02 12:28:48.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:28:48 compute-0 nova_compute[257802]: 2025-10-02 12:28:48.276 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:28:48 compute-0 nova_compute[257802]: 2025-10-02 12:28:48.285 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:28:48 compute-0 nova_compute[257802]: 2025-10-02 12:28:48.495 2 INFO nova.compute.manager [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Took 20.15 seconds to spawn the instance on the hypervisor.
Oct 02 12:28:48 compute-0 nova_compute[257802]: 2025-10-02 12:28:48.496 2 DEBUG nova.compute.manager [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:28:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:48.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:49 compute-0 nova_compute[257802]: 2025-10-02 12:28:49.157 2 INFO nova.compute.manager [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Took 23.66 seconds to build instance.
Oct 02 12:28:49 compute-0 ceph-mon[73607]: pgmap v2010: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 763 KiB/s rd, 1.8 MiB/s wr, 119 op/s
Oct 02 12:28:49 compute-0 nova_compute[257802]: 2025-10-02 12:28:49.262 2 DEBUG oslo_concurrency.lockutils [None req-c6709e92-c122-4d77-9ab6-644a793ae87e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "24caf505-35fd-40c1-9bcc-1f83580b142b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 24.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:49.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 02 12:28:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/527686598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:50.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:51 compute-0 nova_compute[257802]: 2025-10-02 12:28:51.286 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:28:51 compute-0 nova_compute[257802]: 2025-10-02 12:28:51.286 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:28:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:28:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:51.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:28:51 compute-0 ceph-mon[73607]: pgmap v2011: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 02 12:28:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 53 op/s
Oct 02 12:28:51 compute-0 nova_compute[257802]: 2025-10-02 12:28:51.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:52 compute-0 ceph-mon[73607]: pgmap v2012: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 53 op/s
Oct 02 12:28:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:52.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:52 compute-0 nova_compute[257802]: 2025-10-02 12:28:52.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:53 compute-0 nova_compute[257802]: 2025-10-02 12:28:53.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:28:53 compute-0 nova_compute[257802]: 2025-10-02 12:28:53.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:28:53 compute-0 nova_compute[257802]: 2025-10-02 12:28:53.317 2 DEBUG nova.compute.manager [req-f91dd0cb-b8ee-4742-878a-fcac02835484 req-3db37f59-6dac-4e37-be23-14a89af787bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Received event network-vif-plugged-f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:28:53 compute-0 nova_compute[257802]: 2025-10-02 12:28:53.318 2 DEBUG oslo_concurrency.lockutils [req-f91dd0cb-b8ee-4742-878a-fcac02835484 req-3db37f59-6dac-4e37-be23-14a89af787bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "24caf505-35fd-40c1-9bcc-1f83580b142b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:53 compute-0 nova_compute[257802]: 2025-10-02 12:28:53.318 2 DEBUG oslo_concurrency.lockutils [req-f91dd0cb-b8ee-4742-878a-fcac02835484 req-3db37f59-6dac-4e37-be23-14a89af787bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "24caf505-35fd-40c1-9bcc-1f83580b142b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:53 compute-0 nova_compute[257802]: 2025-10-02 12:28:53.318 2 DEBUG oslo_concurrency.lockutils [req-f91dd0cb-b8ee-4742-878a-fcac02835484 req-3db37f59-6dac-4e37-be23-14a89af787bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "24caf505-35fd-40c1-9bcc-1f83580b142b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:53 compute-0 nova_compute[257802]: 2025-10-02 12:28:53.319 2 DEBUG nova.compute.manager [req-f91dd0cb-b8ee-4742-878a-fcac02835484 req-3db37f59-6dac-4e37-be23-14a89af787bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] No waiting events found dispatching network-vif-plugged-f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:28:53 compute-0 nova_compute[257802]: 2025-10-02 12:28:53.319 2 WARNING nova.compute.manager [req-f91dd0cb-b8ee-4742-878a-fcac02835484 req-3db37f59-6dac-4e37-be23-14a89af787bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Received unexpected event network-vif-plugged-f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90 for instance with vm_state active and task_state None.
Oct 02 12:28:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:53.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Oct 02 12:28:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:28:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:28:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:28:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:28:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00417088335194491 of space, bias 1.0, pg target 1.251265005583473 quantized to 32 (current 32)
Oct 02 12:28:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:28:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.006489333240275451 of space, bias 1.0, pg target 1.9403106388423599 quantized to 32 (current 32)
Oct 02 12:28:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:28:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:28:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:28:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Oct 02 12:28:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:28:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:28:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:28:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:28:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:28:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027081297692164525 quantized to 32 (current 32)
Oct 02 12:28:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:28:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:28:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:28:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:28:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:28:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:28:54 compute-0 ceph-mon[73607]: pgmap v2013: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Oct 02 12:28:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3347900093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:28:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:54.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:28:54 compute-0 nova_compute[257802]: 2025-10-02 12:28:54.890 2 DEBUG oslo_concurrency.lockutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquiring lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:54 compute-0 nova_compute[257802]: 2025-10-02 12:28:54.891 2 DEBUG oslo_concurrency.lockutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:54 compute-0 nova_compute[257802]: 2025-10-02 12:28:54.975 2 DEBUG nova.compute.manager [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:28:55 compute-0 nova_compute[257802]: 2025-10-02 12:28:55.245 2 DEBUG oslo_concurrency.lockutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:55 compute-0 nova_compute[257802]: 2025-10-02 12:28:55.246 2 DEBUG oslo_concurrency.lockutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:55 compute-0 nova_compute[257802]: 2025-10-02 12:28:55.254 2 DEBUG nova.virt.hardware [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:28:55 compute-0 nova_compute[257802]: 2025-10-02 12:28:55.254 2 INFO nova.compute.claims [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:28:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:55.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 16 KiB/s wr, 118 op/s
Oct 02 12:28:55 compute-0 nova_compute[257802]: 2025-10-02 12:28:55.686 2 DEBUG oslo_concurrency.processutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/987223863' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:28:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/987223863' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:28:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:28:56 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2591151327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:56 compute-0 nova_compute[257802]: 2025-10-02 12:28:56.150 2 DEBUG oslo_concurrency.processutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:56 compute-0 nova_compute[257802]: 2025-10-02 12:28:56.156 2 DEBUG nova.compute.provider_tree [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:28:56 compute-0 nova_compute[257802]: 2025-10-02 12:28:56.242 2 DEBUG nova.scheduler.client.report [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:28:56 compute-0 nova_compute[257802]: 2025-10-02 12:28:56.298 2 DEBUG oslo_concurrency.lockutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:56 compute-0 nova_compute[257802]: 2025-10-02 12:28:56.299 2 DEBUG nova.compute.manager [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:28:56 compute-0 nova_compute[257802]: 2025-10-02 12:28:56.426 2 DEBUG nova.compute.manager [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:28:56 compute-0 nova_compute[257802]: 2025-10-02 12:28:56.427 2 DEBUG nova.network.neutron [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:28:56 compute-0 nova_compute[257802]: 2025-10-02 12:28:56.547 2 INFO nova.virt.libvirt.driver [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:28:56 compute-0 nova_compute[257802]: 2025-10-02 12:28:56.656 2 DEBUG nova.compute.manager [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:28:56 compute-0 nova_compute[257802]: 2025-10-02 12:28:56.673 2 DEBUG nova.policy [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2bd16d1f5f9d4eb396c474eedee67165', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4b8ca48cb5f64ef3b0736b8be82378b8', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:28:56 compute-0 nova_compute[257802]: 2025-10-02 12:28:56.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:56.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:56 compute-0 ceph-mon[73607]: pgmap v2014: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 16 KiB/s wr, 118 op/s
Oct 02 12:28:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2591151327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:57 compute-0 nova_compute[257802]: 2025-10-02 12:28:57.011 2 DEBUG nova.compute.manager [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:28:57 compute-0 nova_compute[257802]: 2025-10-02 12:28:57.012 2 DEBUG nova.virt.libvirt.driver [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:28:57 compute-0 nova_compute[257802]: 2025-10-02 12:28:57.013 2 INFO nova.virt.libvirt.driver [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Creating image(s)
Oct 02 12:28:57 compute-0 nova_compute[257802]: 2025-10-02 12:28:57.036 2 DEBUG nova.storage.rbd_utils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] rbd image 634c38a6-caab-410d-8748-3ec1fd6f9cdc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:57 compute-0 nova_compute[257802]: 2025-10-02 12:28:57.061 2 DEBUG nova.storage.rbd_utils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] rbd image 634c38a6-caab-410d-8748-3ec1fd6f9cdc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:57 compute-0 nova_compute[257802]: 2025-10-02 12:28:57.085 2 DEBUG nova.storage.rbd_utils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] rbd image 634c38a6-caab-410d-8748-3ec1fd6f9cdc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:57 compute-0 nova_compute[257802]: 2025-10-02 12:28:57.089 2 DEBUG oslo_concurrency.processutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:57 compute-0 nova_compute[257802]: 2025-10-02 12:28:57.116 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:28:57 compute-0 nova_compute[257802]: 2025-10-02 12:28:57.117 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:28:57 compute-0 nova_compute[257802]: 2025-10-02 12:28:57.152 2 DEBUG oslo_concurrency.processutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:57 compute-0 nova_compute[257802]: 2025-10-02 12:28:57.153 2 DEBUG oslo_concurrency.lockutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:57 compute-0 nova_compute[257802]: 2025-10-02 12:28:57.154 2 DEBUG oslo_concurrency.lockutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:57 compute-0 nova_compute[257802]: 2025-10-02 12:28:57.154 2 DEBUG oslo_concurrency.lockutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:57 compute-0 nova_compute[257802]: 2025-10-02 12:28:57.183 2 DEBUG nova.storage.rbd_utils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] rbd image 634c38a6-caab-410d-8748-3ec1fd6f9cdc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:57 compute-0 nova_compute[257802]: 2025-10-02 12:28:57.187 2 DEBUG oslo_concurrency.processutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 634c38a6-caab-410d-8748-3ec1fd6f9cdc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:57.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 305 active+clean; 437 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 397 KiB/s wr, 161 op/s
Oct 02 12:28:57 compute-0 nova_compute[257802]: 2025-10-02 12:28:57.637 2 DEBUG oslo_concurrency.processutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 634c38a6-caab-410d-8748-3ec1fd6f9cdc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:57 compute-0 nova_compute[257802]: 2025-10-02 12:28:57.715 2 DEBUG nova.storage.rbd_utils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] resizing rbd image 634c38a6-caab-410d-8748-3ec1fd6f9cdc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:28:57 compute-0 nova_compute[257802]: 2025-10-02 12:28:57.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:58 compute-0 nova_compute[257802]: 2025-10-02 12:28:58.012 2 DEBUG nova.objects.instance [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lazy-loading 'migration_context' on Instance uuid 634c38a6-caab-410d-8748-3ec1fd6f9cdc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:28:58 compute-0 nova_compute[257802]: 2025-10-02 12:28:58.066 2 DEBUG nova.virt.libvirt.driver [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:28:58 compute-0 nova_compute[257802]: 2025-10-02 12:28:58.067 2 DEBUG nova.virt.libvirt.driver [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Ensure instance console log exists: /var/lib/nova/instances/634c38a6-caab-410d-8748-3ec1fd6f9cdc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:28:58 compute-0 nova_compute[257802]: 2025-10-02 12:28:58.068 2 DEBUG oslo_concurrency.lockutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:58 compute-0 nova_compute[257802]: 2025-10-02 12:28:58.068 2 DEBUG oslo_concurrency.lockutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:58 compute-0 nova_compute[257802]: 2025-10-02 12:28:58.068 2 DEBUG oslo_concurrency.lockutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:58 compute-0 nova_compute[257802]: 2025-10-02 12:28:58.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:28:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:58.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:58 compute-0 ceph-mon[73607]: pgmap v2015: 305 pgs: 305 active+clean; 437 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 397 KiB/s wr, 161 op/s
Oct 02 12:28:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:28:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:59.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2016: 305 pgs: 305 active+clean; 447 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 2.5 MiB/s wr, 189 op/s
Oct 02 12:29:00 compute-0 ovn_controller[148183]: 2025-10-02T12:29:00Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:76:07:a0 10.100.0.14
Oct 02 12:29:00 compute-0 ovn_controller[148183]: 2025-10-02T12:29:00Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:76:07:a0 10.100.0.14
Oct 02 12:29:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:29:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:00.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:29:01 compute-0 ceph-mon[73607]: pgmap v2016: 305 pgs: 305 active+clean; 447 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 2.5 MiB/s wr, 189 op/s
Oct 02 12:29:01 compute-0 nova_compute[257802]: 2025-10-02 12:29:01.375 2 DEBUG nova.network.neutron [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Successfully created port: b76b92a4-1882-4f89-94f4-3a4700f9c379 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:29:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:01.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2017: 305 pgs: 305 active+clean; 447 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.5 MiB/s wr, 182 op/s
Oct 02 12:29:01 compute-0 nova_compute[257802]: 2025-10-02 12:29:01.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:02 compute-0 nova_compute[257802]: 2025-10-02 12:29:02.070 2 DEBUG nova.compute.manager [req-bde3bf79-3324-4ba2-817c-937d0a249fea req-e6613e0d-04d0-4dfc-8e3a-1a2cc8241369 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Received event network-changed-f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:02 compute-0 nova_compute[257802]: 2025-10-02 12:29:02.070 2 DEBUG nova.compute.manager [req-bde3bf79-3324-4ba2-817c-937d0a249fea req-e6613e0d-04d0-4dfc-8e3a-1a2cc8241369 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Refreshing instance network info cache due to event network-changed-f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:29:02 compute-0 nova_compute[257802]: 2025-10-02 12:29:02.071 2 DEBUG oslo_concurrency.lockutils [req-bde3bf79-3324-4ba2-817c-937d0a249fea req-e6613e0d-04d0-4dfc-8e3a-1a2cc8241369 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-24caf505-35fd-40c1-9bcc-1f83580b142b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:29:02 compute-0 nova_compute[257802]: 2025-10-02 12:29:02.071 2 DEBUG oslo_concurrency.lockutils [req-bde3bf79-3324-4ba2-817c-937d0a249fea req-e6613e0d-04d0-4dfc-8e3a-1a2cc8241369 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-24caf505-35fd-40c1-9bcc-1f83580b142b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:29:02 compute-0 nova_compute[257802]: 2025-10-02 12:29:02.071 2 DEBUG nova.network.neutron [req-bde3bf79-3324-4ba2-817c-937d0a249fea req-e6613e0d-04d0-4dfc-8e3a-1a2cc8241369 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Refreshing network info cache for port f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:29:02 compute-0 nova_compute[257802]: 2025-10-02 12:29:02.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:02 compute-0 ceph-osd[83986]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Oct 02 12:29:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:02.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:02 compute-0 nova_compute[257802]: 2025-10-02 12:29:02.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:03 compute-0 ceph-mon[73607]: pgmap v2017: 305 pgs: 305 active+clean; 447 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.5 MiB/s wr, 182 op/s
Oct 02 12:29:03 compute-0 nova_compute[257802]: 2025-10-02 12:29:03.201 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:03 compute-0 nova_compute[257802]: 2025-10-02 12:29:03.202 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:29:03 compute-0 nova_compute[257802]: 2025-10-02 12:29:03.202 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:29:03 compute-0 nova_compute[257802]: 2025-10-02 12:29:03.240 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 12:29:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:29:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:03.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:29:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 305 active+clean; 491 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.4 MiB/s wr, 243 op/s
Oct 02 12:29:03 compute-0 nova_compute[257802]: 2025-10-02 12:29:03.683 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-24caf505-35fd-40c1-9bcc-1f83580b142b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:29:03 compute-0 nova_compute[257802]: 2025-10-02 12:29:03.766 2 DEBUG nova.network.neutron [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Successfully updated port: b76b92a4-1882-4f89-94f4-3a4700f9c379 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:29:03 compute-0 nova_compute[257802]: 2025-10-02 12:29:03.807 2 DEBUG oslo_concurrency.lockutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquiring lock "refresh_cache-634c38a6-caab-410d-8748-3ec1fd6f9cdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:29:03 compute-0 nova_compute[257802]: 2025-10-02 12:29:03.808 2 DEBUG oslo_concurrency.lockutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquired lock "refresh_cache-634c38a6-caab-410d-8748-3ec1fd6f9cdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:29:03 compute-0 nova_compute[257802]: 2025-10-02 12:29:03.808 2 DEBUG nova.network.neutron [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:29:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/338760089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:04 compute-0 sudo[327337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:04 compute-0 sudo[327337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:04 compute-0 sudo[327337]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:29:04 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2621123882' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:04 compute-0 sudo[327362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:04 compute-0 sudo[327362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:04 compute-0 sudo[327362]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:04 compute-0 nova_compute[257802]: 2025-10-02 12:29:04.598 2 DEBUG nova.network.neutron [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:29:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:04.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:05 compute-0 nova_compute[257802]: 2025-10-02 12:29:05.117 2 DEBUG nova.network.neutron [req-bde3bf79-3324-4ba2-817c-937d0a249fea req-e6613e0d-04d0-4dfc-8e3a-1a2cc8241369 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Updated VIF entry in instance network info cache for port f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:29:05 compute-0 nova_compute[257802]: 2025-10-02 12:29:05.117 2 DEBUG nova.network.neutron [req-bde3bf79-3324-4ba2-817c-937d0a249fea req-e6613e0d-04d0-4dfc-8e3a-1a2cc8241369 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Updating instance_info_cache with network_info: [{"id": "f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90", "address": "fa:16:3e:76:07:a0", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f0fd9c-fb", "ovs_interfaceid": "f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:29:05 compute-0 ceph-mon[73607]: pgmap v2018: 305 pgs: 305 active+clean; 491 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.4 MiB/s wr, 243 op/s
Oct 02 12:29:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2621123882' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3903986672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:05 compute-0 nova_compute[257802]: 2025-10-02 12:29:05.341 2 DEBUG nova.compute.manager [req-8aaa4362-b731-40a3-8ee6-c01875cfe8f7 req-ffb623ae-7354-47eb-8e4e-9d010d02a8af d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Received event network-changed-b76b92a4-1882-4f89-94f4-3a4700f9c379 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:05 compute-0 nova_compute[257802]: 2025-10-02 12:29:05.341 2 DEBUG nova.compute.manager [req-8aaa4362-b731-40a3-8ee6-c01875cfe8f7 req-ffb623ae-7354-47eb-8e4e-9d010d02a8af d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Refreshing instance network info cache due to event network-changed-b76b92a4-1882-4f89-94f4-3a4700f9c379. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:29:05 compute-0 nova_compute[257802]: 2025-10-02 12:29:05.342 2 DEBUG oslo_concurrency.lockutils [req-8aaa4362-b731-40a3-8ee6-c01875cfe8f7 req-ffb623ae-7354-47eb-8e4e-9d010d02a8af d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-634c38a6-caab-410d-8748-3ec1fd6f9cdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:29:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:05.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2019: 305 pgs: 305 active+clean; 497 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 5.7 MiB/s wr, 168 op/s
Oct 02 12:29:05 compute-0 nova_compute[257802]: 2025-10-02 12:29:05.645 2 DEBUG oslo_concurrency.lockutils [req-bde3bf79-3324-4ba2-817c-937d0a249fea req-e6613e0d-04d0-4dfc-8e3a-1a2cc8241369 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-24caf505-35fd-40c1-9bcc-1f83580b142b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:29:05 compute-0 nova_compute[257802]: 2025-10-02 12:29:05.646 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-24caf505-35fd-40c1-9bcc-1f83580b142b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:29:05 compute-0 nova_compute[257802]: 2025-10-02 12:29:05.646 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:29:05 compute-0 nova_compute[257802]: 2025-10-02 12:29:05.646 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid 24caf505-35fd-40c1-9bcc-1f83580b142b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:05 compute-0 sudo[327388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:05 compute-0 sudo[327388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:05 compute-0 sudo[327388]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:05 compute-0 sudo[327413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:29:05 compute-0 sudo[327413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:05 compute-0 sudo[327413]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:05 compute-0 sudo[327438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:05 compute-0 sudo[327438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:05 compute-0 sudo[327438]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:05 compute-0 sudo[327463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:29:05 compute-0 sudo[327463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:06 compute-0 sudo[327463]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:29:06 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:29:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:29:06 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:29:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:29:06 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:29:06 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev ddd00600-f0a8-42bd-9cbc-93b95ae778d9 does not exist
Oct 02 12:29:06 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev b2a722fa-0858-45c9-9ce3-9e8ad041fd27 does not exist
Oct 02 12:29:06 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev edd7f48d-d39a-43b6-8213-4a822bf8da59 does not exist
Oct 02 12:29:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:29:06 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:29:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:29:06 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:29:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:29:06 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:29:06 compute-0 sudo[327520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:06 compute-0 sudo[327520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:06 compute-0 sudo[327520]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:06 compute-0 nova_compute[257802]: 2025-10-02 12:29:06.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:06 compute-0 sudo[327545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:29:06 compute-0 sudo[327545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:06 compute-0 sudo[327545]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:06 compute-0 sudo[327570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:06 compute-0 sudo[327570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:06 compute-0 sudo[327570]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:06 compute-0 sudo[327595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:29:06 compute-0 sudo[327595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:29:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:06.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:29:07 compute-0 podman[327660]: 2025-10-02 12:29:07.153543291 +0000 UTC m=+0.021179261 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:29:07 compute-0 podman[327660]: 2025-10-02 12:29:07.34176759 +0000 UTC m=+0.209403560 container create b2554ba0495afb3e8a8ab437a2c10ac5a25170d531cce3e7470bffa76785b5a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_franklin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:29:07 compute-0 ceph-mon[73607]: pgmap v2019: 305 pgs: 305 active+clean; 497 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 5.7 MiB/s wr, 168 op/s
Oct 02 12:29:07 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:29:07 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:29:07 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:29:07 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:29:07 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:29:07 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:29:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1688902075' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:07 compute-0 systemd[1]: Started libpod-conmon-b2554ba0495afb3e8a8ab437a2c10ac5a25170d531cce3e7470bffa76785b5a6.scope.
Oct 02 12:29:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:29:07 compute-0 podman[327660]: 2025-10-02 12:29:07.464551038 +0000 UTC m=+0.332187038 container init b2554ba0495afb3e8a8ab437a2c10ac5a25170d531cce3e7470bffa76785b5a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_franklin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Oct 02 12:29:07 compute-0 podman[327660]: 2025-10-02 12:29:07.472172856 +0000 UTC m=+0.339808816 container start b2554ba0495afb3e8a8ab437a2c10ac5a25170d531cce3e7470bffa76785b5a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:29:07 compute-0 stoic_franklin[327677]: 167 167
Oct 02 12:29:07 compute-0 systemd[1]: libpod-b2554ba0495afb3e8a8ab437a2c10ac5a25170d531cce3e7470bffa76785b5a6.scope: Deactivated successfully.
Oct 02 12:29:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:07.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:07 compute-0 podman[327660]: 2025-10-02 12:29:07.51951299 +0000 UTC m=+0.387148960 container attach b2554ba0495afb3e8a8ab437a2c10ac5a25170d531cce3e7470bffa76785b5a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_franklin, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:29:07 compute-0 podman[327660]: 2025-10-02 12:29:07.520125474 +0000 UTC m=+0.387761424 container died b2554ba0495afb3e8a8ab437a2c10ac5a25170d531cce3e7470bffa76785b5a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 12:29:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2020: 305 pgs: 305 active+clean; 465 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.7 MiB/s wr, 192 op/s
Oct 02 12:29:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d51ef0050a41ca3f6e7e4a6038b198442fd98cf8c12ea1591286a7c5363c228-merged.mount: Deactivated successfully.
Oct 02 12:29:07 compute-0 podman[327660]: 2025-10-02 12:29:07.723374241 +0000 UTC m=+0.591010191 container remove b2554ba0495afb3e8a8ab437a2c10ac5a25170d531cce3e7470bffa76785b5a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_franklin, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:29:07 compute-0 systemd[1]: libpod-conmon-b2554ba0495afb3e8a8ab437a2c10ac5a25170d531cce3e7470bffa76785b5a6.scope: Deactivated successfully.
Oct 02 12:29:07 compute-0 nova_compute[257802]: 2025-10-02 12:29:07.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:08 compute-0 podman[327703]: 2025-10-02 12:29:07.92138318 +0000 UTC m=+0.024165106 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:29:08 compute-0 podman[327703]: 2025-10-02 12:29:08.020782604 +0000 UTC m=+0.123564520 container create 50aa318ba8219391665984df8373b367ad112ff6dbef079dc080423bfa5db998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 12:29:08 compute-0 systemd[1]: Started libpod-conmon-50aa318ba8219391665984df8373b367ad112ff6dbef079dc080423bfa5db998.scope.
Oct 02 12:29:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:29:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3127addf05c9a35f2f8ed0ab9e22f16381d7a42604919cc1ba7d652e1063a815/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3127addf05c9a35f2f8ed0ab9e22f16381d7a42604919cc1ba7d652e1063a815/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3127addf05c9a35f2f8ed0ab9e22f16381d7a42604919cc1ba7d652e1063a815/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3127addf05c9a35f2f8ed0ab9e22f16381d7a42604919cc1ba7d652e1063a815/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3127addf05c9a35f2f8ed0ab9e22f16381d7a42604919cc1ba7d652e1063a815/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:08 compute-0 podman[327703]: 2025-10-02 12:29:08.127754843 +0000 UTC m=+0.230536769 container init 50aa318ba8219391665984df8373b367ad112ff6dbef079dc080423bfa5db998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 12:29:08 compute-0 podman[327703]: 2025-10-02 12:29:08.134267564 +0000 UTC m=+0.237049480 container start 50aa318ba8219391665984df8373b367ad112ff6dbef079dc080423bfa5db998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:29:08 compute-0 podman[327703]: 2025-10-02 12:29:08.177811164 +0000 UTC m=+0.280593090 container attach 50aa318ba8219391665984df8373b367ad112ff6dbef079dc080423bfa5db998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:29:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:08 compute-0 ceph-mon[73607]: pgmap v2020: 305 pgs: 305 active+clean; 465 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.7 MiB/s wr, 192 op/s
Oct 02 12:29:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:29:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:08.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:29:08 compute-0 interesting_tu[327719]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:29:08 compute-0 interesting_tu[327719]: --> relative data size: 1.0
Oct 02 12:29:08 compute-0 interesting_tu[327719]: --> All data devices are unavailable
Oct 02 12:29:08 compute-0 systemd[1]: libpod-50aa318ba8219391665984df8373b367ad112ff6dbef079dc080423bfa5db998.scope: Deactivated successfully.
Oct 02 12:29:08 compute-0 podman[327703]: 2025-10-02 12:29:08.955669238 +0000 UTC m=+1.058451184 container died 50aa318ba8219391665984df8373b367ad112ff6dbef079dc080423bfa5db998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:29:09 compute-0 nova_compute[257802]: 2025-10-02 12:29:09.226 2 DEBUG nova.network.neutron [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Updating instance_info_cache with network_info: [{"id": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "address": "fa:16:3e:23:4b:2a", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb76b92a4-18", "ovs_interfaceid": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:29:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-3127addf05c9a35f2f8ed0ab9e22f16381d7a42604919cc1ba7d652e1063a815-merged.mount: Deactivated successfully.
Oct 02 12:29:09 compute-0 podman[327703]: 2025-10-02 12:29:09.384677426 +0000 UTC m=+1.487459332 container remove 50aa318ba8219391665984df8373b367ad112ff6dbef079dc080423bfa5db998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 12:29:09 compute-0 sudo[327595]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:09 compute-0 systemd[1]: libpod-conmon-50aa318ba8219391665984df8373b367ad112ff6dbef079dc080423bfa5db998.scope: Deactivated successfully.
Oct 02 12:29:09 compute-0 sudo[327747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:09 compute-0 sudo[327747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:09 compute-0 sudo[327747]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:09.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:09 compute-0 sudo[327772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:29:09 compute-0 sudo[327772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:09 compute-0 sudo[327772]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 281 KiB/s rd, 5.3 MiB/s wr, 139 op/s
Oct 02 12:29:09 compute-0 sudo[327797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:09 compute-0 sudo[327797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:09 compute-0 sudo[327797]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:09 compute-0 sudo[327822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:29:09 compute-0 sudo[327822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:10 compute-0 podman[327890]: 2025-10-02 12:29:10.057764664 +0000 UTC m=+0.028744288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:29:10 compute-0 podman[327890]: 2025-10-02 12:29:10.171373527 +0000 UTC m=+0.142353171 container create 2a30b630ea197f0a3ea539b45e735841a63817af922bcb63b386f87cf1bc4c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 12:29:10 compute-0 systemd[1]: Started libpod-conmon-2a30b630ea197f0a3ea539b45e735841a63817af922bcb63b386f87cf1bc4c50.scope.
Oct 02 12:29:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:29:10 compute-0 podman[327904]: 2025-10-02 12:29:10.331171126 +0000 UTC m=+0.110551329 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 12:29:10 compute-0 podman[327890]: 2025-10-02 12:29:10.476432288 +0000 UTC m=+0.447411912 container init 2a30b630ea197f0a3ea539b45e735841a63817af922bcb63b386f87cf1bc4c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct 02 12:29:10 compute-0 podman[327890]: 2025-10-02 12:29:10.487866009 +0000 UTC m=+0.458845633 container start 2a30b630ea197f0a3ea539b45e735841a63817af922bcb63b386f87cf1bc4c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 12:29:10 compute-0 elegant_shirley[327940]: 167 167
Oct 02 12:29:10 compute-0 systemd[1]: libpod-2a30b630ea197f0a3ea539b45e735841a63817af922bcb63b386f87cf1bc4c50.scope: Deactivated successfully.
Oct 02 12:29:10 compute-0 podman[327890]: 2025-10-02 12:29:10.549368441 +0000 UTC m=+0.520348045 container attach 2a30b630ea197f0a3ea539b45e735841a63817af922bcb63b386f87cf1bc4c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shirley, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 12:29:10 compute-0 podman[327890]: 2025-10-02 12:29:10.55096399 +0000 UTC m=+0.521943594 container died 2a30b630ea197f0a3ea539b45e735841a63817af922bcb63b386f87cf1bc4c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shirley, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 12:29:10 compute-0 ceph-mon[73607]: pgmap v2021: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 281 KiB/s rd, 5.3 MiB/s wr, 139 op/s
Oct 02 12:29:10 compute-0 podman[327905]: 2025-10-02 12:29:10.791189645 +0000 UTC m=+0.564033937 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:29:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:10.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:10 compute-0 nova_compute[257802]: 2025-10-02 12:29:10.997 2 DEBUG oslo_concurrency.lockutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Releasing lock "refresh_cache-634c38a6-caab-410d-8748-3ec1fd6f9cdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:29:10 compute-0 nova_compute[257802]: 2025-10-02 12:29:10.997 2 DEBUG nova.compute.manager [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Instance network_info: |[{"id": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "address": "fa:16:3e:23:4b:2a", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb76b92a4-18", "ovs_interfaceid": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:29:10 compute-0 nova_compute[257802]: 2025-10-02 12:29:10.998 2 DEBUG oslo_concurrency.lockutils [req-8aaa4362-b731-40a3-8ee6-c01875cfe8f7 req-ffb623ae-7354-47eb-8e4e-9d010d02a8af d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-634c38a6-caab-410d-8748-3ec1fd6f9cdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:29:10 compute-0 nova_compute[257802]: 2025-10-02 12:29:10.999 2 DEBUG nova.network.neutron [req-8aaa4362-b731-40a3-8ee6-c01875cfe8f7 req-ffb623ae-7354-47eb-8e4e-9d010d02a8af d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Refreshing network info cache for port b76b92a4-1882-4f89-94f4-3a4700f9c379 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:29:11 compute-0 nova_compute[257802]: 2025-10-02 12:29:11.005 2 DEBUG nova.virt.libvirt.driver [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Start _get_guest_xml network_info=[{"id": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "address": "fa:16:3e:23:4b:2a", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb76b92a4-18", "ovs_interfaceid": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:29:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd574d1ef4048129f53305520739fd84636490e7ad0bcb72689ba85033c6611f-merged.mount: Deactivated successfully.
Oct 02 12:29:11 compute-0 nova_compute[257802]: 2025-10-02 12:29:11.012 2 WARNING nova.virt.libvirt.driver [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:29:11 compute-0 nova_compute[257802]: 2025-10-02 12:29:11.038 2 DEBUG nova.virt.libvirt.host [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:29:11 compute-0 nova_compute[257802]: 2025-10-02 12:29:11.039 2 DEBUG nova.virt.libvirt.host [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:29:11 compute-0 nova_compute[257802]: 2025-10-02 12:29:11.049 2 DEBUG nova.virt.libvirt.host [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:29:11 compute-0 nova_compute[257802]: 2025-10-02 12:29:11.050 2 DEBUG nova.virt.libvirt.host [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:29:11 compute-0 nova_compute[257802]: 2025-10-02 12:29:11.051 2 DEBUG nova.virt.libvirt.driver [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:29:11 compute-0 nova_compute[257802]: 2025-10-02 12:29:11.051 2 DEBUG nova.virt.hardware [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:29:11 compute-0 nova_compute[257802]: 2025-10-02 12:29:11.052 2 DEBUG nova.virt.hardware [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:29:11 compute-0 nova_compute[257802]: 2025-10-02 12:29:11.052 2 DEBUG nova.virt.hardware [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:29:11 compute-0 nova_compute[257802]: 2025-10-02 12:29:11.053 2 DEBUG nova.virt.hardware [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:29:11 compute-0 nova_compute[257802]: 2025-10-02 12:29:11.053 2 DEBUG nova.virt.hardware [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:29:11 compute-0 nova_compute[257802]: 2025-10-02 12:29:11.053 2 DEBUG nova.virt.hardware [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:29:11 compute-0 nova_compute[257802]: 2025-10-02 12:29:11.053 2 DEBUG nova.virt.hardware [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:29:11 compute-0 nova_compute[257802]: 2025-10-02 12:29:11.054 2 DEBUG nova.virt.hardware [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:29:11 compute-0 nova_compute[257802]: 2025-10-02 12:29:11.054 2 DEBUG nova.virt.hardware [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:29:11 compute-0 nova_compute[257802]: 2025-10-02 12:29:11.054 2 DEBUG nova.virt.hardware [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:29:11 compute-0 nova_compute[257802]: 2025-10-02 12:29:11.055 2 DEBUG nova.virt.hardware [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:29:11 compute-0 nova_compute[257802]: 2025-10-02 12:29:11.058 2 DEBUG oslo_concurrency.processutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:11 compute-0 podman[327890]: 2025-10-02 12:29:11.243702162 +0000 UTC m=+1.214681806 container remove 2a30b630ea197f0a3ea539b45e735841a63817af922bcb63b386f87cf1bc4c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 12:29:11 compute-0 podman[327906]: 2025-10-02 12:29:11.267548508 +0000 UTC m=+1.031901391 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, container_name=iscsid)
Oct 02 12:29:11 compute-0 systemd[1]: libpod-conmon-2a30b630ea197f0a3ea539b45e735841a63817af922bcb63b386f87cf1bc4c50.scope: Deactivated successfully.
Oct 02 12:29:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:11.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:29:11 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1274200122' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:11 compute-0 nova_compute[257802]: 2025-10-02 12:29:11.539 2 DEBUG oslo_concurrency.processutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:11 compute-0 podman[328004]: 2025-10-02 12:29:11.467530324 +0000 UTC m=+0.048024742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:29:11 compute-0 nova_compute[257802]: 2025-10-02 12:29:11.575 2 DEBUG nova.storage.rbd_utils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] rbd image 634c38a6-caab-410d-8748-3ec1fd6f9cdc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:29:11 compute-0 nova_compute[257802]: 2025-10-02 12:29:11.579 2 DEBUG oslo_concurrency.processutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 256 KiB/s rd, 3.2 MiB/s wr, 104 op/s
Oct 02 12:29:11 compute-0 nova_compute[257802]: 2025-10-02 12:29:11.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:11 compute-0 podman[328004]: 2025-10-02 12:29:11.717704205 +0000 UTC m=+0.298198613 container create 0f6d99c3cd7a03fdf79fe2a7e857a34c9b5058b7c18065c02a7104b3b3d977b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_margulis, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 12:29:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1274200122' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:11 compute-0 systemd[1]: Started libpod-conmon-0f6d99c3cd7a03fdf79fe2a7e857a34c9b5058b7c18065c02a7104b3b3d977b6.scope.
Oct 02 12:29:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:29:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e48488afd2195c4f4dd2668a0917813dcab970cce6aca4714d8b2c7007735e3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e48488afd2195c4f4dd2668a0917813dcab970cce6aca4714d8b2c7007735e3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e48488afd2195c4f4dd2668a0917813dcab970cce6aca4714d8b2c7007735e3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e48488afd2195c4f4dd2668a0917813dcab970cce6aca4714d8b2c7007735e3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:11 compute-0 nova_compute[257802]: 2025-10-02 12:29:11.930 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Updating instance_info_cache with network_info: [{"id": "f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90", "address": "fa:16:3e:76:07:a0", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f0fd9c-fb", "ovs_interfaceid": "f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:29:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:29:12 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/364650842' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:12 compute-0 podman[328004]: 2025-10-02 12:29:12.014928032 +0000 UTC m=+0.595422460 container init 0f6d99c3cd7a03fdf79fe2a7e857a34c9b5058b7c18065c02a7104b3b3d977b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.023 2 DEBUG oslo_concurrency.processutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.026 2 DEBUG nova.virt.libvirt.vif [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:28:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1187258695',display_name='tempest-ServerActionsTestJSON-server-1187258695',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1187258695',id=112,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGPR3K1CRfy8NuVFf5q7pocTRVWdkUGpXAwygQtld+YJHetWc29OTANCyHNFo7YQv+XOnhZt50IrR5VORh36EWFuQsqglHMqAdAjWfcmv1078iyeLOwVFkKMjcTINNdhYA==',key_name='tempest-keypair-579122016',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b8ca48cb5f64ef3b0736b8be82378b8',ramdisk_id='',reservation_id='r-8nz9nuro',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-842270816',owner_user_name='tempest-ServerActionsTestJSON-842270816-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:28:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2bd16d1f5f9d4eb396c474eedee67165',uuid=634c38a6-caab-410d-8748-3ec1fd6f9cdc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "address": "fa:16:3e:23:4b:2a", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb76b92a4-18", "ovs_interfaceid": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.026 2 DEBUG nova.network.os_vif_util [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Converting VIF {"id": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "address": "fa:16:3e:23:4b:2a", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb76b92a4-18", "ovs_interfaceid": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.028 2 DEBUG nova.network.os_vif_util [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:4b:2a,bridge_name='br-int',has_traffic_filtering=True,id=b76b92a4-1882-4f89-94f4-3a4700f9c379,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb76b92a4-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:29:12 compute-0 podman[328004]: 2025-10-02 12:29:12.03068157 +0000 UTC m=+0.611175968 container start 0f6d99c3cd7a03fdf79fe2a7e857a34c9b5058b7c18065c02a7104b3b3d977b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_margulis, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.031 2 DEBUG nova.objects.instance [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 634c38a6-caab-410d-8748-3ec1fd6f9cdc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:12 compute-0 podman[328004]: 2025-10-02 12:29:12.069464703 +0000 UTC m=+0.649959131 container attach 0f6d99c3cd7a03fdf79fe2a7e857a34c9b5058b7c18065c02a7104b3b3d977b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_margulis, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.442 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-24caf505-35fd-40c1-9bcc-1f83580b142b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.443 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.443 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.444 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.457 2 DEBUG nova.virt.libvirt.driver [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:29:12 compute-0 nova_compute[257802]:   <uuid>634c38a6-caab-410d-8748-3ec1fd6f9cdc</uuid>
Oct 02 12:29:12 compute-0 nova_compute[257802]:   <name>instance-00000070</name>
Oct 02 12:29:12 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:29:12 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:29:12 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:29:12 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:       <nova:name>tempest-ServerActionsTestJSON-server-1187258695</nova:name>
Oct 02 12:29:12 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:29:11</nova:creationTime>
Oct 02 12:29:12 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:29:12 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:29:12 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:29:12 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:29:12 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:29:12 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:29:12 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:29:12 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:29:12 compute-0 nova_compute[257802]:         <nova:user uuid="2bd16d1f5f9d4eb396c474eedee67165">tempest-ServerActionsTestJSON-842270816-project-member</nova:user>
Oct 02 12:29:12 compute-0 nova_compute[257802]:         <nova:project uuid="4b8ca48cb5f64ef3b0736b8be82378b8">tempest-ServerActionsTestJSON-842270816</nova:project>
Oct 02 12:29:12 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:29:12 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:29:12 compute-0 nova_compute[257802]:         <nova:port uuid="b76b92a4-1882-4f89-94f4-3a4700f9c379">
Oct 02 12:29:12 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:29:12 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:29:12 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:29:12 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <system>
Oct 02 12:29:12 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:29:12 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:29:12 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:29:12 compute-0 nova_compute[257802]:       <entry name="serial">634c38a6-caab-410d-8748-3ec1fd6f9cdc</entry>
Oct 02 12:29:12 compute-0 nova_compute[257802]:       <entry name="uuid">634c38a6-caab-410d-8748-3ec1fd6f9cdc</entry>
Oct 02 12:29:12 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     </system>
Oct 02 12:29:12 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:29:12 compute-0 nova_compute[257802]:   <os>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:   </os>
Oct 02 12:29:12 compute-0 nova_compute[257802]:   <features>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:   </features>
Oct 02 12:29:12 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:29:12 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:29:12 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:29:12 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/634c38a6-caab-410d-8748-3ec1fd6f9cdc_disk">
Oct 02 12:29:12 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:       </source>
Oct 02 12:29:12 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:29:12 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:29:12 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:29:12 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/634c38a6-caab-410d-8748-3ec1fd6f9cdc_disk.config">
Oct 02 12:29:12 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:       </source>
Oct 02 12:29:12 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:29:12 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:29:12 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:29:12 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:23:4b:2a"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:       <target dev="tapb76b92a4-18"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:29:12 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/634c38a6-caab-410d-8748-3ec1fd6f9cdc/console.log" append="off"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <video>
Oct 02 12:29:12 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     </video>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:29:12 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:29:12 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:29:12 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:29:12 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:29:12 compute-0 nova_compute[257802]: </domain>
Oct 02 12:29:12 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.463 2 DEBUG nova.compute.manager [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Preparing to wait for external event network-vif-plugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.464 2 DEBUG oslo_concurrency.lockutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquiring lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.464 2 DEBUG oslo_concurrency.lockutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.464 2 DEBUG oslo_concurrency.lockutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.465 2 DEBUG nova.virt.libvirt.vif [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:28:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1187258695',display_name='tempest-ServerActionsTestJSON-server-1187258695',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1187258695',id=112,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGPR3K1CRfy8NuVFf5q7pocTRVWdkUGpXAwygQtld+YJHetWc29OTANCyHNFo7YQv+XOnhZt50IrR5VORh36EWFuQsqglHMqAdAjWfcmv1078iyeLOwVFkKMjcTINNdhYA==',key_name='tempest-keypair-579122016',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b8ca48cb5f64ef3b0736b8be82378b8',ramdisk_id='',reservation_id='r-8nz9nuro',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-842270816',owner_user_name='tempest-ServerActionsTestJSON-842270816-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:28:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2bd16d1f5f9d4eb396c474eedee67165',uuid=634c38a6-caab-410d-8748-3ec1fd6f9cdc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "address": "fa:16:3e:23:4b:2a", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb76b92a4-18", "ovs_interfaceid": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.466 2 DEBUG nova.network.os_vif_util [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Converting VIF {"id": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "address": "fa:16:3e:23:4b:2a", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb76b92a4-18", "ovs_interfaceid": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.466 2 DEBUG nova.network.os_vif_util [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:4b:2a,bridge_name='br-int',has_traffic_filtering=True,id=b76b92a4-1882-4f89-94f4-3a4700f9c379,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb76b92a4-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.467 2 DEBUG os_vif [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:4b:2a,bridge_name='br-int',has_traffic_filtering=True,id=b76b92a4-1882-4f89-94f4-3a4700f9c379,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb76b92a4-18') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.468 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.468 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.473 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb76b92a4-18, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.474 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb76b92a4-18, col_values=(('external_ids', {'iface-id': 'b76b92a4-1882-4f89-94f4-3a4700f9c379', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:23:4b:2a', 'vm-uuid': '634c38a6-caab-410d-8748-3ec1fd6f9cdc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:29:12 compute-0 NetworkManager[44987]: <info>  [1759408152.4772] manager: (tapb76b92a4-18): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/233)
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.485 2 INFO os_vif [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:4b:2a,bridge_name='br-int',has_traffic_filtering=True,id=b76b92a4-1882-4f89-94f4-3a4700f9c379,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb76b92a4-18')
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.690 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.693 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.694 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.694 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.695 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:29:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:29:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:29:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:29:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:29:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:29:12 compute-0 tender_margulis[328061]: {
Oct 02 12:29:12 compute-0 tender_margulis[328061]:     "1": [
Oct 02 12:29:12 compute-0 tender_margulis[328061]:         {
Oct 02 12:29:12 compute-0 tender_margulis[328061]:             "devices": [
Oct 02 12:29:12 compute-0 tender_margulis[328061]:                 "/dev/loop3"
Oct 02 12:29:12 compute-0 tender_margulis[328061]:             ],
Oct 02 12:29:12 compute-0 tender_margulis[328061]:             "lv_name": "ceph_lv0",
Oct 02 12:29:12 compute-0 tender_margulis[328061]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:29:12 compute-0 tender_margulis[328061]:             "lv_size": "7511998464",
Oct 02 12:29:12 compute-0 tender_margulis[328061]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:29:12 compute-0 tender_margulis[328061]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:29:12 compute-0 tender_margulis[328061]:             "name": "ceph_lv0",
Oct 02 12:29:12 compute-0 tender_margulis[328061]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:29:12 compute-0 tender_margulis[328061]:             "tags": {
Oct 02 12:29:12 compute-0 tender_margulis[328061]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:29:12 compute-0 tender_margulis[328061]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:29:12 compute-0 tender_margulis[328061]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:29:12 compute-0 tender_margulis[328061]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:29:12 compute-0 tender_margulis[328061]:                 "ceph.cluster_name": "ceph",
Oct 02 12:29:12 compute-0 tender_margulis[328061]:                 "ceph.crush_device_class": "",
Oct 02 12:29:12 compute-0 tender_margulis[328061]:                 "ceph.encrypted": "0",
Oct 02 12:29:12 compute-0 tender_margulis[328061]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:29:12 compute-0 tender_margulis[328061]:                 "ceph.osd_id": "1",
Oct 02 12:29:12 compute-0 tender_margulis[328061]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:29:12 compute-0 tender_margulis[328061]:                 "ceph.type": "block",
Oct 02 12:29:12 compute-0 tender_margulis[328061]:                 "ceph.vdo": "0"
Oct 02 12:29:12 compute-0 tender_margulis[328061]:             },
Oct 02 12:29:12 compute-0 tender_margulis[328061]:             "type": "block",
Oct 02 12:29:12 compute-0 tender_margulis[328061]:             "vg_name": "ceph_vg0"
Oct 02 12:29:12 compute-0 tender_margulis[328061]:         }
Oct 02 12:29:12 compute-0 tender_margulis[328061]:     ]
Oct 02 12:29:12 compute-0 tender_margulis[328061]: }
Oct 02 12:29:12 compute-0 systemd[1]: libpod-0f6d99c3cd7a03fdf79fe2a7e857a34c9b5058b7c18065c02a7104b3b3d977b6.scope: Deactivated successfully.
Oct 02 12:29:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:12.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:12 compute-0 podman[328087]: 2025-10-02 12:29:12.906560733 +0000 UTC m=+0.036066618 container died 0f6d99c3cd7a03fdf79fe2a7e857a34c9b5058b7c18065c02a7104b3b3d977b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 12:29:12 compute-0 nova_compute[257802]: 2025-10-02 12:29:12.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:12 compute-0 ceph-mon[73607]: pgmap v2022: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 256 KiB/s rd, 3.2 MiB/s wr, 104 op/s
Oct 02 12:29:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/364650842' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:13 compute-0 nova_compute[257802]: 2025-10-02 12:29:13.105 2 DEBUG nova.virt.libvirt.driver [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:29:13 compute-0 nova_compute[257802]: 2025-10-02 12:29:13.106 2 DEBUG nova.virt.libvirt.driver [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:29:13 compute-0 nova_compute[257802]: 2025-10-02 12:29:13.106 2 DEBUG nova.virt.libvirt.driver [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] No VIF found with MAC fa:16:3e:23:4b:2a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:29:13 compute-0 nova_compute[257802]: 2025-10-02 12:29:13.107 2 INFO nova.virt.libvirt.driver [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Using config drive
Oct 02 12:29:13 compute-0 nova_compute[257802]: 2025-10-02 12:29:13.143 2 DEBUG nova.storage.rbd_utils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] rbd image 634c38a6-caab-410d-8748-3ec1fd6f9cdc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:29:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:29:13 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2154318410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:13 compute-0 nova_compute[257802]: 2025-10-02 12:29:13.268 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-e48488afd2195c4f4dd2668a0917813dcab970cce6aca4714d8b2c7007735e3b-merged.mount: Deactivated successfully.
Oct 02 12:29:13 compute-0 nova_compute[257802]: 2025-10-02 12:29:13.380 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:29:13 compute-0 nova_compute[257802]: 2025-10-02 12:29:13.381 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:29:13 compute-0 nova_compute[257802]: 2025-10-02 12:29:13.387 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:29:13 compute-0 nova_compute[257802]: 2025-10-02 12:29:13.388 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:29:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:29:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:13.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:29:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:13 compute-0 nova_compute[257802]: 2025-10-02 12:29:13.539 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:29:13 compute-0 nova_compute[257802]: 2025-10-02 12:29:13.540 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4163MB free_disk=20.876243591308594GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:29:13 compute-0 nova_compute[257802]: 2025-10-02 12:29:13.541 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:13 compute-0 nova_compute[257802]: 2025-10-02 12:29:13.541 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2023: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 256 KiB/s rd, 3.2 MiB/s wr, 104 op/s
Oct 02 12:29:13 compute-0 nova_compute[257802]: 2025-10-02 12:29:13.828 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 24caf505-35fd-40c1-9bcc-1f83580b142b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:29:13 compute-0 nova_compute[257802]: 2025-10-02 12:29:13.828 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 634c38a6-caab-410d-8748-3ec1fd6f9cdc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:29:13 compute-0 nova_compute[257802]: 2025-10-02 12:29:13.828 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:29:13 compute-0 nova_compute[257802]: 2025-10-02 12:29:13.829 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:29:13 compute-0 podman[328087]: 2025-10-02 12:29:13.928372305 +0000 UTC m=+1.057878200 container remove 0f6d99c3cd7a03fdf79fe2a7e857a34c9b5058b7c18065c02a7104b3b3d977b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 12:29:13 compute-0 systemd[1]: libpod-conmon-0f6d99c3cd7a03fdf79fe2a7e857a34c9b5058b7c18065c02a7104b3b3d977b6.scope: Deactivated successfully.
Oct 02 12:29:13 compute-0 sudo[327822]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:14 compute-0 nova_compute[257802]: 2025-10-02 12:29:14.010 2 INFO nova.virt.libvirt.driver [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Creating config drive at /var/lib/nova/instances/634c38a6-caab-410d-8748-3ec1fd6f9cdc/disk.config
Oct 02 12:29:14 compute-0 nova_compute[257802]: 2025-10-02 12:29:14.017 2 DEBUG oslo_concurrency.processutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/634c38a6-caab-410d-8748-3ec1fd6f9cdc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7u2iz92p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:14 compute-0 sudo[328130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:14 compute-0 nova_compute[257802]: 2025-10-02 12:29:14.052 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing inventories for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:29:14 compute-0 sudo[328130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:14 compute-0 sudo[328130]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2154318410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/884128648' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/953025965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:14 compute-0 sudo[328158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:29:14 compute-0 sudo[328158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:14 compute-0 sudo[328158]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:14 compute-0 nova_compute[257802]: 2025-10-02 12:29:14.160 2 DEBUG oslo_concurrency.processutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/634c38a6-caab-410d-8748-3ec1fd6f9cdc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7u2iz92p" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:14 compute-0 sudo[328183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:14 compute-0 sudo[328183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:14 compute-0 sudo[328183]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:14 compute-0 nova_compute[257802]: 2025-10-02 12:29:14.194 2 DEBUG nova.storage.rbd_utils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] rbd image 634c38a6-caab-410d-8748-3ec1fd6f9cdc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:29:14 compute-0 nova_compute[257802]: 2025-10-02 12:29:14.199 2 DEBUG oslo_concurrency.processutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/634c38a6-caab-410d-8748-3ec1fd6f9cdc/disk.config 634c38a6-caab-410d-8748-3ec1fd6f9cdc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:14 compute-0 sudo[328222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:29:14 compute-0 sudo[328222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:14 compute-0 nova_compute[257802]: 2025-10-02 12:29:14.289 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating ProviderTree inventory for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:29:14 compute-0 nova_compute[257802]: 2025-10-02 12:29:14.289 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating inventory in ProviderTree for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:29:14 compute-0 nova_compute[257802]: 2025-10-02 12:29:14.317 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing aggregate associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:29:14 compute-0 nova_compute[257802]: 2025-10-02 12:29:14.344 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing trait associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:29:14 compute-0 nova_compute[257802]: 2025-10-02 12:29:14.375 2 DEBUG oslo_concurrency.processutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/634c38a6-caab-410d-8748-3ec1fd6f9cdc/disk.config 634c38a6-caab-410d-8748-3ec1fd6f9cdc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:14 compute-0 nova_compute[257802]: 2025-10-02 12:29:14.375 2 INFO nova.virt.libvirt.driver [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Deleting local config drive /var/lib/nova/instances/634c38a6-caab-410d-8748-3ec1fd6f9cdc/disk.config because it was imported into RBD.
Oct 02 12:29:14 compute-0 kernel: tapb76b92a4-18: entered promiscuous mode
Oct 02 12:29:14 compute-0 NetworkManager[44987]: <info>  [1759408154.4257] manager: (tapb76b92a4-18): new Tun device (/org/freedesktop/NetworkManager/Devices/234)
Oct 02 12:29:14 compute-0 ovn_controller[148183]: 2025-10-02T12:29:14Z|00480|binding|INFO|Claiming lport b76b92a4-1882-4f89-94f4-3a4700f9c379 for this chassis.
Oct 02 12:29:14 compute-0 ovn_controller[148183]: 2025-10-02T12:29:14Z|00481|binding|INFO|b76b92a4-1882-4f89-94f4-3a4700f9c379: Claiming fa:16:3e:23:4b:2a 10.100.0.10
Oct 02 12:29:14 compute-0 nova_compute[257802]: 2025-10-02 12:29:14.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:14 compute-0 systemd-udevd[328311]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:29:14 compute-0 NetworkManager[44987]: <info>  [1759408154.4690] device (tapb76b92a4-18): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:29:14 compute-0 NetworkManager[44987]: <info>  [1759408154.4702] device (tapb76b92a4-18): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:14.477 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:4b:2a 10.100.0.10'], port_security=['fa:16:3e:23:4b:2a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '634c38a6-caab-410d-8748-3ec1fd6f9cdc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b8ca48cb5f64ef3b0736b8be82378b8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '16cf92c4-b852-4373-864a-75cf05995c6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23ff60d5-33e2-45e9-a563-ce0081b7cc04, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=b76b92a4-1882-4f89-94f4-3a4700f9c379) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:29:14 compute-0 ovn_controller[148183]: 2025-10-02T12:29:14Z|00482|binding|INFO|Setting lport b76b92a4-1882-4f89-94f4-3a4700f9c379 ovn-installed in OVS
Oct 02 12:29:14 compute-0 ovn_controller[148183]: 2025-10-02T12:29:14Z|00483|binding|INFO|Setting lport b76b92a4-1882-4f89-94f4-3a4700f9c379 up in Southbound
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:14.479 158261 INFO neutron.agent.ovn.metadata.agent [-] Port b76b92a4-1882-4f89-94f4-3a4700f9c379 in datapath b2c62a66-f9bc-4a45-a843-aef2e12a7fff bound to our chassis
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:14.480 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b2c62a66-f9bc-4a45-a843-aef2e12a7fff
Oct 02 12:29:14 compute-0 nova_compute[257802]: 2025-10-02 12:29:14.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:14 compute-0 systemd-machined[211836]: New machine qemu-56-instance-00000070.
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:14.494 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cd282379-ddf9-4a65-b3cc-934b240c4f36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:14.495 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb2c62a66-f1 in ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:14.497 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb2c62a66-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:14.497 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d85ee4cd-07fd-4aca-b22a-fffffc08ef60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:14.498 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[317d4239-8ee8-49b1-af60-7213991cf685]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:14 compute-0 nova_compute[257802]: 2025-10-02 12:29:14.501 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:14 compute-0 systemd[1]: Started Virtual Machine qemu-56-instance-00000070.
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:14.513 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[b0751bb6-3ea2-48fb-b1a5-df5ef22d282a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:14.537 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a66d97bc-a79d-4c72-916b-47e3dcf476e9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:14.565 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[45b0cdf1-9c3e-47c3-8f14-52b53807126a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:14 compute-0 podman[328322]: 2025-10-02 12:29:14.570282537 +0000 UTC m=+0.052150183 container create 031650e3ff50b49c77ee8e754c68ec9abc12788a84b97543ffea7fec9be4367b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:14.573 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1c6e18cd-0a59-45b4-bb21-e02da67ef245]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:14 compute-0 NetworkManager[44987]: <info>  [1759408154.5742] manager: (tapb2c62a66-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/235)
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:14.605 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[08338725-c087-4a4e-a188-0c4a75ad92bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:14.609 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[7d6e0976-c1d6-4847-93a0-ea9980efaca1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:14 compute-0 systemd[1]: Started libpod-conmon-031650e3ff50b49c77ee8e754c68ec9abc12788a84b97543ffea7fec9be4367b.scope.
Oct 02 12:29:14 compute-0 NetworkManager[44987]: <info>  [1759408154.6296] device (tapb2c62a66-f0): carrier: link connected
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:14.634 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[e526e9ff-d291-4a2b-8fe4-d6bdd88cd249]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:14 compute-0 podman[328322]: 2025-10-02 12:29:14.545738174 +0000 UTC m=+0.027605820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:14.649 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[81adc362-f34c-4e2d-8adb-5a3cae52eee5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb2c62a66-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:07:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622225, 'reachable_time': 40861, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328381, 'error': None, 'target': 'ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:14.662 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6350cb82-a2b1-4e55-9e6d-fd706a4183c4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:7a7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622225, 'tstamp': 622225}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328391, 'error': None, 'target': 'ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:14 compute-0 podman[328322]: 2025-10-02 12:29:14.668155173 +0000 UTC m=+0.150022839 container init 031650e3ff50b49c77ee8e754c68ec9abc12788a84b97543ffea7fec9be4367b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_montalcini, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:29:14 compute-0 podman[328322]: 2025-10-02 12:29:14.675468564 +0000 UTC m=+0.157336210 container start 031650e3ff50b49c77ee8e754c68ec9abc12788a84b97543ffea7fec9be4367b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:29:14 compute-0 podman[328322]: 2025-10-02 12:29:14.678839476 +0000 UTC m=+0.160707222 container attach 031650e3ff50b49c77ee8e754c68ec9abc12788a84b97543ffea7fec9be4367b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 12:29:14 compute-0 competent_montalcini[328370]: 167 167
Oct 02 12:29:14 compute-0 systemd[1]: libpod-031650e3ff50b49c77ee8e754c68ec9abc12788a84b97543ffea7fec9be4367b.scope: Deactivated successfully.
Oct 02 12:29:14 compute-0 podman[328322]: 2025-10-02 12:29:14.681637205 +0000 UTC m=+0.163504841 container died 031650e3ff50b49c77ee8e754c68ec9abc12788a84b97543ffea7fec9be4367b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_montalcini, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:14.682 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[45109059-b0cc-47cb-904d-a63d39aa3d95]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb2c62a66-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:07:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622225, 'reachable_time': 40861, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 328392, 'error': None, 'target': 'ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-19cebe90b78e8c328efccc314b4e93241060dc9574ca3ca4bfc9450b5d92100d-merged.mount: Deactivated successfully.
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:14.715 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1ad941b4-8ce1-40e1-bf03-32e8cb9b8297]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:14 compute-0 podman[328322]: 2025-10-02 12:29:14.731778568 +0000 UTC m=+0.213646224 container remove 031650e3ff50b49c77ee8e754c68ec9abc12788a84b97543ffea7fec9be4367b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_montalcini, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:29:14 compute-0 systemd[1]: libpod-conmon-031650e3ff50b49c77ee8e754c68ec9abc12788a84b97543ffea7fec9be4367b.scope: Deactivated successfully.
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:14.777 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[eecaee8c-330f-458c-a0a8-3a2e877f0487]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:14.778 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2c62a66-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:14.778 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:14.779 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb2c62a66-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:14 compute-0 NetworkManager[44987]: <info>  [1759408154.7813] manager: (tapb2c62a66-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/236)
Oct 02 12:29:14 compute-0 kernel: tapb2c62a66-f0: entered promiscuous mode
Oct 02 12:29:14 compute-0 nova_compute[257802]: 2025-10-02 12:29:14.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:14.793 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb2c62a66-f0, col_values=(('external_ids', {'iface-id': '8c1234f6-f595-4979-94c0-98da2211f0e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:14 compute-0 ovn_controller[148183]: 2025-10-02T12:29:14Z|00484|binding|INFO|Releasing lport 8c1234f6-f595-4979-94c0-98da2211f0e1 from this chassis (sb_readonly=0)
Oct 02 12:29:14 compute-0 nova_compute[257802]: 2025-10-02 12:29:14.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:14.806 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b2c62a66-f9bc-4a45-a843-aef2e12a7fff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b2c62a66-f9bc-4a45-a843-aef2e12a7fff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:14.807 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7d473656-e19a-49a5-bf1f-b3141f4d9427]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:14 compute-0 nova_compute[257802]: 2025-10-02 12:29:14.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:14.810 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-b2c62a66-f9bc-4a45-a843-aef2e12a7fff
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/b2c62a66-f9bc-4a45-a843-aef2e12a7fff.pid.haproxy
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID b2c62a66-f9bc-4a45-a843-aef2e12a7fff
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:29:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:14.812 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'env', 'PROCESS_TAG=haproxy-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b2c62a66-f9bc-4a45-a843-aef2e12a7fff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:29:14 compute-0 nova_compute[257802]: 2025-10-02 12:29:14.819 2 DEBUG nova.network.neutron [req-8aaa4362-b731-40a3-8ee6-c01875cfe8f7 req-ffb623ae-7354-47eb-8e4e-9d010d02a8af d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Updated VIF entry in instance network info cache for port b76b92a4-1882-4f89-94f4-3a4700f9c379. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:29:14 compute-0 nova_compute[257802]: 2025-10-02 12:29:14.820 2 DEBUG nova.network.neutron [req-8aaa4362-b731-40a3-8ee6-c01875cfe8f7 req-ffb623ae-7354-47eb-8e4e-9d010d02a8af d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Updating instance_info_cache with network_info: [{"id": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "address": "fa:16:3e:23:4b:2a", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb76b92a4-18", "ovs_interfaceid": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:29:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:14.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:14 compute-0 nova_compute[257802]: 2025-10-02 12:29:14.918 2 DEBUG oslo_concurrency.lockutils [req-8aaa4362-b731-40a3-8ee6-c01875cfe8f7 req-ffb623ae-7354-47eb-8e4e-9d010d02a8af d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-634c38a6-caab-410d-8748-3ec1fd6f9cdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:29:14 compute-0 podman[328423]: 2025-10-02 12:29:14.923533432 +0000 UTC m=+0.055130737 container create 45524fc2a9643316453d125f15a6ed78cf1bb67d4a96598cf1bf338ba246ac86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_burnell, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct 02 12:29:14 compute-0 systemd[1]: Started libpod-conmon-45524fc2a9643316453d125f15a6ed78cf1bb67d4a96598cf1bf338ba246ac86.scope.
Oct 02 12:29:14 compute-0 nova_compute[257802]: 2025-10-02 12:29:14.980 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:14 compute-0 nova_compute[257802]: 2025-10-02 12:29:14.988 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:29:14 compute-0 podman[328423]: 2025-10-02 12:29:14.904241488 +0000 UTC m=+0.035838823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:29:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:29:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/607187374e4d694b57597fad9c66d209aaf318e5a05c9e7e93c472b12208d47b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/607187374e4d694b57597fad9c66d209aaf318e5a05c9e7e93c472b12208d47b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/607187374e4d694b57597fad9c66d209aaf318e5a05c9e7e93c472b12208d47b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/607187374e4d694b57597fad9c66d209aaf318e5a05c9e7e93c472b12208d47b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:15 compute-0 nova_compute[257802]: 2025-10-02 12:29:15.014 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:29:15 compute-0 podman[328423]: 2025-10-02 12:29:15.034652594 +0000 UTC m=+0.166249919 container init 45524fc2a9643316453d125f15a6ed78cf1bb67d4a96598cf1bf338ba246ac86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 12:29:15 compute-0 podman[328423]: 2025-10-02 12:29:15.040659101 +0000 UTC m=+0.172256406 container start 45524fc2a9643316453d125f15a6ed78cf1bb67d4a96598cf1bf338ba246ac86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_burnell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 12:29:15 compute-0 podman[328423]: 2025-10-02 12:29:15.043525932 +0000 UTC m=+0.175123237 container attach 45524fc2a9643316453d125f15a6ed78cf1bb67d4a96598cf1bf338ba246ac86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_burnell, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 12:29:15 compute-0 nova_compute[257802]: 2025-10-02 12:29:15.086 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:29:15 compute-0 nova_compute[257802]: 2025-10-02 12:29:15.086 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:15 compute-0 nova_compute[257802]: 2025-10-02 12:29:15.087 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:15 compute-0 nova_compute[257802]: 2025-10-02 12:29:15.087 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:29:15 compute-0 ceph-mon[73607]: pgmap v2023: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 256 KiB/s rd, 3.2 MiB/s wr, 104 op/s
Oct 02 12:29:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2173427479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1321179616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:15 compute-0 nova_compute[257802]: 2025-10-02 12:29:15.142 2 DEBUG nova.compute.manager [req-01efd874-7149-49c7-8d8f-99d0cc54958a req-e365167e-8075-49eb-a9e2-97e8ecb056e2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Received event network-vif-plugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:15 compute-0 nova_compute[257802]: 2025-10-02 12:29:15.142 2 DEBUG oslo_concurrency.lockutils [req-01efd874-7149-49c7-8d8f-99d0cc54958a req-e365167e-8075-49eb-a9e2-97e8ecb056e2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:15 compute-0 nova_compute[257802]: 2025-10-02 12:29:15.143 2 DEBUG oslo_concurrency.lockutils [req-01efd874-7149-49c7-8d8f-99d0cc54958a req-e365167e-8075-49eb-a9e2-97e8ecb056e2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:15 compute-0 nova_compute[257802]: 2025-10-02 12:29:15.143 2 DEBUG oslo_concurrency.lockutils [req-01efd874-7149-49c7-8d8f-99d0cc54958a req-e365167e-8075-49eb-a9e2-97e8ecb056e2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:15 compute-0 nova_compute[257802]: 2025-10-02 12:29:15.143 2 DEBUG nova.compute.manager [req-01efd874-7149-49c7-8d8f-99d0cc54958a req-e365167e-8075-49eb-a9e2-97e8ecb056e2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Processing event network-vif-plugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:29:15 compute-0 podman[328467]: 2025-10-02 12:29:15.185080373 +0000 UTC m=+0.043804159 container create 6548ea7768ffdbff4c12a69fac43b6cfb49df0afea184cd3ae1d925ef0f14879 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 02 12:29:15 compute-0 systemd[1]: Started libpod-conmon-6548ea7768ffdbff4c12a69fac43b6cfb49df0afea184cd3ae1d925ef0f14879.scope.
Oct 02 12:29:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:29:15 compute-0 podman[328467]: 2025-10-02 12:29:15.162318232 +0000 UTC m=+0.021042038 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:29:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef26ed21f63f444cf640d7738bbaaa79d0b48af0ba1d929048e06f049b00dd1c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:15 compute-0 podman[328467]: 2025-10-02 12:29:15.273531097 +0000 UTC m=+0.132254903 container init 6548ea7768ffdbff4c12a69fac43b6cfb49df0afea184cd3ae1d925ef0f14879 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:29:15 compute-0 podman[328467]: 2025-10-02 12:29:15.27813606 +0000 UTC m=+0.136859836 container start 6548ea7768ffdbff4c12a69fac43b6cfb49df0afea184cd3ae1d925ef0f14879 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:29:15 compute-0 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[328483]: [NOTICE]   (328487) : New worker (328489) forked
Oct 02 12:29:15 compute-0 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[328483]: [NOTICE]   (328487) : Loading success.
Oct 02 12:29:15 compute-0 nova_compute[257802]: 2025-10-02 12:29:15.305 2 DEBUG oslo_concurrency.lockutils [None req-22474508-1e68-4725-9e88-42958a97555e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "24caf505-35fd-40c1-9bcc-1f83580b142b" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:15 compute-0 nova_compute[257802]: 2025-10-02 12:29:15.306 2 DEBUG oslo_concurrency.lockutils [None req-22474508-1e68-4725-9e88-42958a97555e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "24caf505-35fd-40c1-9bcc-1f83580b142b" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:15 compute-0 nova_compute[257802]: 2025-10-02 12:29:15.381 2 DEBUG nova.objects.instance [None req-22474508-1e68-4725-9e88-42958a97555e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lazy-loading 'flavor' on Instance uuid 24caf505-35fd-40c1-9bcc-1f83580b142b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:15 compute-0 nova_compute[257802]: 2025-10-02 12:29:15.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:15.421 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:29:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:15.423 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:29:15 compute-0 nova_compute[257802]: 2025-10-02 12:29:15.479 2 DEBUG oslo_concurrency.lockutils [None req-22474508-1e68-4725-9e88-42958a97555e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "24caf505-35fd-40c1-9bcc-1f83580b142b" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:15.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 326 KiB/s wr, 43 op/s
Oct 02 12:29:15 compute-0 awesome_burnell[328441]: {
Oct 02 12:29:15 compute-0 awesome_burnell[328441]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:29:15 compute-0 awesome_burnell[328441]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:29:15 compute-0 awesome_burnell[328441]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:29:15 compute-0 awesome_burnell[328441]:         "osd_id": 1,
Oct 02 12:29:15 compute-0 awesome_burnell[328441]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:29:15 compute-0 awesome_burnell[328441]:         "type": "bluestore"
Oct 02 12:29:15 compute-0 awesome_burnell[328441]:     }
Oct 02 12:29:15 compute-0 awesome_burnell[328441]: }
Oct 02 12:29:15 compute-0 systemd[1]: libpod-45524fc2a9643316453d125f15a6ed78cf1bb67d4a96598cf1bf338ba246ac86.scope: Deactivated successfully.
Oct 02 12:29:15 compute-0 podman[328423]: 2025-10-02 12:29:15.859388831 +0000 UTC m=+0.990986146 container died 45524fc2a9643316453d125f15a6ed78cf1bb67d4a96598cf1bf338ba246ac86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_burnell, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:29:15 compute-0 nova_compute[257802]: 2025-10-02 12:29:15.876 2 DEBUG oslo_concurrency.lockutils [None req-22474508-1e68-4725-9e88-42958a97555e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "24caf505-35fd-40c1-9bcc-1f83580b142b" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:15 compute-0 nova_compute[257802]: 2025-10-02 12:29:15.877 2 DEBUG oslo_concurrency.lockutils [None req-22474508-1e68-4725-9e88-42958a97555e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "24caf505-35fd-40c1-9bcc-1f83580b142b" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:15 compute-0 nova_compute[257802]: 2025-10-02 12:29:15.878 2 INFO nova.compute.manager [None req-22474508-1e68-4725-9e88-42958a97555e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Attaching volume 2a3104a8-1c9f-4fbb-9fab-8943dbcc7eb3 to /dev/vdb
Oct 02 12:29:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-607187374e4d694b57597fad9c66d209aaf318e5a05c9e7e93c472b12208d47b-merged.mount: Deactivated successfully.
Oct 02 12:29:15 compute-0 podman[328423]: 2025-10-02 12:29:15.923361654 +0000 UTC m=+1.054958959 container remove 45524fc2a9643316453d125f15a6ed78cf1bb67d4a96598cf1bf338ba246ac86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 12:29:15 compute-0 systemd[1]: libpod-conmon-45524fc2a9643316453d125f15a6ed78cf1bb67d4a96598cf1bf338ba246ac86.scope: Deactivated successfully.
Oct 02 12:29:15 compute-0 sudo[328222]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:29:15 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:29:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:29:15 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:29:15 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 63d5e0dc-6de3-48cc-a399-0a7c3d3d9b8b does not exist
Oct 02 12:29:15 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 2464a94e-5c10-44db-b33f-870b602a096f does not exist
Oct 02 12:29:15 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 15c21fbb-2bd9-4eea-be45-cb04b250be62 does not exist
Oct 02 12:29:16 compute-0 sudo[328567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:16 compute-0 sudo[328567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:16 compute-0 sudo[328567]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:16 compute-0 sudo[328598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:29:16 compute-0 sudo[328598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:16 compute-0 sudo[328598]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:16 compute-0 podman[328591]: 2025-10-02 12:29:16.163591589 +0000 UTC m=+0.088149858 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.230 2 DEBUG os_brick.utils [None req-22474508-1e68-4725-9e88-42958a97555e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.231 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.239 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.240 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[9e24ad57-34a5-4105-b163-9e83b69e5f2a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.241 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.247 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.247 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[f5485171-4a04-4e6c-94ca-8644e1530352]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.248 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.255 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.255 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[bd294ffa-d87a-46e5-99f2-ec98ab3a76b4]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.257 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[0336ea59-a09e-4904-8958-1d592d32e1b5]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.257 2 DEBUG oslo_concurrency.processutils [None req-22474508-1e68-4725-9e88-42958a97555e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.285 2 DEBUG oslo_concurrency.processutils [None req-22474508-1e68-4725-9e88-42958a97555e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.288 2 DEBUG os_brick.initiator.connectors.lightos [None req-22474508-1e68-4725-9e88-42958a97555e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.289 2 DEBUG os_brick.initiator.connectors.lightos [None req-22474508-1e68-4725-9e88-42958a97555e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.289 2 DEBUG os_brick.initiator.connectors.lightos [None req-22474508-1e68-4725-9e88-42958a97555e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.290 2 DEBUG os_brick.utils [None req-22474508-1e68-4725-9e88-42958a97555e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] <== get_connector_properties: return (59ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.290 2 DEBUG nova.virt.block_device [None req-22474508-1e68-4725-9e88-42958a97555e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Updating existing volume attachment record: 9234c666-4e90-4854-92be-0014295bc368 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.329 2 DEBUG nova.compute.manager [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.330 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408156.3270772, 634c38a6-caab-410d-8748-3ec1fd6f9cdc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.330 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] VM Started (Lifecycle Event)
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.336 2 DEBUG nova.virt.libvirt.driver [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.340 2 INFO nova.virt.libvirt.driver [-] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Instance spawned successfully.
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.341 2 DEBUG nova.virt.libvirt.driver [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.387 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.393 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.397 2 DEBUG nova.virt.libvirt.driver [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.398 2 DEBUG nova.virt.libvirt.driver [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.399 2 DEBUG nova.virt.libvirt.driver [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.399 2 DEBUG nova.virt.libvirt.driver [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.400 2 DEBUG nova.virt.libvirt.driver [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.400 2 DEBUG nova.virt.libvirt.driver [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.449 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.449 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408156.3330376, 634c38a6-caab-410d-8748-3ec1fd6f9cdc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.450 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] VM Paused (Lifecycle Event)
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.497 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.499 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408156.335432, 634c38a6-caab-410d-8748-3ec1fd6f9cdc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.500 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] VM Resumed (Lifecycle Event)
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.515 2 INFO nova.compute.manager [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Took 19.50 seconds to spawn the instance on the hypervisor.
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.516 2 DEBUG nova.compute.manager [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.528 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.531 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.555 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.583 2 INFO nova.compute.manager [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Took 21.37 seconds to build instance.
Oct 02 12:29:16 compute-0 nova_compute[257802]: 2025-10-02 12:29:16.624 2 DEBUG oslo_concurrency.lockutils [None req-e6da527b-c249-4791-8501-9e8e5405dcfd 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:29:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:16.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:29:16 compute-0 ceph-mon[73607]: pgmap v2024: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 326 KiB/s wr, 43 op/s
Oct 02 12:29:16 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:29:16 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:29:17 compute-0 nova_compute[257802]: 2025-10-02 12:29:17.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:17 compute-0 nova_compute[257802]: 2025-10-02 12:29:17.509 2 DEBUG nova.objects.instance [None req-22474508-1e68-4725-9e88-42958a97555e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lazy-loading 'flavor' on Instance uuid 24caf505-35fd-40c1-9bcc-1f83580b142b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:17.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2025: 305 pgs: 305 active+clean; 478 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 472 KiB/s rd, 913 KiB/s wr, 48 op/s
Oct 02 12:29:18 compute-0 nova_compute[257802]: 2025-10-02 12:29:18.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1564795986' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:29:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:18.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:29:18 compute-0 nova_compute[257802]: 2025-10-02 12:29:18.946 2 DEBUG nova.compute.manager [req-e837e92c-3097-4e03-aca9-ef19a5fa93e4 req-265c0233-6525-4837-9187-477f94944fda d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Received event network-vif-plugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:18 compute-0 nova_compute[257802]: 2025-10-02 12:29:18.947 2 DEBUG oslo_concurrency.lockutils [req-e837e92c-3097-4e03-aca9-ef19a5fa93e4 req-265c0233-6525-4837-9187-477f94944fda d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:18 compute-0 nova_compute[257802]: 2025-10-02 12:29:18.947 2 DEBUG oslo_concurrency.lockutils [req-e837e92c-3097-4e03-aca9-ef19a5fa93e4 req-265c0233-6525-4837-9187-477f94944fda d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:18 compute-0 nova_compute[257802]: 2025-10-02 12:29:18.947 2 DEBUG oslo_concurrency.lockutils [req-e837e92c-3097-4e03-aca9-ef19a5fa93e4 req-265c0233-6525-4837-9187-477f94944fda d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:18 compute-0 nova_compute[257802]: 2025-10-02 12:29:18.947 2 DEBUG nova.compute.manager [req-e837e92c-3097-4e03-aca9-ef19a5fa93e4 req-265c0233-6525-4837-9187-477f94944fda d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] No waiting events found dispatching network-vif-plugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:29:18 compute-0 nova_compute[257802]: 2025-10-02 12:29:18.948 2 WARNING nova.compute.manager [req-e837e92c-3097-4e03-aca9-ef19a5fa93e4 req-265c0233-6525-4837-9187-477f94944fda d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Received unexpected event network-vif-plugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 for instance with vm_state active and task_state None.
Oct 02 12:29:18 compute-0 nova_compute[257802]: 2025-10-02 12:29:18.963 2 DEBUG nova.virt.libvirt.driver [None req-22474508-1e68-4725-9e88-42958a97555e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Attempting to attach volume 2a3104a8-1c9f-4fbb-9fab-8943dbcc7eb3 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 12:29:18 compute-0 nova_compute[257802]: 2025-10-02 12:29:18.965 2 DEBUG nova.virt.libvirt.guest [None req-22474508-1e68-4725-9e88-42958a97555e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 12:29:18 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:29:18 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-2a3104a8-1c9f-4fbb-9fab-8943dbcc7eb3">
Oct 02 12:29:18 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:29:18 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:29:18 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:29:18 compute-0 nova_compute[257802]:   </source>
Oct 02 12:29:18 compute-0 nova_compute[257802]:   <auth username="openstack">
Oct 02 12:29:18 compute-0 nova_compute[257802]:     <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:29:18 compute-0 nova_compute[257802]:   </auth>
Oct 02 12:29:18 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:29:18 compute-0 nova_compute[257802]:   <serial>2a3104a8-1c9f-4fbb-9fab-8943dbcc7eb3</serial>
Oct 02 12:29:18 compute-0 nova_compute[257802]:   <shareable/>
Oct 02 12:29:18 compute-0 nova_compute[257802]: </disk>
Oct 02 12:29:18 compute-0 nova_compute[257802]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:29:19 compute-0 ceph-mon[73607]: pgmap v2025: 305 pgs: 305 active+clean; 478 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 472 KiB/s rd, 913 KiB/s wr, 48 op/s
Oct 02 12:29:19 compute-0 nova_compute[257802]: 2025-10-02 12:29:19.265 2 DEBUG nova.virt.libvirt.driver [None req-22474508-1e68-4725-9e88-42958a97555e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:29:19 compute-0 nova_compute[257802]: 2025-10-02 12:29:19.265 2 DEBUG nova.virt.libvirt.driver [None req-22474508-1e68-4725-9e88-42958a97555e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:29:19 compute-0 nova_compute[257802]: 2025-10-02 12:29:19.266 2 DEBUG nova.virt.libvirt.driver [None req-22474508-1e68-4725-9e88-42958a97555e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:29:19 compute-0 nova_compute[257802]: 2025-10-02 12:29:19.266 2 DEBUG nova.virt.libvirt.driver [None req-22474508-1e68-4725-9e88-42958a97555e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] No VIF found with MAC fa:16:3e:76:07:a0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:29:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:29:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:19.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:29:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Oct 02 12:29:19 compute-0 nova_compute[257802]: 2025-10-02 12:29:19.799 2 DEBUG oslo_concurrency.lockutils [None req-22474508-1e68-4725-9e88-42958a97555e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "24caf505-35fd-40c1-9bcc-1f83580b142b" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.922s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:20 compute-0 nova_compute[257802]: 2025-10-02 12:29:20.774 2 DEBUG nova.compute.manager [req-22a9a6f7-724f-4fef-a2c2-8f003e626ba8 req-f465d6e3-42a9-4468-b7b7-2be23a1b3419 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Received event network-changed-b76b92a4-1882-4f89-94f4-3a4700f9c379 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:20 compute-0 nova_compute[257802]: 2025-10-02 12:29:20.775 2 DEBUG nova.compute.manager [req-22a9a6f7-724f-4fef-a2c2-8f003e626ba8 req-f465d6e3-42a9-4468-b7b7-2be23a1b3419 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Refreshing instance network info cache due to event network-changed-b76b92a4-1882-4f89-94f4-3a4700f9c379. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:29:20 compute-0 nova_compute[257802]: 2025-10-02 12:29:20.775 2 DEBUG oslo_concurrency.lockutils [req-22a9a6f7-724f-4fef-a2c2-8f003e626ba8 req-f465d6e3-42a9-4468-b7b7-2be23a1b3419 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-634c38a6-caab-410d-8748-3ec1fd6f9cdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:29:20 compute-0 nova_compute[257802]: 2025-10-02 12:29:20.775 2 DEBUG oslo_concurrency.lockutils [req-22a9a6f7-724f-4fef-a2c2-8f003e626ba8 req-f465d6e3-42a9-4468-b7b7-2be23a1b3419 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-634c38a6-caab-410d-8748-3ec1fd6f9cdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:29:20 compute-0 nova_compute[257802]: 2025-10-02 12:29:20.775 2 DEBUG nova.network.neutron [req-22a9a6f7-724f-4fef-a2c2-8f003e626ba8 req-f465d6e3-42a9-4468-b7b7-2be23a1b3419 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Refreshing network info cache for port b76b92a4-1882-4f89-94f4-3a4700f9c379 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:29:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:29:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:20.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:29:21 compute-0 ceph-mon[73607]: pgmap v2026: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Oct 02 12:29:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:21.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2027: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 144 op/s
Oct 02 12:29:22 compute-0 nova_compute[257802]: 2025-10-02 12:29:22.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:22 compute-0 ceph-mon[73607]: pgmap v2027: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 144 op/s
Oct 02 12:29:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:29:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:22.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:29:23 compute-0 nova_compute[257802]: 2025-10-02 12:29:23.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:23.425 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:23.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2028: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 179 op/s
Oct 02 12:29:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3678077927' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:24 compute-0 sudo[328673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:24 compute-0 sudo[328673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:24 compute-0 sudo[328673]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:24 compute-0 sudo[328698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:24 compute-0 sudo[328698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:24 compute-0 sudo[328698]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:24 compute-0 ceph-mon[73607]: pgmap v2028: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 179 op/s
Oct 02 12:29:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1670572574' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:24.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:29:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:25.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:29:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2029: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 179 op/s
Oct 02 12:29:26 compute-0 nova_compute[257802]: 2025-10-02 12:29:26.537 2 DEBUG oslo_concurrency.lockutils [None req-6d3ed00b-15ce-4361-a124-a8f9f171810f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "24caf505-35fd-40c1-9bcc-1f83580b142b" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:26 compute-0 nova_compute[257802]: 2025-10-02 12:29:26.538 2 DEBUG oslo_concurrency.lockutils [None req-6d3ed00b-15ce-4361-a124-a8f9f171810f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "24caf505-35fd-40c1-9bcc-1f83580b142b" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:26 compute-0 nova_compute[257802]: 2025-10-02 12:29:26.556 2 INFO nova.compute.manager [None req-6d3ed00b-15ce-4361-a124-a8f9f171810f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Detaching volume 2a3104a8-1c9f-4fbb-9fab-8943dbcc7eb3
Oct 02 12:29:26 compute-0 nova_compute[257802]: 2025-10-02 12:29:26.732 2 INFO nova.virt.block_device [None req-6d3ed00b-15ce-4361-a124-a8f9f171810f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Attempting to driver detach volume 2a3104a8-1c9f-4fbb-9fab-8943dbcc7eb3 from mountpoint /dev/vdb
Oct 02 12:29:26 compute-0 nova_compute[257802]: 2025-10-02 12:29:26.741 2 DEBUG nova.virt.libvirt.driver [None req-6d3ed00b-15ce-4361-a124-a8f9f171810f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Attempting to detach device vdb from instance 24caf505-35fd-40c1-9bcc-1f83580b142b from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:29:26 compute-0 nova_compute[257802]: 2025-10-02 12:29:26.741 2 DEBUG nova.virt.libvirt.guest [None req-6d3ed00b-15ce-4361-a124-a8f9f171810f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:29:26 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:29:26 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-2a3104a8-1c9f-4fbb-9fab-8943dbcc7eb3">
Oct 02 12:29:26 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:29:26 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:29:26 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:29:26 compute-0 nova_compute[257802]:   </source>
Oct 02 12:29:26 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:29:26 compute-0 nova_compute[257802]:   <serial>2a3104a8-1c9f-4fbb-9fab-8943dbcc7eb3</serial>
Oct 02 12:29:26 compute-0 nova_compute[257802]:   <shareable/>
Oct 02 12:29:26 compute-0 nova_compute[257802]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:29:26 compute-0 nova_compute[257802]: </disk>
Oct 02 12:29:26 compute-0 nova_compute[257802]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:29:26 compute-0 nova_compute[257802]: 2025-10-02 12:29:26.749 2 INFO nova.virt.libvirt.driver [None req-6d3ed00b-15ce-4361-a124-a8f9f171810f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Successfully detached device vdb from instance 24caf505-35fd-40c1-9bcc-1f83580b142b from the persistent domain config.
Oct 02 12:29:26 compute-0 nova_compute[257802]: 2025-10-02 12:29:26.750 2 DEBUG nova.virt.libvirt.driver [None req-6d3ed00b-15ce-4361-a124-a8f9f171810f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 24caf505-35fd-40c1-9bcc-1f83580b142b from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 12:29:26 compute-0 nova_compute[257802]: 2025-10-02 12:29:26.750 2 DEBUG nova.virt.libvirt.guest [None req-6d3ed00b-15ce-4361-a124-a8f9f171810f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:29:26 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:29:26 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-2a3104a8-1c9f-4fbb-9fab-8943dbcc7eb3">
Oct 02 12:29:26 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:29:26 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:29:26 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:29:26 compute-0 nova_compute[257802]:   </source>
Oct 02 12:29:26 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:29:26 compute-0 nova_compute[257802]:   <serial>2a3104a8-1c9f-4fbb-9fab-8943dbcc7eb3</serial>
Oct 02 12:29:26 compute-0 nova_compute[257802]:   <shareable/>
Oct 02 12:29:26 compute-0 nova_compute[257802]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:29:26 compute-0 nova_compute[257802]: </disk>
Oct 02 12:29:26 compute-0 nova_compute[257802]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:29:26 compute-0 nova_compute[257802]: 2025-10-02 12:29:26.880 2 DEBUG nova.virt.libvirt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Received event <DeviceRemovedEvent: 1759408166.880173, 24caf505-35fd-40c1-9bcc-1f83580b142b => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 12:29:26 compute-0 nova_compute[257802]: 2025-10-02 12:29:26.882 2 DEBUG nova.virt.libvirt.driver [None req-6d3ed00b-15ce-4361-a124-a8f9f171810f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 24caf505-35fd-40c1-9bcc-1f83580b142b _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 12:29:26 compute-0 nova_compute[257802]: 2025-10-02 12:29:26.884 2 INFO nova.virt.libvirt.driver [None req-6d3ed00b-15ce-4361-a124-a8f9f171810f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Successfully detached device vdb from instance 24caf505-35fd-40c1-9bcc-1f83580b142b from the live domain config.
Oct 02 12:29:26 compute-0 ceph-mon[73607]: pgmap v2029: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 179 op/s
Oct 02 12:29:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:26.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:26.948 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:26.949 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:29:26.949 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:26 compute-0 nova_compute[257802]: 2025-10-02 12:29:26.994 2 DEBUG nova.network.neutron [req-22a9a6f7-724f-4fef-a2c2-8f003e626ba8 req-f465d6e3-42a9-4468-b7b7-2be23a1b3419 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Updated VIF entry in instance network info cache for port b76b92a4-1882-4f89-94f4-3a4700f9c379. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:29:26 compute-0 nova_compute[257802]: 2025-10-02 12:29:26.994 2 DEBUG nova.network.neutron [req-22a9a6f7-724f-4fef-a2c2-8f003e626ba8 req-f465d6e3-42a9-4468-b7b7-2be23a1b3419 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Updating instance_info_cache with network_info: [{"id": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "address": "fa:16:3e:23:4b:2a", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb76b92a4-18", "ovs_interfaceid": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:29:27 compute-0 nova_compute[257802]: 2025-10-02 12:29:27.018 2 DEBUG oslo_concurrency.lockutils [req-22a9a6f7-724f-4fef-a2c2-8f003e626ba8 req-f465d6e3-42a9-4468-b7b7-2be23a1b3419 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-634c38a6-caab-410d-8748-3ec1fd6f9cdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:29:27 compute-0 nova_compute[257802]: 2025-10-02 12:29:27.387 2 DEBUG nova.objects.instance [None req-6d3ed00b-15ce-4361-a124-a8f9f171810f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lazy-loading 'flavor' on Instance uuid 24caf505-35fd-40c1-9bcc-1f83580b142b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:27 compute-0 nova_compute[257802]: 2025-10-02 12:29:27.473 2 DEBUG oslo_concurrency.lockutils [None req-6d3ed00b-15ce-4361-a124-a8f9f171810f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "24caf505-35fd-40c1-9bcc-1f83580b142b" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.936s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:27 compute-0 nova_compute[257802]: 2025-10-02 12:29:27.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:29:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:27.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:29:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2030: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 181 op/s
Oct 02 12:29:28 compute-0 nova_compute[257802]: 2025-10-02 12:29:28.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:28.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:28 compute-0 ceph-mon[73607]: pgmap v2030: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 181 op/s
Oct 02 12:29:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:29.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 305 active+clean; 524 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.4 MiB/s wr, 195 op/s
Oct 02 12:29:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:30.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:30 compute-0 ceph-mon[73607]: pgmap v2031: 305 pgs: 305 active+clean; 524 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.4 MiB/s wr, 195 op/s
Oct 02 12:29:31 compute-0 ovn_controller[148183]: 2025-10-02T12:29:31Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:23:4b:2a 10.100.0.10
Oct 02 12:29:31 compute-0 ovn_controller[148183]: 2025-10-02T12:29:31Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:23:4b:2a 10.100.0.10
Oct 02 12:29:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:31.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2032: 305 pgs: 305 active+clean; 524 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.5 MiB/s wr, 73 op/s
Oct 02 12:29:32 compute-0 nova_compute[257802]: 2025-10-02 12:29:32.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:32.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:33 compute-0 nova_compute[257802]: 2025-10-02 12:29:33.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:33 compute-0 ceph-mon[73607]: pgmap v2032: 305 pgs: 305 active+clean; 524 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.5 MiB/s wr, 73 op/s
Oct 02 12:29:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:33.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 305 active+clean; 560 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.2 MiB/s wr, 205 op/s
Oct 02 12:29:34 compute-0 ceph-mon[73607]: pgmap v2033: 305 pgs: 305 active+clean; 560 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.2 MiB/s wr, 205 op/s
Oct 02 12:29:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:34.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:35.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2034: 305 pgs: 305 active+clean; 564 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 193 op/s
Oct 02 12:29:35 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3796285016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:29:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:36.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:29:36 compute-0 ceph-mon[73607]: pgmap v2034: 305 pgs: 305 active+clean; 564 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 193 op/s
Oct 02 12:29:37 compute-0 nova_compute[257802]: 2025-10-02 12:29:37.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:37.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 305 active+clean; 564 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 201 op/s
Oct 02 12:29:38 compute-0 nova_compute[257802]: 2025-10-02 12:29:38.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:29:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:38.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:29:39 compute-0 ceph-mon[73607]: pgmap v2035: 305 pgs: 305 active+clean; 564 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 201 op/s
Oct 02 12:29:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:39.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 305 active+clean; 591 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.5 MiB/s wr, 224 op/s
Oct 02 12:29:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:40.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:40 compute-0 podman[328735]: 2025-10-02 12:29:40.934367997 +0000 UTC m=+0.064346043 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:29:40 compute-0 podman[328736]: 2025-10-02 12:29:40.960803077 +0000 UTC m=+0.082488829 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 12:29:41 compute-0 ceph-mon[73607]: pgmap v2036: 305 pgs: 305 active+clean; 591 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.5 MiB/s wr, 224 op/s
Oct 02 12:29:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:41.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2037: 305 pgs: 305 active+clean; 591 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.0 MiB/s wr, 188 op/s
Oct 02 12:29:41 compute-0 nova_compute[257802]: 2025-10-02 12:29:41.682 2 DEBUG nova.compute.manager [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Oct 02 12:29:41 compute-0 nova_compute[257802]: 2025-10-02 12:29:41.801 2 DEBUG oslo_concurrency.lockutils [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:41 compute-0 nova_compute[257802]: 2025-10-02 12:29:41.802 2 DEBUG oslo_concurrency.lockutils [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:41 compute-0 nova_compute[257802]: 2025-10-02 12:29:41.831 2 DEBUG nova.objects.instance [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'pci_requests' on Instance uuid fced40d2-4fc7-4939-9fbb-bdb61d750526 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:41 compute-0 nova_compute[257802]: 2025-10-02 12:29:41.848 2 DEBUG nova.virt.hardware [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:29:41 compute-0 nova_compute[257802]: 2025-10-02 12:29:41.849 2 INFO nova.compute.claims [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:29:41 compute-0 nova_compute[257802]: 2025-10-02 12:29:41.849 2 DEBUG nova.objects.instance [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'resources' on Instance uuid fced40d2-4fc7-4939-9fbb-bdb61d750526 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:41 compute-0 nova_compute[257802]: 2025-10-02 12:29:41.870 2 DEBUG nova.objects.instance [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'pci_devices' on Instance uuid fced40d2-4fc7-4939-9fbb-bdb61d750526 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:41 compute-0 podman[328772]: 2025-10-02 12:29:41.918766589 +0000 UTC m=+0.052757308 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 02 12:29:41 compute-0 nova_compute[257802]: 2025-10-02 12:29:41.982 2 INFO nova.compute.resource_tracker [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Updating resource usage from migration 11ca3034-a0f1-41aa-9c5f-383e5dc24043
Oct 02 12:29:41 compute-0 nova_compute[257802]: 2025-10-02 12:29:41.983 2 DEBUG nova.compute.resource_tracker [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Starting to track incoming migration 11ca3034-a0f1-41aa-9c5f-383e5dc24043 with flavor eb3a53f1-304b-4cb0-acc3-abffce0fb181 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Oct 02 12:29:42 compute-0 nova_compute[257802]: 2025-10-02 12:29:42.119 2 DEBUG oslo_concurrency.processutils [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3513244713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:29:42
Oct 02 12:29:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:29:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:29:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['images', 'default.rgw.log', 'default.rgw.control', 'volumes', 'backups', '.mgr', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta']
Oct 02 12:29:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:29:42 compute-0 nova_compute[257802]: 2025-10-02 12:29:42.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:29:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:29:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:29:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:29:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:29:42 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1248791908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:29:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:29:42 compute-0 nova_compute[257802]: 2025-10-02 12:29:42.720 2 DEBUG oslo_concurrency.processutils [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:42 compute-0 nova_compute[257802]: 2025-10-02 12:29:42.729 2 DEBUG nova.compute.provider_tree [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:29:42 compute-0 nova_compute[257802]: 2025-10-02 12:29:42.759 2 DEBUG nova.scheduler.client.report [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:29:42 compute-0 nova_compute[257802]: 2025-10-02 12:29:42.789 2 DEBUG oslo_concurrency.lockutils [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.987s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:42 compute-0 nova_compute[257802]: 2025-10-02 12:29:42.790 2 INFO nova.compute.manager [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Migrating
Oct 02 12:29:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:42.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:43 compute-0 nova_compute[257802]: 2025-10-02 12:29:43.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:29:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:29:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:29:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:29:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:29:43 compute-0 ceph-mon[73607]: pgmap v2037: 305 pgs: 305 active+clean; 591 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.0 MiB/s wr, 188 op/s
Oct 02 12:29:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1248791908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:29:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:29:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:29:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:29:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:29:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:29:43 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/199411667' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:29:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:29:43 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/199411667' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:29:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:29:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:43.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:29:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2038: 305 pgs: 305 active+clean; 578 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.0 MiB/s wr, 223 op/s
Oct 02 12:29:44 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/199411667' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:29:44 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/199411667' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:29:44 compute-0 sudo[328817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:44 compute-0 sudo[328817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:44 compute-0 sudo[328817]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:44 compute-0 sudo[328842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:44 compute-0 sudo[328842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:44 compute-0 sudo[328842]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:29:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:44.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:29:45 compute-0 ceph-mon[73607]: pgmap v2038: 305 pgs: 305 active+clean; 578 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.0 MiB/s wr, 223 op/s
Oct 02 12:29:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3141947561' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2274156463' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:45.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2039: 305 pgs: 305 active+clean; 564 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 713 KiB/s rd, 2.9 MiB/s wr, 95 op/s
Oct 02 12:29:45 compute-0 sshd-session[328867]: Accepted publickey for nova from 192.168.122.101 port 40626 ssh2: ECDSA SHA256:RlBMWn3An7DGjBe9yfwGQtrEA9dOakLcJHFiZKvkVOc
Oct 02 12:29:45 compute-0 systemd[1]: Created slice User Slice of UID 42436.
Oct 02 12:29:45 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct 02 12:29:45 compute-0 systemd-logind[789]: New session 66 of user nova.
Oct 02 12:29:45 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct 02 12:29:45 compute-0 systemd[1]: Starting User Manager for UID 42436...
Oct 02 12:29:45 compute-0 systemd[328871]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:29:45 compute-0 systemd[328871]: Queued start job for default target Main User Target.
Oct 02 12:29:45 compute-0 systemd[328871]: Created slice User Application Slice.
Oct 02 12:29:45 compute-0 systemd[328871]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 02 12:29:45 compute-0 systemd[328871]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 12:29:45 compute-0 systemd[328871]: Reached target Paths.
Oct 02 12:29:45 compute-0 systemd[328871]: Reached target Timers.
Oct 02 12:29:45 compute-0 systemd[328871]: Starting D-Bus User Message Bus Socket...
Oct 02 12:29:45 compute-0 systemd[328871]: Starting Create User's Volatile Files and Directories...
Oct 02 12:29:45 compute-0 systemd[328871]: Finished Create User's Volatile Files and Directories.
Oct 02 12:29:45 compute-0 systemd[328871]: Listening on D-Bus User Message Bus Socket.
Oct 02 12:29:45 compute-0 systemd[328871]: Reached target Sockets.
Oct 02 12:29:45 compute-0 systemd[328871]: Reached target Basic System.
Oct 02 12:29:45 compute-0 systemd[328871]: Reached target Main User Target.
Oct 02 12:29:45 compute-0 systemd[328871]: Startup finished in 142ms.
Oct 02 12:29:45 compute-0 systemd[1]: Started User Manager for UID 42436.
Oct 02 12:29:45 compute-0 systemd[1]: Started Session 66 of User nova.
Oct 02 12:29:45 compute-0 sshd-session[328867]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:29:45 compute-0 sshd-session[328886]: Received disconnect from 192.168.122.101 port 40626:11: disconnected by user
Oct 02 12:29:45 compute-0 sshd-session[328886]: Disconnected from user nova 192.168.122.101 port 40626
Oct 02 12:29:45 compute-0 sshd-session[328867]: pam_unix(sshd:session): session closed for user nova
Oct 02 12:29:45 compute-0 systemd[1]: session-66.scope: Deactivated successfully.
Oct 02 12:29:45 compute-0 systemd-logind[789]: Session 66 logged out. Waiting for processes to exit.
Oct 02 12:29:45 compute-0 systemd-logind[789]: Removed session 66.
Oct 02 12:29:46 compute-0 sshd-session[328888]: Accepted publickey for nova from 192.168.122.101 port 40636 ssh2: ECDSA SHA256:RlBMWn3An7DGjBe9yfwGQtrEA9dOakLcJHFiZKvkVOc
Oct 02 12:29:46 compute-0 systemd-logind[789]: New session 68 of user nova.
Oct 02 12:29:46 compute-0 systemd[1]: Started Session 68 of User nova.
Oct 02 12:29:46 compute-0 sshd-session[328888]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:29:46 compute-0 sshd-session[328892]: Received disconnect from 192.168.122.101 port 40636:11: disconnected by user
Oct 02 12:29:46 compute-0 sshd-session[328892]: Disconnected from user nova 192.168.122.101 port 40636
Oct 02 12:29:46 compute-0 sshd-session[328888]: pam_unix(sshd:session): session closed for user nova
Oct 02 12:29:46 compute-0 systemd[1]: session-68.scope: Deactivated successfully.
Oct 02 12:29:46 compute-0 systemd-logind[789]: Session 68 logged out. Waiting for processes to exit.
Oct 02 12:29:46 compute-0 systemd-logind[789]: Removed session 68.
Oct 02 12:29:46 compute-0 podman[328891]: 2025-10-02 12:29:46.282236169 +0000 UTC m=+0.093417948 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 12:29:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:29:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:46.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:29:47 compute-0 ceph-mon[73607]: pgmap v2039: 305 pgs: 305 active+clean; 564 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 713 KiB/s rd, 2.9 MiB/s wr, 95 op/s
Oct 02 12:29:47 compute-0 nova_compute[257802]: 2025-10-02 12:29:47.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:47.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2040: 305 pgs: 305 active+clean; 556 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 554 KiB/s rd, 3.9 MiB/s wr, 117 op/s
Oct 02 12:29:48 compute-0 nova_compute[257802]: 2025-10-02 12:29:48.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:48 compute-0 ceph-mon[73607]: pgmap v2040: 305 pgs: 305 active+clean; 556 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 554 KiB/s rd, 3.9 MiB/s wr, 117 op/s
Oct 02 12:29:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:48.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:49 compute-0 nova_compute[257802]: 2025-10-02 12:29:49.374 2 DEBUG nova.compute.manager [req-1daa0814-95f4-4890-9341-23e6a297b4fa req-a8040f2a-33fc-4cd9-9448-a98c0d772cbd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received event network-vif-unplugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:49 compute-0 nova_compute[257802]: 2025-10-02 12:29:49.374 2 DEBUG oslo_concurrency.lockutils [req-1daa0814-95f4-4890-9341-23e6a297b4fa req-a8040f2a-33fc-4cd9-9448-a98c0d772cbd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:49 compute-0 nova_compute[257802]: 2025-10-02 12:29:49.374 2 DEBUG oslo_concurrency.lockutils [req-1daa0814-95f4-4890-9341-23e6a297b4fa req-a8040f2a-33fc-4cd9-9448-a98c0d772cbd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:49 compute-0 nova_compute[257802]: 2025-10-02 12:29:49.375 2 DEBUG oslo_concurrency.lockutils [req-1daa0814-95f4-4890-9341-23e6a297b4fa req-a8040f2a-33fc-4cd9-9448-a98c0d772cbd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:49 compute-0 nova_compute[257802]: 2025-10-02 12:29:49.375 2 DEBUG nova.compute.manager [req-1daa0814-95f4-4890-9341-23e6a297b4fa req-a8040f2a-33fc-4cd9-9448-a98c0d772cbd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] No waiting events found dispatching network-vif-unplugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:29:49 compute-0 nova_compute[257802]: 2025-10-02 12:29:49.375 2 WARNING nova.compute.manager [req-1daa0814-95f4-4890-9341-23e6a297b4fa req-a8040f2a-33fc-4cd9-9448-a98c0d772cbd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received unexpected event network-vif-unplugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 for instance with vm_state active and task_state resize_migrating.
Oct 02 12:29:49 compute-0 ovn_controller[148183]: 2025-10-02T12:29:49Z|00485|binding|INFO|Releasing lport 02b7597d-2fc1-4c56-8603-4dcb0c716c82 from this chassis (sb_readonly=0)
Oct 02 12:29:49 compute-0 ovn_controller[148183]: 2025-10-02T12:29:49Z|00486|binding|INFO|Releasing lport 8c1234f6-f595-4979-94c0-98da2211f0e1 from this chassis (sb_readonly=0)
Oct 02 12:29:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:49.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2041: 305 pgs: 305 active+clean; 564 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 487 KiB/s rd, 4.0 MiB/s wr, 137 op/s
Oct 02 12:29:49 compute-0 nova_compute[257802]: 2025-10-02 12:29:49.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:50 compute-0 nova_compute[257802]: 2025-10-02 12:29:50.260 2 INFO nova.network.neutron [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Updating port 7cb2acba-67d6-4041-97fe-10e2d80dcd21 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 02 12:29:50 compute-0 ceph-mon[73607]: pgmap v2041: 305 pgs: 305 active+clean; 564 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 487 KiB/s rd, 4.0 MiB/s wr, 137 op/s
Oct 02 12:29:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:50.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:51 compute-0 nova_compute[257802]: 2025-10-02 12:29:51.538 2 DEBUG nova.compute.manager [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:51 compute-0 nova_compute[257802]: 2025-10-02 12:29:51.539 2 DEBUG oslo_concurrency.lockutils [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:51 compute-0 nova_compute[257802]: 2025-10-02 12:29:51.539 2 DEBUG oslo_concurrency.lockutils [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:51 compute-0 nova_compute[257802]: 2025-10-02 12:29:51.539 2 DEBUG oslo_concurrency.lockutils [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:51 compute-0 nova_compute[257802]: 2025-10-02 12:29:51.539 2 DEBUG nova.compute.manager [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] No waiting events found dispatching network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:29:51 compute-0 nova_compute[257802]: 2025-10-02 12:29:51.540 2 WARNING nova.compute.manager [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received unexpected event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 for instance with vm_state active and task_state resize_migrated.
Oct 02 12:29:51 compute-0 nova_compute[257802]: 2025-10-02 12:29:51.540 2 DEBUG nova.compute.manager [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:51 compute-0 nova_compute[257802]: 2025-10-02 12:29:51.540 2 DEBUG oslo_concurrency.lockutils [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:51 compute-0 nova_compute[257802]: 2025-10-02 12:29:51.540 2 DEBUG oslo_concurrency.lockutils [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:51 compute-0 nova_compute[257802]: 2025-10-02 12:29:51.540 2 DEBUG oslo_concurrency.lockutils [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:51 compute-0 nova_compute[257802]: 2025-10-02 12:29:51.541 2 DEBUG nova.compute.manager [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] No waiting events found dispatching network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:29:51 compute-0 nova_compute[257802]: 2025-10-02 12:29:51.541 2 WARNING nova.compute.manager [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received unexpected event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 for instance with vm_state active and task_state resize_migrated.
Oct 02 12:29:51 compute-0 nova_compute[257802]: 2025-10-02 12:29:51.541 2 DEBUG nova.compute.manager [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:51 compute-0 nova_compute[257802]: 2025-10-02 12:29:51.541 2 DEBUG oslo_concurrency.lockutils [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:51 compute-0 nova_compute[257802]: 2025-10-02 12:29:51.542 2 DEBUG oslo_concurrency.lockutils [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:51 compute-0 nova_compute[257802]: 2025-10-02 12:29:51.542 2 DEBUG oslo_concurrency.lockutils [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:51 compute-0 nova_compute[257802]: 2025-10-02 12:29:51.542 2 DEBUG nova.compute.manager [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] No waiting events found dispatching network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:29:51 compute-0 nova_compute[257802]: 2025-10-02 12:29:51.542 2 WARNING nova.compute.manager [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received unexpected event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 for instance with vm_state active and task_state resize_migrated.
Oct 02 12:29:51 compute-0 nova_compute[257802]: 2025-10-02 12:29:51.542 2 DEBUG nova.compute.manager [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received event network-vif-unplugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:51 compute-0 nova_compute[257802]: 2025-10-02 12:29:51.542 2 DEBUG oslo_concurrency.lockutils [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:51 compute-0 nova_compute[257802]: 2025-10-02 12:29:51.543 2 DEBUG oslo_concurrency.lockutils [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:51 compute-0 nova_compute[257802]: 2025-10-02 12:29:51.543 2 DEBUG oslo_concurrency.lockutils [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:51 compute-0 nova_compute[257802]: 2025-10-02 12:29:51.543 2 DEBUG nova.compute.manager [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] No waiting events found dispatching network-vif-unplugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:29:51 compute-0 nova_compute[257802]: 2025-10-02 12:29:51.543 2 WARNING nova.compute.manager [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received unexpected event network-vif-unplugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 for instance with vm_state active and task_state resize_migrated.
Oct 02 12:29:51 compute-0 nova_compute[257802]: 2025-10-02 12:29:51.543 2 DEBUG nova.compute.manager [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:51 compute-0 nova_compute[257802]: 2025-10-02 12:29:51.544 2 DEBUG oslo_concurrency.lockutils [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:51 compute-0 nova_compute[257802]: 2025-10-02 12:29:51.544 2 DEBUG oslo_concurrency.lockutils [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:51 compute-0 nova_compute[257802]: 2025-10-02 12:29:51.544 2 DEBUG oslo_concurrency.lockutils [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:51 compute-0 nova_compute[257802]: 2025-10-02 12:29:51.544 2 DEBUG nova.compute.manager [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] No waiting events found dispatching network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:29:51 compute-0 nova_compute[257802]: 2025-10-02 12:29:51.544 2 WARNING nova.compute.manager [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received unexpected event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 for instance with vm_state active and task_state resize_migrated.
Oct 02 12:29:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:51.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2042: 305 pgs: 305 active+clean; 564 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 471 KiB/s rd, 2.7 MiB/s wr, 110 op/s
Oct 02 12:29:52 compute-0 nova_compute[257802]: 2025-10-02 12:29:52.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:52 compute-0 ceph-mon[73607]: pgmap v2042: 305 pgs: 305 active+clean; 564 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 471 KiB/s rd, 2.7 MiB/s wr, 110 op/s
Oct 02 12:29:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3006823484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:29:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:52.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:29:53 compute-0 nova_compute[257802]: 2025-10-02 12:29:53.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:53.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2043: 305 pgs: 305 active+clean; 564 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 165 op/s
Oct 02 12:29:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2162660339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:53 compute-0 nova_compute[257802]: 2025-10-02 12:29:53.789 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:53 compute-0 nova_compute[257802]: 2025-10-02 12:29:53.789 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:29:54 compute-0 nova_compute[257802]: 2025-10-02 12:29:54.081 2 DEBUG oslo_concurrency.lockutils [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "refresh_cache-fced40d2-4fc7-4939-9fbb-bdb61d750526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:29:54 compute-0 nova_compute[257802]: 2025-10-02 12:29:54.082 2 DEBUG oslo_concurrency.lockutils [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquired lock "refresh_cache-fced40d2-4fc7-4939-9fbb-bdb61d750526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:29:54 compute-0 nova_compute[257802]: 2025-10-02 12:29:54.082 2 DEBUG nova.network.neutron [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:29:54 compute-0 nova_compute[257802]: 2025-10-02 12:29:54.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:54 compute-0 nova_compute[257802]: 2025-10-02 12:29:54.392 2 DEBUG nova.compute.manager [req-34fa30ba-da32-443f-ac36-89f180404cc0 req-64b1597b-b273-40b4-a5b7-f18b8d19fa92 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received event network-changed-7cb2acba-67d6-4041-97fe-10e2d80dcd21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:54 compute-0 nova_compute[257802]: 2025-10-02 12:29:54.392 2 DEBUG nova.compute.manager [req-34fa30ba-da32-443f-ac36-89f180404cc0 req-64b1597b-b273-40b4-a5b7-f18b8d19fa92 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Refreshing instance network info cache due to event network-changed-7cb2acba-67d6-4041-97fe-10e2d80dcd21. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:29:54 compute-0 nova_compute[257802]: 2025-10-02 12:29:54.392 2 DEBUG oslo_concurrency.lockutils [req-34fa30ba-da32-443f-ac36-89f180404cc0 req-64b1597b-b273-40b4-a5b7-f18b8d19fa92 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-fced40d2-4fc7-4939-9fbb-bdb61d750526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:29:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:29:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:29:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:29:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:29:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00968619864368137 of space, bias 1.0, pg target 2.9058595931044113 quantized to 32 (current 32)
Oct 02 12:29:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:29:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004323192932253862 of space, bias 1.0, pg target 1.2883114938116509 quantized to 32 (current 32)
Oct 02 12:29:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:29:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:29:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:29:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Oct 02 12:29:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:29:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 02 12:29:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:29:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:29:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:29:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.0002699042085427136 quantized to 32 (current 32)
Oct 02 12:29:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:29:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Oct 02 12:29:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:29:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:29:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:29:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Oct 02 12:29:54 compute-0 ceph-mon[73607]: pgmap v2043: 305 pgs: 305 active+clean; 564 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 165 op/s
Oct 02 12:29:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:29:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:54.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:29:55 compute-0 nova_compute[257802]: 2025-10-02 12:29:55.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:29:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:55.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:29:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 305 active+clean; 564 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 137 op/s
Oct 02 12:29:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/4039073892' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:29:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/4039073892' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:29:56 compute-0 nova_compute[257802]: 2025-10-02 12:29:56.308 2 DEBUG nova.compute.manager [req-0e7a270e-f89e-4206-be53-0945b9febdac req-64f56de8-c86c-43ed-8c3a-70c11c215461 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Received event network-changed-f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:56 compute-0 nova_compute[257802]: 2025-10-02 12:29:56.309 2 DEBUG nova.compute.manager [req-0e7a270e-f89e-4206-be53-0945b9febdac req-64f56de8-c86c-43ed-8c3a-70c11c215461 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Refreshing instance network info cache due to event network-changed-f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:29:56 compute-0 nova_compute[257802]: 2025-10-02 12:29:56.309 2 DEBUG oslo_concurrency.lockutils [req-0e7a270e-f89e-4206-be53-0945b9febdac req-64f56de8-c86c-43ed-8c3a-70c11c215461 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-24caf505-35fd-40c1-9bcc-1f83580b142b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:29:56 compute-0 nova_compute[257802]: 2025-10-02 12:29:56.309 2 DEBUG oslo_concurrency.lockutils [req-0e7a270e-f89e-4206-be53-0945b9febdac req-64f56de8-c86c-43ed-8c3a-70c11c215461 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-24caf505-35fd-40c1-9bcc-1f83580b142b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:29:56 compute-0 nova_compute[257802]: 2025-10-02 12:29:56.309 2 DEBUG nova.network.neutron [req-0e7a270e-f89e-4206-be53-0945b9febdac req-64f56de8-c86c-43ed-8c3a-70c11c215461 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Refreshing network info cache for port f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:29:56 compute-0 systemd[1]: Stopping User Manager for UID 42436...
Oct 02 12:29:56 compute-0 systemd[328871]: Activating special unit Exit the Session...
Oct 02 12:29:56 compute-0 systemd[328871]: Stopped target Main User Target.
Oct 02 12:29:56 compute-0 systemd[328871]: Stopped target Basic System.
Oct 02 12:29:56 compute-0 systemd[328871]: Stopped target Paths.
Oct 02 12:29:56 compute-0 systemd[328871]: Stopped target Sockets.
Oct 02 12:29:56 compute-0 systemd[328871]: Stopped target Timers.
Oct 02 12:29:56 compute-0 systemd[328871]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 02 12:29:56 compute-0 systemd[328871]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 12:29:56 compute-0 systemd[328871]: Closed D-Bus User Message Bus Socket.
Oct 02 12:29:56 compute-0 systemd[328871]: Stopped Create User's Volatile Files and Directories.
Oct 02 12:29:56 compute-0 systemd[328871]: Removed slice User Application Slice.
Oct 02 12:29:56 compute-0 systemd[328871]: Reached target Shutdown.
Oct 02 12:29:56 compute-0 systemd[328871]: Finished Exit the Session.
Oct 02 12:29:56 compute-0 systemd[328871]: Reached target Exit the Session.
Oct 02 12:29:56 compute-0 systemd[1]: user@42436.service: Deactivated successfully.
Oct 02 12:29:56 compute-0 systemd[1]: Stopped User Manager for UID 42436.
Oct 02 12:29:56 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct 02 12:29:56 compute-0 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct 02 12:29:56 compute-0 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct 02 12:29:56 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct 02 12:29:56 compute-0 systemd[1]: Removed slice User Slice of UID 42436.
Oct 02 12:29:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:56.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:56 compute-0 ceph-mon[73607]: pgmap v2044: 305 pgs: 305 active+clean; 564 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 137 op/s
Oct 02 12:29:57 compute-0 nova_compute[257802]: 2025-10-02 12:29:57.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:57.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2045: 305 pgs: 305 active+clean; 564 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 134 op/s
Oct 02 12:29:57 compute-0 nova_compute[257802]: 2025-10-02 12:29:57.906 2 DEBUG nova.network.neutron [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Updating instance_info_cache with network_info: [{"id": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "address": "fa:16:3e:53:9c:95", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7cb2acba-67", "ovs_interfaceid": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:29:58 compute-0 nova_compute[257802]: 2025-10-02 12:29:58.016 2 DEBUG oslo_concurrency.lockutils [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Releasing lock "refresh_cache-fced40d2-4fc7-4939-9fbb-bdb61d750526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:29:58 compute-0 nova_compute[257802]: 2025-10-02 12:29:58.019 2 DEBUG oslo_concurrency.lockutils [req-34fa30ba-da32-443f-ac36-89f180404cc0 req-64b1597b-b273-40b4-a5b7-f18b8d19fa92 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-fced40d2-4fc7-4939-9fbb-bdb61d750526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:29:58 compute-0 nova_compute[257802]: 2025-10-02 12:29:58.019 2 DEBUG nova.network.neutron [req-34fa30ba-da32-443f-ac36-89f180404cc0 req-64b1597b-b273-40b4-a5b7-f18b8d19fa92 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Refreshing network info cache for port 7cb2acba-67d6-4041-97fe-10e2d80dcd21 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:29:58 compute-0 nova_compute[257802]: 2025-10-02 12:29:58.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:58 compute-0 nova_compute[257802]: 2025-10-02 12:29:58.159 2 DEBUG nova.virt.libvirt.driver [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Oct 02 12:29:58 compute-0 nova_compute[257802]: 2025-10-02 12:29:58.161 2 DEBUG nova.virt.libvirt.driver [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 12:29:58 compute-0 nova_compute[257802]: 2025-10-02 12:29:58.161 2 INFO nova.virt.libvirt.driver [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Creating image(s)
Oct 02 12:29:58 compute-0 nova_compute[257802]: 2025-10-02 12:29:58.203 2 DEBUG nova.storage.rbd_utils [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] creating snapshot(nova-resize) on rbd image(fced40d2-4fc7-4939-9fbb-bdb61d750526_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:29:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:58.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:59 compute-0 ceph-mon[73607]: pgmap v2045: 305 pgs: 305 active+clean; 564 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 134 op/s
Oct 02 12:29:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2680950879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.222 2 DEBUG oslo_concurrency.lockutils [None req-3a7465cc-47da-4e29-aa69-ae3bd8d529f0 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquiring lock "refresh_cache-634c38a6-caab-410d-8748-3ec1fd6f9cdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.223 2 DEBUG oslo_concurrency.lockutils [None req-3a7465cc-47da-4e29-aa69-ae3bd8d529f0 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquired lock "refresh_cache-634c38a6-caab-410d-8748-3ec1fd6f9cdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.223 2 DEBUG nova.network.neutron [None req-3a7465cc-47da-4e29-aa69-ae3bd8d529f0 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:29:59 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.352 2 DEBUG nova.objects.instance [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'trusted_certs' on Instance uuid fced40d2-4fc7-4939-9fbb-bdb61d750526 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.483 2 DEBUG nova.virt.libvirt.driver [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.483 2 DEBUG nova.virt.libvirt.driver [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Ensure instance console log exists: /var/lib/nova/instances/fced40d2-4fc7-4939-9fbb-bdb61d750526/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.484 2 DEBUG oslo_concurrency.lockutils [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.484 2 DEBUG oslo_concurrency.lockutils [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.484 2 DEBUG oslo_concurrency.lockutils [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.487 2 DEBUG nova.virt.libvirt.driver [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Start _get_guest_xml network_info=[{"id": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "address": "fa:16:3e:53:9c:95", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "vif_mac": "fa:16:3e:53:9c:95"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7cb2acba-67", "ovs_interfaceid": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.493 2 WARNING nova.virt.libvirt.driver [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.513 2 DEBUG nova.virt.libvirt.host [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.514 2 DEBUG nova.virt.libvirt.host [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.517 2 DEBUG nova.virt.libvirt.host [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.517 2 DEBUG nova.virt.libvirt.host [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.518 2 DEBUG nova.virt.libvirt.driver [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.518 2 DEBUG nova.virt.hardware [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:39Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='eb3a53f1-304b-4cb0-acc3-abffce0fb181',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.519 2 DEBUG nova.virt.hardware [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.519 2 DEBUG nova.virt.hardware [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.519 2 DEBUG nova.virt.hardware [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.519 2 DEBUG nova.virt.hardware [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.520 2 DEBUG nova.virt.hardware [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.520 2 DEBUG nova.virt.hardware [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.520 2 DEBUG nova.virt.hardware [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.520 2 DEBUG nova.virt.hardware [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.521 2 DEBUG nova.virt.hardware [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.521 2 DEBUG nova.virt.hardware [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.521 2 DEBUG nova.objects.instance [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'vcpu_model' on Instance uuid fced40d2-4fc7-4939-9fbb-bdb61d750526 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:59 compute-0 nova_compute[257802]: 2025-10-02 12:29:59.581 2 DEBUG oslo_concurrency.processutils [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:29:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:29:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:59.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:29:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 305 active+clean; 564 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 38 KiB/s wr, 82 op/s
Oct 02 12:30:00 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 02 12:30:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:30:00 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2142286615' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.027 2 DEBUG oslo_concurrency.processutils [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.069 2 DEBUG oslo_concurrency.processutils [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:00 compute-0 ceph-mon[73607]: osdmap e306: 3 total, 3 up, 3 in
Oct 02 12:30:00 compute-0 ceph-mon[73607]: overall HEALTH_OK
Oct 02 12:30:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2142286615' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.438 2 DEBUG nova.network.neutron [req-0e7a270e-f89e-4206-be53-0945b9febdac req-64f56de8-c86c-43ed-8c3a-70c11c215461 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Updated VIF entry in instance network info cache for port f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.439 2 DEBUG nova.network.neutron [req-0e7a270e-f89e-4206-be53-0945b9febdac req-64f56de8-c86c-43ed-8c3a-70c11c215461 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Updating instance_info_cache with network_info: [{"id": "f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90", "address": "fa:16:3e:76:07:a0", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f0fd9c-fb", "ovs_interfaceid": "f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:30:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:30:00 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/126536484' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.499 2 DEBUG oslo_concurrency.processutils [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.500 2 DEBUG nova.virt.libvirt.vif [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:29:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-475120527',display_name='tempest-ServerDiskConfigTestJSON-server-475120527',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-475120527',id=113,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:29:30Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='27a1729bf10548219b90df46839849f5',ramdisk_id='',reservation_id='r-8s6rikzx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-1123059068',owner_user_name='tempest-ServerDiskConfigTestJSON-1123059068-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:29:49Z,user_data=None,user_id='4a89b71e2513413e922ee6d5d06362b1',uuid=fced40d2-4fc7-4939-9fbb-bdb61d750526,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "address": "fa:16:3e:53:9c:95", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "vif_mac": "fa:16:3e:53:9c:95"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7cb2acba-67", "ovs_interfaceid": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.501 2 DEBUG nova.network.os_vif_util [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converting VIF {"id": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "address": "fa:16:3e:53:9c:95", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "vif_mac": "fa:16:3e:53:9c:95"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7cb2acba-67", "ovs_interfaceid": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.501 2 DEBUG nova.network.os_vif_util [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:9c:95,bridge_name='br-int',has_traffic_filtering=True,id=7cb2acba-67d6-4041-97fe-10e2d80dcd21,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7cb2acba-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.504 2 DEBUG nova.virt.libvirt.driver [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:30:00 compute-0 nova_compute[257802]:   <uuid>fced40d2-4fc7-4939-9fbb-bdb61d750526</uuid>
Oct 02 12:30:00 compute-0 nova_compute[257802]:   <name>instance-00000071</name>
Oct 02 12:30:00 compute-0 nova_compute[257802]:   <memory>196608</memory>
Oct 02 12:30:00 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:30:00 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:30:00 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-475120527</nova:name>
Oct 02 12:30:00 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:29:59</nova:creationTime>
Oct 02 12:30:00 compute-0 nova_compute[257802]:       <nova:flavor name="m1.micro">
Oct 02 12:30:00 compute-0 nova_compute[257802]:         <nova:memory>192</nova:memory>
Oct 02 12:30:00 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:30:00 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:30:00 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:30:00 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:30:00 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:30:00 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:30:00 compute-0 nova_compute[257802]:         <nova:user uuid="4a89b71e2513413e922ee6d5d06362b1">tempest-ServerDiskConfigTestJSON-1123059068-project-member</nova:user>
Oct 02 12:30:00 compute-0 nova_compute[257802]:         <nova:project uuid="27a1729bf10548219b90df46839849f5">tempest-ServerDiskConfigTestJSON-1123059068</nova:project>
Oct 02 12:30:00 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:30:00 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:30:00 compute-0 nova_compute[257802]:         <nova:port uuid="7cb2acba-67d6-4041-97fe-10e2d80dcd21">
Oct 02 12:30:00 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:30:00 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:30:00 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:30:00 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <system>
Oct 02 12:30:00 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:30:00 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:30:00 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:30:00 compute-0 nova_compute[257802]:       <entry name="serial">fced40d2-4fc7-4939-9fbb-bdb61d750526</entry>
Oct 02 12:30:00 compute-0 nova_compute[257802]:       <entry name="uuid">fced40d2-4fc7-4939-9fbb-bdb61d750526</entry>
Oct 02 12:30:00 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     </system>
Oct 02 12:30:00 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:30:00 compute-0 nova_compute[257802]:   <os>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:   </os>
Oct 02 12:30:00 compute-0 nova_compute[257802]:   <features>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:   </features>
Oct 02 12:30:00 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:30:00 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:30:00 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:30:00 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/fced40d2-4fc7-4939-9fbb-bdb61d750526_disk">
Oct 02 12:30:00 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:       </source>
Oct 02 12:30:00 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:30:00 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:30:00 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:30:00 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/fced40d2-4fc7-4939-9fbb-bdb61d750526_disk.config">
Oct 02 12:30:00 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:       </source>
Oct 02 12:30:00 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:30:00 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:30:00 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:30:00 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:53:9c:95"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:       <target dev="tap7cb2acba-67"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:30:00 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/fced40d2-4fc7-4939-9fbb-bdb61d750526/console.log" append="off"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <video>
Oct 02 12:30:00 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     </video>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:30:00 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:30:00 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:30:00 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:30:00 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:30:00 compute-0 nova_compute[257802]: </domain>
Oct 02 12:30:00 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.505 2 DEBUG nova.virt.libvirt.vif [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:29:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-475120527',display_name='tempest-ServerDiskConfigTestJSON-server-475120527',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-475120527',id=113,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:29:30Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='27a1729bf10548219b90df46839849f5',ramdisk_id='',reservation_id='r-8s6rikzx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-1123059068',owner_user_name='tempest-ServerDiskConfigTestJSON-1123059068-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:29:49Z,user_data=None,user_id='4a89b71e2513413e922ee6d5d06362b1',uuid=fced40d2-4fc7-4939-9fbb-bdb61d750526,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "address": "fa:16:3e:53:9c:95", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "vif_mac": "fa:16:3e:53:9c:95"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7cb2acba-67", "ovs_interfaceid": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.505 2 DEBUG nova.network.os_vif_util [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converting VIF {"id": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "address": "fa:16:3e:53:9c:95", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "vif_mac": "fa:16:3e:53:9c:95"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7cb2acba-67", "ovs_interfaceid": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.506 2 DEBUG nova.network.os_vif_util [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:9c:95,bridge_name='br-int',has_traffic_filtering=True,id=7cb2acba-67d6-4041-97fe-10e2d80dcd21,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7cb2acba-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.506 2 DEBUG os_vif [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:9c:95,bridge_name='br-int',has_traffic_filtering=True,id=7cb2acba-67d6-4041-97fe-10e2d80dcd21,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7cb2acba-67') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.507 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.508 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.509 2 DEBUG oslo_concurrency.lockutils [req-0e7a270e-f89e-4206-be53-0945b9febdac req-64f56de8-c86c-43ed-8c3a-70c11c215461 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-24caf505-35fd-40c1-9bcc-1f83580b142b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.510 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7cb2acba-67, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.511 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7cb2acba-67, col_values=(('external_ids', {'iface-id': '7cb2acba-67d6-4041-97fe-10e2d80dcd21', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:9c:95', 'vm-uuid': 'fced40d2-4fc7-4939-9fbb-bdb61d750526'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:00 compute-0 NetworkManager[44987]: <info>  [1759408200.5132] manager: (tap7cb2acba-67): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/237)
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.522 2 INFO os_vif [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:9c:95,bridge_name='br-int',has_traffic_filtering=True,id=7cb2acba-67d6-4041-97fe-10e2d80dcd21,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7cb2acba-67')
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.646 2 DEBUG nova.virt.libvirt.driver [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.646 2 DEBUG nova.virt.libvirt.driver [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.646 2 DEBUG nova.virt.libvirt.driver [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] No VIF found with MAC fa:16:3e:53:9c:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.647 2 INFO nova.virt.libvirt.driver [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Using config drive
Oct 02 12:30:00 compute-0 kernel: tap7cb2acba-67: entered promiscuous mode
Oct 02 12:30:00 compute-0 NetworkManager[44987]: <info>  [1759408200.7287] manager: (tap7cb2acba-67): new Tun device (/org/freedesktop/NetworkManager/Devices/238)
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:00 compute-0 ovn_controller[148183]: 2025-10-02T12:30:00Z|00487|binding|INFO|Claiming lport 7cb2acba-67d6-4041-97fe-10e2d80dcd21 for this chassis.
Oct 02 12:30:00 compute-0 ovn_controller[148183]: 2025-10-02T12:30:00Z|00488|binding|INFO|7cb2acba-67d6-4041-97fe-10e2d80dcd21: Claiming fa:16:3e:53:9c:95 10.100.0.10
Oct 02 12:30:00 compute-0 ovn_controller[148183]: 2025-10-02T12:30:00Z|00489|binding|INFO|Setting lport 7cb2acba-67d6-4041-97fe-10e2d80dcd21 ovn-installed in OVS
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:00 compute-0 ovn_controller[148183]: 2025-10-02T12:30:00Z|00490|binding|INFO|Setting lport 7cb2acba-67d6-4041-97fe-10e2d80dcd21 up in Southbound
Oct 02 12:30:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:00.748 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:9c:95 10.100.0.10'], port_security=['fa:16:3e:53:9c:95 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'fced40d2-4fc7-4939-9fbb-bdb61d750526', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27a1729bf10548219b90df46839849f5', 'neutron:revision_number': '8', 'neutron:security_group_ids': '19f6d4f0-1655-4062-a124-10140844bfae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f7e0b23-d51b-4498-9dd8-e3096f69c99c, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=7cb2acba-67d6-4041-97fe-10e2d80dcd21) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:30:00 compute-0 nova_compute[257802]: 2025-10-02 12:30:00.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:00.750 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 7cb2acba-67d6-4041-97fe-10e2d80dcd21 in datapath 247d774d-0cc8-4ef2-a9b8-c756adae0874 bound to our chassis
Oct 02 12:30:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:00.751 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 247d774d-0cc8-4ef2-a9b8-c756adae0874
Oct 02 12:30:00 compute-0 systemd-udevd[329097]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:30:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:00.765 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f33d13-a3f6-43b7-887a-b102f56e1c17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:00.766 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap247d774d-01 in ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:30:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:00.768 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap247d774d-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:30:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:00.768 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[13faee03-d296-448e-be83-778f72fe6d16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:00 compute-0 systemd-machined[211836]: New machine qemu-57-instance-00000071.
Oct 02 12:30:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:00.769 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[809f84e2-ab31-42a8-b6f5-f15c4bc6b986]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:00 compute-0 NetworkManager[44987]: <info>  [1759408200.7749] device (tap7cb2acba-67): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:30:00 compute-0 NetworkManager[44987]: <info>  [1759408200.7756] device (tap7cb2acba-67): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:30:00 compute-0 systemd[1]: Started Virtual Machine qemu-57-instance-00000071.
Oct 02 12:30:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:00.782 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[60b5110a-0baf-42fb-ae2d-88d29b21a6f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:00.803 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5caf742b-afcf-42af-a61e-e474f4411fe1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:00.830 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[77465ecd-68aa-415e-8409-2756d2ebddb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:00.835 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4493a16c-5823-4e0c-9e70-0623ce862dc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:00 compute-0 NetworkManager[44987]: <info>  [1759408200.8364] manager: (tap247d774d-00): new Veth device (/org/freedesktop/NetworkManager/Devices/239)
Oct 02 12:30:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:00.869 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[181d5aa7-1794-4a14-bb81-6a6149ab2890]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:00.872 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[2b5e66cf-5074-4234-8174-330478a426bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:00 compute-0 NetworkManager[44987]: <info>  [1759408200.8935] device (tap247d774d-00): carrier: link connected
Oct 02 12:30:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:00.902 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[5845dee2-a576-4bbb-958f-89ad67f09708]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:00.918 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[bcf46d7c-3e46-4fb8-92ea-bbac730173c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap247d774d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:ab:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626851, 'reachable_time': 40854, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329129, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:00.934 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0e264261-cb08-45ae-a248-dbd22c3957aa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1b:ab18'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626851, 'tstamp': 626851}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329130, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:00.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:00.958 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[43e0a53f-afa8-47dd-a2a0-65b95a629bcf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap247d774d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:ab:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626851, 'reachable_time': 40854, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 329131, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:00.987 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2974a86b-8d35-4cdd-bd51-d7cc323988d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:01.062 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[057ca2d5-1e2a-49a4-90c7-42092be46752]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:01.064 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap247d774d-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:01.064 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:01.068 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap247d774d-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:01 compute-0 nova_compute[257802]: 2025-10-02 12:30:01.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:01 compute-0 kernel: tap247d774d-00: entered promiscuous mode
Oct 02 12:30:01 compute-0 NetworkManager[44987]: <info>  [1759408201.0713] manager: (tap247d774d-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/240)
Oct 02 12:30:01 compute-0 nova_compute[257802]: 2025-10-02 12:30:01.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:01.076 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap247d774d-00, col_values=(('external_ids', {'iface-id': '04584168-a51c-41f9-9206-d39db8a81566'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:01 compute-0 nova_compute[257802]: 2025-10-02 12:30:01.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:01 compute-0 ovn_controller[148183]: 2025-10-02T12:30:01Z|00491|binding|INFO|Releasing lport 04584168-a51c-41f9-9206-d39db8a81566 from this chassis (sb_readonly=0)
Oct 02 12:30:01 compute-0 nova_compute[257802]: 2025-10-02 12:30:01.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:01 compute-0 nova_compute[257802]: 2025-10-02 12:30:01.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:01.096 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/247d774d-0cc8-4ef2-a9b8-c756adae0874.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/247d774d-0cc8-4ef2-a9b8-c756adae0874.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:01.097 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7733c388-8eb5-4fbb-8f7a-d794b8783037]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:01.098 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-247d774d-0cc8-4ef2-a9b8-c756adae0874
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/247d774d-0cc8-4ef2-a9b8-c756adae0874.pid.haproxy
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 247d774d-0cc8-4ef2-a9b8-c756adae0874
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:30:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:01.098 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'env', 'PROCESS_TAG=haproxy-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/247d774d-0cc8-4ef2-a9b8-c756adae0874.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:30:01 compute-0 ceph-mon[73607]: pgmap v2047: 305 pgs: 305 active+clean; 564 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 38 KiB/s wr, 82 op/s
Oct 02 12:30:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/126536484' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:01 compute-0 nova_compute[257802]: 2025-10-02 12:30:01.510 2 DEBUG nova.compute.manager [req-fdee89ed-46f4-4ad8-822c-88b3789c0723 req-280724e0-0f11-427f-88e4-e6614241b19a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:01 compute-0 nova_compute[257802]: 2025-10-02 12:30:01.510 2 DEBUG oslo_concurrency.lockutils [req-fdee89ed-46f4-4ad8-822c-88b3789c0723 req-280724e0-0f11-427f-88e4-e6614241b19a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:01 compute-0 nova_compute[257802]: 2025-10-02 12:30:01.511 2 DEBUG oslo_concurrency.lockutils [req-fdee89ed-46f4-4ad8-822c-88b3789c0723 req-280724e0-0f11-427f-88e4-e6614241b19a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:01 compute-0 nova_compute[257802]: 2025-10-02 12:30:01.511 2 DEBUG oslo_concurrency.lockutils [req-fdee89ed-46f4-4ad8-822c-88b3789c0723 req-280724e0-0f11-427f-88e4-e6614241b19a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:01 compute-0 nova_compute[257802]: 2025-10-02 12:30:01.511 2 DEBUG nova.compute.manager [req-fdee89ed-46f4-4ad8-822c-88b3789c0723 req-280724e0-0f11-427f-88e4-e6614241b19a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] No waiting events found dispatching network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:01 compute-0 nova_compute[257802]: 2025-10-02 12:30:01.511 2 WARNING nova.compute.manager [req-fdee89ed-46f4-4ad8-822c-88b3789c0723 req-280724e0-0f11-427f-88e4-e6614241b19a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received unexpected event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 for instance with vm_state active and task_state resize_finish.
Oct 02 12:30:01 compute-0 podman[329205]: 2025-10-02 12:30:01.429282721 +0000 UTC m=+0.026298268 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:30:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:01.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 305 active+clean; 564 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 38 KiB/s wr, 82 op/s
Oct 02 12:30:01 compute-0 podman[329205]: 2025-10-02 12:30:01.611174123 +0000 UTC m=+0.208189670 container create 99551e68350bf8f68117027333980127b405877706afd685fa66c7ca61b1e7f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 12:30:01 compute-0 systemd[1]: Started libpod-conmon-99551e68350bf8f68117027333980127b405877706afd685fa66c7ca61b1e7f4.scope.
Oct 02 12:30:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:30:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa39a2a74cfc68851217247d72cc539b18e09fc5a29e7fc2fe583bdc827035b4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:01 compute-0 nova_compute[257802]: 2025-10-02 12:30:01.702 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408201.7022433, fced40d2-4fc7-4939-9fbb-bdb61d750526 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:30:01 compute-0 nova_compute[257802]: 2025-10-02 12:30:01.703 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] VM Resumed (Lifecycle Event)
Oct 02 12:30:01 compute-0 nova_compute[257802]: 2025-10-02 12:30:01.705 2 DEBUG nova.compute.manager [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:30:01 compute-0 nova_compute[257802]: 2025-10-02 12:30:01.709 2 INFO nova.virt.libvirt.driver [-] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Instance running successfully.
Oct 02 12:30:01 compute-0 virtqemud[257280]: argument unsupported: QEMU guest agent is not configured
Oct 02 12:30:01 compute-0 nova_compute[257802]: 2025-10-02 12:30:01.712 2 DEBUG nova.virt.libvirt.guest [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 12:30:01 compute-0 nova_compute[257802]: 2025-10-02 12:30:01.713 2 DEBUG nova.virt.libvirt.driver [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Oct 02 12:30:01 compute-0 podman[329205]: 2025-10-02 12:30:01.723533065 +0000 UTC m=+0.320548642 container init 99551e68350bf8f68117027333980127b405877706afd685fa66c7ca61b1e7f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 12:30:01 compute-0 podman[329205]: 2025-10-02 12:30:01.728932118 +0000 UTC m=+0.325947655 container start 99551e68350bf8f68117027333980127b405877706afd685fa66c7ca61b1e7f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:30:01 compute-0 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[329220]: [NOTICE]   (329224) : New worker (329226) forked
Oct 02 12:30:01 compute-0 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[329220]: [NOTICE]   (329224) : Loading success.
Oct 02 12:30:01 compute-0 nova_compute[257802]: 2025-10-02 12:30:01.814 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:01 compute-0 nova_compute[257802]: 2025-10-02 12:30:01.818 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:30:01 compute-0 nova_compute[257802]: 2025-10-02 12:30:01.861 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] During sync_power_state the instance has a pending task (resize_finish). Skip.
Oct 02 12:30:01 compute-0 nova_compute[257802]: 2025-10-02 12:30:01.863 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408201.7024844, fced40d2-4fc7-4939-9fbb-bdb61d750526 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:30:01 compute-0 nova_compute[257802]: 2025-10-02 12:30:01.863 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] VM Started (Lifecycle Event)
Oct 02 12:30:01 compute-0 nova_compute[257802]: 2025-10-02 12:30:01.925 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:01 compute-0 nova_compute[257802]: 2025-10-02 12:30:01.928 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:30:01 compute-0 nova_compute[257802]: 2025-10-02 12:30:01.955 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] During sync_power_state the instance has a pending task (resize_finish). Skip.
Oct 02 12:30:02 compute-0 nova_compute[257802]: 2025-10-02 12:30:02.083 2 DEBUG nova.network.neutron [req-34fa30ba-da32-443f-ac36-89f180404cc0 req-64b1597b-b273-40b4-a5b7-f18b8d19fa92 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Updated VIF entry in instance network info cache for port 7cb2acba-67d6-4041-97fe-10e2d80dcd21. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:30:02 compute-0 nova_compute[257802]: 2025-10-02 12:30:02.084 2 DEBUG nova.network.neutron [req-34fa30ba-da32-443f-ac36-89f180404cc0 req-64b1597b-b273-40b4-a5b7-f18b8d19fa92 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Updating instance_info_cache with network_info: [{"id": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "address": "fa:16:3e:53:9c:95", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7cb2acba-67", "ovs_interfaceid": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:30:02 compute-0 nova_compute[257802]: 2025-10-02 12:30:02.111 2 DEBUG oslo_concurrency.lockutils [req-34fa30ba-da32-443f-ac36-89f180404cc0 req-64b1597b-b273-40b4-a5b7-f18b8d19fa92 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-fced40d2-4fc7-4939-9fbb-bdb61d750526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:30:02 compute-0 ceph-mon[73607]: pgmap v2048: 305 pgs: 305 active+clean; 564 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 38 KiB/s wr, 82 op/s
Oct 02 12:30:02 compute-0 nova_compute[257802]: 2025-10-02 12:30:02.567 2 DEBUG nova.network.neutron [None req-3a7465cc-47da-4e29-aa69-ae3bd8d529f0 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Updating instance_info_cache with network_info: [{"id": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "address": "fa:16:3e:23:4b:2a", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb76b92a4-18", "ovs_interfaceid": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:30:02 compute-0 nova_compute[257802]: 2025-10-02 12:30:02.628 2 DEBUG oslo_concurrency.lockutils [None req-3a7465cc-47da-4e29-aa69-ae3bd8d529f0 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Releasing lock "refresh_cache-634c38a6-caab-410d-8748-3ec1fd6f9cdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:30:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:02.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:03 compute-0 nova_compute[257802]: 2025-10-02 12:30:03.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:03 compute-0 nova_compute[257802]: 2025-10-02 12:30:03.090 2 DEBUG nova.virt.libvirt.driver [None req-3a7465cc-47da-4e29-aa69-ae3bd8d529f0 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Oct 02 12:30:03 compute-0 nova_compute[257802]: 2025-10-02 12:30:03.090 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-3a7465cc-47da-4e29-aa69-ae3bd8d529f0 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Creating file /var/lib/nova/instances/634c38a6-caab-410d-8748-3ec1fd6f9cdc/faacfc2d9c024eb6a955aa4286d2dd8d.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Oct 02 12:30:03 compute-0 nova_compute[257802]: 2025-10-02 12:30:03.091 2 DEBUG oslo_concurrency.processutils [None req-3a7465cc-47da-4e29-aa69-ae3bd8d529f0 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/634c38a6-caab-410d-8748-3ec1fd6f9cdc/faacfc2d9c024eb6a955aa4286d2dd8d.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:03 compute-0 nova_compute[257802]: 2025-10-02 12:30:03.575 2 DEBUG oslo_concurrency.processutils [None req-3a7465cc-47da-4e29-aa69-ae3bd8d529f0 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/634c38a6-caab-410d-8748-3ec1fd6f9cdc/faacfc2d9c024eb6a955aa4286d2dd8d.tmp" returned: 1 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:03 compute-0 nova_compute[257802]: 2025-10-02 12:30:03.576 2 DEBUG oslo_concurrency.processutils [None req-3a7465cc-47da-4e29-aa69-ae3bd8d529f0 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/634c38a6-caab-410d-8748-3ec1fd6f9cdc/faacfc2d9c024eb6a955aa4286d2dd8d.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Oct 02 12:30:03 compute-0 nova_compute[257802]: 2025-10-02 12:30:03.576 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-3a7465cc-47da-4e29-aa69-ae3bd8d529f0 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Creating directory /var/lib/nova/instances/634c38a6-caab-410d-8748-3ec1fd6f9cdc on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Oct 02 12:30:03 compute-0 nova_compute[257802]: 2025-10-02 12:30:03.576 2 DEBUG oslo_concurrency.processutils [None req-3a7465cc-47da-4e29-aa69-ae3bd8d529f0 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/634c38a6-caab-410d-8748-3ec1fd6f9cdc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:03.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2049: 305 pgs: 305 active+clean; 592 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.2 MiB/s wr, 120 op/s
Oct 02 12:30:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/11603897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:03 compute-0 nova_compute[257802]: 2025-10-02 12:30:03.714 2 DEBUG nova.compute.manager [req-8553af1f-a42f-43e9-864d-508c6f3ec8fd req-c48de254-bab1-469b-9367-38b25b36a05e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:03 compute-0 nova_compute[257802]: 2025-10-02 12:30:03.714 2 DEBUG oslo_concurrency.lockutils [req-8553af1f-a42f-43e9-864d-508c6f3ec8fd req-c48de254-bab1-469b-9367-38b25b36a05e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:03 compute-0 nova_compute[257802]: 2025-10-02 12:30:03.714 2 DEBUG oslo_concurrency.lockutils [req-8553af1f-a42f-43e9-864d-508c6f3ec8fd req-c48de254-bab1-469b-9367-38b25b36a05e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:03 compute-0 nova_compute[257802]: 2025-10-02 12:30:03.715 2 DEBUG oslo_concurrency.lockutils [req-8553af1f-a42f-43e9-864d-508c6f3ec8fd req-c48de254-bab1-469b-9367-38b25b36a05e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:03 compute-0 nova_compute[257802]: 2025-10-02 12:30:03.715 2 DEBUG nova.compute.manager [req-8553af1f-a42f-43e9-864d-508c6f3ec8fd req-c48de254-bab1-469b-9367-38b25b36a05e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] No waiting events found dispatching network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:03 compute-0 nova_compute[257802]: 2025-10-02 12:30:03.715 2 WARNING nova.compute.manager [req-8553af1f-a42f-43e9-864d-508c6f3ec8fd req-c48de254-bab1-469b-9367-38b25b36a05e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received unexpected event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 for instance with vm_state resized and task_state None.
Oct 02 12:30:03 compute-0 nova_compute[257802]: 2025-10-02 12:30:03.838 2 DEBUG oslo_concurrency.processutils [None req-3a7465cc-47da-4e29-aa69-ae3bd8d529f0 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/634c38a6-caab-410d-8748-3ec1fd6f9cdc" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:03 compute-0 nova_compute[257802]: 2025-10-02 12:30:03.843 2 DEBUG nova.virt.libvirt.driver [None req-3a7465cc-47da-4e29-aa69-ae3bd8d529f0 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:30:04 compute-0 nova_compute[257802]: 2025-10-02 12:30:04.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:30:04 compute-0 nova_compute[257802]: 2025-10-02 12:30:04.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:30:04 compute-0 nova_compute[257802]: 2025-10-02 12:30:04.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:30:04 compute-0 nova_compute[257802]: 2025-10-02 12:30:04.343 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-24caf505-35fd-40c1-9bcc-1f83580b142b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:30:04 compute-0 nova_compute[257802]: 2025-10-02 12:30:04.343 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-24caf505-35fd-40c1-9bcc-1f83580b142b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:30:04 compute-0 nova_compute[257802]: 2025-10-02 12:30:04.343 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:30:04 compute-0 nova_compute[257802]: 2025-10-02 12:30:04.343 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid 24caf505-35fd-40c1-9bcc-1f83580b142b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:30:04 compute-0 sudo[329239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:04 compute-0 sudo[329239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:04 compute-0 sudo[329239]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:04 compute-0 ceph-mon[73607]: pgmap v2049: 305 pgs: 305 active+clean; 592 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.2 MiB/s wr, 120 op/s
Oct 02 12:30:04 compute-0 sudo[329264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:04 compute-0 sudo[329264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:04 compute-0 sudo[329264]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:04.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:05 compute-0 nova_compute[257802]: 2025-10-02 12:30:05.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:05.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 305 active+clean; 597 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.6 MiB/s wr, 146 op/s
Oct 02 12:30:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2233653918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2917355383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:06 compute-0 kernel: tapb76b92a4-18 (unregistering): left promiscuous mode
Oct 02 12:30:06 compute-0 NetworkManager[44987]: <info>  [1759408206.1275] device (tapb76b92a4-18): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:30:06 compute-0 ovn_controller[148183]: 2025-10-02T12:30:06Z|00492|binding|INFO|Releasing lport b76b92a4-1882-4f89-94f4-3a4700f9c379 from this chassis (sb_readonly=0)
Oct 02 12:30:06 compute-0 ovn_controller[148183]: 2025-10-02T12:30:06Z|00493|binding|INFO|Setting lport b76b92a4-1882-4f89-94f4-3a4700f9c379 down in Southbound
Oct 02 12:30:06 compute-0 ovn_controller[148183]: 2025-10-02T12:30:06Z|00494|binding|INFO|Removing iface tapb76b92a4-18 ovn-installed in OVS
Oct 02 12:30:06 compute-0 nova_compute[257802]: 2025-10-02 12:30:06.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:06 compute-0 nova_compute[257802]: 2025-10-02 12:30:06.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:06 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000070.scope: Deactivated successfully.
Oct 02 12:30:06 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000070.scope: Consumed 15.866s CPU time.
Oct 02 12:30:06 compute-0 systemd-machined[211836]: Machine qemu-56-instance-00000070 terminated.
Oct 02 12:30:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:06.232 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:4b:2a 10.100.0.10'], port_security=['fa:16:3e:23:4b:2a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '634c38a6-caab-410d-8748-3ec1fd6f9cdc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b8ca48cb5f64ef3b0736b8be82378b8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '16cf92c4-b852-4373-864a-75cf05995c6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.182'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23ff60d5-33e2-45e9-a563-ce0081b7cc04, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=b76b92a4-1882-4f89-94f4-3a4700f9c379) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:30:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:06.235 158261 INFO neutron.agent.ovn.metadata.agent [-] Port b76b92a4-1882-4f89-94f4-3a4700f9c379 in datapath b2c62a66-f9bc-4a45-a843-aef2e12a7fff unbound from our chassis
Oct 02 12:30:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:06.239 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b2c62a66-f9bc-4a45-a843-aef2e12a7fff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:30:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:06.241 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[75b30586-6198-40ba-941a-c1b7d66dca45]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:06.242 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff namespace which is not needed anymore
Oct 02 12:30:06 compute-0 nova_compute[257802]: 2025-10-02 12:30:06.358 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Updating instance_info_cache with network_info: [{"id": "f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90", "address": "fa:16:3e:76:07:a0", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f0fd9c-fb", "ovs_interfaceid": "f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:30:06 compute-0 nova_compute[257802]: 2025-10-02 12:30:06.386 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-24caf505-35fd-40c1-9bcc-1f83580b142b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:30:06 compute-0 nova_compute[257802]: 2025-10-02 12:30:06.386 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:30:06 compute-0 nova_compute[257802]: 2025-10-02 12:30:06.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:06 compute-0 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[328483]: [NOTICE]   (328487) : haproxy version is 2.8.14-c23fe91
Oct 02 12:30:06 compute-0 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[328483]: [NOTICE]   (328487) : path to executable is /usr/sbin/haproxy
Oct 02 12:30:06 compute-0 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[328483]: [WARNING]  (328487) : Exiting Master process...
Oct 02 12:30:06 compute-0 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[328483]: [ALERT]    (328487) : Current worker (328489) exited with code 143 (Terminated)
Oct 02 12:30:06 compute-0 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[328483]: [WARNING]  (328487) : All workers exited. Exiting... (0)
Oct 02 12:30:06 compute-0 systemd[1]: libpod-6548ea7768ffdbff4c12a69fac43b6cfb49df0afea184cd3ae1d925ef0f14879.scope: Deactivated successfully.
Oct 02 12:30:06 compute-0 podman[329312]: 2025-10-02 12:30:06.473086076 +0000 UTC m=+0.143144650 container died 6548ea7768ffdbff4c12a69fac43b6cfb49df0afea184cd3ae1d925ef0f14879 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:30:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6548ea7768ffdbff4c12a69fac43b6cfb49df0afea184cd3ae1d925ef0f14879-userdata-shm.mount: Deactivated successfully.
Oct 02 12:30:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef26ed21f63f444cf640d7738bbaaa79d0b48af0ba1d929048e06f049b00dd1c-merged.mount: Deactivated successfully.
Oct 02 12:30:06 compute-0 nova_compute[257802]: 2025-10-02 12:30:06.765 2 DEBUG nova.compute.manager [req-11a21f8d-96ba-4e1b-8d3f-a6fdc67e64c9 req-a38df8cf-7e0a-49c0-a4b3-e6c6cf74be9f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Received event network-vif-unplugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:06 compute-0 nova_compute[257802]: 2025-10-02 12:30:06.766 2 DEBUG oslo_concurrency.lockutils [req-11a21f8d-96ba-4e1b-8d3f-a6fdc67e64c9 req-a38df8cf-7e0a-49c0-a4b3-e6c6cf74be9f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:06 compute-0 nova_compute[257802]: 2025-10-02 12:30:06.767 2 DEBUG oslo_concurrency.lockutils [req-11a21f8d-96ba-4e1b-8d3f-a6fdc67e64c9 req-a38df8cf-7e0a-49c0-a4b3-e6c6cf74be9f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:06 compute-0 nova_compute[257802]: 2025-10-02 12:30:06.767 2 DEBUG oslo_concurrency.lockutils [req-11a21f8d-96ba-4e1b-8d3f-a6fdc67e64c9 req-a38df8cf-7e0a-49c0-a4b3-e6c6cf74be9f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:06 compute-0 nova_compute[257802]: 2025-10-02 12:30:06.768 2 DEBUG nova.compute.manager [req-11a21f8d-96ba-4e1b-8d3f-a6fdc67e64c9 req-a38df8cf-7e0a-49c0-a4b3-e6c6cf74be9f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] No waiting events found dispatching network-vif-unplugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:06 compute-0 nova_compute[257802]: 2025-10-02 12:30:06.768 2 WARNING nova.compute.manager [req-11a21f8d-96ba-4e1b-8d3f-a6fdc67e64c9 req-a38df8cf-7e0a-49c0-a4b3-e6c6cf74be9f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Received unexpected event network-vif-unplugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 for instance with vm_state active and task_state resize_migrating.
Oct 02 12:30:06 compute-0 nova_compute[257802]: 2025-10-02 12:30:06.861 2 INFO nova.virt.libvirt.driver [None req-3a7465cc-47da-4e29-aa69-ae3bd8d529f0 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Instance shutdown successfully after 3 seconds.
Oct 02 12:30:06 compute-0 nova_compute[257802]: 2025-10-02 12:30:06.869 2 INFO nova.virt.libvirt.driver [-] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Instance destroyed successfully.
Oct 02 12:30:06 compute-0 nova_compute[257802]: 2025-10-02 12:30:06.871 2 DEBUG nova.virt.libvirt.vif [None req-3a7465cc-47da-4e29-aa69-ae3bd8d529f0 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:28:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1187258695',display_name='tempest-ServerActionsTestJSON-server-1187258695',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1187258695',id=112,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGPR3K1CRfy8NuVFf5q7pocTRVWdkUGpXAwygQtld+YJHetWc29OTANCyHNFo7YQv+XOnhZt50IrR5VORh36EWFuQsqglHMqAdAjWfcmv1078iyeLOwVFkKMjcTINNdhYA==',key_name='tempest-keypair-579122016',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:29:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4b8ca48cb5f64ef3b0736b8be82378b8',ramdisk_id='',reservation_id='r-8nz9nuro',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-842270816',owner_user_name='tempest-ServerActionsTestJSON-842270816-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:29:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2bd16d1f5f9d4eb396c474eedee67165',uuid=634c38a6-caab-410d-8748-3ec1fd6f9cdc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "address": "fa:16:3e:23:4b:2a", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1352928597-network", "vif_mac": "fa:16:3e:23:4b:2a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb76b92a4-18", "ovs_interfaceid": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:30:06 compute-0 nova_compute[257802]: 2025-10-02 12:30:06.871 2 DEBUG nova.network.os_vif_util [None req-3a7465cc-47da-4e29-aa69-ae3bd8d529f0 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Converting VIF {"id": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "address": "fa:16:3e:23:4b:2a", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1352928597-network", "vif_mac": "fa:16:3e:23:4b:2a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb76b92a4-18", "ovs_interfaceid": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:30:06 compute-0 nova_compute[257802]: 2025-10-02 12:30:06.873 2 DEBUG nova.network.os_vif_util [None req-3a7465cc-47da-4e29-aa69-ae3bd8d529f0 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:23:4b:2a,bridge_name='br-int',has_traffic_filtering=True,id=b76b92a4-1882-4f89-94f4-3a4700f9c379,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb76b92a4-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:30:06 compute-0 nova_compute[257802]: 2025-10-02 12:30:06.874 2 DEBUG os_vif [None req-3a7465cc-47da-4e29-aa69-ae3bd8d529f0 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:4b:2a,bridge_name='br-int',has_traffic_filtering=True,id=b76b92a4-1882-4f89-94f4-3a4700f9c379,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb76b92a4-18') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:30:06 compute-0 podman[329312]: 2025-10-02 12:30:06.876566365 +0000 UTC m=+0.546624929 container cleanup 6548ea7768ffdbff4c12a69fac43b6cfb49df0afea184cd3ae1d925ef0f14879 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 12:30:06 compute-0 nova_compute[257802]: 2025-10-02 12:30:06.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:06 compute-0 nova_compute[257802]: 2025-10-02 12:30:06.888 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb76b92a4-18, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:06 compute-0 nova_compute[257802]: 2025-10-02 12:30:06.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:06 compute-0 nova_compute[257802]: 2025-10-02 12:30:06.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:06 compute-0 systemd[1]: libpod-conmon-6548ea7768ffdbff4c12a69fac43b6cfb49df0afea184cd3ae1d925ef0f14879.scope: Deactivated successfully.
Oct 02 12:30:06 compute-0 nova_compute[257802]: 2025-10-02 12:30:06.895 2 INFO os_vif [None req-3a7465cc-47da-4e29-aa69-ae3bd8d529f0 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:4b:2a,bridge_name='br-int',has_traffic_filtering=True,id=b76b92a4-1882-4f89-94f4-3a4700f9c379,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb76b92a4-18')
Oct 02 12:30:06 compute-0 nova_compute[257802]: 2025-10-02 12:30:06.902 2 DEBUG nova.virt.libvirt.driver [None req-3a7465cc-47da-4e29-aa69-ae3bd8d529f0 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:30:06 compute-0 nova_compute[257802]: 2025-10-02 12:30:06.903 2 DEBUG nova.virt.libvirt.driver [None req-3a7465cc-47da-4e29-aa69-ae3bd8d529f0 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:30:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:06.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:07 compute-0 ceph-mon[73607]: pgmap v2050: 305 pgs: 305 active+clean; 597 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.6 MiB/s wr, 146 op/s
Oct 02 12:30:07 compute-0 nova_compute[257802]: 2025-10-02 12:30:07.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:30:07 compute-0 nova_compute[257802]: 2025-10-02 12:30:07.128 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:07 compute-0 nova_compute[257802]: 2025-10-02 12:30:07.128 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:07 compute-0 nova_compute[257802]: 2025-10-02 12:30:07.128 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:07 compute-0 nova_compute[257802]: 2025-10-02 12:30:07.129 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:30:07 compute-0 nova_compute[257802]: 2025-10-02 12:30:07.129 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:07 compute-0 nova_compute[257802]: 2025-10-02 12:30:07.154 2 DEBUG neutronclient.v2_0.client [None req-3a7465cc-47da-4e29-aa69-ae3bd8d529f0 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port b76b92a4-1882-4f89-94f4-3a4700f9c379 for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Oct 02 12:30:07 compute-0 podman[329352]: 2025-10-02 12:30:07.162816724 +0000 UTC m=+0.239381187 container remove 6548ea7768ffdbff4c12a69fac43b6cfb49df0afea184cd3ae1d925ef0f14879 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:30:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:07.168 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d2f0168a-7282-4d53-955d-4658cb378bdf]: (4, ('Thu Oct  2 12:30:06 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff (6548ea7768ffdbff4c12a69fac43b6cfb49df0afea184cd3ae1d925ef0f14879)\n6548ea7768ffdbff4c12a69fac43b6cfb49df0afea184cd3ae1d925ef0f14879\nThu Oct  2 12:30:06 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff (6548ea7768ffdbff4c12a69fac43b6cfb49df0afea184cd3ae1d925ef0f14879)\n6548ea7768ffdbff4c12a69fac43b6cfb49df0afea184cd3ae1d925ef0f14879\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:07.169 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[be43bf76-4212-4b49-9f30-011ed23a038f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:07.171 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2c62a66-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:07 compute-0 kernel: tapb2c62a66-f0: left promiscuous mode
Oct 02 12:30:07 compute-0 nova_compute[257802]: 2025-10-02 12:30:07.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:07 compute-0 nova_compute[257802]: 2025-10-02 12:30:07.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:07.190 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c708ba15-4eb5-440a-af9c-a16d95289f12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:07.217 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c4425b50-eee8-4245-8228-b17116d569a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:07.218 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e9c66f22-0dd1-42ba-8f20-0a7f720b4408]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:07.235 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[350ef07d-bbc9-4e31-9e47-b52ad730a86d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622218, 'reachable_time': 43940, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329368, 'error': None, 'target': 'ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:07.237 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:30:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:07.238 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[b3949e0d-39ce-4a48-ad6a-be5b84f01b4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:07 compute-0 systemd[1]: run-netns-ovnmeta\x2db2c62a66\x2df9bc\x2d4a45\x2da843\x2daef2e12a7fff.mount: Deactivated successfully.
Oct 02 12:30:07 compute-0 nova_compute[257802]: 2025-10-02 12:30:07.326 2 DEBUG oslo_concurrency.lockutils [None req-3a7465cc-47da-4e29-aa69-ae3bd8d529f0 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquiring lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:07 compute-0 nova_compute[257802]: 2025-10-02 12:30:07.327 2 DEBUG oslo_concurrency.lockutils [None req-3a7465cc-47da-4e29-aa69-ae3bd8d529f0 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:07 compute-0 nova_compute[257802]: 2025-10-02 12:30:07.327 2 DEBUG oslo_concurrency.lockutils [None req-3a7465cc-47da-4e29-aa69-ae3bd8d529f0 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:30:07 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/722030837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:07 compute-0 nova_compute[257802]: 2025-10-02 12:30:07.544 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:30:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:07.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:30:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2051: 305 pgs: 305 active+clean; 616 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.7 MiB/s wr, 186 op/s
Oct 02 12:30:07 compute-0 nova_compute[257802]: 2025-10-02 12:30:07.645 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:30:07 compute-0 nova_compute[257802]: 2025-10-02 12:30:07.646 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:30:07 compute-0 nova_compute[257802]: 2025-10-02 12:30:07.648 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:30:07 compute-0 nova_compute[257802]: 2025-10-02 12:30:07.649 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:30:07 compute-0 nova_compute[257802]: 2025-10-02 12:30:07.651 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:30:07 compute-0 nova_compute[257802]: 2025-10-02 12:30:07.651 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:30:07 compute-0 nova_compute[257802]: 2025-10-02 12:30:07.815 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:30:07 compute-0 nova_compute[257802]: 2025-10-02 12:30:07.817 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4024MB free_disk=20.76046371459961GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:30:07 compute-0 nova_compute[257802]: 2025-10-02 12:30:07.817 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:07 compute-0 nova_compute[257802]: 2025-10-02 12:30:07.817 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:07 compute-0 nova_compute[257802]: 2025-10-02 12:30:07.951 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Applying migration context for instance fced40d2-4fc7-4939-9fbb-bdb61d750526 as it has an incoming, in-progress migration 11ca3034-a0f1-41aa-9c5f-383e5dc24043. Migration status is confirming _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950
Oct 02 12:30:07 compute-0 nova_compute[257802]: 2025-10-02 12:30:07.952 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Migration for instance 634c38a6-caab-410d-8748-3ec1fd6f9cdc refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Oct 02 12:30:08 compute-0 nova_compute[257802]: 2025-10-02 12:30:08.009 2 INFO nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Updating resource usage from migration 86470150-13cf-4780-a5e8-7915418b1cd9
Oct 02 12:30:08 compute-0 nova_compute[257802]: 2025-10-02 12:30:08.010 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Starting to track outgoing migration 86470150-13cf-4780-a5e8-7915418b1cd9 with flavor cef129e5-cce4-4465-9674-03d3559e8a14 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444
Oct 02 12:30:08 compute-0 nova_compute[257802]: 2025-10-02 12:30:08.010 2 INFO nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Updating resource usage from migration 11ca3034-a0f1-41aa-9c5f-383e5dc24043
Oct 02 12:30:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Oct 02 12:30:08 compute-0 nova_compute[257802]: 2025-10-02 12:30:08.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:08 compute-0 nova_compute[257802]: 2025-10-02 12:30:08.038 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 24caf505-35fd-40c1-9bcc-1f83580b142b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:30:08 compute-0 nova_compute[257802]: 2025-10-02 12:30:08.039 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance fced40d2-4fc7-4939-9fbb-bdb61d750526 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:30:08 compute-0 nova_compute[257802]: 2025-10-02 12:30:08.039 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Migration 86470150-13cf-4780-a5e8-7915418b1cd9 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Oct 02 12:30:08 compute-0 nova_compute[257802]: 2025-10-02 12:30:08.039 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:30:08 compute-0 nova_compute[257802]: 2025-10-02 12:30:08.040 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=960MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:30:08 compute-0 nova_compute[257802]: 2025-10-02 12:30:08.337 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Oct 02 12:30:08 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Oct 02 12:30:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/722030837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:30:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/334044799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:08 compute-0 nova_compute[257802]: 2025-10-02 12:30:08.825 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:08 compute-0 nova_compute[257802]: 2025-10-02 12:30:08.833 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:30:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:30:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:08.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:30:08 compute-0 nova_compute[257802]: 2025-10-02 12:30:08.963 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:30:09 compute-0 nova_compute[257802]: 2025-10-02 12:30:09.194 2 DEBUG nova.compute.manager [req-8912030d-91f0-4c0c-9307-7266960a0cfa req-0bc881fc-76ca-4fe4-97d8-0cd839c73242 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Received event network-vif-plugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:09 compute-0 nova_compute[257802]: 2025-10-02 12:30:09.195 2 DEBUG oslo_concurrency.lockutils [req-8912030d-91f0-4c0c-9307-7266960a0cfa req-0bc881fc-76ca-4fe4-97d8-0cd839c73242 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:09 compute-0 nova_compute[257802]: 2025-10-02 12:30:09.196 2 DEBUG oslo_concurrency.lockutils [req-8912030d-91f0-4c0c-9307-7266960a0cfa req-0bc881fc-76ca-4fe4-97d8-0cd839c73242 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:09 compute-0 nova_compute[257802]: 2025-10-02 12:30:09.196 2 DEBUG oslo_concurrency.lockutils [req-8912030d-91f0-4c0c-9307-7266960a0cfa req-0bc881fc-76ca-4fe4-97d8-0cd839c73242 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:09 compute-0 nova_compute[257802]: 2025-10-02 12:30:09.196 2 DEBUG nova.compute.manager [req-8912030d-91f0-4c0c-9307-7266960a0cfa req-0bc881fc-76ca-4fe4-97d8-0cd839c73242 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] No waiting events found dispatching network-vif-plugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:09 compute-0 nova_compute[257802]: 2025-10-02 12:30:09.197 2 WARNING nova.compute.manager [req-8912030d-91f0-4c0c-9307-7266960a0cfa req-0bc881fc-76ca-4fe4-97d8-0cd839c73242 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Received unexpected event network-vif-plugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 for instance with vm_state active and task_state resize_migrated.
Oct 02 12:30:09 compute-0 nova_compute[257802]: 2025-10-02 12:30:09.275 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:30:09 compute-0 nova_compute[257802]: 2025-10-02 12:30:09.276 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.458s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:09 compute-0 ceph-mon[73607]: pgmap v2051: 305 pgs: 305 active+clean; 616 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.7 MiB/s wr, 186 op/s
Oct 02 12:30:09 compute-0 ceph-mon[73607]: osdmap e307: 3 total, 3 up, 3 in
Oct 02 12:30:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/334044799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3492956652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:09.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2053: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 219 op/s
Oct 02 12:30:10 compute-0 nova_compute[257802]: 2025-10-02 12:30:10.572 2 DEBUG nova.compute.manager [req-a202b232-8055-4a6f-9570-2ff6d243aca0 req-b1ed5947-aa41-4990-8ef9-e6e21337d232 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Received event network-changed-b76b92a4-1882-4f89-94f4-3a4700f9c379 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:10 compute-0 nova_compute[257802]: 2025-10-02 12:30:10.574 2 DEBUG nova.compute.manager [req-a202b232-8055-4a6f-9570-2ff6d243aca0 req-b1ed5947-aa41-4990-8ef9-e6e21337d232 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Refreshing instance network info cache due to event network-changed-b76b92a4-1882-4f89-94f4-3a4700f9c379. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:30:10 compute-0 nova_compute[257802]: 2025-10-02 12:30:10.574 2 DEBUG oslo_concurrency.lockutils [req-a202b232-8055-4a6f-9570-2ff6d243aca0 req-b1ed5947-aa41-4990-8ef9-e6e21337d232 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-634c38a6-caab-410d-8748-3ec1fd6f9cdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:30:10 compute-0 nova_compute[257802]: 2025-10-02 12:30:10.574 2 DEBUG oslo_concurrency.lockutils [req-a202b232-8055-4a6f-9570-2ff6d243aca0 req-b1ed5947-aa41-4990-8ef9-e6e21337d232 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-634c38a6-caab-410d-8748-3ec1fd6f9cdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:30:10 compute-0 nova_compute[257802]: 2025-10-02 12:30:10.575 2 DEBUG nova.network.neutron [req-a202b232-8055-4a6f-9570-2ff6d243aca0 req-b1ed5947-aa41-4990-8ef9-e6e21337d232 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Refreshing network info cache for port b76b92a4-1882-4f89-94f4-3a4700f9c379 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:30:10 compute-0 ceph-mon[73607]: pgmap v2053: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 219 op/s
Oct 02 12:30:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:30:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:10.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:30:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:30:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:11.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:30:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 219 op/s
Oct 02 12:30:11 compute-0 nova_compute[257802]: 2025-10-02 12:30:11.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:11 compute-0 podman[329415]: 2025-10-02 12:30:11.967733956 +0000 UTC m=+0.100561004 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 12:30:11 compute-0 podman[329416]: 2025-10-02 12:30:11.98299397 +0000 UTC m=+0.113699406 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:30:12 compute-0 podman[329451]: 2025-10-02 12:30:12.055614046 +0000 UTC m=+0.057502955 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 12:30:12 compute-0 nova_compute[257802]: 2025-10-02 12:30:12.532 2 DEBUG nova.network.neutron [req-a202b232-8055-4a6f-9570-2ff6d243aca0 req-b1ed5947-aa41-4990-8ef9-e6e21337d232 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Updated VIF entry in instance network info cache for port b76b92a4-1882-4f89-94f4-3a4700f9c379. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:30:12 compute-0 nova_compute[257802]: 2025-10-02 12:30:12.533 2 DEBUG nova.network.neutron [req-a202b232-8055-4a6f-9570-2ff6d243aca0 req-b1ed5947-aa41-4990-8ef9-e6e21337d232 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Updating instance_info_cache with network_info: [{"id": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "address": "fa:16:3e:23:4b:2a", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb76b92a4-18", "ovs_interfaceid": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:30:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:30:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:30:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:30:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:30:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:30:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:30:12 compute-0 nova_compute[257802]: 2025-10-02 12:30:12.722 2 DEBUG oslo_concurrency.lockutils [req-a202b232-8055-4a6f-9570-2ff6d243aca0 req-b1ed5947-aa41-4990-8ef9-e6e21337d232 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-634c38a6-caab-410d-8748-3ec1fd6f9cdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:30:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Oct 02 12:30:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Oct 02 12:30:12 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Oct 02 12:30:12 compute-0 ceph-mon[73607]: pgmap v2054: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 219 op/s
Oct 02 12:30:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:12.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:13 compute-0 nova_compute[257802]: 2025-10-02 12:30:13.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:13.566188) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408213566225, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 1740, "num_deletes": 257, "total_data_size": 2858830, "memory_usage": 2894840, "flush_reason": "Manual Compaction"}
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Oct 02 12:30:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:13.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408213605622, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 2812386, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43931, "largest_seqno": 45670, "table_properties": {"data_size": 2804580, "index_size": 4620, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17011, "raw_average_key_size": 20, "raw_value_size": 2788564, "raw_average_value_size": 3311, "num_data_blocks": 202, "num_entries": 842, "num_filter_entries": 842, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408054, "oldest_key_time": 1759408054, "file_creation_time": 1759408213, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 39474 microseconds, and 5797 cpu microseconds.
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:30:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.7 MiB/s wr, 110 op/s
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:13.605661) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 2812386 bytes OK
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:13.605679) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:13.617918) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:13.617932) EVENT_LOG_v1 {"time_micros": 1759408213617928, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:13.617947) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 2851493, prev total WAL file size 2851493, number of live WAL files 2.
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:13.618946) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353035' seq:72057594037927935, type:22 .. '6C6F676D0031373537' seq:0, type:0; will stop at (end)
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(2746KB)], [95(10MB)]
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408213619000, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 13918078, "oldest_snapshot_seqno": -1}
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 7484 keys, 13779051 bytes, temperature: kUnknown
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408213719241, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 13779051, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13725329, "index_size": 33898, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18757, "raw_key_size": 192292, "raw_average_key_size": 25, "raw_value_size": 13588043, "raw_average_value_size": 1815, "num_data_blocks": 1350, "num_entries": 7484, "num_filter_entries": 7484, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759408213, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:13.719493) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 13779051 bytes
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:13.721900) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.7 rd, 137.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 10.6 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(9.8) write-amplify(4.9) OK, records in: 8015, records dropped: 531 output_compression: NoCompression
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:13.721924) EVENT_LOG_v1 {"time_micros": 1759408213721913, "job": 56, "event": "compaction_finished", "compaction_time_micros": 100311, "compaction_time_cpu_micros": 33083, "output_level": 6, "num_output_files": 1, "total_output_size": 13779051, "num_input_records": 8015, "num_output_records": 7484, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408213722585, "job": 56, "event": "table_file_deletion", "file_number": 97}
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408213724522, "job": 56, "event": "table_file_deletion", "file_number": 95}
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:13.618869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:13.724614) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:13.724619) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:13.724621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:13.724622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:13 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:13.724624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:13 compute-0 ceph-mon[73607]: osdmap e308: 3 total, 3 up, 3 in
Oct 02 12:30:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3925013349' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:14 compute-0 ovn_controller[148183]: 2025-10-02T12:30:14Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:53:9c:95 10.100.0.10
Oct 02 12:30:14 compute-0 ovn_controller[148183]: 2025-10-02T12:30:14Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:53:9c:95 10.100.0.10
Oct 02 12:30:14 compute-0 ceph-mon[73607]: pgmap v2056: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.7 MiB/s wr, 110 op/s
Oct 02 12:30:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2936122552' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:14.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:15 compute-0 nova_compute[257802]: 2025-10-02 12:30:15.277 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:30:15 compute-0 nova_compute[257802]: 2025-10-02 12:30:15.547 2 DEBUG nova.compute.manager [req-22fe4d62-9d91-4440-94fb-76662242cfd7 req-f09f155f-de97-4e5b-818e-81f4cca1d071 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Received event network-vif-plugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:15 compute-0 nova_compute[257802]: 2025-10-02 12:30:15.548 2 DEBUG oslo_concurrency.lockutils [req-22fe4d62-9d91-4440-94fb-76662242cfd7 req-f09f155f-de97-4e5b-818e-81f4cca1d071 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:15 compute-0 nova_compute[257802]: 2025-10-02 12:30:15.548 2 DEBUG oslo_concurrency.lockutils [req-22fe4d62-9d91-4440-94fb-76662242cfd7 req-f09f155f-de97-4e5b-818e-81f4cca1d071 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:15 compute-0 nova_compute[257802]: 2025-10-02 12:30:15.549 2 DEBUG oslo_concurrency.lockutils [req-22fe4d62-9d91-4440-94fb-76662242cfd7 req-f09f155f-de97-4e5b-818e-81f4cca1d071 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:15 compute-0 nova_compute[257802]: 2025-10-02 12:30:15.550 2 DEBUG nova.compute.manager [req-22fe4d62-9d91-4440-94fb-76662242cfd7 req-f09f155f-de97-4e5b-818e-81f4cca1d071 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] No waiting events found dispatching network-vif-plugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:15 compute-0 nova_compute[257802]: 2025-10-02 12:30:15.550 2 WARNING nova.compute.manager [req-22fe4d62-9d91-4440-94fb-76662242cfd7 req-f09f155f-de97-4e5b-818e-81f4cca1d071 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Received unexpected event network-vif-plugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 for instance with vm_state active and task_state resize_finish.
Oct 02 12:30:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:15.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2057: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 179 KiB/s rd, 1.3 MiB/s wr, 85 op/s
Oct 02 12:30:15 compute-0 nova_compute[257802]: 2025-10-02 12:30:15.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:15.739 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:30:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:15.741 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:30:16 compute-0 sudo[329473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:16 compute-0 sudo[329473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:16 compute-0 sudo[329473]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:16 compute-0 sudo[329505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:30:16 compute-0 sudo[329505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:16 compute-0 sudo[329505]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:16 compute-0 podman[329498]: 2025-10-02 12:30:16.692905706 +0000 UTC m=+0.085952214 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:30:16 compute-0 sudo[329543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:16 compute-0 sudo[329543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:16 compute-0 sudo[329543]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:16 compute-0 sudo[329574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 12:30:16 compute-0 sudo[329574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:16 compute-0 nova_compute[257802]: 2025-10-02 12:30:16.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:16.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:17 compute-0 ceph-mon[73607]: pgmap v2057: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 179 KiB/s rd, 1.3 MiB/s wr, 85 op/s
Oct 02 12:30:17 compute-0 podman[329671]: 2025-10-02 12:30:17.349309065 +0000 UTC m=+0.188110566 container exec 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 12:30:17 compute-0 podman[329671]: 2025-10-02 12:30:17.446927955 +0000 UTC m=+0.285729476 container exec_died 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 12:30:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:30:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:17.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:30:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 639 KiB/s rd, 208 KiB/s wr, 81 op/s
Oct 02 12:30:17 compute-0 nova_compute[257802]: 2025-10-02 12:30:17.877 2 DEBUG nova.compute.manager [req-8545b034-06b4-4609-9463-98a03ba04bfc req-893cc4ae-da0f-4bd4-9e46-4a3f3dde442d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Received event network-vif-plugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:17 compute-0 nova_compute[257802]: 2025-10-02 12:30:17.878 2 DEBUG oslo_concurrency.lockutils [req-8545b034-06b4-4609-9463-98a03ba04bfc req-893cc4ae-da0f-4bd4-9e46-4a3f3dde442d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:17 compute-0 nova_compute[257802]: 2025-10-02 12:30:17.878 2 DEBUG oslo_concurrency.lockutils [req-8545b034-06b4-4609-9463-98a03ba04bfc req-893cc4ae-da0f-4bd4-9e46-4a3f3dde442d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:17 compute-0 nova_compute[257802]: 2025-10-02 12:30:17.878 2 DEBUG oslo_concurrency.lockutils [req-8545b034-06b4-4609-9463-98a03ba04bfc req-893cc4ae-da0f-4bd4-9e46-4a3f3dde442d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:17 compute-0 nova_compute[257802]: 2025-10-02 12:30:17.878 2 DEBUG nova.compute.manager [req-8545b034-06b4-4609-9463-98a03ba04bfc req-893cc4ae-da0f-4bd4-9e46-4a3f3dde442d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] No waiting events found dispatching network-vif-plugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:17 compute-0 nova_compute[257802]: 2025-10-02 12:30:17.879 2 WARNING nova.compute.manager [req-8545b034-06b4-4609-9463-98a03ba04bfc req-893cc4ae-da0f-4bd4-9e46-4a3f3dde442d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Received unexpected event network-vif-plugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 for instance with vm_state resized and task_state resize_reverting.
Oct 02 12:30:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 12:30:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:30:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 12:30:18 compute-0 nova_compute[257802]: 2025-10-02 12:30:18.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:30:18 compute-0 podman[329806]: 2025-10-02 12:30:18.048063934 +0000 UTC m=+0.086740803 container exec 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 12:30:18 compute-0 podman[329806]: 2025-10-02 12:30:18.08328324 +0000 UTC m=+0.121960059 container exec_died 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 12:30:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1757451483' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:18 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:30:18 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:30:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3727581969' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:18 compute-0 nova_compute[257802]: 2025-10-02 12:30:18.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:18 compute-0 podman[329873]: 2025-10-02 12:30:18.323326052 +0000 UTC m=+0.070453864 container exec a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, name=keepalived, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, com.redhat.component=keepalived-container, io.openshift.expose-services=, release=1793, vcs-type=git, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., architecture=x86_64)
Oct 02 12:30:18 compute-0 podman[329873]: 2025-10-02 12:30:18.36228241 +0000 UTC m=+0.109410212 container exec_died a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, vendor=Red Hat, Inc., name=keepalived, com.redhat.component=keepalived-container, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, architecture=x86_64, distribution-scope=public, description=keepalived for Ceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, vcs-type=git, io.buildah.version=1.28.2)
Oct 02 12:30:18 compute-0 sudo[329574]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:30:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Oct 02 12:30:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:30:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:30:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Oct 02 12:30:18 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Oct 02 12:30:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:30:18 compute-0 sudo[329925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:18 compute-0 sudo[329925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:18 compute-0 sudo[329925]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:18 compute-0 sudo[329950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:30:18 compute-0 sudo[329950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:18 compute-0 sudo[329950]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:18.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:18 compute-0 sudo[329975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:18 compute-0 sudo[329975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:18 compute-0 sudo[329975]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:19 compute-0 sudo[330000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:30:19 compute-0 sudo[330000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:19 compute-0 ceph-mon[73607]: pgmap v2058: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 639 KiB/s rd, 208 KiB/s wr, 81 op/s
Oct 02 12:30:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:30:19 compute-0 ceph-mon[73607]: osdmap e309: 3 total, 3 up, 3 in
Oct 02 12:30:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:30:19 compute-0 sudo[330000]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:30:19 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:30:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:30:19 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:30:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:30:19 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:30:19 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 30dd58ee-f2f6-4815-b328-c65d86093111 does not exist
Oct 02 12:30:19 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 4d422cd2-2cd1-4d1a-8490-ae1db9e3b1a7 does not exist
Oct 02 12:30:19 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev ed51bbc2-ba10-478d-babd-e91105512dbe does not exist
Oct 02 12:30:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:30:19 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:30:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:30:19 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:30:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:30:19 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:30:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:30:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:19.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:30:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2060: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 34 KiB/s wr, 154 op/s
Oct 02 12:30:19 compute-0 sudo[330057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:19 compute-0 sudo[330057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:19 compute-0 sudo[330057]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:19 compute-0 sudo[330082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:30:19 compute-0 sudo[330082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:19 compute-0 sudo[330082]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:19 compute-0 sudo[330107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:19 compute-0 sudo[330107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:19 compute-0 sudo[330107]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:19 compute-0 sudo[330132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:30:19 compute-0 sudo[330132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:20 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:30:20 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:30:20 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:30:20 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:30:20 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:30:20 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:30:20 compute-0 podman[330198]: 2025-10-02 12:30:20.212728574 +0000 UTC m=+0.052347337 container create af1adef1df009d19d640ef3068d2db9b5d8e91127787132070cfbcb02e49e862 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wiles, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:30:20 compute-0 systemd[1]: Started libpod-conmon-af1adef1df009d19d640ef3068d2db9b5d8e91127787132070cfbcb02e49e862.scope.
Oct 02 12:30:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:30:20 compute-0 podman[330198]: 2025-10-02 12:30:20.182303786 +0000 UTC m=+0.021922609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:30:20 compute-0 podman[330198]: 2025-10-02 12:30:20.294625838 +0000 UTC m=+0.134244641 container init af1adef1df009d19d640ef3068d2db9b5d8e91127787132070cfbcb02e49e862 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 12:30:20 compute-0 podman[330198]: 2025-10-02 12:30:20.301953718 +0000 UTC m=+0.141572491 container start af1adef1df009d19d640ef3068d2db9b5d8e91127787132070cfbcb02e49e862 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wiles, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 12:30:20 compute-0 inspiring_wiles[330215]: 167 167
Oct 02 12:30:20 compute-0 systemd[1]: libpod-af1adef1df009d19d640ef3068d2db9b5d8e91127787132070cfbcb02e49e862.scope: Deactivated successfully.
Oct 02 12:30:20 compute-0 podman[330198]: 2025-10-02 12:30:20.30813878 +0000 UTC m=+0.147757563 container attach af1adef1df009d19d640ef3068d2db9b5d8e91127787132070cfbcb02e49e862 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wiles, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:30:20 compute-0 podman[330198]: 2025-10-02 12:30:20.308867628 +0000 UTC m=+0.148486401 container died af1adef1df009d19d640ef3068d2db9b5d8e91127787132070cfbcb02e49e862 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 12:30:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-739705c89cae42dd0dbd1615be08d5ce21baa009eb78517d5aa608a3da607d24-merged.mount: Deactivated successfully.
Oct 02 12:30:20 compute-0 podman[330198]: 2025-10-02 12:30:20.388555698 +0000 UTC m=+0.228174491 container remove af1adef1df009d19d640ef3068d2db9b5d8e91127787132070cfbcb02e49e862 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 12:30:20 compute-0 systemd[1]: libpod-conmon-af1adef1df009d19d640ef3068d2db9b5d8e91127787132070cfbcb02e49e862.scope: Deactivated successfully.
Oct 02 12:30:20 compute-0 podman[330239]: 2025-10-02 12:30:20.580375454 +0000 UTC m=+0.047996891 container create 4f243cafbb21f6beacfe4036b1832e4a4142f75a013efe9e106fc4332d46714a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:30:20 compute-0 systemd[1]: Started libpod-conmon-4f243cafbb21f6beacfe4036b1832e4a4142f75a013efe9e106fc4332d46714a.scope.
Oct 02 12:30:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:30:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d06217030a61429289da3d0b6f0447e6e20a44a45a7210403f9df5630d30cf2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d06217030a61429289da3d0b6f0447e6e20a44a45a7210403f9df5630d30cf2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d06217030a61429289da3d0b6f0447e6e20a44a45a7210403f9df5630d30cf2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d06217030a61429289da3d0b6f0447e6e20a44a45a7210403f9df5630d30cf2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d06217030a61429289da3d0b6f0447e6e20a44a45a7210403f9df5630d30cf2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:20 compute-0 podman[330239]: 2025-10-02 12:30:20.558872674 +0000 UTC m=+0.026494121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:30:20 compute-0 podman[330239]: 2025-10-02 12:30:20.665011974 +0000 UTC m=+0.132633441 container init 4f243cafbb21f6beacfe4036b1832e4a4142f75a013efe9e106fc4332d46714a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 12:30:20 compute-0 podman[330239]: 2025-10-02 12:30:20.671562345 +0000 UTC m=+0.139183782 container start 4f243cafbb21f6beacfe4036b1832e4a4142f75a013efe9e106fc4332d46714a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_almeida, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:30:20 compute-0 podman[330239]: 2025-10-02 12:30:20.677836699 +0000 UTC m=+0.145458136 container attach 4f243cafbb21f6beacfe4036b1832e4a4142f75a013efe9e106fc4332d46714a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_almeida, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 12:30:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:20.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:21 compute-0 ceph-mon[73607]: pgmap v2060: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 34 KiB/s wr, 154 op/s
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.304 2 DEBUG oslo_concurrency.lockutils [None req-7febb989-cba8-43f3-bf83-879987226970 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "fced40d2-4fc7-4939-9fbb-bdb61d750526" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.305 2 DEBUG oslo_concurrency.lockutils [None req-7febb989-cba8-43f3-bf83-879987226970 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.305 2 DEBUG oslo_concurrency.lockutils [None req-7febb989-cba8-43f3-bf83-879987226970 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.306 2 DEBUG oslo_concurrency.lockutils [None req-7febb989-cba8-43f3-bf83-879987226970 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.306 2 DEBUG oslo_concurrency.lockutils [None req-7febb989-cba8-43f3-bf83-879987226970 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.307 2 INFO nova.compute.manager [None req-7febb989-cba8-43f3-bf83-879987226970 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Terminating instance
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.308 2 DEBUG nova.compute.manager [None req-7febb989-cba8-43f3-bf83-879987226970 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:30:21 compute-0 kernel: tap7cb2acba-67 (unregistering): left promiscuous mode
Oct 02 12:30:21 compute-0 NetworkManager[44987]: <info>  [1759408221.3646] device (tap7cb2acba-67): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:30:21 compute-0 ovn_controller[148183]: 2025-10-02T12:30:21Z|00495|binding|INFO|Releasing lport 7cb2acba-67d6-4041-97fe-10e2d80dcd21 from this chassis (sb_readonly=0)
Oct 02 12:30:21 compute-0 ovn_controller[148183]: 2025-10-02T12:30:21Z|00496|binding|INFO|Setting lport 7cb2acba-67d6-4041-97fe-10e2d80dcd21 down in Southbound
Oct 02 12:30:21 compute-0 ovn_controller[148183]: 2025-10-02T12:30:21Z|00497|binding|INFO|Removing iface tap7cb2acba-67 ovn-installed in OVS
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.375 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408206.3747993, 634c38a6-caab-410d-8748-3ec1fd6f9cdc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.376 2 INFO nova.compute.manager [-] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] VM Stopped (Lifecycle Event)
Oct 02 12:30:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:21.382 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:9c:95 10.100.0.10'], port_security=['fa:16:3e:53:9c:95 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'fced40d2-4fc7-4939-9fbb-bdb61d750526', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27a1729bf10548219b90df46839849f5', 'neutron:revision_number': '10', 'neutron:security_group_ids': '19f6d4f0-1655-4062-a124-10140844bfae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f7e0b23-d51b-4498-9dd8-e3096f69c99c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=7cb2acba-67d6-4041-97fe-10e2d80dcd21) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:30:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:21.384 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 7cb2acba-67d6-4041-97fe-10e2d80dcd21 in datapath 247d774d-0cc8-4ef2-a9b8-c756adae0874 unbound from our chassis
Oct 02 12:30:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:21.385 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 247d774d-0cc8-4ef2-a9b8-c756adae0874, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:30:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:21.386 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[915fef07-77af-46de-b7e3-7d843177b3cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:21.387 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 namespace which is not needed anymore
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.395 2 DEBUG nova.compute.manager [None req-0492c562-f4ff-456d-91ce-948a8fe61858 - - - - - -] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.401 2 DEBUG nova.compute.manager [None req-0492c562-f4ff-456d-91ce-948a8fe61858 - - - - - -] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.422 2 INFO nova.compute.manager [None req-0492c562-f4ff-456d-91ce-948a8fe61858 - - - - - -] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-0.ctlplane.example.com
Oct 02 12:30:21 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000071.scope: Deactivated successfully.
Oct 02 12:30:21 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000071.scope: Consumed 14.154s CPU time.
Oct 02 12:30:21 compute-0 systemd-machined[211836]: Machine qemu-57-instance-00000071 terminated.
Oct 02 12:30:21 compute-0 tender_almeida[330257]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:30:21 compute-0 tender_almeida[330257]: --> relative data size: 1.0
Oct 02 12:30:21 compute-0 tender_almeida[330257]: --> All data devices are unavailable
Oct 02 12:30:21 compute-0 systemd[1]: libpod-4f243cafbb21f6beacfe4036b1832e4a4142f75a013efe9e106fc4332d46714a.scope: Deactivated successfully.
Oct 02 12:30:21 compute-0 conmon[330257]: conmon 4f243cafbb21f6beacfe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4f243cafbb21f6beacfe4036b1832e4a4142f75a013efe9e106fc4332d46714a.scope/container/memory.events
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.552 2 INFO nova.virt.libvirt.driver [-] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Instance destroyed successfully.
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.553 2 DEBUG nova.objects.instance [None req-7febb989-cba8-43f3-bf83-879987226970 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'resources' on Instance uuid fced40d2-4fc7-4939-9fbb-bdb61d750526 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.586 2 DEBUG nova.virt.libvirt.vif [None req-7febb989-cba8-43f3-bf83-879987226970 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:29:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-475120527',display_name='tempest-ServerDiskConfigTestJSON-server-475120527',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-475120527',id=113,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:30:01Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='27a1729bf10548219b90df46839849f5',ramdisk_id='',reservation_id='r-8s6rikzx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1123059068',owner_user_name='tempest-ServerDiskConfigTestJSON-1123059068-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:30:10Z,user_data=None,user_id='4a89b71e2513413e922ee6d5d06362b1',uuid=fced40d2-4fc7-4939-9fbb-bdb61d750526,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "address": "fa:16:3e:53:9c:95", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7cb2acba-67", "ovs_interfaceid": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.586 2 DEBUG nova.network.os_vif_util [None req-7febb989-cba8-43f3-bf83-879987226970 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converting VIF {"id": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "address": "fa:16:3e:53:9c:95", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7cb2acba-67", "ovs_interfaceid": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.587 2 DEBUG nova.network.os_vif_util [None req-7febb989-cba8-43f3-bf83-879987226970 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:53:9c:95,bridge_name='br-int',has_traffic_filtering=True,id=7cb2acba-67d6-4041-97fe-10e2d80dcd21,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7cb2acba-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.588 2 DEBUG os_vif [None req-7febb989-cba8-43f3-bf83-879987226970 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:9c:95,bridge_name='br-int',has_traffic_filtering=True,id=7cb2acba-67d6-4041-97fe-10e2d80dcd21,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7cb2acba-67') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.590 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7cb2acba-67, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.608 2 INFO os_vif [None req-7febb989-cba8-43f3-bf83-879987226970 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:9c:95,bridge_name='br-int',has_traffic_filtering=True,id=7cb2acba-67d6-4041-97fe-10e2d80dcd21,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7cb2acba-67')
Oct 02 12:30:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:21.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2061: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 16 KiB/s wr, 139 op/s
Oct 02 12:30:21 compute-0 podman[330239]: 2025-10-02 12:30:21.626275088 +0000 UTC m=+1.093896535 container died 4f243cafbb21f6beacfe4036b1832e4a4142f75a013efe9e106fc4332d46714a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_almeida, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:30:21 compute-0 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[329220]: [NOTICE]   (329224) : haproxy version is 2.8.14-c23fe91
Oct 02 12:30:21 compute-0 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[329220]: [NOTICE]   (329224) : path to executable is /usr/sbin/haproxy
Oct 02 12:30:21 compute-0 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[329220]: [WARNING]  (329224) : Exiting Master process...
Oct 02 12:30:21 compute-0 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[329220]: [WARNING]  (329224) : Exiting Master process...
Oct 02 12:30:21 compute-0 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[329220]: [ALERT]    (329224) : Current worker (329226) exited with code 143 (Terminated)
Oct 02 12:30:21 compute-0 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[329220]: [WARNING]  (329224) : All workers exited. Exiting... (0)
Oct 02 12:30:21 compute-0 systemd[1]: libpod-99551e68350bf8f68117027333980127b405877706afd685fa66c7ca61b1e7f4.scope: Deactivated successfully.
Oct 02 12:30:21 compute-0 podman[330293]: 2025-10-02 12:30:21.64223351 +0000 UTC m=+0.157138094 container died 99551e68350bf8f68117027333980127b405877706afd685fa66c7ca61b1e7f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:30:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d06217030a61429289da3d0b6f0447e6e20a44a45a7210403f9df5630d30cf2-merged.mount: Deactivated successfully.
Oct 02 12:30:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-99551e68350bf8f68117027333980127b405877706afd685fa66c7ca61b1e7f4-userdata-shm.mount: Deactivated successfully.
Oct 02 12:30:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa39a2a74cfc68851217247d72cc539b18e09fc5a29e7fc2fe583bdc827035b4-merged.mount: Deactivated successfully.
Oct 02 12:30:21 compute-0 podman[330239]: 2025-10-02 12:30:21.716427494 +0000 UTC m=+1.184048931 container remove 4f243cafbb21f6beacfe4036b1832e4a4142f75a013efe9e106fc4332d46714a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_almeida, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:30:21 compute-0 podman[330293]: 2025-10-02 12:30:21.722902163 +0000 UTC m=+0.237806737 container cleanup 99551e68350bf8f68117027333980127b405877706afd685fa66c7ca61b1e7f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:30:21 compute-0 systemd[1]: libpod-conmon-4f243cafbb21f6beacfe4036b1832e4a4142f75a013efe9e106fc4332d46714a.scope: Deactivated successfully.
Oct 02 12:30:21 compute-0 systemd[1]: libpod-conmon-99551e68350bf8f68117027333980127b405877706afd685fa66c7ca61b1e7f4.scope: Deactivated successfully.
Oct 02 12:30:21 compute-0 sudo[330132]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.774 2 DEBUG nova.compute.manager [req-bb545f1a-bb17-4f43-b690-19a2318b2d91 req-d38895b9-adcb-40c2-9276-3038351a3aa6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received event network-vif-unplugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.775 2 DEBUG oslo_concurrency.lockutils [req-bb545f1a-bb17-4f43-b690-19a2318b2d91 req-d38895b9-adcb-40c2-9276-3038351a3aa6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.775 2 DEBUG oslo_concurrency.lockutils [req-bb545f1a-bb17-4f43-b690-19a2318b2d91 req-d38895b9-adcb-40c2-9276-3038351a3aa6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.776 2 DEBUG oslo_concurrency.lockutils [req-bb545f1a-bb17-4f43-b690-19a2318b2d91 req-d38895b9-adcb-40c2-9276-3038351a3aa6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.776 2 DEBUG nova.compute.manager [req-bb545f1a-bb17-4f43-b690-19a2318b2d91 req-d38895b9-adcb-40c2-9276-3038351a3aa6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] No waiting events found dispatching network-vif-unplugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.776 2 DEBUG nova.compute.manager [req-bb545f1a-bb17-4f43-b690-19a2318b2d91 req-d38895b9-adcb-40c2-9276-3038351a3aa6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received event network-vif-unplugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:30:21 compute-0 podman[330366]: 2025-10-02 12:30:21.805636127 +0000 UTC m=+0.054637304 container remove 99551e68350bf8f68117027333980127b405877706afd685fa66c7ca61b1e7f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 12:30:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:21.811 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[215c4f7d-ad9e-4563-9cb5-1a209ff55679]: (4, ('Thu Oct  2 12:30:21 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 (99551e68350bf8f68117027333980127b405877706afd685fa66c7ca61b1e7f4)\n99551e68350bf8f68117027333980127b405877706afd685fa66c7ca61b1e7f4\nThu Oct  2 12:30:21 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 (99551e68350bf8f68117027333980127b405877706afd685fa66c7ca61b1e7f4)\n99551e68350bf8f68117027333980127b405877706afd685fa66c7ca61b1e7f4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:21.813 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6f8ca3fb-a996-490a-9d44-a61973abb94b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:21.814 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap247d774d-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:21 compute-0 sudo[330373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:21 compute-0 kernel: tap247d774d-00: left promiscuous mode
Oct 02 12:30:21 compute-0 sudo[330373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:21 compute-0 sudo[330373]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:21 compute-0 nova_compute[257802]: 2025-10-02 12:30:21.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:21.837 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[67ae3827-baf8-44b0-a127-6185dc019321]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:21.864 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[33a6acc7-de16-4623-8d3f-6848c0d81340]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:21.866 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a8b006e8-2e7a-4d45-a8ec-fe73faefe04b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:21.882 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6fc1c06e-4cbc-4c3d-b254-15791a58e364]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626845, 'reachable_time': 24840, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330429, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:21.884 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:30:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:21.884 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[82a61083-1a0f-4c42-810b-551f728ca98c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:21 compute-0 systemd[1]: run-netns-ovnmeta\x2d247d774d\x2d0cc8\x2d4ef2\x2da9b8\x2dc756adae0874.mount: Deactivated successfully.
Oct 02 12:30:21 compute-0 sudo[330404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:30:21 compute-0 sudo[330404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:21 compute-0 sudo[330404]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:21 compute-0 sudo[330433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:21 compute-0 sudo[330433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:21 compute-0 sudo[330433]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:21 compute-0 sudo[330458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:30:22 compute-0 sudo[330458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:22 compute-0 nova_compute[257802]: 2025-10-02 12:30:22.160 2 INFO nova.virt.libvirt.driver [None req-7febb989-cba8-43f3-bf83-879987226970 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Deleting instance files /var/lib/nova/instances/fced40d2-4fc7-4939-9fbb-bdb61d750526_del
Oct 02 12:30:22 compute-0 nova_compute[257802]: 2025-10-02 12:30:22.163 2 INFO nova.virt.libvirt.driver [None req-7febb989-cba8-43f3-bf83-879987226970 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Deletion of /var/lib/nova/instances/fced40d2-4fc7-4939-9fbb-bdb61d750526_del complete
Oct 02 12:30:22 compute-0 nova_compute[257802]: 2025-10-02 12:30:22.240 2 INFO nova.compute.manager [None req-7febb989-cba8-43f3-bf83-879987226970 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Took 0.93 seconds to destroy the instance on the hypervisor.
Oct 02 12:30:22 compute-0 nova_compute[257802]: 2025-10-02 12:30:22.241 2 DEBUG oslo.service.loopingcall [None req-7febb989-cba8-43f3-bf83-879987226970 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:30:22 compute-0 nova_compute[257802]: 2025-10-02 12:30:22.241 2 DEBUG nova.compute.manager [-] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:30:22 compute-0 nova_compute[257802]: 2025-10-02 12:30:22.241 2 DEBUG nova.network.neutron [-] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:30:22 compute-0 podman[330523]: 2025-10-02 12:30:22.318272531 +0000 UTC m=+0.049180750 container create 38e5db6499d75ccdfb422fca1876c416ee780daab31a7b28811e719619ca89dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mirzakhani, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:30:22 compute-0 systemd[1]: Started libpod-conmon-38e5db6499d75ccdfb422fca1876c416ee780daab31a7b28811e719619ca89dd.scope.
Oct 02 12:30:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:30:22 compute-0 podman[330523]: 2025-10-02 12:30:22.289462762 +0000 UTC m=+0.020370931 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:30:22 compute-0 podman[330523]: 2025-10-02 12:30:22.398148994 +0000 UTC m=+0.129057173 container init 38e5db6499d75ccdfb422fca1876c416ee780daab31a7b28811e719619ca89dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mirzakhani, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:30:22 compute-0 podman[330523]: 2025-10-02 12:30:22.405683559 +0000 UTC m=+0.136591698 container start 38e5db6499d75ccdfb422fca1876c416ee780daab31a7b28811e719619ca89dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mirzakhani, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 12:30:22 compute-0 podman[330523]: 2025-10-02 12:30:22.410908228 +0000 UTC m=+0.141816407 container attach 38e5db6499d75ccdfb422fca1876c416ee780daab31a7b28811e719619ca89dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:30:22 compute-0 loving_mirzakhani[330540]: 167 167
Oct 02 12:30:22 compute-0 systemd[1]: libpod-38e5db6499d75ccdfb422fca1876c416ee780daab31a7b28811e719619ca89dd.scope: Deactivated successfully.
Oct 02 12:30:22 compute-0 conmon[330540]: conmon 38e5db6499d75ccdfb42 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-38e5db6499d75ccdfb422fca1876c416ee780daab31a7b28811e719619ca89dd.scope/container/memory.events
Oct 02 12:30:22 compute-0 podman[330523]: 2025-10-02 12:30:22.413074592 +0000 UTC m=+0.143982741 container died 38e5db6499d75ccdfb422fca1876c416ee780daab31a7b28811e719619ca89dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mirzakhani, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 12:30:22 compute-0 podman[330523]: 2025-10-02 12:30:22.472376069 +0000 UTC m=+0.203284218 container remove 38e5db6499d75ccdfb422fca1876c416ee780daab31a7b28811e719619ca89dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:30:22 compute-0 systemd[1]: libpod-conmon-38e5db6499d75ccdfb422fca1876c416ee780daab31a7b28811e719619ca89dd.scope: Deactivated successfully.
Oct 02 12:30:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-75a81ff117a37bca723738b1352d0d0c5ef65fe54062cf9f3faec6a28efb26b0-merged.mount: Deactivated successfully.
Oct 02 12:30:22 compute-0 podman[330565]: 2025-10-02 12:30:22.662938194 +0000 UTC m=+0.029058115 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:30:22 compute-0 podman[330565]: 2025-10-02 12:30:22.936131741 +0000 UTC m=+0.302251642 container create 8ddf40406ee72209c1ee89793e5d06c09e458db9b5f3faeabe65f9ca5b44e63f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_goldstine, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:30:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:30:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:22.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:30:23 compute-0 systemd[1]: Started libpod-conmon-8ddf40406ee72209c1ee89793e5d06c09e458db9b5f3faeabe65f9ca5b44e63f.scope.
Oct 02 12:30:23 compute-0 nova_compute[257802]: 2025-10-02 12:30:23.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:30:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e659ce0e0df0151aeb20cead4ef85570bac150e46fbe01da6a6b0e0eff52416/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e659ce0e0df0151aeb20cead4ef85570bac150e46fbe01da6a6b0e0eff52416/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e659ce0e0df0151aeb20cead4ef85570bac150e46fbe01da6a6b0e0eff52416/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e659ce0e0df0151aeb20cead4ef85570bac150e46fbe01da6a6b0e0eff52416/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:23 compute-0 podman[330565]: 2025-10-02 12:30:23.081054894 +0000 UTC m=+0.447174885 container init 8ddf40406ee72209c1ee89793e5d06c09e458db9b5f3faeabe65f9ca5b44e63f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_goldstine, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 12:30:23 compute-0 podman[330565]: 2025-10-02 12:30:23.091263325 +0000 UTC m=+0.457383226 container start 8ddf40406ee72209c1ee89793e5d06c09e458db9b5f3faeabe65f9ca5b44e63f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:30:23 compute-0 podman[330565]: 2025-10-02 12:30:23.100298718 +0000 UTC m=+0.466418639 container attach 8ddf40406ee72209c1ee89793e5d06c09e458db9b5f3faeabe65f9ca5b44e63f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 12:30:23 compute-0 ceph-mon[73607]: pgmap v2061: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 16 KiB/s wr, 139 op/s
Oct 02 12:30:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 305 active+clean; 582 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 42 KiB/s wr, 198 op/s
Oct 02 12:30:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:30:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:23.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:30:23 compute-0 focused_goldstine[330582]: {
Oct 02 12:30:23 compute-0 focused_goldstine[330582]:     "1": [
Oct 02 12:30:23 compute-0 focused_goldstine[330582]:         {
Oct 02 12:30:23 compute-0 focused_goldstine[330582]:             "devices": [
Oct 02 12:30:23 compute-0 focused_goldstine[330582]:                 "/dev/loop3"
Oct 02 12:30:23 compute-0 focused_goldstine[330582]:             ],
Oct 02 12:30:23 compute-0 focused_goldstine[330582]:             "lv_name": "ceph_lv0",
Oct 02 12:30:23 compute-0 focused_goldstine[330582]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:30:23 compute-0 focused_goldstine[330582]:             "lv_size": "7511998464",
Oct 02 12:30:23 compute-0 focused_goldstine[330582]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:30:23 compute-0 focused_goldstine[330582]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:30:23 compute-0 focused_goldstine[330582]:             "name": "ceph_lv0",
Oct 02 12:30:23 compute-0 focused_goldstine[330582]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:30:23 compute-0 focused_goldstine[330582]:             "tags": {
Oct 02 12:30:23 compute-0 focused_goldstine[330582]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:30:23 compute-0 focused_goldstine[330582]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:30:23 compute-0 focused_goldstine[330582]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:30:23 compute-0 focused_goldstine[330582]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:30:23 compute-0 focused_goldstine[330582]:                 "ceph.cluster_name": "ceph",
Oct 02 12:30:23 compute-0 focused_goldstine[330582]:                 "ceph.crush_device_class": "",
Oct 02 12:30:23 compute-0 focused_goldstine[330582]:                 "ceph.encrypted": "0",
Oct 02 12:30:23 compute-0 focused_goldstine[330582]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:30:23 compute-0 focused_goldstine[330582]:                 "ceph.osd_id": "1",
Oct 02 12:30:23 compute-0 focused_goldstine[330582]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:30:23 compute-0 focused_goldstine[330582]:                 "ceph.type": "block",
Oct 02 12:30:23 compute-0 focused_goldstine[330582]:                 "ceph.vdo": "0"
Oct 02 12:30:23 compute-0 focused_goldstine[330582]:             },
Oct 02 12:30:23 compute-0 focused_goldstine[330582]:             "type": "block",
Oct 02 12:30:23 compute-0 focused_goldstine[330582]:             "vg_name": "ceph_vg0"
Oct 02 12:30:23 compute-0 focused_goldstine[330582]:         }
Oct 02 12:30:23 compute-0 focused_goldstine[330582]:     ]
Oct 02 12:30:23 compute-0 focused_goldstine[330582]: }
Oct 02 12:30:23 compute-0 systemd[1]: libpod-8ddf40406ee72209c1ee89793e5d06c09e458db9b5f3faeabe65f9ca5b44e63f.scope: Deactivated successfully.
Oct 02 12:30:23 compute-0 podman[330565]: 2025-10-02 12:30:23.844440312 +0000 UTC m=+1.210560213 container died 8ddf40406ee72209c1ee89793e5d06c09e458db9b5f3faeabe65f9ca5b44e63f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 12:30:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e659ce0e0df0151aeb20cead4ef85570bac150e46fbe01da6a6b0e0eff52416-merged.mount: Deactivated successfully.
Oct 02 12:30:23 compute-0 podman[330565]: 2025-10-02 12:30:23.916640488 +0000 UTC m=+1.282760389 container remove 8ddf40406ee72209c1ee89793e5d06c09e458db9b5f3faeabe65f9ca5b44e63f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:30:23 compute-0 systemd[1]: libpod-conmon-8ddf40406ee72209c1ee89793e5d06c09e458db9b5f3faeabe65f9ca5b44e63f.scope: Deactivated successfully.
Oct 02 12:30:23 compute-0 sudo[330458]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:24 compute-0 sudo[330605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:24 compute-0 sudo[330605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:24 compute-0 sudo[330605]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:24 compute-0 sudo[330630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:30:24 compute-0 sudo[330630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:24 compute-0 sudo[330630]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:24 compute-0 sudo[330655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:24 compute-0 sudo[330655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:24 compute-0 sudo[330655]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:24 compute-0 sudo[330680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:30:24 compute-0 sudo[330680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:24 compute-0 nova_compute[257802]: 2025-10-02 12:30:24.226 2 DEBUG nova.compute.manager [req-161b1407-497c-49a6-bb82-cd37e41c42ff req-088c343d-078c-4766-a390-97fd8731c6ec d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:24 compute-0 nova_compute[257802]: 2025-10-02 12:30:24.228 2 DEBUG oslo_concurrency.lockutils [req-161b1407-497c-49a6-bb82-cd37e41c42ff req-088c343d-078c-4766-a390-97fd8731c6ec d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:24 compute-0 nova_compute[257802]: 2025-10-02 12:30:24.228 2 DEBUG oslo_concurrency.lockutils [req-161b1407-497c-49a6-bb82-cd37e41c42ff req-088c343d-078c-4766-a390-97fd8731c6ec d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:24 compute-0 nova_compute[257802]: 2025-10-02 12:30:24.228 2 DEBUG oslo_concurrency.lockutils [req-161b1407-497c-49a6-bb82-cd37e41c42ff req-088c343d-078c-4766-a390-97fd8731c6ec d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:24 compute-0 nova_compute[257802]: 2025-10-02 12:30:24.228 2 DEBUG nova.compute.manager [req-161b1407-497c-49a6-bb82-cd37e41c42ff req-088c343d-078c-4766-a390-97fd8731c6ec d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] No waiting events found dispatching network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:24 compute-0 nova_compute[257802]: 2025-10-02 12:30:24.228 2 WARNING nova.compute.manager [req-161b1407-497c-49a6-bb82-cd37e41c42ff req-088c343d-078c-4766-a390-97fd8731c6ec d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received unexpected event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 for instance with vm_state active and task_state deleting.
Oct 02 12:30:24 compute-0 podman[330746]: 2025-10-02 12:30:24.582067448 +0000 UTC m=+0.058528861 container create a4d9b4b3eb0e2c04ed676effb596fb562f9e9b16062d9a2d52920fe2e10d8b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wilson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 12:30:24 compute-0 systemd[1]: Started libpod-conmon-a4d9b4b3eb0e2c04ed676effb596fb562f9e9b16062d9a2d52920fe2e10d8b25.scope.
Oct 02 12:30:24 compute-0 podman[330746]: 2025-10-02 12:30:24.55004633 +0000 UTC m=+0.026507773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:30:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:30:24 compute-0 podman[330746]: 2025-10-02 12:30:24.685221534 +0000 UTC m=+0.161682977 container init a4d9b4b3eb0e2c04ed676effb596fb562f9e9b16062d9a2d52920fe2e10d8b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:30:24 compute-0 podman[330746]: 2025-10-02 12:30:24.69320597 +0000 UTC m=+0.169667383 container start a4d9b4b3eb0e2c04ed676effb596fb562f9e9b16062d9a2d52920fe2e10d8b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wilson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:30:24 compute-0 nifty_wilson[330763]: 167 167
Oct 02 12:30:24 compute-0 systemd[1]: libpod-a4d9b4b3eb0e2c04ed676effb596fb562f9e9b16062d9a2d52920fe2e10d8b25.scope: Deactivated successfully.
Oct 02 12:30:24 compute-0 podman[330746]: 2025-10-02 12:30:24.703447142 +0000 UTC m=+0.179908575 container attach a4d9b4b3eb0e2c04ed676effb596fb562f9e9b16062d9a2d52920fe2e10d8b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wilson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:30:24 compute-0 podman[330746]: 2025-10-02 12:30:24.703819771 +0000 UTC m=+0.180281204 container died a4d9b4b3eb0e2c04ed676effb596fb562f9e9b16062d9a2d52920fe2e10d8b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wilson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:30:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa25a0737331bc3e0b2f9c8e359239cdd7b2e42095ad46f437e630e0a666ffc9-merged.mount: Deactivated successfully.
Oct 02 12:30:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:24.743 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:24 compute-0 podman[330746]: 2025-10-02 12:30:24.752693523 +0000 UTC m=+0.229154936 container remove a4d9b4b3eb0e2c04ed676effb596fb562f9e9b16062d9a2d52920fe2e10d8b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:30:24 compute-0 systemd[1]: libpod-conmon-a4d9b4b3eb0e2c04ed676effb596fb562f9e9b16062d9a2d52920fe2e10d8b25.scope: Deactivated successfully.
Oct 02 12:30:24 compute-0 nova_compute[257802]: 2025-10-02 12:30:24.788 2 DEBUG nova.network.neutron [-] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:30:24 compute-0 nova_compute[257802]: 2025-10-02 12:30:24.837 2 INFO nova.compute.manager [-] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Took 2.60 seconds to deallocate network for instance.
Oct 02 12:30:24 compute-0 nova_compute[257802]: 2025-10-02 12:30:24.884 2 DEBUG nova.compute.manager [req-637a97d0-7a6d-4b3d-be0e-d975b447d771 req-6896983c-5f55-465d-95fd-545ba491d101 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Received event network-vif-unplugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:24 compute-0 nova_compute[257802]: 2025-10-02 12:30:24.884 2 DEBUG oslo_concurrency.lockutils [req-637a97d0-7a6d-4b3d-be0e-d975b447d771 req-6896983c-5f55-465d-95fd-545ba491d101 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:24 compute-0 nova_compute[257802]: 2025-10-02 12:30:24.885 2 DEBUG oslo_concurrency.lockutils [req-637a97d0-7a6d-4b3d-be0e-d975b447d771 req-6896983c-5f55-465d-95fd-545ba491d101 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:24 compute-0 nova_compute[257802]: 2025-10-02 12:30:24.885 2 DEBUG oslo_concurrency.lockutils [req-637a97d0-7a6d-4b3d-be0e-d975b447d771 req-6896983c-5f55-465d-95fd-545ba491d101 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:24 compute-0 nova_compute[257802]: 2025-10-02 12:30:24.885 2 DEBUG nova.compute.manager [req-637a97d0-7a6d-4b3d-be0e-d975b447d771 req-6896983c-5f55-465d-95fd-545ba491d101 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] No waiting events found dispatching network-vif-unplugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:24 compute-0 nova_compute[257802]: 2025-10-02 12:30:24.885 2 WARNING nova.compute.manager [req-637a97d0-7a6d-4b3d-be0e-d975b447d771 req-6896983c-5f55-465d-95fd-545ba491d101 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Received unexpected event network-vif-unplugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 for instance with vm_state resized and task_state resize_reverting.
Oct 02 12:30:24 compute-0 nova_compute[257802]: 2025-10-02 12:30:24.922 2 DEBUG oslo_concurrency.lockutils [None req-7febb989-cba8-43f3-bf83-879987226970 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:24 compute-0 nova_compute[257802]: 2025-10-02 12:30:24.923 2 DEBUG oslo_concurrency.lockutils [None req-7febb989-cba8-43f3-bf83-879987226970 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:24 compute-0 podman[330789]: 2025-10-02 12:30:24.941989726 +0000 UTC m=+0.062962589 container create 70d0f12a8f9e532391c5c6b313011f6bf182eb3cf3faaaf9cb3d3bf571a5fc5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_nash, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:30:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:24.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:24 compute-0 systemd[1]: Started libpod-conmon-70d0f12a8f9e532391c5c6b313011f6bf182eb3cf3faaaf9cb3d3bf571a5fc5e.scope.
Oct 02 12:30:24 compute-0 podman[330789]: 2025-10-02 12:30:24.901909001 +0000 UTC m=+0.022881894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:30:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:30:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/061f3a045b6b0c021bc824f88a176f2fc3f9b908701f6fffbf9ce9d7d2537105/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/061f3a045b6b0c021bc824f88a176f2fc3f9b908701f6fffbf9ce9d7d2537105/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/061f3a045b6b0c021bc824f88a176f2fc3f9b908701f6fffbf9ce9d7d2537105/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/061f3a045b6b0c021bc824f88a176f2fc3f9b908701f6fffbf9ce9d7d2537105/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:25 compute-0 sudo[330803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:25 compute-0 sudo[330803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:25 compute-0 sudo[330803]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:25 compute-0 podman[330789]: 2025-10-02 12:30:25.053476088 +0000 UTC m=+0.174448981 container init 70d0f12a8f9e532391c5c6b313011f6bf182eb3cf3faaaf9cb3d3bf571a5fc5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 12:30:25 compute-0 podman[330789]: 2025-10-02 12:30:25.060264814 +0000 UTC m=+0.181237687 container start 70d0f12a8f9e532391c5c6b313011f6bf182eb3cf3faaaf9cb3d3bf571a5fc5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_nash, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:30:25 compute-0 podman[330789]: 2025-10-02 12:30:25.07721292 +0000 UTC m=+0.198185813 container attach 70d0f12a8f9e532391c5c6b313011f6bf182eb3cf3faaaf9cb3d3bf571a5fc5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_nash, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 12:30:25 compute-0 sudo[330833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:25 compute-0 sudo[330833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:25 compute-0 sudo[330833]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:25 compute-0 nova_compute[257802]: 2025-10-02 12:30:25.146 2 DEBUG nova.compute.manager [req-4cee80e6-186f-48ca-b2e7-3fae0211e6da req-ccbd2abd-c7a1-49ee-be88-2a52e3b48ead d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received event network-vif-deleted-7cb2acba-67d6-4041-97fe-10e2d80dcd21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:25 compute-0 nova_compute[257802]: 2025-10-02 12:30:25.206 2 DEBUG oslo_concurrency.processutils [None req-7febb989-cba8-43f3-bf83-879987226970 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:25 compute-0 ceph-mon[73607]: pgmap v2062: 305 pgs: 305 active+clean; 582 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 42 KiB/s wr, 198 op/s
Oct 02 12:30:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2063: 305 pgs: 305 active+clean; 564 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 43 KiB/s wr, 201 op/s
Oct 02 12:30:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:25.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:30:25 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3129781275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:25 compute-0 nova_compute[257802]: 2025-10-02 12:30:25.645 2 DEBUG oslo_concurrency.processutils [None req-7febb989-cba8-43f3-bf83-879987226970 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:25 compute-0 nova_compute[257802]: 2025-10-02 12:30:25.655 2 DEBUG nova.compute.provider_tree [None req-7febb989-cba8-43f3-bf83-879987226970 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:30:25 compute-0 nova_compute[257802]: 2025-10-02 12:30:25.676 2 DEBUG nova.scheduler.client.report [None req-7febb989-cba8-43f3-bf83-879987226970 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:30:25 compute-0 nova_compute[257802]: 2025-10-02 12:30:25.717 2 DEBUG oslo_concurrency.lockutils [None req-7febb989-cba8-43f3-bf83-879987226970 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:25 compute-0 nova_compute[257802]: 2025-10-02 12:30:25.809 2 INFO nova.scheduler.client.report [None req-7febb989-cba8-43f3-bf83-879987226970 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Deleted allocations for instance fced40d2-4fc7-4939-9fbb-bdb61d750526
Oct 02 12:30:25 compute-0 focused_nash[330828]: {
Oct 02 12:30:25 compute-0 focused_nash[330828]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:30:25 compute-0 focused_nash[330828]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:30:25 compute-0 focused_nash[330828]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:30:25 compute-0 focused_nash[330828]:         "osd_id": 1,
Oct 02 12:30:25 compute-0 focused_nash[330828]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:30:25 compute-0 focused_nash[330828]:         "type": "bluestore"
Oct 02 12:30:25 compute-0 focused_nash[330828]:     }
Oct 02 12:30:25 compute-0 focused_nash[330828]: }
Oct 02 12:30:25 compute-0 nova_compute[257802]: 2025-10-02 12:30:25.910 2 DEBUG oslo_concurrency.lockutils [None req-7febb989-cba8-43f3-bf83-879987226970 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:25 compute-0 systemd[1]: libpod-70d0f12a8f9e532391c5c6b313011f6bf182eb3cf3faaaf9cb3d3bf571a5fc5e.scope: Deactivated successfully.
Oct 02 12:30:25 compute-0 podman[330789]: 2025-10-02 12:30:25.918116875 +0000 UTC m=+1.039089758 container died 70d0f12a8f9e532391c5c6b313011f6bf182eb3cf3faaaf9cb3d3bf571a5fc5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 12:30:25 compute-0 nova_compute[257802]: 2025-10-02 12:30:25.943 2 INFO nova.compute.manager [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Swapping old allocation on dict_keys(['a293a24c-b5ed-43d1-8783-f02da4f75ad4']) held by migration 86470150-13cf-4780-a5e8-7915418b1cd9 for instance
Oct 02 12:30:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-061f3a045b6b0c021bc824f88a176f2fc3f9b908701f6fffbf9ce9d7d2537105-merged.mount: Deactivated successfully.
Oct 02 12:30:25 compute-0 nova_compute[257802]: 2025-10-02 12:30:25.979 2 DEBUG nova.scheduler.client.report [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Overwriting current allocation {'allocations': {'5abd2871-a992-42ab-bb6a-594a92f77d4d': {'resources': {'VCPU': 1, 'MEMORY_MB': 192, 'DISK_GB': 1}, 'generation': 56}}, 'project_id': '4b8ca48cb5f64ef3b0736b8be82378b8', 'user_id': '2bd16d1f5f9d4eb396c474eedee67165', 'consumer_generation': 1} on consumer 634c38a6-caab-410d-8748-3ec1fd6f9cdc move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018
Oct 02 12:30:25 compute-0 podman[330789]: 2025-10-02 12:30:25.979712059 +0000 UTC m=+1.100684932 container remove 70d0f12a8f9e532391c5c6b313011f6bf182eb3cf3faaaf9cb3d3bf571a5fc5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_nash, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:30:25 compute-0 systemd[1]: libpod-conmon-70d0f12a8f9e532391c5c6b313011f6bf182eb3cf3faaaf9cb3d3bf571a5fc5e.scope: Deactivated successfully.
Oct 02 12:30:26 compute-0 sudo[330680]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:30:26 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:30:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:30:26 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:30:26 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev e0bdf41c-1573-42f2-bb6a-f13903e49888 does not exist
Oct 02 12:30:26 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev d3253142-e527-4517-a7d5-45e0a325a407 does not exist
Oct 02 12:30:26 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 4e445451-a673-4ffd-b2f0-d4e38fc41431 does not exist
Oct 02 12:30:26 compute-0 sudo[330908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:26 compute-0 sudo[330908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:26 compute-0 sudo[330908]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:26 compute-0 sudo[330933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:30:26 compute-0 sudo[330933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:26 compute-0 sudo[330933]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/359606026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3129781275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3446968154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:26 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:30:26 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:30:26 compute-0 nova_compute[257802]: 2025-10-02 12:30:26.348 2 INFO nova.network.neutron [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Updating port b76b92a4-1882-4f89-94f4-3a4700f9c379 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 02 12:30:26 compute-0 nova_compute[257802]: 2025-10-02 12:30:26.415 2 DEBUG oslo_concurrency.lockutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:26 compute-0 nova_compute[257802]: 2025-10-02 12:30:26.416 2 DEBUG oslo_concurrency.lockutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:26 compute-0 nova_compute[257802]: 2025-10-02 12:30:26.444 2 DEBUG nova.compute.manager [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:30:26 compute-0 nova_compute[257802]: 2025-10-02 12:30:26.537 2 DEBUG oslo_concurrency.lockutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:26 compute-0 nova_compute[257802]: 2025-10-02 12:30:26.538 2 DEBUG oslo_concurrency.lockutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:26 compute-0 nova_compute[257802]: 2025-10-02 12:30:26.547 2 DEBUG nova.virt.hardware [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:30:26 compute-0 nova_compute[257802]: 2025-10-02 12:30:26.548 2 INFO nova.compute.claims [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:30:26 compute-0 nova_compute[257802]: 2025-10-02 12:30:26.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:26 compute-0 nova_compute[257802]: 2025-10-02 12:30:26.711 2 DEBUG oslo_concurrency.processutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:26.949 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:26.949 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:26.949 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:30:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:26.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:30:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:30:27 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1269713945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:27 compute-0 nova_compute[257802]: 2025-10-02 12:30:27.133 2 DEBUG oslo_concurrency.processutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:27 compute-0 nova_compute[257802]: 2025-10-02 12:30:27.139 2 DEBUG nova.compute.provider_tree [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:30:27 compute-0 nova_compute[257802]: 2025-10-02 12:30:27.175 2 DEBUG nova.scheduler.client.report [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:30:27 compute-0 nova_compute[257802]: 2025-10-02 12:30:27.211 2 DEBUG oslo_concurrency.lockutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:27 compute-0 nova_compute[257802]: 2025-10-02 12:30:27.211 2 DEBUG nova.compute.manager [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:30:27 compute-0 ceph-mon[73607]: pgmap v2063: 305 pgs: 305 active+clean; 564 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 43 KiB/s wr, 201 op/s
Oct 02 12:30:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1269713945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:27 compute-0 nova_compute[257802]: 2025-10-02 12:30:27.405 2 DEBUG nova.compute.manager [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:30:27 compute-0 nova_compute[257802]: 2025-10-02 12:30:27.406 2 DEBUG nova.network.neutron [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:30:27 compute-0 nova_compute[257802]: 2025-10-02 12:30:27.497 2 INFO nova.virt.libvirt.driver [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:30:27 compute-0 nova_compute[257802]: 2025-10-02 12:30:27.519 2 DEBUG nova.compute.manager [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:30:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2064: 305 pgs: 305 active+clean; 572 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 710 KiB/s wr, 229 op/s
Oct 02 12:30:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:27.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:27 compute-0 nova_compute[257802]: 2025-10-02 12:30:27.646 2 DEBUG nova.compute.manager [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:30:27 compute-0 nova_compute[257802]: 2025-10-02 12:30:27.647 2 DEBUG nova.virt.libvirt.driver [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:30:27 compute-0 nova_compute[257802]: 2025-10-02 12:30:27.648 2 INFO nova.virt.libvirt.driver [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Creating image(s)
Oct 02 12:30:27 compute-0 nova_compute[257802]: 2025-10-02 12:30:27.669 2 DEBUG nova.storage.rbd_utils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image 6d9cbb75-985a-47af-9a91-f4a885d1b59a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:27 compute-0 nova_compute[257802]: 2025-10-02 12:30:27.693 2 DEBUG nova.storage.rbd_utils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image 6d9cbb75-985a-47af-9a91-f4a885d1b59a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:27 compute-0 nova_compute[257802]: 2025-10-02 12:30:27.718 2 DEBUG nova.storage.rbd_utils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image 6d9cbb75-985a-47af-9a91-f4a885d1b59a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:27 compute-0 nova_compute[257802]: 2025-10-02 12:30:27.721 2 DEBUG oslo_concurrency.processutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:27 compute-0 nova_compute[257802]: 2025-10-02 12:30:27.783 2 DEBUG oslo_concurrency.processutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:27 compute-0 nova_compute[257802]: 2025-10-02 12:30:27.785 2 DEBUG oslo_concurrency.lockutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:27 compute-0 nova_compute[257802]: 2025-10-02 12:30:27.785 2 DEBUG oslo_concurrency.lockutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:27 compute-0 nova_compute[257802]: 2025-10-02 12:30:27.786 2 DEBUG oslo_concurrency.lockutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:27 compute-0 nova_compute[257802]: 2025-10-02 12:30:27.808 2 DEBUG nova.storage.rbd_utils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image 6d9cbb75-985a-47af-9a91-f4a885d1b59a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:27 compute-0 nova_compute[257802]: 2025-10-02 12:30:27.811 2 DEBUG oslo_concurrency.processutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 6d9cbb75-985a-47af-9a91-f4a885d1b59a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:27 compute-0 nova_compute[257802]: 2025-10-02 12:30:27.855 2 DEBUG nova.compute.manager [req-ca5de053-7c68-4521-9378-1dbe1a3adda6 req-b0e4fd2d-60fa-49f9-9f72-e3f0dc90236a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Received event network-vif-plugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:27 compute-0 nova_compute[257802]: 2025-10-02 12:30:27.856 2 DEBUG oslo_concurrency.lockutils [req-ca5de053-7c68-4521-9378-1dbe1a3adda6 req-b0e4fd2d-60fa-49f9-9f72-e3f0dc90236a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:27 compute-0 nova_compute[257802]: 2025-10-02 12:30:27.856 2 DEBUG oslo_concurrency.lockutils [req-ca5de053-7c68-4521-9378-1dbe1a3adda6 req-b0e4fd2d-60fa-49f9-9f72-e3f0dc90236a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:27 compute-0 nova_compute[257802]: 2025-10-02 12:30:27.857 2 DEBUG oslo_concurrency.lockutils [req-ca5de053-7c68-4521-9378-1dbe1a3adda6 req-b0e4fd2d-60fa-49f9-9f72-e3f0dc90236a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:27 compute-0 nova_compute[257802]: 2025-10-02 12:30:27.857 2 DEBUG nova.compute.manager [req-ca5de053-7c68-4521-9378-1dbe1a3adda6 req-b0e4fd2d-60fa-49f9-9f72-e3f0dc90236a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] No waiting events found dispatching network-vif-plugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:27 compute-0 nova_compute[257802]: 2025-10-02 12:30:27.857 2 WARNING nova.compute.manager [req-ca5de053-7c68-4521-9378-1dbe1a3adda6 req-b0e4fd2d-60fa-49f9-9f72-e3f0dc90236a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Received unexpected event network-vif-plugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 for instance with vm_state resized and task_state resize_reverting.
Oct 02 12:30:28 compute-0 nova_compute[257802]: 2025-10-02 12:30:28.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:28 compute-0 nova_compute[257802]: 2025-10-02 12:30:28.066 2 DEBUG nova.policy [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4a89b71e2513413e922ee6d5d06362b1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '27a1729bf10548219b90df46839849f5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:30:28 compute-0 nova_compute[257802]: 2025-10-02 12:30:28.112 2 DEBUG oslo_concurrency.processutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 6d9cbb75-985a-47af-9a91-f4a885d1b59a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:28 compute-0 nova_compute[257802]: 2025-10-02 12:30:28.190 2 DEBUG nova.storage.rbd_utils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] resizing rbd image 6d9cbb75-985a-47af-9a91-f4a885d1b59a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:30:28 compute-0 nova_compute[257802]: 2025-10-02 12:30:28.330 2 DEBUG oslo_concurrency.lockutils [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquiring lock "refresh_cache-634c38a6-caab-410d-8748-3ec1fd6f9cdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:30:28 compute-0 nova_compute[257802]: 2025-10-02 12:30:28.330 2 DEBUG oslo_concurrency.lockutils [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquired lock "refresh_cache-634c38a6-caab-410d-8748-3ec1fd6f9cdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:30:28 compute-0 nova_compute[257802]: 2025-10-02 12:30:28.331 2 DEBUG nova.network.neutron [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:30:28 compute-0 nova_compute[257802]: 2025-10-02 12:30:28.337 2 DEBUG nova.objects.instance [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'migration_context' on Instance uuid 6d9cbb75-985a-47af-9a91-f4a885d1b59a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:30:28 compute-0 nova_compute[257802]: 2025-10-02 12:30:28.353 2 DEBUG nova.virt.libvirt.driver [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:30:28 compute-0 nova_compute[257802]: 2025-10-02 12:30:28.354 2 DEBUG nova.virt.libvirt.driver [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Ensure instance console log exists: /var/lib/nova/instances/6d9cbb75-985a-47af-9a91-f4a885d1b59a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:30:28 compute-0 nova_compute[257802]: 2025-10-02 12:30:28.355 2 DEBUG oslo_concurrency.lockutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:28 compute-0 nova_compute[257802]: 2025-10-02 12:30:28.355 2 DEBUG oslo_concurrency.lockutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:28 compute-0 nova_compute[257802]: 2025-10-02 12:30:28.355 2 DEBUG oslo_concurrency.lockutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:28 compute-0 ceph-mon[73607]: pgmap v2064: 305 pgs: 305 active+clean; 572 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 710 KiB/s wr, 229 op/s
Oct 02 12:30:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:28.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2065: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.6 MiB/s wr, 215 op/s
Oct 02 12:30:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:30:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:29.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:30:30 compute-0 nova_compute[257802]: 2025-10-02 12:30:30.070 2 DEBUG nova.network.neutron [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Successfully created port: b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:30:30 compute-0 nova_compute[257802]: 2025-10-02 12:30:30.077 2 DEBUG nova.compute.manager [req-de962bcc-0700-4057-97ff-a8e1ed82dcbb req-27a64f69-1414-4ac1-92fc-1b0495829966 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Received event network-changed-b76b92a4-1882-4f89-94f4-3a4700f9c379 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:30 compute-0 nova_compute[257802]: 2025-10-02 12:30:30.077 2 DEBUG nova.compute.manager [req-de962bcc-0700-4057-97ff-a8e1ed82dcbb req-27a64f69-1414-4ac1-92fc-1b0495829966 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Refreshing instance network info cache due to event network-changed-b76b92a4-1882-4f89-94f4-3a4700f9c379. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:30:30 compute-0 nova_compute[257802]: 2025-10-02 12:30:30.077 2 DEBUG oslo_concurrency.lockutils [req-de962bcc-0700-4057-97ff-a8e1ed82dcbb req-27a64f69-1414-4ac1-92fc-1b0495829966 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-634c38a6-caab-410d-8748-3ec1fd6f9cdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:30:30 compute-0 ceph-osd[83986]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Oct 02 12:30:30 compute-0 ceph-mon[73607]: pgmap v2065: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.6 MiB/s wr, 215 op/s
Oct 02 12:30:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:30.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:31 compute-0 nova_compute[257802]: 2025-10-02 12:30:31.324 2 DEBUG nova.network.neutron [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Updating instance_info_cache with network_info: [{"id": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "address": "fa:16:3e:23:4b:2a", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb76b92a4-18", "ovs_interfaceid": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:30:31 compute-0 nova_compute[257802]: 2025-10-02 12:30:31.399 2 DEBUG oslo_concurrency.lockutils [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Releasing lock "refresh_cache-634c38a6-caab-410d-8748-3ec1fd6f9cdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:30:31 compute-0 nova_compute[257802]: 2025-10-02 12:30:31.400 2 DEBUG nova.virt.libvirt.driver [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843
Oct 02 12:30:31 compute-0 nova_compute[257802]: 2025-10-02 12:30:31.428 2 DEBUG oslo_concurrency.lockutils [req-de962bcc-0700-4057-97ff-a8e1ed82dcbb req-27a64f69-1414-4ac1-92fc-1b0495829966 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-634c38a6-caab-410d-8748-3ec1fd6f9cdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:30:31 compute-0 nova_compute[257802]: 2025-10-02 12:30:31.429 2 DEBUG nova.network.neutron [req-de962bcc-0700-4057-97ff-a8e1ed82dcbb req-27a64f69-1414-4ac1-92fc-1b0495829966 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Refreshing network info cache for port b76b92a4-1882-4f89-94f4-3a4700f9c379 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:30:31 compute-0 nova_compute[257802]: 2025-10-02 12:30:31.486 2 DEBUG nova.storage.rbd_utils [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] rolling back rbd image(634c38a6-caab-410d-8748-3ec1fd6f9cdc_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505
Oct 02 12:30:31 compute-0 nova_compute[257802]: 2025-10-02 12:30:31.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2066: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.3 MiB/s wr, 182 op/s
Oct 02 12:30:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:30:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:31.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:30:31 compute-0 nova_compute[257802]: 2025-10-02 12:30:31.963 2 DEBUG nova.storage.rbd_utils [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] removing snapshot(nova-resize) on rbd image(634c38a6-caab-410d-8748-3ec1fd6f9cdc_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:30:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Oct 02 12:30:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Oct 02 12:30:32 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Oct 02 12:30:32 compute-0 ceph-mon[73607]: pgmap v2066: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.3 MiB/s wr, 182 op/s
Oct 02 12:30:32 compute-0 nova_compute[257802]: 2025-10-02 12:30:32.924 2 DEBUG nova.virt.libvirt.driver [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Start _get_guest_xml network_info=[{"id": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "address": "fa:16:3e:23:4b:2a", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb76b92a4-18", "ovs_interfaceid": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:30:32 compute-0 nova_compute[257802]: 2025-10-02 12:30:32.929 2 WARNING nova.virt.libvirt.driver [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:30:32 compute-0 nova_compute[257802]: 2025-10-02 12:30:32.939 2 DEBUG nova.virt.libvirt.host [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:30:32 compute-0 nova_compute[257802]: 2025-10-02 12:30:32.940 2 DEBUG nova.virt.libvirt.host [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:30:32 compute-0 nova_compute[257802]: 2025-10-02 12:30:32.946 2 DEBUG nova.virt.libvirt.host [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:30:32 compute-0 nova_compute[257802]: 2025-10-02 12:30:32.947 2 DEBUG nova.virt.libvirt.host [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:30:32 compute-0 nova_compute[257802]: 2025-10-02 12:30:32.948 2 DEBUG nova.virt.libvirt.driver [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:30:32 compute-0 nova_compute[257802]: 2025-10-02 12:30:32.948 2 DEBUG nova.virt.hardware [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:30:32 compute-0 nova_compute[257802]: 2025-10-02 12:30:32.948 2 DEBUG nova.virt.hardware [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:30:32 compute-0 nova_compute[257802]: 2025-10-02 12:30:32.949 2 DEBUG nova.virt.hardware [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:30:32 compute-0 nova_compute[257802]: 2025-10-02 12:30:32.949 2 DEBUG nova.virt.hardware [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:30:32 compute-0 nova_compute[257802]: 2025-10-02 12:30:32.949 2 DEBUG nova.virt.hardware [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:30:32 compute-0 nova_compute[257802]: 2025-10-02 12:30:32.949 2 DEBUG nova.virt.hardware [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:30:32 compute-0 nova_compute[257802]: 2025-10-02 12:30:32.949 2 DEBUG nova.virt.hardware [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:30:32 compute-0 nova_compute[257802]: 2025-10-02 12:30:32.950 2 DEBUG nova.virt.hardware [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:30:32 compute-0 nova_compute[257802]: 2025-10-02 12:30:32.950 2 DEBUG nova.virt.hardware [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:30:32 compute-0 nova_compute[257802]: 2025-10-02 12:30:32.950 2 DEBUG nova.virt.hardware [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:30:32 compute-0 nova_compute[257802]: 2025-10-02 12:30:32.950 2 DEBUG nova.virt.hardware [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:30:32 compute-0 nova_compute[257802]: 2025-10-02 12:30:32.950 2 DEBUG nova.objects.instance [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 634c38a6-caab-410d-8748-3ec1fd6f9cdc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:30:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:30:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:32.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:30:33 compute-0 nova_compute[257802]: 2025-10-02 12:30:33.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:33 compute-0 nova_compute[257802]: 2025-10-02 12:30:33.556 2 DEBUG oslo_concurrency.processutils [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 656 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.3 MiB/s wr, 147 op/s
Oct 02 12:30:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:30:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:33.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:30:33 compute-0 nova_compute[257802]: 2025-10-02 12:30:33.744 2 DEBUG nova.network.neutron [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Successfully updated port: b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:30:33 compute-0 nova_compute[257802]: 2025-10-02 12:30:33.773 2 DEBUG oslo_concurrency.lockutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "refresh_cache-6d9cbb75-985a-47af-9a91-f4a885d1b59a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:30:33 compute-0 nova_compute[257802]: 2025-10-02 12:30:33.773 2 DEBUG oslo_concurrency.lockutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquired lock "refresh_cache-6d9cbb75-985a-47af-9a91-f4a885d1b59a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:30:33 compute-0 nova_compute[257802]: 2025-10-02 12:30:33.773 2 DEBUG nova.network.neutron [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:30:33 compute-0 ceph-mon[73607]: osdmap e310: 3 total, 3 up, 3 in
Oct 02 12:30:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:30:34 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2716550857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.017 2 DEBUG oslo_concurrency.processutils [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.060 2 DEBUG oslo_concurrency.processutils [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.197 2 DEBUG nova.compute.manager [req-c54d5b1a-5189-40af-9a9a-89cb07517dfd req-53d24910-3714-475a-a03f-796226ab1741 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Received event network-changed-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.197 2 DEBUG nova.compute.manager [req-c54d5b1a-5189-40af-9a9a-89cb07517dfd req-53d24910-3714-475a-a03f-796226ab1741 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Refreshing instance network info cache due to event network-changed-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.198 2 DEBUG oslo_concurrency.lockutils [req-c54d5b1a-5189-40af-9a9a-89cb07517dfd req-53d24910-3714-475a-a03f-796226ab1741 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-6d9cbb75-985a-47af-9a91-f4a885d1b59a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.360 2 DEBUG nova.network.neutron [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:30:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:30:34 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/954367391' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.476 2 DEBUG oslo_concurrency.processutils [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.477 2 DEBUG nova.virt.libvirt.vif [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:28:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1187258695',display_name='tempest-ServerActionsTestJSON-server-1187258695',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1187258695',id=112,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGPR3K1CRfy8NuVFf5q7pocTRVWdkUGpXAwygQtld+YJHetWc29OTANCyHNFo7YQv+XOnhZt50IrR5VORh36EWFuQsqglHMqAdAjWfcmv1078iyeLOwVFkKMjcTINNdhYA==',key_name='tempest-keypair-579122016',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:30:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4b8ca48cb5f64ef3b0736b8be82378b8',ramdisk_id='',reservation_id='r-8nz9nuro',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-842270816',owner_user_name='tempest-ServerActionsTestJSON-842270816-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:30:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2bd16d1f5f9d4eb396c474eedee67165',uuid=634c38a6-caab-410d-8748-3ec1fd6f9cdc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "address": "fa:16:3e:23:4b:2a", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb76b92a4-18", "ovs_interfaceid": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.478 2 DEBUG nova.network.os_vif_util [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Converting VIF {"id": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "address": "fa:16:3e:23:4b:2a", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb76b92a4-18", "ovs_interfaceid": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.479 2 DEBUG nova.network.os_vif_util [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:4b:2a,bridge_name='br-int',has_traffic_filtering=True,id=b76b92a4-1882-4f89-94f4-3a4700f9c379,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb76b92a4-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.481 2 DEBUG nova.virt.libvirt.driver [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:30:34 compute-0 nova_compute[257802]:   <uuid>634c38a6-caab-410d-8748-3ec1fd6f9cdc</uuid>
Oct 02 12:30:34 compute-0 nova_compute[257802]:   <name>instance-00000070</name>
Oct 02 12:30:34 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:30:34 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:30:34 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:30:34 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:       <nova:name>tempest-ServerActionsTestJSON-server-1187258695</nova:name>
Oct 02 12:30:34 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:30:32</nova:creationTime>
Oct 02 12:30:34 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:30:34 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:30:34 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:30:34 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:30:34 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:30:34 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:30:34 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:30:34 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:30:34 compute-0 nova_compute[257802]:         <nova:user uuid="2bd16d1f5f9d4eb396c474eedee67165">tempest-ServerActionsTestJSON-842270816-project-member</nova:user>
Oct 02 12:30:34 compute-0 nova_compute[257802]:         <nova:project uuid="4b8ca48cb5f64ef3b0736b8be82378b8">tempest-ServerActionsTestJSON-842270816</nova:project>
Oct 02 12:30:34 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:30:34 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:30:34 compute-0 nova_compute[257802]:         <nova:port uuid="b76b92a4-1882-4f89-94f4-3a4700f9c379">
Oct 02 12:30:34 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:30:34 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:30:34 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:30:34 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <system>
Oct 02 12:30:34 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:30:34 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:30:34 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:30:34 compute-0 nova_compute[257802]:       <entry name="serial">634c38a6-caab-410d-8748-3ec1fd6f9cdc</entry>
Oct 02 12:30:34 compute-0 nova_compute[257802]:       <entry name="uuid">634c38a6-caab-410d-8748-3ec1fd6f9cdc</entry>
Oct 02 12:30:34 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     </system>
Oct 02 12:30:34 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:30:34 compute-0 nova_compute[257802]:   <os>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:   </os>
Oct 02 12:30:34 compute-0 nova_compute[257802]:   <features>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:   </features>
Oct 02 12:30:34 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:30:34 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:30:34 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:30:34 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/634c38a6-caab-410d-8748-3ec1fd6f9cdc_disk">
Oct 02 12:30:34 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:       </source>
Oct 02 12:30:34 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:30:34 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:30:34 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:30:34 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/634c38a6-caab-410d-8748-3ec1fd6f9cdc_disk.config">
Oct 02 12:30:34 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:       </source>
Oct 02 12:30:34 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:30:34 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:30:34 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:30:34 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:23:4b:2a"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:       <target dev="tapb76b92a4-18"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:30:34 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/634c38a6-caab-410d-8748-3ec1fd6f9cdc/console.log" append="off"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <video>
Oct 02 12:30:34 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     </video>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <input type="keyboard" bus="usb"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:30:34 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:30:34 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:30:34 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:30:34 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:30:34 compute-0 nova_compute[257802]: </domain>
Oct 02 12:30:34 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.483 2 DEBUG nova.compute.manager [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Preparing to wait for external event network-vif-plugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.483 2 DEBUG oslo_concurrency.lockutils [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquiring lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.484 2 DEBUG oslo_concurrency.lockutils [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.484 2 DEBUG oslo_concurrency.lockutils [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.485 2 DEBUG nova.virt.libvirt.vif [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:28:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1187258695',display_name='tempest-ServerActionsTestJSON-server-1187258695',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1187258695',id=112,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGPR3K1CRfy8NuVFf5q7pocTRVWdkUGpXAwygQtld+YJHetWc29OTANCyHNFo7YQv+XOnhZt50IrR5VORh36EWFuQsqglHMqAdAjWfcmv1078iyeLOwVFkKMjcTINNdhYA==',key_name='tempest-keypair-579122016',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:30:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4b8ca48cb5f64ef3b0736b8be82378b8',ramdisk_id='',reservation_id='r-8nz9nuro',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-842270816',owner_user_name='tempest-ServerActionsTestJSON-842270816-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:30:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2bd16d1f5f9d4eb396c474eedee67165',uuid=634c38a6-caab-410d-8748-3ec1fd6f9cdc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "address": "fa:16:3e:23:4b:2a", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb76b92a4-18", "ovs_interfaceid": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.485 2 DEBUG nova.network.os_vif_util [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Converting VIF {"id": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "address": "fa:16:3e:23:4b:2a", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb76b92a4-18", "ovs_interfaceid": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.486 2 DEBUG nova.network.os_vif_util [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:4b:2a,bridge_name='br-int',has_traffic_filtering=True,id=b76b92a4-1882-4f89-94f4-3a4700f9c379,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb76b92a4-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.486 2 DEBUG os_vif [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:4b:2a,bridge_name='br-int',has_traffic_filtering=True,id=b76b92a4-1882-4f89-94f4-3a4700f9c379,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb76b92a4-18') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.487 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.487 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.490 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb76b92a4-18, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.491 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb76b92a4-18, col_values=(('external_ids', {'iface-id': 'b76b92a4-1882-4f89-94f4-3a4700f9c379', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:23:4b:2a', 'vm-uuid': '634c38a6-caab-410d-8748-3ec1fd6f9cdc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:34 compute-0 NetworkManager[44987]: <info>  [1759408234.4933] manager: (tapb76b92a4-18): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/241)
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.500 2 INFO os_vif [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:4b:2a,bridge_name='br-int',has_traffic_filtering=True,id=b76b92a4-1882-4f89-94f4-3a4700f9c379,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb76b92a4-18')
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.554 2 DEBUG nova.network.neutron [req-de962bcc-0700-4057-97ff-a8e1ed82dcbb req-27a64f69-1414-4ac1-92fc-1b0495829966 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Updated VIF entry in instance network info cache for port b76b92a4-1882-4f89-94f4-3a4700f9c379. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.554 2 DEBUG nova.network.neutron [req-de962bcc-0700-4057-97ff-a8e1ed82dcbb req-27a64f69-1414-4ac1-92fc-1b0495829966 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Updating instance_info_cache with network_info: [{"id": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "address": "fa:16:3e:23:4b:2a", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb76b92a4-18", "ovs_interfaceid": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.585 2 DEBUG oslo_concurrency.lockutils [req-de962bcc-0700-4057-97ff-a8e1ed82dcbb req-27a64f69-1414-4ac1-92fc-1b0495829966 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-634c38a6-caab-410d-8748-3ec1fd6f9cdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:30:34 compute-0 kernel: tapb76b92a4-18: entered promiscuous mode
Oct 02 12:30:34 compute-0 NetworkManager[44987]: <info>  [1759408234.7413] manager: (tapb76b92a4-18): new Tun device (/org/freedesktop/NetworkManager/Devices/242)
Oct 02 12:30:34 compute-0 ovn_controller[148183]: 2025-10-02T12:30:34Z|00498|binding|INFO|Claiming lport b76b92a4-1882-4f89-94f4-3a4700f9c379 for this chassis.
Oct 02 12:30:34 compute-0 ovn_controller[148183]: 2025-10-02T12:30:34Z|00499|binding|INFO|b76b92a4-1882-4f89-94f4-3a4700f9c379: Claiming fa:16:3e:23:4b:2a 10.100.0.10
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:34 compute-0 ovn_controller[148183]: 2025-10-02T12:30:34Z|00500|binding|INFO|Setting lport b76b92a4-1882-4f89-94f4-3a4700f9c379 ovn-installed in OVS
Oct 02 12:30:34 compute-0 nova_compute[257802]: 2025-10-02 12:30:34.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:34 compute-0 systemd-udevd[331281]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:30:34 compute-0 systemd-machined[211836]: New machine qemu-58-instance-00000070.
Oct 02 12:30:34 compute-0 NetworkManager[44987]: <info>  [1759408234.7953] device (tapb76b92a4-18): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:30:34 compute-0 NetworkManager[44987]: <info>  [1759408234.7962] device (tapb76b92a4-18): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:30:34 compute-0 systemd[1]: Started Virtual Machine qemu-58-instance-00000070.
Oct 02 12:30:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:34.813 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:4b:2a 10.100.0.10'], port_security=['fa:16:3e:23:4b:2a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '634c38a6-caab-410d-8748-3ec1fd6f9cdc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b8ca48cb5f64ef3b0736b8be82378b8', 'neutron:revision_number': '10', 'neutron:security_group_ids': '16cf92c4-b852-4373-864a-75cf05995c6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.182'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23ff60d5-33e2-45e9-a563-ce0081b7cc04, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=b76b92a4-1882-4f89-94f4-3a4700f9c379) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:30:34 compute-0 ovn_controller[148183]: 2025-10-02T12:30:34Z|00501|binding|INFO|Setting lport b76b92a4-1882-4f89-94f4-3a4700f9c379 up in Southbound
Oct 02 12:30:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:34.815 158261 INFO neutron.agent.ovn.metadata.agent [-] Port b76b92a4-1882-4f89-94f4-3a4700f9c379 in datapath b2c62a66-f9bc-4a45-a843-aef2e12a7fff bound to our chassis
Oct 02 12:30:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:34.816 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b2c62a66-f9bc-4a45-a843-aef2e12a7fff
Oct 02 12:30:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:34.828 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[42c5fee4-2ff8-4b9c-b6b0-7d748a973d78]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:34.829 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb2c62a66-f1 in ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:30:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:34.830 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb2c62a66-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:30:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:34.830 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f2d81233-4acf-4578-a6b4-31c32b3d94e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:34.831 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e47ffe62-e169-4b37-a70e-63711b83989f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:34.841 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[b6aecdfd-1fdf-4cda-91f7-5e7684356dd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:34.864 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[bb07c641-6cce-4c81-9bbd-a210b521890f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:34.896 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[535260b4-c2c8-4d27-8887-38d9ff23952b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:34 compute-0 systemd-udevd[331284]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:30:34 compute-0 NetworkManager[44987]: <info>  [1759408234.9021] manager: (tapb2c62a66-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/243)
Oct 02 12:30:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:34.901 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cfa14cd7-0476-4385-9792-e5cf93849d89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:34.939 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[54ccb33b-52ff-413f-af05-46f28e20aa8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:34.942 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[6283b74a-057d-423d-99f6-85ec9ab71d6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:34 compute-0 ceph-mon[73607]: pgmap v2068: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 656 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.3 MiB/s wr, 147 op/s
Oct 02 12:30:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2716550857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/155291311' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/954367391' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4275958504' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:34 compute-0 NetworkManager[44987]: <info>  [1759408234.9658] device (tapb2c62a66-f0): carrier: link connected
Oct 02 12:30:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:34.972 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[f4748f8f-ecbc-4970-ad40-771c952cb26b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:34.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:34.989 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c6e70aeb-a39f-43b0-afd0-b65599eb155f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb2c62a66-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:07:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630259, 'reachable_time': 25624, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331315, 'error': None, 'target': 'ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:35.003 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4e29028b-c020-4e83-bfd9-49f516c19adb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:7a7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630259, 'tstamp': 630259}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331316, 'error': None, 'target': 'ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:35.021 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d6f32482-9bd4-4bfa-a548-807c073ee542]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb2c62a66-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:07:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630259, 'reachable_time': 25624, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 331317, 'error': None, 'target': 'ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:35.047 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a5a64e06-8101-4b15-81de-cfb2099043aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:35.118 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[aab1686c-6054-4179-a218-2601da0e0070]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:35.120 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2c62a66-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:35.120 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:35.120 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb2c62a66-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:35 compute-0 NetworkManager[44987]: <info>  [1759408235.1226] manager: (tapb2c62a66-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/244)
Oct 02 12:30:35 compute-0 kernel: tapb2c62a66-f0: entered promiscuous mode
Oct 02 12:30:35 compute-0 nova_compute[257802]: 2025-10-02 12:30:35.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:35.126 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb2c62a66-f0, col_values=(('external_ids', {'iface-id': '8c1234f6-f595-4979-94c0-98da2211f0e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:35 compute-0 nova_compute[257802]: 2025-10-02 12:30:35.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:35 compute-0 ovn_controller[148183]: 2025-10-02T12:30:35Z|00502|binding|INFO|Releasing lport 8c1234f6-f595-4979-94c0-98da2211f0e1 from this chassis (sb_readonly=0)
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:35.141 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b2c62a66-f9bc-4a45-a843-aef2e12a7fff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b2c62a66-f9bc-4a45-a843-aef2e12a7fff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:30:35 compute-0 nova_compute[257802]: 2025-10-02 12:30:35.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:35.142 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[302442a7-983b-4153-a464-b53eb2cbd87d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:35.143 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-b2c62a66-f9bc-4a45-a843-aef2e12a7fff
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/b2c62a66-f9bc-4a45-a843-aef2e12a7fff.pid.haproxy
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID b2c62a66-f9bc-4a45-a843-aef2e12a7fff
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:30:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:35.143 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'env', 'PROCESS_TAG=haproxy-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b2c62a66-f9bc-4a45-a843-aef2e12a7fff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:30:35 compute-0 podman[331349]: 2025-10-02 12:30:35.489939365 +0000 UTC m=+0.022525785 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:30:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 656 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.3 MiB/s wr, 144 op/s
Oct 02 12:30:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:35.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:35 compute-0 nova_compute[257802]: 2025-10-02 12:30:35.970 2 DEBUG nova.network.neutron [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Updating instance_info_cache with network_info: [{"id": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "address": "fa:16:3e:99:6e:1d", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5ad4ac4-0d", "ovs_interfaceid": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:30:36 compute-0 podman[331349]: 2025-10-02 12:30:36.053418049 +0000 UTC m=+0.586004479 container create d0229124e8612d7c8bf25c374ce23a109cb7c8afd3a9d409151fb11a8f2c1afb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:30:36 compute-0 systemd[1]: Started libpod-conmon-d0229124e8612d7c8bf25c374ce23a109cb7c8afd3a9d409151fb11a8f2c1afb.scope.
Oct 02 12:30:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:30:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b1d6888a622dff2c05e3ee73fcd396674aa0ab1439d355d70dec4cbb4c1d0c3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.223 2 DEBUG oslo_concurrency.lockutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Releasing lock "refresh_cache-6d9cbb75-985a-47af-9a91-f4a885d1b59a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.224 2 DEBUG nova.compute.manager [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Instance network_info: |[{"id": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "address": "fa:16:3e:99:6e:1d", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5ad4ac4-0d", "ovs_interfaceid": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.225 2 DEBUG oslo_concurrency.lockutils [req-c54d5b1a-5189-40af-9a9a-89cb07517dfd req-53d24910-3714-475a-a03f-796226ab1741 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-6d9cbb75-985a-47af-9a91-f4a885d1b59a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.225 2 DEBUG nova.network.neutron [req-c54d5b1a-5189-40af-9a9a-89cb07517dfd req-53d24910-3714-475a-a03f-796226ab1741 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Refreshing network info cache for port b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.227 2 DEBUG nova.virt.libvirt.driver [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Start _get_guest_xml network_info=[{"id": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "address": "fa:16:3e:99:6e:1d", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5ad4ac4-0d", "ovs_interfaceid": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:30:36 compute-0 podman[331349]: 2025-10-02 12:30:36.232663556 +0000 UTC m=+0.765249976 container init d0229124e8612d7c8bf25c374ce23a109cb7c8afd3a9d409151fb11a8f2c1afb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.233 2 WARNING nova.virt.libvirt.driver [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.240 2 DEBUG nova.virt.libvirt.host [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.240 2 DEBUG nova.virt.libvirt.host [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.243 2 DEBUG nova.virt.libvirt.host [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.244 2 DEBUG nova.virt.libvirt.host [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:30:36 compute-0 podman[331349]: 2025-10-02 12:30:36.245281506 +0000 UTC m=+0.777867916 container start d0229124e8612d7c8bf25c374ce23a109cb7c8afd3a9d409151fb11a8f2c1afb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.245 2 DEBUG nova.virt.libvirt.driver [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.245 2 DEBUG nova.virt.hardware [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.246 2 DEBUG nova.virt.hardware [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.246 2 DEBUG nova.virt.hardware [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.246 2 DEBUG nova.virt.hardware [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.246 2 DEBUG nova.virt.hardware [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.246 2 DEBUG nova.virt.hardware [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.247 2 DEBUG nova.virt.hardware [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.247 2 DEBUG nova.virt.hardware [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.247 2 DEBUG nova.virt.hardware [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.247 2 DEBUG nova.virt.hardware [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.248 2 DEBUG nova.virt.hardware [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.250 2 DEBUG oslo_concurrency.processutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:36 compute-0 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[331406]: [NOTICE]   (331410) : New worker (331413) forked
Oct 02 12:30:36 compute-0 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[331406]: [NOTICE]   (331410) : Loading success.
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.334 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408236.3336883, 634c38a6-caab-410d-8748-3ec1fd6f9cdc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.335 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] VM Started (Lifecycle Event)
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.365 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.370 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408236.3341234, 634c38a6-caab-410d-8748-3ec1fd6f9cdc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.370 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] VM Paused (Lifecycle Event)
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.425 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.430 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.469 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.551 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408221.5501375, fced40d2-4fc7-4939-9fbb-bdb61d750526 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.552 2 INFO nova.compute.manager [-] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] VM Stopped (Lifecycle Event)
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.575 2 DEBUG nova.compute.manager [None req-c77cfe0b-4cd2-48d7-b5f2-bf4aebce6efa - - - - - -] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.624 2 DEBUG nova.compute.manager [req-cf78ef90-3bba-4ba9-83a0-352452511938 req-cb3b7ac3-258e-4841-980a-f20dc0aa49db d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Received event network-vif-plugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.625 2 DEBUG oslo_concurrency.lockutils [req-cf78ef90-3bba-4ba9-83a0-352452511938 req-cb3b7ac3-258e-4841-980a-f20dc0aa49db d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.626 2 DEBUG oslo_concurrency.lockutils [req-cf78ef90-3bba-4ba9-83a0-352452511938 req-cb3b7ac3-258e-4841-980a-f20dc0aa49db d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.626 2 DEBUG oslo_concurrency.lockutils [req-cf78ef90-3bba-4ba9-83a0-352452511938 req-cb3b7ac3-258e-4841-980a-f20dc0aa49db d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.627 2 DEBUG nova.compute.manager [req-cf78ef90-3bba-4ba9-83a0-352452511938 req-cb3b7ac3-258e-4841-980a-f20dc0aa49db d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Processing event network-vif-plugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.628 2 DEBUG nova.compute.manager [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.633 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408236.6327224, 634c38a6-caab-410d-8748-3ec1fd6f9cdc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.633 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] VM Resumed (Lifecycle Event)
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.643 2 INFO nova.virt.libvirt.driver [-] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Instance running successfully.
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.643 2 DEBUG nova.virt.libvirt.driver [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887
Oct 02 12:30:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:30:36 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/262181880' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.725 2 DEBUG oslo_concurrency.processutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.764 2 DEBUG nova.storage.rbd_utils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image 6d9cbb75-985a-47af-9a91-f4a885d1b59a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.770 2 DEBUG oslo_concurrency.processutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.825 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.831 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:30:36 compute-0 nova_compute[257802]: 2025-10-02 12:30:36.953 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Oct 02 12:30:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:30:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:36.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:30:37 compute-0 nova_compute[257802]: 2025-10-02 12:30:37.018 2 INFO nova.compute.manager [None req-be993e59-fc1b-4b7d-a999-a9d5501b7284 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Updating instance to original state: 'active'
Oct 02 12:30:37 compute-0 ceph-mon[73607]: pgmap v2069: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 656 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.3 MiB/s wr, 144 op/s
Oct 02 12:30:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/241820598' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/262181880' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:30:37 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3817616691' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:37 compute-0 nova_compute[257802]: 2025-10-02 12:30:37.293 2 DEBUG oslo_concurrency.processutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:37 compute-0 nova_compute[257802]: 2025-10-02 12:30:37.295 2 DEBUG nova.virt.libvirt.vif [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:30:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-245965768',display_name='tempest-ServerDiskConfigTestJSON-server-245965768',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-245965768',id=117,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='27a1729bf10548219b90df46839849f5',ramdisk_id='',reservation_id='r-bp44ih1j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1123059068',owner_user_name='tempest-ServerDiskConfigT
estJSON-1123059068-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:30:27Z,user_data=None,user_id='4a89b71e2513413e922ee6d5d06362b1',uuid=6d9cbb75-985a-47af-9a91-f4a885d1b59a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "address": "fa:16:3e:99:6e:1d", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5ad4ac4-0d", "ovs_interfaceid": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:30:37 compute-0 nova_compute[257802]: 2025-10-02 12:30:37.296 2 DEBUG nova.network.os_vif_util [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converting VIF {"id": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "address": "fa:16:3e:99:6e:1d", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5ad4ac4-0d", "ovs_interfaceid": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:30:37 compute-0 nova_compute[257802]: 2025-10-02 12:30:37.297 2 DEBUG nova.network.os_vif_util [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:6e:1d,bridge_name='br-int',has_traffic_filtering=True,id=b5ad4ac4-0d9f-487b-8448-19fdf04c9f36,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5ad4ac4-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:30:37 compute-0 nova_compute[257802]: 2025-10-02 12:30:37.298 2 DEBUG nova.objects.instance [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6d9cbb75-985a-47af-9a91-f4a885d1b59a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:30:37 compute-0 nova_compute[257802]: 2025-10-02 12:30:37.328 2 DEBUG nova.virt.libvirt.driver [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:30:37 compute-0 nova_compute[257802]:   <uuid>6d9cbb75-985a-47af-9a91-f4a885d1b59a</uuid>
Oct 02 12:30:37 compute-0 nova_compute[257802]:   <name>instance-00000075</name>
Oct 02 12:30:37 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:30:37 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:30:37 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:30:37 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-245965768</nova:name>
Oct 02 12:30:37 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:30:36</nova:creationTime>
Oct 02 12:30:37 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:30:37 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:30:37 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:30:37 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:30:37 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:30:37 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:30:37 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:30:37 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:30:37 compute-0 nova_compute[257802]:         <nova:user uuid="4a89b71e2513413e922ee6d5d06362b1">tempest-ServerDiskConfigTestJSON-1123059068-project-member</nova:user>
Oct 02 12:30:37 compute-0 nova_compute[257802]:         <nova:project uuid="27a1729bf10548219b90df46839849f5">tempest-ServerDiskConfigTestJSON-1123059068</nova:project>
Oct 02 12:30:37 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:30:37 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:30:37 compute-0 nova_compute[257802]:         <nova:port uuid="b5ad4ac4-0d9f-487b-8448-19fdf04c9f36">
Oct 02 12:30:37 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:30:37 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:30:37 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:30:37 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <system>
Oct 02 12:30:37 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:30:37 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:30:37 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:30:37 compute-0 nova_compute[257802]:       <entry name="serial">6d9cbb75-985a-47af-9a91-f4a885d1b59a</entry>
Oct 02 12:30:37 compute-0 nova_compute[257802]:       <entry name="uuid">6d9cbb75-985a-47af-9a91-f4a885d1b59a</entry>
Oct 02 12:30:37 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     </system>
Oct 02 12:30:37 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:30:37 compute-0 nova_compute[257802]:   <os>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:   </os>
Oct 02 12:30:37 compute-0 nova_compute[257802]:   <features>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:   </features>
Oct 02 12:30:37 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:30:37 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:30:37 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:30:37 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/6d9cbb75-985a-47af-9a91-f4a885d1b59a_disk">
Oct 02 12:30:37 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:       </source>
Oct 02 12:30:37 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:30:37 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:30:37 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:30:37 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/6d9cbb75-985a-47af-9a91-f4a885d1b59a_disk.config">
Oct 02 12:30:37 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:       </source>
Oct 02 12:30:37 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:30:37 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:30:37 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:30:37 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:99:6e:1d"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:       <target dev="tapb5ad4ac4-0d"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:30:37 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/6d9cbb75-985a-47af-9a91-f4a885d1b59a/console.log" append="off"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <video>
Oct 02 12:30:37 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     </video>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:30:37 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:30:37 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:30:37 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:30:37 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:30:37 compute-0 nova_compute[257802]: </domain>
Oct 02 12:30:37 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:30:37 compute-0 nova_compute[257802]: 2025-10-02 12:30:37.329 2 DEBUG nova.compute.manager [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Preparing to wait for external event network-vif-plugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:30:37 compute-0 nova_compute[257802]: 2025-10-02 12:30:37.329 2 DEBUG oslo_concurrency.lockutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:37 compute-0 nova_compute[257802]: 2025-10-02 12:30:37.330 2 DEBUG oslo_concurrency.lockutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:37 compute-0 nova_compute[257802]: 2025-10-02 12:30:37.330 2 DEBUG oslo_concurrency.lockutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:37 compute-0 nova_compute[257802]: 2025-10-02 12:30:37.331 2 DEBUG nova.virt.libvirt.vif [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:30:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-245965768',display_name='tempest-ServerDiskConfigTestJSON-server-245965768',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-245965768',id=117,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='27a1729bf10548219b90df46839849f5',ramdisk_id='',reservation_id='r-bp44ih1j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1123059068',owner_user_name='tempest-ServerDiskConfigTestJSON-1123059068-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:30:27Z,user_data=None,user_id='4a89b71e2513413e922ee6d5d06362b1',uuid=6d9cbb75-985a-47af-9a91-f4a885d1b59a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "address": "fa:16:3e:99:6e:1d", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5ad4ac4-0d", "ovs_interfaceid": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:30:37 compute-0 nova_compute[257802]: 2025-10-02 12:30:37.331 2 DEBUG nova.network.os_vif_util [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converting VIF {"id": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "address": "fa:16:3e:99:6e:1d", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5ad4ac4-0d", "ovs_interfaceid": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:30:37 compute-0 nova_compute[257802]: 2025-10-02 12:30:37.331 2 DEBUG nova.network.os_vif_util [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:6e:1d,bridge_name='br-int',has_traffic_filtering=True,id=b5ad4ac4-0d9f-487b-8448-19fdf04c9f36,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5ad4ac4-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:30:37 compute-0 nova_compute[257802]: 2025-10-02 12:30:37.332 2 DEBUG os_vif [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:6e:1d,bridge_name='br-int',has_traffic_filtering=True,id=b5ad4ac4-0d9f-487b-8448-19fdf04c9f36,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5ad4ac4-0d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:30:37 compute-0 nova_compute[257802]: 2025-10-02 12:30:37.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:37 compute-0 nova_compute[257802]: 2025-10-02 12:30:37.333 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:37 compute-0 nova_compute[257802]: 2025-10-02 12:30:37.333 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:30:37 compute-0 nova_compute[257802]: 2025-10-02 12:30:37.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:37 compute-0 nova_compute[257802]: 2025-10-02 12:30:37.336 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5ad4ac4-0d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:37 compute-0 nova_compute[257802]: 2025-10-02 12:30:37.336 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb5ad4ac4-0d, col_values=(('external_ids', {'iface-id': 'b5ad4ac4-0d9f-487b-8448-19fdf04c9f36', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:99:6e:1d', 'vm-uuid': '6d9cbb75-985a-47af-9a91-f4a885d1b59a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:37 compute-0 nova_compute[257802]: 2025-10-02 12:30:37.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:37 compute-0 NetworkManager[44987]: <info>  [1759408237.3961] manager: (tapb5ad4ac4-0d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/245)
Oct 02 12:30:37 compute-0 nova_compute[257802]: 2025-10-02 12:30:37.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:30:37 compute-0 nova_compute[257802]: 2025-10-02 12:30:37.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:37 compute-0 nova_compute[257802]: 2025-10-02 12:30:37.406 2 INFO os_vif [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:6e:1d,bridge_name='br-int',has_traffic_filtering=True,id=b5ad4ac4-0d9f-487b-8448-19fdf04c9f36,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5ad4ac4-0d')
Oct 02 12:30:37 compute-0 nova_compute[257802]: 2025-10-02 12:30:37.576 2 DEBUG nova.virt.libvirt.driver [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:30:37 compute-0 nova_compute[257802]: 2025-10-02 12:30:37.577 2 DEBUG nova.virt.libvirt.driver [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:30:37 compute-0 nova_compute[257802]: 2025-10-02 12:30:37.578 2 DEBUG nova.virt.libvirt.driver [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] No VIF found with MAC fa:16:3e:99:6e:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:30:37 compute-0 nova_compute[257802]: 2025-10-02 12:30:37.579 2 INFO nova.virt.libvirt.driver [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Using config drive
Oct 02 12:30:37 compute-0 nova_compute[257802]: 2025-10-02 12:30:37.623 2 DEBUG nova.storage.rbd_utils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image 6d9cbb75-985a-47af-9a91-f4a885d1b59a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 662 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 482 KiB/s rd, 4.4 MiB/s wr, 105 op/s
Oct 02 12:30:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:37.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:38 compute-0 nova_compute[257802]: 2025-10-02 12:30:38.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:38 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3817616691' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:38 compute-0 nova_compute[257802]: 2025-10-02 12:30:38.400 2 INFO nova.virt.libvirt.driver [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Creating config drive at /var/lib/nova/instances/6d9cbb75-985a-47af-9a91-f4a885d1b59a/disk.config
Oct 02 12:30:38 compute-0 nova_compute[257802]: 2025-10-02 12:30:38.410 2 DEBUG oslo_concurrency.processutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6d9cbb75-985a-47af-9a91-f4a885d1b59a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpee9zwm31 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:38 compute-0 nova_compute[257802]: 2025-10-02 12:30:38.550 2 DEBUG oslo_concurrency.processutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6d9cbb75-985a-47af-9a91-f4a885d1b59a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpee9zwm31" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:38 compute-0 nova_compute[257802]: 2025-10-02 12:30:38.583 2 DEBUG nova.storage.rbd_utils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image 6d9cbb75-985a-47af-9a91-f4a885d1b59a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:38 compute-0 nova_compute[257802]: 2025-10-02 12:30:38.587 2 DEBUG oslo_concurrency.processutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6d9cbb75-985a-47af-9a91-f4a885d1b59a/disk.config 6d9cbb75-985a-47af-9a91-f4a885d1b59a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:38 compute-0 nova_compute[257802]: 2025-10-02 12:30:38.898 2 DEBUG nova.compute.manager [req-595e9d56-f668-42eb-af9c-d8117804115f req-c9ec5f02-d99e-4676-acce-34eefb3c09c2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Received event network-vif-plugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:38 compute-0 nova_compute[257802]: 2025-10-02 12:30:38.899 2 DEBUG oslo_concurrency.lockutils [req-595e9d56-f668-42eb-af9c-d8117804115f req-c9ec5f02-d99e-4676-acce-34eefb3c09c2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:38 compute-0 nova_compute[257802]: 2025-10-02 12:30:38.900 2 DEBUG oslo_concurrency.lockutils [req-595e9d56-f668-42eb-af9c-d8117804115f req-c9ec5f02-d99e-4676-acce-34eefb3c09c2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:38 compute-0 nova_compute[257802]: 2025-10-02 12:30:38.900 2 DEBUG oslo_concurrency.lockutils [req-595e9d56-f668-42eb-af9c-d8117804115f req-c9ec5f02-d99e-4676-acce-34eefb3c09c2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:38 compute-0 nova_compute[257802]: 2025-10-02 12:30:38.900 2 DEBUG nova.compute.manager [req-595e9d56-f668-42eb-af9c-d8117804115f req-c9ec5f02-d99e-4676-acce-34eefb3c09c2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] No waiting events found dispatching network-vif-plugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:38 compute-0 nova_compute[257802]: 2025-10-02 12:30:38.901 2 WARNING nova.compute.manager [req-595e9d56-f668-42eb-af9c-d8117804115f req-c9ec5f02-d99e-4676-acce-34eefb3c09c2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Received unexpected event network-vif-plugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 for instance with vm_state active and task_state None.
Oct 02 12:30:38 compute-0 nova_compute[257802]: 2025-10-02 12:30:38.942 2 DEBUG oslo_concurrency.processutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6d9cbb75-985a-47af-9a91-f4a885d1b59a/disk.config 6d9cbb75-985a-47af-9a91-f4a885d1b59a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:38 compute-0 nova_compute[257802]: 2025-10-02 12:30:38.942 2 INFO nova.virt.libvirt.driver [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Deleting local config drive /var/lib/nova/instances/6d9cbb75-985a-47af-9a91-f4a885d1b59a/disk.config because it was imported into RBD.
Oct 02 12:30:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:30:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:38.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:30:38 compute-0 kernel: tapb5ad4ac4-0d: entered promiscuous mode
Oct 02 12:30:38 compute-0 NetworkManager[44987]: <info>  [1759408238.9925] manager: (tapb5ad4ac4-0d): new Tun device (/org/freedesktop/NetworkManager/Devices/246)
Oct 02 12:30:38 compute-0 nova_compute[257802]: 2025-10-02 12:30:38.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:38 compute-0 ovn_controller[148183]: 2025-10-02T12:30:38Z|00503|binding|INFO|Claiming lport b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 for this chassis.
Oct 02 12:30:38 compute-0 ovn_controller[148183]: 2025-10-02T12:30:38Z|00504|binding|INFO|b5ad4ac4-0d9f-487b-8448-19fdf04c9f36: Claiming fa:16:3e:99:6e:1d 10.100.0.13
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:39.004 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:6e:1d 10.100.0.13'], port_security=['fa:16:3e:99:6e:1d 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6d9cbb75-985a-47af-9a91-f4a885d1b59a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27a1729bf10548219b90df46839849f5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '19f6d4f0-1655-4062-a124-10140844bfae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f7e0b23-d51b-4498-9dd8-e3096f69c99c, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=b5ad4ac4-0d9f-487b-8448-19fdf04c9f36) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:39.005 158261 INFO neutron.agent.ovn.metadata.agent [-] Port b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 in datapath 247d774d-0cc8-4ef2-a9b8-c756adae0874 bound to our chassis
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:39.006 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 247d774d-0cc8-4ef2-a9b8-c756adae0874
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:39.019 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[78b5fb20-2c02-4193-87ad-82165ee29b96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:39.019 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap247d774d-01 in ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:30:39 compute-0 ovn_controller[148183]: 2025-10-02T12:30:39Z|00505|binding|INFO|Setting lport b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 ovn-installed in OVS
Oct 02 12:30:39 compute-0 ovn_controller[148183]: 2025-10-02T12:30:39Z|00506|binding|INFO|Setting lport b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 up in Southbound
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:39.022 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap247d774d-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:39.022 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[251faa2d-7426-4ecb-b16a-320b79f09e22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:39.024 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1fb40343-aabc-4509-8ede-d407d7b7d734]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:39 compute-0 nova_compute[257802]: 2025-10-02 12:30:39.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:39 compute-0 nova_compute[257802]: 2025-10-02 12:30:39.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:39 compute-0 systemd-udevd[331561]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:30:39 compute-0 systemd-machined[211836]: New machine qemu-59-instance-00000075.
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:39.043 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[4c10e510-9a97-448b-8370-2e3329d78b16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:39 compute-0 NetworkManager[44987]: <info>  [1759408239.0455] device (tapb5ad4ac4-0d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:30:39 compute-0 NetworkManager[44987]: <info>  [1759408239.0465] device (tapb5ad4ac4-0d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:30:39 compute-0 systemd[1]: Started Virtual Machine qemu-59-instance-00000075.
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:39.068 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[652ea4d7-67eb-4cee-af38-793fcb761f7d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:39.098 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[885891cb-2c4d-495d-97c0-200ae664285f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:39 compute-0 systemd-udevd[331564]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:30:39 compute-0 NetworkManager[44987]: <info>  [1759408239.1051] manager: (tap247d774d-00): new Veth device (/org/freedesktop/NetworkManager/Devices/247)
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:39.106 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c2956545-45da-4997-8b46-901e5c672188]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:39 compute-0 nova_compute[257802]: 2025-10-02 12:30:39.133 2 DEBUG nova.network.neutron [req-c54d5b1a-5189-40af-9a9a-89cb07517dfd req-53d24910-3714-475a-a03f-796226ab1741 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Updated VIF entry in instance network info cache for port b5ad4ac4-0d9f-487b-8448-19fdf04c9f36. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:30:39 compute-0 nova_compute[257802]: 2025-10-02 12:30:39.133 2 DEBUG nova.network.neutron [req-c54d5b1a-5189-40af-9a9a-89cb07517dfd req-53d24910-3714-475a-a03f-796226ab1741 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Updating instance_info_cache with network_info: [{"id": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "address": "fa:16:3e:99:6e:1d", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5ad4ac4-0d", "ovs_interfaceid": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:39.138 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[461fb4eb-2d35-4105-ad59-6d47e54f7a8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:39.141 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[7b0e8344-5932-4af0-b5c6-eece1dcb6534]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:39 compute-0 NetworkManager[44987]: <info>  [1759408239.1653] device (tap247d774d-00): carrier: link connected
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:39.172 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[f315eb9b-a9c9-410d-81d9-f457eb4e6b7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:39 compute-0 nova_compute[257802]: 2025-10-02 12:30:39.188 2 DEBUG oslo_concurrency.lockutils [req-c54d5b1a-5189-40af-9a9a-89cb07517dfd req-53d24910-3714-475a-a03f-796226ab1741 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-6d9cbb75-985a-47af-9a91-f4a885d1b59a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:39.190 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb60bbe-a0de-470f-9ae0-316215ed4e62]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap247d774d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:ab:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630679, 'reachable_time': 43811, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331592, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:39.207 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b1562b2c-18a5-4fc6-92a6-79fbf4a50d1b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1b:ab18'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630679, 'tstamp': 630679}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331593, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:39 compute-0 ceph-mon[73607]: pgmap v2070: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 662 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 482 KiB/s rd, 4.4 MiB/s wr, 105 op/s
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:39.227 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7cc3b835-4ec4-404d-a2fc-48c1b58e20d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap247d774d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:ab:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630679, 'reachable_time': 43811, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 331594, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:39.259 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b3e51a0f-e638-4aa7-a48f-72e2b1b78bd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:39.319 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7f368fb8-0bbb-44c1-927c-a677c020da27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:39.321 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap247d774d-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:39.321 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:39.321 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap247d774d-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:39 compute-0 nova_compute[257802]: 2025-10-02 12:30:39.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:39 compute-0 NetworkManager[44987]: <info>  [1759408239.3239] manager: (tap247d774d-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/248)
Oct 02 12:30:39 compute-0 kernel: tap247d774d-00: entered promiscuous mode
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:39.331 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap247d774d-00, col_values=(('external_ids', {'iface-id': '04584168-a51c-41f9-9206-d39db8a81566'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:39 compute-0 ovn_controller[148183]: 2025-10-02T12:30:39Z|00507|binding|INFO|Releasing lport 04584168-a51c-41f9-9206-d39db8a81566 from this chassis (sb_readonly=0)
Oct 02 12:30:39 compute-0 nova_compute[257802]: 2025-10-02 12:30:39.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:39.336 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/247d774d-0cc8-4ef2-a9b8-c756adae0874.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/247d774d-0cc8-4ef2-a9b8-c756adae0874.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:39.336 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f1106f1a-3204-4b87-8704-84cf90b4373b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:39.337 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-247d774d-0cc8-4ef2-a9b8-c756adae0874
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/247d774d-0cc8-4ef2-a9b8-c756adae0874.pid.haproxy
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 247d774d-0cc8-4ef2-a9b8-c756adae0874
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:39.338 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'env', 'PROCESS_TAG=haproxy-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/247d774d-0cc8-4ef2-a9b8-c756adae0874.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:30:39 compute-0 nova_compute[257802]: 2025-10-02 12:30:39.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2071: 305 pgs: 305 active+clean; 689 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.9 MiB/s wr, 165 op/s
Oct 02 12:30:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:39.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:39 compute-0 podman[331667]: 2025-10-02 12:30:39.718440986 +0000 UTC m=+0.025640731 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:30:39 compute-0 podman[331667]: 2025-10-02 12:30:39.890542247 +0000 UTC m=+0.197741962 container create dfe7bc7571fa0597ca54b1e1e47463a4fbcd8233150d506cd87d2ab3612506c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 12:30:39 compute-0 systemd[1]: Started libpod-conmon-dfe7bc7571fa0597ca54b1e1e47463a4fbcd8233150d506cd87d2ab3612506c8.scope.
Oct 02 12:30:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:30:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/561072f661b9396786cf147e4f5d39a64b425c8ec22fc27cbafa974a2db7532a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:40 compute-0 podman[331667]: 2025-10-02 12:30:40.036048014 +0000 UTC m=+0.343247739 container init dfe7bc7571fa0597ca54b1e1e47463a4fbcd8233150d506cd87d2ab3612506c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 12:30:40 compute-0 podman[331667]: 2025-10-02 12:30:40.043219471 +0000 UTC m=+0.350419186 container start dfe7bc7571fa0597ca54b1e1e47463a4fbcd8233150d506cd87d2ab3612506c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 02 12:30:40 compute-0 nova_compute[257802]: 2025-10-02 12:30:40.048 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408240.047377, 6d9cbb75-985a-47af-9a91-f4a885d1b59a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:30:40 compute-0 nova_compute[257802]: 2025-10-02 12:30:40.048 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] VM Started (Lifecycle Event)
Oct 02 12:30:40 compute-0 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[331683]: [NOTICE]   (331687) : New worker (331689) forked
Oct 02 12:30:40 compute-0 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[331683]: [NOTICE]   (331687) : Loading success.
Oct 02 12:30:40 compute-0 nova_compute[257802]: 2025-10-02 12:30:40.161 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:40 compute-0 nova_compute[257802]: 2025-10-02 12:30:40.165 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408240.047755, 6d9cbb75-985a-47af-9a91-f4a885d1b59a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:30:40 compute-0 nova_compute[257802]: 2025-10-02 12:30:40.166 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] VM Paused (Lifecycle Event)
Oct 02 12:30:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1211407120' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:40 compute-0 nova_compute[257802]: 2025-10-02 12:30:40.299 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:40 compute-0 nova_compute[257802]: 2025-10-02 12:30:40.302 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:30:40 compute-0 nova_compute[257802]: 2025-10-02 12:30:40.405 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:30:40 compute-0 nova_compute[257802]: 2025-10-02 12:30:40.832 2 DEBUG nova.compute.manager [req-68e38246-e907-4a18-980b-11333c00ccea req-a880dde5-20b3-427b-947f-941aecdd55cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Received event network-vif-plugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:40 compute-0 nova_compute[257802]: 2025-10-02 12:30:40.833 2 DEBUG oslo_concurrency.lockutils [req-68e38246-e907-4a18-980b-11333c00ccea req-a880dde5-20b3-427b-947f-941aecdd55cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:40 compute-0 nova_compute[257802]: 2025-10-02 12:30:40.833 2 DEBUG oslo_concurrency.lockutils [req-68e38246-e907-4a18-980b-11333c00ccea req-a880dde5-20b3-427b-947f-941aecdd55cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:40 compute-0 nova_compute[257802]: 2025-10-02 12:30:40.833 2 DEBUG oslo_concurrency.lockutils [req-68e38246-e907-4a18-980b-11333c00ccea req-a880dde5-20b3-427b-947f-941aecdd55cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:40 compute-0 nova_compute[257802]: 2025-10-02 12:30:40.833 2 DEBUG nova.compute.manager [req-68e38246-e907-4a18-980b-11333c00ccea req-a880dde5-20b3-427b-947f-941aecdd55cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Processing event network-vif-plugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:30:40 compute-0 nova_compute[257802]: 2025-10-02 12:30:40.834 2 DEBUG nova.compute.manager [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:30:40 compute-0 nova_compute[257802]: 2025-10-02 12:30:40.845 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408240.8450468, 6d9cbb75-985a-47af-9a91-f4a885d1b59a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:30:40 compute-0 nova_compute[257802]: 2025-10-02 12:30:40.845 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] VM Resumed (Lifecycle Event)
Oct 02 12:30:40 compute-0 nova_compute[257802]: 2025-10-02 12:30:40.846 2 DEBUG nova.virt.libvirt.driver [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:30:40 compute-0 nova_compute[257802]: 2025-10-02 12:30:40.849 2 INFO nova.virt.libvirt.driver [-] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Instance spawned successfully.
Oct 02 12:30:40 compute-0 nova_compute[257802]: 2025-10-02 12:30:40.849 2 DEBUG nova.virt.libvirt.driver [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:30:40 compute-0 nova_compute[257802]: 2025-10-02 12:30:40.879 2 DEBUG nova.virt.libvirt.driver [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:30:40 compute-0 nova_compute[257802]: 2025-10-02 12:30:40.879 2 DEBUG nova.virt.libvirt.driver [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:30:40 compute-0 nova_compute[257802]: 2025-10-02 12:30:40.880 2 DEBUG nova.virt.libvirt.driver [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:30:40 compute-0 nova_compute[257802]: 2025-10-02 12:30:40.880 2 DEBUG nova.virt.libvirt.driver [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:30:40 compute-0 nova_compute[257802]: 2025-10-02 12:30:40.880 2 DEBUG nova.virt.libvirt.driver [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:30:40 compute-0 nova_compute[257802]: 2025-10-02 12:30:40.881 2 DEBUG nova.virt.libvirt.driver [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:30:40 compute-0 nova_compute[257802]: 2025-10-02 12:30:40.884 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:40 compute-0 nova_compute[257802]: 2025-10-02 12:30:40.886 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:30:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:30:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:40.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:30:41 compute-0 nova_compute[257802]: 2025-10-02 12:30:41.149 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:30:41 compute-0 nova_compute[257802]: 2025-10-02 12:30:41.208 2 INFO nova.compute.manager [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Took 13.56 seconds to spawn the instance on the hypervisor.
Oct 02 12:30:41 compute-0 nova_compute[257802]: 2025-10-02 12:30:41.209 2 DEBUG nova.compute.manager [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:41 compute-0 nova_compute[257802]: 2025-10-02 12:30:41.227 2 DEBUG oslo_concurrency.lockutils [None req-e6fc3e2f-1f82-4813-ae14-78ca6c03e7bb 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquiring lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:41 compute-0 nova_compute[257802]: 2025-10-02 12:30:41.227 2 DEBUG oslo_concurrency.lockutils [None req-e6fc3e2f-1f82-4813-ae14-78ca6c03e7bb 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:41 compute-0 nova_compute[257802]: 2025-10-02 12:30:41.227 2 DEBUG oslo_concurrency.lockutils [None req-e6fc3e2f-1f82-4813-ae14-78ca6c03e7bb 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquiring lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:41 compute-0 nova_compute[257802]: 2025-10-02 12:30:41.227 2 DEBUG oslo_concurrency.lockutils [None req-e6fc3e2f-1f82-4813-ae14-78ca6c03e7bb 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:41 compute-0 nova_compute[257802]: 2025-10-02 12:30:41.228 2 DEBUG oslo_concurrency.lockutils [None req-e6fc3e2f-1f82-4813-ae14-78ca6c03e7bb 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:41 compute-0 nova_compute[257802]: 2025-10-02 12:30:41.229 2 INFO nova.compute.manager [None req-e6fc3e2f-1f82-4813-ae14-78ca6c03e7bb 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Terminating instance
Oct 02 12:30:41 compute-0 nova_compute[257802]: 2025-10-02 12:30:41.230 2 DEBUG nova.compute.manager [None req-e6fc3e2f-1f82-4813-ae14-78ca6c03e7bb 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:30:41 compute-0 kernel: tapb76b92a4-18 (unregistering): left promiscuous mode
Oct 02 12:30:41 compute-0 NetworkManager[44987]: <info>  [1759408241.2713] device (tapb76b92a4-18): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:30:41 compute-0 ovn_controller[148183]: 2025-10-02T12:30:41Z|00508|binding|INFO|Releasing lport b76b92a4-1882-4f89-94f4-3a4700f9c379 from this chassis (sb_readonly=0)
Oct 02 12:30:41 compute-0 ovn_controller[148183]: 2025-10-02T12:30:41Z|00509|binding|INFO|Setting lport b76b92a4-1882-4f89-94f4-3a4700f9c379 down in Southbound
Oct 02 12:30:41 compute-0 ovn_controller[148183]: 2025-10-02T12:30:41Z|00510|binding|INFO|Removing iface tapb76b92a4-18 ovn-installed in OVS
Oct 02 12:30:41 compute-0 nova_compute[257802]: 2025-10-02 12:30:41.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:41 compute-0 nova_compute[257802]: 2025-10-02 12:30:41.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:41.289 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:4b:2a 10.100.0.10'], port_security=['fa:16:3e:23:4b:2a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '634c38a6-caab-410d-8748-3ec1fd6f9cdc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b8ca48cb5f64ef3b0736b8be82378b8', 'neutron:revision_number': '12', 'neutron:security_group_ids': '16cf92c4-b852-4373-864a-75cf05995c6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.182', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23ff60d5-33e2-45e9-a563-ce0081b7cc04, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=b76b92a4-1882-4f89-94f4-3a4700f9c379) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:30:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:41.290 158261 INFO neutron.agent.ovn.metadata.agent [-] Port b76b92a4-1882-4f89-94f4-3a4700f9c379 in datapath b2c62a66-f9bc-4a45-a843-aef2e12a7fff unbound from our chassis
Oct 02 12:30:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:41.294 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b2c62a66-f9bc-4a45-a843-aef2e12a7fff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:30:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:41.296 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e02a4f05-787a-4d09-ad9e-6db142d63871]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:41.297 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff namespace which is not needed anymore
Oct 02 12:30:41 compute-0 nova_compute[257802]: 2025-10-02 12:30:41.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:41 compute-0 nova_compute[257802]: 2025-10-02 12:30:41.302 2 INFO nova.compute.manager [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Took 14.79 seconds to build instance.
Oct 02 12:30:41 compute-0 nova_compute[257802]: 2025-10-02 12:30:41.328 2 DEBUG oslo_concurrency.lockutils [None req-4c2ecb5e-c190-4336-9c73-c00ddeefe801 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.912s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:41 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000070.scope: Deactivated successfully.
Oct 02 12:30:41 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000070.scope: Consumed 6.282s CPU time.
Oct 02 12:30:41 compute-0 systemd-machined[211836]: Machine qemu-58-instance-00000070 terminated.
Oct 02 12:30:41 compute-0 ceph-mon[73607]: pgmap v2071: 305 pgs: 305 active+clean; 689 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.9 MiB/s wr, 165 op/s
Oct 02 12:30:41 compute-0 nova_compute[257802]: 2025-10-02 12:30:41.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:41 compute-0 nova_compute[257802]: 2025-10-02 12:30:41.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:41 compute-0 nova_compute[257802]: 2025-10-02 12:30:41.467 2 INFO nova.virt.libvirt.driver [-] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Instance destroyed successfully.
Oct 02 12:30:41 compute-0 nova_compute[257802]: 2025-10-02 12:30:41.467 2 DEBUG nova.objects.instance [None req-e6fc3e2f-1f82-4813-ae14-78ca6c03e7bb 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lazy-loading 'resources' on Instance uuid 634c38a6-caab-410d-8748-3ec1fd6f9cdc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:30:41 compute-0 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[331406]: [NOTICE]   (331410) : haproxy version is 2.8.14-c23fe91
Oct 02 12:30:41 compute-0 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[331406]: [NOTICE]   (331410) : path to executable is /usr/sbin/haproxy
Oct 02 12:30:41 compute-0 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[331406]: [WARNING]  (331410) : Exiting Master process...
Oct 02 12:30:41 compute-0 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[331406]: [WARNING]  (331410) : Exiting Master process...
Oct 02 12:30:41 compute-0 nova_compute[257802]: 2025-10-02 12:30:41.480 2 DEBUG nova.virt.libvirt.vif [None req-e6fc3e2f-1f82-4813-ae14-78ca6c03e7bb 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:28:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1187258695',display_name='tempest-ServerActionsTestJSON-server-1187258695',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1187258695',id=112,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGPR3K1CRfy8NuVFf5q7pocTRVWdkUGpXAwygQtld+YJHetWc29OTANCyHNFo7YQv+XOnhZt50IrR5VORh36EWFuQsqglHMqAdAjWfcmv1078iyeLOwVFkKMjcTINNdhYA==',key_name='tempest-keypair-579122016',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:30:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4b8ca48cb5f64ef3b0736b8be82378b8',ramdisk_id='',reservation_id='r-8nz9nuro',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-842270816',owner_user_name='tempest-ServerActionsTestJSON-842270816-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:30:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2bd16d1f5f9d4eb396c474eedee67165',uuid=634c38a6-caab-410d-8748-3ec1fd6f9cdc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "address": "fa:16:3e:23:4b:2a", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb76b92a4-18", "ovs_interfaceid": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:30:41 compute-0 nova_compute[257802]: 2025-10-02 12:30:41.481 2 DEBUG nova.network.os_vif_util [None req-e6fc3e2f-1f82-4813-ae14-78ca6c03e7bb 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Converting VIF {"id": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "address": "fa:16:3e:23:4b:2a", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb76b92a4-18", "ovs_interfaceid": "b76b92a4-1882-4f89-94f4-3a4700f9c379", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:30:41 compute-0 nova_compute[257802]: 2025-10-02 12:30:41.481 2 DEBUG nova.network.os_vif_util [None req-e6fc3e2f-1f82-4813-ae14-78ca6c03e7bb 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:4b:2a,bridge_name='br-int',has_traffic_filtering=True,id=b76b92a4-1882-4f89-94f4-3a4700f9c379,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb76b92a4-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:30:41 compute-0 nova_compute[257802]: 2025-10-02 12:30:41.482 2 DEBUG os_vif [None req-e6fc3e2f-1f82-4813-ae14-78ca6c03e7bb 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:4b:2a,bridge_name='br-int',has_traffic_filtering=True,id=b76b92a4-1882-4f89-94f4-3a4700f9c379,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb76b92a4-18') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:30:41 compute-0 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[331406]: [ALERT]    (331410) : Current worker (331413) exited with code 143 (Terminated)
Oct 02 12:30:41 compute-0 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[331406]: [WARNING]  (331410) : All workers exited. Exiting... (0)
Oct 02 12:30:41 compute-0 nova_compute[257802]: 2025-10-02 12:30:41.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:41 compute-0 nova_compute[257802]: 2025-10-02 12:30:41.484 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb76b92a4-18, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:41 compute-0 systemd[1]: libpod-d0229124e8612d7c8bf25c374ce23a109cb7c8afd3a9d409151fb11a8f2c1afb.scope: Deactivated successfully.
Oct 02 12:30:41 compute-0 nova_compute[257802]: 2025-10-02 12:30:41.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:41 compute-0 nova_compute[257802]: 2025-10-02 12:30:41.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:41 compute-0 nova_compute[257802]: 2025-10-02 12:30:41.488 2 INFO os_vif [None req-e6fc3e2f-1f82-4813-ae14-78ca6c03e7bb 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:4b:2a,bridge_name='br-int',has_traffic_filtering=True,id=b76b92a4-1882-4f89-94f4-3a4700f9c379,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb76b92a4-18')
Oct 02 12:30:41 compute-0 podman[331720]: 2025-10-02 12:30:41.491549519 +0000 UTC m=+0.091235834 container died d0229124e8612d7c8bf25c374ce23a109cb7c8afd3a9d409151fb11a8f2c1afb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:30:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d0229124e8612d7c8bf25c374ce23a109cb7c8afd3a9d409151fb11a8f2c1afb-userdata-shm.mount: Deactivated successfully.
Oct 02 12:30:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b1d6888a622dff2c05e3ee73fcd396674aa0ab1439d355d70dec4cbb4c1d0c3-merged.mount: Deactivated successfully.
Oct 02 12:30:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2072: 305 pgs: 305 active+clean; 689 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.9 MiB/s wr, 165 op/s
Oct 02 12:30:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:41.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:41 compute-0 podman[331720]: 2025-10-02 12:30:41.652954747 +0000 UTC m=+0.252641062 container cleanup d0229124e8612d7c8bf25c374ce23a109cb7c8afd3a9d409151fb11a8f2c1afb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:30:41 compute-0 systemd[1]: libpod-conmon-d0229124e8612d7c8bf25c374ce23a109cb7c8afd3a9d409151fb11a8f2c1afb.scope: Deactivated successfully.
Oct 02 12:30:41 compute-0 podman[331780]: 2025-10-02 12:30:41.803315064 +0000 UTC m=+0.128501010 container remove d0229124e8612d7c8bf25c374ce23a109cb7c8afd3a9d409151fb11a8f2c1afb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 12:30:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:41.809 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[071927c3-f793-49d2-be77-259872e98e40]: (4, ('Thu Oct  2 12:30:41 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff (d0229124e8612d7c8bf25c374ce23a109cb7c8afd3a9d409151fb11a8f2c1afb)\nd0229124e8612d7c8bf25c374ce23a109cb7c8afd3a9d409151fb11a8f2c1afb\nThu Oct  2 12:30:41 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff (d0229124e8612d7c8bf25c374ce23a109cb7c8afd3a9d409151fb11a8f2c1afb)\nd0229124e8612d7c8bf25c374ce23a109cb7c8afd3a9d409151fb11a8f2c1afb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:41.811 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[75427f33-3811-45d5-a710-23bbc2c1d247]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:41.812 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2c62a66-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:41 compute-0 nova_compute[257802]: 2025-10-02 12:30:41.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:41 compute-0 kernel: tapb2c62a66-f0: left promiscuous mode
Oct 02 12:30:41 compute-0 nova_compute[257802]: 2025-10-02 12:30:41.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:41.833 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[bf524145-284d-432d-a8a3-082396d90158]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:41.860 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a45f7511-293c-433a-89f5-9ae5a2d856c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:41.862 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e32b7c74-ef54-43c5-a1ae-1b798113cc31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:41.876 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f84a976a-f0bc-49e9-85e6-c2ce91c50fd4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630251, 'reachable_time': 41642, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331795, 'error': None, 'target': 'ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:41.878 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:30:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:30:41.878 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[46f07b53-8940-4f7f-a8af-0077cc1f240d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:41 compute-0 systemd[1]: run-netns-ovnmeta\x2db2c62a66\x2df9bc\x2d4a45\x2da843\x2daef2e12a7fff.mount: Deactivated successfully.
Oct 02 12:30:42 compute-0 nova_compute[257802]: 2025-10-02 12:30:42.142 2 DEBUG nova.compute.manager [req-18c914d2-3b62-4723-b8d3-97a42cbbb066 req-4be57ffc-e31f-4d15-966c-bde78d1f57e4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Received event network-vif-unplugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:42 compute-0 nova_compute[257802]: 2025-10-02 12:30:42.143 2 DEBUG oslo_concurrency.lockutils [req-18c914d2-3b62-4723-b8d3-97a42cbbb066 req-4be57ffc-e31f-4d15-966c-bde78d1f57e4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:42 compute-0 nova_compute[257802]: 2025-10-02 12:30:42.144 2 DEBUG oslo_concurrency.lockutils [req-18c914d2-3b62-4723-b8d3-97a42cbbb066 req-4be57ffc-e31f-4d15-966c-bde78d1f57e4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:42 compute-0 nova_compute[257802]: 2025-10-02 12:30:42.144 2 DEBUG oslo_concurrency.lockutils [req-18c914d2-3b62-4723-b8d3-97a42cbbb066 req-4be57ffc-e31f-4d15-966c-bde78d1f57e4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:42 compute-0 nova_compute[257802]: 2025-10-02 12:30:42.144 2 DEBUG nova.compute.manager [req-18c914d2-3b62-4723-b8d3-97a42cbbb066 req-4be57ffc-e31f-4d15-966c-bde78d1f57e4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] No waiting events found dispatching network-vif-unplugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:42 compute-0 nova_compute[257802]: 2025-10-02 12:30:42.144 2 DEBUG nova.compute.manager [req-18c914d2-3b62-4723-b8d3-97a42cbbb066 req-4be57ffc-e31f-4d15-966c-bde78d1f57e4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Received event network-vif-unplugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:30:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:30:42
Oct 02 12:30:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:30:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:30:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'images', 'volumes', 'vms', 'default.rgw.log', 'backups', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta']
Oct 02 12:30:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:30:42 compute-0 ceph-mon[73607]: pgmap v2072: 305 pgs: 305 active+clean; 689 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.9 MiB/s wr, 165 op/s
Oct 02 12:30:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:30:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:30:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:30:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:30:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:30:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:30:42 compute-0 nova_compute[257802]: 2025-10-02 12:30:42.768 2 INFO nova.virt.libvirt.driver [None req-e6fc3e2f-1f82-4813-ae14-78ca6c03e7bb 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Deleting instance files /var/lib/nova/instances/634c38a6-caab-410d-8748-3ec1fd6f9cdc_del
Oct 02 12:30:42 compute-0 nova_compute[257802]: 2025-10-02 12:30:42.769 2 INFO nova.virt.libvirt.driver [None req-e6fc3e2f-1f82-4813-ae14-78ca6c03e7bb 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Deletion of /var/lib/nova/instances/634c38a6-caab-410d-8748-3ec1fd6f9cdc_del complete
Oct 02 12:30:42 compute-0 nova_compute[257802]: 2025-10-02 12:30:42.855 2 INFO nova.compute.manager [None req-e6fc3e2f-1f82-4813-ae14-78ca6c03e7bb 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Took 1.63 seconds to destroy the instance on the hypervisor.
Oct 02 12:30:42 compute-0 nova_compute[257802]: 2025-10-02 12:30:42.856 2 DEBUG oslo.service.loopingcall [None req-e6fc3e2f-1f82-4813-ae14-78ca6c03e7bb 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:30:42 compute-0 nova_compute[257802]: 2025-10-02 12:30:42.856 2 DEBUG nova.compute.manager [-] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:30:42 compute-0 nova_compute[257802]: 2025-10-02 12:30:42.856 2 DEBUG nova.network.neutron [-] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:30:42 compute-0 podman[331799]: 2025-10-02 12:30:42.944172863 +0000 UTC m=+0.070472044 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:30:42 compute-0 podman[331798]: 2025-10-02 12:30:42.956609558 +0000 UTC m=+0.086293712 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 12:30:42 compute-0 podman[331800]: 2025-10-02 12:30:42.966840381 +0000 UTC m=+0.091217355 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 02 12:30:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:42.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:43 compute-0 nova_compute[257802]: 2025-10-02 12:30:43.009 2 DEBUG nova.compute.manager [req-381a0fe1-be43-416a-9a0a-9cb0e6ef54c0 req-30e154f2-03e6-4007-8194-209ce7626857 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Received event network-vif-plugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:43 compute-0 nova_compute[257802]: 2025-10-02 12:30:43.009 2 DEBUG oslo_concurrency.lockutils [req-381a0fe1-be43-416a-9a0a-9cb0e6ef54c0 req-30e154f2-03e6-4007-8194-209ce7626857 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:43 compute-0 nova_compute[257802]: 2025-10-02 12:30:43.009 2 DEBUG oslo_concurrency.lockutils [req-381a0fe1-be43-416a-9a0a-9cb0e6ef54c0 req-30e154f2-03e6-4007-8194-209ce7626857 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:43 compute-0 nova_compute[257802]: 2025-10-02 12:30:43.009 2 DEBUG oslo_concurrency.lockutils [req-381a0fe1-be43-416a-9a0a-9cb0e6ef54c0 req-30e154f2-03e6-4007-8194-209ce7626857 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:43 compute-0 nova_compute[257802]: 2025-10-02 12:30:43.010 2 DEBUG nova.compute.manager [req-381a0fe1-be43-416a-9a0a-9cb0e6ef54c0 req-30e154f2-03e6-4007-8194-209ce7626857 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] No waiting events found dispatching network-vif-plugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:43 compute-0 nova_compute[257802]: 2025-10-02 12:30:43.010 2 WARNING nova.compute.manager [req-381a0fe1-be43-416a-9a0a-9cb0e6ef54c0 req-30e154f2-03e6-4007-8194-209ce7626857 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Received unexpected event network-vif-plugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 for instance with vm_state active and task_state None.
Oct 02 12:30:43 compute-0 nova_compute[257802]: 2025-10-02 12:30:43.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:30:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:30:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:30:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:30:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:30:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:30:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:30:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:30:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:30:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:30:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Oct 02 12:30:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 305 active+clean; 621 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 2.4 MiB/s wr, 325 op/s
Oct 02 12:30:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:43.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Oct 02 12:30:43 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Oct 02 12:30:44 compute-0 ceph-mon[73607]: pgmap v2073: 305 pgs: 305 active+clean; 621 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 2.4 MiB/s wr, 325 op/s
Oct 02 12:30:44 compute-0 ceph-mon[73607]: osdmap e311: 3 total, 3 up, 3 in
Oct 02 12:30:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:44.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:45 compute-0 sudo[331855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:45 compute-0 sudo[331855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:45 compute-0 sudo[331855]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:45 compute-0 nova_compute[257802]: 2025-10-02 12:30:45.254 2 DEBUG nova.network.neutron [-] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:30:45 compute-0 sudo[331880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:45 compute-0 sudo[331880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:45 compute-0 sudo[331880]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:45 compute-0 nova_compute[257802]: 2025-10-02 12:30:45.291 2 INFO nova.compute.manager [-] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Took 2.43 seconds to deallocate network for instance.
Oct 02 12:30:45 compute-0 nova_compute[257802]: 2025-10-02 12:30:45.325 2 DEBUG nova.compute.manager [req-34fb6104-24df-4290-ac23-8adffa99b77e req-9ca710e2-0501-4ce4-8fb4-9aea0cdc5975 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Received event network-vif-plugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:45 compute-0 nova_compute[257802]: 2025-10-02 12:30:45.326 2 DEBUG oslo_concurrency.lockutils [req-34fb6104-24df-4290-ac23-8adffa99b77e req-9ca710e2-0501-4ce4-8fb4-9aea0cdc5975 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:45 compute-0 nova_compute[257802]: 2025-10-02 12:30:45.326 2 DEBUG oslo_concurrency.lockutils [req-34fb6104-24df-4290-ac23-8adffa99b77e req-9ca710e2-0501-4ce4-8fb4-9aea0cdc5975 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:45 compute-0 nova_compute[257802]: 2025-10-02 12:30:45.326 2 DEBUG oslo_concurrency.lockutils [req-34fb6104-24df-4290-ac23-8adffa99b77e req-9ca710e2-0501-4ce4-8fb4-9aea0cdc5975 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:45 compute-0 nova_compute[257802]: 2025-10-02 12:30:45.327 2 DEBUG nova.compute.manager [req-34fb6104-24df-4290-ac23-8adffa99b77e req-9ca710e2-0501-4ce4-8fb4-9aea0cdc5975 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] No waiting events found dispatching network-vif-plugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:45 compute-0 nova_compute[257802]: 2025-10-02 12:30:45.327 2 WARNING nova.compute.manager [req-34fb6104-24df-4290-ac23-8adffa99b77e req-9ca710e2-0501-4ce4-8fb4-9aea0cdc5975 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Received unexpected event network-vif-plugged-b76b92a4-1882-4f89-94f4-3a4700f9c379 for instance with vm_state active and task_state deleting.
Oct 02 12:30:45 compute-0 nova_compute[257802]: 2025-10-02 12:30:45.415 2 DEBUG oslo_concurrency.lockutils [None req-e6fc3e2f-1f82-4813-ae14-78ca6c03e7bb 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:45 compute-0 nova_compute[257802]: 2025-10-02 12:30:45.416 2 DEBUG oslo_concurrency.lockutils [None req-e6fc3e2f-1f82-4813-ae14-78ca6c03e7bb 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:45 compute-0 nova_compute[257802]: 2025-10-02 12:30:45.421 2 DEBUG oslo_concurrency.lockutils [None req-e6fc3e2f-1f82-4813-ae14-78ca6c03e7bb 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:45 compute-0 nova_compute[257802]: 2025-10-02 12:30:45.465 2 INFO nova.scheduler.client.report [None req-e6fc3e2f-1f82-4813-ae14-78ca6c03e7bb 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Deleted allocations for instance 634c38a6-caab-410d-8748-3ec1fd6f9cdc
Oct 02 12:30:45 compute-0 nova_compute[257802]: 2025-10-02 12:30:45.601 2 DEBUG oslo_concurrency.lockutils [None req-e6fc3e2f-1f82-4813-ae14-78ca6c03e7bb 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "634c38a6-caab-410d-8748-3ec1fd6f9cdc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2075: 305 pgs: 305 active+clean; 597 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 2.6 MiB/s wr, 368 op/s
Oct 02 12:30:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:30:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:45.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:30:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3030540415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:46 compute-0 nova_compute[257802]: 2025-10-02 12:30:46.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:46 compute-0 podman[331906]: 2025-10-02 12:30:46.946419981 +0000 UTC m=+0.089134163 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 12:30:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:46.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:47 compute-0 ceph-mon[73607]: pgmap v2075: 305 pgs: 305 active+clean; 597 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 2.6 MiB/s wr, 368 op/s
Oct 02 12:30:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/390636352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:47 compute-0 nova_compute[257802]: 2025-10-02 12:30:47.501 2 DEBUG nova.compute.manager [req-df1ebcb5-6082-4457-86e4-6f8a58551113 req-e85e6497-3fe4-4b0d-b28c-07967c4c4ec6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Received event network-vif-deleted-b76b92a4-1882-4f89-94f4-3a4700f9c379 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2076: 305 pgs: 305 active+clean; 589 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 1.8 MiB/s wr, 362 op/s
Oct 02 12:30:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:30:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:47.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:30:48 compute-0 nova_compute[257802]: 2025-10-02 12:30:48.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:30:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:48.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:30:49 compute-0 ceph-mon[73607]: pgmap v2076: 305 pgs: 305 active+clean; 589 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 1.8 MiB/s wr, 362 op/s
Oct 02 12:30:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:30:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Cumulative writes: 10K writes, 45K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1725 writes, 7645 keys, 1725 commit groups, 1.0 writes per commit group, ingest: 10.93 MB, 0.02 MB/s
                                           Interval WAL: 1725 writes, 1725 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     61.5      0.97              0.16        28    0.035       0      0       0.0       0.0
                                             L6      1/0   13.14 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.1    112.5     94.6      2.61              0.72        27    0.097    156K    15K       0.0       0.0
                                            Sum      1/0   13.14 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.1     82.0     85.6      3.59              0.88        55    0.065    156K    15K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.1     90.2     94.8      0.80              0.22        12    0.067     44K   3123       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    112.5     94.6      2.61              0.72        27    0.097    156K    15K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     61.8      0.97              0.16        27    0.036       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.059, interval 0.012
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.30 GB write, 0.09 MB/s write, 0.29 GB read, 0.08 MB/s read, 3.6 seconds
                                           Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5581be5e11f0#2 capacity: 304.00 MB usage: 33.05 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000349 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1954,31.85 MB,10.4754%) FilterBlock(56,448.61 KB,0.14411%) IndexBlock(56,785.45 KB,0.252317%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 12:30:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 305 active+clean; 564 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 56 KiB/s wr, 275 op/s
Oct 02 12:30:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:49.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:49 compute-0 nova_compute[257802]: 2025-10-02 12:30:49.720 2 DEBUG oslo_concurrency.lockutils [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "refresh_cache-6d9cbb75-985a-47af-9a91-f4a885d1b59a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:30:49 compute-0 nova_compute[257802]: 2025-10-02 12:30:49.721 2 DEBUG oslo_concurrency.lockutils [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquired lock "refresh_cache-6d9cbb75-985a-47af-9a91-f4a885d1b59a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:30:49 compute-0 nova_compute[257802]: 2025-10-02 12:30:49.721 2 DEBUG nova.network.neutron [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:30:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1681692066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:50.068701) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408250068755, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 698, "num_deletes": 253, "total_data_size": 839810, "memory_usage": 853776, "flush_reason": "Manual Compaction"}
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408250076198, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 818839, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45672, "largest_seqno": 46368, "table_properties": {"data_size": 815163, "index_size": 1456, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8914, "raw_average_key_size": 20, "raw_value_size": 807593, "raw_average_value_size": 1823, "num_data_blocks": 63, "num_entries": 443, "num_filter_entries": 443, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408213, "oldest_key_time": 1759408213, "file_creation_time": 1759408250, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 7522 microseconds, and 3111 cpu microseconds.
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:50.076232) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 818839 bytes OK
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:50.076247) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:50.081761) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:50.081794) EVENT_LOG_v1 {"time_micros": 1759408250081786, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:50.081840) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 836171, prev total WAL file size 836171, number of live WAL files 2.
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:50.082487) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(799KB)], [98(13MB)]
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408250082522, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 14597890, "oldest_snapshot_seqno": -1}
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 7403 keys, 12708485 bytes, temperature: kUnknown
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408250164083, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 12708485, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12656326, "index_size": 32527, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18565, "raw_key_size": 191364, "raw_average_key_size": 25, "raw_value_size": 12521448, "raw_average_value_size": 1691, "num_data_blocks": 1286, "num_entries": 7403, "num_filter_entries": 7403, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759408250, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:50.164471) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 12708485 bytes
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:50.166977) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.6 rd, 155.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 13.1 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(33.3) write-amplify(15.5) OK, records in: 7927, records dropped: 524 output_compression: NoCompression
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:50.167010) EVENT_LOG_v1 {"time_micros": 1759408250166995, "job": 58, "event": "compaction_finished", "compaction_time_micros": 81736, "compaction_time_cpu_micros": 27150, "output_level": 6, "num_output_files": 1, "total_output_size": 12708485, "num_input_records": 7927, "num_output_records": 7403, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408250167436, "job": 58, "event": "table_file_deletion", "file_number": 100}
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408250172179, "job": 58, "event": "table_file_deletion", "file_number": 98}
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:50.082386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:50.172345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:50.172350) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:50.172351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:50.172353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:50 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:30:50.172354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:30:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:51.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:30:51 compute-0 ceph-mon[73607]: pgmap v2077: 305 pgs: 305 active+clean; 564 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 56 KiB/s wr, 275 op/s
Oct 02 12:30:51 compute-0 nova_compute[257802]: 2025-10-02 12:30:51.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2078: 305 pgs: 305 active+clean; 564 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 56 KiB/s wr, 275 op/s
Oct 02 12:30:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:30:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:51.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:30:51 compute-0 nova_compute[257802]: 2025-10-02 12:30:51.876 2 DEBUG nova.network.neutron [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Updating instance_info_cache with network_info: [{"id": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "address": "fa:16:3e:99:6e:1d", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5ad4ac4-0d", "ovs_interfaceid": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:30:51 compute-0 nova_compute[257802]: 2025-10-02 12:30:51.919 2 DEBUG oslo_concurrency.lockutils [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Releasing lock "refresh_cache-6d9cbb75-985a-47af-9a91-f4a885d1b59a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:30:51 compute-0 ovn_controller[148183]: 2025-10-02T12:30:51Z|00511|binding|INFO|Releasing lport 04584168-a51c-41f9-9206-d39db8a81566 from this chassis (sb_readonly=0)
Oct 02 12:30:51 compute-0 ovn_controller[148183]: 2025-10-02T12:30:51Z|00512|binding|INFO|Releasing lport 02b7597d-2fc1-4c56-8603-4dcb0c716c82 from this chassis (sb_readonly=0)
Oct 02 12:30:52 compute-0 nova_compute[257802]: 2025-10-02 12:30:52.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3301702475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:52 compute-0 nova_compute[257802]: 2025-10-02 12:30:52.162 2 DEBUG nova.virt.libvirt.driver [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Oct 02 12:30:52 compute-0 nova_compute[257802]: 2025-10-02 12:30:52.162 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Creating file /var/lib/nova/instances/6d9cbb75-985a-47af-9a91-f4a885d1b59a/1903f17e5f834d739f8386ef08905c0e.tmp on remote host 192.168.122.101 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Oct 02 12:30:52 compute-0 nova_compute[257802]: 2025-10-02 12:30:52.163 2 DEBUG oslo_concurrency.processutils [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/6d9cbb75-985a-47af-9a91-f4a885d1b59a/1903f17e5f834d739f8386ef08905c0e.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:52 compute-0 nova_compute[257802]: 2025-10-02 12:30:52.595 2 DEBUG oslo_concurrency.processutils [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/6d9cbb75-985a-47af-9a91-f4a885d1b59a/1903f17e5f834d739f8386ef08905c0e.tmp" returned: 1 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:52 compute-0 nova_compute[257802]: 2025-10-02 12:30:52.597 2 DEBUG oslo_concurrency.processutils [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] 'ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/6d9cbb75-985a-47af-9a91-f4a885d1b59a/1903f17e5f834d739f8386ef08905c0e.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Oct 02 12:30:52 compute-0 nova_compute[257802]: 2025-10-02 12:30:52.598 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Creating directory /var/lib/nova/instances/6d9cbb75-985a-47af-9a91-f4a885d1b59a on remote host 192.168.122.101 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Oct 02 12:30:52 compute-0 nova_compute[257802]: 2025-10-02 12:30:52.599 2 DEBUG oslo_concurrency.processutils [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/6d9cbb75-985a-47af-9a91-f4a885d1b59a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:52 compute-0 nova_compute[257802]: 2025-10-02 12:30:52.840 2 DEBUG oslo_concurrency.processutils [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/6d9cbb75-985a-47af-9a91-f4a885d1b59a" returned: 0 in 0.241s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:52 compute-0 nova_compute[257802]: 2025-10-02 12:30:52.846 2 DEBUG nova.virt.libvirt.driver [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:30:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:53.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:53 compute-0 nova_compute[257802]: 2025-10-02 12:30:53.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:53 compute-0 nova_compute[257802]: 2025-10-02 12:30:53.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:30:53 compute-0 nova_compute[257802]: 2025-10-02 12:30:53.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:30:53 compute-0 ceph-mon[73607]: pgmap v2078: 305 pgs: 305 active+clean; 564 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 56 KiB/s wr, 275 op/s
Oct 02 12:30:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2079: 305 pgs: 305 active+clean; 588 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 898 KiB/s rd, 2.2 MiB/s wr, 119 op/s
Oct 02 12:30:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:53.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:54 compute-0 ovn_controller[148183]: 2025-10-02T12:30:54Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:99:6e:1d 10.100.0.13
Oct 02 12:30:54 compute-0 ovn_controller[148183]: 2025-10-02T12:30:54Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:99:6e:1d 10.100.0.13
Oct 02 12:30:54 compute-0 nova_compute[257802]: 2025-10-02 12:30:54.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:30:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4067525008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:30:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:30:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:30:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:30:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.010692205239158757 of space, bias 1.0, pg target 3.207661571747627 quantized to 32 (current 32)
Oct 02 12:30:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:30:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004323374685929649 of space, bias 1.0, pg target 1.2840422817211057 quantized to 32 (current 32)
Oct 02 12:30:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:30:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:30:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:30:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Oct 02 12:30:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:30:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Oct 02 12:30:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:30:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:30:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:30:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.0002689954401637819 quantized to 32 (current 32)
Oct 02 12:30:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:30:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Oct 02 12:30:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:30:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:30:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:30:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Oct 02 12:30:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:55.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:55 compute-0 ceph-mon[73607]: pgmap v2079: 305 pgs: 305 active+clean; 588 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 898 KiB/s rd, 2.2 MiB/s wr, 119 op/s
Oct 02 12:30:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2766251714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3526883118' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:30:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3526883118' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:30:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2080: 305 pgs: 305 active+clean; 602 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 778 KiB/s rd, 2.4 MiB/s wr, 122 op/s
Oct 02 12:30:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:55.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:56 compute-0 nova_compute[257802]: 2025-10-02 12:30:56.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:30:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:30:56 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2802949614' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2802949614' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:56 compute-0 nova_compute[257802]: 2025-10-02 12:30:56.466 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408241.4650228, 634c38a6-caab-410d-8748-3ec1fd6f9cdc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:30:56 compute-0 nova_compute[257802]: 2025-10-02 12:30:56.467 2 INFO nova.compute.manager [-] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] VM Stopped (Lifecycle Event)
Oct 02 12:30:56 compute-0 nova_compute[257802]: 2025-10-02 12:30:56.536 2 DEBUG nova.compute.manager [None req-58a8e8b8-e1e2-4539-803a-a17975948290 - - - - - -] [instance: 634c38a6-caab-410d-8748-3ec1fd6f9cdc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:56 compute-0 nova_compute[257802]: 2025-10-02 12:30:56.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:57.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Oct 02 12:30:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Oct 02 12:30:57 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Oct 02 12:30:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2082: 305 pgs: 305 active+clean; 640 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 360 KiB/s rd, 4.6 MiB/s wr, 122 op/s
Oct 02 12:30:57 compute-0 ceph-mon[73607]: pgmap v2080: 305 pgs: 305 active+clean; 602 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 778 KiB/s rd, 2.4 MiB/s wr, 122 op/s
Oct 02 12:30:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:57.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:58 compute-0 nova_compute[257802]: 2025-10-02 12:30:58.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:58 compute-0 ceph-mon[73607]: osdmap e312: 3 total, 3 up, 3 in
Oct 02 12:30:58 compute-0 ceph-mon[73607]: pgmap v2082: 305 pgs: 305 active+clean; 640 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 360 KiB/s rd, 4.6 MiB/s wr, 122 op/s
Oct 02 12:30:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/755243422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/190023607' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2031110600' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:59.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:59 compute-0 nova_compute[257802]: 2025-10-02 12:30:59.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:30:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2083: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 417 KiB/s rd, 4.7 MiB/s wr, 118 op/s
Oct 02 12:30:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:30:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:59.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2771577985' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:00 compute-0 nova_compute[257802]: 2025-10-02 12:31:00.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:31:00 compute-0 ceph-mon[73607]: pgmap v2083: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 417 KiB/s rd, 4.7 MiB/s wr, 118 op/s
Oct 02 12:31:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:01.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:01 compute-0 nova_compute[257802]: 2025-10-02 12:31:01.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:31:01 compute-0 nova_compute[257802]: 2025-10-02 12:31:01.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2084: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 417 KiB/s rd, 4.7 MiB/s wr, 118 op/s
Oct 02 12:31:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:01.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:02 compute-0 nova_compute[257802]: 2025-10-02 12:31:02.888 2 DEBUG nova.virt.libvirt.driver [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:31:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:31:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:03.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:31:03 compute-0 nova_compute[257802]: 2025-10-02 12:31:03.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:03 compute-0 ceph-mon[73607]: pgmap v2084: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 417 KiB/s rd, 4.7 MiB/s wr, 118 op/s
Oct 02 12:31:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2085: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 170 op/s
Oct 02 12:31:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:31:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:03.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:31:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:05.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:05 compute-0 ceph-mon[73607]: pgmap v2085: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 170 op/s
Oct 02 12:31:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/638648941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:05 compute-0 kernel: tapb5ad4ac4-0d (unregistering): left promiscuous mode
Oct 02 12:31:05 compute-0 NetworkManager[44987]: <info>  [1759408265.1935] device (tapb5ad4ac4-0d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:31:05 compute-0 ovn_controller[148183]: 2025-10-02T12:31:05Z|00513|binding|INFO|Releasing lport b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 from this chassis (sb_readonly=0)
Oct 02 12:31:05 compute-0 ovn_controller[148183]: 2025-10-02T12:31:05Z|00514|binding|INFO|Setting lport b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 down in Southbound
Oct 02 12:31:05 compute-0 nova_compute[257802]: 2025-10-02 12:31:05.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:05 compute-0 ovn_controller[148183]: 2025-10-02T12:31:05Z|00515|binding|INFO|Removing iface tapb5ad4ac4-0d ovn-installed in OVS
Oct 02 12:31:05 compute-0 nova_compute[257802]: 2025-10-02 12:31:05.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:05 compute-0 nova_compute[257802]: 2025-10-02 12:31:05.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:05.269 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:6e:1d 10.100.0.13'], port_security=['fa:16:3e:99:6e:1d 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6d9cbb75-985a-47af-9a91-f4a885d1b59a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27a1729bf10548219b90df46839849f5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '19f6d4f0-1655-4062-a124-10140844bfae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f7e0b23-d51b-4498-9dd8-e3096f69c99c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=b5ad4ac4-0d9f-487b-8448-19fdf04c9f36) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:31:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:05.272 158261 INFO neutron.agent.ovn.metadata.agent [-] Port b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 in datapath 247d774d-0cc8-4ef2-a9b8-c756adae0874 unbound from our chassis
Oct 02 12:31:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:05.276 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 247d774d-0cc8-4ef2-a9b8-c756adae0874, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:31:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:05.277 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[83d11477-0b4e-43c6-844e-c9241fc25b93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:05.278 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 namespace which is not needed anymore
Oct 02 12:31:05 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000075.scope: Deactivated successfully.
Oct 02 12:31:05 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000075.scope: Consumed 14.204s CPU time.
Oct 02 12:31:05 compute-0 systemd-machined[211836]: Machine qemu-59-instance-00000075 terminated.
Oct 02 12:31:05 compute-0 sudo[331955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:05 compute-0 sudo[331955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:05 compute-0 sudo[331955]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:05 compute-0 nova_compute[257802]: 2025-10-02 12:31:05.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:05 compute-0 nova_compute[257802]: 2025-10-02 12:31:05.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:05 compute-0 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[331683]: [NOTICE]   (331687) : haproxy version is 2.8.14-c23fe91
Oct 02 12:31:05 compute-0 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[331683]: [NOTICE]   (331687) : path to executable is /usr/sbin/haproxy
Oct 02 12:31:05 compute-0 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[331683]: [ALERT]    (331687) : Current worker (331689) exited with code 143 (Terminated)
Oct 02 12:31:05 compute-0 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[331683]: [WARNING]  (331687) : All workers exited. Exiting... (0)
Oct 02 12:31:05 compute-0 systemd[1]: libpod-dfe7bc7571fa0597ca54b1e1e47463a4fbcd8233150d506cd87d2ab3612506c8.scope: Deactivated successfully.
Oct 02 12:31:05 compute-0 podman[331987]: 2025-10-02 12:31:05.456423173 +0000 UTC m=+0.060028207 container died dfe7bc7571fa0597ca54b1e1e47463a4fbcd8233150d506cd87d2ab3612506c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 12:31:05 compute-0 sudo[331995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:05 compute-0 sudo[331995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:05 compute-0 sudo[331995]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dfe7bc7571fa0597ca54b1e1e47463a4fbcd8233150d506cd87d2ab3612506c8-userdata-shm.mount: Deactivated successfully.
Oct 02 12:31:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-561072f661b9396786cf147e4f5d39a64b425c8ec22fc27cbafa974a2db7532a-merged.mount: Deactivated successfully.
Oct 02 12:31:05 compute-0 podman[331987]: 2025-10-02 12:31:05.50107716 +0000 UTC m=+0.104682194 container cleanup dfe7bc7571fa0597ca54b1e1e47463a4fbcd8233150d506cd87d2ab3612506c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:31:05 compute-0 systemd[1]: libpod-conmon-dfe7bc7571fa0597ca54b1e1e47463a4fbcd8233150d506cd87d2ab3612506c8.scope: Deactivated successfully.
Oct 02 12:31:05 compute-0 podman[332052]: 2025-10-02 12:31:05.565287979 +0000 UTC m=+0.041013590 container remove dfe7bc7571fa0597ca54b1e1e47463a4fbcd8233150d506cd87d2ab3612506c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:31:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:05.573 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7dedd289-7fb2-4adf-b9ea-3c8369b38426]: (4, ('Thu Oct  2 12:31:05 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 (dfe7bc7571fa0597ca54b1e1e47463a4fbcd8233150d506cd87d2ab3612506c8)\ndfe7bc7571fa0597ca54b1e1e47463a4fbcd8233150d506cd87d2ab3612506c8\nThu Oct  2 12:31:05 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 (dfe7bc7571fa0597ca54b1e1e47463a4fbcd8233150d506cd87d2ab3612506c8)\ndfe7bc7571fa0597ca54b1e1e47463a4fbcd8233150d506cd87d2ab3612506c8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:05.575 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[83f6b644-6e44-43ff-a30d-4b272c4265d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:05.576 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap247d774d-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:05 compute-0 nova_compute[257802]: 2025-10-02 12:31:05.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:05 compute-0 kernel: tap247d774d-00: left promiscuous mode
Oct 02 12:31:05 compute-0 nova_compute[257802]: 2025-10-02 12:31:05.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:05.597 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[27d01e55-ad5d-4f7f-b764-17ed5713f308]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:05.625 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1bd9c93a-b7e3-4ca7-b23b-d62a31cc9343]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:05.626 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[da58a879-e07a-4d11-b8a2-8dd52fc5cf19]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2086: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.9 MiB/s wr, 182 op/s
Oct 02 12:31:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:05.640 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[919a89a8-8bc2-422d-81b1-d5207295e7bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630671, 'reachable_time': 35301, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332070, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:05.642 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:31:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:05.642 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[0b32744d-10b9-4fa3-aa54-911644173435]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:05 compute-0 systemd[1]: run-netns-ovnmeta\x2d247d774d\x2d0cc8\x2d4ef2\x2da9b8\x2dc756adae0874.mount: Deactivated successfully.
Oct 02 12:31:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:05.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:05 compute-0 nova_compute[257802]: 2025-10-02 12:31:05.901 2 INFO nova.virt.libvirt.driver [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Instance shutdown successfully after 13 seconds.
Oct 02 12:31:05 compute-0 nova_compute[257802]: 2025-10-02 12:31:05.907 2 INFO nova.virt.libvirt.driver [-] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Instance destroyed successfully.
Oct 02 12:31:05 compute-0 nova_compute[257802]: 2025-10-02 12:31:05.908 2 DEBUG nova.virt.libvirt.vif [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:30:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-245965768',display_name='tempest-ServerDiskConfigTestJSON-server-245965768',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-245965768',id=117,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:30:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='27a1729bf10548219b90df46839849f5',ramdisk_id='',reservation_id='r-bp44ih1j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video
_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-1123059068',owner_user_name='tempest-ServerDiskConfigTestJSON-1123059068-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:30:48Z,user_data=None,user_id='4a89b71e2513413e922ee6d5d06362b1',uuid=6d9cbb75-985a-47af-9a91-f4a885d1b59a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "address": "fa:16:3e:99:6e:1d", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "vif_mac": "fa:16:3e:99:6e:1d"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5ad4ac4-0d", "ovs_interfaceid": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:31:05 compute-0 nova_compute[257802]: 2025-10-02 12:31:05.909 2 DEBUG nova.network.os_vif_util [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converting VIF {"id": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "address": "fa:16:3e:99:6e:1d", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "vif_mac": "fa:16:3e:99:6e:1d"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5ad4ac4-0d", "ovs_interfaceid": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:31:05 compute-0 nova_compute[257802]: 2025-10-02 12:31:05.910 2 DEBUG nova.network.os_vif_util [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:99:6e:1d,bridge_name='br-int',has_traffic_filtering=True,id=b5ad4ac4-0d9f-487b-8448-19fdf04c9f36,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5ad4ac4-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:31:05 compute-0 nova_compute[257802]: 2025-10-02 12:31:05.911 2 DEBUG os_vif [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:6e:1d,bridge_name='br-int',has_traffic_filtering=True,id=b5ad4ac4-0d9f-487b-8448-19fdf04c9f36,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5ad4ac4-0d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:31:05 compute-0 nova_compute[257802]: 2025-10-02 12:31:05.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:05 compute-0 nova_compute[257802]: 2025-10-02 12:31:05.913 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5ad4ac4-0d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:05 compute-0 nova_compute[257802]: 2025-10-02 12:31:05.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:05 compute-0 nova_compute[257802]: 2025-10-02 12:31:05.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:05 compute-0 nova_compute[257802]: 2025-10-02 12:31:05.919 2 INFO os_vif [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:6e:1d,bridge_name='br-int',has_traffic_filtering=True,id=b5ad4ac4-0d9f-487b-8448-19fdf04c9f36,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5ad4ac4-0d')
Oct 02 12:31:05 compute-0 nova_compute[257802]: 2025-10-02 12:31:05.923 2 DEBUG nova.virt.libvirt.driver [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:31:05 compute-0 nova_compute[257802]: 2025-10-02 12:31:05.923 2 DEBUG nova.virt.libvirt.driver [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:31:06 compute-0 nova_compute[257802]: 2025-10-02 12:31:06.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:31:06 compute-0 nova_compute[257802]: 2025-10-02 12:31:06.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:31:06 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/52124419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:06 compute-0 nova_compute[257802]: 2025-10-02 12:31:06.188 2 DEBUG neutronclient.v2_0.client [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Oct 02 12:31:06 compute-0 nova_compute[257802]: 2025-10-02 12:31:06.191 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:31:06 compute-0 nova_compute[257802]: 2025-10-02 12:31:06.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:06 compute-0 nova_compute[257802]: 2025-10-02 12:31:06.471 2 DEBUG oslo_concurrency.lockutils [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:06 compute-0 nova_compute[257802]: 2025-10-02 12:31:06.472 2 DEBUG oslo_concurrency.lockutils [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:06 compute-0 nova_compute[257802]: 2025-10-02 12:31:06.472 2 DEBUG oslo_concurrency.lockutils [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:06 compute-0 nova_compute[257802]: 2025-10-02 12:31:06.726 2 DEBUG nova.compute.manager [req-6ed46438-345f-42bd-aa5b-52011e59596f req-1574900d-5bf6-4888-bc3c-a9f657f3ecb8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Received event network-vif-unplugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:06 compute-0 nova_compute[257802]: 2025-10-02 12:31:06.727 2 DEBUG oslo_concurrency.lockutils [req-6ed46438-345f-42bd-aa5b-52011e59596f req-1574900d-5bf6-4888-bc3c-a9f657f3ecb8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:06 compute-0 nova_compute[257802]: 2025-10-02 12:31:06.727 2 DEBUG oslo_concurrency.lockutils [req-6ed46438-345f-42bd-aa5b-52011e59596f req-1574900d-5bf6-4888-bc3c-a9f657f3ecb8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:06 compute-0 nova_compute[257802]: 2025-10-02 12:31:06.727 2 DEBUG oslo_concurrency.lockutils [req-6ed46438-345f-42bd-aa5b-52011e59596f req-1574900d-5bf6-4888-bc3c-a9f657f3ecb8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:06 compute-0 nova_compute[257802]: 2025-10-02 12:31:06.727 2 DEBUG nova.compute.manager [req-6ed46438-345f-42bd-aa5b-52011e59596f req-1574900d-5bf6-4888-bc3c-a9f657f3ecb8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] No waiting events found dispatching network-vif-unplugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:31:06 compute-0 nova_compute[257802]: 2025-10-02 12:31:06.727 2 WARNING nova.compute.manager [req-6ed46438-345f-42bd-aa5b-52011e59596f req-1574900d-5bf6-4888-bc3c-a9f657f3ecb8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Received unexpected event network-vif-unplugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 for instance with vm_state active and task_state resize_migrated.
Oct 02 12:31:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:31:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:07.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:31:07 compute-0 ceph-mon[73607]: pgmap v2086: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.9 MiB/s wr, 182 op/s
Oct 02 12:31:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2087: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 105 KiB/s wr, 170 op/s
Oct 02 12:31:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:31:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:07.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:31:08 compute-0 nova_compute[257802]: 2025-10-02 12:31:08.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:08 compute-0 nova_compute[257802]: 2025-10-02 12:31:08.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:31:08 compute-0 nova_compute[257802]: 2025-10-02 12:31:08.163 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:08 compute-0 nova_compute[257802]: 2025-10-02 12:31:08.163 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:08 compute-0 nova_compute[257802]: 2025-10-02 12:31:08.163 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:08 compute-0 nova_compute[257802]: 2025-10-02 12:31:08.164 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:31:08 compute-0 nova_compute[257802]: 2025-10-02 12:31:08.164 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Oct 02 12:31:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Oct 02 12:31:08 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Oct 02 12:31:08 compute-0 nova_compute[257802]: 2025-10-02 12:31:08.379 2 DEBUG nova.compute.manager [req-bb6e9d43-ba09-4257-905d-5f4f2f91b858 req-94325670-4aec-411c-957c-05de689f8206 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Received event network-changed-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:08 compute-0 nova_compute[257802]: 2025-10-02 12:31:08.379 2 DEBUG nova.compute.manager [req-bb6e9d43-ba09-4257-905d-5f4f2f91b858 req-94325670-4aec-411c-957c-05de689f8206 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Refreshing instance network info cache due to event network-changed-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:31:08 compute-0 nova_compute[257802]: 2025-10-02 12:31:08.379 2 DEBUG oslo_concurrency.lockutils [req-bb6e9d43-ba09-4257-905d-5f4f2f91b858 req-94325670-4aec-411c-957c-05de689f8206 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-6d9cbb75-985a-47af-9a91-f4a885d1b59a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:31:08 compute-0 nova_compute[257802]: 2025-10-02 12:31:08.380 2 DEBUG oslo_concurrency.lockutils [req-bb6e9d43-ba09-4257-905d-5f4f2f91b858 req-94325670-4aec-411c-957c-05de689f8206 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-6d9cbb75-985a-47af-9a91-f4a885d1b59a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:31:08 compute-0 nova_compute[257802]: 2025-10-02 12:31:08.380 2 DEBUG nova.network.neutron [req-bb6e9d43-ba09-4257-905d-5f4f2f91b858 req-94325670-4aec-411c-957c-05de689f8206 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Refreshing network info cache for port b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:31:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:31:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4286495006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:08 compute-0 nova_compute[257802]: 2025-10-02 12:31:08.627 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:31:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:09.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:31:09 compute-0 nova_compute[257802]: 2025-10-02 12:31:09.061 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:31:09 compute-0 nova_compute[257802]: 2025-10-02 12:31:09.062 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:31:09 compute-0 nova_compute[257802]: 2025-10-02 12:31:09.064 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:31:09 compute-0 nova_compute[257802]: 2025-10-02 12:31:09.064 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:31:09 compute-0 nova_compute[257802]: 2025-10-02 12:31:09.207 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:31:09 compute-0 nova_compute[257802]: 2025-10-02 12:31:09.208 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4207MB free_disk=20.73931884765625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:31:09 compute-0 nova_compute[257802]: 2025-10-02 12:31:09.208 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:09 compute-0 nova_compute[257802]: 2025-10-02 12:31:09.208 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:09 compute-0 ceph-mon[73607]: pgmap v2087: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 105 KiB/s wr, 170 op/s
Oct 02 12:31:09 compute-0 ceph-mon[73607]: osdmap e313: 3 total, 3 up, 3 in
Oct 02 12:31:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4286495006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:09 compute-0 nova_compute[257802]: 2025-10-02 12:31:09.364 2 DEBUG nova.compute.manager [req-fb1487cc-7045-4f03-88f0-df1963d48592 req-5629c79c-4bb1-4592-93ad-d4b08edc83fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Received event network-vif-plugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:09 compute-0 nova_compute[257802]: 2025-10-02 12:31:09.364 2 DEBUG oslo_concurrency.lockutils [req-fb1487cc-7045-4f03-88f0-df1963d48592 req-5629c79c-4bb1-4592-93ad-d4b08edc83fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:09 compute-0 nova_compute[257802]: 2025-10-02 12:31:09.365 2 DEBUG oslo_concurrency.lockutils [req-fb1487cc-7045-4f03-88f0-df1963d48592 req-5629c79c-4bb1-4592-93ad-d4b08edc83fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:09 compute-0 nova_compute[257802]: 2025-10-02 12:31:09.365 2 DEBUG oslo_concurrency.lockutils [req-fb1487cc-7045-4f03-88f0-df1963d48592 req-5629c79c-4bb1-4592-93ad-d4b08edc83fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:09 compute-0 nova_compute[257802]: 2025-10-02 12:31:09.365 2 DEBUG nova.compute.manager [req-fb1487cc-7045-4f03-88f0-df1963d48592 req-5629c79c-4bb1-4592-93ad-d4b08edc83fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] No waiting events found dispatching network-vif-plugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:31:09 compute-0 nova_compute[257802]: 2025-10-02 12:31:09.365 2 WARNING nova.compute.manager [req-fb1487cc-7045-4f03-88f0-df1963d48592 req-5629c79c-4bb1-4592-93ad-d4b08edc83fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Received unexpected event network-vif-plugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 for instance with vm_state active and task_state resize_migrated.
Oct 02 12:31:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2089: 305 pgs: 2 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 292 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 54 KiB/s wr, 196 op/s
Oct 02 12:31:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:09.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:09 compute-0 nova_compute[257802]: 2025-10-02 12:31:09.952 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Migration for instance 6d9cbb75-985a-47af-9a91-f4a885d1b59a refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Oct 02 12:31:10 compute-0 nova_compute[257802]: 2025-10-02 12:31:10.075 2 INFO nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Updating resource usage from migration 649cb3df-1832-4eb9-bab2-7623fb261c1e
Oct 02 12:31:10 compute-0 nova_compute[257802]: 2025-10-02 12:31:10.075 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Starting to track outgoing migration 649cb3df-1832-4eb9-bab2-7623fb261c1e with flavor cef129e5-cce4-4465-9674-03d3559e8a14 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444
Oct 02 12:31:10 compute-0 nova_compute[257802]: 2025-10-02 12:31:10.122 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 24caf505-35fd-40c1-9bcc-1f83580b142b actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:31:10 compute-0 nova_compute[257802]: 2025-10-02 12:31:10.122 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Migration 649cb3df-1832-4eb9-bab2-7623fb261c1e is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Oct 02 12:31:10 compute-0 nova_compute[257802]: 2025-10-02 12:31:10.123 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:31:10 compute-0 nova_compute[257802]: 2025-10-02 12:31:10.123 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:31:10 compute-0 nova_compute[257802]: 2025-10-02 12:31:10.201 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:10 compute-0 ceph-mon[73607]: pgmap v2089: 305 pgs: 2 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 292 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 54 KiB/s wr, 196 op/s
Oct 02 12:31:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3832861014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:31:10 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1278147138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:10 compute-0 nova_compute[257802]: 2025-10-02 12:31:10.632 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:10 compute-0 nova_compute[257802]: 2025-10-02 12:31:10.639 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:31:10 compute-0 nova_compute[257802]: 2025-10-02 12:31:10.692 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:31:10 compute-0 nova_compute[257802]: 2025-10-02 12:31:10.744 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:31:10 compute-0 nova_compute[257802]: 2025-10-02 12:31:10.745 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.537s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:10 compute-0 nova_compute[257802]: 2025-10-02 12:31:10.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:11.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1278147138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2090: 305 pgs: 2 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 292 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 54 KiB/s wr, 196 op/s
Oct 02 12:31:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:11.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:11 compute-0 nova_compute[257802]: 2025-10-02 12:31:11.709 2 DEBUG nova.network.neutron [req-bb6e9d43-ba09-4257-905d-5f4f2f91b858 req-94325670-4aec-411c-957c-05de689f8206 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Updated VIF entry in instance network info cache for port b5ad4ac4-0d9f-487b-8448-19fdf04c9f36. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:31:11 compute-0 nova_compute[257802]: 2025-10-02 12:31:11.710 2 DEBUG nova.network.neutron [req-bb6e9d43-ba09-4257-905d-5f4f2f91b858 req-94325670-4aec-411c-957c-05de689f8206 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Updating instance_info_cache with network_info: [{"id": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "address": "fa:16:3e:99:6e:1d", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5ad4ac4-0d", "ovs_interfaceid": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:31:11 compute-0 nova_compute[257802]: 2025-10-02 12:31:11.780 2 DEBUG oslo_concurrency.lockutils [req-bb6e9d43-ba09-4257-905d-5f4f2f91b858 req-94325670-4aec-411c-957c-05de689f8206 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-6d9cbb75-985a-47af-9a91-f4a885d1b59a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:31:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Oct 02 12:31:12 compute-0 ceph-mon[73607]: pgmap v2090: 305 pgs: 2 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 292 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 54 KiB/s wr, 196 op/s
Oct 02 12:31:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Oct 02 12:31:12 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Oct 02 12:31:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:31:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:31:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:31:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:31:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:31:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:31:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:13.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:13 compute-0 nova_compute[257802]: 2025-10-02 12:31:13.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 38 KiB/s wr, 159 op/s
Oct 02 12:31:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:13.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:13 compute-0 ceph-mon[73607]: osdmap e314: 3 total, 3 up, 3 in
Oct 02 12:31:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2343410255' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:13 compute-0 podman[332121]: 2025-10-02 12:31:13.918057169 +0000 UTC m=+0.056063319 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:31:13 compute-0 podman[332122]: 2025-10-02 12:31:13.930888314 +0000 UTC m=+0.068107824 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 12:31:13 compute-0 podman[332123]: 2025-10-02 12:31:13.930861104 +0000 UTC m=+0.064068616 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:31:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:31:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:15.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:31:15 compute-0 nova_compute[257802]: 2025-10-02 12:31:15.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2093: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 23 KiB/s wr, 147 op/s
Oct 02 12:31:15 compute-0 ceph-mon[73607]: pgmap v2092: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 38 KiB/s wr, 159 op/s
Oct 02 12:31:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3237724281' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:15.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:15 compute-0 nova_compute[257802]: 2025-10-02 12:31:15.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:16 compute-0 ceph-mon[73607]: pgmap v2093: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 23 KiB/s wr, 147 op/s
Oct 02 12:31:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:31:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:17.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:31:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2094: 305 pgs: 305 active+clean; 648 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 786 KiB/s wr, 154 op/s
Oct 02 12:31:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:17.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:17 compute-0 nova_compute[257802]: 2025-10-02 12:31:17.746 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:31:17 compute-0 podman[332182]: 2025-10-02 12:31:17.940021824 +0000 UTC m=+0.080148131 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:31:18 compute-0 nova_compute[257802]: 2025-10-02 12:31:18.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Oct 02 12:31:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Oct 02 12:31:18 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Oct 02 12:31:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:19.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:19 compute-0 ceph-mon[73607]: pgmap v2094: 305 pgs: 305 active+clean; 648 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 786 KiB/s wr, 154 op/s
Oct 02 12:31:19 compute-0 ceph-mon[73607]: osdmap e315: 3 total, 3 up, 3 in
Oct 02 12:31:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2096: 305 pgs: 305 active+clean; 666 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.1 MiB/s wr, 261 op/s
Oct 02 12:31:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:19.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:20 compute-0 nova_compute[257802]: 2025-10-02 12:31:20.455 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408265.4544487, 6d9cbb75-985a-47af-9a91-f4a885d1b59a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:31:20 compute-0 nova_compute[257802]: 2025-10-02 12:31:20.456 2 INFO nova.compute.manager [-] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] VM Stopped (Lifecycle Event)
Oct 02 12:31:20 compute-0 ceph-mon[73607]: pgmap v2096: 305 pgs: 305 active+clean; 666 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.1 MiB/s wr, 261 op/s
Oct 02 12:31:20 compute-0 nova_compute[257802]: 2025-10-02 12:31:20.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:21.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:21 compute-0 nova_compute[257802]: 2025-10-02 12:31:21.458 2 DEBUG nova.compute.manager [req-ce4b2c8a-fa51-41b4-8406-de7e823e1ef9 req-ed8a1fd0-9326-40b9-a3e3-d22791b852f4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Received event network-vif-plugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:21 compute-0 nova_compute[257802]: 2025-10-02 12:31:21.458 2 DEBUG oslo_concurrency.lockutils [req-ce4b2c8a-fa51-41b4-8406-de7e823e1ef9 req-ed8a1fd0-9326-40b9-a3e3-d22791b852f4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:21 compute-0 nova_compute[257802]: 2025-10-02 12:31:21.459 2 DEBUG oslo_concurrency.lockutils [req-ce4b2c8a-fa51-41b4-8406-de7e823e1ef9 req-ed8a1fd0-9326-40b9-a3e3-d22791b852f4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:21 compute-0 nova_compute[257802]: 2025-10-02 12:31:21.459 2 DEBUG oslo_concurrency.lockutils [req-ce4b2c8a-fa51-41b4-8406-de7e823e1ef9 req-ed8a1fd0-9326-40b9-a3e3-d22791b852f4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:21 compute-0 nova_compute[257802]: 2025-10-02 12:31:21.459 2 DEBUG nova.compute.manager [req-ce4b2c8a-fa51-41b4-8406-de7e823e1ef9 req-ed8a1fd0-9326-40b9-a3e3-d22791b852f4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] No waiting events found dispatching network-vif-plugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:31:21 compute-0 nova_compute[257802]: 2025-10-02 12:31:21.459 2 WARNING nova.compute.manager [req-ce4b2c8a-fa51-41b4-8406-de7e823e1ef9 req-ed8a1fd0-9326-40b9-a3e3-d22791b852f4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Received unexpected event network-vif-plugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 for instance with vm_state active and task_state resize_finish.
Oct 02 12:31:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2097: 305 pgs: 305 active+clean; 666 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 200 op/s
Oct 02 12:31:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:21.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:22 compute-0 nova_compute[257802]: 2025-10-02 12:31:22.283 2 DEBUG nova.compute.manager [None req-c9450b06-2242-4bad-aa39-78408d4cf4ac - - - - - -] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:31:22 compute-0 nova_compute[257802]: 2025-10-02 12:31:22.286 2 DEBUG nova.compute.manager [None req-c9450b06-2242-4bad-aa39-78408d4cf4ac - - - - - -] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:31:22 compute-0 nova_compute[257802]: 2025-10-02 12:31:22.740 2 INFO nova.compute.manager [None req-c9450b06-2242-4bad-aa39-78408d4cf4ac - - - - - -] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-0.ctlplane.example.com
Oct 02 12:31:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:23.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:23 compute-0 nova_compute[257802]: 2025-10-02 12:31:23.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:23 compute-0 ceph-mon[73607]: pgmap v2097: 305 pgs: 305 active+clean; 666 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 200 op/s
Oct 02 12:31:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2098: 305 pgs: 305 active+clean; 675 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.5 MiB/s wr, 185 op/s
Oct 02 12:31:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:23.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:24 compute-0 nova_compute[257802]: 2025-10-02 12:31:24.149 2 DEBUG nova.compute.manager [req-058f90f4-9305-42d3-a502-f7261dc9285b req-4589c3ed-c7dc-4ca6-a08a-678eff5c06cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Received event network-vif-plugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:24 compute-0 nova_compute[257802]: 2025-10-02 12:31:24.150 2 DEBUG oslo_concurrency.lockutils [req-058f90f4-9305-42d3-a502-f7261dc9285b req-4589c3ed-c7dc-4ca6-a08a-678eff5c06cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:24 compute-0 nova_compute[257802]: 2025-10-02 12:31:24.150 2 DEBUG oslo_concurrency.lockutils [req-058f90f4-9305-42d3-a502-f7261dc9285b req-4589c3ed-c7dc-4ca6-a08a-678eff5c06cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:24 compute-0 nova_compute[257802]: 2025-10-02 12:31:24.150 2 DEBUG oslo_concurrency.lockutils [req-058f90f4-9305-42d3-a502-f7261dc9285b req-4589c3ed-c7dc-4ca6-a08a-678eff5c06cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:24 compute-0 nova_compute[257802]: 2025-10-02 12:31:24.151 2 DEBUG nova.compute.manager [req-058f90f4-9305-42d3-a502-f7261dc9285b req-4589c3ed-c7dc-4ca6-a08a-678eff5c06cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] No waiting events found dispatching network-vif-plugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:31:24 compute-0 nova_compute[257802]: 2025-10-02 12:31:24.151 2 WARNING nova.compute.manager [req-058f90f4-9305-42d3-a502-f7261dc9285b req-4589c3ed-c7dc-4ca6-a08a-678eff5c06cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Received unexpected event network-vif-plugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 for instance with vm_state resized and task_state None.
Oct 02 12:31:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:31:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:25.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:31:25 compute-0 ceph-mon[73607]: pgmap v2098: 305 pgs: 305 active+clean; 675 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.5 MiB/s wr, 185 op/s
Oct 02 12:31:25 compute-0 sudo[332212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:25 compute-0 sudo[332212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:25 compute-0 sudo[332212]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:25 compute-0 sudo[332237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:25 compute-0 sudo[332237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:25 compute-0 sudo[332237]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 305 active+clean; 678 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.6 MiB/s wr, 175 op/s
Oct 02 12:31:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:25.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:25 compute-0 nova_compute[257802]: 2025-10-02 12:31:25.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:26 compute-0 ceph-mon[73607]: pgmap v2099: 305 pgs: 305 active+clean; 678 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.6 MiB/s wr, 175 op/s
Oct 02 12:31:26 compute-0 sudo[332263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:26 compute-0 sudo[332263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:26 compute-0 sudo[332263]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:26 compute-0 sudo[332288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:31:26 compute-0 sudo[332288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:26 compute-0 sudo[332288]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:26 compute-0 sudo[332313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:26 compute-0 sudo[332313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:26 compute-0 sudo[332313]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:26 compute-0 sudo[332338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:31:26 compute-0 sudo[332338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:26.949 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:26.950 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:26.951 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:27.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:27 compute-0 sudo[332338]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:31:27 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:31:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:31:27 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:31:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:31:27 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:31:27 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 91174719-8dc3-4fed-b759-97f0668dc028 does not exist
Oct 02 12:31:27 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 917e98ba-f11e-4fb7-a196-e483f1d42f06 does not exist
Oct 02 12:31:27 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev efc49bc1-e7ac-4caf-9c1c-5ed3245e0458 does not exist
Oct 02 12:31:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:31:27 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:31:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:31:27 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:31:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:31:27 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:31:27 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:31:27 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:31:27 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:31:27 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:31:27 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:31:27 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:31:27 compute-0 sudo[332393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:27 compute-0 sudo[332393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:27 compute-0 sudo[332393]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2100: 305 pgs: 305 active+clean; 678 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.9 MiB/s wr, 150 op/s
Oct 02 12:31:27 compute-0 sudo[332418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:31:27 compute-0 sudo[332418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:27 compute-0 sudo[332418]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:31:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:27.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:31:27 compute-0 sudo[332443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:27 compute-0 sudo[332443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:27 compute-0 sudo[332443]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:27 compute-0 sudo[332468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:31:27 compute-0 sudo[332468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:28 compute-0 nova_compute[257802]: 2025-10-02 12:31:28.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:28 compute-0 nova_compute[257802]: 2025-10-02 12:31:28.095 2 DEBUG oslo_concurrency.lockutils [None req-b996634b-09bf-44a2-b513-7c39795db4a2 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:28 compute-0 nova_compute[257802]: 2025-10-02 12:31:28.095 2 DEBUG oslo_concurrency.lockutils [None req-b996634b-09bf-44a2-b513-7c39795db4a2 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:28 compute-0 nova_compute[257802]: 2025-10-02 12:31:28.096 2 DEBUG nova.compute.manager [None req-b996634b-09bf-44a2-b513-7c39795db4a2 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Going to confirm migration 18 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Oct 02 12:31:28 compute-0 podman[332532]: 2025-10-02 12:31:28.125061399 +0000 UTC m=+0.039753128 container create 37bf81140193a0975587215b2abfd3e5bb31ef03ed37bc86cf4393caf18544cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dubinsky, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:31:28 compute-0 systemd[1]: Started libpod-conmon-37bf81140193a0975587215b2abfd3e5bb31ef03ed37bc86cf4393caf18544cf.scope.
Oct 02 12:31:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:31:28 compute-0 podman[332532]: 2025-10-02 12:31:28.104970036 +0000 UTC m=+0.019661795 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:31:28 compute-0 podman[332532]: 2025-10-02 12:31:28.207111737 +0000 UTC m=+0.121803486 container init 37bf81140193a0975587215b2abfd3e5bb31ef03ed37bc86cf4393caf18544cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dubinsky, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 12:31:28 compute-0 podman[332532]: 2025-10-02 12:31:28.2133701 +0000 UTC m=+0.128061829 container start 37bf81140193a0975587215b2abfd3e5bb31ef03ed37bc86cf4393caf18544cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 12:31:28 compute-0 podman[332532]: 2025-10-02 12:31:28.21701805 +0000 UTC m=+0.131709779 container attach 37bf81140193a0975587215b2abfd3e5bb31ef03ed37bc86cf4393caf18544cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dubinsky, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 12:31:28 compute-0 admiring_dubinsky[332548]: 167 167
Oct 02 12:31:28 compute-0 systemd[1]: libpod-37bf81140193a0975587215b2abfd3e5bb31ef03ed37bc86cf4393caf18544cf.scope: Deactivated successfully.
Oct 02 12:31:28 compute-0 podman[332532]: 2025-10-02 12:31:28.222022633 +0000 UTC m=+0.136714362 container died 37bf81140193a0975587215b2abfd3e5bb31ef03ed37bc86cf4393caf18544cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dubinsky, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:31:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a6716085e9f50f5f13d883ac5ab2da10bff595d7a74c4c78fff89c0ad9e4824-merged.mount: Deactivated successfully.
Oct 02 12:31:28 compute-0 podman[332532]: 2025-10-02 12:31:28.268102116 +0000 UTC m=+0.182793845 container remove 37bf81140193a0975587215b2abfd3e5bb31ef03ed37bc86cf4393caf18544cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dubinsky, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:31:28 compute-0 systemd[1]: libpod-conmon-37bf81140193a0975587215b2abfd3e5bb31ef03ed37bc86cf4393caf18544cf.scope: Deactivated successfully.
Oct 02 12:31:28 compute-0 podman[332572]: 2025-10-02 12:31:28.4744825 +0000 UTC m=+0.045734995 container create 929daa3e152b628862fb8f2d277c8aa8174627454b7a85e748d3aaee083a614b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cerf, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:31:28 compute-0 systemd[1]: Started libpod-conmon-929daa3e152b628862fb8f2d277c8aa8174627454b7a85e748d3aaee083a614b.scope.
Oct 02 12:31:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:31:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab54f51177527498133a214bcaa68c2c15f350ac02f3f53f73885a871076eec4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:31:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab54f51177527498133a214bcaa68c2c15f350ac02f3f53f73885a871076eec4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:31:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab54f51177527498133a214bcaa68c2c15f350ac02f3f53f73885a871076eec4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:31:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab54f51177527498133a214bcaa68c2c15f350ac02f3f53f73885a871076eec4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:31:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab54f51177527498133a214bcaa68c2c15f350ac02f3f53f73885a871076eec4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:31:28 compute-0 podman[332572]: 2025-10-02 12:31:28.449817164 +0000 UTC m=+0.021069679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:31:28 compute-0 podman[332572]: 2025-10-02 12:31:28.554428856 +0000 UTC m=+0.125681371 container init 929daa3e152b628862fb8f2d277c8aa8174627454b7a85e748d3aaee083a614b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Oct 02 12:31:28 compute-0 podman[332572]: 2025-10-02 12:31:28.560400472 +0000 UTC m=+0.131652967 container start 929daa3e152b628862fb8f2d277c8aa8174627454b7a85e748d3aaee083a614b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cerf, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 12:31:28 compute-0 podman[332572]: 2025-10-02 12:31:28.56393723 +0000 UTC m=+0.135189725 container attach 929daa3e152b628862fb8f2d277c8aa8174627454b7a85e748d3aaee083a614b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:31:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:28 compute-0 ceph-mon[73607]: pgmap v2100: 305 pgs: 305 active+clean; 678 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.9 MiB/s wr, 150 op/s
Oct 02 12:31:28 compute-0 nova_compute[257802]: 2025-10-02 12:31:28.913 2 DEBUG neutronclient.v2_0.client [None req-b996634b-09bf-44a2-b513-7c39795db4a2 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Oct 02 12:31:28 compute-0 nova_compute[257802]: 2025-10-02 12:31:28.915 2 DEBUG oslo_concurrency.lockutils [None req-b996634b-09bf-44a2-b513-7c39795db4a2 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "refresh_cache-6d9cbb75-985a-47af-9a91-f4a885d1b59a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:31:28 compute-0 nova_compute[257802]: 2025-10-02 12:31:28.915 2 DEBUG oslo_concurrency.lockutils [None req-b996634b-09bf-44a2-b513-7c39795db4a2 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquired lock "refresh_cache-6d9cbb75-985a-47af-9a91-f4a885d1b59a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:31:28 compute-0 nova_compute[257802]: 2025-10-02 12:31:28.915 2 DEBUG nova.network.neutron [None req-b996634b-09bf-44a2-b513-7c39795db4a2 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:31:28 compute-0 nova_compute[257802]: 2025-10-02 12:31:28.916 2 DEBUG nova.objects.instance [None req-b996634b-09bf-44a2-b513-7c39795db4a2 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'info_cache' on Instance uuid 6d9cbb75-985a-47af-9a91-f4a885d1b59a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:31:29 compute-0 nova_compute[257802]: 2025-10-02 12:31:29.040 2 DEBUG oslo_concurrency.lockutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Acquiring lock "06e8b42b-8275-4d3a-9155-14ffb6e7fb84" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:29 compute-0 nova_compute[257802]: 2025-10-02 12:31:29.040 2 DEBUG oslo_concurrency.lockutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Lock "06e8b42b-8275-4d3a-9155-14ffb6e7fb84" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:29.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:29 compute-0 nova_compute[257802]: 2025-10-02 12:31:29.327 2 DEBUG nova.compute.manager [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:31:29 compute-0 beautiful_cerf[332589]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:31:29 compute-0 beautiful_cerf[332589]: --> relative data size: 1.0
Oct 02 12:31:29 compute-0 beautiful_cerf[332589]: --> All data devices are unavailable
Oct 02 12:31:29 compute-0 systemd[1]: libpod-929daa3e152b628862fb8f2d277c8aa8174627454b7a85e748d3aaee083a614b.scope: Deactivated successfully.
Oct 02 12:31:29 compute-0 podman[332605]: 2025-10-02 12:31:29.413208209 +0000 UTC m=+0.022401292 container died 929daa3e152b628862fb8f2d277c8aa8174627454b7a85e748d3aaee083a614b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cerf, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:31:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab54f51177527498133a214bcaa68c2c15f350ac02f3f53f73885a871076eec4-merged.mount: Deactivated successfully.
Oct 02 12:31:29 compute-0 podman[332605]: 2025-10-02 12:31:29.483800925 +0000 UTC m=+0.092993988 container remove 929daa3e152b628862fb8f2d277c8aa8174627454b7a85e748d3aaee083a614b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:31:29 compute-0 systemd[1]: libpod-conmon-929daa3e152b628862fb8f2d277c8aa8174627454b7a85e748d3aaee083a614b.scope: Deactivated successfully.
Oct 02 12:31:29 compute-0 sudo[332468]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:29 compute-0 sudo[332620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:29 compute-0 sudo[332620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:29 compute-0 sudo[332620]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:29 compute-0 nova_compute[257802]: 2025-10-02 12:31:29.591 2 DEBUG oslo_concurrency.lockutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:29 compute-0 nova_compute[257802]: 2025-10-02 12:31:29.592 2 DEBUG oslo_concurrency.lockutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:29 compute-0 nova_compute[257802]: 2025-10-02 12:31:29.598 2 DEBUG nova.virt.hardware [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:31:29 compute-0 nova_compute[257802]: 2025-10-02 12:31:29.599 2 INFO nova.compute.claims [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:31:29 compute-0 sudo[332645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:31:29 compute-0 sudo[332645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:29 compute-0 sudo[332645]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2101: 305 pgs: 305 active+clean; 678 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 769 KiB/s wr, 95 op/s
Oct 02 12:31:29 compute-0 sudo[332670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:29 compute-0 sudo[332670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:29 compute-0 sudo[332670]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:31:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:29.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:31:29 compute-0 sudo[332695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:31:29 compute-0 sudo[332695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:29 compute-0 nova_compute[257802]: 2025-10-02 12:31:29.874 2 DEBUG nova.compute.manager [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Oct 02 12:31:30 compute-0 nova_compute[257802]: 2025-10-02 12:31:30.035 2 DEBUG oslo_concurrency.processutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:30 compute-0 podman[332758]: 2025-10-02 12:31:30.084994276 +0000 UTC m=+0.044978577 container create 991501f0795145a285c3fb6211dcbda930752a6bc0431ab3477ab39a301b1ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 12:31:30 compute-0 systemd[1]: Started libpod-conmon-991501f0795145a285c3fb6211dcbda930752a6bc0431ab3477ab39a301b1ba1.scope.
Oct 02 12:31:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:31:30 compute-0 podman[332758]: 2025-10-02 12:31:30.15918235 +0000 UTC m=+0.119166681 container init 991501f0795145a285c3fb6211dcbda930752a6bc0431ab3477ab39a301b1ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:31:30 compute-0 podman[332758]: 2025-10-02 12:31:30.064288616 +0000 UTC m=+0.024272927 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:31:30 compute-0 podman[332758]: 2025-10-02 12:31:30.169063253 +0000 UTC m=+0.129047554 container start 991501f0795145a285c3fb6211dcbda930752a6bc0431ab3477ab39a301b1ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 12:31:30 compute-0 pedantic_dubinsky[332776]: 167 167
Oct 02 12:31:30 compute-0 systemd[1]: libpod-991501f0795145a285c3fb6211dcbda930752a6bc0431ab3477ab39a301b1ba1.scope: Deactivated successfully.
Oct 02 12:31:30 compute-0 podman[332758]: 2025-10-02 12:31:30.183483597 +0000 UTC m=+0.143467908 container attach 991501f0795145a285c3fb6211dcbda930752a6bc0431ab3477ab39a301b1ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:31:30 compute-0 podman[332758]: 2025-10-02 12:31:30.183812376 +0000 UTC m=+0.143796677 container died 991501f0795145a285c3fb6211dcbda930752a6bc0431ab3477ab39a301b1ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_dubinsky, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:31:30 compute-0 nova_compute[257802]: 2025-10-02 12:31:30.205 2 DEBUG oslo_concurrency.lockutils [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1eddefb282f5b8413195d6a915edf48573c683e57b2cb543ec24d2a94e924d5-merged.mount: Deactivated successfully.
Oct 02 12:31:30 compute-0 podman[332758]: 2025-10-02 12:31:30.257997468 +0000 UTC m=+0.217981769 container remove 991501f0795145a285c3fb6211dcbda930752a6bc0431ab3477ab39a301b1ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:31:30 compute-0 systemd[1]: libpod-conmon-991501f0795145a285c3fb6211dcbda930752a6bc0431ab3477ab39a301b1ba1.scope: Deactivated successfully.
Oct 02 12:31:30 compute-0 podman[332821]: 2025-10-02 12:31:30.423164989 +0000 UTC m=+0.044298549 container create a9df67ab652fff69b2416725d8e6e7555a42bff6a9979734d689ba535e65f2d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_proskuriakova, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 12:31:30 compute-0 systemd[1]: Started libpod-conmon-a9df67ab652fff69b2416725d8e6e7555a42bff6a9979734d689ba535e65f2d2.scope.
Oct 02 12:31:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:31:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/958ea782006b5325488eb2da042ac3a6028517b68e37836a3ec629cfe02e0887/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:31:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/958ea782006b5325488eb2da042ac3a6028517b68e37836a3ec629cfe02e0887/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:31:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/958ea782006b5325488eb2da042ac3a6028517b68e37836a3ec629cfe02e0887/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:31:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/958ea782006b5325488eb2da042ac3a6028517b68e37836a3ec629cfe02e0887/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:31:30 compute-0 podman[332821]: 2025-10-02 12:31:30.40327238 +0000 UTC m=+0.024405970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:31:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:31:30 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3520578586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:30 compute-0 nova_compute[257802]: 2025-10-02 12:31:30.520 2 DEBUG oslo_concurrency.processutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:30 compute-0 nova_compute[257802]: 2025-10-02 12:31:30.525 2 DEBUG nova.compute.provider_tree [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:31:30 compute-0 podman[332821]: 2025-10-02 12:31:30.541821727 +0000 UTC m=+0.162955317 container init a9df67ab652fff69b2416725d8e6e7555a42bff6a9979734d689ba535e65f2d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_proskuriakova, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 12:31:30 compute-0 podman[332821]: 2025-10-02 12:31:30.547904047 +0000 UTC m=+0.169037617 container start a9df67ab652fff69b2416725d8e6e7555a42bff6a9979734d689ba535e65f2d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 12:31:30 compute-0 podman[332821]: 2025-10-02 12:31:30.596110542 +0000 UTC m=+0.217244132 container attach a9df67ab652fff69b2416725d8e6e7555a42bff6a9979734d689ba535e65f2d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_proskuriakova, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:31:30 compute-0 nova_compute[257802]: 2025-10-02 12:31:30.612 2 DEBUG nova.scheduler.client.report [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:31:30 compute-0 ceph-mon[73607]: pgmap v2101: 305 pgs: 305 active+clean; 678 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 769 KiB/s wr, 95 op/s
Oct 02 12:31:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3520578586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:30 compute-0 nova_compute[257802]: 2025-10-02 12:31:30.765 2 DEBUG oslo_concurrency.lockutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:30 compute-0 nova_compute[257802]: 2025-10-02 12:31:30.765 2 DEBUG nova.compute.manager [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:31:30 compute-0 nova_compute[257802]: 2025-10-02 12:31:30.767 2 DEBUG oslo_concurrency.lockutils [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:30 compute-0 nova_compute[257802]: 2025-10-02 12:31:30.799 2 DEBUG nova.network.neutron [None req-b996634b-09bf-44a2-b513-7c39795db4a2 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Updating instance_info_cache with network_info: [{"id": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "address": "fa:16:3e:99:6e:1d", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5ad4ac4-0d", "ovs_interfaceid": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:31:30 compute-0 nova_compute[257802]: 2025-10-02 12:31:30.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:31.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.102 2 DEBUG nova.objects.instance [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lazy-loading 'pci_requests' on Instance uuid c8b713f4-4f41-4153-928c-164f2ed108ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.215 2 DEBUG oslo_concurrency.lockutils [None req-b996634b-09bf-44a2-b513-7c39795db4a2 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Releasing lock "refresh_cache-6d9cbb75-985a-47af-9a91-f4a885d1b59a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.216 2 DEBUG nova.objects.instance [None req-b996634b-09bf-44a2-b513-7c39795db4a2 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'migration_context' on Instance uuid 6d9cbb75-985a-47af-9a91-f4a885d1b59a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.227 2 DEBUG nova.virt.hardware [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.228 2 INFO nova.compute.claims [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.228 2 DEBUG nova.objects.instance [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lazy-loading 'resources' on Instance uuid c8b713f4-4f41-4153-928c-164f2ed108ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.242 2 DEBUG nova.compute.manager [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.243 2 DEBUG nova.network.neutron [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.281 2 DEBUG nova.objects.instance [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lazy-loading 'pci_devices' on Instance uuid c8b713f4-4f41-4153-928c-164f2ed108ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.283 2 INFO nova.virt.libvirt.driver [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]: {
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]:     "1": [
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]:         {
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]:             "devices": [
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]:                 "/dev/loop3"
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]:             ],
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]:             "lv_name": "ceph_lv0",
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]:             "lv_size": "7511998464",
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]:             "name": "ceph_lv0",
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]:             "tags": {
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]:                 "ceph.cluster_name": "ceph",
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]:                 "ceph.crush_device_class": "",
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]:                 "ceph.encrypted": "0",
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]:                 "ceph.osd_id": "1",
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]:                 "ceph.type": "block",
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]:                 "ceph.vdo": "0"
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]:             },
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]:             "type": "block",
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]:             "vg_name": "ceph_vg0"
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]:         }
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]:     ]
Oct 02 12:31:31 compute-0 unruffled_proskuriakova[332838]: }
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.329 2 DEBUG nova.compute.manager [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.332 2 DEBUG nova.storage.rbd_utils [None req-b996634b-09bf-44a2-b513-7c39795db4a2 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] removing snapshot(nova-resize) on rbd image(6d9cbb75-985a-47af-9a91-f4a885d1b59a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:31:31 compute-0 systemd[1]: libpod-a9df67ab652fff69b2416725d8e6e7555a42bff6a9979734d689ba535e65f2d2.scope: Deactivated successfully.
Oct 02 12:31:31 compute-0 podman[332886]: 2025-10-02 12:31:31.367033306 +0000 UTC m=+0.022760802 container died a9df67ab652fff69b2416725d8e6e7555a42bff6a9979734d689ba535e65f2d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_proskuriakova, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.370 2 INFO nova.compute.resource_tracker [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Updating resource usage from migration 5b161cce-b4d4-4d69-a865-c0e36b836911
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.371 2 DEBUG nova.compute.resource_tracker [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Starting to track incoming migration 5b161cce-b4d4-4d69-a865-c0e36b836911 with flavor eb3a53f1-304b-4cb0-acc3-abffce0fb181 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Oct 02 12:31:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-958ea782006b5325488eb2da042ac3a6028517b68e37836a3ec629cfe02e0887-merged.mount: Deactivated successfully.
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.495 2 DEBUG oslo_concurrency.processutils [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.526 2 DEBUG nova.compute.manager [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.528 2 DEBUG nova.virt.libvirt.driver [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.529 2 INFO nova.virt.libvirt.driver [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Creating image(s)
Oct 02 12:31:31 compute-0 podman[332886]: 2025-10-02 12:31:31.545425051 +0000 UTC m=+0.201152477 container remove a9df67ab652fff69b2416725d8e6e7555a42bff6a9979734d689ba535e65f2d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_proskuriakova, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 12:31:31 compute-0 systemd[1]: libpod-conmon-a9df67ab652fff69b2416725d8e6e7555a42bff6a9979734d689ba535e65f2d2.scope: Deactivated successfully.
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.561 2 DEBUG nova.storage.rbd_utils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] rbd image 06e8b42b-8275-4d3a-9155-14ffb6e7fb84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:31:31 compute-0 sudo[332695]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.595 2 DEBUG nova.storage.rbd_utils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] rbd image 06e8b42b-8275-4d3a-9155-14ffb6e7fb84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.629 2 DEBUG nova.storage.rbd_utils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] rbd image 06e8b42b-8275-4d3a-9155-14ffb6e7fb84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.640 2 DEBUG oslo_concurrency.processutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2102: 305 pgs: 305 active+clean; 678 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 757 KiB/s rd, 122 KiB/s wr, 59 op/s
Oct 02 12:31:31 compute-0 sudo[332938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:31 compute-0 sudo[332938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:31 compute-0 sudo[332938]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.670 2 DEBUG nova.policy [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fc18358f9af64753bc8892379b9244c6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dda4e7689e7440639479cd7b0e4c17df', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.708 2 DEBUG oslo_concurrency.processutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.709 2 DEBUG oslo_concurrency.lockutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.710 2 DEBUG oslo_concurrency.lockutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.710 2 DEBUG oslo_concurrency.lockutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:31.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:31 compute-0 sudo[333001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:31:31 compute-0 sudo[333001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.736 2 DEBUG nova.storage.rbd_utils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] rbd image 06e8b42b-8275-4d3a-9155-14ffb6e7fb84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:31:31 compute-0 sudo[333001]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.741 2 DEBUG oslo_concurrency.processutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 06e8b42b-8275-4d3a-9155-14ffb6e7fb84_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e315 do_prune osdmap full prune enabled
Oct 02 12:31:31 compute-0 sudo[333046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:31 compute-0 sudo[333046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:31 compute-0 sudo[333046]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:31 compute-0 sudo[333079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:31:31 compute-0 sudo[333079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e316 e316: 3 total, 3 up, 3 in
Oct 02 12:31:31 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e316: 3 total, 3 up, 3 in
Oct 02 12:31:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:31:31 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1623554107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.934 2 DEBUG oslo_concurrency.processutils [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.942 2 DEBUG nova.virt.libvirt.vif [None req-b996634b-09bf-44a2-b513-7c39795db4a2 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:30:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-245965768',display_name='tempest-ServerDiskConfigTestJSON-server-245965768',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-245965768',id=117,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='27a1729bf10548219b90df46839849f5',ramdisk_id='',reservation_id='r-bp44ih1j',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-1123059068',owner_user_name='tempest-ServerDiskConfigTestJSON-1123059068-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:31:23Z,user_data=None,user_id='4a89b71e2513413e922ee6d5d06362b1',uuid=6d9cbb75-985a-47af-9a91-f4a885d1b59a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "address": "fa:16:3e:99:6e:1d", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5ad4ac4-0d", "ovs_interfaceid": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.943 2 DEBUG nova.network.os_vif_util [None req-b996634b-09bf-44a2-b513-7c39795db4a2 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converting VIF {"id": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "address": "fa:16:3e:99:6e:1d", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5ad4ac4-0d", "ovs_interfaceid": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.944 2 DEBUG nova.network.os_vif_util [None req-b996634b-09bf-44a2-b513-7c39795db4a2 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:99:6e:1d,bridge_name='br-int',has_traffic_filtering=True,id=b5ad4ac4-0d9f-487b-8448-19fdf04c9f36,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5ad4ac4-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.944 2 DEBUG os_vif [None req-b996634b-09bf-44a2-b513-7c39795db4a2 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:6e:1d,bridge_name='br-int',has_traffic_filtering=True,id=b5ad4ac4-0d9f-487b-8448-19fdf04c9f36,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5ad4ac4-0d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.949 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5ad4ac4-0d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.949 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.952 2 INFO os_vif [None req-b996634b-09bf-44a2-b513-7c39795db4a2 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:6e:1d,bridge_name='br-int',has_traffic_filtering=True,id=b5ad4ac4-0d9f-487b-8448-19fdf04c9f36,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5ad4ac4-0d')
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.953 2 DEBUG oslo_concurrency.lockutils [None req-b996634b-09bf-44a2-b513-7c39795db4a2 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.957 2 DEBUG nova.compute.provider_tree [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:31:31 compute-0 nova_compute[257802]: 2025-10-02 12:31:31.984 2 DEBUG nova.scheduler.client.report [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:31:32 compute-0 nova_compute[257802]: 2025-10-02 12:31:32.019 2 DEBUG oslo_concurrency.lockutils [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.251s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:32 compute-0 nova_compute[257802]: 2025-10-02 12:31:32.020 2 INFO nova.compute.manager [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Migrating
Oct 02 12:31:32 compute-0 nova_compute[257802]: 2025-10-02 12:31:32.030 2 DEBUG oslo_concurrency.lockutils [None req-b996634b-09bf-44a2-b513-7c39795db4a2 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:32 compute-0 podman[333159]: 2025-10-02 12:31:32.206588776 +0000 UTC m=+0.062008475 container create e58033985d4e35f426b799610f493297f93d0f7bfcebfcb7c610dc0a5724408a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_sinoussi, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:31:32 compute-0 nova_compute[257802]: 2025-10-02 12:31:32.245 2 DEBUG oslo_concurrency.processutils [None req-b996634b-09bf-44a2-b513-7c39795db4a2 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:32 compute-0 podman[333159]: 2025-10-02 12:31:32.162909482 +0000 UTC m=+0.018329201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:31:32 compute-0 systemd[1]: Started libpod-conmon-e58033985d4e35f426b799610f493297f93d0f7bfcebfcb7c610dc0a5724408a.scope.
Oct 02 12:31:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:31:32 compute-0 podman[333159]: 2025-10-02 12:31:32.335534076 +0000 UTC m=+0.190953895 container init e58033985d4e35f426b799610f493297f93d0f7bfcebfcb7c610dc0a5724408a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_sinoussi, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:31:32 compute-0 podman[333159]: 2025-10-02 12:31:32.343363009 +0000 UTC m=+0.198782708 container start e58033985d4e35f426b799610f493297f93d0f7bfcebfcb7c610dc0a5724408a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:31:32 compute-0 podman[333159]: 2025-10-02 12:31:32.347381948 +0000 UTC m=+0.202801647 container attach e58033985d4e35f426b799610f493297f93d0f7bfcebfcb7c610dc0a5724408a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_sinoussi, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:31:32 compute-0 nervous_sinoussi[333175]: 167 167
Oct 02 12:31:32 compute-0 podman[333159]: 2025-10-02 12:31:32.349441868 +0000 UTC m=+0.204861567 container died e58033985d4e35f426b799610f493297f93d0f7bfcebfcb7c610dc0a5724408a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_sinoussi, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Oct 02 12:31:32 compute-0 systemd[1]: libpod-e58033985d4e35f426b799610f493297f93d0f7bfcebfcb7c610dc0a5724408a.scope: Deactivated successfully.
Oct 02 12:31:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f59d24563ce0ce92907e01de61fbc697a9d02bf3a8e9958d0e3b82479814c13-merged.mount: Deactivated successfully.
Oct 02 12:31:32 compute-0 podman[333159]: 2025-10-02 12:31:32.422215177 +0000 UTC m=+0.277634876 container remove e58033985d4e35f426b799610f493297f93d0f7bfcebfcb7c610dc0a5724408a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_sinoussi, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:31:32 compute-0 systemd[1]: libpod-conmon-e58033985d4e35f426b799610f493297f93d0f7bfcebfcb7c610dc0a5724408a.scope: Deactivated successfully.
Oct 02 12:31:32 compute-0 nova_compute[257802]: 2025-10-02 12:31:32.471 2 DEBUG oslo_concurrency.processutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 06e8b42b-8275-4d3a-9155-14ffb6e7fb84_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.730s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:32 compute-0 nova_compute[257802]: 2025-10-02 12:31:32.597 2 DEBUG nova.storage.rbd_utils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] resizing rbd image 06e8b42b-8275-4d3a-9155-14ffb6e7fb84_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:31:32 compute-0 podman[333253]: 2025-10-02 12:31:32.639069229 +0000 UTC m=+0.058258454 container create a6e8032acb38e921bf21a42e835c6174ea2ee225f09b317964bdd6a015f63358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_johnson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:31:32 compute-0 systemd[1]: Started libpod-conmon-a6e8032acb38e921bf21a42e835c6174ea2ee225f09b317964bdd6a015f63358.scope.
Oct 02 12:31:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:31:32 compute-0 podman[333253]: 2025-10-02 12:31:32.615042418 +0000 UTC m=+0.034231673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:31:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33248e361dd89c306be796bd9382bc6359ba4f82514d006286039cee187d5dbd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:31:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33248e361dd89c306be796bd9382bc6359ba4f82514d006286039cee187d5dbd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:31:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33248e361dd89c306be796bd9382bc6359ba4f82514d006286039cee187d5dbd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:31:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33248e361dd89c306be796bd9382bc6359ba4f82514d006286039cee187d5dbd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:31:32 compute-0 podman[333253]: 2025-10-02 12:31:32.726782636 +0000 UTC m=+0.145971891 container init a6e8032acb38e921bf21a42e835c6174ea2ee225f09b317964bdd6a015f63358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_johnson, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 12:31:32 compute-0 podman[333253]: 2025-10-02 12:31:32.736141495 +0000 UTC m=+0.155330720 container start a6e8032acb38e921bf21a42e835c6174ea2ee225f09b317964bdd6a015f63358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_johnson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 12:31:32 compute-0 nova_compute[257802]: 2025-10-02 12:31:32.762 2 DEBUG oslo_concurrency.processutils [None req-b996634b-09bf-44a2-b513-7c39795db4a2 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:32 compute-0 nova_compute[257802]: 2025-10-02 12:31:32.768 2 DEBUG nova.compute.provider_tree [None req-b996634b-09bf-44a2-b513-7c39795db4a2 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:31:32 compute-0 podman[333253]: 2025-10-02 12:31:32.771233708 +0000 UTC m=+0.190422963 container attach a6e8032acb38e921bf21a42e835c6174ea2ee225f09b317964bdd6a015f63358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_johnson, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:31:32 compute-0 nova_compute[257802]: 2025-10-02 12:31:32.787 2 DEBUG nova.scheduler.client.report [None req-b996634b-09bf-44a2-b513-7c39795db4a2 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:31:32 compute-0 nova_compute[257802]: 2025-10-02 12:31:32.791 2 DEBUG nova.network.neutron [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Successfully created port: c012bc9a-1128-406f-a560-fca6e0baaf71 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:31:32 compute-0 nova_compute[257802]: 2025-10-02 12:31:32.861 2 DEBUG nova.objects.instance [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Lazy-loading 'migration_context' on Instance uuid 06e8b42b-8275-4d3a-9155-14ffb6e7fb84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:31:32 compute-0 nova_compute[257802]: 2025-10-02 12:31:32.905 2 DEBUG nova.virt.libvirt.driver [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:31:32 compute-0 nova_compute[257802]: 2025-10-02 12:31:32.905 2 DEBUG nova.virt.libvirt.driver [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Ensure instance console log exists: /var/lib/nova/instances/06e8b42b-8275-4d3a-9155-14ffb6e7fb84/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:31:32 compute-0 nova_compute[257802]: 2025-10-02 12:31:32.905 2 DEBUG oslo_concurrency.lockutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:32 compute-0 nova_compute[257802]: 2025-10-02 12:31:32.906 2 DEBUG oslo_concurrency.lockutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:32 compute-0 nova_compute[257802]: 2025-10-02 12:31:32.906 2 DEBUG oslo_concurrency.lockutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:32 compute-0 ceph-mon[73607]: pgmap v2102: 305 pgs: 305 active+clean; 678 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 757 KiB/s rd, 122 KiB/s wr, 59 op/s
Oct 02 12:31:32 compute-0 ceph-mon[73607]: osdmap e316: 3 total, 3 up, 3 in
Oct 02 12:31:32 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1623554107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:32 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2583777070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:32 compute-0 nova_compute[257802]: 2025-10-02 12:31:32.913 2 DEBUG oslo_concurrency.lockutils [None req-b996634b-09bf-44a2-b513-7c39795db4a2 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:31:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:33.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:31:33 compute-0 nova_compute[257802]: 2025-10-02 12:31:33.060 2 INFO nova.scheduler.client.report [None req-b996634b-09bf-44a2-b513-7c39795db4a2 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Deleted allocation for migration 649cb3df-1832-4eb9-bab2-7623fb261c1e
Oct 02 12:31:33 compute-0 nova_compute[257802]: 2025-10-02 12:31:33.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:33 compute-0 nova_compute[257802]: 2025-10-02 12:31:33.129 2 DEBUG oslo_concurrency.lockutils [None req-b996634b-09bf-44a2-b513-7c39795db4a2 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 5.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:33 compute-0 dazzling_johnson[333289]: {
Oct 02 12:31:33 compute-0 dazzling_johnson[333289]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:31:33 compute-0 dazzling_johnson[333289]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:31:33 compute-0 dazzling_johnson[333289]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:31:33 compute-0 dazzling_johnson[333289]:         "osd_id": 1,
Oct 02 12:31:33 compute-0 dazzling_johnson[333289]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:31:33 compute-0 dazzling_johnson[333289]:         "type": "bluestore"
Oct 02 12:31:33 compute-0 dazzling_johnson[333289]:     }
Oct 02 12:31:33 compute-0 dazzling_johnson[333289]: }
Oct 02 12:31:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:33 compute-0 systemd[1]: libpod-a6e8032acb38e921bf21a42e835c6174ea2ee225f09b317964bdd6a015f63358.scope: Deactivated successfully.
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:31:33.599446) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408293599487, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 705, "num_deletes": 253, "total_data_size": 809414, "memory_usage": 821936, "flush_reason": "Manual Compaction"}
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Oct 02 12:31:33 compute-0 conmon[333289]: conmon a6e8032acb38e921bf21 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a6e8032acb38e921bf21a42e835c6174ea2ee225f09b317964bdd6a015f63358.scope/container/memory.events
Oct 02 12:31:33 compute-0 podman[333253]: 2025-10-02 12:31:33.60106934 +0000 UTC m=+1.020258565 container died a6e8032acb38e921bf21a42e835c6174ea2ee225f09b317964bdd6a015f63358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_johnson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408293606613, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 569462, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46369, "largest_seqno": 47073, "table_properties": {"data_size": 566249, "index_size": 1057, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 9029, "raw_average_key_size": 21, "raw_value_size": 559267, "raw_average_value_size": 1303, "num_data_blocks": 47, "num_entries": 429, "num_filter_entries": 429, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408251, "oldest_key_time": 1759408251, "file_creation_time": 1759408293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 7243 microseconds, and 2769 cpu microseconds.
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:31:33.606665) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 569462 bytes OK
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:31:33.606711) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:31:33.609423) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:31:33.609441) EVENT_LOG_v1 {"time_micros": 1759408293609435, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:31:33.609459) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 805750, prev total WAL file size 805750, number of live WAL files 2.
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:31:33.610319) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353037' seq:72057594037927935, type:22 .. '6D6772737461740031373630' seq:0, type:0; will stop at (end)
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(556KB)], [101(12MB)]
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408293610361, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 13277947, "oldest_snapshot_seqno": -1}
Oct 02 12:31:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2104: 305 pgs: 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 297 active+clean; 688 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 658 KiB/s rd, 711 KiB/s wr, 77 op/s
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 7326 keys, 9648626 bytes, temperature: kUnknown
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408293696325, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 9648626, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9601222, "index_size": 27964, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18373, "raw_key_size": 190091, "raw_average_key_size": 25, "raw_value_size": 9472031, "raw_average_value_size": 1292, "num_data_blocks": 1096, "num_entries": 7326, "num_filter_entries": 7326, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759408293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:31:33.696615) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 9648626 bytes
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:31:33.698963) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.3 rd, 112.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 12.1 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(40.3) write-amplify(16.9) OK, records in: 7832, records dropped: 506 output_compression: NoCompression
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:31:33.698983) EVENT_LOG_v1 {"time_micros": 1759408293698974, "job": 60, "event": "compaction_finished", "compaction_time_micros": 86070, "compaction_time_cpu_micros": 47345, "output_level": 6, "num_output_files": 1, "total_output_size": 9648626, "num_input_records": 7832, "num_output_records": 7326, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408293699348, "job": 60, "event": "table_file_deletion", "file_number": 103}
Oct 02 12:31:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-33248e361dd89c306be796bd9382bc6359ba4f82514d006286039cee187d5dbd-merged.mount: Deactivated successfully.
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408293702486, "job": 60, "event": "table_file_deletion", "file_number": 101}
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:31:33.609929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:31:33.702607) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:31:33.702613) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:31:33.702615) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:31:33.702617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:31:33 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:31:33.702619) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:31:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:33.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:33 compute-0 podman[333253]: 2025-10-02 12:31:33.741044382 +0000 UTC m=+1.160233607 container remove a6e8032acb38e921bf21a42e835c6174ea2ee225f09b317964bdd6a015f63358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:31:33 compute-0 systemd[1]: libpod-conmon-a6e8032acb38e921bf21a42e835c6174ea2ee225f09b317964bdd6a015f63358.scope: Deactivated successfully.
Oct 02 12:31:33 compute-0 sudo[333079]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:31:33 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:31:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:31:33 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:31:33 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 03f33073-7da2-4174-8baa-7aa3a278f7f0 does not exist
Oct 02 12:31:33 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 917b86d1-4ecc-4671-921f-5b21406297a1 does not exist
Oct 02 12:31:33 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 7c1ea172-3ffe-4d22-9c82-2c6fc3b8ee7d does not exist
Oct 02 12:31:33 compute-0 sudo[333344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:33 compute-0 sudo[333344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:33 compute-0 sudo[333344]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:33 compute-0 sudo[333369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:31:33 compute-0 sudo[333369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:33 compute-0 sudo[333369]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:34 compute-0 nova_compute[257802]: 2025-10-02 12:31:34.191 2 DEBUG nova.network.neutron [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Successfully updated port: c012bc9a-1128-406f-a560-fca6e0baaf71 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:31:34 compute-0 nova_compute[257802]: 2025-10-02 12:31:34.298 2 DEBUG oslo_concurrency.lockutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Acquiring lock "refresh_cache-06e8b42b-8275-4d3a-9155-14ffb6e7fb84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:31:34 compute-0 nova_compute[257802]: 2025-10-02 12:31:34.298 2 DEBUG oslo_concurrency.lockutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Acquired lock "refresh_cache-06e8b42b-8275-4d3a-9155-14ffb6e7fb84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:31:34 compute-0 nova_compute[257802]: 2025-10-02 12:31:34.298 2 DEBUG nova.network.neutron [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:31:34 compute-0 nova_compute[257802]: 2025-10-02 12:31:34.340 2 DEBUG nova.compute.manager [req-e18f39d6-026b-499c-aa61-7e5ea973fee0 req-e27097c4-1625-4920-9a98-583b919b9450 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Received event network-changed-c012bc9a-1128-406f-a560-fca6e0baaf71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:34 compute-0 nova_compute[257802]: 2025-10-02 12:31:34.341 2 DEBUG nova.compute.manager [req-e18f39d6-026b-499c-aa61-7e5ea973fee0 req-e27097c4-1625-4920-9a98-583b919b9450 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Refreshing instance network info cache due to event network-changed-c012bc9a-1128-406f-a560-fca6e0baaf71. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:31:34 compute-0 nova_compute[257802]: 2025-10-02 12:31:34.341 2 DEBUG oslo_concurrency.lockutils [req-e18f39d6-026b-499c-aa61-7e5ea973fee0 req-e27097c4-1625-4920-9a98-583b919b9450 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-06e8b42b-8275-4d3a-9155-14ffb6e7fb84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:31:34 compute-0 nova_compute[257802]: 2025-10-02 12:31:34.639 2 DEBUG nova.network.neutron [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:31:34 compute-0 sshd-session[333395]: Accepted publickey for nova from 192.168.122.102 port 45962 ssh2: ECDSA SHA256:RlBMWn3An7DGjBe9yfwGQtrEA9dOakLcJHFiZKvkVOc
Oct 02 12:31:34 compute-0 systemd-logind[789]: New session 69 of user nova.
Oct 02 12:31:34 compute-0 systemd[1]: Created slice User Slice of UID 42436.
Oct 02 12:31:34 compute-0 ceph-mon[73607]: pgmap v2104: 305 pgs: 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 297 active+clean; 688 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 658 KiB/s rd, 711 KiB/s wr, 77 op/s
Oct 02 12:31:34 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:31:34 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:31:34 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct 02 12:31:34 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct 02 12:31:34 compute-0 systemd[1]: Starting User Manager for UID 42436...
Oct 02 12:31:34 compute-0 systemd[333399]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:31:35 compute-0 systemd[333399]: Queued start job for default target Main User Target.
Oct 02 12:31:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:35.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:35 compute-0 systemd[333399]: Created slice User Application Slice.
Oct 02 12:31:35 compute-0 systemd[333399]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 02 12:31:35 compute-0 systemd[333399]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 12:31:35 compute-0 systemd[333399]: Reached target Paths.
Oct 02 12:31:35 compute-0 systemd[333399]: Reached target Timers.
Oct 02 12:31:35 compute-0 systemd[333399]: Starting D-Bus User Message Bus Socket...
Oct 02 12:31:35 compute-0 systemd[333399]: Starting Create User's Volatile Files and Directories...
Oct 02 12:31:35 compute-0 systemd[333399]: Listening on D-Bus User Message Bus Socket.
Oct 02 12:31:35 compute-0 systemd[333399]: Finished Create User's Volatile Files and Directories.
Oct 02 12:31:35 compute-0 systemd[333399]: Reached target Sockets.
Oct 02 12:31:35 compute-0 systemd[333399]: Reached target Basic System.
Oct 02 12:31:35 compute-0 systemd[333399]: Reached target Main User Target.
Oct 02 12:31:35 compute-0 systemd[333399]: Startup finished in 158ms.
Oct 02 12:31:35 compute-0 systemd[1]: Started User Manager for UID 42436.
Oct 02 12:31:35 compute-0 systemd[1]: Started Session 69 of User nova.
Oct 02 12:31:35 compute-0 sshd-session[333395]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:31:35 compute-0 sshd-session[333414]: Received disconnect from 192.168.122.102 port 45962:11: disconnected by user
Oct 02 12:31:35 compute-0 sshd-session[333414]: Disconnected from user nova 192.168.122.102 port 45962
Oct 02 12:31:35 compute-0 sshd-session[333395]: pam_unix(sshd:session): session closed for user nova
Oct 02 12:31:35 compute-0 systemd-logind[789]: Session 69 logged out. Waiting for processes to exit.
Oct 02 12:31:35 compute-0 systemd[1]: session-69.scope: Deactivated successfully.
Oct 02 12:31:35 compute-0 systemd-logind[789]: Removed session 69.
Oct 02 12:31:35 compute-0 sshd-session[333416]: Accepted publickey for nova from 192.168.122.102 port 45966 ssh2: ECDSA SHA256:RlBMWn3An7DGjBe9yfwGQtrEA9dOakLcJHFiZKvkVOc
Oct 02 12:31:35 compute-0 systemd-logind[789]: New session 71 of user nova.
Oct 02 12:31:35 compute-0 systemd[1]: Started Session 71 of User nova.
Oct 02 12:31:35 compute-0 sshd-session[333416]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:31:35 compute-0 sshd-session[333419]: Received disconnect from 192.168.122.102 port 45966:11: disconnected by user
Oct 02 12:31:35 compute-0 sshd-session[333419]: Disconnected from user nova 192.168.122.102 port 45966
Oct 02 12:31:35 compute-0 sshd-session[333416]: pam_unix(sshd:session): session closed for user nova
Oct 02 12:31:35 compute-0 systemd[1]: session-71.scope: Deactivated successfully.
Oct 02 12:31:35 compute-0 systemd-logind[789]: Session 71 logged out. Waiting for processes to exit.
Oct 02 12:31:35 compute-0 systemd-logind[789]: Removed session 71.
Oct 02 12:31:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 297 active+clean; 706 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 658 KiB/s rd, 1.4 MiB/s wr, 86 op/s
Oct 02 12:31:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:35.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:35 compute-0 nova_compute[257802]: 2025-10-02 12:31:35.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:37 compute-0 ceph-mon[73607]: pgmap v2105: 305 pgs: 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 297 active+clean; 706 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 658 KiB/s rd, 1.4 MiB/s wr, 86 op/s
Oct 02 12:31:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:31:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:37.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:31:37 compute-0 nova_compute[257802]: 2025-10-02 12:31:37.490 2 DEBUG nova.network.neutron [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Updating instance_info_cache with network_info: [{"id": "c012bc9a-1128-406f-a560-fca6e0baaf71", "address": "fa:16:3e:68:dc:16", "network": {"id": "ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-34488102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dda4e7689e7440639479cd7b0e4c17df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc012bc9a-11", "ovs_interfaceid": "c012bc9a-1128-406f-a560-fca6e0baaf71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:31:37 compute-0 nova_compute[257802]: 2025-10-02 12:31:37.530 2 DEBUG oslo_concurrency.lockutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Releasing lock "refresh_cache-06e8b42b-8275-4d3a-9155-14ffb6e7fb84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:31:37 compute-0 nova_compute[257802]: 2025-10-02 12:31:37.530 2 DEBUG nova.compute.manager [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Instance network_info: |[{"id": "c012bc9a-1128-406f-a560-fca6e0baaf71", "address": "fa:16:3e:68:dc:16", "network": {"id": "ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-34488102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dda4e7689e7440639479cd7b0e4c17df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc012bc9a-11", "ovs_interfaceid": "c012bc9a-1128-406f-a560-fca6e0baaf71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:31:37 compute-0 nova_compute[257802]: 2025-10-02 12:31:37.530 2 DEBUG oslo_concurrency.lockutils [req-e18f39d6-026b-499c-aa61-7e5ea973fee0 req-e27097c4-1625-4920-9a98-583b919b9450 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-06e8b42b-8275-4d3a-9155-14ffb6e7fb84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:31:37 compute-0 nova_compute[257802]: 2025-10-02 12:31:37.531 2 DEBUG nova.network.neutron [req-e18f39d6-026b-499c-aa61-7e5ea973fee0 req-e27097c4-1625-4920-9a98-583b919b9450 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Refreshing network info cache for port c012bc9a-1128-406f-a560-fca6e0baaf71 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:31:37 compute-0 nova_compute[257802]: 2025-10-02 12:31:37.533 2 DEBUG nova.virt.libvirt.driver [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Start _get_guest_xml network_info=[{"id": "c012bc9a-1128-406f-a560-fca6e0baaf71", "address": "fa:16:3e:68:dc:16", "network": {"id": "ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-34488102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dda4e7689e7440639479cd7b0e4c17df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc012bc9a-11", "ovs_interfaceid": "c012bc9a-1128-406f-a560-fca6e0baaf71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:31:37 compute-0 nova_compute[257802]: 2025-10-02 12:31:37.539 2 WARNING nova.virt.libvirt.driver [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:31:37 compute-0 nova_compute[257802]: 2025-10-02 12:31:37.548 2 DEBUG nova.virt.libvirt.host [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:31:37 compute-0 nova_compute[257802]: 2025-10-02 12:31:37.549 2 DEBUG nova.virt.libvirt.host [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:31:37 compute-0 nova_compute[257802]: 2025-10-02 12:31:37.555 2 DEBUG nova.virt.libvirt.host [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:31:37 compute-0 nova_compute[257802]: 2025-10-02 12:31:37.556 2 DEBUG nova.virt.libvirt.host [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:31:37 compute-0 nova_compute[257802]: 2025-10-02 12:31:37.557 2 DEBUG nova.virt.libvirt.driver [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:31:37 compute-0 nova_compute[257802]: 2025-10-02 12:31:37.557 2 DEBUG nova.virt.hardware [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:31:37 compute-0 nova_compute[257802]: 2025-10-02 12:31:37.558 2 DEBUG nova.virt.hardware [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:31:37 compute-0 nova_compute[257802]: 2025-10-02 12:31:37.558 2 DEBUG nova.virt.hardware [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:31:37 compute-0 nova_compute[257802]: 2025-10-02 12:31:37.559 2 DEBUG nova.virt.hardware [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:31:37 compute-0 nova_compute[257802]: 2025-10-02 12:31:37.559 2 DEBUG nova.virt.hardware [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:31:37 compute-0 nova_compute[257802]: 2025-10-02 12:31:37.559 2 DEBUG nova.virt.hardware [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:31:37 compute-0 nova_compute[257802]: 2025-10-02 12:31:37.560 2 DEBUG nova.virt.hardware [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:31:37 compute-0 nova_compute[257802]: 2025-10-02 12:31:37.560 2 DEBUG nova.virt.hardware [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:31:37 compute-0 nova_compute[257802]: 2025-10-02 12:31:37.560 2 DEBUG nova.virt.hardware [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:31:37 compute-0 nova_compute[257802]: 2025-10-02 12:31:37.561 2 DEBUG nova.virt.hardware [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:31:37 compute-0 nova_compute[257802]: 2025-10-02 12:31:37.561 2 DEBUG nova.virt.hardware [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:31:37 compute-0 nova_compute[257802]: 2025-10-02 12:31:37.564 2 DEBUG oslo_concurrency.processutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 305 active+clean; 724 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 666 KiB/s rd, 2.2 MiB/s wr, 101 op/s
Oct 02 12:31:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:37.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:31:38 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2687242217' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.020 2 DEBUG oslo_concurrency.processutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.045 2 DEBUG nova.storage.rbd_utils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] rbd image 06e8b42b-8275-4d3a-9155-14ffb6e7fb84_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.049 2 DEBUG oslo_concurrency.processutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:38 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2687242217' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:31:38 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3119987833' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.472 2 DEBUG oslo_concurrency.processutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.473 2 DEBUG nova.virt.libvirt.vif [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:31:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1526479921',display_name='tempest-ServerMetadataNegativeTestJSON-server-1526479921',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1526479921',id=119,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dda4e7689e7440639479cd7b0e4c17df',ramdisk_id='',reservation_id='r-y8tu3z9f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-965970566',owner_user_name=
'tempest-ServerMetadataNegativeTestJSON-965970566-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:31:31Z,user_data=None,user_id='fc18358f9af64753bc8892379b9244c6',uuid=06e8b42b-8275-4d3a-9155-14ffb6e7fb84,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c012bc9a-1128-406f-a560-fca6e0baaf71", "address": "fa:16:3e:68:dc:16", "network": {"id": "ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-34488102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dda4e7689e7440639479cd7b0e4c17df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc012bc9a-11", "ovs_interfaceid": "c012bc9a-1128-406f-a560-fca6e0baaf71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.474 2 DEBUG nova.network.os_vif_util [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Converting VIF {"id": "c012bc9a-1128-406f-a560-fca6e0baaf71", "address": "fa:16:3e:68:dc:16", "network": {"id": "ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-34488102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dda4e7689e7440639479cd7b0e4c17df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc012bc9a-11", "ovs_interfaceid": "c012bc9a-1128-406f-a560-fca6e0baaf71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.475 2 DEBUG nova.network.os_vif_util [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:dc:16,bridge_name='br-int',has_traffic_filtering=True,id=c012bc9a-1128-406f-a560-fca6e0baaf71,network=Network(ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc012bc9a-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.477 2 DEBUG nova.objects.instance [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Lazy-loading 'pci_devices' on Instance uuid 06e8b42b-8275-4d3a-9155-14ffb6e7fb84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.513 2 DEBUG nova.virt.libvirt.driver [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:31:38 compute-0 nova_compute[257802]:   <uuid>06e8b42b-8275-4d3a-9155-14ffb6e7fb84</uuid>
Oct 02 12:31:38 compute-0 nova_compute[257802]:   <name>instance-00000077</name>
Oct 02 12:31:38 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:31:38 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:31:38 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:31:38 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:       <nova:name>tempest-ServerMetadataNegativeTestJSON-server-1526479921</nova:name>
Oct 02 12:31:38 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:31:37</nova:creationTime>
Oct 02 12:31:38 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:31:38 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:31:38 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:31:38 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:31:38 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:31:38 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:31:38 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:31:38 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:31:38 compute-0 nova_compute[257802]:         <nova:user uuid="fc18358f9af64753bc8892379b9244c6">tempest-ServerMetadataNegativeTestJSON-965970566-project-member</nova:user>
Oct 02 12:31:38 compute-0 nova_compute[257802]:         <nova:project uuid="dda4e7689e7440639479cd7b0e4c17df">tempest-ServerMetadataNegativeTestJSON-965970566</nova:project>
Oct 02 12:31:38 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:31:38 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:31:38 compute-0 nova_compute[257802]:         <nova:port uuid="c012bc9a-1128-406f-a560-fca6e0baaf71">
Oct 02 12:31:38 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:31:38 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:31:38 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:31:38 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <system>
Oct 02 12:31:38 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:31:38 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:31:38 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:31:38 compute-0 nova_compute[257802]:       <entry name="serial">06e8b42b-8275-4d3a-9155-14ffb6e7fb84</entry>
Oct 02 12:31:38 compute-0 nova_compute[257802]:       <entry name="uuid">06e8b42b-8275-4d3a-9155-14ffb6e7fb84</entry>
Oct 02 12:31:38 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     </system>
Oct 02 12:31:38 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:31:38 compute-0 nova_compute[257802]:   <os>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:   </os>
Oct 02 12:31:38 compute-0 nova_compute[257802]:   <features>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:   </features>
Oct 02 12:31:38 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:31:38 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:31:38 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:31:38 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/06e8b42b-8275-4d3a-9155-14ffb6e7fb84_disk">
Oct 02 12:31:38 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:       </source>
Oct 02 12:31:38 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:31:38 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:31:38 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:31:38 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/06e8b42b-8275-4d3a-9155-14ffb6e7fb84_disk.config">
Oct 02 12:31:38 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:       </source>
Oct 02 12:31:38 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:31:38 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:31:38 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:31:38 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:68:dc:16"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:       <target dev="tapc012bc9a-11"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:31:38 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/06e8b42b-8275-4d3a-9155-14ffb6e7fb84/console.log" append="off"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <video>
Oct 02 12:31:38 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     </video>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:31:38 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:31:38 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:31:38 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:31:38 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:31:38 compute-0 nova_compute[257802]: </domain>
Oct 02 12:31:38 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.514 2 DEBUG nova.compute.manager [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Preparing to wait for external event network-vif-plugged-c012bc9a-1128-406f-a560-fca6e0baaf71 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.515 2 DEBUG oslo_concurrency.lockutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Acquiring lock "06e8b42b-8275-4d3a-9155-14ffb6e7fb84-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.515 2 DEBUG oslo_concurrency.lockutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Lock "06e8b42b-8275-4d3a-9155-14ffb6e7fb84-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.516 2 DEBUG oslo_concurrency.lockutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Lock "06e8b42b-8275-4d3a-9155-14ffb6e7fb84-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.517 2 DEBUG nova.virt.libvirt.vif [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:31:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1526479921',display_name='tempest-ServerMetadataNegativeTestJSON-server-1526479921',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1526479921',id=119,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dda4e7689e7440639479cd7b0e4c17df',ramdisk_id='',reservation_id='r-y8tu3z9f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-965970566',owner_user_name='tempest-ServerMetadataNegativeTestJSON-965970566-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:31:31Z,user_data=None,user_id='fc18358f9af64753bc8892379b9244c6',uuid=06e8b42b-8275-4d3a-9155-14ffb6e7fb84,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c012bc9a-1128-406f-a560-fca6e0baaf71", "address": "fa:16:3e:68:dc:16", "network": {"id": "ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-34488102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dda4e7689e7440639479cd7b0e4c17df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc012bc9a-11", "ovs_interfaceid": "c012bc9a-1128-406f-a560-fca6e0baaf71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.517 2 DEBUG nova.network.os_vif_util [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Converting VIF {"id": "c012bc9a-1128-406f-a560-fca6e0baaf71", "address": "fa:16:3e:68:dc:16", "network": {"id": "ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-34488102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dda4e7689e7440639479cd7b0e4c17df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc012bc9a-11", "ovs_interfaceid": "c012bc9a-1128-406f-a560-fca6e0baaf71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.518 2 DEBUG nova.network.os_vif_util [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:dc:16,bridge_name='br-int',has_traffic_filtering=True,id=c012bc9a-1128-406f-a560-fca6e0baaf71,network=Network(ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc012bc9a-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.519 2 DEBUG os_vif [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:dc:16,bridge_name='br-int',has_traffic_filtering=True,id=c012bc9a-1128-406f-a560-fca6e0baaf71,network=Network(ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc012bc9a-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.520 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.520 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.525 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc012bc9a-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.526 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc012bc9a-11, col_values=(('external_ids', {'iface-id': 'c012bc9a-1128-406f-a560-fca6e0baaf71', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:dc:16', 'vm-uuid': '06e8b42b-8275-4d3a-9155-14ffb6e7fb84'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:38 compute-0 NetworkManager[44987]: <info>  [1759408298.5289] manager: (tapc012bc9a-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/249)
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.537 2 INFO os_vif [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:dc:16,bridge_name='br-int',has_traffic_filtering=True,id=c012bc9a-1128-406f-a560-fca6e0baaf71,network=Network(ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc012bc9a-11')
Oct 02 12:31:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e316 do_prune osdmap full prune enabled
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.657 2 DEBUG nova.virt.libvirt.driver [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.657 2 DEBUG nova.virt.libvirt.driver [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.657 2 DEBUG nova.virt.libvirt.driver [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] No VIF found with MAC fa:16:3e:68:dc:16, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.657 2 INFO nova.virt.libvirt.driver [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Using config drive
Oct 02 12:31:38 compute-0 nova_compute[257802]: 2025-10-02 12:31:38.683 2 DEBUG nova.storage.rbd_utils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] rbd image 06e8b42b-8275-4d3a-9155-14ffb6e7fb84_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:31:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e317 e317: 3 total, 3 up, 3 in
Oct 02 12:31:38 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e317: 3 total, 3 up, 3 in
Oct 02 12:31:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:39.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:39 compute-0 ceph-mon[73607]: pgmap v2106: 305 pgs: 305 active+clean; 724 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 666 KiB/s rd, 2.2 MiB/s wr, 101 op/s
Oct 02 12:31:39 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3119987833' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:39 compute-0 ceph-mon[73607]: osdmap e317: 3 total, 3 up, 3 in
Oct 02 12:31:39 compute-0 nova_compute[257802]: 2025-10-02 12:31:39.632 2 INFO nova.virt.libvirt.driver [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Creating config drive at /var/lib/nova/instances/06e8b42b-8275-4d3a-9155-14ffb6e7fb84/disk.config
Oct 02 12:31:39 compute-0 nova_compute[257802]: 2025-10-02 12:31:39.637 2 DEBUG oslo_concurrency.processutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/06e8b42b-8275-4d3a-9155-14ffb6e7fb84/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkl94tsn7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 305 active+clean; 726 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 492 KiB/s rd, 2.7 MiB/s wr, 101 op/s
Oct 02 12:31:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:39.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:39 compute-0 nova_compute[257802]: 2025-10-02 12:31:39.748 2 DEBUG nova.compute.manager [req-4ee1c75a-8fff-4407-94c5-a4dd6bdf635f req-7c1a4234-8e38-45c3-ba9e-b5f4dd8e18c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Received event network-vif-unplugged-386c73f3-c5a1-4edb-894f-841beabaecbd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:39 compute-0 nova_compute[257802]: 2025-10-02 12:31:39.749 2 DEBUG oslo_concurrency.lockutils [req-4ee1c75a-8fff-4407-94c5-a4dd6bdf635f req-7c1a4234-8e38-45c3-ba9e-b5f4dd8e18c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "c8b713f4-4f41-4153-928c-164f2ed108ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:39 compute-0 nova_compute[257802]: 2025-10-02 12:31:39.749 2 DEBUG oslo_concurrency.lockutils [req-4ee1c75a-8fff-4407-94c5-a4dd6bdf635f req-7c1a4234-8e38-45c3-ba9e-b5f4dd8e18c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c8b713f4-4f41-4153-928c-164f2ed108ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:39 compute-0 nova_compute[257802]: 2025-10-02 12:31:39.749 2 DEBUG oslo_concurrency.lockutils [req-4ee1c75a-8fff-4407-94c5-a4dd6bdf635f req-7c1a4234-8e38-45c3-ba9e-b5f4dd8e18c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c8b713f4-4f41-4153-928c-164f2ed108ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:39 compute-0 nova_compute[257802]: 2025-10-02 12:31:39.750 2 DEBUG nova.compute.manager [req-4ee1c75a-8fff-4407-94c5-a4dd6bdf635f req-7c1a4234-8e38-45c3-ba9e-b5f4dd8e18c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] No waiting events found dispatching network-vif-unplugged-386c73f3-c5a1-4edb-894f-841beabaecbd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:31:39 compute-0 nova_compute[257802]: 2025-10-02 12:31:39.750 2 WARNING nova.compute.manager [req-4ee1c75a-8fff-4407-94c5-a4dd6bdf635f req-7c1a4234-8e38-45c3-ba9e-b5f4dd8e18c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Received unexpected event network-vif-unplugged-386c73f3-c5a1-4edb-894f-841beabaecbd for instance with vm_state active and task_state resize_migrating.
Oct 02 12:31:39 compute-0 nova_compute[257802]: 2025-10-02 12:31:39.777 2 DEBUG oslo_concurrency.processutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/06e8b42b-8275-4d3a-9155-14ffb6e7fb84/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkl94tsn7" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:39 compute-0 nova_compute[257802]: 2025-10-02 12:31:39.826 2 DEBUG nova.storage.rbd_utils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] rbd image 06e8b42b-8275-4d3a-9155-14ffb6e7fb84_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:31:39 compute-0 nova_compute[257802]: 2025-10-02 12:31:39.830 2 DEBUG oslo_concurrency.processutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/06e8b42b-8275-4d3a-9155-14ffb6e7fb84/disk.config 06e8b42b-8275-4d3a-9155-14ffb6e7fb84_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:40 compute-0 nova_compute[257802]: 2025-10-02 12:31:40.330 2 DEBUG nova.network.neutron [req-e18f39d6-026b-499c-aa61-7e5ea973fee0 req-e27097c4-1625-4920-9a98-583b919b9450 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Updated VIF entry in instance network info cache for port c012bc9a-1128-406f-a560-fca6e0baaf71. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:31:40 compute-0 nova_compute[257802]: 2025-10-02 12:31:40.331 2 DEBUG nova.network.neutron [req-e18f39d6-026b-499c-aa61-7e5ea973fee0 req-e27097c4-1625-4920-9a98-583b919b9450 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Updating instance_info_cache with network_info: [{"id": "c012bc9a-1128-406f-a560-fca6e0baaf71", "address": "fa:16:3e:68:dc:16", "network": {"id": "ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-34488102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dda4e7689e7440639479cd7b0e4c17df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc012bc9a-11", "ovs_interfaceid": "c012bc9a-1128-406f-a560-fca6e0baaf71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:31:40 compute-0 nova_compute[257802]: 2025-10-02 12:31:40.373 2 DEBUG oslo_concurrency.lockutils [req-e18f39d6-026b-499c-aa61-7e5ea973fee0 req-e27097c4-1625-4920-9a98-583b919b9450 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-06e8b42b-8275-4d3a-9155-14ffb6e7fb84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:31:40 compute-0 nova_compute[257802]: 2025-10-02 12:31:40.390 2 INFO nova.network.neutron [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Updating port 386c73f3-c5a1-4edb-894f-841beabaecbd with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 02 12:31:40 compute-0 ceph-mon[73607]: pgmap v2108: 305 pgs: 305 active+clean; 726 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 492 KiB/s rd, 2.7 MiB/s wr, 101 op/s
Oct 02 12:31:40 compute-0 nova_compute[257802]: 2025-10-02 12:31:40.891 2 DEBUG oslo_concurrency.processutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/06e8b42b-8275-4d3a-9155-14ffb6e7fb84/disk.config 06e8b42b-8275-4d3a-9155-14ffb6e7fb84_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:40 compute-0 nova_compute[257802]: 2025-10-02 12:31:40.892 2 INFO nova.virt.libvirt.driver [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Deleting local config drive /var/lib/nova/instances/06e8b42b-8275-4d3a-9155-14ffb6e7fb84/disk.config because it was imported into RBD.
Oct 02 12:31:40 compute-0 NetworkManager[44987]: <info>  [1759408300.9547] manager: (tapc012bc9a-11): new Tun device (/org/freedesktop/NetworkManager/Devices/250)
Oct 02 12:31:40 compute-0 kernel: tapc012bc9a-11: entered promiscuous mode
Oct 02 12:31:40 compute-0 ovn_controller[148183]: 2025-10-02T12:31:40Z|00516|binding|INFO|Claiming lport c012bc9a-1128-406f-a560-fca6e0baaf71 for this chassis.
Oct 02 12:31:40 compute-0 nova_compute[257802]: 2025-10-02 12:31:40.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:40 compute-0 ovn_controller[148183]: 2025-10-02T12:31:40Z|00517|binding|INFO|c012bc9a-1128-406f-a560-fca6e0baaf71: Claiming fa:16:3e:68:dc:16 10.100.0.9
Oct 02 12:31:40 compute-0 ovn_controller[148183]: 2025-10-02T12:31:40Z|00518|binding|INFO|Setting lport c012bc9a-1128-406f-a560-fca6e0baaf71 ovn-installed in OVS
Oct 02 12:31:40 compute-0 nova_compute[257802]: 2025-10-02 12:31:40.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:40 compute-0 ovn_controller[148183]: 2025-10-02T12:31:40Z|00519|binding|INFO|Setting lport c012bc9a-1128-406f-a560-fca6e0baaf71 up in Southbound
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:40.978 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:dc:16 10.100.0.9'], port_security=['fa:16:3e:68:dc:16 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '06e8b42b-8275-4d3a-9155-14ffb6e7fb84', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dda4e7689e7440639479cd7b0e4c17df', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e6aa390f-2fa0-44c0-ae49-1e62408022df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d5c3d8f-0441-49ce-abe6-9ea3fee8180d, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=c012bc9a-1128-406f-a560-fca6e0baaf71) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:40.979 158261 INFO neutron.agent.ovn.metadata.agent [-] Port c012bc9a-1128-406f-a560-fca6e0baaf71 in datapath ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab bound to our chassis
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:40.981 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab
Oct 02 12:31:40 compute-0 nova_compute[257802]: 2025-10-02 12:31:40.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:40.994 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b102f466-e933-494c-a775-4ace8902e91b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:40.995 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapef70e8f0-e1 in ovnmeta-ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:31:40 compute-0 systemd-machined[211836]: New machine qemu-60-instance-00000077.
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:40.997 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapef70e8f0-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:40.997 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c6b2ef3d-4725-4048-bcfc-e53b0eec6538]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:40.998 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a8230d33-0e91-487e-8d28-66064b99b0fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:41 compute-0 systemd[1]: Started Virtual Machine qemu-60-instance-00000077.
Oct 02 12:31:41 compute-0 systemd-udevd[333561]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:41.016 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[3c71481f-4b7c-4f48-8847-72782ba3457a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:41 compute-0 NetworkManager[44987]: <info>  [1759408301.0294] device (tapc012bc9a-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:31:41 compute-0 NetworkManager[44987]: <info>  [1759408301.0303] device (tapc012bc9a-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:41.033 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[93e61cc2-9be1-4b91-80f4-f0c3c076e78a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:31:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:41.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:41.066 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[4e2f27fe-81e6-44ac-8290-d02af1c3d7c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:41 compute-0 systemd-udevd[333564]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:31:41 compute-0 NetworkManager[44987]: <info>  [1759408301.0717] manager: (tapef70e8f0-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/251)
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:41.070 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ea76ca30-50f2-4144-a9de-13897e933378]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:41.109 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[3fa26d77-3636-4c14-8967-095c0285deb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:41.113 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[9c7815a8-f6d4-4819-9c69-34f766c8c7f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:41 compute-0 NetworkManager[44987]: <info>  [1759408301.1377] device (tapef70e8f0-e0): carrier: link connected
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:41.143 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[be19ca56-b779-480f-b26f-a75161917d92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:41.162 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a53eefd3-fee7-47b1-a999-bc7ea3db38df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef70e8f0-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:a7:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 163], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636876, 'reachable_time': 41338, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333592, 'error': None, 'target': 'ovnmeta-ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:41.182 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7db2d66b-646c-4603-b84a-29fa68ff59c8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe35:a774'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 636876, 'tstamp': 636876}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333593, 'error': None, 'target': 'ovnmeta-ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:41.200 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d8943e3f-73b4-45e7-bdbf-25e5da385d1c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef70e8f0-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:a7:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 163], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636876, 'reachable_time': 41338, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 333594, 'error': None, 'target': 'ovnmeta-ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:41.229 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[37cdf79f-6d53-4c1a-b604-d709c316a271]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:41.286 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2190089e-5508-4580-a259-520069e4e400]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:41.288 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef70e8f0-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:41.288 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:41.289 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef70e8f0-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:41 compute-0 NetworkManager[44987]: <info>  [1759408301.2929] manager: (tapef70e8f0-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/252)
Oct 02 12:31:41 compute-0 kernel: tapef70e8f0-e0: entered promiscuous mode
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:41.298 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef70e8f0-e0, col_values=(('external_ids', {'iface-id': '0a6bcd16-df2d-469e-828f-8e5aa07075a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:41 compute-0 ovn_controller[148183]: 2025-10-02T12:31:41Z|00520|binding|INFO|Releasing lport 0a6bcd16-df2d-469e-828f-8e5aa07075a0 from this chassis (sb_readonly=0)
Oct 02 12:31:41 compute-0 nova_compute[257802]: 2025-10-02 12:31:41.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:41 compute-0 nova_compute[257802]: 2025-10-02 12:31:41.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:41.321 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:41.322 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b7e97a13-bcd7-417d-81fc-b701824c1006]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:41.322 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab.pid.haproxy
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:31:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:41.323 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab', 'env', 'PROCESS_TAG=haproxy-ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:31:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2109: 305 pgs: 305 active+clean; 726 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 403 KiB/s rd, 2.2 MiB/s wr, 83 op/s
Oct 02 12:31:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:41.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:41 compute-0 podman[333626]: 2025-10-02 12:31:41.678275735 +0000 UTC m=+0.030345867 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:31:41 compute-0 nova_compute[257802]: 2025-10-02 12:31:41.796 2 DEBUG oslo_concurrency.lockutils [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "refresh_cache-c8b713f4-4f41-4153-928c-164f2ed108ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:31:41 compute-0 nova_compute[257802]: 2025-10-02 12:31:41.797 2 DEBUG oslo_concurrency.lockutils [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquired lock "refresh_cache-c8b713f4-4f41-4153-928c-164f2ed108ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:31:41 compute-0 nova_compute[257802]: 2025-10-02 12:31:41.797 2 DEBUG nova.network.neutron [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:31:41 compute-0 podman[333626]: 2025-10-02 12:31:41.872628454 +0000 UTC m=+0.224698556 container create b89832061313eff219701b70765b9ea155efd39d9196fce3e24ae40c8ebd6357 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 12:31:41 compute-0 systemd[1]: Started libpod-conmon-b89832061313eff219701b70765b9ea155efd39d9196fce3e24ae40c8ebd6357.scope.
Oct 02 12:31:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:31:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f40ee7f4c84efbe1a843c82e9418dcc4c5ca133094ed38cc4c6bb27a3f0a09f0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.029 2 DEBUG nova.compute.manager [req-7b2c8041-3f9a-4f8d-b264-88de9512b43a req-bc436fcc-6c04-4315-874e-e8054022ce55 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Received event network-changed-386c73f3-c5a1-4edb-894f-841beabaecbd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.030 2 DEBUG nova.compute.manager [req-7b2c8041-3f9a-4f8d-b264-88de9512b43a req-bc436fcc-6c04-4315-874e-e8054022ce55 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Refreshing instance network info cache due to event network-changed-386c73f3-c5a1-4edb-894f-841beabaecbd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.031 2 DEBUG oslo_concurrency.lockutils [req-7b2c8041-3f9a-4f8d-b264-88de9512b43a req-bc436fcc-6c04-4315-874e-e8054022ce55 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-c8b713f4-4f41-4153-928c-164f2ed108ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:42.034 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.069 2 DEBUG nova.compute.manager [req-1a1bef82-45d0-49f3-92f7-1c2cec8353c0 req-98ab2c51-3289-493c-99ca-3094bdfc058d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Received event network-vif-plugged-386c73f3-c5a1-4edb-894f-841beabaecbd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.070 2 DEBUG oslo_concurrency.lockutils [req-1a1bef82-45d0-49f3-92f7-1c2cec8353c0 req-98ab2c51-3289-493c-99ca-3094bdfc058d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "c8b713f4-4f41-4153-928c-164f2ed108ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.070 2 DEBUG oslo_concurrency.lockutils [req-1a1bef82-45d0-49f3-92f7-1c2cec8353c0 req-98ab2c51-3289-493c-99ca-3094bdfc058d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c8b713f4-4f41-4153-928c-164f2ed108ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.071 2 DEBUG oslo_concurrency.lockutils [req-1a1bef82-45d0-49f3-92f7-1c2cec8353c0 req-98ab2c51-3289-493c-99ca-3094bdfc058d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c8b713f4-4f41-4153-928c-164f2ed108ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.071 2 DEBUG nova.compute.manager [req-1a1bef82-45d0-49f3-92f7-1c2cec8353c0 req-98ab2c51-3289-493c-99ca-3094bdfc058d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] No waiting events found dispatching network-vif-plugged-386c73f3-c5a1-4edb-894f-841beabaecbd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.072 2 WARNING nova.compute.manager [req-1a1bef82-45d0-49f3-92f7-1c2cec8353c0 req-98ab2c51-3289-493c-99ca-3094bdfc058d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Received unexpected event network-vif-plugged-386c73f3-c5a1-4edb-894f-841beabaecbd for instance with vm_state active and task_state resize_migrated.
Oct 02 12:31:42 compute-0 podman[333626]: 2025-10-02 12:31:42.08924003 +0000 UTC m=+0.441310172 container init b89832061313eff219701b70765b9ea155efd39d9196fce3e24ae40c8ebd6357 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:31:42 compute-0 podman[333626]: 2025-10-02 12:31:42.097366789 +0000 UTC m=+0.449436921 container start b89832061313eff219701b70765b9ea155efd39d9196fce3e24ae40c8ebd6357 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 12:31:42 compute-0 neutron-haproxy-ovnmeta-ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab[333678]: [NOTICE]   (333687) : New worker (333690) forked
Oct 02 12:31:42 compute-0 neutron-haproxy-ovnmeta-ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab[333678]: [NOTICE]   (333687) : Loading success.
Oct 02 12:31:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:42.263 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:31:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:31:42
Oct 02 12:31:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:31:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:31:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'images', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root']
Oct 02 12:31:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.634 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408302.6340008, 06e8b42b-8275-4d3a-9155-14ffb6e7fb84 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.634 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] VM Started (Lifecycle Event)
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.682 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.688 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408302.6342916, 06e8b42b-8275-4d3a-9155-14ffb6e7fb84 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.688 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] VM Paused (Lifecycle Event)
Oct 02 12:31:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:31:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:31:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:31:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:31:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:31:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.715 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.718 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.740 2 DEBUG nova.compute.manager [req-b7535b9e-d9be-4a04-9f6d-bc2270d1a796 req-f825a54a-e646-4b89-81b8-3571e6446352 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Received event network-vif-plugged-c012bc9a-1128-406f-a560-fca6e0baaf71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.741 2 DEBUG oslo_concurrency.lockutils [req-b7535b9e-d9be-4a04-9f6d-bc2270d1a796 req-f825a54a-e646-4b89-81b8-3571e6446352 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "06e8b42b-8275-4d3a-9155-14ffb6e7fb84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.741 2 DEBUG oslo_concurrency.lockutils [req-b7535b9e-d9be-4a04-9f6d-bc2270d1a796 req-f825a54a-e646-4b89-81b8-3571e6446352 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "06e8b42b-8275-4d3a-9155-14ffb6e7fb84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.741 2 DEBUG oslo_concurrency.lockutils [req-b7535b9e-d9be-4a04-9f6d-bc2270d1a796 req-f825a54a-e646-4b89-81b8-3571e6446352 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "06e8b42b-8275-4d3a-9155-14ffb6e7fb84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.741 2 DEBUG nova.compute.manager [req-b7535b9e-d9be-4a04-9f6d-bc2270d1a796 req-f825a54a-e646-4b89-81b8-3571e6446352 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Processing event network-vif-plugged-c012bc9a-1128-406f-a560-fca6e0baaf71 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.742 2 DEBUG nova.compute.manager [req-b7535b9e-d9be-4a04-9f6d-bc2270d1a796 req-f825a54a-e646-4b89-81b8-3571e6446352 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Received event network-vif-plugged-c012bc9a-1128-406f-a560-fca6e0baaf71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.742 2 DEBUG oslo_concurrency.lockutils [req-b7535b9e-d9be-4a04-9f6d-bc2270d1a796 req-f825a54a-e646-4b89-81b8-3571e6446352 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "06e8b42b-8275-4d3a-9155-14ffb6e7fb84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.742 2 DEBUG oslo_concurrency.lockutils [req-b7535b9e-d9be-4a04-9f6d-bc2270d1a796 req-f825a54a-e646-4b89-81b8-3571e6446352 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "06e8b42b-8275-4d3a-9155-14ffb6e7fb84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.742 2 DEBUG oslo_concurrency.lockutils [req-b7535b9e-d9be-4a04-9f6d-bc2270d1a796 req-f825a54a-e646-4b89-81b8-3571e6446352 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "06e8b42b-8275-4d3a-9155-14ffb6e7fb84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.743 2 DEBUG nova.compute.manager [req-b7535b9e-d9be-4a04-9f6d-bc2270d1a796 req-f825a54a-e646-4b89-81b8-3571e6446352 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] No waiting events found dispatching network-vif-plugged-c012bc9a-1128-406f-a560-fca6e0baaf71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.743 2 WARNING nova.compute.manager [req-b7535b9e-d9be-4a04-9f6d-bc2270d1a796 req-f825a54a-e646-4b89-81b8-3571e6446352 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Received unexpected event network-vif-plugged-c012bc9a-1128-406f-a560-fca6e0baaf71 for instance with vm_state building and task_state spawning.
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.743 2 DEBUG nova.compute.manager [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.751 2 DEBUG nova.virt.libvirt.driver [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.753 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.753 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408302.745795, 06e8b42b-8275-4d3a-9155-14ffb6e7fb84 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.754 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] VM Resumed (Lifecycle Event)
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.762 2 INFO nova.virt.libvirt.driver [-] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Instance spawned successfully.
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.763 2 DEBUG nova.virt.libvirt.driver [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.789 2 DEBUG nova.virt.libvirt.driver [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.790 2 DEBUG nova.virt.libvirt.driver [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.791 2 DEBUG nova.virt.libvirt.driver [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.792 2 DEBUG nova.virt.libvirt.driver [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.792 2 DEBUG nova.virt.libvirt.driver [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.793 2 DEBUG nova.virt.libvirt.driver [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.800 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.804 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.839 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.897 2 INFO nova.compute.manager [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Took 11.37 seconds to spawn the instance on the hypervisor.
Oct 02 12:31:42 compute-0 nova_compute[257802]: 2025-10-02 12:31:42.897 2 DEBUG nova.compute.manager [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:31:42 compute-0 ceph-mon[73607]: pgmap v2109: 305 pgs: 305 active+clean; 726 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 403 KiB/s rd, 2.2 MiB/s wr, 83 op/s
Oct 02 12:31:43 compute-0 nova_compute[257802]: 2025-10-02 12:31:43.028 2 INFO nova.compute.manager [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Took 13.47 seconds to build instance.
Oct 02 12:31:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:43.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:43 compute-0 nova_compute[257802]: 2025-10-02 12:31:43.059 2 DEBUG oslo_concurrency.lockutils [None req-ad02924e-0248-415d-b2b0-f9b2f5532bc5 fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Lock "06e8b42b-8275-4d3a-9155-14ffb6e7fb84" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.018s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:31:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:31:43 compute-0 nova_compute[257802]: 2025-10-02 12:31:43.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:31:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:31:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:31:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:31:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:31:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:31:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:31:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:31:43 compute-0 nova_compute[257802]: 2025-10-02 12:31:43.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2110: 305 pgs: 305 active+clean; 659 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 148 KiB/s rd, 1.6 MiB/s wr, 80 op/s
Oct 02 12:31:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:43.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:44 compute-0 nova_compute[257802]: 2025-10-02 12:31:44.239 2 DEBUG nova.network.neutron [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Updating instance_info_cache with network_info: [{"id": "386c73f3-c5a1-4edb-894f-841beabaecbd", "address": "fa:16:3e:94:65:0d", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap386c73f3-c5", "ovs_interfaceid": "386c73f3-c5a1-4edb-894f-841beabaecbd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:31:44 compute-0 nova_compute[257802]: 2025-10-02 12:31:44.274 2 DEBUG oslo_concurrency.lockutils [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Releasing lock "refresh_cache-c8b713f4-4f41-4153-928c-164f2ed108ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:31:44 compute-0 nova_compute[257802]: 2025-10-02 12:31:44.277 2 DEBUG oslo_concurrency.lockutils [req-7b2c8041-3f9a-4f8d-b264-88de9512b43a req-bc436fcc-6c04-4315-874e-e8054022ce55 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-c8b713f4-4f41-4153-928c-164f2ed108ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:31:44 compute-0 nova_compute[257802]: 2025-10-02 12:31:44.277 2 DEBUG nova.network.neutron [req-7b2c8041-3f9a-4f8d-b264-88de9512b43a req-bc436fcc-6c04-4315-874e-e8054022ce55 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Refreshing network info cache for port 386c73f3-c5a1-4edb-894f-841beabaecbd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:31:44 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/74631981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:44 compute-0 nova_compute[257802]: 2025-10-02 12:31:44.379 2 DEBUG os_brick.utils [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:31:44 compute-0 nova_compute[257802]: 2025-10-02 12:31:44.380 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:44 compute-0 nova_compute[257802]: 2025-10-02 12:31:44.389 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:44 compute-0 nova_compute[257802]: 2025-10-02 12:31:44.390 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[663d26bf-b45d-47e0-822d-6aa2d573f509]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:44 compute-0 nova_compute[257802]: 2025-10-02 12:31:44.391 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:44 compute-0 nova_compute[257802]: 2025-10-02 12:31:44.398 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:44 compute-0 nova_compute[257802]: 2025-10-02 12:31:44.398 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[ef41a653-ce50-4928-88ba-58eff1b570e9]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:44 compute-0 nova_compute[257802]: 2025-10-02 12:31:44.399 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:44 compute-0 nova_compute[257802]: 2025-10-02 12:31:44.412 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:44 compute-0 nova_compute[257802]: 2025-10-02 12:31:44.413 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[1f22ceac-6401-4dac-8c1e-9c16e5703fdf]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:44 compute-0 nova_compute[257802]: 2025-10-02 12:31:44.414 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[ca5d4ba1-91b6-4b18-a6cf-884ea870a901]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:44 compute-0 nova_compute[257802]: 2025-10-02 12:31:44.414 2 DEBUG oslo_concurrency.processutils [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:44 compute-0 nova_compute[257802]: 2025-10-02 12:31:44.452 2 DEBUG oslo_concurrency.processutils [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "nvme version" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:44 compute-0 nova_compute[257802]: 2025-10-02 12:31:44.454 2 DEBUG os_brick.initiator.connectors.lightos [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:31:44 compute-0 nova_compute[257802]: 2025-10-02 12:31:44.454 2 DEBUG os_brick.initiator.connectors.lightos [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:31:44 compute-0 nova_compute[257802]: 2025-10-02 12:31:44.454 2 DEBUG os_brick.initiator.connectors.lightos [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:31:44 compute-0 nova_compute[257802]: 2025-10-02 12:31:44.455 2 DEBUG os_brick.utils [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] <== get_connector_properties: return (75ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:31:44 compute-0 podman[333708]: 2025-10-02 12:31:44.932751049 +0000 UTC m=+0.072009761 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct 02 12:31:44 compute-0 podman[333710]: 2025-10-02 12:31:44.945555764 +0000 UTC m=+0.076093832 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:31:44 compute-0 podman[333709]: 2025-10-02 12:31:44.95720574 +0000 UTC m=+0.086524038 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:31:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:45.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:45 compute-0 ceph-mon[73607]: pgmap v2110: 305 pgs: 305 active+clean; 659 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 148 KiB/s rd, 1.6 MiB/s wr, 80 op/s
Oct 02 12:31:45 compute-0 systemd[1]: Stopping User Manager for UID 42436...
Oct 02 12:31:45 compute-0 systemd[333399]: Activating special unit Exit the Session...
Oct 02 12:31:45 compute-0 systemd[333399]: Stopped target Main User Target.
Oct 02 12:31:45 compute-0 systemd[333399]: Stopped target Basic System.
Oct 02 12:31:45 compute-0 systemd[333399]: Stopped target Paths.
Oct 02 12:31:45 compute-0 systemd[333399]: Stopped target Sockets.
Oct 02 12:31:45 compute-0 systemd[333399]: Stopped target Timers.
Oct 02 12:31:45 compute-0 systemd[333399]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 02 12:31:45 compute-0 systemd[333399]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 12:31:45 compute-0 systemd[333399]: Closed D-Bus User Message Bus Socket.
Oct 02 12:31:45 compute-0 systemd[333399]: Stopped Create User's Volatile Files and Directories.
Oct 02 12:31:45 compute-0 systemd[333399]: Removed slice User Application Slice.
Oct 02 12:31:45 compute-0 systemd[333399]: Reached target Shutdown.
Oct 02 12:31:45 compute-0 systemd[333399]: Finished Exit the Session.
Oct 02 12:31:45 compute-0 systemd[333399]: Reached target Exit the Session.
Oct 02 12:31:45 compute-0 systemd[1]: user@42436.service: Deactivated successfully.
Oct 02 12:31:45 compute-0 systemd[1]: Stopped User Manager for UID 42436.
Oct 02 12:31:45 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct 02 12:31:45 compute-0 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct 02 12:31:45 compute-0 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct 02 12:31:45 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct 02 12:31:45 compute-0 systemd[1]: Removed slice User Slice of UID 42436.
Oct 02 12:31:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2111: 305 pgs: 305 active+clean; 645 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 870 KiB/s rd, 861 KiB/s wr, 93 op/s
Oct 02 12:31:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:45.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:45 compute-0 sudo[333763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:45 compute-0 sudo[333763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:45 compute-0 nova_compute[257802]: 2025-10-02 12:31:45.793 2 DEBUG nova.virt.libvirt.driver [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Oct 02 12:31:45 compute-0 sudo[333763]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:45 compute-0 nova_compute[257802]: 2025-10-02 12:31:45.796 2 DEBUG nova.virt.libvirt.driver [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 12:31:45 compute-0 nova_compute[257802]: 2025-10-02 12:31:45.796 2 INFO nova.virt.libvirt.driver [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Creating image(s)
Oct 02 12:31:45 compute-0 nova_compute[257802]: 2025-10-02 12:31:45.869 2 DEBUG nova.storage.rbd_utils [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] creating snapshot(nova-resize) on rbd image(c8b713f4-4f41-4153-928c-164f2ed108ed_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:31:45 compute-0 sudo[333788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:45 compute-0 sudo[333788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:45 compute-0 sudo[333788]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e317 do_prune osdmap full prune enabled
Oct 02 12:31:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1365739230' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:46 compute-0 ceph-mon[73607]: pgmap v2111: 305 pgs: 305 active+clean; 645 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 870 KiB/s rd, 861 KiB/s wr, 93 op/s
Oct 02 12:31:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e318 e318: 3 total, 3 up, 3 in
Oct 02 12:31:46 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e318: 3 total, 3 up, 3 in
Oct 02 12:31:46 compute-0 nova_compute[257802]: 2025-10-02 12:31:46.933 2 DEBUG nova.objects.instance [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lazy-loading 'trusted_certs' on Instance uuid c8b713f4-4f41-4153-928c-164f2ed108ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:31:46 compute-0 nova_compute[257802]: 2025-10-02 12:31:46.986 2 DEBUG nova.network.neutron [req-7b2c8041-3f9a-4f8d-b264-88de9512b43a req-bc436fcc-6c04-4315-874e-e8054022ce55 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Updated VIF entry in instance network info cache for port 386c73f3-c5a1-4edb-894f-841beabaecbd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:31:46 compute-0 nova_compute[257802]: 2025-10-02 12:31:46.986 2 DEBUG nova.network.neutron [req-7b2c8041-3f9a-4f8d-b264-88de9512b43a req-bc436fcc-6c04-4315-874e-e8054022ce55 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Updating instance_info_cache with network_info: [{"id": "386c73f3-c5a1-4edb-894f-841beabaecbd", "address": "fa:16:3e:94:65:0d", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap386c73f3-c5", "ovs_interfaceid": "386c73f3-c5a1-4edb-894f-841beabaecbd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:31:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:31:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:47.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:31:47 compute-0 nova_compute[257802]: 2025-10-02 12:31:47.062 2 DEBUG nova.virt.libvirt.driver [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:31:47 compute-0 nova_compute[257802]: 2025-10-02 12:31:47.063 2 DEBUG nova.virt.libvirt.driver [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Ensure instance console log exists: /var/lib/nova/instances/c8b713f4-4f41-4153-928c-164f2ed108ed/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:31:47 compute-0 nova_compute[257802]: 2025-10-02 12:31:47.063 2 DEBUG oslo_concurrency.lockutils [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:47 compute-0 nova_compute[257802]: 2025-10-02 12:31:47.064 2 DEBUG oslo_concurrency.lockutils [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:47 compute-0 nova_compute[257802]: 2025-10-02 12:31:47.064 2 DEBUG oslo_concurrency.lockutils [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:47 compute-0 nova_compute[257802]: 2025-10-02 12:31:47.066 2 DEBUG nova.virt.libvirt.driver [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Start _get_guest_xml network_info=[{"id": "386c73f3-c5a1-4edb-894f-841beabaecbd", "address": "fa:16:3e:94:65:0d", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "vif_mac": "fa:16:3e:94:65:0d"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap386c73f3-c5", "ovs_interfaceid": "386c73f3-c5a1-4edb-894f-841beabaecbd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [{'boot_index': None, 'guest_format': None, 'attachment_id': 'b75fb30b-6a63-4647-ae72-6a623b91c7a3', 'mount_device': '/dev/vdb', 'disk_bus': 'virtio', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-1f1fe097-f4b6-4748-bf18-8e487e0f3ba6', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '1f1fe097-f4b6-4748-bf18-8e487e0f3ba6', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': 'c8b713f4-4f41-4153-928c-164f2ed108ed', 'attached_at': '2025-10-02T12:31:45.000000', 'detached_at': '', 'volume_id': '1f1fe097-f4b6-4748-bf18-8e487e0f3ba6', 'multiattach': True, 'serial': '1f1fe097-f4b6-4748-bf18-8e487e0f3ba6'}, 'device_type': 'disk', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:31:47 compute-0 nova_compute[257802]: 2025-10-02 12:31:47.067 2 DEBUG oslo_concurrency.lockutils [req-7b2c8041-3f9a-4f8d-b264-88de9512b43a req-bc436fcc-6c04-4315-874e-e8054022ce55 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-c8b713f4-4f41-4153-928c-164f2ed108ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:31:47 compute-0 nova_compute[257802]: 2025-10-02 12:31:47.073 2 WARNING nova.virt.libvirt.driver [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:31:47 compute-0 nova_compute[257802]: 2025-10-02 12:31:47.078 2 DEBUG nova.virt.libvirt.host [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:31:47 compute-0 nova_compute[257802]: 2025-10-02 12:31:47.078 2 DEBUG nova.virt.libvirt.host [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:31:47 compute-0 nova_compute[257802]: 2025-10-02 12:31:47.082 2 DEBUG nova.virt.libvirt.host [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:31:47 compute-0 nova_compute[257802]: 2025-10-02 12:31:47.082 2 DEBUG nova.virt.libvirt.host [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:31:47 compute-0 nova_compute[257802]: 2025-10-02 12:31:47.083 2 DEBUG nova.virt.libvirt.driver [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:31:47 compute-0 nova_compute[257802]: 2025-10-02 12:31:47.083 2 DEBUG nova.virt.hardware [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:39Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='eb3a53f1-304b-4cb0-acc3-abffce0fb181',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:31:47 compute-0 nova_compute[257802]: 2025-10-02 12:31:47.084 2 DEBUG nova.virt.hardware [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:31:47 compute-0 nova_compute[257802]: 2025-10-02 12:31:47.084 2 DEBUG nova.virt.hardware [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:31:47 compute-0 nova_compute[257802]: 2025-10-02 12:31:47.084 2 DEBUG nova.virt.hardware [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:31:47 compute-0 nova_compute[257802]: 2025-10-02 12:31:47.084 2 DEBUG nova.virt.hardware [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:31:47 compute-0 nova_compute[257802]: 2025-10-02 12:31:47.084 2 DEBUG nova.virt.hardware [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:31:47 compute-0 nova_compute[257802]: 2025-10-02 12:31:47.085 2 DEBUG nova.virt.hardware [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:31:47 compute-0 nova_compute[257802]: 2025-10-02 12:31:47.085 2 DEBUG nova.virt.hardware [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:31:47 compute-0 nova_compute[257802]: 2025-10-02 12:31:47.085 2 DEBUG nova.virt.hardware [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:31:47 compute-0 nova_compute[257802]: 2025-10-02 12:31:47.085 2 DEBUG nova.virt.hardware [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:31:47 compute-0 nova_compute[257802]: 2025-10-02 12:31:47.085 2 DEBUG nova.virt.hardware [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:31:47 compute-0 nova_compute[257802]: 2025-10-02 12:31:47.086 2 DEBUG nova.objects.instance [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lazy-loading 'vcpu_model' on Instance uuid c8b713f4-4f41-4153-928c-164f2ed108ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:31:47 compute-0 nova_compute[257802]: 2025-10-02 12:31:47.102 2 DEBUG oslo_concurrency.processutils [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:47.265 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:31:47 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/289115384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:47 compute-0 nova_compute[257802]: 2025-10-02 12:31:47.518 2 DEBUG oslo_concurrency.processutils [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:47 compute-0 nova_compute[257802]: 2025-10-02 12:31:47.570 2 DEBUG oslo_concurrency.processutils [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2113: 305 pgs: 305 active+clean; 660 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 879 KiB/s wr, 131 op/s
Oct 02 12:31:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:31:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:47.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:31:47 compute-0 ceph-mon[73607]: osdmap e318: 3 total, 3 up, 3 in
Oct 02 12:31:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/289115384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:31:48 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4262405039' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.071 2 DEBUG oslo_concurrency.processutils [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.106 2 DEBUG nova.virt.libvirt.vif [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:30:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=115,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBEmvr2XQXDnaV0WQDbbXt57cEK6okdC4PHEYdjpQBx2HU9OQgvgRTm3sGWmsa/AInUTPV9ABsCq2lJ9PCqfb1WP51XCZeB9QBIxafEy8h788huF0550ajkopZIwmSLpiA==',key_name='tempest-keypair-425033456',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:30:21Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='5533aaac08cd4856af72ef4992bb5e76',ramdisk_id='',reservation_id='r-1m21sn7g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-1564585024',owner_user_name='tempest-AttachVolumeMultiAttachTest-1564585024-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:31:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='22d56fcd2a4b4851bfd126ae4548ee9b',uuid=c8b713f4-4f41-4153-928c-164f2ed108ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "386c73f3-c5a1-4edb-894f-841beabaecbd", "address": "fa:16:3e:94:65:0d", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "vif_mac": "fa:16:3e:94:65:0d"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap386c73f3-c5", "ovs_interfaceid": "386c73f3-c5a1-4edb-894f-841beabaecbd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.107 2 DEBUG nova.network.os_vif_util [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Converting VIF {"id": "386c73f3-c5a1-4edb-894f-841beabaecbd", "address": "fa:16:3e:94:65:0d", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "vif_mac": "fa:16:3e:94:65:0d"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap386c73f3-c5", "ovs_interfaceid": "386c73f3-c5a1-4edb-894f-841beabaecbd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.108 2 DEBUG nova.network.os_vif_util [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:65:0d,bridge_name='br-int',has_traffic_filtering=True,id=386c73f3-c5a1-4edb-894f-841beabaecbd,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap386c73f3-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.110 2 DEBUG nova.virt.libvirt.driver [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:31:48 compute-0 nova_compute[257802]:   <uuid>c8b713f4-4f41-4153-928c-164f2ed108ed</uuid>
Oct 02 12:31:48 compute-0 nova_compute[257802]:   <name>instance-00000073</name>
Oct 02 12:31:48 compute-0 nova_compute[257802]:   <memory>196608</memory>
Oct 02 12:31:48 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:31:48 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <nova:name>multiattach-server-1</nova:name>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:31:47</nova:creationTime>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <nova:flavor name="m1.micro">
Oct 02 12:31:48 compute-0 nova_compute[257802]:         <nova:memory>192</nova:memory>
Oct 02 12:31:48 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:31:48 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:31:48 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:31:48 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:31:48 compute-0 nova_compute[257802]:         <nova:user uuid="22d56fcd2a4b4851bfd126ae4548ee9b">tempest-AttachVolumeMultiAttachTest-1564585024-project-member</nova:user>
Oct 02 12:31:48 compute-0 nova_compute[257802]:         <nova:project uuid="5533aaac08cd4856af72ef4992bb5e76">tempest-AttachVolumeMultiAttachTest-1564585024</nova:project>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:31:48 compute-0 nova_compute[257802]:         <nova:port uuid="386c73f3-c5a1-4edb-894f-841beabaecbd">
Oct 02 12:31:48 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:31:48 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:31:48 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <system>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <entry name="serial">c8b713f4-4f41-4153-928c-164f2ed108ed</entry>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <entry name="uuid">c8b713f4-4f41-4153-928c-164f2ed108ed</entry>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     </system>
Oct 02 12:31:48 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:31:48 compute-0 nova_compute[257802]:   <os>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:   </os>
Oct 02 12:31:48 compute-0 nova_compute[257802]:   <features>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:   </features>
Oct 02 12:31:48 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:31:48 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:31:48 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/c8b713f4-4f41-4153-928c-164f2ed108ed_disk">
Oct 02 12:31:48 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       </source>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:31:48 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/c8b713f4-4f41-4153-928c-164f2ed108ed_disk.config">
Oct 02 12:31:48 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       </source>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:31:48 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <source protocol="rbd" name="volumes/volume-1f1fe097-f4b6-4748-bf18-8e487e0f3ba6">
Oct 02 12:31:48 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       </source>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:31:48 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <target dev="vdb" bus="virtio"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <serial>1f1fe097-f4b6-4748-bf18-8e487e0f3ba6</serial>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <shareable/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:94:65:0d"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <target dev="tap386c73f3-c5"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/c8b713f4-4f41-4153-928c-164f2ed108ed/console.log" append="off"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <video>
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     </video>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:31:48 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:31:48 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:31:48 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:31:48 compute-0 nova_compute[257802]: </domain>
Oct 02 12:31:48 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.117 2 DEBUG nova.virt.libvirt.vif [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:30:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=115,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBEmvr2XQXDnaV0WQDbbXt57cEK6okdC4PHEYdjpQBx2HU9OQgvgRTm3sGWmsa/AInUTPV9ABsCq2lJ9PCqfb1WP51XCZeB9QBIxafEy8h788huF0550ajkopZIwmSLpiA==',key_name='tempest-keypair-425033456',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:30:21Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='5533aaac08cd4856af72ef4992bb5e76',ramdisk_id='',reservation_id='r-1m21sn7g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-1564585024',owner_user_name='tempest-AttachVolumeMultiAttachTest-1564585024-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:31:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='22d56fcd2a4b4851bfd126ae4548ee9b',uuid=c8b713f4-4f41-4153-928c-164f2ed108ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "386c73f3-c5a1-4edb-894f-841beabaecbd", "address": "fa:16:3e:94:65:0d", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "vif_mac": "fa:16:3e:94:65:0d"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap386c73f3-c5", "ovs_interfaceid": "386c73f3-c5a1-4edb-894f-841beabaecbd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.118 2 DEBUG nova.network.os_vif_util [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Converting VIF {"id": "386c73f3-c5a1-4edb-894f-841beabaecbd", "address": "fa:16:3e:94:65:0d", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "vif_mac": "fa:16:3e:94:65:0d"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap386c73f3-c5", "ovs_interfaceid": "386c73f3-c5a1-4edb-894f-841beabaecbd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.120 2 DEBUG nova.network.os_vif_util [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:65:0d,bridge_name='br-int',has_traffic_filtering=True,id=386c73f3-c5a1-4edb-894f-841beabaecbd,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap386c73f3-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.121 2 DEBUG os_vif [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:65:0d,bridge_name='br-int',has_traffic_filtering=True,id=386c73f3-c5a1-4edb-894f-841beabaecbd,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap386c73f3-c5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.122 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.122 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.125 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap386c73f3-c5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.125 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap386c73f3-c5, col_values=(('external_ids', {'iface-id': '386c73f3-c5a1-4edb-894f-841beabaecbd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:94:65:0d', 'vm-uuid': 'c8b713f4-4f41-4153-928c-164f2ed108ed'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:48 compute-0 NetworkManager[44987]: <info>  [1759408308.1280] manager: (tap386c73f3-c5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/253)
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.133 2 INFO os_vif [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:65:0d,bridge_name='br-int',has_traffic_filtering=True,id=386c73f3-c5a1-4edb-894f-841beabaecbd,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap386c73f3-c5')
Oct 02 12:31:48 compute-0 podman[333950]: 2025-10-02 12:31:48.251062672 +0000 UTC m=+0.090605438 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.261 2 DEBUG nova.virt.libvirt.driver [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.261 2 DEBUG nova.virt.libvirt.driver [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.262 2 DEBUG nova.virt.libvirt.driver [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.262 2 DEBUG nova.virt.libvirt.driver [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] No VIF found with MAC fa:16:3e:94:65:0d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.262 2 INFO nova.virt.libvirt.driver [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Using config drive
Oct 02 12:31:48 compute-0 NetworkManager[44987]: <info>  [1759408308.3585] manager: (tap386c73f3-c5): new Tun device (/org/freedesktop/NetworkManager/Devices/254)
Oct 02 12:31:48 compute-0 kernel: tap386c73f3-c5: entered promiscuous mode
Oct 02 12:31:48 compute-0 ovn_controller[148183]: 2025-10-02T12:31:48Z|00521|binding|INFO|Claiming lport 386c73f3-c5a1-4edb-894f-841beabaecbd for this chassis.
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:48 compute-0 ovn_controller[148183]: 2025-10-02T12:31:48Z|00522|binding|INFO|386c73f3-c5a1-4edb-894f-841beabaecbd: Claiming fa:16:3e:94:65:0d 10.100.0.4
Oct 02 12:31:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:48.373 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:65:0d 10.100.0.4'], port_security=['fa:16:3e:94:65:0d 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c8b713f4-4f41-4153-928c-164f2ed108ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-585473f8-52e4-4e55-96df-8a236d361126', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5533aaac08cd4856af72ef4992bb5e76', 'neutron:revision_number': '6', 'neutron:security_group_ids': '0a7e36b3-799e-47d8-a152-7f7146431afe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec297f04-3bda-490f-87d3-1f684caf96fd, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=386c73f3-c5a1-4edb-894f-841beabaecbd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:31:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:48.374 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 386c73f3-c5a1-4edb-894f-841beabaecbd in datapath 585473f8-52e4-4e55-96df-8a236d361126 bound to our chassis
Oct 02 12:31:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:48.376 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 585473f8-52e4-4e55-96df-8a236d361126
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:48 compute-0 ovn_controller[148183]: 2025-10-02T12:31:48Z|00523|binding|INFO|Setting lport 386c73f3-c5a1-4edb-894f-841beabaecbd ovn-installed in OVS
Oct 02 12:31:48 compute-0 ovn_controller[148183]: 2025-10-02T12:31:48Z|00524|binding|INFO|Setting lport 386c73f3-c5a1-4edb-894f-841beabaecbd up in Southbound
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:48 compute-0 systemd-machined[211836]: New machine qemu-61-instance-00000073.
Oct 02 12:31:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:48.399 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[37b50fa2-e19e-4d07-ad26-6fcae6fc3862]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:48 compute-0 systemd[1]: Started Virtual Machine qemu-61-instance-00000073.
Oct 02 12:31:48 compute-0 systemd-udevd[334011]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:31:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:48.429 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[6a45ffa5-c033-4932-ac09-121f44127a59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:48.435 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[51fa00de-d6d0-4454-9fee-aaca34d58a80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:48 compute-0 NetworkManager[44987]: <info>  [1759408308.4375] device (tap386c73f3-c5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:31:48 compute-0 NetworkManager[44987]: <info>  [1759408308.4385] device (tap386c73f3-c5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:31:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:48.470 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[e3c102e6-77e1-4204-887f-2ebba13dfc47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:48.491 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a4b6bd49-aaaf-4369-a1e5-b32a1720ac4a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap585473f8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:8e:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618892, 'reachable_time': 35411, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334021, 'error': None, 'target': 'ovnmeta-585473f8-52e4-4e55-96df-8a236d361126', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:48.507 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[fe6ef75c-60e0-46f8-8c51-cadf6cc484bf]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap585473f8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618902, 'tstamp': 618902}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334023, 'error': None, 'target': 'ovnmeta-585473f8-52e4-4e55-96df-8a236d361126', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap585473f8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618905, 'tstamp': 618905}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334023, 'error': None, 'target': 'ovnmeta-585473f8-52e4-4e55-96df-8a236d361126', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:48.509 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap585473f8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:48.512 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap585473f8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:48.512 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:31:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:48.512 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap585473f8-50, col_values=(('external_ids', {'iface-id': '02b7597d-2fc1-4c56-8603-4dcb0c716c82'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:48.513 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:31:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.826 2 DEBUG oslo_concurrency.lockutils [None req-92d9e2ea-2336-4b8c-a457-43de2b85df1d fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Acquiring lock "06e8b42b-8275-4d3a-9155-14ffb6e7fb84" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.827 2 DEBUG oslo_concurrency.lockutils [None req-92d9e2ea-2336-4b8c-a457-43de2b85df1d fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Lock "06e8b42b-8275-4d3a-9155-14ffb6e7fb84" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.827 2 DEBUG oslo_concurrency.lockutils [None req-92d9e2ea-2336-4b8c-a457-43de2b85df1d fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Acquiring lock "06e8b42b-8275-4d3a-9155-14ffb6e7fb84-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.827 2 DEBUG oslo_concurrency.lockutils [None req-92d9e2ea-2336-4b8c-a457-43de2b85df1d fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Lock "06e8b42b-8275-4d3a-9155-14ffb6e7fb84-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.827 2 DEBUG oslo_concurrency.lockutils [None req-92d9e2ea-2336-4b8c-a457-43de2b85df1d fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Lock "06e8b42b-8275-4d3a-9155-14ffb6e7fb84-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.828 2 INFO nova.compute.manager [None req-92d9e2ea-2336-4b8c-a457-43de2b85df1d fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Terminating instance
Oct 02 12:31:48 compute-0 nova_compute[257802]: 2025-10-02 12:31:48.829 2 DEBUG nova.compute.manager [None req-92d9e2ea-2336-4b8c-a457-43de2b85df1d fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:31:49 compute-0 kernel: tapc012bc9a-11 (unregistering): left promiscuous mode
Oct 02 12:31:49 compute-0 NetworkManager[44987]: <info>  [1759408309.0204] device (tapc012bc9a-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:31:49 compute-0 ovn_controller[148183]: 2025-10-02T12:31:49Z|00525|binding|INFO|Releasing lport c012bc9a-1128-406f-a560-fca6e0baaf71 from this chassis (sb_readonly=0)
Oct 02 12:31:49 compute-0 ovn_controller[148183]: 2025-10-02T12:31:49Z|00526|binding|INFO|Setting lport c012bc9a-1128-406f-a560-fca6e0baaf71 down in Southbound
Oct 02 12:31:49 compute-0 ovn_controller[148183]: 2025-10-02T12:31:49Z|00527|binding|INFO|Removing iface tapc012bc9a-11 ovn-installed in OVS
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.032 2 DEBUG nova.compute.manager [req-6d7460c6-8c4e-4524-a2dc-013a535677e5 req-a968322e-0b00-4386-97e6-58f07cf3d31e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Received event network-vif-plugged-386c73f3-c5a1-4edb-894f-841beabaecbd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.032 2 DEBUG oslo_concurrency.lockutils [req-6d7460c6-8c4e-4524-a2dc-013a535677e5 req-a968322e-0b00-4386-97e6-58f07cf3d31e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "c8b713f4-4f41-4153-928c-164f2ed108ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.033 2 DEBUG oslo_concurrency.lockutils [req-6d7460c6-8c4e-4524-a2dc-013a535677e5 req-a968322e-0b00-4386-97e6-58f07cf3d31e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c8b713f4-4f41-4153-928c-164f2ed108ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.033 2 DEBUG oslo_concurrency.lockutils [req-6d7460c6-8c4e-4524-a2dc-013a535677e5 req-a968322e-0b00-4386-97e6-58f07cf3d31e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c8b713f4-4f41-4153-928c-164f2ed108ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.033 2 DEBUG nova.compute.manager [req-6d7460c6-8c4e-4524-a2dc-013a535677e5 req-a968322e-0b00-4386-97e6-58f07cf3d31e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] No waiting events found dispatching network-vif-plugged-386c73f3-c5a1-4edb-894f-841beabaecbd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:31:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:49.033 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:dc:16 10.100.0.9'], port_security=['fa:16:3e:68:dc:16 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '06e8b42b-8275-4d3a-9155-14ffb6e7fb84', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dda4e7689e7440639479cd7b0e4c17df', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e6aa390f-2fa0-44c0-ae49-1e62408022df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d5c3d8f-0441-49ce-abe6-9ea3fee8180d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=c012bc9a-1128-406f-a560-fca6e0baaf71) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.033 2 WARNING nova.compute.manager [req-6d7460c6-8c4e-4524-a2dc-013a535677e5 req-a968322e-0b00-4386-97e6-58f07cf3d31e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Received unexpected event network-vif-plugged-386c73f3-c5a1-4edb-894f-841beabaecbd for instance with vm_state active and task_state resize_finish.
Oct 02 12:31:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:49.034 158261 INFO neutron.agent.ovn.metadata.agent [-] Port c012bc9a-1128-406f-a560-fca6e0baaf71 in datapath ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab unbound from our chassis
Oct 02 12:31:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:49.035 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:31:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:49.036 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[95b3423d-5880-4bcf-a9ac-774792f4957d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:49.037 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab namespace which is not needed anymore
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:49.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:49 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000077.scope: Deactivated successfully.
Oct 02 12:31:49 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000077.scope: Consumed 7.585s CPU time.
Oct 02 12:31:49 compute-0 systemd-machined[211836]: Machine qemu-60-instance-00000077 terminated.
Oct 02 12:31:49 compute-0 ceph-mon[73607]: pgmap v2113: 305 pgs: 305 active+clean; 660 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 879 KiB/s wr, 131 op/s
Oct 02 12:31:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/800553834' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4262405039' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/259927622' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/250638847' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.268 2 INFO nova.virt.libvirt.driver [-] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Instance destroyed successfully.
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.268 2 DEBUG nova.objects.instance [None req-92d9e2ea-2336-4b8c-a457-43de2b85df1d fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Lazy-loading 'resources' on Instance uuid 06e8b42b-8275-4d3a-9155-14ffb6e7fb84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:31:49 compute-0 neutron-haproxy-ovnmeta-ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab[333678]: [NOTICE]   (333687) : haproxy version is 2.8.14-c23fe91
Oct 02 12:31:49 compute-0 neutron-haproxy-ovnmeta-ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab[333678]: [NOTICE]   (333687) : path to executable is /usr/sbin/haproxy
Oct 02 12:31:49 compute-0 neutron-haproxy-ovnmeta-ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab[333678]: [WARNING]  (333687) : Exiting Master process...
Oct 02 12:31:49 compute-0 neutron-haproxy-ovnmeta-ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab[333678]: [WARNING]  (333687) : Exiting Master process...
Oct 02 12:31:49 compute-0 neutron-haproxy-ovnmeta-ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab[333678]: [ALERT]    (333687) : Current worker (333690) exited with code 143 (Terminated)
Oct 02 12:31:49 compute-0 neutron-haproxy-ovnmeta-ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab[333678]: [WARNING]  (333687) : All workers exited. Exiting... (0)
Oct 02 12:31:49 compute-0 systemd[1]: libpod-b89832061313eff219701b70765b9ea155efd39d9196fce3e24ae40c8ebd6357.scope: Deactivated successfully.
Oct 02 12:31:49 compute-0 podman[334105]: 2025-10-02 12:31:49.308688045 +0000 UTC m=+0.154572742 container died b89832061313eff219701b70765b9ea155efd39d9196fce3e24ae40c8ebd6357 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.344 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408309.3440876, c8b713f4-4f41-4153-928c-164f2ed108ed => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.344 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] VM Resumed (Lifecycle Event)
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.346 2 DEBUG nova.compute.manager [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.350 2 INFO nova.virt.libvirt.driver [-] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Instance running successfully.
Oct 02 12:31:49 compute-0 virtqemud[257280]: argument unsupported: QEMU guest agent is not configured
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.353 2 DEBUG nova.virt.libvirt.guest [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.354 2 DEBUG nova.virt.libvirt.driver [None req-bbfe8bb3-4f9f-443e-a7a1-89d804bda2d7 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.389 2 DEBUG nova.virt.libvirt.vif [None req-92d9e2ea-2336-4b8c-a457-43de2b85df1d fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1526479921',display_name='tempest-ServerMetadataNegativeTestJSON-server-1526479921',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1526479921',id=119,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dda4e7689e7440639479cd7b0e4c17df',ramdisk_id='',reservation_id='r-y8tu3z9f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataNegativeTestJSON-965970566',owner_user_name='tempest-ServerMetadataNegativeTestJSON-965970566-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:31:42Z,user_data=None,user_id='fc18358f9af64753bc8892379b9244c6',uuid=06e8b42b-8275-4d3a-9155-14ffb6e7fb84,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c012bc9a-1128-406f-a560-fca6e0baaf71", "address": "fa:16:3e:68:dc:16", "network": {"id": "ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-34488102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dda4e7689e7440639479cd7b0e4c17df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc012bc9a-11", "ovs_interfaceid": "c012bc9a-1128-406f-a560-fca6e0baaf71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.389 2 DEBUG nova.network.os_vif_util [None req-92d9e2ea-2336-4b8c-a457-43de2b85df1d fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Converting VIF {"id": "c012bc9a-1128-406f-a560-fca6e0baaf71", "address": "fa:16:3e:68:dc:16", "network": {"id": "ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-34488102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dda4e7689e7440639479cd7b0e4c17df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc012bc9a-11", "ovs_interfaceid": "c012bc9a-1128-406f-a560-fca6e0baaf71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.390 2 DEBUG nova.network.os_vif_util [None req-92d9e2ea-2336-4b8c-a457-43de2b85df1d fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:dc:16,bridge_name='br-int',has_traffic_filtering=True,id=c012bc9a-1128-406f-a560-fca6e0baaf71,network=Network(ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc012bc9a-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.391 2 DEBUG os_vif [None req-92d9e2ea-2336-4b8c-a457-43de2b85df1d fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:dc:16,bridge_name='br-int',has_traffic_filtering=True,id=c012bc9a-1128-406f-a560-fca6e0baaf71,network=Network(ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc012bc9a-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.393 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc012bc9a-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.398 2 INFO os_vif [None req-92d9e2ea-2336-4b8c-a457-43de2b85df1d fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:dc:16,bridge_name='br-int',has_traffic_filtering=True,id=c012bc9a-1128-406f-a560-fca6e0baaf71,network=Network(ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc012bc9a-11')
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.431 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.438 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.486 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] During sync_power_state the instance has a pending task (resize_finish). Skip.
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.486 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408309.3447526, c8b713f4-4f41-4153-928c-164f2ed108ed => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.487 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] VM Started (Lifecycle Event)
Oct 02 12:31:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b89832061313eff219701b70765b9ea155efd39d9196fce3e24ae40c8ebd6357-userdata-shm.mount: Deactivated successfully.
Oct 02 12:31:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-f40ee7f4c84efbe1a843c82e9418dcc4c5ca133094ed38cc4c6bb27a3f0a09f0-merged.mount: Deactivated successfully.
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.622 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.625 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:31:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2114: 305 pgs: 305 active+clean; 691 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 187 op/s
Oct 02 12:31:49 compute-0 nova_compute[257802]: 2025-10-02 12:31:49.699 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] During sync_power_state the instance has a pending task (resize_finish). Skip.
Oct 02 12:31:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:31:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:49.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:31:50 compute-0 podman[334105]: 2025-10-02 12:31:50.096168425 +0000 UTC m=+0.942053112 container cleanup b89832061313eff219701b70765b9ea155efd39d9196fce3e24ae40c8ebd6357 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:31:50 compute-0 systemd[1]: libpod-conmon-b89832061313eff219701b70765b9ea155efd39d9196fce3e24ae40c8ebd6357.scope: Deactivated successfully.
Oct 02 12:31:50 compute-0 nova_compute[257802]: 2025-10-02 12:31:50.464 2 DEBUG nova.compute.manager [req-22814f51-a677-4bc5-9645-4c02d0db3f03 req-91d52e41-0a34-455c-a255-fd36350dfddf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Received event network-vif-unplugged-c012bc9a-1128-406f-a560-fca6e0baaf71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:50 compute-0 nova_compute[257802]: 2025-10-02 12:31:50.465 2 DEBUG oslo_concurrency.lockutils [req-22814f51-a677-4bc5-9645-4c02d0db3f03 req-91d52e41-0a34-455c-a255-fd36350dfddf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "06e8b42b-8275-4d3a-9155-14ffb6e7fb84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:50 compute-0 nova_compute[257802]: 2025-10-02 12:31:50.466 2 DEBUG oslo_concurrency.lockutils [req-22814f51-a677-4bc5-9645-4c02d0db3f03 req-91d52e41-0a34-455c-a255-fd36350dfddf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "06e8b42b-8275-4d3a-9155-14ffb6e7fb84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:50 compute-0 nova_compute[257802]: 2025-10-02 12:31:50.466 2 DEBUG oslo_concurrency.lockutils [req-22814f51-a677-4bc5-9645-4c02d0db3f03 req-91d52e41-0a34-455c-a255-fd36350dfddf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "06e8b42b-8275-4d3a-9155-14ffb6e7fb84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:50 compute-0 nova_compute[257802]: 2025-10-02 12:31:50.466 2 DEBUG nova.compute.manager [req-22814f51-a677-4bc5-9645-4c02d0db3f03 req-91d52e41-0a34-455c-a255-fd36350dfddf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] No waiting events found dispatching network-vif-unplugged-c012bc9a-1128-406f-a560-fca6e0baaf71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:31:50 compute-0 nova_compute[257802]: 2025-10-02 12:31:50.467 2 DEBUG nova.compute.manager [req-22814f51-a677-4bc5-9645-4c02d0db3f03 req-91d52e41-0a34-455c-a255-fd36350dfddf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Received event network-vif-unplugged-c012bc9a-1128-406f-a560-fca6e0baaf71 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:31:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3523354863' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:50 compute-0 podman[334162]: 2025-10-02 12:31:50.833109574 +0000 UTC m=+0.714754634 container remove b89832061313eff219701b70765b9ea155efd39d9196fce3e24ae40c8ebd6357 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:31:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:50.843 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[60a85e57-47ad-4e73-b922-0404f38b82cb]: (4, ('Thu Oct  2 12:31:49 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab (b89832061313eff219701b70765b9ea155efd39d9196fce3e24ae40c8ebd6357)\nb89832061313eff219701b70765b9ea155efd39d9196fce3e24ae40c8ebd6357\nThu Oct  2 12:31:50 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab (b89832061313eff219701b70765b9ea155efd39d9196fce3e24ae40c8ebd6357)\nb89832061313eff219701b70765b9ea155efd39d9196fce3e24ae40c8ebd6357\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:50.845 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0318b3c0-14cb-4f5d-b6b7-a9ee3eb91730]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:50.846 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef70e8f0-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:50 compute-0 nova_compute[257802]: 2025-10-02 12:31:50.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:50 compute-0 kernel: tapef70e8f0-e0: left promiscuous mode
Oct 02 12:31:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:50.853 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5ebb94bd-5da1-4106-af59-ebc7c161358e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:50 compute-0 nova_compute[257802]: 2025-10-02 12:31:50.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:50.875 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[da4a21b7-c79d-4c9f-9dce-bc6d6a353d11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:50.876 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[61df0575-9a85-446f-bc72-e6982e372c69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:50.896 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e661f38b-5f16-4477-b26a-21fe6c33be2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636868, 'reachable_time': 17129, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334180, 'error': None, 'target': 'ovnmeta-ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:50 compute-0 systemd[1]: run-netns-ovnmeta\x2def70e8f0\x2de539\x2d4fdd\x2d92b0\x2db2d5e451b5ab.mount: Deactivated successfully.
Oct 02 12:31:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:50.901 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ef70e8f0-e539-4fdd-92b0-b2d5e451b5ab deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:31:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:31:50.902 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[b45b05f3-6dc5-404d-b7b5-31d3691d4c8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:51.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:51 compute-0 nova_compute[257802]: 2025-10-02 12:31:51.156 2 DEBUG nova.compute.manager [req-2e4aa462-ef48-49ae-9fcc-8a2bd97a3379 req-37ef9860-985d-405e-999c-50fa4f7c611b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Received event network-vif-plugged-386c73f3-c5a1-4edb-894f-841beabaecbd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:51 compute-0 nova_compute[257802]: 2025-10-02 12:31:51.157 2 DEBUG oslo_concurrency.lockutils [req-2e4aa462-ef48-49ae-9fcc-8a2bd97a3379 req-37ef9860-985d-405e-999c-50fa4f7c611b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "c8b713f4-4f41-4153-928c-164f2ed108ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:51 compute-0 nova_compute[257802]: 2025-10-02 12:31:51.157 2 DEBUG oslo_concurrency.lockutils [req-2e4aa462-ef48-49ae-9fcc-8a2bd97a3379 req-37ef9860-985d-405e-999c-50fa4f7c611b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c8b713f4-4f41-4153-928c-164f2ed108ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:51 compute-0 nova_compute[257802]: 2025-10-02 12:31:51.157 2 DEBUG oslo_concurrency.lockutils [req-2e4aa462-ef48-49ae-9fcc-8a2bd97a3379 req-37ef9860-985d-405e-999c-50fa4f7c611b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c8b713f4-4f41-4153-928c-164f2ed108ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:51 compute-0 nova_compute[257802]: 2025-10-02 12:31:51.157 2 DEBUG nova.compute.manager [req-2e4aa462-ef48-49ae-9fcc-8a2bd97a3379 req-37ef9860-985d-405e-999c-50fa4f7c611b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] No waiting events found dispatching network-vif-plugged-386c73f3-c5a1-4edb-894f-841beabaecbd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:31:51 compute-0 nova_compute[257802]: 2025-10-02 12:31:51.158 2 WARNING nova.compute.manager [req-2e4aa462-ef48-49ae-9fcc-8a2bd97a3379 req-37ef9860-985d-405e-999c-50fa4f7c611b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Received unexpected event network-vif-plugged-386c73f3-c5a1-4edb-894f-841beabaecbd for instance with vm_state resized and task_state None.
Oct 02 12:31:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2115: 305 pgs: 305 active+clean; 691 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 187 op/s
Oct 02 12:31:51 compute-0 ceph-mon[73607]: pgmap v2114: 305 pgs: 305 active+clean; 691 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 187 op/s
Oct 02 12:31:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:51.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:52 compute-0 nova_compute[257802]: 2025-10-02 12:31:52.685 2 DEBUG nova.compute.manager [req-566e7004-ccdc-46ae-a86b-029a7b2afa38 req-e40a1575-4b42-48f1-8719-dd983e7a324b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Received event network-vif-plugged-c012bc9a-1128-406f-a560-fca6e0baaf71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:52 compute-0 nova_compute[257802]: 2025-10-02 12:31:52.685 2 DEBUG oslo_concurrency.lockutils [req-566e7004-ccdc-46ae-a86b-029a7b2afa38 req-e40a1575-4b42-48f1-8719-dd983e7a324b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "06e8b42b-8275-4d3a-9155-14ffb6e7fb84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:52 compute-0 nova_compute[257802]: 2025-10-02 12:31:52.685 2 DEBUG oslo_concurrency.lockutils [req-566e7004-ccdc-46ae-a86b-029a7b2afa38 req-e40a1575-4b42-48f1-8719-dd983e7a324b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "06e8b42b-8275-4d3a-9155-14ffb6e7fb84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:52 compute-0 nova_compute[257802]: 2025-10-02 12:31:52.686 2 DEBUG oslo_concurrency.lockutils [req-566e7004-ccdc-46ae-a86b-029a7b2afa38 req-e40a1575-4b42-48f1-8719-dd983e7a324b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "06e8b42b-8275-4d3a-9155-14ffb6e7fb84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:52 compute-0 nova_compute[257802]: 2025-10-02 12:31:52.686 2 DEBUG nova.compute.manager [req-566e7004-ccdc-46ae-a86b-029a7b2afa38 req-e40a1575-4b42-48f1-8719-dd983e7a324b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] No waiting events found dispatching network-vif-plugged-c012bc9a-1128-406f-a560-fca6e0baaf71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:31:52 compute-0 nova_compute[257802]: 2025-10-02 12:31:52.686 2 WARNING nova.compute.manager [req-566e7004-ccdc-46ae-a86b-029a7b2afa38 req-e40a1575-4b42-48f1-8719-dd983e7a324b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Received unexpected event network-vif-plugged-c012bc9a-1128-406f-a560-fca6e0baaf71 for instance with vm_state active and task_state deleting.
Oct 02 12:31:52 compute-0 nova_compute[257802]: 2025-10-02 12:31:52.698 2 INFO nova.virt.libvirt.driver [None req-92d9e2ea-2336-4b8c-a457-43de2b85df1d fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Deleting instance files /var/lib/nova/instances/06e8b42b-8275-4d3a-9155-14ffb6e7fb84_del
Oct 02 12:31:52 compute-0 nova_compute[257802]: 2025-10-02 12:31:52.699 2 INFO nova.virt.libvirt.driver [None req-92d9e2ea-2336-4b8c-a457-43de2b85df1d fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Deletion of /var/lib/nova/instances/06e8b42b-8275-4d3a-9155-14ffb6e7fb84_del complete
Oct 02 12:31:52 compute-0 nova_compute[257802]: 2025-10-02 12:31:52.802 2 INFO nova.compute.manager [None req-92d9e2ea-2336-4b8c-a457-43de2b85df1d fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Took 3.97 seconds to destroy the instance on the hypervisor.
Oct 02 12:31:52 compute-0 nova_compute[257802]: 2025-10-02 12:31:52.803 2 DEBUG oslo.service.loopingcall [None req-92d9e2ea-2336-4b8c-a457-43de2b85df1d fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:31:52 compute-0 nova_compute[257802]: 2025-10-02 12:31:52.804 2 DEBUG nova.compute.manager [-] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:31:52 compute-0 nova_compute[257802]: 2025-10-02 12:31:52.804 2 DEBUG nova.network.neutron [-] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:31:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:31:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:53.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:31:53 compute-0 nova_compute[257802]: 2025-10-02 12:31:53.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:53 compute-0 ceph-mon[73607]: pgmap v2115: 305 pgs: 305 active+clean; 691 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 187 op/s
Oct 02 12:31:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 305 active+clean; 662 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 2.2 MiB/s wr, 326 op/s
Oct 02 12:31:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:31:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:53.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:31:54 compute-0 nova_compute[257802]: 2025-10-02 12:31:54.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:31:54 compute-0 nova_compute[257802]: 2025-10-02 12:31:54.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:31:54 compute-0 nova_compute[257802]: 2025-10-02 12:31:54.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:31:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/12138334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:54 compute-0 nova_compute[257802]: 2025-10-02 12:31:54.274 2 DEBUG nova.network.neutron [-] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:31:54 compute-0 nova_compute[257802]: 2025-10-02 12:31:54.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:54 compute-0 nova_compute[257802]: 2025-10-02 12:31:54.408 2 INFO nova.compute.manager [-] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Took 1.60 seconds to deallocate network for instance.
Oct 02 12:31:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:31:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:31:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:31:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:31:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.012228750814256468 of space, bias 1.0, pg target 3.6686252442769405 quantized to 32 (current 32)
Oct 02 12:31:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:31:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004323374685929649 of space, bias 1.0, pg target 1.2840422817211057 quantized to 32 (current 32)
Oct 02 12:31:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:31:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:31:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:31:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Oct 02 12:31:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:31:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Oct 02 12:31:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:31:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:31:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:31:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.0002689954401637819 quantized to 32 (current 32)
Oct 02 12:31:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:31:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Oct 02 12:31:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:31:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:31:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:31:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Oct 02 12:31:54 compute-0 nova_compute[257802]: 2025-10-02 12:31:54.442 2 DEBUG nova.compute.manager [req-b05c160c-708e-4652-bf59-a00d3a52d227 req-8e7d6f51-273a-467f-8482-791a1b30a4e3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Received event network-vif-deleted-c012bc9a-1128-406f-a560-fca6e0baaf71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:54 compute-0 nova_compute[257802]: 2025-10-02 12:31:54.528 2 DEBUG oslo_concurrency.lockutils [None req-92d9e2ea-2336-4b8c-a457-43de2b85df1d fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:54 compute-0 nova_compute[257802]: 2025-10-02 12:31:54.529 2 DEBUG oslo_concurrency.lockutils [None req-92d9e2ea-2336-4b8c-a457-43de2b85df1d fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:54 compute-0 nova_compute[257802]: 2025-10-02 12:31:54.621 2 DEBUG oslo_concurrency.processutils [None req-92d9e2ea-2336-4b8c-a457-43de2b85df1d fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:31:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4292250106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:55 compute-0 nova_compute[257802]: 2025-10-02 12:31:55.048 2 DEBUG oslo_concurrency.processutils [None req-92d9e2ea-2336-4b8c-a457-43de2b85df1d fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:55 compute-0 nova_compute[257802]: 2025-10-02 12:31:55.054 2 DEBUG nova.compute.provider_tree [None req-92d9e2ea-2336-4b8c-a457-43de2b85df1d fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:31:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:31:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:55.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:31:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:31:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4088357111' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:31:55 compute-0 nova_compute[257802]: 2025-10-02 12:31:55.096 2 DEBUG nova.scheduler.client.report [None req-92d9e2ea-2336-4b8c-a457-43de2b85df1d fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:31:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:31:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4088357111' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:31:55 compute-0 nova_compute[257802]: 2025-10-02 12:31:55.168 2 DEBUG oslo_concurrency.lockutils [None req-92d9e2ea-2336-4b8c-a457-43de2b85df1d fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:55 compute-0 ceph-mon[73607]: pgmap v2116: 305 pgs: 305 active+clean; 662 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 2.2 MiB/s wr, 326 op/s
Oct 02 12:31:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4292250106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/4088357111' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:31:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/4088357111' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:31:55 compute-0 nova_compute[257802]: 2025-10-02 12:31:55.224 2 INFO nova.scheduler.client.report [None req-92d9e2ea-2336-4b8c-a457-43de2b85df1d fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Deleted allocations for instance 06e8b42b-8275-4d3a-9155-14ffb6e7fb84
Oct 02 12:31:55 compute-0 nova_compute[257802]: 2025-10-02 12:31:55.399 2 DEBUG oslo_concurrency.lockutils [None req-92d9e2ea-2336-4b8c-a457-43de2b85df1d fc18358f9af64753bc8892379b9244c6 dda4e7689e7440639479cd7b0e4c17df - - default default] Lock "06e8b42b-8275-4d3a-9155-14ffb6e7fb84" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2117: 305 pgs: 305 active+clean; 645 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 2.1 MiB/s wr, 352 op/s
Oct 02 12:31:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:55.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e318 do_prune osdmap full prune enabled
Oct 02 12:31:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2174074340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e319 e319: 3 total, 3 up, 3 in
Oct 02 12:31:56 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e319: 3 total, 3 up, 3 in
Oct 02 12:31:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:57.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:57 compute-0 ceph-mon[73607]: pgmap v2117: 305 pgs: 305 active+clean; 645 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 2.1 MiB/s wr, 352 op/s
Oct 02 12:31:57 compute-0 ceph-mon[73607]: osdmap e319: 3 total, 3 up, 3 in
Oct 02 12:31:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4239914508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2119: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 645 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 1.4 MiB/s wr, 332 op/s
Oct 02 12:31:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:57.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:58 compute-0 nova_compute[257802]: 2025-10-02 12:31:58.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:58 compute-0 nova_compute[257802]: 2025-10-02 12:31:58.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:31:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:31:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:59.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:31:59 compute-0 ceph-mon[73607]: pgmap v2119: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 645 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 1.4 MiB/s wr, 332 op/s
Oct 02 12:31:59 compute-0 nova_compute[257802]: 2025-10-02 12:31:59.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2120: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 645 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 17 KiB/s wr, 290 op/s
Oct 02 12:31:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:31:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:31:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:59.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:32:00 compute-0 nova_compute[257802]: 2025-10-02 12:32:00.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:32:00 compute-0 nova_compute[257802]: 2025-10-02 12:32:00.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:32:00 compute-0 ovn_controller[148183]: 2025-10-02T12:32:00Z|00528|binding|INFO|Releasing lport 02b7597d-2fc1-4c56-8603-4dcb0c716c82 from this chassis (sb_readonly=0)
Oct 02 12:32:00 compute-0 nova_compute[257802]: 2025-10-02 12:32:00.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:00 compute-0 ceph-mon[73607]: pgmap v2120: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 645 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 17 KiB/s wr, 290 op/s
Oct 02 12:32:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:32:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:01.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:32:01 compute-0 nova_compute[257802]: 2025-10-02 12:32:01.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:32:01 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct 02 12:32:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2121: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 645 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 17 KiB/s wr, 290 op/s
Oct 02 12:32:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:01.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:02 compute-0 nova_compute[257802]: 2025-10-02 12:32:02.126 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:32:03 compute-0 ceph-mon[73607]: pgmap v2121: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 645 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 17 KiB/s wr, 290 op/s
Oct 02 12:32:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3781105023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:03.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:03 compute-0 nova_compute[257802]: 2025-10-02 12:32:03.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:03 compute-0 nova_compute[257802]: 2025-10-02 12:32:03.096 2 DEBUG nova.compute.manager [req-8c0ceb37-0881-43aa-a68f-4d0b3c6f8504 req-ecd40f21-3140-41a7-8515-80369cc415fd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Received event network-changed-386c73f3-c5a1-4edb-894f-841beabaecbd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:03 compute-0 nova_compute[257802]: 2025-10-02 12:32:03.096 2 DEBUG nova.compute.manager [req-8c0ceb37-0881-43aa-a68f-4d0b3c6f8504 req-ecd40f21-3140-41a7-8515-80369cc415fd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Refreshing instance network info cache due to event network-changed-386c73f3-c5a1-4edb-894f-841beabaecbd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:32:03 compute-0 nova_compute[257802]: 2025-10-02 12:32:03.097 2 DEBUG oslo_concurrency.lockutils [req-8c0ceb37-0881-43aa-a68f-4d0b3c6f8504 req-ecd40f21-3140-41a7-8515-80369cc415fd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-c8b713f4-4f41-4153-928c-164f2ed108ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:32:03 compute-0 nova_compute[257802]: 2025-10-02 12:32:03.097 2 DEBUG oslo_concurrency.lockutils [req-8c0ceb37-0881-43aa-a68f-4d0b3c6f8504 req-ecd40f21-3140-41a7-8515-80369cc415fd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-c8b713f4-4f41-4153-928c-164f2ed108ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:32:03 compute-0 nova_compute[257802]: 2025-10-02 12:32:03.097 2 DEBUG nova.network.neutron [req-8c0ceb37-0881-43aa-a68f-4d0b3c6f8504 req-ecd40f21-3140-41a7-8515-80369cc415fd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Refreshing network info cache for port 386c73f3-c5a1-4edb-894f-841beabaecbd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:32:03 compute-0 ovn_controller[148183]: 2025-10-02T12:32:03Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:94:65:0d 10.100.0.4
Oct 02 12:32:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2122: 305 pgs: 305 active+clean; 612 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 17 KiB/s wr, 184 op/s
Oct 02 12:32:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:03.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:04 compute-0 nova_compute[257802]: 2025-10-02 12:32:04.266 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408309.265295, 06e8b42b-8275-4d3a-9155-14ffb6e7fb84 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:32:04 compute-0 nova_compute[257802]: 2025-10-02 12:32:04.267 2 INFO nova.compute.manager [-] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] VM Stopped (Lifecycle Event)
Oct 02 12:32:04 compute-0 nova_compute[257802]: 2025-10-02 12:32:04.288 2 DEBUG nova.compute.manager [None req-4a451a3e-5560-4ff1-9368-852a307b2a8f - - - - - -] [instance: 06e8b42b-8275-4d3a-9155-14ffb6e7fb84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:32:04 compute-0 nova_compute[257802]: 2025-10-02 12:32:04.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:05.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:05 compute-0 ceph-mon[73607]: pgmap v2122: 305 pgs: 305 active+clean; 612 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 17 KiB/s wr, 184 op/s
Oct 02 12:32:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3497539909' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:05 compute-0 nova_compute[257802]: 2025-10-02 12:32:05.536 2 DEBUG nova.network.neutron [req-8c0ceb37-0881-43aa-a68f-4d0b3c6f8504 req-ecd40f21-3140-41a7-8515-80369cc415fd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Updated VIF entry in instance network info cache for port 386c73f3-c5a1-4edb-894f-841beabaecbd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:32:05 compute-0 nova_compute[257802]: 2025-10-02 12:32:05.536 2 DEBUG nova.network.neutron [req-8c0ceb37-0881-43aa-a68f-4d0b3c6f8504 req-ecd40f21-3140-41a7-8515-80369cc415fd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Updating instance_info_cache with network_info: [{"id": "386c73f3-c5a1-4edb-894f-841beabaecbd", "address": "fa:16:3e:94:65:0d", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap386c73f3-c5", "ovs_interfaceid": "386c73f3-c5a1-4edb-894f-841beabaecbd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:32:05 compute-0 nova_compute[257802]: 2025-10-02 12:32:05.565 2 DEBUG oslo_concurrency.lockutils [req-8c0ceb37-0881-43aa-a68f-4d0b3c6f8504 req-ecd40f21-3140-41a7-8515-80369cc415fd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-c8b713f4-4f41-4153-928c-164f2ed108ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:32:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2123: 305 pgs: 305 active+clean; 599 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 145 op/s
Oct 02 12:32:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:05.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:05 compute-0 sudo[334211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:32:05 compute-0 sudo[334211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:05 compute-0 sudo[334211]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:06 compute-0 sudo[334236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:32:06 compute-0 sudo[334236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:06 compute-0 sudo[334236]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:06 compute-0 nova_compute[257802]: 2025-10-02 12:32:06.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:32:06 compute-0 nova_compute[257802]: 2025-10-02 12:32:06.100 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:32:06 compute-0 nova_compute[257802]: 2025-10-02 12:32:06.100 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:32:06 compute-0 nova_compute[257802]: 2025-10-02 12:32:06.319 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-24caf505-35fd-40c1-9bcc-1f83580b142b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:32:06 compute-0 nova_compute[257802]: 2025-10-02 12:32:06.320 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-24caf505-35fd-40c1-9bcc-1f83580b142b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:32:06 compute-0 nova_compute[257802]: 2025-10-02 12:32:06.320 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:32:06 compute-0 nova_compute[257802]: 2025-10-02 12:32:06.321 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid 24caf505-35fd-40c1-9bcc-1f83580b142b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:06 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/989010263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:06 compute-0 ceph-mon[73607]: pgmap v2123: 305 pgs: 305 active+clean; 599 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 145 op/s
Oct 02 12:32:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:32:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:07.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:32:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 305 active+clean; 571 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 15 KiB/s wr, 132 op/s
Oct 02 12:32:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:32:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:07.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:32:08 compute-0 nova_compute[257802]: 2025-10-02 12:32:08.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e319 do_prune osdmap full prune enabled
Oct 02 12:32:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e320 e320: 3 total, 3 up, 3 in
Oct 02 12:32:08 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e320: 3 total, 3 up, 3 in
Oct 02 12:32:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:09.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:09 compute-0 nova_compute[257802]: 2025-10-02 12:32:09.150 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Updating instance_info_cache with network_info: [{"id": "f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90", "address": "fa:16:3e:76:07:a0", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f0fd9c-fb", "ovs_interfaceid": "f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:32:09 compute-0 nova_compute[257802]: 2025-10-02 12:32:09.162 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-24caf505-35fd-40c1-9bcc-1f83580b142b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:32:09 compute-0 nova_compute[257802]: 2025-10-02 12:32:09.163 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:32:09 compute-0 nova_compute[257802]: 2025-10-02 12:32:09.163 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:32:09 compute-0 nova_compute[257802]: 2025-10-02 12:32:09.184 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:09 compute-0 nova_compute[257802]: 2025-10-02 12:32:09.185 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:09 compute-0 nova_compute[257802]: 2025-10-02 12:32:09.185 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:09 compute-0 nova_compute[257802]: 2025-10-02 12:32:09.185 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:32:09 compute-0 nova_compute[257802]: 2025-10-02 12:32:09.186 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:09 compute-0 ceph-mon[73607]: pgmap v2124: 305 pgs: 305 active+clean; 571 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 15 KiB/s wr, 132 op/s
Oct 02 12:32:09 compute-0 nova_compute[257802]: 2025-10-02 12:32:09.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:09 compute-0 nova_compute[257802]: 2025-10-02 12:32:09.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2126: 305 pgs: 305 active+clean; 521 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 690 KiB/s rd, 30 KiB/s wr, 125 op/s
Oct 02 12:32:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:32:09 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4076818352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:09 compute-0 nova_compute[257802]: 2025-10-02 12:32:09.696 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:32:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:09.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:32:09 compute-0 nova_compute[257802]: 2025-10-02 12:32:09.804 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:32:09 compute-0 nova_compute[257802]: 2025-10-02 12:32:09.805 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:32:09 compute-0 nova_compute[257802]: 2025-10-02 12:32:09.805 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:32:09 compute-0 nova_compute[257802]: 2025-10-02 12:32:09.807 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:32:09 compute-0 nova_compute[257802]: 2025-10-02 12:32:09.808 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:32:09 compute-0 nova_compute[257802]: 2025-10-02 12:32:09.957 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:32:09 compute-0 nova_compute[257802]: 2025-10-02 12:32:09.958 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4012MB free_disk=20.772258758544922GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:32:09 compute-0 nova_compute[257802]: 2025-10-02 12:32:09.959 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:09 compute-0 nova_compute[257802]: 2025-10-02 12:32:09.959 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:10 compute-0 nova_compute[257802]: 2025-10-02 12:32:10.042 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 24caf505-35fd-40c1-9bcc-1f83580b142b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:32:10 compute-0 nova_compute[257802]: 2025-10-02 12:32:10.042 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance c8b713f4-4f41-4153-928c-164f2ed108ed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:32:10 compute-0 nova_compute[257802]: 2025-10-02 12:32:10.042 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:32:10 compute-0 nova_compute[257802]: 2025-10-02 12:32:10.043 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=832MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:32:10 compute-0 nova_compute[257802]: 2025-10-02 12:32:10.104 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:32:10 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1078611878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:10 compute-0 ceph-mon[73607]: osdmap e320: 3 total, 3 up, 3 in
Oct 02 12:32:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4076818352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:10 compute-0 nova_compute[257802]: 2025-10-02 12:32:10.522 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:10 compute-0 nova_compute[257802]: 2025-10-02 12:32:10.533 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:32:10 compute-0 ovn_controller[148183]: 2025-10-02T12:32:10Z|00529|binding|INFO|Releasing lport 02b7597d-2fc1-4c56-8603-4dcb0c716c82 from this chassis (sb_readonly=0)
Oct 02 12:32:10 compute-0 nova_compute[257802]: 2025-10-02 12:32:10.555 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:32:10 compute-0 nova_compute[257802]: 2025-10-02 12:32:10.592 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:32:10 compute-0 nova_compute[257802]: 2025-10-02 12:32:10.593 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:10 compute-0 nova_compute[257802]: 2025-10-02 12:32:10.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:10 compute-0 nova_compute[257802]: 2025-10-02 12:32:10.994 2 DEBUG oslo_concurrency.lockutils [None req-885157cc-902b-4a78-8c46-9806b2a1af49 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "c8b713f4-4f41-4153-928c-164f2ed108ed" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:10 compute-0 nova_compute[257802]: 2025-10-02 12:32:10.995 2 DEBUG oslo_concurrency.lockutils [None req-885157cc-902b-4a78-8c46-9806b2a1af49 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "c8b713f4-4f41-4153-928c-164f2ed108ed" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:11 compute-0 nova_compute[257802]: 2025-10-02 12:32:11.019 2 INFO nova.compute.manager [None req-885157cc-902b-4a78-8c46-9806b2a1af49 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Detaching volume 1f1fe097-f4b6-4748-bf18-8e487e0f3ba6
Oct 02 12:32:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:32:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:11.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:32:11 compute-0 nova_compute[257802]: 2025-10-02 12:32:11.188 2 INFO nova.virt.block_device [None req-885157cc-902b-4a78-8c46-9806b2a1af49 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Attempting to driver detach volume 1f1fe097-f4b6-4748-bf18-8e487e0f3ba6 from mountpoint /dev/vdb
Oct 02 12:32:11 compute-0 nova_compute[257802]: 2025-10-02 12:32:11.199 2 DEBUG nova.virt.libvirt.driver [None req-885157cc-902b-4a78-8c46-9806b2a1af49 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Attempting to detach device vdb from instance c8b713f4-4f41-4153-928c-164f2ed108ed from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:32:11 compute-0 nova_compute[257802]: 2025-10-02 12:32:11.200 2 DEBUG nova.virt.libvirt.guest [None req-885157cc-902b-4a78-8c46-9806b2a1af49 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:32:11 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:32:11 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-1f1fe097-f4b6-4748-bf18-8e487e0f3ba6">
Oct 02 12:32:11 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:32:11 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:32:11 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:32:11 compute-0 nova_compute[257802]:   </source>
Oct 02 12:32:11 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:32:11 compute-0 nova_compute[257802]:   <serial>1f1fe097-f4b6-4748-bf18-8e487e0f3ba6</serial>
Oct 02 12:32:11 compute-0 nova_compute[257802]:   <shareable/>
Oct 02 12:32:11 compute-0 nova_compute[257802]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 02 12:32:11 compute-0 nova_compute[257802]: </disk>
Oct 02 12:32:11 compute-0 nova_compute[257802]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:32:11 compute-0 nova_compute[257802]: 2025-10-02 12:32:11.356 2 INFO nova.virt.libvirt.driver [None req-885157cc-902b-4a78-8c46-9806b2a1af49 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Successfully detached device vdb from instance c8b713f4-4f41-4153-928c-164f2ed108ed from the persistent domain config.
Oct 02 12:32:11 compute-0 nova_compute[257802]: 2025-10-02 12:32:11.356 2 DEBUG nova.virt.libvirt.driver [None req-885157cc-902b-4a78-8c46-9806b2a1af49 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance c8b713f4-4f41-4153-928c-164f2ed108ed from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 12:32:11 compute-0 nova_compute[257802]: 2025-10-02 12:32:11.357 2 DEBUG nova.virt.libvirt.guest [None req-885157cc-902b-4a78-8c46-9806b2a1af49 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:32:11 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:32:11 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-1f1fe097-f4b6-4748-bf18-8e487e0f3ba6">
Oct 02 12:32:11 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:32:11 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:32:11 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:32:11 compute-0 nova_compute[257802]:   </source>
Oct 02 12:32:11 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:32:11 compute-0 nova_compute[257802]:   <serial>1f1fe097-f4b6-4748-bf18-8e487e0f3ba6</serial>
Oct 02 12:32:11 compute-0 nova_compute[257802]:   <shareable/>
Oct 02 12:32:11 compute-0 nova_compute[257802]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 02 12:32:11 compute-0 nova_compute[257802]: </disk>
Oct 02 12:32:11 compute-0 nova_compute[257802]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:32:11 compute-0 nova_compute[257802]: 2025-10-02 12:32:11.527 2 DEBUG nova.virt.libvirt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Received event <DeviceRemovedEvent: 1759408331.5272584, c8b713f4-4f41-4153-928c-164f2ed108ed => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 12:32:11 compute-0 nova_compute[257802]: 2025-10-02 12:32:11.531 2 DEBUG nova.virt.libvirt.driver [None req-885157cc-902b-4a78-8c46-9806b2a1af49 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance c8b713f4-4f41-4153-928c-164f2ed108ed _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 12:32:11 compute-0 nova_compute[257802]: 2025-10-02 12:32:11.534 2 INFO nova.virt.libvirt.driver [None req-885157cc-902b-4a78-8c46-9806b2a1af49 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Successfully detached device vdb from instance c8b713f4-4f41-4153-928c-164f2ed108ed from the live domain config.
Oct 02 12:32:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2127: 305 pgs: 305 active+clean; 521 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 690 KiB/s rd, 30 KiB/s wr, 125 op/s
Oct 02 12:32:11 compute-0 ceph-mon[73607]: pgmap v2126: 305 pgs: 305 active+clean; 521 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 690 KiB/s rd, 30 KiB/s wr, 125 op/s
Oct 02 12:32:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1078611878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/71294947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:11.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:11 compute-0 nova_compute[257802]: 2025-10-02 12:32:11.883 2 DEBUG nova.objects.instance [None req-885157cc-902b-4a78-8c46-9806b2a1af49 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lazy-loading 'flavor' on Instance uuid c8b713f4-4f41-4153-928c-164f2ed108ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:11 compute-0 nova_compute[257802]: 2025-10-02 12:32:11.966 2 DEBUG oslo_concurrency.lockutils [None req-885157cc-902b-4a78-8c46-9806b2a1af49 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "c8b713f4-4f41-4153-928c-164f2ed108ed" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.971s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:32:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:32:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:32:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:32:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:32:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:32:12 compute-0 ceph-mon[73607]: pgmap v2127: 305 pgs: 305 active+clean; 521 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 690 KiB/s rd, 30 KiB/s wr, 125 op/s
Oct 02 12:32:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:13.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:13 compute-0 nova_compute[257802]: 2025-10-02 12:32:13.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2128: 305 pgs: 305 active+clean; 521 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 141 KiB/s rd, 14 KiB/s wr, 52 op/s
Oct 02 12:32:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:13.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:14 compute-0 nova_compute[257802]: 2025-10-02 12:32:14.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:14 compute-0 nova_compute[257802]: 2025-10-02 12:32:14.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:15.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:15 compute-0 ceph-mon[73607]: pgmap v2128: 305 pgs: 305 active+clean; 521 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 141 KiB/s rd, 14 KiB/s wr, 52 op/s
Oct 02 12:32:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2129: 305 pgs: 305 active+clean; 521 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 14 KiB/s wr, 37 op/s
Oct 02 12:32:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:15.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:15 compute-0 podman[334314]: 2025-10-02 12:32:15.922210257 +0000 UTC m=+0.060321665 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 12:32:15 compute-0 podman[334315]: 2025-10-02 12:32:15.942913315 +0000 UTC m=+0.070557605 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:32:15 compute-0 podman[334316]: 2025-10-02 12:32:15.95125079 +0000 UTC m=+0.083999846 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:32:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:32:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:17.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:32:17 compute-0 ceph-mon[73607]: pgmap v2129: 305 pgs: 305 active+clean; 521 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 14 KiB/s wr, 37 op/s
Oct 02 12:32:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2130: 305 pgs: 305 active+clean; 521 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 14 KiB/s wr, 32 op/s
Oct 02 12:32:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:32:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:17.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:32:18 compute-0 nova_compute[257802]: 2025-10-02 12:32:18.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:18 compute-0 ovn_controller[148183]: 2025-10-02T12:32:18Z|00530|binding|INFO|Releasing lport 02b7597d-2fc1-4c56-8603-4dcb0c716c82 from this chassis (sb_readonly=0)
Oct 02 12:32:18 compute-0 nova_compute[257802]: 2025-10-02 12:32:18.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:18 compute-0 nova_compute[257802]: 2025-10-02 12:32:18.529 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:32:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:19 compute-0 podman[334372]: 2025-10-02 12:32:19.026634031 +0000 UTC m=+0.153969847 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 02 12:32:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:19.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:19 compute-0 ceph-osd[83986]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Oct 02 12:32:19 compute-0 ceph-mon[73607]: pgmap v2130: 305 pgs: 305 active+clean; 521 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 14 KiB/s wr, 32 op/s
Oct 02 12:32:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3362849676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:19 compute-0 nova_compute[257802]: 2025-10-02 12:32:19.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2131: 305 pgs: 305 active+clean; 521 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.5 KiB/s wr, 28 op/s
Oct 02 12:32:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:19.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:20 compute-0 ceph-mon[73607]: pgmap v2131: 305 pgs: 305 active+clean; 521 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.5 KiB/s wr, 28 op/s
Oct 02 12:32:20 compute-0 nova_compute[257802]: 2025-10-02 12:32:20.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:21.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2132: 305 pgs: 305 active+clean; 521 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.2 KiB/s wr, 9 op/s
Oct 02 12:32:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:32:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:21.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:32:22 compute-0 ceph-mon[73607]: pgmap v2132: 305 pgs: 305 active+clean; 521 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.2 KiB/s wr, 9 op/s
Oct 02 12:32:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:23.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:23 compute-0 nova_compute[257802]: 2025-10-02 12:32:23.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2133: 305 pgs: 305 active+clean; 551 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.4 MiB/s wr, 40 op/s
Oct 02 12:32:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:32:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:23.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:32:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1614874640' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:24 compute-0 nova_compute[257802]: 2025-10-02 12:32:24.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:24 compute-0 nova_compute[257802]: 2025-10-02 12:32:24.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:25 compute-0 ceph-mon[73607]: pgmap v2133: 305 pgs: 305 active+clean; 551 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.4 MiB/s wr, 40 op/s
Oct 02 12:32:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2322270525' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:25.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:25 compute-0 ceph-osd[83986]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Oct 02 12:32:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2134: 305 pgs: 305 active+clean; 568 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Oct 02 12:32:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:32:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:25.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:32:26 compute-0 sudo[334401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:32:26 compute-0 sudo[334401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:26 compute-0 sudo[334401]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:26 compute-0 sudo[334426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:32:26 compute-0 sudo[334426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:26 compute-0 sudo[334426]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:26 compute-0 nova_compute[257802]: 2025-10-02 12:32:26.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:26.950 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:26.951 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:26.952 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:27 compute-0 ceph-mon[73607]: pgmap v2134: 305 pgs: 305 active+clean; 568 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Oct 02 12:32:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:27.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2135: 305 pgs: 305 active+clean; 580 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.5 MiB/s wr, 73 op/s
Oct 02 12:32:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:32:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:27.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:32:28 compute-0 nova_compute[257802]: 2025-10-02 12:32:28.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:32:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:29.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:32:29 compute-0 ceph-mon[73607]: pgmap v2135: 305 pgs: 305 active+clean; 580 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.5 MiB/s wr, 73 op/s
Oct 02 12:32:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1663262144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:29 compute-0 nova_compute[257802]: 2025-10-02 12:32:29.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:29 compute-0 nova_compute[257802]: 2025-10-02 12:32:29.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:29 compute-0 nova_compute[257802]: 2025-10-02 12:32:29.591 2 DEBUG oslo_concurrency.lockutils [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "466310fc-8494-43ad-ab29-1691041fc97d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:29 compute-0 nova_compute[257802]: 2025-10-02 12:32:29.591 2 DEBUG oslo_concurrency.lockutils [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "466310fc-8494-43ad-ab29-1691041fc97d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:29 compute-0 nova_compute[257802]: 2025-10-02 12:32:29.607 2 DEBUG nova.compute.manager [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:32:29 compute-0 nova_compute[257802]: 2025-10-02 12:32:29.679 2 DEBUG oslo_concurrency.lockutils [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2136: 305 pgs: 305 active+clean; 597 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.6 MiB/s wr, 125 op/s
Oct 02 12:32:29 compute-0 nova_compute[257802]: 2025-10-02 12:32:29.680 2 DEBUG oslo_concurrency.lockutils [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:29 compute-0 nova_compute[257802]: 2025-10-02 12:32:29.686 2 DEBUG nova.virt.hardware [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:32:29 compute-0 nova_compute[257802]: 2025-10-02 12:32:29.686 2 INFO nova.compute.claims [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:32:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:29.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:30 compute-0 nova_compute[257802]: 2025-10-02 12:32:30.048 2 DEBUG oslo_concurrency.processutils [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:32:30 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1591967235' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:30 compute-0 nova_compute[257802]: 2025-10-02 12:32:30.550 2 DEBUG oslo_concurrency.processutils [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:30 compute-0 nova_compute[257802]: 2025-10-02 12:32:30.557 2 DEBUG nova.compute.provider_tree [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:32:30 compute-0 nova_compute[257802]: 2025-10-02 12:32:30.591 2 DEBUG nova.scheduler.client.report [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:32:30 compute-0 nova_compute[257802]: 2025-10-02 12:32:30.624 2 DEBUG oslo_concurrency.lockutils [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:30 compute-0 nova_compute[257802]: 2025-10-02 12:32:30.627 2 DEBUG nova.compute.manager [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:32:30 compute-0 nova_compute[257802]: 2025-10-02 12:32:30.683 2 DEBUG nova.compute.manager [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:32:30 compute-0 nova_compute[257802]: 2025-10-02 12:32:30.683 2 DEBUG nova.network.neutron [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:32:30 compute-0 nova_compute[257802]: 2025-10-02 12:32:30.709 2 INFO nova.virt.libvirt.driver [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:32:30 compute-0 nova_compute[257802]: 2025-10-02 12:32:30.728 2 DEBUG nova.compute.manager [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:32:30 compute-0 nova_compute[257802]: 2025-10-02 12:32:30.776 2 INFO nova.virt.block_device [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Booting with volume 3c189565-c036-47f3-b9f5-ed22810ceabf at /dev/vda
Oct 02 12:32:30 compute-0 nova_compute[257802]: 2025-10-02 12:32:30.973 2 DEBUG nova.policy [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '22d56fcd2a4b4851bfd126ae4548ee9b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5533aaac08cd4856af72ef4992bb5e76', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:32:31 compute-0 nova_compute[257802]: 2025-10-02 12:32:31.041 2 DEBUG os_brick.utils [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:32:31 compute-0 nova_compute[257802]: 2025-10-02 12:32:31.042 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:31 compute-0 nova_compute[257802]: 2025-10-02 12:32:31.053 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:31 compute-0 nova_compute[257802]: 2025-10-02 12:32:31.053 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[981497c3-9d40-42b8-a017-f9d8a63b463d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:31 compute-0 nova_compute[257802]: 2025-10-02 12:32:31.054 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:31 compute-0 nova_compute[257802]: 2025-10-02 12:32:31.064 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:31 compute-0 nova_compute[257802]: 2025-10-02 12:32:31.064 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[3948b226-82a6-42b4-bf46-7b98f27f6b17]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:31 compute-0 nova_compute[257802]: 2025-10-02 12:32:31.065 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:31 compute-0 nova_compute[257802]: 2025-10-02 12:32:31.073 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:31 compute-0 nova_compute[257802]: 2025-10-02 12:32:31.073 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[6c748c78-0652-4c3f-9a8f-878aeb26271b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:31 compute-0 nova_compute[257802]: 2025-10-02 12:32:31.074 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[4b1acc98-802c-424b-8f17-c81acf1a5a8e]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:31 compute-0 nova_compute[257802]: 2025-10-02 12:32:31.075 2 DEBUG oslo_concurrency.processutils [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:31 compute-0 nova_compute[257802]: 2025-10-02 12:32:31.103 2 DEBUG oslo_concurrency.processutils [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:31 compute-0 nova_compute[257802]: 2025-10-02 12:32:31.105 2 DEBUG os_brick.initiator.connectors.lightos [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:32:31 compute-0 nova_compute[257802]: 2025-10-02 12:32:31.105 2 DEBUG os_brick.initiator.connectors.lightos [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:32:31 compute-0 nova_compute[257802]: 2025-10-02 12:32:31.105 2 DEBUG os_brick.initiator.connectors.lightos [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:32:31 compute-0 nova_compute[257802]: 2025-10-02 12:32:31.106 2 DEBUG os_brick.utils [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] <== get_connector_properties: return (64ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:32:31 compute-0 nova_compute[257802]: 2025-10-02 12:32:31.106 2 DEBUG nova.virt.block_device [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Updating existing volume attachment record: 4f150204-75b3-49f6-909c-f07b38002c63 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:32:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:31.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:31 compute-0 ceph-mon[73607]: pgmap v2136: 305 pgs: 305 active+clean; 597 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.6 MiB/s wr, 125 op/s
Oct 02 12:32:31 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1591967235' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 305 active+clean; 597 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 118 op/s
Oct 02 12:32:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:31.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:32 compute-0 nova_compute[257802]: 2025-10-02 12:32:32.082 2 DEBUG nova.network.neutron [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Successfully created port: 6eb1649f-3d7a-49bf-9bce-af667e856eb8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:32:32 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1631283109' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:32 compute-0 nova_compute[257802]: 2025-10-02 12:32:32.393 2 DEBUG nova.compute.manager [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:32:32 compute-0 nova_compute[257802]: 2025-10-02 12:32:32.396 2 DEBUG nova.virt.libvirt.driver [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:32:32 compute-0 nova_compute[257802]: 2025-10-02 12:32:32.397 2 INFO nova.virt.libvirt.driver [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Creating image(s)
Oct 02 12:32:32 compute-0 nova_compute[257802]: 2025-10-02 12:32:32.398 2 DEBUG nova.virt.libvirt.driver [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:32:32 compute-0 nova_compute[257802]: 2025-10-02 12:32:32.398 2 DEBUG nova.virt.libvirt.driver [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Ensure instance console log exists: /var/lib/nova/instances/466310fc-8494-43ad-ab29-1691041fc97d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:32:32 compute-0 nova_compute[257802]: 2025-10-02 12:32:32.399 2 DEBUG oslo_concurrency.lockutils [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:32 compute-0 nova_compute[257802]: 2025-10-02 12:32:32.400 2 DEBUG oslo_concurrency.lockutils [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:32 compute-0 nova_compute[257802]: 2025-10-02 12:32:32.400 2 DEBUG oslo_concurrency.lockutils [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:33.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:33 compute-0 nova_compute[257802]: 2025-10-02 12:32:33.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:33 compute-0 nova_compute[257802]: 2025-10-02 12:32:33.163 2 DEBUG nova.network.neutron [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Successfully updated port: 6eb1649f-3d7a-49bf-9bce-af667e856eb8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:32:33 compute-0 nova_compute[257802]: 2025-10-02 12:32:33.180 2 DEBUG oslo_concurrency.lockutils [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "refresh_cache-466310fc-8494-43ad-ab29-1691041fc97d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:32:33 compute-0 nova_compute[257802]: 2025-10-02 12:32:33.180 2 DEBUG oslo_concurrency.lockutils [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquired lock "refresh_cache-466310fc-8494-43ad-ab29-1691041fc97d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:32:33 compute-0 nova_compute[257802]: 2025-10-02 12:32:33.181 2 DEBUG nova.network.neutron [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:32:33 compute-0 ceph-mon[73607]: pgmap v2137: 305 pgs: 305 active+clean; 597 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 118 op/s
Oct 02 12:32:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4142552725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2085883857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:33 compute-0 nova_compute[257802]: 2025-10-02 12:32:33.352 2 DEBUG nova.network.neutron [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:32:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2138: 305 pgs: 305 active+clean; 605 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.2 MiB/s wr, 193 op/s
Oct 02 12:32:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:33.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:34 compute-0 nova_compute[257802]: 2025-10-02 12:32:34.120 2 DEBUG nova.compute.manager [req-a7814658-08b0-498b-ae73-423baf611ba8 req-d5a282e3-c2c7-490a-bdba-f44ca2dc3bc0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Received event network-changed-6eb1649f-3d7a-49bf-9bce-af667e856eb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:34 compute-0 nova_compute[257802]: 2025-10-02 12:32:34.120 2 DEBUG nova.compute.manager [req-a7814658-08b0-498b-ae73-423baf611ba8 req-d5a282e3-c2c7-490a-bdba-f44ca2dc3bc0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Refreshing instance network info cache due to event network-changed-6eb1649f-3d7a-49bf-9bce-af667e856eb8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:32:34 compute-0 nova_compute[257802]: 2025-10-02 12:32:34.121 2 DEBUG oslo_concurrency.lockutils [req-a7814658-08b0-498b-ae73-423baf611ba8 req-d5a282e3-c2c7-490a-bdba-f44ca2dc3bc0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-466310fc-8494-43ad-ab29-1691041fc97d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:32:34 compute-0 sudo[334484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:32:34 compute-0 sudo[334484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:34 compute-0 sudo[334484]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2458578435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2437038837' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:34 compute-0 sudo[334509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:32:34 compute-0 sudo[334509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:34 compute-0 sudo[334509]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:34 compute-0 nova_compute[257802]: 2025-10-02 12:32:34.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:34 compute-0 nova_compute[257802]: 2025-10-02 12:32:34.431 2 DEBUG nova.network.neutron [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Updating instance_info_cache with network_info: [{"id": "6eb1649f-3d7a-49bf-9bce-af667e856eb8", "address": "fa:16:3e:d1:47:f0", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb1649f-3d", "ovs_interfaceid": "6eb1649f-3d7a-49bf-9bce-af667e856eb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:32:34 compute-0 nova_compute[257802]: 2025-10-02 12:32:34.449 2 DEBUG oslo_concurrency.lockutils [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Releasing lock "refresh_cache-466310fc-8494-43ad-ab29-1691041fc97d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:32:34 compute-0 nova_compute[257802]: 2025-10-02 12:32:34.449 2 DEBUG nova.compute.manager [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Instance network_info: |[{"id": "6eb1649f-3d7a-49bf-9bce-af667e856eb8", "address": "fa:16:3e:d1:47:f0", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb1649f-3d", "ovs_interfaceid": "6eb1649f-3d7a-49bf-9bce-af667e856eb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:32:34 compute-0 nova_compute[257802]: 2025-10-02 12:32:34.450 2 DEBUG oslo_concurrency.lockutils [req-a7814658-08b0-498b-ae73-423baf611ba8 req-d5a282e3-c2c7-490a-bdba-f44ca2dc3bc0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-466310fc-8494-43ad-ab29-1691041fc97d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:32:34 compute-0 nova_compute[257802]: 2025-10-02 12:32:34.450 2 DEBUG nova.network.neutron [req-a7814658-08b0-498b-ae73-423baf611ba8 req-d5a282e3-c2c7-490a-bdba-f44ca2dc3bc0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Refreshing network info cache for port 6eb1649f-3d7a-49bf-9bce-af667e856eb8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:32:34 compute-0 nova_compute[257802]: 2025-10-02 12:32:34.453 2 DEBUG nova.virt.libvirt.driver [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Start _get_guest_xml network_info=[{"id": "6eb1649f-3d7a-49bf-9bce-af667e856eb8", "address": "fa:16:3e:d1:47:f0", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb1649f-3d", "ovs_interfaceid": "6eb1649f-3d7a-49bf-9bce-af667e856eb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'guest_format': None, 'attachment_id': '4f150204-75b3-49f6-909c-f07b38002c63', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-3c189565-c036-47f3-b9f5-ed22810ceabf', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '3c189565-c036-47f3-b9f5-ed22810ceabf', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '466310fc-8494-43ad-ab29-1691041fc97d', 'attached_at': '', 'detached_at': '', 'volume_id': '3c189565-c036-47f3-b9f5-ed22810ceabf', 'serial': '3c189565-c036-47f3-b9f5-ed22810ceabf', 'multiattach': True}, 'device_type': 'disk', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:32:34 compute-0 nova_compute[257802]: 2025-10-02 12:32:34.458 2 WARNING nova.virt.libvirt.driver [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:32:34 compute-0 nova_compute[257802]: 2025-10-02 12:32:34.464 2 DEBUG nova.virt.libvirt.host [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:32:34 compute-0 nova_compute[257802]: 2025-10-02 12:32:34.464 2 DEBUG nova.virt.libvirt.host [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:32:34 compute-0 sudo[334534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:32:34 compute-0 sudo[334534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:34 compute-0 nova_compute[257802]: 2025-10-02 12:32:34.472 2 DEBUG nova.virt.libvirt.host [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:32:34 compute-0 sudo[334534]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:34 compute-0 nova_compute[257802]: 2025-10-02 12:32:34.473 2 DEBUG nova.virt.libvirt.host [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:32:34 compute-0 nova_compute[257802]: 2025-10-02 12:32:34.474 2 DEBUG nova.virt.libvirt.driver [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:32:34 compute-0 nova_compute[257802]: 2025-10-02 12:32:34.475 2 DEBUG nova.virt.hardware [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:32:34 compute-0 nova_compute[257802]: 2025-10-02 12:32:34.475 2 DEBUG nova.virt.hardware [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:32:34 compute-0 nova_compute[257802]: 2025-10-02 12:32:34.476 2 DEBUG nova.virt.hardware [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:32:34 compute-0 nova_compute[257802]: 2025-10-02 12:32:34.476 2 DEBUG nova.virt.hardware [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:32:34 compute-0 nova_compute[257802]: 2025-10-02 12:32:34.476 2 DEBUG nova.virt.hardware [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:32:34 compute-0 nova_compute[257802]: 2025-10-02 12:32:34.476 2 DEBUG nova.virt.hardware [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:32:34 compute-0 nova_compute[257802]: 2025-10-02 12:32:34.477 2 DEBUG nova.virt.hardware [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:32:34 compute-0 nova_compute[257802]: 2025-10-02 12:32:34.477 2 DEBUG nova.virt.hardware [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:32:34 compute-0 nova_compute[257802]: 2025-10-02 12:32:34.477 2 DEBUG nova.virt.hardware [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:32:34 compute-0 nova_compute[257802]: 2025-10-02 12:32:34.477 2 DEBUG nova.virt.hardware [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:32:34 compute-0 nova_compute[257802]: 2025-10-02 12:32:34.478 2 DEBUG nova.virt.hardware [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:32:34 compute-0 nova_compute[257802]: 2025-10-02 12:32:34.510 2 DEBUG nova.storage.rbd_utils [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] rbd image 466310fc-8494-43ad-ab29-1691041fc97d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:32:34 compute-0 nova_compute[257802]: 2025-10-02 12:32:34.514 2 DEBUG oslo_concurrency.processutils [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:34 compute-0 sudo[334559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:32:34 compute-0 sudo[334559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:32:34 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1912421671' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:34 compute-0 nova_compute[257802]: 2025-10-02 12:32:34.948 2 DEBUG oslo_concurrency.processutils [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:34 compute-0 sudo[334559]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.005 2 DEBUG nova.virt.libvirt.vif [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:32:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-1184551469',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-1184551469',id=123,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5533aaac08cd4856af72ef4992bb5e76',ramdisk_id='',reservation_id='r-06v11211',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-1564585024',owner_user_name='tempest-AttachVolumeMultiAttachTest-1564585024-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:32:30Z,user_data=None,user_id='22d56fcd2a4b4851bfd126ae4548ee9b',uuid=466310fc-8494-43ad-ab29-1691041fc97d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6eb1649f-3d7a-49bf-9bce-af667e856eb8", "address": "fa:16:3e:d1:47:f0", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb1649f-3d", "ovs_interfaceid": "6eb1649f-3d7a-49bf-9bce-af667e856eb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.006 2 DEBUG nova.network.os_vif_util [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Converting VIF {"id": "6eb1649f-3d7a-49bf-9bce-af667e856eb8", "address": "fa:16:3e:d1:47:f0", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb1649f-3d", "ovs_interfaceid": "6eb1649f-3d7a-49bf-9bce-af667e856eb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.007 2 DEBUG nova.network.os_vif_util [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:47:f0,bridge_name='br-int',has_traffic_filtering=True,id=6eb1649f-3d7a-49bf-9bce-af667e856eb8,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6eb1649f-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.008 2 DEBUG nova.objects.instance [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lazy-loading 'pci_devices' on Instance uuid 466310fc-8494-43ad-ab29-1691041fc97d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.026 2 DEBUG nova.virt.libvirt.driver [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:32:35 compute-0 nova_compute[257802]:   <uuid>466310fc-8494-43ad-ab29-1691041fc97d</uuid>
Oct 02 12:32:35 compute-0 nova_compute[257802]:   <name>instance-0000007b</name>
Oct 02 12:32:35 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:32:35 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:32:35 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <nova:name>tempest-AttachVolumeMultiAttachTest-server-1184551469</nova:name>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:32:34</nova:creationTime>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:32:35 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:32:35 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:32:35 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:32:35 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:32:35 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:32:35 compute-0 nova_compute[257802]:         <nova:user uuid="22d56fcd2a4b4851bfd126ae4548ee9b">tempest-AttachVolumeMultiAttachTest-1564585024-project-member</nova:user>
Oct 02 12:32:35 compute-0 nova_compute[257802]:         <nova:project uuid="5533aaac08cd4856af72ef4992bb5e76">tempest-AttachVolumeMultiAttachTest-1564585024</nova:project>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:32:35 compute-0 nova_compute[257802]:         <nova:port uuid="6eb1649f-3d7a-49bf-9bce-af667e856eb8">
Oct 02 12:32:35 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:32:35 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:32:35 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <system>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <entry name="serial">466310fc-8494-43ad-ab29-1691041fc97d</entry>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <entry name="uuid">466310fc-8494-43ad-ab29-1691041fc97d</entry>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     </system>
Oct 02 12:32:35 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:32:35 compute-0 nova_compute[257802]:   <os>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:   </os>
Oct 02 12:32:35 compute-0 nova_compute[257802]:   <features>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:   </features>
Oct 02 12:32:35 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:32:35 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:32:35 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/466310fc-8494-43ad-ab29-1691041fc97d_disk.config">
Oct 02 12:32:35 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       </source>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:32:35 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <source protocol="rbd" name="volumes/volume-3c189565-c036-47f3-b9f5-ed22810ceabf">
Oct 02 12:32:35 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       </source>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:32:35 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <serial>3c189565-c036-47f3-b9f5-ed22810ceabf</serial>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <shareable/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:d1:47:f0"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <target dev="tap6eb1649f-3d"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/466310fc-8494-43ad-ab29-1691041fc97d/console.log" append="off"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <video>
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     </video>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:32:35 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:32:35 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:32:35 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:32:35 compute-0 nova_compute[257802]: </domain>
Oct 02 12:32:35 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.030 2 DEBUG nova.compute.manager [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Preparing to wait for external event network-vif-plugged-6eb1649f-3d7a-49bf-9bce-af667e856eb8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.030 2 DEBUG oslo_concurrency.lockutils [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "466310fc-8494-43ad-ab29-1691041fc97d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.030 2 DEBUG oslo_concurrency.lockutils [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "466310fc-8494-43ad-ab29-1691041fc97d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.031 2 DEBUG oslo_concurrency.lockutils [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "466310fc-8494-43ad-ab29-1691041fc97d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.031 2 DEBUG nova.virt.libvirt.vif [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:32:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-1184551469',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-1184551469',id=123,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5533aaac08cd4856af72ef4992bb5e76',ramdisk_id='',reservation_id='r-06v11211',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-1564585024',owner_user_name='tempest-AttachVolumeMultiAttachTest-1564585024-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:32:30Z,user_data=None,user_id='22d56fcd2a4b4851bfd126ae4548ee9b',uuid=466310fc-8494-43ad-ab29-1691041fc97d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6eb1649f-3d7a-49bf-9bce-af667e856eb8", "address": "fa:16:3e:d1:47:f0", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb1649f-3d", "ovs_interfaceid": "6eb1649f-3d7a-49bf-9bce-af667e856eb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.031 2 DEBUG nova.network.os_vif_util [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Converting VIF {"id": "6eb1649f-3d7a-49bf-9bce-af667e856eb8", "address": "fa:16:3e:d1:47:f0", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb1649f-3d", "ovs_interfaceid": "6eb1649f-3d7a-49bf-9bce-af667e856eb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.032 2 DEBUG nova.network.os_vif_util [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:47:f0,bridge_name='br-int',has_traffic_filtering=True,id=6eb1649f-3d7a-49bf-9bce-af667e856eb8,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6eb1649f-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.032 2 DEBUG os_vif [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:47:f0,bridge_name='br-int',has_traffic_filtering=True,id=6eb1649f-3d7a-49bf-9bce-af667e856eb8,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6eb1649f-3d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.033 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.034 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.036 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6eb1649f-3d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.037 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6eb1649f-3d, col_values=(('external_ids', {'iface-id': '6eb1649f-3d7a-49bf-9bce-af667e856eb8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d1:47:f0', 'vm-uuid': '466310fc-8494-43ad-ab29-1691041fc97d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:35 compute-0 NetworkManager[44987]: <info>  [1759408355.0398] manager: (tap6eb1649f-3d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/255)
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.046 2 INFO os_vif [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:47:f0,bridge_name='br-int',has_traffic_filtering=True,id=6eb1649f-3d7a-49bf-9bce-af667e856eb8,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6eb1649f-3d')
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.100 2 DEBUG nova.virt.libvirt.driver [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.100 2 DEBUG nova.virt.libvirt.driver [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.101 2 DEBUG nova.virt.libvirt.driver [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] No VIF found with MAC fa:16:3e:d1:47:f0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.101 2 INFO nova.virt.libvirt.driver [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Using config drive
Oct 02 12:32:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:35.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.127 2 DEBUG nova.storage.rbd_utils [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] rbd image 466310fc-8494-43ad-ab29-1691041fc97d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:32:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:32:35 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:32:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:32:35 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:32:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:32:35 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:32:35 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 0990fa20-c337-4b7b-8350-a7e1b67b1186 does not exist
Oct 02 12:32:35 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 0d4afc78-70d4-48bc-82b6-95d70b0d4fea does not exist
Oct 02 12:32:35 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 083ff42c-1984-4644-8c2a-c152bc7f913f does not exist
Oct 02 12:32:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:32:35 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:32:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:32:35 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:32:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:32:35 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:32:35 compute-0 sudo[334675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:32:35 compute-0 sudo[334675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:35 compute-0 sudo[334675]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:35 compute-0 sudo[334700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:32:35 compute-0 sudo[334700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:35 compute-0 sudo[334700]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:35 compute-0 sudo[334725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:32:35 compute-0 sudo[334725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:35 compute-0 sudo[334725]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:35 compute-0 ceph-mon[73607]: pgmap v2138: 305 pgs: 305 active+clean; 605 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.2 MiB/s wr, 193 op/s
Oct 02 12:32:35 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1912421671' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:35 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:32:35 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:32:35 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:32:35 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:32:35 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:32:35 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:32:35 compute-0 sudo[334750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:32:35 compute-0 sudo[334750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.664 2 INFO nova.virt.libvirt.driver [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Creating config drive at /var/lib/nova/instances/466310fc-8494-43ad-ab29-1691041fc97d/disk.config
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.670 2 DEBUG oslo_concurrency.processutils [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/466310fc-8494-43ad-ab29-1691041fc97d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn7ysunes execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2139: 305 pgs: 305 active+clean; 614 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 162 op/s
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.803 2 DEBUG oslo_concurrency.processutils [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/466310fc-8494-43ad-ab29-1691041fc97d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn7ysunes" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:35.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.837 2 DEBUG nova.storage.rbd_utils [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] rbd image 466310fc-8494-43ad-ab29-1691041fc97d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:32:35 compute-0 nova_compute[257802]: 2025-10-02 12:32:35.840 2 DEBUG oslo_concurrency.processutils [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/466310fc-8494-43ad-ab29-1691041fc97d/disk.config 466310fc-8494-43ad-ab29-1691041fc97d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:35 compute-0 podman[334819]: 2025-10-02 12:32:35.845722152 +0000 UTC m=+0.039577323 container create a9ef9e6180b382324d0e48b0892289ed2b24b65b91e187b8bb1840d37c12f128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_cartwright, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 12:32:35 compute-0 systemd[1]: Started libpod-conmon-a9ef9e6180b382324d0e48b0892289ed2b24b65b91e187b8bb1840d37c12f128.scope.
Oct 02 12:32:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:32:35 compute-0 podman[334819]: 2025-10-02 12:32:35.917394104 +0000 UTC m=+0.111249305 container init a9ef9e6180b382324d0e48b0892289ed2b24b65b91e187b8bb1840d37c12f128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:32:35 compute-0 podman[334819]: 2025-10-02 12:32:35.924030298 +0000 UTC m=+0.117885479 container start a9ef9e6180b382324d0e48b0892289ed2b24b65b91e187b8bb1840d37c12f128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_cartwright, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 12:32:35 compute-0 podman[334819]: 2025-10-02 12:32:35.829063693 +0000 UTC m=+0.022918894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:32:35 compute-0 podman[334819]: 2025-10-02 12:32:35.927424832 +0000 UTC m=+0.121280033 container attach a9ef9e6180b382324d0e48b0892289ed2b24b65b91e187b8bb1840d37c12f128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:32:35 compute-0 affectionate_cartwright[334854]: 167 167
Oct 02 12:32:35 compute-0 systemd[1]: libpod-a9ef9e6180b382324d0e48b0892289ed2b24b65b91e187b8bb1840d37c12f128.scope: Deactivated successfully.
Oct 02 12:32:35 compute-0 conmon[334854]: conmon a9ef9e6180b382324d0e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a9ef9e6180b382324d0e48b0892289ed2b24b65b91e187b8bb1840d37c12f128.scope/container/memory.events
Oct 02 12:32:35 compute-0 podman[334819]: 2025-10-02 12:32:35.930193519 +0000 UTC m=+0.124048700 container died a9ef9e6180b382324d0e48b0892289ed2b24b65b91e187b8bb1840d37c12f128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_cartwright, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 12:32:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-410d1b0582963626a863de15f57224dbf93e561b6ffc26619e5ba4bfc47bbc8b-merged.mount: Deactivated successfully.
Oct 02 12:32:35 compute-0 podman[334819]: 2025-10-02 12:32:35.972007677 +0000 UTC m=+0.165862858 container remove a9ef9e6180b382324d0e48b0892289ed2b24b65b91e187b8bb1840d37c12f128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_cartwright, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:32:35 compute-0 systemd[1]: libpod-conmon-a9ef9e6180b382324d0e48b0892289ed2b24b65b91e187b8bb1840d37c12f128.scope: Deactivated successfully.
Oct 02 12:32:36 compute-0 nova_compute[257802]: 2025-10-02 12:32:36.076 2 DEBUG oslo_concurrency.processutils [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/466310fc-8494-43ad-ab29-1691041fc97d/disk.config 466310fc-8494-43ad-ab29-1691041fc97d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.236s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:36 compute-0 nova_compute[257802]: 2025-10-02 12:32:36.077 2 INFO nova.virt.libvirt.driver [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Deleting local config drive /var/lib/nova/instances/466310fc-8494-43ad-ab29-1691041fc97d/disk.config because it was imported into RBD.
Oct 02 12:32:36 compute-0 NetworkManager[44987]: <info>  [1759408356.1342] manager: (tap6eb1649f-3d): new Tun device (/org/freedesktop/NetworkManager/Devices/256)
Oct 02 12:32:36 compute-0 kernel: tap6eb1649f-3d: entered promiscuous mode
Oct 02 12:32:36 compute-0 systemd-udevd[334918]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:32:36 compute-0 nova_compute[257802]: 2025-10-02 12:32:36.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:36 compute-0 ovn_controller[148183]: 2025-10-02T12:32:36Z|00531|binding|INFO|Claiming lport 6eb1649f-3d7a-49bf-9bce-af667e856eb8 for this chassis.
Oct 02 12:32:36 compute-0 ovn_controller[148183]: 2025-10-02T12:32:36Z|00532|binding|INFO|6eb1649f-3d7a-49bf-9bce-af667e856eb8: Claiming fa:16:3e:d1:47:f0 10.100.0.11
Oct 02 12:32:36 compute-0 NetworkManager[44987]: <info>  [1759408356.1901] device (tap6eb1649f-3d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:32:36 compute-0 podman[334895]: 2025-10-02 12:32:36.190492279 +0000 UTC m=+0.088742053 container create 4541df11994e4acbd1d68164a06b48a6367cd703bb764a9bbe7543d31881fed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_yonath, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:32:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:36.191 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:47:f0 10.100.0.11'], port_security=['fa:16:3e:d1:47:f0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '466310fc-8494-43ad-ab29-1691041fc97d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-585473f8-52e4-4e55-96df-8a236d361126', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5533aaac08cd4856af72ef4992bb5e76', 'neutron:revision_number': '2', 'neutron:security_group_ids': '29622381-99e2-43f7-9de1-ba82c9e0ff23', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec297f04-3bda-490f-87d3-1f684caf96fd, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=8, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=6eb1649f-3d7a-49bf-9bce-af667e856eb8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:32:36 compute-0 NetworkManager[44987]: <info>  [1759408356.1922] device (tap6eb1649f-3d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:32:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:36.192 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 6eb1649f-3d7a-49bf-9bce-af667e856eb8 in datapath 585473f8-52e4-4e55-96df-8a236d361126 bound to our chassis
Oct 02 12:32:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:36.194 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 585473f8-52e4-4e55-96df-8a236d361126
Oct 02 12:32:36 compute-0 ovn_controller[148183]: 2025-10-02T12:32:36Z|00533|binding|INFO|Setting lport 6eb1649f-3d7a-49bf-9bce-af667e856eb8 ovn-installed in OVS
Oct 02 12:32:36 compute-0 ovn_controller[148183]: 2025-10-02T12:32:36Z|00534|binding|INFO|Setting lport 6eb1649f-3d7a-49bf-9bce-af667e856eb8 up in Southbound
Oct 02 12:32:36 compute-0 nova_compute[257802]: 2025-10-02 12:32:36.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:36.210 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3d1f97d0-f1a7-4f18-8b5f-3cef9429dfda]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:36 compute-0 systemd-machined[211836]: New machine qemu-62-instance-0000007b.
Oct 02 12:32:36 compute-0 podman[334895]: 2025-10-02 12:32:36.128669349 +0000 UTC m=+0.026919143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:32:36 compute-0 systemd[1]: Started Virtual Machine qemu-62-instance-0000007b.
Oct 02 12:32:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:36.237 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[773bff68-e2f7-44f2-a4a5-97e3f5fac0bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:36 compute-0 nova_compute[257802]: 2025-10-02 12:32:36.237 2 DEBUG nova.network.neutron [req-a7814658-08b0-498b-ae73-423baf611ba8 req-d5a282e3-c2c7-490a-bdba-f44ca2dc3bc0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Updated VIF entry in instance network info cache for port 6eb1649f-3d7a-49bf-9bce-af667e856eb8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:32:36 compute-0 nova_compute[257802]: 2025-10-02 12:32:36.238 2 DEBUG nova.network.neutron [req-a7814658-08b0-498b-ae73-423baf611ba8 req-d5a282e3-c2c7-490a-bdba-f44ca2dc3bc0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Updating instance_info_cache with network_info: [{"id": "6eb1649f-3d7a-49bf-9bce-af667e856eb8", "address": "fa:16:3e:d1:47:f0", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb1649f-3d", "ovs_interfaceid": "6eb1649f-3d7a-49bf-9bce-af667e856eb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:32:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:36.239 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[36fc70ae-f5ae-4b62-bce4-6fa487cde38e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:36 compute-0 systemd[1]: Started libpod-conmon-4541df11994e4acbd1d68164a06b48a6367cd703bb764a9bbe7543d31881fed2.scope.
Oct 02 12:32:36 compute-0 nova_compute[257802]: 2025-10-02 12:32:36.252 2 DEBUG oslo_concurrency.lockutils [req-a7814658-08b0-498b-ae73-423baf611ba8 req-d5a282e3-c2c7-490a-bdba-f44ca2dc3bc0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-466310fc-8494-43ad-ab29-1691041fc97d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:32:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:32:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f074899dec52e69373049d523a48b0194a2918b323e78534b236943c5c9d92a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:32:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f074899dec52e69373049d523a48b0194a2918b323e78534b236943c5c9d92a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:32:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f074899dec52e69373049d523a48b0194a2918b323e78534b236943c5c9d92a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:32:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f074899dec52e69373049d523a48b0194a2918b323e78534b236943c5c9d92a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:32:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f074899dec52e69373049d523a48b0194a2918b323e78534b236943c5c9d92a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:32:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:36.272 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[fec40641-3bed-424d-bfdc-ddfe78fddeab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:36 compute-0 podman[334895]: 2025-10-02 12:32:36.280073171 +0000 UTC m=+0.178322955 container init 4541df11994e4acbd1d68164a06b48a6367cd703bb764a9bbe7543d31881fed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_yonath, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 12:32:36 compute-0 podman[334895]: 2025-10-02 12:32:36.289300748 +0000 UTC m=+0.187550522 container start 4541df11994e4acbd1d68164a06b48a6367cd703bb764a9bbe7543d31881fed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:32:36 compute-0 podman[334895]: 2025-10-02 12:32:36.293387659 +0000 UTC m=+0.191637483 container attach 4541df11994e4acbd1d68164a06b48a6367cd703bb764a9bbe7543d31881fed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 12:32:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:36.297 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9e32b14f-332c-4a4f-bb4b-1300424c2880]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap585473f8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:8e:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618892, 'reachable_time': 35411, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334937, 'error': None, 'target': 'ovnmeta-585473f8-52e4-4e55-96df-8a236d361126', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:36.324 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b352533a-1950-48d1-a202-46132cfda04a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap585473f8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618902, 'tstamp': 618902}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334942, 'error': None, 'target': 'ovnmeta-585473f8-52e4-4e55-96df-8a236d361126', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap585473f8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618905, 'tstamp': 618905}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334942, 'error': None, 'target': 'ovnmeta-585473f8-52e4-4e55-96df-8a236d361126', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:36.326 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap585473f8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:36 compute-0 nova_compute[257802]: 2025-10-02 12:32:36.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:36 compute-0 nova_compute[257802]: 2025-10-02 12:32:36.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:36.331 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap585473f8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:36.331 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:36.331 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap585473f8-50, col_values=(('external_ids', {'iface-id': '02b7597d-2fc1-4c56-8603-4dcb0c716c82'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:36.332 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:36 compute-0 nova_compute[257802]: 2025-10-02 12:32:36.531 2 DEBUG nova.compute.manager [req-db92f2a6-203d-497d-8ac6-379645a6cd2c req-143f390d-c797-4317-a06c-7fc849acc465 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Received event network-vif-plugged-6eb1649f-3d7a-49bf-9bce-af667e856eb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:36 compute-0 nova_compute[257802]: 2025-10-02 12:32:36.532 2 DEBUG oslo_concurrency.lockutils [req-db92f2a6-203d-497d-8ac6-379645a6cd2c req-143f390d-c797-4317-a06c-7fc849acc465 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "466310fc-8494-43ad-ab29-1691041fc97d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:36 compute-0 nova_compute[257802]: 2025-10-02 12:32:36.532 2 DEBUG oslo_concurrency.lockutils [req-db92f2a6-203d-497d-8ac6-379645a6cd2c req-143f390d-c797-4317-a06c-7fc849acc465 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "466310fc-8494-43ad-ab29-1691041fc97d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:36 compute-0 nova_compute[257802]: 2025-10-02 12:32:36.532 2 DEBUG oslo_concurrency.lockutils [req-db92f2a6-203d-497d-8ac6-379645a6cd2c req-143f390d-c797-4317-a06c-7fc849acc465 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "466310fc-8494-43ad-ab29-1691041fc97d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:36 compute-0 nova_compute[257802]: 2025-10-02 12:32:36.532 2 DEBUG nova.compute.manager [req-db92f2a6-203d-497d-8ac6-379645a6cd2c req-143f390d-c797-4317-a06c-7fc849acc465 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Processing event network-vif-plugged-6eb1649f-3d7a-49bf-9bce-af667e856eb8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:32:36 compute-0 ovn_controller[148183]: 2025-10-02T12:32:36Z|00535|binding|INFO|Releasing lport 02b7597d-2fc1-4c56-8603-4dcb0c716c82 from this chassis (sb_readonly=0)
Oct 02 12:32:36 compute-0 ceph-mon[73607]: pgmap v2139: 305 pgs: 305 active+clean; 614 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 162 op/s
Oct 02 12:32:36 compute-0 nova_compute[257802]: 2025-10-02 12:32:36.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:37 compute-0 compassionate_yonath[334929]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:32:37 compute-0 compassionate_yonath[334929]: --> relative data size: 1.0
Oct 02 12:32:37 compute-0 compassionate_yonath[334929]: --> All data devices are unavailable
Oct 02 12:32:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:37.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:37 compute-0 systemd[1]: libpod-4541df11994e4acbd1d68164a06b48a6367cd703bb764a9bbe7543d31881fed2.scope: Deactivated successfully.
Oct 02 12:32:37 compute-0 podman[334895]: 2025-10-02 12:32:37.126391029 +0000 UTC m=+1.024640813 container died 4541df11994e4acbd1d68164a06b48a6367cd703bb764a9bbe7543d31881fed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_yonath, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:32:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f074899dec52e69373049d523a48b0194a2918b323e78534b236943c5c9d92a-merged.mount: Deactivated successfully.
Oct 02 12:32:37 compute-0 podman[334895]: 2025-10-02 12:32:37.178242994 +0000 UTC m=+1.076492768 container remove 4541df11994e4acbd1d68164a06b48a6367cd703bb764a9bbe7543d31881fed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_yonath, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:32:37 compute-0 systemd[1]: libpod-conmon-4541df11994e4acbd1d68164a06b48a6367cd703bb764a9bbe7543d31881fed2.scope: Deactivated successfully.
Oct 02 12:32:37 compute-0 sudo[334750]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:37 compute-0 sudo[335011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:32:37 compute-0 sudo[335011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:37 compute-0 sudo[335011]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:37 compute-0 sudo[335036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:32:37 compute-0 sudo[335036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:37 compute-0 sudo[335036]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:37 compute-0 nova_compute[257802]: 2025-10-02 12:32:37.342 2 DEBUG nova.compute.manager [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:32:37 compute-0 nova_compute[257802]: 2025-10-02 12:32:37.342 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408357.3415077, 466310fc-8494-43ad-ab29-1691041fc97d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:32:37 compute-0 nova_compute[257802]: 2025-10-02 12:32:37.343 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] VM Started (Lifecycle Event)
Oct 02 12:32:37 compute-0 nova_compute[257802]: 2025-10-02 12:32:37.350 2 DEBUG nova.virt.libvirt.driver [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:32:37 compute-0 nova_compute[257802]: 2025-10-02 12:32:37.355 2 INFO nova.virt.libvirt.driver [-] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Instance spawned successfully.
Oct 02 12:32:37 compute-0 nova_compute[257802]: 2025-10-02 12:32:37.355 2 DEBUG nova.virt.libvirt.driver [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:32:37 compute-0 sudo[335061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:32:37 compute-0 sudo[335061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:37 compute-0 sudo[335061]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:37 compute-0 nova_compute[257802]: 2025-10-02 12:32:37.374 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:32:37 compute-0 nova_compute[257802]: 2025-10-02 12:32:37.381 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:32:37 compute-0 nova_compute[257802]: 2025-10-02 12:32:37.386 2 DEBUG nova.virt.libvirt.driver [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:32:37 compute-0 nova_compute[257802]: 2025-10-02 12:32:37.386 2 DEBUG nova.virt.libvirt.driver [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:32:37 compute-0 nova_compute[257802]: 2025-10-02 12:32:37.387 2 DEBUG nova.virt.libvirt.driver [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:32:37 compute-0 nova_compute[257802]: 2025-10-02 12:32:37.387 2 DEBUG nova.virt.libvirt.driver [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:32:37 compute-0 nova_compute[257802]: 2025-10-02 12:32:37.387 2 DEBUG nova.virt.libvirt.driver [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:32:37 compute-0 nova_compute[257802]: 2025-10-02 12:32:37.388 2 DEBUG nova.virt.libvirt.driver [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:32:37 compute-0 nova_compute[257802]: 2025-10-02 12:32:37.416 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:32:37 compute-0 nova_compute[257802]: 2025-10-02 12:32:37.417 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408357.342631, 466310fc-8494-43ad-ab29-1691041fc97d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:32:37 compute-0 nova_compute[257802]: 2025-10-02 12:32:37.417 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] VM Paused (Lifecycle Event)
Oct 02 12:32:37 compute-0 sudo[335086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:32:37 compute-0 sudo[335086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:37 compute-0 nova_compute[257802]: 2025-10-02 12:32:37.464 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:32:37 compute-0 nova_compute[257802]: 2025-10-02 12:32:37.467 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408357.344802, 466310fc-8494-43ad-ab29-1691041fc97d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:32:37 compute-0 nova_compute[257802]: 2025-10-02 12:32:37.467 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] VM Resumed (Lifecycle Event)
Oct 02 12:32:37 compute-0 nova_compute[257802]: 2025-10-02 12:32:37.499 2 INFO nova.compute.manager [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Took 5.11 seconds to spawn the instance on the hypervisor.
Oct 02 12:32:37 compute-0 nova_compute[257802]: 2025-10-02 12:32:37.500 2 DEBUG nova.compute.manager [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:32:37 compute-0 nova_compute[257802]: 2025-10-02 12:32:37.502 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:32:37 compute-0 nova_compute[257802]: 2025-10-02 12:32:37.506 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:32:37 compute-0 nova_compute[257802]: 2025-10-02 12:32:37.535 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:32:37 compute-0 nova_compute[257802]: 2025-10-02 12:32:37.574 2 INFO nova.compute.manager [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Took 7.92 seconds to build instance.
Oct 02 12:32:37 compute-0 nova_compute[257802]: 2025-10-02 12:32:37.592 2 DEBUG oslo_concurrency.lockutils [None req-bb8d63a8-7c6e-4fce-856a-2247c7ca5a0a 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "466310fc-8494-43ad-ab29-1691041fc97d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2140: 305 pgs: 305 active+clean; 631 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.4 MiB/s wr, 182 op/s
Oct 02 12:32:37 compute-0 podman[335151]: 2025-10-02 12:32:37.719804779 +0000 UTC m=+0.041910012 container create 5b12b91752b6af7e57bc80c5aa7b242a5196413eb1fbe3855a08dc2aded6ca7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_montalcini, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:32:37 compute-0 systemd[1]: Started libpod-conmon-5b12b91752b6af7e57bc80c5aa7b242a5196413eb1fbe3855a08dc2aded6ca7b.scope.
Oct 02 12:32:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2536513026' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:32:37 compute-0 podman[335151]: 2025-10-02 12:32:37.701936169 +0000 UTC m=+0.024041422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:32:37 compute-0 podman[335151]: 2025-10-02 12:32:37.815147773 +0000 UTC m=+0.137253026 container init 5b12b91752b6af7e57bc80c5aa7b242a5196413eb1fbe3855a08dc2aded6ca7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 12:32:37 compute-0 podman[335151]: 2025-10-02 12:32:37.82278833 +0000 UTC m=+0.144893563 container start 5b12b91752b6af7e57bc80c5aa7b242a5196413eb1fbe3855a08dc2aded6ca7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_montalcini, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 12:32:37 compute-0 podman[335151]: 2025-10-02 12:32:37.826163744 +0000 UTC m=+0.148269007 container attach 5b12b91752b6af7e57bc80c5aa7b242a5196413eb1fbe3855a08dc2aded6ca7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_montalcini, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:32:37 compute-0 eager_montalcini[335167]: 167 167
Oct 02 12:32:37 compute-0 systemd[1]: libpod-5b12b91752b6af7e57bc80c5aa7b242a5196413eb1fbe3855a08dc2aded6ca7b.scope: Deactivated successfully.
Oct 02 12:32:37 compute-0 podman[335151]: 2025-10-02 12:32:37.832567831 +0000 UTC m=+0.154673064 container died 5b12b91752b6af7e57bc80c5aa7b242a5196413eb1fbe3855a08dc2aded6ca7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_montalcini, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:32:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:37.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddc97ca5885790167c277c76eeddca73c98f77ebe27d2642b292c858303dd10b-merged.mount: Deactivated successfully.
Oct 02 12:32:37 compute-0 podman[335151]: 2025-10-02 12:32:37.875471246 +0000 UTC m=+0.197576469 container remove 5b12b91752b6af7e57bc80c5aa7b242a5196413eb1fbe3855a08dc2aded6ca7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_montalcini, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 12:32:37 compute-0 systemd[1]: libpod-conmon-5b12b91752b6af7e57bc80c5aa7b242a5196413eb1fbe3855a08dc2aded6ca7b.scope: Deactivated successfully.
Oct 02 12:32:38 compute-0 podman[335194]: 2025-10-02 12:32:38.053708788 +0000 UTC m=+0.045259904 container create 9bb626007845de46e9b21b9d9c51018b27d4f0e3e30f7345007450eedcdcfd67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_cannon, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 12:32:38 compute-0 systemd[1]: Started libpod-conmon-9bb626007845de46e9b21b9d9c51018b27d4f0e3e30f7345007450eedcdcfd67.scope.
Oct 02 12:32:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:32:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cabd194f3a0ab27f8095a10fd75bd6a4c294e880e1cfd5b9208695b0c312ac36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:32:38 compute-0 podman[335194]: 2025-10-02 12:32:38.03429932 +0000 UTC m=+0.025850456 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:32:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cabd194f3a0ab27f8095a10fd75bd6a4c294e880e1cfd5b9208695b0c312ac36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:32:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cabd194f3a0ab27f8095a10fd75bd6a4c294e880e1cfd5b9208695b0c312ac36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:32:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cabd194f3a0ab27f8095a10fd75bd6a4c294e880e1cfd5b9208695b0c312ac36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:32:38 compute-0 podman[335194]: 2025-10-02 12:32:38.146727045 +0000 UTC m=+0.138278201 container init 9bb626007845de46e9b21b9d9c51018b27d4f0e3e30f7345007450eedcdcfd67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:32:38 compute-0 nova_compute[257802]: 2025-10-02 12:32:38.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:38 compute-0 podman[335194]: 2025-10-02 12:32:38.154220899 +0000 UTC m=+0.145772005 container start 9bb626007845de46e9b21b9d9c51018b27d4f0e3e30f7345007450eedcdcfd67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:32:38 compute-0 podman[335194]: 2025-10-02 12:32:38.15831862 +0000 UTC m=+0.149869756 container attach 9bb626007845de46e9b21b9d9c51018b27d4f0e3e30f7345007450eedcdcfd67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_cannon, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 12:32:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:38 compute-0 ceph-mon[73607]: pgmap v2140: 305 pgs: 305 active+clean; 631 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.4 MiB/s wr, 182 op/s
Oct 02 12:32:38 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1278154967' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:38 compute-0 amazing_cannon[335211]: {
Oct 02 12:32:38 compute-0 amazing_cannon[335211]:     "1": [
Oct 02 12:32:38 compute-0 amazing_cannon[335211]:         {
Oct 02 12:32:38 compute-0 amazing_cannon[335211]:             "devices": [
Oct 02 12:32:38 compute-0 amazing_cannon[335211]:                 "/dev/loop3"
Oct 02 12:32:38 compute-0 amazing_cannon[335211]:             ],
Oct 02 12:32:38 compute-0 amazing_cannon[335211]:             "lv_name": "ceph_lv0",
Oct 02 12:32:38 compute-0 amazing_cannon[335211]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:32:38 compute-0 amazing_cannon[335211]:             "lv_size": "7511998464",
Oct 02 12:32:38 compute-0 amazing_cannon[335211]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:32:38 compute-0 amazing_cannon[335211]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:32:38 compute-0 amazing_cannon[335211]:             "name": "ceph_lv0",
Oct 02 12:32:38 compute-0 amazing_cannon[335211]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:32:38 compute-0 amazing_cannon[335211]:             "tags": {
Oct 02 12:32:38 compute-0 amazing_cannon[335211]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:32:38 compute-0 amazing_cannon[335211]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:32:38 compute-0 amazing_cannon[335211]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:32:38 compute-0 amazing_cannon[335211]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:32:38 compute-0 amazing_cannon[335211]:                 "ceph.cluster_name": "ceph",
Oct 02 12:32:38 compute-0 amazing_cannon[335211]:                 "ceph.crush_device_class": "",
Oct 02 12:32:38 compute-0 amazing_cannon[335211]:                 "ceph.encrypted": "0",
Oct 02 12:32:38 compute-0 amazing_cannon[335211]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:32:38 compute-0 amazing_cannon[335211]:                 "ceph.osd_id": "1",
Oct 02 12:32:38 compute-0 amazing_cannon[335211]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:32:38 compute-0 amazing_cannon[335211]:                 "ceph.type": "block",
Oct 02 12:32:38 compute-0 amazing_cannon[335211]:                 "ceph.vdo": "0"
Oct 02 12:32:38 compute-0 amazing_cannon[335211]:             },
Oct 02 12:32:38 compute-0 amazing_cannon[335211]:             "type": "block",
Oct 02 12:32:38 compute-0 amazing_cannon[335211]:             "vg_name": "ceph_vg0"
Oct 02 12:32:38 compute-0 amazing_cannon[335211]:         }
Oct 02 12:32:38 compute-0 amazing_cannon[335211]:     ]
Oct 02 12:32:38 compute-0 amazing_cannon[335211]: }
Oct 02 12:32:38 compute-0 systemd[1]: libpod-9bb626007845de46e9b21b9d9c51018b27d4f0e3e30f7345007450eedcdcfd67.scope: Deactivated successfully.
Oct 02 12:32:39 compute-0 podman[335221]: 2025-10-02 12:32:39.049588202 +0000 UTC m=+0.040486857 container died 9bb626007845de46e9b21b9d9c51018b27d4f0e3e30f7345007450eedcdcfd67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_cannon, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 12:32:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-cabd194f3a0ab27f8095a10fd75bd6a4c294e880e1cfd5b9208695b0c312ac36-merged.mount: Deactivated successfully.
Oct 02 12:32:39 compute-0 podman[335221]: 2025-10-02 12:32:39.117646665 +0000 UTC m=+0.108545310 container remove 9bb626007845de46e9b21b9d9c51018b27d4f0e3e30f7345007450eedcdcfd67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_cannon, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:32:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:32:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:39.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:32:39 compute-0 systemd[1]: libpod-conmon-9bb626007845de46e9b21b9d9c51018b27d4f0e3e30f7345007450eedcdcfd67.scope: Deactivated successfully.
Oct 02 12:32:39 compute-0 sudo[335086]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:39 compute-0 sudo[335236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:32:39 compute-0 sudo[335236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:39 compute-0 sudo[335236]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:39 compute-0 sudo[335261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:32:39 compute-0 sudo[335261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:39 compute-0 sudo[335261]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:39 compute-0 sudo[335286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:32:39 compute-0 sudo[335286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:39 compute-0 sudo[335286]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:39 compute-0 sudo[335311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:32:39 compute-0 sudo[335311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 305 active+clean; 660 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.7 MiB/s wr, 209 op/s
Oct 02 12:32:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e320 do_prune osdmap full prune enabled
Oct 02 12:32:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e321 e321: 3 total, 3 up, 3 in
Oct 02 12:32:39 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e321: 3 total, 3 up, 3 in
Oct 02 12:32:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:39.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:39 compute-0 podman[335375]: 2025-10-02 12:32:39.846364321 +0000 UTC m=+0.057909665 container create 08ec05c64ac7c8ed9fb252d2d814b11957ea3cb597708c1d3c74ef3bd20ee202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:32:39 compute-0 nova_compute[257802]: 2025-10-02 12:32:39.866 2 DEBUG nova.compute.manager [req-02ffa305-bfb2-4423-9665-1c1b24ed92fc req-872391d6-e6d6-490d-bc6e-60bce6bab719 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Received event network-vif-plugged-6eb1649f-3d7a-49bf-9bce-af667e856eb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:39 compute-0 nova_compute[257802]: 2025-10-02 12:32:39.867 2 DEBUG oslo_concurrency.lockutils [req-02ffa305-bfb2-4423-9665-1c1b24ed92fc req-872391d6-e6d6-490d-bc6e-60bce6bab719 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "466310fc-8494-43ad-ab29-1691041fc97d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:39 compute-0 nova_compute[257802]: 2025-10-02 12:32:39.867 2 DEBUG oslo_concurrency.lockutils [req-02ffa305-bfb2-4423-9665-1c1b24ed92fc req-872391d6-e6d6-490d-bc6e-60bce6bab719 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "466310fc-8494-43ad-ab29-1691041fc97d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:39 compute-0 nova_compute[257802]: 2025-10-02 12:32:39.868 2 DEBUG oslo_concurrency.lockutils [req-02ffa305-bfb2-4423-9665-1c1b24ed92fc req-872391d6-e6d6-490d-bc6e-60bce6bab719 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "466310fc-8494-43ad-ab29-1691041fc97d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:39 compute-0 nova_compute[257802]: 2025-10-02 12:32:39.868 2 DEBUG nova.compute.manager [req-02ffa305-bfb2-4423-9665-1c1b24ed92fc req-872391d6-e6d6-490d-bc6e-60bce6bab719 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] No waiting events found dispatching network-vif-plugged-6eb1649f-3d7a-49bf-9bce-af667e856eb8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:39 compute-0 nova_compute[257802]: 2025-10-02 12:32:39.869 2 WARNING nova.compute.manager [req-02ffa305-bfb2-4423-9665-1c1b24ed92fc req-872391d6-e6d6-490d-bc6e-60bce6bab719 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Received unexpected event network-vif-plugged-6eb1649f-3d7a-49bf-9bce-af667e856eb8 for instance with vm_state active and task_state None.
Oct 02 12:32:39 compute-0 systemd[1]: Started libpod-conmon-08ec05c64ac7c8ed9fb252d2d814b11957ea3cb597708c1d3c74ef3bd20ee202.scope.
Oct 02 12:32:39 compute-0 podman[335375]: 2025-10-02 12:32:39.823336415 +0000 UTC m=+0.034881779 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:32:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:32:39 compute-0 podman[335375]: 2025-10-02 12:32:39.931484054 +0000 UTC m=+0.143029408 container init 08ec05c64ac7c8ed9fb252d2d814b11957ea3cb597708c1d3c74ef3bd20ee202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Oct 02 12:32:39 compute-0 podman[335375]: 2025-10-02 12:32:39.937886712 +0000 UTC m=+0.149432056 container start 08ec05c64ac7c8ed9fb252d2d814b11957ea3cb597708c1d3c74ef3bd20ee202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lamarr, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:32:39 compute-0 podman[335375]: 2025-10-02 12:32:39.942055674 +0000 UTC m=+0.153601018 container attach 08ec05c64ac7c8ed9fb252d2d814b11957ea3cb597708c1d3c74ef3bd20ee202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lamarr, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:32:39 compute-0 inspiring_lamarr[335391]: 167 167
Oct 02 12:32:39 compute-0 systemd[1]: libpod-08ec05c64ac7c8ed9fb252d2d814b11957ea3cb597708c1d3c74ef3bd20ee202.scope: Deactivated successfully.
Oct 02 12:32:39 compute-0 podman[335375]: 2025-10-02 12:32:39.945369655 +0000 UTC m=+0.156914999 container died 08ec05c64ac7c8ed9fb252d2d814b11957ea3cb597708c1d3c74ef3bd20ee202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lamarr, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:32:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3afeed0386095866494aab4c0cd8a12c34eb82cdda0286fc7344e64505f46d8-merged.mount: Deactivated successfully.
Oct 02 12:32:39 compute-0 podman[335375]: 2025-10-02 12:32:39.985806059 +0000 UTC m=+0.197351403 container remove 08ec05c64ac7c8ed9fb252d2d814b11957ea3cb597708c1d3c74ef3bd20ee202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lamarr, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:32:39 compute-0 systemd[1]: libpod-conmon-08ec05c64ac7c8ed9fb252d2d814b11957ea3cb597708c1d3c74ef3bd20ee202.scope: Deactivated successfully.
Oct 02 12:32:40 compute-0 nova_compute[257802]: 2025-10-02 12:32:40.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:40 compute-0 podman[335414]: 2025-10-02 12:32:40.172322275 +0000 UTC m=+0.045192272 container create 842782ef48e9818dc8f2039e4c3d50d2c8d96c167d54c1e579c4009f01bc45f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 12:32:40 compute-0 systemd[1]: Started libpod-conmon-842782ef48e9818dc8f2039e4c3d50d2c8d96c167d54c1e579c4009f01bc45f7.scope.
Oct 02 12:32:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d813413e94c29f4e2d601a6a269fbd9920664ccd3451ac5abbd93af3328798be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d813413e94c29f4e2d601a6a269fbd9920664ccd3451ac5abbd93af3328798be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d813413e94c29f4e2d601a6a269fbd9920664ccd3451ac5abbd93af3328798be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d813413e94c29f4e2d601a6a269fbd9920664ccd3451ac5abbd93af3328798be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:32:40 compute-0 podman[335414]: 2025-10-02 12:32:40.151142274 +0000 UTC m=+0.024012291 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:32:40 compute-0 podman[335414]: 2025-10-02 12:32:40.26324726 +0000 UTC m=+0.136117267 container init 842782ef48e9818dc8f2039e4c3d50d2c8d96c167d54c1e579c4009f01bc45f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:32:40 compute-0 podman[335414]: 2025-10-02 12:32:40.269849883 +0000 UTC m=+0.142719880 container start 842782ef48e9818dc8f2039e4c3d50d2c8d96c167d54c1e579c4009f01bc45f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_perlman, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 12:32:40 compute-0 podman[335414]: 2025-10-02 12:32:40.273510723 +0000 UTC m=+0.146380740 container attach 842782ef48e9818dc8f2039e4c3d50d2c8d96c167d54c1e579c4009f01bc45f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_perlman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 12:32:40 compute-0 ceph-mon[73607]: pgmap v2141: 305 pgs: 305 active+clean; 660 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.7 MiB/s wr, 209 op/s
Oct 02 12:32:40 compute-0 ceph-mon[73607]: osdmap e321: 3 total, 3 up, 3 in
Oct 02 12:32:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e321 do_prune osdmap full prune enabled
Oct 02 12:32:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e322 e322: 3 total, 3 up, 3 in
Oct 02 12:32:41 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e322: 3 total, 3 up, 3 in
Oct 02 12:32:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:41.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:41 compute-0 inspiring_perlman[335431]: {
Oct 02 12:32:41 compute-0 inspiring_perlman[335431]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:32:41 compute-0 inspiring_perlman[335431]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:32:41 compute-0 inspiring_perlman[335431]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:32:41 compute-0 inspiring_perlman[335431]:         "osd_id": 1,
Oct 02 12:32:41 compute-0 inspiring_perlman[335431]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:32:41 compute-0 inspiring_perlman[335431]:         "type": "bluestore"
Oct 02 12:32:41 compute-0 inspiring_perlman[335431]:     }
Oct 02 12:32:41 compute-0 inspiring_perlman[335431]: }
Oct 02 12:32:41 compute-0 systemd[1]: libpod-842782ef48e9818dc8f2039e4c3d50d2c8d96c167d54c1e579c4009f01bc45f7.scope: Deactivated successfully.
Oct 02 12:32:41 compute-0 podman[335414]: 2025-10-02 12:32:41.160760276 +0000 UTC m=+1.033630273 container died 842782ef48e9818dc8f2039e4c3d50d2c8d96c167d54c1e579c4009f01bc45f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 12:32:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-d813413e94c29f4e2d601a6a269fbd9920664ccd3451ac5abbd93af3328798be-merged.mount: Deactivated successfully.
Oct 02 12:32:41 compute-0 podman[335414]: 2025-10-02 12:32:41.212097129 +0000 UTC m=+1.084967126 container remove 842782ef48e9818dc8f2039e4c3d50d2c8d96c167d54c1e579c4009f01bc45f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_perlman, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:32:41 compute-0 systemd[1]: libpod-conmon-842782ef48e9818dc8f2039e4c3d50d2c8d96c167d54c1e579c4009f01bc45f7.scope: Deactivated successfully.
Oct 02 12:32:41 compute-0 sudo[335311]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:32:41 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:32:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:32:41 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:32:41 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 59738906-d9df-43f0-a783-6fdfa60e6f90 does not exist
Oct 02 12:32:41 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 3d7966eb-c100-4b61-aad1-64f5b85adf51 does not exist
Oct 02 12:32:41 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev cdd3a5a2-50fe-4e4b-b0db-8c7cd4d84c59 does not exist
Oct 02 12:32:41 compute-0 sudo[335464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:32:41 compute-0 sudo[335464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:41 compute-0 sudo[335464]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:41 compute-0 sudo[335489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:32:41 compute-0 sudo[335489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:41 compute-0 sudo[335489]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2144: 305 pgs: 305 active+clean; 660 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.0 MiB/s wr, 122 op/s
Oct 02 12:32:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:32:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:41.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:32:42 compute-0 ceph-mon[73607]: osdmap e322: 3 total, 3 up, 3 in
Oct 02 12:32:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:32:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:32:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:32:42
Oct 02 12:32:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:32:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:32:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['backups', 'default.rgw.control', '.rgw.root', 'images', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms']
Oct 02 12:32:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:32:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:32:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:32:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:32:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:32:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:32:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:32:42 compute-0 nova_compute[257802]: 2025-10-02 12:32:42.927 2 DEBUG oslo_concurrency.lockutils [None req-1de02e1d-d257-4d96-be15-dc915b3e4e16 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "466310fc-8494-43ad-ab29-1691041fc97d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:42 compute-0 nova_compute[257802]: 2025-10-02 12:32:42.927 2 DEBUG oslo_concurrency.lockutils [None req-1de02e1d-d257-4d96-be15-dc915b3e4e16 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "466310fc-8494-43ad-ab29-1691041fc97d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:42 compute-0 nova_compute[257802]: 2025-10-02 12:32:42.928 2 DEBUG oslo_concurrency.lockutils [None req-1de02e1d-d257-4d96-be15-dc915b3e4e16 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "466310fc-8494-43ad-ab29-1691041fc97d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:42 compute-0 nova_compute[257802]: 2025-10-02 12:32:42.928 2 DEBUG oslo_concurrency.lockutils [None req-1de02e1d-d257-4d96-be15-dc915b3e4e16 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "466310fc-8494-43ad-ab29-1691041fc97d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:42 compute-0 nova_compute[257802]: 2025-10-02 12:32:42.928 2 DEBUG oslo_concurrency.lockutils [None req-1de02e1d-d257-4d96-be15-dc915b3e4e16 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "466310fc-8494-43ad-ab29-1691041fc97d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:42 compute-0 nova_compute[257802]: 2025-10-02 12:32:42.929 2 INFO nova.compute.manager [None req-1de02e1d-d257-4d96-be15-dc915b3e4e16 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Terminating instance
Oct 02 12:32:42 compute-0 nova_compute[257802]: 2025-10-02 12:32:42.930 2 DEBUG nova.compute.manager [None req-1de02e1d-d257-4d96-be15-dc915b3e4e16 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:32:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:32:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:32:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:32:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:32:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:32:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:43.123 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:32:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:43.123 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:32:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:43.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:43 compute-0 nova_compute[257802]: 2025-10-02 12:32:43.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:43 compute-0 nova_compute[257802]: 2025-10-02 12:32:43.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:43 compute-0 ceph-mon[73607]: pgmap v2144: 305 pgs: 305 active+clean; 660 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.0 MiB/s wr, 122 op/s
Oct 02 12:32:43 compute-0 kernel: tap6eb1649f-3d (unregistering): left promiscuous mode
Oct 02 12:32:43 compute-0 NetworkManager[44987]: <info>  [1759408363.3186] device (tap6eb1649f-3d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:32:43 compute-0 ovn_controller[148183]: 2025-10-02T12:32:43Z|00536|binding|INFO|Releasing lport 6eb1649f-3d7a-49bf-9bce-af667e856eb8 from this chassis (sb_readonly=0)
Oct 02 12:32:43 compute-0 ovn_controller[148183]: 2025-10-02T12:32:43Z|00537|binding|INFO|Setting lport 6eb1649f-3d7a-49bf-9bce-af667e856eb8 down in Southbound
Oct 02 12:32:43 compute-0 ovn_controller[148183]: 2025-10-02T12:32:43Z|00538|binding|INFO|Removing iface tap6eb1649f-3d ovn-installed in OVS
Oct 02 12:32:43 compute-0 nova_compute[257802]: 2025-10-02 12:32:43.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:43 compute-0 nova_compute[257802]: 2025-10-02 12:32:43.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:43.362 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:47:f0 10.100.0.11'], port_security=['fa:16:3e:d1:47:f0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '466310fc-8494-43ad-ab29-1691041fc97d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-585473f8-52e4-4e55-96df-8a236d361126', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5533aaac08cd4856af72ef4992bb5e76', 'neutron:revision_number': '4', 'neutron:security_group_ids': '29622381-99e2-43f7-9de1-ba82c9e0ff23', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec297f04-3bda-490f-87d3-1f684caf96fd, chassis=[], tunnel_key=8, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=6eb1649f-3d7a-49bf-9bce-af667e856eb8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:32:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:43.363 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 6eb1649f-3d7a-49bf-9bce-af667e856eb8 in datapath 585473f8-52e4-4e55-96df-8a236d361126 unbound from our chassis
Oct 02 12:32:43 compute-0 nova_compute[257802]: 2025-10-02 12:32:43.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:43.365 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 585473f8-52e4-4e55-96df-8a236d361126
Oct 02 12:32:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:32:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:32:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:43.381 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e509a001-7bd1-4cd3-8bd5-2f0c8185d4c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:32:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:32:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:32:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:43.410 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[232bed1a-c2f4-4c1d-b6f6-128708aaf462]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:43.413 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[3d6d8b82-e153-45b1-8151-be3051959432]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:43 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Oct 02 12:32:43 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d0000007b.scope: Consumed 6.671s CPU time.
Oct 02 12:32:43 compute-0 systemd-machined[211836]: Machine qemu-62-instance-0000007b terminated.
Oct 02 12:32:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:43.442 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[8068e2ce-cc15-4dfd-aff9-4038cf45c831]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:43.460 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1dd6fcf0-ba18-4dd8-98c5-fb0e1bf28d10]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap585473f8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:8e:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618892, 'reachable_time': 35411, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335527, 'error': None, 'target': 'ovnmeta-585473f8-52e4-4e55-96df-8a236d361126', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:43.476 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b0409944-d650-4166-af98-e4e30b2aa883]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap585473f8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618902, 'tstamp': 618902}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335528, 'error': None, 'target': 'ovnmeta-585473f8-52e4-4e55-96df-8a236d361126', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap585473f8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618905, 'tstamp': 618905}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335528, 'error': None, 'target': 'ovnmeta-585473f8-52e4-4e55-96df-8a236d361126', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:43.477 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap585473f8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:43 compute-0 nova_compute[257802]: 2025-10-02 12:32:43.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:43.483 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap585473f8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:43.484 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:43 compute-0 nova_compute[257802]: 2025-10-02 12:32:43.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:43.484 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap585473f8-50, col_values=(('external_ids', {'iface-id': '02b7597d-2fc1-4c56-8603-4dcb0c716c82'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:43.484 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:43 compute-0 nova_compute[257802]: 2025-10-02 12:32:43.575 2 INFO nova.virt.libvirt.driver [-] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Instance destroyed successfully.
Oct 02 12:32:43 compute-0 nova_compute[257802]: 2025-10-02 12:32:43.575 2 DEBUG nova.objects.instance [None req-1de02e1d-d257-4d96-be15-dc915b3e4e16 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lazy-loading 'resources' on Instance uuid 466310fc-8494-43ad-ab29-1691041fc97d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:43 compute-0 nova_compute[257802]: 2025-10-02 12:32:43.594 2 DEBUG nova.virt.libvirt.vif [None req-1de02e1d-d257-4d96-be15-dc915b3e4e16 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:32:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-1184551469',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-1184551469',id=123,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:32:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5533aaac08cd4856af72ef4992bb5e76',ramdisk_id='',reservation_id='r-06v11211',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-AttachVolumeMultiAttachTest-1564585024',owner_user_name='tempest-AttachVolum
eMultiAttachTest-1564585024-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:32:37Z,user_data=None,user_id='22d56fcd2a4b4851bfd126ae4548ee9b',uuid=466310fc-8494-43ad-ab29-1691041fc97d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6eb1649f-3d7a-49bf-9bce-af667e856eb8", "address": "fa:16:3e:d1:47:f0", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb1649f-3d", "ovs_interfaceid": "6eb1649f-3d7a-49bf-9bce-af667e856eb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:32:43 compute-0 nova_compute[257802]: 2025-10-02 12:32:43.594 2 DEBUG nova.network.os_vif_util [None req-1de02e1d-d257-4d96-be15-dc915b3e4e16 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Converting VIF {"id": "6eb1649f-3d7a-49bf-9bce-af667e856eb8", "address": "fa:16:3e:d1:47:f0", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb1649f-3d", "ovs_interfaceid": "6eb1649f-3d7a-49bf-9bce-af667e856eb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:32:43 compute-0 nova_compute[257802]: 2025-10-02 12:32:43.595 2 DEBUG nova.network.os_vif_util [None req-1de02e1d-d257-4d96-be15-dc915b3e4e16 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:47:f0,bridge_name='br-int',has_traffic_filtering=True,id=6eb1649f-3d7a-49bf-9bce-af667e856eb8,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6eb1649f-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:32:43 compute-0 nova_compute[257802]: 2025-10-02 12:32:43.595 2 DEBUG os_vif [None req-1de02e1d-d257-4d96-be15-dc915b3e4e16 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:47:f0,bridge_name='br-int',has_traffic_filtering=True,id=6eb1649f-3d7a-49bf-9bce-af667e856eb8,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6eb1649f-3d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:32:43 compute-0 nova_compute[257802]: 2025-10-02 12:32:43.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:43 compute-0 nova_compute[257802]: 2025-10-02 12:32:43.596 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6eb1649f-3d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:43 compute-0 nova_compute[257802]: 2025-10-02 12:32:43.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:43 compute-0 nova_compute[257802]: 2025-10-02 12:32:43.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:43 compute-0 nova_compute[257802]: 2025-10-02 12:32:43.602 2 INFO os_vif [None req-1de02e1d-d257-4d96-be15-dc915b3e4e16 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:47:f0,bridge_name='br-int',has_traffic_filtering=True,id=6eb1649f-3d7a-49bf-9bce-af667e856eb8,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6eb1649f-3d')
Oct 02 12:32:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2145: 305 pgs: 305 active+clean; 661 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 2.7 MiB/s wr, 340 op/s
Oct 02 12:32:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:32:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:43.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:32:44 compute-0 nova_compute[257802]: 2025-10-02 12:32:44.201 2 DEBUG nova.compute.manager [req-7e6edbf8-6996-4f88-bc00-9714b05cfe17 req-6882a92a-badc-4212-b7f7-9846940fb3cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Received event network-vif-unplugged-6eb1649f-3d7a-49bf-9bce-af667e856eb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:44 compute-0 nova_compute[257802]: 2025-10-02 12:32:44.202 2 DEBUG oslo_concurrency.lockutils [req-7e6edbf8-6996-4f88-bc00-9714b05cfe17 req-6882a92a-badc-4212-b7f7-9846940fb3cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "466310fc-8494-43ad-ab29-1691041fc97d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:44 compute-0 nova_compute[257802]: 2025-10-02 12:32:44.202 2 DEBUG oslo_concurrency.lockutils [req-7e6edbf8-6996-4f88-bc00-9714b05cfe17 req-6882a92a-badc-4212-b7f7-9846940fb3cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "466310fc-8494-43ad-ab29-1691041fc97d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:44 compute-0 nova_compute[257802]: 2025-10-02 12:32:44.202 2 DEBUG oslo_concurrency.lockutils [req-7e6edbf8-6996-4f88-bc00-9714b05cfe17 req-6882a92a-badc-4212-b7f7-9846940fb3cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "466310fc-8494-43ad-ab29-1691041fc97d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:44 compute-0 nova_compute[257802]: 2025-10-02 12:32:44.202 2 DEBUG nova.compute.manager [req-7e6edbf8-6996-4f88-bc00-9714b05cfe17 req-6882a92a-badc-4212-b7f7-9846940fb3cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] No waiting events found dispatching network-vif-unplugged-6eb1649f-3d7a-49bf-9bce-af667e856eb8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:44 compute-0 nova_compute[257802]: 2025-10-02 12:32:44.202 2 DEBUG nova.compute.manager [req-7e6edbf8-6996-4f88-bc00-9714b05cfe17 req-6882a92a-badc-4212-b7f7-9846940fb3cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Received event network-vif-unplugged-6eb1649f-3d7a-49bf-9bce-af667e856eb8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:32:44 compute-0 nova_compute[257802]: 2025-10-02 12:32:44.203 2 DEBUG nova.compute.manager [req-7e6edbf8-6996-4f88-bc00-9714b05cfe17 req-6882a92a-badc-4212-b7f7-9846940fb3cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Received event network-vif-plugged-6eb1649f-3d7a-49bf-9bce-af667e856eb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:44 compute-0 nova_compute[257802]: 2025-10-02 12:32:44.203 2 DEBUG oslo_concurrency.lockutils [req-7e6edbf8-6996-4f88-bc00-9714b05cfe17 req-6882a92a-badc-4212-b7f7-9846940fb3cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "466310fc-8494-43ad-ab29-1691041fc97d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:44 compute-0 nova_compute[257802]: 2025-10-02 12:32:44.203 2 DEBUG oslo_concurrency.lockutils [req-7e6edbf8-6996-4f88-bc00-9714b05cfe17 req-6882a92a-badc-4212-b7f7-9846940fb3cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "466310fc-8494-43ad-ab29-1691041fc97d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:44 compute-0 nova_compute[257802]: 2025-10-02 12:32:44.204 2 DEBUG oslo_concurrency.lockutils [req-7e6edbf8-6996-4f88-bc00-9714b05cfe17 req-6882a92a-badc-4212-b7f7-9846940fb3cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "466310fc-8494-43ad-ab29-1691041fc97d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:44 compute-0 nova_compute[257802]: 2025-10-02 12:32:44.204 2 DEBUG nova.compute.manager [req-7e6edbf8-6996-4f88-bc00-9714b05cfe17 req-6882a92a-badc-4212-b7f7-9846940fb3cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] No waiting events found dispatching network-vif-plugged-6eb1649f-3d7a-49bf-9bce-af667e856eb8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:44 compute-0 nova_compute[257802]: 2025-10-02 12:32:44.204 2 WARNING nova.compute.manager [req-7e6edbf8-6996-4f88-bc00-9714b05cfe17 req-6882a92a-badc-4212-b7f7-9846940fb3cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Received unexpected event network-vif-plugged-6eb1649f-3d7a-49bf-9bce-af667e856eb8 for instance with vm_state active and task_state deleting.
Oct 02 12:32:44 compute-0 nova_compute[257802]: 2025-10-02 12:32:44.208 2 INFO nova.virt.libvirt.driver [None req-1de02e1d-d257-4d96-be15-dc915b3e4e16 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Deleting instance files /var/lib/nova/instances/466310fc-8494-43ad-ab29-1691041fc97d_del
Oct 02 12:32:44 compute-0 nova_compute[257802]: 2025-10-02 12:32:44.209 2 INFO nova.virt.libvirt.driver [None req-1de02e1d-d257-4d96-be15-dc915b3e4e16 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Deletion of /var/lib/nova/instances/466310fc-8494-43ad-ab29-1691041fc97d_del complete
Oct 02 12:32:44 compute-0 nova_compute[257802]: 2025-10-02 12:32:44.264 2 INFO nova.compute.manager [None req-1de02e1d-d257-4d96-be15-dc915b3e4e16 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Took 1.33 seconds to destroy the instance on the hypervisor.
Oct 02 12:32:44 compute-0 nova_compute[257802]: 2025-10-02 12:32:44.265 2 DEBUG oslo.service.loopingcall [None req-1de02e1d-d257-4d96-be15-dc915b3e4e16 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:32:44 compute-0 nova_compute[257802]: 2025-10-02 12:32:44.265 2 DEBUG nova.compute.manager [-] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:32:44 compute-0 nova_compute[257802]: 2025-10-02 12:32:44.265 2 DEBUG nova.network.neutron [-] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:32:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:32:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:45.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:32:45 compute-0 ceph-mon[73607]: pgmap v2145: 305 pgs: 305 active+clean; 661 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 2.7 MiB/s wr, 340 op/s
Oct 02 12:32:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2146: 305 pgs: 305 active+clean; 652 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 1.5 MiB/s wr, 346 op/s
Oct 02 12:32:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:45.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:46 compute-0 nova_compute[257802]: 2025-10-02 12:32:46.199 2 DEBUG nova.network.neutron [-] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:32:46 compute-0 nova_compute[257802]: 2025-10-02 12:32:46.216 2 INFO nova.compute.manager [-] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Took 1.95 seconds to deallocate network for instance.
Oct 02 12:32:46 compute-0 nova_compute[257802]: 2025-10-02 12:32:46.290 2 DEBUG nova.compute.manager [req-39a2a953-eb70-4077-9614-d3d2bca22730 req-4916c54d-100a-4b1a-b968-aab9fac3d58b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Received event network-vif-deleted-6eb1649f-3d7a-49bf-9bce-af667e856eb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/680480766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:46 compute-0 sudo[335561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:32:46 compute-0 sudo[335561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:46 compute-0 sudo[335561]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:46 compute-0 podman[335585]: 2025-10-02 12:32:46.443998949 +0000 UTC m=+0.064791484 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 02 12:32:46 compute-0 sudo[335611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:32:46 compute-0 sudo[335611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:46 compute-0 podman[335586]: 2025-10-02 12:32:46.449389641 +0000 UTC m=+0.069194382 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:32:46 compute-0 podman[335587]: 2025-10-02 12:32:46.450337415 +0000 UTC m=+0.058590422 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:32:46 compute-0 sudo[335611]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:46 compute-0 nova_compute[257802]: 2025-10-02 12:32:46.461 2 INFO nova.compute.manager [None req-1de02e1d-d257-4d96-be15-dc915b3e4e16 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Took 0.25 seconds to detach 1 volumes for instance.
Oct 02 12:32:46 compute-0 nova_compute[257802]: 2025-10-02 12:32:46.514 2 DEBUG oslo_concurrency.lockutils [None req-1de02e1d-d257-4d96-be15-dc915b3e4e16 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:46 compute-0 nova_compute[257802]: 2025-10-02 12:32:46.514 2 DEBUG oslo_concurrency.lockutils [None req-1de02e1d-d257-4d96-be15-dc915b3e4e16 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:46 compute-0 nova_compute[257802]: 2025-10-02 12:32:46.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:46 compute-0 nova_compute[257802]: 2025-10-02 12:32:46.610 2 DEBUG oslo_concurrency.processutils [None req-1de02e1d-d257-4d96-be15-dc915b3e4e16 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:32:47 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/760902498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:47 compute-0 nova_compute[257802]: 2025-10-02 12:32:47.115 2 DEBUG oslo_concurrency.processutils [None req-1de02e1d-d257-4d96-be15-dc915b3e4e16 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:47 compute-0 nova_compute[257802]: 2025-10-02 12:32:47.122 2 DEBUG nova.compute.provider_tree [None req-1de02e1d-d257-4d96-be15-dc915b3e4e16 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:32:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:47.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:47 compute-0 nova_compute[257802]: 2025-10-02 12:32:47.137 2 DEBUG nova.scheduler.client.report [None req-1de02e1d-d257-4d96-be15-dc915b3e4e16 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:32:47 compute-0 nova_compute[257802]: 2025-10-02 12:32:47.158 2 DEBUG oslo_concurrency.lockutils [None req-1de02e1d-d257-4d96-be15-dc915b3e4e16 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:47 compute-0 nova_compute[257802]: 2025-10-02 12:32:47.205 2 INFO nova.scheduler.client.report [None req-1de02e1d-d257-4d96-be15-dc915b3e4e16 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Deleted allocations for instance 466310fc-8494-43ad-ab29-1691041fc97d
Oct 02 12:32:47 compute-0 nova_compute[257802]: 2025-10-02 12:32:47.276 2 DEBUG oslo_concurrency.lockutils [None req-1de02e1d-d257-4d96-be15-dc915b3e4e16 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "466310fc-8494-43ad-ab29-1691041fc97d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.349s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:47 compute-0 ceph-mon[73607]: pgmap v2146: 305 pgs: 305 active+clean; 652 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 1.5 MiB/s wr, 346 op/s
Oct 02 12:32:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/760902498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2147: 305 pgs: 305 active+clean; 639 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 22 KiB/s wr, 322 op/s
Oct 02 12:32:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:47.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:48 compute-0 nova_compute[257802]: 2025-10-02 12:32:48.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e322 do_prune osdmap full prune enabled
Oct 02 12:32:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e323 e323: 3 total, 3 up, 3 in
Oct 02 12:32:48 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e323: 3 total, 3 up, 3 in
Oct 02 12:32:48 compute-0 ceph-mon[73607]: pgmap v2147: 305 pgs: 305 active+clean; 639 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 22 KiB/s wr, 322 op/s
Oct 02 12:32:48 compute-0 nova_compute[257802]: 2025-10-02 12:32:48.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:48 compute-0 nova_compute[257802]: 2025-10-02 12:32:48.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:32:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:49.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:32:49 compute-0 ceph-mon[73607]: osdmap e323: 3 total, 3 up, 3 in
Oct 02 12:32:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2149: 305 pgs: 305 active+clean; 604 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 21 KiB/s wr, 307 op/s
Oct 02 12:32:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:32:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:49.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:32:49 compute-0 podman[335690]: 2025-10-02 12:32:49.943915297 +0000 UTC m=+0.078541192 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct 02 12:32:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:32:50 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3262017287' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:32:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:32:50 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3262017287' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:32:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e323 do_prune osdmap full prune enabled
Oct 02 12:32:50 compute-0 ovn_controller[148183]: 2025-10-02T12:32:50Z|00539|binding|INFO|Releasing lport 02b7597d-2fc1-4c56-8603-4dcb0c716c82 from this chassis (sb_readonly=0)
Oct 02 12:32:50 compute-0 nova_compute[257802]: 2025-10-02 12:32:50.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e324 e324: 3 total, 3 up, 3 in
Oct 02 12:32:50 compute-0 ceph-mon[73607]: pgmap v2149: 305 pgs: 305 active+clean; 604 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 21 KiB/s wr, 307 op/s
Oct 02 12:32:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3262017287' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:32:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3262017287' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:32:50 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e324: 3 total, 3 up, 3 in
Oct 02 12:32:51 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:51.125 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:32:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:51.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:32:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 305 active+clean; 604 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.9 KiB/s wr, 112 op/s
Oct 02 12:32:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:32:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:51.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:32:52 compute-0 ceph-mon[73607]: osdmap e324: 3 total, 3 up, 3 in
Oct 02 12:32:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:32:53 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3224193537' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:53.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:53 compute-0 ceph-mon[73607]: pgmap v2151: 305 pgs: 305 active+clean; 604 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.9 KiB/s wr, 112 op/s
Oct 02 12:32:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3224193537' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:53 compute-0 nova_compute[257802]: 2025-10-02 12:32:53.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:53 compute-0 nova_compute[257802]: 2025-10-02 12:32:53.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2152: 305 pgs: 305 active+clean; 568 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.0 KiB/s wr, 137 op/s
Oct 02 12:32:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:32:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:53.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:32:54 compute-0 nova_compute[257802]: 2025-10-02 12:32:54.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:32:54 compute-0 nova_compute[257802]: 2025-10-02 12:32:54.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:32:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:32:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:32:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:32:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:32:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009698012632607483 of space, bias 1.0, pg target 2.9094037897822447 quantized to 32 (current 32)
Oct 02 12:32:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:32:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004323374685929649 of space, bias 1.0, pg target 1.2883656564070354 quantized to 32 (current 32)
Oct 02 12:32:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:32:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:32:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:32:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019035062465103294 of space, bias 1.0, pg target 0.5653413552135679 quantized to 32 (current 32)
Oct 02 12:32:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:32:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 02 12:32:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:32:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:32:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:32:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.0002699042085427136 quantized to 32 (current 32)
Oct 02 12:32:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:32:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Oct 02 12:32:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:32:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:32:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:32:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Oct 02 12:32:54 compute-0 nova_compute[257802]: 2025-10-02 12:32:54.794 2 DEBUG oslo_concurrency.lockutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "a53afa14-bb7b-4723-8239-2ed285f1bc94" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:54 compute-0 nova_compute[257802]: 2025-10-02 12:32:54.794 2 DEBUG oslo_concurrency.lockutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "a53afa14-bb7b-4723-8239-2ed285f1bc94" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:55 compute-0 nova_compute[257802]: 2025-10-02 12:32:55.060 2 DEBUG nova.compute.manager [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:32:55 compute-0 nova_compute[257802]: 2025-10-02 12:32:55.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:32:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:32:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:55.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:32:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:32:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/313152823' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:32:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:32:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/313152823' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:32:55 compute-0 nova_compute[257802]: 2025-10-02 12:32:55.183 2 DEBUG oslo_concurrency.lockutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:55 compute-0 nova_compute[257802]: 2025-10-02 12:32:55.184 2 DEBUG oslo_concurrency.lockutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:55 compute-0 nova_compute[257802]: 2025-10-02 12:32:55.192 2 DEBUG nova.virt.hardware [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:32:55 compute-0 nova_compute[257802]: 2025-10-02 12:32:55.193 2 INFO nova.compute.claims [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:32:55 compute-0 nova_compute[257802]: 2025-10-02 12:32:55.328 2 DEBUG oslo_concurrency.processutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:55 compute-0 ceph-mon[73607]: pgmap v2152: 305 pgs: 305 active+clean; 568 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.0 KiB/s wr, 137 op/s
Oct 02 12:32:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3040535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/313152823' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:32:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/313152823' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:32:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2153: 305 pgs: 305 active+clean; 568 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 67 KiB/s rd, 2.7 KiB/s wr, 78 op/s
Oct 02 12:32:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:32:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/341854761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:55 compute-0 nova_compute[257802]: 2025-10-02 12:32:55.775 2 DEBUG oslo_concurrency.processutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:55 compute-0 nova_compute[257802]: 2025-10-02 12:32:55.781 2 DEBUG nova.compute.provider_tree [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:32:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:32:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1776294454' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:32:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:32:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1776294454' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:32:55 compute-0 nova_compute[257802]: 2025-10-02 12:32:55.833 2 DEBUG nova.scheduler.client.report [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:32:55 compute-0 nova_compute[257802]: 2025-10-02 12:32:55.864 2 DEBUG oslo_concurrency.lockutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:55.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:55 compute-0 nova_compute[257802]: 2025-10-02 12:32:55.865 2 DEBUG nova.compute.manager [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:32:55 compute-0 nova_compute[257802]: 2025-10-02 12:32:55.934 2 DEBUG nova.compute.manager [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:32:55 compute-0 nova_compute[257802]: 2025-10-02 12:32:55.935 2 DEBUG nova.network.neutron [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:32:55 compute-0 nova_compute[257802]: 2025-10-02 12:32:55.966 2 INFO nova.virt.libvirt.driver [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.016 2 DEBUG nova.compute.manager [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.133 2 DEBUG nova.compute.manager [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.135 2 DEBUG nova.virt.libvirt.driver [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.135 2 INFO nova.virt.libvirt.driver [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Creating image(s)
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.162 2 DEBUG nova.storage.rbd_utils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image a53afa14-bb7b-4723-8239-2ed285f1bc94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.190 2 DEBUG nova.storage.rbd_utils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image a53afa14-bb7b-4723-8239-2ed285f1bc94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.216 2 DEBUG nova.storage.rbd_utils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image a53afa14-bb7b-4723-8239-2ed285f1bc94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.220 2 DEBUG oslo_concurrency.processutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.283 2 DEBUG oslo_concurrency.processutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.283 2 DEBUG oslo_concurrency.lockutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.284 2 DEBUG oslo_concurrency.lockutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.284 2 DEBUG oslo_concurrency.lockutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.311 2 DEBUG nova.storage.rbd_utils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image a53afa14-bb7b-4723-8239-2ed285f1bc94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.315 2 DEBUG oslo_concurrency.processutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 a53afa14-bb7b-4723-8239-2ed285f1bc94_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.523 2 DEBUG oslo_concurrency.lockutils [None req-7c2c9a71-417c-4270-9029-8e48c21c492e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "c8b713f4-4f41-4153-928c-164f2ed108ed" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.524 2 DEBUG oslo_concurrency.lockutils [None req-7c2c9a71-417c-4270-9029-8e48c21c492e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "c8b713f4-4f41-4153-928c-164f2ed108ed" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.524 2 DEBUG oslo_concurrency.lockutils [None req-7c2c9a71-417c-4270-9029-8e48c21c492e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "c8b713f4-4f41-4153-928c-164f2ed108ed-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.524 2 DEBUG oslo_concurrency.lockutils [None req-7c2c9a71-417c-4270-9029-8e48c21c492e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "c8b713f4-4f41-4153-928c-164f2ed108ed-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.524 2 DEBUG oslo_concurrency.lockutils [None req-7c2c9a71-417c-4270-9029-8e48c21c492e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "c8b713f4-4f41-4153-928c-164f2ed108ed-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.526 2 INFO nova.compute.manager [None req-7c2c9a71-417c-4270-9029-8e48c21c492e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Terminating instance
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.526 2 DEBUG nova.compute.manager [None req-7c2c9a71-417c-4270-9029-8e48c21c492e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.576 2 DEBUG nova.policy [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6c932f0d0e594f00855572fbe06ee3aa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cc4d8f857b2d42bf9ae477fc5f514216', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:32:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/341854761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1776294454' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:32:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1776294454' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:32:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1348520795' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:56 compute-0 kernel: tap386c73f3-c5 (unregistering): left promiscuous mode
Oct 02 12:32:56 compute-0 NetworkManager[44987]: <info>  [1759408376.6881] device (tap386c73f3-c5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:56 compute-0 ovn_controller[148183]: 2025-10-02T12:32:56Z|00540|binding|INFO|Releasing lport 386c73f3-c5a1-4edb-894f-841beabaecbd from this chassis (sb_readonly=0)
Oct 02 12:32:56 compute-0 ovn_controller[148183]: 2025-10-02T12:32:56Z|00541|binding|INFO|Setting lport 386c73f3-c5a1-4edb-894f-841beabaecbd down in Southbound
Oct 02 12:32:56 compute-0 ovn_controller[148183]: 2025-10-02T12:32:56Z|00542|binding|INFO|Removing iface tap386c73f3-c5 ovn-installed in OVS
Oct 02 12:32:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:56.701 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:65:0d 10.100.0.4'], port_security=['fa:16:3e:94:65:0d 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c8b713f4-4f41-4153-928c-164f2ed108ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-585473f8-52e4-4e55-96df-8a236d361126', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5533aaac08cd4856af72ef4992bb5e76', 'neutron:revision_number': '8', 'neutron:security_group_ids': '0a7e36b3-799e-47d8-a152-7f7146431afe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec297f04-3bda-490f-87d3-1f684caf96fd, chassis=[], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=386c73f3-c5a1-4edb-894f-841beabaecbd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:32:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:56.702 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 386c73f3-c5a1-4edb-894f-841beabaecbd in datapath 585473f8-52e4-4e55-96df-8a236d361126 unbound from our chassis
Oct 02 12:32:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:56.704 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 585473f8-52e4-4e55-96df-8a236d361126
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:56.724 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[46e756d2-1f47-4869-b360-c1601c3edd8e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:56.751 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[ac1481cc-3bbc-47c8-bb55-b7943ae5932e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:56.754 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[d596d182-6e8c-430f-ac2a-2fbee0810809]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:56 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000073.scope: Deactivated successfully.
Oct 02 12:32:56 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000073.scope: Consumed 15.879s CPU time.
Oct 02 12:32:56 compute-0 systemd-machined[211836]: Machine qemu-61-instance-00000073 terminated.
Oct 02 12:32:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:56.781 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[0ccc0e5f-7b37-4766-93e6-bdfc82d6af24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:56.800 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[229716de-3189-4d9c-808b-cb7a491d8e32]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap585473f8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:8e:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618892, 'reachable_time': 35411, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335844, 'error': None, 'target': 'ovnmeta-585473f8-52e4-4e55-96df-8a236d361126', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:56.820 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[93cc8e0b-cca7-4529-a9d0-7748a361d999]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap585473f8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618902, 'tstamp': 618902}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335848, 'error': None, 'target': 'ovnmeta-585473f8-52e4-4e55-96df-8a236d361126', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap585473f8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618905, 'tstamp': 618905}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335848, 'error': None, 'target': 'ovnmeta-585473f8-52e4-4e55-96df-8a236d361126', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:56.822 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap585473f8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:56.829 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap585473f8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:56.829 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:56.830 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap585473f8-50, col_values=(('external_ids', {'iface-id': '02b7597d-2fc1-4c56-8603-4dcb0c716c82'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:32:56.830 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.961 2 INFO nova.virt.libvirt.driver [-] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Instance destroyed successfully.
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.962 2 DEBUG nova.objects.instance [None req-7c2c9a71-417c-4270-9029-8e48c21c492e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lazy-loading 'resources' on Instance uuid c8b713f4-4f41-4153-928c-164f2ed108ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.988 2 DEBUG nova.virt.libvirt.vif [None req-7c2c9a71-417c-4270-9029-8e48c21c492e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:30:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=115,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBEmvr2XQXDnaV0WQDbbXt57cEK6okdC4PHEYdjpQBx2HU9OQgvgRTm3sGWmsa/AInUTPV9ABsCq2lJ9PCqfb1WP51XCZeB9QBIxafEy8h788huF0550ajkopZIwmSLpiA==',key_name='tempest-keypair-425033456',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:49Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5533aaac08cd4856af72ef4992bb5e76',ramdisk_id='',reservation_id='r-1m21sn7g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw
_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeMultiAttachTest-1564585024',owner_user_name='tempest-AttachVolumeMultiAttachTest-1564585024-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:31:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='22d56fcd2a4b4851bfd126ae4548ee9b',uuid=c8b713f4-4f41-4153-928c-164f2ed108ed,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "386c73f3-c5a1-4edb-894f-841beabaecbd", "address": "fa:16:3e:94:65:0d", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap386c73f3-c5", "ovs_interfaceid": "386c73f3-c5a1-4edb-894f-841beabaecbd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.989 2 DEBUG nova.network.os_vif_util [None req-7c2c9a71-417c-4270-9029-8e48c21c492e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Converting VIF {"id": "386c73f3-c5a1-4edb-894f-841beabaecbd", "address": "fa:16:3e:94:65:0d", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap386c73f3-c5", "ovs_interfaceid": "386c73f3-c5a1-4edb-894f-841beabaecbd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.990 2 DEBUG nova.network.os_vif_util [None req-7c2c9a71-417c-4270-9029-8e48c21c492e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:94:65:0d,bridge_name='br-int',has_traffic_filtering=True,id=386c73f3-c5a1-4edb-894f-841beabaecbd,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap386c73f3-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.990 2 DEBUG os_vif [None req-7c2c9a71-417c-4270-9029-8e48c21c492e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:94:65:0d,bridge_name='br-int',has_traffic_filtering=True,id=386c73f3-c5a1-4edb-894f-841beabaecbd,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap386c73f3-c5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.992 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap386c73f3-c5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:32:56 compute-0 nova_compute[257802]: 2025-10-02 12:32:56.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:57 compute-0 nova_compute[257802]: 2025-10-02 12:32:57.000 2 INFO os_vif [None req-7c2c9a71-417c-4270-9029-8e48c21c492e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:94:65:0d,bridge_name='br-int',has_traffic_filtering=True,id=386c73f3-c5a1-4edb-894f-841beabaecbd,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap386c73f3-c5')
Oct 02 12:32:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:32:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:57.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:32:57 compute-0 nova_compute[257802]: 2025-10-02 12:32:57.486 2 DEBUG nova.network.neutron [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Successfully created port: 8a410c4c-94ba-44f0-9056-16dbab7db1d9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:32:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 305 active+clean; 592 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 85 KiB/s rd, 1.5 MiB/s wr, 85 op/s
Oct 02 12:32:57 compute-0 nova_compute[257802]: 2025-10-02 12:32:57.707 2 DEBUG oslo_concurrency.processutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 a53afa14-bb7b-4723-8239-2ed285f1bc94_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:57 compute-0 ceph-mon[73607]: pgmap v2153: 305 pgs: 305 active+clean; 568 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 67 KiB/s rd, 2.7 KiB/s wr, 78 op/s
Oct 02 12:32:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:57.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:57 compute-0 nova_compute[257802]: 2025-10-02 12:32:57.938 2 DEBUG nova.storage.rbd_utils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] resizing rbd image a53afa14-bb7b-4723-8239-2ed285f1bc94_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.574 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408363.5728729, 466310fc-8494-43ad-ab29-1691041fc97d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.575 2 INFO nova.compute.manager [-] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] VM Stopped (Lifecycle Event)
Oct 02 12:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 39K writes, 151K keys, 39K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.04 MB/s
                                           Cumulative WAL: 39K writes, 13K syncs, 2.85 writes per sync, written: 0.14 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 9006 writes, 35K keys, 9006 commit groups, 1.0 writes per commit group, ingest: 36.22 MB, 0.06 MB/s
                                           Interval WAL: 9006 writes, 3347 syncs, 2.69 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.630 2 DEBUG nova.compute.manager [None req-657d3381-f72a-4f1f-8838-511a56a3f987 - - - - - -] [instance: 466310fc-8494-43ad-ab29-1691041fc97d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.668 2 DEBUG nova.network.neutron [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Successfully updated port: 8a410c4c-94ba-44f0-9056-16dbab7db1d9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.683 2 DEBUG oslo_concurrency.lockutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "refresh_cache-a53afa14-bb7b-4723-8239-2ed285f1bc94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.683 2 DEBUG oslo_concurrency.lockutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquired lock "refresh_cache-a53afa14-bb7b-4723-8239-2ed285f1bc94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.683 2 DEBUG nova.network.neutron [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:32:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e324 do_prune osdmap full prune enabled
Oct 02 12:32:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e325 e325: 3 total, 3 up, 3 in
Oct 02 12:32:58 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e325: 3 total, 3 up, 3 in
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.895 2 DEBUG nova.network.neutron [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.899 2 DEBUG nova.compute.manager [req-8251492e-07ea-4f21-9be8-5b0a4130bddf req-3485f329-34d9-4942-85eb-811238efe6cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Received event network-changed-8a410c4c-94ba-44f0-9056-16dbab7db1d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.900 2 DEBUG nova.compute.manager [req-8251492e-07ea-4f21-9be8-5b0a4130bddf req-3485f329-34d9-4942-85eb-811238efe6cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Refreshing instance network info cache due to event network-changed-8a410c4c-94ba-44f0-9056-16dbab7db1d9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.900 2 DEBUG oslo_concurrency.lockutils [req-8251492e-07ea-4f21-9be8-5b0a4130bddf req-3485f329-34d9-4942-85eb-811238efe6cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-a53afa14-bb7b-4723-8239-2ed285f1bc94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.901 2 DEBUG nova.compute.manager [req-b7668ae9-da5f-41ee-8ace-fd91b93efc5d req-12c90d51-1f73-4e08-8630-4f3fda58dedf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Received event network-vif-unplugged-386c73f3-c5a1-4edb-894f-841beabaecbd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.901 2 DEBUG oslo_concurrency.lockutils [req-b7668ae9-da5f-41ee-8ace-fd91b93efc5d req-12c90d51-1f73-4e08-8630-4f3fda58dedf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "c8b713f4-4f41-4153-928c-164f2ed108ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.901 2 DEBUG oslo_concurrency.lockutils [req-b7668ae9-da5f-41ee-8ace-fd91b93efc5d req-12c90d51-1f73-4e08-8630-4f3fda58dedf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c8b713f4-4f41-4153-928c-164f2ed108ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.902 2 DEBUG oslo_concurrency.lockutils [req-b7668ae9-da5f-41ee-8ace-fd91b93efc5d req-12c90d51-1f73-4e08-8630-4f3fda58dedf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c8b713f4-4f41-4153-928c-164f2ed108ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.902 2 DEBUG nova.compute.manager [req-b7668ae9-da5f-41ee-8ace-fd91b93efc5d req-12c90d51-1f73-4e08-8630-4f3fda58dedf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] No waiting events found dispatching network-vif-unplugged-386c73f3-c5a1-4edb-894f-841beabaecbd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.902 2 DEBUG nova.compute.manager [req-b7668ae9-da5f-41ee-8ace-fd91b93efc5d req-12c90d51-1f73-4e08-8630-4f3fda58dedf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Received event network-vif-unplugged-386c73f3-c5a1-4edb-894f-841beabaecbd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.902 2 DEBUG nova.compute.manager [req-b7668ae9-da5f-41ee-8ace-fd91b93efc5d req-12c90d51-1f73-4e08-8630-4f3fda58dedf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Received event network-vif-plugged-386c73f3-c5a1-4edb-894f-841beabaecbd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.903 2 DEBUG oslo_concurrency.lockutils [req-b7668ae9-da5f-41ee-8ace-fd91b93efc5d req-12c90d51-1f73-4e08-8630-4f3fda58dedf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "c8b713f4-4f41-4153-928c-164f2ed108ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.903 2 DEBUG oslo_concurrency.lockutils [req-b7668ae9-da5f-41ee-8ace-fd91b93efc5d req-12c90d51-1f73-4e08-8630-4f3fda58dedf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c8b713f4-4f41-4153-928c-164f2ed108ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.903 2 DEBUG oslo_concurrency.lockutils [req-b7668ae9-da5f-41ee-8ace-fd91b93efc5d req-12c90d51-1f73-4e08-8630-4f3fda58dedf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c8b713f4-4f41-4153-928c-164f2ed108ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.903 2 DEBUG nova.compute.manager [req-b7668ae9-da5f-41ee-8ace-fd91b93efc5d req-12c90d51-1f73-4e08-8630-4f3fda58dedf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] No waiting events found dispatching network-vif-plugged-386c73f3-c5a1-4edb-894f-841beabaecbd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.903 2 WARNING nova.compute.manager [req-b7668ae9-da5f-41ee-8ace-fd91b93efc5d req-12c90d51-1f73-4e08-8630-4f3fda58dedf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Received unexpected event network-vif-plugged-386c73f3-c5a1-4edb-894f-841beabaecbd for instance with vm_state active and task_state deleting.
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.908 2 DEBUG nova.objects.instance [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'migration_context' on Instance uuid a53afa14-bb7b-4723-8239-2ed285f1bc94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.923 2 DEBUG nova.virt.libvirt.driver [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.924 2 DEBUG nova.virt.libvirt.driver [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Ensure instance console log exists: /var/lib/nova/instances/a53afa14-bb7b-4723-8239-2ed285f1bc94/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.924 2 DEBUG oslo_concurrency.lockutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.924 2 DEBUG oslo_concurrency.lockutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:58 compute-0 nova_compute[257802]: 2025-10-02 12:32:58.925 2 DEBUG oslo_concurrency.lockutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:59 compute-0 ceph-mon[73607]: pgmap v2154: 305 pgs: 305 active+clean; 592 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 85 KiB/s rd, 1.5 MiB/s wr, 85 op/s
Oct 02 12:32:59 compute-0 ceph-mon[73607]: osdmap e325: 3 total, 3 up, 3 in
Oct 02 12:32:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:59.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2156: 305 pgs: 305 active+clean; 671 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 420 KiB/s rd, 7.7 MiB/s wr, 209 op/s
Oct 02 12:32:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:32:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:59.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:00 compute-0 nova_compute[257802]: 2025-10-02 12:33:00.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:33:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3260648778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4028793524' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:00 compute-0 nova_compute[257802]: 2025-10-02 12:33:00.648 2 INFO nova.virt.libvirt.driver [None req-7c2c9a71-417c-4270-9029-8e48c21c492e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Deleting instance files /var/lib/nova/instances/c8b713f4-4f41-4153-928c-164f2ed108ed_del
Oct 02 12:33:00 compute-0 nova_compute[257802]: 2025-10-02 12:33:00.648 2 INFO nova.virt.libvirt.driver [None req-7c2c9a71-417c-4270-9029-8e48c21c492e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Deletion of /var/lib/nova/instances/c8b713f4-4f41-4153-928c-164f2ed108ed_del complete
Oct 02 12:33:00 compute-0 nova_compute[257802]: 2025-10-02 12:33:00.717 2 INFO nova.compute.manager [None req-7c2c9a71-417c-4270-9029-8e48c21c492e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Took 4.19 seconds to destroy the instance on the hypervisor.
Oct 02 12:33:00 compute-0 nova_compute[257802]: 2025-10-02 12:33:00.718 2 DEBUG oslo.service.loopingcall [None req-7c2c9a71-417c-4270-9029-8e48c21c492e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:33:00 compute-0 nova_compute[257802]: 2025-10-02 12:33:00.718 2 DEBUG nova.compute.manager [-] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:33:00 compute-0 nova_compute[257802]: 2025-10-02 12:33:00.719 2 DEBUG nova.network.neutron [-] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:33:01 compute-0 nova_compute[257802]: 2025-10-02 12:33:01.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:33:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:01.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:01 compute-0 nova_compute[257802]: 2025-10-02 12:33:01.208 2 DEBUG nova.network.neutron [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Updating instance_info_cache with network_info: [{"id": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "address": "fa:16:3e:2f:ff:7e", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a410c4c-94", "ovs_interfaceid": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:33:01 compute-0 nova_compute[257802]: 2025-10-02 12:33:01.239 2 DEBUG oslo_concurrency.lockutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Releasing lock "refresh_cache-a53afa14-bb7b-4723-8239-2ed285f1bc94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:33:01 compute-0 nova_compute[257802]: 2025-10-02 12:33:01.239 2 DEBUG nova.compute.manager [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Instance network_info: |[{"id": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "address": "fa:16:3e:2f:ff:7e", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a410c4c-94", "ovs_interfaceid": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:33:01 compute-0 nova_compute[257802]: 2025-10-02 12:33:01.240 2 DEBUG oslo_concurrency.lockutils [req-8251492e-07ea-4f21-9be8-5b0a4130bddf req-3485f329-34d9-4942-85eb-811238efe6cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-a53afa14-bb7b-4723-8239-2ed285f1bc94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:33:01 compute-0 nova_compute[257802]: 2025-10-02 12:33:01.240 2 DEBUG nova.network.neutron [req-8251492e-07ea-4f21-9be8-5b0a4130bddf req-3485f329-34d9-4942-85eb-811238efe6cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Refreshing network info cache for port 8a410c4c-94ba-44f0-9056-16dbab7db1d9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:33:01 compute-0 nova_compute[257802]: 2025-10-02 12:33:01.243 2 DEBUG nova.virt.libvirt.driver [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Start _get_guest_xml network_info=[{"id": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "address": "fa:16:3e:2f:ff:7e", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a410c4c-94", "ovs_interfaceid": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:33:01 compute-0 nova_compute[257802]: 2025-10-02 12:33:01.248 2 WARNING nova.virt.libvirt.driver [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:33:01 compute-0 nova_compute[257802]: 2025-10-02 12:33:01.253 2 DEBUG nova.virt.libvirt.host [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:33:01 compute-0 nova_compute[257802]: 2025-10-02 12:33:01.253 2 DEBUG nova.virt.libvirt.host [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:33:01 compute-0 nova_compute[257802]: 2025-10-02 12:33:01.259 2 DEBUG nova.virt.libvirt.host [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:33:01 compute-0 nova_compute[257802]: 2025-10-02 12:33:01.259 2 DEBUG nova.virt.libvirt.host [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:33:01 compute-0 nova_compute[257802]: 2025-10-02 12:33:01.260 2 DEBUG nova.virt.libvirt.driver [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:33:01 compute-0 nova_compute[257802]: 2025-10-02 12:33:01.261 2 DEBUG nova.virt.hardware [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:33:01 compute-0 nova_compute[257802]: 2025-10-02 12:33:01.261 2 DEBUG nova.virt.hardware [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:33:01 compute-0 nova_compute[257802]: 2025-10-02 12:33:01.261 2 DEBUG nova.virt.hardware [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:33:01 compute-0 nova_compute[257802]: 2025-10-02 12:33:01.262 2 DEBUG nova.virt.hardware [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:33:01 compute-0 nova_compute[257802]: 2025-10-02 12:33:01.262 2 DEBUG nova.virt.hardware [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:33:01 compute-0 nova_compute[257802]: 2025-10-02 12:33:01.262 2 DEBUG nova.virt.hardware [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:33:01 compute-0 nova_compute[257802]: 2025-10-02 12:33:01.262 2 DEBUG nova.virt.hardware [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:33:01 compute-0 nova_compute[257802]: 2025-10-02 12:33:01.262 2 DEBUG nova.virt.hardware [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:33:01 compute-0 nova_compute[257802]: 2025-10-02 12:33:01.263 2 DEBUG nova.virt.hardware [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:33:01 compute-0 nova_compute[257802]: 2025-10-02 12:33:01.263 2 DEBUG nova.virt.hardware [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:33:01 compute-0 nova_compute[257802]: 2025-10-02 12:33:01.263 2 DEBUG nova.virt.hardware [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:33:01 compute-0 nova_compute[257802]: 2025-10-02 12:33:01.265 2 DEBUG oslo_concurrency.processutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:01 compute-0 ceph-mon[73607]: pgmap v2156: 305 pgs: 305 active+clean; 671 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 420 KiB/s rd, 7.7 MiB/s wr, 209 op/s
Oct 02 12:33:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:33:01 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1429998602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:01 compute-0 nova_compute[257802]: 2025-10-02 12:33:01.683 2 DEBUG oslo_concurrency.processutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2157: 305 pgs: 305 active+clean; 671 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 367 KiB/s rd, 6.7 MiB/s wr, 182 op/s
Oct 02 12:33:01 compute-0 nova_compute[257802]: 2025-10-02 12:33:01.718 2 DEBUG nova.storage.rbd_utils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image a53afa14-bb7b-4723-8239-2ed285f1bc94_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:33:01 compute-0 nova_compute[257802]: 2025-10-02 12:33:01.724 2 DEBUG oslo_concurrency.processutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:01.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:01 compute-0 nova_compute[257802]: 2025-10-02 12:33:01.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.007 2 DEBUG nova.network.neutron [-] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.038 2 INFO nova.compute.manager [-] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Took 1.32 seconds to deallocate network for instance.
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.091 2 DEBUG oslo_concurrency.lockutils [None req-7c2c9a71-417c-4270-9029-8e48c21c492e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.092 2 DEBUG oslo_concurrency.lockutils [None req-7c2c9a71-417c-4270-9029-8e48c21c492e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.102 2 DEBUG nova.compute.manager [req-76d92c3d-fa15-40cf-878c-369a1b635445 req-b3158d2e-dfd1-46bb-880c-ee73cf7571d6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Received event network-vif-deleted-386c73f3-c5a1-4edb-894f-841beabaecbd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.179 2 DEBUG oslo_concurrency.processutils [None req-7c2c9a71-417c-4270-9029-8e48c21c492e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:33:02 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1258569572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.217 2 DEBUG oslo_concurrency.processutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.219 2 DEBUG nova.virt.libvirt.vif [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:32:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-408124247',display_name='tempest-ServerRescueNegativeTestJSON-server-408124247',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-408124247',id=126,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cc4d8f857b2d42bf9ae477fc5f514216',ramdisk_id='',reservation_id='r-ysobwts2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-959216005',owner_user_name='tempest-Se
rverRescueNegativeTestJSON-959216005-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:32:56Z,user_data=None,user_id='6c932f0d0e594f00855572fbe06ee3aa',uuid=a53afa14-bb7b-4723-8239-2ed285f1bc94,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "address": "fa:16:3e:2f:ff:7e", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a410c4c-94", "ovs_interfaceid": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.219 2 DEBUG nova.network.os_vif_util [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Converting VIF {"id": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "address": "fa:16:3e:2f:ff:7e", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a410c4c-94", "ovs_interfaceid": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.220 2 DEBUG nova.network.os_vif_util [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:ff:7e,bridge_name='br-int',has_traffic_filtering=True,id=8a410c4c-94ba-44f0-9056-16dbab7db1d9,network=Network(e58f4ba2-c72c-42b8-acea-ca6241431726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a410c4c-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.222 2 DEBUG nova.objects.instance [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'pci_devices' on Instance uuid a53afa14-bb7b-4723-8239-2ed285f1bc94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.262 2 DEBUG nova.virt.libvirt.driver [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:33:02 compute-0 nova_compute[257802]:   <uuid>a53afa14-bb7b-4723-8239-2ed285f1bc94</uuid>
Oct 02 12:33:02 compute-0 nova_compute[257802]:   <name>instance-0000007e</name>
Oct 02 12:33:02 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:33:02 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:33:02 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:33:02 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-408124247</nova:name>
Oct 02 12:33:02 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:33:01</nova:creationTime>
Oct 02 12:33:02 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:33:02 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:33:02 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:33:02 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:33:02 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:33:02 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:33:02 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:33:02 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:33:02 compute-0 nova_compute[257802]:         <nova:user uuid="6c932f0d0e594f00855572fbe06ee3aa">tempest-ServerRescueNegativeTestJSON-959216005-project-member</nova:user>
Oct 02 12:33:02 compute-0 nova_compute[257802]:         <nova:project uuid="cc4d8f857b2d42bf9ae477fc5f514216">tempest-ServerRescueNegativeTestJSON-959216005</nova:project>
Oct 02 12:33:02 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:33:02 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:33:02 compute-0 nova_compute[257802]:         <nova:port uuid="8a410c4c-94ba-44f0-9056-16dbab7db1d9">
Oct 02 12:33:02 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:33:02 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:33:02 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:33:02 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <system>
Oct 02 12:33:02 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:33:02 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:33:02 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:33:02 compute-0 nova_compute[257802]:       <entry name="serial">a53afa14-bb7b-4723-8239-2ed285f1bc94</entry>
Oct 02 12:33:02 compute-0 nova_compute[257802]:       <entry name="uuid">a53afa14-bb7b-4723-8239-2ed285f1bc94</entry>
Oct 02 12:33:02 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     </system>
Oct 02 12:33:02 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:33:02 compute-0 nova_compute[257802]:   <os>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:   </os>
Oct 02 12:33:02 compute-0 nova_compute[257802]:   <features>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:   </features>
Oct 02 12:33:02 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:33:02 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:33:02 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:33:02 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/a53afa14-bb7b-4723-8239-2ed285f1bc94_disk">
Oct 02 12:33:02 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:       </source>
Oct 02 12:33:02 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:33:02 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:33:02 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:33:02 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/a53afa14-bb7b-4723-8239-2ed285f1bc94_disk.config">
Oct 02 12:33:02 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:       </source>
Oct 02 12:33:02 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:33:02 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:33:02 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:33:02 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:2f:ff:7e"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:       <target dev="tap8a410c4c-94"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:33:02 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/a53afa14-bb7b-4723-8239-2ed285f1bc94/console.log" append="off"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <video>
Oct 02 12:33:02 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     </video>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:33:02 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:33:02 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:33:02 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:33:02 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:33:02 compute-0 nova_compute[257802]: </domain>
Oct 02 12:33:02 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.263 2 DEBUG nova.compute.manager [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Preparing to wait for external event network-vif-plugged-8a410c4c-94ba-44f0-9056-16dbab7db1d9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.263 2 DEBUG oslo_concurrency.lockutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "a53afa14-bb7b-4723-8239-2ed285f1bc94-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.264 2 DEBUG oslo_concurrency.lockutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "a53afa14-bb7b-4723-8239-2ed285f1bc94-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.264 2 DEBUG oslo_concurrency.lockutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "a53afa14-bb7b-4723-8239-2ed285f1bc94-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.265 2 DEBUG nova.virt.libvirt.vif [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:32:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-408124247',display_name='tempest-ServerRescueNegativeTestJSON-server-408124247',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-408124247',id=126,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cc4d8f857b2d42bf9ae477fc5f514216',ramdisk_id='',reservation_id='r-ysobwts2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-959216005',owner_user_name='tempest-ServerRescueNegativeTestJSON-959216005-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:32:56Z,user_data=None,user_id='6c932f0d0e594f00855572fbe06ee3aa',uuid=a53afa14-bb7b-4723-8239-2ed285f1bc94,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "address": "fa:16:3e:2f:ff:7e", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a410c4c-94", "ovs_interfaceid": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.265 2 DEBUG nova.network.os_vif_util [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Converting VIF {"id": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "address": "fa:16:3e:2f:ff:7e", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a410c4c-94", "ovs_interfaceid": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.265 2 DEBUG nova.network.os_vif_util [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:ff:7e,bridge_name='br-int',has_traffic_filtering=True,id=8a410c4c-94ba-44f0-9056-16dbab7db1d9,network=Network(e58f4ba2-c72c-42b8-acea-ca6241431726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a410c4c-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.266 2 DEBUG os_vif [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:ff:7e,bridge_name='br-int',has_traffic_filtering=True,id=8a410c4c-94ba-44f0-9056-16dbab7db1d9,network=Network(e58f4ba2-c72c-42b8-acea-ca6241431726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a410c4c-94') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.267 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.267 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.270 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a410c4c-94, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.270 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8a410c4c-94, col_values=(('external_ids', {'iface-id': '8a410c4c-94ba-44f0-9056-16dbab7db1d9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2f:ff:7e', 'vm-uuid': 'a53afa14-bb7b-4723-8239-2ed285f1bc94'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:02 compute-0 NetworkManager[44987]: <info>  [1759408382.2722] manager: (tap8a410c4c-94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/257)
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.276 2 INFO os_vif [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:ff:7e,bridge_name='br-int',has_traffic_filtering=True,id=8a410c4c-94ba-44f0-9056-16dbab7db1d9,network=Network(e58f4ba2-c72c-42b8-acea-ca6241431726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a410c4c-94')
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.367 2 DEBUG nova.virt.libvirt.driver [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.367 2 DEBUG nova.virt.libvirt.driver [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.368 2 DEBUG nova.virt.libvirt.driver [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] No VIF found with MAC fa:16:3e:2f:ff:7e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.368 2 INFO nova.virt.libvirt.driver [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Using config drive
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.396 2 DEBUG nova.storage.rbd_utils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image a53afa14-bb7b-4723-8239-2ed285f1bc94_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:33:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:33:02 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2082583953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.666 2 DEBUG oslo_concurrency.processutils [None req-7c2c9a71-417c-4270-9029-8e48c21c492e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.671 2 DEBUG nova.compute.provider_tree [None req-7c2c9a71-417c-4270-9029-8e48c21c492e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:33:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1429998602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:02 compute-0 ceph-mon[73607]: pgmap v2157: 305 pgs: 305 active+clean; 671 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 367 KiB/s rd, 6.7 MiB/s wr, 182 op/s
Oct 02 12:33:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1258569572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.693 2 DEBUG nova.scheduler.client.report [None req-7c2c9a71-417c-4270-9029-8e48c21c492e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.720 2 DEBUG oslo_concurrency.lockutils [None req-7c2c9a71-417c-4270-9029-8e48c21c492e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.756 2 INFO nova.scheduler.client.report [None req-7c2c9a71-417c-4270-9029-8e48c21c492e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Deleted allocations for instance c8b713f4-4f41-4153-928c-164f2ed108ed
Oct 02 12:33:02 compute-0 nova_compute[257802]: 2025-10-02 12:33:02.874 2 DEBUG oslo_concurrency.lockutils [None req-7c2c9a71-417c-4270-9029-8e48c21c492e 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "c8b713f4-4f41-4153-928c-164f2ed108ed" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.350s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:03 compute-0 nova_compute[257802]: 2025-10-02 12:33:03.080 2 INFO nova.virt.libvirt.driver [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Creating config drive at /var/lib/nova/instances/a53afa14-bb7b-4723-8239-2ed285f1bc94/disk.config
Oct 02 12:33:03 compute-0 nova_compute[257802]: 2025-10-02 12:33:03.086 2 DEBUG oslo_concurrency.processutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a53afa14-bb7b-4723-8239-2ed285f1bc94/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7y8tmb2g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:03 compute-0 nova_compute[257802]: 2025-10-02 12:33:03.144 2 DEBUG nova.network.neutron [req-8251492e-07ea-4f21-9be8-5b0a4130bddf req-3485f329-34d9-4942-85eb-811238efe6cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Updated VIF entry in instance network info cache for port 8a410c4c-94ba-44f0-9056-16dbab7db1d9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:33:03 compute-0 nova_compute[257802]: 2025-10-02 12:33:03.145 2 DEBUG nova.network.neutron [req-8251492e-07ea-4f21-9be8-5b0a4130bddf req-3485f329-34d9-4942-85eb-811238efe6cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Updating instance_info_cache with network_info: [{"id": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "address": "fa:16:3e:2f:ff:7e", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a410c4c-94", "ovs_interfaceid": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:33:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:03.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:03 compute-0 nova_compute[257802]: 2025-10-02 12:33:03.162 2 DEBUG oslo_concurrency.lockutils [req-8251492e-07ea-4f21-9be8-5b0a4130bddf req-3485f329-34d9-4942-85eb-811238efe6cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-a53afa14-bb7b-4723-8239-2ed285f1bc94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:33:03 compute-0 nova_compute[257802]: 2025-10-02 12:33:03.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:03 compute-0 nova_compute[257802]: 2025-10-02 12:33:03.222 2 DEBUG oslo_concurrency.processutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a53afa14-bb7b-4723-8239-2ed285f1bc94/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7y8tmb2g" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:03 compute-0 nova_compute[257802]: 2025-10-02 12:33:03.251 2 DEBUG nova.storage.rbd_utils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image a53afa14-bb7b-4723-8239-2ed285f1bc94_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:33:03 compute-0 nova_compute[257802]: 2025-10-02 12:33:03.255 2 DEBUG oslo_concurrency.processutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a53afa14-bb7b-4723-8239-2ed285f1bc94/disk.config a53afa14-bb7b-4723-8239-2ed285f1bc94_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:03 compute-0 nova_compute[257802]: 2025-10-02 12:33:03.446 2 DEBUG oslo_concurrency.processutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a53afa14-bb7b-4723-8239-2ed285f1bc94/disk.config a53afa14-bb7b-4723-8239-2ed285f1bc94_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:03 compute-0 nova_compute[257802]: 2025-10-02 12:33:03.447 2 INFO nova.virt.libvirt.driver [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Deleting local config drive /var/lib/nova/instances/a53afa14-bb7b-4723-8239-2ed285f1bc94/disk.config because it was imported into RBD.
Oct 02 12:33:03 compute-0 kernel: tap8a410c4c-94: entered promiscuous mode
Oct 02 12:33:03 compute-0 NetworkManager[44987]: <info>  [1759408383.5150] manager: (tap8a410c4c-94): new Tun device (/org/freedesktop/NetworkManager/Devices/258)
Oct 02 12:33:03 compute-0 nova_compute[257802]: 2025-10-02 12:33:03.517 2 DEBUG oslo_concurrency.lockutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Acquiring lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:03 compute-0 nova_compute[257802]: 2025-10-02 12:33:03.517 2 DEBUG oslo_concurrency.lockutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:03 compute-0 ovn_controller[148183]: 2025-10-02T12:33:03Z|00543|binding|INFO|Claiming lport 8a410c4c-94ba-44f0-9056-16dbab7db1d9 for this chassis.
Oct 02 12:33:03 compute-0 ovn_controller[148183]: 2025-10-02T12:33:03Z|00544|binding|INFO|8a410c4c-94ba-44f0-9056-16dbab7db1d9: Claiming fa:16:3e:2f:ff:7e 10.100.0.14
Oct 02 12:33:03 compute-0 nova_compute[257802]: 2025-10-02 12:33:03.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:03 compute-0 ovn_controller[148183]: 2025-10-02T12:33:03Z|00545|binding|INFO|Setting lport 8a410c4c-94ba-44f0-9056-16dbab7db1d9 ovn-installed in OVS
Oct 02 12:33:03 compute-0 nova_compute[257802]: 2025-10-02 12:33:03.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:03 compute-0 ovn_controller[148183]: 2025-10-02T12:33:03Z|00546|binding|INFO|Setting lport 8a410c4c-94ba-44f0-9056-16dbab7db1d9 up in Southbound
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:03.543 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:ff:7e 10.100.0.14'], port_security=['fa:16:3e:2f:ff:7e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a53afa14-bb7b-4723-8239-2ed285f1bc94', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e58f4ba2-c72c-42b8-acea-ca6241431726', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc4d8f857b2d42bf9ae477fc5f514216', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3110ab08-53b5-412f-abb1-fdd400b42e71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff6421fa-d014-4140-8a8b-1356d60478c0, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=8a410c4c-94ba-44f0-9056-16dbab7db1d9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:03.544 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 8a410c4c-94ba-44f0-9056-16dbab7db1d9 in datapath e58f4ba2-c72c-42b8-acea-ca6241431726 bound to our chassis
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:03.545 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e58f4ba2-c72c-42b8-acea-ca6241431726
Oct 02 12:33:03 compute-0 systemd-udevd[336114]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:03.556 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[58bd77f8-ec80-4f0b-91eb-42fe7c976bba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:03.557 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape58f4ba2-c1 in ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:03.559 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape58f4ba2-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:03.559 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[69251e91-6bc4-4034-b9b6-0e8a7de1a139]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:03.562 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[276096b5-cdaa-42c7-b4f4-17f398c0b08b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:03 compute-0 systemd-machined[211836]: New machine qemu-63-instance-0000007e.
Oct 02 12:33:03 compute-0 NetworkManager[44987]: <info>  [1759408383.5688] device (tap8a410c4c-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:33:03 compute-0 NetworkManager[44987]: <info>  [1759408383.5703] device (tap8a410c4c-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:03.576 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[1ba3cb0b-f6de-445a-9a6c-e7059e1fa05b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:03 compute-0 systemd[1]: Started Virtual Machine qemu-63-instance-0000007e.
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:03.591 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4b68a18b-6067-49fc-ac53-ba703f16288c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:03.616 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[1b59bb8c-9842-4af9-bbca-c8766c81b03b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:03 compute-0 systemd-udevd[336117]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:33:03 compute-0 NetworkManager[44987]: <info>  [1759408383.6217] manager: (tape58f4ba2-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/259)
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:03.620 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d2b88564-5690-4534-a73d-5673d0b9d221]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:03 compute-0 nova_compute[257802]: 2025-10-02 12:33:03.641 2 DEBUG nova.compute.manager [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:03.654 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[934b0ee1-7eae-4951-84b5-98722f01fb3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:03.657 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[1d7309dc-5a21-4e5c-b43a-49e40688ba5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:03 compute-0 NetworkManager[44987]: <info>  [1759408383.6795] device (tape58f4ba2-c0): carrier: link connected
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:03.685 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[dd1c8072-a884-439d-8c7c-b705adc31ad3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2158: 305 pgs: 305 active+clean; 610 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 6.8 MiB/s wr, 202 op/s
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:03.701 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[24443a0d-af15-4da7-9fa6-a1115e66222b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape58f4ba2-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:ad:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 645130, 'reachable_time': 29724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336146, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:03.718 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2ba91011-0abd-4141-ae83-69282eecdf5c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fede:ad0c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 645130, 'tstamp': 645130}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336147, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:03.739 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[278aee43-8484-411e-8fa6-22fb40b01be4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape58f4ba2-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:ad:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 645130, 'reachable_time': 29724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 336148, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:03 compute-0 nova_compute[257802]: 2025-10-02 12:33:03.752 2 DEBUG oslo_concurrency.lockutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:03 compute-0 nova_compute[257802]: 2025-10-02 12:33:03.752 2 DEBUG oslo_concurrency.lockutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:03 compute-0 nova_compute[257802]: 2025-10-02 12:33:03.758 2 DEBUG nova.virt.hardware [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:33:03 compute-0 nova_compute[257802]: 2025-10-02 12:33:03.759 2 INFO nova.compute.claims [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:33:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2082583953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:03.775 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[fc8501a1-0d9e-4dc3-8645-dd881e1b4a3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:03.828 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[08206b53-5703-4eeb-8581-3513b54e225e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:03.829 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape58f4ba2-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:03.830 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:03.830 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape58f4ba2-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:03 compute-0 kernel: tape58f4ba2-c0: entered promiscuous mode
Oct 02 12:33:03 compute-0 NetworkManager[44987]: <info>  [1759408383.8324] manager: (tape58f4ba2-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/260)
Oct 02 12:33:03 compute-0 nova_compute[257802]: 2025-10-02 12:33:03.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:03.839 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape58f4ba2-c0, col_values=(('external_ids', {'iface-id': '81a5a13b-b81c-444a-8751-b35a35cdf3dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:03 compute-0 ovn_controller[148183]: 2025-10-02T12:33:03Z|00547|binding|INFO|Releasing lport 81a5a13b-b81c-444a-8751-b35a35cdf3dc from this chassis (sb_readonly=0)
Oct 02 12:33:03 compute-0 nova_compute[257802]: 2025-10-02 12:33:03.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:03.844 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e58f4ba2-c72c-42b8-acea-ca6241431726.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e58f4ba2-c72c-42b8-acea-ca6241431726.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:03.846 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[20e4287e-8a89-439a-9657-7e943fb243dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:03.846 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-e58f4ba2-c72c-42b8-acea-ca6241431726
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/e58f4ba2-c72c-42b8-acea-ca6241431726.pid.haproxy
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID e58f4ba2-c72c-42b8-acea-ca6241431726
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:03.847 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'env', 'PROCESS_TAG=haproxy-e58f4ba2-c72c-42b8-acea-ca6241431726', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e58f4ba2-c72c-42b8-acea-ca6241431726.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:33:03 compute-0 nova_compute[257802]: 2025-10-02 12:33:03.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:03.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.022 2 DEBUG oslo_concurrency.processutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.206 2 DEBUG nova.compute.manager [req-41338b81-5e61-497e-aee9-5b3d0d2213f1 req-5fc5206f-cfeb-46ea-9963-b7406aae5aa3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Received event network-vif-plugged-8a410c4c-94ba-44f0-9056-16dbab7db1d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.207 2 DEBUG oslo_concurrency.lockutils [req-41338b81-5e61-497e-aee9-5b3d0d2213f1 req-5fc5206f-cfeb-46ea-9963-b7406aae5aa3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a53afa14-bb7b-4723-8239-2ed285f1bc94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.208 2 DEBUG oslo_concurrency.lockutils [req-41338b81-5e61-497e-aee9-5b3d0d2213f1 req-5fc5206f-cfeb-46ea-9963-b7406aae5aa3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a53afa14-bb7b-4723-8239-2ed285f1bc94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.208 2 DEBUG oslo_concurrency.lockutils [req-41338b81-5e61-497e-aee9-5b3d0d2213f1 req-5fc5206f-cfeb-46ea-9963-b7406aae5aa3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a53afa14-bb7b-4723-8239-2ed285f1bc94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.208 2 DEBUG nova.compute.manager [req-41338b81-5e61-497e-aee9-5b3d0d2213f1 req-5fc5206f-cfeb-46ea-9963-b7406aae5aa3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Processing event network-vif-plugged-8a410c4c-94ba-44f0-9056-16dbab7db1d9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.208 2 DEBUG nova.compute.manager [req-41338b81-5e61-497e-aee9-5b3d0d2213f1 req-5fc5206f-cfeb-46ea-9963-b7406aae5aa3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Received event network-vif-plugged-8a410c4c-94ba-44f0-9056-16dbab7db1d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.208 2 DEBUG oslo_concurrency.lockutils [req-41338b81-5e61-497e-aee9-5b3d0d2213f1 req-5fc5206f-cfeb-46ea-9963-b7406aae5aa3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a53afa14-bb7b-4723-8239-2ed285f1bc94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.209 2 DEBUG oslo_concurrency.lockutils [req-41338b81-5e61-497e-aee9-5b3d0d2213f1 req-5fc5206f-cfeb-46ea-9963-b7406aae5aa3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a53afa14-bb7b-4723-8239-2ed285f1bc94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.209 2 DEBUG oslo_concurrency.lockutils [req-41338b81-5e61-497e-aee9-5b3d0d2213f1 req-5fc5206f-cfeb-46ea-9963-b7406aae5aa3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a53afa14-bb7b-4723-8239-2ed285f1bc94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.209 2 DEBUG nova.compute.manager [req-41338b81-5e61-497e-aee9-5b3d0d2213f1 req-5fc5206f-cfeb-46ea-9963-b7406aae5aa3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] No waiting events found dispatching network-vif-plugged-8a410c4c-94ba-44f0-9056-16dbab7db1d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.209 2 WARNING nova.compute.manager [req-41338b81-5e61-497e-aee9-5b3d0d2213f1 req-5fc5206f-cfeb-46ea-9963-b7406aae5aa3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Received unexpected event network-vif-plugged-8a410c4c-94ba-44f0-9056-16dbab7db1d9 for instance with vm_state building and task_state spawning.
Oct 02 12:33:04 compute-0 podman[336213]: 2025-10-02 12:33:04.187128258 +0000 UTC m=+0.031366812 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:33:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:33:04 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2555244338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.510 2 DEBUG oslo_concurrency.processutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.516 2 DEBUG nova.compute.provider_tree [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.580 2 DEBUG nova.scheduler.client.report [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.669 2 DEBUG oslo_concurrency.lockutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.670 2 DEBUG nova.compute.manager [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.716 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408384.7157233, a53afa14-bb7b-4723-8239-2ed285f1bc94 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.716 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] VM Started (Lifecycle Event)
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.718 2 DEBUG nova.compute.manager [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.721 2 DEBUG nova.virt.libvirt.driver [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.725 2 INFO nova.virt.libvirt.driver [-] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Instance spawned successfully.
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.725 2 DEBUG nova.virt.libvirt.driver [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.738 2 DEBUG nova.compute.manager [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.738 2 DEBUG nova.network.neutron [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:33:04 compute-0 podman[336213]: 2025-10-02 12:33:04.750809036 +0000 UTC m=+0.595047560 container create adb392b14230f829e9e6217ca1c167b9bf175727f3c11e4cc31bfbb2c0f7570b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.759 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.764 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.772 2 DEBUG nova.virt.libvirt.driver [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.772 2 DEBUG nova.virt.libvirt.driver [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.773 2 DEBUG nova.virt.libvirt.driver [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.773 2 DEBUG nova.virt.libvirt.driver [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.773 2 DEBUG nova.virt.libvirt.driver [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.773 2 DEBUG nova.virt.libvirt.driver [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.785 2 INFO nova.virt.libvirt.driver [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.794 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.794 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408384.7183049, a53afa14-bb7b-4723-8239-2ed285f1bc94 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.795 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] VM Paused (Lifecycle Event)
Oct 02 12:33:04 compute-0 systemd[1]: Started libpod-conmon-adb392b14230f829e9e6217ca1c167b9bf175727f3c11e4cc31bfbb2c0f7570b.scope.
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.834 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:33:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.841 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408384.7215905, a53afa14-bb7b-4723-8239-2ed285f1bc94 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.842 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] VM Resumed (Lifecycle Event)
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.843 2 DEBUG nova.compute.manager [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:33:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91503f57da98b3b6cdaca381d69036ecb9a3a6913c3ea07f1b80e2187ae89425/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.919 2 INFO nova.compute.manager [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Took 8.79 seconds to spawn the instance on the hypervisor.
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.920 2 DEBUG nova.compute.manager [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.926 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.929 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:33:04 compute-0 podman[336213]: 2025-10-02 12:33:04.96982002 +0000 UTC m=+0.814058644 container init adb392b14230f829e9e6217ca1c167b9bf175727f3c11e4cc31bfbb2c0f7570b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:33:04 compute-0 podman[336213]: 2025-10-02 12:33:04.975992972 +0000 UTC m=+0.820231536 container start adb392b14230f829e9e6217ca1c167b9bf175727f3c11e4cc31bfbb2c0f7570b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:33:04 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.977 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:33:04 compute-0 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[336257]: [NOTICE]   (336261) : New worker (336263) forked
Oct 02 12:33:04 compute-0 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[336257]: [NOTICE]   (336261) : Loading success.
Oct 02 12:33:05 compute-0 nova_compute[257802]: 2025-10-02 12:33:04.999 2 DEBUG nova.policy [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1c2fbed9aaf84b4e864db97bec4c797c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '385766b9209941f3ab805e8d5e2af163', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:33:05 compute-0 nova_compute[257802]: 2025-10-02 12:33:05.060 2 INFO nova.compute.manager [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Took 9.91 seconds to build instance.
Oct 02 12:33:05 compute-0 nova_compute[257802]: 2025-10-02 12:33:05.117 2 DEBUG oslo_concurrency.lockutils [None req-ddc84800-1a6b-4b16-8276-e06b3bf3984e 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "a53afa14-bb7b-4723-8239-2ed285f1bc94" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.323s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:05.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:05 compute-0 nova_compute[257802]: 2025-10-02 12:33:05.157 2 DEBUG nova.compute.manager [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:33:05 compute-0 nova_compute[257802]: 2025-10-02 12:33:05.159 2 DEBUG nova.virt.libvirt.driver [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:33:05 compute-0 nova_compute[257802]: 2025-10-02 12:33:05.159 2 INFO nova.virt.libvirt.driver [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Creating image(s)
Oct 02 12:33:05 compute-0 nova_compute[257802]: 2025-10-02 12:33:05.194 2 DEBUG nova.storage.rbd_utils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] rbd image 49fade07-5c87-4e89-bbec-cce4fc94a4a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:33:05 compute-0 nova_compute[257802]: 2025-10-02 12:33:05.305 2 DEBUG nova.storage.rbd_utils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] rbd image 49fade07-5c87-4e89-bbec-cce4fc94a4a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:33:05 compute-0 ceph-mon[73607]: pgmap v2158: 305 pgs: 305 active+clean; 610 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 6.8 MiB/s wr, 202 op/s
Oct 02 12:33:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2555244338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:05 compute-0 nova_compute[257802]: 2025-10-02 12:33:05.397 2 DEBUG nova.storage.rbd_utils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] rbd image 49fade07-5c87-4e89-bbec-cce4fc94a4a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:33:05 compute-0 nova_compute[257802]: 2025-10-02 12:33:05.405 2 DEBUG oslo_concurrency.processutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:05 compute-0 nova_compute[257802]: 2025-10-02 12:33:05.473 2 DEBUG oslo_concurrency.processutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:05 compute-0 nova_compute[257802]: 2025-10-02 12:33:05.475 2 DEBUG oslo_concurrency.lockutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:05 compute-0 nova_compute[257802]: 2025-10-02 12:33:05.476 2 DEBUG oslo_concurrency.lockutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:05 compute-0 nova_compute[257802]: 2025-10-02 12:33:05.476 2 DEBUG oslo_concurrency.lockutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:05 compute-0 nova_compute[257802]: 2025-10-02 12:33:05.510 2 DEBUG nova.storage.rbd_utils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] rbd image 49fade07-5c87-4e89-bbec-cce4fc94a4a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:33:05 compute-0 nova_compute[257802]: 2025-10-02 12:33:05.515 2 DEBUG oslo_concurrency.processutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 49fade07-5c87-4e89-bbec-cce4fc94a4a2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2159: 305 pgs: 305 active+clean; 612 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 6.8 MiB/s wr, 249 op/s
Oct 02 12:33:05 compute-0 nova_compute[257802]: 2025-10-02 12:33:05.827 2 DEBUG nova.network.neutron [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Successfully created port: 37f1ce06-c620-444d-822e-67c8de421fd6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:33:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:05.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:06 compute-0 nova_compute[257802]: 2025-10-02 12:33:06.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:33:06 compute-0 nova_compute[257802]: 2025-10-02 12:33:06.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:33:06 compute-0 nova_compute[257802]: 2025-10-02 12:33:06.175 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:33:06 compute-0 sudo[336366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:33:06 compute-0 sudo[336366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:06 compute-0 sudo[336366]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:06 compute-0 sudo[336391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:33:06 compute-0 sudo[336391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:06 compute-0 sudo[336391]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:06 compute-0 ceph-mon[73607]: pgmap v2159: 305 pgs: 305 active+clean; 612 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 6.8 MiB/s wr, 249 op/s
Oct 02 12:33:06 compute-0 ceph-mgr[73901]: [devicehealth INFO root] Check health
Oct 02 12:33:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:07.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:07 compute-0 nova_compute[257802]: 2025-10-02 12:33:07.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:07 compute-0 nova_compute[257802]: 2025-10-02 12:33:07.661 2 DEBUG oslo_concurrency.processutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 49fade07-5c87-4e89-bbec-cce4fc94a4a2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2160: 305 pgs: 305 active+clean; 612 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.5 MiB/s wr, 230 op/s
Oct 02 12:33:07 compute-0 nova_compute[257802]: 2025-10-02 12:33:07.724 2 INFO nova.compute.manager [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Rescuing
Oct 02 12:33:07 compute-0 nova_compute[257802]: 2025-10-02 12:33:07.724 2 DEBUG oslo_concurrency.lockutils [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "refresh_cache-a53afa14-bb7b-4723-8239-2ed285f1bc94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:33:07 compute-0 nova_compute[257802]: 2025-10-02 12:33:07.725 2 DEBUG oslo_concurrency.lockutils [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquired lock "refresh_cache-a53afa14-bb7b-4723-8239-2ed285f1bc94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:33:07 compute-0 nova_compute[257802]: 2025-10-02 12:33:07.725 2 DEBUG nova.network.neutron [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:33:07 compute-0 nova_compute[257802]: 2025-10-02 12:33:07.729 2 DEBUG nova.storage.rbd_utils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] resizing rbd image 49fade07-5c87-4e89-bbec-cce4fc94a4a2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:33:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:07.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:08 compute-0 nova_compute[257802]: 2025-10-02 12:33:08.033 2 DEBUG nova.network.neutron [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Successfully updated port: 37f1ce06-c620-444d-822e-67c8de421fd6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:33:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1162337237' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3341860301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:08 compute-0 nova_compute[257802]: 2025-10-02 12:33:08.171 2 DEBUG nova.compute.manager [req-2c582459-165d-4c4c-8438-08555c0d970f req-0d26a0a4-0a6e-45cf-b034-f82f4682f3b7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Received event network-changed-37f1ce06-c620-444d-822e-67c8de421fd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:08 compute-0 nova_compute[257802]: 2025-10-02 12:33:08.171 2 DEBUG nova.compute.manager [req-2c582459-165d-4c4c-8438-08555c0d970f req-0d26a0a4-0a6e-45cf-b034-f82f4682f3b7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Refreshing instance network info cache due to event network-changed-37f1ce06-c620-444d-822e-67c8de421fd6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:33:08 compute-0 nova_compute[257802]: 2025-10-02 12:33:08.171 2 DEBUG oslo_concurrency.lockutils [req-2c582459-165d-4c4c-8438-08555c0d970f req-0d26a0a4-0a6e-45cf-b034-f82f4682f3b7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-49fade07-5c87-4e89-bbec-cce4fc94a4a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:33:08 compute-0 nova_compute[257802]: 2025-10-02 12:33:08.171 2 DEBUG oslo_concurrency.lockutils [req-2c582459-165d-4c4c-8438-08555c0d970f req-0d26a0a4-0a6e-45cf-b034-f82f4682f3b7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-49fade07-5c87-4e89-bbec-cce4fc94a4a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:33:08 compute-0 nova_compute[257802]: 2025-10-02 12:33:08.172 2 DEBUG nova.network.neutron [req-2c582459-165d-4c4c-8438-08555c0d970f req-0d26a0a4-0a6e-45cf-b034-f82f4682f3b7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Refreshing network info cache for port 37f1ce06-c620-444d-822e-67c8de421fd6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:33:08 compute-0 nova_compute[257802]: 2025-10-02 12:33:08.202 2 DEBUG oslo_concurrency.lockutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Acquiring lock "refresh_cache-49fade07-5c87-4e89-bbec-cce4fc94a4a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:33:08 compute-0 nova_compute[257802]: 2025-10-02 12:33:08.384 2 DEBUG nova.network.neutron [req-2c582459-165d-4c4c-8438-08555c0d970f req-0d26a0a4-0a6e-45cf-b034-f82f4682f3b7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:33:08 compute-0 nova_compute[257802]: 2025-10-02 12:33:08.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:08 compute-0 nova_compute[257802]: 2025-10-02 12:33:08.453 2 DEBUG nova.objects.instance [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Lazy-loading 'migration_context' on Instance uuid 49fade07-5c87-4e89-bbec-cce4fc94a4a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:08 compute-0 nova_compute[257802]: 2025-10-02 12:33:08.526 2 DEBUG nova.virt.libvirt.driver [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:33:08 compute-0 nova_compute[257802]: 2025-10-02 12:33:08.527 2 DEBUG nova.virt.libvirt.driver [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Ensure instance console log exists: /var/lib/nova/instances/49fade07-5c87-4e89-bbec-cce4fc94a4a2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:33:08 compute-0 nova_compute[257802]: 2025-10-02 12:33:08.527 2 DEBUG oslo_concurrency.lockutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:08 compute-0 nova_compute[257802]: 2025-10-02 12:33:08.527 2 DEBUG oslo_concurrency.lockutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:08 compute-0 nova_compute[257802]: 2025-10-02 12:33:08.528 2 DEBUG oslo_concurrency.lockutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:09 compute-0 ceph-mon[73607]: pgmap v2160: 305 pgs: 305 active+clean; 612 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.5 MiB/s wr, 230 op/s
Oct 02 12:33:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4158264808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:09 compute-0 nova_compute[257802]: 2025-10-02 12:33:09.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:33:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:09.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:09 compute-0 nova_compute[257802]: 2025-10-02 12:33:09.163 2 DEBUG nova.network.neutron [req-2c582459-165d-4c4c-8438-08555c0d970f req-0d26a0a4-0a6e-45cf-b034-f82f4682f3b7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:33:09 compute-0 nova_compute[257802]: 2025-10-02 12:33:09.202 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:09 compute-0 nova_compute[257802]: 2025-10-02 12:33:09.202 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:09 compute-0 nova_compute[257802]: 2025-10-02 12:33:09.202 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:09 compute-0 nova_compute[257802]: 2025-10-02 12:33:09.203 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:33:09 compute-0 nova_compute[257802]: 2025-10-02 12:33:09.203 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:09 compute-0 nova_compute[257802]: 2025-10-02 12:33:09.231 2 DEBUG oslo_concurrency.lockutils [req-2c582459-165d-4c4c-8438-08555c0d970f req-0d26a0a4-0a6e-45cf-b034-f82f4682f3b7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-49fade07-5c87-4e89-bbec-cce4fc94a4a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:33:09 compute-0 nova_compute[257802]: 2025-10-02 12:33:09.232 2 DEBUG oslo_concurrency.lockutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Acquired lock "refresh_cache-49fade07-5c87-4e89-bbec-cce4fc94a4a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:33:09 compute-0 nova_compute[257802]: 2025-10-02 12:33:09.232 2 DEBUG nova.network.neutron [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:33:09 compute-0 nova_compute[257802]: 2025-10-02 12:33:09.527 2 DEBUG nova.network.neutron [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:33:09 compute-0 nova_compute[257802]: 2025-10-02 12:33:09.581 2 DEBUG nova.network.neutron [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Updating instance_info_cache with network_info: [{"id": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "address": "fa:16:3e:2f:ff:7e", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a410c4c-94", "ovs_interfaceid": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:33:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:33:09 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4286435729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:09 compute-0 nova_compute[257802]: 2025-10-02 12:33:09.672 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:09 compute-0 nova_compute[257802]: 2025-10-02 12:33:09.682 2 DEBUG oslo_concurrency.lockutils [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Releasing lock "refresh_cache-a53afa14-bb7b-4723-8239-2ed285f1bc94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:33:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2161: 305 pgs: 305 active+clean; 588 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.9 MiB/s wr, 253 op/s
Oct 02 12:33:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:33:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:09.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:09.999 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.000 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.003 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.004 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.131 2 DEBUG nova.virt.libvirt.driver [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:33:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4286435729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2147367564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.186 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.187 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4005MB free_disk=20.764171600341797GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.187 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.188 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.315 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 24caf505-35fd-40c1-9bcc-1f83580b142b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.315 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance a53afa14-bb7b-4723-8239-2ed285f1bc94 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.315 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 49fade07-5c87-4e89-bbec-cce4fc94a4a2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.316 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.316 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.421 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.583 2 DEBUG nova.network.neutron [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Updating instance_info_cache with network_info: [{"id": "37f1ce06-c620-444d-822e-67c8de421fd6", "address": "fa:16:3e:52:ee:74", "network": {"id": "9f3d344f-7e5f-4676-877b-da313e338dc0", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1391478832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "385766b9209941f3ab805e8d5e2af163", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37f1ce06-c6", "ovs_interfaceid": "37f1ce06-c620-444d-822e-67c8de421fd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.713 2 DEBUG oslo_concurrency.lockutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Releasing lock "refresh_cache-49fade07-5c87-4e89-bbec-cce4fc94a4a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.714 2 DEBUG nova.compute.manager [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Instance network_info: |[{"id": "37f1ce06-c620-444d-822e-67c8de421fd6", "address": "fa:16:3e:52:ee:74", "network": {"id": "9f3d344f-7e5f-4676-877b-da313e338dc0", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1391478832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "385766b9209941f3ab805e8d5e2af163", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37f1ce06-c6", "ovs_interfaceid": "37f1ce06-c620-444d-822e-67c8de421fd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.717 2 DEBUG nova.virt.libvirt.driver [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Start _get_guest_xml network_info=[{"id": "37f1ce06-c620-444d-822e-67c8de421fd6", "address": "fa:16:3e:52:ee:74", "network": {"id": "9f3d344f-7e5f-4676-877b-da313e338dc0", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1391478832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "385766b9209941f3ab805e8d5e2af163", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37f1ce06-c6", "ovs_interfaceid": "37f1ce06-c620-444d-822e-67c8de421fd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.724 2 WARNING nova.virt.libvirt.driver [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.727 2 DEBUG nova.virt.libvirt.host [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.728 2 DEBUG nova.virt.libvirt.host [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.730 2 DEBUG nova.virt.libvirt.host [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.731 2 DEBUG nova.virt.libvirt.host [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.732 2 DEBUG nova.virt.libvirt.driver [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.732 2 DEBUG nova.virt.hardware [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.732 2 DEBUG nova.virt.hardware [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.732 2 DEBUG nova.virt.hardware [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.733 2 DEBUG nova.virt.hardware [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.733 2 DEBUG nova.virt.hardware [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.733 2 DEBUG nova.virt.hardware [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.733 2 DEBUG nova.virt.hardware [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.734 2 DEBUG nova.virt.hardware [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.734 2 DEBUG nova.virt.hardware [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.734 2 DEBUG nova.virt.hardware [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.734 2 DEBUG nova.virt.hardware [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.737 2 DEBUG oslo_concurrency.processutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.924 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.930 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:33:10 compute-0 nova_compute[257802]: 2025-10-02 12:33:10.957 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:33:11 compute-0 nova_compute[257802]: 2025-10-02 12:33:11.066 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:33:11 compute-0 nova_compute[257802]: 2025-10-02 12:33:11.066 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:11.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:33:11 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3835125872' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:11 compute-0 nova_compute[257802]: 2025-10-02 12:33:11.229 2 DEBUG oslo_concurrency.processutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:11 compute-0 nova_compute[257802]: 2025-10-02 12:33:11.249 2 DEBUG nova.storage.rbd_utils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] rbd image 49fade07-5c87-4e89-bbec-cce4fc94a4a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:33:11 compute-0 nova_compute[257802]: 2025-10-02 12:33:11.254 2 DEBUG oslo_concurrency.processutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2162: 305 pgs: 305 active+clean; 583 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.0 MiB/s wr, 252 op/s
Oct 02 12:33:11 compute-0 ceph-mon[73607]: pgmap v2161: 305 pgs: 305 active+clean; 588 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.9 MiB/s wr, 253 op/s
Oct 02 12:33:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/203929170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3835125872' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:11.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:11 compute-0 nova_compute[257802]: 2025-10-02 12:33:11.958 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408376.9573925, c8b713f4-4f41-4153-928c-164f2ed108ed => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:33:11 compute-0 nova_compute[257802]: 2025-10-02 12:33:11.959 2 INFO nova.compute.manager [-] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] VM Stopped (Lifecycle Event)
Oct 02 12:33:11 compute-0 nova_compute[257802]: 2025-10-02 12:33:11.987 2 DEBUG nova.compute.manager [None req-c84bd474-2008-495c-bd7f-2ef0a11f5319 - - - - - -] [instance: c8b713f4-4f41-4153-928c-164f2ed108ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:33:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:33:12 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/388379494' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.099 2 DEBUG oslo_concurrency.processutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.846s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.102 2 DEBUG nova.virt.libvirt.vif [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:33:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-222361521',display_name='tempest-ListServerFiltersTestJSON-instance-222361521',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-222361521',id=127,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='385766b9209941f3ab805e8d5e2af163',ramdisk_id='',reservation_id='r-stf3jzcr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-542915701',owner_user_name='tempest-ListServerFiltersTestJSON-542915701-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:33:04Z,user_data=None,user_id='1c2fbed9aaf84b4e864db97bec4c797c',uuid=49fade07-5c87-4e89-bbec-cce4fc94a4a2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "37f1ce06-c620-444d-822e-67c8de421fd6", "address": "fa:16:3e:52:ee:74", "network": {"id": "9f3d344f-7e5f-4676-877b-da313e338dc0", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1391478832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "385766b9209941f3ab805e8d5e2af163", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37f1ce06-c6", "ovs_interfaceid": "37f1ce06-c620-444d-822e-67c8de421fd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.103 2 DEBUG nova.network.os_vif_util [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Converting VIF {"id": "37f1ce06-c620-444d-822e-67c8de421fd6", "address": "fa:16:3e:52:ee:74", "network": {"id": "9f3d344f-7e5f-4676-877b-da313e338dc0", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1391478832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "385766b9209941f3ab805e8d5e2af163", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37f1ce06-c6", "ovs_interfaceid": "37f1ce06-c620-444d-822e-67c8de421fd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.105 2 DEBUG nova.network.os_vif_util [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:ee:74,bridge_name='br-int',has_traffic_filtering=True,id=37f1ce06-c620-444d-822e-67c8de421fd6,network=Network(9f3d344f-7e5f-4676-877b-da313e338dc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37f1ce06-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.108 2 DEBUG nova.objects.instance [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Lazy-loading 'pci_devices' on Instance uuid 49fade07-5c87-4e89-bbec-cce4fc94a4a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.128 2 DEBUG nova.virt.libvirt.driver [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:33:12 compute-0 nova_compute[257802]:   <uuid>49fade07-5c87-4e89-bbec-cce4fc94a4a2</uuid>
Oct 02 12:33:12 compute-0 nova_compute[257802]:   <name>instance-0000007f</name>
Oct 02 12:33:12 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:33:12 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:33:12 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:33:12 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-222361521</nova:name>
Oct 02 12:33:12 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:33:10</nova:creationTime>
Oct 02 12:33:12 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:33:12 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:33:12 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:33:12 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:33:12 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:33:12 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:33:12 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:33:12 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:33:12 compute-0 nova_compute[257802]:         <nova:user uuid="1c2fbed9aaf84b4e864db97bec4c797c">tempest-ListServerFiltersTestJSON-542915701-project-member</nova:user>
Oct 02 12:33:12 compute-0 nova_compute[257802]:         <nova:project uuid="385766b9209941f3ab805e8d5e2af163">tempest-ListServerFiltersTestJSON-542915701</nova:project>
Oct 02 12:33:12 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:33:12 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:33:12 compute-0 nova_compute[257802]:         <nova:port uuid="37f1ce06-c620-444d-822e-67c8de421fd6">
Oct 02 12:33:12 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:33:12 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:33:12 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:33:12 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <system>
Oct 02 12:33:12 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:33:12 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:33:12 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:33:12 compute-0 nova_compute[257802]:       <entry name="serial">49fade07-5c87-4e89-bbec-cce4fc94a4a2</entry>
Oct 02 12:33:12 compute-0 nova_compute[257802]:       <entry name="uuid">49fade07-5c87-4e89-bbec-cce4fc94a4a2</entry>
Oct 02 12:33:12 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     </system>
Oct 02 12:33:12 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:33:12 compute-0 nova_compute[257802]:   <os>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:   </os>
Oct 02 12:33:12 compute-0 nova_compute[257802]:   <features>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:   </features>
Oct 02 12:33:12 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:33:12 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:33:12 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:33:12 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/49fade07-5c87-4e89-bbec-cce4fc94a4a2_disk">
Oct 02 12:33:12 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:       </source>
Oct 02 12:33:12 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:33:12 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:33:12 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:33:12 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/49fade07-5c87-4e89-bbec-cce4fc94a4a2_disk.config">
Oct 02 12:33:12 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:       </source>
Oct 02 12:33:12 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:33:12 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:33:12 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:33:12 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:52:ee:74"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:       <target dev="tap37f1ce06-c6"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:33:12 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/49fade07-5c87-4e89-bbec-cce4fc94a4a2/console.log" append="off"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <video>
Oct 02 12:33:12 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     </video>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:33:12 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:33:12 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:33:12 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:33:12 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:33:12 compute-0 nova_compute[257802]: </domain>
Oct 02 12:33:12 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.131 2 DEBUG nova.compute.manager [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Preparing to wait for external event network-vif-plugged-37f1ce06-c620-444d-822e-67c8de421fd6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.131 2 DEBUG oslo_concurrency.lockutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Acquiring lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.133 2 DEBUG oslo_concurrency.lockutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.133 2 DEBUG oslo_concurrency.lockutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.135 2 DEBUG nova.virt.libvirt.vif [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:33:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-222361521',display_name='tempest-ListServerFiltersTestJSON-instance-222361521',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-222361521',id=127,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='385766b9209941f3ab805e8d5e2af163',ramdisk_id='',reservation_id='r-stf3jzcr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-542915701',owner_user_name='tempest-ListServerFiltersTestJSON-542915701-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:33:04Z,user_data=None,user_id='1c2fbed9aaf84b4e864db97bec4c797c',uuid=49fade07-5c87-4e89-bbec-cce4fc94a4a2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "37f1ce06-c620-444d-822e-67c8de421fd6", "address": "fa:16:3e:52:ee:74", "network": {"id": "9f3d344f-7e5f-4676-877b-da313e338dc0", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1391478832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "385766b9209941f3ab805e8d5e2af163", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37f1ce06-c6", "ovs_interfaceid": "37f1ce06-c620-444d-822e-67c8de421fd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.136 2 DEBUG nova.network.os_vif_util [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Converting VIF {"id": "37f1ce06-c620-444d-822e-67c8de421fd6", "address": "fa:16:3e:52:ee:74", "network": {"id": "9f3d344f-7e5f-4676-877b-da313e338dc0", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1391478832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "385766b9209941f3ab805e8d5e2af163", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37f1ce06-c6", "ovs_interfaceid": "37f1ce06-c620-444d-822e-67c8de421fd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.137 2 DEBUG nova.network.os_vif_util [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:ee:74,bridge_name='br-int',has_traffic_filtering=True,id=37f1ce06-c620-444d-822e-67c8de421fd6,network=Network(9f3d344f-7e5f-4676-877b-da313e338dc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37f1ce06-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.138 2 DEBUG os_vif [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:ee:74,bridge_name='br-int',has_traffic_filtering=True,id=37f1ce06-c620-444d-822e-67c8de421fd6,network=Network(9f3d344f-7e5f-4676-877b-da313e338dc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37f1ce06-c6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.140 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.141 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.146 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap37f1ce06-c6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.147 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap37f1ce06-c6, col_values=(('external_ids', {'iface-id': '37f1ce06-c620-444d-822e-67c8de421fd6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:52:ee:74', 'vm-uuid': '49fade07-5c87-4e89-bbec-cce4fc94a4a2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:12 compute-0 NetworkManager[44987]: <info>  [1759408392.1505] manager: (tap37f1ce06-c6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/261)
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.167 2 INFO os_vif [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:ee:74,bridge_name='br-int',has_traffic_filtering=True,id=37f1ce06-c620-444d-822e-67c8de421fd6,network=Network(9f3d344f-7e5f-4676-877b-da313e338dc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37f1ce06-c6')
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.392 2 DEBUG nova.virt.libvirt.driver [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.393 2 DEBUG nova.virt.libvirt.driver [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.394 2 DEBUG nova.virt.libvirt.driver [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] No VIF found with MAC fa:16:3e:52:ee:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.394 2 INFO nova.virt.libvirt.driver [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Using config drive
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.422 2 DEBUG nova.storage.rbd_utils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] rbd image 49fade07-5c87-4e89-bbec-cce4fc94a4a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:33:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:33:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:33:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:33:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:33:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:33:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.759 2 INFO nova.virt.libvirt.driver [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Creating config drive at /var/lib/nova/instances/49fade07-5c87-4e89-bbec-cce4fc94a4a2/disk.config
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.766 2 DEBUG oslo_concurrency.processutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/49fade07-5c87-4e89-bbec-cce4fc94a4a2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkqnf9j1a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:12 compute-0 ceph-mon[73607]: pgmap v2162: 305 pgs: 305 active+clean; 583 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.0 MiB/s wr, 252 op/s
Oct 02 12:33:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/388379494' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1281428022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.906 2 DEBUG oslo_concurrency.processutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/49fade07-5c87-4e89-bbec-cce4fc94a4a2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkqnf9j1a" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.945 2 DEBUG nova.storage.rbd_utils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] rbd image 49fade07-5c87-4e89-bbec-cce4fc94a4a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:33:12 compute-0 nova_compute[257802]: 2025-10-02 12:33:12.948 2 DEBUG oslo_concurrency.processutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/49fade07-5c87-4e89-bbec-cce4fc94a4a2/disk.config 49fade07-5c87-4e89-bbec-cce4fc94a4a2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:13.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:13 compute-0 nova_compute[257802]: 2025-10-02 12:33:13.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:13 compute-0 nova_compute[257802]: 2025-10-02 12:33:13.428 2 DEBUG oslo_concurrency.processutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/49fade07-5c87-4e89-bbec-cce4fc94a4a2/disk.config 49fade07-5c87-4e89-bbec-cce4fc94a4a2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:13 compute-0 nova_compute[257802]: 2025-10-02 12:33:13.430 2 INFO nova.virt.libvirt.driver [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Deleting local config drive /var/lib/nova/instances/49fade07-5c87-4e89-bbec-cce4fc94a4a2/disk.config because it was imported into RBD.
Oct 02 12:33:13 compute-0 kernel: tap37f1ce06-c6: entered promiscuous mode
Oct 02 12:33:13 compute-0 NetworkManager[44987]: <info>  [1759408393.4904] manager: (tap37f1ce06-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/262)
Oct 02 12:33:13 compute-0 nova_compute[257802]: 2025-10-02 12:33:13.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:13 compute-0 ovn_controller[148183]: 2025-10-02T12:33:13Z|00548|binding|INFO|Claiming lport 37f1ce06-c620-444d-822e-67c8de421fd6 for this chassis.
Oct 02 12:33:13 compute-0 ovn_controller[148183]: 2025-10-02T12:33:13Z|00549|binding|INFO|37f1ce06-c620-444d-822e-67c8de421fd6: Claiming fa:16:3e:52:ee:74 10.100.0.12
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:13.508 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:ee:74 10.100.0.12'], port_security=['fa:16:3e:52:ee:74 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '49fade07-5c87-4e89-bbec-cce4fc94a4a2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f3d344f-7e5f-4676-877b-da313e338dc0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '385766b9209941f3ab805e8d5e2af163', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1389a46f-eb3b-49c0-bee4-ea4be4a55967', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6cf2f8e-38d5-4acc-9afc-6fc6835becad, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=37f1ce06-c620-444d-822e-67c8de421fd6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:13.509 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 37f1ce06-c620-444d-822e-67c8de421fd6 in datapath 9f3d344f-7e5f-4676-877b-da313e338dc0 bound to our chassis
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:13.511 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f3d344f-7e5f-4676-877b-da313e338dc0
Oct 02 12:33:13 compute-0 systemd-udevd[336669]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:13.530 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d77f9b35-cf5c-476f-aba7-ec6e189e45e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:13.532 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9f3d344f-71 in ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:13.535 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9f3d344f-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:13.536 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ec00915a-b0d4-4b79-8da1-1ddf0895faff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:13.537 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4fa02203-ff40-43b2-8880-29dc5ff61981]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:13 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:33:13 compute-0 NetworkManager[44987]: <info>  [1759408393.5441] device (tap37f1ce06-c6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:33:13 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:33:13 compute-0 NetworkManager[44987]: <info>  [1759408393.5465] device (tap37f1ce06-c6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:13.548 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[4c1e081d-7cac-48e7-a26e-8497c08e1244]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:13 compute-0 systemd-machined[211836]: New machine qemu-64-instance-0000007f.
Oct 02 12:33:13 compute-0 nova_compute[257802]: 2025-10-02 12:33:13.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:13 compute-0 ovn_controller[148183]: 2025-10-02T12:33:13Z|00550|binding|INFO|Setting lport 37f1ce06-c620-444d-822e-67c8de421fd6 ovn-installed in OVS
Oct 02 12:33:13 compute-0 ovn_controller[148183]: 2025-10-02T12:33:13Z|00551|binding|INFO|Setting lport 37f1ce06-c620-444d-822e-67c8de421fd6 up in Southbound
Oct 02 12:33:13 compute-0 nova_compute[257802]: 2025-10-02 12:33:13.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:13 compute-0 systemd[1]: Started Virtual Machine qemu-64-instance-0000007f.
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:13.573 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5ab696a8-b08f-40ba-871a-bfa2432713b1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:13.602 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[4df9cb81-64e1-44c4-9b47-53377e09ce16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:13 compute-0 NetworkManager[44987]: <info>  [1759408393.6106] manager: (tap9f3d344f-70): new Veth device (/org/freedesktop/NetworkManager/Devices/263)
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:13.609 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c07bdd-ca91-4abb-9e56-c09bb9d5ac0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:13.653 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[d32b0723-1752-4e85-871a-6a089394d45c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:13.656 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[9a51b22c-5d93-45ab-b61e-d0399f85db0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:13 compute-0 NetworkManager[44987]: <info>  [1759408393.6781] device (tap9f3d344f-70): carrier: link connected
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:13.685 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[ea6cc116-f707-413d-b405-2652abc1e3fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2163: 305 pgs: 305 active+clean; 638 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 4.0 MiB/s wr, 284 op/s
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:13.699 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1ad53e59-726c-4127-92e1-1249f943d8df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f3d344f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:00:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646130, 'reachable_time': 43119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336708, 'error': None, 'target': 'ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:13.713 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e7fc8a88-b7f6-4a14-85cc-a8b564837a08]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feea:c3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 646130, 'tstamp': 646130}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336709, 'error': None, 'target': 'ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:13.733 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f46420d6-9797-4940-a6a1-1a884a813995]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f3d344f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:00:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646130, 'reachable_time': 43119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 336710, 'error': None, 'target': 'ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:13.766 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e3850e86-cdeb-4f99-b0c4-5293f1426c06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:13.819 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[62fb2d0a-b8dd-425f-aeb7-73fba6cda903]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:13.820 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f3d344f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:13.820 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:13.821 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f3d344f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:13 compute-0 NetworkManager[44987]: <info>  [1759408393.8231] manager: (tap9f3d344f-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/264)
Oct 02 12:33:13 compute-0 kernel: tap9f3d344f-70: entered promiscuous mode
Oct 02 12:33:13 compute-0 nova_compute[257802]: 2025-10-02 12:33:13.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:13.825 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f3d344f-70, col_values=(('external_ids', {'iface-id': '93989b20-c703-4abe-88be-5f6a3f1c5cdc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:13 compute-0 nova_compute[257802]: 2025-10-02 12:33:13.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:13 compute-0 ovn_controller[148183]: 2025-10-02T12:33:13Z|00552|binding|INFO|Releasing lport 93989b20-c703-4abe-88be-5f6a3f1c5cdc from this chassis (sb_readonly=0)
Oct 02 12:33:13 compute-0 nova_compute[257802]: 2025-10-02 12:33:13.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:13.842 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9f3d344f-7e5f-4676-877b-da313e338dc0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9f3d344f-7e5f-4676-877b-da313e338dc0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:13.844 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[324b11c1-ea4d-4ece-8cd7-d9a8ed16d6b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:13.844 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-9f3d344f-7e5f-4676-877b-da313e338dc0
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/9f3d344f-7e5f-4676-877b-da313e338dc0.pid.haproxy
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 9f3d344f-7e5f-4676-877b-da313e338dc0
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:33:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:13.845 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0', 'env', 'PROCESS_TAG=haproxy-9f3d344f-7e5f-4676-877b-da313e338dc0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9f3d344f-7e5f-4676-877b-da313e338dc0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:33:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:13.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:14 compute-0 podman[336760]: 2025-10-02 12:33:14.178492992 +0000 UTC m=+0.024128624 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:33:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3074780554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2872398020' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1295974800' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.625 2 DEBUG nova.compute.manager [req-677e517f-a62e-427e-b1fb-1e5055c02aa0 req-75fd0e0c-4445-441d-a268-f5a156a7d67f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Received event network-vif-plugged-37f1ce06-c620-444d-822e-67c8de421fd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.626 2 DEBUG oslo_concurrency.lockutils [req-677e517f-a62e-427e-b1fb-1e5055c02aa0 req-75fd0e0c-4445-441d-a268-f5a156a7d67f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.627 2 DEBUG oslo_concurrency.lockutils [req-677e517f-a62e-427e-b1fb-1e5055c02aa0 req-75fd0e0c-4445-441d-a268-f5a156a7d67f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.627 2 DEBUG oslo_concurrency.lockutils [req-677e517f-a62e-427e-b1fb-1e5055c02aa0 req-75fd0e0c-4445-441d-a268-f5a156a7d67f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.627 2 DEBUG nova.compute.manager [req-677e517f-a62e-427e-b1fb-1e5055c02aa0 req-75fd0e0c-4445-441d-a268-f5a156a7d67f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Processing event network-vif-plugged-37f1ce06-c620-444d-822e-67c8de421fd6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.628 2 DEBUG nova.compute.manager [req-677e517f-a62e-427e-b1fb-1e5055c02aa0 req-75fd0e0c-4445-441d-a268-f5a156a7d67f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Received event network-vif-plugged-37f1ce06-c620-444d-822e-67c8de421fd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.628 2 DEBUG oslo_concurrency.lockutils [req-677e517f-a62e-427e-b1fb-1e5055c02aa0 req-75fd0e0c-4445-441d-a268-f5a156a7d67f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.628 2 DEBUG oslo_concurrency.lockutils [req-677e517f-a62e-427e-b1fb-1e5055c02aa0 req-75fd0e0c-4445-441d-a268-f5a156a7d67f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.628 2 DEBUG oslo_concurrency.lockutils [req-677e517f-a62e-427e-b1fb-1e5055c02aa0 req-75fd0e0c-4445-441d-a268-f5a156a7d67f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.629 2 DEBUG nova.compute.manager [req-677e517f-a62e-427e-b1fb-1e5055c02aa0 req-75fd0e0c-4445-441d-a268-f5a156a7d67f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] No waiting events found dispatching network-vif-plugged-37f1ce06-c620-444d-822e-67c8de421fd6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.629 2 WARNING nova.compute.manager [req-677e517f-a62e-427e-b1fb-1e5055c02aa0 req-75fd0e0c-4445-441d-a268-f5a156a7d67f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Received unexpected event network-vif-plugged-37f1ce06-c620-444d-822e-67c8de421fd6 for instance with vm_state building and task_state spawning.
Oct 02 12:33:14 compute-0 podman[336760]: 2025-10-02 12:33:14.666509179 +0000 UTC m=+0.512144801 container create 2ac3da1013dc8276e261eb1710237ee889d57f38f1f3183ebe70c577cd5e9891 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.795 2 DEBUG nova.compute.manager [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.797 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408394.7943137, 49fade07-5c87-4e89-bbec-cce4fc94a4a2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.798 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] VM Started (Lifecycle Event)
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.802 2 DEBUG nova.virt.libvirt.driver [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.805 2 INFO nova.virt.libvirt.driver [-] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Instance spawned successfully.
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.808 2 DEBUG nova.virt.libvirt.driver [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.819 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.823 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:33:14 compute-0 systemd[1]: Started libpod-conmon-2ac3da1013dc8276e261eb1710237ee889d57f38f1f3183ebe70c577cd5e9891.scope.
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.843 2 DEBUG nova.virt.libvirt.driver [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.844 2 DEBUG nova.virt.libvirt.driver [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.844 2 DEBUG nova.virt.libvirt.driver [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.845 2 DEBUG nova.virt.libvirt.driver [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.845 2 DEBUG nova.virt.libvirt.driver [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.846 2 DEBUG nova.virt.libvirt.driver [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:33:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.860 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.860 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408394.794487, 49fade07-5c87-4e89-bbec-cce4fc94a4a2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.860 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] VM Paused (Lifecycle Event)
Oct 02 12:33:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96494ce09e7c5cf7a0291f5fe5c6385953cbc1f7fd38fe8fe8bcf429a258bead/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.958 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.964 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408394.802137, 49fade07-5c87-4e89-bbec-cce4fc94a4a2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.964 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] VM Resumed (Lifecycle Event)
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.979 2 INFO nova.compute.manager [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Took 9.82 seconds to spawn the instance on the hypervisor.
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.979 2 DEBUG nova.compute.manager [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.990 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:33:14 compute-0 nova_compute[257802]: 2025-10-02 12:33:14.994 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:33:15 compute-0 nova_compute[257802]: 2025-10-02 12:33:15.030 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:33:15 compute-0 podman[336760]: 2025-10-02 12:33:15.062576137 +0000 UTC m=+0.908211789 container init 2ac3da1013dc8276e261eb1710237ee889d57f38f1f3183ebe70c577cd5e9891 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct 02 12:33:15 compute-0 nova_compute[257802]: 2025-10-02 12:33:15.066 2 INFO nova.compute.manager [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Took 11.33 seconds to build instance.
Oct 02 12:33:15 compute-0 podman[336760]: 2025-10-02 12:33:15.069914938 +0000 UTC m=+0.915550560 container start 2ac3da1013dc8276e261eb1710237ee889d57f38f1f3183ebe70c577cd5e9891 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:33:15 compute-0 nova_compute[257802]: 2025-10-02 12:33:15.081 2 DEBUG oslo_concurrency.lockutils [None req-0c186f68-a672-48f4-9c7d-dea484f837f7 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:15 compute-0 neutron-haproxy-ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0[336801]: [NOTICE]   (336805) : New worker (336807) forked
Oct 02 12:33:15 compute-0 neutron-haproxy-ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0[336801]: [NOTICE]   (336805) : Loading success.
Oct 02 12:33:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:15.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:15 compute-0 ceph-mon[73607]: pgmap v2163: 305 pgs: 305 active+clean; 638 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 4.0 MiB/s wr, 284 op/s
Oct 02 12:33:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1855908522' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3273742260' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2164: 305 pgs: 305 active+clean; 716 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 7.2 MiB/s wr, 271 op/s
Oct 02 12:33:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:15.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/469280999' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:16 compute-0 ceph-mon[73607]: pgmap v2164: 305 pgs: 305 active+clean; 716 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 7.2 MiB/s wr, 271 op/s
Oct 02 12:33:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2781084812' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:16 compute-0 podman[336818]: 2025-10-02 12:33:16.916824526 +0000 UTC m=+0.057354631 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 02 12:33:16 compute-0 podman[336819]: 2025-10-02 12:33:16.935608677 +0000 UTC m=+0.069068159 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:33:16 compute-0 podman[336820]: 2025-10-02 12:33:16.952655616 +0000 UTC m=+0.087616094 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:33:17 compute-0 nova_compute[257802]: 2025-10-02 12:33:17.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:17.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2165: 305 pgs: 305 active+clean; 716 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 7.2 MiB/s wr, 228 op/s
Oct 02 12:33:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:17.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/155040870' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:33:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/155040870' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:33:18 compute-0 nova_compute[257802]: 2025-10-02 12:33:18.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:18 compute-0 nova_compute[257802]: 2025-10-02 12:33:18.440 2 DEBUG oslo_concurrency.lockutils [None req-43f03d36-7b48-4d1f-a788-9e6853774fc0 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "24caf505-35fd-40c1-9bcc-1f83580b142b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:18 compute-0 nova_compute[257802]: 2025-10-02 12:33:18.441 2 DEBUG oslo_concurrency.lockutils [None req-43f03d36-7b48-4d1f-a788-9e6853774fc0 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "24caf505-35fd-40c1-9bcc-1f83580b142b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:18 compute-0 nova_compute[257802]: 2025-10-02 12:33:18.441 2 DEBUG oslo_concurrency.lockutils [None req-43f03d36-7b48-4d1f-a788-9e6853774fc0 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "24caf505-35fd-40c1-9bcc-1f83580b142b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:18 compute-0 nova_compute[257802]: 2025-10-02 12:33:18.441 2 DEBUG oslo_concurrency.lockutils [None req-43f03d36-7b48-4d1f-a788-9e6853774fc0 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "24caf505-35fd-40c1-9bcc-1f83580b142b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:18 compute-0 nova_compute[257802]: 2025-10-02 12:33:18.442 2 DEBUG oslo_concurrency.lockutils [None req-43f03d36-7b48-4d1f-a788-9e6853774fc0 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "24caf505-35fd-40c1-9bcc-1f83580b142b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:18 compute-0 nova_compute[257802]: 2025-10-02 12:33:18.443 2 INFO nova.compute.manager [None req-43f03d36-7b48-4d1f-a788-9e6853774fc0 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Terminating instance
Oct 02 12:33:18 compute-0 nova_compute[257802]: 2025-10-02 12:33:18.443 2 DEBUG nova.compute.manager [None req-43f03d36-7b48-4d1f-a788-9e6853774fc0 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:33:18 compute-0 kernel: tapf0f0fd9c-fb (unregistering): left promiscuous mode
Oct 02 12:33:18 compute-0 NetworkManager[44987]: <info>  [1759408398.6265] device (tapf0f0fd9c-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:33:18 compute-0 nova_compute[257802]: 2025-10-02 12:33:18.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:18 compute-0 ovn_controller[148183]: 2025-10-02T12:33:18Z|00553|binding|INFO|Releasing lport f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90 from this chassis (sb_readonly=0)
Oct 02 12:33:18 compute-0 ovn_controller[148183]: 2025-10-02T12:33:18Z|00554|binding|INFO|Setting lport f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90 down in Southbound
Oct 02 12:33:18 compute-0 ovn_controller[148183]: 2025-10-02T12:33:18Z|00555|binding|INFO|Removing iface tapf0f0fd9c-fb ovn-installed in OVS
Oct 02 12:33:18 compute-0 nova_compute[257802]: 2025-10-02 12:33:18.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:18 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:18.645 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:07:a0 10.100.0.14'], port_security=['fa:16:3e:76:07:a0 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '24caf505-35fd-40c1-9bcc-1f83580b142b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-585473f8-52e4-4e55-96df-8a236d361126', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5533aaac08cd4856af72ef4992bb5e76', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a7e36b3-799e-47d8-a152-7f7146431afe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec297f04-3bda-490f-87d3-1f684caf96fd, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:33:18 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:18.646 158261 INFO neutron.agent.ovn.metadata.agent [-] Port f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90 in datapath 585473f8-52e4-4e55-96df-8a236d361126 unbound from our chassis
Oct 02 12:33:18 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:18.648 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 585473f8-52e4-4e55-96df-8a236d361126, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:33:18 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:18.648 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[fb10bd4c-53a0-4920-9904-57d570ac05a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:18 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:18.649 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-585473f8-52e4-4e55-96df-8a236d361126 namespace which is not needed anymore
Oct 02 12:33:18 compute-0 nova_compute[257802]: 2025-10-02 12:33:18.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:18 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Oct 02 12:33:18 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d0000006f.scope: Consumed 24.221s CPU time.
Oct 02 12:33:18 compute-0 systemd-machined[211836]: Machine qemu-55-instance-0000006f terminated.
Oct 02 12:33:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:18 compute-0 neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126[327045]: [NOTICE]   (327049) : haproxy version is 2.8.14-c23fe91
Oct 02 12:33:18 compute-0 neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126[327045]: [NOTICE]   (327049) : path to executable is /usr/sbin/haproxy
Oct 02 12:33:18 compute-0 neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126[327045]: [WARNING]  (327049) : Exiting Master process...
Oct 02 12:33:18 compute-0 neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126[327045]: [ALERT]    (327049) : Current worker (327051) exited with code 143 (Terminated)
Oct 02 12:33:18 compute-0 neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126[327045]: [WARNING]  (327049) : All workers exited. Exiting... (0)
Oct 02 12:33:18 compute-0 systemd[1]: libpod-fe6010cddc0ab9ea08e296fed7305e663c5cad5cd5b6d3f1ba1873e417b52c5b.scope: Deactivated successfully.
Oct 02 12:33:18 compute-0 podman[336901]: 2025-10-02 12:33:18.797565455 +0000 UTC m=+0.054395499 container died fe6010cddc0ab9ea08e296fed7305e663c5cad5cd5b6d3f1ba1873e417b52c5b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:33:18 compute-0 nova_compute[257802]: 2025-10-02 12:33:18.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:18 compute-0 nova_compute[257802]: 2025-10-02 12:33:18.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:18 compute-0 nova_compute[257802]: 2025-10-02 12:33:18.882 2 INFO nova.virt.libvirt.driver [-] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Instance destroyed successfully.
Oct 02 12:33:18 compute-0 nova_compute[257802]: 2025-10-02 12:33:18.883 2 DEBUG nova.objects.instance [None req-43f03d36-7b48-4d1f-a788-9e6853774fc0 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lazy-loading 'resources' on Instance uuid 24caf505-35fd-40c1-9bcc-1f83580b142b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:18 compute-0 nova_compute[257802]: 2025-10-02 12:33:18.924 2 DEBUG nova.virt.libvirt.vif [None req-43f03d36-7b48-4d1f-a788-9e6853774fc0 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:28:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=111,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBEmvr2XQXDnaV0WQDbbXt57cEK6okdC4PHEYdjpQBx2HU9OQgvgRTm3sGWmsa/AInUTPV9ABsCq2lJ9PCqfb1WP51XCZeB9QBIxafEy8h788huF0550ajkopZIwmSLpiA==',key_name='tempest-keypair-425033456',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:28:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5533aaac08cd4856af72ef4992bb5e76',ramdisk_id='',reservation_id='r-cjcmnpsc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw
_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeMultiAttachTest-1564585024',owner_user_name='tempest-AttachVolumeMultiAttachTest-1564585024-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:28:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='22d56fcd2a4b4851bfd126ae4548ee9b',uuid=24caf505-35fd-40c1-9bcc-1f83580b142b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90", "address": "fa:16:3e:76:07:a0", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f0fd9c-fb", "ovs_interfaceid": "f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:33:18 compute-0 nova_compute[257802]: 2025-10-02 12:33:18.925 2 DEBUG nova.network.os_vif_util [None req-43f03d36-7b48-4d1f-a788-9e6853774fc0 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Converting VIF {"id": "f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90", "address": "fa:16:3e:76:07:a0", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f0fd9c-fb", "ovs_interfaceid": "f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:33:18 compute-0 nova_compute[257802]: 2025-10-02 12:33:18.926 2 DEBUG nova.network.os_vif_util [None req-43f03d36-7b48-4d1f-a788-9e6853774fc0 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:07:a0,bridge_name='br-int',has_traffic_filtering=True,id=f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f0fd9c-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:33:18 compute-0 nova_compute[257802]: 2025-10-02 12:33:18.926 2 DEBUG os_vif [None req-43f03d36-7b48-4d1f-a788-9e6853774fc0 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:07:a0,bridge_name='br-int',has_traffic_filtering=True,id=f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f0fd9c-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:33:18 compute-0 nova_compute[257802]: 2025-10-02 12:33:18.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:18 compute-0 nova_compute[257802]: 2025-10-02 12:33:18.928 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0f0fd9c-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:18 compute-0 nova_compute[257802]: 2025-10-02 12:33:18.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:18 compute-0 nova_compute[257802]: 2025-10-02 12:33:18.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:18 compute-0 nova_compute[257802]: 2025-10-02 12:33:18.934 2 INFO os_vif [None req-43f03d36-7b48-4d1f-a788-9e6853774fc0 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:07:a0,bridge_name='br-int',has_traffic_filtering=True,id=f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f0fd9c-fb')
Oct 02 12:33:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fe6010cddc0ab9ea08e296fed7305e663c5cad5cd5b6d3f1ba1873e417b52c5b-userdata-shm.mount: Deactivated successfully.
Oct 02 12:33:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-a03850c8d19234866014556142b1266693fd29e6cd4414a41be97ccf7f205b39-merged.mount: Deactivated successfully.
Oct 02 12:33:19 compute-0 podman[336901]: 2025-10-02 12:33:19.023721854 +0000 UTC m=+0.280551898 container cleanup fe6010cddc0ab9ea08e296fed7305e663c5cad5cd5b6d3f1ba1873e417b52c5b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 12:33:19 compute-0 ceph-mon[73607]: pgmap v2165: 305 pgs: 305 active+clean; 716 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 7.2 MiB/s wr, 228 op/s
Oct 02 12:33:19 compute-0 systemd[1]: libpod-conmon-fe6010cddc0ab9ea08e296fed7305e663c5cad5cd5b6d3f1ba1873e417b52c5b.scope: Deactivated successfully.
Oct 02 12:33:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:19.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:19 compute-0 podman[336955]: 2025-10-02 12:33:19.252008878 +0000 UTC m=+0.207612536 container remove fe6010cddc0ab9ea08e296fed7305e663c5cad5cd5b6d3f1ba1873e417b52c5b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 12:33:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:19.258 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0d0cc4bb-086f-450d-8d34-cee646f2ffab]: (4, ('Thu Oct  2 12:33:18 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126 (fe6010cddc0ab9ea08e296fed7305e663c5cad5cd5b6d3f1ba1873e417b52c5b)\nfe6010cddc0ab9ea08e296fed7305e663c5cad5cd5b6d3f1ba1873e417b52c5b\nThu Oct  2 12:33:19 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126 (fe6010cddc0ab9ea08e296fed7305e663c5cad5cd5b6d3f1ba1873e417b52c5b)\nfe6010cddc0ab9ea08e296fed7305e663c5cad5cd5b6d3f1ba1873e417b52c5b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:19.260 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[fa809643-643d-4ac5-80fd-268d472b9cfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:19.261 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap585473f8-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:19 compute-0 nova_compute[257802]: 2025-10-02 12:33:19.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:19 compute-0 kernel: tap585473f8-50: left promiscuous mode
Oct 02 12:33:19 compute-0 nova_compute[257802]: 2025-10-02 12:33:19.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:19.267 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b003b03b-a0d4-4162-a190-746b0861c45e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:19 compute-0 nova_compute[257802]: 2025-10-02 12:33:19.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:19.295 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7d52c8ae-70e9-43e4-9c9c-31afbea2afd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:19.296 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0996f86d-6de9-4b13-a54d-9f20c8c71dc9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:19.314 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8ef2db39-2e8b-4ffe-a672-6c32fb04cd9e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618885, 'reachable_time': 15108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336970, 'error': None, 'target': 'ovnmeta-585473f8-52e4-4e55-96df-8a236d361126', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d585473f8\x2d52e4\x2d4e55\x2d96df\x2d8a236d361126.mount: Deactivated successfully.
Oct 02 12:33:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:19.318 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-585473f8-52e4-4e55-96df-8a236d361126 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:33:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:19.318 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[1cab2449-6ce3-459a-aaaf-2b55fa76a5bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:19 compute-0 nova_compute[257802]: 2025-10-02 12:33:19.497 2 DEBUG nova.compute.manager [req-df1e287d-e0b7-4273-a9b1-8ec7598c5ee8 req-d37933ca-c70d-417b-8c31-fc12f5e77e96 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Received event network-vif-unplugged-f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:19 compute-0 nova_compute[257802]: 2025-10-02 12:33:19.497 2 DEBUG oslo_concurrency.lockutils [req-df1e287d-e0b7-4273-a9b1-8ec7598c5ee8 req-d37933ca-c70d-417b-8c31-fc12f5e77e96 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "24caf505-35fd-40c1-9bcc-1f83580b142b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:19 compute-0 nova_compute[257802]: 2025-10-02 12:33:19.498 2 DEBUG oslo_concurrency.lockutils [req-df1e287d-e0b7-4273-a9b1-8ec7598c5ee8 req-d37933ca-c70d-417b-8c31-fc12f5e77e96 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "24caf505-35fd-40c1-9bcc-1f83580b142b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:19 compute-0 nova_compute[257802]: 2025-10-02 12:33:19.498 2 DEBUG oslo_concurrency.lockutils [req-df1e287d-e0b7-4273-a9b1-8ec7598c5ee8 req-d37933ca-c70d-417b-8c31-fc12f5e77e96 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "24caf505-35fd-40c1-9bcc-1f83580b142b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:19 compute-0 nova_compute[257802]: 2025-10-02 12:33:19.498 2 DEBUG nova.compute.manager [req-df1e287d-e0b7-4273-a9b1-8ec7598c5ee8 req-d37933ca-c70d-417b-8c31-fc12f5e77e96 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] No waiting events found dispatching network-vif-unplugged-f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:33:19 compute-0 nova_compute[257802]: 2025-10-02 12:33:19.499 2 DEBUG nova.compute.manager [req-df1e287d-e0b7-4273-a9b1-8ec7598c5ee8 req-d37933ca-c70d-417b-8c31-fc12f5e77e96 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Received event network-vif-unplugged-f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:33:19 compute-0 nova_compute[257802]: 2025-10-02 12:33:19.499 2 DEBUG nova.compute.manager [req-df1e287d-e0b7-4273-a9b1-8ec7598c5ee8 req-d37933ca-c70d-417b-8c31-fc12f5e77e96 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Received event network-vif-plugged-f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:19 compute-0 nova_compute[257802]: 2025-10-02 12:33:19.499 2 DEBUG oslo_concurrency.lockutils [req-df1e287d-e0b7-4273-a9b1-8ec7598c5ee8 req-d37933ca-c70d-417b-8c31-fc12f5e77e96 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "24caf505-35fd-40c1-9bcc-1f83580b142b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:19 compute-0 nova_compute[257802]: 2025-10-02 12:33:19.500 2 DEBUG oslo_concurrency.lockutils [req-df1e287d-e0b7-4273-a9b1-8ec7598c5ee8 req-d37933ca-c70d-417b-8c31-fc12f5e77e96 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "24caf505-35fd-40c1-9bcc-1f83580b142b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:19 compute-0 nova_compute[257802]: 2025-10-02 12:33:19.500 2 DEBUG oslo_concurrency.lockutils [req-df1e287d-e0b7-4273-a9b1-8ec7598c5ee8 req-d37933ca-c70d-417b-8c31-fc12f5e77e96 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "24caf505-35fd-40c1-9bcc-1f83580b142b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:19 compute-0 nova_compute[257802]: 2025-10-02 12:33:19.500 2 DEBUG nova.compute.manager [req-df1e287d-e0b7-4273-a9b1-8ec7598c5ee8 req-d37933ca-c70d-417b-8c31-fc12f5e77e96 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] No waiting events found dispatching network-vif-plugged-f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:33:19 compute-0 nova_compute[257802]: 2025-10-02 12:33:19.501 2 WARNING nova.compute.manager [req-df1e287d-e0b7-4273-a9b1-8ec7598c5ee8 req-d37933ca-c70d-417b-8c31-fc12f5e77e96 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Received unexpected event network-vif-plugged-f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90 for instance with vm_state active and task_state deleting.
Oct 02 12:33:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 305 active+clean; 740 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 10 MiB/s wr, 362 op/s
Oct 02 12:33:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:33:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:19.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:33:19 compute-0 nova_compute[257802]: 2025-10-02 12:33:19.903 2 INFO nova.virt.libvirt.driver [None req-43f03d36-7b48-4d1f-a788-9e6853774fc0 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Deleting instance files /var/lib/nova/instances/24caf505-35fd-40c1-9bcc-1f83580b142b_del
Oct 02 12:33:19 compute-0 nova_compute[257802]: 2025-10-02 12:33:19.905 2 INFO nova.virt.libvirt.driver [None req-43f03d36-7b48-4d1f-a788-9e6853774fc0 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Deletion of /var/lib/nova/instances/24caf505-35fd-40c1-9bcc-1f83580b142b_del complete
Oct 02 12:33:19 compute-0 nova_compute[257802]: 2025-10-02 12:33:19.962 2 INFO nova.compute.manager [None req-43f03d36-7b48-4d1f-a788-9e6853774fc0 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Took 1.52 seconds to destroy the instance on the hypervisor.
Oct 02 12:33:19 compute-0 nova_compute[257802]: 2025-10-02 12:33:19.962 2 DEBUG oslo.service.loopingcall [None req-43f03d36-7b48-4d1f-a788-9e6853774fc0 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:33:19 compute-0 nova_compute[257802]: 2025-10-02 12:33:19.962 2 DEBUG nova.compute.manager [-] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:33:19 compute-0 nova_compute[257802]: 2025-10-02 12:33:19.963 2 DEBUG nova.network.neutron [-] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:33:20 compute-0 nova_compute[257802]: 2025-10-02 12:33:20.181 2 DEBUG nova.virt.libvirt.driver [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:33:20 compute-0 nova_compute[257802]: 2025-10-02 12:33:20.574 2 DEBUG nova.network.neutron [-] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:33:20 compute-0 nova_compute[257802]: 2025-10-02 12:33:20.596 2 INFO nova.compute.manager [-] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Took 0.63 seconds to deallocate network for instance.
Oct 02 12:33:20 compute-0 nova_compute[257802]: 2025-10-02 12:33:20.642 2 DEBUG oslo_concurrency.lockutils [None req-43f03d36-7b48-4d1f-a788-9e6853774fc0 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:20 compute-0 nova_compute[257802]: 2025-10-02 12:33:20.642 2 DEBUG oslo_concurrency.lockutils [None req-43f03d36-7b48-4d1f-a788-9e6853774fc0 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:20 compute-0 nova_compute[257802]: 2025-10-02 12:33:20.661 2 DEBUG nova.compute.manager [req-aff719e0-00cc-4d2d-9a81-2109d7d17732 req-d8c4215d-3a26-4fc5-95f8-d3ff9e9ca5aa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Received event network-vif-deleted-f0f0fd9c-fb6c-414e-9dde-7c6b58c63e90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:20 compute-0 nova_compute[257802]: 2025-10-02 12:33:20.721 2 DEBUG oslo_concurrency.processutils [None req-43f03d36-7b48-4d1f-a788-9e6853774fc0 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:20 compute-0 podman[336976]: 2025-10-02 12:33:20.992720394 +0000 UTC m=+0.131484773 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:33:21 compute-0 nova_compute[257802]: 2025-10-02 12:33:21.066 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:33:21 compute-0 ceph-mon[73607]: pgmap v2166: 305 pgs: 305 active+clean; 740 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 10 MiB/s wr, 362 op/s
Oct 02 12:33:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:21.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:33:21 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1263562727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:21 compute-0 nova_compute[257802]: 2025-10-02 12:33:21.203 2 DEBUG oslo_concurrency.processutils [None req-43f03d36-7b48-4d1f-a788-9e6853774fc0 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:21 compute-0 nova_compute[257802]: 2025-10-02 12:33:21.208 2 DEBUG nova.compute.provider_tree [None req-43f03d36-7b48-4d1f-a788-9e6853774fc0 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:33:21 compute-0 nova_compute[257802]: 2025-10-02 12:33:21.232 2 DEBUG nova.scheduler.client.report [None req-43f03d36-7b48-4d1f-a788-9e6853774fc0 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:33:21 compute-0 nova_compute[257802]: 2025-10-02 12:33:21.262 2 DEBUG oslo_concurrency.lockutils [None req-43f03d36-7b48-4d1f-a788-9e6853774fc0 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:21 compute-0 nova_compute[257802]: 2025-10-02 12:33:21.317 2 INFO nova.scheduler.client.report [None req-43f03d36-7b48-4d1f-a788-9e6853774fc0 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Deleted allocations for instance 24caf505-35fd-40c1-9bcc-1f83580b142b
Oct 02 12:33:21 compute-0 nova_compute[257802]: 2025-10-02 12:33:21.404 2 DEBUG oslo_concurrency.lockutils [None req-43f03d36-7b48-4d1f-a788-9e6853774fc0 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "24caf505-35fd-40c1-9bcc-1f83580b142b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2167: 305 pgs: 305 active+clean; 738 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 9.8 MiB/s wr, 337 op/s
Oct 02 12:33:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:21.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1263562727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:22 compute-0 kernel: tap8a410c4c-94 (unregistering): left promiscuous mode
Oct 02 12:33:22 compute-0 NetworkManager[44987]: <info>  [1759408402.7210] device (tap8a410c4c-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:33:22 compute-0 ovn_controller[148183]: 2025-10-02T12:33:22Z|00556|binding|INFO|Releasing lport 8a410c4c-94ba-44f0-9056-16dbab7db1d9 from this chassis (sb_readonly=0)
Oct 02 12:33:22 compute-0 ovn_controller[148183]: 2025-10-02T12:33:22Z|00557|binding|INFO|Setting lport 8a410c4c-94ba-44f0-9056-16dbab7db1d9 down in Southbound
Oct 02 12:33:22 compute-0 ovn_controller[148183]: 2025-10-02T12:33:22Z|00558|binding|INFO|Removing iface tap8a410c4c-94 ovn-installed in OVS
Oct 02 12:33:22 compute-0 nova_compute[257802]: 2025-10-02 12:33:22.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:22.734 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:ff:7e 10.100.0.14'], port_security=['fa:16:3e:2f:ff:7e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a53afa14-bb7b-4723-8239-2ed285f1bc94', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e58f4ba2-c72c-42b8-acea-ca6241431726', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc4d8f857b2d42bf9ae477fc5f514216', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3110ab08-53b5-412f-abb1-fdd400b42e71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff6421fa-d014-4140-8a8b-1356d60478c0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=8a410c4c-94ba-44f0-9056-16dbab7db1d9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:33:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:22.735 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 8a410c4c-94ba-44f0-9056-16dbab7db1d9 in datapath e58f4ba2-c72c-42b8-acea-ca6241431726 unbound from our chassis
Oct 02 12:33:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:22.737 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e58f4ba2-c72c-42b8-acea-ca6241431726, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:33:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:22.738 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d85bd83e-b207-40db-8bd8-1282f95b8690]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:22.739 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726 namespace which is not needed anymore
Oct 02 12:33:22 compute-0 nova_compute[257802]: 2025-10-02 12:33:22.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:22 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Oct 02 12:33:22 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d0000007e.scope: Consumed 14.305s CPU time.
Oct 02 12:33:22 compute-0 systemd-machined[211836]: Machine qemu-63-instance-0000007e terminated.
Oct 02 12:33:22 compute-0 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[336257]: [NOTICE]   (336261) : haproxy version is 2.8.14-c23fe91
Oct 02 12:33:22 compute-0 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[336257]: [NOTICE]   (336261) : path to executable is /usr/sbin/haproxy
Oct 02 12:33:22 compute-0 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[336257]: [WARNING]  (336261) : Exiting Master process...
Oct 02 12:33:22 compute-0 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[336257]: [ALERT]    (336261) : Current worker (336263) exited with code 143 (Terminated)
Oct 02 12:33:22 compute-0 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[336257]: [WARNING]  (336261) : All workers exited. Exiting... (0)
Oct 02 12:33:22 compute-0 nova_compute[257802]: 2025-10-02 12:33:22.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:22 compute-0 systemd[1]: libpod-adb392b14230f829e9e6217ca1c167b9bf175727f3c11e4cc31bfbb2c0f7570b.scope: Deactivated successfully.
Oct 02 12:33:22 compute-0 nova_compute[257802]: 2025-10-02 12:33:22.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:22 compute-0 podman[337044]: 2025-10-02 12:33:22.97053631 +0000 UTC m=+0.147741553 container died adb392b14230f829e9e6217ca1c167b9bf175727f3c11e4cc31bfbb2c0f7570b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 02 12:33:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-adb392b14230f829e9e6217ca1c167b9bf175727f3c11e4cc31bfbb2c0f7570b-userdata-shm.mount: Deactivated successfully.
Oct 02 12:33:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-91503f57da98b3b6cdaca381d69036ecb9a3a6913c3ea07f1b80e2187ae89425-merged.mount: Deactivated successfully.
Oct 02 12:33:23 compute-0 podman[337044]: 2025-10-02 12:33:23.019635047 +0000 UTC m=+0.196840280 container cleanup adb392b14230f829e9e6217ca1c167b9bf175727f3c11e4cc31bfbb2c0f7570b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 12:33:23 compute-0 systemd[1]: libpod-conmon-adb392b14230f829e9e6217ca1c167b9bf175727f3c11e4cc31bfbb2c0f7570b.scope: Deactivated successfully.
Oct 02 12:33:23 compute-0 podman[337081]: 2025-10-02 12:33:23.094305103 +0000 UTC m=+0.055157728 container remove adb392b14230f829e9e6217ca1c167b9bf175727f3c11e4cc31bfbb2c0f7570b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:33:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:23.101 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[db8ed3ea-fc5d-4eb2-9636-3d963958c162]: (4, ('Thu Oct  2 12:33:22 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726 (adb392b14230f829e9e6217ca1c167b9bf175727f3c11e4cc31bfbb2c0f7570b)\nadb392b14230f829e9e6217ca1c167b9bf175727f3c11e4cc31bfbb2c0f7570b\nThu Oct  2 12:33:23 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726 (adb392b14230f829e9e6217ca1c167b9bf175727f3c11e4cc31bfbb2c0f7570b)\nadb392b14230f829e9e6217ca1c167b9bf175727f3c11e4cc31bfbb2c0f7570b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:23.104 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[684f8bbe-5484-433a-97a0-add8a5af5f99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:23.105 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape58f4ba2-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:23 compute-0 nova_compute[257802]: 2025-10-02 12:33:23.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:23 compute-0 kernel: tape58f4ba2-c0: left promiscuous mode
Oct 02 12:33:23 compute-0 nova_compute[257802]: 2025-10-02 12:33:23.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:23.135 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7f76dcf7-73af-430e-b90e-04242d9490ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:23.161 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[aea722f6-43bf-465f-8706-f811bbe20a4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:23.162 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f1ef4de5-6d04-4446-a92c-82c3058eba1d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:23.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:23.179 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4fcf3e6a-94d9-489d-877b-c74119c415c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 645123, 'reachable_time': 40650, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337100, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:23 compute-0 systemd[1]: run-netns-ovnmeta\x2de58f4ba2\x2dc72c\x2d42b8\x2dacea\x2dca6241431726.mount: Deactivated successfully.
Oct 02 12:33:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:23.185 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:33:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:23.185 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[fa9a4ecd-744a-49cb-ab09-490ebc5e9784]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:23 compute-0 nova_compute[257802]: 2025-10-02 12:33:23.196 2 INFO nova.virt.libvirt.driver [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Instance shutdown successfully after 13 seconds.
Oct 02 12:33:23 compute-0 nova_compute[257802]: 2025-10-02 12:33:23.200 2 INFO nova.virt.libvirt.driver [-] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Instance destroyed successfully.
Oct 02 12:33:23 compute-0 nova_compute[257802]: 2025-10-02 12:33:23.200 2 DEBUG nova.objects.instance [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'numa_topology' on Instance uuid a53afa14-bb7b-4723-8239-2ed285f1bc94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:23 compute-0 nova_compute[257802]: 2025-10-02 12:33:23.221 2 INFO nova.virt.libvirt.driver [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Attempting rescue
Oct 02 12:33:23 compute-0 nova_compute[257802]: 2025-10-02 12:33:23.222 2 DEBUG nova.virt.libvirt.driver [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Oct 02 12:33:23 compute-0 nova_compute[257802]: 2025-10-02 12:33:23.225 2 DEBUG nova.virt.libvirt.driver [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 12:33:23 compute-0 nova_compute[257802]: 2025-10-02 12:33:23.225 2 INFO nova.virt.libvirt.driver [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Creating image(s)
Oct 02 12:33:23 compute-0 nova_compute[257802]: 2025-10-02 12:33:23.337 2 DEBUG nova.storage.rbd_utils [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image a53afa14-bb7b-4723-8239-2ed285f1bc94_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:33:23 compute-0 ceph-mon[73607]: pgmap v2167: 305 pgs: 305 active+clean; 738 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 9.8 MiB/s wr, 337 op/s
Oct 02 12:33:23 compute-0 nova_compute[257802]: 2025-10-02 12:33:23.343 2 DEBUG nova.objects.instance [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'trusted_certs' on Instance uuid a53afa14-bb7b-4723-8239-2ed285f1bc94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:23 compute-0 nova_compute[257802]: 2025-10-02 12:33:23.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:23 compute-0 nova_compute[257802]: 2025-10-02 12:33:23.456 2 DEBUG nova.storage.rbd_utils [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image a53afa14-bb7b-4723-8239-2ed285f1bc94_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:33:23 compute-0 nova_compute[257802]: 2025-10-02 12:33:23.481 2 DEBUG nova.storage.rbd_utils [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image a53afa14-bb7b-4723-8239-2ed285f1bc94_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:33:23 compute-0 nova_compute[257802]: 2025-10-02 12:33:23.484 2 DEBUG oslo_concurrency.processutils [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:23 compute-0 nova_compute[257802]: 2025-10-02 12:33:23.513 2 DEBUG nova.compute.manager [req-77969820-8536-410c-9983-1745ec180bee req-3ce10004-27f2-428a-8133-c697c163d2f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Received event network-vif-unplugged-8a410c4c-94ba-44f0-9056-16dbab7db1d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:23 compute-0 nova_compute[257802]: 2025-10-02 12:33:23.514 2 DEBUG oslo_concurrency.lockutils [req-77969820-8536-410c-9983-1745ec180bee req-3ce10004-27f2-428a-8133-c697c163d2f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a53afa14-bb7b-4723-8239-2ed285f1bc94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:23 compute-0 nova_compute[257802]: 2025-10-02 12:33:23.514 2 DEBUG oslo_concurrency.lockutils [req-77969820-8536-410c-9983-1745ec180bee req-3ce10004-27f2-428a-8133-c697c163d2f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a53afa14-bb7b-4723-8239-2ed285f1bc94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:23 compute-0 nova_compute[257802]: 2025-10-02 12:33:23.514 2 DEBUG oslo_concurrency.lockutils [req-77969820-8536-410c-9983-1745ec180bee req-3ce10004-27f2-428a-8133-c697c163d2f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a53afa14-bb7b-4723-8239-2ed285f1bc94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:23 compute-0 nova_compute[257802]: 2025-10-02 12:33:23.514 2 DEBUG nova.compute.manager [req-77969820-8536-410c-9983-1745ec180bee req-3ce10004-27f2-428a-8133-c697c163d2f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] No waiting events found dispatching network-vif-unplugged-8a410c4c-94ba-44f0-9056-16dbab7db1d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:33:23 compute-0 nova_compute[257802]: 2025-10-02 12:33:23.515 2 WARNING nova.compute.manager [req-77969820-8536-410c-9983-1745ec180bee req-3ce10004-27f2-428a-8133-c697c163d2f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Received unexpected event network-vif-unplugged-8a410c4c-94ba-44f0-9056-16dbab7db1d9 for instance with vm_state active and task_state rescuing.
Oct 02 12:33:23 compute-0 nova_compute[257802]: 2025-10-02 12:33:23.549 2 DEBUG oslo_concurrency.processutils [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:23 compute-0 nova_compute[257802]: 2025-10-02 12:33:23.550 2 DEBUG oslo_concurrency.lockutils [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:23 compute-0 nova_compute[257802]: 2025-10-02 12:33:23.551 2 DEBUG oslo_concurrency.lockutils [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:23 compute-0 nova_compute[257802]: 2025-10-02 12:33:23.551 2 DEBUG oslo_concurrency.lockutils [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:23 compute-0 nova_compute[257802]: 2025-10-02 12:33:23.579 2 DEBUG nova.storage.rbd_utils [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image a53afa14-bb7b-4723-8239-2ed285f1bc94_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:33:23 compute-0 nova_compute[257802]: 2025-10-02 12:33:23.583 2 DEBUG oslo_concurrency.processutils [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 a53afa14-bb7b-4723-8239-2ed285f1bc94_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 305 active+clean; 674 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 9.4 MiB/s wr, 404 op/s
Oct 02 12:33:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:33:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:23.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:33:23 compute-0 nova_compute[257802]: 2025-10-02 12:33:23.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.099 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.099 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.100 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.100 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.100 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.101 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.123 2 DEBUG nova.virt.libvirt.imagecache [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.140 2 DEBUG nova.virt.libvirt.imagecache [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.141 2 DEBUG nova.virt.libvirt.imagecache [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Image id c2d0c2bc-fe21-4689-86ae-d6728c15874c yields fingerprint 50c3d0e01c5fd68886c717f1fdd053015a0fe968 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.141 2 INFO nova.virt.libvirt.imagecache [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] image c2d0c2bc-fe21-4689-86ae-d6728c15874c at (/var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968): checking
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.141 2 DEBUG nova.virt.libvirt.imagecache [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] image c2d0c2bc-fe21-4689-86ae-d6728c15874c at (/var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.142 2 INFO oslo.privsep.daemon [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpfzz26d8a/privsep.sock']
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.352 2 DEBUG oslo_concurrency.processutils [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 a53afa14-bb7b-4723-8239-2ed285f1bc94_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.769s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.353 2 DEBUG nova.objects.instance [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'migration_context' on Instance uuid a53afa14-bb7b-4723-8239-2ed285f1bc94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.368 2 DEBUG nova.virt.libvirt.driver [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.369 2 DEBUG nova.virt.libvirt.driver [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Start _get_guest_xml network_info=[{"id": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "address": "fa:16:3e:2f:ff:7e", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "vif_mac": "fa:16:3e:2f:ff:7e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a410c4c-94", "ovs_interfaceid": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.369 2 DEBUG nova.objects.instance [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'resources' on Instance uuid a53afa14-bb7b-4723-8239-2ed285f1bc94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.385 2 WARNING nova.virt.libvirt.driver [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.390 2 DEBUG nova.virt.libvirt.host [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.390 2 DEBUG nova.virt.libvirt.host [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.393 2 DEBUG nova.virt.libvirt.host [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.394 2 DEBUG nova.virt.libvirt.host [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.395 2 DEBUG nova.virt.libvirt.driver [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.395 2 DEBUG nova.virt.hardware [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.396 2 DEBUG nova.virt.hardware [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.396 2 DEBUG nova.virt.hardware [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.396 2 DEBUG nova.virt.hardware [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.396 2 DEBUG nova.virt.hardware [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.397 2 DEBUG nova.virt.hardware [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.397 2 DEBUG nova.virt.hardware [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.397 2 DEBUG nova.virt.hardware [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.398 2 DEBUG nova.virt.hardware [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.398 2 DEBUG nova.virt.hardware [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.398 2 DEBUG nova.virt.hardware [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.398 2 DEBUG nova.objects.instance [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'vcpu_model' on Instance uuid a53afa14-bb7b-4723-8239-2ed285f1bc94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.416 2 DEBUG oslo_concurrency.processutils [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:24 compute-0 ceph-mon[73607]: pgmap v2168: 305 pgs: 305 active+clean; 674 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 9.4 MiB/s wr, 404 op/s
Oct 02 12:33:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:33:24 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4251958065' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.867 2 DEBUG oslo_concurrency.processutils [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:24 compute-0 nova_compute[257802]: 2025-10-02 12:33:24.868 2 DEBUG oslo_concurrency.processutils [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:25.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:33:25 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3014417915' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.265 2 DEBUG oslo_concurrency.processutils [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.267 2 DEBUG oslo_concurrency.processutils [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.559 2 DEBUG nova.compute.manager [req-b5f167fa-31c8-439c-bd54-ad8e09f62755 req-b768cd92-1436-4975-ba8b-81a6c8552ebc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Received event network-vif-plugged-8a410c4c-94ba-44f0-9056-16dbab7db1d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.560 2 DEBUG oslo_concurrency.lockutils [req-b5f167fa-31c8-439c-bd54-ad8e09f62755 req-b768cd92-1436-4975-ba8b-81a6c8552ebc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a53afa14-bb7b-4723-8239-2ed285f1bc94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.560 2 DEBUG oslo_concurrency.lockutils [req-b5f167fa-31c8-439c-bd54-ad8e09f62755 req-b768cd92-1436-4975-ba8b-81a6c8552ebc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a53afa14-bb7b-4723-8239-2ed285f1bc94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.560 2 DEBUG oslo_concurrency.lockutils [req-b5f167fa-31c8-439c-bd54-ad8e09f62755 req-b768cd92-1436-4975-ba8b-81a6c8552ebc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a53afa14-bb7b-4723-8239-2ed285f1bc94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.561 2 DEBUG nova.compute.manager [req-b5f167fa-31c8-439c-bd54-ad8e09f62755 req-b768cd92-1436-4975-ba8b-81a6c8552ebc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] No waiting events found dispatching network-vif-plugged-8a410c4c-94ba-44f0-9056-16dbab7db1d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.561 2 WARNING nova.compute.manager [req-b5f167fa-31c8-439c-bd54-ad8e09f62755 req-b768cd92-1436-4975-ba8b-81a6c8552ebc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Received unexpected event network-vif-plugged-8a410c4c-94ba-44f0-9056-16dbab7db1d9 for instance with vm_state active and task_state rescuing.
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.667 2 INFO oslo.privsep.daemon [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Spawned new privsep daemon via rootwrap
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.538 20794 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.543 20794 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.545 20794 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.546 20794 INFO oslo.privsep.daemon [-] privsep daemon running as pid 20794
Oct 02 12:33:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4251958065' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3014417915' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2169: 305 pgs: 305 active+clean; 635 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 7.9 MiB/s wr, 521 op/s
Oct 02 12:33:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:33:25 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3584982344' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.740 2 DEBUG oslo_concurrency.processutils [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.742 2 DEBUG nova.virt.libvirt.vif [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:32:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-408124247',display_name='tempest-ServerRescueNegativeTestJSON-server-408124247',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-408124247',id=126,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:33:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cc4d8f857b2d42bf9ae477fc5f514216',ramdisk_id='',reservation_id='r-ysobwts2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-959216005',owner_user_name='tempest-ServerRescueNegativeTestJSON-959216005-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:33:04Z,user_data=None,user_id='6c932f0d0e594f00855572fbe06ee3aa',uuid=a53afa14-bb7b-4723-8239-2ed285f1bc94,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "address": "fa:16:3e:2f:ff:7e", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "vif_mac": "fa:16:3e:2f:ff:7e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a410c4c-94", "ovs_interfaceid": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.743 2 DEBUG nova.network.os_vif_util [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Converting VIF {"id": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "address": "fa:16:3e:2f:ff:7e", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "vif_mac": "fa:16:3e:2f:ff:7e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a410c4c-94", "ovs_interfaceid": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.744 2 DEBUG nova.network.os_vif_util [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2f:ff:7e,bridge_name='br-int',has_traffic_filtering=True,id=8a410c4c-94ba-44f0-9056-16dbab7db1d9,network=Network(e58f4ba2-c72c-42b8-acea-ca6241431726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a410c4c-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.745 2 DEBUG nova.objects.instance [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'pci_devices' on Instance uuid a53afa14-bb7b-4723-8239-2ed285f1bc94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.763 2 DEBUG nova.virt.libvirt.driver [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:33:25 compute-0 nova_compute[257802]:   <uuid>a53afa14-bb7b-4723-8239-2ed285f1bc94</uuid>
Oct 02 12:33:25 compute-0 nova_compute[257802]:   <name>instance-0000007e</name>
Oct 02 12:33:25 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:33:25 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:33:25 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-408124247</nova:name>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:33:24</nova:creationTime>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:33:25 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:33:25 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:33:25 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:33:25 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:33:25 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:33:25 compute-0 nova_compute[257802]:         <nova:user uuid="6c932f0d0e594f00855572fbe06ee3aa">tempest-ServerRescueNegativeTestJSON-959216005-project-member</nova:user>
Oct 02 12:33:25 compute-0 nova_compute[257802]:         <nova:project uuid="cc4d8f857b2d42bf9ae477fc5f514216">tempest-ServerRescueNegativeTestJSON-959216005</nova:project>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:33:25 compute-0 nova_compute[257802]:         <nova:port uuid="8a410c4c-94ba-44f0-9056-16dbab7db1d9">
Oct 02 12:33:25 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:33:25 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:33:25 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <system>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <entry name="serial">a53afa14-bb7b-4723-8239-2ed285f1bc94</entry>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <entry name="uuid">a53afa14-bb7b-4723-8239-2ed285f1bc94</entry>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     </system>
Oct 02 12:33:25 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:33:25 compute-0 nova_compute[257802]:   <os>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:   </os>
Oct 02 12:33:25 compute-0 nova_compute[257802]:   <features>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:   </features>
Oct 02 12:33:25 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:33:25 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:33:25 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/a53afa14-bb7b-4723-8239-2ed285f1bc94_disk.rescue">
Oct 02 12:33:25 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       </source>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:33:25 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/a53afa14-bb7b-4723-8239-2ed285f1bc94_disk">
Oct 02 12:33:25 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       </source>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:33:25 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <target dev="vdb" bus="virtio"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/a53afa14-bb7b-4723-8239-2ed285f1bc94_disk.config.rescue">
Oct 02 12:33:25 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       </source>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:33:25 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:2f:ff:7e"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <target dev="tap8a410c4c-94"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/a53afa14-bb7b-4723-8239-2ed285f1bc94/console.log" append="off"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <video>
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     </video>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:33:25 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:33:25 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:33:25 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:33:25 compute-0 nova_compute[257802]: </domain>
Oct 02 12:33:25 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.777 2 INFO nova.virt.libvirt.driver [-] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Instance destroyed successfully.
Oct 02 12:33:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:25.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.911 2 DEBUG nova.virt.libvirt.imagecache [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.914 2 DEBUG nova.virt.libvirt.imagecache [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] a53afa14-bb7b-4723-8239-2ed285f1bc94 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.914 2 DEBUG nova.virt.libvirt.imagecache [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] 49fade07-5c87-4e89-bbec-cce4fc94a4a2 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.914 2 WARNING nova.virt.libvirt.imagecache [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Unknown base file: /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.914 2 INFO nova.virt.libvirt.imagecache [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Active base files: /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.915 2 INFO nova.virt.libvirt.imagecache [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Removable base files: /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.915 2 INFO nova.virt.libvirt.imagecache [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.916 2 DEBUG nova.virt.libvirt.imagecache [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.916 2 DEBUG nova.virt.libvirt.imagecache [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.916 2 DEBUG nova.virt.libvirt.imagecache [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.917 2 INFO nova.virt.libvirt.imagecache [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.976 2 DEBUG nova.virt.libvirt.driver [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.977 2 DEBUG nova.virt.libvirt.driver [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.977 2 DEBUG nova.virt.libvirt.driver [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.977 2 DEBUG nova.virt.libvirt.driver [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] No VIF found with MAC fa:16:3e:2f:ff:7e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:33:25 compute-0 nova_compute[257802]: 2025-10-02 12:33:25.978 2 INFO nova.virt.libvirt.driver [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Using config drive
Oct 02 12:33:26 compute-0 nova_compute[257802]: 2025-10-02 12:33:26.002 2 DEBUG nova.storage.rbd_utils [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image a53afa14-bb7b-4723-8239-2ed285f1bc94_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:33:26 compute-0 nova_compute[257802]: 2025-10-02 12:33:26.022 2 DEBUG nova.objects.instance [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'ec2_ids' on Instance uuid a53afa14-bb7b-4723-8239-2ed285f1bc94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:26 compute-0 nova_compute[257802]: 2025-10-02 12:33:26.058 2 DEBUG nova.objects.instance [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'keypairs' on Instance uuid a53afa14-bb7b-4723-8239-2ed285f1bc94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:26 compute-0 nova_compute[257802]: 2025-10-02 12:33:26.488 2 INFO nova.virt.libvirt.driver [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Creating config drive at /var/lib/nova/instances/a53afa14-bb7b-4723-8239-2ed285f1bc94/disk.config.rescue
Oct 02 12:33:26 compute-0 nova_compute[257802]: 2025-10-02 12:33:26.494 2 DEBUG oslo_concurrency.processutils [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a53afa14-bb7b-4723-8239-2ed285f1bc94/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaefde7t1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:26 compute-0 nova_compute[257802]: 2025-10-02 12:33:26.562 2 DEBUG oslo_concurrency.lockutils [None req-014b02b2-6f90-4550-a39d-70c9257067e2 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Acquiring lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:26 compute-0 nova_compute[257802]: 2025-10-02 12:33:26.563 2 DEBUG oslo_concurrency.lockutils [None req-014b02b2-6f90-4550-a39d-70c9257067e2 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:26 compute-0 nova_compute[257802]: 2025-10-02 12:33:26.563 2 DEBUG nova.compute.manager [None req-014b02b2-6f90-4550-a39d-70c9257067e2 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:33:26 compute-0 nova_compute[257802]: 2025-10-02 12:33:26.569 2 DEBUG nova.compute.manager [None req-014b02b2-6f90-4550-a39d-70c9257067e2 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 02 12:33:26 compute-0 nova_compute[257802]: 2025-10-02 12:33:26.570 2 DEBUG nova.objects.instance [None req-014b02b2-6f90-4550-a39d-70c9257067e2 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Lazy-loading 'flavor' on Instance uuid 49fade07-5c87-4e89-bbec-cce4fc94a4a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:26 compute-0 nova_compute[257802]: 2025-10-02 12:33:26.598 2 DEBUG nova.virt.libvirt.driver [None req-014b02b2-6f90-4550-a39d-70c9257067e2 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:33:26 compute-0 nova_compute[257802]: 2025-10-02 12:33:26.633 2 DEBUG oslo_concurrency.processutils [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a53afa14-bb7b-4723-8239-2ed285f1bc94/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaefde7t1" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:26 compute-0 nova_compute[257802]: 2025-10-02 12:33:26.661 2 DEBUG nova.storage.rbd_utils [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image a53afa14-bb7b-4723-8239-2ed285f1bc94_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:33:26 compute-0 nova_compute[257802]: 2025-10-02 12:33:26.667 2 DEBUG oslo_concurrency.processutils [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a53afa14-bb7b-4723-8239-2ed285f1bc94/disk.config.rescue a53afa14-bb7b-4723-8239-2ed285f1bc94_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:26 compute-0 sudo[337291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:33:26 compute-0 sudo[337291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:26 compute-0 sudo[337291]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:26 compute-0 sudo[337335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:33:26 compute-0 sudo[337335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:26 compute-0 sudo[337335]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:26 compute-0 ceph-mon[73607]: pgmap v2169: 305 pgs: 305 active+clean; 635 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 7.9 MiB/s wr, 521 op/s
Oct 02 12:33:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3584982344' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:26.951 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:26.952 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:26.952 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:27.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:27 compute-0 nova_compute[257802]: 2025-10-02 12:33:27.301 2 DEBUG oslo_concurrency.processutils [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a53afa14-bb7b-4723-8239-2ed285f1bc94/disk.config.rescue a53afa14-bb7b-4723-8239-2ed285f1bc94_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:27 compute-0 nova_compute[257802]: 2025-10-02 12:33:27.302 2 INFO nova.virt.libvirt.driver [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Deleting local config drive /var/lib/nova/instances/a53afa14-bb7b-4723-8239-2ed285f1bc94/disk.config.rescue because it was imported into RBD.
Oct 02 12:33:27 compute-0 NetworkManager[44987]: <info>  [1759408407.3521] manager: (tap8a410c4c-94): new Tun device (/org/freedesktop/NetworkManager/Devices/265)
Oct 02 12:33:27 compute-0 kernel: tap8a410c4c-94: entered promiscuous mode
Oct 02 12:33:27 compute-0 ovn_controller[148183]: 2025-10-02T12:33:27Z|00559|binding|INFO|Claiming lport 8a410c4c-94ba-44f0-9056-16dbab7db1d9 for this chassis.
Oct 02 12:33:27 compute-0 ovn_controller[148183]: 2025-10-02T12:33:27Z|00560|binding|INFO|8a410c4c-94ba-44f0-9056-16dbab7db1d9: Claiming fa:16:3e:2f:ff:7e 10.100.0.14
Oct 02 12:33:27 compute-0 nova_compute[257802]: 2025-10-02 12:33:27.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:27 compute-0 ovn_controller[148183]: 2025-10-02T12:33:27Z|00561|binding|INFO|Setting lport 8a410c4c-94ba-44f0-9056-16dbab7db1d9 ovn-installed in OVS
Oct 02 12:33:27 compute-0 nova_compute[257802]: 2025-10-02 12:33:27.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:27 compute-0 nova_compute[257802]: 2025-10-02 12:33:27.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:27 compute-0 systemd-machined[211836]: New machine qemu-65-instance-0000007e.
Oct 02 12:33:27 compute-0 ovn_controller[148183]: 2025-10-02T12:33:27Z|00562|binding|INFO|Setting lport 8a410c4c-94ba-44f0-9056-16dbab7db1d9 up in Southbound
Oct 02 12:33:27 compute-0 systemd-udevd[337393]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:27.387 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:ff:7e 10.100.0.14'], port_security=['fa:16:3e:2f:ff:7e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a53afa14-bb7b-4723-8239-2ed285f1bc94', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e58f4ba2-c72c-42b8-acea-ca6241431726', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc4d8f857b2d42bf9ae477fc5f514216', 'neutron:revision_number': '5', 'neutron:security_group_ids': '3110ab08-53b5-412f-abb1-fdd400b42e71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff6421fa-d014-4140-8a8b-1356d60478c0, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=8a410c4c-94ba-44f0-9056-16dbab7db1d9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:27.388 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 8a410c4c-94ba-44f0-9056-16dbab7db1d9 in datapath e58f4ba2-c72c-42b8-acea-ca6241431726 bound to our chassis
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:27.389 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e58f4ba2-c72c-42b8-acea-ca6241431726
Oct 02 12:33:27 compute-0 systemd[1]: Started Virtual Machine qemu-65-instance-0000007e.
Oct 02 12:33:27 compute-0 NetworkManager[44987]: <info>  [1759408407.3995] device (tap8a410c4c-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:33:27 compute-0 NetworkManager[44987]: <info>  [1759408407.4002] device (tap8a410c4c-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:27.401 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[91bb4f42-4bff-4fe5-8c88-64e15791ec72]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:27.402 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape58f4ba2-c1 in ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:27.403 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape58f4ba2-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:27.403 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f52adb4c-c3ab-4804-988f-e67e560047c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:27.404 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[704f199e-2c29-459f-9093-8f5fb1294020]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:27.414 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[72c055d7-cbaf-440a-91c5-16c2e7e57526]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:27.436 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1de64bc0-214f-43fd-bb32-343739b37093]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:27.463 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[d643b65b-f480-4c63-853a-1a998a7c65c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:27 compute-0 systemd-udevd[337396]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:33:27 compute-0 NetworkManager[44987]: <info>  [1759408407.4689] manager: (tape58f4ba2-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/266)
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:27.468 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ee549c34-7ba2-48c6-9153-2119ada158ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:27.499 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[288c2024-0b6b-40b1-92b9-3c8886e3d5b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:27.502 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[8620ebe6-8f82-4898-ad8d-d62673553c1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:27 compute-0 NetworkManager[44987]: <info>  [1759408407.5247] device (tape58f4ba2-c0): carrier: link connected
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:27.532 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[93e94f76-0927-4945-8592-b6eadecaeaae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:27.548 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[584c7b16-2db6-4667-ad78-d04555604f9c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape58f4ba2-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:ad:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647515, 'reachable_time': 30641, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337426, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:27.563 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[70a9df60-bfee-43b6-859f-3b5a27e46c61]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fede:ad0c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647515, 'tstamp': 647515}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337427, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:27.579 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f96cc8b9-0687-4b1a-b1b8-d0527dab4c16]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape58f4ba2-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:ad:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647515, 'reachable_time': 30641, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 337428, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:27.608 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[335fb29d-0fce-473c-a778-a7c4ffe570d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:27.667 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d1437fc5-c66a-4b8c-97d0-1cda2d0963b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:27.668 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape58f4ba2-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:27.668 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:27.669 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape58f4ba2-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:27 compute-0 nova_compute[257802]: 2025-10-02 12:33:27.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:27 compute-0 NetworkManager[44987]: <info>  [1759408407.6712] manager: (tape58f4ba2-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/267)
Oct 02 12:33:27 compute-0 kernel: tape58f4ba2-c0: entered promiscuous mode
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:27.673 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape58f4ba2-c0, col_values=(('external_ids', {'iface-id': '81a5a13b-b81c-444a-8751-b35a35cdf3dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:27 compute-0 ovn_controller[148183]: 2025-10-02T12:33:27Z|00563|binding|INFO|Releasing lport 81a5a13b-b81c-444a-8751-b35a35cdf3dc from this chassis (sb_readonly=0)
Oct 02 12:33:27 compute-0 nova_compute[257802]: 2025-10-02 12:33:27.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:27 compute-0 nova_compute[257802]: 2025-10-02 12:33:27.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:27.691 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e58f4ba2-c72c-42b8-acea-ca6241431726.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e58f4ba2-c72c-42b8-acea-ca6241431726.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:27.692 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c150c070-1719-4108-8d9c-5a954ebd72f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:27.693 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-e58f4ba2-c72c-42b8-acea-ca6241431726
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/e58f4ba2-c72c-42b8-acea-ca6241431726.pid.haproxy
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID e58f4ba2-c72c-42b8-acea-ca6241431726
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:27.694 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'env', 'PROCESS_TAG=haproxy-e58f4ba2-c72c-42b8-acea-ca6241431726', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e58f4ba2-c72c-42b8-acea-ca6241431726.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:33:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2170: 305 pgs: 305 active+clean; 635 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.2 MiB/s rd, 4.6 MiB/s wr, 474 op/s
Oct 02 12:33:27 compute-0 nova_compute[257802]: 2025-10-02 12:33:27.788 2 DEBUG nova.compute.manager [req-df36b02e-fbf0-4455-85ff-fc64e25309c7 req-b256982f-b4d2-4fb8-8aa9-a2b4b60a9edb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Received event network-vif-plugged-8a410c4c-94ba-44f0-9056-16dbab7db1d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:27 compute-0 nova_compute[257802]: 2025-10-02 12:33:27.788 2 DEBUG oslo_concurrency.lockutils [req-df36b02e-fbf0-4455-85ff-fc64e25309c7 req-b256982f-b4d2-4fb8-8aa9-a2b4b60a9edb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a53afa14-bb7b-4723-8239-2ed285f1bc94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:27 compute-0 nova_compute[257802]: 2025-10-02 12:33:27.788 2 DEBUG oslo_concurrency.lockutils [req-df36b02e-fbf0-4455-85ff-fc64e25309c7 req-b256982f-b4d2-4fb8-8aa9-a2b4b60a9edb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a53afa14-bb7b-4723-8239-2ed285f1bc94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:27 compute-0 nova_compute[257802]: 2025-10-02 12:33:27.789 2 DEBUG oslo_concurrency.lockutils [req-df36b02e-fbf0-4455-85ff-fc64e25309c7 req-b256982f-b4d2-4fb8-8aa9-a2b4b60a9edb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a53afa14-bb7b-4723-8239-2ed285f1bc94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:27 compute-0 nova_compute[257802]: 2025-10-02 12:33:27.789 2 DEBUG nova.compute.manager [req-df36b02e-fbf0-4455-85ff-fc64e25309c7 req-b256982f-b4d2-4fb8-8aa9-a2b4b60a9edb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] No waiting events found dispatching network-vif-plugged-8a410c4c-94ba-44f0-9056-16dbab7db1d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:33:27 compute-0 nova_compute[257802]: 2025-10-02 12:33:27.789 2 WARNING nova.compute.manager [req-df36b02e-fbf0-4455-85ff-fc64e25309c7 req-b256982f-b4d2-4fb8-8aa9-a2b4b60a9edb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Received unexpected event network-vif-plugged-8a410c4c-94ba-44f0-9056-16dbab7db1d9 for instance with vm_state active and task_state rescuing.
Oct 02 12:33:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:27.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:28 compute-0 podman[337496]: 2025-10-02 12:33:28.030709257 +0000 UTC m=+0.025634981 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:33:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/542825886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:28 compute-0 nova_compute[257802]: 2025-10-02 12:33:28.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:28 compute-0 nova_compute[257802]: 2025-10-02 12:33:28.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:33:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:29.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:33:29 compute-0 podman[337496]: 2025-10-02 12:33:29.585379951 +0000 UTC m=+1.580305655 container create 045ad838034caeee20ddafee4c6fadbed47c517816111a05ac7e45fa4eace2e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 12:33:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2171: 305 pgs: 305 active+clean; 671 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 6.6 MiB/s wr, 502 op/s
Oct 02 12:33:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:33:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:29.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:33:29 compute-0 nova_compute[257802]: 2025-10-02 12:33:29.947 2 DEBUG nova.compute.manager [req-8a1c66ee-f884-40a0-a845-6ef46cfcd5d6 req-e397e859-93a5-4e24-94f9-447a2f5d2803 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Received event network-vif-plugged-8a410c4c-94ba-44f0-9056-16dbab7db1d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:29 compute-0 nova_compute[257802]: 2025-10-02 12:33:29.948 2 DEBUG oslo_concurrency.lockutils [req-8a1c66ee-f884-40a0-a845-6ef46cfcd5d6 req-e397e859-93a5-4e24-94f9-447a2f5d2803 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a53afa14-bb7b-4723-8239-2ed285f1bc94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:29 compute-0 nova_compute[257802]: 2025-10-02 12:33:29.948 2 DEBUG oslo_concurrency.lockutils [req-8a1c66ee-f884-40a0-a845-6ef46cfcd5d6 req-e397e859-93a5-4e24-94f9-447a2f5d2803 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a53afa14-bb7b-4723-8239-2ed285f1bc94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:29 compute-0 nova_compute[257802]: 2025-10-02 12:33:29.948 2 DEBUG oslo_concurrency.lockutils [req-8a1c66ee-f884-40a0-a845-6ef46cfcd5d6 req-e397e859-93a5-4e24-94f9-447a2f5d2803 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a53afa14-bb7b-4723-8239-2ed285f1bc94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:29 compute-0 nova_compute[257802]: 2025-10-02 12:33:29.949 2 DEBUG nova.compute.manager [req-8a1c66ee-f884-40a0-a845-6ef46cfcd5d6 req-e397e859-93a5-4e24-94f9-447a2f5d2803 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] No waiting events found dispatching network-vif-plugged-8a410c4c-94ba-44f0-9056-16dbab7db1d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:33:29 compute-0 nova_compute[257802]: 2025-10-02 12:33:29.949 2 WARNING nova.compute.manager [req-8a1c66ee-f884-40a0-a845-6ef46cfcd5d6 req-e397e859-93a5-4e24-94f9-447a2f5d2803 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Received unexpected event network-vif-plugged-8a410c4c-94ba-44f0-9056-16dbab7db1d9 for instance with vm_state active and task_state rescuing.
Oct 02 12:33:30 compute-0 nova_compute[257802]: 2025-10-02 12:33:30.176 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Removed pending event for a53afa14-bb7b-4723-8239-2ed285f1bc94 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:33:30 compute-0 nova_compute[257802]: 2025-10-02 12:33:30.176 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408410.1754732, a53afa14-bb7b-4723-8239-2ed285f1bc94 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:33:30 compute-0 nova_compute[257802]: 2025-10-02 12:33:30.176 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] VM Resumed (Lifecycle Event)
Oct 02 12:33:30 compute-0 nova_compute[257802]: 2025-10-02 12:33:30.180 2 DEBUG nova.compute.manager [None req-194e7163-18d7-4426-9f00-8be6bfc52545 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:33:30 compute-0 nova_compute[257802]: 2025-10-02 12:33:30.220 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:33:30 compute-0 nova_compute[257802]: 2025-10-02 12:33:30.225 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:33:30 compute-0 ceph-mon[73607]: pgmap v2170: 305 pgs: 305 active+clean; 635 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.2 MiB/s rd, 4.6 MiB/s wr, 474 op/s
Oct 02 12:33:30 compute-0 systemd[1]: Started libpod-conmon-045ad838034caeee20ddafee4c6fadbed47c517816111a05ac7e45fa4eace2e8.scope.
Oct 02 12:33:30 compute-0 nova_compute[257802]: 2025-10-02 12:33:30.292 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] During sync_power_state the instance has a pending task (rescuing). Skip.
Oct 02 12:33:30 compute-0 nova_compute[257802]: 2025-10-02 12:33:30.293 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408410.1783705, a53afa14-bb7b-4723-8239-2ed285f1bc94 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:33:30 compute-0 nova_compute[257802]: 2025-10-02 12:33:30.293 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] VM Started (Lifecycle Event)
Oct 02 12:33:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:33:30 compute-0 nova_compute[257802]: 2025-10-02 12:33:30.312 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:33:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98626666109bd08180d9fa50b75947bc4e663092b586e25c99dd8343a9eb526/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:33:30 compute-0 nova_compute[257802]: 2025-10-02 12:33:30.316 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:33:30 compute-0 podman[337496]: 2025-10-02 12:33:30.466124024 +0000 UTC m=+2.461049748 container init 045ad838034caeee20ddafee4c6fadbed47c517816111a05ac7e45fa4eace2e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:33:30 compute-0 podman[337496]: 2025-10-02 12:33:30.472249374 +0000 UTC m=+2.467175098 container start 045ad838034caeee20ddafee4c6fadbed47c517816111a05ac7e45fa4eace2e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 12:33:30 compute-0 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[337538]: [NOTICE]   (337542) : New worker (337544) forked
Oct 02 12:33:30 compute-0 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[337538]: [NOTICE]   (337542) : Loading success.
Oct 02 12:33:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:33:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:31.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:33:31 compute-0 ceph-mon[73607]: pgmap v2171: 305 pgs: 305 active+clean; 671 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 6.6 MiB/s wr, 502 op/s
Oct 02 12:33:31 compute-0 unix_chkpwd[337554]: password check failed for user (root)
Oct 02 12:33:31 compute-0 sshd-session[337510]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.167.220.139  user=root
Oct 02 12:33:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 305 active+clean; 678 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 4.5 MiB/s wr, 384 op/s
Oct 02 12:33:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:31.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:33.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:33 compute-0 nova_compute[257802]: 2025-10-02 12:33:33.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:33 compute-0 ovn_controller[148183]: 2025-10-02T12:33:33Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:52:ee:74 10.100.0.12
Oct 02 12:33:33 compute-0 ovn_controller[148183]: 2025-10-02T12:33:33Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:52:ee:74 10.100.0.12
Oct 02 12:33:33 compute-0 sshd-session[337510]: Failed password for root from 43.167.220.139 port 51196 ssh2
Oct 02 12:33:33 compute-0 ceph-mon[73607]: pgmap v2172: 305 pgs: 305 active+clean; 678 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 4.5 MiB/s wr, 384 op/s
Oct 02 12:33:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 305 active+clean; 703 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 5.1 MiB/s wr, 401 op/s
Oct 02 12:33:33 compute-0 nova_compute[257802]: 2025-10-02 12:33:33.881 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408398.8807435, 24caf505-35fd-40c1-9bcc-1f83580b142b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:33:33 compute-0 nova_compute[257802]: 2025-10-02 12:33:33.881 2 INFO nova.compute.manager [-] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] VM Stopped (Lifecycle Event)
Oct 02 12:33:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:33.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:33 compute-0 nova_compute[257802]: 2025-10-02 12:33:33.928 2 DEBUG nova.compute.manager [None req-92ba62ee-4eaf-48ba-b4c9-3fe9db6cb30a - - - - - -] [instance: 24caf505-35fd-40c1-9bcc-1f83580b142b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:33:34 compute-0 nova_compute[257802]: 2025-10-02 12:33:34.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:34 compute-0 ceph-mon[73607]: pgmap v2173: 305 pgs: 305 active+clean; 703 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 5.1 MiB/s wr, 401 op/s
Oct 02 12:33:35 compute-0 sshd-session[337510]: Connection closed by authenticating user root 43.167.220.139 port 51196 [preauth]
Oct 02 12:33:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:33:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:35.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:33:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2174: 305 pgs: 305 active+clean; 755 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 8.2 MiB/s wr, 462 op/s
Oct 02 12:33:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:35.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:36 compute-0 nova_compute[257802]: 2025-10-02 12:33:36.641 2 DEBUG nova.virt.libvirt.driver [None req-014b02b2-6f90-4550-a39d-70c9257067e2 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:33:36 compute-0 ceph-mon[73607]: pgmap v2174: 305 pgs: 305 active+clean; 755 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 8.2 MiB/s wr, 462 op/s
Oct 02 12:33:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2238847500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:37.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2175: 305 pgs: 305 active+clean; 755 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 7.8 MiB/s wr, 314 op/s
Oct 02 12:33:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:33:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:37.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:33:38 compute-0 nova_compute[257802]: 2025-10-02 12:33:38.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:39 compute-0 nova_compute[257802]: 2025-10-02 12:33:39.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:39 compute-0 ceph-mon[73607]: pgmap v2175: 305 pgs: 305 active+clean; 755 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 7.8 MiB/s wr, 314 op/s
Oct 02 12:33:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:33:39 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1689216891' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:33:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:33:39 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1689216891' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:33:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:33:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:39.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:33:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2176: 305 pgs: 305 active+clean; 731 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 7.9 MiB/s wr, 346 op/s
Oct 02 12:33:39 compute-0 kernel: tap37f1ce06-c6 (unregistering): left promiscuous mode
Oct 02 12:33:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:39.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:39 compute-0 NetworkManager[44987]: <info>  [1759408419.9390] device (tap37f1ce06-c6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:33:39 compute-0 ovn_controller[148183]: 2025-10-02T12:33:39Z|00564|binding|INFO|Releasing lport 37f1ce06-c620-444d-822e-67c8de421fd6 from this chassis (sb_readonly=0)
Oct 02 12:33:39 compute-0 ovn_controller[148183]: 2025-10-02T12:33:39Z|00565|binding|INFO|Setting lport 37f1ce06-c620-444d-822e-67c8de421fd6 down in Southbound
Oct 02 12:33:39 compute-0 ovn_controller[148183]: 2025-10-02T12:33:39Z|00566|binding|INFO|Removing iface tap37f1ce06-c6 ovn-installed in OVS
Oct 02 12:33:39 compute-0 nova_compute[257802]: 2025-10-02 12:33:39.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:39 compute-0 nova_compute[257802]: 2025-10-02 12:33:39.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:39.981 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:ee:74 10.100.0.12'], port_security=['fa:16:3e:52:ee:74 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '49fade07-5c87-4e89-bbec-cce4fc94a4a2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f3d344f-7e5f-4676-877b-da313e338dc0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '385766b9209941f3ab805e8d5e2af163', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1389a46f-eb3b-49c0-bee4-ea4be4a55967', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6cf2f8e-38d5-4acc-9afc-6fc6835becad, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=37f1ce06-c620-444d-822e-67c8de421fd6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:33:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:39.982 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 37f1ce06-c620-444d-822e-67c8de421fd6 in datapath 9f3d344f-7e5f-4676-877b-da313e338dc0 unbound from our chassis
Oct 02 12:33:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:39.984 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f3d344f-7e5f-4676-877b-da313e338dc0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:33:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:39.985 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f1d66d7e-af66-4c17-b15c-ab500d4843d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:39.985 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0 namespace which is not needed anymore
Oct 02 12:33:39 compute-0 nova_compute[257802]: 2025-10-02 12:33:39.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:40 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Oct 02 12:33:40 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d0000007f.scope: Consumed 15.028s CPU time.
Oct 02 12:33:40 compute-0 systemd-machined[211836]: Machine qemu-64-instance-0000007f terminated.
Oct 02 12:33:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1689216891' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:33:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1689216891' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:33:40 compute-0 neutron-haproxy-ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0[336801]: [NOTICE]   (336805) : haproxy version is 2.8.14-c23fe91
Oct 02 12:33:40 compute-0 neutron-haproxy-ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0[336801]: [NOTICE]   (336805) : path to executable is /usr/sbin/haproxy
Oct 02 12:33:40 compute-0 neutron-haproxy-ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0[336801]: [WARNING]  (336805) : Exiting Master process...
Oct 02 12:33:40 compute-0 neutron-haproxy-ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0[336801]: [WARNING]  (336805) : Exiting Master process...
Oct 02 12:33:40 compute-0 neutron-haproxy-ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0[336801]: [ALERT]    (336805) : Current worker (336807) exited with code 143 (Terminated)
Oct 02 12:33:40 compute-0 neutron-haproxy-ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0[336801]: [WARNING]  (336805) : All workers exited. Exiting... (0)
Oct 02 12:33:40 compute-0 kernel: tap37f1ce06-c6: entered promiscuous mode
Oct 02 12:33:40 compute-0 NetworkManager[44987]: <info>  [1759408420.1643] manager: (tap37f1ce06-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/268)
Oct 02 12:33:40 compute-0 kernel: tap37f1ce06-c6 (unregistering): left promiscuous mode
Oct 02 12:33:40 compute-0 nova_compute[257802]: 2025-10-02 12:33:40.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:40 compute-0 ovn_controller[148183]: 2025-10-02T12:33:40Z|00567|binding|INFO|Claiming lport 37f1ce06-c620-444d-822e-67c8de421fd6 for this chassis.
Oct 02 12:33:40 compute-0 ovn_controller[148183]: 2025-10-02T12:33:40Z|00568|binding|INFO|37f1ce06-c620-444d-822e-67c8de421fd6: Claiming fa:16:3e:52:ee:74 10.100.0.12
Oct 02 12:33:40 compute-0 systemd[1]: libpod-2ac3da1013dc8276e261eb1710237ee889d57f38f1f3183ebe70c577cd5e9891.scope: Deactivated successfully.
Oct 02 12:33:40 compute-0 podman[337582]: 2025-10-02 12:33:40.178408446 +0000 UTC m=+0.108729273 container died 2ac3da1013dc8276e261eb1710237ee889d57f38f1f3183ebe70c577cd5e9891 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:33:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:40.180 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:ee:74 10.100.0.12'], port_security=['fa:16:3e:52:ee:74 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '49fade07-5c87-4e89-bbec-cce4fc94a4a2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f3d344f-7e5f-4676-877b-da313e338dc0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '385766b9209941f3ab805e8d5e2af163', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1389a46f-eb3b-49c0-bee4-ea4be4a55967', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6cf2f8e-38d5-4acc-9afc-6fc6835becad, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=37f1ce06-c620-444d-822e-67c8de421fd6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:33:40 compute-0 ovn_controller[148183]: 2025-10-02T12:33:40Z|00569|binding|INFO|Setting lport 37f1ce06-c620-444d-822e-67c8de421fd6 ovn-installed in OVS
Oct 02 12:33:40 compute-0 ovn_controller[148183]: 2025-10-02T12:33:40Z|00570|binding|INFO|Setting lport 37f1ce06-c620-444d-822e-67c8de421fd6 up in Southbound
Oct 02 12:33:40 compute-0 ovn_controller[148183]: 2025-10-02T12:33:40Z|00571|binding|INFO|Releasing lport 37f1ce06-c620-444d-822e-67c8de421fd6 from this chassis (sb_readonly=1)
Oct 02 12:33:40 compute-0 ovn_controller[148183]: 2025-10-02T12:33:40Z|00572|if_status|INFO|Dropped 2 log messages in last 1197 seconds (most recently, 1197 seconds ago) due to excessive rate
Oct 02 12:33:40 compute-0 ovn_controller[148183]: 2025-10-02T12:33:40Z|00573|if_status|INFO|Not setting lport 37f1ce06-c620-444d-822e-67c8de421fd6 down as sb is readonly
Oct 02 12:33:40 compute-0 ovn_controller[148183]: 2025-10-02T12:33:40Z|00574|binding|INFO|Removing iface tap37f1ce06-c6 ovn-installed in OVS
Oct 02 12:33:40 compute-0 nova_compute[257802]: 2025-10-02 12:33:40.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:40 compute-0 nova_compute[257802]: 2025-10-02 12:33:40.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:40 compute-0 ovn_controller[148183]: 2025-10-02T12:33:40Z|00575|binding|INFO|Releasing lport 37f1ce06-c620-444d-822e-67c8de421fd6 from this chassis (sb_readonly=0)
Oct 02 12:33:40 compute-0 ovn_controller[148183]: 2025-10-02T12:33:40Z|00576|binding|INFO|Setting lport 37f1ce06-c620-444d-822e-67c8de421fd6 down in Southbound
Oct 02 12:33:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:40.308 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:ee:74 10.100.0.12'], port_security=['fa:16:3e:52:ee:74 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '49fade07-5c87-4e89-bbec-cce4fc94a4a2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f3d344f-7e5f-4676-877b-da313e338dc0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '385766b9209941f3ab805e8d5e2af163', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1389a46f-eb3b-49c0-bee4-ea4be4a55967', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6cf2f8e-38d5-4acc-9afc-6fc6835becad, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=37f1ce06-c620-444d-822e-67c8de421fd6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:33:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2ac3da1013dc8276e261eb1710237ee889d57f38f1f3183ebe70c577cd5e9891-userdata-shm.mount: Deactivated successfully.
Oct 02 12:33:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-96494ce09e7c5cf7a0291f5fe5c6385953cbc1f7fd38fe8fe8bcf429a258bead-merged.mount: Deactivated successfully.
Oct 02 12:33:40 compute-0 podman[337582]: 2025-10-02 12:33:40.416202763 +0000 UTC m=+0.346523580 container cleanup 2ac3da1013dc8276e261eb1710237ee889d57f38f1f3183ebe70c577cd5e9891 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:33:40 compute-0 systemd[1]: libpod-conmon-2ac3da1013dc8276e261eb1710237ee889d57f38f1f3183ebe70c577cd5e9891.scope: Deactivated successfully.
Oct 02 12:33:40 compute-0 nova_compute[257802]: 2025-10-02 12:33:40.656 2 INFO nova.virt.libvirt.driver [None req-014b02b2-6f90-4550-a39d-70c9257067e2 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Instance shutdown successfully after 14 seconds.
Oct 02 12:33:40 compute-0 nova_compute[257802]: 2025-10-02 12:33:40.661 2 INFO nova.virt.libvirt.driver [-] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Instance destroyed successfully.
Oct 02 12:33:40 compute-0 nova_compute[257802]: 2025-10-02 12:33:40.661 2 DEBUG nova.objects.instance [None req-014b02b2-6f90-4550-a39d-70c9257067e2 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Lazy-loading 'numa_topology' on Instance uuid 49fade07-5c87-4e89-bbec-cce4fc94a4a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:40 compute-0 nova_compute[257802]: 2025-10-02 12:33:40.673 2 DEBUG nova.compute.manager [None req-014b02b2-6f90-4550-a39d-70c9257067e2 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:33:40 compute-0 podman[337624]: 2025-10-02 12:33:40.714524437 +0000 UTC m=+0.276578100 container remove 2ac3da1013dc8276e261eb1710237ee889d57f38f1f3183ebe70c577cd5e9891 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:33:40 compute-0 nova_compute[257802]: 2025-10-02 12:33:40.722 2 DEBUG oslo_concurrency.lockutils [None req-014b02b2-6f90-4550-a39d-70c9257067e2 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 14.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:40.724 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3c98d9f9-7c6f-4dc2-8b7a-437b0365bd36]: (4, ('Thu Oct  2 12:33:40 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0 (2ac3da1013dc8276e261eb1710237ee889d57f38f1f3183ebe70c577cd5e9891)\n2ac3da1013dc8276e261eb1710237ee889d57f38f1f3183ebe70c577cd5e9891\nThu Oct  2 12:33:40 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0 (2ac3da1013dc8276e261eb1710237ee889d57f38f1f3183ebe70c577cd5e9891)\n2ac3da1013dc8276e261eb1710237ee889d57f38f1f3183ebe70c577cd5e9891\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:40.726 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7b86f5f4-ea4f-430d-b6d8-af0ac208d041]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:40.728 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f3d344f-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:40 compute-0 nova_compute[257802]: 2025-10-02 12:33:40.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:40 compute-0 kernel: tap9f3d344f-70: left promiscuous mode
Oct 02 12:33:40 compute-0 nova_compute[257802]: 2025-10-02 12:33:40.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:40.758 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e1dda214-ce6c-4070-831d-23983d751fb4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:40.779 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[66c3726c-b29d-4696-86a6-02ba52f2128c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:40.780 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ffec42ed-8127-4daa-aca9-e3b51d8294bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:40.799 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ffed35ff-074d-42ff-be19-3e566ef86dbb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646122, 'reachable_time': 23331, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337643, 'error': None, 'target': 'ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:40 compute-0 systemd[1]: run-netns-ovnmeta\x2d9f3d344f\x2d7e5f\x2d4676\x2d877b\x2dda313e338dc0.mount: Deactivated successfully.
Oct 02 12:33:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:40.803 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:33:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:40.803 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[480c3f0d-866b-4cc5-9878-d100d576a95e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:40.804 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 37f1ce06-c620-444d-822e-67c8de421fd6 in datapath 9f3d344f-7e5f-4676-877b-da313e338dc0 unbound from our chassis
Oct 02 12:33:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:40.805 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f3d344f-7e5f-4676-877b-da313e338dc0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:33:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:40.805 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8816e54d-b5a5-4084-a4a2-f9dd3b7034e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:40.806 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 37f1ce06-c620-444d-822e-67c8de421fd6 in datapath 9f3d344f-7e5f-4676-877b-da313e338dc0 unbound from our chassis
Oct 02 12:33:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:40.807 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f3d344f-7e5f-4676-877b-da313e338dc0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:33:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:40.807 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[94dd5cb4-c109-4235-97d6-4a97a3d5e31d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:40 compute-0 nova_compute[257802]: 2025-10-02 12:33:40.886 2 DEBUG nova.compute.manager [req-f410c3e6-55fc-4b5e-8dfd-d66592455918 req-09849605-83a6-4250-b3e7-42ae098f3abe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Received event network-vif-unplugged-37f1ce06-c620-444d-822e-67c8de421fd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:40 compute-0 nova_compute[257802]: 2025-10-02 12:33:40.888 2 DEBUG oslo_concurrency.lockutils [req-f410c3e6-55fc-4b5e-8dfd-d66592455918 req-09849605-83a6-4250-b3e7-42ae098f3abe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:40 compute-0 nova_compute[257802]: 2025-10-02 12:33:40.888 2 DEBUG oslo_concurrency.lockutils [req-f410c3e6-55fc-4b5e-8dfd-d66592455918 req-09849605-83a6-4250-b3e7-42ae098f3abe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:40 compute-0 nova_compute[257802]: 2025-10-02 12:33:40.888 2 DEBUG oslo_concurrency.lockutils [req-f410c3e6-55fc-4b5e-8dfd-d66592455918 req-09849605-83a6-4250-b3e7-42ae098f3abe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:40 compute-0 nova_compute[257802]: 2025-10-02 12:33:40.888 2 DEBUG nova.compute.manager [req-f410c3e6-55fc-4b5e-8dfd-d66592455918 req-09849605-83a6-4250-b3e7-42ae098f3abe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] No waiting events found dispatching network-vif-unplugged-37f1ce06-c620-444d-822e-67c8de421fd6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:33:40 compute-0 nova_compute[257802]: 2025-10-02 12:33:40.888 2 WARNING nova.compute.manager [req-f410c3e6-55fc-4b5e-8dfd-d66592455918 req-09849605-83a6-4250-b3e7-42ae098f3abe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Received unexpected event network-vif-unplugged-37f1ce06-c620-444d-822e-67c8de421fd6 for instance with vm_state stopped and task_state None.
Oct 02 12:33:40 compute-0 nova_compute[257802]: 2025-10-02 12:33:40.888 2 DEBUG nova.compute.manager [req-f410c3e6-55fc-4b5e-8dfd-d66592455918 req-09849605-83a6-4250-b3e7-42ae098f3abe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Received event network-vif-plugged-37f1ce06-c620-444d-822e-67c8de421fd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:40 compute-0 nova_compute[257802]: 2025-10-02 12:33:40.889 2 DEBUG oslo_concurrency.lockutils [req-f410c3e6-55fc-4b5e-8dfd-d66592455918 req-09849605-83a6-4250-b3e7-42ae098f3abe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:40 compute-0 nova_compute[257802]: 2025-10-02 12:33:40.889 2 DEBUG oslo_concurrency.lockutils [req-f410c3e6-55fc-4b5e-8dfd-d66592455918 req-09849605-83a6-4250-b3e7-42ae098f3abe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:40 compute-0 nova_compute[257802]: 2025-10-02 12:33:40.889 2 DEBUG oslo_concurrency.lockutils [req-f410c3e6-55fc-4b5e-8dfd-d66592455918 req-09849605-83a6-4250-b3e7-42ae098f3abe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:40 compute-0 nova_compute[257802]: 2025-10-02 12:33:40.889 2 DEBUG nova.compute.manager [req-f410c3e6-55fc-4b5e-8dfd-d66592455918 req-09849605-83a6-4250-b3e7-42ae098f3abe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] No waiting events found dispatching network-vif-plugged-37f1ce06-c620-444d-822e-67c8de421fd6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:33:40 compute-0 nova_compute[257802]: 2025-10-02 12:33:40.889 2 WARNING nova.compute.manager [req-f410c3e6-55fc-4b5e-8dfd-d66592455918 req-09849605-83a6-4250-b3e7-42ae098f3abe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Received unexpected event network-vif-plugged-37f1ce06-c620-444d-822e-67c8de421fd6 for instance with vm_state stopped and task_state None.
Oct 02 12:33:40 compute-0 nova_compute[257802]: 2025-10-02 12:33:40.889 2 DEBUG nova.compute.manager [req-f410c3e6-55fc-4b5e-8dfd-d66592455918 req-09849605-83a6-4250-b3e7-42ae098f3abe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Received event network-vif-plugged-37f1ce06-c620-444d-822e-67c8de421fd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:40 compute-0 nova_compute[257802]: 2025-10-02 12:33:40.889 2 DEBUG oslo_concurrency.lockutils [req-f410c3e6-55fc-4b5e-8dfd-d66592455918 req-09849605-83a6-4250-b3e7-42ae098f3abe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:40 compute-0 nova_compute[257802]: 2025-10-02 12:33:40.890 2 DEBUG oslo_concurrency.lockutils [req-f410c3e6-55fc-4b5e-8dfd-d66592455918 req-09849605-83a6-4250-b3e7-42ae098f3abe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:40 compute-0 nova_compute[257802]: 2025-10-02 12:33:40.890 2 DEBUG oslo_concurrency.lockutils [req-f410c3e6-55fc-4b5e-8dfd-d66592455918 req-09849605-83a6-4250-b3e7-42ae098f3abe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:40 compute-0 nova_compute[257802]: 2025-10-02 12:33:40.890 2 DEBUG nova.compute.manager [req-f410c3e6-55fc-4b5e-8dfd-d66592455918 req-09849605-83a6-4250-b3e7-42ae098f3abe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] No waiting events found dispatching network-vif-plugged-37f1ce06-c620-444d-822e-67c8de421fd6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:33:40 compute-0 nova_compute[257802]: 2025-10-02 12:33:40.890 2 WARNING nova.compute.manager [req-f410c3e6-55fc-4b5e-8dfd-d66592455918 req-09849605-83a6-4250-b3e7-42ae098f3abe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Received unexpected event network-vif-plugged-37f1ce06-c620-444d-822e-67c8de421fd6 for instance with vm_state stopped and task_state None.
Oct 02 12:33:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:41.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:41 compute-0 ceph-mon[73607]: pgmap v2176: 305 pgs: 305 active+clean; 731 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 7.9 MiB/s wr, 346 op/s
Oct 02 12:33:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2177: 305 pgs: 305 active+clean; 670 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.9 MiB/s wr, 356 op/s
Oct 02 12:33:41 compute-0 sudo[337646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:33:41 compute-0 sudo[337646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:41 compute-0 sudo[337646]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:33:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:41.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:33:41 compute-0 sudo[337671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:33:41 compute-0 sudo[337671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:41 compute-0 sudo[337671]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:42 compute-0 sudo[337696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:33:42 compute-0 sudo[337696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:42 compute-0 sudo[337696]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:42 compute-0 sudo[337721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:33:42 compute-0 sudo[337721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2179889415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:33:42
Oct 02 12:33:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:33:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:33:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['backups', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', '.mgr', 'vms', 'default.rgw.control', 'default.rgw.log', 'volumes']
Oct 02 12:33:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:33:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:33:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:33:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:33:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:33:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:33:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:33:42 compute-0 sudo[337721]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:42 compute-0 sudo[337778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:33:42 compute-0 sudo[337778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:42 compute-0 sudo[337778]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:42 compute-0 sudo[337803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:33:43 compute-0 sudo[337803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:43 compute-0 sudo[337803]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:43 compute-0 nova_compute[257802]: 2025-10-02 12:33:43.049 2 DEBUG nova.compute.manager [req-fd02276b-e0f7-4f13-bdd9-a6c855ca356b req-46f02064-be18-4eda-91f2-71fd0f1af400 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Received event network-vif-plugged-37f1ce06-c620-444d-822e-67c8de421fd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:43 compute-0 nova_compute[257802]: 2025-10-02 12:33:43.049 2 DEBUG oslo_concurrency.lockutils [req-fd02276b-e0f7-4f13-bdd9-a6c855ca356b req-46f02064-be18-4eda-91f2-71fd0f1af400 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:43 compute-0 nova_compute[257802]: 2025-10-02 12:33:43.050 2 DEBUG oslo_concurrency.lockutils [req-fd02276b-e0f7-4f13-bdd9-a6c855ca356b req-46f02064-be18-4eda-91f2-71fd0f1af400 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:43 compute-0 nova_compute[257802]: 2025-10-02 12:33:43.050 2 DEBUG oslo_concurrency.lockutils [req-fd02276b-e0f7-4f13-bdd9-a6c855ca356b req-46f02064-be18-4eda-91f2-71fd0f1af400 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:43 compute-0 nova_compute[257802]: 2025-10-02 12:33:43.050 2 DEBUG nova.compute.manager [req-fd02276b-e0f7-4f13-bdd9-a6c855ca356b req-46f02064-be18-4eda-91f2-71fd0f1af400 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] No waiting events found dispatching network-vif-plugged-37f1ce06-c620-444d-822e-67c8de421fd6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:33:43 compute-0 nova_compute[257802]: 2025-10-02 12:33:43.050 2 WARNING nova.compute.manager [req-fd02276b-e0f7-4f13-bdd9-a6c855ca356b req-46f02064-be18-4eda-91f2-71fd0f1af400 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Received unexpected event network-vif-plugged-37f1ce06-c620-444d-822e-67c8de421fd6 for instance with vm_state stopped and task_state None.
Oct 02 12:33:43 compute-0 nova_compute[257802]: 2025-10-02 12:33:43.050 2 DEBUG nova.compute.manager [req-fd02276b-e0f7-4f13-bdd9-a6c855ca356b req-46f02064-be18-4eda-91f2-71fd0f1af400 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Received event network-vif-unplugged-37f1ce06-c620-444d-822e-67c8de421fd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:43 compute-0 nova_compute[257802]: 2025-10-02 12:33:43.051 2 DEBUG oslo_concurrency.lockutils [req-fd02276b-e0f7-4f13-bdd9-a6c855ca356b req-46f02064-be18-4eda-91f2-71fd0f1af400 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:43 compute-0 nova_compute[257802]: 2025-10-02 12:33:43.051 2 DEBUG oslo_concurrency.lockutils [req-fd02276b-e0f7-4f13-bdd9-a6c855ca356b req-46f02064-be18-4eda-91f2-71fd0f1af400 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:43 compute-0 nova_compute[257802]: 2025-10-02 12:33:43.052 2 DEBUG oslo_concurrency.lockutils [req-fd02276b-e0f7-4f13-bdd9-a6c855ca356b req-46f02064-be18-4eda-91f2-71fd0f1af400 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:43 compute-0 nova_compute[257802]: 2025-10-02 12:33:43.052 2 DEBUG nova.compute.manager [req-fd02276b-e0f7-4f13-bdd9-a6c855ca356b req-46f02064-be18-4eda-91f2-71fd0f1af400 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] No waiting events found dispatching network-vif-unplugged-37f1ce06-c620-444d-822e-67c8de421fd6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:33:43 compute-0 nova_compute[257802]: 2025-10-02 12:33:43.052 2 WARNING nova.compute.manager [req-fd02276b-e0f7-4f13-bdd9-a6c855ca356b req-46f02064-be18-4eda-91f2-71fd0f1af400 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Received unexpected event network-vif-unplugged-37f1ce06-c620-444d-822e-67c8de421fd6 for instance with vm_state stopped and task_state None.
Oct 02 12:33:43 compute-0 nova_compute[257802]: 2025-10-02 12:33:43.053 2 DEBUG nova.compute.manager [req-fd02276b-e0f7-4f13-bdd9-a6c855ca356b req-46f02064-be18-4eda-91f2-71fd0f1af400 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Received event network-vif-plugged-37f1ce06-c620-444d-822e-67c8de421fd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:43 compute-0 nova_compute[257802]: 2025-10-02 12:33:43.053 2 DEBUG oslo_concurrency.lockutils [req-fd02276b-e0f7-4f13-bdd9-a6c855ca356b req-46f02064-be18-4eda-91f2-71fd0f1af400 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:43 compute-0 nova_compute[257802]: 2025-10-02 12:33:43.053 2 DEBUG oslo_concurrency.lockutils [req-fd02276b-e0f7-4f13-bdd9-a6c855ca356b req-46f02064-be18-4eda-91f2-71fd0f1af400 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:43 compute-0 nova_compute[257802]: 2025-10-02 12:33:43.053 2 DEBUG oslo_concurrency.lockutils [req-fd02276b-e0f7-4f13-bdd9-a6c855ca356b req-46f02064-be18-4eda-91f2-71fd0f1af400 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:43 compute-0 nova_compute[257802]: 2025-10-02 12:33:43.053 2 DEBUG nova.compute.manager [req-fd02276b-e0f7-4f13-bdd9-a6c855ca356b req-46f02064-be18-4eda-91f2-71fd0f1af400 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] No waiting events found dispatching network-vif-plugged-37f1ce06-c620-444d-822e-67c8de421fd6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:33:43 compute-0 nova_compute[257802]: 2025-10-02 12:33:43.054 2 WARNING nova.compute.manager [req-fd02276b-e0f7-4f13-bdd9-a6c855ca356b req-46f02064-be18-4eda-91f2-71fd0f1af400 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Received unexpected event network-vif-plugged-37f1ce06-c620-444d-822e-67c8de421fd6 for instance with vm_state stopped and task_state None.
Oct 02 12:33:43 compute-0 sudo[337828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:33:43 compute-0 sudo[337828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:43 compute-0 sudo[337828]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:43 compute-0 nova_compute[257802]: 2025-10-02 12:33:43.083 2 DEBUG nova.objects.instance [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Lazy-loading 'flavor' on Instance uuid 49fade07-5c87-4e89-bbec-cce4fc94a4a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:33:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:33:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 12:33:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:33:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:33:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:33:43 compute-0 nova_compute[257802]: 2025-10-02 12:33:43.107 2 DEBUG oslo_concurrency.lockutils [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Acquiring lock "refresh_cache-49fade07-5c87-4e89-bbec-cce4fc94a4a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:33:43 compute-0 nova_compute[257802]: 2025-10-02 12:33:43.107 2 DEBUG oslo_concurrency.lockutils [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Acquired lock "refresh_cache-49fade07-5c87-4e89-bbec-cce4fc94a4a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:33:43 compute-0 nova_compute[257802]: 2025-10-02 12:33:43.108 2 DEBUG nova.network.neutron [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:33:43 compute-0 nova_compute[257802]: 2025-10-02 12:33:43.108 2 DEBUG nova.objects.instance [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Lazy-loading 'info_cache' on Instance uuid 49fade07-5c87-4e89-bbec-cce4fc94a4a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:43 compute-0 sudo[337853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 02 12:33:43 compute-0 sudo[337853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:43.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:43 compute-0 nova_compute[257802]: 2025-10-02 12:33:43.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:43 compute-0 sudo[337853]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:33:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:33:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:33:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:33:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:33:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:33:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:33:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 12:33:43 compute-0 ceph-mon[73607]: pgmap v2177: 305 pgs: 305 active+clean; 670 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.9 MiB/s wr, 356 op/s
Oct 02 12:33:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:33:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:33:43 compute-0 ovn_controller[148183]: 2025-10-02T12:33:43Z|00577|binding|INFO|Releasing lport 81a5a13b-b81c-444a-8751-b35a35cdf3dc from this chassis (sb_readonly=0)
Oct 02 12:33:43 compute-0 nova_compute[257802]: 2025-10-02 12:33:43.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2178: 305 pgs: 305 active+clean; 590 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 5.3 MiB/s wr, 378 op/s
Oct 02 12:33:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:33:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:33:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:33:43 compute-0 ovn_controller[148183]: 2025-10-02T12:33:43Z|00578|binding|INFO|Releasing lport 81a5a13b-b81c-444a-8751-b35a35cdf3dc from this chassis (sb_readonly=0)
Oct 02 12:33:43 compute-0 nova_compute[257802]: 2025-10-02 12:33:43.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:33:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:33:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:43.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:33:44 compute-0 nova_compute[257802]: 2025-10-02 12:33:44.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:33:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:33:44 compute-0 ceph-mon[73607]: pgmap v2178: 305 pgs: 305 active+clean; 590 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 5.3 MiB/s wr, 378 op/s
Oct 02 12:33:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:33:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:33:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:33:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:33:44 compute-0 ovn_controller[148183]: 2025-10-02T12:33:44Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2f:ff:7e 10.100.0.14
Oct 02 12:33:44 compute-0 ovn_controller[148183]: 2025-10-02T12:33:44Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2f:ff:7e 10.100.0.14
Oct 02 12:33:44 compute-0 nova_compute[257802]: 2025-10-02 12:33:44.782 2 DEBUG nova.network.neutron [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Updating instance_info_cache with network_info: [{"id": "37f1ce06-c620-444d-822e-67c8de421fd6", "address": "fa:16:3e:52:ee:74", "network": {"id": "9f3d344f-7e5f-4676-877b-da313e338dc0", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1391478832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "385766b9209941f3ab805e8d5e2af163", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37f1ce06-c6", "ovs_interfaceid": "37f1ce06-c620-444d-822e-67c8de421fd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:33:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:33:44 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:33:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:33:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:33:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:33:44 compute-0 nova_compute[257802]: 2025-10-02 12:33:44.870 2 DEBUG oslo_concurrency.lockutils [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Releasing lock "refresh_cache-49fade07-5c87-4e89-bbec-cce4fc94a4a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:33:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:33:44 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 29998592-5479-4fb7-afd7-1453bd5d0336 does not exist
Oct 02 12:33:44 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 668abf81-4494-4c68-8082-b8f3e487f262 does not exist
Oct 02 12:33:44 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 9e508cc6-3255-42a3-9ea4-8f3e0b05aea3 does not exist
Oct 02 12:33:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:33:44 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:33:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:44.879 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:33:44 compute-0 nova_compute[257802]: 2025-10-02 12:33:44.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:44.880 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:33:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:33:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:33:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:33:44 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:33:44 compute-0 sudo[337899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:33:44 compute-0 sudo[337899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:44 compute-0 sudo[337899]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:44 compute-0 nova_compute[257802]: 2025-10-02 12:33:44.964 2 INFO nova.virt.libvirt.driver [-] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Instance destroyed successfully.
Oct 02 12:33:44 compute-0 nova_compute[257802]: 2025-10-02 12:33:44.965 2 DEBUG nova.objects.instance [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Lazy-loading 'numa_topology' on Instance uuid 49fade07-5c87-4e89-bbec-cce4fc94a4a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:44 compute-0 sudo[337924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:33:45 compute-0 sudo[337924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:45 compute-0 sudo[337924]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.025 2 DEBUG nova.objects.instance [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Lazy-loading 'resources' on Instance uuid 49fade07-5c87-4e89-bbec-cce4fc94a4a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:45 compute-0 sudo[337949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:33:45 compute-0 sudo[337949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:45 compute-0 sudo[337949]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.082 2 DEBUG nova.virt.libvirt.vif [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:33:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-222361521',display_name='tempest-ListServerFiltersTestJSON-instance-222361521',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-222361521',id=127,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:33:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='385766b9209941f3ab805e8d5e2af163',ramdisk_id='',reservation_id='r-stf3jzcr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-542915701',owner_user_name='tempest-ListServerFiltersTestJSON-542915701-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:33:40Z,user_data=None,user_id='1c2fbed9aaf84b4e864db97bec4c797c',uuid=49fade07-5c87-4e89-bbec-cce4fc94a4a2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "37f1ce06-c620-444d-822e-67c8de421fd6", "address": "fa:16:3e:52:ee:74", "network": {"id": "9f3d344f-7e5f-4676-877b-da313e338dc0", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1391478832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "385766b9209941f3ab805e8d5e2af163", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37f1ce06-c6", "ovs_interfaceid": "37f1ce06-c620-444d-822e-67c8de421fd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.082 2 DEBUG nova.network.os_vif_util [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Converting VIF {"id": "37f1ce06-c620-444d-822e-67c8de421fd6", "address": "fa:16:3e:52:ee:74", "network": {"id": "9f3d344f-7e5f-4676-877b-da313e338dc0", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1391478832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "385766b9209941f3ab805e8d5e2af163", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37f1ce06-c6", "ovs_interfaceid": "37f1ce06-c620-444d-822e-67c8de421fd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.083 2 DEBUG nova.network.os_vif_util [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:ee:74,bridge_name='br-int',has_traffic_filtering=True,id=37f1ce06-c620-444d-822e-67c8de421fd6,network=Network(9f3d344f-7e5f-4676-877b-da313e338dc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37f1ce06-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.083 2 DEBUG os_vif [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:ee:74,bridge_name='br-int',has_traffic_filtering=True,id=37f1ce06-c620-444d-822e-67c8de421fd6,network=Network(9f3d344f-7e5f-4676-877b-da313e338dc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37f1ce06-c6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.085 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37f1ce06-c6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.090 2 INFO os_vif [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:ee:74,bridge_name='br-int',has_traffic_filtering=True,id=37f1ce06-c620-444d-822e-67c8de421fd6,network=Network(9f3d344f-7e5f-4676-877b-da313e338dc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37f1ce06-c6')
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.100 2 DEBUG nova.virt.libvirt.driver [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Start _get_guest_xml network_info=[{"id": "37f1ce06-c620-444d-822e-67c8de421fd6", "address": "fa:16:3e:52:ee:74", "network": {"id": "9f3d344f-7e5f-4676-877b-da313e338dc0", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1391478832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "385766b9209941f3ab805e8d5e2af163", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37f1ce06-c6", "ovs_interfaceid": "37f1ce06-c620-444d-822e-67c8de421fd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.104 2 WARNING nova.virt.libvirt.driver [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.111 2 DEBUG nova.virt.libvirt.host [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.112 2 DEBUG nova.virt.libvirt.host [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.115 2 DEBUG nova.virt.libvirt.host [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.115 2 DEBUG nova.virt.libvirt.host [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.117 2 DEBUG nova.virt.libvirt.driver [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.117 2 DEBUG nova.virt.hardware [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.117 2 DEBUG nova.virt.hardware [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.117 2 DEBUG nova.virt.hardware [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.117 2 DEBUG nova.virt.hardware [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.118 2 DEBUG nova.virt.hardware [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.118 2 DEBUG nova.virt.hardware [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.118 2 DEBUG nova.virt.hardware [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.118 2 DEBUG nova.virt.hardware [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.118 2 DEBUG nova.virt.hardware [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.118 2 DEBUG nova.virt.hardware [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.118 2 DEBUG nova.virt.hardware [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.119 2 DEBUG nova.objects.instance [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 49fade07-5c87-4e89-bbec-cce4fc94a4a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:45 compute-0 sudo[337974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:33:45 compute-0 sudo[337974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.173 2 DEBUG oslo_concurrency.processutils [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:45.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:45 compute-0 podman[338055]: 2025-10-02 12:33:45.494141229 +0000 UTC m=+0.092221198 container create dd9e22e265f06d8158a9bced53470a2a82f20e43ae6ba49891d944d19cfde591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wu, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 12:33:45 compute-0 podman[338055]: 2025-10-02 12:33:45.42419915 +0000 UTC m=+0.022279129 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:33:45 compute-0 systemd[1]: Started libpod-conmon-dd9e22e265f06d8158a9bced53470a2a82f20e43ae6ba49891d944d19cfde591.scope.
Oct 02 12:33:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:33:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:33:45 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2652250723' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.620 2 DEBUG oslo_concurrency.processutils [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:45 compute-0 podman[338055]: 2025-10-02 12:33:45.629383795 +0000 UTC m=+0.227463784 container init dd9e22e265f06d8158a9bced53470a2a82f20e43ae6ba49891d944d19cfde591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wu, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:33:45 compute-0 podman[338055]: 2025-10-02 12:33:45.636265914 +0000 UTC m=+0.234345883 container start dd9e22e265f06d8158a9bced53470a2a82f20e43ae6ba49891d944d19cfde591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:33:45 compute-0 kind_wu[338071]: 167 167
Oct 02 12:33:45 compute-0 systemd[1]: libpod-dd9e22e265f06d8158a9bced53470a2a82f20e43ae6ba49891d944d19cfde591.scope: Deactivated successfully.
Oct 02 12:33:45 compute-0 nova_compute[257802]: 2025-10-02 12:33:45.663 2 DEBUG oslo_concurrency.processutils [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:45 compute-0 podman[338055]: 2025-10-02 12:33:45.696437673 +0000 UTC m=+0.294517662 container attach dd9e22e265f06d8158a9bced53470a2a82f20e43ae6ba49891d944d19cfde591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wu, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:33:45 compute-0 podman[338055]: 2025-10-02 12:33:45.697009917 +0000 UTC m=+0.295089886 container died dd9e22e265f06d8158a9bced53470a2a82f20e43ae6ba49891d944d19cfde591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:33:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2179: 305 pgs: 305 active+clean; 563 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.5 MiB/s wr, 313 op/s
Oct 02 12:33:45 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:33:45 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:33:45 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:33:45 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:33:45 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:33:45 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:33:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2652250723' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:33:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:45.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:33:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-390b522511e10b1b8ae10933c25db620c5d6bbc7cbe9608eb807ff62701f3528-merged.mount: Deactivated successfully.
Oct 02 12:33:46 compute-0 podman[338055]: 2025-10-02 12:33:46.087310643 +0000 UTC m=+0.685390612 container remove dd9e22e265f06d8158a9bced53470a2a82f20e43ae6ba49891d944d19cfde591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 12:33:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:33:46 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/53116457' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:46 compute-0 systemd[1]: libpod-conmon-dd9e22e265f06d8158a9bced53470a2a82f20e43ae6ba49891d944d19cfde591.scope: Deactivated successfully.
Oct 02 12:33:46 compute-0 nova_compute[257802]: 2025-10-02 12:33:46.135 2 DEBUG oslo_concurrency.processutils [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:46 compute-0 nova_compute[257802]: 2025-10-02 12:33:46.137 2 DEBUG nova.virt.libvirt.vif [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:33:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-222361521',display_name='tempest-ListServerFiltersTestJSON-instance-222361521',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-222361521',id=127,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:33:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='385766b9209941f3ab805e8d5e2af163',ramdisk_id='',reservation_id='r-stf3jzcr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-542915701',owner_user_name='tempest-ListServerFiltersTestJSON-542915701-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:33:40Z,user_data=None,user_id='1c2fbed9aaf84b4e864db97bec4c797c',uuid=49fade07-5c87-4e89-bbec-cce4fc94a4a2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "37f1ce06-c620-444d-822e-67c8de421fd6", "address": "fa:16:3e:52:ee:74", "network": {"id": "9f3d344f-7e5f-4676-877b-da313e338dc0", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1391478832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "385766b9209941f3ab805e8d5e2af163", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37f1ce06-c6", "ovs_interfaceid": "37f1ce06-c620-444d-822e-67c8de421fd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:33:46 compute-0 nova_compute[257802]: 2025-10-02 12:33:46.137 2 DEBUG nova.network.os_vif_util [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Converting VIF {"id": "37f1ce06-c620-444d-822e-67c8de421fd6", "address": "fa:16:3e:52:ee:74", "network": {"id": "9f3d344f-7e5f-4676-877b-da313e338dc0", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1391478832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "385766b9209941f3ab805e8d5e2af163", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37f1ce06-c6", "ovs_interfaceid": "37f1ce06-c620-444d-822e-67c8de421fd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:33:46 compute-0 nova_compute[257802]: 2025-10-02 12:33:46.138 2 DEBUG nova.network.os_vif_util [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:ee:74,bridge_name='br-int',has_traffic_filtering=True,id=37f1ce06-c620-444d-822e-67c8de421fd6,network=Network(9f3d344f-7e5f-4676-877b-da313e338dc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37f1ce06-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:33:46 compute-0 nova_compute[257802]: 2025-10-02 12:33:46.139 2 DEBUG nova.objects.instance [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Lazy-loading 'pci_devices' on Instance uuid 49fade07-5c87-4e89-bbec-cce4fc94a4a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:46 compute-0 nova_compute[257802]: 2025-10-02 12:33:46.179 2 DEBUG nova.virt.libvirt.driver [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:33:46 compute-0 nova_compute[257802]:   <uuid>49fade07-5c87-4e89-bbec-cce4fc94a4a2</uuid>
Oct 02 12:33:46 compute-0 nova_compute[257802]:   <name>instance-0000007f</name>
Oct 02 12:33:46 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:33:46 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:33:46 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:33:46 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-222361521</nova:name>
Oct 02 12:33:46 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:33:45</nova:creationTime>
Oct 02 12:33:46 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:33:46 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:33:46 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:33:46 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:33:46 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:33:46 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:33:46 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:33:46 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:33:46 compute-0 nova_compute[257802]:         <nova:user uuid="1c2fbed9aaf84b4e864db97bec4c797c">tempest-ListServerFiltersTestJSON-542915701-project-member</nova:user>
Oct 02 12:33:46 compute-0 nova_compute[257802]:         <nova:project uuid="385766b9209941f3ab805e8d5e2af163">tempest-ListServerFiltersTestJSON-542915701</nova:project>
Oct 02 12:33:46 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:33:46 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:33:46 compute-0 nova_compute[257802]:         <nova:port uuid="37f1ce06-c620-444d-822e-67c8de421fd6">
Oct 02 12:33:46 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:33:46 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:33:46 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:33:46 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <system>
Oct 02 12:33:46 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:33:46 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:33:46 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:33:46 compute-0 nova_compute[257802]:       <entry name="serial">49fade07-5c87-4e89-bbec-cce4fc94a4a2</entry>
Oct 02 12:33:46 compute-0 nova_compute[257802]:       <entry name="uuid">49fade07-5c87-4e89-bbec-cce4fc94a4a2</entry>
Oct 02 12:33:46 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     </system>
Oct 02 12:33:46 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:33:46 compute-0 nova_compute[257802]:   <os>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:   </os>
Oct 02 12:33:46 compute-0 nova_compute[257802]:   <features>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:   </features>
Oct 02 12:33:46 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:33:46 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:33:46 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:33:46 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/49fade07-5c87-4e89-bbec-cce4fc94a4a2_disk">
Oct 02 12:33:46 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:       </source>
Oct 02 12:33:46 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:33:46 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:33:46 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:33:46 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/49fade07-5c87-4e89-bbec-cce4fc94a4a2_disk.config">
Oct 02 12:33:46 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:       </source>
Oct 02 12:33:46 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:33:46 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:33:46 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:33:46 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:52:ee:74"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:       <target dev="tap37f1ce06-c6"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:33:46 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/49fade07-5c87-4e89-bbec-cce4fc94a4a2/console.log" append="off"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <video>
Oct 02 12:33:46 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     </video>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <input type="keyboard" bus="usb"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:33:46 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:33:46 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:33:46 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:33:46 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:33:46 compute-0 nova_compute[257802]: </domain>
Oct 02 12:33:46 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:33:46 compute-0 nova_compute[257802]: 2025-10-02 12:33:46.180 2 DEBUG nova.virt.libvirt.driver [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:33:46 compute-0 nova_compute[257802]: 2025-10-02 12:33:46.181 2 DEBUG nova.virt.libvirt.driver [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:33:46 compute-0 nova_compute[257802]: 2025-10-02 12:33:46.182 2 DEBUG nova.virt.libvirt.vif [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:33:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-222361521',display_name='tempest-ListServerFiltersTestJSON-instance-222361521',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-222361521',id=127,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:33:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='385766b9209941f3ab805e8d5e2af163',ramdisk_id='',reservation_id='r-stf3jzcr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-542915701',owner_user_name='tempest-ListServerFiltersTestJSON-542915701-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:33:40Z,user_data=None,user_id='1c2fbed9aaf84b4e864db97bec4c797c',uuid=49fade07-5c87-4e89-bbec-cce4fc94a4a2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "37f1ce06-c620-444d-822e-67c8de421fd6", "address": "fa:16:3e:52:ee:74", "network": {"id": "9f3d344f-7e5f-4676-877b-da313e338dc0", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1391478832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "385766b9209941f3ab805e8d5e2af163", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37f1ce06-c6", "ovs_interfaceid": "37f1ce06-c620-444d-822e-67c8de421fd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:33:46 compute-0 nova_compute[257802]: 2025-10-02 12:33:46.182 2 DEBUG nova.network.os_vif_util [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Converting VIF {"id": "37f1ce06-c620-444d-822e-67c8de421fd6", "address": "fa:16:3e:52:ee:74", "network": {"id": "9f3d344f-7e5f-4676-877b-da313e338dc0", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1391478832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "385766b9209941f3ab805e8d5e2af163", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37f1ce06-c6", "ovs_interfaceid": "37f1ce06-c620-444d-822e-67c8de421fd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:33:46 compute-0 nova_compute[257802]: 2025-10-02 12:33:46.183 2 DEBUG nova.network.os_vif_util [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:ee:74,bridge_name='br-int',has_traffic_filtering=True,id=37f1ce06-c620-444d-822e-67c8de421fd6,network=Network(9f3d344f-7e5f-4676-877b-da313e338dc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37f1ce06-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:33:46 compute-0 nova_compute[257802]: 2025-10-02 12:33:46.184 2 DEBUG os_vif [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:ee:74,bridge_name='br-int',has_traffic_filtering=True,id=37f1ce06-c620-444d-822e-67c8de421fd6,network=Network(9f3d344f-7e5f-4676-877b-da313e338dc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37f1ce06-c6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:33:46 compute-0 nova_compute[257802]: 2025-10-02 12:33:46.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:46 compute-0 nova_compute[257802]: 2025-10-02 12:33:46.185 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:46 compute-0 nova_compute[257802]: 2025-10-02 12:33:46.186 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:33:46 compute-0 nova_compute[257802]: 2025-10-02 12:33:46.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:46 compute-0 nova_compute[257802]: 2025-10-02 12:33:46.190 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap37f1ce06-c6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:46 compute-0 nova_compute[257802]: 2025-10-02 12:33:46.190 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap37f1ce06-c6, col_values=(('external_ids', {'iface-id': '37f1ce06-c620-444d-822e-67c8de421fd6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:52:ee:74', 'vm-uuid': '49fade07-5c87-4e89-bbec-cce4fc94a4a2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:46 compute-0 nova_compute[257802]: 2025-10-02 12:33:46.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:46 compute-0 NetworkManager[44987]: <info>  [1759408426.1940] manager: (tap37f1ce06-c6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/269)
Oct 02 12:33:46 compute-0 nova_compute[257802]: 2025-10-02 12:33:46.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:33:46 compute-0 nova_compute[257802]: 2025-10-02 12:33:46.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:46 compute-0 nova_compute[257802]: 2025-10-02 12:33:46.203 2 INFO os_vif [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:ee:74,bridge_name='br-int',has_traffic_filtering=True,id=37f1ce06-c620-444d-822e-67c8de421fd6,network=Network(9f3d344f-7e5f-4676-877b-da313e338dc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37f1ce06-c6')
Oct 02 12:33:46 compute-0 podman[338140]: 2025-10-02 12:33:46.284721117 +0000 UTC m=+0.055319122 container create 9a43121cfee5d3bbdcd765eed27900bdf5ad262c55bc470a339bdd69c6022f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hawking, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:33:46 compute-0 kernel: tap37f1ce06-c6: entered promiscuous mode
Oct 02 12:33:46 compute-0 NetworkManager[44987]: <info>  [1759408426.2966] manager: (tap37f1ce06-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/270)
Oct 02 12:33:46 compute-0 nova_compute[257802]: 2025-10-02 12:33:46.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:46 compute-0 ovn_controller[148183]: 2025-10-02T12:33:46Z|00579|binding|INFO|Claiming lport 37f1ce06-c620-444d-822e-67c8de421fd6 for this chassis.
Oct 02 12:33:46 compute-0 ovn_controller[148183]: 2025-10-02T12:33:46Z|00580|binding|INFO|37f1ce06-c620-444d-822e-67c8de421fd6: Claiming fa:16:3e:52:ee:74 10.100.0.12
Oct 02 12:33:46 compute-0 systemd-udevd[338168]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:33:46 compute-0 systemd[1]: Started libpod-conmon-9a43121cfee5d3bbdcd765eed27900bdf5ad262c55bc470a339bdd69c6022f87.scope.
Oct 02 12:33:46 compute-0 NetworkManager[44987]: <info>  [1759408426.3450] device (tap37f1ce06-c6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:33:46 compute-0 NetworkManager[44987]: <info>  [1759408426.3463] device (tap37f1ce06-c6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:33:46 compute-0 podman[338140]: 2025-10-02 12:33:46.258923522 +0000 UTC m=+0.029521597 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:46.354 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:ee:74 10.100.0.12'], port_security=['fa:16:3e:52:ee:74 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '49fade07-5c87-4e89-bbec-cce4fc94a4a2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f3d344f-7e5f-4676-877b-da313e338dc0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '385766b9209941f3ab805e8d5e2af163', 'neutron:revision_number': '7', 'neutron:security_group_ids': '1389a46f-eb3b-49c0-bee4-ea4be4a55967', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6cf2f8e-38d5-4acc-9afc-6fc6835becad, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=37f1ce06-c620-444d-822e-67c8de421fd6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:46.355 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 37f1ce06-c620-444d-822e-67c8de421fd6 in datapath 9f3d344f-7e5f-4676-877b-da313e338dc0 bound to our chassis
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:46.356 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f3d344f-7e5f-4676-877b-da313e338dc0
Oct 02 12:33:46 compute-0 nova_compute[257802]: 2025-10-02 12:33:46.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:46 compute-0 systemd-machined[211836]: New machine qemu-66-instance-0000007f.
Oct 02 12:33:46 compute-0 ovn_controller[148183]: 2025-10-02T12:33:46Z|00581|binding|INFO|Setting lport 37f1ce06-c620-444d-822e-67c8de421fd6 ovn-installed in OVS
Oct 02 12:33:46 compute-0 ovn_controller[148183]: 2025-10-02T12:33:46Z|00582|binding|INFO|Setting lport 37f1ce06-c620-444d-822e-67c8de421fd6 up in Southbound
Oct 02 12:33:46 compute-0 nova_compute[257802]: 2025-10-02 12:33:46.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:46.368 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b50c7cd7-e88f-4836-898f-64e13e28b726]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:46.368 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9f3d344f-71 in ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:46.370 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9f3d344f-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:46.370 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[76b54bf5-797b-4eb4-a9f0-c2b626639638]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:46.371 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b66b0ff8-2626-4bff-ab7b-a4d92e88c5be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:33:46 compute-0 systemd[1]: Started Virtual Machine qemu-66-instance-0000007f.
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:46.385 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[16bfc7c4-07ea-4f96-841f-0341f610c873]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b2a03598119b4b4270ff1ab479353ca0c510f92d6521da1c7c9cd4c787a88f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:33:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b2a03598119b4b4270ff1ab479353ca0c510f92d6521da1c7c9cd4c787a88f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:33:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b2a03598119b4b4270ff1ab479353ca0c510f92d6521da1c7c9cd4c787a88f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:33:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b2a03598119b4b4270ff1ab479353ca0c510f92d6521da1c7c9cd4c787a88f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:33:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b2a03598119b4b4270ff1ab479353ca0c510f92d6521da1c7c9cd4c787a88f1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:33:46 compute-0 podman[338140]: 2025-10-02 12:33:46.411548804 +0000 UTC m=+0.182146839 container init 9a43121cfee5d3bbdcd765eed27900bdf5ad262c55bc470a339bdd69c6022f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hawking, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:46.412 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3322fac7-ab47-4f96-9421-80590e223da4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:46 compute-0 podman[338140]: 2025-10-02 12:33:46.420783982 +0000 UTC m=+0.191381977 container start 9a43121cfee5d3bbdcd765eed27900bdf5ad262c55bc470a339bdd69c6022f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hawking, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:33:46 compute-0 podman[338140]: 2025-10-02 12:33:46.431546507 +0000 UTC m=+0.202144512 container attach 9a43121cfee5d3bbdcd765eed27900bdf5ad262c55bc470a339bdd69c6022f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hawking, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:46.454 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[817db859-8461-4cc1-b1dd-de96a5933576]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:46 compute-0 systemd-udevd[338171]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:33:46 compute-0 NetworkManager[44987]: <info>  [1759408426.4613] manager: (tap9f3d344f-70): new Veth device (/org/freedesktop/NetworkManager/Devices/271)
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:46.460 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8f0be7ec-0829-45bd-8caa-8f9b78c1496f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:46.498 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[043c12c4-9dcf-4014-81f6-3c4cf23843d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:46.502 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[e37e9f93-4c53-4321-b481-ecf3e9f2b8f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:46 compute-0 NetworkManager[44987]: <info>  [1759408426.5258] device (tap9f3d344f-70): carrier: link connected
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:46.532 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[cede78f7-5b5a-40e5-8f80-0e5eff6d9dd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:46.550 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[deb38555-dfbf-4822-83bc-d31fffebdecd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f3d344f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:00:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 179], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649415, 'reachable_time': 26936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338207, 'error': None, 'target': 'ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:46.566 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3b03446d-d17c-44bf-93b1-3f7be7e65669]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feea:c3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 649415, 'tstamp': 649415}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338208, 'error': None, 'target': 'ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:46.580 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a9c12b29-68d4-43ea-a7d1-eb402e111c73]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f3d344f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:00:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 179], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649415, 'reachable_time': 26936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 338210, 'error': None, 'target': 'ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:46.609 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[76e523ab-0fa9-43a3-87de-ccef14ff2229]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:46.660 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2d3678b4-5c7d-4f75-a155-6b8c2b968670]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:46.662 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f3d344f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:46.662 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:46.663 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f3d344f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:46 compute-0 nova_compute[257802]: 2025-10-02 12:33:46.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:46 compute-0 kernel: tap9f3d344f-70: entered promiscuous mode
Oct 02 12:33:46 compute-0 NetworkManager[44987]: <info>  [1759408426.6658] manager: (tap9f3d344f-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/272)
Oct 02 12:33:46 compute-0 nova_compute[257802]: 2025-10-02 12:33:46.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:46.667 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f3d344f-70, col_values=(('external_ids', {'iface-id': '93989b20-c703-4abe-88be-5f6a3f1c5cdc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:46 compute-0 nova_compute[257802]: 2025-10-02 12:33:46.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:46 compute-0 ovn_controller[148183]: 2025-10-02T12:33:46Z|00583|binding|INFO|Releasing lport 93989b20-c703-4abe-88be-5f6a3f1c5cdc from this chassis (sb_readonly=0)
Oct 02 12:33:46 compute-0 nova_compute[257802]: 2025-10-02 12:33:46.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:46.685 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9f3d344f-7e5f-4676-877b-da313e338dc0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9f3d344f-7e5f-4676-877b-da313e338dc0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:46.686 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[229cf118-9df1-4697-9c98-74f590443375]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:46.687 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-9f3d344f-7e5f-4676-877b-da313e338dc0
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/9f3d344f-7e5f-4676-877b-da313e338dc0.pid.haproxy
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 9f3d344f-7e5f-4676-877b-da313e338dc0
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:33:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:46.688 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0', 'env', 'PROCESS_TAG=haproxy-9f3d344f-7e5f-4676-877b-da313e338dc0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9f3d344f-7e5f-4676-877b-da313e338dc0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:33:46 compute-0 sudo[338220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:33:46 compute-0 sudo[338220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:46 compute-0 sudo[338220]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:46 compute-0 sudo[338252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:33:46 compute-0 sudo[338252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:46 compute-0 sudo[338252]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:47 compute-0 ceph-mon[73607]: pgmap v2179: 305 pgs: 305 active+clean; 563 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.5 MiB/s wr, 313 op/s
Oct 02 12:33:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/53116457' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:47 compute-0 podman[338334]: 2025-10-02 12:33:47.059247509 +0000 UTC m=+0.021922391 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:33:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:47.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:47 compute-0 podman[338334]: 2025-10-02 12:33:47.199416595 +0000 UTC m=+0.162091457 container create 14c2e994570f8564ddb59920cbd04135cdf48f28142d136a1d588e4a6e533d75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:33:47 compute-0 systemd[1]: Started libpod-conmon-14c2e994570f8564ddb59920cbd04135cdf48f28142d136a1d588e4a6e533d75.scope.
Oct 02 12:33:47 compute-0 amazing_hawking[338170]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:33:47 compute-0 amazing_hawking[338170]: --> relative data size: 1.0
Oct 02 12:33:47 compute-0 amazing_hawking[338170]: --> All data devices are unavailable
Oct 02 12:33:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:33:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/856072de0ddef5f474532536696e5c4757671171d8f7898b43dce04ac5f22535/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:33:47 compute-0 systemd[1]: libpod-9a43121cfee5d3bbdcd765eed27900bdf5ad262c55bc470a339bdd69c6022f87.scope: Deactivated successfully.
Oct 02 12:33:47 compute-0 podman[338334]: 2025-10-02 12:33:47.380297932 +0000 UTC m=+0.342972804 container init 14c2e994570f8564ddb59920cbd04135cdf48f28142d136a1d588e4a6e533d75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 12:33:47 compute-0 podman[338353]: 2025-10-02 12:33:47.386480044 +0000 UTC m=+0.144402262 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, 
config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:33:47 compute-0 podman[338334]: 2025-10-02 12:33:47.387406476 +0000 UTC m=+0.350081328 container start 14c2e994570f8564ddb59920cbd04135cdf48f28142d136a1d588e4a6e533d75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 02 12:33:47 compute-0 neutron-haproxy-ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0[338388]: [NOTICE]   (338428) : New worker (338430) forked
Oct 02 12:33:47 compute-0 neutron-haproxy-ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0[338388]: [NOTICE]   (338428) : Loading success.
Oct 02 12:33:47 compute-0 podman[338354]: 2025-10-02 12:33:47.433668604 +0000 UTC m=+0.190935135 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:33:47 compute-0 podman[338355]: 2025-10-02 12:33:47.434240378 +0000 UTC m=+0.190660919 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 02 12:33:47 compute-0 podman[338140]: 2025-10-02 12:33:47.446618322 +0000 UTC m=+1.217216327 container died 9a43121cfee5d3bbdcd765eed27900bdf5ad262c55bc470a339bdd69c6022f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 12:33:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b2a03598119b4b4270ff1ab479353ca0c510f92d6521da1c7c9cd4c787a88f1-merged.mount: Deactivated successfully.
Oct 02 12:33:47 compute-0 nova_compute[257802]: 2025-10-02 12:33:47.537 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Removed pending event for 49fade07-5c87-4e89-bbec-cce4fc94a4a2 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:33:47 compute-0 nova_compute[257802]: 2025-10-02 12:33:47.537 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408427.535274, 49fade07-5c87-4e89-bbec-cce4fc94a4a2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:33:47 compute-0 nova_compute[257802]: 2025-10-02 12:33:47.537 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] VM Resumed (Lifecycle Event)
Oct 02 12:33:47 compute-0 nova_compute[257802]: 2025-10-02 12:33:47.539 2 DEBUG nova.compute.manager [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:33:47 compute-0 nova_compute[257802]: 2025-10-02 12:33:47.542 2 INFO nova.virt.libvirt.driver [-] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Instance rebooted successfully.
Oct 02 12:33:47 compute-0 nova_compute[257802]: 2025-10-02 12:33:47.542 2 DEBUG nova.compute.manager [None req-e3a08dcc-a382-41ae-a23b-1524da9f207d 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:33:47 compute-0 podman[338416]: 2025-10-02 12:33:47.549224255 +0000 UTC m=+0.188980657 container remove 9a43121cfee5d3bbdcd765eed27900bdf5ad262c55bc470a339bdd69c6022f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:33:47 compute-0 systemd[1]: libpod-conmon-9a43121cfee5d3bbdcd765eed27900bdf5ad262c55bc470a339bdd69c6022f87.scope: Deactivated successfully.
Oct 02 12:33:47 compute-0 sudo[337974]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:47 compute-0 sudo[338441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:33:47 compute-0 sudo[338441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:47 compute-0 sudo[338441]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:47 compute-0 nova_compute[257802]: 2025-10-02 12:33:47.685 2 DEBUG nova.compute.manager [req-2b47436c-0996-4da2-a487-807bd3753a82 req-64e0059d-0cbb-4d72-8ab9-e0f67c664c55 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Received event network-vif-plugged-37f1ce06-c620-444d-822e-67c8de421fd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:47 compute-0 nova_compute[257802]: 2025-10-02 12:33:47.686 2 DEBUG oslo_concurrency.lockutils [req-2b47436c-0996-4da2-a487-807bd3753a82 req-64e0059d-0cbb-4d72-8ab9-e0f67c664c55 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:47 compute-0 nova_compute[257802]: 2025-10-02 12:33:47.687 2 DEBUG oslo_concurrency.lockutils [req-2b47436c-0996-4da2-a487-807bd3753a82 req-64e0059d-0cbb-4d72-8ab9-e0f67c664c55 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:47 compute-0 nova_compute[257802]: 2025-10-02 12:33:47.687 2 DEBUG oslo_concurrency.lockutils [req-2b47436c-0996-4da2-a487-807bd3753a82 req-64e0059d-0cbb-4d72-8ab9-e0f67c664c55 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:47 compute-0 nova_compute[257802]: 2025-10-02 12:33:47.688 2 DEBUG nova.compute.manager [req-2b47436c-0996-4da2-a487-807bd3753a82 req-64e0059d-0cbb-4d72-8ab9-e0f67c664c55 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] No waiting events found dispatching network-vif-plugged-37f1ce06-c620-444d-822e-67c8de421fd6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:33:47 compute-0 nova_compute[257802]: 2025-10-02 12:33:47.688 2 WARNING nova.compute.manager [req-2b47436c-0996-4da2-a487-807bd3753a82 req-64e0059d-0cbb-4d72-8ab9-e0f67c664c55 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Received unexpected event network-vif-plugged-37f1ce06-c620-444d-822e-67c8de421fd6 for instance with vm_state stopped and task_state powering-on.
Oct 02 12:33:47 compute-0 sudo[338466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:33:47 compute-0 sudo[338466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:47 compute-0 sudo[338466]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2180: 305 pgs: 305 active+clean; 563 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 622 KiB/s rd, 168 KiB/s wr, 148 op/s
Oct 02 12:33:47 compute-0 sudo[338491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:33:47 compute-0 sudo[338491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:47 compute-0 sudo[338491]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:47 compute-0 nova_compute[257802]: 2025-10-02 12:33:47.757 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:33:47 compute-0 nova_compute[257802]: 2025-10-02 12:33:47.762 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:33:47 compute-0 sudo[338516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:33:47 compute-0 sudo[338516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:47 compute-0 nova_compute[257802]: 2025-10-02 12:33:47.924 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] During sync_power_state the instance has a pending task (powering-on). Skip.
Oct 02 12:33:47 compute-0 nova_compute[257802]: 2025-10-02 12:33:47.925 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408427.5368066, 49fade07-5c87-4e89-bbec-cce4fc94a4a2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:33:47 compute-0 nova_compute[257802]: 2025-10-02 12:33:47.925 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] VM Started (Lifecycle Event)
Oct 02 12:33:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:47.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:48 compute-0 nova_compute[257802]: 2025-10-02 12:33:48.089 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:33:48 compute-0 nova_compute[257802]: 2025-10-02 12:33:48.092 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:33:48 compute-0 podman[338582]: 2025-10-02 12:33:48.12291226 +0000 UTC m=+0.021828048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:33:48 compute-0 podman[338582]: 2025-10-02 12:33:48.227734636 +0000 UTC m=+0.126650384 container create 5152fd4aa71af4033a116b925587e8d5f96b6116ec9fdb53752d9b57083af72e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:33:48 compute-0 systemd[1]: Started libpod-conmon-5152fd4aa71af4033a116b925587e8d5f96b6116ec9fdb53752d9b57083af72e.scope.
Oct 02 12:33:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:33:48 compute-0 nova_compute[257802]: 2025-10-02 12:33:48.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:48 compute-0 podman[338582]: 2025-10-02 12:33:48.364432877 +0000 UTC m=+0.263348645 container init 5152fd4aa71af4033a116b925587e8d5f96b6116ec9fdb53752d9b57083af72e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 12:33:48 compute-0 podman[338582]: 2025-10-02 12:33:48.373741796 +0000 UTC m=+0.272657554 container start 5152fd4aa71af4033a116b925587e8d5f96b6116ec9fdb53752d9b57083af72e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_yalow, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:33:48 compute-0 ecstatic_yalow[338598]: 167 167
Oct 02 12:33:48 compute-0 systemd[1]: libpod-5152fd4aa71af4033a116b925587e8d5f96b6116ec9fdb53752d9b57083af72e.scope: Deactivated successfully.
Oct 02 12:33:48 compute-0 podman[338582]: 2025-10-02 12:33:48.383919296 +0000 UTC m=+0.282835054 container attach 5152fd4aa71af4033a116b925587e8d5f96b6116ec9fdb53752d9b57083af72e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_yalow, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:33:48 compute-0 podman[338582]: 2025-10-02 12:33:48.38447734 +0000 UTC m=+0.283393098 container died 5152fd4aa71af4033a116b925587e8d5f96b6116ec9fdb53752d9b57083af72e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_yalow, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 12:33:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-2072f7b0cdc3298ad6433fcb7c9aadb1f7900eb4880842e5eb0b6b27f82bad90-merged.mount: Deactivated successfully.
Oct 02 12:33:48 compute-0 podman[338582]: 2025-10-02 12:33:48.473924709 +0000 UTC m=+0.372840457 container remove 5152fd4aa71af4033a116b925587e8d5f96b6116ec9fdb53752d9b57083af72e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_yalow, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 12:33:48 compute-0 systemd[1]: libpod-conmon-5152fd4aa71af4033a116b925587e8d5f96b6116ec9fdb53752d9b57083af72e.scope: Deactivated successfully.
Oct 02 12:33:48 compute-0 podman[338622]: 2025-10-02 12:33:48.689749935 +0000 UTC m=+0.051936218 container create fe0607735b3e14d6dfc4de4dda34c4b9eaa14ee43ffa6dca1d76a1469f2e60f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_borg, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:33:48 compute-0 systemd[1]: Started libpod-conmon-fe0607735b3e14d6dfc4de4dda34c4b9eaa14ee43ffa6dca1d76a1469f2e60f2.scope.
Oct 02 12:33:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:33:48 compute-0 podman[338622]: 2025-10-02 12:33:48.667471267 +0000 UTC m=+0.029657570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:33:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27cc09353d11ca0c63f4413df5a23db0d3b250af0562b4aa5e8d5d10c23d45f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:33:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27cc09353d11ca0c63f4413df5a23db0d3b250af0562b4aa5e8d5d10c23d45f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:33:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27cc09353d11ca0c63f4413df5a23db0d3b250af0562b4aa5e8d5d10c23d45f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:33:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27cc09353d11ca0c63f4413df5a23db0d3b250af0562b4aa5e8d5d10c23d45f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:33:48 compute-0 podman[338622]: 2025-10-02 12:33:48.783698905 +0000 UTC m=+0.145885218 container init fe0607735b3e14d6dfc4de4dda34c4b9eaa14ee43ffa6dca1d76a1469f2e60f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:33:48 compute-0 podman[338622]: 2025-10-02 12:33:48.791096197 +0000 UTC m=+0.153282480 container start fe0607735b3e14d6dfc4de4dda34c4b9eaa14ee43ffa6dca1d76a1469f2e60f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:33:48 compute-0 podman[338622]: 2025-10-02 12:33:48.797141086 +0000 UTC m=+0.159327369 container attach fe0607735b3e14d6dfc4de4dda34c4b9eaa14ee43ffa6dca1d76a1469f2e60f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_borg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:33:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:33:48.882 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:49.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:49 compute-0 ceph-mon[73607]: pgmap v2180: 305 pgs: 305 active+clean; 563 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 622 KiB/s rd, 168 KiB/s wr, 148 op/s
Oct 02 12:33:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2887617718' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:49 compute-0 great_borg[338638]: {
Oct 02 12:33:49 compute-0 great_borg[338638]:     "1": [
Oct 02 12:33:49 compute-0 great_borg[338638]:         {
Oct 02 12:33:49 compute-0 great_borg[338638]:             "devices": [
Oct 02 12:33:49 compute-0 great_borg[338638]:                 "/dev/loop3"
Oct 02 12:33:49 compute-0 great_borg[338638]:             ],
Oct 02 12:33:49 compute-0 great_borg[338638]:             "lv_name": "ceph_lv0",
Oct 02 12:33:49 compute-0 great_borg[338638]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:33:49 compute-0 great_borg[338638]:             "lv_size": "7511998464",
Oct 02 12:33:49 compute-0 great_borg[338638]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:33:49 compute-0 great_borg[338638]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:33:49 compute-0 great_borg[338638]:             "name": "ceph_lv0",
Oct 02 12:33:49 compute-0 great_borg[338638]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:33:49 compute-0 great_borg[338638]:             "tags": {
Oct 02 12:33:49 compute-0 great_borg[338638]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:33:49 compute-0 great_borg[338638]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:33:49 compute-0 great_borg[338638]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:33:49 compute-0 great_borg[338638]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:33:49 compute-0 great_borg[338638]:                 "ceph.cluster_name": "ceph",
Oct 02 12:33:49 compute-0 great_borg[338638]:                 "ceph.crush_device_class": "",
Oct 02 12:33:49 compute-0 great_borg[338638]:                 "ceph.encrypted": "0",
Oct 02 12:33:49 compute-0 great_borg[338638]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:33:49 compute-0 great_borg[338638]:                 "ceph.osd_id": "1",
Oct 02 12:33:49 compute-0 great_borg[338638]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:33:49 compute-0 great_borg[338638]:                 "ceph.type": "block",
Oct 02 12:33:49 compute-0 great_borg[338638]:                 "ceph.vdo": "0"
Oct 02 12:33:49 compute-0 great_borg[338638]:             },
Oct 02 12:33:49 compute-0 great_borg[338638]:             "type": "block",
Oct 02 12:33:49 compute-0 great_borg[338638]:             "vg_name": "ceph_vg0"
Oct 02 12:33:49 compute-0 great_borg[338638]:         }
Oct 02 12:33:49 compute-0 great_borg[338638]:     ]
Oct 02 12:33:49 compute-0 great_borg[338638]: }
Oct 02 12:33:49 compute-0 systemd[1]: libpod-fe0607735b3e14d6dfc4de4dda34c4b9eaa14ee43ffa6dca1d76a1469f2e60f2.scope: Deactivated successfully.
Oct 02 12:33:49 compute-0 podman[338622]: 2025-10-02 12:33:49.54367785 +0000 UTC m=+0.905864133 container died fe0607735b3e14d6dfc4de4dda34c4b9eaa14ee43ffa6dca1d76a1469f2e60f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:33:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-27cc09353d11ca0c63f4413df5a23db0d3b250af0562b4aa5e8d5d10c23d45f8-merged.mount: Deactivated successfully.
Oct 02 12:33:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2181: 305 pgs: 305 active+clean; 585 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 982 KiB/s wr, 201 op/s
Oct 02 12:33:49 compute-0 podman[338622]: 2025-10-02 12:33:49.828976194 +0000 UTC m=+1.191162517 container remove fe0607735b3e14d6dfc4de4dda34c4b9eaa14ee43ffa6dca1d76a1469f2e60f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_borg, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:33:49 compute-0 systemd[1]: libpod-conmon-fe0607735b3e14d6dfc4de4dda34c4b9eaa14ee43ffa6dca1d76a1469f2e60f2.scope: Deactivated successfully.
Oct 02 12:33:49 compute-0 sudo[338516]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:49 compute-0 sudo[338661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:33:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:49.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:49 compute-0 sudo[338661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:49 compute-0 sudo[338661]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:49 compute-0 auditd[705]: Audit daemon rotating log files
Oct 02 12:33:50 compute-0 sudo[338686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:33:50 compute-0 sudo[338686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:50 compute-0 sudo[338686]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:50 compute-0 sudo[338711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:33:50 compute-0 sudo[338711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:50 compute-0 sudo[338711]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:50 compute-0 sudo[338736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:33:50 compute-0 sudo[338736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:50 compute-0 nova_compute[257802]: 2025-10-02 12:33:50.217 2 DEBUG nova.compute.manager [req-7e962f83-08d8-4b7d-922c-4531a81b0569 req-7b6b222f-c44b-4dfe-add6-db8f55e0fb14 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Received event network-vif-plugged-37f1ce06-c620-444d-822e-67c8de421fd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:50 compute-0 nova_compute[257802]: 2025-10-02 12:33:50.218 2 DEBUG oslo_concurrency.lockutils [req-7e962f83-08d8-4b7d-922c-4531a81b0569 req-7b6b222f-c44b-4dfe-add6-db8f55e0fb14 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:50 compute-0 nova_compute[257802]: 2025-10-02 12:33:50.218 2 DEBUG oslo_concurrency.lockutils [req-7e962f83-08d8-4b7d-922c-4531a81b0569 req-7b6b222f-c44b-4dfe-add6-db8f55e0fb14 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:50 compute-0 nova_compute[257802]: 2025-10-02 12:33:50.218 2 DEBUG oslo_concurrency.lockutils [req-7e962f83-08d8-4b7d-922c-4531a81b0569 req-7b6b222f-c44b-4dfe-add6-db8f55e0fb14 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:50 compute-0 nova_compute[257802]: 2025-10-02 12:33:50.218 2 DEBUG nova.compute.manager [req-7e962f83-08d8-4b7d-922c-4531a81b0569 req-7b6b222f-c44b-4dfe-add6-db8f55e0fb14 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] No waiting events found dispatching network-vif-plugged-37f1ce06-c620-444d-822e-67c8de421fd6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:33:50 compute-0 nova_compute[257802]: 2025-10-02 12:33:50.219 2 WARNING nova.compute.manager [req-7e962f83-08d8-4b7d-922c-4531a81b0569 req-7b6b222f-c44b-4dfe-add6-db8f55e0fb14 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Received unexpected event network-vif-plugged-37f1ce06-c620-444d-822e-67c8de421fd6 for instance with vm_state active and task_state None.
Oct 02 12:33:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1028812945' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/129542748' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:50 compute-0 podman[338802]: 2025-10-02 12:33:50.432326148 +0000 UTC m=+0.049568159 container create 16d562c62948cdcf5ca56c1b6059d5449293299fbc22aa4b4fa0713cbc232ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_euler, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:33:50 compute-0 systemd[1]: Started libpod-conmon-16d562c62948cdcf5ca56c1b6059d5449293299fbc22aa4b4fa0713cbc232ed3.scope.
Oct 02 12:33:50 compute-0 podman[338802]: 2025-10-02 12:33:50.401627003 +0000 UTC m=+0.018869034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:33:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:33:50 compute-0 podman[338802]: 2025-10-02 12:33:50.530467961 +0000 UTC m=+0.147710002 container init 16d562c62948cdcf5ca56c1b6059d5449293299fbc22aa4b4fa0713cbc232ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_euler, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:33:50 compute-0 podman[338802]: 2025-10-02 12:33:50.538603601 +0000 UTC m=+0.155845622 container start 16d562c62948cdcf5ca56c1b6059d5449293299fbc22aa4b4fa0713cbc232ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_euler, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:33:50 compute-0 loving_euler[338819]: 167 167
Oct 02 12:33:50 compute-0 systemd[1]: libpod-16d562c62948cdcf5ca56c1b6059d5449293299fbc22aa4b4fa0713cbc232ed3.scope: Deactivated successfully.
Oct 02 12:33:50 compute-0 conmon[338819]: conmon 16d562c62948cdcf5ca5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-16d562c62948cdcf5ca56c1b6059d5449293299fbc22aa4b4fa0713cbc232ed3.scope/container/memory.events
Oct 02 12:33:50 compute-0 podman[338802]: 2025-10-02 12:33:50.560042838 +0000 UTC m=+0.177284879 container attach 16d562c62948cdcf5ca56c1b6059d5449293299fbc22aa4b4fa0713cbc232ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 12:33:50 compute-0 podman[338802]: 2025-10-02 12:33:50.560908149 +0000 UTC m=+0.178150160 container died 16d562c62948cdcf5ca56c1b6059d5449293299fbc22aa4b4fa0713cbc232ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_euler, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 12:33:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-b66f2a8b3a20f31261c7cb885904f87a539a4acbcbef4ec3d5c838455e7e55dd-merged.mount: Deactivated successfully.
Oct 02 12:33:50 compute-0 podman[338802]: 2025-10-02 12:33:50.670667297 +0000 UTC m=+0.287909308 container remove 16d562c62948cdcf5ca56c1b6059d5449293299fbc22aa4b4fa0713cbc232ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_euler, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:33:50 compute-0 systemd[1]: libpod-conmon-16d562c62948cdcf5ca56c1b6059d5449293299fbc22aa4b4fa0713cbc232ed3.scope: Deactivated successfully.
Oct 02 12:33:50 compute-0 podman[338845]: 2025-10-02 12:33:50.852942009 +0000 UTC m=+0.026455292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:33:51 compute-0 podman[338845]: 2025-10-02 12:33:51.14546442 +0000 UTC m=+0.318977683 container create 21ff47ed27a36d97cb9f2a52d2427562435df1d63d9692c9c1484d9688f8c1f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jemison, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:33:51 compute-0 nova_compute[257802]: 2025-10-02 12:33:51.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:51.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:51 compute-0 systemd[1]: Started libpod-conmon-21ff47ed27a36d97cb9f2a52d2427562435df1d63d9692c9c1484d9688f8c1f0.scope.
Oct 02 12:33:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:33:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02c99d4c5a78fb767c38b412515072cd51420ce3216c9e89e34420ba41ecaf21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:33:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02c99d4c5a78fb767c38b412515072cd51420ce3216c9e89e34420ba41ecaf21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:33:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02c99d4c5a78fb767c38b412515072cd51420ce3216c9e89e34420ba41ecaf21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:33:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02c99d4c5a78fb767c38b412515072cd51420ce3216c9e89e34420ba41ecaf21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:33:51 compute-0 ceph-mon[73607]: pgmap v2181: 305 pgs: 305 active+clean; 585 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 982 KiB/s wr, 201 op/s
Oct 02 12:33:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2182: 305 pgs: 305 active+clean; 593 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.2 MiB/s wr, 208 op/s
Oct 02 12:33:51 compute-0 podman[338845]: 2025-10-02 12:33:51.75191341 +0000 UTC m=+0.925426693 container init 21ff47ed27a36d97cb9f2a52d2427562435df1d63d9692c9c1484d9688f8c1f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:33:51 compute-0 podman[338845]: 2025-10-02 12:33:51.763495266 +0000 UTC m=+0.937008529 container start 21ff47ed27a36d97cb9f2a52d2427562435df1d63d9692c9c1484d9688f8c1f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:33:51 compute-0 podman[338845]: 2025-10-02 12:33:51.812241683 +0000 UTC m=+0.985754946 container attach 21ff47ed27a36d97cb9f2a52d2427562435df1d63d9692c9c1484d9688f8c1f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jemison, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:33:51 compute-0 podman[338860]: 2025-10-02 12:33:51.887919825 +0000 UTC m=+0.713622126 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller)
Oct 02 12:33:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:33:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:51.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:33:52 compute-0 ceph-mon[73607]: pgmap v2182: 305 pgs: 305 active+clean; 593 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.2 MiB/s wr, 208 op/s
Oct 02 12:33:52 compute-0 wonderful_jemison[338871]: {
Oct 02 12:33:52 compute-0 wonderful_jemison[338871]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:33:52 compute-0 wonderful_jemison[338871]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:33:52 compute-0 wonderful_jemison[338871]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:33:52 compute-0 wonderful_jemison[338871]:         "osd_id": 1,
Oct 02 12:33:52 compute-0 wonderful_jemison[338871]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:33:52 compute-0 wonderful_jemison[338871]:         "type": "bluestore"
Oct 02 12:33:52 compute-0 wonderful_jemison[338871]:     }
Oct 02 12:33:52 compute-0 wonderful_jemison[338871]: }
Oct 02 12:33:52 compute-0 systemd[1]: libpod-21ff47ed27a36d97cb9f2a52d2427562435df1d63d9692c9c1484d9688f8c1f0.scope: Deactivated successfully.
Oct 02 12:33:52 compute-0 podman[338845]: 2025-10-02 12:33:52.658679584 +0000 UTC m=+1.832192847 container died 21ff47ed27a36d97cb9f2a52d2427562435df1d63d9692c9c1484d9688f8c1f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:33:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-02c99d4c5a78fb767c38b412515072cd51420ce3216c9e89e34420ba41ecaf21-merged.mount: Deactivated successfully.
Oct 02 12:33:52 compute-0 podman[338845]: 2025-10-02 12:33:52.836914076 +0000 UTC m=+2.010427339 container remove 21ff47ed27a36d97cb9f2a52d2427562435df1d63d9692c9c1484d9688f8c1f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:33:52 compute-0 systemd[1]: libpod-conmon-21ff47ed27a36d97cb9f2a52d2427562435df1d63d9692c9c1484d9688f8c1f0.scope: Deactivated successfully.
Oct 02 12:33:52 compute-0 sudo[338736]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:33:52 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:33:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:33:52 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:33:52 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 8a18cc72-dd7e-4b69-80e1-c832f9776b52 does not exist
Oct 02 12:33:52 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 66e9a7ee-0db2-486d-9d27-bb8ae7e0c60a does not exist
Oct 02 12:33:52 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 082ddfef-9e5e-4aca-862e-00a152faeaec does not exist
Oct 02 12:33:53 compute-0 sudo[338920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:33:53 compute-0 sudo[338920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:53 compute-0 sudo[338920]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:53 compute-0 sudo[338945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:33:53 compute-0 sudo[338945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:53 compute-0 sudo[338945]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:53.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:53 compute-0 nova_compute[257802]: 2025-10-02 12:33:53.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2183: 305 pgs: 305 active+clean; 612 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 187 op/s
Oct 02 12:33:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:53 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:33:53 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:33:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:53.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:33:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:33:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:33:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:33:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.012834899323003909 of space, bias 1.0, pg target 3.8504697969011725 quantized to 32 (current 32)
Oct 02 12:33:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:33:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6421021121231156 quantized to 32 (current 32)
Oct 02 12:33:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:33:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:33:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:33:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Oct 02 12:33:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:33:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 02 12:33:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:33:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:33:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:33:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.0002699042085427136 quantized to 32 (current 32)
Oct 02 12:33:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:33:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Oct 02 12:33:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:33:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:33:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:33:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Oct 02 12:33:55 compute-0 nova_compute[257802]: 2025-10-02 12:33:55.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:33:55 compute-0 nova_compute[257802]: 2025-10-02 12:33:55.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:33:55 compute-0 nova_compute[257802]: 2025-10-02 12:33:55.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:33:55 compute-0 nova_compute[257802]: 2025-10-02 12:33:55.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:33:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:33:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2019756427' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:33:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:33:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2019756427' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:33:55 compute-0 nova_compute[257802]: 2025-10-02 12:33:55.115 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:33:55 compute-0 ceph-mon[73607]: pgmap v2183: 305 pgs: 305 active+clean; 612 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 187 op/s
Oct 02 12:33:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4283116276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:55.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 305 active+clean; 612 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 173 op/s
Oct 02 12:33:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:55.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2019756427' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:33:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2019756427' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:33:56 compute-0 nova_compute[257802]: 2025-10-02 12:33:56.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:57 compute-0 nova_compute[257802]: 2025-10-02 12:33:57.115 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:33:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:33:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:57.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:33:57 compute-0 ceph-mon[73607]: pgmap v2184: 305 pgs: 305 active+clean; 612 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 173 op/s
Oct 02 12:33:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/148215647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2185: 305 pgs: 305 active+clean; 612 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 132 op/s
Oct 02 12:33:57 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Oct 02 12:33:57 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:33:57.849706) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:33:57 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Oct 02 12:33:57 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408437849762, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 1787, "num_deletes": 255, "total_data_size": 2889703, "memory_usage": 2935488, "flush_reason": "Manual Compaction"}
Oct 02 12:33:57 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Oct 02 12:33:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:33:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:57.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:33:58 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408438041936, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 2820366, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47074, "largest_seqno": 48860, "table_properties": {"data_size": 2812169, "index_size": 4883, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 18403, "raw_average_key_size": 20, "raw_value_size": 2795326, "raw_average_value_size": 3180, "num_data_blocks": 211, "num_entries": 879, "num_filter_entries": 879, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408293, "oldest_key_time": 1759408293, "file_creation_time": 1759408437, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:33:58 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 192264 microseconds, and 6730 cpu microseconds.
Oct 02 12:33:58 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:33:58 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:33:58.041976) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 2820366 bytes OK
Oct 02 12:33:58 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:33:58.041994) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Oct 02 12:33:58 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:33:58.088073) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Oct 02 12:33:58 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:33:58.088122) EVENT_LOG_v1 {"time_micros": 1759408438088112, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:33:58 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:33:58.088147) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:33:58 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 2882051, prev total WAL file size 2882051, number of live WAL files 2.
Oct 02 12:33:58 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:33:58 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:33:58.089300) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Oct 02 12:33:58 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:33:58 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(2754KB)], [104(9422KB)]
Oct 02 12:33:58 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408438089400, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 12468992, "oldest_snapshot_seqno": -1}
Oct 02 12:33:58 compute-0 nova_compute[257802]: 2025-10-02 12:33:58.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:58 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 7674 keys, 10591126 bytes, temperature: kUnknown
Oct 02 12:33:58 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408438388396, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 10591126, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10540620, "index_size": 30234, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19205, "raw_key_size": 198332, "raw_average_key_size": 25, "raw_value_size": 10404526, "raw_average_value_size": 1355, "num_data_blocks": 1185, "num_entries": 7674, "num_filter_entries": 7674, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759408438, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:33:58 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:33:58 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:33:58.388618) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 10591126 bytes
Oct 02 12:33:58 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:33:58.404584) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 41.7 rd, 35.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 9.2 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(8.2) write-amplify(3.8) OK, records in: 8205, records dropped: 531 output_compression: NoCompression
Oct 02 12:33:58 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:33:58.404623) EVENT_LOG_v1 {"time_micros": 1759408438404608, "job": 62, "event": "compaction_finished", "compaction_time_micros": 299059, "compaction_time_cpu_micros": 24900, "output_level": 6, "num_output_files": 1, "total_output_size": 10591126, "num_input_records": 8205, "num_output_records": 7674, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:33:58 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:33:58 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408438405341, "job": 62, "event": "table_file_deletion", "file_number": 106}
Oct 02 12:33:58 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:33:58 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408438407029, "job": 62, "event": "table_file_deletion", "file_number": 104}
Oct 02 12:33:58 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:33:58.089172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:33:58 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:33:58.407071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:33:58 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:33:58.407076) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:33:58 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:33:58.407078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:33:58 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:33:58.407079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:33:58 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:33:58.407080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:33:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:58 compute-0 ceph-mon[73607]: pgmap v2185: 305 pgs: 305 active+clean; 612 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 132 op/s
Oct 02 12:33:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:33:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:59.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:33:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2186: 305 pgs: 305 active+clean; 612 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.8 MiB/s wr, 181 op/s
Oct 02 12:33:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:33:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:33:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:59.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:34:00 compute-0 nova_compute[257802]: 2025-10-02 12:34:00.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:00 compute-0 ceph-mon[73607]: pgmap v2186: 305 pgs: 305 active+clean; 612 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.8 MiB/s wr, 181 op/s
Oct 02 12:34:01 compute-0 nova_compute[257802]: 2025-10-02 12:34:01.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:01 compute-0 nova_compute[257802]: 2025-10-02 12:34:01.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:01 compute-0 nova_compute[257802]: 2025-10-02 12:34:01.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:01.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2187: 305 pgs: 305 active+clean; 612 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.0 MiB/s wr, 140 op/s
Oct 02 12:34:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:34:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:01.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:34:02 compute-0 ovn_controller[148183]: 2025-10-02T12:34:02Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:52:ee:74 10.100.0.12
Oct 02 12:34:02 compute-0 ovn_controller[148183]: 2025-10-02T12:34:02Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:52:ee:74 10.100.0.12
Oct 02 12:34:03 compute-0 ceph-mon[73607]: pgmap v2187: 305 pgs: 305 active+clean; 612 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.0 MiB/s wr, 140 op/s
Oct 02 12:34:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:03.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:03 compute-0 nova_compute[257802]: 2025-10-02 12:34:03.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2188: 305 pgs: 305 active+clean; 582 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 689 KiB/s wr, 169 op/s
Oct 02 12:34:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:03.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:04 compute-0 nova_compute[257802]: 2025-10-02 12:34:04.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:05.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:05 compute-0 ceph-mon[73607]: pgmap v2188: 305 pgs: 305 active+clean; 582 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 689 KiB/s wr, 169 op/s
Oct 02 12:34:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3713526240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2189: 305 pgs: 305 active+clean; 525 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 21 KiB/s wr, 233 op/s
Oct 02 12:34:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:05.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:06 compute-0 nova_compute[257802]: 2025-10-02 12:34:06.110 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:06 compute-0 nova_compute[257802]: 2025-10-02 12:34:06.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:07 compute-0 sudo[338977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:34:07 compute-0 sudo[338977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:07 compute-0 sudo[338977]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:07 compute-0 nova_compute[257802]: 2025-10-02 12:34:07.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:07 compute-0 nova_compute[257802]: 2025-10-02 12:34:07.097 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:34:07 compute-0 nova_compute[257802]: 2025-10-02 12:34:07.097 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:34:07 compute-0 sudo[339002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:34:07 compute-0 sudo[339002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:07 compute-0 sudo[339002]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:07.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:07 compute-0 ceph-mon[73607]: pgmap v2189: 305 pgs: 305 active+clean; 525 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 21 KiB/s wr, 233 op/s
Oct 02 12:34:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4114615551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:07 compute-0 nova_compute[257802]: 2025-10-02 12:34:07.430 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-a53afa14-bb7b-4723-8239-2ed285f1bc94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:34:07 compute-0 nova_compute[257802]: 2025-10-02 12:34:07.431 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-a53afa14-bb7b-4723-8239-2ed285f1bc94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:34:07 compute-0 nova_compute[257802]: 2025-10-02 12:34:07.431 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:34:07 compute-0 nova_compute[257802]: 2025-10-02 12:34:07.431 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid a53afa14-bb7b-4723-8239-2ed285f1bc94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 305 active+clean; 525 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 19 KiB/s wr, 210 op/s
Oct 02 12:34:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:07.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:08 compute-0 nova_compute[257802]: 2025-10-02 12:34:08.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4001065409' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:08 compute-0 ceph-mon[73607]: pgmap v2190: 305 pgs: 305 active+clean; 525 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 19 KiB/s wr, 210 op/s
Oct 02 12:34:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1341908670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:08 compute-0 nova_compute[257802]: 2025-10-02 12:34:08.893 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Updating instance_info_cache with network_info: [{"id": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "address": "fa:16:3e:2f:ff:7e", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a410c4c-94", "ovs_interfaceid": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:34:08 compute-0 nova_compute[257802]: 2025-10-02 12:34:08.922 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-a53afa14-bb7b-4723-8239-2ed285f1bc94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:34:08 compute-0 nova_compute[257802]: 2025-10-02 12:34:08.923 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:34:08 compute-0 nova_compute[257802]: 2025-10-02 12:34:08.923 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:08 compute-0 nova_compute[257802]: 2025-10-02 12:34:08.961 2 DEBUG oslo_concurrency.lockutils [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Acquiring lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:08 compute-0 nova_compute[257802]: 2025-10-02 12:34:08.962 2 DEBUG oslo_concurrency.lockutils [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:08 compute-0 nova_compute[257802]: 2025-10-02 12:34:08.962 2 DEBUG oslo_concurrency.lockutils [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Acquiring lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:08 compute-0 nova_compute[257802]: 2025-10-02 12:34:08.962 2 DEBUG oslo_concurrency.lockutils [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:08 compute-0 nova_compute[257802]: 2025-10-02 12:34:08.962 2 DEBUG oslo_concurrency.lockutils [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:08 compute-0 nova_compute[257802]: 2025-10-02 12:34:08.963 2 INFO nova.compute.manager [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Terminating instance
Oct 02 12:34:08 compute-0 nova_compute[257802]: 2025-10-02 12:34:08.964 2 DEBUG nova.compute.manager [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:34:09 compute-0 kernel: tap37f1ce06-c6 (unregistering): left promiscuous mode
Oct 02 12:34:09 compute-0 NetworkManager[44987]: <info>  [1759408449.1503] device (tap37f1ce06-c6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:34:09 compute-0 ovn_controller[148183]: 2025-10-02T12:34:09Z|00584|binding|INFO|Releasing lport 37f1ce06-c620-444d-822e-67c8de421fd6 from this chassis (sb_readonly=0)
Oct 02 12:34:09 compute-0 ovn_controller[148183]: 2025-10-02T12:34:09Z|00585|binding|INFO|Setting lport 37f1ce06-c620-444d-822e-67c8de421fd6 down in Southbound
Oct 02 12:34:09 compute-0 nova_compute[257802]: 2025-10-02 12:34:09.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:09 compute-0 ovn_controller[148183]: 2025-10-02T12:34:09Z|00586|binding|INFO|Removing iface tap37f1ce06-c6 ovn-installed in OVS
Oct 02 12:34:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:09.162 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:ee:74 10.100.0.12'], port_security=['fa:16:3e:52:ee:74 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '49fade07-5c87-4e89-bbec-cce4fc94a4a2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f3d344f-7e5f-4676-877b-da313e338dc0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '385766b9209941f3ab805e8d5e2af163', 'neutron:revision_number': '8', 'neutron:security_group_ids': '1389a46f-eb3b-49c0-bee4-ea4be4a55967', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6cf2f8e-38d5-4acc-9afc-6fc6835becad, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=37f1ce06-c620-444d-822e-67c8de421fd6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:34:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:09.163 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 37f1ce06-c620-444d-822e-67c8de421fd6 in datapath 9f3d344f-7e5f-4676-877b-da313e338dc0 unbound from our chassis
Oct 02 12:34:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:09.164 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f3d344f-7e5f-4676-877b-da313e338dc0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:34:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:09.165 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[86cb7949-9bea-4344-975a-b1dad440c54c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:09.165 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0 namespace which is not needed anymore
Oct 02 12:34:09 compute-0 nova_compute[257802]: 2025-10-02 12:34:09.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:09.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:09 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Oct 02 12:34:09 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000007f.scope: Consumed 14.381s CPU time.
Oct 02 12:34:09 compute-0 systemd-machined[211836]: Machine qemu-66-instance-0000007f terminated.
Oct 02 12:34:09 compute-0 nova_compute[257802]: 2025-10-02 12:34:09.403 2 INFO nova.virt.libvirt.driver [-] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Instance destroyed successfully.
Oct 02 12:34:09 compute-0 nova_compute[257802]: 2025-10-02 12:34:09.404 2 DEBUG nova.objects.instance [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Lazy-loading 'resources' on Instance uuid 49fade07-5c87-4e89-bbec-cce4fc94a4a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:09 compute-0 nova_compute[257802]: 2025-10-02 12:34:09.420 2 DEBUG nova.virt.libvirt.vif [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:33:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-222361521',display_name='tempest-ListServerFiltersTestJSON-instance-222361521',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-222361521',id=127,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:33:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='385766b9209941f3ab805e8d5e2af163',ramdisk_id='',reservation_id='r-stf3jzcr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio
',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-542915701',owner_user_name='tempest-ListServerFiltersTestJSON-542915701-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:33:47Z,user_data=None,user_id='1c2fbed9aaf84b4e864db97bec4c797c',uuid=49fade07-5c87-4e89-bbec-cce4fc94a4a2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "37f1ce06-c620-444d-822e-67c8de421fd6", "address": "fa:16:3e:52:ee:74", "network": {"id": "9f3d344f-7e5f-4676-877b-da313e338dc0", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1391478832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "385766b9209941f3ab805e8d5e2af163", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37f1ce06-c6", "ovs_interfaceid": "37f1ce06-c620-444d-822e-67c8de421fd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:34:09 compute-0 nova_compute[257802]: 2025-10-02 12:34:09.421 2 DEBUG nova.network.os_vif_util [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Converting VIF {"id": "37f1ce06-c620-444d-822e-67c8de421fd6", "address": "fa:16:3e:52:ee:74", "network": {"id": "9f3d344f-7e5f-4676-877b-da313e338dc0", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1391478832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "385766b9209941f3ab805e8d5e2af163", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37f1ce06-c6", "ovs_interfaceid": "37f1ce06-c620-444d-822e-67c8de421fd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:34:09 compute-0 nova_compute[257802]: 2025-10-02 12:34:09.422 2 DEBUG nova.network.os_vif_util [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:ee:74,bridge_name='br-int',has_traffic_filtering=True,id=37f1ce06-c620-444d-822e-67c8de421fd6,network=Network(9f3d344f-7e5f-4676-877b-da313e338dc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37f1ce06-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:34:09 compute-0 nova_compute[257802]: 2025-10-02 12:34:09.422 2 DEBUG os_vif [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:ee:74,bridge_name='br-int',has_traffic_filtering=True,id=37f1ce06-c620-444d-822e-67c8de421fd6,network=Network(9f3d344f-7e5f-4676-877b-da313e338dc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37f1ce06-c6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:34:09 compute-0 nova_compute[257802]: 2025-10-02 12:34:09.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:09 compute-0 nova_compute[257802]: 2025-10-02 12:34:09.424 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37f1ce06-c6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:09 compute-0 nova_compute[257802]: 2025-10-02 12:34:09.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:09 compute-0 nova_compute[257802]: 2025-10-02 12:34:09.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:09 compute-0 nova_compute[257802]: 2025-10-02 12:34:09.428 2 INFO os_vif [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:ee:74,bridge_name='br-int',has_traffic_filtering=True,id=37f1ce06-c620-444d-822e-67c8de421fd6,network=Network(9f3d344f-7e5f-4676-877b-da313e338dc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37f1ce06-c6')
Oct 02 12:34:09 compute-0 neutron-haproxy-ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0[338388]: [NOTICE]   (338428) : haproxy version is 2.8.14-c23fe91
Oct 02 12:34:09 compute-0 neutron-haproxy-ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0[338388]: [NOTICE]   (338428) : path to executable is /usr/sbin/haproxy
Oct 02 12:34:09 compute-0 neutron-haproxy-ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0[338388]: [ALERT]    (338428) : Current worker (338430) exited with code 143 (Terminated)
Oct 02 12:34:09 compute-0 neutron-haproxy-ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0[338388]: [WARNING]  (338428) : All workers exited. Exiting... (0)
Oct 02 12:34:09 compute-0 systemd[1]: libpod-14c2e994570f8564ddb59920cbd04135cdf48f28142d136a1d588e4a6e533d75.scope: Deactivated successfully.
Oct 02 12:34:09 compute-0 podman[339051]: 2025-10-02 12:34:09.486642381 +0000 UTC m=+0.234746242 container died 14c2e994570f8564ddb59920cbd04135cdf48f28142d136a1d588e4a6e533d75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:34:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2191: 305 pgs: 305 active+clean; 425 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 23 KiB/s wr, 237 op/s
Oct 02 12:34:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/612066073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-856072de0ddef5f474532536696e5c4757671171d8f7898b43dce04ac5f22535-merged.mount: Deactivated successfully.
Oct 02 12:34:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-14c2e994570f8564ddb59920cbd04135cdf48f28142d136a1d588e4a6e533d75-userdata-shm.mount: Deactivated successfully.
Oct 02 12:34:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:09.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:10 compute-0 podman[339051]: 2025-10-02 12:34:10.238396644 +0000 UTC m=+0.986500515 container cleanup 14c2e994570f8564ddb59920cbd04135cdf48f28142d136a1d588e4a6e533d75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 12:34:10 compute-0 systemd[1]: libpod-conmon-14c2e994570f8564ddb59920cbd04135cdf48f28142d136a1d588e4a6e533d75.scope: Deactivated successfully.
Oct 02 12:34:10 compute-0 podman[339111]: 2025-10-02 12:34:10.918001512 +0000 UTC m=+0.652903533 container remove 14c2e994570f8564ddb59920cbd04135cdf48f28142d136a1d588e4a6e533d75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:34:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:10.925 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b930c9c2-23d8-4a5a-80a0-1134911559a3]: (4, ('Thu Oct  2 12:34:09 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0 (14c2e994570f8564ddb59920cbd04135cdf48f28142d136a1d588e4a6e533d75)\n14c2e994570f8564ddb59920cbd04135cdf48f28142d136a1d588e4a6e533d75\nThu Oct  2 12:34:10 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0 (14c2e994570f8564ddb59920cbd04135cdf48f28142d136a1d588e4a6e533d75)\n14c2e994570f8564ddb59920cbd04135cdf48f28142d136a1d588e4a6e533d75\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:10.927 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[18fe28c5-04f8-4065-958f-e94ca5900067]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:10.929 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f3d344f-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:10 compute-0 nova_compute[257802]: 2025-10-02 12:34:10.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:10 compute-0 kernel: tap9f3d344f-70: left promiscuous mode
Oct 02 12:34:10 compute-0 nova_compute[257802]: 2025-10-02 12:34:10.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:10.948 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b251e7-1db1-4f6f-b176-e50cf948d4f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:10.984 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7cf812a1-183d-4681-98a5-8e65fad022a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:10.985 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a8e11bfd-61d3-4d09-8468-91ff5f13d281]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:11 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:11.001 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b75c4fed-217e-4f2d-bc4a-6a86572b991f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649407, 'reachable_time': 29655, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339128, 'error': None, 'target': 'ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:11 compute-0 systemd[1]: run-netns-ovnmeta\x2d9f3d344f\x2d7e5f\x2d4676\x2d877b\x2dda313e338dc0.mount: Deactivated successfully.
Oct 02 12:34:11 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:11.005 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9f3d344f-7e5f-4676-877b-da313e338dc0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:34:11 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:11.005 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[679dce64-d884-432f-a7b5-3fd565e1796b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:11 compute-0 nova_compute[257802]: 2025-10-02 12:34:11.116 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:11 compute-0 nova_compute[257802]: 2025-10-02 12:34:11.151 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:11 compute-0 nova_compute[257802]: 2025-10-02 12:34:11.151 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:11 compute-0 nova_compute[257802]: 2025-10-02 12:34:11.151 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:11 compute-0 nova_compute[257802]: 2025-10-02 12:34:11.152 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:34:11 compute-0 nova_compute[257802]: 2025-10-02 12:34:11.152 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:11 compute-0 ceph-mon[73607]: pgmap v2191: 305 pgs: 305 active+clean; 425 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 23 KiB/s wr, 237 op/s
Oct 02 12:34:11 compute-0 nova_compute[257802]: 2025-10-02 12:34:11.195 2 DEBUG nova.compute.manager [req-d81b48ee-83e1-4c1d-bc34-d02d752d4f66 req-6dc70f3f-faaf-4b84-9622-f61bf24e0810 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Received event network-vif-unplugged-37f1ce06-c620-444d-822e-67c8de421fd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:11 compute-0 nova_compute[257802]: 2025-10-02 12:34:11.196 2 DEBUG oslo_concurrency.lockutils [req-d81b48ee-83e1-4c1d-bc34-d02d752d4f66 req-6dc70f3f-faaf-4b84-9622-f61bf24e0810 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:11 compute-0 nova_compute[257802]: 2025-10-02 12:34:11.196 2 DEBUG oslo_concurrency.lockutils [req-d81b48ee-83e1-4c1d-bc34-d02d752d4f66 req-6dc70f3f-faaf-4b84-9622-f61bf24e0810 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:11 compute-0 nova_compute[257802]: 2025-10-02 12:34:11.196 2 DEBUG oslo_concurrency.lockutils [req-d81b48ee-83e1-4c1d-bc34-d02d752d4f66 req-6dc70f3f-faaf-4b84-9622-f61bf24e0810 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:11 compute-0 nova_compute[257802]: 2025-10-02 12:34:11.197 2 DEBUG nova.compute.manager [req-d81b48ee-83e1-4c1d-bc34-d02d752d4f66 req-6dc70f3f-faaf-4b84-9622-f61bf24e0810 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] No waiting events found dispatching network-vif-unplugged-37f1ce06-c620-444d-822e-67c8de421fd6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:34:11 compute-0 nova_compute[257802]: 2025-10-02 12:34:11.197 2 DEBUG nova.compute.manager [req-d81b48ee-83e1-4c1d-bc34-d02d752d4f66 req-6dc70f3f-faaf-4b84-9622-f61bf24e0810 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Received event network-vif-unplugged-37f1ce06-c620-444d-822e-67c8de421fd6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:34:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:11.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:34:11 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1951768068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:11 compute-0 nova_compute[257802]: 2025-10-02 12:34:11.596 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:11 compute-0 nova_compute[257802]: 2025-10-02 12:34:11.691 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:34:11 compute-0 nova_compute[257802]: 2025-10-02 12:34:11.691 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:34:11 compute-0 nova_compute[257802]: 2025-10-02 12:34:11.694 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:34:11 compute-0 nova_compute[257802]: 2025-10-02 12:34:11.695 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:34:11 compute-0 nova_compute[257802]: 2025-10-02 12:34:11.695 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:34:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2192: 305 pgs: 305 active+clean; 408 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 28 KiB/s wr, 205 op/s
Oct 02 12:34:11 compute-0 nova_compute[257802]: 2025-10-02 12:34:11.835 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:34:11 compute-0 nova_compute[257802]: 2025-10-02 12:34:11.836 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4171MB free_disk=20.82094955444336GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:34:11 compute-0 nova_compute[257802]: 2025-10-02 12:34:11.836 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:11 compute-0 nova_compute[257802]: 2025-10-02 12:34:11.837 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:34:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:11.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:34:12 compute-0 nova_compute[257802]: 2025-10-02 12:34:12.065 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance a53afa14-bb7b-4723-8239-2ed285f1bc94 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:34:12 compute-0 nova_compute[257802]: 2025-10-02 12:34:12.066 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 49fade07-5c87-4e89-bbec-cce4fc94a4a2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:34:12 compute-0 nova_compute[257802]: 2025-10-02 12:34:12.066 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:34:12 compute-0 nova_compute[257802]: 2025-10-02 12:34:12.066 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:34:12 compute-0 nova_compute[257802]: 2025-10-02 12:34:12.125 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1951768068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1876702206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:34:12 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1056343614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:12 compute-0 nova_compute[257802]: 2025-10-02 12:34:12.595 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:12 compute-0 nova_compute[257802]: 2025-10-02 12:34:12.603 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:34:12 compute-0 nova_compute[257802]: 2025-10-02 12:34:12.700 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:34:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:34:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:34:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:34:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:34:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:34:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:34:12 compute-0 nova_compute[257802]: 2025-10-02 12:34:12.772 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:34:12 compute-0 nova_compute[257802]: 2025-10-02 12:34:12.772 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.936s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:12 compute-0 nova_compute[257802]: 2025-10-02 12:34:12.794 2 INFO nova.virt.libvirt.driver [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Deleting instance files /var/lib/nova/instances/49fade07-5c87-4e89-bbec-cce4fc94a4a2_del
Oct 02 12:34:12 compute-0 nova_compute[257802]: 2025-10-02 12:34:12.795 2 INFO nova.virt.libvirt.driver [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Deletion of /var/lib/nova/instances/49fade07-5c87-4e89-bbec-cce4fc94a4a2_del complete
Oct 02 12:34:12 compute-0 nova_compute[257802]: 2025-10-02 12:34:12.953 2 INFO nova.compute.manager [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Took 3.99 seconds to destroy the instance on the hypervisor.
Oct 02 12:34:12 compute-0 nova_compute[257802]: 2025-10-02 12:34:12.955 2 DEBUG oslo.service.loopingcall [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:34:12 compute-0 nova_compute[257802]: 2025-10-02 12:34:12.955 2 DEBUG nova.compute.manager [-] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:34:12 compute-0 nova_compute[257802]: 2025-10-02 12:34:12.956 2 DEBUG nova.network.neutron [-] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:34:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:34:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:13.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:34:13 compute-0 ceph-mon[73607]: pgmap v2192: 305 pgs: 305 active+clean; 408 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 28 KiB/s wr, 205 op/s
Oct 02 12:34:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1056343614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:13 compute-0 nova_compute[257802]: 2025-10-02 12:34:13.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2193: 305 pgs: 305 active+clean; 386 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 545 KiB/s wr, 209 op/s
Oct 02 12:34:13 compute-0 nova_compute[257802]: 2025-10-02 12:34:13.897 2 DEBUG nova.compute.manager [req-833ccd0c-ba5b-4f6b-82ba-98a7d41e8922 req-3ac894fc-2ea9-4413-95e2-8aea64a5f1ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Received event network-vif-plugged-37f1ce06-c620-444d-822e-67c8de421fd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:13 compute-0 nova_compute[257802]: 2025-10-02 12:34:13.898 2 DEBUG oslo_concurrency.lockutils [req-833ccd0c-ba5b-4f6b-82ba-98a7d41e8922 req-3ac894fc-2ea9-4413-95e2-8aea64a5f1ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:13 compute-0 nova_compute[257802]: 2025-10-02 12:34:13.898 2 DEBUG oslo_concurrency.lockutils [req-833ccd0c-ba5b-4f6b-82ba-98a7d41e8922 req-3ac894fc-2ea9-4413-95e2-8aea64a5f1ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:13 compute-0 nova_compute[257802]: 2025-10-02 12:34:13.898 2 DEBUG oslo_concurrency.lockutils [req-833ccd0c-ba5b-4f6b-82ba-98a7d41e8922 req-3ac894fc-2ea9-4413-95e2-8aea64a5f1ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:13 compute-0 nova_compute[257802]: 2025-10-02 12:34:13.898 2 DEBUG nova.compute.manager [req-833ccd0c-ba5b-4f6b-82ba-98a7d41e8922 req-3ac894fc-2ea9-4413-95e2-8aea64a5f1ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] No waiting events found dispatching network-vif-plugged-37f1ce06-c620-444d-822e-67c8de421fd6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:34:13 compute-0 nova_compute[257802]: 2025-10-02 12:34:13.898 2 WARNING nova.compute.manager [req-833ccd0c-ba5b-4f6b-82ba-98a7d41e8922 req-3ac894fc-2ea9-4413-95e2-8aea64a5f1ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Received unexpected event network-vif-plugged-37f1ce06-c620-444d-822e-67c8de421fd6 for instance with vm_state active and task_state deleting.
Oct 02 12:34:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:13.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:14 compute-0 nova_compute[257802]: 2025-10-02 12:34:14.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:14 compute-0 nova_compute[257802]: 2025-10-02 12:34:14.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:34:14 compute-0 nova_compute[257802]: 2025-10-02 12:34:14.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:14 compute-0 ceph-mon[73607]: pgmap v2193: 305 pgs: 305 active+clean; 386 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 545 KiB/s wr, 209 op/s
Oct 02 12:34:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.002000047s ======
Oct 02 12:34:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:15.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Oct 02 12:34:15 compute-0 nova_compute[257802]: 2025-10-02 12:34:15.268 2 DEBUG nova.network.neutron [-] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:34:15 compute-0 nova_compute[257802]: 2025-10-02 12:34:15.449 2 INFO nova.compute.manager [-] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Took 2.49 seconds to deallocate network for instance.
Oct 02 12:34:15 compute-0 nova_compute[257802]: 2025-10-02 12:34:15.595 2 DEBUG oslo_concurrency.lockutils [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:15 compute-0 nova_compute[257802]: 2025-10-02 12:34:15.596 2 DEBUG oslo_concurrency.lockutils [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2194: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.1 MiB/s wr, 211 op/s
Oct 02 12:34:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:15.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:16 compute-0 nova_compute[257802]: 2025-10-02 12:34:16.130 2 DEBUG nova.scheduler.client.report [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Refreshing inventories for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:34:16 compute-0 nova_compute[257802]: 2025-10-02 12:34:16.138 2 DEBUG nova.compute.manager [req-77d8d463-2554-4df0-b795-6a270f256364 req-b25999f9-b8a6-4d5f-b35b-e0dabbf56540 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Received event network-vif-deleted-37f1ce06-c620-444d-822e-67c8de421fd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:16 compute-0 nova_compute[257802]: 2025-10-02 12:34:16.221 2 DEBUG nova.scheduler.client.report [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Updating ProviderTree inventory for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:34:16 compute-0 nova_compute[257802]: 2025-10-02 12:34:16.222 2 DEBUG nova.compute.provider_tree [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Updating inventory in ProviderTree for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:34:16 compute-0 nova_compute[257802]: 2025-10-02 12:34:16.331 2 DEBUG nova.scheduler.client.report [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Refreshing aggregate associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:34:16 compute-0 nova_compute[257802]: 2025-10-02 12:34:16.362 2 DEBUG nova.scheduler.client.report [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Refreshing trait associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:34:16 compute-0 nova_compute[257802]: 2025-10-02 12:34:16.458 2 DEBUG oslo_concurrency.processutils [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:16 compute-0 nova_compute[257802]: 2025-10-02 12:34:16.888 2 DEBUG oslo_concurrency.lockutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:16 compute-0 nova_compute[257802]: 2025-10-02 12:34:16.888 2 DEBUG oslo_concurrency.lockutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:34:16 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/191167142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:16 compute-0 nova_compute[257802]: 2025-10-02 12:34:16.950 2 DEBUG oslo_concurrency.processutils [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:16 compute-0 nova_compute[257802]: 2025-10-02 12:34:16.956 2 DEBUG nova.compute.provider_tree [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:34:16 compute-0 ceph-mon[73607]: pgmap v2194: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.1 MiB/s wr, 211 op/s
Oct 02 12:34:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1936495245' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:17 compute-0 nova_compute[257802]: 2025-10-02 12:34:17.073 2 DEBUG nova.compute.manager [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:34:17 compute-0 nova_compute[257802]: 2025-10-02 12:34:17.091 2 DEBUG nova.scheduler.client.report [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:34:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:34:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:17.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:34:17 compute-0 nova_compute[257802]: 2025-10-02 12:34:17.400 2 DEBUG oslo_concurrency.lockutils [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:17 compute-0 nova_compute[257802]: 2025-10-02 12:34:17.492 2 DEBUG oslo_concurrency.lockutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:17 compute-0 nova_compute[257802]: 2025-10-02 12:34:17.492 2 DEBUG oslo_concurrency.lockutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:17 compute-0 nova_compute[257802]: 2025-10-02 12:34:17.498 2 DEBUG nova.virt.hardware [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:34:17 compute-0 nova_compute[257802]: 2025-10-02 12:34:17.498 2 INFO nova.compute.claims [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:34:17 compute-0 nova_compute[257802]: 2025-10-02 12:34:17.600 2 INFO nova.scheduler.client.report [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Deleted allocations for instance 49fade07-5c87-4e89-bbec-cce4fc94a4a2
Oct 02 12:34:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2195: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 231 KiB/s rd, 3.1 MiB/s wr, 131 op/s
Oct 02 12:34:17 compute-0 podman[339203]: 2025-10-02 12:34:17.926984014 +0000 UTC m=+0.055492285 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 12:34:17 compute-0 podman[339201]: 2025-10-02 12:34:17.951560719 +0000 UTC m=+0.085138414 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 02 12:34:17 compute-0 podman[339202]: 2025-10-02 12:34:17.983798112 +0000 UTC m=+0.101525228 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 12:34:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:17.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:17 compute-0 nova_compute[257802]: 2025-10-02 12:34:17.999 2 DEBUG oslo_concurrency.processutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:18 compute-0 nova_compute[257802]: 2025-10-02 12:34:18.101 2 DEBUG oslo_concurrency.lockutils [None req-25c45dbb-82ba-457c-ad97-8153becddc1f 1c2fbed9aaf84b4e864db97bec4c797c 385766b9209941f3ab805e8d5e2af163 - - default default] Lock "49fade07-5c87-4e89-bbec-cce4fc94a4a2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1760207176' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/191167142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:18 compute-0 nova_compute[257802]: 2025-10-02 12:34:18.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:34:18 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/191293994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:18 compute-0 nova_compute[257802]: 2025-10-02 12:34:18.450 2 DEBUG oslo_concurrency.processutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:18 compute-0 nova_compute[257802]: 2025-10-02 12:34:18.455 2 DEBUG nova.compute.provider_tree [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:34:18 compute-0 nova_compute[257802]: 2025-10-02 12:34:18.487 2 DEBUG nova.scheduler.client.report [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:34:18 compute-0 nova_compute[257802]: 2025-10-02 12:34:18.681 2 DEBUG oslo_concurrency.lockutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.189s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:18 compute-0 nova_compute[257802]: 2025-10-02 12:34:18.682 2 DEBUG nova.compute.manager [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:34:18 compute-0 nova_compute[257802]: 2025-10-02 12:34:18.749 2 DEBUG nova.compute.manager [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:34:18 compute-0 nova_compute[257802]: 2025-10-02 12:34:18.749 2 DEBUG nova.network.neutron [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:34:18 compute-0 nova_compute[257802]: 2025-10-02 12:34:18.780 2 INFO nova.virt.libvirt.driver [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:34:18 compute-0 nova_compute[257802]: 2025-10-02 12:34:18.816 2 DEBUG nova.compute.manager [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:34:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:18 compute-0 nova_compute[257802]: 2025-10-02 12:34:18.958 2 DEBUG nova.compute.manager [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:34:18 compute-0 nova_compute[257802]: 2025-10-02 12:34:18.959 2 DEBUG nova.virt.libvirt.driver [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:34:18 compute-0 nova_compute[257802]: 2025-10-02 12:34:18.960 2 INFO nova.virt.libvirt.driver [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Creating image(s)
Oct 02 12:34:18 compute-0 nova_compute[257802]: 2025-10-02 12:34:18.983 2 DEBUG nova.storage.rbd_utils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:19 compute-0 nova_compute[257802]: 2025-10-02 12:34:19.015 2 DEBUG nova.storage.rbd_utils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:19 compute-0 nova_compute[257802]: 2025-10-02 12:34:19.041 2 DEBUG nova.storage.rbd_utils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:19 compute-0 nova_compute[257802]: 2025-10-02 12:34:19.044 2 DEBUG oslo_concurrency.processutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:19 compute-0 nova_compute[257802]: 2025-10-02 12:34:19.109 2 DEBUG oslo_concurrency.processutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:19 compute-0 nova_compute[257802]: 2025-10-02 12:34:19.110 2 DEBUG oslo_concurrency.lockutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:19 compute-0 nova_compute[257802]: 2025-10-02 12:34:19.111 2 DEBUG oslo_concurrency.lockutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:19 compute-0 nova_compute[257802]: 2025-10-02 12:34:19.111 2 DEBUG oslo_concurrency.lockutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:19 compute-0 ceph-mon[73607]: pgmap v2195: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 231 KiB/s rd, 3.1 MiB/s wr, 131 op/s
Oct 02 12:34:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/191293994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:19 compute-0 nova_compute[257802]: 2025-10-02 12:34:19.135 2 DEBUG nova.storage.rbd_utils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:19 compute-0 nova_compute[257802]: 2025-10-02 12:34:19.139 2 DEBUG oslo_concurrency.processutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:19 compute-0 nova_compute[257802]: 2025-10-02 12:34:19.170 2 DEBUG nova.policy [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6c932f0d0e594f00855572fbe06ee3aa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cc4d8f857b2d42bf9ae477fc5f514216', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:34:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:19.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:19 compute-0 nova_compute[257802]: 2025-10-02 12:34:19.425 2 DEBUG oslo_concurrency.processutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:19 compute-0 nova_compute[257802]: 2025-10-02 12:34:19.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:19 compute-0 nova_compute[257802]: 2025-10-02 12:34:19.504 2 DEBUG nova.storage.rbd_utils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] resizing rbd image 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:34:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2196: 305 pgs: 305 active+clean; 420 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 608 KiB/s rd, 3.6 MiB/s wr, 164 op/s
Oct 02 12:34:19 compute-0 nova_compute[257802]: 2025-10-02 12:34:19.725 2 DEBUG nova.objects.instance [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'migration_context' on Instance uuid 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:19 compute-0 nova_compute[257802]: 2025-10-02 12:34:19.764 2 DEBUG nova.virt.libvirt.driver [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:34:19 compute-0 nova_compute[257802]: 2025-10-02 12:34:19.765 2 DEBUG nova.virt.libvirt.driver [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Ensure instance console log exists: /var/lib/nova/instances/7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:34:19 compute-0 nova_compute[257802]: 2025-10-02 12:34:19.766 2 DEBUG oslo_concurrency.lockutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:19 compute-0 nova_compute[257802]: 2025-10-02 12:34:19.766 2 DEBUG oslo_concurrency.lockutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:19 compute-0 nova_compute[257802]: 2025-10-02 12:34:19.767 2 DEBUG oslo_concurrency.lockutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:19 compute-0 ovn_controller[148183]: 2025-10-02T12:34:19Z|00587|binding|INFO|Releasing lport 81a5a13b-b81c-444a-8751-b35a35cdf3dc from this chassis (sb_readonly=0)
Oct 02 12:34:19 compute-0 nova_compute[257802]: 2025-10-02 12:34:19.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:34:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:19.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:34:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1609700800' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2995388052' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:20 compute-0 nova_compute[257802]: 2025-10-02 12:34:20.974 2 DEBUG nova.network.neutron [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Successfully created port: fb769e14-32bb-436e-a8b2-f08e69207e0f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:34:21 compute-0 ceph-mon[73607]: pgmap v2196: 305 pgs: 305 active+clean; 420 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 608 KiB/s rd, 3.6 MiB/s wr, 164 op/s
Oct 02 12:34:21 compute-0 nova_compute[257802]: 2025-10-02 12:34:21.167 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:21.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2197: 305 pgs: 305 active+clean; 437 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 597 KiB/s rd, 4.2 MiB/s wr, 146 op/s
Oct 02 12:34:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:22.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:22 compute-0 podman[339450]: 2025-10-02 12:34:22.932422497 +0000 UTC m=+0.074733658 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:34:22 compute-0 nova_compute[257802]: 2025-10-02 12:34:22.996 2 DEBUG nova.network.neutron [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Successfully updated port: fb769e14-32bb-436e-a8b2-f08e69207e0f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:34:23 compute-0 nova_compute[257802]: 2025-10-02 12:34:23.012 2 DEBUG oslo_concurrency.lockutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "refresh_cache-7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:34:23 compute-0 nova_compute[257802]: 2025-10-02 12:34:23.013 2 DEBUG oslo_concurrency.lockutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquired lock "refresh_cache-7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:34:23 compute-0 nova_compute[257802]: 2025-10-02 12:34:23.013 2 DEBUG nova.network.neutron [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:34:23 compute-0 nova_compute[257802]: 2025-10-02 12:34:23.102 2 DEBUG nova.compute.manager [req-ade0f329-d377-497a-835c-412db715771b req-d5298893-2ffa-41b8-8e09-4036818b5efd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Received event network-changed-fb769e14-32bb-436e-a8b2-f08e69207e0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:23 compute-0 nova_compute[257802]: 2025-10-02 12:34:23.103 2 DEBUG nova.compute.manager [req-ade0f329-d377-497a-835c-412db715771b req-d5298893-2ffa-41b8-8e09-4036818b5efd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Refreshing instance network info cache due to event network-changed-fb769e14-32bb-436e-a8b2-f08e69207e0f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:34:23 compute-0 nova_compute[257802]: 2025-10-02 12:34:23.103 2 DEBUG oslo_concurrency.lockutils [req-ade0f329-d377-497a-835c-412db715771b req-d5298893-2ffa-41b8-8e09-4036818b5efd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:34:23 compute-0 ceph-mon[73607]: pgmap v2197: 305 pgs: 305 active+clean; 437 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 597 KiB/s rd, 4.2 MiB/s wr, 146 op/s
Oct 02 12:34:23 compute-0 nova_compute[257802]: 2025-10-02 12:34:23.181 2 DEBUG nova.network.neutron [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:34:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:23.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:23 compute-0 nova_compute[257802]: 2025-10-02 12:34:23.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2198: 305 pgs: 305 active+clean; 450 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 4.8 MiB/s wr, 171 op/s
Oct 02 12:34:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:24.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:24 compute-0 nova_compute[257802]: 2025-10-02 12:34:24.403 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408449.402171, 49fade07-5c87-4e89-bbec-cce4fc94a4a2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:34:24 compute-0 nova_compute[257802]: 2025-10-02 12:34:24.403 2 INFO nova.compute.manager [-] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] VM Stopped (Lifecycle Event)
Oct 02 12:34:24 compute-0 nova_compute[257802]: 2025-10-02 12:34:24.447 2 DEBUG nova.compute.manager [None req-0e330345-8242-4b67-86f4-0e8467358d41 - - - - - -] [instance: 49fade07-5c87-4e89-bbec-cce4fc94a4a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:24 compute-0 nova_compute[257802]: 2025-10-02 12:34:24.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:25 compute-0 ceph-mon[73607]: pgmap v2198: 305 pgs: 305 active+clean; 450 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 4.8 MiB/s wr, 171 op/s
Oct 02 12:34:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:34:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:25.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:34:25 compute-0 nova_compute[257802]: 2025-10-02 12:34:25.418 2 DEBUG nova.network.neutron [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Updating instance_info_cache with network_info: [{"id": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "address": "fa:16:3e:7f:8b:39", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb769e14-32", "ovs_interfaceid": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:34:25 compute-0 nova_compute[257802]: 2025-10-02 12:34:25.459 2 DEBUG oslo_concurrency.lockutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Releasing lock "refresh_cache-7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:34:25 compute-0 nova_compute[257802]: 2025-10-02 12:34:25.459 2 DEBUG nova.compute.manager [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Instance network_info: |[{"id": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "address": "fa:16:3e:7f:8b:39", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb769e14-32", "ovs_interfaceid": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:34:25 compute-0 nova_compute[257802]: 2025-10-02 12:34:25.460 2 DEBUG oslo_concurrency.lockutils [req-ade0f329-d377-497a-835c-412db715771b req-d5298893-2ffa-41b8-8e09-4036818b5efd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:34:25 compute-0 nova_compute[257802]: 2025-10-02 12:34:25.460 2 DEBUG nova.network.neutron [req-ade0f329-d377-497a-835c-412db715771b req-d5298893-2ffa-41b8-8e09-4036818b5efd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Refreshing network info cache for port fb769e14-32bb-436e-a8b2-f08e69207e0f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:34:25 compute-0 nova_compute[257802]: 2025-10-02 12:34:25.463 2 DEBUG nova.virt.libvirt.driver [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Start _get_guest_xml network_info=[{"id": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "address": "fa:16:3e:7f:8b:39", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb769e14-32", "ovs_interfaceid": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:34:25 compute-0 nova_compute[257802]: 2025-10-02 12:34:25.467 2 WARNING nova.virt.libvirt.driver [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:34:25 compute-0 nova_compute[257802]: 2025-10-02 12:34:25.470 2 DEBUG nova.virt.libvirt.host [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:34:25 compute-0 nova_compute[257802]: 2025-10-02 12:34:25.471 2 DEBUG nova.virt.libvirt.host [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:34:25 compute-0 nova_compute[257802]: 2025-10-02 12:34:25.474 2 DEBUG nova.virt.libvirt.host [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:34:25 compute-0 nova_compute[257802]: 2025-10-02 12:34:25.474 2 DEBUG nova.virt.libvirt.host [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:34:25 compute-0 nova_compute[257802]: 2025-10-02 12:34:25.475 2 DEBUG nova.virt.libvirt.driver [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:34:25 compute-0 nova_compute[257802]: 2025-10-02 12:34:25.476 2 DEBUG nova.virt.hardware [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:34:25 compute-0 nova_compute[257802]: 2025-10-02 12:34:25.476 2 DEBUG nova.virt.hardware [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:34:25 compute-0 nova_compute[257802]: 2025-10-02 12:34:25.476 2 DEBUG nova.virt.hardware [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:34:25 compute-0 nova_compute[257802]: 2025-10-02 12:34:25.477 2 DEBUG nova.virt.hardware [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:34:25 compute-0 nova_compute[257802]: 2025-10-02 12:34:25.477 2 DEBUG nova.virt.hardware [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:34:25 compute-0 nova_compute[257802]: 2025-10-02 12:34:25.477 2 DEBUG nova.virt.hardware [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:34:25 compute-0 nova_compute[257802]: 2025-10-02 12:34:25.477 2 DEBUG nova.virt.hardware [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:34:25 compute-0 nova_compute[257802]: 2025-10-02 12:34:25.478 2 DEBUG nova.virt.hardware [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:34:25 compute-0 nova_compute[257802]: 2025-10-02 12:34:25.478 2 DEBUG nova.virt.hardware [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:34:25 compute-0 nova_compute[257802]: 2025-10-02 12:34:25.478 2 DEBUG nova.virt.hardware [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:34:25 compute-0 nova_compute[257802]: 2025-10-02 12:34:25.478 2 DEBUG nova.virt.hardware [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:34:25 compute-0 nova_compute[257802]: 2025-10-02 12:34:25.481 2 DEBUG oslo_concurrency.processutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2199: 305 pgs: 305 active+clean; 468 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.9 MiB/s wr, 262 op/s
Oct 02 12:34:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:34:25 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2819670850' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:25 compute-0 nova_compute[257802]: 2025-10-02 12:34:25.913 2 DEBUG oslo_concurrency.processutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:25 compute-0 nova_compute[257802]: 2025-10-02 12:34:25.938 2 DEBUG nova.storage.rbd_utils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:25 compute-0 nova_compute[257802]: 2025-10-02 12:34:25.942 2 DEBUG oslo_concurrency.processutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:26.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2819670850' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:34:26 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1069609047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:26 compute-0 nova_compute[257802]: 2025-10-02 12:34:26.377 2 DEBUG oslo_concurrency.processutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:26 compute-0 nova_compute[257802]: 2025-10-02 12:34:26.379 2 DEBUG nova.virt.libvirt.vif [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:34:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-844114970',display_name='tempest-ServerRescueNegativeTestJSON-server-844114970',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-844114970',id=132,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBC3dzJ/cmeAQeCTRVIaMi9ODTeKzsWGp+oGguk+hFSNuPm8DjZXpwH/w0EeoRUq6Hegzhnzkofu7f4IKtcBTMmXs34k+4eg8rqlyWmhp8XzQZq6+/mosGCR22msyjISyg==',key_name='tempest-keypair-1971519988',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cc4d8f857b2d42bf9ae477fc5f514216',ramdisk_id='',reservation_id='r-ejvqx4ye',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-959216005',owner_user_name='tempest-ServerRescueNegativeTestJSON-959216005-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6c932f0d0e594f00855572fbe06ee3aa',uuid=7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "address": "fa:16:3e:7f:8b:39", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb769e14-32", "ovs_interfaceid": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:34:26 compute-0 nova_compute[257802]: 2025-10-02 12:34:26.379 2 DEBUG nova.network.os_vif_util [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Converting VIF {"id": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "address": "fa:16:3e:7f:8b:39", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb769e14-32", "ovs_interfaceid": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:34:26 compute-0 nova_compute[257802]: 2025-10-02 12:34:26.380 2 DEBUG nova.network.os_vif_util [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:8b:39,bridge_name='br-int',has_traffic_filtering=True,id=fb769e14-32bb-436e-a8b2-f08e69207e0f,network=Network(e58f4ba2-c72c-42b8-acea-ca6241431726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb769e14-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:34:26 compute-0 nova_compute[257802]: 2025-10-02 12:34:26.381 2 DEBUG nova.objects.instance [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:26 compute-0 nova_compute[257802]: 2025-10-02 12:34:26.698 2 DEBUG nova.virt.libvirt.driver [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:34:26 compute-0 nova_compute[257802]:   <uuid>7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75</uuid>
Oct 02 12:34:26 compute-0 nova_compute[257802]:   <name>instance-00000084</name>
Oct 02 12:34:26 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:34:26 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:34:26 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:34:26 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-844114970</nova:name>
Oct 02 12:34:26 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:34:25</nova:creationTime>
Oct 02 12:34:26 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:34:26 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:34:26 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:34:26 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:34:26 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:34:26 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:34:26 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:34:26 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:34:26 compute-0 nova_compute[257802]:         <nova:user uuid="6c932f0d0e594f00855572fbe06ee3aa">tempest-ServerRescueNegativeTestJSON-959216005-project-member</nova:user>
Oct 02 12:34:26 compute-0 nova_compute[257802]:         <nova:project uuid="cc4d8f857b2d42bf9ae477fc5f514216">tempest-ServerRescueNegativeTestJSON-959216005</nova:project>
Oct 02 12:34:26 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:34:26 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:34:26 compute-0 nova_compute[257802]:         <nova:port uuid="fb769e14-32bb-436e-a8b2-f08e69207e0f">
Oct 02 12:34:26 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:34:26 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:34:26 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:34:26 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <system>
Oct 02 12:34:26 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:34:26 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:34:26 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:34:26 compute-0 nova_compute[257802]:       <entry name="serial">7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75</entry>
Oct 02 12:34:26 compute-0 nova_compute[257802]:       <entry name="uuid">7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75</entry>
Oct 02 12:34:26 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     </system>
Oct 02 12:34:26 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:34:26 compute-0 nova_compute[257802]:   <os>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:   </os>
Oct 02 12:34:26 compute-0 nova_compute[257802]:   <features>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:   </features>
Oct 02 12:34:26 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:34:26 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:34:26 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:34:26 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75_disk">
Oct 02 12:34:26 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:       </source>
Oct 02 12:34:26 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:34:26 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:34:26 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:34:26 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75_disk.config">
Oct 02 12:34:26 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:       </source>
Oct 02 12:34:26 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:34:26 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:34:26 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:34:26 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:7f:8b:39"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:       <target dev="tapfb769e14-32"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:34:26 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75/console.log" append="off"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <video>
Oct 02 12:34:26 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     </video>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:34:26 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:34:26 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:34:26 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:34:26 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:34:26 compute-0 nova_compute[257802]: </domain>
Oct 02 12:34:26 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:34:26 compute-0 nova_compute[257802]: 2025-10-02 12:34:26.699 2 DEBUG nova.compute.manager [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Preparing to wait for external event network-vif-plugged-fb769e14-32bb-436e-a8b2-f08e69207e0f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:34:26 compute-0 nova_compute[257802]: 2025-10-02 12:34:26.699 2 DEBUG oslo_concurrency.lockutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:26 compute-0 nova_compute[257802]: 2025-10-02 12:34:26.700 2 DEBUG oslo_concurrency.lockutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:26 compute-0 nova_compute[257802]: 2025-10-02 12:34:26.700 2 DEBUG oslo_concurrency.lockutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:26 compute-0 nova_compute[257802]: 2025-10-02 12:34:26.700 2 DEBUG nova.virt.libvirt.vif [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:34:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-844114970',display_name='tempest-ServerRescueNegativeTestJSON-server-844114970',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-844114970',id=132,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBC3dzJ/cmeAQeCTRVIaMi9ODTeKzsWGp+oGguk+hFSNuPm8DjZXpwH/w0EeoRUq6Hegzhnzkofu7f4IKtcBTMmXs34k+4eg8rqlyWmhp8XzQZq6+/mosGCR22msyjISyg==',key_name='tempest-keypair-1971519988',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cc4d8f857b2d42bf9ae477fc5f514216',ramdisk_id='',reservation_id='r-ejvqx4ye',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-959216005',owner_user_name='tempest-ServerRescueNegativeTestJSON-959216005-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6c932f0d0e594f00855572fbe06ee3aa',uuid=7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "address": "fa:16:3e:7f:8b:39", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb769e14-32", "ovs_interfaceid": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:34:26 compute-0 nova_compute[257802]: 2025-10-02 12:34:26.701 2 DEBUG nova.network.os_vif_util [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Converting VIF {"id": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "address": "fa:16:3e:7f:8b:39", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb769e14-32", "ovs_interfaceid": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:34:26 compute-0 nova_compute[257802]: 2025-10-02 12:34:26.701 2 DEBUG nova.network.os_vif_util [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:8b:39,bridge_name='br-int',has_traffic_filtering=True,id=fb769e14-32bb-436e-a8b2-f08e69207e0f,network=Network(e58f4ba2-c72c-42b8-acea-ca6241431726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb769e14-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:34:26 compute-0 nova_compute[257802]: 2025-10-02 12:34:26.702 2 DEBUG os_vif [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:8b:39,bridge_name='br-int',has_traffic_filtering=True,id=fb769e14-32bb-436e-a8b2-f08e69207e0f,network=Network(e58f4ba2-c72c-42b8-acea-ca6241431726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb769e14-32') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:34:26 compute-0 nova_compute[257802]: 2025-10-02 12:34:26.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:26 compute-0 nova_compute[257802]: 2025-10-02 12:34:26.703 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:26 compute-0 nova_compute[257802]: 2025-10-02 12:34:26.703 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:26 compute-0 nova_compute[257802]: 2025-10-02 12:34:26.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:26 compute-0 nova_compute[257802]: 2025-10-02 12:34:26.705 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfb769e14-32, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:26 compute-0 nova_compute[257802]: 2025-10-02 12:34:26.706 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfb769e14-32, col_values=(('external_ids', {'iface-id': 'fb769e14-32bb-436e-a8b2-f08e69207e0f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:8b:39', 'vm-uuid': '7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:26 compute-0 NetworkManager[44987]: <info>  [1759408466.7080] manager: (tapfb769e14-32): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/273)
Oct 02 12:34:26 compute-0 nova_compute[257802]: 2025-10-02 12:34:26.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:34:26 compute-0 nova_compute[257802]: 2025-10-02 12:34:26.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:26 compute-0 nova_compute[257802]: 2025-10-02 12:34:26.714 2 INFO os_vif [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:8b:39,bridge_name='br-int',has_traffic_filtering=True,id=fb769e14-32bb-436e-a8b2-f08e69207e0f,network=Network(e58f4ba2-c72c-42b8-acea-ca6241431726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb769e14-32')
Oct 02 12:34:26 compute-0 nova_compute[257802]: 2025-10-02 12:34:26.812 2 DEBUG nova.virt.libvirt.driver [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:34:26 compute-0 nova_compute[257802]: 2025-10-02 12:34:26.812 2 DEBUG nova.virt.libvirt.driver [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:34:26 compute-0 nova_compute[257802]: 2025-10-02 12:34:26.812 2 DEBUG nova.virt.libvirt.driver [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] No VIF found with MAC fa:16:3e:7f:8b:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:34:26 compute-0 nova_compute[257802]: 2025-10-02 12:34:26.813 2 INFO nova.virt.libvirt.driver [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Using config drive
Oct 02 12:34:26 compute-0 nova_compute[257802]: 2025-10-02 12:34:26.838 2 DEBUG nova.storage.rbd_utils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:26.953 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:26.953 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:26.954 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:27.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:27 compute-0 sudo[339560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:34:27 compute-0 sudo[339560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:27 compute-0 sudo[339560]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:27 compute-0 sudo[339585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:34:27 compute-0 sudo[339585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:27 compute-0 sudo[339585]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:27 compute-0 ceph-mon[73607]: pgmap v2199: 305 pgs: 305 active+clean; 468 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.9 MiB/s wr, 262 op/s
Oct 02 12:34:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1069609047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:27 compute-0 nova_compute[257802]: 2025-10-02 12:34:27.398 2 INFO nova.virt.libvirt.driver [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Creating config drive at /var/lib/nova/instances/7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75/disk.config
Oct 02 12:34:27 compute-0 nova_compute[257802]: 2025-10-02 12:34:27.404 2 DEBUG oslo_concurrency.processutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprgw_5zbx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:27 compute-0 nova_compute[257802]: 2025-10-02 12:34:27.539 2 DEBUG oslo_concurrency.processutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprgw_5zbx" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:27 compute-0 nova_compute[257802]: 2025-10-02 12:34:27.568 2 DEBUG nova.storage.rbd_utils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:27 compute-0 nova_compute[257802]: 2025-10-02 12:34:27.572 2 DEBUG oslo_concurrency.processutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75/disk.config 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2200: 305 pgs: 305 active+clean; 468 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.3 MiB/s wr, 191 op/s
Oct 02 12:34:27 compute-0 nova_compute[257802]: 2025-10-02 12:34:27.777 2 DEBUG oslo_concurrency.processutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75/disk.config 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:27 compute-0 nova_compute[257802]: 2025-10-02 12:34:27.778 2 INFO nova.virt.libvirt.driver [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Deleting local config drive /var/lib/nova/instances/7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75/disk.config because it was imported into RBD.
Oct 02 12:34:27 compute-0 kernel: tapfb769e14-32: entered promiscuous mode
Oct 02 12:34:27 compute-0 NetworkManager[44987]: <info>  [1759408467.8303] manager: (tapfb769e14-32): new Tun device (/org/freedesktop/NetworkManager/Devices/274)
Oct 02 12:34:27 compute-0 nova_compute[257802]: 2025-10-02 12:34:27.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:27 compute-0 ovn_controller[148183]: 2025-10-02T12:34:27Z|00588|binding|INFO|Claiming lport fb769e14-32bb-436e-a8b2-f08e69207e0f for this chassis.
Oct 02 12:34:27 compute-0 ovn_controller[148183]: 2025-10-02T12:34:27Z|00589|binding|INFO|fb769e14-32bb-436e-a8b2-f08e69207e0f: Claiming fa:16:3e:7f:8b:39 10.100.0.13
Oct 02 12:34:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:27.839 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:8b:39 10.100.0.13'], port_security=['fa:16:3e:7f:8b:39 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e58f4ba2-c72c-42b8-acea-ca6241431726', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc4d8f857b2d42bf9ae477fc5f514216', 'neutron:revision_number': '2', 'neutron:security_group_ids': '31221492-0f82-4f56-bac5-f47f9e2caefd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff6421fa-d014-4140-8a8b-1356d60478c0, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=fb769e14-32bb-436e-a8b2-f08e69207e0f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:34:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:27.840 158261 INFO neutron.agent.ovn.metadata.agent [-] Port fb769e14-32bb-436e-a8b2-f08e69207e0f in datapath e58f4ba2-c72c-42b8-acea-ca6241431726 bound to our chassis
Oct 02 12:34:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:27.842 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e58f4ba2-c72c-42b8-acea-ca6241431726
Oct 02 12:34:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:27.858 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1cacbcd4-d2d1-419b-8449-224c233a640a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:27 compute-0 systemd-udevd[339663]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:34:27 compute-0 nova_compute[257802]: 2025-10-02 12:34:27.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:27 compute-0 ovn_controller[148183]: 2025-10-02T12:34:27Z|00590|binding|INFO|Setting lport fb769e14-32bb-436e-a8b2-f08e69207e0f ovn-installed in OVS
Oct 02 12:34:27 compute-0 ovn_controller[148183]: 2025-10-02T12:34:27Z|00591|binding|INFO|Setting lport fb769e14-32bb-436e-a8b2-f08e69207e0f up in Southbound
Oct 02 12:34:27 compute-0 nova_compute[257802]: 2025-10-02 12:34:27.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:27 compute-0 nova_compute[257802]: 2025-10-02 12:34:27.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:27 compute-0 systemd-machined[211836]: New machine qemu-67-instance-00000084.
Oct 02 12:34:27 compute-0 NetworkManager[44987]: <info>  [1759408467.8817] device (tapfb769e14-32): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:34:27 compute-0 NetworkManager[44987]: <info>  [1759408467.8831] device (tapfb769e14-32): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:34:27 compute-0 systemd[1]: Started Virtual Machine qemu-67-instance-00000084.
Oct 02 12:34:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:27.893 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[e3c32802-e921-4a35-b503-784f6d43c6d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:27.896 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[759d58df-847b-4622-8ba7-4220393135ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:27.923 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[2d1bef18-4488-40e4-80f0-514c6de85b1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:27.949 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[519ee2e2-4ec9-4144-a06b-51e354f02c39]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape58f4ba2-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:ad:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647515, 'reachable_time': 30641, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339674, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:27.965 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[dd66a4e5-e370-48ae-98a7-c48c37d67aa2]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape58f4ba2-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647525, 'tstamp': 647525}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339677, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape58f4ba2-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647528, 'tstamp': 647528}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339677, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:27.967 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape58f4ba2-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:27 compute-0 nova_compute[257802]: 2025-10-02 12:34:27.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:27 compute-0 nova_compute[257802]: 2025-10-02 12:34:27.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:27.970 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape58f4ba2-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:27.970 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:27.970 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape58f4ba2-c0, col_values=(('external_ids', {'iface-id': '81a5a13b-b81c-444a-8751-b35a35cdf3dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:27.971 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:34:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:28.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.157 2 DEBUG nova.compute.manager [req-a293d0aa-713b-4728-ae1c-d237f132afcc req-67b1ed92-7ed5-4a82-8639-c3257edc7549 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Received event network-vif-plugged-fb769e14-32bb-436e-a8b2-f08e69207e0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.158 2 DEBUG oslo_concurrency.lockutils [req-a293d0aa-713b-4728-ae1c-d237f132afcc req-67b1ed92-7ed5-4a82-8639-c3257edc7549 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.158 2 DEBUG oslo_concurrency.lockutils [req-a293d0aa-713b-4728-ae1c-d237f132afcc req-67b1ed92-7ed5-4a82-8639-c3257edc7549 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.158 2 DEBUG oslo_concurrency.lockutils [req-a293d0aa-713b-4728-ae1c-d237f132afcc req-67b1ed92-7ed5-4a82-8639-c3257edc7549 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.158 2 DEBUG nova.compute.manager [req-a293d0aa-713b-4728-ae1c-d237f132afcc req-67b1ed92-7ed5-4a82-8639-c3257edc7549 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Processing event network-vif-plugged-fb769e14-32bb-436e-a8b2-f08e69207e0f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.669 2 DEBUG nova.network.neutron [req-ade0f329-d377-497a-835c-412db715771b req-d5298893-2ffa-41b8-8e09-4036818b5efd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Updated VIF entry in instance network info cache for port fb769e14-32bb-436e-a8b2-f08e69207e0f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.670 2 DEBUG nova.network.neutron [req-ade0f329-d377-497a-835c-412db715771b req-d5298893-2ffa-41b8-8e09-4036818b5efd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Updating instance_info_cache with network_info: [{"id": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "address": "fa:16:3e:7f:8b:39", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb769e14-32", "ovs_interfaceid": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.686 2 DEBUG oslo_concurrency.lockutils [req-ade0f329-d377-497a-835c-412db715771b req-d5298893-2ffa-41b8-8e09-4036818b5efd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.733 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408468.7327225, 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.733 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] VM Started (Lifecycle Event)
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.735 2 DEBUG nova.compute.manager [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.738 2 DEBUG nova.virt.libvirt.driver [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.740 2 INFO nova.virt.libvirt.driver [-] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Instance spawned successfully.
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.741 2 DEBUG nova.virt.libvirt.driver [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.756 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.759 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.779 2 DEBUG nova.virt.libvirt.driver [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.779 2 DEBUG nova.virt.libvirt.driver [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.780 2 DEBUG nova.virt.libvirt.driver [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.780 2 DEBUG nova.virt.libvirt.driver [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.780 2 DEBUG nova.virt.libvirt.driver [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.781 2 DEBUG nova.virt.libvirt.driver [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.784 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.784 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408468.7328336, 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.784 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] VM Paused (Lifecycle Event)
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.833 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.836 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408468.7375624, 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.836 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] VM Resumed (Lifecycle Event)
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.869 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.871 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.880 2 INFO nova.compute.manager [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Took 9.92 seconds to spawn the instance on the hypervisor.
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.880 2 DEBUG nova.compute.manager [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.930 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:34:28 compute-0 nova_compute[257802]: 2025-10-02 12:34:28.952 2 INFO nova.compute.manager [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Took 11.49 seconds to build instance.
Oct 02 12:34:29 compute-0 nova_compute[257802]: 2025-10-02 12:34:29.026 2 DEBUG oslo_concurrency.lockutils [None req-e0f946fc-d45a-41db-a35a-49d1e3bbf0d1 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:29.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:29 compute-0 ceph-mon[73607]: pgmap v2200: 305 pgs: 305 active+clean; 468 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.3 MiB/s wr, 191 op/s
Oct 02 12:34:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2201: 305 pgs: 305 active+clean; 469 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.3 MiB/s wr, 215 op/s
Oct 02 12:34:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:34:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:30.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:34:30 compute-0 nova_compute[257802]: 2025-10-02 12:34:30.234 2 DEBUG nova.compute.manager [req-a5b2ce85-7be0-4b13-bfac-70c5931d9724 req-044d87cd-a65a-4a55-bdfe-090bf5ff5a1a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Received event network-vif-plugged-fb769e14-32bb-436e-a8b2-f08e69207e0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:30 compute-0 nova_compute[257802]: 2025-10-02 12:34:30.234 2 DEBUG oslo_concurrency.lockutils [req-a5b2ce85-7be0-4b13-bfac-70c5931d9724 req-044d87cd-a65a-4a55-bdfe-090bf5ff5a1a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:30 compute-0 nova_compute[257802]: 2025-10-02 12:34:30.235 2 DEBUG oslo_concurrency.lockutils [req-a5b2ce85-7be0-4b13-bfac-70c5931d9724 req-044d87cd-a65a-4a55-bdfe-090bf5ff5a1a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:30 compute-0 nova_compute[257802]: 2025-10-02 12:34:30.235 2 DEBUG oslo_concurrency.lockutils [req-a5b2ce85-7be0-4b13-bfac-70c5931d9724 req-044d87cd-a65a-4a55-bdfe-090bf5ff5a1a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:30 compute-0 nova_compute[257802]: 2025-10-02 12:34:30.235 2 DEBUG nova.compute.manager [req-a5b2ce85-7be0-4b13-bfac-70c5931d9724 req-044d87cd-a65a-4a55-bdfe-090bf5ff5a1a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] No waiting events found dispatching network-vif-plugged-fb769e14-32bb-436e-a8b2-f08e69207e0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:34:30 compute-0 nova_compute[257802]: 2025-10-02 12:34:30.235 2 WARNING nova.compute.manager [req-a5b2ce85-7be0-4b13-bfac-70c5931d9724 req-044d87cd-a65a-4a55-bdfe-090bf5ff5a1a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Received unexpected event network-vif-plugged-fb769e14-32bb-436e-a8b2-f08e69207e0f for instance with vm_state active and task_state None.
Oct 02 12:34:30 compute-0 ceph-mon[73607]: pgmap v2201: 305 pgs: 305 active+clean; 469 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.3 MiB/s wr, 215 op/s
Oct 02 12:34:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:31.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2202: 305 pgs: 305 active+clean; 469 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.8 MiB/s wr, 210 op/s
Oct 02 12:34:31 compute-0 nova_compute[257802]: 2025-10-02 12:34:31.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:32.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:32 compute-0 NetworkManager[44987]: <info>  [1759408472.1077] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/275)
Oct 02 12:34:32 compute-0 nova_compute[257802]: 2025-10-02 12:34:32.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:32 compute-0 NetworkManager[44987]: <info>  [1759408472.1083] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/276)
Oct 02 12:34:32 compute-0 nova_compute[257802]: 2025-10-02 12:34:32.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:32 compute-0 ovn_controller[148183]: 2025-10-02T12:34:32Z|00592|binding|INFO|Releasing lport 81a5a13b-b81c-444a-8751-b35a35cdf3dc from this chassis (sb_readonly=0)
Oct 02 12:34:32 compute-0 nova_compute[257802]: 2025-10-02 12:34:32.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:32 compute-0 nova_compute[257802]: 2025-10-02 12:34:32.417 2 DEBUG nova.compute.manager [req-a63b8945-b445-4bff-b5d8-ec76d4fffeab req-905bbbbb-723f-403c-b170-40942826dea5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Received event network-changed-fb769e14-32bb-436e-a8b2-f08e69207e0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:32 compute-0 nova_compute[257802]: 2025-10-02 12:34:32.417 2 DEBUG nova.compute.manager [req-a63b8945-b445-4bff-b5d8-ec76d4fffeab req-905bbbbb-723f-403c-b170-40942826dea5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Refreshing instance network info cache due to event network-changed-fb769e14-32bb-436e-a8b2-f08e69207e0f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:34:32 compute-0 nova_compute[257802]: 2025-10-02 12:34:32.418 2 DEBUG oslo_concurrency.lockutils [req-a63b8945-b445-4bff-b5d8-ec76d4fffeab req-905bbbbb-723f-403c-b170-40942826dea5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:34:32 compute-0 nova_compute[257802]: 2025-10-02 12:34:32.418 2 DEBUG oslo_concurrency.lockutils [req-a63b8945-b445-4bff-b5d8-ec76d4fffeab req-905bbbbb-723f-403c-b170-40942826dea5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:34:32 compute-0 nova_compute[257802]: 2025-10-02 12:34:32.418 2 DEBUG nova.network.neutron [req-a63b8945-b445-4bff-b5d8-ec76d4fffeab req-905bbbbb-723f-403c-b170-40942826dea5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Refreshing network info cache for port fb769e14-32bb-436e-a8b2-f08e69207e0f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:34:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:34:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:33.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:34:33 compute-0 nova_compute[257802]: 2025-10-02 12:34:33.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:33 compute-0 ceph-mon[73607]: pgmap v2202: 305 pgs: 305 active+clean; 469 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.8 MiB/s wr, 210 op/s
Oct 02 12:34:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2203: 305 pgs: 305 active+clean; 471 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 1.3 MiB/s wr, 228 op/s
Oct 02 12:34:33 compute-0 nova_compute[257802]: 2025-10-02 12:34:33.759 2 DEBUG nova.network.neutron [req-a63b8945-b445-4bff-b5d8-ec76d4fffeab req-905bbbbb-723f-403c-b170-40942826dea5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Updated VIF entry in instance network info cache for port fb769e14-32bb-436e-a8b2-f08e69207e0f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:34:33 compute-0 nova_compute[257802]: 2025-10-02 12:34:33.760 2 DEBUG nova.network.neutron [req-a63b8945-b445-4bff-b5d8-ec76d4fffeab req-905bbbbb-723f-403c-b170-40942826dea5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Updating instance_info_cache with network_info: [{"id": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "address": "fa:16:3e:7f:8b:39", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb769e14-32", "ovs_interfaceid": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:34:33 compute-0 nova_compute[257802]: 2025-10-02 12:34:33.787 2 DEBUG oslo_concurrency.lockutils [req-a63b8945-b445-4bff-b5d8-ec76d4fffeab req-905bbbbb-723f-403c-b170-40942826dea5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:34:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:34.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:34 compute-0 nova_compute[257802]: 2025-10-02 12:34:34.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3941843408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:35.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:35 compute-0 ceph-mon[73607]: pgmap v2203: 305 pgs: 305 active+clean; 471 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 1.3 MiB/s wr, 228 op/s
Oct 02 12:34:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2204: 305 pgs: 305 active+clean; 503 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 3.6 MiB/s wr, 246 op/s
Oct 02 12:34:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:34:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:36.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:34:36 compute-0 ceph-mon[73607]: pgmap v2204: 305 pgs: 305 active+clean; 503 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 3.6 MiB/s wr, 246 op/s
Oct 02 12:34:36 compute-0 nova_compute[257802]: 2025-10-02 12:34:36.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:37.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2205: 305 pgs: 305 active+clean; 503 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.0 MiB/s wr, 140 op/s
Oct 02 12:34:37 compute-0 nova_compute[257802]: 2025-10-02 12:34:37.954 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:37 compute-0 nova_compute[257802]: 2025-10-02 12:34:37.976 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Triggering sync for uuid a53afa14-bb7b-4723-8239-2ed285f1bc94 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 12:34:37 compute-0 nova_compute[257802]: 2025-10-02 12:34:37.977 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Triggering sync for uuid 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 12:34:37 compute-0 nova_compute[257802]: 2025-10-02 12:34:37.977 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "a53afa14-bb7b-4723-8239-2ed285f1bc94" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:37 compute-0 nova_compute[257802]: 2025-10-02 12:34:37.978 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "a53afa14-bb7b-4723-8239-2ed285f1bc94" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:37 compute-0 nova_compute[257802]: 2025-10-02 12:34:37.978 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:37 compute-0 nova_compute[257802]: 2025-10-02 12:34:37.979 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:38 compute-0 nova_compute[257802]: 2025-10-02 12:34:38.015 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "a53afa14-bb7b-4723-8239-2ed285f1bc94" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:38 compute-0 nova_compute[257802]: 2025-10-02 12:34:38.016 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:38.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:38 compute-0 nova_compute[257802]: 2025-10-02 12:34:38.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:39 compute-0 ceph-mon[73607]: pgmap v2205: 305 pgs: 305 active+clean; 503 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.0 MiB/s wr, 140 op/s
Oct 02 12:34:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:39.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2206: 305 pgs: 305 active+clean; 573 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.9 MiB/s wr, 250 op/s
Oct 02 12:34:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:40.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:40 compute-0 nova_compute[257802]: 2025-10-02 12:34:40.344 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "de073a99-7033-4df3-b9bd-300429654683" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:40 compute-0 nova_compute[257802]: 2025-10-02 12:34:40.345 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "de073a99-7033-4df3-b9bd-300429654683" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:40 compute-0 nova_compute[257802]: 2025-10-02 12:34:40.473 2 DEBUG nova.compute.manager [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:34:40 compute-0 nova_compute[257802]: 2025-10-02 12:34:40.670 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:40 compute-0 nova_compute[257802]: 2025-10-02 12:34:40.671 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:40 compute-0 nova_compute[257802]: 2025-10-02 12:34:40.680 2 DEBUG nova.virt.hardware [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:34:40 compute-0 nova_compute[257802]: 2025-10-02 12:34:40.680 2 INFO nova.compute.claims [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:34:41 compute-0 nova_compute[257802]: 2025-10-02 12:34:41.014 2 DEBUG oslo_concurrency.processutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.132003158s ======
Oct 02 12:34:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:41.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.132003158s
Oct 02 12:34:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:34:41 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1208427220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:41 compute-0 nova_compute[257802]: 2025-10-02 12:34:41.609 2 DEBUG oslo_concurrency.processutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:41 compute-0 nova_compute[257802]: 2025-10-02 12:34:41.618 2 DEBUG nova.compute.provider_tree [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:34:41 compute-0 nova_compute[257802]: 2025-10-02 12:34:41.643 2 DEBUG nova.scheduler.client.report [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:34:41 compute-0 nova_compute[257802]: 2025-10-02 12:34:41.726 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:41 compute-0 nova_compute[257802]: 2025-10-02 12:34:41.727 2 DEBUG nova.compute.manager [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:34:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2207: 305 pgs: 305 active+clean; 581 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 6.0 MiB/s wr, 258 op/s
Oct 02 12:34:41 compute-0 ceph-mon[73607]: pgmap v2206: 305 pgs: 305 active+clean; 573 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.9 MiB/s wr, 250 op/s
Oct 02 12:34:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2983616495' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1430517384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/112747279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:41 compute-0 nova_compute[257802]: 2025-10-02 12:34:41.821 2 DEBUG nova.compute.manager [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:34:41 compute-0 nova_compute[257802]: 2025-10-02 12:34:41.822 2 DEBUG nova.network.neutron [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:34:41 compute-0 nova_compute[257802]: 2025-10-02 12:34:41.873 2 INFO nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:34:41 compute-0 nova_compute[257802]: 2025-10-02 12:34:41.899 2 DEBUG nova.compute.manager [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:34:41 compute-0 nova_compute[257802]: 2025-10-02 12:34:41.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:42 compute-0 nova_compute[257802]: 2025-10-02 12:34:42.014 2 DEBUG nova.compute.manager [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:34:42 compute-0 nova_compute[257802]: 2025-10-02 12:34:42.015 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:34:42 compute-0 nova_compute[257802]: 2025-10-02 12:34:42.016 2 INFO nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Creating image(s)
Oct 02 12:34:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:34:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:42.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:34:42 compute-0 nova_compute[257802]: 2025-10-02 12:34:42.047 2 DEBUG nova.storage.rbd_utils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] rbd image de073a99-7033-4df3-b9bd-300429654683_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:42 compute-0 nova_compute[257802]: 2025-10-02 12:34:42.071 2 DEBUG nova.storage.rbd_utils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] rbd image de073a99-7033-4df3-b9bd-300429654683_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:42 compute-0 nova_compute[257802]: 2025-10-02 12:34:42.096 2 DEBUG nova.storage.rbd_utils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] rbd image de073a99-7033-4df3-b9bd-300429654683_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:42 compute-0 nova_compute[257802]: 2025-10-02 12:34:42.100 2 DEBUG oslo_concurrency.processutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:42 compute-0 nova_compute[257802]: 2025-10-02 12:34:42.187 2 DEBUG oslo_concurrency.processutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:42 compute-0 nova_compute[257802]: 2025-10-02 12:34:42.188 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:42 compute-0 nova_compute[257802]: 2025-10-02 12:34:42.189 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:42 compute-0 nova_compute[257802]: 2025-10-02 12:34:42.189 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:42 compute-0 nova_compute[257802]: 2025-10-02 12:34:42.220 2 DEBUG nova.storage.rbd_utils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] rbd image de073a99-7033-4df3-b9bd-300429654683_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:42 compute-0 nova_compute[257802]: 2025-10-02 12:34:42.225 2 DEBUG oslo_concurrency.processutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 de073a99-7033-4df3-b9bd-300429654683_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:42 compute-0 nova_compute[257802]: 2025-10-02 12:34:42.255 2 DEBUG nova.policy [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f02a0ac23d9e44d5a6205e853818fa50', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c3b4ed8f5ff54d4cb9f232e285155ca0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:34:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:34:42
Oct 02 12:34:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:34:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:34:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'default.rgw.log', 'default.rgw.control', 'volumes', 'images']
Oct 02 12:34:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:34:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:34:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:34:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:34:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:34:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:34:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:34:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1208427220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:42 compute-0 ceph-mon[73607]: pgmap v2207: 305 pgs: 305 active+clean; 581 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 6.0 MiB/s wr, 258 op/s
Oct 02 12:34:43 compute-0 nova_compute[257802]: 2025-10-02 12:34:43.027 2 DEBUG oslo_concurrency.processutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 de073a99-7033-4df3-b9bd-300429654683_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.802s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:34:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:34:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:34:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:34:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:34:43 compute-0 nova_compute[257802]: 2025-10-02 12:34:43.105 2 DEBUG nova.storage.rbd_utils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] resizing rbd image de073a99-7033-4df3-b9bd-300429654683_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:34:43 compute-0 nova_compute[257802]: 2025-10-02 12:34:43.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:43.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:34:43 compute-0 nova_compute[257802]: 2025-10-02 12:34:43.401 2 DEBUG nova.network.neutron [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Successfully created port: e5011138-d4c0-4361-a71a-ade8340ff5bc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:34:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:34:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:34:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:34:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:34:43 compute-0 nova_compute[257802]: 2025-10-02 12:34:43.451 2 DEBUG nova.objects.instance [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lazy-loading 'migration_context' on Instance uuid de073a99-7033-4df3-b9bd-300429654683 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:43 compute-0 nova_compute[257802]: 2025-10-02 12:34:43.486 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:34:43 compute-0 nova_compute[257802]: 2025-10-02 12:34:43.486 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Ensure instance console log exists: /var/lib/nova/instances/de073a99-7033-4df3-b9bd-300429654683/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:34:43 compute-0 nova_compute[257802]: 2025-10-02 12:34:43.487 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:43 compute-0 nova_compute[257802]: 2025-10-02 12:34:43.487 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:43 compute-0 nova_compute[257802]: 2025-10-02 12:34:43.488 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2208: 305 pgs: 305 active+clean; 611 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 7.6 MiB/s wr, 276 op/s
Oct 02 12:34:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:44.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:44 compute-0 nova_compute[257802]: 2025-10-02 12:34:44.580 2 DEBUG nova.network.neutron [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Successfully updated port: e5011138-d4c0-4361-a71a-ade8340ff5bc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:34:44 compute-0 ovn_controller[148183]: 2025-10-02T12:34:44Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7f:8b:39 10.100.0.13
Oct 02 12:34:44 compute-0 ovn_controller[148183]: 2025-10-02T12:34:44Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7f:8b:39 10.100.0.13
Oct 02 12:34:44 compute-0 nova_compute[257802]: 2025-10-02 12:34:44.643 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "refresh_cache-de073a99-7033-4df3-b9bd-300429654683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:34:44 compute-0 nova_compute[257802]: 2025-10-02 12:34:44.643 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquired lock "refresh_cache-de073a99-7033-4df3-b9bd-300429654683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:34:44 compute-0 nova_compute[257802]: 2025-10-02 12:34:44.643 2 DEBUG nova.network.neutron [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:34:44 compute-0 nova_compute[257802]: 2025-10-02 12:34:44.706 2 DEBUG nova.compute.manager [req-1c5a99c6-3d66-4f45-ad52-1d2dfc7115c4 req-bc76873c-7190-4bce-995b-cebaaafcbc24 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Received event network-changed-e5011138-d4c0-4361-a71a-ade8340ff5bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:44 compute-0 nova_compute[257802]: 2025-10-02 12:34:44.706 2 DEBUG nova.compute.manager [req-1c5a99c6-3d66-4f45-ad52-1d2dfc7115c4 req-bc76873c-7190-4bce-995b-cebaaafcbc24 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Refreshing instance network info cache due to event network-changed-e5011138-d4c0-4361-a71a-ade8340ff5bc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:34:44 compute-0 nova_compute[257802]: 2025-10-02 12:34:44.706 2 DEBUG oslo_concurrency.lockutils [req-1c5a99c6-3d66-4f45-ad52-1d2dfc7115c4 req-bc76873c-7190-4bce-995b-cebaaafcbc24 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-de073a99-7033-4df3-b9bd-300429654683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:34:44 compute-0 nova_compute[257802]: 2025-10-02 12:34:44.844 2 DEBUG nova.network.neutron [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:34:44 compute-0 ceph-mon[73607]: pgmap v2208: 305 pgs: 305 active+clean; 611 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 7.6 MiB/s wr, 276 op/s
Oct 02 12:34:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:45.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2209: 305 pgs: 305 active+clean; 681 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 11 MiB/s wr, 382 op/s
Oct 02 12:34:45 compute-0 nova_compute[257802]: 2025-10-02 12:34:45.869 2 DEBUG nova.network.neutron [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Updating instance_info_cache with network_info: [{"id": "e5011138-d4c0-4361-a71a-ade8340ff5bc", "address": "fa:16:3e:22:17:67", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5011138-d4", "ovs_interfaceid": "e5011138-d4c0-4361-a71a-ade8340ff5bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:34:45 compute-0 nova_compute[257802]: 2025-10-02 12:34:45.890 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Releasing lock "refresh_cache-de073a99-7033-4df3-b9bd-300429654683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:34:45 compute-0 nova_compute[257802]: 2025-10-02 12:34:45.891 2 DEBUG nova.compute.manager [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Instance network_info: |[{"id": "e5011138-d4c0-4361-a71a-ade8340ff5bc", "address": "fa:16:3e:22:17:67", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5011138-d4", "ovs_interfaceid": "e5011138-d4c0-4361-a71a-ade8340ff5bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:34:45 compute-0 nova_compute[257802]: 2025-10-02 12:34:45.891 2 DEBUG oslo_concurrency.lockutils [req-1c5a99c6-3d66-4f45-ad52-1d2dfc7115c4 req-bc76873c-7190-4bce-995b-cebaaafcbc24 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-de073a99-7033-4df3-b9bd-300429654683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:34:45 compute-0 nova_compute[257802]: 2025-10-02 12:34:45.891 2 DEBUG nova.network.neutron [req-1c5a99c6-3d66-4f45-ad52-1d2dfc7115c4 req-bc76873c-7190-4bce-995b-cebaaafcbc24 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Refreshing network info cache for port e5011138-d4c0-4361-a71a-ade8340ff5bc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:34:45 compute-0 nova_compute[257802]: 2025-10-02 12:34:45.893 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Start _get_guest_xml network_info=[{"id": "e5011138-d4c0-4361-a71a-ade8340ff5bc", "address": "fa:16:3e:22:17:67", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5011138-d4", "ovs_interfaceid": "e5011138-d4c0-4361-a71a-ade8340ff5bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:34:45 compute-0 nova_compute[257802]: 2025-10-02 12:34:45.898 2 WARNING nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:34:45 compute-0 nova_compute[257802]: 2025-10-02 12:34:45.903 2 DEBUG nova.virt.libvirt.host [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:34:45 compute-0 nova_compute[257802]: 2025-10-02 12:34:45.904 2 DEBUG nova.virt.libvirt.host [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:34:45 compute-0 nova_compute[257802]: 2025-10-02 12:34:45.907 2 DEBUG nova.virt.libvirt.host [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:34:45 compute-0 nova_compute[257802]: 2025-10-02 12:34:45.907 2 DEBUG nova.virt.libvirt.host [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:34:45 compute-0 nova_compute[257802]: 2025-10-02 12:34:45.908 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:34:45 compute-0 nova_compute[257802]: 2025-10-02 12:34:45.909 2 DEBUG nova.virt.hardware [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:34:45 compute-0 nova_compute[257802]: 2025-10-02 12:34:45.909 2 DEBUG nova.virt.hardware [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:34:45 compute-0 nova_compute[257802]: 2025-10-02 12:34:45.909 2 DEBUG nova.virt.hardware [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:34:45 compute-0 nova_compute[257802]: 2025-10-02 12:34:45.909 2 DEBUG nova.virt.hardware [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:34:45 compute-0 nova_compute[257802]: 2025-10-02 12:34:45.910 2 DEBUG nova.virt.hardware [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:34:45 compute-0 nova_compute[257802]: 2025-10-02 12:34:45.910 2 DEBUG nova.virt.hardware [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:34:45 compute-0 nova_compute[257802]: 2025-10-02 12:34:45.910 2 DEBUG nova.virt.hardware [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:34:45 compute-0 nova_compute[257802]: 2025-10-02 12:34:45.910 2 DEBUG nova.virt.hardware [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:34:45 compute-0 nova_compute[257802]: 2025-10-02 12:34:45.911 2 DEBUG nova.virt.hardware [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:34:45 compute-0 nova_compute[257802]: 2025-10-02 12:34:45.911 2 DEBUG nova.virt.hardware [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:34:45 compute-0 nova_compute[257802]: 2025-10-02 12:34:45.911 2 DEBUG nova.virt.hardware [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:34:45 compute-0 nova_compute[257802]: 2025-10-02 12:34:45.914 2 DEBUG oslo_concurrency.processutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:46.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:34:46 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/86450503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:46 compute-0 nova_compute[257802]: 2025-10-02 12:34:46.569 2 DEBUG oslo_concurrency.processutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:46 compute-0 nova_compute[257802]: 2025-10-02 12:34:46.603 2 DEBUG nova.storage.rbd_utils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] rbd image de073a99-7033-4df3-b9bd-300429654683_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:46 compute-0 nova_compute[257802]: 2025-10-02 12:34:46.606 2 DEBUG oslo_concurrency.processutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:46 compute-0 nova_compute[257802]: 2025-10-02 12:34:46.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:34:47 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/823528407' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.076 2 DEBUG oslo_concurrency.processutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.078 2 DEBUG nova.virt.libvirt.vif [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:34:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-783180319',display_name='tempest-tempest.common.compute-instance-783180319-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-783180319-2',id=135,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c3b4ed8f5ff54d4cb9f232e285155ca0',ramdisk_id='',reservation_id='r-bdoqg484',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-822408607',owner_user_name='tempest-MultipleCreateTe
stJSON-822408607-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:41Z,user_data=None,user_id='f02a0ac23d9e44d5a6205e853818fa50',uuid=de073a99-7033-4df3-b9bd-300429654683,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e5011138-d4c0-4361-a71a-ade8340ff5bc", "address": "fa:16:3e:22:17:67", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5011138-d4", "ovs_interfaceid": "e5011138-d4c0-4361-a71a-ade8340ff5bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.079 2 DEBUG nova.network.os_vif_util [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Converting VIF {"id": "e5011138-d4c0-4361-a71a-ade8340ff5bc", "address": "fa:16:3e:22:17:67", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5011138-d4", "ovs_interfaceid": "e5011138-d4c0-4361-a71a-ade8340ff5bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.080 2 DEBUG nova.network.os_vif_util [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:17:67,bridge_name='br-int',has_traffic_filtering=True,id=e5011138-d4c0-4361-a71a-ade8340ff5bc,network=Network(71fb4dcc-12bb-458e-9241-e19e223ca96d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5011138-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.081 2 DEBUG nova.objects.instance [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lazy-loading 'pci_devices' on Instance uuid de073a99-7033-4df3-b9bd-300429654683 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.103 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:34:47 compute-0 nova_compute[257802]:   <uuid>de073a99-7033-4df3-b9bd-300429654683</uuid>
Oct 02 12:34:47 compute-0 nova_compute[257802]:   <name>instance-00000087</name>
Oct 02 12:34:47 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:34:47 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:34:47 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:34:47 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:       <nova:name>tempest-tempest.common.compute-instance-783180319-2</nova:name>
Oct 02 12:34:47 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:34:45</nova:creationTime>
Oct 02 12:34:47 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:34:47 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:34:47 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:34:47 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:34:47 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:34:47 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:34:47 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:34:47 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:34:47 compute-0 nova_compute[257802]:         <nova:user uuid="f02a0ac23d9e44d5a6205e853818fa50">tempest-MultipleCreateTestJSON-822408607-project-member</nova:user>
Oct 02 12:34:47 compute-0 nova_compute[257802]:         <nova:project uuid="c3b4ed8f5ff54d4cb9f232e285155ca0">tempest-MultipleCreateTestJSON-822408607</nova:project>
Oct 02 12:34:47 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:34:47 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:34:47 compute-0 nova_compute[257802]:         <nova:port uuid="e5011138-d4c0-4361-a71a-ade8340ff5bc">
Oct 02 12:34:47 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:34:47 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:34:47 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:34:47 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <system>
Oct 02 12:34:47 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:34:47 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:34:47 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:34:47 compute-0 nova_compute[257802]:       <entry name="serial">de073a99-7033-4df3-b9bd-300429654683</entry>
Oct 02 12:34:47 compute-0 nova_compute[257802]:       <entry name="uuid">de073a99-7033-4df3-b9bd-300429654683</entry>
Oct 02 12:34:47 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     </system>
Oct 02 12:34:47 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:34:47 compute-0 nova_compute[257802]:   <os>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:   </os>
Oct 02 12:34:47 compute-0 nova_compute[257802]:   <features>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:   </features>
Oct 02 12:34:47 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:34:47 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:34:47 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:34:47 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/de073a99-7033-4df3-b9bd-300429654683_disk">
Oct 02 12:34:47 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:       </source>
Oct 02 12:34:47 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:34:47 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:34:47 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:34:47 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/de073a99-7033-4df3-b9bd-300429654683_disk.config">
Oct 02 12:34:47 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:       </source>
Oct 02 12:34:47 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:34:47 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:34:47 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:34:47 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:22:17:67"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:       <target dev="tape5011138-d4"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:34:47 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/de073a99-7033-4df3-b9bd-300429654683/console.log" append="off"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <video>
Oct 02 12:34:47 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     </video>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:34:47 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:34:47 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:34:47 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:34:47 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:34:47 compute-0 nova_compute[257802]: </domain>
Oct 02 12:34:47 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.105 2 DEBUG nova.compute.manager [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Preparing to wait for external event network-vif-plugged-e5011138-d4c0-4361-a71a-ade8340ff5bc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.106 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "de073a99-7033-4df3-b9bd-300429654683-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.106 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "de073a99-7033-4df3-b9bd-300429654683-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.106 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "de073a99-7033-4df3-b9bd-300429654683-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.107 2 DEBUG nova.virt.libvirt.vif [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:34:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-783180319',display_name='tempest-tempest.common.compute-instance-783180319-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-783180319-2',id=135,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c3b4ed8f5ff54d4cb9f232e285155ca0',ramdisk_id='',reservation_id='r-bdoqg484',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-822408607',owner_user_name='tempest-MultipleCreateTestJSON-822408607-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:41Z,user_data=None,user_id='f02a0ac23d9e44d5a6205e853818fa50',uuid=de073a99-7033-4df3-b9bd-300429654683,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e5011138-d4c0-4361-a71a-ade8340ff5bc", "address": "fa:16:3e:22:17:67", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5011138-d4", "ovs_interfaceid": "e5011138-d4c0-4361-a71a-ade8340ff5bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.107 2 DEBUG nova.network.os_vif_util [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Converting VIF {"id": "e5011138-d4c0-4361-a71a-ade8340ff5bc", "address": "fa:16:3e:22:17:67", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5011138-d4", "ovs_interfaceid": "e5011138-d4c0-4361-a71a-ade8340ff5bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.108 2 DEBUG nova.network.os_vif_util [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:17:67,bridge_name='br-int',has_traffic_filtering=True,id=e5011138-d4c0-4361-a71a-ade8340ff5bc,network=Network(71fb4dcc-12bb-458e-9241-e19e223ca96d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5011138-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.108 2 DEBUG os_vif [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:17:67,bridge_name='br-int',has_traffic_filtering=True,id=e5011138-d4c0-4361-a71a-ade8340ff5bc,network=Network(71fb4dcc-12bb-458e-9241-e19e223ca96d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5011138-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.109 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.110 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.114 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape5011138-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.115 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape5011138-d4, col_values=(('external_ids', {'iface-id': 'e5011138-d4c0-4361-a71a-ade8340ff5bc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:22:17:67', 'vm-uuid': 'de073a99-7033-4df3-b9bd-300429654683'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:47 compute-0 NetworkManager[44987]: <info>  [1759408487.1178] manager: (tape5011138-d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/277)
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.125 2 INFO os_vif [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:17:67,bridge_name='br-int',has_traffic_filtering=True,id=e5011138-d4c0-4361-a71a-ade8340ff5bc,network=Network(71fb4dcc-12bb-458e-9241-e19e223ca96d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5011138-d4')
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.185 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.186 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.187 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] No VIF found with MAC fa:16:3e:22:17:67, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.188 2 INFO nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Using config drive
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.224 2 DEBUG nova.storage.rbd_utils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] rbd image de073a99-7033-4df3-b9bd-300429654683_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:47 compute-0 ceph-mon[73607]: pgmap v2209: 305 pgs: 305 active+clean; 681 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 11 MiB/s wr, 382 op/s
Oct 02 12:34:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/41007933' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/86450503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/698035241' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/823528407' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:47.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:47 compute-0 sudo[340003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:34:47 compute-0 sudo[340003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:47 compute-0 sudo[340003]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:47 compute-0 sudo[340028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:34:47 compute-0 sudo[340028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:47 compute-0 sudo[340028]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2210: 305 pgs: 305 active+clean; 681 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 8.0 MiB/s wr, 322 op/s
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.742 2 INFO nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Creating config drive at /var/lib/nova/instances/de073a99-7033-4df3-b9bd-300429654683/disk.config
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.746 2 DEBUG oslo_concurrency.processutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/de073a99-7033-4df3-b9bd-300429654683/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0v9ffdtl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.876 2 DEBUG oslo_concurrency.processutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/de073a99-7033-4df3-b9bd-300429654683/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0v9ffdtl" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.911 2 DEBUG nova.storage.rbd_utils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] rbd image de073a99-7033-4df3-b9bd-300429654683_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.915 2 DEBUG oslo_concurrency.processutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/de073a99-7033-4df3-b9bd-300429654683/disk.config de073a99-7033-4df3-b9bd-300429654683_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.957 2 DEBUG nova.network.neutron [req-1c5a99c6-3d66-4f45-ad52-1d2dfc7115c4 req-bc76873c-7190-4bce-995b-cebaaafcbc24 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Updated VIF entry in instance network info cache for port e5011138-d4c0-4361-a71a-ade8340ff5bc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:34:47 compute-0 nova_compute[257802]: 2025-10-02 12:34:47.958 2 DEBUG nova.network.neutron [req-1c5a99c6-3d66-4f45-ad52-1d2dfc7115c4 req-bc76873c-7190-4bce-995b-cebaaafcbc24 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Updating instance_info_cache with network_info: [{"id": "e5011138-d4c0-4361-a71a-ade8340ff5bc", "address": "fa:16:3e:22:17:67", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5011138-d4", "ovs_interfaceid": "e5011138-d4c0-4361-a71a-ade8340ff5bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:34:48 compute-0 nova_compute[257802]: 2025-10-02 12:34:48.019 2 DEBUG oslo_concurrency.lockutils [req-1c5a99c6-3d66-4f45-ad52-1d2dfc7115c4 req-bc76873c-7190-4bce-995b-cebaaafcbc24 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-de073a99-7033-4df3-b9bd-300429654683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:34:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:34:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:48.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:34:48 compute-0 nova_compute[257802]: 2025-10-02 12:34:48.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e325 do_prune osdmap full prune enabled
Oct 02 12:34:48 compute-0 nova_compute[257802]: 2025-10-02 12:34:48.359 2 DEBUG oslo_concurrency.processutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/de073a99-7033-4df3-b9bd-300429654683/disk.config de073a99-7033-4df3-b9bd-300429654683_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:48 compute-0 nova_compute[257802]: 2025-10-02 12:34:48.360 2 INFO nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Deleting local config drive /var/lib/nova/instances/de073a99-7033-4df3-b9bd-300429654683/disk.config because it was imported into RBD.
Oct 02 12:34:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e326 e326: 3 total, 3 up, 3 in
Oct 02 12:34:48 compute-0 kernel: tape5011138-d4: entered promiscuous mode
Oct 02 12:34:48 compute-0 NetworkManager[44987]: <info>  [1759408488.4046] manager: (tape5011138-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/278)
Oct 02 12:34:48 compute-0 ovn_controller[148183]: 2025-10-02T12:34:48Z|00593|binding|INFO|Claiming lport e5011138-d4c0-4361-a71a-ade8340ff5bc for this chassis.
Oct 02 12:34:48 compute-0 ovn_controller[148183]: 2025-10-02T12:34:48Z|00594|binding|INFO|e5011138-d4c0-4361-a71a-ade8340ff5bc: Claiming fa:16:3e:22:17:67 10.100.0.6
Oct 02 12:34:48 compute-0 nova_compute[257802]: 2025-10-02 12:34:48.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:48 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e326: 3 total, 3 up, 3 in
Oct 02 12:34:48 compute-0 ovn_controller[148183]: 2025-10-02T12:34:48Z|00595|binding|INFO|Setting lport e5011138-d4c0-4361-a71a-ade8340ff5bc ovn-installed in OVS
Oct 02 12:34:48 compute-0 nova_compute[257802]: 2025-10-02 12:34:48.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:48 compute-0 ovn_controller[148183]: 2025-10-02T12:34:48Z|00596|binding|INFO|Setting lport e5011138-d4c0-4361-a71a-ade8340ff5bc up in Southbound
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:48.444 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:17:67 10.100.0.6'], port_security=['fa:16:3e:22:17:67 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'de073a99-7033-4df3-b9bd-300429654683', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3b4ed8f5ff54d4cb9f232e285155ca0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fe054fa7-d7e9-483c-baf2-6f0a11c4cd59', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3fe18a99-aad3-484c-abd2-6e2374240160, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=e5011138-d4c0-4361-a71a-ade8340ff5bc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:48.445 158261 INFO neutron.agent.ovn.metadata.agent [-] Port e5011138-d4c0-4361-a71a-ade8340ff5bc in datapath 71fb4dcc-12bb-458e-9241-e19e223ca96d bound to our chassis
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:48.446 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 71fb4dcc-12bb-458e-9241-e19e223ca96d
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:48.459 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4ae19748-c4bb-4d8d-ac6b-ed450f6de9be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:48.461 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap71fb4dcc-11 in ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:48.462 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap71fb4dcc-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:48.462 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[fc9e90f6-eb3f-4020-a0ba-0f0752a2559e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:48.463 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9f9a3b9c-c158-431d-8791-b13c0a0f8410]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:48 compute-0 systemd-machined[211836]: New machine qemu-68-instance-00000087.
Oct 02 12:34:48 compute-0 systemd[1]: Started Virtual Machine qemu-68-instance-00000087.
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:48.476 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[5c9932c2-85cb-4e1a-bba8-86a27cbb72d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:48 compute-0 systemd-udevd[340145]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:48.489 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[eeb832cc-6812-45a3-971f-1b13ba826347]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:48 compute-0 NetworkManager[44987]: <info>  [1759408488.5016] device (tape5011138-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:34:48 compute-0 NetworkManager[44987]: <info>  [1759408488.5025] device (tape5011138-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:34:48 compute-0 podman[340103]: 2025-10-02 12:34:48.507619414 +0000 UTC m=+0.064930467 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:34:48 compute-0 podman[340105]: 2025-10-02 12:34:48.508389483 +0000 UTC m=+0.063806490 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:48.527 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[bed062a7-d7c4-4b85-9531-476947eec52e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:48 compute-0 podman[340104]: 2025-10-02 12:34:48.528536708 +0000 UTC m=+0.083657348 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:34:48 compute-0 systemd-udevd[340161]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:34:48 compute-0 NetworkManager[44987]: <info>  [1759408488.5372] manager: (tap71fb4dcc-10): new Veth device (/org/freedesktop/NetworkManager/Devices/279)
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:48.532 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[408e128a-96f8-418f-8cb6-dc9f6ab7dec7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:48.567 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[4255e1df-98ec-4388-86ea-5410d587c6a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:48.572 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[56c57c43-a242-408b-9b98-2a78a971b0fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:48 compute-0 NetworkManager[44987]: <info>  [1759408488.5919] device (tap71fb4dcc-10): carrier: link connected
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:48.598 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[ea8bfc12-9441-40dc-bd5b-8cdf909a3a74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:48.613 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c282217a-a77e-41a8-9ee4-8bdc7e484399]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71fb4dcc-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:02:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 183], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655621, 'reachable_time': 28086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340195, 'error': None, 'target': 'ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:48.627 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9c82338d-4165-4889-a930-65223b9cd089]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe54:264'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 655621, 'tstamp': 655621}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340196, 'error': None, 'target': 'ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:48.643 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e5a774cf-9729-4908-8970-71283d4b7af8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71fb4dcc-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:02:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 183], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655621, 'reachable_time': 28086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 340197, 'error': None, 'target': 'ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:48.670 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d7a6e95a-a5ab-4618-94ac-3d72eea7a269]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:48.718 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7fe9256e-7c20-4905-9897-23f08fddcdd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:48.719 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71fb4dcc-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:48.720 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:48.720 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71fb4dcc-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:48 compute-0 nova_compute[257802]: 2025-10-02 12:34:48.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:48 compute-0 NetworkManager[44987]: <info>  [1759408488.7224] manager: (tap71fb4dcc-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/280)
Oct 02 12:34:48 compute-0 kernel: tap71fb4dcc-10: entered promiscuous mode
Oct 02 12:34:48 compute-0 nova_compute[257802]: 2025-10-02 12:34:48.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:48.724 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap71fb4dcc-10, col_values=(('external_ids', {'iface-id': 'c32f3592-2fbc-4d81-bf91-681fef1a2dd1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:48 compute-0 nova_compute[257802]: 2025-10-02 12:34:48.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:48 compute-0 ovn_controller[148183]: 2025-10-02T12:34:48Z|00597|binding|INFO|Releasing lport c32f3592-2fbc-4d81-bf91-681fef1a2dd1 from this chassis (sb_readonly=0)
Oct 02 12:34:48 compute-0 nova_compute[257802]: 2025-10-02 12:34:48.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:48.744 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/71fb4dcc-12bb-458e-9241-e19e223ca96d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/71fb4dcc-12bb-458e-9241-e19e223ca96d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:48.744 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ff3d33c5-6578-4db4-998a-767ee0d8dcce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:48.745 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-71fb4dcc-12bb-458e-9241-e19e223ca96d
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/71fb4dcc-12bb-458e-9241-e19e223ca96d.pid.haproxy
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 71fb4dcc-12bb-458e-9241-e19e223ca96d
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:34:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:48.746 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'env', 'PROCESS_TAG=haproxy-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/71fb4dcc-12bb-458e-9241-e19e223ca96d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:34:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:49 compute-0 podman[340229]: 2025-10-02 12:34:49.076776578 +0000 UTC m=+0.021731816 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:34:49 compute-0 podman[340229]: 2025-10-02 12:34:49.242222735 +0000 UTC m=+0.187177953 container create 71c2acdc2df904736dd7d7f0a19d486bdd46d565c3bb1af1cd4c469cfa8bdcbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:34:49 compute-0 systemd[1]: Started libpod-conmon-71c2acdc2df904736dd7d7f0a19d486bdd46d565c3bb1af1cd4c469cfa8bdcbb.scope.
Oct 02 12:34:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b1937b00ceff4212d6fe3d5315688d03ea88bd3b6af9d0d4f9e16bc242cb354/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:34:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:34:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:49.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:34:49 compute-0 podman[340229]: 2025-10-02 12:34:49.410502082 +0000 UTC m=+0.355457320 container init 71c2acdc2df904736dd7d7f0a19d486bdd46d565c3bb1af1cd4c469cfa8bdcbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:34:49 compute-0 podman[340229]: 2025-10-02 12:34:49.415790592 +0000 UTC m=+0.360745810 container start 71c2acdc2df904736dd7d7f0a19d486bdd46d565c3bb1af1cd4c469cfa8bdcbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 12:34:49 compute-0 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[340244]: [NOTICE]   (340248) : New worker (340250) forked
Oct 02 12:34:49 compute-0 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[340244]: [NOTICE]   (340248) : Loading success.
Oct 02 12:34:49 compute-0 nova_compute[257802]: 2025-10-02 12:34:49.510 2 DEBUG nova.compute.manager [req-a8913efc-3e1b-48a2-aa77-fb1099ca7f4f req-af4cedcf-860d-422b-a296-4faadc6b7380 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Received event network-vif-plugged-e5011138-d4c0-4361-a71a-ade8340ff5bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:49 compute-0 nova_compute[257802]: 2025-10-02 12:34:49.511 2 DEBUG oslo_concurrency.lockutils [req-a8913efc-3e1b-48a2-aa77-fb1099ca7f4f req-af4cedcf-860d-422b-a296-4faadc6b7380 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "de073a99-7033-4df3-b9bd-300429654683-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:49 compute-0 nova_compute[257802]: 2025-10-02 12:34:49.511 2 DEBUG oslo_concurrency.lockutils [req-a8913efc-3e1b-48a2-aa77-fb1099ca7f4f req-af4cedcf-860d-422b-a296-4faadc6b7380 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "de073a99-7033-4df3-b9bd-300429654683-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:49 compute-0 nova_compute[257802]: 2025-10-02 12:34:49.511 2 DEBUG oslo_concurrency.lockutils [req-a8913efc-3e1b-48a2-aa77-fb1099ca7f4f req-af4cedcf-860d-422b-a296-4faadc6b7380 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "de073a99-7033-4df3-b9bd-300429654683-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:49 compute-0 nova_compute[257802]: 2025-10-02 12:34:49.511 2 DEBUG nova.compute.manager [req-a8913efc-3e1b-48a2-aa77-fb1099ca7f4f req-af4cedcf-860d-422b-a296-4faadc6b7380 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Processing event network-vif-plugged-e5011138-d4c0-4361-a71a-ade8340ff5bc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:34:49 compute-0 ceph-mon[73607]: pgmap v2210: 305 pgs: 305 active+clean; 681 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 8.0 MiB/s wr, 322 op/s
Oct 02 12:34:49 compute-0 ceph-mon[73607]: osdmap e326: 3 total, 3 up, 3 in
Oct 02 12:34:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2212: 305 pgs: 305 active+clean; 674 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 7.2 MiB/s wr, 493 op/s
Oct 02 12:34:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:49.837 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:34:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:49.838 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:34:49 compute-0 nova_compute[257802]: 2025-10-02 12:34:49.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:34:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:50.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:34:50 compute-0 nova_compute[257802]: 2025-10-02 12:34:50.252 2 DEBUG nova.compute.manager [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:34:50 compute-0 nova_compute[257802]: 2025-10-02 12:34:50.253 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408490.2519698, de073a99-7033-4df3-b9bd-300429654683 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:34:50 compute-0 nova_compute[257802]: 2025-10-02 12:34:50.253 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: de073a99-7033-4df3-b9bd-300429654683] VM Started (Lifecycle Event)
Oct 02 12:34:50 compute-0 nova_compute[257802]: 2025-10-02 12:34:50.259 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:34:50 compute-0 nova_compute[257802]: 2025-10-02 12:34:50.263 2 INFO nova.virt.libvirt.driver [-] [instance: de073a99-7033-4df3-b9bd-300429654683] Instance spawned successfully.
Oct 02 12:34:50 compute-0 nova_compute[257802]: 2025-10-02 12:34:50.263 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:34:50 compute-0 nova_compute[257802]: 2025-10-02 12:34:50.324 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: de073a99-7033-4df3-b9bd-300429654683] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:50 compute-0 nova_compute[257802]: 2025-10-02 12:34:50.327 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: de073a99-7033-4df3-b9bd-300429654683] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:34:50 compute-0 nova_compute[257802]: 2025-10-02 12:34:50.337 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:50 compute-0 nova_compute[257802]: 2025-10-02 12:34:50.338 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:50 compute-0 nova_compute[257802]: 2025-10-02 12:34:50.338 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:50 compute-0 nova_compute[257802]: 2025-10-02 12:34:50.338 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:50 compute-0 nova_compute[257802]: 2025-10-02 12:34:50.339 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:50 compute-0 nova_compute[257802]: 2025-10-02 12:34:50.339 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:50 compute-0 nova_compute[257802]: 2025-10-02 12:34:50.363 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: de073a99-7033-4df3-b9bd-300429654683] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:34:50 compute-0 nova_compute[257802]: 2025-10-02 12:34:50.363 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408490.2551606, de073a99-7033-4df3-b9bd-300429654683 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:34:50 compute-0 nova_compute[257802]: 2025-10-02 12:34:50.364 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: de073a99-7033-4df3-b9bd-300429654683] VM Paused (Lifecycle Event)
Oct 02 12:34:50 compute-0 nova_compute[257802]: 2025-10-02 12:34:50.401 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: de073a99-7033-4df3-b9bd-300429654683] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:50 compute-0 nova_compute[257802]: 2025-10-02 12:34:50.404 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408490.2574844, de073a99-7033-4df3-b9bd-300429654683 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:34:50 compute-0 nova_compute[257802]: 2025-10-02 12:34:50.404 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: de073a99-7033-4df3-b9bd-300429654683] VM Resumed (Lifecycle Event)
Oct 02 12:34:50 compute-0 nova_compute[257802]: 2025-10-02 12:34:50.445 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: de073a99-7033-4df3-b9bd-300429654683] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:50 compute-0 nova_compute[257802]: 2025-10-02 12:34:50.448 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: de073a99-7033-4df3-b9bd-300429654683] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:34:50 compute-0 nova_compute[257802]: 2025-10-02 12:34:50.475 2 INFO nova.compute.manager [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Took 8.46 seconds to spawn the instance on the hypervisor.
Oct 02 12:34:50 compute-0 nova_compute[257802]: 2025-10-02 12:34:50.475 2 DEBUG nova.compute.manager [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:50 compute-0 nova_compute[257802]: 2025-10-02 12:34:50.476 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: de073a99-7033-4df3-b9bd-300429654683] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:34:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e326 do_prune osdmap full prune enabled
Oct 02 12:34:50 compute-0 nova_compute[257802]: 2025-10-02 12:34:50.575 2 INFO nova.compute.manager [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Took 9.94 seconds to build instance.
Oct 02 12:34:50 compute-0 nova_compute[257802]: 2025-10-02 12:34:50.597 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "de073a99-7033-4df3-b9bd-300429654683" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.252s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e327 e327: 3 total, 3 up, 3 in
Oct 02 12:34:50 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e327: 3 total, 3 up, 3 in
Oct 02 12:34:50 compute-0 ceph-mon[73607]: pgmap v2212: 305 pgs: 305 active+clean; 674 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 7.2 MiB/s wr, 493 op/s
Oct 02 12:34:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3434496002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:51.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:51 compute-0 nova_compute[257802]: 2025-10-02 12:34:51.607 2 DEBUG nova.compute.manager [req-160e4d8d-6788-49a9-8bd7-04fb3854ec0c req-8a5911c0-1932-4f58-b628-375d73dff0ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Received event network-vif-plugged-e5011138-d4c0-4361-a71a-ade8340ff5bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:51 compute-0 nova_compute[257802]: 2025-10-02 12:34:51.607 2 DEBUG oslo_concurrency.lockutils [req-160e4d8d-6788-49a9-8bd7-04fb3854ec0c req-8a5911c0-1932-4f58-b628-375d73dff0ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "de073a99-7033-4df3-b9bd-300429654683-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:51 compute-0 nova_compute[257802]: 2025-10-02 12:34:51.607 2 DEBUG oslo_concurrency.lockutils [req-160e4d8d-6788-49a9-8bd7-04fb3854ec0c req-8a5911c0-1932-4f58-b628-375d73dff0ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "de073a99-7033-4df3-b9bd-300429654683-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:51 compute-0 nova_compute[257802]: 2025-10-02 12:34:51.608 2 DEBUG oslo_concurrency.lockutils [req-160e4d8d-6788-49a9-8bd7-04fb3854ec0c req-8a5911c0-1932-4f58-b628-375d73dff0ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "de073a99-7033-4df3-b9bd-300429654683-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:51 compute-0 nova_compute[257802]: 2025-10-02 12:34:51.608 2 DEBUG nova.compute.manager [req-160e4d8d-6788-49a9-8bd7-04fb3854ec0c req-8a5911c0-1932-4f58-b628-375d73dff0ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] No waiting events found dispatching network-vif-plugged-e5011138-d4c0-4361-a71a-ade8340ff5bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:34:51 compute-0 nova_compute[257802]: 2025-10-02 12:34:51.608 2 WARNING nova.compute.manager [req-160e4d8d-6788-49a9-8bd7-04fb3854ec0c req-8a5911c0-1932-4f58-b628-375d73dff0ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Received unexpected event network-vif-plugged-e5011138-d4c0-4361-a71a-ade8340ff5bc for instance with vm_state active and task_state None.
Oct 02 12:34:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2214: 305 pgs: 305 active+clean; 689 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 7.8 MiB/s wr, 589 op/s
Oct 02 12:34:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e327 do_prune osdmap full prune enabled
Oct 02 12:34:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:52.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:52 compute-0 nova_compute[257802]: 2025-10-02 12:34:52.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:52 compute-0 ceph-mon[73607]: osdmap e327: 3 total, 3 up, 3 in
Oct 02 12:34:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e328 e328: 3 total, 3 up, 3 in
Oct 02 12:34:52 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e328: 3 total, 3 up, 3 in
Oct 02 12:34:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:52.840 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:53 compute-0 ceph-mon[73607]: pgmap v2214: 305 pgs: 305 active+clean; 689 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 7.8 MiB/s wr, 589 op/s
Oct 02 12:34:53 compute-0 ceph-mon[73607]: osdmap e328: 3 total, 3 up, 3 in
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:53.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:53 compute-0 sudo[340303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:34:53 compute-0 sudo[340303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:53 compute-0 sudo[340303]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.484 2 DEBUG oslo_concurrency.lockutils [None req-1de01f1f-289f-4f50-83ec-ca6c5da6a69a f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "de073a99-7033-4df3-b9bd-300429654683" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.486 2 DEBUG oslo_concurrency.lockutils [None req-1de01f1f-289f-4f50-83ec-ca6c5da6a69a f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "de073a99-7033-4df3-b9bd-300429654683" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.486 2 DEBUG oslo_concurrency.lockutils [None req-1de01f1f-289f-4f50-83ec-ca6c5da6a69a f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "de073a99-7033-4df3-b9bd-300429654683-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.487 2 DEBUG oslo_concurrency.lockutils [None req-1de01f1f-289f-4f50-83ec-ca6c5da6a69a f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "de073a99-7033-4df3-b9bd-300429654683-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.487 2 DEBUG oslo_concurrency.lockutils [None req-1de01f1f-289f-4f50-83ec-ca6c5da6a69a f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "de073a99-7033-4df3-b9bd-300429654683-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.488 2 INFO nova.compute.manager [None req-1de01f1f-289f-4f50-83ec-ca6c5da6a69a f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Terminating instance
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.489 2 DEBUG nova.compute.manager [None req-1de01f1f-289f-4f50-83ec-ca6c5da6a69a f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:34:53 compute-0 sudo[340334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:34:53 compute-0 sudo[340334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:53 compute-0 sudo[340334]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:53 compute-0 kernel: tape5011138-d4 (unregistering): left promiscuous mode
Oct 02 12:34:53 compute-0 NetworkManager[44987]: <info>  [1759408493.5515] device (tape5011138-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:53 compute-0 ovn_controller[148183]: 2025-10-02T12:34:53Z|00598|binding|INFO|Releasing lport e5011138-d4c0-4361-a71a-ade8340ff5bc from this chassis (sb_readonly=0)
Oct 02 12:34:53 compute-0 ovn_controller[148183]: 2025-10-02T12:34:53Z|00599|binding|INFO|Setting lport e5011138-d4c0-4361-a71a-ade8340ff5bc down in Southbound
Oct 02 12:34:53 compute-0 ovn_controller[148183]: 2025-10-02T12:34:53Z|00600|binding|INFO|Removing iface tape5011138-d4 ovn-installed in OVS
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:53 compute-0 podman[340327]: 2025-10-02 12:34:53.577958071 +0000 UTC m=+0.105628567 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 12:34:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:53.616 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:17:67 10.100.0.6'], port_security=['fa:16:3e:22:17:67 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'de073a99-7033-4df3-b9bd-300429654683', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3b4ed8f5ff54d4cb9f232e285155ca0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fe054fa7-d7e9-483c-baf2-6f0a11c4cd59', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3fe18a99-aad3-484c-abd2-6e2374240160, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=e5011138-d4c0-4361-a71a-ade8340ff5bc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:34:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:53.618 158261 INFO neutron.agent.ovn.metadata.agent [-] Port e5011138-d4c0-4361-a71a-ade8340ff5bc in datapath 71fb4dcc-12bb-458e-9241-e19e223ca96d unbound from our chassis
Oct 02 12:34:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:53.619 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 71fb4dcc-12bb-458e-9241-e19e223ca96d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:34:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:53.620 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7eef7971-211a-4857-a1ae-a3d56f766be2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:53.620 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d namespace which is not needed anymore
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:53 compute-0 sudo[340373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:34:53 compute-0 sudo[340373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:53 compute-0 sudo[340373]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:53 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000087.scope: Deactivated successfully.
Oct 02 12:34:53 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000087.scope: Consumed 4.808s CPU time.
Oct 02 12:34:53 compute-0 systemd-machined[211836]: Machine qemu-68-instance-00000087 terminated.
Oct 02 12:34:53 compute-0 sudo[340414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:34:53 compute-0 sudo[340414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.734 2 INFO nova.virt.libvirt.driver [-] [instance: de073a99-7033-4df3-b9bd-300429654683] Instance destroyed successfully.
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.734 2 DEBUG nova.objects.instance [None req-1de01f1f-289f-4f50-83ec-ca6c5da6a69a f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lazy-loading 'resources' on Instance uuid de073a99-7033-4df3-b9bd-300429654683 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2216: 305 pgs: 305 active+clean; 714 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 6.2 MiB/s wr, 664 op/s
Oct 02 12:34:53 compute-0 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[340244]: [NOTICE]   (340248) : haproxy version is 2.8.14-c23fe91
Oct 02 12:34:53 compute-0 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[340244]: [NOTICE]   (340248) : path to executable is /usr/sbin/haproxy
Oct 02 12:34:53 compute-0 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[340244]: [WARNING]  (340248) : Exiting Master process...
Oct 02 12:34:53 compute-0 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[340244]: [WARNING]  (340248) : Exiting Master process...
Oct 02 12:34:53 compute-0 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[340244]: [ALERT]    (340248) : Current worker (340250) exited with code 143 (Terminated)
Oct 02 12:34:53 compute-0 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[340244]: [WARNING]  (340248) : All workers exited. Exiting... (0)
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.750 2 DEBUG nova.virt.libvirt.vif [None req-1de01f1f-289f-4f50-83ec-ca6c5da6a69a f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:34:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-783180319',display_name='tempest-tempest.common.compute-instance-783180319-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-783180319-2',id=135,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-10-02T12:34:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c3b4ed8f5ff54d4cb9f232e285155ca0',ramdisk_id='',reservation_id='r-bdoqg484',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-822408607',owner_user_name='tempest-MultipleCreateTestJSON-822408607-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:34:50Z,user_data=None,user_id='f02a0ac23d9e44d5a6205e853818fa50',uuid=de073a99-7033-4df3-b9bd-300429654683,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e5011138-d4c0-4361-a71a-ade8340ff5bc", "address": "fa:16:3e:22:17:67", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5011138-d4", "ovs_interfaceid": "e5011138-d4c0-4361-a71a-ade8340ff5bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.750 2 DEBUG nova.network.os_vif_util [None req-1de01f1f-289f-4f50-83ec-ca6c5da6a69a f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Converting VIF {"id": "e5011138-d4c0-4361-a71a-ade8340ff5bc", "address": "fa:16:3e:22:17:67", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5011138-d4", "ovs_interfaceid": "e5011138-d4c0-4361-a71a-ade8340ff5bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:34:53 compute-0 systemd[1]: libpod-71c2acdc2df904736dd7d7f0a19d486bdd46d565c3bb1af1cd4c469cfa8bdcbb.scope: Deactivated successfully.
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.751 2 DEBUG nova.network.os_vif_util [None req-1de01f1f-289f-4f50-83ec-ca6c5da6a69a f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:17:67,bridge_name='br-int',has_traffic_filtering=True,id=e5011138-d4c0-4361-a71a-ade8340ff5bc,network=Network(71fb4dcc-12bb-458e-9241-e19e223ca96d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5011138-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.751 2 DEBUG os_vif [None req-1de01f1f-289f-4f50-83ec-ca6c5da6a69a f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:17:67,bridge_name='br-int',has_traffic_filtering=True,id=e5011138-d4c0-4361-a71a-ade8340ff5bc,network=Network(71fb4dcc-12bb-458e-9241-e19e223ca96d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5011138-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.753 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5011138-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:53 compute-0 conmon[340244]: conmon 71c2acdc2df904736dd7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-71c2acdc2df904736dd7d7f0a19d486bdd46d565c3bb1af1cd4c469cfa8bdcbb.scope/container/memory.events
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.758 2 INFO os_vif [None req-1de01f1f-289f-4f50-83ec-ca6c5da6a69a f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:17:67,bridge_name='br-int',has_traffic_filtering=True,id=e5011138-d4c0-4361-a71a-ade8340ff5bc,network=Network(71fb4dcc-12bb-458e-9241-e19e223ca96d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5011138-d4')
Oct 02 12:34:53 compute-0 podman[340452]: 2025-10-02 12:34:53.75930572 +0000 UTC m=+0.048787720 container died 71c2acdc2df904736dd7d7f0a19d486bdd46d565c3bb1af1cd4c469cfa8bdcbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.780 2 DEBUG nova.compute.manager [req-234eb45a-a0af-458d-9fdd-4469ede866b9 req-f38e80d4-d7be-43c3-8ed1-70a54563fc6c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Received event network-vif-unplugged-e5011138-d4c0-4361-a71a-ade8340ff5bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.781 2 DEBUG oslo_concurrency.lockutils [req-234eb45a-a0af-458d-9fdd-4469ede866b9 req-f38e80d4-d7be-43c3-8ed1-70a54563fc6c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "de073a99-7033-4df3-b9bd-300429654683-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.781 2 DEBUG oslo_concurrency.lockutils [req-234eb45a-a0af-458d-9fdd-4469ede866b9 req-f38e80d4-d7be-43c3-8ed1-70a54563fc6c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "de073a99-7033-4df3-b9bd-300429654683-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.781 2 DEBUG oslo_concurrency.lockutils [req-234eb45a-a0af-458d-9fdd-4469ede866b9 req-f38e80d4-d7be-43c3-8ed1-70a54563fc6c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "de073a99-7033-4df3-b9bd-300429654683-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.782 2 DEBUG nova.compute.manager [req-234eb45a-a0af-458d-9fdd-4469ede866b9 req-f38e80d4-d7be-43c3-8ed1-70a54563fc6c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] No waiting events found dispatching network-vif-unplugged-e5011138-d4c0-4361-a71a-ade8340ff5bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.782 2 DEBUG nova.compute.manager [req-234eb45a-a0af-458d-9fdd-4469ede866b9 req-f38e80d4-d7be-43c3-8ed1-70a54563fc6c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Received event network-vif-unplugged-e5011138-d4c0-4361-a71a-ade8340ff5bc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:34:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-71c2acdc2df904736dd7d7f0a19d486bdd46d565c3bb1af1cd4c469cfa8bdcbb-userdata-shm.mount: Deactivated successfully.
Oct 02 12:34:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b1937b00ceff4212d6fe3d5315688d03ea88bd3b6af9d0d4f9e16bc242cb354-merged.mount: Deactivated successfully.
Oct 02 12:34:53 compute-0 podman[340452]: 2025-10-02 12:34:53.821436587 +0000 UTC m=+0.110918597 container cleanup 71c2acdc2df904736dd7d7f0a19d486bdd46d565c3bb1af1cd4c469cfa8bdcbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:34:53 compute-0 systemd[1]: libpod-conmon-71c2acdc2df904736dd7d7f0a19d486bdd46d565c3bb1af1cd4c469cfa8bdcbb.scope: Deactivated successfully.
Oct 02 12:34:53 compute-0 podman[340508]: 2025-10-02 12:34:53.904309855 +0000 UTC m=+0.053412064 container remove 71c2acdc2df904736dd7d7f0a19d486bdd46d565c3bb1af1cd4c469cfa8bdcbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:34:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:53.915 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d4ae17ad-e3fc-4bb4-9b56-5c1f46359024]: (4, ('Thu Oct  2 12:34:53 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d (71c2acdc2df904736dd7d7f0a19d486bdd46d565c3bb1af1cd4c469cfa8bdcbb)\n71c2acdc2df904736dd7d7f0a19d486bdd46d565c3bb1af1cd4c469cfa8bdcbb\nThu Oct  2 12:34:53 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d (71c2acdc2df904736dd7d7f0a19d486bdd46d565c3bb1af1cd4c469cfa8bdcbb)\n71c2acdc2df904736dd7d7f0a19d486bdd46d565c3bb1af1cd4c469cfa8bdcbb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:53.918 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b6173d12-7af3-4c45-b80e-3407bcf37357]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:53.920 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71fb4dcc-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:53 compute-0 kernel: tap71fb4dcc-10: left promiscuous mode
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:53.935 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[03163a8a-2d13-4e01-ac01-c4678d6135c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:53 compute-0 nova_compute[257802]: 2025-10-02 12:34:53.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:53.970 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ce14841b-3419-452c-81b2-977c4e0d14f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:53.971 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[06f96736-af57-4eef-96a2-9926f09fe503]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:53.988 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7ac16591-48d1-4da2-a38b-a06c63270c4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655614, 'reachable_time': 19852, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340539, 'error': None, 'target': 'ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d71fb4dcc\x2d12bb\x2d458e\x2d9241\x2de19e223ca96d.mount: Deactivated successfully.
Oct 02 12:34:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:53.991 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:34:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:34:53.991 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[29feb38d-8868-41a8-8751-9a69853e3478]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:54.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:54 compute-0 sudo[340414]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:54 compute-0 sudo[340557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:34:54 compute-0 sudo[340557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:54 compute-0 sudo[340557]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:54 compute-0 sudo[340582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:34:54 compute-0 sudo[340582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:54 compute-0 sudo[340582]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:34:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:34:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:34:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:34:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.013842541701563373 of space, bias 1.0, pg target 4.152762510469012 quantized to 32 (current 32)
Oct 02 12:34:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:34:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002162686988181649 of space, bias 1.0, pg target 0.6401553485017681 quantized to 32 (current 32)
Oct 02 12:34:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:34:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:34:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:34:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0032054078261678763 of space, bias 1.0, pg target 0.9488007165456914 quantized to 32 (current 32)
Oct 02 12:34:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:34:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Oct 02 12:34:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:34:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:34:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:34:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.0002689954401637819 quantized to 32 (current 32)
Oct 02 12:34:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:34:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Oct 02 12:34:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:34:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:34:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:34:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Oct 02 12:34:54 compute-0 sudo[340607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:34:54 compute-0 sudo[340607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:54 compute-0 sudo[340607]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:54 compute-0 sudo[340632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- inventory --format=json-pretty --filter-for-batch
Oct 02 12:34:54 compute-0 sudo[340632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:54 compute-0 ceph-mon[73607]: pgmap v2216: 305 pgs: 305 active+clean; 714 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 6.2 MiB/s wr, 664 op/s
Oct 02 12:34:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:34:55 compute-0 podman[340702]: 2025-10-02 12:34:55.039990967 +0000 UTC m=+0.049749274 container create 459526c1c22e9b298d25bf6dc99d9e9eff534a5dbde330a0ce2258d7b290d06e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct 02 12:34:55 compute-0 systemd[1]: Started libpod-conmon-459526c1c22e9b298d25bf6dc99d9e9eff534a5dbde330a0ce2258d7b290d06e.scope.
Oct 02 12:34:55 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:34:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:34:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:34:55 compute-0 podman[340702]: 2025-10-02 12:34:55.01410121 +0000 UTC m=+0.023859537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:34:55 compute-0 podman[340702]: 2025-10-02 12:34:55.123054579 +0000 UTC m=+0.132812916 container init 459526c1c22e9b298d25bf6dc99d9e9eff534a5dbde330a0ce2258d7b290d06e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_galois, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:34:55 compute-0 podman[340702]: 2025-10-02 12:34:55.131492366 +0000 UTC m=+0.141250673 container start 459526c1c22e9b298d25bf6dc99d9e9eff534a5dbde330a0ce2258d7b290d06e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_galois, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 12:34:55 compute-0 ecstatic_galois[340718]: 167 167
Oct 02 12:34:55 compute-0 systemd[1]: libpod-459526c1c22e9b298d25bf6dc99d9e9eff534a5dbde330a0ce2258d7b290d06e.scope: Deactivated successfully.
Oct 02 12:34:55 compute-0 conmon[340718]: conmon 459526c1c22e9b298d25 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-459526c1c22e9b298d25bf6dc99d9e9eff534a5dbde330a0ce2258d7b290d06e.scope/container/memory.events
Oct 02 12:34:55 compute-0 podman[340702]: 2025-10-02 12:34:55.138890858 +0000 UTC m=+0.148649165 container attach 459526c1c22e9b298d25bf6dc99d9e9eff534a5dbde330a0ce2258d7b290d06e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_galois, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 12:34:55 compute-0 podman[340702]: 2025-10-02 12:34:55.139157264 +0000 UTC m=+0.148915561 container died 459526c1c22e9b298d25bf6dc99d9e9eff534a5dbde330a0ce2258d7b290d06e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 12:34:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-a498ae1ffe0f2a5d318f7c186d4e29da1ec15827cd654968d3c388393c28b616-merged.mount: Deactivated successfully.
Oct 02 12:34:55 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:34:55 compute-0 podman[340702]: 2025-10-02 12:34:55.203491616 +0000 UTC m=+0.213249933 container remove 459526c1c22e9b298d25bf6dc99d9e9eff534a5dbde330a0ce2258d7b290d06e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_galois, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:34:55 compute-0 systemd[1]: libpod-conmon-459526c1c22e9b298d25bf6dc99d9e9eff534a5dbde330a0ce2258d7b290d06e.scope: Deactivated successfully.
Oct 02 12:34:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:34:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1765944482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:34:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:55.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:34:55 compute-0 nova_compute[257802]: 2025-10-02 12:34:55.410 2 INFO nova.virt.libvirt.driver [None req-1de01f1f-289f-4f50-83ec-ca6c5da6a69a f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Deleting instance files /var/lib/nova/instances/de073a99-7033-4df3-b9bd-300429654683_del
Oct 02 12:34:55 compute-0 nova_compute[257802]: 2025-10-02 12:34:55.411 2 INFO nova.virt.libvirt.driver [None req-1de01f1f-289f-4f50-83ec-ca6c5da6a69a f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Deletion of /var/lib/nova/instances/de073a99-7033-4df3-b9bd-300429654683_del complete
Oct 02 12:34:55 compute-0 podman[340742]: 2025-10-02 12:34:55.416064823 +0000 UTC m=+0.069597513 container create 2400d40a6be3fea15d1e1227e465ac961033f4d5e5f79f22e6f0db7255879636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_williams, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 12:34:55 compute-0 systemd[1]: Started libpod-conmon-2400d40a6be3fea15d1e1227e465ac961033f4d5e5f79f22e6f0db7255879636.scope.
Oct 02 12:34:55 compute-0 podman[340742]: 2025-10-02 12:34:55.381703758 +0000 UTC m=+0.035236538 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:34:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/606a50099066cc0e2a6db6184009d74190babe74e3a5d8408548cea3c2debf3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/606a50099066cc0e2a6db6184009d74190babe74e3a5d8408548cea3c2debf3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/606a50099066cc0e2a6db6184009d74190babe74e3a5d8408548cea3c2debf3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/606a50099066cc0e2a6db6184009d74190babe74e3a5d8408548cea3c2debf3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:34:55 compute-0 nova_compute[257802]: 2025-10-02 12:34:55.523 2 INFO nova.compute.manager [None req-1de01f1f-289f-4f50-83ec-ca6c5da6a69a f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Took 2.03 seconds to destroy the instance on the hypervisor.
Oct 02 12:34:55 compute-0 nova_compute[257802]: 2025-10-02 12:34:55.524 2 DEBUG oslo.service.loopingcall [None req-1de01f1f-289f-4f50-83ec-ca6c5da6a69a f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:34:55 compute-0 nova_compute[257802]: 2025-10-02 12:34:55.524 2 DEBUG nova.compute.manager [-] [instance: de073a99-7033-4df3-b9bd-300429654683] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:34:55 compute-0 nova_compute[257802]: 2025-10-02 12:34:55.525 2 DEBUG nova.network.neutron [-] [instance: de073a99-7033-4df3-b9bd-300429654683] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:34:55 compute-0 podman[340742]: 2025-10-02 12:34:55.54654197 +0000 UTC m=+0.200074750 container init 2400d40a6be3fea15d1e1227e465ac961033f4d5e5f79f22e6f0db7255879636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_williams, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:34:55 compute-0 podman[340742]: 2025-10-02 12:34:55.557223713 +0000 UTC m=+0.210756413 container start 2400d40a6be3fea15d1e1227e465ac961033f4d5e5f79f22e6f0db7255879636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:34:55 compute-0 podman[340742]: 2025-10-02 12:34:55.677614083 +0000 UTC m=+0.331146803 container attach 2400d40a6be3fea15d1e1227e465ac961033f4d5e5f79f22e6f0db7255879636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 12:34:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2217: 305 pgs: 305 active+clean; 698 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 6.4 MiB/s wr, 570 op/s
Oct 02 12:34:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:56.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:56 compute-0 nova_compute[257802]: 2025-10-02 12:34:56.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:56 compute-0 nova_compute[257802]: 2025-10-02 12:34:56.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:34:56 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:34:56 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:34:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/654346124' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:34:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/654346124' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:34:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1765944482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/763028527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:56 compute-0 nova_compute[257802]: 2025-10-02 12:34:56.140 2 DEBUG nova.compute.manager [req-3b172a8f-95c8-43c7-8aa7-f2ff379cccfe req-2fb6152c-8fc4-43f2-a521-f257ee556e2d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Received event network-vif-plugged-e5011138-d4c0-4361-a71a-ade8340ff5bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:56 compute-0 nova_compute[257802]: 2025-10-02 12:34:56.140 2 DEBUG oslo_concurrency.lockutils [req-3b172a8f-95c8-43c7-8aa7-f2ff379cccfe req-2fb6152c-8fc4-43f2-a521-f257ee556e2d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "de073a99-7033-4df3-b9bd-300429654683-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:56 compute-0 nova_compute[257802]: 2025-10-02 12:34:56.140 2 DEBUG oslo_concurrency.lockutils [req-3b172a8f-95c8-43c7-8aa7-f2ff379cccfe req-2fb6152c-8fc4-43f2-a521-f257ee556e2d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "de073a99-7033-4df3-b9bd-300429654683-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:56 compute-0 nova_compute[257802]: 2025-10-02 12:34:56.141 2 DEBUG oslo_concurrency.lockutils [req-3b172a8f-95c8-43c7-8aa7-f2ff379cccfe req-2fb6152c-8fc4-43f2-a521-f257ee556e2d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "de073a99-7033-4df3-b9bd-300429654683-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:56 compute-0 nova_compute[257802]: 2025-10-02 12:34:56.141 2 DEBUG nova.compute.manager [req-3b172a8f-95c8-43c7-8aa7-f2ff379cccfe req-2fb6152c-8fc4-43f2-a521-f257ee556e2d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] No waiting events found dispatching network-vif-plugged-e5011138-d4c0-4361-a71a-ade8340ff5bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:34:56 compute-0 nova_compute[257802]: 2025-10-02 12:34:56.141 2 WARNING nova.compute.manager [req-3b172a8f-95c8-43c7-8aa7-f2ff379cccfe req-2fb6152c-8fc4-43f2-a521-f257ee556e2d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Received unexpected event network-vif-plugged-e5011138-d4c0-4361-a71a-ade8340ff5bc for instance with vm_state active and task_state deleting.
Oct 02 12:34:56 compute-0 nova_compute[257802]: 2025-10-02 12:34:56.306 2 DEBUG nova.network.neutron [-] [instance: de073a99-7033-4df3-b9bd-300429654683] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:34:56 compute-0 nova_compute[257802]: 2025-10-02 12:34:56.311 2 DEBUG nova.compute.manager [req-4f436c1a-3ec0-43dc-b2ce-2706dd17a94e req-829a2de1-e64e-4109-abf0-29bafe7e7322 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Received event network-vif-deleted-e5011138-d4c0-4361-a71a-ade8340ff5bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:56 compute-0 nova_compute[257802]: 2025-10-02 12:34:56.311 2 INFO nova.compute.manager [req-4f436c1a-3ec0-43dc-b2ce-2706dd17a94e req-829a2de1-e64e-4109-abf0-29bafe7e7322 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Neutron deleted interface e5011138-d4c0-4361-a71a-ade8340ff5bc; detaching it from the instance and deleting it from the info cache
Oct 02 12:34:56 compute-0 nova_compute[257802]: 2025-10-02 12:34:56.312 2 DEBUG nova.network.neutron [req-4f436c1a-3ec0-43dc-b2ce-2706dd17a94e req-829a2de1-e64e-4109-abf0-29bafe7e7322 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:34:56 compute-0 nova_compute[257802]: 2025-10-02 12:34:56.549 2 INFO nova.compute.manager [-] [instance: de073a99-7033-4df3-b9bd-300429654683] Took 1.02 seconds to deallocate network for instance.
Oct 02 12:34:56 compute-0 nova_compute[257802]: 2025-10-02 12:34:56.556 2 DEBUG nova.compute.manager [req-4f436c1a-3ec0-43dc-b2ce-2706dd17a94e req-829a2de1-e64e-4109-abf0-29bafe7e7322 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: de073a99-7033-4df3-b9bd-300429654683] Detach interface failed, port_id=e5011138-d4c0-4361-a71a-ade8340ff5bc, reason: Instance de073a99-7033-4df3-b9bd-300429654683 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:34:56 compute-0 nova_compute[257802]: 2025-10-02 12:34:56.714 2 DEBUG oslo_concurrency.lockutils [None req-1de01f1f-289f-4f50-83ec-ca6c5da6a69a f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:56 compute-0 nova_compute[257802]: 2025-10-02 12:34:56.715 2 DEBUG oslo_concurrency.lockutils [None req-1de01f1f-289f-4f50-83ec-ca6c5da6a69a f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:56 compute-0 nova_compute[257802]: 2025-10-02 12:34:56.823 2 DEBUG oslo_concurrency.processutils [None req-1de01f1f-289f-4f50-83ec-ca6c5da6a69a f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:56 compute-0 friendly_williams[340758]: [
Oct 02 12:34:56 compute-0 friendly_williams[340758]:     {
Oct 02 12:34:56 compute-0 friendly_williams[340758]:         "available": false,
Oct 02 12:34:56 compute-0 friendly_williams[340758]:         "ceph_device": false,
Oct 02 12:34:56 compute-0 friendly_williams[340758]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 02 12:34:56 compute-0 friendly_williams[340758]:         "lsm_data": {},
Oct 02 12:34:56 compute-0 friendly_williams[340758]:         "lvs": [],
Oct 02 12:34:56 compute-0 friendly_williams[340758]:         "path": "/dev/sr0",
Oct 02 12:34:56 compute-0 friendly_williams[340758]:         "rejected_reasons": [
Oct 02 12:34:56 compute-0 friendly_williams[340758]:             "Has a FileSystem",
Oct 02 12:34:56 compute-0 friendly_williams[340758]:             "Insufficient space (<5GB)"
Oct 02 12:34:56 compute-0 friendly_williams[340758]:         ],
Oct 02 12:34:56 compute-0 friendly_williams[340758]:         "sys_api": {
Oct 02 12:34:56 compute-0 friendly_williams[340758]:             "actuators": null,
Oct 02 12:34:56 compute-0 friendly_williams[340758]:             "device_nodes": "sr0",
Oct 02 12:34:56 compute-0 friendly_williams[340758]:             "devname": "sr0",
Oct 02 12:34:56 compute-0 friendly_williams[340758]:             "human_readable_size": "482.00 KB",
Oct 02 12:34:56 compute-0 friendly_williams[340758]:             "id_bus": "ata",
Oct 02 12:34:56 compute-0 friendly_williams[340758]:             "model": "QEMU DVD-ROM",
Oct 02 12:34:56 compute-0 friendly_williams[340758]:             "nr_requests": "2",
Oct 02 12:34:56 compute-0 friendly_williams[340758]:             "parent": "/dev/sr0",
Oct 02 12:34:56 compute-0 friendly_williams[340758]:             "partitions": {},
Oct 02 12:34:56 compute-0 friendly_williams[340758]:             "path": "/dev/sr0",
Oct 02 12:34:56 compute-0 friendly_williams[340758]:             "removable": "1",
Oct 02 12:34:56 compute-0 friendly_williams[340758]:             "rev": "2.5+",
Oct 02 12:34:56 compute-0 friendly_williams[340758]:             "ro": "0",
Oct 02 12:34:56 compute-0 friendly_williams[340758]:             "rotational": "0",
Oct 02 12:34:56 compute-0 friendly_williams[340758]:             "sas_address": "",
Oct 02 12:34:56 compute-0 friendly_williams[340758]:             "sas_device_handle": "",
Oct 02 12:34:56 compute-0 friendly_williams[340758]:             "scheduler_mode": "mq-deadline",
Oct 02 12:34:56 compute-0 friendly_williams[340758]:             "sectors": 0,
Oct 02 12:34:56 compute-0 friendly_williams[340758]:             "sectorsize": "2048",
Oct 02 12:34:56 compute-0 friendly_williams[340758]:             "size": 493568.0,
Oct 02 12:34:56 compute-0 friendly_williams[340758]:             "support_discard": "2048",
Oct 02 12:34:56 compute-0 friendly_williams[340758]:             "type": "disk",
Oct 02 12:34:56 compute-0 friendly_williams[340758]:             "vendor": "QEMU"
Oct 02 12:34:56 compute-0 friendly_williams[340758]:         }
Oct 02 12:34:56 compute-0 friendly_williams[340758]:     }
Oct 02 12:34:56 compute-0 friendly_williams[340758]: ]
Oct 02 12:34:56 compute-0 systemd[1]: libpod-2400d40a6be3fea15d1e1227e465ac961033f4d5e5f79f22e6f0db7255879636.scope: Deactivated successfully.
Oct 02 12:34:56 compute-0 systemd[1]: libpod-2400d40a6be3fea15d1e1227e465ac961033f4d5e5f79f22e6f0db7255879636.scope: Consumed 1.213s CPU time.
Oct 02 12:34:56 compute-0 podman[340742]: 2025-10-02 12:34:56.927777749 +0000 UTC m=+1.581310469 container died 2400d40a6be3fea15d1e1227e465ac961033f4d5e5f79f22e6f0db7255879636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_williams, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 12:34:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 12:34:57 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:34:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 12:34:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:34:57 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1863806258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:57 compute-0 nova_compute[257802]: 2025-10-02 12:34:57.401 2 DEBUG oslo_concurrency.processutils [None req-1de01f1f-289f-4f50-83ec-ca6c5da6a69a f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:57.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:57 compute-0 nova_compute[257802]: 2025-10-02 12:34:57.407 2 DEBUG nova.compute.provider_tree [None req-1de01f1f-289f-4f50-83ec-ca6c5da6a69a f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:34:57 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:34:57 compute-0 nova_compute[257802]: 2025-10-02 12:34:57.514 2 DEBUG nova.scheduler.client.report [None req-1de01f1f-289f-4f50-83ec-ca6c5da6a69a f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:34:57 compute-0 nova_compute[257802]: 2025-10-02 12:34:57.635 2 DEBUG oslo_concurrency.lockutils [None req-1de01f1f-289f-4f50-83ec-ca6c5da6a69a f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-606a50099066cc0e2a6db6184009d74190babe74e3a5d8408548cea3c2debf3b-merged.mount: Deactivated successfully.
Oct 02 12:34:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2218: 305 pgs: 305 active+clean; 698 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 5.7 MiB/s wr, 395 op/s
Oct 02 12:34:57 compute-0 nova_compute[257802]: 2025-10-02 12:34:57.746 2 INFO nova.scheduler.client.report [None req-1de01f1f-289f-4f50-83ec-ca6c5da6a69a f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Deleted allocations for instance de073a99-7033-4df3-b9bd-300429654683
Oct 02 12:34:57 compute-0 nova_compute[257802]: 2025-10-02 12:34:57.938 2 DEBUG oslo_concurrency.lockutils [None req-1de01f1f-289f-4f50-83ec-ca6c5da6a69a f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "de073a99-7033-4df3-b9bd-300429654683" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.453s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:58 compute-0 ceph-mon[73607]: pgmap v2217: 305 pgs: 305 active+clean; 698 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 6.4 MiB/s wr, 570 op/s
Oct 02 12:34:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2013956393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:58 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:34:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:34:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:58.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:34:58 compute-0 podman[340742]: 2025-10-02 12:34:58.318945871 +0000 UTC m=+2.972478571 container remove 2400d40a6be3fea15d1e1227e465ac961033f4d5e5f79f22e6f0db7255879636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:34:58 compute-0 systemd[1]: libpod-conmon-2400d40a6be3fea15d1e1227e465ac961033f4d5e5f79f22e6f0db7255879636.scope: Deactivated successfully.
Oct 02 12:34:58 compute-0 nova_compute[257802]: 2025-10-02 12:34:58.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:58 compute-0 sudo[340632]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:34:58 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:34:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:34:58 compute-0 nova_compute[257802]: 2025-10-02 12:34:58.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e328 do_prune osdmap full prune enabled
Oct 02 12:34:59 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:34:59 compute-0 nova_compute[257802]: 2025-10-02 12:34:59.100 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:34:59 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:34:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:34:59 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:34:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:34:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e329 e329: 3 total, 3 up, 3 in
Oct 02 12:34:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:34:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:59.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:59 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e329: 3 total, 3 up, 3 in
Oct 02 12:34:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2061018038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1863806258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:59 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:34:59 compute-0 ceph-mon[73607]: pgmap v2218: 305 pgs: 305 active+clean; 698 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 5.7 MiB/s wr, 395 op/s
Oct 02 12:34:59 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:34:59 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:34:59 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:34:59 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 6ce04f3b-9fd7-4267-aa40-f8f77a9b3c43 does not exist
Oct 02 12:34:59 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 9a018580-ba6c-477e-bb30-e70bb987211e does not exist
Oct 02 12:34:59 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev e7ebaf52-2824-4216-80fb-5033634c6933 does not exist
Oct 02 12:34:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:34:59 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:34:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:34:59 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:34:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:34:59 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:34:59 compute-0 sudo[342051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:34:59 compute-0 sudo[342051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:59 compute-0 sudo[342051]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:59 compute-0 sudo[342076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:34:59 compute-0 sudo[342076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:59 compute-0 sudo[342076]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:59 compute-0 sudo[342101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:34:59 compute-0 sudo[342101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:59 compute-0 sudo[342101]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2220: 305 pgs: 305 active+clean; 679 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 6.3 MiB/s wr, 394 op/s
Oct 02 12:34:59 compute-0 sudo[342126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:34:59 compute-0 sudo[342126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:00 compute-0 nova_compute[257802]: 2025-10-02 12:35:00.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:00.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:00 compute-0 podman[342191]: 2025-10-02 12:35:00.086039448 +0000 UTC m=+0.019850460 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:35:00 compute-0 podman[342191]: 2025-10-02 12:35:00.669713587 +0000 UTC m=+0.603524579 container create 48ff51e49940dffaee6e9fe228d28f78d63ccf473f626106e503f9fd40d5d7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_driscoll, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 12:35:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:35:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:35:01 compute-0 ceph-mon[73607]: osdmap e329: 3 total, 3 up, 3 in
Oct 02 12:35:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:35:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:35:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:35:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:35:01 compute-0 ceph-mon[73607]: pgmap v2220: 305 pgs: 305 active+clean; 679 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 6.3 MiB/s wr, 394 op/s
Oct 02 12:35:01 compute-0 systemd[1]: Started libpod-conmon-48ff51e49940dffaee6e9fe228d28f78d63ccf473f626106e503f9fd40d5d7bf.scope.
Oct 02 12:35:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:35:01 compute-0 podman[342191]: 2025-10-02 12:35:01.170493589 +0000 UTC m=+1.104304611 container init 48ff51e49940dffaee6e9fe228d28f78d63ccf473f626106e503f9fd40d5d7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 12:35:01 compute-0 podman[342191]: 2025-10-02 12:35:01.177766118 +0000 UTC m=+1.111577120 container start 48ff51e49940dffaee6e9fe228d28f78d63ccf473f626106e503f9fd40d5d7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_driscoll, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:35:01 compute-0 boring_driscoll[342208]: 167 167
Oct 02 12:35:01 compute-0 systemd[1]: libpod-48ff51e49940dffaee6e9fe228d28f78d63ccf473f626106e503f9fd40d5d7bf.scope: Deactivated successfully.
Oct 02 12:35:01 compute-0 conmon[342208]: conmon 48ff51e49940dffaee6e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-48ff51e49940dffaee6e9fe228d28f78d63ccf473f626106e503f9fd40d5d7bf.scope/container/memory.events
Oct 02 12:35:01 compute-0 podman[342191]: 2025-10-02 12:35:01.198669852 +0000 UTC m=+1.132480844 container attach 48ff51e49940dffaee6e9fe228d28f78d63ccf473f626106e503f9fd40d5d7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_driscoll, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 12:35:01 compute-0 podman[342191]: 2025-10-02 12:35:01.19901519 +0000 UTC m=+1.132826182 container died 48ff51e49940dffaee6e9fe228d28f78d63ccf473f626106e503f9fd40d5d7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:35:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-83984b796af58bb5d1986036343e12c9f098e9de38a56f3d65dd24f30e45975d-merged.mount: Deactivated successfully.
Oct 02 12:35:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:01.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:01 compute-0 podman[342191]: 2025-10-02 12:35:01.723649038 +0000 UTC m=+1.657460030 container remove 48ff51e49940dffaee6e9fe228d28f78d63ccf473f626106e503f9fd40d5d7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 12:35:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2221: 305 pgs: 305 active+clean; 668 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 5.9 MiB/s wr, 351 op/s
Oct 02 12:35:01 compute-0 systemd[1]: libpod-conmon-48ff51e49940dffaee6e9fe228d28f78d63ccf473f626106e503f9fd40d5d7bf.scope: Deactivated successfully.
Oct 02 12:35:01 compute-0 podman[342233]: 2025-10-02 12:35:01.87910719 +0000 UTC m=+0.021861617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:35:01 compute-0 podman[342233]: 2025-10-02 12:35:01.986531582 +0000 UTC m=+0.129285989 container create b5789967f885fde8bab3844e04db8882eaf94cede35b8b1e5273f511fe044b03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hypatia, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:35:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:35:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:02.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:35:02 compute-0 systemd[1]: Started libpod-conmon-b5789967f885fde8bab3844e04db8882eaf94cede35b8b1e5273f511fe044b03.scope.
Oct 02 12:35:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:35:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf97fbb5a343d5789161da458e74ebbf94e87c51c8e16f55f32ad3849626fa8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:35:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf97fbb5a343d5789161da458e74ebbf94e87c51c8e16f55f32ad3849626fa8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:35:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf97fbb5a343d5789161da458e74ebbf94e87c51c8e16f55f32ad3849626fa8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:35:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf97fbb5a343d5789161da458e74ebbf94e87c51c8e16f55f32ad3849626fa8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:35:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf97fbb5a343d5789161da458e74ebbf94e87c51c8e16f55f32ad3849626fa8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:35:02 compute-0 podman[342233]: 2025-10-02 12:35:02.610973484 +0000 UTC m=+0.753727921 container init b5789967f885fde8bab3844e04db8882eaf94cede35b8b1e5273f511fe044b03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hypatia, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 12:35:02 compute-0 podman[342233]: 2025-10-02 12:35:02.617805292 +0000 UTC m=+0.760559689 container start b5789967f885fde8bab3844e04db8882eaf94cede35b8b1e5273f511fe044b03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hypatia, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 12:35:02 compute-0 ceph-mon[73607]: pgmap v2221: 305 pgs: 305 active+clean; 668 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 5.9 MiB/s wr, 351 op/s
Oct 02 12:35:02 compute-0 podman[342233]: 2025-10-02 12:35:02.846933994 +0000 UTC m=+0.989688431 container attach b5789967f885fde8bab3844e04db8882eaf94cede35b8b1e5273f511fe044b03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 12:35:03 compute-0 nova_compute[257802]: 2025-10-02 12:35:03.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:03 compute-0 nova_compute[257802]: 2025-10-02 12:35:03.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:03 compute-0 nova_compute[257802]: 2025-10-02 12:35:03.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:35:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:03.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:35:03 compute-0 tender_hypatia[342249]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:35:03 compute-0 tender_hypatia[342249]: --> relative data size: 1.0
Oct 02 12:35:03 compute-0 tender_hypatia[342249]: --> All data devices are unavailable
Oct 02 12:35:03 compute-0 systemd[1]: libpod-b5789967f885fde8bab3844e04db8882eaf94cede35b8b1e5273f511fe044b03.scope: Deactivated successfully.
Oct 02 12:35:03 compute-0 podman[342233]: 2025-10-02 12:35:03.460339466 +0000 UTC m=+1.603093893 container died b5789967f885fde8bab3844e04db8882eaf94cede35b8b1e5273f511fe044b03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hypatia, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 12:35:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2222: 305 pgs: 305 active+clean; 648 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 4.0 MiB/s wr, 250 op/s
Oct 02 12:35:03 compute-0 nova_compute[257802]: 2025-10-02 12:35:03.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:35:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:04.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:35:04 compute-0 nova_compute[257802]: 2025-10-02 12:35:04.215 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "aa9070a4-cc2e-4aec-9abe-9873932ea0de" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:04 compute-0 nova_compute[257802]: 2025-10-02 12:35:04.216 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "aa9070a4-cc2e-4aec-9abe-9873932ea0de" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4276342209' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3877520165' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:04 compute-0 nova_compute[257802]: 2025-10-02 12:35:04.254 2 DEBUG nova.compute.manager [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:35:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cf97fbb5a343d5789161da458e74ebbf94e87c51c8e16f55f32ad3849626fa8-merged.mount: Deactivated successfully.
Oct 02 12:35:04 compute-0 nova_compute[257802]: 2025-10-02 12:35:04.430 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:04 compute-0 nova_compute[257802]: 2025-10-02 12:35:04.430 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:04 compute-0 nova_compute[257802]: 2025-10-02 12:35:04.437 2 DEBUG nova.virt.hardware [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:35:04 compute-0 nova_compute[257802]: 2025-10-02 12:35:04.438 2 INFO nova.compute.claims [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:35:04 compute-0 nova_compute[257802]: 2025-10-02 12:35:04.664 2 DEBUG oslo_concurrency.processutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:04 compute-0 nova_compute[257802]: 2025-10-02 12:35:04.776 2 DEBUG oslo_concurrency.lockutils [None req-6e849279-3a3d-485b-8394-fe07c26617ca 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:04 compute-0 nova_compute[257802]: 2025-10-02 12:35:04.777 2 DEBUG oslo_concurrency.lockutils [None req-6e849279-3a3d-485b-8394-fe07c26617ca 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:04 compute-0 nova_compute[257802]: 2025-10-02 12:35:04.806 2 DEBUG nova.objects.instance [None req-6e849279-3a3d-485b-8394-fe07c26617ca 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'flavor' on Instance uuid 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.058 2 DEBUG oslo_concurrency.lockutils [None req-6e849279-3a3d-485b-8394-fe07c26617ca 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.281s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:35:05 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1963677223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.105 2 DEBUG oslo_concurrency.processutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.111 2 DEBUG nova.compute.provider_tree [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.133 2 DEBUG nova.scheduler.client.report [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.157 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.158 2 DEBUG nova.compute.manager [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.214 2 DEBUG nova.compute.manager [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.214 2 DEBUG nova.network.neutron [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.246 2 INFO nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:35:05 compute-0 podman[342233]: 2025-10-02 12:35:05.267926417 +0000 UTC m=+3.410680854 container remove b5789967f885fde8bab3844e04db8882eaf94cede35b8b1e5273f511fe044b03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hypatia, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.271 2 DEBUG nova.compute.manager [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:35:05 compute-0 systemd[1]: libpod-conmon-b5789967f885fde8bab3844e04db8882eaf94cede35b8b1e5273f511fe044b03.scope: Deactivated successfully.
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.293 2 DEBUG oslo_concurrency.lockutils [None req-6e849279-3a3d-485b-8394-fe07c26617ca 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.293 2 DEBUG oslo_concurrency.lockutils [None req-6e849279-3a3d-485b-8394-fe07c26617ca 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.294 2 INFO nova.compute.manager [None req-6e849279-3a3d-485b-8394-fe07c26617ca 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Attaching volume 5134bab8-913f-41c9-b5b8-350b459c53e1 to /dev/vdb
Oct 02 12:35:05 compute-0 sudo[342126]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:05 compute-0 sudo[342302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:35:05 compute-0 sudo[342302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:05 compute-0 sudo[342302]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.394 2 DEBUG nova.policy [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f02a0ac23d9e44d5a6205e853818fa50', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c3b4ed8f5ff54d4cb9f232e285155ca0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:35:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:05.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:05 compute-0 sudo[342327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:35:05 compute-0 sudo[342327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.428 2 DEBUG nova.compute.manager [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:35:05 compute-0 sudo[342327]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.429 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.430 2 INFO nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Creating image(s)
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.460 2 DEBUG nova.storage.rbd_utils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] rbd image aa9070a4-cc2e-4aec-9abe-9873932ea0de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:35:05 compute-0 sudo[342352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:35:05 compute-0 sudo[342352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:05 compute-0 sudo[342352]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.490 2 DEBUG nova.storage.rbd_utils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] rbd image aa9070a4-cc2e-4aec-9abe-9873932ea0de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.516 2 DEBUG nova.storage.rbd_utils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] rbd image aa9070a4-cc2e-4aec-9abe-9873932ea0de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.520 2 DEBUG oslo_concurrency.processutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:05 compute-0 sudo[342413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:35:05 compute-0 sudo[342413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.545 2 DEBUG os_brick.utils [None req-6e849279-3a3d-485b-8394-fe07c26617ca 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.546 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.558 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.559 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[3cedcf77-ca5a-4287-b0b2-1554d119129c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.561 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.569 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.569 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[d4a0820c-682b-4690-8fdf-387d40968f35]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.571 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.580 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.581 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[3a949d0b-8dee-4f9e-b73d-fac602eb39cd]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.582 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[cb0a1347-e827-4929-bc9f-445384ef159b]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.582 2 DEBUG oslo_concurrency.processutils [None req-6e849279-3a3d-485b-8394-fe07c26617ca 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.609 2 DEBUG oslo_concurrency.processutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.610 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.611 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.612 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:05 compute-0 ceph-mon[73607]: pgmap v2222: 305 pgs: 305 active+clean; 648 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 4.0 MiB/s wr, 250 op/s
Oct 02 12:35:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1963677223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.650 2 DEBUG nova.storage.rbd_utils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] rbd image aa9070a4-cc2e-4aec-9abe-9873932ea0de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.661 2 DEBUG oslo_concurrency.processutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 aa9070a4-cc2e-4aec-9abe-9873932ea0de_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.692 2 DEBUG oslo_concurrency.processutils [None req-6e849279-3a3d-485b-8394-fe07c26617ca 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "nvme version" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.694 2 DEBUG os_brick.initiator.connectors.lightos [None req-6e849279-3a3d-485b-8394-fe07c26617ca 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.695 2 DEBUG os_brick.initiator.connectors.lightos [None req-6e849279-3a3d-485b-8394-fe07c26617ca 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.695 2 DEBUG os_brick.initiator.connectors.lightos [None req-6e849279-3a3d-485b-8394-fe07c26617ca 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.695 2 DEBUG os_brick.utils [None req-6e849279-3a3d-485b-8394-fe07c26617ca 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] <== get_connector_properties: return (149ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:35:05 compute-0 nova_compute[257802]: 2025-10-02 12:35:05.695 2 DEBUG nova.virt.block_device [None req-6e849279-3a3d-485b-8394-fe07c26617ca 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Updating existing volume attachment record: 045962c8-90e7-4348-aad0-c3be55d6ec09 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:35:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2223: 305 pgs: 305 active+clean; 613 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 71 KiB/s rd, 2.1 MiB/s wr, 102 op/s
Oct 02 12:35:05 compute-0 podman[342531]: 2025-10-02 12:35:05.845530867 +0000 UTC m=+0.022989146 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:35:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:06.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:06 compute-0 podman[342531]: 2025-10-02 12:35:06.152617587 +0000 UTC m=+0.330075836 container create 7f25b94adf0b841c4512c620f06a489a1695a82dddb5fe08312b06f357c549ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 12:35:06 compute-0 systemd[1]: Started libpod-conmon-7f25b94adf0b841c4512c620f06a489a1695a82dddb5fe08312b06f357c549ea.scope.
Oct 02 12:35:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:35:06 compute-0 podman[342531]: 2025-10-02 12:35:06.485066211 +0000 UTC m=+0.662524540 container init 7f25b94adf0b841c4512c620f06a489a1695a82dddb5fe08312b06f357c549ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:35:06 compute-0 podman[342531]: 2025-10-02 12:35:06.49478922 +0000 UTC m=+0.672247489 container start 7f25b94adf0b841c4512c620f06a489a1695a82dddb5fe08312b06f357c549ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_diffie, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:35:06 compute-0 awesome_diffie[342556]: 167 167
Oct 02 12:35:06 compute-0 systemd[1]: libpod-7f25b94adf0b841c4512c620f06a489a1695a82dddb5fe08312b06f357c549ea.scope: Deactivated successfully.
Oct 02 12:35:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:35:06 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3565055433' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:06 compute-0 nova_compute[257802]: 2025-10-02 12:35:06.616 2 DEBUG nova.network.neutron [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Successfully created port: 001b44c4-049b-4b18-9848-d408d7dcf23f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:35:06 compute-0 podman[342531]: 2025-10-02 12:35:06.755800727 +0000 UTC m=+0.933258996 container attach 7f25b94adf0b841c4512c620f06a489a1695a82dddb5fe08312b06f357c549ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:35:06 compute-0 podman[342531]: 2025-10-02 12:35:06.75714544 +0000 UTC m=+0.934603689 container died 7f25b94adf0b841c4512c620f06a489a1695a82dddb5fe08312b06f357c549ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_diffie, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 12:35:06 compute-0 nova_compute[257802]: 2025-10-02 12:35:06.877 2 DEBUG nova.objects.instance [None req-6e849279-3a3d-485b-8394-fe07c26617ca 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'flavor' on Instance uuid 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:06 compute-0 nova_compute[257802]: 2025-10-02 12:35:06.924 2 DEBUG nova.virt.libvirt.driver [None req-6e849279-3a3d-485b-8394-fe07c26617ca 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Attempting to attach volume 5134bab8-913f-41c9-b5b8-350b459c53e1 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 12:35:06 compute-0 nova_compute[257802]: 2025-10-02 12:35:06.927 2 DEBUG nova.virt.libvirt.guest [None req-6e849279-3a3d-485b-8394-fe07c26617ca 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 12:35:06 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:35:06 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-5134bab8-913f-41c9-b5b8-350b459c53e1">
Oct 02 12:35:06 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:35:06 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:35:06 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:35:06 compute-0 nova_compute[257802]:   </source>
Oct 02 12:35:06 compute-0 nova_compute[257802]:   <auth username="openstack">
Oct 02 12:35:06 compute-0 nova_compute[257802]:     <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:35:06 compute-0 nova_compute[257802]:   </auth>
Oct 02 12:35:06 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:35:06 compute-0 nova_compute[257802]:   <serial>5134bab8-913f-41c9-b5b8-350b459c53e1</serial>
Oct 02 12:35:06 compute-0 nova_compute[257802]: </disk>
Oct 02 12:35:06 compute-0 nova_compute[257802]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:35:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:07.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2464211780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:07 compute-0 ceph-mon[73607]: pgmap v2223: 305 pgs: 305 active+clean; 613 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 71 KiB/s rd, 2.1 MiB/s wr, 102 op/s
Oct 02 12:35:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2365189100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3565055433' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:07 compute-0 sudo[342596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:35:07 compute-0 sudo[342596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:07 compute-0 sudo[342596]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2224: 305 pgs: 305 active+clean; 613 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 71 KiB/s rd, 2.1 MiB/s wr, 102 op/s
Oct 02 12:35:07 compute-0 sudo[342621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:35:07 compute-0 sudo[342621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:07 compute-0 sudo[342621]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-14548485d52483638597027cc64f005fc7bd6dca403676e16051ad87c71fa7d2-merged.mount: Deactivated successfully.
Oct 02 12:35:08 compute-0 nova_compute[257802]: 2025-10-02 12:35:08.034 2 DEBUG nova.virt.libvirt.driver [None req-6e849279-3a3d-485b-8394-fe07c26617ca 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:35:08 compute-0 nova_compute[257802]: 2025-10-02 12:35:08.034 2 DEBUG nova.virt.libvirt.driver [None req-6e849279-3a3d-485b-8394-fe07c26617ca 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:35:08 compute-0 nova_compute[257802]: 2025-10-02 12:35:08.035 2 DEBUG nova.virt.libvirt.driver [None req-6e849279-3a3d-485b-8394-fe07c26617ca 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:35:08 compute-0 nova_compute[257802]: 2025-10-02 12:35:08.035 2 DEBUG nova.virt.libvirt.driver [None req-6e849279-3a3d-485b-8394-fe07c26617ca 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] No VIF found with MAC fa:16:3e:7f:8b:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:35:08 compute-0 nova_compute[257802]: 2025-10-02 12:35:08.095 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:08 compute-0 nova_compute[257802]: 2025-10-02 12:35:08.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:08 compute-0 nova_compute[257802]: 2025-10-02 12:35:08.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:35:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:08.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:08 compute-0 nova_compute[257802]: 2025-10-02 12:35:08.155 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:35:08 compute-0 nova_compute[257802]: 2025-10-02 12:35:08.305 2 DEBUG oslo_concurrency.processutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 aa9070a4-cc2e-4aec-9abe-9873932ea0de_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.645s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:08 compute-0 nova_compute[257802]: 2025-10-02 12:35:08.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:08 compute-0 nova_compute[257802]: 2025-10-02 12:35:08.376 2 DEBUG nova.storage.rbd_utils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] resizing rbd image aa9070a4-cc2e-4aec-9abe-9873932ea0de_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:35:08 compute-0 nova_compute[257802]: 2025-10-02 12:35:08.456 2 DEBUG oslo_concurrency.lockutils [None req-6e849279-3a3d-485b-8394-fe07c26617ca 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:08 compute-0 nova_compute[257802]: 2025-10-02 12:35:08.559 2 DEBUG nova.network.neutron [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Successfully updated port: 001b44c4-049b-4b18-9848-d408d7dcf23f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:35:08 compute-0 nova_compute[257802]: 2025-10-02 12:35:08.679 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "refresh_cache-aa9070a4-cc2e-4aec-9abe-9873932ea0de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:35:08 compute-0 nova_compute[257802]: 2025-10-02 12:35:08.680 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquired lock "refresh_cache-aa9070a4-cc2e-4aec-9abe-9873932ea0de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:35:08 compute-0 nova_compute[257802]: 2025-10-02 12:35:08.680 2 DEBUG nova.network.neutron [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:35:08 compute-0 nova_compute[257802]: 2025-10-02 12:35:08.708 2 DEBUG nova.compute.manager [req-1c067c40-3d75-4024-ac72-fca882fa6903 req-f4e18592-eb4a-4d55-86c9-df478b0f2cbc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Received event network-changed-001b44c4-049b-4b18-9848-d408d7dcf23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:08 compute-0 nova_compute[257802]: 2025-10-02 12:35:08.709 2 DEBUG nova.compute.manager [req-1c067c40-3d75-4024-ac72-fca882fa6903 req-f4e18592-eb4a-4d55-86c9-df478b0f2cbc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Refreshing instance network info cache due to event network-changed-001b44c4-049b-4b18-9848-d408d7dcf23f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:35:08 compute-0 nova_compute[257802]: 2025-10-02 12:35:08.709 2 DEBUG oslo_concurrency.lockutils [req-1c067c40-3d75-4024-ac72-fca882fa6903 req-f4e18592-eb4a-4d55-86c9-df478b0f2cbc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-aa9070a4-cc2e-4aec-9abe-9873932ea0de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:35:08 compute-0 nova_compute[257802]: 2025-10-02 12:35:08.730 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408493.72635, de073a99-7033-4df3-b9bd-300429654683 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:35:08 compute-0 nova_compute[257802]: 2025-10-02 12:35:08.730 2 INFO nova.compute.manager [-] [instance: de073a99-7033-4df3-b9bd-300429654683] VM Stopped (Lifecycle Event)
Oct 02 12:35:08 compute-0 nova_compute[257802]: 2025-10-02 12:35:08.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:08 compute-0 nova_compute[257802]: 2025-10-02 12:35:08.806 2 DEBUG nova.compute.manager [None req-99148e97-5b07-458a-8fac-6740c1efe481 - - - - - -] [instance: de073a99-7033-4df3-b9bd-300429654683] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:35:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:09 compute-0 nova_compute[257802]: 2025-10-02 12:35:09.135 2 DEBUG nova.network.neutron [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:35:09 compute-0 nova_compute[257802]: 2025-10-02 12:35:09.145 2 DEBUG nova.objects.instance [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lazy-loading 'migration_context' on Instance uuid aa9070a4-cc2e-4aec-9abe-9873932ea0de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:09 compute-0 nova_compute[257802]: 2025-10-02 12:35:09.214 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:35:09 compute-0 nova_compute[257802]: 2025-10-02 12:35:09.215 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Ensure instance console log exists: /var/lib/nova/instances/aa9070a4-cc2e-4aec-9abe-9873932ea0de/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:35:09 compute-0 nova_compute[257802]: 2025-10-02 12:35:09.216 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:09 compute-0 nova_compute[257802]: 2025-10-02 12:35:09.216 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:09 compute-0 nova_compute[257802]: 2025-10-02 12:35:09.217 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:09 compute-0 nova_compute[257802]: 2025-10-02 12:35:09.383 2 INFO nova.compute.manager [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Rescuing
Oct 02 12:35:09 compute-0 nova_compute[257802]: 2025-10-02 12:35:09.384 2 DEBUG oslo_concurrency.lockutils [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "refresh_cache-7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:35:09 compute-0 nova_compute[257802]: 2025-10-02 12:35:09.384 2 DEBUG oslo_concurrency.lockutils [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquired lock "refresh_cache-7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:35:09 compute-0 nova_compute[257802]: 2025-10-02 12:35:09.384 2 DEBUG nova.network.neutron [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:35:09 compute-0 ceph-mon[73607]: pgmap v2224: 305 pgs: 305 active+clean; 613 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 71 KiB/s rd, 2.1 MiB/s wr, 102 op/s
Oct 02 12:35:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1937057779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:09.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2225: 305 pgs: 305 active+clean; 640 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 1.6 MiB/s wr, 62 op/s
Oct 02 12:35:10 compute-0 podman[342531]: 2025-10-02 12:35:10.052656485 +0000 UTC m=+4.230114744 container remove 7f25b94adf0b841c4512c620f06a489a1695a82dddb5fe08312b06f357c549ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_diffie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:35:10 compute-0 systemd[1]: libpod-conmon-7f25b94adf0b841c4512c620f06a489a1695a82dddb5fe08312b06f357c549ea.scope: Deactivated successfully.
Oct 02 12:35:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:10.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:10 compute-0 nova_compute[257802]: 2025-10-02 12:35:10.126 2 DEBUG nova.network.neutron [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Updating instance_info_cache with network_info: [{"id": "001b44c4-049b-4b18-9848-d408d7dcf23f", "address": "fa:16:3e:63:8c:c2", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap001b44c4-04", "ovs_interfaceid": "001b44c4-049b-4b18-9848-d408d7dcf23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:35:10 compute-0 nova_compute[257802]: 2025-10-02 12:35:10.180 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Releasing lock "refresh_cache-aa9070a4-cc2e-4aec-9abe-9873932ea0de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:35:10 compute-0 nova_compute[257802]: 2025-10-02 12:35:10.181 2 DEBUG nova.compute.manager [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Instance network_info: |[{"id": "001b44c4-049b-4b18-9848-d408d7dcf23f", "address": "fa:16:3e:63:8c:c2", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap001b44c4-04", "ovs_interfaceid": "001b44c4-049b-4b18-9848-d408d7dcf23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:35:10 compute-0 nova_compute[257802]: 2025-10-02 12:35:10.182 2 DEBUG oslo_concurrency.lockutils [req-1c067c40-3d75-4024-ac72-fca882fa6903 req-f4e18592-eb4a-4d55-86c9-df478b0f2cbc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-aa9070a4-cc2e-4aec-9abe-9873932ea0de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:35:10 compute-0 nova_compute[257802]: 2025-10-02 12:35:10.182 2 DEBUG nova.network.neutron [req-1c067c40-3d75-4024-ac72-fca882fa6903 req-f4e18592-eb4a-4d55-86c9-df478b0f2cbc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Refreshing network info cache for port 001b44c4-049b-4b18-9848-d408d7dcf23f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:35:10 compute-0 nova_compute[257802]: 2025-10-02 12:35:10.188 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Start _get_guest_xml network_info=[{"id": "001b44c4-049b-4b18-9848-d408d7dcf23f", "address": "fa:16:3e:63:8c:c2", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap001b44c4-04", "ovs_interfaceid": "001b44c4-049b-4b18-9848-d408d7dcf23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:35:10 compute-0 nova_compute[257802]: 2025-10-02 12:35:10.194 2 WARNING nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:35:10 compute-0 nova_compute[257802]: 2025-10-02 12:35:10.205 2 DEBUG nova.virt.libvirt.host [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:35:10 compute-0 nova_compute[257802]: 2025-10-02 12:35:10.206 2 DEBUG nova.virt.libvirt.host [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:35:10 compute-0 nova_compute[257802]: 2025-10-02 12:35:10.209 2 DEBUG nova.virt.libvirt.host [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:35:10 compute-0 nova_compute[257802]: 2025-10-02 12:35:10.210 2 DEBUG nova.virt.libvirt.host [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:35:10 compute-0 nova_compute[257802]: 2025-10-02 12:35:10.211 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:35:10 compute-0 nova_compute[257802]: 2025-10-02 12:35:10.211 2 DEBUG nova.virt.hardware [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:35:10 compute-0 nova_compute[257802]: 2025-10-02 12:35:10.212 2 DEBUG nova.virt.hardware [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:35:10 compute-0 nova_compute[257802]: 2025-10-02 12:35:10.213 2 DEBUG nova.virt.hardware [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:35:10 compute-0 nova_compute[257802]: 2025-10-02 12:35:10.213 2 DEBUG nova.virt.hardware [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:35:10 compute-0 nova_compute[257802]: 2025-10-02 12:35:10.213 2 DEBUG nova.virt.hardware [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:35:10 compute-0 nova_compute[257802]: 2025-10-02 12:35:10.214 2 DEBUG nova.virt.hardware [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:35:10 compute-0 nova_compute[257802]: 2025-10-02 12:35:10.214 2 DEBUG nova.virt.hardware [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:35:10 compute-0 nova_compute[257802]: 2025-10-02 12:35:10.215 2 DEBUG nova.virt.hardware [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:35:10 compute-0 nova_compute[257802]: 2025-10-02 12:35:10.215 2 DEBUG nova.virt.hardware [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:35:10 compute-0 nova_compute[257802]: 2025-10-02 12:35:10.215 2 DEBUG nova.virt.hardware [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:35:10 compute-0 nova_compute[257802]: 2025-10-02 12:35:10.216 2 DEBUG nova.virt.hardware [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:35:10 compute-0 nova_compute[257802]: 2025-10-02 12:35:10.221 2 DEBUG oslo_concurrency.processutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:10 compute-0 podman[342727]: 2025-10-02 12:35:10.248347306 +0000 UTC m=+0.023572950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:35:10 compute-0 podman[342727]: 2025-10-02 12:35:10.786684781 +0000 UTC m=+0.561910445 container create 22fc4a956bb138982f328426bb0c5bb53dfa51e421a5c31e86d16e7bb8cacbfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:35:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:35:10 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2479586594' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:10 compute-0 nova_compute[257802]: 2025-10-02 12:35:10.876 2 DEBUG oslo_concurrency.processutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:10 compute-0 nova_compute[257802]: 2025-10-02 12:35:10.913 2 DEBUG nova.storage.rbd_utils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] rbd image aa9070a4-cc2e-4aec-9abe-9873932ea0de_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:35:10 compute-0 nova_compute[257802]: 2025-10-02 12:35:10.919 2 DEBUG oslo_concurrency.processutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:10 compute-0 systemd[1]: Started libpod-conmon-22fc4a956bb138982f328426bb0c5bb53dfa51e421a5c31e86d16e7bb8cacbfc.scope.
Oct 02 12:35:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:35:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5de51fe20d3d18ec348d59f0465b8a68a2f8a73c20255142b4541e9f187e59f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:35:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5de51fe20d3d18ec348d59f0465b8a68a2f8a73c20255142b4541e9f187e59f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:35:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5de51fe20d3d18ec348d59f0465b8a68a2f8a73c20255142b4541e9f187e59f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:35:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5de51fe20d3d18ec348d59f0465b8a68a2f8a73c20255142b4541e9f187e59f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:35:11 compute-0 ceph-mon[73607]: pgmap v2225: 305 pgs: 305 active+clean; 640 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 1.6 MiB/s wr, 62 op/s
Oct 02 12:35:11 compute-0 podman[342727]: 2025-10-02 12:35:11.195285638 +0000 UTC m=+0.970511332 container init 22fc4a956bb138982f328426bb0c5bb53dfa51e421a5c31e86d16e7bb8cacbfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ardinghelli, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 12:35:11 compute-0 podman[342727]: 2025-10-02 12:35:11.207439537 +0000 UTC m=+0.982665191 container start 22fc4a956bb138982f328426bb0c5bb53dfa51e421a5c31e86d16e7bb8cacbfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ardinghelli, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:35:11 compute-0 podman[342727]: 2025-10-02 12:35:11.375106358 +0000 UTC m=+1.150332022 container attach 22fc4a956bb138982f328426bb0c5bb53dfa51e421a5c31e86d16e7bb8cacbfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 12:35:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:11.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:35:11 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2100561163' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:11 compute-0 nova_compute[257802]: 2025-10-02 12:35:11.600 2 DEBUG oslo_concurrency.processutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.682s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:11 compute-0 nova_compute[257802]: 2025-10-02 12:35:11.603 2 DEBUG nova.virt.libvirt.vif [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:35:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-27244558',display_name='tempest-MultipleCreateTestJSON-server-27244558-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-27244558-1',id=137,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c3b4ed8f5ff54d4cb9f232e285155ca0',ramdisk_id='',reservation_id='r-hd86sqju',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-822408607',owner_user_name='tempest-MultipleCreateTestJSON-822408607-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:35:05Z,user_data=None,user_id='f02a0ac23d9e44d5a6205e853818fa50',uuid=aa9070a4-cc2e-4aec-9abe-9873932ea0de,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "001b44c4-049b-4b18-9848-d408d7dcf23f", "address": "fa:16:3e:63:8c:c2", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap001b44c4-04", "ovs_interfaceid": "001b44c4-049b-4b18-9848-d408d7dcf23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:35:11 compute-0 nova_compute[257802]: 2025-10-02 12:35:11.603 2 DEBUG nova.network.os_vif_util [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Converting VIF {"id": "001b44c4-049b-4b18-9848-d408d7dcf23f", "address": "fa:16:3e:63:8c:c2", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap001b44c4-04", "ovs_interfaceid": "001b44c4-049b-4b18-9848-d408d7dcf23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:35:11 compute-0 nova_compute[257802]: 2025-10-02 12:35:11.604 2 DEBUG nova.network.os_vif_util [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:8c:c2,bridge_name='br-int',has_traffic_filtering=True,id=001b44c4-049b-4b18-9848-d408d7dcf23f,network=Network(71fb4dcc-12bb-458e-9241-e19e223ca96d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap001b44c4-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:35:11 compute-0 nova_compute[257802]: 2025-10-02 12:35:11.606 2 DEBUG nova.objects.instance [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lazy-loading 'pci_devices' on Instance uuid aa9070a4-cc2e-4aec-9abe-9873932ea0de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2226: 305 pgs: 305 active+clean; 664 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 1.9 MiB/s wr, 64 op/s
Oct 02 12:35:11 compute-0 nova_compute[257802]: 2025-10-02 12:35:11.750 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:35:11 compute-0 nova_compute[257802]:   <uuid>aa9070a4-cc2e-4aec-9abe-9873932ea0de</uuid>
Oct 02 12:35:11 compute-0 nova_compute[257802]:   <name>instance-00000089</name>
Oct 02 12:35:11 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:35:11 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:35:11 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:35:11 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:       <nova:name>tempest-MultipleCreateTestJSON-server-27244558-1</nova:name>
Oct 02 12:35:11 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:35:10</nova:creationTime>
Oct 02 12:35:11 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:35:11 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:35:11 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:35:11 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:35:11 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:35:11 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:35:11 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:35:11 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:35:11 compute-0 nova_compute[257802]:         <nova:user uuid="f02a0ac23d9e44d5a6205e853818fa50">tempest-MultipleCreateTestJSON-822408607-project-member</nova:user>
Oct 02 12:35:11 compute-0 nova_compute[257802]:         <nova:project uuid="c3b4ed8f5ff54d4cb9f232e285155ca0">tempest-MultipleCreateTestJSON-822408607</nova:project>
Oct 02 12:35:11 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:35:11 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:35:11 compute-0 nova_compute[257802]:         <nova:port uuid="001b44c4-049b-4b18-9848-d408d7dcf23f">
Oct 02 12:35:11 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:35:11 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:35:11 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:35:11 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <system>
Oct 02 12:35:11 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:35:11 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:35:11 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:35:11 compute-0 nova_compute[257802]:       <entry name="serial">aa9070a4-cc2e-4aec-9abe-9873932ea0de</entry>
Oct 02 12:35:11 compute-0 nova_compute[257802]:       <entry name="uuid">aa9070a4-cc2e-4aec-9abe-9873932ea0de</entry>
Oct 02 12:35:11 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     </system>
Oct 02 12:35:11 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:35:11 compute-0 nova_compute[257802]:   <os>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:   </os>
Oct 02 12:35:11 compute-0 nova_compute[257802]:   <features>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:   </features>
Oct 02 12:35:11 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:35:11 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:35:11 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:35:11 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/aa9070a4-cc2e-4aec-9abe-9873932ea0de_disk">
Oct 02 12:35:11 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:       </source>
Oct 02 12:35:11 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:35:11 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:35:11 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:35:11 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/aa9070a4-cc2e-4aec-9abe-9873932ea0de_disk.config">
Oct 02 12:35:11 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:       </source>
Oct 02 12:35:11 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:35:11 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:35:11 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:35:11 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:63:8c:c2"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:       <target dev="tap001b44c4-04"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:35:11 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/aa9070a4-cc2e-4aec-9abe-9873932ea0de/console.log" append="off"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <video>
Oct 02 12:35:11 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     </video>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:35:11 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:35:11 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:35:11 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:35:11 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:35:11 compute-0 nova_compute[257802]: </domain>
Oct 02 12:35:11 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:35:11 compute-0 nova_compute[257802]: 2025-10-02 12:35:11.751 2 DEBUG nova.compute.manager [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Preparing to wait for external event network-vif-plugged-001b44c4-049b-4b18-9848-d408d7dcf23f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:35:11 compute-0 nova_compute[257802]: 2025-10-02 12:35:11.751 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "aa9070a4-cc2e-4aec-9abe-9873932ea0de-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:11 compute-0 nova_compute[257802]: 2025-10-02 12:35:11.751 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "aa9070a4-cc2e-4aec-9abe-9873932ea0de-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:11 compute-0 nova_compute[257802]: 2025-10-02 12:35:11.752 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "aa9070a4-cc2e-4aec-9abe-9873932ea0de-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:11 compute-0 nova_compute[257802]: 2025-10-02 12:35:11.752 2 DEBUG nova.virt.libvirt.vif [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:35:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-27244558',display_name='tempest-MultipleCreateTestJSON-server-27244558-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-27244558-1',id=137,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c3b4ed8f5ff54d4cb9f232e285155ca0',ramdisk_id='',reservation_id='r-hd86sqju',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-822408607',owner_user_name='tempest-MultipleCreateTestJSON-822408607-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:35:05Z,user_data=None,user_id='f02a0ac23d9e44d5a6205e853818fa50',uuid=aa9070a4-cc2e-4aec-9abe-9873932ea0de,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "001b44c4-049b-4b18-9848-d408d7dcf23f", "address": "fa:16:3e:63:8c:c2", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap001b44c4-04", "ovs_interfaceid": "001b44c4-049b-4b18-9848-d408d7dcf23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:35:11 compute-0 nova_compute[257802]: 2025-10-02 12:35:11.753 2 DEBUG nova.network.os_vif_util [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Converting VIF {"id": "001b44c4-049b-4b18-9848-d408d7dcf23f", "address": "fa:16:3e:63:8c:c2", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap001b44c4-04", "ovs_interfaceid": "001b44c4-049b-4b18-9848-d408d7dcf23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:35:11 compute-0 nova_compute[257802]: 2025-10-02 12:35:11.753 2 DEBUG nova.network.os_vif_util [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:8c:c2,bridge_name='br-int',has_traffic_filtering=True,id=001b44c4-049b-4b18-9848-d408d7dcf23f,network=Network(71fb4dcc-12bb-458e-9241-e19e223ca96d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap001b44c4-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:35:11 compute-0 nova_compute[257802]: 2025-10-02 12:35:11.754 2 DEBUG os_vif [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:8c:c2,bridge_name='br-int',has_traffic_filtering=True,id=001b44c4-049b-4b18-9848-d408d7dcf23f,network=Network(71fb4dcc-12bb-458e-9241-e19e223ca96d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap001b44c4-04') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:35:11 compute-0 nova_compute[257802]: 2025-10-02 12:35:11.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:11 compute-0 nova_compute[257802]: 2025-10-02 12:35:11.755 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:11 compute-0 nova_compute[257802]: 2025-10-02 12:35:11.756 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:35:11 compute-0 nova_compute[257802]: 2025-10-02 12:35:11.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:11 compute-0 nova_compute[257802]: 2025-10-02 12:35:11.760 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap001b44c4-04, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:11 compute-0 nova_compute[257802]: 2025-10-02 12:35:11.761 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap001b44c4-04, col_values=(('external_ids', {'iface-id': '001b44c4-049b-4b18-9848-d408d7dcf23f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:63:8c:c2', 'vm-uuid': 'aa9070a4-cc2e-4aec-9abe-9873932ea0de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:11 compute-0 NetworkManager[44987]: <info>  [1759408511.7636] manager: (tap001b44c4-04): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/281)
Oct 02 12:35:11 compute-0 nova_compute[257802]: 2025-10-02 12:35:11.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:11 compute-0 nova_compute[257802]: 2025-10-02 12:35:11.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:35:11 compute-0 nova_compute[257802]: 2025-10-02 12:35:11.771 2 INFO os_vif [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:8c:c2,bridge_name='br-int',has_traffic_filtering=True,id=001b44c4-049b-4b18-9848-d408d7dcf23f,network=Network(71fb4dcc-12bb-458e-9241-e19e223ca96d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap001b44c4-04')
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]: {
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]:     "1": [
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]:         {
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]:             "devices": [
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]:                 "/dev/loop3"
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]:             ],
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]:             "lv_name": "ceph_lv0",
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]:             "lv_size": "7511998464",
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]:             "name": "ceph_lv0",
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]:             "tags": {
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]:                 "ceph.cluster_name": "ceph",
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]:                 "ceph.crush_device_class": "",
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]:                 "ceph.encrypted": "0",
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]:                 "ceph.osd_id": "1",
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]:                 "ceph.type": "block",
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]:                 "ceph.vdo": "0"
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]:             },
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]:             "type": "block",
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]:             "vg_name": "ceph_vg0"
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]:         }
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]:     ]
Oct 02 12:35:12 compute-0 objective_ardinghelli[342785]: }
Oct 02 12:35:12 compute-0 systemd[1]: libpod-22fc4a956bb138982f328426bb0c5bb53dfa51e421a5c31e86d16e7bb8cacbfc.scope: Deactivated successfully.
Oct 02 12:35:12 compute-0 podman[342727]: 2025-10-02 12:35:12.043072751 +0000 UTC m=+1.818298395 container died 22fc4a956bb138982f328426bb0c5bb53dfa51e421a5c31e86d16e7bb8cacbfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 12:35:12 compute-0 nova_compute[257802]: 2025-10-02 12:35:12.084 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:35:12 compute-0 nova_compute[257802]: 2025-10-02 12:35:12.085 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:35:12 compute-0 nova_compute[257802]: 2025-10-02 12:35:12.085 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] No VIF found with MAC fa:16:3e:63:8c:c2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:35:12 compute-0 nova_compute[257802]: 2025-10-02 12:35:12.086 2 INFO nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Using config drive
Oct 02 12:35:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:12.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:12 compute-0 nova_compute[257802]: 2025-10-02 12:35:12.147 2 DEBUG nova.storage.rbd_utils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] rbd image aa9070a4-cc2e-4aec-9abe-9873932ea0de_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:35:12 compute-0 nova_compute[257802]: 2025-10-02 12:35:12.151 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2479586594' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2100561163' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:12 compute-0 nova_compute[257802]: 2025-10-02 12:35:12.312 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:12 compute-0 nova_compute[257802]: 2025-10-02 12:35:12.313 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:12 compute-0 nova_compute[257802]: 2025-10-02 12:35:12.313 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:12 compute-0 nova_compute[257802]: 2025-10-02 12:35:12.313 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:35:12 compute-0 nova_compute[257802]: 2025-10-02 12:35:12.313 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-5de51fe20d3d18ec348d59f0465b8a68a2f8a73c20255142b4541e9f187e59f9-merged.mount: Deactivated successfully.
Oct 02 12:35:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:35:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:35:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:35:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:35:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:35:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:35:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:35:12 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3129051763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:12 compute-0 nova_compute[257802]: 2025-10-02 12:35:12.809 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:13 compute-0 nova_compute[257802]: 2025-10-02 12:35:13.056 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:35:13 compute-0 nova_compute[257802]: 2025-10-02 12:35:13.057 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:35:13 compute-0 nova_compute[257802]: 2025-10-02 12:35:13.057 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:35:13 compute-0 nova_compute[257802]: 2025-10-02 12:35:13.060 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:35:13 compute-0 nova_compute[257802]: 2025-10-02 12:35:13.060 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:35:13 compute-0 nova_compute[257802]: 2025-10-02 12:35:13.060 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:35:13 compute-0 nova_compute[257802]: 2025-10-02 12:35:13.063 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:35:13 compute-0 nova_compute[257802]: 2025-10-02 12:35:13.063 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:35:13 compute-0 podman[342727]: 2025-10-02 12:35:13.103150353 +0000 UTC m=+2.878375997 container remove 22fc4a956bb138982f328426bb0c5bb53dfa51e421a5c31e86d16e7bb8cacbfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ardinghelli, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 12:35:13 compute-0 sudo[342413]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:13 compute-0 systemd[1]: libpod-conmon-22fc4a956bb138982f328426bb0c5bb53dfa51e421a5c31e86d16e7bb8cacbfc.scope: Deactivated successfully.
Oct 02 12:35:13 compute-0 sudo[342869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:35:13 compute-0 sudo[342869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:13 compute-0 sudo[342869]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:13 compute-0 nova_compute[257802]: 2025-10-02 12:35:13.253 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:35:13 compute-0 nova_compute[257802]: 2025-10-02 12:35:13.254 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3869MB free_disk=20.73788833618164GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:35:13 compute-0 nova_compute[257802]: 2025-10-02 12:35:13.254 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:13 compute-0 nova_compute[257802]: 2025-10-02 12:35:13.255 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:13 compute-0 nova_compute[257802]: 2025-10-02 12:35:13.269 2 DEBUG nova.network.neutron [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Updating instance_info_cache with network_info: [{"id": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "address": "fa:16:3e:7f:8b:39", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb769e14-32", "ovs_interfaceid": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:35:13 compute-0 sudo[342894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:35:13 compute-0 sudo[342894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:13 compute-0 sudo[342894]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:13 compute-0 sudo[342919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:35:13 compute-0 sudo[342919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:13 compute-0 sudo[342919]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:13 compute-0 nova_compute[257802]: 2025-10-02 12:35:13.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:13 compute-0 sudo[342944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:35:13 compute-0 sudo[342944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:13.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:13 compute-0 nova_compute[257802]: 2025-10-02 12:35:13.426 2 DEBUG oslo_concurrency.lockutils [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Releasing lock "refresh_cache-7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:35:13 compute-0 ceph-mon[73607]: pgmap v2226: 305 pgs: 305 active+clean; 664 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 1.9 MiB/s wr, 64 op/s
Oct 02 12:35:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3129051763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1174958944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:13 compute-0 nova_compute[257802]: 2025-10-02 12:35:13.584 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance a53afa14-bb7b-4723-8239-2ed285f1bc94 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:35:13 compute-0 nova_compute[257802]: 2025-10-02 12:35:13.585 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:35:13 compute-0 nova_compute[257802]: 2025-10-02 12:35:13.585 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance aa9070a4-cc2e-4aec-9abe-9873932ea0de actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:35:13 compute-0 nova_compute[257802]: 2025-10-02 12:35:13.585 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:35:13 compute-0 nova_compute[257802]: 2025-10-02 12:35:13.585 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:35:13 compute-0 nova_compute[257802]: 2025-10-02 12:35:13.647 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2227: 305 pgs: 305 active+clean; 706 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 852 KiB/s rd, 3.6 MiB/s wr, 104 op/s
Oct 02 12:35:13 compute-0 nova_compute[257802]: 2025-10-02 12:35:13.865 2 DEBUG nova.virt.libvirt.driver [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:35:13 compute-0 podman[343009]: 2025-10-02 12:35:13.795493772 +0000 UTC m=+0.023872925 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:35:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:35:13 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2960373602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:35:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3571525367' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:14 compute-0 podman[343009]: 2025-10-02 12:35:14.076238501 +0000 UTC m=+0.304617654 container create 304c4cbd8b975b67473ac1edadb94163926209bc43459992b681873dd5a4e57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_faraday, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:35:14 compute-0 nova_compute[257802]: 2025-10-02 12:35:14.089 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:14 compute-0 nova_compute[257802]: 2025-10-02 12:35:14.095 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:35:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:35:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:14.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:35:14 compute-0 nova_compute[257802]: 2025-10-02 12:35:14.169 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:35:14 compute-0 nova_compute[257802]: 2025-10-02 12:35:14.294 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:35:14 compute-0 nova_compute[257802]: 2025-10-02 12:35:14.295 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.040s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:14 compute-0 nova_compute[257802]: 2025-10-02 12:35:14.331 2 INFO nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Creating config drive at /var/lib/nova/instances/aa9070a4-cc2e-4aec-9abe-9873932ea0de/disk.config
Oct 02 12:35:14 compute-0 nova_compute[257802]: 2025-10-02 12:35:14.339 2 DEBUG oslo_concurrency.processutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aa9070a4-cc2e-4aec-9abe-9873932ea0de/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr_b6_asm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:14 compute-0 nova_compute[257802]: 2025-10-02 12:35:14.476 2 DEBUG oslo_concurrency.processutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aa9070a4-cc2e-4aec-9abe-9873932ea0de/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr_b6_asm" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:14 compute-0 systemd[1]: Started libpod-conmon-304c4cbd8b975b67473ac1edadb94163926209bc43459992b681873dd5a4e57e.scope.
Oct 02 12:35:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:35:14 compute-0 nova_compute[257802]: 2025-10-02 12:35:14.870 2 DEBUG nova.storage.rbd_utils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] rbd image aa9070a4-cc2e-4aec-9abe-9873932ea0de_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:35:14 compute-0 nova_compute[257802]: 2025-10-02 12:35:14.875 2 DEBUG oslo_concurrency.processutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aa9070a4-cc2e-4aec-9abe-9873932ea0de/disk.config aa9070a4-cc2e-4aec-9abe-9873932ea0de_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:14 compute-0 ceph-mon[73607]: pgmap v2227: 305 pgs: 305 active+clean; 706 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 852 KiB/s rd, 3.6 MiB/s wr, 104 op/s
Oct 02 12:35:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2960373602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3571525367' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1491830790' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:14 compute-0 podman[343009]: 2025-10-02 12:35:14.97338219 +0000 UTC m=+1.201761403 container init 304c4cbd8b975b67473ac1edadb94163926209bc43459992b681873dd5a4e57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 12:35:14 compute-0 podman[343009]: 2025-10-02 12:35:14.982433631 +0000 UTC m=+1.210812804 container start 304c4cbd8b975b67473ac1edadb94163926209bc43459992b681873dd5a4e57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_faraday, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:35:14 compute-0 thirsty_faraday[343054]: 167 167
Oct 02 12:35:14 compute-0 systemd[1]: libpod-304c4cbd8b975b67473ac1edadb94163926209bc43459992b681873dd5a4e57e.scope: Deactivated successfully.
Oct 02 12:35:15 compute-0 podman[343009]: 2025-10-02 12:35:15.335895479 +0000 UTC m=+1.564274662 container attach 304c4cbd8b975b67473ac1edadb94163926209bc43459992b681873dd5a4e57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_faraday, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:35:15 compute-0 podman[343009]: 2025-10-02 12:35:15.336288619 +0000 UTC m=+1.564667782 container died 304c4cbd8b975b67473ac1edadb94163926209bc43459992b681873dd5a4e57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_faraday, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 12:35:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:15.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2228: 305 pgs: 305 active+clean; 706 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 146 op/s
Oct 02 12:35:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:16.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:16 compute-0 nova_compute[257802]: 2025-10-02 12:35:16.257 2 DEBUG nova.network.neutron [req-1c067c40-3d75-4024-ac72-fca882fa6903 req-f4e18592-eb4a-4d55-86c9-df478b0f2cbc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Updated VIF entry in instance network info cache for port 001b44c4-049b-4b18-9848-d408d7dcf23f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:35:16 compute-0 nova_compute[257802]: 2025-10-02 12:35:16.257 2 DEBUG nova.network.neutron [req-1c067c40-3d75-4024-ac72-fca882fa6903 req-f4e18592-eb4a-4d55-86c9-df478b0f2cbc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Updating instance_info_cache with network_info: [{"id": "001b44c4-049b-4b18-9848-d408d7dcf23f", "address": "fa:16:3e:63:8c:c2", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap001b44c4-04", "ovs_interfaceid": "001b44c4-049b-4b18-9848-d408d7dcf23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:35:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e31166c138056e63a31a268f36de7859272ba06cd3c81a53317a223f8dd0661-merged.mount: Deactivated successfully.
Oct 02 12:35:16 compute-0 nova_compute[257802]: 2025-10-02 12:35:16.382 2 DEBUG oslo_concurrency.lockutils [req-1c067c40-3d75-4024-ac72-fca882fa6903 req-f4e18592-eb4a-4d55-86c9-df478b0f2cbc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-aa9070a4-cc2e-4aec-9abe-9873932ea0de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:35:16 compute-0 nova_compute[257802]: 2025-10-02 12:35:16.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:17 compute-0 podman[343009]: 2025-10-02 12:35:17.158946314 +0000 UTC m=+3.387325507 container remove 304c4cbd8b975b67473ac1edadb94163926209bc43459992b681873dd5a4e57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_faraday, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 12:35:17 compute-0 systemd[1]: libpod-conmon-304c4cbd8b975b67473ac1edadb94163926209bc43459992b681873dd5a4e57e.scope: Deactivated successfully.
Oct 02 12:35:17 compute-0 podman[343116]: 2025-10-02 12:35:17.305765096 +0000 UTC m=+0.024998582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:35:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:17.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:17 compute-0 ceph-mon[73607]: pgmap v2228: 305 pgs: 305 active+clean; 706 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 146 op/s
Oct 02 12:35:17 compute-0 nova_compute[257802]: 2025-10-02 12:35:17.598 2 DEBUG oslo_concurrency.processutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aa9070a4-cc2e-4aec-9abe-9873932ea0de/disk.config aa9070a4-cc2e-4aec-9abe-9873932ea0de_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.723s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:17 compute-0 nova_compute[257802]: 2025-10-02 12:35:17.599 2 INFO nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Deleting local config drive /var/lib/nova/instances/aa9070a4-cc2e-4aec-9abe-9873932ea0de/disk.config because it was imported into RBD.
Oct 02 12:35:17 compute-0 kernel: tap001b44c4-04: entered promiscuous mode
Oct 02 12:35:17 compute-0 NetworkManager[44987]: <info>  [1759408517.6543] manager: (tap001b44c4-04): new Tun device (/org/freedesktop/NetworkManager/Devices/282)
Oct 02 12:35:17 compute-0 ovn_controller[148183]: 2025-10-02T12:35:17Z|00601|binding|INFO|Claiming lport 001b44c4-049b-4b18-9848-d408d7dcf23f for this chassis.
Oct 02 12:35:17 compute-0 ovn_controller[148183]: 2025-10-02T12:35:17Z|00602|binding|INFO|001b44c4-049b-4b18-9848-d408d7dcf23f: Claiming fa:16:3e:63:8c:c2 10.100.0.3
Oct 02 12:35:17 compute-0 ovn_controller[148183]: 2025-10-02T12:35:17Z|00603|binding|INFO|Setting lport 001b44c4-049b-4b18-9848-d408d7dcf23f ovn-installed in OVS
Oct 02 12:35:17 compute-0 nova_compute[257802]: 2025-10-02 12:35:17.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:17 compute-0 nova_compute[257802]: 2025-10-02 12:35:17.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:17 compute-0 systemd-machined[211836]: New machine qemu-69-instance-00000089.
Oct 02 12:35:17 compute-0 systemd[1]: Started Virtual Machine qemu-69-instance-00000089.
Oct 02 12:35:17 compute-0 systemd-udevd[343143]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:35:17 compute-0 NetworkManager[44987]: <info>  [1759408517.7432] device (tap001b44c4-04): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:35:17 compute-0 NetworkManager[44987]: <info>  [1759408517.7441] device (tap001b44c4-04): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:35:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2229: 305 pgs: 305 active+clean; 706 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 138 op/s
Oct 02 12:35:17 compute-0 ovn_controller[148183]: 2025-10-02T12:35:17Z|00604|binding|INFO|Setting lport 001b44c4-049b-4b18-9848-d408d7dcf23f up in Southbound
Oct 02 12:35:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:17.797 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:8c:c2 10.100.0.3'], port_security=['fa:16:3e:63:8c:c2 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'aa9070a4-cc2e-4aec-9abe-9873932ea0de', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3b4ed8f5ff54d4cb9f232e285155ca0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fe054fa7-d7e9-483c-baf2-6f0a11c4cd59', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3fe18a99-aad3-484c-abd2-6e2374240160, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=001b44c4-049b-4b18-9848-d408d7dcf23f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:35:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:17.799 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 001b44c4-049b-4b18-9848-d408d7dcf23f in datapath 71fb4dcc-12bb-458e-9241-e19e223ca96d bound to our chassis
Oct 02 12:35:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:17.800 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 71fb4dcc-12bb-458e-9241-e19e223ca96d
Oct 02 12:35:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:17.813 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[25bb64ae-2a37-4582-a130-7825ca12fd03]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:17.814 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap71fb4dcc-11 in ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:35:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:17.815 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap71fb4dcc-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:35:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:17.815 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[74f96b71-8aef-4213-a409-70ef05fbb41b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:17.816 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[15f1fee4-5bd9-43c2-b9a1-85b847d220f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:17.828 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[7fdce328-4b6c-4717-a358-b656bbb74312]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:17.843 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ed75972a-28c2-4394-a3b7-c68f41b0d4f6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:17.871 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[629a8a06-92d5-4dbd-858f-ae6e97ee37c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:17.878 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5fee583a-2ec6-45bd-85e2-613c88cab2bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:17 compute-0 NetworkManager[44987]: <info>  [1759408517.8793] manager: (tap71fb4dcc-10): new Veth device (/org/freedesktop/NetworkManager/Devices/283)
Oct 02 12:35:17 compute-0 podman[343116]: 2025-10-02 12:35:17.905335155 +0000 UTC m=+0.624568601 container create 1532aa22081529c91b7e347f30b34fddf218b0de3d3fe951b72f8b3c7d600c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kalam, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 12:35:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:17.923 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[e7cb16f6-a9f1-449c-87e8-afec92d19a58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:17.926 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[e4b6175a-4ec7-48f7-8739-24641f505813]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:17 compute-0 NetworkManager[44987]: <info>  [1759408517.9490] device (tap71fb4dcc-10): carrier: link connected
Oct 02 12:35:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:17.956 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[e563b43e-71f2-4c83-b00e-68534c9da61d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:17.973 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5f07026f-1558-4f71-aa93-d2669dbe3e78]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71fb4dcc-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:02:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658557, 'reachable_time': 37654, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343183, 'error': None, 'target': 'ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:17.987 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[50bf1623-40d1-4bbc-bd23-31c8a9761a9b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe54:264'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 658557, 'tstamp': 658557}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343184, 'error': None, 'target': 'ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:18.007 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[856a94ad-58e4-4090-a87c-4bc02c369c0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71fb4dcc-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:02:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658557, 'reachable_time': 37654, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 343185, 'error': None, 'target': 'ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:18.039 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7f59371e-2d88-4fb7-853d-811d7bab2432]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:18.110 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[269f495c-8bf0-44f3-9b0f-092827b46435]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:18.112 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71fb4dcc-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:18.113 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:18.114 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71fb4dcc-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:18.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:18 compute-0 NetworkManager[44987]: <info>  [1759408518.1585] manager: (tap71fb4dcc-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/284)
Oct 02 12:35:18 compute-0 nova_compute[257802]: 2025-10-02 12:35:18.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:18 compute-0 kernel: tap71fb4dcc-10: entered promiscuous mode
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:18.162 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap71fb4dcc-10, col_values=(('external_ids', {'iface-id': 'c32f3592-2fbc-4d81-bf91-681fef1a2dd1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:18 compute-0 nova_compute[257802]: 2025-10-02 12:35:18.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:18 compute-0 ovn_controller[148183]: 2025-10-02T12:35:18Z|00605|binding|INFO|Releasing lport c32f3592-2fbc-4d81-bf91-681fef1a2dd1 from this chassis (sb_readonly=0)
Oct 02 12:35:18 compute-0 nova_compute[257802]: 2025-10-02 12:35:18.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:18.185 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/71fb4dcc-12bb-458e-9241-e19e223ca96d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/71fb4dcc-12bb-458e-9241-e19e223ca96d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:18.187 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a83fc43f-18fd-40a8-9239-c41725512fcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:18.188 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-71fb4dcc-12bb-458e-9241-e19e223ca96d
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/71fb4dcc-12bb-458e-9241-e19e223ca96d.pid.haproxy
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 71fb4dcc-12bb-458e-9241-e19e223ca96d
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:35:18 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:18.189 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'env', 'PROCESS_TAG=haproxy-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/71fb4dcc-12bb-458e-9241-e19e223ca96d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:35:18 compute-0 systemd[1]: Started libpod-conmon-1532aa22081529c91b7e347f30b34fddf218b0de3d3fe951b72f8b3c7d600c9d.scope.
Oct 02 12:35:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef73c2c59c534c659f6a80d423aa21a8e02d775d7041b84c8cf83dc2e1efc4c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef73c2c59c534c659f6a80d423aa21a8e02d775d7041b84c8cf83dc2e1efc4c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef73c2c59c534c659f6a80d423aa21a8e02d775d7041b84c8cf83dc2e1efc4c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef73c2c59c534c659f6a80d423aa21a8e02d775d7041b84c8cf83dc2e1efc4c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:35:18 compute-0 nova_compute[257802]: 2025-10-02 12:35:18.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:18 compute-0 podman[343116]: 2025-10-02 12:35:18.371198773 +0000 UTC m=+1.090432309 container init 1532aa22081529c91b7e347f30b34fddf218b0de3d3fe951b72f8b3c7d600c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 12:35:18 compute-0 podman[343116]: 2025-10-02 12:35:18.380808619 +0000 UTC m=+1.100042095 container start 1532aa22081529c91b7e347f30b34fddf218b0de3d3fe951b72f8b3c7d600c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:35:18 compute-0 nova_compute[257802]: 2025-10-02 12:35:18.679 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408518.6786182, aa9070a4-cc2e-4aec-9abe-9873932ea0de => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:35:18 compute-0 nova_compute[257802]: 2025-10-02 12:35:18.679 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] VM Started (Lifecycle Event)
Oct 02 12:35:18 compute-0 nova_compute[257802]: 2025-10-02 12:35:18.695 2 DEBUG nova.compute.manager [req-51bd4259-30c4-4741-9dd2-92e39642502f req-34fb1a4f-1be5-42f2-be8e-fe21b5d82f40 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Received event network-vif-plugged-001b44c4-049b-4b18-9848-d408d7dcf23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:18 compute-0 nova_compute[257802]: 2025-10-02 12:35:18.695 2 DEBUG oslo_concurrency.lockutils [req-51bd4259-30c4-4741-9dd2-92e39642502f req-34fb1a4f-1be5-42f2-be8e-fe21b5d82f40 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "aa9070a4-cc2e-4aec-9abe-9873932ea0de-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:18 compute-0 nova_compute[257802]: 2025-10-02 12:35:18.696 2 DEBUG oslo_concurrency.lockutils [req-51bd4259-30c4-4741-9dd2-92e39642502f req-34fb1a4f-1be5-42f2-be8e-fe21b5d82f40 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "aa9070a4-cc2e-4aec-9abe-9873932ea0de-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:18 compute-0 nova_compute[257802]: 2025-10-02 12:35:18.696 2 DEBUG oslo_concurrency.lockutils [req-51bd4259-30c4-4741-9dd2-92e39642502f req-34fb1a4f-1be5-42f2-be8e-fe21b5d82f40 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "aa9070a4-cc2e-4aec-9abe-9873932ea0de-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:18 compute-0 nova_compute[257802]: 2025-10-02 12:35:18.696 2 DEBUG nova.compute.manager [req-51bd4259-30c4-4741-9dd2-92e39642502f req-34fb1a4f-1be5-42f2-be8e-fe21b5d82f40 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Processing event network-vif-plugged-001b44c4-049b-4b18-9848-d408d7dcf23f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:35:18 compute-0 nova_compute[257802]: 2025-10-02 12:35:18.697 2 DEBUG nova.compute.manager [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:35:18 compute-0 nova_compute[257802]: 2025-10-02 12:35:18.702 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:35:18 compute-0 nova_compute[257802]: 2025-10-02 12:35:18.706 2 INFO nova.virt.libvirt.driver [-] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Instance spawned successfully.
Oct 02 12:35:18 compute-0 nova_compute[257802]: 2025-10-02 12:35:18.706 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:35:18 compute-0 nova_compute[257802]: 2025-10-02 12:35:18.731 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:35:18 compute-0 nova_compute[257802]: 2025-10-02 12:35:18.734 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:35:18 compute-0 nova_compute[257802]: 2025-10-02 12:35:18.753 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:35:18 compute-0 nova_compute[257802]: 2025-10-02 12:35:18.753 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:35:18 compute-0 nova_compute[257802]: 2025-10-02 12:35:18.754 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:35:18 compute-0 nova_compute[257802]: 2025-10-02 12:35:18.754 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:35:18 compute-0 nova_compute[257802]: 2025-10-02 12:35:18.754 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:35:18 compute-0 nova_compute[257802]: 2025-10-02 12:35:18.755 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:35:18 compute-0 podman[343116]: 2025-10-02 12:35:18.755402823 +0000 UTC m=+1.474636279 container attach 1532aa22081529c91b7e347f30b34fddf218b0de3d3fe951b72f8b3c7d600c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kalam, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 12:35:18 compute-0 nova_compute[257802]: 2025-10-02 12:35:18.848 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:35:18 compute-0 nova_compute[257802]: 2025-10-02 12:35:18.848 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408518.67879, aa9070a4-cc2e-4aec-9abe-9873932ea0de => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:35:18 compute-0 nova_compute[257802]: 2025-10-02 12:35:18.849 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] VM Paused (Lifecycle Event)
Oct 02 12:35:18 compute-0 podman[343257]: 2025-10-02 12:35:18.940558394 +0000 UTC m=+0.076159925 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 12:35:18 compute-0 podman[343258]: 2025-10-02 12:35:18.959442725 +0000 UTC m=+0.076293697 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 12:35:18 compute-0 podman[343255]: 2025-10-02 12:35:18.970944537 +0000 UTC m=+0.099601378 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 02 12:35:18 compute-0 nova_compute[257802]: 2025-10-02 12:35:18.986 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:35:18 compute-0 nova_compute[257802]: 2025-10-02 12:35:18.990 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408518.7009206, aa9070a4-cc2e-4aec-9abe-9873932ea0de => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:35:18 compute-0 nova_compute[257802]: 2025-10-02 12:35:18.990 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] VM Resumed (Lifecycle Event)
Oct 02 12:35:19 compute-0 nova_compute[257802]: 2025-10-02 12:35:19.007 2 INFO nova.compute.manager [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Took 13.58 seconds to spawn the instance on the hypervisor.
Oct 02 12:35:19 compute-0 nova_compute[257802]: 2025-10-02 12:35:19.007 2 DEBUG nova.compute.manager [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:35:19 compute-0 podman[343277]: 2025-10-02 12:35:18.94490326 +0000 UTC m=+0.045192266 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:35:19 compute-0 nova_compute[257802]: 2025-10-02 12:35:19.035 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:35:19 compute-0 nova_compute[257802]: 2025-10-02 12:35:19.038 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:35:19 compute-0 ceph-mon[73607]: pgmap v2229: 305 pgs: 305 active+clean; 706 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 138 op/s
Oct 02 12:35:19 compute-0 nova_compute[257802]: 2025-10-02 12:35:19.168 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:35:19 compute-0 nova_compute[257802]: 2025-10-02 12:35:19.188 2 INFO nova.compute.manager [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Took 14.79 seconds to build instance.
Oct 02 12:35:19 compute-0 sad_kalam[343229]: {
Oct 02 12:35:19 compute-0 sad_kalam[343229]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:35:19 compute-0 sad_kalam[343229]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:35:19 compute-0 sad_kalam[343229]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:35:19 compute-0 sad_kalam[343229]:         "osd_id": 1,
Oct 02 12:35:19 compute-0 sad_kalam[343229]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:35:19 compute-0 sad_kalam[343229]:         "type": "bluestore"
Oct 02 12:35:19 compute-0 sad_kalam[343229]:     }
Oct 02 12:35:19 compute-0 sad_kalam[343229]: }
Oct 02 12:35:19 compute-0 systemd[1]: libpod-1532aa22081529c91b7e347f30b34fddf218b0de3d3fe951b72f8b3c7d600c9d.scope: Deactivated successfully.
Oct 02 12:35:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:19 compute-0 podman[343342]: 2025-10-02 12:35:19.30909153 +0000 UTC m=+0.037719754 container died 1532aa22081529c91b7e347f30b34fddf218b0de3d3fe951b72f8b3c7d600c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:35:19 compute-0 nova_compute[257802]: 2025-10-02 12:35:19.330 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "aa9070a4-cc2e-4aec-9abe-9873932ea0de" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:19.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:19 compute-0 podman[343277]: 2025-10-02 12:35:19.471055943 +0000 UTC m=+0.571344929 container create c890fe61c1baeb63a6fdc4cf685f3ee2223505e4adc46c521634f6ddfae3c485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:35:19 compute-0 systemd[1]: Started libpod-conmon-c890fe61c1baeb63a6fdc4cf685f3ee2223505e4adc46c521634f6ddfae3c485.scope.
Oct 02 12:35:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef73c2c59c534c659f6a80d423aa21a8e02d775d7041b84c8cf83dc2e1efc4c2-merged.mount: Deactivated successfully.
Oct 02 12:35:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:35:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ba73474a103e78283aec271ffb8accb6bacfeeeb0ddf9e691477678ebe8d45/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:35:19 compute-0 podman[343277]: 2025-10-02 12:35:19.624392594 +0000 UTC m=+0.724681590 container init c890fe61c1baeb63a6fdc4cf685f3ee2223505e4adc46c521634f6ddfae3c485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:35:19 compute-0 podman[343277]: 2025-10-02 12:35:19.630594286 +0000 UTC m=+0.730883272 container start c890fe61c1baeb63a6fdc4cf685f3ee2223505e4adc46c521634f6ddfae3c485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:35:19 compute-0 podman[343342]: 2025-10-02 12:35:19.638277324 +0000 UTC m=+0.366905538 container remove 1532aa22081529c91b7e347f30b34fddf218b0de3d3fe951b72f8b3c7d600c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kalam, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 12:35:19 compute-0 systemd[1]: libpod-conmon-1532aa22081529c91b7e347f30b34fddf218b0de3d3fe951b72f8b3c7d600c9d.scope: Deactivated successfully.
Oct 02 12:35:19 compute-0 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[343362]: [NOTICE]   (343366) : New worker (343368) forked
Oct 02 12:35:19 compute-0 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[343362]: [NOTICE]   (343366) : Loading success.
Oct 02 12:35:19 compute-0 sudo[342944]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:35:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2230: 305 pgs: 305 active+clean; 706 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 153 op/s
Oct 02 12:35:19 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:35:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:35:19 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:35:19 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 3c72c437-03d7-4ad7-bc0c-9e0be761b513 does not exist
Oct 02 12:35:19 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev ae4fddab-4c05-4e5a-bfd0-d3272190d309 does not exist
Oct 02 12:35:19 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 04a00b1c-265b-4c14-ae6c-859e8a9497c0 does not exist
Oct 02 12:35:19 compute-0 sudo[343377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:35:19 compute-0 sudo[343377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:19 compute-0 sudo[343377]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:19 compute-0 sudo[343402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:35:19 compute-0 sudo[343402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:19 compute-0 sudo[343402]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:35:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:20.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:35:20 compute-0 nova_compute[257802]: 2025-10-02 12:35:20.808 2 DEBUG nova.compute.manager [req-35b4c46b-93e0-4f5f-82fd-c698f0494136 req-146394c8-d452-4982-8983-c286fb180eac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Received event network-vif-plugged-001b44c4-049b-4b18-9848-d408d7dcf23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:20 compute-0 nova_compute[257802]: 2025-10-02 12:35:20.808 2 DEBUG oslo_concurrency.lockutils [req-35b4c46b-93e0-4f5f-82fd-c698f0494136 req-146394c8-d452-4982-8983-c286fb180eac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "aa9070a4-cc2e-4aec-9abe-9873932ea0de-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:20 compute-0 nova_compute[257802]: 2025-10-02 12:35:20.809 2 DEBUG oslo_concurrency.lockutils [req-35b4c46b-93e0-4f5f-82fd-c698f0494136 req-146394c8-d452-4982-8983-c286fb180eac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "aa9070a4-cc2e-4aec-9abe-9873932ea0de-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:20 compute-0 nova_compute[257802]: 2025-10-02 12:35:20.809 2 DEBUG oslo_concurrency.lockutils [req-35b4c46b-93e0-4f5f-82fd-c698f0494136 req-146394c8-d452-4982-8983-c286fb180eac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "aa9070a4-cc2e-4aec-9abe-9873932ea0de-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:20 compute-0 nova_compute[257802]: 2025-10-02 12:35:20.809 2 DEBUG nova.compute.manager [req-35b4c46b-93e0-4f5f-82fd-c698f0494136 req-146394c8-d452-4982-8983-c286fb180eac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] No waiting events found dispatching network-vif-plugged-001b44c4-049b-4b18-9848-d408d7dcf23f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:20 compute-0 nova_compute[257802]: 2025-10-02 12:35:20.809 2 WARNING nova.compute.manager [req-35b4c46b-93e0-4f5f-82fd-c698f0494136 req-146394c8-d452-4982-8983-c286fb180eac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Received unexpected event network-vif-plugged-001b44c4-049b-4b18-9848-d408d7dcf23f for instance with vm_state active and task_state None.
Oct 02 12:35:20 compute-0 ceph-mon[73607]: pgmap v2230: 305 pgs: 305 active+clean; 706 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 153 op/s
Oct 02 12:35:20 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:35:20 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:35:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:21.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:21 compute-0 kernel: tapfb769e14-32 (unregistering): left promiscuous mode
Oct 02 12:35:21 compute-0 NetworkManager[44987]: <info>  [1759408521.6611] device (tapfb769e14-32): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:35:21 compute-0 ovn_controller[148183]: 2025-10-02T12:35:21Z|00606|binding|INFO|Releasing lport fb769e14-32bb-436e-a8b2-f08e69207e0f from this chassis (sb_readonly=0)
Oct 02 12:35:21 compute-0 ovn_controller[148183]: 2025-10-02T12:35:21Z|00607|binding|INFO|Setting lport fb769e14-32bb-436e-a8b2-f08e69207e0f down in Southbound
Oct 02 12:35:21 compute-0 nova_compute[257802]: 2025-10-02 12:35:21.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:21 compute-0 ovn_controller[148183]: 2025-10-02T12:35:21Z|00608|binding|INFO|Removing iface tapfb769e14-32 ovn-installed in OVS
Oct 02 12:35:21 compute-0 nova_compute[257802]: 2025-10-02 12:35:21.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:21 compute-0 nova_compute[257802]: 2025-10-02 12:35:21.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:21.720 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:8b:39 10.100.0.13'], port_security=['fa:16:3e:7f:8b:39 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e58f4ba2-c72c-42b8-acea-ca6241431726', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc4d8f857b2d42bf9ae477fc5f514216', 'neutron:revision_number': '4', 'neutron:security_group_ids': '31221492-0f82-4f56-bac5-f47f9e2caefd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.186'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff6421fa-d014-4140-8a8b-1356d60478c0, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=fb769e14-32bb-436e-a8b2-f08e69207e0f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:21.722 158261 INFO neutron.agent.ovn.metadata.agent [-] Port fb769e14-32bb-436e-a8b2-f08e69207e0f in datapath e58f4ba2-c72c-42b8-acea-ca6241431726 unbound from our chassis
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:21.724 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e58f4ba2-c72c-42b8-acea-ca6241431726
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:21.739 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[033edefa-9faa-4226-a88a-7cb92144ceeb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:21 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000084.scope: Deactivated successfully.
Oct 02 12:35:21 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000084.scope: Consumed 15.032s CPU time.
Oct 02 12:35:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2231: 305 pgs: 305 active+clean; 692 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.6 MiB/s wr, 150 op/s
Oct 02 12:35:21 compute-0 systemd-machined[211836]: Machine qemu-67-instance-00000084 terminated.
Oct 02 12:35:21 compute-0 nova_compute[257802]: 2025-10-02 12:35:21.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:21.772 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[c25fbb64-4c1a-4ab7-af06-c27151f00d98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:21.776 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[5f68ffbc-d17a-4aba-b904-a7351fe490a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:21.805 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[16cac4fd-bdf7-46ae-aba9-f3e41e8545d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:21.824 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9e7dc1ae-b4c2-4fc6-a65d-352ede02a104]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape58f4ba2-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:ad:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647515, 'reachable_time': 42805, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343440, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:21.841 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b3d010c3-edd3-40f1-a6a5-f62916ec563f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape58f4ba2-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647525, 'tstamp': 647525}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343441, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape58f4ba2-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647528, 'tstamp': 647528}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343441, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:21.842 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape58f4ba2-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:21 compute-0 nova_compute[257802]: 2025-10-02 12:35:21.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:21 compute-0 nova_compute[257802]: 2025-10-02 12:35:21.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:21.847 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape58f4ba2-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:21.848 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:21.848 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape58f4ba2-c0, col_values=(('external_ids', {'iface-id': '81a5a13b-b81c-444a-8751-b35a35cdf3dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:21.849 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:35:21 compute-0 nova_compute[257802]: 2025-10-02 12:35:21.942 2 INFO nova.virt.libvirt.driver [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Instance shutdown successfully after 8 seconds.
Oct 02 12:35:21 compute-0 nova_compute[257802]: 2025-10-02 12:35:21.949 2 INFO nova.virt.libvirt.driver [-] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Instance destroyed successfully.
Oct 02 12:35:21 compute-0 nova_compute[257802]: 2025-10-02 12:35:21.950 2 DEBUG nova.objects.instance [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'numa_topology' on Instance uuid 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:22.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:22 compute-0 nova_compute[257802]: 2025-10-02 12:35:22.518 2 INFO nova.virt.libvirt.driver [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Attempting rescue
Oct 02 12:35:22 compute-0 nova_compute[257802]: 2025-10-02 12:35:22.518 2 DEBUG nova.virt.libvirt.driver [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Oct 02 12:35:22 compute-0 nova_compute[257802]: 2025-10-02 12:35:22.522 2 DEBUG nova.virt.libvirt.driver [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 12:35:22 compute-0 nova_compute[257802]: 2025-10-02 12:35:22.522 2 INFO nova.virt.libvirt.driver [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Creating image(s)
Oct 02 12:35:22 compute-0 nova_compute[257802]: 2025-10-02 12:35:22.548 2 DEBUG nova.storage.rbd_utils [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:35:22 compute-0 nova_compute[257802]: 2025-10-02 12:35:22.550 2 DEBUG nova.objects.instance [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:23 compute-0 ceph-mon[73607]: pgmap v2231: 305 pgs: 305 active+clean; 692 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.6 MiB/s wr, 150 op/s
Oct 02 12:35:23 compute-0 nova_compute[257802]: 2025-10-02 12:35:23.278 2 DEBUG nova.compute.manager [req-573eda4c-bea3-4acb-9ab4-6db32da21839 req-e59d5dd7-d305-4e5a-8904-a1896ca6e195 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Received event network-vif-unplugged-fb769e14-32bb-436e-a8b2-f08e69207e0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:23 compute-0 nova_compute[257802]: 2025-10-02 12:35:23.278 2 DEBUG oslo_concurrency.lockutils [req-573eda4c-bea3-4acb-9ab4-6db32da21839 req-e59d5dd7-d305-4e5a-8904-a1896ca6e195 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:23 compute-0 nova_compute[257802]: 2025-10-02 12:35:23.279 2 DEBUG oslo_concurrency.lockutils [req-573eda4c-bea3-4acb-9ab4-6db32da21839 req-e59d5dd7-d305-4e5a-8904-a1896ca6e195 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:23 compute-0 nova_compute[257802]: 2025-10-02 12:35:23.279 2 DEBUG oslo_concurrency.lockutils [req-573eda4c-bea3-4acb-9ab4-6db32da21839 req-e59d5dd7-d305-4e5a-8904-a1896ca6e195 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:23 compute-0 nova_compute[257802]: 2025-10-02 12:35:23.279 2 DEBUG nova.compute.manager [req-573eda4c-bea3-4acb-9ab4-6db32da21839 req-e59d5dd7-d305-4e5a-8904-a1896ca6e195 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] No waiting events found dispatching network-vif-unplugged-fb769e14-32bb-436e-a8b2-f08e69207e0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:23 compute-0 nova_compute[257802]: 2025-10-02 12:35:23.280 2 WARNING nova.compute.manager [req-573eda4c-bea3-4acb-9ab4-6db32da21839 req-e59d5dd7-d305-4e5a-8904-a1896ca6e195 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Received unexpected event network-vif-unplugged-fb769e14-32bb-436e-a8b2-f08e69207e0f for instance with vm_state active and task_state rescuing.
Oct 02 12:35:23 compute-0 nova_compute[257802]: 2025-10-02 12:35:23.312 2 DEBUG nova.storage.rbd_utils [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:35:23 compute-0 nova_compute[257802]: 2025-10-02 12:35:23.341 2 DEBUG nova.storage.rbd_utils [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:35:23 compute-0 nova_compute[257802]: 2025-10-02 12:35:23.344 2 DEBUG oslo_concurrency.processutils [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:23 compute-0 nova_compute[257802]: 2025-10-02 12:35:23.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:23 compute-0 nova_compute[257802]: 2025-10-02 12:35:23.414 2 DEBUG oslo_concurrency.processutils [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:23 compute-0 nova_compute[257802]: 2025-10-02 12:35:23.414 2 DEBUG oslo_concurrency.lockutils [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:23 compute-0 nova_compute[257802]: 2025-10-02 12:35:23.415 2 DEBUG oslo_concurrency.lockutils [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:23 compute-0 nova_compute[257802]: 2025-10-02 12:35:23.415 2 DEBUG oslo_concurrency.lockutils [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:23.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:23 compute-0 nova_compute[257802]: 2025-10-02 12:35:23.443 2 DEBUG nova.storage.rbd_utils [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:35:23 compute-0 nova_compute[257802]: 2025-10-02 12:35:23.447 2 DEBUG oslo_concurrency.processutils [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2232: 305 pgs: 305 active+clean; 681 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.1 MiB/s wr, 229 op/s
Oct 02 12:35:23 compute-0 podman[343549]: 2025-10-02 12:35:23.950037327 +0000 UTC m=+0.081703561 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:35:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:24.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:24 compute-0 nova_compute[257802]: 2025-10-02 12:35:24.243 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:24 compute-0 nova_compute[257802]: 2025-10-02 12:35:24.471 2 DEBUG oslo_concurrency.processutils [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:24 compute-0 nova_compute[257802]: 2025-10-02 12:35:24.472 2 DEBUG nova.objects.instance [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'migration_context' on Instance uuid 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:24 compute-0 nova_compute[257802]: 2025-10-02 12:35:24.691 2 DEBUG nova.virt.libvirt.driver [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:35:24 compute-0 nova_compute[257802]: 2025-10-02 12:35:24.692 2 DEBUG nova.virt.libvirt.driver [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Start _get_guest_xml network_info=[{"id": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "address": "fa:16:3e:7f:8b:39", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "vif_mac": "fa:16:3e:7f:8b:39"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb769e14-32", "ovs_interfaceid": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:35:24 compute-0 nova_compute[257802]: 2025-10-02 12:35:24.692 2 DEBUG nova.objects.instance [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'resources' on Instance uuid 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:25 compute-0 nova_compute[257802]: 2025-10-02 12:35:25.010 2 WARNING nova.virt.libvirt.driver [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:35:25 compute-0 nova_compute[257802]: 2025-10-02 12:35:25.029 2 DEBUG nova.virt.libvirt.host [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:35:25 compute-0 nova_compute[257802]: 2025-10-02 12:35:25.030 2 DEBUG nova.virt.libvirt.host [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:35:25 compute-0 nova_compute[257802]: 2025-10-02 12:35:25.041 2 DEBUG nova.virt.libvirt.host [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:35:25 compute-0 nova_compute[257802]: 2025-10-02 12:35:25.042 2 DEBUG nova.virt.libvirt.host [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:35:25 compute-0 nova_compute[257802]: 2025-10-02 12:35:25.044 2 DEBUG nova.virt.libvirt.driver [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:35:25 compute-0 nova_compute[257802]: 2025-10-02 12:35:25.045 2 DEBUG nova.virt.hardware [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:35:25 compute-0 nova_compute[257802]: 2025-10-02 12:35:25.046 2 DEBUG nova.virt.hardware [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:35:25 compute-0 nova_compute[257802]: 2025-10-02 12:35:25.046 2 DEBUG nova.virt.hardware [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:35:25 compute-0 nova_compute[257802]: 2025-10-02 12:35:25.047 2 DEBUG nova.virt.hardware [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:35:25 compute-0 nova_compute[257802]: 2025-10-02 12:35:25.047 2 DEBUG nova.virt.hardware [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:35:25 compute-0 nova_compute[257802]: 2025-10-02 12:35:25.048 2 DEBUG nova.virt.hardware [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:35:25 compute-0 nova_compute[257802]: 2025-10-02 12:35:25.048 2 DEBUG nova.virt.hardware [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:35:25 compute-0 nova_compute[257802]: 2025-10-02 12:35:25.049 2 DEBUG nova.virt.hardware [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:35:25 compute-0 nova_compute[257802]: 2025-10-02 12:35:25.049 2 DEBUG nova.virt.hardware [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:35:25 compute-0 nova_compute[257802]: 2025-10-02 12:35:25.050 2 DEBUG nova.virt.hardware [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:35:25 compute-0 nova_compute[257802]: 2025-10-02 12:35:25.050 2 DEBUG nova.virt.hardware [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:35:25 compute-0 nova_compute[257802]: 2025-10-02 12:35:25.051 2 DEBUG nova.objects.instance [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:25 compute-0 ceph-mon[73607]: pgmap v2232: 305 pgs: 305 active+clean; 681 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.1 MiB/s wr, 229 op/s
Oct 02 12:35:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2371010568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:25.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:25 compute-0 nova_compute[257802]: 2025-10-02 12:35:25.571 2 DEBUG oslo_concurrency.processutils [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:25 compute-0 nova_compute[257802]: 2025-10-02 12:35:25.668 2 DEBUG nova.compute.manager [req-de2e7e1c-45d3-457f-93dc-01baf1563094 req-cdc77ea5-57bd-401d-8ca6-adf7f3f26b77 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Received event network-vif-plugged-fb769e14-32bb-436e-a8b2-f08e69207e0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:25 compute-0 nova_compute[257802]: 2025-10-02 12:35:25.669 2 DEBUG oslo_concurrency.lockutils [req-de2e7e1c-45d3-457f-93dc-01baf1563094 req-cdc77ea5-57bd-401d-8ca6-adf7f3f26b77 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:25 compute-0 nova_compute[257802]: 2025-10-02 12:35:25.669 2 DEBUG oslo_concurrency.lockutils [req-de2e7e1c-45d3-457f-93dc-01baf1563094 req-cdc77ea5-57bd-401d-8ca6-adf7f3f26b77 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:25 compute-0 nova_compute[257802]: 2025-10-02 12:35:25.669 2 DEBUG oslo_concurrency.lockutils [req-de2e7e1c-45d3-457f-93dc-01baf1563094 req-cdc77ea5-57bd-401d-8ca6-adf7f3f26b77 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:25 compute-0 nova_compute[257802]: 2025-10-02 12:35:25.670 2 DEBUG nova.compute.manager [req-de2e7e1c-45d3-457f-93dc-01baf1563094 req-cdc77ea5-57bd-401d-8ca6-adf7f3f26b77 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] No waiting events found dispatching network-vif-plugged-fb769e14-32bb-436e-a8b2-f08e69207e0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:25 compute-0 nova_compute[257802]: 2025-10-02 12:35:25.670 2 WARNING nova.compute.manager [req-de2e7e1c-45d3-457f-93dc-01baf1563094 req-cdc77ea5-57bd-401d-8ca6-adf7f3f26b77 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Received unexpected event network-vif-plugged-fb769e14-32bb-436e-a8b2-f08e69207e0f for instance with vm_state active and task_state rescuing.
Oct 02 12:35:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2233: 305 pgs: 305 active+clean; 688 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 1.0 MiB/s wr, 226 op/s
Oct 02 12:35:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:35:25 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/671159876' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:35:25 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2336774228' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:25 compute-0 nova_compute[257802]: 2025-10-02 12:35:25.993 2 DEBUG oslo_concurrency.processutils [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:25 compute-0 nova_compute[257802]: 2025-10-02 12:35:25.994 2 DEBUG oslo_concurrency.processutils [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:26.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:26 compute-0 nova_compute[257802]: 2025-10-02 12:35:26.238 2 DEBUG oslo_concurrency.lockutils [None req-e919810b-baa6-4141-976e-8841c6188fbb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "aa9070a4-cc2e-4aec-9abe-9873932ea0de" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:26 compute-0 nova_compute[257802]: 2025-10-02 12:35:26.238 2 DEBUG oslo_concurrency.lockutils [None req-e919810b-baa6-4141-976e-8841c6188fbb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "aa9070a4-cc2e-4aec-9abe-9873932ea0de" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:26 compute-0 nova_compute[257802]: 2025-10-02 12:35:26.238 2 DEBUG oslo_concurrency.lockutils [None req-e919810b-baa6-4141-976e-8841c6188fbb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "aa9070a4-cc2e-4aec-9abe-9873932ea0de-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:26 compute-0 nova_compute[257802]: 2025-10-02 12:35:26.239 2 DEBUG oslo_concurrency.lockutils [None req-e919810b-baa6-4141-976e-8841c6188fbb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "aa9070a4-cc2e-4aec-9abe-9873932ea0de-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:26 compute-0 nova_compute[257802]: 2025-10-02 12:35:26.239 2 DEBUG oslo_concurrency.lockutils [None req-e919810b-baa6-4141-976e-8841c6188fbb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "aa9070a4-cc2e-4aec-9abe-9873932ea0de-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:26 compute-0 nova_compute[257802]: 2025-10-02 12:35:26.240 2 INFO nova.compute.manager [None req-e919810b-baa6-4141-976e-8841c6188fbb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Terminating instance
Oct 02 12:35:26 compute-0 nova_compute[257802]: 2025-10-02 12:35:26.241 2 DEBUG nova.compute.manager [None req-e919810b-baa6-4141-976e-8841c6188fbb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:35:26 compute-0 kernel: tap001b44c4-04 (unregistering): left promiscuous mode
Oct 02 12:35:26 compute-0 NetworkManager[44987]: <info>  [1759408526.3344] device (tap001b44c4-04): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:35:26 compute-0 ovn_controller[148183]: 2025-10-02T12:35:26Z|00609|binding|INFO|Releasing lport 001b44c4-049b-4b18-9848-d408d7dcf23f from this chassis (sb_readonly=0)
Oct 02 12:35:26 compute-0 ovn_controller[148183]: 2025-10-02T12:35:26Z|00610|binding|INFO|Setting lport 001b44c4-049b-4b18-9848-d408d7dcf23f down in Southbound
Oct 02 12:35:26 compute-0 ovn_controller[148183]: 2025-10-02T12:35:26Z|00611|binding|INFO|Removing iface tap001b44c4-04 ovn-installed in OVS
Oct 02 12:35:26 compute-0 nova_compute[257802]: 2025-10-02 12:35:26.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:26 compute-0 nova_compute[257802]: 2025-10-02 12:35:26.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:26 compute-0 nova_compute[257802]: 2025-10-02 12:35:26.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:26 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000089.scope: Deactivated successfully.
Oct 02 12:35:26 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000089.scope: Consumed 8.212s CPU time.
Oct 02 12:35:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:35:26 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2767505575' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:26 compute-0 systemd-machined[211836]: Machine qemu-69-instance-00000089 terminated.
Oct 02 12:35:26 compute-0 nova_compute[257802]: 2025-10-02 12:35:26.412 2 DEBUG oslo_concurrency.processutils [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:26 compute-0 nova_compute[257802]: 2025-10-02 12:35:26.413 2 DEBUG oslo_concurrency.processutils [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:26 compute-0 nova_compute[257802]: 2025-10-02 12:35:26.474 2 INFO nova.virt.libvirt.driver [-] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Instance destroyed successfully.
Oct 02 12:35:26 compute-0 nova_compute[257802]: 2025-10-02 12:35:26.475 2 DEBUG nova.objects.instance [None req-e919810b-baa6-4141-976e-8841c6188fbb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lazy-loading 'resources' on Instance uuid aa9070a4-cc2e-4aec-9abe-9873932ea0de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:26.498 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:8c:c2 10.100.0.3'], port_security=['fa:16:3e:63:8c:c2 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'aa9070a4-cc2e-4aec-9abe-9873932ea0de', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3b4ed8f5ff54d4cb9f232e285155ca0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fe054fa7-d7e9-483c-baf2-6f0a11c4cd59', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3fe18a99-aad3-484c-abd2-6e2374240160, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=001b44c4-049b-4b18-9848-d408d7dcf23f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:35:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:26.499 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 001b44c4-049b-4b18-9848-d408d7dcf23f in datapath 71fb4dcc-12bb-458e-9241-e19e223ca96d unbound from our chassis
Oct 02 12:35:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:26.501 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 71fb4dcc-12bb-458e-9241-e19e223ca96d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:35:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:26.503 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9ae8c233-39e7-4173-a21d-a1271b96c483]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:26.503 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d namespace which is not needed anymore
Oct 02 12:35:26 compute-0 ceph-mon[73607]: pgmap v2233: 305 pgs: 305 active+clean; 688 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 1.0 MiB/s wr, 226 op/s
Oct 02 12:35:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/671159876' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2336774228' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2767505575' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:26 compute-0 nova_compute[257802]: 2025-10-02 12:35:26.665 2 DEBUG nova.virt.libvirt.vif [None req-e919810b-baa6-4141-976e-8841c6188fbb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:35:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-27244558',display_name='tempest-MultipleCreateTestJSON-server-27244558-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-27244558-1',id=137,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:35:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c3b4ed8f5ff54d4cb9f232e285155ca0',ramdisk_id='',reservation_id='r-hd86sqju',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-822408607',owner_user_name='tempest-MultipleCreateTestJSON-822408607-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:35:19Z,user_data=None,user_id='f02a0ac23d9e44d5a6205e853818fa50',uuid=aa9070a4-cc2e-4aec-9abe-9873932ea0de,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "001b44c4-049b-4b18-9848-d408d7dcf23f", "address": "fa:16:3e:63:8c:c2", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap001b44c4-04", "ovs_interfaceid": "001b44c4-049b-4b18-9848-d408d7dcf23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:35:26 compute-0 nova_compute[257802]: 2025-10-02 12:35:26.666 2 DEBUG nova.network.os_vif_util [None req-e919810b-baa6-4141-976e-8841c6188fbb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Converting VIF {"id": "001b44c4-049b-4b18-9848-d408d7dcf23f", "address": "fa:16:3e:63:8c:c2", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap001b44c4-04", "ovs_interfaceid": "001b44c4-049b-4b18-9848-d408d7dcf23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:35:26 compute-0 nova_compute[257802]: 2025-10-02 12:35:26.667 2 DEBUG nova.network.os_vif_util [None req-e919810b-baa6-4141-976e-8841c6188fbb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:8c:c2,bridge_name='br-int',has_traffic_filtering=True,id=001b44c4-049b-4b18-9848-d408d7dcf23f,network=Network(71fb4dcc-12bb-458e-9241-e19e223ca96d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap001b44c4-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:35:26 compute-0 nova_compute[257802]: 2025-10-02 12:35:26.668 2 DEBUG os_vif [None req-e919810b-baa6-4141-976e-8841c6188fbb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:8c:c2,bridge_name='br-int',has_traffic_filtering=True,id=001b44c4-049b-4b18-9848-d408d7dcf23f,network=Network(71fb4dcc-12bb-458e-9241-e19e223ca96d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap001b44c4-04') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:35:26 compute-0 nova_compute[257802]: 2025-10-02 12:35:26.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:26 compute-0 nova_compute[257802]: 2025-10-02 12:35:26.671 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap001b44c4-04, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:26 compute-0 nova_compute[257802]: 2025-10-02 12:35:26.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:26 compute-0 nova_compute[257802]: 2025-10-02 12:35:26.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:35:26 compute-0 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[343362]: [NOTICE]   (343366) : haproxy version is 2.8.14-c23fe91
Oct 02 12:35:26 compute-0 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[343362]: [NOTICE]   (343366) : path to executable is /usr/sbin/haproxy
Oct 02 12:35:26 compute-0 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[343362]: [WARNING]  (343366) : Exiting Master process...
Oct 02 12:35:26 compute-0 nova_compute[257802]: 2025-10-02 12:35:26.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:26 compute-0 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[343362]: [ALERT]    (343366) : Current worker (343368) exited with code 143 (Terminated)
Oct 02 12:35:26 compute-0 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[343362]: [WARNING]  (343366) : All workers exited. Exiting... (0)
Oct 02 12:35:26 compute-0 nova_compute[257802]: 2025-10-02 12:35:26.680 2 INFO os_vif [None req-e919810b-baa6-4141-976e-8841c6188fbb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:8c:c2,bridge_name='br-int',has_traffic_filtering=True,id=001b44c4-049b-4b18-9848-d408d7dcf23f,network=Network(71fb4dcc-12bb-458e-9241-e19e223ca96d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap001b44c4-04')
Oct 02 12:35:26 compute-0 systemd[1]: libpod-c890fe61c1baeb63a6fdc4cf685f3ee2223505e4adc46c521634f6ddfae3c485.scope: Deactivated successfully.
Oct 02 12:35:26 compute-0 podman[343670]: 2025-10-02 12:35:26.689221844 +0000 UTC m=+0.094333149 container died c890fe61c1baeb63a6fdc4cf685f3ee2223505e4adc46c521634f6ddfae3c485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 12:35:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c890fe61c1baeb63a6fdc4cf685f3ee2223505e4adc46c521634f6ddfae3c485-userdata-shm.mount: Deactivated successfully.
Oct 02 12:35:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-99ba73474a103e78283aec271ffb8accb6bacfeeeb0ddf9e691477678ebe8d45-merged.mount: Deactivated successfully.
Oct 02 12:35:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:35:26 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1384346189' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:26 compute-0 nova_compute[257802]: 2025-10-02 12:35:26.888 2 DEBUG oslo_concurrency.processutils [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:26 compute-0 nova_compute[257802]: 2025-10-02 12:35:26.890 2 DEBUG nova.virt.libvirt.vif [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:34:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-844114970',display_name='tempest-ServerRescueNegativeTestJSON-server-844114970',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-844114970',id=132,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBC3dzJ/cmeAQeCTRVIaMi9ODTeKzsWGp+oGguk+hFSNuPm8DjZXpwH/w0EeoRUq6Hegzhnzkofu7f4IKtcBTMmXs34k+4eg8rqlyWmhp8XzQZq6+/mosGCR22msyjISyg==',key_name='tempest-keypair-1971519988',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:34:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cc4d8f857b2d42bf9ae477fc5f514216',ramdisk_id='',reservation_id='r-ejvqx4ye',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-959216005',owner_user_name='tempest-ServerRescueNegativeTestJSON-959216005-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6c932f0d0e594f00855572fbe06ee3aa',uuid=7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "address": "fa:16:3e:7f:8b:39", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "vif_mac": "fa:16:3e:7f:8b:39"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb769e14-32", "ovs_interfaceid": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:35:26 compute-0 nova_compute[257802]: 2025-10-02 12:35:26.891 2 DEBUG nova.network.os_vif_util [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Converting VIF {"id": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "address": "fa:16:3e:7f:8b:39", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "vif_mac": "fa:16:3e:7f:8b:39"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb769e14-32", "ovs_interfaceid": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:35:26 compute-0 nova_compute[257802]: 2025-10-02 12:35:26.892 2 DEBUG nova.network.os_vif_util [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7f:8b:39,bridge_name='br-int',has_traffic_filtering=True,id=fb769e14-32bb-436e-a8b2-f08e69207e0f,network=Network(e58f4ba2-c72c-42b8-acea-ca6241431726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb769e14-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:35:26 compute-0 nova_compute[257802]: 2025-10-02 12:35:26.893 2 DEBUG nova.objects.instance [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:26 compute-0 podman[343670]: 2025-10-02 12:35:26.91304846 +0000 UTC m=+0.318159765 container cleanup c890fe61c1baeb63a6fdc4cf685f3ee2223505e4adc46c521634f6ddfae3c485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 12:35:26 compute-0 systemd[1]: libpod-conmon-c890fe61c1baeb63a6fdc4cf685f3ee2223505e4adc46c521634f6ddfae3c485.scope: Deactivated successfully.
Oct 02 12:35:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:26.954 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:26.955 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:26.955 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:26 compute-0 nova_compute[257802]: 2025-10-02 12:35:26.996 2 DEBUG nova.virt.libvirt.driver [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:35:26 compute-0 nova_compute[257802]:   <uuid>7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75</uuid>
Oct 02 12:35:26 compute-0 nova_compute[257802]:   <name>instance-00000084</name>
Oct 02 12:35:26 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:35:26 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:35:26 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-844114970</nova:name>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:35:25</nova:creationTime>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:35:26 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:35:26 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:35:26 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:35:26 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:35:26 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:35:26 compute-0 nova_compute[257802]:         <nova:user uuid="6c932f0d0e594f00855572fbe06ee3aa">tempest-ServerRescueNegativeTestJSON-959216005-project-member</nova:user>
Oct 02 12:35:26 compute-0 nova_compute[257802]:         <nova:project uuid="cc4d8f857b2d42bf9ae477fc5f514216">tempest-ServerRescueNegativeTestJSON-959216005</nova:project>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:35:26 compute-0 nova_compute[257802]:         <nova:port uuid="fb769e14-32bb-436e-a8b2-f08e69207e0f">
Oct 02 12:35:26 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:35:26 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:35:26 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <system>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <entry name="serial">7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75</entry>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <entry name="uuid">7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75</entry>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     </system>
Oct 02 12:35:26 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:35:26 compute-0 nova_compute[257802]:   <os>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:   </os>
Oct 02 12:35:26 compute-0 nova_compute[257802]:   <features>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:   </features>
Oct 02 12:35:26 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:35:26 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:35:26 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75_disk.rescue">
Oct 02 12:35:26 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       </source>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:35:26 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75_disk">
Oct 02 12:35:26 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       </source>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:35:26 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <target dev="vdb" bus="virtio"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75_disk.config.rescue">
Oct 02 12:35:26 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       </source>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:35:26 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:7f:8b:39"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <target dev="tapfb769e14-32"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75/console.log" append="off"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <video>
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     </video>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:35:26 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:35:26 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:35:26 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:35:26 compute-0 nova_compute[257802]: </domain>
Oct 02 12:35:26 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:35:27 compute-0 nova_compute[257802]: 2025-10-02 12:35:27.006 2 INFO nova.virt.libvirt.driver [-] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Instance destroyed successfully.
Oct 02 12:35:27 compute-0 podman[343731]: 2025-10-02 12:35:27.092174773 +0000 UTC m=+0.158103980 container remove c890fe61c1baeb63a6fdc4cf685f3ee2223505e4adc46c521634f6ddfae3c485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:35:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:27.099 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b198dd6d-4313-4b86-9f8f-d93620a34ebf]: (4, ('Thu Oct  2 12:35:26 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d (c890fe61c1baeb63a6fdc4cf685f3ee2223505e4adc46c521634f6ddfae3c485)\nc890fe61c1baeb63a6fdc4cf685f3ee2223505e4adc46c521634f6ddfae3c485\nThu Oct  2 12:35:26 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d (c890fe61c1baeb63a6fdc4cf685f3ee2223505e4adc46c521634f6ddfae3c485)\nc890fe61c1baeb63a6fdc4cf685f3ee2223505e4adc46c521634f6ddfae3c485\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:27.100 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9fd5c16f-3353-4325-999c-c8849a9185b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:27.101 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71fb4dcc-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:27 compute-0 nova_compute[257802]: 2025-10-02 12:35:27.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:27 compute-0 kernel: tap71fb4dcc-10: left promiscuous mode
Oct 02 12:35:27 compute-0 nova_compute[257802]: 2025-10-02 12:35:27.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:27.111 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5a64646c-1216-4c44-a853-66bf2348e6fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:27.130 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[78b28fa9-a415-4b32-b6e7-559ccd239169]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:27 compute-0 nova_compute[257802]: 2025-10-02 12:35:27.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:27.131 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e074943e-c27a-4312-9963-984d4391616b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:27.146 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e63fbebc-6651-46fc-a3f5-2843975b95a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658549, 'reachable_time': 28585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343750, 'error': None, 'target': 'ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:27 compute-0 systemd[1]: run-netns-ovnmeta\x2d71fb4dcc\x2d12bb\x2d458e\x2d9241\x2de19e223ca96d.mount: Deactivated successfully.
Oct 02 12:35:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:27.149 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:35:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:27.149 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[02314f95-2ef0-4484-a6d8-c6a242b49097]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:27 compute-0 nova_compute[257802]: 2025-10-02 12:35:27.350 2 DEBUG nova.virt.libvirt.driver [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:35:27 compute-0 nova_compute[257802]: 2025-10-02 12:35:27.350 2 DEBUG nova.virt.libvirt.driver [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:35:27 compute-0 nova_compute[257802]: 2025-10-02 12:35:27.350 2 DEBUG nova.virt.libvirt.driver [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:35:27 compute-0 nova_compute[257802]: 2025-10-02 12:35:27.350 2 DEBUG nova.virt.libvirt.driver [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] No VIF found with MAC fa:16:3e:7f:8b:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:35:27 compute-0 nova_compute[257802]: 2025-10-02 12:35:27.351 2 INFO nova.virt.libvirt.driver [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Using config drive
Oct 02 12:35:27 compute-0 nova_compute[257802]: 2025-10-02 12:35:27.377 2 DEBUG nova.storage.rbd_utils [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:35:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:27.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:27 compute-0 nova_compute[257802]: 2025-10-02 12:35:27.661 2 DEBUG nova.objects.instance [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2234: 305 pgs: 305 active+clean; 688 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.0 MiB/s wr, 180 op/s
Oct 02 12:35:27 compute-0 sudo[343770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:35:27 compute-0 sudo[343770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:27 compute-0 sudo[343770]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:27 compute-0 sudo[343795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:35:27 compute-0 sudo[343795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:27 compute-0 sudo[343795]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1384346189' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:28 compute-0 nova_compute[257802]: 2025-10-02 12:35:28.100 2 DEBUG nova.objects.instance [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'keypairs' on Instance uuid 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:35:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:28.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:35:28 compute-0 nova_compute[257802]: 2025-10-02 12:35:28.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:28 compute-0 nova_compute[257802]: 2025-10-02 12:35:28.402 2 DEBUG nova.compute.manager [req-f5dab35a-1d2e-45a9-8ba5-f3093f21477a req-ed635c4f-5fd5-4828-95c3-447bfaa21732 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Received event network-vif-unplugged-001b44c4-049b-4b18-9848-d408d7dcf23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:28 compute-0 nova_compute[257802]: 2025-10-02 12:35:28.403 2 DEBUG oslo_concurrency.lockutils [req-f5dab35a-1d2e-45a9-8ba5-f3093f21477a req-ed635c4f-5fd5-4828-95c3-447bfaa21732 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "aa9070a4-cc2e-4aec-9abe-9873932ea0de-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:28 compute-0 nova_compute[257802]: 2025-10-02 12:35:28.403 2 DEBUG oslo_concurrency.lockutils [req-f5dab35a-1d2e-45a9-8ba5-f3093f21477a req-ed635c4f-5fd5-4828-95c3-447bfaa21732 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "aa9070a4-cc2e-4aec-9abe-9873932ea0de-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:28 compute-0 nova_compute[257802]: 2025-10-02 12:35:28.403 2 DEBUG oslo_concurrency.lockutils [req-f5dab35a-1d2e-45a9-8ba5-f3093f21477a req-ed635c4f-5fd5-4828-95c3-447bfaa21732 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "aa9070a4-cc2e-4aec-9abe-9873932ea0de-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:28 compute-0 nova_compute[257802]: 2025-10-02 12:35:28.403 2 DEBUG nova.compute.manager [req-f5dab35a-1d2e-45a9-8ba5-f3093f21477a req-ed635c4f-5fd5-4828-95c3-447bfaa21732 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] No waiting events found dispatching network-vif-unplugged-001b44c4-049b-4b18-9848-d408d7dcf23f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:28 compute-0 nova_compute[257802]: 2025-10-02 12:35:28.403 2 DEBUG nova.compute.manager [req-f5dab35a-1d2e-45a9-8ba5-f3093f21477a req-ed635c4f-5fd5-4828-95c3-447bfaa21732 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Received event network-vif-unplugged-001b44c4-049b-4b18-9848-d408d7dcf23f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:35:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:29 compute-0 ceph-mon[73607]: pgmap v2234: 305 pgs: 305 active+clean; 688 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.0 MiB/s wr, 180 op/s
Oct 02 12:35:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4226396856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:29.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2235: 305 pgs: 305 active+clean; 669 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 206 op/s
Oct 02 12:35:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:30.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:30 compute-0 nova_compute[257802]: 2025-10-02 12:35:30.451 2 INFO nova.virt.libvirt.driver [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Creating config drive at /var/lib/nova/instances/7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75/disk.config.rescue
Oct 02 12:35:30 compute-0 nova_compute[257802]: 2025-10-02 12:35:30.457 2 DEBUG oslo_concurrency.processutils [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbwa90vmm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:30 compute-0 nova_compute[257802]: 2025-10-02 12:35:30.990 2 DEBUG nova.compute.manager [req-0c618608-4305-410f-af6f-47d732df8bad req-8109990e-186f-48a4-8d88-66bfae8d942b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Received event network-vif-plugged-001b44c4-049b-4b18-9848-d408d7dcf23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:30 compute-0 nova_compute[257802]: 2025-10-02 12:35:30.991 2 DEBUG oslo_concurrency.lockutils [req-0c618608-4305-410f-af6f-47d732df8bad req-8109990e-186f-48a4-8d88-66bfae8d942b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "aa9070a4-cc2e-4aec-9abe-9873932ea0de-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:30 compute-0 nova_compute[257802]: 2025-10-02 12:35:30.991 2 DEBUG oslo_concurrency.lockutils [req-0c618608-4305-410f-af6f-47d732df8bad req-8109990e-186f-48a4-8d88-66bfae8d942b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "aa9070a4-cc2e-4aec-9abe-9873932ea0de-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:30 compute-0 nova_compute[257802]: 2025-10-02 12:35:30.991 2 DEBUG oslo_concurrency.lockutils [req-0c618608-4305-410f-af6f-47d732df8bad req-8109990e-186f-48a4-8d88-66bfae8d942b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "aa9070a4-cc2e-4aec-9abe-9873932ea0de-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:30 compute-0 nova_compute[257802]: 2025-10-02 12:35:30.991 2 DEBUG nova.compute.manager [req-0c618608-4305-410f-af6f-47d732df8bad req-8109990e-186f-48a4-8d88-66bfae8d942b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] No waiting events found dispatching network-vif-plugged-001b44c4-049b-4b18-9848-d408d7dcf23f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:30 compute-0 nova_compute[257802]: 2025-10-02 12:35:30.991 2 WARNING nova.compute.manager [req-0c618608-4305-410f-af6f-47d732df8bad req-8109990e-186f-48a4-8d88-66bfae8d942b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Received unexpected event network-vif-plugged-001b44c4-049b-4b18-9848-d408d7dcf23f for instance with vm_state active and task_state deleting.
Oct 02 12:35:31 compute-0 nova_compute[257802]: 2025-10-02 12:35:31.399 2 DEBUG oslo_concurrency.processutils [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbwa90vmm" returned: 0 in 0.942s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:31 compute-0 nova_compute[257802]: 2025-10-02 12:35:31.431 2 DEBUG nova.storage.rbd_utils [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:35:31 compute-0 nova_compute[257802]: 2025-10-02 12:35:31.437 2 DEBUG oslo_concurrency.processutils [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75/disk.config.rescue 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:35:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:31.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:35:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:31.583 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:35:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:31.583 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:35:31 compute-0 nova_compute[257802]: 2025-10-02 12:35:31.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:31 compute-0 nova_compute[257802]: 2025-10-02 12:35:31.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2236: 305 pgs: 305 active+clean; 645 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 199 op/s
Oct 02 12:35:31 compute-0 ceph-mon[73607]: pgmap v2235: 305 pgs: 305 active+clean; 669 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 206 op/s
Oct 02 12:35:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:32.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:32 compute-0 nova_compute[257802]: 2025-10-02 12:35:32.161 2 DEBUG oslo_concurrency.processutils [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75/disk.config.rescue 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.724s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:32 compute-0 nova_compute[257802]: 2025-10-02 12:35:32.162 2 INFO nova.virt.libvirt.driver [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Deleting local config drive /var/lib/nova/instances/7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75/disk.config.rescue because it was imported into RBD.
Oct 02 12:35:32 compute-0 kernel: tapfb769e14-32: entered promiscuous mode
Oct 02 12:35:32 compute-0 ovn_controller[148183]: 2025-10-02T12:35:32Z|00612|binding|INFO|Claiming lport fb769e14-32bb-436e-a8b2-f08e69207e0f for this chassis.
Oct 02 12:35:32 compute-0 ovn_controller[148183]: 2025-10-02T12:35:32Z|00613|binding|INFO|fb769e14-32bb-436e-a8b2-f08e69207e0f: Claiming fa:16:3e:7f:8b:39 10.100.0.13
Oct 02 12:35:32 compute-0 NetworkManager[44987]: <info>  [1759408532.2230] manager: (tapfb769e14-32): new Tun device (/org/freedesktop/NetworkManager/Devices/285)
Oct 02 12:35:32 compute-0 nova_compute[257802]: 2025-10-02 12:35:32.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:32.231 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:8b:39 10.100.0.13'], port_security=['fa:16:3e:7f:8b:39 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e58f4ba2-c72c-42b8-acea-ca6241431726', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc4d8f857b2d42bf9ae477fc5f514216', 'neutron:revision_number': '5', 'neutron:security_group_ids': '31221492-0f82-4f56-bac5-f47f9e2caefd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.186'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff6421fa-d014-4140-8a8b-1356d60478c0, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=fb769e14-32bb-436e-a8b2-f08e69207e0f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:35:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:32.232 158261 INFO neutron.agent.ovn.metadata.agent [-] Port fb769e14-32bb-436e-a8b2-f08e69207e0f in datapath e58f4ba2-c72c-42b8-acea-ca6241431726 bound to our chassis
Oct 02 12:35:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:32.233 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e58f4ba2-c72c-42b8-acea-ca6241431726
Oct 02 12:35:32 compute-0 nova_compute[257802]: 2025-10-02 12:35:32.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:32 compute-0 ovn_controller[148183]: 2025-10-02T12:35:32Z|00614|binding|INFO|Setting lport fb769e14-32bb-436e-a8b2-f08e69207e0f ovn-installed in OVS
Oct 02 12:35:32 compute-0 ovn_controller[148183]: 2025-10-02T12:35:32Z|00615|binding|INFO|Setting lport fb769e14-32bb-436e-a8b2-f08e69207e0f up in Southbound
Oct 02 12:35:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:32.254 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cde12cae-7122-457d-a17c-895465f6a239]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:32 compute-0 nova_compute[257802]: 2025-10-02 12:35:32.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:32 compute-0 systemd-udevd[343877]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:35:32 compute-0 systemd-machined[211836]: New machine qemu-70-instance-00000084.
Oct 02 12:35:32 compute-0 NetworkManager[44987]: <info>  [1759408532.2870] device (tapfb769e14-32): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:35:32 compute-0 NetworkManager[44987]: <info>  [1759408532.2883] device (tapfb769e14-32): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:35:32 compute-0 systemd[1]: Started Virtual Machine qemu-70-instance-00000084.
Oct 02 12:35:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:32.293 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[c7650305-d7c4-4a58-84d9-88ad5575e6a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:32.298 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[740a0977-154c-4027-b03e-d6fa93d67fd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:32.332 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[f34649d7-b443-444b-bd9d-4cc898163255]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:32.348 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c9a1b3fb-c3b5-4542-a5d3-09b19c2b6290]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape58f4ba2-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:ad:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647515, 'reachable_time': 42805, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343888, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:32.363 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9fa72a39-218d-46b2-a738-85daa73fad84]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape58f4ba2-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647525, 'tstamp': 647525}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343889, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape58f4ba2-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647528, 'tstamp': 647528}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343889, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:32.364 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape58f4ba2-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:32 compute-0 nova_compute[257802]: 2025-10-02 12:35:32.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:32 compute-0 nova_compute[257802]: 2025-10-02 12:35:32.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:32.369 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape58f4ba2-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:32.369 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:35:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:32.370 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape58f4ba2-c0, col_values=(('external_ids', {'iface-id': '81a5a13b-b81c-444a-8751-b35a35cdf3dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:32 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:32.370 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:35:32 compute-0 nova_compute[257802]: 2025-10-02 12:35:32.895 2 DEBUG nova.compute.manager [req-4fa87fe7-9f66-4e45-9a8d-9df7733534bb req-0481af21-fd07-482d-a37a-2834d30c2183 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Received event network-vif-plugged-fb769e14-32bb-436e-a8b2-f08e69207e0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:32 compute-0 nova_compute[257802]: 2025-10-02 12:35:32.896 2 DEBUG oslo_concurrency.lockutils [req-4fa87fe7-9f66-4e45-9a8d-9df7733534bb req-0481af21-fd07-482d-a37a-2834d30c2183 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:32 compute-0 nova_compute[257802]: 2025-10-02 12:35:32.896 2 DEBUG oslo_concurrency.lockutils [req-4fa87fe7-9f66-4e45-9a8d-9df7733534bb req-0481af21-fd07-482d-a37a-2834d30c2183 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:32 compute-0 nova_compute[257802]: 2025-10-02 12:35:32.896 2 DEBUG oslo_concurrency.lockutils [req-4fa87fe7-9f66-4e45-9a8d-9df7733534bb req-0481af21-fd07-482d-a37a-2834d30c2183 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:32 compute-0 nova_compute[257802]: 2025-10-02 12:35:32.897 2 DEBUG nova.compute.manager [req-4fa87fe7-9f66-4e45-9a8d-9df7733534bb req-0481af21-fd07-482d-a37a-2834d30c2183 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] No waiting events found dispatching network-vif-plugged-fb769e14-32bb-436e-a8b2-f08e69207e0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:32 compute-0 nova_compute[257802]: 2025-10-02 12:35:32.897 2 WARNING nova.compute.manager [req-4fa87fe7-9f66-4e45-9a8d-9df7733534bb req-0481af21-fd07-482d-a37a-2834d30c2183 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Received unexpected event network-vif-plugged-fb769e14-32bb-436e-a8b2-f08e69207e0f for instance with vm_state active and task_state rescuing.
Oct 02 12:35:33 compute-0 nova_compute[257802]: 2025-10-02 12:35:33.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:33 compute-0 nova_compute[257802]: 2025-10-02 12:35:33.427 2 INFO nova.virt.libvirt.driver [None req-e919810b-baa6-4141-976e-8841c6188fbb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Deleting instance files /var/lib/nova/instances/aa9070a4-cc2e-4aec-9abe-9873932ea0de_del
Oct 02 12:35:33 compute-0 nova_compute[257802]: 2025-10-02 12:35:33.428 2 INFO nova.virt.libvirt.driver [None req-e919810b-baa6-4141-976e-8841c6188fbb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Deletion of /var/lib/nova/instances/aa9070a4-cc2e-4aec-9abe-9873932ea0de_del complete
Oct 02 12:35:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:35:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:33.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:35:33 compute-0 nova_compute[257802]: 2025-10-02 12:35:33.663 2 INFO nova.compute.manager [None req-e919810b-baa6-4141-976e-8841c6188fbb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Took 7.42 seconds to destroy the instance on the hypervisor.
Oct 02 12:35:33 compute-0 nova_compute[257802]: 2025-10-02 12:35:33.664 2 DEBUG oslo.service.loopingcall [None req-e919810b-baa6-4141-976e-8841c6188fbb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:35:33 compute-0 nova_compute[257802]: 2025-10-02 12:35:33.664 2 DEBUG nova.compute.manager [-] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:35:33 compute-0 nova_compute[257802]: 2025-10-02 12:35:33.664 2 DEBUG nova.network.neutron [-] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:35:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2237: 305 pgs: 305 active+clean; 622 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 196 op/s
Oct 02 12:35:33 compute-0 nova_compute[257802]: 2025-10-02 12:35:33.861 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Removed pending event for 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:35:33 compute-0 nova_compute[257802]: 2025-10-02 12:35:33.862 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408533.8611116, 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:35:33 compute-0 nova_compute[257802]: 2025-10-02 12:35:33.862 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] VM Resumed (Lifecycle Event)
Oct 02 12:35:33 compute-0 nova_compute[257802]: 2025-10-02 12:35:33.865 2 DEBUG nova.compute.manager [None req-f43e9ab8-0a0d-40f9-9d24-18fc790baef3 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:35:33 compute-0 ceph-mon[73607]: pgmap v2236: 305 pgs: 305 active+clean; 645 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 199 op/s
Oct 02 12:35:34 compute-0 nova_compute[257802]: 2025-10-02 12:35:34.023 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:35:34 compute-0 nova_compute[257802]: 2025-10-02 12:35:34.027 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:35:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:34.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:34 compute-0 nova_compute[257802]: 2025-10-02 12:35:34.623 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408533.86235, 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:35:34 compute-0 nova_compute[257802]: 2025-10-02 12:35:34.623 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] VM Started (Lifecycle Event)
Oct 02 12:35:34 compute-0 nova_compute[257802]: 2025-10-02 12:35:34.865 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:35:34 compute-0 nova_compute[257802]: 2025-10-02 12:35:34.868 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:35:35 compute-0 ceph-mon[73607]: pgmap v2237: 305 pgs: 305 active+clean; 622 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 196 op/s
Oct 02 12:35:35 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2685851625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:35 compute-0 nova_compute[257802]: 2025-10-02 12:35:35.394 2 DEBUG nova.compute.manager [req-22f920a9-acd3-4168-ad1a-0c19cdc1eccf req-c7e1ed87-b0b1-4002-ae17-cd1a7de858b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Received event network-vif-plugged-fb769e14-32bb-436e-a8b2-f08e69207e0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:35 compute-0 nova_compute[257802]: 2025-10-02 12:35:35.395 2 DEBUG oslo_concurrency.lockutils [req-22f920a9-acd3-4168-ad1a-0c19cdc1eccf req-c7e1ed87-b0b1-4002-ae17-cd1a7de858b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:35 compute-0 nova_compute[257802]: 2025-10-02 12:35:35.396 2 DEBUG oslo_concurrency.lockutils [req-22f920a9-acd3-4168-ad1a-0c19cdc1eccf req-c7e1ed87-b0b1-4002-ae17-cd1a7de858b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:35 compute-0 nova_compute[257802]: 2025-10-02 12:35:35.397 2 DEBUG oslo_concurrency.lockutils [req-22f920a9-acd3-4168-ad1a-0c19cdc1eccf req-c7e1ed87-b0b1-4002-ae17-cd1a7de858b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:35 compute-0 nova_compute[257802]: 2025-10-02 12:35:35.397 2 DEBUG nova.compute.manager [req-22f920a9-acd3-4168-ad1a-0c19cdc1eccf req-c7e1ed87-b0b1-4002-ae17-cd1a7de858b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] No waiting events found dispatching network-vif-plugged-fb769e14-32bb-436e-a8b2-f08e69207e0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:35 compute-0 nova_compute[257802]: 2025-10-02 12:35:35.398 2 WARNING nova.compute.manager [req-22f920a9-acd3-4168-ad1a-0c19cdc1eccf req-c7e1ed87-b0b1-4002-ae17-cd1a7de858b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Received unexpected event network-vif-plugged-fb769e14-32bb-436e-a8b2-f08e69207e0f for instance with vm_state rescued and task_state None.
Oct 02 12:35:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:35.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2238: 305 pgs: 305 active+clean; 614 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 133 op/s
Oct 02 12:35:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:36.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:36 compute-0 nova_compute[257802]: 2025-10-02 12:35:36.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:36 compute-0 ceph-mon[73607]: pgmap v2238: 305 pgs: 305 active+clean; 614 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 133 op/s
Oct 02 12:35:37 compute-0 nova_compute[257802]: 2025-10-02 12:35:37.335 2 DEBUG nova.network.neutron [-] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:35:37 compute-0 nova_compute[257802]: 2025-10-02 12:35:37.443 2 DEBUG nova.compute.manager [req-bedb83a4-a981-4783-a0a9-07366575d19b req-9a65a73c-38df-4b79-80a4-a8081df5dabf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Received event network-vif-deleted-001b44c4-049b-4b18-9848-d408d7dcf23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:37 compute-0 nova_compute[257802]: 2025-10-02 12:35:37.443 2 INFO nova.compute.manager [req-bedb83a4-a981-4783-a0a9-07366575d19b req-9a65a73c-38df-4b79-80a4-a8081df5dabf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Neutron deleted interface 001b44c4-049b-4b18-9848-d408d7dcf23f; detaching it from the instance and deleting it from the info cache
Oct 02 12:35:37 compute-0 nova_compute[257802]: 2025-10-02 12:35:37.444 2 DEBUG nova.network.neutron [req-bedb83a4-a981-4783-a0a9-07366575d19b req-9a65a73c-38df-4b79-80a4-a8081df5dabf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:35:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:37.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:37 compute-0 nova_compute[257802]: 2025-10-02 12:35:37.536 2 INFO nova.compute.manager [-] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Took 3.87 seconds to deallocate network for instance.
Oct 02 12:35:37 compute-0 nova_compute[257802]: 2025-10-02 12:35:37.582 2 DEBUG nova.compute.manager [req-bedb83a4-a981-4783-a0a9-07366575d19b req-9a65a73c-38df-4b79-80a4-a8081df5dabf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Detach interface failed, port_id=001b44c4-049b-4b18-9848-d408d7dcf23f, reason: Instance aa9070a4-cc2e-4aec-9abe-9873932ea0de could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:35:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2239: 305 pgs: 305 active+clean; 614 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 418 KiB/s rd, 836 KiB/s wr, 82 op/s
Oct 02 12:35:37 compute-0 nova_compute[257802]: 2025-10-02 12:35:37.800 2 DEBUG oslo_concurrency.lockutils [None req-e919810b-baa6-4141-976e-8841c6188fbb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:37 compute-0 nova_compute[257802]: 2025-10-02 12:35:37.800 2 DEBUG oslo_concurrency.lockutils [None req-e919810b-baa6-4141-976e-8841c6188fbb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:37 compute-0 nova_compute[257802]: 2025-10-02 12:35:37.964 2 DEBUG oslo_concurrency.processutils [None req-e919810b-baa6-4141-976e-8841c6188fbb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:35:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:38.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:35:38 compute-0 nova_compute[257802]: 2025-10-02 12:35:38.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:35:38 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2685329264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:38 compute-0 nova_compute[257802]: 2025-10-02 12:35:38.422 2 DEBUG oslo_concurrency.processutils [None req-e919810b-baa6-4141-976e-8841c6188fbb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:38 compute-0 nova_compute[257802]: 2025-10-02 12:35:38.427 2 DEBUG nova.compute.provider_tree [None req-e919810b-baa6-4141-976e-8841c6188fbb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:35:38 compute-0 nova_compute[257802]: 2025-10-02 12:35:38.732 2 DEBUG nova.scheduler.client.report [None req-e919810b-baa6-4141-976e-8841c6188fbb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:35:38 compute-0 nova_compute[257802]: 2025-10-02 12:35:38.875 2 INFO nova.compute.manager [None req-875ddc6f-fe1c-4299-8680-b29adc0af3c4 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Unrescuing
Oct 02 12:35:38 compute-0 nova_compute[257802]: 2025-10-02 12:35:38.877 2 DEBUG oslo_concurrency.lockutils [None req-875ddc6f-fe1c-4299-8680-b29adc0af3c4 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "refresh_cache-7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:35:38 compute-0 nova_compute[257802]: 2025-10-02 12:35:38.878 2 DEBUG oslo_concurrency.lockutils [None req-875ddc6f-fe1c-4299-8680-b29adc0af3c4 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquired lock "refresh_cache-7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:35:38 compute-0 nova_compute[257802]: 2025-10-02 12:35:38.879 2 DEBUG nova.network.neutron [None req-875ddc6f-fe1c-4299-8680-b29adc0af3c4 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:35:39 compute-0 nova_compute[257802]: 2025-10-02 12:35:39.040 2 DEBUG oslo_concurrency.lockutils [None req-e919810b-baa6-4141-976e-8841c6188fbb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.240s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:39 compute-0 nova_compute[257802]: 2025-10-02 12:35:39.205 2 INFO nova.scheduler.client.report [None req-e919810b-baa6-4141-976e-8841c6188fbb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Deleted allocations for instance aa9070a4-cc2e-4aec-9abe-9873932ea0de
Oct 02 12:35:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:39.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:39 compute-0 ceph-mon[73607]: pgmap v2239: 305 pgs: 305 active+clean; 614 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 418 KiB/s rd, 836 KiB/s wr, 82 op/s
Oct 02 12:35:39 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2685329264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:39 compute-0 nova_compute[257802]: 2025-10-02 12:35:39.635 2 DEBUG oslo_concurrency.lockutils [None req-e919810b-baa6-4141-976e-8841c6188fbb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "aa9070a4-cc2e-4aec-9abe-9873932ea0de" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 13.396s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2240: 305 pgs: 305 active+clean; 630 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.5 MiB/s wr, 189 op/s
Oct 02 12:35:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:35:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:40.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:35:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:40.585 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:41 compute-0 nova_compute[257802]: 2025-10-02 12:35:41.268 2 DEBUG nova.network.neutron [None req-875ddc6f-fe1c-4299-8680-b29adc0af3c4 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Updating instance_info_cache with network_info: [{"id": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "address": "fa:16:3e:7f:8b:39", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb769e14-32", "ovs_interfaceid": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:35:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:41.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:41 compute-0 nova_compute[257802]: 2025-10-02 12:35:41.473 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408526.4720201, aa9070a4-cc2e-4aec-9abe-9873932ea0de => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:35:41 compute-0 nova_compute[257802]: 2025-10-02 12:35:41.474 2 INFO nova.compute.manager [-] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] VM Stopped (Lifecycle Event)
Oct 02 12:35:41 compute-0 nova_compute[257802]: 2025-10-02 12:35:41.489 2 DEBUG oslo_concurrency.lockutils [None req-875ddc6f-fe1c-4299-8680-b29adc0af3c4 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Releasing lock "refresh_cache-7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:35:41 compute-0 nova_compute[257802]: 2025-10-02 12:35:41.490 2 DEBUG nova.objects.instance [None req-875ddc6f-fe1c-4299-8680-b29adc0af3c4 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'flavor' on Instance uuid 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:41 compute-0 nova_compute[257802]: 2025-10-02 12:35:41.596 2 DEBUG nova.compute.manager [None req-10bb92af-69e9-4794-bb2e-f291bd7c4188 - - - - - -] [instance: aa9070a4-cc2e-4aec-9abe-9873932ea0de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:35:41 compute-0 nova_compute[257802]: 2025-10-02 12:35:41.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2241: 305 pgs: 305 active+clean; 630 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 666 KiB/s wr, 162 op/s
Oct 02 12:35:41 compute-0 kernel: tapfb769e14-32 (unregistering): left promiscuous mode
Oct 02 12:35:41 compute-0 NetworkManager[44987]: <info>  [1759408541.8677] device (tapfb769e14-32): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:35:41 compute-0 ovn_controller[148183]: 2025-10-02T12:35:41Z|00616|binding|INFO|Releasing lport fb769e14-32bb-436e-a8b2-f08e69207e0f from this chassis (sb_readonly=0)
Oct 02 12:35:41 compute-0 ovn_controller[148183]: 2025-10-02T12:35:41Z|00617|binding|INFO|Setting lport fb769e14-32bb-436e-a8b2-f08e69207e0f down in Southbound
Oct 02 12:35:41 compute-0 ovn_controller[148183]: 2025-10-02T12:35:41Z|00618|binding|INFO|Removing iface tapfb769e14-32 ovn-installed in OVS
Oct 02 12:35:41 compute-0 nova_compute[257802]: 2025-10-02 12:35:41.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:41 compute-0 nova_compute[257802]: 2025-10-02 12:35:41.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:41.895 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:8b:39 10.100.0.13'], port_security=['fa:16:3e:7f:8b:39 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e58f4ba2-c72c-42b8-acea-ca6241431726', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc4d8f857b2d42bf9ae477fc5f514216', 'neutron:revision_number': '6', 'neutron:security_group_ids': '31221492-0f82-4f56-bac5-f47f9e2caefd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.186', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff6421fa-d014-4140-8a8b-1356d60478c0, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=fb769e14-32bb-436e-a8b2-f08e69207e0f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:35:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:41.896 158261 INFO neutron.agent.ovn.metadata.agent [-] Port fb769e14-32bb-436e-a8b2-f08e69207e0f in datapath e58f4ba2-c72c-42b8-acea-ca6241431726 unbound from our chassis
Oct 02 12:35:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:41.898 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e58f4ba2-c72c-42b8-acea-ca6241431726
Oct 02 12:35:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:41.912 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[bb567895-e9a4-43ea-a1aa-2a582beeb000]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:41 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000084.scope: Deactivated successfully.
Oct 02 12:35:41 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000084.scope: Consumed 9.096s CPU time.
Oct 02 12:35:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:41.941 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[b186a90c-a874-495d-8e4a-211dbe7d9270]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:41 compute-0 systemd-machined[211836]: Machine qemu-70-instance-00000084 terminated.
Oct 02 12:35:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:41.944 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[d18dc69e-ac48-4a7c-a89b-b2900e672108]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:41.970 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[da9cb9b3-5657-4877-b9b5-b8cec75d402a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:41.989 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f72178-cbdb-4842-97e3-e1b60ea2d3ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape58f4ba2-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:ad:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647515, 'reachable_time': 42805, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343989, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:42.002 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ecd29165-7353-40cd-a14a-1feb332a6630]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape58f4ba2-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647525, 'tstamp': 647525}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343990, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape58f4ba2-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647528, 'tstamp': 647528}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343990, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:42.003 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape58f4ba2-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:42 compute-0 nova_compute[257802]: 2025-10-02 12:35:42.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:42 compute-0 nova_compute[257802]: 2025-10-02 12:35:42.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:42.009 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape58f4ba2-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:42.009 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:35:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:42.010 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape58f4ba2-c0, col_values=(('external_ids', {'iface-id': '81a5a13b-b81c-444a-8751-b35a35cdf3dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:42.010 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:35:42 compute-0 nova_compute[257802]: 2025-10-02 12:35:42.149 2 INFO nova.virt.libvirt.driver [-] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Instance destroyed successfully.
Oct 02 12:35:42 compute-0 nova_compute[257802]: 2025-10-02 12:35:42.150 2 DEBUG nova.objects.instance [None req-875ddc6f-fe1c-4299-8680-b29adc0af3c4 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'numa_topology' on Instance uuid 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:35:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:42.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:35:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:35:42
Oct 02 12:35:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:35:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:35:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'backups', '.mgr', 'default.rgw.control', '.rgw.root']
Oct 02 12:35:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:35:42 compute-0 nova_compute[257802]: 2025-10-02 12:35:42.486 2 DEBUG nova.compute.manager [req-1baf0993-3699-4ac5-944b-64081c96c179 req-b7ca85c8-7ac7-429f-ac48-3b495a6c3048 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Received event network-vif-unplugged-fb769e14-32bb-436e-a8b2-f08e69207e0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:42 compute-0 nova_compute[257802]: 2025-10-02 12:35:42.486 2 DEBUG oslo_concurrency.lockutils [req-1baf0993-3699-4ac5-944b-64081c96c179 req-b7ca85c8-7ac7-429f-ac48-3b495a6c3048 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:42 compute-0 nova_compute[257802]: 2025-10-02 12:35:42.487 2 DEBUG oslo_concurrency.lockutils [req-1baf0993-3699-4ac5-944b-64081c96c179 req-b7ca85c8-7ac7-429f-ac48-3b495a6c3048 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:42 compute-0 nova_compute[257802]: 2025-10-02 12:35:42.487 2 DEBUG oslo_concurrency.lockutils [req-1baf0993-3699-4ac5-944b-64081c96c179 req-b7ca85c8-7ac7-429f-ac48-3b495a6c3048 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:42 compute-0 nova_compute[257802]: 2025-10-02 12:35:42.487 2 DEBUG nova.compute.manager [req-1baf0993-3699-4ac5-944b-64081c96c179 req-b7ca85c8-7ac7-429f-ac48-3b495a6c3048 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] No waiting events found dispatching network-vif-unplugged-fb769e14-32bb-436e-a8b2-f08e69207e0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:42 compute-0 nova_compute[257802]: 2025-10-02 12:35:42.488 2 WARNING nova.compute.manager [req-1baf0993-3699-4ac5-944b-64081c96c179 req-b7ca85c8-7ac7-429f-ac48-3b495a6c3048 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Received unexpected event network-vif-unplugged-fb769e14-32bb-436e-a8b2-f08e69207e0f for instance with vm_state rescued and task_state unrescuing.
Oct 02 12:35:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:35:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:35:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:35:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:35:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:35:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:35:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:35:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:35:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:35:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:35:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:35:43 compute-0 kernel: tapfb769e14-32: entered promiscuous mode
Oct 02 12:35:43 compute-0 NetworkManager[44987]: <info>  [1759408543.1997] manager: (tapfb769e14-32): new Tun device (/org/freedesktop/NetworkManager/Devices/286)
Oct 02 12:35:43 compute-0 systemd-udevd[343981]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:35:43 compute-0 ovn_controller[148183]: 2025-10-02T12:35:43Z|00619|binding|INFO|Claiming lport fb769e14-32bb-436e-a8b2-f08e69207e0f for this chassis.
Oct 02 12:35:43 compute-0 ovn_controller[148183]: 2025-10-02T12:35:43Z|00620|binding|INFO|fb769e14-32bb-436e-a8b2-f08e69207e0f: Claiming fa:16:3e:7f:8b:39 10.100.0.13
Oct 02 12:35:43 compute-0 nova_compute[257802]: 2025-10-02 12:35:43.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:43 compute-0 NetworkManager[44987]: <info>  [1759408543.2223] device (tapfb769e14-32): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:35:43 compute-0 NetworkManager[44987]: <info>  [1759408543.2235] device (tapfb769e14-32): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:35:43 compute-0 ovn_controller[148183]: 2025-10-02T12:35:43Z|00621|binding|INFO|Setting lport fb769e14-32bb-436e-a8b2-f08e69207e0f ovn-installed in OVS
Oct 02 12:35:43 compute-0 nova_compute[257802]: 2025-10-02 12:35:43.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:43 compute-0 nova_compute[257802]: 2025-10-02 12:35:43.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:43 compute-0 ovn_controller[148183]: 2025-10-02T12:35:43Z|00622|binding|INFO|Setting lport fb769e14-32bb-436e-a8b2-f08e69207e0f up in Southbound
Oct 02 12:35:43 compute-0 systemd-machined[211836]: New machine qemu-71-instance-00000084.
Oct 02 12:35:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:43.239 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:8b:39 10.100.0.13'], port_security=['fa:16:3e:7f:8b:39 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e58f4ba2-c72c-42b8-acea-ca6241431726', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc4d8f857b2d42bf9ae477fc5f514216', 'neutron:revision_number': '7', 'neutron:security_group_ids': '31221492-0f82-4f56-bac5-f47f9e2caefd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.186', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff6421fa-d014-4140-8a8b-1356d60478c0, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=fb769e14-32bb-436e-a8b2-f08e69207e0f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:35:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:43.241 158261 INFO neutron.agent.ovn.metadata.agent [-] Port fb769e14-32bb-436e-a8b2-f08e69207e0f in datapath e58f4ba2-c72c-42b8-acea-ca6241431726 bound to our chassis
Oct 02 12:35:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:43.244 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e58f4ba2-c72c-42b8-acea-ca6241431726
Oct 02 12:35:43 compute-0 systemd[1]: Started Virtual Machine qemu-71-instance-00000084.
Oct 02 12:35:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:43.260 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f7a6ec59-5902-400e-9130-40fde03fca4c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:43.292 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[ad0a79b7-1561-4bcc-85e9-6313f1af196b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:43.295 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[5a56715c-5cef-456f-8d2f-a0664cd70562]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:43.334 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a941c8-29c2-4ddf-aefb-7ce02ec4f20c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:43.351 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[33f4a6db-ae0a-4876-a800-21dc561dbdb3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape58f4ba2-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:ad:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 1000, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 1000, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647515, 'reachable_time': 42805, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344030, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:43 compute-0 nova_compute[257802]: 2025-10-02 12:35:43.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:43.367 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[820ad739-7dfc-4601-9bfa-99fe19953719]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape58f4ba2-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647525, 'tstamp': 647525}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344031, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape58f4ba2-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647528, 'tstamp': 647528}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344031, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:43.369 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape58f4ba2-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:43 compute-0 nova_compute[257802]: 2025-10-02 12:35:43.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:43 compute-0 nova_compute[257802]: 2025-10-02 12:35:43.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:43.372 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape58f4ba2-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:43.372 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:35:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:43.373 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape58f4ba2-c0, col_values=(('external_ids', {'iface-id': '81a5a13b-b81c-444a-8751-b35a35cdf3dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:35:43.373 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:35:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:35:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:35:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:43.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2242: 305 pgs: 305 active+clean; 682 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.0 MiB/s wr, 176 op/s
Oct 02 12:35:44 compute-0 ceph-mon[73607]: pgmap v2240: 305 pgs: 305 active+clean; 630 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.5 MiB/s wr, 189 op/s
Oct 02 12:35:44 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3987054870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:44 compute-0 ceph-mon[73607]: pgmap v2241: 305 pgs: 305 active+clean; 630 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 666 KiB/s wr, 162 op/s
Oct 02 12:35:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:35:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:35:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:35:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:35:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:44.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:35:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:44 compute-0 nova_compute[257802]: 2025-10-02 12:35:44.609 2 DEBUG nova.compute.manager [req-815b538d-b99b-418d-a21d-062fdd3cb8f1 req-31c8423f-8116-47f0-9ddd-5469e72c982a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Received event network-vif-plugged-fb769e14-32bb-436e-a8b2-f08e69207e0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:44 compute-0 nova_compute[257802]: 2025-10-02 12:35:44.610 2 DEBUG oslo_concurrency.lockutils [req-815b538d-b99b-418d-a21d-062fdd3cb8f1 req-31c8423f-8116-47f0-9ddd-5469e72c982a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:44 compute-0 nova_compute[257802]: 2025-10-02 12:35:44.610 2 DEBUG oslo_concurrency.lockutils [req-815b538d-b99b-418d-a21d-062fdd3cb8f1 req-31c8423f-8116-47f0-9ddd-5469e72c982a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:44 compute-0 nova_compute[257802]: 2025-10-02 12:35:44.610 2 DEBUG oslo_concurrency.lockutils [req-815b538d-b99b-418d-a21d-062fdd3cb8f1 req-31c8423f-8116-47f0-9ddd-5469e72c982a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:44 compute-0 nova_compute[257802]: 2025-10-02 12:35:44.610 2 DEBUG nova.compute.manager [req-815b538d-b99b-418d-a21d-062fdd3cb8f1 req-31c8423f-8116-47f0-9ddd-5469e72c982a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] No waiting events found dispatching network-vif-plugged-fb769e14-32bb-436e-a8b2-f08e69207e0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:44 compute-0 nova_compute[257802]: 2025-10-02 12:35:44.610 2 WARNING nova.compute.manager [req-815b538d-b99b-418d-a21d-062fdd3cb8f1 req-31c8423f-8116-47f0-9ddd-5469e72c982a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Received unexpected event network-vif-plugged-fb769e14-32bb-436e-a8b2-f08e69207e0f for instance with vm_state rescued and task_state unrescuing.
Oct 02 12:35:44 compute-0 nova_compute[257802]: 2025-10-02 12:35:44.611 2 DEBUG nova.compute.manager [req-815b538d-b99b-418d-a21d-062fdd3cb8f1 req-31c8423f-8116-47f0-9ddd-5469e72c982a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Received event network-vif-plugged-fb769e14-32bb-436e-a8b2-f08e69207e0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:44 compute-0 nova_compute[257802]: 2025-10-02 12:35:44.611 2 DEBUG oslo_concurrency.lockutils [req-815b538d-b99b-418d-a21d-062fdd3cb8f1 req-31c8423f-8116-47f0-9ddd-5469e72c982a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:44 compute-0 nova_compute[257802]: 2025-10-02 12:35:44.611 2 DEBUG oslo_concurrency.lockutils [req-815b538d-b99b-418d-a21d-062fdd3cb8f1 req-31c8423f-8116-47f0-9ddd-5469e72c982a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:44 compute-0 nova_compute[257802]: 2025-10-02 12:35:44.611 2 DEBUG oslo_concurrency.lockutils [req-815b538d-b99b-418d-a21d-062fdd3cb8f1 req-31c8423f-8116-47f0-9ddd-5469e72c982a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:44 compute-0 nova_compute[257802]: 2025-10-02 12:35:44.611 2 DEBUG nova.compute.manager [req-815b538d-b99b-418d-a21d-062fdd3cb8f1 req-31c8423f-8116-47f0-9ddd-5469e72c982a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] No waiting events found dispatching network-vif-plugged-fb769e14-32bb-436e-a8b2-f08e69207e0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:44 compute-0 nova_compute[257802]: 2025-10-02 12:35:44.611 2 WARNING nova.compute.manager [req-815b538d-b99b-418d-a21d-062fdd3cb8f1 req-31c8423f-8116-47f0-9ddd-5469e72c982a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Received unexpected event network-vif-plugged-fb769e14-32bb-436e-a8b2-f08e69207e0f for instance with vm_state rescued and task_state unrescuing.
Oct 02 12:35:44 compute-0 nova_compute[257802]: 2025-10-02 12:35:44.612 2 DEBUG nova.compute.manager [req-815b538d-b99b-418d-a21d-062fdd3cb8f1 req-31c8423f-8116-47f0-9ddd-5469e72c982a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Received event network-vif-plugged-fb769e14-32bb-436e-a8b2-f08e69207e0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:44 compute-0 nova_compute[257802]: 2025-10-02 12:35:44.612 2 DEBUG oslo_concurrency.lockutils [req-815b538d-b99b-418d-a21d-062fdd3cb8f1 req-31c8423f-8116-47f0-9ddd-5469e72c982a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:44 compute-0 nova_compute[257802]: 2025-10-02 12:35:44.612 2 DEBUG oslo_concurrency.lockutils [req-815b538d-b99b-418d-a21d-062fdd3cb8f1 req-31c8423f-8116-47f0-9ddd-5469e72c982a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:44 compute-0 nova_compute[257802]: 2025-10-02 12:35:44.612 2 DEBUG oslo_concurrency.lockutils [req-815b538d-b99b-418d-a21d-062fdd3cb8f1 req-31c8423f-8116-47f0-9ddd-5469e72c982a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:44 compute-0 nova_compute[257802]: 2025-10-02 12:35:44.612 2 DEBUG nova.compute.manager [req-815b538d-b99b-418d-a21d-062fdd3cb8f1 req-31c8423f-8116-47f0-9ddd-5469e72c982a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] No waiting events found dispatching network-vif-plugged-fb769e14-32bb-436e-a8b2-f08e69207e0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:44 compute-0 nova_compute[257802]: 2025-10-02 12:35:44.613 2 WARNING nova.compute.manager [req-815b538d-b99b-418d-a21d-062fdd3cb8f1 req-31c8423f-8116-47f0-9ddd-5469e72c982a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Received unexpected event network-vif-plugged-fb769e14-32bb-436e-a8b2-f08e69207e0f for instance with vm_state rescued and task_state unrescuing.
Oct 02 12:35:45 compute-0 nova_compute[257802]: 2025-10-02 12:35:45.099 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Removed pending event for 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:35:45 compute-0 nova_compute[257802]: 2025-10-02 12:35:45.100 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408545.0984423, 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:35:45 compute-0 nova_compute[257802]: 2025-10-02 12:35:45.100 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] VM Resumed (Lifecycle Event)
Oct 02 12:35:45 compute-0 ceph-mon[73607]: pgmap v2242: 305 pgs: 305 active+clean; 682 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.0 MiB/s wr, 176 op/s
Oct 02 12:35:45 compute-0 nova_compute[257802]: 2025-10-02 12:35:45.179 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:35:45 compute-0 nova_compute[257802]: 2025-10-02 12:35:45.184 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:35:45 compute-0 nova_compute[257802]: 2025-10-02 12:35:45.387 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] During sync_power_state the instance has a pending task (unrescuing). Skip.
Oct 02 12:35:45 compute-0 nova_compute[257802]: 2025-10-02 12:35:45.387 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408545.0992203, 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:35:45 compute-0 nova_compute[257802]: 2025-10-02 12:35:45.387 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] VM Started (Lifecycle Event)
Oct 02 12:35:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:45.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:45 compute-0 nova_compute[257802]: 2025-10-02 12:35:45.514 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:35:45 compute-0 nova_compute[257802]: 2025-10-02 12:35:45.517 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:35:45 compute-0 nova_compute[257802]: 2025-10-02 12:35:45.622 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] During sync_power_state the instance has a pending task (unrescuing). Skip.
Oct 02 12:35:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2243: 305 pgs: 305 active+clean; 730 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 5.7 MiB/s wr, 172 op/s
Oct 02 12:35:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:46.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1047015636' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2897118727' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:46 compute-0 nova_compute[257802]: 2025-10-02 12:35:46.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:47.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2244: 305 pgs: 305 active+clean; 730 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.6 MiB/s wr, 145 op/s
Oct 02 12:35:47 compute-0 ceph-mon[73607]: pgmap v2243: 305 pgs: 305 active+clean; 730 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 5.7 MiB/s wr, 172 op/s
Oct 02 12:35:48 compute-0 sudo[344113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:35:48 compute-0 sudo[344113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:48 compute-0 sudo[344113]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:48 compute-0 sudo[344138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:35:48 compute-0 sudo[344138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:48 compute-0 sudo[344138]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:35:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:48.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:35:48 compute-0 nova_compute[257802]: 2025-10-02 12:35:48.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:48 compute-0 nova_compute[257802]: 2025-10-02 12:35:48.716 2 DEBUG nova.compute.manager [None req-875ddc6f-fe1c-4299-8680-b29adc0af3c4 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:35:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:49.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2245: 305 pgs: 305 active+clean; 711 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 5.7 MiB/s wr, 259 op/s
Oct 02 12:35:49 compute-0 ceph-mon[73607]: pgmap v2244: 305 pgs: 305 active+clean; 730 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.6 MiB/s wr, 145 op/s
Oct 02 12:35:49 compute-0 ovn_controller[148183]: 2025-10-02T12:35:49Z|00623|binding|INFO|Releasing lport 81a5a13b-b81c-444a-8751-b35a35cdf3dc from this chassis (sb_readonly=0)
Oct 02 12:35:49 compute-0 nova_compute[257802]: 2025-10-02 12:35:49.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:49 compute-0 podman[344164]: 2025-10-02 12:35:49.962241306 +0000 UTC m=+0.065069633 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:35:49 compute-0 podman[344165]: 2025-10-02 12:35:49.966679115 +0000 UTC m=+0.069480432 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 12:35:49 compute-0 podman[344166]: 2025-10-02 12:35:49.969687088 +0000 UTC m=+0.067426411 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, config_id=iscsid)
Oct 02 12:35:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:50.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:51.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:51 compute-0 nova_compute[257802]: 2025-10-02 12:35:51.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2246: 305 pgs: 305 active+clean; 711 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 5.0 MiB/s wr, 153 op/s
Oct 02 12:35:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:35:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:52.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:35:52 compute-0 ceph-mon[73607]: pgmap v2245: 305 pgs: 305 active+clean; 711 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 5.7 MiB/s wr, 259 op/s
Oct 02 12:35:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2869866938' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:53 compute-0 nova_compute[257802]: 2025-10-02 12:35:53.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:53.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2247: 305 pgs: 305 active+clean; 693 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 5.0 MiB/s wr, 191 op/s
Oct 02 12:35:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e329 do_prune osdmap full prune enabled
Oct 02 12:35:54 compute-0 ceph-mon[73607]: pgmap v2246: 305 pgs: 305 active+clean; 711 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 5.0 MiB/s wr, 153 op/s
Oct 02 12:35:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3010537431' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:54.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:35:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:35:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:35:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:35:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.012844895775172158 of space, bias 1.0, pg target 3.8534687325516472 quantized to 32 (current 32)
Oct 02 12:35:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:35:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002162686988181649 of space, bias 1.0, pg target 0.6423180354899497 quantized to 32 (current 32)
Oct 02 12:35:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:35:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:35:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:35:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0040689195398287735 of space, bias 1.0, pg target 1.2084691033291457 quantized to 32 (current 32)
Oct 02 12:35:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:35:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Oct 02 12:35:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:35:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:35:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:35:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.0002689954401637819 quantized to 32 (current 32)
Oct 02 12:35:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:35:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Oct 02 12:35:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:35:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:35:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:35:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Oct 02 12:35:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e330 e330: 3 total, 3 up, 3 in
Oct 02 12:35:54 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e330: 3 total, 3 up, 3 in
Oct 02 12:35:54 compute-0 podman[344219]: 2025-10-02 12:35:54.971954715 +0000 UTC m=+0.100257945 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:35:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:35:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2662473630' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:35:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:35:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2662473630' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:35:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:35:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:55.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:35:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2249: 305 pgs: 305 active+clean; 693 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 30 KiB/s wr, 240 op/s
Oct 02 12:35:55 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #50. Immutable memtables: 0.
Oct 02 12:35:56 compute-0 ceph-mon[73607]: pgmap v2247: 305 pgs: 305 active+clean; 693 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 5.0 MiB/s wr, 191 op/s
Oct 02 12:35:56 compute-0 ceph-mon[73607]: osdmap e330: 3 total, 3 up, 3 in
Oct 02 12:35:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2662473630' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:35:56 compute-0 nova_compute[257802]: 2025-10-02 12:35:56.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:56 compute-0 nova_compute[257802]: 2025-10-02 12:35:56.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:35:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:56.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:56 compute-0 nova_compute[257802]: 2025-10-02 12:35:56.537 2 DEBUG nova.compute.manager [req-e9d02b74-cc88-4a5f-9847-e1bdbeeca63c req-5daa04b2-234e-4a8e-a410-af278f5bba62 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Received event network-changed-fb769e14-32bb-436e-a8b2-f08e69207e0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:56 compute-0 nova_compute[257802]: 2025-10-02 12:35:56.538 2 DEBUG nova.compute.manager [req-e9d02b74-cc88-4a5f-9847-e1bdbeeca63c req-5daa04b2-234e-4a8e-a410-af278f5bba62 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Refreshing instance network info cache due to event network-changed-fb769e14-32bb-436e-a8b2-f08e69207e0f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:35:56 compute-0 nova_compute[257802]: 2025-10-02 12:35:56.538 2 DEBUG oslo_concurrency.lockutils [req-e9d02b74-cc88-4a5f-9847-e1bdbeeca63c req-5daa04b2-234e-4a8e-a410-af278f5bba62 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:35:56 compute-0 nova_compute[257802]: 2025-10-02 12:35:56.538 2 DEBUG oslo_concurrency.lockutils [req-e9d02b74-cc88-4a5f-9847-e1bdbeeca63c req-5daa04b2-234e-4a8e-a410-af278f5bba62 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:35:56 compute-0 nova_compute[257802]: 2025-10-02 12:35:56.538 2 DEBUG nova.network.neutron [req-e9d02b74-cc88-4a5f-9847-e1bdbeeca63c req-5daa04b2-234e-4a8e-a410-af278f5bba62 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Refreshing network info cache for port fb769e14-32bb-436e-a8b2-f08e69207e0f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:35:56 compute-0 nova_compute[257802]: 2025-10-02 12:35:56.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:35:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:57.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:35:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2250: 305 pgs: 305 active+clean; 693 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 30 KiB/s wr, 240 op/s
Oct 02 12:35:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2662473630' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:35:57 compute-0 ceph-mon[73607]: pgmap v2249: 305 pgs: 305 active+clean; 693 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 30 KiB/s wr, 240 op/s
Oct 02 12:35:58 compute-0 nova_compute[257802]: 2025-10-02 12:35:58.101 2 DEBUG nova.network.neutron [req-e9d02b74-cc88-4a5f-9847-e1bdbeeca63c req-5daa04b2-234e-4a8e-a410-af278f5bba62 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Updated VIF entry in instance network info cache for port fb769e14-32bb-436e-a8b2-f08e69207e0f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:35:58 compute-0 nova_compute[257802]: 2025-10-02 12:35:58.102 2 DEBUG nova.network.neutron [req-e9d02b74-cc88-4a5f-9847-e1bdbeeca63c req-5daa04b2-234e-4a8e-a410-af278f5bba62 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Updating instance_info_cache with network_info: [{"id": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "address": "fa:16:3e:7f:8b:39", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb769e14-32", "ovs_interfaceid": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:35:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:58.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:58 compute-0 nova_compute[257802]: 2025-10-02 12:35:58.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:58 compute-0 nova_compute[257802]: 2025-10-02 12:35:58.440 2 DEBUG oslo_concurrency.lockutils [req-e9d02b74-cc88-4a5f-9847-e1bdbeeca63c req-5daa04b2-234e-4a8e-a410-af278f5bba62 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:35:58 compute-0 nova_compute[257802]: 2025-10-02 12:35:58.921 2 DEBUG nova.compute.manager [req-fb16b706-4cec-405e-90e3-e23fec35a863 req-e152923b-f28f-4831-9358-b4cc69a1f6ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Received event network-changed-fb769e14-32bb-436e-a8b2-f08e69207e0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:58 compute-0 nova_compute[257802]: 2025-10-02 12:35:58.922 2 DEBUG nova.compute.manager [req-fb16b706-4cec-405e-90e3-e23fec35a863 req-e152923b-f28f-4831-9358-b4cc69a1f6ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Refreshing instance network info cache due to event network-changed-fb769e14-32bb-436e-a8b2-f08e69207e0f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:35:58 compute-0 nova_compute[257802]: 2025-10-02 12:35:58.923 2 DEBUG oslo_concurrency.lockutils [req-fb16b706-4cec-405e-90e3-e23fec35a863 req-e152923b-f28f-4831-9358-b4cc69a1f6ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:35:58 compute-0 nova_compute[257802]: 2025-10-02 12:35:58.923 2 DEBUG oslo_concurrency.lockutils [req-fb16b706-4cec-405e-90e3-e23fec35a863 req-e152923b-f28f-4831-9358-b4cc69a1f6ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:35:58 compute-0 nova_compute[257802]: 2025-10-02 12:35:58.924 2 DEBUG nova.network.neutron [req-fb16b706-4cec-405e-90e3-e23fec35a863 req-e152923b-f28f-4831-9358-b4cc69a1f6ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Refreshing network info cache for port fb769e14-32bb-436e-a8b2-f08e69207e0f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:35:59 compute-0 ceph-mon[73607]: pgmap v2250: 305 pgs: 305 active+clean; 693 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 30 KiB/s wr, 240 op/s
Oct 02 12:35:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2397694068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:35:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:35:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:59.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:35:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2251: 305 pgs: 305 active+clean; 614 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 34 KiB/s wr, 140 op/s
Oct 02 12:35:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:00 compute-0 nova_compute[257802]: 2025-10-02 12:36:00.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:36:00 compute-0 nova_compute[257802]: 2025-10-02 12:36:00.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:36:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:00.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:36:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:01.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:36:01 compute-0 nova_compute[257802]: 2025-10-02 12:36:01.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2252: 305 pgs: 305 active+clean; 614 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 34 KiB/s wr, 140 op/s
Oct 02 12:36:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:02.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:02 compute-0 ovn_controller[148183]: 2025-10-02T12:36:02Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7f:8b:39 10.100.0.13
Oct 02 12:36:03 compute-0 nova_compute[257802]: 2025-10-02 12:36:03.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:36:03 compute-0 nova_compute[257802]: 2025-10-02 12:36:03.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:36:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:03.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:36:03 compute-0 ceph-mon[73607]: pgmap v2251: 305 pgs: 305 active+clean; 614 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 34 KiB/s wr, 140 op/s
Oct 02 12:36:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2253: 305 pgs: 305 active+clean; 614 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 116 op/s
Oct 02 12:36:04 compute-0 nova_compute[257802]: 2025-10-02 12:36:04.100 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:36:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:04.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e330 do_prune osdmap full prune enabled
Oct 02 12:36:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/109249967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:04 compute-0 ceph-mon[73607]: pgmap v2252: 305 pgs: 305 active+clean; 614 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 34 KiB/s wr, 140 op/s
Oct 02 12:36:04 compute-0 ceph-mon[73607]: pgmap v2253: 305 pgs: 305 active+clean; 614 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 116 op/s
Oct 02 12:36:05 compute-0 nova_compute[257802]: 2025-10-02 12:36:05.270 2 DEBUG nova.network.neutron [req-fb16b706-4cec-405e-90e3-e23fec35a863 req-e152923b-f28f-4831-9358-b4cc69a1f6ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Updated VIF entry in instance network info cache for port fb769e14-32bb-436e-a8b2-f08e69207e0f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:36:05 compute-0 nova_compute[257802]: 2025-10-02 12:36:05.271 2 DEBUG nova.network.neutron [req-fb16b706-4cec-405e-90e3-e23fec35a863 req-e152923b-f28f-4831-9358-b4cc69a1f6ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Updating instance_info_cache with network_info: [{"id": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "address": "fa:16:3e:7f:8b:39", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb769e14-32", "ovs_interfaceid": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:36:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:05.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e331 e331: 3 total, 3 up, 3 in
Oct 02 12:36:05 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e331: 3 total, 3 up, 3 in
Oct 02 12:36:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2255: 305 pgs: 305 active+clean; 614 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 749 KiB/s rd, 43 KiB/s wr, 90 op/s
Oct 02 12:36:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:06.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:06 compute-0 nova_compute[257802]: 2025-10-02 12:36:06.530 2 DEBUG oslo_concurrency.lockutils [req-fb16b706-4cec-405e-90e3-e23fec35a863 req-e152923b-f28f-4831-9358-b4cc69a1f6ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:36:06 compute-0 nova_compute[257802]: 2025-10-02 12:36:06.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:07 compute-0 nova_compute[257802]: 2025-10-02 12:36:07.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:36:07 compute-0 ceph-mon[73607]: osdmap e331: 3 total, 3 up, 3 in
Oct 02 12:36:07 compute-0 ceph-mon[73607]: pgmap v2255: 305 pgs: 305 active+clean; 614 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 749 KiB/s rd, 43 KiB/s wr, 90 op/s
Oct 02 12:36:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:07.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2256: 305 pgs: 305 active+clean; 614 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 749 KiB/s rd, 43 KiB/s wr, 90 op/s
Oct 02 12:36:08 compute-0 nova_compute[257802]: 2025-10-02 12:36:08.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:36:08 compute-0 nova_compute[257802]: 2025-10-02 12:36:08.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:36:08 compute-0 nova_compute[257802]: 2025-10-02 12:36:08.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:36:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:08.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:08 compute-0 sudo[344252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:08 compute-0 sudo[344252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:08 compute-0 sudo[344252]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:08 compute-0 sudo[344277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:08 compute-0 sudo[344277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:08 compute-0 sudo[344277]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:08 compute-0 nova_compute[257802]: 2025-10-02 12:36:08.306 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-a53afa14-bb7b-4723-8239-2ed285f1bc94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:36:08 compute-0 nova_compute[257802]: 2025-10-02 12:36:08.306 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-a53afa14-bb7b-4723-8239-2ed285f1bc94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:36:08 compute-0 nova_compute[257802]: 2025-10-02 12:36:08.306 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:36:08 compute-0 nova_compute[257802]: 2025-10-02 12:36:08.307 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid a53afa14-bb7b-4723-8239-2ed285f1bc94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:36:08 compute-0 nova_compute[257802]: 2025-10-02 12:36:08.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:09.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:09 compute-0 ceph-mon[73607]: pgmap v2256: 305 pgs: 305 active+clean; 614 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 749 KiB/s rd, 43 KiB/s wr, 90 op/s
Oct 02 12:36:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1570981439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2257: 305 pgs: 305 active+clean; 616 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 42 KiB/s wr, 133 op/s
Oct 02 12:36:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:10.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:10 compute-0 nova_compute[257802]: 2025-10-02 12:36:10.784 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Updating instance_info_cache with network_info: [{"id": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "address": "fa:16:3e:2f:ff:7e", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a410c4c-94", "ovs_interfaceid": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:36:10 compute-0 nova_compute[257802]: 2025-10-02 12:36:10.844 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-a53afa14-bb7b-4723-8239-2ed285f1bc94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:36:10 compute-0 nova_compute[257802]: 2025-10-02 12:36:10.844 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:36:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/875864746' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:11 compute-0 ceph-mon[73607]: pgmap v2257: 305 pgs: 305 active+clean; 616 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 42 KiB/s wr, 133 op/s
Oct 02 12:36:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:11.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:11 compute-0 nova_compute[257802]: 2025-10-02 12:36:11.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2258: 305 pgs: 305 active+clean; 616 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 42 KiB/s wr, 133 op/s
Oct 02 12:36:12 compute-0 nova_compute[257802]: 2025-10-02 12:36:12.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:36:12 compute-0 nova_compute[257802]: 2025-10-02 12:36:12.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:36:12 compute-0 nova_compute[257802]: 2025-10-02 12:36:12.141 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:12 compute-0 nova_compute[257802]: 2025-10-02 12:36:12.141 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:12 compute-0 nova_compute[257802]: 2025-10-02 12:36:12.142 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:12 compute-0 nova_compute[257802]: 2025-10-02 12:36:12.142 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:36:12 compute-0 nova_compute[257802]: 2025-10-02 12:36:12.142 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:12.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:36:12 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2972818464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:12 compute-0 nova_compute[257802]: 2025-10-02 12:36:12.590 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:36:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:36:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:36:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:36:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:36:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:36:12 compute-0 nova_compute[257802]: 2025-10-02 12:36:12.741 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:36:12 compute-0 nova_compute[257802]: 2025-10-02 12:36:12.742 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:36:12 compute-0 nova_compute[257802]: 2025-10-02 12:36:12.742 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:36:12 compute-0 nova_compute[257802]: 2025-10-02 12:36:12.748 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:36:12 compute-0 nova_compute[257802]: 2025-10-02 12:36:12.748 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:36:12 compute-0 nova_compute[257802]: 2025-10-02 12:36:12.749 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:36:12 compute-0 nova_compute[257802]: 2025-10-02 12:36:12.973 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:36:12 compute-0 nova_compute[257802]: 2025-10-02 12:36:12.975 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3928MB free_disk=20.718524932861328GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:36:12 compute-0 nova_compute[257802]: 2025-10-02 12:36:12.975 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:12 compute-0 nova_compute[257802]: 2025-10-02 12:36:12.976 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:13 compute-0 nova_compute[257802]: 2025-10-02 12:36:13.272 2 DEBUG oslo_concurrency.lockutils [None req-b883e3f6-0f81-4b1d-ab0b-c2abe2f055e0 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:13 compute-0 nova_compute[257802]: 2025-10-02 12:36:13.272 2 DEBUG oslo_concurrency.lockutils [None req-b883e3f6-0f81-4b1d-ab0b-c2abe2f055e0 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:13 compute-0 nova_compute[257802]: 2025-10-02 12:36:13.349 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance a53afa14-bb7b-4723-8239-2ed285f1bc94 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:36:13 compute-0 nova_compute[257802]: 2025-10-02 12:36:13.350 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:36:13 compute-0 nova_compute[257802]: 2025-10-02 12:36:13.350 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:36:13 compute-0 nova_compute[257802]: 2025-10-02 12:36:13.350 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:36:13 compute-0 nova_compute[257802]: 2025-10-02 12:36:13.373 2 INFO nova.compute.manager [None req-b883e3f6-0f81-4b1d-ab0b-c2abe2f055e0 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Detaching volume 5134bab8-913f-41c9-b5b8-350b459c53e1
Oct 02 12:36:13 compute-0 nova_compute[257802]: 2025-10-02 12:36:13.402 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:13 compute-0 nova_compute[257802]: 2025-10-02 12:36:13.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:13.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:13 compute-0 ceph-mon[73607]: pgmap v2258: 305 pgs: 305 active+clean; 616 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 42 KiB/s wr, 133 op/s
Oct 02 12:36:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2972818464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:13 compute-0 nova_compute[257802]: 2025-10-02 12:36:13.739 2 INFO nova.virt.block_device [None req-b883e3f6-0f81-4b1d-ab0b-c2abe2f055e0 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Attempting to driver detach volume 5134bab8-913f-41c9-b5b8-350b459c53e1 from mountpoint /dev/vdb
Oct 02 12:36:13 compute-0 nova_compute[257802]: 2025-10-02 12:36:13.748 2 DEBUG nova.virt.libvirt.driver [None req-b883e3f6-0f81-4b1d-ab0b-c2abe2f055e0 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Attempting to detach device vdb from instance 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:36:13 compute-0 nova_compute[257802]: 2025-10-02 12:36:13.749 2 DEBUG nova.virt.libvirt.guest [None req-b883e3f6-0f81-4b1d-ab0b-c2abe2f055e0 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:36:13 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:36:13 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-5134bab8-913f-41c9-b5b8-350b459c53e1">
Oct 02 12:36:13 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:36:13 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:36:13 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:36:13 compute-0 nova_compute[257802]:   </source>
Oct 02 12:36:13 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:36:13 compute-0 nova_compute[257802]:   <serial>5134bab8-913f-41c9-b5b8-350b459c53e1</serial>
Oct 02 12:36:13 compute-0 nova_compute[257802]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:36:13 compute-0 nova_compute[257802]: </disk>
Oct 02 12:36:13 compute-0 nova_compute[257802]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:36:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2259: 305 pgs: 305 active+clean; 617 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 55 KiB/s wr, 156 op/s
Oct 02 12:36:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:36:13 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3611203973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:13 compute-0 nova_compute[257802]: 2025-10-02 12:36:13.865 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:13 compute-0 nova_compute[257802]: 2025-10-02 12:36:13.871 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:36:13 compute-0 nova_compute[257802]: 2025-10-02 12:36:13.928 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:36:14 compute-0 nova_compute[257802]: 2025-10-02 12:36:14.017 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:36:14 compute-0 nova_compute[257802]: 2025-10-02 12:36:14.018 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.042s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:14 compute-0 nova_compute[257802]: 2025-10-02 12:36:14.038 2 INFO nova.virt.libvirt.driver [None req-b883e3f6-0f81-4b1d-ab0b-c2abe2f055e0 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Successfully detached device vdb from instance 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 from the persistent domain config.
Oct 02 12:36:14 compute-0 nova_compute[257802]: 2025-10-02 12:36:14.039 2 DEBUG nova.virt.libvirt.driver [None req-b883e3f6-0f81-4b1d-ab0b-c2abe2f055e0 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 12:36:14 compute-0 nova_compute[257802]: 2025-10-02 12:36:14.040 2 DEBUG nova.virt.libvirt.guest [None req-b883e3f6-0f81-4b1d-ab0b-c2abe2f055e0 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:36:14 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:36:14 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-5134bab8-913f-41c9-b5b8-350b459c53e1">
Oct 02 12:36:14 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:36:14 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:36:14 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:36:14 compute-0 nova_compute[257802]:   </source>
Oct 02 12:36:14 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:36:14 compute-0 nova_compute[257802]:   <serial>5134bab8-913f-41c9-b5b8-350b459c53e1</serial>
Oct 02 12:36:14 compute-0 nova_compute[257802]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:36:14 compute-0 nova_compute[257802]: </disk>
Oct 02 12:36:14 compute-0 nova_compute[257802]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:36:14 compute-0 nova_compute[257802]: 2025-10-02 12:36:14.162 2 DEBUG nova.virt.libvirt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Received event <DeviceRemovedEvent: 1759408574.1621916, 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 12:36:14 compute-0 nova_compute[257802]: 2025-10-02 12:36:14.165 2 DEBUG nova.virt.libvirt.driver [None req-b883e3f6-0f81-4b1d-ab0b-c2abe2f055e0 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 12:36:14 compute-0 nova_compute[257802]: 2025-10-02 12:36:14.167 2 INFO nova.virt.libvirt.driver [None req-b883e3f6-0f81-4b1d-ab0b-c2abe2f055e0 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Successfully detached device vdb from instance 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 from the live domain config.
Oct 02 12:36:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:14.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:14 compute-0 nova_compute[257802]: 2025-10-02 12:36:14.324 2 DEBUG nova.objects.instance [None req-b883e3f6-0f81-4b1d-ab0b-c2abe2f055e0 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'flavor' on Instance uuid 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:36:14 compute-0 nova_compute[257802]: 2025-10-02 12:36:14.367 2 DEBUG oslo_concurrency.lockutils [None req-b883e3f6-0f81-4b1d-ab0b-c2abe2f055e0 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:14.527 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:36:14 compute-0 nova_compute[257802]: 2025-10-02 12:36:14.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:14.529 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:36:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:15 compute-0 sshd-session[344353]: Invalid user maroof from 167.99.55.34 port 38750
Oct 02 12:36:15 compute-0 sshd-session[344353]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 12:36:15 compute-0 sshd-session[344353]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=167.99.55.34
Oct 02 12:36:15 compute-0 ceph-mon[73607]: pgmap v2259: 305 pgs: 305 active+clean; 617 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 55 KiB/s wr, 156 op/s
Oct 02 12:36:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3611203973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2177103198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:15.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:15.531 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2260: 305 pgs: 305 active+clean; 617 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 30 KiB/s wr, 124 op/s
Oct 02 12:36:15 compute-0 nova_compute[257802]: 2025-10-02 12:36:15.955 2 DEBUG oslo_concurrency.lockutils [None req-d13e7f51-68c4-499f-ad7c-f394a15359c7 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:15 compute-0 nova_compute[257802]: 2025-10-02 12:36:15.956 2 DEBUG oslo_concurrency.lockutils [None req-d13e7f51-68c4-499f-ad7c-f394a15359c7 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:15 compute-0 nova_compute[257802]: 2025-10-02 12:36:15.956 2 DEBUG oslo_concurrency.lockutils [None req-d13e7f51-68c4-499f-ad7c-f394a15359c7 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:15 compute-0 nova_compute[257802]: 2025-10-02 12:36:15.956 2 DEBUG oslo_concurrency.lockutils [None req-d13e7f51-68c4-499f-ad7c-f394a15359c7 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:15 compute-0 nova_compute[257802]: 2025-10-02 12:36:15.956 2 DEBUG oslo_concurrency.lockutils [None req-d13e7f51-68c4-499f-ad7c-f394a15359c7 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:15 compute-0 nova_compute[257802]: 2025-10-02 12:36:15.957 2 INFO nova.compute.manager [None req-d13e7f51-68c4-499f-ad7c-f394a15359c7 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Terminating instance
Oct 02 12:36:15 compute-0 nova_compute[257802]: 2025-10-02 12:36:15.958 2 DEBUG nova.compute.manager [None req-d13e7f51-68c4-499f-ad7c-f394a15359c7 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:36:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:36:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:16.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:36:16 compute-0 kernel: tapfb769e14-32 (unregistering): left promiscuous mode
Oct 02 12:36:16 compute-0 NetworkManager[44987]: <info>  [1759408576.4902] device (tapfb769e14-32): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:36:16 compute-0 ovn_controller[148183]: 2025-10-02T12:36:16Z|00624|binding|INFO|Releasing lport fb769e14-32bb-436e-a8b2-f08e69207e0f from this chassis (sb_readonly=0)
Oct 02 12:36:16 compute-0 nova_compute[257802]: 2025-10-02 12:36:16.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:16 compute-0 ovn_controller[148183]: 2025-10-02T12:36:16Z|00625|binding|INFO|Setting lport fb769e14-32bb-436e-a8b2-f08e69207e0f down in Southbound
Oct 02 12:36:16 compute-0 ovn_controller[148183]: 2025-10-02T12:36:16Z|00626|binding|INFO|Removing iface tapfb769e14-32 ovn-installed in OVS
Oct 02 12:36:16 compute-0 nova_compute[257802]: 2025-10-02 12:36:16.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:16.510 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:8b:39 10.100.0.13'], port_security=['fa:16:3e:7f:8b:39 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e58f4ba2-c72c-42b8-acea-ca6241431726', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc4d8f857b2d42bf9ae477fc5f514216', 'neutron:revision_number': '8', 'neutron:security_group_ids': '31221492-0f82-4f56-bac5-f47f9e2caefd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.186', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff6421fa-d014-4140-8a8b-1356d60478c0, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=fb769e14-32bb-436e-a8b2-f08e69207e0f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:36:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:16.511 158261 INFO neutron.agent.ovn.metadata.agent [-] Port fb769e14-32bb-436e-a8b2-f08e69207e0f in datapath e58f4ba2-c72c-42b8-acea-ca6241431726 unbound from our chassis
Oct 02 12:36:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:16.513 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e58f4ba2-c72c-42b8-acea-ca6241431726
Oct 02 12:36:16 compute-0 nova_compute[257802]: 2025-10-02 12:36:16.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:16.532 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9ac5e09a-663b-4976-a91e-2b5fc80209da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:16 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000084.scope: Deactivated successfully.
Oct 02 12:36:16 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000084.scope: Consumed 15.775s CPU time.
Oct 02 12:36:16 compute-0 systemd-machined[211836]: Machine qemu-71-instance-00000084 terminated.
Oct 02 12:36:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:16.562 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[bdacc920-37f4-4d15-beba-8ac306c1bb47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:16.566 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[3d187b81-08e1-426e-a599-99da636e0c51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:16 compute-0 nova_compute[257802]: 2025-10-02 12:36:16.592 2 INFO nova.virt.libvirt.driver [-] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Instance destroyed successfully.
Oct 02 12:36:16 compute-0 nova_compute[257802]: 2025-10-02 12:36:16.593 2 DEBUG nova.objects.instance [None req-d13e7f51-68c4-499f-ad7c-f394a15359c7 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'resources' on Instance uuid 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:36:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:16.596 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[75affb88-169d-47e0-9dc1-8f8da337345f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:16 compute-0 nova_compute[257802]: 2025-10-02 12:36:16.612 2 DEBUG nova.virt.libvirt.vif [None req-d13e7f51-68c4-499f-ad7c-f394a15359c7 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:34:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-844114970',display_name='tempest-ServerRescueNegativeTestJSON-server-844114970',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-844114970',id=132,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBC3dzJ/cmeAQeCTRVIaMi9ODTeKzsWGp+oGguk+hFSNuPm8DjZXpwH/w0EeoRUq6Hegzhnzkofu7f4IKtcBTMmXs34k+4eg8rqlyWmhp8XzQZq6+/mosGCR22msyjISyg==',key_name='tempest-keypair-1971519988',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:35:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cc4d8f857b2d42bf9ae477fc5f514216',ramdisk_id='',reservation_id='r-ejvqx4ye',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-959216005',owner_user_name='tempest-ServerRescueNegativeTestJSON-959216005-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:35:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6c932f0d0e594f00855572fbe06ee3aa',uuid=7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "address": "fa:16:3e:7f:8b:39", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb769e14-32", "ovs_interfaceid": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:36:16 compute-0 nova_compute[257802]: 2025-10-02 12:36:16.612 2 DEBUG nova.network.os_vif_util [None req-d13e7f51-68c4-499f-ad7c-f394a15359c7 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Converting VIF {"id": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "address": "fa:16:3e:7f:8b:39", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb769e14-32", "ovs_interfaceid": "fb769e14-32bb-436e-a8b2-f08e69207e0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:36:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:16.612 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[03b1807c-93a3-4e55-ab38-39c11c3dd0d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape58f4ba2-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:ad:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 15, 'rx_bytes': 1084, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 15, 'rx_bytes': 1084, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647515, 'reachable_time': 42805, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344378, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:16 compute-0 nova_compute[257802]: 2025-10-02 12:36:16.613 2 DEBUG nova.network.os_vif_util [None req-d13e7f51-68c4-499f-ad7c-f394a15359c7 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7f:8b:39,bridge_name='br-int',has_traffic_filtering=True,id=fb769e14-32bb-436e-a8b2-f08e69207e0f,network=Network(e58f4ba2-c72c-42b8-acea-ca6241431726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb769e14-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:36:16 compute-0 nova_compute[257802]: 2025-10-02 12:36:16.614 2 DEBUG os_vif [None req-d13e7f51-68c4-499f-ad7c-f394a15359c7 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:8b:39,bridge_name='br-int',has_traffic_filtering=True,id=fb769e14-32bb-436e-a8b2-f08e69207e0f,network=Network(e58f4ba2-c72c-42b8-acea-ca6241431726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb769e14-32') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:36:16 compute-0 nova_compute[257802]: 2025-10-02 12:36:16.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:16 compute-0 nova_compute[257802]: 2025-10-02 12:36:16.616 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb769e14-32, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:16.626 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[306135f4-d94a-40ae-b646-a14d54a4fdf1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape58f4ba2-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647525, 'tstamp': 647525}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344379, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape58f4ba2-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647528, 'tstamp': 647528}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344379, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:16.627 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape58f4ba2-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:16 compute-0 nova_compute[257802]: 2025-10-02 12:36:16.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:16 compute-0 nova_compute[257802]: 2025-10-02 12:36:16.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:16 compute-0 nova_compute[257802]: 2025-10-02 12:36:16.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:16 compute-0 nova_compute[257802]: 2025-10-02 12:36:16.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:36:16 compute-0 nova_compute[257802]: 2025-10-02 12:36:16.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:16.655 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape58f4ba2-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:16.656 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:36:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:16.656 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape58f4ba2-c0, col_values=(('external_ids', {'iface-id': '81a5a13b-b81c-444a-8751-b35a35cdf3dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:16.657 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:36:16 compute-0 nova_compute[257802]: 2025-10-02 12:36:16.656 2 INFO os_vif [None req-d13e7f51-68c4-499f-ad7c-f394a15359c7 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:8b:39,bridge_name='br-int',has_traffic_filtering=True,id=fb769e14-32bb-436e-a8b2-f08e69207e0f,network=Network(e58f4ba2-c72c-42b8-acea-ca6241431726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb769e14-32')
Oct 02 12:36:16 compute-0 ceph-mon[73607]: pgmap v2260: 305 pgs: 305 active+clean; 617 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 30 KiB/s wr, 124 op/s
Oct 02 12:36:16 compute-0 sshd-session[344353]: Failed password for invalid user maroof from 167.99.55.34 port 38750 ssh2
Oct 02 12:36:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:17.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2261: 305 pgs: 305 active+clean; 617 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 26 KiB/s wr, 105 op/s
Oct 02 12:36:17 compute-0 nova_compute[257802]: 2025-10-02 12:36:17.832 2 DEBUG nova.compute.manager [req-276faffa-7674-4061-9773-ff1373a4ac4b req-a649d717-1585-459b-afde-167879aa6f9d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Received event network-vif-unplugged-fb769e14-32bb-436e-a8b2-f08e69207e0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:36:17 compute-0 nova_compute[257802]: 2025-10-02 12:36:17.833 2 DEBUG oslo_concurrency.lockutils [req-276faffa-7674-4061-9773-ff1373a4ac4b req-a649d717-1585-459b-afde-167879aa6f9d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:17 compute-0 nova_compute[257802]: 2025-10-02 12:36:17.833 2 DEBUG oslo_concurrency.lockutils [req-276faffa-7674-4061-9773-ff1373a4ac4b req-a649d717-1585-459b-afde-167879aa6f9d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:17 compute-0 nova_compute[257802]: 2025-10-02 12:36:17.833 2 DEBUG oslo_concurrency.lockutils [req-276faffa-7674-4061-9773-ff1373a4ac4b req-a649d717-1585-459b-afde-167879aa6f9d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:17 compute-0 nova_compute[257802]: 2025-10-02 12:36:17.833 2 DEBUG nova.compute.manager [req-276faffa-7674-4061-9773-ff1373a4ac4b req-a649d717-1585-459b-afde-167879aa6f9d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] No waiting events found dispatching network-vif-unplugged-fb769e14-32bb-436e-a8b2-f08e69207e0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:36:17 compute-0 nova_compute[257802]: 2025-10-02 12:36:17.834 2 DEBUG nova.compute.manager [req-276faffa-7674-4061-9773-ff1373a4ac4b req-a649d717-1585-459b-afde-167879aa6f9d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Received event network-vif-unplugged-fb769e14-32bb-436e-a8b2-f08e69207e0f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:36:17 compute-0 sshd-session[344353]: Connection closed by invalid user maroof 167.99.55.34 port 38750 [preauth]
Oct 02 12:36:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:18.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:18 compute-0 nova_compute[257802]: 2025-10-02 12:36:18.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:19 compute-0 ceph-mon[73607]: pgmap v2261: 305 pgs: 305 active+clean; 617 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 26 KiB/s wr, 105 op/s
Oct 02 12:36:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:19.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2262: 305 pgs: 305 active+clean; 664 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 139 op/s
Oct 02 12:36:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:20 compute-0 nova_compute[257802]: 2025-10-02 12:36:20.013 2 DEBUG nova.compute.manager [req-c34fd249-12f7-4af4-bdd0-ddc129afece3 req-de879adc-aa4c-442a-9c94-b6a2d510aedf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Received event network-vif-plugged-fb769e14-32bb-436e-a8b2-f08e69207e0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:36:20 compute-0 nova_compute[257802]: 2025-10-02 12:36:20.013 2 DEBUG oslo_concurrency.lockutils [req-c34fd249-12f7-4af4-bdd0-ddc129afece3 req-de879adc-aa4c-442a-9c94-b6a2d510aedf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:20 compute-0 nova_compute[257802]: 2025-10-02 12:36:20.014 2 DEBUG oslo_concurrency.lockutils [req-c34fd249-12f7-4af4-bdd0-ddc129afece3 req-de879adc-aa4c-442a-9c94-b6a2d510aedf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:20 compute-0 nova_compute[257802]: 2025-10-02 12:36:20.014 2 DEBUG oslo_concurrency.lockutils [req-c34fd249-12f7-4af4-bdd0-ddc129afece3 req-de879adc-aa4c-442a-9c94-b6a2d510aedf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:20 compute-0 nova_compute[257802]: 2025-10-02 12:36:20.014 2 DEBUG nova.compute.manager [req-c34fd249-12f7-4af4-bdd0-ddc129afece3 req-de879adc-aa4c-442a-9c94-b6a2d510aedf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] No waiting events found dispatching network-vif-plugged-fb769e14-32bb-436e-a8b2-f08e69207e0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:36:20 compute-0 nova_compute[257802]: 2025-10-02 12:36:20.014 2 WARNING nova.compute.manager [req-c34fd249-12f7-4af4-bdd0-ddc129afece3 req-de879adc-aa4c-442a-9c94-b6a2d510aedf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Received unexpected event network-vif-plugged-fb769e14-32bb-436e-a8b2-f08e69207e0f for instance with vm_state active and task_state deleting.
Oct 02 12:36:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:20.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:20 compute-0 sudo[344400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:20 compute-0 sudo[344400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:20 compute-0 sudo[344400]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:20 compute-0 podman[344424]: 2025-10-02 12:36:20.413675827 +0000 UTC m=+0.059733733 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 02 12:36:20 compute-0 podman[344425]: 2025-10-02 12:36:20.414481436 +0000 UTC m=+0.059319852 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:36:20 compute-0 sudo[344443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:36:20 compute-0 sudo[344443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:20 compute-0 sudo[344443]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:20 compute-0 podman[344426]: 2025-10-02 12:36:20.42693386 +0000 UTC m=+0.069272525 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:36:20 compute-0 nova_compute[257802]: 2025-10-02 12:36:20.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:20 compute-0 sudo[344508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:20 compute-0 sudo[344508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:20 compute-0 sudo[344508]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:20 compute-0 sudo[344533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 12:36:20 compute-0 sudo[344533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:20 compute-0 sudo[344533]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:36:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 12:36:20 compute-0 ceph-mon[73607]: pgmap v2262: 305 pgs: 305 active+clean; 664 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 139 op/s
Oct 02 12:36:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:21.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:21 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:36:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:36:21 compute-0 nova_compute[257802]: 2025-10-02 12:36:21.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2263: 305 pgs: 305 active+clean; 664 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 73 op/s
Oct 02 12:36:21 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:36:22 compute-0 nova_compute[257802]: 2025-10-02 12:36:22.007 2 INFO nova.virt.libvirt.driver [None req-d13e7f51-68c4-499f-ad7c-f394a15359c7 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Deleting instance files /var/lib/nova/instances/7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75_del
Oct 02 12:36:22 compute-0 nova_compute[257802]: 2025-10-02 12:36:22.007 2 INFO nova.virt.libvirt.driver [None req-d13e7f51-68c4-499f-ad7c-f394a15359c7 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Deletion of /var/lib/nova/instances/7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75_del complete
Oct 02 12:36:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 12:36:22 compute-0 nova_compute[257802]: 2025-10-02 12:36:22.054 2 INFO nova.compute.manager [None req-d13e7f51-68c4-499f-ad7c-f394a15359c7 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Took 6.10 seconds to destroy the instance on the hypervisor.
Oct 02 12:36:22 compute-0 nova_compute[257802]: 2025-10-02 12:36:22.054 2 DEBUG oslo.service.loopingcall [None req-d13e7f51-68c4-499f-ad7c-f394a15359c7 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:36:22 compute-0 nova_compute[257802]: 2025-10-02 12:36:22.054 2 DEBUG nova.compute.manager [-] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:36:22 compute-0 nova_compute[257802]: 2025-10-02 12:36:22.055 2 DEBUG nova.network.neutron [-] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:36:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3348269603' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:36:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:36:22 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:36:22 compute-0 sudo[344578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:22 compute-0 sudo[344578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:22 compute-0 sudo[344578]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:22.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:22 compute-0 sudo[344603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:36:22 compute-0 sudo[344603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:22 compute-0 sudo[344603]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:22 compute-0 sudo[344628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:22 compute-0 sudo[344628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:22 compute-0 sudo[344628]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:22 compute-0 sudo[344653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:36:22 compute-0 sudo[344653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:22 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:36:22 compute-0 nova_compute[257802]: 2025-10-02 12:36:22.760 2 DEBUG nova.network.neutron [-] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:36:22 compute-0 nova_compute[257802]: 2025-10-02 12:36:22.778 2 INFO nova.compute.manager [-] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Took 0.72 seconds to deallocate network for instance.
Oct 02 12:36:22 compute-0 sudo[344653]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:22 compute-0 nova_compute[257802]: 2025-10-02 12:36:22.857 2 DEBUG oslo_concurrency.lockutils [None req-d13e7f51-68c4-499f-ad7c-f394a15359c7 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:22 compute-0 nova_compute[257802]: 2025-10-02 12:36:22.859 2 DEBUG oslo_concurrency.lockutils [None req-d13e7f51-68c4-499f-ad7c-f394a15359c7 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:22 compute-0 nova_compute[257802]: 2025-10-02 12:36:22.897 2 DEBUG nova.compute.manager [req-1462d6bd-dd0d-4bfa-8d62-1ed76f0e7a92 req-1331888e-0f07-4e9e-9c49-9811547f9a9c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Received event network-vif-deleted-fb769e14-32bb-436e-a8b2-f08e69207e0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:36:22 compute-0 nova_compute[257802]: 2025-10-02 12:36:22.937 2 DEBUG oslo_concurrency.processutils [None req-d13e7f51-68c4-499f-ad7c-f394a15359c7 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:36:23 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1340501301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:23 compute-0 nova_compute[257802]: 2025-10-02 12:36:23.358 2 DEBUG oslo_concurrency.processutils [None req-d13e7f51-68c4-499f-ad7c-f394a15359c7 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:23 compute-0 nova_compute[257802]: 2025-10-02 12:36:23.366 2 DEBUG nova.compute.provider_tree [None req-d13e7f51-68c4-499f-ad7c-f394a15359c7 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:36:23 compute-0 nova_compute[257802]: 2025-10-02 12:36:23.386 2 DEBUG nova.scheduler.client.report [None req-d13e7f51-68c4-499f-ad7c-f394a15359c7 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:36:23 compute-0 nova_compute[257802]: 2025-10-02 12:36:23.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:23 compute-0 nova_compute[257802]: 2025-10-02 12:36:23.417 2 DEBUG oslo_concurrency.lockutils [None req-d13e7f51-68c4-499f-ad7c-f394a15359c7 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:23 compute-0 nova_compute[257802]: 2025-10-02 12:36:23.454 2 INFO nova.scheduler.client.report [None req-d13e7f51-68c4-499f-ad7c-f394a15359c7 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Deleted allocations for instance 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75
Oct 02 12:36:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:36:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:23.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:36:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3760426406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:36:23 compute-0 ceph-mon[73607]: pgmap v2263: 305 pgs: 305 active+clean; 664 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 73 op/s
Oct 02 12:36:23 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:36:23 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:36:23 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:36:23 compute-0 nova_compute[257802]: 2025-10-02 12:36:23.551 2 DEBUG oslo_concurrency.lockutils [None req-d13e7f51-68c4-499f-ad7c-f394a15359c7 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2264: 305 pgs: 305 active+clean; 636 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.8 MiB/s wr, 92 op/s
Oct 02 12:36:23 compute-0 nova_compute[257802]: 2025-10-02 12:36:23.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:24 compute-0 nova_compute[257802]: 2025-10-02 12:36:24.019 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:36:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:24.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1340501301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:25 compute-0 ceph-mon[73607]: pgmap v2264: 305 pgs: 305 active+clean; 636 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.8 MiB/s wr, 92 op/s
Oct 02 12:36:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:36:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:25.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:36:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2265: 305 pgs: 305 active+clean; 608 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 340 KiB/s rd, 3.8 MiB/s wr, 106 op/s
Oct 02 12:36:25 compute-0 podman[344732]: 2025-10-02 12:36:25.934639514 +0000 UTC m=+0.075732573 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:36:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:36:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:26.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:36:26 compute-0 nova_compute[257802]: 2025-10-02 12:36:26.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:26.954 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:26.955 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:26.955 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:27.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:27 compute-0 ceph-mon[73607]: pgmap v2265: 305 pgs: 305 active+clean; 608 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 340 KiB/s rd, 3.8 MiB/s wr, 106 op/s
Oct 02 12:36:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:36:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2266: 305 pgs: 305 active+clean; 608 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 329 KiB/s rd, 3.8 MiB/s wr, 106 op/s
Oct 02 12:36:28 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:36:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:36:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:28.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:28 compute-0 sudo[344760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:28 compute-0 sudo[344760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:28 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:36:28 compute-0 sudo[344760]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:36:28 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:36:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:36:28 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:36:28 compute-0 nova_compute[257802]: 2025-10-02 12:36:28.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:36:28 compute-0 sudo[344785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:28 compute-0 sudo[344785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:28 compute-0 sudo[344785]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:28 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:36:28 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 9f08dec4-77bc-45c4-87a8-e79b4dc2901a does not exist
Oct 02 12:36:28 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 70c156fd-b3f2-421f-a31e-7bf65f223401 does not exist
Oct 02 12:36:28 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev a31c6bce-a44e-4b38-a920-c81991d3f9e7 does not exist
Oct 02 12:36:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:36:28 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:36:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:36:28 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:36:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:36:28 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:36:28 compute-0 sudo[344811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:28 compute-0 sudo[344811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:28 compute-0 sudo[344811]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:28 compute-0 sudo[344836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:36:28 compute-0 sudo[344836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:28 compute-0 sudo[344836]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:28 compute-0 sudo[344861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:28 compute-0 sudo[344861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:28 compute-0 sudo[344861]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:29 compute-0 sudo[344886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:36:29 compute-0 sudo[344886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:29 compute-0 ceph-mon[73607]: pgmap v2266: 305 pgs: 305 active+clean; 608 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 329 KiB/s rd, 3.8 MiB/s wr, 106 op/s
Oct 02 12:36:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:36:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:36:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:36:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:36:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:36:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:36:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:36:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:36:29 compute-0 podman[344953]: 2025-10-02 12:36:29.322161324 +0000 UTC m=+0.020595804 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:36:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:36:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:29.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:36:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2267: 305 pgs: 305 active+clean; 539 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 383 KiB/s rd, 3.9 MiB/s wr, 147 op/s
Oct 02 12:36:29 compute-0 podman[344953]: 2025-10-02 12:36:29.815495004 +0000 UTC m=+0.513929464 container create 65c352f0bc09b156e915841e9f707c8d653c105e01d8273027fdd64eac9d1ad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaum, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct 02 12:36:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:30 compute-0 systemd[1]: Started libpod-conmon-65c352f0bc09b156e915841e9f707c8d653c105e01d8273027fdd64eac9d1ad6.scope.
Oct 02 12:36:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:36:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:36:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:30.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:36:30 compute-0 podman[344953]: 2025-10-02 12:36:30.476514626 +0000 UTC m=+1.174949106 container init 65c352f0bc09b156e915841e9f707c8d653c105e01d8273027fdd64eac9d1ad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:36:30 compute-0 podman[344953]: 2025-10-02 12:36:30.483730592 +0000 UTC m=+1.182165042 container start 65c352f0bc09b156e915841e9f707c8d653c105e01d8273027fdd64eac9d1ad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:36:30 compute-0 charming_chaum[344969]: 167 167
Oct 02 12:36:30 compute-0 systemd[1]: libpod-65c352f0bc09b156e915841e9f707c8d653c105e01d8273027fdd64eac9d1ad6.scope: Deactivated successfully.
Oct 02 12:36:30 compute-0 podman[344953]: 2025-10-02 12:36:30.702366161 +0000 UTC m=+1.400800641 container attach 65c352f0bc09b156e915841e9f707c8d653c105e01d8273027fdd64eac9d1ad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaum, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 12:36:30 compute-0 podman[344953]: 2025-10-02 12:36:30.704004261 +0000 UTC m=+1.402438751 container died 65c352f0bc09b156e915841e9f707c8d653c105e01d8273027fdd64eac9d1ad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 12:36:30 compute-0 ceph-mon[73607]: pgmap v2267: 305 pgs: 305 active+clean; 539 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 383 KiB/s rd, 3.9 MiB/s wr, 147 op/s
Oct 02 12:36:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:31.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:31 compute-0 nova_compute[257802]: 2025-10-02 12:36:31.592 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408576.590229, 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:36:31 compute-0 nova_compute[257802]: 2025-10-02 12:36:31.592 2 INFO nova.compute.manager [-] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] VM Stopped (Lifecycle Event)
Oct 02 12:36:31 compute-0 nova_compute[257802]: 2025-10-02 12:36:31.640 2 DEBUG nova.compute.manager [None req-94a282f3-bd7c-47eb-9413-d865639ad66c - - - - - -] [instance: 7b3aa782-5b49-4fe9-8bbb-4e43b3c8da75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:36:31 compute-0 nova_compute[257802]: 2025-10-02 12:36:31.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2268: 305 pgs: 305 active+clean; 539 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 113 op/s
Oct 02 12:36:31 compute-0 nova_compute[257802]: 2025-10-02 12:36:31.777 2 DEBUG oslo_concurrency.lockutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "17766045-13fc-4377-848f-6815e8a474d5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:31 compute-0 nova_compute[257802]: 2025-10-02 12:36:31.778 2 DEBUG oslo_concurrency.lockutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "17766045-13fc-4377-848f-6815e8a474d5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:31 compute-0 nova_compute[257802]: 2025-10-02 12:36:31.806 2 DEBUG nova.compute.manager [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:36:31 compute-0 nova_compute[257802]: 2025-10-02 12:36:31.910 2 DEBUG oslo_concurrency.lockutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:31 compute-0 nova_compute[257802]: 2025-10-02 12:36:31.910 2 DEBUG oslo_concurrency.lockutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:31 compute-0 nova_compute[257802]: 2025-10-02 12:36:31.920 2 DEBUG nova.virt.hardware [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:36:31 compute-0 nova_compute[257802]: 2025-10-02 12:36:31.920 2 INFO nova.compute.claims [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:36:32 compute-0 nova_compute[257802]: 2025-10-02 12:36:32.051 2 DEBUG oslo_concurrency.processutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:32.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-77897cc5860fb4e95b92ba6d4eccfa8c5b7b73c9213387a66e3b2a4ab4eded6d-merged.mount: Deactivated successfully.
Oct 02 12:36:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:36:32 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2879401842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:32 compute-0 nova_compute[257802]: 2025-10-02 12:36:32.977 2 DEBUG oslo_concurrency.processutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.926s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:32 compute-0 nova_compute[257802]: 2025-10-02 12:36:32.984 2 DEBUG nova.compute.provider_tree [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:36:33 compute-0 nova_compute[257802]: 2025-10-02 12:36:33.000 2 DEBUG nova.scheduler.client.report [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:36:33 compute-0 nova_compute[257802]: 2025-10-02 12:36:33.023 2 DEBUG oslo_concurrency.lockutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:33 compute-0 nova_compute[257802]: 2025-10-02 12:36:33.024 2 DEBUG nova.compute.manager [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:36:33 compute-0 nova_compute[257802]: 2025-10-02 12:36:33.069 2 DEBUG nova.compute.manager [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:36:33 compute-0 nova_compute[257802]: 2025-10-02 12:36:33.070 2 DEBUG nova.network.neutron [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:36:33 compute-0 ceph-mon[73607]: pgmap v2268: 305 pgs: 305 active+clean; 539 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 113 op/s
Oct 02 12:36:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/86529349' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:33 compute-0 nova_compute[257802]: 2025-10-02 12:36:33.089 2 INFO nova.virt.libvirt.driver [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:36:33 compute-0 nova_compute[257802]: 2025-10-02 12:36:33.106 2 DEBUG nova.compute.manager [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:36:33 compute-0 nova_compute[257802]: 2025-10-02 12:36:33.184 2 DEBUG nova.compute.manager [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:36:33 compute-0 nova_compute[257802]: 2025-10-02 12:36:33.186 2 DEBUG nova.virt.libvirt.driver [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:36:33 compute-0 nova_compute[257802]: 2025-10-02 12:36:33.186 2 INFO nova.virt.libvirt.driver [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Creating image(s)
Oct 02 12:36:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:33.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:33 compute-0 nova_compute[257802]: 2025-10-02 12:36:33.767 2 DEBUG nova.storage.rbd_utils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image 17766045-13fc-4377-848f-6815e8a474d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:36:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2269: 305 pgs: 305 active+clean; 534 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 595 KiB/s rd, 2.1 MiB/s wr, 135 op/s
Oct 02 12:36:33 compute-0 nova_compute[257802]: 2025-10-02 12:36:33.809 2 DEBUG nova.storage.rbd_utils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image 17766045-13fc-4377-848f-6815e8a474d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:36:33 compute-0 nova_compute[257802]: 2025-10-02 12:36:33.850 2 DEBUG nova.storage.rbd_utils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image 17766045-13fc-4377-848f-6815e8a474d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:36:33 compute-0 nova_compute[257802]: 2025-10-02 12:36:33.855 2 DEBUG oslo_concurrency.processutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:33 compute-0 nova_compute[257802]: 2025-10-02 12:36:33.896 2 DEBUG nova.policy [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c02d1dcc10ea4e57bbc6b7a3c100dc7b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6822f02d5ca04c659329a75d487054cf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:36:33 compute-0 nova_compute[257802]: 2025-10-02 12:36:33.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:33 compute-0 nova_compute[257802]: 2025-10-02 12:36:33.945 2 DEBUG oslo_concurrency.processutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:33 compute-0 nova_compute[257802]: 2025-10-02 12:36:33.946 2 DEBUG oslo_concurrency.lockutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:33 compute-0 nova_compute[257802]: 2025-10-02 12:36:33.946 2 DEBUG oslo_concurrency.lockutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:33 compute-0 nova_compute[257802]: 2025-10-02 12:36:33.947 2 DEBUG oslo_concurrency.lockutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:33 compute-0 nova_compute[257802]: 2025-10-02 12:36:33.972 2 DEBUG nova.storage.rbd_utils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image 17766045-13fc-4377-848f-6815e8a474d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:36:33 compute-0 nova_compute[257802]: 2025-10-02 12:36:33.976 2 DEBUG oslo_concurrency.processutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 17766045-13fc-4377-848f-6815e8a474d5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:36:34 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1791666093' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:36:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:36:34 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1791666093' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:36:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:34.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:34 compute-0 podman[344953]: 2025-10-02 12:36:34.32934883 +0000 UTC m=+5.027783290 container remove 65c352f0bc09b156e915841e9f707c8d653c105e01d8273027fdd64eac9d1ad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaum, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 12:36:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2879401842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1791666093' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:36:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1791666093' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:36:34 compute-0 systemd[1]: libpod-conmon-65c352f0bc09b156e915841e9f707c8d653c105e01d8273027fdd64eac9d1ad6.scope: Deactivated successfully.
Oct 02 12:36:34 compute-0 podman[345109]: 2025-10-02 12:36:34.49731829 +0000 UTC m=+0.022921852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:36:34 compute-0 podman[345109]: 2025-10-02 12:36:34.72455047 +0000 UTC m=+0.250154032 container create fb62d55a72bbc6767f68a7953b95768552af8a99ddfb8d50baeba7ae025b208b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 12:36:34 compute-0 nova_compute[257802]: 2025-10-02 12:36:34.878 2 DEBUG nova.network.neutron [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Successfully created port: 4c06cc55-6b35-48e0-892a-4fd710f2cf39 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:36:34 compute-0 systemd[1]: Started libpod-conmon-fb62d55a72bbc6767f68a7953b95768552af8a99ddfb8d50baeba7ae025b208b.scope.
Oct 02 12:36:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4607ba4730a9c59f2ada894f769ca58f18231d98007bbe9c0c51b9757f0675d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4607ba4730a9c59f2ada894f769ca58f18231d98007bbe9c0c51b9757f0675d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4607ba4730a9c59f2ada894f769ca58f18231d98007bbe9c0c51b9757f0675d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4607ba4730a9c59f2ada894f769ca58f18231d98007bbe9c0c51b9757f0675d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4607ba4730a9c59f2ada894f769ca58f18231d98007bbe9c0c51b9757f0675d5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:36:35 compute-0 podman[345109]: 2025-10-02 12:36:35.031563911 +0000 UTC m=+0.557167533 container init fb62d55a72bbc6767f68a7953b95768552af8a99ddfb8d50baeba7ae025b208b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_villani, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:36:35 compute-0 podman[345109]: 2025-10-02 12:36:35.039523606 +0000 UTC m=+0.565127148 container start fb62d55a72bbc6767f68a7953b95768552af8a99ddfb8d50baeba7ae025b208b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_villani, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:36:35.063758) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408595063818, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 1709, "num_deletes": 257, "total_data_size": 2985800, "memory_usage": 3020624, "flush_reason": "Manual Compaction"}
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Oct 02 12:36:35 compute-0 podman[345109]: 2025-10-02 12:36:35.44106718 +0000 UTC m=+0.966670812 container attach fb62d55a72bbc6767f68a7953b95768552af8a99ddfb8d50baeba7ae025b208b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408595441271, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 2909933, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48861, "largest_seqno": 50569, "table_properties": {"data_size": 2901821, "index_size": 4862, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17419, "raw_average_key_size": 20, "raw_value_size": 2885420, "raw_average_value_size": 3406, "num_data_blocks": 211, "num_entries": 847, "num_filter_entries": 847, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408438, "oldest_key_time": 1759408438, "file_creation_time": 1759408595, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 377571 microseconds, and 6723 cpu microseconds.
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:36:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:36:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:35.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:36:35.441332) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 2909933 bytes OK
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:36:35.441359) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:36:35.709411) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:36:35.709449) EVENT_LOG_v1 {"time_micros": 1759408595709441, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:36:35.709479) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 2978428, prev total WAL file size 3010087, number of live WAL files 2.
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:36:35.711425) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373536' seq:72057594037927935, type:22 .. '6C6F676D0032303037' seq:0, type:0; will stop at (end)
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(2841KB)], [107(10MB)]
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408595711451, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 13501059, "oldest_snapshot_seqno": -1}
Oct 02 12:36:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2270: 305 pgs: 305 active+clean; 534 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 179 op/s
Oct 02 12:36:35 compute-0 nice_villani[345126]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:36:35 compute-0 nice_villani[345126]: --> relative data size: 1.0
Oct 02 12:36:35 compute-0 nice_villani[345126]: --> All data devices are unavailable
Oct 02 12:36:35 compute-0 nova_compute[257802]: 2025-10-02 12:36:35.884 2 DEBUG nova.network.neutron [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Successfully updated port: 4c06cc55-6b35-48e0-892a-4fd710f2cf39 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:36:35 compute-0 nova_compute[257802]: 2025-10-02 12:36:35.900 2 DEBUG oslo_concurrency.lockutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "refresh_cache-17766045-13fc-4377-848f-6815e8a474d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:36:35 compute-0 nova_compute[257802]: 2025-10-02 12:36:35.900 2 DEBUG oslo_concurrency.lockutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquired lock "refresh_cache-17766045-13fc-4377-848f-6815e8a474d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:36:35 compute-0 nova_compute[257802]: 2025-10-02 12:36:35.900 2 DEBUG nova.network.neutron [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:36:35 compute-0 systemd[1]: libpod-fb62d55a72bbc6767f68a7953b95768552af8a99ddfb8d50baeba7ae025b208b.scope: Deactivated successfully.
Oct 02 12:36:35 compute-0 podman[345109]: 2025-10-02 12:36:35.927127113 +0000 UTC m=+1.452730675 container died fb62d55a72bbc6767f68a7953b95768552af8a99ddfb8d50baeba7ae025b208b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 7986 keys, 13336267 bytes, temperature: kUnknown
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408595966442, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 13336267, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13280803, "index_size": 34424, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19973, "raw_key_size": 206094, "raw_average_key_size": 25, "raw_value_size": 13136442, "raw_average_value_size": 1644, "num_data_blocks": 1361, "num_entries": 7986, "num_filter_entries": 7986, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759408595, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:36:35.966686) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 13336267 bytes
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:36:35.973956) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 52.9 rd, 52.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 10.1 +0.0 blob) out(12.7 +0.0 blob), read-write-amplify(9.2) write-amplify(4.6) OK, records in: 8521, records dropped: 535 output_compression: NoCompression
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:36:35.973992) EVENT_LOG_v1 {"time_micros": 1759408595973977, "job": 64, "event": "compaction_finished", "compaction_time_micros": 255064, "compaction_time_cpu_micros": 27654, "output_level": 6, "num_output_files": 1, "total_output_size": 13336267, "num_input_records": 8521, "num_output_records": 7986, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408595974894, "job": 64, "event": "table_file_deletion", "file_number": 109}
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408595977524, "job": 64, "event": "table_file_deletion", "file_number": 107}
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:36:35.711296) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:36:35.977630) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:36:35.977636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:36:35.977637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:36:35.977638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:36:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:36:35.977640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:36:35 compute-0 ceph-mon[73607]: pgmap v2269: 305 pgs: 305 active+clean; 534 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 595 KiB/s rd, 2.1 MiB/s wr, 135 op/s
Oct 02 12:36:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-4607ba4730a9c59f2ada894f769ca58f18231d98007bbe9c0c51b9757f0675d5-merged.mount: Deactivated successfully.
Oct 02 12:36:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:36.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:36 compute-0 nova_compute[257802]: 2025-10-02 12:36:36.270 2 DEBUG nova.network.neutron [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:36:36 compute-0 podman[345109]: 2025-10-02 12:36:36.553433796 +0000 UTC m=+2.079037338 container remove fb62d55a72bbc6767f68a7953b95768552af8a99ddfb8d50baeba7ae025b208b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_villani, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:36:36 compute-0 systemd[1]: libpod-conmon-fb62d55a72bbc6767f68a7953b95768552af8a99ddfb8d50baeba7ae025b208b.scope: Deactivated successfully.
Oct 02 12:36:36 compute-0 nova_compute[257802]: 2025-10-02 12:36:36.588 2 DEBUG oslo_concurrency.processutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 17766045-13fc-4377-848f-6815e8a474d5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:36 compute-0 sudo[344886]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:36 compute-0 sudo[345170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:36 compute-0 sudo[345170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:36 compute-0 sudo[345170]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:36 compute-0 nova_compute[257802]: 2025-10-02 12:36:36.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:36 compute-0 nova_compute[257802]: 2025-10-02 12:36:36.676 2 DEBUG nova.storage.rbd_utils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] resizing rbd image 17766045-13fc-4377-848f-6815e8a474d5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:36:36 compute-0 sudo[345217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:36:36 compute-0 sudo[345217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:36 compute-0 sudo[345217]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:36 compute-0 sudo[345260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:36 compute-0 sudo[345260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:36 compute-0 sudo[345260]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:36 compute-0 sudo[345285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:36:36 compute-0 sudo[345285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:37 compute-0 ceph-mon[73607]: pgmap v2270: 305 pgs: 305 active+clean; 534 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 179 op/s
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.254 2 DEBUG nova.network.neutron [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Updating instance_info_cache with network_info: [{"id": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "address": "fa:16:3e:ec:94:f8", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c06cc55-6b", "ovs_interfaceid": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:36:37 compute-0 podman[345348]: 2025-10-02 12:36:37.17569883 +0000 UTC m=+0.024545371 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:36:37 compute-0 podman[345348]: 2025-10-02 12:36:37.271194098 +0000 UTC m=+0.120040619 container create 8db571ecab9da5740e9b7b9ff2e6577a81cca7f7af61bd8f67288ba5ebabf86c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.279 2 DEBUG oslo_concurrency.lockutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Releasing lock "refresh_cache-17766045-13fc-4377-848f-6815e8a474d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.280 2 DEBUG nova.compute.manager [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Instance network_info: |[{"id": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "address": "fa:16:3e:ec:94:f8", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c06cc55-6b", "ovs_interfaceid": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:36:37 compute-0 systemd[1]: Started libpod-conmon-8db571ecab9da5740e9b7b9ff2e6577a81cca7f7af61bd8f67288ba5ebabf86c.scope.
Oct 02 12:36:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:37.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.572 2 DEBUG nova.compute.manager [req-77bb5514-342a-4122-a9cf-c778fdb284c5 req-257f07bd-f9a7-4788-9fd9-add7182eedfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Received event network-changed-4c06cc55-6b35-48e0-892a-4fd710f2cf39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.572 2 DEBUG nova.compute.manager [req-77bb5514-342a-4122-a9cf-c778fdb284c5 req-257f07bd-f9a7-4788-9fd9-add7182eedfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Refreshing instance network info cache due to event network-changed-4c06cc55-6b35-48e0-892a-4fd710f2cf39. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.572 2 DEBUG oslo_concurrency.lockutils [req-77bb5514-342a-4122-a9cf-c778fdb284c5 req-257f07bd-f9a7-4788-9fd9-add7182eedfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-17766045-13fc-4377-848f-6815e8a474d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.573 2 DEBUG oslo_concurrency.lockutils [req-77bb5514-342a-4122-a9cf-c778fdb284c5 req-257f07bd-f9a7-4788-9fd9-add7182eedfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-17766045-13fc-4377-848f-6815e8a474d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.573 2 DEBUG nova.network.neutron [req-77bb5514-342a-4122-a9cf-c778fdb284c5 req-257f07bd-f9a7-4788-9fd9-add7182eedfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Refreshing network info cache for port 4c06cc55-6b35-48e0-892a-4fd710f2cf39 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:36:37 compute-0 podman[345348]: 2025-10-02 12:36:37.592310344 +0000 UTC m=+0.441156885 container init 8db571ecab9da5740e9b7b9ff2e6577a81cca7f7af61bd8f67288ba5ebabf86c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ramanujan, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:36:37 compute-0 podman[345348]: 2025-10-02 12:36:37.602908083 +0000 UTC m=+0.451754604 container start 8db571ecab9da5740e9b7b9ff2e6577a81cca7f7af61bd8f67288ba5ebabf86c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:36:37 compute-0 zealous_ramanujan[345364]: 167 167
Oct 02 12:36:37 compute-0 systemd[1]: libpod-8db571ecab9da5740e9b7b9ff2e6577a81cca7f7af61bd8f67288ba5ebabf86c.scope: Deactivated successfully.
Oct 02 12:36:37 compute-0 podman[345348]: 2025-10-02 12:36:37.738155012 +0000 UTC m=+0.587001573 container attach 8db571ecab9da5740e9b7b9ff2e6577a81cca7f7af61bd8f67288ba5ebabf86c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ramanujan, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:36:37 compute-0 podman[345348]: 2025-10-02 12:36:37.739512895 +0000 UTC m=+0.588359446 container died 8db571ecab9da5740e9b7b9ff2e6577a81cca7f7af61bd8f67288ba5ebabf86c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ramanujan, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:36:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2271: 305 pgs: 305 active+clean; 534 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 100 KiB/s wr, 127 op/s
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.899 2 DEBUG nova.objects.instance [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lazy-loading 'migration_context' on Instance uuid 17766045-13fc-4377-848f-6815e8a474d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.923 2 DEBUG nova.virt.libvirt.driver [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.924 2 DEBUG nova.virt.libvirt.driver [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Ensure instance console log exists: /var/lib/nova/instances/17766045-13fc-4377-848f-6815e8a474d5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.924 2 DEBUG oslo_concurrency.lockutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.925 2 DEBUG oslo_concurrency.lockutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.925 2 DEBUG oslo_concurrency.lockutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.932 2 DEBUG nova.virt.libvirt.driver [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Start _get_guest_xml network_info=[{"id": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "address": "fa:16:3e:ec:94:f8", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c06cc55-6b", "ovs_interfaceid": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.940 2 WARNING nova.virt.libvirt.driver [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.947 2 DEBUG nova.virt.libvirt.host [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.948 2 DEBUG nova.virt.libvirt.host [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.952 2 DEBUG nova.virt.libvirt.host [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.953 2 DEBUG nova.virt.libvirt.host [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.956 2 DEBUG nova.virt.libvirt.driver [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.956 2 DEBUG nova.virt.hardware [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.957 2 DEBUG nova.virt.hardware [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.957 2 DEBUG nova.virt.hardware [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.958 2 DEBUG nova.virt.hardware [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.958 2 DEBUG nova.virt.hardware [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.958 2 DEBUG nova.virt.hardware [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.959 2 DEBUG nova.virt.hardware [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.959 2 DEBUG nova.virt.hardware [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.959 2 DEBUG nova.virt.hardware [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.960 2 DEBUG nova.virt.hardware [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.960 2 DEBUG nova.virt.hardware [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:36:37 compute-0 nova_compute[257802]: 2025-10-02 12:36:37.965 2 DEBUG oslo_concurrency.processutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f5709a7eb96c9ed17e8e76107daf8bd4261d9b805671aa1faf987071849f9b3-merged.mount: Deactivated successfully.
Oct 02 12:36:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:36:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:38.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:36:38 compute-0 podman[345348]: 2025-10-02 12:36:38.347321287 +0000 UTC m=+1.196167808 container remove 8db571ecab9da5740e9b7b9ff2e6577a81cca7f7af61bd8f67288ba5ebabf86c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:36:38 compute-0 systemd[1]: libpod-conmon-8db571ecab9da5740e9b7b9ff2e6577a81cca7f7af61bd8f67288ba5ebabf86c.scope: Deactivated successfully.
Oct 02 12:36:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:36:38 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/725298687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:36:38 compute-0 nova_compute[257802]: 2025-10-02 12:36:38.482 2 DEBUG oslo_concurrency.processutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:38 compute-0 nova_compute[257802]: 2025-10-02 12:36:38.509 2 DEBUG nova.storage.rbd_utils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image 17766045-13fc-4377-848f-6815e8a474d5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:36:38 compute-0 nova_compute[257802]: 2025-10-02 12:36:38.529 2 DEBUG oslo_concurrency.processutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:38 compute-0 nova_compute[257802]: 2025-10-02 12:36:38.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:38 compute-0 podman[345430]: 2025-10-02 12:36:38.508184663 +0000 UTC m=+0.028312514 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:36:38 compute-0 podman[345430]: 2025-10-02 12:36:38.635101778 +0000 UTC m=+0.155229599 container create d0a5e1e3624be174a5c40b8863f50db9a172975ac5ec4924724f51e1a9544101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lamport, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 12:36:38 compute-0 systemd[1]: Started libpod-conmon-d0a5e1e3624be174a5c40b8863f50db9a172975ac5ec4924724f51e1a9544101.scope.
Oct 02 12:36:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d331d65210a3630ff7b3a5688f8f8ef966410beea327a78668c35d4f7c973486/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d331d65210a3630ff7b3a5688f8f8ef966410beea327a78668c35d4f7c973486/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d331d65210a3630ff7b3a5688f8f8ef966410beea327a78668c35d4f7c973486/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d331d65210a3630ff7b3a5688f8f8ef966410beea327a78668c35d4f7c973486/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:36:38 compute-0 podman[345430]: 2025-10-02 12:36:38.865642108 +0000 UTC m=+0.385769929 container init d0a5e1e3624be174a5c40b8863f50db9a172975ac5ec4924724f51e1a9544101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lamport, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 12:36:38 compute-0 podman[345430]: 2025-10-02 12:36:38.874623128 +0000 UTC m=+0.394750949 container start d0a5e1e3624be174a5c40b8863f50db9a172975ac5ec4924724f51e1a9544101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lamport, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:36:38 compute-0 nova_compute[257802]: 2025-10-02 12:36:38.904 2 DEBUG nova.network.neutron [req-77bb5514-342a-4122-a9cf-c778fdb284c5 req-257f07bd-f9a7-4788-9fd9-add7182eedfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Updated VIF entry in instance network info cache for port 4c06cc55-6b35-48e0-892a-4fd710f2cf39. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:36:38 compute-0 nova_compute[257802]: 2025-10-02 12:36:38.905 2 DEBUG nova.network.neutron [req-77bb5514-342a-4122-a9cf-c778fdb284c5 req-257f07bd-f9a7-4788-9fd9-add7182eedfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Updating instance_info_cache with network_info: [{"id": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "address": "fa:16:3e:ec:94:f8", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c06cc55-6b", "ovs_interfaceid": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:36:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:36:38 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3377345661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:36:38 compute-0 nova_compute[257802]: 2025-10-02 12:36:38.955 2 DEBUG oslo_concurrency.lockutils [req-77bb5514-342a-4122-a9cf-c778fdb284c5 req-257f07bd-f9a7-4788-9fd9-add7182eedfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-17766045-13fc-4377-848f-6815e8a474d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:36:38 compute-0 nova_compute[257802]: 2025-10-02 12:36:38.973 2 DEBUG oslo_concurrency.processutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:38 compute-0 nova_compute[257802]: 2025-10-02 12:36:38.974 2 DEBUG nova.virt.libvirt.vif [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:36:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1488915729',display_name='tempest-ServerActionsTestOtherA-server-1488915729',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1488915729',id=141,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMKHZAu99lFpnj9bTKPOhdWg5y6PnKM9AGAnFElgiPr53bQbl7DsEAEg0Hu4Ea2RYl8QhrjFhPMuXkYw2ubt4hnzcTRuj+jAHGGBwDWRc1fX16YtY5a2rZP1IKxVnq/Inw==',key_name='tempest-keypair-26130845',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6822f02d5ca04c659329a75d487054cf',ramdisk_id='',reservation_id='r-ti5wv1hn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1680083910',owner_user_name='tempest-ServerActionsTestOtherA-1680083910-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:36:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c02d1dcc10ea4e57bbc6b7a3c100dc7b',uuid=17766045-13fc-4377-848f-6815e8a474d5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "address": "fa:16:3e:ec:94:f8", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c06cc55-6b", "ovs_interfaceid": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:36:38 compute-0 nova_compute[257802]: 2025-10-02 12:36:38.974 2 DEBUG nova.network.os_vif_util [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converting VIF {"id": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "address": "fa:16:3e:ec:94:f8", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c06cc55-6b", "ovs_interfaceid": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:36:38 compute-0 nova_compute[257802]: 2025-10-02 12:36:38.975 2 DEBUG nova.network.os_vif_util [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:94:f8,bridge_name='br-int',has_traffic_filtering=True,id=4c06cc55-6b35-48e0-892a-4fd710f2cf39,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c06cc55-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:36:38 compute-0 nova_compute[257802]: 2025-10-02 12:36:38.976 2 DEBUG nova.objects.instance [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lazy-loading 'pci_devices' on Instance uuid 17766045-13fc-4377-848f-6815e8a474d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:36:39 compute-0 podman[345430]: 2025-10-02 12:36:39.01533186 +0000 UTC m=+0.535459681 container attach d0a5e1e3624be174a5c40b8863f50db9a172975ac5ec4924724f51e1a9544101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:36:39 compute-0 nova_compute[257802]: 2025-10-02 12:36:39.027 2 DEBUG nova.virt.libvirt.driver [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:36:39 compute-0 nova_compute[257802]:   <uuid>17766045-13fc-4377-848f-6815e8a474d5</uuid>
Oct 02 12:36:39 compute-0 nova_compute[257802]:   <name>instance-0000008d</name>
Oct 02 12:36:39 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:36:39 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:36:39 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:36:39 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:       <nova:name>tempest-ServerActionsTestOtherA-server-1488915729</nova:name>
Oct 02 12:36:39 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:36:37</nova:creationTime>
Oct 02 12:36:39 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:36:39 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:36:39 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:36:39 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:36:39 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:36:39 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:36:39 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:36:39 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:36:39 compute-0 nova_compute[257802]:         <nova:user uuid="c02d1dcc10ea4e57bbc6b7a3c100dc7b">tempest-ServerActionsTestOtherA-1680083910-project-member</nova:user>
Oct 02 12:36:39 compute-0 nova_compute[257802]:         <nova:project uuid="6822f02d5ca04c659329a75d487054cf">tempest-ServerActionsTestOtherA-1680083910</nova:project>
Oct 02 12:36:39 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:36:39 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:36:39 compute-0 nova_compute[257802]:         <nova:port uuid="4c06cc55-6b35-48e0-892a-4fd710f2cf39">
Oct 02 12:36:39 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:36:39 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:36:39 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:36:39 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <system>
Oct 02 12:36:39 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:36:39 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:36:39 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:36:39 compute-0 nova_compute[257802]:       <entry name="serial">17766045-13fc-4377-848f-6815e8a474d5</entry>
Oct 02 12:36:39 compute-0 nova_compute[257802]:       <entry name="uuid">17766045-13fc-4377-848f-6815e8a474d5</entry>
Oct 02 12:36:39 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     </system>
Oct 02 12:36:39 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:36:39 compute-0 nova_compute[257802]:   <os>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:   </os>
Oct 02 12:36:39 compute-0 nova_compute[257802]:   <features>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:   </features>
Oct 02 12:36:39 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:36:39 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:36:39 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:36:39 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/17766045-13fc-4377-848f-6815e8a474d5_disk">
Oct 02 12:36:39 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:       </source>
Oct 02 12:36:39 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:36:39 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:36:39 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:36:39 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/17766045-13fc-4377-848f-6815e8a474d5_disk.config">
Oct 02 12:36:39 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:       </source>
Oct 02 12:36:39 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:36:39 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:36:39 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:36:39 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:ec:94:f8"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:       <target dev="tap4c06cc55-6b"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:36:39 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/17766045-13fc-4377-848f-6815e8a474d5/console.log" append="off"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <video>
Oct 02 12:36:39 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     </video>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:36:39 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:36:39 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:36:39 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:36:39 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:36:39 compute-0 nova_compute[257802]: </domain>
Oct 02 12:36:39 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:36:39 compute-0 nova_compute[257802]: 2025-10-02 12:36:39.027 2 DEBUG nova.compute.manager [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Preparing to wait for external event network-vif-plugged-4c06cc55-6b35-48e0-892a-4fd710f2cf39 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:36:39 compute-0 nova_compute[257802]: 2025-10-02 12:36:39.028 2 DEBUG oslo_concurrency.lockutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "17766045-13fc-4377-848f-6815e8a474d5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:39 compute-0 nova_compute[257802]: 2025-10-02 12:36:39.028 2 DEBUG oslo_concurrency.lockutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "17766045-13fc-4377-848f-6815e8a474d5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:39 compute-0 nova_compute[257802]: 2025-10-02 12:36:39.028 2 DEBUG oslo_concurrency.lockutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "17766045-13fc-4377-848f-6815e8a474d5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:39 compute-0 nova_compute[257802]: 2025-10-02 12:36:39.029 2 DEBUG nova.virt.libvirt.vif [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:36:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1488915729',display_name='tempest-ServerActionsTestOtherA-server-1488915729',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1488915729',id=141,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMKHZAu99lFpnj9bTKPOhdWg5y6PnKM9AGAnFElgiPr53bQbl7DsEAEg0Hu4Ea2RYl8QhrjFhPMuXkYw2ubt4hnzcTRuj+jAHGGBwDWRc1fX16YtY5a2rZP1IKxVnq/Inw==',key_name='tempest-keypair-26130845',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6822f02d5ca04c659329a75d487054cf',ramdisk_id='',reservation_id='r-ti5wv1hn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1680083910',owner_user_name='tempest-ServerActionsTestOtherA-1680083910-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:36:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c02d1dcc10ea4e57bbc6b7a3c100dc7b',uuid=17766045-13fc-4377-848f-6815e8a474d5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "address": "fa:16:3e:ec:94:f8", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c06cc55-6b", "ovs_interfaceid": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:36:39 compute-0 nova_compute[257802]: 2025-10-02 12:36:39.029 2 DEBUG nova.network.os_vif_util [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converting VIF {"id": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "address": "fa:16:3e:ec:94:f8", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c06cc55-6b", "ovs_interfaceid": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:36:39 compute-0 nova_compute[257802]: 2025-10-02 12:36:39.030 2 DEBUG nova.network.os_vif_util [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:94:f8,bridge_name='br-int',has_traffic_filtering=True,id=4c06cc55-6b35-48e0-892a-4fd710f2cf39,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c06cc55-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:36:39 compute-0 nova_compute[257802]: 2025-10-02 12:36:39.031 2 DEBUG os_vif [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:94:f8,bridge_name='br-int',has_traffic_filtering=True,id=4c06cc55-6b35-48e0-892a-4fd710f2cf39,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c06cc55-6b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:36:39 compute-0 nova_compute[257802]: 2025-10-02 12:36:39.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:39 compute-0 nova_compute[257802]: 2025-10-02 12:36:39.031 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:39 compute-0 nova_compute[257802]: 2025-10-02 12:36:39.032 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:36:39 compute-0 nova_compute[257802]: 2025-10-02 12:36:39.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:39 compute-0 nova_compute[257802]: 2025-10-02 12:36:39.035 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c06cc55-6b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:39 compute-0 nova_compute[257802]: 2025-10-02 12:36:39.035 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4c06cc55-6b, col_values=(('external_ids', {'iface-id': '4c06cc55-6b35-48e0-892a-4fd710f2cf39', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:94:f8', 'vm-uuid': '17766045-13fc-4377-848f-6815e8a474d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:39 compute-0 nova_compute[257802]: 2025-10-02 12:36:39.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:39 compute-0 NetworkManager[44987]: <info>  [1759408599.0377] manager: (tap4c06cc55-6b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/287)
Oct 02 12:36:39 compute-0 nova_compute[257802]: 2025-10-02 12:36:39.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:36:39 compute-0 nova_compute[257802]: 2025-10-02 12:36:39.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:39 compute-0 nova_compute[257802]: 2025-10-02 12:36:39.057 2 INFO os_vif [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:94:f8,bridge_name='br-int',has_traffic_filtering=True,id=4c06cc55-6b35-48e0-892a-4fd710f2cf39,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c06cc55-6b')
Oct 02 12:36:39 compute-0 nova_compute[257802]: 2025-10-02 12:36:39.209 2 DEBUG nova.virt.libvirt.driver [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:36:39 compute-0 nova_compute[257802]: 2025-10-02 12:36:39.210 2 DEBUG nova.virt.libvirt.driver [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:36:39 compute-0 nova_compute[257802]: 2025-10-02 12:36:39.210 2 DEBUG nova.virt.libvirt.driver [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] No VIF found with MAC fa:16:3e:ec:94:f8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:36:39 compute-0 nova_compute[257802]: 2025-10-02 12:36:39.210 2 INFO nova.virt.libvirt.driver [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Using config drive
Oct 02 12:36:39 compute-0 nova_compute[257802]: 2025-10-02 12:36:39.233 2 DEBUG nova.storage.rbd_utils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image 17766045-13fc-4377-848f-6815e8a474d5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:36:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:39.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:39 compute-0 nova_compute[257802]: 2025-10-02 12:36:39.537 2 INFO nova.virt.libvirt.driver [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Creating config drive at /var/lib/nova/instances/17766045-13fc-4377-848f-6815e8a474d5/disk.config
Oct 02 12:36:39 compute-0 nova_compute[257802]: 2025-10-02 12:36:39.542 2 DEBUG oslo_concurrency.processutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/17766045-13fc-4377-848f-6815e8a474d5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuze5frip execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:39 compute-0 ceph-mon[73607]: pgmap v2271: 305 pgs: 305 active+clean; 534 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 100 KiB/s wr, 127 op/s
Oct 02 12:36:39 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/725298687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:36:39 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3377345661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:36:39 compute-0 jolly_lamport[345485]: {
Oct 02 12:36:39 compute-0 jolly_lamport[345485]:     "1": [
Oct 02 12:36:39 compute-0 jolly_lamport[345485]:         {
Oct 02 12:36:39 compute-0 jolly_lamport[345485]:             "devices": [
Oct 02 12:36:39 compute-0 jolly_lamport[345485]:                 "/dev/loop3"
Oct 02 12:36:39 compute-0 jolly_lamport[345485]:             ],
Oct 02 12:36:39 compute-0 jolly_lamport[345485]:             "lv_name": "ceph_lv0",
Oct 02 12:36:39 compute-0 jolly_lamport[345485]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:36:39 compute-0 jolly_lamport[345485]:             "lv_size": "7511998464",
Oct 02 12:36:39 compute-0 jolly_lamport[345485]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:36:39 compute-0 jolly_lamport[345485]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:36:39 compute-0 jolly_lamport[345485]:             "name": "ceph_lv0",
Oct 02 12:36:39 compute-0 jolly_lamport[345485]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:36:39 compute-0 jolly_lamport[345485]:             "tags": {
Oct 02 12:36:39 compute-0 jolly_lamport[345485]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:36:39 compute-0 jolly_lamport[345485]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:36:39 compute-0 jolly_lamport[345485]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:36:39 compute-0 jolly_lamport[345485]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:36:39 compute-0 jolly_lamport[345485]:                 "ceph.cluster_name": "ceph",
Oct 02 12:36:39 compute-0 jolly_lamport[345485]:                 "ceph.crush_device_class": "",
Oct 02 12:36:39 compute-0 jolly_lamport[345485]:                 "ceph.encrypted": "0",
Oct 02 12:36:39 compute-0 jolly_lamport[345485]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:36:39 compute-0 jolly_lamport[345485]:                 "ceph.osd_id": "1",
Oct 02 12:36:39 compute-0 jolly_lamport[345485]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:36:39 compute-0 jolly_lamport[345485]:                 "ceph.type": "block",
Oct 02 12:36:39 compute-0 jolly_lamport[345485]:                 "ceph.vdo": "0"
Oct 02 12:36:39 compute-0 jolly_lamport[345485]:             },
Oct 02 12:36:39 compute-0 jolly_lamport[345485]:             "type": "block",
Oct 02 12:36:39 compute-0 jolly_lamport[345485]:             "vg_name": "ceph_vg0"
Oct 02 12:36:39 compute-0 jolly_lamport[345485]:         }
Oct 02 12:36:39 compute-0 jolly_lamport[345485]:     ]
Oct 02 12:36:39 compute-0 jolly_lamport[345485]: }
Oct 02 12:36:39 compute-0 systemd[1]: libpod-d0a5e1e3624be174a5c40b8863f50db9a172975ac5ec4924724f51e1a9544101.scope: Deactivated successfully.
Oct 02 12:36:39 compute-0 podman[345430]: 2025-10-02 12:36:39.662194017 +0000 UTC m=+1.182321868 container died d0a5e1e3624be174a5c40b8863f50db9a172975ac5ec4924724f51e1a9544101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lamport, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 12:36:39 compute-0 nova_compute[257802]: 2025-10-02 12:36:39.679 2 DEBUG oslo_concurrency.processutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/17766045-13fc-4377-848f-6815e8a474d5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuze5frip" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2272: 305 pgs: 305 active+clean; 580 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 176 op/s
Oct 02 12:36:39 compute-0 nova_compute[257802]: 2025-10-02 12:36:39.795 2 DEBUG nova.storage.rbd_utils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image 17766045-13fc-4377-848f-6815e8a474d5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:36:39 compute-0 nova_compute[257802]: 2025-10-02 12:36:39.798 2 DEBUG oslo_concurrency.processutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/17766045-13fc-4377-848f-6815e8a474d5/disk.config 17766045-13fc-4377-848f-6815e8a474d5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-d331d65210a3630ff7b3a5688f8f8ef966410beea327a78668c35d4f7c973486-merged.mount: Deactivated successfully.
Oct 02 12:36:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:36:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:40.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:36:40 compute-0 podman[345430]: 2025-10-02 12:36:40.408922317 +0000 UTC m=+1.929050138 container remove d0a5e1e3624be174a5c40b8863f50db9a172975ac5ec4924724f51e1a9544101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:36:40 compute-0 systemd[1]: libpod-conmon-d0a5e1e3624be174a5c40b8863f50db9a172975ac5ec4924724f51e1a9544101.scope: Deactivated successfully.
Oct 02 12:36:40 compute-0 sudo[345285]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:40 compute-0 sudo[345569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:40 compute-0 sudo[345569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:40 compute-0 sudo[345569]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:40 compute-0 sudo[345595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:36:40 compute-0 sudo[345595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:40 compute-0 sudo[345595]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:40 compute-0 sudo[345620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:40 compute-0 sudo[345620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:40 compute-0 sudo[345620]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:40 compute-0 sudo[345645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:36:40 compute-0 sudo[345645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:40 compute-0 ceph-mon[73607]: pgmap v2272: 305 pgs: 305 active+clean; 580 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 176 op/s
Oct 02 12:36:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/531003767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:41 compute-0 podman[345710]: 2025-10-02 12:36:41.159952011 +0000 UTC m=+0.083656578 container create f0fd1ffd173ce415d9f96b7bd82a47a88b52df5629ae4b7a88d8260a0ca91f16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wu, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 12:36:41 compute-0 podman[345710]: 2025-10-02 12:36:41.099156454 +0000 UTC m=+0.022861041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:36:41 compute-0 systemd[1]: Started libpod-conmon-f0fd1ffd173ce415d9f96b7bd82a47a88b52df5629ae4b7a88d8260a0ca91f16.scope.
Oct 02 12:36:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:36:41 compute-0 nova_compute[257802]: 2025-10-02 12:36:41.337 2 DEBUG oslo_concurrency.processutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/17766045-13fc-4377-848f-6815e8a474d5/disk.config 17766045-13fc-4377-848f-6815e8a474d5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:41 compute-0 nova_compute[257802]: 2025-10-02 12:36:41.339 2 INFO nova.virt.libvirt.driver [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Deleting local config drive /var/lib/nova/instances/17766045-13fc-4377-848f-6815e8a474d5/disk.config because it was imported into RBD.
Oct 02 12:36:41 compute-0 podman[345710]: 2025-10-02 12:36:41.370338259 +0000 UTC m=+0.294042846 container init f0fd1ffd173ce415d9f96b7bd82a47a88b52df5629ae4b7a88d8260a0ca91f16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wu, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:36:41 compute-0 podman[345710]: 2025-10-02 12:36:41.380652842 +0000 UTC m=+0.304357409 container start f0fd1ffd173ce415d9f96b7bd82a47a88b52df5629ae4b7a88d8260a0ca91f16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wu, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 12:36:41 compute-0 systemd[1]: libpod-f0fd1ffd173ce415d9f96b7bd82a47a88b52df5629ae4b7a88d8260a0ca91f16.scope: Deactivated successfully.
Oct 02 12:36:41 compute-0 tender_wu[345726]: 167 167
Oct 02 12:36:41 compute-0 conmon[345726]: conmon f0fd1ffd173ce415d9f9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f0fd1ffd173ce415d9f96b7bd82a47a88b52df5629ae4b7a88d8260a0ca91f16.scope/container/memory.events
Oct 02 12:36:41 compute-0 kernel: tap4c06cc55-6b: entered promiscuous mode
Oct 02 12:36:41 compute-0 nova_compute[257802]: 2025-10-02 12:36:41.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:41 compute-0 ovn_controller[148183]: 2025-10-02T12:36:41Z|00627|binding|INFO|Claiming lport 4c06cc55-6b35-48e0-892a-4fd710f2cf39 for this chassis.
Oct 02 12:36:41 compute-0 ovn_controller[148183]: 2025-10-02T12:36:41Z|00628|binding|INFO|4c06cc55-6b35-48e0-892a-4fd710f2cf39: Claiming fa:16:3e:ec:94:f8 10.100.0.8
Oct 02 12:36:41 compute-0 podman[345710]: 2025-10-02 12:36:41.408256436 +0000 UTC m=+0.331961033 container attach f0fd1ffd173ce415d9f96b7bd82a47a88b52df5629ae4b7a88d8260a0ca91f16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wu, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:36:41 compute-0 podman[345710]: 2025-10-02 12:36:41.410105772 +0000 UTC m=+0.333810349 container died f0fd1ffd173ce415d9f96b7bd82a47a88b52df5629ae4b7a88d8260a0ca91f16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 12:36:41 compute-0 NetworkManager[44987]: <info>  [1759408601.4102] manager: (tap4c06cc55-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/288)
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:41.419 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:94:f8 10.100.0.8'], port_security=['fa:16:3e:ec:94:f8 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '17766045-13fc-4377-848f-6815e8a474d5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00455285-97a7-4fa2-ba83-e8060936877e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6822f02d5ca04c659329a75d487054cf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b290a23e-28e7-483f-bf5b-c42418308591', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f978d0a7-f86b-440f-a8b5-5432c3a4bc91, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=4c06cc55-6b35-48e0-892a-4fd710f2cf39) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:41.421 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 4c06cc55-6b35-48e0-892a-4fd710f2cf39 in datapath 00455285-97a7-4fa2-ba83-e8060936877e bound to our chassis
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:41.424 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00455285-97a7-4fa2-ba83-e8060936877e
Oct 02 12:36:41 compute-0 nova_compute[257802]: 2025-10-02 12:36:41.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:41 compute-0 ovn_controller[148183]: 2025-10-02T12:36:41Z|00629|binding|INFO|Setting lport 4c06cc55-6b35-48e0-892a-4fd710f2cf39 ovn-installed in OVS
Oct 02 12:36:41 compute-0 ovn_controller[148183]: 2025-10-02T12:36:41Z|00630|binding|INFO|Setting lport 4c06cc55-6b35-48e0-892a-4fd710f2cf39 up in Southbound
Oct 02 12:36:41 compute-0 nova_compute[257802]: 2025-10-02 12:36:41.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:41.439 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d664f644-8e6f-4bc4-bf74-14379168446d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:41.441 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap00455285-91 in ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:41.444 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap00455285-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:41.444 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[20c1625c-69c7-4eed-bfce-f95b2754648a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:41.445 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3d61bb32-dff4-4132-b53e-d11d96075ca7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:41 compute-0 systemd-udevd[345756]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:36:41 compute-0 systemd-machined[211836]: New machine qemu-72-instance-0000008d.
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:41.459 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[2689bde4-7614-4bcb-aede-142eb2ece6fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:41 compute-0 NetworkManager[44987]: <info>  [1759408601.4639] device (tap4c06cc55-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:36:41 compute-0 NetworkManager[44987]: <info>  [1759408601.4654] device (tap4c06cc55-6b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:36:41 compute-0 systemd[1]: Started Virtual Machine qemu-72-instance-0000008d.
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:41.490 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f0d8ff21-830a-4520-8fb6-4e30fd7b6d08]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:41.518 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[a64c5240-6fb3-484f-8aa1-71431297f120]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:41.522 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0e7a2918-1b7c-491c-972b-627923eea98a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:41 compute-0 systemd-udevd[345759]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:36:41 compute-0 NetworkManager[44987]: <info>  [1759408601.5288] manager: (tap00455285-90): new Veth device (/org/freedesktop/NetworkManager/Devices/289)
Oct 02 12:36:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:41.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:41.551 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2ebf43-a6b8-471a-bbc0-2ff772485ec7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:41.554 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[a356d903-a6c9-4dfc-a52d-d9469884cccf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:41 compute-0 NetworkManager[44987]: <info>  [1759408601.5727] device (tap00455285-90): carrier: link connected
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:41.585 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[e9e96d71-28ef-469a-811e-bb0660c44f86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:41.601 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3d00129b-62d0-420d-9caf-57896b081b1e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00455285-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f6:8a:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 194], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 666919, 'reachable_time': 42334, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345789, 'error': None, 'target': 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:41.614 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4c0b0068-a9dd-43c1-8753-3f6b8e16d7b0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef6:8a3c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 666919, 'tstamp': 666919}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345790, 'error': None, 'target': 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:41.632 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2f38094a-1aed-4eb9-856a-12dd8c58e3a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00455285-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f6:8a:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 194], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 666919, 'reachable_time': 42334, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 345791, 'error': None, 'target': 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:41 compute-0 nova_compute[257802]: 2025-10-02 12:36:41.642 2 DEBUG nova.compute.manager [req-538ca4c1-576a-4302-b65e-13299a8ca0f5 req-de0f7f52-b29c-40a2-a192-c7ce05e4b301 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Received event network-vif-plugged-4c06cc55-6b35-48e0-892a-4fd710f2cf39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:36:41 compute-0 nova_compute[257802]: 2025-10-02 12:36:41.642 2 DEBUG oslo_concurrency.lockutils [req-538ca4c1-576a-4302-b65e-13299a8ca0f5 req-de0f7f52-b29c-40a2-a192-c7ce05e4b301 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "17766045-13fc-4377-848f-6815e8a474d5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:41 compute-0 nova_compute[257802]: 2025-10-02 12:36:41.643 2 DEBUG oslo_concurrency.lockutils [req-538ca4c1-576a-4302-b65e-13299a8ca0f5 req-de0f7f52-b29c-40a2-a192-c7ce05e4b301 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "17766045-13fc-4377-848f-6815e8a474d5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:41 compute-0 nova_compute[257802]: 2025-10-02 12:36:41.643 2 DEBUG oslo_concurrency.lockutils [req-538ca4c1-576a-4302-b65e-13299a8ca0f5 req-de0f7f52-b29c-40a2-a192-c7ce05e4b301 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "17766045-13fc-4377-848f-6815e8a474d5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:41 compute-0 nova_compute[257802]: 2025-10-02 12:36:41.643 2 DEBUG nova.compute.manager [req-538ca4c1-576a-4302-b65e-13299a8ca0f5 req-de0f7f52-b29c-40a2-a192-c7ce05e4b301 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Processing event network-vif-plugged-4c06cc55-6b35-48e0-892a-4fd710f2cf39 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:36:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cb022053f0a51c0d22421b6989602d5208e84ee210df23bd432c188d9f9ead7-merged.mount: Deactivated successfully.
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:41.669 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a80e4c62-3791-43a1-ab58-40173c49498e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:41.733 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f5dcde-f9fa-4088-9f5a-23299e0907e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:41.735 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00455285-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:41.735 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:41.736 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00455285-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:41 compute-0 NetworkManager[44987]: <info>  [1759408601.7384] manager: (tap00455285-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/290)
Oct 02 12:36:41 compute-0 kernel: tap00455285-90: entered promiscuous mode
Oct 02 12:36:41 compute-0 nova_compute[257802]: 2025-10-02 12:36:41.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:41.743 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00455285-90, col_values=(('external_ids', {'iface-id': '293fb87a-10df-4698-a69e-3023bca5a6a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:41 compute-0 ovn_controller[148183]: 2025-10-02T12:36:41Z|00631|binding|INFO|Releasing lport 293fb87a-10df-4698-a69e-3023bca5a6a3 from this chassis (sb_readonly=0)
Oct 02 12:36:41 compute-0 nova_compute[257802]: 2025-10-02 12:36:41.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:41 compute-0 nova_compute[257802]: 2025-10-02 12:36:41.749 2 DEBUG oslo_concurrency.lockutils [None req-53fad271-2c67-4e99-88f0-4a1d679a6317 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "a53afa14-bb7b-4723-8239-2ed285f1bc94" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:41 compute-0 nova_compute[257802]: 2025-10-02 12:36:41.750 2 DEBUG oslo_concurrency.lockutils [None req-53fad271-2c67-4e99-88f0-4a1d679a6317 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "a53afa14-bb7b-4723-8239-2ed285f1bc94" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:41 compute-0 nova_compute[257802]: 2025-10-02 12:36:41.750 2 DEBUG oslo_concurrency.lockutils [None req-53fad271-2c67-4e99-88f0-4a1d679a6317 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "a53afa14-bb7b-4723-8239-2ed285f1bc94-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:41 compute-0 nova_compute[257802]: 2025-10-02 12:36:41.750 2 DEBUG oslo_concurrency.lockutils [None req-53fad271-2c67-4e99-88f0-4a1d679a6317 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "a53afa14-bb7b-4723-8239-2ed285f1bc94-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:41 compute-0 nova_compute[257802]: 2025-10-02 12:36:41.751 2 DEBUG oslo_concurrency.lockutils [None req-53fad271-2c67-4e99-88f0-4a1d679a6317 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "a53afa14-bb7b-4723-8239-2ed285f1bc94-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:41 compute-0 nova_compute[257802]: 2025-10-02 12:36:41.752 2 INFO nova.compute.manager [None req-53fad271-2c67-4e99-88f0-4a1d679a6317 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Terminating instance
Oct 02 12:36:41 compute-0 nova_compute[257802]: 2025-10-02 12:36:41.753 2 DEBUG nova.compute.manager [None req-53fad271-2c67-4e99-88f0-4a1d679a6317 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:36:41 compute-0 nova_compute[257802]: 2025-10-02 12:36:41.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:41.765 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/00455285-97a7-4fa2-ba83-e8060936877e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/00455285-97a7-4fa2-ba83-e8060936877e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:41.766 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0636ec49-d10e-4992-b569-7a738cb64361]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:41.767 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-00455285-97a7-4fa2-ba83-e8060936877e
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/00455285-97a7-4fa2-ba83-e8060936877e.pid.haproxy
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 00455285-97a7-4fa2-ba83-e8060936877e
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:36:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:41.768 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'env', 'PROCESS_TAG=haproxy-00455285-97a7-4fa2-ba83-e8060936877e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/00455285-97a7-4fa2-ba83-e8060936877e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:36:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2273: 305 pgs: 305 active+clean; 580 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 134 op/s
Oct 02 12:36:41 compute-0 podman[345710]: 2025-10-02 12:36:41.812024146 +0000 UTC m=+0.735728713 container remove f0fd1ffd173ce415d9f96b7bd82a47a88b52df5629ae4b7a88d8260a0ca91f16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wu, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 12:36:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1825524569' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:36:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1825524569' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:36:41 compute-0 systemd[1]: libpod-conmon-f0fd1ffd173ce415d9f96b7bd82a47a88b52df5629ae4b7a88d8260a0ca91f16.scope: Deactivated successfully.
Oct 02 12:36:42 compute-0 podman[345811]: 2025-10-02 12:36:41.983758967 +0000 UTC m=+0.024474030 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:36:42 compute-0 podman[345811]: 2025-10-02 12:36:42.206174268 +0000 UTC m=+0.246889311 container create 405416ac1908d388040a19b50d74b662e29baaf727415c1e1d39efa29afe44d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:36:42 compute-0 kernel: tap8a410c4c-94 (unregistering): left promiscuous mode
Oct 02 12:36:42 compute-0 NetworkManager[44987]: <info>  [1759408602.2501] device (tap8a410c4c-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:36:42 compute-0 ovn_controller[148183]: 2025-10-02T12:36:42Z|00632|binding|INFO|Releasing lport 8a410c4c-94ba-44f0-9056-16dbab7db1d9 from this chassis (sb_readonly=0)
Oct 02 12:36:42 compute-0 ovn_controller[148183]: 2025-10-02T12:36:42Z|00633|binding|INFO|Setting lport 8a410c4c-94ba-44f0-9056-16dbab7db1d9 down in Southbound
Oct 02 12:36:42 compute-0 ovn_controller[148183]: 2025-10-02T12:36:42Z|00634|binding|INFO|Removing iface tap8a410c4c-94 ovn-installed in OVS
Oct 02 12:36:42 compute-0 nova_compute[257802]: 2025-10-02 12:36:42.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:42.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:42.268 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:ff:7e 10.100.0.14'], port_security=['fa:16:3e:2f:ff:7e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a53afa14-bb7b-4723-8239-2ed285f1bc94', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e58f4ba2-c72c-42b8-acea-ca6241431726', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc4d8f857b2d42bf9ae477fc5f514216', 'neutron:revision_number': '6', 'neutron:security_group_ids': '3110ab08-53b5-412f-abb1-fdd400b42e71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff6421fa-d014-4140-8a8b-1356d60478c0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=8a410c4c-94ba-44f0-9056-16dbab7db1d9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:36:42 compute-0 nova_compute[257802]: 2025-10-02 12:36:42.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:42 compute-0 systemd[1]: Started libpod-conmon-405416ac1908d388040a19b50d74b662e29baaf727415c1e1d39efa29afe44d5.scope.
Oct 02 12:36:42 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Oct 02 12:36:42 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000007e.scope: Consumed 21.985s CPU time.
Oct 02 12:36:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:36:42 compute-0 systemd-machined[211836]: Machine qemu-65-instance-0000007e terminated.
Oct 02 12:36:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7803783501d1b3e44df6bf8284f7a83f88eaaabbb4b4ae8bb000cae6f3e78639/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:36:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7803783501d1b3e44df6bf8284f7a83f88eaaabbb4b4ae8bb000cae6f3e78639/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:36:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7803783501d1b3e44df6bf8284f7a83f88eaaabbb4b4ae8bb000cae6f3e78639/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:36:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7803783501d1b3e44df6bf8284f7a83f88eaaabbb4b4ae8bb000cae6f3e78639/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:36:42 compute-0 podman[345811]: 2025-10-02 12:36:42.381905328 +0000 UTC m=+0.422620391 container init 405416ac1908d388040a19b50d74b662e29baaf727415c1e1d39efa29afe44d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:36:42 compute-0 podman[345811]: 2025-10-02 12:36:42.390219572 +0000 UTC m=+0.430934615 container start 405416ac1908d388040a19b50d74b662e29baaf727415c1e1d39efa29afe44d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wu, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:36:42 compute-0 nova_compute[257802]: 2025-10-02 12:36:42.394 2 INFO nova.virt.libvirt.driver [-] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Instance destroyed successfully.
Oct 02 12:36:42 compute-0 nova_compute[257802]: 2025-10-02 12:36:42.395 2 DEBUG nova.objects.instance [None req-53fad271-2c67-4e99-88f0-4a1d679a6317 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'resources' on Instance uuid a53afa14-bb7b-4723-8239-2ed285f1bc94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:36:42 compute-0 nova_compute[257802]: 2025-10-02 12:36:42.410 2 DEBUG nova.virt.libvirt.vif [None req-53fad271-2c67-4e99-88f0-4a1d679a6317 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:32:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-408124247',display_name='tempest-ServerRescueNegativeTestJSON-server-408124247',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-408124247',id=126,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:33:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cc4d8f857b2d42bf9ae477fc5f514216',ramdisk_id='',reservation_id='r-ysobwts2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-959216005',owner_user_name='tempest-ServerRescueNegativeTestJSON-959216005-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:33:30Z,user_data=None,user_id='6c932f0d0e594f00855572fbe06ee3aa',uuid=a53afa14-bb7b-4723-8239-2ed285f1bc94,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "address": "fa:16:3e:2f:ff:7e", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a410c4c-94", "ovs_interfaceid": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:36:42 compute-0 nova_compute[257802]: 2025-10-02 12:36:42.411 2 DEBUG nova.network.os_vif_util [None req-53fad271-2c67-4e99-88f0-4a1d679a6317 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Converting VIF {"id": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "address": "fa:16:3e:2f:ff:7e", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a410c4c-94", "ovs_interfaceid": "8a410c4c-94ba-44f0-9056-16dbab7db1d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:36:42 compute-0 nova_compute[257802]: 2025-10-02 12:36:42.412 2 DEBUG nova.network.os_vif_util [None req-53fad271-2c67-4e99-88f0-4a1d679a6317 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2f:ff:7e,bridge_name='br-int',has_traffic_filtering=True,id=8a410c4c-94ba-44f0-9056-16dbab7db1d9,network=Network(e58f4ba2-c72c-42b8-acea-ca6241431726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a410c4c-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:36:42 compute-0 nova_compute[257802]: 2025-10-02 12:36:42.412 2 DEBUG os_vif [None req-53fad271-2c67-4e99-88f0-4a1d679a6317 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2f:ff:7e,bridge_name='br-int',has_traffic_filtering=True,id=8a410c4c-94ba-44f0-9056-16dbab7db1d9,network=Network(e58f4ba2-c72c-42b8-acea-ca6241431726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a410c4c-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:36:42 compute-0 nova_compute[257802]: 2025-10-02 12:36:42.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:42 compute-0 nova_compute[257802]: 2025-10-02 12:36:42.414 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a410c4c-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:42 compute-0 nova_compute[257802]: 2025-10-02 12:36:42.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:42 compute-0 nova_compute[257802]: 2025-10-02 12:36:42.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:42 compute-0 nova_compute[257802]: 2025-10-02 12:36:42.419 2 INFO os_vif [None req-53fad271-2c67-4e99-88f0-4a1d679a6317 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2f:ff:7e,bridge_name='br-int',has_traffic_filtering=True,id=8a410c4c-94ba-44f0-9056-16dbab7db1d9,network=Network(e58f4ba2-c72c-42b8-acea-ca6241431726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a410c4c-94')
Oct 02 12:36:42 compute-0 podman[345811]: 2025-10-02 12:36:42.430797825 +0000 UTC m=+0.471512868 container attach 405416ac1908d388040a19b50d74b662e29baaf727415c1e1d39efa29afe44d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wu, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 12:36:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:36:42
Oct 02 12:36:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:36:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:36:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', '.mgr', 'images']
Oct 02 12:36:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:36:42 compute-0 nova_compute[257802]: 2025-10-02 12:36:42.490 2 DEBUG nova.compute.manager [req-1f3a26ce-88df-4423-b541-17a0c182b062 req-8f8a2baf-4b8c-4bd1-8d42-e31a18d52e81 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Received event network-vif-unplugged-8a410c4c-94ba-44f0-9056-16dbab7db1d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:36:42 compute-0 nova_compute[257802]: 2025-10-02 12:36:42.490 2 DEBUG oslo_concurrency.lockutils [req-1f3a26ce-88df-4423-b541-17a0c182b062 req-8f8a2baf-4b8c-4bd1-8d42-e31a18d52e81 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a53afa14-bb7b-4723-8239-2ed285f1bc94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:42 compute-0 nova_compute[257802]: 2025-10-02 12:36:42.490 2 DEBUG oslo_concurrency.lockutils [req-1f3a26ce-88df-4423-b541-17a0c182b062 req-8f8a2baf-4b8c-4bd1-8d42-e31a18d52e81 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a53afa14-bb7b-4723-8239-2ed285f1bc94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:42 compute-0 nova_compute[257802]: 2025-10-02 12:36:42.491 2 DEBUG oslo_concurrency.lockutils [req-1f3a26ce-88df-4423-b541-17a0c182b062 req-8f8a2baf-4b8c-4bd1-8d42-e31a18d52e81 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a53afa14-bb7b-4723-8239-2ed285f1bc94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:42 compute-0 nova_compute[257802]: 2025-10-02 12:36:42.491 2 DEBUG nova.compute.manager [req-1f3a26ce-88df-4423-b541-17a0c182b062 req-8f8a2baf-4b8c-4bd1-8d42-e31a18d52e81 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] No waiting events found dispatching network-vif-unplugged-8a410c4c-94ba-44f0-9056-16dbab7db1d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:36:42 compute-0 nova_compute[257802]: 2025-10-02 12:36:42.491 2 DEBUG nova.compute.manager [req-1f3a26ce-88df-4423-b541-17a0c182b062 req-8f8a2baf-4b8c-4bd1-8d42-e31a18d52e81 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Received event network-vif-unplugged-8a410c4c-94ba-44f0-9056-16dbab7db1d9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:36:42 compute-0 podman[345862]: 2025-10-02 12:36:42.447573235 +0000 UTC m=+0.108106537 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:36:42 compute-0 podman[345862]: 2025-10-02 12:36:42.705815863 +0000 UTC m=+0.366349145 container create fe0ac6727a1b63ea4c735c5b32c7fd95fb703d5a5d91ea43f17f7721c53c8552 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:36:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:36:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:36:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:36:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:36:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:36:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:36:42 compute-0 systemd[1]: Started libpod-conmon-fe0ac6727a1b63ea4c735c5b32c7fd95fb703d5a5d91ea43f17f7721c53c8552.scope.
Oct 02 12:36:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:36:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/677887aa539c6afaf87b3d7502b43befa59f0e7bafb2b328dd2abcbe55d2d495/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:36:42 compute-0 podman[345862]: 2025-10-02 12:36:42.951247177 +0000 UTC m=+0.611780469 container init fe0ac6727a1b63ea4c735c5b32c7fd95fb703d5a5d91ea43f17f7721c53c8552 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:36:42 compute-0 podman[345862]: 2025-10-02 12:36:42.959974552 +0000 UTC m=+0.620507844 container start fe0ac6727a1b63ea4c735c5b32c7fd95fb703d5a5d91ea43f17f7721c53c8552 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:36:42 compute-0 neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e[345941]: [NOTICE]   (345945) : New worker (345947) forked
Oct 02 12:36:42 compute-0 neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e[345941]: [NOTICE]   (345945) : Loading success.
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.068 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408603.068205, 17766045-13fc-4377-848f-6815e8a474d5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.069 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 17766045-13fc-4377-848f-6815e8a474d5] VM Started (Lifecycle Event)
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.071 2 DEBUG nova.compute.manager [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.079 2 DEBUG nova.virt.libvirt.driver [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.085 2 INFO nova.virt.libvirt.driver [-] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Instance spawned successfully.
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.089 2 DEBUG nova.virt.libvirt.driver [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.092 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.099 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:36:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:36:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:36:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:36:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:36:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.113 2 DEBUG nova.virt.libvirt.driver [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.114 2 DEBUG nova.virt.libvirt.driver [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.115 2 DEBUG nova.virt.libvirt.driver [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.115 2 DEBUG nova.virt.libvirt.driver [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.116 2 DEBUG nova.virt.libvirt.driver [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.117 2 DEBUG nova.virt.libvirt.driver [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.122 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 17766045-13fc-4377-848f-6815e8a474d5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.122 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408603.0684052, 17766045-13fc-4377-848f-6815e8a474d5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.123 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 17766045-13fc-4377-848f-6815e8a474d5] VM Paused (Lifecycle Event)
Oct 02 12:36:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:43.141 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 8a410c4c-94ba-44f0-9056-16dbab7db1d9 in datapath e58f4ba2-c72c-42b8-acea-ca6241431726 unbound from our chassis
Oct 02 12:36:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:43.144 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e58f4ba2-c72c-42b8-acea-ca6241431726, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:36:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:43.146 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c94426ce-62c0-4ad1-9b92-318e918a46a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:43.147 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726 namespace which is not needed anymore
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.155 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.160 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408603.0743043, 17766045-13fc-4377-848f-6815e8a474d5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.161 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 17766045-13fc-4377-848f-6815e8a474d5] VM Resumed (Lifecycle Event)
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.191 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.197 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.204 2 INFO nova.compute.manager [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Took 10.02 seconds to spawn the instance on the hypervisor.
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.205 2 DEBUG nova.compute.manager [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:36:43 compute-0 busy_wu[345845]: {
Oct 02 12:36:43 compute-0 busy_wu[345845]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:36:43 compute-0 busy_wu[345845]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:36:43 compute-0 busy_wu[345845]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:36:43 compute-0 busy_wu[345845]:         "osd_id": 1,
Oct 02 12:36:43 compute-0 busy_wu[345845]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:36:43 compute-0 busy_wu[345845]:         "type": "bluestore"
Oct 02 12:36:43 compute-0 busy_wu[345845]:     }
Oct 02 12:36:43 compute-0 busy_wu[345845]: }
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.216 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 17766045-13fc-4377-848f-6815e8a474d5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:36:43 compute-0 systemd[1]: libpod-405416ac1908d388040a19b50d74b662e29baaf727415c1e1d39efa29afe44d5.scope: Deactivated successfully.
Oct 02 12:36:43 compute-0 podman[345811]: 2025-10-02 12:36:43.236944247 +0000 UTC m=+1.277659290 container died 405416ac1908d388040a19b50d74b662e29baaf727415c1e1d39efa29afe44d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wu, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.265 2 INFO nova.compute.manager [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Took 11.39 seconds to build instance.
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.290 2 DEBUG oslo_concurrency.lockutils [None req-d612a950-8163-47ab-8e6c-fc956b2b9e55 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "17766045-13fc-4377-848f-6815e8a474d5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.512s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:43 compute-0 ceph-mon[73607]: pgmap v2273: 305 pgs: 305 active+clean; 580 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 134 op/s
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:43.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-7803783501d1b3e44df6bf8284f7a83f88eaaabbb4b4ae8bb000cae6f3e78639-merged.mount: Deactivated successfully.
Oct 02 12:36:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2274: 305 pgs: 305 active+clean; 599 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 156 op/s
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.886 2 DEBUG nova.compute.manager [req-1a551ae5-62c9-4fe6-921c-4ed16788976a req-d0922fec-e008-46ca-a2e0-66b6daeba539 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Received event network-vif-plugged-4c06cc55-6b35-48e0-892a-4fd710f2cf39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.886 2 DEBUG oslo_concurrency.lockutils [req-1a551ae5-62c9-4fe6-921c-4ed16788976a req-d0922fec-e008-46ca-a2e0-66b6daeba539 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "17766045-13fc-4377-848f-6815e8a474d5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.887 2 DEBUG oslo_concurrency.lockutils [req-1a551ae5-62c9-4fe6-921c-4ed16788976a req-d0922fec-e008-46ca-a2e0-66b6daeba539 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "17766045-13fc-4377-848f-6815e8a474d5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.887 2 DEBUG oslo_concurrency.lockutils [req-1a551ae5-62c9-4fe6-921c-4ed16788976a req-d0922fec-e008-46ca-a2e0-66b6daeba539 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "17766045-13fc-4377-848f-6815e8a474d5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.887 2 DEBUG nova.compute.manager [req-1a551ae5-62c9-4fe6-921c-4ed16788976a req-d0922fec-e008-46ca-a2e0-66b6daeba539 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] No waiting events found dispatching network-vif-plugged-4c06cc55-6b35-48e0-892a-4fd710f2cf39 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:36:43 compute-0 nova_compute[257802]: 2025-10-02 12:36:43.887 2 WARNING nova.compute.manager [req-1a551ae5-62c9-4fe6-921c-4ed16788976a req-d0922fec-e008-46ca-a2e0-66b6daeba539 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Received unexpected event network-vif-plugged-4c06cc55-6b35-48e0-892a-4fd710f2cf39 for instance with vm_state active and task_state None.
Oct 02 12:36:43 compute-0 podman[345811]: 2025-10-02 12:36:43.979234409 +0000 UTC m=+2.019949452 container remove 405416ac1908d388040a19b50d74b662e29baaf727415c1e1d39efa29afe44d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wu, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 12:36:44 compute-0 sudo[345645]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:36:44 compute-0 systemd[1]: libpod-conmon-405416ac1908d388040a19b50d74b662e29baaf727415c1e1d39efa29afe44d5.scope: Deactivated successfully.
Oct 02 12:36:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:36:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:36:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:36:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:36:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:36:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:36:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:36:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:36:44 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev b42f3fd7-1a00-4b61-9b8d-f866b446b279 does not exist
Oct 02 12:36:44 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 2b6ca823-1464-4382-8258-d7840d89215d does not exist
Oct 02 12:36:44 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev a3392921-c522-4ecc-b0ff-28268b7c0fe8 does not exist
Oct 02 12:36:44 compute-0 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[337538]: [NOTICE]   (337542) : haproxy version is 2.8.14-c23fe91
Oct 02 12:36:44 compute-0 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[337538]: [NOTICE]   (337542) : path to executable is /usr/sbin/haproxy
Oct 02 12:36:44 compute-0 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[337538]: [WARNING]  (337542) : Exiting Master process...
Oct 02 12:36:44 compute-0 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[337538]: [ALERT]    (337542) : Current worker (337544) exited with code 143 (Terminated)
Oct 02 12:36:44 compute-0 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[337538]: [WARNING]  (337542) : All workers exited. Exiting... (0)
Oct 02 12:36:44 compute-0 systemd[1]: libpod-045ad838034caeee20ddafee4c6fadbed47c517816111a05ac7e45fa4eace2e8.scope: Deactivated successfully.
Oct 02 12:36:44 compute-0 podman[346001]: 2025-10-02 12:36:44.14888326 +0000 UTC m=+0.647353350 container died 045ad838034caeee20ddafee4c6fadbed47c517816111a05ac7e45fa4eace2e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:36:44 compute-0 sudo[346014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:44 compute-0 sudo[346014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:44 compute-0 sudo[346014]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-045ad838034caeee20ddafee4c6fadbed47c517816111a05ac7e45fa4eace2e8-userdata-shm.mount: Deactivated successfully.
Oct 02 12:36:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-f98626666109bd08180d9fa50b75947bc4e663092b586e25c99dd8343a9eb526-merged.mount: Deactivated successfully.
Oct 02 12:36:44 compute-0 sudo[346048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:36:44 compute-0 sudo[346048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:44 compute-0 sudo[346048]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:44.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:44 compute-0 podman[346001]: 2025-10-02 12:36:44.291050877 +0000 UTC m=+0.789520967 container cleanup 045ad838034caeee20ddafee4c6fadbed47c517816111a05ac7e45fa4eace2e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:36:44 compute-0 systemd[1]: libpod-conmon-045ad838034caeee20ddafee4c6fadbed47c517816111a05ac7e45fa4eace2e8.scope: Deactivated successfully.
Oct 02 12:36:44 compute-0 podman[346082]: 2025-10-02 12:36:44.373962236 +0000 UTC m=+0.054946296 container remove 045ad838034caeee20ddafee4c6fadbed47c517816111a05ac7e45fa4eace2e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 12:36:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:44.379 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8ea957df-d536-4173-ba0f-946cce97bda2]: (4, ('Thu Oct  2 12:36:43 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726 (045ad838034caeee20ddafee4c6fadbed47c517816111a05ac7e45fa4eace2e8)\n045ad838034caeee20ddafee4c6fadbed47c517816111a05ac7e45fa4eace2e8\nThu Oct  2 12:36:44 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726 (045ad838034caeee20ddafee4c6fadbed47c517816111a05ac7e45fa4eace2e8)\n045ad838034caeee20ddafee4c6fadbed47c517816111a05ac7e45fa4eace2e8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:44.380 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[11b4ebee-da85-42c3-9e45-72897828057e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:44.381 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape58f4ba2-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:44 compute-0 nova_compute[257802]: 2025-10-02 12:36:44.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:44 compute-0 kernel: tape58f4ba2-c0: left promiscuous mode
Oct 02 12:36:44 compute-0 nova_compute[257802]: 2025-10-02 12:36:44.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:44.407 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e341b925-50e3-4a8a-a3b1-8eb08b6bb474]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:44.430 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6c0235e2-b9be-4bd9-a90d-8d6aba13c6fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:44.432 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0dac2708-67e2-42a1-821f-4fa82b8d2f9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:44.448 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[84ec702c-c815-4556-9d79-c122ff885b89]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647508, 'reachable_time': 35748, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346096, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:44.451 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:36:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:36:44.451 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[0f39b5f5-c94b-439c-a926-3a167a233602]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:44 compute-0 systemd[1]: run-netns-ovnmeta\x2de58f4ba2\x2dc72c\x2d42b8\x2dacea\x2dca6241431726.mount: Deactivated successfully.
Oct 02 12:36:44 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/397089190' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:36:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:36:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:36:44 compute-0 nova_compute[257802]: 2025-10-02 12:36:44.603 2 DEBUG nova.compute.manager [req-5748f910-7316-44b5-a914-647639ad559e req-a5ef1bfb-aef0-44d6-a05a-8ed4a699da61 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Received event network-vif-plugged-8a410c4c-94ba-44f0-9056-16dbab7db1d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:36:44 compute-0 nova_compute[257802]: 2025-10-02 12:36:44.604 2 DEBUG oslo_concurrency.lockutils [req-5748f910-7316-44b5-a914-647639ad559e req-a5ef1bfb-aef0-44d6-a05a-8ed4a699da61 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a53afa14-bb7b-4723-8239-2ed285f1bc94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:44 compute-0 nova_compute[257802]: 2025-10-02 12:36:44.604 2 DEBUG oslo_concurrency.lockutils [req-5748f910-7316-44b5-a914-647639ad559e req-a5ef1bfb-aef0-44d6-a05a-8ed4a699da61 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a53afa14-bb7b-4723-8239-2ed285f1bc94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:44 compute-0 nova_compute[257802]: 2025-10-02 12:36:44.604 2 DEBUG oslo_concurrency.lockutils [req-5748f910-7316-44b5-a914-647639ad559e req-a5ef1bfb-aef0-44d6-a05a-8ed4a699da61 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a53afa14-bb7b-4723-8239-2ed285f1bc94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:44 compute-0 nova_compute[257802]: 2025-10-02 12:36:44.604 2 DEBUG nova.compute.manager [req-5748f910-7316-44b5-a914-647639ad559e req-a5ef1bfb-aef0-44d6-a05a-8ed4a699da61 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] No waiting events found dispatching network-vif-plugged-8a410c4c-94ba-44f0-9056-16dbab7db1d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:36:44 compute-0 nova_compute[257802]: 2025-10-02 12:36:44.604 2 WARNING nova.compute.manager [req-5748f910-7316-44b5-a914-647639ad559e req-a5ef1bfb-aef0-44d6-a05a-8ed4a699da61 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Received unexpected event network-vif-plugged-8a410c4c-94ba-44f0-9056-16dbab7db1d9 for instance with vm_state rescued and task_state deleting.
Oct 02 12:36:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:45 compute-0 ceph-mon[73607]: pgmap v2274: 305 pgs: 305 active+clean; 599 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 156 op/s
Oct 02 12:36:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3417013561' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:36:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:36:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:45.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:36:45 compute-0 nova_compute[257802]: 2025-10-02 12:36:45.664 2 INFO nova.virt.libvirt.driver [None req-53fad271-2c67-4e99-88f0-4a1d679a6317 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Deleting instance files /var/lib/nova/instances/a53afa14-bb7b-4723-8239-2ed285f1bc94_del
Oct 02 12:36:45 compute-0 nova_compute[257802]: 2025-10-02 12:36:45.664 2 INFO nova.virt.libvirt.driver [None req-53fad271-2c67-4e99-88f0-4a1d679a6317 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Deletion of /var/lib/nova/instances/a53afa14-bb7b-4723-8239-2ed285f1bc94_del complete
Oct 02 12:36:45 compute-0 nova_compute[257802]: 2025-10-02 12:36:45.716 2 INFO nova.compute.manager [None req-53fad271-2c67-4e99-88f0-4a1d679a6317 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Took 3.96 seconds to destroy the instance on the hypervisor.
Oct 02 12:36:45 compute-0 nova_compute[257802]: 2025-10-02 12:36:45.716 2 DEBUG oslo.service.loopingcall [None req-53fad271-2c67-4e99-88f0-4a1d679a6317 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:36:45 compute-0 nova_compute[257802]: 2025-10-02 12:36:45.717 2 DEBUG nova.compute.manager [-] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:36:45 compute-0 nova_compute[257802]: 2025-10-02 12:36:45.717 2 DEBUG nova.network.neutron [-] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:36:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2275: 305 pgs: 305 active+clean; 538 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 189 op/s
Oct 02 12:36:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:36:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:46.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:36:46 compute-0 ceph-mon[73607]: pgmap v2275: 305 pgs: 305 active+clean; 538 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 189 op/s
Oct 02 12:36:47 compute-0 nova_compute[257802]: 2025-10-02 12:36:47.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:47.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2276: 305 pgs: 305 active+clean; 538 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 361 KiB/s rd, 3.6 MiB/s wr, 125 op/s
Oct 02 12:36:47 compute-0 nova_compute[257802]: 2025-10-02 12:36:47.898 2 DEBUG nova.network.neutron [-] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:36:47 compute-0 nova_compute[257802]: 2025-10-02 12:36:47.922 2 INFO nova.compute.manager [-] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Took 2.21 seconds to deallocate network for instance.
Oct 02 12:36:47 compute-0 nova_compute[257802]: 2025-10-02 12:36:47.974 2 DEBUG oslo_concurrency.lockutils [None req-53fad271-2c67-4e99-88f0-4a1d679a6317 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:47 compute-0 nova_compute[257802]: 2025-10-02 12:36:47.974 2 DEBUG oslo_concurrency.lockutils [None req-53fad271-2c67-4e99-88f0-4a1d679a6317 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:47 compute-0 nova_compute[257802]: 2025-10-02 12:36:47.990 2 DEBUG nova.compute.manager [req-a2ca0509-6ccc-4453-bff8-2ee7f3b83a30 req-d574b837-0143-45e7-9777-b78531b3897b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Received event network-vif-deleted-8a410c4c-94ba-44f0-9056-16dbab7db1d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:36:48 compute-0 nova_compute[257802]: 2025-10-02 12:36:48.038 2 DEBUG oslo_concurrency.processutils [None req-53fad271-2c67-4e99-88f0-4a1d679a6317 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:36:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:48.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:36:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:36:48 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/43978544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:48 compute-0 nova_compute[257802]: 2025-10-02 12:36:48.501 2 DEBUG oslo_concurrency.processutils [None req-53fad271-2c67-4e99-88f0-4a1d679a6317 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:48 compute-0 nova_compute[257802]: 2025-10-02 12:36:48.507 2 DEBUG nova.compute.provider_tree [None req-53fad271-2c67-4e99-88f0-4a1d679a6317 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:36:48 compute-0 nova_compute[257802]: 2025-10-02 12:36:48.527 2 DEBUG nova.scheduler.client.report [None req-53fad271-2c67-4e99-88f0-4a1d679a6317 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:36:48 compute-0 nova_compute[257802]: 2025-10-02 12:36:48.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:48 compute-0 nova_compute[257802]: 2025-10-02 12:36:48.564 2 DEBUG oslo_concurrency.lockutils [None req-53fad271-2c67-4e99-88f0-4a1d679a6317 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:48 compute-0 sudo[346125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:48 compute-0 sudo[346125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:48 compute-0 sudo[346125]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:48 compute-0 nova_compute[257802]: 2025-10-02 12:36:48.613 2 INFO nova.scheduler.client.report [None req-53fad271-2c67-4e99-88f0-4a1d679a6317 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Deleted allocations for instance a53afa14-bb7b-4723-8239-2ed285f1bc94
Oct 02 12:36:48 compute-0 sudo[346151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:48 compute-0 sudo[346151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:48 compute-0 sudo[346151]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:48 compute-0 nova_compute[257802]: 2025-10-02 12:36:48.688 2 DEBUG oslo_concurrency.lockutils [None req-53fad271-2c67-4e99-88f0-4a1d679a6317 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "a53afa14-bb7b-4723-8239-2ed285f1bc94" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.939s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:48 compute-0 ceph-mon[73607]: pgmap v2276: 305 pgs: 305 active+clean; 538 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 361 KiB/s rd, 3.6 MiB/s wr, 125 op/s
Oct 02 12:36:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3186242937' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/43978544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:36:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:49.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:36:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2277: 305 pgs: 305 active+clean; 453 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.6 MiB/s wr, 239 op/s
Oct 02 12:36:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:36:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:50.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:36:50 compute-0 nova_compute[257802]: 2025-10-02 12:36:50.465 2 DEBUG nova.compute.manager [req-7d352f9c-0dbf-49bd-9783-3121eab2a8a3 req-21110024-be7b-4143-b256-8cdfb4fd61ef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Received event network-changed-4c06cc55-6b35-48e0-892a-4fd710f2cf39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:36:50 compute-0 nova_compute[257802]: 2025-10-02 12:36:50.467 2 DEBUG nova.compute.manager [req-7d352f9c-0dbf-49bd-9783-3121eab2a8a3 req-21110024-be7b-4143-b256-8cdfb4fd61ef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Refreshing instance network info cache due to event network-changed-4c06cc55-6b35-48e0-892a-4fd710f2cf39. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:36:50 compute-0 nova_compute[257802]: 2025-10-02 12:36:50.468 2 DEBUG oslo_concurrency.lockutils [req-7d352f9c-0dbf-49bd-9783-3121eab2a8a3 req-21110024-be7b-4143-b256-8cdfb4fd61ef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-17766045-13fc-4377-848f-6815e8a474d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:36:50 compute-0 nova_compute[257802]: 2025-10-02 12:36:50.468 2 DEBUG oslo_concurrency.lockutils [req-7d352f9c-0dbf-49bd-9783-3121eab2a8a3 req-21110024-be7b-4143-b256-8cdfb4fd61ef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-17766045-13fc-4377-848f-6815e8a474d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:36:50 compute-0 nova_compute[257802]: 2025-10-02 12:36:50.469 2 DEBUG nova.network.neutron [req-7d352f9c-0dbf-49bd-9783-3121eab2a8a3 req-21110024-be7b-4143-b256-8cdfb4fd61ef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Refreshing network info cache for port 4c06cc55-6b35-48e0-892a-4fd710f2cf39 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:36:50 compute-0 podman[346179]: 2025-10-02 12:36:50.921168621 +0000 UTC m=+0.052934376 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 02 12:36:50 compute-0 podman[346178]: 2025-10-02 12:36:50.929007363 +0000 UTC m=+0.063121766 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:36:50 compute-0 podman[346177]: 2025-10-02 12:36:50.949844813 +0000 UTC m=+0.083976036 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:36:50 compute-0 ceph-mon[73607]: pgmap v2277: 305 pgs: 305 active+clean; 453 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.6 MiB/s wr, 239 op/s
Oct 02 12:36:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:36:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:51.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:36:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2278: 305 pgs: 305 active+clean; 453 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 190 op/s
Oct 02 12:36:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:52.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:52 compute-0 nova_compute[257802]: 2025-10-02 12:36:52.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:53 compute-0 ceph-mon[73607]: pgmap v2278: 305 pgs: 305 active+clean; 453 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 190 op/s
Oct 02 12:36:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3136633686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2831796320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:36:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:53.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:36:53 compute-0 nova_compute[257802]: 2025-10-02 12:36:53.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2279: 305 pgs: 305 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 267 op/s
Oct 02 12:36:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:54.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:36:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:36:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:36:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:36:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005914809871114833 of space, bias 1.0, pg target 1.77444296133445 quantized to 32 (current 32)
Oct 02 12:36:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:36:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6464260320700727 quantized to 32 (current 32)
Oct 02 12:36:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:36:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:36:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:36:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 12:36:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:36:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 12:36:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:36:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:36:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:36:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027172174530057695 quantized to 32 (current 32)
Oct 02 12:36:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:36:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 12:36:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:36:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:36:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:36:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 12:36:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:36:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3176272605' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:36:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:36:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3176272605' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:36:55 compute-0 nova_compute[257802]: 2025-10-02 12:36:55.094 2 DEBUG nova.network.neutron [req-7d352f9c-0dbf-49bd-9783-3121eab2a8a3 req-21110024-be7b-4143-b256-8cdfb4fd61ef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Updated VIF entry in instance network info cache for port 4c06cc55-6b35-48e0-892a-4fd710f2cf39. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:36:55 compute-0 nova_compute[257802]: 2025-10-02 12:36:55.095 2 DEBUG nova.network.neutron [req-7d352f9c-0dbf-49bd-9783-3121eab2a8a3 req-21110024-be7b-4143-b256-8cdfb4fd61ef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Updating instance_info_cache with network_info: [{"id": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "address": "fa:16:3e:ec:94:f8", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c06cc55-6b", "ovs_interfaceid": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:36:55 compute-0 nova_compute[257802]: 2025-10-02 12:36:55.123 2 DEBUG oslo_concurrency.lockutils [req-7d352f9c-0dbf-49bd-9783-3121eab2a8a3 req-21110024-be7b-4143-b256-8cdfb4fd61ef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-17766045-13fc-4377-848f-6815e8a474d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:36:55 compute-0 ceph-mon[73607]: pgmap v2279: 305 pgs: 305 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 267 op/s
Oct 02 12:36:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3176272605' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:36:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3176272605' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:36:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:36:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:55.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:36:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2280: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 475 KiB/s wr, 281 op/s
Oct 02 12:36:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:56.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:56 compute-0 ceph-mon[73607]: pgmap v2280: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 475 KiB/s wr, 281 op/s
Oct 02 12:36:56 compute-0 podman[346236]: 2025-10-02 12:36:56.968332453 +0000 UTC m=+0.105954194 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 12:36:57 compute-0 nova_compute[257802]: 2025-10-02 12:36:57.390 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408602.3885338, a53afa14-bb7b-4723-8239-2ed285f1bc94 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:36:57 compute-0 nova_compute[257802]: 2025-10-02 12:36:57.391 2 INFO nova.compute.manager [-] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] VM Stopped (Lifecycle Event)
Oct 02 12:36:57 compute-0 nova_compute[257802]: 2025-10-02 12:36:57.413 2 DEBUG nova.compute.manager [None req-b51e17a2-8d1f-435c-a8ab-ef5697c786cb - - - - - -] [instance: a53afa14-bb7b-4723-8239-2ed285f1bc94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:36:57 compute-0 nova_compute[257802]: 2025-10-02 12:36:57.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:57.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2281: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 19 KiB/s wr, 226 op/s
Oct 02 12:36:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/243347859' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:58 compute-0 nova_compute[257802]: 2025-10-02 12:36:58.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:36:58 compute-0 nova_compute[257802]: 2025-10-02 12:36:58.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:36:58 compute-0 ovn_controller[148183]: 2025-10-02T12:36:58Z|00635|binding|INFO|Releasing lport 293fb87a-10df-4698-a69e-3023bca5a6a3 from this chassis (sb_readonly=0)
Oct 02 12:36:58 compute-0 nova_compute[257802]: 2025-10-02 12:36:58.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:58.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:58 compute-0 ovn_controller[148183]: 2025-10-02T12:36:58Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ec:94:f8 10.100.0.8
Oct 02 12:36:58 compute-0 ovn_controller[148183]: 2025-10-02T12:36:58Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:94:f8 10.100.0.8
Oct 02 12:36:58 compute-0 nova_compute[257802]: 2025-10-02 12:36:58.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:59 compute-0 ceph-mon[73607]: pgmap v2281: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 19 KiB/s wr, 226 op/s
Oct 02 12:36:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4287727608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:36:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:59.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2282: 305 pgs: 305 active+clean; 342 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.0 MiB/s wr, 294 op/s
Oct 02 12:37:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:00 compute-0 nova_compute[257802]: 2025-10-02 12:37:00.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:37:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:37:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:00.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:37:01 compute-0 nova_compute[257802]: 2025-10-02 12:37:01.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:37:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:37:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:01.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:37:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2283: 305 pgs: 305 active+clean; 342 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.0 MiB/s wr, 180 op/s
Oct 02 12:37:02 compute-0 ceph-mon[73607]: pgmap v2282: 305 pgs: 305 active+clean; 342 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.0 MiB/s wr, 294 op/s
Oct 02 12:37:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4056615805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:02.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:02 compute-0 nova_compute[257802]: 2025-10-02 12:37:02.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:03 compute-0 nova_compute[257802]: 2025-10-02 12:37:03.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:03.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:03 compute-0 ceph-mon[73607]: pgmap v2283: 305 pgs: 305 active+clean; 342 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.0 MiB/s wr, 180 op/s
Oct 02 12:37:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3648069516' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1656805135' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2284: 305 pgs: 305 active+clean; 369 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.6 MiB/s wr, 195 op/s
Oct 02 12:37:04 compute-0 nova_compute[257802]: 2025-10-02 12:37:04.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:37:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:04.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:05 compute-0 nova_compute[257802]: 2025-10-02 12:37:05.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:37:05 compute-0 ceph-mon[73607]: pgmap v2284: 305 pgs: 305 active+clean; 369 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.6 MiB/s wr, 195 op/s
Oct 02 12:37:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:37:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:05.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:37:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2285: 305 pgs: 305 active+clean; 385 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 583 KiB/s rd, 5.6 MiB/s wr, 143 op/s
Oct 02 12:37:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:06.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:07 compute-0 ceph-mon[73607]: pgmap v2285: 305 pgs: 305 active+clean; 385 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 583 KiB/s rd, 5.6 MiB/s wr, 143 op/s
Oct 02 12:37:07 compute-0 nova_compute[257802]: 2025-10-02 12:37:07.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:07.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2286: 305 pgs: 305 active+clean; 385 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 289 KiB/s rd, 5.6 MiB/s wr, 108 op/s
Oct 02 12:37:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:37:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/33302861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:08.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1727174770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/33302861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:08 compute-0 nova_compute[257802]: 2025-10-02 12:37:08.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:08 compute-0 sudo[346266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:08 compute-0 sudo[346266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:08 compute-0 sudo[346266]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:08 compute-0 sudo[346291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:08 compute-0 sudo[346291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:08 compute-0 sudo[346291]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:09 compute-0 nova_compute[257802]: 2025-10-02 12:37:09.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:37:09 compute-0 nova_compute[257802]: 2025-10-02 12:37:09.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:37:09 compute-0 nova_compute[257802]: 2025-10-02 12:37:09.120 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:37:09 compute-0 ceph-mon[73607]: pgmap v2286: 305 pgs: 305 active+clean; 385 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 289 KiB/s rd, 5.6 MiB/s wr, 108 op/s
Oct 02 12:37:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:09.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2287: 305 pgs: 305 active+clean; 423 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 6.5 MiB/s wr, 220 op/s
Oct 02 12:37:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:37:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:10.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:37:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/545548618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/346022943' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3394177219' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/615945225' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:11.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:11 compute-0 ceph-mon[73607]: pgmap v2287: 305 pgs: 305 active+clean; 423 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 6.5 MiB/s wr, 220 op/s
Oct 02 12:37:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1902971222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4168009933' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2288: 305 pgs: 305 active+clean; 423 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 152 op/s
Oct 02 12:37:12 compute-0 nova_compute[257802]: 2025-10-02 12:37:12.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:37:12 compute-0 nova_compute[257802]: 2025-10-02 12:37:12.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:37:12 compute-0 nova_compute[257802]: 2025-10-02 12:37:12.124 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:12 compute-0 nova_compute[257802]: 2025-10-02 12:37:12.124 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:12 compute-0 nova_compute[257802]: 2025-10-02 12:37:12.125 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:12 compute-0 nova_compute[257802]: 2025-10-02 12:37:12.125 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:37:12 compute-0 nova_compute[257802]: 2025-10-02 12:37:12.125 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:12.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:12 compute-0 nova_compute[257802]: 2025-10-02 12:37:12.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:37:12 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/970457084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:12 compute-0 nova_compute[257802]: 2025-10-02 12:37:12.543 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:12 compute-0 nova_compute[257802]: 2025-10-02 12:37:12.631 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:37:12 compute-0 nova_compute[257802]: 2025-10-02 12:37:12.631 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:37:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:37:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:37:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:37:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:37:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:37:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:37:12 compute-0 ceph-mon[73607]: pgmap v2288: 305 pgs: 305 active+clean; 423 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 152 op/s
Oct 02 12:37:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/970457084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:12 compute-0 nova_compute[257802]: 2025-10-02 12:37:12.785 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:37:12 compute-0 nova_compute[257802]: 2025-10-02 12:37:12.787 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4118MB free_disk=20.825363159179688GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:37:12 compute-0 nova_compute[257802]: 2025-10-02 12:37:12.787 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:12 compute-0 nova_compute[257802]: 2025-10-02 12:37:12.787 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:12 compute-0 nova_compute[257802]: 2025-10-02 12:37:12.928 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 17766045-13fc-4377-848f-6815e8a474d5 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:37:12 compute-0 nova_compute[257802]: 2025-10-02 12:37:12.928 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:37:12 compute-0 nova_compute[257802]: 2025-10-02 12:37:12.929 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:37:12 compute-0 nova_compute[257802]: 2025-10-02 12:37:12.997 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:37:13 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1182315156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:13 compute-0 nova_compute[257802]: 2025-10-02 12:37:13.447 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:13 compute-0 nova_compute[257802]: 2025-10-02 12:37:13.453 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:37:13 compute-0 nova_compute[257802]: 2025-10-02 12:37:13.531 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:37:13 compute-0 nova_compute[257802]: 2025-10-02 12:37:13.588 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:37:13 compute-0 nova_compute[257802]: 2025-10-02 12:37:13.589 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:13 compute-0 nova_compute[257802]: 2025-10-02 12:37:13.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:37:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:13.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:37:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2289: 305 pgs: 305 active+clean; 476 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 6.7 MiB/s wr, 225 op/s
Oct 02 12:37:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1182315156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:14.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:15 compute-0 ceph-mon[73607]: pgmap v2289: 305 pgs: 305 active+clean; 476 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 6.7 MiB/s wr, 225 op/s
Oct 02 12:37:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:15.442 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:37:15 compute-0 nova_compute[257802]: 2025-10-02 12:37:15.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:15.443 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:37:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:15.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2290: 305 pgs: 305 active+clean; 452 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.0 MiB/s wr, 251 op/s
Oct 02 12:37:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:16.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3471199299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:17 compute-0 nova_compute[257802]: 2025-10-02 12:37:17.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:17 compute-0 ceph-mon[73607]: pgmap v2290: 305 pgs: 305 active+clean; 452 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.0 MiB/s wr, 251 op/s
Oct 02 12:37:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:17.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2291: 305 pgs: 305 active+clean; 452 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.0 MiB/s wr, 225 op/s
Oct 02 12:37:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:18.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:18 compute-0 ceph-mon[73607]: pgmap v2291: 305 pgs: 305 active+clean; 452 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.0 MiB/s wr, 225 op/s
Oct 02 12:37:18 compute-0 nova_compute[257802]: 2025-10-02 12:37:18.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:19.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:19 compute-0 nova_compute[257802]: 2025-10-02 12:37:19.731 2 DEBUG oslo_concurrency.lockutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "20f1aa9a-e50f-4610-aaae-2468cccbeb6b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:19 compute-0 nova_compute[257802]: 2025-10-02 12:37:19.732 2 DEBUG oslo_concurrency.lockutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "20f1aa9a-e50f-4610-aaae-2468cccbeb6b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:19 compute-0 nova_compute[257802]: 2025-10-02 12:37:19.761 2 DEBUG nova.compute.manager [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:37:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2292: 305 pgs: 305 active+clean; 451 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 4.0 MiB/s wr, 348 op/s
Oct 02 12:37:19 compute-0 nova_compute[257802]: 2025-10-02 12:37:19.945 2 DEBUG oslo_concurrency.lockutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:19 compute-0 nova_compute[257802]: 2025-10-02 12:37:19.945 2 DEBUG oslo_concurrency.lockutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:19 compute-0 nova_compute[257802]: 2025-10-02 12:37:19.960 2 DEBUG nova.virt.hardware [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:37:19 compute-0 nova_compute[257802]: 2025-10-02 12:37:19.961 2 INFO nova.compute.claims [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:37:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:20 compute-0 sshd-session[346366]: Connection closed by 185.247.137.219 port 47587
Oct 02 12:37:20 compute-0 nova_compute[257802]: 2025-10-02 12:37:20.193 2 DEBUG oslo_concurrency.processutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:37:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:20.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:37:20 compute-0 sshd-session[346369]: Connection closed by 185.247.137.219 port 49171 [preauth]
Oct 02 12:37:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:37:20 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1662611961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:20 compute-0 nova_compute[257802]: 2025-10-02 12:37:20.632 2 DEBUG oslo_concurrency.processutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:20 compute-0 nova_compute[257802]: 2025-10-02 12:37:20.640 2 DEBUG nova.compute.provider_tree [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:37:20 compute-0 nova_compute[257802]: 2025-10-02 12:37:20.681 2 DEBUG nova.scheduler.client.report [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:37:20 compute-0 nova_compute[257802]: 2025-10-02 12:37:20.710 2 DEBUG oslo_concurrency.lockutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:20 compute-0 nova_compute[257802]: 2025-10-02 12:37:20.712 2 DEBUG nova.compute.manager [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:37:20 compute-0 nova_compute[257802]: 2025-10-02 12:37:20.756 2 DEBUG nova.compute.manager [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:37:20 compute-0 nova_compute[257802]: 2025-10-02 12:37:20.757 2 DEBUG nova.network.neutron [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:37:20 compute-0 nova_compute[257802]: 2025-10-02 12:37:20.782 2 INFO nova.virt.libvirt.driver [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:37:20 compute-0 nova_compute[257802]: 2025-10-02 12:37:20.880 2 DEBUG nova.compute.manager [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:37:20 compute-0 ceph-mon[73607]: pgmap v2292: 305 pgs: 305 active+clean; 451 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 4.0 MiB/s wr, 348 op/s
Oct 02 12:37:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1662611961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:21 compute-0 nova_compute[257802]: 2025-10-02 12:37:21.211 2 DEBUG nova.compute.manager [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:37:21 compute-0 nova_compute[257802]: 2025-10-02 12:37:21.213 2 DEBUG nova.virt.libvirt.driver [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:37:21 compute-0 nova_compute[257802]: 2025-10-02 12:37:21.214 2 INFO nova.virt.libvirt.driver [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Creating image(s)
Oct 02 12:37:21 compute-0 nova_compute[257802]: 2025-10-02 12:37:21.249 2 DEBUG nova.storage.rbd_utils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] rbd image 20f1aa9a-e50f-4610-aaae-2468cccbeb6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:21 compute-0 nova_compute[257802]: 2025-10-02 12:37:21.279 2 DEBUG nova.storage.rbd_utils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] rbd image 20f1aa9a-e50f-4610-aaae-2468cccbeb6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:21 compute-0 nova_compute[257802]: 2025-10-02 12:37:21.309 2 DEBUG nova.storage.rbd_utils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] rbd image 20f1aa9a-e50f-4610-aaae-2468cccbeb6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:21 compute-0 nova_compute[257802]: 2025-10-02 12:37:21.314 2 DEBUG oslo_concurrency.processutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:21 compute-0 nova_compute[257802]: 2025-10-02 12:37:21.351 2 DEBUG nova.policy [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fe9cc788734f406d826446a848700331', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bc0d63d3b4404ef8858166e8836dd0af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:37:21 compute-0 nova_compute[257802]: 2025-10-02 12:37:21.390 2 DEBUG oslo_concurrency.processutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:21 compute-0 nova_compute[257802]: 2025-10-02 12:37:21.391 2 DEBUG oslo_concurrency.lockutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:21 compute-0 nova_compute[257802]: 2025-10-02 12:37:21.393 2 DEBUG oslo_concurrency.lockutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:21 compute-0 nova_compute[257802]: 2025-10-02 12:37:21.393 2 DEBUG oslo_concurrency.lockutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:21 compute-0 nova_compute[257802]: 2025-10-02 12:37:21.425 2 DEBUG nova.storage.rbd_utils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] rbd image 20f1aa9a-e50f-4610-aaae-2468cccbeb6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:21 compute-0 nova_compute[257802]: 2025-10-02 12:37:21.431 2 DEBUG oslo_concurrency.processutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 20f1aa9a-e50f-4610-aaae-2468cccbeb6b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.002000047s ======
Oct 02 12:37:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:21.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Oct 02 12:37:21 compute-0 nova_compute[257802]: 2025-10-02 12:37:21.723 2 DEBUG oslo_concurrency.processutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 20f1aa9a-e50f-4610-aaae-2468cccbeb6b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2293: 305 pgs: 305 active+clean; 451 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.1 MiB/s wr, 236 op/s
Oct 02 12:37:21 compute-0 nova_compute[257802]: 2025-10-02 12:37:21.810 2 DEBUG nova.storage.rbd_utils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] resizing rbd image 20f1aa9a-e50f-4610-aaae-2468cccbeb6b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:37:21 compute-0 podman[346541]: 2025-10-02 12:37:21.943657203 +0000 UTC m=+0.067725767 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 02 12:37:21 compute-0 nova_compute[257802]: 2025-10-02 12:37:21.950 2 DEBUG nova.objects.instance [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lazy-loading 'migration_context' on Instance uuid 20f1aa9a-e50f-4610-aaae-2468cccbeb6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:37:21 compute-0 nova_compute[257802]: 2025-10-02 12:37:21.965 2 DEBUG nova.virt.libvirt.driver [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:37:21 compute-0 nova_compute[257802]: 2025-10-02 12:37:21.965 2 DEBUG nova.virt.libvirt.driver [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Ensure instance console log exists: /var/lib/nova/instances/20f1aa9a-e50f-4610-aaae-2468cccbeb6b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:37:21 compute-0 nova_compute[257802]: 2025-10-02 12:37:21.966 2 DEBUG oslo_concurrency.lockutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:21 compute-0 nova_compute[257802]: 2025-10-02 12:37:21.966 2 DEBUG oslo_concurrency.lockutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:21 compute-0 nova_compute[257802]: 2025-10-02 12:37:21.966 2 DEBUG oslo_concurrency.lockutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:21 compute-0 podman[346543]: 2025-10-02 12:37:21.977278716 +0000 UTC m=+0.095129579 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:37:21 compute-0 podman[346542]: 2025-10-02 12:37:21.977364868 +0000 UTC m=+0.098078701 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:37:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:22.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:22 compute-0 nova_compute[257802]: 2025-10-02 12:37:22.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:22 compute-0 ceph-mon[73607]: pgmap v2293: 305 pgs: 305 active+clean; 451 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.1 MiB/s wr, 236 op/s
Oct 02 12:37:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:23.444 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:23 compute-0 nova_compute[257802]: 2025-10-02 12:37:23.588 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:37:23 compute-0 nova_compute[257802]: 2025-10-02 12:37:23.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:23.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2294: 305 pgs: 305 active+clean; 475 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.8 MiB/s wr, 248 op/s
Oct 02 12:37:23 compute-0 nova_compute[257802]: 2025-10-02 12:37:23.844 2 DEBUG nova.network.neutron [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Successfully created port: 6319a656-83a0-492b-ac32-92c9f82f2ec5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:37:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:24.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:24 compute-0 ceph-mon[73607]: pgmap v2294: 305 pgs: 305 active+clean; 475 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.8 MiB/s wr, 248 op/s
Oct 02 12:37:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:25 compute-0 nova_compute[257802]: 2025-10-02 12:37:25.174 2 DEBUG nova.network.neutron [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Successfully updated port: 6319a656-83a0-492b-ac32-92c9f82f2ec5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:37:25 compute-0 nova_compute[257802]: 2025-10-02 12:37:25.199 2 DEBUG oslo_concurrency.lockutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "refresh_cache-20f1aa9a-e50f-4610-aaae-2468cccbeb6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:37:25 compute-0 nova_compute[257802]: 2025-10-02 12:37:25.199 2 DEBUG oslo_concurrency.lockutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquired lock "refresh_cache-20f1aa9a-e50f-4610-aaae-2468cccbeb6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:37:25 compute-0 nova_compute[257802]: 2025-10-02 12:37:25.199 2 DEBUG nova.network.neutron [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:37:25 compute-0 nova_compute[257802]: 2025-10-02 12:37:25.268 2 DEBUG nova.compute.manager [req-29a4d211-4d1d-415e-ba6e-74bb9957d2aa req-72157ec9-4f00-4b70-a283-d525e0eaf5ed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Received event network-changed-6319a656-83a0-492b-ac32-92c9f82f2ec5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:37:25 compute-0 nova_compute[257802]: 2025-10-02 12:37:25.269 2 DEBUG nova.compute.manager [req-29a4d211-4d1d-415e-ba6e-74bb9957d2aa req-72157ec9-4f00-4b70-a283-d525e0eaf5ed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Refreshing instance network info cache due to event network-changed-6319a656-83a0-492b-ac32-92c9f82f2ec5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:37:25 compute-0 nova_compute[257802]: 2025-10-02 12:37:25.269 2 DEBUG oslo_concurrency.lockutils [req-29a4d211-4d1d-415e-ba6e-74bb9957d2aa req-72157ec9-4f00-4b70-a283-d525e0eaf5ed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-20f1aa9a-e50f-4610-aaae-2468cccbeb6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:37:25 compute-0 nova_compute[257802]: 2025-10-02 12:37:25.474 2 DEBUG nova.network.neutron [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:37:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:37:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:25.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:37:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2295: 305 pgs: 305 active+clean; 498 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 190 op/s
Oct 02 12:37:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:26.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:26 compute-0 nova_compute[257802]: 2025-10-02 12:37:26.653 2 DEBUG nova.network.neutron [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Updating instance_info_cache with network_info: [{"id": "6319a656-83a0-492b-ac32-92c9f82f2ec5", "address": "fa:16:3e:45:90:72", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6319a656-83", "ovs_interfaceid": "6319a656-83a0-492b-ac32-92c9f82f2ec5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:37:26 compute-0 nova_compute[257802]: 2025-10-02 12:37:26.680 2 DEBUG oslo_concurrency.lockutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Releasing lock "refresh_cache-20f1aa9a-e50f-4610-aaae-2468cccbeb6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:37:26 compute-0 nova_compute[257802]: 2025-10-02 12:37:26.680 2 DEBUG nova.compute.manager [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Instance network_info: |[{"id": "6319a656-83a0-492b-ac32-92c9f82f2ec5", "address": "fa:16:3e:45:90:72", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6319a656-83", "ovs_interfaceid": "6319a656-83a0-492b-ac32-92c9f82f2ec5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:37:26 compute-0 nova_compute[257802]: 2025-10-02 12:37:26.680 2 DEBUG oslo_concurrency.lockutils [req-29a4d211-4d1d-415e-ba6e-74bb9957d2aa req-72157ec9-4f00-4b70-a283-d525e0eaf5ed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-20f1aa9a-e50f-4610-aaae-2468cccbeb6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:37:26 compute-0 nova_compute[257802]: 2025-10-02 12:37:26.681 2 DEBUG nova.network.neutron [req-29a4d211-4d1d-415e-ba6e-74bb9957d2aa req-72157ec9-4f00-4b70-a283-d525e0eaf5ed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Refreshing network info cache for port 6319a656-83a0-492b-ac32-92c9f82f2ec5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:37:26 compute-0 nova_compute[257802]: 2025-10-02 12:37:26.683 2 DEBUG nova.virt.libvirt.driver [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Start _get_guest_xml network_info=[{"id": "6319a656-83a0-492b-ac32-92c9f82f2ec5", "address": "fa:16:3e:45:90:72", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6319a656-83", "ovs_interfaceid": "6319a656-83a0-492b-ac32-92c9f82f2ec5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:37:26 compute-0 nova_compute[257802]: 2025-10-02 12:37:26.687 2 WARNING nova.virt.libvirt.driver [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:37:26 compute-0 nova_compute[257802]: 2025-10-02 12:37:26.692 2 DEBUG nova.virt.libvirt.host [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:37:26 compute-0 nova_compute[257802]: 2025-10-02 12:37:26.693 2 DEBUG nova.virt.libvirt.host [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:37:26 compute-0 nova_compute[257802]: 2025-10-02 12:37:26.695 2 DEBUG nova.virt.libvirt.host [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:37:26 compute-0 nova_compute[257802]: 2025-10-02 12:37:26.696 2 DEBUG nova.virt.libvirt.host [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:37:26 compute-0 nova_compute[257802]: 2025-10-02 12:37:26.696 2 DEBUG nova.virt.libvirt.driver [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:37:26 compute-0 nova_compute[257802]: 2025-10-02 12:37:26.697 2 DEBUG nova.virt.hardware [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:37:26 compute-0 nova_compute[257802]: 2025-10-02 12:37:26.697 2 DEBUG nova.virt.hardware [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:37:26 compute-0 nova_compute[257802]: 2025-10-02 12:37:26.698 2 DEBUG nova.virt.hardware [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:37:26 compute-0 nova_compute[257802]: 2025-10-02 12:37:26.698 2 DEBUG nova.virt.hardware [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:37:26 compute-0 nova_compute[257802]: 2025-10-02 12:37:26.698 2 DEBUG nova.virt.hardware [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:37:26 compute-0 nova_compute[257802]: 2025-10-02 12:37:26.698 2 DEBUG nova.virt.hardware [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:37:26 compute-0 nova_compute[257802]: 2025-10-02 12:37:26.698 2 DEBUG nova.virt.hardware [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:37:26 compute-0 nova_compute[257802]: 2025-10-02 12:37:26.699 2 DEBUG nova.virt.hardware [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:37:26 compute-0 nova_compute[257802]: 2025-10-02 12:37:26.699 2 DEBUG nova.virt.hardware [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:37:26 compute-0 nova_compute[257802]: 2025-10-02 12:37:26.699 2 DEBUG nova.virt.hardware [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:37:26 compute-0 nova_compute[257802]: 2025-10-02 12:37:26.699 2 DEBUG nova.virt.hardware [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:37:26 compute-0 nova_compute[257802]: 2025-10-02 12:37:26.702 2 DEBUG oslo_concurrency.processutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:26.955 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:26.960 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:26.960 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:27 compute-0 ceph-mon[73607]: pgmap v2295: 305 pgs: 305 active+clean; 498 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 190 op/s
Oct 02 12:37:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2525186184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:37:27 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3228938636' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.185 2 DEBUG oslo_concurrency.processutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.211 2 DEBUG nova.storage.rbd_utils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] rbd image 20f1aa9a-e50f-4610-aaae-2468cccbeb6b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.216 2 DEBUG oslo_concurrency.processutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:27.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:37:27 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3652196126' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.655 2 DEBUG oslo_concurrency.processutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.657 2 DEBUG nova.virt.libvirt.vif [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:37:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-423590868',display_name='tempest-ServersTestJSON-server-423590868',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-423590868',id=146,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bc0d63d3b4404ef8858166e8836dd0af',ramdisk_id='',reservation_id='r-5uena2bt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-80077074',owner_user_name='tempest-ServersTestJSON-80077074-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:37:21Z,user_data=None,user_id='fe9cc788734f406d826446a848700331',uuid=20f1aa9a-e50f-4610-aaae-2468cccbeb6b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6319a656-83a0-492b-ac32-92c9f82f2ec5", "address": "fa:16:3e:45:90:72", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6319a656-83", "ovs_interfaceid": "6319a656-83a0-492b-ac32-92c9f82f2ec5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.657 2 DEBUG nova.network.os_vif_util [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Converting VIF {"id": "6319a656-83a0-492b-ac32-92c9f82f2ec5", "address": "fa:16:3e:45:90:72", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6319a656-83", "ovs_interfaceid": "6319a656-83a0-492b-ac32-92c9f82f2ec5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.658 2 DEBUG nova.network.os_vif_util [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:90:72,bridge_name='br-int',has_traffic_filtering=True,id=6319a656-83a0-492b-ac32-92c9f82f2ec5,network=Network(d7203b00-e5e4-402e-b777-ac6280fa23ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6319a656-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.659 2 DEBUG nova.objects.instance [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lazy-loading 'pci_devices' on Instance uuid 20f1aa9a-e50f-4610-aaae-2468cccbeb6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.687 2 DEBUG nova.virt.libvirt.driver [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:37:27 compute-0 nova_compute[257802]:   <uuid>20f1aa9a-e50f-4610-aaae-2468cccbeb6b</uuid>
Oct 02 12:37:27 compute-0 nova_compute[257802]:   <name>instance-00000092</name>
Oct 02 12:37:27 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:37:27 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:37:27 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:37:27 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:       <nova:name>tempest-ServersTestJSON-server-423590868</nova:name>
Oct 02 12:37:27 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:37:26</nova:creationTime>
Oct 02 12:37:27 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:37:27 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:37:27 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:37:27 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:37:27 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:37:27 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:37:27 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:37:27 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:37:27 compute-0 nova_compute[257802]:         <nova:user uuid="fe9cc788734f406d826446a848700331">tempest-ServersTestJSON-80077074-project-member</nova:user>
Oct 02 12:37:27 compute-0 nova_compute[257802]:         <nova:project uuid="bc0d63d3b4404ef8858166e8836dd0af">tempest-ServersTestJSON-80077074</nova:project>
Oct 02 12:37:27 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:37:27 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:37:27 compute-0 nova_compute[257802]:         <nova:port uuid="6319a656-83a0-492b-ac32-92c9f82f2ec5">
Oct 02 12:37:27 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:37:27 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:37:27 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:37:27 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <system>
Oct 02 12:37:27 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:37:27 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:37:27 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:37:27 compute-0 nova_compute[257802]:       <entry name="serial">20f1aa9a-e50f-4610-aaae-2468cccbeb6b</entry>
Oct 02 12:37:27 compute-0 nova_compute[257802]:       <entry name="uuid">20f1aa9a-e50f-4610-aaae-2468cccbeb6b</entry>
Oct 02 12:37:27 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     </system>
Oct 02 12:37:27 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:37:27 compute-0 nova_compute[257802]:   <os>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:   </os>
Oct 02 12:37:27 compute-0 nova_compute[257802]:   <features>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:   </features>
Oct 02 12:37:27 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:37:27 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:37:27 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:37:27 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/20f1aa9a-e50f-4610-aaae-2468cccbeb6b_disk">
Oct 02 12:37:27 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:       </source>
Oct 02 12:37:27 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:37:27 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:37:27 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:37:27 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/20f1aa9a-e50f-4610-aaae-2468cccbeb6b_disk.config">
Oct 02 12:37:27 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:       </source>
Oct 02 12:37:27 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:37:27 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:37:27 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:37:27 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:45:90:72"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:       <target dev="tap6319a656-83"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:37:27 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/20f1aa9a-e50f-4610-aaae-2468cccbeb6b/console.log" append="off"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <video>
Oct 02 12:37:27 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     </video>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:37:27 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:37:27 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:37:27 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:37:27 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:37:27 compute-0 nova_compute[257802]: </domain>
Oct 02 12:37:27 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.687 2 DEBUG nova.compute.manager [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Preparing to wait for external event network-vif-plugged-6319a656-83a0-492b-ac32-92c9f82f2ec5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.688 2 DEBUG oslo_concurrency.lockutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "20f1aa9a-e50f-4610-aaae-2468cccbeb6b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.688 2 DEBUG oslo_concurrency.lockutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "20f1aa9a-e50f-4610-aaae-2468cccbeb6b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.688 2 DEBUG oslo_concurrency.lockutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "20f1aa9a-e50f-4610-aaae-2468cccbeb6b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.689 2 DEBUG nova.virt.libvirt.vif [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:37:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-423590868',display_name='tempest-ServersTestJSON-server-423590868',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-423590868',id=146,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bc0d63d3b4404ef8858166e8836dd0af',ramdisk_id='',reservation_id='r-5uena2bt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-80077074',owner_user_name='tempest-ServersTestJSON-80077074-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:37:21Z,user_data=None,user_id='fe9cc788734f406d826446a848700331',uuid=20f1aa9a-e50f-4610-aaae-2468cccbeb6b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6319a656-83a0-492b-ac32-92c9f82f2ec5", "address": "fa:16:3e:45:90:72", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6319a656-83", "ovs_interfaceid": "6319a656-83a0-492b-ac32-92c9f82f2ec5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.689 2 DEBUG nova.network.os_vif_util [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Converting VIF {"id": "6319a656-83a0-492b-ac32-92c9f82f2ec5", "address": "fa:16:3e:45:90:72", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6319a656-83", "ovs_interfaceid": "6319a656-83a0-492b-ac32-92c9f82f2ec5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.690 2 DEBUG nova.network.os_vif_util [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:90:72,bridge_name='br-int',has_traffic_filtering=True,id=6319a656-83a0-492b-ac32-92c9f82f2ec5,network=Network(d7203b00-e5e4-402e-b777-ac6280fa23ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6319a656-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.690 2 DEBUG os_vif [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:90:72,bridge_name='br-int',has_traffic_filtering=True,id=6319a656-83a0-492b-ac32-92c9f82f2ec5,network=Network(d7203b00-e5e4-402e-b777-ac6280fa23ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6319a656-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.691 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.691 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.694 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6319a656-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.694 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6319a656-83, col_values=(('external_ids', {'iface-id': '6319a656-83a0-492b-ac32-92c9f82f2ec5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:90:72', 'vm-uuid': '20f1aa9a-e50f-4610-aaae-2468cccbeb6b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:27 compute-0 NetworkManager[44987]: <info>  [1759408647.6969] manager: (tap6319a656-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/291)
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.703 2 INFO os_vif [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:90:72,bridge_name='br-int',has_traffic_filtering=True,id=6319a656-83a0-492b-ac32-92c9f82f2ec5,network=Network(d7203b00-e5e4-402e-b777-ac6280fa23ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6319a656-83')
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.758 2 DEBUG nova.virt.libvirt.driver [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.758 2 DEBUG nova.virt.libvirt.driver [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.758 2 DEBUG nova.virt.libvirt.driver [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] No VIF found with MAC fa:16:3e:45:90:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.759 2 INFO nova.virt.libvirt.driver [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Using config drive
Oct 02 12:37:27 compute-0 nova_compute[257802]: 2025-10-02 12:37:27.787 2 DEBUG nova.storage.rbd_utils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] rbd image 20f1aa9a-e50f-4610-aaae-2468cccbeb6b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2296: 305 pgs: 305 active+clean; 498 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 150 op/s
Oct 02 12:37:27 compute-0 podman[346697]: 2025-10-02 12:37:27.930460497 +0000 UTC m=+0.072717540 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 12:37:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3228938636' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3652196126' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:28.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:28 compute-0 nova_compute[257802]: 2025-10-02 12:37:28.461 2 INFO nova.virt.libvirt.driver [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Creating config drive at /var/lib/nova/instances/20f1aa9a-e50f-4610-aaae-2468cccbeb6b/disk.config
Oct 02 12:37:28 compute-0 nova_compute[257802]: 2025-10-02 12:37:28.465 2 DEBUG oslo_concurrency.processutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/20f1aa9a-e50f-4610-aaae-2468cccbeb6b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpohnw_2mw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:28 compute-0 nova_compute[257802]: 2025-10-02 12:37:28.571 2 DEBUG nova.network.neutron [req-29a4d211-4d1d-415e-ba6e-74bb9957d2aa req-72157ec9-4f00-4b70-a283-d525e0eaf5ed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Updated VIF entry in instance network info cache for port 6319a656-83a0-492b-ac32-92c9f82f2ec5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:37:28 compute-0 nova_compute[257802]: 2025-10-02 12:37:28.572 2 DEBUG nova.network.neutron [req-29a4d211-4d1d-415e-ba6e-74bb9957d2aa req-72157ec9-4f00-4b70-a283-d525e0eaf5ed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Updating instance_info_cache with network_info: [{"id": "6319a656-83a0-492b-ac32-92c9f82f2ec5", "address": "fa:16:3e:45:90:72", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6319a656-83", "ovs_interfaceid": "6319a656-83a0-492b-ac32-92c9f82f2ec5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:37:28 compute-0 nova_compute[257802]: 2025-10-02 12:37:28.594 2 DEBUG oslo_concurrency.lockutils [req-29a4d211-4d1d-415e-ba6e-74bb9957d2aa req-72157ec9-4f00-4b70-a283-d525e0eaf5ed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-20f1aa9a-e50f-4610-aaae-2468cccbeb6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:37:28 compute-0 nova_compute[257802]: 2025-10-02 12:37:28.599 2 DEBUG oslo_concurrency.processutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/20f1aa9a-e50f-4610-aaae-2468cccbeb6b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpohnw_2mw" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:28 compute-0 nova_compute[257802]: 2025-10-02 12:37:28.629 2 DEBUG nova.storage.rbd_utils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] rbd image 20f1aa9a-e50f-4610-aaae-2468cccbeb6b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:28 compute-0 nova_compute[257802]: 2025-10-02 12:37:28.633 2 DEBUG oslo_concurrency.processutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/20f1aa9a-e50f-4610-aaae-2468cccbeb6b/disk.config 20f1aa9a-e50f-4610-aaae-2468cccbeb6b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:28 compute-0 nova_compute[257802]: 2025-10-02 12:37:28.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:28 compute-0 sudo[346762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:28 compute-0 sudo[346762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:28 compute-0 sudo[346762]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:29 compute-0 sudo[346787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:29 compute-0 sudo[346787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:29 compute-0 sudo[346787]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:29 compute-0 nova_compute[257802]: 2025-10-02 12:37:29.601 2 DEBUG oslo_concurrency.processutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/20f1aa9a-e50f-4610-aaae-2468cccbeb6b/disk.config 20f1aa9a-e50f-4610-aaae-2468cccbeb6b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.968s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:29 compute-0 nova_compute[257802]: 2025-10-02 12:37:29.602 2 INFO nova.virt.libvirt.driver [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Deleting local config drive /var/lib/nova/instances/20f1aa9a-e50f-4610-aaae-2468cccbeb6b/disk.config because it was imported into RBD.
Oct 02 12:37:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:29.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:29 compute-0 NetworkManager[44987]: <info>  [1759408649.6513] manager: (tap6319a656-83): new Tun device (/org/freedesktop/NetworkManager/Devices/292)
Oct 02 12:37:29 compute-0 kernel: tap6319a656-83: entered promiscuous mode
Oct 02 12:37:29 compute-0 ovn_controller[148183]: 2025-10-02T12:37:29Z|00636|binding|INFO|Claiming lport 6319a656-83a0-492b-ac32-92c9f82f2ec5 for this chassis.
Oct 02 12:37:29 compute-0 ovn_controller[148183]: 2025-10-02T12:37:29Z|00637|binding|INFO|6319a656-83a0-492b-ac32-92c9f82f2ec5: Claiming fa:16:3e:45:90:72 10.100.0.14
Oct 02 12:37:29 compute-0 nova_compute[257802]: 2025-10-02 12:37:29.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:29.661 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:90:72 10.100.0.14'], port_security=['fa:16:3e:45:90:72 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '20f1aa9a-e50f-4610-aaae-2468cccbeb6b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bc0d63d3b4404ef8858166e8836dd0af', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2a11ff87-bec6-4638-b302-adcd655efba9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=797b6af2-473b-4626-9e97-a0a489119419, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=6319a656-83a0-492b-ac32-92c9f82f2ec5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:29.662 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 6319a656-83a0-492b-ac32-92c9f82f2ec5 in datapath d7203b00-e5e4-402e-b777-ac6280fa23ac bound to our chassis
Oct 02 12:37:29 compute-0 ceph-mon[73607]: pgmap v2296: 305 pgs: 305 active+clean; 498 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 150 op/s
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:29.663 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7203b00-e5e4-402e-b777-ac6280fa23ac
Oct 02 12:37:29 compute-0 nova_compute[257802]: 2025-10-02 12:37:29.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:29 compute-0 ovn_controller[148183]: 2025-10-02T12:37:29Z|00638|binding|INFO|Setting lport 6319a656-83a0-492b-ac32-92c9f82f2ec5 ovn-installed in OVS
Oct 02 12:37:29 compute-0 ovn_controller[148183]: 2025-10-02T12:37:29Z|00639|binding|INFO|Setting lport 6319a656-83a0-492b-ac32-92c9f82f2ec5 up in Southbound
Oct 02 12:37:29 compute-0 nova_compute[257802]: 2025-10-02 12:37:29.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:29.676 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[bb990cbb-85ee-4ace-8b7f-df648cdf0296]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:29.677 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd7203b00-e1 in ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:29.679 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd7203b00-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:29.679 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7c95dec1-edf4-43bf-aa55-c1a7bc596c8b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:29.680 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6bfc1d0c-82d0-4db9-aa5b-58ce6cd6043f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:29 compute-0 systemd-udevd[346829]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:37:29 compute-0 systemd-machined[211836]: New machine qemu-73-instance-00000092.
Oct 02 12:37:29 compute-0 NetworkManager[44987]: <info>  [1759408649.6929] device (tap6319a656-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:37:29 compute-0 NetworkManager[44987]: <info>  [1759408649.6945] device (tap6319a656-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:29.694 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[b60adab8-2bd2-4cc1-b5af-815927d397c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:29 compute-0 systemd[1]: Started Virtual Machine qemu-73-instance-00000092.
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:29.717 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f9cf7186-1698-4416-ab6c-ff4fa21bedf7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:29.745 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[bbb9b2d6-8330-4cb7-89f5-1877c848f834]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:29.752 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ddab6067-df8d-4595-9f29-adaa0cb6a270]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:29 compute-0 NetworkManager[44987]: <info>  [1759408649.7536] manager: (tapd7203b00-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/293)
Oct 02 12:37:29 compute-0 systemd-udevd[346832]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:29.787 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[5bf277a4-ad69-4a52-9ee6-be16f5581449]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:29.790 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[c563f905-93ae-45df-8885-c22ec80d5b38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2297: 305 pgs: 305 active+clean; 587 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 6.6 MiB/s wr, 304 op/s
Oct 02 12:37:29 compute-0 NetworkManager[44987]: <info>  [1759408649.8156] device (tapd7203b00-e0): carrier: link connected
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:29.826 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[8889dc3f-23d5-4018-af17-efd47425a0c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:29.847 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b36083f6-f739-4457-8dd5-c46d6a448618]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7203b00-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6a:c4:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671744, 'reachable_time': 20928, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346861, 'error': None, 'target': 'ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:29.863 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d68e3517-471a-40c0-be17-3d45ee11a65c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6a:c4e9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 671744, 'tstamp': 671744}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346862, 'error': None, 'target': 'ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:29.885 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4ec6172c-bd8e-4647-895d-c0b221d250bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7203b00-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6a:c4:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671744, 'reachable_time': 20928, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 346863, 'error': None, 'target': 'ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:29 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Oct 02 12:37:29 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:37:29.901601) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:37:29 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Oct 02 12:37:29 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408649901655, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 756, "num_deletes": 251, "total_data_size": 1005020, "memory_usage": 1020128, "flush_reason": "Manual Compaction"}
Oct 02 12:37:29 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:29.915 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7e66b838-c113-4eae-9354-2b1c20309f4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:29.972 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[98d81813-7c3c-4eb9-8be0-b6a29ca8cb0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:29.974 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7203b00-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:29.974 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:29.974 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7203b00-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:29 compute-0 nova_compute[257802]: 2025-10-02 12:37:29.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:29 compute-0 NetworkManager[44987]: <info>  [1759408649.9771] manager: (tapd7203b00-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/294)
Oct 02 12:37:29 compute-0 kernel: tapd7203b00-e0: entered promiscuous mode
Oct 02 12:37:29 compute-0 nova_compute[257802]: 2025-10-02 12:37:29.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:29.980 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7203b00-e0, col_values=(('external_ids', {'iface-id': '6f9d54ba-3cfb-48b9-bef7-b2077e6931d7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:29 compute-0 nova_compute[257802]: 2025-10-02 12:37:29.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:29 compute-0 ovn_controller[148183]: 2025-10-02T12:37:29Z|00640|binding|INFO|Releasing lport 6f9d54ba-3cfb-48b9-bef7-b2077e6931d7 from this chassis (sb_readonly=0)
Oct 02 12:37:29 compute-0 nova_compute[257802]: 2025-10-02 12:37:29.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:29 compute-0 nova_compute[257802]: 2025-10-02 12:37:29.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:29.997 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d7203b00-e5e4-402e-b777-ac6280fa23ac.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d7203b00-e5e4-402e-b777-ac6280fa23ac.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:29.998 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d8191d19-de19-4fd5-b132-a8c2b9e5b82a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:29.999 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-d7203b00-e5e4-402e-b777-ac6280fa23ac
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/d7203b00-e5e4-402e-b777-ac6280fa23ac.pid.haproxy
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID d7203b00-e5e4-402e-b777-ac6280fa23ac
Oct 02 12:37:29 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:37:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:30.001 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'env', 'PROCESS_TAG=haproxy-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d7203b00-e5e4-402e-b777-ac6280fa23ac.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:37:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:30 compute-0 nova_compute[257802]: 2025-10-02 12:37:30.025 2 DEBUG nova.compute.manager [req-54fd3282-6b81-4f52-9dd8-ef7ef6aaeb23 req-d915d7de-5a9b-4c04-864b-7cca53e43e8a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Received event network-vif-plugged-6319a656-83a0-492b-ac32-92c9f82f2ec5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:37:30 compute-0 nova_compute[257802]: 2025-10-02 12:37:30.025 2 DEBUG oslo_concurrency.lockutils [req-54fd3282-6b81-4f52-9dd8-ef7ef6aaeb23 req-d915d7de-5a9b-4c04-864b-7cca53e43e8a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "20f1aa9a-e50f-4610-aaae-2468cccbeb6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:30 compute-0 nova_compute[257802]: 2025-10-02 12:37:30.026 2 DEBUG oslo_concurrency.lockutils [req-54fd3282-6b81-4f52-9dd8-ef7ef6aaeb23 req-d915d7de-5a9b-4c04-864b-7cca53e43e8a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f1aa9a-e50f-4610-aaae-2468cccbeb6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:30 compute-0 nova_compute[257802]: 2025-10-02 12:37:30.026 2 DEBUG oslo_concurrency.lockutils [req-54fd3282-6b81-4f52-9dd8-ef7ef6aaeb23 req-d915d7de-5a9b-4c04-864b-7cca53e43e8a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f1aa9a-e50f-4610-aaae-2468cccbeb6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:30 compute-0 nova_compute[257802]: 2025-10-02 12:37:30.026 2 DEBUG nova.compute.manager [req-54fd3282-6b81-4f52-9dd8-ef7ef6aaeb23 req-d915d7de-5a9b-4c04-864b-7cca53e43e8a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Processing event network-vif-plugged-6319a656-83a0-492b-ac32-92c9f82f2ec5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:37:30 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408650117004, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 993471, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50570, "largest_seqno": 51325, "table_properties": {"data_size": 989649, "index_size": 1602, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9032, "raw_average_key_size": 19, "raw_value_size": 981854, "raw_average_value_size": 2143, "num_data_blocks": 70, "num_entries": 458, "num_filter_entries": 458, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408595, "oldest_key_time": 1759408595, "file_creation_time": 1759408649, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:37:30 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 215441 microseconds, and 3615 cpu microseconds.
Oct 02 12:37:30 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:37:30 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:37:30.117048) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 993471 bytes OK
Oct 02 12:37:30 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:37:30.117067) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Oct 02 12:37:30 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:37:30.120674) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Oct 02 12:37:30 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:37:30.120745) EVENT_LOG_v1 {"time_micros": 1759408650120736, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:37:30 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:37:30.120766) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:37:30 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 1001217, prev total WAL file size 1001217, number of live WAL files 2.
Oct 02 12:37:30 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:37:30 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:37:30.121509) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Oct 02 12:37:30 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:37:30 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(970KB)], [110(12MB)]
Oct 02 12:37:30 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408650121572, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 14329738, "oldest_snapshot_seqno": -1}
Oct 02 12:37:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:37:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:30.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:37:30 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 7931 keys, 12464437 bytes, temperature: kUnknown
Oct 02 12:37:30 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408650363108, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 12464437, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12410150, "index_size": 33375, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19845, "raw_key_size": 205739, "raw_average_key_size": 25, "raw_value_size": 12267569, "raw_average_value_size": 1546, "num_data_blocks": 1312, "num_entries": 7931, "num_filter_entries": 7931, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759408650, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:37:30 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:37:30 compute-0 podman[346935]: 2025-10-02 12:37:30.339241071 +0000 UTC m=+0.023063575 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:37:30 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:37:30.363359) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 12464437 bytes
Oct 02 12:37:30 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:37:30.560625) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 59.3 rd, 51.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 12.7 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(27.0) write-amplify(12.5) OK, records in: 8444, records dropped: 513 output_compression: NoCompression
Oct 02 12:37:30 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:37:30.560663) EVENT_LOG_v1 {"time_micros": 1759408650560648, "job": 66, "event": "compaction_finished", "compaction_time_micros": 241605, "compaction_time_cpu_micros": 26129, "output_level": 6, "num_output_files": 1, "total_output_size": 12464437, "num_input_records": 8444, "num_output_records": 7931, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:37:30 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:37:30 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408650561005, "job": 66, "event": "table_file_deletion", "file_number": 112}
Oct 02 12:37:30 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:37:30 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408650562721, "job": 66, "event": "table_file_deletion", "file_number": 110}
Oct 02 12:37:30 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:37:30.121271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:37:30 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:37:30.562822) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:37:30 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:37:30.562845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:37:30 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:37:30.562848) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:37:30 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:37:30.562850) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:37:30 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:37:30.562852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:37:30 compute-0 nova_compute[257802]: 2025-10-02 12:37:30.704 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408650.703973, 20f1aa9a-e50f-4610-aaae-2468cccbeb6b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:37:30 compute-0 nova_compute[257802]: 2025-10-02 12:37:30.704 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] VM Started (Lifecycle Event)
Oct 02 12:37:30 compute-0 nova_compute[257802]: 2025-10-02 12:37:30.706 2 DEBUG nova.compute.manager [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:37:30 compute-0 nova_compute[257802]: 2025-10-02 12:37:30.710 2 DEBUG nova.virt.libvirt.driver [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:37:30 compute-0 nova_compute[257802]: 2025-10-02 12:37:30.713 2 INFO nova.virt.libvirt.driver [-] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Instance spawned successfully.
Oct 02 12:37:30 compute-0 nova_compute[257802]: 2025-10-02 12:37:30.713 2 DEBUG nova.virt.libvirt.driver [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:37:30 compute-0 nova_compute[257802]: 2025-10-02 12:37:30.749 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:37:30 compute-0 nova_compute[257802]: 2025-10-02 12:37:30.753 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:37:30 compute-0 nova_compute[257802]: 2025-10-02 12:37:30.776 2 DEBUG nova.virt.libvirt.driver [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:37:30 compute-0 nova_compute[257802]: 2025-10-02 12:37:30.776 2 DEBUG nova.virt.libvirt.driver [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:37:30 compute-0 nova_compute[257802]: 2025-10-02 12:37:30.776 2 DEBUG nova.virt.libvirt.driver [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:37:30 compute-0 nova_compute[257802]: 2025-10-02 12:37:30.777 2 DEBUG nova.virt.libvirt.driver [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:37:30 compute-0 nova_compute[257802]: 2025-10-02 12:37:30.777 2 DEBUG nova.virt.libvirt.driver [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:37:30 compute-0 nova_compute[257802]: 2025-10-02 12:37:30.777 2 DEBUG nova.virt.libvirt.driver [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:37:30 compute-0 nova_compute[257802]: 2025-10-02 12:37:30.792 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:37:30 compute-0 nova_compute[257802]: 2025-10-02 12:37:30.793 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408650.7041361, 20f1aa9a-e50f-4610-aaae-2468cccbeb6b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:37:30 compute-0 nova_compute[257802]: 2025-10-02 12:37:30.793 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] VM Paused (Lifecycle Event)
Oct 02 12:37:30 compute-0 nova_compute[257802]: 2025-10-02 12:37:30.909 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:37:30 compute-0 nova_compute[257802]: 2025-10-02 12:37:30.912 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408650.709058, 20f1aa9a-e50f-4610-aaae-2468cccbeb6b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:37:30 compute-0 nova_compute[257802]: 2025-10-02 12:37:30.912 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] VM Resumed (Lifecycle Event)
Oct 02 12:37:31 compute-0 nova_compute[257802]: 2025-10-02 12:37:31.017 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:37:31 compute-0 nova_compute[257802]: 2025-10-02 12:37:31.020 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:37:31 compute-0 nova_compute[257802]: 2025-10-02 12:37:31.064 2 INFO nova.compute.manager [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Took 9.85 seconds to spawn the instance on the hypervisor.
Oct 02 12:37:31 compute-0 nova_compute[257802]: 2025-10-02 12:37:31.064 2 DEBUG nova.compute.manager [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:37:31 compute-0 nova_compute[257802]: 2025-10-02 12:37:31.156 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:37:31 compute-0 ceph-mon[73607]: pgmap v2297: 305 pgs: 305 active+clean; 587 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 6.6 MiB/s wr, 304 op/s
Oct 02 12:37:31 compute-0 podman[346935]: 2025-10-02 12:37:31.208373375 +0000 UTC m=+0.892195859 container create 277018ee595bd3f7184aed1117772644f826e83aa0d83aef39f24e8beb92f4a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:37:31 compute-0 nova_compute[257802]: 2025-10-02 12:37:31.313 2 INFO nova.compute.manager [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Took 11.41 seconds to build instance.
Oct 02 12:37:31 compute-0 nova_compute[257802]: 2025-10-02 12:37:31.573 2 DEBUG oslo_concurrency.lockutils [None req-ff6862fc-acb4-4b8d-aaf8-94b53165b082 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "20f1aa9a-e50f-4610-aaae-2468cccbeb6b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:31.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:31 compute-0 systemd[1]: Started libpod-conmon-277018ee595bd3f7184aed1117772644f826e83aa0d83aef39f24e8beb92f4a3.scope.
Oct 02 12:37:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2298: 305 pgs: 305 active+clean; 587 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 633 KiB/s rd, 6.6 MiB/s wr, 181 op/s
Oct 02 12:37:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:37:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/295b1ebdfde190380d1465eb6122f7262ed814a4ce8a44fa24981eab0f36dddc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:37:31 compute-0 podman[346935]: 2025-10-02 12:37:31.948098764 +0000 UTC m=+1.631921278 container init 277018ee595bd3f7184aed1117772644f826e83aa0d83aef39f24e8beb92f4a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:37:31 compute-0 podman[346935]: 2025-10-02 12:37:31.957418922 +0000 UTC m=+1.641241406 container start 277018ee595bd3f7184aed1117772644f826e83aa0d83aef39f24e8beb92f4a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:37:31 compute-0 neutron-haproxy-ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac[346952]: [NOTICE]   (346956) : New worker (346958) forked
Oct 02 12:37:31 compute-0 neutron-haproxy-ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac[346952]: [NOTICE]   (346956) : Loading success.
Oct 02 12:37:32 compute-0 nova_compute[257802]: 2025-10-02 12:37:32.113 2 DEBUG nova.compute.manager [req-a217524d-1693-4f36-8c46-3efe683ac56c req-23a25589-f206-491f-8228-54e7adfb0ac7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Received event network-vif-plugged-6319a656-83a0-492b-ac32-92c9f82f2ec5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:37:32 compute-0 nova_compute[257802]: 2025-10-02 12:37:32.113 2 DEBUG oslo_concurrency.lockutils [req-a217524d-1693-4f36-8c46-3efe683ac56c req-23a25589-f206-491f-8228-54e7adfb0ac7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "20f1aa9a-e50f-4610-aaae-2468cccbeb6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:32 compute-0 nova_compute[257802]: 2025-10-02 12:37:32.113 2 DEBUG oslo_concurrency.lockutils [req-a217524d-1693-4f36-8c46-3efe683ac56c req-23a25589-f206-491f-8228-54e7adfb0ac7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f1aa9a-e50f-4610-aaae-2468cccbeb6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:32 compute-0 nova_compute[257802]: 2025-10-02 12:37:32.113 2 DEBUG oslo_concurrency.lockutils [req-a217524d-1693-4f36-8c46-3efe683ac56c req-23a25589-f206-491f-8228-54e7adfb0ac7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f1aa9a-e50f-4610-aaae-2468cccbeb6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:32 compute-0 nova_compute[257802]: 2025-10-02 12:37:32.114 2 DEBUG nova.compute.manager [req-a217524d-1693-4f36-8c46-3efe683ac56c req-23a25589-f206-491f-8228-54e7adfb0ac7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] No waiting events found dispatching network-vif-plugged-6319a656-83a0-492b-ac32-92c9f82f2ec5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:37:32 compute-0 nova_compute[257802]: 2025-10-02 12:37:32.114 2 WARNING nova.compute.manager [req-a217524d-1693-4f36-8c46-3efe683ac56c req-23a25589-f206-491f-8228-54e7adfb0ac7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Received unexpected event network-vif-plugged-6319a656-83a0-492b-ac32-92c9f82f2ec5 for instance with vm_state active and task_state None.
Oct 02 12:37:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e331 do_prune osdmap full prune enabled
Oct 02 12:37:32 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3185048013' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:32 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3789338036' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e332 e332: 3 total, 3 up, 3 in
Oct 02 12:37:32 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e332: 3 total, 3 up, 3 in
Oct 02 12:37:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:32.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:32 compute-0 nova_compute[257802]: 2025-10-02 12:37:32.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:33 compute-0 nova_compute[257802]: 2025-10-02 12:37:33.282 2 DEBUG oslo_concurrency.lockutils [None req-302df176-fe04-4ea3-81b8-8e9df1a1f22f fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "20f1aa9a-e50f-4610-aaae-2468cccbeb6b" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:33 compute-0 nova_compute[257802]: 2025-10-02 12:37:33.283 2 DEBUG oslo_concurrency.lockutils [None req-302df176-fe04-4ea3-81b8-8e9df1a1f22f fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "20f1aa9a-e50f-4610-aaae-2468cccbeb6b" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:33 compute-0 nova_compute[257802]: 2025-10-02 12:37:33.283 2 DEBUG nova.compute.manager [None req-302df176-fe04-4ea3-81b8-8e9df1a1f22f fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:37:33 compute-0 nova_compute[257802]: 2025-10-02 12:37:33.286 2 DEBUG nova.compute.manager [None req-302df176-fe04-4ea3-81b8-8e9df1a1f22f fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 02 12:37:33 compute-0 nova_compute[257802]: 2025-10-02 12:37:33.287 2 DEBUG nova.objects.instance [None req-302df176-fe04-4ea3-81b8-8e9df1a1f22f fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lazy-loading 'flavor' on Instance uuid 20f1aa9a-e50f-4610-aaae-2468cccbeb6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:37:33 compute-0 nova_compute[257802]: 2025-10-02 12:37:33.311 2 DEBUG nova.virt.libvirt.driver [None req-302df176-fe04-4ea3-81b8-8e9df1a1f22f fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:37:33 compute-0 ceph-mon[73607]: pgmap v2298: 305 pgs: 305 active+clean; 587 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 633 KiB/s rd, 6.6 MiB/s wr, 181 op/s
Oct 02 12:37:33 compute-0 ceph-mon[73607]: osdmap e332: 3 total, 3 up, 3 in
Oct 02 12:37:33 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 7.
Oct 02 12:37:33 compute-0 nova_compute[257802]: 2025-10-02 12:37:33.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:33.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2300: 305 pgs: 305 active+clean; 589 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 10 MiB/s wr, 294 op/s
Oct 02 12:37:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:37:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:34.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:37:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e332 do_prune osdmap full prune enabled
Oct 02 12:37:34 compute-0 ceph-mon[73607]: pgmap v2300: 305 pgs: 305 active+clean; 589 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 10 MiB/s wr, 294 op/s
Oct 02 12:37:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e333 e333: 3 total, 3 up, 3 in
Oct 02 12:37:34 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e333: 3 total, 3 up, 3 in
Oct 02 12:37:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:35.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2302: 305 pgs: 305 active+clean; 599 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.7 MiB/s rd, 15 MiB/s wr, 482 op/s
Oct 02 12:37:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e333 do_prune osdmap full prune enabled
Oct 02 12:37:36 compute-0 ceph-mon[73607]: osdmap e333: 3 total, 3 up, 3 in
Oct 02 12:37:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:36.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e334 e334: 3 total, 3 up, 3 in
Oct 02 12:37:36 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e334: 3 total, 3 up, 3 in
Oct 02 12:37:37 compute-0 ceph-mon[73607]: pgmap v2302: 305 pgs: 305 active+clean; 599 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.7 MiB/s rd, 15 MiB/s wr, 482 op/s
Oct 02 12:37:37 compute-0 ceph-mon[73607]: osdmap e334: 3 total, 3 up, 3 in
Oct 02 12:37:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3408644407' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1957920708' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:37:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:37.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:37:37 compute-0 nova_compute[257802]: 2025-10-02 12:37:37.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2304: 305 pgs: 305 active+clean; 599 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 9.7 MiB/s wr, 334 op/s
Oct 02 12:37:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:38.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:38 compute-0 ceph-mon[73607]: pgmap v2304: 305 pgs: 305 active+clean; 599 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 9.7 MiB/s wr, 334 op/s
Oct 02 12:37:38 compute-0 nova_compute[257802]: 2025-10-02 12:37:38.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:39.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2305: 305 pgs: 305 active+clean; 656 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 11 MiB/s wr, 471 op/s
Oct 02 12:37:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e334 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e334 do_prune osdmap full prune enabled
Oct 02 12:37:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e335 e335: 3 total, 3 up, 3 in
Oct 02 12:37:40 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e335: 3 total, 3 up, 3 in
Oct 02 12:37:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:40.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:41 compute-0 ceph-mon[73607]: pgmap v2305: 305 pgs: 305 active+clean; 656 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 11 MiB/s wr, 471 op/s
Oct 02 12:37:41 compute-0 ceph-mon[73607]: osdmap e335: 3 total, 3 up, 3 in
Oct 02 12:37:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:37:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:41.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:37:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2307: 305 pgs: 305 active+clean; 656 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.6 MiB/s wr, 222 op/s
Oct 02 12:37:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:37:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:42.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:37:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:37:42
Oct 02 12:37:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:37:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:37:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['.mgr', 'backups', 'default.rgw.control', 'volumes', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'vms']
Oct 02 12:37:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:37:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:37:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:37:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:37:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:37:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:37:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:37:42 compute-0 nova_compute[257802]: 2025-10-02 12:37:42.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:37:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:37:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:37:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:37:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:37:43 compute-0 nova_compute[257802]: 2025-10-02 12:37:43.352 2 DEBUG nova.virt.libvirt.driver [None req-302df176-fe04-4ea3-81b8-8e9df1a1f22f fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:37:43 compute-0 nova_compute[257802]: 2025-10-02 12:37:43.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:43 compute-0 ceph-mon[73607]: pgmap v2307: 305 pgs: 305 active+clean; 656 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.6 MiB/s wr, 222 op/s
Oct 02 12:37:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:37:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:43.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:37:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2308: 305 pgs: 305 active+clean; 656 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 3.1 MiB/s wr, 266 op/s
Oct 02 12:37:43 compute-0 sshd-session[346972]: Invalid user ubuntu from 43.167.220.139 port 43370
Oct 02 12:37:43 compute-0 sshd-session[346972]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 12:37:43 compute-0 sshd-session[346972]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.167.220.139
Oct 02 12:37:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:37:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:37:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:37:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:37:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:37:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:44.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:44 compute-0 sudo[346975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:44 compute-0 sudo[346975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:44 compute-0 sudo[346975]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:44 compute-0 sudo[347001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:37:44 compute-0 sudo[347001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:44 compute-0 sudo[347001]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:44 compute-0 sudo[347026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:44 compute-0 sudo[347026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:44 compute-0 sudo[347026]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:44 compute-0 sudo[347051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:37:44 compute-0 sudo[347051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:37:45 compute-0 ceph-mon[73607]: pgmap v2308: 305 pgs: 305 active+clean; 656 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 3.1 MiB/s wr, 266 op/s
Oct 02 12:37:45 compute-0 sudo[347051]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Oct 02 12:37:45 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 12:37:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 12:37:45 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 12:37:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:45.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:45 compute-0 sshd-session[346972]: Failed password for invalid user ubuntu from 43.167.220.139 port 43370 ssh2
Oct 02 12:37:45 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:37:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:37:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2309: 305 pgs: 305 active+clean; 657 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 2.9 MiB/s wr, 282 op/s
Oct 02 12:37:46 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:37:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:46.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:46 compute-0 sshd-session[346972]: Connection closed by invalid user ubuntu 43.167.220.139 port 43370 [preauth]
Oct 02 12:37:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 12:37:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 12:37:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:37:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:37:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Oct 02 12:37:47 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 12:37:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:37:47 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:37:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:37:47 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:37:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:37:47 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:37:47 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev be4c3247-a3ac-484d-84c1-7cd08ac5ca60 does not exist
Oct 02 12:37:47 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 1987815e-bab4-44e9-a5a0-2d29d2669f97 does not exist
Oct 02 12:37:47 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 29c17e22-c025-4ae3-b307-d701acf21156 does not exist
Oct 02 12:37:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:37:47 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:37:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:37:47 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:37:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:37:47 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:37:47 compute-0 sudo[347109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:47 compute-0 sudo[347109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:47 compute-0 sudo[347109]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:47 compute-0 sudo[347134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:37:47 compute-0 sudo[347134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:47 compute-0 sudo[347134]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:47 compute-0 sudo[347159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:47 compute-0 sudo[347159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:47 compute-0 sudo[347159]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:47 compute-0 sudo[347184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:37:47 compute-0 sudo[347184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:37:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:47.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:37:47 compute-0 ceph-mon[73607]: pgmap v2309: 305 pgs: 305 active+clean; 657 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 2.9 MiB/s wr, 282 op/s
Oct 02 12:37:47 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 12:37:47 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:37:47 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:37:47 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:37:47 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:37:47 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:37:47 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:37:47 compute-0 nova_compute[257802]: 2025-10-02 12:37:47.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2310: 305 pgs: 305 active+clean; 657 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.7 MiB/s wr, 263 op/s
Oct 02 12:37:48 compute-0 podman[347250]: 2025-10-02 12:37:47.928691258 +0000 UTC m=+0.021645960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:37:48 compute-0 podman[347250]: 2025-10-02 12:37:48.098586545 +0000 UTC m=+0.191541227 container create c86369680f1f59b527fef8fa60168d67c474b409ffe3a48dfd7c44c4d0cae03c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:37:48 compute-0 systemd[1]: Started libpod-conmon-c86369680f1f59b527fef8fa60168d67c474b409ffe3a48dfd7c44c4d0cae03c.scope.
Oct 02 12:37:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:37:48 compute-0 podman[347250]: 2025-10-02 12:37:48.293270948 +0000 UTC m=+0.386225630 container init c86369680f1f59b527fef8fa60168d67c474b409ffe3a48dfd7c44c4d0cae03c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_allen, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:37:48 compute-0 podman[347250]: 2025-10-02 12:37:48.304113933 +0000 UTC m=+0.397068615 container start c86369680f1f59b527fef8fa60168d67c474b409ffe3a48dfd7c44c4d0cae03c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:37:48 compute-0 flamboyant_allen[347266]: 167 167
Oct 02 12:37:48 compute-0 systemd[1]: libpod-c86369680f1f59b527fef8fa60168d67c474b409ffe3a48dfd7c44c4d0cae03c.scope: Deactivated successfully.
Oct 02 12:37:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:48.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:48 compute-0 podman[347250]: 2025-10-02 12:37:48.390941278 +0000 UTC m=+0.483895950 container attach c86369680f1f59b527fef8fa60168d67c474b409ffe3a48dfd7c44c4d0cae03c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_allen, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:37:48 compute-0 podman[347250]: 2025-10-02 12:37:48.391229475 +0000 UTC m=+0.484184157 container died c86369680f1f59b527fef8fa60168d67c474b409ffe3a48dfd7c44c4d0cae03c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_allen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 12:37:48 compute-0 nova_compute[257802]: 2025-10-02 12:37:48.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-7133ae39ebba89a2fd2328887c16a79cafbbe959e78f566ae0b54a7ed85ce170-merged.mount: Deactivated successfully.
Oct 02 12:37:49 compute-0 sudo[347284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:49 compute-0 ceph-mon[73607]: pgmap v2310: 305 pgs: 305 active+clean; 657 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.7 MiB/s wr, 263 op/s
Oct 02 12:37:49 compute-0 sudo[347284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:49 compute-0 sudo[347284]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:49 compute-0 sudo[347309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:49 compute-0 sudo[347309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:49 compute-0 sudo[347309]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:49 compute-0 ovn_controller[148183]: 2025-10-02T12:37:49Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:45:90:72 10.100.0.14
Oct 02 12:37:49 compute-0 ovn_controller[148183]: 2025-10-02T12:37:49Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:45:90:72 10.100.0.14
Oct 02 12:37:49 compute-0 podman[347250]: 2025-10-02 12:37:49.533948583 +0000 UTC m=+1.626903265 container remove c86369680f1f59b527fef8fa60168d67c474b409ffe3a48dfd7c44c4d0cae03c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_allen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 12:37:49 compute-0 systemd[1]: libpod-conmon-c86369680f1f59b527fef8fa60168d67c474b409ffe3a48dfd7c44c4d0cae03c.scope: Deactivated successfully.
Oct 02 12:37:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:49.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:49 compute-0 podman[347341]: 2025-10-02 12:37:49.699551864 +0000 UTC m=+0.020319028 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:37:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2311: 305 pgs: 305 active+clean; 588 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 201 op/s
Oct 02 12:37:49 compute-0 podman[347341]: 2025-10-02 12:37:49.872744472 +0000 UTC m=+0.193511606 container create 6e2f27a5a5b5c23ed9f0040398b2929f78894ce9e67a95eb732d6c192088350e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 12:37:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:50 compute-0 systemd[1]: Started libpod-conmon-6e2f27a5a5b5c23ed9f0040398b2929f78894ce9e67a95eb732d6c192088350e.scope.
Oct 02 12:37:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:37:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e66552e040e83a06f107609d6afd9407b75c2abdcd38a601f87a5f2a6b839506/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:37:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e66552e040e83a06f107609d6afd9407b75c2abdcd38a601f87a5f2a6b839506/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:37:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e66552e040e83a06f107609d6afd9407b75c2abdcd38a601f87a5f2a6b839506/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:37:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e66552e040e83a06f107609d6afd9407b75c2abdcd38a601f87a5f2a6b839506/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:37:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e66552e040e83a06f107609d6afd9407b75c2abdcd38a601f87a5f2a6b839506/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:37:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:50.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:50 compute-0 podman[347341]: 2025-10-02 12:37:50.508507217 +0000 UTC m=+0.829274401 container init 6e2f27a5a5b5c23ed9f0040398b2929f78894ce9e67a95eb732d6c192088350e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ritchie, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 12:37:50 compute-0 podman[347341]: 2025-10-02 12:37:50.517917717 +0000 UTC m=+0.838684851 container start 6e2f27a5a5b5c23ed9f0040398b2929f78894ce9e67a95eb732d6c192088350e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ritchie, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 12:37:51 compute-0 podman[347341]: 2025-10-02 12:37:51.012817625 +0000 UTC m=+1.333584799 container attach 6e2f27a5a5b5c23ed9f0040398b2929f78894ce9e67a95eb732d6c192088350e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:37:51 compute-0 ceph-mon[73607]: pgmap v2311: 305 pgs: 305 active+clean; 588 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 201 op/s
Oct 02 12:37:51 compute-0 naughty_ritchie[347357]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:37:51 compute-0 naughty_ritchie[347357]: --> relative data size: 1.0
Oct 02 12:37:51 compute-0 naughty_ritchie[347357]: --> All data devices are unavailable
Oct 02 12:37:51 compute-0 systemd[1]: libpod-6e2f27a5a5b5c23ed9f0040398b2929f78894ce9e67a95eb732d6c192088350e.scope: Deactivated successfully.
Oct 02 12:37:51 compute-0 podman[347341]: 2025-10-02 12:37:51.407264826 +0000 UTC m=+1.728031960 container died 6e2f27a5a5b5c23ed9f0040398b2929f78894ce9e67a95eb732d6c192088350e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 12:37:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-e66552e040e83a06f107609d6afd9407b75c2abdcd38a601f87a5f2a6b839506-merged.mount: Deactivated successfully.
Oct 02 12:37:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:51.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2312: 305 pgs: 305 active+clean; 588 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.4 MiB/s wr, 174 op/s
Oct 02 12:37:51 compute-0 podman[347341]: 2025-10-02 12:37:51.965397451 +0000 UTC m=+2.286164595 container remove 6e2f27a5a5b5c23ed9f0040398b2929f78894ce9e67a95eb732d6c192088350e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ritchie, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:37:51 compute-0 systemd[1]: libpod-conmon-6e2f27a5a5b5c23ed9f0040398b2929f78894ce9e67a95eb732d6c192088350e.scope: Deactivated successfully.
Oct 02 12:37:51 compute-0 sudo[347184]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:52 compute-0 sudo[347392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:52 compute-0 sudo[347392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:52 compute-0 podman[347386]: 2025-10-02 12:37:52.075016123 +0000 UTC m=+0.060978933 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:37:52 compute-0 sudo[347392]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:52 compute-0 podman[347385]: 2025-10-02 12:37:52.078757985 +0000 UTC m=+0.063588567 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 12:37:52 compute-0 podman[347384]: 2025-10-02 12:37:52.097301168 +0000 UTC m=+0.087252735 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Oct 02 12:37:52 compute-0 sudo[347466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:37:52 compute-0 sudo[347466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:52 compute-0 sudo[347466]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:52 compute-0 sudo[347493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:52 compute-0 sudo[347493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:52 compute-0 sudo[347493]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:52 compute-0 sudo[347518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:37:52 compute-0 sudo[347518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:52.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:52 compute-0 podman[347582]: 2025-10-02 12:37:52.563103375 +0000 UTC m=+0.028442797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:37:52 compute-0 podman[347582]: 2025-10-02 12:37:52.690664136 +0000 UTC m=+0.156003528 container create c735d2a861bf4cd8ff23b586ded001cdeacbf132c3d1d3b7c46a1ca9708c75ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_northcutt, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 12:37:52 compute-0 nova_compute[257802]: 2025-10-02 12:37:52.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:52 compute-0 systemd[1]: Started libpod-conmon-c735d2a861bf4cd8ff23b586ded001cdeacbf132c3d1d3b7c46a1ca9708c75ed.scope.
Oct 02 12:37:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:37:52 compute-0 podman[347582]: 2025-10-02 12:37:52.956806517 +0000 UTC m=+0.422145929 container init c735d2a861bf4cd8ff23b586ded001cdeacbf132c3d1d3b7c46a1ca9708c75ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 12:37:52 compute-0 podman[347582]: 2025-10-02 12:37:52.967888238 +0000 UTC m=+0.433227630 container start c735d2a861bf4cd8ff23b586ded001cdeacbf132c3d1d3b7c46a1ca9708c75ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_northcutt, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 12:37:52 compute-0 mystifying_northcutt[347600]: 167 167
Oct 02 12:37:52 compute-0 systemd[1]: libpod-c735d2a861bf4cd8ff23b586ded001cdeacbf132c3d1d3b7c46a1ca9708c75ed.scope: Deactivated successfully.
Oct 02 12:37:52 compute-0 conmon[347600]: conmon c735d2a861bf4cd8ff23 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c735d2a861bf4cd8ff23b586ded001cdeacbf132c3d1d3b7c46a1ca9708c75ed.scope/container/memory.events
Oct 02 12:37:53 compute-0 podman[347582]: 2025-10-02 12:37:53.00721468 +0000 UTC m=+0.472554072 container attach c735d2a861bf4cd8ff23b586ded001cdeacbf132c3d1d3b7c46a1ca9708c75ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_northcutt, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 12:37:53 compute-0 podman[347582]: 2025-10-02 12:37:53.007757704 +0000 UTC m=+0.473097096 container died c735d2a861bf4cd8ff23b586ded001cdeacbf132c3d1d3b7c46a1ca9708c75ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:37:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d72066292f237a258b81281bf2031c7c514f77cd438b586f4321a6f1af5fbbc-merged.mount: Deactivated successfully.
Oct 02 12:37:53 compute-0 ceph-mon[73607]: pgmap v2312: 305 pgs: 305 active+clean; 588 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.4 MiB/s wr, 174 op/s
Oct 02 12:37:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2421256036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/195267348' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:53.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:53 compute-0 nova_compute[257802]: 2025-10-02 12:37:53.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2313: 305 pgs: 305 active+clean; 583 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.2 MiB/s wr, 221 op/s
Oct 02 12:37:53 compute-0 podman[347582]: 2025-10-02 12:37:53.918199508 +0000 UTC m=+1.383538900 container remove c735d2a861bf4cd8ff23b586ded001cdeacbf132c3d1d3b7c46a1ca9708c75ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:37:53 compute-0 systemd[1]: libpod-conmon-c735d2a861bf4cd8ff23b586ded001cdeacbf132c3d1d3b7c46a1ca9708c75ed.scope: Deactivated successfully.
Oct 02 12:37:54 compute-0 podman[347625]: 2025-10-02 12:37:54.152428869 +0000 UTC m=+0.097832335 container create badcaa5d4518632f420c78dc264de09cec7a85af870ea6c15b8d10a9172ab1f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_noether, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:37:54 compute-0 podman[347625]: 2025-10-02 12:37:54.081993526 +0000 UTC m=+0.027397022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:37:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:54.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:54 compute-0 nova_compute[257802]: 2025-10-02 12:37:54.401 2 DEBUG nova.virt.libvirt.driver [None req-302df176-fe04-4ea3-81b8-8e9df1a1f22f fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:37:54 compute-0 systemd[1]: Started libpod-conmon-badcaa5d4518632f420c78dc264de09cec7a85af870ea6c15b8d10a9172ab1f2.scope.
Oct 02 12:37:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:37:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24bbc836dd08e4e9b10e4657a72ebfd501a6714fed442ff1249ad389f6ead6a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:37:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24bbc836dd08e4e9b10e4657a72ebfd501a6714fed442ff1249ad389f6ead6a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:37:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24bbc836dd08e4e9b10e4657a72ebfd501a6714fed442ff1249ad389f6ead6a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:37:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24bbc836dd08e4e9b10e4657a72ebfd501a6714fed442ff1249ad389f6ead6a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:37:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:37:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:37:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:37:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:37:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.010804529010794714 of space, bias 1.0, pg target 3.241358703238414 quantized to 32 (current 32)
Oct 02 12:37:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:37:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002162323480830076 of space, bias 1.0, pg target 0.6422100738065326 quantized to 32 (current 32)
Oct 02 12:37:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:37:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:37:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:37:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004069828308207705 of space, bias 1.0, pg target 1.2087390075376885 quantized to 32 (current 32)
Oct 02 12:37:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:37:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Oct 02 12:37:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:37:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:37:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:37:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.0002689954401637819 quantized to 32 (current 32)
Oct 02 12:37:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:37:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Oct 02 12:37:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:37:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:37:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:37:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Oct 02 12:37:54 compute-0 podman[347625]: 2025-10-02 12:37:54.656689526 +0000 UTC m=+0.602093102 container init badcaa5d4518632f420c78dc264de09cec7a85af870ea6c15b8d10a9172ab1f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_noether, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:37:54 compute-0 podman[347625]: 2025-10-02 12:37:54.665487952 +0000 UTC m=+0.610891428 container start badcaa5d4518632f420c78dc264de09cec7a85af870ea6c15b8d10a9172ab1f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_noether, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 12:37:54 compute-0 ceph-mon[73607]: pgmap v2313: 305 pgs: 305 active+clean; 583 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.2 MiB/s wr, 221 op/s
Oct 02 12:37:55 compute-0 podman[347625]: 2025-10-02 12:37:55.150424306 +0000 UTC m=+1.095827802 container attach badcaa5d4518632f420c78dc264de09cec7a85af870ea6c15b8d10a9172ab1f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_noether, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:37:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:37:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/567926342' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:37:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:37:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/567926342' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:37:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:55.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:55 compute-0 practical_noether[347641]: {
Oct 02 12:37:55 compute-0 practical_noether[347641]:     "1": [
Oct 02 12:37:55 compute-0 practical_noether[347641]:         {
Oct 02 12:37:55 compute-0 practical_noether[347641]:             "devices": [
Oct 02 12:37:55 compute-0 practical_noether[347641]:                 "/dev/loop3"
Oct 02 12:37:55 compute-0 practical_noether[347641]:             ],
Oct 02 12:37:55 compute-0 practical_noether[347641]:             "lv_name": "ceph_lv0",
Oct 02 12:37:55 compute-0 practical_noether[347641]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:37:55 compute-0 practical_noether[347641]:             "lv_size": "7511998464",
Oct 02 12:37:55 compute-0 practical_noether[347641]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:37:55 compute-0 practical_noether[347641]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:37:55 compute-0 practical_noether[347641]:             "name": "ceph_lv0",
Oct 02 12:37:55 compute-0 practical_noether[347641]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:37:55 compute-0 practical_noether[347641]:             "tags": {
Oct 02 12:37:55 compute-0 practical_noether[347641]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:37:55 compute-0 practical_noether[347641]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:37:55 compute-0 practical_noether[347641]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:37:55 compute-0 practical_noether[347641]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:37:55 compute-0 practical_noether[347641]:                 "ceph.cluster_name": "ceph",
Oct 02 12:37:55 compute-0 practical_noether[347641]:                 "ceph.crush_device_class": "",
Oct 02 12:37:55 compute-0 practical_noether[347641]:                 "ceph.encrypted": "0",
Oct 02 12:37:55 compute-0 practical_noether[347641]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:37:55 compute-0 practical_noether[347641]:                 "ceph.osd_id": "1",
Oct 02 12:37:55 compute-0 practical_noether[347641]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:37:55 compute-0 practical_noether[347641]:                 "ceph.type": "block",
Oct 02 12:37:55 compute-0 practical_noether[347641]:                 "ceph.vdo": "0"
Oct 02 12:37:55 compute-0 practical_noether[347641]:             },
Oct 02 12:37:55 compute-0 practical_noether[347641]:             "type": "block",
Oct 02 12:37:55 compute-0 practical_noether[347641]:             "vg_name": "ceph_vg0"
Oct 02 12:37:55 compute-0 practical_noether[347641]:         }
Oct 02 12:37:55 compute-0 practical_noether[347641]:     ]
Oct 02 12:37:55 compute-0 practical_noether[347641]: }
Oct 02 12:37:55 compute-0 systemd[1]: libpod-badcaa5d4518632f420c78dc264de09cec7a85af870ea6c15b8d10a9172ab1f2.scope: Deactivated successfully.
Oct 02 12:37:55 compute-0 podman[347625]: 2025-10-02 12:37:55.714452066 +0000 UTC m=+1.659855552 container died badcaa5d4518632f420c78dc264de09cec7a85af870ea6c15b8d10a9172ab1f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:37:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2314: 305 pgs: 305 active+clean; 596 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.3 MiB/s wr, 211 op/s
Oct 02 12:37:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/567926342' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:37:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/567926342' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:37:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:56.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-24bbc836dd08e4e9b10e4657a72ebfd501a6714fed442ff1249ad389f6ead6a9-merged.mount: Deactivated successfully.
Oct 02 12:37:57 compute-0 podman[347625]: 2025-10-02 12:37:57.295639382 +0000 UTC m=+3.241042858 container remove badcaa5d4518632f420c78dc264de09cec7a85af870ea6c15b8d10a9172ab1f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_noether, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 12:37:57 compute-0 sudo[347518]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:57 compute-0 systemd[1]: libpod-conmon-badcaa5d4518632f420c78dc264de09cec7a85af870ea6c15b8d10a9172ab1f2.scope: Deactivated successfully.
Oct 02 12:37:57 compute-0 sudo[347664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:57 compute-0 sudo[347664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:57 compute-0 sudo[347664]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:57 compute-0 sudo[347689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:37:57 compute-0 sudo[347689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:57 compute-0 sudo[347689]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:57 compute-0 ceph-mon[73607]: pgmap v2314: 305 pgs: 305 active+clean; 596 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.3 MiB/s wr, 211 op/s
Oct 02 12:37:57 compute-0 sudo[347714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:57 compute-0 sudo[347714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:57 compute-0 sudo[347714]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:57 compute-0 sudo[347739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:37:57 compute-0 sudo[347739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:57.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:57 compute-0 nova_compute[257802]: 2025-10-02 12:37:57.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2315: 305 pgs: 305 active+clean; 596 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 941 KiB/s rd, 4.0 MiB/s wr, 169 op/s
Oct 02 12:37:58 compute-0 podman[347803]: 2025-10-02 12:37:58.000749654 +0000 UTC m=+0.091496610 container create 45f18162ec29884233baf88a8bdbd78dacfc36a3ea07244b9e193208c29c2e5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_banzai, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 12:37:58 compute-0 podman[347803]: 2025-10-02 12:37:57.935347834 +0000 UTC m=+0.026094820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:37:58 compute-0 systemd[1]: Started libpod-conmon-45f18162ec29884233baf88a8bdbd78dacfc36a3ea07244b9e193208c29c2e5d.scope.
Oct 02 12:37:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:37:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:58.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:58 compute-0 podman[347803]: 2025-10-02 12:37:58.50087909 +0000 UTC m=+0.591626056 container init 45f18162ec29884233baf88a8bdbd78dacfc36a3ea07244b9e193208c29c2e5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_banzai, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 12:37:58 compute-0 podman[347803]: 2025-10-02 12:37:58.507623456 +0000 UTC m=+0.598370412 container start 45f18162ec29884233baf88a8bdbd78dacfc36a3ea07244b9e193208c29c2e5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 12:37:58 compute-0 focused_banzai[347835]: 167 167
Oct 02 12:37:58 compute-0 systemd[1]: libpod-45f18162ec29884233baf88a8bdbd78dacfc36a3ea07244b9e193208c29c2e5d.scope: Deactivated successfully.
Oct 02 12:37:58 compute-0 nova_compute[257802]: 2025-10-02 12:37:58.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:58 compute-0 podman[347803]: 2025-10-02 12:37:58.848222659 +0000 UTC m=+0.938969635 container attach 45f18162ec29884233baf88a8bdbd78dacfc36a3ea07244b9e193208c29c2e5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 12:37:58 compute-0 podman[347803]: 2025-10-02 12:37:58.849045449 +0000 UTC m=+0.939792405 container died 45f18162ec29884233baf88a8bdbd78dacfc36a3ea07244b9e193208c29c2e5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_banzai, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 12:37:59 compute-0 ceph-mon[73607]: pgmap v2315: 305 pgs: 305 active+clean; 596 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 941 KiB/s rd, 4.0 MiB/s wr, 169 op/s
Oct 02 12:37:59 compute-0 kernel: tap6319a656-83 (unregistering): left promiscuous mode
Oct 02 12:37:59 compute-0 NetworkManager[44987]: <info>  [1759408679.1361] device (tap6319a656-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:37:59 compute-0 nova_compute[257802]: 2025-10-02 12:37:59.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:59 compute-0 ovn_controller[148183]: 2025-10-02T12:37:59Z|00641|binding|INFO|Releasing lport 6319a656-83a0-492b-ac32-92c9f82f2ec5 from this chassis (sb_readonly=0)
Oct 02 12:37:59 compute-0 ovn_controller[148183]: 2025-10-02T12:37:59Z|00642|binding|INFO|Setting lport 6319a656-83a0-492b-ac32-92c9f82f2ec5 down in Southbound
Oct 02 12:37:59 compute-0 ovn_controller[148183]: 2025-10-02T12:37:59Z|00643|binding|INFO|Removing iface tap6319a656-83 ovn-installed in OVS
Oct 02 12:37:59 compute-0 nova_compute[257802]: 2025-10-02 12:37:59.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:59.156 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:90:72 10.100.0.14'], port_security=['fa:16:3e:45:90:72 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '20f1aa9a-e50f-4610-aaae-2468cccbeb6b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bc0d63d3b4404ef8858166e8836dd0af', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2a11ff87-bec6-4638-b302-adcd655efba9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=797b6af2-473b-4626-9e97-a0a489119419, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=6319a656-83a0-492b-ac32-92c9f82f2ec5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:37:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:59.159 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 6319a656-83a0-492b-ac32-92c9f82f2ec5 in datapath d7203b00-e5e4-402e-b777-ac6280fa23ac unbound from our chassis
Oct 02 12:37:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:59.161 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d7203b00-e5e4-402e-b777-ac6280fa23ac, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:37:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:59.163 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[97791674-8ec0-474e-b158-a145c7c98863]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:37:59.164 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac namespace which is not needed anymore
Oct 02 12:37:59 compute-0 nova_compute[257802]: 2025-10-02 12:37:59.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:59 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000092.scope: Deactivated successfully.
Oct 02 12:37:59 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000092.scope: Consumed 14.600s CPU time.
Oct 02 12:37:59 compute-0 systemd-machined[211836]: Machine qemu-73-instance-00000092 terminated.
Oct 02 12:37:59 compute-0 nova_compute[257802]: 2025-10-02 12:37:59.426 2 INFO nova.virt.libvirt.driver [None req-302df176-fe04-4ea3-81b8-8e9df1a1f22f fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Instance shutdown successfully after 26 seconds.
Oct 02 12:37:59 compute-0 nova_compute[257802]: 2025-10-02 12:37:59.435 2 INFO nova.virt.libvirt.driver [-] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Instance destroyed successfully.
Oct 02 12:37:59 compute-0 nova_compute[257802]: 2025-10-02 12:37:59.435 2 DEBUG nova.objects.instance [None req-302df176-fe04-4ea3-81b8-8e9df1a1f22f fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lazy-loading 'numa_topology' on Instance uuid 20f1aa9a-e50f-4610-aaae-2468cccbeb6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:37:59 compute-0 nova_compute[257802]: 2025-10-02 12:37:59.456 2 DEBUG nova.compute.manager [None req-302df176-fe04-4ea3-81b8-8e9df1a1f22f fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:37:59 compute-0 nova_compute[257802]: 2025-10-02 12:37:59.494 2 DEBUG nova.compute.manager [req-cc15b6cb-fc74-4bbc-9149-573b677cb455 req-454088bb-af10-4d65-8395-bc6d9396ddcf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Received event network-vif-unplugged-6319a656-83a0-492b-ac32-92c9f82f2ec5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:37:59 compute-0 nova_compute[257802]: 2025-10-02 12:37:59.495 2 DEBUG oslo_concurrency.lockutils [req-cc15b6cb-fc74-4bbc-9149-573b677cb455 req-454088bb-af10-4d65-8395-bc6d9396ddcf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "20f1aa9a-e50f-4610-aaae-2468cccbeb6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:59 compute-0 nova_compute[257802]: 2025-10-02 12:37:59.495 2 DEBUG oslo_concurrency.lockutils [req-cc15b6cb-fc74-4bbc-9149-573b677cb455 req-454088bb-af10-4d65-8395-bc6d9396ddcf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f1aa9a-e50f-4610-aaae-2468cccbeb6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:59 compute-0 nova_compute[257802]: 2025-10-02 12:37:59.496 2 DEBUG oslo_concurrency.lockutils [req-cc15b6cb-fc74-4bbc-9149-573b677cb455 req-454088bb-af10-4d65-8395-bc6d9396ddcf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f1aa9a-e50f-4610-aaae-2468cccbeb6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:59 compute-0 nova_compute[257802]: 2025-10-02 12:37:59.496 2 DEBUG nova.compute.manager [req-cc15b6cb-fc74-4bbc-9149-573b677cb455 req-454088bb-af10-4d65-8395-bc6d9396ddcf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] No waiting events found dispatching network-vif-unplugged-6319a656-83a0-492b-ac32-92c9f82f2ec5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:37:59 compute-0 nova_compute[257802]: 2025-10-02 12:37:59.496 2 WARNING nova.compute.manager [req-cc15b6cb-fc74-4bbc-9149-573b677cb455 req-454088bb-af10-4d65-8395-bc6d9396ddcf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Received unexpected event network-vif-unplugged-6319a656-83a0-492b-ac32-92c9f82f2ec5 for instance with vm_state active and task_state powering-off.
Oct 02 12:37:59 compute-0 nova_compute[257802]: 2025-10-02 12:37:59.533 2 DEBUG oslo_concurrency.lockutils [None req-302df176-fe04-4ea3-81b8-8e9df1a1f22f fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "20f1aa9a-e50f-4610-aaae-2468cccbeb6b" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 26.250s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:37:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:59.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:59 compute-0 sshd-session[347893]: banner exchange: Connection from 40.124.174.199 port 57048: invalid format
Oct 02 12:37:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2316: 305 pgs: 305 active+clean; 596 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 949 KiB/s rd, 4.1 MiB/s wr, 180 op/s
Oct 02 12:37:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd75c417fa52a7043247a79959da6c817c1a9a21e1435af75979f74c8c79f87c-merged.mount: Deactivated successfully.
Oct 02 12:37:59 compute-0 ovn_controller[148183]: 2025-10-02T12:37:59Z|00644|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 02 12:38:00 compute-0 nova_compute[257802]: 2025-10-02 12:38:00.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:38:00 compute-0 nova_compute[257802]: 2025-10-02 12:38:00.097 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:38:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3759468216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3809355517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:00.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:00 compute-0 podman[347803]: 2025-10-02 12:38:00.727724433 +0000 UTC m=+2.818471389 container remove 45f18162ec29884233baf88a8bdbd78dacfc36a3ea07244b9e193208c29c2e5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:38:00 compute-0 podman[347818]: 2025-10-02 12:38:00.774298753 +0000 UTC m=+2.734433063 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:38:00 compute-0 systemd[1]: libpod-conmon-45f18162ec29884233baf88a8bdbd78dacfc36a3ea07244b9e193208c29c2e5d.scope: Deactivated successfully.
Oct 02 12:38:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:01 compute-0 neutron-haproxy-ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac[346952]: [NOTICE]   (346956) : haproxy version is 2.8.14-c23fe91
Oct 02 12:38:01 compute-0 neutron-haproxy-ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac[346952]: [NOTICE]   (346956) : path to executable is /usr/sbin/haproxy
Oct 02 12:38:01 compute-0 neutron-haproxy-ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac[346952]: [WARNING]  (346956) : Exiting Master process...
Oct 02 12:38:01 compute-0 neutron-haproxy-ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac[346952]: [ALERT]    (346956) : Current worker (346958) exited with code 143 (Terminated)
Oct 02 12:38:01 compute-0 neutron-haproxy-ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac[346952]: [WARNING]  (346956) : All workers exited. Exiting... (0)
Oct 02 12:38:01 compute-0 systemd[1]: libpod-277018ee595bd3f7184aed1117772644f826e83aa0d83aef39f24e8beb92f4a3.scope: Deactivated successfully.
Oct 02 12:38:01 compute-0 podman[347899]: 2025-10-02 12:38:01.26708142 +0000 UTC m=+0.422538889 container died 277018ee595bd3f7184aed1117772644f826e83aa0d83aef39f24e8beb92f4a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:38:01 compute-0 ceph-mon[73607]: pgmap v2316: 305 pgs: 305 active+clean; 596 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 949 KiB/s rd, 4.1 MiB/s wr, 180 op/s
Oct 02 12:38:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3194258724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:01.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2317: 305 pgs: 305 active+clean; 596 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 711 KiB/s rd, 1.0 MiB/s wr, 105 op/s
Oct 02 12:38:02 compute-0 nova_compute[257802]: 2025-10-02 12:38:02.020 2 DEBUG nova.compute.manager [req-0f2242bd-9dcc-4186-89fb-c7613f064403 req-eeff4001-3bd0-4182-a907-0dd32689160b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Received event network-vif-plugged-6319a656-83a0-492b-ac32-92c9f82f2ec5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:02 compute-0 nova_compute[257802]: 2025-10-02 12:38:02.021 2 DEBUG oslo_concurrency.lockutils [req-0f2242bd-9dcc-4186-89fb-c7613f064403 req-eeff4001-3bd0-4182-a907-0dd32689160b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "20f1aa9a-e50f-4610-aaae-2468cccbeb6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:02 compute-0 nova_compute[257802]: 2025-10-02 12:38:02.021 2 DEBUG oslo_concurrency.lockutils [req-0f2242bd-9dcc-4186-89fb-c7613f064403 req-eeff4001-3bd0-4182-a907-0dd32689160b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f1aa9a-e50f-4610-aaae-2468cccbeb6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:02 compute-0 nova_compute[257802]: 2025-10-02 12:38:02.022 2 DEBUG oslo_concurrency.lockutils [req-0f2242bd-9dcc-4186-89fb-c7613f064403 req-eeff4001-3bd0-4182-a907-0dd32689160b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "20f1aa9a-e50f-4610-aaae-2468cccbeb6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:02 compute-0 nova_compute[257802]: 2025-10-02 12:38:02.022 2 DEBUG nova.compute.manager [req-0f2242bd-9dcc-4186-89fb-c7613f064403 req-eeff4001-3bd0-4182-a907-0dd32689160b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] No waiting events found dispatching network-vif-plugged-6319a656-83a0-492b-ac32-92c9f82f2ec5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:38:02 compute-0 nova_compute[257802]: 2025-10-02 12:38:02.023 2 WARNING nova.compute.manager [req-0f2242bd-9dcc-4186-89fb-c7613f064403 req-eeff4001-3bd0-4182-a907-0dd32689160b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Received unexpected event network-vif-plugged-6319a656-83a0-492b-ac32-92c9f82f2ec5 for instance with vm_state stopped and task_state None.
Oct 02 12:38:02 compute-0 nova_compute[257802]: 2025-10-02 12:38:02.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:38:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:02.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-277018ee595bd3f7184aed1117772644f826e83aa0d83aef39f24e8beb92f4a3-userdata-shm.mount: Deactivated successfully.
Oct 02 12:38:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-295b1ebdfde190380d1465eb6122f7262ed814a4ce8a44fa24981eab0f36dddc-merged.mount: Deactivated successfully.
Oct 02 12:38:02 compute-0 nova_compute[257802]: 2025-10-02 12:38:02.674 2 DEBUG oslo_concurrency.lockutils [None req-3444c762-76c1-4c13-930f-102062b20429 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "20f1aa9a-e50f-4610-aaae-2468cccbeb6b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:02 compute-0 nova_compute[257802]: 2025-10-02 12:38:02.675 2 DEBUG oslo_concurrency.lockutils [None req-3444c762-76c1-4c13-930f-102062b20429 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "20f1aa9a-e50f-4610-aaae-2468cccbeb6b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:02 compute-0 nova_compute[257802]: 2025-10-02 12:38:02.675 2 DEBUG oslo_concurrency.lockutils [None req-3444c762-76c1-4c13-930f-102062b20429 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "20f1aa9a-e50f-4610-aaae-2468cccbeb6b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:02 compute-0 nova_compute[257802]: 2025-10-02 12:38:02.675 2 DEBUG oslo_concurrency.lockutils [None req-3444c762-76c1-4c13-930f-102062b20429 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "20f1aa9a-e50f-4610-aaae-2468cccbeb6b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:02 compute-0 nova_compute[257802]: 2025-10-02 12:38:02.676 2 DEBUG oslo_concurrency.lockutils [None req-3444c762-76c1-4c13-930f-102062b20429 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "20f1aa9a-e50f-4610-aaae-2468cccbeb6b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:02 compute-0 nova_compute[257802]: 2025-10-02 12:38:02.677 2 INFO nova.compute.manager [None req-3444c762-76c1-4c13-930f-102062b20429 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Terminating instance
Oct 02 12:38:02 compute-0 nova_compute[257802]: 2025-10-02 12:38:02.678 2 DEBUG nova.compute.manager [None req-3444c762-76c1-4c13-930f-102062b20429 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:38:02 compute-0 nova_compute[257802]: 2025-10-02 12:38:02.684 2 INFO nova.virt.libvirt.driver [-] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Instance destroyed successfully.
Oct 02 12:38:02 compute-0 nova_compute[257802]: 2025-10-02 12:38:02.685 2 DEBUG nova.objects.instance [None req-3444c762-76c1-4c13-930f-102062b20429 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lazy-loading 'resources' on Instance uuid 20f1aa9a-e50f-4610-aaae-2468cccbeb6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:02 compute-0 nova_compute[257802]: 2025-10-02 12:38:02.695 2 DEBUG nova.virt.libvirt.vif [None req-3444c762-76c1-4c13-930f-102062b20429 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:37:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-423590868',display_name='tempest-Íñstáñcé-1147441636',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-423590868',id=146,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:37:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='bc0d63d3b4404ef8858166e8836dd0af',ramdisk_id='',reservation_id='r-5uena2bt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-80077074',owner_user_name='tempest-ServersTestJSON-80077074-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:38:01Z,user_data=None,user_id='fe9cc788734f406d826446a848700331',uuid=20f1aa9a-e50f-4610-aaae-2468cccbeb6b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "6319a656-83a0-492b-ac32-92c9f82f2ec5", "address": "fa:16:3e:45:90:72", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6319a656-83", "ovs_interfaceid": "6319a656-83a0-492b-ac32-92c9f82f2ec5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:38:02 compute-0 nova_compute[257802]: 2025-10-02 12:38:02.696 2 DEBUG nova.network.os_vif_util [None req-3444c762-76c1-4c13-930f-102062b20429 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Converting VIF {"id": "6319a656-83a0-492b-ac32-92c9f82f2ec5", "address": "fa:16:3e:45:90:72", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6319a656-83", "ovs_interfaceid": "6319a656-83a0-492b-ac32-92c9f82f2ec5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:38:02 compute-0 nova_compute[257802]: 2025-10-02 12:38:02.696 2 DEBUG nova.network.os_vif_util [None req-3444c762-76c1-4c13-930f-102062b20429 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:90:72,bridge_name='br-int',has_traffic_filtering=True,id=6319a656-83a0-492b-ac32-92c9f82f2ec5,network=Network(d7203b00-e5e4-402e-b777-ac6280fa23ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6319a656-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:38:02 compute-0 nova_compute[257802]: 2025-10-02 12:38:02.697 2 DEBUG os_vif [None req-3444c762-76c1-4c13-930f-102062b20429 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:90:72,bridge_name='br-int',has_traffic_filtering=True,id=6319a656-83a0-492b-ac32-92c9f82f2ec5,network=Network(d7203b00-e5e4-402e-b777-ac6280fa23ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6319a656-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:38:02 compute-0 nova_compute[257802]: 2025-10-02 12:38:02.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:02 compute-0 nova_compute[257802]: 2025-10-02 12:38:02.699 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6319a656-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:02 compute-0 nova_compute[257802]: 2025-10-02 12:38:02.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:02 compute-0 nova_compute[257802]: 2025-10-02 12:38:02.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:02 compute-0 nova_compute[257802]: 2025-10-02 12:38:02.705 2 INFO os_vif [None req-3444c762-76c1-4c13-930f-102062b20429 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:90:72,bridge_name='br-int',has_traffic_filtering=True,id=6319a656-83a0-492b-ac32-92c9f82f2ec5,network=Network(d7203b00-e5e4-402e-b777-ac6280fa23ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6319a656-83')
Oct 02 12:38:03 compute-0 podman[347917]: 2025-10-02 12:38:03.045806209 +0000 UTC m=+2.126430018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:38:03 compute-0 nova_compute[257802]: 2025-10-02 12:38:03.109 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:38:03 compute-0 podman[347899]: 2025-10-02 12:38:03.150621004 +0000 UTC m=+2.306078473 container cleanup 277018ee595bd3f7184aed1117772644f826e83aa0d83aef39f24e8beb92f4a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 12:38:03 compute-0 ceph-mon[73607]: pgmap v2317: 305 pgs: 305 active+clean; 596 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 711 KiB/s rd, 1.0 MiB/s wr, 105 op/s
Oct 02 12:38:03 compute-0 podman[347917]: 2025-10-02 12:38:03.453546745 +0000 UTC m=+2.534170504 container create 5702c6de544882ff0c9f78304526cf67ea8b8adf03680400f1758a754fb446ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 12:38:03 compute-0 systemd[1]: Started libpod-conmon-5702c6de544882ff0c9f78304526cf67ea8b8adf03680400f1758a754fb446ca.scope.
Oct 02 12:38:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:38:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:03.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:38:03 compute-0 nova_compute[257802]: 2025-10-02 12:38:03.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:38:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/462ec462fca17c73d503d1c3939990b3bc9c5d5f9c34b74598a72cc51d26d496/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:38:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/462ec462fca17c73d503d1c3939990b3bc9c5d5f9c34b74598a72cc51d26d496/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:38:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/462ec462fca17c73d503d1c3939990b3bc9c5d5f9c34b74598a72cc51d26d496/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:38:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/462ec462fca17c73d503d1c3939990b3bc9c5d5f9c34b74598a72cc51d26d496/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:38:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2318: 305 pgs: 305 active+clean; 550 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 714 KiB/s rd, 1.0 MiB/s wr, 111 op/s
Oct 02 12:38:04 compute-0 nova_compute[257802]: 2025-10-02 12:38:04.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:38:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:04.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:04 compute-0 podman[347917]: 2025-10-02 12:38:04.529655783 +0000 UTC m=+3.610279642 container init 5702c6de544882ff0c9f78304526cf67ea8b8adf03680400f1758a754fb446ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_perlman, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 12:38:04 compute-0 podman[347917]: 2025-10-02 12:38:04.540273473 +0000 UTC m=+3.620897242 container start 5702c6de544882ff0c9f78304526cf67ea8b8adf03680400f1758a754fb446ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_perlman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:38:04 compute-0 podman[347917]: 2025-10-02 12:38:04.79679638 +0000 UTC m=+3.877420189 container attach 5702c6de544882ff0c9f78304526cf67ea8b8adf03680400f1758a754fb446ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 12:38:05 compute-0 podman[347963]: 2025-10-02 12:38:05.112873832 +0000 UTC m=+1.939409901 container remove 277018ee595bd3f7184aed1117772644f826e83aa0d83aef39f24e8beb92f4a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:38:05 compute-0 systemd[1]: libpod-conmon-277018ee595bd3f7184aed1117772644f826e83aa0d83aef39f24e8beb92f4a3.scope: Deactivated successfully.
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:38:05.127 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9b57c869-e52c-42e5-9ca8-e2c2b9c32059]: (4, ('Thu Oct  2 12:38:00 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac (277018ee595bd3f7184aed1117772644f826e83aa0d83aef39f24e8beb92f4a3)\n277018ee595bd3f7184aed1117772644f826e83aa0d83aef39f24e8beb92f4a3\nThu Oct  2 12:38:03 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac (277018ee595bd3f7184aed1117772644f826e83aa0d83aef39f24e8beb92f4a3)\n277018ee595bd3f7184aed1117772644f826e83aa0d83aef39f24e8beb92f4a3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:38:05.129 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f741514b-371c-4411-bcd7-a8af0b4daf55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:38:05.130 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7203b00-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:05 compute-0 nova_compute[257802]: 2025-10-02 12:38:05.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:05 compute-0 kernel: tapd7203b00-e0: left promiscuous mode
Oct 02 12:38:05 compute-0 nova_compute[257802]: 2025-10-02 12:38:05.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:38:05.171 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[832644f4-a8b7-43bd-ac8c-c285738ad485]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:38:05.193 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[fae1a412-0e35-4208-a319-bdb78d8c0e93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:38:05.195 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[71e7c916-a749-4adf-91f1-883491892ccf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:38:05.212 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b33823ac-fa56-4e26-a590-cac1c8e98a29]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671736, 'reachable_time': 36781, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347992, 'error': None, 'target': 'ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 systemd[1]: run-netns-ovnmeta\x2dd7203b00\x2de5e4\x2d402e\x2db777\x2dac6280fa23ac.mount: Deactivated successfully.
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:38:05.219 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:38:05.221 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[ce45c69e-e792-489f-80c5-9d6b959a26ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 ceph-mon[73607]: pgmap v2318: 305 pgs: 305 active+clean; 550 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 714 KiB/s rd, 1.0 MiB/s wr, 111 op/s
Oct 02 12:38:05 compute-0 sleepy_perlman[347979]: {
Oct 02 12:38:05 compute-0 sleepy_perlman[347979]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:38:05 compute-0 sleepy_perlman[347979]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:38:05 compute-0 sleepy_perlman[347979]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:38:05 compute-0 sleepy_perlman[347979]:         "osd_id": 1,
Oct 02 12:38:05 compute-0 sleepy_perlman[347979]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:38:05 compute-0 sleepy_perlman[347979]:         "type": "bluestore"
Oct 02 12:38:05 compute-0 sleepy_perlman[347979]:     }
Oct 02 12:38:05 compute-0 sleepy_perlman[347979]: }
Oct 02 12:38:05 compute-0 systemd[1]: libpod-5702c6de544882ff0c9f78304526cf67ea8b8adf03680400f1758a754fb446ca.scope: Deactivated successfully.
Oct 02 12:38:05 compute-0 podman[348008]: 2025-10-02 12:38:05.394859542 +0000 UTC m=+0.030545469 container died 5702c6de544882ff0c9f78304526cf67ea8b8adf03680400f1758a754fb446ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:38:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:38:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:05.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:38:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2319: 305 pgs: 305 active+clean; 544 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.2 MiB/s wr, 108 op/s
Oct 02 12:38:06 compute-0 nova_compute[257802]: 2025-10-02 12:38:06.100 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:38:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:06.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-462ec462fca17c73d503d1c3939990b3bc9c5d5f9c34b74598a72cc51d26d496-merged.mount: Deactivated successfully.
Oct 02 12:38:07 compute-0 ceph-mon[73607]: pgmap v2319: 305 pgs: 305 active+clean; 544 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.2 MiB/s wr, 108 op/s
Oct 02 12:38:07 compute-0 podman[348008]: 2025-10-02 12:38:07.275088154 +0000 UTC m=+1.910774091 container remove 5702c6de544882ff0c9f78304526cf67ea8b8adf03680400f1758a754fb446ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 12:38:07 compute-0 systemd[1]: libpod-conmon-5702c6de544882ff0c9f78304526cf67ea8b8adf03680400f1758a754fb446ca.scope: Deactivated successfully.
Oct 02 12:38:07 compute-0 sudo[347739]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:38:07 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:38:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:38:07 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:38:07 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 8c7e7d5e-05e7-4015-8828-df6820010ad2 does not exist
Oct 02 12:38:07 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 5071f75b-2583-46ca-860e-aa78647859e7 does not exist
Oct 02 12:38:07 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev c82ef63a-fc9f-4d57-893a-68f5ab626a31 does not exist
Oct 02 12:38:07 compute-0 sudo[348025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:38:07 compute-0 sudo[348025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:07 compute-0 sudo[348025]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:38:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:07.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:38:07 compute-0 sudo[348050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:38:07 compute-0 sudo[348050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:07 compute-0 sudo[348050]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:07 compute-0 nova_compute[257802]: 2025-10-02 12:38:07.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2320: 305 pgs: 305 active+clean; 567 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.7 MiB/s wr, 83 op/s
Oct 02 12:38:08 compute-0 nova_compute[257802]: 2025-10-02 12:38:08.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:38:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:08.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:38:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:38:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2084835883' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1445632556' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:08 compute-0 nova_compute[257802]: 2025-10-02 12:38:08.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:09 compute-0 nova_compute[257802]: 2025-10-02 12:38:09.301 2 INFO nova.virt.libvirt.driver [None req-3444c762-76c1-4c13-930f-102062b20429 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Deleting instance files /var/lib/nova/instances/20f1aa9a-e50f-4610-aaae-2468cccbeb6b_del
Oct 02 12:38:09 compute-0 nova_compute[257802]: 2025-10-02 12:38:09.301 2 INFO nova.virt.libvirt.driver [None req-3444c762-76c1-4c13-930f-102062b20429 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Deletion of /var/lib/nova/instances/20f1aa9a-e50f-4610-aaae-2468cccbeb6b_del complete
Oct 02 12:38:09 compute-0 sudo[348077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:38:09 compute-0 sudo[348077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:09 compute-0 sudo[348077]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:09 compute-0 nova_compute[257802]: 2025-10-02 12:38:09.355 2 INFO nova.compute.manager [None req-3444c762-76c1-4c13-930f-102062b20429 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Took 6.68 seconds to destroy the instance on the hypervisor.
Oct 02 12:38:09 compute-0 nova_compute[257802]: 2025-10-02 12:38:09.356 2 DEBUG oslo.service.loopingcall [None req-3444c762-76c1-4c13-930f-102062b20429 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:38:09 compute-0 nova_compute[257802]: 2025-10-02 12:38:09.357 2 DEBUG nova.compute.manager [-] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:38:09 compute-0 nova_compute[257802]: 2025-10-02 12:38:09.358 2 DEBUG nova.network.neutron [-] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:38:09 compute-0 sudo[348102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:38:09 compute-0 sudo[348102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:09 compute-0 sudo[348102]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:09 compute-0 sshd-session[347890]: Connection closed by 40.124.174.199 port 57036 [preauth]
Oct 02 12:38:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:38:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:09.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:38:09 compute-0 ceph-mon[73607]: pgmap v2320: 305 pgs: 305 active+clean; 567 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.7 MiB/s wr, 83 op/s
Oct 02 12:38:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2321: 305 pgs: 305 active+clean; 531 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 122 op/s
Oct 02 12:38:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:38:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:10.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:38:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:38:10.599 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:38:10 compute-0 nova_compute[257802]: 2025-10-02 12:38:10.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:38:10.600 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:38:10 compute-0 nova_compute[257802]: 2025-10-02 12:38:10.618 2 DEBUG nova.network.neutron [-] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:38:10 compute-0 nova_compute[257802]: 2025-10-02 12:38:10.638 2 INFO nova.compute.manager [-] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Took 1.28 seconds to deallocate network for instance.
Oct 02 12:38:10 compute-0 nova_compute[257802]: 2025-10-02 12:38:10.683 2 DEBUG oslo_concurrency.lockutils [None req-3444c762-76c1-4c13-930f-102062b20429 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:10 compute-0 nova_compute[257802]: 2025-10-02 12:38:10.684 2 DEBUG oslo_concurrency.lockutils [None req-3444c762-76c1-4c13-930f-102062b20429 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:10 compute-0 nova_compute[257802]: 2025-10-02 12:38:10.737 2 DEBUG oslo_concurrency.processutils [None req-3444c762-76c1-4c13-930f-102062b20429 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:10 compute-0 ceph-mon[73607]: pgmap v2321: 305 pgs: 305 active+clean; 531 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 122 op/s
Oct 02 12:38:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4040968926' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3009170363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:11 compute-0 nova_compute[257802]: 2025-10-02 12:38:11.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:38:11 compute-0 nova_compute[257802]: 2025-10-02 12:38:11.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:38:11 compute-0 nova_compute[257802]: 2025-10-02 12:38:11.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:38:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:38:11 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3061793362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:11 compute-0 nova_compute[257802]: 2025-10-02 12:38:11.184 2 DEBUG oslo_concurrency.processutils [None req-3444c762-76c1-4c13-930f-102062b20429 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:11 compute-0 nova_compute[257802]: 2025-10-02 12:38:11.191 2 DEBUG nova.compute.provider_tree [None req-3444c762-76c1-4c13-930f-102062b20429 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:38:11 compute-0 nova_compute[257802]: 2025-10-02 12:38:11.206 2 DEBUG nova.scheduler.client.report [None req-3444c762-76c1-4c13-930f-102062b20429 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:38:11 compute-0 nova_compute[257802]: 2025-10-02 12:38:11.229 2 DEBUG oslo_concurrency.lockutils [None req-3444c762-76c1-4c13-930f-102062b20429 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:11 compute-0 nova_compute[257802]: 2025-10-02 12:38:11.268 2 INFO nova.scheduler.client.report [None req-3444c762-76c1-4c13-930f-102062b20429 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Deleted allocations for instance 20f1aa9a-e50f-4610-aaae-2468cccbeb6b
Oct 02 12:38:11 compute-0 nova_compute[257802]: 2025-10-02 12:38:11.288 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-17766045-13fc-4377-848f-6815e8a474d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:38:11 compute-0 nova_compute[257802]: 2025-10-02 12:38:11.288 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-17766045-13fc-4377-848f-6815e8a474d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:38:11 compute-0 nova_compute[257802]: 2025-10-02 12:38:11.289 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:38:11 compute-0 nova_compute[257802]: 2025-10-02 12:38:11.289 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid 17766045-13fc-4377-848f-6815e8a474d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:11 compute-0 nova_compute[257802]: 2025-10-02 12:38:11.344 2 DEBUG oslo_concurrency.lockutils [None req-3444c762-76c1-4c13-930f-102062b20429 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "20f1aa9a-e50f-4610-aaae-2468cccbeb6b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:11 compute-0 nova_compute[257802]: 2025-10-02 12:38:11.640 2 DEBUG nova.compute.manager [req-1d52a17e-4458-4080-86cd-6444ce775722 req-27fc20bc-db00-4866-96b6-edaa5bf14cef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Received event network-vif-deleted-6319a656-83a0-492b-ac32-92c9f82f2ec5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:11.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2322: 305 pgs: 305 active+clean; 501 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 132 op/s
Oct 02 12:38:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3061793362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/998240517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:12.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:12 compute-0 nova_compute[257802]: 2025-10-02 12:38:12.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:38:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:38:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:38:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:38:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:38:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:38:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e335 do_prune osdmap full prune enabled
Oct 02 12:38:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e336 e336: 3 total, 3 up, 3 in
Oct 02 12:38:13 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e336: 3 total, 3 up, 3 in
Oct 02 12:38:13 compute-0 nova_compute[257802]: 2025-10-02 12:38:13.307 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Updating instance_info_cache with network_info: [{"id": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "address": "fa:16:3e:ec:94:f8", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c06cc55-6b", "ovs_interfaceid": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:38:13 compute-0 nova_compute[257802]: 2025-10-02 12:38:13.328 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-17766045-13fc-4377-848f-6815e8a474d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:38:13 compute-0 nova_compute[257802]: 2025-10-02 12:38:13.328 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:38:13 compute-0 nova_compute[257802]: 2025-10-02 12:38:13.329 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:38:13 compute-0 nova_compute[257802]: 2025-10-02 12:38:13.355 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:13 compute-0 nova_compute[257802]: 2025-10-02 12:38:13.355 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:13 compute-0 nova_compute[257802]: 2025-10-02 12:38:13.356 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:13 compute-0 nova_compute[257802]: 2025-10-02 12:38:13.356 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:38:13 compute-0 nova_compute[257802]: 2025-10-02 12:38:13.356 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:13 compute-0 ceph-mon[73607]: pgmap v2322: 305 pgs: 305 active+clean; 501 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 132 op/s
Oct 02 12:38:13 compute-0 ceph-mon[73607]: osdmap e336: 3 total, 3 up, 3 in
Oct 02 12:38:13 compute-0 nova_compute[257802]: 2025-10-02 12:38:13.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:13.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2324: 305 pgs: 305 active+clean; 467 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 4.7 MiB/s wr, 191 op/s
Oct 02 12:38:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:38:13 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1291909900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:13 compute-0 nova_compute[257802]: 2025-10-02 12:38:13.901 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:13 compute-0 nova_compute[257802]: 2025-10-02 12:38:13.982 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:38:13 compute-0 nova_compute[257802]: 2025-10-02 12:38:13.983 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:38:14 compute-0 nova_compute[257802]: 2025-10-02 12:38:14.141 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:38:14 compute-0 nova_compute[257802]: 2025-10-02 12:38:14.142 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4052MB free_disk=20.814926147460938GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:38:14 compute-0 nova_compute[257802]: 2025-10-02 12:38:14.142 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:14 compute-0 nova_compute[257802]: 2025-10-02 12:38:14.143 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:14 compute-0 nova_compute[257802]: 2025-10-02 12:38:14.206 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 17766045-13fc-4377-848f-6815e8a474d5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:38:14 compute-0 nova_compute[257802]: 2025-10-02 12:38:14.206 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:38:14 compute-0 nova_compute[257802]: 2025-10-02 12:38:14.207 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:38:14 compute-0 nova_compute[257802]: 2025-10-02 12:38:14.234 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:14 compute-0 nova_compute[257802]: 2025-10-02 12:38:14.389 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408679.3880172, 20f1aa9a-e50f-4610-aaae-2468cccbeb6b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:38:14 compute-0 nova_compute[257802]: 2025-10-02 12:38:14.390 2 INFO nova.compute.manager [-] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] VM Stopped (Lifecycle Event)
Oct 02 12:38:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:38:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:14.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:38:14 compute-0 nova_compute[257802]: 2025-10-02 12:38:14.419 2 DEBUG nova.compute.manager [None req-c79c3dbb-c959-40ca-bade-c4bc12be11c5 - - - - - -] [instance: 20f1aa9a-e50f-4610-aaae-2468cccbeb6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:38:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/431754805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:14 compute-0 nova_compute[257802]: 2025-10-02 12:38:14.647 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:14 compute-0 nova_compute[257802]: 2025-10-02 12:38:14.653 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:38:14 compute-0 nova_compute[257802]: 2025-10-02 12:38:14.674 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:38:14 compute-0 nova_compute[257802]: 2025-10-02 12:38:14.707 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:38:14 compute-0 nova_compute[257802]: 2025-10-02 12:38:14.708 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1291909900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:38:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:15.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:38:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2325: 305 pgs: 305 active+clean; 416 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.5 MiB/s wr, 179 op/s
Oct 02 12:38:16 compute-0 ceph-mon[73607]: pgmap v2324: 305 pgs: 305 active+clean; 467 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 4.7 MiB/s wr, 191 op/s
Oct 02 12:38:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/431754805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:16.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:16 compute-0 nova_compute[257802]: 2025-10-02 12:38:16.703 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:38:17 compute-0 ceph-mon[73607]: pgmap v2325: 305 pgs: 305 active+clean; 416 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.5 MiB/s wr, 179 op/s
Oct 02 12:38:17 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/192862081' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:38:17.602 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:38:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:17.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:38:17 compute-0 nova_compute[257802]: 2025-10-02 12:38:17.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2326: 305 pgs: 305 active+clean; 405 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 234 op/s
Oct 02 12:38:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:18.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:18 compute-0 nova_compute[257802]: 2025-10-02 12:38:18.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2250064736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:18 compute-0 ceph-mon[73607]: pgmap v2326: 305 pgs: 305 active+clean; 405 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 234 op/s
Oct 02 12:38:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:38:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:19.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:38:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2327: 305 pgs: 305 active+clean; 373 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 208 op/s
Oct 02 12:38:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:20.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e336 do_prune osdmap full prune enabled
Oct 02 12:38:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e337 e337: 3 total, 3 up, 3 in
Oct 02 12:38:21 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e337: 3 total, 3 up, 3 in
Oct 02 12:38:21 compute-0 ceph-mon[73607]: pgmap v2327: 305 pgs: 305 active+clean; 373 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 208 op/s
Oct 02 12:38:21 compute-0 ceph-mon[73607]: osdmap e337: 3 total, 3 up, 3 in
Oct 02 12:38:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:38:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:21.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:38:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2329: 305 pgs: 305 active+clean; 333 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.4 MiB/s wr, 192 op/s
Oct 02 12:38:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:22.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:22 compute-0 nova_compute[257802]: 2025-10-02 12:38:22.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:22 compute-0 ceph-mon[73607]: pgmap v2329: 305 pgs: 305 active+clean; 333 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.4 MiB/s wr, 192 op/s
Oct 02 12:38:22 compute-0 podman[348202]: 2025-10-02 12:38:22.925159304 +0000 UTC m=+0.061610728 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:38:22 compute-0 podman[348204]: 2025-10-02 12:38:22.927544083 +0000 UTC m=+0.063995547 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 12:38:22 compute-0 podman[348203]: 2025-10-02 12:38:22.929138432 +0000 UTC m=+0.065965855 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:38:23 compute-0 nova_compute[257802]: 2025-10-02 12:38:23.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:23.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2330: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 182 op/s
Oct 02 12:38:24 compute-0 nova_compute[257802]: 2025-10-02 12:38:24.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:38:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:24.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:25 compute-0 ceph-mon[73607]: pgmap v2330: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 182 op/s
Oct 02 12:38:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3606411082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:25.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2331: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 144 op/s
Oct 02 12:38:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:26.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:26 compute-0 ceph-mon[73607]: pgmap v2331: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 144 op/s
Oct 02 12:38:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:38:26.957 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:38:26.957 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:38:26.958 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:38:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:27.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:38:27 compute-0 nova_compute[257802]: 2025-10-02 12:38:27.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2332: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 755 KiB/s rd, 1.4 MiB/s wr, 88 op/s
Oct 02 12:38:27 compute-0 ovn_controller[148183]: 2025-10-02T12:38:27Z|00645|binding|INFO|Releasing lport 293fb87a-10df-4698-a69e-3023bca5a6a3 from this chassis (sb_readonly=0)
Oct 02 12:38:28 compute-0 nova_compute[257802]: 2025-10-02 12:38:28.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:28.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:28 compute-0 nova_compute[257802]: 2025-10-02 12:38:28.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:28 compute-0 ceph-mon[73607]: pgmap v2332: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 755 KiB/s rd, 1.4 MiB/s wr, 88 op/s
Oct 02 12:38:29 compute-0 sudo[348265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:38:29 compute-0 sudo[348265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:29 compute-0 sudo[348265]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:29 compute-0 sudo[348290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:38:29 compute-0 sudo[348290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:29 compute-0 sudo[348290]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:29.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2333: 305 pgs: 305 active+clean; 295 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 907 KiB/s rd, 31 KiB/s wr, 118 op/s
Oct 02 12:38:30 compute-0 nova_compute[257802]: 2025-10-02 12:38:30.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3666136594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:38:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:30.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:38:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:30 compute-0 podman[348316]: 2025-10-02 12:38:30.946570797 +0000 UTC m=+0.081853893 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Oct 02 12:38:31 compute-0 ceph-mon[73607]: pgmap v2333: 305 pgs: 305 active+clean; 295 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 907 KiB/s rd, 31 KiB/s wr, 118 op/s
Oct 02 12:38:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:38:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:31.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:38:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2334: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 842 KiB/s rd, 39 KiB/s wr, 101 op/s
Oct 02 12:38:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:32.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:32 compute-0 nova_compute[257802]: 2025-10-02 12:38:32.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:33 compute-0 nova_compute[257802]: 2025-10-02 12:38:33.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:33 compute-0 ceph-mon[73607]: pgmap v2334: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 842 KiB/s rd, 39 KiB/s wr, 101 op/s
Oct 02 12:38:33 compute-0 nova_compute[257802]: 2025-10-02 12:38:33.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:33.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2335: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 756 KiB/s rd, 35 KiB/s wr, 98 op/s
Oct 02 12:38:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:34.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:34 compute-0 ceph-mon[73607]: pgmap v2335: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 756 KiB/s rd, 35 KiB/s wr, 98 op/s
Oct 02 12:38:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:35.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2336: 305 pgs: 305 active+clean; 259 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 577 KiB/s rd, 37 KiB/s wr, 92 op/s
Oct 02 12:38:35 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1974365964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:38:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:36.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:38:36 compute-0 ceph-mon[73607]: pgmap v2336: 305 pgs: 305 active+clean; 259 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 577 KiB/s rd, 37 KiB/s wr, 92 op/s
Oct 02 12:38:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3882477945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1405666715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:37.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:37 compute-0 nova_compute[257802]: 2025-10-02 12:38:37.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2337: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 576 KiB/s rd, 51 KiB/s wr, 96 op/s
Oct 02 12:38:38 compute-0 nova_compute[257802]: 2025-10-02 12:38:38.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:38 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/894417205' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:38.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:38 compute-0 nova_compute[257802]: 2025-10-02 12:38:38.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:39 compute-0 ceph-mon[73607]: pgmap v2337: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 576 KiB/s rd, 51 KiB/s wr, 96 op/s
Oct 02 12:38:39 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1546098081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:39.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2338: 305 pgs: 305 active+clean; 227 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 413 KiB/s rd, 1.1 MiB/s wr, 118 op/s
Oct 02 12:38:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:40.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:41 compute-0 ceph-mon[73607]: pgmap v2338: 305 pgs: 305 active+clean; 227 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 413 KiB/s rd, 1.1 MiB/s wr, 118 op/s
Oct 02 12:38:41 compute-0 nova_compute[257802]: 2025-10-02 12:38:41.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:38:41 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2219945569' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:38:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:38:41 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2219945569' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:38:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:41.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2339: 305 pgs: 305 active+clean; 260 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1004 KiB/s rd, 2.2 MiB/s wr, 114 op/s
Oct 02 12:38:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:42.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:38:42
Oct 02 12:38:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:38:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:38:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['volumes', 'default.rgw.log', '.mgr', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'vms']
Oct 02 12:38:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:38:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2219945569' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:38:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2219945569' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:38:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:38:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:38:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:38:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:38:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:38:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:38:42 compute-0 nova_compute[257802]: 2025-10-02 12:38:42.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:38:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:38:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:38:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:38:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:38:43 compute-0 nova_compute[257802]: 2025-10-02 12:38:43.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:43.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2340: 305 pgs: 305 active+clean; 278 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.1 MiB/s wr, 132 op/s
Oct 02 12:38:43 compute-0 ceph-mon[73607]: pgmap v2339: 305 pgs: 305 active+clean; 260 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1004 KiB/s rd, 2.2 MiB/s wr, 114 op/s
Oct 02 12:38:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1066104501' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:38:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:38:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:38:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:38:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:38:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:44.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:44 compute-0 ceph-mon[73607]: pgmap v2340: 305 pgs: 305 active+clean; 278 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.1 MiB/s wr, 132 op/s
Oct 02 12:38:44 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1558871007' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:38:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:45.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:38:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2341: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 164 op/s
Oct 02 12:38:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3006945388' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:38:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3006945388' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:38:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:46.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:47 compute-0 ceph-mon[73607]: pgmap v2341: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 164 op/s
Oct 02 12:38:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:47.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:47 compute-0 nova_compute[257802]: 2025-10-02 12:38:47.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2342: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 173 op/s
Oct 02 12:38:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:48.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:48 compute-0 ceph-mon[73607]: pgmap v2342: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 173 op/s
Oct 02 12:38:48 compute-0 nova_compute[257802]: 2025-10-02 12:38:48.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:49 compute-0 ovn_controller[148183]: 2025-10-02T12:38:49Z|00646|binding|INFO|Releasing lport 293fb87a-10df-4698-a69e-3023bca5a6a3 from this chassis (sb_readonly=0)
Oct 02 12:38:49 compute-0 nova_compute[257802]: 2025-10-02 12:38:49.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:49 compute-0 sudo[348351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:38:49 compute-0 sudo[348351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:49 compute-0 nova_compute[257802]: 2025-10-02 12:38:49.696 2 DEBUG oslo_concurrency.lockutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Acquiring lock "c03d6d93-3bfc-4356-bdea-f62670b73a91" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:49 compute-0 nova_compute[257802]: 2025-10-02 12:38:49.697 2 DEBUG oslo_concurrency.lockutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Lock "c03d6d93-3bfc-4356-bdea-f62670b73a91" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:49 compute-0 sudo[348351]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:49.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:49 compute-0 nova_compute[257802]: 2025-10-02 12:38:49.716 2 DEBUG nova.compute.manager [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:38:49 compute-0 sudo[348376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:38:49 compute-0 sudo[348376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:49 compute-0 sudo[348376]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:49 compute-0 nova_compute[257802]: 2025-10-02 12:38:49.808 2 DEBUG oslo_concurrency.lockutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:49 compute-0 nova_compute[257802]: 2025-10-02 12:38:49.809 2 DEBUG oslo_concurrency.lockutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:49 compute-0 nova_compute[257802]: 2025-10-02 12:38:49.815 2 DEBUG nova.virt.hardware [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:38:49 compute-0 nova_compute[257802]: 2025-10-02 12:38:49.816 2 INFO nova.compute.claims [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:38:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2343: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.5 MiB/s wr, 190 op/s
Oct 02 12:38:49 compute-0 nova_compute[257802]: 2025-10-02 12:38:49.963 2 DEBUG oslo_concurrency.processutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:50.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:38:50 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2588265676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:50 compute-0 nova_compute[257802]: 2025-10-02 12:38:50.640 2 DEBUG oslo_concurrency.processutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.676s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:50 compute-0 nova_compute[257802]: 2025-10-02 12:38:50.649 2 DEBUG nova.compute.provider_tree [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:38:50 compute-0 nova_compute[257802]: 2025-10-02 12:38:50.667 2 DEBUG nova.scheduler.client.report [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:38:50 compute-0 nova_compute[257802]: 2025-10-02 12:38:50.691 2 DEBUG oslo_concurrency.lockutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:50 compute-0 nova_compute[257802]: 2025-10-02 12:38:50.692 2 DEBUG nova.compute.manager [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:38:50 compute-0 nova_compute[257802]: 2025-10-02 12:38:50.740 2 DEBUG nova.compute.manager [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:38:50 compute-0 nova_compute[257802]: 2025-10-02 12:38:50.741 2 DEBUG nova.network.neutron [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:38:50 compute-0 nova_compute[257802]: 2025-10-02 12:38:50.762 2 INFO nova.virt.libvirt.driver [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:38:50 compute-0 nova_compute[257802]: 2025-10-02 12:38:50.776 2 DEBUG nova.compute.manager [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:38:50 compute-0 nova_compute[257802]: 2025-10-02 12:38:50.912 2 DEBUG nova.compute.manager [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:38:50 compute-0 nova_compute[257802]: 2025-10-02 12:38:50.913 2 DEBUG nova.virt.libvirt.driver [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:38:50 compute-0 nova_compute[257802]: 2025-10-02 12:38:50.913 2 INFO nova.virt.libvirt.driver [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Creating image(s)
Oct 02 12:38:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:51 compute-0 nova_compute[257802]: 2025-10-02 12:38:51.145 2 DEBUG nova.storage.rbd_utils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] rbd image c03d6d93-3bfc-4356-bdea-f62670b73a91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:38:51 compute-0 nova_compute[257802]: 2025-10-02 12:38:51.187 2 DEBUG nova.storage.rbd_utils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] rbd image c03d6d93-3bfc-4356-bdea-f62670b73a91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:38:51 compute-0 nova_compute[257802]: 2025-10-02 12:38:51.215 2 DEBUG nova.storage.rbd_utils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] rbd image c03d6d93-3bfc-4356-bdea-f62670b73a91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:38:51 compute-0 nova_compute[257802]: 2025-10-02 12:38:51.219 2 DEBUG oslo_concurrency.processutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:51 compute-0 nova_compute[257802]: 2025-10-02 12:38:51.304 2 DEBUG oslo_concurrency.processutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:51 compute-0 nova_compute[257802]: 2025-10-02 12:38:51.305 2 DEBUG oslo_concurrency.lockutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:51 compute-0 nova_compute[257802]: 2025-10-02 12:38:51.306 2 DEBUG oslo_concurrency.lockutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:51 compute-0 nova_compute[257802]: 2025-10-02 12:38:51.306 2 DEBUG oslo_concurrency.lockutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:51 compute-0 nova_compute[257802]: 2025-10-02 12:38:51.327 2 DEBUG nova.storage.rbd_utils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] rbd image c03d6d93-3bfc-4356-bdea-f62670b73a91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:38:51 compute-0 nova_compute[257802]: 2025-10-02 12:38:51.330 2 DEBUG oslo_concurrency.processutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 c03d6d93-3bfc-4356-bdea-f62670b73a91_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:51 compute-0 nova_compute[257802]: 2025-10-02 12:38:51.356 2 DEBUG nova.policy [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b4ad22acfd744e47aa9bb09035188e74', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1294b2ea04b34f7189fde66e2afa2c56', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:38:51 compute-0 ceph-mon[73607]: pgmap v2343: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.5 MiB/s wr, 190 op/s
Oct 02 12:38:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2588265676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:51.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2344: 305 pgs: 305 active+clean; 298 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.2 MiB/s wr, 178 op/s
Oct 02 12:38:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:38:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:52.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:38:52 compute-0 nova_compute[257802]: 2025-10-02 12:38:52.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:52 compute-0 ceph-mon[73607]: pgmap v2344: 305 pgs: 305 active+clean; 298 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.2 MiB/s wr, 178 op/s
Oct 02 12:38:53 compute-0 nova_compute[257802]: 2025-10-02 12:38:53.047 2 DEBUG nova.network.neutron [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Successfully created port: 85648688-e368-42fd-86c6-d892c37f8c7b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:38:53 compute-0 nova_compute[257802]: 2025-10-02 12:38:53.237 2 DEBUG oslo_concurrency.processutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 c03d6d93-3bfc-4356-bdea-f62670b73a91_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.908s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:53 compute-0 nova_compute[257802]: 2025-10-02 12:38:53.305 2 DEBUG nova.storage.rbd_utils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] resizing rbd image c03d6d93-3bfc-4356-bdea-f62670b73a91_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:38:53 compute-0 nova_compute[257802]: 2025-10-02 12:38:53.405 2 DEBUG nova.objects.instance [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Lazy-loading 'migration_context' on Instance uuid c03d6d93-3bfc-4356-bdea-f62670b73a91 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:53 compute-0 nova_compute[257802]: 2025-10-02 12:38:53.420 2 DEBUG nova.virt.libvirt.driver [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:38:53 compute-0 nova_compute[257802]: 2025-10-02 12:38:53.420 2 DEBUG nova.virt.libvirt.driver [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Ensure instance console log exists: /var/lib/nova/instances/c03d6d93-3bfc-4356-bdea-f62670b73a91/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:38:53 compute-0 nova_compute[257802]: 2025-10-02 12:38:53.421 2 DEBUG oslo_concurrency.lockutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:53 compute-0 nova_compute[257802]: 2025-10-02 12:38:53.421 2 DEBUG oslo_concurrency.lockutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:53 compute-0 nova_compute[257802]: 2025-10-02 12:38:53.421 2 DEBUG oslo_concurrency.lockutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:53 compute-0 nova_compute[257802]: 2025-10-02 12:38:53.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:38:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:53.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:38:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2345: 305 pgs: 305 active+clean; 319 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.0 MiB/s wr, 175 op/s
Oct 02 12:38:53 compute-0 podman[348593]: 2025-10-02 12:38:53.926123423 +0000 UTC m=+0.057576379 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 12:38:53 compute-0 podman[348594]: 2025-10-02 12:38:53.932724435 +0000 UTC m=+0.059301041 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:38:53 compute-0 podman[348592]: 2025-10-02 12:38:53.945664712 +0000 UTC m=+0.073767675 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 12:38:54 compute-0 nova_compute[257802]: 2025-10-02 12:38:54.300 2 DEBUG nova.network.neutron [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Successfully updated port: 85648688-e368-42fd-86c6-d892c37f8c7b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:38:54 compute-0 nova_compute[257802]: 2025-10-02 12:38:54.314 2 DEBUG oslo_concurrency.lockutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Acquiring lock "refresh_cache-c03d6d93-3bfc-4356-bdea-f62670b73a91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:38:54 compute-0 nova_compute[257802]: 2025-10-02 12:38:54.315 2 DEBUG oslo_concurrency.lockutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Acquired lock "refresh_cache-c03d6d93-3bfc-4356-bdea-f62670b73a91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:38:54 compute-0 nova_compute[257802]: 2025-10-02 12:38:54.315 2 DEBUG nova.network.neutron [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:38:54 compute-0 nova_compute[257802]: 2025-10-02 12:38:54.434 2 DEBUG nova.compute.manager [req-654efa37-c05f-4667-8c83-198696637765 req-3f88d11a-9d40-49f6-be0f-997cbb1b30ec d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Received event network-changed-85648688-e368-42fd-86c6-d892c37f8c7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:54 compute-0 nova_compute[257802]: 2025-10-02 12:38:54.436 2 DEBUG nova.compute.manager [req-654efa37-c05f-4667-8c83-198696637765 req-3f88d11a-9d40-49f6-be0f-997cbb1b30ec d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Refreshing instance network info cache due to event network-changed-85648688-e368-42fd-86c6-d892c37f8c7b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:38:54 compute-0 nova_compute[257802]: 2025-10-02 12:38:54.437 2 DEBUG oslo_concurrency.lockutils [req-654efa37-c05f-4667-8c83-198696637765 req-3f88d11a-9d40-49f6-be0f-997cbb1b30ec d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-c03d6d93-3bfc-4356-bdea-f62670b73a91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:38:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:38:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:54.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:38:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:38:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:38:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:38:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:38:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005094192024939512 of space, bias 1.0, pg target 1.5282576074818537 quantized to 32 (current 32)
Oct 02 12:38:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:38:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6464260320700727 quantized to 32 (current 32)
Oct 02 12:38:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:38:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:38:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:38:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 12:38:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:38:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 12:38:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:38:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:38:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:38:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027172174530057695 quantized to 32 (current 32)
Oct 02 12:38:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:38:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 12:38:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:38:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:38:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:38:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 12:38:54 compute-0 nova_compute[257802]: 2025-10-02 12:38:54.650 2 DEBUG nova.network.neutron [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:38:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:38:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/781174056' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:38:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:38:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/781174056' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:38:55 compute-0 nova_compute[257802]: 2025-10-02 12:38:55.542 2 DEBUG nova.network.neutron [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Updating instance_info_cache with network_info: [{"id": "85648688-e368-42fd-86c6-d892c37f8c7b", "address": "fa:16:3e:dc:25:6b", "network": {"id": "9d35d228-f367-4a7c-abd0-4c3ae2b08283", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-255695143-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1294b2ea04b34f7189fde66e2afa2c56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85648688-e3", "ovs_interfaceid": "85648688-e368-42fd-86c6-d892c37f8c7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:38:55 compute-0 nova_compute[257802]: 2025-10-02 12:38:55.560 2 DEBUG oslo_concurrency.lockutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Releasing lock "refresh_cache-c03d6d93-3bfc-4356-bdea-f62670b73a91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:38:55 compute-0 nova_compute[257802]: 2025-10-02 12:38:55.560 2 DEBUG nova.compute.manager [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Instance network_info: |[{"id": "85648688-e368-42fd-86c6-d892c37f8c7b", "address": "fa:16:3e:dc:25:6b", "network": {"id": "9d35d228-f367-4a7c-abd0-4c3ae2b08283", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-255695143-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1294b2ea04b34f7189fde66e2afa2c56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85648688-e3", "ovs_interfaceid": "85648688-e368-42fd-86c6-d892c37f8c7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:38:55 compute-0 nova_compute[257802]: 2025-10-02 12:38:55.560 2 DEBUG oslo_concurrency.lockutils [req-654efa37-c05f-4667-8c83-198696637765 req-3f88d11a-9d40-49f6-be0f-997cbb1b30ec d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-c03d6d93-3bfc-4356-bdea-f62670b73a91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:38:55 compute-0 nova_compute[257802]: 2025-10-02 12:38:55.561 2 DEBUG nova.network.neutron [req-654efa37-c05f-4667-8c83-198696637765 req-3f88d11a-9d40-49f6-be0f-997cbb1b30ec d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Refreshing network info cache for port 85648688-e368-42fd-86c6-d892c37f8c7b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:38:55 compute-0 nova_compute[257802]: 2025-10-02 12:38:55.563 2 DEBUG nova.virt.libvirt.driver [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Start _get_guest_xml network_info=[{"id": "85648688-e368-42fd-86c6-d892c37f8c7b", "address": "fa:16:3e:dc:25:6b", "network": {"id": "9d35d228-f367-4a7c-abd0-4c3ae2b08283", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-255695143-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1294b2ea04b34f7189fde66e2afa2c56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85648688-e3", "ovs_interfaceid": "85648688-e368-42fd-86c6-d892c37f8c7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:38:55 compute-0 nova_compute[257802]: 2025-10-02 12:38:55.568 2 WARNING nova.virt.libvirt.driver [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:38:55 compute-0 nova_compute[257802]: 2025-10-02 12:38:55.577 2 DEBUG nova.virt.libvirt.host [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:38:55 compute-0 nova_compute[257802]: 2025-10-02 12:38:55.578 2 DEBUG nova.virt.libvirt.host [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:38:55 compute-0 nova_compute[257802]: 2025-10-02 12:38:55.582 2 DEBUG nova.virt.libvirt.host [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:38:55 compute-0 nova_compute[257802]: 2025-10-02 12:38:55.582 2 DEBUG nova.virt.libvirt.host [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:38:55 compute-0 nova_compute[257802]: 2025-10-02 12:38:55.583 2 DEBUG nova.virt.libvirt.driver [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:38:55 compute-0 nova_compute[257802]: 2025-10-02 12:38:55.584 2 DEBUG nova.virt.hardware [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:38:55 compute-0 nova_compute[257802]: 2025-10-02 12:38:55.584 2 DEBUG nova.virt.hardware [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:38:55 compute-0 nova_compute[257802]: 2025-10-02 12:38:55.584 2 DEBUG nova.virt.hardware [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:38:55 compute-0 nova_compute[257802]: 2025-10-02 12:38:55.585 2 DEBUG nova.virt.hardware [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:38:55 compute-0 nova_compute[257802]: 2025-10-02 12:38:55.585 2 DEBUG nova.virt.hardware [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:38:55 compute-0 nova_compute[257802]: 2025-10-02 12:38:55.585 2 DEBUG nova.virt.hardware [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:38:55 compute-0 nova_compute[257802]: 2025-10-02 12:38:55.586 2 DEBUG nova.virt.hardware [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:38:55 compute-0 nova_compute[257802]: 2025-10-02 12:38:55.586 2 DEBUG nova.virt.hardware [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:38:55 compute-0 nova_compute[257802]: 2025-10-02 12:38:55.586 2 DEBUG nova.virt.hardware [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:38:55 compute-0 nova_compute[257802]: 2025-10-02 12:38:55.587 2 DEBUG nova.virt.hardware [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:38:55 compute-0 nova_compute[257802]: 2025-10-02 12:38:55.587 2 DEBUG nova.virt.hardware [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:38:55 compute-0 nova_compute[257802]: 2025-10-02 12:38:55.590 2 DEBUG oslo_concurrency.processutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:55.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:55 compute-0 ceph-mon[73607]: pgmap v2345: 305 pgs: 305 active+clean; 319 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.0 MiB/s wr, 175 op/s
Oct 02 12:38:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/781174056' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:38:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/781174056' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:38:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2346: 305 pgs: 305 active+clean; 337 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.0 MiB/s wr, 167 op/s
Oct 02 12:38:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:38:56 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3947997443' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:56 compute-0 nova_compute[257802]: 2025-10-02 12:38:56.209 2 DEBUG oslo_concurrency.processutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:56 compute-0 nova_compute[257802]: 2025-10-02 12:38:56.257 2 DEBUG nova.storage.rbd_utils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] rbd image c03d6d93-3bfc-4356-bdea-f62670b73a91_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:38:56 compute-0 nova_compute[257802]: 2025-10-02 12:38:56.263 2 DEBUG oslo_concurrency.processutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:56.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:38:56 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/678183792' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:56 compute-0 nova_compute[257802]: 2025-10-02 12:38:56.704 2 DEBUG oslo_concurrency.processutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:56 compute-0 nova_compute[257802]: 2025-10-02 12:38:56.707 2 DEBUG nova.virt.libvirt.vif [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:38:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1407304067',display_name='tempest-TestEncryptedCinderVolumes-server-1407304067',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1407304067',id=150,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJOMJUj6TR5Hs2OVN7EF3FXhyHxf5yFPtChZDgiv15qWm1+k3afDIwiy57sWcPPQbfpdF3629DcEaLbL6DecpaHLxDbvnGufOnRy2GJpUApZQYS+d1x5X1KYIoGRgkaZRA==',key_name='tempest-keypair-1596230167',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1294b2ea04b34f7189fde66e2afa2c56',ramdisk_id='',reservation_id='r-2o1xb16a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1986184665',owner_user_name='tempest-TestEncryptedCinderVolumes-1986184665-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:38:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4ad22acfd744e47aa9bb09035188e74',uuid=c03d6d93-3bfc-4356-bdea-f62670b73a91,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "85648688-e368-42fd-86c6-d892c37f8c7b", "address": "fa:16:3e:dc:25:6b", "network": {"id": "9d35d228-f367-4a7c-abd0-4c3ae2b08283", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-255695143-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1294b2ea04b34f7189fde66e2afa2c56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85648688-e3", "ovs_interfaceid": "85648688-e368-42fd-86c6-d892c37f8c7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:38:56 compute-0 nova_compute[257802]: 2025-10-02 12:38:56.707 2 DEBUG nova.network.os_vif_util [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Converting VIF {"id": "85648688-e368-42fd-86c6-d892c37f8c7b", "address": "fa:16:3e:dc:25:6b", "network": {"id": "9d35d228-f367-4a7c-abd0-4c3ae2b08283", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-255695143-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1294b2ea04b34f7189fde66e2afa2c56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85648688-e3", "ovs_interfaceid": "85648688-e368-42fd-86c6-d892c37f8c7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:38:56 compute-0 nova_compute[257802]: 2025-10-02 12:38:56.709 2 DEBUG nova.network.os_vif_util [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:25:6b,bridge_name='br-int',has_traffic_filtering=True,id=85648688-e368-42fd-86c6-d892c37f8c7b,network=Network(9d35d228-f367-4a7c-abd0-4c3ae2b08283),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85648688-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:38:56 compute-0 nova_compute[257802]: 2025-10-02 12:38:56.711 2 DEBUG nova.objects.instance [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Lazy-loading 'pci_devices' on Instance uuid c03d6d93-3bfc-4356-bdea-f62670b73a91 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:56 compute-0 nova_compute[257802]: 2025-10-02 12:38:56.731 2 DEBUG nova.virt.libvirt.driver [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:38:56 compute-0 nova_compute[257802]:   <uuid>c03d6d93-3bfc-4356-bdea-f62670b73a91</uuid>
Oct 02 12:38:56 compute-0 nova_compute[257802]:   <name>instance-00000096</name>
Oct 02 12:38:56 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:38:56 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:38:56 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:38:56 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:       <nova:name>tempest-TestEncryptedCinderVolumes-server-1407304067</nova:name>
Oct 02 12:38:56 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:38:55</nova:creationTime>
Oct 02 12:38:56 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:38:56 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:38:56 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:38:56 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:38:56 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:38:56 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:38:56 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:38:56 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:38:56 compute-0 nova_compute[257802]:         <nova:user uuid="b4ad22acfd744e47aa9bb09035188e74">tempest-TestEncryptedCinderVolumes-1986184665-project-member</nova:user>
Oct 02 12:38:56 compute-0 nova_compute[257802]:         <nova:project uuid="1294b2ea04b34f7189fde66e2afa2c56">tempest-TestEncryptedCinderVolumes-1986184665</nova:project>
Oct 02 12:38:56 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:38:56 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:38:56 compute-0 nova_compute[257802]:         <nova:port uuid="85648688-e368-42fd-86c6-d892c37f8c7b">
Oct 02 12:38:56 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:38:56 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:38:56 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:38:56 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <system>
Oct 02 12:38:56 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:38:56 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:38:56 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:38:56 compute-0 nova_compute[257802]:       <entry name="serial">c03d6d93-3bfc-4356-bdea-f62670b73a91</entry>
Oct 02 12:38:56 compute-0 nova_compute[257802]:       <entry name="uuid">c03d6d93-3bfc-4356-bdea-f62670b73a91</entry>
Oct 02 12:38:56 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     </system>
Oct 02 12:38:56 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:38:56 compute-0 nova_compute[257802]:   <os>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:   </os>
Oct 02 12:38:56 compute-0 nova_compute[257802]:   <features>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:   </features>
Oct 02 12:38:56 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:38:56 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:38:56 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:38:56 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/c03d6d93-3bfc-4356-bdea-f62670b73a91_disk">
Oct 02 12:38:56 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:       </source>
Oct 02 12:38:56 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:38:56 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:38:56 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:38:56 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/c03d6d93-3bfc-4356-bdea-f62670b73a91_disk.config">
Oct 02 12:38:56 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:       </source>
Oct 02 12:38:56 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:38:56 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:38:56 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:38:56 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:dc:25:6b"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:       <target dev="tap85648688-e3"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:38:56 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/c03d6d93-3bfc-4356-bdea-f62670b73a91/console.log" append="off"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <video>
Oct 02 12:38:56 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     </video>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:38:56 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:38:56 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:38:56 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:38:56 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:38:56 compute-0 nova_compute[257802]: </domain>
Oct 02 12:38:56 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:38:56 compute-0 nova_compute[257802]: 2025-10-02 12:38:56.733 2 DEBUG nova.compute.manager [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Preparing to wait for external event network-vif-plugged-85648688-e368-42fd-86c6-d892c37f8c7b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:38:56 compute-0 nova_compute[257802]: 2025-10-02 12:38:56.734 2 DEBUG oslo_concurrency.lockutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Acquiring lock "c03d6d93-3bfc-4356-bdea-f62670b73a91-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:56 compute-0 nova_compute[257802]: 2025-10-02 12:38:56.734 2 DEBUG oslo_concurrency.lockutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Lock "c03d6d93-3bfc-4356-bdea-f62670b73a91-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:56 compute-0 nova_compute[257802]: 2025-10-02 12:38:56.735 2 DEBUG oslo_concurrency.lockutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Lock "c03d6d93-3bfc-4356-bdea-f62670b73a91-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:56 compute-0 nova_compute[257802]: 2025-10-02 12:38:56.736 2 DEBUG nova.virt.libvirt.vif [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:38:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1407304067',display_name='tempest-TestEncryptedCinderVolumes-server-1407304067',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1407304067',id=150,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJOMJUj6TR5Hs2OVN7EF3FXhyHxf5yFPtChZDgiv15qWm1+k3afDIwiy57sWcPPQbfpdF3629DcEaLbL6DecpaHLxDbvnGufOnRy2GJpUApZQYS+d1x5X1KYIoGRgkaZRA==',key_name='tempest-keypair-1596230167',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1294b2ea04b34f7189fde66e2afa2c56',ramdisk_id='',reservation_id='r-2o1xb16a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1986184665',owner_user_name='tempest-TestEncryptedCinderVolumes-1986184665-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:38:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4ad22acfd744e47aa9bb09035188e74',uuid=c03d6d93-3bfc-4356-bdea-f62670b73a91,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "85648688-e368-42fd-86c6-d892c37f8c7b", "address": "fa:16:3e:dc:25:6b", "network": {"id": "9d35d228-f367-4a7c-abd0-4c3ae2b08283", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-255695143-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1294b2ea04b34f7189fde66e2afa2c56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85648688-e3", "ovs_interfaceid": "85648688-e368-42fd-86c6-d892c37f8c7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:38:56 compute-0 nova_compute[257802]: 2025-10-02 12:38:56.737 2 DEBUG nova.network.os_vif_util [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Converting VIF {"id": "85648688-e368-42fd-86c6-d892c37f8c7b", "address": "fa:16:3e:dc:25:6b", "network": {"id": "9d35d228-f367-4a7c-abd0-4c3ae2b08283", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-255695143-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1294b2ea04b34f7189fde66e2afa2c56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85648688-e3", "ovs_interfaceid": "85648688-e368-42fd-86c6-d892c37f8c7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:38:56 compute-0 nova_compute[257802]: 2025-10-02 12:38:56.738 2 DEBUG nova.network.os_vif_util [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:25:6b,bridge_name='br-int',has_traffic_filtering=True,id=85648688-e368-42fd-86c6-d892c37f8c7b,network=Network(9d35d228-f367-4a7c-abd0-4c3ae2b08283),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85648688-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:38:56 compute-0 nova_compute[257802]: 2025-10-02 12:38:56.738 2 DEBUG os_vif [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:25:6b,bridge_name='br-int',has_traffic_filtering=True,id=85648688-e368-42fd-86c6-d892c37f8c7b,network=Network(9d35d228-f367-4a7c-abd0-4c3ae2b08283),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85648688-e3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:38:56 compute-0 nova_compute[257802]: 2025-10-02 12:38:56.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:56 compute-0 nova_compute[257802]: 2025-10-02 12:38:56.740 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:56 compute-0 nova_compute[257802]: 2025-10-02 12:38:56.741 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:38:56 compute-0 nova_compute[257802]: 2025-10-02 12:38:56.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:56 compute-0 nova_compute[257802]: 2025-10-02 12:38:56.747 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85648688-e3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:56 compute-0 nova_compute[257802]: 2025-10-02 12:38:56.748 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap85648688-e3, col_values=(('external_ids', {'iface-id': '85648688-e368-42fd-86c6-d892c37f8c7b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dc:25:6b', 'vm-uuid': 'c03d6d93-3bfc-4356-bdea-f62670b73a91'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:56 compute-0 nova_compute[257802]: 2025-10-02 12:38:56.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:56 compute-0 NetworkManager[44987]: <info>  [1759408736.7521] manager: (tap85648688-e3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/295)
Oct 02 12:38:56 compute-0 nova_compute[257802]: 2025-10-02 12:38:56.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:38:56 compute-0 nova_compute[257802]: 2025-10-02 12:38:56.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:56 compute-0 nova_compute[257802]: 2025-10-02 12:38:56.761 2 INFO os_vif [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:25:6b,bridge_name='br-int',has_traffic_filtering=True,id=85648688-e368-42fd-86c6-d892c37f8c7b,network=Network(9d35d228-f367-4a7c-abd0-4c3ae2b08283),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85648688-e3')
Oct 02 12:38:57 compute-0 nova_compute[257802]: 2025-10-02 12:38:57.180 2 DEBUG nova.network.neutron [req-654efa37-c05f-4667-8c83-198696637765 req-3f88d11a-9d40-49f6-be0f-997cbb1b30ec d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Updated VIF entry in instance network info cache for port 85648688-e368-42fd-86c6-d892c37f8c7b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:38:57 compute-0 nova_compute[257802]: 2025-10-02 12:38:57.181 2 DEBUG nova.network.neutron [req-654efa37-c05f-4667-8c83-198696637765 req-3f88d11a-9d40-49f6-be0f-997cbb1b30ec d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Updating instance_info_cache with network_info: [{"id": "85648688-e368-42fd-86c6-d892c37f8c7b", "address": "fa:16:3e:dc:25:6b", "network": {"id": "9d35d228-f367-4a7c-abd0-4c3ae2b08283", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-255695143-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1294b2ea04b34f7189fde66e2afa2c56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85648688-e3", "ovs_interfaceid": "85648688-e368-42fd-86c6-d892c37f8c7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:38:57 compute-0 nova_compute[257802]: 2025-10-02 12:38:57.198 2 DEBUG oslo_concurrency.lockutils [req-654efa37-c05f-4667-8c83-198696637765 req-3f88d11a-9d40-49f6-be0f-997cbb1b30ec d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-c03d6d93-3bfc-4356-bdea-f62670b73a91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:38:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:57.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2347: 305 pgs: 305 active+clean; 361 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.8 MiB/s wr, 142 op/s
Oct 02 12:38:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:58.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:58 compute-0 nova_compute[257802]: 2025-10-02 12:38:58.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:38:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:59.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2348: 305 pgs: 305 active+clean; 365 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.8 MiB/s wr, 152 op/s
Oct 02 12:39:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:39:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:00.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:39:01 compute-0 nova_compute[257802]: 2025-10-02 12:39:01.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:39:01 compute-0 nova_compute[257802]: 2025-10-02 12:39:01.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:39:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:01.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:01 compute-0 nova_compute[257802]: 2025-10-02 12:39:01.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2349: 305 pgs: 305 active+clean; 365 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.8 MiB/s wr, 123 op/s
Oct 02 12:39:01 compute-0 podman[348715]: 2025-10-02 12:39:01.942411622 +0000 UTC m=+0.078293827 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:39:02 compute-0 nova_compute[257802]: 2025-10-02 12:39:02.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:39:02 compute-0 nova_compute[257802]: 2025-10-02 12:39:02.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:39:02 compute-0 nova_compute[257802]: 2025-10-02 12:39:02.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:39:02 compute-0 nova_compute[257802]: 2025-10-02 12:39:02.122 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:39:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:39:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:02.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:39:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:03.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:03 compute-0 nova_compute[257802]: 2025-10-02 12:39:03.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2350: 305 pgs: 305 active+clean; 365 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.2 MiB/s wr, 99 op/s
Oct 02 12:39:04 compute-0 nova_compute[257802]: 2025-10-02 12:39:04.122 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:39:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:04.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:04 compute-0 ceph-mds[95441]: mds.beacon.cephfs.compute-0.odxjnj missed beacon ack from the monitors
Oct 02 12:39:05 compute-0 nova_compute[257802]: 2025-10-02 12:39:05.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:39:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:39:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:05.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:39:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2351: 305 pgs: 305 active+clean; 365 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 151 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Oct 02 12:39:05 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _do_read, latency = 7.574198723s, num_ios = 1
Oct 02 12:39:05 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for read, latency = 7.574267864s
Oct 02 12:39:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:05 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 9.631754875s
Oct 02 12:39:05 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 9.631754875s
Oct 02 12:39:05 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.650602341s, txc = 0x55bcd528ac00
Oct 02 12:39:05 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.650038719s, txc = 0x55bcd61a0900
Oct 02 12:39:05 compute-0 nova_compute[257802]: 2025-10-02 12:39:05.994 2 DEBUG nova.virt.libvirt.driver [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:39:05 compute-0 nova_compute[257802]: 2025-10-02 12:39:05.995 2 DEBUG nova.virt.libvirt.driver [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:39:05 compute-0 nova_compute[257802]: 2025-10-02 12:39:05.995 2 DEBUG nova.virt.libvirt.driver [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] No VIF found with MAC fa:16:3e:dc:25:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:39:06 compute-0 nova_compute[257802]: 2025-10-02 12:39:05.996 2 INFO nova.virt.libvirt.driver [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Using config drive
Oct 02 12:39:06 compute-0 nova_compute[257802]: 2025-10-02 12:39:06.029 2 DEBUG nova.storage.rbd_utils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] rbd image c03d6d93-3bfc-4356-bdea-f62670b73a91_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:39:06 compute-0 ceph-mon[73607]: pgmap v2346: 305 pgs: 305 active+clean; 337 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.0 MiB/s wr, 167 op/s
Oct 02 12:39:06 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3947997443' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:39:06 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/678183792' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:39:06 compute-0 nova_compute[257802]: 2025-10-02 12:39:06.421 2 INFO nova.virt.libvirt.driver [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Creating config drive at /var/lib/nova/instances/c03d6d93-3bfc-4356-bdea-f62670b73a91/disk.config
Oct 02 12:39:06 compute-0 nova_compute[257802]: 2025-10-02 12:39:06.433 2 DEBUG oslo_concurrency.processutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c03d6d93-3bfc-4356-bdea-f62670b73a91/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqmamwlaf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:39:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:06.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:06 compute-0 nova_compute[257802]: 2025-10-02 12:39:06.582 2 DEBUG oslo_concurrency.processutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c03d6d93-3bfc-4356-bdea-f62670b73a91/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqmamwlaf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:39:06 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.323419571s, txc = 0x55bcd528bb00
Oct 02 12:39:07 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Oct 02 12:39:07 compute-0 ceph-mon[73607]: paxos.0).electionLogic(19) init, last seen epoch 19, mid-election, bumping
Oct 02 12:39:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:07.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2352: 305 pgs: 305 active+clean; 365 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 130 KiB/s rd, 1.3 MiB/s wr, 41 op/s
Oct 02 12:39:08 compute-0 sudo[348775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:39:08 compute-0 sudo[348775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:08 compute-0 sudo[348775]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:08 compute-0 sudo[348800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:39:08 compute-0 sudo[348800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:08 compute-0 sudo[348800]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:08 compute-0 sudo[348825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:39:08 compute-0 sudo[348825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:08 compute-0 sudo[348825]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:08 compute-0 sudo[348850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:39:08 compute-0 sudo[348850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:08 compute-0 ceph-mon[73607]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 12:39:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:39:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:08.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:39:08 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct 02 12:39:08 compute-0 sudo[348850]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:08 compute-0 nova_compute[257802]: 2025-10-02 12:39:08.956 2 DEBUG nova.storage.rbd_utils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] rbd image c03d6d93-3bfc-4356-bdea-f62670b73a91_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:39:08 compute-0 nova_compute[257802]: 2025-10-02 12:39:08.962 2 DEBUG oslo_concurrency.processutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c03d6d93-3bfc-4356-bdea-f62670b73a91/disk.config c03d6d93-3bfc-4356-bdea-f62670b73a91_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:39:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:39:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:39:08 compute-0 nova_compute[257802]: 2025-10-02 12:39:08.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:08 compute-0 nova_compute[257802]: 2025-10-02 12:39:08.995 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:39:08 compute-0 nova_compute[257802]: 2025-10-02 12:39:08.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:39:09 compute-0 nova_compute[257802]: 2025-10-02 12:39:09.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:39:09 compute-0 nova_compute[257802]: 2025-10-02 12:39:09.154 2 DEBUG oslo_concurrency.processutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c03d6d93-3bfc-4356-bdea-f62670b73a91/disk.config c03d6d93-3bfc-4356-bdea-f62670b73a91_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:39:09 compute-0 nova_compute[257802]: 2025-10-02 12:39:09.155 2 INFO nova.virt.libvirt.driver [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Deleting local config drive /var/lib/nova/instances/c03d6d93-3bfc-4356-bdea-f62670b73a91/disk.config because it was imported into RBD.
Oct 02 12:39:09 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 02 12:39:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 12:39:09 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gpiyct=up:active} 2 up:standby
Oct 02 12:39:09 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e337: 3 total, 3 up, 3 in
Oct 02 12:39:09 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.fmcstn(active, since 67m), standbys: compute-2.rbjjpf, compute-1.ypnrbl
Oct 02 12:39:09 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 02 12:39:09 compute-0 kernel: tap85648688-e3: entered promiscuous mode
Oct 02 12:39:09 compute-0 NetworkManager[44987]: <info>  [1759408749.2159] manager: (tap85648688-e3): new Tun device (/org/freedesktop/NetworkManager/Devices/296)
Oct 02 12:39:09 compute-0 ovn_controller[148183]: 2025-10-02T12:39:09Z|00647|binding|INFO|Claiming lport 85648688-e368-42fd-86c6-d892c37f8c7b for this chassis.
Oct 02 12:39:09 compute-0 ovn_controller[148183]: 2025-10-02T12:39:09Z|00648|binding|INFO|85648688-e368-42fd-86c6-d892c37f8c7b: Claiming fa:16:3e:dc:25:6b 10.100.0.7
Oct 02 12:39:09 compute-0 nova_compute[257802]: 2025-10-02 12:39:09.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:09 compute-0 ovn_controller[148183]: 2025-10-02T12:39:09Z|00649|binding|INFO|Setting lport 85648688-e368-42fd-86c6-d892c37f8c7b ovn-installed in OVS
Oct 02 12:39:09 compute-0 ovn_controller[148183]: 2025-10-02T12:39:09Z|00650|binding|INFO|Setting lport 85648688-e368-42fd-86c6-d892c37f8c7b up in Southbound
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:09.272 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:25:6b 10.100.0.7'], port_security=['fa:16:3e:dc:25:6b 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c03d6d93-3bfc-4356-bdea-f62670b73a91', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d35d228-f367-4a7c-abd0-4c3ae2b08283', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1294b2ea04b34f7189fde66e2afa2c56', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5427e9f2-85fc-4f8d-a138-f123d5924ea4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d7d1a30a-9898-452a-a8fe-d5b194845098, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=85648688-e368-42fd-86c6-d892c37f8c7b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:39:09 compute-0 nova_compute[257802]: 2025-10-02 12:39:09.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:09.274 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 85648688-e368-42fd-86c6-d892c37f8c7b in datapath 9d35d228-f367-4a7c-abd0-4c3ae2b08283 bound to our chassis
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:09.275 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9d35d228-f367-4a7c-abd0-4c3ae2b08283
Oct 02 12:39:09 compute-0 nova_compute[257802]: 2025-10-02 12:39:09.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:09 compute-0 systemd-machined[211836]: New machine qemu-74-instance-00000096.
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:09.287 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[474f3186-095c-4e80-8d6b-cd84756e764c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:09.288 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9d35d228-f1 in ovnmeta-9d35d228-f367-4a7c-abd0-4c3ae2b08283 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:09.290 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9d35d228-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:09.290 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b02b20e9-171d-4b04-98bb-922ad065eb0e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:09.290 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[39b54b69-7cec-4ead-81b2-6de853c5e8e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:09 compute-0 systemd[1]: Started Virtual Machine qemu-74-instance-00000096.
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:09.309 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[1830c84e-e353-4513-b488-0ce080d7769a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:09 compute-0 systemd-udevd[348953]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:09.325 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8cfcfd8d-4e0a-4781-8708-2a6f4443583f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:09 compute-0 NetworkManager[44987]: <info>  [1759408749.3454] device (tap85648688-e3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:39:09 compute-0 NetworkManager[44987]: <info>  [1759408749.3463] device (tap85648688-e3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:09.373 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[6461793a-5206-4216-b053-3abd788146b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:09.377 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9b004604-3741-4981-ac52-7d31f16bb60f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:09 compute-0 NetworkManager[44987]: <info>  [1759408749.3794] manager: (tap9d35d228-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/297)
Oct 02 12:39:09 compute-0 systemd-udevd[348956]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:09.417 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[0cbba344-5024-4558-9a05-ff44d055c5cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:09.422 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[0fde1d9a-89c3-43ae-bfe7-12594c7a6af7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:09 compute-0 NetworkManager[44987]: <info>  [1759408749.4427] device (tap9d35d228-f0): carrier: link connected
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:09.451 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[ffa30daf-205e-40b1-8052-e780755a84b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:09.466 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[189b8b51-07eb-45c6-a558-a2beab0f0ede]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d35d228-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:2e:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 200], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 681706, 'reachable_time': 34475, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348983, 'error': None, 'target': 'ovnmeta-9d35d228-f367-4a7c-abd0-4c3ae2b08283', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:39:09 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:39:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:09.480 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f0d8f5cd-ecf5-4812-ac41-0036d4860ebd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feab:2e53'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 681706, 'tstamp': 681706}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348984, 'error': None, 'target': 'ovnmeta-9d35d228-f367-4a7c-abd0-4c3ae2b08283', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:09.497 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cbdf2c56-4483-44a6-a50b-d9f4f7c30549]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d35d228-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:2e:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 200], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 681706, 'reachable_time': 34475, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 348985, 'error': None, 'target': 'ovnmeta-9d35d228-f367-4a7c-abd0-4c3ae2b08283', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:09.525 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9633a98e-f726-4ee6-809a-3d90c5525075]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:09 compute-0 nova_compute[257802]: 2025-10-02 12:39:09.576 2 DEBUG nova.compute.manager [req-1bbccacc-0221-43de-b3d9-88eb8a1f59c6 req-876e7813-95dd-4ac6-bde8-eab5b4b2a7b6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Received event network-vif-plugged-85648688-e368-42fd-86c6-d892c37f8c7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:39:09 compute-0 nova_compute[257802]: 2025-10-02 12:39:09.577 2 DEBUG oslo_concurrency.lockutils [req-1bbccacc-0221-43de-b3d9-88eb8a1f59c6 req-876e7813-95dd-4ac6-bde8-eab5b4b2a7b6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "c03d6d93-3bfc-4356-bdea-f62670b73a91-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:39:09 compute-0 nova_compute[257802]: 2025-10-02 12:39:09.578 2 DEBUG oslo_concurrency.lockutils [req-1bbccacc-0221-43de-b3d9-88eb8a1f59c6 req-876e7813-95dd-4ac6-bde8-eab5b4b2a7b6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c03d6d93-3bfc-4356-bdea-f62670b73a91-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:39:09 compute-0 nova_compute[257802]: 2025-10-02 12:39:09.578 2 DEBUG oslo_concurrency.lockutils [req-1bbccacc-0221-43de-b3d9-88eb8a1f59c6 req-876e7813-95dd-4ac6-bde8-eab5b4b2a7b6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c03d6d93-3bfc-4356-bdea-f62670b73a91-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:39:09 compute-0 nova_compute[257802]: 2025-10-02 12:39:09.578 2 DEBUG nova.compute.manager [req-1bbccacc-0221-43de-b3d9-88eb8a1f59c6 req-876e7813-95dd-4ac6-bde8-eab5b4b2a7b6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Processing event network-vif-plugged-85648688-e368-42fd-86c6-d892c37f8c7b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:09.588 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[477ef07e-254a-4d16-9fe8-26d5d49ffbf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:09.590 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d35d228-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:09.590 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:09.591 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d35d228-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:39:09 compute-0 NetworkManager[44987]: <info>  [1759408749.5934] manager: (tap9d35d228-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/298)
Oct 02 12:39:09 compute-0 nova_compute[257802]: 2025-10-02 12:39:09.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:09 compute-0 kernel: tap9d35d228-f0: entered promiscuous mode
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:09.597 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9d35d228-f0, col_values=(('external_ids', {'iface-id': '2a3378e4-d225-4d19-b2b7-dcce99197b9c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:39:09 compute-0 nova_compute[257802]: 2025-10-02 12:39:09.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:09 compute-0 ovn_controller[148183]: 2025-10-02T12:39:09Z|00651|binding|INFO|Releasing lport 2a3378e4-d225-4d19-b2b7-dcce99197b9c from this chassis (sb_readonly=0)
Oct 02 12:39:09 compute-0 nova_compute[257802]: 2025-10-02 12:39:09.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:09.622 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9d35d228-f367-4a7c-abd0-4c3ae2b08283.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9d35d228-f367-4a7c-abd0-4c3ae2b08283.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:09.623 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ac70a278-2a05-4432-9f6d-34257e71eb8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:09.623 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-9d35d228-f367-4a7c-abd0-4c3ae2b08283
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/9d35d228-f367-4a7c-abd0-4c3ae2b08283.pid.haproxy
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 9d35d228-f367-4a7c-abd0-4c3ae2b08283
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:39:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:09.624 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9d35d228-f367-4a7c-abd0-4c3ae2b08283', 'env', 'PROCESS_TAG=haproxy-9d35d228-f367-4a7c-abd0-4c3ae2b08283', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9d35d228-f367-4a7c-abd0-4c3ae2b08283.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:39:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:09.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:09 compute-0 sudo[349020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:39:09 compute-0 sudo[349020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:09 compute-0 sudo[349020]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2353: 305 pgs: 305 active+clean; 367 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 83 KiB/s rd, 572 KiB/s wr, 38 op/s
Oct 02 12:39:09 compute-0 sudo[349045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:39:09 compute-0 sudo[349045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:09 compute-0 sudo[349045]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:10 compute-0 podman[349093]: 2025-10-02 12:39:09.967263739 +0000 UTC m=+0.019874117 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:39:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e337 do_prune osdmap full prune enabled
Oct 02 12:39:10 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:39:10 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev b2f44875-36e6-4a73-ab25-d76eec6592b8 does not exist
Oct 02 12:39:10 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 4304ed2a-28f5-44d0-be3a-ed5f1875aca7 does not exist
Oct 02 12:39:10 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 9e918513-de10-4e02-bbed-d197370c2e45 does not exist
Oct 02 12:39:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:39:10 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:39:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:39:10 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:39:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:39:10 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:39:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:10.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:10 compute-0 sudo[349123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:39:10 compute-0 sudo[349123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:10 compute-0 sudo[349123]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:10 compute-0 sudo[349148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:39:10 compute-0 sudo[349148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:10 compute-0 sudo[349148]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:10 compute-0 nova_compute[257802]: 2025-10-02 12:39:10.568 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408750.5673728, c03d6d93-3bfc-4356-bdea-f62670b73a91 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:39:10 compute-0 nova_compute[257802]: 2025-10-02 12:39:10.569 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] VM Started (Lifecycle Event)
Oct 02 12:39:10 compute-0 nova_compute[257802]: 2025-10-02 12:39:10.570 2 DEBUG nova.compute.manager [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:39:10 compute-0 nova_compute[257802]: 2025-10-02 12:39:10.575 2 DEBUG nova.virt.libvirt.driver [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:39:10 compute-0 nova_compute[257802]: 2025-10-02 12:39:10.580 2 INFO nova.virt.libvirt.driver [-] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Instance spawned successfully.
Oct 02 12:39:10 compute-0 nova_compute[257802]: 2025-10-02 12:39:10.580 2 DEBUG nova.virt.libvirt.driver [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:39:10 compute-0 sudo[349173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:39:10 compute-0 sudo[349173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:10 compute-0 sudo[349173]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:10 compute-0 nova_compute[257802]: 2025-10-02 12:39:10.691 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:39:10 compute-0 sudo[349199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:39:10 compute-0 sudo[349199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:10 compute-0 nova_compute[257802]: 2025-10-02 12:39:10.695 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:39:10 compute-0 nova_compute[257802]: 2025-10-02 12:39:10.741 2 DEBUG nova.virt.libvirt.driver [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:39:10 compute-0 nova_compute[257802]: 2025-10-02 12:39:10.742 2 DEBUG nova.virt.libvirt.driver [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:39:10 compute-0 nova_compute[257802]: 2025-10-02 12:39:10.743 2 DEBUG nova.virt.libvirt.driver [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:39:10 compute-0 nova_compute[257802]: 2025-10-02 12:39:10.744 2 DEBUG nova.virt.libvirt.driver [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:39:10 compute-0 nova_compute[257802]: 2025-10-02 12:39:10.744 2 DEBUG nova.virt.libvirt.driver [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:39:10 compute-0 nova_compute[257802]: 2025-10-02 12:39:10.745 2 DEBUG nova.virt.libvirt.driver [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:39:10 compute-0 nova_compute[257802]: 2025-10-02 12:39:10.768 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:39:10 compute-0 nova_compute[257802]: 2025-10-02 12:39:10.769 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408750.5675929, c03d6d93-3bfc-4356-bdea-f62670b73a91 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:39:10 compute-0 nova_compute[257802]: 2025-10-02 12:39:10.769 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] VM Paused (Lifecycle Event)
Oct 02 12:39:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e338 e338: 3 total, 3 up, 3 in
Oct 02 12:39:10 compute-0 ceph-mon[73607]: pgmap v2351: 305 pgs: 305 active+clean; 365 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 151 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Oct 02 12:39:10 compute-0 ceph-mon[73607]: mon.compute-1 calling monitor election
Oct 02 12:39:10 compute-0 ceph-mon[73607]: mon.compute-2 calling monitor election
Oct 02 12:39:10 compute-0 ceph-mon[73607]: mon.compute-0 calling monitor election
Oct 02 12:39:10 compute-0 ceph-mon[73607]: pgmap v2352: 305 pgs: 305 active+clean; 365 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 130 KiB/s rd, 1.3 MiB/s wr, 41 op/s
Oct 02 12:39:10 compute-0 ceph-mon[73607]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct 02 12:39:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:39:10 compute-0 ceph-mon[73607]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 02 12:39:10 compute-0 ceph-mon[73607]: fsmap cephfs:1 {0=cephfs.compute-2.gpiyct=up:active} 2 up:standby
Oct 02 12:39:10 compute-0 ceph-mon[73607]: osdmap e337: 3 total, 3 up, 3 in
Oct 02 12:39:10 compute-0 ceph-mon[73607]: mgrmap e11: compute-0.fmcstn(active, since 67m), standbys: compute-2.rbjjpf, compute-1.ypnrbl
Oct 02 12:39:10 compute-0 ceph-mon[73607]: overall HEALTH_OK
Oct 02 12:39:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:39:11 compute-0 nova_compute[257802]: 2025-10-02 12:39:11.015 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:39:11 compute-0 nova_compute[257802]: 2025-10-02 12:39:11.021 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408750.573575, c03d6d93-3bfc-4356-bdea-f62670b73a91 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:39:11 compute-0 nova_compute[257802]: 2025-10-02 12:39:11.021 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] VM Resumed (Lifecycle Event)
Oct 02 12:39:11 compute-0 nova_compute[257802]: 2025-10-02 12:39:11.077 2 INFO nova.compute.manager [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Took 20.17 seconds to spawn the instance on the hypervisor.
Oct 02 12:39:11 compute-0 nova_compute[257802]: 2025-10-02 12:39:11.078 2 DEBUG nova.compute.manager [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:39:11 compute-0 podman[349093]: 2025-10-02 12:39:11.121987451 +0000 UTC m=+1.174597809 container create ee013e4da4dc13a3e87be5e8344d78b6603761a8710de2a10728c312467f372d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d35d228-f367-4a7c-abd0-4c3ae2b08283, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 12:39:11 compute-0 nova_compute[257802]: 2025-10-02 12:39:11.177 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:39:11 compute-0 nova_compute[257802]: 2025-10-02 12:39:11.180 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:39:11 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e338: 3 total, 3 up, 3 in
Oct 02 12:39:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e338 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:11 compute-0 nova_compute[257802]: 2025-10-02 12:39:11.351 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:39:11 compute-0 nova_compute[257802]: 2025-10-02 12:39:11.398 2 INFO nova.compute.manager [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Took 21.64 seconds to build instance.
Oct 02 12:39:11 compute-0 systemd[1]: Started libpod-conmon-ee013e4da4dc13a3e87be5e8344d78b6603761a8710de2a10728c312467f372d.scope.
Oct 02 12:39:11 compute-0 nova_compute[257802]: 2025-10-02 12:39:11.433 2 DEBUG oslo_concurrency.lockutils [None req-23f1aea9-0e99-4ae4-86c5-04eb8e23963c b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Lock "c03d6d93-3bfc-4356-bdea-f62670b73a91" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:39:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:39:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d196ba245fed29d4c6bdad394dc0a95757caf823a18bc73b0de513d1f03ca4f3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:39:11 compute-0 podman[349093]: 2025-10-02 12:39:11.603917251 +0000 UTC m=+1.656527659 container init ee013e4da4dc13a3e87be5e8344d78b6603761a8710de2a10728c312467f372d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d35d228-f367-4a7c-abd0-4c3ae2b08283, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 12:39:11 compute-0 podman[349093]: 2025-10-02 12:39:11.619161694 +0000 UTC m=+1.671772042 container start ee013e4da4dc13a3e87be5e8344d78b6603761a8710de2a10728c312467f372d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d35d228-f367-4a7c-abd0-4c3ae2b08283, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:39:11 compute-0 neutron-haproxy-ovnmeta-9d35d228-f367-4a7c-abd0-4c3ae2b08283[349250]: [NOTICE]   (349254) : New worker (349256) forked
Oct 02 12:39:11 compute-0 neutron-haproxy-ovnmeta-9d35d228-f367-4a7c-abd0-4c3ae2b08283[349250]: [NOTICE]   (349254) : Loading success.
Oct 02 12:39:11 compute-0 nova_compute[257802]: 2025-10-02 12:39:11.718 2 DEBUG nova.compute.manager [req-0b887944-7438-430e-9e64-2fdbb6e64f6e req-eb6ec679-c7ba-499a-b480-408ab653dd5d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Received event network-vif-plugged-85648688-e368-42fd-86c6-d892c37f8c7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:39:11 compute-0 nova_compute[257802]: 2025-10-02 12:39:11.718 2 DEBUG oslo_concurrency.lockutils [req-0b887944-7438-430e-9e64-2fdbb6e64f6e req-eb6ec679-c7ba-499a-b480-408ab653dd5d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "c03d6d93-3bfc-4356-bdea-f62670b73a91-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:39:11 compute-0 nova_compute[257802]: 2025-10-02 12:39:11.718 2 DEBUG oslo_concurrency.lockutils [req-0b887944-7438-430e-9e64-2fdbb6e64f6e req-eb6ec679-c7ba-499a-b480-408ab653dd5d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c03d6d93-3bfc-4356-bdea-f62670b73a91-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:39:11 compute-0 nova_compute[257802]: 2025-10-02 12:39:11.718 2 DEBUG oslo_concurrency.lockutils [req-0b887944-7438-430e-9e64-2fdbb6e64f6e req-eb6ec679-c7ba-499a-b480-408ab653dd5d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c03d6d93-3bfc-4356-bdea-f62670b73a91-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:39:11 compute-0 nova_compute[257802]: 2025-10-02 12:39:11.719 2 DEBUG nova.compute.manager [req-0b887944-7438-430e-9e64-2fdbb6e64f6e req-eb6ec679-c7ba-499a-b480-408ab653dd5d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] No waiting events found dispatching network-vif-plugged-85648688-e368-42fd-86c6-d892c37f8c7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:39:11 compute-0 nova_compute[257802]: 2025-10-02 12:39:11.719 2 WARNING nova.compute.manager [req-0b887944-7438-430e-9e64-2fdbb6e64f6e req-eb6ec679-c7ba-499a-b480-408ab653dd5d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Received unexpected event network-vif-plugged-85648688-e368-42fd-86c6-d892c37f8c7b for instance with vm_state active and task_state None.
Oct 02 12:39:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:11.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2355: 305 pgs: 305 active+clean; 383 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 2.1 MiB/s wr, 27 op/s
Oct 02 12:39:12 compute-0 podman[349281]: 2025-10-02 12:39:11.929786204 +0000 UTC m=+0.022397029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:39:12 compute-0 ceph-mon[73607]: pgmap v2353: 305 pgs: 305 active+clean; 367 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 83 KiB/s rd, 572 KiB/s wr, 38 op/s
Oct 02 12:39:12 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:39:12 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:39:12 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:39:12 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:39:12 compute-0 ceph-mon[73607]: osdmap e338: 3 total, 3 up, 3 in
Oct 02 12:39:12 compute-0 podman[349281]: 2025-10-02 12:39:12.130531345 +0000 UTC m=+0.223142150 container create 85fa2f89789b0233c93ce3694019a4e0a1d5d5d21f2e992f08512147f7af900e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:39:12 compute-0 systemd[1]: Started libpod-conmon-85fa2f89789b0233c93ce3694019a4e0a1d5d5d21f2e992f08512147f7af900e.scope.
Oct 02 12:39:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:39:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:39:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:12.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:39:12 compute-0 podman[349281]: 2025-10-02 12:39:12.512207524 +0000 UTC m=+0.604818349 container init 85fa2f89789b0233c93ce3694019a4e0a1d5d5d21f2e992f08512147f7af900e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 12:39:12 compute-0 podman[349281]: 2025-10-02 12:39:12.519199745 +0000 UTC m=+0.611810550 container start 85fa2f89789b0233c93ce3694019a4e0a1d5d5d21f2e992f08512147f7af900e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rosalind, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:39:12 compute-0 systemd[1]: libpod-85fa2f89789b0233c93ce3694019a4e0a1d5d5d21f2e992f08512147f7af900e.scope: Deactivated successfully.
Oct 02 12:39:12 compute-0 wonderful_rosalind[349298]: 167 167
Oct 02 12:39:12 compute-0 conmon[349298]: conmon 85fa2f89789b0233c93c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-85fa2f89789b0233c93ce3694019a4e0a1d5d5d21f2e992f08512147f7af900e.scope/container/memory.events
Oct 02 12:39:12 compute-0 podman[349281]: 2025-10-02 12:39:12.595552383 +0000 UTC m=+0.688163208 container attach 85fa2f89789b0233c93ce3694019a4e0a1d5d5d21f2e992f08512147f7af900e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 12:39:12 compute-0 podman[349281]: 2025-10-02 12:39:12.596132027 +0000 UTC m=+0.688742832 container died 85fa2f89789b0233c93ce3694019a4e0a1d5d5d21f2e992f08512147f7af900e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rosalind, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:39:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:39:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:39:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:39:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:39:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:39:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:39:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9aa4312d578696f3d255e06f161be11ce3be653ea1e6b9b8a56d8dc3a44dfd9-merged.mount: Deactivated successfully.
Oct 02 12:39:13 compute-0 nova_compute[257802]: 2025-10-02 12:39:13.121 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:39:13 compute-0 nova_compute[257802]: 2025-10-02 12:39:13.123 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:39:13 compute-0 nova_compute[257802]: 2025-10-02 12:39:13.124 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:39:13 compute-0 nova_compute[257802]: 2025-10-02 12:39:13.144 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:39:13 compute-0 ceph-mon[73607]: pgmap v2355: 305 pgs: 305 active+clean; 383 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 2.1 MiB/s wr, 27 op/s
Oct 02 12:39:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1424316044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:13 compute-0 nova_compute[257802]: 2025-10-02 12:39:13.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:13.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:13 compute-0 podman[349281]: 2025-10-02 12:39:13.82326938 +0000 UTC m=+1.915880185 container remove 85fa2f89789b0233c93ce3694019a4e0a1d5d5d21f2e992f08512147f7af900e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 12:39:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2356: 305 pgs: 305 active+clean; 386 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 811 KiB/s rd, 2.4 MiB/s wr, 65 op/s
Oct 02 12:39:13 compute-0 systemd[1]: libpod-conmon-85fa2f89789b0233c93ce3694019a4e0a1d5d5d21f2e992f08512147f7af900e.scope: Deactivated successfully.
Oct 02 12:39:13 compute-0 nova_compute[257802]: 2025-10-02 12:39:13.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:14 compute-0 nova_compute[257802]: 2025-10-02 12:39:14.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:39:14 compute-0 podman[349324]: 2025-10-02 12:39:14.009312472 +0000 UTC m=+0.023941247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:39:14 compute-0 nova_compute[257802]: 2025-10-02 12:39:14.125 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:39:14 compute-0 nova_compute[257802]: 2025-10-02 12:39:14.126 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:39:14 compute-0 nova_compute[257802]: 2025-10-02 12:39:14.127 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:39:14 compute-0 nova_compute[257802]: 2025-10-02 12:39:14.127 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:39:14 compute-0 nova_compute[257802]: 2025-10-02 12:39:14.127 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:39:14 compute-0 podman[349324]: 2025-10-02 12:39:14.22503625 +0000 UTC m=+0.239665015 container create cdd46c31ba22b649bfe5e9efc8e98968167e05a03d687ac98c08fdd5bc747f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_diffie, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:39:14 compute-0 systemd[1]: Started libpod-conmon-cdd46c31ba22b649bfe5e9efc8e98968167e05a03d687ac98c08fdd5bc747f3c.scope.
Oct 02 12:39:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/393317a649747952c4980f3d0c3cb800eabda4b72c26afc296ce24d3cb9a3bd1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/393317a649747952c4980f3d0c3cb800eabda4b72c26afc296ce24d3cb9a3bd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/393317a649747952c4980f3d0c3cb800eabda4b72c26afc296ce24d3cb9a3bd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/393317a649747952c4980f3d0c3cb800eabda4b72c26afc296ce24d3cb9a3bd1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/393317a649747952c4980f3d0c3cb800eabda4b72c26afc296ce24d3cb9a3bd1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:39:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:39:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:14.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:39:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:39:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/39691553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:14 compute-0 nova_compute[257802]: 2025-10-02 12:39:14.606 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:39:14 compute-0 nova_compute[257802]: 2025-10-02 12:39:14.686 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:39:14 compute-0 nova_compute[257802]: 2025-10-02 12:39:14.687 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:39:14 compute-0 nova_compute[257802]: 2025-10-02 12:39:14.691 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:39:14 compute-0 nova_compute[257802]: 2025-10-02 12:39:14.691 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:39:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e338 do_prune osdmap full prune enabled
Oct 02 12:39:14 compute-0 podman[349324]: 2025-10-02 12:39:14.740509551 +0000 UTC m=+0.755138346 container init cdd46c31ba22b649bfe5e9efc8e98968167e05a03d687ac98c08fdd5bc747f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_diffie, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:39:14 compute-0 podman[349324]: 2025-10-02 12:39:14.753968481 +0000 UTC m=+0.768597246 container start cdd46c31ba22b649bfe5e9efc8e98968167e05a03d687ac98c08fdd5bc747f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_diffie, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 12:39:14 compute-0 nova_compute[257802]: 2025-10-02 12:39:14.856 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:39:14 compute-0 nova_compute[257802]: 2025-10-02 12:39:14.858 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3859MB free_disk=20.83548355102539GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:39:14 compute-0 nova_compute[257802]: 2025-10-02 12:39:14.858 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:39:14 compute-0 nova_compute[257802]: 2025-10-02 12:39:14.858 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:39:14 compute-0 nova_compute[257802]: 2025-10-02 12:39:14.967 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 17766045-13fc-4377-848f-6815e8a474d5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:39:14 compute-0 nova_compute[257802]: 2025-10-02 12:39:14.968 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance c03d6d93-3bfc-4356-bdea-f62670b73a91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:39:14 compute-0 nova_compute[257802]: 2025-10-02 12:39:14.968 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:39:14 compute-0 nova_compute[257802]: 2025-10-02 12:39:14.968 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:39:15 compute-0 nova_compute[257802]: 2025-10-02 12:39:15.044 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:39:15 compute-0 nova_compute[257802]: 2025-10-02 12:39:15.108 2 DEBUG nova.compute.manager [req-0bb8c9d4-4f3b-4031-b465-62827591243c req-ca9e6c0d-f201-4d40-a135-b5705609920d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Received event network-changed-85648688-e368-42fd-86c6-d892c37f8c7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:39:15 compute-0 nova_compute[257802]: 2025-10-02 12:39:15.110 2 DEBUG nova.compute.manager [req-0bb8c9d4-4f3b-4031-b465-62827591243c req-ca9e6c0d-f201-4d40-a135-b5705609920d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Refreshing instance network info cache due to event network-changed-85648688-e368-42fd-86c6-d892c37f8c7b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:39:15 compute-0 nova_compute[257802]: 2025-10-02 12:39:15.110 2 DEBUG oslo_concurrency.lockutils [req-0bb8c9d4-4f3b-4031-b465-62827591243c req-ca9e6c0d-f201-4d40-a135-b5705609920d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-c03d6d93-3bfc-4356-bdea-f62670b73a91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:39:15 compute-0 nova_compute[257802]: 2025-10-02 12:39:15.111 2 DEBUG oslo_concurrency.lockutils [req-0bb8c9d4-4f3b-4031-b465-62827591243c req-ca9e6c0d-f201-4d40-a135-b5705609920d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-c03d6d93-3bfc-4356-bdea-f62670b73a91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:39:15 compute-0 nova_compute[257802]: 2025-10-02 12:39:15.111 2 DEBUG nova.network.neutron [req-0bb8c9d4-4f3b-4031-b465-62827591243c req-ca9e6c0d-f201-4d40-a135-b5705609920d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Refreshing network info cache for port 85648688-e368-42fd-86c6-d892c37f8c7b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:39:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1930165027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:15 compute-0 ceph-mon[73607]: pgmap v2356: 305 pgs: 305 active+clean; 386 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 811 KiB/s rd, 2.4 MiB/s wr, 65 op/s
Oct 02 12:39:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/39691553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:15 compute-0 podman[349324]: 2025-10-02 12:39:15.50680489 +0000 UTC m=+1.521433655 container attach cdd46c31ba22b649bfe5e9efc8e98968167e05a03d687ac98c08fdd5bc747f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 12:39:15 compute-0 ecstatic_diffie[349360]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:39:15 compute-0 ecstatic_diffie[349360]: --> relative data size: 1.0
Oct 02 12:39:15 compute-0 ecstatic_diffie[349360]: --> All data devices are unavailable
Oct 02 12:39:15 compute-0 systemd[1]: libpod-cdd46c31ba22b649bfe5e9efc8e98968167e05a03d687ac98c08fdd5bc747f3c.scope: Deactivated successfully.
Oct 02 12:39:15 compute-0 podman[349324]: 2025-10-02 12:39:15.56405996 +0000 UTC m=+1.578688715 container died cdd46c31ba22b649bfe5e9efc8e98968167e05a03d687ac98c08fdd5bc747f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_diffie, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 12:39:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:39:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:15.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:39:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2357: 305 pgs: 305 active+clean; 389 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.4 MiB/s wr, 90 op/s
Oct 02 12:39:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:39:16 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/622579073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:16 compute-0 nova_compute[257802]: 2025-10-02 12:39:16.149 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:39:16 compute-0 nova_compute[257802]: 2025-10-02 12:39:16.165 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:39:16 compute-0 nova_compute[257802]: 2025-10-02 12:39:16.191 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:39:16 compute-0 nova_compute[257802]: 2025-10-02 12:39:16.220 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:39:16 compute-0 nova_compute[257802]: 2025-10-02 12:39:16.221 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.362s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:39:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:16.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e339 e339: 3 total, 3 up, 3 in
Oct 02 12:39:16 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e339: 3 total, 3 up, 3 in
Oct 02 12:39:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-393317a649747952c4980f3d0c3cb800eabda4b72c26afc296ce24d3cb9a3bd1-merged.mount: Deactivated successfully.
Oct 02 12:39:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/622579073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:17 compute-0 nova_compute[257802]: 2025-10-02 12:39:17.084 2 DEBUG nova.network.neutron [req-0bb8c9d4-4f3b-4031-b465-62827591243c req-ca9e6c0d-f201-4d40-a135-b5705609920d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Updated VIF entry in instance network info cache for port 85648688-e368-42fd-86c6-d892c37f8c7b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:39:17 compute-0 nova_compute[257802]: 2025-10-02 12:39:17.085 2 DEBUG nova.network.neutron [req-0bb8c9d4-4f3b-4031-b465-62827591243c req-ca9e6c0d-f201-4d40-a135-b5705609920d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Updating instance_info_cache with network_info: [{"id": "85648688-e368-42fd-86c6-d892c37f8c7b", "address": "fa:16:3e:dc:25:6b", "network": {"id": "9d35d228-f367-4a7c-abd0-4c3ae2b08283", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-255695143-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1294b2ea04b34f7189fde66e2afa2c56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85648688-e3", "ovs_interfaceid": "85648688-e368-42fd-86c6-d892c37f8c7b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:39:17 compute-0 nova_compute[257802]: 2025-10-02 12:39:17.345 2 DEBUG oslo_concurrency.lockutils [req-0bb8c9d4-4f3b-4031-b465-62827591243c req-ca9e6c0d-f201-4d40-a135-b5705609920d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-c03d6d93-3bfc-4356-bdea-f62670b73a91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:39:17 compute-0 podman[349324]: 2025-10-02 12:39:17.367543186 +0000 UTC m=+3.382171941 container remove cdd46c31ba22b649bfe5e9efc8e98968167e05a03d687ac98c08fdd5bc747f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:39:17 compute-0 sudo[349199]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:17 compute-0 systemd[1]: libpod-conmon-cdd46c31ba22b649bfe5e9efc8e98968167e05a03d687ac98c08fdd5bc747f3c.scope: Deactivated successfully.
Oct 02 12:39:17 compute-0 sudo[349415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:39:17 compute-0 sudo[349415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:17 compute-0 sudo[349415]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:17 compute-0 sudo[349440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:39:17 compute-0 sudo[349440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:17 compute-0 sudo[349440]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:17 compute-0 sudo[349465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:39:17 compute-0 sudo[349465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:17 compute-0 sudo[349465]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:17 compute-0 sudo[349490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:39:17 compute-0 sudo[349490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:17.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2359: 305 pgs: 305 active+clean; 395 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.4 MiB/s wr, 177 op/s
Oct 02 12:39:18 compute-0 podman[349554]: 2025-10-02 12:39:17.971319957 +0000 UTC m=+0.021438946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:39:18 compute-0 podman[349554]: 2025-10-02 12:39:18.153358681 +0000 UTC m=+0.203477650 container create d1686432174a716d17f65f8cfb3ffa79f5a1e9f882bf6572b3a918694fdd2239 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Oct 02 12:39:18 compute-0 ceph-mon[73607]: pgmap v2357: 305 pgs: 305 active+clean; 389 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.4 MiB/s wr, 90 op/s
Oct 02 12:39:18 compute-0 ceph-mon[73607]: osdmap e339: 3 total, 3 up, 3 in
Oct 02 12:39:18 compute-0 systemd[1]: Started libpod-conmon-d1686432174a716d17f65f8cfb3ffa79f5a1e9f882bf6572b3a918694fdd2239.scope.
Oct 02 12:39:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:39:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:39:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:18.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:39:18 compute-0 nova_compute[257802]: 2025-10-02 12:39:18.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:18 compute-0 podman[349554]: 2025-10-02 12:39:18.764670127 +0000 UTC m=+0.814789116 container init d1686432174a716d17f65f8cfb3ffa79f5a1e9f882bf6572b3a918694fdd2239 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 12:39:18 compute-0 podman[349554]: 2025-10-02 12:39:18.771595506 +0000 UTC m=+0.821714475 container start d1686432174a716d17f65f8cfb3ffa79f5a1e9f882bf6572b3a918694fdd2239 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_turing, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:39:18 compute-0 reverent_turing[349570]: 167 167
Oct 02 12:39:18 compute-0 systemd[1]: libpod-d1686432174a716d17f65f8cfb3ffa79f5a1e9f882bf6572b3a918694fdd2239.scope: Deactivated successfully.
Oct 02 12:39:18 compute-0 podman[349554]: 2025-10-02 12:39:18.960968911 +0000 UTC m=+1.011087910 container attach d1686432174a716d17f65f8cfb3ffa79f5a1e9f882bf6572b3a918694fdd2239 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_turing, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 12:39:18 compute-0 podman[349554]: 2025-10-02 12:39:18.961277338 +0000 UTC m=+1.011396307 container died d1686432174a716d17f65f8cfb3ffa79f5a1e9f882bf6572b3a918694fdd2239 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_turing, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:39:18 compute-0 nova_compute[257802]: 2025-10-02 12:39:18.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:19.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:19 compute-0 ceph-mon[73607]: pgmap v2359: 305 pgs: 305 active+clean; 395 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.4 MiB/s wr, 177 op/s
Oct 02 12:39:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-88dd8d6f69d91b7de939ac7d7dfa92f05540ae944bedc4320c0bec92e3c02a22-merged.mount: Deactivated successfully.
Oct 02 12:39:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2360: 305 pgs: 305 active+clean; 398 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 511 KiB/s wr, 190 op/s
Oct 02 12:39:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:39:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:20.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:39:20 compute-0 podman[349554]: 2025-10-02 12:39:20.533238508 +0000 UTC m=+2.583357477 container remove d1686432174a716d17f65f8cfb3ffa79f5a1e9f882bf6572b3a918694fdd2239 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_turing, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 12:39:20 compute-0 systemd[1]: libpod-conmon-d1686432174a716d17f65f8cfb3ffa79f5a1e9f882bf6572b3a918694fdd2239.scope: Deactivated successfully.
Oct 02 12:39:20 compute-0 podman[349595]: 2025-10-02 12:39:20.693351755 +0000 UTC m=+0.022695456 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:39:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:39:20 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3085279549' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:39:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:39:21 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3085279549' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:39:21 compute-0 podman[349595]: 2025-10-02 12:39:21.111520937 +0000 UTC m=+0.440864608 container create 031f1dbafcdfc46b711a435883fa2a7281e66f40669eb8366b5973b1fb457999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_nobel, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 12:39:21 compute-0 systemd[1]: Started libpod-conmon-031f1dbafcdfc46b711a435883fa2a7281e66f40669eb8366b5973b1fb457999.scope.
Oct 02 12:39:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:39:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe817fe38c5df124080b09b26cd58f19823e24d1c28131732fdcfad858b0acc6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:39:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe817fe38c5df124080b09b26cd58f19823e24d1c28131732fdcfad858b0acc6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:39:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe817fe38c5df124080b09b26cd58f19823e24d1c28131732fdcfad858b0acc6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:39:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe817fe38c5df124080b09b26cd58f19823e24d1c28131732fdcfad858b0acc6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:39:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:21.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2361: 305 pgs: 305 active+clean; 398 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 508 KiB/s wr, 173 op/s
Oct 02 12:39:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:22 compute-0 podman[349595]: 2025-10-02 12:39:22.345742263 +0000 UTC m=+1.675085964 container init 031f1dbafcdfc46b711a435883fa2a7281e66f40669eb8366b5973b1fb457999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_nobel, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 12:39:22 compute-0 podman[349595]: 2025-10-02 12:39:22.353995345 +0000 UTC m=+1.683339016 container start 031f1dbafcdfc46b711a435883fa2a7281e66f40669eb8366b5973b1fb457999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_nobel, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 12:39:22 compute-0 ceph-mon[73607]: pgmap v2360: 305 pgs: 305 active+clean; 398 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 511 KiB/s wr, 190 op/s
Oct 02 12:39:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:22.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:22 compute-0 podman[349595]: 2025-10-02 12:39:22.768025845 +0000 UTC m=+2.097369516 container attach 031f1dbafcdfc46b711a435883fa2a7281e66f40669eb8366b5973b1fb457999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:39:23 compute-0 youthful_nobel[349611]: {
Oct 02 12:39:23 compute-0 youthful_nobel[349611]:     "1": [
Oct 02 12:39:23 compute-0 youthful_nobel[349611]:         {
Oct 02 12:39:23 compute-0 youthful_nobel[349611]:             "devices": [
Oct 02 12:39:23 compute-0 youthful_nobel[349611]:                 "/dev/loop3"
Oct 02 12:39:23 compute-0 youthful_nobel[349611]:             ],
Oct 02 12:39:23 compute-0 youthful_nobel[349611]:             "lv_name": "ceph_lv0",
Oct 02 12:39:23 compute-0 youthful_nobel[349611]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:39:23 compute-0 youthful_nobel[349611]:             "lv_size": "7511998464",
Oct 02 12:39:23 compute-0 youthful_nobel[349611]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:39:23 compute-0 youthful_nobel[349611]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:39:23 compute-0 youthful_nobel[349611]:             "name": "ceph_lv0",
Oct 02 12:39:23 compute-0 youthful_nobel[349611]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:39:23 compute-0 youthful_nobel[349611]:             "tags": {
Oct 02 12:39:23 compute-0 youthful_nobel[349611]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:39:23 compute-0 youthful_nobel[349611]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:39:23 compute-0 youthful_nobel[349611]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:39:23 compute-0 youthful_nobel[349611]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:39:23 compute-0 youthful_nobel[349611]:                 "ceph.cluster_name": "ceph",
Oct 02 12:39:23 compute-0 youthful_nobel[349611]:                 "ceph.crush_device_class": "",
Oct 02 12:39:23 compute-0 youthful_nobel[349611]:                 "ceph.encrypted": "0",
Oct 02 12:39:23 compute-0 youthful_nobel[349611]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:39:23 compute-0 youthful_nobel[349611]:                 "ceph.osd_id": "1",
Oct 02 12:39:23 compute-0 youthful_nobel[349611]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:39:23 compute-0 youthful_nobel[349611]:                 "ceph.type": "block",
Oct 02 12:39:23 compute-0 youthful_nobel[349611]:                 "ceph.vdo": "0"
Oct 02 12:39:23 compute-0 youthful_nobel[349611]:             },
Oct 02 12:39:23 compute-0 youthful_nobel[349611]:             "type": "block",
Oct 02 12:39:23 compute-0 youthful_nobel[349611]:             "vg_name": "ceph_vg0"
Oct 02 12:39:23 compute-0 youthful_nobel[349611]:         }
Oct 02 12:39:23 compute-0 youthful_nobel[349611]:     ]
Oct 02 12:39:23 compute-0 youthful_nobel[349611]: }
Oct 02 12:39:23 compute-0 systemd[1]: libpod-031f1dbafcdfc46b711a435883fa2a7281e66f40669eb8366b5973b1fb457999.scope: Deactivated successfully.
Oct 02 12:39:23 compute-0 podman[349621]: 2025-10-02 12:39:23.330062455 +0000 UTC m=+0.026047228 container died 031f1dbafcdfc46b711a435883fa2a7281e66f40669eb8366b5973b1fb457999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_nobel, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Oct 02 12:39:23 compute-0 nova_compute[257802]: 2025-10-02 12:39:23.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:23.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2362: 305 pgs: 305 active+clean; 403 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 299 KiB/s wr, 150 op/s
Oct 02 12:39:23 compute-0 nova_compute[257802]: 2025-10-02 12:39:23.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:23.880 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:39:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:23.881 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:39:23 compute-0 nova_compute[257802]: 2025-10-02 12:39:23.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3085279549' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:39:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3085279549' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:39:24 compute-0 ceph-mon[73607]: pgmap v2361: 305 pgs: 305 active+clean; 398 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 508 KiB/s wr, 173 op/s
Oct 02 12:39:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:24.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe817fe38c5df124080b09b26cd58f19823e24d1c28131732fdcfad858b0acc6-merged.mount: Deactivated successfully.
Oct 02 12:39:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:25.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2363: 305 pgs: 305 active+clean; 404 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 545 KiB/s wr, 132 op/s
Oct 02 12:39:26 compute-0 ceph-mon[73607]: pgmap v2362: 305 pgs: 305 active+clean; 403 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 299 KiB/s wr, 150 op/s
Oct 02 12:39:26 compute-0 podman[349621]: 2025-10-02 12:39:26.066047085 +0000 UTC m=+2.762031838 container remove 031f1dbafcdfc46b711a435883fa2a7281e66f40669eb8366b5973b1fb457999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 12:39:26 compute-0 systemd[1]: libpod-conmon-031f1dbafcdfc46b711a435883fa2a7281e66f40669eb8366b5973b1fb457999.scope: Deactivated successfully.
Oct 02 12:39:26 compute-0 sudo[349490]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:26 compute-0 sudo[349665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:39:26 compute-0 sudo[349665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:26 compute-0 sudo[349665]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:26 compute-0 podman[349638]: 2025-10-02 12:39:26.21460662 +0000 UTC m=+2.001108211 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=iscsid)
Oct 02 12:39:26 compute-0 podman[349636]: 2025-10-02 12:39:26.232798105 +0000 UTC m=+2.022835742 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:39:26 compute-0 podman[349637]: 2025-10-02 12:39:26.236764252 +0000 UTC m=+2.026829700 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 12:39:26 compute-0 sudo[349710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:39:26 compute-0 sudo[349710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:26 compute-0 sudo[349710]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:26 compute-0 sudo[349742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:39:26 compute-0 sudo[349742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:26 compute-0 sudo[349742]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:26 compute-0 sudo[349767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:39:26 compute-0 sudo[349767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:39:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:26.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:39:26 compute-0 podman[349834]: 2025-10-02 12:39:26.648024394 +0000 UTC m=+0.021236080 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:39:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:26.883 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:39:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:26.957 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:39:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:26.958 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:39:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:39:26.958 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:39:27 compute-0 podman[349834]: 2025-10-02 12:39:27.023738166 +0000 UTC m=+0.396949832 container create b1bf4fc1849e3914169c872766c23c308b3711400cfe0c2f05568e01fea78039 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 12:39:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e339 do_prune osdmap full prune enabled
Oct 02 12:39:27 compute-0 systemd[1]: Started libpod-conmon-b1bf4fc1849e3914169c872766c23c308b3711400cfe0c2f05568e01fea78039.scope.
Oct 02 12:39:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:39:27 compute-0 podman[349834]: 2025-10-02 12:39:27.304551497 +0000 UTC m=+0.677763183 container init b1bf4fc1849e3914169c872766c23c308b3711400cfe0c2f05568e01fea78039 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 12:39:27 compute-0 podman[349834]: 2025-10-02 12:39:27.312164683 +0000 UTC m=+0.685376339 container start b1bf4fc1849e3914169c872766c23c308b3711400cfe0c2f05568e01fea78039 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:39:27 compute-0 systemd[1]: libpod-b1bf4fc1849e3914169c872766c23c308b3711400cfe0c2f05568e01fea78039.scope: Deactivated successfully.
Oct 02 12:39:27 compute-0 strange_hawking[349850]: 167 167
Oct 02 12:39:27 compute-0 conmon[349850]: conmon b1bf4fc1849e3914169c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b1bf4fc1849e3914169c872766c23c308b3711400cfe0c2f05568e01fea78039.scope/container/memory.events
Oct 02 12:39:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e340 e340: 3 total, 3 up, 3 in
Oct 02 12:39:27 compute-0 podman[349834]: 2025-10-02 12:39:27.405401464 +0000 UTC m=+0.778613150 container attach b1bf4fc1849e3914169c872766c23c308b3711400cfe0c2f05568e01fea78039 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hawking, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 12:39:27 compute-0 podman[349834]: 2025-10-02 12:39:27.405783223 +0000 UTC m=+0.778994889 container died b1bf4fc1849e3914169c872766c23c308b3711400cfe0c2f05568e01fea78039 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hawking, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:39:27 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e340: 3 total, 3 up, 3 in
Oct 02 12:39:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:27.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2365: 305 pgs: 305 active+clean; 414 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 161 KiB/s rd, 1.2 MiB/s wr, 78 op/s
Oct 02 12:39:27 compute-0 ceph-mon[73607]: pgmap v2363: 305 pgs: 305 active+clean; 404 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 545 KiB/s wr, 132 op/s
Oct 02 12:39:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a5d2f3cf5b221ab99bedf5c58d3761b7ce7110cda39ecd60e32925a721f44bb-merged.mount: Deactivated successfully.
Oct 02 12:39:28 compute-0 nova_compute[257802]: 2025-10-02 12:39:28.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:39:28 compute-0 nova_compute[257802]: 2025-10-02 12:39:28.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:39:28 compute-0 nova_compute[257802]: 2025-10-02 12:39:28.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:39:28 compute-0 podman[349834]: 2025-10-02 12:39:28.209439736 +0000 UTC m=+1.582651402 container remove b1bf4fc1849e3914169c872766c23c308b3711400cfe0c2f05568e01fea78039 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hawking, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:39:28 compute-0 systemd[1]: libpod-conmon-b1bf4fc1849e3914169c872766c23c308b3711400cfe0c2f05568e01fea78039.scope: Deactivated successfully.
Oct 02 12:39:28 compute-0 podman[349874]: 2025-10-02 12:39:28.384744225 +0000 UTC m=+0.025362082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:39:28 compute-0 podman[349874]: 2025-10-02 12:39:28.507747454 +0000 UTC m=+0.148365291 container create eb4a00f492ad0139da1cc97a175b6df426368704a5edac9688697fa5afe5bafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lehmann, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:39:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:28.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:28 compute-0 systemd[1]: Started libpod-conmon-eb4a00f492ad0139da1cc97a175b6df426368704a5edac9688697fa5afe5bafa.scope.
Oct 02 12:39:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:39:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/326015d0fe77d5f72ef7fd94331bfa9de5514e42d4b0c0e6ec511d9c591be709/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:39:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/326015d0fe77d5f72ef7fd94331bfa9de5514e42d4b0c0e6ec511d9c591be709/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:39:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/326015d0fe77d5f72ef7fd94331bfa9de5514e42d4b0c0e6ec511d9c591be709/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:39:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/326015d0fe77d5f72ef7fd94331bfa9de5514e42d4b0c0e6ec511d9c591be709/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:39:28 compute-0 podman[349874]: 2025-10-02 12:39:28.707793429 +0000 UTC m=+0.348411286 container init eb4a00f492ad0139da1cc97a175b6df426368704a5edac9688697fa5afe5bafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:39:28 compute-0 podman[349874]: 2025-10-02 12:39:28.71724872 +0000 UTC m=+0.357866557 container start eb4a00f492ad0139da1cc97a175b6df426368704a5edac9688697fa5afe5bafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lehmann, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:39:28 compute-0 nova_compute[257802]: 2025-10-02 12:39:28.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:28 compute-0 podman[349874]: 2025-10-02 12:39:28.759408392 +0000 UTC m=+0.400026249 container attach eb4a00f492ad0139da1cc97a175b6df426368704a5edac9688697fa5afe5bafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lehmann, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:39:28 compute-0 nova_compute[257802]: 2025-10-02 12:39:28.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:29 compute-0 ceph-mon[73607]: osdmap e340: 3 total, 3 up, 3 in
Oct 02 12:39:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1677275262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:39:29 compute-0 ceph-mon[73607]: pgmap v2365: 305 pgs: 305 active+clean; 414 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 161 KiB/s rd, 1.2 MiB/s wr, 78 op/s
Oct 02 12:39:29 compute-0 quizzical_lehmann[349891]: {
Oct 02 12:39:29 compute-0 quizzical_lehmann[349891]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:39:29 compute-0 quizzical_lehmann[349891]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:39:29 compute-0 quizzical_lehmann[349891]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:39:29 compute-0 quizzical_lehmann[349891]:         "osd_id": 1,
Oct 02 12:39:29 compute-0 quizzical_lehmann[349891]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:39:29 compute-0 quizzical_lehmann[349891]:         "type": "bluestore"
Oct 02 12:39:29 compute-0 quizzical_lehmann[349891]:     }
Oct 02 12:39:29 compute-0 quizzical_lehmann[349891]: }
Oct 02 12:39:29 compute-0 systemd[1]: libpod-eb4a00f492ad0139da1cc97a175b6df426368704a5edac9688697fa5afe5bafa.scope: Deactivated successfully.
Oct 02 12:39:29 compute-0 podman[349874]: 2025-10-02 12:39:29.547823291 +0000 UTC m=+1.188441138 container died eb4a00f492ad0139da1cc97a175b6df426368704a5edac9688697fa5afe5bafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lehmann, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 12:39:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:29.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2366: 305 pgs: 305 active+clean; 380 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 135 KiB/s rd, 2.3 MiB/s wr, 81 op/s
Oct 02 12:39:29 compute-0 sudo[349924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:39:30 compute-0 sudo[349924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:30 compute-0 sudo[349924]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:30 compute-0 sudo[349949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:39:30 compute-0 sudo[349949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:30 compute-0 sudo[349949]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-326015d0fe77d5f72ef7fd94331bfa9de5514e42d4b0c0e6ec511d9c591be709-merged.mount: Deactivated successfully.
Oct 02 12:39:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:30.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:30 compute-0 ovn_controller[148183]: 2025-10-02T12:39:30Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dc:25:6b 10.100.0.7
Oct 02 12:39:30 compute-0 ovn_controller[148183]: 2025-10-02T12:39:30Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dc:25:6b 10.100.0.7
Oct 02 12:39:30 compute-0 podman[349874]: 2025-10-02 12:39:30.735971111 +0000 UTC m=+2.376588958 container remove eb4a00f492ad0139da1cc97a175b6df426368704a5edac9688697fa5afe5bafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lehmann, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:39:30 compute-0 systemd[1]: libpod-conmon-eb4a00f492ad0139da1cc97a175b6df426368704a5edac9688697fa5afe5bafa.scope: Deactivated successfully.
Oct 02 12:39:30 compute-0 sudo[349767]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:39:30 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:39:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:39:30 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:39:30 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev d92221b0-5604-47a8-b21a-58883ef2169d does not exist
Oct 02 12:39:30 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 3892abce-0992-4894-8efe-bcd9ee3c3e17 does not exist
Oct 02 12:39:30 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev acfe165f-c727-41c2-aa77-ad58da51d0c8 does not exist
Oct 02 12:39:31 compute-0 sudo[349977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:39:31 compute-0 sudo[349977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:31 compute-0 sudo[349977]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:31 compute-0 sudo[350002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:39:31 compute-0 sudo[350002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:31 compute-0 sudo[350002]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:31 compute-0 ceph-mon[73607]: pgmap v2366: 305 pgs: 305 active+clean; 380 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 135 KiB/s rd, 2.3 MiB/s wr, 81 op/s
Oct 02 12:39:31 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:39:31 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:39:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:39:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:31.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:39:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2367: 305 pgs: 305 active+clean; 347 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 255 KiB/s rd, 2.6 MiB/s wr, 94 op/s
Oct 02 12:39:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:32.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:32 compute-0 podman[350028]: 2025-10-02 12:39:32.954706557 +0000 UTC m=+0.092240008 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:39:33 compute-0 nova_compute[257802]: 2025-10-02 12:39:33.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:33 compute-0 ceph-mon[73607]: pgmap v2367: 305 pgs: 305 active+clean; 347 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 255 KiB/s rd, 2.6 MiB/s wr, 94 op/s
Oct 02 12:39:33 compute-0 nova_compute[257802]: 2025-10-02 12:39:33.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:33.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2368: 305 pgs: 305 active+clean; 373 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 290 KiB/s rd, 3.8 MiB/s wr, 105 op/s
Oct 02 12:39:34 compute-0 nova_compute[257802]: 2025-10-02 12:39:34.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:39:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:34.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:39:34 compute-0 ceph-mon[73607]: pgmap v2368: 305 pgs: 305 active+clean; 373 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 290 KiB/s rd, 3.8 MiB/s wr, 105 op/s
Oct 02 12:39:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:35.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2369: 305 pgs: 305 active+clean; 379 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 289 KiB/s rd, 3.5 MiB/s wr, 107 op/s
Oct 02 12:39:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2582492956' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:39:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/457775009' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:39:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:36.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:37 compute-0 ceph-mon[73607]: pgmap v2369: 305 pgs: 305 active+clean; 379 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 289 KiB/s rd, 3.5 MiB/s wr, 107 op/s
Oct 02 12:39:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:39:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:37.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:39:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2370: 305 pgs: 305 active+clean; 405 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 299 KiB/s rd, 3.5 MiB/s wr, 120 op/s
Oct 02 12:39:37 compute-0 nova_compute[257802]: 2025-10-02 12:39:37.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:38.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:38 compute-0 ceph-mon[73607]: pgmap v2370: 305 pgs: 305 active+clean; 405 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 299 KiB/s rd, 3.5 MiB/s wr, 120 op/s
Oct 02 12:39:38 compute-0 nova_compute[257802]: 2025-10-02 12:39:38.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:39 compute-0 nova_compute[257802]: 2025-10-02 12:39:39.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:39:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:39.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:39:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2371: 305 pgs: 305 active+clean; 405 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 278 KiB/s rd, 3.1 MiB/s wr, 119 op/s
Oct 02 12:39:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:40.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:41 compute-0 ceph-mon[73607]: pgmap v2371: 305 pgs: 305 active+clean; 405 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 278 KiB/s rd, 3.1 MiB/s wr, 119 op/s
Oct 02 12:39:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:41.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2372: 305 pgs: 305 active+clean; 370 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 120 op/s
Oct 02 12:39:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:39:42
Oct 02 12:39:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:39:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:39:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'cephfs.cephfs.data', 'images', '.rgw.root', 'default.rgw.control', 'backups', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', '.mgr']
Oct 02 12:39:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:39:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:42.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:39:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:39:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:39:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:39:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:39:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:39:43 compute-0 ceph-mon[73607]: pgmap v2372: 305 pgs: 305 active+clean; 370 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 120 op/s
Oct 02 12:39:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:39:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:39:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:39:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:39:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:39:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:39:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:43.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:39:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2373: 305 pgs: 305 active+clean; 343 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 156 op/s
Oct 02 12:39:43 compute-0 nova_compute[257802]: 2025-10-02 12:39:43.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:44 compute-0 nova_compute[257802]: 2025-10-02 12:39:44.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:39:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:39:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:39:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:39:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:39:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:44.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:45 compute-0 ceph-mon[73607]: pgmap v2373: 305 pgs: 305 active+clean; 343 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 156 op/s
Oct 02 12:39:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:45.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2374: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 873 KiB/s wr, 137 op/s
Oct 02 12:39:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:46.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/327545541' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:39:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:39:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:47.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:39:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2375: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 866 KiB/s wr, 130 op/s
Oct 02 12:39:48 compute-0 ceph-mon[73607]: pgmap v2374: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 873 KiB/s wr, 137 op/s
Oct 02 12:39:48 compute-0 nova_compute[257802]: 2025-10-02 12:39:48.383 2 DEBUG oslo_concurrency.lockutils [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Acquiring lock "c03d6d93-3bfc-4356-bdea-f62670b73a91" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:39:48 compute-0 nova_compute[257802]: 2025-10-02 12:39:48.383 2 DEBUG oslo_concurrency.lockutils [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Lock "c03d6d93-3bfc-4356-bdea-f62670b73a91" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:39:48 compute-0 nova_compute[257802]: 2025-10-02 12:39:48.446 2 DEBUG nova.objects.instance [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Lazy-loading 'flavor' on Instance uuid c03d6d93-3bfc-4356-bdea-f62670b73a91 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:39:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:48.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:48 compute-0 nova_compute[257802]: 2025-10-02 12:39:48.575 2 DEBUG oslo_concurrency.lockutils [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Lock "c03d6d93-3bfc-4356-bdea-f62670b73a91" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:39:48 compute-0 nova_compute[257802]: 2025-10-02 12:39:48.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:49 compute-0 nova_compute[257802]: 2025-10-02 12:39:49.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:49 compute-0 nova_compute[257802]: 2025-10-02 12:39:49.089 2 DEBUG oslo_concurrency.lockutils [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Acquiring lock "c03d6d93-3bfc-4356-bdea-f62670b73a91" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:39:49 compute-0 nova_compute[257802]: 2025-10-02 12:39:49.090 2 DEBUG oslo_concurrency.lockutils [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Lock "c03d6d93-3bfc-4356-bdea-f62670b73a91" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:39:49 compute-0 nova_compute[257802]: 2025-10-02 12:39:49.090 2 INFO nova.compute.manager [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Attaching volume 6831a10e-a853-4e3f-956f-b5ceafcd071e to /dev/vdb
Oct 02 12:39:49 compute-0 nova_compute[257802]: 2025-10-02 12:39:49.247 2 DEBUG os_brick.utils [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:39:49 compute-0 nova_compute[257802]: 2025-10-02 12:39:49.249 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:39:49 compute-0 nova_compute[257802]: 2025-10-02 12:39:49.264 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:39:49 compute-0 nova_compute[257802]: 2025-10-02 12:39:49.265 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[0ec6ebbc-57aa-4e73-a95b-a95969afca0a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:49 compute-0 nova_compute[257802]: 2025-10-02 12:39:49.266 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:39:49 compute-0 nova_compute[257802]: 2025-10-02 12:39:49.281 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:39:49 compute-0 nova_compute[257802]: 2025-10-02 12:39:49.282 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[548c7b28-c77c-4ee5-a9aa-5d7525d7e0ec]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:49 compute-0 nova_compute[257802]: 2025-10-02 12:39:49.283 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:39:49 compute-0 nova_compute[257802]: 2025-10-02 12:39:49.298 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:39:49 compute-0 nova_compute[257802]: 2025-10-02 12:39:49.298 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[c9623eac-2257-4378-aa64-ba18b6ed73ef]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:49 compute-0 nova_compute[257802]: 2025-10-02 12:39:49.300 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[54e45abd-fa8a-4106-98df-d6c758825060]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:49 compute-0 nova_compute[257802]: 2025-10-02 12:39:49.300 2 DEBUG oslo_concurrency.processutils [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:39:49 compute-0 ceph-mon[73607]: pgmap v2375: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 866 KiB/s wr, 130 op/s
Oct 02 12:39:49 compute-0 nova_compute[257802]: 2025-10-02 12:39:49.346 2 DEBUG oslo_concurrency.processutils [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] CMD "nvme version" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:39:49 compute-0 nova_compute[257802]: 2025-10-02 12:39:49.349 2 DEBUG os_brick.initiator.connectors.lightos [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:39:49 compute-0 nova_compute[257802]: 2025-10-02 12:39:49.349 2 DEBUG os_brick.initiator.connectors.lightos [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:39:49 compute-0 nova_compute[257802]: 2025-10-02 12:39:49.349 2 DEBUG os_brick.initiator.connectors.lightos [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:39:49 compute-0 nova_compute[257802]: 2025-10-02 12:39:49.350 2 DEBUG os_brick.utils [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] <== get_connector_properties: return (102ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:39:49 compute-0 nova_compute[257802]: 2025-10-02 12:39:49.350 2 DEBUG nova.virt.block_device [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Updating existing volume attachment record: 32434325-3bb9-4635-b3cf-a531f5083862 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:39:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:49.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2376: 305 pgs: 305 active+clean; 314 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 333 KiB/s wr, 111 op/s
Oct 02 12:39:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:39:50 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2364795325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:39:50 compute-0 sudo[350071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:39:50 compute-0 sudo[350071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:50 compute-0 sudo[350071]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:50 compute-0 sudo[350096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:39:50 compute-0 sudo[350096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:50 compute-0 sudo[350096]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.453 2 DEBUG os_brick.encryptors [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Using volume encryption metadata '{'encryption_key_id': 'aa21fc07-13f4-4816-a07d-936433201da9', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-6831a10e-a853-4e3f-956f-b5ceafcd071e', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '6831a10e-a853-4e3f-956f-b5ceafcd071e', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'c03d6d93-3bfc-4356-bdea-f62670b73a91', 'attached_at': '', 'detached_at': '', 'volume_id': '6831a10e-a853-4e3f-956f-b5ceafcd071e', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.461 2 DEBUG barbicanclient.client [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.482 2 DEBUG barbicanclient.v1.secrets [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/aa21fc07-13f4-4816-a07d-936433201da9 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.482 2 INFO barbicanclient.base [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Calculated Secrets uuid ref: secrets/aa21fc07-13f4-4816-a07d-936433201da9
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.503 2 DEBUG barbicanclient.client [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.504 2 INFO barbicanclient.base [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Calculated Secrets uuid ref: secrets/aa21fc07-13f4-4816-a07d-936433201da9
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.526 2 DEBUG barbicanclient.client [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.527 2 INFO barbicanclient.base [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Calculated Secrets uuid ref: secrets/aa21fc07-13f4-4816-a07d-936433201da9
Oct 02 12:39:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:50.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.553 2 DEBUG barbicanclient.client [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.554 2 INFO barbicanclient.base [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Calculated Secrets uuid ref: secrets/aa21fc07-13f4-4816-a07d-936433201da9
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.584 2 DEBUG barbicanclient.client [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.585 2 INFO barbicanclient.base [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Calculated Secrets uuid ref: secrets/aa21fc07-13f4-4816-a07d-936433201da9
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.605 2 DEBUG barbicanclient.client [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.606 2 INFO barbicanclient.base [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Calculated Secrets uuid ref: secrets/aa21fc07-13f4-4816-a07d-936433201da9
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.634 2 DEBUG barbicanclient.client [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.636 2 INFO barbicanclient.base [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Calculated Secrets uuid ref: secrets/aa21fc07-13f4-4816-a07d-936433201da9
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.663 2 DEBUG barbicanclient.client [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.664 2 INFO barbicanclient.base [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Calculated Secrets uuid ref: secrets/aa21fc07-13f4-4816-a07d-936433201da9
Oct 02 12:39:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2364795325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.681 2 DEBUG barbicanclient.client [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.682 2 INFO barbicanclient.base [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Calculated Secrets uuid ref: secrets/aa21fc07-13f4-4816-a07d-936433201da9
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.704 2 DEBUG barbicanclient.client [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.705 2 INFO barbicanclient.base [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Calculated Secrets uuid ref: secrets/aa21fc07-13f4-4816-a07d-936433201da9
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.742 2 DEBUG barbicanclient.client [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.743 2 INFO barbicanclient.base [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Calculated Secrets uuid ref: secrets/aa21fc07-13f4-4816-a07d-936433201da9
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.761 2 DEBUG barbicanclient.client [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.762 2 INFO barbicanclient.base [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Calculated Secrets uuid ref: secrets/aa21fc07-13f4-4816-a07d-936433201da9
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.782 2 DEBUG barbicanclient.client [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.782 2 INFO barbicanclient.base [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Calculated Secrets uuid ref: secrets/aa21fc07-13f4-4816-a07d-936433201da9
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.812 2 DEBUG barbicanclient.client [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.813 2 INFO barbicanclient.base [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Calculated Secrets uuid ref: secrets/aa21fc07-13f4-4816-a07d-936433201da9
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.833 2 DEBUG barbicanclient.client [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.834 2 INFO barbicanclient.base [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Calculated Secrets uuid ref: secrets/aa21fc07-13f4-4816-a07d-936433201da9
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.855 2 DEBUG barbicanclient.client [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.857 2 DEBUG nova.virt.libvirt.host [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct 02 12:39:50 compute-0 nova_compute[257802]:   <usage type="volume">
Oct 02 12:39:50 compute-0 nova_compute[257802]:     <volume>6831a10e-a853-4e3f-956f-b5ceafcd071e</volume>
Oct 02 12:39:50 compute-0 nova_compute[257802]:   </usage>
Oct 02 12:39:50 compute-0 nova_compute[257802]: </secret>
Oct 02 12:39:50 compute-0 nova_compute[257802]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Oct 02 12:39:50 compute-0 nova_compute[257802]: 2025-10-02 12:39:50.997 2 DEBUG nova.objects.instance [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Lazy-loading 'flavor' on Instance uuid c03d6d93-3bfc-4356-bdea-f62670b73a91 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:39:51 compute-0 nova_compute[257802]: 2025-10-02 12:39:51.104 2 DEBUG nova.virt.libvirt.driver [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Attempting to attach volume 6831a10e-a853-4e3f-956f-b5ceafcd071e with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 12:39:51 compute-0 nova_compute[257802]: 2025-10-02 12:39:51.110 2 DEBUG nova.virt.libvirt.guest [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 12:39:51 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:39:51 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-6831a10e-a853-4e3f-956f-b5ceafcd071e">
Oct 02 12:39:51 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:39:51 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:39:51 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:39:51 compute-0 nova_compute[257802]:   </source>
Oct 02 12:39:51 compute-0 nova_compute[257802]:   <auth username="openstack">
Oct 02 12:39:51 compute-0 nova_compute[257802]:     <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:39:51 compute-0 nova_compute[257802]:   </auth>
Oct 02 12:39:51 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:39:51 compute-0 nova_compute[257802]:   <serial>6831a10e-a853-4e3f-956f-b5ceafcd071e</serial>
Oct 02 12:39:51 compute-0 nova_compute[257802]:   <encryption format="luks">
Oct 02 12:39:51 compute-0 nova_compute[257802]:     <secret type="passphrase" uuid="a472058e-1314-4f00-89fc-abef846e9f6f"/>
Oct 02 12:39:51 compute-0 nova_compute[257802]:   </encryption>
Oct 02 12:39:51 compute-0 nova_compute[257802]: </disk>
Oct 02 12:39:51 compute-0 nova_compute[257802]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:39:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:39:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:51.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:39:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2377: 305 pgs: 305 active+clean; 307 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 120 op/s
Oct 02 12:39:51 compute-0 ceph-mon[73607]: pgmap v2376: 305 pgs: 305 active+clean; 314 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 333 KiB/s wr, 111 op/s
Oct 02 12:39:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:39:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:52.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:39:53 compute-0 ceph-mon[73607]: pgmap v2377: 305 pgs: 305 active+clean; 307 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 120 op/s
Oct 02 12:39:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3309376384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:39:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/390193139' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:39:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:39:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:53.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:39:53 compute-0 nova_compute[257802]: 2025-10-02 12:39:53.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2378: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 12:39:53 compute-0 nova_compute[257802]: 2025-10-02 12:39:53.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:53 compute-0 nova_compute[257802]: 2025-10-02 12:39:53.969 2 DEBUG nova.virt.libvirt.driver [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:39:53 compute-0 nova_compute[257802]: 2025-10-02 12:39:53.970 2 DEBUG nova.virt.libvirt.driver [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:39:53 compute-0 nova_compute[257802]: 2025-10-02 12:39:53.970 2 DEBUG nova.virt.libvirt.driver [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:39:53 compute-0 nova_compute[257802]: 2025-10-02 12:39:53.971 2 DEBUG nova.virt.libvirt.driver [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] No VIF found with MAC fa:16:3e:dc:25:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:39:54 compute-0 nova_compute[257802]: 2025-10-02 12:39:54.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:39:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:54.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:39:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:39:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:39:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005339195979899497 of space, bias 1.0, pg target 1.6017587939698492 quantized to 32 (current 32)
Oct 02 12:39:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:39:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002173955716080402 of space, bias 1.0, pg target 0.6500127591080402 quantized to 32 (current 32)
Oct 02 12:39:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:39:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:39:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:39:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 12:39:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:39:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 12:39:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:39:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:39:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:39:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027172174530057695 quantized to 32 (current 32)
Oct 02 12:39:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:39:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 12:39:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:39:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:39:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:39:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 12:39:54 compute-0 ceph-mon[73607]: pgmap v2378: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 12:39:54 compute-0 nova_compute[257802]: 2025-10-02 12:39:54.999 2 DEBUG oslo_concurrency.lockutils [None req-9318a960-85d6-418e-bf1f-770df5ec8b21 b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Lock "c03d6d93-3bfc-4356-bdea-f62670b73a91" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 5.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:39:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:39:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3433366664' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:39:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:39:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3433366664' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:39:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:55.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2379: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Oct 02 12:39:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1960292889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3433366664' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:39:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3433366664' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:39:56 compute-0 nova_compute[257802]: 2025-10-02 12:39:56.091 2 DEBUG oslo_concurrency.lockutils [None req-3bc04b5d-a263-4730-99a2-c706f718739a b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Acquiring lock "c03d6d93-3bfc-4356-bdea-f62670b73a91" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:39:56 compute-0 nova_compute[257802]: 2025-10-02 12:39:56.092 2 DEBUG oslo_concurrency.lockutils [None req-3bc04b5d-a263-4730-99a2-c706f718739a b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Lock "c03d6d93-3bfc-4356-bdea-f62670b73a91" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:39:56 compute-0 nova_compute[257802]: 2025-10-02 12:39:56.122 2 INFO nova.compute.manager [None req-3bc04b5d-a263-4730-99a2-c706f718739a b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Detaching volume 6831a10e-a853-4e3f-956f-b5ceafcd071e
Oct 02 12:39:56 compute-0 nova_compute[257802]: 2025-10-02 12:39:56.433 2 INFO nova.virt.block_device [None req-3bc04b5d-a263-4730-99a2-c706f718739a b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Attempting to driver detach volume 6831a10e-a853-4e3f-956f-b5ceafcd071e from mountpoint /dev/vdb
Oct 02 12:39:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:56.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:56 compute-0 nova_compute[257802]: 2025-10-02 12:39:56.631 2 DEBUG os_brick.encryptors [None req-3bc04b5d-a263-4730-99a2-c706f718739a b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Using volume encryption metadata '{'encryption_key_id': 'aa21fc07-13f4-4816-a07d-936433201da9', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-6831a10e-a853-4e3f-956f-b5ceafcd071e', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '6831a10e-a853-4e3f-956f-b5ceafcd071e', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'c03d6d93-3bfc-4356-bdea-f62670b73a91', 'attached_at': '', 'detached_at': '', 'volume_id': '6831a10e-a853-4e3f-956f-b5ceafcd071e', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Oct 02 12:39:56 compute-0 nova_compute[257802]: 2025-10-02 12:39:56.637 2 DEBUG nova.virt.libvirt.driver [None req-3bc04b5d-a263-4730-99a2-c706f718739a b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Attempting to detach device vdb from instance c03d6d93-3bfc-4356-bdea-f62670b73a91 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:39:56 compute-0 nova_compute[257802]: 2025-10-02 12:39:56.638 2 DEBUG nova.virt.libvirt.guest [None req-3bc04b5d-a263-4730-99a2-c706f718739a b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:39:56 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:39:56 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-6831a10e-a853-4e3f-956f-b5ceafcd071e">
Oct 02 12:39:56 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:39:56 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:39:56 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:39:56 compute-0 nova_compute[257802]:   </source>
Oct 02 12:39:56 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:39:56 compute-0 nova_compute[257802]:   <serial>6831a10e-a853-4e3f-956f-b5ceafcd071e</serial>
Oct 02 12:39:56 compute-0 nova_compute[257802]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:39:56 compute-0 nova_compute[257802]:   <encryption format="luks">
Oct 02 12:39:56 compute-0 nova_compute[257802]:     <secret type="passphrase" uuid="a472058e-1314-4f00-89fc-abef846e9f6f"/>
Oct 02 12:39:56 compute-0 nova_compute[257802]:   </encryption>
Oct 02 12:39:56 compute-0 nova_compute[257802]: </disk>
Oct 02 12:39:56 compute-0 nova_compute[257802]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:39:56 compute-0 nova_compute[257802]: 2025-10-02 12:39:56.701 2 INFO nova.virt.libvirt.driver [None req-3bc04b5d-a263-4730-99a2-c706f718739a b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Successfully detached device vdb from instance c03d6d93-3bfc-4356-bdea-f62670b73a91 from the persistent domain config.
Oct 02 12:39:56 compute-0 nova_compute[257802]: 2025-10-02 12:39:56.702 2 DEBUG nova.virt.libvirt.driver [None req-3bc04b5d-a263-4730-99a2-c706f718739a b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance c03d6d93-3bfc-4356-bdea-f62670b73a91 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 12:39:56 compute-0 nova_compute[257802]: 2025-10-02 12:39:56.702 2 DEBUG nova.virt.libvirt.guest [None req-3bc04b5d-a263-4730-99a2-c706f718739a b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:39:56 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:39:56 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-6831a10e-a853-4e3f-956f-b5ceafcd071e">
Oct 02 12:39:56 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:39:56 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:39:56 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:39:56 compute-0 nova_compute[257802]:   </source>
Oct 02 12:39:56 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:39:56 compute-0 nova_compute[257802]:   <serial>6831a10e-a853-4e3f-956f-b5ceafcd071e</serial>
Oct 02 12:39:56 compute-0 nova_compute[257802]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:39:56 compute-0 nova_compute[257802]:   <encryption format="luks">
Oct 02 12:39:56 compute-0 nova_compute[257802]:     <secret type="passphrase" uuid="a472058e-1314-4f00-89fc-abef846e9f6f"/>
Oct 02 12:39:56 compute-0 nova_compute[257802]:   </encryption>
Oct 02 12:39:56 compute-0 nova_compute[257802]: </disk>
Oct 02 12:39:56 compute-0 nova_compute[257802]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:39:56 compute-0 nova_compute[257802]: 2025-10-02 12:39:56.810 2 DEBUG nova.virt.libvirt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Received event <DeviceRemovedEvent: 1759408796.8101323, c03d6d93-3bfc-4356-bdea-f62670b73a91 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 12:39:56 compute-0 nova_compute[257802]: 2025-10-02 12:39:56.812 2 DEBUG nova.virt.libvirt.driver [None req-3bc04b5d-a263-4730-99a2-c706f718739a b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance c03d6d93-3bfc-4356-bdea-f62670b73a91 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 12:39:56 compute-0 nova_compute[257802]: 2025-10-02 12:39:56.814 2 INFO nova.virt.libvirt.driver [None req-3bc04b5d-a263-4730-99a2-c706f718739a b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Successfully detached device vdb from instance c03d6d93-3bfc-4356-bdea-f62670b73a91 from the live domain config.
Oct 02 12:39:56 compute-0 podman[350147]: 2025-10-02 12:39:56.925934803 +0000 UTC m=+0.054990807 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct 02 12:39:56 compute-0 podman[350148]: 2025-10-02 12:39:56.930587366 +0000 UTC m=+0.056438421 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:39:56 compute-0 podman[350149]: 2025-10-02 12:39:56.930685669 +0000 UTC m=+0.054456884 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:39:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:57 compute-0 ceph-mon[73607]: pgmap v2379: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Oct 02 12:39:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:57.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2380: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Oct 02 12:39:58 compute-0 nova_compute[257802]: 2025-10-02 12:39:58.139 2 DEBUG nova.objects.instance [None req-3bc04b5d-a263-4730-99a2-c706f718739a b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Lazy-loading 'flavor' on Instance uuid c03d6d93-3bfc-4356-bdea-f62670b73a91 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:39:58 compute-0 nova_compute[257802]: 2025-10-02 12:39:58.440 2 DEBUG oslo_concurrency.lockutils [None req-3bc04b5d-a263-4730-99a2-c706f718739a b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Lock "c03d6d93-3bfc-4356-bdea-f62670b73a91" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 2.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:39:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:39:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:58.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:39:58 compute-0 nova_compute[257802]: 2025-10-02 12:39:58.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:59 compute-0 nova_compute[257802]: 2025-10-02 12:39:59.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:59 compute-0 ceph-mon[73607]: pgmap v2380: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Oct 02 12:39:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:39:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:59.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2381: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 176 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Oct 02 12:40:00 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 02 12:40:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:00.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:00 compute-0 ceph-mon[73607]: pgmap v2381: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 176 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Oct 02 12:40:00 compute-0 ceph-mon[73607]: overall HEALTH_OK
Oct 02 12:40:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1802728507' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:01 compute-0 nova_compute[257802]: 2025-10-02 12:40:01.116 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:40:01 compute-0 nova_compute[257802]: 2025-10-02 12:40:01.117 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:40:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:40:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:01.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:40:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2382: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.5 MiB/s wr, 102 op/s
Oct 02 12:40:02 compute-0 nova_compute[257802]: 2025-10-02 12:40:02.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:40:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:02.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:02 compute-0 ceph-mon[73607]: pgmap v2382: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.5 MiB/s wr, 102 op/s
Oct 02 12:40:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:03.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2383: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 559 KiB/s wr, 98 op/s
Oct 02 12:40:03 compute-0 nova_compute[257802]: 2025-10-02 12:40:03.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:03 compute-0 podman[350207]: 2025-10-02 12:40:03.954957995 +0000 UTC m=+0.092744430 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 12:40:04 compute-0 nova_compute[257802]: 2025-10-02 12:40:04.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:04.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:05 compute-0 nova_compute[257802]: 2025-10-02 12:40:05.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:40:05 compute-0 ceph-mon[73607]: pgmap v2383: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 559 KiB/s wr, 98 op/s
Oct 02 12:40:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:05.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2384: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 86 op/s
Oct 02 12:40:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:40:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:06.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:40:07 compute-0 nova_compute[257802]: 2025-10-02 12:40:07.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:40:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:07 compute-0 ceph-mon[73607]: pgmap v2384: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 86 op/s
Oct 02 12:40:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:40:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:07.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:40:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2385: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 78 op/s
Oct 02 12:40:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:08.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:08 compute-0 nova_compute[257802]: 2025-10-02 12:40:08.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:09 compute-0 nova_compute[257802]: 2025-10-02 12:40:09.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:09 compute-0 nova_compute[257802]: 2025-10-02 12:40:09.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:40:09 compute-0 ceph-mon[73607]: pgmap v2385: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 78 op/s
Oct 02 12:40:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:09.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2386: 305 pgs: 305 active+clean; 329 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 368 KiB/s wr, 81 op/s
Oct 02 12:40:10 compute-0 nova_compute[257802]: 2025-10-02 12:40:10.134 2 DEBUG oslo_concurrency.lockutils [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Acquiring lock "c03d6d93-3bfc-4356-bdea-f62670b73a91" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:10 compute-0 nova_compute[257802]: 2025-10-02 12:40:10.134 2 DEBUG oslo_concurrency.lockutils [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Lock "c03d6d93-3bfc-4356-bdea-f62670b73a91" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:10 compute-0 nova_compute[257802]: 2025-10-02 12:40:10.135 2 DEBUG oslo_concurrency.lockutils [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Acquiring lock "c03d6d93-3bfc-4356-bdea-f62670b73a91-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:10 compute-0 nova_compute[257802]: 2025-10-02 12:40:10.135 2 DEBUG oslo_concurrency.lockutils [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Lock "c03d6d93-3bfc-4356-bdea-f62670b73a91-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:10 compute-0 nova_compute[257802]: 2025-10-02 12:40:10.135 2 DEBUG oslo_concurrency.lockutils [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Lock "c03d6d93-3bfc-4356-bdea-f62670b73a91-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:10 compute-0 nova_compute[257802]: 2025-10-02 12:40:10.136 2 INFO nova.compute.manager [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Terminating instance
Oct 02 12:40:10 compute-0 nova_compute[257802]: 2025-10-02 12:40:10.137 2 DEBUG nova.compute.manager [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:40:10 compute-0 kernel: tap85648688-e3 (unregistering): left promiscuous mode
Oct 02 12:40:10 compute-0 NetworkManager[44987]: <info>  [1759408810.1955] device (tap85648688-e3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:40:10 compute-0 ovn_controller[148183]: 2025-10-02T12:40:10Z|00652|binding|INFO|Releasing lport 85648688-e368-42fd-86c6-d892c37f8c7b from this chassis (sb_readonly=0)
Oct 02 12:40:10 compute-0 ovn_controller[148183]: 2025-10-02T12:40:10Z|00653|binding|INFO|Setting lport 85648688-e368-42fd-86c6-d892c37f8c7b down in Southbound
Oct 02 12:40:10 compute-0 ovn_controller[148183]: 2025-10-02T12:40:10Z|00654|binding|INFO|Removing iface tap85648688-e3 ovn-installed in OVS
Oct 02 12:40:10 compute-0 nova_compute[257802]: 2025-10-02 12:40:10.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:10 compute-0 nova_compute[257802]: 2025-10-02 12:40:10.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:40:10.227 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:25:6b 10.100.0.7'], port_security=['fa:16:3e:dc:25:6b 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c03d6d93-3bfc-4356-bdea-f62670b73a91', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d35d228-f367-4a7c-abd0-4c3ae2b08283', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1294b2ea04b34f7189fde66e2afa2c56', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5427e9f2-85fc-4f8d-a138-f123d5924ea4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.182'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d7d1a30a-9898-452a-a8fe-d5b194845098, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=85648688-e368-42fd-86c6-d892c37f8c7b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:40:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:40:10.228 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 85648688-e368-42fd-86c6-d892c37f8c7b in datapath 9d35d228-f367-4a7c-abd0-4c3ae2b08283 unbound from our chassis
Oct 02 12:40:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:40:10.229 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9d35d228-f367-4a7c-abd0-4c3ae2b08283, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:40:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:40:10.230 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0293c015-59dc-4be7-b2e8-f0e838a023e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:40:10.231 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9d35d228-f367-4a7c-abd0-4c3ae2b08283 namespace which is not needed anymore
Oct 02 12:40:10 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000096.scope: Deactivated successfully.
Oct 02 12:40:10 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000096.scope: Consumed 18.184s CPU time.
Oct 02 12:40:10 compute-0 systemd-machined[211836]: Machine qemu-74-instance-00000096 terminated.
Oct 02 12:40:10 compute-0 sudo[350255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:10 compute-0 sudo[350255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:10 compute-0 sudo[350255]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:10 compute-0 nova_compute[257802]: 2025-10-02 12:40:10.374 2 INFO nova.virt.libvirt.driver [-] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Instance destroyed successfully.
Oct 02 12:40:10 compute-0 nova_compute[257802]: 2025-10-02 12:40:10.375 2 DEBUG nova.objects.instance [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Lazy-loading 'resources' on Instance uuid c03d6d93-3bfc-4356-bdea-f62670b73a91 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:40:10 compute-0 sudo[350295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:10 compute-0 sudo[350295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:10 compute-0 sudo[350295]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:10 compute-0 nova_compute[257802]: 2025-10-02 12:40:10.460 2 DEBUG nova.virt.libvirt.vif [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:38:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1407304067',display_name='tempest-TestEncryptedCinderVolumes-server-1407304067',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1407304067',id=150,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJOMJUj6TR5Hs2OVN7EF3FXhyHxf5yFPtChZDgiv15qWm1+k3afDIwiy57sWcPPQbfpdF3629DcEaLbL6DecpaHLxDbvnGufOnRy2GJpUApZQYS+d1x5X1KYIoGRgkaZRA==',key_name='tempest-keypair-1596230167',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:39:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1294b2ea04b34f7189fde66e2afa2c56',ramdisk_id='',reservation_id='r-2o1xb16a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-1986184665',owner_user_name='tempest-TestEncryptedCinderVolumes-1986184665-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:39:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4ad22acfd744e47aa9bb09035188e74',uuid=c03d6d93-3bfc-4356-bdea-f62670b73a91,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "85648688-e368-42fd-86c6-d892c37f8c7b", "address": "fa:16:3e:dc:25:6b", "network": {"id": "9d35d228-f367-4a7c-abd0-4c3ae2b08283", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-255695143-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1294b2ea04b34f7189fde66e2afa2c56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85648688-e3", "ovs_interfaceid": "85648688-e368-42fd-86c6-d892c37f8c7b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:40:10 compute-0 nova_compute[257802]: 2025-10-02 12:40:10.460 2 DEBUG nova.network.os_vif_util [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Converting VIF {"id": "85648688-e368-42fd-86c6-d892c37f8c7b", "address": "fa:16:3e:dc:25:6b", "network": {"id": "9d35d228-f367-4a7c-abd0-4c3ae2b08283", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-255695143-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1294b2ea04b34f7189fde66e2afa2c56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85648688-e3", "ovs_interfaceid": "85648688-e368-42fd-86c6-d892c37f8c7b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:40:10 compute-0 nova_compute[257802]: 2025-10-02 12:40:10.461 2 DEBUG nova.network.os_vif_util [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dc:25:6b,bridge_name='br-int',has_traffic_filtering=True,id=85648688-e368-42fd-86c6-d892c37f8c7b,network=Network(9d35d228-f367-4a7c-abd0-4c3ae2b08283),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85648688-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:40:10 compute-0 nova_compute[257802]: 2025-10-02 12:40:10.462 2 DEBUG os_vif [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:25:6b,bridge_name='br-int',has_traffic_filtering=True,id=85648688-e368-42fd-86c6-d892c37f8c7b,network=Network(9d35d228-f367-4a7c-abd0-4c3ae2b08283),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85648688-e3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:40:10 compute-0 nova_compute[257802]: 2025-10-02 12:40:10.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:10 compute-0 nova_compute[257802]: 2025-10-02 12:40:10.464 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85648688-e3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:10 compute-0 nova_compute[257802]: 2025-10-02 12:40:10.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:10 compute-0 nova_compute[257802]: 2025-10-02 12:40:10.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:10 compute-0 nova_compute[257802]: 2025-10-02 12:40:10.470 2 INFO os_vif [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:25:6b,bridge_name='br-int',has_traffic_filtering=True,id=85648688-e368-42fd-86c6-d892c37f8c7b,network=Network(9d35d228-f367-4a7c-abd0-4c3ae2b08283),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85648688-e3')
Oct 02 12:40:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:10.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:10 compute-0 neutron-haproxy-ovnmeta-9d35d228-f367-4a7c-abd0-4c3ae2b08283[349250]: [NOTICE]   (349254) : haproxy version is 2.8.14-c23fe91
Oct 02 12:40:10 compute-0 neutron-haproxy-ovnmeta-9d35d228-f367-4a7c-abd0-4c3ae2b08283[349250]: [NOTICE]   (349254) : path to executable is /usr/sbin/haproxy
Oct 02 12:40:10 compute-0 neutron-haproxy-ovnmeta-9d35d228-f367-4a7c-abd0-4c3ae2b08283[349250]: [WARNING]  (349254) : Exiting Master process...
Oct 02 12:40:10 compute-0 neutron-haproxy-ovnmeta-9d35d228-f367-4a7c-abd0-4c3ae2b08283[349250]: [WARNING]  (349254) : Exiting Master process...
Oct 02 12:40:10 compute-0 neutron-haproxy-ovnmeta-9d35d228-f367-4a7c-abd0-4c3ae2b08283[349250]: [ALERT]    (349254) : Current worker (349256) exited with code 143 (Terminated)
Oct 02 12:40:10 compute-0 neutron-haproxy-ovnmeta-9d35d228-f367-4a7c-abd0-4c3ae2b08283[349250]: [WARNING]  (349254) : All workers exited. Exiting... (0)
Oct 02 12:40:10 compute-0 systemd[1]: libpod-ee013e4da4dc13a3e87be5e8344d78b6603761a8710de2a10728c312467f372d.scope: Deactivated successfully.
Oct 02 12:40:10 compute-0 podman[350277]: 2025-10-02 12:40:10.63535632 +0000 UTC m=+0.314008083 container died ee013e4da4dc13a3e87be5e8344d78b6603761a8710de2a10728c312467f372d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d35d228-f367-4a7c-abd0-4c3ae2b08283, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 12:40:10 compute-0 ceph-mon[73607]: pgmap v2386: 305 pgs: 305 active+clean; 329 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 368 KiB/s wr, 81 op/s
Oct 02 12:40:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:40:10.686 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:40:10 compute-0 nova_compute[257802]: 2025-10-02 12:40:10.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-d196ba245fed29d4c6bdad394dc0a95757caf823a18bc73b0de513d1f03ca4f3-merged.mount: Deactivated successfully.
Oct 02 12:40:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ee013e4da4dc13a3e87be5e8344d78b6603761a8710de2a10728c312467f372d-userdata-shm.mount: Deactivated successfully.
Oct 02 12:40:10 compute-0 podman[350277]: 2025-10-02 12:40:10.778899313 +0000 UTC m=+0.457551076 container cleanup ee013e4da4dc13a3e87be5e8344d78b6603761a8710de2a10728c312467f372d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d35d228-f367-4a7c-abd0-4c3ae2b08283, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:40:10 compute-0 systemd[1]: libpod-conmon-ee013e4da4dc13a3e87be5e8344d78b6603761a8710de2a10728c312467f372d.scope: Deactivated successfully.
Oct 02 12:40:10 compute-0 podman[350369]: 2025-10-02 12:40:10.893920597 +0000 UTC m=+0.090597648 container remove ee013e4da4dc13a3e87be5e8344d78b6603761a8710de2a10728c312467f372d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d35d228-f367-4a7c-abd0-4c3ae2b08283, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:40:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:40:10.900 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[fa87ef72-5140-416d-be29-a1b4aad21b52]: (4, ('Thu Oct  2 12:40:10 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9d35d228-f367-4a7c-abd0-4c3ae2b08283 (ee013e4da4dc13a3e87be5e8344d78b6603761a8710de2a10728c312467f372d)\nee013e4da4dc13a3e87be5e8344d78b6603761a8710de2a10728c312467f372d\nThu Oct  2 12:40:10 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9d35d228-f367-4a7c-abd0-4c3ae2b08283 (ee013e4da4dc13a3e87be5e8344d78b6603761a8710de2a10728c312467f372d)\nee013e4da4dc13a3e87be5e8344d78b6603761a8710de2a10728c312467f372d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:40:10.902 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[921e81ad-6046-41d1-adc2-fe62d76f54e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:40:10.903 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d35d228-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:10 compute-0 nova_compute[257802]: 2025-10-02 12:40:10.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:10 compute-0 kernel: tap9d35d228-f0: left promiscuous mode
Oct 02 12:40:10 compute-0 nova_compute[257802]: 2025-10-02 12:40:10.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:40:10.913 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5e8aa409-3011-4919-8ba1-33cc761d53af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:10 compute-0 nova_compute[257802]: 2025-10-02 12:40:10.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:40:10.942 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c50a7a31-e129-412a-a597-541baed23078]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:40:10.943 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a5ecaabe-2a33-4818-80f2-c2aa840e949a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:40:10.959 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[dbbf3a03-0a6f-4a36-adcd-878076d279d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 681698, 'reachable_time': 42374, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350383, 'error': None, 'target': 'ovnmeta-9d35d228-f367-4a7c-abd0-4c3ae2b08283', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:40:10.962 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9d35d228-f367-4a7c-abd0-4c3ae2b08283 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:40:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:40:10.962 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[7c0f287b-7886-4d70-8c63-4cb4457a585e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:10 compute-0 systemd[1]: run-netns-ovnmeta\x2d9d35d228\x2df367\x2d4a7c\x2dabd0\x2d4c3ae2b08283.mount: Deactivated successfully.
Oct 02 12:40:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:40:10.963 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:40:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:40:10.964 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:11 compute-0 nova_compute[257802]: 2025-10-02 12:40:11.204 2 INFO nova.virt.libvirt.driver [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Deleting instance files /var/lib/nova/instances/c03d6d93-3bfc-4356-bdea-f62670b73a91_del
Oct 02 12:40:11 compute-0 nova_compute[257802]: 2025-10-02 12:40:11.204 2 INFO nova.virt.libvirt.driver [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Deletion of /var/lib/nova/instances/c03d6d93-3bfc-4356-bdea-f62670b73a91_del complete
Oct 02 12:40:11 compute-0 nova_compute[257802]: 2025-10-02 12:40:11.319 2 INFO nova.compute.manager [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Took 1.18 seconds to destroy the instance on the hypervisor.
Oct 02 12:40:11 compute-0 nova_compute[257802]: 2025-10-02 12:40:11.320 2 DEBUG oslo.service.loopingcall [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:40:11 compute-0 nova_compute[257802]: 2025-10-02 12:40:11.320 2 DEBUG nova.compute.manager [-] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:40:11 compute-0 nova_compute[257802]: 2025-10-02 12:40:11.321 2 DEBUG nova.network.neutron [-] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:40:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3604971004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:11.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2387: 305 pgs: 305 active+clean; 336 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 807 KiB/s wr, 89 op/s
Oct 02 12:40:12 compute-0 nova_compute[257802]: 2025-10-02 12:40:12.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:40:12 compute-0 nova_compute[257802]: 2025-10-02 12:40:12.188 2 DEBUG nova.compute.manager [req-1f50a9ae-9663-4066-8d4c-0e78545ae41c req-0061c016-5f3e-4c4d-ae72-bf36dfe29a23 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Received event network-vif-unplugged-85648688-e368-42fd-86c6-d892c37f8c7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:40:12 compute-0 nova_compute[257802]: 2025-10-02 12:40:12.189 2 DEBUG oslo_concurrency.lockutils [req-1f50a9ae-9663-4066-8d4c-0e78545ae41c req-0061c016-5f3e-4c4d-ae72-bf36dfe29a23 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "c03d6d93-3bfc-4356-bdea-f62670b73a91-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:12 compute-0 nova_compute[257802]: 2025-10-02 12:40:12.189 2 DEBUG oslo_concurrency.lockutils [req-1f50a9ae-9663-4066-8d4c-0e78545ae41c req-0061c016-5f3e-4c4d-ae72-bf36dfe29a23 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c03d6d93-3bfc-4356-bdea-f62670b73a91-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:12 compute-0 nova_compute[257802]: 2025-10-02 12:40:12.189 2 DEBUG oslo_concurrency.lockutils [req-1f50a9ae-9663-4066-8d4c-0e78545ae41c req-0061c016-5f3e-4c4d-ae72-bf36dfe29a23 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c03d6d93-3bfc-4356-bdea-f62670b73a91-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:12 compute-0 nova_compute[257802]: 2025-10-02 12:40:12.189 2 DEBUG nova.compute.manager [req-1f50a9ae-9663-4066-8d4c-0e78545ae41c req-0061c016-5f3e-4c4d-ae72-bf36dfe29a23 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] No waiting events found dispatching network-vif-unplugged-85648688-e368-42fd-86c6-d892c37f8c7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:40:12 compute-0 nova_compute[257802]: 2025-10-02 12:40:12.190 2 DEBUG nova.compute.manager [req-1f50a9ae-9663-4066-8d4c-0e78545ae41c req-0061c016-5f3e-4c4d-ae72-bf36dfe29a23 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Received event network-vif-unplugged-85648688-e368-42fd-86c6-d892c37f8c7b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:40:12 compute-0 nova_compute[257802]: 2025-10-02 12:40:12.190 2 DEBUG nova.compute.manager [req-1f50a9ae-9663-4066-8d4c-0e78545ae41c req-0061c016-5f3e-4c4d-ae72-bf36dfe29a23 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Received event network-vif-plugged-85648688-e368-42fd-86c6-d892c37f8c7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:40:12 compute-0 nova_compute[257802]: 2025-10-02 12:40:12.190 2 DEBUG oslo_concurrency.lockutils [req-1f50a9ae-9663-4066-8d4c-0e78545ae41c req-0061c016-5f3e-4c4d-ae72-bf36dfe29a23 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "c03d6d93-3bfc-4356-bdea-f62670b73a91-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:12 compute-0 nova_compute[257802]: 2025-10-02 12:40:12.190 2 DEBUG oslo_concurrency.lockutils [req-1f50a9ae-9663-4066-8d4c-0e78545ae41c req-0061c016-5f3e-4c4d-ae72-bf36dfe29a23 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c03d6d93-3bfc-4356-bdea-f62670b73a91-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:12 compute-0 nova_compute[257802]: 2025-10-02 12:40:12.190 2 DEBUG oslo_concurrency.lockutils [req-1f50a9ae-9663-4066-8d4c-0e78545ae41c req-0061c016-5f3e-4c4d-ae72-bf36dfe29a23 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c03d6d93-3bfc-4356-bdea-f62670b73a91-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:12 compute-0 nova_compute[257802]: 2025-10-02 12:40:12.191 2 DEBUG nova.compute.manager [req-1f50a9ae-9663-4066-8d4c-0e78545ae41c req-0061c016-5f3e-4c4d-ae72-bf36dfe29a23 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] No waiting events found dispatching network-vif-plugged-85648688-e368-42fd-86c6-d892c37f8c7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:40:12 compute-0 nova_compute[257802]: 2025-10-02 12:40:12.191 2 WARNING nova.compute.manager [req-1f50a9ae-9663-4066-8d4c-0e78545ae41c req-0061c016-5f3e-4c4d-ae72-bf36dfe29a23 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Received unexpected event network-vif-plugged-85648688-e368-42fd-86c6-d892c37f8c7b for instance with vm_state active and task_state deleting.
Oct 02 12:40:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:40:12.411712) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408812411774, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 1709, "num_deletes": 253, "total_data_size": 2854372, "memory_usage": 2898880, "flush_reason": "Manual Compaction"}
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408812425736, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 1734248, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51326, "largest_seqno": 53034, "table_properties": {"data_size": 1728265, "index_size": 2993, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 16570, "raw_average_key_size": 21, "raw_value_size": 1714755, "raw_average_value_size": 2235, "num_data_blocks": 131, "num_entries": 767, "num_filter_entries": 767, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408650, "oldest_key_time": 1759408650, "file_creation_time": 1759408812, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 14076 microseconds, and 4239 cpu microseconds.
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:40:12.425796) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 1734248 bytes OK
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:40:12.425813) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:40:12.429783) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:40:12.429796) EVENT_LOG_v1 {"time_micros": 1759408812429791, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:40:12.429811) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 2847088, prev total WAL file size 2847088, number of live WAL files 2.
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:40:12.430558) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373539' seq:72057594037927935, type:22 .. '6D6772737461740032303130' seq:0, type:0; will stop at (end)
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(1693KB)], [113(11MB)]
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408812430620, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 14198685, "oldest_snapshot_seqno": -1}
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 8239 keys, 11383874 bytes, temperature: kUnknown
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408812525230, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 11383874, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11330154, "index_size": 32040, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20613, "raw_key_size": 212677, "raw_average_key_size": 25, "raw_value_size": 11184898, "raw_average_value_size": 1357, "num_data_blocks": 1257, "num_entries": 8239, "num_filter_entries": 8239, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759408812, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:40:12.525523) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 11383874 bytes
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:40:12.528144) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.9 rd, 120.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 11.9 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(14.8) write-amplify(6.6) OK, records in: 8698, records dropped: 459 output_compression: NoCompression
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:40:12.528172) EVENT_LOG_v1 {"time_micros": 1759408812528160, "job": 68, "event": "compaction_finished", "compaction_time_micros": 94731, "compaction_time_cpu_micros": 26735, "output_level": 6, "num_output_files": 1, "total_output_size": 11383874, "num_input_records": 8698, "num_output_records": 8239, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408812528747, "job": 68, "event": "table_file_deletion", "file_number": 115}
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408812531172, "job": 68, "event": "table_file_deletion", "file_number": 113}
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:40:12.430413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:40:12.531328) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:40:12.531333) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:40:12.531334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:40:12.531336) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:40:12 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:40:12.531337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:40:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:12.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:40:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:40:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:40:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:40:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:40:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:40:13 compute-0 ceph-mon[73607]: pgmap v2387: 305 pgs: 305 active+clean; 336 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 807 KiB/s wr, 89 op/s
Oct 02 12:40:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:13.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2388: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 965 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Oct 02 12:40:13 compute-0 nova_compute[257802]: 2025-10-02 12:40:13.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:13 compute-0 nova_compute[257802]: 2025-10-02 12:40:13.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:13 compute-0 nova_compute[257802]: 2025-10-02 12:40:13.892 2 DEBUG nova.network.neutron [-] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:40:14 compute-0 nova_compute[257802]: 2025-10-02 12:40:14.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:40:14 compute-0 nova_compute[257802]: 2025-10-02 12:40:14.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:40:14 compute-0 nova_compute[257802]: 2025-10-02 12:40:14.133 2 INFO nova.compute.manager [-] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Took 2.81 seconds to deallocate network for instance.
Oct 02 12:40:14 compute-0 nova_compute[257802]: 2025-10-02 12:40:14.212 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:14 compute-0 nova_compute[257802]: 2025-10-02 12:40:14.213 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:14 compute-0 nova_compute[257802]: 2025-10-02 12:40:14.213 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:14 compute-0 nova_compute[257802]: 2025-10-02 12:40:14.213 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:40:14 compute-0 nova_compute[257802]: 2025-10-02 12:40:14.214 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:40:14 compute-0 nova_compute[257802]: 2025-10-02 12:40:14.286 2 DEBUG nova.compute.manager [req-6047fa04-9b62-48e2-958f-829023f46fda req-a5cefe9d-dbce-4fa7-a3e0-555935719e9a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Received event network-vif-deleted-85648688-e368-42fd-86c6-d892c37f8c7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:40:14 compute-0 nova_compute[257802]: 2025-10-02 12:40:14.317 2 DEBUG oslo_concurrency.lockutils [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:14 compute-0 nova_compute[257802]: 2025-10-02 12:40:14.318 2 DEBUG oslo_concurrency.lockutils [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:14 compute-0 nova_compute[257802]: 2025-10-02 12:40:14.496 2 DEBUG nova.scheduler.client.report [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Refreshing inventories for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:40:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:40:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:14.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:40:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/520256467' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:14 compute-0 nova_compute[257802]: 2025-10-02 12:40:14.668 2 DEBUG nova.scheduler.client.report [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Updating ProviderTree inventory for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:40:14 compute-0 nova_compute[257802]: 2025-10-02 12:40:14.668 2 DEBUG nova.compute.provider_tree [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Updating inventory in ProviderTree for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:40:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:40:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2975119985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:14 compute-0 nova_compute[257802]: 2025-10-02 12:40:14.696 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:40:14 compute-0 nova_compute[257802]: 2025-10-02 12:40:14.703 2 DEBUG nova.scheduler.client.report [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Refreshing aggregate associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:40:14 compute-0 nova_compute[257802]: 2025-10-02 12:40:14.828 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:40:14 compute-0 nova_compute[257802]: 2025-10-02 12:40:14.829 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:40:14 compute-0 nova_compute[257802]: 2025-10-02 12:40:14.998 2 DEBUG nova.scheduler.client.report [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Refreshing trait associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:40:15 compute-0 nova_compute[257802]: 2025-10-02 12:40:15.050 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:40:15 compute-0 nova_compute[257802]: 2025-10-02 12:40:15.051 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4086MB free_disk=20.912879943847656GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:40:15 compute-0 nova_compute[257802]: 2025-10-02 12:40:15.051 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:15 compute-0 nova_compute[257802]: 2025-10-02 12:40:15.083 2 DEBUG oslo_concurrency.processutils [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:40:15 compute-0 nova_compute[257802]: 2025-10-02 12:40:15.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:40:15 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/324046881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:15 compute-0 nova_compute[257802]: 2025-10-02 12:40:15.553 2 DEBUG oslo_concurrency.processutils [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:40:15 compute-0 nova_compute[257802]: 2025-10-02 12:40:15.560 2 DEBUG nova.compute.provider_tree [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:40:15 compute-0 ceph-mon[73607]: pgmap v2388: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 965 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Oct 02 12:40:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2975119985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3480621312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/324046881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:15 compute-0 nova_compute[257802]: 2025-10-02 12:40:15.754 2 DEBUG nova.scheduler.client.report [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:40:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:40:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:15.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:40:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2389: 305 pgs: 305 active+clean; 256 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 547 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Oct 02 12:40:15 compute-0 nova_compute[257802]: 2025-10-02 12:40:15.920 2 DEBUG oslo_concurrency.lockutils [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:15 compute-0 nova_compute[257802]: 2025-10-02 12:40:15.923 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:16 compute-0 nova_compute[257802]: 2025-10-02 12:40:16.046 2 INFO nova.scheduler.client.report [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Deleted allocations for instance c03d6d93-3bfc-4356-bdea-f62670b73a91
Oct 02 12:40:16 compute-0 nova_compute[257802]: 2025-10-02 12:40:16.166 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 17766045-13fc-4377-848f-6815e8a474d5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:40:16 compute-0 nova_compute[257802]: 2025-10-02 12:40:16.166 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:40:16 compute-0 nova_compute[257802]: 2025-10-02 12:40:16.167 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:40:16 compute-0 nova_compute[257802]: 2025-10-02 12:40:16.210 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:40:16 compute-0 nova_compute[257802]: 2025-10-02 12:40:16.431 2 DEBUG oslo_concurrency.lockutils [None req-39bac518-dfbb-4116-a37b-0f4705d78dda b4ad22acfd744e47aa9bb09035188e74 1294b2ea04b34f7189fde66e2afa2c56 - - default default] Lock "c03d6d93-3bfc-4356-bdea-f62670b73a91" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.296s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:16.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:40:16 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1951597425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:16 compute-0 nova_compute[257802]: 2025-10-02 12:40:16.667 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:40:16 compute-0 nova_compute[257802]: 2025-10-02 12:40:16.672 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:40:16 compute-0 nova_compute[257802]: 2025-10-02 12:40:16.725 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:40:16 compute-0 ceph-mon[73607]: pgmap v2389: 305 pgs: 305 active+clean; 256 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 547 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Oct 02 12:40:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1951597425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:16 compute-0 nova_compute[257802]: 2025-10-02 12:40:16.892 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:40:16 compute-0 nova_compute[257802]: 2025-10-02 12:40:16.892 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.969s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e340 do_prune osdmap full prune enabled
Oct 02 12:40:17 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3669041500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:40:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:17.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:40:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e341 e341: 3 total, 3 up, 3 in
Oct 02 12:40:17 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e341: 3 total, 3 up, 3 in
Oct 02 12:40:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2391: 305 pgs: 305 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 517 KiB/s rd, 2.6 MiB/s wr, 148 op/s
Oct 02 12:40:17 compute-0 nova_compute[257802]: 2025-10-02 12:40:17.892 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:40:17 compute-0 nova_compute[257802]: 2025-10-02 12:40:17.893 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:40:17 compute-0 nova_compute[257802]: 2025-10-02 12:40:17.893 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:40:18 compute-0 nova_compute[257802]: 2025-10-02 12:40:18.344 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-17766045-13fc-4377-848f-6815e8a474d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:40:18 compute-0 nova_compute[257802]: 2025-10-02 12:40:18.345 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-17766045-13fc-4377-848f-6815e8a474d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:40:18 compute-0 nova_compute[257802]: 2025-10-02 12:40:18.345 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:40:18 compute-0 nova_compute[257802]: 2025-10-02 12:40:18.345 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid 17766045-13fc-4377-848f-6815e8a474d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:40:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:40:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:18.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:40:18 compute-0 nova_compute[257802]: 2025-10-02 12:40:18.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:19 compute-0 ceph-mon[73607]: osdmap e341: 3 total, 3 up, 3 in
Oct 02 12:40:19 compute-0 ceph-mon[73607]: pgmap v2391: 305 pgs: 305 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 517 KiB/s rd, 2.6 MiB/s wr, 148 op/s
Oct 02 12:40:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:19.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2392: 305 pgs: 305 active+clean; 216 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 506 KiB/s rd, 3.7 MiB/s wr, 148 op/s
Oct 02 12:40:20 compute-0 nova_compute[257802]: 2025-10-02 12:40:20.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:20.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:20 compute-0 ceph-mon[73607]: pgmap v2392: 305 pgs: 305 active+clean; 216 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 506 KiB/s rd, 3.7 MiB/s wr, 148 op/s
Oct 02 12:40:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:21.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2393: 305 pgs: 305 active+clean; 221 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 416 KiB/s rd, 3.7 MiB/s wr, 138 op/s
Oct 02 12:40:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:22.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:22 compute-0 nova_compute[257802]: 2025-10-02 12:40:22.940 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Updating instance_info_cache with network_info: [{"id": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "address": "fa:16:3e:ec:94:f8", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c06cc55-6b", "ovs_interfaceid": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:40:22 compute-0 nova_compute[257802]: 2025-10-02 12:40:22.971 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-17766045-13fc-4377-848f-6815e8a474d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:40:22 compute-0 nova_compute[257802]: 2025-10-02 12:40:22.972 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:40:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:40:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:23.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:40:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2394: 305 pgs: 305 active+clean; 221 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Oct 02 12:40:23 compute-0 nova_compute[257802]: 2025-10-02 12:40:23.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:24.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:25 compute-0 nova_compute[257802]: 2025-10-02 12:40:25.372 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408810.3707993, c03d6d93-3bfc-4356-bdea-f62670b73a91 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:40:25 compute-0 nova_compute[257802]: 2025-10-02 12:40:25.373 2 INFO nova.compute.manager [-] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] VM Stopped (Lifecycle Event)
Oct 02 12:40:25 compute-0 nova_compute[257802]: 2025-10-02 12:40:25.409 2 DEBUG nova.compute.manager [None req-c029173d-9d7b-4e4a-9343-9c13c16b308b - - - - - -] [instance: c03d6d93-3bfc-4356-bdea-f62670b73a91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:40:25 compute-0 nova_compute[257802]: 2025-10-02 12:40:25.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:25.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2395: 305 pgs: 305 active+clean; 221 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Oct 02 12:40:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:40:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:26.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:40:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:40:26.958 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:40:26.958 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:40:26.959 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:27 compute-0 nova_compute[257802]: 2025-10-02 12:40:27.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:40:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:27.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2396: 305 pgs: 305 active+clean; 221 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 2.0 MiB/s wr, 25 op/s
Oct 02 12:40:27 compute-0 podman[350463]: 2025-10-02 12:40:27.929904512 +0000 UTC m=+0.061390203 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 02 12:40:27 compute-0 podman[350462]: 2025-10-02 12:40:27.931477021 +0000 UTC m=+0.065769931 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 12:40:27 compute-0 podman[350464]: 2025-10-02 12:40:27.94369378 +0000 UTC m=+0.068892636 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid)
Oct 02 12:40:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:40:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:28.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:40:28 compute-0 nova_compute[257802]: 2025-10-02 12:40:28.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:29.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2397: 305 pgs: 305 active+clean; 221 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.7 MiB/s wr, 21 op/s
Oct 02 12:40:30 compute-0 ceph-mon[73607]: pgmap v2393: 305 pgs: 305 active+clean; 221 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 416 KiB/s rd, 3.7 MiB/s wr, 138 op/s
Oct 02 12:40:30 compute-0 nova_compute[257802]: 2025-10-02 12:40:30.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:30 compute-0 sudo[350515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:30 compute-0 sudo[350515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:30 compute-0 sudo[350515]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:30 compute-0 sudo[350540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:30 compute-0 sudo[350540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:30 compute-0 sudo[350540]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:30.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:31 compute-0 ceph-mon[73607]: pgmap v2394: 305 pgs: 305 active+clean; 221 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Oct 02 12:40:31 compute-0 ceph-mon[73607]: pgmap v2395: 305 pgs: 305 active+clean; 221 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Oct 02 12:40:31 compute-0 ceph-mon[73607]: pgmap v2396: 305 pgs: 305 active+clean; 221 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 2.0 MiB/s wr, 25 op/s
Oct 02 12:40:31 compute-0 ceph-mon[73607]: pgmap v2397: 305 pgs: 305 active+clean; 221 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.7 MiB/s wr, 21 op/s
Oct 02 12:40:31 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2396500098' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:31 compute-0 sudo[350566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:31 compute-0 sudo[350566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:31 compute-0 sudo[350566]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:31 compute-0 sudo[350591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:40:31 compute-0 sudo[350591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:31 compute-0 sudo[350591]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:31 compute-0 sudo[350616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:31 compute-0 sudo[350616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:31 compute-0 sudo[350616]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:31 compute-0 sudo[350641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 12:40:31 compute-0 sudo[350641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:40:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:31.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:40:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2398: 305 pgs: 305 active+clean; 221 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 380 KiB/s wr, 18 op/s
Oct 02 12:40:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:32.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:33 compute-0 podman[350739]: 2025-10-02 12:40:33.005595945 +0000 UTC m=+0.891655407 container exec 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 12:40:33 compute-0 podman[350760]: 2025-10-02 12:40:33.222978275 +0000 UTC m=+0.107969204 container exec_died 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:40:33 compute-0 podman[350739]: 2025-10-02 12:40:33.304163301 +0000 UTC m=+1.190222713 container exec_died 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 12:40:33 compute-0 ceph-mon[73607]: pgmap v2398: 305 pgs: 305 active+clean; 221 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 380 KiB/s wr, 18 op/s
Oct 02 12:40:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:40:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:33.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:40:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2399: 305 pgs: 305 active+clean; 221 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 4.3 KiB/s wr, 23 op/s
Oct 02 12:40:33 compute-0 nova_compute[257802]: 2025-10-02 12:40:33.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 12:40:34 compute-0 podman[350811]: 2025-10-02 12:40:34.123982218 +0000 UTC m=+0.106540238 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 02 12:40:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:40:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:34.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:40:34 compute-0 podman[350902]: 2025-10-02 12:40:34.958687469 +0000 UTC m=+0.682411426 container exec 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 12:40:35 compute-0 nova_compute[257802]: 2025-10-02 12:40:35.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:35 compute-0 podman[350925]: 2025-10-02 12:40:35.688761252 +0000 UTC m=+0.698465210 container exec_died 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 12:40:35 compute-0 ceph-mon[73607]: pgmap v2399: 305 pgs: 305 active+clean; 221 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 4.3 KiB/s wr, 23 op/s
Oct 02 12:40:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:35.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2400: 305 pgs: 305 active+clean; 236 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 697 KiB/s wr, 30 op/s
Oct 02 12:40:36 compute-0 podman[350902]: 2025-10-02 12:40:36.310053282 +0000 UTC m=+2.033777189 container exec_died 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 12:40:36 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:40:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:36.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 12:40:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:40:36 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1920746968' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:40:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:40:36 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1920746968' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:40:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:37 compute-0 ceph-mon[73607]: pgmap v2400: 305 pgs: 305 active+clean; 236 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 697 KiB/s wr, 30 op/s
Oct 02 12:40:37 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:40:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:37.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2401: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 846 KiB/s wr, 52 op/s
Oct 02 12:40:37 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:40:38 compute-0 podman[350971]: 2025-10-02 12:40:38.325263077 +0000 UTC m=+0.822196417 container exec a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=keepalived-container, version=2.2.4, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived)
Oct 02 12:40:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:40:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:38.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:40:38 compute-0 nova_compute[257802]: 2025-10-02 12:40:38.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:39 compute-0 podman[350971]: 2025-10-02 12:40:39.048403259 +0000 UTC m=+1.545336599 container exec_died a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, release=1793, distribution-scope=public, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived)
Oct 02 12:40:39 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1920746968' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:40:39 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1920746968' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:40:39 compute-0 ceph-mon[73607]: pgmap v2401: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 846 KiB/s wr, 52 op/s
Oct 02 12:40:39 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:40:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:39.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2402: 305 pgs: 305 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 1.8 MiB/s wr, 56 op/s
Oct 02 12:40:40 compute-0 sudo[350641]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:40:40 compute-0 ovn_controller[148183]: 2025-10-02T12:40:40Z|00655|binding|INFO|Releasing lport 293fb87a-10df-4698-a69e-3023bca5a6a3 from this chassis (sb_readonly=0)
Oct 02 12:40:40 compute-0 nova_compute[257802]: 2025-10-02 12:40:40.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:40 compute-0 nova_compute[257802]: 2025-10-02 12:40:40.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:40:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:40.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:40:40 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:40:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:40:41 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:40:41 compute-0 sudo[351021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:41 compute-0 sudo[351021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:41 compute-0 sudo[351021]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:41 compute-0 sudo[351046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:40:41 compute-0 sudo[351046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:41 compute-0 sudo[351046]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:41 compute-0 sudo[351071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:41 compute-0 sudo[351071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:41 compute-0 sudo[351071]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:41 compute-0 sudo[351096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:40:41 compute-0 sudo[351096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:41 compute-0 ceph-mon[73607]: pgmap v2402: 305 pgs: 305 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 1.8 MiB/s wr, 56 op/s
Oct 02 12:40:41 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:40:41 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:40:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:40:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:41.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:40:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2403: 305 pgs: 305 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 1.8 MiB/s wr, 67 op/s
Oct 02 12:40:42 compute-0 sudo[351096]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:40:42 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:40:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:40:42 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:40:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:40:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:40:42
Oct 02 12:40:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:40:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:40:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'images', 'backups', 'default.rgw.meta', 'vms', '.rgw.root', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Oct 02 12:40:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:40:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:42.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:40:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:40:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:40:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:40:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:40:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:40:42 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:40:42 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 27f4810c-c671-4811-9590-b251d1255bb2 does not exist
Oct 02 12:40:42 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev d44b992b-8673-4f3b-a373-3da8d78d94c9 does not exist
Oct 02 12:40:42 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 738d6f48-8ef8-4226-b099-5676bcb64a28 does not exist
Oct 02 12:40:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:40:42 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:40:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:40:42 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:40:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:40:42 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:40:42 compute-0 sudo[351153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:42 compute-0 sudo[351153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:42 compute-0 sudo[351153]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:42 compute-0 sudo[351178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:40:42 compute-0 sudo[351178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:42 compute-0 sudo[351178]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:42 compute-0 sudo[351203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:42 compute-0 sudo[351203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:42 compute-0 sudo[351203]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:42 compute-0 sudo[351228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:40:42 compute-0 sudo[351228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:40:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:40:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:40:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:40:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:40:43 compute-0 podman[351294]: 2025-10-02 12:40:43.355707793 +0000 UTC m=+0.027302069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:40:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2007664150' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:40:43 compute-0 ceph-mon[73607]: pgmap v2403: 305 pgs: 305 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 1.8 MiB/s wr, 67 op/s
Oct 02 12:40:43 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:40:43 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:40:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2489439363' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:40:43 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:40:43 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:40:43 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:40:43 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:40:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:43.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2404: 305 pgs: 305 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 1.8 MiB/s wr, 72 op/s
Oct 02 12:40:43 compute-0 nova_compute[257802]: 2025-10-02 12:40:43.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:43 compute-0 podman[351294]: 2025-10-02 12:40:43.906165161 +0000 UTC m=+0.577759427 container create 1c98b4d5a802a5226909e8af4ececeb50ea0c1571519dfa45d3c1b391753c4dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_easley, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:40:44 compute-0 ovn_controller[148183]: 2025-10-02T12:40:44Z|00656|binding|INFO|Releasing lport 293fb87a-10df-4698-a69e-3023bca5a6a3 from this chassis (sb_readonly=0)
Oct 02 12:40:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:40:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:40:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:40:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:40:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:40:44 compute-0 nova_compute[257802]: 2025-10-02 12:40:44.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:44 compute-0 systemd[1]: Started libpod-conmon-1c98b4d5a802a5226909e8af4ececeb50ea0c1571519dfa45d3c1b391753c4dd.scope.
Oct 02 12:40:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:40:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:44.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:44 compute-0 podman[351294]: 2025-10-02 12:40:44.849553293 +0000 UTC m=+1.521147569 container init 1c98b4d5a802a5226909e8af4ececeb50ea0c1571519dfa45d3c1b391753c4dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_easley, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:40:44 compute-0 podman[351294]: 2025-10-02 12:40:44.858079422 +0000 UTC m=+1.529673678 container start 1c98b4d5a802a5226909e8af4ececeb50ea0c1571519dfa45d3c1b391753c4dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_easley, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 12:40:44 compute-0 jovial_easley[351311]: 167 167
Oct 02 12:40:44 compute-0 systemd[1]: libpod-1c98b4d5a802a5226909e8af4ececeb50ea0c1571519dfa45d3c1b391753c4dd.scope: Deactivated successfully.
Oct 02 12:40:45 compute-0 podman[351294]: 2025-10-02 12:40:45.472167957 +0000 UTC m=+2.143762253 container attach 1c98b4d5a802a5226909e8af4ececeb50ea0c1571519dfa45d3c1b391753c4dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_easley, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 12:40:45 compute-0 podman[351294]: 2025-10-02 12:40:45.472814993 +0000 UTC m=+2.144409259 container died 1c98b4d5a802a5226909e8af4ececeb50ea0c1571519dfa45d3c1b391753c4dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_easley, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 12:40:45 compute-0 nova_compute[257802]: 2025-10-02 12:40:45.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:45 compute-0 ceph-mon[73607]: pgmap v2404: 305 pgs: 305 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 1.8 MiB/s wr, 72 op/s
Oct 02 12:40:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:45.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2405: 305 pgs: 305 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 1.8 MiB/s wr, 59 op/s
Oct 02 12:40:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-987cdf040aec48152e1b67ded03916283dc33ea3d6789f69ffd42c797c89528e-merged.mount: Deactivated successfully.
Oct 02 12:40:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:40:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:46.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:40:47 compute-0 ceph-mon[73607]: pgmap v2405: 305 pgs: 305 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 1.8 MiB/s wr, 59 op/s
Oct 02 12:40:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:47.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2406: 305 pgs: 305 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.1 MiB/s wr, 60 op/s
Oct 02 12:40:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:48 compute-0 podman[351294]: 2025-10-02 12:40:48.558786947 +0000 UTC m=+5.230381213 container remove 1c98b4d5a802a5226909e8af4ececeb50ea0c1571519dfa45d3c1b391753c4dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_easley, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:40:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:48.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:48 compute-0 systemd[1]: libpod-conmon-1c98b4d5a802a5226909e8af4ececeb50ea0c1571519dfa45d3c1b391753c4dd.scope: Deactivated successfully.
Oct 02 12:40:48 compute-0 podman[351341]: 2025-10-02 12:40:48.710220522 +0000 UTC m=+0.021062567 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:40:48 compute-0 nova_compute[257802]: 2025-10-02 12:40:48.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:40:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Cumulative writes: 12K writes, 53K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s
                                           Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1598 writes, 7229 keys, 1598 commit groups, 1.0 writes per commit group, ingest: 10.62 MB, 0.02 MB/s
                                           Interval WAL: 1597 writes, 1597 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     38.8      1.79              0.19        34    0.053       0      0       0.0       0.0
                                             L6      1/0   10.86 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.5    101.5     85.6      3.67              0.90        33    0.111    206K    18K       0.0       0.0
                                            Sum      1/0   10.86 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.5     68.3     70.2      5.46              1.09        67    0.082    206K    18K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.1     42.0     40.7      1.87              0.21        12    0.156     49K   3068       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0    101.5     85.6      3.67              0.90        33    0.111    206K    18K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     38.8      1.78              0.19        33    0.054       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.068, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.37 GB write, 0.09 MB/s write, 0.36 GB read, 0.09 MB/s read, 5.5 seconds
                                           Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 1.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5581be5e11f0#2 capacity: 304.00 MB usage: 41.00 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000416 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2410,39.44 MB,12.9736%) FilterBlock(68,583.11 KB,0.187317%) IndexBlock(68,1011.77 KB,0.325017%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 12:40:49 compute-0 podman[351341]: 2025-10-02 12:40:49.509124326 +0000 UTC m=+0.819966401 container create 560ffe5c5e30b349236842bb9cb22b0bff828c28465de1153c231ce03dd793f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_burnell, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 12:40:49 compute-0 ceph-mon[73607]: pgmap v2406: 305 pgs: 305 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.1 MiB/s wr, 60 op/s
Oct 02 12:40:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:49.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2407: 305 pgs: 305 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 986 KiB/s wr, 93 op/s
Oct 02 12:40:49 compute-0 systemd[1]: Started libpod-conmon-560ffe5c5e30b349236842bb9cb22b0bff828c28465de1153c231ce03dd793f6.scope.
Oct 02 12:40:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:40:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bafcf3383a082c293f872d941c4708e657dec6f4f46abca5bf446da6c5431dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:40:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bafcf3383a082c293f872d941c4708e657dec6f4f46abca5bf446da6c5431dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:40:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bafcf3383a082c293f872d941c4708e657dec6f4f46abca5bf446da6c5431dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:40:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bafcf3383a082c293f872d941c4708e657dec6f4f46abca5bf446da6c5431dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:40:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bafcf3383a082c293f872d941c4708e657dec6f4f46abca5bf446da6c5431dd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:40:50 compute-0 podman[351341]: 2025-10-02 12:40:50.342418325 +0000 UTC m=+1.653260370 container init 560ffe5c5e30b349236842bb9cb22b0bff828c28465de1153c231ce03dd793f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_burnell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct 02 12:40:50 compute-0 podman[351341]: 2025-10-02 12:40:50.349365443 +0000 UTC m=+1.660207488 container start 560ffe5c5e30b349236842bb9cb22b0bff828c28465de1153c231ce03dd793f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_burnell, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:40:50 compute-0 nova_compute[257802]: 2025-10-02 12:40:50.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:50.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:50 compute-0 sudo[351363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:50 compute-0 sudo[351363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:50 compute-0 sudo[351363]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:50 compute-0 sudo[351388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:50 compute-0 sudo[351388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:50 compute-0 sudo[351388]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:50 compute-0 podman[351341]: 2025-10-02 12:40:50.858545161 +0000 UTC m=+2.169387186 container attach 560ffe5c5e30b349236842bb9cb22b0bff828c28465de1153c231ce03dd793f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_burnell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:40:51 compute-0 priceless_burnell[351357]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:40:51 compute-0 priceless_burnell[351357]: --> relative data size: 1.0
Oct 02 12:40:51 compute-0 priceless_burnell[351357]: --> All data devices are unavailable
Oct 02 12:40:51 compute-0 systemd[1]: libpod-560ffe5c5e30b349236842bb9cb22b0bff828c28465de1153c231ce03dd793f6.scope: Deactivated successfully.
Oct 02 12:40:51 compute-0 podman[351341]: 2025-10-02 12:40:51.301820747 +0000 UTC m=+2.612662772 container died 560ffe5c5e30b349236842bb9cb22b0bff828c28465de1153c231ce03dd793f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_burnell, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 12:40:51 compute-0 ceph-mon[73607]: pgmap v2407: 305 pgs: 305 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 986 KiB/s wr, 93 op/s
Oct 02 12:40:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-0bafcf3383a082c293f872d941c4708e657dec6f4f46abca5bf446da6c5431dd-merged.mount: Deactivated successfully.
Oct 02 12:40:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:51.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2408: 305 pgs: 305 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 15 KiB/s wr, 97 op/s
Oct 02 12:40:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:40:52 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/667717592' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:40:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:40:52 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/667717592' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:40:52 compute-0 podman[351341]: 2025-10-02 12:40:52.406379201 +0000 UTC m=+3.717221216 container remove 560ffe5c5e30b349236842bb9cb22b0bff828c28465de1153c231ce03dd793f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:40:52 compute-0 systemd[1]: libpod-conmon-560ffe5c5e30b349236842bb9cb22b0bff828c28465de1153c231ce03dd793f6.scope: Deactivated successfully.
Oct 02 12:40:52 compute-0 sudo[351228]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:52 compute-0 sudo[351438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:52 compute-0 sudo[351438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:52 compute-0 sudo[351438]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:52 compute-0 sudo[351463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:40:52 compute-0 sudo[351463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:52 compute-0 sudo[351463]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:40:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:52.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:40:52 compute-0 ceph-mon[73607]: pgmap v2408: 305 pgs: 305 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 15 KiB/s wr, 97 op/s
Oct 02 12:40:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/667717592' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:40:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/667717592' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:40:52 compute-0 sudo[351489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:52 compute-0 sudo[351489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:52 compute-0 sudo[351489]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:52 compute-0 sudo[351514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:40:52 compute-0 sudo[351514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:53 compute-0 podman[351579]: 2025-10-02 12:40:53.011130927 +0000 UTC m=+0.022062351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:40:53 compute-0 podman[351579]: 2025-10-02 12:40:53.32063469 +0000 UTC m=+0.331566094 container create 542a6a002ec370eba838005f652238a7b38d781d95ff19bcde585a0a424ff1dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shockley, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:40:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:53.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2409: 305 pgs: 305 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 119 op/s
Oct 02 12:40:53 compute-0 nova_compute[257802]: 2025-10-02 12:40:53.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:54 compute-0 systemd[1]: Started libpod-conmon-542a6a002ec370eba838005f652238a7b38d781d95ff19bcde585a0a424ff1dd.scope.
Oct 02 12:40:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:40:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:40:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:40:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:40:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:40:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031685118299832494 of space, bias 1.0, pg target 0.9505535489949748 quantized to 32 (current 32)
Oct 02 12:40:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:40:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002164868032291085 of space, bias 1.0, pg target 0.6494604096873255 quantized to 32 (current 32)
Oct 02 12:40:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:40:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:40:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:40:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8563869695700725 quantized to 32 (current 32)
Oct 02 12:40:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:40:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 12:40:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:40:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:40:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:40:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 12:40:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:40:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 12:40:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:40:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:40:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:40:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 12:40:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:54.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:54 compute-0 podman[351579]: 2025-10-02 12:40:54.709066759 +0000 UTC m=+1.719998193 container init 542a6a002ec370eba838005f652238a7b38d781d95ff19bcde585a0a424ff1dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shockley, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:40:54 compute-0 podman[351579]: 2025-10-02 12:40:54.718449788 +0000 UTC m=+1.729381192 container start 542a6a002ec370eba838005f652238a7b38d781d95ff19bcde585a0a424ff1dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shockley, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:40:54 compute-0 gracious_shockley[351595]: 167 167
Oct 02 12:40:54 compute-0 systemd[1]: libpod-542a6a002ec370eba838005f652238a7b38d781d95ff19bcde585a0a424ff1dd.scope: Deactivated successfully.
Oct 02 12:40:55 compute-0 podman[351579]: 2025-10-02 12:40:55.311895268 +0000 UTC m=+2.322826672 container attach 542a6a002ec370eba838005f652238a7b38d781d95ff19bcde585a0a424ff1dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shockley, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:40:55 compute-0 podman[351579]: 2025-10-02 12:40:55.313082997 +0000 UTC m=+2.324014411 container died 542a6a002ec370eba838005f652238a7b38d781d95ff19bcde585a0a424ff1dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 12:40:55 compute-0 ceph-mon[73607]: pgmap v2409: 305 pgs: 305 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 119 op/s
Oct 02 12:40:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbe01523c3fa9c8c409ffd45f7bc03ba7a3b9c76d10dbf5597344f8c1e2f0550-merged.mount: Deactivated successfully.
Oct 02 12:40:55 compute-0 nova_compute[257802]: 2025-10-02 12:40:55.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:40:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:55.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:40:55 compute-0 podman[351579]: 2025-10-02 12:40:55.874076502 +0000 UTC m=+2.885007906 container remove 542a6a002ec370eba838005f652238a7b38d781d95ff19bcde585a0a424ff1dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Oct 02 12:40:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2410: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 622 KiB/s wr, 126 op/s
Oct 02 12:40:55 compute-0 systemd[1]: libpod-conmon-542a6a002ec370eba838005f652238a7b38d781d95ff19bcde585a0a424ff1dd.scope: Deactivated successfully.
Oct 02 12:40:56 compute-0 podman[351620]: 2025-10-02 12:40:56.095379696 +0000 UTC m=+0.039106718 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:40:56 compute-0 podman[351620]: 2025-10-02 12:40:56.193263241 +0000 UTC m=+0.136990243 container create 4d379e5fef21dfc60739eec651e84f9d899c6303c3e297092a9663952f6021a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 12:40:56 compute-0 systemd[1]: Started libpod-conmon-4d379e5fef21dfc60739eec651e84f9d899c6303c3e297092a9663952f6021a3.scope.
Oct 02 12:40:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:40:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/438cd827d4f1c0497dbed0366e9bdf8bef38d82b9f2e8f5614f4e9ca3a4af519/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:40:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/438cd827d4f1c0497dbed0366e9bdf8bef38d82b9f2e8f5614f4e9ca3a4af519/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:40:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/438cd827d4f1c0497dbed0366e9bdf8bef38d82b9f2e8f5614f4e9ca3a4af519/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:40:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/438cd827d4f1c0497dbed0366e9bdf8bef38d82b9f2e8f5614f4e9ca3a4af519/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:40:56 compute-0 podman[351620]: 2025-10-02 12:40:56.436533773 +0000 UTC m=+0.380260775 container init 4d379e5fef21dfc60739eec651e84f9d899c6303c3e297092a9663952f6021a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:40:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2023610297' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:40:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2023610297' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:40:56 compute-0 podman[351620]: 2025-10-02 12:40:56.446282802 +0000 UTC m=+0.390009794 container start 4d379e5fef21dfc60739eec651e84f9d899c6303c3e297092a9663952f6021a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 12:40:56 compute-0 podman[351620]: 2025-10-02 12:40:56.514123951 +0000 UTC m=+0.457850953 container attach 4d379e5fef21dfc60739eec651e84f9d899c6303c3e297092a9663952f6021a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcnulty, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 12:40:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:56.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:57 compute-0 nova_compute[257802]: 2025-10-02 12:40:57.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]: {
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]:     "1": [
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]:         {
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]:             "devices": [
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]:                 "/dev/loop3"
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]:             ],
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]:             "lv_name": "ceph_lv0",
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]:             "lv_size": "7511998464",
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]:             "name": "ceph_lv0",
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]:             "tags": {
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]:                 "ceph.cluster_name": "ceph",
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]:                 "ceph.crush_device_class": "",
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]:                 "ceph.encrypted": "0",
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]:                 "ceph.osd_id": "1",
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]:                 "ceph.type": "block",
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]:                 "ceph.vdo": "0"
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]:             },
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]:             "type": "block",
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]:             "vg_name": "ceph_vg0"
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]:         }
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]:     ]
Oct 02 12:40:57 compute-0 upbeat_mcnulty[351636]: }
Oct 02 12:40:57 compute-0 systemd[1]: libpod-4d379e5fef21dfc60739eec651e84f9d899c6303c3e297092a9663952f6021a3.scope: Deactivated successfully.
Oct 02 12:40:57 compute-0 podman[351646]: 2025-10-02 12:40:57.33272601 +0000 UTC m=+0.033330037 container died 4d379e5fef21dfc60739eec651e84f9d899c6303c3e297092a9663952f6021a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 12:40:57 compute-0 ceph-mon[73607]: pgmap v2410: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 622 KiB/s wr, 126 op/s
Oct 02 12:40:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:57.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2411: 305 pgs: 305 active+clean; 306 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 130 op/s
Oct 02 12:40:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-438cd827d4f1c0497dbed0366e9bdf8bef38d82b9f2e8f5614f4e9ca3a4af519-merged.mount: Deactivated successfully.
Oct 02 12:40:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:58.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:58 compute-0 podman[351646]: 2025-10-02 12:40:58.947702432 +0000 UTC m=+1.648306499 container remove 4d379e5fef21dfc60739eec651e84f9d899c6303c3e297092a9663952f6021a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 12:40:58 compute-0 nova_compute[257802]: 2025-10-02 12:40:58.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:58 compute-0 systemd[1]: libpod-conmon-4d379e5fef21dfc60739eec651e84f9d899c6303c3e297092a9663952f6021a3.scope: Deactivated successfully.
Oct 02 12:40:59 compute-0 sudo[351514]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:59 compute-0 podman[351663]: 2025-10-02 12:40:59.017016408 +0000 UTC m=+0.913207743 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:40:59 compute-0 podman[351661]: 2025-10-02 12:40:59.020604426 +0000 UTC m=+0.914924296 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:40:59 compute-0 podman[351662]: 2025-10-02 12:40:59.049634616 +0000 UTC m=+0.948628080 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct 02 12:40:59 compute-0 sudo[351718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:59 compute-0 sudo[351718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:59 compute-0 sudo[351718]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:59 compute-0 sudo[351745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:40:59 compute-0 sudo[351745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:59 compute-0 sudo[351745]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:59 compute-0 sudo[351770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:59 compute-0 sudo[351770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:59 compute-0 sudo[351770]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:59 compute-0 sudo[351795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:40:59 compute-0 sudo[351795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:59 compute-0 ceph-mon[73607]: pgmap v2411: 305 pgs: 305 active+clean; 306 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 130 op/s
Oct 02 12:40:59 compute-0 podman[351861]: 2025-10-02 12:40:59.612018455 +0000 UTC m=+0.021472526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:40:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:40:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:40:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:59.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:40:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2412: 305 pgs: 305 active+clean; 374 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.1 MiB/s wr, 154 op/s
Oct 02 12:41:00 compute-0 podman[351861]: 2025-10-02 12:41:00.022072818 +0000 UTC m=+0.431526849 container create 8f641c37e20bec1d43710e41debfc88b442c6cb71e18c0cdd190e62843d8f678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dewdney, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 12:41:00 compute-0 systemd[1]: Started libpod-conmon-8f641c37e20bec1d43710e41debfc88b442c6cb71e18c0cdd190e62843d8f678.scope.
Oct 02 12:41:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:41:00 compute-0 podman[351861]: 2025-10-02 12:41:00.278493401 +0000 UTC m=+0.687947462 container init 8f641c37e20bec1d43710e41debfc88b442c6cb71e18c0cdd190e62843d8f678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dewdney, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:41:00 compute-0 podman[351861]: 2025-10-02 12:41:00.286070896 +0000 UTC m=+0.695524927 container start 8f641c37e20bec1d43710e41debfc88b442c6cb71e18c0cdd190e62843d8f678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:41:00 compute-0 condescending_dewdney[351877]: 167 167
Oct 02 12:41:00 compute-0 systemd[1]: libpod-8f641c37e20bec1d43710e41debfc88b442c6cb71e18c0cdd190e62843d8f678.scope: Deactivated successfully.
Oct 02 12:41:00 compute-0 podman[351861]: 2025-10-02 12:41:00.404041663 +0000 UTC m=+0.813495704 container attach 8f641c37e20bec1d43710e41debfc88b442c6cb71e18c0cdd190e62843d8f678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dewdney, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 12:41:00 compute-0 podman[351861]: 2025-10-02 12:41:00.405245523 +0000 UTC m=+0.814699554 container died 8f641c37e20bec1d43710e41debfc88b442c6cb71e18c0cdd190e62843d8f678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:41:00 compute-0 nova_compute[257802]: 2025-10-02 12:41:00.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:00.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:00 compute-0 ceph-mon[73607]: pgmap v2412: 305 pgs: 305 active+clean; 374 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.1 MiB/s wr, 154 op/s
Oct 02 12:41:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/708667367' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-803ed9bbfcefdb8180b8a9003f3d013e7437f15acffbf243564353217f11416f-merged.mount: Deactivated successfully.
Oct 02 12:41:01 compute-0 podman[351861]: 2025-10-02 12:41:01.578767845 +0000 UTC m=+1.988221916 container remove 8f641c37e20bec1d43710e41debfc88b442c6cb71e18c0cdd190e62843d8f678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dewdney, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:41:01 compute-0 systemd[1]: libpod-conmon-8f641c37e20bec1d43710e41debfc88b442c6cb71e18c0cdd190e62843d8f678.scope: Deactivated successfully.
Oct 02 12:41:01 compute-0 podman[351902]: 2025-10-02 12:41:01.728456827 +0000 UTC m=+0.028812626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:41:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:41:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:01.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:41:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2413: 305 pgs: 305 active+clean; 414 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 6.0 MiB/s wr, 117 op/s
Oct 02 12:41:01 compute-0 podman[351902]: 2025-10-02 12:41:01.955920772 +0000 UTC m=+0.256276551 container create 5eda93d1f46cb34c57b6cad6e880d1ded706cff4707a47b9138d5a2b2562b6c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wright, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:41:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4004202773' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:02 compute-0 systemd[1]: Started libpod-conmon-5eda93d1f46cb34c57b6cad6e880d1ded706cff4707a47b9138d5a2b2562b6c8.scope.
Oct 02 12:41:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:41:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6cb8fda77a0c37420a88a5a585eb620da1a11e5924de3ba4875c0c2278f39b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:41:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6cb8fda77a0c37420a88a5a585eb620da1a11e5924de3ba4875c0c2278f39b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:41:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6cb8fda77a0c37420a88a5a585eb620da1a11e5924de3ba4875c0c2278f39b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:41:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6cb8fda77a0c37420a88a5a585eb620da1a11e5924de3ba4875c0c2278f39b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:41:02 compute-0 nova_compute[257802]: 2025-10-02 12:41:02.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:02 compute-0 podman[351902]: 2025-10-02 12:41:02.638458761 +0000 UTC m=+0.938814560 container init 5eda93d1f46cb34c57b6cad6e880d1ded706cff4707a47b9138d5a2b2562b6c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 12:41:02 compute-0 podman[351902]: 2025-10-02 12:41:02.645885252 +0000 UTC m=+0.946241031 container start 5eda93d1f46cb34c57b6cad6e880d1ded706cff4707a47b9138d5a2b2562b6c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wright, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Oct 02 12:41:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:41:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:02.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:41:02 compute-0 podman[351902]: 2025-10-02 12:41:02.73569268 +0000 UTC m=+1.036048479 container attach 5eda93d1f46cb34c57b6cad6e880d1ded706cff4707a47b9138d5a2b2562b6c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 12:41:03 compute-0 nova_compute[257802]: 2025-10-02 12:41:03.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:41:03 compute-0 nova_compute[257802]: 2025-10-02 12:41:03.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:41:03 compute-0 nova_compute[257802]: 2025-10-02 12:41:03.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:41:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:03 compute-0 ceph-mon[73607]: pgmap v2413: 305 pgs: 305 active+clean; 414 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 6.0 MiB/s wr, 117 op/s
Oct 02 12:41:03 compute-0 sad_wright[351918]: {
Oct 02 12:41:03 compute-0 sad_wright[351918]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:41:03 compute-0 sad_wright[351918]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:41:03 compute-0 sad_wright[351918]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:41:03 compute-0 sad_wright[351918]:         "osd_id": 1,
Oct 02 12:41:03 compute-0 sad_wright[351918]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:41:03 compute-0 sad_wright[351918]:         "type": "bluestore"
Oct 02 12:41:03 compute-0 sad_wright[351918]:     }
Oct 02 12:41:03 compute-0 sad_wright[351918]: }
Oct 02 12:41:03 compute-0 systemd[1]: libpod-5eda93d1f46cb34c57b6cad6e880d1ded706cff4707a47b9138d5a2b2562b6c8.scope: Deactivated successfully.
Oct 02 12:41:03 compute-0 podman[351902]: 2025-10-02 12:41:03.761253911 +0000 UTC m=+2.061609690 container died 5eda93d1f46cb34c57b6cad6e880d1ded706cff4707a47b9138d5a2b2562b6c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wright, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 12:41:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:41:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:03.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:41:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2414: 305 pgs: 305 active+clean; 418 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 6.6 MiB/s wr, 124 op/s
Oct 02 12:41:03 compute-0 nova_compute[257802]: 2025-10-02 12:41:03.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6cb8fda77a0c37420a88a5a585eb620da1a11e5924de3ba4875c0c2278f39b3-merged.mount: Deactivated successfully.
Oct 02 12:41:04 compute-0 podman[351902]: 2025-10-02 12:41:04.180674752 +0000 UTC m=+2.481030531 container remove 5eda93d1f46cb34c57b6cad6e880d1ded706cff4707a47b9138d5a2b2562b6c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wright, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:41:04 compute-0 sudo[351795]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:41:04 compute-0 systemd[1]: libpod-conmon-5eda93d1f46cb34c57b6cad6e880d1ded706cff4707a47b9138d5a2b2562b6c8.scope: Deactivated successfully.
Oct 02 12:41:04 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:41:04 compute-0 podman[351955]: 2025-10-02 12:41:04.338767921 +0000 UTC m=+0.077086057 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:41:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:41:04 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:41:04 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 6e54afa8-503b-4d40-8c4b-707ec862f9f4 does not exist
Oct 02 12:41:04 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 52b6ab5a-1d50-4c9b-89cf-292630b05528 does not exist
Oct 02 12:41:04 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 2f48b5af-d47c-4080-a05f-b15c525a77ec does not exist
Oct 02 12:41:04 compute-0 sudo[351981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:41:04 compute-0 sudo[351981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:04 compute-0 sudo[351981]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:04 compute-0 sudo[352006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:41:04 compute-0 sudo[352006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:04 compute-0 sudo[352006]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:04.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:05 compute-0 ceph-mon[73607]: pgmap v2414: 305 pgs: 305 active+clean; 418 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 6.6 MiB/s wr, 124 op/s
Oct 02 12:41:05 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:41:05 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:41:05 compute-0 nova_compute[257802]: 2025-10-02 12:41:05.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:41:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:05.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:41:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2415: 305 pgs: 305 active+clean; 421 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 6.9 MiB/s wr, 105 op/s
Oct 02 12:41:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:06.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:07 compute-0 nova_compute[257802]: 2025-10-02 12:41:07.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:41:07 compute-0 ceph-mon[73607]: pgmap v2415: 305 pgs: 305 active+clean; 421 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 6.9 MiB/s wr, 105 op/s
Oct 02 12:41:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:07.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2416: 305 pgs: 305 active+clean; 429 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 6.8 MiB/s wr, 107 op/s
Oct 02 12:41:08 compute-0 nova_compute[257802]: 2025-10-02 12:41:08.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:41:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:08.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:08 compute-0 nova_compute[257802]: 2025-10-02 12:41:08.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:09 compute-0 nova_compute[257802]: 2025-10-02 12:41:09.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:41:09 compute-0 ceph-mon[73607]: pgmap v2416: 305 pgs: 305 active+clean; 429 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 6.8 MiB/s wr, 107 op/s
Oct 02 12:41:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1884910576' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:41:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1884910576' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:41:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:09.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2417: 305 pgs: 305 active+clean; 438 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.8 MiB/s wr, 127 op/s
Oct 02 12:41:10 compute-0 nova_compute[257802]: 2025-10-02 12:41:10.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:10.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:10 compute-0 ceph-mon[73607]: pgmap v2417: 305 pgs: 305 active+clean; 438 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.8 MiB/s wr, 127 op/s
Oct 02 12:41:10 compute-0 sudo[352035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:41:10 compute-0 sudo[352035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:10 compute-0 sudo[352035]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:10 compute-0 sudo[352060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:41:10 compute-0 sudo[352060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:10 compute-0 sudo[352060]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3051522132' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:41:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3051522132' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:41:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:41:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:11.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:41:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2418: 305 pgs: 305 active+clean; 438 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.3 MiB/s wr, 107 op/s
Oct 02 12:41:12 compute-0 nova_compute[257802]: 2025-10-02 12:41:12.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:12.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:41:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:41:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:41:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:41:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:41:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:41:12 compute-0 ceph-mon[73607]: pgmap v2418: 305 pgs: 305 active+clean; 438 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.3 MiB/s wr, 107 op/s
Oct 02 12:41:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2115603472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:41:13 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1570411965' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:41:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:41:13 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1570411965' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:41:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:13.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2419: 305 pgs: 305 active+clean; 453 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.9 MiB/s wr, 92 op/s
Oct 02 12:41:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1570411965' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:41:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1570411965' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:41:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/480575300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:13 compute-0 nova_compute[257802]: 2025-10-02 12:41:13.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:13 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Oct 02 12:41:13 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:41:13.997013) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:41:13 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Oct 02 12:41:13 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408873997061, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 783, "num_deletes": 251, "total_data_size": 1050621, "memory_usage": 1068368, "flush_reason": "Manual Compaction"}
Oct 02 12:41:13 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Oct 02 12:41:14 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408874023464, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 1027815, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53035, "largest_seqno": 53817, "table_properties": {"data_size": 1023837, "index_size": 1694, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9658, "raw_average_key_size": 20, "raw_value_size": 1015525, "raw_average_value_size": 2111, "num_data_blocks": 74, "num_entries": 481, "num_filter_entries": 481, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408812, "oldest_key_time": 1759408812, "file_creation_time": 1759408873, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:41:14 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 26487 microseconds, and 3318 cpu microseconds.
Oct 02 12:41:14 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:41:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:41:14.023504) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 1027815 bytes OK
Oct 02 12:41:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:41:14.023522) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Oct 02 12:41:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:41:14.073419) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Oct 02 12:41:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:41:14.073464) EVENT_LOG_v1 {"time_micros": 1759408874073453, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:41:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:41:14.073489) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:41:14 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 1046665, prev total WAL file size 1077433, number of live WAL files 2.
Oct 02 12:41:14 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:41:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:41:14.074133) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Oct 02 12:41:14 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:41:14 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(1003KB)], [116(10MB)]
Oct 02 12:41:14 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408874074168, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 12411689, "oldest_snapshot_seqno": -1}
Oct 02 12:41:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:41:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/952472254' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:14 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 8196 keys, 10536251 bytes, temperature: kUnknown
Oct 02 12:41:14 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408874168026, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 10536251, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10483584, "index_size": 31051, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20549, "raw_key_size": 212584, "raw_average_key_size": 25, "raw_value_size": 10339820, "raw_average_value_size": 1261, "num_data_blocks": 1210, "num_entries": 8196, "num_filter_entries": 8196, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759408874, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:41:14 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:41:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:41:14.168535) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 10536251 bytes
Oct 02 12:41:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:41:14.173220) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 131.7 rd, 111.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.9 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(22.3) write-amplify(10.3) OK, records in: 8720, records dropped: 524 output_compression: NoCompression
Oct 02 12:41:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:41:14.173240) EVENT_LOG_v1 {"time_micros": 1759408874173231, "job": 70, "event": "compaction_finished", "compaction_time_micros": 94218, "compaction_time_cpu_micros": 23664, "output_level": 6, "num_output_files": 1, "total_output_size": 10536251, "num_input_records": 8720, "num_output_records": 8196, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:41:14 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:41:14 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408874173517, "job": 70, "event": "table_file_deletion", "file_number": 118}
Oct 02 12:41:14 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:41:14 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408874175451, "job": 70, "event": "table_file_deletion", "file_number": 116}
Oct 02 12:41:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:41:14.074039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:41:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:41:14.175505) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:41:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:41:14.175509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:41:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:41:14.175510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:41:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:41:14.175512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:41:14 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:41:14.175513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:41:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:14.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:15 compute-0 ceph-mon[73607]: pgmap v2419: 305 pgs: 305 active+clean; 453 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.9 MiB/s wr, 92 op/s
Oct 02 12:41:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/952472254' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3784981824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:15 compute-0 nova_compute[257802]: 2025-10-02 12:41:15.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:15.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2420: 305 pgs: 305 active+clean; 465 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 333 KiB/s rd, 1.8 MiB/s wr, 112 op/s
Oct 02 12:41:16 compute-0 nova_compute[257802]: 2025-10-02 12:41:16.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:41:16 compute-0 nova_compute[257802]: 2025-10-02 12:41:16.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:41:16 compute-0 nova_compute[257802]: 2025-10-02 12:41:16.097 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:41:16 compute-0 nova_compute[257802]: 2025-10-02 12:41:16.097 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:41:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:16.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:17 compute-0 nova_compute[257802]: 2025-10-02 12:41:17.148 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-17766045-13fc-4377-848f-6815e8a474d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:41:17 compute-0 nova_compute[257802]: 2025-10-02 12:41:17.149 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-17766045-13fc-4377-848f-6815e8a474d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:41:17 compute-0 nova_compute[257802]: 2025-10-02 12:41:17.149 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:41:17 compute-0 nova_compute[257802]: 2025-10-02 12:41:17.149 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid 17766045-13fc-4377-848f-6815e8a474d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:41:17 compute-0 ceph-mon[73607]: pgmap v2420: 305 pgs: 305 active+clean; 465 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 333 KiB/s rd, 1.8 MiB/s wr, 112 op/s
Oct 02 12:41:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:17.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2421: 305 pgs: 305 active+clean; 485 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 243 KiB/s rd, 2.3 MiB/s wr, 104 op/s
Oct 02 12:41:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:18 compute-0 nova_compute[257802]: 2025-10-02 12:41:18.280 2 DEBUG oslo_concurrency.lockutils [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Acquiring lock "714ae75f-1424-4b97-b849-84e5b4e77668" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:18 compute-0 nova_compute[257802]: 2025-10-02 12:41:18.280 2 DEBUG oslo_concurrency.lockutils [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lock "714ae75f-1424-4b97-b849-84e5b4e77668" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:18 compute-0 nova_compute[257802]: 2025-10-02 12:41:18.315 2 DEBUG nova.compute.manager [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:41:18 compute-0 nova_compute[257802]: 2025-10-02 12:41:18.458 2 DEBUG oslo_concurrency.lockutils [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:18 compute-0 nova_compute[257802]: 2025-10-02 12:41:18.458 2 DEBUG oslo_concurrency.lockutils [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:18 compute-0 nova_compute[257802]: 2025-10-02 12:41:18.467 2 DEBUG nova.virt.hardware [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:41:18 compute-0 nova_compute[257802]: 2025-10-02 12:41:18.467 2 INFO nova.compute.claims [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:41:18 compute-0 nova_compute[257802]: 2025-10-02 12:41:18.667 2 DEBUG oslo_concurrency.processutils [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:18.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:18 compute-0 nova_compute[257802]: 2025-10-02 12:41:18.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:19 compute-0 nova_compute[257802]: 2025-10-02 12:41:19.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:41:19 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/201762902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:19 compute-0 nova_compute[257802]: 2025-10-02 12:41:19.098 2 DEBUG oslo_concurrency.processutils [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:19 compute-0 nova_compute[257802]: 2025-10-02 12:41:19.104 2 DEBUG nova.compute.provider_tree [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:41:19 compute-0 nova_compute[257802]: 2025-10-02 12:41:19.129 2 DEBUG nova.scheduler.client.report [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:41:19 compute-0 nova_compute[257802]: 2025-10-02 12:41:19.171 2 DEBUG oslo_concurrency.lockutils [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:19 compute-0 nova_compute[257802]: 2025-10-02 12:41:19.171 2 DEBUG nova.compute.manager [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:41:19 compute-0 nova_compute[257802]: 2025-10-02 12:41:19.224 2 DEBUG nova.compute.manager [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:41:19 compute-0 nova_compute[257802]: 2025-10-02 12:41:19.225 2 DEBUG nova.network.neutron [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:41:19 compute-0 nova_compute[257802]: 2025-10-02 12:41:19.256 2 INFO nova.virt.libvirt.driver [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:41:19 compute-0 ceph-mon[73607]: pgmap v2421: 305 pgs: 305 active+clean; 485 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 243 KiB/s rd, 2.3 MiB/s wr, 104 op/s
Oct 02 12:41:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/201762902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:19 compute-0 nova_compute[257802]: 2025-10-02 12:41:19.290 2 DEBUG nova.compute.manager [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:41:19 compute-0 nova_compute[257802]: 2025-10-02 12:41:19.341 2 INFO nova.virt.block_device [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Booting with volume 0af2788f-a0c7-4253-b4eb-758a295353f8 at /dev/vda
Oct 02 12:41:19 compute-0 nova_compute[257802]: 2025-10-02 12:41:19.456 2 DEBUG nova.policy [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b82c89ad6c4a49e78943f7a92d0a6560', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a41d99312f014c65adddea4f70536a15', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:41:19 compute-0 nova_compute[257802]: 2025-10-02 12:41:19.689 2 DEBUG os_brick.utils [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:41:19 compute-0 nova_compute[257802]: 2025-10-02 12:41:19.690 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:19 compute-0 nova_compute[257802]: 2025-10-02 12:41:19.700 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:19 compute-0 nova_compute[257802]: 2025-10-02 12:41:19.700 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[a7d84f3e-eb89-43b7-ba37-14f560181b3e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:19 compute-0 nova_compute[257802]: 2025-10-02 12:41:19.701 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:19 compute-0 nova_compute[257802]: 2025-10-02 12:41:19.708 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:19 compute-0 nova_compute[257802]: 2025-10-02 12:41:19.708 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[097c0e07-c329-44a3-b795-2bd522e4dc3e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:19 compute-0 nova_compute[257802]: 2025-10-02 12:41:19.709 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:19 compute-0 nova_compute[257802]: 2025-10-02 12:41:19.718 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:19 compute-0 nova_compute[257802]: 2025-10-02 12:41:19.718 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[d97bef16-ff1f-4048-9c2a-3f9b7b1f8b3f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:19 compute-0 nova_compute[257802]: 2025-10-02 12:41:19.719 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[9d45ea5d-b900-4df4-aa91-7d5ca23bb351]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:19 compute-0 nova_compute[257802]: 2025-10-02 12:41:19.720 2 DEBUG oslo_concurrency.processutils [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:19 compute-0 nova_compute[257802]: 2025-10-02 12:41:19.750 2 DEBUG oslo_concurrency.processutils [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:19 compute-0 nova_compute[257802]: 2025-10-02 12:41:19.753 2 DEBUG os_brick.initiator.connectors.lightos [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:41:19 compute-0 nova_compute[257802]: 2025-10-02 12:41:19.753 2 DEBUG os_brick.initiator.connectors.lightos [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:41:19 compute-0 nova_compute[257802]: 2025-10-02 12:41:19.753 2 DEBUG os_brick.initiator.connectors.lightos [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:41:19 compute-0 nova_compute[257802]: 2025-10-02 12:41:19.753 2 DEBUG os_brick.utils [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] <== get_connector_properties: return (63ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:41:19 compute-0 nova_compute[257802]: 2025-10-02 12:41:19.754 2 DEBUG nova.virt.block_device [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Updating existing volume attachment record: 6cbf0e30-594b-47ed-bef7-fc3b77bafd29 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:41:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:19.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2422: 305 pgs: 305 active+clean; 485 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 164 KiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 02 12:41:20 compute-0 nova_compute[257802]: 2025-10-02 12:41:20.590 2 DEBUG nova.network.neutron [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Successfully created port: 763709ed-3fe4-45a4-8a2f-4b21f4534590 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:41:20 compute-0 nova_compute[257802]: 2025-10-02 12:41:20.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:20.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:21 compute-0 nova_compute[257802]: 2025-10-02 12:41:21.013 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Updating instance_info_cache with network_info: [{"id": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "address": "fa:16:3e:ec:94:f8", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c06cc55-6b", "ovs_interfaceid": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:41:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:41:21 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/969228056' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:21 compute-0 ceph-mon[73607]: pgmap v2422: 305 pgs: 305 active+clean; 485 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 164 KiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 02 12:41:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/969228056' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2986351854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:21.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2423: 305 pgs: 305 active+clean; 485 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Oct 02 12:41:22 compute-0 nova_compute[257802]: 2025-10-02 12:41:22.469 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-17766045-13fc-4377-848f-6815e8a474d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:41:22 compute-0 nova_compute[257802]: 2025-10-02 12:41:22.469 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:41:22 compute-0 nova_compute[257802]: 2025-10-02 12:41:22.469 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:41:22 compute-0 nova_compute[257802]: 2025-10-02 12:41:22.520 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:22 compute-0 nova_compute[257802]: 2025-10-02 12:41:22.520 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:22 compute-0 nova_compute[257802]: 2025-10-02 12:41:22.521 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:22 compute-0 nova_compute[257802]: 2025-10-02 12:41:22.521 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:41:22 compute-0 nova_compute[257802]: 2025-10-02 12:41:22.521 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:22.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:22 compute-0 nova_compute[257802]: 2025-10-02 12:41:22.882 2 DEBUG nova.compute.manager [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:41:22 compute-0 nova_compute[257802]: 2025-10-02 12:41:22.884 2 DEBUG nova.virt.libvirt.driver [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:41:22 compute-0 nova_compute[257802]: 2025-10-02 12:41:22.884 2 INFO nova.virt.libvirt.driver [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Creating image(s)
Oct 02 12:41:22 compute-0 nova_compute[257802]: 2025-10-02 12:41:22.885 2 DEBUG nova.virt.libvirt.driver [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:41:22 compute-0 nova_compute[257802]: 2025-10-02 12:41:22.885 2 DEBUG nova.virt.libvirt.driver [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Ensure instance console log exists: /var/lib/nova/instances/714ae75f-1424-4b97-b849-84e5b4e77668/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:41:22 compute-0 nova_compute[257802]: 2025-10-02 12:41:22.885 2 DEBUG oslo_concurrency.lockutils [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:22 compute-0 nova_compute[257802]: 2025-10-02 12:41:22.886 2 DEBUG oslo_concurrency.lockutils [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:22 compute-0 nova_compute[257802]: 2025-10-02 12:41:22.886 2 DEBUG oslo_concurrency.lockutils [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:41:22 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1635733801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:22 compute-0 nova_compute[257802]: 2025-10-02 12:41:22.945 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:23 compute-0 nova_compute[257802]: 2025-10-02 12:41:23.036 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:41:23 compute-0 nova_compute[257802]: 2025-10-02 12:41:23.037 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:41:23 compute-0 nova_compute[257802]: 2025-10-02 12:41:23.129 2 DEBUG nova.network.neutron [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Successfully updated port: 763709ed-3fe4-45a4-8a2f-4b21f4534590 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:41:23 compute-0 nova_compute[257802]: 2025-10-02 12:41:23.157 2 DEBUG oslo_concurrency.lockutils [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Acquiring lock "refresh_cache-714ae75f-1424-4b97-b849-84e5b4e77668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:41:23 compute-0 nova_compute[257802]: 2025-10-02 12:41:23.158 2 DEBUG oslo_concurrency.lockutils [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Acquired lock "refresh_cache-714ae75f-1424-4b97-b849-84e5b4e77668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:41:23 compute-0 nova_compute[257802]: 2025-10-02 12:41:23.158 2 DEBUG nova.network.neutron [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:41:23 compute-0 nova_compute[257802]: 2025-10-02 12:41:23.216 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:41:23 compute-0 nova_compute[257802]: 2025-10-02 12:41:23.217 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4075MB free_disk=20.897205352783203GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:41:23 compute-0 nova_compute[257802]: 2025-10-02 12:41:23.217 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:23 compute-0 nova_compute[257802]: 2025-10-02 12:41:23.217 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:23 compute-0 nova_compute[257802]: 2025-10-02 12:41:23.269 2 DEBUG nova.compute.manager [req-c418a300-d4d6-4a00-a296-eac19c302cd0 req-6998b38f-dd33-49f5-aaa2-258deda53503 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Received event network-changed-763709ed-3fe4-45a4-8a2f-4b21f4534590 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:23 compute-0 nova_compute[257802]: 2025-10-02 12:41:23.270 2 DEBUG nova.compute.manager [req-c418a300-d4d6-4a00-a296-eac19c302cd0 req-6998b38f-dd33-49f5-aaa2-258deda53503 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Refreshing instance network info cache due to event network-changed-763709ed-3fe4-45a4-8a2f-4b21f4534590. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:41:23 compute-0 nova_compute[257802]: 2025-10-02 12:41:23.270 2 DEBUG oslo_concurrency.lockutils [req-c418a300-d4d6-4a00-a296-eac19c302cd0 req-6998b38f-dd33-49f5-aaa2-258deda53503 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-714ae75f-1424-4b97-b849-84e5b4e77668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:41:23 compute-0 nova_compute[257802]: 2025-10-02 12:41:23.321 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 17766045-13fc-4377-848f-6815e8a474d5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:41:23 compute-0 nova_compute[257802]: 2025-10-02 12:41:23.321 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 714ae75f-1424-4b97-b849-84e5b4e77668 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:41:23 compute-0 nova_compute[257802]: 2025-10-02 12:41:23.322 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:41:23 compute-0 nova_compute[257802]: 2025-10-02 12:41:23.322 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:41:23 compute-0 nova_compute[257802]: 2025-10-02 12:41:23.388 2 DEBUG nova.network.neutron [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:41:23 compute-0 nova_compute[257802]: 2025-10-02 12:41:23.391 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e341 do_prune osdmap full prune enabled
Oct 02 12:41:23 compute-0 ceph-mon[73607]: pgmap v2423: 305 pgs: 305 active+clean; 485 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Oct 02 12:41:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1635733801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e342 e342: 3 total, 3 up, 3 in
Oct 02 12:41:23 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e342: 3 total, 3 up, 3 in
Oct 02 12:41:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:41:23 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2606799857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:23 compute-0 nova_compute[257802]: 2025-10-02 12:41:23.819 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:23 compute-0 nova_compute[257802]: 2025-10-02 12:41:23.826 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:41:23 compute-0 nova_compute[257802]: 2025-10-02 12:41:23.862 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:41:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:41:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:23.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:41:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2425: 305 pgs: 305 active+clean; 485 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 1.6 MiB/s wr, 62 op/s
Oct 02 12:41:23 compute-0 nova_compute[257802]: 2025-10-02 12:41:23.900 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:41:23 compute-0 nova_compute[257802]: 2025-10-02 12:41:23.901 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:41:24 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2263938249' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:24 compute-0 nova_compute[257802]: 2025-10-02 12:41:24.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:24 compute-0 ceph-mon[73607]: osdmap e342: 3 total, 3 up, 3 in
Oct 02 12:41:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2110376952' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2606799857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2263938249' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:24.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.185 2 DEBUG nova.network.neutron [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Updating instance_info_cache with network_info: [{"id": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "address": "fa:16:3e:1a:b3:16", "network": {"id": "e7b8a8de-b6cd-4283-854b-a2bd919c371d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1851369337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a41d99312f014c65adddea4f70536a15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap763709ed-3f", "ovs_interfaceid": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.237 2 DEBUG oslo_concurrency.lockutils [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Releasing lock "refresh_cache-714ae75f-1424-4b97-b849-84e5b4e77668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.237 2 DEBUG nova.compute.manager [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Instance network_info: |[{"id": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "address": "fa:16:3e:1a:b3:16", "network": {"id": "e7b8a8de-b6cd-4283-854b-a2bd919c371d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1851369337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a41d99312f014c65adddea4f70536a15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap763709ed-3f", "ovs_interfaceid": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.238 2 DEBUG oslo_concurrency.lockutils [req-c418a300-d4d6-4a00-a296-eac19c302cd0 req-6998b38f-dd33-49f5-aaa2-258deda53503 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-714ae75f-1424-4b97-b849-84e5b4e77668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.238 2 DEBUG nova.network.neutron [req-c418a300-d4d6-4a00-a296-eac19c302cd0 req-6998b38f-dd33-49f5-aaa2-258deda53503 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Refreshing network info cache for port 763709ed-3fe4-45a4-8a2f-4b21f4534590 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.242 2 DEBUG nova.virt.libvirt.driver [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Start _get_guest_xml network_info=[{"id": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "address": "fa:16:3e:1a:b3:16", "network": {"id": "e7b8a8de-b6cd-4283-854b-a2bd919c371d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1851369337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a41d99312f014c65adddea4f70536a15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap763709ed-3f", "ovs_interfaceid": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'guest_format': None, 'attachment_id': '6cbf0e30-594b-47ed-bef7-fc3b77bafd29', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-0af2788f-a0c7-4253-b4eb-758a295353f8', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '0af2788f-a0c7-4253-b4eb-758a295353f8', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '714ae75f-1424-4b97-b849-84e5b4e77668', 'attached_at': '', 'detached_at': '', 'volume_id': '0af2788f-a0c7-4253-b4eb-758a295353f8', 'serial': '0af2788f-a0c7-4253-b4eb-758a295353f8'}, 'device_type': 'disk', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.247 2 WARNING nova.virt.libvirt.driver [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.252 2 DEBUG nova.virt.libvirt.host [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.252 2 DEBUG nova.virt.libvirt.host [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.255 2 DEBUG nova.virt.libvirt.host [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.256 2 DEBUG nova.virt.libvirt.host [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.257 2 DEBUG nova.virt.libvirt.driver [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.258 2 DEBUG nova.virt.hardware [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.258 2 DEBUG nova.virt.hardware [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.258 2 DEBUG nova.virt.hardware [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.258 2 DEBUG nova.virt.hardware [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.259 2 DEBUG nova.virt.hardware [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.259 2 DEBUG nova.virt.hardware [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.259 2 DEBUG nova.virt.hardware [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.259 2 DEBUG nova.virt.hardware [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.259 2 DEBUG nova.virt.hardware [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.260 2 DEBUG nova.virt.hardware [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.260 2 DEBUG nova.virt.hardware [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.310 2 DEBUG nova.storage.rbd_utils [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] rbd image 714ae75f-1424-4b97-b849-84e5b4e77668_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.315 2 DEBUG oslo_concurrency.processutils [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:25 compute-0 ceph-mon[73607]: pgmap v2425: 305 pgs: 305 active+clean; 485 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 1.6 MiB/s wr, 62 op/s
Oct 02 12:41:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/945226095' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:25.569 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:41:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:25.570 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:41:25 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3685184144' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.760 2 DEBUG oslo_concurrency.processutils [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.889 2 DEBUG nova.virt.libvirt.vif [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:41:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-686473254',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-686473254',id=153,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMKhJxrDtnxwBQUfhXEoiE7UJdnEItyt2MVgFBXsCoh01cS2FKjJZa0tSLP7/9uktcmwDXaXDiKLD638dMdEY8dQy2aXxdKxSuJAyk4atAc8PHb6iv+FO/634dBFNFVRVg==',key_name='tempest-TestInstancesWithCinderVolumes-1888663332',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a41d99312f014c65adddea4f70536a15',ramdisk_id='',reservation_id='r-0xfegyog',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestInstancesWithCinderVolumes-99684106',owner_user_name='tempest-TestInstancesWithCinderVolumes-99684106-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:41:19Z,user_data=None,user_id='b82c89ad6c4a49e78943f7a92d0a6560',uuid=714ae75f-1424-4b97-b849-84e5b4e77668,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "address": "fa:16:3e:1a:b3:16", "network": {"id": "e7b8a8de-b6cd-4283-854b-a2bd919c371d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1851369337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a41d99312f014c65adddea4f70536a15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap763709ed-3f", "ovs_interfaceid": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.889 2 DEBUG nova.network.os_vif_util [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Converting VIF {"id": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "address": "fa:16:3e:1a:b3:16", "network": {"id": "e7b8a8de-b6cd-4283-854b-a2bd919c371d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1851369337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a41d99312f014c65adddea4f70536a15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap763709ed-3f", "ovs_interfaceid": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.890 2 DEBUG nova.network.os_vif_util [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:b3:16,bridge_name='br-int',has_traffic_filtering=True,id=763709ed-3fe4-45a4-8a2f-4b21f4534590,network=Network(e7b8a8de-b6cd-4283-854b-a2bd919c371d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap763709ed-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.891 2 DEBUG nova.objects.instance [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lazy-loading 'pci_devices' on Instance uuid 714ae75f-1424-4b97-b849-84e5b4e77668 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:41:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2426: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 3.1 MiB/s wr, 20 op/s
Oct 02 12:41:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:25.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.997 2 DEBUG nova.virt.libvirt.driver [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:41:25 compute-0 nova_compute[257802]:   <uuid>714ae75f-1424-4b97-b849-84e5b4e77668</uuid>
Oct 02 12:41:25 compute-0 nova_compute[257802]:   <name>instance-00000099</name>
Oct 02 12:41:25 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:41:25 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:41:25 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:41:25 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:       <nova:name>tempest-TestInstancesWithCinderVolumes-server-686473254</nova:name>
Oct 02 12:41:25 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:41:25</nova:creationTime>
Oct 02 12:41:25 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:41:25 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:41:25 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:41:25 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:41:25 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:41:25 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:41:25 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:41:25 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:41:25 compute-0 nova_compute[257802]:         <nova:user uuid="b82c89ad6c4a49e78943f7a92d0a6560">tempest-TestInstancesWithCinderVolumes-99684106-project-member</nova:user>
Oct 02 12:41:25 compute-0 nova_compute[257802]:         <nova:project uuid="a41d99312f014c65adddea4f70536a15">tempest-TestInstancesWithCinderVolumes-99684106</nova:project>
Oct 02 12:41:25 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:41:25 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:41:25 compute-0 nova_compute[257802]:         <nova:port uuid="763709ed-3fe4-45a4-8a2f-4b21f4534590">
Oct 02 12:41:25 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:41:25 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:41:25 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:41:25 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <system>
Oct 02 12:41:25 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:41:25 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:41:25 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:41:25 compute-0 nova_compute[257802]:       <entry name="serial">714ae75f-1424-4b97-b849-84e5b4e77668</entry>
Oct 02 12:41:25 compute-0 nova_compute[257802]:       <entry name="uuid">714ae75f-1424-4b97-b849-84e5b4e77668</entry>
Oct 02 12:41:25 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     </system>
Oct 02 12:41:25 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:41:25 compute-0 nova_compute[257802]:   <os>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:   </os>
Oct 02 12:41:25 compute-0 nova_compute[257802]:   <features>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:   </features>
Oct 02 12:41:25 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:41:25 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:41:25 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:41:25 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/714ae75f-1424-4b97-b849-84e5b4e77668_disk.config">
Oct 02 12:41:25 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:       </source>
Oct 02 12:41:25 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:41:25 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:41:25 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:41:25 compute-0 nova_compute[257802]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:       <source protocol="rbd" name="volumes/volume-0af2788f-a0c7-4253-b4eb-758a295353f8">
Oct 02 12:41:25 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:       </source>
Oct 02 12:41:25 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:41:25 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:41:25 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:       <serial>0af2788f-a0c7-4253-b4eb-758a295353f8</serial>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:41:25 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:1a:b3:16"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:       <target dev="tap763709ed-3f"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:41:25 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/714ae75f-1424-4b97-b849-84e5b4e77668/console.log" append="off"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <video>
Oct 02 12:41:25 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     </video>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:41:25 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:41:25 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:41:25 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:41:25 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:41:25 compute-0 nova_compute[257802]: </domain>
Oct 02 12:41:25 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.999 2 DEBUG nova.compute.manager [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Preparing to wait for external event network-vif-plugged-763709ed-3fe4-45a4-8a2f-4b21f4534590 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.999 2 DEBUG oslo_concurrency.lockutils [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Acquiring lock "714ae75f-1424-4b97-b849-84e5b4e77668-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.999 2 DEBUG oslo_concurrency.lockutils [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lock "714ae75f-1424-4b97-b849-84e5b4e77668-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:25 compute-0 nova_compute[257802]: 2025-10-02 12:41:25.999 2 DEBUG oslo_concurrency.lockutils [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lock "714ae75f-1424-4b97-b849-84e5b4e77668-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:26 compute-0 nova_compute[257802]: 2025-10-02 12:41:26.000 2 DEBUG nova.virt.libvirt.vif [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:41:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-686473254',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-686473254',id=153,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMKhJxrDtnxwBQUfhXEoiE7UJdnEItyt2MVgFBXsCoh01cS2FKjJZa0tSLP7/9uktcmwDXaXDiKLD638dMdEY8dQy2aXxdKxSuJAyk4atAc8PHb6iv+FO/634dBFNFVRVg==',key_name='tempest-TestInstancesWithCinderVolumes-1888663332',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a41d99312f014c65adddea4f70536a15',ramdisk_id='',reservation_id='r-0xfegyog',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestInstancesWithCinderVolumes-99684106',owner_user_name='tempest-TestInstancesWithCinderVolumes-99684106-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:41:19Z,user_data=None,user_id='b82c89ad6c4a49e78943f7a92d0a6560',uuid=714ae75f-1424-4b97-b849-84e5b4e77668,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "address": "fa:16:3e:1a:b3:16", "network": {"id": "e7b8a8de-b6cd-4283-854b-a2bd919c371d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1851369337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a41d99312f014c65adddea4f70536a15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap763709ed-3f", "ovs_interfaceid": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:41:26 compute-0 nova_compute[257802]: 2025-10-02 12:41:26.000 2 DEBUG nova.network.os_vif_util [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Converting VIF {"id": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "address": "fa:16:3e:1a:b3:16", "network": {"id": "e7b8a8de-b6cd-4283-854b-a2bd919c371d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1851369337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a41d99312f014c65adddea4f70536a15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap763709ed-3f", "ovs_interfaceid": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:41:26 compute-0 nova_compute[257802]: 2025-10-02 12:41:26.001 2 DEBUG nova.network.os_vif_util [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:b3:16,bridge_name='br-int',has_traffic_filtering=True,id=763709ed-3fe4-45a4-8a2f-4b21f4534590,network=Network(e7b8a8de-b6cd-4283-854b-a2bd919c371d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap763709ed-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:41:26 compute-0 nova_compute[257802]: 2025-10-02 12:41:26.001 2 DEBUG os_vif [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:b3:16,bridge_name='br-int',has_traffic_filtering=True,id=763709ed-3fe4-45a4-8a2f-4b21f4534590,network=Network(e7b8a8de-b6cd-4283-854b-a2bd919c371d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap763709ed-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:41:26 compute-0 nova_compute[257802]: 2025-10-02 12:41:26.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:26 compute-0 nova_compute[257802]: 2025-10-02 12:41:26.002 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:26 compute-0 nova_compute[257802]: 2025-10-02 12:41:26.002 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:41:26 compute-0 nova_compute[257802]: 2025-10-02 12:41:26.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:26 compute-0 nova_compute[257802]: 2025-10-02 12:41:26.005 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap763709ed-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:26 compute-0 nova_compute[257802]: 2025-10-02 12:41:26.006 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap763709ed-3f, col_values=(('external_ids', {'iface-id': '763709ed-3fe4-45a4-8a2f-4b21f4534590', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:b3:16', 'vm-uuid': '714ae75f-1424-4b97-b849-84e5b4e77668'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:26 compute-0 nova_compute[257802]: 2025-10-02 12:41:26.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:26 compute-0 NetworkManager[44987]: <info>  [1759408886.0090] manager: (tap763709ed-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/299)
Oct 02 12:41:26 compute-0 nova_compute[257802]: 2025-10-02 12:41:26.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:41:26 compute-0 nova_compute[257802]: 2025-10-02 12:41:26.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:26 compute-0 nova_compute[257802]: 2025-10-02 12:41:26.015 2 INFO os_vif [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:b3:16,bridge_name='br-int',has_traffic_filtering=True,id=763709ed-3fe4-45a4-8a2f-4b21f4534590,network=Network(e7b8a8de-b6cd-4283-854b-a2bd919c371d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap763709ed-3f')
Oct 02 12:41:26 compute-0 nova_compute[257802]: 2025-10-02 12:41:26.423 2 DEBUG nova.virt.libvirt.driver [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:41:26 compute-0 nova_compute[257802]: 2025-10-02 12:41:26.423 2 DEBUG nova.virt.libvirt.driver [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:41:26 compute-0 nova_compute[257802]: 2025-10-02 12:41:26.423 2 DEBUG nova.virt.libvirt.driver [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] No VIF found with MAC fa:16:3e:1a:b3:16, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:41:26 compute-0 nova_compute[257802]: 2025-10-02 12:41:26.424 2 INFO nova.virt.libvirt.driver [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Using config drive
Oct 02 12:41:26 compute-0 nova_compute[257802]: 2025-10-02 12:41:26.459 2 DEBUG nova.storage.rbd_utils [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] rbd image 714ae75f-1424-4b97-b849-84e5b4e77668_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:41:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3685184144' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:26 compute-0 ceph-mon[73607]: pgmap v2426: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 3.1 MiB/s wr, 20 op/s
Oct 02 12:41:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/115220749' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:26.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:26.959 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:26.959 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:26.960 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2427: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 2.0 MiB/s wr, 22 op/s
Oct 02 12:41:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:27.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:28.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:29 compute-0 nova_compute[257802]: 2025-10-02 12:41:29.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:29 compute-0 ceph-mon[73607]: pgmap v2427: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 2.0 MiB/s wr, 22 op/s
Oct 02 12:41:29 compute-0 nova_compute[257802]: 2025-10-02 12:41:29.445 2 INFO nova.virt.libvirt.driver [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Creating config drive at /var/lib/nova/instances/714ae75f-1424-4b97-b849-84e5b4e77668/disk.config
Oct 02 12:41:29 compute-0 nova_compute[257802]: 2025-10-02 12:41:29.449 2 DEBUG oslo_concurrency.processutils [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/714ae75f-1424-4b97-b849-84e5b4e77668/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp85g1ypy3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:29 compute-0 nova_compute[257802]: 2025-10-02 12:41:29.530 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:41:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:29.572 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:29 compute-0 nova_compute[257802]: 2025-10-02 12:41:29.580 2 DEBUG oslo_concurrency.processutils [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/714ae75f-1424-4b97-b849-84e5b4e77668/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp85g1ypy3" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:29 compute-0 nova_compute[257802]: 2025-10-02 12:41:29.609 2 DEBUG nova.storage.rbd_utils [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] rbd image 714ae75f-1424-4b97-b849-84e5b4e77668_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:41:29 compute-0 nova_compute[257802]: 2025-10-02 12:41:29.612 2 DEBUG oslo_concurrency.processutils [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/714ae75f-1424-4b97-b849-84e5b4e77668/disk.config 714ae75f-1424-4b97-b849-84e5b4e77668_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:29 compute-0 nova_compute[257802]: 2025-10-02 12:41:29.843 2 DEBUG oslo_concurrency.processutils [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/714ae75f-1424-4b97-b849-84e5b4e77668/disk.config 714ae75f-1424-4b97-b849-84e5b4e77668_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.231s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:29 compute-0 nova_compute[257802]: 2025-10-02 12:41:29.844 2 INFO nova.virt.libvirt.driver [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Deleting local config drive /var/lib/nova/instances/714ae75f-1424-4b97-b849-84e5b4e77668/disk.config because it was imported into RBD.
Oct 02 12:41:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2428: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 2.1 MiB/s wr, 31 op/s
Oct 02 12:41:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:29.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:29 compute-0 kernel: tap763709ed-3f: entered promiscuous mode
Oct 02 12:41:29 compute-0 NetworkManager[44987]: <info>  [1759408889.9093] manager: (tap763709ed-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/300)
Oct 02 12:41:29 compute-0 ovn_controller[148183]: 2025-10-02T12:41:29Z|00657|binding|INFO|Claiming lport 763709ed-3fe4-45a4-8a2f-4b21f4534590 for this chassis.
Oct 02 12:41:29 compute-0 nova_compute[257802]: 2025-10-02 12:41:29.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:29 compute-0 ovn_controller[148183]: 2025-10-02T12:41:29Z|00658|binding|INFO|763709ed-3fe4-45a4-8a2f-4b21f4534590: Claiming fa:16:3e:1a:b3:16 10.100.0.7
Oct 02 12:41:29 compute-0 ovn_controller[148183]: 2025-10-02T12:41:29Z|00659|binding|INFO|Setting lport 763709ed-3fe4-45a4-8a2f-4b21f4534590 ovn-installed in OVS
Oct 02 12:41:29 compute-0 nova_compute[257802]: 2025-10-02 12:41:29.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:29 compute-0 nova_compute[257802]: 2025-10-02 12:41:29.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:29 compute-0 systemd-udevd[352336]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:41:29 compute-0 systemd-machined[211836]: New machine qemu-75-instance-00000099.
Oct 02 12:41:29 compute-0 podman[352269]: 2025-10-02 12:41:29.957180899 +0000 UTC m=+0.095300533 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3)
Oct 02 12:41:29 compute-0 NetworkManager[44987]: <info>  [1759408889.9580] device (tap763709ed-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:41:29 compute-0 podman[352268]: 2025-10-02 12:41:29.958909051 +0000 UTC m=+0.096967763 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct 02 12:41:29 compute-0 NetworkManager[44987]: <info>  [1759408889.9591] device (tap763709ed-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:41:29 compute-0 podman[352270]: 2025-10-02 12:41:29.966563009 +0000 UTC m=+0.097696242 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 12:41:29 compute-0 systemd[1]: Started Virtual Machine qemu-75-instance-00000099.
Oct 02 12:41:30 compute-0 ovn_controller[148183]: 2025-10-02T12:41:30Z|00660|binding|INFO|Setting lport 763709ed-3fe4-45a4-8a2f-4b21f4534590 up in Southbound
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:30.196 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:b3:16 10.100.0.7'], port_security=['fa:16:3e:1a:b3:16 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '714ae75f-1424-4b97-b849-84e5b4e77668', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e7b8a8de-b6cd-4283-854b-a2bd919c371d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a41d99312f014c65adddea4f70536a15', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5f3db9ba-e6e8-41b4-b916-387b4ad385f8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4eaf2b53-ef61-475e-8161-94a8e63ff149, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=763709ed-3fe4-45a4-8a2f-4b21f4534590) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:30.198 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 763709ed-3fe4-45a4-8a2f-4b21f4534590 in datapath e7b8a8de-b6cd-4283-854b-a2bd919c371d bound to our chassis
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:30.199 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e7b8a8de-b6cd-4283-854b-a2bd919c371d
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:30.211 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ef0a0e1d-67b5-4649-ad62-3dea9a332e43]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:30.212 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape7b8a8de-b1 in ovnmeta-e7b8a8de-b6cd-4283-854b-a2bd919c371d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:30.216 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape7b8a8de-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:30.216 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[40e3a9e0-6c40-4d2d-b839-6b0c51405740]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:30.217 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cd6b5f50-8951-48a2-be75-c89715e4605b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:30.231 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[929fdc3d-f5af-452d-a600-cbb30c1e1514]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:30.257 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c4cd7e20-f538-4745-bb4d-b4125ab381d2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:30.290 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[3a90d70d-50e4-41f7-87a5-a8a6e298fccf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:30 compute-0 NetworkManager[44987]: <info>  [1759408890.2979] manager: (tape7b8a8de-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/301)
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:30.299 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[80be9021-4710-42e6-a3e0-c59d1933994a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:30.329 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[bb941ce7-1131-4f70-8a2a-0fdf6abe4087]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:30 compute-0 ovn_controller[148183]: 2025-10-02T12:41:30Z|00661|binding|INFO|Releasing lport 293fb87a-10df-4698-a69e-3023bca5a6a3 from this chassis (sb_readonly=0)
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:30.332 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[ad11ba2d-0f5d-4200-9dbc-d86dd475d886]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:30 compute-0 NetworkManager[44987]: <info>  [1759408890.3644] device (tape7b8a8de-b0): carrier: link connected
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:30.369 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[2d9c7a5f-6ee9-4518-b8bc-cd31979e024b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:30.385 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[57fd9466-5a37-4156-a165-1c9433918d46]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape7b8a8de-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:18:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 203], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695799, 'reachable_time': 38641, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352373, 'error': None, 'target': 'ovnmeta-e7b8a8de-b6cd-4283-854b-a2bd919c371d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:30.401 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[52e229ca-86af-461d-9482-75609291480f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7b:1819'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 695799, 'tstamp': 695799}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352374, 'error': None, 'target': 'ovnmeta-e7b8a8de-b6cd-4283-854b-a2bd919c371d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:30.418 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[35c9ef23-ab52-477a-a043-34058bd05ef9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape7b8a8de-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:18:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 203], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695799, 'reachable_time': 38641, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 352375, 'error': None, 'target': 'ovnmeta-e7b8a8de-b6cd-4283-854b-a2bd919c371d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:30.450 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7ed1d5ab-1c8e-41dd-8bac-97a35d34e08e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:30.505 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[65d3cb3b-2166-43c4-b9e7-6acf2e54cc5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:30.507 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7b8a8de-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:30.507 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:30.507 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape7b8a8de-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:30 compute-0 NetworkManager[44987]: <info>  [1759408890.5101] manager: (tape7b8a8de-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/302)
Oct 02 12:41:30 compute-0 kernel: tape7b8a8de-b0: entered promiscuous mode
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:30.512 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape7b8a8de-b0, col_values=(('external_ids', {'iface-id': '79bf28ab-e58e-4276-adf8-279ba85b1b49'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:30 compute-0 ovn_controller[148183]: 2025-10-02T12:41:30Z|00662|binding|INFO|Releasing lport 79bf28ab-e58e-4276-adf8-279ba85b1b49 from this chassis (sb_readonly=0)
Oct 02 12:41:30 compute-0 nova_compute[257802]: 2025-10-02 12:41:30.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:30.530 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e7b8a8de-b6cd-4283-854b-a2bd919c371d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e7b8a8de-b6cd-4283-854b-a2bd919c371d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:30.531 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e3c5fbaa-e7f6-47b5-9e83-222a24af45b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:30.531 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-e7b8a8de-b6cd-4283-854b-a2bd919c371d
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/e7b8a8de-b6cd-4283-854b-a2bd919c371d.pid.haproxy
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID e7b8a8de-b6cd-4283-854b-a2bd919c371d
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:41:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:41:30.532 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e7b8a8de-b6cd-4283-854b-a2bd919c371d', 'env', 'PROCESS_TAG=haproxy-e7b8a8de-b6cd-4283-854b-a2bd919c371d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e7b8a8de-b6cd-4283-854b-a2bd919c371d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:41:30 compute-0 nova_compute[257802]: 2025-10-02 12:41:30.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:30 compute-0 nova_compute[257802]: 2025-10-02 12:41:30.531 2 DEBUG nova.network.neutron [req-c418a300-d4d6-4a00-a296-eac19c302cd0 req-6998b38f-dd33-49f5-aaa2-258deda53503 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Updated VIF entry in instance network info cache for port 763709ed-3fe4-45a4-8a2f-4b21f4534590. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:41:30 compute-0 nova_compute[257802]: 2025-10-02 12:41:30.531 2 DEBUG nova.network.neutron [req-c418a300-d4d6-4a00-a296-eac19c302cd0 req-6998b38f-dd33-49f5-aaa2-258deda53503 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Updating instance_info_cache with network_info: [{"id": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "address": "fa:16:3e:1a:b3:16", "network": {"id": "e7b8a8de-b6cd-4283-854b-a2bd919c371d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1851369337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a41d99312f014c65adddea4f70536a15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap763709ed-3f", "ovs_interfaceid": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:41:30 compute-0 nova_compute[257802]: 2025-10-02 12:41:30.571 2 DEBUG oslo_concurrency.lockutils [req-c418a300-d4d6-4a00-a296-eac19c302cd0 req-6998b38f-dd33-49f5-aaa2-258deda53503 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-714ae75f-1424-4b97-b849-84e5b4e77668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:41:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:30.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:30 compute-0 podman[352408]: 2025-10-02 12:41:30.945794426 +0000 UTC m=+0.070380832 container create e1f3f2ed9e2aadb6df1e2e5a402752956c9819321c3ef75c3a044707e3862137 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e7b8a8de-b6cd-4283-854b-a2bd919c371d, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:41:30 compute-0 podman[352408]: 2025-10-02 12:41:30.899023202 +0000 UTC m=+0.023609638 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:41:31 compute-0 systemd[1]: Started libpod-conmon-e1f3f2ed9e2aadb6df1e2e5a402752956c9819321c3ef75c3a044707e3862137.scope.
Oct 02 12:41:31 compute-0 nova_compute[257802]: 2025-10-02 12:41:31.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:31 compute-0 ceph-mon[73607]: pgmap v2428: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 2.1 MiB/s wr, 31 op/s
Oct 02 12:41:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:41:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c8346cf1d5e47e85a7984b34b0820c2aa59f5156484ca527e226d4de46ee913/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:41:31 compute-0 sudo[352426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:41:31 compute-0 sudo[352426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:31 compute-0 sudo[352426]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:31 compute-0 podman[352408]: 2025-10-02 12:41:31.097855997 +0000 UTC m=+0.222442433 container init e1f3f2ed9e2aadb6df1e2e5a402752956c9819321c3ef75c3a044707e3862137 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e7b8a8de-b6cd-4283-854b-a2bd919c371d, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 12:41:31 compute-0 podman[352408]: 2025-10-02 12:41:31.10578346 +0000 UTC m=+0.230369866 container start e1f3f2ed9e2aadb6df1e2e5a402752956c9819321c3ef75c3a044707e3862137 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e7b8a8de-b6cd-4283-854b-a2bd919c371d, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:41:31 compute-0 neutron-haproxy-ovnmeta-e7b8a8de-b6cd-4283-854b-a2bd919c371d[352424]: [NOTICE]   (352457) : New worker (352478) forked
Oct 02 12:41:31 compute-0 neutron-haproxy-ovnmeta-e7b8a8de-b6cd-4283-854b-a2bd919c371d[352424]: [NOTICE]   (352457) : Loading success.
Oct 02 12:41:31 compute-0 sudo[352453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:41:31 compute-0 sudo[352453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:31 compute-0 sudo[352453]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2429: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 482 KiB/s rd, 2.1 MiB/s wr, 51 op/s
Oct 02 12:41:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:31.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:32 compute-0 nova_compute[257802]: 2025-10-02 12:41:32.100 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408892.1002984, 714ae75f-1424-4b97-b849-84e5b4e77668 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:41:32 compute-0 nova_compute[257802]: 2025-10-02 12:41:32.101 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] VM Started (Lifecycle Event)
Oct 02 12:41:32 compute-0 nova_compute[257802]: 2025-10-02 12:41:32.177 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:41:32 compute-0 nova_compute[257802]: 2025-10-02 12:41:32.181 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408892.100558, 714ae75f-1424-4b97-b849-84e5b4e77668 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:41:32 compute-0 nova_compute[257802]: 2025-10-02 12:41:32.182 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] VM Paused (Lifecycle Event)
Oct 02 12:41:32 compute-0 nova_compute[257802]: 2025-10-02 12:41:32.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:32 compute-0 nova_compute[257802]: 2025-10-02 12:41:32.297 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:41:32 compute-0 nova_compute[257802]: 2025-10-02 12:41:32.300 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:41:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:32.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:33 compute-0 nova_compute[257802]: 2025-10-02 12:41:33.231 2 DEBUG nova.compute.manager [req-60f77a4f-254c-4789-9ee7-a2da73110683 req-f4accc15-55da-40f5-a281-c73e8525cf42 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Received event network-vif-plugged-763709ed-3fe4-45a4-8a2f-4b21f4534590 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:33 compute-0 nova_compute[257802]: 2025-10-02 12:41:33.232 2 DEBUG oslo_concurrency.lockutils [req-60f77a4f-254c-4789-9ee7-a2da73110683 req-f4accc15-55da-40f5-a281-c73e8525cf42 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "714ae75f-1424-4b97-b849-84e5b4e77668-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:33 compute-0 nova_compute[257802]: 2025-10-02 12:41:33.233 2 DEBUG oslo_concurrency.lockutils [req-60f77a4f-254c-4789-9ee7-a2da73110683 req-f4accc15-55da-40f5-a281-c73e8525cf42 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "714ae75f-1424-4b97-b849-84e5b4e77668-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:33 compute-0 nova_compute[257802]: 2025-10-02 12:41:33.233 2 DEBUG oslo_concurrency.lockutils [req-60f77a4f-254c-4789-9ee7-a2da73110683 req-f4accc15-55da-40f5-a281-c73e8525cf42 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "714ae75f-1424-4b97-b849-84e5b4e77668-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:33 compute-0 nova_compute[257802]: 2025-10-02 12:41:33.234 2 DEBUG nova.compute.manager [req-60f77a4f-254c-4789-9ee7-a2da73110683 req-f4accc15-55da-40f5-a281-c73e8525cf42 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Processing event network-vif-plugged-763709ed-3fe4-45a4-8a2f-4b21f4534590 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:41:33 compute-0 nova_compute[257802]: 2025-10-02 12:41:33.234 2 DEBUG nova.compute.manager [req-60f77a4f-254c-4789-9ee7-a2da73110683 req-f4accc15-55da-40f5-a281-c73e8525cf42 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Received event network-vif-plugged-763709ed-3fe4-45a4-8a2f-4b21f4534590 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:33 compute-0 nova_compute[257802]: 2025-10-02 12:41:33.235 2 DEBUG oslo_concurrency.lockutils [req-60f77a4f-254c-4789-9ee7-a2da73110683 req-f4accc15-55da-40f5-a281-c73e8525cf42 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "714ae75f-1424-4b97-b849-84e5b4e77668-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:33 compute-0 nova_compute[257802]: 2025-10-02 12:41:33.235 2 DEBUG oslo_concurrency.lockutils [req-60f77a4f-254c-4789-9ee7-a2da73110683 req-f4accc15-55da-40f5-a281-c73e8525cf42 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "714ae75f-1424-4b97-b849-84e5b4e77668-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:33 compute-0 nova_compute[257802]: 2025-10-02 12:41:33.235 2 DEBUG oslo_concurrency.lockutils [req-60f77a4f-254c-4789-9ee7-a2da73110683 req-f4accc15-55da-40f5-a281-c73e8525cf42 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "714ae75f-1424-4b97-b849-84e5b4e77668-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:33 compute-0 nova_compute[257802]: 2025-10-02 12:41:33.236 2 DEBUG nova.compute.manager [req-60f77a4f-254c-4789-9ee7-a2da73110683 req-f4accc15-55da-40f5-a281-c73e8525cf42 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] No waiting events found dispatching network-vif-plugged-763709ed-3fe4-45a4-8a2f-4b21f4534590 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:41:33 compute-0 nova_compute[257802]: 2025-10-02 12:41:33.236 2 WARNING nova.compute.manager [req-60f77a4f-254c-4789-9ee7-a2da73110683 req-f4accc15-55da-40f5-a281-c73e8525cf42 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Received unexpected event network-vif-plugged-763709ed-3fe4-45a4-8a2f-4b21f4534590 for instance with vm_state building and task_state spawning.
Oct 02 12:41:33 compute-0 nova_compute[257802]: 2025-10-02 12:41:33.237 2 DEBUG nova.compute.manager [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:41:33 compute-0 nova_compute[257802]: 2025-10-02 12:41:33.241 2 DEBUG nova.virt.libvirt.driver [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:41:33 compute-0 nova_compute[257802]: 2025-10-02 12:41:33.246 2 INFO nova.virt.libvirt.driver [-] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Instance spawned successfully.
Oct 02 12:41:33 compute-0 nova_compute[257802]: 2025-10-02 12:41:33.246 2 DEBUG nova.virt.libvirt.driver [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:41:33 compute-0 nova_compute[257802]: 2025-10-02 12:41:33.259 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:41:33 compute-0 nova_compute[257802]: 2025-10-02 12:41:33.260 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408893.2404354, 714ae75f-1424-4b97-b849-84e5b4e77668 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:41:33 compute-0 nova_compute[257802]: 2025-10-02 12:41:33.260 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] VM Resumed (Lifecycle Event)
Oct 02 12:41:33 compute-0 nova_compute[257802]: 2025-10-02 12:41:33.277 2 DEBUG nova.virt.libvirt.driver [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:41:33 compute-0 nova_compute[257802]: 2025-10-02 12:41:33.278 2 DEBUG nova.virt.libvirt.driver [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:41:33 compute-0 nova_compute[257802]: 2025-10-02 12:41:33.278 2 DEBUG nova.virt.libvirt.driver [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:41:33 compute-0 nova_compute[257802]: 2025-10-02 12:41:33.279 2 DEBUG nova.virt.libvirt.driver [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:41:33 compute-0 nova_compute[257802]: 2025-10-02 12:41:33.279 2 DEBUG nova.virt.libvirt.driver [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:41:33 compute-0 nova_compute[257802]: 2025-10-02 12:41:33.280 2 DEBUG nova.virt.libvirt.driver [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:41:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:33 compute-0 nova_compute[257802]: 2025-10-02 12:41:33.350 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:41:33 compute-0 nova_compute[257802]: 2025-10-02 12:41:33.353 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:41:33 compute-0 ceph-mon[73607]: pgmap v2429: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 482 KiB/s rd, 2.1 MiB/s wr, 51 op/s
Oct 02 12:41:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3687820285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:33 compute-0 nova_compute[257802]: 2025-10-02 12:41:33.525 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:41:33 compute-0 nova_compute[257802]: 2025-10-02 12:41:33.607 2 INFO nova.compute.manager [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Took 10.72 seconds to spawn the instance on the hypervisor.
Oct 02 12:41:33 compute-0 nova_compute[257802]: 2025-10-02 12:41:33.608 2 DEBUG nova.compute.manager [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:41:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2430: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.0 MiB/s wr, 78 op/s
Oct 02 12:41:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:33.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:33 compute-0 nova_compute[257802]: 2025-10-02 12:41:33.995 2 INFO nova.compute.manager [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Took 15.58 seconds to build instance.
Oct 02 12:41:34 compute-0 nova_compute[257802]: 2025-10-02 12:41:34.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:34 compute-0 nova_compute[257802]: 2025-10-02 12:41:34.046 2 DEBUG oslo_concurrency.lockutils [None req-9a84fbb0-c044-4bfc-a8da-75015bdc8872 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lock "714ae75f-1424-4b97-b849-84e5b4e77668" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:34.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2362597005' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:34 compute-0 podman[352533]: 2025-10-02 12:41:34.987560843 +0000 UTC m=+0.121765060 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:41:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2431: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.7 MiB/s wr, 96 op/s
Oct 02 12:41:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:41:35 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4152341224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:35.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:35 compute-0 ceph-mon[73607]: pgmap v2430: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.0 MiB/s wr, 78 op/s
Oct 02 12:41:35 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/423294808' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:36 compute-0 nova_compute[257802]: 2025-10-02 12:41:36.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:41:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:36.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:41:36 compute-0 ceph-mon[73607]: pgmap v2431: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.7 MiB/s wr, 96 op/s
Oct 02 12:41:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/4152341224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2432: 305 pgs: 305 active+clean; 506 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 40 KiB/s wr, 138 op/s
Oct 02 12:41:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:37.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:38.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:39 compute-0 nova_compute[257802]: 2025-10-02 12:41:39.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:39 compute-0 ceph-mon[73607]: pgmap v2432: 305 pgs: 305 active+clean; 506 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 40 KiB/s wr, 138 op/s
Oct 02 12:41:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2433: 305 pgs: 305 active+clean; 538 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 1.4 MiB/s wr, 197 op/s
Oct 02 12:41:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:39.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:40.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:40 compute-0 unix_chkpwd[352564]: password check failed for user (root)
Oct 02 12:41:40 compute-0 sshd-session[352559]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.167.220.139  user=root
Oct 02 12:41:41 compute-0 nova_compute[257802]: 2025-10-02 12:41:41.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:41 compute-0 ceph-mon[73607]: pgmap v2433: 305 pgs: 305 active+clean; 538 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 1.4 MiB/s wr, 197 op/s
Oct 02 12:41:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2434: 305 pgs: 305 active+clean; 556 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 2.2 MiB/s wr, 247 op/s
Oct 02 12:41:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:41.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:41:42
Oct 02 12:41:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:41:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:41:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'backups', 'vms', '.mgr', '.rgw.root']
Oct 02 12:41:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:41:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:42.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:41:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:41:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:41:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:41:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:41:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:41:43 compute-0 sshd-session[352559]: Failed password for root from 43.167.220.139 port 44446 ssh2
Oct 02 12:41:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:41:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:41:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:41:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:41:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:41:43 compute-0 nova_compute[257802]: 2025-10-02 12:41:43.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:43 compute-0 ceph-mon[73607]: pgmap v2434: 305 pgs: 305 active+clean; 556 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 2.2 MiB/s wr, 247 op/s
Oct 02 12:41:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2435: 305 pgs: 305 active+clean; 558 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 2.4 MiB/s wr, 253 op/s
Oct 02 12:41:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:41:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:43.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:41:44 compute-0 nova_compute[257802]: 2025-10-02 12:41:44.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:41:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:41:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:41:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:41:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:41:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:41:44 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/922932443' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:44 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/922932443' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:44 compute-0 sshd-session[352559]: Connection closed by authenticating user root 43.167.220.139 port 44446 [preauth]
Oct 02 12:41:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:44.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:45 compute-0 ceph-mon[73607]: pgmap v2435: 305 pgs: 305 active+clean; 558 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 2.4 MiB/s wr, 253 op/s
Oct 02 12:41:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1658884160' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3628699595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2436: 305 pgs: 305 active+clean; 574 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 3.3 MiB/s wr, 256 op/s
Oct 02 12:41:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:45.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:46 compute-0 nova_compute[257802]: 2025-10-02 12:41:46.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:46 compute-0 ceph-mon[73607]: pgmap v2436: 305 pgs: 305 active+clean; 574 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 3.3 MiB/s wr, 256 op/s
Oct 02 12:41:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:46.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:47 compute-0 ovn_controller[148183]: 2025-10-02T12:41:47Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1a:b3:16 10.100.0.7
Oct 02 12:41:47 compute-0 ovn_controller[148183]: 2025-10-02T12:41:47Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1a:b3:16 10.100.0.7
Oct 02 12:41:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2437: 305 pgs: 305 active+clean; 588 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 4.5 MiB/s wr, 245 op/s
Oct 02 12:41:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:47.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:48 compute-0 nova_compute[257802]: 2025-10-02 12:41:48.221 2 DEBUG nova.compute.manager [req-dc1c5fce-5e58-4ada-97f9-2a261924d991 req-3c2ffcd7-7557-4b8e-b965-e4be007d4169 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Received event network-changed-4c06cc55-6b35-48e0-892a-4fd710f2cf39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:48 compute-0 nova_compute[257802]: 2025-10-02 12:41:48.221 2 DEBUG nova.compute.manager [req-dc1c5fce-5e58-4ada-97f9-2a261924d991 req-3c2ffcd7-7557-4b8e-b965-e4be007d4169 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Refreshing instance network info cache due to event network-changed-4c06cc55-6b35-48e0-892a-4fd710f2cf39. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:41:48 compute-0 nova_compute[257802]: 2025-10-02 12:41:48.224 2 DEBUG oslo_concurrency.lockutils [req-dc1c5fce-5e58-4ada-97f9-2a261924d991 req-3c2ffcd7-7557-4b8e-b965-e4be007d4169 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-17766045-13fc-4377-848f-6815e8a474d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:41:48 compute-0 nova_compute[257802]: 2025-10-02 12:41:48.224 2 DEBUG oslo_concurrency.lockutils [req-dc1c5fce-5e58-4ada-97f9-2a261924d991 req-3c2ffcd7-7557-4b8e-b965-e4be007d4169 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-17766045-13fc-4377-848f-6815e8a474d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:41:48 compute-0 nova_compute[257802]: 2025-10-02 12:41:48.225 2 DEBUG nova.network.neutron [req-dc1c5fce-5e58-4ada-97f9-2a261924d991 req-3c2ffcd7-7557-4b8e-b965-e4be007d4169 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Refreshing network info cache for port 4c06cc55-6b35-48e0-892a-4fd710f2cf39 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:41:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:41:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:48.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:41:48 compute-0 ceph-mon[73607]: pgmap v2437: 305 pgs: 305 active+clean; 588 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 4.5 MiB/s wr, 245 op/s
Oct 02 12:41:49 compute-0 nova_compute[257802]: 2025-10-02 12:41:49.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2438: 305 pgs: 305 active+clean; 615 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 6.2 MiB/s wr, 315 op/s
Oct 02 12:41:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:41:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:49.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:41:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:41:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:50.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:41:50 compute-0 ceph-mon[73607]: pgmap v2438: 305 pgs: 305 active+clean; 615 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 6.2 MiB/s wr, 315 op/s
Oct 02 12:41:51 compute-0 nova_compute[257802]: 2025-10-02 12:41:51.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:51 compute-0 sudo[352570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:41:51 compute-0 sudo[352570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:51 compute-0 sudo[352570]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:51 compute-0 sudo[352595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:41:51 compute-0 sudo[352595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:51 compute-0 sudo[352595]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:51 compute-0 nova_compute[257802]: 2025-10-02 12:41:51.459 2 DEBUG nova.network.neutron [req-dc1c5fce-5e58-4ada-97f9-2a261924d991 req-3c2ffcd7-7557-4b8e-b965-e4be007d4169 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Updated VIF entry in instance network info cache for port 4c06cc55-6b35-48e0-892a-4fd710f2cf39. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:41:51 compute-0 nova_compute[257802]: 2025-10-02 12:41:51.460 2 DEBUG nova.network.neutron [req-dc1c5fce-5e58-4ada-97f9-2a261924d991 req-3c2ffcd7-7557-4b8e-b965-e4be007d4169 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Updating instance_info_cache with network_info: [{"id": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "address": "fa:16:3e:ec:94:f8", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c06cc55-6b", "ovs_interfaceid": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:41:51 compute-0 nova_compute[257802]: 2025-10-02 12:41:51.607 2 DEBUG oslo_concurrency.lockutils [req-dc1c5fce-5e58-4ada-97f9-2a261924d991 req-3c2ffcd7-7557-4b8e-b965-e4be007d4169 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-17766045-13fc-4377-848f-6815e8a474d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:41:51 compute-0 nova_compute[257802]: 2025-10-02 12:41:51.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2439: 305 pgs: 305 active+clean; 632 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 5.6 MiB/s wr, 329 op/s
Oct 02 12:41:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:51.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:52 compute-0 nova_compute[257802]: 2025-10-02 12:41:52.082 2 DEBUG nova.compute.manager [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Oct 02 12:41:52 compute-0 nova_compute[257802]: 2025-10-02 12:41:52.213 2 DEBUG oslo_concurrency.lockutils [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:52 compute-0 nova_compute[257802]: 2025-10-02 12:41:52.214 2 DEBUG oslo_concurrency.lockutils [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:52 compute-0 nova_compute[257802]: 2025-10-02 12:41:52.269 2 DEBUG nova.objects.instance [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lazy-loading 'pci_requests' on Instance uuid f3566799-fdd0-46bf-8256-0294a227030a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:41:52 compute-0 nova_compute[257802]: 2025-10-02 12:41:52.301 2 DEBUG nova.virt.hardware [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:41:52 compute-0 nova_compute[257802]: 2025-10-02 12:41:52.302 2 INFO nova.compute.claims [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:41:52 compute-0 nova_compute[257802]: 2025-10-02 12:41:52.302 2 DEBUG nova.objects.instance [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lazy-loading 'resources' on Instance uuid f3566799-fdd0-46bf-8256-0294a227030a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:41:52 compute-0 nova_compute[257802]: 2025-10-02 12:41:52.319 2 DEBUG nova.objects.instance [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lazy-loading 'pci_devices' on Instance uuid f3566799-fdd0-46bf-8256-0294a227030a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:41:52 compute-0 nova_compute[257802]: 2025-10-02 12:41:52.386 2 INFO nova.compute.resource_tracker [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Updating resource usage from migration 8a38cf1e-2f03-4cc6-90dc-90b3b654c95b
Oct 02 12:41:52 compute-0 nova_compute[257802]: 2025-10-02 12:41:52.387 2 DEBUG nova.compute.resource_tracker [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Starting to track incoming migration 8a38cf1e-2f03-4cc6-90dc-90b3b654c95b with flavor eb3a53f1-304b-4cb0-acc3-abffce0fb181 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Oct 02 12:41:52 compute-0 nova_compute[257802]: 2025-10-02 12:41:52.488 2 DEBUG oslo_concurrency.processutils [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:52.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:41:52 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1244234377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:53 compute-0 ceph-mon[73607]: pgmap v2439: 305 pgs: 305 active+clean; 632 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 5.6 MiB/s wr, 329 op/s
Oct 02 12:41:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1244234377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:53 compute-0 nova_compute[257802]: 2025-10-02 12:41:53.005 2 DEBUG oslo_concurrency.processutils [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:53 compute-0 nova_compute[257802]: 2025-10-02 12:41:53.012 2 DEBUG nova.compute.provider_tree [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:41:53 compute-0 nova_compute[257802]: 2025-10-02 12:41:53.041 2 DEBUG nova.scheduler.client.report [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:41:53 compute-0 nova_compute[257802]: 2025-10-02 12:41:53.077 2 DEBUG oslo_concurrency.lockutils [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:53 compute-0 nova_compute[257802]: 2025-10-02 12:41:53.078 2 INFO nova.compute.manager [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Migrating
Oct 02 12:41:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2440: 305 pgs: 305 active+clean; 645 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 5.6 MiB/s wr, 308 op/s
Oct 02 12:41:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:53.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:54 compute-0 nova_compute[257802]: 2025-10-02 12:41:54.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:41:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:41:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:41:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:41:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005364641494509585 of space, bias 1.0, pg target 1.6093924483528754 quantized to 32 (current 32)
Oct 02 12:41:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:41:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009419929508654383 of space, bias 1.0, pg target 2.8165589230876606 quantized to 32 (current 32)
Oct 02 12:41:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:41:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:41:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:41:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003806103724641727 of space, bias 1.0, pg target 1.130412806218593 quantized to 32 (current 32)
Oct 02 12:41:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:41:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Oct 02 12:41:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:41:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:41:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:41:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.0002689954401637819 quantized to 32 (current 32)
Oct 02 12:41:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:41:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Oct 02 12:41:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:41:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:41:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:41:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Oct 02 12:41:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:41:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:54.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:41:55 compute-0 ceph-mon[73607]: pgmap v2440: 305 pgs: 305 active+clean; 645 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 5.6 MiB/s wr, 308 op/s
Oct 02 12:41:55 compute-0 sshd-session[352644]: Accepted publickey for nova from 192.168.122.101 port 38526 ssh2: ECDSA SHA256:RlBMWn3An7DGjBe9yfwGQtrEA9dOakLcJHFiZKvkVOc
Oct 02 12:41:55 compute-0 systemd[1]: Created slice User Slice of UID 42436.
Oct 02 12:41:55 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct 02 12:41:55 compute-0 systemd-logind[789]: New session 72 of user nova.
Oct 02 12:41:55 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct 02 12:41:55 compute-0 systemd[1]: Starting User Manager for UID 42436...
Oct 02 12:41:55 compute-0 systemd[352648]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:41:55 compute-0 systemd[352648]: Queued start job for default target Main User Target.
Oct 02 12:41:55 compute-0 systemd[352648]: Created slice User Application Slice.
Oct 02 12:41:55 compute-0 systemd[352648]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 02 12:41:55 compute-0 systemd[352648]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 12:41:55 compute-0 systemd[352648]: Reached target Paths.
Oct 02 12:41:55 compute-0 systemd[352648]: Reached target Timers.
Oct 02 12:41:55 compute-0 systemd[352648]: Starting D-Bus User Message Bus Socket...
Oct 02 12:41:55 compute-0 systemd[352648]: Starting Create User's Volatile Files and Directories...
Oct 02 12:41:55 compute-0 systemd[352648]: Finished Create User's Volatile Files and Directories.
Oct 02 12:41:55 compute-0 systemd[352648]: Listening on D-Bus User Message Bus Socket.
Oct 02 12:41:55 compute-0 systemd[352648]: Reached target Sockets.
Oct 02 12:41:55 compute-0 systemd[352648]: Reached target Basic System.
Oct 02 12:41:55 compute-0 systemd[352648]: Reached target Main User Target.
Oct 02 12:41:55 compute-0 systemd[352648]: Startup finished in 161ms.
Oct 02 12:41:55 compute-0 systemd[1]: Started User Manager for UID 42436.
Oct 02 12:41:55 compute-0 systemd[1]: Started Session 72 of User nova.
Oct 02 12:41:55 compute-0 sshd-session[352644]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:41:55 compute-0 sshd-session[352663]: Received disconnect from 192.168.122.101 port 38526:11: disconnected by user
Oct 02 12:41:55 compute-0 sshd-session[352663]: Disconnected from user nova 192.168.122.101 port 38526
Oct 02 12:41:55 compute-0 sshd-session[352644]: pam_unix(sshd:session): session closed for user nova
Oct 02 12:41:55 compute-0 systemd[1]: session-72.scope: Deactivated successfully.
Oct 02 12:41:55 compute-0 systemd-logind[789]: Session 72 logged out. Waiting for processes to exit.
Oct 02 12:41:55 compute-0 systemd-logind[789]: Removed session 72.
Oct 02 12:41:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2441: 305 pgs: 305 active+clean; 651 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 5.9 MiB/s wr, 322 op/s
Oct 02 12:41:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:55.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:56 compute-0 sshd-session[352665]: Accepted publickey for nova from 192.168.122.101 port 38530 ssh2: ECDSA SHA256:RlBMWn3An7DGjBe9yfwGQtrEA9dOakLcJHFiZKvkVOc
Oct 02 12:41:56 compute-0 nova_compute[257802]: 2025-10-02 12:41:56.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:56 compute-0 systemd-logind[789]: New session 74 of user nova.
Oct 02 12:41:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1073050676' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:41:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1073050676' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:41:56 compute-0 systemd[1]: Started Session 74 of User nova.
Oct 02 12:41:56 compute-0 sshd-session[352665]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:41:56 compute-0 sshd-session[352668]: Received disconnect from 192.168.122.101 port 38530:11: disconnected by user
Oct 02 12:41:56 compute-0 sshd-session[352668]: Disconnected from user nova 192.168.122.101 port 38530
Oct 02 12:41:56 compute-0 sshd-session[352665]: pam_unix(sshd:session): session closed for user nova
Oct 02 12:41:56 compute-0 systemd[1]: session-74.scope: Deactivated successfully.
Oct 02 12:41:56 compute-0 systemd-logind[789]: Session 74 logged out. Waiting for processes to exit.
Oct 02 12:41:56 compute-0 systemd-logind[789]: Removed session 74.
Oct 02 12:41:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:56.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:57 compute-0 ceph-mon[73607]: pgmap v2441: 305 pgs: 305 active+clean; 651 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 5.9 MiB/s wr, 322 op/s
Oct 02 12:41:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3597104220' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2442: 305 pgs: 305 active+clean; 651 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 4.9 MiB/s wr, 287 op/s
Oct 02 12:41:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:41:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:57.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:41:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:58 compute-0 ovn_controller[148183]: 2025-10-02T12:41:58Z|00663|memory|INFO|peak resident set size grew 50% in last 3498.9 seconds, from 16128 kB to 24204 kB
Oct 02 12:41:58 compute-0 ovn_controller[148183]: 2025-10-02T12:41:58Z|00664|memory|INFO|idl-cells-OVN_Southbound:10484 idl-cells-Open_vSwitch:984 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:348 lflow-cache-entries-cache-matches:289 lflow-cache-size-KB:1397 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:622 ofctrl_installed_flow_usage-KB:455 ofctrl_sb_flow_ref_usage-KB:234
Oct 02 12:41:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:41:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:58.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:41:58 compute-0 nova_compute[257802]: 2025-10-02 12:41:58.846 2 DEBUG nova.compute.manager [req-4a45301b-bbfa-49c7-a148-f847fa1e7176 req-641987fe-fa55-4ec0-b47a-ff683b745ce3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Received event network-vif-unplugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:58 compute-0 nova_compute[257802]: 2025-10-02 12:41:58.847 2 DEBUG oslo_concurrency.lockutils [req-4a45301b-bbfa-49c7-a148-f847fa1e7176 req-641987fe-fa55-4ec0-b47a-ff683b745ce3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "f3566799-fdd0-46bf-8256-0294a227030a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:58 compute-0 nova_compute[257802]: 2025-10-02 12:41:58.847 2 DEBUG oslo_concurrency.lockutils [req-4a45301b-bbfa-49c7-a148-f847fa1e7176 req-641987fe-fa55-4ec0-b47a-ff683b745ce3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:58 compute-0 nova_compute[257802]: 2025-10-02 12:41:58.847 2 DEBUG oslo_concurrency.lockutils [req-4a45301b-bbfa-49c7-a148-f847fa1e7176 req-641987fe-fa55-4ec0-b47a-ff683b745ce3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:58 compute-0 nova_compute[257802]: 2025-10-02 12:41:58.847 2 DEBUG nova.compute.manager [req-4a45301b-bbfa-49c7-a148-f847fa1e7176 req-641987fe-fa55-4ec0-b47a-ff683b745ce3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] No waiting events found dispatching network-vif-unplugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:41:58 compute-0 nova_compute[257802]: 2025-10-02 12:41:58.848 2 WARNING nova.compute.manager [req-4a45301b-bbfa-49c7-a148-f847fa1e7176 req-641987fe-fa55-4ec0-b47a-ff683b745ce3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Received unexpected event network-vif-unplugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 for instance with vm_state active and task_state resize_migrating.
Oct 02 12:41:59 compute-0 nova_compute[257802]: 2025-10-02 12:41:59.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e342 do_prune osdmap full prune enabled
Oct 02 12:41:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e343 e343: 3 total, 3 up, 3 in
Oct 02 12:41:59 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e343: 3 total, 3 up, 3 in
Oct 02 12:41:59 compute-0 ceph-mon[73607]: pgmap v2442: 305 pgs: 305 active+clean; 651 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 4.9 MiB/s wr, 287 op/s
Oct 02 12:41:59 compute-0 nova_compute[257802]: 2025-10-02 12:41:59.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2444: 305 pgs: 305 active+clean; 671 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 4.6 MiB/s wr, 245 op/s
Oct 02 12:41:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:41:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:59.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:00 compute-0 ceph-mon[73607]: osdmap e343: 3 total, 3 up, 3 in
Oct 02 12:42:00 compute-0 nova_compute[257802]: 2025-10-02 12:42:00.724 2 INFO nova.network.neutron [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Updating port ff63f6e3-cb88-42d6-98e3-4f78430c7896 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 02 12:42:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:42:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:00.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:42:00 compute-0 podman[352674]: 2025-10-02 12:42:00.938675616 +0000 UTC m=+0.074396831 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:42:00 compute-0 podman[352678]: 2025-10-02 12:42:00.9580387 +0000 UTC m=+0.069298326 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid)
Oct 02 12:42:00 compute-0 podman[352673]: 2025-10-02 12:42:00.962802827 +0000 UTC m=+0.102034047 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:42:01 compute-0 nova_compute[257802]: 2025-10-02 12:42:01.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:01 compute-0 ceph-mon[73607]: pgmap v2444: 305 pgs: 305 active+clean; 671 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 4.6 MiB/s wr, 245 op/s
Oct 02 12:42:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1853213375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:01 compute-0 nova_compute[257802]: 2025-10-02 12:42:01.343 2 DEBUG nova.compute.manager [req-2cda7f49-254e-4f61-9746-e32dca1cf2da req-2acbaad1-06ac-4ba5-8de3-1f83d48b8c9e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Received event network-vif-plugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:01 compute-0 nova_compute[257802]: 2025-10-02 12:42:01.343 2 DEBUG oslo_concurrency.lockutils [req-2cda7f49-254e-4f61-9746-e32dca1cf2da req-2acbaad1-06ac-4ba5-8de3-1f83d48b8c9e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "f3566799-fdd0-46bf-8256-0294a227030a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:01 compute-0 nova_compute[257802]: 2025-10-02 12:42:01.344 2 DEBUG oslo_concurrency.lockutils [req-2cda7f49-254e-4f61-9746-e32dca1cf2da req-2acbaad1-06ac-4ba5-8de3-1f83d48b8c9e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:01 compute-0 nova_compute[257802]: 2025-10-02 12:42:01.344 2 DEBUG oslo_concurrency.lockutils [req-2cda7f49-254e-4f61-9746-e32dca1cf2da req-2acbaad1-06ac-4ba5-8de3-1f83d48b8c9e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:01 compute-0 nova_compute[257802]: 2025-10-02 12:42:01.344 2 DEBUG nova.compute.manager [req-2cda7f49-254e-4f61-9746-e32dca1cf2da req-2acbaad1-06ac-4ba5-8de3-1f83d48b8c9e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] No waiting events found dispatching network-vif-plugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:42:01 compute-0 nova_compute[257802]: 2025-10-02 12:42:01.344 2 WARNING nova.compute.manager [req-2cda7f49-254e-4f61-9746-e32dca1cf2da req-2acbaad1-06ac-4ba5-8de3-1f83d48b8c9e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Received unexpected event network-vif-plugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 for instance with vm_state active and task_state resize_migrated.
Oct 02 12:42:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2445: 305 pgs: 305 active+clean; 680 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.8 MiB/s wr, 183 op/s
Oct 02 12:42:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:01.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2808035723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:02 compute-0 nova_compute[257802]: 2025-10-02 12:42:02.559 2 DEBUG oslo_concurrency.lockutils [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "refresh_cache-f3566799-fdd0-46bf-8256-0294a227030a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:42:02 compute-0 nova_compute[257802]: 2025-10-02 12:42:02.560 2 DEBUG oslo_concurrency.lockutils [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquired lock "refresh_cache-f3566799-fdd0-46bf-8256-0294a227030a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:42:02 compute-0 nova_compute[257802]: 2025-10-02 12:42:02.560 2 DEBUG nova.network.neutron [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:42:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:02.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:02 compute-0 nova_compute[257802]: 2025-10-02 12:42:02.754 2 DEBUG nova.compute.manager [req-6b75bf9d-f636-4c54-95b0-488e57a0eef1 req-7bebc560-eb01-4219-b3e1-0708f66d4484 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Received event network-changed-ff63f6e3-cb88-42d6-98e3-4f78430c7896 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:02 compute-0 nova_compute[257802]: 2025-10-02 12:42:02.754 2 DEBUG nova.compute.manager [req-6b75bf9d-f636-4c54-95b0-488e57a0eef1 req-7bebc560-eb01-4219-b3e1-0708f66d4484 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Refreshing instance network info cache due to event network-changed-ff63f6e3-cb88-42d6-98e3-4f78430c7896. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:42:02 compute-0 nova_compute[257802]: 2025-10-02 12:42:02.754 2 DEBUG oslo_concurrency.lockutils [req-6b75bf9d-f636-4c54-95b0-488e57a0eef1 req-7bebc560-eb01-4219-b3e1-0708f66d4484 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-f3566799-fdd0-46bf-8256-0294a227030a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:42:03 compute-0 nova_compute[257802]: 2025-10-02 12:42:03.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:42:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:03 compute-0 ceph-mon[73607]: pgmap v2445: 305 pgs: 305 active+clean; 680 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.8 MiB/s wr, 183 op/s
Oct 02 12:42:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2446: 305 pgs: 305 active+clean; 709 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 5.6 MiB/s wr, 186 op/s
Oct 02 12:42:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:42:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:03.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:42:04 compute-0 nova_compute[257802]: 2025-10-02 12:42:04.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:04 compute-0 nova_compute[257802]: 2025-10-02 12:42:04.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:42:04 compute-0 nova_compute[257802]: 2025-10-02 12:42:04.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:42:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:42:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:04.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:42:04 compute-0 sudo[352729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:04 compute-0 sudo[352729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:04 compute-0 sudo[352729]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:04 compute-0 nova_compute[257802]: 2025-10-02 12:42:04.966 2 DEBUG nova.network.neutron [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Updating instance_info_cache with network_info: [{"id": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "address": "fa:16:3e:1d:c1:2f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff63f6e3-cb", "ovs_interfaceid": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:42:05 compute-0 nova_compute[257802]: 2025-10-02 12:42:05.053 2 DEBUG oslo_concurrency.lockutils [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Releasing lock "refresh_cache-f3566799-fdd0-46bf-8256-0294a227030a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:42:05 compute-0 sudo[352754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:42:05 compute-0 nova_compute[257802]: 2025-10-02 12:42:05.057 2 DEBUG oslo_concurrency.lockutils [req-6b75bf9d-f636-4c54-95b0-488e57a0eef1 req-7bebc560-eb01-4219-b3e1-0708f66d4484 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-f3566799-fdd0-46bf-8256-0294a227030a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:42:05 compute-0 nova_compute[257802]: 2025-10-02 12:42:05.057 2 DEBUG nova.network.neutron [req-6b75bf9d-f636-4c54-95b0-488e57a0eef1 req-7bebc560-eb01-4219-b3e1-0708f66d4484 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Refreshing network info cache for port ff63f6e3-cb88-42d6-98e3-4f78430c7896 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:42:05 compute-0 sudo[352754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:05 compute-0 sudo[352754]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:05 compute-0 sudo[352785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:05 compute-0 sudo[352785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:05 compute-0 sudo[352785]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:05 compute-0 podman[352778]: 2025-10-02 12:42:05.159214986 +0000 UTC m=+0.091517521 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:42:05 compute-0 nova_compute[257802]: 2025-10-02 12:42:05.185 2 DEBUG os_brick.utils [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:42:05 compute-0 nova_compute[257802]: 2025-10-02 12:42:05.187 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:05 compute-0 nova_compute[257802]: 2025-10-02 12:42:05.198 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:05 compute-0 nova_compute[257802]: 2025-10-02 12:42:05.198 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[b2abcfc6-9c28-4f72-b2d5-60067db2f20c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:05 compute-0 nova_compute[257802]: 2025-10-02 12:42:05.199 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:05 compute-0 sudo[352825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:42:05 compute-0 sudo[352825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:05 compute-0 nova_compute[257802]: 2025-10-02 12:42:05.208 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:05 compute-0 nova_compute[257802]: 2025-10-02 12:42:05.209 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[55324d03-c218-4476-8587-e06514bcbe99]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:05 compute-0 nova_compute[257802]: 2025-10-02 12:42:05.210 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:05 compute-0 nova_compute[257802]: 2025-10-02 12:42:05.219 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:05 compute-0 nova_compute[257802]: 2025-10-02 12:42:05.219 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[83520d26-9008-4171-91a6-2e1af8b770dd]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:05 compute-0 nova_compute[257802]: 2025-10-02 12:42:05.220 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[01cacf25-6ffc-411e-afe7-9ab08f11ef09]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:05 compute-0 nova_compute[257802]: 2025-10-02 12:42:05.221 2 DEBUG oslo_concurrency.processutils [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:05 compute-0 nova_compute[257802]: 2025-10-02 12:42:05.250 2 DEBUG oslo_concurrency.processutils [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:05 compute-0 nova_compute[257802]: 2025-10-02 12:42:05.253 2 DEBUG os_brick.initiator.connectors.lightos [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:42:05 compute-0 nova_compute[257802]: 2025-10-02 12:42:05.254 2 DEBUG os_brick.initiator.connectors.lightos [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:42:05 compute-0 nova_compute[257802]: 2025-10-02 12:42:05.254 2 DEBUG os_brick.initiator.connectors.lightos [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:42:05 compute-0 nova_compute[257802]: 2025-10-02 12:42:05.255 2 DEBUG os_brick.utils [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] <== get_connector_properties: return (68ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:42:05 compute-0 ceph-mon[73607]: pgmap v2446: 305 pgs: 305 active+clean; 709 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 5.6 MiB/s wr, 186 op/s
Oct 02 12:42:05 compute-0 sudo[352825]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:42:05 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:42:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:42:05 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:42:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:42:05 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:42:05 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 44bf93e9-1491-478a-8558-e46f37b68558 does not exist
Oct 02 12:42:05 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 774177a4-dd99-4a28-9e2e-150cb650d116 does not exist
Oct 02 12:42:05 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 497c776d-ac9c-42f1-b51c-a9a80d7d007d does not exist
Oct 02 12:42:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:42:05 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:42:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:42:05 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:42:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:42:05 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:42:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2447: 305 pgs: 305 active+clean; 719 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 899 KiB/s rd, 5.4 MiB/s wr, 185 op/s
Oct 02 12:42:05 compute-0 sudo[352893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:05 compute-0 sudo[352893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:05 compute-0 sudo[352893]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:05.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:05 compute-0 sudo[352918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:42:05 compute-0 sudo[352918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:05 compute-0 sudo[352918]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:06 compute-0 sudo[352943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:06 compute-0 sudo[352943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:06 compute-0 sudo[352943]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:06 compute-0 nova_compute[257802]: 2025-10-02 12:42:06.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:06 compute-0 sudo[352968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:42:06 compute-0 sudo[352968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:06 compute-0 systemd[1]: Stopping User Manager for UID 42436...
Oct 02 12:42:06 compute-0 systemd[352648]: Activating special unit Exit the Session...
Oct 02 12:42:06 compute-0 systemd[352648]: Stopped target Main User Target.
Oct 02 12:42:06 compute-0 systemd[352648]: Stopped target Basic System.
Oct 02 12:42:06 compute-0 systemd[352648]: Stopped target Paths.
Oct 02 12:42:06 compute-0 systemd[352648]: Stopped target Sockets.
Oct 02 12:42:06 compute-0 systemd[352648]: Stopped target Timers.
Oct 02 12:42:06 compute-0 systemd[352648]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 02 12:42:06 compute-0 systemd[352648]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 12:42:06 compute-0 systemd[352648]: Closed D-Bus User Message Bus Socket.
Oct 02 12:42:06 compute-0 systemd[352648]: Stopped Create User's Volatile Files and Directories.
Oct 02 12:42:06 compute-0 systemd[352648]: Removed slice User Application Slice.
Oct 02 12:42:06 compute-0 systemd[352648]: Reached target Shutdown.
Oct 02 12:42:06 compute-0 systemd[352648]: Finished Exit the Session.
Oct 02 12:42:06 compute-0 systemd[352648]: Reached target Exit the Session.
Oct 02 12:42:06 compute-0 systemd[1]: user@42436.service: Deactivated successfully.
Oct 02 12:42:06 compute-0 systemd[1]: Stopped User Manager for UID 42436.
Oct 02 12:42:06 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct 02 12:42:06 compute-0 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct 02 12:42:06 compute-0 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct 02 12:42:06 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct 02 12:42:06 compute-0 systemd[1]: Removed slice User Slice of UID 42436.
Oct 02 12:42:06 compute-0 podman[353035]: 2025-10-02 12:42:06.403386286 +0000 UTC m=+0.050052195 container create ae24fc4dbf6e4837bfba2ef5f1fbd51b754ccaf06eddba6a79f3939de212a274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:42:06 compute-0 systemd[1]: Started libpod-conmon-ae24fc4dbf6e4837bfba2ef5f1fbd51b754ccaf06eddba6a79f3939de212a274.scope.
Oct 02 12:42:06 compute-0 podman[353035]: 2025-10-02 12:42:06.380860415 +0000 UTC m=+0.027526354 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:42:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:42:06 compute-0 podman[353035]: 2025-10-02 12:42:06.492931107 +0000 UTC m=+0.139597036 container init ae24fc4dbf6e4837bfba2ef5f1fbd51b754ccaf06eddba6a79f3939de212a274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hoover, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 12:42:06 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:42:06 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:42:06 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:42:06 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:42:06 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:42:06 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:42:06 compute-0 podman[353035]: 2025-10-02 12:42:06.500612945 +0000 UTC m=+0.147278854 container start ae24fc4dbf6e4837bfba2ef5f1fbd51b754ccaf06eddba6a79f3939de212a274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 12:42:06 compute-0 podman[353035]: 2025-10-02 12:42:06.503433994 +0000 UTC m=+0.150099933 container attach ae24fc4dbf6e4837bfba2ef5f1fbd51b754ccaf06eddba6a79f3939de212a274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hoover, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 12:42:06 compute-0 admiring_hoover[353053]: 167 167
Oct 02 12:42:06 compute-0 systemd[1]: libpod-ae24fc4dbf6e4837bfba2ef5f1fbd51b754ccaf06eddba6a79f3939de212a274.scope: Deactivated successfully.
Oct 02 12:42:06 compute-0 podman[353035]: 2025-10-02 12:42:06.507864322 +0000 UTC m=+0.154530231 container died ae24fc4dbf6e4837bfba2ef5f1fbd51b754ccaf06eddba6a79f3939de212a274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hoover, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:42:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e94bc97d8fa790b85f76c919db13b126976ebd11182a1a3c889749758161c26-merged.mount: Deactivated successfully.
Oct 02 12:42:06 compute-0 podman[353035]: 2025-10-02 12:42:06.568958307 +0000 UTC m=+0.215624206 container remove ae24fc4dbf6e4837bfba2ef5f1fbd51b754ccaf06eddba6a79f3939de212a274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hoover, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:42:06 compute-0 systemd[1]: libpod-conmon-ae24fc4dbf6e4837bfba2ef5f1fbd51b754ccaf06eddba6a79f3939de212a274.scope: Deactivated successfully.
Oct 02 12:42:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:06.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:06 compute-0 podman[353078]: 2025-10-02 12:42:06.767454173 +0000 UTC m=+0.051187703 container create 4b41909029aa992b43111654559eb11ddf2dca8989cb9310502d6490ef739ef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bhabha, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 12:42:06 compute-0 systemd[1]: Started libpod-conmon-4b41909029aa992b43111654559eb11ddf2dca8989cb9310502d6490ef739ef8.scope.
Oct 02 12:42:06 compute-0 podman[353078]: 2025-10-02 12:42:06.747650089 +0000 UTC m=+0.031383639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:42:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:42:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f440caa58ea9e25822885c195018fd4a921b0a9ab44534c3583b7246132496f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:42:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f440caa58ea9e25822885c195018fd4a921b0a9ab44534c3583b7246132496f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:42:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f440caa58ea9e25822885c195018fd4a921b0a9ab44534c3583b7246132496f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:42:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f440caa58ea9e25822885c195018fd4a921b0a9ab44534c3583b7246132496f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:42:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f440caa58ea9e25822885c195018fd4a921b0a9ab44534c3583b7246132496f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:42:06 compute-0 podman[353078]: 2025-10-02 12:42:06.890657187 +0000 UTC m=+0.174390737 container init 4b41909029aa992b43111654559eb11ddf2dca8989cb9310502d6490ef739ef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bhabha, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 12:42:06 compute-0 podman[353078]: 2025-10-02 12:42:06.899959205 +0000 UTC m=+0.183692735 container start 4b41909029aa992b43111654559eb11ddf2dca8989cb9310502d6490ef739ef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bhabha, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:42:06 compute-0 podman[353078]: 2025-10-02 12:42:06.903263836 +0000 UTC m=+0.186997366 container attach 4b41909029aa992b43111654559eb11ddf2dca8989cb9310502d6490ef739ef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:42:07 compute-0 nova_compute[257802]: 2025-10-02 12:42:07.508 2 DEBUG nova.virt.libvirt.driver [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Oct 02 12:42:07 compute-0 nova_compute[257802]: 2025-10-02 12:42:07.511 2 DEBUG nova.virt.libvirt.driver [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 12:42:07 compute-0 nova_compute[257802]: 2025-10-02 12:42:07.511 2 INFO nova.virt.libvirt.driver [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Creating image(s)
Oct 02 12:42:07 compute-0 nova_compute[257802]: 2025-10-02 12:42:07.512 2 DEBUG nova.virt.libvirt.driver [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:42:07 compute-0 nova_compute[257802]: 2025-10-02 12:42:07.512 2 DEBUG nova.virt.libvirt.driver [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Ensure instance console log exists: /var/lib/nova/instances/f3566799-fdd0-46bf-8256-0294a227030a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:42:07 compute-0 nova_compute[257802]: 2025-10-02 12:42:07.512 2 DEBUG oslo_concurrency.lockutils [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:07 compute-0 nova_compute[257802]: 2025-10-02 12:42:07.513 2 DEBUG oslo_concurrency.lockutils [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:07 compute-0 nova_compute[257802]: 2025-10-02 12:42:07.513 2 DEBUG oslo_concurrency.lockutils [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:07 compute-0 ceph-mon[73607]: pgmap v2447: 305 pgs: 305 active+clean; 719 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 899 KiB/s rd, 5.4 MiB/s wr, 185 op/s
Oct 02 12:42:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1730528600' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2890916842' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:42:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2890916842' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:42:07 compute-0 nova_compute[257802]: 2025-10-02 12:42:07.516 2 DEBUG nova.virt.libvirt.driver [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Start _get_guest_xml network_info=[{"id": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "address": "fa:16:3e:1d:c1:2f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-1293599148-network", "vif_mac": "fa:16:3e:1d:c1:2f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff63f6e3-cb", "ovs_interfaceid": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'guest_format': None, 'attachment_id': '3f6da93c-1313-4502-a0cb-b6367e555b0d', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'delete_on_termination': True, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-6916418d-7e23-49fb-b41f-d247b7619f6f', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '6916418d-7e23-49fb-b41f-d247b7619f6f', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': 'f3566799-fdd0-46bf-8256-0294a227030a', 'attached_at': '2025-10-02T12:42:06.000000', 'detached_at': '', 'volume_id': '6916418d-7e23-49fb-b41f-d247b7619f6f', 'serial': '6916418d-7e23-49fb-b41f-d247b7619f6f'}, 'device_type': 'disk', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:42:07 compute-0 nova_compute[257802]: 2025-10-02 12:42:07.522 2 WARNING nova.virt.libvirt.driver [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:42:07 compute-0 nova_compute[257802]: 2025-10-02 12:42:07.527 2 DEBUG nova.virt.libvirt.host [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:42:07 compute-0 nova_compute[257802]: 2025-10-02 12:42:07.528 2 DEBUG nova.virt.libvirt.host [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:42:07 compute-0 nova_compute[257802]: 2025-10-02 12:42:07.533 2 DEBUG nova.virt.libvirt.host [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:42:07 compute-0 nova_compute[257802]: 2025-10-02 12:42:07.533 2 DEBUG nova.virt.libvirt.host [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:42:07 compute-0 nova_compute[257802]: 2025-10-02 12:42:07.535 2 DEBUG nova.virt.libvirt.driver [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:42:07 compute-0 nova_compute[257802]: 2025-10-02 12:42:07.535 2 DEBUG nova.virt.hardware [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:39Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='eb3a53f1-304b-4cb0-acc3-abffce0fb181',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:42:07 compute-0 nova_compute[257802]: 2025-10-02 12:42:07.535 2 DEBUG nova.virt.hardware [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:42:07 compute-0 nova_compute[257802]: 2025-10-02 12:42:07.535 2 DEBUG nova.virt.hardware [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:42:07 compute-0 nova_compute[257802]: 2025-10-02 12:42:07.536 2 DEBUG nova.virt.hardware [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:42:07 compute-0 nova_compute[257802]: 2025-10-02 12:42:07.536 2 DEBUG nova.virt.hardware [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:42:07 compute-0 nova_compute[257802]: 2025-10-02 12:42:07.536 2 DEBUG nova.virt.hardware [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:42:07 compute-0 nova_compute[257802]: 2025-10-02 12:42:07.536 2 DEBUG nova.virt.hardware [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:42:07 compute-0 nova_compute[257802]: 2025-10-02 12:42:07.537 2 DEBUG nova.virt.hardware [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:42:07 compute-0 nova_compute[257802]: 2025-10-02 12:42:07.537 2 DEBUG nova.virt.hardware [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:42:07 compute-0 nova_compute[257802]: 2025-10-02 12:42:07.537 2 DEBUG nova.virt.hardware [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:42:07 compute-0 nova_compute[257802]: 2025-10-02 12:42:07.537 2 DEBUG nova.virt.hardware [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:42:07 compute-0 nova_compute[257802]: 2025-10-02 12:42:07.538 2 DEBUG nova.objects.instance [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lazy-loading 'vcpu_model' on Instance uuid f3566799-fdd0-46bf-8256-0294a227030a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:07 compute-0 nova_compute[257802]: 2025-10-02 12:42:07.682 2 DEBUG oslo_concurrency.processutils [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:07 compute-0 crazy_bhabha[353094]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:42:07 compute-0 crazy_bhabha[353094]: --> relative data size: 1.0
Oct 02 12:42:07 compute-0 crazy_bhabha[353094]: --> All data devices are unavailable
Oct 02 12:42:07 compute-0 systemd[1]: libpod-4b41909029aa992b43111654559eb11ddf2dca8989cb9310502d6490ef739ef8.scope: Deactivated successfully.
Oct 02 12:42:07 compute-0 podman[353078]: 2025-10-02 12:42:07.717431496 +0000 UTC m=+1.001165026 container died 4b41909029aa992b43111654559eb11ddf2dca8989cb9310502d6490ef739ef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bhabha, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 12:42:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f440caa58ea9e25822885c195018fd4a921b0a9ab44534c3583b7246132496f-merged.mount: Deactivated successfully.
Oct 02 12:42:07 compute-0 podman[353078]: 2025-10-02 12:42:07.773700982 +0000 UTC m=+1.057434512 container remove 4b41909029aa992b43111654559eb11ddf2dca8989cb9310502d6490ef739ef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:42:07 compute-0 systemd[1]: libpod-conmon-4b41909029aa992b43111654559eb11ddf2dca8989cb9310502d6490ef739ef8.scope: Deactivated successfully.
Oct 02 12:42:07 compute-0 sudo[352968]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:07 compute-0 sudo[353152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:07 compute-0 sudo[353152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:07 compute-0 sudo[353152]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2448: 305 pgs: 305 active+clean; 719 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 899 KiB/s rd, 5.4 MiB/s wr, 185 op/s
Oct 02 12:42:07 compute-0 sudo[353186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:42:07 compute-0 sudo[353186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:07 compute-0 sudo[353186]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:07.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:07 compute-0 sudo[353211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:07 compute-0 sudo[353211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:07 compute-0 sudo[353211]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.026 2 DEBUG nova.network.neutron [req-6b75bf9d-f636-4c54-95b0-488e57a0eef1 req-7bebc560-eb01-4219-b3e1-0708f66d4484 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Updated VIF entry in instance network info cache for port ff63f6e3-cb88-42d6-98e3-4f78430c7896. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.027 2 DEBUG nova.network.neutron [req-6b75bf9d-f636-4c54-95b0-488e57a0eef1 req-7bebc560-eb01-4219-b3e1-0708f66d4484 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Updating instance_info_cache with network_info: [{"id": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "address": "fa:16:3e:1d:c1:2f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff63f6e3-cb", "ovs_interfaceid": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:42:08 compute-0 sudo[353236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:42:08 compute-0 sudo[353236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:42:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:42:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3389789611' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.154 2 DEBUG oslo_concurrency.processutils [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.225 2 DEBUG oslo_concurrency.lockutils [req-6b75bf9d-f636-4c54-95b0-488e57a0eef1 req-7bebc560-eb01-4219-b3e1-0708f66d4484 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-f3566799-fdd0-46bf-8256-0294a227030a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.262 2 DEBUG nova.virt.libvirt.vif [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:41:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-73692290',display_name='tempest-ServerActionsTestOtherA-server-73692290',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-73692290',id=154,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMKHZAu99lFpnj9bTKPOhdWg5y6PnKM9AGAnFElgiPr53bQbl7DsEAEg0Hu4Ea2RYl8QhrjFhPMuXkYw2ubt4hnzcTRuj+jAHGGBwDWRc1fX16YtY5a2rZP1IKxVnq/Inw==',key_name='tempest-keypair-26130845',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:41:40Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6822f02d5ca04c659329a75d487054cf',ramdisk_id='',reservation_id='r-la5chqsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-1680083910',owner_user_name='tempest-ServerActionsTestOtherA-1680083910-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:42:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c02d1dcc10ea4e57bbc6b7a3c100dc7b',uuid=f3566799-fdd0-46bf-8256-0294a227030a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "address": "fa:16:3e:1d:c1:2f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-1293599148-network", "vif_mac": "fa:16:3e:1d:c1:2f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff63f6e3-cb", "ovs_interfaceid": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.263 2 DEBUG nova.network.os_vif_util [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converting VIF {"id": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "address": "fa:16:3e:1d:c1:2f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-1293599148-network", "vif_mac": "fa:16:3e:1d:c1:2f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff63f6e3-cb", "ovs_interfaceid": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.264 2 DEBUG nova.network.os_vif_util [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:c1:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff63f6e3-cb88-42d6-98e3-4f78430c7896,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff63f6e3-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.268 2 DEBUG nova.virt.libvirt.driver [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:42:08 compute-0 nova_compute[257802]:   <uuid>f3566799-fdd0-46bf-8256-0294a227030a</uuid>
Oct 02 12:42:08 compute-0 nova_compute[257802]:   <name>instance-0000009a</name>
Oct 02 12:42:08 compute-0 nova_compute[257802]:   <memory>196608</memory>
Oct 02 12:42:08 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:42:08 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:42:08 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:       <nova:name>tempest-ServerActionsTestOtherA-server-73692290</nova:name>
Oct 02 12:42:08 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:42:07</nova:creationTime>
Oct 02 12:42:08 compute-0 nova_compute[257802]:       <nova:flavor name="m1.micro">
Oct 02 12:42:08 compute-0 nova_compute[257802]:         <nova:memory>192</nova:memory>
Oct 02 12:42:08 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:42:08 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:42:08 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:42:08 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:42:08 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:42:08 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:42:08 compute-0 nova_compute[257802]:         <nova:user uuid="c02d1dcc10ea4e57bbc6b7a3c100dc7b">tempest-ServerActionsTestOtherA-1680083910-project-member</nova:user>
Oct 02 12:42:08 compute-0 nova_compute[257802]:         <nova:project uuid="6822f02d5ca04c659329a75d487054cf">tempest-ServerActionsTestOtherA-1680083910</nova:project>
Oct 02 12:42:08 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:42:08 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:42:08 compute-0 nova_compute[257802]:         <nova:port uuid="ff63f6e3-cb88-42d6-98e3-4f78430c7896">
Oct 02 12:42:08 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:42:08 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:42:08 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:42:08 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <system>
Oct 02 12:42:08 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:42:08 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:42:08 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:42:08 compute-0 nova_compute[257802]:       <entry name="serial">f3566799-fdd0-46bf-8256-0294a227030a</entry>
Oct 02 12:42:08 compute-0 nova_compute[257802]:       <entry name="uuid">f3566799-fdd0-46bf-8256-0294a227030a</entry>
Oct 02 12:42:08 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     </system>
Oct 02 12:42:08 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:42:08 compute-0 nova_compute[257802]:   <os>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:   </os>
Oct 02 12:42:08 compute-0 nova_compute[257802]:   <features>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:   </features>
Oct 02 12:42:08 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:42:08 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:42:08 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:42:08 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/f3566799-fdd0-46bf-8256-0294a227030a_disk.config">
Oct 02 12:42:08 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:       </source>
Oct 02 12:42:08 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:42:08 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:42:08 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:42:08 compute-0 nova_compute[257802]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:       <source protocol="rbd" name="volumes/volume-6916418d-7e23-49fb-b41f-d247b7619f6f">
Oct 02 12:42:08 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:       </source>
Oct 02 12:42:08 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:42:08 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:42:08 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:       <serial>6916418d-7e23-49fb-b41f-d247b7619f6f</serial>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:42:08 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:1d:c1:2f"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:       <target dev="tapff63f6e3-cb"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:42:08 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/f3566799-fdd0-46bf-8256-0294a227030a/console.log" append="off"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <video>
Oct 02 12:42:08 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     </video>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:42:08 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:42:08 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:42:08 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:42:08 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:42:08 compute-0 nova_compute[257802]: </domain>
Oct 02 12:42:08 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.269 2 DEBUG nova.virt.libvirt.vif [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:41:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-73692290',display_name='tempest-ServerActionsTestOtherA-server-73692290',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-73692290',id=154,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMKHZAu99lFpnj9bTKPOhdWg5y6PnKM9AGAnFElgiPr53bQbl7DsEAEg0Hu4Ea2RYl8QhrjFhPMuXkYw2ubt4hnzcTRuj+jAHGGBwDWRc1fX16YtY5a2rZP1IKxVnq/Inw==',key_name='tempest-keypair-26130845',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:41:40Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6822f02d5ca04c659329a75d487054cf',ramdisk_id='',reservation_id='r-la5chqsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-1680083910',owner_user_name='tempest-ServerActionsTestOtherA-1680083910-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:42:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c02d1dcc10ea4e57bbc6b7a3c100dc7b',uuid=f3566799-fdd0-46bf-8256-0294a227030a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "address": "fa:16:3e:1d:c1:2f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-1293599148-network", "vif_mac": "fa:16:3e:1d:c1:2f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff63f6e3-cb", "ovs_interfaceid": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.269 2 DEBUG nova.network.os_vif_util [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converting VIF {"id": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "address": "fa:16:3e:1d:c1:2f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-1293599148-network", "vif_mac": "fa:16:3e:1d:c1:2f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff63f6e3-cb", "ovs_interfaceid": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.270 2 DEBUG nova.network.os_vif_util [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:c1:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff63f6e3-cb88-42d6-98e3-4f78430c7896,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff63f6e3-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.270 2 DEBUG os_vif [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:c1:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff63f6e3-cb88-42d6-98e3-4f78430c7896,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff63f6e3-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.272 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.273 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.277 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff63f6e3-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.278 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapff63f6e3-cb, col_values=(('external_ids', {'iface-id': 'ff63f6e3-cb88-42d6-98e3-4f78430c7896', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1d:c1:2f', 'vm-uuid': 'f3566799-fdd0-46bf-8256-0294a227030a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:08 compute-0 NetworkManager[44987]: <info>  [1759408928.2835] manager: (tapff63f6e3-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/303)
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.291 2 INFO os_vif [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:c1:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff63f6e3-cb88-42d6-98e3-4f78430c7896,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff63f6e3-cb')
Oct 02 12:42:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.367 2 DEBUG nova.virt.libvirt.driver [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.367 2 DEBUG nova.virt.libvirt.driver [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.367 2 DEBUG nova.virt.libvirt.driver [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] No VIF found with MAC fa:16:3e:1d:c1:2f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.368 2 INFO nova.virt.libvirt.driver [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Using config drive
Oct 02 12:42:08 compute-0 podman[353306]: 2025-10-02 12:42:08.422719522 +0000 UTC m=+0.051841880 container create 3af1b01e206bf0d0f59aee366b11b7dda5a9dbbdb1ad90fb7c771d7c25ecf3f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_rosalind, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:42:08 compute-0 systemd[1]: Started libpod-conmon-3af1b01e206bf0d0f59aee366b11b7dda5a9dbbdb1ad90fb7c771d7c25ecf3f3.scope.
Oct 02 12:42:08 compute-0 kernel: tapff63f6e3-cb: entered promiscuous mode
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:08 compute-0 ovn_controller[148183]: 2025-10-02T12:42:08Z|00665|binding|INFO|Claiming lport ff63f6e3-cb88-42d6-98e3-4f78430c7896 for this chassis.
Oct 02 12:42:08 compute-0 ovn_controller[148183]: 2025-10-02T12:42:08Z|00666|binding|INFO|ff63f6e3-cb88-42d6-98e3-4f78430c7896: Claiming fa:16:3e:1d:c1:2f 10.100.0.14
Oct 02 12:42:08 compute-0 NetworkManager[44987]: <info>  [1759408928.4710] manager: (tapff63f6e3-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/304)
Oct 02 12:42:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:08.477 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:c1:2f 10.100.0.14'], port_security=['fa:16:3e:1d:c1:2f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f3566799-fdd0-46bf-8256-0294a227030a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00455285-97a7-4fa2-ba83-e8060936877e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6822f02d5ca04c659329a75d487054cf', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'b290a23e-28e7-483f-bf5b-c42418308591', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.250'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f978d0a7-f86b-440f-a8b5-5432c3a4bc91, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=ff63f6e3-cb88-42d6-98e3-4f78430c7896) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:42:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:08.478 158261 INFO neutron.agent.ovn.metadata.agent [-] Port ff63f6e3-cb88-42d6-98e3-4f78430c7896 in datapath 00455285-97a7-4fa2-ba83-e8060936877e bound to our chassis
Oct 02 12:42:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:08.480 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00455285-97a7-4fa2-ba83-e8060936877e
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:08 compute-0 ovn_controller[148183]: 2025-10-02T12:42:08Z|00667|binding|INFO|Setting lport ff63f6e3-cb88-42d6-98e3-4f78430c7896 ovn-installed in OVS
Oct 02 12:42:08 compute-0 ovn_controller[148183]: 2025-10-02T12:42:08Z|00668|binding|INFO|Setting lport ff63f6e3-cb88-42d6-98e3-4f78430c7896 up in Southbound
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:08 compute-0 podman[353306]: 2025-10-02 12:42:08.39241036 +0000 UTC m=+0.021532738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:42:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:42:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:08.498 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f24fb414-8b77-4e5f-956d-8a30bb52e6ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:08 compute-0 systemd-machined[211836]: New machine qemu-76-instance-0000009a.
Oct 02 12:42:08 compute-0 systemd-udevd[353357]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:42:08 compute-0 podman[353306]: 2025-10-02 12:42:08.527411402 +0000 UTC m=+0.156533780 container init 3af1b01e206bf0d0f59aee366b11b7dda5a9dbbdb1ad90fb7c771d7c25ecf3f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_rosalind, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 12:42:08 compute-0 systemd[1]: Started Virtual Machine qemu-76-instance-0000009a.
Oct 02 12:42:08 compute-0 NetworkManager[44987]: <info>  [1759408928.5312] device (tapff63f6e3-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:42:08 compute-0 NetworkManager[44987]: <info>  [1759408928.5332] device (tapff63f6e3-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:42:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3389789611' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:08 compute-0 podman[353306]: 2025-10-02 12:42:08.537179081 +0000 UTC m=+0.166301439 container start 3af1b01e206bf0d0f59aee366b11b7dda5a9dbbdb1ad90fb7c771d7c25ecf3f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_rosalind, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:42:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:08.536 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[e94f3324-ad52-487f-93b8-019fb8413d99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:08.540 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[e52b3150-2dfb-42f1-9d86-4d78b3da4a48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:08 compute-0 systemd[1]: libpod-3af1b01e206bf0d0f59aee366b11b7dda5a9dbbdb1ad90fb7c771d7c25ecf3f3.scope: Deactivated successfully.
Oct 02 12:42:08 compute-0 funny_rosalind[353348]: 167 167
Oct 02 12:42:08 compute-0 conmon[353348]: conmon 3af1b01e206bf0d0f59a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3af1b01e206bf0d0f59aee366b11b7dda5a9dbbdb1ad90fb7c771d7c25ecf3f3.scope/container/memory.events
Oct 02 12:42:08 compute-0 podman[353306]: 2025-10-02 12:42:08.550240882 +0000 UTC m=+0.179363240 container attach 3af1b01e206bf0d0f59aee366b11b7dda5a9dbbdb1ad90fb7c771d7c25ecf3f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:42:08 compute-0 podman[353306]: 2025-10-02 12:42:08.551526223 +0000 UTC m=+0.180648581 container died 3af1b01e206bf0d0f59aee366b11b7dda5a9dbbdb1ad90fb7c771d7c25ecf3f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_rosalind, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 12:42:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:08.577 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[7d32c9c9-f31e-4113-b37c-b807b9b78057]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-0808d8f4f58094ed2afb58b04022dda6fbbb3d2e1ce4cd2fc858e7973af621d4-merged.mount: Deactivated successfully.
Oct 02 12:42:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:08.607 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[eb23ed09-1d8c-4fb3-a18b-acdb1e4b94fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00455285-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f6:8a:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 194], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 666919, 'reachable_time': 23631, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353383, 'error': None, 'target': 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:08.623 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[bc57a54f-7da0-4f81-8f10-170326d167be]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap00455285-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 666932, 'tstamp': 666932}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353386, 'error': None, 'target': 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap00455285-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 666935, 'tstamp': 666935}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353386, 'error': None, 'target': 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:08.624 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00455285-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:08 compute-0 podman[353306]: 2025-10-02 12:42:08.625946924 +0000 UTC m=+0.255069282 container remove 3af1b01e206bf0d0f59aee366b11b7dda5a9dbbdb1ad90fb7c771d7c25ecf3f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_rosalind, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:08.628 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00455285-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:08.629 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:42:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:08.629 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00455285-90, col_values=(('external_ids', {'iface-id': '293fb87a-10df-4698-a69e-3023bca5a6a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:08.629 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:42:08 compute-0 systemd[1]: libpod-conmon-3af1b01e206bf0d0f59aee366b11b7dda5a9dbbdb1ad90fb7c771d7c25ecf3f3.scope: Deactivated successfully.
Oct 02 12:42:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:08.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:08 compute-0 podman[353394]: 2025-10-02 12:42:08.840969414 +0000 UTC m=+0.078269026 container create f065c263dc09ae6aa23c41b5e2144a835be505abc0b5a7879d9ff526926ee547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 12:42:08 compute-0 podman[353394]: 2025-10-02 12:42:08.788174903 +0000 UTC m=+0.025474525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:42:08 compute-0 systemd[1]: Started libpod-conmon-f065c263dc09ae6aa23c41b5e2144a835be505abc0b5a7879d9ff526926ee547.scope.
Oct 02 12:42:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:42:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d57ff88a111a74743926cd3137e05c51a66b7a3e9bc545fd8c96751fbfed7985/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:42:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d57ff88a111a74743926cd3137e05c51a66b7a3e9bc545fd8c96751fbfed7985/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:42:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d57ff88a111a74743926cd3137e05c51a66b7a3e9bc545fd8c96751fbfed7985/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:42:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d57ff88a111a74743926cd3137e05c51a66b7a3e9bc545fd8c96751fbfed7985/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:42:08 compute-0 podman[353394]: 2025-10-02 12:42:08.953146628 +0000 UTC m=+0.190446320 container init f065c263dc09ae6aa23c41b5e2144a835be505abc0b5a7879d9ff526926ee547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 12:42:08 compute-0 podman[353394]: 2025-10-02 12:42:08.961439751 +0000 UTC m=+0.198739353 container start f065c263dc09ae6aa23c41b5e2144a835be505abc0b5a7879d9ff526926ee547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.966 2 DEBUG nova.compute.manager [req-7f7b3df7-42ed-4c90-a5ee-a172441e9a38 req-a27e413a-b559-417f-81f5-44093ba1d8c1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Received event network-vif-plugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.968 2 DEBUG oslo_concurrency.lockutils [req-7f7b3df7-42ed-4c90-a5ee-a172441e9a38 req-a27e413a-b559-417f-81f5-44093ba1d8c1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "f3566799-fdd0-46bf-8256-0294a227030a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.969 2 DEBUG oslo_concurrency.lockutils [req-7f7b3df7-42ed-4c90-a5ee-a172441e9a38 req-a27e413a-b559-417f-81f5-44093ba1d8c1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.969 2 DEBUG oslo_concurrency.lockutils [req-7f7b3df7-42ed-4c90-a5ee-a172441e9a38 req-a27e413a-b559-417f-81f5-44093ba1d8c1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.969 2 DEBUG nova.compute.manager [req-7f7b3df7-42ed-4c90-a5ee-a172441e9a38 req-a27e413a-b559-417f-81f5-44093ba1d8c1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] No waiting events found dispatching network-vif-plugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:42:08 compute-0 nova_compute[257802]: 2025-10-02 12:42:08.969 2 WARNING nova.compute.manager [req-7f7b3df7-42ed-4c90-a5ee-a172441e9a38 req-a27e413a-b559-417f-81f5-44093ba1d8c1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Received unexpected event network-vif-plugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 for instance with vm_state active and task_state resize_finish.
Oct 02 12:42:08 compute-0 podman[353394]: 2025-10-02 12:42:08.970499884 +0000 UTC m=+0.207799536 container attach f065c263dc09ae6aa23c41b5e2144a835be505abc0b5a7879d9ff526926ee547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:42:09 compute-0 nova_compute[257802]: 2025-10-02 12:42:09.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:09 compute-0 unix_chkpwd[353457]: password check failed for user (root)
Oct 02 12:42:09 compute-0 sshd-session[353099]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.167.220.139  user=root
Oct 02 12:42:09 compute-0 nova_compute[257802]: 2025-10-02 12:42:09.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:42:09 compute-0 nova_compute[257802]: 2025-10-02 12:42:09.382 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408929.3819222, f3566799-fdd0-46bf-8256-0294a227030a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:42:09 compute-0 nova_compute[257802]: 2025-10-02 12:42:09.382 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: f3566799-fdd0-46bf-8256-0294a227030a] VM Resumed (Lifecycle Event)
Oct 02 12:42:09 compute-0 nova_compute[257802]: 2025-10-02 12:42:09.384 2 DEBUG nova.compute.manager [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:42:09 compute-0 nova_compute[257802]: 2025-10-02 12:42:09.387 2 INFO nova.virt.libvirt.driver [-] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Instance running successfully.
Oct 02 12:42:09 compute-0 virtqemud[257280]: argument unsupported: QEMU guest agent is not configured
Oct 02 12:42:09 compute-0 nova_compute[257802]: 2025-10-02 12:42:09.389 2 DEBUG nova.virt.libvirt.guest [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 12:42:09 compute-0 nova_compute[257802]: 2025-10-02 12:42:09.390 2 DEBUG nova.virt.libvirt.driver [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Oct 02 12:42:09 compute-0 nova_compute[257802]: 2025-10-02 12:42:09.402 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:42:09 compute-0 nova_compute[257802]: 2025-10-02 12:42:09.405 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:42:09 compute-0 nova_compute[257802]: 2025-10-02 12:42:09.428 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: f3566799-fdd0-46bf-8256-0294a227030a] During sync_power_state the instance has a pending task (resize_finish). Skip.
Oct 02 12:42:09 compute-0 nova_compute[257802]: 2025-10-02 12:42:09.429 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759408929.3826087, f3566799-fdd0-46bf-8256-0294a227030a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:42:09 compute-0 nova_compute[257802]: 2025-10-02 12:42:09.429 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: f3566799-fdd0-46bf-8256-0294a227030a] VM Started (Lifecycle Event)
Oct 02 12:42:09 compute-0 nova_compute[257802]: 2025-10-02 12:42:09.459 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:42:09 compute-0 nova_compute[257802]: 2025-10-02 12:42:09.462 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:42:09 compute-0 nova_compute[257802]: 2025-10-02 12:42:09.497 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: f3566799-fdd0-46bf-8256-0294a227030a] During sync_power_state the instance has a pending task (resize_finish). Skip.
Oct 02 12:42:09 compute-0 ceph-mon[73607]: pgmap v2448: 305 pgs: 305 active+clean; 719 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 899 KiB/s rd, 5.4 MiB/s wr, 185 op/s
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]: {
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]:     "1": [
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]:         {
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]:             "devices": [
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]:                 "/dev/loop3"
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]:             ],
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]:             "lv_name": "ceph_lv0",
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]:             "lv_size": "7511998464",
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]:             "name": "ceph_lv0",
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]:             "tags": {
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]:                 "ceph.cluster_name": "ceph",
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]:                 "ceph.crush_device_class": "",
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]:                 "ceph.encrypted": "0",
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]:                 "ceph.osd_id": "1",
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]:                 "ceph.type": "block",
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]:                 "ceph.vdo": "0"
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]:             },
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]:             "type": "block",
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]:             "vg_name": "ceph_vg0"
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]:         }
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]:     ]
Oct 02 12:42:09 compute-0 stupefied_dhawan[353446]: }
Oct 02 12:42:09 compute-0 systemd[1]: libpod-f065c263dc09ae6aa23c41b5e2144a835be505abc0b5a7879d9ff526926ee547.scope: Deactivated successfully.
Oct 02 12:42:09 compute-0 podman[353394]: 2025-10-02 12:42:09.812974995 +0000 UTC m=+1.050274597 container died f065c263dc09ae6aa23c41b5e2144a835be505abc0b5a7879d9ff526926ee547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Oct 02 12:42:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2449: 305 pgs: 305 active+clean; 719 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 588 KiB/s rd, 3.2 MiB/s wr, 144 op/s
Oct 02 12:42:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:09.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-d57ff88a111a74743926cd3137e05c51a66b7a3e9bc545fd8c96751fbfed7985-merged.mount: Deactivated successfully.
Oct 02 12:42:10 compute-0 podman[353394]: 2025-10-02 12:42:10.146683081 +0000 UTC m=+1.383982683 container remove f065c263dc09ae6aa23c41b5e2144a835be505abc0b5a7879d9ff526926ee547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 12:42:10 compute-0 sudo[353236]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:10 compute-0 systemd[1]: libpod-conmon-f065c263dc09ae6aa23c41b5e2144a835be505abc0b5a7879d9ff526926ee547.scope: Deactivated successfully.
Oct 02 12:42:10 compute-0 sudo[353474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:10 compute-0 sudo[353474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:10 compute-0 sudo[353474]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:10 compute-0 sudo[353499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:42:10 compute-0 sudo[353499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:10 compute-0 sudo[353499]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:10 compute-0 sudo[353524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:10 compute-0 sudo[353524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:10 compute-0 sudo[353524]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:10 compute-0 sudo[353549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:42:10 compute-0 sudo[353549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:10 compute-0 ceph-mon[73607]: pgmap v2449: 305 pgs: 305 active+clean; 719 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 588 KiB/s rd, 3.2 MiB/s wr, 144 op/s
Oct 02 12:42:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3584363981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/429066828' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:42:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/429066828' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:42:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:10.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:10 compute-0 podman[353614]: 2025-10-02 12:42:10.725671587 +0000 UTC m=+0.023513655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:42:10 compute-0 podman[353614]: 2025-10-02 12:42:10.921893298 +0000 UTC m=+0.219735346 container create 4c0989a013e7780dc12c65e08e85a3fbb5c6df6a6a51cfbb0d99ce43bc25bcb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:42:11 compute-0 systemd[1]: Started libpod-conmon-4c0989a013e7780dc12c65e08e85a3fbb5c6df6a6a51cfbb0d99ce43bc25bcb5.scope.
Oct 02 12:42:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:42:11 compute-0 podman[353614]: 2025-10-02 12:42:11.093012646 +0000 UTC m=+0.390854724 container init 4c0989a013e7780dc12c65e08e85a3fbb5c6df6a6a51cfbb0d99ce43bc25bcb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Oct 02 12:42:11 compute-0 podman[353614]: 2025-10-02 12:42:11.10013051 +0000 UTC m=+0.397972558 container start 4c0989a013e7780dc12c65e08e85a3fbb5c6df6a6a51cfbb0d99ce43bc25bcb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:42:11 compute-0 friendly_jennings[353631]: 167 167
Oct 02 12:42:11 compute-0 systemd[1]: libpod-4c0989a013e7780dc12c65e08e85a3fbb5c6df6a6a51cfbb0d99ce43bc25bcb5.scope: Deactivated successfully.
Oct 02 12:42:11 compute-0 podman[353614]: 2025-10-02 12:42:11.115309071 +0000 UTC m=+0.413151119 container attach 4c0989a013e7780dc12c65e08e85a3fbb5c6df6a6a51cfbb0d99ce43bc25bcb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jennings, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:42:11 compute-0 podman[353614]: 2025-10-02 12:42:11.116265044 +0000 UTC m=+0.414107092 container died 4c0989a013e7780dc12c65e08e85a3fbb5c6df6a6a51cfbb0d99ce43bc25bcb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:42:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7ec88f9c4f06f502009ef22c58531dde29abe5f62c86ef9a9da59bf52413ee5-merged.mount: Deactivated successfully.
Oct 02 12:42:11 compute-0 podman[353614]: 2025-10-02 12:42:11.169298792 +0000 UTC m=+0.467140840 container remove 4c0989a013e7780dc12c65e08e85a3fbb5c6df6a6a51cfbb0d99ce43bc25bcb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 12:42:11 compute-0 nova_compute[257802]: 2025-10-02 12:42:11.175 2 DEBUG nova.compute.manager [req-a3c6b7a3-0f7e-4888-b4dd-59d722a1d336 req-119b6972-0102-40d6-a0c0-e8510f242ba5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Received event network-vif-plugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:11 compute-0 nova_compute[257802]: 2025-10-02 12:42:11.178 2 DEBUG oslo_concurrency.lockutils [req-a3c6b7a3-0f7e-4888-b4dd-59d722a1d336 req-119b6972-0102-40d6-a0c0-e8510f242ba5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "f3566799-fdd0-46bf-8256-0294a227030a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:11 compute-0 nova_compute[257802]: 2025-10-02 12:42:11.178 2 DEBUG oslo_concurrency.lockutils [req-a3c6b7a3-0f7e-4888-b4dd-59d722a1d336 req-119b6972-0102-40d6-a0c0-e8510f242ba5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:11 compute-0 nova_compute[257802]: 2025-10-02 12:42:11.178 2 DEBUG oslo_concurrency.lockutils [req-a3c6b7a3-0f7e-4888-b4dd-59d722a1d336 req-119b6972-0102-40d6-a0c0-e8510f242ba5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:11 compute-0 nova_compute[257802]: 2025-10-02 12:42:11.178 2 DEBUG nova.compute.manager [req-a3c6b7a3-0f7e-4888-b4dd-59d722a1d336 req-119b6972-0102-40d6-a0c0-e8510f242ba5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] No waiting events found dispatching network-vif-plugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:42:11 compute-0 nova_compute[257802]: 2025-10-02 12:42:11.179 2 WARNING nova.compute.manager [req-a3c6b7a3-0f7e-4888-b4dd-59d722a1d336 req-119b6972-0102-40d6-a0c0-e8510f242ba5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Received unexpected event network-vif-plugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 for instance with vm_state resized and task_state None.
Oct 02 12:42:11 compute-0 systemd[1]: libpod-conmon-4c0989a013e7780dc12c65e08e85a3fbb5c6df6a6a51cfbb0d99ce43bc25bcb5.scope: Deactivated successfully.
Oct 02 12:42:11 compute-0 sshd-session[353099]: Failed password for root from 43.167.220.139 port 43266 ssh2
Oct 02 12:42:11 compute-0 podman[353656]: 2025-10-02 12:42:11.373611241 +0000 UTC m=+0.038771050 container create 20ffdc230e04c482f6b0aaa4aded21b1d4cd0464aa76688ce5fa89efc90e1244 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 12:42:11 compute-0 systemd[1]: Started libpod-conmon-20ffdc230e04c482f6b0aaa4aded21b1d4cd0464aa76688ce5fa89efc90e1244.scope.
Oct 02 12:42:11 compute-0 podman[353656]: 2025-10-02 12:42:11.356251436 +0000 UTC m=+0.021411265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:42:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96af229f1ee6843a435f9745e8f5343a3167c88678ae3df6c526336ab0bca988/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96af229f1ee6843a435f9745e8f5343a3167c88678ae3df6c526336ab0bca988/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96af229f1ee6843a435f9745e8f5343a3167c88678ae3df6c526336ab0bca988/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96af229f1ee6843a435f9745e8f5343a3167c88678ae3df6c526336ab0bca988/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:42:11 compute-0 podman[353656]: 2025-10-02 12:42:11.473295079 +0000 UTC m=+0.138454918 container init 20ffdc230e04c482f6b0aaa4aded21b1d4cd0464aa76688ce5fa89efc90e1244 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 12:42:11 compute-0 podman[353656]: 2025-10-02 12:42:11.4835105 +0000 UTC m=+0.148670309 container start 20ffdc230e04c482f6b0aaa4aded21b1d4cd0464aa76688ce5fa89efc90e1244 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hodgkin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:42:11 compute-0 podman[353656]: 2025-10-02 12:42:11.490902451 +0000 UTC m=+0.156062260 container attach 20ffdc230e04c482f6b0aaa4aded21b1d4cd0464aa76688ce5fa89efc90e1244 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hodgkin, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Oct 02 12:42:11 compute-0 sudo[353673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:11 compute-0 sudo[353673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:11 compute-0 sudo[353673]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:11 compute-0 sudo[353703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:11 compute-0 sudo[353703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:11 compute-0 sudo[353703]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2450: 305 pgs: 305 active+clean; 719 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.8 MiB/s wr, 166 op/s
Oct 02 12:42:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:42:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:11.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:42:12 compute-0 nova_compute[257802]: 2025-10-02 12:42:12.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:42:12 compute-0 lucid_hodgkin[353677]: {
Oct 02 12:42:12 compute-0 lucid_hodgkin[353677]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:42:12 compute-0 lucid_hodgkin[353677]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:42:12 compute-0 lucid_hodgkin[353677]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:42:12 compute-0 lucid_hodgkin[353677]:         "osd_id": 1,
Oct 02 12:42:12 compute-0 lucid_hodgkin[353677]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:42:12 compute-0 lucid_hodgkin[353677]:         "type": "bluestore"
Oct 02 12:42:12 compute-0 lucid_hodgkin[353677]:     }
Oct 02 12:42:12 compute-0 lucid_hodgkin[353677]: }
Oct 02 12:42:12 compute-0 systemd[1]: libpod-20ffdc230e04c482f6b0aaa4aded21b1d4cd0464aa76688ce5fa89efc90e1244.scope: Deactivated successfully.
Oct 02 12:42:12 compute-0 podman[353656]: 2025-10-02 12:42:12.332602604 +0000 UTC m=+0.997762413 container died 20ffdc230e04c482f6b0aaa4aded21b1d4cd0464aa76688ce5fa89efc90e1244 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hodgkin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:42:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-96af229f1ee6843a435f9745e8f5343a3167c88678ae3df6c526336ab0bca988-merged.mount: Deactivated successfully.
Oct 02 12:42:12 compute-0 podman[353656]: 2025-10-02 12:42:12.388975203 +0000 UTC m=+1.054135012 container remove 20ffdc230e04c482f6b0aaa4aded21b1d4cd0464aa76688ce5fa89efc90e1244 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hodgkin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:42:12 compute-0 systemd[1]: libpod-conmon-20ffdc230e04c482f6b0aaa4aded21b1d4cd0464aa76688ce5fa89efc90e1244.scope: Deactivated successfully.
Oct 02 12:42:12 compute-0 sudo[353549]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:42:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:42:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:42:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:42:12 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 132a524b-6b04-49b0-96c1-2366d0d43f89 does not exist
Oct 02 12:42:12 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev cdb8dc74-2cef-4f04-86c1-6bd9a78b2626 does not exist
Oct 02 12:42:12 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 5cc15e8e-89e4-4d4c-827b-33f28115f1d1 does not exist
Oct 02 12:42:12 compute-0 sudo[353755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:12 compute-0 sudo[353755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:12 compute-0 sudo[353755]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:12 compute-0 sudo[353780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:42:12 compute-0 sudo[353780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:12 compute-0 sudo[353780]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:12 compute-0 sshd-session[353099]: Connection closed by authenticating user root 43.167.220.139 port 43266 [preauth]
Oct 02 12:42:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:42:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:42:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:42:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:42:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:42:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:42:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:12.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:13 compute-0 nova_compute[257802]: 2025-10-02 12:42:13.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:13 compute-0 ceph-mon[73607]: pgmap v2450: 305 pgs: 305 active+clean; 719 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.8 MiB/s wr, 166 op/s
Oct 02 12:42:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:42:13 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:42:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2451: 305 pgs: 305 active+clean; 735 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.8 MiB/s wr, 184 op/s
Oct 02 12:42:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:13.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:14 compute-0 nova_compute[257802]: 2025-10-02 12:42:14.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/919380508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:42:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:14.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:42:15 compute-0 ceph-mon[73607]: pgmap v2451: 305 pgs: 305 active+clean; 735 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.8 MiB/s wr, 184 op/s
Oct 02 12:42:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4030536784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2452: 305 pgs: 305 active+clean; 763 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.3 MiB/s wr, 174 op/s
Oct 02 12:42:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:15.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:16 compute-0 nova_compute[257802]: 2025-10-02 12:42:16.123 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:42:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/558785531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:16.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:17 compute-0 nova_compute[257802]: 2025-10-02 12:42:17.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:42:17 compute-0 nova_compute[257802]: 2025-10-02 12:42:17.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:42:17 compute-0 nova_compute[257802]: 2025-10-02 12:42:17.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:42:17 compute-0 nova_compute[257802]: 2025-10-02 12:42:17.321 2 DEBUG oslo_concurrency.lockutils [None req-fe81a130-e93c-458d-877a-492024bf47d8 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Acquiring lock "714ae75f-1424-4b97-b849-84e5b4e77668" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:17 compute-0 nova_compute[257802]: 2025-10-02 12:42:17.321 2 DEBUG oslo_concurrency.lockutils [None req-fe81a130-e93c-458d-877a-492024bf47d8 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lock "714ae75f-1424-4b97-b849-84e5b4e77668" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:17 compute-0 nova_compute[257802]: 2025-10-02 12:42:17.402 2 DEBUG nova.objects.instance [None req-fe81a130-e93c-458d-877a-492024bf47d8 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lazy-loading 'flavor' on Instance uuid 714ae75f-1424-4b97-b849-84e5b4e77668 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:17 compute-0 nova_compute[257802]: 2025-10-02 12:42:17.483 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-17766045-13fc-4377-848f-6815e8a474d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:42:17 compute-0 nova_compute[257802]: 2025-10-02 12:42:17.483 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-17766045-13fc-4377-848f-6815e8a474d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:42:17 compute-0 nova_compute[257802]: 2025-10-02 12:42:17.484 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:42:17 compute-0 nova_compute[257802]: 2025-10-02 12:42:17.484 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid 17766045-13fc-4377-848f-6815e8a474d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:17 compute-0 ceph-mon[73607]: pgmap v2452: 305 pgs: 305 active+clean; 763 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.3 MiB/s wr, 174 op/s
Oct 02 12:42:17 compute-0 nova_compute[257802]: 2025-10-02 12:42:17.645 2 DEBUG oslo_concurrency.lockutils [None req-fe81a130-e93c-458d-877a-492024bf47d8 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lock "714ae75f-1424-4b97-b849-84e5b4e77668" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.324s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2453: 305 pgs: 305 active+clean; 763 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 136 op/s
Oct 02 12:42:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:17.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:18 compute-0 nova_compute[257802]: 2025-10-02 12:42:18.186 2 DEBUG oslo_concurrency.lockutils [None req-fe81a130-e93c-458d-877a-492024bf47d8 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Acquiring lock "714ae75f-1424-4b97-b849-84e5b4e77668" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:18 compute-0 nova_compute[257802]: 2025-10-02 12:42:18.186 2 DEBUG oslo_concurrency.lockutils [None req-fe81a130-e93c-458d-877a-492024bf47d8 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lock "714ae75f-1424-4b97-b849-84e5b4e77668" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:18 compute-0 nova_compute[257802]: 2025-10-02 12:42:18.187 2 INFO nova.compute.manager [None req-fe81a130-e93c-458d-877a-492024bf47d8 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Attaching volume 19e6039d-9ba8-4938-aa46-4b4209e53456 to /dev/vdb
Oct 02 12:42:18 compute-0 nova_compute[257802]: 2025-10-02 12:42:18.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:18 compute-0 nova_compute[257802]: 2025-10-02 12:42:18.443 2 DEBUG os_brick.utils [None req-fe81a130-e93c-458d-877a-492024bf47d8 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:42:18 compute-0 nova_compute[257802]: 2025-10-02 12:42:18.444 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:18 compute-0 nova_compute[257802]: 2025-10-02 12:42:18.456 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:18 compute-0 nova_compute[257802]: 2025-10-02 12:42:18.456 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[13e8c741-c78c-43e8-9325-1cc0aacfb1d4]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:18 compute-0 nova_compute[257802]: 2025-10-02 12:42:18.457 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:18 compute-0 nova_compute[257802]: 2025-10-02 12:42:18.465 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:18 compute-0 nova_compute[257802]: 2025-10-02 12:42:18.465 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[eca91bc0-c9e3-416d-81ee-c19d0bb8913c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:18 compute-0 nova_compute[257802]: 2025-10-02 12:42:18.466 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:18 compute-0 nova_compute[257802]: 2025-10-02 12:42:18.475 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:18 compute-0 nova_compute[257802]: 2025-10-02 12:42:18.477 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[777a2069-47b7-4819-a3fd-99794409f375]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:18 compute-0 nova_compute[257802]: 2025-10-02 12:42:18.478 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[db49ad66-5f41-4839-b3c4-63dd755e65f3]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:18 compute-0 nova_compute[257802]: 2025-10-02 12:42:18.479 2 DEBUG oslo_concurrency.processutils [None req-fe81a130-e93c-458d-877a-492024bf47d8 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:18 compute-0 nova_compute[257802]: 2025-10-02 12:42:18.505 2 DEBUG oslo_concurrency.processutils [None req-fe81a130-e93c-458d-877a-492024bf47d8 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:18 compute-0 nova_compute[257802]: 2025-10-02 12:42:18.508 2 DEBUG os_brick.initiator.connectors.lightos [None req-fe81a130-e93c-458d-877a-492024bf47d8 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:42:18 compute-0 nova_compute[257802]: 2025-10-02 12:42:18.508 2 DEBUG os_brick.initiator.connectors.lightos [None req-fe81a130-e93c-458d-877a-492024bf47d8 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:42:18 compute-0 nova_compute[257802]: 2025-10-02 12:42:18.508 2 DEBUG os_brick.initiator.connectors.lightos [None req-fe81a130-e93c-458d-877a-492024bf47d8 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:42:18 compute-0 nova_compute[257802]: 2025-10-02 12:42:18.509 2 DEBUG os_brick.utils [None req-fe81a130-e93c-458d-877a-492024bf47d8 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] <== get_connector_properties: return (65ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:42:18 compute-0 nova_compute[257802]: 2025-10-02 12:42:18.509 2 DEBUG nova.virt.block_device [None req-fe81a130-e93c-458d-877a-492024bf47d8 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Updating existing volume attachment record: d1e501a6-68c9-4c89-b28c-0a15c57c5088 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:42:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e343 do_prune osdmap full prune enabled
Oct 02 12:42:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:18.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e344 e344: 3 total, 3 up, 3 in
Oct 02 12:42:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2451229646' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:18 compute-0 ceph-mon[73607]: pgmap v2453: 305 pgs: 305 active+clean; 763 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 136 op/s
Oct 02 12:42:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2539095240' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:18 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e344: 3 total, 3 up, 3 in
Oct 02 12:42:19 compute-0 nova_compute[257802]: 2025-10-02 12:42:19.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:19 compute-0 nova_compute[257802]: 2025-10-02 12:42:19.536 2 DEBUG nova.objects.instance [None req-fe81a130-e93c-458d-877a-492024bf47d8 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lazy-loading 'flavor' on Instance uuid 714ae75f-1424-4b97-b849-84e5b4e77668 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:19 compute-0 nova_compute[257802]: 2025-10-02 12:42:19.584 2 DEBUG nova.virt.libvirt.driver [None req-fe81a130-e93c-458d-877a-492024bf47d8 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Attempting to attach volume 19e6039d-9ba8-4938-aa46-4b4209e53456 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 12:42:19 compute-0 nova_compute[257802]: 2025-10-02 12:42:19.587 2 DEBUG nova.virt.libvirt.guest [None req-fe81a130-e93c-458d-877a-492024bf47d8 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 12:42:19 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:42:19 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-19e6039d-9ba8-4938-aa46-4b4209e53456">
Oct 02 12:42:19 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:42:19 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:42:19 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:42:19 compute-0 nova_compute[257802]:   </source>
Oct 02 12:42:19 compute-0 nova_compute[257802]:   <auth username="openstack">
Oct 02 12:42:19 compute-0 nova_compute[257802]:     <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:42:19 compute-0 nova_compute[257802]:   </auth>
Oct 02 12:42:19 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:42:19 compute-0 nova_compute[257802]:   <serial>19e6039d-9ba8-4938-aa46-4b4209e53456</serial>
Oct 02 12:42:19 compute-0 nova_compute[257802]: </disk>
Oct 02 12:42:19 compute-0 nova_compute[257802]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:42:19 compute-0 nova_compute[257802]: 2025-10-02 12:42:19.864 2 DEBUG nova.virt.libvirt.driver [None req-fe81a130-e93c-458d-877a-492024bf47d8 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:42:19 compute-0 nova_compute[257802]: 2025-10-02 12:42:19.864 2 DEBUG nova.virt.libvirt.driver [None req-fe81a130-e93c-458d-877a-492024bf47d8 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:42:19 compute-0 nova_compute[257802]: 2025-10-02 12:42:19.865 2 DEBUG nova.virt.libvirt.driver [None req-fe81a130-e93c-458d-877a-492024bf47d8 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:42:19 compute-0 nova_compute[257802]: 2025-10-02 12:42:19.865 2 DEBUG nova.virt.libvirt.driver [None req-fe81a130-e93c-458d-877a-492024bf47d8 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] No VIF found with MAC fa:16:3e:1a:b3:16, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:42:19 compute-0 ceph-mon[73607]: osdmap e344: 3 total, 3 up, 3 in
Oct 02 12:42:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1546775005' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2455: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 763 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 141 op/s
Oct 02 12:42:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:19.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:20 compute-0 nova_compute[257802]: 2025-10-02 12:42:20.165 2 DEBUG oslo_concurrency.lockutils [None req-fe81a130-e93c-458d-877a-492024bf47d8 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lock "714ae75f-1424-4b97-b849-84e5b4e77668" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:20 compute-0 nova_compute[257802]: 2025-10-02 12:42:20.545 2 INFO nova.compute.manager [None req-9a904cb4-a2f4-471e-8595-48e9f44fa160 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Get console output
Oct 02 12:42:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:20.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:20 compute-0 ceph-mon[73607]: pgmap v2455: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 763 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 141 op/s
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.111 2 DEBUG oslo_concurrency.lockutils [None req-181e9885-14fb-44d0-a95f-b71bca1df5c9 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Acquiring lock "714ae75f-1424-4b97-b849-84e5b4e77668" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.112 2 DEBUG oslo_concurrency.lockutils [None req-181e9885-14fb-44d0-a95f-b71bca1df5c9 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lock "714ae75f-1424-4b97-b849-84e5b4e77668" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.129 2 DEBUG nova.objects.instance [None req-181e9885-14fb-44d0-a95f-b71bca1df5c9 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lazy-loading 'flavor' on Instance uuid 714ae75f-1424-4b97-b849-84e5b4e77668 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.163 2 DEBUG oslo_concurrency.lockutils [None req-181e9885-14fb-44d0-a95f-b71bca1df5c9 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lock "714ae75f-1424-4b97-b849-84e5b4e77668" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.475 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Updating instance_info_cache with network_info: [{"id": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "address": "fa:16:3e:ec:94:f8", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c06cc55-6b", "ovs_interfaceid": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.490 2 DEBUG oslo_concurrency.lockutils [None req-181e9885-14fb-44d0-a95f-b71bca1df5c9 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Acquiring lock "714ae75f-1424-4b97-b849-84e5b4e77668" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.491 2 DEBUG oslo_concurrency.lockutils [None req-181e9885-14fb-44d0-a95f-b71bca1df5c9 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lock "714ae75f-1424-4b97-b849-84e5b4e77668" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.491 2 INFO nova.compute.manager [None req-181e9885-14fb-44d0-a95f-b71bca1df5c9 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Attaching volume 4fa9e646-33dd-4e8b-95f5-a1b4436473bb to /dev/vdc
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.608 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-17766045-13fc-4377-848f-6815e8a474d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.608 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.609 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.646 2 DEBUG os_brick.utils [None req-181e9885-14fb-44d0-a95f-b71bca1df5c9 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.647 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.657 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.657 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[c6de78f8-57f1-4760-9e4c-bca2498fbfb2]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.658 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.665 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.666 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[ffec7404-0917-4067-baf8-3e7e889338e7]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.667 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.675 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.675 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[12234ddc-ee1c-4827-ac90-15644f943f21]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.676 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[303fe43a-714a-4014-8c2a-2484814f91ce]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.677 2 DEBUG oslo_concurrency.processutils [None req-181e9885-14fb-44d0-a95f-b71bca1df5c9 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.704 2 DEBUG oslo_concurrency.processutils [None req-181e9885-14fb-44d0-a95f-b71bca1df5c9 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.707 2 DEBUG os_brick.initiator.connectors.lightos [None req-181e9885-14fb-44d0-a95f-b71bca1df5c9 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.707 2 DEBUG os_brick.initiator.connectors.lightos [None req-181e9885-14fb-44d0-a95f-b71bca1df5c9 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.707 2 DEBUG os_brick.initiator.connectors.lightos [None req-181e9885-14fb-44d0-a95f-b71bca1df5c9 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.708 2 DEBUG os_brick.utils [None req-181e9885-14fb-44d0-a95f-b71bca1df5c9 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] <== get_connector_properties: return (60ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.708 2 DEBUG nova.virt.block_device [None req-181e9885-14fb-44d0-a95f-b71bca1df5c9 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Updating existing volume attachment record: 809af09e-0563-4d39-bb5f-ac55e629447e _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.848 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.848 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.848 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.849 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:42:21 compute-0 nova_compute[257802]: 2025-10-02 12:42:21.849 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2456: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 763 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 102 op/s
Oct 02 12:42:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:21.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:42:22 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3149675546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:22 compute-0 nova_compute[257802]: 2025-10-02 12:42:22.405 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:42:22 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2799676091' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:22 compute-0 nova_compute[257802]: 2025-10-02 12:42:22.525 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:42:22 compute-0 nova_compute[257802]: 2025-10-02 12:42:22.526 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:42:22 compute-0 nova_compute[257802]: 2025-10-02 12:42:22.530 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000009a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:42:22 compute-0 nova_compute[257802]: 2025-10-02 12:42:22.530 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000009a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:42:22 compute-0 nova_compute[257802]: 2025-10-02 12:42:22.534 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000099 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:42:22 compute-0 nova_compute[257802]: 2025-10-02 12:42:22.535 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000099 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:42:22 compute-0 nova_compute[257802]: 2025-10-02 12:42:22.535 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000099 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:42:22 compute-0 nova_compute[257802]: 2025-10-02 12:42:22.648 2 DEBUG nova.objects.instance [None req-181e9885-14fb-44d0-a95f-b71bca1df5c9 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lazy-loading 'flavor' on Instance uuid 714ae75f-1424-4b97-b849-84e5b4e77668 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:22 compute-0 nova_compute[257802]: 2025-10-02 12:42:22.695 2 DEBUG nova.virt.libvirt.driver [None req-181e9885-14fb-44d0-a95f-b71bca1df5c9 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Attempting to attach volume 4fa9e646-33dd-4e8b-95f5-a1b4436473bb with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 12:42:22 compute-0 nova_compute[257802]: 2025-10-02 12:42:22.697 2 DEBUG nova.virt.libvirt.guest [None req-181e9885-14fb-44d0-a95f-b71bca1df5c9 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 12:42:22 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:42:22 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-4fa9e646-33dd-4e8b-95f5-a1b4436473bb">
Oct 02 12:42:22 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:42:22 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:42:22 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:42:22 compute-0 nova_compute[257802]:   </source>
Oct 02 12:42:22 compute-0 nova_compute[257802]:   <auth username="openstack">
Oct 02 12:42:22 compute-0 nova_compute[257802]:     <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:42:22 compute-0 nova_compute[257802]:   </auth>
Oct 02 12:42:22 compute-0 nova_compute[257802]:   <target dev="vdc" bus="virtio"/>
Oct 02 12:42:22 compute-0 nova_compute[257802]:   <serial>4fa9e646-33dd-4e8b-95f5-a1b4436473bb</serial>
Oct 02 12:42:22 compute-0 nova_compute[257802]: </disk>
Oct 02 12:42:22 compute-0 nova_compute[257802]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:42:22 compute-0 nova_compute[257802]: 2025-10-02 12:42:22.721 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:42:22 compute-0 nova_compute[257802]: 2025-10-02 12:42:22.722 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3635MB free_disk=20.83023452758789GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:42:22 compute-0 nova_compute[257802]: 2025-10-02 12:42:22.722 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:22 compute-0 nova_compute[257802]: 2025-10-02 12:42:22.722 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:22.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:22 compute-0 nova_compute[257802]: 2025-10-02 12:42:22.874 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 17766045-13fc-4377-848f-6815e8a474d5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:42:22 compute-0 nova_compute[257802]: 2025-10-02 12:42:22.875 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 714ae75f-1424-4b97-b849-84e5b4e77668 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:42:22 compute-0 nova_compute[257802]: 2025-10-02 12:42:22.875 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance f3566799-fdd0-46bf-8256-0294a227030a actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:42:22 compute-0 nova_compute[257802]: 2025-10-02 12:42:22.876 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:42:22 compute-0 nova_compute[257802]: 2025-10-02 12:42:22.876 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=960MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:42:22 compute-0 nova_compute[257802]: 2025-10-02 12:42:22.972 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:23 compute-0 nova_compute[257802]: 2025-10-02 12:42:23.148 2 DEBUG nova.virt.libvirt.driver [None req-181e9885-14fb-44d0-a95f-b71bca1df5c9 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:42:23 compute-0 nova_compute[257802]: 2025-10-02 12:42:23.148 2 DEBUG nova.virt.libvirt.driver [None req-181e9885-14fb-44d0-a95f-b71bca1df5c9 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:42:23 compute-0 nova_compute[257802]: 2025-10-02 12:42:23.149 2 DEBUG nova.virt.libvirt.driver [None req-181e9885-14fb-44d0-a95f-b71bca1df5c9 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:42:23 compute-0 nova_compute[257802]: 2025-10-02 12:42:23.149 2 DEBUG nova.virt.libvirt.driver [None req-181e9885-14fb-44d0-a95f-b71bca1df5c9 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:42:23 compute-0 nova_compute[257802]: 2025-10-02 12:42:23.149 2 DEBUG nova.virt.libvirt.driver [None req-181e9885-14fb-44d0-a95f-b71bca1df5c9 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] No VIF found with MAC fa:16:3e:1a:b3:16, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:42:23 compute-0 ceph-mon[73607]: pgmap v2456: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 763 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 102 op/s
Oct 02 12:42:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3149675546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2799676091' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:23 compute-0 nova_compute[257802]: 2025-10-02 12:42:23.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:23 compute-0 nova_compute[257802]: 2025-10-02 12:42:23.312 2 DEBUG oslo_concurrency.lockutils [None req-181e9885-14fb-44d0-a95f-b71bca1df5c9 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lock "714ae75f-1424-4b97-b849-84e5b4e77668" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:42:23 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2839440443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:23 compute-0 nova_compute[257802]: 2025-10-02 12:42:23.433 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:23 compute-0 nova_compute[257802]: 2025-10-02 12:42:23.438 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:42:23 compute-0 nova_compute[257802]: 2025-10-02 12:42:23.452 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:42:23 compute-0 nova_compute[257802]: 2025-10-02 12:42:23.482 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:42:23 compute-0 nova_compute[257802]: 2025-10-02 12:42:23.482 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:23 compute-0 ovn_controller[148183]: 2025-10-02T12:42:23Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1d:c1:2f 10.100.0.14
Oct 02 12:42:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2457: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 763 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 797 KiB/s rd, 1.2 MiB/s wr, 72 op/s
Oct 02 12:42:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:42:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:23.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:42:24 compute-0 nova_compute[257802]: 2025-10-02 12:42:24.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2839440443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3383345508' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:42:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:24.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:42:25 compute-0 ceph-mon[73607]: pgmap v2457: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 763 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 797 KiB/s rd, 1.2 MiB/s wr, 72 op/s
Oct 02 12:42:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/247854997' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2458: 305 pgs: 305 active+clean; 763 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 58 KiB/s wr, 136 op/s
Oct 02 12:42:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:25.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:26.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:26.960 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:26.960 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:26.961 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2459: 305 pgs: 305 active+clean; 763 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 58 KiB/s wr, 136 op/s
Oct 02 12:42:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:42:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:27.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:42:28 compute-0 nova_compute[257802]: 2025-10-02 12:42:28.090 2 DEBUG nova.compute.manager [req-54a36d95-e602-4bb5-a398-40b33bd88e2a req-67610c47-e087-4ade-9363-0c03d8ce3960 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Received event network-changed-763709ed-3fe4-45a4-8a2f-4b21f4534590 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:28 compute-0 nova_compute[257802]: 2025-10-02 12:42:28.091 2 DEBUG nova.compute.manager [req-54a36d95-e602-4bb5-a398-40b33bd88e2a req-67610c47-e087-4ade-9363-0c03d8ce3960 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Refreshing instance network info cache due to event network-changed-763709ed-3fe4-45a4-8a2f-4b21f4534590. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:42:28 compute-0 nova_compute[257802]: 2025-10-02 12:42:28.091 2 DEBUG oslo_concurrency.lockutils [req-54a36d95-e602-4bb5-a398-40b33bd88e2a req-67610c47-e087-4ade-9363-0c03d8ce3960 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-714ae75f-1424-4b97-b849-84e5b4e77668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:42:28 compute-0 nova_compute[257802]: 2025-10-02 12:42:28.091 2 DEBUG oslo_concurrency.lockutils [req-54a36d95-e602-4bb5-a398-40b33bd88e2a req-67610c47-e087-4ade-9363-0c03d8ce3960 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-714ae75f-1424-4b97-b849-84e5b4e77668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:42:28 compute-0 nova_compute[257802]: 2025-10-02 12:42:28.091 2 DEBUG nova.network.neutron [req-54a36d95-e602-4bb5-a398-40b33bd88e2a req-67610c47-e087-4ade-9363-0c03d8ce3960 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Refreshing network info cache for port 763709ed-3fe4-45a4-8a2f-4b21f4534590 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:42:28 compute-0 ceph-mon[73607]: pgmap v2458: 305 pgs: 305 active+clean; 763 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 58 KiB/s wr, 136 op/s
Oct 02 12:42:28 compute-0 nova_compute[257802]: 2025-10-02 12:42:28.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e344 do_prune osdmap full prune enabled
Oct 02 12:42:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e345 e345: 3 total, 3 up, 3 in
Oct 02 12:42:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:28.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:28 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e345: 3 total, 3 up, 3 in
Oct 02 12:42:29 compute-0 nova_compute[257802]: 2025-10-02 12:42:29.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:29 compute-0 nova_compute[257802]: 2025-10-02 12:42:29.208 2 DEBUG nova.network.neutron [req-54a36d95-e602-4bb5-a398-40b33bd88e2a req-67610c47-e087-4ade-9363-0c03d8ce3960 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Updated VIF entry in instance network info cache for port 763709ed-3fe4-45a4-8a2f-4b21f4534590. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:42:29 compute-0 nova_compute[257802]: 2025-10-02 12:42:29.209 2 DEBUG nova.network.neutron [req-54a36d95-e602-4bb5-a398-40b33bd88e2a req-67610c47-e087-4ade-9363-0c03d8ce3960 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Updating instance_info_cache with network_info: [{"id": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "address": "fa:16:3e:1a:b3:16", "network": {"id": "e7b8a8de-b6cd-4283-854b-a2bd919c371d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1851369337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a41d99312f014c65adddea4f70536a15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap763709ed-3f", "ovs_interfaceid": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:42:29 compute-0 nova_compute[257802]: 2025-10-02 12:42:29.255 2 DEBUG oslo_concurrency.lockutils [req-54a36d95-e602-4bb5-a398-40b33bd88e2a req-67610c47-e087-4ade-9363-0c03d8ce3960 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-714ae75f-1424-4b97-b849-84e5b4e77668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:42:29 compute-0 nova_compute[257802]: 2025-10-02 12:42:29.387 2 DEBUG nova.compute.manager [req-38347adb-5692-4bd1-be2f-dd59ab92902f req-25c4da31-9908-4438-94c4-7eadf2b8ce65 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Received event network-changed-763709ed-3fe4-45a4-8a2f-4b21f4534590 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:29 compute-0 nova_compute[257802]: 2025-10-02 12:42:29.387 2 DEBUG nova.compute.manager [req-38347adb-5692-4bd1-be2f-dd59ab92902f req-25c4da31-9908-4438-94c4-7eadf2b8ce65 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Refreshing instance network info cache due to event network-changed-763709ed-3fe4-45a4-8a2f-4b21f4534590. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:42:29 compute-0 nova_compute[257802]: 2025-10-02 12:42:29.387 2 DEBUG oslo_concurrency.lockutils [req-38347adb-5692-4bd1-be2f-dd59ab92902f req-25c4da31-9908-4438-94c4-7eadf2b8ce65 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-714ae75f-1424-4b97-b849-84e5b4e77668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:42:29 compute-0 nova_compute[257802]: 2025-10-02 12:42:29.388 2 DEBUG oslo_concurrency.lockutils [req-38347adb-5692-4bd1-be2f-dd59ab92902f req-25c4da31-9908-4438-94c4-7eadf2b8ce65 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-714ae75f-1424-4b97-b849-84e5b4e77668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:42:29 compute-0 nova_compute[257802]: 2025-10-02 12:42:29.388 2 DEBUG nova.network.neutron [req-38347adb-5692-4bd1-be2f-dd59ab92902f req-25c4da31-9908-4438-94c4-7eadf2b8ce65 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Refreshing network info cache for port 763709ed-3fe4-45a4-8a2f-4b21f4534590 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:42:29 compute-0 ceph-mon[73607]: pgmap v2459: 305 pgs: 305 active+clean; 763 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 58 KiB/s wr, 136 op/s
Oct 02 12:42:29 compute-0 ceph-mon[73607]: osdmap e345: 3 total, 3 up, 3 in
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:29.598 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:42:29 compute-0 nova_compute[257802]: 2025-10-02 12:42:29.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:29.599 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:42:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2461: 305 pgs: 305 active+clean; 763 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 45 KiB/s wr, 232 op/s
Oct 02 12:42:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:29.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:29 compute-0 nova_compute[257802]: 2025-10-02 12:42:29.971 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:42:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:42:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:30.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:42:30 compute-0 nova_compute[257802]: 2025-10-02 12:42:30.829 2 DEBUG nova.network.neutron [req-38347adb-5692-4bd1-be2f-dd59ab92902f req-25c4da31-9908-4438-94c4-7eadf2b8ce65 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Updated VIF entry in instance network info cache for port 763709ed-3fe4-45a4-8a2f-4b21f4534590. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:42:30 compute-0 nova_compute[257802]: 2025-10-02 12:42:30.831 2 DEBUG nova.network.neutron [req-38347adb-5692-4bd1-be2f-dd59ab92902f req-25c4da31-9908-4438-94c4-7eadf2b8ce65 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Updating instance_info_cache with network_info: [{"id": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "address": "fa:16:3e:1a:b3:16", "network": {"id": "e7b8a8de-b6cd-4283-854b-a2bd919c371d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1851369337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a41d99312f014c65adddea4f70536a15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap763709ed-3f", "ovs_interfaceid": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:42:30 compute-0 nova_compute[257802]: 2025-10-02 12:42:30.856 2 DEBUG oslo_concurrency.lockutils [req-38347adb-5692-4bd1-be2f-dd59ab92902f req-25c4da31-9908-4438-94c4-7eadf2b8ce65 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-714ae75f-1424-4b97-b849-84e5b4e77668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:42:31 compute-0 nova_compute[257802]: 2025-10-02 12:42:31.482 2 DEBUG nova.compute.manager [req-b1cba558-7690-4d6a-a965-50c78e64450c req-d7f2a1ae-de28-4090-8342-5d2c8290f933 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Received event network-changed-763709ed-3fe4-45a4-8a2f-4b21f4534590 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:31 compute-0 nova_compute[257802]: 2025-10-02 12:42:31.482 2 DEBUG nova.compute.manager [req-b1cba558-7690-4d6a-a965-50c78e64450c req-d7f2a1ae-de28-4090-8342-5d2c8290f933 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Refreshing instance network info cache due to event network-changed-763709ed-3fe4-45a4-8a2f-4b21f4534590. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:42:31 compute-0 nova_compute[257802]: 2025-10-02 12:42:31.483 2 DEBUG oslo_concurrency.lockutils [req-b1cba558-7690-4d6a-a965-50c78e64450c req-d7f2a1ae-de28-4090-8342-5d2c8290f933 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-714ae75f-1424-4b97-b849-84e5b4e77668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:42:31 compute-0 nova_compute[257802]: 2025-10-02 12:42:31.483 2 DEBUG oslo_concurrency.lockutils [req-b1cba558-7690-4d6a-a965-50c78e64450c req-d7f2a1ae-de28-4090-8342-5d2c8290f933 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-714ae75f-1424-4b97-b849-84e5b4e77668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:42:31 compute-0 nova_compute[257802]: 2025-10-02 12:42:31.483 2 DEBUG nova.network.neutron [req-b1cba558-7690-4d6a-a965-50c78e64450c req-d7f2a1ae-de28-4090-8342-5d2c8290f933 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Refreshing network info cache for port 763709ed-3fe4-45a4-8a2f-4b21f4534590 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:42:31 compute-0 ceph-mon[73607]: pgmap v2461: 305 pgs: 305 active+clean; 763 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 45 KiB/s wr, 232 op/s
Oct 02 12:42:31 compute-0 sudo[353916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:31 compute-0 sudo[353916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:31 compute-0 sudo[353916]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:31 compute-0 sudo[353959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:31 compute-0 sudo[353959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:31 compute-0 podman[353942]: 2025-10-02 12:42:31.847782779 +0000 UTC m=+0.061093337 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:42:31 compute-0 sudo[353959]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:31 compute-0 podman[353941]: 2025-10-02 12:42:31.852792802 +0000 UTC m=+0.069223126 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 12:42:31 compute-0 podman[353940]: 2025-10-02 12:42:31.881749339 +0000 UTC m=+0.099523666 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:42:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2462: 305 pgs: 305 active+clean; 765 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.2 MiB/s rd, 44 KiB/s wr, 243 op/s
Oct 02 12:42:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:31.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:32 compute-0 ceph-mon[73607]: pgmap v2462: 305 pgs: 305 active+clean; 765 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.2 MiB/s rd, 44 KiB/s wr, 243 op/s
Oct 02 12:42:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:32.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:33 compute-0 nova_compute[257802]: 2025-10-02 12:42:33.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:33 compute-0 nova_compute[257802]: 2025-10-02 12:42:33.668 2 DEBUG nova.network.neutron [req-b1cba558-7690-4d6a-a965-50c78e64450c req-d7f2a1ae-de28-4090-8342-5d2c8290f933 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Updated VIF entry in instance network info cache for port 763709ed-3fe4-45a4-8a2f-4b21f4534590. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:42:33 compute-0 nova_compute[257802]: 2025-10-02 12:42:33.668 2 DEBUG nova.network.neutron [req-b1cba558-7690-4d6a-a965-50c78e64450c req-d7f2a1ae-de28-4090-8342-5d2c8290f933 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Updating instance_info_cache with network_info: [{"id": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "address": "fa:16:3e:1a:b3:16", "network": {"id": "e7b8a8de-b6cd-4283-854b-a2bd919c371d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1851369337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a41d99312f014c65adddea4f70536a15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap763709ed-3f", "ovs_interfaceid": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:42:33 compute-0 nova_compute[257802]: 2025-10-02 12:42:33.682 2 DEBUG oslo_concurrency.lockutils [req-b1cba558-7690-4d6a-a965-50c78e64450c req-d7f2a1ae-de28-4090-8342-5d2c8290f933 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-714ae75f-1424-4b97-b849-84e5b4e77668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:42:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2463: 305 pgs: 305 active+clean; 765 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.1 MiB/s rd, 28 KiB/s wr, 218 op/s
Oct 02 12:42:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:33.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:34 compute-0 nova_compute[257802]: 2025-10-02 12:42:34.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:34.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:34 compute-0 nova_compute[257802]: 2025-10-02 12:42:34.804 2 DEBUG nova.compute.manager [req-ee2e1e50-cc14-4bd9-bef3-7fb5bea080ee req-a2a79ce5-867a-4fa9-afd0-c9a972c66684 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Received event network-changed-763709ed-3fe4-45a4-8a2f-4b21f4534590 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:34 compute-0 nova_compute[257802]: 2025-10-02 12:42:34.804 2 DEBUG nova.compute.manager [req-ee2e1e50-cc14-4bd9-bef3-7fb5bea080ee req-a2a79ce5-867a-4fa9-afd0-c9a972c66684 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Refreshing instance network info cache due to event network-changed-763709ed-3fe4-45a4-8a2f-4b21f4534590. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:42:34 compute-0 nova_compute[257802]: 2025-10-02 12:42:34.805 2 DEBUG oslo_concurrency.lockutils [req-ee2e1e50-cc14-4bd9-bef3-7fb5bea080ee req-a2a79ce5-867a-4fa9-afd0-c9a972c66684 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-714ae75f-1424-4b97-b849-84e5b4e77668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:42:34 compute-0 nova_compute[257802]: 2025-10-02 12:42:34.805 2 DEBUG oslo_concurrency.lockutils [req-ee2e1e50-cc14-4bd9-bef3-7fb5bea080ee req-a2a79ce5-867a-4fa9-afd0-c9a972c66684 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-714ae75f-1424-4b97-b849-84e5b4e77668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:42:34 compute-0 nova_compute[257802]: 2025-10-02 12:42:34.805 2 DEBUG nova.network.neutron [req-ee2e1e50-cc14-4bd9-bef3-7fb5bea080ee req-a2a79ce5-867a-4fa9-afd0-c9a972c66684 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Refreshing network info cache for port 763709ed-3fe4-45a4-8a2f-4b21f4534590 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:42:35 compute-0 ceph-mon[73607]: pgmap v2463: 305 pgs: 305 active+clean; 765 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.1 MiB/s rd, 28 KiB/s wr, 218 op/s
Oct 02 12:42:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2464: 305 pgs: 305 active+clean; 772 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.2 MiB/s wr, 131 op/s
Oct 02 12:42:35 compute-0 podman[354028]: 2025-10-02 12:42:35.960884831 +0000 UTC m=+0.103252988 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 12:42:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:42:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:35.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:42:36 compute-0 nova_compute[257802]: 2025-10-02 12:42:36.479 2 DEBUG nova.network.neutron [req-ee2e1e50-cc14-4bd9-bef3-7fb5bea080ee req-a2a79ce5-867a-4fa9-afd0-c9a972c66684 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Updated VIF entry in instance network info cache for port 763709ed-3fe4-45a4-8a2f-4b21f4534590. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:42:36 compute-0 nova_compute[257802]: 2025-10-02 12:42:36.480 2 DEBUG nova.network.neutron [req-ee2e1e50-cc14-4bd9-bef3-7fb5bea080ee req-a2a79ce5-867a-4fa9-afd0-c9a972c66684 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Updating instance_info_cache with network_info: [{"id": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "address": "fa:16:3e:1a:b3:16", "network": {"id": "e7b8a8de-b6cd-4283-854b-a2bd919c371d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1851369337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a41d99312f014c65adddea4f70536a15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap763709ed-3f", "ovs_interfaceid": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:42:36 compute-0 nova_compute[257802]: 2025-10-02 12:42:36.496 2 DEBUG oslo_concurrency.lockutils [req-ee2e1e50-cc14-4bd9-bef3-7fb5bea080ee req-a2a79ce5-867a-4fa9-afd0-c9a972c66684 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-714ae75f-1424-4b97-b849-84e5b4e77668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:42:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:36.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:36 compute-0 nova_compute[257802]: 2025-10-02 12:42:36.807 2 DEBUG oslo_concurrency.lockutils [None req-be41fee9-1efc-4e07-a326-39578a2b2c33 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "f3566799-fdd0-46bf-8256-0294a227030a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:36 compute-0 nova_compute[257802]: 2025-10-02 12:42:36.808 2 DEBUG oslo_concurrency.lockutils [None req-be41fee9-1efc-4e07-a326-39578a2b2c33 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:36 compute-0 nova_compute[257802]: 2025-10-02 12:42:36.809 2 DEBUG oslo_concurrency.lockutils [None req-be41fee9-1efc-4e07-a326-39578a2b2c33 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "f3566799-fdd0-46bf-8256-0294a227030a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:36 compute-0 nova_compute[257802]: 2025-10-02 12:42:36.809 2 DEBUG oslo_concurrency.lockutils [None req-be41fee9-1efc-4e07-a326-39578a2b2c33 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:36 compute-0 nova_compute[257802]: 2025-10-02 12:42:36.809 2 DEBUG oslo_concurrency.lockutils [None req-be41fee9-1efc-4e07-a326-39578a2b2c33 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:36 compute-0 nova_compute[257802]: 2025-10-02 12:42:36.810 2 INFO nova.compute.manager [None req-be41fee9-1efc-4e07-a326-39578a2b2c33 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Terminating instance
Oct 02 12:42:36 compute-0 nova_compute[257802]: 2025-10-02 12:42:36.811 2 DEBUG nova.compute.manager [None req-be41fee9-1efc-4e07-a326-39578a2b2c33 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:42:36 compute-0 kernel: tapff63f6e3-cb (unregistering): left promiscuous mode
Oct 02 12:42:37 compute-0 NetworkManager[44987]: <info>  [1759408957.0037] device (tapff63f6e3-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:42:37 compute-0 ovn_controller[148183]: 2025-10-02T12:42:37Z|00669|binding|INFO|Releasing lport ff63f6e3-cb88-42d6-98e3-4f78430c7896 from this chassis (sb_readonly=0)
Oct 02 12:42:37 compute-0 ovn_controller[148183]: 2025-10-02T12:42:37Z|00670|binding|INFO|Setting lport ff63f6e3-cb88-42d6-98e3-4f78430c7896 down in Southbound
Oct 02 12:42:37 compute-0 nova_compute[257802]: 2025-10-02 12:42:37.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:37 compute-0 ovn_controller[148183]: 2025-10-02T12:42:37Z|00671|binding|INFO|Removing iface tapff63f6e3-cb ovn-installed in OVS
Oct 02 12:42:37 compute-0 nova_compute[257802]: 2025-10-02 12:42:37.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:37.023 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:c1:2f 10.100.0.14'], port_security=['fa:16:3e:1d:c1:2f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f3566799-fdd0-46bf-8256-0294a227030a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00455285-97a7-4fa2-ba83-e8060936877e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6822f02d5ca04c659329a75d487054cf', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'b290a23e-28e7-483f-bf5b-c42418308591', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.250', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f978d0a7-f86b-440f-a8b5-5432c3a4bc91, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=ff63f6e3-cb88-42d6-98e3-4f78430c7896) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:42:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:37.024 158261 INFO neutron.agent.ovn.metadata.agent [-] Port ff63f6e3-cb88-42d6-98e3-4f78430c7896 in datapath 00455285-97a7-4fa2-ba83-e8060936877e unbound from our chassis
Oct 02 12:42:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:37.025 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00455285-97a7-4fa2-ba83-e8060936877e
Oct 02 12:42:37 compute-0 nova_compute[257802]: 2025-10-02 12:42:37.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:37.043 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c20ffca1-9d54-435f-b621-3cc4d5c99a33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:37.078 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[30d18611-b644-4b0a-9273-c9405adaafc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:37.082 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[a11d7251-d3b4-4ae8-8511-f6e30fb0d153]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:37 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d0000009a.scope: Deactivated successfully.
Oct 02 12:42:37 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d0000009a.scope: Consumed 14.404s CPU time.
Oct 02 12:42:37 compute-0 systemd-machined[211836]: Machine qemu-76-instance-0000009a terminated.
Oct 02 12:42:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:37.117 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[1a4fddde-cd62-409b-b4b2-24612bec0afd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:37.133 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[94293d44-6fd1-4b5d-8f68-3782cf1f9a6c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00455285-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f6:8a:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 194], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 666919, 'reachable_time': 23631, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354067, 'error': None, 'target': 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:37.150 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c81f7770-1c8e-4956-b0d8-6d411e95af8d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap00455285-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 666932, 'tstamp': 666932}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354068, 'error': None, 'target': 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap00455285-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 666935, 'tstamp': 666935}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354068, 'error': None, 'target': 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:37.152 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00455285-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:37 compute-0 nova_compute[257802]: 2025-10-02 12:42:37.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:37 compute-0 nova_compute[257802]: 2025-10-02 12:42:37.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:37.160 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00455285-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:37.161 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:42:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:37.161 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00455285-90, col_values=(('external_ids', {'iface-id': '293fb87a-10df-4698-a69e-3023bca5a6a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:37.161 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:42:37 compute-0 nova_compute[257802]: 2025-10-02 12:42:37.253 2 INFO nova.virt.libvirt.driver [-] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Instance destroyed successfully.
Oct 02 12:42:37 compute-0 nova_compute[257802]: 2025-10-02 12:42:37.254 2 DEBUG nova.objects.instance [None req-be41fee9-1efc-4e07-a326-39578a2b2c33 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lazy-loading 'resources' on Instance uuid f3566799-fdd0-46bf-8256-0294a227030a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:37 compute-0 nova_compute[257802]: 2025-10-02 12:42:37.273 2 DEBUG nova.virt.libvirt.vif [None req-be41fee9-1efc-4e07-a326-39578a2b2c33 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:41:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-73692290',display_name='tempest-ServerActionsTestOtherA-server-73692290',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-73692290',id=154,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMKHZAu99lFpnj9bTKPOhdWg5y6PnKM9AGAnFElgiPr53bQbl7DsEAEg0Hu4Ea2RYl8QhrjFhPMuXkYw2ubt4hnzcTRuj+jAHGGBwDWRc1fX16YtY5a2rZP1IKxVnq/Inw==',key_name='tempest-keypair-26130845',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:42:09Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6822f02d5ca04c659329a75d487054cf',ramdisk_id='',reservation_id='r-la5chqsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-ServerActionsTestOtherA-1680083910',owner_user_name='tempest-ServerActionsTestOtherA-1680083910-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:42:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c02d1dcc10ea4e57bbc6b7a3c100dc7b',uuid=f3566799-fdd0-46bf-8256-0294a227030a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "address": "fa:16:3e:1d:c1:2f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", 
"version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff63f6e3-cb", "ovs_interfaceid": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:42:37 compute-0 nova_compute[257802]: 2025-10-02 12:42:37.274 2 DEBUG nova.network.os_vif_util [None req-be41fee9-1efc-4e07-a326-39578a2b2c33 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converting VIF {"id": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "address": "fa:16:3e:1d:c1:2f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff63f6e3-cb", "ovs_interfaceid": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:42:37 compute-0 nova_compute[257802]: 2025-10-02 12:42:37.275 2 DEBUG nova.network.os_vif_util [None req-be41fee9-1efc-4e07-a326-39578a2b2c33 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1d:c1:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff63f6e3-cb88-42d6-98e3-4f78430c7896,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff63f6e3-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:42:37 compute-0 nova_compute[257802]: 2025-10-02 12:42:37.275 2 DEBUG os_vif [None req-be41fee9-1efc-4e07-a326-39578a2b2c33 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1d:c1:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff63f6e3-cb88-42d6-98e3-4f78430c7896,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff63f6e3-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:42:37 compute-0 nova_compute[257802]: 2025-10-02 12:42:37.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:37 compute-0 nova_compute[257802]: 2025-10-02 12:42:37.277 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff63f6e3-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:37 compute-0 nova_compute[257802]: 2025-10-02 12:42:37.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:37 compute-0 nova_compute[257802]: 2025-10-02 12:42:37.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:42:37 compute-0 nova_compute[257802]: 2025-10-02 12:42:37.283 2 INFO os_vif [None req-be41fee9-1efc-4e07-a326-39578a2b2c33 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1d:c1:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff63f6e3-cb88-42d6-98e3-4f78430c7896,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff63f6e3-cb')
Oct 02 12:42:37 compute-0 ceph-mon[73607]: pgmap v2464: 305 pgs: 305 active+clean; 772 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.2 MiB/s wr, 131 op/s
Oct 02 12:42:37 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:37.601 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:37 compute-0 nova_compute[257802]: 2025-10-02 12:42:37.610 2 DEBUG nova.compute.manager [req-c1f6afbf-eb8f-4668-a514-92605f555bb5 req-ea7ed05a-b5b7-448e-8743-426c02b4b380 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Received event network-vif-unplugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:37 compute-0 nova_compute[257802]: 2025-10-02 12:42:37.611 2 DEBUG oslo_concurrency.lockutils [req-c1f6afbf-eb8f-4668-a514-92605f555bb5 req-ea7ed05a-b5b7-448e-8743-426c02b4b380 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "f3566799-fdd0-46bf-8256-0294a227030a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:37 compute-0 nova_compute[257802]: 2025-10-02 12:42:37.611 2 DEBUG oslo_concurrency.lockutils [req-c1f6afbf-eb8f-4668-a514-92605f555bb5 req-ea7ed05a-b5b7-448e-8743-426c02b4b380 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:37 compute-0 nova_compute[257802]: 2025-10-02 12:42:37.611 2 DEBUG oslo_concurrency.lockutils [req-c1f6afbf-eb8f-4668-a514-92605f555bb5 req-ea7ed05a-b5b7-448e-8743-426c02b4b380 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:37 compute-0 nova_compute[257802]: 2025-10-02 12:42:37.611 2 DEBUG nova.compute.manager [req-c1f6afbf-eb8f-4668-a514-92605f555bb5 req-ea7ed05a-b5b7-448e-8743-426c02b4b380 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] No waiting events found dispatching network-vif-unplugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:42:37 compute-0 nova_compute[257802]: 2025-10-02 12:42:37.611 2 DEBUG nova.compute.manager [req-c1f6afbf-eb8f-4668-a514-92605f555bb5 req-ea7ed05a-b5b7-448e-8743-426c02b4b380 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Received event network-vif-unplugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:42:37 compute-0 nova_compute[257802]: 2025-10-02 12:42:37.742 2 INFO nova.virt.libvirt.driver [None req-be41fee9-1efc-4e07-a326-39578a2b2c33 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Deleting instance files /var/lib/nova/instances/f3566799-fdd0-46bf-8256-0294a227030a_del
Oct 02 12:42:37 compute-0 nova_compute[257802]: 2025-10-02 12:42:37.744 2 INFO nova.virt.libvirt.driver [None req-be41fee9-1efc-4e07-a326-39578a2b2c33 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Deletion of /var/lib/nova/instances/f3566799-fdd0-46bf-8256-0294a227030a_del complete
Oct 02 12:42:37 compute-0 nova_compute[257802]: 2025-10-02 12:42:37.835 2 INFO nova.compute.manager [None req-be41fee9-1efc-4e07-a326-39578a2b2c33 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Took 1.02 seconds to destroy the instance on the hypervisor.
Oct 02 12:42:37 compute-0 nova_compute[257802]: 2025-10-02 12:42:37.835 2 DEBUG oslo.service.loopingcall [None req-be41fee9-1efc-4e07-a326-39578a2b2c33 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:42:37 compute-0 nova_compute[257802]: 2025-10-02 12:42:37.835 2 DEBUG nova.compute.manager [-] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:42:37 compute-0 nova_compute[257802]: 2025-10-02 12:42:37.836 2 DEBUG nova.network.neutron [-] [instance: f3566799-fdd0-46bf-8256-0294a227030a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:42:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2465: 305 pgs: 305 active+clean; 772 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.2 MiB/s wr, 131 op/s
Oct 02 12:42:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:37.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:37 compute-0 nova_compute[257802]: 2025-10-02 12:42:37.994 2 DEBUG oslo_concurrency.lockutils [None req-495a0a02-e7e7-4b21-9139-709324e37c23 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Acquiring lock "714ae75f-1424-4b97-b849-84e5b4e77668" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:37 compute-0 nova_compute[257802]: 2025-10-02 12:42:37.995 2 DEBUG oslo_concurrency.lockutils [None req-495a0a02-e7e7-4b21-9139-709324e37c23 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lock "714ae75f-1424-4b97-b849-84e5b4e77668" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:38 compute-0 nova_compute[257802]: 2025-10-02 12:42:38.013 2 INFO nova.compute.manager [None req-495a0a02-e7e7-4b21-9139-709324e37c23 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Detaching volume 19e6039d-9ba8-4938-aa46-4b4209e53456
Oct 02 12:42:38 compute-0 nova_compute[257802]: 2025-10-02 12:42:38.162 2 INFO nova.virt.block_device [None req-495a0a02-e7e7-4b21-9139-709324e37c23 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Attempting to driver detach volume 19e6039d-9ba8-4938-aa46-4b4209e53456 from mountpoint /dev/vdb
Oct 02 12:42:38 compute-0 nova_compute[257802]: 2025-10-02 12:42:38.171 2 DEBUG nova.virt.libvirt.driver [None req-495a0a02-e7e7-4b21-9139-709324e37c23 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Attempting to detach device vdb from instance 714ae75f-1424-4b97-b849-84e5b4e77668 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:42:38 compute-0 nova_compute[257802]: 2025-10-02 12:42:38.172 2 DEBUG nova.virt.libvirt.guest [None req-495a0a02-e7e7-4b21-9139-709324e37c23 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:42:38 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:42:38 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-19e6039d-9ba8-4938-aa46-4b4209e53456">
Oct 02 12:42:38 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:42:38 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:42:38 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:42:38 compute-0 nova_compute[257802]:   </source>
Oct 02 12:42:38 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:42:38 compute-0 nova_compute[257802]:   <serial>19e6039d-9ba8-4938-aa46-4b4209e53456</serial>
Oct 02 12:42:38 compute-0 nova_compute[257802]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:42:38 compute-0 nova_compute[257802]: </disk>
Oct 02 12:42:38 compute-0 nova_compute[257802]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:42:38 compute-0 nova_compute[257802]: 2025-10-02 12:42:38.179 2 INFO nova.virt.libvirt.driver [None req-495a0a02-e7e7-4b21-9139-709324e37c23 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Successfully detached device vdb from instance 714ae75f-1424-4b97-b849-84e5b4e77668 from the persistent domain config.
Oct 02 12:42:38 compute-0 nova_compute[257802]: 2025-10-02 12:42:38.179 2 DEBUG nova.virt.libvirt.driver [None req-495a0a02-e7e7-4b21-9139-709324e37c23 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 714ae75f-1424-4b97-b849-84e5b4e77668 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 12:42:38 compute-0 nova_compute[257802]: 2025-10-02 12:42:38.180 2 DEBUG nova.virt.libvirt.guest [None req-495a0a02-e7e7-4b21-9139-709324e37c23 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:42:38 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:42:38 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-19e6039d-9ba8-4938-aa46-4b4209e53456">
Oct 02 12:42:38 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:42:38 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:42:38 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:42:38 compute-0 nova_compute[257802]:   </source>
Oct 02 12:42:38 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:42:38 compute-0 nova_compute[257802]:   <serial>19e6039d-9ba8-4938-aa46-4b4209e53456</serial>
Oct 02 12:42:38 compute-0 nova_compute[257802]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:42:38 compute-0 nova_compute[257802]: </disk>
Oct 02 12:42:38 compute-0 nova_compute[257802]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:42:38 compute-0 nova_compute[257802]: 2025-10-02 12:42:38.366 2 DEBUG nova.virt.libvirt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Received event <DeviceRemovedEvent: 1759408958.3663976, 714ae75f-1424-4b97-b849-84e5b4e77668 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 12:42:38 compute-0 nova_compute[257802]: 2025-10-02 12:42:38.368 2 DEBUG nova.virt.libvirt.driver [None req-495a0a02-e7e7-4b21-9139-709324e37c23 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 714ae75f-1424-4b97-b849-84e5b4e77668 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 12:42:38 compute-0 nova_compute[257802]: 2025-10-02 12:42:38.370 2 INFO nova.virt.libvirt.driver [None req-495a0a02-e7e7-4b21-9139-709324e37c23 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Successfully detached device vdb from instance 714ae75f-1424-4b97-b849-84e5b4e77668 from the live domain config.
Oct 02 12:42:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:38 compute-0 nova_compute[257802]: 2025-10-02 12:42:38.609 2 DEBUG nova.objects.instance [None req-495a0a02-e7e7-4b21-9139-709324e37c23 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lazy-loading 'flavor' on Instance uuid 714ae75f-1424-4b97-b849-84e5b4e77668 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:38 compute-0 nova_compute[257802]: 2025-10-02 12:42:38.661 2 DEBUG oslo_concurrency.lockutils [None req-495a0a02-e7e7-4b21-9139-709324e37c23 b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lock "714ae75f-1424-4b97-b849-84e5b4e77668" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:38 compute-0 ceph-mon[73607]: pgmap v2465: 305 pgs: 305 active+clean; 772 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.2 MiB/s wr, 131 op/s
Oct 02 12:42:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:42:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:38.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:42:39 compute-0 nova_compute[257802]: 2025-10-02 12:42:39.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:39 compute-0 nova_compute[257802]: 2025-10-02 12:42:39.146 2 DEBUG nova.network.neutron [-] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:42:39 compute-0 nova_compute[257802]: 2025-10-02 12:42:39.196 2 INFO nova.compute.manager [-] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Took 1.36 seconds to deallocate network for instance.
Oct 02 12:42:39 compute-0 nova_compute[257802]: 2025-10-02 12:42:39.520 2 INFO nova.compute.manager [None req-be41fee9-1efc-4e07-a326-39578a2b2c33 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Took 0.32 seconds to detach 1 volumes for instance.
Oct 02 12:42:39 compute-0 nova_compute[257802]: 2025-10-02 12:42:39.522 2 DEBUG nova.compute.manager [None req-be41fee9-1efc-4e07-a326-39578a2b2c33 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Deleting volume: 6916418d-7e23-49fb-b41f-d247b7619f6f _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Oct 02 12:42:39 compute-0 nova_compute[257802]: 2025-10-02 12:42:39.694 2 DEBUG nova.compute.manager [req-713a2b60-6561-461c-b2fe-fd6f11cbe80c req-6e15d09a-1c6c-45e2-bdc7-e4975408dbf2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Received event network-vif-plugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:39 compute-0 nova_compute[257802]: 2025-10-02 12:42:39.696 2 DEBUG oslo_concurrency.lockutils [req-713a2b60-6561-461c-b2fe-fd6f11cbe80c req-6e15d09a-1c6c-45e2-bdc7-e4975408dbf2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "f3566799-fdd0-46bf-8256-0294a227030a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:39 compute-0 nova_compute[257802]: 2025-10-02 12:42:39.696 2 DEBUG oslo_concurrency.lockutils [req-713a2b60-6561-461c-b2fe-fd6f11cbe80c req-6e15d09a-1c6c-45e2-bdc7-e4975408dbf2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:39 compute-0 nova_compute[257802]: 2025-10-02 12:42:39.696 2 DEBUG oslo_concurrency.lockutils [req-713a2b60-6561-461c-b2fe-fd6f11cbe80c req-6e15d09a-1c6c-45e2-bdc7-e4975408dbf2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:39 compute-0 nova_compute[257802]: 2025-10-02 12:42:39.696 2 DEBUG nova.compute.manager [req-713a2b60-6561-461c-b2fe-fd6f11cbe80c req-6e15d09a-1c6c-45e2-bdc7-e4975408dbf2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] No waiting events found dispatching network-vif-plugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:42:39 compute-0 nova_compute[257802]: 2025-10-02 12:42:39.697 2 WARNING nova.compute.manager [req-713a2b60-6561-461c-b2fe-fd6f11cbe80c req-6e15d09a-1c6c-45e2-bdc7-e4975408dbf2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Received unexpected event network-vif-plugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 for instance with vm_state active and task_state deleting.
Oct 02 12:42:39 compute-0 nova_compute[257802]: 2025-10-02 12:42:39.697 2 DEBUG nova.compute.manager [req-713a2b60-6561-461c-b2fe-fd6f11cbe80c req-6e15d09a-1c6c-45e2-bdc7-e4975408dbf2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Received event network-vif-deleted-ff63f6e3-cb88-42d6-98e3-4f78430c7896 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:39 compute-0 nova_compute[257802]: 2025-10-02 12:42:39.714 2 DEBUG oslo_concurrency.lockutils [None req-be41fee9-1efc-4e07-a326-39578a2b2c33 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:39 compute-0 nova_compute[257802]: 2025-10-02 12:42:39.715 2 DEBUG oslo_concurrency.lockutils [None req-be41fee9-1efc-4e07-a326-39578a2b2c33 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:39 compute-0 nova_compute[257802]: 2025-10-02 12:42:39.829 2 DEBUG oslo_concurrency.processutils [None req-be41fee9-1efc-4e07-a326-39578a2b2c33 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2466: 305 pgs: 305 active+clean; 781 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.6 MiB/s wr, 131 op/s
Oct 02 12:42:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:39.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:42:40 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3282430315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:40 compute-0 nova_compute[257802]: 2025-10-02 12:42:40.281 2 DEBUG oslo_concurrency.processutils [None req-be41fee9-1efc-4e07-a326-39578a2b2c33 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:40 compute-0 nova_compute[257802]: 2025-10-02 12:42:40.289 2 DEBUG nova.compute.provider_tree [None req-be41fee9-1efc-4e07-a326-39578a2b2c33 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:42:40 compute-0 nova_compute[257802]: 2025-10-02 12:42:40.369 2 DEBUG nova.scheduler.client.report [None req-be41fee9-1efc-4e07-a326-39578a2b2c33 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:42:40 compute-0 nova_compute[257802]: 2025-10-02 12:42:40.673 2 DEBUG oslo_concurrency.lockutils [None req-be41fee9-1efc-4e07-a326-39578a2b2c33 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.958s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:40 compute-0 nova_compute[257802]: 2025-10-02 12:42:40.706 2 INFO nova.scheduler.client.report [None req-be41fee9-1efc-4e07-a326-39578a2b2c33 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Deleted allocations for instance f3566799-fdd0-46bf-8256-0294a227030a
Oct 02 12:42:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:40.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:40 compute-0 nova_compute[257802]: 2025-10-02 12:42:40.941 2 DEBUG oslo_concurrency.lockutils [None req-be41fee9-1efc-4e07-a326-39578a2b2c33 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:41 compute-0 ceph-mon[73607]: pgmap v2466: 305 pgs: 305 active+clean; 781 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.6 MiB/s wr, 131 op/s
Oct 02 12:42:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1825861267' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:42:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1825861267' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:42:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3282430315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/619613514' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:42:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/619613514' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:42:41 compute-0 nova_compute[257802]: 2025-10-02 12:42:41.813 2 DEBUG oslo_concurrency.lockutils [None req-758aa61b-fb38-4655-9d5a-1731a3162c9a b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Acquiring lock "714ae75f-1424-4b97-b849-84e5b4e77668" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:41 compute-0 nova_compute[257802]: 2025-10-02 12:42:41.813 2 DEBUG oslo_concurrency.lockutils [None req-758aa61b-fb38-4655-9d5a-1731a3162c9a b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lock "714ae75f-1424-4b97-b849-84e5b4e77668" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:41 compute-0 nova_compute[257802]: 2025-10-02 12:42:41.837 2 INFO nova.compute.manager [None req-758aa61b-fb38-4655-9d5a-1731a3162c9a b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Detaching volume 4fa9e646-33dd-4e8b-95f5-a1b4436473bb
Oct 02 12:42:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2467: 305 pgs: 305 active+clean; 763 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.3 MiB/s wr, 137 op/s
Oct 02 12:42:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:41.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:42 compute-0 nova_compute[257802]: 2025-10-02 12:42:42.004 2 INFO nova.virt.block_device [None req-758aa61b-fb38-4655-9d5a-1731a3162c9a b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Attempting to driver detach volume 4fa9e646-33dd-4e8b-95f5-a1b4436473bb from mountpoint /dev/vdc
Oct 02 12:42:42 compute-0 nova_compute[257802]: 2025-10-02 12:42:42.012 2 DEBUG nova.virt.libvirt.driver [None req-758aa61b-fb38-4655-9d5a-1731a3162c9a b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Attempting to detach device vdc from instance 714ae75f-1424-4b97-b849-84e5b4e77668 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:42:42 compute-0 nova_compute[257802]: 2025-10-02 12:42:42.013 2 DEBUG nova.virt.libvirt.guest [None req-758aa61b-fb38-4655-9d5a-1731a3162c9a b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:42:42 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:42:42 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-4fa9e646-33dd-4e8b-95f5-a1b4436473bb">
Oct 02 12:42:42 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:42:42 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:42:42 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:42:42 compute-0 nova_compute[257802]:   </source>
Oct 02 12:42:42 compute-0 nova_compute[257802]:   <target dev="vdc" bus="virtio"/>
Oct 02 12:42:42 compute-0 nova_compute[257802]:   <serial>4fa9e646-33dd-4e8b-95f5-a1b4436473bb</serial>
Oct 02 12:42:42 compute-0 nova_compute[257802]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Oct 02 12:42:42 compute-0 nova_compute[257802]: </disk>
Oct 02 12:42:42 compute-0 nova_compute[257802]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:42:42 compute-0 nova_compute[257802]: 2025-10-02 12:42:42.023 2 INFO nova.virt.libvirt.driver [None req-758aa61b-fb38-4655-9d5a-1731a3162c9a b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Successfully detached device vdc from instance 714ae75f-1424-4b97-b849-84e5b4e77668 from the persistent domain config.
Oct 02 12:42:42 compute-0 nova_compute[257802]: 2025-10-02 12:42:42.023 2 DEBUG nova.virt.libvirt.driver [None req-758aa61b-fb38-4655-9d5a-1731a3162c9a b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance 714ae75f-1424-4b97-b849-84e5b4e77668 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 12:42:42 compute-0 nova_compute[257802]: 2025-10-02 12:42:42.024 2 DEBUG nova.virt.libvirt.guest [None req-758aa61b-fb38-4655-9d5a-1731a3162c9a b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:42:42 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:42:42 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-4fa9e646-33dd-4e8b-95f5-a1b4436473bb">
Oct 02 12:42:42 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:42:42 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:42:42 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:42:42 compute-0 nova_compute[257802]:   </source>
Oct 02 12:42:42 compute-0 nova_compute[257802]:   <target dev="vdc" bus="virtio"/>
Oct 02 12:42:42 compute-0 nova_compute[257802]:   <serial>4fa9e646-33dd-4e8b-95f5-a1b4436473bb</serial>
Oct 02 12:42:42 compute-0 nova_compute[257802]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Oct 02 12:42:42 compute-0 nova_compute[257802]: </disk>
Oct 02 12:42:42 compute-0 nova_compute[257802]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:42:42 compute-0 nova_compute[257802]: 2025-10-02 12:42:42.237 2 DEBUG nova.virt.libvirt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Received event <DeviceRemovedEvent: 1759408962.2374651, 714ae75f-1424-4b97-b849-84e5b4e77668 => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 12:42:42 compute-0 nova_compute[257802]: 2025-10-02 12:42:42.239 2 DEBUG nova.virt.libvirt.driver [None req-758aa61b-fb38-4655-9d5a-1731a3162c9a b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance 714ae75f-1424-4b97-b849-84e5b4e77668 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 12:42:42 compute-0 nova_compute[257802]: 2025-10-02 12:42:42.241 2 INFO nova.virt.libvirt.driver [None req-758aa61b-fb38-4655-9d5a-1731a3162c9a b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Successfully detached device vdc from instance 714ae75f-1424-4b97-b849-84e5b4e77668 from the live domain config.
Oct 02 12:42:42 compute-0 nova_compute[257802]: 2025-10-02 12:42:42.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:42 compute-0 nova_compute[257802]: 2025-10-02 12:42:42.441 2 DEBUG nova.objects.instance [None req-758aa61b-fb38-4655-9d5a-1731a3162c9a b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lazy-loading 'flavor' on Instance uuid 714ae75f-1424-4b97-b849-84e5b4e77668 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:42:42
Oct 02 12:42:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:42:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:42:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['vms', '.mgr', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'default.rgw.log', 'images', 'default.rgw.control']
Oct 02 12:42:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:42:42 compute-0 nova_compute[257802]: 2025-10-02 12:42:42.559 2 DEBUG oslo_concurrency.lockutils [None req-758aa61b-fb38-4655-9d5a-1731a3162c9a b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lock "714ae75f-1424-4b97-b849-84e5b4e77668" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:42:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:42:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:42:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:42:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:42:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:42:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:42.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:43 compute-0 ceph-mon[73607]: pgmap v2467: 305 pgs: 305 active+clean; 763 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.3 MiB/s wr, 137 op/s
Oct 02 12:42:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:42:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:42:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:42:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:42:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:42:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2468: 305 pgs: 305 active+clean; 711 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 942 KiB/s rd, 2.3 MiB/s wr, 169 op/s
Oct 02 12:42:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:43.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:42:43 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1771351674' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:42:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:42:43 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1771351674' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:42:44 compute-0 nova_compute[257802]: 2025-10-02 12:42:44.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:42:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:42:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:42:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:42:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:42:44 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1771351674' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:42:44 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1771351674' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:42:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:44.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:44 compute-0 nova_compute[257802]: 2025-10-02 12:42:44.990 2 DEBUG nova.compute.manager [req-c7bdb62e-3e7a-4c2f-aee2-c5b878a2c999 req-5560a61a-7bbb-4382-bfd2-97977e7f22fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Received event network-changed-763709ed-3fe4-45a4-8a2f-4b21f4534590 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:44 compute-0 nova_compute[257802]: 2025-10-02 12:42:44.991 2 DEBUG nova.compute.manager [req-c7bdb62e-3e7a-4c2f-aee2-c5b878a2c999 req-5560a61a-7bbb-4382-bfd2-97977e7f22fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Refreshing instance network info cache due to event network-changed-763709ed-3fe4-45a4-8a2f-4b21f4534590. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:42:44 compute-0 nova_compute[257802]: 2025-10-02 12:42:44.991 2 DEBUG oslo_concurrency.lockutils [req-c7bdb62e-3e7a-4c2f-aee2-c5b878a2c999 req-5560a61a-7bbb-4382-bfd2-97977e7f22fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-714ae75f-1424-4b97-b849-84e5b4e77668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:42:44 compute-0 nova_compute[257802]: 2025-10-02 12:42:44.992 2 DEBUG oslo_concurrency.lockutils [req-c7bdb62e-3e7a-4c2f-aee2-c5b878a2c999 req-5560a61a-7bbb-4382-bfd2-97977e7f22fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-714ae75f-1424-4b97-b849-84e5b4e77668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:42:44 compute-0 nova_compute[257802]: 2025-10-02 12:42:44.992 2 DEBUG nova.network.neutron [req-c7bdb62e-3e7a-4c2f-aee2-c5b878a2c999 req-5560a61a-7bbb-4382-bfd2-97977e7f22fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Refreshing network info cache for port 763709ed-3fe4-45a4-8a2f-4b21f4534590 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:42:45 compute-0 ceph-mon[73607]: pgmap v2468: 305 pgs: 305 active+clean; 711 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 942 KiB/s rd, 2.3 MiB/s wr, 169 op/s
Oct 02 12:42:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2469: 305 pgs: 305 active+clean; 637 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 969 KiB/s rd, 2.5 MiB/s wr, 208 op/s
Oct 02 12:42:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:45.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:46 compute-0 nova_compute[257802]: 2025-10-02 12:42:46.664 2 DEBUG nova.network.neutron [req-c7bdb62e-3e7a-4c2f-aee2-c5b878a2c999 req-5560a61a-7bbb-4382-bfd2-97977e7f22fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Updated VIF entry in instance network info cache for port 763709ed-3fe4-45a4-8a2f-4b21f4534590. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:42:46 compute-0 nova_compute[257802]: 2025-10-02 12:42:46.664 2 DEBUG nova.network.neutron [req-c7bdb62e-3e7a-4c2f-aee2-c5b878a2c999 req-5560a61a-7bbb-4382-bfd2-97977e7f22fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Updating instance_info_cache with network_info: [{"id": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "address": "fa:16:3e:1a:b3:16", "network": {"id": "e7b8a8de-b6cd-4283-854b-a2bd919c371d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1851369337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a41d99312f014c65adddea4f70536a15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap763709ed-3f", "ovs_interfaceid": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:42:46 compute-0 nova_compute[257802]: 2025-10-02 12:42:46.686 2 DEBUG oslo_concurrency.lockutils [req-c7bdb62e-3e7a-4c2f-aee2-c5b878a2c999 req-5560a61a-7bbb-4382-bfd2-97977e7f22fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-714ae75f-1424-4b97-b849-84e5b4e77668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:42:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:46.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:47 compute-0 ceph-mon[73607]: pgmap v2469: 305 pgs: 305 active+clean; 637 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 969 KiB/s rd, 2.5 MiB/s wr, 208 op/s
Oct 02 12:42:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/159983714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:47 compute-0 nova_compute[257802]: 2025-10-02 12:42:47.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2470: 305 pgs: 305 active+clean; 637 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 956 KiB/s rd, 1.6 MiB/s wr, 196 op/s
Oct 02 12:42:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:42:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:47.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:42:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:42:48 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2987462620' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:42:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:42:48 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2987462620' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:42:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2987462620' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:42:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2987462620' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:42:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:48.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:49 compute-0 nova_compute[257802]: 2025-10-02 12:42:49.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:49 compute-0 nova_compute[257802]: 2025-10-02 12:42:49.077 2 DEBUG oslo_concurrency.lockutils [None req-24d52e62-890f-44c0-ba65-1a3941777744 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "17766045-13fc-4377-848f-6815e8a474d5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:49 compute-0 nova_compute[257802]: 2025-10-02 12:42:49.077 2 DEBUG oslo_concurrency.lockutils [None req-24d52e62-890f-44c0-ba65-1a3941777744 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "17766045-13fc-4377-848f-6815e8a474d5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:49 compute-0 nova_compute[257802]: 2025-10-02 12:42:49.078 2 DEBUG oslo_concurrency.lockutils [None req-24d52e62-890f-44c0-ba65-1a3941777744 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "17766045-13fc-4377-848f-6815e8a474d5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:49 compute-0 nova_compute[257802]: 2025-10-02 12:42:49.078 2 DEBUG oslo_concurrency.lockutils [None req-24d52e62-890f-44c0-ba65-1a3941777744 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "17766045-13fc-4377-848f-6815e8a474d5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:49 compute-0 nova_compute[257802]: 2025-10-02 12:42:49.078 2 DEBUG oslo_concurrency.lockutils [None req-24d52e62-890f-44c0-ba65-1a3941777744 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "17766045-13fc-4377-848f-6815e8a474d5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:49 compute-0 nova_compute[257802]: 2025-10-02 12:42:49.079 2 INFO nova.compute.manager [None req-24d52e62-890f-44c0-ba65-1a3941777744 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Terminating instance
Oct 02 12:42:49 compute-0 nova_compute[257802]: 2025-10-02 12:42:49.080 2 DEBUG nova.compute.manager [None req-24d52e62-890f-44c0-ba65-1a3941777744 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:42:49 compute-0 kernel: tap4c06cc55-6b (unregistering): left promiscuous mode
Oct 02 12:42:49 compute-0 NetworkManager[44987]: <info>  [1759408969.1559] device (tap4c06cc55-6b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:42:49 compute-0 ovn_controller[148183]: 2025-10-02T12:42:49Z|00672|binding|INFO|Releasing lport 4c06cc55-6b35-48e0-892a-4fd710f2cf39 from this chassis (sb_readonly=0)
Oct 02 12:42:49 compute-0 ovn_controller[148183]: 2025-10-02T12:42:49Z|00673|binding|INFO|Setting lport 4c06cc55-6b35-48e0-892a-4fd710f2cf39 down in Southbound
Oct 02 12:42:49 compute-0 ovn_controller[148183]: 2025-10-02T12:42:49Z|00674|binding|INFO|Removing iface tap4c06cc55-6b ovn-installed in OVS
Oct 02 12:42:49 compute-0 nova_compute[257802]: 2025-10-02 12:42:49.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:49 compute-0 nova_compute[257802]: 2025-10-02 12:42:49.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:49 compute-0 nova_compute[257802]: 2025-10-02 12:42:49.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:49 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Oct 02 12:42:49 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000008d.scope: Consumed 28.877s CPU time.
Oct 02 12:42:49 compute-0 systemd-machined[211836]: Machine qemu-72-instance-0000008d terminated.
Oct 02 12:42:49 compute-0 nova_compute[257802]: 2025-10-02 12:42:49.310 2 INFO nova.virt.libvirt.driver [-] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Instance destroyed successfully.
Oct 02 12:42:49 compute-0 nova_compute[257802]: 2025-10-02 12:42:49.310 2 DEBUG nova.objects.instance [None req-24d52e62-890f-44c0-ba65-1a3941777744 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lazy-loading 'resources' on Instance uuid 17766045-13fc-4377-848f-6815e8a474d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:49 compute-0 ceph-mon[73607]: pgmap v2470: 305 pgs: 305 active+clean; 637 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 956 KiB/s rd, 1.6 MiB/s wr, 196 op/s
Oct 02 12:42:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:49.345 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:94:f8 10.100.0.8'], port_security=['fa:16:3e:ec:94:f8 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '17766045-13fc-4377-848f-6815e8a474d5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00455285-97a7-4fa2-ba83-e8060936877e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6822f02d5ca04c659329a75d487054cf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b290a23e-28e7-483f-bf5b-c42418308591', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f978d0a7-f86b-440f-a8b5-5432c3a4bc91, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=4c06cc55-6b35-48e0-892a-4fd710f2cf39) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:42:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:49.346 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 4c06cc55-6b35-48e0-892a-4fd710f2cf39 in datapath 00455285-97a7-4fa2-ba83-e8060936877e unbound from our chassis
Oct 02 12:42:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:49.348 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 00455285-97a7-4fa2-ba83-e8060936877e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:42:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:49.349 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[023b2c58-81a2-4d5a-8b22-fd962dc9802f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:49.349 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e namespace which is not needed anymore
Oct 02 12:42:49 compute-0 nova_compute[257802]: 2025-10-02 12:42:49.359 2 DEBUG nova.virt.libvirt.vif [None req-24d52e62-890f-44c0-ba65-1a3941777744 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:36:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1488915729',display_name='tempest-ServerActionsTestOtherA-server-1488915729',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1488915729',id=141,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMKHZAu99lFpnj9bTKPOhdWg5y6PnKM9AGAnFElgiPr53bQbl7DsEAEg0Hu4Ea2RYl8QhrjFhPMuXkYw2ubt4hnzcTRuj+jAHGGBwDWRc1fX16YtY5a2rZP1IKxVnq/Inw==',key_name='tempest-keypair-26130845',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:36:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6822f02d5ca04c659329a75d487054cf',ramdisk_id='',reservation_id='r-ti5wv1hn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1680083910',owner_user_name='tempest-ServerActionsTestOtherA-1680083910-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:36:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c02d1dcc10ea4e57bbc6b7a3c100dc7b',uuid=17766045-13fc-4377-848f-6815e8a474d5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "address": "fa:16:3e:ec:94:f8", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c06cc55-6b", "ovs_interfaceid": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:42:49 compute-0 nova_compute[257802]: 2025-10-02 12:42:49.359 2 DEBUG nova.network.os_vif_util [None req-24d52e62-890f-44c0-ba65-1a3941777744 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converting VIF {"id": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "address": "fa:16:3e:ec:94:f8", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c06cc55-6b", "ovs_interfaceid": "4c06cc55-6b35-48e0-892a-4fd710f2cf39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:42:49 compute-0 nova_compute[257802]: 2025-10-02 12:42:49.360 2 DEBUG nova.network.os_vif_util [None req-24d52e62-890f-44c0-ba65-1a3941777744 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ec:94:f8,bridge_name='br-int',has_traffic_filtering=True,id=4c06cc55-6b35-48e0-892a-4fd710f2cf39,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c06cc55-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:42:49 compute-0 nova_compute[257802]: 2025-10-02 12:42:49.360 2 DEBUG os_vif [None req-24d52e62-890f-44c0-ba65-1a3941777744 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:94:f8,bridge_name='br-int',has_traffic_filtering=True,id=4c06cc55-6b35-48e0-892a-4fd710f2cf39,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c06cc55-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:42:49 compute-0 nova_compute[257802]: 2025-10-02 12:42:49.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:49 compute-0 nova_compute[257802]: 2025-10-02 12:42:49.362 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c06cc55-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:49 compute-0 nova_compute[257802]: 2025-10-02 12:42:49.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:49 compute-0 nova_compute[257802]: 2025-10-02 12:42:49.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:49 compute-0 nova_compute[257802]: 2025-10-02 12:42:49.366 2 INFO os_vif [None req-24d52e62-890f-44c0-ba65-1a3941777744 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:94:f8,bridge_name='br-int',has_traffic_filtering=True,id=4c06cc55-6b35-48e0-892a-4fd710f2cf39,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c06cc55-6b')
Oct 02 12:42:49 compute-0 neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e[345941]: [NOTICE]   (345945) : haproxy version is 2.8.14-c23fe91
Oct 02 12:42:49 compute-0 neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e[345941]: [NOTICE]   (345945) : path to executable is /usr/sbin/haproxy
Oct 02 12:42:49 compute-0 neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e[345941]: [WARNING]  (345945) : Exiting Master process...
Oct 02 12:42:49 compute-0 neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e[345941]: [ALERT]    (345945) : Current worker (345947) exited with code 143 (Terminated)
Oct 02 12:42:49 compute-0 neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e[345941]: [WARNING]  (345945) : All workers exited. Exiting... (0)
Oct 02 12:42:49 compute-0 systemd[1]: libpod-fe0ac6727a1b63ea4c735c5b32c7fd95fb703d5a5d91ea43f17f7721c53c8552.scope: Deactivated successfully.
Oct 02 12:42:49 compute-0 podman[354185]: 2025-10-02 12:42:49.493714255 +0000 UTC m=+0.053121581 container died fe0ac6727a1b63ea4c735c5b32c7fd95fb703d5a5d91ea43f17f7721c53c8552 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:42:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fe0ac6727a1b63ea4c735c5b32c7fd95fb703d5a5d91ea43f17f7721c53c8552-userdata-shm.mount: Deactivated successfully.
Oct 02 12:42:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-677887aa539c6afaf87b3d7502b43befa59f0e7bafb2b328dd2abcbe55d2d495-merged.mount: Deactivated successfully.
Oct 02 12:42:49 compute-0 podman[354185]: 2025-10-02 12:42:49.549316926 +0000 UTC m=+0.108724242 container cleanup fe0ac6727a1b63ea4c735c5b32c7fd95fb703d5a5d91ea43f17f7721c53c8552 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:42:49 compute-0 systemd[1]: libpod-conmon-fe0ac6727a1b63ea4c735c5b32c7fd95fb703d5a5d91ea43f17f7721c53c8552.scope: Deactivated successfully.
Oct 02 12:42:49 compute-0 podman[354216]: 2025-10-02 12:42:49.622074496 +0000 UTC m=+0.051187764 container remove fe0ac6727a1b63ea4c735c5b32c7fd95fb703d5a5d91ea43f17f7721c53c8552 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:42:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:49.628 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[520505c1-eb25-4b39-a7ea-67af451195aa]: (4, ('Thu Oct  2 12:42:49 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e (fe0ac6727a1b63ea4c735c5b32c7fd95fb703d5a5d91ea43f17f7721c53c8552)\nfe0ac6727a1b63ea4c735c5b32c7fd95fb703d5a5d91ea43f17f7721c53c8552\nThu Oct  2 12:42:49 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e (fe0ac6727a1b63ea4c735c5b32c7fd95fb703d5a5d91ea43f17f7721c53c8552)\nfe0ac6727a1b63ea4c735c5b32c7fd95fb703d5a5d91ea43f17f7721c53c8552\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:49.629 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cf928420-3943-4709-8a24-fd8090816b0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:49.630 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00455285-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:49 compute-0 nova_compute[257802]: 2025-10-02 12:42:49.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:49 compute-0 kernel: tap00455285-90: left promiscuous mode
Oct 02 12:42:49 compute-0 nova_compute[257802]: 2025-10-02 12:42:49.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:49.650 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[eb5c1469-21ee-4643-8fe7-e9c82f126816]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:49.670 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[40419097-997f-451c-9962-ee2fb69b68b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:49.671 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7fbcb8a3-1030-4eaa-b416-bb3b8472781c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:49.685 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[37c7f915-d3c6-41cb-9f85-f24591ed44c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 666913, 'reachable_time': 39585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354231, 'error': None, 'target': 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:49.688 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:42:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:42:49.688 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[540fd8c9-e582-42f2-98de-964599b93a7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:49 compute-0 systemd[1]: run-netns-ovnmeta\x2d00455285\x2d97a7\x2d4fa2\x2dba83\x2de8060936877e.mount: Deactivated successfully.
Oct 02 12:42:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2471: 305 pgs: 305 active+clean; 639 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 969 KiB/s rd, 1.6 MiB/s wr, 215 op/s
Oct 02 12:42:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:49.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:50 compute-0 nova_compute[257802]: 2025-10-02 12:42:50.659 2 INFO nova.virt.libvirt.driver [None req-24d52e62-890f-44c0-ba65-1a3941777744 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Deleting instance files /var/lib/nova/instances/17766045-13fc-4377-848f-6815e8a474d5_del
Oct 02 12:42:50 compute-0 nova_compute[257802]: 2025-10-02 12:42:50.660 2 INFO nova.virt.libvirt.driver [None req-24d52e62-890f-44c0-ba65-1a3941777744 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Deletion of /var/lib/nova/instances/17766045-13fc-4377-848f-6815e8a474d5_del complete
Oct 02 12:42:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:50.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:51 compute-0 nova_compute[257802]: 2025-10-02 12:42:51.110 2 DEBUG nova.compute.manager [req-395ecfc8-61b6-4f72-8e76-2ed9f9a5bbaa req-4e51fd90-edd1-40af-9d9a-44f883810d6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Received event network-vif-unplugged-4c06cc55-6b35-48e0-892a-4fd710f2cf39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:51 compute-0 nova_compute[257802]: 2025-10-02 12:42:51.111 2 DEBUG oslo_concurrency.lockutils [req-395ecfc8-61b6-4f72-8e76-2ed9f9a5bbaa req-4e51fd90-edd1-40af-9d9a-44f883810d6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "17766045-13fc-4377-848f-6815e8a474d5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:51 compute-0 nova_compute[257802]: 2025-10-02 12:42:51.111 2 DEBUG oslo_concurrency.lockutils [req-395ecfc8-61b6-4f72-8e76-2ed9f9a5bbaa req-4e51fd90-edd1-40af-9d9a-44f883810d6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "17766045-13fc-4377-848f-6815e8a474d5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:51 compute-0 nova_compute[257802]: 2025-10-02 12:42:51.111 2 DEBUG oslo_concurrency.lockutils [req-395ecfc8-61b6-4f72-8e76-2ed9f9a5bbaa req-4e51fd90-edd1-40af-9d9a-44f883810d6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "17766045-13fc-4377-848f-6815e8a474d5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:51 compute-0 nova_compute[257802]: 2025-10-02 12:42:51.111 2 DEBUG nova.compute.manager [req-395ecfc8-61b6-4f72-8e76-2ed9f9a5bbaa req-4e51fd90-edd1-40af-9d9a-44f883810d6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] No waiting events found dispatching network-vif-unplugged-4c06cc55-6b35-48e0-892a-4fd710f2cf39 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:42:51 compute-0 nova_compute[257802]: 2025-10-02 12:42:51.112 2 DEBUG nova.compute.manager [req-395ecfc8-61b6-4f72-8e76-2ed9f9a5bbaa req-4e51fd90-edd1-40af-9d9a-44f883810d6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Received event network-vif-unplugged-4c06cc55-6b35-48e0-892a-4fd710f2cf39 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:42:51 compute-0 nova_compute[257802]: 2025-10-02 12:42:51.242 2 INFO nova.compute.manager [None req-24d52e62-890f-44c0-ba65-1a3941777744 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Took 2.16 seconds to destroy the instance on the hypervisor.
Oct 02 12:42:51 compute-0 nova_compute[257802]: 2025-10-02 12:42:51.242 2 DEBUG oslo.service.loopingcall [None req-24d52e62-890f-44c0-ba65-1a3941777744 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:42:51 compute-0 nova_compute[257802]: 2025-10-02 12:42:51.243 2 DEBUG nova.compute.manager [-] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:42:51 compute-0 nova_compute[257802]: 2025-10-02 12:42:51.243 2 DEBUG nova.network.neutron [-] [instance: 17766045-13fc-4377-848f-6815e8a474d5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:42:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e345 do_prune osdmap full prune enabled
Oct 02 12:42:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e346 e346: 3 total, 3 up, 3 in
Oct 02 12:42:51 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e346: 3 total, 3 up, 3 in
Oct 02 12:42:51 compute-0 ceph-mon[73607]: pgmap v2471: 305 pgs: 305 active+clean; 639 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 969 KiB/s rd, 1.6 MiB/s wr, 215 op/s
Oct 02 12:42:51 compute-0 sudo[354234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:51 compute-0 sudo[354234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:51 compute-0 sudo[354234]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2473: 305 pgs: 305 active+clean; 616 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 260 KiB/s rd, 291 KiB/s wr, 156 op/s
Oct 02 12:42:51 compute-0 sudo[354259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:51 compute-0 sudo[354259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:51 compute-0 sudo[354259]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:51.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:52 compute-0 nova_compute[257802]: 2025-10-02 12:42:52.253 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408957.2520926, f3566799-fdd0-46bf-8256-0294a227030a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:42:52 compute-0 nova_compute[257802]: 2025-10-02 12:42:52.253 2 INFO nova.compute.manager [-] [instance: f3566799-fdd0-46bf-8256-0294a227030a] VM Stopped (Lifecycle Event)
Oct 02 12:42:52 compute-0 nova_compute[257802]: 2025-10-02 12:42:52.564 2 DEBUG nova.compute.manager [None req-df2c4d5d-e9be-4ed5-b8e1-13b6c7921efa - - - - - -] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:42:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:52.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:53 compute-0 ceph-mon[73607]: osdmap e346: 3 total, 3 up, 3 in
Oct 02 12:42:53 compute-0 ceph-mon[73607]: pgmap v2473: 305 pgs: 305 active+clean; 616 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 260 KiB/s rd, 291 KiB/s wr, 156 op/s
Oct 02 12:42:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1267667080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:53 compute-0 nova_compute[257802]: 2025-10-02 12:42:53.631 2 DEBUG nova.compute.manager [req-e60c7591-c0fb-4940-8af6-1e4255385cd8 req-b23b5b9d-7517-4d5d-ac61-301839b8ec36 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Received event network-vif-plugged-4c06cc55-6b35-48e0-892a-4fd710f2cf39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:53 compute-0 nova_compute[257802]: 2025-10-02 12:42:53.631 2 DEBUG oslo_concurrency.lockutils [req-e60c7591-c0fb-4940-8af6-1e4255385cd8 req-b23b5b9d-7517-4d5d-ac61-301839b8ec36 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "17766045-13fc-4377-848f-6815e8a474d5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:53 compute-0 nova_compute[257802]: 2025-10-02 12:42:53.631 2 DEBUG oslo_concurrency.lockutils [req-e60c7591-c0fb-4940-8af6-1e4255385cd8 req-b23b5b9d-7517-4d5d-ac61-301839b8ec36 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "17766045-13fc-4377-848f-6815e8a474d5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:53 compute-0 nova_compute[257802]: 2025-10-02 12:42:53.632 2 DEBUG oslo_concurrency.lockutils [req-e60c7591-c0fb-4940-8af6-1e4255385cd8 req-b23b5b9d-7517-4d5d-ac61-301839b8ec36 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "17766045-13fc-4377-848f-6815e8a474d5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:53 compute-0 nova_compute[257802]: 2025-10-02 12:42:53.632 2 DEBUG nova.compute.manager [req-e60c7591-c0fb-4940-8af6-1e4255385cd8 req-b23b5b9d-7517-4d5d-ac61-301839b8ec36 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] No waiting events found dispatching network-vif-plugged-4c06cc55-6b35-48e0-892a-4fd710f2cf39 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:42:53 compute-0 nova_compute[257802]: 2025-10-02 12:42:53.632 2 WARNING nova.compute.manager [req-e60c7591-c0fb-4940-8af6-1e4255385cd8 req-b23b5b9d-7517-4d5d-ac61-301839b8ec36 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Received unexpected event network-vif-plugged-4c06cc55-6b35-48e0-892a-4fd710f2cf39 for instance with vm_state active and task_state deleting.
Oct 02 12:42:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2474: 305 pgs: 305 active+clean; 571 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 286 KiB/s wr, 132 op/s
Oct 02 12:42:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:53.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:54 compute-0 nova_compute[257802]: 2025-10-02 12:42:54.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:54 compute-0 nova_compute[257802]: 2025-10-02 12:42:54.349 2 DEBUG nova.network.neutron [-] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:42:54 compute-0 nova_compute[257802]: 2025-10-02 12:42:54.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:54 compute-0 nova_compute[257802]: 2025-10-02 12:42:54.501 2 INFO nova.compute.manager [-] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Took 3.26 seconds to deallocate network for instance.
Oct 02 12:42:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:42:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:42:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:42:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:42:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004584191210683045 of space, bias 1.0, pg target 1.3752573632049137 quantized to 32 (current 32)
Oct 02 12:42:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:42:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.008668014551926298 of space, bias 1.0, pg target 2.591736351025963 quantized to 32 (current 32)
Oct 02 12:42:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:42:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:42:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:42:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004115084973478503 of space, bias 1.0, pg target 1.2221802371231154 quantized to 32 (current 32)
Oct 02 12:42:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:42:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Oct 02 12:42:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:42:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:42:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:42:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.0002689954401637819 quantized to 32 (current 32)
Oct 02 12:42:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:42:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Oct 02 12:42:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:42:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:42:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:42:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Oct 02 12:42:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:54.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:54 compute-0 ceph-mon[73607]: pgmap v2474: 305 pgs: 305 active+clean; 571 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 286 KiB/s wr, 132 op/s
Oct 02 12:42:54 compute-0 nova_compute[257802]: 2025-10-02 12:42:54.913 2 DEBUG oslo_concurrency.lockutils [None req-24d52e62-890f-44c0-ba65-1a3941777744 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:54 compute-0 nova_compute[257802]: 2025-10-02 12:42:54.914 2 DEBUG oslo_concurrency.lockutils [None req-24d52e62-890f-44c0-ba65-1a3941777744 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:54 compute-0 nova_compute[257802]: 2025-10-02 12:42:54.980 2 DEBUG oslo_concurrency.processutils [None req-24d52e62-890f-44c0-ba65-1a3941777744 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:42:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/469068972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:55 compute-0 nova_compute[257802]: 2025-10-02 12:42:55.417 2 DEBUG oslo_concurrency.processutils [None req-24d52e62-890f-44c0-ba65-1a3941777744 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:55 compute-0 nova_compute[257802]: 2025-10-02 12:42:55.423 2 DEBUG nova.compute.provider_tree [None req-24d52e62-890f-44c0-ba65-1a3941777744 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:42:55 compute-0 nova_compute[257802]: 2025-10-02 12:42:55.479 2 DEBUG nova.scheduler.client.report [None req-24d52e62-890f-44c0-ba65-1a3941777744 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:42:55 compute-0 nova_compute[257802]: 2025-10-02 12:42:55.737 2 DEBUG oslo_concurrency.lockutils [None req-24d52e62-890f-44c0-ba65-1a3941777744 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:55 compute-0 nova_compute[257802]: 2025-10-02 12:42:55.763 2 DEBUG nova.compute.manager [req-bec6d478-d6a5-42ef-98aa-141d6ed203e4 req-b1cb462b-31d9-4bd6-826b-ec1255b1ace8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Received event network-vif-deleted-4c06cc55-6b35-48e0-892a-4fd710f2cf39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:55 compute-0 nova_compute[257802]: 2025-10-02 12:42:55.819 2 INFO nova.scheduler.client.report [None req-24d52e62-890f-44c0-ba65-1a3941777744 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Deleted allocations for instance 17766045-13fc-4377-848f-6815e8a474d5
Oct 02 12:42:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e346 do_prune osdmap full prune enabled
Oct 02 12:42:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3548777140' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:42:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3548777140' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:42:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/469068972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2475: 305 pgs: 305 active+clean; 626 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 3.8 MiB/s wr, 115 op/s
Oct 02 12:42:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e347 e347: 3 total, 3 up, 3 in
Oct 02 12:42:55 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e347: 3 total, 3 up, 3 in
Oct 02 12:42:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:42:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:55.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:42:56 compute-0 nova_compute[257802]: 2025-10-02 12:42:56.025 2 DEBUG oslo_concurrency.lockutils [None req-24d52e62-890f-44c0-ba65-1a3941777744 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "17766045-13fc-4377-848f-6815e8a474d5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:56.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e347 do_prune osdmap full prune enabled
Oct 02 12:42:56 compute-0 ceph-mon[73607]: pgmap v2475: 305 pgs: 305 active+clean; 626 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 3.8 MiB/s wr, 115 op/s
Oct 02 12:42:56 compute-0 ceph-mon[73607]: osdmap e347: 3 total, 3 up, 3 in
Oct 02 12:42:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e348 e348: 3 total, 3 up, 3 in
Oct 02 12:42:57 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e348: 3 total, 3 up, 3 in
Oct 02 12:42:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2478: 305 pgs: 305 active+clean; 626 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 5.9 MiB/s wr, 95 op/s
Oct 02 12:42:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:57.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:58 compute-0 ceph-mon[73607]: osdmap e348: 3 total, 3 up, 3 in
Oct 02 12:42:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:42:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 48K writes, 189K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s
                                           Cumulative WAL: 48K writes, 17K syncs, 2.84 writes per sync, written: 0.18 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 9271 writes, 37K keys, 9271 commit groups, 1.0 writes per commit group, ingest: 37.42 MB, 0.06 MB/s
                                           Interval WAL: 9271 writes, 3310 syncs, 2.80 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 12:42:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:58.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:42:58 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1399131135' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:59 compute-0 nova_compute[257802]: 2025-10-02 12:42:59.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:59 compute-0 ceph-mon[73607]: pgmap v2478: 305 pgs: 305 active+clean; 626 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 5.9 MiB/s wr, 95 op/s
Oct 02 12:42:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1399131135' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:59 compute-0 nova_compute[257802]: 2025-10-02 12:42:59.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2479: 305 pgs: 305 active+clean; 639 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 141 op/s
Oct 02 12:42:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:42:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:59.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:00.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:01 compute-0 ceph-mon[73607]: pgmap v2479: 305 pgs: 305 active+clean; 639 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 141 op/s
Oct 02 12:43:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2107379637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2480: 305 pgs: 305 active+clean; 639 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.8 MiB/s wr, 106 op/s
Oct 02 12:43:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:01.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:02 compute-0 ovn_controller[148183]: 2025-10-02T12:43:02Z|00675|binding|INFO|Releasing lport 79bf28ab-e58e-4276-adf8-279ba85b1b49 from this chassis (sb_readonly=0)
Oct 02 12:43:02 compute-0 nova_compute[257802]: 2025-10-02 12:43:02.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/21696698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:02 compute-0 ovn_controller[148183]: 2025-10-02T12:43:02Z|00676|binding|INFO|Releasing lport 79bf28ab-e58e-4276-adf8-279ba85b1b49 from this chassis (sb_readonly=0)
Oct 02 12:43:02 compute-0 nova_compute[257802]: 2025-10-02 12:43:02.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:02.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:02 compute-0 podman[354313]: 2025-10-02 12:43:02.921762781 +0000 UTC m=+0.060464661 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:43:02 compute-0 podman[354315]: 2025-10-02 12:43:02.923937393 +0000 UTC m=+0.059533807 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid)
Oct 02 12:43:02 compute-0 podman[354314]: 2025-10-02 12:43:02.944870166 +0000 UTC m=+0.085366000 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:43:03 compute-0 NetworkManager[44987]: <info>  [1759408983.0155] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/305)
Oct 02 12:43:03 compute-0 NetworkManager[44987]: <info>  [1759408983.0165] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/306)
Oct 02 12:43:03 compute-0 nova_compute[257802]: 2025-10-02 12:43:03.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:03 compute-0 nova_compute[257802]: 2025-10-02 12:43:03.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:43:03 compute-0 nova_compute[257802]: 2025-10-02 12:43:03.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:03 compute-0 ovn_controller[148183]: 2025-10-02T12:43:03Z|00677|binding|INFO|Releasing lport 79bf28ab-e58e-4276-adf8-279ba85b1b49 from this chassis (sb_readonly=0)
Oct 02 12:43:03 compute-0 nova_compute[257802]: 2025-10-02 12:43:03.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:03 compute-0 ceph-mon[73607]: pgmap v2480: 305 pgs: 305 active+clean; 639 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.8 MiB/s wr, 106 op/s
Oct 02 12:43:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e348 do_prune osdmap full prune enabled
Oct 02 12:43:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e349 e349: 3 total, 3 up, 3 in
Oct 02 12:43:03 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e349: 3 total, 3 up, 3 in
Oct 02 12:43:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2482: 305 pgs: 305 active+clean; 618 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 374 KiB/s rd, 1.1 MiB/s wr, 94 op/s
Oct 02 12:43:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:03.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:04 compute-0 nova_compute[257802]: 2025-10-02 12:43:04.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:04 compute-0 nova_compute[257802]: 2025-10-02 12:43:04.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:43:04 compute-0 nova_compute[257802]: 2025-10-02 12:43:04.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:43:04 compute-0 nova_compute[257802]: 2025-10-02 12:43:04.309 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408969.3080146, 17766045-13fc-4377-848f-6815e8a474d5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:43:04 compute-0 nova_compute[257802]: 2025-10-02 12:43:04.309 2 INFO nova.compute.manager [-] [instance: 17766045-13fc-4377-848f-6815e8a474d5] VM Stopped (Lifecycle Event)
Oct 02 12:43:04 compute-0 nova_compute[257802]: 2025-10-02 12:43:04.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:04 compute-0 nova_compute[257802]: 2025-10-02 12:43:04.374 2 DEBUG nova.compute.manager [None req-7494a669-7144-4e3e-9559-9931786c8db1 - - - - - -] [instance: 17766045-13fc-4377-848f-6815e8a474d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:43:04 compute-0 ceph-mon[73607]: osdmap e349: 3 total, 3 up, 3 in
Oct 02 12:43:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:43:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:04.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:43:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2483: 305 pgs: 305 active+clean; 560 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 361 KiB/s rd, 1.0 MiB/s wr, 109 op/s
Oct 02 12:43:05 compute-0 ceph-mon[73607]: pgmap v2482: 305 pgs: 305 active+clean; 618 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 374 KiB/s rd, 1.1 MiB/s wr, 94 op/s
Oct 02 12:43:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2004316980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:05.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:43:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:06.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:43:06 compute-0 ceph-mgr[73901]: [devicehealth INFO root] Check health
Oct 02 12:43:06 compute-0 podman[354371]: 2025-10-02 12:43:06.986853287 +0000 UTC m=+0.125495602 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:43:07 compute-0 ceph-mon[73607]: pgmap v2483: 305 pgs: 305 active+clean; 560 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 361 KiB/s rd, 1.0 MiB/s wr, 109 op/s
Oct 02 12:43:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2484: 305 pgs: 305 active+clean; 560 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 317 KiB/s rd, 911 KiB/s wr, 95 op/s
Oct 02 12:43:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:08.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:08 compute-0 nova_compute[257802]: 2025-10-02 12:43:08.167 2 DEBUG oslo_concurrency.lockutils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Acquiring lock "c70e8f51-9397-40dd-9bbe-210e60b75364" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:08 compute-0 nova_compute[257802]: 2025-10-02 12:43:08.168 2 DEBUG oslo_concurrency.lockutils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lock "c70e8f51-9397-40dd-9bbe-210e60b75364" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:08 compute-0 nova_compute[257802]: 2025-10-02 12:43:08.168 2 INFO nova.compute.manager [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Unshelving
Oct 02 12:43:08 compute-0 nova_compute[257802]: 2025-10-02 12:43:08.457 2 DEBUG oslo_concurrency.lockutils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:08 compute-0 nova_compute[257802]: 2025-10-02 12:43:08.458 2 DEBUG oslo_concurrency.lockutils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:08 compute-0 nova_compute[257802]: 2025-10-02 12:43:08.464 2 DEBUG nova.objects.instance [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lazy-loading 'pci_requests' on Instance uuid c70e8f51-9397-40dd-9bbe-210e60b75364 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:43:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:08 compute-0 nova_compute[257802]: 2025-10-02 12:43:08.572 2 DEBUG nova.objects.instance [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lazy-loading 'numa_topology' on Instance uuid c70e8f51-9397-40dd-9bbe-210e60b75364 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:43:08 compute-0 nova_compute[257802]: 2025-10-02 12:43:08.613 2 DEBUG nova.virt.hardware [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:43:08 compute-0 nova_compute[257802]: 2025-10-02 12:43:08.614 2 INFO nova.compute.claims [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:43:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:08.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:09 compute-0 nova_compute[257802]: 2025-10-02 12:43:09.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:43:09 compute-0 nova_compute[257802]: 2025-10-02 12:43:09.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:43:09 compute-0 nova_compute[257802]: 2025-10-02 12:43:09.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:09 compute-0 nova_compute[257802]: 2025-10-02 12:43:09.151 2 DEBUG oslo_concurrency.processutils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:09 compute-0 ceph-mon[73607]: pgmap v2484: 305 pgs: 305 active+clean; 560 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 317 KiB/s rd, 911 KiB/s wr, 95 op/s
Oct 02 12:43:09 compute-0 nova_compute[257802]: 2025-10-02 12:43:09.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:43:09 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2852549662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:09 compute-0 nova_compute[257802]: 2025-10-02 12:43:09.579 2 DEBUG oslo_concurrency.processutils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:09 compute-0 nova_compute[257802]: 2025-10-02 12:43:09.584 2 DEBUG nova.compute.provider_tree [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:43:09 compute-0 nova_compute[257802]: 2025-10-02 12:43:09.825 2 DEBUG nova.scheduler.client.report [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:43:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2485: 305 pgs: 305 active+clean; 560 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 280 KiB/s rd, 4.3 KiB/s wr, 44 op/s
Oct 02 12:43:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:43:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:10.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:43:10 compute-0 nova_compute[257802]: 2025-10-02 12:43:10.014 2 DEBUG oslo_concurrency.lockutils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:10 compute-0 nova_compute[257802]: 2025-10-02 12:43:10.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:43:10 compute-0 nova_compute[257802]: 2025-10-02 12:43:10.198 2 INFO nova.network.neutron [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Updating port 9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 02 12:43:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2852549662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:43:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:10.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:43:11 compute-0 nova_compute[257802]: 2025-10-02 12:43:11.036 2 DEBUG oslo_concurrency.lockutils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Acquiring lock "refresh_cache-c70e8f51-9397-40dd-9bbe-210e60b75364" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:43:11 compute-0 nova_compute[257802]: 2025-10-02 12:43:11.036 2 DEBUG oslo_concurrency.lockutils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Acquired lock "refresh_cache-c70e8f51-9397-40dd-9bbe-210e60b75364" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:43:11 compute-0 nova_compute[257802]: 2025-10-02 12:43:11.036 2 DEBUG nova.network.neutron [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:43:11 compute-0 nova_compute[257802]: 2025-10-02 12:43:11.137 2 DEBUG nova.compute.manager [req-2a34a998-935f-4d02-8347-a77a0859e54f req-331a53e3-b6e4-40b3-834b-b4b99dbbc5d1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Received event network-changed-9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:43:11 compute-0 nova_compute[257802]: 2025-10-02 12:43:11.137 2 DEBUG nova.compute.manager [req-2a34a998-935f-4d02-8347-a77a0859e54f req-331a53e3-b6e4-40b3-834b-b4b99dbbc5d1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Refreshing instance network info cache due to event network-changed-9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:43:11 compute-0 nova_compute[257802]: 2025-10-02 12:43:11.138 2 DEBUG oslo_concurrency.lockutils [req-2a34a998-935f-4d02-8347-a77a0859e54f req-331a53e3-b6e4-40b3-834b-b4b99dbbc5d1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-c70e8f51-9397-40dd-9bbe-210e60b75364" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:43:11 compute-0 ceph-mon[73607]: pgmap v2485: 305 pgs: 305 active+clean; 560 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 280 KiB/s rd, 4.3 KiB/s wr, 44 op/s
Oct 02 12:43:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:43:11 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3091719100' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:43:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:43:11 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3091719100' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:43:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2486: 305 pgs: 305 active+clean; 560 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 196 KiB/s rd, 4.3 KiB/s wr, 41 op/s
Oct 02 12:43:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:12.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:12 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:43:12 compute-0 sudo[354423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:12 compute-0 sudo[354423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:12 compute-0 sudo[354423]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:12 compute-0 sudo[354448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:12 compute-0 sudo[354448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:12 compute-0 sudo[354448]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:12 compute-0 nova_compute[257802]: 2025-10-02 12:43:12.338 2 DEBUG nova.network.neutron [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Updating instance_info_cache with network_info: [{"id": "9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf", "address": "fa:16:3e:e0:bd:5e", "network": {"id": "6ea0a90a-9528-4fe1-8b35-dfde9b35e85f", "bridge": "br-int", "label": "tempest-TestShelveInstance-563697374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8f9114c7ab4b6e9fc9650d4bd08af9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd83de3-58", "ovs_interfaceid": "9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:43:12 compute-0 nova_compute[257802]: 2025-10-02 12:43:12.375 2 DEBUG oslo_concurrency.lockutils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Releasing lock "refresh_cache-c70e8f51-9397-40dd-9bbe-210e60b75364" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:43:12 compute-0 nova_compute[257802]: 2025-10-02 12:43:12.377 2 DEBUG nova.virt.libvirt.driver [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:43:12 compute-0 nova_compute[257802]: 2025-10-02 12:43:12.377 2 INFO nova.virt.libvirt.driver [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Creating image(s)
Oct 02 12:43:12 compute-0 nova_compute[257802]: 2025-10-02 12:43:12.409 2 DEBUG nova.storage.rbd_utils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] rbd image c70e8f51-9397-40dd-9bbe-210e60b75364_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:12 compute-0 nova_compute[257802]: 2025-10-02 12:43:12.413 2 DEBUG nova.objects.instance [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lazy-loading 'trusted_certs' on Instance uuid c70e8f51-9397-40dd-9bbe-210e60b75364 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:43:12 compute-0 nova_compute[257802]: 2025-10-02 12:43:12.414 2 DEBUG oslo_concurrency.lockutils [req-2a34a998-935f-4d02-8347-a77a0859e54f req-331a53e3-b6e4-40b3-834b-b4b99dbbc5d1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-c70e8f51-9397-40dd-9bbe-210e60b75364" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:43:12 compute-0 nova_compute[257802]: 2025-10-02 12:43:12.415 2 DEBUG nova.network.neutron [req-2a34a998-935f-4d02-8347-a77a0859e54f req-331a53e3-b6e4-40b3-834b-b4b99dbbc5d1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Refreshing network info cache for port 9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:43:12 compute-0 nova_compute[257802]: 2025-10-02 12:43:12.461 2 DEBUG nova.storage.rbd_utils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] rbd image c70e8f51-9397-40dd-9bbe-210e60b75364_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:12 compute-0 nova_compute[257802]: 2025-10-02 12:43:12.487 2 DEBUG nova.storage.rbd_utils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] rbd image c70e8f51-9397-40dd-9bbe-210e60b75364_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:12 compute-0 nova_compute[257802]: 2025-10-02 12:43:12.490 2 DEBUG oslo_concurrency.lockutils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Acquiring lock "654fc21910f8aa414f1f2f3790904a3ee794a41d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:12 compute-0 nova_compute[257802]: 2025-10-02 12:43:12.491 2 DEBUG oslo_concurrency.lockutils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lock "654fc21910f8aa414f1f2f3790904a3ee794a41d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3091719100' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:43:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3091719100' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:43:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:43:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:43:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:43:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:43:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:43:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:43:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:43:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:12.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:43:12 compute-0 nova_compute[257802]: 2025-10-02 12:43:12.834 2 DEBUG nova.virt.libvirt.imagebackend [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Image locations are: [{'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/b6b477f5-f929-4c8c-a864-0bf66fee68a2/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/b6b477f5-f929-4c8c-a864-0bf66fee68a2/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 02 12:43:12 compute-0 nova_compute[257802]: 2025-10-02 12:43:12.892 2 DEBUG nova.virt.libvirt.imagebackend [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Selected location: {'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/b6b477f5-f929-4c8c-a864-0bf66fee68a2/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 02 12:43:12 compute-0 nova_compute[257802]: 2025-10-02 12:43:12.893 2 DEBUG nova.storage.rbd_utils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] cloning images/b6b477f5-f929-4c8c-a864-0bf66fee68a2@snap to None/c70e8f51-9397-40dd-9bbe-210e60b75364_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:43:13 compute-0 sudo[354592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:13 compute-0 sudo[354592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:13 compute-0 sudo[354592]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:13 compute-0 sudo[354619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:43:13 compute-0 sudo[354619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:13 compute-0 sudo[354619]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:13 compute-0 sudo[354644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:13 compute-0 sudo[354644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:13 compute-0 sudo[354644]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:13 compute-0 sudo[354669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:43:13 compute-0 sudo[354669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:13 compute-0 nova_compute[257802]: 2025-10-02 12:43:13.477 2 DEBUG oslo_concurrency.lockutils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lock "654fc21910f8aa414f1f2f3790904a3ee794a41d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.986s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:13 compute-0 ceph-mon[73607]: pgmap v2486: 305 pgs: 305 active+clean; 560 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 196 KiB/s rd, 4.3 KiB/s wr, 41 op/s
Oct 02 12:43:13 compute-0 nova_compute[257802]: 2025-10-02 12:43:13.657 2 DEBUG nova.objects.instance [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lazy-loading 'migration_context' on Instance uuid c70e8f51-9397-40dd-9bbe-210e60b75364 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:43:13 compute-0 sudo[354669]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:13 compute-0 nova_compute[257802]: 2025-10-02 12:43:13.714 2 DEBUG nova.network.neutron [req-2a34a998-935f-4d02-8347-a77a0859e54f req-331a53e3-b6e4-40b3-834b-b4b99dbbc5d1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Updated VIF entry in instance network info cache for port 9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:43:13 compute-0 nova_compute[257802]: 2025-10-02 12:43:13.715 2 DEBUG nova.network.neutron [req-2a34a998-935f-4d02-8347-a77a0859e54f req-331a53e3-b6e4-40b3-834b-b4b99dbbc5d1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Updating instance_info_cache with network_info: [{"id": "9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf", "address": "fa:16:3e:e0:bd:5e", "network": {"id": "6ea0a90a-9528-4fe1-8b35-dfde9b35e85f", "bridge": "br-int", "label": "tempest-TestShelveInstance-563697374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8f9114c7ab4b6e9fc9650d4bd08af9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd83de3-58", "ovs_interfaceid": "9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:43:13 compute-0 nova_compute[257802]: 2025-10-02 12:43:13.722 2 DEBUG nova.storage.rbd_utils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] flattening vms/c70e8f51-9397-40dd-9bbe-210e60b75364_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 12:43:13 compute-0 nova_compute[257802]: 2025-10-02 12:43:13.772 2 DEBUG oslo_concurrency.lockutils [req-2a34a998-935f-4d02-8347-a77a0859e54f req-331a53e3-b6e4-40b3-834b-b4b99dbbc5d1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-c70e8f51-9397-40dd-9bbe-210e60b75364" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:43:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:43:13 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:43:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:43:13 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:43:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:43:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2487: 305 pgs: 305 active+clean; 562 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 206 KiB/s wr, 35 op/s
Oct 02 12:43:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:43:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:14.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:43:14 compute-0 nova_compute[257802]: 2025-10-02 12:43:14.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:14 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:43:14 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev d823a32a-f989-46a1-b023-054b62803fb7 does not exist
Oct 02 12:43:14 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 5947bf9c-b1f7-456d-aef6-b3121f9ca763 does not exist
Oct 02 12:43:14 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 148fc2b3-d823-440c-8c28-fadfe0dbf185 does not exist
Oct 02 12:43:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:43:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:43:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:43:14 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:43:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:43:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:43:14 compute-0 sudo[354818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:14 compute-0 sudo[354818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:14 compute-0 sudo[354818]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:14 compute-0 nova_compute[257802]: 2025-10-02 12:43:14.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:14 compute-0 sudo[354843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:43:14 compute-0 sudo[354843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:14 compute-0 sudo[354843]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:14 compute-0 nova_compute[257802]: 2025-10-02 12:43:14.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:14 compute-0 sudo[354868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:14 compute-0 sudo[354868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:14 compute-0 sudo[354868]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:14 compute-0 sudo[354893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:43:14 compute-0 sudo[354893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:14 compute-0 podman[354960]: 2025-10-02 12:43:14.771553458 +0000 UTC m=+0.053530791 container create a8e35dae91003626b08d6c9ab1ad92897fdcc86ed46d730ba48ba42de4d547e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cartwright, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:43:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:14.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:14 compute-0 podman[354960]: 2025-10-02 12:43:14.738918889 +0000 UTC m=+0.020896242 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:43:14 compute-0 systemd[1]: Started libpod-conmon-a8e35dae91003626b08d6c9ab1ad92897fdcc86ed46d730ba48ba42de4d547e7.scope.
Oct 02 12:43:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:43:15 compute-0 podman[354960]: 2025-10-02 12:43:15.262328305 +0000 UTC m=+0.544305728 container init a8e35dae91003626b08d6c9ab1ad92897fdcc86ed46d730ba48ba42de4d547e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cartwright, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:43:15 compute-0 podman[354960]: 2025-10-02 12:43:15.269651475 +0000 UTC m=+0.551628808 container start a8e35dae91003626b08d6c9ab1ad92897fdcc86ed46d730ba48ba42de4d547e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cartwright, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:43:15 compute-0 musing_cartwright[354976]: 167 167
Oct 02 12:43:15 compute-0 systemd[1]: libpod-a8e35dae91003626b08d6c9ab1ad92897fdcc86ed46d730ba48ba42de4d547e7.scope: Deactivated successfully.
Oct 02 12:43:15 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:43:15 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:43:15 compute-0 ceph-mon[73607]: pgmap v2487: 305 pgs: 305 active+clean; 562 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 206 KiB/s wr, 35 op/s
Oct 02 12:43:15 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:43:15 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:43:15 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:43:15 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:43:15 compute-0 podman[354960]: 2025-10-02 12:43:15.408798979 +0000 UTC m=+0.690776332 container attach a8e35dae91003626b08d6c9ab1ad92897fdcc86ed46d730ba48ba42de4d547e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:43:15 compute-0 podman[354960]: 2025-10-02 12:43:15.410865469 +0000 UTC m=+0.692842822 container died a8e35dae91003626b08d6c9ab1ad92897fdcc86ed46d730ba48ba42de4d547e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cartwright, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:43:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2488: 305 pgs: 305 active+clean; 548 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.0 MiB/s wr, 113 op/s
Oct 02 12:43:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:43:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:16.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:43:16 compute-0 nova_compute[257802]: 2025-10-02 12:43:16.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:43:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fb0820f9fe72279270f6f72a701c7340d377f4ee3bd015f3259464d5715aa07-merged.mount: Deactivated successfully.
Oct 02 12:43:16 compute-0 podman[354960]: 2025-10-02 12:43:16.686223932 +0000 UTC m=+1.968201265 container remove a8e35dae91003626b08d6c9ab1ad92897fdcc86ed46d730ba48ba42de4d547e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cartwright, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 12:43:16 compute-0 systemd[1]: libpod-conmon-a8e35dae91003626b08d6c9ab1ad92897fdcc86ed46d730ba48ba42de4d547e7.scope: Deactivated successfully.
Oct 02 12:43:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:43:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:16.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:43:16 compute-0 podman[355001]: 2025-10-02 12:43:16.900866144 +0000 UTC m=+0.021635431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:43:17 compute-0 podman[355001]: 2025-10-02 12:43:17.136926099 +0000 UTC m=+0.257695326 container create 14dc85764ca3b58a1874c80f3a8d8a0fbb0dcd4a1f9943c07f2d528458957c85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kowalevski, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 12:43:17 compute-0 systemd[1]: Started libpod-conmon-14dc85764ca3b58a1874c80f3a8d8a0fbb0dcd4a1f9943c07f2d528458957c85.scope.
Oct 02 12:43:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:43:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8b97b6e89c29c04d77ac059d1479097a69ba89d6e4b6c00d39fd7653264f64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:43:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8b97b6e89c29c04d77ac059d1479097a69ba89d6e4b6c00d39fd7653264f64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:43:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8b97b6e89c29c04d77ac059d1479097a69ba89d6e4b6c00d39fd7653264f64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:43:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8b97b6e89c29c04d77ac059d1479097a69ba89d6e4b6c00d39fd7653264f64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:43:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8b97b6e89c29c04d77ac059d1479097a69ba89d6e4b6c00d39fd7653264f64/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:43:17 compute-0 podman[355001]: 2025-10-02 12:43:17.322151731 +0000 UTC m=+0.442920958 container init 14dc85764ca3b58a1874c80f3a8d8a0fbb0dcd4a1f9943c07f2d528458957c85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kowalevski, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 12:43:17 compute-0 podman[355001]: 2025-10-02 12:43:17.329606664 +0000 UTC m=+0.450375891 container start 14dc85764ca3b58a1874c80f3a8d8a0fbb0dcd4a1f9943c07f2d528458957c85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kowalevski, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 12:43:17 compute-0 podman[355001]: 2025-10-02 12:43:17.417992376 +0000 UTC m=+0.538761603 container attach 14dc85764ca3b58a1874c80f3a8d8a0fbb0dcd4a1f9943c07f2d528458957c85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kowalevski, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 12:43:17 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2558787742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:17 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3929962429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2489: 305 pgs: 305 active+clean; 548 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.0 MiB/s wr, 96 op/s
Oct 02 12:43:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:18.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:18 compute-0 nova_compute[257802]: 2025-10-02 12:43:18.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:43:18 compute-0 nova_compute[257802]: 2025-10-02 12:43:18.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:43:18 compute-0 zealous_kowalevski[355017]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:43:18 compute-0 zealous_kowalevski[355017]: --> relative data size: 1.0
Oct 02 12:43:18 compute-0 zealous_kowalevski[355017]: --> All data devices are unavailable
Oct 02 12:43:18 compute-0 systemd[1]: libpod-14dc85764ca3b58a1874c80f3a8d8a0fbb0dcd4a1f9943c07f2d528458957c85.scope: Deactivated successfully.
Oct 02 12:43:18 compute-0 podman[355001]: 2025-10-02 12:43:18.16243918 +0000 UTC m=+1.283208407 container died 14dc85764ca3b58a1874c80f3a8d8a0fbb0dcd4a1f9943c07f2d528458957c85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:43:18 compute-0 nova_compute[257802]: 2025-10-02 12:43:18.299 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-714ae75f-1424-4b97-b849-84e5b4e77668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:43:18 compute-0 nova_compute[257802]: 2025-10-02 12:43:18.300 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-714ae75f-1424-4b97-b849-84e5b4e77668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:43:18 compute-0 nova_compute[257802]: 2025-10-02 12:43:18.300 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:43:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:43:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:18.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:43:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc8b97b6e89c29c04d77ac059d1479097a69ba89d6e4b6c00d39fd7653264f64-merged.mount: Deactivated successfully.
Oct 02 12:43:19 compute-0 nova_compute[257802]: 2025-10-02 12:43:19.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:19 compute-0 ceph-mon[73607]: pgmap v2488: 305 pgs: 305 active+clean; 548 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.0 MiB/s wr, 113 op/s
Oct 02 12:43:19 compute-0 ceph-mon[73607]: pgmap v2489: 305 pgs: 305 active+clean; 548 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.0 MiB/s wr, 96 op/s
Oct 02 12:43:19 compute-0 nova_compute[257802]: 2025-10-02 12:43:19.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:19 compute-0 nova_compute[257802]: 2025-10-02 12:43:19.851 2 DEBUG nova.virt.libvirt.driver [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Image rbd:vms/c70e8f51-9397-40dd-9bbe-210e60b75364_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Oct 02 12:43:19 compute-0 nova_compute[257802]: 2025-10-02 12:43:19.852 2 DEBUG nova.virt.libvirt.driver [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:43:19 compute-0 nova_compute[257802]: 2025-10-02 12:43:19.853 2 DEBUG nova.virt.libvirt.driver [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Ensure instance console log exists: /var/lib/nova/instances/c70e8f51-9397-40dd-9bbe-210e60b75364/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:43:19 compute-0 nova_compute[257802]: 2025-10-02 12:43:19.853 2 DEBUG oslo_concurrency.lockutils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:19 compute-0 nova_compute[257802]: 2025-10-02 12:43:19.854 2 DEBUG oslo_concurrency.lockutils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:19 compute-0 nova_compute[257802]: 2025-10-02 12:43:19.854 2 DEBUG oslo_concurrency.lockutils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:19 compute-0 nova_compute[257802]: 2025-10-02 12:43:19.858 2 DEBUG nova.virt.libvirt.driver [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Start _get_guest_xml network_info=[{"id": "9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf", "address": "fa:16:3e:e0:bd:5e", "network": {"id": "6ea0a90a-9528-4fe1-8b35-dfde9b35e85f", "bridge": "br-int", "label": "tempest-TestShelveInstance-563697374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8f9114c7ab4b6e9fc9650d4bd08af9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd83de3-58", "ovs_interfaceid": "9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T12:42:46Z,direct_url=<?>,disk_format='raw',id=b6b477f5-f929-4c8c-a864-0bf66fee68a2,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-561749725-shelved',owner='4b8f9114c7ab4b6e9fc9650d4bd08af9',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T12:42:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:43:19 compute-0 nova_compute[257802]: 2025-10-02 12:43:19.864 2 WARNING nova.virt.libvirt.driver [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:43:19 compute-0 nova_compute[257802]: 2025-10-02 12:43:19.882 2 DEBUG nova.virt.libvirt.host [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:43:19 compute-0 nova_compute[257802]: 2025-10-02 12:43:19.883 2 DEBUG nova.virt.libvirt.host [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:43:19 compute-0 nova_compute[257802]: 2025-10-02 12:43:19.887 2 DEBUG nova.virt.libvirt.host [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:43:19 compute-0 nova_compute[257802]: 2025-10-02 12:43:19.888 2 DEBUG nova.virt.libvirt.host [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:43:19 compute-0 nova_compute[257802]: 2025-10-02 12:43:19.890 2 DEBUG nova.virt.libvirt.driver [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:43:19 compute-0 nova_compute[257802]: 2025-10-02 12:43:19.890 2 DEBUG nova.virt.hardware [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T12:42:46Z,direct_url=<?>,disk_format='raw',id=b6b477f5-f929-4c8c-a864-0bf66fee68a2,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-561749725-shelved',owner='4b8f9114c7ab4b6e9fc9650d4bd08af9',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T12:42:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:43:19 compute-0 nova_compute[257802]: 2025-10-02 12:43:19.891 2 DEBUG nova.virt.hardware [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:43:19 compute-0 nova_compute[257802]: 2025-10-02 12:43:19.891 2 DEBUG nova.virt.hardware [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:43:19 compute-0 nova_compute[257802]: 2025-10-02 12:43:19.892 2 DEBUG nova.virt.hardware [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:43:19 compute-0 nova_compute[257802]: 2025-10-02 12:43:19.892 2 DEBUG nova.virt.hardware [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:43:19 compute-0 nova_compute[257802]: 2025-10-02 12:43:19.893 2 DEBUG nova.virt.hardware [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:43:19 compute-0 nova_compute[257802]: 2025-10-02 12:43:19.893 2 DEBUG nova.virt.hardware [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:43:19 compute-0 nova_compute[257802]: 2025-10-02 12:43:19.894 2 DEBUG nova.virt.hardware [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:43:19 compute-0 nova_compute[257802]: 2025-10-02 12:43:19.894 2 DEBUG nova.virt.hardware [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:43:19 compute-0 nova_compute[257802]: 2025-10-02 12:43:19.894 2 DEBUG nova.virt.hardware [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:43:19 compute-0 nova_compute[257802]: 2025-10-02 12:43:19.895 2 DEBUG nova.virt.hardware [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:43:19 compute-0 nova_compute[257802]: 2025-10-02 12:43:19.895 2 DEBUG nova.objects.instance [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lazy-loading 'vcpu_model' on Instance uuid c70e8f51-9397-40dd-9bbe-210e60b75364 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:43:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2490: 305 pgs: 305 active+clean; 560 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 4.1 MiB/s wr, 115 op/s
Oct 02 12:43:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:20.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:20 compute-0 nova_compute[257802]: 2025-10-02 12:43:20.046 2 DEBUG oslo_concurrency.processutils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:20 compute-0 podman[355001]: 2025-10-02 12:43:20.175985944 +0000 UTC m=+3.296755181 container remove 14dc85764ca3b58a1874c80f3a8d8a0fbb0dcd4a1f9943c07f2d528458957c85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kowalevski, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:43:20 compute-0 sudo[354893]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:20 compute-0 systemd[1]: libpod-conmon-14dc85764ca3b58a1874c80f3a8d8a0fbb0dcd4a1f9943c07f2d528458957c85.scope: Deactivated successfully.
Oct 02 12:43:20 compute-0 sudo[355064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:20 compute-0 sudo[355064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:20 compute-0 sudo[355064]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:20 compute-0 sudo[355089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:43:20 compute-0 sudo[355089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:20 compute-0 sudo[355089]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:20 compute-0 sudo[355114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:20 compute-0 sudo[355114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:20 compute-0 sudo[355114]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:20 compute-0 sudo[355139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:43:20 compute-0 sudo[355139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:20 compute-0 nova_compute[257802]: 2025-10-02 12:43:20.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:43:20 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2555003944' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:20 compute-0 nova_compute[257802]: 2025-10-02 12:43:20.579 2 DEBUG oslo_concurrency.processutils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:20 compute-0 nova_compute[257802]: 2025-10-02 12:43:20.605 2 DEBUG nova.storage.rbd_utils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] rbd image c70e8f51-9397-40dd-9bbe-210e60b75364_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:20 compute-0 nova_compute[257802]: 2025-10-02 12:43:20.612 2 DEBUG oslo_concurrency.processutils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:20 compute-0 nova_compute[257802]: 2025-10-02 12:43:20.700 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Updating instance_info_cache with network_info: [{"id": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "address": "fa:16:3e:1a:b3:16", "network": {"id": "e7b8a8de-b6cd-4283-854b-a2bd919c371d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1851369337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a41d99312f014c65adddea4f70536a15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap763709ed-3f", "ovs_interfaceid": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:43:20 compute-0 nova_compute[257802]: 2025-10-02 12:43:20.742 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-714ae75f-1424-4b97-b849-84e5b4e77668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:43:20 compute-0 nova_compute[257802]: 2025-10-02 12:43:20.743 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:43:20 compute-0 nova_compute[257802]: 2025-10-02 12:43:20.743 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:43:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:43:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:20.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:43:20 compute-0 podman[355230]: 2025-10-02 12:43:20.764581245 +0000 UTC m=+0.026285265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:43:20 compute-0 podman[355230]: 2025-10-02 12:43:20.933747364 +0000 UTC m=+0.195451414 container create 3f7e27bc84b49acdc8e2c81c90f92fd0bb3492b34dd8764d3b32e0a04b990080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:43:20 compute-0 nova_compute[257802]: 2025-10-02 12:43:20.950 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:20 compute-0 nova_compute[257802]: 2025-10-02 12:43:20.951 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:20 compute-0 nova_compute[257802]: 2025-10-02 12:43:20.951 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:20 compute-0 nova_compute[257802]: 2025-10-02 12:43:20.951 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:43:20 compute-0 nova_compute[257802]: 2025-10-02 12:43:20.952 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:43:21 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4200315679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.100 2 DEBUG oslo_concurrency.processutils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:21 compute-0 systemd[1]: Started libpod-conmon-3f7e27bc84b49acdc8e2c81c90f92fd0bb3492b34dd8764d3b32e0a04b990080.scope.
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.104 2 DEBUG nova.virt.libvirt.vif [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:42:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-561749725',display_name='tempest-TestShelveInstance-server-561749725',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-561749725',id=157,image_ref='b6b477f5-f929-4c8c-a864-0bf66fee68a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1513273409',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:42:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='4b8f9114c7ab4b6e9fc9650d4bd08af9',ramdisk_id='',reservation_id='r-iq5s6cd5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1219039163',owner_user_name='tempest-TestShelveInstance-1219039163-project-member',shelved_at='2025-10-02T12:42:58.827915',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='b6b477f5-f929-4c8c-a864-0bf66fee68a2'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:43:08Z,user_data=None,user_id='56c6abe1bb704c8aa499677aeb9017f5',uuid=c70e8f51-9397-40dd-9bbe-210e60b75364,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf", "address": "fa:16:3e:e0:bd:5e", "network": {"id": "6ea0a90a-9528-4fe1-8b35-dfde9b35e85f", "bridge": "br-int", "label": "tempest-TestShelveInstance-563697374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8f9114c7ab4b6e9fc9650d4bd08af9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd83de3-58", "ovs_interfaceid": "9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.106 2 DEBUG nova.network.os_vif_util [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Converting VIF {"id": "9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf", "address": "fa:16:3e:e0:bd:5e", "network": {"id": "6ea0a90a-9528-4fe1-8b35-dfde9b35e85f", "bridge": "br-int", "label": "tempest-TestShelveInstance-563697374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8f9114c7ab4b6e9fc9650d4bd08af9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd83de3-58", "ovs_interfaceid": "9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.107 2 DEBUG nova.network.os_vif_util [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:bd:5e,bridge_name='br-int',has_traffic_filtering=True,id=9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf,network=Network(6ea0a90a-9528-4fe1-8b35-dfde9b35e85f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fd83de3-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.109 2 DEBUG nova.objects.instance [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lazy-loading 'pci_devices' on Instance uuid c70e8f51-9397-40dd-9bbe-210e60b75364 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:43:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:43:21 compute-0 podman[355230]: 2025-10-02 12:43:21.188530927 +0000 UTC m=+0.450234947 container init 3f7e27bc84b49acdc8e2c81c90f92fd0bb3492b34dd8764d3b32e0a04b990080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 12:43:21 compute-0 podman[355230]: 2025-10-02 12:43:21.195138049 +0000 UTC m=+0.456842049 container start 3f7e27bc84b49acdc8e2c81c90f92fd0bb3492b34dd8764d3b32e0a04b990080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:43:21 compute-0 bold_mcnulty[355280]: 167 167
Oct 02 12:43:21 compute-0 systemd[1]: libpod-3f7e27bc84b49acdc8e2c81c90f92fd0bb3492b34dd8764d3b32e0a04b990080.scope: Deactivated successfully.
Oct 02 12:43:21 compute-0 conmon[355280]: conmon 3f7e27bc84b49acdc8e2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3f7e27bc84b49acdc8e2c81c90f92fd0bb3492b34dd8764d3b32e0a04b990080.scope/container/memory.events
Oct 02 12:43:21 compute-0 podman[355230]: 2025-10-02 12:43:21.255002283 +0000 UTC m=+0.516706303 container attach 3f7e27bc84b49acdc8e2c81c90f92fd0bb3492b34dd8764d3b32e0a04b990080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:43:21 compute-0 podman[355230]: 2025-10-02 12:43:21.255377523 +0000 UTC m=+0.517081543 container died 3f7e27bc84b49acdc8e2c81c90f92fd0bb3492b34dd8764d3b32e0a04b990080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mcnulty, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.280 2 DEBUG nova.virt.libvirt.driver [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:43:21 compute-0 nova_compute[257802]:   <uuid>c70e8f51-9397-40dd-9bbe-210e60b75364</uuid>
Oct 02 12:43:21 compute-0 nova_compute[257802]:   <name>instance-0000009d</name>
Oct 02 12:43:21 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:43:21 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:43:21 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:43:21 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:       <nova:name>tempest-TestShelveInstance-server-561749725</nova:name>
Oct 02 12:43:21 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:43:19</nova:creationTime>
Oct 02 12:43:21 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:43:21 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:43:21 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:43:21 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:43:21 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:43:21 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:43:21 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:43:21 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:43:21 compute-0 nova_compute[257802]:         <nova:user uuid="56c6abe1bb704c8aa499677aeb9017f5">tempest-TestShelveInstance-1219039163-project-member</nova:user>
Oct 02 12:43:21 compute-0 nova_compute[257802]:         <nova:project uuid="4b8f9114c7ab4b6e9fc9650d4bd08af9">tempest-TestShelveInstance-1219039163</nova:project>
Oct 02 12:43:21 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:43:21 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="b6b477f5-f929-4c8c-a864-0bf66fee68a2"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:43:21 compute-0 nova_compute[257802]:         <nova:port uuid="9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf">
Oct 02 12:43:21 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:43:21 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:43:21 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:43:21 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <system>
Oct 02 12:43:21 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:43:21 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:43:21 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:43:21 compute-0 nova_compute[257802]:       <entry name="serial">c70e8f51-9397-40dd-9bbe-210e60b75364</entry>
Oct 02 12:43:21 compute-0 nova_compute[257802]:       <entry name="uuid">c70e8f51-9397-40dd-9bbe-210e60b75364</entry>
Oct 02 12:43:21 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     </system>
Oct 02 12:43:21 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:43:21 compute-0 nova_compute[257802]:   <os>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:   </os>
Oct 02 12:43:21 compute-0 nova_compute[257802]:   <features>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:   </features>
Oct 02 12:43:21 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:43:21 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:43:21 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:43:21 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/c70e8f51-9397-40dd-9bbe-210e60b75364_disk">
Oct 02 12:43:21 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:       </source>
Oct 02 12:43:21 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:43:21 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:43:21 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:43:21 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/c70e8f51-9397-40dd-9bbe-210e60b75364_disk.config">
Oct 02 12:43:21 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:       </source>
Oct 02 12:43:21 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:43:21 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:43:21 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:43:21 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:e0:bd:5e"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:       <target dev="tap9fd83de3-58"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:43:21 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/c70e8f51-9397-40dd-9bbe-210e60b75364/console.log" append="off"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <video>
Oct 02 12:43:21 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     </video>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <input type="keyboard" bus="usb"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:43:21 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:43:21 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:43:21 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:43:21 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:43:21 compute-0 nova_compute[257802]: </domain>
Oct 02 12:43:21 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.282 2 DEBUG nova.compute.manager [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Preparing to wait for external event network-vif-plugged-9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.282 2 DEBUG oslo_concurrency.lockutils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Acquiring lock "c70e8f51-9397-40dd-9bbe-210e60b75364-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.283 2 DEBUG oslo_concurrency.lockutils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lock "c70e8f51-9397-40dd-9bbe-210e60b75364-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.283 2 DEBUG oslo_concurrency.lockutils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lock "c70e8f51-9397-40dd-9bbe-210e60b75364-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.284 2 DEBUG nova.virt.libvirt.vif [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:42:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-561749725',display_name='tempest-TestShelveInstance-server-561749725',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-561749725',id=157,image_ref='b6b477f5-f929-4c8c-a864-0bf66fee68a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1513273409',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:42:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='4b8f9114c7ab4b6e9fc9650d4bd08af9',ramdisk_id='',reservation_id='r-iq5s6cd5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1219039163',owner_user_name='tempest-TestShelveInstance-1219039163-project-member',shelved_at='2025-10-02T12:42:58.827915',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='b6b477f5-f929-4c8c-a864-0bf66fee68a2'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:43:08Z,user_data=None,user_id='56c6abe1bb704c8aa499677aeb9017f5',uuid=c70e8f51-9397-40dd-9bbe-210e60b75364,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf", "address": "fa:16:3e:e0:bd:5e", "network": {"id": "6ea0a90a-9528-4fe1-8b35-dfde9b35e85f", "bridge": "br-int", "label": "tempest-TestShelveInstance-563697374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8f9114c7ab4b6e9fc9650d4bd08af9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd83de3-58", "ovs_interfaceid": "9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.284 2 DEBUG nova.network.os_vif_util [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Converting VIF {"id": "9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf", "address": "fa:16:3e:e0:bd:5e", "network": {"id": "6ea0a90a-9528-4fe1-8b35-dfde9b35e85f", "bridge": "br-int", "label": "tempest-TestShelveInstance-563697374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8f9114c7ab4b6e9fc9650d4bd08af9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd83de3-58", "ovs_interfaceid": "9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.284 2 DEBUG nova.network.os_vif_util [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:bd:5e,bridge_name='br-int',has_traffic_filtering=True,id=9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf,network=Network(6ea0a90a-9528-4fe1-8b35-dfde9b35e85f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fd83de3-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.285 2 DEBUG os_vif [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:bd:5e,bridge_name='br-int',has_traffic_filtering=True,id=9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf,network=Network(6ea0a90a-9528-4fe1-8b35-dfde9b35e85f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fd83de3-58') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.286 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.286 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.289 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9fd83de3-58, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.289 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9fd83de3-58, col_values=(('external_ids', {'iface-id': '9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e0:bd:5e', 'vm-uuid': 'c70e8f51-9397-40dd-9bbe-210e60b75364'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:21 compute-0 NetworkManager[44987]: <info>  [1759409001.2917] manager: (tap9fd83de3-58): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/307)
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.301 2 INFO os_vif [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:bd:5e,bridge_name='br-int',has_traffic_filtering=True,id=9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf,network=Network(6ea0a90a-9528-4fe1-8b35-dfde9b35e85f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fd83de3-58')
Oct 02 12:43:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:43:21 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1952725750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.419 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.559 2 DEBUG nova.virt.libvirt.driver [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.559 2 DEBUG nova.virt.libvirt.driver [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.559 2 DEBUG nova.virt.libvirt.driver [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] No VIF found with MAC fa:16:3e:e0:bd:5e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.560 2 INFO nova.virt.libvirt.driver [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Using config drive
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.583 2 DEBUG nova.storage.rbd_utils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] rbd image c70e8f51-9397-40dd-9bbe-210e60b75364_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.592 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.592 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.595 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000099 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.595 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-00000099 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.677 2 DEBUG nova.objects.instance [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lazy-loading 'ec2_ids' on Instance uuid c70e8f51-9397-40dd-9bbe-210e60b75364 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:43:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c2ad1967cd84fb1191acbf69a8b7808c874655e519822b439f192d36131417c-merged.mount: Deactivated successfully.
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.756 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.757 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3950MB free_disk=20.953102111816406GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.757 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.757 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.768 2 DEBUG nova.objects.instance [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lazy-loading 'keypairs' on Instance uuid c70e8f51-9397-40dd-9bbe-210e60b75364 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:43:21 compute-0 podman[355230]: 2025-10-02 12:43:21.877518894 +0000 UTC m=+1.139222894 container remove 3f7e27bc84b49acdc8e2c81c90f92fd0bb3492b34dd8764d3b32e0a04b990080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 12:43:21 compute-0 ceph-mon[73607]: pgmap v2490: 305 pgs: 305 active+clean; 560 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 4.1 MiB/s wr, 115 op/s
Oct 02 12:43:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2555003944' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4200315679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.928 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 714ae75f-1424-4b97-b849-84e5b4e77668 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.928 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance c70e8f51-9397-40dd-9bbe-210e60b75364 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.928 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.929 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:43:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2491: 305 pgs: 305 active+clean; 560 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 4.1 MiB/s wr, 120 op/s
Oct 02 12:43:21 compute-0 systemd[1]: libpod-conmon-3f7e27bc84b49acdc8e2c81c90f92fd0bb3492b34dd8764d3b32e0a04b990080.scope: Deactivated successfully.
Oct 02 12:43:21 compute-0 nova_compute[257802]: 2025-10-02 12:43:21.987 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:22.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:22 compute-0 podman[355328]: 2025-10-02 12:43:22.045955605 +0000 UTC m=+0.042644014 container create 0eb8a3e812742e7c92407d0a5973da948e146b017c38f5b8697cd25c5fa68e81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:43:22 compute-0 systemd[1]: Started libpod-conmon-0eb8a3e812742e7c92407d0a5973da948e146b017c38f5b8697cd25c5fa68e81.scope.
Oct 02 12:43:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a1f8afd889b41864a87b5621defc7d86039abaab491e9124b4ebecf57776edd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a1f8afd889b41864a87b5621defc7d86039abaab491e9124b4ebecf57776edd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a1f8afd889b41864a87b5621defc7d86039abaab491e9124b4ebecf57776edd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a1f8afd889b41864a87b5621defc7d86039abaab491e9124b4ebecf57776edd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:43:22 compute-0 podman[355328]: 2025-10-02 12:43:22.028862177 +0000 UTC m=+0.025550576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:43:22 compute-0 podman[355328]: 2025-10-02 12:43:22.129699204 +0000 UTC m=+0.126387593 container init 0eb8a3e812742e7c92407d0a5973da948e146b017c38f5b8697cd25c5fa68e81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wright, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 12:43:22 compute-0 podman[355328]: 2025-10-02 12:43:22.138312384 +0000 UTC m=+0.135000783 container start 0eb8a3e812742e7c92407d0a5973da948e146b017c38f5b8697cd25c5fa68e81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wright, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:43:22 compute-0 podman[355328]: 2025-10-02 12:43:22.141891082 +0000 UTC m=+0.138579471 container attach 0eb8a3e812742e7c92407d0a5973da948e146b017c38f5b8697cd25c5fa68e81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wright, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:43:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:43:22 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/437726158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:22 compute-0 nova_compute[257802]: 2025-10-02 12:43:22.428 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:22 compute-0 nova_compute[257802]: 2025-10-02 12:43:22.434 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:43:22 compute-0 nova_compute[257802]: 2025-10-02 12:43:22.492 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:43:22 compute-0 nova_compute[257802]: 2025-10-02 12:43:22.552 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:43:22 compute-0 nova_compute[257802]: 2025-10-02 12:43:22.552 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:22 compute-0 nova_compute[257802]: 2025-10-02 12:43:22.756 2 INFO nova.virt.libvirt.driver [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Creating config drive at /var/lib/nova/instances/c70e8f51-9397-40dd-9bbe-210e60b75364/disk.config
Oct 02 12:43:22 compute-0 nova_compute[257802]: 2025-10-02 12:43:22.761 2 DEBUG oslo_concurrency.processutils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c70e8f51-9397-40dd-9bbe-210e60b75364/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7y2xw4p6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:43:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:22.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:43:22 compute-0 nova_compute[257802]: 2025-10-02 12:43:22.895 2 DEBUG oslo_concurrency.processutils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c70e8f51-9397-40dd-9bbe-210e60b75364/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7y2xw4p6" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:22 compute-0 busy_wright[355343]: {
Oct 02 12:43:22 compute-0 busy_wright[355343]:     "1": [
Oct 02 12:43:22 compute-0 busy_wright[355343]:         {
Oct 02 12:43:22 compute-0 busy_wright[355343]:             "devices": [
Oct 02 12:43:22 compute-0 busy_wright[355343]:                 "/dev/loop3"
Oct 02 12:43:22 compute-0 busy_wright[355343]:             ],
Oct 02 12:43:22 compute-0 busy_wright[355343]:             "lv_name": "ceph_lv0",
Oct 02 12:43:22 compute-0 busy_wright[355343]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:43:22 compute-0 busy_wright[355343]:             "lv_size": "7511998464",
Oct 02 12:43:22 compute-0 busy_wright[355343]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:43:22 compute-0 busy_wright[355343]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:43:22 compute-0 busy_wright[355343]:             "name": "ceph_lv0",
Oct 02 12:43:22 compute-0 busy_wright[355343]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:43:22 compute-0 busy_wright[355343]:             "tags": {
Oct 02 12:43:22 compute-0 busy_wright[355343]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:43:22 compute-0 busy_wright[355343]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:43:22 compute-0 busy_wright[355343]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:43:22 compute-0 busy_wright[355343]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:43:22 compute-0 busy_wright[355343]:                 "ceph.cluster_name": "ceph",
Oct 02 12:43:22 compute-0 busy_wright[355343]:                 "ceph.crush_device_class": "",
Oct 02 12:43:22 compute-0 busy_wright[355343]:                 "ceph.encrypted": "0",
Oct 02 12:43:22 compute-0 busy_wright[355343]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:43:22 compute-0 busy_wright[355343]:                 "ceph.osd_id": "1",
Oct 02 12:43:22 compute-0 busy_wright[355343]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:43:22 compute-0 busy_wright[355343]:                 "ceph.type": "block",
Oct 02 12:43:22 compute-0 busy_wright[355343]:                 "ceph.vdo": "0"
Oct 02 12:43:22 compute-0 busy_wright[355343]:             },
Oct 02 12:43:22 compute-0 busy_wright[355343]:             "type": "block",
Oct 02 12:43:22 compute-0 busy_wright[355343]:             "vg_name": "ceph_vg0"
Oct 02 12:43:22 compute-0 busy_wright[355343]:         }
Oct 02 12:43:22 compute-0 busy_wright[355343]:     ]
Oct 02 12:43:22 compute-0 busy_wright[355343]: }
Oct 02 12:43:22 compute-0 systemd[1]: libpod-0eb8a3e812742e7c92407d0a5973da948e146b017c38f5b8697cd25c5fa68e81.scope: Deactivated successfully.
Oct 02 12:43:22 compute-0 podman[355328]: 2025-10-02 12:43:22.934163026 +0000 UTC m=+0.930851405 container died 0eb8a3e812742e7c92407d0a5973da948e146b017c38f5b8697cd25c5fa68e81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wright, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:43:22 compute-0 nova_compute[257802]: 2025-10-02 12:43:22.936 2 DEBUG nova.storage.rbd_utils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] rbd image c70e8f51-9397-40dd-9bbe-210e60b75364_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:22 compute-0 nova_compute[257802]: 2025-10-02 12:43:22.941 2 DEBUG oslo_concurrency.processutils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c70e8f51-9397-40dd-9bbe-210e60b75364/disk.config c70e8f51-9397-40dd-9bbe-210e60b75364_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1952725750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:23 compute-0 ceph-mon[73607]: pgmap v2491: 305 pgs: 305 active+clean; 560 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 4.1 MiB/s wr, 120 op/s
Oct 02 12:43:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/437726158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a1f8afd889b41864a87b5621defc7d86039abaab491e9124b4ebecf57776edd-merged.mount: Deactivated successfully.
Oct 02 12:43:23 compute-0 podman[355328]: 2025-10-02 12:43:23.11252641 +0000 UTC m=+1.109214789 container remove 0eb8a3e812742e7c92407d0a5973da948e146b017c38f5b8697cd25c5fa68e81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 12:43:23 compute-0 systemd[1]: libpod-conmon-0eb8a3e812742e7c92407d0a5973da948e146b017c38f5b8697cd25c5fa68e81.scope: Deactivated successfully.
Oct 02 12:43:23 compute-0 sudo[355139]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:23 compute-0 sudo[355427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:23 compute-0 sudo[355427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:23 compute-0 sudo[355427]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:23 compute-0 sudo[355452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:43:23 compute-0 sudo[355452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:23 compute-0 sudo[355452]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:23 compute-0 sudo[355477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:23 compute-0 sudo[355477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:23 compute-0 sudo[355477]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:23 compute-0 sudo[355505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:43:23 compute-0 sudo[355505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:43:23.569352) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409003569416, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 1524, "num_deletes": 257, "total_data_size": 2392066, "memory_usage": 2431280, "flush_reason": "Manual Compaction"}
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409003584752, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 2362261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53818, "largest_seqno": 55341, "table_properties": {"data_size": 2355162, "index_size": 4042, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15631, "raw_average_key_size": 20, "raw_value_size": 2340659, "raw_average_value_size": 3043, "num_data_blocks": 177, "num_entries": 769, "num_filter_entries": 769, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408874, "oldest_key_time": 1759408874, "file_creation_time": 1759409003, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 15446 microseconds, and 6408 cpu microseconds.
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:43:23.584798) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 2362261 bytes OK
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:43:23.584850) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:43:23.593590) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:43:23.593632) EVENT_LOG_v1 {"time_micros": 1759409003593623, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:43:23.593655) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 2385437, prev total WAL file size 2385437, number of live WAL files 2.
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:43:23.594593) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303036' seq:72057594037927935, type:22 .. '6C6F676D0032323537' seq:0, type:0; will stop at (end)
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(2306KB)], [119(10MB)]
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409003594625, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 12898512, "oldest_snapshot_seqno": -1}
Oct 02 12:43:23 compute-0 nova_compute[257802]: 2025-10-02 12:43:23.593 2 DEBUG oslo_concurrency.processutils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c70e8f51-9397-40dd-9bbe-210e60b75364/disk.config c70e8f51-9397-40dd-9bbe-210e60b75364_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:23 compute-0 nova_compute[257802]: 2025-10-02 12:43:23.594 2 INFO nova.virt.libvirt.driver [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Deleting local config drive /var/lib/nova/instances/c70e8f51-9397-40dd-9bbe-210e60b75364/disk.config because it was imported into RBD.
Oct 02 12:43:23 compute-0 kernel: tap9fd83de3-58: entered promiscuous mode
Oct 02 12:43:23 compute-0 NetworkManager[44987]: <info>  [1759409003.6515] manager: (tap9fd83de3-58): new Tun device (/org/freedesktop/NetworkManager/Devices/308)
Oct 02 12:43:23 compute-0 ovn_controller[148183]: 2025-10-02T12:43:23Z|00678|binding|INFO|Claiming lport 9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf for this chassis.
Oct 02 12:43:23 compute-0 ovn_controller[148183]: 2025-10-02T12:43:23Z|00679|binding|INFO|9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf: Claiming fa:16:3e:e0:bd:5e 10.100.0.7
Oct 02 12:43:23 compute-0 nova_compute[257802]: 2025-10-02 12:43:23.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:23.677 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:bd:5e 10.100.0.7'], port_security=['fa:16:3e:e0:bd:5e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c70e8f51-9397-40dd-9bbe-210e60b75364', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b8f9114c7ab4b6e9fc9650d4bd08af9', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'aaa1af3a-3e07-4a02-982d-cee91699f079', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=04a89c39-8141-4654-8368-c858180215b3, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:23.678 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf in datapath 6ea0a90a-9528-4fe1-8b35-dfde9b35e85f bound to our chassis
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:23.680 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6ea0a90a-9528-4fe1-8b35-dfde9b35e85f
Oct 02 12:43:23 compute-0 systemd-udevd[355565]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:43:23 compute-0 ovn_controller[148183]: 2025-10-02T12:43:23Z|00680|binding|INFO|Setting lport 9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf ovn-installed in OVS
Oct 02 12:43:23 compute-0 ovn_controller[148183]: 2025-10-02T12:43:23Z|00681|binding|INFO|Setting lport 9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf up in Southbound
Oct 02 12:43:23 compute-0 nova_compute[257802]: 2025-10-02 12:43:23.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:23 compute-0 nova_compute[257802]: 2025-10-02 12:43:23.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:23.691 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c2be8d39-780a-465d-bf30-047245073d7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:23.691 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6ea0a90a-91 in ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:23.693 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6ea0a90a-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:23.693 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[bbf581d4-f7ee-428e-be4d-5487a5bf16c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 systemd-machined[211836]: New machine qemu-77-instance-0000009d.
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:23.693 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f935cd-b5e3-42d4-b26e-eeb3311c561d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 NetworkManager[44987]: <info>  [1759409003.6987] device (tap9fd83de3-58): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:43:23 compute-0 NetworkManager[44987]: <info>  [1759409003.7001] device (tap9fd83de3-58): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:23.705 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[37fafc18-9362-46ba-8c7e-f2505a31780e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 systemd[1]: Started Virtual Machine qemu-77-instance-0000009d.
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:23.728 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6f86e2d2-df47-451a-b26b-0b8b936d0aeb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:23.756 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[a5db6867-bdce-433f-a130-30fc50797f80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 systemd-udevd[355572]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:23.763 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[acb01a8d-7035-4e67-86cd-11987dbb5434]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 NetworkManager[44987]: <info>  [1759409003.7643] manager: (tap6ea0a90a-90): new Veth device (/org/freedesktop/NetworkManager/Devices/309)
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 8434 keys, 12766165 bytes, temperature: kUnknown
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409003775216, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 12766165, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12709423, "index_size": 34550, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21125, "raw_key_size": 218774, "raw_average_key_size": 25, "raw_value_size": 12559141, "raw_average_value_size": 1489, "num_data_blocks": 1355, "num_entries": 8434, "num_filter_entries": 8434, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759409003, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:43:23.776415) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 12766165 bytes
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:43:23.779812) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 71.0 rd, 70.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 10.0 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(10.9) write-amplify(5.4) OK, records in: 8965, records dropped: 531 output_compression: NoCompression
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:43:23.779870) EVENT_LOG_v1 {"time_micros": 1759409003779854, "job": 72, "event": "compaction_finished", "compaction_time_micros": 181598, "compaction_time_cpu_micros": 31457, "output_level": 6, "num_output_files": 1, "total_output_size": 12766165, "num_input_records": 8965, "num_output_records": 8434, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409003780495, "job": 72, "event": "table_file_deletion", "file_number": 121}
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409003782402, "job": 72, "event": "table_file_deletion", "file_number": 119}
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:43:23.594455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:43:23.782454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:43:23.782462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:43:23.782464) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:43:23.782466) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:43:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:43:23.782468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:23.794 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[ca2a8e8d-3b1a-4ef7-9067-c600efcc3af6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:23.797 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[02a9e4b4-ca58-493a-ae81-bdc7dd13cde7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 NetworkManager[44987]: <info>  [1759409003.8213] device (tap6ea0a90a-90): carrier: link connected
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:23.826 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[3277def2-df0b-4aac-adaf-04999dfb6d47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:23.845 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[aed21cbe-5d4e-494d-aca5-061cc8801e09]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ea0a90a-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:92:44'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 208], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707144, 'reachable_time': 28007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355625, 'error': None, 'target': 'ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 podman[355587]: 2025-10-02 12:43:23.854170965 +0000 UTC m=+0.096733998 container create df9eabc5e6dccd9056f684fda8a3108e4ed5ce929ba60fe799ac5a1dbb44a86b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swirles, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:23.863 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6d04f887-77a6-4d96-b557-bcf339d1d160]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe67:9244'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707144, 'tstamp': 707144}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355626, 'error': None, 'target': 'ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 podman[355587]: 2025-10-02 12:43:23.792712111 +0000 UTC m=+0.035275164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:23.889 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[18b6b53c-3082-4141-b010-91b25ebfd08e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ea0a90a-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:92:44'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 208], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707144, 'reachable_time': 28007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 355627, 'error': None, 'target': 'ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 systemd[1]: Started libpod-conmon-df9eabc5e6dccd9056f684fda8a3108e4ed5ce929ba60fe799ac5a1dbb44a86b.scope.
Oct 02 12:43:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:23.928 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e49c46b1-78d8-4838-b18d-021be2abbe52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2492: 305 pgs: 305 active+clean; 562 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 4.2 MiB/s wr, 128 op/s
Oct 02 12:43:23 compute-0 podman[355587]: 2025-10-02 12:43:23.964617937 +0000 UTC m=+0.207180980 container init df9eabc5e6dccd9056f684fda8a3108e4ed5ce929ba60fe799ac5a1dbb44a86b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swirles, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 12:43:23 compute-0 podman[355587]: 2025-10-02 12:43:23.971793743 +0000 UTC m=+0.214356766 container start df9eabc5e6dccd9056f684fda8a3108e4ed5ce929ba60fe799ac5a1dbb44a86b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:43:23 compute-0 silly_swirles[355630]: 167 167
Oct 02 12:43:23 compute-0 systemd[1]: libpod-df9eabc5e6dccd9056f684fda8a3108e4ed5ce929ba60fe799ac5a1dbb44a86b.scope: Deactivated successfully.
Oct 02 12:43:23 compute-0 conmon[355630]: conmon df9eabc5e6dccd9056f6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-df9eabc5e6dccd9056f684fda8a3108e4ed5ce929ba60fe799ac5a1dbb44a86b.scope/container/memory.events
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:23.995 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cfa1a86f-a4cc-4dc5-8224-d2f809c86f04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:23.997 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ea0a90a-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:23.997 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:23.997 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ea0a90a-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:23 compute-0 nova_compute[257802]: 2025-10-02 12:43:23.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:24 compute-0 kernel: tap6ea0a90a-90: entered promiscuous mode
Oct 02 12:43:24 compute-0 NetworkManager[44987]: <info>  [1759409004.0000] manager: (tap6ea0a90a-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/310)
Oct 02 12:43:24 compute-0 podman[355587]: 2025-10-02 12:43:24.000535896 +0000 UTC m=+0.243098939 container attach df9eabc5e6dccd9056f684fda8a3108e4ed5ce929ba60fe799ac5a1dbb44a86b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 12:43:24 compute-0 podman[355587]: 2025-10-02 12:43:24.001226813 +0000 UTC m=+0.243789836 container died df9eabc5e6dccd9056f684fda8a3108e4ed5ce929ba60fe799ac5a1dbb44a86b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swirles, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:24.001 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6ea0a90a-90, col_values=(('external_ids', {'iface-id': '3850aa59-d3b6-4277-b937-ad9f4b8f7b4c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:24 compute-0 ovn_controller[148183]: 2025-10-02T12:43:24Z|00682|binding|INFO|Releasing lport 3850aa59-d3b6-4277-b937-ad9f4b8f7b4c from this chassis (sb_readonly=0)
Oct 02 12:43:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:24.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:24 compute-0 nova_compute[257802]: 2025-10-02 12:43:24.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:24.021 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6ea0a90a-9528-4fe1-8b35-dfde9b35e85f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6ea0a90a-9528-4fe1-8b35-dfde9b35e85f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:24.022 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[42922a07-37de-46f6-99a5-e6da13c27279]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:24.023 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/6ea0a90a-9528-4fe1-8b35-dfde9b35e85f.pid.haproxy
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 6ea0a90a-9528-4fe1-8b35-dfde9b35e85f
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:43:24 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:24.024 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f', 'env', 'PROCESS_TAG=haproxy-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6ea0a90a-9528-4fe1-8b35-dfde9b35e85f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:43:24 compute-0 nova_compute[257802]: 2025-10-02 12:43:24.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-115631fc5ee13d543d8a565361a613013beb2817b9040e6fa2dda4265fca5869-merged.mount: Deactivated successfully.
Oct 02 12:43:24 compute-0 podman[355587]: 2025-10-02 12:43:24.318154507 +0000 UTC m=+0.560717530 container remove df9eabc5e6dccd9056f684fda8a3108e4ed5ce929ba60fe799ac5a1dbb44a86b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swirles, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 12:43:24 compute-0 systemd[1]: libpod-conmon-df9eabc5e6dccd9056f684fda8a3108e4ed5ce929ba60fe799ac5a1dbb44a86b.scope: Deactivated successfully.
Oct 02 12:43:24 compute-0 podman[355714]: 2025-10-02 12:43:24.40901728 +0000 UTC m=+0.031459150 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:43:24 compute-0 podman[355714]: 2025-10-02 12:43:24.530775229 +0000 UTC m=+0.153217079 container create 7537435ca9ec16f0ef7204bcd3cd687b495a95662bb2cd4ded048ad0ab67c755 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct 02 12:43:24 compute-0 systemd[1]: Started libpod-conmon-7537435ca9ec16f0ef7204bcd3cd687b495a95662bb2cd4ded048ad0ab67c755.scope.
Oct 02 12:43:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/505bac7f623780e0308a7ea30500517f7344edd0147a5dca9d599fac972a67f8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:43:24 compute-0 podman[355738]: 2025-10-02 12:43:24.671329717 +0000 UTC m=+0.201010208 container create c964652c1f7946fee9f86c0ede79c607aa408a2024b2f6d9ce8f5136c77e7339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 12:43:24 compute-0 podman[355738]: 2025-10-02 12:43:24.601004197 +0000 UTC m=+0.130684728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:43:24 compute-0 ceph-mon[73607]: pgmap v2492: 305 pgs: 305 active+clean; 562 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 4.2 MiB/s wr, 128 op/s
Oct 02 12:43:24 compute-0 systemd[1]: Started libpod-conmon-c964652c1f7946fee9f86c0ede79c607aa408a2024b2f6d9ce8f5136c77e7339.scope.
Oct 02 12:43:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7caec391377a8791917e33094c005de30473629cb5662363cd44ee49d058c16d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7caec391377a8791917e33094c005de30473629cb5662363cd44ee49d058c16d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7caec391377a8791917e33094c005de30473629cb5662363cd44ee49d058c16d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7caec391377a8791917e33094c005de30473629cb5662363cd44ee49d058c16d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:43:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:24.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:24 compute-0 podman[355714]: 2025-10-02 12:43:24.850392318 +0000 UTC m=+0.472834188 container init 7537435ca9ec16f0ef7204bcd3cd687b495a95662bb2cd4ded048ad0ab67c755 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 02 12:43:24 compute-0 podman[355714]: 2025-10-02 12:43:24.856806626 +0000 UTC m=+0.479248476 container start 7537435ca9ec16f0ef7204bcd3cd687b495a95662bb2cd4ded048ad0ab67c755 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:43:24 compute-0 neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f[355754]: [NOTICE]   (355765) : New worker (355767) forked
Oct 02 12:43:24 compute-0 neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f[355754]: [NOTICE]   (355765) : Loading success.
Oct 02 12:43:24 compute-0 podman[355738]: 2025-10-02 12:43:24.894102207 +0000 UTC m=+0.423782738 container init c964652c1f7946fee9f86c0ede79c607aa408a2024b2f6d9ce8f5136c77e7339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wiles, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 12:43:24 compute-0 podman[355738]: 2025-10-02 12:43:24.904985704 +0000 UTC m=+0.434666195 container start c964652c1f7946fee9f86c0ede79c607aa408a2024b2f6d9ce8f5136c77e7339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 12:43:24 compute-0 nova_compute[257802]: 2025-10-02 12:43:24.913 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409004.91325, c70e8f51-9397-40dd-9bbe-210e60b75364 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:43:24 compute-0 nova_compute[257802]: 2025-10-02 12:43:24.914 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] VM Started (Lifecycle Event)
Oct 02 12:43:24 compute-0 nova_compute[257802]: 2025-10-02 12:43:24.922 2 DEBUG nova.compute.manager [req-153a3a62-f5c2-4df4-9952-0ce65904d7aa req-64663d3c-1bbb-492d-9ccd-f552eec8c887 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Received event network-vif-plugged-9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:43:24 compute-0 nova_compute[257802]: 2025-10-02 12:43:24.922 2 DEBUG oslo_concurrency.lockutils [req-153a3a62-f5c2-4df4-9952-0ce65904d7aa req-64663d3c-1bbb-492d-9ccd-f552eec8c887 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "c70e8f51-9397-40dd-9bbe-210e60b75364-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:24 compute-0 nova_compute[257802]: 2025-10-02 12:43:24.923 2 DEBUG oslo_concurrency.lockutils [req-153a3a62-f5c2-4df4-9952-0ce65904d7aa req-64663d3c-1bbb-492d-9ccd-f552eec8c887 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c70e8f51-9397-40dd-9bbe-210e60b75364-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:24 compute-0 nova_compute[257802]: 2025-10-02 12:43:24.923 2 DEBUG oslo_concurrency.lockutils [req-153a3a62-f5c2-4df4-9952-0ce65904d7aa req-64663d3c-1bbb-492d-9ccd-f552eec8c887 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c70e8f51-9397-40dd-9bbe-210e60b75364-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:24 compute-0 nova_compute[257802]: 2025-10-02 12:43:24.923 2 DEBUG nova.compute.manager [req-153a3a62-f5c2-4df4-9952-0ce65904d7aa req-64663d3c-1bbb-492d-9ccd-f552eec8c887 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Processing event network-vif-plugged-9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:43:24 compute-0 nova_compute[257802]: 2025-10-02 12:43:24.924 2 DEBUG nova.compute.manager [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:43:24 compute-0 nova_compute[257802]: 2025-10-02 12:43:24.927 2 DEBUG nova.virt.libvirt.driver [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:43:24 compute-0 nova_compute[257802]: 2025-10-02 12:43:24.930 2 INFO nova.virt.libvirt.driver [-] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Instance spawned successfully.
Oct 02 12:43:24 compute-0 podman[355738]: 2025-10-02 12:43:24.933765389 +0000 UTC m=+0.463445910 container attach c964652c1f7946fee9f86c0ede79c607aa408a2024b2f6d9ce8f5136c77e7339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:43:25 compute-0 nova_compute[257802]: 2025-10-02 12:43:25.003 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:43:25 compute-0 nova_compute[257802]: 2025-10-02 12:43:25.006 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:43:25 compute-0 nova_compute[257802]: 2025-10-02 12:43:25.176 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:43:25 compute-0 nova_compute[257802]: 2025-10-02 12:43:25.177 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409004.9143357, c70e8f51-9397-40dd-9bbe-210e60b75364 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:43:25 compute-0 nova_compute[257802]: 2025-10-02 12:43:25.177 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] VM Paused (Lifecycle Event)
Oct 02 12:43:25 compute-0 nova_compute[257802]: 2025-10-02 12:43:25.442 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:43:25 compute-0 nova_compute[257802]: 2025-10-02 12:43:25.446 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409004.9266372, c70e8f51-9397-40dd-9bbe-210e60b75364 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:43:25 compute-0 nova_compute[257802]: 2025-10-02 12:43:25.447 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] VM Resumed (Lifecycle Event)
Oct 02 12:43:25 compute-0 nova_compute[257802]: 2025-10-02 12:43:25.551 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:43:25 compute-0 nova_compute[257802]: 2025-10-02 12:43:25.555 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:43:25 compute-0 nova_compute[257802]: 2025-10-02 12:43:25.657 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:43:25 compute-0 festive_wiles[355761]: {
Oct 02 12:43:25 compute-0 festive_wiles[355761]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:43:25 compute-0 festive_wiles[355761]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:43:25 compute-0 festive_wiles[355761]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:43:25 compute-0 festive_wiles[355761]:         "osd_id": 1,
Oct 02 12:43:25 compute-0 festive_wiles[355761]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:43:25 compute-0 festive_wiles[355761]:         "type": "bluestore"
Oct 02 12:43:25 compute-0 festive_wiles[355761]:     }
Oct 02 12:43:25 compute-0 festive_wiles[355761]: }
Oct 02 12:43:25 compute-0 systemd[1]: libpod-c964652c1f7946fee9f86c0ede79c607aa408a2024b2f6d9ce8f5136c77e7339.scope: Deactivated successfully.
Oct 02 12:43:25 compute-0 podman[355738]: 2025-10-02 12:43:25.755680768 +0000 UTC m=+1.285361289 container died c964652c1f7946fee9f86c0ede79c607aa408a2024b2f6d9ce8f5136c77e7339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wiles, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Oct 02 12:43:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2493: 305 pgs: 305 active+clean; 562 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 4.1 MiB/s wr, 136 op/s
Oct 02 12:43:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e349 do_prune osdmap full prune enabled
Oct 02 12:43:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:26.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-7caec391377a8791917e33094c005de30473629cb5662363cd44ee49d058c16d-merged.mount: Deactivated successfully.
Oct 02 12:43:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e350 e350: 3 total, 3 up, 3 in
Oct 02 12:43:26 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e350: 3 total, 3 up, 3 in
Oct 02 12:43:26 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Oct 02 12:43:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2177798080' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:43:26 compute-0 nova_compute[257802]: 2025-10-02 12:43:26.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2177798080' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:43:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1609610355' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:26 compute-0 podman[355738]: 2025-10-02 12:43:26.294141381 +0000 UTC m=+1.823821882 container remove c964652c1f7946fee9f86c0ede79c607aa408a2024b2f6d9ce8f5136c77e7339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:43:26 compute-0 systemd[1]: libpod-conmon-c964652c1f7946fee9f86c0ede79c607aa408a2024b2f6d9ce8f5136c77e7339.scope: Deactivated successfully.
Oct 02 12:43:26 compute-0 sudo[355505]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:43:26 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:43:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:43:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:43:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:26.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:43:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:26.962 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:26.962 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:26.963 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:27 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:43:27 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev b92ac1d4-9b51-4f52-bc13-c3e0292c9abe does not exist
Oct 02 12:43:27 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 61ffa96c-0b16-4597-95b8-afed6d087c09 does not exist
Oct 02 12:43:27 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 23b74bcc-3dbf-4d4f-842f-4554abc8142a does not exist
Oct 02 12:43:27 compute-0 nova_compute[257802]: 2025-10-02 12:43:27.093 2 DEBUG nova.compute.manager [req-79cebbc6-89dd-4fbd-ad8b-efb7f5ff25ea req-6b48ac29-c17b-4c1a-a1c9-82f6cbc450a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Received event network-vif-plugged-9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:43:27 compute-0 nova_compute[257802]: 2025-10-02 12:43:27.094 2 DEBUG oslo_concurrency.lockutils [req-79cebbc6-89dd-4fbd-ad8b-efb7f5ff25ea req-6b48ac29-c17b-4c1a-a1c9-82f6cbc450a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "c70e8f51-9397-40dd-9bbe-210e60b75364-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:27 compute-0 nova_compute[257802]: 2025-10-02 12:43:27.094 2 DEBUG oslo_concurrency.lockutils [req-79cebbc6-89dd-4fbd-ad8b-efb7f5ff25ea req-6b48ac29-c17b-4c1a-a1c9-82f6cbc450a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c70e8f51-9397-40dd-9bbe-210e60b75364-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:27 compute-0 nova_compute[257802]: 2025-10-02 12:43:27.095 2 DEBUG oslo_concurrency.lockutils [req-79cebbc6-89dd-4fbd-ad8b-efb7f5ff25ea req-6b48ac29-c17b-4c1a-a1c9-82f6cbc450a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c70e8f51-9397-40dd-9bbe-210e60b75364-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:27 compute-0 nova_compute[257802]: 2025-10-02 12:43:27.095 2 DEBUG nova.compute.manager [req-79cebbc6-89dd-4fbd-ad8b-efb7f5ff25ea req-6b48ac29-c17b-4c1a-a1c9-82f6cbc450a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] No waiting events found dispatching network-vif-plugged-9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:43:27 compute-0 nova_compute[257802]: 2025-10-02 12:43:27.095 2 WARNING nova.compute.manager [req-79cebbc6-89dd-4fbd-ad8b-efb7f5ff25ea req-6b48ac29-c17b-4c1a-a1c9-82f6cbc450a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Received unexpected event network-vif-plugged-9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf for instance with vm_state shelved_offloaded and task_state spawning.
Oct 02 12:43:27 compute-0 sudo[355809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:27 compute-0 sudo[355809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:27 compute-0 sudo[355809]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:27 compute-0 sudo[355834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:43:27 compute-0 sudo[355834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:27 compute-0 sudo[355834]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:27 compute-0 ceph-mon[73607]: pgmap v2493: 305 pgs: 305 active+clean; 562 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 4.1 MiB/s wr, 136 op/s
Oct 02 12:43:27 compute-0 ceph-mon[73607]: osdmap e350: 3 total, 3 up, 3 in
Oct 02 12:43:27 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:43:27 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:43:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2495: 305 pgs: 305 active+clean; 562 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 63 op/s
Oct 02 12:43:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:28.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e350 do_prune osdmap full prune enabled
Oct 02 12:43:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/424163148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e351 e351: 3 total, 3 up, 3 in
Oct 02 12:43:28 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e351: 3 total, 3 up, 3 in
Oct 02 12:43:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:28.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:29 compute-0 nova_compute[257802]: 2025-10-02 12:43:29.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:29 compute-0 ceph-mon[73607]: pgmap v2495: 305 pgs: 305 active+clean; 562 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 63 op/s
Oct 02 12:43:29 compute-0 ceph-mon[73607]: osdmap e351: 3 total, 3 up, 3 in
Oct 02 12:43:29 compute-0 nova_compute[257802]: 2025-10-02 12:43:29.865 2 DEBUG nova.compute.manager [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:43:29 compute-0 nova_compute[257802]: 2025-10-02 12:43:29.907 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:43:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2497: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 521 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 290 KiB/s wr, 198 op/s
Oct 02 12:43:30 compute-0 nova_compute[257802]: 2025-10-02 12:43:30.017 2 DEBUG oslo_concurrency.lockutils [None req-444d4384-fe7c-422b-8f9a-6e1851310bf0 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lock "c70e8f51-9397-40dd-9bbe-210e60b75364" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 21.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:30.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:30.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:31 compute-0 ceph-mon[73607]: pgmap v2497: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 521 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 290 KiB/s wr, 198 op/s
Oct 02 12:43:31 compute-0 nova_compute[257802]: 2025-10-02 12:43:31.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2498: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 509 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.0 MiB/s wr, 202 op/s
Oct 02 12:43:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:32.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:32 compute-0 sudo[355861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:32 compute-0 sudo[355861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:32 compute-0 sudo[355861]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:32 compute-0 sudo[355886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:32 compute-0 sudo[355886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:32 compute-0 sudo[355886]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:43:32 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3535884199' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:43:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:43:32 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3535884199' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:43:32 compute-0 ceph-mon[73607]: pgmap v2498: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 509 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.0 MiB/s wr, 202 op/s
Oct 02 12:43:32 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3535884199' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:43:32 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3535884199' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:43:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:32.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e351 do_prune osdmap full prune enabled
Oct 02 12:43:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e352 e352: 3 total, 3 up, 3 in
Oct 02 12:43:33 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e352: 3 total, 3 up, 3 in
Oct 02 12:43:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2500: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 520 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.3 MiB/s wr, 195 op/s
Oct 02 12:43:34 compute-0 podman[355912]: 2025-10-02 12:43:34.011024274 +0000 UTC m=+0.149305324 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:43:34 compute-0 podman[355914]: 2025-10-02 12:43:34.013371642 +0000 UTC m=+0.146054745 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 12:43:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:34.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:34 compute-0 podman[355913]: 2025-10-02 12:43:34.041622653 +0000 UTC m=+0.179910633 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd)
Oct 02 12:43:34 compute-0 nova_compute[257802]: 2025-10-02 12:43:34.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:34 compute-0 ceph-mon[73607]: osdmap e352: 3 total, 3 up, 3 in
Oct 02 12:43:34 compute-0 ceph-mon[73607]: pgmap v2500: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 520 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.3 MiB/s wr, 195 op/s
Oct 02 12:43:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2805326133' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:34.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2501: 305 pgs: 305 active+clean; 504 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.7 MiB/s wr, 239 op/s
Oct 02 12:43:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3377471684' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:36.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:36 compute-0 nova_compute[257802]: 2025-10-02 12:43:36.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:36.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e352 do_prune osdmap full prune enabled
Oct 02 12:43:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e353 e353: 3 total, 3 up, 3 in
Oct 02 12:43:37 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e353: 3 total, 3 up, 3 in
Oct 02 12:43:37 compute-0 nova_compute[257802]: 2025-10-02 12:43:37.617 2 DEBUG nova.compute.manager [req-668f7ead-cbac-46a3-85b5-880336317968 req-7d53abd5-78b3-43f7-bff1-4cf57912ec82 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Received event network-changed-763709ed-3fe4-45a4-8a2f-4b21f4534590 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:43:37 compute-0 nova_compute[257802]: 2025-10-02 12:43:37.617 2 DEBUG nova.compute.manager [req-668f7ead-cbac-46a3-85b5-880336317968 req-7d53abd5-78b3-43f7-bff1-4cf57912ec82 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Refreshing instance network info cache due to event network-changed-763709ed-3fe4-45a4-8a2f-4b21f4534590. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:43:37 compute-0 nova_compute[257802]: 2025-10-02 12:43:37.618 2 DEBUG oslo_concurrency.lockutils [req-668f7ead-cbac-46a3-85b5-880336317968 req-7d53abd5-78b3-43f7-bff1-4cf57912ec82 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-714ae75f-1424-4b97-b849-84e5b4e77668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:43:37 compute-0 nova_compute[257802]: 2025-10-02 12:43:37.618 2 DEBUG oslo_concurrency.lockutils [req-668f7ead-cbac-46a3-85b5-880336317968 req-7d53abd5-78b3-43f7-bff1-4cf57912ec82 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-714ae75f-1424-4b97-b849-84e5b4e77668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:43:37 compute-0 nova_compute[257802]: 2025-10-02 12:43:37.619 2 DEBUG nova.network.neutron [req-668f7ead-cbac-46a3-85b5-880336317968 req-7d53abd5-78b3-43f7-bff1-4cf57912ec82 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Refreshing network info cache for port 763709ed-3fe4-45a4-8a2f-4b21f4534590 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:43:37 compute-0 ceph-mon[73607]: pgmap v2501: 305 pgs: 305 active+clean; 504 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.7 MiB/s wr, 239 op/s
Oct 02 12:43:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2503: 305 pgs: 305 active+clean; 504 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 2.7 MiB/s wr, 84 op/s
Oct 02 12:43:37 compute-0 podman[355971]: 2025-10-02 12:43:37.951496503 +0000 UTC m=+0.080758586 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible)
Oct 02 12:43:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:38.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e353 do_prune osdmap full prune enabled
Oct 02 12:43:38 compute-0 ceph-mon[73607]: osdmap e353: 3 total, 3 up, 3 in
Oct 02 12:43:38 compute-0 ceph-mon[73607]: pgmap v2503: 305 pgs: 305 active+clean; 504 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 2.7 MiB/s wr, 84 op/s
Oct 02 12:43:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e354 e354: 3 total, 3 up, 3 in
Oct 02 12:43:38 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e354: 3 total, 3 up, 3 in
Oct 02 12:43:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:43:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:38.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:43:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:38.889 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:43:38 compute-0 nova_compute[257802]: 2025-10-02 12:43:38.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:38.891 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:43:39 compute-0 nova_compute[257802]: 2025-10-02 12:43:39.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:39 compute-0 ovn_controller[148183]: 2025-10-02T12:43:39Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e0:bd:5e 10.100.0.7
Oct 02 12:43:39 compute-0 ceph-mon[73607]: osdmap e354: 3 total, 3 up, 3 in
Oct 02 12:43:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2505: 305 pgs: 305 active+clean; 517 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 727 KiB/s rd, 2.6 MiB/s wr, 138 op/s
Oct 02 12:43:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:43:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:40.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:43:40 compute-0 nova_compute[257802]: 2025-10-02 12:43:40.545 2 DEBUG nova.network.neutron [req-668f7ead-cbac-46a3-85b5-880336317968 req-7d53abd5-78b3-43f7-bff1-4cf57912ec82 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Updated VIF entry in instance network info cache for port 763709ed-3fe4-45a4-8a2f-4b21f4534590. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:43:40 compute-0 nova_compute[257802]: 2025-10-02 12:43:40.545 2 DEBUG nova.network.neutron [req-668f7ead-cbac-46a3-85b5-880336317968 req-7d53abd5-78b3-43f7-bff1-4cf57912ec82 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Updating instance_info_cache with network_info: [{"id": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "address": "fa:16:3e:1a:b3:16", "network": {"id": "e7b8a8de-b6cd-4283-854b-a2bd919c371d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1851369337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a41d99312f014c65adddea4f70536a15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap763709ed-3f", "ovs_interfaceid": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:43:40 compute-0 nova_compute[257802]: 2025-10-02 12:43:40.602 2 DEBUG oslo_concurrency.lockutils [req-668f7ead-cbac-46a3-85b5-880336317968 req-7d53abd5-78b3-43f7-bff1-4cf57912ec82 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-714ae75f-1424-4b97-b849-84e5b4e77668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:43:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:40.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:41 compute-0 ceph-mon[73607]: pgmap v2505: 305 pgs: 305 active+clean; 517 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 727 KiB/s rd, 2.6 MiB/s wr, 138 op/s
Oct 02 12:43:41 compute-0 nova_compute[257802]: 2025-10-02 12:43:41.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:41.893 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2506: 305 pgs: 305 active+clean; 525 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.0 MiB/s wr, 174 op/s
Oct 02 12:43:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:42.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:43:42
Oct 02 12:43:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:43:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:43:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'vms', 'backups']
Oct 02 12:43:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:43:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:43:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:43:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:43:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:43:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:43:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:43:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:42.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:43:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:43:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:43:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:43:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:43:43 compute-0 ceph-mon[73607]: pgmap v2506: 305 pgs: 305 active+clean; 525 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.0 MiB/s wr, 174 op/s
Oct 02 12:43:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:43 compute-0 nova_compute[257802]: 2025-10-02 12:43:43.716 2 DEBUG oslo_concurrency.lockutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquiring lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:43 compute-0 nova_compute[257802]: 2025-10-02 12:43:43.717 2 DEBUG oslo_concurrency.lockutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:43 compute-0 nova_compute[257802]: 2025-10-02 12:43:43.889 2 DEBUG nova.compute.manager [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:43:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2507: 305 pgs: 305 active+clean; 525 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.6 MiB/s wr, 186 op/s
Oct 02 12:43:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:43:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:44.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:43:44 compute-0 nova_compute[257802]: 2025-10-02 12:43:44.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:43:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:43:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:43:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:43:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:43:44 compute-0 ceph-mon[73607]: pgmap v2507: 305 pgs: 305 active+clean; 525 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.6 MiB/s wr, 186 op/s
Oct 02 12:43:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:43:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:44.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:43:44 compute-0 nova_compute[257802]: 2025-10-02 12:43:44.903 2 DEBUG oslo_concurrency.lockutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:44 compute-0 nova_compute[257802]: 2025-10-02 12:43:44.903 2 DEBUG oslo_concurrency.lockutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:44 compute-0 nova_compute[257802]: 2025-10-02 12:43:44.910 2 DEBUG nova.virt.hardware [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:43:44 compute-0 nova_compute[257802]: 2025-10-02 12:43:44.910 2 INFO nova.compute.claims [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:43:45 compute-0 nova_compute[257802]: 2025-10-02 12:43:45.236 2 DEBUG oslo_concurrency.processutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:43:45 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2219295916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:45 compute-0 nova_compute[257802]: 2025-10-02 12:43:45.659 2 DEBUG oslo_concurrency.processutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:45 compute-0 nova_compute[257802]: 2025-10-02 12:43:45.664 2 DEBUG nova.compute.provider_tree [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:43:45 compute-0 nova_compute[257802]: 2025-10-02 12:43:45.786 2 DEBUG nova.scheduler.client.report [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:43:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2219295916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:45 compute-0 nova_compute[257802]: 2025-10-02 12:43:45.945 2 DEBUG oslo_concurrency.lockutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.042s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2508: 305 pgs: 305 active+clean; 511 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.4 MiB/s wr, 199 op/s
Oct 02 12:43:45 compute-0 nova_compute[257802]: 2025-10-02 12:43:45.947 2 DEBUG nova.compute.manager [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:43:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:46.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:46 compute-0 nova_compute[257802]: 2025-10-02 12:43:46.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:46 compute-0 nova_compute[257802]: 2025-10-02 12:43:46.405 2 DEBUG nova.compute.manager [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:43:46 compute-0 nova_compute[257802]: 2025-10-02 12:43:46.406 2 DEBUG nova.network.neutron [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:43:46 compute-0 nova_compute[257802]: 2025-10-02 12:43:46.482 2 INFO nova.virt.libvirt.driver [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:43:46 compute-0 nova_compute[257802]: 2025-10-02 12:43:46.588 2 DEBUG nova.policy [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '734ae44830d540d8ab51c2a3d75ecd80', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2e53064cd4d645f09bd59bbca09b98e0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:43:46 compute-0 nova_compute[257802]: 2025-10-02 12:43:46.849 2 DEBUG nova.compute.manager [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:43:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:43:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:46.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:43:46 compute-0 ceph-mon[73607]: pgmap v2508: 305 pgs: 305 active+clean; 511 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.4 MiB/s wr, 199 op/s
Oct 02 12:43:47 compute-0 nova_compute[257802]: 2025-10-02 12:43:47.365 2 DEBUG nova.compute.manager [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:43:47 compute-0 nova_compute[257802]: 2025-10-02 12:43:47.367 2 DEBUG nova.virt.libvirt.driver [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:43:47 compute-0 nova_compute[257802]: 2025-10-02 12:43:47.368 2 INFO nova.virt.libvirt.driver [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Creating image(s)
Oct 02 12:43:47 compute-0 nova_compute[257802]: 2025-10-02 12:43:47.401 2 DEBUG nova.storage.rbd_utils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] rbd image 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:47 compute-0 nova_compute[257802]: 2025-10-02 12:43:47.434 2 DEBUG nova.storage.rbd_utils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] rbd image 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:47 compute-0 nova_compute[257802]: 2025-10-02 12:43:47.463 2 DEBUG nova.storage.rbd_utils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] rbd image 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:47 compute-0 nova_compute[257802]: 2025-10-02 12:43:47.467 2 DEBUG oslo_concurrency.lockutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquiring lock "4baed542c9df4566caac038224dee0ff4dfdf888" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:47 compute-0 nova_compute[257802]: 2025-10-02 12:43:47.468 2 DEBUG oslo_concurrency.lockutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "4baed542c9df4566caac038224dee0ff4dfdf888" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:47 compute-0 nova_compute[257802]: 2025-10-02 12:43:47.750 2 DEBUG nova.virt.libvirt.imagebackend [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Image locations are: [{'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/6225d2a0-8cbb-42ed-9a0a-13744b0f7ae4/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/6225d2a0-8cbb-42ed-9a0a-13744b0f7ae4/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 02 12:43:47 compute-0 nova_compute[257802]: 2025-10-02 12:43:47.777 2 DEBUG nova.network.neutron [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Successfully created port: 242e9f5d-5808-46e5-877c-ab6c97cacc64 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:43:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2509: 305 pgs: 305 active+clean; 511 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.1 MiB/s wr, 170 op/s
Oct 02 12:43:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:43:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:48.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:43:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:43:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:48.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.014 2 DEBUG oslo_concurrency.processutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4baed542c9df4566caac038224dee0ff4dfdf888.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.102 2 DEBUG oslo_concurrency.processutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4baed542c9df4566caac038224dee0ff4dfdf888.part --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.104 2 DEBUG nova.virt.images [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] 6225d2a0-8cbb-42ed-9a0a-13744b0f7ae4 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.110 2 DEBUG nova.privsep.utils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.110 2 DEBUG oslo_concurrency.processutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/4baed542c9df4566caac038224dee0ff4dfdf888.part /var/lib/nova/instances/_base/4baed542c9df4566caac038224dee0ff4dfdf888.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:49 compute-0 ceph-mon[73607]: pgmap v2509: 305 pgs: 305 active+clean; 511 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.1 MiB/s wr, 170 op/s
Oct 02 12:43:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/62800550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.139 2 DEBUG nova.network.neutron [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Successfully updated port: 242e9f5d-5808-46e5-877c-ab6c97cacc64 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.264 2 DEBUG nova.compute.manager [req-abad4dd3-e2fb-4e28-9280-3b997f6d4f2c req-a747b6af-d4b2-40ff-a613-3e6375af0b4c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Received event network-changed-242e9f5d-5808-46e5-877c-ab6c97cacc64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.264 2 DEBUG nova.compute.manager [req-abad4dd3-e2fb-4e28-9280-3b997f6d4f2c req-a747b6af-d4b2-40ff-a613-3e6375af0b4c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Refreshing instance network info cache due to event network-changed-242e9f5d-5808-46e5-877c-ab6c97cacc64. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.265 2 DEBUG oslo_concurrency.lockutils [req-abad4dd3-e2fb-4e28-9280-3b997f6d4f2c req-a747b6af-d4b2-40ff-a613-3e6375af0b4c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.265 2 DEBUG oslo_concurrency.lockutils [req-abad4dd3-e2fb-4e28-9280-3b997f6d4f2c req-a747b6af-d4b2-40ff-a613-3e6375af0b4c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.266 2 DEBUG nova.network.neutron [req-abad4dd3-e2fb-4e28-9280-3b997f6d4f2c req-a747b6af-d4b2-40ff-a613-3e6375af0b4c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Refreshing network info cache for port 242e9f5d-5808-46e5-877c-ab6c97cacc64 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.275 2 DEBUG oslo_concurrency.lockutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquiring lock "refresh_cache-4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.537 2 DEBUG nova.network.neutron [req-abad4dd3-e2fb-4e28-9280-3b997f6d4f2c req-a747b6af-d4b2-40ff-a613-3e6375af0b4c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.690 2 DEBUG oslo_concurrency.processutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/4baed542c9df4566caac038224dee0ff4dfdf888.part /var/lib/nova/instances/_base/4baed542c9df4566caac038224dee0ff4dfdf888.converted" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.695 2 DEBUG oslo_concurrency.processutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4baed542c9df4566caac038224dee0ff4dfdf888.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.729 2 DEBUG oslo_concurrency.lockutils [None req-e6ba4950-6417-430b-a2d8-68a6d14ddd9b 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Acquiring lock "c70e8f51-9397-40dd-9bbe-210e60b75364" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.730 2 DEBUG oslo_concurrency.lockutils [None req-e6ba4950-6417-430b-a2d8-68a6d14ddd9b 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lock "c70e8f51-9397-40dd-9bbe-210e60b75364" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.730 2 DEBUG oslo_concurrency.lockutils [None req-e6ba4950-6417-430b-a2d8-68a6d14ddd9b 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Acquiring lock "c70e8f51-9397-40dd-9bbe-210e60b75364-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.730 2 DEBUG oslo_concurrency.lockutils [None req-e6ba4950-6417-430b-a2d8-68a6d14ddd9b 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lock "c70e8f51-9397-40dd-9bbe-210e60b75364-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.731 2 DEBUG oslo_concurrency.lockutils [None req-e6ba4950-6417-430b-a2d8-68a6d14ddd9b 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lock "c70e8f51-9397-40dd-9bbe-210e60b75364-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.732 2 INFO nova.compute.manager [None req-e6ba4950-6417-430b-a2d8-68a6d14ddd9b 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Terminating instance
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.734 2 DEBUG nova.compute.manager [None req-e6ba4950-6417-430b-a2d8-68a6d14ddd9b 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.770 2 DEBUG oslo_concurrency.processutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4baed542c9df4566caac038224dee0ff4dfdf888.converted --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.771 2 DEBUG oslo_concurrency.lockutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "4baed542c9df4566caac038224dee0ff4dfdf888" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.303s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.797 2 DEBUG nova.storage.rbd_utils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] rbd image 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.803 2 DEBUG oslo_concurrency.processutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/4baed542c9df4566caac038224dee0ff4dfdf888 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:49 compute-0 kernel: tap9fd83de3-58 (unregistering): left promiscuous mode
Oct 02 12:43:49 compute-0 NetworkManager[44987]: <info>  [1759409029.8465] device (tap9fd83de3-58): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.889 2 DEBUG nova.compute.manager [req-ee44bd4e-6abe-4d89-99a4-a33d89de9e20 req-cf4fcb8c-a409-47b9-94b6-0518c41de595 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Received event network-changed-9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.890 2 DEBUG nova.compute.manager [req-ee44bd4e-6abe-4d89-99a4-a33d89de9e20 req-cf4fcb8c-a409-47b9-94b6-0518c41de595 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Refreshing instance network info cache due to event network-changed-9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.891 2 DEBUG oslo_concurrency.lockutils [req-ee44bd4e-6abe-4d89-99a4-a33d89de9e20 req-cf4fcb8c-a409-47b9-94b6-0518c41de595 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-c70e8f51-9397-40dd-9bbe-210e60b75364" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.891 2 DEBUG oslo_concurrency.lockutils [req-ee44bd4e-6abe-4d89-99a4-a33d89de9e20 req-cf4fcb8c-a409-47b9-94b6-0518c41de595 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-c70e8f51-9397-40dd-9bbe-210e60b75364" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.891 2 DEBUG nova.network.neutron [req-ee44bd4e-6abe-4d89-99a4-a33d89de9e20 req-cf4fcb8c-a409-47b9-94b6-0518c41de595 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Refreshing network info cache for port 9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:49 compute-0 ovn_controller[148183]: 2025-10-02T12:43:49Z|00683|binding|INFO|Releasing lport 9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf from this chassis (sb_readonly=0)
Oct 02 12:43:49 compute-0 ovn_controller[148183]: 2025-10-02T12:43:49Z|00684|binding|INFO|Setting lport 9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf down in Southbound
Oct 02 12:43:49 compute-0 ovn_controller[148183]: 2025-10-02T12:43:49Z|00685|binding|INFO|Removing iface tap9fd83de3-58 ovn-installed in OVS
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:49 compute-0 nova_compute[257802]: 2025-10-02 12:43:49.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2510: 305 pgs: 305 active+clean; 480 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.9 MiB/s wr, 153 op/s
Oct 02 12:43:49 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d0000009d.scope: Deactivated successfully.
Oct 02 12:43:49 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d0000009d.scope: Consumed 14.900s CPU time.
Oct 02 12:43:49 compute-0 systemd-machined[211836]: Machine qemu-77-instance-0000009d terminated.
Oct 02 12:43:50 compute-0 nova_compute[257802]: 2025-10-02 12:43:50.021 2 DEBUG nova.network.neutron [req-abad4dd3-e2fb-4e28-9280-3b997f6d4f2c req-a747b6af-d4b2-40ff-a613-3e6375af0b4c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:43:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:50.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:50 compute-0 sshd-session[356136]: Invalid user shoaib from 167.99.55.34 port 49224
Oct 02 12:43:50 compute-0 sshd-session[356136]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 12:43:50 compute-0 sshd-session[356136]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=167.99.55.34
Oct 02 12:43:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:50.149 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:bd:5e 10.100.0.7'], port_security=['fa:16:3e:e0:bd:5e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c70e8f51-9397-40dd-9bbe-210e60b75364', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b8f9114c7ab4b6e9fc9650d4bd08af9', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'aaa1af3a-3e07-4a02-982d-cee91699f079', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=04a89c39-8141-4654-8368-c858180215b3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:43:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:50.151 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf in datapath 6ea0a90a-9528-4fe1-8b35-dfde9b35e85f unbound from our chassis
Oct 02 12:43:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:50.153 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6ea0a90a-9528-4fe1-8b35-dfde9b35e85f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:43:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:50.155 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3ae8e0b9-5d97-422b-a58b-01904d17201a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:50.155 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f namespace which is not needed anymore
Oct 02 12:43:50 compute-0 nova_compute[257802]: 2025-10-02 12:43:50.173 2 INFO nova.virt.libvirt.driver [-] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Instance destroyed successfully.
Oct 02 12:43:50 compute-0 nova_compute[257802]: 2025-10-02 12:43:50.173 2 DEBUG nova.objects.instance [None req-e6ba4950-6417-430b-a2d8-68a6d14ddd9b 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lazy-loading 'resources' on Instance uuid c70e8f51-9397-40dd-9bbe-210e60b75364 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:43:50 compute-0 nova_compute[257802]: 2025-10-02 12:43:50.183 2 DEBUG oslo_concurrency.lockutils [req-abad4dd3-e2fb-4e28-9280-3b997f6d4f2c req-a747b6af-d4b2-40ff-a613-3e6375af0b4c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:43:50 compute-0 nova_compute[257802]: 2025-10-02 12:43:50.184 2 DEBUG oslo_concurrency.lockutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquired lock "refresh_cache-4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:43:50 compute-0 nova_compute[257802]: 2025-10-02 12:43:50.184 2 DEBUG nova.network.neutron [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:43:50 compute-0 nova_compute[257802]: 2025-10-02 12:43:50.228 2 DEBUG nova.virt.libvirt.vif [None req-e6ba4950-6417-430b-a2d8-68a6d14ddd9b 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:42:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-561749725',display_name='tempest-TestShelveInstance-server-561749725',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-561749725',id=157,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEhf9w8pkEKWa7V1ngOe9fjFIi8JcNaUtJyznubChlj27hHukuq0Ytpxs3mHaFViqIafdIVxRwuOXby9NJMGuDWmrvU49YApKESuv4kV9WfKPY1JgB2zj33RiXhpo9OCqg==',key_name='tempest-TestShelveInstance-1513273409',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:43:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4b8f9114c7ab4b6e9fc9650d4bd08af9',ramdisk_id='',reservation_id='r-iq5s6cd5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1219039163',owner_user_name='tempest-TestShelveInstance-1219039163-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:43:29Z,user_data=None,user_id='56c6abe1bb704c8aa499677aeb9017f5',uuid=c70e8f51-9397-40dd-9bbe-210e60b75364,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf", "address": "fa:16:3e:e0:bd:5e", "network": {"id": "6ea0a90a-9528-4fe1-8b35-dfde9b35e85f", "bridge": "br-int", "label": "tempest-TestShelveInstance-563697374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8f9114c7ab4b6e9fc9650d4bd08af9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd83de3-58", "ovs_interfaceid": "9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:43:50 compute-0 nova_compute[257802]: 2025-10-02 12:43:50.229 2 DEBUG nova.network.os_vif_util [None req-e6ba4950-6417-430b-a2d8-68a6d14ddd9b 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Converting VIF {"id": "9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf", "address": "fa:16:3e:e0:bd:5e", "network": {"id": "6ea0a90a-9528-4fe1-8b35-dfde9b35e85f", "bridge": "br-int", "label": "tempest-TestShelveInstance-563697374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8f9114c7ab4b6e9fc9650d4bd08af9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd83de3-58", "ovs_interfaceid": "9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:43:50 compute-0 nova_compute[257802]: 2025-10-02 12:43:50.230 2 DEBUG nova.network.os_vif_util [None req-e6ba4950-6417-430b-a2d8-68a6d14ddd9b 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:bd:5e,bridge_name='br-int',has_traffic_filtering=True,id=9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf,network=Network(6ea0a90a-9528-4fe1-8b35-dfde9b35e85f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fd83de3-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:43:50 compute-0 nova_compute[257802]: 2025-10-02 12:43:50.230 2 DEBUG os_vif [None req-e6ba4950-6417-430b-a2d8-68a6d14ddd9b 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:bd:5e,bridge_name='br-int',has_traffic_filtering=True,id=9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf,network=Network(6ea0a90a-9528-4fe1-8b35-dfde9b35e85f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fd83de3-58') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:43:50 compute-0 nova_compute[257802]: 2025-10-02 12:43:50.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:50 compute-0 nova_compute[257802]: 2025-10-02 12:43:50.232 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9fd83de3-58, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:50 compute-0 nova_compute[257802]: 2025-10-02 12:43:50.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:43:50 compute-0 nova_compute[257802]: 2025-10-02 12:43:50.239 2 INFO os_vif [None req-e6ba4950-6417-430b-a2d8-68a6d14ddd9b 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:bd:5e,bridge_name='br-int',has_traffic_filtering=True,id=9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf,network=Network(6ea0a90a-9528-4fe1-8b35-dfde9b35e85f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fd83de3-58')
Oct 02 12:43:50 compute-0 neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f[355754]: [NOTICE]   (355765) : haproxy version is 2.8.14-c23fe91
Oct 02 12:43:50 compute-0 neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f[355754]: [NOTICE]   (355765) : path to executable is /usr/sbin/haproxy
Oct 02 12:43:50 compute-0 neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f[355754]: [WARNING]  (355765) : Exiting Master process...
Oct 02 12:43:50 compute-0 neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f[355754]: [WARNING]  (355765) : Exiting Master process...
Oct 02 12:43:50 compute-0 neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f[355754]: [ALERT]    (355765) : Current worker (355767) exited with code 143 (Terminated)
Oct 02 12:43:50 compute-0 neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f[355754]: [WARNING]  (355765) : All workers exited. Exiting... (0)
Oct 02 12:43:50 compute-0 systemd[1]: libpod-7537435ca9ec16f0ef7204bcd3cd687b495a95662bb2cd4ded048ad0ab67c755.scope: Deactivated successfully.
Oct 02 12:43:50 compute-0 podman[356167]: 2025-10-02 12:43:50.308537485 +0000 UTC m=+0.071354004 container died 7537435ca9ec16f0ef7204bcd3cd687b495a95662bb2cd4ded048ad0ab67c755 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:43:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7537435ca9ec16f0ef7204bcd3cd687b495a95662bb2cd4ded048ad0ab67c755-userdata-shm.mount: Deactivated successfully.
Oct 02 12:43:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-505bac7f623780e0308a7ea30500517f7344edd0147a5dca9d599fac972a67f8-merged.mount: Deactivated successfully.
Oct 02 12:43:50 compute-0 nova_compute[257802]: 2025-10-02 12:43:50.533 2 DEBUG nova.network.neutron [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:43:50 compute-0 nova_compute[257802]: 2025-10-02 12:43:50.638 2 DEBUG oslo_concurrency.processutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/4baed542c9df4566caac038224dee0ff4dfdf888 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.835s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:50 compute-0 nova_compute[257802]: 2025-10-02 12:43:50.726 2 DEBUG nova.storage.rbd_utils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] resizing rbd image 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:43:50 compute-0 podman[356167]: 2025-10-02 12:43:50.745219742 +0000 UTC m=+0.508036291 container cleanup 7537435ca9ec16f0ef7204bcd3cd687b495a95662bb2cd4ded048ad0ab67c755 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:43:50 compute-0 systemd[1]: libpod-conmon-7537435ca9ec16f0ef7204bcd3cd687b495a95662bb2cd4ded048ad0ab67c755.scope: Deactivated successfully.
Oct 02 12:43:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:50.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:51 compute-0 podman[356266]: 2025-10-02 12:43:51.076692204 +0000 UTC m=+0.306984942 container remove 7537435ca9ec16f0ef7204bcd3cd687b495a95662bb2cd4ded048ad0ab67c755 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:43:51 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:51.085 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[00249066-b3f2-4ef8-a781-a576d76a6f02]: (4, ('Thu Oct  2 12:43:50 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f (7537435ca9ec16f0ef7204bcd3cd687b495a95662bb2cd4ded048ad0ab67c755)\n7537435ca9ec16f0ef7204bcd3cd687b495a95662bb2cd4ded048ad0ab67c755\nThu Oct  2 12:43:50 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f (7537435ca9ec16f0ef7204bcd3cd687b495a95662bb2cd4ded048ad0ab67c755)\n7537435ca9ec16f0ef7204bcd3cd687b495a95662bb2cd4ded048ad0ab67c755\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:51 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:51.088 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[61f3b3bb-e703-47a4-ab8f-bb562e3bd782]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:51 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:51.089 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ea0a90a-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:51 compute-0 nova_compute[257802]: 2025-10-02 12:43:51.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:51 compute-0 kernel: tap6ea0a90a-90: left promiscuous mode
Oct 02 12:43:51 compute-0 nova_compute[257802]: 2025-10-02 12:43:51.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:51 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:51.111 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2e638a6c-5acd-4b2e-bde4-ddea81d57abe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:51 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:51.136 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[eb38cc95-bd85-4986-b75d-23e1982e5212]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:51 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:51.139 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[29e75ab7-b5d4-4c83-8b28-bc5cc05485e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:51 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:51.159 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[34656033-23eb-46c1-91ff-68d1886b9ef0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707137, 'reachable_time': 23189, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356283, 'error': None, 'target': 'ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:51 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:51.161 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:43:51 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:51.162 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[f88cbdb5-077d-4cc3-a5b3-b1cb50da4f6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:51 compute-0 systemd[1]: run-netns-ovnmeta\x2d6ea0a90a\x2d9528\x2d4fe1\x2d8b35\x2ddfde9b35e85f.mount: Deactivated successfully.
Oct 02 12:43:51 compute-0 ceph-mon[73607]: pgmap v2510: 305 pgs: 305 active+clean; 480 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.9 MiB/s wr, 153 op/s
Oct 02 12:43:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1716887063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:51 compute-0 nova_compute[257802]: 2025-10-02 12:43:51.386 2 DEBUG nova.objects.instance [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lazy-loading 'migration_context' on Instance uuid 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:43:51 compute-0 nova_compute[257802]: 2025-10-02 12:43:51.528 2 DEBUG nova.network.neutron [req-ee44bd4e-6abe-4d89-99a4-a33d89de9e20 req-cf4fcb8c-a409-47b9-94b6-0518c41de595 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Updated VIF entry in instance network info cache for port 9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:43:51 compute-0 nova_compute[257802]: 2025-10-02 12:43:51.528 2 DEBUG nova.network.neutron [req-ee44bd4e-6abe-4d89-99a4-a33d89de9e20 req-cf4fcb8c-a409-47b9-94b6-0518c41de595 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Updating instance_info_cache with network_info: [{"id": "9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf", "address": "fa:16:3e:e0:bd:5e", "network": {"id": "6ea0a90a-9528-4fe1-8b35-dfde9b35e85f", "bridge": "br-int", "label": "tempest-TestShelveInstance-563697374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8f9114c7ab4b6e9fc9650d4bd08af9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd83de3-58", "ovs_interfaceid": "9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:43:51 compute-0 nova_compute[257802]: 2025-10-02 12:43:51.610 2 DEBUG nova.virt.libvirt.driver [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:43:51 compute-0 nova_compute[257802]: 2025-10-02 12:43:51.610 2 DEBUG nova.virt.libvirt.driver [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Ensure instance console log exists: /var/lib/nova/instances/4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:43:51 compute-0 nova_compute[257802]: 2025-10-02 12:43:51.611 2 DEBUG oslo_concurrency.lockutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:51 compute-0 nova_compute[257802]: 2025-10-02 12:43:51.612 2 DEBUG oslo_concurrency.lockutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:51 compute-0 nova_compute[257802]: 2025-10-02 12:43:51.612 2 DEBUG oslo_concurrency.lockutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:51 compute-0 nova_compute[257802]: 2025-10-02 12:43:51.800 2 DEBUG oslo_concurrency.lockutils [req-ee44bd4e-6abe-4d89-99a4-a33d89de9e20 req-cf4fcb8c-a409-47b9-94b6-0518c41de595 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-c70e8f51-9397-40dd-9bbe-210e60b75364" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:43:51 compute-0 nova_compute[257802]: 2025-10-02 12:43:51.871 2 DEBUG nova.network.neutron [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Updating instance_info_cache with network_info: [{"id": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "address": "fa:16:3e:25:64:28", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap242e9f5d-58", "ovs_interfaceid": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:43:51 compute-0 sshd-session[356136]: Failed password for invalid user shoaib from 167.99.55.34 port 49224 ssh2
Oct 02 12:43:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2511: 305 pgs: 305 active+clean; 502 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.2 MiB/s wr, 149 op/s
Oct 02 12:43:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:52.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.060 2 DEBUG nova.compute.manager [req-1b90779c-47c5-4de1-9555-d882453754e4 req-5f2c4521-b6e6-4812-9e26-2dfcc8de4580 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Received event network-vif-unplugged-9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.061 2 DEBUG oslo_concurrency.lockutils [req-1b90779c-47c5-4de1-9555-d882453754e4 req-5f2c4521-b6e6-4812-9e26-2dfcc8de4580 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "c70e8f51-9397-40dd-9bbe-210e60b75364-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.061 2 DEBUG oslo_concurrency.lockutils [req-1b90779c-47c5-4de1-9555-d882453754e4 req-5f2c4521-b6e6-4812-9e26-2dfcc8de4580 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c70e8f51-9397-40dd-9bbe-210e60b75364-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.062 2 DEBUG oslo_concurrency.lockutils [req-1b90779c-47c5-4de1-9555-d882453754e4 req-5f2c4521-b6e6-4812-9e26-2dfcc8de4580 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c70e8f51-9397-40dd-9bbe-210e60b75364-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.062 2 DEBUG nova.compute.manager [req-1b90779c-47c5-4de1-9555-d882453754e4 req-5f2c4521-b6e6-4812-9e26-2dfcc8de4580 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] No waiting events found dispatching network-vif-unplugged-9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.062 2 DEBUG nova.compute.manager [req-1b90779c-47c5-4de1-9555-d882453754e4 req-5f2c4521-b6e6-4812-9e26-2dfcc8de4580 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Received event network-vif-unplugged-9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.062 2 DEBUG nova.compute.manager [req-1b90779c-47c5-4de1-9555-d882453754e4 req-5f2c4521-b6e6-4812-9e26-2dfcc8de4580 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Received event network-vif-plugged-9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.063 2 DEBUG oslo_concurrency.lockutils [req-1b90779c-47c5-4de1-9555-d882453754e4 req-5f2c4521-b6e6-4812-9e26-2dfcc8de4580 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "c70e8f51-9397-40dd-9bbe-210e60b75364-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.063 2 DEBUG oslo_concurrency.lockutils [req-1b90779c-47c5-4de1-9555-d882453754e4 req-5f2c4521-b6e6-4812-9e26-2dfcc8de4580 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c70e8f51-9397-40dd-9bbe-210e60b75364-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.063 2 DEBUG oslo_concurrency.lockutils [req-1b90779c-47c5-4de1-9555-d882453754e4 req-5f2c4521-b6e6-4812-9e26-2dfcc8de4580 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c70e8f51-9397-40dd-9bbe-210e60b75364-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.063 2 DEBUG nova.compute.manager [req-1b90779c-47c5-4de1-9555-d882453754e4 req-5f2c4521-b6e6-4812-9e26-2dfcc8de4580 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] No waiting events found dispatching network-vif-plugged-9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.064 2 WARNING nova.compute.manager [req-1b90779c-47c5-4de1-9555-d882453754e4 req-5f2c4521-b6e6-4812-9e26-2dfcc8de4580 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Received unexpected event network-vif-plugged-9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf for instance with vm_state active and task_state deleting.
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.245 2 DEBUG oslo_concurrency.lockutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Releasing lock "refresh_cache-4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.247 2 DEBUG nova.compute.manager [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Instance network_info: |[{"id": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "address": "fa:16:3e:25:64:28", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap242e9f5d-58", "ovs_interfaceid": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.251 2 DEBUG nova.virt.libvirt.driver [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Start _get_guest_xml network_info=[{"id": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "address": "fa:16:3e:25:64:28", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap242e9f5d-58", "ovs_interfaceid": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:43:34Z,direct_url=<?>,disk_format='qcow2',id=6225d2a0-8cbb-42ed-9a0a-13744b0f7ae4,min_disk=0,min_ram=0,name='tempest-scenario-img--534925320',owner='2e53064cd4d645f09bd59bbca09b98e0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:43:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': '6225d2a0-8cbb-42ed-9a0a-13744b0f7ae4'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.256 2 WARNING nova.virt.libvirt.driver [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.261 2 DEBUG nova.virt.libvirt.host [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.262 2 DEBUG nova.virt.libvirt.host [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.265 2 DEBUG nova.virt.libvirt.host [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.265 2 DEBUG nova.virt.libvirt.host [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.266 2 DEBUG nova.virt.libvirt.driver [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.267 2 DEBUG nova.virt.hardware [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:43:34Z,direct_url=<?>,disk_format='qcow2',id=6225d2a0-8cbb-42ed-9a0a-13744b0f7ae4,min_disk=0,min_ram=0,name='tempest-scenario-img--534925320',owner='2e53064cd4d645f09bd59bbca09b98e0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:43:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.267 2 DEBUG nova.virt.hardware [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.267 2 DEBUG nova.virt.hardware [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.267 2 DEBUG nova.virt.hardware [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.268 2 DEBUG nova.virt.hardware [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.268 2 DEBUG nova.virt.hardware [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.268 2 DEBUG nova.virt.hardware [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.268 2 DEBUG nova.virt.hardware [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.268 2 DEBUG nova.virt.hardware [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.269 2 DEBUG nova.virt.hardware [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.269 2 DEBUG nova.virt.hardware [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.271 2 DEBUG oslo_concurrency.processutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.446 2 INFO nova.virt.libvirt.driver [None req-e6ba4950-6417-430b-a2d8-68a6d14ddd9b 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Deleting instance files /var/lib/nova/instances/c70e8f51-9397-40dd-9bbe-210e60b75364_del
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.448 2 INFO nova.virt.libvirt.driver [None req-e6ba4950-6417-430b-a2d8-68a6d14ddd9b 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Deletion of /var/lib/nova/instances/c70e8f51-9397-40dd-9bbe-210e60b75364_del complete
Oct 02 12:43:52 compute-0 sudo[356306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:52 compute-0 sudo[356306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:52 compute-0 sudo[356306]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:52 compute-0 sudo[356348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:52 compute-0 sudo[356348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:52 compute-0 sudo[356348]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.729 2 DEBUG oslo_concurrency.processutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.754 2 DEBUG nova.storage.rbd_utils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] rbd image 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.757 2 DEBUG oslo_concurrency.processutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.795 2 INFO nova.compute.manager [None req-e6ba4950-6417-430b-a2d8-68a6d14ddd9b 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Took 3.06 seconds to destroy the instance on the hypervisor.
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.795 2 DEBUG oslo.service.loopingcall [None req-e6ba4950-6417-430b-a2d8-68a6d14ddd9b 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.796 2 DEBUG nova.compute.manager [-] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.796 2 DEBUG nova.network.neutron [-] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:43:52 compute-0 sshd-session[356136]: Connection closed by invalid user shoaib 167.99.55.34 port 49224 [preauth]
Oct 02 12:43:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:52.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.949 2 DEBUG oslo_concurrency.lockutils [None req-9d3622a2-65df-41df-95c0-631b23f5a66f b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Acquiring lock "714ae75f-1424-4b97-b849-84e5b4e77668" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.949 2 DEBUG oslo_concurrency.lockutils [None req-9d3622a2-65df-41df-95c0-631b23f5a66f b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lock "714ae75f-1424-4b97-b849-84e5b4e77668" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.949 2 DEBUG oslo_concurrency.lockutils [None req-9d3622a2-65df-41df-95c0-631b23f5a66f b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Acquiring lock "714ae75f-1424-4b97-b849-84e5b4e77668-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.950 2 DEBUG oslo_concurrency.lockutils [None req-9d3622a2-65df-41df-95c0-631b23f5a66f b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lock "714ae75f-1424-4b97-b849-84e5b4e77668-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.950 2 DEBUG oslo_concurrency.lockutils [None req-9d3622a2-65df-41df-95c0-631b23f5a66f b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lock "714ae75f-1424-4b97-b849-84e5b4e77668-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.951 2 INFO nova.compute.manager [None req-9d3622a2-65df-41df-95c0-631b23f5a66f b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Terminating instance
Oct 02 12:43:52 compute-0 nova_compute[257802]: 2025-10-02 12:43:52.952 2 DEBUG nova.compute.manager [None req-9d3622a2-65df-41df-95c0-631b23f5a66f b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:43:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:43:53 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1282326066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.188 2 DEBUG oslo_concurrency.processutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.189 2 DEBUG nova.virt.libvirt.vif [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:43:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-1177339644',display_name='tempest-TestMinimumBasicScenario-server-1177339644',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-1177339644',id=159,image_ref='6225d2a0-8cbb-42ed-9a0a-13744b0f7ae4',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFD6z5KFLzCCipER/PvHuq/MNGTxd1RUhD7rY5o2WpTORO0UwKn2m0zVFbpQwYXEC7RieyYdRnhp+ULi3gCBX1FpCTLHoDreyHt1lDTwb0yPiwclRQO8cg/Ijl4ojEGDkg==',key_name='tempest-TestMinimumBasicScenario-992418222',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2e53064cd4d645f09bd59bbca09b98e0',ramdisk_id='',reservation_id='r-mrk9aw2q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6225d2a0-8cbb-42ed-9a0a-13744b0f7ae4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestMinimumBasicScenario-999813940',owner_user_name='tempest-TestMinimumBasicScenario-999813940-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:43:46Z,user_data=None,user_id='734ae44830d540d8ab51c2a3d75ecd80',uuid=4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "address": "fa:16:3e:25:64:28", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap242e9f5d-58", "ovs_interfaceid": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.189 2 DEBUG nova.network.os_vif_util [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Converting VIF {"id": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "address": "fa:16:3e:25:64:28", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap242e9f5d-58", "ovs_interfaceid": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.190 2 DEBUG nova.network.os_vif_util [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:64:28,bridge_name='br-int',has_traffic_filtering=True,id=242e9f5d-5808-46e5-877c-ab6c97cacc64,network=Network(99e53961-97c9-4d79-b2bc-ba336c204821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap242e9f5d-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.192 2 DEBUG nova.objects.instance [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:43:53 compute-0 kernel: tap763709ed-3f (unregistering): left promiscuous mode
Oct 02 12:43:53 compute-0 NetworkManager[44987]: <info>  [1759409033.2027] device (tap763709ed-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:43:53 compute-0 ovn_controller[148183]: 2025-10-02T12:43:53Z|00686|binding|INFO|Releasing lport 763709ed-3fe4-45a4-8a2f-4b21f4534590 from this chassis (sb_readonly=0)
Oct 02 12:43:53 compute-0 ovn_controller[148183]: 2025-10-02T12:43:53Z|00687|binding|INFO|Setting lport 763709ed-3fe4-45a4-8a2f-4b21f4534590 down in Southbound
Oct 02 12:43:53 compute-0 ovn_controller[148183]: 2025-10-02T12:43:53Z|00688|binding|INFO|Removing iface tap763709ed-3f ovn-installed in OVS
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:53.247 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:b3:16 10.100.0.7'], port_security=['fa:16:3e:1a:b3:16 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '714ae75f-1424-4b97-b849-84e5b4e77668', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e7b8a8de-b6cd-4283-854b-a2bd919c371d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a41d99312f014c65adddea4f70536a15', 'neutron:revision_number': '6', 'neutron:security_group_ids': '5f3db9ba-e6e8-41b4-b916-387b4ad385f8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4eaf2b53-ef61-475e-8161-94a8e63ff149, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=763709ed-3fe4-45a4-8a2f-4b21f4534590) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:43:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:53.248 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 763709ed-3fe4-45a4-8a2f-4b21f4534590 in datapath e7b8a8de-b6cd-4283-854b-a2bd919c371d unbound from our chassis
Oct 02 12:43:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:53.250 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e7b8a8de-b6cd-4283-854b-a2bd919c371d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:43:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:53.250 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0c9a4a80-5664-4445-909f-c522076403b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:53.251 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e7b8a8de-b6cd-4283-854b-a2bd919c371d namespace which is not needed anymore
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.254 2 DEBUG nova.virt.libvirt.driver [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:43:53 compute-0 nova_compute[257802]:   <uuid>4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf</uuid>
Oct 02 12:43:53 compute-0 nova_compute[257802]:   <name>instance-0000009f</name>
Oct 02 12:43:53 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:43:53 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:43:53 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:43:53 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:       <nova:name>tempest-TestMinimumBasicScenario-server-1177339644</nova:name>
Oct 02 12:43:53 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:43:52</nova:creationTime>
Oct 02 12:43:53 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:43:53 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:43:53 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:43:53 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:43:53 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:43:53 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:43:53 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:43:53 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:43:53 compute-0 nova_compute[257802]:         <nova:user uuid="734ae44830d540d8ab51c2a3d75ecd80">tempest-TestMinimumBasicScenario-999813940-project-member</nova:user>
Oct 02 12:43:53 compute-0 nova_compute[257802]:         <nova:project uuid="2e53064cd4d645f09bd59bbca09b98e0">tempest-TestMinimumBasicScenario-999813940</nova:project>
Oct 02 12:43:53 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:43:53 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="6225d2a0-8cbb-42ed-9a0a-13744b0f7ae4"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:43:53 compute-0 nova_compute[257802]:         <nova:port uuid="242e9f5d-5808-46e5-877c-ab6c97cacc64">
Oct 02 12:43:53 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:43:53 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:43:53 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:43:53 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <system>
Oct 02 12:43:53 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:43:53 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:43:53 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:43:53 compute-0 nova_compute[257802]:       <entry name="serial">4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf</entry>
Oct 02 12:43:53 compute-0 nova_compute[257802]:       <entry name="uuid">4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf</entry>
Oct 02 12:43:53 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     </system>
Oct 02 12:43:53 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:43:53 compute-0 nova_compute[257802]:   <os>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:   </os>
Oct 02 12:43:53 compute-0 nova_compute[257802]:   <features>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:   </features>
Oct 02 12:43:53 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:43:53 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:43:53 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:43:53 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf_disk">
Oct 02 12:43:53 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:       </source>
Oct 02 12:43:53 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:43:53 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:43:53 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:43:53 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf_disk.config">
Oct 02 12:43:53 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:       </source>
Oct 02 12:43:53 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:43:53 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:43:53 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:43:53 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:25:64:28"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:       <target dev="tap242e9f5d-58"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:43:53 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf/console.log" append="off"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <video>
Oct 02 12:43:53 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     </video>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:43:53 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:43:53 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:43:53 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:43:53 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:43:53 compute-0 nova_compute[257802]: </domain>
Oct 02 12:43:53 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.255 2 DEBUG nova.compute.manager [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Preparing to wait for external event network-vif-plugged-242e9f5d-5808-46e5-877c-ab6c97cacc64 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.255 2 DEBUG oslo_concurrency.lockutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquiring lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.255 2 DEBUG oslo_concurrency.lockutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.256 2 DEBUG oslo_concurrency.lockutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.256 2 DEBUG nova.virt.libvirt.vif [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:43:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-1177339644',display_name='tempest-TestMinimumBasicScenario-server-1177339644',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-1177339644',id=159,image_ref='6225d2a0-8cbb-42ed-9a0a-13744b0f7ae4',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFD6z5KFLzCCipER/PvHuq/MNGTxd1RUhD7rY5o2WpTORO0UwKn2m0zVFbpQwYXEC7RieyYdRnhp+ULi3gCBX1FpCTLHoDreyHt1lDTwb0yPiwclRQO8cg/Ijl4ojEGDkg==',key_name='tempest-TestMinimumBasicScenario-992418222',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2e53064cd4d645f09bd59bbca09b98e0',ramdisk_id='',reservation_id='r-mrk9aw2q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6225d2a0-8cbb-42ed-9a0a-13744b0f7ae4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestMinimumBasicScenario-999813940',owner_user_name='tempest-TestMinimumBasicScenario-999813940-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:43:46Z,user_data=None,user_id='734ae44830d540d8ab51c2a3d75ecd80',uuid=4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "address": "fa:16:3e:25:64:28", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap242e9f5d-58", "ovs_interfaceid": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.257 2 DEBUG nova.network.os_vif_util [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Converting VIF {"id": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "address": "fa:16:3e:25:64:28", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap242e9f5d-58", "ovs_interfaceid": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.257 2 DEBUG nova.network.os_vif_util [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:64:28,bridge_name='br-int',has_traffic_filtering=True,id=242e9f5d-5808-46e5-877c-ab6c97cacc64,network=Network(99e53961-97c9-4d79-b2bc-ba336c204821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap242e9f5d-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.257 2 DEBUG os_vif [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:64:28,bridge_name='br-int',has_traffic_filtering=True,id=242e9f5d-5808-46e5-877c-ab6c97cacc64,network=Network(99e53961-97c9-4d79-b2bc-ba336c204821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap242e9f5d-58') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.258 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.258 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.261 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap242e9f5d-58, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.261 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap242e9f5d-58, col_values=(('external_ids', {'iface-id': '242e9f5d-5808-46e5-877c-ab6c97cacc64', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:25:64:28', 'vm-uuid': '4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:53 compute-0 NetworkManager[44987]: <info>  [1759409033.2649] manager: (tap242e9f5d-58): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/311)
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:43:53 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000099.scope: Deactivated successfully.
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:53 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000099.scope: Consumed 20.681s CPU time.
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.269 2 INFO os_vif [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:64:28,bridge_name='br-int',has_traffic_filtering=True,id=242e9f5d-5808-46e5-877c-ab6c97cacc64,network=Network(99e53961-97c9-4d79-b2bc-ba336c204821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap242e9f5d-58')
Oct 02 12:43:53 compute-0 systemd-machined[211836]: Machine qemu-75-instance-00000099 terminated.
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.390 2 INFO nova.virt.libvirt.driver [-] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Instance destroyed successfully.
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.390 2 DEBUG nova.objects.instance [None req-9d3622a2-65df-41df-95c0-631b23f5a66f b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lazy-loading 'resources' on Instance uuid 714ae75f-1424-4b97-b849-84e5b4e77668 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:43:53 compute-0 ceph-mon[73607]: pgmap v2511: 305 pgs: 305 active+clean; 502 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.2 MiB/s wr, 149 op/s
Oct 02 12:43:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3505284374' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1282326066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.523 2 DEBUG nova.virt.libvirt.vif [None req-9d3622a2-65df-41df-95c0-631b23f5a66f b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:41:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-686473254',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-686473254',id=153,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMKhJxrDtnxwBQUfhXEoiE7UJdnEItyt2MVgFBXsCoh01cS2FKjJZa0tSLP7/9uktcmwDXaXDiKLD638dMdEY8dQy2aXxdKxSuJAyk4atAc8PHb6iv+FO/634dBFNFVRVg==',key_name='tempest-TestInstancesWithCinderVolumes-1888663332',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:41:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a41d99312f014c65adddea4f70536a15',ramdisk_id='',reservation_id='r-0xfegyog',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestInstancesWithCinderVolumes-99684106',owner_user_name='tempest-TestInstancesWithCinderVolumes-99684106-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:41:33Z,user_data=None,user_id='b82c89ad6c4a49e78943f7a92d0a6560',uuid=714ae75f-1424-4b97-b849-84e5b4e77668,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "address": "fa:16:3e:1a:b3:16", "network": {"id": "e7b8a8de-b6cd-4283-854b-a2bd919c371d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1851369337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a41d99312f014c65adddea4f70536a15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap763709ed-3f", "ovs_interfaceid": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.523 2 DEBUG nova.network.os_vif_util [None req-9d3622a2-65df-41df-95c0-631b23f5a66f b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Converting VIF {"id": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "address": "fa:16:3e:1a:b3:16", "network": {"id": "e7b8a8de-b6cd-4283-854b-a2bd919c371d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1851369337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a41d99312f014c65adddea4f70536a15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap763709ed-3f", "ovs_interfaceid": "763709ed-3fe4-45a4-8a2f-4b21f4534590", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.524 2 DEBUG nova.network.os_vif_util [None req-9d3622a2-65df-41df-95c0-631b23f5a66f b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1a:b3:16,bridge_name='br-int',has_traffic_filtering=True,id=763709ed-3fe4-45a4-8a2f-4b21f4534590,network=Network(e7b8a8de-b6cd-4283-854b-a2bd919c371d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap763709ed-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.524 2 DEBUG os_vif [None req-9d3622a2-65df-41df-95c0-631b23f5a66f b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:b3:16,bridge_name='br-int',has_traffic_filtering=True,id=763709ed-3fe4-45a4-8a2f-4b21f4534590,network=Network(e7b8a8de-b6cd-4283-854b-a2bd919c371d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap763709ed-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.526 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap763709ed-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.535 2 INFO os_vif [None req-9d3622a2-65df-41df-95c0-631b23f5a66f b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:b3:16,bridge_name='br-int',has_traffic_filtering=True,id=763709ed-3fe4-45a4-8a2f-4b21f4534590,network=Network(e7b8a8de-b6cd-4283-854b-a2bd919c371d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap763709ed-3f')
Oct 02 12:43:53 compute-0 neutron-haproxy-ovnmeta-e7b8a8de-b6cd-4283-854b-a2bd919c371d[352424]: [NOTICE]   (352457) : haproxy version is 2.8.14-c23fe91
Oct 02 12:43:53 compute-0 neutron-haproxy-ovnmeta-e7b8a8de-b6cd-4283-854b-a2bd919c371d[352424]: [NOTICE]   (352457) : path to executable is /usr/sbin/haproxy
Oct 02 12:43:53 compute-0 neutron-haproxy-ovnmeta-e7b8a8de-b6cd-4283-854b-a2bd919c371d[352424]: [WARNING]  (352457) : Exiting Master process...
Oct 02 12:43:53 compute-0 neutron-haproxy-ovnmeta-e7b8a8de-b6cd-4283-854b-a2bd919c371d[352424]: [WARNING]  (352457) : Exiting Master process...
Oct 02 12:43:53 compute-0 neutron-haproxy-ovnmeta-e7b8a8de-b6cd-4283-854b-a2bd919c371d[352424]: [ALERT]    (352457) : Current worker (352478) exited with code 143 (Terminated)
Oct 02 12:43:53 compute-0 neutron-haproxy-ovnmeta-e7b8a8de-b6cd-4283-854b-a2bd919c371d[352424]: [WARNING]  (352457) : All workers exited. Exiting... (0)
Oct 02 12:43:53 compute-0 systemd[1]: libpod-e1f3f2ed9e2aadb6df1e2e5a402752956c9819321c3ef75c3a044707e3862137.scope: Deactivated successfully.
Oct 02 12:43:53 compute-0 podman[356441]: 2025-10-02 12:43:53.551225501 +0000 UTC m=+0.210810640 container died e1f3f2ed9e2aadb6df1e2e5a402752956c9819321c3ef75c3a044707e3862137 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e7b8a8de-b6cd-4283-854b-a2bd919c371d, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:43:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.767 2 DEBUG nova.compute.manager [req-ae5c6fb1-8816-4a4d-8c53-4503acc03ba2 req-4296f074-a7f1-40fc-b11e-a7ba457e2e94 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Received event network-vif-unplugged-763709ed-3fe4-45a4-8a2f-4b21f4534590 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.768 2 DEBUG oslo_concurrency.lockutils [req-ae5c6fb1-8816-4a4d-8c53-4503acc03ba2 req-4296f074-a7f1-40fc-b11e-a7ba457e2e94 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "714ae75f-1424-4b97-b849-84e5b4e77668-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.768 2 DEBUG oslo_concurrency.lockutils [req-ae5c6fb1-8816-4a4d-8c53-4503acc03ba2 req-4296f074-a7f1-40fc-b11e-a7ba457e2e94 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "714ae75f-1424-4b97-b849-84e5b4e77668-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.769 2 DEBUG oslo_concurrency.lockutils [req-ae5c6fb1-8816-4a4d-8c53-4503acc03ba2 req-4296f074-a7f1-40fc-b11e-a7ba457e2e94 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "714ae75f-1424-4b97-b849-84e5b4e77668-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.769 2 DEBUG nova.compute.manager [req-ae5c6fb1-8816-4a4d-8c53-4503acc03ba2 req-4296f074-a7f1-40fc-b11e-a7ba457e2e94 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] No waiting events found dispatching network-vif-unplugged-763709ed-3fe4-45a4-8a2f-4b21f4534590 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.770 2 DEBUG nova.compute.manager [req-ae5c6fb1-8816-4a4d-8c53-4503acc03ba2 req-4296f074-a7f1-40fc-b11e-a7ba457e2e94 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Received event network-vif-unplugged-763709ed-3fe4-45a4-8a2f-4b21f4534590 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.775 2 DEBUG nova.virt.libvirt.driver [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.776 2 DEBUG nova.virt.libvirt.driver [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.776 2 DEBUG nova.virt.libvirt.driver [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] No VIF found with MAC fa:16:3e:25:64:28, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.777 2 INFO nova.virt.libvirt.driver [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Using config drive
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.805 2 DEBUG nova.storage.rbd_utils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] rbd image 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.925 2 DEBUG nova.network.neutron [-] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:43:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2512: 305 pgs: 305 active+clean; 477 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.2 MiB/s wr, 133 op/s
Oct 02 12:43:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e1f3f2ed9e2aadb6df1e2e5a402752956c9819321c3ef75c3a044707e3862137-userdata-shm.mount: Deactivated successfully.
Oct 02 12:43:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c8346cf1d5e47e85a7984b34b0820c2aa59f5156484ca527e226d4de46ee913-merged.mount: Deactivated successfully.
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.985 2 DEBUG nova.compute.manager [req-50b00ccc-6e6d-48ae-b457-d177e7cad5ad req-8d50f9fc-0421-4d57-8e78-ac6cee172f82 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Received event network-vif-deleted-9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.986 2 INFO nova.compute.manager [req-50b00ccc-6e6d-48ae-b457-d177e7cad5ad req-8d50f9fc-0421-4d57-8e78-ac6cee172f82 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Neutron deleted interface 9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf; detaching it from the instance and deleting it from the info cache
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.986 2 DEBUG nova.network.neutron [req-50b00ccc-6e6d-48ae-b457-d177e7cad5ad req-8d50f9fc-0421-4d57-8e78-ac6cee172f82 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:43:53 compute-0 nova_compute[257802]: 2025-10-02 12:43:53.988 2 INFO nova.compute.manager [-] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Took 1.19 seconds to deallocate network for instance.
Oct 02 12:43:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:54.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:54 compute-0 nova_compute[257802]: 2025-10-02 12:43:54.088 2 INFO nova.virt.libvirt.driver [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Creating config drive at /var/lib/nova/instances/4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf/disk.config
Oct 02 12:43:54 compute-0 nova_compute[257802]: 2025-10-02 12:43:54.093 2 DEBUG oslo_concurrency.processutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4rjhvkxl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:54 compute-0 nova_compute[257802]: 2025-10-02 12:43:54.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:54 compute-0 nova_compute[257802]: 2025-10-02 12:43:54.124 2 DEBUG nova.compute.manager [req-50b00ccc-6e6d-48ae-b457-d177e7cad5ad req-8d50f9fc-0421-4d57-8e78-ac6cee172f82 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Detach interface failed, port_id=9fd83de3-587e-4631-a4e3-ee5ab5b1cbbf, reason: Instance c70e8f51-9397-40dd-9bbe-210e60b75364 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:43:54 compute-0 nova_compute[257802]: 2025-10-02 12:43:54.158 2 DEBUG oslo_concurrency.lockutils [None req-e6ba4950-6417-430b-a2d8-68a6d14ddd9b 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:54 compute-0 nova_compute[257802]: 2025-10-02 12:43:54.158 2 DEBUG oslo_concurrency.lockutils [None req-e6ba4950-6417-430b-a2d8-68a6d14ddd9b 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:54 compute-0 podman[356441]: 2025-10-02 12:43:54.177204878 +0000 UTC m=+0.836790017 container cleanup e1f3f2ed9e2aadb6df1e2e5a402752956c9819321c3ef75c3a044707e3862137 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e7b8a8de-b6cd-4283-854b-a2bd919c371d, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:43:54 compute-0 systemd[1]: libpod-conmon-e1f3f2ed9e2aadb6df1e2e5a402752956c9819321c3ef75c3a044707e3862137.scope: Deactivated successfully.
Oct 02 12:43:54 compute-0 nova_compute[257802]: 2025-10-02 12:43:54.227 2 DEBUG oslo_concurrency.processutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4rjhvkxl" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:54 compute-0 nova_compute[257802]: 2025-10-02 12:43:54.262 2 DEBUG nova.storage.rbd_utils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] rbd image 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:54 compute-0 nova_compute[257802]: 2025-10-02 12:43:54.269 2 DEBUG oslo_concurrency.processutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf/disk.config 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:54 compute-0 nova_compute[257802]: 2025-10-02 12:43:54.363 2 DEBUG oslo_concurrency.processutils [None req-e6ba4950-6417-430b-a2d8-68a6d14ddd9b 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:54 compute-0 ovn_controller[148183]: 2025-10-02T12:43:54Z|00689|binding|INFO|Releasing lport 79bf28ab-e58e-4276-adf8-279ba85b1b49 from this chassis (sb_readonly=0)
Oct 02 12:43:54 compute-0 nova_compute[257802]: 2025-10-02 12:43:54.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:54 compute-0 podman[356526]: 2025-10-02 12:43:54.504003365 +0000 UTC m=+0.302011699 container remove e1f3f2ed9e2aadb6df1e2e5a402752956c9819321c3ef75c3a044707e3862137 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e7b8a8de-b6cd-4283-854b-a2bd919c371d, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:43:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:54.510 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[aa0fb2bb-e67b-4a31-a92c-17b27e62fbb0]: (4, ('Thu Oct  2 12:43:53 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e7b8a8de-b6cd-4283-854b-a2bd919c371d (e1f3f2ed9e2aadb6df1e2e5a402752956c9819321c3ef75c3a044707e3862137)\ne1f3f2ed9e2aadb6df1e2e5a402752956c9819321c3ef75c3a044707e3862137\nThu Oct  2 12:43:54 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e7b8a8de-b6cd-4283-854b-a2bd919c371d (e1f3f2ed9e2aadb6df1e2e5a402752956c9819321c3ef75c3a044707e3862137)\ne1f3f2ed9e2aadb6df1e2e5a402752956c9819321c3ef75c3a044707e3862137\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:54.513 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2be9c0c9-9ff4-4fe4-84fe-d6e65d0ea10d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:54.514 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7b8a8de-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:54 compute-0 nova_compute[257802]: 2025-10-02 12:43:54.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:43:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:43:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:43:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:43:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014665704099199703 of space, bias 1.0, pg target 0.4399711229759911 quantized to 32 (current 32)
Oct 02 12:43:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:43:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.008675102945281966 of space, bias 1.0, pg target 2.6025308835845897 quantized to 32 (current 32)
Oct 02 12:43:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:43:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:43:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:43:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003806103724641727 of space, bias 1.0, pg target 1.1342189099432347 quantized to 32 (current 32)
Oct 02 12:43:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:43:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 02 12:43:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:43:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:43:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:43:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.0002699042085427136 quantized to 32 (current 32)
Oct 02 12:43:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:43:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Oct 02 12:43:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:43:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:43:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:43:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Oct 02 12:43:54 compute-0 nova_compute[257802]: 2025-10-02 12:43:54.761 2 DEBUG oslo_concurrency.processutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf/disk.config 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:54 compute-0 nova_compute[257802]: 2025-10-02 12:43:54.761 2 INFO nova.virt.libvirt.driver [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Deleting local config drive /var/lib/nova/instances/4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf/disk.config because it was imported into RBD.
Oct 02 12:43:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:43:54 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1340941978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:54 compute-0 NetworkManager[44987]: <info>  [1759409034.8242] manager: (tap242e9f5d-58): new Tun device (/org/freedesktop/NetworkManager/Devices/312)
Oct 02 12:43:54 compute-0 systemd-udevd[356419]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:43:54 compute-0 nova_compute[257802]: 2025-10-02 12:43:54.833 2 DEBUG oslo_concurrency.processutils [None req-e6ba4950-6417-430b-a2d8-68a6d14ddd9b 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:54 compute-0 nova_compute[257802]: 2025-10-02 12:43:54.844 2 DEBUG nova.compute.provider_tree [None req-e6ba4950-6417-430b-a2d8-68a6d14ddd9b 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:43:54 compute-0 systemd-machined[211836]: New machine qemu-78-instance-0000009f.
Oct 02 12:43:54 compute-0 nova_compute[257802]: 2025-10-02 12:43:54.862 2 DEBUG nova.scheduler.client.report [None req-e6ba4950-6417-430b-a2d8-68a6d14ddd9b 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:43:54 compute-0 kernel: tape7b8a8de-b0: left promiscuous mode
Oct 02 12:43:54 compute-0 kernel: tap242e9f5d-58: entered promiscuous mode
Oct 02 12:43:54 compute-0 NetworkManager[44987]: <info>  [1759409034.8786] device (tap242e9f5d-58): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:43:54 compute-0 NetworkManager[44987]: <info>  [1759409034.8800] device (tap242e9f5d-58): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:43:54 compute-0 nova_compute[257802]: 2025-10-02 12:43:54.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:54.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:54.884 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9f7515e7-727e-4524-b128-5d71f6f34775]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:54 compute-0 systemd[1]: Started Virtual Machine qemu-78-instance-0000009f.
Oct 02 12:43:54 compute-0 ovn_controller[148183]: 2025-10-02T12:43:54Z|00690|binding|INFO|Claiming lport 242e9f5d-5808-46e5-877c-ab6c97cacc64 for this chassis.
Oct 02 12:43:54 compute-0 ovn_controller[148183]: 2025-10-02T12:43:54Z|00691|binding|INFO|242e9f5d-5808-46e5-877c-ab6c97cacc64: Claiming fa:16:3e:25:64:28 10.100.0.6
Oct 02 12:43:54 compute-0 nova_compute[257802]: 2025-10-02 12:43:54.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:54.910 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3c5d2d76-c00d-4a03-a5c9-ccc1e7e4fc83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:54 compute-0 nova_compute[257802]: 2025-10-02 12:43:54.911 2 DEBUG oslo_concurrency.lockutils [None req-e6ba4950-6417-430b-a2d8-68a6d14ddd9b 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:54.911 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[989726f5-b782-4146-96eb-3e5e5be28894]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:54.913 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:64:28 10.100.0.6'], port_security=['fa:16:3e:25:64:28 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99e53961-97c9-4d79-b2bc-ba336c204821', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2e53064cd4d645f09bd59bbca09b98e0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e3789894-a1c1-47ad-b248-e3a6730a7778', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60a249b7-28a6-4e82-83ff-74dd7c2a70f8, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=242e9f5d-5808-46e5-877c-ab6c97cacc64) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:43:54 compute-0 ceph-mon[73607]: pgmap v2512: 305 pgs: 305 active+clean; 477 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.2 MiB/s wr, 133 op/s
Oct 02 12:43:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:54.928 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[644b8ca4-d0d2-454a-b5d0-3b6d2b112223]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695790, 'reachable_time': 26307, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356615, 'error': None, 'target': 'ovnmeta-e7b8a8de-b6cd-4283-854b-a2bd919c371d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:54 compute-0 systemd[1]: run-netns-ovnmeta\x2de7b8a8de\x2db6cd\x2d4283\x2d854b\x2da2bd919c371d.mount: Deactivated successfully.
Oct 02 12:43:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:54.933 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e7b8a8de-b6cd-4283-854b-a2bd919c371d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:43:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:54.933 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[a1d3bad2-ca93-421f-9546-d21187614d80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:54.933 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 242e9f5d-5808-46e5-877c-ab6c97cacc64 in datapath 99e53961-97c9-4d79-b2bc-ba336c204821 bound to our chassis
Oct 02 12:43:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:54.935 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 99e53961-97c9-4d79-b2bc-ba336c204821
Oct 02 12:43:54 compute-0 nova_compute[257802]: 2025-10-02 12:43:54.943 2 INFO nova.scheduler.client.report [None req-e6ba4950-6417-430b-a2d8-68a6d14ddd9b 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Deleted allocations for instance c70e8f51-9397-40dd-9bbe-210e60b75364
Oct 02 12:43:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:54.948 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[277d6730-de96-4cbc-911a-6fc80088f508]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:54.948 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap99e53961-91 in ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:43:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:54.950 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap99e53961-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:43:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:54.950 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2c4b1226-03a5-43e6-9414-265f460037f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:54.951 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[506c1ff0-7de9-46ba-ba50-1f69b1117933]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:54.963 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[36deb136-f7af-4cd5-af55-2f5fd6783eeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:54 compute-0 ovn_controller[148183]: 2025-10-02T12:43:54Z|00692|binding|INFO|Setting lport 242e9f5d-5808-46e5-877c-ab6c97cacc64 ovn-installed in OVS
Oct 02 12:43:54 compute-0 ovn_controller[148183]: 2025-10-02T12:43:54Z|00693|binding|INFO|Setting lport 242e9f5d-5808-46e5-877c-ab6c97cacc64 up in Southbound
Oct 02 12:43:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:54.987 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b323efdd-5e98-46b8-a608-951d2722e86a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:54 compute-0 nova_compute[257802]: 2025-10-02 12:43:54.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:55.015 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[bc3ccb75-27d4-47ea-bb1b-5b24dc2d3d97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:55 compute-0 NetworkManager[44987]: <info>  [1759409035.0205] manager: (tap99e53961-90): new Veth device (/org/freedesktop/NetworkManager/Devices/313)
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:55.023 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8458cfe3-2b92-4645-b3b7-bc76c61a8f7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:55 compute-0 nova_compute[257802]: 2025-10-02 12:43:55.029 2 DEBUG oslo_concurrency.lockutils [None req-e6ba4950-6417-430b-a2d8-68a6d14ddd9b 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lock "c70e8f51-9397-40dd-9bbe-210e60b75364" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:55.056 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[799d2e5b-8e7c-4e16-aab9-39f1adff1770]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:55.059 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[094dab3c-7dd3-4c22-bb44-207da42ac916]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:55 compute-0 NetworkManager[44987]: <info>  [1759409035.0782] device (tap99e53961-90): carrier: link connected
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:55.082 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[d3461f44-fe12-4438-816b-77da7f24222f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:55.096 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1875c146-984b-4576-8e47-91def25871d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99e53961-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:15:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 212], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 710270, 'reachable_time': 43347, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356646, 'error': None, 'target': 'ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:55.109 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d8c61203-cca6-4272-aa7d-3310f5c7ba02]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4d:1562'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 710270, 'tstamp': 710270}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356647, 'error': None, 'target': 'ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:55.122 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e44746dc-ff4d-467d-a67f-8d95632d5515]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99e53961-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:15:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 212], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 710270, 'reachable_time': 43347, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 356648, 'error': None, 'target': 'ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:55.148 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e96524-4e7a-4e52-8fc5-22a1ed3c5344]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:55.188 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b676bd5f-6990-4843-a446-861b1ab2bb35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:55.190 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99e53961-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:55.190 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:55.190 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99e53961-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:55 compute-0 NetworkManager[44987]: <info>  [1759409035.1925] manager: (tap99e53961-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/314)
Oct 02 12:43:55 compute-0 kernel: tap99e53961-90: entered promiscuous mode
Oct 02 12:43:55 compute-0 nova_compute[257802]: 2025-10-02 12:43:55.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:55.194 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap99e53961-90, col_values=(('external_ids', {'iface-id': '15e9653f-d8e0-49a9-859e-c8a4f714df4e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:55 compute-0 nova_compute[257802]: 2025-10-02 12:43:55.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:55 compute-0 ovn_controller[148183]: 2025-10-02T12:43:55Z|00694|binding|INFO|Releasing lport 15e9653f-d8e0-49a9-859e-c8a4f714df4e from this chassis (sb_readonly=0)
Oct 02 12:43:55 compute-0 nova_compute[257802]: 2025-10-02 12:43:55.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:55.215 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/99e53961-97c9-4d79-b2bc-ba336c204821.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/99e53961-97c9-4d79-b2bc-ba336c204821.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:55.216 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c75c4cf3-6577-4e63-9633-86a7d61b74a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:55.217 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-99e53961-97c9-4d79-b2bc-ba336c204821
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/99e53961-97c9-4d79-b2bc-ba336c204821.pid.haproxy
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 99e53961-97c9-4d79-b2bc-ba336c204821
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:43:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:43:55.219 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821', 'env', 'PROCESS_TAG=haproxy-99e53961-97c9-4d79-b2bc-ba336c204821', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/99e53961-97c9-4d79-b2bc-ba336c204821.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:43:55 compute-0 nova_compute[257802]: 2025-10-02 12:43:55.324 2 DEBUG nova.compute.manager [req-6ace05d1-83b4-421c-a343-e97bf6fd5780 req-853fbf70-d3ce-45f4-97f2-ad73c9bccd2a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Received event network-vif-plugged-242e9f5d-5808-46e5-877c-ab6c97cacc64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:43:55 compute-0 nova_compute[257802]: 2025-10-02 12:43:55.324 2 DEBUG oslo_concurrency.lockutils [req-6ace05d1-83b4-421c-a343-e97bf6fd5780 req-853fbf70-d3ce-45f4-97f2-ad73c9bccd2a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:55 compute-0 nova_compute[257802]: 2025-10-02 12:43:55.324 2 DEBUG oslo_concurrency.lockutils [req-6ace05d1-83b4-421c-a343-e97bf6fd5780 req-853fbf70-d3ce-45f4-97f2-ad73c9bccd2a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:55 compute-0 nova_compute[257802]: 2025-10-02 12:43:55.325 2 DEBUG oslo_concurrency.lockutils [req-6ace05d1-83b4-421c-a343-e97bf6fd5780 req-853fbf70-d3ce-45f4-97f2-ad73c9bccd2a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:55 compute-0 nova_compute[257802]: 2025-10-02 12:43:55.325 2 DEBUG nova.compute.manager [req-6ace05d1-83b4-421c-a343-e97bf6fd5780 req-853fbf70-d3ce-45f4-97f2-ad73c9bccd2a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Processing event network-vif-plugged-242e9f5d-5808-46e5-877c-ab6c97cacc64 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:43:55 compute-0 podman[356680]: 2025-10-02 12:43:55.561700568 +0000 UTC m=+0.021104000 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:43:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2513: 305 pgs: 305 active+clean; 464 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Oct 02 12:43:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:56.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:56 compute-0 nova_compute[257802]: 2025-10-02 12:43:56.060 2 DEBUG nova.compute.manager [req-3c1788cf-6936-4a12-99f9-21f3c17b6935 req-c388e05c-97e3-434b-b68a-c8992e6b6a7d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Received event network-vif-plugged-763709ed-3fe4-45a4-8a2f-4b21f4534590 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:43:56 compute-0 nova_compute[257802]: 2025-10-02 12:43:56.061 2 DEBUG oslo_concurrency.lockutils [req-3c1788cf-6936-4a12-99f9-21f3c17b6935 req-c388e05c-97e3-434b-b68a-c8992e6b6a7d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "714ae75f-1424-4b97-b849-84e5b4e77668-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:56 compute-0 nova_compute[257802]: 2025-10-02 12:43:56.061 2 DEBUG oslo_concurrency.lockutils [req-3c1788cf-6936-4a12-99f9-21f3c17b6935 req-c388e05c-97e3-434b-b68a-c8992e6b6a7d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "714ae75f-1424-4b97-b849-84e5b4e77668-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:56 compute-0 nova_compute[257802]: 2025-10-02 12:43:56.062 2 DEBUG oslo_concurrency.lockutils [req-3c1788cf-6936-4a12-99f9-21f3c17b6935 req-c388e05c-97e3-434b-b68a-c8992e6b6a7d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "714ae75f-1424-4b97-b849-84e5b4e77668-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:56 compute-0 nova_compute[257802]: 2025-10-02 12:43:56.063 2 DEBUG nova.compute.manager [req-3c1788cf-6936-4a12-99f9-21f3c17b6935 req-c388e05c-97e3-434b-b68a-c8992e6b6a7d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] No waiting events found dispatching network-vif-plugged-763709ed-3fe4-45a4-8a2f-4b21f4534590 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:43:56 compute-0 nova_compute[257802]: 2025-10-02 12:43:56.063 2 WARNING nova.compute.manager [req-3c1788cf-6936-4a12-99f9-21f3c17b6935 req-c388e05c-97e3-434b-b68a-c8992e6b6a7d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Received unexpected event network-vif-plugged-763709ed-3fe4-45a4-8a2f-4b21f4534590 for instance with vm_state active and task_state deleting.
Oct 02 12:43:56 compute-0 podman[356680]: 2025-10-02 12:43:56.261073617 +0000 UTC m=+0.720477049 container create 7ff532a58bcfcdbdf22d01044b1264ffa4f77cfe839054cc1cc2e1be105bae99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 12:43:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1340941978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3145299276' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:43:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3145299276' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:43:56 compute-0 systemd[1]: Started libpod-conmon-7ff532a58bcfcdbdf22d01044b1264ffa4f77cfe839054cc1cc2e1be105bae99.scope.
Oct 02 12:43:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:43:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/029a5b10d3d98f8daf0efe73ee491e42e0f9e8679e8d9ada87e5fa9c04b83859/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:43:56 compute-0 podman[356680]: 2025-10-02 12:43:56.536904293 +0000 UTC m=+0.996307735 container init 7ff532a58bcfcdbdf22d01044b1264ffa4f77cfe839054cc1cc2e1be105bae99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:43:56 compute-0 podman[356680]: 2025-10-02 12:43:56.544035118 +0000 UTC m=+1.003438530 container start 7ff532a58bcfcdbdf22d01044b1264ffa4f77cfe839054cc1cc2e1be105bae99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:43:56 compute-0 neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821[356739]: [NOTICE]   (356743) : New worker (356745) forked
Oct 02 12:43:56 compute-0 neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821[356739]: [NOTICE]   (356743) : Loading success.
Oct 02 12:43:56 compute-0 nova_compute[257802]: 2025-10-02 12:43:56.570 2 INFO nova.virt.libvirt.driver [None req-9d3622a2-65df-41df-95c0-631b23f5a66f b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Deleting instance files /var/lib/nova/instances/714ae75f-1424-4b97-b849-84e5b4e77668_del
Oct 02 12:43:56 compute-0 nova_compute[257802]: 2025-10-02 12:43:56.572 2 INFO nova.virt.libvirt.driver [None req-9d3622a2-65df-41df-95c0-631b23f5a66f b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Deletion of /var/lib/nova/instances/714ae75f-1424-4b97-b849-84e5b4e77668_del complete
Oct 02 12:43:56 compute-0 nova_compute[257802]: 2025-10-02 12:43:56.640 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409036.640154, 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:43:56 compute-0 nova_compute[257802]: 2025-10-02 12:43:56.641 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] VM Started (Lifecycle Event)
Oct 02 12:43:56 compute-0 nova_compute[257802]: 2025-10-02 12:43:56.643 2 DEBUG nova.compute.manager [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:43:56 compute-0 nova_compute[257802]: 2025-10-02 12:43:56.646 2 DEBUG nova.virt.libvirt.driver [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:43:56 compute-0 nova_compute[257802]: 2025-10-02 12:43:56.650 2 INFO nova.virt.libvirt.driver [-] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Instance spawned successfully.
Oct 02 12:43:56 compute-0 nova_compute[257802]: 2025-10-02 12:43:56.650 2 DEBUG nova.virt.libvirt.driver [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:43:56 compute-0 nova_compute[257802]: 2025-10-02 12:43:56.726 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:43:56 compute-0 nova_compute[257802]: 2025-10-02 12:43:56.732 2 DEBUG nova.virt.libvirt.driver [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:43:56 compute-0 nova_compute[257802]: 2025-10-02 12:43:56.733 2 DEBUG nova.virt.libvirt.driver [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:43:56 compute-0 nova_compute[257802]: 2025-10-02 12:43:56.733 2 DEBUG nova.virt.libvirt.driver [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:43:56 compute-0 nova_compute[257802]: 2025-10-02 12:43:56.734 2 DEBUG nova.virt.libvirt.driver [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:43:56 compute-0 nova_compute[257802]: 2025-10-02 12:43:56.734 2 DEBUG nova.virt.libvirt.driver [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:43:56 compute-0 nova_compute[257802]: 2025-10-02 12:43:56.734 2 DEBUG nova.virt.libvirt.driver [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:43:56 compute-0 nova_compute[257802]: 2025-10-02 12:43:56.738 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:43:56 compute-0 nova_compute[257802]: 2025-10-02 12:43:56.760 2 INFO nova.compute.manager [None req-9d3622a2-65df-41df-95c0-631b23f5a66f b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Took 3.81 seconds to destroy the instance on the hypervisor.
Oct 02 12:43:56 compute-0 nova_compute[257802]: 2025-10-02 12:43:56.760 2 DEBUG oslo.service.loopingcall [None req-9d3622a2-65df-41df-95c0-631b23f5a66f b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:43:56 compute-0 nova_compute[257802]: 2025-10-02 12:43:56.761 2 DEBUG nova.compute.manager [-] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:43:56 compute-0 nova_compute[257802]: 2025-10-02 12:43:56.761 2 DEBUG nova.network.neutron [-] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:43:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:43:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:56.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:43:57 compute-0 nova_compute[257802]: 2025-10-02 12:43:57.187 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:43:57 compute-0 nova_compute[257802]: 2025-10-02 12:43:57.188 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409036.6402788, 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:43:57 compute-0 nova_compute[257802]: 2025-10-02 12:43:57.188 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] VM Paused (Lifecycle Event)
Oct 02 12:43:57 compute-0 nova_compute[257802]: 2025-10-02 12:43:57.303 2 INFO nova.compute.manager [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Took 9.94 seconds to spawn the instance on the hypervisor.
Oct 02 12:43:57 compute-0 nova_compute[257802]: 2025-10-02 12:43:57.303 2 DEBUG nova.compute.manager [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:43:57 compute-0 nova_compute[257802]: 2025-10-02 12:43:57.461 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:43:57 compute-0 nova_compute[257802]: 2025-10-02 12:43:57.465 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409036.6462343, 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:43:57 compute-0 nova_compute[257802]: 2025-10-02 12:43:57.465 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] VM Resumed (Lifecycle Event)
Oct 02 12:43:57 compute-0 ceph-mon[73607]: pgmap v2513: 305 pgs: 305 active+clean; 464 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Oct 02 12:43:57 compute-0 nova_compute[257802]: 2025-10-02 12:43:57.575 2 INFO nova.compute.manager [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Took 13.27 seconds to build instance.
Oct 02 12:43:57 compute-0 nova_compute[257802]: 2025-10-02 12:43:57.608 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:43:57 compute-0 nova_compute[257802]: 2025-10-02 12:43:57.611 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:43:57 compute-0 nova_compute[257802]: 2025-10-02 12:43:57.683 2 DEBUG oslo_concurrency.lockutils [None req-ef23e784-4e6e-4bc7-bc70-bd07ff248a7e 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2514: 305 pgs: 305 active+clean; 445 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Oct 02 12:43:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:58.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:58 compute-0 nova_compute[257802]: 2025-10-02 12:43:58.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:58 compute-0 ceph-mon[73607]: pgmap v2514: 305 pgs: 305 active+clean; 445 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Oct 02 12:43:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:43:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:58.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:59 compute-0 nova_compute[257802]: 2025-10-02 12:43:59.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:59 compute-0 nova_compute[257802]: 2025-10-02 12:43:59.498 2 DEBUG nova.compute.manager [req-4d327912-fc45-48c8-ac58-39d48357cf84 req-aa33c2fe-7d45-462a-98a8-18d4699ca223 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Received event network-vif-plugged-242e9f5d-5808-46e5-877c-ab6c97cacc64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:43:59 compute-0 nova_compute[257802]: 2025-10-02 12:43:59.499 2 DEBUG oslo_concurrency.lockutils [req-4d327912-fc45-48c8-ac58-39d48357cf84 req-aa33c2fe-7d45-462a-98a8-18d4699ca223 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:59 compute-0 nova_compute[257802]: 2025-10-02 12:43:59.499 2 DEBUG oslo_concurrency.lockutils [req-4d327912-fc45-48c8-ac58-39d48357cf84 req-aa33c2fe-7d45-462a-98a8-18d4699ca223 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:59 compute-0 nova_compute[257802]: 2025-10-02 12:43:59.499 2 DEBUG oslo_concurrency.lockutils [req-4d327912-fc45-48c8-ac58-39d48357cf84 req-aa33c2fe-7d45-462a-98a8-18d4699ca223 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:59 compute-0 nova_compute[257802]: 2025-10-02 12:43:59.499 2 DEBUG nova.compute.manager [req-4d327912-fc45-48c8-ac58-39d48357cf84 req-aa33c2fe-7d45-462a-98a8-18d4699ca223 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] No waiting events found dispatching network-vif-plugged-242e9f5d-5808-46e5-877c-ab6c97cacc64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:43:59 compute-0 nova_compute[257802]: 2025-10-02 12:43:59.499 2 WARNING nova.compute.manager [req-4d327912-fc45-48c8-ac58-39d48357cf84 req-aa33c2fe-7d45-462a-98a8-18d4699ca223 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Received unexpected event network-vif-plugged-242e9f5d-5808-46e5-877c-ab6c97cacc64 for instance with vm_state active and task_state None.
Oct 02 12:43:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2515: 305 pgs: 305 active+clean; 445 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.8 MiB/s wr, 159 op/s
Oct 02 12:44:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:00.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:00 compute-0 nova_compute[257802]: 2025-10-02 12:44:00.095 2 DEBUG nova.compute.manager [req-73bcf7b1-479a-4b7a-8cd9-c7487cecf672 req-e4c94afd-cc8c-4f9a-9603-ca46e626d965 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Received event network-vif-deleted-763709ed-3fe4-45a4-8a2f-4b21f4534590 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:44:00 compute-0 nova_compute[257802]: 2025-10-02 12:44:00.095 2 INFO nova.compute.manager [req-73bcf7b1-479a-4b7a-8cd9-c7487cecf672 req-e4c94afd-cc8c-4f9a-9603-ca46e626d965 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Neutron deleted interface 763709ed-3fe4-45a4-8a2f-4b21f4534590; detaching it from the instance and deleting it from the info cache
Oct 02 12:44:00 compute-0 nova_compute[257802]: 2025-10-02 12:44:00.095 2 DEBUG nova.network.neutron [req-73bcf7b1-479a-4b7a-8cd9-c7487cecf672 req-e4c94afd-cc8c-4f9a-9603-ca46e626d965 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:44:00 compute-0 nova_compute[257802]: 2025-10-02 12:44:00.102 2 DEBUG nova.network.neutron [-] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:44:00 compute-0 nova_compute[257802]: 2025-10-02 12:44:00.123 2 INFO nova.compute.manager [-] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Took 3.36 seconds to deallocate network for instance.
Oct 02 12:44:00 compute-0 nova_compute[257802]: 2025-10-02 12:44:00.124 2 DEBUG nova.compute.manager [req-73bcf7b1-479a-4b7a-8cd9-c7487cecf672 req-e4c94afd-cc8c-4f9a-9603-ca46e626d965 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Detach interface failed, port_id=763709ed-3fe4-45a4-8a2f-4b21f4534590, reason: Instance 714ae75f-1424-4b97-b849-84e5b4e77668 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:44:00 compute-0 nova_compute[257802]: 2025-10-02 12:44:00.381 2 INFO nova.compute.manager [None req-9d3622a2-65df-41df-95c0-631b23f5a66f b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Took 0.26 seconds to detach 1 volumes for instance.
Oct 02 12:44:00 compute-0 nova_compute[257802]: 2025-10-02 12:44:00.434 2 DEBUG oslo_concurrency.lockutils [None req-9d3622a2-65df-41df-95c0-631b23f5a66f b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:00 compute-0 nova_compute[257802]: 2025-10-02 12:44:00.434 2 DEBUG oslo_concurrency.lockutils [None req-9d3622a2-65df-41df-95c0-631b23f5a66f b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:00 compute-0 nova_compute[257802]: 2025-10-02 12:44:00.533 2 DEBUG oslo_concurrency.processutils [None req-9d3622a2-65df-41df-95c0-631b23f5a66f b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:44:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:44:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:00.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:44:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:44:00 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/168303721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:44:00 compute-0 nova_compute[257802]: 2025-10-02 12:44:00.965 2 DEBUG oslo_concurrency.processutils [None req-9d3622a2-65df-41df-95c0-631b23f5a66f b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:44:00 compute-0 nova_compute[257802]: 2025-10-02 12:44:00.973 2 DEBUG nova.compute.provider_tree [None req-9d3622a2-65df-41df-95c0-631b23f5a66f b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:44:01 compute-0 nova_compute[257802]: 2025-10-02 12:44:01.071 2 DEBUG nova.scheduler.client.report [None req-9d3622a2-65df-41df-95c0-631b23f5a66f b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:44:01 compute-0 nova_compute[257802]: 2025-10-02 12:44:01.093 2 DEBUG oslo_concurrency.lockutils [None req-9d3622a2-65df-41df-95c0-631b23f5a66f b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:01 compute-0 nova_compute[257802]: 2025-10-02 12:44:01.123 2 INFO nova.scheduler.client.report [None req-9d3622a2-65df-41df-95c0-631b23f5a66f b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Deleted allocations for instance 714ae75f-1424-4b97-b849-84e5b4e77668
Oct 02 12:44:01 compute-0 ceph-mon[73607]: pgmap v2515: 305 pgs: 305 active+clean; 445 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.8 MiB/s wr, 159 op/s
Oct 02 12:44:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/168303721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:44:01 compute-0 nova_compute[257802]: 2025-10-02 12:44:01.218 2 DEBUG oslo_concurrency.lockutils [None req-9d3622a2-65df-41df-95c0-631b23f5a66f b82c89ad6c4a49e78943f7a92d0a6560 a41d99312f014c65adddea4f70536a15 - - default default] Lock "714ae75f-1424-4b97-b849-84e5b4e77668" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.269s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2516: 305 pgs: 305 active+clean; 445 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 160 op/s
Oct 02 12:44:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:02.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2310385740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:44:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:02.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:03 compute-0 nova_compute[257802]: 2025-10-02 12:44:03.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:03 compute-0 nova_compute[257802]: 2025-10-02 12:44:03.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:03 compute-0 ceph-mon[73607]: pgmap v2516: 305 pgs: 305 active+clean; 445 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 160 op/s
Oct 02 12:44:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/109427468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:44:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2517: 305 pgs: 305 active+clean; 445 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 137 op/s
Oct 02 12:44:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:04.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:04 compute-0 nova_compute[257802]: 2025-10-02 12:44:04.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:44:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:04.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:44:04 compute-0 podman[356783]: 2025-10-02 12:44:04.922871394 +0000 UTC m=+0.055093295 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 02 12:44:04 compute-0 podman[356781]: 2025-10-02 12:44:04.941256195 +0000 UTC m=+0.080149740 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 02 12:44:04 compute-0 podman[356782]: 2025-10-02 12:44:04.947186141 +0000 UTC m=+0.084282901 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20251001)
Oct 02 12:44:04 compute-0 ceph-mon[73607]: pgmap v2517: 305 pgs: 305 active+clean; 445 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 137 op/s
Oct 02 12:44:05 compute-0 nova_compute[257802]: 2025-10-02 12:44:05.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:05 compute-0 nova_compute[257802]: 2025-10-02 12:44:05.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:44:05 compute-0 nova_compute[257802]: 2025-10-02 12:44:05.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:05 compute-0 nova_compute[257802]: 2025-10-02 12:44:05.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:44:05 compute-0 nova_compute[257802]: 2025-10-02 12:44:05.167 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:44:05 compute-0 nova_compute[257802]: 2025-10-02 12:44:05.170 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409030.1686285, c70e8f51-9397-40dd-9bbe-210e60b75364 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:44:05 compute-0 nova_compute[257802]: 2025-10-02 12:44:05.170 2 INFO nova.compute.manager [-] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] VM Stopped (Lifecycle Event)
Oct 02 12:44:05 compute-0 nova_compute[257802]: 2025-10-02 12:44:05.239 2 DEBUG nova.compute.manager [None req-38f2e43e-99ea-488a-a3ef-9210e480afc2 - - - - - -] [instance: c70e8f51-9397-40dd-9bbe-210e60b75364] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:44:05 compute-0 nova_compute[257802]: 2025-10-02 12:44:05.846 2 DEBUG oslo_concurrency.lockutils [None req-59acebeb-290e-48af-a244-a90930c75a69 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquiring lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:05 compute-0 nova_compute[257802]: 2025-10-02 12:44:05.847 2 DEBUG oslo_concurrency.lockutils [None req-59acebeb-290e-48af-a244-a90930c75a69 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:05 compute-0 nova_compute[257802]: 2025-10-02 12:44:05.870 2 DEBUG nova.objects.instance [None req-59acebeb-290e-48af-a244-a90930c75a69 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lazy-loading 'flavor' on Instance uuid 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:44:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2518: 305 pgs: 305 active+clean; 445 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 657 KiB/s wr, 113 op/s
Oct 02 12:44:06 compute-0 nova_compute[257802]: 2025-10-02 12:44:06.062 2 DEBUG oslo_concurrency.lockutils [None req-59acebeb-290e-48af-a244-a90930c75a69 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:06.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:06 compute-0 nova_compute[257802]: 2025-10-02 12:44:06.407 2 DEBUG oslo_concurrency.lockutils [None req-59acebeb-290e-48af-a244-a90930c75a69 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquiring lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:06 compute-0 nova_compute[257802]: 2025-10-02 12:44:06.408 2 DEBUG oslo_concurrency.lockutils [None req-59acebeb-290e-48af-a244-a90930c75a69 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:06 compute-0 nova_compute[257802]: 2025-10-02 12:44:06.408 2 INFO nova.compute.manager [None req-59acebeb-290e-48af-a244-a90930c75a69 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Attaching volume 5711af91-e6ed-47a0-ad60-a4e65171a2af to /dev/vdb
Oct 02 12:44:06 compute-0 nova_compute[257802]: 2025-10-02 12:44:06.622 2 DEBUG os_brick.utils [None req-59acebeb-290e-48af-a244-a90930c75a69 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:44:06 compute-0 nova_compute[257802]: 2025-10-02 12:44:06.623 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:44:06 compute-0 nova_compute[257802]: 2025-10-02 12:44:06.633 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:44:06 compute-0 nova_compute[257802]: 2025-10-02 12:44:06.634 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[10023a8f-eb7e-426e-8e14-3f78c858fe99]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:06 compute-0 nova_compute[257802]: 2025-10-02 12:44:06.635 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:44:06 compute-0 nova_compute[257802]: 2025-10-02 12:44:06.643 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:44:06 compute-0 nova_compute[257802]: 2025-10-02 12:44:06.643 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[a3bc852d-d41d-4a05-a877-3481b8c600ef]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:06 compute-0 nova_compute[257802]: 2025-10-02 12:44:06.645 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:44:06 compute-0 nova_compute[257802]: 2025-10-02 12:44:06.654 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:44:06 compute-0 nova_compute[257802]: 2025-10-02 12:44:06.655 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[e5db386c-1896-4fc0-a6fe-9917bb5d6bf6]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:06 compute-0 nova_compute[257802]: 2025-10-02 12:44:06.657 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[6c333f4d-ff34-4e3c-b62e-097db76d845d]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:06 compute-0 nova_compute[257802]: 2025-10-02 12:44:06.658 2 DEBUG oslo_concurrency.processutils [None req-59acebeb-290e-48af-a244-a90930c75a69 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:44:06 compute-0 nova_compute[257802]: 2025-10-02 12:44:06.692 2 DEBUG oslo_concurrency.processutils [None req-59acebeb-290e-48af-a244-a90930c75a69 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:44:06 compute-0 nova_compute[257802]: 2025-10-02 12:44:06.694 2 DEBUG os_brick.initiator.connectors.lightos [None req-59acebeb-290e-48af-a244-a90930c75a69 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:44:06 compute-0 nova_compute[257802]: 2025-10-02 12:44:06.694 2 DEBUG os_brick.initiator.connectors.lightos [None req-59acebeb-290e-48af-a244-a90930c75a69 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:44:06 compute-0 nova_compute[257802]: 2025-10-02 12:44:06.695 2 DEBUG os_brick.initiator.connectors.lightos [None req-59acebeb-290e-48af-a244-a90930c75a69 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:44:06 compute-0 nova_compute[257802]: 2025-10-02 12:44:06.695 2 DEBUG os_brick.utils [None req-59acebeb-290e-48af-a244-a90930c75a69 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] <== get_connector_properties: return (72ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:44:06 compute-0 nova_compute[257802]: 2025-10-02 12:44:06.695 2 DEBUG nova.virt.block_device [None req-59acebeb-290e-48af-a244-a90930c75a69 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Updating existing volume attachment record: 1c9af883-4d21-48b6-9e77-fd71aed53dbf _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:44:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:06.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:07 compute-0 ceph-mon[73607]: pgmap v2518: 305 pgs: 305 active+clean; 445 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 657 KiB/s wr, 113 op/s
Oct 02 12:44:07 compute-0 nova_compute[257802]: 2025-10-02 12:44:07.551 2 DEBUG nova.objects.instance [None req-59acebeb-290e-48af-a244-a90930c75a69 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lazy-loading 'flavor' on Instance uuid 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:44:07 compute-0 nova_compute[257802]: 2025-10-02 12:44:07.571 2 DEBUG nova.virt.libvirt.driver [None req-59acebeb-290e-48af-a244-a90930c75a69 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Attempting to attach volume 5711af91-e6ed-47a0-ad60-a4e65171a2af with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 12:44:07 compute-0 nova_compute[257802]: 2025-10-02 12:44:07.574 2 DEBUG nova.virt.libvirt.guest [None req-59acebeb-290e-48af-a244-a90930c75a69 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 12:44:07 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:44:07 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-5711af91-e6ed-47a0-ad60-a4e65171a2af">
Oct 02 12:44:07 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:44:07 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:44:07 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:44:07 compute-0 nova_compute[257802]:   </source>
Oct 02 12:44:07 compute-0 nova_compute[257802]:   <auth username="openstack">
Oct 02 12:44:07 compute-0 nova_compute[257802]:     <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:44:07 compute-0 nova_compute[257802]:   </auth>
Oct 02 12:44:07 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:44:07 compute-0 nova_compute[257802]:   <serial>5711af91-e6ed-47a0-ad60-a4e65171a2af</serial>
Oct 02 12:44:07 compute-0 nova_compute[257802]: </disk>
Oct 02 12:44:07 compute-0 nova_compute[257802]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:44:07 compute-0 nova_compute[257802]: 2025-10-02 12:44:07.875 2 DEBUG nova.virt.libvirt.driver [None req-59acebeb-290e-48af-a244-a90930c75a69 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:44:07 compute-0 nova_compute[257802]: 2025-10-02 12:44:07.876 2 DEBUG nova.virt.libvirt.driver [None req-59acebeb-290e-48af-a244-a90930c75a69 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:44:07 compute-0 nova_compute[257802]: 2025-10-02 12:44:07.876 2 DEBUG nova.virt.libvirt.driver [None req-59acebeb-290e-48af-a244-a90930c75a69 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:44:07 compute-0 nova_compute[257802]: 2025-10-02 12:44:07.876 2 DEBUG nova.virt.libvirt.driver [None req-59acebeb-290e-48af-a244-a90930c75a69 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] No VIF found with MAC fa:16:3e:25:64:28, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:44:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2519: 305 pgs: 305 active+clean; 445 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 114 op/s
Oct 02 12:44:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:08.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:08 compute-0 nova_compute[257802]: 2025-10-02 12:44:08.150 2 DEBUG oslo_concurrency.lockutils [None req-59acebeb-290e-48af-a244-a90930c75a69 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3975239224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:08 compute-0 nova_compute[257802]: 2025-10-02 12:44:08.389 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409033.3879364, 714ae75f-1424-4b97-b849-84e5b4e77668 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:44:08 compute-0 nova_compute[257802]: 2025-10-02 12:44:08.389 2 INFO nova.compute.manager [-] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] VM Stopped (Lifecycle Event)
Oct 02 12:44:08 compute-0 nova_compute[257802]: 2025-10-02 12:44:08.413 2 DEBUG nova.compute.manager [None req-e5a04180-021f-4d4e-bced-14b54bef361d - - - - - -] [instance: 714ae75f-1424-4b97-b849-84e5b4e77668] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:44:08 compute-0 nova_compute[257802]: 2025-10-02 12:44:08.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:44:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:08.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:44:08 compute-0 podman[356865]: 2025-10-02 12:44:08.995489576 +0000 UTC m=+0.128753394 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:44:09 compute-0 nova_compute[257802]: 2025-10-02 12:44:09.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:09 compute-0 nova_compute[257802]: 2025-10-02 12:44:09.167 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:09 compute-0 ceph-mon[73607]: pgmap v2519: 305 pgs: 305 active+clean; 445 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 114 op/s
Oct 02 12:44:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4175023848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:44:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2520: 305 pgs: 305 active+clean; 456 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 671 KiB/s wr, 101 op/s
Oct 02 12:44:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:44:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:10.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:44:10 compute-0 nova_compute[257802]: 2025-10-02 12:44:10.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:10 compute-0 ovn_controller[148183]: 2025-10-02T12:44:10Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:25:64:28 10.100.0.6
Oct 02 12:44:10 compute-0 ovn_controller[148183]: 2025-10-02T12:44:10Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:25:64:28 10.100.0.6
Oct 02 12:44:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:10.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:11 compute-0 nova_compute[257802]: 2025-10-02 12:44:11.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:11 compute-0 ceph-mon[73607]: pgmap v2520: 305 pgs: 305 active+clean; 456 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 671 KiB/s wr, 101 op/s
Oct 02 12:44:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2528956362' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:44:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2528956362' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:44:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2521: 305 pgs: 305 active+clean; 439 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.5 MiB/s wr, 93 op/s
Oct 02 12:44:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:12.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:12 compute-0 nova_compute[257802]: 2025-10-02 12:44:12.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:12 compute-0 NetworkManager[44987]: <info>  [1759409052.3096] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/315)
Oct 02 12:44:12 compute-0 NetworkManager[44987]: <info>  [1759409052.3107] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/316)
Oct 02 12:44:12 compute-0 nova_compute[257802]: 2025-10-02 12:44:12.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:12 compute-0 ovn_controller[148183]: 2025-10-02T12:44:12Z|00695|binding|INFO|Releasing lport 15e9653f-d8e0-49a9-859e-c8a4f714df4e from this chassis (sb_readonly=0)
Oct 02 12:44:12 compute-0 sudo[356891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:44:12 compute-0 nova_compute[257802]: 2025-10-02 12:44:12.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:12 compute-0 sudo[356891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:12 compute-0 sudo[356891]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:12 compute-0 nova_compute[257802]: 2025-10-02 12:44:12.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:12 compute-0 sudo[356918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:44:12 compute-0 sudo[356918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:12 compute-0 sudo[356918]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:44:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:44:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:44:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:44:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:44:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:44:12 compute-0 ceph-mon[73607]: pgmap v2521: 305 pgs: 305 active+clean; 439 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.5 MiB/s wr, 93 op/s
Oct 02 12:44:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3067201084' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:44:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3067201084' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:44:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:12.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:13 compute-0 nova_compute[257802]: 2025-10-02 12:44:13.322 2 DEBUG nova.compute.manager [req-257f978b-ad61-4be3-8eb7-1c6cdec778db req-c1cbfd6b-376f-4dee-aad0-dd3c6b3a7fa2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Received event network-changed-242e9f5d-5808-46e5-877c-ab6c97cacc64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:44:13 compute-0 nova_compute[257802]: 2025-10-02 12:44:13.323 2 DEBUG nova.compute.manager [req-257f978b-ad61-4be3-8eb7-1c6cdec778db req-c1cbfd6b-376f-4dee-aad0-dd3c6b3a7fa2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Refreshing instance network info cache due to event network-changed-242e9f5d-5808-46e5-877c-ab6c97cacc64. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:44:13 compute-0 nova_compute[257802]: 2025-10-02 12:44:13.323 2 DEBUG oslo_concurrency.lockutils [req-257f978b-ad61-4be3-8eb7-1c6cdec778db req-c1cbfd6b-376f-4dee-aad0-dd3c6b3a7fa2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:44:13 compute-0 nova_compute[257802]: 2025-10-02 12:44:13.323 2 DEBUG oslo_concurrency.lockutils [req-257f978b-ad61-4be3-8eb7-1c6cdec778db req-c1cbfd6b-376f-4dee-aad0-dd3c6b3a7fa2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:44:13 compute-0 nova_compute[257802]: 2025-10-02 12:44:13.323 2 DEBUG nova.network.neutron [req-257f978b-ad61-4be3-8eb7-1c6cdec778db req-c1cbfd6b-376f-4dee-aad0-dd3c6b3a7fa2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Refreshing network info cache for port 242e9f5d-5808-46e5-877c-ab6c97cacc64 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:44:13 compute-0 nova_compute[257802]: 2025-10-02 12:44:13.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2522: 305 pgs: 305 active+clean; 409 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Oct 02 12:44:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:44:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:14.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:44:14 compute-0 nova_compute[257802]: 2025-10-02 12:44:14.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:14 compute-0 nova_compute[257802]: 2025-10-02 12:44:14.370 2 DEBUG nova.compute.manager [req-774e6867-0a8a-4e03-b7c4-7b93d7716b75 req-477b41ed-8dc2-48d0-aae3-4a433d74b04d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Received event network-changed-242e9f5d-5808-46e5-877c-ab6c97cacc64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:44:14 compute-0 nova_compute[257802]: 2025-10-02 12:44:14.370 2 DEBUG nova.compute.manager [req-774e6867-0a8a-4e03-b7c4-7b93d7716b75 req-477b41ed-8dc2-48d0-aae3-4a433d74b04d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Refreshing instance network info cache due to event network-changed-242e9f5d-5808-46e5-877c-ab6c97cacc64. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:44:14 compute-0 nova_compute[257802]: 2025-10-02 12:44:14.371 2 DEBUG oslo_concurrency.lockutils [req-774e6867-0a8a-4e03-b7c4-7b93d7716b75 req-477b41ed-8dc2-48d0-aae3-4a433d74b04d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:44:14 compute-0 nova_compute[257802]: 2025-10-02 12:44:14.699 2 DEBUG nova.network.neutron [req-257f978b-ad61-4be3-8eb7-1c6cdec778db req-c1cbfd6b-376f-4dee-aad0-dd3c6b3a7fa2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Updated VIF entry in instance network info cache for port 242e9f5d-5808-46e5-877c-ab6c97cacc64. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:44:14 compute-0 nova_compute[257802]: 2025-10-02 12:44:14.699 2 DEBUG nova.network.neutron [req-257f978b-ad61-4be3-8eb7-1c6cdec778db req-c1cbfd6b-376f-4dee-aad0-dd3c6b3a7fa2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Updating instance_info_cache with network_info: [{"id": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "address": "fa:16:3e:25:64:28", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap242e9f5d-58", "ovs_interfaceid": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:44:14 compute-0 nova_compute[257802]: 2025-10-02 12:44:14.728 2 DEBUG oslo_concurrency.lockutils [req-257f978b-ad61-4be3-8eb7-1c6cdec778db req-c1cbfd6b-376f-4dee-aad0-dd3c6b3a7fa2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:44:14 compute-0 nova_compute[257802]: 2025-10-02 12:44:14.729 2 DEBUG oslo_concurrency.lockutils [req-774e6867-0a8a-4e03-b7c4-7b93d7716b75 req-477b41ed-8dc2-48d0-aae3-4a433d74b04d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:44:14 compute-0 nova_compute[257802]: 2025-10-02 12:44:14.729 2 DEBUG nova.network.neutron [req-774e6867-0a8a-4e03-b7c4-7b93d7716b75 req-477b41ed-8dc2-48d0-aae3-4a433d74b04d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Refreshing network info cache for port 242e9f5d-5808-46e5-877c-ab6c97cacc64 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:44:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.002000047s ======
Oct 02 12:44:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:14.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Oct 02 12:44:15 compute-0 ceph-mon[73607]: pgmap v2522: 305 pgs: 305 active+clean; 409 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Oct 02 12:44:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2523: 305 pgs: 305 active+clean; 356 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Oct 02 12:44:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:16.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:44:16 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2324904890' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:44:16 compute-0 nova_compute[257802]: 2025-10-02 12:44:16.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:16 compute-0 nova_compute[257802]: 2025-10-02 12:44:16.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:44:16 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2324904890' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:44:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2324904890' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:44:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2324904890' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:44:16 compute-0 nova_compute[257802]: 2025-10-02 12:44:16.486 2 DEBUG nova.compute.manager [req-0244cab0-3ef0-4a6c-be91-c73ea6ad77e0 req-e3935d4b-3187-4af8-877c-aac2458ce703 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Received event network-changed-242e9f5d-5808-46e5-877c-ab6c97cacc64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:44:16 compute-0 nova_compute[257802]: 2025-10-02 12:44:16.487 2 DEBUG nova.compute.manager [req-0244cab0-3ef0-4a6c-be91-c73ea6ad77e0 req-e3935d4b-3187-4af8-877c-aac2458ce703 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Refreshing instance network info cache due to event network-changed-242e9f5d-5808-46e5-877c-ab6c97cacc64. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:44:16 compute-0 nova_compute[257802]: 2025-10-02 12:44:16.487 2 DEBUG oslo_concurrency.lockutils [req-0244cab0-3ef0-4a6c-be91-c73ea6ad77e0 req-e3935d4b-3187-4af8-877c-aac2458ce703 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:44:16 compute-0 nova_compute[257802]: 2025-10-02 12:44:16.635 2 DEBUG nova.network.neutron [req-774e6867-0a8a-4e03-b7c4-7b93d7716b75 req-477b41ed-8dc2-48d0-aae3-4a433d74b04d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Updated VIF entry in instance network info cache for port 242e9f5d-5808-46e5-877c-ab6c97cacc64. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:44:16 compute-0 nova_compute[257802]: 2025-10-02 12:44:16.636 2 DEBUG nova.network.neutron [req-774e6867-0a8a-4e03-b7c4-7b93d7716b75 req-477b41ed-8dc2-48d0-aae3-4a433d74b04d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Updating instance_info_cache with network_info: [{"id": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "address": "fa:16:3e:25:64:28", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap242e9f5d-58", "ovs_interfaceid": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:44:16 compute-0 nova_compute[257802]: 2025-10-02 12:44:16.662 2 DEBUG oslo_concurrency.lockutils [req-774e6867-0a8a-4e03-b7c4-7b93d7716b75 req-477b41ed-8dc2-48d0-aae3-4a433d74b04d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:44:16 compute-0 nova_compute[257802]: 2025-10-02 12:44:16.662 2 DEBUG oslo_concurrency.lockutils [req-0244cab0-3ef0-4a6c-be91-c73ea6ad77e0 req-e3935d4b-3187-4af8-877c-aac2458ce703 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:44:16 compute-0 nova_compute[257802]: 2025-10-02 12:44:16.663 2 DEBUG nova.network.neutron [req-0244cab0-3ef0-4a6c-be91-c73ea6ad77e0 req-e3935d4b-3187-4af8-877c-aac2458ce703 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Refreshing network info cache for port 242e9f5d-5808-46e5-877c-ab6c97cacc64 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:44:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:16.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:17 compute-0 ceph-mon[73607]: pgmap v2523: 305 pgs: 305 active+clean; 356 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Oct 02 12:44:17 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3504724591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:44:17 compute-0 nova_compute[257802]: 2025-10-02 12:44:17.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:17.767 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:44:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:17.769 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:44:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2524: 305 pgs: 305 active+clean; 306 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 148 op/s
Oct 02 12:44:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:18.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:18 compute-0 nova_compute[257802]: 2025-10-02 12:44:18.101 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:18 compute-0 nova_compute[257802]: 2025-10-02 12:44:18.102 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:44:18 compute-0 nova_compute[257802]: 2025-10-02 12:44:18.134 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:44:18 compute-0 nova_compute[257802]: 2025-10-02 12:44:18.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/764864428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:44:18 compute-0 nova_compute[257802]: 2025-10-02 12:44:18.680 2 DEBUG nova.network.neutron [req-0244cab0-3ef0-4a6c-be91-c73ea6ad77e0 req-e3935d4b-3187-4af8-877c-aac2458ce703 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Updated VIF entry in instance network info cache for port 242e9f5d-5808-46e5-877c-ab6c97cacc64. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:44:18 compute-0 nova_compute[257802]: 2025-10-02 12:44:18.681 2 DEBUG nova.network.neutron [req-0244cab0-3ef0-4a6c-be91-c73ea6ad77e0 req-e3935d4b-3187-4af8-877c-aac2458ce703 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Updating instance_info_cache with network_info: [{"id": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "address": "fa:16:3e:25:64:28", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap242e9f5d-58", "ovs_interfaceid": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:44:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:18 compute-0 nova_compute[257802]: 2025-10-02 12:44:18.762 2 DEBUG oslo_concurrency.lockutils [req-0244cab0-3ef0-4a6c-be91-c73ea6ad77e0 req-e3935d4b-3187-4af8-877c-aac2458ce703 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:44:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:18.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:19 compute-0 nova_compute[257802]: 2025-10-02 12:44:19.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:19 compute-0 ceph-mon[73607]: pgmap v2524: 305 pgs: 305 active+clean; 306 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 148 op/s
Oct 02 12:44:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e354 do_prune osdmap full prune enabled
Oct 02 12:44:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e355 e355: 3 total, 3 up, 3 in
Oct 02 12:44:19 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e355: 3 total, 3 up, 3 in
Oct 02 12:44:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2526: 305 pgs: 305 active+clean; 287 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 488 KiB/s rd, 3.9 MiB/s wr, 176 op/s
Oct 02 12:44:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:20.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:20 compute-0 nova_compute[257802]: 2025-10-02 12:44:20.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:20 compute-0 nova_compute[257802]: 2025-10-02 12:44:20.269 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:20 compute-0 nova_compute[257802]: 2025-10-02 12:44:20.270 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:20 compute-0 nova_compute[257802]: 2025-10-02 12:44:20.271 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:20 compute-0 nova_compute[257802]: 2025-10-02 12:44:20.271 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:44:20 compute-0 nova_compute[257802]: 2025-10-02 12:44:20.272 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:44:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:44:20 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3431885482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:44:20 compute-0 nova_compute[257802]: 2025-10-02 12:44:20.736 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:44:20 compute-0 ceph-mon[73607]: osdmap e355: 3 total, 3 up, 3 in
Oct 02 12:44:20 compute-0 ceph-mon[73607]: pgmap v2526: 305 pgs: 305 active+clean; 287 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 488 KiB/s rd, 3.9 MiB/s wr, 176 op/s
Oct 02 12:44:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3502647660' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:44:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3431885482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:44:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:20.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:21 compute-0 nova_compute[257802]: 2025-10-02 12:44:21.378 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:44:21 compute-0 nova_compute[257802]: 2025-10-02 12:44:21.378 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:44:21 compute-0 nova_compute[257802]: 2025-10-02 12:44:21.379 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:44:21 compute-0 nova_compute[257802]: 2025-10-02 12:44:21.556 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:44:21 compute-0 nova_compute[257802]: 2025-10-02 12:44:21.557 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4025MB free_disk=20.942886352539062GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:44:21 compute-0 nova_compute[257802]: 2025-10-02 12:44:21.558 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:21 compute-0 nova_compute[257802]: 2025-10-02 12:44:21.559 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2527: 305 pgs: 305 active+clean; 282 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 291 KiB/s rd, 2.9 MiB/s wr, 146 op/s
Oct 02 12:44:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:44:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:22.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:44:22 compute-0 nova_compute[257802]: 2025-10-02 12:44:22.616 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:44:22 compute-0 nova_compute[257802]: 2025-10-02 12:44:22.618 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:44:22 compute-0 nova_compute[257802]: 2025-10-02 12:44:22.619 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:44:22 compute-0 nova_compute[257802]: 2025-10-02 12:44:22.666 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:44:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:22.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:23 compute-0 ceph-mon[73607]: pgmap v2527: 305 pgs: 305 active+clean; 282 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 291 KiB/s rd, 2.9 MiB/s wr, 146 op/s
Oct 02 12:44:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:44:23 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/577776401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:44:23 compute-0 nova_compute[257802]: 2025-10-02 12:44:23.119 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:44:23 compute-0 nova_compute[257802]: 2025-10-02 12:44:23.125 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:44:23 compute-0 nova_compute[257802]: 2025-10-02 12:44:23.154 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:44:23 compute-0 nova_compute[257802]: 2025-10-02 12:44:23.231 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:44:23 compute-0 nova_compute[257802]: 2025-10-02 12:44:23.232 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:23 compute-0 nova_compute[257802]: 2025-10-02 12:44:23.233 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:23 compute-0 nova_compute[257802]: 2025-10-02 12:44:23.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e355 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2528: 305 pgs: 305 active+clean; 274 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 199 KiB/s rd, 2.2 MiB/s wr, 108 op/s
Oct 02 12:44:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:44:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:24.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:44:24 compute-0 nova_compute[257802]: 2025-10-02 12:44:24.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/577776401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:44:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2992762048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:44:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:44:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:24.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:44:25 compute-0 ceph-mon[73607]: pgmap v2528: 305 pgs: 305 active+clean; 274 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 199 KiB/s rd, 2.2 MiB/s wr, 108 op/s
Oct 02 12:44:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3968223177' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2529: 305 pgs: 305 active+clean; 266 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 73 KiB/s rd, 2.2 MiB/s wr, 89 op/s
Oct 02 12:44:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:26.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:26.771 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:44:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:26.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:26.962 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:26.963 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:26.963 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:27 compute-0 ceph-mon[73607]: pgmap v2529: 305 pgs: 305 active+clean; 266 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 73 KiB/s rd, 2.2 MiB/s wr, 89 op/s
Oct 02 12:44:27 compute-0 sudo[356996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:44:27 compute-0 sudo[356996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:27 compute-0 sudo[356996]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:27 compute-0 sudo[357021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:44:27 compute-0 sudo[357021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:27 compute-0 sudo[357021]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:27 compute-0 sudo[357046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:44:27 compute-0 sudo[357046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:27 compute-0 sudo[357046]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:27 compute-0 sudo[357071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:44:27 compute-0 sudo[357071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2530: 305 pgs: 305 active+clean; 272 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 2.0 MiB/s wr, 53 op/s
Oct 02 12:44:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:28.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:28 compute-0 sudo[357071]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:28 compute-0 nova_compute[257802]: 2025-10-02 12:44:28.507 2 DEBUG nova.compute.manager [req-176e808b-2a9a-4b97-9415-7dfd3050b668 req-484e2466-350c-423f-ae2f-68262356ac88 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Received event network-changed-242e9f5d-5808-46e5-877c-ab6c97cacc64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:44:28 compute-0 nova_compute[257802]: 2025-10-02 12:44:28.507 2 DEBUG nova.compute.manager [req-176e808b-2a9a-4b97-9415-7dfd3050b668 req-484e2466-350c-423f-ae2f-68262356ac88 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Refreshing instance network info cache due to event network-changed-242e9f5d-5808-46e5-877c-ab6c97cacc64. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:44:28 compute-0 nova_compute[257802]: 2025-10-02 12:44:28.507 2 DEBUG oslo_concurrency.lockutils [req-176e808b-2a9a-4b97-9415-7dfd3050b668 req-484e2466-350c-423f-ae2f-68262356ac88 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:44:28 compute-0 nova_compute[257802]: 2025-10-02 12:44:28.508 2 DEBUG oslo_concurrency.lockutils [req-176e808b-2a9a-4b97-9415-7dfd3050b668 req-484e2466-350c-423f-ae2f-68262356ac88 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:44:28 compute-0 nova_compute[257802]: 2025-10-02 12:44:28.508 2 DEBUG nova.network.neutron [req-176e808b-2a9a-4b97-9415-7dfd3050b668 req-484e2466-350c-423f-ae2f-68262356ac88 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Refreshing network info cache for port 242e9f5d-5808-46e5-877c-ab6c97cacc64 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:44:28 compute-0 nova_compute[257802]: 2025-10-02 12:44:28.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e355 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e355 do_prune osdmap full prune enabled
Oct 02 12:44:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e356 e356: 3 total, 3 up, 3 in
Oct 02 12:44:28 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e356: 3 total, 3 up, 3 in
Oct 02 12:44:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:44:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:28.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:44:29 compute-0 nova_compute[257802]: 2025-10-02 12:44:29.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:29 compute-0 ceph-mon[73607]: pgmap v2530: 305 pgs: 305 active+clean; 272 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 2.0 MiB/s wr, 53 op/s
Oct 02 12:44:29 compute-0 ceph-mon[73607]: osdmap e356: 3 total, 3 up, 3 in
Oct 02 12:44:29 compute-0 nova_compute[257802]: 2025-10-02 12:44:29.657 2 DEBUG oslo_concurrency.lockutils [None req-e0490c1a-4cc3-4bc8-995f-73fa167bf4a9 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquiring lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:29 compute-0 nova_compute[257802]: 2025-10-02 12:44:29.658 2 DEBUG oslo_concurrency.lockutils [None req-e0490c1a-4cc3-4bc8-995f-73fa167bf4a9 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:29 compute-0 nova_compute[257802]: 2025-10-02 12:44:29.658 2 INFO nova.compute.manager [None req-e0490c1a-4cc3-4bc8-995f-73fa167bf4a9 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Rebooting instance
Oct 02 12:44:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:44:29 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:44:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:44:29 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:44:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2532: 305 pgs: 305 active+clean; 313 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Oct 02 12:44:30 compute-0 nova_compute[257802]: 2025-10-02 12:44:30.068 2 DEBUG oslo_concurrency.lockutils [None req-e0490c1a-4cc3-4bc8-995f-73fa167bf4a9 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquiring lock "refresh_cache-4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:44:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:30.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:44:30 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:44:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:44:30 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:44:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:44:30 compute-0 nova_compute[257802]: 2025-10-02 12:44:30.624 2 DEBUG nova.network.neutron [req-176e808b-2a9a-4b97-9415-7dfd3050b668 req-484e2466-350c-423f-ae2f-68262356ac88 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Updated VIF entry in instance network info cache for port 242e9f5d-5808-46e5-877c-ab6c97cacc64. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:44:30 compute-0 nova_compute[257802]: 2025-10-02 12:44:30.625 2 DEBUG nova.network.neutron [req-176e808b-2a9a-4b97-9415-7dfd3050b668 req-484e2466-350c-423f-ae2f-68262356ac88 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Updating instance_info_cache with network_info: [{"id": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "address": "fa:16:3e:25:64:28", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap242e9f5d-58", "ovs_interfaceid": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:44:30 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:44:30 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 3197e72e-6d6e-474e-ba35-0a55263e3b78 does not exist
Oct 02 12:44:30 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 32e5da7a-be33-4c70-b462-145b2c94ef7f does not exist
Oct 02 12:44:30 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 48119c01-29e6-4ca9-bde7-e7f6ae39fd8f does not exist
Oct 02 12:44:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:44:30 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:44:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:44:30 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:44:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:44:30 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:44:30 compute-0 sudo[357130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:44:30 compute-0 sudo[357130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:30 compute-0 sudo[357130]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:30 compute-0 sudo[357155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:44:30 compute-0 sudo[357155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:30 compute-0 sudo[357155]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:30 compute-0 nova_compute[257802]: 2025-10-02 12:44:30.825 2 DEBUG oslo_concurrency.lockutils [req-176e808b-2a9a-4b97-9415-7dfd3050b668 req-484e2466-350c-423f-ae2f-68262356ac88 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:44:30 compute-0 nova_compute[257802]: 2025-10-02 12:44:30.826 2 DEBUG oslo_concurrency.lockutils [None req-e0490c1a-4cc3-4bc8-995f-73fa167bf4a9 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquired lock "refresh_cache-4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:44:30 compute-0 nova_compute[257802]: 2025-10-02 12:44:30.826 2 DEBUG nova.network.neutron [None req-e0490c1a-4cc3-4bc8-995f-73fa167bf4a9 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:44:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:44:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:44:30 compute-0 ceph-mon[73607]: pgmap v2532: 305 pgs: 305 active+clean; 313 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Oct 02 12:44:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:44:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:44:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:44:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:44:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:44:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:44:30 compute-0 sudo[357180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:44:30 compute-0 sudo[357180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:30 compute-0 sudo[357180]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:30.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:30 compute-0 sudo[357205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:44:30 compute-0 sudo[357205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:30 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Oct 02 12:44:30 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:44:30.984362) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:44:30 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Oct 02 12:44:30 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409070984484, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 966, "num_deletes": 253, "total_data_size": 1368775, "memory_usage": 1389648, "flush_reason": "Manual Compaction"}
Oct 02 12:44:30 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Oct 02 12:44:31 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409071029862, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 1352364, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 55342, "largest_seqno": 56307, "table_properties": {"data_size": 1347517, "index_size": 2371, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11219, "raw_average_key_size": 20, "raw_value_size": 1337629, "raw_average_value_size": 2440, "num_data_blocks": 102, "num_entries": 548, "num_filter_entries": 548, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409004, "oldest_key_time": 1759409004, "file_creation_time": 1759409070, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:44:31 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 45562 microseconds, and 5221 cpu microseconds.
Oct 02 12:44:31 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:44:31 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:44:31.029932) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 1352364 bytes OK
Oct 02 12:44:31 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:44:31.029962) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Oct 02 12:44:31 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:44:31.060719) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Oct 02 12:44:31 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:44:31.060802) EVENT_LOG_v1 {"time_micros": 1759409071060784, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:44:31 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:44:31.060893) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:44:31 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 1364136, prev total WAL file size 1364136, number of live WAL files 2.
Oct 02 12:44:31 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:44:31 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:44:31.061867) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Oct 02 12:44:31 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:44:31 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(1320KB)], [122(12MB)]
Oct 02 12:44:31 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409071061921, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 14118529, "oldest_snapshot_seqno": -1}
Oct 02 12:44:31 compute-0 podman[357269]: 2025-10-02 12:44:31.278450284 +0000 UTC m=+0.031192597 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:44:31 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 8459 keys, 12095452 bytes, temperature: kUnknown
Oct 02 12:44:31 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409071431982, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 12095452, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12039084, "index_size": 34112, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21189, "raw_key_size": 220198, "raw_average_key_size": 26, "raw_value_size": 11888769, "raw_average_value_size": 1405, "num_data_blocks": 1331, "num_entries": 8459, "num_filter_entries": 8459, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759409071, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:44:31 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:44:31 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:44:31.432223) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 12095452 bytes
Oct 02 12:44:31 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:44:31.441805) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 38.1 rd, 32.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 12.2 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(19.4) write-amplify(8.9) OK, records in: 8982, records dropped: 523 output_compression: NoCompression
Oct 02 12:44:31 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:44:31.441865) EVENT_LOG_v1 {"time_micros": 1759409071441852, "job": 74, "event": "compaction_finished", "compaction_time_micros": 370129, "compaction_time_cpu_micros": 28428, "output_level": 6, "num_output_files": 1, "total_output_size": 12095452, "num_input_records": 8982, "num_output_records": 8459, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:44:31 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:44:31 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409071442226, "job": 74, "event": "table_file_deletion", "file_number": 124}
Oct 02 12:44:31 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:44:31 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409071444120, "job": 74, "event": "table_file_deletion", "file_number": 122}
Oct 02 12:44:31 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:44:31.061697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:44:31 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:44:31.444272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:44:31 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:44:31.444279) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:44:31 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:44:31.444281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:44:31 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:44:31.444283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:44:31 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:44:31.444284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:44:31 compute-0 podman[357269]: 2025-10-02 12:44:31.470237525 +0000 UTC m=+0.222979818 container create 47dc457274269f2fbb72613ffa9ceb578966ccabcb3f44a21d1416efdc3483e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hoover, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:44:31 compute-0 systemd[1]: Started libpod-conmon-47dc457274269f2fbb72613ffa9ceb578966ccabcb3f44a21d1416efdc3483e5.scope.
Oct 02 12:44:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:44:31 compute-0 podman[357269]: 2025-10-02 12:44:31.71552577 +0000 UTC m=+0.468268113 container init 47dc457274269f2fbb72613ffa9ceb578966ccabcb3f44a21d1416efdc3483e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hoover, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:44:31 compute-0 podman[357269]: 2025-10-02 12:44:31.724942052 +0000 UTC m=+0.477684335 container start 47dc457274269f2fbb72613ffa9ceb578966ccabcb3f44a21d1416efdc3483e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hoover, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 12:44:31 compute-0 systemd[1]: libpod-47dc457274269f2fbb72613ffa9ceb578966ccabcb3f44a21d1416efdc3483e5.scope: Deactivated successfully.
Oct 02 12:44:31 compute-0 romantic_hoover[357285]: 167 167
Oct 02 12:44:31 compute-0 conmon[357285]: conmon 47dc457274269f2fbb72 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-47dc457274269f2fbb72613ffa9ceb578966ccabcb3f44a21d1416efdc3483e5.scope/container/memory.events
Oct 02 12:44:31 compute-0 podman[357269]: 2025-10-02 12:44:31.816215964 +0000 UTC m=+0.568958257 container attach 47dc457274269f2fbb72613ffa9ceb578966ccabcb3f44a21d1416efdc3483e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hoover, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:44:31 compute-0 podman[357269]: 2025-10-02 12:44:31.819484104 +0000 UTC m=+0.572226387 container died 47dc457274269f2fbb72613ffa9ceb578966ccabcb3f44a21d1416efdc3483e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hoover, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 12:44:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2533: 305 pgs: 305 active+clean; 313 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 2.1 MiB/s wr, 38 op/s
Oct 02 12:44:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff156133cd4c47732fea8ff09761afeb7cb53dff9da1b791af5afacae1a9e11a-merged.mount: Deactivated successfully.
Oct 02 12:44:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:32.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:32 compute-0 podman[357269]: 2025-10-02 12:44:32.128152657 +0000 UTC m=+0.880894940 container remove 47dc457274269f2fbb72613ffa9ceb578966ccabcb3f44a21d1416efdc3483e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hoover, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:44:32 compute-0 systemd[1]: libpod-conmon-47dc457274269f2fbb72613ffa9ceb578966ccabcb3f44a21d1416efdc3483e5.scope: Deactivated successfully.
Oct 02 12:44:32 compute-0 podman[357309]: 2025-10-02 12:44:32.277736801 +0000 UTC m=+0.021896489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:44:32 compute-0 podman[357309]: 2025-10-02 12:44:32.413608639 +0000 UTC m=+0.157768317 container create 6b12622247c37b26aea09d0a0a66f9751cc2ae1d9f858fc51432c7b0e3a71fca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:44:32 compute-0 systemd[1]: Started libpod-conmon-6b12622247c37b26aea09d0a0a66f9751cc2ae1d9f858fc51432c7b0e3a71fca.scope.
Oct 02 12:44:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a1df37b1457e2938c844179edcf0e795aed4e1c38b3dfd0e107949560a38f26/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a1df37b1457e2938c844179edcf0e795aed4e1c38b3dfd0e107949560a38f26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a1df37b1457e2938c844179edcf0e795aed4e1c38b3dfd0e107949560a38f26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a1df37b1457e2938c844179edcf0e795aed4e1c38b3dfd0e107949560a38f26/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a1df37b1457e2938c844179edcf0e795aed4e1c38b3dfd0e107949560a38f26/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:44:32 compute-0 podman[357309]: 2025-10-02 12:44:32.646990941 +0000 UTC m=+0.391150629 container init 6b12622247c37b26aea09d0a0a66f9751cc2ae1d9f858fc51432c7b0e3a71fca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:44:32 compute-0 podman[357309]: 2025-10-02 12:44:32.657774186 +0000 UTC m=+0.401933864 container start 6b12622247c37b26aea09d0a0a66f9751cc2ae1d9f858fc51432c7b0e3a71fca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:44:32 compute-0 podman[357309]: 2025-10-02 12:44:32.665964548 +0000 UTC m=+0.410124256 container attach 6b12622247c37b26aea09d0a0a66f9751cc2ae1d9f858fc51432c7b0e3a71fca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mccarthy, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:44:32 compute-0 sudo[357331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:44:32 compute-0 sudo[357331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:32 compute-0 sudo[357331]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:32 compute-0 sudo[357356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:44:32 compute-0 sudo[357356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:32 compute-0 sudo[357356]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:32.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:33 compute-0 ovn_controller[148183]: 2025-10-02T12:44:33Z|00696|binding|INFO|Releasing lport 15e9653f-d8e0-49a9-859e-c8a4f714df4e from this chassis (sb_readonly=0)
Oct 02 12:44:33 compute-0 nova_compute[257802]: 2025-10-02 12:44:33.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:33 compute-0 nova_compute[257802]: 2025-10-02 12:44:33.115 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:33 compute-0 nova_compute[257802]: 2025-10-02 12:44:33.116 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:33 compute-0 nova_compute[257802]: 2025-10-02 12:44:33.116 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:44:33 compute-0 ceph-mon[73607]: pgmap v2533: 305 pgs: 305 active+clean; 313 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 2.1 MiB/s wr, 38 op/s
Oct 02 12:44:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/665459189' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2679859747' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:33 compute-0 nifty_mccarthy[357325]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:44:33 compute-0 nifty_mccarthy[357325]: --> relative data size: 1.0
Oct 02 12:44:33 compute-0 nifty_mccarthy[357325]: --> All data devices are unavailable
Oct 02 12:44:33 compute-0 systemd[1]: libpod-6b12622247c37b26aea09d0a0a66f9751cc2ae1d9f858fc51432c7b0e3a71fca.scope: Deactivated successfully.
Oct 02 12:44:33 compute-0 podman[357309]: 2025-10-02 12:44:33.447621659 +0000 UTC m=+1.191781337 container died 6b12622247c37b26aea09d0a0a66f9751cc2ae1d9f858fc51432c7b0e3a71fca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 12:44:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a1df37b1457e2938c844179edcf0e795aed4e1c38b3dfd0e107949560a38f26-merged.mount: Deactivated successfully.
Oct 02 12:44:33 compute-0 nova_compute[257802]: 2025-10-02 12:44:33.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:33 compute-0 podman[357309]: 2025-10-02 12:44:33.838151752 +0000 UTC m=+1.582311430 container remove 6b12622247c37b26aea09d0a0a66f9751cc2ae1d9f858fc51432c7b0e3a71fca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mccarthy, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:44:33 compute-0 systemd[1]: libpod-conmon-6b12622247c37b26aea09d0a0a66f9751cc2ae1d9f858fc51432c7b0e3a71fca.scope: Deactivated successfully.
Oct 02 12:44:33 compute-0 sudo[357205]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:33 compute-0 sudo[357405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:44:33 compute-0 sudo[357405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:33 compute-0 sudo[357405]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2534: 305 pgs: 305 active+clean; 313 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.1 MiB/s wr, 37 op/s
Oct 02 12:44:33 compute-0 sudo[357430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:44:33 compute-0 sudo[357430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:34 compute-0 sudo[357430]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:34 compute-0 sudo[357455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:44:34 compute-0 sudo[357455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:34 compute-0 sudo[357455]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:34.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:34 compute-0 sudo[357480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:44:34 compute-0 sudo[357480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:34 compute-0 nova_compute[257802]: 2025-10-02 12:44:34.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:34 compute-0 nova_compute[257802]: 2025-10-02 12:44:34.579 2 DEBUG nova.network.neutron [None req-e0490c1a-4cc3-4bc8-995f-73fa167bf4a9 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Updating instance_info_cache with network_info: [{"id": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "address": "fa:16:3e:25:64:28", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap242e9f5d-58", "ovs_interfaceid": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:44:34 compute-0 nova_compute[257802]: 2025-10-02 12:44:34.623 2 DEBUG oslo_concurrency.lockutils [None req-e0490c1a-4cc3-4bc8-995f-73fa167bf4a9 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Releasing lock "refresh_cache-4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:44:34 compute-0 nova_compute[257802]: 2025-10-02 12:44:34.624 2 DEBUG nova.compute.manager [None req-e0490c1a-4cc3-4bc8-995f-73fa167bf4a9 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:44:34 compute-0 podman[357546]: 2025-10-02 12:44:34.548466092 +0000 UTC m=+0.021381337 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:44:34 compute-0 podman[357546]: 2025-10-02 12:44:34.646995572 +0000 UTC m=+0.119910787 container create bb13ae83b8d91056a91af39995809196406c07bf4629566e10061743124b309a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_williamson, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:44:34 compute-0 systemd[1]: Started libpod-conmon-bb13ae83b8d91056a91af39995809196406c07bf4629566e10061743124b309a.scope.
Oct 02 12:44:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:44:34 compute-0 podman[357546]: 2025-10-02 12:44:34.836144558 +0000 UTC m=+0.309059793 container init bb13ae83b8d91056a91af39995809196406c07bf4629566e10061743124b309a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_williamson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 12:44:34 compute-0 podman[357546]: 2025-10-02 12:44:34.843057878 +0000 UTC m=+0.315973093 container start bb13ae83b8d91056a91af39995809196406c07bf4629566e10061743124b309a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_williamson, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:44:34 compute-0 thirsty_williamson[357564]: 167 167
Oct 02 12:44:34 compute-0 systemd[1]: libpod-bb13ae83b8d91056a91af39995809196406c07bf4629566e10061743124b309a.scope: Deactivated successfully.
Oct 02 12:44:34 compute-0 podman[357546]: 2025-10-02 12:44:34.885389228 +0000 UTC m=+0.358304443 container attach bb13ae83b8d91056a91af39995809196406c07bf4629566e10061743124b309a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 12:44:34 compute-0 podman[357546]: 2025-10-02 12:44:34.886355362 +0000 UTC m=+0.359270577 container died bb13ae83b8d91056a91af39995809196406c07bf4629566e10061743124b309a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 12:44:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:34.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fa3c905a6b7a0e03b499d47c2471c3bc2bc2120b832ca47b52e0e125ebeb606-merged.mount: Deactivated successfully.
Oct 02 12:44:35 compute-0 podman[357546]: 2025-10-02 12:44:35.218400188 +0000 UTC m=+0.691315403 container remove bb13ae83b8d91056a91af39995809196406c07bf4629566e10061743124b309a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 12:44:35 compute-0 systemd[1]: libpod-conmon-bb13ae83b8d91056a91af39995809196406c07bf4629566e10061743124b309a.scope: Deactivated successfully.
Oct 02 12:44:35 compute-0 podman[357603]: 2025-10-02 12:44:35.281649352 +0000 UTC m=+0.210288177 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 12:44:35 compute-0 ceph-mon[73607]: pgmap v2534: 305 pgs: 305 active+clean; 313 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.1 MiB/s wr, 37 op/s
Oct 02 12:44:35 compute-0 podman[357582]: 2025-10-02 12:44:35.348917474 +0000 UTC m=+0.383331128 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 02 12:44:35 compute-0 podman[357581]: 2025-10-02 12:44:35.353909957 +0000 UTC m=+0.393205830 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 12:44:35 compute-0 podman[357647]: 2025-10-02 12:44:35.49692874 +0000 UTC m=+0.106584059 container create f4287b18c04374809072d4b277a548fc178c57e24c9400a17f8bebd9b0b0805c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:44:35 compute-0 podman[357647]: 2025-10-02 12:44:35.422205855 +0000 UTC m=+0.031861174 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:44:35 compute-0 systemd[1]: Started libpod-conmon-f4287b18c04374809072d4b277a548fc178c57e24c9400a17f8bebd9b0b0805c.scope.
Oct 02 12:44:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf724ce7a698be88983ef7d983f6c73485f1f88b66b4f746ca2bfa69fe8da2ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf724ce7a698be88983ef7d983f6c73485f1f88b66b4f746ca2bfa69fe8da2ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf724ce7a698be88983ef7d983f6c73485f1f88b66b4f746ca2bfa69fe8da2ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf724ce7a698be88983ef7d983f6c73485f1f88b66b4f746ca2bfa69fe8da2ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:44:35 compute-0 podman[357647]: 2025-10-02 12:44:35.752623762 +0000 UTC m=+0.362279091 container init f4287b18c04374809072d4b277a548fc178c57e24c9400a17f8bebd9b0b0805c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 12:44:35 compute-0 podman[357647]: 2025-10-02 12:44:35.763740215 +0000 UTC m=+0.373395514 container start f4287b18c04374809072d4b277a548fc178c57e24c9400a17f8bebd9b0b0805c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_wescoff, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Oct 02 12:44:35 compute-0 podman[357647]: 2025-10-02 12:44:35.823067852 +0000 UTC m=+0.432723151 container attach f4287b18c04374809072d4b277a548fc178c57e24c9400a17f8bebd9b0b0805c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_wescoff, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:44:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2535: 305 pgs: 305 active+clean; 313 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Oct 02 12:44:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:44:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:36.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:44:36 compute-0 loving_wescoff[357665]: {
Oct 02 12:44:36 compute-0 loving_wescoff[357665]:     "1": [
Oct 02 12:44:36 compute-0 loving_wescoff[357665]:         {
Oct 02 12:44:36 compute-0 loving_wescoff[357665]:             "devices": [
Oct 02 12:44:36 compute-0 loving_wescoff[357665]:                 "/dev/loop3"
Oct 02 12:44:36 compute-0 loving_wescoff[357665]:             ],
Oct 02 12:44:36 compute-0 loving_wescoff[357665]:             "lv_name": "ceph_lv0",
Oct 02 12:44:36 compute-0 loving_wescoff[357665]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:44:36 compute-0 loving_wescoff[357665]:             "lv_size": "7511998464",
Oct 02 12:44:36 compute-0 loving_wescoff[357665]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:44:36 compute-0 loving_wescoff[357665]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:44:36 compute-0 loving_wescoff[357665]:             "name": "ceph_lv0",
Oct 02 12:44:36 compute-0 loving_wescoff[357665]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:44:36 compute-0 loving_wescoff[357665]:             "tags": {
Oct 02 12:44:36 compute-0 loving_wescoff[357665]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:44:36 compute-0 loving_wescoff[357665]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:44:36 compute-0 loving_wescoff[357665]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:44:36 compute-0 loving_wescoff[357665]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:44:36 compute-0 loving_wescoff[357665]:                 "ceph.cluster_name": "ceph",
Oct 02 12:44:36 compute-0 loving_wescoff[357665]:                 "ceph.crush_device_class": "",
Oct 02 12:44:36 compute-0 loving_wescoff[357665]:                 "ceph.encrypted": "0",
Oct 02 12:44:36 compute-0 loving_wescoff[357665]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:44:36 compute-0 loving_wescoff[357665]:                 "ceph.osd_id": "1",
Oct 02 12:44:36 compute-0 loving_wescoff[357665]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:44:36 compute-0 loving_wescoff[357665]:                 "ceph.type": "block",
Oct 02 12:44:36 compute-0 loving_wescoff[357665]:                 "ceph.vdo": "0"
Oct 02 12:44:36 compute-0 loving_wescoff[357665]:             },
Oct 02 12:44:36 compute-0 loving_wescoff[357665]:             "type": "block",
Oct 02 12:44:36 compute-0 loving_wescoff[357665]:             "vg_name": "ceph_vg0"
Oct 02 12:44:36 compute-0 loving_wescoff[357665]:         }
Oct 02 12:44:36 compute-0 loving_wescoff[357665]:     ]
Oct 02 12:44:36 compute-0 loving_wescoff[357665]: }
Oct 02 12:44:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1677197474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:36 compute-0 systemd[1]: libpod-f4287b18c04374809072d4b277a548fc178c57e24c9400a17f8bebd9b0b0805c.scope: Deactivated successfully.
Oct 02 12:44:36 compute-0 podman[357647]: 2025-10-02 12:44:36.48772706 +0000 UTC m=+1.097382359 container died f4287b18c04374809072d4b277a548fc178c57e24c9400a17f8bebd9b0b0805c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 12:44:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf724ce7a698be88983ef7d983f6c73485f1f88b66b4f746ca2bfa69fe8da2ef-merged.mount: Deactivated successfully.
Oct 02 12:44:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:36.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:36 compute-0 podman[357647]: 2025-10-02 12:44:36.974880126 +0000 UTC m=+1.584535425 container remove f4287b18c04374809072d4b277a548fc178c57e24c9400a17f8bebd9b0b0805c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:44:36 compute-0 systemd[1]: libpod-conmon-f4287b18c04374809072d4b277a548fc178c57e24c9400a17f8bebd9b0b0805c.scope: Deactivated successfully.
Oct 02 12:44:37 compute-0 sudo[357480]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:37 compute-0 sudo[357690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:44:37 compute-0 sudo[357690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:37 compute-0 sudo[357690]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:37 compute-0 sudo[357715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:44:37 compute-0 sudo[357715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:37 compute-0 sudo[357715]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:37 compute-0 sudo[357740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:44:37 compute-0 sudo[357740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:37 compute-0 sudo[357740]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:37 compute-0 sudo[357765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:44:37 compute-0 sudo[357765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:37 compute-0 ceph-mon[73607]: pgmap v2535: 305 pgs: 305 active+clean; 313 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Oct 02 12:44:37 compute-0 podman[357831]: 2025-10-02 12:44:37.631338331 +0000 UTC m=+0.025764013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:44:37 compute-0 podman[357831]: 2025-10-02 12:44:37.810274267 +0000 UTC m=+0.204699929 container create 4f0e5f25ac3360d94aeb7508b89db68af7f56322c9230208c7566c80bc6ac835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_rosalind, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 12:44:37 compute-0 systemd[1]: Started libpod-conmon-4f0e5f25ac3360d94aeb7508b89db68af7f56322c9230208c7566c80bc6ac835.scope.
Oct 02 12:44:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:44:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2536: 305 pgs: 305 active+clean; 313 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Oct 02 12:44:38 compute-0 podman[357831]: 2025-10-02 12:44:38.099722448 +0000 UTC m=+0.494148130 container init 4f0e5f25ac3360d94aeb7508b89db68af7f56322c9230208c7566c80bc6ac835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 12:44:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:38.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:38 compute-0 podman[357831]: 2025-10-02 12:44:38.106605637 +0000 UTC m=+0.501031299 container start 4f0e5f25ac3360d94aeb7508b89db68af7f56322c9230208c7566c80bc6ac835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:44:38 compute-0 objective_rosalind[357848]: 167 167
Oct 02 12:44:38 compute-0 systemd[1]: libpod-4f0e5f25ac3360d94aeb7508b89db68af7f56322c9230208c7566c80bc6ac835.scope: Deactivated successfully.
Oct 02 12:44:38 compute-0 podman[357831]: 2025-10-02 12:44:38.310211868 +0000 UTC m=+0.704637560 container attach 4f0e5f25ac3360d94aeb7508b89db68af7f56322c9230208c7566c80bc6ac835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 12:44:38 compute-0 podman[357831]: 2025-10-02 12:44:38.310719001 +0000 UTC m=+0.705144663 container died 4f0e5f25ac3360d94aeb7508b89db68af7f56322c9230208c7566c80bc6ac835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_rosalind, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:44:38 compute-0 kernel: tap242e9f5d-58 (unregistering): left promiscuous mode
Oct 02 12:44:38 compute-0 NetworkManager[44987]: <info>  [1759409078.3512] device (tap242e9f5d-58): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:44:38 compute-0 ovn_controller[148183]: 2025-10-02T12:44:38Z|00697|binding|INFO|Releasing lport 242e9f5d-5808-46e5-877c-ab6c97cacc64 from this chassis (sb_readonly=0)
Oct 02 12:44:38 compute-0 nova_compute[257802]: 2025-10-02 12:44:38.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:38 compute-0 ovn_controller[148183]: 2025-10-02T12:44:38Z|00698|binding|INFO|Setting lport 242e9f5d-5808-46e5-877c-ab6c97cacc64 down in Southbound
Oct 02 12:44:38 compute-0 nova_compute[257802]: 2025-10-02 12:44:38.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:38 compute-0 ovn_controller[148183]: 2025-10-02T12:44:38Z|00699|binding|INFO|Removing iface tap242e9f5d-58 ovn-installed in OVS
Oct 02 12:44:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:38.426 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:64:28 10.100.0.6'], port_security=['fa:16:3e:25:64:28 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99e53961-97c9-4d79-b2bc-ba336c204821', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2e53064cd4d645f09bd59bbca09b98e0', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e3789894-a1c1-47ad-b248-e3a6730a7778 fe4782c0-da23-4067-acf9-d5016fb96624', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.197'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60a249b7-28a6-4e82-83ff-74dd7c2a70f8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=242e9f5d-5808-46e5-877c-ab6c97cacc64) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:44:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:38.427 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 242e9f5d-5808-46e5-877c-ab6c97cacc64 in datapath 99e53961-97c9-4d79-b2bc-ba336c204821 unbound from our chassis
Oct 02 12:44:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:38.429 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 99e53961-97c9-4d79-b2bc-ba336c204821, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:44:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:38.430 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b6b30a04-f899-4d36-9b98-f23a4c6c7159]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:38 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:38.430 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821 namespace which is not needed anymore
Oct 02 12:44:38 compute-0 nova_compute[257802]: 2025-10-02 12:44:38.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:38 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d0000009f.scope: Deactivated successfully.
Oct 02 12:44:38 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d0000009f.scope: Consumed 15.560s CPU time.
Oct 02 12:44:38 compute-0 systemd-machined[211836]: Machine qemu-78-instance-0000009f terminated.
Oct 02 12:44:38 compute-0 nova_compute[257802]: 2025-10-02 12:44:38.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:38 compute-0 nova_compute[257802]: 2025-10-02 12:44:38.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:38 compute-0 nova_compute[257802]: 2025-10-02 12:44:38.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:38.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:38 compute-0 nova_compute[257802]: 2025-10-02 12:44:38.972 2 INFO nova.virt.libvirt.driver [None req-e0490c1a-4cc3-4bc8-995f-73fa167bf4a9 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Instance shutdown successfully.
Oct 02 12:44:39 compute-0 kernel: tap242e9f5d-58: entered promiscuous mode
Oct 02 12:44:39 compute-0 NetworkManager[44987]: <info>  [1759409079.0411] manager: (tap242e9f5d-58): new Tun device (/org/freedesktop/NetworkManager/Devices/317)
Oct 02 12:44:39 compute-0 systemd-udevd[357869]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:44:39 compute-0 nova_compute[257802]: 2025-10-02 12:44:39.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:39 compute-0 ovn_controller[148183]: 2025-10-02T12:44:39Z|00700|binding|INFO|Claiming lport 242e9f5d-5808-46e5-877c-ab6c97cacc64 for this chassis.
Oct 02 12:44:39 compute-0 ovn_controller[148183]: 2025-10-02T12:44:39Z|00701|binding|INFO|242e9f5d-5808-46e5-877c-ab6c97cacc64: Claiming fa:16:3e:25:64:28 10.100.0.6
Oct 02 12:44:39 compute-0 NetworkManager[44987]: <info>  [1759409079.0562] device (tap242e9f5d-58): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:44:39 compute-0 NetworkManager[44987]: <info>  [1759409079.0575] device (tap242e9f5d-58): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:44:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-df7a659ccd01d402d63be91dff583ceedd5713372fd3f1a418d5b0687311ec21-merged.mount: Deactivated successfully.
Oct 02 12:44:39 compute-0 ovn_controller[148183]: 2025-10-02T12:44:39Z|00702|binding|INFO|Setting lport 242e9f5d-5808-46e5-877c-ab6c97cacc64 ovn-installed in OVS
Oct 02 12:44:39 compute-0 nova_compute[257802]: 2025-10-02 12:44:39.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:39 compute-0 nova_compute[257802]: 2025-10-02 12:44:39.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:39 compute-0 ovn_controller[148183]: 2025-10-02T12:44:39Z|00703|binding|INFO|Setting lport 242e9f5d-5808-46e5-877c-ab6c97cacc64 up in Southbound
Oct 02 12:44:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:39.080 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:64:28 10.100.0.6'], port_security=['fa:16:3e:25:64:28 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99e53961-97c9-4d79-b2bc-ba336c204821', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2e53064cd4d645f09bd59bbca09b98e0', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'e3789894-a1c1-47ad-b248-e3a6730a7778 fe4782c0-da23-4067-acf9-d5016fb96624', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.197'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60a249b7-28a6-4e82-83ff-74dd7c2a70f8, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=242e9f5d-5808-46e5-877c-ab6c97cacc64) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:44:39 compute-0 systemd-machined[211836]: New machine qemu-79-instance-0000009f.
Oct 02 12:44:39 compute-0 systemd[1]: Started Virtual Machine qemu-79-instance-0000009f.
Oct 02 12:44:39 compute-0 nova_compute[257802]: 2025-10-02 12:44:39.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:39 compute-0 ceph-mon[73607]: pgmap v2536: 305 pgs: 305 active+clean; 313 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Oct 02 12:44:39 compute-0 podman[357831]: 2025-10-02 12:44:39.699128127 +0000 UTC m=+2.093553829 container remove 4f0e5f25ac3360d94aeb7508b89db68af7f56322c9230208c7566c80bc6ac835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_rosalind, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:44:39 compute-0 systemd[1]: libpod-conmon-4f0e5f25ac3360d94aeb7508b89db68af7f56322c9230208c7566c80bc6ac835.scope: Deactivated successfully.
Oct 02 12:44:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2537: 305 pgs: 305 active+clean; 313 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 895 KiB/s wr, 99 op/s
Oct 02 12:44:40 compute-0 neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821[356739]: [NOTICE]   (356743) : haproxy version is 2.8.14-c23fe91
Oct 02 12:44:40 compute-0 neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821[356739]: [NOTICE]   (356743) : path to executable is /usr/sbin/haproxy
Oct 02 12:44:40 compute-0 neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821[356739]: [WARNING]  (356743) : Exiting Master process...
Oct 02 12:44:40 compute-0 neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821[356739]: [ALERT]    (356743) : Current worker (356745) exited with code 143 (Terminated)
Oct 02 12:44:40 compute-0 neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821[356739]: [WARNING]  (356743) : All workers exited. Exiting... (0)
Oct 02 12:44:40 compute-0 systemd[1]: libpod-7ff532a58bcfcdbdf22d01044b1264ffa4f77cfe839054cc1cc2e1be105bae99.scope: Deactivated successfully.
Oct 02 12:44:40 compute-0 podman[357952]: 2025-10-02 12:44:40.060025702 +0000 UTC m=+0.288476037 container died 7ff532a58bcfcdbdf22d01044b1264ffa4f77cfe839054cc1cc2e1be105bae99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:44:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:40.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.287 2 DEBUG nova.compute.manager [req-dc291d60-e450-48f6-a304-4e30b233cd98 req-82172791-c3b4-4360-a876-b08eeff5f18a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Received event network-vif-unplugged-242e9f5d-5808-46e5-877c-ab6c97cacc64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.287 2 DEBUG oslo_concurrency.lockutils [req-dc291d60-e450-48f6-a304-4e30b233cd98 req-82172791-c3b4-4360-a876-b08eeff5f18a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.287 2 DEBUG oslo_concurrency.lockutils [req-dc291d60-e450-48f6-a304-4e30b233cd98 req-82172791-c3b4-4360-a876-b08eeff5f18a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.287 2 DEBUG oslo_concurrency.lockutils [req-dc291d60-e450-48f6-a304-4e30b233cd98 req-82172791-c3b4-4360-a876-b08eeff5f18a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.288 2 DEBUG nova.compute.manager [req-dc291d60-e450-48f6-a304-4e30b233cd98 req-82172791-c3b4-4360-a876-b08eeff5f18a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] No waiting events found dispatching network-vif-unplugged-242e9f5d-5808-46e5-877c-ab6c97cacc64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.288 2 WARNING nova.compute.manager [req-dc291d60-e450-48f6-a304-4e30b233cd98 req-82172791-c3b4-4360-a876-b08eeff5f18a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Received unexpected event network-vif-unplugged-242e9f5d-5808-46e5-877c-ab6c97cacc64 for instance with vm_state active and task_state reboot_started.
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.288 2 DEBUG nova.compute.manager [req-dc291d60-e450-48f6-a304-4e30b233cd98 req-82172791-c3b4-4360-a876-b08eeff5f18a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Received event network-vif-plugged-242e9f5d-5808-46e5-877c-ab6c97cacc64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.288 2 DEBUG oslo_concurrency.lockutils [req-dc291d60-e450-48f6-a304-4e30b233cd98 req-82172791-c3b4-4360-a876-b08eeff5f18a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.288 2 DEBUG oslo_concurrency.lockutils [req-dc291d60-e450-48f6-a304-4e30b233cd98 req-82172791-c3b4-4360-a876-b08eeff5f18a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.289 2 DEBUG oslo_concurrency.lockutils [req-dc291d60-e450-48f6-a304-4e30b233cd98 req-82172791-c3b4-4360-a876-b08eeff5f18a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.289 2 DEBUG nova.compute.manager [req-dc291d60-e450-48f6-a304-4e30b233cd98 req-82172791-c3b4-4360-a876-b08eeff5f18a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] No waiting events found dispatching network-vif-plugged-242e9f5d-5808-46e5-877c-ab6c97cacc64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.289 2 WARNING nova.compute.manager [req-dc291d60-e450-48f6-a304-4e30b233cd98 req-82172791-c3b4-4360-a876-b08eeff5f18a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Received unexpected event network-vif-plugged-242e9f5d-5808-46e5-877c-ab6c97cacc64 for instance with vm_state active and task_state reboot_started.
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.289 2 DEBUG nova.compute.manager [req-dc291d60-e450-48f6-a304-4e30b233cd98 req-82172791-c3b4-4360-a876-b08eeff5f18a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Received event network-vif-plugged-242e9f5d-5808-46e5-877c-ab6c97cacc64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.289 2 DEBUG oslo_concurrency.lockutils [req-dc291d60-e450-48f6-a304-4e30b233cd98 req-82172791-c3b4-4360-a876-b08eeff5f18a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.290 2 DEBUG oslo_concurrency.lockutils [req-dc291d60-e450-48f6-a304-4e30b233cd98 req-82172791-c3b4-4360-a876-b08eeff5f18a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.290 2 DEBUG oslo_concurrency.lockutils [req-dc291d60-e450-48f6-a304-4e30b233cd98 req-82172791-c3b4-4360-a876-b08eeff5f18a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.290 2 DEBUG nova.compute.manager [req-dc291d60-e450-48f6-a304-4e30b233cd98 req-82172791-c3b4-4360-a876-b08eeff5f18a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] No waiting events found dispatching network-vif-plugged-242e9f5d-5808-46e5-877c-ab6c97cacc64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.290 2 WARNING nova.compute.manager [req-dc291d60-e450-48f6-a304-4e30b233cd98 req-82172791-c3b4-4360-a876-b08eeff5f18a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Received unexpected event network-vif-plugged-242e9f5d-5808-46e5-877c-ab6c97cacc64 for instance with vm_state active and task_state reboot_started.
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.290 2 DEBUG nova.compute.manager [req-dc291d60-e450-48f6-a304-4e30b233cd98 req-82172791-c3b4-4360-a876-b08eeff5f18a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Received event network-vif-plugged-242e9f5d-5808-46e5-877c-ab6c97cacc64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.291 2 DEBUG oslo_concurrency.lockutils [req-dc291d60-e450-48f6-a304-4e30b233cd98 req-82172791-c3b4-4360-a876-b08eeff5f18a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.291 2 DEBUG oslo_concurrency.lockutils [req-dc291d60-e450-48f6-a304-4e30b233cd98 req-82172791-c3b4-4360-a876-b08eeff5f18a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.291 2 DEBUG oslo_concurrency.lockutils [req-dc291d60-e450-48f6-a304-4e30b233cd98 req-82172791-c3b4-4360-a876-b08eeff5f18a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.291 2 DEBUG nova.compute.manager [req-dc291d60-e450-48f6-a304-4e30b233cd98 req-82172791-c3b4-4360-a876-b08eeff5f18a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] No waiting events found dispatching network-vif-plugged-242e9f5d-5808-46e5-877c-ab6c97cacc64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.291 2 WARNING nova.compute.manager [req-dc291d60-e450-48f6-a304-4e30b233cd98 req-82172791-c3b4-4360-a876-b08eeff5f18a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Received unexpected event network-vif-plugged-242e9f5d-5808-46e5-877c-ab6c97cacc64 for instance with vm_state active and task_state reboot_started.
Oct 02 12:44:40 compute-0 radosgw[92027]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Oct 02 12:44:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7ff532a58bcfcdbdf22d01044b1264ffa4f77cfe839054cc1cc2e1be105bae99-userdata-shm.mount: Deactivated successfully.
Oct 02 12:44:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-029a5b10d3d98f8daf0efe73ee491e42e0f9e8679e8d9ada87e5fa9c04b83859-merged.mount: Deactivated successfully.
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.463 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Removed pending event for 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.464 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409080.463386, 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.465 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] VM Resumed (Lifecycle Event)
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.471 2 INFO nova.virt.libvirt.driver [-] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Instance running successfully.
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.472 2 INFO nova.virt.libvirt.driver [None req-e0490c1a-4cc3-4bc8-995f-73fa167bf4a9 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Instance soft rebooted successfully.
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.473 2 DEBUG nova.compute.manager [None req-e0490c1a-4cc3-4bc8-995f-73fa167bf4a9 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.486 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.490 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:44:40 compute-0 podman[358016]: 2025-10-02 12:44:40.453781874 +0000 UTC m=+0.570337710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.586 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] During sync_power_state the instance has a pending task (reboot_started). Skip.
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.587 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409080.464943, 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.587 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] VM Started (Lifecycle Event)
Oct 02 12:44:40 compute-0 podman[357952]: 2025-10-02 12:44:40.608316961 +0000 UTC m=+0.836767276 container cleanup 7ff532a58bcfcdbdf22d01044b1264ffa4f77cfe839054cc1cc2e1be105bae99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 12:44:40 compute-0 systemd[1]: libpod-conmon-7ff532a58bcfcdbdf22d01044b1264ffa4f77cfe839054cc1cc2e1be105bae99.scope: Deactivated successfully.
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.673 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.677 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:44:40 compute-0 nova_compute[257802]: 2025-10-02 12:44:40.755 2 DEBUG oslo_concurrency.lockutils [None req-e0490c1a-4cc3-4bc8-995f-73fa167bf4a9 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 11.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:40 compute-0 podman[358016]: 2025-10-02 12:44:40.771478089 +0000 UTC m=+0.888033875 container create 028a63ca03eeb070d7dd8605103c11d8ea29735f5e151a8fba500ebe8e94affa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_sutherland, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:44:40 compute-0 ceph-mon[73607]: pgmap v2537: 305 pgs: 305 active+clean; 313 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 895 KiB/s wr, 99 op/s
Oct 02 12:44:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:44:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:40.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:44:41 compute-0 systemd[1]: Started libpod-conmon-028a63ca03eeb070d7dd8605103c11d8ea29735f5e151a8fba500ebe8e94affa.scope.
Oct 02 12:44:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:44:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71fb089546238a09e65ce9e8ea717beb6641d156e1829df187236fccea308dd9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:44:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71fb089546238a09e65ce9e8ea717beb6641d156e1829df187236fccea308dd9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:44:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71fb089546238a09e65ce9e8ea717beb6641d156e1829df187236fccea308dd9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:44:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71fb089546238a09e65ce9e8ea717beb6641d156e1829df187236fccea308dd9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:44:41 compute-0 podman[358016]: 2025-10-02 12:44:41.309728161 +0000 UTC m=+1.426283977 container init 028a63ca03eeb070d7dd8605103c11d8ea29735f5e151a8fba500ebe8e94affa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:44:41 compute-0 podman[358016]: 2025-10-02 12:44:41.319726427 +0000 UTC m=+1.436282223 container start 028a63ca03eeb070d7dd8605103c11d8ea29735f5e151a8fba500ebe8e94affa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_sutherland, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:44:41 compute-0 podman[358016]: 2025-10-02 12:44:41.525940182 +0000 UTC m=+1.642495968 container attach 028a63ca03eeb070d7dd8605103c11d8ea29735f5e151a8fba500ebe8e94affa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:44:41 compute-0 podman[358055]: 2025-10-02 12:44:41.659551284 +0000 UTC m=+1.026289892 container remove 7ff532a58bcfcdbdf22d01044b1264ffa4f77cfe839054cc1cc2e1be105bae99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:44:41 compute-0 podman[357910]: 2025-10-02 12:44:41.664535377 +0000 UTC m=+2.586484578 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 02 12:44:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:41.667 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[92e0b35c-ec63-4065-b3a7-95b34e458d2c]: (4, ('Thu Oct  2 12:44:39 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821 (7ff532a58bcfcdbdf22d01044b1264ffa4f77cfe839054cc1cc2e1be105bae99)\n7ff532a58bcfcdbdf22d01044b1264ffa4f77cfe839054cc1cc2e1be105bae99\nThu Oct  2 12:44:40 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821 (7ff532a58bcfcdbdf22d01044b1264ffa4f77cfe839054cc1cc2e1be105bae99)\n7ff532a58bcfcdbdf22d01044b1264ffa4f77cfe839054cc1cc2e1be105bae99\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:41.670 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b8199bd6-66ef-4368-bac8-f7c906255494]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:41.671 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99e53961-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:44:41 compute-0 nova_compute[257802]: 2025-10-02 12:44:41.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:41 compute-0 kernel: tap99e53961-90: left promiscuous mode
Oct 02 12:44:41 compute-0 nova_compute[257802]: 2025-10-02 12:44:41.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:41.691 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5886b35e-9229-4dbc-a094-dfc8eaf0b28a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:41.789 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[fbd7341a-c4c8-404b-9e16-e575e66ee5b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:41.791 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c55877-c208-45dd-afbc-7fbadbb3af2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:41.809 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f0ef538e-f891-450f-878d-876ba2c48809]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 710263, 'reachable_time': 44857, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358076, 'error': None, 'target': 'ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:41 compute-0 systemd[1]: run-netns-ovnmeta\x2d99e53961\x2d97c9\x2d4d79\x2db2bc\x2dba336c204821.mount: Deactivated successfully.
Oct 02 12:44:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:41.813 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:44:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:41.813 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[027298f8-f76b-4b9f-8126-79da862ef441]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:41.814 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 242e9f5d-5808-46e5-877c-ab6c97cacc64 in datapath 99e53961-97c9-4d79-b2bc-ba336c204821 unbound from our chassis
Oct 02 12:44:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:41.816 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 99e53961-97c9-4d79-b2bc-ba336c204821
Oct 02 12:44:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:41.826 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a5ea29fd-70fe-45c4-807a-417dfa03a1c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:41.827 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap99e53961-91 in ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:44:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:41.829 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap99e53961-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:44:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:41.830 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a5dd7acb-c507-401e-b50c-1d3c509b2af0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:41.830 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7bd18897-a287-45a6-9963-c7bb946b46f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:41.845 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[5751da7a-041b-4273-9448-efc8e5cecf14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:41.862 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3130d73e-cbcd-4dac-be30-256d2c7f359d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:41.889 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[b4c6f3b9-7362-452f-8d85-63e76c382a77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:41.894 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e0c2d0d9-a6ce-41bd-a376-ed225b79919a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:41 compute-0 systemd-udevd[358078]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:44:41 compute-0 NetworkManager[44987]: <info>  [1759409081.8990] manager: (tap99e53961-90): new Veth device (/org/freedesktop/NetworkManager/Devices/318)
Oct 02 12:44:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:41.930 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[9061ce5f-ca80-4273-a15f-fefbd048f7ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:41.933 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[4c5709ab-e9cb-4727-8ad8-b7711aa3c773]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:41 compute-0 NetworkManager[44987]: <info>  [1759409081.9601] device (tap99e53961-90): carrier: link connected
Oct 02 12:44:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:41.965 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[6b114153-2a08-4d18-a870-a66acfeae242]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2538: 305 pgs: 305 active+clean; 307 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 45 KiB/s wr, 120 op/s
Oct 02 12:44:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:41.984 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[831f51c5-14cf-4eb7-8ec4-12dc2aabe8d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99e53961-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:15:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714958, 'reachable_time': 20801, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358105, 'error': None, 'target': 'ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:41.998 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4699ec66-7265-4973-aca2-810e3d13a57b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4d:1562'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 714958, 'tstamp': 714958}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358107, 'error': None, 'target': 'ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:42.016 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ce7643d1-4d01-4079-8e4a-8f2b853f381a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99e53961-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:15:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714958, 'reachable_time': 20801, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 358109, 'error': None, 'target': 'ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:42.045 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e9514850-c182-47e4-8009-898a9e133ef8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:42.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:42.107 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7ed033f8-cf86-45eb-b120-1c4756b56f07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:42.109 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99e53961-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:42.109 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:42.109 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99e53961-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:44:42 compute-0 nova_compute[257802]: 2025-10-02 12:44:42.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:42 compute-0 NetworkManager[44987]: <info>  [1759409082.1122] manager: (tap99e53961-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/319)
Oct 02 12:44:42 compute-0 kernel: tap99e53961-90: entered promiscuous mode
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:42.114 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap99e53961-90, col_values=(('external_ids', {'iface-id': '15e9653f-d8e0-49a9-859e-c8a4f714df4e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:44:42 compute-0 nova_compute[257802]: 2025-10-02 12:44:42.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:42 compute-0 nova_compute[257802]: 2025-10-02 12:44:42.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:42.118 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/99e53961-97c9-4d79-b2bc-ba336c204821.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/99e53961-97c9-4d79-b2bc-ba336c204821.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:42.119 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0e92b74f-e27c-4d37-9708-52d378fe2d24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:42.120 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-99e53961-97c9-4d79-b2bc-ba336c204821
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/99e53961-97c9-4d79-b2bc-ba336c204821.pid.haproxy
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 99e53961-97c9-4d79-b2bc-ba336c204821
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:44:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:44:42.122 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821', 'env', 'PROCESS_TAG=haproxy-99e53961-97c9-4d79-b2bc-ba336c204821', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/99e53961-97c9-4d79-b2bc-ba336c204821.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:44:42 compute-0 ovn_controller[148183]: 2025-10-02T12:44:42Z|00704|binding|INFO|Releasing lport 15e9653f-d8e0-49a9-859e-c8a4f714df4e from this chassis (sb_readonly=0)
Oct 02 12:44:42 compute-0 happy_sutherland[358070]: {
Oct 02 12:44:42 compute-0 happy_sutherland[358070]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:44:42 compute-0 happy_sutherland[358070]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:44:42 compute-0 happy_sutherland[358070]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:44:42 compute-0 happy_sutherland[358070]:         "osd_id": 1,
Oct 02 12:44:42 compute-0 happy_sutherland[358070]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:44:42 compute-0 happy_sutherland[358070]:         "type": "bluestore"
Oct 02 12:44:42 compute-0 happy_sutherland[358070]:     }
Oct 02 12:44:42 compute-0 happy_sutherland[358070]: }
Oct 02 12:44:42 compute-0 systemd[1]: libpod-028a63ca03eeb070d7dd8605103c11d8ea29735f5e151a8fba500ebe8e94affa.scope: Deactivated successfully.
Oct 02 12:44:42 compute-0 conmon[358070]: conmon 028a63ca03eeb070d7dd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-028a63ca03eeb070d7dd8605103c11d8ea29735f5e151a8fba500ebe8e94affa.scope/container/memory.events
Oct 02 12:44:42 compute-0 podman[358016]: 2025-10-02 12:44:42.17271154 +0000 UTC m=+2.289267336 container died 028a63ca03eeb070d7dd8605103c11d8ea29735f5e151a8fba500ebe8e94affa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:44:42 compute-0 nova_compute[257802]: 2025-10-02 12:44:42.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:42 compute-0 nova_compute[257802]: 2025-10-02 12:44:42.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:44:42
Oct 02 12:44:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:44:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:44:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'images', 'volumes', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'vms', '.rgw.root']
Oct 02 12:44:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:44:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-71fb089546238a09e65ce9e8ea717beb6641d156e1829df187236fccea308dd9-merged.mount: Deactivated successfully.
Oct 02 12:44:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:44:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:44:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:44:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:44:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:44:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:44:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:42.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:43 compute-0 podman[358016]: 2025-10-02 12:44:43.229346917 +0000 UTC m=+3.345902733 container remove 028a63ca03eeb070d7dd8605103c11d8ea29735f5e151a8fba500ebe8e94affa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_sutherland, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:44:43 compute-0 sudo[357765]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:44:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:44:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:44:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:44:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:44:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:44:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:44:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:44:43 compute-0 systemd[1]: libpod-conmon-028a63ca03eeb070d7dd8605103c11d8ea29735f5e151a8fba500ebe8e94affa.scope: Deactivated successfully.
Oct 02 12:44:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:44:43 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 5c5ad138-3cd5-4fad-a9c1-2073a28a66e2 does not exist
Oct 02 12:44:43 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 65331152-99e8-42ea-af2b-a8569b0b24ed does not exist
Oct 02 12:44:43 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev e3786799-7758-42cf-b97c-b629c8487329 does not exist
Oct 02 12:44:43 compute-0 podman[358167]: 2025-10-02 12:44:43.334037217 +0000 UTC m=+0.026092781 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:44:43 compute-0 sudo[358178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:44:43 compute-0 sudo[358178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:43 compute-0 sudo[358178]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:43 compute-0 sudo[358203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:44:43 compute-0 sudo[358203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:43 compute-0 sudo[358203]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:43 compute-0 podman[358167]: 2025-10-02 12:44:43.588420307 +0000 UTC m=+0.280475851 container create 0f23ca896fc871e542db70527f42c86708b8d47e0b3bbb836bdfc7f71ff3fa58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:44:43 compute-0 ceph-mon[73607]: pgmap v2538: 305 pgs: 305 active+clean; 307 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 45 KiB/s wr, 120 op/s
Oct 02 12:44:43 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:44:43 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:44:43 compute-0 systemd[1]: Started libpod-conmon-0f23ca896fc871e542db70527f42c86708b8d47e0b3bbb836bdfc7f71ff3fa58.scope.
Oct 02 12:44:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:44:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c3072bbb4ef7c11f1256f1456a356dfca5a543290bdfbb7e543f6b70ff32f5b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:44:43 compute-0 nova_compute[257802]: 2025-10-02 12:44:43.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:43 compute-0 podman[358167]: 2025-10-02 12:44:43.710245389 +0000 UTC m=+0.402300983 container init 0f23ca896fc871e542db70527f42c86708b8d47e0b3bbb836bdfc7f71ff3fa58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 12:44:43 compute-0 podman[358167]: 2025-10-02 12:44:43.716370219 +0000 UTC m=+0.408425763 container start 0f23ca896fc871e542db70527f42c86708b8d47e0b3bbb836bdfc7f71ff3fa58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 12:44:43 compute-0 neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821[358233]: [NOTICE]   (358237) : New worker (358239) forked
Oct 02 12:44:43 compute-0 neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821[358233]: [NOTICE]   (358237) : Loading success.
Oct 02 12:44:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2539: 305 pgs: 305 active+clean; 290 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 44 KiB/s wr, 177 op/s
Oct 02 12:44:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:44.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:44:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:44:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:44:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:44:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:44:44 compute-0 nova_compute[257802]: 2025-10-02 12:44:44.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:44 compute-0 ceph-mon[73607]: pgmap v2539: 305 pgs: 305 active+clean; 290 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 44 KiB/s wr, 177 op/s
Oct 02 12:44:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:44.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1346795661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:44:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2540: 305 pgs: 305 active+clean; 267 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 44 KiB/s wr, 274 op/s
Oct 02 12:44:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:46.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:44:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:46.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:44:47 compute-0 ceph-mon[73607]: pgmap v2540: 305 pgs: 305 active+clean; 267 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 44 KiB/s wr, 274 op/s
Oct 02 12:44:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2541: 305 pgs: 305 active+clean; 267 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 44 KiB/s wr, 347 op/s
Oct 02 12:44:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:44:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:48.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:44:48 compute-0 nova_compute[257802]: 2025-10-02 12:44:48.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:48.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:49 compute-0 ceph-mon[73607]: pgmap v2541: 305 pgs: 305 active+clean; 267 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 44 KiB/s wr, 347 op/s
Oct 02 12:44:49 compute-0 nova_compute[257802]: 2025-10-02 12:44:49.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2542: 305 pgs: 305 active+clean; 267 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 28 KiB/s wr, 413 op/s
Oct 02 12:44:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:44:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:50.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:44:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:44:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:50.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:44:51 compute-0 nova_compute[257802]: 2025-10-02 12:44:51.071 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:51 compute-0 ceph-mon[73607]: pgmap v2542: 305 pgs: 305 active+clean; 267 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 28 KiB/s wr, 413 op/s
Oct 02 12:44:51 compute-0 nova_compute[257802]: 2025-10-02 12:44:51.188 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Triggering sync for uuid 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 12:44:51 compute-0 nova_compute[257802]: 2025-10-02 12:44:51.189 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:51 compute-0 nova_compute[257802]: 2025-10-02 12:44:51.189 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:51 compute-0 ovn_controller[148183]: 2025-10-02T12:44:51Z|00705|binding|INFO|Releasing lport 15e9653f-d8e0-49a9-859e-c8a4f714df4e from this chassis (sb_readonly=0)
Oct 02 12:44:51 compute-0 nova_compute[257802]: 2025-10-02 12:44:51.260 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:51 compute-0 nova_compute[257802]: 2025-10-02 12:44:51.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2543: 305 pgs: 305 active+clean; 274 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 721 KiB/s wr, 345 op/s
Oct 02 12:44:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:52.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:52 compute-0 sudo[358253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:44:52 compute-0 sudo[358253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:52 compute-0 sudo[358253]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:52.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:52 compute-0 sudo[358278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:44:52 compute-0 sudo[358278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:52 compute-0 sudo[358278]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:53 compute-0 ceph-mon[73607]: pgmap v2543: 305 pgs: 305 active+clean; 274 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 721 KiB/s wr, 345 op/s
Oct 02 12:44:53 compute-0 ovn_controller[148183]: 2025-10-02T12:44:53Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:25:64:28 10.100.0.6
Oct 02 12:44:53 compute-0 nova_compute[257802]: 2025-10-02 12:44:53.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2544: 305 pgs: 305 active+clean; 288 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.5 MiB/s wr, 333 op/s
Oct 02 12:44:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:54.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:54 compute-0 nova_compute[257802]: 2025-10-02 12:44:54.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:44:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:44:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:44:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:44:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021806806020844964 of space, bias 1.0, pg target 0.6542041806253489 quantized to 32 (current 32)
Oct 02 12:44:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:44:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004007486797412991 of space, bias 1.0, pg target 1.2022460392238972 quantized to 32 (current 32)
Oct 02 12:44:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:44:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:44:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:44:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8535323463381723 quantized to 32 (current 32)
Oct 02 12:44:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:44:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 12:44:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:44:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:44:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:44:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027172174530057695 quantized to 32 (current 32)
Oct 02 12:44:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:44:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 12:44:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:44:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:44:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:44:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 12:44:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:54.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:44:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/950524332' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:44:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:44:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/950524332' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:44:55 compute-0 ceph-mon[73607]: pgmap v2544: 305 pgs: 305 active+clean; 288 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.5 MiB/s wr, 333 op/s
Oct 02 12:44:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/950524332' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:44:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/950524332' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:44:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2545: 305 pgs: 305 active+clean; 298 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.1 MiB/s wr, 322 op/s
Oct 02 12:44:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:56.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:56 compute-0 nova_compute[257802]: 2025-10-02 12:44:56.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:56 compute-0 ceph-mon[73607]: pgmap v2545: 305 pgs: 305 active+clean; 298 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.1 MiB/s wr, 322 op/s
Oct 02 12:44:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:56.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2546: 305 pgs: 305 active+clean; 299 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 251 op/s
Oct 02 12:44:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:58.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:58 compute-0 nova_compute[257802]: 2025-10-02 12:44:58.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:44:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:58.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:59 compute-0 ceph-mon[73607]: pgmap v2546: 305 pgs: 305 active+clean; 299 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 251 op/s
Oct 02 12:44:59 compute-0 nova_compute[257802]: 2025-10-02 12:44:59.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2547: 305 pgs: 305 active+clean; 301 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 898 KiB/s rd, 2.2 MiB/s wr, 181 op/s
Oct 02 12:45:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:00.016 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:45:00 compute-0 nova_compute[257802]: 2025-10-02 12:45:00.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:00.018 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:45:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:00.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:45:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:00.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:45:01 compute-0 ceph-mon[73607]: pgmap v2547: 305 pgs: 305 active+clean; 301 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 898 KiB/s rd, 2.2 MiB/s wr, 181 op/s
Oct 02 12:45:01 compute-0 nova_compute[257802]: 2025-10-02 12:45:01.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2548: 305 pgs: 305 active+clean; 301 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 854 KiB/s rd, 2.2 MiB/s wr, 107 op/s
Oct 02 12:45:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:02.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2562379280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:02 compute-0 nova_compute[257802]: 2025-10-02 12:45:02.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:45:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:02.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:45:03 compute-0 ceph-mon[73607]: pgmap v2548: 305 pgs: 305 active+clean; 301 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 854 KiB/s rd, 2.2 MiB/s wr, 107 op/s
Oct 02 12:45:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4084718820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:03 compute-0 nova_compute[257802]: 2025-10-02 12:45:03.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2549: 305 pgs: 305 active+clean; 301 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 854 KiB/s rd, 1.5 MiB/s wr, 106 op/s
Oct 02 12:45:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:04.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:04 compute-0 nova_compute[257802]: 2025-10-02 12:45:04.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:04 compute-0 nova_compute[257802]: 2025-10-02 12:45:04.216 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:45:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:04.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:05 compute-0 nova_compute[257802]: 2025-10-02 12:45:05.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:45:05 compute-0 nova_compute[257802]: 2025-10-02 12:45:05.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:45:05 compute-0 ceph-mon[73607]: pgmap v2549: 305 pgs: 305 active+clean; 301 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 854 KiB/s rd, 1.5 MiB/s wr, 106 op/s
Oct 02 12:45:05 compute-0 podman[358309]: 2025-10-02 12:45:05.912616469 +0000 UTC m=+0.050604894 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:45:05 compute-0 podman[358310]: 2025-10-02 12:45:05.919767704 +0000 UTC m=+0.055731569 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 12:45:05 compute-0 podman[358311]: 2025-10-02 12:45:05.9203631 +0000 UTC m=+0.050306447 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid)
Oct 02 12:45:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2550: 305 pgs: 305 active+clean; 301 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 671 KiB/s rd, 657 KiB/s wr, 81 op/s
Oct 02 12:45:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:06.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:06.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:07.020 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:07 compute-0 ceph-mon[73607]: pgmap v2550: 305 pgs: 305 active+clean; 301 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 671 KiB/s rd, 657 KiB/s wr, 81 op/s
Oct 02 12:45:07 compute-0 nova_compute[257802]: 2025-10-02 12:45:07.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2551: 305 pgs: 305 active+clean; 301 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 341 KiB/s rd, 63 KiB/s wr, 34 op/s
Oct 02 12:45:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:08.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1111525070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:08 compute-0 nova_compute[257802]: 2025-10-02 12:45:08.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:45:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:08.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:45:09 compute-0 nova_compute[257802]: 2025-10-02 12:45:09.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:09 compute-0 ceph-mon[73607]: pgmap v2551: 305 pgs: 305 active+clean; 301 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 341 KiB/s rd, 63 KiB/s wr, 34 op/s
Oct 02 12:45:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2552: 305 pgs: 305 active+clean; 305 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 80 KiB/s rd, 312 KiB/s wr, 38 op/s
Oct 02 12:45:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:45:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:10.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:45:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3072466742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:10.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:11 compute-0 nova_compute[257802]: 2025-10-02 12:45:11.100 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:45:11 compute-0 nova_compute[257802]: 2025-10-02 12:45:11.100 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:45:11 compute-0 ceph-mon[73607]: pgmap v2552: 305 pgs: 305 active+clean; 305 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 80 KiB/s rd, 312 KiB/s wr, 38 op/s
Oct 02 12:45:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1711706408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:11 compute-0 podman[358367]: 2025-10-02 12:45:11.936029423 +0000 UTC m=+0.079442581 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:45:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2553: 305 pgs: 305 active+clean; 317 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 942 KiB/s wr, 38 op/s
Oct 02 12:45:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:45:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:12.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:45:12 compute-0 nova_compute[257802]: 2025-10-02 12:45:12.678 2 DEBUG oslo_concurrency.lockutils [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Acquiring lock "38b13275-2908-42f3-bb70-73c050f375ea" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:12 compute-0 nova_compute[257802]: 2025-10-02 12:45:12.679 2 DEBUG oslo_concurrency.lockutils [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lock "38b13275-2908-42f3-bb70-73c050f375ea" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:12 compute-0 nova_compute[257802]: 2025-10-02 12:45:12.679 2 INFO nova.compute.manager [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Unshelving
Oct 02 12:45:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:45:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:45:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:45:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:45:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:45:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:45:12 compute-0 nova_compute[257802]: 2025-10-02 12:45:12.730 2 INFO nova.virt.block_device [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Booting with volume 857b1b6f-42d5-4289-a74d-1acb4fd6b032 at /dev/vda
Oct 02 12:45:12 compute-0 ceph-mon[73607]: pgmap v2553: 305 pgs: 305 active+clean; 317 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 942 KiB/s wr, 38 op/s
Oct 02 12:45:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2092898847' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:45:12 compute-0 nova_compute[257802]: 2025-10-02 12:45:12.972 2 DEBUG os_brick.utils [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:45:12 compute-0 nova_compute[257802]: 2025-10-02 12:45:12.974 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:12 compute-0 nova_compute[257802]: 2025-10-02 12:45:12.988 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:12 compute-0 nova_compute[257802]: 2025-10-02 12:45:12.988 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[d40df14e-15ce-4e3a-8c7a-962f85c0239a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:12 compute-0 nova_compute[257802]: 2025-10-02 12:45:12.989 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:45:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:12.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:45:12 compute-0 nova_compute[257802]: 2025-10-02 12:45:12.997 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:12 compute-0 nova_compute[257802]: 2025-10-02 12:45:12.997 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[2562b395-248e-430a-9abb-d1d0c64ce475]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:12 compute-0 nova_compute[257802]: 2025-10-02 12:45:12.998 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:13 compute-0 nova_compute[257802]: 2025-10-02 12:45:13.007 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:13 compute-0 nova_compute[257802]: 2025-10-02 12:45:13.008 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[2a503bac-91cf-4afc-b497-03be3041127b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:13 compute-0 nova_compute[257802]: 2025-10-02 12:45:13.009 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[b6900951-e56a-4436-a521-4d08c78f6423]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:13 compute-0 nova_compute[257802]: 2025-10-02 12:45:13.009 2 DEBUG oslo_concurrency.processutils [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:13 compute-0 nova_compute[257802]: 2025-10-02 12:45:13.052 2 DEBUG oslo_concurrency.processutils [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] CMD "nvme version" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:13 compute-0 nova_compute[257802]: 2025-10-02 12:45:13.055 2 DEBUG os_brick.initiator.connectors.lightos [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:45:13 compute-0 nova_compute[257802]: 2025-10-02 12:45:13.055 2 DEBUG os_brick.initiator.connectors.lightos [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:45:13 compute-0 nova_compute[257802]: 2025-10-02 12:45:13.055 2 DEBUG os_brick.initiator.connectors.lightos [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:45:13 compute-0 nova_compute[257802]: 2025-10-02 12:45:13.055 2 DEBUG os_brick.utils [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] <== get_connector_properties: return (82ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:45:13 compute-0 nova_compute[257802]: 2025-10-02 12:45:13.056 2 DEBUG nova.virt.block_device [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Updating existing volume attachment record: 6ef1d958-2b5d-43fd-a484-a5fdc599cab6 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:45:13 compute-0 nova_compute[257802]: 2025-10-02 12:45:13.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:45:13 compute-0 sudo[358401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:45:13 compute-0 sudo[358401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:13 compute-0 sudo[358401]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:13 compute-0 sudo[358426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:45:13 compute-0 sudo[358426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:13 compute-0 sudo[358426]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:13 compute-0 nova_compute[257802]: 2025-10-02 12:45:13.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2407577364' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:45:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/670527462' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:45:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2554: 305 pgs: 305 active+clean; 368 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 2.3 MiB/s wr, 59 op/s
Oct 02 12:45:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:14.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:14 compute-0 nova_compute[257802]: 2025-10-02 12:45:14.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:14 compute-0 nova_compute[257802]: 2025-10-02 12:45:14.243 2 DEBUG oslo_concurrency.lockutils [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:14 compute-0 nova_compute[257802]: 2025-10-02 12:45:14.244 2 DEBUG oslo_concurrency.lockutils [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:14 compute-0 nova_compute[257802]: 2025-10-02 12:45:14.251 2 DEBUG nova.objects.instance [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lazy-loading 'pci_requests' on Instance uuid 38b13275-2908-42f3-bb70-73c050f375ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:45:14 compute-0 nova_compute[257802]: 2025-10-02 12:45:14.272 2 DEBUG nova.objects.instance [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lazy-loading 'numa_topology' on Instance uuid 38b13275-2908-42f3-bb70-73c050f375ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:45:14 compute-0 nova_compute[257802]: 2025-10-02 12:45:14.288 2 DEBUG nova.virt.hardware [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:45:14 compute-0 nova_compute[257802]: 2025-10-02 12:45:14.289 2 INFO nova.compute.claims [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:45:14 compute-0 nova_compute[257802]: 2025-10-02 12:45:14.516 2 DEBUG oslo_concurrency.processutils [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:45:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1880755275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:14 compute-0 nova_compute[257802]: 2025-10-02 12:45:14.963 2 DEBUG oslo_concurrency.processutils [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:14 compute-0 nova_compute[257802]: 2025-10-02 12:45:14.971 2 DEBUG nova.compute.provider_tree [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:45:14 compute-0 nova_compute[257802]: 2025-10-02 12:45:14.993 2 DEBUG nova.scheduler.client.report [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:45:14 compute-0 ceph-mon[73607]: pgmap v2554: 305 pgs: 305 active+clean; 368 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 2.3 MiB/s wr, 59 op/s
Oct 02 12:45:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:14.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:15 compute-0 nova_compute[257802]: 2025-10-02 12:45:15.039 2 DEBUG oslo_concurrency.lockutils [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:15 compute-0 nova_compute[257802]: 2025-10-02 12:45:15.230 2 INFO nova.network.neutron [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Updating port 36888ba0-b822-4067-a556-6a12a1136d08 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 02 12:45:15 compute-0 nova_compute[257802]: 2025-10-02 12:45:15.488 2 DEBUG nova.compute.manager [req-99aa4332-9290-49e6-b913-4de5daf0894c req-54678030-2587-4c2c-8768-56d72b6d0159 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Received event network-changed-242e9f5d-5808-46e5-877c-ab6c97cacc64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:45:15 compute-0 nova_compute[257802]: 2025-10-02 12:45:15.488 2 DEBUG nova.compute.manager [req-99aa4332-9290-49e6-b913-4de5daf0894c req-54678030-2587-4c2c-8768-56d72b6d0159 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Refreshing instance network info cache due to event network-changed-242e9f5d-5808-46e5-877c-ab6c97cacc64. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:45:15 compute-0 nova_compute[257802]: 2025-10-02 12:45:15.488 2 DEBUG oslo_concurrency.lockutils [req-99aa4332-9290-49e6-b913-4de5daf0894c req-54678030-2587-4c2c-8768-56d72b6d0159 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:45:15 compute-0 nova_compute[257802]: 2025-10-02 12:45:15.489 2 DEBUG oslo_concurrency.lockutils [req-99aa4332-9290-49e6-b913-4de5daf0894c req-54678030-2587-4c2c-8768-56d72b6d0159 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:45:15 compute-0 nova_compute[257802]: 2025-10-02 12:45:15.489 2 DEBUG nova.network.neutron [req-99aa4332-9290-49e6-b913-4de5daf0894c req-54678030-2587-4c2c-8768-56d72b6d0159 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Refreshing network info cache for port 242e9f5d-5808-46e5-877c-ab6c97cacc64 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:45:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2555: 305 pgs: 305 active+clean; 377 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 2.7 MiB/s wr, 60 op/s
Oct 02 12:45:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:16.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1880755275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3693070582' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:45:16 compute-0 nova_compute[257802]: 2025-10-02 12:45:16.764 2 DEBUG oslo_concurrency.lockutils [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Acquiring lock "refresh_cache-38b13275-2908-42f3-bb70-73c050f375ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:45:16 compute-0 nova_compute[257802]: 2025-10-02 12:45:16.765 2 DEBUG oslo_concurrency.lockutils [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Acquired lock "refresh_cache-38b13275-2908-42f3-bb70-73c050f375ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:45:16 compute-0 nova_compute[257802]: 2025-10-02 12:45:16.765 2 DEBUG nova.network.neutron [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:45:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:16.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:17 compute-0 nova_compute[257802]: 2025-10-02 12:45:17.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:45:17 compute-0 nova_compute[257802]: 2025-10-02 12:45:17.259 2 DEBUG nova.network.neutron [req-99aa4332-9290-49e6-b913-4de5daf0894c req-54678030-2587-4c2c-8768-56d72b6d0159 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Updated VIF entry in instance network info cache for port 242e9f5d-5808-46e5-877c-ab6c97cacc64. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:45:17 compute-0 nova_compute[257802]: 2025-10-02 12:45:17.260 2 DEBUG nova.network.neutron [req-99aa4332-9290-49e6-b913-4de5daf0894c req-54678030-2587-4c2c-8768-56d72b6d0159 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Updating instance_info_cache with network_info: [{"id": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "address": "fa:16:3e:25:64:28", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap242e9f5d-58", "ovs_interfaceid": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:45:17 compute-0 ceph-mon[73607]: pgmap v2555: 305 pgs: 305 active+clean; 377 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 2.7 MiB/s wr, 60 op/s
Oct 02 12:45:17 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3783318637' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:45:17 compute-0 nova_compute[257802]: 2025-10-02 12:45:17.422 2 DEBUG oslo_concurrency.lockutils [req-99aa4332-9290-49e6-b913-4de5daf0894c req-54678030-2587-4c2c-8768-56d72b6d0159 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:45:17 compute-0 nova_compute[257802]: 2025-10-02 12:45:17.628 2 DEBUG nova.compute.manager [req-f052b682-a55c-47d1-afc1-88929edc77be req-f287e266-a6a6-49ce-90cc-8fc357e071fa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Received event network-changed-36888ba0-b822-4067-a556-6a12a1136d08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:45:17 compute-0 nova_compute[257802]: 2025-10-02 12:45:17.628 2 DEBUG nova.compute.manager [req-f052b682-a55c-47d1-afc1-88929edc77be req-f287e266-a6a6-49ce-90cc-8fc357e071fa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Refreshing instance network info cache due to event network-changed-36888ba0-b822-4067-a556-6a12a1136d08. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:45:17 compute-0 nova_compute[257802]: 2025-10-02 12:45:17.629 2 DEBUG oslo_concurrency.lockutils [req-f052b682-a55c-47d1-afc1-88929edc77be req-f287e266-a6a6-49ce-90cc-8fc357e071fa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-38b13275-2908-42f3-bb70-73c050f375ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:45:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2556: 305 pgs: 305 active+clean; 394 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 3.6 MiB/s wr, 70 op/s
Oct 02 12:45:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:18.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.316 2 DEBUG nova.network.neutron [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Updating instance_info_cache with network_info: [{"id": "36888ba0-b822-4067-a556-6a12a1136d08", "address": "fa:16:3e:8d:89:9a", "network": {"id": "6ea0a90a-9528-4fe1-8b35-dfde9b35e85f", "bridge": "br-int", "label": "tempest-TestShelveInstance-563697374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8f9114c7ab4b6e9fc9650d4bd08af9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36888ba0-b8", "ovs_interfaceid": "36888ba0-b822-4067-a556-6a12a1136d08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:45:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2295967782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.524 2 DEBUG oslo_concurrency.lockutils [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Releasing lock "refresh_cache-38b13275-2908-42f3-bb70-73c050f375ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.526 2 DEBUG nova.virt.libvirt.driver [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.526 2 INFO nova.virt.libvirt.driver [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Creating image(s)
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.526 2 DEBUG nova.virt.libvirt.driver [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.526 2 DEBUG nova.virt.libvirt.driver [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Ensure instance console log exists: /var/lib/nova/instances/38b13275-2908-42f3-bb70-73c050f375ea/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.527 2 DEBUG oslo_concurrency.lockutils [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.527 2 DEBUG oslo_concurrency.lockutils [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.527 2 DEBUG oslo_concurrency.lockutils [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.529 2 DEBUG nova.virt.libvirt.driver [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Start _get_guest_xml network_info=[{"id": "36888ba0-b822-4067-a556-6a12a1136d08", "address": "fa:16:3e:8d:89:9a", "network": {"id": "6ea0a90a-9528-4fe1-8b35-dfde9b35e85f", "bridge": "br-int", "label": "tempest-TestShelveInstance-563697374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8f9114c7ab4b6e9fc9650d4bd08af9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36888ba0-b8", "ovs_interfaceid": "36888ba0-b822-4067-a556-6a12a1136d08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) 
rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'guest_format': None, 'attachment_id': '6ef1d958-2b5d-43fd-a484-a5fdc599cab6', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'delete_on_termination': True, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-857b1b6f-42d5-4289-a74d-1acb4fd6b032', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '857b1b6f-42d5-4289-a74d-1acb4fd6b032', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '38b13275-2908-42f3-bb70-73c050f375ea', 'attached_at': '', 'detached_at': '', 'volume_id': '857b1b6f-42d5-4289-a74d-1acb4fd6b032', 'serial': '857b1b6f-42d5-4289-a74d-1acb4fd6b032'}, 'device_type': 'disk', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.530 2 DEBUG oslo_concurrency.lockutils [req-f052b682-a55c-47d1-afc1-88929edc77be req-f287e266-a6a6-49ce-90cc-8fc357e071fa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-38b13275-2908-42f3-bb70-73c050f375ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.530 2 DEBUG nova.network.neutron [req-f052b682-a55c-47d1-afc1-88929edc77be req-f287e266-a6a6-49ce-90cc-8fc357e071fa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Refreshing network info cache for port 36888ba0-b822-4067-a556-6a12a1136d08 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.535 2 WARNING nova.virt.libvirt.driver [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.541 2 DEBUG nova.virt.libvirt.host [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.541 2 DEBUG nova.virt.libvirt.host [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.545 2 DEBUG nova.virt.libvirt.host [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.546 2 DEBUG nova.virt.libvirt.host [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.547 2 DEBUG nova.virt.libvirt.driver [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.548 2 DEBUG nova.virt.hardware [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.548 2 DEBUG nova.virt.hardware [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.549 2 DEBUG nova.virt.hardware [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.549 2 DEBUG nova.virt.hardware [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.549 2 DEBUG nova.virt.hardware [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.549 2 DEBUG nova.virt.hardware [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.549 2 DEBUG nova.virt.hardware [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.550 2 DEBUG nova.virt.hardware [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.550 2 DEBUG nova.virt.hardware [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.550 2 DEBUG nova.virt.hardware [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.551 2 DEBUG nova.virt.hardware [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.551 2 DEBUG nova.objects.instance [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 38b13275-2908-42f3-bb70-73c050f375ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.855 2 DEBUG nova.storage.rbd_utils [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] rbd image 38b13275-2908-42f3-bb70-73c050f375ea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:45:18 compute-0 nova_compute[257802]: 2025-10-02 12:45:18.860 2 DEBUG oslo_concurrency.processutils [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:19.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:45:19 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3134422757' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.293 2 DEBUG oslo_concurrency.processutils [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.350 2 DEBUG nova.virt.libvirt.vif [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:44:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-59400876',display_name='tempest-TestShelveInstance-server-59400876',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-59400876',id=161,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1912831029',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:44:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='4b8f9114c7ab4b6e9fc9650d4bd08af9',ramdisk_id='',reservation_id='r-iu4ddc0y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_nam
e='tempest-TestShelveInstance-1219039163',owner_user_name='tempest-TestShelveInstance-1219039163-project-member'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:45:12Z,user_data=None,user_id='56c6abe1bb704c8aa499677aeb9017f5',uuid=38b13275-2908-42f3-bb70-73c050f375ea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "36888ba0-b822-4067-a556-6a12a1136d08", "address": "fa:16:3e:8d:89:9a", "network": {"id": "6ea0a90a-9528-4fe1-8b35-dfde9b35e85f", "bridge": "br-int", "label": "tempest-TestShelveInstance-563697374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8f9114c7ab4b6e9fc9650d4bd08af9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36888ba0-b8", "ovs_interfaceid": "36888ba0-b822-4067-a556-6a12a1136d08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.351 2 DEBUG nova.network.os_vif_util [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Converting VIF {"id": "36888ba0-b822-4067-a556-6a12a1136d08", "address": "fa:16:3e:8d:89:9a", "network": {"id": "6ea0a90a-9528-4fe1-8b35-dfde9b35e85f", "bridge": "br-int", "label": "tempest-TestShelveInstance-563697374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8f9114c7ab4b6e9fc9650d4bd08af9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36888ba0-b8", "ovs_interfaceid": "36888ba0-b822-4067-a556-6a12a1136d08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.352 2 DEBUG nova.network.os_vif_util [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:89:9a,bridge_name='br-int',has_traffic_filtering=True,id=36888ba0-b822-4067-a556-6a12a1136d08,network=Network(6ea0a90a-9528-4fe1-8b35-dfde9b35e85f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36888ba0-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.353 2 DEBUG nova.objects.instance [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 38b13275-2908-42f3-bb70-73c050f375ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.387 2 DEBUG nova.virt.libvirt.driver [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:45:19 compute-0 nova_compute[257802]:   <uuid>38b13275-2908-42f3-bb70-73c050f375ea</uuid>
Oct 02 12:45:19 compute-0 nova_compute[257802]:   <name>instance-000000a1</name>
Oct 02 12:45:19 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:45:19 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:45:19 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:45:19 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:       <nova:name>tempest-TestShelveInstance-server-59400876</nova:name>
Oct 02 12:45:19 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:45:18</nova:creationTime>
Oct 02 12:45:19 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:45:19 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:45:19 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:45:19 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:45:19 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:45:19 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:45:19 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:45:19 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:45:19 compute-0 nova_compute[257802]:         <nova:user uuid="56c6abe1bb704c8aa499677aeb9017f5">tempest-TestShelveInstance-1219039163-project-member</nova:user>
Oct 02 12:45:19 compute-0 nova_compute[257802]:         <nova:project uuid="4b8f9114c7ab4b6e9fc9650d4bd08af9">tempest-TestShelveInstance-1219039163</nova:project>
Oct 02 12:45:19 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:45:19 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:45:19 compute-0 nova_compute[257802]:         <nova:port uuid="36888ba0-b822-4067-a556-6a12a1136d08">
Oct 02 12:45:19 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:45:19 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:45:19 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:45:19 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <system>
Oct 02 12:45:19 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:45:19 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:45:19 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:45:19 compute-0 nova_compute[257802]:       <entry name="serial">38b13275-2908-42f3-bb70-73c050f375ea</entry>
Oct 02 12:45:19 compute-0 nova_compute[257802]:       <entry name="uuid">38b13275-2908-42f3-bb70-73c050f375ea</entry>
Oct 02 12:45:19 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     </system>
Oct 02 12:45:19 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:45:19 compute-0 nova_compute[257802]:   <os>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:   </os>
Oct 02 12:45:19 compute-0 nova_compute[257802]:   <features>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:   </features>
Oct 02 12:45:19 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:45:19 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:45:19 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:45:19 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/38b13275-2908-42f3-bb70-73c050f375ea_disk.config">
Oct 02 12:45:19 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:       </source>
Oct 02 12:45:19 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:45:19 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:45:19 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:45:19 compute-0 nova_compute[257802]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:       <source protocol="rbd" name="volumes/volume-857b1b6f-42d5-4289-a74d-1acb4fd6b032">
Oct 02 12:45:19 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:       </source>
Oct 02 12:45:19 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:45:19 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:45:19 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:       <serial>857b1b6f-42d5-4289-a74d-1acb4fd6b032</serial>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:45:19 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:8d:89:9a"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:       <target dev="tap36888ba0-b8"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:45:19 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/38b13275-2908-42f3-bb70-73c050f375ea/console.log" append="off"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <video>
Oct 02 12:45:19 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     </video>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <input type="keyboard" bus="usb"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:45:19 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:45:19 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:45:19 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:45:19 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:45:19 compute-0 nova_compute[257802]: </domain>
Oct 02 12:45:19 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.388 2 DEBUG nova.compute.manager [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Preparing to wait for external event network-vif-plugged-36888ba0-b822-4067-a556-6a12a1136d08 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.389 2 DEBUG oslo_concurrency.lockutils [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Acquiring lock "38b13275-2908-42f3-bb70-73c050f375ea-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.389 2 DEBUG oslo_concurrency.lockutils [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lock "38b13275-2908-42f3-bb70-73c050f375ea-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.389 2 DEBUG oslo_concurrency.lockutils [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lock "38b13275-2908-42f3-bb70-73c050f375ea-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.390 2 DEBUG nova.virt.libvirt.vif [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:44:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-59400876',display_name='tempest-TestShelveInstance-server-59400876',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-59400876',id=161,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1912831029',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:44:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='4b8f9114c7ab4b6e9fc9650d4bd08af9',ramdisk_id='',reservation_id='r-iu4ddc0y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestShelveInstance-1219039163',owner_user_name='tempest-TestShelveInstance-1219039163-project-member'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:45:12Z,user_data=None,user_id='56c6abe1bb704c8aa499677aeb9017f5',uuid=38b13275-2908-42f3-bb70-73c050f375ea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "36888ba0-b822-4067-a556-6a12a1136d08", "address": "fa:16:3e:8d:89:9a", "network": {"id": "6ea0a90a-9528-4fe1-8b35-dfde9b35e85f", "bridge": "br-int", "label": "tempest-TestShelveInstance-563697374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8f9114c7ab4b6e9fc9650d4bd08af9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36888ba0-b8", "ovs_interfaceid": "36888ba0-b822-4067-a556-6a12a1136d08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.390 2 DEBUG nova.network.os_vif_util [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Converting VIF {"id": "36888ba0-b822-4067-a556-6a12a1136d08", "address": "fa:16:3e:8d:89:9a", "network": {"id": "6ea0a90a-9528-4fe1-8b35-dfde9b35e85f", "bridge": "br-int", "label": "tempest-TestShelveInstance-563697374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8f9114c7ab4b6e9fc9650d4bd08af9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36888ba0-b8", "ovs_interfaceid": "36888ba0-b822-4067-a556-6a12a1136d08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.391 2 DEBUG nova.network.os_vif_util [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:89:9a,bridge_name='br-int',has_traffic_filtering=True,id=36888ba0-b822-4067-a556-6a12a1136d08,network=Network(6ea0a90a-9528-4fe1-8b35-dfde9b35e85f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36888ba0-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.391 2 DEBUG os_vif [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:89:9a,bridge_name='br-int',has_traffic_filtering=True,id=36888ba0-b822-4067-a556-6a12a1136d08,network=Network(6ea0a90a-9528-4fe1-8b35-dfde9b35e85f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36888ba0-b8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.392 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.393 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.396 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap36888ba0-b8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.397 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap36888ba0-b8, col_values=(('external_ids', {'iface-id': '36888ba0-b822-4067-a556-6a12a1136d08', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8d:89:9a', 'vm-uuid': '38b13275-2908-42f3-bb70-73c050f375ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:19 compute-0 NetworkManager[44987]: <info>  [1759409119.3991] manager: (tap36888ba0-b8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/320)
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:19 compute-0 ceph-mon[73607]: pgmap v2556: 305 pgs: 305 active+clean; 394 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 3.6 MiB/s wr, 70 op/s
Oct 02 12:45:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1450786427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3134422757' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.411 2 INFO os_vif [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:89:9a,bridge_name='br-int',has_traffic_filtering=True,id=36888ba0-b822-4067-a556-6a12a1136d08,network=Network(6ea0a90a-9528-4fe1-8b35-dfde9b35e85f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36888ba0-b8')
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.478 2 DEBUG nova.virt.libvirt.driver [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.478 2 DEBUG nova.virt.libvirt.driver [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.479 2 DEBUG nova.virt.libvirt.driver [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] No VIF found with MAC fa:16:3e:8d:89:9a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.480 2 INFO nova.virt.libvirt.driver [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Using config drive
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.526 2 DEBUG nova.storage.rbd_utils [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] rbd image 38b13275-2908-42f3-bb70-73c050f375ea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.555 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.556 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.556 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.556 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.560 2 DEBUG nova.objects.instance [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 38b13275-2908-42f3-bb70-73c050f375ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:45:19 compute-0 nova_compute[257802]: 2025-10-02 12:45:19.650 2 DEBUG nova.objects.instance [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lazy-loading 'keypairs' on Instance uuid 38b13275-2908-42f3-bb70-73c050f375ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:45:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2557: 305 pgs: 305 active+clean; 394 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 125 op/s
Oct 02 12:45:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:20.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:21.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:21 compute-0 ceph-mon[73607]: pgmap v2557: 305 pgs: 305 active+clean; 394 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 125 op/s
Oct 02 12:45:21 compute-0 nova_compute[257802]: 2025-10-02 12:45:21.730 2 INFO nova.virt.libvirt.driver [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Creating config drive at /var/lib/nova/instances/38b13275-2908-42f3-bb70-73c050f375ea/disk.config
Oct 02 12:45:21 compute-0 nova_compute[257802]: 2025-10-02 12:45:21.737 2 DEBUG oslo_concurrency.processutils [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/38b13275-2908-42f3-bb70-73c050f375ea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyxhpgh1e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:21 compute-0 nova_compute[257802]: 2025-10-02 12:45:21.874 2 DEBUG oslo_concurrency.processutils [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/38b13275-2908-42f3-bb70-73c050f375ea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyxhpgh1e" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:21 compute-0 nova_compute[257802]: 2025-10-02 12:45:21.915 2 DEBUG nova.storage.rbd_utils [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] rbd image 38b13275-2908-42f3-bb70-73c050f375ea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:45:21 compute-0 nova_compute[257802]: 2025-10-02 12:45:21.920 2 DEBUG oslo_concurrency.processutils [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/38b13275-2908-42f3-bb70-73c050f375ea/disk.config 38b13275-2908-42f3-bb70-73c050f375ea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2558: 305 pgs: 305 active+clean; 394 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.3 MiB/s wr, 119 op/s
Oct 02 12:45:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:22.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:22 compute-0 nova_compute[257802]: 2025-10-02 12:45:22.590 2 DEBUG nova.network.neutron [req-f052b682-a55c-47d1-afc1-88929edc77be req-f287e266-a6a6-49ce-90cc-8fc357e071fa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Updated VIF entry in instance network info cache for port 36888ba0-b822-4067-a556-6a12a1136d08. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:45:22 compute-0 nova_compute[257802]: 2025-10-02 12:45:22.591 2 DEBUG nova.network.neutron [req-f052b682-a55c-47d1-afc1-88929edc77be req-f287e266-a6a6-49ce-90cc-8fc357e071fa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Updating instance_info_cache with network_info: [{"id": "36888ba0-b822-4067-a556-6a12a1136d08", "address": "fa:16:3e:8d:89:9a", "network": {"id": "6ea0a90a-9528-4fe1-8b35-dfde9b35e85f", "bridge": "br-int", "label": "tempest-TestShelveInstance-563697374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8f9114c7ab4b6e9fc9650d4bd08af9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36888ba0-b8", "ovs_interfaceid": "36888ba0-b822-4067-a556-6a12a1136d08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:45:22 compute-0 nova_compute[257802]: 2025-10-02 12:45:22.611 2 DEBUG oslo_concurrency.lockutils [req-f052b682-a55c-47d1-afc1-88929edc77be req-f287e266-a6a6-49ce-90cc-8fc357e071fa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-38b13275-2908-42f3-bb70-73c050f375ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:45:22 compute-0 ceph-mon[73607]: pgmap v2558: 305 pgs: 305 active+clean; 394 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.3 MiB/s wr, 119 op/s
Oct 02 12:45:22 compute-0 nova_compute[257802]: 2025-10-02 12:45:22.974 2 DEBUG oslo_concurrency.processutils [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/38b13275-2908-42f3-bb70-73c050f375ea/disk.config 38b13275-2908-42f3-bb70-73c050f375ea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:22 compute-0 nova_compute[257802]: 2025-10-02 12:45:22.974 2 INFO nova.virt.libvirt.driver [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Deleting local config drive /var/lib/nova/instances/38b13275-2908-42f3-bb70-73c050f375ea/disk.config because it was imported into RBD.
Oct 02 12:45:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:23.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:23 compute-0 kernel: tap36888ba0-b8: entered promiscuous mode
Oct 02 12:45:23 compute-0 NetworkManager[44987]: <info>  [1759409123.0313] manager: (tap36888ba0-b8): new Tun device (/org/freedesktop/NetworkManager/Devices/321)
Oct 02 12:45:23 compute-0 nova_compute[257802]: 2025-10-02 12:45:23.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:23 compute-0 ovn_controller[148183]: 2025-10-02T12:45:23Z|00706|binding|INFO|Claiming lport 36888ba0-b822-4067-a556-6a12a1136d08 for this chassis.
Oct 02 12:45:23 compute-0 ovn_controller[148183]: 2025-10-02T12:45:23Z|00707|binding|INFO|36888ba0-b822-4067-a556-6a12a1136d08: Claiming fa:16:3e:8d:89:9a 10.100.0.12
Oct 02 12:45:23 compute-0 nova_compute[257802]: 2025-10-02 12:45:23.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:23 compute-0 ovn_controller[148183]: 2025-10-02T12:45:23Z|00708|binding|INFO|Setting lport 36888ba0-b822-4067-a556-6a12a1136d08 ovn-installed in OVS
Oct 02 12:45:23 compute-0 nova_compute[257802]: 2025-10-02 12:45:23.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:23 compute-0 ovn_controller[148183]: 2025-10-02T12:45:23Z|00709|binding|INFO|Setting lport 36888ba0-b822-4067-a556-6a12a1136d08 up in Southbound
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:23.054 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:89:9a 10.100.0.12'], port_security=['fa:16:3e:8d:89:9a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '38b13275-2908-42f3-bb70-73c050f375ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b8f9114c7ab4b6e9fc9650d4bd08af9', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'cfad1590-a7c0-4c27-a7db-b88ec54c64dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.174'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=04a89c39-8141-4654-8368-c858180215b3, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=36888ba0-b822-4067-a556-6a12a1136d08) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:23.056 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 36888ba0-b822-4067-a556-6a12a1136d08 in datapath 6ea0a90a-9528-4fe1-8b35-dfde9b35e85f bound to our chassis
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:23.057 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6ea0a90a-9528-4fe1-8b35-dfde9b35e85f
Oct 02 12:45:23 compute-0 systemd-udevd[358592]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:23.068 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cc01358f-69c1-4247-9324-c827bd63afec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:23.070 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6ea0a90a-91 in ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:45:23 compute-0 systemd-machined[211836]: New machine qemu-80-instance-000000a1.
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:23.071 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6ea0a90a-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:23.071 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[63f28ee7-be62-4e81-8e90-5815d1532948]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:23.072 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[15fe0c4f-b9c3-4ef9-9f77-942864b48977]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:23 compute-0 NetworkManager[44987]: <info>  [1759409123.0820] device (tap36888ba0-b8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:45:23 compute-0 NetworkManager[44987]: <info>  [1759409123.0832] device (tap36888ba0-b8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:23.086 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[b732cc89-fe88-41fc-81e0-0a308ce2ce57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:23 compute-0 systemd[1]: Started Virtual Machine qemu-80-instance-000000a1.
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:23.101 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[77983f9e-3ddc-445b-a05a-9b04f33fdf5f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:23.129 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[85a00eae-47cb-4c9b-8467-b650289fda27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:23.134 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ee73b893-b155-43c4-aad1-c64c356aa11e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:23 compute-0 NetworkManager[44987]: <info>  [1759409123.1351] manager: (tap6ea0a90a-90): new Veth device (/org/freedesktop/NetworkManager/Devices/322)
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:23.164 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[afd2dabe-d465-4672-ada5-3c06da123cf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:23.167 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[45d7780e-5a0d-4463-a690-f59a1427fcef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:23 compute-0 NetworkManager[44987]: <info>  [1759409123.1963] device (tap6ea0a90a-90): carrier: link connected
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:23.208 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[aafc8aac-b83a-4f59-86d9-eff4840c4aff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:23.232 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[27fe836a-a657-4746-a796-e35d60793649]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ea0a90a-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:92:44'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 217], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719082, 'reachable_time': 40198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358626, 'error': None, 'target': 'ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:23.264 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b2958c39-3d4e-44f4-93ce-c646c731b082]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe67:9244'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719082, 'tstamp': 719082}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358627, 'error': None, 'target': 'ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:23.294 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7c0c4dc2-c021-4639-a7ff-5dafd3be916a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ea0a90a-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:92:44'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 217], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719082, 'reachable_time': 40198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 358628, 'error': None, 'target': 'ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:23 compute-0 nova_compute[257802]: 2025-10-02 12:45:23.297 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Updating instance_info_cache with network_info: [{"id": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "address": "fa:16:3e:25:64:28", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap242e9f5d-58", "ovs_interfaceid": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:45:23 compute-0 nova_compute[257802]: 2025-10-02 12:45:23.324 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:45:23 compute-0 nova_compute[257802]: 2025-10-02 12:45:23.325 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:45:23 compute-0 nova_compute[257802]: 2025-10-02 12:45:23.326 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:23.336 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[003e27e7-7290-403f-9b82-e9f00957e78b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:23 compute-0 nova_compute[257802]: 2025-10-02 12:45:23.371 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:23 compute-0 nova_compute[257802]: 2025-10-02 12:45:23.372 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:23 compute-0 nova_compute[257802]: 2025-10-02 12:45:23.372 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:23 compute-0 nova_compute[257802]: 2025-10-02 12:45:23.372 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:45:23 compute-0 nova_compute[257802]: 2025-10-02 12:45:23.373 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:23.403 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0c871488-06fe-44e5-9fcb-6ce00f9750d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:23.407 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ea0a90a-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:23.407 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:23.408 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ea0a90a-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:23 compute-0 NetworkManager[44987]: <info>  [1759409123.4105] manager: (tap6ea0a90a-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/323)
Oct 02 12:45:23 compute-0 kernel: tap6ea0a90a-90: entered promiscuous mode
Oct 02 12:45:23 compute-0 nova_compute[257802]: 2025-10-02 12:45:23.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:23.417 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6ea0a90a-90, col_values=(('external_ids', {'iface-id': '3850aa59-d3b6-4277-b937-ad9f4b8f7b4c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:23 compute-0 ovn_controller[148183]: 2025-10-02T12:45:23Z|00710|binding|INFO|Releasing lport 3850aa59-d3b6-4277-b937-ad9f4b8f7b4c from this chassis (sb_readonly=0)
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:23.419 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6ea0a90a-9528-4fe1-8b35-dfde9b35e85f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6ea0a90a-9528-4fe1-8b35-dfde9b35e85f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:45:23 compute-0 nova_compute[257802]: 2025-10-02 12:45:23.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:23.422 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[98d70813-3770-45a4-83a3-94c65ca88682]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:23.423 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/6ea0a90a-9528-4fe1-8b35-dfde9b35e85f.pid.haproxy
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 6ea0a90a-9528-4fe1-8b35-dfde9b35e85f
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:45:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:23.424 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f', 'env', 'PROCESS_TAG=haproxy-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6ea0a90a-9528-4fe1-8b35-dfde9b35e85f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:45:23 compute-0 nova_compute[257802]: 2025-10-02 12:45:23.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:45:23 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3423625658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:23 compute-0 nova_compute[257802]: 2025-10-02 12:45:23.850 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:23 compute-0 podman[358722]: 2025-10-02 12:45:23.759369032 +0000 UTC m=+0.025328203 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:45:23 compute-0 podman[358722]: 2025-10-02 12:45:23.91399311 +0000 UTC m=+0.179952261 container create 44f5586b3a27b9b41f7fc6cad820e5a4ec20f03a83054868f1a1460e1b90fdf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 02 12:45:23 compute-0 nova_compute[257802]: 2025-10-02 12:45:23.950 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409123.9497387, 38b13275-2908-42f3-bb70-73c050f375ea => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:45:23 compute-0 nova_compute[257802]: 2025-10-02 12:45:23.950 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] VM Started (Lifecycle Event)
Oct 02 12:45:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2559: 305 pgs: 305 active+clean; 394 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.7 MiB/s wr, 143 op/s
Oct 02 12:45:23 compute-0 systemd[1]: Started libpod-conmon-44f5586b3a27b9b41f7fc6cad820e5a4ec20f03a83054868f1a1460e1b90fdf7.scope.
Oct 02 12:45:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:45:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32504923aae0cdcc6674f735804da4aa080d5356ad5f46ac608ff07ecd185e06/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:45:24 compute-0 nova_compute[257802]: 2025-10-02 12:45:24.033 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:45:24 compute-0 nova_compute[257802]: 2025-10-02 12:45:24.038 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409123.9498973, 38b13275-2908-42f3-bb70-73c050f375ea => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:45:24 compute-0 nova_compute[257802]: 2025-10-02 12:45:24.038 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] VM Paused (Lifecycle Event)
Oct 02 12:45:24 compute-0 nova_compute[257802]: 2025-10-02 12:45:24.058 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:45:24 compute-0 nova_compute[257802]: 2025-10-02 12:45:24.058 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:45:24 compute-0 nova_compute[257802]: 2025-10-02 12:45:24.058 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:45:24 compute-0 podman[358722]: 2025-10-02 12:45:24.061145465 +0000 UTC m=+0.327104616 container init 44f5586b3a27b9b41f7fc6cad820e5a4ec20f03a83054868f1a1460e1b90fdf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 12:45:24 compute-0 nova_compute[257802]: 2025-10-02 12:45:24.063 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:45:24 compute-0 nova_compute[257802]: 2025-10-02 12:45:24.063 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:45:24 compute-0 podman[358722]: 2025-10-02 12:45:24.070781062 +0000 UTC m=+0.336740213 container start 44f5586b3a27b9b41f7fc6cad820e5a4ec20f03a83054868f1a1460e1b90fdf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 12:45:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3423625658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:24 compute-0 neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f[358740]: [NOTICE]   (358744) : New worker (358746) forked
Oct 02 12:45:24 compute-0 neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f[358740]: [NOTICE]   (358744) : Loading success.
Oct 02 12:45:24 compute-0 nova_compute[257802]: 2025-10-02 12:45:24.107 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:45:24 compute-0 nova_compute[257802]: 2025-10-02 12:45:24.109 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:45:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:24.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:24 compute-0 nova_compute[257802]: 2025-10-02 12:45:24.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:24 compute-0 nova_compute[257802]: 2025-10-02 12:45:24.195 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:45:24 compute-0 nova_compute[257802]: 2025-10-02 12:45:24.230 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:45:24 compute-0 nova_compute[257802]: 2025-10-02 12:45:24.231 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3966MB free_disk=20.900821685791016GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:45:24 compute-0 nova_compute[257802]: 2025-10-02 12:45:24.231 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:24 compute-0 nova_compute[257802]: 2025-10-02 12:45:24.231 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:24 compute-0 nova_compute[257802]: 2025-10-02 12:45:24.371 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:45:24 compute-0 nova_compute[257802]: 2025-10-02 12:45:24.372 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 38b13275-2908-42f3-bb70-73c050f375ea actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:45:24 compute-0 nova_compute[257802]: 2025-10-02 12:45:24.372 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:45:24 compute-0 nova_compute[257802]: 2025-10-02 12:45:24.372 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:45:24 compute-0 nova_compute[257802]: 2025-10-02 12:45:24.391 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing inventories for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:45:24 compute-0 nova_compute[257802]: 2025-10-02 12:45:24.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:24 compute-0 nova_compute[257802]: 2025-10-02 12:45:24.422 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating ProviderTree inventory for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:45:24 compute-0 nova_compute[257802]: 2025-10-02 12:45:24.422 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating inventory in ProviderTree for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:45:24 compute-0 nova_compute[257802]: 2025-10-02 12:45:24.462 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing aggregate associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:45:24 compute-0 nova_compute[257802]: 2025-10-02 12:45:24.507 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing trait associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:45:24 compute-0 nova_compute[257802]: 2025-10-02 12:45:24.609 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:45:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:25.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:45:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:45:25 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3392713089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:25 compute-0 nova_compute[257802]: 2025-10-02 12:45:25.058 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:25 compute-0 nova_compute[257802]: 2025-10-02 12:45:25.063 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:45:25 compute-0 ceph-mon[73607]: pgmap v2559: 305 pgs: 305 active+clean; 394 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.7 MiB/s wr, 143 op/s
Oct 02 12:45:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3392713089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:25 compute-0 nova_compute[257802]: 2025-10-02 12:45:25.083 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:45:25 compute-0 nova_compute[257802]: 2025-10-02 12:45:25.085 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:45:25 compute-0 nova_compute[257802]: 2025-10-02 12:45:25.085 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:25 compute-0 nova_compute[257802]: 2025-10-02 12:45:25.208 2 DEBUG nova.compute.manager [req-bff4fdfd-1be3-439d-b726-49835c44a50e req-505efeb1-e3bf-464b-8c7b-3a1e4b8c7b68 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Received event network-vif-plugged-36888ba0-b822-4067-a556-6a12a1136d08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:45:25 compute-0 nova_compute[257802]: 2025-10-02 12:45:25.209 2 DEBUG oslo_concurrency.lockutils [req-bff4fdfd-1be3-439d-b726-49835c44a50e req-505efeb1-e3bf-464b-8c7b-3a1e4b8c7b68 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "38b13275-2908-42f3-bb70-73c050f375ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:25 compute-0 nova_compute[257802]: 2025-10-02 12:45:25.209 2 DEBUG oslo_concurrency.lockutils [req-bff4fdfd-1be3-439d-b726-49835c44a50e req-505efeb1-e3bf-464b-8c7b-3a1e4b8c7b68 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "38b13275-2908-42f3-bb70-73c050f375ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:25 compute-0 nova_compute[257802]: 2025-10-02 12:45:25.209 2 DEBUG oslo_concurrency.lockutils [req-bff4fdfd-1be3-439d-b726-49835c44a50e req-505efeb1-e3bf-464b-8c7b-3a1e4b8c7b68 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "38b13275-2908-42f3-bb70-73c050f375ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:25 compute-0 nova_compute[257802]: 2025-10-02 12:45:25.209 2 DEBUG nova.compute.manager [req-bff4fdfd-1be3-439d-b726-49835c44a50e req-505efeb1-e3bf-464b-8c7b-3a1e4b8c7b68 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Processing event network-vif-plugged-36888ba0-b822-4067-a556-6a12a1136d08 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:45:25 compute-0 nova_compute[257802]: 2025-10-02 12:45:25.210 2 DEBUG nova.compute.manager [req-bff4fdfd-1be3-439d-b726-49835c44a50e req-505efeb1-e3bf-464b-8c7b-3a1e4b8c7b68 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Received event network-changed-242e9f5d-5808-46e5-877c-ab6c97cacc64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:45:25 compute-0 nova_compute[257802]: 2025-10-02 12:45:25.210 2 DEBUG nova.compute.manager [req-bff4fdfd-1be3-439d-b726-49835c44a50e req-505efeb1-e3bf-464b-8c7b-3a1e4b8c7b68 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Refreshing instance network info cache due to event network-changed-242e9f5d-5808-46e5-877c-ab6c97cacc64. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:45:25 compute-0 nova_compute[257802]: 2025-10-02 12:45:25.210 2 DEBUG oslo_concurrency.lockutils [req-bff4fdfd-1be3-439d-b726-49835c44a50e req-505efeb1-e3bf-464b-8c7b-3a1e4b8c7b68 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:45:25 compute-0 nova_compute[257802]: 2025-10-02 12:45:25.210 2 DEBUG oslo_concurrency.lockutils [req-bff4fdfd-1be3-439d-b726-49835c44a50e req-505efeb1-e3bf-464b-8c7b-3a1e4b8c7b68 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:45:25 compute-0 nova_compute[257802]: 2025-10-02 12:45:25.210 2 DEBUG nova.network.neutron [req-bff4fdfd-1be3-439d-b726-49835c44a50e req-505efeb1-e3bf-464b-8c7b-3a1e4b8c7b68 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Refreshing network info cache for port 242e9f5d-5808-46e5-877c-ab6c97cacc64 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:45:25 compute-0 nova_compute[257802]: 2025-10-02 12:45:25.211 2 DEBUG nova.compute.manager [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:45:25 compute-0 nova_compute[257802]: 2025-10-02 12:45:25.215 2 DEBUG nova.virt.libvirt.driver [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:45:25 compute-0 nova_compute[257802]: 2025-10-02 12:45:25.216 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409125.2155964, 38b13275-2908-42f3-bb70-73c050f375ea => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:45:25 compute-0 nova_compute[257802]: 2025-10-02 12:45:25.217 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] VM Resumed (Lifecycle Event)
Oct 02 12:45:25 compute-0 nova_compute[257802]: 2025-10-02 12:45:25.221 2 INFO nova.virt.libvirt.driver [-] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Instance spawned successfully.
Oct 02 12:45:25 compute-0 nova_compute[257802]: 2025-10-02 12:45:25.222 2 DEBUG nova.compute.manager [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:45:25 compute-0 nova_compute[257802]: 2025-10-02 12:45:25.267 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:45:25 compute-0 nova_compute[257802]: 2025-10-02 12:45:25.270 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:45:25 compute-0 nova_compute[257802]: 2025-10-02 12:45:25.295 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:45:25 compute-0 nova_compute[257802]: 2025-10-02 12:45:25.323 2 DEBUG oslo_concurrency.lockutils [None req-56ab5902-b8f6-4e5d-9dd6-17b48bb0675f 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lock "38b13275-2908-42f3-bb70-73c050f375ea" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 12.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2560: 305 pgs: 305 active+clean; 394 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.3 MiB/s wr, 139 op/s
Oct 02 12:45:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:26.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:26 compute-0 nova_compute[257802]: 2025-10-02 12:45:26.825 2 DEBUG oslo_concurrency.lockutils [None req-7603f526-fb5a-4563-8fa9-8f715f87ccc0 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquiring lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:26 compute-0 nova_compute[257802]: 2025-10-02 12:45:26.826 2 DEBUG oslo_concurrency.lockutils [None req-7603f526-fb5a-4563-8fa9-8f715f87ccc0 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:26 compute-0 nova_compute[257802]: 2025-10-02 12:45:26.856 2 INFO nova.compute.manager [None req-7603f526-fb5a-4563-8fa9-8f715f87ccc0 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Detaching volume 5711af91-e6ed-47a0-ad60-a4e65171a2af
Oct 02 12:45:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:26.963 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:26.963 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:26.964 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:27.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:27 compute-0 ceph-mon[73607]: pgmap v2560: 305 pgs: 305 active+clean; 394 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.3 MiB/s wr, 139 op/s
Oct 02 12:45:27 compute-0 nova_compute[257802]: 2025-10-02 12:45:27.252 2 INFO nova.virt.block_device [None req-7603f526-fb5a-4563-8fa9-8f715f87ccc0 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Attempting to driver detach volume 5711af91-e6ed-47a0-ad60-a4e65171a2af from mountpoint /dev/vdb
Oct 02 12:45:27 compute-0 nova_compute[257802]: 2025-10-02 12:45:27.259 2 DEBUG nova.virt.libvirt.driver [None req-7603f526-fb5a-4563-8fa9-8f715f87ccc0 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Attempting to detach device vdb from instance 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:45:27 compute-0 nova_compute[257802]: 2025-10-02 12:45:27.259 2 DEBUG nova.virt.libvirt.guest [None req-7603f526-fb5a-4563-8fa9-8f715f87ccc0 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:45:27 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:45:27 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-5711af91-e6ed-47a0-ad60-a4e65171a2af">
Oct 02 12:45:27 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:45:27 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:45:27 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:45:27 compute-0 nova_compute[257802]:   </source>
Oct 02 12:45:27 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:45:27 compute-0 nova_compute[257802]:   <serial>5711af91-e6ed-47a0-ad60-a4e65171a2af</serial>
Oct 02 12:45:27 compute-0 nova_compute[257802]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:45:27 compute-0 nova_compute[257802]: </disk>
Oct 02 12:45:27 compute-0 nova_compute[257802]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:45:27 compute-0 nova_compute[257802]: 2025-10-02 12:45:27.267 2 INFO nova.virt.libvirt.driver [None req-7603f526-fb5a-4563-8fa9-8f715f87ccc0 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Successfully detached device vdb from instance 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf from the persistent domain config.
Oct 02 12:45:27 compute-0 nova_compute[257802]: 2025-10-02 12:45:27.268 2 DEBUG nova.virt.libvirt.driver [None req-7603f526-fb5a-4563-8fa9-8f715f87ccc0 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 12:45:27 compute-0 nova_compute[257802]: 2025-10-02 12:45:27.268 2 DEBUG nova.virt.libvirt.guest [None req-7603f526-fb5a-4563-8fa9-8f715f87ccc0 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:45:27 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:45:27 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-5711af91-e6ed-47a0-ad60-a4e65171a2af">
Oct 02 12:45:27 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:45:27 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:45:27 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:45:27 compute-0 nova_compute[257802]:   </source>
Oct 02 12:45:27 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:45:27 compute-0 nova_compute[257802]:   <serial>5711af91-e6ed-47a0-ad60-a4e65171a2af</serial>
Oct 02 12:45:27 compute-0 nova_compute[257802]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:45:27 compute-0 nova_compute[257802]: </disk>
Oct 02 12:45:27 compute-0 nova_compute[257802]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:45:27 compute-0 nova_compute[257802]: 2025-10-02 12:45:27.381 2 DEBUG nova.virt.libvirt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Received event <DeviceRemovedEvent: 1759409127.380934, 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 12:45:27 compute-0 nova_compute[257802]: 2025-10-02 12:45:27.382 2 DEBUG nova.virt.libvirt.driver [None req-7603f526-fb5a-4563-8fa9-8f715f87ccc0 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 12:45:27 compute-0 nova_compute[257802]: 2025-10-02 12:45:27.384 2 INFO nova.virt.libvirt.driver [None req-7603f526-fb5a-4563-8fa9-8f715f87ccc0 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Successfully detached device vdb from instance 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf from the live domain config.
Oct 02 12:45:27 compute-0 nova_compute[257802]: 2025-10-02 12:45:27.688 2 DEBUG nova.objects.instance [None req-7603f526-fb5a-4563-8fa9-8f715f87ccc0 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lazy-loading 'flavor' on Instance uuid 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:45:27 compute-0 nova_compute[257802]: 2025-10-02 12:45:27.759 2 DEBUG oslo_concurrency.lockutils [None req-7603f526-fb5a-4563-8fa9-8f715f87ccc0 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2561: 305 pgs: 305 active+clean; 394 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 950 KiB/s wr, 176 op/s
Oct 02 12:45:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:28.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:28 compute-0 nova_compute[257802]: 2025-10-02 12:45:28.865 2 DEBUG nova.network.neutron [req-bff4fdfd-1be3-439d-b726-49835c44a50e req-505efeb1-e3bf-464b-8c7b-3a1e4b8c7b68 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Updated VIF entry in instance network info cache for port 242e9f5d-5808-46e5-877c-ab6c97cacc64. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:45:28 compute-0 nova_compute[257802]: 2025-10-02 12:45:28.865 2 DEBUG nova.network.neutron [req-bff4fdfd-1be3-439d-b726-49835c44a50e req-505efeb1-e3bf-464b-8c7b-3a1e4b8c7b68 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Updating instance_info_cache with network_info: [{"id": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "address": "fa:16:3e:25:64:28", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap242e9f5d-58", "ovs_interfaceid": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:45:28 compute-0 nova_compute[257802]: 2025-10-02 12:45:28.893 2 DEBUG oslo_concurrency.lockutils [req-bff4fdfd-1be3-439d-b726-49835c44a50e req-505efeb1-e3bf-464b-8c7b-3a1e4b8c7b68 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:45:28 compute-0 nova_compute[257802]: 2025-10-02 12:45:28.894 2 DEBUG nova.compute.manager [req-bff4fdfd-1be3-439d-b726-49835c44a50e req-505efeb1-e3bf-464b-8c7b-3a1e4b8c7b68 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Received event network-vif-plugged-36888ba0-b822-4067-a556-6a12a1136d08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:45:28 compute-0 nova_compute[257802]: 2025-10-02 12:45:28.894 2 DEBUG oslo_concurrency.lockutils [req-bff4fdfd-1be3-439d-b726-49835c44a50e req-505efeb1-e3bf-464b-8c7b-3a1e4b8c7b68 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "38b13275-2908-42f3-bb70-73c050f375ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:28 compute-0 nova_compute[257802]: 2025-10-02 12:45:28.895 2 DEBUG oslo_concurrency.lockutils [req-bff4fdfd-1be3-439d-b726-49835c44a50e req-505efeb1-e3bf-464b-8c7b-3a1e4b8c7b68 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "38b13275-2908-42f3-bb70-73c050f375ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:28 compute-0 nova_compute[257802]: 2025-10-02 12:45:28.895 2 DEBUG oslo_concurrency.lockutils [req-bff4fdfd-1be3-439d-b726-49835c44a50e req-505efeb1-e3bf-464b-8c7b-3a1e4b8c7b68 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "38b13275-2908-42f3-bb70-73c050f375ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:28 compute-0 nova_compute[257802]: 2025-10-02 12:45:28.896 2 DEBUG nova.compute.manager [req-bff4fdfd-1be3-439d-b726-49835c44a50e req-505efeb1-e3bf-464b-8c7b-3a1e4b8c7b68 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] No waiting events found dispatching network-vif-plugged-36888ba0-b822-4067-a556-6a12a1136d08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:45:28 compute-0 nova_compute[257802]: 2025-10-02 12:45:28.896 2 WARNING nova.compute.manager [req-bff4fdfd-1be3-439d-b726-49835c44a50e req-505efeb1-e3bf-464b-8c7b-3a1e4b8c7b68 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Received unexpected event network-vif-plugged-36888ba0-b822-4067-a556-6a12a1136d08 for instance with vm_state shelved_offloaded and task_state spawning.
Oct 02 12:45:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:29.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:29 compute-0 ceph-mon[73607]: pgmap v2561: 305 pgs: 305 active+clean; 394 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 950 KiB/s wr, 176 op/s
Oct 02 12:45:29 compute-0 nova_compute[257802]: 2025-10-02 12:45:29.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:29 compute-0 nova_compute[257802]: 2025-10-02 12:45:29.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2562: 305 pgs: 305 active+clean; 388 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 328 KiB/s wr, 235 op/s
Oct 02 12:45:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2824413070' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:45:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2824413070' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:45:30 compute-0 nova_compute[257802]: 2025-10-02 12:45:30.134 2 DEBUG oslo_concurrency.lockutils [None req-1d4092cc-a3c2-4aa1-8e1f-5a11ab332f08 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquiring lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:30 compute-0 nova_compute[257802]: 2025-10-02 12:45:30.135 2 DEBUG oslo_concurrency.lockutils [None req-1d4092cc-a3c2-4aa1-8e1f-5a11ab332f08 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:30 compute-0 nova_compute[257802]: 2025-10-02 12:45:30.135 2 DEBUG oslo_concurrency.lockutils [None req-1d4092cc-a3c2-4aa1-8e1f-5a11ab332f08 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquiring lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:30 compute-0 nova_compute[257802]: 2025-10-02 12:45:30.136 2 DEBUG oslo_concurrency.lockutils [None req-1d4092cc-a3c2-4aa1-8e1f-5a11ab332f08 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:30 compute-0 nova_compute[257802]: 2025-10-02 12:45:30.136 2 DEBUG oslo_concurrency.lockutils [None req-1d4092cc-a3c2-4aa1-8e1f-5a11ab332f08 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:30 compute-0 nova_compute[257802]: 2025-10-02 12:45:30.137 2 INFO nova.compute.manager [None req-1d4092cc-a3c2-4aa1-8e1f-5a11ab332f08 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Terminating instance
Oct 02 12:45:30 compute-0 nova_compute[257802]: 2025-10-02 12:45:30.139 2 DEBUG nova.compute.manager [None req-1d4092cc-a3c2-4aa1-8e1f-5a11ab332f08 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:45:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:30.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:30 compute-0 kernel: tap242e9f5d-58 (unregistering): left promiscuous mode
Oct 02 12:45:30 compute-0 NetworkManager[44987]: <info>  [1759409130.2624] device (tap242e9f5d-58): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:45:30 compute-0 nova_compute[257802]: 2025-10-02 12:45:30.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:30 compute-0 ovn_controller[148183]: 2025-10-02T12:45:30Z|00711|binding|INFO|Releasing lport 242e9f5d-5808-46e5-877c-ab6c97cacc64 from this chassis (sb_readonly=0)
Oct 02 12:45:30 compute-0 ovn_controller[148183]: 2025-10-02T12:45:30Z|00712|binding|INFO|Setting lport 242e9f5d-5808-46e5-877c-ab6c97cacc64 down in Southbound
Oct 02 12:45:30 compute-0 ovn_controller[148183]: 2025-10-02T12:45:30Z|00713|binding|INFO|Removing iface tap242e9f5d-58 ovn-installed in OVS
Oct 02 12:45:30 compute-0 nova_compute[257802]: 2025-10-02 12:45:30.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:30.283 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:64:28 10.100.0.6'], port_security=['fa:16:3e:25:64:28 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99e53961-97c9-4d79-b2bc-ba336c204821', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2e53064cd4d645f09bd59bbca09b98e0', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'e3789894-a1c1-47ad-b248-e3a6730a7778', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60a249b7-28a6-4e82-83ff-74dd7c2a70f8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=242e9f5d-5808-46e5-877c-ab6c97cacc64) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:45:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:30.288 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 242e9f5d-5808-46e5-877c-ab6c97cacc64 in datapath 99e53961-97c9-4d79-b2bc-ba336c204821 unbound from our chassis
Oct 02 12:45:30 compute-0 nova_compute[257802]: 2025-10-02 12:45:30.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:30.290 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 99e53961-97c9-4d79-b2bc-ba336c204821, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:45:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:30.299 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1c060197-8f06-4d2f-9f02-259f514ebdc1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:30.300 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821 namespace which is not needed anymore
Oct 02 12:45:30 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d0000009f.scope: Deactivated successfully.
Oct 02 12:45:30 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d0000009f.scope: Consumed 15.805s CPU time.
Oct 02 12:45:30 compute-0 systemd-machined[211836]: Machine qemu-79-instance-0000009f terminated.
Oct 02 12:45:30 compute-0 nova_compute[257802]: 2025-10-02 12:45:30.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:30 compute-0 nova_compute[257802]: 2025-10-02 12:45:30.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:30 compute-0 nova_compute[257802]: 2025-10-02 12:45:30.384 2 INFO nova.virt.libvirt.driver [-] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Instance destroyed successfully.
Oct 02 12:45:30 compute-0 nova_compute[257802]: 2025-10-02 12:45:30.385 2 DEBUG nova.objects.instance [None req-1d4092cc-a3c2-4aa1-8e1f-5a11ab332f08 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lazy-loading 'resources' on Instance uuid 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:45:30 compute-0 nova_compute[257802]: 2025-10-02 12:45:30.415 2 DEBUG nova.virt.libvirt.vif [None req-1d4092cc-a3c2-4aa1-8e1f-5a11ab332f08 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:43:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-1177339644',display_name='tempest-TestMinimumBasicScenario-server-1177339644',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-1177339644',id=159,image_ref='6225d2a0-8cbb-42ed-9a0a-13744b0f7ae4',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFD6z5KFLzCCipER/PvHuq/MNGTxd1RUhD7rY5o2WpTORO0UwKn2m0zVFbpQwYXEC7RieyYdRnhp+ULi3gCBX1FpCTLHoDreyHt1lDTwb0yPiwclRQO8cg/Ijl4ojEGDkg==',key_name='tempest-TestMinimumBasicScenario-992418222',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:43:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2e53064cd4d645f09bd59bbca09b98e0',ramdisk_id='',reservation_id='r-mrk9aw2q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6225d2a0-8cbb-42ed-9a0a-13744b0f7ae4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-999813940',owner_user_name='tempest-TestMinimumBasicScenario-999813940-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:44:40Z,user_data=None,user_id='734ae44830d540d8ab51c2a3d75ecd80',uuid=4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "address": "fa:16:3e:25:64:28", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap242e9f5d-58", "ovs_interfaceid": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:45:30 compute-0 nova_compute[257802]: 2025-10-02 12:45:30.416 2 DEBUG nova.network.os_vif_util [None req-1d4092cc-a3c2-4aa1-8e1f-5a11ab332f08 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Converting VIF {"id": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "address": "fa:16:3e:25:64:28", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap242e9f5d-58", "ovs_interfaceid": "242e9f5d-5808-46e5-877c-ab6c97cacc64", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:45:30 compute-0 nova_compute[257802]: 2025-10-02 12:45:30.417 2 DEBUG nova.network.os_vif_util [None req-1d4092cc-a3c2-4aa1-8e1f-5a11ab332f08 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:25:64:28,bridge_name='br-int',has_traffic_filtering=True,id=242e9f5d-5808-46e5-877c-ab6c97cacc64,network=Network(99e53961-97c9-4d79-b2bc-ba336c204821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap242e9f5d-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:45:30 compute-0 nova_compute[257802]: 2025-10-02 12:45:30.417 2 DEBUG os_vif [None req-1d4092cc-a3c2-4aa1-8e1f-5a11ab332f08 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:25:64:28,bridge_name='br-int',has_traffic_filtering=True,id=242e9f5d-5808-46e5-877c-ab6c97cacc64,network=Network(99e53961-97c9-4d79-b2bc-ba336c204821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap242e9f5d-58') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:45:30 compute-0 nova_compute[257802]: 2025-10-02 12:45:30.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:30 compute-0 nova_compute[257802]: 2025-10-02 12:45:30.419 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap242e9f5d-58, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:30 compute-0 nova_compute[257802]: 2025-10-02 12:45:30.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:30 compute-0 nova_compute[257802]: 2025-10-02 12:45:30.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:45:30 compute-0 nova_compute[257802]: 2025-10-02 12:45:30.425 2 INFO os_vif [None req-1d4092cc-a3c2-4aa1-8e1f-5a11ab332f08 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:25:64:28,bridge_name='br-int',has_traffic_filtering=True,id=242e9f5d-5808-46e5-877c-ab6c97cacc64,network=Network(99e53961-97c9-4d79-b2bc-ba336c204821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap242e9f5d-58')
Oct 02 12:45:30 compute-0 neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821[358233]: [NOTICE]   (358237) : haproxy version is 2.8.14-c23fe91
Oct 02 12:45:30 compute-0 neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821[358233]: [NOTICE]   (358237) : path to executable is /usr/sbin/haproxy
Oct 02 12:45:30 compute-0 neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821[358233]: [ALERT]    (358237) : Current worker (358239) exited with code 143 (Terminated)
Oct 02 12:45:30 compute-0 neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821[358233]: [WARNING]  (358237) : All workers exited. Exiting... (0)
Oct 02 12:45:30 compute-0 systemd[1]: libpod-0f23ca896fc871e542db70527f42c86708b8d47e0b3bbb836bdfc7f71ff3fa58.scope: Deactivated successfully.
Oct 02 12:45:30 compute-0 podman[358811]: 2025-10-02 12:45:30.460414343 +0000 UTC m=+0.053701641 container died 0f23ca896fc871e542db70527f42c86708b8d47e0b3bbb836bdfc7f71ff3fa58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 12:45:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0f23ca896fc871e542db70527f42c86708b8d47e0b3bbb836bdfc7f71ff3fa58-userdata-shm.mount: Deactivated successfully.
Oct 02 12:45:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c3072bbb4ef7c11f1256f1456a356dfca5a543290bdfbb7e543f6b70ff32f5b-merged.mount: Deactivated successfully.
Oct 02 12:45:30 compute-0 podman[358811]: 2025-10-02 12:45:30.512615765 +0000 UTC m=+0.105903063 container cleanup 0f23ca896fc871e542db70527f42c86708b8d47e0b3bbb836bdfc7f71ff3fa58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:45:30 compute-0 systemd[1]: libpod-conmon-0f23ca896fc871e542db70527f42c86708b8d47e0b3bbb836bdfc7f71ff3fa58.scope: Deactivated successfully.
Oct 02 12:45:30 compute-0 podman[358856]: 2025-10-02 12:45:30.591630015 +0000 UTC m=+0.053327151 container remove 0f23ca896fc871e542db70527f42c86708b8d47e0b3bbb836bdfc7f71ff3fa58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:45:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:30.597 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1c6349b5-cc1e-4f64-b262-f86b04e0116a]: (4, ('Thu Oct  2 12:45:30 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821 (0f23ca896fc871e542db70527f42c86708b8d47e0b3bbb836bdfc7f71ff3fa58)\n0f23ca896fc871e542db70527f42c86708b8d47e0b3bbb836bdfc7f71ff3fa58\nThu Oct  2 12:45:30 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821 (0f23ca896fc871e542db70527f42c86708b8d47e0b3bbb836bdfc7f71ff3fa58)\n0f23ca896fc871e542db70527f42c86708b8d47e0b3bbb836bdfc7f71ff3fa58\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:30.599 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c0ea9c66-901d-44a4-aac4-483e9c6220a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:30.600 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99e53961-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:30 compute-0 kernel: tap99e53961-90: left promiscuous mode
Oct 02 12:45:30 compute-0 nova_compute[257802]: 2025-10-02 12:45:30.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:30 compute-0 nova_compute[257802]: 2025-10-02 12:45:30.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:30.620 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ece7a585-5a71-435d-b3a1-9c56f12f7d37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:30.646 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1fc1ffaf-a7ab-4a72-9f12-4890ce9f72c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:30.647 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[af570e95-eed7-4be5-888c-e60421654fed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:30.665 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6e01579a-dcdf-4cfd-b4f1-57f3e29d322a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714951, 'reachable_time': 19537, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358873, 'error': None, 'target': 'ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:30 compute-0 systemd[1]: run-netns-ovnmeta\x2d99e53961\x2d97c9\x2d4d79\x2db2bc\x2dba336c204821.mount: Deactivated successfully.
Oct 02 12:45:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:30.667 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:45:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:30.667 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[11fc7b12-4dba-47fe-9661-4fc43beb085d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:45:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:31.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:45:31 compute-0 nova_compute[257802]: 2025-10-02 12:45:31.045 2 INFO nova.virt.libvirt.driver [None req-1d4092cc-a3c2-4aa1-8e1f-5a11ab332f08 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Deleting instance files /var/lib/nova/instances/4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf_del
Oct 02 12:45:31 compute-0 nova_compute[257802]: 2025-10-02 12:45:31.046 2 INFO nova.virt.libvirt.driver [None req-1d4092cc-a3c2-4aa1-8e1f-5a11ab332f08 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Deletion of /var/lib/nova/instances/4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf_del complete
Oct 02 12:45:31 compute-0 nova_compute[257802]: 2025-10-02 12:45:31.120 2 INFO nova.compute.manager [None req-1d4092cc-a3c2-4aa1-8e1f-5a11ab332f08 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Took 0.98 seconds to destroy the instance on the hypervisor.
Oct 02 12:45:31 compute-0 nova_compute[257802]: 2025-10-02 12:45:31.120 2 DEBUG oslo.service.loopingcall [None req-1d4092cc-a3c2-4aa1-8e1f-5a11ab332f08 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:45:31 compute-0 nova_compute[257802]: 2025-10-02 12:45:31.120 2 DEBUG nova.compute.manager [-] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:45:31 compute-0 nova_compute[257802]: 2025-10-02 12:45:31.121 2 DEBUG nova.network.neutron [-] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:45:31 compute-0 nova_compute[257802]: 2025-10-02 12:45:31.216 2 DEBUG nova.compute.manager [req-4f6c0af7-139a-4efb-b8e6-0731fcdf43df req-187076b8-f959-4d1b-9db9-2648c38038d5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Received event network-vif-unplugged-242e9f5d-5808-46e5-877c-ab6c97cacc64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:45:31 compute-0 nova_compute[257802]: 2025-10-02 12:45:31.216 2 DEBUG oslo_concurrency.lockutils [req-4f6c0af7-139a-4efb-b8e6-0731fcdf43df req-187076b8-f959-4d1b-9db9-2648c38038d5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:31 compute-0 nova_compute[257802]: 2025-10-02 12:45:31.216 2 DEBUG oslo_concurrency.lockutils [req-4f6c0af7-139a-4efb-b8e6-0731fcdf43df req-187076b8-f959-4d1b-9db9-2648c38038d5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:31 compute-0 nova_compute[257802]: 2025-10-02 12:45:31.217 2 DEBUG oslo_concurrency.lockutils [req-4f6c0af7-139a-4efb-b8e6-0731fcdf43df req-187076b8-f959-4d1b-9db9-2648c38038d5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:31 compute-0 nova_compute[257802]: 2025-10-02 12:45:31.217 2 DEBUG nova.compute.manager [req-4f6c0af7-139a-4efb-b8e6-0731fcdf43df req-187076b8-f959-4d1b-9db9-2648c38038d5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] No waiting events found dispatching network-vif-unplugged-242e9f5d-5808-46e5-877c-ab6c97cacc64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:45:31 compute-0 nova_compute[257802]: 2025-10-02 12:45:31.217 2 DEBUG nova.compute.manager [req-4f6c0af7-139a-4efb-b8e6-0731fcdf43df req-187076b8-f959-4d1b-9db9-2648c38038d5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Received event network-vif-unplugged-242e9f5d-5808-46e5-877c-ab6c97cacc64 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:45:31 compute-0 nova_compute[257802]: 2025-10-02 12:45:31.217 2 DEBUG nova.compute.manager [req-4f6c0af7-139a-4efb-b8e6-0731fcdf43df req-187076b8-f959-4d1b-9db9-2648c38038d5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Received event network-vif-plugged-242e9f5d-5808-46e5-877c-ab6c97cacc64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:45:31 compute-0 nova_compute[257802]: 2025-10-02 12:45:31.218 2 DEBUG oslo_concurrency.lockutils [req-4f6c0af7-139a-4efb-b8e6-0731fcdf43df req-187076b8-f959-4d1b-9db9-2648c38038d5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:31 compute-0 nova_compute[257802]: 2025-10-02 12:45:31.218 2 DEBUG oslo_concurrency.lockutils [req-4f6c0af7-139a-4efb-b8e6-0731fcdf43df req-187076b8-f959-4d1b-9db9-2648c38038d5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:31 compute-0 nova_compute[257802]: 2025-10-02 12:45:31.218 2 DEBUG oslo_concurrency.lockutils [req-4f6c0af7-139a-4efb-b8e6-0731fcdf43df req-187076b8-f959-4d1b-9db9-2648c38038d5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:31 compute-0 nova_compute[257802]: 2025-10-02 12:45:31.218 2 DEBUG nova.compute.manager [req-4f6c0af7-139a-4efb-b8e6-0731fcdf43df req-187076b8-f959-4d1b-9db9-2648c38038d5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] No waiting events found dispatching network-vif-plugged-242e9f5d-5808-46e5-877c-ab6c97cacc64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:45:31 compute-0 nova_compute[257802]: 2025-10-02 12:45:31.219 2 WARNING nova.compute.manager [req-4f6c0af7-139a-4efb-b8e6-0731fcdf43df req-187076b8-f959-4d1b-9db9-2648c38038d5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Received unexpected event network-vif-plugged-242e9f5d-5808-46e5-877c-ab6c97cacc64 for instance with vm_state active and task_state deleting.
Oct 02 12:45:31 compute-0 ceph-mon[73607]: pgmap v2562: 305 pgs: 305 active+clean; 388 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 328 KiB/s wr, 235 op/s
Oct 02 12:45:31 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2787986661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2563: 305 pgs: 305 active+clean; 347 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.1 MiB/s wr, 220 op/s
Oct 02 12:45:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:32.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:32 compute-0 nova_compute[257802]: 2025-10-02 12:45:32.171 2 DEBUG nova.network.neutron [-] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:45:32 compute-0 nova_compute[257802]: 2025-10-02 12:45:32.267 2 INFO nova.compute.manager [-] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Took 1.15 seconds to deallocate network for instance.
Oct 02 12:45:32 compute-0 nova_compute[257802]: 2025-10-02 12:45:32.427 2 DEBUG oslo_concurrency.lockutils [None req-1d4092cc-a3c2-4aa1-8e1f-5a11ab332f08 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:32 compute-0 nova_compute[257802]: 2025-10-02 12:45:32.428 2 DEBUG oslo_concurrency.lockutils [None req-1d4092cc-a3c2-4aa1-8e1f-5a11ab332f08 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:32 compute-0 nova_compute[257802]: 2025-10-02 12:45:32.490 2 DEBUG nova.compute.manager [req-23534578-0ea4-4d73-a1cf-13f86429bbf9 req-f345eaa4-b9bf-41b9-8e3b-2307e34a2083 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Received event network-vif-deleted-242e9f5d-5808-46e5-877c-ab6c97cacc64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:45:32 compute-0 nova_compute[257802]: 2025-10-02 12:45:32.541 2 DEBUG oslo_concurrency.processutils [None req-1d4092cc-a3c2-4aa1-8e1f-5a11ab332f08 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:32 compute-0 nova_compute[257802]: 2025-10-02 12:45:32.857 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:45:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:45:32 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1438487518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:32 compute-0 nova_compute[257802]: 2025-10-02 12:45:32.957 2 DEBUG oslo_concurrency.processutils [None req-1d4092cc-a3c2-4aa1-8e1f-5a11ab332f08 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:32 compute-0 nova_compute[257802]: 2025-10-02 12:45:32.963 2 DEBUG nova.compute.provider_tree [None req-1d4092cc-a3c2-4aa1-8e1f-5a11ab332f08 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:45:32 compute-0 nova_compute[257802]: 2025-10-02 12:45:32.986 2 DEBUG nova.scheduler.client.report [None req-1d4092cc-a3c2-4aa1-8e1f-5a11ab332f08 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:45:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:33 compute-0 nova_compute[257802]: 2025-10-02 12:45:33.016 2 DEBUG oslo_concurrency.lockutils [None req-1d4092cc-a3c2-4aa1-8e1f-5a11ab332f08 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:45:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:33.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:45:33 compute-0 nova_compute[257802]: 2025-10-02 12:45:33.052 2 INFO nova.scheduler.client.report [None req-1d4092cc-a3c2-4aa1-8e1f-5a11ab332f08 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Deleted allocations for instance 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf
Oct 02 12:45:33 compute-0 nova_compute[257802]: 2025-10-02 12:45:33.165 2 DEBUG oslo_concurrency.lockutils [None req-1d4092cc-a3c2-4aa1-8e1f-5a11ab332f08 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:33 compute-0 sudo[358898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:45:33 compute-0 sudo[358898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:33 compute-0 sudo[358898]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:33 compute-0 sudo[358923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:45:33 compute-0 sudo[358923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:33 compute-0 sudo[358923]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:33 compute-0 ceph-mon[73607]: pgmap v2563: 305 pgs: 305 active+clean; 347 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.1 MiB/s wr, 220 op/s
Oct 02 12:45:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1438487518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2564: 305 pgs: 305 active+clean; 320 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 262 op/s
Oct 02 12:45:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:34.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:34 compute-0 nova_compute[257802]: 2025-10-02 12:45:34.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e356 do_prune osdmap full prune enabled
Oct 02 12:45:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e357 e357: 3 total, 3 up, 3 in
Oct 02 12:45:34 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e357: 3 total, 3 up, 3 in
Oct 02 12:45:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:45:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:35.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:45:35 compute-0 ceph-mon[73607]: pgmap v2564: 305 pgs: 305 active+clean; 320 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 262 op/s
Oct 02 12:45:35 compute-0 ceph-mon[73607]: osdmap e357: 3 total, 3 up, 3 in
Oct 02 12:45:35 compute-0 nova_compute[257802]: 2025-10-02 12:45:35.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2566: 305 pgs: 305 active+clean; 299 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.6 MiB/s wr, 274 op/s
Oct 02 12:45:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:45:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:36.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:45:36 compute-0 podman[358952]: 2025-10-02 12:45:36.945768705 +0000 UTC m=+0.082702663 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, container_name=iscsid)
Oct 02 12:45:36 compute-0 podman[358950]: 2025-10-02 12:45:36.945818406 +0000 UTC m=+0.088110596 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:45:36 compute-0 podman[358951]: 2025-10-02 12:45:36.946459111 +0000 UTC m=+0.086336172 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:45:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:45:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:37.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:45:37 compute-0 ovn_controller[148183]: 2025-10-02T12:45:37Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8d:89:9a 10.100.0.12
Oct 02 12:45:37 compute-0 ceph-mon[73607]: pgmap v2566: 305 pgs: 305 active+clean; 299 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.6 MiB/s wr, 274 op/s
Oct 02 12:45:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2567: 305 pgs: 305 active+clean; 291 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.6 MiB/s wr, 251 op/s
Oct 02 12:45:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:38.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:38 compute-0 ovn_controller[148183]: 2025-10-02T12:45:38Z|00714|binding|INFO|Releasing lport 3850aa59-d3b6-4277-b937-ad9f4b8f7b4c from this chassis (sb_readonly=0)
Oct 02 12:45:38 compute-0 nova_compute[257802]: 2025-10-02 12:45:38.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:39.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:39 compute-0 nova_compute[257802]: 2025-10-02 12:45:39.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:39 compute-0 ceph-mon[73607]: pgmap v2567: 305 pgs: 305 active+clean; 291 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.6 MiB/s wr, 251 op/s
Oct 02 12:45:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2568: 305 pgs: 305 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 217 op/s
Oct 02 12:45:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:45:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:40.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:45:40 compute-0 nova_compute[257802]: 2025-10-02 12:45:40.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:41 compute-0 ovn_controller[148183]: 2025-10-02T12:45:41Z|00715|binding|INFO|Releasing lport 3850aa59-d3b6-4277-b937-ad9f4b8f7b4c from this chassis (sb_readonly=0)
Oct 02 12:45:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:41.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:41 compute-0 nova_compute[257802]: 2025-10-02 12:45:41.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:41 compute-0 ceph-mon[73607]: pgmap v2568: 305 pgs: 305 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 217 op/s
Oct 02 12:45:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2569: 305 pgs: 305 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.3 MiB/s wr, 174 op/s
Oct 02 12:45:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:42.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:45:42
Oct 02 12:45:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:45:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:45:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['vms', 'backups', 'images', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data']
Oct 02 12:45:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:45:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:45:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:45:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:45:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:45:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:45:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:45:42 compute-0 podman[359011]: 2025-10-02 12:45:42.970036019 +0000 UTC m=+0.110710140 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:45:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:43.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:45:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:45:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:45:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:45:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:45:43 compute-0 ceph-mon[73607]: pgmap v2569: 305 pgs: 305 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.3 MiB/s wr, 174 op/s
Oct 02 12:45:43 compute-0 sudo[359038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:45:43 compute-0 sudo[359038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:43 compute-0 sudo[359038]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:43 compute-0 sudo[359063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:45:43 compute-0 sudo[359063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:43 compute-0 sudo[359063]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e357 do_prune osdmap full prune enabled
Oct 02 12:45:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e358 e358: 3 total, 3 up, 3 in
Oct 02 12:45:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2570: 305 pgs: 305 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 710 KiB/s rd, 39 KiB/s wr, 95 op/s
Oct 02 12:45:44 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e358: 3 total, 3 up, 3 in
Oct 02 12:45:44 compute-0 sudo[359088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:45:44 compute-0 sudo[359088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:44 compute-0 sudo[359088]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:44 compute-0 sudo[359113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:45:44 compute-0 sudo[359113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:45:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:45:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:45:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:45:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:45:44 compute-0 nova_compute[257802]: 2025-10-02 12:45:44.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:44.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:44 compute-0 sudo[359113]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:45:44 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:45:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:45:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:45:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:45:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:45:44 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 76a10729-3674-4019-a714-7699c6d432ac does not exist
Oct 02 12:45:44 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev b779ae82-7855-4328-a9e4-1b80890270fe does not exist
Oct 02 12:45:44 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 8eb12df4-bc59-4d0f-a673-4315f3d36700 does not exist
Oct 02 12:45:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:45:44 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:45:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:45:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:45:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:45:44 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:45:44 compute-0 sudo[359170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:45:44 compute-0 sudo[359170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:44 compute-0 sudo[359170]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:44 compute-0 sudo[359195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:45:44 compute-0 sudo[359195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:44 compute-0 sudo[359195]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:44 compute-0 sudo[359220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:45:44 compute-0 sudo[359220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:44 compute-0 sudo[359220]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:44 compute-0 sudo[359245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:45:44 compute-0 sudo[359245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:45 compute-0 ceph-mon[73607]: pgmap v2570: 305 pgs: 305 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 710 KiB/s rd, 39 KiB/s wr, 95 op/s
Oct 02 12:45:45 compute-0 ceph-mon[73607]: osdmap e358: 3 total, 3 up, 3 in
Oct 02 12:45:45 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:45:45 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:45:45 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:45:45 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:45:45 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:45:45 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:45:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:45.030 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:45:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:45.031 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:45:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:45.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:45 compute-0 nova_compute[257802]: 2025-10-02 12:45:45.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:45 compute-0 podman[359312]: 2025-10-02 12:45:45.30750273 +0000 UTC m=+0.047021437 container create 0aa52e4b400d1a65c37c7cab7be9f1ab50b1659cca62eb75d93da448b4013974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ride, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 12:45:45 compute-0 systemd[1]: Started libpod-conmon-0aa52e4b400d1a65c37c7cab7be9f1ab50b1659cca62eb75d93da448b4013974.scope.
Oct 02 12:45:45 compute-0 nova_compute[257802]: 2025-10-02 12:45:45.379 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409130.3790321, 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:45:45 compute-0 nova_compute[257802]: 2025-10-02 12:45:45.380 2 INFO nova.compute.manager [-] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] VM Stopped (Lifecycle Event)
Oct 02 12:45:45 compute-0 podman[359312]: 2025-10-02 12:45:45.286618727 +0000 UTC m=+0.026137454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:45:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:45:45 compute-0 podman[359312]: 2025-10-02 12:45:45.411375041 +0000 UTC m=+0.150893788 container init 0aa52e4b400d1a65c37c7cab7be9f1ab50b1659cca62eb75d93da448b4013974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:45:45 compute-0 nova_compute[257802]: 2025-10-02 12:45:45.417 2 DEBUG nova.compute.manager [None req-efaac654-1d82-4cb6-a0bd-a66f297251a0 - - - - - -] [instance: 4a3d81a0-ce3a-4e61-a0a6-a0eab9546dbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:45:45 compute-0 podman[359312]: 2025-10-02 12:45:45.426245506 +0000 UTC m=+0.165764213 container start 0aa52e4b400d1a65c37c7cab7be9f1ab50b1659cca62eb75d93da448b4013974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 12:45:45 compute-0 nova_compute[257802]: 2025-10-02 12:45:45.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:45 compute-0 mystifying_ride[359328]: 167 167
Oct 02 12:45:45 compute-0 systemd[1]: libpod-0aa52e4b400d1a65c37c7cab7be9f1ab50b1659cca62eb75d93da448b4013974.scope: Deactivated successfully.
Oct 02 12:45:45 compute-0 podman[359312]: 2025-10-02 12:45:45.434408917 +0000 UTC m=+0.173927654 container attach 0aa52e4b400d1a65c37c7cab7be9f1ab50b1659cca62eb75d93da448b4013974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 12:45:45 compute-0 conmon[359328]: conmon 0aa52e4b400d1a65c37c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0aa52e4b400d1a65c37c7cab7be9f1ab50b1659cca62eb75d93da448b4013974.scope/container/memory.events
Oct 02 12:45:45 compute-0 podman[359312]: 2025-10-02 12:45:45.437290377 +0000 UTC m=+0.176809084 container died 0aa52e4b400d1a65c37c7cab7be9f1ab50b1659cca62eb75d93da448b4013974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 12:45:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-d358666b88e3918d279f0b18a137437a6f5c0d7db2370b499aa910de35ef1382-merged.mount: Deactivated successfully.
Oct 02 12:45:45 compute-0 podman[359312]: 2025-10-02 12:45:45.493916109 +0000 UTC m=+0.233434816 container remove 0aa52e4b400d1a65c37c7cab7be9f1ab50b1659cca62eb75d93da448b4013974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ride, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:45:45 compute-0 systemd[1]: libpod-conmon-0aa52e4b400d1a65c37c7cab7be9f1ab50b1659cca62eb75d93da448b4013974.scope: Deactivated successfully.
Oct 02 12:45:45 compute-0 podman[359353]: 2025-10-02 12:45:45.712123819 +0000 UTC m=+0.049424606 container create 173f8f1719b48a79f7083922466a738eb04256d8ecf393567670967253828263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:45:45 compute-0 systemd[1]: Started libpod-conmon-173f8f1719b48a79f7083922466a738eb04256d8ecf393567670967253828263.scope.
Oct 02 12:45:45 compute-0 podman[359353]: 2025-10-02 12:45:45.692436206 +0000 UTC m=+0.029737023 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:45:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:45:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559daf75c7c5c14c80881f47f1e620328c6a062733bda1c453a4ca733d3a8434/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:45:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559daf75c7c5c14c80881f47f1e620328c6a062733bda1c453a4ca733d3a8434/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:45:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559daf75c7c5c14c80881f47f1e620328c6a062733bda1c453a4ca733d3a8434/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:45:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559daf75c7c5c14c80881f47f1e620328c6a062733bda1c453a4ca733d3a8434/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:45:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559daf75c7c5c14c80881f47f1e620328c6a062733bda1c453a4ca733d3a8434/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:45:45 compute-0 podman[359353]: 2025-10-02 12:45:45.818635756 +0000 UTC m=+0.155936573 container init 173f8f1719b48a79f7083922466a738eb04256d8ecf393567670967253828263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hugle, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:45:45 compute-0 podman[359353]: 2025-10-02 12:45:45.827482343 +0000 UTC m=+0.164783130 container start 173f8f1719b48a79f7083922466a738eb04256d8ecf393567670967253828263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hugle, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 12:45:45 compute-0 podman[359353]: 2025-10-02 12:45:45.8407737 +0000 UTC m=+0.178074507 container attach 173f8f1719b48a79f7083922466a738eb04256d8ecf393567670967253828263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 12:45:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2572: 305 pgs: 305 active+clean; 281 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 666 KiB/s rd, 35 KiB/s wr, 80 op/s
Oct 02 12:45:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:46.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:46 compute-0 upbeat_hugle[359370]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:45:46 compute-0 upbeat_hugle[359370]: --> relative data size: 1.0
Oct 02 12:45:46 compute-0 upbeat_hugle[359370]: --> All data devices are unavailable
Oct 02 12:45:46 compute-0 systemd[1]: libpod-173f8f1719b48a79f7083922466a738eb04256d8ecf393567670967253828263.scope: Deactivated successfully.
Oct 02 12:45:46 compute-0 podman[359353]: 2025-10-02 12:45:46.665360914 +0000 UTC m=+1.002661721 container died 173f8f1719b48a79f7083922466a738eb04256d8ecf393567670967253828263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hugle, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 12:45:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-559daf75c7c5c14c80881f47f1e620328c6a062733bda1c453a4ca733d3a8434-merged.mount: Deactivated successfully.
Oct 02 12:45:46 compute-0 podman[359353]: 2025-10-02 12:45:46.730114836 +0000 UTC m=+1.067415623 container remove 173f8f1719b48a79f7083922466a738eb04256d8ecf393567670967253828263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hugle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 12:45:46 compute-0 systemd[1]: libpod-conmon-173f8f1719b48a79f7083922466a738eb04256d8ecf393567670967253828263.scope: Deactivated successfully.
Oct 02 12:45:46 compute-0 sudo[359245]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:46 compute-0 sudo[359400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:45:46 compute-0 sudo[359400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:46 compute-0 sudo[359400]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:46 compute-0 sudo[359425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:45:46 compute-0 sudo[359425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:46 compute-0 sudo[359425]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:46 compute-0 sudo[359450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:45:46 compute-0 sudo[359450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:46 compute-0 sudo[359450]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:46 compute-0 sudo[359475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:45:46 compute-0 sudo[359475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:47.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:47 compute-0 ceph-mon[73607]: pgmap v2572: 305 pgs: 305 active+clean; 281 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 666 KiB/s rd, 35 KiB/s wr, 80 op/s
Oct 02 12:45:47 compute-0 podman[359540]: 2025-10-02 12:45:47.331668382 +0000 UTC m=+0.042055344 container create 768da0588bfa167c74d9600b6caeeabd379a4f631b7f98e4c5b5eb2ed4803c80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mendel, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 12:45:47 compute-0 systemd[1]: Started libpod-conmon-768da0588bfa167c74d9600b6caeeabd379a4f631b7f98e4c5b5eb2ed4803c80.scope.
Oct 02 12:45:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:45:47 compute-0 podman[359540]: 2025-10-02 12:45:47.387212337 +0000 UTC m=+0.097599319 container init 768da0588bfa167c74d9600b6caeeabd379a4f631b7f98e4c5b5eb2ed4803c80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 12:45:47 compute-0 podman[359540]: 2025-10-02 12:45:47.393904532 +0000 UTC m=+0.104291534 container start 768da0588bfa167c74d9600b6caeeabd379a4f631b7f98e4c5b5eb2ed4803c80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Oct 02 12:45:47 compute-0 podman[359540]: 2025-10-02 12:45:47.397215923 +0000 UTC m=+0.107602905 container attach 768da0588bfa167c74d9600b6caeeabd379a4f631b7f98e4c5b5eb2ed4803c80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 12:45:47 compute-0 determined_mendel[359556]: 167 167
Oct 02 12:45:47 compute-0 systemd[1]: libpod-768da0588bfa167c74d9600b6caeeabd379a4f631b7f98e4c5b5eb2ed4803c80.scope: Deactivated successfully.
Oct 02 12:45:47 compute-0 podman[359540]: 2025-10-02 12:45:47.398233238 +0000 UTC m=+0.108620200 container died 768da0588bfa167c74d9600b6caeeabd379a4f631b7f98e4c5b5eb2ed4803c80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Oct 02 12:45:47 compute-0 podman[359540]: 2025-10-02 12:45:47.312727208 +0000 UTC m=+0.023114180 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:45:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e9f3d197f1dbda7f77130d216ffb531f0065a5ced3020784c459440f3892f23-merged.mount: Deactivated successfully.
Oct 02 12:45:47 compute-0 podman[359540]: 2025-10-02 12:45:47.443960202 +0000 UTC m=+0.154347164 container remove 768da0588bfa167c74d9600b6caeeabd379a4f631b7f98e4c5b5eb2ed4803c80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:45:47 compute-0 systemd[1]: libpod-conmon-768da0588bfa167c74d9600b6caeeabd379a4f631b7f98e4c5b5eb2ed4803c80.scope: Deactivated successfully.
Oct 02 12:45:47 compute-0 podman[359579]: 2025-10-02 12:45:47.635422394 +0000 UTC m=+0.079514474 container create d8b8f6559028eee98a46e1026b3c109350c56ca5e6026da498555b8d8892221d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swirles, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:45:47 compute-0 podman[359579]: 2025-10-02 12:45:47.578414624 +0000 UTC m=+0.022506724 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:45:47 compute-0 systemd[1]: Started libpod-conmon-d8b8f6559028eee98a46e1026b3c109350c56ca5e6026da498555b8d8892221d.scope.
Oct 02 12:45:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:45:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cf07bba0dd3c3a6b73081f6dc083e75002926ef43ebc9b3374540644836f134/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:45:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cf07bba0dd3c3a6b73081f6dc083e75002926ef43ebc9b3374540644836f134/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:45:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cf07bba0dd3c3a6b73081f6dc083e75002926ef43ebc9b3374540644836f134/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:45:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cf07bba0dd3c3a6b73081f6dc083e75002926ef43ebc9b3374540644836f134/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:45:47 compute-0 podman[359579]: 2025-10-02 12:45:47.726059511 +0000 UTC m=+0.170151621 container init d8b8f6559028eee98a46e1026b3c109350c56ca5e6026da498555b8d8892221d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swirles, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 12:45:47 compute-0 podman[359579]: 2025-10-02 12:45:47.736913378 +0000 UTC m=+0.181005448 container start d8b8f6559028eee98a46e1026b3c109350c56ca5e6026da498555b8d8892221d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swirles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:45:47 compute-0 podman[359579]: 2025-10-02 12:45:47.742860004 +0000 UTC m=+0.186952074 container attach d8b8f6559028eee98a46e1026b3c109350c56ca5e6026da498555b8d8892221d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:45:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2573: 305 pgs: 305 active+clean; 281 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 649 KiB/s rd, 32 KiB/s wr, 57 op/s
Oct 02 12:45:48 compute-0 nova_compute[257802]: 2025-10-02 12:45:48.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:48 compute-0 ovn_controller[148183]: 2025-10-02T12:45:48Z|00716|binding|INFO|Releasing lport 3850aa59-d3b6-4277-b937-ad9f4b8f7b4c from this chassis (sb_readonly=0)
Oct 02 12:45:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:45:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:48.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:45:48 compute-0 nova_compute[257802]: 2025-10-02 12:45:48.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:48 compute-0 jolly_swirles[359595]: {
Oct 02 12:45:48 compute-0 jolly_swirles[359595]:     "1": [
Oct 02 12:45:48 compute-0 jolly_swirles[359595]:         {
Oct 02 12:45:48 compute-0 jolly_swirles[359595]:             "devices": [
Oct 02 12:45:48 compute-0 jolly_swirles[359595]:                 "/dev/loop3"
Oct 02 12:45:48 compute-0 jolly_swirles[359595]:             ],
Oct 02 12:45:48 compute-0 jolly_swirles[359595]:             "lv_name": "ceph_lv0",
Oct 02 12:45:48 compute-0 jolly_swirles[359595]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:45:48 compute-0 jolly_swirles[359595]:             "lv_size": "7511998464",
Oct 02 12:45:48 compute-0 jolly_swirles[359595]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:45:48 compute-0 jolly_swirles[359595]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:45:48 compute-0 jolly_swirles[359595]:             "name": "ceph_lv0",
Oct 02 12:45:48 compute-0 jolly_swirles[359595]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:45:48 compute-0 jolly_swirles[359595]:             "tags": {
Oct 02 12:45:48 compute-0 jolly_swirles[359595]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:45:48 compute-0 jolly_swirles[359595]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:45:48 compute-0 jolly_swirles[359595]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:45:48 compute-0 jolly_swirles[359595]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:45:48 compute-0 jolly_swirles[359595]:                 "ceph.cluster_name": "ceph",
Oct 02 12:45:48 compute-0 jolly_swirles[359595]:                 "ceph.crush_device_class": "",
Oct 02 12:45:48 compute-0 jolly_swirles[359595]:                 "ceph.encrypted": "0",
Oct 02 12:45:48 compute-0 jolly_swirles[359595]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:45:48 compute-0 jolly_swirles[359595]:                 "ceph.osd_id": "1",
Oct 02 12:45:48 compute-0 jolly_swirles[359595]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:45:48 compute-0 jolly_swirles[359595]:                 "ceph.type": "block",
Oct 02 12:45:48 compute-0 jolly_swirles[359595]:                 "ceph.vdo": "0"
Oct 02 12:45:48 compute-0 jolly_swirles[359595]:             },
Oct 02 12:45:48 compute-0 jolly_swirles[359595]:             "type": "block",
Oct 02 12:45:48 compute-0 jolly_swirles[359595]:             "vg_name": "ceph_vg0"
Oct 02 12:45:48 compute-0 jolly_swirles[359595]:         }
Oct 02 12:45:48 compute-0 jolly_swirles[359595]:     ]
Oct 02 12:45:48 compute-0 jolly_swirles[359595]: }
Oct 02 12:45:48 compute-0 systemd[1]: libpod-d8b8f6559028eee98a46e1026b3c109350c56ca5e6026da498555b8d8892221d.scope: Deactivated successfully.
Oct 02 12:45:48 compute-0 podman[359579]: 2025-10-02 12:45:48.537653118 +0000 UTC m=+0.981745238 container died d8b8f6559028eee98a46e1026b3c109350c56ca5e6026da498555b8d8892221d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 12:45:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cf07bba0dd3c3a6b73081f6dc083e75002926ef43ebc9b3374540644836f134-merged.mount: Deactivated successfully.
Oct 02 12:45:48 compute-0 podman[359579]: 2025-10-02 12:45:48.621644051 +0000 UTC m=+1.065736121 container remove d8b8f6559028eee98a46e1026b3c109350c56ca5e6026da498555b8d8892221d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:45:48 compute-0 systemd[1]: libpod-conmon-d8b8f6559028eee98a46e1026b3c109350c56ca5e6026da498555b8d8892221d.scope: Deactivated successfully.
Oct 02 12:45:48 compute-0 sudo[359475]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:48 compute-0 sudo[359620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:45:48 compute-0 sudo[359620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:48 compute-0 sudo[359620]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:48 compute-0 sudo[359645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:45:48 compute-0 sudo[359645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:48 compute-0 sudo[359645]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:48 compute-0 sudo[359670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:45:48 compute-0 sudo[359670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:48 compute-0 sudo[359670]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:48 compute-0 sudo[359695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:45:48 compute-0 sudo[359695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:45:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:49.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:45:49 compute-0 ceph-mon[73607]: pgmap v2573: 305 pgs: 305 active+clean; 281 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 649 KiB/s rd, 32 KiB/s wr, 57 op/s
Oct 02 12:45:49 compute-0 nova_compute[257802]: 2025-10-02 12:45:49.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:49 compute-0 podman[359761]: 2025-10-02 12:45:49.250232832 +0000 UTC m=+0.038056916 container create cdd2d0c022cd0a561339e0e215f3aa3cacb50e44437c9555b5abf20d0073bc87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 12:45:49 compute-0 systemd[1]: Started libpod-conmon-cdd2d0c022cd0a561339e0e215f3aa3cacb50e44437c9555b5abf20d0073bc87.scope.
Oct 02 12:45:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:45:49 compute-0 podman[359761]: 2025-10-02 12:45:49.326619548 +0000 UTC m=+0.114443542 container init cdd2d0c022cd0a561339e0e215f3aa3cacb50e44437c9555b5abf20d0073bc87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_johnson, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 12:45:49 compute-0 podman[359761]: 2025-10-02 12:45:49.233105621 +0000 UTC m=+0.020929625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:45:49 compute-0 podman[359761]: 2025-10-02 12:45:49.332311148 +0000 UTC m=+0.120135122 container start cdd2d0c022cd0a561339e0e215f3aa3cacb50e44437c9555b5abf20d0073bc87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_johnson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 12:45:49 compute-0 lucid_johnson[359777]: 167 167
Oct 02 12:45:49 compute-0 systemd[1]: libpod-cdd2d0c022cd0a561339e0e215f3aa3cacb50e44437c9555b5abf20d0073bc87.scope: Deactivated successfully.
Oct 02 12:45:49 compute-0 podman[359761]: 2025-10-02 12:45:49.338256324 +0000 UTC m=+0.126080318 container attach cdd2d0c022cd0a561339e0e215f3aa3cacb50e44437c9555b5abf20d0073bc87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_johnson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 12:45:49 compute-0 podman[359761]: 2025-10-02 12:45:49.338533461 +0000 UTC m=+0.126357435 container died cdd2d0c022cd0a561339e0e215f3aa3cacb50e44437c9555b5abf20d0073bc87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:45:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b77c6a45c53ff2cef6bc29fa4404e0d00739638857dc53c7c80f1203a6d15a1-merged.mount: Deactivated successfully.
Oct 02 12:45:49 compute-0 podman[359761]: 2025-10-02 12:45:49.382685915 +0000 UTC m=+0.170509889 container remove cdd2d0c022cd0a561339e0e215f3aa3cacb50e44437c9555b5abf20d0073bc87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:45:49 compute-0 systemd[1]: libpod-conmon-cdd2d0c022cd0a561339e0e215f3aa3cacb50e44437c9555b5abf20d0073bc87.scope: Deactivated successfully.
Oct 02 12:45:49 compute-0 podman[359802]: 2025-10-02 12:45:49.589342592 +0000 UTC m=+0.056204841 container create fcfe282aeb1b2d5ac16431c0913babc6dbff6e4322a8be5df4988e5a16bcec78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:45:49 compute-0 systemd[1]: Started libpod-conmon-fcfe282aeb1b2d5ac16431c0913babc6dbff6e4322a8be5df4988e5a16bcec78.scope.
Oct 02 12:45:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:45:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f8b5b2398b3c09a1d1c1dc824b447e82f2d2f0843cd7a59c82543c0c12191ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:45:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f8b5b2398b3c09a1d1c1dc824b447e82f2d2f0843cd7a59c82543c0c12191ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:45:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f8b5b2398b3c09a1d1c1dc824b447e82f2d2f0843cd7a59c82543c0c12191ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:45:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f8b5b2398b3c09a1d1c1dc824b447e82f2d2f0843cd7a59c82543c0c12191ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:45:49 compute-0 podman[359802]: 2025-10-02 12:45:49.564308477 +0000 UTC m=+0.031170796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:45:49 compute-0 podman[359802]: 2025-10-02 12:45:49.66579079 +0000 UTC m=+0.132653079 container init fcfe282aeb1b2d5ac16431c0913babc6dbff6e4322a8be5df4988e5a16bcec78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct 02 12:45:49 compute-0 podman[359802]: 2025-10-02 12:45:49.67308854 +0000 UTC m=+0.139950789 container start fcfe282aeb1b2d5ac16431c0913babc6dbff6e4322a8be5df4988e5a16bcec78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goodall, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 12:45:49 compute-0 podman[359802]: 2025-10-02 12:45:49.680734577 +0000 UTC m=+0.147596816 container attach fcfe282aeb1b2d5ac16431c0913babc6dbff6e4322a8be5df4988e5a16bcec78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goodall, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:45:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2574: 305 pgs: 305 active+clean; 281 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 72 KiB/s rd, 15 KiB/s wr, 5 op/s
Oct 02 12:45:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:45:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:50.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:45:50 compute-0 nova_compute[257802]: 2025-10-02 12:45:50.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:50 compute-0 nova_compute[257802]: 2025-10-02 12:45:50.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:50 compute-0 confident_goodall[359818]: {
Oct 02 12:45:50 compute-0 confident_goodall[359818]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:45:50 compute-0 confident_goodall[359818]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:45:50 compute-0 confident_goodall[359818]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:45:50 compute-0 confident_goodall[359818]:         "osd_id": 1,
Oct 02 12:45:50 compute-0 confident_goodall[359818]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:45:50 compute-0 confident_goodall[359818]:         "type": "bluestore"
Oct 02 12:45:50 compute-0 confident_goodall[359818]:     }
Oct 02 12:45:50 compute-0 confident_goodall[359818]: }
Oct 02 12:45:50 compute-0 systemd[1]: libpod-fcfe282aeb1b2d5ac16431c0913babc6dbff6e4322a8be5df4988e5a16bcec78.scope: Deactivated successfully.
Oct 02 12:45:50 compute-0 podman[359839]: 2025-10-02 12:45:50.576574693 +0000 UTC m=+0.023244792 container died fcfe282aeb1b2d5ac16431c0913babc6dbff6e4322a8be5df4988e5a16bcec78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goodall, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 12:45:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f8b5b2398b3c09a1d1c1dc824b447e82f2d2f0843cd7a59c82543c0c12191ac-merged.mount: Deactivated successfully.
Oct 02 12:45:50 compute-0 podman[359839]: 2025-10-02 12:45:50.649912295 +0000 UTC m=+0.096582374 container remove fcfe282aeb1b2d5ac16431c0913babc6dbff6e4322a8be5df4988e5a16bcec78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:45:50 compute-0 systemd[1]: libpod-conmon-fcfe282aeb1b2d5ac16431c0913babc6dbff6e4322a8be5df4988e5a16bcec78.scope: Deactivated successfully.
Oct 02 12:45:50 compute-0 sudo[359695]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:45:50 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:45:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:45:50 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:45:50 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev fdb1f209-e955-4272-a54e-067b01d2f4d2 does not exist
Oct 02 12:45:50 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev ae899640-14c3-48be-aa62-dea2ab836cd5 does not exist
Oct 02 12:45:50 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 020d40d6-ddbc-4aa1-9bfc-1d782d9ca19b does not exist
Oct 02 12:45:50 compute-0 sudo[359855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:45:50 compute-0 sudo[359855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:50 compute-0 sudo[359855]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:50 compute-0 sudo[359880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:45:50 compute-0 sudo[359880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:50 compute-0 sudo[359880]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:51.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:51 compute-0 ceph-mon[73607]: pgmap v2574: 305 pgs: 305 active+clean; 281 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 72 KiB/s rd, 15 KiB/s wr, 5 op/s
Oct 02 12:45:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:45:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:45:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2575: 305 pgs: 305 active+clean; 281 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 818 B/s rd, 15 KiB/s wr, 1 op/s
Oct 02 12:45:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:45:52.033 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:45:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:52.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:45:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:53.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:53 compute-0 ceph-mon[73607]: pgmap v2575: 305 pgs: 305 active+clean; 281 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 818 B/s rd, 15 KiB/s wr, 1 op/s
Oct 02 12:45:53 compute-0 sudo[359906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:45:53 compute-0 sudo[359906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:53 compute-0 sudo[359906]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:53 compute-0 sudo[359931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:45:53 compute-0 sudo[359931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:53 compute-0 sudo[359931]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2576: 305 pgs: 305 active+clean; 281 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s wr, 0 op/s
Oct 02 12:45:54 compute-0 nova_compute[257802]: 2025-10-02 12:45:54.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:54.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:45:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:45:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:45:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:45:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002177227282244556 of space, bias 1.0, pg target 0.6531681846733669 quantized to 32 (current 32)
Oct 02 12:45:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:45:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004331008340312675 of space, bias 1.0, pg target 1.2993025020938025 quantized to 32 (current 32)
Oct 02 12:45:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:45:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:45:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:45:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 12:45:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:45:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 12:45:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:45:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:45:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:45:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027172174530057695 quantized to 32 (current 32)
Oct 02 12:45:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:45:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 12:45:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:45:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:45:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:45:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 12:45:54 compute-0 nova_compute[257802]: 2025-10-02 12:45:54.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:54 compute-0 nova_compute[257802]: 2025-10-02 12:45:54.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:55.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:45:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2683212583' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:45:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:45:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2683212583' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:45:55 compute-0 ceph-mon[73607]: pgmap v2576: 305 pgs: 305 active+clean; 281 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s wr, 0 op/s
Oct 02 12:45:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2683212583' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:45:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2683212583' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:45:55 compute-0 nova_compute[257802]: 2025-10-02 12:45:55.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2577: 305 pgs: 305 active+clean; 281 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s wr, 0 op/s
Oct 02 12:45:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:45:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:56.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:45:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:57.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:57 compute-0 ceph-mon[73607]: pgmap v2577: 305 pgs: 305 active+clean; 281 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s wr, 0 op/s
Oct 02 12:45:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2578: 305 pgs: 305 active+clean; 281 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s wr, 0 op/s
Oct 02 12:45:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:45:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:58.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:45:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:45:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:45:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:59.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:45:59 compute-0 nova_compute[257802]: 2025-10-02 12:45:59.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:59 compute-0 ceph-mon[73607]: pgmap v2578: 305 pgs: 305 active+clean; 281 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s wr, 0 op/s
Oct 02 12:46:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2579: 305 pgs: 305 active+clean; 281 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s wr, 0 op/s
Oct 02 12:46:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:46:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:00.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:46:00 compute-0 nova_compute[257802]: 2025-10-02 12:46:00.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:00 compute-0 nova_compute[257802]: 2025-10-02 12:46:00.751 2 DEBUG nova.compute.manager [req-2f11f92f-b6f4-43ab-9a6e-6e57286719be req-0469ffa9-cea4-43af-b1fd-e4ba91c8c37a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Received event network-changed-36888ba0-b822-4067-a556-6a12a1136d08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:00 compute-0 nova_compute[257802]: 2025-10-02 12:46:00.752 2 DEBUG nova.compute.manager [req-2f11f92f-b6f4-43ab-9a6e-6e57286719be req-0469ffa9-cea4-43af-b1fd-e4ba91c8c37a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Refreshing instance network info cache due to event network-changed-36888ba0-b822-4067-a556-6a12a1136d08. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:46:00 compute-0 nova_compute[257802]: 2025-10-02 12:46:00.753 2 DEBUG oslo_concurrency.lockutils [req-2f11f92f-b6f4-43ab-9a6e-6e57286719be req-0469ffa9-cea4-43af-b1fd-e4ba91c8c37a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-38b13275-2908-42f3-bb70-73c050f375ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:46:00 compute-0 nova_compute[257802]: 2025-10-02 12:46:00.754 2 DEBUG oslo_concurrency.lockutils [req-2f11f92f-b6f4-43ab-9a6e-6e57286719be req-0469ffa9-cea4-43af-b1fd-e4ba91c8c37a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-38b13275-2908-42f3-bb70-73c050f375ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:46:00 compute-0 nova_compute[257802]: 2025-10-02 12:46:00.754 2 DEBUG nova.network.neutron [req-2f11f92f-b6f4-43ab-9a6e-6e57286719be req-0469ffa9-cea4-43af-b1fd-e4ba91c8c37a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Refreshing network info cache for port 36888ba0-b822-4067-a556-6a12a1136d08 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:46:00 compute-0 nova_compute[257802]: 2025-10-02 12:46:00.880 2 DEBUG oslo_concurrency.lockutils [None req-1697a09a-eabd-4780-a539-eb757466bd99 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Acquiring lock "38b13275-2908-42f3-bb70-73c050f375ea" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:00 compute-0 nova_compute[257802]: 2025-10-02 12:46:00.881 2 DEBUG oslo_concurrency.lockutils [None req-1697a09a-eabd-4780-a539-eb757466bd99 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lock "38b13275-2908-42f3-bb70-73c050f375ea" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:00 compute-0 nova_compute[257802]: 2025-10-02 12:46:00.881 2 DEBUG oslo_concurrency.lockutils [None req-1697a09a-eabd-4780-a539-eb757466bd99 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Acquiring lock "38b13275-2908-42f3-bb70-73c050f375ea-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:00 compute-0 nova_compute[257802]: 2025-10-02 12:46:00.881 2 DEBUG oslo_concurrency.lockutils [None req-1697a09a-eabd-4780-a539-eb757466bd99 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lock "38b13275-2908-42f3-bb70-73c050f375ea-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:00 compute-0 nova_compute[257802]: 2025-10-02 12:46:00.882 2 DEBUG oslo_concurrency.lockutils [None req-1697a09a-eabd-4780-a539-eb757466bd99 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lock "38b13275-2908-42f3-bb70-73c050f375ea-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:00 compute-0 nova_compute[257802]: 2025-10-02 12:46:00.883 2 INFO nova.compute.manager [None req-1697a09a-eabd-4780-a539-eb757466bd99 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Terminating instance
Oct 02 12:46:00 compute-0 nova_compute[257802]: 2025-10-02 12:46:00.884 2 DEBUG nova.compute.manager [None req-1697a09a-eabd-4780-a539-eb757466bd99 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:46:00 compute-0 kernel: tap36888ba0-b8 (unregistering): left promiscuous mode
Oct 02 12:46:00 compute-0 NetworkManager[44987]: <info>  [1759409160.9560] device (tap36888ba0-b8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:46:00 compute-0 nova_compute[257802]: 2025-10-02 12:46:00.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:00 compute-0 ovn_controller[148183]: 2025-10-02T12:46:00Z|00717|binding|INFO|Releasing lport 36888ba0-b822-4067-a556-6a12a1136d08 from this chassis (sb_readonly=0)
Oct 02 12:46:00 compute-0 ovn_controller[148183]: 2025-10-02T12:46:00Z|00718|binding|INFO|Setting lport 36888ba0-b822-4067-a556-6a12a1136d08 down in Southbound
Oct 02 12:46:00 compute-0 ovn_controller[148183]: 2025-10-02T12:46:00Z|00719|binding|INFO|Removing iface tap36888ba0-b8 ovn-installed in OVS
Oct 02 12:46:00 compute-0 nova_compute[257802]: 2025-10-02 12:46:00.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:01 compute-0 nova_compute[257802]: 2025-10-02 12:46:01.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:01.022 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:89:9a 10.100.0.12'], port_security=['fa:16:3e:8d:89:9a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '38b13275-2908-42f3-bb70-73c050f375ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b8f9114c7ab4b6e9fc9650d4bd08af9', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'cfad1590-a7c0-4c27-a7db-b88ec54c64dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=04a89c39-8141-4654-8368-c858180215b3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=36888ba0-b822-4067-a556-6a12a1136d08) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:46:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:01.023 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 36888ba0-b822-4067-a556-6a12a1136d08 in datapath 6ea0a90a-9528-4fe1-8b35-dfde9b35e85f unbound from our chassis
Oct 02 12:46:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:01.024 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6ea0a90a-9528-4fe1-8b35-dfde9b35e85f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:46:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:01.025 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[90e56943-9f46-497b-86a9-add5e139073b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:01.026 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f namespace which is not needed anymore
Oct 02 12:46:01 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000a1.scope: Deactivated successfully.
Oct 02 12:46:01 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000a1.scope: Consumed 14.261s CPU time.
Oct 02 12:46:01 compute-0 systemd-machined[211836]: Machine qemu-80-instance-000000a1 terminated.
Oct 02 12:46:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:01.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:01 compute-0 nova_compute[257802]: 2025-10-02 12:46:01.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:01 compute-0 nova_compute[257802]: 2025-10-02 12:46:01.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:01 compute-0 nova_compute[257802]: 2025-10-02 12:46:01.138 2 INFO nova.virt.libvirt.driver [-] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Instance destroyed successfully.
Oct 02 12:46:01 compute-0 nova_compute[257802]: 2025-10-02 12:46:01.139 2 DEBUG nova.objects.instance [None req-1697a09a-eabd-4780-a539-eb757466bd99 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lazy-loading 'resources' on Instance uuid 38b13275-2908-42f3-bb70-73c050f375ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:46:01 compute-0 nova_compute[257802]: 2025-10-02 12:46:01.198 2 DEBUG nova.virt.libvirt.vif [None req-1697a09a-eabd-4780-a539-eb757466bd99 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:44:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-59400876',display_name='tempest-TestShelveInstance-server-59400876',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-59400876',id=161,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOey7OItBT8ky28T+jgqzhjbvGoB+pWZYmixBVLc/rLrtlb2/muXhDo2zj9MYH0P2A6ukY8/c6TiMhKqcmGhKPZ0/ha7STFDDz62rpDlcbzBiZArK4kjT3veuuC9b5czRQ==',key_name='tempest-TestShelveInstance-1912831029',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:45:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4b8f9114c7ab4b6e9fc9650d4bd08af9',ramdisk_id='',reservation_id='r-iu4ddc0y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestShelveInstance-1219039163',owner_user_name='tempest-TestShelveInstance-1219039163-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:45:25Z,user_data=None,user_id='56c6abe1bb704c8aa499677aeb9017f5',uuid=38b13275-2908-42f3-bb70-73c050f375ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "36888ba0-b822-4067-a556-6a12a1136d08", "address": "fa:16:3e:8d:89:9a", "network": {"id": "6ea0a90a-9528-4fe1-8b35-dfde9b35e85f", "bridge": "br-int", "label": "tempest-TestShelveInstance-563697374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8f9114c7ab4b6e9fc9650d4bd08af9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36888ba0-b8", "ovs_interfaceid": "36888ba0-b822-4067-a556-6a12a1136d08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:46:01 compute-0 nova_compute[257802]: 2025-10-02 12:46:01.200 2 DEBUG nova.network.os_vif_util [None req-1697a09a-eabd-4780-a539-eb757466bd99 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Converting VIF {"id": "36888ba0-b822-4067-a556-6a12a1136d08", "address": "fa:16:3e:8d:89:9a", "network": {"id": "6ea0a90a-9528-4fe1-8b35-dfde9b35e85f", "bridge": "br-int", "label": "tempest-TestShelveInstance-563697374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8f9114c7ab4b6e9fc9650d4bd08af9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36888ba0-b8", "ovs_interfaceid": "36888ba0-b822-4067-a556-6a12a1136d08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:46:01 compute-0 nova_compute[257802]: 2025-10-02 12:46:01.201 2 DEBUG nova.network.os_vif_util [None req-1697a09a-eabd-4780-a539-eb757466bd99 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:89:9a,bridge_name='br-int',has_traffic_filtering=True,id=36888ba0-b822-4067-a556-6a12a1136d08,network=Network(6ea0a90a-9528-4fe1-8b35-dfde9b35e85f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36888ba0-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:46:01 compute-0 nova_compute[257802]: 2025-10-02 12:46:01.202 2 DEBUG os_vif [None req-1697a09a-eabd-4780-a539-eb757466bd99 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:89:9a,bridge_name='br-int',has_traffic_filtering=True,id=36888ba0-b822-4067-a556-6a12a1136d08,network=Network(6ea0a90a-9528-4fe1-8b35-dfde9b35e85f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36888ba0-b8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:46:01 compute-0 neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f[358740]: [NOTICE]   (358744) : haproxy version is 2.8.14-c23fe91
Oct 02 12:46:01 compute-0 neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f[358740]: [NOTICE]   (358744) : path to executable is /usr/sbin/haproxy
Oct 02 12:46:01 compute-0 neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f[358740]: [WARNING]  (358744) : Exiting Master process...
Oct 02 12:46:01 compute-0 nova_compute[257802]: 2025-10-02 12:46:01.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:01 compute-0 nova_compute[257802]: 2025-10-02 12:46:01.205 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap36888ba0-b8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:01 compute-0 neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f[358740]: [ALERT]    (358744) : Current worker (358746) exited with code 143 (Terminated)
Oct 02 12:46:01 compute-0 neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f[358740]: [WARNING]  (358744) : All workers exited. Exiting... (0)
Oct 02 12:46:01 compute-0 nova_compute[257802]: 2025-10-02 12:46:01.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:01 compute-0 systemd[1]: libpod-44f5586b3a27b9b41f7fc6cad820e5a4ec20f03a83054868f1a1460e1b90fdf7.scope: Deactivated successfully.
Oct 02 12:46:01 compute-0 nova_compute[257802]: 2025-10-02 12:46:01.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:01 compute-0 conmon[358740]: conmon 44f5586b3a27b9b41f7f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-44f5586b3a27b9b41f7fc6cad820e5a4ec20f03a83054868f1a1460e1b90fdf7.scope/container/memory.events
Oct 02 12:46:01 compute-0 nova_compute[257802]: 2025-10-02 12:46:01.253 2 INFO os_vif [None req-1697a09a-eabd-4780-a539-eb757466bd99 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:89:9a,bridge_name='br-int',has_traffic_filtering=True,id=36888ba0-b822-4067-a556-6a12a1136d08,network=Network(6ea0a90a-9528-4fe1-8b35-dfde9b35e85f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36888ba0-b8')
Oct 02 12:46:01 compute-0 podman[359985]: 2025-10-02 12:46:01.258767569 +0000 UTC m=+0.133964351 container died 44f5586b3a27b9b41f7fc6cad820e5a4ec20f03a83054868f1a1460e1b90fdf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:46:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-44f5586b3a27b9b41f7fc6cad820e5a4ec20f03a83054868f1a1460e1b90fdf7-userdata-shm.mount: Deactivated successfully.
Oct 02 12:46:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-32504923aae0cdcc6674f735804da4aa080d5356ad5f46ac608ff07ecd185e06-merged.mount: Deactivated successfully.
Oct 02 12:46:01 compute-0 podman[359985]: 2025-10-02 12:46:01.306712447 +0000 UTC m=+0.181909219 container cleanup 44f5586b3a27b9b41f7fc6cad820e5a4ec20f03a83054868f1a1460e1b90fdf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:46:01 compute-0 systemd[1]: libpod-conmon-44f5586b3a27b9b41f7fc6cad820e5a4ec20f03a83054868f1a1460e1b90fdf7.scope: Deactivated successfully.
Oct 02 12:46:01 compute-0 ceph-mon[73607]: pgmap v2579: 305 pgs: 305 active+clean; 281 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s wr, 0 op/s
Oct 02 12:46:01 compute-0 podman[360036]: 2025-10-02 12:46:01.396032251 +0000 UTC m=+0.057607166 container remove 44f5586b3a27b9b41f7fc6cad820e5a4ec20f03a83054868f1a1460e1b90fdf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 12:46:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:01.404 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[865a2f9b-49d0-42a0-bbe8-ffc05b82e4ce]: (4, ('Thu Oct  2 12:46:01 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f (44f5586b3a27b9b41f7fc6cad820e5a4ec20f03a83054868f1a1460e1b90fdf7)\n44f5586b3a27b9b41f7fc6cad820e5a4ec20f03a83054868f1a1460e1b90fdf7\nThu Oct  2 12:46:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f (44f5586b3a27b9b41f7fc6cad820e5a4ec20f03a83054868f1a1460e1b90fdf7)\n44f5586b3a27b9b41f7fc6cad820e5a4ec20f03a83054868f1a1460e1b90fdf7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:01.407 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a4e02d4a-a77c-4ae2-9395-8b4b122e992f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:01.408 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ea0a90a-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:01 compute-0 nova_compute[257802]: 2025-10-02 12:46:01.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:01 compute-0 kernel: tap6ea0a90a-90: left promiscuous mode
Oct 02 12:46:01 compute-0 nova_compute[257802]: 2025-10-02 12:46:01.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:01.418 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[038111d3-fa7a-4965-929d-8f01c456c4b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:01 compute-0 nova_compute[257802]: 2025-10-02 12:46:01.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:01.467 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d7e6c899-c788-4dea-969e-7f4cca97e455]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:01.469 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f8510010-9ff9-4d4b-8dc9-fed52dc486b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:01.487 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f94bdfb3-3a94-4668-88e0-c147d3ac8dc2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719074, 'reachable_time': 28039, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360054, 'error': None, 'target': 'ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:01.489 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6ea0a90a-9528-4fe1-8b35-dfde9b35e85f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:46:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:01.489 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[4aca3666-c5ea-4eb7-920b-12ef4e4f1457]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:01 compute-0 systemd[1]: run-netns-ovnmeta\x2d6ea0a90a\x2d9528\x2d4fe1\x2d8b35\x2ddfde9b35e85f.mount: Deactivated successfully.
Oct 02 12:46:01 compute-0 nova_compute[257802]: 2025-10-02 12:46:01.889 2 INFO nova.virt.libvirt.driver [None req-1697a09a-eabd-4780-a539-eb757466bd99 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Deleting instance files /var/lib/nova/instances/38b13275-2908-42f3-bb70-73c050f375ea_del
Oct 02 12:46:01 compute-0 nova_compute[257802]: 2025-10-02 12:46:01.891 2 INFO nova.virt.libvirt.driver [None req-1697a09a-eabd-4780-a539-eb757466bd99 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Deletion of /var/lib/nova/instances/38b13275-2908-42f3-bb70-73c050f375ea_del complete
Oct 02 12:46:01 compute-0 nova_compute[257802]: 2025-10-02 12:46:01.978 2 INFO nova.compute.manager [None req-1697a09a-eabd-4780-a539-eb757466bd99 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Took 1.09 seconds to destroy the instance on the hypervisor.
Oct 02 12:46:01 compute-0 nova_compute[257802]: 2025-10-02 12:46:01.978 2 DEBUG oslo.service.loopingcall [None req-1697a09a-eabd-4780-a539-eb757466bd99 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:46:01 compute-0 nova_compute[257802]: 2025-10-02 12:46:01.979 2 DEBUG nova.compute.manager [-] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:46:01 compute-0 nova_compute[257802]: 2025-10-02 12:46:01.979 2 DEBUG nova.network.neutron [-] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:46:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2580: 305 pgs: 305 active+clean; 281 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 135 KiB/s rd, 1.3 KiB/s wr, 2 op/s
Oct 02 12:46:02 compute-0 nova_compute[257802]: 2025-10-02 12:46:02.171 2 DEBUG nova.compute.manager [req-1524fd0a-25f6-4718-a4ed-2e8f25044348 req-61baa4b9-757c-4335-9d5d-53857a0f0522 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Received event network-vif-unplugged-36888ba0-b822-4067-a556-6a12a1136d08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:02 compute-0 nova_compute[257802]: 2025-10-02 12:46:02.173 2 DEBUG oslo_concurrency.lockutils [req-1524fd0a-25f6-4718-a4ed-2e8f25044348 req-61baa4b9-757c-4335-9d5d-53857a0f0522 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "38b13275-2908-42f3-bb70-73c050f375ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:02 compute-0 nova_compute[257802]: 2025-10-02 12:46:02.173 2 DEBUG oslo_concurrency.lockutils [req-1524fd0a-25f6-4718-a4ed-2e8f25044348 req-61baa4b9-757c-4335-9d5d-53857a0f0522 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "38b13275-2908-42f3-bb70-73c050f375ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:02 compute-0 nova_compute[257802]: 2025-10-02 12:46:02.174 2 DEBUG oslo_concurrency.lockutils [req-1524fd0a-25f6-4718-a4ed-2e8f25044348 req-61baa4b9-757c-4335-9d5d-53857a0f0522 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "38b13275-2908-42f3-bb70-73c050f375ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:02 compute-0 nova_compute[257802]: 2025-10-02 12:46:02.174 2 DEBUG nova.compute.manager [req-1524fd0a-25f6-4718-a4ed-2e8f25044348 req-61baa4b9-757c-4335-9d5d-53857a0f0522 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] No waiting events found dispatching network-vif-unplugged-36888ba0-b822-4067-a556-6a12a1136d08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:46:02 compute-0 nova_compute[257802]: 2025-10-02 12:46:02.175 2 DEBUG nova.compute.manager [req-1524fd0a-25f6-4718-a4ed-2e8f25044348 req-61baa4b9-757c-4335-9d5d-53857a0f0522 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Received event network-vif-unplugged-36888ba0-b822-4067-a556-6a12a1136d08 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:46:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:46:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:02.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:46:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2674747343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2659117791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:02 compute-0 nova_compute[257802]: 2025-10-02 12:46:02.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:03.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:03 compute-0 nova_compute[257802]: 2025-10-02 12:46:03.209 2 DEBUG nova.network.neutron [-] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:46:03 compute-0 nova_compute[257802]: 2025-10-02 12:46:03.264 2 INFO nova.compute.manager [-] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Took 1.29 seconds to deallocate network for instance.
Oct 02 12:46:03 compute-0 ceph-mon[73607]: pgmap v2580: 305 pgs: 305 active+clean; 281 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 135 KiB/s rd, 1.3 KiB/s wr, 2 op/s
Oct 02 12:46:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/845368689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:03 compute-0 nova_compute[257802]: 2025-10-02 12:46:03.554 2 DEBUG nova.compute.manager [req-7aa0a7ff-0bb4-4b58-91cb-21c3a04c063f req-6dc9f576-3239-4603-9678-383ca4fedbf0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Received event network-vif-deleted-36888ba0-b822-4067-a556-6a12a1136d08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:04 compute-0 nova_compute[257802]: 2025-10-02 12:46:04.002 2 DEBUG nova.network.neutron [req-2f11f92f-b6f4-43ab-9a6e-6e57286719be req-0469ffa9-cea4-43af-b1fd-e4ba91c8c37a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Updated VIF entry in instance network info cache for port 36888ba0-b822-4067-a556-6a12a1136d08. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:46:04 compute-0 nova_compute[257802]: 2025-10-02 12:46:04.002 2 DEBUG nova.network.neutron [req-2f11f92f-b6f4-43ab-9a6e-6e57286719be req-0469ffa9-cea4-43af-b1fd-e4ba91c8c37a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Updating instance_info_cache with network_info: [{"id": "36888ba0-b822-4067-a556-6a12a1136d08", "address": "fa:16:3e:8d:89:9a", "network": {"id": "6ea0a90a-9528-4fe1-8b35-dfde9b35e85f", "bridge": "br-int", "label": "tempest-TestShelveInstance-563697374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8f9114c7ab4b6e9fc9650d4bd08af9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36888ba0-b8", "ovs_interfaceid": "36888ba0-b822-4067-a556-6a12a1136d08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:46:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2581: 305 pgs: 305 active+clean; 289 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 203 KiB/s rd, 273 KiB/s wr, 25 op/s
Oct 02 12:46:04 compute-0 nova_compute[257802]: 2025-10-02 12:46:04.044 2 DEBUG oslo_concurrency.lockutils [req-2f11f92f-b6f4-43ab-9a6e-6e57286719be req-0469ffa9-cea4-43af-b1fd-e4ba91c8c37a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-38b13275-2908-42f3-bb70-73c050f375ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:46:04 compute-0 nova_compute[257802]: 2025-10-02 12:46:04.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:04.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:04 compute-0 nova_compute[257802]: 2025-10-02 12:46:04.425 2 DEBUG nova.compute.manager [req-be3b8607-9ead-451d-b75c-a3d8ee98eca0 req-85a65dac-8f90-4afe-bbcd-42d730766959 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Received event network-vif-plugged-36888ba0-b822-4067-a556-6a12a1136d08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:04 compute-0 nova_compute[257802]: 2025-10-02 12:46:04.426 2 DEBUG oslo_concurrency.lockutils [req-be3b8607-9ead-451d-b75c-a3d8ee98eca0 req-85a65dac-8f90-4afe-bbcd-42d730766959 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "38b13275-2908-42f3-bb70-73c050f375ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:04 compute-0 nova_compute[257802]: 2025-10-02 12:46:04.427 2 DEBUG oslo_concurrency.lockutils [req-be3b8607-9ead-451d-b75c-a3d8ee98eca0 req-85a65dac-8f90-4afe-bbcd-42d730766959 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "38b13275-2908-42f3-bb70-73c050f375ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:04 compute-0 nova_compute[257802]: 2025-10-02 12:46:04.427 2 DEBUG oslo_concurrency.lockutils [req-be3b8607-9ead-451d-b75c-a3d8ee98eca0 req-85a65dac-8f90-4afe-bbcd-42d730766959 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "38b13275-2908-42f3-bb70-73c050f375ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:04 compute-0 nova_compute[257802]: 2025-10-02 12:46:04.428 2 DEBUG nova.compute.manager [req-be3b8607-9ead-451d-b75c-a3d8ee98eca0 req-85a65dac-8f90-4afe-bbcd-42d730766959 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] No waiting events found dispatching network-vif-plugged-36888ba0-b822-4067-a556-6a12a1136d08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:46:04 compute-0 nova_compute[257802]: 2025-10-02 12:46:04.428 2 WARNING nova.compute.manager [req-be3b8607-9ead-451d-b75c-a3d8ee98eca0 req-85a65dac-8f90-4afe-bbcd-42d730766959 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Received unexpected event network-vif-plugged-36888ba0-b822-4067-a556-6a12a1136d08 for instance with vm_state active and task_state deleting.
Oct 02 12:46:04 compute-0 nova_compute[257802]: 2025-10-02 12:46:04.488 2 INFO nova.compute.manager [None req-1697a09a-eabd-4780-a539-eb757466bd99 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Took 1.22 seconds to detach 1 volumes for instance.
Oct 02 12:46:04 compute-0 nova_compute[257802]: 2025-10-02 12:46:04.490 2 DEBUG nova.compute.manager [None req-1697a09a-eabd-4780-a539-eb757466bd99 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Deleting volume: 857b1b6f-42d5-4289-a74d-1acb4fd6b032 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Oct 02 12:46:05 compute-0 nova_compute[257802]: 2025-10-02 12:46:05.008 2 DEBUG oslo_concurrency.lockutils [None req-1697a09a-eabd-4780-a539-eb757466bd99 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:05 compute-0 nova_compute[257802]: 2025-10-02 12:46:05.009 2 DEBUG oslo_concurrency.lockutils [None req-1697a09a-eabd-4780-a539-eb757466bd99 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:05.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:05 compute-0 nova_compute[257802]: 2025-10-02 12:46:05.067 2 DEBUG oslo_concurrency.processutils [None req-1697a09a-eabd-4780-a539-eb757466bd99 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:05 compute-0 nova_compute[257802]: 2025-10-02 12:46:05.103 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:05 compute-0 nova_compute[257802]: 2025-10-02 12:46:05.104 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:46:05 compute-0 ceph-mon[73607]: pgmap v2581: 305 pgs: 305 active+clean; 289 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 203 KiB/s rd, 273 KiB/s wr, 25 op/s
Oct 02 12:46:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1004384928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:46:05 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3622875506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:05 compute-0 nova_compute[257802]: 2025-10-02 12:46:05.520 2 DEBUG oslo_concurrency.processutils [None req-1697a09a-eabd-4780-a539-eb757466bd99 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:05 compute-0 nova_compute[257802]: 2025-10-02 12:46:05.528 2 DEBUG nova.compute.provider_tree [None req-1697a09a-eabd-4780-a539-eb757466bd99 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:46:05 compute-0 nova_compute[257802]: 2025-10-02 12:46:05.556 2 DEBUG nova.scheduler.client.report [None req-1697a09a-eabd-4780-a539-eb757466bd99 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:46:05 compute-0 nova_compute[257802]: 2025-10-02 12:46:05.591 2 DEBUG oslo_concurrency.lockutils [None req-1697a09a-eabd-4780-a539-eb757466bd99 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:05 compute-0 nova_compute[257802]: 2025-10-02 12:46:05.672 2 INFO nova.scheduler.client.report [None req-1697a09a-eabd-4780-a539-eb757466bd99 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Deleted allocations for instance 38b13275-2908-42f3-bb70-73c050f375ea
Oct 02 12:46:05 compute-0 nova_compute[257802]: 2025-10-02 12:46:05.792 2 DEBUG oslo_concurrency.lockutils [None req-1697a09a-eabd-4780-a539-eb757466bd99 56c6abe1bb704c8aa499677aeb9017f5 4b8f9114c7ab4b6e9fc9650d4bd08af9 - - default default] Lock "38b13275-2908-42f3-bb70-73c050f375ea" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.911s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2582: 305 pgs: 305 active+clean; 303 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 235 KiB/s rd, 821 KiB/s wr, 38 op/s
Oct 02 12:46:06 compute-0 nova_compute[257802]: 2025-10-02 12:46:06.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:06.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:46:06 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2962579119' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:46:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:46:06 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2962579119' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:46:06 compute-0 nova_compute[257802]: 2025-10-02 12:46:06.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:06 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3622875506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:06 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2962579119' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:46:06 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2962579119' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:46:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:07.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:07 compute-0 ceph-mon[73607]: pgmap v2582: 305 pgs: 305 active+clean; 303 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 235 KiB/s rd, 821 KiB/s wr, 38 op/s
Oct 02 12:46:07 compute-0 podman[360082]: 2025-10-02 12:46:07.919566591 +0000 UTC m=+0.056244643 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001)
Oct 02 12:46:07 compute-0 podman[360083]: 2025-10-02 12:46:07.921921259 +0000 UTC m=+0.055752891 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:46:07 compute-0 podman[360081]: 2025-10-02 12:46:07.942604777 +0000 UTC m=+0.080366954 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 02 12:46:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2583: 305 pgs: 305 active+clean; 308 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 239 KiB/s rd, 1.9 MiB/s wr, 47 op/s
Oct 02 12:46:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:08.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:09.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:09 compute-0 nova_compute[257802]: 2025-10-02 12:46:09.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:09 compute-0 ceph-mon[73607]: pgmap v2583: 305 pgs: 305 active+clean; 308 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 239 KiB/s rd, 1.9 MiB/s wr, 47 op/s
Oct 02 12:46:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2584: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 267 KiB/s rd, 3.5 MiB/s wr, 90 op/s
Oct 02 12:46:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:10.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:46:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:11.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:46:11 compute-0 nova_compute[257802]: 2025-10-02 12:46:11.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:11 compute-0 nova_compute[257802]: 2025-10-02 12:46:11.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:11 compute-0 ceph-mon[73607]: pgmap v2584: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 267 KiB/s rd, 3.5 MiB/s wr, 90 op/s
Oct 02 12:46:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1504725169' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2585: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 267 KiB/s rd, 3.5 MiB/s wr, 90 op/s
Oct 02 12:46:12 compute-0 nova_compute[257802]: 2025-10-02 12:46:12.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:12.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/662368703' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3989483832' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3642926876' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:46:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:46:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:46:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:46:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:46:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:46:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:46:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:13.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:46:13 compute-0 nova_compute[257802]: 2025-10-02 12:46:13.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:13 compute-0 ceph-mon[73607]: pgmap v2585: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 267 KiB/s rd, 3.5 MiB/s wr, 90 op/s
Oct 02 12:46:13 compute-0 sudo[360140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:13 compute-0 sudo[360140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:13 compute-0 sudo[360140]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:13 compute-0 sudo[360171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:13 compute-0 sudo[360171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:13 compute-0 sudo[360171]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:13 compute-0 podman[360164]: 2025-10-02 12:46:13.803849037 +0000 UTC m=+0.158857083 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller)
Oct 02 12:46:13 compute-0 nova_compute[257802]: 2025-10-02 12:46:13.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2586: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 132 KiB/s rd, 3.5 MiB/s wr, 87 op/s
Oct 02 12:46:14 compute-0 nova_compute[257802]: 2025-10-02 12:46:14.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:46:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:14.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:46:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:46:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:15.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:46:15 compute-0 ceph-mon[73607]: pgmap v2586: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 132 KiB/s rd, 3.5 MiB/s wr, 87 op/s
Oct 02 12:46:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2587: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 3.3 MiB/s wr, 67 op/s
Oct 02 12:46:16 compute-0 nova_compute[257802]: 2025-10-02 12:46:16.136 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409161.134569, 38b13275-2908-42f3-bb70-73c050f375ea => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:46:16 compute-0 nova_compute[257802]: 2025-10-02 12:46:16.136 2 INFO nova.compute.manager [-] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] VM Stopped (Lifecycle Event)
Oct 02 12:46:16 compute-0 nova_compute[257802]: 2025-10-02 12:46:16.171 2 DEBUG nova.compute.manager [None req-98f0fb1f-e7a1-491a-a73d-ad080b7eaf8e - - - - - -] [instance: 38b13275-2908-42f3-bb70-73c050f375ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:46:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:16.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:16 compute-0 nova_compute[257802]: 2025-10-02 12:46:16.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:16 compute-0 ceph-mon[73607]: pgmap v2587: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 3.3 MiB/s wr, 67 op/s
Oct 02 12:46:16 compute-0 nova_compute[257802]: 2025-10-02 12:46:16.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:17.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:17 compute-0 nova_compute[257802]: 2025-10-02 12:46:17.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:17 compute-0 nova_compute[257802]: 2025-10-02 12:46:17.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:17 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1529374331' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2588: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 851 KiB/s rd, 2.8 MiB/s wr, 91 op/s
Oct 02 12:46:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:18.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:18 compute-0 ceph-mon[73607]: pgmap v2588: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 851 KiB/s rd, 2.8 MiB/s wr, 91 op/s
Oct 02 12:46:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1007076917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:19.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:19 compute-0 nova_compute[257802]: 2025-10-02 12:46:19.127 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:19 compute-0 nova_compute[257802]: 2025-10-02 12:46:19.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2589: 305 pgs: 305 active+clean; 293 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.7 MiB/s wr, 176 op/s
Oct 02 12:46:20 compute-0 nova_compute[257802]: 2025-10-02 12:46:20.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:20 compute-0 nova_compute[257802]: 2025-10-02 12:46:20.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:46:20 compute-0 nova_compute[257802]: 2025-10-02 12:46:20.158 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:46:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:20.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:21.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:21 compute-0 ceph-mon[73607]: pgmap v2589: 305 pgs: 305 active+clean; 293 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.7 MiB/s wr, 176 op/s
Oct 02 12:46:21 compute-0 nova_compute[257802]: 2025-10-02 12:46:21.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2590: 305 pgs: 305 active+clean; 293 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Oct 02 12:46:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:22.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:23.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:23 compute-0 nova_compute[257802]: 2025-10-02 12:46:23.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:23 compute-0 ceph-mon[73607]: pgmap v2590: 305 pgs: 305 active+clean; 293 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Oct 02 12:46:23 compute-0 nova_compute[257802]: 2025-10-02 12:46:23.126 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:23 compute-0 nova_compute[257802]: 2025-10-02 12:46:23.127 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:23 compute-0 nova_compute[257802]: 2025-10-02 12:46:23.127 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:23 compute-0 nova_compute[257802]: 2025-10-02 12:46:23.127 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:46:23 compute-0 nova_compute[257802]: 2025-10-02 12:46:23.128 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:46:23 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3343397400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:23 compute-0 nova_compute[257802]: 2025-10-02 12:46:23.563 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:23 compute-0 nova_compute[257802]: 2025-10-02 12:46:23.723 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:46:23 compute-0 nova_compute[257802]: 2025-10-02 12:46:23.725 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4279MB free_disk=20.90093231201172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:46:23 compute-0 nova_compute[257802]: 2025-10-02 12:46:23.725 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:23 compute-0 nova_compute[257802]: 2025-10-02 12:46:23.725 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:23 compute-0 nova_compute[257802]: 2025-10-02 12:46:23.812 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:46:23 compute-0 nova_compute[257802]: 2025-10-02 12:46:23.812 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:46:23 compute-0 nova_compute[257802]: 2025-10-02 12:46:23.854 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2591: 305 pgs: 305 active+clean; 293 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Oct 02 12:46:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3343397400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:46:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:24.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:46:24 compute-0 nova_compute[257802]: 2025-10-02 12:46:24.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:46:24 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/210517282' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:24 compute-0 nova_compute[257802]: 2025-10-02 12:46:24.267 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:24 compute-0 nova_compute[257802]: 2025-10-02 12:46:24.275 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:46:24 compute-0 nova_compute[257802]: 2025-10-02 12:46:24.307 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:46:24 compute-0 nova_compute[257802]: 2025-10-02 12:46:24.369 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:46:24 compute-0 nova_compute[257802]: 2025-10-02 12:46:24.369 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:24 compute-0 nova_compute[257802]: 2025-10-02 12:46:24.981 2 DEBUG oslo_concurrency.lockutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "da95c339-6bd5-495a-bd12-d1e71a8017b6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:24 compute-0 nova_compute[257802]: 2025-10-02 12:46:24.982 2 DEBUG oslo_concurrency.lockutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "da95c339-6bd5-495a-bd12-d1e71a8017b6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:25 compute-0 nova_compute[257802]: 2025-10-02 12:46:25.002 2 DEBUG nova.compute.manager [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:46:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:46:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:25.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:46:25 compute-0 nova_compute[257802]: 2025-10-02 12:46:25.106 2 DEBUG oslo_concurrency.lockutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:25 compute-0 nova_compute[257802]: 2025-10-02 12:46:25.107 2 DEBUG oslo_concurrency.lockutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:25 compute-0 nova_compute[257802]: 2025-10-02 12:46:25.115 2 DEBUG nova.virt.hardware [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:46:25 compute-0 nova_compute[257802]: 2025-10-02 12:46:25.115 2 INFO nova.compute.claims [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:46:25 compute-0 ceph-mon[73607]: pgmap v2591: 305 pgs: 305 active+clean; 293 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Oct 02 12:46:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/210517282' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:25 compute-0 nova_compute[257802]: 2025-10-02 12:46:25.298 2 DEBUG oslo_concurrency.processutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:46:25 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1638787222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:25 compute-0 nova_compute[257802]: 2025-10-02 12:46:25.716 2 DEBUG oslo_concurrency.processutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:25 compute-0 nova_compute[257802]: 2025-10-02 12:46:25.724 2 DEBUG nova.compute.provider_tree [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:46:25 compute-0 nova_compute[257802]: 2025-10-02 12:46:25.743 2 DEBUG nova.scheduler.client.report [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:46:25 compute-0 nova_compute[257802]: 2025-10-02 12:46:25.784 2 DEBUG oslo_concurrency.lockutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:25 compute-0 nova_compute[257802]: 2025-10-02 12:46:25.785 2 DEBUG nova.compute.manager [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:46:25 compute-0 nova_compute[257802]: 2025-10-02 12:46:25.841 2 DEBUG nova.compute.manager [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:46:25 compute-0 nova_compute[257802]: 2025-10-02 12:46:25.841 2 DEBUG nova.network.neutron [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:46:25 compute-0 nova_compute[257802]: 2025-10-02 12:46:25.890 2 INFO nova.virt.libvirt.driver [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:46:25 compute-0 nova_compute[257802]: 2025-10-02 12:46:25.978 2 DEBUG nova.compute.manager [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:46:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2592: 305 pgs: 305 active+clean; 293 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Oct 02 12:46:26 compute-0 nova_compute[257802]: 2025-10-02 12:46:26.102 2 DEBUG nova.compute.manager [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:46:26 compute-0 nova_compute[257802]: 2025-10-02 12:46:26.103 2 DEBUG nova.virt.libvirt.driver [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:46:26 compute-0 nova_compute[257802]: 2025-10-02 12:46:26.104 2 INFO nova.virt.libvirt.driver [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Creating image(s)
Oct 02 12:46:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:26.123 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:46:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:26.125 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:46:26 compute-0 nova_compute[257802]: 2025-10-02 12:46:26.137 2 DEBUG nova.storage.rbd_utils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image da95c339-6bd5-495a-bd12-d1e71a8017b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:46:26 compute-0 nova_compute[257802]: 2025-10-02 12:46:26.165 2 DEBUG nova.storage.rbd_utils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image da95c339-6bd5-495a-bd12-d1e71a8017b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:46:26 compute-0 nova_compute[257802]: 2025-10-02 12:46:26.193 2 DEBUG nova.storage.rbd_utils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image da95c339-6bd5-495a-bd12-d1e71a8017b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:46:26 compute-0 nova_compute[257802]: 2025-10-02 12:46:26.197 2 DEBUG oslo_concurrency.processutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:26.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:26 compute-0 nova_compute[257802]: 2025-10-02 12:46:26.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:26 compute-0 nova_compute[257802]: 2025-10-02 12:46:26.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:26 compute-0 nova_compute[257802]: 2025-10-02 12:46:26.273 2 DEBUG oslo_concurrency.processutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:26 compute-0 nova_compute[257802]: 2025-10-02 12:46:26.274 2 DEBUG oslo_concurrency.lockutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:26 compute-0 nova_compute[257802]: 2025-10-02 12:46:26.275 2 DEBUG oslo_concurrency.lockutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:26 compute-0 nova_compute[257802]: 2025-10-02 12:46:26.275 2 DEBUG oslo_concurrency.lockutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:26 compute-0 nova_compute[257802]: 2025-10-02 12:46:26.304 2 DEBUG nova.storage.rbd_utils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image da95c339-6bd5-495a-bd12-d1e71a8017b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:46:26 compute-0 nova_compute[257802]: 2025-10-02 12:46:26.307 2 DEBUG oslo_concurrency.processutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 da95c339-6bd5-495a-bd12-d1e71a8017b6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1638787222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:26 compute-0 nova_compute[257802]: 2025-10-02 12:46:26.518 2 DEBUG nova.policy [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb366465e6154871b8a53c9f500105ce', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:46:26 compute-0 nova_compute[257802]: 2025-10-02 12:46:26.612 2 DEBUG oslo_concurrency.processutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 da95c339-6bd5-495a-bd12-d1e71a8017b6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:26 compute-0 nova_compute[257802]: 2025-10-02 12:46:26.679 2 DEBUG nova.storage.rbd_utils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] resizing rbd image da95c339-6bd5-495a-bd12-d1e71a8017b6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:46:26 compute-0 nova_compute[257802]: 2025-10-02 12:46:26.814 2 DEBUG nova.objects.instance [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'migration_context' on Instance uuid da95c339-6bd5-495a-bd12-d1e71a8017b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:46:26 compute-0 nova_compute[257802]: 2025-10-02 12:46:26.863 2 DEBUG nova.virt.libvirt.driver [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:46:26 compute-0 nova_compute[257802]: 2025-10-02 12:46:26.864 2 DEBUG nova.virt.libvirt.driver [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Ensure instance console log exists: /var/lib/nova/instances/da95c339-6bd5-495a-bd12-d1e71a8017b6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:46:26 compute-0 nova_compute[257802]: 2025-10-02 12:46:26.865 2 DEBUG oslo_concurrency.lockutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:26 compute-0 nova_compute[257802]: 2025-10-02 12:46:26.865 2 DEBUG oslo_concurrency.lockutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:26 compute-0 nova_compute[257802]: 2025-10-02 12:46:26.865 2 DEBUG oslo_concurrency.lockutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:26.964 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:26.965 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:26.965 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:27.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:27 compute-0 ceph-mon[73607]: pgmap v2592: 305 pgs: 305 active+clean; 293 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Oct 02 12:46:27 compute-0 nova_compute[257802]: 2025-10-02 12:46:27.794 2 DEBUG nova.network.neutron [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Successfully created port: ba5bd4c0-6961-4ed8-bbac-669993d3af7c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:46:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2593: 305 pgs: 305 active+clean; 313 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.1 MiB/s wr, 159 op/s
Oct 02 12:46:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:28.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:29.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:29 compute-0 nova_compute[257802]: 2025-10-02 12:46:29.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:29 compute-0 nova_compute[257802]: 2025-10-02 12:46:29.288 2 DEBUG nova.network.neutron [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Successfully updated port: ba5bd4c0-6961-4ed8-bbac-669993d3af7c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:46:29 compute-0 nova_compute[257802]: 2025-10-02 12:46:29.470 2 DEBUG oslo_concurrency.lockutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "refresh_cache-da95c339-6bd5-495a-bd12-d1e71a8017b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:46:29 compute-0 nova_compute[257802]: 2025-10-02 12:46:29.471 2 DEBUG oslo_concurrency.lockutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquired lock "refresh_cache-da95c339-6bd5-495a-bd12-d1e71a8017b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:46:29 compute-0 nova_compute[257802]: 2025-10-02 12:46:29.471 2 DEBUG nova.network.neutron [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:46:29 compute-0 nova_compute[257802]: 2025-10-02 12:46:29.514 2 DEBUG nova.compute.manager [req-dbe78182-83a4-4421-9141-ebdc42bcb0e4 req-34a2e2aa-8b65-49e7-83af-eee1abb8dd72 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Received event network-changed-ba5bd4c0-6961-4ed8-bbac-669993d3af7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:29 compute-0 nova_compute[257802]: 2025-10-02 12:46:29.514 2 DEBUG nova.compute.manager [req-dbe78182-83a4-4421-9141-ebdc42bcb0e4 req-34a2e2aa-8b65-49e7-83af-eee1abb8dd72 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Refreshing instance network info cache due to event network-changed-ba5bd4c0-6961-4ed8-bbac-669993d3af7c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:46:29 compute-0 nova_compute[257802]: 2025-10-02 12:46:29.514 2 DEBUG oslo_concurrency.lockutils [req-dbe78182-83a4-4421-9141-ebdc42bcb0e4 req-34a2e2aa-8b65-49e7-83af-eee1abb8dd72 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-da95c339-6bd5-495a-bd12-d1e71a8017b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:46:29 compute-0 ceph-mon[73607]: pgmap v2593: 305 pgs: 305 active+clean; 313 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.1 MiB/s wr, 159 op/s
Oct 02 12:46:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2594: 305 pgs: 305 active+clean; 369 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.6 MiB/s wr, 210 op/s
Oct 02 12:46:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:46:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:30.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:46:30 compute-0 nova_compute[257802]: 2025-10-02 12:46:30.425 2 DEBUG nova.network.neutron [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:46:30 compute-0 ceph-mon[73607]: pgmap v2594: 305 pgs: 305 active+clean; 369 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.6 MiB/s wr, 210 op/s
Oct 02 12:46:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:46:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:31.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:46:31 compute-0 nova_compute[257802]: 2025-10-02 12:46:31.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2595: 305 pgs: 305 active+clean; 400 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 6.0 MiB/s wr, 156 op/s
Oct 02 12:46:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:32.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:32 compute-0 nova_compute[257802]: 2025-10-02 12:46:32.369 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:33.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:33 compute-0 ceph-mon[73607]: pgmap v2595: 305 pgs: 305 active+clean; 400 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 6.0 MiB/s wr, 156 op/s
Oct 02 12:46:33 compute-0 nova_compute[257802]: 2025-10-02 12:46:33.732 2 DEBUG nova.network.neutron [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Updating instance_info_cache with network_info: [{"id": "ba5bd4c0-6961-4ed8-bbac-669993d3af7c", "address": "fa:16:3e:e0:6d:8a", "network": {"id": "c6500315-835f-4d3a-971d-50fa2592498e", "bridge": "br-int", "label": "tempest-network-smoke--376518162", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba5bd4c0-69", "ovs_interfaceid": "ba5bd4c0-6961-4ed8-bbac-669993d3af7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:46:33 compute-0 sudo[360461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:33 compute-0 sudo[360461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:33 compute-0 sudo[360461]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:33 compute-0 nova_compute[257802]: 2025-10-02 12:46:33.771 2 DEBUG oslo_concurrency.lockutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Releasing lock "refresh_cache-da95c339-6bd5-495a-bd12-d1e71a8017b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:46:33 compute-0 nova_compute[257802]: 2025-10-02 12:46:33.771 2 DEBUG nova.compute.manager [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Instance network_info: |[{"id": "ba5bd4c0-6961-4ed8-bbac-669993d3af7c", "address": "fa:16:3e:e0:6d:8a", "network": {"id": "c6500315-835f-4d3a-971d-50fa2592498e", "bridge": "br-int", "label": "tempest-network-smoke--376518162", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba5bd4c0-69", "ovs_interfaceid": "ba5bd4c0-6961-4ed8-bbac-669993d3af7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:46:33 compute-0 nova_compute[257802]: 2025-10-02 12:46:33.772 2 DEBUG oslo_concurrency.lockutils [req-dbe78182-83a4-4421-9141-ebdc42bcb0e4 req-34a2e2aa-8b65-49e7-83af-eee1abb8dd72 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-da95c339-6bd5-495a-bd12-d1e71a8017b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:46:33 compute-0 nova_compute[257802]: 2025-10-02 12:46:33.772 2 DEBUG nova.network.neutron [req-dbe78182-83a4-4421-9141-ebdc42bcb0e4 req-34a2e2aa-8b65-49e7-83af-eee1abb8dd72 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Refreshing network info cache for port ba5bd4c0-6961-4ed8-bbac-669993d3af7c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:46:33 compute-0 nova_compute[257802]: 2025-10-02 12:46:33.775 2 DEBUG nova.virt.libvirt.driver [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Start _get_guest_xml network_info=[{"id": "ba5bd4c0-6961-4ed8-bbac-669993d3af7c", "address": "fa:16:3e:e0:6d:8a", "network": {"id": "c6500315-835f-4d3a-971d-50fa2592498e", "bridge": "br-int", "label": "tempest-network-smoke--376518162", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba5bd4c0-69", "ovs_interfaceid": "ba5bd4c0-6961-4ed8-bbac-669993d3af7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:46:33 compute-0 nova_compute[257802]: 2025-10-02 12:46:33.778 2 WARNING nova.virt.libvirt.driver [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:46:33 compute-0 nova_compute[257802]: 2025-10-02 12:46:33.789 2 DEBUG nova.virt.libvirt.host [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:46:33 compute-0 nova_compute[257802]: 2025-10-02 12:46:33.789 2 DEBUG nova.virt.libvirt.host [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:46:33 compute-0 nova_compute[257802]: 2025-10-02 12:46:33.796 2 DEBUG nova.virt.libvirt.host [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:46:33 compute-0 nova_compute[257802]: 2025-10-02 12:46:33.796 2 DEBUG nova.virt.libvirt.host [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:46:33 compute-0 nova_compute[257802]: 2025-10-02 12:46:33.797 2 DEBUG nova.virt.libvirt.driver [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:46:33 compute-0 nova_compute[257802]: 2025-10-02 12:46:33.797 2 DEBUG nova.virt.hardware [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:46:33 compute-0 nova_compute[257802]: 2025-10-02 12:46:33.798 2 DEBUG nova.virt.hardware [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:46:33 compute-0 nova_compute[257802]: 2025-10-02 12:46:33.798 2 DEBUG nova.virt.hardware [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:46:33 compute-0 nova_compute[257802]: 2025-10-02 12:46:33.798 2 DEBUG nova.virt.hardware [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:46:33 compute-0 nova_compute[257802]: 2025-10-02 12:46:33.799 2 DEBUG nova.virt.hardware [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:46:33 compute-0 nova_compute[257802]: 2025-10-02 12:46:33.799 2 DEBUG nova.virt.hardware [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:46:33 compute-0 nova_compute[257802]: 2025-10-02 12:46:33.799 2 DEBUG nova.virt.hardware [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:46:33 compute-0 nova_compute[257802]: 2025-10-02 12:46:33.799 2 DEBUG nova.virt.hardware [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:46:33 compute-0 nova_compute[257802]: 2025-10-02 12:46:33.800 2 DEBUG nova.virt.hardware [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:46:33 compute-0 nova_compute[257802]: 2025-10-02 12:46:33.800 2 DEBUG nova.virt.hardware [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:46:33 compute-0 nova_compute[257802]: 2025-10-02 12:46:33.800 2 DEBUG nova.virt.hardware [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:46:33 compute-0 nova_compute[257802]: 2025-10-02 12:46:33.803 2 DEBUG oslo_concurrency.processutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:33 compute-0 sudo[360486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:33 compute-0 sudo[360486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:33 compute-0 sudo[360486]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2596: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 663 KiB/s rd, 6.0 MiB/s wr, 154 op/s
Oct 02 12:46:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:46:34 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/240046431' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:34.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.240 2 DEBUG oslo_concurrency.processutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.263 2 DEBUG nova.storage.rbd_utils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image da95c339-6bd5-495a-bd12-d1e71a8017b6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.266 2 DEBUG oslo_concurrency.processutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/240046431' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:46:34 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3236436593' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.684 2 DEBUG oslo_concurrency.processutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.685 2 DEBUG nova.virt.libvirt.vif [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:46:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2123262195',display_name='tempest-TestNetworkBasicOps-server-2123262195',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2123262195',id=166,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH6s/dG1JiNtqePgHlYs1LronjfyxaqEaySu675K65y94dsBugZbcdfbZOjwxnVn3sGta76xhoGcwU6I2nP82629sTfaxQ+DCqERXItbZ2bh8zj2URBCCk/NIgreotxLBg==',key_name='tempest-TestNetworkBasicOps-1527655277',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-m7ns156l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:46:26Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=da95c339-6bd5-495a-bd12-d1e71a8017b6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba5bd4c0-6961-4ed8-bbac-669993d3af7c", "address": "fa:16:3e:e0:6d:8a", "network": {"id": "c6500315-835f-4d3a-971d-50fa2592498e", "bridge": "br-int", "label": "tempest-network-smoke--376518162", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba5bd4c0-69", "ovs_interfaceid": "ba5bd4c0-6961-4ed8-bbac-669993d3af7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.685 2 DEBUG nova.network.os_vif_util [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "ba5bd4c0-6961-4ed8-bbac-669993d3af7c", "address": "fa:16:3e:e0:6d:8a", "network": {"id": "c6500315-835f-4d3a-971d-50fa2592498e", "bridge": "br-int", "label": "tempest-network-smoke--376518162", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba5bd4c0-69", "ovs_interfaceid": "ba5bd4c0-6961-4ed8-bbac-669993d3af7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.686 2 DEBUG nova.network.os_vif_util [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:6d:8a,bridge_name='br-int',has_traffic_filtering=True,id=ba5bd4c0-6961-4ed8-bbac-669993d3af7c,network=Network(c6500315-835f-4d3a-971d-50fa2592498e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba5bd4c0-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.687 2 DEBUG nova.objects.instance [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'pci_devices' on Instance uuid da95c339-6bd5-495a-bd12-d1e71a8017b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.705 2 DEBUG nova.virt.libvirt.driver [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:46:34 compute-0 nova_compute[257802]:   <uuid>da95c339-6bd5-495a-bd12-d1e71a8017b6</uuid>
Oct 02 12:46:34 compute-0 nova_compute[257802]:   <name>instance-000000a6</name>
Oct 02 12:46:34 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:46:34 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:46:34 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:46:34 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:       <nova:name>tempest-TestNetworkBasicOps-server-2123262195</nova:name>
Oct 02 12:46:34 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:46:33</nova:creationTime>
Oct 02 12:46:34 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:46:34 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:46:34 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:46:34 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:46:34 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:46:34 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:46:34 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:46:34 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:46:34 compute-0 nova_compute[257802]:         <nova:user uuid="fb366465e6154871b8a53c9f500105ce">tempest-TestNetworkBasicOps-1692262680-project-member</nova:user>
Oct 02 12:46:34 compute-0 nova_compute[257802]:         <nova:project uuid="ce2ca82c03554560b55ed747ae63f1fb">tempest-TestNetworkBasicOps-1692262680</nova:project>
Oct 02 12:46:34 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:46:34 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:46:34 compute-0 nova_compute[257802]:         <nova:port uuid="ba5bd4c0-6961-4ed8-bbac-669993d3af7c">
Oct 02 12:46:34 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:46:34 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:46:34 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:46:34 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <system>
Oct 02 12:46:34 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:46:34 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:46:34 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:46:34 compute-0 nova_compute[257802]:       <entry name="serial">da95c339-6bd5-495a-bd12-d1e71a8017b6</entry>
Oct 02 12:46:34 compute-0 nova_compute[257802]:       <entry name="uuid">da95c339-6bd5-495a-bd12-d1e71a8017b6</entry>
Oct 02 12:46:34 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     </system>
Oct 02 12:46:34 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:46:34 compute-0 nova_compute[257802]:   <os>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:   </os>
Oct 02 12:46:34 compute-0 nova_compute[257802]:   <features>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:   </features>
Oct 02 12:46:34 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:46:34 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:46:34 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:46:34 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/da95c339-6bd5-495a-bd12-d1e71a8017b6_disk">
Oct 02 12:46:34 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:       </source>
Oct 02 12:46:34 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:46:34 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:46:34 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:46:34 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/da95c339-6bd5-495a-bd12-d1e71a8017b6_disk.config">
Oct 02 12:46:34 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:       </source>
Oct 02 12:46:34 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:46:34 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:46:34 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:46:34 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:e0:6d:8a"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:       <target dev="tapba5bd4c0-69"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:46:34 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/da95c339-6bd5-495a-bd12-d1e71a8017b6/console.log" append="off"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <video>
Oct 02 12:46:34 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     </video>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:46:34 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:46:34 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:46:34 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:46:34 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:46:34 compute-0 nova_compute[257802]: </domain>
Oct 02 12:46:34 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.707 2 DEBUG nova.compute.manager [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Preparing to wait for external event network-vif-plugged-ba5bd4c0-6961-4ed8-bbac-669993d3af7c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.707 2 DEBUG oslo_concurrency.lockutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "da95c339-6bd5-495a-bd12-d1e71a8017b6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.708 2 DEBUG oslo_concurrency.lockutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "da95c339-6bd5-495a-bd12-d1e71a8017b6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.708 2 DEBUG oslo_concurrency.lockutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "da95c339-6bd5-495a-bd12-d1e71a8017b6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.709 2 DEBUG nova.virt.libvirt.vif [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:46:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2123262195',display_name='tempest-TestNetworkBasicOps-server-2123262195',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2123262195',id=166,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH6s/dG1JiNtqePgHlYs1LronjfyxaqEaySu675K65y94dsBugZbcdfbZOjwxnVn3sGta76xhoGcwU6I2nP82629sTfaxQ+DCqERXItbZ2bh8zj2URBCCk/NIgreotxLBg==',key_name='tempest-TestNetworkBasicOps-1527655277',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-m7ns156l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:46:26Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=da95c339-6bd5-495a-bd12-d1e71a8017b6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba5bd4c0-6961-4ed8-bbac-669993d3af7c", "address": "fa:16:3e:e0:6d:8a", "network": {"id": "c6500315-835f-4d3a-971d-50fa2592498e", "bridge": "br-int", "label": "tempest-network-smoke--376518162", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba5bd4c0-69", "ovs_interfaceid": "ba5bd4c0-6961-4ed8-bbac-669993d3af7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.709 2 DEBUG nova.network.os_vif_util [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "ba5bd4c0-6961-4ed8-bbac-669993d3af7c", "address": "fa:16:3e:e0:6d:8a", "network": {"id": "c6500315-835f-4d3a-971d-50fa2592498e", "bridge": "br-int", "label": "tempest-network-smoke--376518162", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba5bd4c0-69", "ovs_interfaceid": "ba5bd4c0-6961-4ed8-bbac-669993d3af7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.710 2 DEBUG nova.network.os_vif_util [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:6d:8a,bridge_name='br-int',has_traffic_filtering=True,id=ba5bd4c0-6961-4ed8-bbac-669993d3af7c,network=Network(c6500315-835f-4d3a-971d-50fa2592498e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba5bd4c0-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.710 2 DEBUG os_vif [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:6d:8a,bridge_name='br-int',has_traffic_filtering=True,id=ba5bd4c0-6961-4ed8-bbac-669993d3af7c,network=Network(c6500315-835f-4d3a-971d-50fa2592498e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba5bd4c0-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.711 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.712 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.715 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba5bd4c0-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.716 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapba5bd4c0-69, col_values=(('external_ids', {'iface-id': 'ba5bd4c0-6961-4ed8-bbac-669993d3af7c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e0:6d:8a', 'vm-uuid': 'da95c339-6bd5-495a-bd12-d1e71a8017b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:34 compute-0 NetworkManager[44987]: <info>  [1759409194.7178] manager: (tapba5bd4c0-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/324)
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.725 2 INFO os_vif [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:6d:8a,bridge_name='br-int',has_traffic_filtering=True,id=ba5bd4c0-6961-4ed8-bbac-669993d3af7c,network=Network(c6500315-835f-4d3a-971d-50fa2592498e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba5bd4c0-69')
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.799 2 DEBUG nova.virt.libvirt.driver [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.800 2 DEBUG nova.virt.libvirt.driver [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.800 2 DEBUG nova.virt.libvirt.driver [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No VIF found with MAC fa:16:3e:e0:6d:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.800 2 INFO nova.virt.libvirt.driver [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Using config drive
Oct 02 12:46:34 compute-0 nova_compute[257802]: 2025-10-02 12:46:34.826 2 DEBUG nova.storage.rbd_utils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image da95c339-6bd5-495a-bd12-d1e71a8017b6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:46:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:35.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:35 compute-0 ceph-mon[73607]: pgmap v2596: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 663 KiB/s rd, 6.0 MiB/s wr, 154 op/s
Oct 02 12:46:35 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3236436593' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:35 compute-0 nova_compute[257802]: 2025-10-02 12:46:35.769 2 INFO nova.virt.libvirt.driver [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Creating config drive at /var/lib/nova/instances/da95c339-6bd5-495a-bd12-d1e71a8017b6/disk.config
Oct 02 12:46:35 compute-0 nova_compute[257802]: 2025-10-02 12:46:35.775 2 DEBUG oslo_concurrency.processutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/da95c339-6bd5-495a-bd12-d1e71a8017b6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk9_ru9hq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:35 compute-0 nova_compute[257802]: 2025-10-02 12:46:35.912 2 DEBUG oslo_concurrency.processutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/da95c339-6bd5-495a-bd12-d1e71a8017b6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk9_ru9hq" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:35 compute-0 nova_compute[257802]: 2025-10-02 12:46:35.941 2 DEBUG nova.storage.rbd_utils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image da95c339-6bd5-495a-bd12-d1e71a8017b6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:46:35 compute-0 nova_compute[257802]: 2025-10-02 12:46:35.945 2 DEBUG oslo_concurrency.processutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/da95c339-6bd5-495a-bd12-d1e71a8017b6/disk.config da95c339-6bd5-495a-bd12-d1e71a8017b6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2597: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 673 KiB/s rd, 6.0 MiB/s wr, 158 op/s
Oct 02 12:46:36 compute-0 nova_compute[257802]: 2025-10-02 12:46:36.120 2 DEBUG oslo_concurrency.processutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/da95c339-6bd5-495a-bd12-d1e71a8017b6/disk.config da95c339-6bd5-495a-bd12-d1e71a8017b6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:36 compute-0 nova_compute[257802]: 2025-10-02 12:46:36.121 2 INFO nova.virt.libvirt.driver [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Deleting local config drive /var/lib/nova/instances/da95c339-6bd5-495a-bd12-d1e71a8017b6/disk.config because it was imported into RBD.
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:36.128 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:36 compute-0 kernel: tapba5bd4c0-69: entered promiscuous mode
Oct 02 12:46:36 compute-0 ovn_controller[148183]: 2025-10-02T12:46:36Z|00720|binding|INFO|Claiming lport ba5bd4c0-6961-4ed8-bbac-669993d3af7c for this chassis.
Oct 02 12:46:36 compute-0 ovn_controller[148183]: 2025-10-02T12:46:36Z|00721|binding|INFO|ba5bd4c0-6961-4ed8-bbac-669993d3af7c: Claiming fa:16:3e:e0:6d:8a 10.100.0.4
Oct 02 12:46:36 compute-0 NetworkManager[44987]: <info>  [1759409196.2029] manager: (tapba5bd4c0-69): new Tun device (/org/freedesktop/NetworkManager/Devices/325)
Oct 02 12:46:36 compute-0 nova_compute[257802]: 2025-10-02 12:46:36.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:36.215 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:6d:8a 10.100.0.4'], port_security=['fa:16:3e:e0:6d:8a 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'da95c339-6bd5-495a-bd12-d1e71a8017b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6500315-835f-4d3a-971d-50fa2592498e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '04565a27-da45-4d09-9fa3-2f55284ff111', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0813b7b-0afa-481f-9588-b4fef02778ee, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=ba5bd4c0-6961-4ed8-bbac-669993d3af7c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:36.216 158261 INFO neutron.agent.ovn.metadata.agent [-] Port ba5bd4c0-6961-4ed8-bbac-669993d3af7c in datapath c6500315-835f-4d3a-971d-50fa2592498e bound to our chassis
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:36.217 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c6500315-835f-4d3a-971d-50fa2592498e
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:36.229 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[fd34b39b-fd52-4dd6-82ab-4208c3cb3396]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:36.230 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc6500315-81 in ovnmeta-c6500315-835f-4d3a-971d-50fa2592498e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:36.231 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc6500315-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:36.231 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6f74a79f-3fa1-487a-9c85-ae15993f7a6b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:46:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:36.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:36.232 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6610f0ad-c2ad-473d-afe7-2fde2a0ce4ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:36 compute-0 systemd-udevd[360647]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:36.243 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[895e3c52-c5b3-4737-a67d-a757b97e2a1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:36 compute-0 NetworkManager[44987]: <info>  [1759409196.2526] device (tapba5bd4c0-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:46:36 compute-0 NetworkManager[44987]: <info>  [1759409196.2535] device (tapba5bd4c0-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:46:36 compute-0 systemd-machined[211836]: New machine qemu-81-instance-000000a6.
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:36.269 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f9117add-3f3e-4193-8c78-0602786e9a40]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:36 compute-0 nova_compute[257802]: 2025-10-02 12:46:36.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:36 compute-0 ovn_controller[148183]: 2025-10-02T12:46:36Z|00722|binding|INFO|Setting lport ba5bd4c0-6961-4ed8-bbac-669993d3af7c ovn-installed in OVS
Oct 02 12:46:36 compute-0 ovn_controller[148183]: 2025-10-02T12:46:36Z|00723|binding|INFO|Setting lport ba5bd4c0-6961-4ed8-bbac-669993d3af7c up in Southbound
Oct 02 12:46:36 compute-0 systemd[1]: Started Virtual Machine qemu-81-instance-000000a6.
Oct 02 12:46:36 compute-0 nova_compute[257802]: 2025-10-02 12:46:36.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:36.298 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[0a636ac6-5cb2-4214-9efa-314d3e469915]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:36 compute-0 NetworkManager[44987]: <info>  [1759409196.3071] manager: (tapc6500315-80): new Veth device (/org/freedesktop/NetworkManager/Devices/326)
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:36.308 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[50dbfe84-f771-4792-92d8-6b16411b53c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:36.351 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[30c7426b-21f1-41a4-81a9-a76448056a4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:36.355 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[40857eed-45a9-4e56-8a42-c46dde0291b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:36 compute-0 NetworkManager[44987]: <info>  [1759409196.3806] device (tapc6500315-80): carrier: link connected
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:36.386 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[bc479d02-2c33-490f-a3cd-8478fba72737]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:36.403 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a1c0ada4-f1c5-4526-8986-0be3c0b7d93d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6500315-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:4a:a5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 726400, 'reachable_time': 23225, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360680, 'error': None, 'target': 'ovnmeta-c6500315-835f-4d3a-971d-50fa2592498e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1827592277' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1679318807' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:36.419 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[db7eb2ef-6b8b-487a-82ab-5182bd40c8bf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:4aa5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 726400, 'tstamp': 726400}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360681, 'error': None, 'target': 'ovnmeta-c6500315-835f-4d3a-971d-50fa2592498e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:36.434 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b06969a7-69c0-4a8c-af93-c0212a7d091f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6500315-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:4a:a5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 726400, 'reachable_time': 23225, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 360682, 'error': None, 'target': 'ovnmeta-c6500315-835f-4d3a-971d-50fa2592498e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:36.464 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5cdd51fc-0c60-4697-b06a-1191904fbbd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:36.534 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[498af6f6-9e21-43c4-9fda-3cf4dc21d3ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:36.535 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6500315-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:36.535 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:36.536 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6500315-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:36 compute-0 nova_compute[257802]: 2025-10-02 12:46:36.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:36 compute-0 kernel: tapc6500315-80: entered promiscuous mode
Oct 02 12:46:36 compute-0 NetworkManager[44987]: <info>  [1759409196.5386] manager: (tapc6500315-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/327)
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:36.540 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc6500315-80, col_values=(('external_ids', {'iface-id': '8f987545-a334-4580-ac52-776ea28e9410'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:36 compute-0 nova_compute[257802]: 2025-10-02 12:46:36.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:36 compute-0 ovn_controller[148183]: 2025-10-02T12:46:36Z|00724|binding|INFO|Releasing lport 8f987545-a334-4580-ac52-776ea28e9410 from this chassis (sb_readonly=0)
Oct 02 12:46:36 compute-0 nova_compute[257802]: 2025-10-02 12:46:36.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:36.558 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c6500315-835f-4d3a-971d-50fa2592498e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c6500315-835f-4d3a-971d-50fa2592498e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:36.559 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[fc5c274a-a001-49d7-b2f2-fb3bd7ec0d9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:36.560 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-c6500315-835f-4d3a-971d-50fa2592498e
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/c6500315-835f-4d3a-971d-50fa2592498e.pid.haproxy
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID c6500315-835f-4d3a-971d-50fa2592498e
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:46:36 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:46:36.561 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c6500315-835f-4d3a-971d-50fa2592498e', 'env', 'PROCESS_TAG=haproxy-c6500315-835f-4d3a-971d-50fa2592498e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c6500315-835f-4d3a-971d-50fa2592498e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:46:36 compute-0 podman[360755]: 2025-10-02 12:46:36.915444821 +0000 UTC m=+0.045577571 container create 44927a09f23ea5125d2d0816a21c2da56484582420f8267a876d3bf5cc874c74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6500315-835f-4d3a-971d-50fa2592498e, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 12:46:36 compute-0 systemd[1]: Started libpod-conmon-44927a09f23ea5125d2d0816a21c2da56484582420f8267a876d3bf5cc874c74.scope.
Oct 02 12:46:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:46:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a56d17a1b8c02201d1e608b1abd0a42399f31ac6fcece01a0a925c577c96341c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:46:36 compute-0 podman[360755]: 2025-10-02 12:46:36.976052499 +0000 UTC m=+0.106185269 container init 44927a09f23ea5125d2d0816a21c2da56484582420f8267a876d3bf5cc874c74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6500315-835f-4d3a-971d-50fa2592498e, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:46:36 compute-0 podman[360755]: 2025-10-02 12:46:36.981474003 +0000 UTC m=+0.111606753 container start 44927a09f23ea5125d2d0816a21c2da56484582420f8267a876d3bf5cc874c74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6500315-835f-4d3a-971d-50fa2592498e, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct 02 12:46:36 compute-0 podman[360755]: 2025-10-02 12:46:36.892308912 +0000 UTC m=+0.022441682 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:46:37 compute-0 neutron-haproxy-ovnmeta-c6500315-835f-4d3a-971d-50fa2592498e[360771]: [NOTICE]   (360775) : New worker (360777) forked
Oct 02 12:46:37 compute-0 neutron-haproxy-ovnmeta-c6500315-835f-4d3a-971d-50fa2592498e[360771]: [NOTICE]   (360775) : Loading success.
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.021 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409197.0201259, da95c339-6bd5-495a-bd12-d1e71a8017b6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.022 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] VM Started (Lifecycle Event)
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.048 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.052 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409197.0204535, da95c339-6bd5-495a-bd12-d1e71a8017b6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.052 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] VM Paused (Lifecycle Event)
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.075 2 DEBUG nova.network.neutron [req-dbe78182-83a4-4421-9141-ebdc42bcb0e4 req-34a2e2aa-8b65-49e7-83af-eee1abb8dd72 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Updated VIF entry in instance network info cache for port ba5bd4c0-6961-4ed8-bbac-669993d3af7c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.076 2 DEBUG nova.network.neutron [req-dbe78182-83a4-4421-9141-ebdc42bcb0e4 req-34a2e2aa-8b65-49e7-83af-eee1abb8dd72 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Updating instance_info_cache with network_info: [{"id": "ba5bd4c0-6961-4ed8-bbac-669993d3af7c", "address": "fa:16:3e:e0:6d:8a", "network": {"id": "c6500315-835f-4d3a-971d-50fa2592498e", "bridge": "br-int", "label": "tempest-network-smoke--376518162", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba5bd4c0-69", "ovs_interfaceid": "ba5bd4c0-6961-4ed8-bbac-669993d3af7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.093 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.096 2 DEBUG oslo_concurrency.lockutils [req-dbe78182-83a4-4421-9141-ebdc42bcb0e4 req-34a2e2aa-8b65-49e7-83af-eee1abb8dd72 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-da95c339-6bd5-495a-bd12-d1e71a8017b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.098 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.105 2 DEBUG nova.compute.manager [req-ed9ab056-1b4a-4cc9-b724-763a36c67c03 req-489f5121-a05a-4eaf-a19e-b6d051b699f3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Received event network-vif-plugged-ba5bd4c0-6961-4ed8-bbac-669993d3af7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.105 2 DEBUG oslo_concurrency.lockutils [req-ed9ab056-1b4a-4cc9-b724-763a36c67c03 req-489f5121-a05a-4eaf-a19e-b6d051b699f3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "da95c339-6bd5-495a-bd12-d1e71a8017b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.105 2 DEBUG oslo_concurrency.lockutils [req-ed9ab056-1b4a-4cc9-b724-763a36c67c03 req-489f5121-a05a-4eaf-a19e-b6d051b699f3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "da95c339-6bd5-495a-bd12-d1e71a8017b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.106 2 DEBUG oslo_concurrency.lockutils [req-ed9ab056-1b4a-4cc9-b724-763a36c67c03 req-489f5121-a05a-4eaf-a19e-b6d051b699f3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "da95c339-6bd5-495a-bd12-d1e71a8017b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.106 2 DEBUG nova.compute.manager [req-ed9ab056-1b4a-4cc9-b724-763a36c67c03 req-489f5121-a05a-4eaf-a19e-b6d051b699f3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Processing event network-vif-plugged-ba5bd4c0-6961-4ed8-bbac-669993d3af7c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.107 2 DEBUG nova.compute.manager [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.110 2 DEBUG nova.virt.libvirt.driver [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.115 2 INFO nova.virt.libvirt.driver [-] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Instance spawned successfully.
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.115 2 DEBUG nova.virt.libvirt.driver [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.118 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.119 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409197.1093051, da95c339-6bd5-495a-bd12-d1e71a8017b6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.119 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] VM Resumed (Lifecycle Event)
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.154 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:46:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:37.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.159 2 DEBUG nova.virt.libvirt.driver [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.160 2 DEBUG nova.virt.libvirt.driver [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.160 2 DEBUG nova.virt.libvirt.driver [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.161 2 DEBUG nova.virt.libvirt.driver [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.161 2 DEBUG nova.virt.libvirt.driver [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.162 2 DEBUG nova.virt.libvirt.driver [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.167 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.210 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.228 2 INFO nova.compute.manager [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Took 11.13 seconds to spawn the instance on the hypervisor.
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.228 2 DEBUG nova.compute.manager [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.293 2 INFO nova.compute.manager [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Took 12.21 seconds to build instance.
Oct 02 12:46:37 compute-0 nova_compute[257802]: 2025-10-02 12:46:37.338 2 DEBUG oslo_concurrency.lockutils [None req-dff607e9-0ae2-472d-b5af-f174642de458 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "da95c339-6bd5-495a-bd12-d1e71a8017b6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:37 compute-0 ceph-mon[73607]: pgmap v2597: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 673 KiB/s rd, 6.0 MiB/s wr, 158 op/s
Oct 02 12:46:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1319475623' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2598: 305 pgs: 305 active+clean; 426 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 679 KiB/s rd, 6.6 MiB/s wr, 172 op/s
Oct 02 12:46:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:38.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:38 compute-0 podman[360787]: 2025-10-02 12:46:38.922486663 +0000 UTC m=+0.061672216 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 12:46:38 compute-0 podman[360788]: 2025-10-02 12:46:38.930773487 +0000 UTC m=+0.065990352 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 12:46:38 compute-0 podman[360789]: 2025-10-02 12:46:38.949392304 +0000 UTC m=+0.080906358 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0)
Oct 02 12:46:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:39.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:39 compute-0 nova_compute[257802]: 2025-10-02 12:46:39.227 2 DEBUG nova.compute.manager [req-fcbdd093-3f38-4ac8-a804-15ed450f816c req-c707a48a-e6f2-41dc-aa37-ba2032bd8ba9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Received event network-vif-plugged-ba5bd4c0-6961-4ed8-bbac-669993d3af7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:39 compute-0 nova_compute[257802]: 2025-10-02 12:46:39.228 2 DEBUG oslo_concurrency.lockutils [req-fcbdd093-3f38-4ac8-a804-15ed450f816c req-c707a48a-e6f2-41dc-aa37-ba2032bd8ba9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "da95c339-6bd5-495a-bd12-d1e71a8017b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:39 compute-0 nova_compute[257802]: 2025-10-02 12:46:39.229 2 DEBUG oslo_concurrency.lockutils [req-fcbdd093-3f38-4ac8-a804-15ed450f816c req-c707a48a-e6f2-41dc-aa37-ba2032bd8ba9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "da95c339-6bd5-495a-bd12-d1e71a8017b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:39 compute-0 nova_compute[257802]: 2025-10-02 12:46:39.229 2 DEBUG oslo_concurrency.lockutils [req-fcbdd093-3f38-4ac8-a804-15ed450f816c req-c707a48a-e6f2-41dc-aa37-ba2032bd8ba9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "da95c339-6bd5-495a-bd12-d1e71a8017b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:39 compute-0 nova_compute[257802]: 2025-10-02 12:46:39.229 2 DEBUG nova.compute.manager [req-fcbdd093-3f38-4ac8-a804-15ed450f816c req-c707a48a-e6f2-41dc-aa37-ba2032bd8ba9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] No waiting events found dispatching network-vif-plugged-ba5bd4c0-6961-4ed8-bbac-669993d3af7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:46:39 compute-0 nova_compute[257802]: 2025-10-02 12:46:39.230 2 WARNING nova.compute.manager [req-fcbdd093-3f38-4ac8-a804-15ed450f816c req-c707a48a-e6f2-41dc-aa37-ba2032bd8ba9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Received unexpected event network-vif-plugged-ba5bd4c0-6961-4ed8-bbac-669993d3af7c for instance with vm_state active and task_state None.
Oct 02 12:46:39 compute-0 nova_compute[257802]: 2025-10-02 12:46:39.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:39 compute-0 ceph-mon[73607]: pgmap v2598: 305 pgs: 305 active+clean; 426 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 679 KiB/s rd, 6.6 MiB/s wr, 172 op/s
Oct 02 12:46:39 compute-0 nova_compute[257802]: 2025-10-02 12:46:39.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2599: 305 pgs: 305 active+clean; 451 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 6.7 MiB/s wr, 207 op/s
Oct 02 12:46:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:40.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:41.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:41 compute-0 ceph-mon[73607]: pgmap v2599: 305 pgs: 305 active+clean; 451 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 6.7 MiB/s wr, 207 op/s
Oct 02 12:46:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2600: 305 pgs: 305 active+clean; 451 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.3 MiB/s wr, 157 op/s
Oct 02 12:46:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:42.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:42 compute-0 NetworkManager[44987]: <info>  [1759409202.4113] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/328)
Oct 02 12:46:42 compute-0 NetworkManager[44987]: <info>  [1759409202.4121] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/329)
Oct 02 12:46:42 compute-0 nova_compute[257802]: 2025-10-02 12:46:42.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:46:42
Oct 02 12:46:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:46:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:46:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.log', 'volumes', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'images', 'default.rgw.control', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta']
Oct 02 12:46:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:46:42 compute-0 nova_compute[257802]: 2025-10-02 12:46:42.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:42 compute-0 ovn_controller[148183]: 2025-10-02T12:46:42Z|00725|binding|INFO|Releasing lport 8f987545-a334-4580-ac52-776ea28e9410 from this chassis (sb_readonly=0)
Oct 02 12:46:42 compute-0 nova_compute[257802]: 2025-10-02 12:46:42.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:46:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:46:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:46:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:46:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:46:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:46:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:43.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:43 compute-0 ceph-mgr[73901]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3158772141
Oct 02 12:46:43 compute-0 nova_compute[257802]: 2025-10-02 12:46:43.278 2 DEBUG nova.compute.manager [req-11076eb0-0519-4864-bf1b-69d6fe6177a4 req-4e0d348a-c47a-4f7b-93f3-cd8cb4b9fec9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Received event network-changed-ba5bd4c0-6961-4ed8-bbac-669993d3af7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:43 compute-0 nova_compute[257802]: 2025-10-02 12:46:43.279 2 DEBUG nova.compute.manager [req-11076eb0-0519-4864-bf1b-69d6fe6177a4 req-4e0d348a-c47a-4f7b-93f3-cd8cb4b9fec9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Refreshing instance network info cache due to event network-changed-ba5bd4c0-6961-4ed8-bbac-669993d3af7c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:46:43 compute-0 nova_compute[257802]: 2025-10-02 12:46:43.280 2 DEBUG oslo_concurrency.lockutils [req-11076eb0-0519-4864-bf1b-69d6fe6177a4 req-4e0d348a-c47a-4f7b-93f3-cd8cb4b9fec9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-da95c339-6bd5-495a-bd12-d1e71a8017b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:46:43 compute-0 nova_compute[257802]: 2025-10-02 12:46:43.281 2 DEBUG oslo_concurrency.lockutils [req-11076eb0-0519-4864-bf1b-69d6fe6177a4 req-4e0d348a-c47a-4f7b-93f3-cd8cb4b9fec9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-da95c339-6bd5-495a-bd12-d1e71a8017b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:46:43 compute-0 nova_compute[257802]: 2025-10-02 12:46:43.282 2 DEBUG nova.network.neutron [req-11076eb0-0519-4864-bf1b-69d6fe6177a4 req-4e0d348a-c47a-4f7b-93f3-cd8cb4b9fec9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Refreshing network info cache for port ba5bd4c0-6961-4ed8-bbac-669993d3af7c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:46:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:46:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:46:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:46:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:46:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:46:43 compute-0 ceph-mon[73607]: pgmap v2600: 305 pgs: 305 active+clean; 451 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.3 MiB/s wr, 157 op/s
Oct 02 12:46:43 compute-0 podman[360845]: 2025-10-02 12:46:43.938679786 +0000 UTC m=+0.083309008 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 02 12:46:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2601: 305 pgs: 305 active+clean; 451 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.9 MiB/s wr, 175 op/s
Oct 02 12:46:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:46:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:46:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:46:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:46:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:46:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:46:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:44.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:46:44 compute-0 nova_compute[257802]: 2025-10-02 12:46:44.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:44 compute-0 nova_compute[257802]: 2025-10-02 12:46:44.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:45.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:45 compute-0 ceph-mon[73607]: pgmap v2601: 305 pgs: 305 active+clean; 451 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.9 MiB/s wr, 175 op/s
Oct 02 12:46:45 compute-0 nova_compute[257802]: 2025-10-02 12:46:45.705 2 DEBUG nova.network.neutron [req-11076eb0-0519-4864-bf1b-69d6fe6177a4 req-4e0d348a-c47a-4f7b-93f3-cd8cb4b9fec9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Updated VIF entry in instance network info cache for port ba5bd4c0-6961-4ed8-bbac-669993d3af7c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:46:45 compute-0 nova_compute[257802]: 2025-10-02 12:46:45.705 2 DEBUG nova.network.neutron [req-11076eb0-0519-4864-bf1b-69d6fe6177a4 req-4e0d348a-c47a-4f7b-93f3-cd8cb4b9fec9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Updating instance_info_cache with network_info: [{"id": "ba5bd4c0-6961-4ed8-bbac-669993d3af7c", "address": "fa:16:3e:e0:6d:8a", "network": {"id": "c6500315-835f-4d3a-971d-50fa2592498e", "bridge": "br-int", "label": "tempest-network-smoke--376518162", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba5bd4c0-69", "ovs_interfaceid": "ba5bd4c0-6961-4ed8-bbac-669993d3af7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:46:45 compute-0 nova_compute[257802]: 2025-10-02 12:46:45.752 2 DEBUG oslo_concurrency.lockutils [req-11076eb0-0519-4864-bf1b-69d6fe6177a4 req-4e0d348a-c47a-4f7b-93f3-cd8cb4b9fec9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-da95c339-6bd5-495a-bd12-d1e71a8017b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:46:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2602: 305 pgs: 305 active+clean; 451 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 172 op/s
Oct 02 12:46:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:46.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/845394669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:46 compute-0 nova_compute[257802]: 2025-10-02 12:46:46.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:46:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:47.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:46:47 compute-0 ceph-mon[73607]: pgmap v2602: 305 pgs: 305 active+clean; 451 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 172 op/s
Oct 02 12:46:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2603: 305 pgs: 305 active+clean; 419 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Oct 02 12:46:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:48.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:49.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:49 compute-0 nova_compute[257802]: 2025-10-02 12:46:49.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:49 compute-0 ceph-mon[73607]: pgmap v2603: 305 pgs: 305 active+clean; 419 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Oct 02 12:46:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4030086630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:49 compute-0 nova_compute[257802]: 2025-10-02 12:46:49.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2604: 305 pgs: 305 active+clean; 417 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.2 MiB/s wr, 210 op/s
Oct 02 12:46:50 compute-0 ovn_controller[148183]: 2025-10-02T12:46:50Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e0:6d:8a 10.100.0.4
Oct 02 12:46:50 compute-0 ovn_controller[148183]: 2025-10-02T12:46:50Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e0:6d:8a 10.100.0.4
Oct 02 12:46:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:50.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:50 compute-0 ceph-mon[73607]: pgmap v2604: 305 pgs: 305 active+clean; 417 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.2 MiB/s wr, 210 op/s
Oct 02 12:46:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:46:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:51.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:46:51 compute-0 sudo[360876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:51 compute-0 sudo[360876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:51 compute-0 sudo[360876]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:51 compute-0 sudo[360901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:46:51 compute-0 sudo[360901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:51 compute-0 sudo[360901]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:51 compute-0 sudo[360926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:51 compute-0 sudo[360926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:51 compute-0 sudo[360926]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:51 compute-0 sudo[360951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 12:46:51 compute-0 sudo[360951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:51 compute-0 sudo[360951]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:46:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 12:46:51 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:46:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:46:51 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:46:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 12:46:51 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:46:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3232759905' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1732234033' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:46:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:46:51 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:46:51 compute-0 sudo[360995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:51 compute-0 sudo[360995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:51 compute-0 sudo[360995]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:51 compute-0 sudo[361020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:46:51 compute-0 sudo[361020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:51 compute-0 sudo[361020]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:51 compute-0 sudo[361045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:51 compute-0 sudo[361045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:51 compute-0 sudo[361045]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:51 compute-0 sudo[361070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:46:51 compute-0 sudo[361070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2605: 305 pgs: 305 active+clean; 434 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.6 MiB/s wr, 201 op/s
Oct 02 12:46:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:46:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:52.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:46:52 compute-0 sudo[361070]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:46:52 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:46:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:46:52 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:46:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:46:52 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:46:52 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev fe9b8dc6-f540-42cc-81e1-1fcc09c8db13 does not exist
Oct 02 12:46:52 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 1abfa3b7-b52c-4fb3-8dbf-625855a732a4 does not exist
Oct 02 12:46:52 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 9d2a0914-0863-4ff3-9176-819d8118d768 does not exist
Oct 02 12:46:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:46:52 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:46:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:46:52 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:46:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:46:52 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:46:52 compute-0 sudo[361126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:52 compute-0 sudo[361126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:52 compute-0 sudo[361126]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:52 compute-0 sudo[361152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:46:52 compute-0 sudo[361152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:52 compute-0 sudo[361152]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:52 compute-0 sudo[361177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:52 compute-0 sudo[361177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:52 compute-0 sudo[361177]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:52 compute-0 sudo[361202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:46:52 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:46:52 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:46:52 compute-0 ceph-mon[73607]: pgmap v2605: 305 pgs: 305 active+clean; 434 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.6 MiB/s wr, 201 op/s
Oct 02 12:46:52 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:46:52 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:46:52 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:46:52 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:46:52 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:46:52 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:46:52 compute-0 sudo[361202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:53 compute-0 podman[361269]: 2025-10-02 12:46:53.090217272 +0000 UTC m=+0.043944320 container create a6e7ba9b7644740f4a31cd9901afc66ded4ddc741b367b2f46ec49916f9148e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_carver, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:46:53 compute-0 systemd[1]: Started libpod-conmon-a6e7ba9b7644740f4a31cd9901afc66ded4ddc741b367b2f46ec49916f9148e0.scope.
Oct 02 12:46:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:46:53 compute-0 podman[361269]: 2025-10-02 12:46:53.072108988 +0000 UTC m=+0.025836036 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:46:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:53.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:53 compute-0 podman[361269]: 2025-10-02 12:46:53.186760884 +0000 UTC m=+0.140487962 container init a6e7ba9b7644740f4a31cd9901afc66ded4ddc741b367b2f46ec49916f9148e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_carver, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:46:53 compute-0 podman[361269]: 2025-10-02 12:46:53.194522324 +0000 UTC m=+0.148249372 container start a6e7ba9b7644740f4a31cd9901afc66ded4ddc741b367b2f46ec49916f9148e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_carver, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:46:53 compute-0 ecstatic_carver[361285]: 167 167
Oct 02 12:46:53 compute-0 systemd[1]: libpod-a6e7ba9b7644740f4a31cd9901afc66ded4ddc741b367b2f46ec49916f9148e0.scope: Deactivated successfully.
Oct 02 12:46:53 compute-0 podman[361269]: 2025-10-02 12:46:53.202972613 +0000 UTC m=+0.156699651 container attach a6e7ba9b7644740f4a31cd9901afc66ded4ddc741b367b2f46ec49916f9148e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_carver, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 12:46:53 compute-0 podman[361269]: 2025-10-02 12:46:53.2032841 +0000 UTC m=+0.157011148 container died a6e7ba9b7644740f4a31cd9901afc66ded4ddc741b367b2f46ec49916f9148e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_carver, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:46:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa727e3bcb332abc117ace1d183d6a78a0bed84a237a199c81c26cc241c62e66-merged.mount: Deactivated successfully.
Oct 02 12:46:53 compute-0 podman[361269]: 2025-10-02 12:46:53.249875164 +0000 UTC m=+0.203602212 container remove a6e7ba9b7644740f4a31cd9901afc66ded4ddc741b367b2f46ec49916f9148e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_carver, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 12:46:53 compute-0 systemd[1]: libpod-conmon-a6e7ba9b7644740f4a31cd9901afc66ded4ddc741b367b2f46ec49916f9148e0.scope: Deactivated successfully.
Oct 02 12:46:53 compute-0 podman[361310]: 2025-10-02 12:46:53.427903897 +0000 UTC m=+0.038645169 container create 4a6ae650b9a6214a4016c1e997adc41bca58a929c3d2f7fe87b29b66920fcc83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:46:53 compute-0 systemd[1]: Started libpod-conmon-4a6ae650b9a6214a4016c1e997adc41bca58a929c3d2f7fe87b29b66920fcc83.scope.
Oct 02 12:46:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:46:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd728f0192f12c41b54c3cc1ab7d12eaa3a3fefdf101a72d69edf25fa663f18b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:46:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd728f0192f12c41b54c3cc1ab7d12eaa3a3fefdf101a72d69edf25fa663f18b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:46:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd728f0192f12c41b54c3cc1ab7d12eaa3a3fefdf101a72d69edf25fa663f18b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:46:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd728f0192f12c41b54c3cc1ab7d12eaa3a3fefdf101a72d69edf25fa663f18b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:46:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd728f0192f12c41b54c3cc1ab7d12eaa3a3fefdf101a72d69edf25fa663f18b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:46:53 compute-0 podman[361310]: 2025-10-02 12:46:53.411886094 +0000 UTC m=+0.022627386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:46:53 compute-0 podman[361310]: 2025-10-02 12:46:53.511975573 +0000 UTC m=+0.122716845 container init 4a6ae650b9a6214a4016c1e997adc41bca58a929c3d2f7fe87b29b66920fcc83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_antonelli, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 12:46:53 compute-0 podman[361310]: 2025-10-02 12:46:53.522394349 +0000 UTC m=+0.133135621 container start 4a6ae650b9a6214a4016c1e997adc41bca58a929c3d2f7fe87b29b66920fcc83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_antonelli, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 12:46:53 compute-0 podman[361310]: 2025-10-02 12:46:53.525411073 +0000 UTC m=+0.136152375 container attach 4a6ae650b9a6214a4016c1e997adc41bca58a929c3d2f7fe87b29b66920fcc83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_antonelli, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:46:53 compute-0 sudo[361331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:53 compute-0 sudo[361331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:53 compute-0 sudo[361331]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:53 compute-0 sudo[361356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:53 compute-0 sudo[361356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:53 compute-0 sudo[361356]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2606: 305 pgs: 305 active+clean; 450 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.9 MiB/s wr, 219 op/s
Oct 02 12:46:54 compute-0 nova_compute[257802]: 2025-10-02 12:46:54.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:54 compute-0 nova_compute[257802]: 2025-10-02 12:46:54.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:46:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:54.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:46:54 compute-0 determined_antonelli[361326]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:46:54 compute-0 determined_antonelli[361326]: --> relative data size: 1.0
Oct 02 12:46:54 compute-0 determined_antonelli[361326]: --> All data devices are unavailable
Oct 02 12:46:54 compute-0 systemd[1]: libpod-4a6ae650b9a6214a4016c1e997adc41bca58a929c3d2f7fe87b29b66920fcc83.scope: Deactivated successfully.
Oct 02 12:46:54 compute-0 podman[361310]: 2025-10-02 12:46:54.335178175 +0000 UTC m=+0.945919457 container died 4a6ae650b9a6214a4016c1e997adc41bca58a929c3d2f7fe87b29b66920fcc83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_antonelli, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 12:46:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd728f0192f12c41b54c3cc1ab7d12eaa3a3fefdf101a72d69edf25fa663f18b-merged.mount: Deactivated successfully.
Oct 02 12:46:54 compute-0 podman[361310]: 2025-10-02 12:46:54.414684628 +0000 UTC m=+1.025425900 container remove 4a6ae650b9a6214a4016c1e997adc41bca58a929c3d2f7fe87b29b66920fcc83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:46:54 compute-0 systemd[1]: libpod-conmon-4a6ae650b9a6214a4016c1e997adc41bca58a929c3d2f7fe87b29b66920fcc83.scope: Deactivated successfully.
Oct 02 12:46:54 compute-0 sudo[361202]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:54 compute-0 sudo[361403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:54 compute-0 sudo[361403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:54 compute-0 sudo[361403]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:54 compute-0 sudo[361428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:46:54 compute-0 sudo[361428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:54 compute-0 sudo[361428]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:54 compute-0 sudo[361453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:54 compute-0 sudo[361453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:54 compute-0 sudo[361453]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:46:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:46:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:46:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:46:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008485533861436814 of space, bias 1.0, pg target 2.545660158431044 quantized to 32 (current 32)
Oct 02 12:46:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:46:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6441557469058254 quantized to 32 (current 32)
Oct 02 12:46:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:46:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:46:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:46:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Oct 02 12:46:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:46:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:46:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:46:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:46:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:46:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027081297692164525 quantized to 32 (current 32)
Oct 02 12:46:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:46:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:46:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:46:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:46:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:46:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:46:54 compute-0 sudo[361479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:46:54 compute-0 sudo[361479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:54 compute-0 nova_compute[257802]: 2025-10-02 12:46:54.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:54 compute-0 ovn_controller[148183]: 2025-10-02T12:46:54Z|00726|binding|INFO|Releasing lport 8f987545-a334-4580-ac52-776ea28e9410 from this chassis (sb_readonly=0)
Oct 02 12:46:54 compute-0 nova_compute[257802]: 2025-10-02 12:46:54.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:54 compute-0 podman[361544]: 2025-10-02 12:46:54.979889962 +0000 UTC m=+0.037163534 container create a4393257fdcbc8c6b8b14e33de1f989c8e5e0486262d7227f84f622ff6681966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 12:46:55 compute-0 systemd[1]: Started libpod-conmon-a4393257fdcbc8c6b8b14e33de1f989c8e5e0486262d7227f84f622ff6681966.scope.
Oct 02 12:46:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:46:55 compute-0 podman[361544]: 2025-10-02 12:46:55.053037889 +0000 UTC m=+0.110311481 container init a4393257fdcbc8c6b8b14e33de1f989c8e5e0486262d7227f84f622ff6681966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:46:55 compute-0 podman[361544]: 2025-10-02 12:46:55.059011395 +0000 UTC m=+0.116284967 container start a4393257fdcbc8c6b8b14e33de1f989c8e5e0486262d7227f84f622ff6681966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pasteur, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:46:55 compute-0 podman[361544]: 2025-10-02 12:46:54.963614902 +0000 UTC m=+0.020888494 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:46:55 compute-0 podman[361544]: 2025-10-02 12:46:55.062316716 +0000 UTC m=+0.119590308 container attach a4393257fdcbc8c6b8b14e33de1f989c8e5e0486262d7227f84f622ff6681966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pasteur, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 12:46:55 compute-0 jovial_pasteur[361561]: 167 167
Oct 02 12:46:55 compute-0 systemd[1]: libpod-a4393257fdcbc8c6b8b14e33de1f989c8e5e0486262d7227f84f622ff6681966.scope: Deactivated successfully.
Oct 02 12:46:55 compute-0 podman[361544]: 2025-10-02 12:46:55.064618433 +0000 UTC m=+0.121892005 container died a4393257fdcbc8c6b8b14e33de1f989c8e5e0486262d7227f84f622ff6681966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pasteur, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:46:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cfcc4f0a94f436a0fd70ed921cd165cc3858019fb886489f1f188099d01025a-merged.mount: Deactivated successfully.
Oct 02 12:46:55 compute-0 ceph-mon[73607]: pgmap v2606: 305 pgs: 305 active+clean; 450 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.9 MiB/s wr, 219 op/s
Oct 02 12:46:55 compute-0 podman[361544]: 2025-10-02 12:46:55.101877619 +0000 UTC m=+0.159151191 container remove a4393257fdcbc8c6b8b14e33de1f989c8e5e0486262d7227f84f622ff6681966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 12:46:55 compute-0 systemd[1]: libpod-conmon-a4393257fdcbc8c6b8b14e33de1f989c8e5e0486262d7227f84f622ff6681966.scope: Deactivated successfully.
Oct 02 12:46:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:46:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:55.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:46:55 compute-0 podman[361584]: 2025-10-02 12:46:55.261632393 +0000 UTC m=+0.039167563 container create ba0d0970c567bb078b258ef5c68ea20841973fe305fc282c4d24c7607b45dde1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:46:55 compute-0 systemd[1]: Started libpod-conmon-ba0d0970c567bb078b258ef5c68ea20841973fe305fc282c4d24c7607b45dde1.scope.
Oct 02 12:46:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:46:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f653464790be556a6e8fdf0950432e5b3ff5306ea330f041a13de62cb8fbc27a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:46:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f653464790be556a6e8fdf0950432e5b3ff5306ea330f041a13de62cb8fbc27a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:46:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f653464790be556a6e8fdf0950432e5b3ff5306ea330f041a13de62cb8fbc27a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:46:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f653464790be556a6e8fdf0950432e5b3ff5306ea330f041a13de62cb8fbc27a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:46:55 compute-0 podman[361584]: 2025-10-02 12:46:55.333417646 +0000 UTC m=+0.110952846 container init ba0d0970c567bb078b258ef5c68ea20841973fe305fc282c4d24c7607b45dde1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_babbage, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 12:46:55 compute-0 podman[361584]: 2025-10-02 12:46:55.242707898 +0000 UTC m=+0.020243088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:46:55 compute-0 podman[361584]: 2025-10-02 12:46:55.340413368 +0000 UTC m=+0.117948538 container start ba0d0970c567bb078b258ef5c68ea20841973fe305fc282c4d24c7607b45dde1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 12:46:55 compute-0 podman[361584]: 2025-10-02 12:46:55.343472463 +0000 UTC m=+0.121007643 container attach ba0d0970c567bb078b258ef5c68ea20841973fe305fc282c4d24c7607b45dde1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:46:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2607: 305 pgs: 305 active+clean; 451 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 183 op/s
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]: {
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]:     "1": [
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]:         {
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]:             "devices": [
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]:                 "/dev/loop3"
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]:             ],
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]:             "lv_name": "ceph_lv0",
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]:             "lv_size": "7511998464",
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]:             "name": "ceph_lv0",
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]:             "tags": {
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]:                 "ceph.cluster_name": "ceph",
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]:                 "ceph.crush_device_class": "",
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]:                 "ceph.encrypted": "0",
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]:                 "ceph.osd_id": "1",
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]:                 "ceph.type": "block",
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]:                 "ceph.vdo": "0"
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]:             },
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]:             "type": "block",
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]:             "vg_name": "ceph_vg0"
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]:         }
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]:     ]
Oct 02 12:46:56 compute-0 inspiring_babbage[361601]: }
Oct 02 12:46:56 compute-0 systemd[1]: libpod-ba0d0970c567bb078b258ef5c68ea20841973fe305fc282c4d24c7607b45dde1.scope: Deactivated successfully.
Oct 02 12:46:56 compute-0 conmon[361601]: conmon ba0d0970c567bb078b25 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ba0d0970c567bb078b258ef5c68ea20841973fe305fc282c4d24c7607b45dde1.scope/container/memory.events
Oct 02 12:46:56 compute-0 podman[361584]: 2025-10-02 12:46:56.069481898 +0000 UTC m=+0.847017068 container died ba0d0970c567bb078b258ef5c68ea20841973fe305fc282c4d24c7607b45dde1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_babbage, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:46:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f653464790be556a6e8fdf0950432e5b3ff5306ea330f041a13de62cb8fbc27a-merged.mount: Deactivated successfully.
Oct 02 12:46:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3160753360' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:46:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3160753360' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:46:56 compute-0 podman[361584]: 2025-10-02 12:46:56.13554472 +0000 UTC m=+0.913079890 container remove ba0d0970c567bb078b258ef5c68ea20841973fe305fc282c4d24c7607b45dde1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_babbage, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:46:56 compute-0 systemd[1]: libpod-conmon-ba0d0970c567bb078b258ef5c68ea20841973fe305fc282c4d24c7607b45dde1.scope: Deactivated successfully.
Oct 02 12:46:56 compute-0 sudo[361479]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:56 compute-0 nova_compute[257802]: 2025-10-02 12:46:56.211 2 INFO nova.compute.manager [None req-192f8832-248d-4328-9bf4-fd1335d2c633 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Get console output
Oct 02 12:46:56 compute-0 sudo[361621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:56 compute-0 nova_compute[257802]: 2025-10-02 12:46:56.220 20794 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 12:46:56 compute-0 sudo[361621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:56 compute-0 sudo[361621]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:56.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:56 compute-0 sudo[361646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:46:56 compute-0 sudo[361646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:56 compute-0 sudo[361646]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:56 compute-0 sudo[361671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:56 compute-0 sudo[361671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:56 compute-0 sudo[361671]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:56 compute-0 sudo[361696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:46:56 compute-0 sudo[361696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:56 compute-0 podman[361763]: 2025-10-02 12:46:56.725053011 +0000 UTC m=+0.046991415 container create e18ef86dbc69a472d01cfc8bba864d723b278655c20bda142bf8e31e5d312a89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kirch, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 12:46:56 compute-0 systemd[1]: Started libpod-conmon-e18ef86dbc69a472d01cfc8bba864d723b278655c20bda142bf8e31e5d312a89.scope.
Oct 02 12:46:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:46:56 compute-0 podman[361763]: 2025-10-02 12:46:56.705493661 +0000 UTC m=+0.027432085 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:46:56 compute-0 podman[361763]: 2025-10-02 12:46:56.812855419 +0000 UTC m=+0.134793843 container init e18ef86dbc69a472d01cfc8bba864d723b278655c20bda142bf8e31e5d312a89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kirch, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:46:56 compute-0 podman[361763]: 2025-10-02 12:46:56.818956028 +0000 UTC m=+0.140894432 container start e18ef86dbc69a472d01cfc8bba864d723b278655c20bda142bf8e31e5d312a89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kirch, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:46:56 compute-0 podman[361763]: 2025-10-02 12:46:56.822279789 +0000 UTC m=+0.144218213 container attach e18ef86dbc69a472d01cfc8bba864d723b278655c20bda142bf8e31e5d312a89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 12:46:56 compute-0 sharp_kirch[361780]: 167 167
Oct 02 12:46:56 compute-0 systemd[1]: libpod-e18ef86dbc69a472d01cfc8bba864d723b278655c20bda142bf8e31e5d312a89.scope: Deactivated successfully.
Oct 02 12:46:56 compute-0 conmon[361780]: conmon e18ef86dbc69a472d01c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e18ef86dbc69a472d01cfc8bba864d723b278655c20bda142bf8e31e5d312a89.scope/container/memory.events
Oct 02 12:46:56 compute-0 podman[361763]: 2025-10-02 12:46:56.826578806 +0000 UTC m=+0.148517210 container died e18ef86dbc69a472d01cfc8bba864d723b278655c20bda142bf8e31e5d312a89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 12:46:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-872df3b07d2c8010c5028e2a1d1da0ac85f6b3ecb2404b12a044e8e800a4d466-merged.mount: Deactivated successfully.
Oct 02 12:46:56 compute-0 podman[361763]: 2025-10-02 12:46:56.862360254 +0000 UTC m=+0.184298658 container remove e18ef86dbc69a472d01cfc8bba864d723b278655c20bda142bf8e31e5d312a89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kirch, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:46:56 compute-0 systemd[1]: libpod-conmon-e18ef86dbc69a472d01cfc8bba864d723b278655c20bda142bf8e31e5d312a89.scope: Deactivated successfully.
Oct 02 12:46:57 compute-0 podman[361804]: 2025-10-02 12:46:57.052592028 +0000 UTC m=+0.043362737 container create c1f3f59a59c7b6c074093af856e7ecf099074f0d91979648979e1bae8b386788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 12:46:57 compute-0 systemd[1]: Started libpod-conmon-c1f3f59a59c7b6c074093af856e7ecf099074f0d91979648979e1bae8b386788.scope.
Oct 02 12:46:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:46:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47cbc2a1f1b862bf3a08a2ad3fdf750c03cd7f695336b5bb88f35afa8c0ad44a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:46:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47cbc2a1f1b862bf3a08a2ad3fdf750c03cd7f695336b5bb88f35afa8c0ad44a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:46:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47cbc2a1f1b862bf3a08a2ad3fdf750c03cd7f695336b5bb88f35afa8c0ad44a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:46:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47cbc2a1f1b862bf3a08a2ad3fdf750c03cd7f695336b5bb88f35afa8c0ad44a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:46:57 compute-0 podman[361804]: 2025-10-02 12:46:57.031679844 +0000 UTC m=+0.022450573 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:46:57 compute-0 podman[361804]: 2025-10-02 12:46:57.127472127 +0000 UTC m=+0.118242856 container init c1f3f59a59c7b6c074093af856e7ecf099074f0d91979648979e1bae8b386788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:46:57 compute-0 podman[361804]: 2025-10-02 12:46:57.137580706 +0000 UTC m=+0.128351415 container start c1f3f59a59c7b6c074093af856e7ecf099074f0d91979648979e1bae8b386788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:46:57 compute-0 podman[361804]: 2025-10-02 12:46:57.141554842 +0000 UTC m=+0.132325571 container attach c1f3f59a59c7b6c074093af856e7ecf099074f0d91979648979e1bae8b386788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lichterman, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 12:46:57 compute-0 ceph-mon[73607]: pgmap v2607: 305 pgs: 305 active+clean; 451 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 183 op/s
Oct 02 12:46:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:57.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:57 compute-0 nova_compute[257802]: 2025-10-02 12:46:57.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:57 compute-0 frosty_lichterman[361820]: {
Oct 02 12:46:57 compute-0 frosty_lichterman[361820]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:46:57 compute-0 frosty_lichterman[361820]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:46:57 compute-0 frosty_lichterman[361820]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:46:57 compute-0 frosty_lichterman[361820]:         "osd_id": 1,
Oct 02 12:46:57 compute-0 frosty_lichterman[361820]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:46:57 compute-0 frosty_lichterman[361820]:         "type": "bluestore"
Oct 02 12:46:57 compute-0 frosty_lichterman[361820]:     }
Oct 02 12:46:57 compute-0 frosty_lichterman[361820]: }
Oct 02 12:46:57 compute-0 systemd[1]: libpod-c1f3f59a59c7b6c074093af856e7ecf099074f0d91979648979e1bae8b386788.scope: Deactivated successfully.
Oct 02 12:46:57 compute-0 podman[361841]: 2025-10-02 12:46:57.964284553 +0000 UTC m=+0.023273183 container died c1f3f59a59c7b6c074093af856e7ecf099074f0d91979648979e1bae8b386788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:46:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-47cbc2a1f1b862bf3a08a2ad3fdf750c03cd7f695336b5bb88f35afa8c0ad44a-merged.mount: Deactivated successfully.
Oct 02 12:46:58 compute-0 podman[361841]: 2025-10-02 12:46:58.024531913 +0000 UTC m=+0.083520503 container remove c1f3f59a59c7b6c074093af856e7ecf099074f0d91979648979e1bae8b386788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 12:46:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2608: 305 pgs: 305 active+clean; 451 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 993 KiB/s rd, 3.9 MiB/s wr, 184 op/s
Oct 02 12:46:58 compute-0 systemd[1]: libpod-conmon-c1f3f59a59c7b6c074093af856e7ecf099074f0d91979648979e1bae8b386788.scope: Deactivated successfully.
Oct 02 12:46:58 compute-0 sudo[361696]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:46:58 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:46:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:46:58 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:46:58 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 4153179b-e3ec-4a90-a1d2-cfa6affda764 does not exist
Oct 02 12:46:58 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 05907358-1542-4cc4-bf48-249e8288b410 does not exist
Oct 02 12:46:58 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 5e2bd90e-3142-4d32-95f9-3143b77c4d87 does not exist
Oct 02 12:46:58 compute-0 sudo[361856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:58 compute-0 sudo[361856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:58 compute-0 sudo[361856]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:58 compute-0 sudo[361881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:46:58 compute-0 sudo[361881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:58 compute-0 sudo[361881]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:58.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:59 compute-0 ceph-mon[73607]: pgmap v2608: 305 pgs: 305 active+clean; 451 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 993 KiB/s rd, 3.9 MiB/s wr, 184 op/s
Oct 02 12:46:59 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:46:59 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:46:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:46:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:46:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:59.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:46:59 compute-0 nova_compute[257802]: 2025-10-02 12:46:59.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:59 compute-0 nova_compute[257802]: 2025-10-02 12:46:59.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2609: 305 pgs: 305 active+clean; 443 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.0 MiB/s wr, 261 op/s
Oct 02 12:47:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.002000048s ======
Oct 02 12:47:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:00.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Oct 02 12:47:01 compute-0 ceph-mon[73607]: pgmap v2609: 305 pgs: 305 active+clean; 443 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.0 MiB/s wr, 261 op/s
Oct 02 12:47:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3388750140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:47:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:01.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:47:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2610: 305 pgs: 305 active+clean; 424 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 215 op/s
Oct 02 12:47:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3752406533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:47:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:02.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:47:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:03.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:03.199 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:47:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:03.200 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:47:03 compute-0 nova_compute[257802]: 2025-10-02 12:47:03.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:03 compute-0 ceph-mon[73607]: pgmap v2610: 305 pgs: 305 active+clean; 424 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 215 op/s
Oct 02 12:47:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4123488424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2611: 305 pgs: 305 active+clean; 374 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.3 MiB/s wr, 184 op/s
Oct 02 12:47:04 compute-0 nova_compute[257802]: 2025-10-02 12:47:04.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:04.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:04 compute-0 nova_compute[257802]: 2025-10-02 12:47:04.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:47:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:05.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:47:05 compute-0 ceph-mon[73607]: pgmap v2611: 305 pgs: 305 active+clean; 374 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.3 MiB/s wr, 184 op/s
Oct 02 12:47:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2612: 305 pgs: 305 active+clean; 374 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 43 KiB/s wr, 130 op/s
Oct 02 12:47:06 compute-0 nova_compute[257802]: 2025-10-02 12:47:06.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:47:06 compute-0 nova_compute[257802]: 2025-10-02 12:47:06.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:47:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:06.201 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:06.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:06 compute-0 ovn_controller[148183]: 2025-10-02T12:47:06Z|00727|binding|INFO|Releasing lport 8f987545-a334-4580-ac52-776ea28e9410 from this chassis (sb_readonly=0)
Oct 02 12:47:07 compute-0 nova_compute[257802]: 2025-10-02 12:47:07.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:07.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:07 compute-0 ceph-mon[73607]: pgmap v2612: 305 pgs: 305 active+clean; 374 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 43 KiB/s wr, 130 op/s
Oct 02 12:47:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2613: 305 pgs: 305 active+clean; 374 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 27 KiB/s wr, 109 op/s
Oct 02 12:47:08 compute-0 nova_compute[257802]: 2025-10-02 12:47:08.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:47:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:47:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:08.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:47:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:09.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:09 compute-0 nova_compute[257802]: 2025-10-02 12:47:09.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:09 compute-0 ceph-mon[73607]: pgmap v2613: 305 pgs: 305 active+clean; 374 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 27 KiB/s wr, 109 op/s
Oct 02 12:47:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2225986146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:09 compute-0 nova_compute[257802]: 2025-10-02 12:47:09.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:09 compute-0 podman[361912]: 2025-10-02 12:47:09.924868224 +0000 UTC m=+0.059197986 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:47:09 compute-0 podman[361914]: 2025-10-02 12:47:09.925091949 +0000 UTC m=+0.058524579 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:47:09 compute-0 podman[361913]: 2025-10-02 12:47:09.945731585 +0000 UTC m=+0.079985905 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:47:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2614: 305 pgs: 305 active+clean; 399 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 154 op/s
Oct 02 12:47:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:47:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:10.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:47:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:11.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:11 compute-0 nova_compute[257802]: 2025-10-02 12:47:11.376 2 DEBUG oslo_concurrency.lockutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "80786e14-5db1-47fe-94ae-7bd13aea6bb8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:11 compute-0 nova_compute[257802]: 2025-10-02 12:47:11.376 2 DEBUG oslo_concurrency.lockutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "80786e14-5db1-47fe-94ae-7bd13aea6bb8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:11 compute-0 nova_compute[257802]: 2025-10-02 12:47:11.416 2 DEBUG nova.compute.manager [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:47:11 compute-0 ceph-mon[73607]: pgmap v2614: 305 pgs: 305 active+clean; 399 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 154 op/s
Oct 02 12:47:11 compute-0 nova_compute[257802]: 2025-10-02 12:47:11.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:11 compute-0 nova_compute[257802]: 2025-10-02 12:47:11.537 2 DEBUG oslo_concurrency.lockutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:11 compute-0 nova_compute[257802]: 2025-10-02 12:47:11.538 2 DEBUG oslo_concurrency.lockutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:11 compute-0 nova_compute[257802]: 2025-10-02 12:47:11.563 2 DEBUG nova.virt.hardware [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:47:11 compute-0 nova_compute[257802]: 2025-10-02 12:47:11.563 2 INFO nova.compute.claims [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:47:11 compute-0 nova_compute[257802]: 2025-10-02 12:47:11.718 2 DEBUG oslo_concurrency.processutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2615: 305 pgs: 305 active+clean; 418 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 334 KiB/s rd, 2.5 MiB/s wr, 79 op/s
Oct 02 12:47:12 compute-0 nova_compute[257802]: 2025-10-02 12:47:12.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:47:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:47:12 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/146099768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:12 compute-0 nova_compute[257802]: 2025-10-02 12:47:12.168 2 DEBUG oslo_concurrency.processutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:12 compute-0 nova_compute[257802]: 2025-10-02 12:47:12.175 2 DEBUG nova.compute.provider_tree [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:47:12 compute-0 nova_compute[257802]: 2025-10-02 12:47:12.197 2 DEBUG nova.scheduler.client.report [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:47:12 compute-0 nova_compute[257802]: 2025-10-02 12:47:12.234 2 DEBUG oslo_concurrency.lockutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:12 compute-0 nova_compute[257802]: 2025-10-02 12:47:12.235 2 DEBUG nova.compute.manager [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:47:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:12.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:12 compute-0 nova_compute[257802]: 2025-10-02 12:47:12.328 2 DEBUG nova.compute.manager [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:47:12 compute-0 nova_compute[257802]: 2025-10-02 12:47:12.329 2 DEBUG nova.network.neutron [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:47:12 compute-0 nova_compute[257802]: 2025-10-02 12:47:12.379 2 INFO nova.virt.libvirt.driver [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:47:12 compute-0 nova_compute[257802]: 2025-10-02 12:47:12.415 2 DEBUG nova.compute.manager [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:47:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/146099768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:12 compute-0 nova_compute[257802]: 2025-10-02 12:47:12.566 2 DEBUG nova.compute.manager [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:47:12 compute-0 nova_compute[257802]: 2025-10-02 12:47:12.567 2 DEBUG nova.virt.libvirt.driver [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:47:12 compute-0 nova_compute[257802]: 2025-10-02 12:47:12.567 2 INFO nova.virt.libvirt.driver [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Creating image(s)
Oct 02 12:47:12 compute-0 nova_compute[257802]: 2025-10-02 12:47:12.599 2 DEBUG nova.storage.rbd_utils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 80786e14-5db1-47fe-94ae-7bd13aea6bb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:12 compute-0 nova_compute[257802]: 2025-10-02 12:47:12.639 2 DEBUG nova.storage.rbd_utils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 80786e14-5db1-47fe-94ae-7bd13aea6bb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:12 compute-0 nova_compute[257802]: 2025-10-02 12:47:12.670 2 DEBUG nova.storage.rbd_utils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 80786e14-5db1-47fe-94ae-7bd13aea6bb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:12 compute-0 nova_compute[257802]: 2025-10-02 12:47:12.674 2 DEBUG oslo_concurrency.processutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:12 compute-0 nova_compute[257802]: 2025-10-02 12:47:12.706 2 DEBUG nova.policy [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb366465e6154871b8a53c9f500105ce', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:47:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:47:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:47:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:47:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:47:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:47:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:47:12 compute-0 nova_compute[257802]: 2025-10-02 12:47:12.740 2 DEBUG oslo_concurrency.processutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:12 compute-0 nova_compute[257802]: 2025-10-02 12:47:12.741 2 DEBUG oslo_concurrency.lockutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:12 compute-0 nova_compute[257802]: 2025-10-02 12:47:12.742 2 DEBUG oslo_concurrency.lockutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:12 compute-0 nova_compute[257802]: 2025-10-02 12:47:12.742 2 DEBUG oslo_concurrency.lockutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:12 compute-0 nova_compute[257802]: 2025-10-02 12:47:12.775 2 DEBUG nova.storage.rbd_utils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 80786e14-5db1-47fe-94ae-7bd13aea6bb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:12 compute-0 nova_compute[257802]: 2025-10-02 12:47:12.780 2 DEBUG oslo_concurrency.processutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 80786e14-5db1-47fe-94ae-7bd13aea6bb8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:13 compute-0 nova_compute[257802]: 2025-10-02 12:47:13.031 2 DEBUG oslo_concurrency.processutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 80786e14-5db1-47fe-94ae-7bd13aea6bb8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:13 compute-0 nova_compute[257802]: 2025-10-02 12:47:13.097 2 DEBUG nova.storage.rbd_utils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] resizing rbd image 80786e14-5db1-47fe-94ae-7bd13aea6bb8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:47:13 compute-0 nova_compute[257802]: 2025-10-02 12:47:13.128 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:47:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:13.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:13 compute-0 nova_compute[257802]: 2025-10-02 12:47:13.213 2 DEBUG nova.objects.instance [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'migration_context' on Instance uuid 80786e14-5db1-47fe-94ae-7bd13aea6bb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:47:13 compute-0 nova_compute[257802]: 2025-10-02 12:47:13.330 2 DEBUG nova.virt.libvirt.driver [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:47:13 compute-0 nova_compute[257802]: 2025-10-02 12:47:13.330 2 DEBUG nova.virt.libvirt.driver [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Ensure instance console log exists: /var/lib/nova/instances/80786e14-5db1-47fe-94ae-7bd13aea6bb8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:47:13 compute-0 nova_compute[257802]: 2025-10-02 12:47:13.331 2 DEBUG oslo_concurrency.lockutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:13 compute-0 nova_compute[257802]: 2025-10-02 12:47:13.331 2 DEBUG oslo_concurrency.lockutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:13 compute-0 nova_compute[257802]: 2025-10-02 12:47:13.331 2 DEBUG oslo_concurrency.lockutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:13 compute-0 ceph-mon[73607]: pgmap v2615: 305 pgs: 305 active+clean; 418 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 334 KiB/s rd, 2.5 MiB/s wr, 79 op/s
Oct 02 12:47:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2616: 305 pgs: 305 active+clean; 446 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 356 KiB/s rd, 3.5 MiB/s wr, 114 op/s
Oct 02 12:47:14 compute-0 sudo[362159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:47:14 compute-0 sudo[362159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:14 compute-0 sudo[362159]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:14 compute-0 nova_compute[257802]: 2025-10-02 12:47:14.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:47:14 compute-0 sudo[362190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:47:14 compute-0 sudo[362190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:14 compute-0 sudo[362190]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:14 compute-0 podman[362183]: 2025-10-02 12:47:14.181800774 +0000 UTC m=+0.086863484 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 02 12:47:14 compute-0 nova_compute[257802]: 2025-10-02 12:47:14.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:14.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:14 compute-0 nova_compute[257802]: 2025-10-02 12:47:14.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:14 compute-0 nova_compute[257802]: 2025-10-02 12:47:14.821 2 DEBUG nova.network.neutron [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Successfully created port: 6858a032-3425-4c0e-b1ab-a9a68d9242bd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:47:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:47:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:15.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:47:15 compute-0 ceph-mon[73607]: pgmap v2616: 305 pgs: 305 active+clean; 446 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 356 KiB/s rd, 3.5 MiB/s wr, 114 op/s
Oct 02 12:47:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1062975263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2617: 305 pgs: 305 active+clean; 483 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 349 KiB/s rd, 5.0 MiB/s wr, 106 op/s
Oct 02 12:47:16 compute-0 nova_compute[257802]: 2025-10-02 12:47:16.218 2 DEBUG nova.network.neutron [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Successfully updated port: 6858a032-3425-4c0e-b1ab-a9a68d9242bd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:47:16 compute-0 nova_compute[257802]: 2025-10-02 12:47:16.243 2 DEBUG oslo_concurrency.lockutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "refresh_cache-80786e14-5db1-47fe-94ae-7bd13aea6bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:47:16 compute-0 nova_compute[257802]: 2025-10-02 12:47:16.243 2 DEBUG oslo_concurrency.lockutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquired lock "refresh_cache-80786e14-5db1-47fe-94ae-7bd13aea6bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:47:16 compute-0 nova_compute[257802]: 2025-10-02 12:47:16.244 2 DEBUG nova.network.neutron [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:47:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:16.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:16 compute-0 nova_compute[257802]: 2025-10-02 12:47:16.560 2 DEBUG nova.network.neutron [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:47:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/990030479' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3699447283' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2341942147' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/102933903' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:47:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:17.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:47:17 compute-0 ceph-mon[73607]: pgmap v2617: 305 pgs: 305 active+clean; 483 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 349 KiB/s rd, 5.0 MiB/s wr, 106 op/s
Oct 02 12:47:17 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1377427477' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2618: 305 pgs: 305 active+clean; 504 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 359 KiB/s rd, 6.0 MiB/s wr, 119 op/s
Oct 02 12:47:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:47:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:18.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:47:18 compute-0 nova_compute[257802]: 2025-10-02 12:47:18.408 2 DEBUG nova.compute.manager [req-6ddf5e50-d941-497e-bdbf-64110ac680e9 req-61847f82-4e7a-4752-980a-732a30b32924 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Received event network-changed-6858a032-3425-4c0e-b1ab-a9a68d9242bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:47:18 compute-0 nova_compute[257802]: 2025-10-02 12:47:18.409 2 DEBUG nova.compute.manager [req-6ddf5e50-d941-497e-bdbf-64110ac680e9 req-61847f82-4e7a-4752-980a-732a30b32924 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Refreshing instance network info cache due to event network-changed-6858a032-3425-4c0e-b1ab-a9a68d9242bd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:47:18 compute-0 nova_compute[257802]: 2025-10-02 12:47:18.409 2 DEBUG oslo_concurrency.lockutils [req-6ddf5e50-d941-497e-bdbf-64110ac680e9 req-61847f82-4e7a-4752-980a-732a30b32924 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-80786e14-5db1-47fe-94ae-7bd13aea6bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:47:18 compute-0 nova_compute[257802]: 2025-10-02 12:47:18.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:18 compute-0 ceph-mon[73607]: pgmap v2618: 305 pgs: 305 active+clean; 504 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 359 KiB/s rd, 6.0 MiB/s wr, 119 op/s
Oct 02 12:47:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2180663085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:18 compute-0 nova_compute[257802]: 2025-10-02 12:47:18.765 2 DEBUG nova.network.neutron [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Updating instance_info_cache with network_info: [{"id": "6858a032-3425-4c0e-b1ab-a9a68d9242bd", "address": "fa:16:3e:e5:da:8b", "network": {"id": "8658555b-fcfb-47a7-bb49-98c6ca6ccca4", "bridge": "br-int", "label": "tempest-network-smoke--1403947383", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6858a032-34", "ovs_interfaceid": "6858a032-3425-4c0e-b1ab-a9a68d9242bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:47:18 compute-0 nova_compute[257802]: 2025-10-02 12:47:18.879 2 DEBUG oslo_concurrency.lockutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Releasing lock "refresh_cache-80786e14-5db1-47fe-94ae-7bd13aea6bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:47:18 compute-0 nova_compute[257802]: 2025-10-02 12:47:18.880 2 DEBUG nova.compute.manager [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Instance network_info: |[{"id": "6858a032-3425-4c0e-b1ab-a9a68d9242bd", "address": "fa:16:3e:e5:da:8b", "network": {"id": "8658555b-fcfb-47a7-bb49-98c6ca6ccca4", "bridge": "br-int", "label": "tempest-network-smoke--1403947383", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6858a032-34", "ovs_interfaceid": "6858a032-3425-4c0e-b1ab-a9a68d9242bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:47:18 compute-0 nova_compute[257802]: 2025-10-02 12:47:18.880 2 DEBUG oslo_concurrency.lockutils [req-6ddf5e50-d941-497e-bdbf-64110ac680e9 req-61847f82-4e7a-4752-980a-732a30b32924 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-80786e14-5db1-47fe-94ae-7bd13aea6bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:47:18 compute-0 nova_compute[257802]: 2025-10-02 12:47:18.880 2 DEBUG nova.network.neutron [req-6ddf5e50-d941-497e-bdbf-64110ac680e9 req-61847f82-4e7a-4752-980a-732a30b32924 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Refreshing network info cache for port 6858a032-3425-4c0e-b1ab-a9a68d9242bd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:47:18 compute-0 nova_compute[257802]: 2025-10-02 12:47:18.882 2 DEBUG nova.virt.libvirt.driver [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Start _get_guest_xml network_info=[{"id": "6858a032-3425-4c0e-b1ab-a9a68d9242bd", "address": "fa:16:3e:e5:da:8b", "network": {"id": "8658555b-fcfb-47a7-bb49-98c6ca6ccca4", "bridge": "br-int", "label": "tempest-network-smoke--1403947383", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6858a032-34", "ovs_interfaceid": "6858a032-3425-4c0e-b1ab-a9a68d9242bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:47:18 compute-0 nova_compute[257802]: 2025-10-02 12:47:18.887 2 WARNING nova.virt.libvirt.driver [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:47:18 compute-0 nova_compute[257802]: 2025-10-02 12:47:18.891 2 DEBUG nova.virt.libvirt.host [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:47:18 compute-0 nova_compute[257802]: 2025-10-02 12:47:18.891 2 DEBUG nova.virt.libvirt.host [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:47:18 compute-0 nova_compute[257802]: 2025-10-02 12:47:18.896 2 DEBUG nova.virt.libvirt.host [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:47:18 compute-0 nova_compute[257802]: 2025-10-02 12:47:18.897 2 DEBUG nova.virt.libvirt.host [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:47:18 compute-0 nova_compute[257802]: 2025-10-02 12:47:18.898 2 DEBUG nova.virt.libvirt.driver [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:47:18 compute-0 nova_compute[257802]: 2025-10-02 12:47:18.899 2 DEBUG nova.virt.hardware [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:47:18 compute-0 nova_compute[257802]: 2025-10-02 12:47:18.899 2 DEBUG nova.virt.hardware [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:47:18 compute-0 nova_compute[257802]: 2025-10-02 12:47:18.900 2 DEBUG nova.virt.hardware [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:47:18 compute-0 nova_compute[257802]: 2025-10-02 12:47:18.900 2 DEBUG nova.virt.hardware [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:47:18 compute-0 nova_compute[257802]: 2025-10-02 12:47:18.900 2 DEBUG nova.virt.hardware [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:47:18 compute-0 nova_compute[257802]: 2025-10-02 12:47:18.900 2 DEBUG nova.virt.hardware [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:47:18 compute-0 nova_compute[257802]: 2025-10-02 12:47:18.900 2 DEBUG nova.virt.hardware [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:47:18 compute-0 nova_compute[257802]: 2025-10-02 12:47:18.901 2 DEBUG nova.virt.hardware [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:47:18 compute-0 nova_compute[257802]: 2025-10-02 12:47:18.901 2 DEBUG nova.virt.hardware [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:47:18 compute-0 nova_compute[257802]: 2025-10-02 12:47:18.901 2 DEBUG nova.virt.hardware [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:47:18 compute-0 nova_compute[257802]: 2025-10-02 12:47:18.901 2 DEBUG nova.virt.hardware [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:47:18 compute-0 nova_compute[257802]: 2025-10-02 12:47:18.904 2 DEBUG oslo_concurrency.processutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:19 compute-0 nova_compute[257802]: 2025-10-02 12:47:19.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:47:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:19.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:47:19 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/631875633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:19 compute-0 nova_compute[257802]: 2025-10-02 12:47:19.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:19 compute-0 nova_compute[257802]: 2025-10-02 12:47:19.336 2 DEBUG oslo_concurrency.processutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:19 compute-0 nova_compute[257802]: 2025-10-02 12:47:19.364 2 DEBUG nova.storage.rbd_utils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 80786e14-5db1-47fe-94ae-7bd13aea6bb8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:19 compute-0 nova_compute[257802]: 2025-10-02 12:47:19.368 2 DEBUG oslo_concurrency.processutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/631875633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:19 compute-0 nova_compute[257802]: 2025-10-02 12:47:19.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:47:19 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/52251164' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:19 compute-0 nova_compute[257802]: 2025-10-02 12:47:19.792 2 DEBUG oslo_concurrency.processutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:19 compute-0 nova_compute[257802]: 2025-10-02 12:47:19.793 2 DEBUG nova.virt.libvirt.vif [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:47:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1729947760',display_name='tempest-TestNetworkBasicOps-server-1729947760',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1729947760',id=169,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPeviUNX5KqhJmHmdLPCiFE2g8mRklzWSMPDVRyMqtSDDOw371mpWai4NIY5lnxSjDRhb9u0GW36rOorq81/kausSuGhJSK9xg+wpkw85YJzgcBlJEruYAR5Py+GrJBQdQ==',key_name='tempest-TestNetworkBasicOps-720730733',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-bbzvabl0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:47:12Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=80786e14-5db1-47fe-94ae-7bd13aea6bb8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6858a032-3425-4c0e-b1ab-a9a68d9242bd", "address": "fa:16:3e:e5:da:8b", "network": {"id": "8658555b-fcfb-47a7-bb49-98c6ca6ccca4", "bridge": "br-int", "label": "tempest-network-smoke--1403947383", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6858a032-34", "ovs_interfaceid": "6858a032-3425-4c0e-b1ab-a9a68d9242bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:47:19 compute-0 nova_compute[257802]: 2025-10-02 12:47:19.793 2 DEBUG nova.network.os_vif_util [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "6858a032-3425-4c0e-b1ab-a9a68d9242bd", "address": "fa:16:3e:e5:da:8b", "network": {"id": "8658555b-fcfb-47a7-bb49-98c6ca6ccca4", "bridge": "br-int", "label": "tempest-network-smoke--1403947383", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6858a032-34", "ovs_interfaceid": "6858a032-3425-4c0e-b1ab-a9a68d9242bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:47:19 compute-0 nova_compute[257802]: 2025-10-02 12:47:19.794 2 DEBUG nova.network.os_vif_util [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:da:8b,bridge_name='br-int',has_traffic_filtering=True,id=6858a032-3425-4c0e-b1ab-a9a68d9242bd,network=Network(8658555b-fcfb-47a7-bb49-98c6ca6ccca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6858a032-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:47:19 compute-0 nova_compute[257802]: 2025-10-02 12:47:19.795 2 DEBUG nova.objects.instance [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'pci_devices' on Instance uuid 80786e14-5db1-47fe-94ae-7bd13aea6bb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:47:19 compute-0 nova_compute[257802]: 2025-10-02 12:47:19.889 2 DEBUG nova.virt.libvirt.driver [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:47:19 compute-0 nova_compute[257802]:   <uuid>80786e14-5db1-47fe-94ae-7bd13aea6bb8</uuid>
Oct 02 12:47:19 compute-0 nova_compute[257802]:   <name>instance-000000a9</name>
Oct 02 12:47:19 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:47:19 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:47:19 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:47:19 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:       <nova:name>tempest-TestNetworkBasicOps-server-1729947760</nova:name>
Oct 02 12:47:19 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:47:18</nova:creationTime>
Oct 02 12:47:19 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:47:19 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:47:19 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:47:19 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:47:19 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:47:19 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:47:19 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:47:19 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:47:19 compute-0 nova_compute[257802]:         <nova:user uuid="fb366465e6154871b8a53c9f500105ce">tempest-TestNetworkBasicOps-1692262680-project-member</nova:user>
Oct 02 12:47:19 compute-0 nova_compute[257802]:         <nova:project uuid="ce2ca82c03554560b55ed747ae63f1fb">tempest-TestNetworkBasicOps-1692262680</nova:project>
Oct 02 12:47:19 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:47:19 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:47:19 compute-0 nova_compute[257802]:         <nova:port uuid="6858a032-3425-4c0e-b1ab-a9a68d9242bd">
Oct 02 12:47:19 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.21" ipVersion="4"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:47:19 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:47:19 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:47:19 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <system>
Oct 02 12:47:19 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:47:19 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:47:19 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:47:19 compute-0 nova_compute[257802]:       <entry name="serial">80786e14-5db1-47fe-94ae-7bd13aea6bb8</entry>
Oct 02 12:47:19 compute-0 nova_compute[257802]:       <entry name="uuid">80786e14-5db1-47fe-94ae-7bd13aea6bb8</entry>
Oct 02 12:47:19 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     </system>
Oct 02 12:47:19 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:47:19 compute-0 nova_compute[257802]:   <os>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:   </os>
Oct 02 12:47:19 compute-0 nova_compute[257802]:   <features>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:   </features>
Oct 02 12:47:19 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:47:19 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:47:19 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:47:19 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/80786e14-5db1-47fe-94ae-7bd13aea6bb8_disk">
Oct 02 12:47:19 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:       </source>
Oct 02 12:47:19 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:47:19 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:47:19 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:47:19 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/80786e14-5db1-47fe-94ae-7bd13aea6bb8_disk.config">
Oct 02 12:47:19 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:       </source>
Oct 02 12:47:19 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:47:19 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:47:19 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:47:19 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:e5:da:8b"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:       <target dev="tap6858a032-34"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:47:19 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/80786e14-5db1-47fe-94ae-7bd13aea6bb8/console.log" append="off"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <video>
Oct 02 12:47:19 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     </video>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:47:19 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:47:19 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:47:19 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:47:19 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:47:19 compute-0 nova_compute[257802]: </domain>
Oct 02 12:47:19 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:47:19 compute-0 nova_compute[257802]: 2025-10-02 12:47:19.891 2 DEBUG nova.compute.manager [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Preparing to wait for external event network-vif-plugged-6858a032-3425-4c0e-b1ab-a9a68d9242bd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:47:19 compute-0 nova_compute[257802]: 2025-10-02 12:47:19.892 2 DEBUG oslo_concurrency.lockutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "80786e14-5db1-47fe-94ae-7bd13aea6bb8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:19 compute-0 nova_compute[257802]: 2025-10-02 12:47:19.892 2 DEBUG oslo_concurrency.lockutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "80786e14-5db1-47fe-94ae-7bd13aea6bb8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:19 compute-0 nova_compute[257802]: 2025-10-02 12:47:19.893 2 DEBUG oslo_concurrency.lockutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "80786e14-5db1-47fe-94ae-7bd13aea6bb8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:19 compute-0 nova_compute[257802]: 2025-10-02 12:47:19.894 2 DEBUG nova.virt.libvirt.vif [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:47:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1729947760',display_name='tempest-TestNetworkBasicOps-server-1729947760',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1729947760',id=169,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPeviUNX5KqhJmHmdLPCiFE2g8mRklzWSMPDVRyMqtSDDOw371mpWai4NIY5lnxSjDRhb9u0GW36rOorq81/kausSuGhJSK9xg+wpkw85YJzgcBlJEruYAR5Py+GrJBQdQ==',key_name='tempest-TestNetworkBasicOps-720730733',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-bbzvabl0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:47:12Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=80786e14-5db1-47fe-94ae-7bd13aea6bb8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6858a032-3425-4c0e-b1ab-a9a68d9242bd", "address": "fa:16:3e:e5:da:8b", "network": {"id": "8658555b-fcfb-47a7-bb49-98c6ca6ccca4", "bridge": "br-int", "label": "tempest-network-smoke--1403947383", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6858a032-34", "ovs_interfaceid": "6858a032-3425-4c0e-b1ab-a9a68d9242bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:47:19 compute-0 nova_compute[257802]: 2025-10-02 12:47:19.895 2 DEBUG nova.network.os_vif_util [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "6858a032-3425-4c0e-b1ab-a9a68d9242bd", "address": "fa:16:3e:e5:da:8b", "network": {"id": "8658555b-fcfb-47a7-bb49-98c6ca6ccca4", "bridge": "br-int", "label": "tempest-network-smoke--1403947383", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6858a032-34", "ovs_interfaceid": "6858a032-3425-4c0e-b1ab-a9a68d9242bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:47:19 compute-0 nova_compute[257802]: 2025-10-02 12:47:19.896 2 DEBUG nova.network.os_vif_util [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:da:8b,bridge_name='br-int',has_traffic_filtering=True,id=6858a032-3425-4c0e-b1ab-a9a68d9242bd,network=Network(8658555b-fcfb-47a7-bb49-98c6ca6ccca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6858a032-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:47:19 compute-0 nova_compute[257802]: 2025-10-02 12:47:19.896 2 DEBUG os_vif [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:da:8b,bridge_name='br-int',has_traffic_filtering=True,id=6858a032-3425-4c0e-b1ab-a9a68d9242bd,network=Network(8658555b-fcfb-47a7-bb49-98c6ca6ccca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6858a032-34') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:47:19 compute-0 nova_compute[257802]: 2025-10-02 12:47:19.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:19 compute-0 nova_compute[257802]: 2025-10-02 12:47:19.898 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:19 compute-0 nova_compute[257802]: 2025-10-02 12:47:19.898 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:47:19 compute-0 nova_compute[257802]: 2025-10-02 12:47:19.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:19 compute-0 nova_compute[257802]: 2025-10-02 12:47:19.902 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6858a032-34, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:19 compute-0 nova_compute[257802]: 2025-10-02 12:47:19.903 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6858a032-34, col_values=(('external_ids', {'iface-id': '6858a032-3425-4c0e-b1ab-a9a68d9242bd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e5:da:8b', 'vm-uuid': '80786e14-5db1-47fe-94ae-7bd13aea6bb8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:19 compute-0 NetworkManager[44987]: <info>  [1759409239.9055] manager: (tap6858a032-34): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/330)
Oct 02 12:47:19 compute-0 nova_compute[257802]: 2025-10-02 12:47:19.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:19 compute-0 nova_compute[257802]: 2025-10-02 12:47:19.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:47:19 compute-0 nova_compute[257802]: 2025-10-02 12:47:19.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:19 compute-0 nova_compute[257802]: 2025-10-02 12:47:19.915 2 INFO os_vif [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:da:8b,bridge_name='br-int',has_traffic_filtering=True,id=6858a032-3425-4c0e-b1ab-a9a68d9242bd,network=Network(8658555b-fcfb-47a7-bb49-98c6ca6ccca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6858a032-34')
Oct 02 12:47:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2619: 305 pgs: 305 active+clean; 545 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 422 KiB/s rd, 7.5 MiB/s wr, 148 op/s
Oct 02 12:47:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:47:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:20.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:47:20 compute-0 nova_compute[257802]: 2025-10-02 12:47:20.597 2 DEBUG nova.virt.libvirt.driver [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:47:20 compute-0 nova_compute[257802]: 2025-10-02 12:47:20.598 2 DEBUG nova.virt.libvirt.driver [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:47:20 compute-0 nova_compute[257802]: 2025-10-02 12:47:20.598 2 DEBUG nova.virt.libvirt.driver [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No VIF found with MAC fa:16:3e:e5:da:8b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:47:20 compute-0 nova_compute[257802]: 2025-10-02 12:47:20.599 2 INFO nova.virt.libvirt.driver [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Using config drive
Oct 02 12:47:20 compute-0 nova_compute[257802]: 2025-10-02 12:47:20.625 2 DEBUG nova.storage.rbd_utils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 80786e14-5db1-47fe-94ae-7bd13aea6bb8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/52251164' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:20 compute-0 ceph-mon[73607]: pgmap v2619: 305 pgs: 305 active+clean; 545 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 422 KiB/s rd, 7.5 MiB/s wr, 148 op/s
Oct 02 12:47:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:21.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2620: 305 pgs: 305 active+clean; 546 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 463 KiB/s rd, 5.4 MiB/s wr, 110 op/s
Oct 02 12:47:22 compute-0 nova_compute[257802]: 2025-10-02 12:47:22.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:47:22 compute-0 nova_compute[257802]: 2025-10-02 12:47:22.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:47:22 compute-0 nova_compute[257802]: 2025-10-02 12:47:22.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:47:22 compute-0 nova_compute[257802]: 2025-10-02 12:47:22.149 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 12:47:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:47:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:22.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:47:22 compute-0 nova_compute[257802]: 2025-10-02 12:47:22.751 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-da95c339-6bd5-495a-bd12-d1e71a8017b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:47:22 compute-0 nova_compute[257802]: 2025-10-02 12:47:22.751 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-da95c339-6bd5-495a-bd12-d1e71a8017b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:47:22 compute-0 nova_compute[257802]: 2025-10-02 12:47:22.752 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:47:22 compute-0 nova_compute[257802]: 2025-10-02 12:47:22.752 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid da95c339-6bd5-495a-bd12-d1e71a8017b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:47:22 compute-0 nova_compute[257802]: 2025-10-02 12:47:22.938 2 INFO nova.virt.libvirt.driver [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Creating config drive at /var/lib/nova/instances/80786e14-5db1-47fe-94ae-7bd13aea6bb8/disk.config
Oct 02 12:47:22 compute-0 nova_compute[257802]: 2025-10-02 12:47:22.942 2 DEBUG oslo_concurrency.processutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/80786e14-5db1-47fe-94ae-7bd13aea6bb8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvvt2x9mw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:23 compute-0 nova_compute[257802]: 2025-10-02 12:47:23.075 2 DEBUG oslo_concurrency.processutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/80786e14-5db1-47fe-94ae-7bd13aea6bb8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvvt2x9mw" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:23 compute-0 ceph-mon[73607]: pgmap v2620: 305 pgs: 305 active+clean; 546 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 463 KiB/s rd, 5.4 MiB/s wr, 110 op/s
Oct 02 12:47:23 compute-0 nova_compute[257802]: 2025-10-02 12:47:23.123 2 DEBUG nova.storage.rbd_utils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 80786e14-5db1-47fe-94ae-7bd13aea6bb8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:23 compute-0 nova_compute[257802]: 2025-10-02 12:47:23.130 2 DEBUG oslo_concurrency.processutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/80786e14-5db1-47fe-94ae-7bd13aea6bb8/disk.config 80786e14-5db1-47fe-94ae-7bd13aea6bb8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:23.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:23 compute-0 nova_compute[257802]: 2025-10-02 12:47:23.306 2 DEBUG oslo_concurrency.processutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/80786e14-5db1-47fe-94ae-7bd13aea6bb8/disk.config 80786e14-5db1-47fe-94ae-7bd13aea6bb8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:23 compute-0 nova_compute[257802]: 2025-10-02 12:47:23.307 2 INFO nova.virt.libvirt.driver [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Deleting local config drive /var/lib/nova/instances/80786e14-5db1-47fe-94ae-7bd13aea6bb8/disk.config because it was imported into RBD.
Oct 02 12:47:23 compute-0 kernel: tap6858a032-34: entered promiscuous mode
Oct 02 12:47:23 compute-0 NetworkManager[44987]: <info>  [1759409243.3533] manager: (tap6858a032-34): new Tun device (/org/freedesktop/NetworkManager/Devices/331)
Oct 02 12:47:23 compute-0 nova_compute[257802]: 2025-10-02 12:47:23.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:23 compute-0 ovn_controller[148183]: 2025-10-02T12:47:23Z|00728|binding|INFO|Claiming lport 6858a032-3425-4c0e-b1ab-a9a68d9242bd for this chassis.
Oct 02 12:47:23 compute-0 ovn_controller[148183]: 2025-10-02T12:47:23Z|00729|binding|INFO|6858a032-3425-4c0e-b1ab-a9a68d9242bd: Claiming fa:16:3e:e5:da:8b 10.100.0.21
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:23.365 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:da:8b 10.100.0.21'], port_security=['fa:16:3e:e5:da:8b 10.100.0.21'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.21/28', 'neutron:device_id': '80786e14-5db1-47fe-94ae-7bd13aea6bb8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8658555b-fcfb-47a7-bb49-98c6ca6ccca4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '704f25f4-12e0-4e6e-a623-b55edb54fa20', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9dd08d07-e148-4d01-999e-2ed7de7f386d, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=6858a032-3425-4c0e-b1ab-a9a68d9242bd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:23.366 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 6858a032-3425-4c0e-b1ab-a9a68d9242bd in datapath 8658555b-fcfb-47a7-bb49-98c6ca6ccca4 bound to our chassis
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:23.367 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8658555b-fcfb-47a7-bb49-98c6ca6ccca4
Oct 02 12:47:23 compute-0 systemd[1]: Starting dnf makecache...
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:23.378 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4803f530-5a95-4c64-9a21-72978430188e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:23.379 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8658555b-f1 in ovnmeta-8658555b-fcfb-47a7-bb49-98c6ca6ccca4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:23.381 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8658555b-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:23.381 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b4c88a09-e489-4015-8dcb-6794edbd1f13]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:23.381 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2139e61d-d70f-48c3-a3cc-e0ffdc529526]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:23 compute-0 systemd-udevd[362379]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:23.394 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[ffcd7a2f-acef-4005-990a-71dc2ce2b7d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:23 compute-0 systemd-machined[211836]: New machine qemu-82-instance-000000a9.
Oct 02 12:47:23 compute-0 nova_compute[257802]: 2025-10-02 12:47:23.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:23 compute-0 NetworkManager[44987]: <info>  [1759409243.4084] device (tap6858a032-34): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:47:23 compute-0 NetworkManager[44987]: <info>  [1759409243.4098] device (tap6858a032-34): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:47:23 compute-0 systemd[1]: Started Virtual Machine qemu-82-instance-000000a9.
Oct 02 12:47:23 compute-0 ovn_controller[148183]: 2025-10-02T12:47:23Z|00730|binding|INFO|Setting lport 6858a032-3425-4c0e-b1ab-a9a68d9242bd ovn-installed in OVS
Oct 02 12:47:23 compute-0 ovn_controller[148183]: 2025-10-02T12:47:23Z|00731|binding|INFO|Setting lport 6858a032-3425-4c0e-b1ab-a9a68d9242bd up in Southbound
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:23.417 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[25e9263b-9a26-449e-a123-38869d08379d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:23 compute-0 nova_compute[257802]: 2025-10-02 12:47:23.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:23.443 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[72f5ccb9-3b8a-43e9-a493-f37cbd26fd4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:23.448 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[302e162d-d773-4ddc-87c8-b43bb9fe461e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:23 compute-0 NetworkManager[44987]: <info>  [1759409243.4497] manager: (tap8658555b-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/332)
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:23.484 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[346f40dd-474d-4962-9234-e2541e013edd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:23.487 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[e99910f8-d07a-4f51-99bc-91935464b878]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:23 compute-0 NetworkManager[44987]: <info>  [1759409243.5100] device (tap8658555b-f0): carrier: link connected
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:23.516 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[8e4d821c-3350-407e-a5d5-49d971fb5f81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:23.531 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[eed6c263-732b-4ee5-9ee8-ea43841d3489]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8658555b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:56:e1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 223], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731113, 'reachable_time': 43580, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362410, 'error': None, 'target': 'ovnmeta-8658555b-fcfb-47a7-bb49-98c6ca6ccca4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:23.548 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b0aba83d-46ff-4b6b-b4b0-bd8e310d477d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe67:56e1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 731113, 'tstamp': 731113}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362411, 'error': None, 'target': 'ovnmeta-8658555b-fcfb-47a7-bb49-98c6ca6ccca4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:23.566 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4c401910-8d70-4cc7-a002-35a40a538d42]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8658555b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:56:e1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 223], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731113, 'reachable_time': 43580, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 362412, 'error': None, 'target': 'ovnmeta-8658555b-fcfb-47a7-bb49-98c6ca6ccca4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:23.598 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f19620e6-c2c8-4c4d-8598-8e87331b0bce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:23.674 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d5d1ee50-4dd6-440e-991c-0058fd535edf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:23.676 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8658555b-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:23.676 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:23.677 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8658555b-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:23 compute-0 nova_compute[257802]: 2025-10-02 12:47:23.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:23 compute-0 NetworkManager[44987]: <info>  [1759409243.6792] manager: (tap8658555b-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/333)
Oct 02 12:47:23 compute-0 kernel: tap8658555b-f0: entered promiscuous mode
Oct 02 12:47:23 compute-0 nova_compute[257802]: 2025-10-02 12:47:23.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:23.683 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8658555b-f0, col_values=(('external_ids', {'iface-id': '68645b1d-7865-4f65-9498-bbb3db0974e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:23 compute-0 nova_compute[257802]: 2025-10-02 12:47:23.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:23 compute-0 ovn_controller[148183]: 2025-10-02T12:47:23Z|00732|binding|INFO|Releasing lport 68645b1d-7865-4f65-9498-bbb3db0974e8 from this chassis (sb_readonly=0)
Oct 02 12:47:23 compute-0 dnf[362373]: Metadata cache refreshed recently.
Oct 02 12:47:23 compute-0 nova_compute[257802]: 2025-10-02 12:47:23.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:23.716 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8658555b-fcfb-47a7-bb49-98c6ca6ccca4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8658555b-fcfb-47a7-bb49-98c6ca6ccca4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:23.717 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1b916ad4-4cd5-4cd5-8ce6-3afe1f76340b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:23.718 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-8658555b-fcfb-47a7-bb49-98c6ca6ccca4
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/8658555b-fcfb-47a7-bb49-98c6ca6ccca4.pid.haproxy
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 8658555b-fcfb-47a7-bb49-98c6ca6ccca4
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:47:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:23.721 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8658555b-fcfb-47a7-bb49-98c6ca6ccca4', 'env', 'PROCESS_TAG=haproxy-8658555b-fcfb-47a7-bb49-98c6ca6ccca4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8658555b-fcfb-47a7-bb49-98c6ca6ccca4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:47:23 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 02 12:47:23 compute-0 systemd[1]: Finished dnf makecache.
Oct 02 12:47:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2621: 305 pgs: 305 active+clean; 546 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.0 MiB/s wr, 170 op/s
Oct 02 12:47:24 compute-0 podman[362486]: 2025-10-02 12:47:24.118109309 +0000 UTC m=+0.081517994 container create 4c487337460131cdb45c8efb532468065d65ea12da210567b839ffe82a16a3b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8658555b-fcfb-47a7-bb49-98c6ca6ccca4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 12:47:24 compute-0 podman[362486]: 2025-10-02 12:47:24.069637308 +0000 UTC m=+0.033046023 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:47:24 compute-0 systemd[1]: Started libpod-conmon-4c487337460131cdb45c8efb532468065d65ea12da210567b839ffe82a16a3b3.scope.
Oct 02 12:47:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.189 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409244.1884274, 80786e14-5db1-47fe-94ae-7bd13aea6bb8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.189 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] VM Started (Lifecycle Event)
Oct 02 12:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2c758e20a1840f0e4f37843cf694290d66b268acce434cd4301694da150d329/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.211 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.215 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409244.1886165, 80786e14-5db1-47fe-94ae-7bd13aea6bb8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.215 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] VM Paused (Lifecycle Event)
Oct 02 12:47:24 compute-0 podman[362486]: 2025-10-02 12:47:24.215907281 +0000 UTC m=+0.179315976 container init 4c487337460131cdb45c8efb532468065d65ea12da210567b839ffe82a16a3b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8658555b-fcfb-47a7-bb49-98c6ca6ccca4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 12:47:24 compute-0 podman[362486]: 2025-10-02 12:47:24.22357608 +0000 UTC m=+0.186984765 container start 4c487337460131cdb45c8efb532468065d65ea12da210567b839ffe82a16a3b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8658555b-fcfb-47a7-bb49-98c6ca6ccca4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:47:24 compute-0 neutron-haproxy-ovnmeta-8658555b-fcfb-47a7-bb49-98c6ca6ccca4[362502]: [NOTICE]   (362506) : New worker (362508) forked
Oct 02 12:47:24 compute-0 neutron-haproxy-ovnmeta-8658555b-fcfb-47a7-bb49-98c6ca6ccca4[362502]: [NOTICE]   (362506) : Loading success.
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.263 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.268 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:47:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:47:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:24.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.301 2 DEBUG nova.compute.manager [req-ca5d7821-796d-4b54-9fb1-cfc34c30022c req-b0af86ef-4571-4915-ab77-6dd8fb8585b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Received event network-vif-plugged-6858a032-3425-4c0e-b1ab-a9a68d9242bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.301 2 DEBUG oslo_concurrency.lockutils [req-ca5d7821-796d-4b54-9fb1-cfc34c30022c req-b0af86ef-4571-4915-ab77-6dd8fb8585b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "80786e14-5db1-47fe-94ae-7bd13aea6bb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.302 2 DEBUG oslo_concurrency.lockutils [req-ca5d7821-796d-4b54-9fb1-cfc34c30022c req-b0af86ef-4571-4915-ab77-6dd8fb8585b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "80786e14-5db1-47fe-94ae-7bd13aea6bb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.302 2 DEBUG oslo_concurrency.lockutils [req-ca5d7821-796d-4b54-9fb1-cfc34c30022c req-b0af86ef-4571-4915-ab77-6dd8fb8585b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "80786e14-5db1-47fe-94ae-7bd13aea6bb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.302 2 DEBUG nova.compute.manager [req-ca5d7821-796d-4b54-9fb1-cfc34c30022c req-b0af86ef-4571-4915-ab77-6dd8fb8585b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Processing event network-vif-plugged-6858a032-3425-4c0e-b1ab-a9a68d9242bd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.303 2 DEBUG nova.compute.manager [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.308 2 DEBUG nova.virt.libvirt.driver [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.310 2 INFO nova.virt.libvirt.driver [-] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Instance spawned successfully.
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.311 2 DEBUG nova.virt.libvirt.driver [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.375 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.376 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409244.3062038, 80786e14-5db1-47fe-94ae-7bd13aea6bb8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.376 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] VM Resumed (Lifecycle Event)
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.382 2 DEBUG nova.virt.libvirt.driver [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.382 2 DEBUG nova.virt.libvirt.driver [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.383 2 DEBUG nova.virt.libvirt.driver [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.383 2 DEBUG nova.virt.libvirt.driver [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.383 2 DEBUG nova.virt.libvirt.driver [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.384 2 DEBUG nova.virt.libvirt.driver [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.481 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.485 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.690 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.749 2 INFO nova.compute.manager [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Took 12.18 seconds to spawn the instance on the hypervisor.
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.749 2 DEBUG nova.compute.manager [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.832 2 DEBUG nova.network.neutron [req-6ddf5e50-d941-497e-bdbf-64110ac680e9 req-61847f82-4e7a-4752-980a-732a30b32924 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Updated VIF entry in instance network info cache for port 6858a032-3425-4c0e-b1ab-a9a68d9242bd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.833 2 DEBUG nova.network.neutron [req-6ddf5e50-d941-497e-bdbf-64110ac680e9 req-61847f82-4e7a-4752-980a-732a30b32924 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Updating instance_info_cache with network_info: [{"id": "6858a032-3425-4c0e-b1ab-a9a68d9242bd", "address": "fa:16:3e:e5:da:8b", "network": {"id": "8658555b-fcfb-47a7-bb49-98c6ca6ccca4", "bridge": "br-int", "label": "tempest-network-smoke--1403947383", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6858a032-34", "ovs_interfaceid": "6858a032-3425-4c0e-b1ab-a9a68d9242bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:24 compute-0 nova_compute[257802]: 2025-10-02 12:47:24.931 2 DEBUG oslo_concurrency.lockutils [req-6ddf5e50-d941-497e-bdbf-64110ac680e9 req-61847f82-4e7a-4752-980a-732a30b32924 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-80786e14-5db1-47fe-94ae-7bd13aea6bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:47:25 compute-0 nova_compute[257802]: 2025-10-02 12:47:25.041 2 INFO nova.compute.manager [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Took 13.53 seconds to build instance.
Oct 02 12:47:25 compute-0 nova_compute[257802]: 2025-10-02 12:47:25.156 2 DEBUG oslo_concurrency.lockutils [None req-c526a871-0ad5-4bd5-ad1d-015b9735ea07 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "80786e14-5db1-47fe-94ae-7bd13aea6bb8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:25 compute-0 ceph-mon[73607]: pgmap v2621: 305 pgs: 305 active+clean; 546 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.0 MiB/s wr, 170 op/s
Oct 02 12:47:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:47:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:25.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:47:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2622: 305 pgs: 305 active+clean; 546 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 4.0 MiB/s wr, 171 op/s
Oct 02 12:47:26 compute-0 nova_compute[257802]: 2025-10-02 12:47:26.044 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Updating instance_info_cache with network_info: [{"id": "ba5bd4c0-6961-4ed8-bbac-669993d3af7c", "address": "fa:16:3e:e0:6d:8a", "network": {"id": "c6500315-835f-4d3a-971d-50fa2592498e", "bridge": "br-int", "label": "tempest-network-smoke--376518162", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba5bd4c0-69", "ovs_interfaceid": "ba5bd4c0-6961-4ed8-bbac-669993d3af7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:47:26 compute-0 nova_compute[257802]: 2025-10-02 12:47:26.135 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-da95c339-6bd5-495a-bd12-d1e71a8017b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:47:26 compute-0 nova_compute[257802]: 2025-10-02 12:47:26.135 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:47:26 compute-0 nova_compute[257802]: 2025-10-02 12:47:26.136 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:47:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:47:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:26.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:47:26 compute-0 nova_compute[257802]: 2025-10-02 12:47:26.395 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:26 compute-0 nova_compute[257802]: 2025-10-02 12:47:26.396 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:26 compute-0 nova_compute[257802]: 2025-10-02 12:47:26.396 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:26 compute-0 nova_compute[257802]: 2025-10-02 12:47:26.396 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:47:26 compute-0 nova_compute[257802]: 2025-10-02 12:47:26.396 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:47:26 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/309466380' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:26 compute-0 nova_compute[257802]: 2025-10-02 12:47:26.860 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:26.966 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:26.966 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:26.967 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:27 compute-0 ceph-mon[73607]: pgmap v2622: 305 pgs: 305 active+clean; 546 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 4.0 MiB/s wr, 171 op/s
Oct 02 12:47:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/309466380' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:27.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:27 compute-0 nova_compute[257802]: 2025-10-02 12:47:27.356 2 DEBUG nova.compute.manager [req-e59d1b42-4af5-4f87-9bcf-e48c554fdb82 req-66e8ee60-01ea-4868-bcf5-92d5ecaf2f4d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Received event network-vif-plugged-6858a032-3425-4c0e-b1ab-a9a68d9242bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:47:27 compute-0 nova_compute[257802]: 2025-10-02 12:47:27.356 2 DEBUG oslo_concurrency.lockutils [req-e59d1b42-4af5-4f87-9bcf-e48c554fdb82 req-66e8ee60-01ea-4868-bcf5-92d5ecaf2f4d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "80786e14-5db1-47fe-94ae-7bd13aea6bb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:27 compute-0 nova_compute[257802]: 2025-10-02 12:47:27.357 2 DEBUG oslo_concurrency.lockutils [req-e59d1b42-4af5-4f87-9bcf-e48c554fdb82 req-66e8ee60-01ea-4868-bcf5-92d5ecaf2f4d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "80786e14-5db1-47fe-94ae-7bd13aea6bb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:27 compute-0 nova_compute[257802]: 2025-10-02 12:47:27.357 2 DEBUG oslo_concurrency.lockutils [req-e59d1b42-4af5-4f87-9bcf-e48c554fdb82 req-66e8ee60-01ea-4868-bcf5-92d5ecaf2f4d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "80786e14-5db1-47fe-94ae-7bd13aea6bb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:27 compute-0 nova_compute[257802]: 2025-10-02 12:47:27.357 2 DEBUG nova.compute.manager [req-e59d1b42-4af5-4f87-9bcf-e48c554fdb82 req-66e8ee60-01ea-4868-bcf5-92d5ecaf2f4d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] No waiting events found dispatching network-vif-plugged-6858a032-3425-4c0e-b1ab-a9a68d9242bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:47:27 compute-0 nova_compute[257802]: 2025-10-02 12:47:27.357 2 WARNING nova.compute.manager [req-e59d1b42-4af5-4f87-9bcf-e48c554fdb82 req-66e8ee60-01ea-4868-bcf5-92d5ecaf2f4d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Received unexpected event network-vif-plugged-6858a032-3425-4c0e-b1ab-a9a68d9242bd for instance with vm_state active and task_state None.
Oct 02 12:47:27 compute-0 nova_compute[257802]: 2025-10-02 12:47:27.702 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000a9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:47:27 compute-0 nova_compute[257802]: 2025-10-02 12:47:27.702 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000a9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:47:27 compute-0 nova_compute[257802]: 2025-10-02 12:47:27.707 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:47:27 compute-0 nova_compute[257802]: 2025-10-02 12:47:27.708 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:47:27 compute-0 nova_compute[257802]: 2025-10-02 12:47:27.888 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:47:27 compute-0 nova_compute[257802]: 2025-10-02 12:47:27.890 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3924MB free_disk=20.768226623535156GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:47:27 compute-0 nova_compute[257802]: 2025-10-02 12:47:27.890 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:27 compute-0 nova_compute[257802]: 2025-10-02 12:47:27.890 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2623: 305 pgs: 305 active+clean; 546 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.5 MiB/s wr, 206 op/s
Oct 02 12:47:28 compute-0 nova_compute[257802]: 2025-10-02 12:47:28.080 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance da95c339-6bd5-495a-bd12-d1e71a8017b6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:47:28 compute-0 nova_compute[257802]: 2025-10-02 12:47:28.080 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 80786e14-5db1-47fe-94ae-7bd13aea6bb8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:47:28 compute-0 nova_compute[257802]: 2025-10-02 12:47:28.081 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:47:28 compute-0 nova_compute[257802]: 2025-10-02 12:47:28.081 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:47:28 compute-0 nova_compute[257802]: 2025-10-02 12:47:28.164 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:28.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:47:28 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1362693385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:28 compute-0 nova_compute[257802]: 2025-10-02 12:47:28.581 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:28 compute-0 nova_compute[257802]: 2025-10-02 12:47:28.587 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:47:28 compute-0 nova_compute[257802]: 2025-10-02 12:47:28.610 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:47:28 compute-0 nova_compute[257802]: 2025-10-02 12:47:28.634 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:47:28 compute-0 nova_compute[257802]: 2025-10-02 12:47:28.635 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:29.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:29 compute-0 ceph-mon[73607]: pgmap v2623: 305 pgs: 305 active+clean; 546 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.5 MiB/s wr, 206 op/s
Oct 02 12:47:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1362693385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:29 compute-0 nova_compute[257802]: 2025-10-02 12:47:29.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:29 compute-0 nova_compute[257802]: 2025-10-02 12:47:29.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2624: 305 pgs: 305 active+clean; 535 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 1.5 MiB/s wr, 308 op/s
Oct 02 12:47:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:47:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:30.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:47:30 compute-0 nova_compute[257802]: 2025-10-02 12:47:30.556 2 DEBUG oslo_concurrency.lockutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "2340536c-13c5-4863-80fe-b3f9bc5dfe7d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:30 compute-0 nova_compute[257802]: 2025-10-02 12:47:30.556 2 DEBUG oslo_concurrency.lockutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "2340536c-13c5-4863-80fe-b3f9bc5dfe7d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:30 compute-0 nova_compute[257802]: 2025-10-02 12:47:30.741 2 DEBUG nova.compute.manager [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:47:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:31.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:31 compute-0 nova_compute[257802]: 2025-10-02 12:47:31.254 2 DEBUG oslo_concurrency.lockutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:31 compute-0 nova_compute[257802]: 2025-10-02 12:47:31.255 2 DEBUG oslo_concurrency.lockutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:31 compute-0 nova_compute[257802]: 2025-10-02 12:47:31.260 2 DEBUG nova.virt.hardware [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:47:31 compute-0 nova_compute[257802]: 2025-10-02 12:47:31.261 2 INFO nova.compute.claims [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:47:31 compute-0 ceph-mon[73607]: pgmap v2624: 305 pgs: 305 active+clean; 535 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 1.5 MiB/s wr, 308 op/s
Oct 02 12:47:31 compute-0 nova_compute[257802]: 2025-10-02 12:47:31.571 2 DEBUG oslo_concurrency.processutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:47:31 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2803529179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:31 compute-0 nova_compute[257802]: 2025-10-02 12:47:31.989 2 DEBUG oslo_concurrency.processutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:31 compute-0 nova_compute[257802]: 2025-10-02 12:47:31.994 2 DEBUG nova.compute.provider_tree [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:47:32 compute-0 nova_compute[257802]: 2025-10-02 12:47:32.027 2 DEBUG nova.scheduler.client.report [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:47:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2625: 305 pgs: 305 active+clean; 530 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 43 KiB/s wr, 282 op/s
Oct 02 12:47:32 compute-0 nova_compute[257802]: 2025-10-02 12:47:32.087 2 DEBUG oslo_concurrency.lockutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:32 compute-0 nova_compute[257802]: 2025-10-02 12:47:32.088 2 DEBUG nova.compute.manager [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:47:32 compute-0 nova_compute[257802]: 2025-10-02 12:47:32.182 2 DEBUG nova.compute.manager [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:47:32 compute-0 nova_compute[257802]: 2025-10-02 12:47:32.183 2 DEBUG nova.network.neutron [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:47:32 compute-0 nova_compute[257802]: 2025-10-02 12:47:32.230 2 INFO nova.virt.libvirt.driver [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:47:32 compute-0 nova_compute[257802]: 2025-10-02 12:47:32.247 2 DEBUG nova.compute.manager [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:47:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:32.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:32 compute-0 nova_compute[257802]: 2025-10-02 12:47:32.449 2 DEBUG nova.compute.manager [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:47:32 compute-0 nova_compute[257802]: 2025-10-02 12:47:32.450 2 DEBUG nova.virt.libvirt.driver [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:47:32 compute-0 nova_compute[257802]: 2025-10-02 12:47:32.451 2 INFO nova.virt.libvirt.driver [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Creating image(s)
Oct 02 12:47:32 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2803529179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:32 compute-0 nova_compute[257802]: 2025-10-02 12:47:32.486 2 DEBUG nova.storage.rbd_utils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 2340536c-13c5-4863-80fe-b3f9bc5dfe7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:32 compute-0 nova_compute[257802]: 2025-10-02 12:47:32.515 2 DEBUG nova.storage.rbd_utils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 2340536c-13c5-4863-80fe-b3f9bc5dfe7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:32 compute-0 nova_compute[257802]: 2025-10-02 12:47:32.543 2 DEBUG nova.storage.rbd_utils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 2340536c-13c5-4863-80fe-b3f9bc5dfe7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:32 compute-0 nova_compute[257802]: 2025-10-02 12:47:32.545 2 DEBUG oslo_concurrency.processutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:32 compute-0 nova_compute[257802]: 2025-10-02 12:47:32.574 2 DEBUG nova.policy [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '16730f38111542e58a05fb4deb2b3914', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5ade962c517a483dbfe4bb13386f0006', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:47:32 compute-0 nova_compute[257802]: 2025-10-02 12:47:32.613 2 DEBUG oslo_concurrency.processutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:32 compute-0 nova_compute[257802]: 2025-10-02 12:47:32.614 2 DEBUG oslo_concurrency.lockutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:32 compute-0 nova_compute[257802]: 2025-10-02 12:47:32.615 2 DEBUG oslo_concurrency.lockutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:32 compute-0 nova_compute[257802]: 2025-10-02 12:47:32.615 2 DEBUG oslo_concurrency.lockutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:32 compute-0 nova_compute[257802]: 2025-10-02 12:47:32.642 2 DEBUG nova.storage.rbd_utils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 2340536c-13c5-4863-80fe-b3f9bc5dfe7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:32 compute-0 nova_compute[257802]: 2025-10-02 12:47:32.645 2 DEBUG oslo_concurrency.processutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 2340536c-13c5-4863-80fe-b3f9bc5dfe7d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:33.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:33 compute-0 nova_compute[257802]: 2025-10-02 12:47:33.422 2 DEBUG oslo_concurrency.processutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 2340536c-13c5-4863-80fe-b3f9bc5dfe7d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.777s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:33 compute-0 ceph-mon[73607]: pgmap v2625: 305 pgs: 305 active+clean; 530 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 43 KiB/s wr, 282 op/s
Oct 02 12:47:33 compute-0 nova_compute[257802]: 2025-10-02 12:47:33.491 2 DEBUG nova.storage.rbd_utils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] resizing rbd image 2340536c-13c5-4863-80fe-b3f9bc5dfe7d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:47:33 compute-0 nova_compute[257802]: 2025-10-02 12:47:33.611 2 DEBUG nova.objects.instance [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lazy-loading 'migration_context' on Instance uuid 2340536c-13c5-4863-80fe-b3f9bc5dfe7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:47:33 compute-0 nova_compute[257802]: 2025-10-02 12:47:33.656 2 DEBUG nova.virt.libvirt.driver [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:47:33 compute-0 nova_compute[257802]: 2025-10-02 12:47:33.657 2 DEBUG nova.virt.libvirt.driver [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Ensure instance console log exists: /var/lib/nova/instances/2340536c-13c5-4863-80fe-b3f9bc5dfe7d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:47:33 compute-0 nova_compute[257802]: 2025-10-02 12:47:33.657 2 DEBUG oslo_concurrency.lockutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:33 compute-0 nova_compute[257802]: 2025-10-02 12:47:33.657 2 DEBUG oslo_concurrency.lockutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:33 compute-0 nova_compute[257802]: 2025-10-02 12:47:33.658 2 DEBUG oslo_concurrency.lockutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:33 compute-0 nova_compute[257802]: 2025-10-02 12:47:33.951 2 DEBUG nova.network.neutron [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Successfully created port: aa08bc10-64b4-4b2b-88d3-2f1a994c799c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:47:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2626: 305 pgs: 305 active+clean; 481 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 1.5 MiB/s wr, 319 op/s
Oct 02 12:47:34 compute-0 sudo[362755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:47:34 compute-0 sudo[362755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:34 compute-0 sudo[362755]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:34 compute-0 sudo[362780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:47:34 compute-0 sudo[362780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:34 compute-0 sudo[362780]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:34.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:34 compute-0 nova_compute[257802]: 2025-10-02 12:47:34.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:34 compute-0 nova_compute[257802]: 2025-10-02 12:47:34.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:35.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:35 compute-0 ceph-mon[73607]: pgmap v2626: 305 pgs: 305 active+clean; 481 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 1.5 MiB/s wr, 319 op/s
Oct 02 12:47:35 compute-0 nova_compute[257802]: 2025-10-02 12:47:35.597 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:47:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2627: 305 pgs: 305 active+clean; 473 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 2.8 MiB/s wr, 266 op/s
Oct 02 12:47:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:36.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:36 compute-0 nova_compute[257802]: 2025-10-02 12:47:36.703 2 DEBUG nova.network.neutron [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Successfully updated port: aa08bc10-64b4-4b2b-88d3-2f1a994c799c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:47:36 compute-0 nova_compute[257802]: 2025-10-02 12:47:36.728 2 DEBUG oslo_concurrency.lockutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "refresh_cache-2340536c-13c5-4863-80fe-b3f9bc5dfe7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:47:36 compute-0 nova_compute[257802]: 2025-10-02 12:47:36.729 2 DEBUG oslo_concurrency.lockutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquired lock "refresh_cache-2340536c-13c5-4863-80fe-b3f9bc5dfe7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:47:36 compute-0 nova_compute[257802]: 2025-10-02 12:47:36.729 2 DEBUG nova.network.neutron [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:47:36 compute-0 nova_compute[257802]: 2025-10-02 12:47:36.982 2 DEBUG nova.compute.manager [req-30a3fa6b-12ab-49f5-9266-5cbecbbafdd3 req-cff1597c-9fdf-46f1-b557-6d85a4ef7474 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Received event network-changed-aa08bc10-64b4-4b2b-88d3-2f1a994c799c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:47:36 compute-0 nova_compute[257802]: 2025-10-02 12:47:36.983 2 DEBUG nova.compute.manager [req-30a3fa6b-12ab-49f5-9266-5cbecbbafdd3 req-cff1597c-9fdf-46f1-b557-6d85a4ef7474 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Refreshing instance network info cache due to event network-changed-aa08bc10-64b4-4b2b-88d3-2f1a994c799c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:47:36 compute-0 nova_compute[257802]: 2025-10-02 12:47:36.983 2 DEBUG oslo_concurrency.lockutils [req-30a3fa6b-12ab-49f5-9266-5cbecbbafdd3 req-cff1597c-9fdf-46f1-b557-6d85a4ef7474 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-2340536c-13c5-4863-80fe-b3f9bc5dfe7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:47:37 compute-0 nova_compute[257802]: 2025-10-02 12:47:37.094 2 DEBUG nova.network.neutron [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:47:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:37.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:37 compute-0 ceph-mon[73607]: pgmap v2627: 305 pgs: 305 active+clean; 473 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 2.8 MiB/s wr, 266 op/s
Oct 02 12:47:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4200530497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2628: 305 pgs: 305 active+clean; 495 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.1 MiB/s wr, 265 op/s
Oct 02 12:47:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:47:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:38.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:47:38 compute-0 ceph-mon[73607]: pgmap v2628: 305 pgs: 305 active+clean; 495 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.1 MiB/s wr, 265 op/s
Oct 02 12:47:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:39 compute-0 nova_compute[257802]: 2025-10-02 12:47:39.156 2 DEBUG nova.network.neutron [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Updating instance_info_cache with network_info: [{"id": "aa08bc10-64b4-4b2b-88d3-2f1a994c799c", "address": "fa:16:3e:90:d0:c0", "network": {"id": "a0b40647-5bdc-42ab-8337-b3fcdc66ecfc", "bridge": "br-int", "label": "tempest-network-smoke--749866782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa08bc10-64", "ovs_interfaceid": "aa08bc10-64b4-4b2b-88d3-2f1a994c799c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:47:39 compute-0 nova_compute[257802]: 2025-10-02 12:47:39.181 2 DEBUG oslo_concurrency.lockutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Releasing lock "refresh_cache-2340536c-13c5-4863-80fe-b3f9bc5dfe7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:47:39 compute-0 nova_compute[257802]: 2025-10-02 12:47:39.181 2 DEBUG nova.compute.manager [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Instance network_info: |[{"id": "aa08bc10-64b4-4b2b-88d3-2f1a994c799c", "address": "fa:16:3e:90:d0:c0", "network": {"id": "a0b40647-5bdc-42ab-8337-b3fcdc66ecfc", "bridge": "br-int", "label": "tempest-network-smoke--749866782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa08bc10-64", "ovs_interfaceid": "aa08bc10-64b4-4b2b-88d3-2f1a994c799c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:47:39 compute-0 nova_compute[257802]: 2025-10-02 12:47:39.182 2 DEBUG oslo_concurrency.lockutils [req-30a3fa6b-12ab-49f5-9266-5cbecbbafdd3 req-cff1597c-9fdf-46f1-b557-6d85a4ef7474 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-2340536c-13c5-4863-80fe-b3f9bc5dfe7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:47:39 compute-0 nova_compute[257802]: 2025-10-02 12:47:39.182 2 DEBUG nova.network.neutron [req-30a3fa6b-12ab-49f5-9266-5cbecbbafdd3 req-cff1597c-9fdf-46f1-b557-6d85a4ef7474 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Refreshing network info cache for port aa08bc10-64b4-4b2b-88d3-2f1a994c799c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:47:39 compute-0 nova_compute[257802]: 2025-10-02 12:47:39.184 2 DEBUG nova.virt.libvirt.driver [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Start _get_guest_xml network_info=[{"id": "aa08bc10-64b4-4b2b-88d3-2f1a994c799c", "address": "fa:16:3e:90:d0:c0", "network": {"id": "a0b40647-5bdc-42ab-8337-b3fcdc66ecfc", "bridge": "br-int", "label": "tempest-network-smoke--749866782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa08bc10-64", "ovs_interfaceid": "aa08bc10-64b4-4b2b-88d3-2f1a994c799c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:47:39 compute-0 nova_compute[257802]: 2025-10-02 12:47:39.188 2 WARNING nova.virt.libvirt.driver [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:47:39 compute-0 nova_compute[257802]: 2025-10-02 12:47:39.195 2 DEBUG nova.virt.libvirt.host [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:47:39 compute-0 nova_compute[257802]: 2025-10-02 12:47:39.195 2 DEBUG nova.virt.libvirt.host [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:47:39 compute-0 nova_compute[257802]: 2025-10-02 12:47:39.200 2 DEBUG nova.virt.libvirt.host [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:47:39 compute-0 nova_compute[257802]: 2025-10-02 12:47:39.200 2 DEBUG nova.virt.libvirt.host [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:47:39 compute-0 nova_compute[257802]: 2025-10-02 12:47:39.201 2 DEBUG nova.virt.libvirt.driver [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:47:39 compute-0 nova_compute[257802]: 2025-10-02 12:47:39.201 2 DEBUG nova.virt.hardware [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:47:39 compute-0 nova_compute[257802]: 2025-10-02 12:47:39.202 2 DEBUG nova.virt.hardware [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:47:39 compute-0 nova_compute[257802]: 2025-10-02 12:47:39.202 2 DEBUG nova.virt.hardware [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:47:39 compute-0 nova_compute[257802]: 2025-10-02 12:47:39.202 2 DEBUG nova.virt.hardware [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:47:39 compute-0 nova_compute[257802]: 2025-10-02 12:47:39.202 2 DEBUG nova.virt.hardware [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:47:39 compute-0 nova_compute[257802]: 2025-10-02 12:47:39.203 2 DEBUG nova.virt.hardware [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:47:39 compute-0 nova_compute[257802]: 2025-10-02 12:47:39.203 2 DEBUG nova.virt.hardware [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:47:39 compute-0 nova_compute[257802]: 2025-10-02 12:47:39.203 2 DEBUG nova.virt.hardware [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:47:39 compute-0 nova_compute[257802]: 2025-10-02 12:47:39.203 2 DEBUG nova.virt.hardware [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:47:39 compute-0 nova_compute[257802]: 2025-10-02 12:47:39.204 2 DEBUG nova.virt.hardware [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:47:39 compute-0 nova_compute[257802]: 2025-10-02 12:47:39.204 2 DEBUG nova.virt.hardware [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:47:39 compute-0 nova_compute[257802]: 2025-10-02 12:47:39.206 2 DEBUG oslo_concurrency.processutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:39.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:39 compute-0 ovn_controller[148183]: 2025-10-02T12:47:39Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e5:da:8b 10.100.0.21
Oct 02 12:47:39 compute-0 ovn_controller[148183]: 2025-10-02T12:47:39Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e5:da:8b 10.100.0.21
Oct 02 12:47:39 compute-0 nova_compute[257802]: 2025-10-02 12:47:39.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:47:39 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/220388728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:39 compute-0 nova_compute[257802]: 2025-10-02 12:47:39.645 2 DEBUG oslo_concurrency.processutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:39 compute-0 nova_compute[257802]: 2025-10-02 12:47:39.669 2 DEBUG nova.storage.rbd_utils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 2340536c-13c5-4863-80fe-b3f9bc5dfe7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:39 compute-0 nova_compute[257802]: 2025-10-02 12:47:39.674 2 DEBUG oslo_concurrency.processutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:39 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/220388728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:39 compute-0 nova_compute[257802]: 2025-10-02 12:47:39.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2629: 305 pgs: 305 active+clean; 507 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 5.9 MiB/s wr, 305 op/s
Oct 02 12:47:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:47:40 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1719465363' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.144 2 DEBUG oslo_concurrency.processutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.147 2 DEBUG nova.virt.libvirt.vif [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:47:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-access_point-1583023021',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-access_point-1583023021',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1031871880-ac',id=170,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJKVdwh9QlTYgLCl82UngDe09ls/tGoKAoumO3RPyeFokkc9UfX0iTNo9e+/yxLXmyBLnEjwGIuPQii5VKTmJn2JEq7Lrn8EeslRSQRtYOdFDT4FmHE55JNasW1Hhg+tIg==',key_name='tempest-TestSecurityGroupsBasicOps-1055572894',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5ade962c517a483dbfe4bb13386f0006',ramdisk_id='',reservation_id='r-wxciva9w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1031871880',owner_user_name='tempest-TestSecurityGroupsBasicOps-1031871880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:47:32Z,user_data=None,user_id='16730f38111542e58a05fb4deb2b3914',uuid=2340536c-13c5-4863-80fe-b3f9bc5dfe7d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa08bc10-64b4-4b2b-88d3-2f1a994c799c", "address": "fa:16:3e:90:d0:c0", "network": {"id": "a0b40647-5bdc-42ab-8337-b3fcdc66ecfc", "bridge": "br-int", "label": "tempest-network-smoke--749866782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa08bc10-64", "ovs_interfaceid": "aa08bc10-64b4-4b2b-88d3-2f1a994c799c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.147 2 DEBUG nova.network.os_vif_util [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converting VIF {"id": "aa08bc10-64b4-4b2b-88d3-2f1a994c799c", "address": "fa:16:3e:90:d0:c0", "network": {"id": "a0b40647-5bdc-42ab-8337-b3fcdc66ecfc", "bridge": "br-int", "label": "tempest-network-smoke--749866782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa08bc10-64", "ovs_interfaceid": "aa08bc10-64b4-4b2b-88d3-2f1a994c799c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.148 2 DEBUG nova.network.os_vif_util [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:d0:c0,bridge_name='br-int',has_traffic_filtering=True,id=aa08bc10-64b4-4b2b-88d3-2f1a994c799c,network=Network(a0b40647-5bdc-42ab-8337-b3fcdc66ecfc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa08bc10-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.149 2 DEBUG nova.objects.instance [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2340536c-13c5-4863-80fe-b3f9bc5dfe7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.183 2 DEBUG nova.virt.libvirt.driver [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:47:40 compute-0 nova_compute[257802]:   <uuid>2340536c-13c5-4863-80fe-b3f9bc5dfe7d</uuid>
Oct 02 12:47:40 compute-0 nova_compute[257802]:   <name>instance-000000aa</name>
Oct 02 12:47:40 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:47:40 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:47:40 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:47:40 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-access_point-1583023021</nova:name>
Oct 02 12:47:40 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:47:39</nova:creationTime>
Oct 02 12:47:40 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:47:40 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:47:40 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:47:40 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:47:40 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:47:40 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:47:40 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:47:40 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:47:40 compute-0 nova_compute[257802]:         <nova:user uuid="16730f38111542e58a05fb4deb2b3914">tempest-TestSecurityGroupsBasicOps-1031871880-project-member</nova:user>
Oct 02 12:47:40 compute-0 nova_compute[257802]:         <nova:project uuid="5ade962c517a483dbfe4bb13386f0006">tempest-TestSecurityGroupsBasicOps-1031871880</nova:project>
Oct 02 12:47:40 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:47:40 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:47:40 compute-0 nova_compute[257802]:         <nova:port uuid="aa08bc10-64b4-4b2b-88d3-2f1a994c799c">
Oct 02 12:47:40 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:47:40 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:47:40 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:47:40 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <system>
Oct 02 12:47:40 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:47:40 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:47:40 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:47:40 compute-0 nova_compute[257802]:       <entry name="serial">2340536c-13c5-4863-80fe-b3f9bc5dfe7d</entry>
Oct 02 12:47:40 compute-0 nova_compute[257802]:       <entry name="uuid">2340536c-13c5-4863-80fe-b3f9bc5dfe7d</entry>
Oct 02 12:47:40 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     </system>
Oct 02 12:47:40 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:47:40 compute-0 nova_compute[257802]:   <os>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:   </os>
Oct 02 12:47:40 compute-0 nova_compute[257802]:   <features>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:   </features>
Oct 02 12:47:40 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:47:40 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:47:40 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:47:40 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/2340536c-13c5-4863-80fe-b3f9bc5dfe7d_disk">
Oct 02 12:47:40 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:       </source>
Oct 02 12:47:40 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:47:40 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:47:40 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:47:40 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/2340536c-13c5-4863-80fe-b3f9bc5dfe7d_disk.config">
Oct 02 12:47:40 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:       </source>
Oct 02 12:47:40 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:47:40 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:47:40 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:47:40 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:90:d0:c0"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:       <target dev="tapaa08bc10-64"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:47:40 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/2340536c-13c5-4863-80fe-b3f9bc5dfe7d/console.log" append="off"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <video>
Oct 02 12:47:40 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     </video>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:47:40 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:47:40 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:47:40 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:47:40 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:47:40 compute-0 nova_compute[257802]: </domain>
Oct 02 12:47:40 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.184 2 DEBUG nova.compute.manager [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Preparing to wait for external event network-vif-plugged-aa08bc10-64b4-4b2b-88d3-2f1a994c799c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.184 2 DEBUG oslo_concurrency.lockutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "2340536c-13c5-4863-80fe-b3f9bc5dfe7d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.185 2 DEBUG oslo_concurrency.lockutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "2340536c-13c5-4863-80fe-b3f9bc5dfe7d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.185 2 DEBUG oslo_concurrency.lockutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "2340536c-13c5-4863-80fe-b3f9bc5dfe7d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
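The three `lockutils` lines above show nova registering a waiter for `network-vif-plugged-…` under the per-instance `…-events` lock *before* plugging the VIF, so a fast Neutron callback cannot be missed. A simplified sketch of that prepare-then-wait pattern (class and method names are ours, not nova's):

```python
import threading

class EventWaiter:
    """Simplified prepare-then-wait: register the event under a lock
    before triggering the action that will eventually fire it."""
    def __init__(self):
        self._lock = threading.Lock()   # plays the role of the "<uuid>-events" lock
        self._events = {}

    def prepare(self, name):
        with self._lock:
            return self._events.setdefault(name, threading.Event())

    def deliver(self, name):
        with self._lock:
            ev = self._events.get(name)
        if ev is not None:
            ev.set()

waiter = EventWaiter()
ev = waiter.prepare("network-vif-plugged-aa08bc10-64b4-4b2b-88d3-2f1a994c799c")
# ... plug the VIF here; Neutron later reports the event:
waiter.deliver("network-vif-plugged-aa08bc10-64b4-4b2b-88d3-2f1a994c799c")
assert ev.wait(timeout=1)
```

In nova the subsequent `wait` is what the "Preparing to wait for external event" line refers to; if the event never arrives, spawning fails with a vif-plug timeout.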
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.185 2 DEBUG nova.virt.libvirt.vif [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:47:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-access_point-1583023021',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-access_point-1583023021',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1031871880-ac',id=170,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJKVdwh9QlTYgLCl82UngDe09ls/tGoKAoumO3RPyeFokkc9UfX0iTNo9e+/yxLXmyBLnEjwGIuPQii5VKTmJn2JEq7Lrn8EeslRSQRtYOdFDT4FmHE55JNasW1Hhg+tIg==',key_name='tempest-TestSecurityGroupsBasicOps-1055572894',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5ade962c517a483dbfe4bb13386f0006',ramdisk_id='',reservation_id='r-wxciva9w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1031871880',owner_user_name='tempest-TestSecurityGroupsBasicOps-1031871880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:47:32Z,user_data=None,user_id='16730f38111542e58a05fb4deb2b3914',uuid=2340536c-13c5-4863-80fe-b3f9bc5dfe7d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa08bc10-64b4-4b2b-88d3-2f1a994c799c", "address": "fa:16:3e:90:d0:c0", "network": {"id": "a0b40647-5bdc-42ab-8337-b3fcdc66ecfc", "bridge": "br-int", "label": "tempest-network-smoke--749866782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa08bc10-64", "ovs_interfaceid": "aa08bc10-64b4-4b2b-88d3-2f1a994c799c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.186 2 DEBUG nova.network.os_vif_util [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converting VIF {"id": "aa08bc10-64b4-4b2b-88d3-2f1a994c799c", "address": "fa:16:3e:90:d0:c0", "network": {"id": "a0b40647-5bdc-42ab-8337-b3fcdc66ecfc", "bridge": "br-int", "label": "tempest-network-smoke--749866782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa08bc10-64", "ovs_interfaceid": "aa08bc10-64b4-4b2b-88d3-2f1a994c799c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.186 2 DEBUG nova.network.os_vif_util [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:d0:c0,bridge_name='br-int',has_traffic_filtering=True,id=aa08bc10-64b4-4b2b-88d3-2f1a994c799c,network=Network(a0b40647-5bdc-42ab-8337-b3fcdc66ecfc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa08bc10-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.186 2 DEBUG os_vif [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:d0:c0,bridge_name='br-int',has_traffic_filtering=True,id=aa08bc10-64b4-4b2b-88d3-2f1a994c799c,network=Network(a0b40647-5bdc-42ab-8337-b3fcdc66ecfc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa08bc10-64') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.187 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.187 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.190 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa08bc10-64, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.190 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaa08bc10-64, col_values=(('external_ids', {'iface-id': 'aa08bc10-64b4-4b2b-88d3-2f1a994c799c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:90:d0:c0', 'vm-uuid': '2340536c-13c5-4863-80fe-b3f9bc5dfe7d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
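The `AddPortCommand`/`DbSetCommand` pair above attaches the tap device to `br-int` and stamps the OVS Interface row with the `external_ids` that OVN uses to bind the port: ovn-controller matches `iface-id` against the Neutron port UUID. The mapping can be sketched as a pure function (helper name is ours; values are taken from the log line above):

```python
def ovs_port_external_ids(iface_id, mac, vm_uuid):
    """external_ids written onto the OVS Interface row by os-vif.
    ovn-controller matches iface-id against a logical switch port's
    name to claim the binding for this chassis."""
    return {
        "iface-id": iface_id,
        "iface-status": "active",
        "attached-mac": mac,
        "vm-uuid": vm_uuid,
    }

# Roughly equivalent CLI (not what os-vif actually runs; it speaks the
# OVSDB IDL directly, as the ovsdbapp transaction log shows):
#   ovs-vsctl --may-exist add-port br-int tapaa08bc10-64 \
#     -- set Interface tapaa08bc10-64 \
#        external_ids:iface-id=aa08bc10-64b4-4b2b-88d3-2f1a994c799c
ids = ovs_port_external_ids(
    "aa08bc10-64b4-4b2b-88d3-2f1a994c799c",
    "fa:16:3e:90:d0:c0",
    "2340536c-13c5-4863-80fe-b3f9bc5dfe7d",
)
```

Once OVN claims the binding, Neutron sends the `network-vif-plugged` event that nova registered for a few lines earlier.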
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:40 compute-0 NetworkManager[44987]: <info>  [1759409260.1930] manager: (tapaa08bc10-64): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/334)
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.200 2 INFO os_vif [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:d0:c0,bridge_name='br-int',has_traffic_filtering=True,id=aa08bc10-64b4-4b2b-88d3-2f1a994c799c,network=Network(a0b40647-5bdc-42ab-8337-b3fcdc66ecfc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa08bc10-64')
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.269 2 DEBUG nova.virt.libvirt.driver [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.270 2 DEBUG nova.virt.libvirt.driver [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.270 2 DEBUG nova.virt.libvirt.driver [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] No VIF found with MAC fa:16:3e:90:d0:c0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.270 2 INFO nova.virt.libvirt.driver [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Using config drive
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.300 2 DEBUG nova.storage.rbd_utils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 2340536c-13c5-4863-80fe-b3f9bc5dfe7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:40.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.861 2 INFO nova.virt.libvirt.driver [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Creating config drive at /var/lib/nova/instances/2340536c-13c5-4863-80fe-b3f9bc5dfe7d/disk.config
Oct 02 12:47:40 compute-0 nova_compute[257802]: 2025-10-02 12:47:40.868 2 DEBUG oslo_concurrency.processutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2340536c-13c5-4863-80fe-b3f9bc5dfe7d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyrlffhg1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:40 compute-0 podman[362892]: 2025-10-02 12:47:40.953852746 +0000 UTC m=+0.076032978 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:47:40 compute-0 podman[362893]: 2025-10-02 12:47:40.95643853 +0000 UTC m=+0.070058182 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:47:40 compute-0 podman[362891]: 2025-10-02 12:47:40.971655844 +0000 UTC m=+0.095637001 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 02 12:47:41 compute-0 ceph-mon[73607]: pgmap v2629: 305 pgs: 305 active+clean; 507 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 5.9 MiB/s wr, 305 op/s
Oct 02 12:47:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1719465363' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:41 compute-0 nova_compute[257802]: 2025-10-02 12:47:41.014 2 DEBUG oslo_concurrency.processutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2340536c-13c5-4863-80fe-b3f9bc5dfe7d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyrlffhg1" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:41 compute-0 nova_compute[257802]: 2025-10-02 12:47:41.039 2 DEBUG nova.storage.rbd_utils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 2340536c-13c5-4863-80fe-b3f9bc5dfe7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:41 compute-0 nova_compute[257802]: 2025-10-02 12:47:41.044 2 DEBUG oslo_concurrency.processutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2340536c-13c5-4863-80fe-b3f9bc5dfe7d/disk.config 2340536c-13c5-4863-80fe-b3f9bc5dfe7d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:47:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:41.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:47:41 compute-0 nova_compute[257802]: 2025-10-02 12:47:41.668 2 DEBUG oslo_concurrency.processutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2340536c-13c5-4863-80fe-b3f9bc5dfe7d/disk.config 2340536c-13c5-4863-80fe-b3f9bc5dfe7d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:41 compute-0 nova_compute[257802]: 2025-10-02 12:47:41.670 2 INFO nova.virt.libvirt.driver [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Deleting local config drive /var/lib/nova/instances/2340536c-13c5-4863-80fe-b3f9bc5dfe7d/disk.config because it was imported into RBD.
Oct 02 12:47:41 compute-0 kernel: tapaa08bc10-64: entered promiscuous mode
Oct 02 12:47:41 compute-0 ovn_controller[148183]: 2025-10-02T12:47:41Z|00733|binding|INFO|Claiming lport aa08bc10-64b4-4b2b-88d3-2f1a994c799c for this chassis.
Oct 02 12:47:41 compute-0 ovn_controller[148183]: 2025-10-02T12:47:41Z|00734|binding|INFO|aa08bc10-64b4-4b2b-88d3-2f1a994c799c: Claiming fa:16:3e:90:d0:c0 10.100.0.6
Oct 02 12:47:41 compute-0 NetworkManager[44987]: <info>  [1759409261.7281] manager: (tapaa08bc10-64): new Tun device (/org/freedesktop/NetworkManager/Devices/335)
Oct 02 12:47:41 compute-0 nova_compute[257802]: 2025-10-02 12:47:41.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:41 compute-0 nova_compute[257802]: 2025-10-02 12:47:41.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:41 compute-0 ovn_controller[148183]: 2025-10-02T12:47:41Z|00735|binding|INFO|Setting lport aa08bc10-64b4-4b2b-88d3-2f1a994c799c ovn-installed in OVS
Oct 02 12:47:41 compute-0 nova_compute[257802]: 2025-10-02 12:47:41.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:41 compute-0 systemd-udevd[363000]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:47:41 compute-0 systemd-machined[211836]: New machine qemu-83-instance-000000aa.
Oct 02 12:47:41 compute-0 ovn_controller[148183]: 2025-10-02T12:47:41Z|00736|binding|INFO|Setting lport aa08bc10-64b4-4b2b-88d3-2f1a994c799c up in Southbound
Oct 02 12:47:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:41.780 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:d0:c0 10.100.0.6'], port_security=['fa:16:3e:90:d0:c0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2340536c-13c5-4863-80fe-b3f9bc5dfe7d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5ade962c517a483dbfe4bb13386f0006', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ccdcfae6-20a7-43b2-a33d-9f1e1bd6bb8c d8ca8f1f-f5ac-4bad-9c2b-c266c19224ca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81b23ccf-1bf4-44ff-8965-dd1a37c9d290, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=aa08bc10-64b4-4b2b-88d3-2f1a994c799c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:47:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:41.781 158261 INFO neutron.agent.ovn.metadata.agent [-] Port aa08bc10-64b4-4b2b-88d3-2f1a994c799c in datapath a0b40647-5bdc-42ab-8337-b3fcdc66ecfc bound to our chassis
Oct 02 12:47:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:41.783 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a0b40647-5bdc-42ab-8337-b3fcdc66ecfc
Oct 02 12:47:41 compute-0 systemd[1]: Started Virtual Machine qemu-83-instance-000000aa.
Oct 02 12:47:41 compute-0 NetworkManager[44987]: <info>  [1759409261.7959] device (tapaa08bc10-64): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:47:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:41.795 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5824fa0b-4714-4819-bf55-f63a9bacdd64]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:41 compute-0 NetworkManager[44987]: <info>  [1759409261.7977] device (tapaa08bc10-64): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:47:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:41.797 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa0b40647-51 in ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:47:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:41.800 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa0b40647-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:47:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:41.800 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1ce24288-023f-41d8-bf55-b4fafd850fc7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:41.801 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f63ec755-0863-4989-9a18-21401edfbbbd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:41.816 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[aa1a9ec6-2930-4921-a9cb-6dd8e627bc1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:41.844 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7f8ec3b3-e97c-4329-9e37-2724e7b56160]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:41.874 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[3132e966-f2ba-4d96-8427-ab72ac499e3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:41.882 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[67c1a90e-7e34-43f3-a620-e807f11c1f91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:41 compute-0 NetworkManager[44987]: <info>  [1759409261.8837] manager: (tapa0b40647-50): new Veth device (/org/freedesktop/NetworkManager/Devices/336)
Oct 02 12:47:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:41.913 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[7f37a220-ab07-403c-805d-4d0bec7b0104]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:41.916 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[22965666-30a5-44bb-b14f-67926ff878ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:41 compute-0 NetworkManager[44987]: <info>  [1759409261.9434] device (tapa0b40647-50): carrier: link connected
Oct 02 12:47:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:41.950 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[cce3c52e-800e-4ea7-b954-ce7ace4e3492]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:41.967 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cf8b23b7-2ab6-4720-b0e0-73d25736f468]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0b40647-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:6a:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 225], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 732956, 'reachable_time': 19677, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363034, 'error': None, 'target': 'ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:41.983 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cd3e5fa7-a8e0-4e1d-8745-b4d693b3d919]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2d:6a85'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 732956, 'tstamp': 732956}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363035, 'error': None, 'target': 'ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:42.001 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7dce82b6-ff2b-49eb-baf2-40fa4fef5feb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0b40647-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:6a:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 225], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 732956, 'reachable_time': 19677, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 363036, 'error': None, 'target': 'ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:42.036 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[35fdbd85-3911-4abe-9188-cf8d6b6a7129]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2630: 305 pgs: 305 active+clean; 455 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 6.0 MiB/s wr, 236 op/s
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:42.098 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0277f42f-bae5-46d7-bf1a-da31554d7e54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:42.100 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0b40647-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:42.100 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:47:42 compute-0 nova_compute[257802]: 2025-10-02 12:47:42.100 2 DEBUG nova.network.neutron [req-30a3fa6b-12ab-49f5-9266-5cbecbbafdd3 req-cff1597c-9fdf-46f1-b557-6d85a4ef7474 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Updated VIF entry in instance network info cache for port aa08bc10-64b4-4b2b-88d3-2f1a994c799c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:42.101 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0b40647-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:42 compute-0 nova_compute[257802]: 2025-10-02 12:47:42.101 2 DEBUG nova.network.neutron [req-30a3fa6b-12ab-49f5-9266-5cbecbbafdd3 req-cff1597c-9fdf-46f1-b557-6d85a4ef7474 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Updating instance_info_cache with network_info: [{"id": "aa08bc10-64b4-4b2b-88d3-2f1a994c799c", "address": "fa:16:3e:90:d0:c0", "network": {"id": "a0b40647-5bdc-42ab-8337-b3fcdc66ecfc", "bridge": "br-int", "label": "tempest-network-smoke--749866782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa08bc10-64", "ovs_interfaceid": "aa08bc10-64b4-4b2b-88d3-2f1a994c799c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:47:42 compute-0 kernel: tapa0b40647-50: entered promiscuous mode
Oct 02 12:47:42 compute-0 NetworkManager[44987]: <info>  [1759409262.1037] manager: (tapa0b40647-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/337)
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:42.107 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa0b40647-50, col_values=(('external_ids', {'iface-id': 'd1c4722b-043f-4662-89a4-55feb7508119'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:42 compute-0 ovn_controller[148183]: 2025-10-02T12:47:42Z|00737|binding|INFO|Releasing lport d1c4722b-043f-4662-89a4-55feb7508119 from this chassis (sb_readonly=0)
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:42.109 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a0b40647-5bdc-42ab-8337-b3fcdc66ecfc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a0b40647-5bdc-42ab-8337-b3fcdc66ecfc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:42.110 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[15e7d319-6de4-4404-b404-c598a6856dc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:42.111 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/a0b40647-5bdc-42ab-8337-b3fcdc66ecfc.pid.haproxy
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID a0b40647-5bdc-42ab-8337-b3fcdc66ecfc
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:47:42 compute-0 nova_compute[257802]: 2025-10-02 12:47:42.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:42.113 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc', 'env', 'PROCESS_TAG=haproxy-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a0b40647-5bdc-42ab-8337-b3fcdc66ecfc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:47:42 compute-0 nova_compute[257802]: 2025-10-02 12:47:42.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:42 compute-0 nova_compute[257802]: 2025-10-02 12:47:42.172 2 DEBUG oslo_concurrency.lockutils [req-30a3fa6b-12ab-49f5-9266-5cbecbbafdd3 req-cff1597c-9fdf-46f1-b557-6d85a4ef7474 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-2340536c-13c5-4863-80fe-b3f9bc5dfe7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:47:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:42.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:47:42
Oct 02 12:47:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:47:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:47:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['.mgr', 'vms', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'images', 'default.rgw.meta', 'default.rgw.control']
Oct 02 12:47:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:47:42 compute-0 podman[363110]: 2025-10-02 12:47:42.51397621 +0000 UTC m=+0.056245892 container create 5f615feadb787f95bfbe7a849adca8ab900b3407dfb7f1011bb7015d21212929 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:47:42 compute-0 systemd[1]: Started libpod-conmon-5f615feadb787f95bfbe7a849adca8ab900b3407dfb7f1011bb7015d21212929.scope.
Oct 02 12:47:42 compute-0 podman[363110]: 2025-10-02 12:47:42.484440995 +0000 UTC m=+0.026710687 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:47:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:47:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6aeaab9af37e146f895636990a34b45809c14328b11930886348b6419c1f1aa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:47:42 compute-0 podman[363110]: 2025-10-02 12:47:42.614030698 +0000 UTC m=+0.156300400 container init 5f615feadb787f95bfbe7a849adca8ab900b3407dfb7f1011bb7015d21212929 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:47:42 compute-0 podman[363110]: 2025-10-02 12:47:42.624166067 +0000 UTC m=+0.166435739 container start 5f615feadb787f95bfbe7a849adca8ab900b3407dfb7f1011bb7015d21212929 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 12:47:42 compute-0 neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc[363125]: [NOTICE]   (363130) : New worker (363132) forked
Oct 02 12:47:42 compute-0 neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc[363125]: [NOTICE]   (363130) : Loading success.
Oct 02 12:47:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:47:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:47:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:47:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:47:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:47:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:47:42 compute-0 nova_compute[257802]: 2025-10-02 12:47:42.809 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409262.8085752, 2340536c-13c5-4863-80fe-b3f9bc5dfe7d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:47:42 compute-0 nova_compute[257802]: 2025-10-02 12:47:42.810 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] VM Started (Lifecycle Event)
Oct 02 12:47:42 compute-0 nova_compute[257802]: 2025-10-02 12:47:42.960 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:47:42 compute-0 nova_compute[257802]: 2025-10-02 12:47:42.964 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409262.808788, 2340536c-13c5-4863-80fe-b3f9bc5dfe7d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:47:42 compute-0 nova_compute[257802]: 2025-10-02 12:47:42.964 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] VM Paused (Lifecycle Event)
Oct 02 12:47:43 compute-0 nova_compute[257802]: 2025-10-02 12:47:43.013 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:47:43 compute-0 nova_compute[257802]: 2025-10-02 12:47:43.016 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:47:43 compute-0 nova_compute[257802]: 2025-10-02 12:47:43.086 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:47:43 compute-0 ceph-mon[73607]: pgmap v2630: 305 pgs: 305 active+clean; 455 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 6.0 MiB/s wr, 236 op/s
Oct 02 12:47:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1205338556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:43.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:47:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:47:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:47:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:47:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:47:43 compute-0 nova_compute[257802]: 2025-10-02 12:47:43.743 2 DEBUG nova.compute.manager [req-52f5de35-625c-4151-9570-f98e45c23b41 req-34a2aac4-f43c-454f-a519-e0f8e5349126 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Received event network-vif-plugged-aa08bc10-64b4-4b2b-88d3-2f1a994c799c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:47:43 compute-0 nova_compute[257802]: 2025-10-02 12:47:43.744 2 DEBUG oslo_concurrency.lockutils [req-52f5de35-625c-4151-9570-f98e45c23b41 req-34a2aac4-f43c-454f-a519-e0f8e5349126 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "2340536c-13c5-4863-80fe-b3f9bc5dfe7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:43 compute-0 nova_compute[257802]: 2025-10-02 12:47:43.745 2 DEBUG oslo_concurrency.lockutils [req-52f5de35-625c-4151-9570-f98e45c23b41 req-34a2aac4-f43c-454f-a519-e0f8e5349126 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2340536c-13c5-4863-80fe-b3f9bc5dfe7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:43 compute-0 nova_compute[257802]: 2025-10-02 12:47:43.745 2 DEBUG oslo_concurrency.lockutils [req-52f5de35-625c-4151-9570-f98e45c23b41 req-34a2aac4-f43c-454f-a519-e0f8e5349126 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2340536c-13c5-4863-80fe-b3f9bc5dfe7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:43 compute-0 nova_compute[257802]: 2025-10-02 12:47:43.745 2 DEBUG nova.compute.manager [req-52f5de35-625c-4151-9570-f98e45c23b41 req-34a2aac4-f43c-454f-a519-e0f8e5349126 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Processing event network-vif-plugged-aa08bc10-64b4-4b2b-88d3-2f1a994c799c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:47:43 compute-0 nova_compute[257802]: 2025-10-02 12:47:43.746 2 DEBUG nova.compute.manager [req-52f5de35-625c-4151-9570-f98e45c23b41 req-34a2aac4-f43c-454f-a519-e0f8e5349126 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Received event network-vif-plugged-aa08bc10-64b4-4b2b-88d3-2f1a994c799c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:47:43 compute-0 nova_compute[257802]: 2025-10-02 12:47:43.747 2 DEBUG oslo_concurrency.lockutils [req-52f5de35-625c-4151-9570-f98e45c23b41 req-34a2aac4-f43c-454f-a519-e0f8e5349126 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "2340536c-13c5-4863-80fe-b3f9bc5dfe7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:43 compute-0 nova_compute[257802]: 2025-10-02 12:47:43.747 2 DEBUG oslo_concurrency.lockutils [req-52f5de35-625c-4151-9570-f98e45c23b41 req-34a2aac4-f43c-454f-a519-e0f8e5349126 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2340536c-13c5-4863-80fe-b3f9bc5dfe7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:43 compute-0 nova_compute[257802]: 2025-10-02 12:47:43.747 2 DEBUG oslo_concurrency.lockutils [req-52f5de35-625c-4151-9570-f98e45c23b41 req-34a2aac4-f43c-454f-a519-e0f8e5349126 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2340536c-13c5-4863-80fe-b3f9bc5dfe7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:43 compute-0 nova_compute[257802]: 2025-10-02 12:47:43.748 2 DEBUG nova.compute.manager [req-52f5de35-625c-4151-9570-f98e45c23b41 req-34a2aac4-f43c-454f-a519-e0f8e5349126 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] No waiting events found dispatching network-vif-plugged-aa08bc10-64b4-4b2b-88d3-2f1a994c799c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:47:43 compute-0 nova_compute[257802]: 2025-10-02 12:47:43.748 2 WARNING nova.compute.manager [req-52f5de35-625c-4151-9570-f98e45c23b41 req-34a2aac4-f43c-454f-a519-e0f8e5349126 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Received unexpected event network-vif-plugged-aa08bc10-64b4-4b2b-88d3-2f1a994c799c for instance with vm_state building and task_state spawning.
Oct 02 12:47:43 compute-0 nova_compute[257802]: 2025-10-02 12:47:43.749 2 DEBUG nova.compute.manager [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:47:43 compute-0 nova_compute[257802]: 2025-10-02 12:47:43.754 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409263.7544172, 2340536c-13c5-4863-80fe-b3f9bc5dfe7d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:47:43 compute-0 nova_compute[257802]: 2025-10-02 12:47:43.755 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] VM Resumed (Lifecycle Event)
Oct 02 12:47:43 compute-0 nova_compute[257802]: 2025-10-02 12:47:43.757 2 DEBUG nova.virt.libvirt.driver [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:47:43 compute-0 nova_compute[257802]: 2025-10-02 12:47:43.760 2 INFO nova.virt.libvirt.driver [-] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Instance spawned successfully.
Oct 02 12:47:43 compute-0 nova_compute[257802]: 2025-10-02 12:47:43.761 2 DEBUG nova.virt.libvirt.driver [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:47:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2631: 305 pgs: 305 active+clean; 411 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 6.0 MiB/s wr, 258 op/s
Oct 02 12:47:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:47:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:47:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:47:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:47:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:47:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:47:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:44.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:47:44 compute-0 nova_compute[257802]: 2025-10-02 12:47:44.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:44 compute-0 nova_compute[257802]: 2025-10-02 12:47:44.434 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:47:44 compute-0 nova_compute[257802]: 2025-10-02 12:47:44.438 2 DEBUG nova.virt.libvirt.driver [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:47:44 compute-0 nova_compute[257802]: 2025-10-02 12:47:44.439 2 DEBUG nova.virt.libvirt.driver [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:47:44 compute-0 nova_compute[257802]: 2025-10-02 12:47:44.439 2 DEBUG nova.virt.libvirt.driver [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:47:44 compute-0 nova_compute[257802]: 2025-10-02 12:47:44.440 2 DEBUG nova.virt.libvirt.driver [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:47:44 compute-0 nova_compute[257802]: 2025-10-02 12:47:44.440 2 DEBUG nova.virt.libvirt.driver [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:47:44 compute-0 nova_compute[257802]: 2025-10-02 12:47:44.440 2 DEBUG nova.virt.libvirt.driver [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:47:44 compute-0 nova_compute[257802]: 2025-10-02 12:47:44.444 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:47:44 compute-0 nova_compute[257802]: 2025-10-02 12:47:44.613 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:47:44 compute-0 nova_compute[257802]: 2025-10-02 12:47:44.860 2 INFO nova.compute.manager [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Took 12.41 seconds to spawn the instance on the hypervisor.
Oct 02 12:47:44 compute-0 nova_compute[257802]: 2025-10-02 12:47:44.861 2 DEBUG nova.compute.manager [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:47:44 compute-0 podman[363142]: 2025-10-02 12:47:44.9605213 +0000 UTC m=+0.090009472 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Oct 02 12:47:45 compute-0 nova_compute[257802]: 2025-10-02 12:47:45.145 2 INFO nova.compute.manager [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Took 13.93 seconds to build instance.
Oct 02 12:47:45 compute-0 ceph-mon[73607]: pgmap v2631: 305 pgs: 305 active+clean; 411 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 6.0 MiB/s wr, 258 op/s
Oct 02 12:47:45 compute-0 nova_compute[257802]: 2025-10-02 12:47:45.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:45.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:45 compute-0 nova_compute[257802]: 2025-10-02 12:47:45.256 2 DEBUG oslo_concurrency.lockutils [None req-0fbfc001-c10d-42b9-872a-b429bbeb6bdb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "2340536c-13c5-4863-80fe-b3f9bc5dfe7d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2632: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 646 KiB/s rd, 4.6 MiB/s wr, 209 op/s
Oct 02 12:47:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:46.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:47 compute-0 ceph-mon[73607]: pgmap v2632: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 646 KiB/s rd, 4.6 MiB/s wr, 209 op/s
Oct 02 12:47:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:47.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2633: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.2 MiB/s wr, 213 op/s
Oct 02 12:47:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:47:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:48.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:47:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:49 compute-0 ceph-mon[73607]: pgmap v2633: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.2 MiB/s wr, 213 op/s
Oct 02 12:47:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:47:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:49.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:47:49 compute-0 nova_compute[257802]: 2025-10-02 12:47:49.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:47:49 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2989209813' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2634: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.0 MiB/s wr, 223 op/s
Oct 02 12:47:50 compute-0 nova_compute[257802]: 2025-10-02 12:47:50.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:50.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2989209813' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:51 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:51.236 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:47:51 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:51.237 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:47:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:47:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:51.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:47:51 compute-0 nova_compute[257802]: 2025-10-02 12:47:51.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:51 compute-0 ceph-mon[73607]: pgmap v2634: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.0 MiB/s wr, 223 op/s
Oct 02 12:47:51 compute-0 nova_compute[257802]: 2025-10-02 12:47:51.721 2 DEBUG oslo_concurrency.lockutils [None req-3922f5d0-3fe5-45c2-88c0-49dec7dd898d fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "80786e14-5db1-47fe-94ae-7bd13aea6bb8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:51 compute-0 nova_compute[257802]: 2025-10-02 12:47:51.721 2 DEBUG oslo_concurrency.lockutils [None req-3922f5d0-3fe5-45c2-88c0-49dec7dd898d fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "80786e14-5db1-47fe-94ae-7bd13aea6bb8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:51 compute-0 nova_compute[257802]: 2025-10-02 12:47:51.722 2 DEBUG oslo_concurrency.lockutils [None req-3922f5d0-3fe5-45c2-88c0-49dec7dd898d fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "80786e14-5db1-47fe-94ae-7bd13aea6bb8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:51 compute-0 nova_compute[257802]: 2025-10-02 12:47:51.722 2 DEBUG oslo_concurrency.lockutils [None req-3922f5d0-3fe5-45c2-88c0-49dec7dd898d fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "80786e14-5db1-47fe-94ae-7bd13aea6bb8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:51 compute-0 nova_compute[257802]: 2025-10-02 12:47:51.722 2 DEBUG oslo_concurrency.lockutils [None req-3922f5d0-3fe5-45c2-88c0-49dec7dd898d fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "80786e14-5db1-47fe-94ae-7bd13aea6bb8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:51 compute-0 nova_compute[257802]: 2025-10-02 12:47:51.724 2 INFO nova.compute.manager [None req-3922f5d0-3fe5-45c2-88c0-49dec7dd898d fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Terminating instance
Oct 02 12:47:51 compute-0 nova_compute[257802]: 2025-10-02 12:47:51.725 2 DEBUG nova.compute.manager [None req-3922f5d0-3fe5-45c2-88c0-49dec7dd898d fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:47:51 compute-0 kernel: tap6858a032-34 (unregistering): left promiscuous mode
Oct 02 12:47:51 compute-0 NetworkManager[44987]: <info>  [1759409271.7877] device (tap6858a032-34): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:47:51 compute-0 ovn_controller[148183]: 2025-10-02T12:47:51Z|00738|binding|INFO|Releasing lport 6858a032-3425-4c0e-b1ab-a9a68d9242bd from this chassis (sb_readonly=0)
Oct 02 12:47:51 compute-0 ovn_controller[148183]: 2025-10-02T12:47:51Z|00739|binding|INFO|Setting lport 6858a032-3425-4c0e-b1ab-a9a68d9242bd down in Southbound
Oct 02 12:47:51 compute-0 ovn_controller[148183]: 2025-10-02T12:47:51Z|00740|binding|INFO|Removing iface tap6858a032-34 ovn-installed in OVS
Oct 02 12:47:51 compute-0 nova_compute[257802]: 2025-10-02 12:47:51.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:51 compute-0 nova_compute[257802]: 2025-10-02 12:47:51.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:51 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:51.824 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:da:8b 10.100.0.21'], port_security=['fa:16:3e:e5:da:8b 10.100.0.21'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.21/28', 'neutron:device_id': '80786e14-5db1-47fe-94ae-7bd13aea6bb8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8658555b-fcfb-47a7-bb49-98c6ca6ccca4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '704f25f4-12e0-4e6e-a623-b55edb54fa20', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9dd08d07-e148-4d01-999e-2ed7de7f386d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=6858a032-3425-4c0e-b1ab-a9a68d9242bd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:47:51 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:51.826 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 6858a032-3425-4c0e-b1ab-a9a68d9242bd in datapath 8658555b-fcfb-47a7-bb49-98c6ca6ccca4 unbound from our chassis
Oct 02 12:47:51 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:51.828 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8658555b-fcfb-47a7-bb49-98c6ca6ccca4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:47:51 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:51.829 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cc58a751-c52f-48a5-918e-677c5d22c2dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:51 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:51.830 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8658555b-fcfb-47a7-bb49-98c6ca6ccca4 namespace which is not needed anymore
Oct 02 12:47:51 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000a9.scope: Deactivated successfully.
Oct 02 12:47:51 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000a9.scope: Consumed 13.733s CPU time.
Oct 02 12:47:51 compute-0 systemd-machined[211836]: Machine qemu-82-instance-000000a9 terminated.
Oct 02 12:47:51 compute-0 nova_compute[257802]: 2025-10-02 12:47:51.959 2 INFO nova.virt.libvirt.driver [-] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Instance destroyed successfully.
Oct 02 12:47:51 compute-0 nova_compute[257802]: 2025-10-02 12:47:51.959 2 DEBUG nova.objects.instance [None req-3922f5d0-3fe5-45c2-88c0-49dec7dd898d fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'resources' on Instance uuid 80786e14-5db1-47fe-94ae-7bd13aea6bb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:47:51 compute-0 neutron-haproxy-ovnmeta-8658555b-fcfb-47a7-bb49-98c6ca6ccca4[362502]: [NOTICE]   (362506) : haproxy version is 2.8.14-c23fe91
Oct 02 12:47:51 compute-0 neutron-haproxy-ovnmeta-8658555b-fcfb-47a7-bb49-98c6ca6ccca4[362502]: [NOTICE]   (362506) : path to executable is /usr/sbin/haproxy
Oct 02 12:47:51 compute-0 neutron-haproxy-ovnmeta-8658555b-fcfb-47a7-bb49-98c6ca6ccca4[362502]: [WARNING]  (362506) : Exiting Master process...
Oct 02 12:47:51 compute-0 neutron-haproxy-ovnmeta-8658555b-fcfb-47a7-bb49-98c6ca6ccca4[362502]: [ALERT]    (362506) : Current worker (362508) exited with code 143 (Terminated)
Oct 02 12:47:51 compute-0 neutron-haproxy-ovnmeta-8658555b-fcfb-47a7-bb49-98c6ca6ccca4[362502]: [WARNING]  (362506) : All workers exited. Exiting... (0)
Oct 02 12:47:51 compute-0 systemd[1]: libpod-4c487337460131cdb45c8efb532468065d65ea12da210567b839ffe82a16a3b3.scope: Deactivated successfully.
Oct 02 12:47:51 compute-0 podman[363195]: 2025-10-02 12:47:51.985524019 +0000 UTC m=+0.049069066 container died 4c487337460131cdb45c8efb532468065d65ea12da210567b839ffe82a16a3b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8658555b-fcfb-47a7-bb49-98c6ca6ccca4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:47:52 compute-0 nova_compute[257802]: 2025-10-02 12:47:52.018 2 DEBUG nova.virt.libvirt.vif [None req-3922f5d0-3fe5-45c2-88c0-49dec7dd898d fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:47:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1729947760',display_name='tempest-TestNetworkBasicOps-server-1729947760',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1729947760',id=169,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPeviUNX5KqhJmHmdLPCiFE2g8mRklzWSMPDVRyMqtSDDOw371mpWai4NIY5lnxSjDRhb9u0GW36rOorq81/kausSuGhJSK9xg+wpkw85YJzgcBlJEruYAR5Py+GrJBQdQ==',key_name='tempest-TestNetworkBasicOps-720730733',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:47:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-bbzvabl0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:47:24Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=80786e14-5db1-47fe-94ae-7bd13aea6bb8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6858a032-3425-4c0e-b1ab-a9a68d9242bd", "address": "fa:16:3e:e5:da:8b", "network": {"id": "8658555b-fcfb-47a7-bb49-98c6ca6ccca4", "bridge": "br-int", "label": "tempest-network-smoke--1403947383", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6858a032-34", "ovs_interfaceid": "6858a032-3425-4c0e-b1ab-a9a68d9242bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:47:52 compute-0 nova_compute[257802]: 2025-10-02 12:47:52.019 2 DEBUG nova.network.os_vif_util [None req-3922f5d0-3fe5-45c2-88c0-49dec7dd898d fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "6858a032-3425-4c0e-b1ab-a9a68d9242bd", "address": "fa:16:3e:e5:da:8b", "network": {"id": "8658555b-fcfb-47a7-bb49-98c6ca6ccca4", "bridge": "br-int", "label": "tempest-network-smoke--1403947383", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6858a032-34", "ovs_interfaceid": "6858a032-3425-4c0e-b1ab-a9a68d9242bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:47:52 compute-0 nova_compute[257802]: 2025-10-02 12:47:52.020 2 DEBUG nova.network.os_vif_util [None req-3922f5d0-3fe5-45c2-88c0-49dec7dd898d fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:da:8b,bridge_name='br-int',has_traffic_filtering=True,id=6858a032-3425-4c0e-b1ab-a9a68d9242bd,network=Network(8658555b-fcfb-47a7-bb49-98c6ca6ccca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6858a032-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:47:52 compute-0 nova_compute[257802]: 2025-10-02 12:47:52.020 2 DEBUG os_vif [None req-3922f5d0-3fe5-45c2-88c0-49dec7dd898d fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:da:8b,bridge_name='br-int',has_traffic_filtering=True,id=6858a032-3425-4c0e-b1ab-a9a68d9242bd,network=Network(8658555b-fcfb-47a7-bb49-98c6ca6ccca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6858a032-34') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:47:52 compute-0 nova_compute[257802]: 2025-10-02 12:47:52.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:52 compute-0 nova_compute[257802]: 2025-10-02 12:47:52.022 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6858a032-34, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:52 compute-0 nova_compute[257802]: 2025-10-02 12:47:52.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:52 compute-0 nova_compute[257802]: 2025-10-02 12:47:52.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:52 compute-0 nova_compute[257802]: 2025-10-02 12:47:52.028 2 INFO os_vif [None req-3922f5d0-3fe5-45c2-88c0-49dec7dd898d fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:da:8b,bridge_name='br-int',has_traffic_filtering=True,id=6858a032-3425-4c0e-b1ab-a9a68d9242bd,network=Network(8658555b-fcfb-47a7-bb49-98c6ca6ccca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6858a032-34')
Oct 02 12:47:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4c487337460131cdb45c8efb532468065d65ea12da210567b839ffe82a16a3b3-userdata-shm.mount: Deactivated successfully.
Oct 02 12:47:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2c758e20a1840f0e4f37843cf694290d66b268acce434cd4301694da150d329-merged.mount: Deactivated successfully.
Oct 02 12:47:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2635: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 152 KiB/s wr, 146 op/s
Oct 02 12:47:52 compute-0 podman[363195]: 2025-10-02 12:47:52.057611929 +0000 UTC m=+0.121156976 container cleanup 4c487337460131cdb45c8efb532468065d65ea12da210567b839ffe82a16a3b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8658555b-fcfb-47a7-bb49-98c6ca6ccca4, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:47:52 compute-0 systemd[1]: libpod-conmon-4c487337460131cdb45c8efb532468065d65ea12da210567b839ffe82a16a3b3.scope: Deactivated successfully.
Oct 02 12:47:52 compute-0 podman[363250]: 2025-10-02 12:47:52.125165799 +0000 UTC m=+0.040850635 container remove 4c487337460131cdb45c8efb532468065d65ea12da210567b839ffe82a16a3b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8658555b-fcfb-47a7-bb49-98c6ca6ccca4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:47:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:52.131 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[46caa819-16e0-47ec-bb66-efabc0bc0126]: (4, ('Thu Oct  2 12:47:51 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8658555b-fcfb-47a7-bb49-98c6ca6ccca4 (4c487337460131cdb45c8efb532468065d65ea12da210567b839ffe82a16a3b3)\n4c487337460131cdb45c8efb532468065d65ea12da210567b839ffe82a16a3b3\nThu Oct  2 12:47:52 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8658555b-fcfb-47a7-bb49-98c6ca6ccca4 (4c487337460131cdb45c8efb532468065d65ea12da210567b839ffe82a16a3b3)\n4c487337460131cdb45c8efb532468065d65ea12da210567b839ffe82a16a3b3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:52.134 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9ba6a736-07d9-4f51-ac19-a24bf4dbdf75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:52.135 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8658555b-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:52 compute-0 nova_compute[257802]: 2025-10-02 12:47:52.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:52 compute-0 kernel: tap8658555b-f0: left promiscuous mode
Oct 02 12:47:52 compute-0 nova_compute[257802]: 2025-10-02 12:47:52.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:52.158 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[fee88576-78c3-427f-a5f7-f69e7de05812]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:52.192 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a164e716-588b-4c02-b95f-0b6ce3fa0551]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:52.193 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c4038dc0-8a04-4158-a034-93cf1a4c821d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:52.211 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3601c41f-a9ac-4b2b-bb58-971190828dad]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731106, 'reachable_time': 24974, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363268, 'error': None, 'target': 'ovnmeta-8658555b-fcfb-47a7-bb49-98c6ca6ccca4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:52 compute-0 systemd[1]: run-netns-ovnmeta\x2d8658555b\x2dfcfb\x2d47a7\x2dbb49\x2d98c6ca6ccca4.mount: Deactivated successfully.
Oct 02 12:47:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:52.216 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8658555b-fcfb-47a7-bb49-98c6ca6ccca4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:47:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:47:52.216 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[16c1dfcc-ebba-4001-a470-3283e434bc92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:47:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:52.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:47:52 compute-0 nova_compute[257802]: 2025-10-02 12:47:52.345 2 DEBUG nova.compute.manager [req-3014122c-71b7-4595-903e-320e39c2b7c9 req-d4bf9189-b5c7-4091-9e45-45ab37edb059 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Received event network-vif-unplugged-6858a032-3425-4c0e-b1ab-a9a68d9242bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:47:52 compute-0 nova_compute[257802]: 2025-10-02 12:47:52.346 2 DEBUG oslo_concurrency.lockutils [req-3014122c-71b7-4595-903e-320e39c2b7c9 req-d4bf9189-b5c7-4091-9e45-45ab37edb059 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "80786e14-5db1-47fe-94ae-7bd13aea6bb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:52 compute-0 nova_compute[257802]: 2025-10-02 12:47:52.347 2 DEBUG oslo_concurrency.lockutils [req-3014122c-71b7-4595-903e-320e39c2b7c9 req-d4bf9189-b5c7-4091-9e45-45ab37edb059 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "80786e14-5db1-47fe-94ae-7bd13aea6bb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:52 compute-0 nova_compute[257802]: 2025-10-02 12:47:52.347 2 DEBUG oslo_concurrency.lockutils [req-3014122c-71b7-4595-903e-320e39c2b7c9 req-d4bf9189-b5c7-4091-9e45-45ab37edb059 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "80786e14-5db1-47fe-94ae-7bd13aea6bb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:52 compute-0 nova_compute[257802]: 2025-10-02 12:47:52.347 2 DEBUG nova.compute.manager [req-3014122c-71b7-4595-903e-320e39c2b7c9 req-d4bf9189-b5c7-4091-9e45-45ab37edb059 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] No waiting events found dispatching network-vif-unplugged-6858a032-3425-4c0e-b1ab-a9a68d9242bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:47:52 compute-0 nova_compute[257802]: 2025-10-02 12:47:52.348 2 DEBUG nova.compute.manager [req-3014122c-71b7-4595-903e-320e39c2b7c9 req-d4bf9189-b5c7-4091-9e45-45ab37edb059 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Received event network-vif-unplugged-6858a032-3425-4c0e-b1ab-a9a68d9242bd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:47:52 compute-0 nova_compute[257802]: 2025-10-02 12:47:52.531 2 INFO nova.virt.libvirt.driver [None req-3922f5d0-3fe5-45c2-88c0-49dec7dd898d fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Deleting instance files /var/lib/nova/instances/80786e14-5db1-47fe-94ae-7bd13aea6bb8_del
Oct 02 12:47:52 compute-0 nova_compute[257802]: 2025-10-02 12:47:52.532 2 INFO nova.virt.libvirt.driver [None req-3922f5d0-3fe5-45c2-88c0-49dec7dd898d fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Deletion of /var/lib/nova/instances/80786e14-5db1-47fe-94ae-7bd13aea6bb8_del complete
Oct 02 12:47:52 compute-0 nova_compute[257802]: 2025-10-02 12:47:52.618 2 INFO nova.compute.manager [None req-3922f5d0-3fe5-45c2-88c0-49dec7dd898d fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Took 0.89 seconds to destroy the instance on the hypervisor.
Oct 02 12:47:52 compute-0 nova_compute[257802]: 2025-10-02 12:47:52.619 2 DEBUG oslo.service.loopingcall [None req-3922f5d0-3fe5-45c2-88c0-49dec7dd898d fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:47:52 compute-0 nova_compute[257802]: 2025-10-02 12:47:52.619 2 DEBUG nova.compute.manager [-] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:47:52 compute-0 nova_compute[257802]: 2025-10-02 12:47:52.620 2 DEBUG nova.network.neutron [-] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:47:53 compute-0 nova_compute[257802]: 2025-10-02 12:47:53.178 2 DEBUG nova.compute.manager [req-d17a4f07-87a5-40e8-98d9-4f2d45888228 req-7a78e7ee-06ac-4fca-aca2-3e0aaf48e7cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Received event network-changed-aa08bc10-64b4-4b2b-88d3-2f1a994c799c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:47:53 compute-0 nova_compute[257802]: 2025-10-02 12:47:53.179 2 DEBUG nova.compute.manager [req-d17a4f07-87a5-40e8-98d9-4f2d45888228 req-7a78e7ee-06ac-4fca-aca2-3e0aaf48e7cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Refreshing instance network info cache due to event network-changed-aa08bc10-64b4-4b2b-88d3-2f1a994c799c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:47:53 compute-0 nova_compute[257802]: 2025-10-02 12:47:53.179 2 DEBUG oslo_concurrency.lockutils [req-d17a4f07-87a5-40e8-98d9-4f2d45888228 req-7a78e7ee-06ac-4fca-aca2-3e0aaf48e7cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-2340536c-13c5-4863-80fe-b3f9bc5dfe7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:47:53 compute-0 nova_compute[257802]: 2025-10-02 12:47:53.179 2 DEBUG oslo_concurrency.lockutils [req-d17a4f07-87a5-40e8-98d9-4f2d45888228 req-7a78e7ee-06ac-4fca-aca2-3e0aaf48e7cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-2340536c-13c5-4863-80fe-b3f9bc5dfe7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:47:53 compute-0 nova_compute[257802]: 2025-10-02 12:47:53.180 2 DEBUG nova.network.neutron [req-d17a4f07-87a5-40e8-98d9-4f2d45888228 req-7a78e7ee-06ac-4fca-aca2-3e0aaf48e7cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Refreshing network info cache for port aa08bc10-64b4-4b2b-88d3-2f1a994c799c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:47:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:53.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:53 compute-0 ceph-mon[73607]: pgmap v2635: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 152 KiB/s wr, 146 op/s
Oct 02 12:47:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:54 compute-0 ovn_controller[148183]: 2025-10-02T12:47:54Z|00741|binding|INFO|Releasing lport 8f987545-a334-4580-ac52-776ea28e9410 from this chassis (sb_readonly=0)
Oct 02 12:47:54 compute-0 ovn_controller[148183]: 2025-10-02T12:47:54Z|00742|binding|INFO|Releasing lport d1c4722b-043f-4662-89a4-55feb7508119 from this chassis (sb_readonly=0)
Oct 02 12:47:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2636: 305 pgs: 305 active+clean; 377 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 111 KiB/s wr, 114 op/s
Oct 02 12:47:54 compute-0 ovn_controller[148183]: 2025-10-02T12:47:54Z|00743|binding|INFO|Releasing lport 8f987545-a334-4580-ac52-776ea28e9410 from this chassis (sb_readonly=0)
Oct 02 12:47:54 compute-0 ovn_controller[148183]: 2025-10-02T12:47:54Z|00744|binding|INFO|Releasing lport d1c4722b-043f-4662-89a4-55feb7508119 from this chassis (sb_readonly=0)
Oct 02 12:47:54 compute-0 nova_compute[257802]: 2025-10-02 12:47:54.200 2 DEBUG nova.network.neutron [-] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:47:54 compute-0 nova_compute[257802]: 2025-10-02 12:47:54.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:54 compute-0 nova_compute[257802]: 2025-10-02 12:47:54.228 2 INFO nova.compute.manager [-] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Took 1.61 seconds to deallocate network for instance.
Oct 02 12:47:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:54.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:54 compute-0 nova_compute[257802]: 2025-10-02 12:47:54.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:54 compute-0 sudo[363271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:47:54 compute-0 sudo[363271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:54 compute-0 sudo[363271]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:54 compute-0 sudo[363296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:47:54 compute-0 sudo[363296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:54 compute-0 sudo[363296]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:47:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:47:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:47:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:47:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006757238158384515 of space, bias 1.0, pg target 2.0271714475153546 quantized to 32 (current 32)
Oct 02 12:47:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:47:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6442640720965941 quantized to 32 (current 32)
Oct 02 12:47:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:47:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:47:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:47:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Oct 02 12:47:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:47:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:47:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:47:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:47:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:47:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027081297692164525 quantized to 32 (current 32)
Oct 02 12:47:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:47:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:47:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:47:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:47:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:47:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:47:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:55.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:55 compute-0 ceph-mon[73607]: pgmap v2636: 305 pgs: 305 active+clean; 377 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 111 KiB/s wr, 114 op/s
Oct 02 12:47:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1821411203' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:47:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1821411203' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:47:55 compute-0 nova_compute[257802]: 2025-10-02 12:47:55.981 2 DEBUG nova.compute.manager [req-1a076c5d-192b-4496-b1f7-6d7167b8acb9 req-99304fc4-dcce-439c-aa25-44221e4df63d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Received event network-vif-plugged-6858a032-3425-4c0e-b1ab-a9a68d9242bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:47:55 compute-0 nova_compute[257802]: 2025-10-02 12:47:55.982 2 DEBUG oslo_concurrency.lockutils [req-1a076c5d-192b-4496-b1f7-6d7167b8acb9 req-99304fc4-dcce-439c-aa25-44221e4df63d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "80786e14-5db1-47fe-94ae-7bd13aea6bb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:55 compute-0 nova_compute[257802]: 2025-10-02 12:47:55.982 2 DEBUG oslo_concurrency.lockutils [req-1a076c5d-192b-4496-b1f7-6d7167b8acb9 req-99304fc4-dcce-439c-aa25-44221e4df63d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "80786e14-5db1-47fe-94ae-7bd13aea6bb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:55 compute-0 nova_compute[257802]: 2025-10-02 12:47:55.982 2 DEBUG oslo_concurrency.lockutils [req-1a076c5d-192b-4496-b1f7-6d7167b8acb9 req-99304fc4-dcce-439c-aa25-44221e4df63d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "80786e14-5db1-47fe-94ae-7bd13aea6bb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:55 compute-0 nova_compute[257802]: 2025-10-02 12:47:55.983 2 DEBUG nova.compute.manager [req-1a076c5d-192b-4496-b1f7-6d7167b8acb9 req-99304fc4-dcce-439c-aa25-44221e4df63d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] No waiting events found dispatching network-vif-plugged-6858a032-3425-4c0e-b1ab-a9a68d9242bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:47:55 compute-0 nova_compute[257802]: 2025-10-02 12:47:55.983 2 WARNING nova.compute.manager [req-1a076c5d-192b-4496-b1f7-6d7167b8acb9 req-99304fc4-dcce-439c-aa25-44221e4df63d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Received unexpected event network-vif-plugged-6858a032-3425-4c0e-b1ab-a9a68d9242bd for instance with vm_state active and task_state deleting.
Oct 02 12:47:55 compute-0 nova_compute[257802]: 2025-10-02 12:47:55.984 2 DEBUG nova.compute.manager [req-eee7e063-2b25-4d7a-b451-fb1a1c64d262 req-56759ba6-f97b-41fd-b4a7-fc9e5da8ac9f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Received event network-vif-deleted-6858a032-3425-4c0e-b1ab-a9a68d9242bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:47:56 compute-0 nova_compute[257802]: 2025-10-02 12:47:56.042 2 DEBUG oslo_concurrency.lockutils [None req-3922f5d0-3fe5-45c2-88c0-49dec7dd898d fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:56 compute-0 nova_compute[257802]: 2025-10-02 12:47:56.042 2 DEBUG oslo_concurrency.lockutils [None req-3922f5d0-3fe5-45c2-88c0-49dec7dd898d fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2637: 305 pgs: 305 active+clean; 353 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 224 KiB/s wr, 98 op/s
Oct 02 12:47:56 compute-0 nova_compute[257802]: 2025-10-02 12:47:56.184 2 DEBUG oslo_concurrency.processutils [None req-3922f5d0-3fe5-45c2-88c0-49dec7dd898d fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:56.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:47:56 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3155926240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:56 compute-0 nova_compute[257802]: 2025-10-02 12:47:56.605 2 DEBUG oslo_concurrency.processutils [None req-3922f5d0-3fe5-45c2-88c0-49dec7dd898d fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:56 compute-0 nova_compute[257802]: 2025-10-02 12:47:56.612 2 DEBUG nova.compute.provider_tree [None req-3922f5d0-3fe5-45c2-88c0-49dec7dd898d fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:47:56 compute-0 nova_compute[257802]: 2025-10-02 12:47:56.650 2 DEBUG nova.scheduler.client.report [None req-3922f5d0-3fe5-45c2-88c0-49dec7dd898d fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:47:56 compute-0 nova_compute[257802]: 2025-10-02 12:47:56.686 2 DEBUG oslo_concurrency.lockutils [None req-3922f5d0-3fe5-45c2-88c0-49dec7dd898d fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:56 compute-0 nova_compute[257802]: 2025-10-02 12:47:56.727 2 INFO nova.scheduler.client.report [None req-3922f5d0-3fe5-45c2-88c0-49dec7dd898d fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Deleted allocations for instance 80786e14-5db1-47fe-94ae-7bd13aea6bb8
Oct 02 12:47:56 compute-0 nova_compute[257802]: 2025-10-02 12:47:56.822 2 DEBUG oslo_concurrency.lockutils [None req-3922f5d0-3fe5-45c2-88c0-49dec7dd898d fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "80786e14-5db1-47fe-94ae-7bd13aea6bb8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:56 compute-0 nova_compute[257802]: 2025-10-02 12:47:56.905 2 DEBUG nova.network.neutron [req-d17a4f07-87a5-40e8-98d9-4f2d45888228 req-7a78e7ee-06ac-4fca-aca2-3e0aaf48e7cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Updated VIF entry in instance network info cache for port aa08bc10-64b4-4b2b-88d3-2f1a994c799c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:47:56 compute-0 nova_compute[257802]: 2025-10-02 12:47:56.906 2 DEBUG nova.network.neutron [req-d17a4f07-87a5-40e8-98d9-4f2d45888228 req-7a78e7ee-06ac-4fca-aca2-3e0aaf48e7cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Updating instance_info_cache with network_info: [{"id": "aa08bc10-64b4-4b2b-88d3-2f1a994c799c", "address": "fa:16:3e:90:d0:c0", "network": {"id": "a0b40647-5bdc-42ab-8337-b3fcdc66ecfc", "bridge": "br-int", "label": "tempest-network-smoke--749866782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa08bc10-64", "ovs_interfaceid": "aa08bc10-64b4-4b2b-88d3-2f1a994c799c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:47:56 compute-0 nova_compute[257802]: 2025-10-02 12:47:56.942 2 DEBUG oslo_concurrency.lockutils [req-d17a4f07-87a5-40e8-98d9-4f2d45888228 req-7a78e7ee-06ac-4fca-aca2-3e0aaf48e7cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-2340536c-13c5-4863-80fe-b3f9bc5dfe7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:47:57 compute-0 nova_compute[257802]: 2025-10-02 12:47:57.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:57.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:57 compute-0 ceph-mon[73607]: pgmap v2637: 305 pgs: 305 active+clean; 353 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 224 KiB/s wr, 98 op/s
Oct 02 12:47:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3155926240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:57 compute-0 ovn_controller[148183]: 2025-10-02T12:47:57Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:90:d0:c0 10.100.0.6
Oct 02 12:47:57 compute-0 ovn_controller[148183]: 2025-10-02T12:47:57Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:90:d0:c0 10.100.0.6
Oct 02 12:47:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2638: 305 pgs: 305 active+clean; 337 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 116 op/s
Oct 02 12:47:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:58.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e358 do_prune osdmap full prune enabled
Oct 02 12:47:58 compute-0 sudo[363345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:47:58 compute-0 sudo[363345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:58 compute-0 sudo[363345]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:58 compute-0 sudo[363371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:47:58 compute-0 sudo[363371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:58 compute-0 sudo[363371]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:58 compute-0 sudo[363396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:47:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e359 e359: 3 total, 3 up, 3 in
Oct 02 12:47:58 compute-0 sudo[363396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:58 compute-0 sudo[363396]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:58 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e359: 3 total, 3 up, 3 in
Oct 02 12:47:58 compute-0 sudo[363421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:47:58 compute-0 sudo[363421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:58 compute-0 ceph-mon[73607]: pgmap v2638: 305 pgs: 305 active+clean; 337 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 116 op/s
Oct 02 12:47:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:47:59.042186) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409279042283, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 2147, "num_deletes": 253, "total_data_size": 3764102, "memory_usage": 3827952, "flush_reason": "Manual Compaction"}
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409279059205, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 3685261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56309, "largest_seqno": 58454, "table_properties": {"data_size": 3675699, "index_size": 5992, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20366, "raw_average_key_size": 20, "raw_value_size": 3656361, "raw_average_value_size": 3693, "num_data_blocks": 261, "num_entries": 990, "num_filter_entries": 990, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409071, "oldest_key_time": 1759409071, "file_creation_time": 1759409279, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 17053 microseconds, and 9091 cpu microseconds.
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:47:59.059253) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 3685261 bytes OK
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:47:59.059276) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:47:59.061229) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:47:59.061244) EVENT_LOG_v1 {"time_micros": 1759409279061239, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:47:59.061264) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 3755384, prev total WAL file size 3755384, number of live WAL files 2.
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:47:59.062359) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(3598KB)], [125(11MB)]
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409279062439, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 15780713, "oldest_snapshot_seqno": -1}
Oct 02 12:47:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 8924 keys, 13811252 bytes, temperature: kUnknown
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409279161846, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 13811252, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13750270, "index_size": 37573, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22341, "raw_key_size": 230698, "raw_average_key_size": 25, "raw_value_size": 13590537, "raw_average_value_size": 1522, "num_data_blocks": 1473, "num_entries": 8924, "num_filter_entries": 8924, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759409279, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:47:59.162139) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 13811252 bytes
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:47:59.163988) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.6 rd, 138.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 11.5 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(8.0) write-amplify(3.7) OK, records in: 9449, records dropped: 525 output_compression: NoCompression
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:47:59.164010) EVENT_LOG_v1 {"time_micros": 1759409279163999, "job": 76, "event": "compaction_finished", "compaction_time_micros": 99501, "compaction_time_cpu_micros": 53248, "output_level": 6, "num_output_files": 1, "total_output_size": 13811252, "num_input_records": 9449, "num_output_records": 8924, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409279164758, "job": 76, "event": "table_file_deletion", "file_number": 127}
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409279166994, "job": 76, "event": "table_file_deletion", "file_number": 125}
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:47:59.062207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:47:59.167074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:47:59.167080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:47:59.167081) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:47:59.167083) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:47:59 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:47:59.167084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:47:59 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:47:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:47:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Oct 02 12:47:59 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 12:47:59 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:47:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:47:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:47:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:59.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:47:59 compute-0 sudo[363421]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 12:47:59 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 12:47:59 compute-0 nova_compute[257802]: 2025-10-02 12:47:59.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:59 compute-0 ceph-mon[73607]: osdmap e359: 3 total, 3 up, 3 in
Oct 02 12:47:59 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:47:59 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 12:47:59 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:47:59 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 12:48:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2640: 305 pgs: 305 active+clean; 351 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 159 KiB/s rd, 2.6 MiB/s wr, 96 op/s
Oct 02 12:48:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Oct 02 12:48:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 12:48:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:48:00 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:48:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:48:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:48:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:48:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:00.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:48:00 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 95f48b0f-9661-456a-86c5-cdc10b6567d5 does not exist
Oct 02 12:48:00 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 69a000b9-6a29-41be-9f76-2152f8a44d89 does not exist
Oct 02 12:48:00 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 655bafc1-f293-4a40-b5db-1826d37b831d does not exist
Oct 02 12:48:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:48:00 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:48:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:48:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:48:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:48:00 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:48:00 compute-0 sudo[363479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:00 compute-0 sudo[363479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:00 compute-0 sudo[363479]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:00 compute-0 sudo[363504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:48:00 compute-0 sudo[363504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:00 compute-0 sudo[363504]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:00 compute-0 sudo[363529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:00 compute-0 sudo[363529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:00 compute-0 sudo[363529]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:00 compute-0 sudo[363554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:48:00 compute-0 sudo[363554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:01 compute-0 podman[363623]: 2025-10-02 12:48:00.999034334 +0000 UTC m=+0.051509385 container create ed1f1ad82d6802479b6359a627213bd054b8f99d41fc853e0b0920f2dd2920c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:48:01 compute-0 systemd[1]: Started libpod-conmon-ed1f1ad82d6802479b6359a627213bd054b8f99d41fc853e0b0920f2dd2920c5.scope.
Oct 02 12:48:01 compute-0 ceph-mon[73607]: pgmap v2640: 305 pgs: 305 active+clean; 351 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 159 KiB/s rd, 2.6 MiB/s wr, 96 op/s
Oct 02 12:48:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 12:48:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:48:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:48:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:48:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:48:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:48:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:48:01 compute-0 podman[363623]: 2025-10-02 12:48:00.97277432 +0000 UTC m=+0.025249411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:48:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:48:01 compute-0 podman[363623]: 2025-10-02 12:48:01.094555522 +0000 UTC m=+0.147030563 container init ed1f1ad82d6802479b6359a627213bd054b8f99d41fc853e0b0920f2dd2920c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:48:01 compute-0 podman[363623]: 2025-10-02 12:48:01.103746087 +0000 UTC m=+0.156221128 container start ed1f1ad82d6802479b6359a627213bd054b8f99d41fc853e0b0920f2dd2920c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Oct 02 12:48:01 compute-0 podman[363623]: 2025-10-02 12:48:01.107051689 +0000 UTC m=+0.159526750 container attach ed1f1ad82d6802479b6359a627213bd054b8f99d41fc853e0b0920f2dd2920c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_dijkstra, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:48:01 compute-0 tender_dijkstra[363639]: 167 167
Oct 02 12:48:01 compute-0 systemd[1]: libpod-ed1f1ad82d6802479b6359a627213bd054b8f99d41fc853e0b0920f2dd2920c5.scope: Deactivated successfully.
Oct 02 12:48:01 compute-0 conmon[363639]: conmon ed1f1ad82d6802479b63 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ed1f1ad82d6802479b6359a627213bd054b8f99d41fc853e0b0920f2dd2920c5.scope/container/memory.events
Oct 02 12:48:01 compute-0 nova_compute[257802]: 2025-10-02 12:48:01.113 2 DEBUG nova.compute.manager [req-3187769b-b8ae-485d-80e3-d33ee7295f9b req-139fab75-cdc7-48b5-a4d4-dbc35e682a87 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Received event network-changed-ba5bd4c0-6961-4ed8-bbac-669993d3af7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:48:01 compute-0 nova_compute[257802]: 2025-10-02 12:48:01.113 2 DEBUG nova.compute.manager [req-3187769b-b8ae-485d-80e3-d33ee7295f9b req-139fab75-cdc7-48b5-a4d4-dbc35e682a87 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Refreshing instance network info cache due to event network-changed-ba5bd4c0-6961-4ed8-bbac-669993d3af7c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:48:01 compute-0 nova_compute[257802]: 2025-10-02 12:48:01.114 2 DEBUG oslo_concurrency.lockutils [req-3187769b-b8ae-485d-80e3-d33ee7295f9b req-139fab75-cdc7-48b5-a4d4-dbc35e682a87 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-da95c339-6bd5-495a-bd12-d1e71a8017b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:48:01 compute-0 nova_compute[257802]: 2025-10-02 12:48:01.114 2 DEBUG oslo_concurrency.lockutils [req-3187769b-b8ae-485d-80e3-d33ee7295f9b req-139fab75-cdc7-48b5-a4d4-dbc35e682a87 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-da95c339-6bd5-495a-bd12-d1e71a8017b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:48:01 compute-0 nova_compute[257802]: 2025-10-02 12:48:01.114 2 DEBUG nova.network.neutron [req-3187769b-b8ae-485d-80e3-d33ee7295f9b req-139fab75-cdc7-48b5-a4d4-dbc35e682a87 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Refreshing network info cache for port ba5bd4c0-6961-4ed8-bbac-669993d3af7c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:48:01 compute-0 podman[363644]: 2025-10-02 12:48:01.157424666 +0000 UTC m=+0.028551753 container died ed1f1ad82d6802479b6359a627213bd054b8f99d41fc853e0b0920f2dd2920c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 12:48:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-326f4c97c96c0f5a08f7d8a723665503661ed3b246ca412509b2771b54fe7fcd-merged.mount: Deactivated successfully.
Oct 02 12:48:01 compute-0 podman[363644]: 2025-10-02 12:48:01.190799545 +0000 UTC m=+0.061926612 container remove ed1f1ad82d6802479b6359a627213bd054b8f99d41fc853e0b0920f2dd2920c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_dijkstra, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:48:01 compute-0 systemd[1]: libpod-conmon-ed1f1ad82d6802479b6359a627213bd054b8f99d41fc853e0b0920f2dd2920c5.scope: Deactivated successfully.
Oct 02 12:48:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:01.240 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:01 compute-0 nova_compute[257802]: 2025-10-02 12:48:01.269 2 DEBUG oslo_concurrency.lockutils [None req-b9d3b3ce-a8bb-473f-8518-20b23ef7ca83 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "da95c339-6bd5-495a-bd12-d1e71a8017b6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:01 compute-0 nova_compute[257802]: 2025-10-02 12:48:01.269 2 DEBUG oslo_concurrency.lockutils [None req-b9d3b3ce-a8bb-473f-8518-20b23ef7ca83 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "da95c339-6bd5-495a-bd12-d1e71a8017b6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:01 compute-0 nova_compute[257802]: 2025-10-02 12:48:01.273 2 DEBUG oslo_concurrency.lockutils [None req-b9d3b3ce-a8bb-473f-8518-20b23ef7ca83 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "da95c339-6bd5-495a-bd12-d1e71a8017b6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:01 compute-0 nova_compute[257802]: 2025-10-02 12:48:01.273 2 DEBUG oslo_concurrency.lockutils [None req-b9d3b3ce-a8bb-473f-8518-20b23ef7ca83 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "da95c339-6bd5-495a-bd12-d1e71a8017b6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:01 compute-0 nova_compute[257802]: 2025-10-02 12:48:01.273 2 DEBUG oslo_concurrency.lockutils [None req-b9d3b3ce-a8bb-473f-8518-20b23ef7ca83 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "da95c339-6bd5-495a-bd12-d1e71a8017b6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:01 compute-0 nova_compute[257802]: 2025-10-02 12:48:01.274 2 INFO nova.compute.manager [None req-b9d3b3ce-a8bb-473f-8518-20b23ef7ca83 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Terminating instance
Oct 02 12:48:01 compute-0 nova_compute[257802]: 2025-10-02 12:48:01.275 2 DEBUG nova.compute.manager [None req-b9d3b3ce-a8bb-473f-8518-20b23ef7ca83 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:48:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:48:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:01.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:48:01 compute-0 kernel: tapba5bd4c0-69 (unregistering): left promiscuous mode
Oct 02 12:48:01 compute-0 NetworkManager[44987]: <info>  [1759409281.3336] device (tapba5bd4c0-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:48:01 compute-0 nova_compute[257802]: 2025-10-02 12:48:01.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:01 compute-0 ovn_controller[148183]: 2025-10-02T12:48:01Z|00745|binding|INFO|Releasing lport ba5bd4c0-6961-4ed8-bbac-669993d3af7c from this chassis (sb_readonly=0)
Oct 02 12:48:01 compute-0 ovn_controller[148183]: 2025-10-02T12:48:01Z|00746|binding|INFO|Setting lport ba5bd4c0-6961-4ed8-bbac-669993d3af7c down in Southbound
Oct 02 12:48:01 compute-0 ovn_controller[148183]: 2025-10-02T12:48:01Z|00747|binding|INFO|Removing iface tapba5bd4c0-69 ovn-installed in OVS
Oct 02 12:48:01 compute-0 nova_compute[257802]: 2025-10-02 12:48:01.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:01.353 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:6d:8a 10.100.0.4'], port_security=['fa:16:3e:e0:6d:8a 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'da95c339-6bd5-495a-bd12-d1e71a8017b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6500315-835f-4d3a-971d-50fa2592498e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '04565a27-da45-4d09-9fa3-2f55284ff111', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0813b7b-0afa-481f-9588-b4fef02778ee, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=ba5bd4c0-6961-4ed8-bbac-669993d3af7c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:48:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:01.354 158261 INFO neutron.agent.ovn.metadata.agent [-] Port ba5bd4c0-6961-4ed8-bbac-669993d3af7c in datapath c6500315-835f-4d3a-971d-50fa2592498e unbound from our chassis
Oct 02 12:48:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:01.356 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c6500315-835f-4d3a-971d-50fa2592498e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:48:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:01.360 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[00fdd4bb-21fd-4bbe-86e0-011355813899]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:01.361 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c6500315-835f-4d3a-971d-50fa2592498e namespace which is not needed anymore
Oct 02 12:48:01 compute-0 nova_compute[257802]: 2025-10-02 12:48:01.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:01 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000a6.scope: Deactivated successfully.
Oct 02 12:48:01 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000a6.scope: Consumed 16.304s CPU time.
Oct 02 12:48:01 compute-0 systemd-machined[211836]: Machine qemu-81-instance-000000a6 terminated.
Oct 02 12:48:01 compute-0 podman[363676]: 2025-10-02 12:48:01.449214614 +0000 UTC m=+0.045442278 container create 40344eaf4c7a0afc76f32686dddb1ff23b7785bfb6ff8c34553c5d6cf7f0e51a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_fermi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:48:01 compute-0 systemd[1]: Started libpod-conmon-40344eaf4c7a0afc76f32686dddb1ff23b7785bfb6ff8c34553c5d6cf7f0e51a.scope.
Oct 02 12:48:01 compute-0 nova_compute[257802]: 2025-10-02 12:48:01.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:01 compute-0 nova_compute[257802]: 2025-10-02 12:48:01.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:01 compute-0 neutron-haproxy-ovnmeta-c6500315-835f-4d3a-971d-50fa2592498e[360771]: [NOTICE]   (360775) : haproxy version is 2.8.14-c23fe91
Oct 02 12:48:01 compute-0 neutron-haproxy-ovnmeta-c6500315-835f-4d3a-971d-50fa2592498e[360771]: [NOTICE]   (360775) : path to executable is /usr/sbin/haproxy
Oct 02 12:48:01 compute-0 neutron-haproxy-ovnmeta-c6500315-835f-4d3a-971d-50fa2592498e[360771]: [ALERT]    (360775) : Current worker (360777) exited with code 143 (Terminated)
Oct 02 12:48:01 compute-0 neutron-haproxy-ovnmeta-c6500315-835f-4d3a-971d-50fa2592498e[360771]: [WARNING]  (360775) : All workers exited. Exiting... (0)
Oct 02 12:48:01 compute-0 systemd[1]: libpod-44927a09f23ea5125d2d0816a21c2da56484582420f8267a876d3bf5cc874c74.scope: Deactivated successfully.
Oct 02 12:48:01 compute-0 nova_compute[257802]: 2025-10-02 12:48:01.511 2 INFO nova.virt.libvirt.driver [-] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Instance destroyed successfully.
Oct 02 12:48:01 compute-0 nova_compute[257802]: 2025-10-02 12:48:01.512 2 DEBUG nova.objects.instance [None req-b9d3b3ce-a8bb-473f-8518-20b23ef7ca83 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'resources' on Instance uuid da95c339-6bd5-495a-bd12-d1e71a8017b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:48:01 compute-0 podman[363699]: 2025-10-02 12:48:01.51623074 +0000 UTC m=+0.057934875 container died 44927a09f23ea5125d2d0816a21c2da56484582420f8267a876d3bf5cc874c74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6500315-835f-4d3a-971d-50fa2592498e, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:48:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:48:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18c4b411d492aa8a1db364e17347e0287a8a0bf5a8ddfb8159d448788dc3d0d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:48:01 compute-0 podman[363676]: 2025-10-02 12:48:01.42874078 +0000 UTC m=+0.024968474 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:48:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18c4b411d492aa8a1db364e17347e0287a8a0bf5a8ddfb8159d448788dc3d0d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:48:01 compute-0 nova_compute[257802]: 2025-10-02 12:48:01.530 2 DEBUG nova.virt.libvirt.vif [None req-b9d3b3ce-a8bb-473f-8518-20b23ef7ca83 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:46:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2123262195',display_name='tempest-TestNetworkBasicOps-server-2123262195',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2123262195',id=166,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH6s/dG1JiNtqePgHlYs1LronjfyxaqEaySu675K65y94dsBugZbcdfbZOjwxnVn3sGta76xhoGcwU6I2nP82629sTfaxQ+DCqERXItbZ2bh8zj2URBCCk/NIgreotxLBg==',key_name='tempest-TestNetworkBasicOps-1527655277',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:46:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-m7ns156l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:46:37Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=da95c339-6bd5-495a-bd12-d1e71a8017b6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ba5bd4c0-6961-4ed8-bbac-669993d3af7c", "address": "fa:16:3e:e0:6d:8a", "network": {"id": "c6500315-835f-4d3a-971d-50fa2592498e", "bridge": "br-int", "label": "tempest-network-smoke--376518162", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba5bd4c0-69", "ovs_interfaceid": "ba5bd4c0-6961-4ed8-bbac-669993d3af7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:48:01 compute-0 nova_compute[257802]: 2025-10-02 12:48:01.531 2 DEBUG nova.network.os_vif_util [None req-b9d3b3ce-a8bb-473f-8518-20b23ef7ca83 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "ba5bd4c0-6961-4ed8-bbac-669993d3af7c", "address": "fa:16:3e:e0:6d:8a", "network": {"id": "c6500315-835f-4d3a-971d-50fa2592498e", "bridge": "br-int", "label": "tempest-network-smoke--376518162", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba5bd4c0-69", "ovs_interfaceid": "ba5bd4c0-6961-4ed8-bbac-669993d3af7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:48:01 compute-0 nova_compute[257802]: 2025-10-02 12:48:01.532 2 DEBUG nova.network.os_vif_util [None req-b9d3b3ce-a8bb-473f-8518-20b23ef7ca83 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e0:6d:8a,bridge_name='br-int',has_traffic_filtering=True,id=ba5bd4c0-6961-4ed8-bbac-669993d3af7c,network=Network(c6500315-835f-4d3a-971d-50fa2592498e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba5bd4c0-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:48:01 compute-0 nova_compute[257802]: 2025-10-02 12:48:01.532 2 DEBUG os_vif [None req-b9d3b3ce-a8bb-473f-8518-20b23ef7ca83 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e0:6d:8a,bridge_name='br-int',has_traffic_filtering=True,id=ba5bd4c0-6961-4ed8-bbac-669993d3af7c,network=Network(c6500315-835f-4d3a-971d-50fa2592498e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba5bd4c0-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:48:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18c4b411d492aa8a1db364e17347e0287a8a0bf5a8ddfb8159d448788dc3d0d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:48:01 compute-0 nova_compute[257802]: 2025-10-02 12:48:01.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:01 compute-0 nova_compute[257802]: 2025-10-02 12:48:01.534 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba5bd4c0-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:01 compute-0 nova_compute[257802]: 2025-10-02 12:48:01.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18c4b411d492aa8a1db364e17347e0287a8a0bf5a8ddfb8159d448788dc3d0d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:48:01 compute-0 nova_compute[257802]: 2025-10-02 12:48:01.541 2 INFO os_vif [None req-b9d3b3ce-a8bb-473f-8518-20b23ef7ca83 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e0:6d:8a,bridge_name='br-int',has_traffic_filtering=True,id=ba5bd4c0-6961-4ed8-bbac-669993d3af7c,network=Network(c6500315-835f-4d3a-971d-50fa2592498e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba5bd4c0-69')
Oct 02 12:48:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18c4b411d492aa8a1db364e17347e0287a8a0bf5a8ddfb8159d448788dc3d0d6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:48:01 compute-0 podman[363676]: 2025-10-02 12:48:01.55123839 +0000 UTC m=+0.147466064 container init 40344eaf4c7a0afc76f32686dddb1ff23b7785bfb6ff8c34553c5d6cf7f0e51a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_fermi, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 12:48:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-a56d17a1b8c02201d1e608b1abd0a42399f31ac6fcece01a0a925c577c96341c-merged.mount: Deactivated successfully.
Oct 02 12:48:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-44927a09f23ea5125d2d0816a21c2da56484582420f8267a876d3bf5cc874c74-userdata-shm.mount: Deactivated successfully.
Oct 02 12:48:01 compute-0 podman[363699]: 2025-10-02 12:48:01.563793408 +0000 UTC m=+0.105497533 container cleanup 44927a09f23ea5125d2d0816a21c2da56484582420f8267a876d3bf5cc874c74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6500315-835f-4d3a-971d-50fa2592498e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:48:01 compute-0 podman[363676]: 2025-10-02 12:48:01.564237209 +0000 UTC m=+0.160464873 container start 40344eaf4c7a0afc76f32686dddb1ff23b7785bfb6ff8c34553c5d6cf7f0e51a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:48:01 compute-0 podman[363676]: 2025-10-02 12:48:01.567727305 +0000 UTC m=+0.163954989 container attach 40344eaf4c7a0afc76f32686dddb1ff23b7785bfb6ff8c34553c5d6cf7f0e51a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_fermi, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:48:01 compute-0 systemd[1]: libpod-conmon-44927a09f23ea5125d2d0816a21c2da56484582420f8267a876d3bf5cc874c74.scope: Deactivated successfully.
Oct 02 12:48:01 compute-0 podman[363766]: 2025-10-02 12:48:01.637397986 +0000 UTC m=+0.047895098 container remove 44927a09f23ea5125d2d0816a21c2da56484582420f8267a876d3bf5cc874c74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6500315-835f-4d3a-971d-50fa2592498e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 12:48:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:01.643 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8d29f1ea-851b-4587-a75b-68ce53e0e486]: (4, ('Thu Oct  2 12:48:01 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c6500315-835f-4d3a-971d-50fa2592498e (44927a09f23ea5125d2d0816a21c2da56484582420f8267a876d3bf5cc874c74)\n44927a09f23ea5125d2d0816a21c2da56484582420f8267a876d3bf5cc874c74\nThu Oct  2 12:48:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c6500315-835f-4d3a-971d-50fa2592498e (44927a09f23ea5125d2d0816a21c2da56484582420f8267a876d3bf5cc874c74)\n44927a09f23ea5125d2d0816a21c2da56484582420f8267a876d3bf5cc874c74\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:01.644 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[aaca7385-f092-47e3-9f63-08a309036382]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:01.645 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6500315-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:01 compute-0 nova_compute[257802]: 2025-10-02 12:48:01.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:01 compute-0 kernel: tapc6500315-80: left promiscuous mode
Oct 02 12:48:01 compute-0 nova_compute[257802]: 2025-10-02 12:48:01.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:01.663 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[198d4e48-eafb-409c-8231-666834b7ae25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:01.688 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2535ade9-0fdb-491e-9ae3-aa02d35ce05b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:01.689 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[629d5665-d373-444f-8e5a-c0e727113ebc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:01.704 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8848e890-9c54-4382-9aca-6f8eda643fb1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 726392, 'reachable_time': 21329, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363786, 'error': None, 'target': 'ovnmeta-c6500315-835f-4d3a-971d-50fa2592498e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:01.705 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c6500315-835f-4d3a-971d-50fa2592498e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:48:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:01.706 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[322dcb7a-874f-4266-baba-3a1319f622f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:01 compute-0 systemd[1]: run-netns-ovnmeta\x2dc6500315\x2d835f\x2d4d3a\x2d971d\x2d50fa2592498e.mount: Deactivated successfully.
Oct 02 12:48:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2641: 305 pgs: 305 active+clean; 360 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 427 KiB/s rd, 2.8 MiB/s wr, 129 op/s
Oct 02 12:48:02 compute-0 nova_compute[257802]: 2025-10-02 12:48:02.087 2 DEBUG nova.compute.manager [req-7d7a242e-93d1-46ed-9062-0b10dd492b07 req-ed2b5e8f-2862-4aa8-9815-8139c3c023a1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Received event network-vif-unplugged-ba5bd4c0-6961-4ed8-bbac-669993d3af7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:48:02 compute-0 nova_compute[257802]: 2025-10-02 12:48:02.087 2 DEBUG oslo_concurrency.lockutils [req-7d7a242e-93d1-46ed-9062-0b10dd492b07 req-ed2b5e8f-2862-4aa8-9815-8139c3c023a1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "da95c339-6bd5-495a-bd12-d1e71a8017b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:02 compute-0 nova_compute[257802]: 2025-10-02 12:48:02.088 2 DEBUG oslo_concurrency.lockutils [req-7d7a242e-93d1-46ed-9062-0b10dd492b07 req-ed2b5e8f-2862-4aa8-9815-8139c3c023a1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "da95c339-6bd5-495a-bd12-d1e71a8017b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:02 compute-0 nova_compute[257802]: 2025-10-02 12:48:02.088 2 DEBUG oslo_concurrency.lockutils [req-7d7a242e-93d1-46ed-9062-0b10dd492b07 req-ed2b5e8f-2862-4aa8-9815-8139c3c023a1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "da95c339-6bd5-495a-bd12-d1e71a8017b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:02 compute-0 nova_compute[257802]: 2025-10-02 12:48:02.088 2 DEBUG nova.compute.manager [req-7d7a242e-93d1-46ed-9062-0b10dd492b07 req-ed2b5e8f-2862-4aa8-9815-8139c3c023a1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] No waiting events found dispatching network-vif-unplugged-ba5bd4c0-6961-4ed8-bbac-669993d3af7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:48:02 compute-0 nova_compute[257802]: 2025-10-02 12:48:02.088 2 DEBUG nova.compute.manager [req-7d7a242e-93d1-46ed-9062-0b10dd492b07 req-ed2b5e8f-2862-4aa8-9815-8139c3c023a1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Received event network-vif-unplugged-ba5bd4c0-6961-4ed8-bbac-669993d3af7c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:48:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1578206844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:02.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:02 compute-0 nova_compute[257802]: 2025-10-02 12:48:02.393 2 INFO nova.virt.libvirt.driver [None req-b9d3b3ce-a8bb-473f-8518-20b23ef7ca83 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Deleting instance files /var/lib/nova/instances/da95c339-6bd5-495a-bd12-d1e71a8017b6_del
Oct 02 12:48:02 compute-0 nova_compute[257802]: 2025-10-02 12:48:02.394 2 INFO nova.virt.libvirt.driver [None req-b9d3b3ce-a8bb-473f-8518-20b23ef7ca83 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Deletion of /var/lib/nova/instances/da95c339-6bd5-495a-bd12-d1e71a8017b6_del complete
Oct 02 12:48:02 compute-0 infallible_fermi[363722]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:48:02 compute-0 nova_compute[257802]: 2025-10-02 12:48:02.452 2 INFO nova.compute.manager [None req-b9d3b3ce-a8bb-473f-8518-20b23ef7ca83 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Took 1.18 seconds to destroy the instance on the hypervisor.
Oct 02 12:48:02 compute-0 infallible_fermi[363722]: --> relative data size: 1.0
Oct 02 12:48:02 compute-0 infallible_fermi[363722]: --> All data devices are unavailable
Oct 02 12:48:02 compute-0 nova_compute[257802]: 2025-10-02 12:48:02.452 2 DEBUG oslo.service.loopingcall [None req-b9d3b3ce-a8bb-473f-8518-20b23ef7ca83 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:48:02 compute-0 nova_compute[257802]: 2025-10-02 12:48:02.452 2 DEBUG nova.compute.manager [-] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:48:02 compute-0 nova_compute[257802]: 2025-10-02 12:48:02.453 2 DEBUG nova.network.neutron [-] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:48:02 compute-0 systemd[1]: libpod-40344eaf4c7a0afc76f32686dddb1ff23b7785bfb6ff8c34553c5d6cf7f0e51a.scope: Deactivated successfully.
Oct 02 12:48:02 compute-0 podman[363676]: 2025-10-02 12:48:02.480662661 +0000 UTC m=+1.076890325 container died 40344eaf4c7a0afc76f32686dddb1ff23b7785bfb6ff8c34553c5d6cf7f0e51a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 12:48:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-18c4b411d492aa8a1db364e17347e0287a8a0bf5a8ddfb8159d448788dc3d0d6-merged.mount: Deactivated successfully.
Oct 02 12:48:02 compute-0 podman[363676]: 2025-10-02 12:48:02.528540577 +0000 UTC m=+1.124768241 container remove 40344eaf4c7a0afc76f32686dddb1ff23b7785bfb6ff8c34553c5d6cf7f0e51a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_fermi, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:48:02 compute-0 systemd[1]: libpod-conmon-40344eaf4c7a0afc76f32686dddb1ff23b7785bfb6ff8c34553c5d6cf7f0e51a.scope: Deactivated successfully.
Oct 02 12:48:02 compute-0 sudo[363554]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:02 compute-0 sudo[363812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:02 compute-0 sudo[363812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:02 compute-0 sudo[363812]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:02 compute-0 sudo[363838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:48:02 compute-0 sudo[363838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:02 compute-0 sudo[363838]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:02 compute-0 sudo[363863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:02 compute-0 sudo[363863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:02 compute-0 sudo[363863]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:02 compute-0 sudo[363888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:48:02 compute-0 sudo[363888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:03 compute-0 podman[363954]: 2025-10-02 12:48:03.106178016 +0000 UTC m=+0.039806128 container create 1adf9fa161bb5aed9c74fd2aad5d61799d0dd90cb80bf77a9abf3377492c50d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bouman, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:48:03 compute-0 systemd[1]: Started libpod-conmon-1adf9fa161bb5aed9c74fd2aad5d61799d0dd90cb80bf77a9abf3377492c50d2.scope.
Oct 02 12:48:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:48:03 compute-0 podman[363954]: 2025-10-02 12:48:03.091124947 +0000 UTC m=+0.024753059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:48:03 compute-0 podman[363954]: 2025-10-02 12:48:03.19384866 +0000 UTC m=+0.127476792 container init 1adf9fa161bb5aed9c74fd2aad5d61799d0dd90cb80bf77a9abf3377492c50d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bouman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 12:48:03 compute-0 podman[363954]: 2025-10-02 12:48:03.200070213 +0000 UTC m=+0.133698325 container start 1adf9fa161bb5aed9c74fd2aad5d61799d0dd90cb80bf77a9abf3377492c50d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bouman, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 12:48:03 compute-0 flamboyant_bouman[363970]: 167 167
Oct 02 12:48:03 compute-0 systemd[1]: libpod-1adf9fa161bb5aed9c74fd2aad5d61799d0dd90cb80bf77a9abf3377492c50d2.scope: Deactivated successfully.
Oct 02 12:48:03 compute-0 podman[363954]: 2025-10-02 12:48:03.205350552 +0000 UTC m=+0.138978664 container attach 1adf9fa161bb5aed9c74fd2aad5d61799d0dd90cb80bf77a9abf3377492c50d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:48:03 compute-0 podman[363954]: 2025-10-02 12:48:03.205966278 +0000 UTC m=+0.139594390 container died 1adf9fa161bb5aed9c74fd2aad5d61799d0dd90cb80bf77a9abf3377492c50d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bouman, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 12:48:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-154ad63e7313d7382c0e8e8bd58763a438a47d3bec0e9669d2e637835bb1618a-merged.mount: Deactivated successfully.
Oct 02 12:48:03 compute-0 podman[363954]: 2025-10-02 12:48:03.237628595 +0000 UTC m=+0.171256697 container remove 1adf9fa161bb5aed9c74fd2aad5d61799d0dd90cb80bf77a9abf3377492c50d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:48:03 compute-0 systemd[1]: libpod-conmon-1adf9fa161bb5aed9c74fd2aad5d61799d0dd90cb80bf77a9abf3377492c50d2.scope: Deactivated successfully.
Oct 02 12:48:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e359 do_prune osdmap full prune enabled
Oct 02 12:48:03 compute-0 ceph-mon[73607]: pgmap v2641: 305 pgs: 305 active+clean; 360 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 427 KiB/s rd, 2.8 MiB/s wr, 129 op/s
Oct 02 12:48:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3044533313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:48:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:03.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:48:03 compute-0 podman[363993]: 2025-10-02 12:48:03.393801612 +0000 UTC m=+0.038106157 container create d89eb93e2112c9ed545dbd3f657b09b0189da280f90a6e7fb44607a2ddea720d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:48:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e360 e360: 3 total, 3 up, 3 in
Oct 02 12:48:03 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e360: 3 total, 3 up, 3 in
Oct 02 12:48:03 compute-0 systemd[1]: Started libpod-conmon-d89eb93e2112c9ed545dbd3f657b09b0189da280f90a6e7fb44607a2ddea720d.scope.
Oct 02 12:48:03 compute-0 podman[363993]: 2025-10-02 12:48:03.379113721 +0000 UTC m=+0.023418136 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:48:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:48:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/379934e1449b34b0a24a9baa74781e6e018a447d6d9bd0fc7ccf16721a0916f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:48:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/379934e1449b34b0a24a9baa74781e6e018a447d6d9bd0fc7ccf16721a0916f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:48:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/379934e1449b34b0a24a9baa74781e6e018a447d6d9bd0fc7ccf16721a0916f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:48:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/379934e1449b34b0a24a9baa74781e6e018a447d6d9bd0fc7ccf16721a0916f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:48:03 compute-0 podman[363993]: 2025-10-02 12:48:03.499496968 +0000 UTC m=+0.143801383 container init d89eb93e2112c9ed545dbd3f657b09b0189da280f90a6e7fb44607a2ddea720d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dubinsky, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:48:03 compute-0 podman[363993]: 2025-10-02 12:48:03.505887735 +0000 UTC m=+0.150192130 container start d89eb93e2112c9ed545dbd3f657b09b0189da280f90a6e7fb44607a2ddea720d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dubinsky, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 12:48:03 compute-0 podman[363993]: 2025-10-02 12:48:03.509185516 +0000 UTC m=+0.153489931 container attach d89eb93e2112c9ed545dbd3f657b09b0189da280f90a6e7fb44607a2ddea720d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dubinsky, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:48:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2643: 305 pgs: 305 active+clean; 318 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 540 KiB/s rd, 3.2 MiB/s wr, 161 op/s
Oct 02 12:48:04 compute-0 nova_compute[257802]: 2025-10-02 12:48:04.207 2 DEBUG nova.compute.manager [req-b1959f2c-2454-438f-9905-dfbe5339d97f req-2f61da25-670c-4879-8cb1-88e5fb69c1db d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Received event network-vif-plugged-ba5bd4c0-6961-4ed8-bbac-669993d3af7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:48:04 compute-0 nova_compute[257802]: 2025-10-02 12:48:04.209 2 DEBUG oslo_concurrency.lockutils [req-b1959f2c-2454-438f-9905-dfbe5339d97f req-2f61da25-670c-4879-8cb1-88e5fb69c1db d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "da95c339-6bd5-495a-bd12-d1e71a8017b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:04 compute-0 nova_compute[257802]: 2025-10-02 12:48:04.209 2 DEBUG oslo_concurrency.lockutils [req-b1959f2c-2454-438f-9905-dfbe5339d97f req-2f61da25-670c-4879-8cb1-88e5fb69c1db d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "da95c339-6bd5-495a-bd12-d1e71a8017b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:04 compute-0 nova_compute[257802]: 2025-10-02 12:48:04.209 2 DEBUG oslo_concurrency.lockutils [req-b1959f2c-2454-438f-9905-dfbe5339d97f req-2f61da25-670c-4879-8cb1-88e5fb69c1db d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "da95c339-6bd5-495a-bd12-d1e71a8017b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:04 compute-0 nova_compute[257802]: 2025-10-02 12:48:04.209 2 DEBUG nova.compute.manager [req-b1959f2c-2454-438f-9905-dfbe5339d97f req-2f61da25-670c-4879-8cb1-88e5fb69c1db d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] No waiting events found dispatching network-vif-plugged-ba5bd4c0-6961-4ed8-bbac-669993d3af7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:48:04 compute-0 nova_compute[257802]: 2025-10-02 12:48:04.209 2 WARNING nova.compute.manager [req-b1959f2c-2454-438f-9905-dfbe5339d97f req-2f61da25-670c-4879-8cb1-88e5fb69c1db d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Received unexpected event network-vif-plugged-ba5bd4c0-6961-4ed8-bbac-669993d3af7c for instance with vm_state active and task_state deleting.
Oct 02 12:48:04 compute-0 great_dubinsky[364010]: {
Oct 02 12:48:04 compute-0 great_dubinsky[364010]:     "1": [
Oct 02 12:48:04 compute-0 great_dubinsky[364010]:         {
Oct 02 12:48:04 compute-0 great_dubinsky[364010]:             "devices": [
Oct 02 12:48:04 compute-0 great_dubinsky[364010]:                 "/dev/loop3"
Oct 02 12:48:04 compute-0 great_dubinsky[364010]:             ],
Oct 02 12:48:04 compute-0 great_dubinsky[364010]:             "lv_name": "ceph_lv0",
Oct 02 12:48:04 compute-0 great_dubinsky[364010]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:48:04 compute-0 great_dubinsky[364010]:             "lv_size": "7511998464",
Oct 02 12:48:04 compute-0 great_dubinsky[364010]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:48:04 compute-0 great_dubinsky[364010]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:48:04 compute-0 great_dubinsky[364010]:             "name": "ceph_lv0",
Oct 02 12:48:04 compute-0 great_dubinsky[364010]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:48:04 compute-0 great_dubinsky[364010]:             "tags": {
Oct 02 12:48:04 compute-0 great_dubinsky[364010]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:48:04 compute-0 great_dubinsky[364010]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:48:04 compute-0 great_dubinsky[364010]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:48:04 compute-0 great_dubinsky[364010]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:48:04 compute-0 great_dubinsky[364010]:                 "ceph.cluster_name": "ceph",
Oct 02 12:48:04 compute-0 great_dubinsky[364010]:                 "ceph.crush_device_class": "",
Oct 02 12:48:04 compute-0 great_dubinsky[364010]:                 "ceph.encrypted": "0",
Oct 02 12:48:04 compute-0 great_dubinsky[364010]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:48:04 compute-0 great_dubinsky[364010]:                 "ceph.osd_id": "1",
Oct 02 12:48:04 compute-0 great_dubinsky[364010]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:48:04 compute-0 great_dubinsky[364010]:                 "ceph.type": "block",
Oct 02 12:48:04 compute-0 great_dubinsky[364010]:                 "ceph.vdo": "0"
Oct 02 12:48:04 compute-0 great_dubinsky[364010]:             },
Oct 02 12:48:04 compute-0 great_dubinsky[364010]:             "type": "block",
Oct 02 12:48:04 compute-0 great_dubinsky[364010]:             "vg_name": "ceph_vg0"
Oct 02 12:48:04 compute-0 great_dubinsky[364010]:         }
Oct 02 12:48:04 compute-0 great_dubinsky[364010]:     ]
Oct 02 12:48:04 compute-0 great_dubinsky[364010]: }
Oct 02 12:48:04 compute-0 systemd[1]: libpod-d89eb93e2112c9ed545dbd3f657b09b0189da280f90a6e7fb44607a2ddea720d.scope: Deactivated successfully.
Oct 02 12:48:04 compute-0 podman[363993]: 2025-10-02 12:48:04.274371833 +0000 UTC m=+0.918676238 container died d89eb93e2112c9ed545dbd3f657b09b0189da280f90a6e7fb44607a2ddea720d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dubinsky, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:48:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-379934e1449b34b0a24a9baa74781e6e018a447d6d9bd0fc7ccf16721a0916f6-merged.mount: Deactivated successfully.
Oct 02 12:48:04 compute-0 nova_compute[257802]: 2025-10-02 12:48:04.307 2 DEBUG nova.network.neutron [-] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:48:04 compute-0 nova_compute[257802]: 2025-10-02 12:48:04.327 2 INFO nova.compute.manager [-] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Took 1.87 seconds to deallocate network for instance.
Oct 02 12:48:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:04.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:04 compute-0 podman[363993]: 2025-10-02 12:48:04.332757717 +0000 UTC m=+0.977062112 container remove d89eb93e2112c9ed545dbd3f657b09b0189da280f90a6e7fb44607a2ddea720d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 12:48:04 compute-0 nova_compute[257802]: 2025-10-02 12:48:04.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:04 compute-0 systemd[1]: libpod-conmon-d89eb93e2112c9ed545dbd3f657b09b0189da280f90a6e7fb44607a2ddea720d.scope: Deactivated successfully.
Oct 02 12:48:04 compute-0 sudo[363888]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:04 compute-0 nova_compute[257802]: 2025-10-02 12:48:04.385 2 DEBUG oslo_concurrency.lockutils [None req-b9d3b3ce-a8bb-473f-8518-20b23ef7ca83 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:04 compute-0 nova_compute[257802]: 2025-10-02 12:48:04.385 2 DEBUG oslo_concurrency.lockutils [None req-b9d3b3ce-a8bb-473f-8518-20b23ef7ca83 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:04 compute-0 nova_compute[257802]: 2025-10-02 12:48:04.411 2 DEBUG nova.compute.manager [req-8db273a7-33c3-40b4-9f3d-3b7c9ebd0124 req-1e0aec6a-eae2-4cf8-98d0-49c66e20bdc4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Received event network-vif-deleted-ba5bd4c0-6961-4ed8-bbac-669993d3af7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:48:04 compute-0 ceph-mon[73607]: osdmap e360: 3 total, 3 up, 3 in
Oct 02 12:48:04 compute-0 sudo[364034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:04 compute-0 sudo[364034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:04 compute-0 sudo[364034]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:04 compute-0 nova_compute[257802]: 2025-10-02 12:48:04.462 2 DEBUG oslo_concurrency.processutils [None req-b9d3b3ce-a8bb-473f-8518-20b23ef7ca83 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:04 compute-0 sudo[364059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:48:04 compute-0 sudo[364059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:04 compute-0 sudo[364059]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:04 compute-0 sudo[364085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:04 compute-0 sudo[364085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:04 compute-0 sudo[364085]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:04 compute-0 sudo[364110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:48:04 compute-0 sudo[364110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:04 compute-0 nova_compute[257802]: 2025-10-02 12:48:04.720 2 DEBUG nova.network.neutron [req-3187769b-b8ae-485d-80e3-d33ee7295f9b req-139fab75-cdc7-48b5-a4d4-dbc35e682a87 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Updated VIF entry in instance network info cache for port ba5bd4c0-6961-4ed8-bbac-669993d3af7c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:48:04 compute-0 nova_compute[257802]: 2025-10-02 12:48:04.721 2 DEBUG nova.network.neutron [req-3187769b-b8ae-485d-80e3-d33ee7295f9b req-139fab75-cdc7-48b5-a4d4-dbc35e682a87 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Updating instance_info_cache with network_info: [{"id": "ba5bd4c0-6961-4ed8-bbac-669993d3af7c", "address": "fa:16:3e:e0:6d:8a", "network": {"id": "c6500315-835f-4d3a-971d-50fa2592498e", "bridge": "br-int", "label": "tempest-network-smoke--376518162", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba5bd4c0-69", "ovs_interfaceid": "ba5bd4c0-6961-4ed8-bbac-669993d3af7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:48:04 compute-0 nova_compute[257802]: 2025-10-02 12:48:04.744 2 DEBUG oslo_concurrency.lockutils [req-3187769b-b8ae-485d-80e3-d33ee7295f9b req-139fab75-cdc7-48b5-a4d4-dbc35e682a87 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-da95c339-6bd5-495a-bd12-d1e71a8017b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:48:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:48:04 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3854781329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:04 compute-0 nova_compute[257802]: 2025-10-02 12:48:04.877 2 DEBUG oslo_concurrency.processutils [None req-b9d3b3ce-a8bb-473f-8518-20b23ef7ca83 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:04 compute-0 nova_compute[257802]: 2025-10-02 12:48:04.883 2 DEBUG nova.compute.provider_tree [None req-b9d3b3ce-a8bb-473f-8518-20b23ef7ca83 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:48:04 compute-0 podman[364195]: 2025-10-02 12:48:04.900025272 +0000 UTC m=+0.042681419 container create ab25f1f03706f7bcb2b9dc047adfce8cc05690661c10ad2ebcb0f6538c95750f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galois, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:48:04 compute-0 nova_compute[257802]: 2025-10-02 12:48:04.918 2 DEBUG nova.scheduler.client.report [None req-b9d3b3ce-a8bb-473f-8518-20b23ef7ca83 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:48:04 compute-0 systemd[1]: Started libpod-conmon-ab25f1f03706f7bcb2b9dc047adfce8cc05690661c10ad2ebcb0f6538c95750f.scope.
Oct 02 12:48:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:48:04 compute-0 nova_compute[257802]: 2025-10-02 12:48:04.950 2 DEBUG oslo_concurrency.lockutils [None req-b9d3b3ce-a8bb-473f-8518-20b23ef7ca83 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:04 compute-0 podman[364195]: 2025-10-02 12:48:04.9654757 +0000 UTC m=+0.108131867 container init ab25f1f03706f7bcb2b9dc047adfce8cc05690661c10ad2ebcb0f6538c95750f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galois, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:48:04 compute-0 podman[364195]: 2025-10-02 12:48:04.973075257 +0000 UTC m=+0.115731404 container start ab25f1f03706f7bcb2b9dc047adfce8cc05690661c10ad2ebcb0f6538c95750f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 12:48:04 compute-0 nova_compute[257802]: 2025-10-02 12:48:04.975 2 INFO nova.scheduler.client.report [None req-b9d3b3ce-a8bb-473f-8518-20b23ef7ca83 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Deleted allocations for instance da95c339-6bd5-495a-bd12-d1e71a8017b6
Oct 02 12:48:04 compute-0 podman[364195]: 2025-10-02 12:48:04.883755512 +0000 UTC m=+0.026411679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:48:04 compute-0 podman[364195]: 2025-10-02 12:48:04.977559577 +0000 UTC m=+0.120215744 container attach ab25f1f03706f7bcb2b9dc047adfce8cc05690661c10ad2ebcb0f6538c95750f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galois, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Oct 02 12:48:04 compute-0 friendly_galois[364213]: 167 167
Oct 02 12:48:04 compute-0 systemd[1]: libpod-ab25f1f03706f7bcb2b9dc047adfce8cc05690661c10ad2ebcb0f6538c95750f.scope: Deactivated successfully.
Oct 02 12:48:04 compute-0 podman[364195]: 2025-10-02 12:48:04.979403822 +0000 UTC m=+0.122059969 container died ab25f1f03706f7bcb2b9dc047adfce8cc05690661c10ad2ebcb0f6538c95750f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galois, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct 02 12:48:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-b725d032e35e15e8558ebbd0ddc7a5b9ef433b2444088ff3ccf3c0627cbc9aa7-merged.mount: Deactivated successfully.
Oct 02 12:48:05 compute-0 podman[364195]: 2025-10-02 12:48:05.013952511 +0000 UTC m=+0.156608658 container remove ab25f1f03706f7bcb2b9dc047adfce8cc05690661c10ad2ebcb0f6538c95750f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 12:48:05 compute-0 systemd[1]: libpod-conmon-ab25f1f03706f7bcb2b9dc047adfce8cc05690661c10ad2ebcb0f6538c95750f.scope: Deactivated successfully.
Oct 02 12:48:05 compute-0 nova_compute[257802]: 2025-10-02 12:48:05.042 2 DEBUG oslo_concurrency.lockutils [None req-b9d3b3ce-a8bb-473f-8518-20b23ef7ca83 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "da95c339-6bd5-495a-bd12-d1e71a8017b6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:05 compute-0 podman[364237]: 2025-10-02 12:48:05.174778762 +0000 UTC m=+0.039169684 container create f42829f6d47cf1cd22151f71ecd201467dd652e690c26dce26b9bd96e2239b77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 12:48:05 compute-0 systemd[1]: Started libpod-conmon-f42829f6d47cf1cd22151f71ecd201467dd652e690c26dce26b9bd96e2239b77.scope.
Oct 02 12:48:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:48:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92d3877ededd865c0f0e4ebcd12a49e37623af784035cd95e5e3e5940c9dcf17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:48:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92d3877ededd865c0f0e4ebcd12a49e37623af784035cd95e5e3e5940c9dcf17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:48:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92d3877ededd865c0f0e4ebcd12a49e37623af784035cd95e5e3e5940c9dcf17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:48:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92d3877ededd865c0f0e4ebcd12a49e37623af784035cd95e5e3e5940c9dcf17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:48:05 compute-0 podman[364237]: 2025-10-02 12:48:05.239123362 +0000 UTC m=+0.103514314 container init f42829f6d47cf1cd22151f71ecd201467dd652e690c26dce26b9bd96e2239b77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Oct 02 12:48:05 compute-0 podman[364237]: 2025-10-02 12:48:05.251240559 +0000 UTC m=+0.115631521 container start f42829f6d47cf1cd22151f71ecd201467dd652e690c26dce26b9bd96e2239b77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 12:48:05 compute-0 podman[364237]: 2025-10-02 12:48:05.158571363 +0000 UTC m=+0.022962295 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:48:05 compute-0 podman[364237]: 2025-10-02 12:48:05.254820558 +0000 UTC m=+0.119211510 container attach f42829f6d47cf1cd22151f71ecd201467dd652e690c26dce26b9bd96e2239b77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 12:48:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:05.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e360 do_prune osdmap full prune enabled
Oct 02 12:48:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e361 e361: 3 total, 3 up, 3 in
Oct 02 12:48:05 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e361: 3 total, 3 up, 3 in
Oct 02 12:48:05 compute-0 ceph-mon[73607]: pgmap v2643: 305 pgs: 305 active+clean; 318 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 540 KiB/s rd, 3.2 MiB/s wr, 161 op/s
Oct 02 12:48:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3854781329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2645: 305 pgs: 305 active+clean; 317 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.2 MiB/s wr, 170 op/s
Oct 02 12:48:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:48:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:06.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:48:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e361 do_prune osdmap full prune enabled
Oct 02 12:48:06 compute-0 exciting_hofstadter[364253]: {
Oct 02 12:48:06 compute-0 exciting_hofstadter[364253]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:48:06 compute-0 exciting_hofstadter[364253]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:48:06 compute-0 exciting_hofstadter[364253]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:48:06 compute-0 exciting_hofstadter[364253]:         "osd_id": 1,
Oct 02 12:48:06 compute-0 exciting_hofstadter[364253]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:48:06 compute-0 exciting_hofstadter[364253]:         "type": "bluestore"
Oct 02 12:48:06 compute-0 exciting_hofstadter[364253]:     }
Oct 02 12:48:06 compute-0 exciting_hofstadter[364253]: }
Oct 02 12:48:06 compute-0 ceph-mon[73607]: osdmap e361: 3 total, 3 up, 3 in
Oct 02 12:48:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e362 e362: 3 total, 3 up, 3 in
Oct 02 12:48:06 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e362: 3 total, 3 up, 3 in
Oct 02 12:48:06 compute-0 systemd[1]: libpod-f42829f6d47cf1cd22151f71ecd201467dd652e690c26dce26b9bd96e2239b77.scope: Deactivated successfully.
Oct 02 12:48:06 compute-0 conmon[364253]: conmon f42829f6d47cf1cd2215 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f42829f6d47cf1cd22151f71ecd201467dd652e690c26dce26b9bd96e2239b77.scope/container/memory.events
Oct 02 12:48:06 compute-0 podman[364237]: 2025-10-02 12:48:06.485283544 +0000 UTC m=+1.349674486 container died f42829f6d47cf1cd22151f71ecd201467dd652e690c26dce26b9bd96e2239b77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hofstadter, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 12:48:06 compute-0 nova_compute[257802]: 2025-10-02 12:48:06.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-92d3877ededd865c0f0e4ebcd12a49e37623af784035cd95e5e3e5940c9dcf17-merged.mount: Deactivated successfully.
Oct 02 12:48:06 compute-0 podman[364237]: 2025-10-02 12:48:06.848387273 +0000 UTC m=+1.712778205 container remove f42829f6d47cf1cd22151f71ecd201467dd652e690c26dce26b9bd96e2239b77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hofstadter, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:48:06 compute-0 sudo[364110]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:48:06 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:48:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:48:06 compute-0 nova_compute[257802]: 2025-10-02 12:48:06.958 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409271.9569972, 80786e14-5db1-47fe-94ae-7bd13aea6bb8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:48:06 compute-0 nova_compute[257802]: 2025-10-02 12:48:06.958 2 INFO nova.compute.manager [-] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] VM Stopped (Lifecycle Event)
Oct 02 12:48:06 compute-0 systemd[1]: libpod-conmon-f42829f6d47cf1cd22151f71ecd201467dd652e690c26dce26b9bd96e2239b77.scope: Deactivated successfully.
Oct 02 12:48:06 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:48:06 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 73f5e015-01a1-4157-aafd-390d7846c2be does not exist
Oct 02 12:48:06 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev ba1287c6-b549-4e31-ba8f-4f528886df1a does not exist
Oct 02 12:48:06 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 91dc26b6-4d52-43c9-8bb2-ea727556e882 does not exist
Oct 02 12:48:06 compute-0 nova_compute[257802]: 2025-10-02 12:48:06.993 2 DEBUG nova.compute.manager [None req-8e54fe63-3ce0-4f38-9e97-1e20abb4c6a9 - - - - - -] [instance: 80786e14-5db1-47fe-94ae-7bd13aea6bb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:48:07 compute-0 sudo[364288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:07 compute-0 sudo[364288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:07 compute-0 sudo[364288]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:07 compute-0 nova_compute[257802]: 2025-10-02 12:48:07.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:48:07 compute-0 nova_compute[257802]: 2025-10-02 12:48:07.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:48:07 compute-0 sudo[364313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:48:07 compute-0 sudo[364313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:07 compute-0 sudo[364313]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:48:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:07.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:48:07 compute-0 ceph-mon[73607]: pgmap v2645: 305 pgs: 305 active+clean; 317 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.2 MiB/s wr, 170 op/s
Oct 02 12:48:07 compute-0 ceph-mon[73607]: osdmap e362: 3 total, 3 up, 3 in
Oct 02 12:48:07 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:48:07 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:48:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2647: 305 pgs: 305 active+clean; 317 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 4.0 MiB/s wr, 131 op/s
Oct 02 12:48:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:48:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:08.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:48:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:09 compute-0 nova_compute[257802]: 2025-10-02 12:48:09.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:48:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:09.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:48:09 compute-0 nova_compute[257802]: 2025-10-02 12:48:09.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:09 compute-0 ceph-mon[73607]: pgmap v2647: 305 pgs: 305 active+clean; 317 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 4.0 MiB/s wr, 131 op/s
Oct 02 12:48:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2648: 305 pgs: 305 active+clean; 360 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 7.0 MiB/s wr, 154 op/s
Oct 02 12:48:10 compute-0 nova_compute[257802]: 2025-10-02 12:48:10.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:48:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:48:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:10.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:48:10 compute-0 ceph-mon[73607]: pgmap v2648: 305 pgs: 305 active+clean; 360 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 7.0 MiB/s wr, 154 op/s
Oct 02 12:48:10 compute-0 ovn_controller[148183]: 2025-10-02T12:48:10Z|00748|binding|INFO|Releasing lport d1c4722b-043f-4662-89a4-55feb7508119 from this chassis (sb_readonly=0)
Oct 02 12:48:10 compute-0 nova_compute[257802]: 2025-10-02 12:48:10.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:11.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:11 compute-0 nova_compute[257802]: 2025-10-02 12:48:11.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:11 compute-0 podman[364341]: 2025-10-02 12:48:11.925667616 +0000 UTC m=+0.056321624 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:48:11 compute-0 podman[364340]: 2025-10-02 12:48:11.926971838 +0000 UTC m=+0.060897916 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 12:48:11 compute-0 podman[364342]: 2025-10-02 12:48:11.927161983 +0000 UTC m=+0.052432959 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3)
Oct 02 12:48:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2649: 305 pgs: 305 active+clean; 360 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.9 MiB/s wr, 131 op/s
Oct 02 12:48:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:12.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:48:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:48:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:48:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:48:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:48:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:48:13 compute-0 nova_compute[257802]: 2025-10-02 12:48:13.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:48:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:13.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:48:13 compute-0 ceph-mon[73607]: pgmap v2649: 305 pgs: 305 active+clean; 360 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.9 MiB/s wr, 131 op/s
Oct 02 12:48:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e362 do_prune osdmap full prune enabled
Oct 02 12:48:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2650: 305 pgs: 305 active+clean; 360 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.7 MiB/s wr, 69 op/s
Oct 02 12:48:14 compute-0 nova_compute[257802]: 2025-10-02 12:48:14.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:48:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e363 e363: 3 total, 3 up, 3 in
Oct 02 12:48:14 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e363: 3 total, 3 up, 3 in
Oct 02 12:48:14 compute-0 nova_compute[257802]: 2025-10-02 12:48:14.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:14.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:14 compute-0 sudo[364396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:14 compute-0 sudo[364396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:14 compute-0 sudo[364396]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:14 compute-0 sudo[364421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:14 compute-0 sudo[364421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:14 compute-0 sudo[364421]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:15 compute-0 nova_compute[257802]: 2025-10-02 12:48:15.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:48:15 compute-0 ceph-mon[73607]: pgmap v2650: 305 pgs: 305 active+clean; 360 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.7 MiB/s wr, 69 op/s
Oct 02 12:48:15 compute-0 ceph-mon[73607]: osdmap e363: 3 total, 3 up, 3 in
Oct 02 12:48:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:15.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:16 compute-0 nova_compute[257802]: 2025-10-02 12:48:16.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:16 compute-0 podman[364448]: 2025-10-02 12:48:16.016674801 +0000 UTC m=+0.148582311 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 02 12:48:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2652: 305 pgs: 305 active+clean; 360 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.4 MiB/s wr, 78 op/s
Oct 02 12:48:16 compute-0 nova_compute[257802]: 2025-10-02 12:48:16.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:48:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1080719275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:16.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:16 compute-0 nova_compute[257802]: 2025-10-02 12:48:16.510 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409281.5082955, da95c339-6bd5-495a-bd12-d1e71a8017b6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:48:16 compute-0 nova_compute[257802]: 2025-10-02 12:48:16.511 2 INFO nova.compute.manager [-] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] VM Stopped (Lifecycle Event)
Oct 02 12:48:16 compute-0 nova_compute[257802]: 2025-10-02 12:48:16.539 2 DEBUG nova.compute.manager [None req-45050c64-3217-411f-b58a-2827469f6cbe - - - - - -] [instance: da95c339-6bd5-495a-bd12-d1e71a8017b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:48:16 compute-0 nova_compute[257802]: 2025-10-02 12:48:16.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:17.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:17 compute-0 ceph-mon[73607]: pgmap v2652: 305 pgs: 305 active+clean; 360 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.4 MiB/s wr, 78 op/s
Oct 02 12:48:17 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1882792622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2653: 305 pgs: 305 active+clean; 360 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.3 MiB/s wr, 75 op/s
Oct 02 12:48:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:18.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:19.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:19 compute-0 nova_compute[257802]: 2025-10-02 12:48:19.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:19 compute-0 ceph-mon[73607]: pgmap v2653: 305 pgs: 305 active+clean; 360 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.3 MiB/s wr, 75 op/s
Oct 02 12:48:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/949741358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2259826827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2654: 305 pgs: 305 active+clean; 406 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Oct 02 12:48:20 compute-0 nova_compute[257802]: 2025-10-02 12:48:20.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:48:20 compute-0 nova_compute[257802]: 2025-10-02 12:48:20.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:48:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:20.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:20 compute-0 ceph-mon[73607]: pgmap v2654: 305 pgs: 305 active+clean; 406 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Oct 02 12:48:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1654658840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:21.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:21 compute-0 nova_compute[257802]: 2025-10-02 12:48:21.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/207111384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2655: 305 pgs: 305 active+clean; 435 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 3.6 MiB/s wr, 91 op/s
Oct 02 12:48:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:22.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:23 compute-0 nova_compute[257802]: 2025-10-02 12:48:23.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2753409632' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:23 compute-0 ceph-mon[73607]: pgmap v2655: 305 pgs: 305 active+clean; 435 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 3.6 MiB/s wr, 91 op/s
Oct 02 12:48:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1462268955' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1839463791' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:23.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2656: 305 pgs: 305 active+clean; 451 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 72 KiB/s rd, 4.1 MiB/s wr, 102 op/s
Oct 02 12:48:24 compute-0 nova_compute[257802]: 2025-10-02 12:48:24.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:48:24 compute-0 nova_compute[257802]: 2025-10-02 12:48:24.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:48:24 compute-0 nova_compute[257802]: 2025-10-02 12:48:24.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:48:24 compute-0 nova_compute[257802]: 2025-10-02 12:48:24.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:24.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:24 compute-0 nova_compute[257802]: 2025-10-02 12:48:24.434 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-2340536c-13c5-4863-80fe-b3f9bc5dfe7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:48:24 compute-0 nova_compute[257802]: 2025-10-02 12:48:24.435 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-2340536c-13c5-4863-80fe-b3f9bc5dfe7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:48:24 compute-0 nova_compute[257802]: 2025-10-02 12:48:24.435 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:48:24 compute-0 nova_compute[257802]: 2025-10-02 12:48:24.436 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2340536c-13c5-4863-80fe-b3f9bc5dfe7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:48:25 compute-0 ceph-mon[73607]: pgmap v2656: 305 pgs: 305 active+clean; 451 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 72 KiB/s rd, 4.1 MiB/s wr, 102 op/s
Oct 02 12:48:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/574974802' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:25.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2657: 305 pgs: 305 active+clean; 453 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 71 KiB/s rd, 3.6 MiB/s wr, 100 op/s
Oct 02 12:48:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:26.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3848049764' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:26 compute-0 nova_compute[257802]: 2025-10-02 12:48:26.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:26.967 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:26.967 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:26.968 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:27.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:27 compute-0 ceph-mon[73607]: pgmap v2657: 305 pgs: 305 active+clean; 453 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 71 KiB/s rd, 3.6 MiB/s wr, 100 op/s
Oct 02 12:48:27 compute-0 nova_compute[257802]: 2025-10-02 12:48:27.830 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Updating instance_info_cache with network_info: [{"id": "aa08bc10-64b4-4b2b-88d3-2f1a994c799c", "address": "fa:16:3e:90:d0:c0", "network": {"id": "a0b40647-5bdc-42ab-8337-b3fcdc66ecfc", "bridge": "br-int", "label": "tempest-network-smoke--749866782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa08bc10-64", "ovs_interfaceid": "aa08bc10-64b4-4b2b-88d3-2f1a994c799c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:48:27 compute-0 nova_compute[257802]: 2025-10-02 12:48:27.864 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-2340536c-13c5-4863-80fe-b3f9bc5dfe7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:48:27 compute-0 nova_compute[257802]: 2025-10-02 12:48:27.864 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:48:27 compute-0 nova_compute[257802]: 2025-10-02 12:48:27.865 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:48:27 compute-0 nova_compute[257802]: 2025-10-02 12:48:27.941 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:27 compute-0 nova_compute[257802]: 2025-10-02 12:48:27.941 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:27 compute-0 nova_compute[257802]: 2025-10-02 12:48:27.941 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:27 compute-0 nova_compute[257802]: 2025-10-02 12:48:27.941 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:48:27 compute-0 nova_compute[257802]: 2025-10-02 12:48:27.942 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2658: 305 pgs: 305 active+clean; 453 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 3.6 MiB/s wr, 87 op/s
Oct 02 12:48:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:48:28 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3576815039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:28 compute-0 nova_compute[257802]: 2025-10-02 12:48:28.353 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:28.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3576815039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:28 compute-0 nova_compute[257802]: 2025-10-02 12:48:28.651 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000aa as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:48:28 compute-0 nova_compute[257802]: 2025-10-02 12:48:28.652 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000aa as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:48:28 compute-0 nova_compute[257802]: 2025-10-02 12:48:28.833 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:48:28 compute-0 nova_compute[257802]: 2025-10-02 12:48:28.834 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4065MB free_disk=20.855300903320312GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:48:28 compute-0 nova_compute[257802]: 2025-10-02 12:48:28.834 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:28 compute-0 nova_compute[257802]: 2025-10-02 12:48:28.834 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:28 compute-0 nova_compute[257802]: 2025-10-02 12:48:28.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:28.886 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:48:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:28.887 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:48:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:28.888 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:29 compute-0 nova_compute[257802]: 2025-10-02 12:48:29.112 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 2340536c-13c5-4863-80fe-b3f9bc5dfe7d actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:48:29 compute-0 nova_compute[257802]: 2025-10-02 12:48:29.112 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:48:29 compute-0 nova_compute[257802]: 2025-10-02 12:48:29.112 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:48:29 compute-0 nova_compute[257802]: 2025-10-02 12:48:29.169 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.002000048s ======
Oct 02 12:48:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:29.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Oct 02 12:48:29 compute-0 nova_compute[257802]: 2025-10-02 12:48:29.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:48:29 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/352135656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:29 compute-0 ceph-mon[73607]: pgmap v2658: 305 pgs: 305 active+clean; 453 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 3.6 MiB/s wr, 87 op/s
Oct 02 12:48:29 compute-0 nova_compute[257802]: 2025-10-02 12:48:29.640 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:29 compute-0 nova_compute[257802]: 2025-10-02 12:48:29.648 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:48:29 compute-0 nova_compute[257802]: 2025-10-02 12:48:29.754 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:48:29 compute-0 nova_compute[257802]: 2025-10-02 12:48:29.906 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:48:29 compute-0 nova_compute[257802]: 2025-10-02 12:48:29.907 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2659: 305 pgs: 305 active+clean; 453 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.6 MiB/s wr, 204 op/s
Oct 02 12:48:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:30.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/352135656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:31.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:31 compute-0 nova_compute[257802]: 2025-10-02 12:48:31.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:31 compute-0 ceph-mon[73607]: pgmap v2659: 305 pgs: 305 active+clean; 453 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.6 MiB/s wr, 204 op/s
Oct 02 12:48:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2660: 305 pgs: 305 active+clean; 453 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 1.8 MiB/s wr, 229 op/s
Oct 02 12:48:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:32.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:32 compute-0 ceph-mon[73607]: pgmap v2660: 305 pgs: 305 active+clean; 453 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 1.8 MiB/s wr, 229 op/s
Oct 02 12:48:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:33.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e363 do_prune osdmap full prune enabled
Oct 02 12:48:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e364 e364: 3 total, 3 up, 3 in
Oct 02 12:48:34 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e364: 3 total, 3 up, 3 in
Oct 02 12:48:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2662: 305 pgs: 305 active+clean; 453 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 159 KiB/s wr, 272 op/s
Oct 02 12:48:34 compute-0 nova_compute[257802]: 2025-10-02 12:48:34.266 2 DEBUG oslo_concurrency.lockutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:34 compute-0 nova_compute[257802]: 2025-10-02 12:48:34.268 2 DEBUG oslo_concurrency.lockutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:34 compute-0 nova_compute[257802]: 2025-10-02 12:48:34.293 2 DEBUG nova.compute.manager [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:48:34 compute-0 nova_compute[257802]: 2025-10-02 12:48:34.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:34 compute-0 nova_compute[257802]: 2025-10-02 12:48:34.362 2 DEBUG oslo_concurrency.lockutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:34 compute-0 nova_compute[257802]: 2025-10-02 12:48:34.362 2 DEBUG oslo_concurrency.lockutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:48:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:34.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:48:34 compute-0 nova_compute[257802]: 2025-10-02 12:48:34.370 2 DEBUG nova.virt.hardware [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:48:34 compute-0 nova_compute[257802]: 2025-10-02 12:48:34.370 2 INFO nova.compute.claims [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:48:34 compute-0 nova_compute[257802]: 2025-10-02 12:48:34.516 2 DEBUG oslo_concurrency.processutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:34 compute-0 sudo[364539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:34 compute-0 sudo[364539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:34 compute-0 sudo[364539]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:34 compute-0 sudo[364574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:34 compute-0 sudo[364574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:34 compute-0 sudo[364574]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:48:34 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3413541723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:35 compute-0 nova_compute[257802]: 2025-10-02 12:48:35.000 2 DEBUG oslo_concurrency.processutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:35 compute-0 nova_compute[257802]: 2025-10-02 12:48:35.006 2 DEBUG nova.compute.provider_tree [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:48:35 compute-0 nova_compute[257802]: 2025-10-02 12:48:35.024 2 DEBUG nova.scheduler.client.report [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:48:35 compute-0 nova_compute[257802]: 2025-10-02 12:48:35.055 2 DEBUG oslo_concurrency.lockutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:35 compute-0 nova_compute[257802]: 2025-10-02 12:48:35.056 2 DEBUG nova.compute.manager [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:48:35 compute-0 ceph-mon[73607]: osdmap e364: 3 total, 3 up, 3 in
Oct 02 12:48:35 compute-0 ceph-mon[73607]: pgmap v2662: 305 pgs: 305 active+clean; 453 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 159 KiB/s wr, 272 op/s
Oct 02 12:48:35 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3413541723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:35 compute-0 nova_compute[257802]: 2025-10-02 12:48:35.112 2 DEBUG nova.compute.manager [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:48:35 compute-0 nova_compute[257802]: 2025-10-02 12:48:35.112 2 DEBUG nova.network.neutron [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:48:35 compute-0 nova_compute[257802]: 2025-10-02 12:48:35.138 2 INFO nova.virt.libvirt.driver [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:48:35 compute-0 nova_compute[257802]: 2025-10-02 12:48:35.155 2 DEBUG nova.compute.manager [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:48:35 compute-0 nova_compute[257802]: 2025-10-02 12:48:35.248 2 DEBUG nova.compute.manager [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:48:35 compute-0 nova_compute[257802]: 2025-10-02 12:48:35.249 2 DEBUG nova.virt.libvirt.driver [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:48:35 compute-0 nova_compute[257802]: 2025-10-02 12:48:35.249 2 INFO nova.virt.libvirt.driver [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Creating image(s)
Oct 02 12:48:35 compute-0 nova_compute[257802]: 2025-10-02 12:48:35.284 2 DEBUG nova.storage.rbd_utils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image a44a93f4-fbf3-4ba6-9111-36dbf856ddfa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:48:35 compute-0 nova_compute[257802]: 2025-10-02 12:48:35.313 2 DEBUG nova.storage.rbd_utils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image a44a93f4-fbf3-4ba6-9111-36dbf856ddfa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:48:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:35.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:35 compute-0 nova_compute[257802]: 2025-10-02 12:48:35.340 2 DEBUG nova.storage.rbd_utils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image a44a93f4-fbf3-4ba6-9111-36dbf856ddfa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:48:35 compute-0 nova_compute[257802]: 2025-10-02 12:48:35.343 2 DEBUG oslo_concurrency.processutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:35 compute-0 nova_compute[257802]: 2025-10-02 12:48:35.373 2 DEBUG nova.policy [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb366465e6154871b8a53c9f500105ce', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:48:35 compute-0 nova_compute[257802]: 2025-10-02 12:48:35.412 2 DEBUG oslo_concurrency.processutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:35 compute-0 nova_compute[257802]: 2025-10-02 12:48:35.413 2 DEBUG oslo_concurrency.lockutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:35 compute-0 nova_compute[257802]: 2025-10-02 12:48:35.413 2 DEBUG oslo_concurrency.lockutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:35 compute-0 nova_compute[257802]: 2025-10-02 12:48:35.414 2 DEBUG oslo_concurrency.lockutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:35 compute-0 nova_compute[257802]: 2025-10-02 12:48:35.442 2 DEBUG nova.storage.rbd_utils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image a44a93f4-fbf3-4ba6-9111-36dbf856ddfa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:48:35 compute-0 nova_compute[257802]: 2025-10-02 12:48:35.446 2 DEBUG oslo_concurrency.processutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 a44a93f4-fbf3-4ba6-9111-36dbf856ddfa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:35 compute-0 nova_compute[257802]: 2025-10-02 12:48:35.909 2 DEBUG oslo_concurrency.processutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 a44a93f4-fbf3-4ba6-9111-36dbf856ddfa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:35 compute-0 nova_compute[257802]: 2025-10-02 12:48:35.976 2 DEBUG nova.storage.rbd_utils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] resizing rbd image a44a93f4-fbf3-4ba6-9111-36dbf856ddfa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:48:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2663: 305 pgs: 305 active+clean; 472 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 1005 KiB/s wr, 305 op/s
Oct 02 12:48:36 compute-0 nova_compute[257802]: 2025-10-02 12:48:36.179 2 DEBUG nova.objects.instance [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'migration_context' on Instance uuid a44a93f4-fbf3-4ba6-9111-36dbf856ddfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:48:36 compute-0 nova_compute[257802]: 2025-10-02 12:48:36.194 2 DEBUG nova.virt.libvirt.driver [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:48:36 compute-0 nova_compute[257802]: 2025-10-02 12:48:36.195 2 DEBUG nova.virt.libvirt.driver [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Ensure instance console log exists: /var/lib/nova/instances/a44a93f4-fbf3-4ba6-9111-36dbf856ddfa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:48:36 compute-0 nova_compute[257802]: 2025-10-02 12:48:36.195 2 DEBUG oslo_concurrency.lockutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:36 compute-0 nova_compute[257802]: 2025-10-02 12:48:36.195 2 DEBUG oslo_concurrency.lockutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:36 compute-0 nova_compute[257802]: 2025-10-02 12:48:36.196 2 DEBUG oslo_concurrency.lockutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e364 do_prune osdmap full prune enabled
Oct 02 12:48:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e365 e365: 3 total, 3 up, 3 in
Oct 02 12:48:36 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e365: 3 total, 3 up, 3 in
Oct 02 12:48:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:36.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:36 compute-0 nova_compute[257802]: 2025-10-02 12:48:36.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:36 compute-0 nova_compute[257802]: 2025-10-02 12:48:36.945 2 DEBUG nova.network.neutron [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Successfully created port: b1e398cd-0bdc-4194-b98d-41c1a091d947 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:48:37 compute-0 nova_compute[257802]: 2025-10-02 12:48:37.141 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:48:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e365 do_prune osdmap full prune enabled
Oct 02 12:48:37 compute-0 ceph-mon[73607]: pgmap v2663: 305 pgs: 305 active+clean; 472 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 1005 KiB/s wr, 305 op/s
Oct 02 12:48:37 compute-0 ceph-mon[73607]: osdmap e365: 3 total, 3 up, 3 in
Oct 02 12:48:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:37.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e366 e366: 3 total, 3 up, 3 in
Oct 02 12:48:37 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e366: 3 total, 3 up, 3 in
Oct 02 12:48:37 compute-0 nova_compute[257802]: 2025-10-02 12:48:37.988 2 DEBUG nova.network.neutron [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Successfully updated port: b1e398cd-0bdc-4194-b98d-41c1a091d947 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:48:38 compute-0 nova_compute[257802]: 2025-10-02 12:48:38.013 2 DEBUG oslo_concurrency.lockutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "refresh_cache-a44a93f4-fbf3-4ba6-9111-36dbf856ddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:48:38 compute-0 nova_compute[257802]: 2025-10-02 12:48:38.013 2 DEBUG oslo_concurrency.lockutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquired lock "refresh_cache-a44a93f4-fbf3-4ba6-9111-36dbf856ddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:48:38 compute-0 nova_compute[257802]: 2025-10-02 12:48:38.013 2 DEBUG nova.network.neutron [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:48:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2666: 305 pgs: 305 active+clean; 472 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.6 MiB/s wr, 122 op/s
Oct 02 12:48:38 compute-0 nova_compute[257802]: 2025-10-02 12:48:38.106 2 DEBUG nova.compute.manager [req-a34a15d3-f196-417b-bca0-004b3ab281e9 req-094c9dd9-3454-43f7-b020-57ab440a70ae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Received event network-changed-b1e398cd-0bdc-4194-b98d-41c1a091d947 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:48:38 compute-0 nova_compute[257802]: 2025-10-02 12:48:38.107 2 DEBUG nova.compute.manager [req-a34a15d3-f196-417b-bca0-004b3ab281e9 req-094c9dd9-3454-43f7-b020-57ab440a70ae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Refreshing instance network info cache due to event network-changed-b1e398cd-0bdc-4194-b98d-41c1a091d947. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:48:38 compute-0 nova_compute[257802]: 2025-10-02 12:48:38.107 2 DEBUG oslo_concurrency.lockutils [req-a34a15d3-f196-417b-bca0-004b3ab281e9 req-094c9dd9-3454-43f7-b020-57ab440a70ae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-a44a93f4-fbf3-4ba6-9111-36dbf856ddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:48:38 compute-0 nova_compute[257802]: 2025-10-02 12:48:38.215 2 DEBUG nova.network.neutron [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:48:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:38.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:38 compute-0 ceph-mon[73607]: osdmap e366: 3 total, 3 up, 3 in
Oct 02 12:48:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:39 compute-0 nova_compute[257802]: 2025-10-02 12:48:39.242 2 DEBUG nova.network.neutron [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Updating instance_info_cache with network_info: [{"id": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "address": "fa:16:3e:97:95:8d", "network": {"id": "7ab6b6d4-5590-4247-9dd8-59243897cce9", "bridge": "br-int", "label": "tempest-network-smoke--615376330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1e398cd-0b", "ovs_interfaceid": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:48:39 compute-0 nova_compute[257802]: 2025-10-02 12:48:39.264 2 DEBUG oslo_concurrency.lockutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Releasing lock "refresh_cache-a44a93f4-fbf3-4ba6-9111-36dbf856ddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:48:39 compute-0 nova_compute[257802]: 2025-10-02 12:48:39.264 2 DEBUG nova.compute.manager [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Instance network_info: |[{"id": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "address": "fa:16:3e:97:95:8d", "network": {"id": "7ab6b6d4-5590-4247-9dd8-59243897cce9", "bridge": "br-int", "label": "tempest-network-smoke--615376330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1e398cd-0b", "ovs_interfaceid": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:48:39 compute-0 nova_compute[257802]: 2025-10-02 12:48:39.265 2 DEBUG oslo_concurrency.lockutils [req-a34a15d3-f196-417b-bca0-004b3ab281e9 req-094c9dd9-3454-43f7-b020-57ab440a70ae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-a44a93f4-fbf3-4ba6-9111-36dbf856ddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:48:39 compute-0 nova_compute[257802]: 2025-10-02 12:48:39.265 2 DEBUG nova.network.neutron [req-a34a15d3-f196-417b-bca0-004b3ab281e9 req-094c9dd9-3454-43f7-b020-57ab440a70ae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Refreshing network info cache for port b1e398cd-0bdc-4194-b98d-41c1a091d947 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:48:39 compute-0 nova_compute[257802]: 2025-10-02 12:48:39.267 2 DEBUG nova.virt.libvirt.driver [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Start _get_guest_xml network_info=[{"id": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "address": "fa:16:3e:97:95:8d", "network": {"id": "7ab6b6d4-5590-4247-9dd8-59243897cce9", "bridge": "br-int", "label": "tempest-network-smoke--615376330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1e398cd-0b", "ovs_interfaceid": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:48:39 compute-0 nova_compute[257802]: 2025-10-02 12:48:39.271 2 WARNING nova.virt.libvirt.driver [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:48:39 compute-0 nova_compute[257802]: 2025-10-02 12:48:39.277 2 DEBUG nova.virt.libvirt.host [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:48:39 compute-0 nova_compute[257802]: 2025-10-02 12:48:39.278 2 DEBUG nova.virt.libvirt.host [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:48:39 compute-0 nova_compute[257802]: 2025-10-02 12:48:39.285 2 DEBUG nova.virt.libvirt.host [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:48:39 compute-0 nova_compute[257802]: 2025-10-02 12:48:39.285 2 DEBUG nova.virt.libvirt.host [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:48:39 compute-0 nova_compute[257802]: 2025-10-02 12:48:39.286 2 DEBUG nova.virt.libvirt.driver [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:48:39 compute-0 nova_compute[257802]: 2025-10-02 12:48:39.286 2 DEBUG nova.virt.hardware [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:48:39 compute-0 nova_compute[257802]: 2025-10-02 12:48:39.287 2 DEBUG nova.virt.hardware [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:48:39 compute-0 nova_compute[257802]: 2025-10-02 12:48:39.287 2 DEBUG nova.virt.hardware [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:48:39 compute-0 nova_compute[257802]: 2025-10-02 12:48:39.287 2 DEBUG nova.virt.hardware [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:48:39 compute-0 nova_compute[257802]: 2025-10-02 12:48:39.287 2 DEBUG nova.virt.hardware [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:48:39 compute-0 nova_compute[257802]: 2025-10-02 12:48:39.288 2 DEBUG nova.virt.hardware [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:48:39 compute-0 nova_compute[257802]: 2025-10-02 12:48:39.288 2 DEBUG nova.virt.hardware [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:48:39 compute-0 nova_compute[257802]: 2025-10-02 12:48:39.288 2 DEBUG nova.virt.hardware [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:48:39 compute-0 nova_compute[257802]: 2025-10-02 12:48:39.288 2 DEBUG nova.virt.hardware [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:48:39 compute-0 nova_compute[257802]: 2025-10-02 12:48:39.288 2 DEBUG nova.virt.hardware [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:48:39 compute-0 nova_compute[257802]: 2025-10-02 12:48:39.289 2 DEBUG nova.virt.hardware [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:48:39 compute-0 nova_compute[257802]: 2025-10-02 12:48:39.291 2 DEBUG oslo_concurrency.processutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:48:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:39.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:48:39 compute-0 nova_compute[257802]: 2025-10-02 12:48:39.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:39 compute-0 ceph-mon[73607]: pgmap v2666: 305 pgs: 305 active+clean; 472 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.6 MiB/s wr, 122 op/s
Oct 02 12:48:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:48:39 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/717993820' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:39 compute-0 nova_compute[257802]: 2025-10-02 12:48:39.742 2 DEBUG oslo_concurrency.processutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:39 compute-0 nova_compute[257802]: 2025-10-02 12:48:39.793 2 DEBUG nova.storage.rbd_utils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image a44a93f4-fbf3-4ba6-9111-36dbf856ddfa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:48:39 compute-0 nova_compute[257802]: 2025-10-02 12:48:39.797 2 DEBUG oslo_concurrency.processutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2667: 305 pgs: 305 active+clean; 572 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 11 MiB/s wr, 311 op/s
Oct 02 12:48:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:48:40 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1027014302' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:40 compute-0 nova_compute[257802]: 2025-10-02 12:48:40.230 2 DEBUG oslo_concurrency.processutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:40 compute-0 nova_compute[257802]: 2025-10-02 12:48:40.232 2 DEBUG nova.virt.libvirt.vif [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:48:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1210888659',display_name='tempest-TestNetworkBasicOps-server-1210888659',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1210888659',id=174,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOuizzzsJ0xQoaj6QWBctuJTCzZuNABfaUbqmY1NfxPiQQ1W4zoCTJjFgqJkZAPE8tNumkBg7/MpTOE+q4DWN6dEyxAopDry/w0CNriaUKHv801j6Cb4/rGJW3h1iORzw==',key_name='tempest-TestNetworkBasicOps-2008743034',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-ghrwnmf1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:48:35Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=a44a93f4-fbf3-4ba6-9111-36dbf856ddfa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "address": "fa:16:3e:97:95:8d", "network": {"id": "7ab6b6d4-5590-4247-9dd8-59243897cce9", "bridge": "br-int", "label": "tempest-network-smoke--615376330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1e398cd-0b", "ovs_interfaceid": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:48:40 compute-0 nova_compute[257802]: 2025-10-02 12:48:40.232 2 DEBUG nova.network.os_vif_util [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "address": "fa:16:3e:97:95:8d", "network": {"id": "7ab6b6d4-5590-4247-9dd8-59243897cce9", "bridge": "br-int", "label": "tempest-network-smoke--615376330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1e398cd-0b", "ovs_interfaceid": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:48:40 compute-0 nova_compute[257802]: 2025-10-02 12:48:40.233 2 DEBUG nova.network.os_vif_util [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:95:8d,bridge_name='br-int',has_traffic_filtering=True,id=b1e398cd-0bdc-4194-b98d-41c1a091d947,network=Network(7ab6b6d4-5590-4247-9dd8-59243897cce9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1e398cd-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:48:40 compute-0 nova_compute[257802]: 2025-10-02 12:48:40.234 2 DEBUG nova.objects.instance [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'pci_devices' on Instance uuid a44a93f4-fbf3-4ba6-9111-36dbf856ddfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:48:40 compute-0 nova_compute[257802]: 2025-10-02 12:48:40.252 2 DEBUG nova.virt.libvirt.driver [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:48:40 compute-0 nova_compute[257802]:   <uuid>a44a93f4-fbf3-4ba6-9111-36dbf856ddfa</uuid>
Oct 02 12:48:40 compute-0 nova_compute[257802]:   <name>instance-000000ae</name>
Oct 02 12:48:40 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:48:40 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:48:40 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:48:40 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:       <nova:name>tempest-TestNetworkBasicOps-server-1210888659</nova:name>
Oct 02 12:48:40 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:48:39</nova:creationTime>
Oct 02 12:48:40 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:48:40 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:48:40 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:48:40 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:48:40 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:48:40 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:48:40 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:48:40 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:48:40 compute-0 nova_compute[257802]:         <nova:user uuid="fb366465e6154871b8a53c9f500105ce">tempest-TestNetworkBasicOps-1692262680-project-member</nova:user>
Oct 02 12:48:40 compute-0 nova_compute[257802]:         <nova:project uuid="ce2ca82c03554560b55ed747ae63f1fb">tempest-TestNetworkBasicOps-1692262680</nova:project>
Oct 02 12:48:40 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:48:40 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:48:40 compute-0 nova_compute[257802]:         <nova:port uuid="b1e398cd-0bdc-4194-b98d-41c1a091d947">
Oct 02 12:48:40 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:48:40 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:48:40 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:48:40 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <system>
Oct 02 12:48:40 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:48:40 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:48:40 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:48:40 compute-0 nova_compute[257802]:       <entry name="serial">a44a93f4-fbf3-4ba6-9111-36dbf856ddfa</entry>
Oct 02 12:48:40 compute-0 nova_compute[257802]:       <entry name="uuid">a44a93f4-fbf3-4ba6-9111-36dbf856ddfa</entry>
Oct 02 12:48:40 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     </system>
Oct 02 12:48:40 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:48:40 compute-0 nova_compute[257802]:   <os>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:   </os>
Oct 02 12:48:40 compute-0 nova_compute[257802]:   <features>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:   </features>
Oct 02 12:48:40 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:48:40 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:48:40 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:48:40 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/a44a93f4-fbf3-4ba6-9111-36dbf856ddfa_disk">
Oct 02 12:48:40 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:       </source>
Oct 02 12:48:40 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:48:40 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:48:40 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:48:40 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/a44a93f4-fbf3-4ba6-9111-36dbf856ddfa_disk.config">
Oct 02 12:48:40 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:       </source>
Oct 02 12:48:40 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:48:40 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:48:40 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:48:40 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:97:95:8d"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:       <target dev="tapb1e398cd-0b"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:48:40 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/a44a93f4-fbf3-4ba6-9111-36dbf856ddfa/console.log" append="off"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <video>
Oct 02 12:48:40 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     </video>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:48:40 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:48:40 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:48:40 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:48:40 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:48:40 compute-0 nova_compute[257802]: </domain>
Oct 02 12:48:40 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:48:40 compute-0 nova_compute[257802]: 2025-10-02 12:48:40.253 2 DEBUG nova.compute.manager [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Preparing to wait for external event network-vif-plugged-b1e398cd-0bdc-4194-b98d-41c1a091d947 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:48:40 compute-0 nova_compute[257802]: 2025-10-02 12:48:40.254 2 DEBUG oslo_concurrency.lockutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:40 compute-0 nova_compute[257802]: 2025-10-02 12:48:40.254 2 DEBUG oslo_concurrency.lockutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:40 compute-0 nova_compute[257802]: 2025-10-02 12:48:40.254 2 DEBUG oslo_concurrency.lockutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:40 compute-0 nova_compute[257802]: 2025-10-02 12:48:40.255 2 DEBUG nova.virt.libvirt.vif [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:48:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1210888659',display_name='tempest-TestNetworkBasicOps-server-1210888659',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1210888659',id=174,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOuizzzsJ0xQoaj6QWBctuJTCzZuNABfaUbqmY1NfxPiQQ1W4zoCTJjFgqJkZAPE8tNumkBg7/MpTOE+q4DWN6dEyxAopDry/w0CNriaUKHv801j6Cb4/rGJW3h1iORzw==',key_name='tempest-TestNetworkBasicOps-2008743034',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-ghrwnmf1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:48:35Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=a44a93f4-fbf3-4ba6-9111-36dbf856ddfa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "address": "fa:16:3e:97:95:8d", "network": {"id": "7ab6b6d4-5590-4247-9dd8-59243897cce9", "bridge": "br-int", "label": "tempest-network-smoke--615376330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1e398cd-0b", "ovs_interfaceid": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:48:40 compute-0 nova_compute[257802]: 2025-10-02 12:48:40.255 2 DEBUG nova.network.os_vif_util [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "address": "fa:16:3e:97:95:8d", "network": {"id": "7ab6b6d4-5590-4247-9dd8-59243897cce9", "bridge": "br-int", "label": "tempest-network-smoke--615376330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1e398cd-0b", "ovs_interfaceid": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:48:40 compute-0 nova_compute[257802]: 2025-10-02 12:48:40.255 2 DEBUG nova.network.os_vif_util [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:95:8d,bridge_name='br-int',has_traffic_filtering=True,id=b1e398cd-0bdc-4194-b98d-41c1a091d947,network=Network(7ab6b6d4-5590-4247-9dd8-59243897cce9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1e398cd-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:48:40 compute-0 nova_compute[257802]: 2025-10-02 12:48:40.256 2 DEBUG os_vif [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:95:8d,bridge_name='br-int',has_traffic_filtering=True,id=b1e398cd-0bdc-4194-b98d-41c1a091d947,network=Network(7ab6b6d4-5590-4247-9dd8-59243897cce9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1e398cd-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:48:40 compute-0 nova_compute[257802]: 2025-10-02 12:48:40.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:40 compute-0 nova_compute[257802]: 2025-10-02 12:48:40.257 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:40 compute-0 nova_compute[257802]: 2025-10-02 12:48:40.257 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:48:40 compute-0 nova_compute[257802]: 2025-10-02 12:48:40.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:40 compute-0 nova_compute[257802]: 2025-10-02 12:48:40.260 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1e398cd-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:40 compute-0 nova_compute[257802]: 2025-10-02 12:48:40.260 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb1e398cd-0b, col_values=(('external_ids', {'iface-id': 'b1e398cd-0bdc-4194-b98d-41c1a091d947', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:97:95:8d', 'vm-uuid': 'a44a93f4-fbf3-4ba6-9111-36dbf856ddfa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:40 compute-0 nova_compute[257802]: 2025-10-02 12:48:40.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:40 compute-0 NetworkManager[44987]: <info>  [1759409320.2628] manager: (tapb1e398cd-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/338)
Oct 02 12:48:40 compute-0 nova_compute[257802]: 2025-10-02 12:48:40.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:48:40 compute-0 nova_compute[257802]: 2025-10-02 12:48:40.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:40 compute-0 nova_compute[257802]: 2025-10-02 12:48:40.269 2 INFO os_vif [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:95:8d,bridge_name='br-int',has_traffic_filtering=True,id=b1e398cd-0bdc-4194-b98d-41c1a091d947,network=Network(7ab6b6d4-5590-4247-9dd8-59243897cce9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1e398cd-0b')
Oct 02 12:48:40 compute-0 nova_compute[257802]: 2025-10-02 12:48:40.325 2 DEBUG nova.virt.libvirt.driver [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:48:40 compute-0 nova_compute[257802]: 2025-10-02 12:48:40.326 2 DEBUG nova.virt.libvirt.driver [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:48:40 compute-0 nova_compute[257802]: 2025-10-02 12:48:40.326 2 DEBUG nova.virt.libvirt.driver [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No VIF found with MAC fa:16:3e:97:95:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:48:40 compute-0 nova_compute[257802]: 2025-10-02 12:48:40.326 2 INFO nova.virt.libvirt.driver [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Using config drive
Oct 02 12:48:40 compute-0 nova_compute[257802]: 2025-10-02 12:48:40.352 2 DEBUG nova.storage.rbd_utils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image a44a93f4-fbf3-4ba6-9111-36dbf856ddfa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:48:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:40.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/717993820' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1027014302' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:41 compute-0 nova_compute[257802]: 2025-10-02 12:48:41.072 2 INFO nova.virt.libvirt.driver [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Creating config drive at /var/lib/nova/instances/a44a93f4-fbf3-4ba6-9111-36dbf856ddfa/disk.config
Oct 02 12:48:41 compute-0 nova_compute[257802]: 2025-10-02 12:48:41.077 2 DEBUG oslo_concurrency.processutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a44a93f4-fbf3-4ba6-9111-36dbf856ddfa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfykhoa3r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:41 compute-0 nova_compute[257802]: 2025-10-02 12:48:41.209 2 DEBUG oslo_concurrency.processutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a44a93f4-fbf3-4ba6-9111-36dbf856ddfa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfykhoa3r" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:41 compute-0 nova_compute[257802]: 2025-10-02 12:48:41.241 2 DEBUG nova.storage.rbd_utils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image a44a93f4-fbf3-4ba6-9111-36dbf856ddfa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:48:41 compute-0 nova_compute[257802]: 2025-10-02 12:48:41.245 2 DEBUG oslo_concurrency.processutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a44a93f4-fbf3-4ba6-9111-36dbf856ddfa/disk.config a44a93f4-fbf3-4ba6-9111-36dbf856ddfa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:41.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:41 compute-0 nova_compute[257802]: 2025-10-02 12:48:41.472 2 DEBUG nova.network.neutron [req-a34a15d3-f196-417b-bca0-004b3ab281e9 req-094c9dd9-3454-43f7-b020-57ab440a70ae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Updated VIF entry in instance network info cache for port b1e398cd-0bdc-4194-b98d-41c1a091d947. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:48:41 compute-0 nova_compute[257802]: 2025-10-02 12:48:41.473 2 DEBUG nova.network.neutron [req-a34a15d3-f196-417b-bca0-004b3ab281e9 req-094c9dd9-3454-43f7-b020-57ab440a70ae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Updating instance_info_cache with network_info: [{"id": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "address": "fa:16:3e:97:95:8d", "network": {"id": "7ab6b6d4-5590-4247-9dd8-59243897cce9", "bridge": "br-int", "label": "tempest-network-smoke--615376330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1e398cd-0b", "ovs_interfaceid": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:48:41 compute-0 nova_compute[257802]: 2025-10-02 12:48:41.493 2 DEBUG oslo_concurrency.lockutils [req-a34a15d3-f196-417b-bca0-004b3ab281e9 req-094c9dd9-3454-43f7-b020-57ab440a70ae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-a44a93f4-fbf3-4ba6-9111-36dbf856ddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:48:42 compute-0 ceph-mon[73607]: pgmap v2667: 305 pgs: 305 active+clean; 572 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 11 MiB/s wr, 311 op/s
Oct 02 12:48:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2668: 305 pgs: 305 active+clean; 593 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.1 MiB/s rd, 9.3 MiB/s wr, 320 op/s
Oct 02 12:48:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:42.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:48:42
Oct 02 12:48:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:48:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:48:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['.mgr', 'vms', 'default.rgw.log', 'volumes', 'backups', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'images']
Oct 02 12:48:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:48:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:48:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:48:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:48:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:48:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:48:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:48:42 compute-0 nova_compute[257802]: 2025-10-02 12:48:42.875 2 DEBUG oslo_concurrency.processutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a44a93f4-fbf3-4ba6-9111-36dbf856ddfa/disk.config a44a93f4-fbf3-4ba6-9111-36dbf856ddfa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:42 compute-0 nova_compute[257802]: 2025-10-02 12:48:42.876 2 INFO nova.virt.libvirt.driver [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Deleting local config drive /var/lib/nova/instances/a44a93f4-fbf3-4ba6-9111-36dbf856ddfa/disk.config because it was imported into RBD.
Oct 02 12:48:42 compute-0 podman[364894]: 2025-10-02 12:48:42.94326269 +0000 UTC m=+0.081088843 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, 
container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:48:42 compute-0 kernel: tapb1e398cd-0b: entered promiscuous mode
Oct 02 12:48:42 compute-0 NetworkManager[44987]: <info>  [1759409322.9554] manager: (tapb1e398cd-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/339)
Oct 02 12:48:42 compute-0 nova_compute[257802]: 2025-10-02 12:48:42.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:42 compute-0 ovn_controller[148183]: 2025-10-02T12:48:42Z|00749|binding|INFO|Claiming lport b1e398cd-0bdc-4194-b98d-41c1a091d947 for this chassis.
Oct 02 12:48:42 compute-0 ovn_controller[148183]: 2025-10-02T12:48:42Z|00750|binding|INFO|b1e398cd-0bdc-4194-b98d-41c1a091d947: Claiming fa:16:3e:97:95:8d 10.100.0.10
Oct 02 12:48:42 compute-0 nova_compute[257802]: 2025-10-02 12:48:42.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:42.979 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:95:8d 10.100.0.10'], port_security=['fa:16:3e:97:95:8d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a44a93f4-fbf3-4ba6-9111-36dbf856ddfa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ab6b6d4-5590-4247-9dd8-59243897cce9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7903f35c-f06f-45c6-b8fd-ceb1b636ba65', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=16ce81b6-9ce6-4724-b3e5-191386c7c3a3, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=b1e398cd-0bdc-4194-b98d-41c1a091d947) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:48:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:42.981 158261 INFO neutron.agent.ovn.metadata.agent [-] Port b1e398cd-0bdc-4194-b98d-41c1a091d947 in datapath 7ab6b6d4-5590-4247-9dd8-59243897cce9 bound to our chassis
Oct 02 12:48:42 compute-0 ovn_controller[148183]: 2025-10-02T12:48:42Z|00751|binding|INFO|Setting lport b1e398cd-0bdc-4194-b98d-41c1a091d947 ovn-installed in OVS
Oct 02 12:48:42 compute-0 ovn_controller[148183]: 2025-10-02T12:48:42Z|00752|binding|INFO|Setting lport b1e398cd-0bdc-4194-b98d-41c1a091d947 up in Southbound
Oct 02 12:48:42 compute-0 nova_compute[257802]: 2025-10-02 12:48:42.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:42 compute-0 systemd-udevd[364966]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:48:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:42.983 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ab6b6d4-5590-4247-9dd8-59243897cce9
Oct 02 12:48:42 compute-0 systemd-machined[211836]: New machine qemu-84-instance-000000ae.
Oct 02 12:48:42 compute-0 podman[364896]: 2025-10-02 12:48:42.991611507 +0000 UTC m=+0.099365752 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 12:48:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:42.993 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1215a3da-0499-459a-aed0-42c1959fc1f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:42.996 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7ab6b6d4-51 in ovnmeta-7ab6b6d4-5590-4247-9dd8-59243897cce9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:48:42 compute-0 systemd[1]: Started Virtual Machine qemu-84-instance-000000ae.
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:42.998 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7ab6b6d4-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:42.998 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[15995f4d-4fb3-4b64-a40c-6ebf295efcd0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:43 compute-0 podman[364895]: 2025-10-02 12:48:43.001807507 +0000 UTC m=+0.125904353 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:48:43 compute-0 NetworkManager[44987]: <info>  [1759409323.0022] device (tapb1e398cd-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:48:43 compute-0 NetworkManager[44987]: <info>  [1759409323.0034] device (tapb1e398cd-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:43.001 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f361d2ff-0693-48f5-af47-9963ce0f9a14]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:43.015 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[1037ea58-fcd1-4570-a2ac-5f1acf591bf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:43.037 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b4f8f652-4ccb-4df3-a727-81c42c84087f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:43.062 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[50c96655-bc9f-47bb-889b-7ed75b3b62dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:43 compute-0 NetworkManager[44987]: <info>  [1759409323.0680] manager: (tap7ab6b6d4-50): new Veth device (/org/freedesktop/NetworkManager/Devices/340)
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:43.067 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[752e45dc-a042-4547-826b-ceaee29531dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:43.095 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[afabbf7e-ba80-45cb-9bb6-1e0dff6c9fff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:43.098 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[04e241f9-c1d0-4eb2-b1c7-2cdd78fddfd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:43 compute-0 NetworkManager[44987]: <info>  [1759409323.1181] device (tap7ab6b6d4-50): carrier: link connected
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:43.124 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[087368f2-ff39-4996-9ab7-fbfea0e40a38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:43.139 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b055b088-9b74-4a2d-a1c2-b5096be0fad1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ab6b6d4-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:d0:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 229], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739074, 'reachable_time': 35704, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365000, 'error': None, 'target': 'ovnmeta-7ab6b6d4-5590-4247-9dd8-59243897cce9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:43.154 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[afaf8935-fe45-42c1-b01d-c04b74f29fa9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe06:d0c4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739074, 'tstamp': 739074}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365001, 'error': None, 'target': 'ovnmeta-7ab6b6d4-5590-4247-9dd8-59243897cce9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:43.167 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c71826bd-2699-4a5e-9d44-3565d4c320ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ab6b6d4-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:d0:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 229], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739074, 'reachable_time': 35704, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 365002, 'error': None, 'target': 'ovnmeta-7ab6b6d4-5590-4247-9dd8-59243897cce9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:43.197 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c24c76d7-69d2-45d9-bde2-2ac32c8f4112]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:43 compute-0 ceph-mon[73607]: pgmap v2668: 305 pgs: 305 active+clean; 593 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.1 MiB/s rd, 9.3 MiB/s wr, 320 op/s
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:43.254 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3f01a7a4-7d8a-4cc0-8434-1c67e0760a80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:43.255 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ab6b6d4-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:43.256 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:43.256 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ab6b6d4-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:43 compute-0 NetworkManager[44987]: <info>  [1759409323.2585] manager: (tap7ab6b6d4-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/341)
Oct 02 12:48:43 compute-0 nova_compute[257802]: 2025-10-02 12:48:43.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:43 compute-0 kernel: tap7ab6b6d4-50: entered promiscuous mode
Oct 02 12:48:43 compute-0 nova_compute[257802]: 2025-10-02 12:48:43.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:43.263 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ab6b6d4-50, col_values=(('external_ids', {'iface-id': 'cd85454a-320b-4c80-984b-1f77580d2ea7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:43 compute-0 nova_compute[257802]: 2025-10-02 12:48:43.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:43 compute-0 ovn_controller[148183]: 2025-10-02T12:48:43Z|00753|binding|INFO|Releasing lport cd85454a-320b-4c80-984b-1f77580d2ea7 from this chassis (sb_readonly=0)
Oct 02 12:48:43 compute-0 nova_compute[257802]: 2025-10-02 12:48:43.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:43.281 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7ab6b6d4-5590-4247-9dd8-59243897cce9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7ab6b6d4-5590-4247-9dd8-59243897cce9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:43.282 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[49d3fdc9-54d7-427a-8c95-abbb79660b73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:43.282 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-7ab6b6d4-5590-4247-9dd8-59243897cce9
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/7ab6b6d4-5590-4247-9dd8-59243897cce9.pid.haproxy
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 7ab6b6d4-5590-4247-9dd8-59243897cce9
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:48:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:43.283 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7ab6b6d4-5590-4247-9dd8-59243897cce9', 'env', 'PROCESS_TAG=haproxy-7ab6b6d4-5590-4247-9dd8-59243897cce9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7ab6b6d4-5590-4247-9dd8-59243897cce9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:48:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:48:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:48:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:48:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:48:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:48:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:43.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:43 compute-0 podman[365068]: 2025-10-02 12:48:43.640150728 +0000 UTC m=+0.024040191 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:48:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e366 do_prune osdmap full prune enabled
Oct 02 12:48:44 compute-0 podman[365068]: 2025-10-02 12:48:44.067412014 +0000 UTC m=+0.451301467 container create 0ee3b263fd0f8f65dd881aaa7bcc248b5fce19042d62492dde952bf6e156b56b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ab6b6d4-5590-4247-9dd8-59243897cce9, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:48:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2669: 305 pgs: 305 active+clean; 606 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 9.4 MiB/s wr, 287 op/s
Oct 02 12:48:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:48:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:48:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:48:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:48:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:48:44 compute-0 systemd[1]: Started libpod-conmon-0ee3b263fd0f8f65dd881aaa7bcc248b5fce19042d62492dde952bf6e156b56b.scope.
Oct 02 12:48:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:48:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a0ac15af5b4be8d53e6ac6c2a21c261916cb7d3ff85d66a944ce9cc2d56fc69/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:48:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e367 e367: 3 total, 3 up, 3 in
Oct 02 12:48:44 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e367: 3 total, 3 up, 3 in
Oct 02 12:48:44 compute-0 podman[365068]: 2025-10-02 12:48:44.266729741 +0000 UTC m=+0.650619214 container init 0ee3b263fd0f8f65dd881aaa7bcc248b5fce19042d62492dde952bf6e156b56b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ab6b6d4-5590-4247-9dd8-59243897cce9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 12:48:44 compute-0 podman[365068]: 2025-10-02 12:48:44.274515821 +0000 UTC m=+0.658405274 container start 0ee3b263fd0f8f65dd881aaa7bcc248b5fce19042d62492dde952bf6e156b56b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ab6b6d4-5590-4247-9dd8-59243897cce9, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:48:44 compute-0 neutron-haproxy-ovnmeta-7ab6b6d4-5590-4247-9dd8-59243897cce9[365089]: [NOTICE]   (365093) : New worker (365095) forked
Oct 02 12:48:44 compute-0 neutron-haproxy-ovnmeta-7ab6b6d4-5590-4247-9dd8-59243897cce9[365089]: [NOTICE]   (365093) : Loading success.
Oct 02 12:48:44 compute-0 nova_compute[257802]: 2025-10-02 12:48:44.298 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409324.29792, a44a93f4-fbf3-4ba6-9111-36dbf856ddfa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:48:44 compute-0 nova_compute[257802]: 2025-10-02 12:48:44.299 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] VM Started (Lifecycle Event)
Oct 02 12:48:44 compute-0 nova_compute[257802]: 2025-10-02 12:48:44.317 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:48:44 compute-0 nova_compute[257802]: 2025-10-02 12:48:44.322 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409324.2992942, a44a93f4-fbf3-4ba6-9111-36dbf856ddfa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:48:44 compute-0 nova_compute[257802]: 2025-10-02 12:48:44.323 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] VM Paused (Lifecycle Event)
Oct 02 12:48:44 compute-0 nova_compute[257802]: 2025-10-02 12:48:44.341 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:48:44 compute-0 nova_compute[257802]: 2025-10-02 12:48:44.345 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:48:44 compute-0 nova_compute[257802]: 2025-10-02 12:48:44.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:44 compute-0 nova_compute[257802]: 2025-10-02 12:48:44.362 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:48:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:44.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:44 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Oct 02 12:48:44 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:48:44.516029) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:48:44 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Oct 02 12:48:44 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409324516078, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 800, "num_deletes": 253, "total_data_size": 985791, "memory_usage": 1000864, "flush_reason": "Manual Compaction"}
Oct 02 12:48:44 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Oct 02 12:48:44 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409324562383, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 728549, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58455, "largest_seqno": 59254, "table_properties": {"data_size": 724805, "index_size": 1459, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9940, "raw_average_key_size": 21, "raw_value_size": 716877, "raw_average_value_size": 1538, "num_data_blocks": 62, "num_entries": 466, "num_filter_entries": 466, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409279, "oldest_key_time": 1759409279, "file_creation_time": 1759409324, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:48:44 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 46424 microseconds, and 3211 cpu microseconds.
Oct 02 12:48:44 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:48:44 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:48:44.562451) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 728549 bytes OK
Oct 02 12:48:44 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:48:44.562485) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Oct 02 12:48:44 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:48:44.619914) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Oct 02 12:48:44 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:48:44.619958) EVENT_LOG_v1 {"time_micros": 1759409324619948, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:48:44 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:48:44.619979) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:48:44 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 981774, prev total WAL file size 981774, number of live WAL files 2.
Oct 02 12:48:44 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:48:44 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:48:44.620803) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303039' seq:72057594037927935, type:22 .. '6D6772737461740032323631' seq:0, type:0; will stop at (end)
Oct 02 12:48:44 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:48:44 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(711KB)], [128(13MB)]
Oct 02 12:48:44 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409324620903, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 14539801, "oldest_snapshot_seqno": -1}
Oct 02 12:48:44 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 8882 keys, 10918023 bytes, temperature: kUnknown
Oct 02 12:48:44 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409324846163, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 10918023, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10861403, "index_size": 33339, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22213, "raw_key_size": 230133, "raw_average_key_size": 25, "raw_value_size": 10706386, "raw_average_value_size": 1205, "num_data_blocks": 1296, "num_entries": 8882, "num_filter_entries": 8882, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759409324, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:48:44 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:48:45 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:48:44.846568) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 10918023 bytes
Oct 02 12:48:45 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:48:45.014583) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 64.5 rd, 48.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 13.2 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(34.9) write-amplify(15.0) OK, records in: 9390, records dropped: 508 output_compression: NoCompression
Oct 02 12:48:45 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:48:45.014618) EVENT_LOG_v1 {"time_micros": 1759409325014605, "job": 78, "event": "compaction_finished", "compaction_time_micros": 225379, "compaction_time_cpu_micros": 26806, "output_level": 6, "num_output_files": 1, "total_output_size": 10918023, "num_input_records": 9390, "num_output_records": 8882, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:48:45 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:48:45 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409325014967, "job": 78, "event": "table_file_deletion", "file_number": 130}
Oct 02 12:48:45 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:48:45 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409325017391, "job": 78, "event": "table_file_deletion", "file_number": 128}
Oct 02 12:48:45 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:48:44.620676) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:48:45 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:48:45.017580) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:48:45 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:48:45.017588) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:48:45 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:48:45.017591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:48:45 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:48:45.017593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:48:45 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:48:45.017596) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:48:45 compute-0 nova_compute[257802]: 2025-10-02 12:48:45.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:48:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:45.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:48:45 compute-0 ceph-mon[73607]: pgmap v2669: 305 pgs: 305 active+clean; 606 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 9.4 MiB/s wr, 287 op/s
Oct 02 12:48:45 compute-0 ceph-mon[73607]: osdmap e367: 3 total, 3 up, 3 in
Oct 02 12:48:45 compute-0 nova_compute[257802]: 2025-10-02 12:48:45.516 2 DEBUG nova.compute.manager [req-6118e31a-d526-4f7c-b0aa-aae80060ed6b req-c0b96b33-3faf-4c35-bae6-03696a315e1e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Received event network-vif-plugged-b1e398cd-0bdc-4194-b98d-41c1a091d947 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:48:45 compute-0 nova_compute[257802]: 2025-10-02 12:48:45.517 2 DEBUG oslo_concurrency.lockutils [req-6118e31a-d526-4f7c-b0aa-aae80060ed6b req-c0b96b33-3faf-4c35-bae6-03696a315e1e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:45 compute-0 nova_compute[257802]: 2025-10-02 12:48:45.517 2 DEBUG oslo_concurrency.lockutils [req-6118e31a-d526-4f7c-b0aa-aae80060ed6b req-c0b96b33-3faf-4c35-bae6-03696a315e1e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:45 compute-0 nova_compute[257802]: 2025-10-02 12:48:45.517 2 DEBUG oslo_concurrency.lockutils [req-6118e31a-d526-4f7c-b0aa-aae80060ed6b req-c0b96b33-3faf-4c35-bae6-03696a315e1e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:45 compute-0 nova_compute[257802]: 2025-10-02 12:48:45.518 2 DEBUG nova.compute.manager [req-6118e31a-d526-4f7c-b0aa-aae80060ed6b req-c0b96b33-3faf-4c35-bae6-03696a315e1e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Processing event network-vif-plugged-b1e398cd-0bdc-4194-b98d-41c1a091d947 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:48:45 compute-0 nova_compute[257802]: 2025-10-02 12:48:45.518 2 DEBUG nova.compute.manager [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:48:45 compute-0 nova_compute[257802]: 2025-10-02 12:48:45.522 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409325.522299, a44a93f4-fbf3-4ba6-9111-36dbf856ddfa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:48:45 compute-0 nova_compute[257802]: 2025-10-02 12:48:45.522 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] VM Resumed (Lifecycle Event)
Oct 02 12:48:45 compute-0 nova_compute[257802]: 2025-10-02 12:48:45.524 2 DEBUG nova.virt.libvirt.driver [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:48:45 compute-0 nova_compute[257802]: 2025-10-02 12:48:45.526 2 INFO nova.virt.libvirt.driver [-] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Instance spawned successfully.
Oct 02 12:48:45 compute-0 nova_compute[257802]: 2025-10-02 12:48:45.527 2 DEBUG nova.virt.libvirt.driver [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:48:45 compute-0 nova_compute[257802]: 2025-10-02 12:48:45.558 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:48:45 compute-0 nova_compute[257802]: 2025-10-02 12:48:45.561 2 DEBUG nova.virt.libvirt.driver [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:48:45 compute-0 nova_compute[257802]: 2025-10-02 12:48:45.562 2 DEBUG nova.virt.libvirt.driver [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:48:45 compute-0 nova_compute[257802]: 2025-10-02 12:48:45.562 2 DEBUG nova.virt.libvirt.driver [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:48:45 compute-0 nova_compute[257802]: 2025-10-02 12:48:45.562 2 DEBUG nova.virt.libvirt.driver [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:48:45 compute-0 nova_compute[257802]: 2025-10-02 12:48:45.563 2 DEBUG nova.virt.libvirt.driver [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:48:45 compute-0 nova_compute[257802]: 2025-10-02 12:48:45.563 2 DEBUG nova.virt.libvirt.driver [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:48:45 compute-0 nova_compute[257802]: 2025-10-02 12:48:45.566 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:48:45 compute-0 nova_compute[257802]: 2025-10-02 12:48:45.609 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:48:45 compute-0 nova_compute[257802]: 2025-10-02 12:48:45.649 2 INFO nova.compute.manager [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Took 10.40 seconds to spawn the instance on the hypervisor.
Oct 02 12:48:45 compute-0 nova_compute[257802]: 2025-10-02 12:48:45.649 2 DEBUG nova.compute.manager [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:48:45 compute-0 nova_compute[257802]: 2025-10-02 12:48:45.723 2 INFO nova.compute.manager [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Took 11.39 seconds to build instance.
Oct 02 12:48:45 compute-0 nova_compute[257802]: 2025-10-02 12:48:45.741 2 DEBUG oslo_concurrency.lockutils [None req-1fd523d6-2f7f-40ef-9f2f-aff64b953c27 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.473s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2671: 305 pgs: 305 active+clean; 618 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 10 MiB/s wr, 303 op/s
Oct 02 12:48:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:46.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:47 compute-0 podman[365106]: 2025-10-02 12:48:47.00598789 +0000 UTC m=+0.139905708 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 12:48:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:47.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:47 compute-0 ceph-mon[73607]: pgmap v2671: 305 pgs: 305 active+clean; 618 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 10 MiB/s wr, 303 op/s
Oct 02 12:48:47 compute-0 nova_compute[257802]: 2025-10-02 12:48:47.714 2 DEBUG nova.compute.manager [req-43066dc9-229c-47cb-9110-b31e95f16c96 req-54e9b7d8-0cc1-4cdc-9223-e0244eef79ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Received event network-vif-plugged-b1e398cd-0bdc-4194-b98d-41c1a091d947 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:48:47 compute-0 nova_compute[257802]: 2025-10-02 12:48:47.714 2 DEBUG oslo_concurrency.lockutils [req-43066dc9-229c-47cb-9110-b31e95f16c96 req-54e9b7d8-0cc1-4cdc-9223-e0244eef79ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:47 compute-0 nova_compute[257802]: 2025-10-02 12:48:47.715 2 DEBUG oslo_concurrency.lockutils [req-43066dc9-229c-47cb-9110-b31e95f16c96 req-54e9b7d8-0cc1-4cdc-9223-e0244eef79ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:47 compute-0 nova_compute[257802]: 2025-10-02 12:48:47.715 2 DEBUG oslo_concurrency.lockutils [req-43066dc9-229c-47cb-9110-b31e95f16c96 req-54e9b7d8-0cc1-4cdc-9223-e0244eef79ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:47 compute-0 nova_compute[257802]: 2025-10-02 12:48:47.715 2 DEBUG nova.compute.manager [req-43066dc9-229c-47cb-9110-b31e95f16c96 req-54e9b7d8-0cc1-4cdc-9223-e0244eef79ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] No waiting events found dispatching network-vif-plugged-b1e398cd-0bdc-4194-b98d-41c1a091d947 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:48:47 compute-0 nova_compute[257802]: 2025-10-02 12:48:47.715 2 WARNING nova.compute.manager [req-43066dc9-229c-47cb-9110-b31e95f16c96 req-54e9b7d8-0cc1-4cdc-9223-e0244eef79ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Received unexpected event network-vif-plugged-b1e398cd-0bdc-4194-b98d-41c1a091d947 for instance with vm_state active and task_state None.
Oct 02 12:48:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2672: 305 pgs: 305 active+clean; 618 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 9.0 MiB/s wr, 259 op/s
Oct 02 12:48:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:48.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:48 compute-0 ceph-mon[73607]: pgmap v2672: 305 pgs: 305 active+clean; 618 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 9.0 MiB/s wr, 259 op/s
Oct 02 12:48:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:49.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:49 compute-0 nova_compute[257802]: 2025-10-02 12:48:49.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2322862970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2673: 305 pgs: 305 active+clean; 556 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.3 MiB/s wr, 245 op/s
Oct 02 12:48:50 compute-0 nova_compute[257802]: 2025-10-02 12:48:50.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:50.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:51 compute-0 ceph-mon[73607]: pgmap v2673: 305 pgs: 305 active+clean; 556 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.3 MiB/s wr, 245 op/s
Oct 02 12:48:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:51.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.517 2 DEBUG oslo_concurrency.lockutils [None req-f443745c-dbf1-4b01-b6ca-ff4bb559d454 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "2340536c-13c5-4863-80fe-b3f9bc5dfe7d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.518 2 DEBUG oslo_concurrency.lockutils [None req-f443745c-dbf1-4b01-b6ca-ff4bb559d454 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "2340536c-13c5-4863-80fe-b3f9bc5dfe7d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.518 2 DEBUG oslo_concurrency.lockutils [None req-f443745c-dbf1-4b01-b6ca-ff4bb559d454 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "2340536c-13c5-4863-80fe-b3f9bc5dfe7d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.519 2 DEBUG oslo_concurrency.lockutils [None req-f443745c-dbf1-4b01-b6ca-ff4bb559d454 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "2340536c-13c5-4863-80fe-b3f9bc5dfe7d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.519 2 DEBUG oslo_concurrency.lockutils [None req-f443745c-dbf1-4b01-b6ca-ff4bb559d454 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "2340536c-13c5-4863-80fe-b3f9bc5dfe7d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.521 2 INFO nova.compute.manager [None req-f443745c-dbf1-4b01-b6ca-ff4bb559d454 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Terminating instance
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.523 2 DEBUG nova.compute.manager [None req-f443745c-dbf1-4b01-b6ca-ff4bb559d454 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:48:51 compute-0 kernel: tapaa08bc10-64 (unregistering): left promiscuous mode
Oct 02 12:48:51 compute-0 NetworkManager[44987]: <info>  [1759409331.5775] device (tapaa08bc10-64): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:48:51 compute-0 ovn_controller[148183]: 2025-10-02T12:48:51Z|00754|binding|INFO|Releasing lport aa08bc10-64b4-4b2b-88d3-2f1a994c799c from this chassis (sb_readonly=0)
Oct 02 12:48:51 compute-0 ovn_controller[148183]: 2025-10-02T12:48:51Z|00755|binding|INFO|Setting lport aa08bc10-64b4-4b2b-88d3-2f1a994c799c down in Southbound
Oct 02 12:48:51 compute-0 ovn_controller[148183]: 2025-10-02T12:48:51Z|00756|binding|INFO|Removing iface tapaa08bc10-64 ovn-installed in OVS
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:51 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000aa.scope: Deactivated successfully.
Oct 02 12:48:51 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000aa.scope: Consumed 16.079s CPU time.
Oct 02 12:48:51 compute-0 systemd-machined[211836]: Machine qemu-83-instance-000000aa terminated.
Oct 02 12:48:51 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:51.691 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:d0:c0 10.100.0.6'], port_security=['fa:16:3e:90:d0:c0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2340536c-13c5-4863-80fe-b3f9bc5dfe7d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5ade962c517a483dbfe4bb13386f0006', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ccdcfae6-20a7-43b2-a33d-9f1e1bd6bb8c d8ca8f1f-f5ac-4bad-9c2b-c266c19224ca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81b23ccf-1bf4-44ff-8965-dd1a37c9d290, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=aa08bc10-64b4-4b2b-88d3-2f1a994c799c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:48:51 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:51.693 158261 INFO neutron.agent.ovn.metadata.agent [-] Port aa08bc10-64b4-4b2b-88d3-2f1a994c799c in datapath a0b40647-5bdc-42ab-8337-b3fcdc66ecfc unbound from our chassis
Oct 02 12:48:51 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:51.695 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0b40647-5bdc-42ab-8337-b3fcdc66ecfc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:48:51 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:51.696 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[048666f2-5a46-475f-980f-304cd93805a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:51 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:51.697 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc namespace which is not needed anymore
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.765 2 INFO nova.virt.libvirt.driver [-] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Instance destroyed successfully.
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.766 2 DEBUG nova.objects.instance [None req-f443745c-dbf1-4b01-b6ca-ff4bb559d454 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lazy-loading 'resources' on Instance uuid 2340536c-13c5-4863-80fe-b3f9bc5dfe7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.803 2 DEBUG nova.compute.manager [req-cfba548e-33c9-4f3a-9024-5b7dda233103 req-056b9c89-a6ea-4695-a821-cd70b233dc68 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Received event network-changed-aa08bc10-64b4-4b2b-88d3-2f1a994c799c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.803 2 DEBUG nova.compute.manager [req-cfba548e-33c9-4f3a-9024-5b7dda233103 req-056b9c89-a6ea-4695-a821-cd70b233dc68 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Refreshing instance network info cache due to event network-changed-aa08bc10-64b4-4b2b-88d3-2f1a994c799c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.803 2 DEBUG oslo_concurrency.lockutils [req-cfba548e-33c9-4f3a-9024-5b7dda233103 req-056b9c89-a6ea-4695-a821-cd70b233dc68 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-2340536c-13c5-4863-80fe-b3f9bc5dfe7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.804 2 DEBUG oslo_concurrency.lockutils [req-cfba548e-33c9-4f3a-9024-5b7dda233103 req-056b9c89-a6ea-4695-a821-cd70b233dc68 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-2340536c-13c5-4863-80fe-b3f9bc5dfe7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.804 2 DEBUG nova.network.neutron [req-cfba548e-33c9-4f3a-9024-5b7dda233103 req-056b9c89-a6ea-4695-a821-cd70b233dc68 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Refreshing network info cache for port aa08bc10-64b4-4b2b-88d3-2f1a994c799c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.837 2 DEBUG nova.virt.libvirt.vif [None req-f443745c-dbf1-4b01-b6ca-ff4bb559d454 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:47:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-access_point-1583023021',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-access_point-1583023021',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1031871880-ac',id=170,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJKVdwh9QlTYgLCl82UngDe09ls/tGoKAoumO3RPyeFokkc9UfX0iTNo9e+/yxLXmyBLnEjwGIuPQii5VKTmJn2JEq7Lrn8EeslRSQRtYOdFDT4FmHE55JNasW1Hhg+tIg==',key_name='tempest-TestSecurityGroupsBasicOps-1055572894',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:47:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5ade962c517a483dbfe4bb13386f0006',ramdisk_id='',reservation_id='r-wxciva9w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1031871880',owner_user_name='tempest-TestSecurityGroupsBasicOps-1031871880-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:47:44Z,user_data=None,user_id='16730f38111542e58a05fb4deb2b3914',uuid=2340536c-13c5-4863-80fe-b3f9bc5dfe7d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aa08bc10-64b4-4b2b-88d3-2f1a994c799c", "address": "fa:16:3e:90:d0:c0", "network": {"id": "a0b40647-5bdc-42ab-8337-b3fcdc66ecfc", "bridge": "br-int", "label": "tempest-network-smoke--749866782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa08bc10-64", "ovs_interfaceid": "aa08bc10-64b4-4b2b-88d3-2f1a994c799c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.837 2 DEBUG nova.network.os_vif_util [None req-f443745c-dbf1-4b01-b6ca-ff4bb559d454 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converting VIF {"id": "aa08bc10-64b4-4b2b-88d3-2f1a994c799c", "address": "fa:16:3e:90:d0:c0", "network": {"id": "a0b40647-5bdc-42ab-8337-b3fcdc66ecfc", "bridge": "br-int", "label": "tempest-network-smoke--749866782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa08bc10-64", "ovs_interfaceid": "aa08bc10-64b4-4b2b-88d3-2f1a994c799c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.838 2 DEBUG nova.network.os_vif_util [None req-f443745c-dbf1-4b01-b6ca-ff4bb559d454 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:90:d0:c0,bridge_name='br-int',has_traffic_filtering=True,id=aa08bc10-64b4-4b2b-88d3-2f1a994c799c,network=Network(a0b40647-5bdc-42ab-8337-b3fcdc66ecfc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa08bc10-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.839 2 DEBUG os_vif [None req-f443745c-dbf1-4b01-b6ca-ff4bb559d454 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:d0:c0,bridge_name='br-int',has_traffic_filtering=True,id=aa08bc10-64b4-4b2b-88d3-2f1a994c799c,network=Network(a0b40647-5bdc-42ab-8337-b3fcdc66ecfc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa08bc10-64') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.840 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa08bc10-64, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.852 2 INFO os_vif [None req-f443745c-dbf1-4b01-b6ca-ff4bb559d454 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:d0:c0,bridge_name='br-int',has_traffic_filtering=True,id=aa08bc10-64b4-4b2b-88d3-2f1a994c799c,network=Network(a0b40647-5bdc-42ab-8337-b3fcdc66ecfc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa08bc10-64')
Oct 02 12:48:51 compute-0 neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc[363125]: [NOTICE]   (363130) : haproxy version is 2.8.14-c23fe91
Oct 02 12:48:51 compute-0 neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc[363125]: [NOTICE]   (363130) : path to executable is /usr/sbin/haproxy
Oct 02 12:48:51 compute-0 neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc[363125]: [WARNING]  (363130) : Exiting Master process...
Oct 02 12:48:51 compute-0 neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc[363125]: [WARNING]  (363130) : Exiting Master process...
Oct 02 12:48:51 compute-0 neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc[363125]: [ALERT]    (363130) : Current worker (363132) exited with code 143 (Terminated)
Oct 02 12:48:51 compute-0 neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc[363125]: [WARNING]  (363130) : All workers exited. Exiting... (0)
Oct 02 12:48:51 compute-0 systemd[1]: libpod-5f615feadb787f95bfbe7a849adca8ab900b3407dfb7f1011bb7015d21212929.scope: Deactivated successfully.
Oct 02 12:48:51 compute-0 podman[365165]: 2025-10-02 12:48:51.865106814 +0000 UTC m=+0.069838577 container died 5f615feadb787f95bfbe7a849adca8ab900b3407dfb7f1011bb7015d21212929 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:48:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5f615feadb787f95bfbe7a849adca8ab900b3407dfb7f1011bb7015d21212929-userdata-shm.mount: Deactivated successfully.
Oct 02 12:48:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6aeaab9af37e146f895636990a34b45809c14328b11930886348b6419c1f1aa-merged.mount: Deactivated successfully.
Oct 02 12:48:51 compute-0 podman[365165]: 2025-10-02 12:48:51.90893458 +0000 UTC m=+0.113666343 container cleanup 5f615feadb787f95bfbe7a849adca8ab900b3407dfb7f1011bb7015d21212929 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 12:48:51 compute-0 systemd[1]: libpod-conmon-5f615feadb787f95bfbe7a849adca8ab900b3407dfb7f1011bb7015d21212929.scope: Deactivated successfully.
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.970 2 DEBUG nova.compute.manager [req-a5967dda-3fc3-4a65-8ca1-b1d5becaa65d req-0e0d0185-eac6-4d2e-b85a-034a64b17174 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Received event network-changed-b1e398cd-0bdc-4194-b98d-41c1a091d947 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.971 2 DEBUG nova.compute.manager [req-a5967dda-3fc3-4a65-8ca1-b1d5becaa65d req-0e0d0185-eac6-4d2e-b85a-034a64b17174 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Refreshing instance network info cache due to event network-changed-b1e398cd-0bdc-4194-b98d-41c1a091d947. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.971 2 DEBUG oslo_concurrency.lockutils [req-a5967dda-3fc3-4a65-8ca1-b1d5becaa65d req-0e0d0185-eac6-4d2e-b85a-034a64b17174 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-a44a93f4-fbf3-4ba6-9111-36dbf856ddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.971 2 DEBUG oslo_concurrency.lockutils [req-a5967dda-3fc3-4a65-8ca1-b1d5becaa65d req-0e0d0185-eac6-4d2e-b85a-034a64b17174 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-a44a93f4-fbf3-4ba6-9111-36dbf856ddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:48:51 compute-0 nova_compute[257802]: 2025-10-02 12:48:51.971 2 DEBUG nova.network.neutron [req-a5967dda-3fc3-4a65-8ca1-b1d5becaa65d req-0e0d0185-eac6-4d2e-b85a-034a64b17174 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Refreshing network info cache for port b1e398cd-0bdc-4194-b98d-41c1a091d947 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:48:52 compute-0 podman[365210]: 2025-10-02 12:48:52.00295662 +0000 UTC m=+0.065177342 container remove 5f615feadb787f95bfbe7a849adca8ab900b3407dfb7f1011bb7015d21212929 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:48:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:52.009 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9da2af71-34b4-49b1-a039-78471e74db45]: (4, ('Thu Oct  2 12:48:51 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc (5f615feadb787f95bfbe7a849adca8ab900b3407dfb7f1011bb7015d21212929)\n5f615feadb787f95bfbe7a849adca8ab900b3407dfb7f1011bb7015d21212929\nThu Oct  2 12:48:51 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc (5f615feadb787f95bfbe7a849adca8ab900b3407dfb7f1011bb7015d21212929)\n5f615feadb787f95bfbe7a849adca8ab900b3407dfb7f1011bb7015d21212929\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:52.011 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a9594231-a8d2-4a5a-b1d9-25cb36b185ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:52.013 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0b40647-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:52 compute-0 kernel: tapa0b40647-50: left promiscuous mode
Oct 02 12:48:52 compute-0 nova_compute[257802]: 2025-10-02 12:48:52.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:52 compute-0 nova_compute[257802]: 2025-10-02 12:48:52.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:52.036 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4b06aca2-abb0-439b-b039-64c3ee3ef1db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:52.070 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1362fac5-2308-4d7b-8b9f-0dc0aea4eb40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:52.073 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[796448ba-4629-44d6-94e0-b98f814094bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2674: 305 pgs: 305 active+clean; 550 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 213 op/s
Oct 02 12:48:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:52.091 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[21fd41b0-6bb7-4029-8faa-70901cfe35d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 732949, 'reachable_time': 15463, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365225, 'error': None, 'target': 'ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:52 compute-0 systemd[1]: run-netns-ovnmeta\x2da0b40647\x2d5bdc\x2d42ab\x2d8337\x2db3fcdc66ecfc.mount: Deactivated successfully.
Oct 02 12:48:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:52.095 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:48:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:48:52.096 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[e60c1aea-5d56-4444-ab21-3826e97c1fe8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:52 compute-0 nova_compute[257802]: 2025-10-02 12:48:52.373 2 INFO nova.virt.libvirt.driver [None req-f443745c-dbf1-4b01-b6ca-ff4bb559d454 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Deleting instance files /var/lib/nova/instances/2340536c-13c5-4863-80fe-b3f9bc5dfe7d_del
Oct 02 12:48:52 compute-0 nova_compute[257802]: 2025-10-02 12:48:52.374 2 INFO nova.virt.libvirt.driver [None req-f443745c-dbf1-4b01-b6ca-ff4bb559d454 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Deletion of /var/lib/nova/instances/2340536c-13c5-4863-80fe-b3f9bc5dfe7d_del complete
Oct 02 12:48:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:52.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:52 compute-0 nova_compute[257802]: 2025-10-02 12:48:52.381 2 DEBUG nova.compute.manager [req-ac109e34-b30f-424f-92db-05f869a70958 req-f1ccd87c-a5c5-48e2-9757-ccb0475e68e3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Received event network-vif-unplugged-aa08bc10-64b4-4b2b-88d3-2f1a994c799c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:48:52 compute-0 nova_compute[257802]: 2025-10-02 12:48:52.382 2 DEBUG oslo_concurrency.lockutils [req-ac109e34-b30f-424f-92db-05f869a70958 req-f1ccd87c-a5c5-48e2-9757-ccb0475e68e3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "2340536c-13c5-4863-80fe-b3f9bc5dfe7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:52 compute-0 nova_compute[257802]: 2025-10-02 12:48:52.382 2 DEBUG oslo_concurrency.lockutils [req-ac109e34-b30f-424f-92db-05f869a70958 req-f1ccd87c-a5c5-48e2-9757-ccb0475e68e3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2340536c-13c5-4863-80fe-b3f9bc5dfe7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:52 compute-0 nova_compute[257802]: 2025-10-02 12:48:52.382 2 DEBUG oslo_concurrency.lockutils [req-ac109e34-b30f-424f-92db-05f869a70958 req-f1ccd87c-a5c5-48e2-9757-ccb0475e68e3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2340536c-13c5-4863-80fe-b3f9bc5dfe7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:52 compute-0 nova_compute[257802]: 2025-10-02 12:48:52.383 2 DEBUG nova.compute.manager [req-ac109e34-b30f-424f-92db-05f869a70958 req-f1ccd87c-a5c5-48e2-9757-ccb0475e68e3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] No waiting events found dispatching network-vif-unplugged-aa08bc10-64b4-4b2b-88d3-2f1a994c799c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:48:52 compute-0 nova_compute[257802]: 2025-10-02 12:48:52.383 2 DEBUG nova.compute.manager [req-ac109e34-b30f-424f-92db-05f869a70958 req-f1ccd87c-a5c5-48e2-9757-ccb0475e68e3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Received event network-vif-unplugged-aa08bc10-64b4-4b2b-88d3-2f1a994c799c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:48:52 compute-0 nova_compute[257802]: 2025-10-02 12:48:52.624 2 INFO nova.compute.manager [None req-f443745c-dbf1-4b01-b6ca-ff4bb559d454 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Took 1.10 seconds to destroy the instance on the hypervisor.
Oct 02 12:48:52 compute-0 nova_compute[257802]: 2025-10-02 12:48:52.625 2 DEBUG oslo.service.loopingcall [None req-f443745c-dbf1-4b01-b6ca-ff4bb559d454 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:48:52 compute-0 nova_compute[257802]: 2025-10-02 12:48:52.625 2 DEBUG nova.compute.manager [-] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:48:52 compute-0 nova_compute[257802]: 2025-10-02 12:48:52.626 2 DEBUG nova.network.neutron [-] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:48:53 compute-0 ceph-mon[73607]: pgmap v2674: 305 pgs: 305 active+clean; 550 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 213 op/s
Oct 02 12:48:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:53.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:53 compute-0 nova_compute[257802]: 2025-10-02 12:48:53.987 2 DEBUG nova.network.neutron [-] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:48:54 compute-0 nova_compute[257802]: 2025-10-02 12:48:54.025 2 INFO nova.compute.manager [-] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Took 1.40 seconds to deallocate network for instance.
Oct 02 12:48:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2675: 305 pgs: 305 active+clean; 522 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.6 MiB/s wr, 199 op/s
Oct 02 12:48:54 compute-0 nova_compute[257802]: 2025-10-02 12:48:54.079 2 DEBUG oslo_concurrency.lockutils [None req-f443745c-dbf1-4b01-b6ca-ff4bb559d454 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:54 compute-0 nova_compute[257802]: 2025-10-02 12:48:54.080 2 DEBUG oslo_concurrency.lockutils [None req-f443745c-dbf1-4b01-b6ca-ff4bb559d454 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:54 compute-0 nova_compute[257802]: 2025-10-02 12:48:54.108 2 DEBUG nova.network.neutron [req-a5967dda-3fc3-4a65-8ca1-b1d5becaa65d req-0e0d0185-eac6-4d2e-b85a-034a64b17174 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Updated VIF entry in instance network info cache for port b1e398cd-0bdc-4194-b98d-41c1a091d947. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:48:54 compute-0 nova_compute[257802]: 2025-10-02 12:48:54.109 2 DEBUG nova.network.neutron [req-a5967dda-3fc3-4a65-8ca1-b1d5becaa65d req-0e0d0185-eac6-4d2e-b85a-034a64b17174 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Updating instance_info_cache with network_info: [{"id": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "address": "fa:16:3e:97:95:8d", "network": {"id": "7ab6b6d4-5590-4247-9dd8-59243897cce9", "bridge": "br-int", "label": "tempest-network-smoke--615376330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1e398cd-0b", "ovs_interfaceid": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:48:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:54 compute-0 nova_compute[257802]: 2025-10-02 12:48:54.172 2 DEBUG oslo_concurrency.processutils [None req-f443745c-dbf1-4b01-b6ca-ff4bb559d454 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:54 compute-0 nova_compute[257802]: 2025-10-02 12:48:54.204 2 DEBUG oslo_concurrency.lockutils [req-a5967dda-3fc3-4a65-8ca1-b1d5becaa65d req-0e0d0185-eac6-4d2e-b85a-034a64b17174 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-a44a93f4-fbf3-4ba6-9111-36dbf856ddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:48:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1329233657' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/492698381' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:54 compute-0 nova_compute[257802]: 2025-10-02 12:48:54.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:54.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:54 compute-0 nova_compute[257802]: 2025-10-02 12:48:54.464 2 DEBUG nova.compute.manager [req-735ef491-38b0-4101-8584-115c18519a7a req-215fc07b-916b-4a68-8cce-cf5d6120bee3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Received event network-vif-plugged-aa08bc10-64b4-4b2b-88d3-2f1a994c799c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:48:54 compute-0 nova_compute[257802]: 2025-10-02 12:48:54.465 2 DEBUG oslo_concurrency.lockutils [req-735ef491-38b0-4101-8584-115c18519a7a req-215fc07b-916b-4a68-8cce-cf5d6120bee3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "2340536c-13c5-4863-80fe-b3f9bc5dfe7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:54 compute-0 nova_compute[257802]: 2025-10-02 12:48:54.465 2 DEBUG oslo_concurrency.lockutils [req-735ef491-38b0-4101-8584-115c18519a7a req-215fc07b-916b-4a68-8cce-cf5d6120bee3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2340536c-13c5-4863-80fe-b3f9bc5dfe7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:54 compute-0 nova_compute[257802]: 2025-10-02 12:48:54.465 2 DEBUG oslo_concurrency.lockutils [req-735ef491-38b0-4101-8584-115c18519a7a req-215fc07b-916b-4a68-8cce-cf5d6120bee3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2340536c-13c5-4863-80fe-b3f9bc5dfe7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:54 compute-0 nova_compute[257802]: 2025-10-02 12:48:54.466 2 DEBUG nova.compute.manager [req-735ef491-38b0-4101-8584-115c18519a7a req-215fc07b-916b-4a68-8cce-cf5d6120bee3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] No waiting events found dispatching network-vif-plugged-aa08bc10-64b4-4b2b-88d3-2f1a994c799c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:48:54 compute-0 nova_compute[257802]: 2025-10-02 12:48:54.466 2 WARNING nova.compute.manager [req-735ef491-38b0-4101-8584-115c18519a7a req-215fc07b-916b-4a68-8cce-cf5d6120bee3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Received unexpected event network-vif-plugged-aa08bc10-64b4-4b2b-88d3-2f1a994c799c for instance with vm_state deleted and task_state None.
Oct 02 12:48:54 compute-0 nova_compute[257802]: 2025-10-02 12:48:54.466 2 DEBUG nova.compute.manager [req-735ef491-38b0-4101-8584-115c18519a7a req-215fc07b-916b-4a68-8cce-cf5d6120bee3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Received event network-vif-deleted-aa08bc10-64b4-4b2b-88d3-2f1a994c799c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:48:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:48:54 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4093491162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:54 compute-0 nova_compute[257802]: 2025-10-02 12:48:54.596 2 DEBUG oslo_concurrency.processutils [None req-f443745c-dbf1-4b01-b6ca-ff4bb559d454 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:54 compute-0 nova_compute[257802]: 2025-10-02 12:48:54.602 2 DEBUG nova.compute.provider_tree [None req-f443745c-dbf1-4b01-b6ca-ff4bb559d454 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:48:54 compute-0 nova_compute[257802]: 2025-10-02 12:48:54.625 2 DEBUG nova.scheduler.client.report [None req-f443745c-dbf1-4b01-b6ca-ff4bb559d454 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:48:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:48:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:48:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:48:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:48:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006974070293597617 of space, bias 1.0, pg target 2.092221088079285 quantized to 32 (current 32)
Oct 02 12:48:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:48:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002255926623860041 of space, bias 1.0, pg target 0.6722661339102922 quantized to 32 (current 32)
Oct 02 12:48:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:48:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:48:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:48:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.005059840580215894 of space, bias 1.0, pg target 1.5078324929043363 quantized to 32 (current 32)
Oct 02 12:48:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:48:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 02 12:48:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:48:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:48:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:48:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.0002699042085427136 quantized to 32 (current 32)
Oct 02 12:48:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:48:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Oct 02 12:48:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:48:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:48:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:48:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Oct 02 12:48:54 compute-0 nova_compute[257802]: 2025-10-02 12:48:54.656 2 DEBUG oslo_concurrency.lockutils [None req-f443745c-dbf1-4b01-b6ca-ff4bb559d454 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:54 compute-0 nova_compute[257802]: 2025-10-02 12:48:54.692 2 INFO nova.scheduler.client.report [None req-f443745c-dbf1-4b01-b6ca-ff4bb559d454 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Deleted allocations for instance 2340536c-13c5-4863-80fe-b3f9bc5dfe7d
Oct 02 12:48:54 compute-0 nova_compute[257802]: 2025-10-02 12:48:54.819 2 DEBUG oslo_concurrency.lockutils [None req-f443745c-dbf1-4b01-b6ca-ff4bb559d454 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "2340536c-13c5-4863-80fe-b3f9bc5dfe7d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.301s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:54 compute-0 sudo[365251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:54 compute-0 sudo[365251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:54 compute-0 sudo[365251]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:54 compute-0 sudo[365276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:54 compute-0 sudo[365276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:54 compute-0 sudo[365276]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:55 compute-0 nova_compute[257802]: 2025-10-02 12:48:55.030 2 DEBUG nova.network.neutron [req-cfba548e-33c9-4f3a-9024-5b7dda233103 req-056b9c89-a6ea-4695-a821-cd70b233dc68 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Updated VIF entry in instance network info cache for port aa08bc10-64b4-4b2b-88d3-2f1a994c799c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:48:55 compute-0 nova_compute[257802]: 2025-10-02 12:48:55.031 2 DEBUG nova.network.neutron [req-cfba548e-33c9-4f3a-9024-5b7dda233103 req-056b9c89-a6ea-4695-a821-cd70b233dc68 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Updating instance_info_cache with network_info: [{"id": "aa08bc10-64b4-4b2b-88d3-2f1a994c799c", "address": "fa:16:3e:90:d0:c0", "network": {"id": "a0b40647-5bdc-42ab-8337-b3fcdc66ecfc", "bridge": "br-int", "label": "tempest-network-smoke--749866782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa08bc10-64", "ovs_interfaceid": "aa08bc10-64b4-4b2b-88d3-2f1a994c799c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:48:55 compute-0 nova_compute[257802]: 2025-10-02 12:48:55.181 2 DEBUG oslo_concurrency.lockutils [req-cfba548e-33c9-4f3a-9024-5b7dda233103 req-056b9c89-a6ea-4695-a821-cd70b233dc68 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-2340536c-13c5-4863-80fe-b3f9bc5dfe7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:48:55 compute-0 ceph-mon[73607]: pgmap v2675: 305 pgs: 305 active+clean; 522 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.6 MiB/s wr, 199 op/s
Oct 02 12:48:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/770388318' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4093491162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/411056511' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:48:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/411056511' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:48:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:55.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2676: 305 pgs: 305 active+clean; 471 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.1 MiB/s wr, 196 op/s
Oct 02 12:48:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:56.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:56 compute-0 nova_compute[257802]: 2025-10-02 12:48:56.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:57.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:57 compute-0 ceph-mon[73607]: pgmap v2676: 305 pgs: 305 active+clean; 471 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.1 MiB/s wr, 196 op/s
Oct 02 12:48:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2677: 305 pgs: 305 active+clean; 471 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 125 KiB/s wr, 179 op/s
Oct 02 12:48:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:58.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:59 compute-0 ceph-mon[73607]: pgmap v2677: 305 pgs: 305 active+clean; 471 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 125 KiB/s wr, 179 op/s
Oct 02 12:48:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:48:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:48:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:59.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:48:59 compute-0 nova_compute[257802]: 2025-10-02 12:48:59.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2678: 305 pgs: 305 active+clean; 473 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 444 KiB/s wr, 222 op/s
Oct 02 12:49:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:49:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:00.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:49:01 compute-0 ceph-mon[73607]: pgmap v2678: 305 pgs: 305 active+clean; 473 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 444 KiB/s wr, 222 op/s
Oct 02 12:49:01 compute-0 ovn_controller[148183]: 2025-10-02T12:49:01Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:97:95:8d 10.100.0.10
Oct 02 12:49:01 compute-0 ovn_controller[148183]: 2025-10-02T12:49:01Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:97:95:8d 10.100.0.10
Oct 02 12:49:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:49:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:01.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:49:01 compute-0 nova_compute[257802]: 2025-10-02 12:49:01.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2679: 305 pgs: 305 active+clean; 478 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 853 KiB/s wr, 160 op/s
Oct 02 12:49:02 compute-0 ovn_controller[148183]: 2025-10-02T12:49:02Z|00757|binding|INFO|Releasing lport cd85454a-320b-4c80-984b-1f77580d2ea7 from this chassis (sb_readonly=0)
Oct 02 12:49:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:02.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:02 compute-0 nova_compute[257802]: 2025-10-02 12:49:02.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:03 compute-0 ceph-mon[73607]: pgmap v2679: 305 pgs: 305 active+clean; 478 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 853 KiB/s wr, 160 op/s
Oct 02 12:49:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2645208543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:03.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2680: 305 pgs: 305 active+clean; 502 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 194 op/s
Oct 02 12:49:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2762881637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:04 compute-0 nova_compute[257802]: 2025-10-02 12:49:04.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:04.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:05 compute-0 ceph-mon[73607]: pgmap v2680: 305 pgs: 305 active+clean; 502 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 194 op/s
Oct 02 12:49:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:49:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:05.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:49:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2681: 305 pgs: 305 active+clean; 503 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.1 MiB/s wr, 259 op/s
Oct 02 12:49:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:49:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:06.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:49:06 compute-0 nova_compute[257802]: 2025-10-02 12:49:06.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:06 compute-0 nova_compute[257802]: 2025-10-02 12:49:06.764 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409331.7637827, 2340536c-13c5-4863-80fe-b3f9bc5dfe7d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:49:06 compute-0 nova_compute[257802]: 2025-10-02 12:49:06.765 2 INFO nova.compute.manager [-] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] VM Stopped (Lifecycle Event)
Oct 02 12:49:06 compute-0 nova_compute[257802]: 2025-10-02 12:49:06.841 2 DEBUG nova.compute.manager [None req-0baaa996-f2ce-4c1f-94b7-bc1f4ff16614 - - - - - -] [instance: 2340536c-13c5-4863-80fe-b3f9bc5dfe7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:49:06 compute-0 nova_compute[257802]: 2025-10-02 12:49:06.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:07 compute-0 ceph-mon[73607]: pgmap v2681: 305 pgs: 305 active+clean; 503 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.1 MiB/s wr, 259 op/s
Oct 02 12:49:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2231071178' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:49:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:07.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:07 compute-0 sudo[365308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:07 compute-0 sudo[365308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:07 compute-0 sudo[365308]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:07 compute-0 sudo[365333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:49:07 compute-0 sudo[365333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:07 compute-0 sudo[365333]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:07 compute-0 sudo[365358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:07 compute-0 sudo[365358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:07 compute-0 sudo[365358]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:07 compute-0 sudo[365383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:49:07 compute-0 sudo[365383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2682: 305 pgs: 305 active+clean; 503 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.1 MiB/s wr, 221 op/s
Oct 02 12:49:08 compute-0 nova_compute[257802]: 2025-10-02 12:49:08.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:49:08 compute-0 nova_compute[257802]: 2025-10-02 12:49:08.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:49:08 compute-0 sudo[365383]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:49:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:49:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:49:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:49:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:49:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:49:08 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev cd740a5f-b2fe-4e1f-92fc-de1bc6dbfc08 does not exist
Oct 02 12:49:08 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev b8e87024-c2d6-4c3e-ad60-1cc0bf91d63e does not exist
Oct 02 12:49:08 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev cfcce598-ab9b-4c38-84f5-e5dc5d789c89 does not exist
Oct 02 12:49:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:49:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:49:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:49:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:49:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:49:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:49:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:08.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:08 compute-0 sudo[365439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:08 compute-0 sudo[365439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:08 compute-0 sudo[365439]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:08 compute-0 sudo[365464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:49:08 compute-0 sudo[365464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:08 compute-0 sudo[365464]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:08 compute-0 sudo[365489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:08 compute-0 sudo[365489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:08 compute-0 sudo[365489]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:08 compute-0 sudo[365514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:49:08 compute-0 sudo[365514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:08 compute-0 nova_compute[257802]: 2025-10-02 12:49:08.777 2 INFO nova.compute.manager [None req-b50e6971-904d-410c-bb4b-9fe48ac2ae22 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Get console output
Oct 02 12:49:08 compute-0 nova_compute[257802]: 2025-10-02 12:49:08.783 20794 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 12:49:08 compute-0 podman[365579]: 2025-10-02 12:49:08.888049049 +0000 UTC m=+0.035467503 container create 77bfa965ddb23a678863cbd0c54cfa39c1a00274969ea981103a5e3876fc980f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hamilton, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:49:08 compute-0 systemd[1]: Started libpod-conmon-77bfa965ddb23a678863cbd0c54cfa39c1a00274969ea981103a5e3876fc980f.scope.
Oct 02 12:49:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:49:08 compute-0 podman[365579]: 2025-10-02 12:49:08.872277591 +0000 UTC m=+0.019696065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:49:08 compute-0 podman[365579]: 2025-10-02 12:49:08.975348353 +0000 UTC m=+0.122766827 container init 77bfa965ddb23a678863cbd0c54cfa39c1a00274969ea981103a5e3876fc980f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:49:08 compute-0 podman[365579]: 2025-10-02 12:49:08.984149409 +0000 UTC m=+0.131567863 container start 77bfa965ddb23a678863cbd0c54cfa39c1a00274969ea981103a5e3876fc980f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 12:49:08 compute-0 podman[365579]: 2025-10-02 12:49:08.988060355 +0000 UTC m=+0.135478839 container attach 77bfa965ddb23a678863cbd0c54cfa39c1a00274969ea981103a5e3876fc980f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hamilton, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:49:08 compute-0 blissful_hamilton[365596]: 167 167
Oct 02 12:49:08 compute-0 systemd[1]: libpod-77bfa965ddb23a678863cbd0c54cfa39c1a00274969ea981103a5e3876fc980f.scope: Deactivated successfully.
Oct 02 12:49:08 compute-0 podman[365579]: 2025-10-02 12:49:08.991308845 +0000 UTC m=+0.138727299 container died 77bfa965ddb23a678863cbd0c54cfa39c1a00274969ea981103a5e3876fc980f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hamilton, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:49:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-bebee9289280469c8c564e83e3e302154e2083e777a663cd640dba663b4b2ee3-merged.mount: Deactivated successfully.
Oct 02 12:49:09 compute-0 podman[365579]: 2025-10-02 12:49:09.031652596 +0000 UTC m=+0.179071050 container remove 77bfa965ddb23a678863cbd0c54cfa39c1a00274969ea981103a5e3876fc980f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:49:09 compute-0 systemd[1]: libpod-conmon-77bfa965ddb23a678863cbd0c54cfa39c1a00274969ea981103a5e3876fc980f.scope: Deactivated successfully.
Oct 02 12:49:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:09 compute-0 podman[365619]: 2025-10-02 12:49:09.188360526 +0000 UTC m=+0.037644336 container create 816b9c71f2732eaba892ce9e1a86491cb0bbaa51aafb4f60e451dd06ca01477a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 12:49:09 compute-0 systemd[1]: Started libpod-conmon-816b9c71f2732eaba892ce9e1a86491cb0bbaa51aafb4f60e451dd06ca01477a.scope.
Oct 02 12:49:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:49:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c6b29c34eefd138bbe0b9c57665321e0b36cb34010ceb56be094438153b1aa1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:49:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c6b29c34eefd138bbe0b9c57665321e0b36cb34010ceb56be094438153b1aa1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:49:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c6b29c34eefd138bbe0b9c57665321e0b36cb34010ceb56be094438153b1aa1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:49:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c6b29c34eefd138bbe0b9c57665321e0b36cb34010ceb56be094438153b1aa1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:49:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c6b29c34eefd138bbe0b9c57665321e0b36cb34010ceb56be094438153b1aa1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:49:09 compute-0 podman[365619]: 2025-10-02 12:49:09.26220591 +0000 UTC m=+0.111489750 container init 816b9c71f2732eaba892ce9e1a86491cb0bbaa51aafb4f60e451dd06ca01477a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swirles, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 12:49:09 compute-0 podman[365619]: 2025-10-02 12:49:09.171305697 +0000 UTC m=+0.020589527 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:49:09 compute-0 podman[365619]: 2025-10-02 12:49:09.272643536 +0000 UTC m=+0.121927346 container start 816b9c71f2732eaba892ce9e1a86491cb0bbaa51aafb4f60e451dd06ca01477a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:49:09 compute-0 podman[365619]: 2025-10-02 12:49:09.280188762 +0000 UTC m=+0.129472592 container attach 816b9c71f2732eaba892ce9e1a86491cb0bbaa51aafb4f60e451dd06ca01477a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Oct 02 12:49:09 compute-0 ceph-mon[73607]: pgmap v2682: 305 pgs: 305 active+clean; 503 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.1 MiB/s wr, 221 op/s
Oct 02 12:49:09 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:49:09 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:49:09 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:49:09 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:49:09 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:49:09 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:49:09 compute-0 nova_compute[257802]: 2025-10-02 12:49:09.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:49:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:09.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:49:10 compute-0 silly_swirles[365635]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:49:10 compute-0 silly_swirles[365635]: --> relative data size: 1.0
Oct 02 12:49:10 compute-0 silly_swirles[365635]: --> All data devices are unavailable
Oct 02 12:49:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2683: 305 pgs: 305 active+clean; 509 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.5 MiB/s wr, 254 op/s
Oct 02 12:49:10 compute-0 nova_compute[257802]: 2025-10-02 12:49:10.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:49:10 compute-0 systemd[1]: libpod-816b9c71f2732eaba892ce9e1a86491cb0bbaa51aafb4f60e451dd06ca01477a.scope: Deactivated successfully.
Oct 02 12:49:10 compute-0 podman[365619]: 2025-10-02 12:49:10.102220245 +0000 UTC m=+0.951504055 container died 816b9c71f2732eaba892ce9e1a86491cb0bbaa51aafb4f60e451dd06ca01477a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:49:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c6b29c34eefd138bbe0b9c57665321e0b36cb34010ceb56be094438153b1aa1-merged.mount: Deactivated successfully.
Oct 02 12:49:10 compute-0 podman[365619]: 2025-10-02 12:49:10.300620259 +0000 UTC m=+1.149904069 container remove 816b9c71f2732eaba892ce9e1a86491cb0bbaa51aafb4f60e451dd06ca01477a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 12:49:10 compute-0 systemd[1]: libpod-conmon-816b9c71f2732eaba892ce9e1a86491cb0bbaa51aafb4f60e451dd06ca01477a.scope: Deactivated successfully.
Oct 02 12:49:10 compute-0 sudo[365514]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:10 compute-0 sudo[365662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:10 compute-0 sudo[365662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:10 compute-0 sudo[365662]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:10.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:10 compute-0 sudo[365687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:49:10 compute-0 sudo[365687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:10 compute-0 sudo[365687]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:10 compute-0 sudo[365712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:10 compute-0 sudo[365712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:10 compute-0 sudo[365712]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:10 compute-0 sudo[365737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:49:10 compute-0 sudo[365737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:10 compute-0 podman[365804]: 2025-10-02 12:49:10.858021101 +0000 UTC m=+0.033575856 container create 06d39cdfee2dcf9598137f4d200401a3574b446edcf2e824eae4668658177495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_yalow, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:49:10 compute-0 systemd[1]: Started libpod-conmon-06d39cdfee2dcf9598137f4d200401a3574b446edcf2e824eae4668658177495.scope.
Oct 02 12:49:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:49:10 compute-0 podman[365804]: 2025-10-02 12:49:10.843899544 +0000 UTC m=+0.019454319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:49:10 compute-0 podman[365804]: 2025-10-02 12:49:10.941319637 +0000 UTC m=+0.116874412 container init 06d39cdfee2dcf9598137f4d200401a3574b446edcf2e824eae4668658177495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_yalow, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:49:10 compute-0 podman[365804]: 2025-10-02 12:49:10.949130689 +0000 UTC m=+0.124685444 container start 06d39cdfee2dcf9598137f4d200401a3574b446edcf2e824eae4668658177495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:49:10 compute-0 podman[365804]: 2025-10-02 12:49:10.952538232 +0000 UTC m=+0.128093007 container attach 06d39cdfee2dcf9598137f4d200401a3574b446edcf2e824eae4668658177495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 12:49:10 compute-0 eager_yalow[365820]: 167 167
Oct 02 12:49:10 compute-0 systemd[1]: libpod-06d39cdfee2dcf9598137f4d200401a3574b446edcf2e824eae4668658177495.scope: Deactivated successfully.
Oct 02 12:49:10 compute-0 conmon[365820]: conmon 06d39cdfee2dcf959813 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-06d39cdfee2dcf9598137f4d200401a3574b446edcf2e824eae4668658177495.scope/container/memory.events
Oct 02 12:49:10 compute-0 podman[365804]: 2025-10-02 12:49:10.956125881 +0000 UTC m=+0.131680666 container died 06d39cdfee2dcf9598137f4d200401a3574b446edcf2e824eae4668658177495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_yalow, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:49:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-64060d63a7373e17228a9f8d687542cfb3f9d44daf284e477d237340acb7910c-merged.mount: Deactivated successfully.
Oct 02 12:49:10 compute-0 podman[365804]: 2025-10-02 12:49:10.991824217 +0000 UTC m=+0.167378972 container remove 06d39cdfee2dcf9598137f4d200401a3574b446edcf2e824eae4668658177495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_yalow, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 12:49:11 compute-0 systemd[1]: libpod-conmon-06d39cdfee2dcf9598137f4d200401a3574b446edcf2e824eae4668658177495.scope: Deactivated successfully.
Oct 02 12:49:11 compute-0 podman[365846]: 2025-10-02 12:49:11.144643831 +0000 UTC m=+0.037092371 container create 7730bf9bf56ccf0d303416f82b77d26f1543c66f6e39d768a2604dc596fc9b34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 12:49:11 compute-0 systemd[1]: Started libpod-conmon-7730bf9bf56ccf0d303416f82b77d26f1543c66f6e39d768a2604dc596fc9b34.scope.
Oct 02 12:49:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:49:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8515f7cb818ef648205634078a80e358b67c31f6ecdf0b0d400dc0af923b24b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:49:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8515f7cb818ef648205634078a80e358b67c31f6ecdf0b0d400dc0af923b24b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:49:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8515f7cb818ef648205634078a80e358b67c31f6ecdf0b0d400dc0af923b24b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:49:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8515f7cb818ef648205634078a80e358b67c31f6ecdf0b0d400dc0af923b24b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:49:11 compute-0 podman[365846]: 2025-10-02 12:49:11.217106102 +0000 UTC m=+0.109554652 container init 7730bf9bf56ccf0d303416f82b77d26f1543c66f6e39d768a2604dc596fc9b34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Oct 02 12:49:11 compute-0 podman[365846]: 2025-10-02 12:49:11.223445378 +0000 UTC m=+0.115893898 container start 7730bf9bf56ccf0d303416f82b77d26f1543c66f6e39d768a2604dc596fc9b34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wilbur, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:49:11 compute-0 podman[365846]: 2025-10-02 12:49:11.129030218 +0000 UTC m=+0.021478768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:49:11 compute-0 podman[365846]: 2025-10-02 12:49:11.226518393 +0000 UTC m=+0.118966943 container attach 7730bf9bf56ccf0d303416f82b77d26f1543c66f6e39d768a2604dc596fc9b34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wilbur, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:49:11 compute-0 ceph-mon[73607]: pgmap v2683: 305 pgs: 305 active+clean; 509 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.5 MiB/s wr, 254 op/s
Oct 02 12:49:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3025005931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:11.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:11 compute-0 nova_compute[257802]: 2025-10-02 12:49:11.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]: {
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]:     "1": [
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]:         {
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]:             "devices": [
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]:                 "/dev/loop3"
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]:             ],
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]:             "lv_name": "ceph_lv0",
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]:             "lv_size": "7511998464",
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]:             "name": "ceph_lv0",
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]:             "tags": {
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]:                 "ceph.cluster_name": "ceph",
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]:                 "ceph.crush_device_class": "",
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]:                 "ceph.encrypted": "0",
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]:                 "ceph.osd_id": "1",
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]:                 "ceph.type": "block",
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]:                 "ceph.vdo": "0"
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]:             },
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]:             "type": "block",
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]:             "vg_name": "ceph_vg0"
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]:         }
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]:     ]
Oct 02 12:49:11 compute-0 gracious_wilbur[365862]: }
Oct 02 12:49:12 compute-0 systemd[1]: libpod-7730bf9bf56ccf0d303416f82b77d26f1543c66f6e39d768a2604dc596fc9b34.scope: Deactivated successfully.
Oct 02 12:49:12 compute-0 podman[365846]: 2025-10-02 12:49:12.003751035 +0000 UTC m=+0.896199575 container died 7730bf9bf56ccf0d303416f82b77d26f1543c66f6e39d768a2604dc596fc9b34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wilbur, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Oct 02 12:49:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8515f7cb818ef648205634078a80e358b67c31f6ecdf0b0d400dc0af923b24b-merged.mount: Deactivated successfully.
Oct 02 12:49:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2684: 305 pgs: 305 active+clean; 509 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.2 MiB/s wr, 212 op/s
Oct 02 12:49:12 compute-0 podman[365846]: 2025-10-02 12:49:12.10082196 +0000 UTC m=+0.993270490 container remove 7730bf9bf56ccf0d303416f82b77d26f1543c66f6e39d768a2604dc596fc9b34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct 02 12:49:12 compute-0 systemd[1]: libpod-conmon-7730bf9bf56ccf0d303416f82b77d26f1543c66f6e39d768a2604dc596fc9b34.scope: Deactivated successfully.
Oct 02 12:49:12 compute-0 sudo[365737]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:12 compute-0 sudo[365884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:12 compute-0 sudo[365884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:12 compute-0 sudo[365884]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:12 compute-0 sudo[365909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:49:12 compute-0 sudo[365909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:12 compute-0 sudo[365909]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:12 compute-0 sudo[365934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:12 compute-0 sudo[365934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:12 compute-0 sudo[365934]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:12 compute-0 sudo[365959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:49:12 compute-0 sudo[365959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:12.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:12.677 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:49:12 compute-0 nova_compute[257802]: 2025-10-02 12:49:12.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:12.679 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:49:12 compute-0 podman[366025]: 2025-10-02 12:49:12.680247533 +0000 UTC m=+0.058339044 container create c644a7a410aa9efcdae3b06d26dae91534ce1245af3c14bca10a0a83ba181ccc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_spence, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 12:49:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:49:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:49:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:49:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:49:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:49:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:49:12 compute-0 podman[366025]: 2025-10-02 12:49:12.643138932 +0000 UTC m=+0.021230473 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:49:12 compute-0 systemd[1]: Started libpod-conmon-c644a7a410aa9efcdae3b06d26dae91534ce1245af3c14bca10a0a83ba181ccc.scope.
Oct 02 12:49:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:49:12 compute-0 podman[366025]: 2025-10-02 12:49:12.848119037 +0000 UTC m=+0.226210568 container init c644a7a410aa9efcdae3b06d26dae91534ce1245af3c14bca10a0a83ba181ccc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 12:49:12 compute-0 podman[366025]: 2025-10-02 12:49:12.854564276 +0000 UTC m=+0.232655787 container start c644a7a410aa9efcdae3b06d26dae91534ce1245af3c14bca10a0a83ba181ccc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_spence, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:49:12 compute-0 hungry_spence[366042]: 167 167
Oct 02 12:49:12 compute-0 systemd[1]: libpod-c644a7a410aa9efcdae3b06d26dae91534ce1245af3c14bca10a0a83ba181ccc.scope: Deactivated successfully.
Oct 02 12:49:12 compute-0 podman[366025]: 2025-10-02 12:49:12.877736155 +0000 UTC m=+0.255827686 container attach c644a7a410aa9efcdae3b06d26dae91534ce1245af3c14bca10a0a83ba181ccc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 12:49:12 compute-0 podman[366025]: 2025-10-02 12:49:12.878123144 +0000 UTC m=+0.256214645 container died c644a7a410aa9efcdae3b06d26dae91534ce1245af3c14bca10a0a83ba181ccc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 12:49:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-52b7b5a2fb1d8048bf7764d94f8782e84fce07d67b17b50624993f19852e56ea-merged.mount: Deactivated successfully.
Oct 02 12:49:13 compute-0 podman[366025]: 2025-10-02 12:49:13.112959823 +0000 UTC m=+0.491051344 container remove c644a7a410aa9efcdae3b06d26dae91534ce1245af3c14bca10a0a83ba181ccc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_spence, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 12:49:13 compute-0 systemd[1]: libpod-conmon-c644a7a410aa9efcdae3b06d26dae91534ce1245af3c14bca10a0a83ba181ccc.scope: Deactivated successfully.
Oct 02 12:49:13 compute-0 podman[366071]: 2025-10-02 12:49:13.169693647 +0000 UTC m=+0.115014416 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:49:13 compute-0 podman[366070]: 2025-10-02 12:49:13.170025595 +0000 UTC m=+0.120039580 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3)
Oct 02 12:49:13 compute-0 podman[366059]: 2025-10-02 12:49:13.193773469 +0000 UTC m=+0.189410044 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 02 12:49:13 compute-0 podman[366124]: 2025-10-02 12:49:13.271642111 +0000 UTC m=+0.028819829 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:49:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:13.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:13 compute-0 podman[366124]: 2025-10-02 12:49:13.441669318 +0000 UTC m=+0.198846996 container create b52930065de777508e6f92979f791032d46150e413875b2c9f41eb9a2ffc4eb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 12:49:13 compute-0 ceph-mon[73607]: pgmap v2684: 305 pgs: 305 active+clean; 509 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.2 MiB/s wr, 212 op/s
Oct 02 12:49:13 compute-0 systemd[1]: Started libpod-conmon-b52930065de777508e6f92979f791032d46150e413875b2c9f41eb9a2ffc4eb2.scope.
Oct 02 12:49:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:49:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06624d46f9ad869d7e84112535f9c3d2d524f200b8dd31481bc2db22241a69e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:49:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06624d46f9ad869d7e84112535f9c3d2d524f200b8dd31481bc2db22241a69e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:49:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06624d46f9ad869d7e84112535f9c3d2d524f200b8dd31481bc2db22241a69e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:49:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06624d46f9ad869d7e84112535f9c3d2d524f200b8dd31481bc2db22241a69e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:49:13 compute-0 podman[366124]: 2025-10-02 12:49:13.714266975 +0000 UTC m=+0.471444713 container init b52930065de777508e6f92979f791032d46150e413875b2c9f41eb9a2ffc4eb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bohr, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:49:13 compute-0 podman[366124]: 2025-10-02 12:49:13.722929147 +0000 UTC m=+0.480106835 container start b52930065de777508e6f92979f791032d46150e413875b2c9f41eb9a2ffc4eb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bohr, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:49:13 compute-0 podman[366124]: 2025-10-02 12:49:13.761198767 +0000 UTC m=+0.518376445 container attach b52930065de777508e6f92979f791032d46150e413875b2c9f41eb9a2ffc4eb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:49:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2685: 305 pgs: 305 active+clean; 536 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.7 MiB/s wr, 185 op/s
Oct 02 12:49:14 compute-0 nova_compute[257802]: 2025-10-02 12:49:14.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:49:14 compute-0 nova_compute[257802]: 2025-10-02 12:49:14.100 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:49:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:14 compute-0 nova_compute[257802]: 2025-10-02 12:49:14.193 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:49:14 compute-0 nova_compute[257802]: 2025-10-02 12:49:14.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:14 compute-0 nova_compute[257802]: 2025-10-02 12:49:14.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:49:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:14.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:49:14 compute-0 youthful_bohr[366141]: {
Oct 02 12:49:14 compute-0 youthful_bohr[366141]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:49:14 compute-0 youthful_bohr[366141]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:49:14 compute-0 youthful_bohr[366141]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:49:14 compute-0 youthful_bohr[366141]:         "osd_id": 1,
Oct 02 12:49:14 compute-0 youthful_bohr[366141]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:49:14 compute-0 youthful_bohr[366141]:         "type": "bluestore"
Oct 02 12:49:14 compute-0 youthful_bohr[366141]:     }
Oct 02 12:49:14 compute-0 youthful_bohr[366141]: }
Oct 02 12:49:14 compute-0 systemd[1]: libpod-b52930065de777508e6f92979f791032d46150e413875b2c9f41eb9a2ffc4eb2.scope: Deactivated successfully.
Oct 02 12:49:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:14.682 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:49:14 compute-0 podman[366163]: 2025-10-02 12:49:14.728651902 +0000 UTC m=+0.029742391 container died b52930065de777508e6f92979f791032d46150e413875b2c9f41eb9a2ffc4eb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 12:49:14 compute-0 ceph-mon[73607]: pgmap v2685: 305 pgs: 305 active+clean; 536 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.7 MiB/s wr, 185 op/s
Oct 02 12:49:15 compute-0 sudo[366178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:15 compute-0 sudo[366178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:15 compute-0 sudo[366178]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:15 compute-0 sudo[366203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:15 compute-0 sudo[366203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:15 compute-0 sudo[366203]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-06624d46f9ad869d7e84112535f9c3d2d524f200b8dd31481bc2db22241a69e6-merged.mount: Deactivated successfully.
Oct 02 12:49:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:49:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:15.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:49:15 compute-0 podman[366163]: 2025-10-02 12:49:15.410079812 +0000 UTC m=+0.711170291 container remove b52930065de777508e6f92979f791032d46150e413875b2c9f41eb9a2ffc4eb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bohr, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:49:15 compute-0 systemd[1]: libpod-conmon-b52930065de777508e6f92979f791032d46150e413875b2c9f41eb9a2ffc4eb2.scope: Deactivated successfully.
Oct 02 12:49:15 compute-0 sudo[365959]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:49:15 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:49:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:49:15 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:49:15 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 84aed23c-7671-4e74-9a29-a0c999c0de5f does not exist
Oct 02 12:49:15 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev cac58865-66dc-42e5-827c-83225092c353 does not exist
Oct 02 12:49:15 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 87d1e0e5-8c65-4f66-b87a-78280b269f96 does not exist
Oct 02 12:49:15 compute-0 sudo[366229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:15 compute-0 sudo[366229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:15 compute-0 sudo[366229]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:15 compute-0 sudo[366254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:49:15 compute-0 sudo[366254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:15 compute-0 sudo[366254]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:15 compute-0 nova_compute[257802]: 2025-10-02 12:49:15.857 2 DEBUG oslo_concurrency.lockutils [None req-63930d78-5473-41cb-9ed2-6cc65acfe409 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "interface-a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:15 compute-0 nova_compute[257802]: 2025-10-02 12:49:15.858 2 DEBUG oslo_concurrency.lockutils [None req-63930d78-5473-41cb-9ed2-6cc65acfe409 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "interface-a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:15 compute-0 nova_compute[257802]: 2025-10-02 12:49:15.858 2 DEBUG nova.objects.instance [None req-63930d78-5473-41cb-9ed2-6cc65acfe409 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'flavor' on Instance uuid a44a93f4-fbf3-4ba6-9111-36dbf856ddfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:49:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2686: 305 pgs: 305 active+clean; 555 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 153 op/s
Oct 02 12:49:16 compute-0 nova_compute[257802]: 2025-10-02 12:49:16.192 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:49:16 compute-0 nova_compute[257802]: 2025-10-02 12:49:16.192 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:49:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:16.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:16 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:49:16 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:49:16 compute-0 nova_compute[257802]: 2025-10-02 12:49:16.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:16 compute-0 nova_compute[257802]: 2025-10-02 12:49:16.898 2 DEBUG nova.objects.instance [None req-63930d78-5473-41cb-9ed2-6cc65acfe409 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'pci_requests' on Instance uuid a44a93f4-fbf3-4ba6-9111-36dbf856ddfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:49:16 compute-0 nova_compute[257802]: 2025-10-02 12:49:16.924 2 DEBUG nova.network.neutron [None req-63930d78-5473-41cb-9ed2-6cc65acfe409 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:49:17 compute-0 nova_compute[257802]: 2025-10-02 12:49:17.170 2 DEBUG nova.policy [None req-63930d78-5473-41cb-9ed2-6cc65acfe409 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb366465e6154871b8a53c9f500105ce', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:49:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:49:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:17.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:49:17 compute-0 ceph-mon[73607]: pgmap v2686: 305 pgs: 305 active+clean; 555 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 153 op/s
Oct 02 12:49:17 compute-0 podman[366280]: 2025-10-02 12:49:17.949884572 +0000 UTC m=+0.087749717 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct 02 12:49:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2687: 305 pgs: 305 active+clean; 555 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 80 op/s
Oct 02 12:49:18 compute-0 nova_compute[257802]: 2025-10-02 12:49:18.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:49:18 compute-0 nova_compute[257802]: 2025-10-02 12:49:18.111 2 DEBUG nova.network.neutron [None req-63930d78-5473-41cb-9ed2-6cc65acfe409 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Successfully created port: c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:49:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:49:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:18.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:49:18 compute-0 ceph-mon[73607]: pgmap v2687: 305 pgs: 305 active+clean; 555 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 80 op/s
Oct 02 12:49:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3336780980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:49:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:19 compute-0 nova_compute[257802]: 2025-10-02 12:49:19.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:19.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:19 compute-0 nova_compute[257802]: 2025-10-02 12:49:19.803 2 DEBUG nova.network.neutron [None req-63930d78-5473-41cb-9ed2-6cc65acfe409 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Successfully updated port: c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:49:19 compute-0 nova_compute[257802]: 2025-10-02 12:49:19.828 2 DEBUG oslo_concurrency.lockutils [None req-63930d78-5473-41cb-9ed2-6cc65acfe409 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "refresh_cache-a44a93f4-fbf3-4ba6-9111-36dbf856ddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:49:19 compute-0 nova_compute[257802]: 2025-10-02 12:49:19.828 2 DEBUG oslo_concurrency.lockutils [None req-63930d78-5473-41cb-9ed2-6cc65acfe409 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquired lock "refresh_cache-a44a93f4-fbf3-4ba6-9111-36dbf856ddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:49:19 compute-0 nova_compute[257802]: 2025-10-02 12:49:19.829 2 DEBUG nova.network.neutron [None req-63930d78-5473-41cb-9ed2-6cc65acfe409 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:49:19 compute-0 nova_compute[257802]: 2025-10-02 12:49:19.988 2 DEBUG nova.compute.manager [req-cbad1151-1f99-406b-b3f7-1a1f87257fb0 req-dc2ea670-3ba3-43f2-870d-b6fde93c1184 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Received event network-changed-c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:49:19 compute-0 nova_compute[257802]: 2025-10-02 12:49:19.989 2 DEBUG nova.compute.manager [req-cbad1151-1f99-406b-b3f7-1a1f87257fb0 req-dc2ea670-3ba3-43f2-870d-b6fde93c1184 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Refreshing instance network info cache due to event network-changed-c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:49:19 compute-0 nova_compute[257802]: 2025-10-02 12:49:19.989 2 DEBUG oslo_concurrency.lockutils [req-cbad1151-1f99-406b-b3f7-1a1f87257fb0 req-dc2ea670-3ba3-43f2-870d-b6fde93c1184 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-a44a93f4-fbf3-4ba6-9111-36dbf856ddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:49:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2688: 305 pgs: 305 active+clean; 535 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 128 op/s
Oct 02 12:49:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2461171271' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:49:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4095847724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:20.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:21.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:21 compute-0 ceph-mon[73607]: pgmap v2688: 305 pgs: 305 active+clean; 535 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 128 op/s
Oct 02 12:49:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/384528648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1572234307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:21 compute-0 nova_compute[257802]: 2025-10-02 12:49:21.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2689: 305 pgs: 305 active+clean; 534 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 668 KiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 02 12:49:22 compute-0 nova_compute[257802]: 2025-10-02 12:49:22.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:49:22 compute-0 nova_compute[257802]: 2025-10-02 12:49:22.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:49:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:49:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:22.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:49:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e367 do_prune osdmap full prune enabled
Oct 02 12:49:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/615274900' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:49:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/615274900' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:49:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e368 e368: 3 total, 3 up, 3 in
Oct 02 12:49:22 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e368: 3 total, 3 up, 3 in
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.127 2 DEBUG nova.network.neutron [None req-63930d78-5473-41cb-9ed2-6cc65acfe409 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Updating instance_info_cache with network_info: [{"id": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "address": "fa:16:3e:97:95:8d", "network": {"id": "7ab6b6d4-5590-4247-9dd8-59243897cce9", "bridge": "br-int", "label": "tempest-network-smoke--615376330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1e398cd-0b", "ovs_interfaceid": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc", "address": "fa:16:3e:7f:5a:b2", "network": {"id": "da8223ff-ffa4-4359-83b5-a642db1cfd48", "bridge": "br-int", "label": "tempest-network-smoke--1807143904", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc44a4d12-3f", "ovs_interfaceid": "c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.158 2 DEBUG oslo_concurrency.lockutils [None req-63930d78-5473-41cb-9ed2-6cc65acfe409 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Releasing lock "refresh_cache-a44a93f4-fbf3-4ba6-9111-36dbf856ddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.159 2 DEBUG oslo_concurrency.lockutils [req-cbad1151-1f99-406b-b3f7-1a1f87257fb0 req-dc2ea670-3ba3-43f2-870d-b6fde93c1184 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-a44a93f4-fbf3-4ba6-9111-36dbf856ddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.159 2 DEBUG nova.network.neutron [req-cbad1151-1f99-406b-b3f7-1a1f87257fb0 req-dc2ea670-3ba3-43f2-870d-b6fde93c1184 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Refreshing network info cache for port c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.162 2 DEBUG nova.virt.libvirt.vif [None req-63930d78-5473-41cb-9ed2-6cc65acfe409 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:48:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1210888659',display_name='tempest-TestNetworkBasicOps-server-1210888659',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1210888659',id=174,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOuizzzsJ0xQoaj6QWBctuJTCzZuNABfaUbqmY1NfxPiQQ1W4zoCTJjFgqJkZAPE8tNumkBg7/MpTOE+q4DWN6dEyxAopDry/w0CNriaUKHv801j6Cb4/rGJW3h1iORzw==',key_name='tempest-TestNetworkBasicOps-2008743034',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:48:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-ghrwnmf1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:48:45Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=a44a93f4-fbf3-4ba6-9111-36dbf856ddfa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc", "address": "fa:16:3e:7f:5a:b2", "network": {"id": "da8223ff-ffa4-4359-83b5-a642db1cfd48", "bridge": "br-int", "label": "tempest-network-smoke--1807143904", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc44a4d12-3f", "ovs_interfaceid": "c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.162 2 DEBUG nova.network.os_vif_util [None req-63930d78-5473-41cb-9ed2-6cc65acfe409 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc", "address": "fa:16:3e:7f:5a:b2", "network": {"id": "da8223ff-ffa4-4359-83b5-a642db1cfd48", "bridge": "br-int", "label": "tempest-network-smoke--1807143904", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc44a4d12-3f", "ovs_interfaceid": "c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.163 2 DEBUG nova.network.os_vif_util [None req-63930d78-5473-41cb-9ed2-6cc65acfe409 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:5a:b2,bridge_name='br-int',has_traffic_filtering=True,id=c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc,network=Network(da8223ff-ffa4-4359-83b5-a642db1cfd48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc44a4d12-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.163 2 DEBUG os_vif [None req-63930d78-5473-41cb-9ed2-6cc65acfe409 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:5a:b2,bridge_name='br-int',has_traffic_filtering=True,id=c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc,network=Network(da8223ff-ffa4-4359-83b5-a642db1cfd48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc44a4d12-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.164 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.164 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.168 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc44a4d12-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.168 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc44a4d12-3f, col_values=(('external_ids', {'iface-id': 'c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:5a:b2', 'vm-uuid': 'a44a93f4-fbf3-4ba6-9111-36dbf856ddfa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:23 compute-0 NetworkManager[44987]: <info>  [1759409363.1710] manager: (tapc44a4d12-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/342)
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.178 2 INFO os_vif [None req-63930d78-5473-41cb-9ed2-6cc65acfe409 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:5a:b2,bridge_name='br-int',has_traffic_filtering=True,id=c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc,network=Network(da8223ff-ffa4-4359-83b5-a642db1cfd48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc44a4d12-3f')
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.179 2 DEBUG nova.virt.libvirt.vif [None req-63930d78-5473-41cb-9ed2-6cc65acfe409 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:48:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1210888659',display_name='tempest-TestNetworkBasicOps-server-1210888659',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1210888659',id=174,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOuizzzsJ0xQoaj6QWBctuJTCzZuNABfaUbqmY1NfxPiQQ1W4zoCTJjFgqJkZAPE8tNumkBg7/MpTOE+q4DWN6dEyxAopDry/w0CNriaUKHv801j6Cb4/rGJW3h1iORzw==',key_name='tempest-TestNetworkBasicOps-2008743034',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:48:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-ghrwnmf1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:48:45Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=a44a93f4-fbf3-4ba6-9111-36dbf856ddfa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc", "address": "fa:16:3e:7f:5a:b2", "network": {"id": "da8223ff-ffa4-4359-83b5-a642db1cfd48", "bridge": "br-int", "label": "tempest-network-smoke--1807143904", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc44a4d12-3f", "ovs_interfaceid": "c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.179 2 DEBUG nova.network.os_vif_util [None req-63930d78-5473-41cb-9ed2-6cc65acfe409 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc", "address": "fa:16:3e:7f:5a:b2", "network": {"id": "da8223ff-ffa4-4359-83b5-a642db1cfd48", "bridge": "br-int", "label": "tempest-network-smoke--1807143904", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc44a4d12-3f", "ovs_interfaceid": "c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.180 2 DEBUG nova.network.os_vif_util [None req-63930d78-5473-41cb-9ed2-6cc65acfe409 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:5a:b2,bridge_name='br-int',has_traffic_filtering=True,id=c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc,network=Network(da8223ff-ffa4-4359-83b5-a642db1cfd48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc44a4d12-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.183 2 DEBUG nova.virt.libvirt.guest [None req-63930d78-5473-41cb-9ed2-6cc65acfe409 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] attach device xml: <interface type="ethernet">
Oct 02 12:49:23 compute-0 nova_compute[257802]:   <mac address="fa:16:3e:7f:5a:b2"/>
Oct 02 12:49:23 compute-0 nova_compute[257802]:   <model type="virtio"/>
Oct 02 12:49:23 compute-0 nova_compute[257802]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:49:23 compute-0 nova_compute[257802]:   <mtu size="1442"/>
Oct 02 12:49:23 compute-0 nova_compute[257802]:   <target dev="tapc44a4d12-3f"/>
Oct 02 12:49:23 compute-0 nova_compute[257802]: </interface>
Oct 02 12:49:23 compute-0 nova_compute[257802]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:49:23 compute-0 NetworkManager[44987]: <info>  [1759409363.1963] manager: (tapc44a4d12-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/343)
Oct 02 12:49:23 compute-0 kernel: tapc44a4d12-3f: entered promiscuous mode
Oct 02 12:49:23 compute-0 ovn_controller[148183]: 2025-10-02T12:49:23Z|00758|binding|INFO|Claiming lport c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc for this chassis.
Oct 02 12:49:23 compute-0 ovn_controller[148183]: 2025-10-02T12:49:23Z|00759|binding|INFO|c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc: Claiming fa:16:3e:7f:5a:b2 10.100.0.20
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:23.211 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:5a:b2 10.100.0.20'], port_security=['fa:16:3e:7f:5a:b2 10.100.0.20'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.20/28', 'neutron:device_id': 'a44a93f4-fbf3-4ba6-9111-36dbf856ddfa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-da8223ff-ffa4-4359-83b5-a642db1cfd48', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c114e1a6-21d7-49a2-a13f-595584b99547', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb724e20-5004-45bb-be58-41678b3148fa, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:23.212 158261 INFO neutron.agent.ovn.metadata.agent [-] Port c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc in datapath da8223ff-ffa4-4359-83b5-a642db1cfd48 bound to our chassis
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:23.213 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network da8223ff-ffa4-4359-83b5-a642db1cfd48
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:23.225 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d6777c39-27d5-4fe0-9739-8f2118404452]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:23.226 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapda8223ff-f1 in ovnmeta-da8223ff-ffa4-4359-83b5-a642db1cfd48 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:23.228 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapda8223ff-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:23.228 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[539c623b-3541-42e1-a349-291b2070f2d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:23.229 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7c57be3e-fa38-4034-904e-45ac918982c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:23 compute-0 systemd-udevd[366318]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:23.241 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[85ec7a5e-dee3-41f2-8e6b-b5b5d42839f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:23 compute-0 NetworkManager[44987]: <info>  [1759409363.2447] device (tapc44a4d12-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:49:23 compute-0 NetworkManager[44987]: <info>  [1759409363.2460] device (tapc44a4d12-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:23 compute-0 ovn_controller[148183]: 2025-10-02T12:49:23Z|00760|binding|INFO|Setting lport c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc ovn-installed in OVS
Oct 02 12:49:23 compute-0 ovn_controller[148183]: 2025-10-02T12:49:23Z|00761|binding|INFO|Setting lport c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc up in Southbound
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:23.266 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[207892fb-af35-4c83-8d8d-2cef9dd3e5a2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:23.293 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d29bda-1982-4f31-b777-5bacade5ca8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.294 2 DEBUG nova.virt.libvirt.driver [None req-63930d78-5473-41cb-9ed2-6cc65acfe409 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.294 2 DEBUG nova.virt.libvirt.driver [None req-63930d78-5473-41cb-9ed2-6cc65acfe409 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.295 2 DEBUG nova.virt.libvirt.driver [None req-63930d78-5473-41cb-9ed2-6cc65acfe409 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No VIF found with MAC fa:16:3e:97:95:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.295 2 DEBUG nova.virt.libvirt.driver [None req-63930d78-5473-41cb-9ed2-6cc65acfe409 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No VIF found with MAC fa:16:3e:7f:5a:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:23.298 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1f732701-a15d-4096-a096-81ed14930df4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:23 compute-0 systemd-udevd[366321]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:49:23 compute-0 NetworkManager[44987]: <info>  [1759409363.3002] manager: (tapda8223ff-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/344)
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.325 2 DEBUG nova.virt.libvirt.guest [None req-63930d78-5473-41cb-9ed2-6cc65acfe409 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:49:23 compute-0 nova_compute[257802]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:49:23 compute-0 nova_compute[257802]:   <nova:name>tempest-TestNetworkBasicOps-server-1210888659</nova:name>
Oct 02 12:49:23 compute-0 nova_compute[257802]:   <nova:creationTime>2025-10-02 12:49:23</nova:creationTime>
Oct 02 12:49:23 compute-0 nova_compute[257802]:   <nova:flavor name="m1.nano">
Oct 02 12:49:23 compute-0 nova_compute[257802]:     <nova:memory>128</nova:memory>
Oct 02 12:49:23 compute-0 nova_compute[257802]:     <nova:disk>1</nova:disk>
Oct 02 12:49:23 compute-0 nova_compute[257802]:     <nova:swap>0</nova:swap>
Oct 02 12:49:23 compute-0 nova_compute[257802]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:49:23 compute-0 nova_compute[257802]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:49:23 compute-0 nova_compute[257802]:   </nova:flavor>
Oct 02 12:49:23 compute-0 nova_compute[257802]:   <nova:owner>
Oct 02 12:49:23 compute-0 nova_compute[257802]:     <nova:user uuid="fb366465e6154871b8a53c9f500105ce">tempest-TestNetworkBasicOps-1692262680-project-member</nova:user>
Oct 02 12:49:23 compute-0 nova_compute[257802]:     <nova:project uuid="ce2ca82c03554560b55ed747ae63f1fb">tempest-TestNetworkBasicOps-1692262680</nova:project>
Oct 02 12:49:23 compute-0 nova_compute[257802]:   </nova:owner>
Oct 02 12:49:23 compute-0 nova_compute[257802]:   <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:49:23 compute-0 nova_compute[257802]:   <nova:ports>
Oct 02 12:49:23 compute-0 nova_compute[257802]:     <nova:port uuid="b1e398cd-0bdc-4194-b98d-41c1a091d947">
Oct 02 12:49:23 compute-0 nova_compute[257802]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 12:49:23 compute-0 nova_compute[257802]:     </nova:port>
Oct 02 12:49:23 compute-0 nova_compute[257802]:     <nova:port uuid="c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc">
Oct 02 12:49:23 compute-0 nova_compute[257802]:       <nova:ip type="fixed" address="10.100.0.20" ipVersion="4"/>
Oct 02 12:49:23 compute-0 nova_compute[257802]:     </nova:port>
Oct 02 12:49:23 compute-0 nova_compute[257802]:   </nova:ports>
Oct 02 12:49:23 compute-0 nova_compute[257802]: </nova:instance>
Oct 02 12:49:23 compute-0 nova_compute[257802]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:23.329 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[f18cd703-8687-4b82-9e2f-9f8c29c18e01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:23.331 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[e7527b56-1f13-4e40-a8e9-fc3a6682de75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:23 compute-0 NetworkManager[44987]: <info>  [1759409363.3517] device (tapda8223ff-f0): carrier: link connected
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.353 2 DEBUG oslo_concurrency.lockutils [None req-63930d78-5473-41cb-9ed2-6cc65acfe409 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "interface-a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.495s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:23.360 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[8221acb6-1449-4cda-85e0-732a3cb9546b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:23.381 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[260a47b2-451a-4ba4-ae88-99e310adc3e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapda8223ff-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:c9:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 232], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 743097, 'reachable_time': 27048, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366344, 'error': None, 'target': 'ovnmeta-da8223ff-ffa4-4359-83b5-a642db1cfd48', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:23.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:23.398 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a5738133-2ca2-4f18-a013-54edc2a07502]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3e:c9dd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 743097, 'tstamp': 743097}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366345, 'error': None, 'target': 'ovnmeta-da8223ff-ffa4-4359-83b5-a642db1cfd48', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:23.415 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f1903700-ebb2-4421-a6b7-f2e94c9d3053]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapda8223ff-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:c9:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 232], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 743097, 'reachable_time': 27048, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366346, 'error': None, 'target': 'ovnmeta-da8223ff-ffa4-4359-83b5-a642db1cfd48', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:23.443 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[497fcb16-80d8-43ee-a779-f52c7099ebda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:23.498 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b0590357-8431-4de5-ab0f-018a7f87c26e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:23.499 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda8223ff-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:23.500 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:23.500 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda8223ff-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:49:23 compute-0 ceph-mon[73607]: pgmap v2689: 305 pgs: 305 active+clean; 534 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 668 KiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 02 12:49:23 compute-0 ceph-mon[73607]: osdmap e368: 3 total, 3 up, 3 in
Oct 02 12:49:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2938637152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:23 compute-0 NetworkManager[44987]: <info>  [1759409363.5023] manager: (tapda8223ff-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/345)
Oct 02 12:49:23 compute-0 kernel: tapda8223ff-f0: entered promiscuous mode
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:23.504 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapda8223ff-f0, col_values=(('external_ids', {'iface-id': '4bab100b-5192-4ee0-98d9-20fe7d6b435c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:23 compute-0 ovn_controller[148183]: 2025-10-02T12:49:23Z|00762|binding|INFO|Releasing lport 4bab100b-5192-4ee0-98d9-20fe7d6b435c from this chassis (sb_readonly=0)
Oct 02 12:49:23 compute-0 nova_compute[257802]: 2025-10-02 12:49:23.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:23.522 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/da8223ff-ffa4-4359-83b5-a642db1cfd48.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/da8223ff-ffa4-4359-83b5-a642db1cfd48.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:23.523 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[984b29fa-ce9e-47da-bd9d-a2c0c152bf92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:23.524 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-da8223ff-ffa4-4359-83b5-a642db1cfd48
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/da8223ff-ffa4-4359-83b5-a642db1cfd48.pid.haproxy
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID da8223ff-ffa4-4359-83b5-a642db1cfd48
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:49:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:23.524 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-da8223ff-ffa4-4359-83b5-a642db1cfd48', 'env', 'PROCESS_TAG=haproxy-da8223ff-ffa4-4359-83b5-a642db1cfd48', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/da8223ff-ffa4-4359-83b5-a642db1cfd48.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:49:23 compute-0 podman[366376]: 2025-10-02 12:49:23.873102266 +0000 UTC m=+0.046556375 container create 3d11e765a483355da3aa50d49206cee0debe8d5794b8964c3b5083e6d54837a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da8223ff-ffa4-4359-83b5-a642db1cfd48, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:49:23 compute-0 systemd[1]: Started libpod-conmon-3d11e765a483355da3aa50d49206cee0debe8d5794b8964c3b5083e6d54837a2.scope.
Oct 02 12:49:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:49:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/201c097828d01b7580fb8b89aaf0ff0b969068212be935338d085a7f370dc837/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:49:23 compute-0 podman[366376]: 2025-10-02 12:49:23.848738718 +0000 UTC m=+0.022192857 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:49:23 compute-0 podman[366376]: 2025-10-02 12:49:23.950670812 +0000 UTC m=+0.124124941 container init 3d11e765a483355da3aa50d49206cee0debe8d5794b8964c3b5083e6d54837a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da8223ff-ffa4-4359-83b5-a642db1cfd48, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:49:23 compute-0 podman[366376]: 2025-10-02 12:49:23.957017737 +0000 UTC m=+0.130471856 container start 3d11e765a483355da3aa50d49206cee0debe8d5794b8964c3b5083e6d54837a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da8223ff-ffa4-4359-83b5-a642db1cfd48, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:49:23 compute-0 neutron-haproxy-ovnmeta-da8223ff-ffa4-4359-83b5-a642db1cfd48[366391]: [NOTICE]   (366395) : New worker (366397) forked
Oct 02 12:49:23 compute-0 neutron-haproxy-ovnmeta-da8223ff-ffa4-4359-83b5-a642db1cfd48[366391]: [NOTICE]   (366395) : Loading success.
Oct 02 12:49:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2691: 305 pgs: 305 active+clean; 534 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 155 op/s
Oct 02 12:49:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:24 compute-0 nova_compute[257802]: 2025-10-02 12:49:24.236 2 DEBUG nova.compute.manager [req-c3147b02-353f-4eb3-bdfc-3a94102baffe req-659f4824-08b7-433b-8b2d-baee213b25cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Received event network-vif-plugged-c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:49:24 compute-0 nova_compute[257802]: 2025-10-02 12:49:24.236 2 DEBUG oslo_concurrency.lockutils [req-c3147b02-353f-4eb3-bdfc-3a94102baffe req-659f4824-08b7-433b-8b2d-baee213b25cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:24 compute-0 nova_compute[257802]: 2025-10-02 12:49:24.236 2 DEBUG oslo_concurrency.lockutils [req-c3147b02-353f-4eb3-bdfc-3a94102baffe req-659f4824-08b7-433b-8b2d-baee213b25cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:24 compute-0 nova_compute[257802]: 2025-10-02 12:49:24.237 2 DEBUG oslo_concurrency.lockutils [req-c3147b02-353f-4eb3-bdfc-3a94102baffe req-659f4824-08b7-433b-8b2d-baee213b25cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:24 compute-0 nova_compute[257802]: 2025-10-02 12:49:24.237 2 DEBUG nova.compute.manager [req-c3147b02-353f-4eb3-bdfc-3a94102baffe req-659f4824-08b7-433b-8b2d-baee213b25cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] No waiting events found dispatching network-vif-plugged-c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:49:24 compute-0 nova_compute[257802]: 2025-10-02 12:49:24.237 2 WARNING nova.compute.manager [req-c3147b02-353f-4eb3-bdfc-3a94102baffe req-659f4824-08b7-433b-8b2d-baee213b25cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Received unexpected event network-vif-plugged-c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc for instance with vm_state active and task_state None.
Oct 02 12:49:24 compute-0 nova_compute[257802]: 2025-10-02 12:49:24.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:24.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:24 compute-0 ovn_controller[148183]: 2025-10-02T12:49:24Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7f:5a:b2 10.100.0.20
Oct 02 12:49:24 compute-0 ovn_controller[148183]: 2025-10-02T12:49:24Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7f:5a:b2 10.100.0.20
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.199 2 DEBUG oslo_concurrency.lockutils [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "interface-a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.199 2 DEBUG oslo_concurrency.lockutils [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "interface-a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.215 2 DEBUG nova.objects.instance [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'flavor' on Instance uuid a44a93f4-fbf3-4ba6-9111-36dbf856ddfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.253 2 DEBUG nova.virt.libvirt.vif [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:48:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1210888659',display_name='tempest-TestNetworkBasicOps-server-1210888659',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1210888659',id=174,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOuizzzsJ0xQoaj6QWBctuJTCzZuNABfaUbqmY1NfxPiQQ1W4zoCTJjFgqJkZAPE8tNumkBg7/MpTOE+q4DWN6dEyxAopDry/w0CNriaUKHv801j6Cb4/rGJW3h1iORzw==',key_name='tempest-TestNetworkBasicOps-2008743034',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:48:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-ghrwnmf1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:48:45Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=a44a93f4-fbf3-4ba6-9111-36dbf856ddfa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc", "address": "fa:16:3e:7f:5a:b2", "network": {"id": "da8223ff-ffa4-4359-83b5-a642db1cfd48", "bridge": "br-int", "label": "tempest-network-smoke--1807143904", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc44a4d12-3f", "ovs_interfaceid": "c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.254 2 DEBUG nova.network.os_vif_util [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc", "address": "fa:16:3e:7f:5a:b2", "network": {"id": "da8223ff-ffa4-4359-83b5-a642db1cfd48", "bridge": "br-int", "label": "tempest-network-smoke--1807143904", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc44a4d12-3f", "ovs_interfaceid": "c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.254 2 DEBUG nova.network.os_vif_util [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:5a:b2,bridge_name='br-int',has_traffic_filtering=True,id=c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc,network=Network(da8223ff-ffa4-4359-83b5-a642db1cfd48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc44a4d12-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.258 2 DEBUG nova.virt.libvirt.guest [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:7f:5a:b2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc44a4d12-3f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.265 2 DEBUG nova.virt.libvirt.guest [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:7f:5a:b2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc44a4d12-3f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.268 2 DEBUG nova.virt.libvirt.driver [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Attempting to detach device tapc44a4d12-3f from instance a44a93f4-fbf3-4ba6-9111-36dbf856ddfa from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.269 2 DEBUG nova.virt.libvirt.guest [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] detach device xml: <interface type="ethernet">
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <mac address="fa:16:3e:7f:5a:b2"/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <model type="virtio"/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <mtu size="1442"/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <target dev="tapc44a4d12-3f"/>
Oct 02 12:49:25 compute-0 nova_compute[257802]: </interface>
Oct 02 12:49:25 compute-0 nova_compute[257802]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.275 2 DEBUG nova.virt.libvirt.guest [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:7f:5a:b2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc44a4d12-3f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.280 2 DEBUG nova.virt.libvirt.guest [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:7f:5a:b2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc44a4d12-3f"/></interface>not found in domain: <domain type='kvm' id='84'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <name>instance-000000ae</name>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <uuid>a44a93f4-fbf3-4ba6-9111-36dbf856ddfa</uuid>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <nova:name>tempest-TestNetworkBasicOps-server-1210888659</nova:name>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <nova:creationTime>2025-10-02 12:49:23</nova:creationTime>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <nova:flavor name="m1.nano">
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <nova:memory>128</nova:memory>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <nova:disk>1</nova:disk>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <nova:swap>0</nova:swap>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   </nova:flavor>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <nova:owner>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <nova:user uuid="fb366465e6154871b8a53c9f500105ce">tempest-TestNetworkBasicOps-1692262680-project-member</nova:user>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <nova:project uuid="ce2ca82c03554560b55ed747ae63f1fb">tempest-TestNetworkBasicOps-1692262680</nova:project>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   </nova:owner>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <nova:ports>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <nova:port uuid="b1e398cd-0bdc-4194-b98d-41c1a091d947">
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </nova:port>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <nova:port uuid="c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc">
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <nova:ip type="fixed" address="10.100.0.20" ipVersion="4"/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </nova:port>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   </nova:ports>
Oct 02 12:49:25 compute-0 nova_compute[257802]: </nova:instance>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <memory unit='KiB'>131072</memory>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <vcpu placement='static'>1</vcpu>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <resource>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <partition>/machine</partition>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   </resource>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <sysinfo type='smbios'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <system>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <entry name='manufacturer'>RDO</entry>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <entry name='serial'>a44a93f4-fbf3-4ba6-9111-36dbf856ddfa</entry>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <entry name='uuid'>a44a93f4-fbf3-4ba6-9111-36dbf856ddfa</entry>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <entry name='family'>Virtual Machine</entry>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </system>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <os>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <boot dev='hd'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <smbios mode='sysinfo'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   </os>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <features>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <vmcoreinfo state='on'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   </features>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <model fallback='forbid'>Nehalem</model>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <feature policy='require' name='x2apic'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <feature policy='require' name='hypervisor'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <feature policy='require' name='vme'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <clock offset='utc'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <timer name='hpet' present='no'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <on_poweroff>destroy</on_poweroff>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <on_reboot>restart</on_reboot>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <on_crash>destroy</on_crash>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <disk type='network' device='disk'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <auth username='openstack'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:         <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <source protocol='rbd' name='vms/a44a93f4-fbf3-4ba6-9111-36dbf856ddfa_disk' index='2'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       </source>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target dev='vda' bus='virtio'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='virtio-disk0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <disk type='network' device='cdrom'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <auth username='openstack'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:         <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <source protocol='rbd' name='vms/a44a93f4-fbf3-4ba6-9111-36dbf856ddfa_disk.config' index='1'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       </source>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target dev='sda' bus='sata'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <readonly/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='sata0-0-0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pcie.0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='1' port='0x10'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.1'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='2' port='0x11'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.2'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='3' port='0x12'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.3'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='4' port='0x13'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.4'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='5' port='0x14'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.5'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='6' port='0x15'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.6'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='7' port='0x16'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.7'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='8' port='0x17'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.8'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='9' port='0x18'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.9'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='10' port='0x19'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.10'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='11' port='0x1a'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.11'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='12' port='0x1b'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.12'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='13' port='0x1c'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.13'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='14' port='0x1d'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.14'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='15' port='0x1e'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.15'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='16' port='0x1f'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.16'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='17' port='0x20'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.17'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='18' port='0x21'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.18'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='19' port='0x22'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.19'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='20' port='0x23'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.20'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='21' port='0x24'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.21'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='22' port='0x25'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.22'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='23' port='0x26'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.23'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='24' port='0x27'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.24'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='25' port='0x28'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.25'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-pci-bridge'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.26'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='usb'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='sata' index='0'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='ide'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <interface type='ethernet'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <mac address='fa:16:3e:97:95:8d'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target dev='tapb1e398cd-0b'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model type='virtio'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <mtu size='1442'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='net0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <interface type='ethernet'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <mac address='fa:16:3e:7f:5a:b2'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target dev='tapc44a4d12-3f'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model type='virtio'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <mtu size='1442'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='net1'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <serial type='pty'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <source path='/dev/pts/0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <log file='/var/lib/nova/instances/a44a93f4-fbf3-4ba6-9111-36dbf856ddfa/console.log' append='off'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target type='isa-serial' port='0'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:         <model name='isa-serial'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       </target>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='serial0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <console type='pty' tty='/dev/pts/0'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <source path='/dev/pts/0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <log file='/var/lib/nova/instances/a44a93f4-fbf3-4ba6-9111-36dbf856ddfa/console.log' append='off'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target type='serial' port='0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='serial0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </console>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <input type='tablet' bus='usb'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='input0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='usb' bus='0' port='1'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </input>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <input type='mouse' bus='ps2'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='input1'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </input>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <input type='keyboard' bus='ps2'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='input2'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </input>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <listen type='address' address='::0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </graphics>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <audio id='1' type='none'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <video>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='video0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </video>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <watchdog model='itco' action='reset'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='watchdog0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </watchdog>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <memballoon model='virtio'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <stats period='10'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='balloon0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <rng model='virtio'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <backend model='random'>/dev/urandom</backend>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='rng0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <label>system_u:system_r:svirt_t:s0:c400,c761</label>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c400,c761</imagelabel>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   </seclabel>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <label>+107:+107</label>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <imagelabel>+107:+107</imagelabel>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   </seclabel>
Oct 02 12:49:25 compute-0 nova_compute[257802]: </domain>
Oct 02 12:49:25 compute-0 nova_compute[257802]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.281 2 INFO nova.virt.libvirt.driver [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Successfully detached device tapc44a4d12-3f from instance a44a93f4-fbf3-4ba6-9111-36dbf856ddfa from the persistent domain config.
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.282 2 DEBUG nova.virt.libvirt.driver [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] (1/8): Attempting to detach device tapc44a4d12-3f with device alias net1 from instance a44a93f4-fbf3-4ba6-9111-36dbf856ddfa from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.282 2 DEBUG nova.virt.libvirt.guest [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] detach device xml: <interface type="ethernet">
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <mac address="fa:16:3e:7f:5a:b2"/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <model type="virtio"/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <mtu size="1442"/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <target dev="tapc44a4d12-3f"/>
Oct 02 12:49:25 compute-0 nova_compute[257802]: </interface>
Oct 02 12:49:25 compute-0 nova_compute[257802]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:49:25 compute-0 kernel: tapc44a4d12-3f (unregistering): left promiscuous mode
Oct 02 12:49:25 compute-0 NetworkManager[44987]: <info>  [1759409365.3303] device (tapc44a4d12-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:49:25 compute-0 ovn_controller[148183]: 2025-10-02T12:49:25Z|00763|binding|INFO|Releasing lport c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc from this chassis (sb_readonly=0)
Oct 02 12:49:25 compute-0 ovn_controller[148183]: 2025-10-02T12:49:25Z|00764|binding|INFO|Setting lport c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc down in Southbound
Oct 02 12:49:25 compute-0 ovn_controller[148183]: 2025-10-02T12:49:25Z|00765|binding|INFO|Removing iface tapc44a4d12-3f ovn-installed in OVS
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:25.348 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:5a:b2 10.100.0.20'], port_security=['fa:16:3e:7f:5a:b2 10.100.0.20'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.20/28', 'neutron:device_id': 'a44a93f4-fbf3-4ba6-9111-36dbf856ddfa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-da8223ff-ffa4-4359-83b5-a642db1cfd48', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c114e1a6-21d7-49a2-a13f-595584b99547', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb724e20-5004-45bb-be58-41678b3148fa, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.350 2 DEBUG nova.virt.libvirt.driver [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Start waiting for the detach event from libvirt for device tapc44a4d12-3f with device alias net1 for instance a44a93f4-fbf3-4ba6-9111-36dbf856ddfa _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 12:49:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:25.351 158261 INFO neutron.agent.ovn.metadata.agent [-] Port c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc in datapath da8223ff-ffa4-4359-83b5-a642db1cfd48 unbound from our chassis
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.351 2 DEBUG nova.virt.libvirt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Received event <DeviceRemovedEvent: 1759409365.3502254, a44a93f4-fbf3-4ba6-9111-36dbf856ddfa => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.351 2 DEBUG nova.virt.libvirt.guest [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:7f:5a:b2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc44a4d12-3f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:49:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:25.354 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network da8223ff-ffa4-4359-83b5-a642db1cfd48, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:49:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:25.355 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a2e91cf4-2e5e-490f-9f9d-ebb923bf1c55]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.358 2 DEBUG nova.virt.libvirt.guest [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:7f:5a:b2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc44a4d12-3f"/></interface>not found in domain: <domain type='kvm' id='84'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <name>instance-000000ae</name>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <uuid>a44a93f4-fbf3-4ba6-9111-36dbf856ddfa</uuid>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <nova:name>tempest-TestNetworkBasicOps-server-1210888659</nova:name>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <nova:creationTime>2025-10-02 12:49:23</nova:creationTime>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <nova:flavor name="m1.nano">
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <nova:memory>128</nova:memory>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <nova:disk>1</nova:disk>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <nova:swap>0</nova:swap>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   </nova:flavor>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <nova:owner>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <nova:user uuid="fb366465e6154871b8a53c9f500105ce">tempest-TestNetworkBasicOps-1692262680-project-member</nova:user>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <nova:project uuid="ce2ca82c03554560b55ed747ae63f1fb">tempest-TestNetworkBasicOps-1692262680</nova:project>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   </nova:owner>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <nova:ports>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <nova:port uuid="b1e398cd-0bdc-4194-b98d-41c1a091d947">
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </nova:port>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <nova:port uuid="c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc">
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <nova:ip type="fixed" address="10.100.0.20" ipVersion="4"/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </nova:port>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   </nova:ports>
Oct 02 12:49:25 compute-0 nova_compute[257802]: </nova:instance>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <memory unit='KiB'>131072</memory>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <vcpu placement='static'>1</vcpu>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <resource>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <partition>/machine</partition>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   </resource>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <sysinfo type='smbios'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <system>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <entry name='manufacturer'>RDO</entry>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <entry name='serial'>a44a93f4-fbf3-4ba6-9111-36dbf856ddfa</entry>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <entry name='uuid'>a44a93f4-fbf3-4ba6-9111-36dbf856ddfa</entry>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <entry name='family'>Virtual Machine</entry>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </system>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <os>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <boot dev='hd'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <smbios mode='sysinfo'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   </os>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <features>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <vmcoreinfo state='on'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   </features>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <model fallback='forbid'>Nehalem</model>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <feature policy='require' name='x2apic'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <feature policy='require' name='hypervisor'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <feature policy='require' name='vme'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <clock offset='utc'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <timer name='hpet' present='no'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <on_poweroff>destroy</on_poweroff>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <on_reboot>restart</on_reboot>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <on_crash>destroy</on_crash>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <disk type='network' device='disk'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <auth username='openstack'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:         <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <source protocol='rbd' name='vms/a44a93f4-fbf3-4ba6-9111-36dbf856ddfa_disk' index='2'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       </source>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target dev='vda' bus='virtio'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='virtio-disk0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <disk type='network' device='cdrom'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <auth username='openstack'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:         <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <source protocol='rbd' name='vms/a44a93f4-fbf3-4ba6-9111-36dbf856ddfa_disk.config' index='1'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       </source>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target dev='sda' bus='sata'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <readonly/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='sata0-0-0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pcie.0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='1' port='0x10'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.1'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='2' port='0x11'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.2'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='3' port='0x12'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.3'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='4' port='0x13'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.4'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:25.357 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-da8223ff-ffa4-4359-83b5-a642db1cfd48 namespace which is not needed anymore
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='5' port='0x14'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.5'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='6' port='0x15'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.6'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='7' port='0x16'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.7'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='8' port='0x17'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.8'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='9' port='0x18'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.9'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='10' port='0x19'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.10'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='11' port='0x1a'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.11'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='12' port='0x1b'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.12'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='13' port='0x1c'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.13'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='14' port='0x1d'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.14'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='15' port='0x1e'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.15'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='16' port='0x1f'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.16'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='17' port='0x20'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.17'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='18' port='0x21'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.18'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='19' port='0x22'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.19'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='20' port='0x23'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.20'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='21' port='0x24'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.21'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='22' port='0x25'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.22'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='23' port='0x26'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.23'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='24' port='0x27'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.24'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target chassis='25' port='0x28'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.25'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model name='pcie-pci-bridge'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='pci.26'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='usb'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <controller type='sata' index='0'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='ide'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <interface type='ethernet'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <mac address='fa:16:3e:97:95:8d'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target dev='tapb1e398cd-0b'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model type='virtio'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <mtu size='1442'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='net0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <serial type='pty'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <source path='/dev/pts/0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <log file='/var/lib/nova/instances/a44a93f4-fbf3-4ba6-9111-36dbf856ddfa/console.log' append='off'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target type='isa-serial' port='0'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:         <model name='isa-serial'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       </target>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='serial0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <console type='pty' tty='/dev/pts/0'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <source path='/dev/pts/0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <log file='/var/lib/nova/instances/a44a93f4-fbf3-4ba6-9111-36dbf856ddfa/console.log' append='off'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <target type='serial' port='0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='serial0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </console>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <input type='tablet' bus='usb'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='input0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='usb' bus='0' port='1'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </input>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <input type='mouse' bus='ps2'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='input1'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </input>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <input type='keyboard' bus='ps2'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='input2'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </input>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <listen type='address' address='::0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </graphics>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <audio id='1' type='none'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <video>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='video0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </video>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <watchdog model='itco' action='reset'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='watchdog0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </watchdog>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <memballoon model='virtio'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <stats period='10'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='balloon0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <rng model='virtio'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <backend model='random'>/dev/urandom</backend>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <alias name='rng0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <label>system_u:system_r:svirt_t:s0:c400,c761</label>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c400,c761</imagelabel>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   </seclabel>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <label>+107:+107</label>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <imagelabel>+107:+107</imagelabel>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   </seclabel>
Oct 02 12:49:25 compute-0 nova_compute[257802]: </domain>
Oct 02 12:49:25 compute-0 nova_compute[257802]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.358 2 INFO nova.virt.libvirt.driver [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Successfully detached device tapc44a4d12-3f from instance a44a93f4-fbf3-4ba6-9111-36dbf856ddfa from the live domain config.
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.359 2 DEBUG nova.virt.libvirt.vif [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:48:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1210888659',display_name='tempest-TestNetworkBasicOps-server-1210888659',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1210888659',id=174,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOuizzzsJ0xQoaj6QWBctuJTCzZuNABfaUbqmY1NfxPiQQ1W4zoCTJjFgqJkZAPE8tNumkBg7/MpTOE+q4DWN6dEyxAopDry/w0CNriaUKHv801j6Cb4/rGJW3h1iORzw==',key_name='tempest-TestNetworkBasicOps-2008743034',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:48:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-ghrwnmf1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:48:45Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=a44a93f4-fbf3-4ba6-9111-36dbf856ddfa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc", "address": "fa:16:3e:7f:5a:b2", "network": {"id": "da8223ff-ffa4-4359-83b5-a642db1cfd48", "bridge": "br-int", "label": "tempest-network-smoke--1807143904", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc44a4d12-3f", "ovs_interfaceid": "c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.359 2 DEBUG nova.network.os_vif_util [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc", "address": "fa:16:3e:7f:5a:b2", "network": {"id": "da8223ff-ffa4-4359-83b5-a642db1cfd48", "bridge": "br-int", "label": "tempest-network-smoke--1807143904", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc44a4d12-3f", "ovs_interfaceid": "c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.360 2 DEBUG nova.network.os_vif_util [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:5a:b2,bridge_name='br-int',has_traffic_filtering=True,id=c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc,network=Network(da8223ff-ffa4-4359-83b5-a642db1cfd48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc44a4d12-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.360 2 DEBUG os_vif [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:5a:b2,bridge_name='br-int',has_traffic_filtering=True,id=c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc,network=Network(da8223ff-ffa4-4359-83b5-a642db1cfd48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc44a4d12-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.362 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc44a4d12-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.367 2 INFO os_vif [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:5a:b2,bridge_name='br-int',has_traffic_filtering=True,id=c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc,network=Network(da8223ff-ffa4-4359-83b5-a642db1cfd48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc44a4d12-3f')
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.368 2 DEBUG nova.virt.libvirt.guest [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <nova:name>tempest-TestNetworkBasicOps-server-1210888659</nova:name>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <nova:creationTime>2025-10-02 12:49:25</nova:creationTime>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <nova:flavor name="m1.nano">
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <nova:memory>128</nova:memory>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <nova:disk>1</nova:disk>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <nova:swap>0</nova:swap>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   </nova:flavor>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <nova:owner>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <nova:user uuid="fb366465e6154871b8a53c9f500105ce">tempest-TestNetworkBasicOps-1692262680-project-member</nova:user>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <nova:project uuid="ce2ca82c03554560b55ed747ae63f1fb">tempest-TestNetworkBasicOps-1692262680</nova:project>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   </nova:owner>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   <nova:ports>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     <nova:port uuid="b1e398cd-0bdc-4194-b98d-41c1a091d947">
Oct 02 12:49:25 compute-0 nova_compute[257802]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 12:49:25 compute-0 nova_compute[257802]:     </nova:port>
Oct 02 12:49:25 compute-0 nova_compute[257802]:   </nova:ports>
Oct 02 12:49:25 compute-0 nova_compute[257802]: </nova:instance>
Oct 02 12:49:25 compute-0 nova_compute[257802]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 12:49:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:49:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:25.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:49:25 compute-0 neutron-haproxy-ovnmeta-da8223ff-ffa4-4359-83b5-a642db1cfd48[366391]: [NOTICE]   (366395) : haproxy version is 2.8.14-c23fe91
Oct 02 12:49:25 compute-0 neutron-haproxy-ovnmeta-da8223ff-ffa4-4359-83b5-a642db1cfd48[366391]: [NOTICE]   (366395) : path to executable is /usr/sbin/haproxy
Oct 02 12:49:25 compute-0 neutron-haproxy-ovnmeta-da8223ff-ffa4-4359-83b5-a642db1cfd48[366391]: [WARNING]  (366395) : Exiting Master process...
Oct 02 12:49:25 compute-0 neutron-haproxy-ovnmeta-da8223ff-ffa4-4359-83b5-a642db1cfd48[366391]: [ALERT]    (366395) : Current worker (366397) exited with code 143 (Terminated)
Oct 02 12:49:25 compute-0 neutron-haproxy-ovnmeta-da8223ff-ffa4-4359-83b5-a642db1cfd48[366391]: [WARNING]  (366395) : All workers exited. Exiting... (0)
Oct 02 12:49:25 compute-0 systemd[1]: libpod-3d11e765a483355da3aa50d49206cee0debe8d5794b8964c3b5083e6d54837a2.scope: Deactivated successfully.
Oct 02 12:49:25 compute-0 podman[366429]: 2025-10-02 12:49:25.50120766 +0000 UTC m=+0.049079016 container died 3d11e765a483355da3aa50d49206cee0debe8d5794b8964c3b5083e6d54837a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da8223ff-ffa4-4359-83b5-a642db1cfd48, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 12:49:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e368 do_prune osdmap full prune enabled
Oct 02 12:49:25 compute-0 ceph-mon[73607]: pgmap v2691: 305 pgs: 305 active+clean; 534 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 155 op/s
Oct 02 12:49:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3d11e765a483355da3aa50d49206cee0debe8d5794b8964c3b5083e6d54837a2-userdata-shm.mount: Deactivated successfully.
Oct 02 12:49:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-201c097828d01b7580fb8b89aaf0ff0b969068212be935338d085a7f370dc837-merged.mount: Deactivated successfully.
Oct 02 12:49:25 compute-0 podman[366429]: 2025-10-02 12:49:25.546979255 +0000 UTC m=+0.094850611 container cleanup 3d11e765a483355da3aa50d49206cee0debe8d5794b8964c3b5083e6d54837a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da8223ff-ffa4-4359-83b5-a642db1cfd48, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:49:25 compute-0 systemd[1]: libpod-conmon-3d11e765a483355da3aa50d49206cee0debe8d5794b8964c3b5083e6d54837a2.scope: Deactivated successfully.
Oct 02 12:49:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e369 e369: 3 total, 3 up, 3 in
Oct 02 12:49:25 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e369: 3 total, 3 up, 3 in
Oct 02 12:49:25 compute-0 podman[366461]: 2025-10-02 12:49:25.614331409 +0000 UTC m=+0.044236388 container remove 3d11e765a483355da3aa50d49206cee0debe8d5794b8964c3b5083e6d54837a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da8223ff-ffa4-4359-83b5-a642db1cfd48, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:49:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:25.621 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[dc678c68-fad3-4e63-a4c5-5cbbfbb87f8b]: (4, ('Thu Oct  2 12:49:25 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-da8223ff-ffa4-4359-83b5-a642db1cfd48 (3d11e765a483355da3aa50d49206cee0debe8d5794b8964c3b5083e6d54837a2)\n3d11e765a483355da3aa50d49206cee0debe8d5794b8964c3b5083e6d54837a2\nThu Oct  2 12:49:25 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-da8223ff-ffa4-4359-83b5-a642db1cfd48 (3d11e765a483355da3aa50d49206cee0debe8d5794b8964c3b5083e6d54837a2)\n3d11e765a483355da3aa50d49206cee0debe8d5794b8964c3b5083e6d54837a2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:25.623 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b52fcd54-e27e-463a-8520-4c800da1bd52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:25.624 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda8223ff-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:25 compute-0 kernel: tapda8223ff-f0: left promiscuous mode
Oct 02 12:49:25 compute-0 nova_compute[257802]: 2025-10-02 12:49:25.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:25.646 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[964dc3bf-6b1a-47fa-b8df-3aa53e305420]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:25.669 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[77b1f542-19bd-4fbc-a47b-12ad63d8fef0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:25.670 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[39e88648-8a09-4085-a427-303e938dfaf0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:25.687 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[02c0012a-526d-4deb-a35b-86c9531bd10c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 743091, 'reachable_time': 15649, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366476, 'error': None, 'target': 'ovnmeta-da8223ff-ffa4-4359-83b5-a642db1cfd48', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:25 compute-0 systemd[1]: run-netns-ovnmeta\x2dda8223ff\x2dffa4\x2d4359\x2d83b5\x2da642db1cfd48.mount: Deactivated successfully.
Oct 02 12:49:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:25.691 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-da8223ff-ffa4-4359-83b5-a642db1cfd48 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:49:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:25.692 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[3ab71a94-a058-4894-a5b7-5f00664ceb44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2693: 305 pgs: 305 active+clean; 506 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.6 MiB/s wr, 282 op/s
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.122 2 DEBUG nova.network.neutron [req-cbad1151-1f99-406b-b3f7-1a1f87257fb0 req-dc2ea670-3ba3-43f2-870d-b6fde93c1184 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Updated VIF entry in instance network info cache for port c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.123 2 DEBUG nova.network.neutron [req-cbad1151-1f99-406b-b3f7-1a1f87257fb0 req-dc2ea670-3ba3-43f2-870d-b6fde93c1184 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Updating instance_info_cache with network_info: [{"id": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "address": "fa:16:3e:97:95:8d", "network": {"id": "7ab6b6d4-5590-4247-9dd8-59243897cce9", "bridge": "br-int", "label": "tempest-network-smoke--615376330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1e398cd-0b", "ovs_interfaceid": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc", "address": "fa:16:3e:7f:5a:b2", "network": {"id": "da8223ff-ffa4-4359-83b5-a642db1cfd48", "bridge": "br-int", "label": "tempest-network-smoke--1807143904", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc44a4d12-3f", "ovs_interfaceid": "c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.135 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.135 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.135 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.161 2 DEBUG oslo_concurrency.lockutils [req-cbad1151-1f99-406b-b3f7-1a1f87257fb0 req-dc2ea670-3ba3-43f2-870d-b6fde93c1184 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-a44a93f4-fbf3-4ba6-9111-36dbf856ddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.336 2 DEBUG oslo_concurrency.lockutils [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "refresh_cache-a44a93f4-fbf3-4ba6-9111-36dbf856ddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.337 2 DEBUG oslo_concurrency.lockutils [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquired lock "refresh_cache-a44a93f4-fbf3-4ba6-9111-36dbf856ddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.337 2 DEBUG nova.network.neutron [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.374 2 DEBUG nova.compute.manager [req-7b3c32e3-b745-4b65-a51d-867ecc8ccf70 req-e2fc695a-0ce0-4aba-81d6-845340c0a4f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Received event network-vif-plugged-c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.375 2 DEBUG oslo_concurrency.lockutils [req-7b3c32e3-b745-4b65-a51d-867ecc8ccf70 req-e2fc695a-0ce0-4aba-81d6-845340c0a4f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.375 2 DEBUG oslo_concurrency.lockutils [req-7b3c32e3-b745-4b65-a51d-867ecc8ccf70 req-e2fc695a-0ce0-4aba-81d6-845340c0a4f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.375 2 DEBUG oslo_concurrency.lockutils [req-7b3c32e3-b745-4b65-a51d-867ecc8ccf70 req-e2fc695a-0ce0-4aba-81d6-845340c0a4f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.375 2 DEBUG nova.compute.manager [req-7b3c32e3-b745-4b65-a51d-867ecc8ccf70 req-e2fc695a-0ce0-4aba-81d6-845340c0a4f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] No waiting events found dispatching network-vif-plugged-c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.375 2 WARNING nova.compute.manager [req-7b3c32e3-b745-4b65-a51d-867ecc8ccf70 req-e2fc695a-0ce0-4aba-81d6-845340c0a4f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Received unexpected event network-vif-plugged-c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc for instance with vm_state active and task_state None.
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.376 2 DEBUG nova.compute.manager [req-7b3c32e3-b745-4b65-a51d-867ecc8ccf70 req-e2fc695a-0ce0-4aba-81d6-845340c0a4f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Received event network-vif-unplugged-c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.376 2 DEBUG oslo_concurrency.lockutils [req-7b3c32e3-b745-4b65-a51d-867ecc8ccf70 req-e2fc695a-0ce0-4aba-81d6-845340c0a4f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.376 2 DEBUG oslo_concurrency.lockutils [req-7b3c32e3-b745-4b65-a51d-867ecc8ccf70 req-e2fc695a-0ce0-4aba-81d6-845340c0a4f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.376 2 DEBUG oslo_concurrency.lockutils [req-7b3c32e3-b745-4b65-a51d-867ecc8ccf70 req-e2fc695a-0ce0-4aba-81d6-845340c0a4f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.376 2 DEBUG nova.compute.manager [req-7b3c32e3-b745-4b65-a51d-867ecc8ccf70 req-e2fc695a-0ce0-4aba-81d6-845340c0a4f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] No waiting events found dispatching network-vif-unplugged-c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.376 2 WARNING nova.compute.manager [req-7b3c32e3-b745-4b65-a51d-867ecc8ccf70 req-e2fc695a-0ce0-4aba-81d6-845340c0a4f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Received unexpected event network-vif-unplugged-c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc for instance with vm_state active and task_state None.
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.376 2 DEBUG nova.compute.manager [req-7b3c32e3-b745-4b65-a51d-867ecc8ccf70 req-e2fc695a-0ce0-4aba-81d6-845340c0a4f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Received event network-vif-plugged-c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.377 2 DEBUG oslo_concurrency.lockutils [req-7b3c32e3-b745-4b65-a51d-867ecc8ccf70 req-e2fc695a-0ce0-4aba-81d6-845340c0a4f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.377 2 DEBUG oslo_concurrency.lockutils [req-7b3c32e3-b745-4b65-a51d-867ecc8ccf70 req-e2fc695a-0ce0-4aba-81d6-845340c0a4f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.377 2 DEBUG oslo_concurrency.lockutils [req-7b3c32e3-b745-4b65-a51d-867ecc8ccf70 req-e2fc695a-0ce0-4aba-81d6-845340c0a4f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.377 2 DEBUG nova.compute.manager [req-7b3c32e3-b745-4b65-a51d-867ecc8ccf70 req-e2fc695a-0ce0-4aba-81d6-845340c0a4f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] No waiting events found dispatching network-vif-plugged-c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.377 2 WARNING nova.compute.manager [req-7b3c32e3-b745-4b65-a51d-867ecc8ccf70 req-e2fc695a-0ce0-4aba-81d6-845340c0a4f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Received unexpected event network-vif-plugged-c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc for instance with vm_state active and task_state None.
Oct 02 12:49:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:26.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.461 2 DEBUG nova.compute.manager [req-f8847d22-95a8-46fe-9744-b1aeb6e77af5 req-daae834b-edea-486b-bd97-34365e8d3eef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Received event network-vif-deleted-c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.462 2 INFO nova.compute.manager [req-f8847d22-95a8-46fe-9744-b1aeb6e77af5 req-daae834b-edea-486b-bd97-34365e8d3eef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Neutron deleted interface c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc; detaching it from the instance and deleting it from the info cache
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.462 2 DEBUG nova.network.neutron [req-f8847d22-95a8-46fe-9744-b1aeb6e77af5 req-daae834b-edea-486b-bd97-34365e8d3eef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Updating instance_info_cache with network_info: [{"id": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "address": "fa:16:3e:97:95:8d", "network": {"id": "7ab6b6d4-5590-4247-9dd8-59243897cce9", "bridge": "br-int", "label": "tempest-network-smoke--615376330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1e398cd-0b", "ovs_interfaceid": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.491 2 DEBUG nova.objects.instance [req-f8847d22-95a8-46fe-9744-b1aeb6e77af5 req-daae834b-edea-486b-bd97-34365e8d3eef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lazy-loading 'system_metadata' on Instance uuid a44a93f4-fbf3-4ba6-9111-36dbf856ddfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.512 2 DEBUG nova.objects.instance [req-f8847d22-95a8-46fe-9744-b1aeb6e77af5 req-daae834b-edea-486b-bd97-34365e8d3eef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lazy-loading 'flavor' on Instance uuid a44a93f4-fbf3-4ba6-9111-36dbf856ddfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.537 2 DEBUG nova.virt.libvirt.vif [req-f8847d22-95a8-46fe-9744-b1aeb6e77af5 req-daae834b-edea-486b-bd97-34365e8d3eef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:48:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1210888659',display_name='tempest-TestNetworkBasicOps-server-1210888659',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1210888659',id=174,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOuizzzsJ0xQoaj6QWBctuJTCzZuNABfaUbqmY1NfxPiQQ1W4zoCTJjFgqJkZAPE8tNumkBg7/MpTOE+q4DWN6dEyxAopDry/w0CNriaUKHv801j6Cb4/rGJW3h1iORzw==',key_name='tempest-TestNetworkBasicOps-2008743034',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:48:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-ghrwnmf1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:48:45Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=a44a93f4-fbf3-4ba6-9111-36dbf856ddfa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc", "address": "fa:16:3e:7f:5a:b2", "network": {"id": "da8223ff-ffa4-4359-83b5-a642db1cfd48", "bridge": "br-int", "label": "tempest-network-smoke--1807143904", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc44a4d12-3f", "ovs_interfaceid": "c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.538 2 DEBUG nova.network.os_vif_util [req-f8847d22-95a8-46fe-9744-b1aeb6e77af5 req-daae834b-edea-486b-bd97-34365e8d3eef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Converting VIF {"id": "c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc", "address": "fa:16:3e:7f:5a:b2", "network": {"id": "da8223ff-ffa4-4359-83b5-a642db1cfd48", "bridge": "br-int", "label": "tempest-network-smoke--1807143904", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc44a4d12-3f", "ovs_interfaceid": "c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.539 2 DEBUG nova.network.os_vif_util [req-f8847d22-95a8-46fe-9744-b1aeb6e77af5 req-daae834b-edea-486b-bd97-34365e8d3eef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:5a:b2,bridge_name='br-int',has_traffic_filtering=True,id=c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc,network=Network(da8223ff-ffa4-4359-83b5-a642db1cfd48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc44a4d12-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.544 2 DEBUG nova.virt.libvirt.guest [req-f8847d22-95a8-46fe-9744-b1aeb6e77af5 req-daae834b-edea-486b-bd97-34365e8d3eef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:7f:5a:b2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc44a4d12-3f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.550 2 DEBUG nova.virt.libvirt.guest [req-f8847d22-95a8-46fe-9744-b1aeb6e77af5 req-daae834b-edea-486b-bd97-34365e8d3eef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:7f:5a:b2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc44a4d12-3f"/></interface>not found in domain: <domain type='kvm' id='84'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <name>instance-000000ae</name>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <uuid>a44a93f4-fbf3-4ba6-9111-36dbf856ddfa</uuid>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <nova:name>tempest-TestNetworkBasicOps-server-1210888659</nova:name>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <nova:creationTime>2025-10-02 12:49:25</nova:creationTime>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <nova:flavor name="m1.nano">
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <nova:memory>128</nova:memory>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <nova:disk>1</nova:disk>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <nova:swap>0</nova:swap>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   </nova:flavor>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <nova:owner>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <nova:user uuid="fb366465e6154871b8a53c9f500105ce">tempest-TestNetworkBasicOps-1692262680-project-member</nova:user>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <nova:project uuid="ce2ca82c03554560b55ed747ae63f1fb">tempest-TestNetworkBasicOps-1692262680</nova:project>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   </nova:owner>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <nova:ports>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <nova:port uuid="b1e398cd-0bdc-4194-b98d-41c1a091d947">
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </nova:port>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   </nova:ports>
Oct 02 12:49:26 compute-0 nova_compute[257802]: </nova:instance>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <memory unit='KiB'>131072</memory>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <vcpu placement='static'>1</vcpu>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <resource>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <partition>/machine</partition>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   </resource>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <sysinfo type='smbios'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <system>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <entry name='manufacturer'>RDO</entry>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <entry name='serial'>a44a93f4-fbf3-4ba6-9111-36dbf856ddfa</entry>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <entry name='uuid'>a44a93f4-fbf3-4ba6-9111-36dbf856ddfa</entry>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <entry name='family'>Virtual Machine</entry>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </system>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <os>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <boot dev='hd'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <smbios mode='sysinfo'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   </os>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <features>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <vmcoreinfo state='on'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   </features>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <model fallback='forbid'>Nehalem</model>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <feature policy='require' name='x2apic'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <feature policy='require' name='hypervisor'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <feature policy='require' name='vme'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <clock offset='utc'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <timer name='hpet' present='no'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <on_poweroff>destroy</on_poweroff>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <on_reboot>restart</on_reboot>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <on_crash>destroy</on_crash>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <disk type='network' device='disk'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <auth username='openstack'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:         <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <source protocol='rbd' name='vms/a44a93f4-fbf3-4ba6-9111-36dbf856ddfa_disk' index='2'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       </source>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target dev='vda' bus='virtio'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='virtio-disk0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <disk type='network' device='cdrom'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <auth username='openstack'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:         <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <source protocol='rbd' name='vms/a44a93f4-fbf3-4ba6-9111-36dbf856ddfa_disk.config' index='1'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       </source>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target dev='sda' bus='sata'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <readonly/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='sata0-0-0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pcie.0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='1' port='0x10'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.1'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='2' port='0x11'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.2'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='3' port='0x12'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.3'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='4' port='0x13'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.4'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='5' port='0x14'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.5'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='6' port='0x15'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.6'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='7' port='0x16'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.7'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='8' port='0x17'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.8'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='9' port='0x18'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.9'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='10' port='0x19'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.10'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='11' port='0x1a'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.11'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='12' port='0x1b'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.12'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='13' port='0x1c'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.13'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='14' port='0x1d'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.14'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='15' port='0x1e'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.15'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='16' port='0x1f'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.16'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='17' port='0x20'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.17'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='18' port='0x21'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.18'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='19' port='0x22'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.19'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='20' port='0x23'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.20'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='21' port='0x24'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.21'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='22' port='0x25'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.22'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='23' port='0x26'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.23'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='24' port='0x27'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.24'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='25' port='0x28'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.25'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-pci-bridge'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.26'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='usb'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='sata' index='0'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='ide'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <interface type='ethernet'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <mac address='fa:16:3e:97:95:8d'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target dev='tapb1e398cd-0b'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model type='virtio'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <mtu size='1442'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='net0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <serial type='pty'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <source path='/dev/pts/0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <log file='/var/lib/nova/instances/a44a93f4-fbf3-4ba6-9111-36dbf856ddfa/console.log' append='off'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target type='isa-serial' port='0'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:         <model name='isa-serial'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       </target>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='serial0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <console type='pty' tty='/dev/pts/0'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <source path='/dev/pts/0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <log file='/var/lib/nova/instances/a44a93f4-fbf3-4ba6-9111-36dbf856ddfa/console.log' append='off'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target type='serial' port='0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='serial0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </console>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <input type='tablet' bus='usb'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='input0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='usb' bus='0' port='1'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </input>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <input type='mouse' bus='ps2'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='input1'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </input>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <input type='keyboard' bus='ps2'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='input2'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </input>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <listen type='address' address='::0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </graphics>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <audio id='1' type='none'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <video>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='video0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </video>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <watchdog model='itco' action='reset'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='watchdog0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </watchdog>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <memballoon model='virtio'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <stats period='10'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='balloon0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <rng model='virtio'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <backend model='random'>/dev/urandom</backend>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='rng0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <label>system_u:system_r:svirt_t:s0:c400,c761</label>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c400,c761</imagelabel>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   </seclabel>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <label>+107:+107</label>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <imagelabel>+107:+107</imagelabel>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   </seclabel>
Oct 02 12:49:26 compute-0 nova_compute[257802]: </domain>
Oct 02 12:49:26 compute-0 nova_compute[257802]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.551 2 DEBUG nova.virt.libvirt.guest [req-f8847d22-95a8-46fe-9744-b1aeb6e77af5 req-daae834b-edea-486b-bd97-34365e8d3eef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:7f:5a:b2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc44a4d12-3f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.554 2 DEBUG nova.virt.libvirt.guest [req-f8847d22-95a8-46fe-9744-b1aeb6e77af5 req-daae834b-edea-486b-bd97-34365e8d3eef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:7f:5a:b2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc44a4d12-3f"/></interface>not found in domain: <domain type='kvm' id='84'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <name>instance-000000ae</name>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <uuid>a44a93f4-fbf3-4ba6-9111-36dbf856ddfa</uuid>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <nova:name>tempest-TestNetworkBasicOps-server-1210888659</nova:name>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <nova:creationTime>2025-10-02 12:49:25</nova:creationTime>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <nova:flavor name="m1.nano">
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <nova:memory>128</nova:memory>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <nova:disk>1</nova:disk>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <nova:swap>0</nova:swap>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   </nova:flavor>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <nova:owner>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <nova:user uuid="fb366465e6154871b8a53c9f500105ce">tempest-TestNetworkBasicOps-1692262680-project-member</nova:user>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <nova:project uuid="ce2ca82c03554560b55ed747ae63f1fb">tempest-TestNetworkBasicOps-1692262680</nova:project>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   </nova:owner>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <nova:ports>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <nova:port uuid="b1e398cd-0bdc-4194-b98d-41c1a091d947">
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </nova:port>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   </nova:ports>
Oct 02 12:49:26 compute-0 nova_compute[257802]: </nova:instance>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <memory unit='KiB'>131072</memory>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <vcpu placement='static'>1</vcpu>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <resource>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <partition>/machine</partition>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   </resource>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <sysinfo type='smbios'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <system>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <entry name='manufacturer'>RDO</entry>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <entry name='serial'>a44a93f4-fbf3-4ba6-9111-36dbf856ddfa</entry>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <entry name='uuid'>a44a93f4-fbf3-4ba6-9111-36dbf856ddfa</entry>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <entry name='family'>Virtual Machine</entry>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </system>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <os>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <boot dev='hd'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <smbios mode='sysinfo'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   </os>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <features>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <vmcoreinfo state='on'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   </features>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <model fallback='forbid'>Nehalem</model>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <feature policy='require' name='x2apic'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <feature policy='require' name='hypervisor'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <feature policy='require' name='vme'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <clock offset='utc'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <timer name='hpet' present='no'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <on_poweroff>destroy</on_poweroff>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <on_reboot>restart</on_reboot>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <on_crash>destroy</on_crash>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <disk type='network' device='disk'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <auth username='openstack'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:         <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <source protocol='rbd' name='vms/a44a93f4-fbf3-4ba6-9111-36dbf856ddfa_disk' index='2'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       </source>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target dev='vda' bus='virtio'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='virtio-disk0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <disk type='network' device='cdrom'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <auth username='openstack'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:         <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <source protocol='rbd' name='vms/a44a93f4-fbf3-4ba6-9111-36dbf856ddfa_disk.config' index='1'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       </source>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target dev='sda' bus='sata'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <readonly/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='sata0-0-0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pcie.0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='1' port='0x10'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.1'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='2' port='0x11'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.2'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='3' port='0x12'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.3'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='4' port='0x13'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.4'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='5' port='0x14'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.5'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='6' port='0x15'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.6'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='7' port='0x16'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.7'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='8' port='0x17'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.8'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='9' port='0x18'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.9'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='10' port='0x19'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.10'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='11' port='0x1a'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.11'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='12' port='0x1b'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.12'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='13' port='0x1c'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.13'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='14' port='0x1d'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.14'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='15' port='0x1e'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.15'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='16' port='0x1f'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.16'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='17' port='0x20'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.17'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='18' port='0x21'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.18'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='19' port='0x22'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.19'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='20' port='0x23'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.20'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='21' port='0x24'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.21'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='22' port='0x25'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.22'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='23' port='0x26'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.23'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='24' port='0x27'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.24'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-root-port'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target chassis='25' port='0x28'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.25'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model name='pcie-pci-bridge'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='pci.26'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='usb'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <controller type='sata' index='0'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='ide'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </controller>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <interface type='ethernet'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <mac address='fa:16:3e:97:95:8d'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target dev='tapb1e398cd-0b'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model type='virtio'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <mtu size='1442'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='net0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <serial type='pty'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <source path='/dev/pts/0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <log file='/var/lib/nova/instances/a44a93f4-fbf3-4ba6-9111-36dbf856ddfa/console.log' append='off'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target type='isa-serial' port='0'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:         <model name='isa-serial'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       </target>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='serial0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <console type='pty' tty='/dev/pts/0'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <source path='/dev/pts/0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <log file='/var/lib/nova/instances/a44a93f4-fbf3-4ba6-9111-36dbf856ddfa/console.log' append='off'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <target type='serial' port='0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='serial0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </console>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <input type='tablet' bus='usb'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='input0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='usb' bus='0' port='1'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </input>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <input type='mouse' bus='ps2'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='input1'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </input>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <input type='keyboard' bus='ps2'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='input2'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </input>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <listen type='address' address='::0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </graphics>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <audio id='1' type='none'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <video>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='video0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </video>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <watchdog model='itco' action='reset'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='watchdog0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </watchdog>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <memballoon model='virtio'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <stats period='10'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='balloon0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <rng model='virtio'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <backend model='random'>/dev/urandom</backend>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <alias name='rng0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <label>system_u:system_r:svirt_t:s0:c400,c761</label>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c400,c761</imagelabel>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   </seclabel>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <label>+107:+107</label>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <imagelabel>+107:+107</imagelabel>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   </seclabel>
Oct 02 12:49:26 compute-0 nova_compute[257802]: </domain>
Oct 02 12:49:26 compute-0 nova_compute[257802]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.554 2 WARNING nova.virt.libvirt.driver [req-f8847d22-95a8-46fe-9744-b1aeb6e77af5 req-daae834b-edea-486b-bd97-34365e8d3eef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Detaching interface fa:16:3e:7f:5a:b2 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapc44a4d12-3f' not found.
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.555 2 DEBUG nova.virt.libvirt.vif [req-f8847d22-95a8-46fe-9744-b1aeb6e77af5 req-daae834b-edea-486b-bd97-34365e8d3eef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:48:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1210888659',display_name='tempest-TestNetworkBasicOps-server-1210888659',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1210888659',id=174,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOuizzzsJ0xQoaj6QWBctuJTCzZuNABfaUbqmY1NfxPiQQ1W4zoCTJjFgqJkZAPE8tNumkBg7/MpTOE+q4DWN6dEyxAopDry/w0CNriaUKHv801j6Cb4/rGJW3h1iORzw==',key_name='tempest-TestNetworkBasicOps-2008743034',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:48:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-ghrwnmf1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:48:45Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=a44a93f4-fbf3-4ba6-9111-36dbf856ddfa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc", "address": "fa:16:3e:7f:5a:b2", "network": {"id": "da8223ff-ffa4-4359-83b5-a642db1cfd48", "bridge": "br-int", "label": "tempest-network-smoke--1807143904", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc44a4d12-3f", "ovs_interfaceid": "c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.555 2 DEBUG nova.network.os_vif_util [req-f8847d22-95a8-46fe-9744-b1aeb6e77af5 req-daae834b-edea-486b-bd97-34365e8d3eef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Converting VIF {"id": "c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc", "address": "fa:16:3e:7f:5a:b2", "network": {"id": "da8223ff-ffa4-4359-83b5-a642db1cfd48", "bridge": "br-int", "label": "tempest-network-smoke--1807143904", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc44a4d12-3f", "ovs_interfaceid": "c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.556 2 DEBUG nova.network.os_vif_util [req-f8847d22-95a8-46fe-9744-b1aeb6e77af5 req-daae834b-edea-486b-bd97-34365e8d3eef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:5a:b2,bridge_name='br-int',has_traffic_filtering=True,id=c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc,network=Network(da8223ff-ffa4-4359-83b5-a642db1cfd48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc44a4d12-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.556 2 DEBUG os_vif [req-f8847d22-95a8-46fe-9744-b1aeb6e77af5 req-daae834b-edea-486b-bd97-34365e8d3eef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:5a:b2,bridge_name='br-int',has_traffic_filtering=True,id=c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc,network=Network(da8223ff-ffa4-4359-83b5-a642db1cfd48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc44a4d12-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.558 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc44a4d12-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.558 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.560 2 INFO os_vif [req-f8847d22-95a8-46fe-9744-b1aeb6e77af5 req-daae834b-edea-486b-bd97-34365e8d3eef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:5a:b2,bridge_name='br-int',has_traffic_filtering=True,id=c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc,network=Network(da8223ff-ffa4-4359-83b5-a642db1cfd48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc44a4d12-3f')
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.561 2 DEBUG nova.virt.libvirt.guest [req-f8847d22-95a8-46fe-9744-b1aeb6e77af5 req-daae834b-edea-486b-bd97-34365e8d3eef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <nova:name>tempest-TestNetworkBasicOps-server-1210888659</nova:name>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <nova:creationTime>2025-10-02 12:49:26</nova:creationTime>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <nova:flavor name="m1.nano">
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <nova:memory>128</nova:memory>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <nova:disk>1</nova:disk>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <nova:swap>0</nova:swap>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   </nova:flavor>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <nova:owner>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <nova:user uuid="fb366465e6154871b8a53c9f500105ce">tempest-TestNetworkBasicOps-1692262680-project-member</nova:user>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <nova:project uuid="ce2ca82c03554560b55ed747ae63f1fb">tempest-TestNetworkBasicOps-1692262680</nova:project>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   </nova:owner>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   <nova:ports>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     <nova:port uuid="b1e398cd-0bdc-4194-b98d-41c1a091d947">
Oct 02 12:49:26 compute-0 nova_compute[257802]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 12:49:26 compute-0 nova_compute[257802]:     </nova:port>
Oct 02 12:49:26 compute-0 nova_compute[257802]:   </nova:ports>
Oct 02 12:49:26 compute-0 nova_compute[257802]: </nova:instance>
Oct 02 12:49:26 compute-0 nova_compute[257802]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 12:49:26 compute-0 ceph-mon[73607]: osdmap e369: 3 total, 3 up, 3 in
Oct 02 12:49:26 compute-0 nova_compute[257802]: 2025-10-02 12:49:26.869 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-a44a93f4-fbf3-4ba6-9111-36dbf856ddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:49:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:26.968 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:26.968 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:26.968 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:27.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:27 compute-0 ceph-mon[73607]: pgmap v2693: 305 pgs: 305 active+clean; 506 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.6 MiB/s wr, 282 op/s
Oct 02 12:49:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e369 do_prune osdmap full prune enabled
Oct 02 12:49:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e370 e370: 3 total, 3 up, 3 in
Oct 02 12:49:27 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e370: 3 total, 3 up, 3 in
Oct 02 12:49:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2695: 305 pgs: 305 active+clean; 506 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 265 op/s
Oct 02 12:49:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:28.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:28 compute-0 ovn_controller[148183]: 2025-10-02T12:49:28Z|00766|binding|INFO|Releasing lport cd85454a-320b-4c80-984b-1f77580d2ea7 from this chassis (sb_readonly=0)
Oct 02 12:49:28 compute-0 nova_compute[257802]: 2025-10-02 12:49:28.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e370 do_prune osdmap full prune enabled
Oct 02 12:49:28 compute-0 nova_compute[257802]: 2025-10-02 12:49:28.981 2 INFO nova.network.neutron [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Port c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 02 12:49:28 compute-0 nova_compute[257802]: 2025-10-02 12:49:28.982 2 DEBUG nova.network.neutron [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Updating instance_info_cache with network_info: [{"id": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "address": "fa:16:3e:97:95:8d", "network": {"id": "7ab6b6d4-5590-4247-9dd8-59243897cce9", "bridge": "br-int", "label": "tempest-network-smoke--615376330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1e398cd-0b", "ovs_interfaceid": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:49:29 compute-0 nova_compute[257802]: 2025-10-02 12:49:29.035 2 DEBUG oslo_concurrency.lockutils [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Releasing lock "refresh_cache-a44a93f4-fbf3-4ba6-9111-36dbf856ddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:49:29 compute-0 nova_compute[257802]: 2025-10-02 12:49:29.044 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-a44a93f4-fbf3-4ba6-9111-36dbf856ddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:49:29 compute-0 nova_compute[257802]: 2025-10-02 12:49:29.045 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:49:29 compute-0 nova_compute[257802]: 2025-10-02 12:49:29.045 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid a44a93f4-fbf3-4ba6-9111-36dbf856ddfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:49:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e371 e371: 3 total, 3 up, 3 in
Oct 02 12:49:29 compute-0 ceph-mon[73607]: osdmap e370: 3 total, 3 up, 3 in
Oct 02 12:49:29 compute-0 ceph-mon[73607]: pgmap v2695: 305 pgs: 305 active+clean; 506 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 265 op/s
Oct 02 12:49:29 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e371: 3 total, 3 up, 3 in
Oct 02 12:49:29 compute-0 nova_compute[257802]: 2025-10-02 12:49:29.122 2 DEBUG oslo_concurrency.lockutils [None req-83d1c420-8e8d-4c99-a28f-6f6d9f7ef018 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "interface-a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-c44a4d12-3f6e-41ef-aa45-ba4d4e5aa3fc" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e371 do_prune osdmap full prune enabled
Oct 02 12:49:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e372 e372: 3 total, 3 up, 3 in
Oct 02 12:49:29 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e372: 3 total, 3 up, 3 in
Oct 02 12:49:29 compute-0 nova_compute[257802]: 2025-10-02 12:49:29.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:29.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:30 compute-0 ceph-mon[73607]: osdmap e371: 3 total, 3 up, 3 in
Oct 02 12:49:30 compute-0 ceph-mon[73607]: osdmap e372: 3 total, 3 up, 3 in
Oct 02 12:49:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/232811485' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:49:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/783607838' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:49:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2698: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 526 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.9 MiB/s rd, 6.7 MiB/s wr, 239 op/s
Oct 02 12:49:30 compute-0 nova_compute[257802]: 2025-10-02 12:49:30.358 2 DEBUG oslo_concurrency.lockutils [None req-7727c41e-c1c0-4479-b8b2-474d8068a0ac fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:30 compute-0 nova_compute[257802]: 2025-10-02 12:49:30.359 2 DEBUG oslo_concurrency.lockutils [None req-7727c41e-c1c0-4479-b8b2-474d8068a0ac fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:30 compute-0 nova_compute[257802]: 2025-10-02 12:49:30.359 2 DEBUG oslo_concurrency.lockutils [None req-7727c41e-c1c0-4479-b8b2-474d8068a0ac fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:30 compute-0 nova_compute[257802]: 2025-10-02 12:49:30.359 2 DEBUG oslo_concurrency.lockutils [None req-7727c41e-c1c0-4479-b8b2-474d8068a0ac fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:30 compute-0 nova_compute[257802]: 2025-10-02 12:49:30.359 2 DEBUG oslo_concurrency.lockutils [None req-7727c41e-c1c0-4479-b8b2-474d8068a0ac fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:30 compute-0 nova_compute[257802]: 2025-10-02 12:49:30.361 2 INFO nova.compute.manager [None req-7727c41e-c1c0-4479-b8b2-474d8068a0ac fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Terminating instance
Oct 02 12:49:30 compute-0 nova_compute[257802]: 2025-10-02 12:49:30.362 2 DEBUG nova.compute.manager [None req-7727c41e-c1c0-4479-b8b2-474d8068a0ac fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:49:30 compute-0 nova_compute[257802]: 2025-10-02 12:49:30.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:30.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:30 compute-0 kernel: tapb1e398cd-0b (unregistering): left promiscuous mode
Oct 02 12:49:30 compute-0 NetworkManager[44987]: <info>  [1759409370.4323] device (tapb1e398cd-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:49:30 compute-0 ovn_controller[148183]: 2025-10-02T12:49:30Z|00767|binding|INFO|Releasing lport b1e398cd-0bdc-4194-b98d-41c1a091d947 from this chassis (sb_readonly=0)
Oct 02 12:49:30 compute-0 ovn_controller[148183]: 2025-10-02T12:49:30Z|00768|binding|INFO|Setting lport b1e398cd-0bdc-4194-b98d-41c1a091d947 down in Southbound
Oct 02 12:49:30 compute-0 ovn_controller[148183]: 2025-10-02T12:49:30Z|00769|binding|INFO|Removing iface tapb1e398cd-0b ovn-installed in OVS
Oct 02 12:49:30 compute-0 nova_compute[257802]: 2025-10-02 12:49:30.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:30.452 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:95:8d 10.100.0.10'], port_security=['fa:16:3e:97:95:8d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a44a93f4-fbf3-4ba6-9111-36dbf856ddfa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ab6b6d4-5590-4247-9dd8-59243897cce9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7903f35c-f06f-45c6-b8fd-ceb1b636ba65', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=16ce81b6-9ce6-4724-b3e5-191386c7c3a3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=b1e398cd-0bdc-4194-b98d-41c1a091d947) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:49:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:30.453 158261 INFO neutron.agent.ovn.metadata.agent [-] Port b1e398cd-0bdc-4194-b98d-41c1a091d947 in datapath 7ab6b6d4-5590-4247-9dd8-59243897cce9 unbound from our chassis
Oct 02 12:49:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:30.454 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7ab6b6d4-5590-4247-9dd8-59243897cce9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:49:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:30.457 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a7ca91c6-9a99-4d51-8a53-8f1142d32fb4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:30.457 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7ab6b6d4-5590-4247-9dd8-59243897cce9 namespace which is not needed anymore
Oct 02 12:49:30 compute-0 nova_compute[257802]: 2025-10-02 12:49:30.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:30 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000ae.scope: Deactivated successfully.
Oct 02 12:49:30 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000ae.scope: Consumed 15.983s CPU time.
Oct 02 12:49:30 compute-0 systemd-machined[211836]: Machine qemu-84-instance-000000ae terminated.
Oct 02 12:49:30 compute-0 neutron-haproxy-ovnmeta-7ab6b6d4-5590-4247-9dd8-59243897cce9[365089]: [NOTICE]   (365093) : haproxy version is 2.8.14-c23fe91
Oct 02 12:49:30 compute-0 neutron-haproxy-ovnmeta-7ab6b6d4-5590-4247-9dd8-59243897cce9[365089]: [NOTICE]   (365093) : path to executable is /usr/sbin/haproxy
Oct 02 12:49:30 compute-0 neutron-haproxy-ovnmeta-7ab6b6d4-5590-4247-9dd8-59243897cce9[365089]: [WARNING]  (365093) : Exiting Master process...
Oct 02 12:49:30 compute-0 neutron-haproxy-ovnmeta-7ab6b6d4-5590-4247-9dd8-59243897cce9[365089]: [ALERT]    (365093) : Current worker (365095) exited with code 143 (Terminated)
Oct 02 12:49:30 compute-0 neutron-haproxy-ovnmeta-7ab6b6d4-5590-4247-9dd8-59243897cce9[365089]: [WARNING]  (365093) : All workers exited. Exiting... (0)
Oct 02 12:49:30 compute-0 systemd[1]: libpod-0ee3b263fd0f8f65dd881aaa7bcc248b5fce19042d62492dde952bf6e156b56b.scope: Deactivated successfully.
Oct 02 12:49:30 compute-0 podman[366505]: 2025-10-02 12:49:30.601263294 +0000 UTC m=+0.050101322 container died 0ee3b263fd0f8f65dd881aaa7bcc248b5fce19042d62492dde952bf6e156b56b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ab6b6d4-5590-4247-9dd8-59243897cce9, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 12:49:30 compute-0 nova_compute[257802]: 2025-10-02 12:49:30.608 2 INFO nova.virt.libvirt.driver [-] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Instance destroyed successfully.
Oct 02 12:49:30 compute-0 nova_compute[257802]: 2025-10-02 12:49:30.608 2 DEBUG nova.objects.instance [None req-7727c41e-c1c0-4479-b8b2-474d8068a0ac fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'resources' on Instance uuid a44a93f4-fbf3-4ba6-9111-36dbf856ddfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:49:30 compute-0 nova_compute[257802]: 2025-10-02 12:49:30.629 2 DEBUG nova.virt.libvirt.vif [None req-7727c41e-c1c0-4479-b8b2-474d8068a0ac fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:48:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1210888659',display_name='tempest-TestNetworkBasicOps-server-1210888659',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1210888659',id=174,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOuizzzsJ0xQoaj6QWBctuJTCzZuNABfaUbqmY1NfxPiQQ1W4zoCTJjFgqJkZAPE8tNumkBg7/MpTOE+q4DWN6dEyxAopDry/w0CNriaUKHv801j6Cb4/rGJW3h1iORzw==',key_name='tempest-TestNetworkBasicOps-2008743034',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:48:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-ghrwnmf1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:48:45Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=a44a93f4-fbf3-4ba6-9111-36dbf856ddfa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "address": "fa:16:3e:97:95:8d", "network": {"id": "7ab6b6d4-5590-4247-9dd8-59243897cce9", "bridge": "br-int", "label": "tempest-network-smoke--615376330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1e398cd-0b", "ovs_interfaceid": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:49:30 compute-0 nova_compute[257802]: 2025-10-02 12:49:30.630 2 DEBUG nova.network.os_vif_util [None req-7727c41e-c1c0-4479-b8b2-474d8068a0ac fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "address": "fa:16:3e:97:95:8d", "network": {"id": "7ab6b6d4-5590-4247-9dd8-59243897cce9", "bridge": "br-int", "label": "tempest-network-smoke--615376330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1e398cd-0b", "ovs_interfaceid": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:49:30 compute-0 nova_compute[257802]: 2025-10-02 12:49:30.631 2 DEBUG nova.network.os_vif_util [None req-7727c41e-c1c0-4479-b8b2-474d8068a0ac fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:95:8d,bridge_name='br-int',has_traffic_filtering=True,id=b1e398cd-0bdc-4194-b98d-41c1a091d947,network=Network(7ab6b6d4-5590-4247-9dd8-59243897cce9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1e398cd-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:49:30 compute-0 nova_compute[257802]: 2025-10-02 12:49:30.631 2 DEBUG os_vif [None req-7727c41e-c1c0-4479-b8b2-474d8068a0ac fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:95:8d,bridge_name='br-int',has_traffic_filtering=True,id=b1e398cd-0bdc-4194-b98d-41c1a091d947,network=Network(7ab6b6d4-5590-4247-9dd8-59243897cce9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1e398cd-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:49:30 compute-0 nova_compute[257802]: 2025-10-02 12:49:30.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:30 compute-0 nova_compute[257802]: 2025-10-02 12:49:30.633 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1e398cd-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:49:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0ee3b263fd0f8f65dd881aaa7bcc248b5fce19042d62492dde952bf6e156b56b-userdata-shm.mount: Deactivated successfully.
Oct 02 12:49:30 compute-0 nova_compute[257802]: 2025-10-02 12:49:30.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:30 compute-0 nova_compute[257802]: 2025-10-02 12:49:30.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:49:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a0ac15af5b4be8d53e6ac6c2a21c261916cb7d3ff85d66a944ce9cc2d56fc69-merged.mount: Deactivated successfully.
Oct 02 12:49:30 compute-0 nova_compute[257802]: 2025-10-02 12:49:30.640 2 INFO os_vif [None req-7727c41e-c1c0-4479-b8b2-474d8068a0ac fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:95:8d,bridge_name='br-int',has_traffic_filtering=True,id=b1e398cd-0bdc-4194-b98d-41c1a091d947,network=Network(7ab6b6d4-5590-4247-9dd8-59243897cce9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1e398cd-0b')
Oct 02 12:49:30 compute-0 podman[366505]: 2025-10-02 12:49:30.648203706 +0000 UTC m=+0.097041734 container cleanup 0ee3b263fd0f8f65dd881aaa7bcc248b5fce19042d62492dde952bf6e156b56b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ab6b6d4-5590-4247-9dd8-59243897cce9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:49:30 compute-0 systemd[1]: libpod-conmon-0ee3b263fd0f8f65dd881aaa7bcc248b5fce19042d62492dde952bf6e156b56b.scope: Deactivated successfully.
Oct 02 12:49:30 compute-0 podman[366563]: 2025-10-02 12:49:30.71594994 +0000 UTC m=+0.040979217 container remove 0ee3b263fd0f8f65dd881aaa7bcc248b5fce19042d62492dde952bf6e156b56b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ab6b6d4-5590-4247-9dd8-59243897cce9, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:49:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:30.721 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5b39b199-f2f4-4183-aaf7-39cb6d3be059]: (4, ('Thu Oct  2 12:49:30 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7ab6b6d4-5590-4247-9dd8-59243897cce9 (0ee3b263fd0f8f65dd881aaa7bcc248b5fce19042d62492dde952bf6e156b56b)\n0ee3b263fd0f8f65dd881aaa7bcc248b5fce19042d62492dde952bf6e156b56b\nThu Oct  2 12:49:30 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7ab6b6d4-5590-4247-9dd8-59243897cce9 (0ee3b263fd0f8f65dd881aaa7bcc248b5fce19042d62492dde952bf6e156b56b)\n0ee3b263fd0f8f65dd881aaa7bcc248b5fce19042d62492dde952bf6e156b56b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:30.723 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[37d7c5d2-c3b7-4a66-b65f-5f61961f5901]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:30.724 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ab6b6d4-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:49:30 compute-0 nova_compute[257802]: 2025-10-02 12:49:30.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:30 compute-0 kernel: tap7ab6b6d4-50: left promiscuous mode
Oct 02 12:49:30 compute-0 nova_compute[257802]: 2025-10-02 12:49:30.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:30.745 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[005ef12d-a6d3-4c67-85ba-9fafe7a141c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:30.767 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b2d3765d-42a3-4bf7-b3f5-107fa8b8034c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:30.768 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1d8482fe-2506-44e1-9117-9f8a91a9a854]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:30.784 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3ccb0113-8634-4a81-9cb0-b63081b0476e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739068, 'reachable_time': 22240, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366579, 'error': None, 'target': 'ovnmeta-7ab6b6d4-5590-4247-9dd8-59243897cce9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:30 compute-0 systemd[1]: run-netns-ovnmeta\x2d7ab6b6d4\x2d5590\x2d4247\x2d9dd8\x2d59243897cce9.mount: Deactivated successfully.
Oct 02 12:49:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:30.788 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7ab6b6d4-5590-4247-9dd8-59243897cce9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:49:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:30.788 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[d92ada22-3af4-4d16-9eb1-6f8b39413729]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:31 compute-0 nova_compute[257802]: 2025-10-02 12:49:31.037 2 INFO nova.virt.libvirt.driver [None req-7727c41e-c1c0-4479-b8b2-474d8068a0ac fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Deleting instance files /var/lib/nova/instances/a44a93f4-fbf3-4ba6-9111-36dbf856ddfa_del
Oct 02 12:49:31 compute-0 nova_compute[257802]: 2025-10-02 12:49:31.038 2 INFO nova.virt.libvirt.driver [None req-7727c41e-c1c0-4479-b8b2-474d8068a0ac fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Deletion of /var/lib/nova/instances/a44a93f4-fbf3-4ba6-9111-36dbf856ddfa_del complete
Oct 02 12:49:31 compute-0 ceph-mon[73607]: pgmap v2698: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 526 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.9 MiB/s rd, 6.7 MiB/s wr, 239 op/s
Oct 02 12:49:31 compute-0 nova_compute[257802]: 2025-10-02 12:49:31.099 2 INFO nova.compute.manager [None req-7727c41e-c1c0-4479-b8b2-474d8068a0ac fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Took 0.74 seconds to destroy the instance on the hypervisor.
Oct 02 12:49:31 compute-0 nova_compute[257802]: 2025-10-02 12:49:31.099 2 DEBUG oslo.service.loopingcall [None req-7727c41e-c1c0-4479-b8b2-474d8068a0ac fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:49:31 compute-0 nova_compute[257802]: 2025-10-02 12:49:31.100 2 DEBUG nova.compute.manager [-] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:49:31 compute-0 nova_compute[257802]: 2025-10-02 12:49:31.100 2 DEBUG nova.network.neutron [-] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:49:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:31.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:31 compute-0 nova_compute[257802]: 2025-10-02 12:49:31.584 2 DEBUG nova.compute.manager [req-caba8e44-99b1-4cc7-b299-fe8bf1eca4d9 req-2608d6d5-dc3f-47a8-9b29-e93129f9a91a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Received event network-changed-b1e398cd-0bdc-4194-b98d-41c1a091d947 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:49:31 compute-0 nova_compute[257802]: 2025-10-02 12:49:31.584 2 DEBUG nova.compute.manager [req-caba8e44-99b1-4cc7-b299-fe8bf1eca4d9 req-2608d6d5-dc3f-47a8-9b29-e93129f9a91a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Refreshing instance network info cache due to event network-changed-b1e398cd-0bdc-4194-b98d-41c1a091d947. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:49:31 compute-0 nova_compute[257802]: 2025-10-02 12:49:31.584 2 DEBUG oslo_concurrency.lockutils [req-caba8e44-99b1-4cc7-b299-fe8bf1eca4d9 req-2608d6d5-dc3f-47a8-9b29-e93129f9a91a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-a44a93f4-fbf3-4ba6-9111-36dbf856ddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:49:32 compute-0 nova_compute[257802]: 2025-10-02 12:49:32.037 2 DEBUG nova.network.neutron [-] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:49:32 compute-0 nova_compute[257802]: 2025-10-02 12:49:32.073 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Updating instance_info_cache with network_info: [{"id": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "address": "fa:16:3e:97:95:8d", "network": {"id": "7ab6b6d4-5590-4247-9dd8-59243897cce9", "bridge": "br-int", "label": "tempest-network-smoke--615376330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1e398cd-0b", "ovs_interfaceid": "b1e398cd-0bdc-4194-b98d-41c1a091d947", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:49:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2699: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 526 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 5.0 MiB/s wr, 180 op/s
Oct 02 12:49:32 compute-0 nova_compute[257802]: 2025-10-02 12:49:32.110 2 INFO nova.compute.manager [-] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Took 1.01 seconds to deallocate network for instance.
Oct 02 12:49:32 compute-0 nova_compute[257802]: 2025-10-02 12:49:32.123 2 DEBUG nova.compute.manager [req-1ac6f30b-93dc-460d-b932-3cb40fd72bb4 req-a836a153-0df8-4d7e-97d8-eea8dfc91652 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Received event network-vif-deleted-b1e398cd-0bdc-4194-b98d-41c1a091d947 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:49:32 compute-0 nova_compute[257802]: 2025-10-02 12:49:32.124 2 INFO nova.compute.manager [req-1ac6f30b-93dc-460d-b932-3cb40fd72bb4 req-a836a153-0df8-4d7e-97d8-eea8dfc91652 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Neutron deleted interface b1e398cd-0bdc-4194-b98d-41c1a091d947; detaching it from the instance and deleting it from the info cache
Oct 02 12:49:32 compute-0 nova_compute[257802]: 2025-10-02 12:49:32.124 2 DEBUG nova.network.neutron [req-1ac6f30b-93dc-460d-b932-3cb40fd72bb4 req-a836a153-0df8-4d7e-97d8-eea8dfc91652 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:49:32 compute-0 nova_compute[257802]: 2025-10-02 12:49:32.194 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-a44a93f4-fbf3-4ba6-9111-36dbf856ddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:49:32 compute-0 nova_compute[257802]: 2025-10-02 12:49:32.195 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:49:32 compute-0 nova_compute[257802]: 2025-10-02 12:49:32.195 2 DEBUG oslo_concurrency.lockutils [req-caba8e44-99b1-4cc7-b299-fe8bf1eca4d9 req-2608d6d5-dc3f-47a8-9b29-e93129f9a91a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-a44a93f4-fbf3-4ba6-9111-36dbf856ddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:49:32 compute-0 nova_compute[257802]: 2025-10-02 12:49:32.196 2 DEBUG nova.network.neutron [req-caba8e44-99b1-4cc7-b299-fe8bf1eca4d9 req-2608d6d5-dc3f-47a8-9b29-e93129f9a91a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Refreshing network info cache for port b1e398cd-0bdc-4194-b98d-41c1a091d947 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:49:32 compute-0 nova_compute[257802]: 2025-10-02 12:49:32.197 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:49:32 compute-0 nova_compute[257802]: 2025-10-02 12:49:32.253 2 DEBUG oslo_concurrency.lockutils [None req-7727c41e-c1c0-4479-b8b2-474d8068a0ac fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:32 compute-0 nova_compute[257802]: 2025-10-02 12:49:32.254 2 DEBUG oslo_concurrency.lockutils [None req-7727c41e-c1c0-4479-b8b2-474d8068a0ac fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:32 compute-0 nova_compute[257802]: 2025-10-02 12:49:32.259 2 DEBUG nova.compute.manager [req-1ac6f30b-93dc-460d-b932-3cb40fd72bb4 req-a836a153-0df8-4d7e-97d8-eea8dfc91652 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Detach interface failed, port_id=b1e398cd-0bdc-4194-b98d-41c1a091d947, reason: Instance a44a93f4-fbf3-4ba6-9111-36dbf856ddfa could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:49:32 compute-0 nova_compute[257802]: 2025-10-02 12:49:32.270 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:32 compute-0 nova_compute[257802]: 2025-10-02 12:49:32.309 2 DEBUG oslo_concurrency.processutils [None req-7727c41e-c1c0-4479-b8b2-474d8068a0ac fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:32.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:32 compute-0 nova_compute[257802]: 2025-10-02 12:49:32.445 2 DEBUG nova.network.neutron [req-caba8e44-99b1-4cc7-b299-fe8bf1eca4d9 req-2608d6d5-dc3f-47a8-9b29-e93129f9a91a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:49:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:49:32 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1370213429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:32 compute-0 nova_compute[257802]: 2025-10-02 12:49:32.754 2 DEBUG oslo_concurrency.processutils [None req-7727c41e-c1c0-4479-b8b2-474d8068a0ac fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:32 compute-0 nova_compute[257802]: 2025-10-02 12:49:32.761 2 DEBUG nova.compute.provider_tree [None req-7727c41e-c1c0-4479-b8b2-474d8068a0ac fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:49:32 compute-0 nova_compute[257802]: 2025-10-02 12:49:32.811 2 DEBUG nova.scheduler.client.report [None req-7727c41e-c1c0-4479-b8b2-474d8068a0ac fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:49:32 compute-0 nova_compute[257802]: 2025-10-02 12:49:32.908 2 DEBUG oslo_concurrency.lockutils [None req-7727c41e-c1c0-4479-b8b2-474d8068a0ac fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:32 compute-0 nova_compute[257802]: 2025-10-02 12:49:32.910 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:32 compute-0 nova_compute[257802]: 2025-10-02 12:49:32.910 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:32 compute-0 nova_compute[257802]: 2025-10-02 12:49:32.910 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:49:32 compute-0 nova_compute[257802]: 2025-10-02 12:49:32.911 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:32 compute-0 nova_compute[257802]: 2025-10-02 12:49:32.993 2 DEBUG nova.network.neutron [req-caba8e44-99b1-4cc7-b299-fe8bf1eca4d9 req-2608d6d5-dc3f-47a8-9b29-e93129f9a91a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:49:32 compute-0 nova_compute[257802]: 2025-10-02 12:49:32.997 2 INFO nova.scheduler.client.report [None req-7727c41e-c1c0-4479-b8b2-474d8068a0ac fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Deleted allocations for instance a44a93f4-fbf3-4ba6-9111-36dbf856ddfa
Oct 02 12:49:33 compute-0 nova_compute[257802]: 2025-10-02 12:49:33.029 2 DEBUG oslo_concurrency.lockutils [req-caba8e44-99b1-4cc7-b299-fe8bf1eca4d9 req-2608d6d5-dc3f-47a8-9b29-e93129f9a91a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-a44a93f4-fbf3-4ba6-9111-36dbf856ddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:49:33 compute-0 ceph-mon[73607]: pgmap v2699: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 526 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 5.0 MiB/s wr, 180 op/s
Oct 02 12:49:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1493815161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1370213429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:49:33 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2212733663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:33 compute-0 nova_compute[257802]: 2025-10-02 12:49:33.311 2 DEBUG oslo_concurrency.lockutils [None req-7727c41e-c1c0-4479-b8b2-474d8068a0ac fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:33 compute-0 nova_compute[257802]: 2025-10-02 12:49:33.325 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:33.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:33 compute-0 nova_compute[257802]: 2025-10-02 12:49:33.482 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:49:33 compute-0 nova_compute[257802]: 2025-10-02 12:49:33.483 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4240MB free_disk=20.819190979003906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:49:33 compute-0 nova_compute[257802]: 2025-10-02 12:49:33.484 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:33 compute-0 nova_compute[257802]: 2025-10-02 12:49:33.484 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2700: 305 pgs: 305 active+clean; 449 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 4.9 MiB/s wr, 225 op/s
Oct 02 12:49:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e372 do_prune osdmap full prune enabled
Oct 02 12:49:34 compute-0 nova_compute[257802]: 2025-10-02 12:49:34.162 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:49:34 compute-0 nova_compute[257802]: 2025-10-02 12:49:34.163 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:49:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2212733663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:34 compute-0 nova_compute[257802]: 2025-10-02 12:49:34.187 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:34 compute-0 nova_compute[257802]: 2025-10-02 12:49:34.302 2 DEBUG nova.compute.manager [req-0a9584d9-2e90-4a40-9d3f-32eaea00c1f6 req-47025d7a-96e7-4818-a006-67eaa4db2d7c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Received event network-vif-unplugged-b1e398cd-0bdc-4194-b98d-41c1a091d947 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:49:34 compute-0 nova_compute[257802]: 2025-10-02 12:49:34.303 2 DEBUG oslo_concurrency.lockutils [req-0a9584d9-2e90-4a40-9d3f-32eaea00c1f6 req-47025d7a-96e7-4818-a006-67eaa4db2d7c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:34 compute-0 nova_compute[257802]: 2025-10-02 12:49:34.303 2 DEBUG oslo_concurrency.lockutils [req-0a9584d9-2e90-4a40-9d3f-32eaea00c1f6 req-47025d7a-96e7-4818-a006-67eaa4db2d7c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:34 compute-0 nova_compute[257802]: 2025-10-02 12:49:34.303 2 DEBUG oslo_concurrency.lockutils [req-0a9584d9-2e90-4a40-9d3f-32eaea00c1f6 req-47025d7a-96e7-4818-a006-67eaa4db2d7c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:34 compute-0 nova_compute[257802]: 2025-10-02 12:49:34.304 2 DEBUG nova.compute.manager [req-0a9584d9-2e90-4a40-9d3f-32eaea00c1f6 req-47025d7a-96e7-4818-a006-67eaa4db2d7c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] No waiting events found dispatching network-vif-unplugged-b1e398cd-0bdc-4194-b98d-41c1a091d947 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:49:34 compute-0 nova_compute[257802]: 2025-10-02 12:49:34.304 2 WARNING nova.compute.manager [req-0a9584d9-2e90-4a40-9d3f-32eaea00c1f6 req-47025d7a-96e7-4818-a006-67eaa4db2d7c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Received unexpected event network-vif-unplugged-b1e398cd-0bdc-4194-b98d-41c1a091d947 for instance with vm_state deleted and task_state None.
Oct 02 12:49:34 compute-0 nova_compute[257802]: 2025-10-02 12:49:34.304 2 DEBUG nova.compute.manager [req-0a9584d9-2e90-4a40-9d3f-32eaea00c1f6 req-47025d7a-96e7-4818-a006-67eaa4db2d7c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Received event network-vif-plugged-b1e398cd-0bdc-4194-b98d-41c1a091d947 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:49:34 compute-0 nova_compute[257802]: 2025-10-02 12:49:34.305 2 DEBUG oslo_concurrency.lockutils [req-0a9584d9-2e90-4a40-9d3f-32eaea00c1f6 req-47025d7a-96e7-4818-a006-67eaa4db2d7c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:34 compute-0 nova_compute[257802]: 2025-10-02 12:49:34.305 2 DEBUG oslo_concurrency.lockutils [req-0a9584d9-2e90-4a40-9d3f-32eaea00c1f6 req-47025d7a-96e7-4818-a006-67eaa4db2d7c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:34 compute-0 nova_compute[257802]: 2025-10-02 12:49:34.305 2 DEBUG oslo_concurrency.lockutils [req-0a9584d9-2e90-4a40-9d3f-32eaea00c1f6 req-47025d7a-96e7-4818-a006-67eaa4db2d7c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a44a93f4-fbf3-4ba6-9111-36dbf856ddfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:34 compute-0 nova_compute[257802]: 2025-10-02 12:49:34.305 2 DEBUG nova.compute.manager [req-0a9584d9-2e90-4a40-9d3f-32eaea00c1f6 req-47025d7a-96e7-4818-a006-67eaa4db2d7c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] No waiting events found dispatching network-vif-plugged-b1e398cd-0bdc-4194-b98d-41c1a091d947 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:49:34 compute-0 nova_compute[257802]: 2025-10-02 12:49:34.306 2 WARNING nova.compute.manager [req-0a9584d9-2e90-4a40-9d3f-32eaea00c1f6 req-47025d7a-96e7-4818-a006-67eaa4db2d7c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Received unexpected event network-vif-plugged-b1e398cd-0bdc-4194-b98d-41c1a091d947 for instance with vm_state deleted and task_state None.
Oct 02 12:49:34 compute-0 nova_compute[257802]: 2025-10-02 12:49:34.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:34.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e373 e373: 3 total, 3 up, 3 in
Oct 02 12:49:34 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e373: 3 total, 3 up, 3 in
Oct 02 12:49:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:49:34 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/985409564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:34 compute-0 nova_compute[257802]: 2025-10-02 12:49:34.595 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:34 compute-0 nova_compute[257802]: 2025-10-02 12:49:34.600 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:49:34 compute-0 nova_compute[257802]: 2025-10-02 12:49:34.708 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:49:34 compute-0 nova_compute[257802]: 2025-10-02 12:49:34.888 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:49:34 compute-0 nova_compute[257802]: 2025-10-02 12:49:34.888 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.405s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:35 compute-0 nova_compute[257802]: 2025-10-02 12:49:35.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:49:35 compute-0 nova_compute[257802]: 2025-10-02 12:49:35.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:49:35 compute-0 sudo[366650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:35 compute-0 sudo[366650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:35 compute-0 sudo[366650]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:35 compute-0 sudo[366675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:35 compute-0 sudo[366675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:35 compute-0 sudo[366675]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:35.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:35 compute-0 ceph-mon[73607]: pgmap v2700: 305 pgs: 305 active+clean; 449 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 4.9 MiB/s wr, 225 op/s
Oct 02 12:49:35 compute-0 ceph-mon[73607]: osdmap e373: 3 total, 3 up, 3 in
Oct 02 12:49:35 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/985409564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:35 compute-0 nova_compute[257802]: 2025-10-02 12:49:35.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:49:35 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/545344315' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:49:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:49:35 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/545344315' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:49:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2702: 305 pgs: 305 active+clean; 392 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.4 MiB/s wr, 276 op/s
Oct 02 12:49:36 compute-0 nova_compute[257802]: 2025-10-02 12:49:36.151 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:49:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:36.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/545344315' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:49:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/545344315' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:49:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:49:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:37.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:49:37 compute-0 ceph-mon[73607]: pgmap v2702: 305 pgs: 305 active+clean; 392 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.4 MiB/s wr, 276 op/s
Oct 02 12:49:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2703: 305 pgs: 305 active+clean; 392 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 216 op/s
Oct 02 12:49:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:49:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:38.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:49:38 compute-0 nova_compute[257802]: 2025-10-02 12:49:38.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:39 compute-0 nova_compute[257802]: 2025-10-02 12:49:39.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e373 do_prune osdmap full prune enabled
Oct 02 12:49:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e374 e374: 3 total, 3 up, 3 in
Oct 02 12:49:39 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e374: 3 total, 3 up, 3 in
Oct 02 12:49:39 compute-0 nova_compute[257802]: 2025-10-02 12:49:39.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:39.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:39 compute-0 ceph-mon[73607]: pgmap v2703: 305 pgs: 305 active+clean; 392 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 216 op/s
Oct 02 12:49:39 compute-0 ceph-mon[73607]: osdmap e374: 3 total, 3 up, 3 in
Oct 02 12:49:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2705: 305 pgs: 305 active+clean; 420 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.3 MiB/s wr, 313 op/s
Oct 02 12:49:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:40.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:40 compute-0 nova_compute[257802]: 2025-10-02 12:49:40.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:40 compute-0 ceph-mon[73607]: pgmap v2705: 305 pgs: 305 active+clean; 420 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.3 MiB/s wr, 313 op/s
Oct 02 12:49:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.003000072s ======
Oct 02 12:49:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:41.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000072s
Oct 02 12:49:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2706: 305 pgs: 305 active+clean; 420 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.2 MiB/s wr, 273 op/s
Oct 02 12:49:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:49:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:42.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:49:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:49:42
Oct 02 12:49:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:49:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:49:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['backups', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'volumes', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'vms']
Oct 02 12:49:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:49:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:49:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:49:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:49:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:49:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:49:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:49:43 compute-0 ceph-mon[73607]: pgmap v2706: 305 pgs: 305 active+clean; 420 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.2 MiB/s wr, 273 op/s
Oct 02 12:49:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:49:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:49:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:49:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:49:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:49:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:43.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:43 compute-0 podman[366705]: 2025-10-02 12:49:43.924945588 +0000 UTC m=+0.060678382 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:49:43 compute-0 podman[366707]: 2025-10-02 12:49:43.924826535 +0000 UTC m=+0.057034771 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 12:49:43 compute-0 podman[366706]: 2025-10-02 12:49:43.92541925 +0000 UTC m=+0.060498917 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 12:49:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2707: 305 pgs: 305 active+clean; 420 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.3 MiB/s wr, 196 op/s
Oct 02 12:49:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:49:44.140189) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409384140218, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 886, "num_deletes": 258, "total_data_size": 1193624, "memory_usage": 1216648, "flush_reason": "Manual Compaction"}
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409384147761, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 1179227, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59255, "largest_seqno": 60140, "table_properties": {"data_size": 1174768, "index_size": 2046, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10372, "raw_average_key_size": 19, "raw_value_size": 1165541, "raw_average_value_size": 2245, "num_data_blocks": 89, "num_entries": 519, "num_filter_entries": 519, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409324, "oldest_key_time": 1759409324, "file_creation_time": 1759409384, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 7607 microseconds, and 3450 cpu microseconds.
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:49:44.147795) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 1179227 bytes OK
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:49:44.147811) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:49:44.149447) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:49:44.149463) EVENT_LOG_v1 {"time_micros": 1759409384149458, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:49:44.149482) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 1189264, prev total WAL file size 1189264, number of live WAL files 2.
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:49:44.150145) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323536' seq:72057594037927935, type:22 .. '6C6F676D0032353037' seq:0, type:0; will stop at (end)
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(1151KB)], [131(10MB)]
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409384150211, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 12097250, "oldest_snapshot_seqno": -1}
Oct 02 12:49:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:49:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:49:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:49:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:49:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 8869 keys, 11945971 bytes, temperature: kUnknown
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409384201390, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 11945971, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11888042, "index_size": 34648, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22213, "raw_key_size": 230994, "raw_average_key_size": 26, "raw_value_size": 11731862, "raw_average_value_size": 1322, "num_data_blocks": 1349, "num_entries": 8869, "num_filter_entries": 8869, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759409384, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:49:44.201658) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 11945971 bytes
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:49:44.202917) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 236.0 rd, 233.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 10.4 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(20.4) write-amplify(10.1) OK, records in: 9401, records dropped: 532 output_compression: NoCompression
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:49:44.202932) EVENT_LOG_v1 {"time_micros": 1759409384202925, "job": 80, "event": "compaction_finished", "compaction_time_micros": 51257, "compaction_time_cpu_micros": 28670, "output_level": 6, "num_output_files": 1, "total_output_size": 11945971, "num_input_records": 9401, "num_output_records": 8869, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409384203228, "job": 80, "event": "table_file_deletion", "file_number": 133}
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409384205173, "job": 80, "event": "table_file_deletion", "file_number": 131}
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:49:44.149983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:49:44.205342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:49:44.205351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:49:44.205354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:49:44.205357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:49:44 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:49:44.205360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:49:44 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4100157186' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:49:44 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3741594729' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:49:44 compute-0 nova_compute[257802]: 2025-10-02 12:49:44.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:49:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:44.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:49:45 compute-0 ceph-mon[73607]: pgmap v2707: 305 pgs: 305 active+clean; 420 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.3 MiB/s wr, 196 op/s
Oct 02 12:49:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3022817253' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:49:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:45.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:45 compute-0 nova_compute[257802]: 2025-10-02 12:49:45.605 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409370.6043758, a44a93f4-fbf3-4ba6-9111-36dbf856ddfa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:49:45 compute-0 nova_compute[257802]: 2025-10-02 12:49:45.606 2 INFO nova.compute.manager [-] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] VM Stopped (Lifecycle Event)
Oct 02 12:49:45 compute-0 nova_compute[257802]: 2025-10-02 12:49:45.631 2 DEBUG nova.compute.manager [None req-b9ca7677-b513-46c2-a948-7069aeeaa51c - - - - - -] [instance: a44a93f4-fbf3-4ba6-9111-36dbf856ddfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:49:45 compute-0 nova_compute[257802]: 2025-10-02 12:49:45.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2708: 305 pgs: 305 active+clean; 420 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 175 op/s
Oct 02 12:49:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:49:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:46.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:49:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:47.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:47 compute-0 ceph-mon[73607]: pgmap v2708: 305 pgs: 305 active+clean; 420 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 175 op/s
Oct 02 12:49:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2709: 305 pgs: 305 active+clean; 420 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 175 op/s
Oct 02 12:49:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:49:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:48.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:49:48 compute-0 podman[366766]: 2025-10-02 12:49:48.931901504 +0000 UTC m=+0.072252115 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Oct 02 12:49:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:49 compute-0 nova_compute[257802]: 2025-10-02 12:49:49.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:49.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:49 compute-0 ceph-mon[73607]: pgmap v2709: 305 pgs: 305 active+clean; 420 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 175 op/s
Oct 02 12:49:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2710: 305 pgs: 305 active+clean; 452 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.4 MiB/s wr, 163 op/s
Oct 02 12:49:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:50.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:50 compute-0 nova_compute[257802]: 2025-10-02 12:49:50.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:51.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:51 compute-0 ceph-mon[73607]: pgmap v2710: 305 pgs: 305 active+clean; 452 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.4 MiB/s wr, 163 op/s
Oct 02 12:49:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2711: 305 pgs: 305 active+clean; 452 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 366 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 02 12:49:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:52.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:52 compute-0 ceph-mon[73607]: pgmap v2711: 305 pgs: 305 active+clean; 452 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 366 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 02 12:49:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:53.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:53 compute-0 sshd-session[366479]: error: kex_exchange_identification: read: Connection reset by peer
Oct 02 12:49:53 compute-0 sshd-session[366479]: Connection reset by 45.140.17.97 port 12100
Oct 02 12:49:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:54.033 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:49:54 compute-0 nova_compute[257802]: 2025-10-02 12:49:54.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:54.034 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:49:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:49:54.035 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:49:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2712: 305 pgs: 305 active+clean; 453 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 431 KiB/s rd, 2.1 MiB/s wr, 108 op/s
Oct 02 12:49:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:54 compute-0 nova_compute[257802]: 2025-10-02 12:49:54.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:54.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:49:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:49:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:49:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:49:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006507326854178299 of space, bias 1.0, pg target 1.9521980562534897 quantized to 32 (current 32)
Oct 02 12:49:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:49:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6463173433719523 quantized to 32 (current 32)
Oct 02 12:49:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:49:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:49:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:49:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003880986239065699 of space, bias 1.0, pg target 1.160414885480644 quantized to 32 (current 32)
Oct 02 12:49:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:49:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:49:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:49:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:49:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:49:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027081297692164525 quantized to 32 (current 32)
Oct 02 12:49:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:49:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:49:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:49:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:49:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:49:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:49:55 compute-0 ceph-mon[73607]: pgmap v2712: 305 pgs: 305 active+clean; 453 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 431 KiB/s rd, 2.1 MiB/s wr, 108 op/s
Oct 02 12:49:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3406662994' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:49:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3406662994' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:49:55 compute-0 sudo[366798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:55 compute-0 sudo[366798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:55 compute-0 sudo[366798]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:55 compute-0 sudo[366823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:55 compute-0 sudo[366823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:55 compute-0 sudo[366823]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:49:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:55.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:49:55 compute-0 nova_compute[257802]: 2025-10-02 12:49:55.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2713: 305 pgs: 305 active+clean; 453 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 152 op/s
Oct 02 12:49:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:56.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:57 compute-0 ceph-mon[73607]: pgmap v2713: 305 pgs: 305 active+clean; 453 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 152 op/s
Oct 02 12:49:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:57.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2714: 305 pgs: 305 active+clean; 453 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Oct 02 12:49:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:58.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:59 compute-0 nova_compute[257802]: 2025-10-02 12:49:59.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:49:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:59.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:59 compute-0 ceph-mon[73607]: pgmap v2714: 305 pgs: 305 active+clean; 453 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Oct 02 12:49:59 compute-0 nova_compute[257802]: 2025-10-02 12:49:59.647 2 DEBUG oslo_concurrency.lockutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquiring lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:59 compute-0 nova_compute[257802]: 2025-10-02 12:49:59.647 2 DEBUG oslo_concurrency.lockutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:59 compute-0 nova_compute[257802]: 2025-10-02 12:49:59.824 2 DEBUG nova.compute.manager [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:50:00 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 02 12:50:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2715: 305 pgs: 305 active+clean; 453 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 177 op/s
Oct 02 12:50:00 compute-0 nova_compute[257802]: 2025-10-02 12:50:00.114 2 DEBUG oslo_concurrency.lockutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:00 compute-0 nova_compute[257802]: 2025-10-02 12:50:00.114 2 DEBUG oslo_concurrency.lockutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:00 compute-0 nova_compute[257802]: 2025-10-02 12:50:00.120 2 DEBUG nova.virt.hardware [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:50:00 compute-0 nova_compute[257802]: 2025-10-02 12:50:00.121 2 INFO nova.compute.claims [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:50:00 compute-0 nova_compute[257802]: 2025-10-02 12:50:00.451 2 DEBUG oslo_concurrency.processutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:00.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:00 compute-0 ceph-mon[73607]: overall HEALTH_OK
Oct 02 12:50:00 compute-0 nova_compute[257802]: 2025-10-02 12:50:00.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:50:00 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3154447279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:00 compute-0 nova_compute[257802]: 2025-10-02 12:50:00.878 2 DEBUG oslo_concurrency.processutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:00 compute-0 nova_compute[257802]: 2025-10-02 12:50:00.884 2 DEBUG nova.compute.provider_tree [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:50:00 compute-0 nova_compute[257802]: 2025-10-02 12:50:00.965 2 DEBUG nova.scheduler.client.report [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:50:01 compute-0 nova_compute[257802]: 2025-10-02 12:50:01.141 2 DEBUG oslo_concurrency.lockutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:01 compute-0 nova_compute[257802]: 2025-10-02 12:50:01.141 2 DEBUG nova.compute.manager [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:50:01 compute-0 nova_compute[257802]: 2025-10-02 12:50:01.244 2 DEBUG nova.compute.manager [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:50:01 compute-0 nova_compute[257802]: 2025-10-02 12:50:01.244 2 DEBUG nova.network.neutron [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:50:01 compute-0 nova_compute[257802]: 2025-10-02 12:50:01.352 2 INFO nova.virt.libvirt.driver [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:50:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:01.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:01 compute-0 nova_compute[257802]: 2025-10-02 12:50:01.503 2 DEBUG nova.compute.manager [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:50:01 compute-0 ceph-mon[73607]: pgmap v2715: 305 pgs: 305 active+clean; 453 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 177 op/s
Oct 02 12:50:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3154447279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:01 compute-0 nova_compute[257802]: 2025-10-02 12:50:01.768 2 DEBUG nova.compute.manager [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:50:01 compute-0 nova_compute[257802]: 2025-10-02 12:50:01.769 2 DEBUG nova.virt.libvirt.driver [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:50:01 compute-0 nova_compute[257802]: 2025-10-02 12:50:01.770 2 INFO nova.virt.libvirt.driver [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Creating image(s)
Oct 02 12:50:01 compute-0 nova_compute[257802]: 2025-10-02 12:50:01.797 2 DEBUG nova.storage.rbd_utils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] rbd image 8e2c1007-1d07-434c-8a22-6cb98d903d3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:50:01 compute-0 nova_compute[257802]: 2025-10-02 12:50:01.826 2 DEBUG nova.storage.rbd_utils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] rbd image 8e2c1007-1d07-434c-8a22-6cb98d903d3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:50:01 compute-0 nova_compute[257802]: 2025-10-02 12:50:01.850 2 DEBUG nova.storage.rbd_utils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] rbd image 8e2c1007-1d07-434c-8a22-6cb98d903d3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:50:01 compute-0 nova_compute[257802]: 2025-10-02 12:50:01.854 2 DEBUG oslo_concurrency.processutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:01 compute-0 nova_compute[257802]: 2025-10-02 12:50:01.924 2 DEBUG nova.policy [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6785ffe5d6554514b4ed9fd47665eca0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6a442bc513e14406b73e96e70396e6c3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:50:01 compute-0 nova_compute[257802]: 2025-10-02 12:50:01.926 2 DEBUG oslo_concurrency.processutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:01 compute-0 nova_compute[257802]: 2025-10-02 12:50:01.927 2 DEBUG oslo_concurrency.lockutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:01 compute-0 nova_compute[257802]: 2025-10-02 12:50:01.928 2 DEBUG oslo_concurrency.lockutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:01 compute-0 nova_compute[257802]: 2025-10-02 12:50:01.928 2 DEBUG oslo_concurrency.lockutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:01 compute-0 nova_compute[257802]: 2025-10-02 12:50:01.952 2 DEBUG nova.storage.rbd_utils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] rbd image 8e2c1007-1d07-434c-8a22-6cb98d903d3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:50:01 compute-0 nova_compute[257802]: 2025-10-02 12:50:01.956 2 DEBUG oslo_concurrency.processutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 8e2c1007-1d07-434c-8a22-6cb98d903d3c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2716: 305 pgs: 305 active+clean; 453 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 64 KiB/s wr, 101 op/s
Oct 02 12:50:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:02.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:03 compute-0 nova_compute[257802]: 2025-10-02 12:50:03.000 2 DEBUG oslo_concurrency.processutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 8e2c1007-1d07-434c-8a22-6cb98d903d3c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:03 compute-0 nova_compute[257802]: 2025-10-02 12:50:03.057 2 DEBUG nova.storage.rbd_utils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] resizing rbd image 8e2c1007-1d07-434c-8a22-6cb98d903d3c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:50:03 compute-0 ceph-mon[73607]: pgmap v2716: 305 pgs: 305 active+clean; 453 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 64 KiB/s wr, 101 op/s
Oct 02 12:50:03 compute-0 nova_compute[257802]: 2025-10-02 12:50:03.147 2 DEBUG nova.objects.instance [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lazy-loading 'migration_context' on Instance uuid 8e2c1007-1d07-434c-8a22-6cb98d903d3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:03 compute-0 nova_compute[257802]: 2025-10-02 12:50:03.282 2 DEBUG nova.virt.libvirt.driver [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:50:03 compute-0 nova_compute[257802]: 2025-10-02 12:50:03.282 2 DEBUG nova.virt.libvirt.driver [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Ensure instance console log exists: /var/lib/nova/instances/8e2c1007-1d07-434c-8a22-6cb98d903d3c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:50:03 compute-0 nova_compute[257802]: 2025-10-02 12:50:03.283 2 DEBUG oslo_concurrency.lockutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:03 compute-0 nova_compute[257802]: 2025-10-02 12:50:03.283 2 DEBUG oslo_concurrency.lockutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:03 compute-0 nova_compute[257802]: 2025-10-02 12:50:03.283 2 DEBUG oslo_concurrency.lockutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:50:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:03.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:50:03 compute-0 nova_compute[257802]: 2025-10-02 12:50:03.772 2 DEBUG nova.network.neutron [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Successfully created port: 62f0b94c-3e74-4a7d-b13e-8178d5dbf737 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:50:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2717: 305 pgs: 305 active+clean; 438 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 105 op/s
Oct 02 12:50:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:04 compute-0 nova_compute[257802]: 2025-10-02 12:50:04.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:04.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/812729193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:04 compute-0 nova_compute[257802]: 2025-10-02 12:50:04.890 2 DEBUG oslo_concurrency.lockutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:04 compute-0 nova_compute[257802]: 2025-10-02 12:50:04.891 2 DEBUG oslo_concurrency.lockutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:05 compute-0 nova_compute[257802]: 2025-10-02 12:50:05.022 2 DEBUG nova.compute.manager [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:50:05 compute-0 nova_compute[257802]: 2025-10-02 12:50:05.241 2 DEBUG oslo_concurrency.lockutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:05 compute-0 nova_compute[257802]: 2025-10-02 12:50:05.241 2 DEBUG oslo_concurrency.lockutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:05 compute-0 nova_compute[257802]: 2025-10-02 12:50:05.248 2 DEBUG nova.virt.hardware [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:50:05 compute-0 nova_compute[257802]: 2025-10-02 12:50:05.248 2 INFO nova.compute.claims [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:50:05 compute-0 nova_compute[257802]: 2025-10-02 12:50:05.279 2 DEBUG nova.network.neutron [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Successfully updated port: 62f0b94c-3e74-4a7d-b13e-8178d5dbf737 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:50:05 compute-0 nova_compute[257802]: 2025-10-02 12:50:05.352 2 DEBUG nova.compute.manager [req-3ec92509-f39c-41e1-8f56-515d60db791d req-71e494c4-8d85-4082-a7ce-83aa18c14f75 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Received event network-changed-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:05 compute-0 nova_compute[257802]: 2025-10-02 12:50:05.353 2 DEBUG nova.compute.manager [req-3ec92509-f39c-41e1-8f56-515d60db791d req-71e494c4-8d85-4082-a7ce-83aa18c14f75 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Refreshing instance network info cache due to event network-changed-62f0b94c-3e74-4a7d-b13e-8178d5dbf737. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:50:05 compute-0 nova_compute[257802]: 2025-10-02 12:50:05.353 2 DEBUG oslo_concurrency.lockutils [req-3ec92509-f39c-41e1-8f56-515d60db791d req-71e494c4-8d85-4082-a7ce-83aa18c14f75 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-8e2c1007-1d07-434c-8a22-6cb98d903d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:50:05 compute-0 nova_compute[257802]: 2025-10-02 12:50:05.353 2 DEBUG oslo_concurrency.lockutils [req-3ec92509-f39c-41e1-8f56-515d60db791d req-71e494c4-8d85-4082-a7ce-83aa18c14f75 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-8e2c1007-1d07-434c-8a22-6cb98d903d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:50:05 compute-0 nova_compute[257802]: 2025-10-02 12:50:05.353 2 DEBUG nova.network.neutron [req-3ec92509-f39c-41e1-8f56-515d60db791d req-71e494c4-8d85-4082-a7ce-83aa18c14f75 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Refreshing network info cache for port 62f0b94c-3e74-4a7d-b13e-8178d5dbf737 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:50:05 compute-0 nova_compute[257802]: 2025-10-02 12:50:05.446 2 DEBUG oslo_concurrency.processutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:05.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:05 compute-0 nova_compute[257802]: 2025-10-02 12:50:05.546 2 DEBUG oslo_concurrency.lockutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquiring lock "refresh_cache-8e2c1007-1d07-434c-8a22-6cb98d903d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:50:05 compute-0 nova_compute[257802]: 2025-10-02 12:50:05.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:05 compute-0 ceph-mon[73607]: pgmap v2717: 305 pgs: 305 active+clean; 438 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 105 op/s
Oct 02 12:50:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2818901054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:50:05 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2099459370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:05 compute-0 nova_compute[257802]: 2025-10-02 12:50:05.883 2 DEBUG nova.network.neutron [req-3ec92509-f39c-41e1-8f56-515d60db791d req-71e494c4-8d85-4082-a7ce-83aa18c14f75 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:50:05 compute-0 nova_compute[257802]: 2025-10-02 12:50:05.886 2 DEBUG oslo_concurrency.processutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:05 compute-0 nova_compute[257802]: 2025-10-02 12:50:05.891 2 DEBUG nova.compute.provider_tree [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:50:06 compute-0 nova_compute[257802]: 2025-10-02 12:50:06.060 2 DEBUG nova.scheduler.client.report [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:50:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2718: 305 pgs: 305 active+clean; 420 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 144 op/s
Oct 02 12:50:06 compute-0 nova_compute[257802]: 2025-10-02 12:50:06.143 2 DEBUG oslo_concurrency.lockutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:06 compute-0 nova_compute[257802]: 2025-10-02 12:50:06.144 2 DEBUG nova.compute.manager [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:50:06 compute-0 nova_compute[257802]: 2025-10-02 12:50:06.176 2 DEBUG nova.network.neutron [req-3ec92509-f39c-41e1-8f56-515d60db791d req-71e494c4-8d85-4082-a7ce-83aa18c14f75 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:06 compute-0 nova_compute[257802]: 2025-10-02 12:50:06.351 2 DEBUG oslo_concurrency.lockutils [req-3ec92509-f39c-41e1-8f56-515d60db791d req-71e494c4-8d85-4082-a7ce-83aa18c14f75 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-8e2c1007-1d07-434c-8a22-6cb98d903d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:50:06 compute-0 nova_compute[257802]: 2025-10-02 12:50:06.352 2 DEBUG oslo_concurrency.lockutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquired lock "refresh_cache-8e2c1007-1d07-434c-8a22-6cb98d903d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:50:06 compute-0 nova_compute[257802]: 2025-10-02 12:50:06.352 2 DEBUG nova.network.neutron [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:50:06 compute-0 nova_compute[257802]: 2025-10-02 12:50:06.373 2 DEBUG nova.compute.manager [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:50:06 compute-0 nova_compute[257802]: 2025-10-02 12:50:06.373 2 DEBUG nova.network.neutron [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:50:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:50:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:06.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:50:06 compute-0 nova_compute[257802]: 2025-10-02 12:50:06.590 2 INFO nova.virt.libvirt.driver [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:50:06 compute-0 nova_compute[257802]: 2025-10-02 12:50:06.738 2 DEBUG nova.compute.manager [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:50:06 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2099459370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:06 compute-0 ceph-mon[73607]: pgmap v2718: 305 pgs: 305 active+clean; 420 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 144 op/s
Oct 02 12:50:06 compute-0 nova_compute[257802]: 2025-10-02 12:50:06.862 2 DEBUG nova.network.neutron [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:50:06 compute-0 nova_compute[257802]: 2025-10-02 12:50:06.921 2 DEBUG nova.policy [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb366465e6154871b8a53c9f500105ce', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:50:07 compute-0 nova_compute[257802]: 2025-10-02 12:50:07.022 2 DEBUG nova.compute.manager [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:50:07 compute-0 nova_compute[257802]: 2025-10-02 12:50:07.023 2 DEBUG nova.virt.libvirt.driver [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:50:07 compute-0 nova_compute[257802]: 2025-10-02 12:50:07.023 2 INFO nova.virt.libvirt.driver [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Creating image(s)
Oct 02 12:50:07 compute-0 nova_compute[257802]: 2025-10-02 12:50:07.048 2 DEBUG nova.storage.rbd_utils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:50:07 compute-0 nova_compute[257802]: 2025-10-02 12:50:07.070 2 DEBUG nova.storage.rbd_utils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:50:07 compute-0 nova_compute[257802]: 2025-10-02 12:50:07.093 2 DEBUG nova.storage.rbd_utils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:50:07 compute-0 nova_compute[257802]: 2025-10-02 12:50:07.097 2 DEBUG oslo_concurrency.processutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:07 compute-0 nova_compute[257802]: 2025-10-02 12:50:07.160 2 DEBUG oslo_concurrency.processutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:07 compute-0 nova_compute[257802]: 2025-10-02 12:50:07.161 2 DEBUG oslo_concurrency.lockutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:07 compute-0 nova_compute[257802]: 2025-10-02 12:50:07.162 2 DEBUG oslo_concurrency.lockutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:07 compute-0 nova_compute[257802]: 2025-10-02 12:50:07.162 2 DEBUG oslo_concurrency.lockutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:07 compute-0 nova_compute[257802]: 2025-10-02 12:50:07.188 2 DEBUG nova.storage.rbd_utils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:50:07 compute-0 nova_compute[257802]: 2025-10-02 12:50:07.192 2 DEBUG oslo_concurrency.processutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:07.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2719: 305 pgs: 305 active+clean; 420 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Oct 02 12:50:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:08.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:09 compute-0 nova_compute[257802]: 2025-10-02 12:50:09.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:09 compute-0 nova_compute[257802]: 2025-10-02 12:50:09.100 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:50:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:09 compute-0 ceph-mon[73607]: pgmap v2719: 305 pgs: 305 active+clean; 420 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Oct 02 12:50:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3377597495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:09 compute-0 nova_compute[257802]: 2025-10-02 12:50:09.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:50:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:09.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:50:09 compute-0 nova_compute[257802]: 2025-10-02 12:50:09.909 2 DEBUG nova.network.neutron [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Updating instance_info_cache with network_info: [{"id": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "address": "fa:16:3e:26:56:b9", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f0b94c-3e", "ovs_interfaceid": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:10 compute-0 nova_compute[257802]: 2025-10-02 12:50:10.028 2 DEBUG oslo_concurrency.lockutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Releasing lock "refresh_cache-8e2c1007-1d07-434c-8a22-6cb98d903d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:50:10 compute-0 nova_compute[257802]: 2025-10-02 12:50:10.029 2 DEBUG nova.compute.manager [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Instance network_info: |[{"id": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "address": "fa:16:3e:26:56:b9", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f0b94c-3e", "ovs_interfaceid": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:50:10 compute-0 nova_compute[257802]: 2025-10-02 12:50:10.034 2 DEBUG nova.virt.libvirt.driver [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Start _get_guest_xml network_info=[{"id": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "address": "fa:16:3e:26:56:b9", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f0b94c-3e", "ovs_interfaceid": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:50:10 compute-0 nova_compute[257802]: 2025-10-02 12:50:10.041 2 WARNING nova.virt.libvirt.driver [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:50:10 compute-0 nova_compute[257802]: 2025-10-02 12:50:10.050 2 DEBUG nova.virt.libvirt.host [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:50:10 compute-0 nova_compute[257802]: 2025-10-02 12:50:10.051 2 DEBUG nova.virt.libvirt.host [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:50:10 compute-0 nova_compute[257802]: 2025-10-02 12:50:10.055 2 DEBUG nova.virt.libvirt.host [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:50:10 compute-0 nova_compute[257802]: 2025-10-02 12:50:10.055 2 DEBUG nova.virt.libvirt.host [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:50:10 compute-0 nova_compute[257802]: 2025-10-02 12:50:10.057 2 DEBUG nova.virt.libvirt.driver [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:50:10 compute-0 nova_compute[257802]: 2025-10-02 12:50:10.057 2 DEBUG nova.virt.hardware [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:50:10 compute-0 nova_compute[257802]: 2025-10-02 12:50:10.058 2 DEBUG nova.virt.hardware [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:50:10 compute-0 nova_compute[257802]: 2025-10-02 12:50:10.058 2 DEBUG nova.virt.hardware [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:50:10 compute-0 nova_compute[257802]: 2025-10-02 12:50:10.058 2 DEBUG nova.virt.hardware [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:50:10 compute-0 nova_compute[257802]: 2025-10-02 12:50:10.058 2 DEBUG nova.virt.hardware [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:50:10 compute-0 nova_compute[257802]: 2025-10-02 12:50:10.058 2 DEBUG nova.virt.hardware [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:50:10 compute-0 nova_compute[257802]: 2025-10-02 12:50:10.058 2 DEBUG nova.virt.hardware [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:50:10 compute-0 nova_compute[257802]: 2025-10-02 12:50:10.059 2 DEBUG nova.virt.hardware [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:50:10 compute-0 nova_compute[257802]: 2025-10-02 12:50:10.059 2 DEBUG nova.virt.hardware [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:50:10 compute-0 nova_compute[257802]: 2025-10-02 12:50:10.059 2 DEBUG nova.virt.hardware [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:50:10 compute-0 nova_compute[257802]: 2025-10-02 12:50:10.059 2 DEBUG nova.virt.hardware [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:50:10 compute-0 nova_compute[257802]: 2025-10-02 12:50:10.063 2 DEBUG oslo_concurrency.processutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2720: 305 pgs: 305 active+clean; 450 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.0 MiB/s wr, 155 op/s
Oct 02 12:50:10 compute-0 nova_compute[257802]: 2025-10-02 12:50:10.110 2 DEBUG nova.network.neutron [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Successfully created port: 76fcd8fc-8215-4f14-b41b-aaa8327ae16e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:50:10 compute-0 nova_compute[257802]: 2025-10-02 12:50:10.115 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:10 compute-0 nova_compute[257802]: 2025-10-02 12:50:10.115 2 DEBUG oslo_concurrency.processutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.923s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:10 compute-0 nova_compute[257802]: 2025-10-02 12:50:10.202 2 DEBUG nova.storage.rbd_utils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] resizing rbd image 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:50:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:50:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:10.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:50:10 compute-0 nova_compute[257802]: 2025-10-02 12:50:10.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:10 compute-0 nova_compute[257802]: 2025-10-02 12:50:10.908 2 DEBUG oslo_concurrency.processutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.844s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:10 compute-0 nova_compute[257802]: 2025-10-02 12:50:10.932 2 DEBUG nova.storage.rbd_utils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] rbd image 8e2c1007-1d07-434c-8a22-6cb98d903d3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:50:10 compute-0 nova_compute[257802]: 2025-10-02 12:50:10.936 2 DEBUG oslo_concurrency.processutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:10 compute-0 ceph-mon[73607]: pgmap v2720: 305 pgs: 305 active+clean; 450 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.0 MiB/s wr, 155 op/s
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.255 2 DEBUG nova.network.neutron [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Successfully updated port: 76fcd8fc-8215-4f14-b41b-aaa8327ae16e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.296 2 DEBUG oslo_concurrency.lockutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "refresh_cache-2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.296 2 DEBUG oslo_concurrency.lockutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquired lock "refresh_cache-2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.297 2 DEBUG nova.network.neutron [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:50:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:50:11 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/401527109' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.371 2 DEBUG oslo_concurrency.processutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.373 2 DEBUG nova.virt.libvirt.vif [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:49:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1972240086',display_name='tempest-ServerStableDeviceRescueTest-server-1972240086',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1972240086',id=177,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6a442bc513e14406b73e96e70396e6c3',ramdisk_id='',reservation_id='r-2z5a0kx3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-454391960',owner_user_name='tempest
-ServerStableDeviceRescueTest-454391960-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:50:01Z,user_data=None,user_id='6785ffe5d6554514b4ed9fd47665eca0',uuid=8e2c1007-1d07-434c-8a22-6cb98d903d3c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "address": "fa:16:3e:26:56:b9", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f0b94c-3e", "ovs_interfaceid": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.376 2 DEBUG nova.network.os_vif_util [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Converting VIF {"id": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "address": "fa:16:3e:26:56:b9", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f0b94c-3e", "ovs_interfaceid": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.378 2 DEBUG nova.network.os_vif_util [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:56:b9,bridge_name='br-int',has_traffic_filtering=True,id=62f0b94c-3e74-4a7d-b13e-8178d5dbf737,network=Network(48e4ff16-1388-40c7-a27a-83a3b4869808),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62f0b94c-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.380 2 DEBUG nova.objects.instance [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e2c1007-1d07-434c-8a22-6cb98d903d3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.444 2 DEBUG nova.compute.manager [req-a02f9197-c80d-4249-b99b-2725dc08913d req-2c0a5e06-f0bd-4b73-9060-31aaf7a6f245 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Received event network-changed-76fcd8fc-8215-4f14-b41b-aaa8327ae16e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.445 2 DEBUG nova.compute.manager [req-a02f9197-c80d-4249-b99b-2725dc08913d req-2c0a5e06-f0bd-4b73-9060-31aaf7a6f245 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Refreshing instance network info cache due to event network-changed-76fcd8fc-8215-4f14-b41b-aaa8327ae16e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.446 2 DEBUG oslo_concurrency.lockutils [req-a02f9197-c80d-4249-b99b-2725dc08913d req-2c0a5e06-f0bd-4b73-9060-31aaf7a6f245 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.450 2 DEBUG nova.virt.libvirt.driver [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:50:11 compute-0 nova_compute[257802]:   <uuid>8e2c1007-1d07-434c-8a22-6cb98d903d3c</uuid>
Oct 02 12:50:11 compute-0 nova_compute[257802]:   <name>instance-000000b1</name>
Oct 02 12:50:11 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:50:11 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:50:11 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:50:11 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:       <nova:name>tempest-ServerStableDeviceRescueTest-server-1972240086</nova:name>
Oct 02 12:50:11 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:50:10</nova:creationTime>
Oct 02 12:50:11 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:50:11 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:50:11 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:50:11 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:50:11 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:50:11 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:50:11 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:50:11 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:50:11 compute-0 nova_compute[257802]:         <nova:user uuid="6785ffe5d6554514b4ed9fd47665eca0">tempest-ServerStableDeviceRescueTest-454391960-project-member</nova:user>
Oct 02 12:50:11 compute-0 nova_compute[257802]:         <nova:project uuid="6a442bc513e14406b73e96e70396e6c3">tempest-ServerStableDeviceRescueTest-454391960</nova:project>
Oct 02 12:50:11 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:50:11 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:50:11 compute-0 nova_compute[257802]:         <nova:port uuid="62f0b94c-3e74-4a7d-b13e-8178d5dbf737">
Oct 02 12:50:11 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:50:11 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:50:11 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:50:11 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <system>
Oct 02 12:50:11 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:50:11 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:50:11 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:50:11 compute-0 nova_compute[257802]:       <entry name="serial">8e2c1007-1d07-434c-8a22-6cb98d903d3c</entry>
Oct 02 12:50:11 compute-0 nova_compute[257802]:       <entry name="uuid">8e2c1007-1d07-434c-8a22-6cb98d903d3c</entry>
Oct 02 12:50:11 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     </system>
Oct 02 12:50:11 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:50:11 compute-0 nova_compute[257802]:   <os>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:   </os>
Oct 02 12:50:11 compute-0 nova_compute[257802]:   <features>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:   </features>
Oct 02 12:50:11 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:50:11 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:50:11 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:50:11 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/8e2c1007-1d07-434c-8a22-6cb98d903d3c_disk">
Oct 02 12:50:11 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:       </source>
Oct 02 12:50:11 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:50:11 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:50:11 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:50:11 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/8e2c1007-1d07-434c-8a22-6cb98d903d3c_disk.config">
Oct 02 12:50:11 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:       </source>
Oct 02 12:50:11 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:50:11 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:50:11 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:50:11 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:26:56:b9"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:       <target dev="tap62f0b94c-3e"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:50:11 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/8e2c1007-1d07-434c-8a22-6cb98d903d3c/console.log" append="off"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <video>
Oct 02 12:50:11 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     </video>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:50:11 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:50:11 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:50:11 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:50:11 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:50:11 compute-0 nova_compute[257802]: </domain>
Oct 02 12:50:11 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.452 2 DEBUG nova.compute.manager [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Preparing to wait for external event network-vif-plugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.453 2 DEBUG oslo_concurrency.lockutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquiring lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.453 2 DEBUG oslo_concurrency.lockutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.453 2 DEBUG oslo_concurrency.lockutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.454 2 DEBUG nova.virt.libvirt.vif [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:49:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1972240086',display_name='tempest-ServerStableDeviceRescueTest-server-1972240086',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1972240086',id=177,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6a442bc513e14406b73e96e70396e6c3',ramdisk_id='',reservation_id='r-2z5a0kx3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-454391960',owner_user_name='tempest-ServerStableDeviceRescueTest-454391960-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:50:01Z,user_data=None,user_id='6785ffe5d6554514b4ed9fd47665eca0',uuid=8e2c1007-1d07-434c-8a22-6cb98d903d3c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "address": "fa:16:3e:26:56:b9", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f0b94c-3e", "ovs_interfaceid": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.454 2 DEBUG nova.network.os_vif_util [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Converting VIF {"id": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "address": "fa:16:3e:26:56:b9", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f0b94c-3e", "ovs_interfaceid": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.455 2 DEBUG nova.network.os_vif_util [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:56:b9,bridge_name='br-int',has_traffic_filtering=True,id=62f0b94c-3e74-4a7d-b13e-8178d5dbf737,network=Network(48e4ff16-1388-40c7-a27a-83a3b4869808),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62f0b94c-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.455 2 DEBUG os_vif [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:56:b9,bridge_name='br-int',has_traffic_filtering=True,id=62f0b94c-3e74-4a7d-b13e-8178d5dbf737,network=Network(48e4ff16-1388-40c7-a27a-83a3b4869808),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62f0b94c-3e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.456 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.457 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.463 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62f0b94c-3e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.464 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap62f0b94c-3e, col_values=(('external_ids', {'iface-id': '62f0b94c-3e74-4a7d-b13e-8178d5dbf737', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:56:b9', 'vm-uuid': '8e2c1007-1d07-434c-8a22-6cb98d903d3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:11 compute-0 NetworkManager[44987]: <info>  [1759409411.4666] manager: (tap62f0b94c-3e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/346)
Oct 02 12:50:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:11.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.478 2 INFO os_vif [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:56:b9,bridge_name='br-int',has_traffic_filtering=True,id=62f0b94c-3e74-4a7d-b13e-8178d5dbf737,network=Network(48e4ff16-1388-40c7-a27a-83a3b4869808),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62f0b94c-3e')
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.579 2 DEBUG nova.virt.libvirt.driver [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.580 2 DEBUG nova.virt.libvirt.driver [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.580 2 DEBUG nova.virt.libvirt.driver [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] No VIF found with MAC fa:16:3e:26:56:b9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.580 2 INFO nova.virt.libvirt.driver [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Using config drive
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.603 2 DEBUG nova.storage.rbd_utils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] rbd image 8e2c1007-1d07-434c-8a22-6cb98d903d3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.657 2 DEBUG nova.objects.instance [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'migration_context' on Instance uuid 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.685 2 DEBUG nova.virt.libvirt.driver [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.686 2 DEBUG nova.virt.libvirt.driver [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Ensure instance console log exists: /var/lib/nova/instances/2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.686 2 DEBUG oslo_concurrency.lockutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.687 2 DEBUG oslo_concurrency.lockutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.687 2 DEBUG oslo_concurrency.lockutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:11 compute-0 nova_compute[257802]: 2025-10-02 12:50:11.884 2 DEBUG nova.network.neutron [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:50:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2721: 305 pgs: 305 active+clean; 450 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 574 KiB/s rd, 3.0 MiB/s wr, 114 op/s
Oct 02 12:50:12 compute-0 nova_compute[257802]: 2025-10-02 12:50:12.140 2 INFO nova.virt.libvirt.driver [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Creating config drive at /var/lib/nova/instances/8e2c1007-1d07-434c-8a22-6cb98d903d3c/disk.config
Oct 02 12:50:12 compute-0 nova_compute[257802]: 2025-10-02 12:50:12.145 2 DEBUG oslo_concurrency.processutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8e2c1007-1d07-434c-8a22-6cb98d903d3c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2svmzc4n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/112898182' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/401527109' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:12 compute-0 nova_compute[257802]: 2025-10-02 12:50:12.279 2 DEBUG oslo_concurrency.processutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8e2c1007-1d07-434c-8a22-6cb98d903d3c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2svmzc4n" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:12 compute-0 nova_compute[257802]: 2025-10-02 12:50:12.311 2 DEBUG nova.storage.rbd_utils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] rbd image 8e2c1007-1d07-434c-8a22-6cb98d903d3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:50:12 compute-0 nova_compute[257802]: 2025-10-02 12:50:12.315 2 DEBUG oslo_concurrency.processutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8e2c1007-1d07-434c-8a22-6cb98d903d3c/disk.config 8e2c1007-1d07-434c-8a22-6cb98d903d3c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:12.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:12 compute-0 nova_compute[257802]: 2025-10-02 12:50:12.522 2 DEBUG oslo_concurrency.processutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8e2c1007-1d07-434c-8a22-6cb98d903d3c/disk.config 8e2c1007-1d07-434c-8a22-6cb98d903d3c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:12 compute-0 nova_compute[257802]: 2025-10-02 12:50:12.523 2 INFO nova.virt.libvirt.driver [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Deleting local config drive /var/lib/nova/instances/8e2c1007-1d07-434c-8a22-6cb98d903d3c/disk.config because it was imported into RBD.
Oct 02 12:50:12 compute-0 kernel: tap62f0b94c-3e: entered promiscuous mode
Oct 02 12:50:12 compute-0 NetworkManager[44987]: <info>  [1759409412.5872] manager: (tap62f0b94c-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/347)
Oct 02 12:50:12 compute-0 ovn_controller[148183]: 2025-10-02T12:50:12Z|00770|binding|INFO|Claiming lport 62f0b94c-3e74-4a7d-b13e-8178d5dbf737 for this chassis.
Oct 02 12:50:12 compute-0 ovn_controller[148183]: 2025-10-02T12:50:12Z|00771|binding|INFO|62f0b94c-3e74-4a7d-b13e-8178d5dbf737: Claiming fa:16:3e:26:56:b9 10.100.0.5
Oct 02 12:50:12 compute-0 nova_compute[257802]: 2025-10-02 12:50:12.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:12 compute-0 systemd-udevd[367367]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:50:12 compute-0 systemd-machined[211836]: New machine qemu-85-instance-000000b1.
Oct 02 12:50:12 compute-0 NetworkManager[44987]: <info>  [1759409412.6340] device (tap62f0b94c-3e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:50:12 compute-0 NetworkManager[44987]: <info>  [1759409412.6353] device (tap62f0b94c-3e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:50:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:12.661 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:56:b9 10.100.0.5'], port_security=['fa:16:3e:26:56:b9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '8e2c1007-1d07-434c-8a22-6cb98d903d3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48e4ff16-1388-40c7-a27a-83a3b4869808', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a442bc513e14406b73e96e70396e6c3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '55b83463-a692-41fe-aa59-8c6f6a3385f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd2acef0-eb35-44b4-ad52-c0266ea4784a, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=62f0b94c-3e74-4a7d-b13e-8178d5dbf737) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:50:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:12.663 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 62f0b94c-3e74-4a7d-b13e-8178d5dbf737 in datapath 48e4ff16-1388-40c7-a27a-83a3b4869808 bound to our chassis
Oct 02 12:50:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:12.664 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 48e4ff16-1388-40c7-a27a-83a3b4869808
Oct 02 12:50:12 compute-0 ovn_controller[148183]: 2025-10-02T12:50:12Z|00772|binding|INFO|Setting lport 62f0b94c-3e74-4a7d-b13e-8178d5dbf737 ovn-installed in OVS
Oct 02 12:50:12 compute-0 ovn_controller[148183]: 2025-10-02T12:50:12Z|00773|binding|INFO|Setting lport 62f0b94c-3e74-4a7d-b13e-8178d5dbf737 up in Southbound
Oct 02 12:50:12 compute-0 systemd[1]: Started Virtual Machine qemu-85-instance-000000b1.
Oct 02 12:50:12 compute-0 nova_compute[257802]: 2025-10-02 12:50:12.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:12.688 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f90d7bb9-3d96-4482-928e-02d772d44533]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:12.689 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap48e4ff16-11 in ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:50:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:12.691 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap48e4ff16-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:50:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:12.691 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e033c2-99df-41cf-9901-748ffed1b20b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:12.692 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[99604ae6-2825-4cc5-8cc5-216e7e1c5af0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:12.707 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[6ffcee51-0b22-4d1f-9acd-054d9ac8cd72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:50:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:50:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:50:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:50:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:50:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:50:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:12.737 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[89332f96-cc86-4ac8-a953-1912d3add0f2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:12.773 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[62bef5c9-221c-4943-aa83-3a80d3381ed3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:12.780 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[70bffb9a-59cc-440e-ac77-4cf679881dda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:12 compute-0 NetworkManager[44987]: <info>  [1759409412.7820] manager: (tap48e4ff16-10): new Veth device (/org/freedesktop/NetworkManager/Devices/348)
Oct 02 12:50:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:12.823 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[d13006d6-2237-4abc-8d01-1c00c1f92248]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:12.826 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[4bcf4729-21e9-48ea-8ce2-179bf5e04c65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:12 compute-0 NetworkManager[44987]: <info>  [1759409412.8485] device (tap48e4ff16-10): carrier: link connected
Oct 02 12:50:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:12.854 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[6037d9e7-3e77-45e2-9254-4df316c67a3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:12.870 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e63965f1-04a5-49ba-b7fa-5d339338ae88]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48e4ff16-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:53:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 235], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 748047, 'reachable_time': 43642, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367402, 'error': None, 'target': 'ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:12.888 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a72fba64-5260-4e58-885d-094ae4222ad4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8b:53bc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 748047, 'tstamp': 748047}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367406, 'error': None, 'target': 'ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:12.905 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[91140810-48ea-4d53-bf5c-7faf5097ea65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48e4ff16-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:53:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 235], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 748047, 'reachable_time': 43642, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 367419, 'error': None, 'target': 'ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:12.940 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8608cf53-96f4-4992-a872-2f741df186b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:12.995 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4ce0da99-66cc-4ed8-bf0d-55cb9c0a8863]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:12.997 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48e4ff16-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:12.997 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:50:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:12.998 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48e4ff16-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:12.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:13 compute-0 NetworkManager[44987]: <info>  [1759409413.0003] manager: (tap48e4ff16-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/349)
Oct 02 12:50:13 compute-0 kernel: tap48e4ff16-10: entered promiscuous mode
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:13.002 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap48e4ff16-10, col_values=(('external_ids', {'iface-id': '1fc80788-89b8-413a-b0b0-d36f1a11a2b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:13 compute-0 ovn_controller[148183]: 2025-10-02T12:50:13Z|00774|binding|INFO|Releasing lport 1fc80788-89b8-413a-b0b0-d36f1a11a2b1 from this chassis (sb_readonly=0)
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:13.021 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/48e4ff16-1388-40c7-a27a-83a3b4869808.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/48e4ff16-1388-40c7-a27a-83a3b4869808.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:13.023 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d533ec32-ef92-48ae-9dac-47ec8e5806cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:13.023 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-48e4ff16-1388-40c7-a27a-83a3b4869808
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/48e4ff16-1388-40c7-a27a-83a3b4869808.pid.haproxy
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 48e4ff16-1388-40c7-a27a-83a3b4869808
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:50:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:13.024 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808', 'env', 'PROCESS_TAG=haproxy-48e4ff16-1388-40c7-a27a-83a3b4869808', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/48e4ff16-1388-40c7-a27a-83a3b4869808.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.072 2 DEBUG nova.network.neutron [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Updating instance_info_cache with network_info: [{"id": "76fcd8fc-8215-4f14-b41b-aaa8327ae16e", "address": "fa:16:3e:e4:44:ff", "network": {"id": "a24bd64e-e8fc-466a-bb2d-8ba89573eddd", "bridge": "br-int", "label": "tempest-network-smoke--2082152244", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76fcd8fc-82", "ovs_interfaceid": "76fcd8fc-8215-4f14-b41b-aaa8327ae16e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.137 2 DEBUG nova.compute.manager [req-752de503-ef9d-4a86-a9dc-5ab253161d5e req-00f3f0ad-e0e4-4c42-ae2f-8935e84c5305 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Received event network-vif-plugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.139 2 DEBUG oslo_concurrency.lockutils [req-752de503-ef9d-4a86-a9dc-5ab253161d5e req-00f3f0ad-e0e4-4c42-ae2f-8935e84c5305 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.139 2 DEBUG oslo_concurrency.lockutils [req-752de503-ef9d-4a86-a9dc-5ab253161d5e req-00f3f0ad-e0e4-4c42-ae2f-8935e84c5305 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.139 2 DEBUG oslo_concurrency.lockutils [req-752de503-ef9d-4a86-a9dc-5ab253161d5e req-00f3f0ad-e0e4-4c42-ae2f-8935e84c5305 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.140 2 DEBUG nova.compute.manager [req-752de503-ef9d-4a86-a9dc-5ab253161d5e req-00f3f0ad-e0e4-4c42-ae2f-8935e84c5305 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Processing event network-vif-plugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.143 2 DEBUG oslo_concurrency.lockutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Releasing lock "refresh_cache-2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.143 2 DEBUG nova.compute.manager [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Instance network_info: |[{"id": "76fcd8fc-8215-4f14-b41b-aaa8327ae16e", "address": "fa:16:3e:e4:44:ff", "network": {"id": "a24bd64e-e8fc-466a-bb2d-8ba89573eddd", "bridge": "br-int", "label": "tempest-network-smoke--2082152244", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76fcd8fc-82", "ovs_interfaceid": "76fcd8fc-8215-4f14-b41b-aaa8327ae16e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.144 2 DEBUG oslo_concurrency.lockutils [req-a02f9197-c80d-4249-b99b-2725dc08913d req-2c0a5e06-f0bd-4b73-9060-31aaf7a6f245 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.144 2 DEBUG nova.network.neutron [req-a02f9197-c80d-4249-b99b-2725dc08913d req-2c0a5e06-f0bd-4b73-9060-31aaf7a6f245 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Refreshing network info cache for port 76fcd8fc-8215-4f14-b41b-aaa8327ae16e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.147 2 DEBUG nova.virt.libvirt.driver [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Start _get_guest_xml network_info=[{"id": "76fcd8fc-8215-4f14-b41b-aaa8327ae16e", "address": "fa:16:3e:e4:44:ff", "network": {"id": "a24bd64e-e8fc-466a-bb2d-8ba89573eddd", "bridge": "br-int", "label": "tempest-network-smoke--2082152244", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76fcd8fc-82", "ovs_interfaceid": "76fcd8fc-8215-4f14-b41b-aaa8327ae16e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.151 2 WARNING nova.virt.libvirt.driver [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.156 2 DEBUG nova.virt.libvirt.host [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.157 2 DEBUG nova.virt.libvirt.host [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.161 2 DEBUG nova.virt.libvirt.host [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.162 2 DEBUG nova.virt.libvirt.host [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.163 2 DEBUG nova.virt.libvirt.driver [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.163 2 DEBUG nova.virt.hardware [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.164 2 DEBUG nova.virt.hardware [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.164 2 DEBUG nova.virt.hardware [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.164 2 DEBUG nova.virt.hardware [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.165 2 DEBUG nova.virt.hardware [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.165 2 DEBUG nova.virt.hardware [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.166 2 DEBUG nova.virt.hardware [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.166 2 DEBUG nova.virt.hardware [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.166 2 DEBUG nova.virt.hardware [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.166 2 DEBUG nova.virt.hardware [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.167 2 DEBUG nova.virt.hardware [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.170 2 DEBUG oslo_concurrency.processutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:13 compute-0 podman[367496]: 2025-10-02 12:50:13.440204859 +0000 UTC m=+0.046261387 container create 2b1e2b5facc4815fc0062183c2b9cd7e04fea2f29c7a46214fa6d083a1899ddc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 12:50:13 compute-0 ceph-mon[73607]: pgmap v2721: 305 pgs: 305 active+clean; 450 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 574 KiB/s rd, 3.0 MiB/s wr, 114 op/s
Oct 02 12:50:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:13.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:13 compute-0 systemd[1]: Started libpod-conmon-2b1e2b5facc4815fc0062183c2b9cd7e04fea2f29c7a46214fa6d083a1899ddc.scope.
Oct 02 12:50:13 compute-0 podman[367496]: 2025-10-02 12:50:13.416320432 +0000 UTC m=+0.022376980 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:50:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:50:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f44ce0f41dbe0554b9aead53f18f0c8daaea67ed245867e5f7eb7693e2fd8969/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:50:13 compute-0 podman[367496]: 2025-10-02 12:50:13.534645298 +0000 UTC m=+0.140701846 container init 2b1e2b5facc4815fc0062183c2b9cd7e04fea2f29c7a46214fa6d083a1899ddc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.534 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409413.5341792, 8e2c1007-1d07-434c-8a22-6cb98d903d3c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.536 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] VM Started (Lifecycle Event)
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.538 2 DEBUG nova.compute.manager [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:50:13 compute-0 podman[367496]: 2025-10-02 12:50:13.540320428 +0000 UTC m=+0.146376946 container start 2b1e2b5facc4815fc0062183c2b9cd7e04fea2f29c7a46214fa6d083a1899ddc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.542 2 DEBUG nova.virt.libvirt.driver [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.547 2 INFO nova.virt.libvirt.driver [-] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Instance spawned successfully.
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.548 2 DEBUG nova.virt.libvirt.driver [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:50:13 compute-0 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[367511]: [NOTICE]   (367515) : New worker (367517) forked
Oct 02 12:50:13 compute-0 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[367511]: [NOTICE]   (367515) : Loading success.
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.581 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.585 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:50:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:50:13 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2146398080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.651 2 DEBUG nova.virt.libvirt.driver [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.652 2 DEBUG nova.virt.libvirt.driver [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.653 2 DEBUG nova.virt.libvirt.driver [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.653 2 DEBUG nova.virt.libvirt.driver [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.654 2 DEBUG nova.virt.libvirt.driver [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.654 2 DEBUG nova.virt.libvirt.driver [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.659 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.659 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409413.5345104, 8e2c1007-1d07-434c-8a22-6cb98d903d3c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.659 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] VM Paused (Lifecycle Event)
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.664 2 DEBUG oslo_concurrency.processutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.686 2 DEBUG nova.storage.rbd_utils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.689 2 DEBUG oslo_concurrency.processutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.803 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.810 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409413.5417976, 8e2c1007-1d07-434c-8a22-6cb98d903d3c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.811 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] VM Resumed (Lifecycle Event)
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.872 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.878 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.910 2 INFO nova.compute.manager [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Took 12.14 seconds to spawn the instance on the hypervisor.
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.911 2 DEBUG nova.compute.manager [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:50:13 compute-0 nova_compute[257802]: 2025-10-02 12:50:13.962 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:50:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2722: 305 pgs: 305 active+clean; 468 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 582 KiB/s rd, 3.6 MiB/s wr, 124 op/s
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.126 2 INFO nova.compute.manager [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Took 14.04 seconds to build instance.
Oct 02 12:50:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:50:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2824393924' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.173 2 DEBUG oslo_concurrency.processutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.175 2 DEBUG nova.virt.libvirt.vif [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:50:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2028645950',display_name='tempest-TestNetworkBasicOps-server-2028645950',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2028645950',id=178,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJxXgh9Gujh9k8v28uC2r2aUWZ4hdScdX0MxzNItcmpV/cUmfpYjCMJUsWbygZcDyKss/9aG0cpN6ekTQGyiz+Iza/UQBZyY0NPcxBcPbDwZL6Km1biqqXoyNPtMEgGVA==',key_name='tempest-TestNetworkBasicOps-1030994634',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-1f3otf55',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:50:06Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76fcd8fc-8215-4f14-b41b-aaa8327ae16e", "address": "fa:16:3e:e4:44:ff", "network": {"id": "a24bd64e-e8fc-466a-bb2d-8ba89573eddd", "bridge": "br-int", "label": "tempest-network-smoke--2082152244", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76fcd8fc-82", "ovs_interfaceid": "76fcd8fc-8215-4f14-b41b-aaa8327ae16e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.175 2 DEBUG nova.network.os_vif_util [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "76fcd8fc-8215-4f14-b41b-aaa8327ae16e", "address": "fa:16:3e:e4:44:ff", "network": {"id": "a24bd64e-e8fc-466a-bb2d-8ba89573eddd", "bridge": "br-int", "label": "tempest-network-smoke--2082152244", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76fcd8fc-82", "ovs_interfaceid": "76fcd8fc-8215-4f14-b41b-aaa8327ae16e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.176 2 DEBUG nova.network.os_vif_util [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:44:ff,bridge_name='br-int',has_traffic_filtering=True,id=76fcd8fc-8215-4f14-b41b-aaa8327ae16e,network=Network(a24bd64e-e8fc-466a-bb2d-8ba89573eddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76fcd8fc-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.178 2 DEBUG nova.objects.instance [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'pci_devices' on Instance uuid 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.191 2 DEBUG nova.virt.libvirt.driver [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:50:14 compute-0 nova_compute[257802]:   <uuid>2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1</uuid>
Oct 02 12:50:14 compute-0 nova_compute[257802]:   <name>instance-000000b2</name>
Oct 02 12:50:14 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:50:14 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:50:14 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:50:14 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:       <nova:name>tempest-TestNetworkBasicOps-server-2028645950</nova:name>
Oct 02 12:50:14 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:50:13</nova:creationTime>
Oct 02 12:50:14 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:50:14 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:50:14 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:50:14 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:50:14 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:50:14 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:50:14 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:50:14 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:50:14 compute-0 nova_compute[257802]:         <nova:user uuid="fb366465e6154871b8a53c9f500105ce">tempest-TestNetworkBasicOps-1692262680-project-member</nova:user>
Oct 02 12:50:14 compute-0 nova_compute[257802]:         <nova:project uuid="ce2ca82c03554560b55ed747ae63f1fb">tempest-TestNetworkBasicOps-1692262680</nova:project>
Oct 02 12:50:14 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:50:14 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:50:14 compute-0 nova_compute[257802]:         <nova:port uuid="76fcd8fc-8215-4f14-b41b-aaa8327ae16e">
Oct 02 12:50:14 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:50:14 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:50:14 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:50:14 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <system>
Oct 02 12:50:14 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:50:14 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:50:14 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:50:14 compute-0 nova_compute[257802]:       <entry name="serial">2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1</entry>
Oct 02 12:50:14 compute-0 nova_compute[257802]:       <entry name="uuid">2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1</entry>
Oct 02 12:50:14 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     </system>
Oct 02 12:50:14 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:50:14 compute-0 nova_compute[257802]:   <os>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:   </os>
Oct 02 12:50:14 compute-0 nova_compute[257802]:   <features>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:   </features>
Oct 02 12:50:14 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:50:14 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:50:14 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:50:14 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1_disk">
Oct 02 12:50:14 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:       </source>
Oct 02 12:50:14 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:50:14 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:50:14 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:50:14 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1_disk.config">
Oct 02 12:50:14 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:       </source>
Oct 02 12:50:14 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:50:14 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:50:14 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:50:14 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:e4:44:ff"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:       <target dev="tap76fcd8fc-82"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:50:14 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1/console.log" append="off"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <video>
Oct 02 12:50:14 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     </video>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:50:14 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:50:14 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:50:14 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:50:14 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:50:14 compute-0 nova_compute[257802]: </domain>
Oct 02 12:50:14 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.193 2 DEBUG nova.compute.manager [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Preparing to wait for external event network-vif-plugged-76fcd8fc-8215-4f14-b41b-aaa8327ae16e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.193 2 DEBUG oslo_concurrency.lockutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.194 2 DEBUG oslo_concurrency.lockutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.194 2 DEBUG oslo_concurrency.lockutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.194 2 DEBUG nova.virt.libvirt.vif [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:50:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2028645950',display_name='tempest-TestNetworkBasicOps-server-2028645950',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2028645950',id=178,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJxXgh9Gujh9k8v28uC2r2aUWZ4hdScdX0MxzNItcmpV/cUmfpYjCMJUsWbygZcDyKss/9aG0cpN6ekTQGyiz+Iza/UQBZyY0NPcxBcPbDwZL6Km1biqqXoyNPtMEgGVA==',key_name='tempest-TestNetworkBasicOps-1030994634',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-1f3otf55',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:50:06Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76fcd8fc-8215-4f14-b41b-aaa8327ae16e", "address": "fa:16:3e:e4:44:ff", "network": {"id": "a24bd64e-e8fc-466a-bb2d-8ba89573eddd", "bridge": "br-int", "label": "tempest-network-smoke--2082152244", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76fcd8fc-82", "ovs_interfaceid": "76fcd8fc-8215-4f14-b41b-aaa8327ae16e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.195 2 DEBUG nova.network.os_vif_util [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "76fcd8fc-8215-4f14-b41b-aaa8327ae16e", "address": "fa:16:3e:e4:44:ff", "network": {"id": "a24bd64e-e8fc-466a-bb2d-8ba89573eddd", "bridge": "br-int", "label": "tempest-network-smoke--2082152244", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76fcd8fc-82", "ovs_interfaceid": "76fcd8fc-8215-4f14-b41b-aaa8327ae16e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.195 2 DEBUG nova.network.os_vif_util [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:44:ff,bridge_name='br-int',has_traffic_filtering=True,id=76fcd8fc-8215-4f14-b41b-aaa8327ae16e,network=Network(a24bd64e-e8fc-466a-bb2d-8ba89573eddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76fcd8fc-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.196 2 DEBUG os_vif [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:44:ff,bridge_name='br-int',has_traffic_filtering=True,id=76fcd8fc-8215-4f14-b41b-aaa8327ae16e,network=Network(a24bd64e-e8fc-466a-bb2d-8ba89573eddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76fcd8fc-82') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.197 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.197 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.198 2 DEBUG oslo_concurrency.lockutils [None req-3e2b74d7-d72e-407a-b309-d5d45c185afd 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.200 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76fcd8fc-82, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.200 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap76fcd8fc-82, col_values=(('external_ids', {'iface-id': '76fcd8fc-8215-4f14-b41b-aaa8327ae16e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e4:44:ff', 'vm-uuid': '2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:14 compute-0 NetworkManager[44987]: <info>  [1759409414.2030] manager: (tap76fcd8fc-82): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/350)
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.209 2 INFO os_vif [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:44:ff,bridge_name='br-int',has_traffic_filtering=True,id=76fcd8fc-8215-4f14-b41b-aaa8327ae16e,network=Network(a24bd64e-e8fc-466a-bb2d-8ba89573eddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76fcd8fc-82')
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.262 2 DEBUG nova.virt.libvirt.driver [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.263 2 DEBUG nova.virt.libvirt.driver [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.263 2 DEBUG nova.virt.libvirt.driver [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No VIF found with MAC fa:16:3e:e4:44:ff, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.264 2 INFO nova.virt.libvirt.driver [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Using config drive
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.296 2 DEBUG nova.storage.rbd_utils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:50:14 compute-0 podman[367571]: 2025-10-02 12:50:14.306629743 +0000 UTC m=+0.066722111 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct 02 12:50:14 compute-0 podman[367572]: 2025-10-02 12:50:14.307190186 +0000 UTC m=+0.063835159 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:50:14 compute-0 podman[367573]: 2025-10-02 12:50:14.314448864 +0000 UTC m=+0.069454087 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:14.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.679 2 INFO nova.virt.libvirt.driver [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Creating config drive at /var/lib/nova/instances/2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1/disk.config
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.684 2 DEBUG oslo_concurrency.processutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqobzh4w4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.818 2 DEBUG oslo_concurrency.processutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqobzh4w4" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.843 2 DEBUG nova.storage.rbd_utils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.846 2 DEBUG oslo_concurrency.processutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1/disk.config 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.876 2 DEBUG nova.network.neutron [req-a02f9197-c80d-4249-b99b-2725dc08913d req-2c0a5e06-f0bd-4b73-9060-31aaf7a6f245 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Updated VIF entry in instance network info cache for port 76fcd8fc-8215-4f14-b41b-aaa8327ae16e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.877 2 DEBUG nova.network.neutron [req-a02f9197-c80d-4249-b99b-2725dc08913d req-2c0a5e06-f0bd-4b73-9060-31aaf7a6f245 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Updating instance_info_cache with network_info: [{"id": "76fcd8fc-8215-4f14-b41b-aaa8327ae16e", "address": "fa:16:3e:e4:44:ff", "network": {"id": "a24bd64e-e8fc-466a-bb2d-8ba89573eddd", "bridge": "br-int", "label": "tempest-network-smoke--2082152244", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76fcd8fc-82", "ovs_interfaceid": "76fcd8fc-8215-4f14-b41b-aaa8327ae16e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:14 compute-0 nova_compute[257802]: 2025-10-02 12:50:14.895 2 DEBUG oslo_concurrency.lockutils [req-a02f9197-c80d-4249-b99b-2725dc08913d req-2c0a5e06-f0bd-4b73-9060-31aaf7a6f245 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:50:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2146398080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2824393924' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:15 compute-0 nova_compute[257802]: 2025-10-02 12:50:15.225 2 DEBUG nova.compute.manager [req-782970b2-861b-4cc3-8048-b2485eccf9a0 req-0684fe7c-d2aa-49c1-b641-9ea232c7a4cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Received event network-vif-plugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:15 compute-0 nova_compute[257802]: 2025-10-02 12:50:15.226 2 DEBUG oslo_concurrency.lockutils [req-782970b2-861b-4cc3-8048-b2485eccf9a0 req-0684fe7c-d2aa-49c1-b641-9ea232c7a4cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:15 compute-0 nova_compute[257802]: 2025-10-02 12:50:15.226 2 DEBUG oslo_concurrency.lockutils [req-782970b2-861b-4cc3-8048-b2485eccf9a0 req-0684fe7c-d2aa-49c1-b641-9ea232c7a4cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:15 compute-0 nova_compute[257802]: 2025-10-02 12:50:15.226 2 DEBUG oslo_concurrency.lockutils [req-782970b2-861b-4cc3-8048-b2485eccf9a0 req-0684fe7c-d2aa-49c1-b641-9ea232c7a4cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:15 compute-0 nova_compute[257802]: 2025-10-02 12:50:15.227 2 DEBUG nova.compute.manager [req-782970b2-861b-4cc3-8048-b2485eccf9a0 req-0684fe7c-d2aa-49c1-b641-9ea232c7a4cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] No waiting events found dispatching network-vif-plugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:50:15 compute-0 nova_compute[257802]: 2025-10-02 12:50:15.227 2 WARNING nova.compute.manager [req-782970b2-861b-4cc3-8048-b2485eccf9a0 req-0684fe7c-d2aa-49c1-b641-9ea232c7a4cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Received unexpected event network-vif-plugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 for instance with vm_state active and task_state None.
Oct 02 12:50:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:15.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:15 compute-0 sudo[367686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:50:15 compute-0 sudo[367686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:15 compute-0 sudo[367686]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:15 compute-0 sudo[367711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:50:15 compute-0 sudo[367711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:15 compute-0 sudo[367711]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:15 compute-0 nova_compute[257802]: 2025-10-02 12:50:15.564 2 DEBUG oslo_concurrency.processutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1/disk.config 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.718s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:15 compute-0 nova_compute[257802]: 2025-10-02 12:50:15.565 2 INFO nova.virt.libvirt.driver [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Deleting local config drive /var/lib/nova/instances/2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1/disk.config because it was imported into RBD.
Oct 02 12:50:15 compute-0 kernel: tap76fcd8fc-82: entered promiscuous mode
Oct 02 12:50:15 compute-0 NetworkManager[44987]: <info>  [1759409415.6185] manager: (tap76fcd8fc-82): new Tun device (/org/freedesktop/NetworkManager/Devices/351)
Oct 02 12:50:15 compute-0 nova_compute[257802]: 2025-10-02 12:50:15.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:15 compute-0 nova_compute[257802]: 2025-10-02 12:50:15.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:15 compute-0 ovn_controller[148183]: 2025-10-02T12:50:15Z|00775|binding|INFO|Claiming lport 76fcd8fc-8215-4f14-b41b-aaa8327ae16e for this chassis.
Oct 02 12:50:15 compute-0 ovn_controller[148183]: 2025-10-02T12:50:15Z|00776|binding|INFO|76fcd8fc-8215-4f14-b41b-aaa8327ae16e: Claiming fa:16:3e:e4:44:ff 10.100.0.7
Oct 02 12:50:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:15.634 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:44:ff 10.100.0.7'], port_security=['fa:16:3e:e4:44:ff 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a24bd64e-e8fc-466a-bb2d-8ba89573eddd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '72dccc64-4bf8-4d82-ae40-bb1af4f9b50a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=afda8bd0-d3cf-487a-af6f-4c4fa9c53a57, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=76fcd8fc-8215-4f14-b41b-aaa8327ae16e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:50:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:15.635 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 76fcd8fc-8215-4f14-b41b-aaa8327ae16e in datapath a24bd64e-e8fc-466a-bb2d-8ba89573eddd bound to our chassis
Oct 02 12:50:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:15.637 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a24bd64e-e8fc-466a-bb2d-8ba89573eddd
Oct 02 12:50:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:15.649 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3def8a35-8a1f-4e2a-8687-ef0008c90867]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:15.650 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa24bd64e-e1 in ovnmeta-a24bd64e-e8fc-466a-bb2d-8ba89573eddd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:50:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:15.652 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa24bd64e-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:50:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:15.652 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[430188dd-f2d0-4017-817f-54f290f8113d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:15.653 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ffa17c69-5279-449d-886b-f1dc3f95a378]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:15 compute-0 systemd-udevd[367751]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:50:15 compute-0 systemd-machined[211836]: New machine qemu-86-instance-000000b2.
Oct 02 12:50:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:15.669 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[52ba24d5-71b7-47be-8dec-7754352750fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:15 compute-0 NetworkManager[44987]: <info>  [1759409415.6825] device (tap76fcd8fc-82): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:50:15 compute-0 NetworkManager[44987]: <info>  [1759409415.6837] device (tap76fcd8fc-82): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:50:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:15.685 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2e165d78-fdcb-4a91-9482-580152f9b049]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:15 compute-0 systemd[1]: Started Virtual Machine qemu-86-instance-000000b2.
Oct 02 12:50:15 compute-0 nova_compute[257802]: 2025-10-02 12:50:15.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:15 compute-0 ovn_controller[148183]: 2025-10-02T12:50:15Z|00777|binding|INFO|Setting lport 76fcd8fc-8215-4f14-b41b-aaa8327ae16e ovn-installed in OVS
Oct 02 12:50:15 compute-0 ovn_controller[148183]: 2025-10-02T12:50:15Z|00778|binding|INFO|Setting lport 76fcd8fc-8215-4f14-b41b-aaa8327ae16e up in Southbound
Oct 02 12:50:15 compute-0 nova_compute[257802]: 2025-10-02 12:50:15.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:15 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Oct 02 12:50:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:15.717 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[92128671-70f2-481a-bc49-17d43c1e6228]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:15 compute-0 NetworkManager[44987]: <info>  [1759409415.7243] manager: (tapa24bd64e-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/352)
Oct 02 12:50:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:15.723 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b9fc66ba-e8db-412a-86c7-bfee1d75894b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:15.767 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[b282958f-e66b-4238-b557-1d709e0dae9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:15.784 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[ab2973c1-d91a-4f09-87cc-6b771fb5b5ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:15 compute-0 NetworkManager[44987]: <info>  [1759409415.8120] device (tapa24bd64e-e0): carrier: link connected
Oct 02 12:50:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:15.818 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[52295325-d381-4cbe-b366-5d673b2a1c5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:15.833 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e85cec48-89c0-49a1-880d-5ec3b51b6893]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa24bd64e-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6b:8d:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 237], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 748343, 'reachable_time': 32862, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367782, 'error': None, 'target': 'ovnmeta-a24bd64e-e8fc-466a-bb2d-8ba89573eddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:15.847 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a36669-64ab-4490-a4a0-2e51cfc4c3b2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6b:8df3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 748343, 'tstamp': 748343}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367783, 'error': None, 'target': 'ovnmeta-a24bd64e-e8fc-466a-bb2d-8ba89573eddd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:15.860 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[71936184-7d9d-4dce-8176-d5f31a2bbab6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa24bd64e-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6b:8d:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 237], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 748343, 'reachable_time': 32862, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 367784, 'error': None, 'target': 'ovnmeta-a24bd64e-e8fc-466a-bb2d-8ba89573eddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:15.895 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4b895fc0-2fd0-46a6-92e6-50d369a3e539]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:15 compute-0 ceph-mon[73607]: pgmap v2722: 305 pgs: 305 active+clean; 468 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 582 KiB/s rd, 3.6 MiB/s wr, 124 op/s
Oct 02 12:50:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:15.971 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4a15ba84-9722-487c-a3af-25f7259cc41b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:15.973 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa24bd64e-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:15.973 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:50:15 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:15.973 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa24bd64e-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:16 compute-0 nova_compute[257802]: 2025-10-02 12:50:16.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:16 compute-0 NetworkManager[44987]: <info>  [1759409416.0042] manager: (tapa24bd64e-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/353)
Oct 02 12:50:16 compute-0 kernel: tapa24bd64e-e0: entered promiscuous mode
Oct 02 12:50:16 compute-0 nova_compute[257802]: 2025-10-02 12:50:16.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:16.011 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa24bd64e-e0, col_values=(('external_ids', {'iface-id': 'bafef82e-ba7f-4cb9-9180-0642581dc07e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:16 compute-0 nova_compute[257802]: 2025-10-02 12:50:16.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:16 compute-0 ovn_controller[148183]: 2025-10-02T12:50:16Z|00779|binding|INFO|Releasing lport bafef82e-ba7f-4cb9-9180-0642581dc07e from this chassis (sb_readonly=0)
Oct 02 12:50:16 compute-0 nova_compute[257802]: 2025-10-02 12:50:16.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:16.015 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a24bd64e-e8fc-466a-bb2d-8ba89573eddd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a24bd64e-e8fc-466a-bb2d-8ba89573eddd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:16.016 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[fea37e91-8544-41d7-a309-8f2b172085e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:16.017 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-a24bd64e-e8fc-466a-bb2d-8ba89573eddd
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/a24bd64e-e8fc-466a-bb2d-8ba89573eddd.pid.haproxy
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID a24bd64e-e8fc-466a-bb2d-8ba89573eddd
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:50:16 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:16.020 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a24bd64e-e8fc-466a-bb2d-8ba89573eddd', 'env', 'PROCESS_TAG=haproxy-a24bd64e-e8fc-466a-bb2d-8ba89573eddd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a24bd64e-e8fc-466a-bb2d-8ba89573eddd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:50:16 compute-0 nova_compute[257802]: 2025-10-02 12:50:16.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:16 compute-0 nova_compute[257802]: 2025-10-02 12:50:16.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2723: 305 pgs: 305 active+clean; 468 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 160 op/s
Oct 02 12:50:16 compute-0 nova_compute[257802]: 2025-10-02 12:50:16.142 2 DEBUG nova.compute.manager [req-6f586974-fff0-4da7-aa20-ca42d6f797d8 req-abfefc6c-428b-4bc6-b062-5ceb2fee18c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Received event network-vif-plugged-76fcd8fc-8215-4f14-b41b-aaa8327ae16e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:16 compute-0 nova_compute[257802]: 2025-10-02 12:50:16.142 2 DEBUG oslo_concurrency.lockutils [req-6f586974-fff0-4da7-aa20-ca42d6f797d8 req-abfefc6c-428b-4bc6-b062-5ceb2fee18c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:16 compute-0 nova_compute[257802]: 2025-10-02 12:50:16.143 2 DEBUG oslo_concurrency.lockutils [req-6f586974-fff0-4da7-aa20-ca42d6f797d8 req-abfefc6c-428b-4bc6-b062-5ceb2fee18c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:16 compute-0 nova_compute[257802]: 2025-10-02 12:50:16.143 2 DEBUG oslo_concurrency.lockutils [req-6f586974-fff0-4da7-aa20-ca42d6f797d8 req-abfefc6c-428b-4bc6-b062-5ceb2fee18c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:16 compute-0 nova_compute[257802]: 2025-10-02 12:50:16.144 2 DEBUG nova.compute.manager [req-6f586974-fff0-4da7-aa20-ca42d6f797d8 req-abfefc6c-428b-4bc6-b062-5ceb2fee18c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Processing event network-vif-plugged-76fcd8fc-8215-4f14-b41b-aaa8327ae16e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:50:16 compute-0 sudo[367794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:50:16 compute-0 sudo[367794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:16 compute-0 sudo[367794]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:16 compute-0 sudo[367819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:50:16 compute-0 sudo[367819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:16 compute-0 sudo[367819]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:16 compute-0 sudo[367844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:50:16 compute-0 sudo[367844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:16 compute-0 sudo[367844]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:16 compute-0 sudo[367902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:50:16 compute-0 sudo[367902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:16 compute-0 podman[367929]: 2025-10-02 12:50:16.392427339 +0000 UTC m=+0.049303472 container create 7623715c2905ca0b1a16e0cc3a89507029478a43f222b08ad55e469871d1cf20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a24bd64e-e8fc-466a-bb2d-8ba89573eddd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:50:16 compute-0 systemd[1]: Started libpod-conmon-7623715c2905ca0b1a16e0cc3a89507029478a43f222b08ad55e469871d1cf20.scope.
Oct 02 12:50:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:50:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdf388b6297b4653671703a10752460c1a0cd6fcb364e330bd02ac15f0b688fb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:50:16 compute-0 podman[367929]: 2025-10-02 12:50:16.364143855 +0000 UTC m=+0.021020018 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:50:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:16.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:16 compute-0 podman[367929]: 2025-10-02 12:50:16.480499723 +0000 UTC m=+0.137375876 container init 7623715c2905ca0b1a16e0cc3a89507029478a43f222b08ad55e469871d1cf20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a24bd64e-e8fc-466a-bb2d-8ba89573eddd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 02 12:50:16 compute-0 podman[367929]: 2025-10-02 12:50:16.487926475 +0000 UTC m=+0.144802608 container start 7623715c2905ca0b1a16e0cc3a89507029478a43f222b08ad55e469871d1cf20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a24bd64e-e8fc-466a-bb2d-8ba89573eddd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:50:16 compute-0 nova_compute[257802]: 2025-10-02 12:50:16.501 2 DEBUG nova.compute.manager [None req-938e9029-77d0-4b7c-9173-b22ce3b64e14 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:50:16 compute-0 neutron-haproxy-ovnmeta-a24bd64e-e8fc-466a-bb2d-8ba89573eddd[367947]: [NOTICE]   (367958) : New worker (367963) forked
Oct 02 12:50:16 compute-0 neutron-haproxy-ovnmeta-a24bd64e-e8fc-466a-bb2d-8ba89573eddd[367947]: [NOTICE]   (367958) : Loading success.
Oct 02 12:50:16 compute-0 nova_compute[257802]: 2025-10-02 12:50:16.546 2 INFO nova.compute.manager [None req-938e9029-77d0-4b7c-9173-b22ce3b64e14 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] instance snapshotting
Oct 02 12:50:16 compute-0 nova_compute[257802]: 2025-10-02 12:50:16.823 2 INFO nova.virt.libvirt.driver [None req-938e9029-77d0-4b7c-9173-b22ce3b64e14 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Beginning live snapshot process
Oct 02 12:50:16 compute-0 sudo[367902]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:16 compute-0 nova_compute[257802]: 2025-10-02 12:50:16.955 2 DEBUG nova.virt.libvirt.imagebackend [None req-938e9029-77d0-4b7c-9173-b22ce3b64e14 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] No parent info for c2d0c2bc-fe21-4689-86ae-d6728c15874c; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 12:50:16 compute-0 ceph-mon[73607]: pgmap v2723: 305 pgs: 305 active+clean; 468 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 160 op/s
Oct 02 12:50:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:50:17 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:50:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:50:17 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:50:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:50:17 compute-0 nova_compute[257802]: 2025-10-02 12:50:17.041 2 DEBUG nova.compute.manager [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:50:17 compute-0 nova_compute[257802]: 2025-10-02 12:50:17.042 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409417.0410314, 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:50:17 compute-0 nova_compute[257802]: 2025-10-02 12:50:17.042 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] VM Started (Lifecycle Event)
Oct 02 12:50:17 compute-0 nova_compute[257802]: 2025-10-02 12:50:17.045 2 DEBUG nova.virt.libvirt.driver [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:50:17 compute-0 nova_compute[257802]: 2025-10-02 12:50:17.049 2 INFO nova.virt.libvirt.driver [-] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Instance spawned successfully.
Oct 02 12:50:17 compute-0 nova_compute[257802]: 2025-10-02 12:50:17.050 2 DEBUG nova.virt.libvirt.driver [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:50:17 compute-0 nova_compute[257802]: 2025-10-02 12:50:17.066 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:50:17 compute-0 nova_compute[257802]: 2025-10-02 12:50:17.072 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:50:17 compute-0 nova_compute[257802]: 2025-10-02 12:50:17.075 2 DEBUG nova.virt.libvirt.driver [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:50:17 compute-0 nova_compute[257802]: 2025-10-02 12:50:17.076 2 DEBUG nova.virt.libvirt.driver [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:50:17 compute-0 nova_compute[257802]: 2025-10-02 12:50:17.076 2 DEBUG nova.virt.libvirt.driver [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:50:17 compute-0 nova_compute[257802]: 2025-10-02 12:50:17.077 2 DEBUG nova.virt.libvirt.driver [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:50:17 compute-0 nova_compute[257802]: 2025-10-02 12:50:17.078 2 DEBUG nova.virt.libvirt.driver [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:50:17 compute-0 nova_compute[257802]: 2025-10-02 12:50:17.078 2 DEBUG nova.virt.libvirt.driver [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:50:17 compute-0 nova_compute[257802]: 2025-10-02 12:50:17.102 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:50:17 compute-0 nova_compute[257802]: 2025-10-02 12:50:17.103 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409417.0420988, 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:50:17 compute-0 nova_compute[257802]: 2025-10-02 12:50:17.104 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] VM Paused (Lifecycle Event)
Oct 02 12:50:17 compute-0 nova_compute[257802]: 2025-10-02 12:50:17.127 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:50:17 compute-0 nova_compute[257802]: 2025-10-02 12:50:17.132 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409417.044762, 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:50:17 compute-0 nova_compute[257802]: 2025-10-02 12:50:17.133 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] VM Resumed (Lifecycle Event)
Oct 02 12:50:17 compute-0 nova_compute[257802]: 2025-10-02 12:50:17.136 2 INFO nova.compute.manager [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Took 10.11 seconds to spawn the instance on the hypervisor.
Oct 02 12:50:17 compute-0 nova_compute[257802]: 2025-10-02 12:50:17.136 2 DEBUG nova.compute.manager [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:50:17 compute-0 nova_compute[257802]: 2025-10-02 12:50:17.150 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:50:17 compute-0 nova_compute[257802]: 2025-10-02 12:50:17.153 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:50:17 compute-0 nova_compute[257802]: 2025-10-02 12:50:17.186 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:50:17 compute-0 nova_compute[257802]: 2025-10-02 12:50:17.188 2 DEBUG nova.storage.rbd_utils [None req-938e9029-77d0-4b7c-9173-b22ce3b64e14 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] creating snapshot(b111ceeb38f544808022775fe1f86d5b) on rbd image(8e2c1007-1d07-434c-8a22-6cb98d903d3c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:50:17 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:50:17 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 6978dd98-f26f-435e-9ea9-b74993feb0c7 does not exist
Oct 02 12:50:17 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 9d4d4fff-b081-43ba-8523-23f13c3373bd does not exist
Oct 02 12:50:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:50:17 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:50:17 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 1dce025c-6076-4f7f-9593-9ac48f77c9a3 does not exist
Oct 02 12:50:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:50:17 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:50:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:50:17 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:50:17 compute-0 nova_compute[257802]: 2025-10-02 12:50:17.233 2 INFO nova.compute.manager [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Took 12.01 seconds to build instance.
Oct 02 12:50:17 compute-0 sudo[368066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:50:17 compute-0 sudo[368066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:17 compute-0 nova_compute[257802]: 2025-10-02 12:50:17.263 2 DEBUG oslo_concurrency.lockutils [None req-f52a7d87-2a5c-4d97-ac54-de4fdc210cd8 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.373s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:17 compute-0 sudo[368066]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:17 compute-0 sudo[368094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:50:17 compute-0 sudo[368094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:17 compute-0 sudo[368094]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:17 compute-0 sudo[368119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:50:17 compute-0 sudo[368119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:17 compute-0 sudo[368119]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:17 compute-0 sudo[368144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:50:17 compute-0 sudo[368144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:17.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:17 compute-0 podman[368207]: 2025-10-02 12:50:17.748942612 +0000 UTC m=+0.039659215 container create aa5af68d760fcb7cd0eac52d402c87f9fe7f496e2fa52fe9065b8b943f20a4cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 12:50:17 compute-0 systemd[1]: Started libpod-conmon-aa5af68d760fcb7cd0eac52d402c87f9fe7f496e2fa52fe9065b8b943f20a4cc.scope.
Oct 02 12:50:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:50:17 compute-0 podman[368207]: 2025-10-02 12:50:17.728804178 +0000 UTC m=+0.019520801 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:50:17 compute-0 podman[368207]: 2025-10-02 12:50:17.837280722 +0000 UTC m=+0.127997355 container init aa5af68d760fcb7cd0eac52d402c87f9fe7f496e2fa52fe9065b8b943f20a4cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_khorana, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 12:50:17 compute-0 podman[368207]: 2025-10-02 12:50:17.844499199 +0000 UTC m=+0.135215802 container start aa5af68d760fcb7cd0eac52d402c87f9fe7f496e2fa52fe9065b8b943f20a4cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:50:17 compute-0 podman[368207]: 2025-10-02 12:50:17.847424861 +0000 UTC m=+0.138141494 container attach aa5af68d760fcb7cd0eac52d402c87f9fe7f496e2fa52fe9065b8b943f20a4cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:50:17 compute-0 unruffled_khorana[368223]: 167 167
Oct 02 12:50:17 compute-0 systemd[1]: libpod-aa5af68d760fcb7cd0eac52d402c87f9fe7f496e2fa52fe9065b8b943f20a4cc.scope: Deactivated successfully.
Oct 02 12:50:17 compute-0 conmon[368223]: conmon aa5af68d760fcb7cd0ea <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aa5af68d760fcb7cd0eac52d402c87f9fe7f496e2fa52fe9065b8b943f20a4cc.scope/container/memory.events
Oct 02 12:50:17 compute-0 podman[368207]: 2025-10-02 12:50:17.852083276 +0000 UTC m=+0.142799879 container died aa5af68d760fcb7cd0eac52d402c87f9fe7f496e2fa52fe9065b8b943f20a4cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_khorana, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:50:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-a20803237748b532ecc892b8006cc12f2a2b7d37169ec816ef37e5eea63d9bae-merged.mount: Deactivated successfully.
Oct 02 12:50:17 compute-0 podman[368207]: 2025-10-02 12:50:17.892399617 +0000 UTC m=+0.183116220 container remove aa5af68d760fcb7cd0eac52d402c87f9fe7f496e2fa52fe9065b8b943f20a4cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_khorana, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:50:17 compute-0 systemd[1]: libpod-conmon-aa5af68d760fcb7cd0eac52d402c87f9fe7f496e2fa52fe9065b8b943f20a4cc.scope: Deactivated successfully.
Oct 02 12:50:18 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:50:18 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:50:18 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:50:18 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:50:18 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:50:18 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:50:18 compute-0 podman[368246]: 2025-10-02 12:50:18.084408243 +0000 UTC m=+0.043333665 container create 9386cf492587927e843008e51b4c1758926a087515c13d8a13cfac7659b7a700 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:50:18 compute-0 nova_compute[257802]: 2025-10-02 12:50:18.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:18 compute-0 nova_compute[257802]: 2025-10-02 12:50:18.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2724: 305 pgs: 305 active+clean; 468 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Oct 02 12:50:18 compute-0 systemd[1]: Started libpod-conmon-9386cf492587927e843008e51b4c1758926a087515c13d8a13cfac7659b7a700.scope.
Oct 02 12:50:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7c368d58ce51d47c06d856c5be9b0629e32fdbcd6dc24c42ff9c281fb5a69e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7c368d58ce51d47c06d856c5be9b0629e32fdbcd6dc24c42ff9c281fb5a69e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:50:18 compute-0 podman[368246]: 2025-10-02 12:50:18.065279553 +0000 UTC m=+0.024205005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7c368d58ce51d47c06d856c5be9b0629e32fdbcd6dc24c42ff9c281fb5a69e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7c368d58ce51d47c06d856c5be9b0629e32fdbcd6dc24c42ff9c281fb5a69e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7c368d58ce51d47c06d856c5be9b0629e32fdbcd6dc24c42ff9c281fb5a69e3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:50:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e374 do_prune osdmap full prune enabled
Oct 02 12:50:18 compute-0 nova_compute[257802]: 2025-10-02 12:50:18.262 2 DEBUG nova.compute.manager [req-a03b02f6-959b-408f-8cc5-17557184e1fd req-9b3e4cba-6a0b-4f36-b3b9-fcdf9e618c08 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Received event network-vif-plugged-76fcd8fc-8215-4f14-b41b-aaa8327ae16e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:18 compute-0 nova_compute[257802]: 2025-10-02 12:50:18.263 2 DEBUG oslo_concurrency.lockutils [req-a03b02f6-959b-408f-8cc5-17557184e1fd req-9b3e4cba-6a0b-4f36-b3b9-fcdf9e618c08 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:18 compute-0 nova_compute[257802]: 2025-10-02 12:50:18.264 2 DEBUG oslo_concurrency.lockutils [req-a03b02f6-959b-408f-8cc5-17557184e1fd req-9b3e4cba-6a0b-4f36-b3b9-fcdf9e618c08 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:18 compute-0 nova_compute[257802]: 2025-10-02 12:50:18.264 2 DEBUG oslo_concurrency.lockutils [req-a03b02f6-959b-408f-8cc5-17557184e1fd req-9b3e4cba-6a0b-4f36-b3b9-fcdf9e618c08 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:18 compute-0 nova_compute[257802]: 2025-10-02 12:50:18.264 2 DEBUG nova.compute.manager [req-a03b02f6-959b-408f-8cc5-17557184e1fd req-9b3e4cba-6a0b-4f36-b3b9-fcdf9e618c08 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] No waiting events found dispatching network-vif-plugged-76fcd8fc-8215-4f14-b41b-aaa8327ae16e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:50:18 compute-0 nova_compute[257802]: 2025-10-02 12:50:18.264 2 WARNING nova.compute.manager [req-a03b02f6-959b-408f-8cc5-17557184e1fd req-9b3e4cba-6a0b-4f36-b3b9-fcdf9e618c08 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Received unexpected event network-vif-plugged-76fcd8fc-8215-4f14-b41b-aaa8327ae16e for instance with vm_state active and task_state None.
Oct 02 12:50:18 compute-0 podman[368246]: 2025-10-02 12:50:18.272124984 +0000 UTC m=+0.231050416 container init 9386cf492587927e843008e51b4c1758926a087515c13d8a13cfac7659b7a700 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dewdney, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:50:18 compute-0 podman[368246]: 2025-10-02 12:50:18.282916529 +0000 UTC m=+0.241841951 container start 9386cf492587927e843008e51b4c1758926a087515c13d8a13cfac7659b7a700 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:50:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e375 e375: 3 total, 3 up, 3 in
Oct 02 12:50:18 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e375: 3 total, 3 up, 3 in
Oct 02 12:50:18 compute-0 podman[368246]: 2025-10-02 12:50:18.338959065 +0000 UTC m=+0.297884507 container attach 9386cf492587927e843008e51b4c1758926a087515c13d8a13cfac7659b7a700 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dewdney, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 12:50:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:18.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:18 compute-0 nova_compute[257802]: 2025-10-02 12:50:18.658 2 DEBUG nova.storage.rbd_utils [None req-938e9029-77d0-4b7c-9173-b22ce3b64e14 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] cloning vms/8e2c1007-1d07-434c-8a22-6cb98d903d3c_disk@b111ceeb38f544808022775fe1f86d5b to images/e3ec26e3-5879-4ab0-bb2c-bf8f5cbdc0c0 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:50:18 compute-0 nova_compute[257802]: 2025-10-02 12:50:18.912 2 DEBUG nova.storage.rbd_utils [None req-938e9029-77d0-4b7c-9173-b22ce3b64e14 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] flattening images/e3ec26e3-5879-4ab0-bb2c-bf8f5cbdc0c0 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 12:50:19 compute-0 festive_dewdney[368263]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:50:19 compute-0 festive_dewdney[368263]: --> relative data size: 1.0
Oct 02 12:50:19 compute-0 festive_dewdney[368263]: --> All data devices are unavailable
Oct 02 12:50:19 compute-0 systemd[1]: libpod-9386cf492587927e843008e51b4c1758926a087515c13d8a13cfac7659b7a700.scope: Deactivated successfully.
Oct 02 12:50:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:19 compute-0 podman[368246]: 2025-10-02 12:50:19.173960888 +0000 UTC m=+1.132886310 container died 9386cf492587927e843008e51b4c1758926a087515c13d8a13cfac7659b7a700 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dewdney, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 12:50:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7c368d58ce51d47c06d856c5be9b0629e32fdbcd6dc24c42ff9c281fb5a69e3-merged.mount: Deactivated successfully.
Oct 02 12:50:19 compute-0 nova_compute[257802]: 2025-10-02 12:50:19.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:19 compute-0 podman[368246]: 2025-10-02 12:50:19.23714665 +0000 UTC m=+1.196072062 container remove 9386cf492587927e843008e51b4c1758926a087515c13d8a13cfac7659b7a700 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dewdney, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:50:19 compute-0 systemd[1]: libpod-conmon-9386cf492587927e843008e51b4c1758926a087515c13d8a13cfac7659b7a700.scope: Deactivated successfully.
Oct 02 12:50:19 compute-0 sudo[368144]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:19 compute-0 podman[368333]: 2025-10-02 12:50:19.309282941 +0000 UTC m=+0.101729399 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 02 12:50:19 compute-0 sudo[368363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:50:19 compute-0 sudo[368363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:19 compute-0 sudo[368363]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:19 compute-0 nova_compute[257802]: 2025-10-02 12:50:19.354 2 DEBUG nova.storage.rbd_utils [None req-938e9029-77d0-4b7c-9173-b22ce3b64e14 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] removing snapshot(b111ceeb38f544808022775fe1f86d5b) on rbd image(8e2c1007-1d07-434c-8a22-6cb98d903d3c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:50:19 compute-0 ceph-mon[73607]: pgmap v2724: 305 pgs: 305 active+clean; 468 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Oct 02 12:50:19 compute-0 ceph-mon[73607]: osdmap e375: 3 total, 3 up, 3 in
Oct 02 12:50:19 compute-0 sudo[368412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:50:19 compute-0 sudo[368412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:19 compute-0 sudo[368412]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:19 compute-0 sudo[368437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:50:19 compute-0 sudo[368437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:19 compute-0 nova_compute[257802]: 2025-10-02 12:50:19.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:19 compute-0 sudo[368437]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:50:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:19.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:50:19 compute-0 sudo[368462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:50:19 compute-0 sudo[368462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:19 compute-0 podman[368526]: 2025-10-02 12:50:19.819537266 +0000 UTC m=+0.038258911 container create aa69b58b967b10f077dbd25b80f0918da42a54e0cbf35bd5d08e9bfb04769049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 12:50:19 compute-0 systemd[1]: Started libpod-conmon-aa69b58b967b10f077dbd25b80f0918da42a54e0cbf35bd5d08e9bfb04769049.scope.
Oct 02 12:50:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:50:19 compute-0 podman[368526]: 2025-10-02 12:50:19.802679572 +0000 UTC m=+0.021401237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:50:19 compute-0 podman[368526]: 2025-10-02 12:50:19.913063894 +0000 UTC m=+0.131785539 container init aa69b58b967b10f077dbd25b80f0918da42a54e0cbf35bd5d08e9bfb04769049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:50:19 compute-0 podman[368526]: 2025-10-02 12:50:19.91984688 +0000 UTC m=+0.138568525 container start aa69b58b967b10f077dbd25b80f0918da42a54e0cbf35bd5d08e9bfb04769049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dirac, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:50:19 compute-0 podman[368526]: 2025-10-02 12:50:19.923489899 +0000 UTC m=+0.142211544 container attach aa69b58b967b10f077dbd25b80f0918da42a54e0cbf35bd5d08e9bfb04769049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 12:50:19 compute-0 elastic_dirac[368542]: 167 167
Oct 02 12:50:19 compute-0 systemd[1]: libpod-aa69b58b967b10f077dbd25b80f0918da42a54e0cbf35bd5d08e9bfb04769049.scope: Deactivated successfully.
Oct 02 12:50:19 compute-0 conmon[368542]: conmon aa69b58b967b10f077db <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aa69b58b967b10f077dbd25b80f0918da42a54e0cbf35bd5d08e9bfb04769049.scope/container/memory.events
Oct 02 12:50:19 compute-0 podman[368526]: 2025-10-02 12:50:19.926941845 +0000 UTC m=+0.145663490 container died aa69b58b967b10f077dbd25b80f0918da42a54e0cbf35bd5d08e9bfb04769049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dirac, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:50:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f31749633bf027478dd4e1d6403fd0ef2309a05dbdafef794ba1c6f93ddb030-merged.mount: Deactivated successfully.
Oct 02 12:50:19 compute-0 podman[368526]: 2025-10-02 12:50:19.971096309 +0000 UTC m=+0.189817954 container remove aa69b58b967b10f077dbd25b80f0918da42a54e0cbf35bd5d08e9bfb04769049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:50:19 compute-0 systemd[1]: libpod-conmon-aa69b58b967b10f077dbd25b80f0918da42a54e0cbf35bd5d08e9bfb04769049.scope: Deactivated successfully.
Oct 02 12:50:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2726: 305 pgs: 305 active+clean; 501 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.2 MiB/s rd, 2.3 MiB/s wr, 216 op/s
Oct 02 12:50:20 compute-0 podman[368566]: 2025-10-02 12:50:20.135505508 +0000 UTC m=+0.037959174 container create e0b11b8257b62bbdddcfe04ffa87f6ff69e313609cf225771f7cfdc126e7e0d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_germain, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:50:20 compute-0 systemd[1]: Started libpod-conmon-e0b11b8257b62bbdddcfe04ffa87f6ff69e313609cf225771f7cfdc126e7e0d7.scope.
Oct 02 12:50:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:50:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9557eaabcfd6ed417b06679d41c5f4f2d8aaf1ae740cca1ce3071dc7bad640ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:50:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9557eaabcfd6ed417b06679d41c5f4f2d8aaf1ae740cca1ce3071dc7bad640ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:50:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9557eaabcfd6ed417b06679d41c5f4f2d8aaf1ae740cca1ce3071dc7bad640ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:50:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9557eaabcfd6ed417b06679d41c5f4f2d8aaf1ae740cca1ce3071dc7bad640ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:50:20 compute-0 podman[368566]: 2025-10-02 12:50:20.119854453 +0000 UTC m=+0.022308149 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:50:20 compute-0 podman[368566]: 2025-10-02 12:50:20.229175808 +0000 UTC m=+0.131629484 container init e0b11b8257b62bbdddcfe04ffa87f6ff69e313609cf225771f7cfdc126e7e0d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:50:20 compute-0 podman[368566]: 2025-10-02 12:50:20.235005212 +0000 UTC m=+0.137458878 container start e0b11b8257b62bbdddcfe04ffa87f6ff69e313609cf225771f7cfdc126e7e0d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_germain, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:50:20 compute-0 podman[368566]: 2025-10-02 12:50:20.239867791 +0000 UTC m=+0.142321457 container attach e0b11b8257b62bbdddcfe04ffa87f6ff69e313609cf225771f7cfdc126e7e0d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_germain, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:50:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e375 do_prune osdmap full prune enabled
Oct 02 12:50:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e376 e376: 3 total, 3 up, 3 in
Oct 02 12:50:20 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e376: 3 total, 3 up, 3 in
Oct 02 12:50:20 compute-0 NetworkManager[44987]: <info>  [1759409420.4014] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/354)
Oct 02 12:50:20 compute-0 NetworkManager[44987]: <info>  [1759409420.4025] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/355)
Oct 02 12:50:20 compute-0 nova_compute[257802]: 2025-10-02 12:50:20.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:20 compute-0 nova_compute[257802]: 2025-10-02 12:50:20.429 2 DEBUG nova.storage.rbd_utils [None req-938e9029-77d0-4b7c-9173-b22ce3b64e14 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] creating snapshot(snap) on rbd image(e3ec26e3-5879-4ab0-bb2c-bf8f5cbdc0c0) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:50:20 compute-0 ovn_controller[148183]: 2025-10-02T12:50:20Z|00780|binding|INFO|Releasing lport bafef82e-ba7f-4cb9-9180-0642581dc07e from this chassis (sb_readonly=0)
Oct 02 12:50:20 compute-0 ovn_controller[148183]: 2025-10-02T12:50:20Z|00781|binding|INFO|Releasing lport 1fc80788-89b8-413a-b0b0-d36f1a11a2b1 from this chassis (sb_readonly=0)
Oct 02 12:50:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:20.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:20 compute-0 nova_compute[257802]: 2025-10-02 12:50:20.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:20 compute-0 nova_compute[257802]: 2025-10-02 12:50:20.888 2 DEBUG nova.compute.manager [req-5916abca-3404-4092-9d5e-a29f3d081e84 req-16a74ac4-d7e4-40de-9285-0ea01961fb13 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Received event network-changed-76fcd8fc-8215-4f14-b41b-aaa8327ae16e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:20 compute-0 nova_compute[257802]: 2025-10-02 12:50:20.889 2 DEBUG nova.compute.manager [req-5916abca-3404-4092-9d5e-a29f3d081e84 req-16a74ac4-d7e4-40de-9285-0ea01961fb13 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Refreshing instance network info cache due to event network-changed-76fcd8fc-8215-4f14-b41b-aaa8327ae16e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:50:20 compute-0 nova_compute[257802]: 2025-10-02 12:50:20.889 2 DEBUG oslo_concurrency.lockutils [req-5916abca-3404-4092-9d5e-a29f3d081e84 req-16a74ac4-d7e4-40de-9285-0ea01961fb13 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:50:20 compute-0 nova_compute[257802]: 2025-10-02 12:50:20.889 2 DEBUG oslo_concurrency.lockutils [req-5916abca-3404-4092-9d5e-a29f3d081e84 req-16a74ac4-d7e4-40de-9285-0ea01961fb13 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:50:20 compute-0 nova_compute[257802]: 2025-10-02 12:50:20.890 2 DEBUG nova.network.neutron [req-5916abca-3404-4092-9d5e-a29f3d081e84 req-16a74ac4-d7e4-40de-9285-0ea01961fb13 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Refreshing network info cache for port 76fcd8fc-8215-4f14-b41b-aaa8327ae16e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:50:20 compute-0 agitated_germain[368582]: {
Oct 02 12:50:20 compute-0 agitated_germain[368582]:     "1": [
Oct 02 12:50:20 compute-0 agitated_germain[368582]:         {
Oct 02 12:50:20 compute-0 agitated_germain[368582]:             "devices": [
Oct 02 12:50:20 compute-0 agitated_germain[368582]:                 "/dev/loop3"
Oct 02 12:50:20 compute-0 agitated_germain[368582]:             ],
Oct 02 12:50:20 compute-0 agitated_germain[368582]:             "lv_name": "ceph_lv0",
Oct 02 12:50:20 compute-0 agitated_germain[368582]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:50:20 compute-0 agitated_germain[368582]:             "lv_size": "7511998464",
Oct 02 12:50:20 compute-0 agitated_germain[368582]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:50:20 compute-0 agitated_germain[368582]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:50:20 compute-0 agitated_germain[368582]:             "name": "ceph_lv0",
Oct 02 12:50:20 compute-0 agitated_germain[368582]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:50:20 compute-0 agitated_germain[368582]:             "tags": {
Oct 02 12:50:20 compute-0 agitated_germain[368582]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:50:20 compute-0 agitated_germain[368582]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:50:20 compute-0 agitated_germain[368582]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:50:20 compute-0 agitated_germain[368582]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:50:20 compute-0 agitated_germain[368582]:                 "ceph.cluster_name": "ceph",
Oct 02 12:50:20 compute-0 agitated_germain[368582]:                 "ceph.crush_device_class": "",
Oct 02 12:50:20 compute-0 agitated_germain[368582]:                 "ceph.encrypted": "0",
Oct 02 12:50:20 compute-0 agitated_germain[368582]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:50:20 compute-0 agitated_germain[368582]:                 "ceph.osd_id": "1",
Oct 02 12:50:20 compute-0 agitated_germain[368582]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:50:20 compute-0 agitated_germain[368582]:                 "ceph.type": "block",
Oct 02 12:50:20 compute-0 agitated_germain[368582]:                 "ceph.vdo": "0"
Oct 02 12:50:20 compute-0 agitated_germain[368582]:             },
Oct 02 12:50:20 compute-0 agitated_germain[368582]:             "type": "block",
Oct 02 12:50:20 compute-0 agitated_germain[368582]:             "vg_name": "ceph_vg0"
Oct 02 12:50:20 compute-0 agitated_germain[368582]:         }
Oct 02 12:50:20 compute-0 agitated_germain[368582]:     ]
Oct 02 12:50:20 compute-0 agitated_germain[368582]: }
Oct 02 12:50:21 compute-0 systemd[1]: libpod-e0b11b8257b62bbdddcfe04ffa87f6ff69e313609cf225771f7cfdc126e7e0d7.scope: Deactivated successfully.
Oct 02 12:50:21 compute-0 podman[368566]: 2025-10-02 12:50:21.004816022 +0000 UTC m=+0.907269708 container died e0b11b8257b62bbdddcfe04ffa87f6ff69e313609cf225771f7cfdc126e7e0d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_germain, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:50:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-9557eaabcfd6ed417b06679d41c5f4f2d8aaf1ae740cca1ce3071dc7bad640ac-merged.mount: Deactivated successfully.
Oct 02 12:50:21 compute-0 podman[368566]: 2025-10-02 12:50:21.075048948 +0000 UTC m=+0.977502634 container remove e0b11b8257b62bbdddcfe04ffa87f6ff69e313609cf225771f7cfdc126e7e0d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_germain, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:50:21 compute-0 systemd[1]: libpod-conmon-e0b11b8257b62bbdddcfe04ffa87f6ff69e313609cf225771f7cfdc126e7e0d7.scope: Deactivated successfully.
Oct 02 12:50:21 compute-0 sudo[368462]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:21 compute-0 sudo[368624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:50:21 compute-0 sudo[368624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:21 compute-0 sudo[368624]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:21 compute-0 sudo[368649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:50:21 compute-0 sudo[368649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:21 compute-0 sudo[368649]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:21 compute-0 sudo[368674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:50:21 compute-0 sudo[368674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:21 compute-0 sudo[368674]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:21 compute-0 sudo[368699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:50:21 compute-0 sudo[368699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:21 compute-0 ceph-mon[73607]: pgmap v2726: 305 pgs: 305 active+clean; 501 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.2 MiB/s rd, 2.3 MiB/s wr, 216 op/s
Oct 02 12:50:21 compute-0 ceph-mon[73607]: osdmap e376: 3 total, 3 up, 3 in
Oct 02 12:50:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:21.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e376 do_prune osdmap full prune enabled
Oct 02 12:50:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e377 e377: 3 total, 3 up, 3 in
Oct 02 12:50:21 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e377: 3 total, 3 up, 3 in
Oct 02 12:50:21 compute-0 podman[368764]: 2025-10-02 12:50:21.654138282 +0000 UTC m=+0.041632684 container create 877abd6865a94b1cbcdf6c3686024fa1f21cb1a00eaccad5c6d1f6413e74728d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_einstein, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 12:50:21 compute-0 systemd[1]: Started libpod-conmon-877abd6865a94b1cbcdf6c3686024fa1f21cb1a00eaccad5c6d1f6413e74728d.scope.
Oct 02 12:50:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:50:21 compute-0 podman[368764]: 2025-10-02 12:50:21.635432023 +0000 UTC m=+0.022926455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:50:21 compute-0 podman[368764]: 2025-10-02 12:50:21.736215499 +0000 UTC m=+0.123709921 container init 877abd6865a94b1cbcdf6c3686024fa1f21cb1a00eaccad5c6d1f6413e74728d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 12:50:21 compute-0 podman[368764]: 2025-10-02 12:50:21.744570754 +0000 UTC m=+0.132065156 container start 877abd6865a94b1cbcdf6c3686024fa1f21cb1a00eaccad5c6d1f6413e74728d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 12:50:21 compute-0 romantic_einstein[368780]: 167 167
Oct 02 12:50:21 compute-0 systemd[1]: libpod-877abd6865a94b1cbcdf6c3686024fa1f21cb1a00eaccad5c6d1f6413e74728d.scope: Deactivated successfully.
Oct 02 12:50:21 compute-0 podman[368764]: 2025-10-02 12:50:21.751400772 +0000 UTC m=+0.138895194 container attach 877abd6865a94b1cbcdf6c3686024fa1f21cb1a00eaccad5c6d1f6413e74728d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_einstein, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 12:50:21 compute-0 podman[368764]: 2025-10-02 12:50:21.752399366 +0000 UTC m=+0.139893778 container died 877abd6865a94b1cbcdf6c3686024fa1f21cb1a00eaccad5c6d1f6413e74728d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_einstein, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 12:50:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5e558a3a08f102226304fee051ccc2a06026b7714749a503993f1a9b2eb1757-merged.mount: Deactivated successfully.
Oct 02 12:50:21 compute-0 podman[368764]: 2025-10-02 12:50:21.810395531 +0000 UTC m=+0.197889933 container remove 877abd6865a94b1cbcdf6c3686024fa1f21cb1a00eaccad5c6d1f6413e74728d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_einstein, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:50:21 compute-0 systemd[1]: libpod-conmon-877abd6865a94b1cbcdf6c3686024fa1f21cb1a00eaccad5c6d1f6413e74728d.scope: Deactivated successfully.
Oct 02 12:50:21 compute-0 podman[368806]: 2025-10-02 12:50:21.980074969 +0000 UTC m=+0.038287371 container create 14bc561eaa8e5f7042422613be05d2be8569dc919dfa2b61a4a10586739ab713 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_noether, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 12:50:22 compute-0 systemd[1]: Started libpod-conmon-14bc561eaa8e5f7042422613be05d2be8569dc919dfa2b61a4a10586739ab713.scope.
Oct 02 12:50:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:50:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bae6e9682036dd58b148a796b2402f9c8b9284431fd71e271ca7bf857b8f289c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:50:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bae6e9682036dd58b148a796b2402f9c8b9284431fd71e271ca7bf857b8f289c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:50:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bae6e9682036dd58b148a796b2402f9c8b9284431fd71e271ca7bf857b8f289c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:50:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bae6e9682036dd58b148a796b2402f9c8b9284431fd71e271ca7bf857b8f289c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:50:22 compute-0 podman[368806]: 2025-10-02 12:50:21.96463789 +0000 UTC m=+0.022850312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:50:22 compute-0 podman[368806]: 2025-10-02 12:50:22.0696812 +0000 UTC m=+0.127893592 container init 14bc561eaa8e5f7042422613be05d2be8569dc919dfa2b61a4a10586739ab713 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 12:50:22 compute-0 podman[368806]: 2025-10-02 12:50:22.076330414 +0000 UTC m=+0.134542816 container start 14bc561eaa8e5f7042422613be05d2be8569dc919dfa2b61a4a10586739ab713 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_noether, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 12:50:22 compute-0 podman[368806]: 2025-10-02 12:50:22.079560613 +0000 UTC m=+0.137773015 container attach 14bc561eaa8e5f7042422613be05d2be8569dc919dfa2b61a4a10586739ab713 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:50:22 compute-0 nova_compute[257802]: 2025-10-02 12:50:22.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2729: 305 pgs: 305 active+clean; 501 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.1 MiB/s rd, 2.7 MiB/s wr, 260 op/s
Oct 02 12:50:22 compute-0 nova_compute[257802]: 2025-10-02 12:50:22.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/425157005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:22 compute-0 ceph-mon[73607]: osdmap e377: 3 total, 3 up, 3 in
Oct 02 12:50:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3775429891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:22.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:22 compute-0 nova_compute[257802]: 2025-10-02 12:50:22.833 2 DEBUG nova.network.neutron [req-5916abca-3404-4092-9d5e-a29f3d081e84 req-16a74ac4-d7e4-40de-9285-0ea01961fb13 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Updated VIF entry in instance network info cache for port 76fcd8fc-8215-4f14-b41b-aaa8327ae16e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:50:22 compute-0 nova_compute[257802]: 2025-10-02 12:50:22.833 2 DEBUG nova.network.neutron [req-5916abca-3404-4092-9d5e-a29f3d081e84 req-16a74ac4-d7e4-40de-9285-0ea01961fb13 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Updating instance_info_cache with network_info: [{"id": "76fcd8fc-8215-4f14-b41b-aaa8327ae16e", "address": "fa:16:3e:e4:44:ff", "network": {"id": "a24bd64e-e8fc-466a-bb2d-8ba89573eddd", "bridge": "br-int", "label": "tempest-network-smoke--2082152244", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76fcd8fc-82", "ovs_interfaceid": "76fcd8fc-8215-4f14-b41b-aaa8327ae16e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:22 compute-0 sharp_noether[368823]: {
Oct 02 12:50:22 compute-0 sharp_noether[368823]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:50:22 compute-0 sharp_noether[368823]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:50:22 compute-0 sharp_noether[368823]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:50:22 compute-0 sharp_noether[368823]:         "osd_id": 1,
Oct 02 12:50:22 compute-0 sharp_noether[368823]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:50:22 compute-0 sharp_noether[368823]:         "type": "bluestore"
Oct 02 12:50:22 compute-0 sharp_noether[368823]:     }
Oct 02 12:50:22 compute-0 sharp_noether[368823]: }
Oct 02 12:50:22 compute-0 systemd[1]: libpod-14bc561eaa8e5f7042422613be05d2be8569dc919dfa2b61a4a10586739ab713.scope: Deactivated successfully.
Oct 02 12:50:22 compute-0 podman[368806]: 2025-10-02 12:50:22.912819482 +0000 UTC m=+0.971031884 container died 14bc561eaa8e5f7042422613be05d2be8569dc919dfa2b61a4a10586739ab713 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_noether, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 12:50:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-bae6e9682036dd58b148a796b2402f9c8b9284431fd71e271ca7bf857b8f289c-merged.mount: Deactivated successfully.
Oct 02 12:50:22 compute-0 podman[368806]: 2025-10-02 12:50:22.966978173 +0000 UTC m=+1.025190575 container remove 14bc561eaa8e5f7042422613be05d2be8569dc919dfa2b61a4a10586739ab713 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:50:22 compute-0 systemd[1]: libpod-conmon-14bc561eaa8e5f7042422613be05d2be8569dc919dfa2b61a4a10586739ab713.scope: Deactivated successfully.
Oct 02 12:50:22 compute-0 sudo[368699]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:50:23 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:50:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:50:23 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:50:23 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 17869a8a-61d3-4b1c-81da-3dea64b928d3 does not exist
Oct 02 12:50:23 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 9a5360c3-4bd0-498f-af0d-f9499183780c does not exist
Oct 02 12:50:23 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev cbe20cba-d610-442b-b634-73280b018bfe does not exist
Oct 02 12:50:23 compute-0 sudo[368857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:50:23 compute-0 sudo[368857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:23 compute-0 sudo[368857]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:23 compute-0 sudo[368882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:50:23 compute-0 sudo[368882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:23 compute-0 sudo[368882]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:23 compute-0 nova_compute[257802]: 2025-10-02 12:50:23.211 2 DEBUG oslo_concurrency.lockutils [req-5916abca-3404-4092-9d5e-a29f3d081e84 req-16a74ac4-d7e4-40de-9285-0ea01961fb13 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:50:23 compute-0 nova_compute[257802]: 2025-10-02 12:50:23.298 2 INFO nova.virt.libvirt.driver [None req-938e9029-77d0-4b7c-9173-b22ce3b64e14 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Snapshot image upload complete
Oct 02 12:50:23 compute-0 nova_compute[257802]: 2025-10-02 12:50:23.299 2 INFO nova.compute.manager [None req-938e9029-77d0-4b7c-9173-b22ce3b64e14 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Took 6.75 seconds to snapshot the instance on the hypervisor.
Oct 02 12:50:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:23.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:23 compute-0 ceph-mon[73607]: pgmap v2729: 305 pgs: 305 active+clean; 501 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.1 MiB/s rd, 2.7 MiB/s wr, 260 op/s
Oct 02 12:50:23 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:50:23 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:50:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2730: 305 pgs: 305 active+clean; 514 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.7 MiB/s rd, 3.6 MiB/s wr, 350 op/s
Oct 02 12:50:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:50:24.182348) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409424182398, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 667, "num_deletes": 250, "total_data_size": 788811, "memory_usage": 802280, "flush_reason": "Manual Compaction"}
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409424188009, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 780614, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60141, "largest_seqno": 60807, "table_properties": {"data_size": 777105, "index_size": 1352, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7440, "raw_average_key_size": 17, "raw_value_size": 769916, "raw_average_value_size": 1803, "num_data_blocks": 59, "num_entries": 427, "num_filter_entries": 427, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409384, "oldest_key_time": 1759409384, "file_creation_time": 1759409424, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 5684 microseconds, and 2750 cpu microseconds.
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:50:24.188040) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 780614 bytes OK
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:50:24.188056) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:50:24.190003) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:50:24.190014) EVENT_LOG_v1 {"time_micros": 1759409424190010, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:50:24.190028) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 785320, prev total WAL file size 785320, number of live WAL files 2.
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:50:24.190440) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323530' seq:72057594037927935, type:22 .. '6B7600353031' seq:0, type:0; will stop at (end)
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(762KB)], [134(11MB)]
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409424190465, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 12726585, "oldest_snapshot_seqno": -1}
Oct 02 12:50:24 compute-0 nova_compute[257802]: 2025-10-02 12:50:24.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 8779 keys, 11638962 bytes, temperature: kUnknown
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409424242658, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 11638962, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11581875, "index_size": 34062, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21957, "raw_key_size": 230903, "raw_average_key_size": 26, "raw_value_size": 11427310, "raw_average_value_size": 1301, "num_data_blocks": 1306, "num_entries": 8779, "num_filter_entries": 8779, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759409424, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:50:24.242893) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 11638962 bytes
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:50:24.244110) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 243.6 rd, 222.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 11.4 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(31.2) write-amplify(14.9) OK, records in: 9296, records dropped: 517 output_compression: NoCompression
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:50:24.244125) EVENT_LOG_v1 {"time_micros": 1759409424244118, "job": 82, "event": "compaction_finished", "compaction_time_micros": 52254, "compaction_time_cpu_micros": 25470, "output_level": 6, "num_output_files": 1, "total_output_size": 11638962, "num_input_records": 9296, "num_output_records": 8779, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409424244332, "job": 82, "event": "table_file_deletion", "file_number": 136}
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409424246036, "job": 82, "event": "table_file_deletion", "file_number": 134}
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:50:24.190366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:50:24.246111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:50:24.246116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:50:24.246117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:50:24.246119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:50:24 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:50:24.246120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:50:24 compute-0 nova_compute[257802]: 2025-10-02 12:50:24.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:24.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:25 compute-0 nova_compute[257802]: 2025-10-02 12:50:25.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:25 compute-0 ceph-mon[73607]: pgmap v2730: 305 pgs: 305 active+clean; 514 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.7 MiB/s rd, 3.6 MiB/s wr, 350 op/s
Oct 02 12:50:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:25.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:25 compute-0 nova_compute[257802]: 2025-10-02 12:50:25.567 2 INFO nova.compute.manager [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Rescuing
Oct 02 12:50:25 compute-0 nova_compute[257802]: 2025-10-02 12:50:25.568 2 DEBUG oslo_concurrency.lockutils [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquiring lock "refresh_cache-8e2c1007-1d07-434c-8a22-6cb98d903d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:50:25 compute-0 nova_compute[257802]: 2025-10-02 12:50:25.568 2 DEBUG oslo_concurrency.lockutils [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquired lock "refresh_cache-8e2c1007-1d07-434c-8a22-6cb98d903d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:50:25 compute-0 nova_compute[257802]: 2025-10-02 12:50:25.568 2 DEBUG nova.network.neutron [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:50:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2731: 305 pgs: 305 active+clean; 514 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.5 MiB/s rd, 2.7 MiB/s wr, 203 op/s
Oct 02 12:50:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:26.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:26.968 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:26.969 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:26.969 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:27 compute-0 nova_compute[257802]: 2025-10-02 12:50:27.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:27 compute-0 nova_compute[257802]: 2025-10-02 12:50:27.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:50:27 compute-0 nova_compute[257802]: 2025-10-02 12:50:27.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:50:27 compute-0 nova_compute[257802]: 2025-10-02 12:50:27.126 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-8e2c1007-1d07-434c-8a22-6cb98d903d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:50:27 compute-0 ceph-mon[73607]: pgmap v2731: 305 pgs: 305 active+clean; 514 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.5 MiB/s rd, 2.7 MiB/s wr, 203 op/s
Oct 02 12:50:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:27.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:28 compute-0 nova_compute[257802]: 2025-10-02 12:50:28.014 2 DEBUG nova.network.neutron [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Updating instance_info_cache with network_info: [{"id": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "address": "fa:16:3e:26:56:b9", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f0b94c-3e", "ovs_interfaceid": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:28 compute-0 nova_compute[257802]: 2025-10-02 12:50:28.043 2 DEBUG oslo_concurrency.lockutils [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Releasing lock "refresh_cache-8e2c1007-1d07-434c-8a22-6cb98d903d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:50:28 compute-0 nova_compute[257802]: 2025-10-02 12:50:28.046 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-8e2c1007-1d07-434c-8a22-6cb98d903d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:50:28 compute-0 nova_compute[257802]: 2025-10-02 12:50:28.046 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:50:28 compute-0 nova_compute[257802]: 2025-10-02 12:50:28.047 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8e2c1007-1d07-434c-8a22-6cb98d903d3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2732: 305 pgs: 305 active+clean; 514 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 697 KiB/s wr, 72 op/s
Oct 02 12:50:28 compute-0 ovn_controller[148183]: 2025-10-02T12:50:28Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:26:56:b9 10.100.0.5
Oct 02 12:50:28 compute-0 ovn_controller[148183]: 2025-10-02T12:50:28Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:26:56:b9 10.100.0.5
Oct 02 12:50:28 compute-0 nova_compute[257802]: 2025-10-02 12:50:28.292 2 DEBUG nova.virt.libvirt.driver [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:50:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:50:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:28.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:50:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e377 do_prune osdmap full prune enabled
Oct 02 12:50:29 compute-0 nova_compute[257802]: 2025-10-02 12:50:29.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:29 compute-0 nova_compute[257802]: 2025-10-02 12:50:29.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e378 e378: 3 total, 3 up, 3 in
Oct 02 12:50:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:29.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:29 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e378: 3 total, 3 up, 3 in
Oct 02 12:50:29 compute-0 ceph-mon[73607]: pgmap v2732: 305 pgs: 305 active+clean; 514 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 697 KiB/s wr, 72 op/s
Oct 02 12:50:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/569344712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2734: 305 pgs: 305 active+clean; 549 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.2 MiB/s wr, 155 op/s
Oct 02 12:50:30 compute-0 nova_compute[257802]: 2025-10-02 12:50:30.297 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Updating instance_info_cache with network_info: [{"id": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "address": "fa:16:3e:26:56:b9", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f0b94c-3e", "ovs_interfaceid": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:30 compute-0 nova_compute[257802]: 2025-10-02 12:50:30.375 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-8e2c1007-1d07-434c-8a22-6cb98d903d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:50:30 compute-0 nova_compute[257802]: 2025-10-02 12:50:30.376 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:50:30 compute-0 nova_compute[257802]: 2025-10-02 12:50:30.376 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:30 compute-0 nova_compute[257802]: 2025-10-02 12:50:30.424 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:30 compute-0 nova_compute[257802]: 2025-10-02 12:50:30.425 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:30 compute-0 nova_compute[257802]: 2025-10-02 12:50:30.425 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:30 compute-0 nova_compute[257802]: 2025-10-02 12:50:30.425 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:50:30 compute-0 nova_compute[257802]: 2025-10-02 12:50:30.425 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:50:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:30.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:50:30 compute-0 ceph-mon[73607]: osdmap e378: 3 total, 3 up, 3 in
Oct 02 12:50:30 compute-0 ceph-mon[73607]: pgmap v2734: 305 pgs: 305 active+clean; 549 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.2 MiB/s wr, 155 op/s
Oct 02 12:50:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:50:30 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3829914052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:30 compute-0 nova_compute[257802]: 2025-10-02 12:50:30.889 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:31 compute-0 nova_compute[257802]: 2025-10-02 12:50:31.063 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000b1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:50:31 compute-0 nova_compute[257802]: 2025-10-02 12:50:31.063 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000b1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:50:31 compute-0 nova_compute[257802]: 2025-10-02 12:50:31.066 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000b2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:50:31 compute-0 nova_compute[257802]: 2025-10-02 12:50:31.067 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000b2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:50:31 compute-0 nova_compute[257802]: 2025-10-02 12:50:31.227 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:50:31 compute-0 nova_compute[257802]: 2025-10-02 12:50:31.228 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3810MB free_disk=20.855430603027344GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:50:31 compute-0 nova_compute[257802]: 2025-10-02 12:50:31.228 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:31 compute-0 nova_compute[257802]: 2025-10-02 12:50:31.228 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:31 compute-0 nova_compute[257802]: 2025-10-02 12:50:31.449 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 8e2c1007-1d07-434c-8a22-6cb98d903d3c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:50:31 compute-0 nova_compute[257802]: 2025-10-02 12:50:31.449 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:50:31 compute-0 nova_compute[257802]: 2025-10-02 12:50:31.449 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:50:31 compute-0 nova_compute[257802]: 2025-10-02 12:50:31.450 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:50:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:31.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:31 compute-0 nova_compute[257802]: 2025-10-02 12:50:31.575 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing inventories for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:50:31 compute-0 nova_compute[257802]: 2025-10-02 12:50:31.686 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating ProviderTree inventory for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:50:31 compute-0 nova_compute[257802]: 2025-10-02 12:50:31.686 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating inventory in ProviderTree for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:50:31 compute-0 nova_compute[257802]: 2025-10-02 12:50:31.717 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing aggregate associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:50:31 compute-0 nova_compute[257802]: 2025-10-02 12:50:31.770 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing trait associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:50:31 compute-0 nova_compute[257802]: 2025-10-02 12:50:31.909 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:31 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3829914052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2735: 305 pgs: 305 active+clean; 549 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 132 op/s
Oct 02 12:50:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:50:32 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2863371451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:32 compute-0 nova_compute[257802]: 2025-10-02 12:50:32.347 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:32 compute-0 nova_compute[257802]: 2025-10-02 12:50:32.352 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:50:32 compute-0 nova_compute[257802]: 2025-10-02 12:50:32.461 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:50:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:32.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:32 compute-0 nova_compute[257802]: 2025-10-02 12:50:32.648 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:50:32 compute-0 nova_compute[257802]: 2025-10-02 12:50:32.648 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.419s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:32 compute-0 ovn_controller[148183]: 2025-10-02T12:50:32Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e4:44:ff 10.100.0.7
Oct 02 12:50:32 compute-0 ovn_controller[148183]: 2025-10-02T12:50:32Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e4:44:ff 10.100.0.7
Oct 02 12:50:33 compute-0 ceph-mon[73607]: pgmap v2735: 305 pgs: 305 active+clean; 549 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 132 op/s
Oct 02 12:50:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2863371451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:50:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:33.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:50:33 compute-0 kernel: tap62f0b94c-3e (unregistering): left promiscuous mode
Oct 02 12:50:33 compute-0 NetworkManager[44987]: <info>  [1759409433.9815] device (tap62f0b94c-3e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:50:33 compute-0 ovn_controller[148183]: 2025-10-02T12:50:33Z|00782|binding|INFO|Releasing lport 62f0b94c-3e74-4a7d-b13e-8178d5dbf737 from this chassis (sb_readonly=0)
Oct 02 12:50:33 compute-0 nova_compute[257802]: 2025-10-02 12:50:33.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:33 compute-0 ovn_controller[148183]: 2025-10-02T12:50:33Z|00783|binding|INFO|Setting lport 62f0b94c-3e74-4a7d-b13e-8178d5dbf737 down in Southbound
Oct 02 12:50:33 compute-0 ovn_controller[148183]: 2025-10-02T12:50:33Z|00784|binding|INFO|Removing iface tap62f0b94c-3e ovn-installed in OVS
Oct 02 12:50:34 compute-0 nova_compute[257802]: 2025-10-02 12:50:34.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:34 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000b1.scope: Deactivated successfully.
Oct 02 12:50:34 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000b1.scope: Consumed 14.335s CPU time.
Oct 02 12:50:34 compute-0 systemd-machined[211836]: Machine qemu-85-instance-000000b1 terminated.
Oct 02 12:50:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2736: 305 pgs: 305 active+clean; 583 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 470 KiB/s rd, 5.4 MiB/s wr, 114 op/s
Oct 02 12:50:34 compute-0 nova_compute[257802]: 2025-10-02 12:50:34.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:34 compute-0 nova_compute[257802]: 2025-10-02 12:50:34.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:34 compute-0 nova_compute[257802]: 2025-10-02 12:50:34.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:34 compute-0 nova_compute[257802]: 2025-10-02 12:50:34.335 2 INFO nova.virt.libvirt.driver [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Instance shutdown successfully after 6 seconds.
Oct 02 12:50:34 compute-0 nova_compute[257802]: 2025-10-02 12:50:34.340 2 INFO nova.virt.libvirt.driver [-] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Instance destroyed successfully.
Oct 02 12:50:34 compute-0 nova_compute[257802]: 2025-10-02 12:50:34.340 2 DEBUG nova.objects.instance [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lazy-loading 'numa_topology' on Instance uuid 8e2c1007-1d07-434c-8a22-6cb98d903d3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:34.445 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:56:b9 10.100.0.5'], port_security=['fa:16:3e:26:56:b9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '8e2c1007-1d07-434c-8a22-6cb98d903d3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48e4ff16-1388-40c7-a27a-83a3b4869808', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a442bc513e14406b73e96e70396e6c3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '55b83463-a692-41fe-aa59-8c6f6a3385f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd2acef0-eb35-44b4-ad52-c0266ea4784a, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=62f0b94c-3e74-4a7d-b13e-8178d5dbf737) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:50:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:34.447 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 62f0b94c-3e74-4a7d-b13e-8178d5dbf737 in datapath 48e4ff16-1388-40c7-a27a-83a3b4869808 unbound from our chassis
Oct 02 12:50:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:34.448 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 48e4ff16-1388-40c7-a27a-83a3b4869808, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:50:34 compute-0 nova_compute[257802]: 2025-10-02 12:50:34.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:34.449 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d6bc69ab-c9e6-45db-97cb-0ec4d968a5ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:34.450 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808 namespace which is not needed anymore
Oct 02 12:50:34 compute-0 nova_compute[257802]: 2025-10-02 12:50:34.481 2 INFO nova.virt.libvirt.driver [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Attempting a stable device rescue
Oct 02 12:50:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:50:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:34.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:50:34 compute-0 nova_compute[257802]: 2025-10-02 12:50:34.886 2 DEBUG nova.virt.libvirt.driver [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Oct 02 12:50:34 compute-0 nova_compute[257802]: 2025-10-02 12:50:34.892 2 DEBUG nova.virt.libvirt.driver [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 12:50:34 compute-0 nova_compute[257802]: 2025-10-02 12:50:34.892 2 INFO nova.virt.libvirt.driver [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Creating image(s)
Oct 02 12:50:34 compute-0 nova_compute[257802]: 2025-10-02 12:50:34.925 2 DEBUG nova.storage.rbd_utils [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] rbd image 8e2c1007-1d07-434c-8a22-6cb98d903d3c_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:50:34 compute-0 nova_compute[257802]: 2025-10-02 12:50:34.931 2 DEBUG nova.objects.instance [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 8e2c1007-1d07-434c-8a22-6cb98d903d3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:34 compute-0 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[367511]: [NOTICE]   (367515) : haproxy version is 2.8.14-c23fe91
Oct 02 12:50:34 compute-0 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[367511]: [NOTICE]   (367515) : path to executable is /usr/sbin/haproxy
Oct 02 12:50:34 compute-0 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[367511]: [WARNING]  (367515) : Exiting Master process...
Oct 02 12:50:34 compute-0 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[367511]: [ALERT]    (367515) : Current worker (367517) exited with code 143 (Terminated)
Oct 02 12:50:34 compute-0 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[367511]: [WARNING]  (367515) : All workers exited. Exiting... (0)
Oct 02 12:50:34 compute-0 systemd[1]: libpod-2b1e2b5facc4815fc0062183c2b9cd7e04fea2f29c7a46214fa6d083a1899ddc.scope: Deactivated successfully.
Oct 02 12:50:34 compute-0 podman[368991]: 2025-10-02 12:50:34.944264033 +0000 UTC m=+0.361940291 container died 2b1e2b5facc4815fc0062183c2b9cd7e04fea2f29c7a46214fa6d083a1899ddc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:50:35 compute-0 nova_compute[257802]: 2025-10-02 12:50:35.152 2 DEBUG nova.storage.rbd_utils [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] rbd image 8e2c1007-1d07-434c-8a22-6cb98d903d3c_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:50:35 compute-0 nova_compute[257802]: 2025-10-02 12:50:35.186 2 DEBUG nova.storage.rbd_utils [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] rbd image 8e2c1007-1d07-434c-8a22-6cb98d903d3c_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:50:35 compute-0 nova_compute[257802]: 2025-10-02 12:50:35.191 2 DEBUG oslo_concurrency.lockutils [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquiring lock "25e6f6ce5871d9752b3ab7b4700b1344ca904a1e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:35 compute-0 nova_compute[257802]: 2025-10-02 12:50:35.193 2 DEBUG oslo_concurrency.lockutils [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "25e6f6ce5871d9752b3ab7b4700b1344ca904a1e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2b1e2b5facc4815fc0062183c2b9cd7e04fea2f29c7a46214fa6d083a1899ddc-userdata-shm.mount: Deactivated successfully.
Oct 02 12:50:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-f44ce0f41dbe0554b9aead53f18f0c8daaea67ed245867e5f7eb7693e2fd8969-merged.mount: Deactivated successfully.
Oct 02 12:50:35 compute-0 ceph-mon[73607]: pgmap v2736: 305 pgs: 305 active+clean; 583 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 470 KiB/s rd, 5.4 MiB/s wr, 114 op/s
Oct 02 12:50:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:35.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:35 compute-0 nova_compute[257802]: 2025-10-02 12:50:35.583 2 DEBUG nova.virt.libvirt.imagebackend [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Image locations are: [{'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/e3ec26e3-5879-4ab0-bb2c-bf8f5cbdc0c0/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/e3ec26e3-5879-4ab0-bb2c-bf8f5cbdc0c0/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 02 12:50:35 compute-0 podman[368991]: 2025-10-02 12:50:35.640729492 +0000 UTC m=+1.058405750 container cleanup 2b1e2b5facc4815fc0062183c2b9cd7e04fea2f29c7a46214fa6d083a1899ddc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 12:50:35 compute-0 nova_compute[257802]: 2025-10-02 12:50:35.646 2 DEBUG nova.compute.manager [req-88bfddbd-9da4-4461-b776-595e8f35c128 req-2d94457d-ad53-4c8a-ab2b-4a28003fd748 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Received event network-vif-unplugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:35 compute-0 nova_compute[257802]: 2025-10-02 12:50:35.647 2 DEBUG oslo_concurrency.lockutils [req-88bfddbd-9da4-4461-b776-595e8f35c128 req-2d94457d-ad53-4c8a-ab2b-4a28003fd748 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:35 compute-0 nova_compute[257802]: 2025-10-02 12:50:35.647 2 DEBUG oslo_concurrency.lockutils [req-88bfddbd-9da4-4461-b776-595e8f35c128 req-2d94457d-ad53-4c8a-ab2b-4a28003fd748 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:35 compute-0 nova_compute[257802]: 2025-10-02 12:50:35.647 2 DEBUG oslo_concurrency.lockutils [req-88bfddbd-9da4-4461-b776-595e8f35c128 req-2d94457d-ad53-4c8a-ab2b-4a28003fd748 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:35 compute-0 nova_compute[257802]: 2025-10-02 12:50:35.648 2 DEBUG nova.compute.manager [req-88bfddbd-9da4-4461-b776-595e8f35c128 req-2d94457d-ad53-4c8a-ab2b-4a28003fd748 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] No waiting events found dispatching network-vif-unplugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:50:35 compute-0 nova_compute[257802]: 2025-10-02 12:50:35.648 2 WARNING nova.compute.manager [req-88bfddbd-9da4-4461-b776-595e8f35c128 req-2d94457d-ad53-4c8a-ab2b-4a28003fd748 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Received unexpected event network-vif-unplugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 for instance with vm_state active and task_state rescuing.
Oct 02 12:50:35 compute-0 systemd[1]: libpod-conmon-2b1e2b5facc4815fc0062183c2b9cd7e04fea2f29c7a46214fa6d083a1899ddc.scope: Deactivated successfully.
Oct 02 12:50:35 compute-0 nova_compute[257802]: 2025-10-02 12:50:35.662 2 DEBUG nova.virt.libvirt.imagebackend [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Selected location: {'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/e3ec26e3-5879-4ab0-bb2c-bf8f5cbdc0c0/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 02 12:50:35 compute-0 nova_compute[257802]: 2025-10-02 12:50:35.663 2 DEBUG nova.storage.rbd_utils [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] cloning images/e3ec26e3-5879-4ab0-bb2c-bf8f5cbdc0c0@snap to None/8e2c1007-1d07-434c-8a22-6cb98d903d3c_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:50:35 compute-0 sudo[369110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:50:35 compute-0 sudo[369110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:35 compute-0 sudo[369110]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:35 compute-0 sudo[369175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:50:35 compute-0 sudo[369175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:35 compute-0 sudo[369175]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:35 compute-0 podman[369111]: 2025-10-02 12:50:35.835449665 +0000 UTC m=+0.174439036 container remove 2b1e2b5facc4815fc0062183c2b9cd7e04fea2f29c7a46214fa6d083a1899ddc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 02 12:50:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:35.841 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[021812fa-9871-404e-9ea8-f6ba6e55774d]: (4, ('Thu Oct  2 12:50:34 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808 (2b1e2b5facc4815fc0062183c2b9cd7e04fea2f29c7a46214fa6d083a1899ddc)\n2b1e2b5facc4815fc0062183c2b9cd7e04fea2f29c7a46214fa6d083a1899ddc\nThu Oct  2 12:50:35 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808 (2b1e2b5facc4815fc0062183c2b9cd7e04fea2f29c7a46214fa6d083a1899ddc)\n2b1e2b5facc4815fc0062183c2b9cd7e04fea2f29c7a46214fa6d083a1899ddc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:35.844 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6af4bd35-c531-42ac-be8c-a030c1bb22f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:35.845 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48e4ff16-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:35 compute-0 kernel: tap48e4ff16-10: left promiscuous mode
Oct 02 12:50:35 compute-0 nova_compute[257802]: 2025-10-02 12:50:35.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:35 compute-0 nova_compute[257802]: 2025-10-02 12:50:35.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:35.870 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3045ec0c-7a65-4c9e-bb4f-fa7ceeb7c969]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:35.906 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4b05e630-0dfc-4a1b-9883-340d1072ae79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:35.908 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[eb1e689d-e804-46e2-b3d0-5fe918839d5c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:35.923 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[37a6c05a-f677-43f8-b02c-cd6d4caba485]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 748039, 'reachable_time': 19429, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369209, 'error': None, 'target': 'ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:35 compute-0 systemd[1]: run-netns-ovnmeta\x2d48e4ff16\x2d1388\x2d40c7\x2da27a\x2d83a3b4869808.mount: Deactivated successfully.
Oct 02 12:50:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:35.928 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:50:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:35.928 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[0edf8ac6-59e1-473e-a5fc-e14577e9725a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2737: 305 pgs: 305 active+clean; 623 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 609 KiB/s rd, 7.3 MiB/s wr, 175 op/s
Oct 02 12:50:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:36.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:36 compute-0 nova_compute[257802]: 2025-10-02 12:50:36.561 2 DEBUG oslo_concurrency.lockutils [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "25e6f6ce5871d9752b3ab7b4700b1344ca904a1e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.368s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4256102784' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/28671049' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:36 compute-0 nova_compute[257802]: 2025-10-02 12:50:36.614 2 DEBUG nova.objects.instance [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lazy-loading 'migration_context' on Instance uuid 8e2c1007-1d07-434c-8a22-6cb98d903d3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:37 compute-0 nova_compute[257802]: 2025-10-02 12:50:37.350 2 DEBUG nova.virt.libvirt.driver [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:50:37 compute-0 nova_compute[257802]: 2025-10-02 12:50:37.352 2 DEBUG nova.virt.libvirt.driver [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Start _get_guest_xml network_info=[{"id": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "address": "fa:16:3e:26:56:b9", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "vif_mac": "fa:16:3e:26:56:b9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f0b94c-3e", "ovs_interfaceid": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'e3ec26e3-5879-4ab0-bb2c-bf8f5cbdc0c0', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:50:37 compute-0 nova_compute[257802]: 2025-10-02 12:50:37.353 2 DEBUG nova.objects.instance [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lazy-loading 'resources' on Instance uuid 8e2c1007-1d07-434c-8a22-6cb98d903d3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:37.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:37 compute-0 ceph-mon[73607]: pgmap v2737: 305 pgs: 305 active+clean; 623 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 609 KiB/s rd, 7.3 MiB/s wr, 175 op/s
Oct 02 12:50:37 compute-0 nova_compute[257802]: 2025-10-02 12:50:37.802 2 DEBUG nova.compute.manager [req-7ddbf40c-2e85-4eeb-a8fc-b8924d2ac9d2 req-429b7ce1-7955-41a8-8ac5-c7c66c7179b9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Received event network-vif-plugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:37 compute-0 nova_compute[257802]: 2025-10-02 12:50:37.803 2 DEBUG oslo_concurrency.lockutils [req-7ddbf40c-2e85-4eeb-a8fc-b8924d2ac9d2 req-429b7ce1-7955-41a8-8ac5-c7c66c7179b9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:37 compute-0 nova_compute[257802]: 2025-10-02 12:50:37.803 2 DEBUG oslo_concurrency.lockutils [req-7ddbf40c-2e85-4eeb-a8fc-b8924d2ac9d2 req-429b7ce1-7955-41a8-8ac5-c7c66c7179b9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:37 compute-0 nova_compute[257802]: 2025-10-02 12:50:37.804 2 DEBUG oslo_concurrency.lockutils [req-7ddbf40c-2e85-4eeb-a8fc-b8924d2ac9d2 req-429b7ce1-7955-41a8-8ac5-c7c66c7179b9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:37 compute-0 nova_compute[257802]: 2025-10-02 12:50:37.804 2 DEBUG nova.compute.manager [req-7ddbf40c-2e85-4eeb-a8fc-b8924d2ac9d2 req-429b7ce1-7955-41a8-8ac5-c7c66c7179b9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] No waiting events found dispatching network-vif-plugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:50:37 compute-0 nova_compute[257802]: 2025-10-02 12:50:37.804 2 WARNING nova.compute.manager [req-7ddbf40c-2e85-4eeb-a8fc-b8924d2ac9d2 req-429b7ce1-7955-41a8-8ac5-c7c66c7179b9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Received unexpected event network-vif-plugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 for instance with vm_state active and task_state rescuing.
Oct 02 12:50:37 compute-0 nova_compute[257802]: 2025-10-02 12:50:37.839 2 WARNING nova.virt.libvirt.driver [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:50:37 compute-0 nova_compute[257802]: 2025-10-02 12:50:37.853 2 DEBUG nova.virt.libvirt.host [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:50:37 compute-0 nova_compute[257802]: 2025-10-02 12:50:37.854 2 DEBUG nova.virt.libvirt.host [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:50:37 compute-0 nova_compute[257802]: 2025-10-02 12:50:37.860 2 DEBUG nova.virt.libvirt.host [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:50:37 compute-0 nova_compute[257802]: 2025-10-02 12:50:37.860 2 DEBUG nova.virt.libvirt.host [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:50:37 compute-0 nova_compute[257802]: 2025-10-02 12:50:37.861 2 DEBUG nova.virt.libvirt.driver [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:50:37 compute-0 nova_compute[257802]: 2025-10-02 12:50:37.861 2 DEBUG nova.virt.hardware [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:50:37 compute-0 nova_compute[257802]: 2025-10-02 12:50:37.862 2 DEBUG nova.virt.hardware [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:50:37 compute-0 nova_compute[257802]: 2025-10-02 12:50:37.862 2 DEBUG nova.virt.hardware [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:50:37 compute-0 nova_compute[257802]: 2025-10-02 12:50:37.862 2 DEBUG nova.virt.hardware [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:50:37 compute-0 nova_compute[257802]: 2025-10-02 12:50:37.863 2 DEBUG nova.virt.hardware [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:50:37 compute-0 nova_compute[257802]: 2025-10-02 12:50:37.863 2 DEBUG nova.virt.hardware [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:50:37 compute-0 nova_compute[257802]: 2025-10-02 12:50:37.863 2 DEBUG nova.virt.hardware [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:50:37 compute-0 nova_compute[257802]: 2025-10-02 12:50:37.863 2 DEBUG nova.virt.hardware [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:50:37 compute-0 nova_compute[257802]: 2025-10-02 12:50:37.863 2 DEBUG nova.virt.hardware [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:50:37 compute-0 nova_compute[257802]: 2025-10-02 12:50:37.864 2 DEBUG nova.virt.hardware [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:50:37 compute-0 nova_compute[257802]: 2025-10-02 12:50:37.864 2 DEBUG nova.virt.hardware [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:50:37 compute-0 nova_compute[257802]: 2025-10-02 12:50:37.864 2 DEBUG nova.objects.instance [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8e2c1007-1d07-434c-8a22-6cb98d903d3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:38 compute-0 nova_compute[257802]: 2025-10-02 12:50:38.037 2 DEBUG oslo_concurrency.processutils [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2738: 305 pgs: 305 active+clean; 623 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 609 KiB/s rd, 7.3 MiB/s wr, 175 op/s
Oct 02 12:50:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:50:38 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4015185663' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:38 compute-0 nova_compute[257802]: 2025-10-02 12:50:38.482 2 DEBUG oslo_concurrency.processutils [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:38.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:38 compute-0 nova_compute[257802]: 2025-10-02 12:50:38.512 2 DEBUG oslo_concurrency.processutils [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:38 compute-0 ceph-mon[73607]: pgmap v2738: 305 pgs: 305 active+clean; 623 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 609 KiB/s rd, 7.3 MiB/s wr, 175 op/s
Oct 02 12:50:38 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4015185663' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:50:38 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3387736143' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:38 compute-0 nova_compute[257802]: 2025-10-02 12:50:38.964 2 DEBUG oslo_concurrency.processutils [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:38 compute-0 nova_compute[257802]: 2025-10-02 12:50:38.966 2 DEBUG oslo_concurrency.processutils [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:39 compute-0 nova_compute[257802]: 2025-10-02 12:50:39.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:50:39 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3848565880' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:39 compute-0 nova_compute[257802]: 2025-10-02 12:50:39.412 2 DEBUG oslo_concurrency.processutils [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:39 compute-0 nova_compute[257802]: 2025-10-02 12:50:39.414 2 DEBUG nova.virt.libvirt.vif [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:49:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1972240086',display_name='tempest-ServerStableDeviceRescueTest-server-1972240086',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1972240086',id=177,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:50:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a442bc513e14406b73e96e70396e6c3',ramdisk_id='',reservation_id='r-2z5a0kx3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model=
'virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-454391960',owner_user_name='tempest-ServerStableDeviceRescueTest-454391960-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:50:23Z,user_data=None,user_id='6785ffe5d6554514b4ed9fd47665eca0',uuid=8e2c1007-1d07-434c-8a22-6cb98d903d3c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "address": "fa:16:3e:26:56:b9", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "vif_mac": "fa:16:3e:26:56:b9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f0b94c-3e", "ovs_interfaceid": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:50:39 compute-0 nova_compute[257802]: 2025-10-02 12:50:39.414 2 DEBUG nova.network.os_vif_util [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Converting VIF {"id": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "address": "fa:16:3e:26:56:b9", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "vif_mac": "fa:16:3e:26:56:b9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f0b94c-3e", "ovs_interfaceid": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:50:39 compute-0 nova_compute[257802]: 2025-10-02 12:50:39.415 2 DEBUG nova.network.os_vif_util [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:56:b9,bridge_name='br-int',has_traffic_filtering=True,id=62f0b94c-3e74-4a7d-b13e-8178d5dbf737,network=Network(48e4ff16-1388-40c7-a27a-83a3b4869808),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62f0b94c-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:50:39 compute-0 nova_compute[257802]: 2025-10-02 12:50:39.417 2 DEBUG nova.objects.instance [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e2c1007-1d07-434c-8a22-6cb98d903d3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:39 compute-0 nova_compute[257802]: 2025-10-02 12:50:39.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:39 compute-0 nova_compute[257802]: 2025-10-02 12:50:39.491 2 DEBUG nova.virt.libvirt.driver [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:50:39 compute-0 nova_compute[257802]:   <uuid>8e2c1007-1d07-434c-8a22-6cb98d903d3c</uuid>
Oct 02 12:50:39 compute-0 nova_compute[257802]:   <name>instance-000000b1</name>
Oct 02 12:50:39 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:50:39 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:50:39 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <nova:name>tempest-ServerStableDeviceRescueTest-server-1972240086</nova:name>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:50:37</nova:creationTime>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:50:39 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:50:39 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:50:39 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:50:39 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:50:39 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:50:39 compute-0 nova_compute[257802]:         <nova:user uuid="6785ffe5d6554514b4ed9fd47665eca0">tempest-ServerStableDeviceRescueTest-454391960-project-member</nova:user>
Oct 02 12:50:39 compute-0 nova_compute[257802]:         <nova:project uuid="6a442bc513e14406b73e96e70396e6c3">tempest-ServerStableDeviceRescueTest-454391960</nova:project>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:50:39 compute-0 nova_compute[257802]:         <nova:port uuid="62f0b94c-3e74-4a7d-b13e-8178d5dbf737">
Oct 02 12:50:39 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:50:39 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:50:39 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <system>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <entry name="serial">8e2c1007-1d07-434c-8a22-6cb98d903d3c</entry>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <entry name="uuid">8e2c1007-1d07-434c-8a22-6cb98d903d3c</entry>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     </system>
Oct 02 12:50:39 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:50:39 compute-0 nova_compute[257802]:   <os>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:   </os>
Oct 02 12:50:39 compute-0 nova_compute[257802]:   <features>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:   </features>
Oct 02 12:50:39 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:50:39 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:50:39 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/8e2c1007-1d07-434c-8a22-6cb98d903d3c_disk">
Oct 02 12:50:39 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       </source>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:50:39 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/8e2c1007-1d07-434c-8a22-6cb98d903d3c_disk.config">
Oct 02 12:50:39 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       </source>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:50:39 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/8e2c1007-1d07-434c-8a22-6cb98d903d3c_disk.rescue">
Oct 02 12:50:39 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       </source>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:50:39 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <target dev="vdb" bus="virtio"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <boot order="1"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:26:56:b9"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <target dev="tap62f0b94c-3e"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/8e2c1007-1d07-434c-8a22-6cb98d903d3c/console.log" append="off"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <video>
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     </video>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:50:39 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:50:39 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:50:39 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:50:39 compute-0 nova_compute[257802]: </domain>
Oct 02 12:50:39 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:50:39 compute-0 nova_compute[257802]: 2025-10-02 12:50:39.498 2 INFO nova.virt.libvirt.driver [-] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Instance destroyed successfully.
Oct 02 12:50:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:39.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:39 compute-0 nova_compute[257802]: 2025-10-02 12:50:39.878 2 DEBUG nova.virt.libvirt.driver [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:50:39 compute-0 nova_compute[257802]: 2025-10-02 12:50:39.879 2 DEBUG nova.virt.libvirt.driver [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:50:39 compute-0 nova_compute[257802]: 2025-10-02 12:50:39.879 2 DEBUG nova.virt.libvirt.driver [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:50:39 compute-0 nova_compute[257802]: 2025-10-02 12:50:39.879 2 DEBUG nova.virt.libvirt.driver [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] No VIF found with MAC fa:16:3e:26:56:b9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:50:39 compute-0 nova_compute[257802]: 2025-10-02 12:50:39.880 2 INFO nova.virt.libvirt.driver [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Using config drive
Oct 02 12:50:39 compute-0 nova_compute[257802]: 2025-10-02 12:50:39.914 2 DEBUG nova.storage.rbd_utils [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] rbd image 8e2c1007-1d07-434c-8a22-6cb98d903d3c_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:50:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3387736143' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3848565880' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2739: 305 pgs: 305 active+clean; 626 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 420 KiB/s rd, 5.1 MiB/s wr, 156 op/s
Oct 02 12:50:40 compute-0 nova_compute[257802]: 2025-10-02 12:50:40.129 2 INFO nova.compute.manager [None req-84e49c8e-b643-405d-9b93-95efaa0570b4 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Get console output
Oct 02 12:50:40 compute-0 nova_compute[257802]: 2025-10-02 12:50:40.135 20794 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 12:50:40 compute-0 nova_compute[257802]: 2025-10-02 12:50:40.145 2 DEBUG nova.objects.instance [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 8e2c1007-1d07-434c-8a22-6cb98d903d3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:40 compute-0 nova_compute[257802]: 2025-10-02 12:50:40.353 2 DEBUG nova.objects.instance [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lazy-loading 'keypairs' on Instance uuid 8e2c1007-1d07-434c-8a22-6cb98d903d3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:40 compute-0 nova_compute[257802]: 2025-10-02 12:50:40.370 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:40.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:40 compute-0 nova_compute[257802]: 2025-10-02 12:50:40.728 2 INFO nova.compute.manager [None req-5ced178d-caea-492f-845d-2f4fcaf484d0 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Get console output
Oct 02 12:50:40 compute-0 nova_compute[257802]: 2025-10-02 12:50:40.733 20794 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 12:50:40 compute-0 nova_compute[257802]: 2025-10-02 12:50:40.950 2 INFO nova.virt.libvirt.driver [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Creating config drive at /var/lib/nova/instances/8e2c1007-1d07-434c-8a22-6cb98d903d3c/disk.config.rescue
Oct 02 12:50:40 compute-0 nova_compute[257802]: 2025-10-02 12:50:40.955 2 DEBUG oslo_concurrency.processutils [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8e2c1007-1d07-434c-8a22-6cb98d903d3c/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvm42kb6z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:41 compute-0 nova_compute[257802]: 2025-10-02 12:50:41.088 2 DEBUG oslo_concurrency.processutils [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8e2c1007-1d07-434c-8a22-6cb98d903d3c/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvm42kb6z" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:41 compute-0 nova_compute[257802]: 2025-10-02 12:50:41.121 2 DEBUG nova.storage.rbd_utils [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] rbd image 8e2c1007-1d07-434c-8a22-6cb98d903d3c_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:50:41 compute-0 nova_compute[257802]: 2025-10-02 12:50:41.125 2 DEBUG oslo_concurrency.processutils [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8e2c1007-1d07-434c-8a22-6cb98d903d3c/disk.config.rescue 8e2c1007-1d07-434c-8a22-6cb98d903d3c_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:41 compute-0 ceph-mon[73607]: pgmap v2739: 305 pgs: 305 active+clean; 626 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 420 KiB/s rd, 5.1 MiB/s wr, 156 op/s
Oct 02 12:50:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:41.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2740: 305 pgs: 305 active+clean; 626 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 289 KiB/s rd, 3.5 MiB/s wr, 106 op/s
Oct 02 12:50:42 compute-0 nova_compute[257802]: 2025-10-02 12:50:42.292 2 DEBUG oslo_concurrency.processutils [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8e2c1007-1d07-434c-8a22-6cb98d903d3c/disk.config.rescue 8e2c1007-1d07-434c-8a22-6cb98d903d3c_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:42 compute-0 nova_compute[257802]: 2025-10-02 12:50:42.293 2 INFO nova.virt.libvirt.driver [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Deleting local config drive /var/lib/nova/instances/8e2c1007-1d07-434c-8a22-6cb98d903d3c/disk.config.rescue because it was imported into RBD.
Oct 02 12:50:42 compute-0 kernel: tap62f0b94c-3e: entered promiscuous mode
Oct 02 12:50:42 compute-0 ovn_controller[148183]: 2025-10-02T12:50:42Z|00785|binding|INFO|Claiming lport 62f0b94c-3e74-4a7d-b13e-8178d5dbf737 for this chassis.
Oct 02 12:50:42 compute-0 ovn_controller[148183]: 2025-10-02T12:50:42Z|00786|binding|INFO|62f0b94c-3e74-4a7d-b13e-8178d5dbf737: Claiming fa:16:3e:26:56:b9 10.100.0.5
Oct 02 12:50:42 compute-0 nova_compute[257802]: 2025-10-02 12:50:42.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:42 compute-0 NetworkManager[44987]: <info>  [1759409442.3597] manager: (tap62f0b94c-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/356)
Oct 02 12:50:42 compute-0 ovn_controller[148183]: 2025-10-02T12:50:42Z|00787|binding|INFO|Setting lport 62f0b94c-3e74-4a7d-b13e-8178d5dbf737 ovn-installed in OVS
Oct 02 12:50:42 compute-0 nova_compute[257802]: 2025-10-02 12:50:42.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:42 compute-0 nova_compute[257802]: 2025-10-02 12:50:42.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:42 compute-0 systemd-udevd[369389]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:50:42 compute-0 systemd-machined[211836]: New machine qemu-87-instance-000000b1.
Oct 02 12:50:42 compute-0 NetworkManager[44987]: <info>  [1759409442.4008] device (tap62f0b94c-3e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:50:42 compute-0 NetworkManager[44987]: <info>  [1759409442.4020] device (tap62f0b94c-3e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:50:42 compute-0 systemd[1]: Started Virtual Machine qemu-87-instance-000000b1.
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:42.466 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:56:b9 10.100.0.5'], port_security=['fa:16:3e:26:56:b9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '8e2c1007-1d07-434c-8a22-6cb98d903d3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48e4ff16-1388-40c7-a27a-83a3b4869808', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a442bc513e14406b73e96e70396e6c3', 'neutron:revision_number': '5', 'neutron:security_group_ids': '55b83463-a692-41fe-aa59-8c6f6a3385f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd2acef0-eb35-44b4-ad52-c0266ea4784a, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=62f0b94c-3e74-4a7d-b13e-8178d5dbf737) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:50:42 compute-0 ovn_controller[148183]: 2025-10-02T12:50:42Z|00788|binding|INFO|Setting lport 62f0b94c-3e74-4a7d-b13e-8178d5dbf737 up in Southbound
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:42.467 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 62f0b94c-3e74-4a7d-b13e-8178d5dbf737 in datapath 48e4ff16-1388-40c7-a27a-83a3b4869808 bound to our chassis
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:42.468 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 48e4ff16-1388-40c7-a27a-83a3b4869808
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:42.482 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[794fb963-de01-4b22-abba-3c1734a3b4b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:42.484 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap48e4ff16-11 in ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:42.485 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap48e4ff16-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:42.485 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8bd461f7-0225-4b81-b252-2e0396cb3f7e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:42.487 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[11b797c8-86ef-428d-9c55-2779153cc439]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:42.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:42.501 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[c0fd169a-e174-4068-bec3-21f5d0aa1b46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:50:42
Oct 02 12:50:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:50:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:50:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['vms', '.mgr', 'default.rgw.log', 'backups', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'images']
Oct 02 12:50:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:42.518 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[582370ae-dee6-4869-9dba-6ad6d33079aa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:42.553 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[e7dbd7ba-f639-4703-b5ec-bb57deb53784]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:42.559 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f13e15a5-7207-4d4b-ae12-913d312c4888]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:42 compute-0 NetworkManager[44987]: <info>  [1759409442.5609] manager: (tap48e4ff16-10): new Veth device (/org/freedesktop/NetworkManager/Devices/357)
Oct 02 12:50:42 compute-0 systemd-udevd[369392]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:42.618 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[90c28c5a-16d9-4970-b412-cc0278e21957]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:42.622 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[42fdc1cd-d50a-42b5-b71b-5861afe50033]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:42 compute-0 NetworkManager[44987]: <info>  [1759409442.6446] device (tap48e4ff16-10): carrier: link connected
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:42.651 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[1c8b2c6a-fbd9-4691-a9e1-687dbbdfe1ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:42.669 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a5a3ba00-9836-4535-9c3a-e8d7f9c3a7f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48e4ff16-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:53:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 240], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751027, 'reachable_time': 34368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369439, 'error': None, 'target': 'ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:42.683 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f3e0c417-d8f5-473e-8690-7cd658d76b63]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8b:53bc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 751027, 'tstamp': 751027}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369443, 'error': None, 'target': 'ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:42.706 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[33e0d874-2aa5-4f52-8e74-640a6bb41d10]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48e4ff16-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:53:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 240], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751027, 'reachable_time': 34368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 369444, 'error': None, 'target': 'ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:50:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:50:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:50:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:50:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:50:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:42.744 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[15cf837e-f297-40c7-a4e2-75b4d73dbcd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:42.830 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f8c97b39-433e-4397-b6cf-91a87c478604]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:42.832 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48e4ff16-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:42.832 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:42.832 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48e4ff16-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:42 compute-0 nova_compute[257802]: 2025-10-02 12:50:42.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:42 compute-0 NetworkManager[44987]: <info>  [1759409442.8351] manager: (tap48e4ff16-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/358)
Oct 02 12:50:42 compute-0 kernel: tap48e4ff16-10: entered promiscuous mode
Oct 02 12:50:42 compute-0 nova_compute[257802]: 2025-10-02 12:50:42.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:42.838 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap48e4ff16-10, col_values=(('external_ids', {'iface-id': '1fc80788-89b8-413a-b0b0-d36f1a11a2b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:42 compute-0 nova_compute[257802]: 2025-10-02 12:50:42.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:42 compute-0 ovn_controller[148183]: 2025-10-02T12:50:42Z|00789|binding|INFO|Releasing lport 1fc80788-89b8-413a-b0b0-d36f1a11a2b1 from this chassis (sb_readonly=0)
Oct 02 12:50:42 compute-0 nova_compute[257802]: 2025-10-02 12:50:42.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:42 compute-0 nova_compute[257802]: 2025-10-02 12:50:42.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:42.859 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/48e4ff16-1388-40c7-a27a-83a3b4869808.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/48e4ff16-1388-40c7-a27a-83a3b4869808.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:42.860 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[119f53f8-f5b0-4f02-96e8-6089a1d5779f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:42.861 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-48e4ff16-1388-40c7-a27a-83a3b4869808
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/48e4ff16-1388-40c7-a27a-83a3b4869808.pid.haproxy
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 48e4ff16-1388-40c7-a27a-83a3b4869808
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:42.862 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808', 'env', 'PROCESS_TAG=haproxy-48e4ff16-1388-40c7-a27a-83a3b4869808', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/48e4ff16-1388-40c7-a27a-83a3b4869808.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:50:43 compute-0 podman[369517]: 2025-10-02 12:50:43.198373014 +0000 UTC m=+0.025209240 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:50:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:50:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:50:43 compute-0 podman[369517]: 2025-10-02 12:50:43.327336032 +0000 UTC m=+0.154172228 container create 60e167c2d0d4248dce6736ee52bb824fc63b4c8d233c7bad471855758b7b5612 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:50:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:50:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:50:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:50:43 compute-0 systemd[1]: Started libpod-conmon-60e167c2d0d4248dce6736ee52bb824fc63b4c8d233c7bad471855758b7b5612.scope.
Oct 02 12:50:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:50:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb79698fca06718606a75306cfe607a4bae3a6347de915bf3351ba2cad9479de/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:50:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:50:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:43.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:50:43 compute-0 podman[369517]: 2025-10-02 12:50:43.521506182 +0000 UTC m=+0.348342378 container init 60e167c2d0d4248dce6736ee52bb824fc63b4c8d233c7bad471855758b7b5612 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:50:43 compute-0 podman[369517]: 2025-10-02 12:50:43.529023977 +0000 UTC m=+0.355860173 container start 60e167c2d0d4248dce6736ee52bb824fc63b4c8d233c7bad471855758b7b5612 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:50:43 compute-0 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[369534]: [NOTICE]   (369538) : New worker (369540) forked
Oct 02 12:50:43 compute-0 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[369534]: [NOTICE]   (369538) : Loading success.
Oct 02 12:50:43 compute-0 ceph-mon[73607]: pgmap v2740: 305 pgs: 305 active+clean; 626 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 289 KiB/s rd, 3.5 MiB/s wr, 106 op/s
Oct 02 12:50:43 compute-0 nova_compute[257802]: 2025-10-02 12:50:43.618 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Removed pending event for 8e2c1007-1d07-434c-8a22-6cb98d903d3c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:50:43 compute-0 nova_compute[257802]: 2025-10-02 12:50:43.619 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409443.6185684, 8e2c1007-1d07-434c-8a22-6cb98d903d3c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:50:43 compute-0 nova_compute[257802]: 2025-10-02 12:50:43.620 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] VM Resumed (Lifecycle Event)
Oct 02 12:50:43 compute-0 nova_compute[257802]: 2025-10-02 12:50:43.623 2 DEBUG nova.compute.manager [None req-f51ce395-6577-4630-a335-31f86deb2785 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:50:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2741: 305 pgs: 305 active+clean; 627 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.5 MiB/s wr, 154 op/s
Oct 02 12:50:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:50:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:50:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:50:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:50:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:50:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:44 compute-0 nova_compute[257802]: 2025-10-02 12:50:44.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:44 compute-0 nova_compute[257802]: 2025-10-02 12:50:44.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:44.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:44 compute-0 podman[369550]: 2025-10-02 12:50:44.92588622 +0000 UTC m=+0.061109711 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:50:44 compute-0 podman[369552]: 2025-10-02 12:50:44.927780578 +0000 UTC m=+0.056414648 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 02 12:50:44 compute-0 podman[369551]: 2025-10-02 12:50:44.92993675 +0000 UTC m=+0.061360408 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:50:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:45.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:45 compute-0 nova_compute[257802]: 2025-10-02 12:50:45.601 2 DEBUG nova.compute.manager [req-8815cd5c-6d26-4244-9c49-644891d9bc89 req-37bd2ca5-417d-4d86-a3f8-5a7aba51cc72 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Received event network-vif-plugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:45 compute-0 nova_compute[257802]: 2025-10-02 12:50:45.602 2 DEBUG oslo_concurrency.lockutils [req-8815cd5c-6d26-4244-9c49-644891d9bc89 req-37bd2ca5-417d-4d86-a3f8-5a7aba51cc72 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:45 compute-0 nova_compute[257802]: 2025-10-02 12:50:45.603 2 DEBUG oslo_concurrency.lockutils [req-8815cd5c-6d26-4244-9c49-644891d9bc89 req-37bd2ca5-417d-4d86-a3f8-5a7aba51cc72 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:45 compute-0 nova_compute[257802]: 2025-10-02 12:50:45.603 2 DEBUG oslo_concurrency.lockutils [req-8815cd5c-6d26-4244-9c49-644891d9bc89 req-37bd2ca5-417d-4d86-a3f8-5a7aba51cc72 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:45 compute-0 nova_compute[257802]: 2025-10-02 12:50:45.603 2 DEBUG nova.compute.manager [req-8815cd5c-6d26-4244-9c49-644891d9bc89 req-37bd2ca5-417d-4d86-a3f8-5a7aba51cc72 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] No waiting events found dispatching network-vif-plugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:50:45 compute-0 nova_compute[257802]: 2025-10-02 12:50:45.604 2 WARNING nova.compute.manager [req-8815cd5c-6d26-4244-9c49-644891d9bc89 req-37bd2ca5-417d-4d86-a3f8-5a7aba51cc72 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Received unexpected event network-vif-plugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 for instance with vm_state active and task_state rescuing.
Oct 02 12:50:45 compute-0 nova_compute[257802]: 2025-10-02 12:50:45.649 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:50:45 compute-0 nova_compute[257802]: 2025-10-02 12:50:45.654 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:50:45 compute-0 ceph-mon[73607]: pgmap v2741: 305 pgs: 305 active+clean; 627 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.5 MiB/s wr, 154 op/s
Oct 02 12:50:45 compute-0 nova_compute[257802]: 2025-10-02 12:50:45.767 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409443.6201096, 8e2c1007-1d07-434c-8a22-6cb98d903d3c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:50:45 compute-0 nova_compute[257802]: 2025-10-02 12:50:45.767 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] VM Started (Lifecycle Event)
Oct 02 12:50:45 compute-0 nova_compute[257802]: 2025-10-02 12:50:45.901 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:50:45 compute-0 nova_compute[257802]: 2025-10-02 12:50:45.905 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:50:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2742: 305 pgs: 305 active+clean; 627 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.6 MiB/s wr, 183 op/s
Oct 02 12:50:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:46.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:46 compute-0 ceph-mon[73607]: pgmap v2742: 305 pgs: 305 active+clean; 627 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.6 MiB/s wr, 183 op/s
Oct 02 12:50:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:47.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:47 compute-0 nova_compute[257802]: 2025-10-02 12:50:47.861 2 DEBUG nova.compute.manager [req-b48fbc96-e48d-4827-bfd0-025b01db27a2 req-a8bd0957-98de-4c7b-8148-8629f2498e00 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Received event network-changed-76fcd8fc-8215-4f14-b41b-aaa8327ae16e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:47 compute-0 nova_compute[257802]: 2025-10-02 12:50:47.861 2 DEBUG nova.compute.manager [req-b48fbc96-e48d-4827-bfd0-025b01db27a2 req-a8bd0957-98de-4c7b-8148-8629f2498e00 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Refreshing instance network info cache due to event network-changed-76fcd8fc-8215-4f14-b41b-aaa8327ae16e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:50:47 compute-0 nova_compute[257802]: 2025-10-02 12:50:47.862 2 DEBUG oslo_concurrency.lockutils [req-b48fbc96-e48d-4827-bfd0-025b01db27a2 req-a8bd0957-98de-4c7b-8148-8629f2498e00 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:50:47 compute-0 nova_compute[257802]: 2025-10-02 12:50:47.862 2 DEBUG oslo_concurrency.lockutils [req-b48fbc96-e48d-4827-bfd0-025b01db27a2 req-a8bd0957-98de-4c7b-8148-8629f2498e00 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:50:47 compute-0 nova_compute[257802]: 2025-10-02 12:50:47.862 2 DEBUG nova.network.neutron [req-b48fbc96-e48d-4827-bfd0-025b01db27a2 req-a8bd0957-98de-4c7b-8148-8629f2498e00 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Refreshing network info cache for port 76fcd8fc-8215-4f14-b41b-aaa8327ae16e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:50:47 compute-0 nova_compute[257802]: 2025-10-02 12:50:47.907 2 DEBUG oslo_concurrency.lockutils [None req-f185a0d6-b059-4475-b6d9-7abff38b6224 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:47 compute-0 nova_compute[257802]: 2025-10-02 12:50:47.908 2 DEBUG oslo_concurrency.lockutils [None req-f185a0d6-b059-4475-b6d9-7abff38b6224 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:47 compute-0 nova_compute[257802]: 2025-10-02 12:50:47.908 2 DEBUG oslo_concurrency.lockutils [None req-f185a0d6-b059-4475-b6d9-7abff38b6224 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:47 compute-0 nova_compute[257802]: 2025-10-02 12:50:47.908 2 DEBUG oslo_concurrency.lockutils [None req-f185a0d6-b059-4475-b6d9-7abff38b6224 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:47 compute-0 nova_compute[257802]: 2025-10-02 12:50:47.909 2 DEBUG oslo_concurrency.lockutils [None req-f185a0d6-b059-4475-b6d9-7abff38b6224 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:47 compute-0 nova_compute[257802]: 2025-10-02 12:50:47.910 2 INFO nova.compute.manager [None req-f185a0d6-b059-4475-b6d9-7abff38b6224 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Terminating instance
Oct 02 12:50:47 compute-0 nova_compute[257802]: 2025-10-02 12:50:47.910 2 DEBUG nova.compute.manager [None req-f185a0d6-b059-4475-b6d9-7abff38b6224 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:50:47 compute-0 nova_compute[257802]: 2025-10-02 12:50:47.989 2 DEBUG nova.compute.manager [req-29c5d4c6-a0ee-4469-aa23-faafab30ab45 req-ea0b4168-9b93-4a8a-93ca-e8b631e9e620 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Received event network-vif-plugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:47 compute-0 nova_compute[257802]: 2025-10-02 12:50:47.990 2 DEBUG oslo_concurrency.lockutils [req-29c5d4c6-a0ee-4469-aa23-faafab30ab45 req-ea0b4168-9b93-4a8a-93ca-e8b631e9e620 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:47 compute-0 nova_compute[257802]: 2025-10-02 12:50:47.990 2 DEBUG oslo_concurrency.lockutils [req-29c5d4c6-a0ee-4469-aa23-faafab30ab45 req-ea0b4168-9b93-4a8a-93ca-e8b631e9e620 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:47 compute-0 nova_compute[257802]: 2025-10-02 12:50:47.991 2 DEBUG oslo_concurrency.lockutils [req-29c5d4c6-a0ee-4469-aa23-faafab30ab45 req-ea0b4168-9b93-4a8a-93ca-e8b631e9e620 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:47 compute-0 nova_compute[257802]: 2025-10-02 12:50:47.991 2 DEBUG nova.compute.manager [req-29c5d4c6-a0ee-4469-aa23-faafab30ab45 req-ea0b4168-9b93-4a8a-93ca-e8b631e9e620 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] No waiting events found dispatching network-vif-plugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:50:47 compute-0 nova_compute[257802]: 2025-10-02 12:50:47.991 2 WARNING nova.compute.manager [req-29c5d4c6-a0ee-4469-aa23-faafab30ab45 req-ea0b4168-9b93-4a8a-93ca-e8b631e9e620 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Received unexpected event network-vif-plugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 for instance with vm_state rescued and task_state None.
Oct 02 12:50:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2743: 305 pgs: 305 active+clean; 627 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 43 KiB/s wr, 129 op/s
Oct 02 12:50:48 compute-0 kernel: tap76fcd8fc-82 (unregistering): left promiscuous mode
Oct 02 12:50:48 compute-0 NetworkManager[44987]: <info>  [1759409448.2652] device (tap76fcd8fc-82): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:50:48 compute-0 ovn_controller[148183]: 2025-10-02T12:50:48Z|00790|binding|INFO|Releasing lport 76fcd8fc-8215-4f14-b41b-aaa8327ae16e from this chassis (sb_readonly=0)
Oct 02 12:50:48 compute-0 ovn_controller[148183]: 2025-10-02T12:50:48Z|00791|binding|INFO|Setting lport 76fcd8fc-8215-4f14-b41b-aaa8327ae16e down in Southbound
Oct 02 12:50:48 compute-0 ovn_controller[148183]: 2025-10-02T12:50:48Z|00792|binding|INFO|Removing iface tap76fcd8fc-82 ovn-installed in OVS
Oct 02 12:50:48 compute-0 nova_compute[257802]: 2025-10-02 12:50:48.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:48 compute-0 nova_compute[257802]: 2025-10-02 12:50:48.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:48.308 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:44:ff 10.100.0.7'], port_security=['fa:16:3e:e4:44:ff 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a24bd64e-e8fc-466a-bb2d-8ba89573eddd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '72dccc64-4bf8-4d82-ae40-bb1af4f9b50a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=afda8bd0-d3cf-487a-af6f-4c4fa9c53a57, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=76fcd8fc-8215-4f14-b41b-aaa8327ae16e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:50:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:48.310 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 76fcd8fc-8215-4f14-b41b-aaa8327ae16e in datapath a24bd64e-e8fc-466a-bb2d-8ba89573eddd unbound from our chassis
Oct 02 12:50:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:48.311 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a24bd64e-e8fc-466a-bb2d-8ba89573eddd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:50:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:48.312 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9c8718f6-8886-4af8-a51b-4e482b81924e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:48.312 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a24bd64e-e8fc-466a-bb2d-8ba89573eddd namespace which is not needed anymore
Oct 02 12:50:48 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000b2.scope: Deactivated successfully.
Oct 02 12:50:48 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000b2.scope: Consumed 14.671s CPU time.
Oct 02 12:50:48 compute-0 systemd-machined[211836]: Machine qemu-86-instance-000000b2 terminated.
Oct 02 12:50:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:48.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:48 compute-0 nova_compute[257802]: 2025-10-02 12:50:48.550 2 INFO nova.virt.libvirt.driver [-] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Instance destroyed successfully.
Oct 02 12:50:48 compute-0 nova_compute[257802]: 2025-10-02 12:50:48.551 2 DEBUG nova.objects.instance [None req-f185a0d6-b059-4475-b6d9-7abff38b6224 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'resources' on Instance uuid 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:48 compute-0 neutron-haproxy-ovnmeta-a24bd64e-e8fc-466a-bb2d-8ba89573eddd[367947]: [NOTICE]   (367958) : haproxy version is 2.8.14-c23fe91
Oct 02 12:50:48 compute-0 neutron-haproxy-ovnmeta-a24bd64e-e8fc-466a-bb2d-8ba89573eddd[367947]: [NOTICE]   (367958) : path to executable is /usr/sbin/haproxy
Oct 02 12:50:48 compute-0 neutron-haproxy-ovnmeta-a24bd64e-e8fc-466a-bb2d-8ba89573eddd[367947]: [WARNING]  (367958) : Exiting Master process...
Oct 02 12:50:48 compute-0 neutron-haproxy-ovnmeta-a24bd64e-e8fc-466a-bb2d-8ba89573eddd[367947]: [ALERT]    (367958) : Current worker (367963) exited with code 143 (Terminated)
Oct 02 12:50:48 compute-0 neutron-haproxy-ovnmeta-a24bd64e-e8fc-466a-bb2d-8ba89573eddd[367947]: [WARNING]  (367958) : All workers exited. Exiting... (0)
Oct 02 12:50:48 compute-0 systemd[1]: libpod-7623715c2905ca0b1a16e0cc3a89507029478a43f222b08ad55e469871d1cf20.scope: Deactivated successfully.
Oct 02 12:50:48 compute-0 podman[369627]: 2025-10-02 12:50:48.578598779 +0000 UTC m=+0.180341951 container died 7623715c2905ca0b1a16e0cc3a89507029478a43f222b08ad55e469871d1cf20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a24bd64e-e8fc-466a-bb2d-8ba89573eddd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:50:48 compute-0 nova_compute[257802]: 2025-10-02 12:50:48.684 2 DEBUG nova.virt.libvirt.vif [None req-f185a0d6-b059-4475-b6d9-7abff38b6224 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:50:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2028645950',display_name='tempest-TestNetworkBasicOps-server-2028645950',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2028645950',id=178,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJxXgh9Gujh9k8v28uC2r2aUWZ4hdScdX0MxzNItcmpV/cUmfpYjCMJUsWbygZcDyKss/9aG0cpN6ekTQGyiz+Iza/UQBZyY0NPcxBcPbDwZL6Km1biqqXoyNPtMEgGVA==',key_name='tempest-TestNetworkBasicOps-1030994634',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:50:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-1f3otf55',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:50:17Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "76fcd8fc-8215-4f14-b41b-aaa8327ae16e", "address": "fa:16:3e:e4:44:ff", "network": {"id": "a24bd64e-e8fc-466a-bb2d-8ba89573eddd", "bridge": "br-int", "label": "tempest-network-smoke--2082152244", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76fcd8fc-82", "ovs_interfaceid": "76fcd8fc-8215-4f14-b41b-aaa8327ae16e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:50:48 compute-0 nova_compute[257802]: 2025-10-02 12:50:48.685 2 DEBUG nova.network.os_vif_util [None req-f185a0d6-b059-4475-b6d9-7abff38b6224 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "76fcd8fc-8215-4f14-b41b-aaa8327ae16e", "address": "fa:16:3e:e4:44:ff", "network": {"id": "a24bd64e-e8fc-466a-bb2d-8ba89573eddd", "bridge": "br-int", "label": "tempest-network-smoke--2082152244", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76fcd8fc-82", "ovs_interfaceid": "76fcd8fc-8215-4f14-b41b-aaa8327ae16e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:50:48 compute-0 nova_compute[257802]: 2025-10-02 12:50:48.686 2 DEBUG nova.network.os_vif_util [None req-f185a0d6-b059-4475-b6d9-7abff38b6224 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e4:44:ff,bridge_name='br-int',has_traffic_filtering=True,id=76fcd8fc-8215-4f14-b41b-aaa8327ae16e,network=Network(a24bd64e-e8fc-466a-bb2d-8ba89573eddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76fcd8fc-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:50:48 compute-0 nova_compute[257802]: 2025-10-02 12:50:48.686 2 DEBUG os_vif [None req-f185a0d6-b059-4475-b6d9-7abff38b6224 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:44:ff,bridge_name='br-int',has_traffic_filtering=True,id=76fcd8fc-8215-4f14-b41b-aaa8327ae16e,network=Network(a24bd64e-e8fc-466a-bb2d-8ba89573eddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76fcd8fc-82') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:50:48 compute-0 nova_compute[257802]: 2025-10-02 12:50:48.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:48 compute-0 nova_compute[257802]: 2025-10-02 12:50:48.688 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76fcd8fc-82, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:48 compute-0 nova_compute[257802]: 2025-10-02 12:50:48.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:48 compute-0 nova_compute[257802]: 2025-10-02 12:50:48.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:48 compute-0 nova_compute[257802]: 2025-10-02 12:50:48.694 2 INFO os_vif [None req-f185a0d6-b059-4475-b6d9-7abff38b6224 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:44:ff,bridge_name='br-int',has_traffic_filtering=True,id=76fcd8fc-8215-4f14-b41b-aaa8327ae16e,network=Network(a24bd64e-e8fc-466a-bb2d-8ba89573eddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76fcd8fc-82')
Oct 02 12:50:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7623715c2905ca0b1a16e0cc3a89507029478a43f222b08ad55e469871d1cf20-userdata-shm.mount: Deactivated successfully.
Oct 02 12:50:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdf388b6297b4653671703a10752460c1a0cd6fcb364e330bd02ac15f0b688fb-merged.mount: Deactivated successfully.
Oct 02 12:50:48 compute-0 podman[369627]: 2025-10-02 12:50:48.860090044 +0000 UTC m=+0.461833196 container cleanup 7623715c2905ca0b1a16e0cc3a89507029478a43f222b08ad55e469871d1cf20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a24bd64e-e8fc-466a-bb2d-8ba89573eddd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:50:48 compute-0 podman[369688]: 2025-10-02 12:50:48.945676826 +0000 UTC m=+0.066144636 container remove 7623715c2905ca0b1a16e0cc3a89507029478a43f222b08ad55e469871d1cf20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a24bd64e-e8fc-466a-bb2d-8ba89573eddd, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:50:48 compute-0 systemd[1]: libpod-conmon-7623715c2905ca0b1a16e0cc3a89507029478a43f222b08ad55e469871d1cf20.scope: Deactivated successfully.
Oct 02 12:50:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:48.951 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ae7a2c9d-d01f-4457-bf1c-7f450827b0ab]: (4, ('Thu Oct  2 12:50:48 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a24bd64e-e8fc-466a-bb2d-8ba89573eddd (7623715c2905ca0b1a16e0cc3a89507029478a43f222b08ad55e469871d1cf20)\n7623715c2905ca0b1a16e0cc3a89507029478a43f222b08ad55e469871d1cf20\nThu Oct  2 12:50:48 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a24bd64e-e8fc-466a-bb2d-8ba89573eddd (7623715c2905ca0b1a16e0cc3a89507029478a43f222b08ad55e469871d1cf20)\n7623715c2905ca0b1a16e0cc3a89507029478a43f222b08ad55e469871d1cf20\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:48.954 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[94474760-8cfc-4101-9b03-83284193f9a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:48.955 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa24bd64e-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:48 compute-0 nova_compute[257802]: 2025-10-02 12:50:48.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:48 compute-0 kernel: tapa24bd64e-e0: left promiscuous mode
Oct 02 12:50:48 compute-0 nova_compute[257802]: 2025-10-02 12:50:48.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:48 compute-0 nova_compute[257802]: 2025-10-02 12:50:48.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:48.974 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[914ed0df-7705-446f-90e4-199a4128827f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:49.004 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6846ea6d-c0f1-4bc0-995f-0c97def275cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:49.005 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[da58ef53-1a79-42ae-a44c-e87bfb65e829]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:49.023 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[85ed2047-6474-468b-ad8c-0223bbc1756c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 748333, 'reachable_time': 34082, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369704, 'error': None, 'target': 'ovnmeta-a24bd64e-e8fc-466a-bb2d-8ba89573eddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:49 compute-0 systemd[1]: run-netns-ovnmeta\x2da24bd64e\x2de8fc\x2d466a\x2dbb2d\x2d8ba89573eddd.mount: Deactivated successfully.
Oct 02 12:50:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:49.026 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a24bd64e-e8fc-466a-bb2d-8ba89573eddd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:50:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:49.026 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[ce208d5b-7cc4-4124-a17c-b7023a0d7e8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:50:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Cumulative writes: 13K writes, 60K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s
                                           Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1683 writes, 7700 keys, 1683 commit groups, 1.0 writes per commit group, ingest: 10.84 MB, 0.02 MB/s
                                           Interval WAL: 1684 writes, 1684 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     40.9      1.95              0.22        41    0.048       0      0       0.0       0.0
                                             L6      1/0   11.10 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   4.9     97.5     83.0      4.75              1.12        40    0.119    270K    21K       0.0       0.0
                                            Sum      1/0   11.10 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   5.9     69.1     70.8      6.70              1.34        81    0.083    270K    21K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.5     72.8     73.0      1.24              0.25        14    0.088     64K   3660       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0     97.5     83.0      4.75              1.12        40    0.119    270K    21K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     41.0      1.95              0.22        40    0.049       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.078, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.46 GB write, 0.10 MB/s write, 0.45 GB read, 0.10 MB/s read, 6.7 seconds
                                           Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.09 GB read, 0.15 MB/s read, 1.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5581be5e11f0#2 capacity: 304.00 MB usage: 49.61 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000282 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2892,47.59 MB,15.6538%) FilterBlock(82,762.55 KB,0.244959%) IndexBlock(82,1.27 MB,0.418889%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 12:50:49 compute-0 nova_compute[257802]: 2025-10-02 12:50:49.115 2 DEBUG nova.compute.manager [req-0d4c74ec-a1ac-4170-adbe-56b814ad062b req-48df15e4-961d-479c-90c3-a0f638a1c842 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Received event network-vif-unplugged-76fcd8fc-8215-4f14-b41b-aaa8327ae16e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:49 compute-0 nova_compute[257802]: 2025-10-02 12:50:49.116 2 DEBUG oslo_concurrency.lockutils [req-0d4c74ec-a1ac-4170-adbe-56b814ad062b req-48df15e4-961d-479c-90c3-a0f638a1c842 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:49 compute-0 nova_compute[257802]: 2025-10-02 12:50:49.116 2 DEBUG oslo_concurrency.lockutils [req-0d4c74ec-a1ac-4170-adbe-56b814ad062b req-48df15e4-961d-479c-90c3-a0f638a1c842 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:49 compute-0 nova_compute[257802]: 2025-10-02 12:50:49.117 2 DEBUG oslo_concurrency.lockutils [req-0d4c74ec-a1ac-4170-adbe-56b814ad062b req-48df15e4-961d-479c-90c3-a0f638a1c842 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:49 compute-0 nova_compute[257802]: 2025-10-02 12:50:49.117 2 DEBUG nova.compute.manager [req-0d4c74ec-a1ac-4170-adbe-56b814ad062b req-48df15e4-961d-479c-90c3-a0f638a1c842 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] No waiting events found dispatching network-vif-unplugged-76fcd8fc-8215-4f14-b41b-aaa8327ae16e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:50:49 compute-0 nova_compute[257802]: 2025-10-02 12:50:49.117 2 DEBUG nova.compute.manager [req-0d4c74ec-a1ac-4170-adbe-56b814ad062b req-48df15e4-961d-479c-90c3-a0f638a1c842 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Received event network-vif-unplugged-76fcd8fc-8215-4f14-b41b-aaa8327ae16e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:50:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:49 compute-0 nova_compute[257802]: 2025-10-02 12:50:49.450 2 INFO nova.compute.manager [None req-9628729b-c897-41a4-9362-4a047d9f13c1 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Unrescuing
Oct 02 12:50:49 compute-0 nova_compute[257802]: 2025-10-02 12:50:49.452 2 DEBUG oslo_concurrency.lockutils [None req-9628729b-c897-41a4-9362-4a047d9f13c1 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquiring lock "refresh_cache-8e2c1007-1d07-434c-8a22-6cb98d903d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:50:49 compute-0 nova_compute[257802]: 2025-10-02 12:50:49.452 2 DEBUG oslo_concurrency.lockutils [None req-9628729b-c897-41a4-9362-4a047d9f13c1 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquired lock "refresh_cache-8e2c1007-1d07-434c-8a22-6cb98d903d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:50:49 compute-0 nova_compute[257802]: 2025-10-02 12:50:49.453 2 DEBUG nova.network.neutron [None req-9628729b-c897-41a4-9362-4a047d9f13c1 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:50:49 compute-0 nova_compute[257802]: 2025-10-02 12:50:49.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:50:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:49.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:50:49 compute-0 nova_compute[257802]: 2025-10-02 12:50:49.552 2 DEBUG nova.network.neutron [req-b48fbc96-e48d-4827-bfd0-025b01db27a2 req-a8bd0957-98de-4c7b-8148-8629f2498e00 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Updated VIF entry in instance network info cache for port 76fcd8fc-8215-4f14-b41b-aaa8327ae16e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:50:49 compute-0 nova_compute[257802]: 2025-10-02 12:50:49.553 2 DEBUG nova.network.neutron [req-b48fbc96-e48d-4827-bfd0-025b01db27a2 req-a8bd0957-98de-4c7b-8148-8629f2498e00 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Updating instance_info_cache with network_info: [{"id": "76fcd8fc-8215-4f14-b41b-aaa8327ae16e", "address": "fa:16:3e:e4:44:ff", "network": {"id": "a24bd64e-e8fc-466a-bb2d-8ba89573eddd", "bridge": "br-int", "label": "tempest-network-smoke--2082152244", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76fcd8fc-82", "ovs_interfaceid": "76fcd8fc-8215-4f14-b41b-aaa8327ae16e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:49 compute-0 nova_compute[257802]: 2025-10-02 12:50:49.686 2 DEBUG oslo_concurrency.lockutils [req-b48fbc96-e48d-4827-bfd0-025b01db27a2 req-a8bd0957-98de-4c7b-8148-8629f2498e00 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:50:49 compute-0 ceph-mon[73607]: pgmap v2743: 305 pgs: 305 active+clean; 627 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 43 KiB/s wr, 129 op/s
Oct 02 12:50:49 compute-0 podman[369705]: 2025-10-02 12:50:49.946695556 +0000 UTC m=+0.084610639 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:50:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2744: 305 pgs: 305 active+clean; 627 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 45 KiB/s wr, 177 op/s
Oct 02 12:50:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:50.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:50 compute-0 ceph-mon[73607]: pgmap v2744: 305 pgs: 305 active+clean; 627 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 45 KiB/s wr, 177 op/s
Oct 02 12:50:51 compute-0 nova_compute[257802]: 2025-10-02 12:50:51.429 2 DEBUG nova.network.neutron [None req-9628729b-c897-41a4-9362-4a047d9f13c1 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Updating instance_info_cache with network_info: [{"id": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "address": "fa:16:3e:26:56:b9", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f0b94c-3e", "ovs_interfaceid": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:51.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:51 compute-0 nova_compute[257802]: 2025-10-02 12:50:51.737 2 INFO nova.virt.libvirt.driver [None req-f185a0d6-b059-4475-b6d9-7abff38b6224 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Deleting instance files /var/lib/nova/instances/2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1_del
Oct 02 12:50:51 compute-0 nova_compute[257802]: 2025-10-02 12:50:51.738 2 INFO nova.virt.libvirt.driver [None req-f185a0d6-b059-4475-b6d9-7abff38b6224 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Deletion of /var/lib/nova/instances/2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1_del complete
Oct 02 12:50:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2745: 305 pgs: 305 active+clean; 627 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 29 KiB/s wr, 155 op/s
Oct 02 12:50:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:52.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:52 compute-0 nova_compute[257802]: 2025-10-02 12:50:52.821 2 DEBUG nova.compute.manager [req-a2d82470-d2d7-492f-8811-bb2bddb5bd6c req-7fc2703f-947b-4a34-bd2e-8d1f28b74e13 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Received event network-vif-plugged-76fcd8fc-8215-4f14-b41b-aaa8327ae16e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:52 compute-0 nova_compute[257802]: 2025-10-02 12:50:52.823 2 DEBUG oslo_concurrency.lockutils [req-a2d82470-d2d7-492f-8811-bb2bddb5bd6c req-7fc2703f-947b-4a34-bd2e-8d1f28b74e13 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:52 compute-0 nova_compute[257802]: 2025-10-02 12:50:52.823 2 DEBUG oslo_concurrency.lockutils [req-a2d82470-d2d7-492f-8811-bb2bddb5bd6c req-7fc2703f-947b-4a34-bd2e-8d1f28b74e13 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:52 compute-0 nova_compute[257802]: 2025-10-02 12:50:52.823 2 DEBUG oslo_concurrency.lockutils [req-a2d82470-d2d7-492f-8811-bb2bddb5bd6c req-7fc2703f-947b-4a34-bd2e-8d1f28b74e13 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:52 compute-0 nova_compute[257802]: 2025-10-02 12:50:52.824 2 DEBUG nova.compute.manager [req-a2d82470-d2d7-492f-8811-bb2bddb5bd6c req-7fc2703f-947b-4a34-bd2e-8d1f28b74e13 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] No waiting events found dispatching network-vif-plugged-76fcd8fc-8215-4f14-b41b-aaa8327ae16e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:50:52 compute-0 nova_compute[257802]: 2025-10-02 12:50:52.824 2 WARNING nova.compute.manager [req-a2d82470-d2d7-492f-8811-bb2bddb5bd6c req-7fc2703f-947b-4a34-bd2e-8d1f28b74e13 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Received unexpected event network-vif-plugged-76fcd8fc-8215-4f14-b41b-aaa8327ae16e for instance with vm_state active and task_state deleting.
Oct 02 12:50:53 compute-0 nova_compute[257802]: 2025-10-02 12:50:53.010 2 DEBUG oslo_concurrency.lockutils [None req-9628729b-c897-41a4-9362-4a047d9f13c1 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Releasing lock "refresh_cache-8e2c1007-1d07-434c-8a22-6cb98d903d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:50:53 compute-0 nova_compute[257802]: 2025-10-02 12:50:53.011 2 DEBUG nova.objects.instance [None req-9628729b-c897-41a4-9362-4a047d9f13c1 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lazy-loading 'flavor' on Instance uuid 8e2c1007-1d07-434c-8a22-6cb98d903d3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:53.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:53 compute-0 nova_compute[257802]: 2025-10-02 12:50:53.595 2 INFO nova.compute.manager [None req-f185a0d6-b059-4475-b6d9-7abff38b6224 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Took 5.68 seconds to destroy the instance on the hypervisor.
Oct 02 12:50:53 compute-0 nova_compute[257802]: 2025-10-02 12:50:53.596 2 DEBUG oslo.service.loopingcall [None req-f185a0d6-b059-4475-b6d9-7abff38b6224 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:50:53 compute-0 nova_compute[257802]: 2025-10-02 12:50:53.596 2 DEBUG nova.compute.manager [-] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:50:53 compute-0 nova_compute[257802]: 2025-10-02 12:50:53.596 2 DEBUG nova.network.neutron [-] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:50:53 compute-0 nova_compute[257802]: 2025-10-02 12:50:53.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:53 compute-0 ceph-mon[73607]: pgmap v2745: 305 pgs: 305 active+clean; 627 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 29 KiB/s wr, 155 op/s
Oct 02 12:50:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2746: 305 pgs: 305 active+clean; 598 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 29 KiB/s wr, 171 op/s
Oct 02 12:50:54 compute-0 kernel: tap62f0b94c-3e (unregistering): left promiscuous mode
Oct 02 12:50:54 compute-0 NetworkManager[44987]: <info>  [1759409454.1762] device (tap62f0b94c-3e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:50:54 compute-0 nova_compute[257802]: 2025-10-02 12:50:54.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:54 compute-0 nova_compute[257802]: 2025-10-02 12:50:54.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:54 compute-0 ovn_controller[148183]: 2025-10-02T12:50:54Z|00793|binding|INFO|Releasing lport 62f0b94c-3e74-4a7d-b13e-8178d5dbf737 from this chassis (sb_readonly=0)
Oct 02 12:50:54 compute-0 ovn_controller[148183]: 2025-10-02T12:50:54Z|00794|binding|INFO|Setting lport 62f0b94c-3e74-4a7d-b13e-8178d5dbf737 down in Southbound
Oct 02 12:50:54 compute-0 ovn_controller[148183]: 2025-10-02T12:50:54Z|00795|binding|INFO|Removing iface tap62f0b94c-3e ovn-installed in OVS
Oct 02 12:50:54 compute-0 nova_compute[257802]: 2025-10-02 12:50:54.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:54 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000b1.scope: Deactivated successfully.
Oct 02 12:50:54 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000b1.scope: Consumed 10.749s CPU time.
Oct 02 12:50:54 compute-0 systemd-machined[211836]: Machine qemu-87-instance-000000b1 terminated.
Oct 02 12:50:54 compute-0 nova_compute[257802]: 2025-10-02 12:50:54.442 2 INFO nova.virt.libvirt.driver [-] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Instance destroyed successfully.
Oct 02 12:50:54 compute-0 nova_compute[257802]: 2025-10-02 12:50:54.443 2 DEBUG nova.objects.instance [None req-9628729b-c897-41a4-9362-4a047d9f13c1 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lazy-loading 'numa_topology' on Instance uuid 8e2c1007-1d07-434c-8a22-6cb98d903d3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:54.445 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:56:b9 10.100.0.5'], port_security=['fa:16:3e:26:56:b9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '8e2c1007-1d07-434c-8a22-6cb98d903d3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48e4ff16-1388-40c7-a27a-83a3b4869808', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a442bc513e14406b73e96e70396e6c3', 'neutron:revision_number': '6', 'neutron:security_group_ids': '55b83463-a692-41fe-aa59-8c6f6a3385f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd2acef0-eb35-44b4-ad52-c0266ea4784a, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=62f0b94c-3e74-4a7d-b13e-8178d5dbf737) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:50:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:54.447 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 62f0b94c-3e74-4a7d-b13e-8178d5dbf737 in datapath 48e4ff16-1388-40c7-a27a-83a3b4869808 unbound from our chassis
Oct 02 12:50:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:54.448 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 48e4ff16-1388-40c7-a27a-83a3b4869808, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:50:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:54.450 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[004fc646-925c-4756-bb5f-6c7057f94be2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:54.450 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808 namespace which is not needed anymore
Oct 02 12:50:54 compute-0 nova_compute[257802]: 2025-10-02 12:50:54.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:50:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:54.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:50:54 compute-0 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[369534]: [NOTICE]   (369538) : haproxy version is 2.8.14-c23fe91
Oct 02 12:50:54 compute-0 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[369534]: [NOTICE]   (369538) : path to executable is /usr/sbin/haproxy
Oct 02 12:50:54 compute-0 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[369534]: [WARNING]  (369538) : Exiting Master process...
Oct 02 12:50:54 compute-0 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[369534]: [ALERT]    (369538) : Current worker (369540) exited with code 143 (Terminated)
Oct 02 12:50:54 compute-0 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[369534]: [WARNING]  (369538) : All workers exited. Exiting... (0)
Oct 02 12:50:54 compute-0 systemd[1]: libpod-60e167c2d0d4248dce6736ee52bb824fc63b4c8d233c7bad471855758b7b5612.scope: Deactivated successfully.
Oct 02 12:50:54 compute-0 podman[369770]: 2025-10-02 12:50:54.623344687 +0000 UTC m=+0.090425342 container died 60e167c2d0d4248dce6736ee52bb824fc63b4c8d233c7bad471855758b7b5612 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:50:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:50:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:50:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:50:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:50:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009012801274892984 of space, bias 1.0, pg target 2.703840382467895 quantized to 32 (current 32)
Oct 02 12:50:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:50:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6441557469058254 quantized to 32 (current 32)
Oct 02 12:50:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:50:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:50:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:50:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00486990798901917 of space, bias 1.0, pg target 1.4512325807277127 quantized to 32 (current 32)
Oct 02 12:50:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:50:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 02 12:50:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:50:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:50:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:50:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.0002699042085427136 quantized to 32 (current 32)
Oct 02 12:50:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:50:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Oct 02 12:50:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:50:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:50:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:50:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Oct 02 12:50:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-60e167c2d0d4248dce6736ee52bb824fc63b4c8d233c7bad471855758b7b5612-userdata-shm.mount: Deactivated successfully.
Oct 02 12:50:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb79698fca06718606a75306cfe607a4bae3a6347de915bf3351ba2cad9479de-merged.mount: Deactivated successfully.
Oct 02 12:50:54 compute-0 podman[369770]: 2025-10-02 12:50:54.797385013 +0000 UTC m=+0.264465658 container cleanup 60e167c2d0d4248dce6736ee52bb824fc63b4c8d233c7bad471855758b7b5612 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:50:54 compute-0 systemd[1]: libpod-conmon-60e167c2d0d4248dce6736ee52bb824fc63b4c8d233c7bad471855758b7b5612.scope: Deactivated successfully.
Oct 02 12:50:55 compute-0 ceph-mon[73607]: pgmap v2746: 305 pgs: 305 active+clean; 598 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 29 KiB/s wr, 171 op/s
Oct 02 12:50:55 compute-0 podman[369800]: 2025-10-02 12:50:55.184231266 +0000 UTC m=+0.366679649 container remove 60e167c2d0d4248dce6736ee52bb824fc63b4c8d233c7bad471855758b7b5612 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.191 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5f359900-8d19-4009-abfe-30b09928fd85]: (4, ('Thu Oct  2 12:50:54 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808 (60e167c2d0d4248dce6736ee52bb824fc63b4c8d233c7bad471855758b7b5612)\n60e167c2d0d4248dce6736ee52bb824fc63b4c8d233c7bad471855758b7b5612\nThu Oct  2 12:50:54 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808 (60e167c2d0d4248dce6736ee52bb824fc63b4c8d233c7bad471855758b7b5612)\n60e167c2d0d4248dce6736ee52bb824fc63b4c8d233c7bad471855758b7b5612\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.193 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4e793a95-2bee-4770-b6a4-5037c900f460]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.194 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48e4ff16-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:55 compute-0 nova_compute[257802]: 2025-10-02 12:50:55.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:55 compute-0 kernel: tap48e4ff16-10: left promiscuous mode
Oct 02 12:50:55 compute-0 nova_compute[257802]: 2025-10-02 12:50:55.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.218 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1ae655c9-2ee7-48ad-8780-1b270325b0a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.248 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1924f6e3-0fe9-438d-951f-1eca6b872fd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.250 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9dced1d7-3750-4058-a9ca-5690445f3a16]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.267 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[84d17ef2-14a2-4640-b803-f03c545f249f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751017, 'reachable_time': 42932, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369821, 'error': None, 'target': 'ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:55 compute-0 systemd[1]: run-netns-ovnmeta\x2d48e4ff16\x2d1388\x2d40c7\x2da27a\x2d83a3b4869808.mount: Deactivated successfully.
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.269 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.270 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[5672ee84-0500-4f06-8d8a-5ca79cb5d9fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:55 compute-0 kernel: tap62f0b94c-3e: entered promiscuous mode
Oct 02 12:50:55 compute-0 systemd-udevd[369737]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:50:55 compute-0 ovn_controller[148183]: 2025-10-02T12:50:55Z|00796|binding|INFO|Claiming lport 62f0b94c-3e74-4a7d-b13e-8178d5dbf737 for this chassis.
Oct 02 12:50:55 compute-0 ovn_controller[148183]: 2025-10-02T12:50:55Z|00797|binding|INFO|62f0b94c-3e74-4a7d-b13e-8178d5dbf737: Claiming fa:16:3e:26:56:b9 10.100.0.5
Oct 02 12:50:55 compute-0 nova_compute[257802]: 2025-10-02 12:50:55.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:55 compute-0 NetworkManager[44987]: <info>  [1759409455.3507] manager: (tap62f0b94c-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/359)
Oct 02 12:50:55 compute-0 NetworkManager[44987]: <info>  [1759409455.3601] device (tap62f0b94c-3e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:50:55 compute-0 NetworkManager[44987]: <info>  [1759409455.3611] device (tap62f0b94c-3e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:50:55 compute-0 ovn_controller[148183]: 2025-10-02T12:50:55Z|00798|binding|INFO|Setting lport 62f0b94c-3e74-4a7d-b13e-8178d5dbf737 ovn-installed in OVS
Oct 02 12:50:55 compute-0 nova_compute[257802]: 2025-10-02 12:50:55.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:55 compute-0 nova_compute[257802]: 2025-10-02 12:50:55.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:55 compute-0 systemd-machined[211836]: New machine qemu-88-instance-000000b1.
Oct 02 12:50:55 compute-0 systemd[1]: Started Virtual Machine qemu-88-instance-000000b1.
Oct 02 12:50:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:50:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:55.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:50:55 compute-0 ovn_controller[148183]: 2025-10-02T12:50:55Z|00799|binding|INFO|Setting lport 62f0b94c-3e74-4a7d-b13e-8178d5dbf737 up in Southbound
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.627 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:56:b9 10.100.0.5'], port_security=['fa:16:3e:26:56:b9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '8e2c1007-1d07-434c-8a22-6cb98d903d3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48e4ff16-1388-40c7-a27a-83a3b4869808', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a442bc513e14406b73e96e70396e6c3', 'neutron:revision_number': '6', 'neutron:security_group_ids': '55b83463-a692-41fe-aa59-8c6f6a3385f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd2acef0-eb35-44b4-ad52-c0266ea4784a, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=62f0b94c-3e74-4a7d-b13e-8178d5dbf737) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.628 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 62f0b94c-3e74-4a7d-b13e-8178d5dbf737 in datapath 48e4ff16-1388-40c7-a27a-83a3b4869808 bound to our chassis
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.629 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 48e4ff16-1388-40c7-a27a-83a3b4869808
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.645 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[813c0b08-4349-4c1a-9bbc-f4caf584584f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.646 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap48e4ff16-11 in ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.649 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap48e4ff16-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.649 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1045233a-4608-4056-801d-8e8aeb78f62d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.650 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ddffe76f-54dc-4111-9a8c-acc2ac5cc805]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.663 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[18b02323-b314-4ffe-bf9a-e00e025d8599]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.679 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[78c32f18-0aa6-4b45-a527-874bc7696ad5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.706 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[dd2d1864-b176-4568-88c2-cac29ae1e431]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.711 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d2a19738-1e4f-4d65-ad6f-0de0321e5134]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:55 compute-0 NetworkManager[44987]: <info>  [1759409455.7129] manager: (tap48e4ff16-10): new Veth device (/org/freedesktop/NetworkManager/Devices/360)
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.750 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[d60c457a-1cae-43c6-bf5e-63579c781091]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.753 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[eea12e40-d462-453a-9dac-0dd2b5be1469]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:55 compute-0 NetworkManager[44987]: <info>  [1759409455.7740] device (tap48e4ff16-10): carrier: link connected
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.780 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[f14cda17-1b7e-49ca-9519-e9940e8154a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.795 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d86c2324-985a-4ee0-9713-57206745a2a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48e4ff16-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:53:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752339, 'reachable_time': 38387, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369865, 'error': None, 'target': 'ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.810 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3df48819-f0b3-4688-ad26-8d528502de00]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8b:53bc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752339, 'tstamp': 752339}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369866, 'error': None, 'target': 'ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.827 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[319a0e2c-b957-48cc-868d-f179153eb53c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48e4ff16-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:53:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752339, 'reachable_time': 38387, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 369870, 'error': None, 'target': 'ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.857 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5c48563b-8f57-4bc4-af3d-d0357fc73b86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:55 compute-0 sudo[369867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:50:55 compute-0 sudo[369867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:55 compute-0 sudo[369867]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.917 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[937f7014-1bef-40dd-85b5-7f56872473be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.918 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48e4ff16-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.918 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.919 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48e4ff16-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:55 compute-0 NetworkManager[44987]: <info>  [1759409455.9213] manager: (tap48e4ff16-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/361)
Oct 02 12:50:55 compute-0 nova_compute[257802]: 2025-10-02 12:50:55.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:55 compute-0 kernel: tap48e4ff16-10: entered promiscuous mode
Oct 02 12:50:55 compute-0 nova_compute[257802]: 2025-10-02 12:50:55.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.927 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap48e4ff16-10, col_values=(('external_ids', {'iface-id': '1fc80788-89b8-413a-b0b0-d36f1a11a2b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:55 compute-0 sudo[369897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:50:55 compute-0 nova_compute[257802]: 2025-10-02 12:50:55.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:55 compute-0 ovn_controller[148183]: 2025-10-02T12:50:55Z|00800|binding|INFO|Releasing lport 1fc80788-89b8-413a-b0b0-d36f1a11a2b1 from this chassis (sb_readonly=1)
Oct 02 12:50:55 compute-0 sudo[369897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:55 compute-0 nova_compute[257802]: 2025-10-02 12:50:55.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.930 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/48e4ff16-1388-40c7-a27a-83a3b4869808.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/48e4ff16-1388-40c7-a27a-83a3b4869808.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.931 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e027667f-f7ed-4e6a-ac7f-0616280f5988]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:55 compute-0 sudo[369897]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.932 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-48e4ff16-1388-40c7-a27a-83a3b4869808
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/48e4ff16-1388-40c7-a27a-83a3b4869808.pid.haproxy
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 48e4ff16-1388-40c7-a27a-83a3b4869808
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:50:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:50:55.933 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808', 'env', 'PROCESS_TAG=haproxy-48e4ff16-1388-40c7-a27a-83a3b4869808', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/48e4ff16-1388-40c7-a27a-83a3b4869808.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:50:55 compute-0 nova_compute[257802]: 2025-10-02 12:50:55.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2747: 305 pgs: 305 active+clean; 558 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.1 MiB/s wr, 149 op/s
Oct 02 12:50:56 compute-0 podman[369949]: 2025-10-02 12:50:56.257855369 +0000 UTC m=+0.027526617 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:50:56 compute-0 podman[369949]: 2025-10-02 12:50:56.367636996 +0000 UTC m=+0.137308224 container create fc6ad3cd45ba156aa24d533280b5d667c4267362b9e00fdc373ed050d69d394a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:50:56 compute-0 systemd[1]: Started libpod-conmon-fc6ad3cd45ba156aa24d533280b5d667c4267362b9e00fdc373ed050d69d394a.scope.
Oct 02 12:50:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:50:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1959efda180910425d04d38eb31704bb79e95b41022b30417f918b12520e8558/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:50:56 compute-0 podman[369949]: 2025-10-02 12:50:56.509250844 +0000 UTC m=+0.278922092 container init fc6ad3cd45ba156aa24d533280b5d667c4267362b9e00fdc373ed050d69d394a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:50:56 compute-0 podman[369949]: 2025-10-02 12:50:56.514806971 +0000 UTC m=+0.284478199 container start fc6ad3cd45ba156aa24d533280b5d667c4267362b9e00fdc373ed050d69d394a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:50:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:56.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:56 compute-0 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[369965]: [NOTICE]   (369969) : New worker (369971) forked
Oct 02 12:50:56 compute-0 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[369965]: [NOTICE]   (369969) : Loading success.
Oct 02 12:50:56 compute-0 nova_compute[257802]: 2025-10-02 12:50:56.632 2 DEBUG nova.compute.manager [req-8b5c40fc-687f-42b6-80ba-2e7611a1e121 req-00f5fed8-ce14-4a95-a74a-60f72dde1d80 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Received event network-vif-unplugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:56 compute-0 nova_compute[257802]: 2025-10-02 12:50:56.632 2 DEBUG oslo_concurrency.lockutils [req-8b5c40fc-687f-42b6-80ba-2e7611a1e121 req-00f5fed8-ce14-4a95-a74a-60f72dde1d80 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:56 compute-0 nova_compute[257802]: 2025-10-02 12:50:56.632 2 DEBUG oslo_concurrency.lockutils [req-8b5c40fc-687f-42b6-80ba-2e7611a1e121 req-00f5fed8-ce14-4a95-a74a-60f72dde1d80 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:56 compute-0 nova_compute[257802]: 2025-10-02 12:50:56.632 2 DEBUG oslo_concurrency.lockutils [req-8b5c40fc-687f-42b6-80ba-2e7611a1e121 req-00f5fed8-ce14-4a95-a74a-60f72dde1d80 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:56 compute-0 nova_compute[257802]: 2025-10-02 12:50:56.632 2 DEBUG nova.compute.manager [req-8b5c40fc-687f-42b6-80ba-2e7611a1e121 req-00f5fed8-ce14-4a95-a74a-60f72dde1d80 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] No waiting events found dispatching network-vif-unplugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:50:56 compute-0 nova_compute[257802]: 2025-10-02 12:50:56.633 2 WARNING nova.compute.manager [req-8b5c40fc-687f-42b6-80ba-2e7611a1e121 req-00f5fed8-ce14-4a95-a74a-60f72dde1d80 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Received unexpected event network-vif-unplugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 for instance with vm_state rescued and task_state unrescuing.
Oct 02 12:50:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2167788464' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:50:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2167788464' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:50:57 compute-0 nova_compute[257802]: 2025-10-02 12:50:57.225 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Removed pending event for 8e2c1007-1d07-434c-8a22-6cb98d903d3c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:50:57 compute-0 nova_compute[257802]: 2025-10-02 12:50:57.226 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409457.2249918, 8e2c1007-1d07-434c-8a22-6cb98d903d3c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:50:57 compute-0 nova_compute[257802]: 2025-10-02 12:50:57.226 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] VM Resumed (Lifecycle Event)
Oct 02 12:50:57 compute-0 nova_compute[257802]: 2025-10-02 12:50:57.466 2 DEBUG nova.compute.manager [req-75b8941f-14ef-4d5b-8a16-ef5efe81b10e req-403c5a18-4d85-41b4-b331-a18891d471b5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Received event network-vif-deleted-76fcd8fc-8215-4f14-b41b-aaa8327ae16e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:57 compute-0 nova_compute[257802]: 2025-10-02 12:50:57.466 2 INFO nova.compute.manager [req-75b8941f-14ef-4d5b-8a16-ef5efe81b10e req-403c5a18-4d85-41b4-b331-a18891d471b5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Neutron deleted interface 76fcd8fc-8215-4f14-b41b-aaa8327ae16e; detaching it from the instance and deleting it from the info cache
Oct 02 12:50:57 compute-0 nova_compute[257802]: 2025-10-02 12:50:57.466 2 DEBUG nova.network.neutron [req-75b8941f-14ef-4d5b-8a16-ef5efe81b10e req-403c5a18-4d85-41b4-b331-a18891d471b5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:57 compute-0 nova_compute[257802]: 2025-10-02 12:50:57.501 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:50:57 compute-0 nova_compute[257802]: 2025-10-02 12:50:57.508 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:50:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:57.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:57 compute-0 nova_compute[257802]: 2025-10-02 12:50:57.536 2 DEBUG nova.network.neutron [-] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:57 compute-0 nova_compute[257802]: 2025-10-02 12:50:57.596 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] During sync_power_state the instance has a pending task (unrescuing). Skip.
Oct 02 12:50:57 compute-0 nova_compute[257802]: 2025-10-02 12:50:57.597 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409457.227978, 8e2c1007-1d07-434c-8a22-6cb98d903d3c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:50:57 compute-0 nova_compute[257802]: 2025-10-02 12:50:57.597 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] VM Started (Lifecycle Event)
Oct 02 12:50:57 compute-0 nova_compute[257802]: 2025-10-02 12:50:57.630 2 INFO nova.compute.manager [-] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Took 4.03 seconds to deallocate network for instance.
Oct 02 12:50:57 compute-0 nova_compute[257802]: 2025-10-02 12:50:57.635 2 DEBUG nova.compute.manager [req-75b8941f-14ef-4d5b-8a16-ef5efe81b10e req-403c5a18-4d85-41b4-b331-a18891d471b5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Detach interface failed, port_id=76fcd8fc-8215-4f14-b41b-aaa8327ae16e, reason: Instance 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:50:57 compute-0 nova_compute[257802]: 2025-10-02 12:50:57.653 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:50:57 compute-0 nova_compute[257802]: 2025-10-02 12:50:57.655 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:50:57 compute-0 nova_compute[257802]: 2025-10-02 12:50:57.705 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] During sync_power_state the instance has a pending task (unrescuing). Skip.
Oct 02 12:50:57 compute-0 ceph-mon[73607]: pgmap v2747: 305 pgs: 305 active+clean; 558 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.1 MiB/s wr, 149 op/s
Oct 02 12:50:57 compute-0 nova_compute[257802]: 2025-10-02 12:50:57.790 2 DEBUG oslo_concurrency.lockutils [None req-f185a0d6-b059-4475-b6d9-7abff38b6224 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:57 compute-0 nova_compute[257802]: 2025-10-02 12:50:57.791 2 DEBUG oslo_concurrency.lockutils [None req-f185a0d6-b059-4475-b6d9-7abff38b6224 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:57 compute-0 nova_compute[257802]: 2025-10-02 12:50:57.903 2 DEBUG oslo_concurrency.processutils [None req-f185a0d6-b059-4475-b6d9-7abff38b6224 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2748: 305 pgs: 305 active+clean; 558 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.0 MiB/s wr, 90 op/s
Oct 02 12:50:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:50:58 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1182267724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:58 compute-0 nova_compute[257802]: 2025-10-02 12:50:58.418 2 DEBUG oslo_concurrency.processutils [None req-f185a0d6-b059-4475-b6d9-7abff38b6224 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:58 compute-0 nova_compute[257802]: 2025-10-02 12:50:58.424 2 DEBUG nova.compute.provider_tree [None req-f185a0d6-b059-4475-b6d9-7abff38b6224 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:50:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:58.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:58 compute-0 nova_compute[257802]: 2025-10-02 12:50:58.627 2 DEBUG nova.scheduler.client.report [None req-f185a0d6-b059-4475-b6d9-7abff38b6224 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:50:58 compute-0 nova_compute[257802]: 2025-10-02 12:50:58.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:58 compute-0 nova_compute[257802]: 2025-10-02 12:50:58.883 2 DEBUG oslo_concurrency.lockutils [None req-f185a0d6-b059-4475-b6d9-7abff38b6224 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:59 compute-0 ceph-mon[73607]: pgmap v2748: 305 pgs: 305 active+clean; 558 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.0 MiB/s wr, 90 op/s
Oct 02 12:50:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1182267724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:59 compute-0 nova_compute[257802]: 2025-10-02 12:50:59.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:50:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:59.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:59 compute-0 nova_compute[257802]: 2025-10-02 12:50:59.684 2 DEBUG nova.compute.manager [req-363e6eb9-c4ee-4b8b-8a27-6228511a5d16 req-d98ab2f1-0e12-4344-856e-81cb52a4c89b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Received event network-vif-plugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:59 compute-0 nova_compute[257802]: 2025-10-02 12:50:59.685 2 DEBUG oslo_concurrency.lockutils [req-363e6eb9-c4ee-4b8b-8a27-6228511a5d16 req-d98ab2f1-0e12-4344-856e-81cb52a4c89b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:59 compute-0 nova_compute[257802]: 2025-10-02 12:50:59.685 2 DEBUG oslo_concurrency.lockutils [req-363e6eb9-c4ee-4b8b-8a27-6228511a5d16 req-d98ab2f1-0e12-4344-856e-81cb52a4c89b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:59 compute-0 nova_compute[257802]: 2025-10-02 12:50:59.685 2 DEBUG oslo_concurrency.lockutils [req-363e6eb9-c4ee-4b8b-8a27-6228511a5d16 req-d98ab2f1-0e12-4344-856e-81cb52a4c89b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:59 compute-0 nova_compute[257802]: 2025-10-02 12:50:59.685 2 DEBUG nova.compute.manager [req-363e6eb9-c4ee-4b8b-8a27-6228511a5d16 req-d98ab2f1-0e12-4344-856e-81cb52a4c89b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] No waiting events found dispatching network-vif-plugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:50:59 compute-0 nova_compute[257802]: 2025-10-02 12:50:59.686 2 WARNING nova.compute.manager [req-363e6eb9-c4ee-4b8b-8a27-6228511a5d16 req-d98ab2f1-0e12-4344-856e-81cb52a4c89b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Received unexpected event network-vif-plugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 for instance with vm_state rescued and task_state unrescuing.
Oct 02 12:50:59 compute-0 nova_compute[257802]: 2025-10-02 12:50:59.686 2 DEBUG nova.compute.manager [req-363e6eb9-c4ee-4b8b-8a27-6228511a5d16 req-d98ab2f1-0e12-4344-856e-81cb52a4c89b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Received event network-vif-plugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:59 compute-0 nova_compute[257802]: 2025-10-02 12:50:59.686 2 DEBUG oslo_concurrency.lockutils [req-363e6eb9-c4ee-4b8b-8a27-6228511a5d16 req-d98ab2f1-0e12-4344-856e-81cb52a4c89b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:59 compute-0 nova_compute[257802]: 2025-10-02 12:50:59.686 2 DEBUG oslo_concurrency.lockutils [req-363e6eb9-c4ee-4b8b-8a27-6228511a5d16 req-d98ab2f1-0e12-4344-856e-81cb52a4c89b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:59 compute-0 nova_compute[257802]: 2025-10-02 12:50:59.687 2 DEBUG oslo_concurrency.lockutils [req-363e6eb9-c4ee-4b8b-8a27-6228511a5d16 req-d98ab2f1-0e12-4344-856e-81cb52a4c89b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:59 compute-0 nova_compute[257802]: 2025-10-02 12:50:59.687 2 DEBUG nova.compute.manager [req-363e6eb9-c4ee-4b8b-8a27-6228511a5d16 req-d98ab2f1-0e12-4344-856e-81cb52a4c89b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] No waiting events found dispatching network-vif-plugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:50:59 compute-0 nova_compute[257802]: 2025-10-02 12:50:59.687 2 WARNING nova.compute.manager [req-363e6eb9-c4ee-4b8b-8a27-6228511a5d16 req-d98ab2f1-0e12-4344-856e-81cb52a4c89b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Received unexpected event network-vif-plugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 for instance with vm_state rescued and task_state unrescuing.
Oct 02 12:50:59 compute-0 nova_compute[257802]: 2025-10-02 12:50:59.687 2 DEBUG nova.compute.manager [req-363e6eb9-c4ee-4b8b-8a27-6228511a5d16 req-d98ab2f1-0e12-4344-856e-81cb52a4c89b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Received event network-vif-plugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:59 compute-0 nova_compute[257802]: 2025-10-02 12:50:59.687 2 DEBUG oslo_concurrency.lockutils [req-363e6eb9-c4ee-4b8b-8a27-6228511a5d16 req-d98ab2f1-0e12-4344-856e-81cb52a4c89b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:59 compute-0 nova_compute[257802]: 2025-10-02 12:50:59.688 2 DEBUG oslo_concurrency.lockutils [req-363e6eb9-c4ee-4b8b-8a27-6228511a5d16 req-d98ab2f1-0e12-4344-856e-81cb52a4c89b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:59 compute-0 nova_compute[257802]: 2025-10-02 12:50:59.688 2 DEBUG oslo_concurrency.lockutils [req-363e6eb9-c4ee-4b8b-8a27-6228511a5d16 req-d98ab2f1-0e12-4344-856e-81cb52a4c89b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:59 compute-0 nova_compute[257802]: 2025-10-02 12:50:59.688 2 DEBUG nova.compute.manager [req-363e6eb9-c4ee-4b8b-8a27-6228511a5d16 req-d98ab2f1-0e12-4344-856e-81cb52a4c89b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] No waiting events found dispatching network-vif-plugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:50:59 compute-0 nova_compute[257802]: 2025-10-02 12:50:59.688 2 WARNING nova.compute.manager [req-363e6eb9-c4ee-4b8b-8a27-6228511a5d16 req-d98ab2f1-0e12-4344-856e-81cb52a4c89b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Received unexpected event network-vif-plugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 for instance with vm_state rescued and task_state unrescuing.
Oct 02 12:50:59 compute-0 nova_compute[257802]: 2025-10-02 12:50:59.789 2 INFO nova.scheduler.client.report [None req-f185a0d6-b059-4475-b6d9-7abff38b6224 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Deleted allocations for instance 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1
Oct 02 12:51:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2749: 305 pgs: 305 active+clean; 573 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 176 op/s
Oct 02 12:51:00 compute-0 nova_compute[257802]: 2025-10-02 12:51:00.186 2 DEBUG oslo_concurrency.lockutils [None req-f185a0d6-b059-4475-b6d9-7abff38b6224 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.278s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:51:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:00.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:51:01 compute-0 ceph-mon[73607]: pgmap v2749: 305 pgs: 305 active+clean; 573 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 176 op/s
Oct 02 12:51:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:01.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2750: 305 pgs: 305 active+clean; 573 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 02 12:51:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:02.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:03.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:03 compute-0 nova_compute[257802]: 2025-10-02 12:51:03.548 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409448.547403, 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:51:03 compute-0 nova_compute[257802]: 2025-10-02 12:51:03.548 2 INFO nova.compute.manager [-] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] VM Stopped (Lifecycle Event)
Oct 02 12:51:03 compute-0 ceph-mon[73607]: pgmap v2750: 305 pgs: 305 active+clean; 573 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 02 12:51:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3015363131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:03 compute-0 nova_compute[257802]: 2025-10-02 12:51:03.657 2 DEBUG nova.compute.manager [None req-c5bbb301-54cc-4531-aedd-259c1bd65864 - - - - - -] [instance: 2cd5a60e-98e2-413b-a3ef-72abc0b2e7b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:51:03 compute-0 nova_compute[257802]: 2025-10-02 12:51:03.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2751: 305 pgs: 305 active+clean; 576 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 170 op/s
Oct 02 12:51:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:04 compute-0 nova_compute[257802]: 2025-10-02 12:51:04.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:04.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:04 compute-0 nova_compute[257802]: 2025-10-02 12:51:04.704 2 DEBUG nova.compute.manager [None req-9628729b-c897-41a4-9362-4a047d9f13c1 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:51:05 compute-0 ceph-mon[73607]: pgmap v2751: 305 pgs: 305 active+clean; 576 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 170 op/s
Oct 02 12:51:05 compute-0 nova_compute[257802]: 2025-10-02 12:51:05.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:05.425 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:51:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:05.426 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:51:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:51:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:05.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:51:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2752: 305 pgs: 305 active+clean; 580 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 162 op/s
Oct 02 12:51:06 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2674474749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:51:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:06.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:51:07 compute-0 ceph-mon[73607]: pgmap v2752: 305 pgs: 305 active+clean; 580 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 162 op/s
Oct 02 12:51:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:07.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:07 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Oct 02 12:51:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:51:07.669905) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:51:07 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Oct 02 12:51:07 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409467669941, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 632, "num_deletes": 252, "total_data_size": 758594, "memory_usage": 771544, "flush_reason": "Manual Compaction"}
Oct 02 12:51:07 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Oct 02 12:51:07 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409467773172, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 749791, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60808, "largest_seqno": 61439, "table_properties": {"data_size": 746508, "index_size": 1190, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7898, "raw_average_key_size": 19, "raw_value_size": 739770, "raw_average_value_size": 1822, "num_data_blocks": 53, "num_entries": 406, "num_filter_entries": 406, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409424, "oldest_key_time": 1759409424, "file_creation_time": 1759409467, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:51:07 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 103319 microseconds, and 3102 cpu microseconds.
Oct 02 12:51:07 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:51:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:51:07.773221) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 749791 bytes OK
Oct 02 12:51:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:51:07.773242) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Oct 02 12:51:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:51:07.816083) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Oct 02 12:51:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:51:07.816125) EVENT_LOG_v1 {"time_micros": 1759409467816116, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:51:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:51:07.816148) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:51:07 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 755262, prev total WAL file size 755262, number of live WAL files 2.
Oct 02 12:51:07 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:51:07 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:51:07.816697) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Oct 02 12:51:07 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:51:07 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(732KB)], [137(11MB)]
Oct 02 12:51:07 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409467816722, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 12388753, "oldest_snapshot_seqno": -1}
Oct 02 12:51:07 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 8667 keys, 10503087 bytes, temperature: kUnknown
Oct 02 12:51:07 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409467985006, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 10503087, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10447734, "index_size": 32584, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21701, "raw_key_size": 229274, "raw_average_key_size": 26, "raw_value_size": 10295992, "raw_average_value_size": 1187, "num_data_blocks": 1237, "num_entries": 8667, "num_filter_entries": 8667, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759409467, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:51:07 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:51:08 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:51:07.985269) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 10503087 bytes
Oct 02 12:51:08 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:51:08.043413) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 73.6 rd, 62.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 11.1 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(30.5) write-amplify(14.0) OK, records in: 9185, records dropped: 518 output_compression: NoCompression
Oct 02 12:51:08 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:51:08.043456) EVENT_LOG_v1 {"time_micros": 1759409468043441, "job": 84, "event": "compaction_finished", "compaction_time_micros": 168369, "compaction_time_cpu_micros": 25114, "output_level": 6, "num_output_files": 1, "total_output_size": 10503087, "num_input_records": 9185, "num_output_records": 8667, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:51:08 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:51:08 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409468043762, "job": 84, "event": "table_file_deletion", "file_number": 139}
Oct 02 12:51:08 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:51:08 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409468045422, "job": 84, "event": "table_file_deletion", "file_number": 137}
Oct 02 12:51:08 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:51:07.816626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:51:08 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:51:08.045475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:51:08 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:51:08.045481) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:51:08 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:51:08.045482) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:51:08 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:51:08.045484) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:51:08 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:51:08.045485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:51:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2753: 305 pgs: 305 active+clean; 580 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 137 op/s
Oct 02 12:51:08 compute-0 ovn_controller[148183]: 2025-10-02T12:51:08Z|00801|binding|INFO|Releasing lport 1fc80788-89b8-413a-b0b0-d36f1a11a2b1 from this chassis (sb_readonly=0)
Oct 02 12:51:08 compute-0 nova_compute[257802]: 2025-10-02 12:51:08.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:08.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:08 compute-0 ovn_controller[148183]: 2025-10-02T12:51:08Z|00802|binding|INFO|Releasing lport 1fc80788-89b8-413a-b0b0-d36f1a11a2b1 from this chassis (sb_readonly=0)
Oct 02 12:51:08 compute-0 nova_compute[257802]: 2025-10-02 12:51:08.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:08 compute-0 nova_compute[257802]: 2025-10-02 12:51:08.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:08 compute-0 ceph-mon[73607]: pgmap v2753: 305 pgs: 305 active+clean; 580 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 137 op/s
Oct 02 12:51:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:09 compute-0 nova_compute[257802]: 2025-10-02 12:51:09.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:09.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2754: 305 pgs: 305 active+clean; 580 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 151 op/s
Oct 02 12:51:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:51:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:10.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:51:11 compute-0 nova_compute[257802]: 2025-10-02 12:51:11.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:11 compute-0 nova_compute[257802]: 2025-10-02 12:51:11.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:51:11 compute-0 ceph-mon[73607]: pgmap v2754: 305 pgs: 305 active+clean; 580 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 151 op/s
Oct 02 12:51:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:11.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:11 compute-0 ovn_controller[148183]: 2025-10-02T12:51:11Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:26:56:b9 10.100.0.5
Oct 02 12:51:11 compute-0 ovn_controller[148183]: 2025-10-02T12:51:11Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:26:56:b9 10.100.0.5
Oct 02 12:51:12 compute-0 nova_compute[257802]: 2025-10-02 12:51:12.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2755: 305 pgs: 305 active+clean; 580 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 76 KiB/s wr, 65 op/s
Oct 02 12:51:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:12.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:51:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:51:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:51:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:51:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:51:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:51:13 compute-0 ceph-mon[73607]: pgmap v2755: 305 pgs: 305 active+clean; 580 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 76 KiB/s wr, 65 op/s
Oct 02 12:51:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:13.428 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:13.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:13 compute-0 nova_compute[257802]: 2025-10-02 12:51:13.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2756: 305 pgs: 305 active+clean; 580 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 85 KiB/s wr, 85 op/s
Oct 02 12:51:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:14 compute-0 nova_compute[257802]: 2025-10-02 12:51:14.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:14.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:15 compute-0 ceph-mon[73607]: pgmap v2756: 305 pgs: 305 active+clean; 580 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 85 KiB/s wr, 85 op/s
Oct 02 12:51:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:15.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:15 compute-0 podman[370075]: 2025-10-02 12:51:15.918053847 +0000 UTC m=+0.055513544 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:51:15 compute-0 podman[370076]: 2025-10-02 12:51:15.928122544 +0000 UTC m=+0.062943477 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 12:51:15 compute-0 podman[370077]: 2025-10-02 12:51:15.951769145 +0000 UTC m=+0.084689201 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:51:16 compute-0 sudo[370135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:51:16 compute-0 sudo[370135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:16 compute-0 sudo[370135]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:16 compute-0 sudo[370160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:51:16 compute-0 sudo[370160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:16 compute-0 sudo[370160]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2757: 305 pgs: 305 active+clean; 580 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 552 KiB/s rd, 53 KiB/s wr, 57 op/s
Oct 02 12:51:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/113032608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:16.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:17 compute-0 nova_compute[257802]: 2025-10-02 12:51:17.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:17.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:18 compute-0 nova_compute[257802]: 2025-10-02 12:51:18.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2758: 305 pgs: 305 active+clean; 580 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 531 KiB/s rd, 29 KiB/s wr, 48 op/s
Oct 02 12:51:18 compute-0 ceph-mon[73607]: pgmap v2757: 305 pgs: 305 active+clean; 580 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 552 KiB/s rd, 53 KiB/s wr, 57 op/s
Oct 02 12:51:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:51:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:18.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:51:18 compute-0 nova_compute[257802]: 2025-10-02 12:51:18.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:19 compute-0 nova_compute[257802]: 2025-10-02 12:51:19.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:19.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:19 compute-0 ceph-mon[73607]: pgmap v2758: 305 pgs: 305 active+clean; 580 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 531 KiB/s rd, 29 KiB/s wr, 48 op/s
Oct 02 12:51:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2431425398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:20 compute-0 nova_compute[257802]: 2025-10-02 12:51:20.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2759: 305 pgs: 305 active+clean; 582 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 531 KiB/s rd, 40 KiB/s wr, 50 op/s
Oct 02 12:51:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:51:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:20.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:51:20 compute-0 ceph-mon[73607]: pgmap v2759: 305 pgs: 305 active+clean; 582 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 531 KiB/s rd, 40 KiB/s wr, 50 op/s
Oct 02 12:51:20 compute-0 podman[370188]: 2025-10-02 12:51:20.965262411 +0000 UTC m=+0.097560878 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 12:51:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:51:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:21.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:51:21 compute-0 sshd-session[370215]: Invalid user vr from 167.99.55.34 port 42668
Oct 02 12:51:22 compute-0 sshd-session[370215]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 12:51:22 compute-0 sshd-session[370215]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=167.99.55.34
Oct 02 12:51:22 compute-0 nova_compute[257802]: 2025-10-02 12:51:22.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2760: 305 pgs: 305 active+clean; 582 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 469 KiB/s rd, 21 KiB/s wr, 36 op/s
Oct 02 12:51:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:51:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:22.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:51:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:51:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:23.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:51:23 compute-0 sudo[370218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:51:23 compute-0 sudo[370218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:23 compute-0 sudo[370218]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:23 compute-0 sudo[370243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:51:23 compute-0 sudo[370243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:23 compute-0 sudo[370243]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:23 compute-0 sudo[370268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:51:23 compute-0 sudo[370268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:23 compute-0 sudo[370268]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:23 compute-0 nova_compute[257802]: 2025-10-02 12:51:23.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:23 compute-0 sudo[370293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 12:51:23 compute-0 sudo[370293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:23 compute-0 sshd-session[370215]: Failed password for invalid user vr from 167.99.55.34 port 42668 ssh2
Oct 02 12:51:23 compute-0 sshd-session[370215]: Connection closed by invalid user vr 167.99.55.34 port 42668 [preauth]
Oct 02 12:51:23 compute-0 ceph-mon[73607]: pgmap v2760: 305 pgs: 305 active+clean; 582 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 469 KiB/s rd, 21 KiB/s wr, 36 op/s
Oct 02 12:51:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4005157176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2761: 305 pgs: 305 active+clean; 614 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 485 KiB/s rd, 839 KiB/s wr, 62 op/s
Oct 02 12:51:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:24 compute-0 podman[370390]: 2025-10-02 12:51:24.435244921 +0000 UTC m=+0.245897671 container exec 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 12:51:24 compute-0 nova_compute[257802]: 2025-10-02 12:51:24.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:24.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:24 compute-0 podman[370390]: 2025-10-02 12:51:24.616188846 +0000 UTC m=+0.426841526 container exec_died 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:51:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 12:51:25 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:51:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 12:51:25 compute-0 ceph-mon[73607]: pgmap v2761: 305 pgs: 305 active+clean; 614 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 485 KiB/s rd, 839 KiB/s wr, 62 op/s
Oct 02 12:51:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3190617108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:51:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:25.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:51:25 compute-0 podman[370526]: 2025-10-02 12:51:25.790348759 +0000 UTC m=+0.230226606 container exec 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 12:51:25 compute-0 podman[370547]: 2025-10-02 12:51:25.900998977 +0000 UTC m=+0.073535617 container exec_died 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 12:51:26 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:51:26 compute-0 podman[370526]: 2025-10-02 12:51:26.06965185 +0000 UTC m=+0.509529687 container exec_died 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 12:51:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2762: 305 pgs: 305 active+clean; 675 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 120 KiB/s rd, 3.6 MiB/s wr, 58 op/s
Oct 02 12:51:26 compute-0 podman[370592]: 2025-10-02 12:51:26.419191176 +0000 UTC m=+0.111749726 container exec a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, release=1793, architecture=x86_64, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived)
Oct 02 12:51:26 compute-0 podman[370613]: 2025-10-02 12:51:26.497390257 +0000 UTC m=+0.059681526 container exec_died a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, description=keepalived for Ceph, io.openshift.expose-services=, release=1793, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, build-date=2023-02-22T09:23:20, version=2.2.4)
Oct 02 12:51:26 compute-0 podman[370592]: 2025-10-02 12:51:26.540934057 +0000 UTC m=+0.233492627 container exec_died a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, vcs-type=git, build-date=2023-02-22T09:23:20, distribution-scope=public, io.buildah.version=1.28.2, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, release=1793, architecture=x86_64)
Oct 02 12:51:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:26.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/453716225' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:51:26 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:51:26 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:51:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3858465189' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:51:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4176606406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:51:26 compute-0 sudo[370293]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:51:26 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:51:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:51:26 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:51:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:26.969 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:26.970 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:26.970 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:26 compute-0 sudo[370647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:51:26 compute-0 sudo[370647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:26 compute-0 sudo[370647]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:27 compute-0 sudo[370672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:51:27 compute-0 sudo[370672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:27 compute-0 sudo[370672]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:27 compute-0 sudo[370697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:51:27 compute-0 sudo[370697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:27 compute-0 sudo[370697]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:27 compute-0 sudo[370722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:51:27 compute-0 sudo[370722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:27.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:27 compute-0 sudo[370722]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:51:27 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:51:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:51:27 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:51:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:51:27 compute-0 ceph-mon[73607]: pgmap v2762: 305 pgs: 305 active+clean; 675 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 120 KiB/s rd, 3.6 MiB/s wr, 58 op/s
Oct 02 12:51:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3655991857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:51:27 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:51:27 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:51:28 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:51:28 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev cadb8e55-7736-48b9-9e45-ecf4efd7cf0d does not exist
Oct 02 12:51:28 compute-0 nova_compute[257802]: 2025-10-02 12:51:28.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:28 compute-0 nova_compute[257802]: 2025-10-02 12:51:28.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:51:28 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 548086aa-7008-4282-aeff-53bf7de96e8a does not exist
Oct 02 12:51:28 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev d1f7d8cd-be02-4bf2-982c-e70361566d9a does not exist
Oct 02 12:51:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:51:28 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:51:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:51:28 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:51:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:51:28 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:51:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2763: 305 pgs: 305 active+clean; 675 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 3.6 MiB/s wr, 42 op/s
Oct 02 12:51:28 compute-0 nova_compute[257802]: 2025-10-02 12:51:28.146 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:51:28 compute-0 sudo[370777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:51:28 compute-0 sudo[370777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:28 compute-0 sudo[370777]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:28 compute-0 sudo[370802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:51:28 compute-0 sudo[370802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:28 compute-0 sudo[370802]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:28 compute-0 sudo[370827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:51:28 compute-0 sudo[370827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:28 compute-0 sudo[370827]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:28 compute-0 sudo[370852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:51:28 compute-0 sudo[370852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:51:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:28.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:51:28 compute-0 nova_compute[257802]: 2025-10-02 12:51:28.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:28 compute-0 podman[370919]: 2025-10-02 12:51:28.783008564 +0000 UTC m=+0.057670798 container create f23eb0f3ead7794820952bec230299583bf0bf70e891711ad1e3821ea78f0683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_brown, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:51:28 compute-0 systemd[1]: Started libpod-conmon-f23eb0f3ead7794820952bec230299583bf0bf70e891711ad1e3821ea78f0683.scope.
Oct 02 12:51:28 compute-0 podman[370919]: 2025-10-02 12:51:28.756304697 +0000 UTC m=+0.030966971 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:51:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:51:28 compute-0 podman[370919]: 2025-10-02 12:51:28.886122786 +0000 UTC m=+0.160785030 container init f23eb0f3ead7794820952bec230299583bf0bf70e891711ad1e3821ea78f0683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:51:28 compute-0 podman[370919]: 2025-10-02 12:51:28.895538898 +0000 UTC m=+0.170201122 container start f23eb0f3ead7794820952bec230299583bf0bf70e891711ad1e3821ea78f0683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_brown, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 12:51:28 compute-0 podman[370919]: 2025-10-02 12:51:28.899232199 +0000 UTC m=+0.173894423 container attach f23eb0f3ead7794820952bec230299583bf0bf70e891711ad1e3821ea78f0683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_brown, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:51:28 compute-0 priceless_brown[370936]: 167 167
Oct 02 12:51:28 compute-0 systemd[1]: libpod-f23eb0f3ead7794820952bec230299583bf0bf70e891711ad1e3821ea78f0683.scope: Deactivated successfully.
Oct 02 12:51:28 compute-0 podman[370919]: 2025-10-02 12:51:28.902677273 +0000 UTC m=+0.177339497 container died f23eb0f3ead7794820952bec230299583bf0bf70e891711ad1e3821ea78f0683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_brown, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:51:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:51:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:51:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:51:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:51:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:51:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:51:28 compute-0 ceph-mon[73607]: pgmap v2763: 305 pgs: 305 active+clean; 675 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 3.6 MiB/s wr, 42 op/s
Oct 02 12:51:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-834301dca1d922710a0fc64f4d05e30dd7b1b2f478dfe98c94c167f901925796-merged.mount: Deactivated successfully.
Oct 02 12:51:28 compute-0 podman[370919]: 2025-10-02 12:51:28.951973464 +0000 UTC m=+0.226635688 container remove f23eb0f3ead7794820952bec230299583bf0bf70e891711ad1e3821ea78f0683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_brown, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 12:51:28 compute-0 systemd[1]: libpod-conmon-f23eb0f3ead7794820952bec230299583bf0bf70e891711ad1e3821ea78f0683.scope: Deactivated successfully.
Oct 02 12:51:29 compute-0 podman[370958]: 2025-10-02 12:51:29.142954556 +0000 UTC m=+0.046918184 container create db2afe74072b626e0ac4aad9262edbab08f6eb964cad4faa67ab3e271ef1b419 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_dubinsky, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:51:29 compute-0 systemd[1]: Started libpod-conmon-db2afe74072b626e0ac4aad9262edbab08f6eb964cad4faa67ab3e271ef1b419.scope.
Oct 02 12:51:29 compute-0 podman[370958]: 2025-10-02 12:51:29.121848498 +0000 UTC m=+0.025812146 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:51:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:51:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0874355b33c5cf4e303fb65174902def53f3c444df235b5bc67d6146ca3b901/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:51:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0874355b33c5cf4e303fb65174902def53f3c444df235b5bc67d6146ca3b901/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:51:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0874355b33c5cf4e303fb65174902def53f3c444df235b5bc67d6146ca3b901/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:51:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0874355b33c5cf4e303fb65174902def53f3c444df235b5bc67d6146ca3b901/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:51:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0874355b33c5cf4e303fb65174902def53f3c444df235b5bc67d6146ca3b901/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:51:29 compute-0 podman[370958]: 2025-10-02 12:51:29.253389629 +0000 UTC m=+0.157353327 container init db2afe74072b626e0ac4aad9262edbab08f6eb964cad4faa67ab3e271ef1b419 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 12:51:29 compute-0 podman[370958]: 2025-10-02 12:51:29.267728091 +0000 UTC m=+0.171691709 container start db2afe74072b626e0ac4aad9262edbab08f6eb964cad4faa67ab3e271ef1b419 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_dubinsky, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:51:29 compute-0 podman[370958]: 2025-10-02 12:51:29.271862033 +0000 UTC m=+0.175825741 container attach db2afe74072b626e0ac4aad9262edbab08f6eb964cad4faa67ab3e271ef1b419 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:51:29 compute-0 nova_compute[257802]: 2025-10-02 12:51:29.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:29.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:30 compute-0 exciting_dubinsky[370974]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:51:30 compute-0 exciting_dubinsky[370974]: --> relative data size: 1.0
Oct 02 12:51:30 compute-0 exciting_dubinsky[370974]: --> All data devices are unavailable
Oct 02 12:51:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2764: 305 pgs: 305 active+clean; 675 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 3.6 MiB/s wr, 58 op/s
Oct 02 12:51:30 compute-0 systemd[1]: libpod-db2afe74072b626e0ac4aad9262edbab08f6eb964cad4faa67ab3e271ef1b419.scope: Deactivated successfully.
Oct 02 12:51:30 compute-0 podman[370958]: 2025-10-02 12:51:30.161488706 +0000 UTC m=+1.065452334 container died db2afe74072b626e0ac4aad9262edbab08f6eb964cad4faa67ab3e271ef1b419 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:51:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0874355b33c5cf4e303fb65174902def53f3c444df235b5bc67d6146ca3b901-merged.mount: Deactivated successfully.
Oct 02 12:51:30 compute-0 podman[370958]: 2025-10-02 12:51:30.222612977 +0000 UTC m=+1.126576595 container remove db2afe74072b626e0ac4aad9262edbab08f6eb964cad4faa67ab3e271ef1b419 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_dubinsky, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:51:30 compute-0 systemd[1]: libpod-conmon-db2afe74072b626e0ac4aad9262edbab08f6eb964cad4faa67ab3e271ef1b419.scope: Deactivated successfully.
Oct 02 12:51:30 compute-0 sudo[370852]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:30 compute-0 sudo[371002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:51:30 compute-0 sudo[371002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:30 compute-0 sudo[371002]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:30 compute-0 sudo[371027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:51:30 compute-0 sudo[371027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:30 compute-0 sudo[371027]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:30 compute-0 sudo[371052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:51:30 compute-0 sudo[371052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:30 compute-0 sudo[371052]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:30 compute-0 sudo[371077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:51:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:51:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:30.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:51:30 compute-0 sudo[371077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:30 compute-0 podman[371142]: 2025-10-02 12:51:30.890942555 +0000 UTC m=+0.050366299 container create edd9c82e69aa89d4b55fd3be94f4014ae2306dba61c7906322ad7e6bd3ec16fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mahavira, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 12:51:30 compute-0 systemd[1]: Started libpod-conmon-edd9c82e69aa89d4b55fd3be94f4014ae2306dba61c7906322ad7e6bd3ec16fe.scope.
Oct 02 12:51:30 compute-0 podman[371142]: 2025-10-02 12:51:30.863715156 +0000 UTC m=+0.023138920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:51:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:51:30 compute-0 podman[371142]: 2025-10-02 12:51:30.978015383 +0000 UTC m=+0.137439157 container init edd9c82e69aa89d4b55fd3be94f4014ae2306dba61c7906322ad7e6bd3ec16fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mahavira, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:51:30 compute-0 podman[371142]: 2025-10-02 12:51:30.986155893 +0000 UTC m=+0.145579637 container start edd9c82e69aa89d4b55fd3be94f4014ae2306dba61c7906322ad7e6bd3ec16fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mahavira, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 12:51:30 compute-0 podman[371142]: 2025-10-02 12:51:30.989435154 +0000 UTC m=+0.148858888 container attach edd9c82e69aa89d4b55fd3be94f4014ae2306dba61c7906322ad7e6bd3ec16fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mahavira, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 12:51:30 compute-0 sad_mahavira[371159]: 167 167
Oct 02 12:51:30 compute-0 systemd[1]: libpod-edd9c82e69aa89d4b55fd3be94f4014ae2306dba61c7906322ad7e6bd3ec16fe.scope: Deactivated successfully.
Oct 02 12:51:30 compute-0 podman[371142]: 2025-10-02 12:51:30.994764085 +0000 UTC m=+0.154187819 container died edd9c82e69aa89d4b55fd3be94f4014ae2306dba61c7906322ad7e6bd3ec16fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mahavira, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:51:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cb7c79be66cd130f048d2995b4e00e736667e243d23c49790109050a02f666f-merged.mount: Deactivated successfully.
Oct 02 12:51:31 compute-0 podman[371142]: 2025-10-02 12:51:31.04340037 +0000 UTC m=+0.202824114 container remove edd9c82e69aa89d4b55fd3be94f4014ae2306dba61c7906322ad7e6bd3ec16fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 12:51:31 compute-0 systemd[1]: libpod-conmon-edd9c82e69aa89d4b55fd3be94f4014ae2306dba61c7906322ad7e6bd3ec16fe.scope: Deactivated successfully.
Oct 02 12:51:31 compute-0 nova_compute[257802]: 2025-10-02 12:51:31.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:31 compute-0 podman[371183]: 2025-10-02 12:51:31.209258244 +0000 UTC m=+0.037367499 container create 6dc6a9c4f41e2e2f1786adc8b154a516dbf8047b60c7025aa370e10e94fa3463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_hypatia, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 12:51:31 compute-0 systemd[1]: Started libpod-conmon-6dc6a9c4f41e2e2f1786adc8b154a516dbf8047b60c7025aa370e10e94fa3463.scope.
Oct 02 12:51:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:51:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f19149f3e90d6d557ac8e7c5d440aa6106f9e1d0d6e3d504432f963dd0f58d21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:51:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f19149f3e90d6d557ac8e7c5d440aa6106f9e1d0d6e3d504432f963dd0f58d21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:51:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f19149f3e90d6d557ac8e7c5d440aa6106f9e1d0d6e3d504432f963dd0f58d21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:51:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f19149f3e90d6d557ac8e7c5d440aa6106f9e1d0d6e3d504432f963dd0f58d21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:51:31 compute-0 podman[371183]: 2025-10-02 12:51:31.193574879 +0000 UTC m=+0.021684164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:51:31 compute-0 podman[371183]: 2025-10-02 12:51:31.290637194 +0000 UTC m=+0.118746459 container init 6dc6a9c4f41e2e2f1786adc8b154a516dbf8047b60c7025aa370e10e94fa3463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_hypatia, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:51:31 compute-0 podman[371183]: 2025-10-02 12:51:31.296691152 +0000 UTC m=+0.124800407 container start 6dc6a9c4f41e2e2f1786adc8b154a516dbf8047b60c7025aa370e10e94fa3463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:51:31 compute-0 podman[371183]: 2025-10-02 12:51:31.29986958 +0000 UTC m=+0.127978855 container attach 6dc6a9c4f41e2e2f1786adc8b154a516dbf8047b60c7025aa370e10e94fa3463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_hypatia, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 12:51:31 compute-0 nova_compute[257802]: 2025-10-02 12:51:31.427 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:31 compute-0 nova_compute[257802]: 2025-10-02 12:51:31.428 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:31 compute-0 nova_compute[257802]: 2025-10-02 12:51:31.428 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:31 compute-0 nova_compute[257802]: 2025-10-02 12:51:31.428 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:51:31 compute-0 nova_compute[257802]: 2025-10-02 12:51:31.428 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:31.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:31 compute-0 ceph-mon[73607]: pgmap v2764: 305 pgs: 305 active+clean; 675 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 3.6 MiB/s wr, 58 op/s
Oct 02 12:51:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:51:31 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/723549652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:31 compute-0 nova_compute[257802]: 2025-10-02 12:51:31.893 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]: {
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]:     "1": [
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]:         {
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]:             "devices": [
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]:                 "/dev/loop3"
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]:             ],
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]:             "lv_name": "ceph_lv0",
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]:             "lv_size": "7511998464",
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]:             "name": "ceph_lv0",
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]:             "tags": {
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]:                 "ceph.cluster_name": "ceph",
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]:                 "ceph.crush_device_class": "",
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]:                 "ceph.encrypted": "0",
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]:                 "ceph.osd_id": "1",
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]:                 "ceph.type": "block",
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]:                 "ceph.vdo": "0"
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]:             },
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]:             "type": "block",
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]:             "vg_name": "ceph_vg0"
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]:         }
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]:     ]
Oct 02 12:51:32 compute-0 priceless_hypatia[371200]: }
Oct 02 12:51:32 compute-0 systemd[1]: libpod-6dc6a9c4f41e2e2f1786adc8b154a516dbf8047b60c7025aa370e10e94fa3463.scope: Deactivated successfully.
Oct 02 12:51:32 compute-0 podman[371183]: 2025-10-02 12:51:32.115236319 +0000 UTC m=+0.943345594 container died 6dc6a9c4f41e2e2f1786adc8b154a516dbf8047b60c7025aa370e10e94fa3463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_hypatia, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:51:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2765: 305 pgs: 305 active+clean; 675 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 3.6 MiB/s wr, 57 op/s
Oct 02 12:51:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-f19149f3e90d6d557ac8e7c5d440aa6106f9e1d0d6e3d504432f963dd0f58d21-merged.mount: Deactivated successfully.
Oct 02 12:51:32 compute-0 podman[371183]: 2025-10-02 12:51:32.172977718 +0000 UTC m=+1.001086973 container remove 6dc6a9c4f41e2e2f1786adc8b154a516dbf8047b60c7025aa370e10e94fa3463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:51:32 compute-0 systemd[1]: libpod-conmon-6dc6a9c4f41e2e2f1786adc8b154a516dbf8047b60c7025aa370e10e94fa3463.scope: Deactivated successfully.
Oct 02 12:51:32 compute-0 nova_compute[257802]: 2025-10-02 12:51:32.189 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000b1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:51:32 compute-0 nova_compute[257802]: 2025-10-02 12:51:32.190 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000b1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:51:32 compute-0 sudo[371077]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:32 compute-0 sudo[371246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:51:32 compute-0 sudo[371246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:32 compute-0 sudo[371246]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:32 compute-0 sudo[371271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:51:32 compute-0 sudo[371271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:32 compute-0 sudo[371271]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:32 compute-0 nova_compute[257802]: 2025-10-02 12:51:32.344 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:51:32 compute-0 nova_compute[257802]: 2025-10-02 12:51:32.345 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4019MB free_disk=20.764667510986328GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:51:32 compute-0 nova_compute[257802]: 2025-10-02 12:51:32.345 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:32 compute-0 nova_compute[257802]: 2025-10-02 12:51:32.345 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:32 compute-0 sudo[371296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:51:32 compute-0 sudo[371296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:32 compute-0 sudo[371296]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:32 compute-0 sudo[371321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:51:32 compute-0 sudo[371321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:51:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:32.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:51:32 compute-0 podman[371386]: 2025-10-02 12:51:32.776722389 +0000 UTC m=+0.036561329 container create ee27f73e1a3b09c3ccdf703ba216438754f7da656709868b0e15406cf6d13514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 12:51:32 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/723549652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:32 compute-0 systemd[1]: Started libpod-conmon-ee27f73e1a3b09c3ccdf703ba216438754f7da656709868b0e15406cf6d13514.scope.
Oct 02 12:51:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:51:32 compute-0 podman[371386]: 2025-10-02 12:51:32.761054284 +0000 UTC m=+0.020893254 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:51:32 compute-0 podman[371386]: 2025-10-02 12:51:32.861254465 +0000 UTC m=+0.121093405 container init ee27f73e1a3b09c3ccdf703ba216438754f7da656709868b0e15406cf6d13514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_napier, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:51:32 compute-0 podman[371386]: 2025-10-02 12:51:32.867129199 +0000 UTC m=+0.126968139 container start ee27f73e1a3b09c3ccdf703ba216438754f7da656709868b0e15406cf6d13514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_napier, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:51:32 compute-0 podman[371386]: 2025-10-02 12:51:32.870483142 +0000 UTC m=+0.130322072 container attach ee27f73e1a3b09c3ccdf703ba216438754f7da656709868b0e15406cf6d13514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:51:32 compute-0 cool_napier[371403]: 167 167
Oct 02 12:51:32 compute-0 systemd[1]: libpod-ee27f73e1a3b09c3ccdf703ba216438754f7da656709868b0e15406cf6d13514.scope: Deactivated successfully.
Oct 02 12:51:32 compute-0 podman[371386]: 2025-10-02 12:51:32.87244313 +0000 UTC m=+0.132282070 container died ee27f73e1a3b09c3ccdf703ba216438754f7da656709868b0e15406cf6d13514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_napier, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:51:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b2341e0e4b516b3e31e0dc9846d520d9009dc50b23fdc0daf54cfb064c3b03a-merged.mount: Deactivated successfully.
Oct 02 12:51:32 compute-0 podman[371386]: 2025-10-02 12:51:32.906511647 +0000 UTC m=+0.166350587 container remove ee27f73e1a3b09c3ccdf703ba216438754f7da656709868b0e15406cf6d13514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 12:51:32 compute-0 systemd[1]: libpod-conmon-ee27f73e1a3b09c3ccdf703ba216438754f7da656709868b0e15406cf6d13514.scope: Deactivated successfully.
Oct 02 12:51:33 compute-0 podman[371427]: 2025-10-02 12:51:33.090136318 +0000 UTC m=+0.050083602 container create e00f8e8be86327cb594c804f6b84c65576287217bec7af85f111f9dc04525ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_vaughan, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:51:33 compute-0 systemd[1]: Started libpod-conmon-e00f8e8be86327cb594c804f6b84c65576287217bec7af85f111f9dc04525ebc.scope.
Oct 02 12:51:33 compute-0 podman[371427]: 2025-10-02 12:51:33.066902347 +0000 UTC m=+0.026849651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:51:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:51:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8a82056f1586ef75a7593d92f13c7d290606cc53bf8972df477ee6b619bef1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:51:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8a82056f1586ef75a7593d92f13c7d290606cc53bf8972df477ee6b619bef1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:51:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8a82056f1586ef75a7593d92f13c7d290606cc53bf8972df477ee6b619bef1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:51:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8a82056f1586ef75a7593d92f13c7d290606cc53bf8972df477ee6b619bef1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:51:33 compute-0 podman[371427]: 2025-10-02 12:51:33.187160731 +0000 UTC m=+0.147108045 container init e00f8e8be86327cb594c804f6b84c65576287217bec7af85f111f9dc04525ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:51:33 compute-0 podman[371427]: 2025-10-02 12:51:33.193814025 +0000 UTC m=+0.153761309 container start e00f8e8be86327cb594c804f6b84c65576287217bec7af85f111f9dc04525ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 12:51:33 compute-0 podman[371427]: 2025-10-02 12:51:33.197425563 +0000 UTC m=+0.157372927 container attach e00f8e8be86327cb594c804f6b84c65576287217bec7af85f111f9dc04525ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_vaughan, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:51:33 compute-0 nova_compute[257802]: 2025-10-02 12:51:33.381 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 8e2c1007-1d07-434c-8a22-6cb98d903d3c actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:51:33 compute-0 nova_compute[257802]: 2025-10-02 12:51:33.382 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:51:33 compute-0 nova_compute[257802]: 2025-10-02 12:51:33.382 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:51:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:51:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:33.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:51:33 compute-0 nova_compute[257802]: 2025-10-02 12:51:33.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:33 compute-0 ceph-mon[73607]: pgmap v2765: 305 pgs: 305 active+clean; 675 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 3.6 MiB/s wr, 57 op/s
Oct 02 12:51:33 compute-0 hardcore_vaughan[371444]: {
Oct 02 12:51:33 compute-0 hardcore_vaughan[371444]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:51:33 compute-0 hardcore_vaughan[371444]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:51:33 compute-0 hardcore_vaughan[371444]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:51:33 compute-0 hardcore_vaughan[371444]:         "osd_id": 1,
Oct 02 12:51:33 compute-0 hardcore_vaughan[371444]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:51:33 compute-0 hardcore_vaughan[371444]:         "type": "bluestore"
Oct 02 12:51:33 compute-0 hardcore_vaughan[371444]:     }
Oct 02 12:51:33 compute-0 hardcore_vaughan[371444]: }
Oct 02 12:51:34 compute-0 systemd[1]: libpod-e00f8e8be86327cb594c804f6b84c65576287217bec7af85f111f9dc04525ebc.scope: Deactivated successfully.
Oct 02 12:51:34 compute-0 podman[371465]: 2025-10-02 12:51:34.048623802 +0000 UTC m=+0.022403991 container died e00f8e8be86327cb594c804f6b84c65576287217bec7af85f111f9dc04525ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_vaughan, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:51:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d8a82056f1586ef75a7593d92f13c7d290606cc53bf8972df477ee6b619bef1-merged.mount: Deactivated successfully.
Oct 02 12:51:34 compute-0 podman[371465]: 2025-10-02 12:51:34.096364915 +0000 UTC m=+0.070145084 container remove e00f8e8be86327cb594c804f6b84c65576287217bec7af85f111f9dc04525ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_vaughan, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:51:34 compute-0 systemd[1]: libpod-conmon-e00f8e8be86327cb594c804f6b84c65576287217bec7af85f111f9dc04525ebc.scope: Deactivated successfully.
Oct 02 12:51:34 compute-0 sudo[371321]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:51:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2766: 305 pgs: 305 active+clean; 675 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 260 KiB/s rd, 3.6 MiB/s wr, 81 op/s
Oct 02 12:51:34 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:51:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:51:34 compute-0 nova_compute[257802]: 2025-10-02 12:51:34.176 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:34 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:51:34 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 8b6db394-3f5b-4e11-90ce-a881e2934773 does not exist
Oct 02 12:51:34 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev fde5470c-836a-422b-9256-aa0fd6615704 does not exist
Oct 02 12:51:34 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 4511bf7c-4cb7-4781-b9ed-1e4278f7ca77 does not exist
Oct 02 12:51:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:34 compute-0 sudo[371481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:51:34 compute-0 sudo[371481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:34 compute-0 sudo[371481]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:34 compute-0 sudo[371506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:51:34 compute-0 sudo[371506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:34 compute-0 sudo[371506]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:34 compute-0 nova_compute[257802]: 2025-10-02 12:51:34.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:34.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:51:34 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1024152979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:34 compute-0 nova_compute[257802]: 2025-10-02 12:51:34.659 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:34 compute-0 nova_compute[257802]: 2025-10-02 12:51:34.665 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:51:34 compute-0 nova_compute[257802]: 2025-10-02 12:51:34.854 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:51:35 compute-0 nova_compute[257802]: 2025-10-02 12:51:35.037 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:51:35 compute-0 nova_compute[257802]: 2025-10-02 12:51:35.038 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:35 compute-0 ceph-mon[73607]: pgmap v2766: 305 pgs: 305 active+clean; 675 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 260 KiB/s rd, 3.6 MiB/s wr, 81 op/s
Oct 02 12:51:35 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:51:35 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:51:35 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1024152979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:35 compute-0 nova_compute[257802]: 2025-10-02 12:51:35.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:35 compute-0 NetworkManager[44987]: <info>  [1759409495.3163] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/362)
Oct 02 12:51:35 compute-0 NetworkManager[44987]: <info>  [1759409495.3174] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/363)
Oct 02 12:51:35 compute-0 nova_compute[257802]: 2025-10-02 12:51:35.383 2 DEBUG oslo_concurrency.lockutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "42da5a56-55e8-4a1a-a524-24555a4bd3ec" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:35 compute-0 nova_compute[257802]: 2025-10-02 12:51:35.383 2 DEBUG oslo_concurrency.lockutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "42da5a56-55e8-4a1a-a524-24555a4bd3ec" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:35 compute-0 nova_compute[257802]: 2025-10-02 12:51:35.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:35 compute-0 ovn_controller[148183]: 2025-10-02T12:51:35Z|00803|binding|INFO|Releasing lport 1fc80788-89b8-413a-b0b0-d36f1a11a2b1 from this chassis (sb_readonly=0)
Oct 02 12:51:35 compute-0 nova_compute[257802]: 2025-10-02 12:51:35.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:35 compute-0 nova_compute[257802]: 2025-10-02 12:51:35.545 2 DEBUG nova.compute.manager [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:51:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:35.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:35 compute-0 nova_compute[257802]: 2025-10-02 12:51:35.857 2 DEBUG oslo_concurrency.lockutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:35 compute-0 nova_compute[257802]: 2025-10-02 12:51:35.858 2 DEBUG oslo_concurrency.lockutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:35 compute-0 nova_compute[257802]: 2025-10-02 12:51:35.864 2 DEBUG nova.virt.hardware [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:51:35 compute-0 nova_compute[257802]: 2025-10-02 12:51:35.865 2 INFO nova.compute.claims [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:51:36 compute-0 nova_compute[257802]: 2025-10-02 12:51:36.100 2 DEBUG oslo_concurrency.processutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:36 compute-0 sudo[371554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:51:36 compute-0 sudo[371554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:36 compute-0 sudo[371554]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2767: 305 pgs: 305 active+clean; 675 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.8 MiB/s wr, 101 op/s
Oct 02 12:51:36 compute-0 sudo[371580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:51:36 compute-0 sudo[371580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:36 compute-0 sudo[371580]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:51:36 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3849983047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:36 compute-0 nova_compute[257802]: 2025-10-02 12:51:36.527 2 DEBUG oslo_concurrency.processutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:36 compute-0 nova_compute[257802]: 2025-10-02 12:51:36.533 2 DEBUG nova.compute.provider_tree [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:51:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:36 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:51:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:36.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:36 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:51:36 compute-0 nova_compute[257802]: 2025-10-02 12:51:36.613 2 DEBUG nova.scheduler.client.report [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:51:36 compute-0 nova_compute[257802]: 2025-10-02 12:51:36.887 2 DEBUG oslo_concurrency.lockutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.029s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:36 compute-0 nova_compute[257802]: 2025-10-02 12:51:36.888 2 DEBUG nova.compute.manager [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:51:37 compute-0 nova_compute[257802]: 2025-10-02 12:51:37.257 2 DEBUG nova.compute.manager [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:51:37 compute-0 nova_compute[257802]: 2025-10-02 12:51:37.257 2 DEBUG nova.network.neutron [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:51:37 compute-0 nova_compute[257802]: 2025-10-02 12:51:37.380 2 INFO nova.virt.libvirt.driver [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:51:37 compute-0 nova_compute[257802]: 2025-10-02 12:51:37.503 2 DEBUG nova.compute.manager [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:51:37 compute-0 ceph-mon[73607]: pgmap v2767: 305 pgs: 305 active+clean; 675 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.8 MiB/s wr, 101 op/s
Oct 02 12:51:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3849983047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:51:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:37.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:51:37 compute-0 nova_compute[257802]: 2025-10-02 12:51:37.724 2 DEBUG nova.policy [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb366465e6154871b8a53c9f500105ce', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:51:37 compute-0 nova_compute[257802]: 2025-10-02 12:51:37.836 2 DEBUG nova.compute.manager [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:51:37 compute-0 nova_compute[257802]: 2025-10-02 12:51:37.839 2 DEBUG nova.virt.libvirt.driver [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:51:37 compute-0 nova_compute[257802]: 2025-10-02 12:51:37.840 2 INFO nova.virt.libvirt.driver [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Creating image(s)
Oct 02 12:51:37 compute-0 nova_compute[257802]: 2025-10-02 12:51:37.885 2 DEBUG nova.storage.rbd_utils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 42da5a56-55e8-4a1a-a524-24555a4bd3ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:51:37 compute-0 nova_compute[257802]: 2025-10-02 12:51:37.918 2 DEBUG nova.storage.rbd_utils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 42da5a56-55e8-4a1a-a524-24555a4bd3ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:51:37 compute-0 nova_compute[257802]: 2025-10-02 12:51:37.949 2 DEBUG nova.storage.rbd_utils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 42da5a56-55e8-4a1a-a524-24555a4bd3ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:51:37 compute-0 nova_compute[257802]: 2025-10-02 12:51:37.954 2 DEBUG oslo_concurrency.processutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:38 compute-0 nova_compute[257802]: 2025-10-02 12:51:38.055 2 DEBUG oslo_concurrency.processutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:38 compute-0 nova_compute[257802]: 2025-10-02 12:51:38.057 2 DEBUG oslo_concurrency.lockutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:38 compute-0 nova_compute[257802]: 2025-10-02 12:51:38.058 2 DEBUG oslo_concurrency.lockutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:38 compute-0 nova_compute[257802]: 2025-10-02 12:51:38.058 2 DEBUG oslo_concurrency.lockutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:38 compute-0 nova_compute[257802]: 2025-10-02 12:51:38.087 2 DEBUG nova.storage.rbd_utils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 42da5a56-55e8-4a1a-a524-24555a4bd3ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:51:38 compute-0 nova_compute[257802]: 2025-10-02 12:51:38.091 2 DEBUG oslo_concurrency.processutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 42da5a56-55e8-4a1a-a524-24555a4bd3ec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2768: 305 pgs: 305 active+clean; 675 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 31 KiB/s wr, 86 op/s
Oct 02 12:51:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:38.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:38 compute-0 nova_compute[257802]: 2025-10-02 12:51:38.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:38 compute-0 ceph-mon[73607]: pgmap v2768: 305 pgs: 305 active+clean; 675 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 31 KiB/s wr, 86 op/s
Oct 02 12:51:39 compute-0 nova_compute[257802]: 2025-10-02 12:51:39.191 2 DEBUG oslo_concurrency.processutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 42da5a56-55e8-4a1a-a524-24555a4bd3ec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:39 compute-0 nova_compute[257802]: 2025-10-02 12:51:39.252 2 DEBUG nova.storage.rbd_utils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] resizing rbd image 42da5a56-55e8-4a1a-a524-24555a4bd3ec_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:51:39 compute-0 nova_compute[257802]: 2025-10-02 12:51:39.370 2 DEBUG nova.objects.instance [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'migration_context' on Instance uuid 42da5a56-55e8-4a1a-a524-24555a4bd3ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:51:39 compute-0 nova_compute[257802]: 2025-10-02 12:51:39.477 2 DEBUG nova.virt.libvirt.driver [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:51:39 compute-0 nova_compute[257802]: 2025-10-02 12:51:39.478 2 DEBUG nova.virt.libvirt.driver [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Ensure instance console log exists: /var/lib/nova/instances/42da5a56-55e8-4a1a-a524-24555a4bd3ec/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:51:39 compute-0 nova_compute[257802]: 2025-10-02 12:51:39.478 2 DEBUG oslo_concurrency.lockutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:39 compute-0 nova_compute[257802]: 2025-10-02 12:51:39.479 2 DEBUG oslo_concurrency.lockutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:39 compute-0 nova_compute[257802]: 2025-10-02 12:51:39.479 2 DEBUG oslo_concurrency.lockutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:39 compute-0 nova_compute[257802]: 2025-10-02 12:51:39.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:39.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2769: 305 pgs: 305 active+clean; 711 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.3 MiB/s wr, 186 op/s
Oct 02 12:51:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:40.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:41 compute-0 ceph-mon[73607]: pgmap v2769: 305 pgs: 305 active+clean; 711 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.3 MiB/s wr, 186 op/s
Oct 02 12:51:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:41.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:42 compute-0 nova_compute[257802]: 2025-10-02 12:51:42.118 2 DEBUG nova.network.neutron [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Successfully created port: edd030ea-1bf1-4735-8720-2e02fbd67149 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:51:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2770: 305 pgs: 305 active+clean; 711 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.3 MiB/s wr, 170 op/s
Oct 02 12:51:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:51:42
Oct 02 12:51:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:51:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:51:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['.rgw.root', 'backups', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'images', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes']
Oct 02 12:51:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:51:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:42.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:51:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:51:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:51:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:51:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:51:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:51:43 compute-0 nova_compute[257802]: 2025-10-02 12:51:43.038 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:51:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:51:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:51:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:51:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:51:43 compute-0 ceph-mon[73607]: pgmap v2770: 305 pgs: 305 active+clean; 711 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.3 MiB/s wr, 170 op/s
Oct 02 12:51:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:51:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:43.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:51:43 compute-0 nova_compute[257802]: 2025-10-02 12:51:43.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:44 compute-0 nova_compute[257802]: 2025-10-02 12:51:44.071 2 DEBUG nova.network.neutron [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Successfully updated port: edd030ea-1bf1-4735-8720-2e02fbd67149 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:51:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2771: 305 pgs: 305 active+clean; 721 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 173 op/s
Oct 02 12:51:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:51:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:51:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:51:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:51:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:51:44 compute-0 nova_compute[257802]: 2025-10-02 12:51:44.195 2 DEBUG oslo_concurrency.lockutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "refresh_cache-42da5a56-55e8-4a1a-a524-24555a4bd3ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:51:44 compute-0 nova_compute[257802]: 2025-10-02 12:51:44.195 2 DEBUG oslo_concurrency.lockutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquired lock "refresh_cache-42da5a56-55e8-4a1a-a524-24555a4bd3ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:51:44 compute-0 nova_compute[257802]: 2025-10-02 12:51:44.196 2 DEBUG nova.network.neutron [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:51:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:44 compute-0 nova_compute[257802]: 2025-10-02 12:51:44.449 2 DEBUG nova.network.neutron [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:51:44 compute-0 nova_compute[257802]: 2025-10-02 12:51:44.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:44.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:45 compute-0 ceph-mon[73607]: pgmap v2771: 305 pgs: 305 active+clean; 721 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 173 op/s
Oct 02 12:51:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:45.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:45 compute-0 nova_compute[257802]: 2025-10-02 12:51:45.954 2 DEBUG nova.compute.manager [req-00bd8612-1af9-4a61-a32e-e4a99ebcdaa2 req-23a2636f-70fb-4c3c-87d6-c3cc92e3a11c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Received event network-changed-edd030ea-1bf1-4735-8720-2e02fbd67149 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:51:45 compute-0 nova_compute[257802]: 2025-10-02 12:51:45.955 2 DEBUG nova.compute.manager [req-00bd8612-1af9-4a61-a32e-e4a99ebcdaa2 req-23a2636f-70fb-4c3c-87d6-c3cc92e3a11c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Refreshing instance network info cache due to event network-changed-edd030ea-1bf1-4735-8720-2e02fbd67149. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:51:45 compute-0 nova_compute[257802]: 2025-10-02 12:51:45.955 2 DEBUG oslo_concurrency.lockutils [req-00bd8612-1af9-4a61-a32e-e4a99ebcdaa2 req-23a2636f-70fb-4c3c-87d6-c3cc92e3a11c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-42da5a56-55e8-4a1a-a524-24555a4bd3ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:51:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2772: 305 pgs: 305 active+clean; 732 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.6 MiB/s wr, 156 op/s
Oct 02 12:51:46 compute-0 nova_compute[257802]: 2025-10-02 12:51:46.491 2 DEBUG nova.network.neutron [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Updating instance_info_cache with network_info: [{"id": "edd030ea-1bf1-4735-8720-2e02fbd67149", "address": "fa:16:3e:57:d0:8a", "network": {"id": "15afb19f-043a-469d-96b6-7de0ff8590f7", "bridge": "br-int", "label": "tempest-network-smoke--253378791", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd030ea-1b", "ovs_interfaceid": "edd030ea-1bf1-4735-8720-2e02fbd67149", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:51:46 compute-0 nova_compute[257802]: 2025-10-02 12:51:46.519 2 DEBUG oslo_concurrency.lockutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Releasing lock "refresh_cache-42da5a56-55e8-4a1a-a524-24555a4bd3ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:51:46 compute-0 nova_compute[257802]: 2025-10-02 12:51:46.520 2 DEBUG nova.compute.manager [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Instance network_info: |[{"id": "edd030ea-1bf1-4735-8720-2e02fbd67149", "address": "fa:16:3e:57:d0:8a", "network": {"id": "15afb19f-043a-469d-96b6-7de0ff8590f7", "bridge": "br-int", "label": "tempest-network-smoke--253378791", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd030ea-1b", "ovs_interfaceid": "edd030ea-1bf1-4735-8720-2e02fbd67149", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:51:46 compute-0 nova_compute[257802]: 2025-10-02 12:51:46.520 2 DEBUG oslo_concurrency.lockutils [req-00bd8612-1af9-4a61-a32e-e4a99ebcdaa2 req-23a2636f-70fb-4c3c-87d6-c3cc92e3a11c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-42da5a56-55e8-4a1a-a524-24555a4bd3ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:51:46 compute-0 nova_compute[257802]: 2025-10-02 12:51:46.520 2 DEBUG nova.network.neutron [req-00bd8612-1af9-4a61-a32e-e4a99ebcdaa2 req-23a2636f-70fb-4c3c-87d6-c3cc92e3a11c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Refreshing network info cache for port edd030ea-1bf1-4735-8720-2e02fbd67149 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:51:46 compute-0 nova_compute[257802]: 2025-10-02 12:51:46.522 2 DEBUG nova.virt.libvirt.driver [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Start _get_guest_xml network_info=[{"id": "edd030ea-1bf1-4735-8720-2e02fbd67149", "address": "fa:16:3e:57:d0:8a", "network": {"id": "15afb19f-043a-469d-96b6-7de0ff8590f7", "bridge": "br-int", "label": "tempest-network-smoke--253378791", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd030ea-1b", "ovs_interfaceid": "edd030ea-1bf1-4735-8720-2e02fbd67149", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:51:46 compute-0 nova_compute[257802]: 2025-10-02 12:51:46.526 2 WARNING nova.virt.libvirt.driver [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:51:46 compute-0 nova_compute[257802]: 2025-10-02 12:51:46.533 2 DEBUG nova.virt.libvirt.host [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:51:46 compute-0 nova_compute[257802]: 2025-10-02 12:51:46.534 2 DEBUG nova.virt.libvirt.host [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:51:46 compute-0 nova_compute[257802]: 2025-10-02 12:51:46.538 2 DEBUG nova.virt.libvirt.host [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:51:46 compute-0 nova_compute[257802]: 2025-10-02 12:51:46.538 2 DEBUG nova.virt.libvirt.host [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:51:46 compute-0 nova_compute[257802]: 2025-10-02 12:51:46.539 2 DEBUG nova.virt.libvirt.driver [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:51:46 compute-0 nova_compute[257802]: 2025-10-02 12:51:46.540 2 DEBUG nova.virt.hardware [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:51:46 compute-0 nova_compute[257802]: 2025-10-02 12:51:46.540 2 DEBUG nova.virt.hardware [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:51:46 compute-0 nova_compute[257802]: 2025-10-02 12:51:46.540 2 DEBUG nova.virt.hardware [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:51:46 compute-0 nova_compute[257802]: 2025-10-02 12:51:46.540 2 DEBUG nova.virt.hardware [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:51:46 compute-0 nova_compute[257802]: 2025-10-02 12:51:46.541 2 DEBUG nova.virt.hardware [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:51:46 compute-0 nova_compute[257802]: 2025-10-02 12:51:46.541 2 DEBUG nova.virt.hardware [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:51:46 compute-0 nova_compute[257802]: 2025-10-02 12:51:46.541 2 DEBUG nova.virt.hardware [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:51:46 compute-0 nova_compute[257802]: 2025-10-02 12:51:46.541 2 DEBUG nova.virt.hardware [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:51:46 compute-0 nova_compute[257802]: 2025-10-02 12:51:46.542 2 DEBUG nova.virt.hardware [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:51:46 compute-0 nova_compute[257802]: 2025-10-02 12:51:46.542 2 DEBUG nova.virt.hardware [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:51:46 compute-0 nova_compute[257802]: 2025-10-02 12:51:46.542 2 DEBUG nova.virt.hardware [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:51:46 compute-0 nova_compute[257802]: 2025-10-02 12:51:46.544 2 DEBUG oslo_concurrency.processutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:46.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:46 compute-0 podman[371821]: 2025-10-02 12:51:46.930231298 +0000 UTC m=+0.057412381 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001)
Oct 02 12:51:46 compute-0 podman[371820]: 2025-10-02 12:51:46.934679727 +0000 UTC m=+0.066634208 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:51:46 compute-0 podman[371819]: 2025-10-02 12:51:46.954642568 +0000 UTC m=+0.086271571 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 02 12:51:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:51:46 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1653966322' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.016 2 DEBUG oslo_concurrency.processutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.043 2 DEBUG nova.storage.rbd_utils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 42da5a56-55e8-4a1a-a524-24555a4bd3ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.047 2 DEBUG oslo_concurrency.processutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:47 compute-0 ceph-mon[73607]: pgmap v2772: 305 pgs: 305 active+clean; 732 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.6 MiB/s wr, 156 op/s
Oct 02 12:51:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:51:47 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4242662243' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.493 2 DEBUG oslo_concurrency.processutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.495 2 DEBUG nova.virt.libvirt.vif [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:51:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-933745214',display_name='tempest-TestNetworkBasicOps-server-933745214',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-933745214',id=182,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF5g2YR7mRJ8wv66Ie3gEOW4Ei/BhT431fnvwb66U2s7bhv8tgyt/+mCbk3G8aSXbvbYDVe7KE5z2DHS0eT8dOztSZNQmtEW2btO6tXoKqIQlS8tpISInSq+eCkqqeyBiA==',key_name='tempest-TestNetworkBasicOps-1419012775',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-m1sxbh0x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:51:37Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=42da5a56-55e8-4a1a-a524-24555a4bd3ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "edd030ea-1bf1-4735-8720-2e02fbd67149", "address": "fa:16:3e:57:d0:8a", "network": {"id": "15afb19f-043a-469d-96b6-7de0ff8590f7", "bridge": "br-int", "label": "tempest-network-smoke--253378791", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd030ea-1b", "ovs_interfaceid": "edd030ea-1bf1-4735-8720-2e02fbd67149", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.495 2 DEBUG nova.network.os_vif_util [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "edd030ea-1bf1-4735-8720-2e02fbd67149", "address": "fa:16:3e:57:d0:8a", "network": {"id": "15afb19f-043a-469d-96b6-7de0ff8590f7", "bridge": "br-int", "label": "tempest-network-smoke--253378791", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd030ea-1b", "ovs_interfaceid": "edd030ea-1bf1-4735-8720-2e02fbd67149", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.496 2 DEBUG nova.network.os_vif_util [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:d0:8a,bridge_name='br-int',has_traffic_filtering=True,id=edd030ea-1bf1-4735-8720-2e02fbd67149,network=Network(15afb19f-043a-469d-96b6-7de0ff8590f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedd030ea-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.497 2 DEBUG nova.objects.instance [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'pci_devices' on Instance uuid 42da5a56-55e8-4a1a-a524-24555a4bd3ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.530 2 DEBUG nova.virt.libvirt.driver [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:51:47 compute-0 nova_compute[257802]:   <uuid>42da5a56-55e8-4a1a-a524-24555a4bd3ec</uuid>
Oct 02 12:51:47 compute-0 nova_compute[257802]:   <name>instance-000000b6</name>
Oct 02 12:51:47 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:51:47 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:51:47 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:51:47 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:       <nova:name>tempest-TestNetworkBasicOps-server-933745214</nova:name>
Oct 02 12:51:47 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:51:46</nova:creationTime>
Oct 02 12:51:47 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:51:47 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:51:47 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:51:47 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:51:47 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:51:47 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:51:47 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:51:47 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:51:47 compute-0 nova_compute[257802]:         <nova:user uuid="fb366465e6154871b8a53c9f500105ce">tempest-TestNetworkBasicOps-1692262680-project-member</nova:user>
Oct 02 12:51:47 compute-0 nova_compute[257802]:         <nova:project uuid="ce2ca82c03554560b55ed747ae63f1fb">tempest-TestNetworkBasicOps-1692262680</nova:project>
Oct 02 12:51:47 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:51:47 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:51:47 compute-0 nova_compute[257802]:         <nova:port uuid="edd030ea-1bf1-4735-8720-2e02fbd67149">
Oct 02 12:51:47 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:51:47 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:51:47 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:51:47 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <system>
Oct 02 12:51:47 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:51:47 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:51:47 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:51:47 compute-0 nova_compute[257802]:       <entry name="serial">42da5a56-55e8-4a1a-a524-24555a4bd3ec</entry>
Oct 02 12:51:47 compute-0 nova_compute[257802]:       <entry name="uuid">42da5a56-55e8-4a1a-a524-24555a4bd3ec</entry>
Oct 02 12:51:47 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     </system>
Oct 02 12:51:47 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:51:47 compute-0 nova_compute[257802]:   <os>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:   </os>
Oct 02 12:51:47 compute-0 nova_compute[257802]:   <features>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:   </features>
Oct 02 12:51:47 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:51:47 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:51:47 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:51:47 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/42da5a56-55e8-4a1a-a524-24555a4bd3ec_disk">
Oct 02 12:51:47 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:       </source>
Oct 02 12:51:47 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:51:47 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:51:47 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:51:47 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/42da5a56-55e8-4a1a-a524-24555a4bd3ec_disk.config">
Oct 02 12:51:47 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:       </source>
Oct 02 12:51:47 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:51:47 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:51:47 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:51:47 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:57:d0:8a"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:       <target dev="tapedd030ea-1b"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:51:47 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/42da5a56-55e8-4a1a-a524-24555a4bd3ec/console.log" append="off"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <video>
Oct 02 12:51:47 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     </video>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:51:47 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:51:47 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:51:47 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:51:47 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:51:47 compute-0 nova_compute[257802]: </domain>
Oct 02 12:51:47 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.531 2 DEBUG nova.compute.manager [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Preparing to wait for external event network-vif-plugged-edd030ea-1bf1-4735-8720-2e02fbd67149 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.532 2 DEBUG oslo_concurrency.lockutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "42da5a56-55e8-4a1a-a524-24555a4bd3ec-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.532 2 DEBUG oslo_concurrency.lockutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "42da5a56-55e8-4a1a-a524-24555a4bd3ec-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.532 2 DEBUG oslo_concurrency.lockutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "42da5a56-55e8-4a1a-a524-24555a4bd3ec-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.533 2 DEBUG nova.virt.libvirt.vif [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:51:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-933745214',display_name='tempest-TestNetworkBasicOps-server-933745214',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-933745214',id=182,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF5g2YR7mRJ8wv66Ie3gEOW4Ei/BhT431fnvwb66U2s7bhv8tgyt/+mCbk3G8aSXbvbYDVe7KE5z2DHS0eT8dOztSZNQmtEW2btO6tXoKqIQlS8tpISInSq+eCkqqeyBiA==',key_name='tempest-TestNetworkBasicOps-1419012775',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-m1sxbh0x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:51:37Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=42da5a56-55e8-4a1a-a524-24555a4bd3ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "edd030ea-1bf1-4735-8720-2e02fbd67149", "address": "fa:16:3e:57:d0:8a", "network": {"id": "15afb19f-043a-469d-96b6-7de0ff8590f7", "bridge": "br-int", "label": "tempest-network-smoke--253378791", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd030ea-1b", "ovs_interfaceid": "edd030ea-1bf1-4735-8720-2e02fbd67149", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.533 2 DEBUG nova.network.os_vif_util [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "edd030ea-1bf1-4735-8720-2e02fbd67149", "address": "fa:16:3e:57:d0:8a", "network": {"id": "15afb19f-043a-469d-96b6-7de0ff8590f7", "bridge": "br-int", "label": "tempest-network-smoke--253378791", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd030ea-1b", "ovs_interfaceid": "edd030ea-1bf1-4735-8720-2e02fbd67149", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.534 2 DEBUG nova.network.os_vif_util [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:d0:8a,bridge_name='br-int',has_traffic_filtering=True,id=edd030ea-1bf1-4735-8720-2e02fbd67149,network=Network(15afb19f-043a-469d-96b6-7de0ff8590f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedd030ea-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.534 2 DEBUG os_vif [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:d0:8a,bridge_name='br-int',has_traffic_filtering=True,id=edd030ea-1bf1-4735-8720-2e02fbd67149,network=Network(15afb19f-043a-469d-96b6-7de0ff8590f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedd030ea-1b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.535 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.536 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.539 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapedd030ea-1b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.540 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapedd030ea-1b, col_values=(('external_ids', {'iface-id': 'edd030ea-1bf1-4735-8720-2e02fbd67149', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:d0:8a', 'vm-uuid': '42da5a56-55e8-4a1a-a524-24555a4bd3ec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:47 compute-0 NetworkManager[44987]: <info>  [1759409507.5421] manager: (tapedd030ea-1b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/364)
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.551 2 INFO os_vif [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:d0:8a,bridge_name='br-int',has_traffic_filtering=True,id=edd030ea-1bf1-4735-8720-2e02fbd67149,network=Network(15afb19f-043a-469d-96b6-7de0ff8590f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedd030ea-1b')
Oct 02 12:51:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:47.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.905 2 DEBUG nova.virt.libvirt.driver [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.905 2 DEBUG nova.virt.libvirt.driver [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.905 2 DEBUG nova.virt.libvirt.driver [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No VIF found with MAC fa:16:3e:57:d0:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.906 2 INFO nova.virt.libvirt.driver [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Using config drive
Oct 02 12:51:47 compute-0 nova_compute[257802]: 2025-10-02 12:51:47.930 2 DEBUG nova.storage.rbd_utils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 42da5a56-55e8-4a1a-a524-24555a4bd3ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:51:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1653966322' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:51:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4242662243' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:51:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2773: 305 pgs: 305 active+clean; 732 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.6 MiB/s wr, 110 op/s
Oct 02 12:51:48 compute-0 nova_compute[257802]: 2025-10-02 12:51:48.387 2 DEBUG nova.network.neutron [req-00bd8612-1af9-4a61-a32e-e4a99ebcdaa2 req-23a2636f-70fb-4c3c-87d6-c3cc92e3a11c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Updated VIF entry in instance network info cache for port edd030ea-1bf1-4735-8720-2e02fbd67149. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:51:48 compute-0 nova_compute[257802]: 2025-10-02 12:51:48.389 2 DEBUG nova.network.neutron [req-00bd8612-1af9-4a61-a32e-e4a99ebcdaa2 req-23a2636f-70fb-4c3c-87d6-c3cc92e3a11c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Updating instance_info_cache with network_info: [{"id": "edd030ea-1bf1-4735-8720-2e02fbd67149", "address": "fa:16:3e:57:d0:8a", "network": {"id": "15afb19f-043a-469d-96b6-7de0ff8590f7", "bridge": "br-int", "label": "tempest-network-smoke--253378791", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd030ea-1b", "ovs_interfaceid": "edd030ea-1bf1-4735-8720-2e02fbd67149", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:51:48 compute-0 nova_compute[257802]: 2025-10-02 12:51:48.525 2 DEBUG oslo_concurrency.lockutils [req-00bd8612-1af9-4a61-a32e-e4a99ebcdaa2 req-23a2636f-70fb-4c3c-87d6-c3cc92e3a11c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-42da5a56-55e8-4a1a-a524-24555a4bd3ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:51:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:48.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:48 compute-0 nova_compute[257802]: 2025-10-02 12:51:48.666 2 INFO nova.virt.libvirt.driver [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Creating config drive at /var/lib/nova/instances/42da5a56-55e8-4a1a-a524-24555a4bd3ec/disk.config
Oct 02 12:51:48 compute-0 nova_compute[257802]: 2025-10-02 12:51:48.671 2 DEBUG oslo_concurrency.processutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/42da5a56-55e8-4a1a-a524-24555a4bd3ec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmq5wjuai execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:48 compute-0 nova_compute[257802]: 2025-10-02 12:51:48.804 2 DEBUG oslo_concurrency.processutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/42da5a56-55e8-4a1a-a524-24555a4bd3ec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmq5wjuai" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:48 compute-0 nova_compute[257802]: 2025-10-02 12:51:48.834 2 DEBUG nova.storage.rbd_utils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 42da5a56-55e8-4a1a-a524-24555a4bd3ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:51:48 compute-0 nova_compute[257802]: 2025-10-02 12:51:48.838 2 DEBUG oslo_concurrency.processutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/42da5a56-55e8-4a1a-a524-24555a4bd3ec/disk.config 42da5a56-55e8-4a1a-a524-24555a4bd3ec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:49 compute-0 nova_compute[257802]: 2025-10-02 12:51:49.116 2 DEBUG oslo_concurrency.processutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/42da5a56-55e8-4a1a-a524-24555a4bd3ec/disk.config 42da5a56-55e8-4a1a-a524-24555a4bd3ec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:49 compute-0 nova_compute[257802]: 2025-10-02 12:51:49.117 2 INFO nova.virt.libvirt.driver [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Deleting local config drive /var/lib/nova/instances/42da5a56-55e8-4a1a-a524-24555a4bd3ec/disk.config because it was imported into RBD.
Oct 02 12:51:49 compute-0 ceph-mon[73607]: pgmap v2773: 305 pgs: 305 active+clean; 732 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.6 MiB/s wr, 110 op/s
Oct 02 12:51:49 compute-0 kernel: tapedd030ea-1b: entered promiscuous mode
Oct 02 12:51:49 compute-0 NetworkManager[44987]: <info>  [1759409509.1699] manager: (tapedd030ea-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/365)
Oct 02 12:51:49 compute-0 nova_compute[257802]: 2025-10-02 12:51:49.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:49 compute-0 ovn_controller[148183]: 2025-10-02T12:51:49Z|00804|binding|INFO|Claiming lport edd030ea-1bf1-4735-8720-2e02fbd67149 for this chassis.
Oct 02 12:51:49 compute-0 ovn_controller[148183]: 2025-10-02T12:51:49Z|00805|binding|INFO|edd030ea-1bf1-4735-8720-2e02fbd67149: Claiming fa:16:3e:57:d0:8a 10.100.0.12
Oct 02 12:51:49 compute-0 ovn_controller[148183]: 2025-10-02T12:51:49Z|00806|binding|INFO|Setting lport edd030ea-1bf1-4735-8720-2e02fbd67149 ovn-installed in OVS
Oct 02 12:51:49 compute-0 nova_compute[257802]: 2025-10-02 12:51:49.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:49 compute-0 nova_compute[257802]: 2025-10-02 12:51:49.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:49 compute-0 systemd-machined[211836]: New machine qemu-89-instance-000000b6.
Oct 02 12:51:49 compute-0 systemd-udevd[371988]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:51:49 compute-0 NetworkManager[44987]: <info>  [1759409509.2147] device (tapedd030ea-1b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:51:49 compute-0 NetworkManager[44987]: <info>  [1759409509.2160] device (tapedd030ea-1b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:51:49 compute-0 systemd[1]: Started Virtual Machine qemu-89-instance-000000b6.
Oct 02 12:51:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:49 compute-0 ovn_controller[148183]: 2025-10-02T12:51:49Z|00807|binding|INFO|Setting lport edd030ea-1bf1-4735-8720-2e02fbd67149 up in Southbound
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:49.244 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:d0:8a 10.100.0.12'], port_security=['fa:16:3e:57:d0:8a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '42da5a56-55e8-4a1a-a524-24555a4bd3ec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15afb19f-043a-469d-96b6-7de0ff8590f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a82f1e35-7dfe-4339-916d-665ac590e0ea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd01f191-612f-4595-b9f0-c8bb017b8743, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=edd030ea-1bf1-4735-8720-2e02fbd67149) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:49.245 158261 INFO neutron.agent.ovn.metadata.agent [-] Port edd030ea-1bf1-4735-8720-2e02fbd67149 in datapath 15afb19f-043a-469d-96b6-7de0ff8590f7 bound to our chassis
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:49.247 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 15afb19f-043a-469d-96b6-7de0ff8590f7
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:49.260 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4c0d5d98-1702-4ada-b8a0-5976317237f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:49.261 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap15afb19f-01 in ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:49.263 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap15afb19f-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:49.263 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cbc90cb5-a4a3-4792-98d3-2e1ba0e68199]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:49.264 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8a1b6534-be26-45d0-ade7-b3e0f15e21bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:49.276 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[09cb0568-a357-4b94-b792-bd129e6d5ed2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:49.301 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[fc54633b-cc65-41f3-9744-404ab19e203e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:49.332 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[44cd1179-9f05-43f0-88bc-4f6e13031b0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:49.338 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e384d068-4080-443b-9f71-8f58cbc70c3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:49 compute-0 NetworkManager[44987]: <info>  [1759409509.3399] manager: (tap15afb19f-00): new Veth device (/org/freedesktop/NetworkManager/Devices/366)
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:49.372 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[03e5310d-a736-4eec-917c-50aa2b3c2334]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:49.375 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[ca956f12-ab96-45fc-8177-ea8dfeea7336]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:49 compute-0 NetworkManager[44987]: <info>  [1759409509.3966] device (tap15afb19f-00): carrier: link connected
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:49.401 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[971ee004-c584-4583-abf1-749f046a2881]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:49.417 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[591d971a-a9e8-4bfd-851d-7829bb6cc5dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap15afb19f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:fc:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 246], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 757702, 'reachable_time': 24495, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372021, 'error': None, 'target': 'ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:49.433 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[bf9efb5e-6ee3-4153-a3d9-bc72eb00152e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1b:fcd9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 757702, 'tstamp': 757702}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372022, 'error': None, 'target': 'ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:49.449 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[07acbd84-9965-4077-a527-001affa867f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap15afb19f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:fc:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 246], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 757702, 'reachable_time': 24495, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 372023, 'error': None, 'target': 'ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:49.481 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b241f7e7-2c0a-441b-94a6-510c58fd2426]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:49 compute-0 nova_compute[257802]: 2025-10-02 12:51:49.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:49.541 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[009424bd-68f1-4abe-bf6d-699dbbb1d0b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:49.542 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15afb19f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:49.542 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:49.543 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15afb19f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:49 compute-0 kernel: tap15afb19f-00: entered promiscuous mode
Oct 02 12:51:49 compute-0 NetworkManager[44987]: <info>  [1759409509.5466] manager: (tap15afb19f-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/367)
Oct 02 12:51:49 compute-0 nova_compute[257802]: 2025-10-02 12:51:49.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:49.548 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap15afb19f-00, col_values=(('external_ids', {'iface-id': '6947a742-74da-45a4-bca1-52180ae211d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:49 compute-0 ovn_controller[148183]: 2025-10-02T12:51:49Z|00808|binding|INFO|Releasing lport 6947a742-74da-45a4-bca1-52180ae211d0 from this chassis (sb_readonly=0)
Oct 02 12:51:49 compute-0 nova_compute[257802]: 2025-10-02 12:51:49.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:49.564 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/15afb19f-043a-469d-96b6-7de0ff8590f7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/15afb19f-043a-469d-96b6-7de0ff8590f7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:49.565 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e80f53d6-b747-41f7-8383-f9a344617d11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:49.565 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-15afb19f-043a-469d-96b6-7de0ff8590f7
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/15afb19f-043a-469d-96b6-7de0ff8590f7.pid.haproxy
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 15afb19f-043a-469d-96b6-7de0ff8590f7
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:51:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:51:49.566 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7', 'env', 'PROCESS_TAG=haproxy-15afb19f-043a-469d-96b6-7de0ff8590f7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/15afb19f-043a-469d-96b6-7de0ff8590f7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:51:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:49.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:49 compute-0 podman[372096]: 2025-10-02 12:51:49.942049114 +0000 UTC m=+0.057847183 container create 53f821af10379f38cbd1815740e3ad2aa0899f602fee46144cf8b111d4ea45b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:51:49 compute-0 systemd[1]: Started libpod-conmon-53f821af10379f38cbd1815740e3ad2aa0899f602fee46144cf8b111d4ea45b2.scope.
Oct 02 12:51:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:51:50 compute-0 podman[372096]: 2025-10-02 12:51:49.90814086 +0000 UTC m=+0.023938959 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:51:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8429a495c50209d857d65efc9049597a3d834257b339043b6ac427634bcfd8ae/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:51:50 compute-0 podman[372096]: 2025-10-02 12:51:50.023490434 +0000 UTC m=+0.139288503 container init 53f821af10379f38cbd1815740e3ad2aa0899f602fee46144cf8b111d4ea45b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:51:50 compute-0 podman[372096]: 2025-10-02 12:51:50.029390469 +0000 UTC m=+0.145188538 container start 53f821af10379f38cbd1815740e3ad2aa0899f602fee46144cf8b111d4ea45b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 12:51:50 compute-0 neutron-haproxy-ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7[372111]: [NOTICE]   (372115) : New worker (372117) forked
Oct 02 12:51:50 compute-0 neutron-haproxy-ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7[372111]: [NOTICE]   (372115) : Loading success.
Oct 02 12:51:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2774: 305 pgs: 305 active+clean; 778 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.0 MiB/s wr, 202 op/s
Oct 02 12:51:50 compute-0 nova_compute[257802]: 2025-10-02 12:51:50.252 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409510.2522483, 42da5a56-55e8-4a1a-a524-24555a4bd3ec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:51:50 compute-0 nova_compute[257802]: 2025-10-02 12:51:50.253 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] VM Started (Lifecycle Event)
Oct 02 12:51:50 compute-0 nova_compute[257802]: 2025-10-02 12:51:50.396 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:51:50 compute-0 nova_compute[257802]: 2025-10-02 12:51:50.400 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409510.2557862, 42da5a56-55e8-4a1a-a524-24555a4bd3ec => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:51:50 compute-0 nova_compute[257802]: 2025-10-02 12:51:50.400 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] VM Paused (Lifecycle Event)
Oct 02 12:51:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:50.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:50 compute-0 nova_compute[257802]: 2025-10-02 12:51:50.711 2 DEBUG nova.compute.manager [req-6ca92e21-4511-410f-8f79-a90a9abba675 req-bc91a294-e664-428b-8a8e-ea3a1c6b2c89 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Received event network-vif-plugged-edd030ea-1bf1-4735-8720-2e02fbd67149 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:51:50 compute-0 nova_compute[257802]: 2025-10-02 12:51:50.711 2 DEBUG oslo_concurrency.lockutils [req-6ca92e21-4511-410f-8f79-a90a9abba675 req-bc91a294-e664-428b-8a8e-ea3a1c6b2c89 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "42da5a56-55e8-4a1a-a524-24555a4bd3ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:50 compute-0 nova_compute[257802]: 2025-10-02 12:51:50.711 2 DEBUG oslo_concurrency.lockutils [req-6ca92e21-4511-410f-8f79-a90a9abba675 req-bc91a294-e664-428b-8a8e-ea3a1c6b2c89 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "42da5a56-55e8-4a1a-a524-24555a4bd3ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:50 compute-0 nova_compute[257802]: 2025-10-02 12:51:50.711 2 DEBUG oslo_concurrency.lockutils [req-6ca92e21-4511-410f-8f79-a90a9abba675 req-bc91a294-e664-428b-8a8e-ea3a1c6b2c89 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "42da5a56-55e8-4a1a-a524-24555a4bd3ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:50 compute-0 nova_compute[257802]: 2025-10-02 12:51:50.712 2 DEBUG nova.compute.manager [req-6ca92e21-4511-410f-8f79-a90a9abba675 req-bc91a294-e664-428b-8a8e-ea3a1c6b2c89 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Processing event network-vif-plugged-edd030ea-1bf1-4735-8720-2e02fbd67149 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:51:50 compute-0 nova_compute[257802]: 2025-10-02 12:51:50.712 2 DEBUG nova.compute.manager [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:51:50 compute-0 nova_compute[257802]: 2025-10-02 12:51:50.716 2 DEBUG nova.virt.libvirt.driver [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:51:50 compute-0 nova_compute[257802]: 2025-10-02 12:51:50.718 2 INFO nova.virt.libvirt.driver [-] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Instance spawned successfully.
Oct 02 12:51:50 compute-0 nova_compute[257802]: 2025-10-02 12:51:50.718 2 DEBUG nova.virt.libvirt.driver [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:51:50 compute-0 nova_compute[257802]: 2025-10-02 12:51:50.853 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:51:50 compute-0 nova_compute[257802]: 2025-10-02 12:51:50.857 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409510.7155108, 42da5a56-55e8-4a1a-a524-24555a4bd3ec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:51:50 compute-0 nova_compute[257802]: 2025-10-02 12:51:50.857 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] VM Resumed (Lifecycle Event)
Oct 02 12:51:51 compute-0 nova_compute[257802]: 2025-10-02 12:51:51.000 2 DEBUG nova.virt.libvirt.driver [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:51:51 compute-0 nova_compute[257802]: 2025-10-02 12:51:51.000 2 DEBUG nova.virt.libvirt.driver [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:51:51 compute-0 nova_compute[257802]: 2025-10-02 12:51:51.001 2 DEBUG nova.virt.libvirt.driver [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:51:51 compute-0 nova_compute[257802]: 2025-10-02 12:51:51.001 2 DEBUG nova.virt.libvirt.driver [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:51:51 compute-0 nova_compute[257802]: 2025-10-02 12:51:51.001 2 DEBUG nova.virt.libvirt.driver [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:51:51 compute-0 nova_compute[257802]: 2025-10-02 12:51:51.002 2 DEBUG nova.virt.libvirt.driver [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:51:51 compute-0 nova_compute[257802]: 2025-10-02 12:51:51.083 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:51:51 compute-0 nova_compute[257802]: 2025-10-02 12:51:51.087 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:51:51 compute-0 nova_compute[257802]: 2025-10-02 12:51:51.261 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:51:51 compute-0 ceph-mon[73607]: pgmap v2774: 305 pgs: 305 active+clean; 778 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.0 MiB/s wr, 202 op/s
Oct 02 12:51:51 compute-0 nova_compute[257802]: 2025-10-02 12:51:51.402 2 INFO nova.compute.manager [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Took 13.56 seconds to spawn the instance on the hypervisor.
Oct 02 12:51:51 compute-0 nova_compute[257802]: 2025-10-02 12:51:51.402 2 DEBUG nova.compute.manager [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:51:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:51.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:51 compute-0 nova_compute[257802]: 2025-10-02 12:51:51.681 2 INFO nova.compute.manager [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Took 15.85 seconds to build instance.
Oct 02 12:51:51 compute-0 nova_compute[257802]: 2025-10-02 12:51:51.782 2 DEBUG oslo_concurrency.lockutils [None req-6a5dcde1-83bb-4a51-855f-5935eea46852 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "42da5a56-55e8-4a1a-a524-24555a4bd3ec" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.399s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:51 compute-0 podman[372127]: 2025-10-02 12:51:51.938699011 +0000 UTC m=+0.079591896 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:51:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2775: 305 pgs: 305 active+clean; 778 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 448 KiB/s rd, 4.7 MiB/s wr, 102 op/s
Oct 02 12:51:52 compute-0 nova_compute[257802]: 2025-10-02 12:51:52.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:52.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:53 compute-0 nova_compute[257802]: 2025-10-02 12:51:53.063 2 DEBUG nova.compute.manager [req-1cb4e46e-7144-4867-a1d8-9161a44f90b7 req-2177c123-b5dd-40a7-b134-5380e0b139ae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Received event network-vif-plugged-edd030ea-1bf1-4735-8720-2e02fbd67149 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:51:53 compute-0 nova_compute[257802]: 2025-10-02 12:51:53.064 2 DEBUG oslo_concurrency.lockutils [req-1cb4e46e-7144-4867-a1d8-9161a44f90b7 req-2177c123-b5dd-40a7-b134-5380e0b139ae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "42da5a56-55e8-4a1a-a524-24555a4bd3ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:53 compute-0 nova_compute[257802]: 2025-10-02 12:51:53.064 2 DEBUG oslo_concurrency.lockutils [req-1cb4e46e-7144-4867-a1d8-9161a44f90b7 req-2177c123-b5dd-40a7-b134-5380e0b139ae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "42da5a56-55e8-4a1a-a524-24555a4bd3ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:53 compute-0 nova_compute[257802]: 2025-10-02 12:51:53.064 2 DEBUG oslo_concurrency.lockutils [req-1cb4e46e-7144-4867-a1d8-9161a44f90b7 req-2177c123-b5dd-40a7-b134-5380e0b139ae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "42da5a56-55e8-4a1a-a524-24555a4bd3ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:53 compute-0 nova_compute[257802]: 2025-10-02 12:51:53.064 2 DEBUG nova.compute.manager [req-1cb4e46e-7144-4867-a1d8-9161a44f90b7 req-2177c123-b5dd-40a7-b134-5380e0b139ae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] No waiting events found dispatching network-vif-plugged-edd030ea-1bf1-4735-8720-2e02fbd67149 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:51:53 compute-0 nova_compute[257802]: 2025-10-02 12:51:53.065 2 WARNING nova.compute.manager [req-1cb4e46e-7144-4867-a1d8-9161a44f90b7 req-2177c123-b5dd-40a7-b134-5380e0b139ae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Received unexpected event network-vif-plugged-edd030ea-1bf1-4735-8720-2e02fbd67149 for instance with vm_state active and task_state None.
Oct 02 12:51:53 compute-0 ceph-mon[73607]: pgmap v2775: 305 pgs: 305 active+clean; 778 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 448 KiB/s rd, 4.7 MiB/s wr, 102 op/s
Oct 02 12:51:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:53.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2776: 305 pgs: 305 active+clean; 785 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.8 MiB/s wr, 156 op/s
Oct 02 12:51:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:54 compute-0 nova_compute[257802]: 2025-10-02 12:51:54.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:54.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:51:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:51:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:51:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:51:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014003938965661641 of space, bias 1.0, pg target 4.201181689698492 quantized to 32 (current 32)
Oct 02 12:51:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:51:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6399401521496371 quantized to 32 (current 32)
Oct 02 12:51:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:51:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:51:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:51:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00486990798901917 of space, bias 1.0, pg target 1.4414927647496742 quantized to 32 (current 32)
Oct 02 12:51:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:51:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Oct 02 12:51:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:51:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:51:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:51:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.0002680866717848502 quantized to 32 (current 32)
Oct 02 12:51:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:51:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Oct 02 12:51:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:51:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:51:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:51:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Oct 02 12:51:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:51:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2661064486' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:51:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:51:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2661064486' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:51:55 compute-0 ceph-mon[73607]: pgmap v2776: 305 pgs: 305 active+clean; 785 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.8 MiB/s wr, 156 op/s
Oct 02 12:51:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2661064486' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:51:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2661064486' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:51:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:55.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2777: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 199 op/s
Oct 02 12:51:56 compute-0 sudo[372156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:51:56 compute-0 sudo[372156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:56 compute-0 sudo[372156]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:56 compute-0 sudo[372181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:51:56 compute-0 sudo[372181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:56 compute-0 sudo[372181]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:56.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:57 compute-0 nova_compute[257802]: 2025-10-02 12:51:57.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:57 compute-0 ceph-mon[73607]: pgmap v2777: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 199 op/s
Oct 02 12:51:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:57.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2778: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.4 MiB/s wr, 192 op/s
Oct 02 12:51:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:58.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:59 compute-0 nova_compute[257802]: 2025-10-02 12:51:59.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:51:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:59.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:59 compute-0 ceph-mon[73607]: pgmap v2778: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.4 MiB/s wr, 192 op/s
Oct 02 12:52:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2779: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.5 MiB/s wr, 192 op/s
Oct 02 12:52:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:00.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:00 compute-0 ceph-mon[73607]: pgmap v2779: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.5 MiB/s wr, 192 op/s
Oct 02 12:52:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:01.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2780: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 131 KiB/s wr, 100 op/s
Oct 02 12:52:02 compute-0 nova_compute[257802]: 2025-10-02 12:52:02.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:02.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:03.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:03 compute-0 ceph-mon[73607]: pgmap v2780: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 131 KiB/s wr, 100 op/s
Oct 02 12:52:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2781: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 131 KiB/s wr, 100 op/s
Oct 02 12:52:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:04 compute-0 nova_compute[257802]: 2025-10-02 12:52:04.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:04.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:05 compute-0 ceph-mon[73607]: pgmap v2781: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 131 KiB/s wr, 100 op/s
Oct 02 12:52:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1114481088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:05.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2782: 305 pgs: 305 active+clean; 793 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 902 KiB/s wr, 68 op/s
Oct 02 12:52:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:06.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:07 compute-0 ceph-mon[73607]: pgmap v2782: 305 pgs: 305 active+clean; 793 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 902 KiB/s wr, 68 op/s
Oct 02 12:52:07 compute-0 nova_compute[257802]: 2025-10-02 12:52:07.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:07.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2783: 305 pgs: 305 active+clean; 793 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 897 KiB/s wr, 22 op/s
Oct 02 12:52:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:08.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:08 compute-0 ovn_controller[148183]: 2025-10-02T12:52:08Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:57:d0:8a 10.100.0.12
Oct 02 12:52:08 compute-0 ovn_controller[148183]: 2025-10-02T12:52:08Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:57:d0:8a 10.100.0.12
Oct 02 12:52:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:09 compute-0 nova_compute[257802]: 2025-10-02 12:52:09.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:09 compute-0 ceph-mon[73607]: pgmap v2783: 305 pgs: 305 active+clean; 793 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 897 KiB/s wr, 22 op/s
Oct 02 12:52:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2235344967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:09.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2784: 305 pgs: 305 active+clean; 733 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 02 12:52:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:10.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:11 compute-0 ceph-mon[73607]: pgmap v2784: 305 pgs: 305 active+clean; 733 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 02 12:52:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:11.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2785: 305 pgs: 305 active+clean; 733 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 12:52:12 compute-0 nova_compute[257802]: 2025-10-02 12:52:12.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:12.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:52:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:52:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:52:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:52:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:52:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:52:13 compute-0 nova_compute[257802]: 2025-10-02 12:52:13.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:52:13 compute-0 nova_compute[257802]: 2025-10-02 12:52:13.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:52:13 compute-0 nova_compute[257802]: 2025-10-02 12:52:13.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:52:13.147 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:52:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:52:13.149 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:52:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:13.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:13 compute-0 ceph-mon[73607]: pgmap v2785: 305 pgs: 305 active+clean; 733 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 12:52:14 compute-0 nova_compute[257802]: 2025-10-02 12:52:14.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:52:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2786: 305 pgs: 305 active+clean; 733 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 12:52:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:14 compute-0 nova_compute[257802]: 2025-10-02 12:52:14.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:14.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:14 compute-0 ceph-mon[73607]: pgmap v2786: 305 pgs: 305 active+clean; 733 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 12:52:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:15.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2787: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 460 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Oct 02 12:52:16 compute-0 sudo[372217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:16 compute-0 sudo[372217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:16 compute-0 sudo[372217]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:16 compute-0 sudo[372242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:16 compute-0 sudo[372242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:16 compute-0 sudo[372242]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:16.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e378 do_prune osdmap full prune enabled
Oct 02 12:52:17 compute-0 ceph-mon[73607]: pgmap v2787: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 460 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Oct 02 12:52:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e379 e379: 3 total, 3 up, 3 in
Oct 02 12:52:17 compute-0 nova_compute[257802]: 2025-10-02 12:52:17.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:17 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e379: 3 total, 3 up, 3 in
Oct 02 12:52:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:52:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:17.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:52:17 compute-0 podman[372269]: 2025-10-02 12:52:17.92779177 +0000 UTC m=+0.063590063 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:52:17 compute-0 podman[372270]: 2025-10-02 12:52:17.931659045 +0000 UTC m=+0.062984558 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid)
Oct 02 12:52:17 compute-0 podman[372268]: 2025-10-02 12:52:17.947294159 +0000 UTC m=+0.085582113 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:52:18 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:52:18.150 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:52:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2789: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 512 KiB/s rd, 1.6 MiB/s wr, 84 op/s
Oct 02 12:52:18 compute-0 nova_compute[257802]: 2025-10-02 12:52:18.180 2 DEBUG nova.compute.manager [req-535c9a08-1471-4982-be4c-69690f826802 req-e95defe5-e026-43ca-b529-1facd781c251 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Received event network-changed-edd030ea-1bf1-4735-8720-2e02fbd67149 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:52:18 compute-0 nova_compute[257802]: 2025-10-02 12:52:18.181 2 DEBUG nova.compute.manager [req-535c9a08-1471-4982-be4c-69690f826802 req-e95defe5-e026-43ca-b529-1facd781c251 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Refreshing instance network info cache due to event network-changed-edd030ea-1bf1-4735-8720-2e02fbd67149. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:52:18 compute-0 nova_compute[257802]: 2025-10-02 12:52:18.181 2 DEBUG oslo_concurrency.lockutils [req-535c9a08-1471-4982-be4c-69690f826802 req-e95defe5-e026-43ca-b529-1facd781c251 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-42da5a56-55e8-4a1a-a524-24555a4bd3ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:52:18 compute-0 nova_compute[257802]: 2025-10-02 12:52:18.181 2 DEBUG oslo_concurrency.lockutils [req-535c9a08-1471-4982-be4c-69690f826802 req-e95defe5-e026-43ca-b529-1facd781c251 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-42da5a56-55e8-4a1a-a524-24555a4bd3ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:52:18 compute-0 nova_compute[257802]: 2025-10-02 12:52:18.181 2 DEBUG nova.network.neutron [req-535c9a08-1471-4982-be4c-69690f826802 req-e95defe5-e026-43ca-b529-1facd781c251 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Refreshing network info cache for port edd030ea-1bf1-4735-8720-2e02fbd67149 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:52:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:52:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:18.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:52:19 compute-0 nova_compute[257802]: 2025-10-02 12:52:19.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:52:19 compute-0 ceph-mon[73607]: osdmap e379: 3 total, 3 up, 3 in
Oct 02 12:52:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:19 compute-0 nova_compute[257802]: 2025-10-02 12:52:19.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:19.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:20 compute-0 nova_compute[257802]: 2025-10-02 12:52:20.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:52:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2790: 305 pgs: 305 active+clean; 808 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.3 MiB/s wr, 101 op/s
Oct 02 12:52:20 compute-0 ceph-mon[73607]: pgmap v2789: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 512 KiB/s rd, 1.6 MiB/s wr, 84 op/s
Oct 02 12:52:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:52:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:20.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:52:21 compute-0 nova_compute[257802]: 2025-10-02 12:52:21.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:52:21 compute-0 nova_compute[257802]: 2025-10-02 12:52:21.566 2 INFO nova.compute.manager [None req-be6a72fd-21ed-47f7-b25f-72478bc0d31f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Get console output
Oct 02 12:52:21 compute-0 nova_compute[257802]: 2025-10-02 12:52:21.570 20794 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 12:52:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:21.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:21 compute-0 ceph-mon[73607]: pgmap v2790: 305 pgs: 305 active+clean; 808 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.3 MiB/s wr, 101 op/s
Oct 02 12:52:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2791: 305 pgs: 305 active+clean; 808 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.3 MiB/s wr, 101 op/s
Oct 02 12:52:22 compute-0 nova_compute[257802]: 2025-10-02 12:52:22.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:22.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e379 do_prune osdmap full prune enabled
Oct 02 12:52:22 compute-0 podman[372325]: 2025-10-02 12:52:22.956986262 +0000 UTC m=+0.098063710 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 12:52:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e380 e380: 3 total, 3 up, 3 in
Oct 02 12:52:23 compute-0 ceph-mon[73607]: pgmap v2791: 305 pgs: 305 active+clean; 808 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.3 MiB/s wr, 101 op/s
Oct 02 12:52:23 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e380: 3 total, 3 up, 3 in
Oct 02 12:52:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:52:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:23.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:52:24 compute-0 nova_compute[257802]: 2025-10-02 12:52:24.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:52:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2793: 305 pgs: 305 active+clean; 808 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.3 MiB/s wr, 88 op/s
Oct 02 12:52:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:24 compute-0 ceph-mon[73607]: osdmap e380: 3 total, 3 up, 3 in
Oct 02 12:52:24 compute-0 nova_compute[257802]: 2025-10-02 12:52:24.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:24.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:25 compute-0 ceph-mon[73607]: pgmap v2793: 305 pgs: 305 active+clean; 808 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.3 MiB/s wr, 88 op/s
Oct 02 12:52:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:25.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e380 do_prune osdmap full prune enabled
Oct 02 12:52:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e381 e381: 3 total, 3 up, 3 in
Oct 02 12:52:25 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e381: 3 total, 3 up, 3 in
Oct 02 12:52:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2795: 305 pgs: 305 active+clean; 820 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 118 op/s
Oct 02 12:52:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:26.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:52:26.972 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:52:26.973 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:52:26.974 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:27 compute-0 ceph-mon[73607]: osdmap e381: 3 total, 3 up, 3 in
Oct 02 12:52:27 compute-0 ceph-mon[73607]: pgmap v2795: 305 pgs: 305 active+clean; 820 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 118 op/s
Oct 02 12:52:27 compute-0 nova_compute[257802]: 2025-10-02 12:52:27.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:27.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:28 compute-0 nova_compute[257802]: 2025-10-02 12:52:28.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:52:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2796: 305 pgs: 305 active+clean; 820 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 531 KiB/s wr, 29 op/s
Oct 02 12:52:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:28.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:29 compute-0 nova_compute[257802]: 2025-10-02 12:52:29.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:52:29 compute-0 nova_compute[257802]: 2025-10-02 12:52:29.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:52:29 compute-0 nova_compute[257802]: 2025-10-02 12:52:29.100 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:52:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e381 do_prune osdmap full prune enabled
Oct 02 12:52:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e382 e382: 3 total, 3 up, 3 in
Oct 02 12:52:29 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e382: 3 total, 3 up, 3 in
Oct 02 12:52:29 compute-0 nova_compute[257802]: 2025-10-02 12:52:29.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:29.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:29 compute-0 ceph-mon[73607]: pgmap v2796: 305 pgs: 305 active+clean; 820 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 531 KiB/s wr, 29 op/s
Oct 02 12:52:29 compute-0 ceph-mon[73607]: osdmap e382: 3 total, 3 up, 3 in
Oct 02 12:52:30 compute-0 nova_compute[257802]: 2025-10-02 12:52:30.067 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-8e2c1007-1d07-434c-8a22-6cb98d903d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:52:30 compute-0 nova_compute[257802]: 2025-10-02 12:52:30.068 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-8e2c1007-1d07-434c-8a22-6cb98d903d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:52:30 compute-0 nova_compute[257802]: 2025-10-02 12:52:30.068 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:52:30 compute-0 nova_compute[257802]: 2025-10-02 12:52:30.068 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8e2c1007-1d07-434c-8a22-6cb98d903d3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:52:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2798: 305 pgs: 305 active+clean; 820 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 621 KiB/s wr, 45 op/s
Oct 02 12:52:30 compute-0 nova_compute[257802]: 2025-10-02 12:52:30.604 2 DEBUG nova.network.neutron [req-535c9a08-1471-4982-be4c-69690f826802 req-e95defe5-e026-43ca-b529-1facd781c251 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Updated VIF entry in instance network info cache for port edd030ea-1bf1-4735-8720-2e02fbd67149. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:52:30 compute-0 nova_compute[257802]: 2025-10-02 12:52:30.604 2 DEBUG nova.network.neutron [req-535c9a08-1471-4982-be4c-69690f826802 req-e95defe5-e026-43ca-b529-1facd781c251 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Updating instance_info_cache with network_info: [{"id": "edd030ea-1bf1-4735-8720-2e02fbd67149", "address": "fa:16:3e:57:d0:8a", "network": {"id": "15afb19f-043a-469d-96b6-7de0ff8590f7", "bridge": "br-int", "label": "tempest-network-smoke--253378791", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd030ea-1b", "ovs_interfaceid": "edd030ea-1bf1-4735-8720-2e02fbd67149", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:52:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:30.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:30 compute-0 ceph-mon[73607]: pgmap v2798: 305 pgs: 305 active+clean; 820 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 621 KiB/s wr, 45 op/s
Oct 02 12:52:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:31.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2799: 305 pgs: 305 active+clean; 820 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 535 KiB/s wr, 39 op/s
Oct 02 12:52:32 compute-0 nova_compute[257802]: 2025-10-02 12:52:32.444 2 DEBUG oslo_concurrency.lockutils [req-535c9a08-1471-4982-be4c-69690f826802 req-e95defe5-e026-43ca-b529-1facd781c251 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-42da5a56-55e8-4a1a-a524-24555a4bd3ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:52:32 compute-0 nova_compute[257802]: 2025-10-02 12:52:32.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:32.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:33 compute-0 ceph-mon[73607]: pgmap v2799: 305 pgs: 305 active+clean; 820 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 535 KiB/s wr, 39 op/s
Oct 02 12:52:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3443426736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:33.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2800: 305 pgs: 305 active+clean; 820 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 7.3 KiB/s rd, 3.5 KiB/s wr, 9 op/s
Oct 02 12:52:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:34 compute-0 nova_compute[257802]: 2025-10-02 12:52:34.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:34.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:34 compute-0 sudo[372358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:34 compute-0 sudo[372358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:34 compute-0 sudo[372358]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:34 compute-0 sudo[372384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:52:34 compute-0 sudo[372384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:34 compute-0 sudo[372384]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:34 compute-0 sudo[372409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:34 compute-0 sudo[372409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:34 compute-0 sudo[372409]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:34 compute-0 sudo[372434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:52:34 compute-0 sudo[372434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:35 compute-0 sudo[372434]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:35 compute-0 nova_compute[257802]: 2025-10-02 12:52:35.406 2 DEBUG nova.compute.manager [req-6ea3d257-c16b-43b5-93e1-896d45af5f5e req-65aee52f-3b0d-4293-8727-7b173a17c3e6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Received event network-changed-edd030ea-1bf1-4735-8720-2e02fbd67149 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:52:35 compute-0 nova_compute[257802]: 2025-10-02 12:52:35.406 2 DEBUG nova.compute.manager [req-6ea3d257-c16b-43b5-93e1-896d45af5f5e req-65aee52f-3b0d-4293-8727-7b173a17c3e6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Refreshing instance network info cache due to event network-changed-edd030ea-1bf1-4735-8720-2e02fbd67149. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:52:35 compute-0 nova_compute[257802]: 2025-10-02 12:52:35.406 2 DEBUG oslo_concurrency.lockutils [req-6ea3d257-c16b-43b5-93e1-896d45af5f5e req-65aee52f-3b0d-4293-8727-7b173a17c3e6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-42da5a56-55e8-4a1a-a524-24555a4bd3ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:52:35 compute-0 nova_compute[257802]: 2025-10-02 12:52:35.407 2 DEBUG oslo_concurrency.lockutils [req-6ea3d257-c16b-43b5-93e1-896d45af5f5e req-65aee52f-3b0d-4293-8727-7b173a17c3e6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-42da5a56-55e8-4a1a-a524-24555a4bd3ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:52:35 compute-0 nova_compute[257802]: 2025-10-02 12:52:35.407 2 DEBUG nova.network.neutron [req-6ea3d257-c16b-43b5-93e1-896d45af5f5e req-65aee52f-3b0d-4293-8727-7b173a17c3e6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Refreshing network info cache for port edd030ea-1bf1-4735-8720-2e02fbd67149 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:52:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:52:35 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:52:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:52:35 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:52:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:52:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:35.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:35 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:52:35 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 8dc8bf71-4212-4f5c-a73a-aa8187489a64 does not exist
Oct 02 12:52:35 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 80958e61-061a-42ed-abae-c00c044b7f0c does not exist
Oct 02 12:52:35 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev c7838563-d4e5-4d07-b06a-da4b87864119 does not exist
Oct 02 12:52:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:52:35 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:52:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:52:35 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:52:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:52:35 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:52:35 compute-0 sudo[372489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:35 compute-0 sudo[372489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:35 compute-0 sudo[372489]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:35 compute-0 sudo[372514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:52:35 compute-0 sudo[372514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:35 compute-0 sudo[372514]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:35 compute-0 sudo[372539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:35 compute-0 sudo[372539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:35 compute-0 sudo[372539]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:35 compute-0 sudo[372564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:52:35 compute-0 sudo[372564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:36 compute-0 ceph-mon[73607]: pgmap v2800: 305 pgs: 305 active+clean; 820 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 7.3 KiB/s rd, 3.5 KiB/s wr, 9 op/s
Oct 02 12:52:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/653364688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1840656855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:36 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:52:36 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:52:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2801: 305 pgs: 305 active+clean; 820 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 13 KiB/s wr, 16 op/s
Oct 02 12:52:36 compute-0 nova_compute[257802]: 2025-10-02 12:52:36.238 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Updating instance_info_cache with network_info: [{"id": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "address": "fa:16:3e:26:56:b9", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f0b94c-3e", "ovs_interfaceid": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:52:36 compute-0 podman[372630]: 2025-10-02 12:52:36.360434296 +0000 UTC m=+0.106323243 container create 81524ef7a8862ea7b721dba550ba3c3bf9ce51e3ad5711a43bb57d8372dbe054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kare, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:52:36 compute-0 podman[372630]: 2025-10-02 12:52:36.27632458 +0000 UTC m=+0.022213547 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:52:36 compute-0 systemd[1]: Started libpod-conmon-81524ef7a8862ea7b721dba550ba3c3bf9ce51e3ad5711a43bb57d8372dbe054.scope.
Oct 02 12:52:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:52:36 compute-0 podman[372630]: 2025-10-02 12:52:36.568675412 +0000 UTC m=+0.314564389 container init 81524ef7a8862ea7b721dba550ba3c3bf9ce51e3ad5711a43bb57d8372dbe054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kare, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:52:36 compute-0 podman[372630]: 2025-10-02 12:52:36.577121879 +0000 UTC m=+0.323010826 container start 81524ef7a8862ea7b721dba550ba3c3bf9ce51e3ad5711a43bb57d8372dbe054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:52:36 compute-0 elastic_kare[372646]: 167 167
Oct 02 12:52:36 compute-0 systemd[1]: libpod-81524ef7a8862ea7b721dba550ba3c3bf9ce51e3ad5711a43bb57d8372dbe054.scope: Deactivated successfully.
Oct 02 12:52:36 compute-0 podman[372630]: 2025-10-02 12:52:36.613599535 +0000 UTC m=+0.359488482 container attach 81524ef7a8862ea7b721dba550ba3c3bf9ce51e3ad5711a43bb57d8372dbe054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kare, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 12:52:36 compute-0 podman[372630]: 2025-10-02 12:52:36.615490281 +0000 UTC m=+0.361379228 container died 81524ef7a8862ea7b721dba550ba3c3bf9ce51e3ad5711a43bb57d8372dbe054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kare, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 12:52:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:36.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:36 compute-0 sudo[372660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:36 compute-0 sudo[372660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:36 compute-0 sudo[372660]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-271d34160d24c93f0264a4cddcd45e59bf7166a1fd48c9bf680717a08462f2e7-merged.mount: Deactivated successfully.
Oct 02 12:52:36 compute-0 nova_compute[257802]: 2025-10-02 12:52:36.708 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-8e2c1007-1d07-434c-8a22-6cb98d903d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:52:36 compute-0 nova_compute[257802]: 2025-10-02 12:52:36.709 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:52:36 compute-0 nova_compute[257802]: 2025-10-02 12:52:36.709 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:52:36 compute-0 sudo[372688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:36 compute-0 sudo[372688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:36 compute-0 sudo[372688]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:36 compute-0 podman[372630]: 2025-10-02 12:52:36.756724171 +0000 UTC m=+0.502613118 container remove 81524ef7a8862ea7b721dba550ba3c3bf9ce51e3ad5711a43bb57d8372dbe054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kare, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:52:36 compute-0 systemd[1]: libpod-conmon-81524ef7a8862ea7b721dba550ba3c3bf9ce51e3ad5711a43bb57d8372dbe054.scope: Deactivated successfully.
Oct 02 12:52:36 compute-0 podman[372721]: 2025-10-02 12:52:36.928802748 +0000 UTC m=+0.045983321 container create 9d571dae242cbf686f6250540ecf7e52af88d084a7de42c539d8e87073a73590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_haslett, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:52:36 compute-0 systemd[1]: Started libpod-conmon-9d571dae242cbf686f6250540ecf7e52af88d084a7de42c539d8e87073a73590.scope.
Oct 02 12:52:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:52:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1cb62ad1f5bfd1c3109afa83e60d4b21dfa6b7195a121d4e323a5b484904e53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:52:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1cb62ad1f5bfd1c3109afa83e60d4b21dfa6b7195a121d4e323a5b484904e53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:52:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1cb62ad1f5bfd1c3109afa83e60d4b21dfa6b7195a121d4e323a5b484904e53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:52:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1cb62ad1f5bfd1c3109afa83e60d4b21dfa6b7195a121d4e323a5b484904e53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:52:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1cb62ad1f5bfd1c3109afa83e60d4b21dfa6b7195a121d4e323a5b484904e53/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:52:37 compute-0 podman[372721]: 2025-10-02 12:52:36.906394267 +0000 UTC m=+0.023574840 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:52:37 compute-0 podman[372721]: 2025-10-02 12:52:37.009399148 +0000 UTC m=+0.126579741 container init 9d571dae242cbf686f6250540ecf7e52af88d084a7de42c539d8e87073a73590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:52:37 compute-0 podman[372721]: 2025-10-02 12:52:37.020169062 +0000 UTC m=+0.137349645 container start 9d571dae242cbf686f6250540ecf7e52af88d084a7de42c539d8e87073a73590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_haslett, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 12:52:37 compute-0 podman[372721]: 2025-10-02 12:52:37.024697804 +0000 UTC m=+0.141878377 container attach 9d571dae242cbf686f6250540ecf7e52af88d084a7de42c539d8e87073a73590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 12:52:37 compute-0 nova_compute[257802]: 2025-10-02 12:52:37.431 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:37 compute-0 nova_compute[257802]: 2025-10-02 12:52:37.432 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:37 compute-0 nova_compute[257802]: 2025-10-02 12:52:37.432 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:37 compute-0 nova_compute[257802]: 2025-10-02 12:52:37.432 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:52:37 compute-0 nova_compute[257802]: 2025-10-02 12:52:37.432 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:37 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:52:37 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:52:37 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:52:37 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:52:37 compute-0 ceph-mon[73607]: pgmap v2801: 305 pgs: 305 active+clean; 820 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 13 KiB/s wr, 16 op/s
Oct 02 12:52:37 compute-0 nova_compute[257802]: 2025-10-02 12:52:37.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:37.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:52:37 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3978905884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:37 compute-0 nova_compute[257802]: 2025-10-02 12:52:37.890 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:37 compute-0 gallant_haslett[372737]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:52:37 compute-0 gallant_haslett[372737]: --> relative data size: 1.0
Oct 02 12:52:37 compute-0 gallant_haslett[372737]: --> All data devices are unavailable
Oct 02 12:52:37 compute-0 systemd[1]: libpod-9d571dae242cbf686f6250540ecf7e52af88d084a7de42c539d8e87073a73590.scope: Deactivated successfully.
Oct 02 12:52:37 compute-0 podman[372775]: 2025-10-02 12:52:37.980669666 +0000 UTC m=+0.025588959 container died 9d571dae242cbf686f6250540ecf7e52af88d084a7de42c539d8e87073a73590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_haslett, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 12:52:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1cb62ad1f5bfd1c3109afa83e60d4b21dfa6b7195a121d4e323a5b484904e53-merged.mount: Deactivated successfully.
Oct 02 12:52:38 compute-0 podman[372775]: 2025-10-02 12:52:38.044554046 +0000 UTC m=+0.089473319 container remove 9d571dae242cbf686f6250540ecf7e52af88d084a7de42c539d8e87073a73590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:52:38 compute-0 systemd[1]: libpod-conmon-9d571dae242cbf686f6250540ecf7e52af88d084a7de42c539d8e87073a73590.scope: Deactivated successfully.
Oct 02 12:52:38 compute-0 sudo[372564]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:38 compute-0 sudo[372790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:38 compute-0 sudo[372790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:38 compute-0 sudo[372790]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2802: 305 pgs: 305 active+clean; 820 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 13 KiB/s wr, 16 op/s
Oct 02 12:52:38 compute-0 sudo[372815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:52:38 compute-0 sudo[372815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:38 compute-0 sudo[372815]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:38 compute-0 sudo[372840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:38 compute-0 sudo[372840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:38 compute-0 sudo[372840]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:38 compute-0 sudo[372865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:52:38 compute-0 sudo[372865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:38 compute-0 podman[372931]: 2025-10-02 12:52:38.58285967 +0000 UTC m=+0.037993255 container create b1ef62898f44830730be650ad02dacabf3d560219e2964d7a56a0d4cad20adf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:52:38 compute-0 systemd[1]: Started libpod-conmon-b1ef62898f44830730be650ad02dacabf3d560219e2964d7a56a0d4cad20adf4.scope.
Oct 02 12:52:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:38.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:52:38 compute-0 podman[372931]: 2025-10-02 12:52:38.565637946 +0000 UTC m=+0.020771551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:52:38 compute-0 podman[372931]: 2025-10-02 12:52:38.682115907 +0000 UTC m=+0.137249492 container init b1ef62898f44830730be650ad02dacabf3d560219e2964d7a56a0d4cad20adf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:52:38 compute-0 podman[372931]: 2025-10-02 12:52:38.68916732 +0000 UTC m=+0.144300905 container start b1ef62898f44830730be650ad02dacabf3d560219e2964d7a56a0d4cad20adf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cray, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:52:38 compute-0 wonderful_cray[372947]: 167 167
Oct 02 12:52:38 compute-0 systemd[1]: libpod-b1ef62898f44830730be650ad02dacabf3d560219e2964d7a56a0d4cad20adf4.scope: Deactivated successfully.
Oct 02 12:52:38 compute-0 podman[372931]: 2025-10-02 12:52:38.701620487 +0000 UTC m=+0.156754072 container attach b1ef62898f44830730be650ad02dacabf3d560219e2964d7a56a0d4cad20adf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 12:52:38 compute-0 podman[372931]: 2025-10-02 12:52:38.701994746 +0000 UTC m=+0.157128331 container died b1ef62898f44830730be650ad02dacabf3d560219e2964d7a56a0d4cad20adf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:52:38 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3978905884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3b0e5381efe72ae24ac3ad498ffd79614a28497f857b36f5eba56c0dd111798-merged.mount: Deactivated successfully.
Oct 02 12:52:38 compute-0 podman[372931]: 2025-10-02 12:52:38.776447875 +0000 UTC m=+0.231581460 container remove b1ef62898f44830730be650ad02dacabf3d560219e2964d7a56a0d4cad20adf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct 02 12:52:38 compute-0 systemd[1]: libpod-conmon-b1ef62898f44830730be650ad02dacabf3d560219e2964d7a56a0d4cad20adf4.scope: Deactivated successfully.
Oct 02 12:52:38 compute-0 podman[372975]: 2025-10-02 12:52:38.935039651 +0000 UTC m=+0.037710767 container create bbc00281000d87eb8440175b3942395004ab1fd3b70e8ab99769085a82c7d387 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_noether, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 12:52:38 compute-0 systemd[1]: Started libpod-conmon-bbc00281000d87eb8440175b3942395004ab1fd3b70e8ab99769085a82c7d387.scope.
Oct 02 12:52:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:52:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/653a4d826fe5b5875616f92d1f62d912076c96d587e386954adbb41b96e2f84d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:52:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/653a4d826fe5b5875616f92d1f62d912076c96d587e386954adbb41b96e2f84d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:52:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/653a4d826fe5b5875616f92d1f62d912076c96d587e386954adbb41b96e2f84d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:52:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/653a4d826fe5b5875616f92d1f62d912076c96d587e386954adbb41b96e2f84d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:52:39 compute-0 podman[372975]: 2025-10-02 12:52:38.918336101 +0000 UTC m=+0.021007237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:52:39 compute-0 podman[372975]: 2025-10-02 12:52:39.014906023 +0000 UTC m=+0.117577159 container init bbc00281000d87eb8440175b3942395004ab1fd3b70e8ab99769085a82c7d387 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_noether, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:52:39 compute-0 podman[372975]: 2025-10-02 12:52:39.021218078 +0000 UTC m=+0.123889194 container start bbc00281000d87eb8440175b3942395004ab1fd3b70e8ab99769085a82c7d387 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_noether, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 12:52:39 compute-0 podman[372975]: 2025-10-02 12:52:39.024024027 +0000 UTC m=+0.126695243 container attach bbc00281000d87eb8440175b3942395004ab1fd3b70e8ab99769085a82c7d387 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_noether, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 12:52:39 compute-0 nova_compute[257802]: 2025-10-02 12:52:39.198 2 DEBUG nova.network.neutron [req-6ea3d257-c16b-43b5-93e1-896d45af5f5e req-65aee52f-3b0d-4293-8727-7b173a17c3e6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Updated VIF entry in instance network info cache for port edd030ea-1bf1-4735-8720-2e02fbd67149. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:52:39 compute-0 nova_compute[257802]: 2025-10-02 12:52:39.199 2 DEBUG nova.network.neutron [req-6ea3d257-c16b-43b5-93e1-896d45af5f5e req-65aee52f-3b0d-4293-8727-7b173a17c3e6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Updating instance_info_cache with network_info: [{"id": "edd030ea-1bf1-4735-8720-2e02fbd67149", "address": "fa:16:3e:57:d0:8a", "network": {"id": "15afb19f-043a-469d-96b6-7de0ff8590f7", "bridge": "br-int", "label": "tempest-network-smoke--253378791", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd030ea-1b", "ovs_interfaceid": "edd030ea-1bf1-4735-8720-2e02fbd67149", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:52:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:39 compute-0 nova_compute[257802]: 2025-10-02 12:52:39.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:39.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:39 compute-0 condescending_noether[372991]: {
Oct 02 12:52:39 compute-0 condescending_noether[372991]:     "1": [
Oct 02 12:52:39 compute-0 condescending_noether[372991]:         {
Oct 02 12:52:39 compute-0 condescending_noether[372991]:             "devices": [
Oct 02 12:52:39 compute-0 condescending_noether[372991]:                 "/dev/loop3"
Oct 02 12:52:39 compute-0 condescending_noether[372991]:             ],
Oct 02 12:52:39 compute-0 condescending_noether[372991]:             "lv_name": "ceph_lv0",
Oct 02 12:52:39 compute-0 condescending_noether[372991]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:52:39 compute-0 condescending_noether[372991]:             "lv_size": "7511998464",
Oct 02 12:52:39 compute-0 condescending_noether[372991]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:52:39 compute-0 condescending_noether[372991]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:52:39 compute-0 condescending_noether[372991]:             "name": "ceph_lv0",
Oct 02 12:52:39 compute-0 condescending_noether[372991]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:52:39 compute-0 condescending_noether[372991]:             "tags": {
Oct 02 12:52:39 compute-0 condescending_noether[372991]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:52:39 compute-0 condescending_noether[372991]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:52:39 compute-0 condescending_noether[372991]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:52:39 compute-0 condescending_noether[372991]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:52:39 compute-0 condescending_noether[372991]:                 "ceph.cluster_name": "ceph",
Oct 02 12:52:39 compute-0 condescending_noether[372991]:                 "ceph.crush_device_class": "",
Oct 02 12:52:39 compute-0 condescending_noether[372991]:                 "ceph.encrypted": "0",
Oct 02 12:52:39 compute-0 condescending_noether[372991]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:52:39 compute-0 condescending_noether[372991]:                 "ceph.osd_id": "1",
Oct 02 12:52:39 compute-0 condescending_noether[372991]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:52:39 compute-0 condescending_noether[372991]:                 "ceph.type": "block",
Oct 02 12:52:39 compute-0 condescending_noether[372991]:                 "ceph.vdo": "0"
Oct 02 12:52:39 compute-0 condescending_noether[372991]:             },
Oct 02 12:52:39 compute-0 condescending_noether[372991]:             "type": "block",
Oct 02 12:52:39 compute-0 condescending_noether[372991]:             "vg_name": "ceph_vg0"
Oct 02 12:52:39 compute-0 condescending_noether[372991]:         }
Oct 02 12:52:39 compute-0 condescending_noether[372991]:     ]
Oct 02 12:52:39 compute-0 condescending_noether[372991]: }
Oct 02 12:52:39 compute-0 ceph-mon[73607]: pgmap v2802: 305 pgs: 305 active+clean; 820 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 13 KiB/s wr, 16 op/s
Oct 02 12:52:39 compute-0 systemd[1]: libpod-bbc00281000d87eb8440175b3942395004ab1fd3b70e8ab99769085a82c7d387.scope: Deactivated successfully.
Oct 02 12:52:39 compute-0 podman[372975]: 2025-10-02 12:52:39.820576765 +0000 UTC m=+0.923247881 container died bbc00281000d87eb8440175b3942395004ab1fd3b70e8ab99769085a82c7d387 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 12:52:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-653a4d826fe5b5875616f92d1f62d912076c96d587e386954adbb41b96e2f84d-merged.mount: Deactivated successfully.
Oct 02 12:52:39 compute-0 nova_compute[257802]: 2025-10-02 12:52:39.901 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000b1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:52:39 compute-0 nova_compute[257802]: 2025-10-02 12:52:39.902 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000b1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:52:39 compute-0 nova_compute[257802]: 2025-10-02 12:52:39.905 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000b6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:52:39 compute-0 nova_compute[257802]: 2025-10-02 12:52:39.905 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000b6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:52:39 compute-0 podman[372975]: 2025-10-02 12:52:39.923370909 +0000 UTC m=+1.026042035 container remove bbc00281000d87eb8440175b3942395004ab1fd3b70e8ab99769085a82c7d387 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:52:39 compute-0 systemd[1]: libpod-conmon-bbc00281000d87eb8440175b3942395004ab1fd3b70e8ab99769085a82c7d387.scope: Deactivated successfully.
Oct 02 12:52:39 compute-0 sudo[372865]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:40 compute-0 sudo[373012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:40 compute-0 sudo[373012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:40 compute-0 sudo[373012]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:40 compute-0 nova_compute[257802]: 2025-10-02 12:52:40.072 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:52:40 compute-0 nova_compute[257802]: 2025-10-02 12:52:40.073 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3761MB free_disk=20.714889526367188GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:52:40 compute-0 nova_compute[257802]: 2025-10-02 12:52:40.074 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:40 compute-0 nova_compute[257802]: 2025-10-02 12:52:40.074 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:40 compute-0 sudo[373037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:52:40 compute-0 sudo[373037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:40 compute-0 sudo[373037]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:40 compute-0 sudo[373062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:40 compute-0 sudo[373062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:40 compute-0 sudo[373062]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2803: 305 pgs: 305 active+clean; 820 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 17 KiB/s wr, 15 op/s
Oct 02 12:52:40 compute-0 sudo[373087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:52:40 compute-0 sudo[373087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:40 compute-0 podman[373152]: 2025-10-02 12:52:40.522172139 +0000 UTC m=+0.052835899 container create 2e65d0ad651607c02bcdd6ce2f89deeadefda2388616fd589a03986b9d98f1e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_borg, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 12:52:40 compute-0 systemd[1]: Started libpod-conmon-2e65d0ad651607c02bcdd6ce2f89deeadefda2388616fd589a03986b9d98f1e0.scope.
Oct 02 12:52:40 compute-0 podman[373152]: 2025-10-02 12:52:40.490418409 +0000 UTC m=+0.021082189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:52:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:52:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:40.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:40 compute-0 podman[373152]: 2025-10-02 12:52:40.645754125 +0000 UTC m=+0.176417905 container init 2e65d0ad651607c02bcdd6ce2f89deeadefda2388616fd589a03986b9d98f1e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 12:52:40 compute-0 podman[373152]: 2025-10-02 12:52:40.65208065 +0000 UTC m=+0.182744400 container start 2e65d0ad651607c02bcdd6ce2f89deeadefda2388616fd589a03986b9d98f1e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_borg, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:52:40 compute-0 epic_borg[373169]: 167 167
Oct 02 12:52:40 compute-0 systemd[1]: libpod-2e65d0ad651607c02bcdd6ce2f89deeadefda2388616fd589a03986b9d98f1e0.scope: Deactivated successfully.
Oct 02 12:52:40 compute-0 podman[373152]: 2025-10-02 12:52:40.656207631 +0000 UTC m=+0.186871391 container attach 2e65d0ad651607c02bcdd6ce2f89deeadefda2388616fd589a03986b9d98f1e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_borg, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:52:40 compute-0 podman[373152]: 2025-10-02 12:52:40.656638032 +0000 UTC m=+0.187301792 container died 2e65d0ad651607c02bcdd6ce2f89deeadefda2388616fd589a03986b9d98f1e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_borg, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 12:52:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-d704dd9258a543bfa94ff61f864eb447a52f1d9faf619618dbcee67e3f49f316-merged.mount: Deactivated successfully.
Oct 02 12:52:40 compute-0 podman[373152]: 2025-10-02 12:52:40.732306411 +0000 UTC m=+0.262970171 container remove 2e65d0ad651607c02bcdd6ce2f89deeadefda2388616fd589a03986b9d98f1e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_borg, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 12:52:40 compute-0 systemd[1]: libpod-conmon-2e65d0ad651607c02bcdd6ce2f89deeadefda2388616fd589a03986b9d98f1e0.scope: Deactivated successfully.
Oct 02 12:52:40 compute-0 podman[373194]: 2025-10-02 12:52:40.89999392 +0000 UTC m=+0.039077660 container create 373ee058c3602fde94784ccea915f969993fb8d074e8a7d4e506550042b8919a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:52:40 compute-0 ceph-mon[73607]: pgmap v2803: 305 pgs: 305 active+clean; 820 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 17 KiB/s wr, 15 op/s
Oct 02 12:52:40 compute-0 systemd[1]: Started libpod-conmon-373ee058c3602fde94784ccea915f969993fb8d074e8a7d4e506550042b8919a.scope.
Oct 02 12:52:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:52:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8464f2cf83a5418a19137ea04db68669219c0cdc406e36c487ac3e59ef04a1e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:52:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8464f2cf83a5418a19137ea04db68669219c0cdc406e36c487ac3e59ef04a1e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:52:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8464f2cf83a5418a19137ea04db68669219c0cdc406e36c487ac3e59ef04a1e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:52:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8464f2cf83a5418a19137ea04db68669219c0cdc406e36c487ac3e59ef04a1e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:52:40 compute-0 podman[373194]: 2025-10-02 12:52:40.882366597 +0000 UTC m=+0.021450377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:52:41 compute-0 podman[373194]: 2025-10-02 12:52:41.021608647 +0000 UTC m=+0.160692417 container init 373ee058c3602fde94784ccea915f969993fb8d074e8a7d4e506550042b8919a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:52:41 compute-0 podman[373194]: 2025-10-02 12:52:41.028414504 +0000 UTC m=+0.167498244 container start 373ee058c3602fde94784ccea915f969993fb8d074e8a7d4e506550042b8919a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 12:52:41 compute-0 podman[373194]: 2025-10-02 12:52:41.036150485 +0000 UTC m=+0.175234245 container attach 373ee058c3602fde94784ccea915f969993fb8d074e8a7d4e506550042b8919a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:52:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:41.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:41 compute-0 goofy_jemison[373210]: {
Oct 02 12:52:41 compute-0 goofy_jemison[373210]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:52:41 compute-0 goofy_jemison[373210]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:52:41 compute-0 goofy_jemison[373210]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:52:41 compute-0 goofy_jemison[373210]:         "osd_id": 1,
Oct 02 12:52:41 compute-0 goofy_jemison[373210]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:52:41 compute-0 goofy_jemison[373210]:         "type": "bluestore"
Oct 02 12:52:41 compute-0 goofy_jemison[373210]:     }
Oct 02 12:52:41 compute-0 goofy_jemison[373210]: }
Oct 02 12:52:41 compute-0 systemd[1]: libpod-373ee058c3602fde94784ccea915f969993fb8d074e8a7d4e506550042b8919a.scope: Deactivated successfully.
Oct 02 12:52:41 compute-0 podman[373194]: 2025-10-02 12:52:41.863590971 +0000 UTC m=+1.002674711 container died 373ee058c3602fde94784ccea915f969993fb8d074e8a7d4e506550042b8919a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 12:52:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-8464f2cf83a5418a19137ea04db68669219c0cdc406e36c487ac3e59ef04a1e0-merged.mount: Deactivated successfully.
Oct 02 12:52:41 compute-0 podman[373194]: 2025-10-02 12:52:41.93236204 +0000 UTC m=+1.071445780 container remove 373ee058c3602fde94784ccea915f969993fb8d074e8a7d4e506550042b8919a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:52:41 compute-0 systemd[1]: libpod-conmon-373ee058c3602fde94784ccea915f969993fb8d074e8a7d4e506550042b8919a.scope: Deactivated successfully.
Oct 02 12:52:41 compute-0 sudo[373087]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:52:42 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:52:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:52:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2804: 305 pgs: 305 active+clean; 820 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 9.0 KiB/s rd, 13 KiB/s wr, 13 op/s
Oct 02 12:52:42 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:52:42 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 8f185fe9-5e04-4ce7-9368-7fec5ff578da does not exist
Oct 02 12:52:42 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 7b1fbc5c-2e7d-48cd-822b-5c2c5999b4a2 does not exist
Oct 02 12:52:42 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 8c979581-3377-4924-b2e3-67b8bfe8c68f does not exist
Oct 02 12:52:42 compute-0 sudo[373243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:42 compute-0 sudo[373243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:42 compute-0 sudo[373243]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:42 compute-0 sudo[373268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:52:42 compute-0 sudo[373268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:42 compute-0 sudo[373268]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:42 compute-0 nova_compute[257802]: 2025-10-02 12:52:42.498 2 DEBUG oslo_concurrency.lockutils [req-6ea3d257-c16b-43b5-93e1-896d45af5f5e req-65aee52f-3b0d-4293-8727-7b173a17c3e6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-42da5a56-55e8-4a1a-a524-24555a4bd3ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:52:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:52:42
Oct 02 12:52:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:52:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:52:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'volumes', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'images', 'default.rgw.meta', 'default.rgw.log', '.mgr']
Oct 02 12:52:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:52:42 compute-0 nova_compute[257802]: 2025-10-02 12:52:42.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:42.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:52:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:52:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:52:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:52:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:52:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:52:43 compute-0 nova_compute[257802]: 2025-10-02 12:52:43.205 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 8e2c1007-1d07-434c-8a22-6cb98d903d3c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:52:43 compute-0 nova_compute[257802]: 2025-10-02 12:52:43.205 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 42da5a56-55e8-4a1a-a524-24555a4bd3ec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:52:43 compute-0 nova_compute[257802]: 2025-10-02 12:52:43.205 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:52:43 compute-0 nova_compute[257802]: 2025-10-02 12:52:43.205 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:52:43 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:52:43 compute-0 ceph-mon[73607]: pgmap v2804: 305 pgs: 305 active+clean; 820 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 9.0 KiB/s rd, 13 KiB/s wr, 13 op/s
Oct 02 12:52:43 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:52:43 compute-0 nova_compute[257802]: 2025-10-02 12:52:43.278 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:52:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:52:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:52:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:52:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:52:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:52:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:43.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:52:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:52:43 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2945501830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:43 compute-0 nova_compute[257802]: 2025-10-02 12:52:43.718 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:43 compute-0 nova_compute[257802]: 2025-10-02 12:52:43.724 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:52:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2805: 305 pgs: 305 active+clean; 820 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 9.0 KiB/s rd, 13 KiB/s wr, 13 op/s
Oct 02 12:52:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:52:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:52:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:52:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:52:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:52:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:44 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2945501830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:44 compute-0 nova_compute[257802]: 2025-10-02 12:52:44.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:52:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:44.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:52:45 compute-0 nova_compute[257802]: 2025-10-02 12:52:45.160 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:52:45 compute-0 ceph-mon[73607]: pgmap v2805: 305 pgs: 305 active+clean; 820 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 9.0 KiB/s rd, 13 KiB/s wr, 13 op/s
Oct 02 12:52:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:52:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:45.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:52:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2806: 305 pgs: 305 active+clean; 820 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 9.0 KiB/s rd, 21 KiB/s wr, 14 op/s
Oct 02 12:52:46 compute-0 nova_compute[257802]: 2025-10-02 12:52:46.294 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:52:46 compute-0 nova_compute[257802]: 2025-10-02 12:52:46.294 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 6.220s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:46.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:47 compute-0 nova_compute[257802]: 2025-10-02 12:52:47.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:47 compute-0 ceph-mon[73607]: pgmap v2806: 305 pgs: 305 active+clean; 820 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 9.0 KiB/s rd, 21 KiB/s wr, 14 op/s
Oct 02 12:52:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:52:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:47.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:52:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2807: 305 pgs: 305 active+clean; 820 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.7 KiB/s rd, 13 KiB/s wr, 7 op/s
Oct 02 12:52:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:48.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:48 compute-0 nova_compute[257802]: 2025-10-02 12:52:48.684 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:52:48 compute-0 ceph-mon[73607]: pgmap v2807: 305 pgs: 305 active+clean; 820 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.7 KiB/s rd, 13 KiB/s wr, 7 op/s
Oct 02 12:52:48 compute-0 podman[373319]: 2025-10-02 12:52:48.939440857 +0000 UTC m=+0.069871737 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 12:52:48 compute-0 podman[373320]: 2025-10-02 12:52:48.963192401 +0000 UTC m=+0.083679226 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 02 12:52:48 compute-0 podman[373321]: 2025-10-02 12:52:48.968550073 +0000 UTC m=+0.097585659 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 02 12:52:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:49 compute-0 nova_compute[257802]: 2025-10-02 12:52:49.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:52:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:49.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:52:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2808: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 14 KiB/s wr, 35 op/s
Oct 02 12:52:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:52:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:50.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:52:51 compute-0 ceph-mon[73607]: pgmap v2808: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 14 KiB/s wr, 35 op/s
Oct 02 12:52:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:51.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2809: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 9.2 KiB/s wr, 29 op/s
Oct 02 12:52:52 compute-0 nova_compute[257802]: 2025-10-02 12:52:52.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:52.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:53 compute-0 ceph-mon[73607]: pgmap v2809: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 9.2 KiB/s wr, 29 op/s
Oct 02 12:52:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:52:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:53.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:52:53 compute-0 podman[373372]: 2025-10-02 12:52:53.959463594 +0000 UTC m=+0.087917140 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 12:52:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2810: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 9.2 KiB/s wr, 29 op/s
Oct 02 12:52:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:54 compute-0 nova_compute[257802]: 2025-10-02 12:52:54.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:52:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:54.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:52:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:52:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:52:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:52:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:52:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.010851784966499162 of space, bias 1.0, pg target 3.2555354899497484 quantized to 32 (current 32)
Oct 02 12:52:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:52:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6421021121231156 quantized to 32 (current 32)
Oct 02 12:52:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:52:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:52:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:52:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.007034230760282896 of space, bias 1.0, pg target 2.0891665358040203 quantized to 32 (current 32)
Oct 02 12:52:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:52:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Oct 02 12:52:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:52:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:52:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:52:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.0002680866717848502 quantized to 32 (current 32)
Oct 02 12:52:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:52:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Oct 02 12:52:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:52:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:52:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:52:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Oct 02 12:52:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:52:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1954220367' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:52:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:52:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1954220367' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:52:55 compute-0 ceph-mon[73607]: pgmap v2810: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 9.2 KiB/s wr, 29 op/s
Oct 02 12:52:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1954220367' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:52:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1954220367' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:52:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:55.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2811: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 9.2 KiB/s wr, 29 op/s
Oct 02 12:52:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:56.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:56 compute-0 sudo[373401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:56 compute-0 sudo[373401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:56 compute-0 sudo[373401]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:56 compute-0 sudo[373426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:56 compute-0 sudo[373426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:56 compute-0 sudo[373426]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:57 compute-0 nova_compute[257802]: 2025-10-02 12:52:57.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:57 compute-0 ceph-mon[73607]: pgmap v2811: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 9.2 KiB/s wr, 29 op/s
Oct 02 12:52:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:52:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:57.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:52:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2812: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 12:52:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:52:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 55K writes, 217K keys, 55K commit groups, 1.0 writes per commit group, ingest: 0.21 GB, 0.04 MB/s
                                           Cumulative WAL: 55K writes, 19K syncs, 2.80 writes per sync, written: 0.21 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7093 writes, 28K keys, 7093 commit groups, 1.0 writes per commit group, ingest: 28.78 MB, 0.05 MB/s
                                           Interval WAL: 7093 writes, 2781 syncs, 2.55 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 12:52:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:52:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:58.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:52:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:59 compute-0 nova_compute[257802]: 2025-10-02 12:52:59.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:52:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:52:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:59.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:52:59 compute-0 ceph-mon[73607]: pgmap v2812: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 12:53:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2813: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 12:53:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:00.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:00 compute-0 ceph-mon[73607]: pgmap v2813: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 12:53:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:53:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:01.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:53:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:53:01 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2481913080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:53:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2481913080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:53:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2814: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:02 compute-0 nova_compute[257802]: 2025-10-02 12:53:02.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:02.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:03 compute-0 ceph-mon[73607]: pgmap v2814: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:53:03.348 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:53:03 compute-0 nova_compute[257802]: 2025-10-02 12:53:03.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:53:03.349 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:53:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:03.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2815: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:04 compute-0 nova_compute[257802]: 2025-10-02 12:53:04.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:04.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:53:05.351 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:05 compute-0 ceph-mon[73607]: pgmap v2815: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:05.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2816: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:06.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:06 compute-0 ceph-mgr[73901]: [devicehealth INFO root] Check health
Oct 02 12:53:07 compute-0 nova_compute[257802]: 2025-10-02 12:53:07.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:07 compute-0 ceph-mon[73607]: pgmap v2816: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:07.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2817: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1299686118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:08.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:09 compute-0 nova_compute[257802]: 2025-10-02 12:53:09.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:09 compute-0 ceph-mon[73607]: pgmap v2817: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:53:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:09.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:53:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2818: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 KiB/s rd, 85 B/s wr, 2 op/s
Oct 02 12:53:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:10.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:10 compute-0 ceph-mon[73607]: pgmap v2818: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 KiB/s rd, 85 B/s wr, 2 op/s
Oct 02 12:53:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:11.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2819: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 KiB/s rd, 85 B/s wr, 2 op/s
Oct 02 12:53:12 compute-0 nova_compute[257802]: 2025-10-02 12:53:12.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:12.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:53:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:53:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:53:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:53:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:53:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:53:13 compute-0 ceph-mon[73607]: pgmap v2819: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 KiB/s rd, 85 B/s wr, 2 op/s
Oct 02 12:53:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:13.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2820: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 KiB/s rd, 85 B/s wr, 2 op/s
Oct 02 12:53:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2364978983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:14 compute-0 nova_compute[257802]: 2025-10-02 12:53:14.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:14.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:15 compute-0 nova_compute[257802]: 2025-10-02 12:53:15.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:53:15 compute-0 nova_compute[257802]: 2025-10-02 12:53:15.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:53:15 compute-0 nova_compute[257802]: 2025-10-02 12:53:15.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:53:15 compute-0 ceph-mon[73607]: pgmap v2820: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 KiB/s rd, 85 B/s wr, 2 op/s
Oct 02 12:53:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:15.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2821: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 KiB/s rd, 85 B/s wr, 2 op/s
Oct 02 12:53:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1538834424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:16.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:17 compute-0 sudo[373462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:17 compute-0 sudo[373462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:17 compute-0 sudo[373462]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:17 compute-0 sudo[373487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:17 compute-0 sudo[373487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:17 compute-0 sudo[373487]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:17 compute-0 nova_compute[257802]: 2025-10-02 12:53:17.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:17.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:17 compute-0 ceph-mon[73607]: pgmap v2821: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 KiB/s rd, 85 B/s wr, 2 op/s
Oct 02 12:53:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2822: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 KiB/s rd, 85 B/s wr, 2 op/s
Oct 02 12:53:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:18.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:18 compute-0 ceph-mon[73607]: pgmap v2822: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 KiB/s rd, 85 B/s wr, 2 op/s
Oct 02 12:53:19 compute-0 nova_compute[257802]: 2025-10-02 12:53:19.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:53:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:19 compute-0 nova_compute[257802]: 2025-10-02 12:53:19.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:19.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:19 compute-0 podman[373513]: 2025-10-02 12:53:19.908027378 +0000 UTC m=+0.047555820 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:53:19 compute-0 podman[373515]: 2025-10-02 12:53:19.911598485 +0000 UTC m=+0.049387414 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:53:19 compute-0 podman[373514]: 2025-10-02 12:53:19.912680872 +0000 UTC m=+0.053359262 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:53:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2823: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 KiB/s rd, 85 B/s wr, 2 op/s
Oct 02 12:53:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:20.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:21 compute-0 nova_compute[257802]: 2025-10-02 12:53:21.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:53:21 compute-0 ceph-mon[73607]: pgmap v2823: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 KiB/s rd, 85 B/s wr, 2 op/s
Oct 02 12:53:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:21.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:22 compute-0 nova_compute[257802]: 2025-10-02 12:53:22.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:53:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2824: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:22 compute-0 nova_compute[257802]: 2025-10-02 12:53:22.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:22.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:23 compute-0 ceph-mon[73607]: pgmap v2824: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:23.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:24 compute-0 nova_compute[257802]: 2025-10-02 12:53:24.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:53:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2825: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:24 compute-0 nova_compute[257802]: 2025-10-02 12:53:24.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:24.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:25 compute-0 podman[373570]: 2025-10-02 12:53:25.017335767 +0000 UTC m=+0.140457112 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:53:25 compute-0 ceph-mon[73607]: pgmap v2825: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:25.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:26 compute-0 nova_compute[257802]: 2025-10-02 12:53:26.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:53:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2826: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:26.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:53:26.973 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:53:26.973 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:53:26.974 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:27 compute-0 ceph-mon[73607]: pgmap v2826: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:27 compute-0 nova_compute[257802]: 2025-10-02 12:53:27.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:53:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:27.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:53:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2827: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:28.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:29 compute-0 nova_compute[257802]: 2025-10-02 12:53:29.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:29.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:29 compute-0 ceph-mon[73607]: pgmap v2827: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2828: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:30.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:30 compute-0 ceph-mon[73607]: pgmap v2828: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:31 compute-0 nova_compute[257802]: 2025-10-02 12:53:31.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:53:31 compute-0 nova_compute[257802]: 2025-10-02 12:53:31.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:53:31 compute-0 ovn_controller[148183]: 2025-10-02T12:53:31Z|00809|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Oct 02 12:53:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:53:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:31.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:53:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2829: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:32 compute-0 nova_compute[257802]: 2025-10-02 12:53:32.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:32.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:33 compute-0 ceph-mon[73607]: pgmap v2829: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:33.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2830: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:34 compute-0 nova_compute[257802]: 2025-10-02 12:53:34.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:53:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:34.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:53:35 compute-0 ceph-mon[73607]: pgmap v2830: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:53:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:35.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:53:36 compute-0 nova_compute[257802]: 2025-10-02 12:53:36.132 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-42da5a56-55e8-4a1a-a524-24555a4bd3ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:53:36 compute-0 nova_compute[257802]: 2025-10-02 12:53:36.133 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-42da5a56-55e8-4a1a-a524-24555a4bd3ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:53:36 compute-0 nova_compute[257802]: 2025-10-02 12:53:36.133 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:53:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2831: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:36.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:37 compute-0 sudo[373602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:37 compute-0 sudo[373602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:37 compute-0 sudo[373602]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:37 compute-0 sudo[373627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:37 compute-0 sudo[373627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:37 compute-0 sudo[373627]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:37 compute-0 ceph-mon[73607]: pgmap v2831: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:37 compute-0 nova_compute[257802]: 2025-10-02 12:53:37.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:37.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2832: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:38.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:39 compute-0 nova_compute[257802]: 2025-10-02 12:53:39.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:39 compute-0 ceph-mon[73607]: pgmap v2832: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:39.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2833: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:40.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:53:40.865 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:53:40 compute-0 nova_compute[257802]: 2025-10-02 12:53:40.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:53:40.867 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:53:41 compute-0 nova_compute[257802]: 2025-10-02 12:53:41.492 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Updating instance_info_cache with network_info: [{"id": "edd030ea-1bf1-4735-8720-2e02fbd67149", "address": "fa:16:3e:57:d0:8a", "network": {"id": "15afb19f-043a-469d-96b6-7de0ff8590f7", "bridge": "br-int", "label": "tempest-network-smoke--253378791", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd030ea-1b", "ovs_interfaceid": "edd030ea-1bf1-4735-8720-2e02fbd67149", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:53:41 compute-0 nova_compute[257802]: 2025-10-02 12:53:41.528 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-42da5a56-55e8-4a1a-a524-24555a4bd3ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:53:41 compute-0 nova_compute[257802]: 2025-10-02 12:53:41.528 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:53:41 compute-0 nova_compute[257802]: 2025-10-02 12:53:41.529 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:53:41 compute-0 nova_compute[257802]: 2025-10-02 12:53:41.529 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:53:41 compute-0 nova_compute[257802]: 2025-10-02 12:53:41.577 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:41 compute-0 nova_compute[257802]: 2025-10-02 12:53:41.577 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:41 compute-0 nova_compute[257802]: 2025-10-02 12:53:41.577 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:41 compute-0 nova_compute[257802]: 2025-10-02 12:53:41.577 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:53:41 compute-0 nova_compute[257802]: 2025-10-02 12:53:41.578 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:53:41 compute-0 ceph-mon[73607]: pgmap v2833: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:41.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:53:42 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3648494638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:42 compute-0 nova_compute[257802]: 2025-10-02 12:53:42.022 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:53:42 compute-0 nova_compute[257802]: 2025-10-02 12:53:42.149 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000b1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:53:42 compute-0 nova_compute[257802]: 2025-10-02 12:53:42.150 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000b1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:53:42 compute-0 nova_compute[257802]: 2025-10-02 12:53:42.153 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000b6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:53:42 compute-0 nova_compute[257802]: 2025-10-02 12:53:42.153 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000b6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:53:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2834: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:42 compute-0 nova_compute[257802]: 2025-10-02 12:53:42.307 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:53:42 compute-0 nova_compute[257802]: 2025-10-02 12:53:42.309 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3842MB free_disk=20.760520935058594GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:53:42 compute-0 nova_compute[257802]: 2025-10-02 12:53:42.309 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:42 compute-0 nova_compute[257802]: 2025-10-02 12:53:42.309 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:42 compute-0 nova_compute[257802]: 2025-10-02 12:53:42.413 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 8e2c1007-1d07-434c-8a22-6cb98d903d3c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:53:42 compute-0 nova_compute[257802]: 2025-10-02 12:53:42.413 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 42da5a56-55e8-4a1a-a524-24555a4bd3ec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:53:42 compute-0 nova_compute[257802]: 2025-10-02 12:53:42.413 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:53:42 compute-0 nova_compute[257802]: 2025-10-02 12:53:42.414 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:53:42 compute-0 nova_compute[257802]: 2025-10-02 12:53:42.485 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:53:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:53:42
Oct 02 12:53:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:53:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:53:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'backups']
Oct 02 12:53:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:53:42 compute-0 sudo[373677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:42 compute-0 sudo[373677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:42 compute-0 sudo[373677]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:42 compute-0 nova_compute[257802]: 2025-10-02 12:53:42.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2715256421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3648494638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1226854057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:42 compute-0 sudo[373721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:53:42 compute-0 sudo[373721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:42 compute-0 sudo[373721]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:42.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:53:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:53:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:53:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:53:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:53:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:53:42 compute-0 sudo[373747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:42 compute-0 sudo[373747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:42 compute-0 sudo[373747]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:42 compute-0 sudo[373772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:53:42 compute-0 sudo[373772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:53:42 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2592904405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:42 compute-0 nova_compute[257802]: 2025-10-02 12:53:42.945 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:53:42 compute-0 nova_compute[257802]: 2025-10-02 12:53:42.950 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:53:43 compute-0 nova_compute[257802]: 2025-10-02 12:53:43.041 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:53:43 compute-0 nova_compute[257802]: 2025-10-02 12:53:43.043 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:53:43 compute-0 nova_compute[257802]: 2025-10-02 12:53:43.044 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:43 compute-0 sudo[373772]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:53:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:53:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:53:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:53:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:53:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:53:43 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:53:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:53:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:53:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:53:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:53:43 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 95224e53-70c3-4978-8f6f-5635a6701f87 does not exist
Oct 02 12:53:43 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 7bc09dce-b600-4d1a-8006-cc34a6c07a18 does not exist
Oct 02 12:53:43 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 44bbe621-5811-4926-bd96-1974adcac248 does not exist
Oct 02 12:53:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:53:43 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:53:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:53:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:53:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:53:43 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:53:43 compute-0 sudo[373830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:43 compute-0 sudo[373830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:43 compute-0 sudo[373830]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:43 compute-0 sudo[373855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:53:43 compute-0 sudo[373855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:43 compute-0 sudo[373855]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:43 compute-0 sudo[373880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:43 compute-0 sudo[373880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:43 compute-0 sudo[373880]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:43 compute-0 ceph-mon[73607]: pgmap v2834: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2592904405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:43 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:53:43 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:53:43 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:53:43 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:53:43 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:53:43 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:53:43 compute-0 sudo[373905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:53:43 compute-0 sudo[373905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:53:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:43.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:53:44 compute-0 podman[373969]: 2025-10-02 12:53:44.054747989 +0000 UTC m=+0.046047281 container create f1d07c9206680e9a452694f1aa783d157d87aec40e547fd50a913d78928396f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:53:44 compute-0 systemd[1]: Started libpod-conmon-f1d07c9206680e9a452694f1aa783d157d87aec40e547fd50a913d78928396f9.scope.
Oct 02 12:53:44 compute-0 podman[373969]: 2025-10-02 12:53:44.030234757 +0000 UTC m=+0.021534079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:53:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:53:44 compute-0 podman[373969]: 2025-10-02 12:53:44.161217055 +0000 UTC m=+0.152516377 container init f1d07c9206680e9a452694f1aa783d157d87aec40e547fd50a913d78928396f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 12:53:44 compute-0 podman[373969]: 2025-10-02 12:53:44.168588866 +0000 UTC m=+0.159888158 container start f1d07c9206680e9a452694f1aa783d157d87aec40e547fd50a913d78928396f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 12:53:44 compute-0 podman[373969]: 2025-10-02 12:53:44.173308581 +0000 UTC m=+0.164607893 container attach f1d07c9206680e9a452694f1aa783d157d87aec40e547fd50a913d78928396f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_driscoll, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:53:44 compute-0 happy_driscoll[373986]: 167 167
Oct 02 12:53:44 compute-0 systemd[1]: libpod-f1d07c9206680e9a452694f1aa783d157d87aec40e547fd50a913d78928396f9.scope: Deactivated successfully.
Oct 02 12:53:44 compute-0 podman[373969]: 2025-10-02 12:53:44.176744746 +0000 UTC m=+0.168044038 container died f1d07c9206680e9a452694f1aa783d157d87aec40e547fd50a913d78928396f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_driscoll, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:53:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:53:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:53:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:53:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:53:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:53:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2835: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a0ecbb32ad9580856daa580847418b3466d47e280f5ecfe468b9f11aa208c01-merged.mount: Deactivated successfully.
Oct 02 12:53:44 compute-0 podman[373969]: 2025-10-02 12:53:44.217397245 +0000 UTC m=+0.208696537 container remove f1d07c9206680e9a452694f1aa783d157d87aec40e547fd50a913d78928396f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:53:44 compute-0 systemd[1]: libpod-conmon-f1d07c9206680e9a452694f1aa783d157d87aec40e547fd50a913d78928396f9.scope: Deactivated successfully.
Oct 02 12:53:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:44 compute-0 podman[374010]: 2025-10-02 12:53:44.428606984 +0000 UTC m=+0.072909373 container create bfa143fc0ab895b836e28876d13bf8299e1e25a5599334b7ece2e06b7cdc6f7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hopper, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 12:53:44 compute-0 systemd[1]: Started libpod-conmon-bfa143fc0ab895b836e28876d13bf8299e1e25a5599334b7ece2e06b7cdc6f7a.scope.
Oct 02 12:53:44 compute-0 podman[374010]: 2025-10-02 12:53:44.379799244 +0000 UTC m=+0.024101653 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:53:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:53:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5d0bd711729048413cc5008c678d9c6e08090c334d9faef521b67a882c79dfe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:53:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5d0bd711729048413cc5008c678d9c6e08090c334d9faef521b67a882c79dfe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:53:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5d0bd711729048413cc5008c678d9c6e08090c334d9faef521b67a882c79dfe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:53:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5d0bd711729048413cc5008c678d9c6e08090c334d9faef521b67a882c79dfe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:53:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5d0bd711729048413cc5008c678d9c6e08090c334d9faef521b67a882c79dfe/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:53:44 compute-0 podman[374010]: 2025-10-02 12:53:44.524602051 +0000 UTC m=+0.168904480 container init bfa143fc0ab895b836e28876d13bf8299e1e25a5599334b7ece2e06b7cdc6f7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hopper, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:53:44 compute-0 podman[374010]: 2025-10-02 12:53:44.533179692 +0000 UTC m=+0.177482101 container start bfa143fc0ab895b836e28876d13bf8299e1e25a5599334b7ece2e06b7cdc6f7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hopper, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:53:44 compute-0 podman[374010]: 2025-10-02 12:53:44.537257402 +0000 UTC m=+0.181559801 container attach bfa143fc0ab895b836e28876d13bf8299e1e25a5599334b7ece2e06b7cdc6f7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 12:53:44 compute-0 nova_compute[257802]: 2025-10-02 12:53:44.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:44.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:44 compute-0 ceph-mon[73607]: pgmap v2835: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:45 compute-0 distracted_hopper[374026]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:53:45 compute-0 distracted_hopper[374026]: --> relative data size: 1.0
Oct 02 12:53:45 compute-0 distracted_hopper[374026]: --> All data devices are unavailable
Oct 02 12:53:45 compute-0 systemd[1]: libpod-bfa143fc0ab895b836e28876d13bf8299e1e25a5599334b7ece2e06b7cdc6f7a.scope: Deactivated successfully.
Oct 02 12:53:45 compute-0 podman[374010]: 2025-10-02 12:53:45.339372676 +0000 UTC m=+0.983675065 container died bfa143fc0ab895b836e28876d13bf8299e1e25a5599334b7ece2e06b7cdc6f7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 12:53:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5d0bd711729048413cc5008c678d9c6e08090c334d9faef521b67a882c79dfe-merged.mount: Deactivated successfully.
Oct 02 12:53:45 compute-0 podman[374010]: 2025-10-02 12:53:45.386974605 +0000 UTC m=+1.031276994 container remove bfa143fc0ab895b836e28876d13bf8299e1e25a5599334b7ece2e06b7cdc6f7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Oct 02 12:53:45 compute-0 systemd[1]: libpod-conmon-bfa143fc0ab895b836e28876d13bf8299e1e25a5599334b7ece2e06b7cdc6f7a.scope: Deactivated successfully.
Oct 02 12:53:45 compute-0 sudo[373905]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:45 compute-0 sudo[374055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:45 compute-0 sudo[374055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:45 compute-0 sudo[374055]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:45 compute-0 sudo[374080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:53:45 compute-0 sudo[374080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:45 compute-0 sudo[374080]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:45 compute-0 sudo[374105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:45 compute-0 sudo[374105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:45 compute-0 sudo[374105]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:45 compute-0 sudo[374130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:53:45 compute-0 sudo[374130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:53:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:45.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:53:45 compute-0 podman[374196]: 2025-10-02 12:53:45.957582343 +0000 UTC m=+0.060396766 container create d0ec82a1d4e8e649c675f131055046bce73a8274b6c238720eecf10634a817cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_perlman, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Oct 02 12:53:46 compute-0 podman[374196]: 2025-10-02 12:53:45.918431451 +0000 UTC m=+0.021245904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:53:46 compute-0 systemd[1]: Started libpod-conmon-d0ec82a1d4e8e649c675f131055046bce73a8274b6c238720eecf10634a817cc.scope.
Oct 02 12:53:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:53:46 compute-0 podman[374196]: 2025-10-02 12:53:46.099018937 +0000 UTC m=+0.201833360 container init d0ec82a1d4e8e649c675f131055046bce73a8274b6c238720eecf10634a817cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:53:46 compute-0 podman[374196]: 2025-10-02 12:53:46.106253094 +0000 UTC m=+0.209067517 container start d0ec82a1d4e8e649c675f131055046bce73a8274b6c238720eecf10634a817cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_perlman, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:53:46 compute-0 ecstatic_perlman[374212]: 167 167
Oct 02 12:53:46 compute-0 systemd[1]: libpod-d0ec82a1d4e8e649c675f131055046bce73a8274b6c238720eecf10634a817cc.scope: Deactivated successfully.
Oct 02 12:53:46 compute-0 conmon[374212]: conmon d0ec82a1d4e8e649c675 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d0ec82a1d4e8e649c675f131055046bce73a8274b6c238720eecf10634a817cc.scope/container/memory.events
Oct 02 12:53:46 compute-0 podman[374196]: 2025-10-02 12:53:46.151142287 +0000 UTC m=+0.253956740 container attach d0ec82a1d4e8e649c675f131055046bce73a8274b6c238720eecf10634a817cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_perlman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:53:46 compute-0 podman[374196]: 2025-10-02 12:53:46.151908925 +0000 UTC m=+0.254723348 container died d0ec82a1d4e8e649c675f131055046bce73a8274b6c238720eecf10634a817cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:53:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2836: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-b537a5f192e98a019d5e593f15341099fb11f829ac21db7b671fe99ebd2f8deb-merged.mount: Deactivated successfully.
Oct 02 12:53:46 compute-0 podman[374196]: 2025-10-02 12:53:46.450153502 +0000 UTC m=+0.552967925 container remove d0ec82a1d4e8e649c675f131055046bce73a8274b6c238720eecf10634a817cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_perlman, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:53:46 compute-0 systemd[1]: libpod-conmon-d0ec82a1d4e8e649c675f131055046bce73a8274b6c238720eecf10634a817cc.scope: Deactivated successfully.
Oct 02 12:53:46 compute-0 podman[374238]: 2025-10-02 12:53:46.649450538 +0000 UTC m=+0.075140607 container create 54c24cc2a66068e4b1c62e4d8ed896c85667d045c8d88ffc88c0826e7e6762a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_zhukovsky, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:53:46 compute-0 podman[374238]: 2025-10-02 12:53:46.596308243 +0000 UTC m=+0.021998332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:53:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:46.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:46 compute-0 systemd[1]: Started libpod-conmon-54c24cc2a66068e4b1c62e4d8ed896c85667d045c8d88ffc88c0826e7e6762a3.scope.
Oct 02 12:53:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:53:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1123460e29e4372db90af4c4639984ee010f3627743b4b978a89a001c2657294/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:53:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1123460e29e4372db90af4c4639984ee010f3627743b4b978a89a001c2657294/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:53:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1123460e29e4372db90af4c4639984ee010f3627743b4b978a89a001c2657294/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:53:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1123460e29e4372db90af4c4639984ee010f3627743b4b978a89a001c2657294/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:53:46 compute-0 podman[374238]: 2025-10-02 12:53:46.78223584 +0000 UTC m=+0.207925939 container init 54c24cc2a66068e4b1c62e4d8ed896c85667d045c8d88ffc88c0826e7e6762a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:53:46 compute-0 podman[374238]: 2025-10-02 12:53:46.790553534 +0000 UTC m=+0.216243603 container start 54c24cc2a66068e4b1c62e4d8ed896c85667d045c8d88ffc88c0826e7e6762a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_zhukovsky, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 12:53:46 compute-0 podman[374238]: 2025-10-02 12:53:46.802670292 +0000 UTC m=+0.228360361 container attach 54c24cc2a66068e4b1c62e4d8ed896c85667d045c8d88ffc88c0826e7e6762a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_zhukovsky, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 12:53:47 compute-0 ceph-mon[73607]: pgmap v2836: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]: {
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]:     "1": [
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]:         {
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]:             "devices": [
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]:                 "/dev/loop3"
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]:             ],
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]:             "lv_name": "ceph_lv0",
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]:             "lv_size": "7511998464",
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]:             "name": "ceph_lv0",
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]:             "tags": {
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]:                 "ceph.cluster_name": "ceph",
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]:                 "ceph.crush_device_class": "",
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]:                 "ceph.encrypted": "0",
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]:                 "ceph.osd_id": "1",
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]:                 "ceph.type": "block",
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]:                 "ceph.vdo": "0"
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]:             },
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]:             "type": "block",
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]:             "vg_name": "ceph_vg0"
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]:         }
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]:     ]
Oct 02 12:53:47 compute-0 wizardly_zhukovsky[374255]: }
Oct 02 12:53:47 compute-0 systemd[1]: libpod-54c24cc2a66068e4b1c62e4d8ed896c85667d045c8d88ffc88c0826e7e6762a3.scope: Deactivated successfully.
Oct 02 12:53:47 compute-0 conmon[374255]: conmon 54c24cc2a66068e4b1c6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-54c24cc2a66068e4b1c62e4d8ed896c85667d045c8d88ffc88c0826e7e6762a3.scope/container/memory.events
Oct 02 12:53:47 compute-0 podman[374238]: 2025-10-02 12:53:47.577360241 +0000 UTC m=+1.003050310 container died 54c24cc2a66068e4b1c62e4d8ed896c85667d045c8d88ffc88c0826e7e6762a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_zhukovsky, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:53:47 compute-0 nova_compute[257802]: 2025-10-02 12:53:47.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-1123460e29e4372db90af4c4639984ee010f3627743b4b978a89a001c2657294-merged.mount: Deactivated successfully.
Oct 02 12:53:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:47.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:47 compute-0 nova_compute[257802]: 2025-10-02 12:53:47.804 2 DEBUG oslo_concurrency.lockutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "74bffcba-6d85-4f82-87ed-b625a3439b3a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:47 compute-0 nova_compute[257802]: 2025-10-02 12:53:47.806 2 DEBUG oslo_concurrency.lockutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "74bffcba-6d85-4f82-87ed-b625a3439b3a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:47 compute-0 nova_compute[257802]: 2025-10-02 12:53:47.883 2 DEBUG nova.compute.manager [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:53:47 compute-0 podman[374238]: 2025-10-02 12:53:47.916720188 +0000 UTC m=+1.342410257 container remove 54c24cc2a66068e4b1c62e4d8ed896c85667d045c8d88ffc88c0826e7e6762a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_zhukovsky, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 12:53:47 compute-0 systemd[1]: libpod-conmon-54c24cc2a66068e4b1c62e4d8ed896c85667d045c8d88ffc88c0826e7e6762a3.scope: Deactivated successfully.
Oct 02 12:53:47 compute-0 sudo[374130]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:48 compute-0 ovn_controller[148183]: 2025-10-02T12:53:48Z|00810|binding|INFO|Releasing lport 1fc80788-89b8-413a-b0b0-d36f1a11a2b1 from this chassis (sb_readonly=0)
Oct 02 12:53:48 compute-0 ovn_controller[148183]: 2025-10-02T12:53:48Z|00811|binding|INFO|Releasing lport 6947a742-74da-45a4-bca1-52180ae211d0 from this chassis (sb_readonly=0)
Oct 02 12:53:48 compute-0 sudo[374277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:48 compute-0 sudo[374277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:48 compute-0 sudo[374277]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:48 compute-0 nova_compute[257802]: 2025-10-02 12:53:48.055 2 DEBUG oslo_concurrency.lockutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:48 compute-0 nova_compute[257802]: 2025-10-02 12:53:48.056 2 DEBUG oslo_concurrency.lockutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:48 compute-0 nova_compute[257802]: 2025-10-02 12:53:48.068 2 DEBUG nova.virt.hardware [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:53:48 compute-0 nova_compute[257802]: 2025-10-02 12:53:48.069 2 INFO nova.compute.claims [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:53:48 compute-0 nova_compute[257802]: 2025-10-02 12:53:48.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:48 compute-0 sudo[374302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:53:48 compute-0 sudo[374302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:48 compute-0 sudo[374302]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:48 compute-0 sudo[374327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:48 compute-0 sudo[374327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:48 compute-0 sudo[374327]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2837: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:48 compute-0 sudo[374352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:53:48 compute-0 sudo[374352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:48 compute-0 nova_compute[257802]: 2025-10-02 12:53:48.335 2 DEBUG oslo_concurrency.processutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:53:48 compute-0 podman[374436]: 2025-10-02 12:53:48.55062518 +0000 UTC m=+0.063811879 container create 890e10f923709c356930505976b843e1a5ffd9cea5a799a3be2814bcb4c9ae1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_joliot, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 12:53:48 compute-0 systemd[1]: Started libpod-conmon-890e10f923709c356930505976b843e1a5ffd9cea5a799a3be2814bcb4c9ae1b.scope.
Oct 02 12:53:48 compute-0 podman[374436]: 2025-10-02 12:53:48.512944565 +0000 UTC m=+0.026131284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:53:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:53:48 compute-0 podman[374436]: 2025-10-02 12:53:48.653550708 +0000 UTC m=+0.166737437 container init 890e10f923709c356930505976b843e1a5ffd9cea5a799a3be2814bcb4c9ae1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:53:48 compute-0 podman[374436]: 2025-10-02 12:53:48.661256218 +0000 UTC m=+0.174442917 container start 890e10f923709c356930505976b843e1a5ffd9cea5a799a3be2814bcb4c9ae1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_joliot, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:53:48 compute-0 trusting_joliot[374452]: 167 167
Oct 02 12:53:48 compute-0 systemd[1]: libpod-890e10f923709c356930505976b843e1a5ffd9cea5a799a3be2814bcb4c9ae1b.scope: Deactivated successfully.
Oct 02 12:53:48 compute-0 podman[374436]: 2025-10-02 12:53:48.683911984 +0000 UTC m=+0.197098683 container attach 890e10f923709c356930505976b843e1a5ffd9cea5a799a3be2814bcb4c9ae1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_joliot, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:53:48 compute-0 podman[374436]: 2025-10-02 12:53:48.684219462 +0000 UTC m=+0.197406161 container died 890e10f923709c356930505976b843e1a5ffd9cea5a799a3be2814bcb4c9ae1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_joliot, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 12:53:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:48.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f06b608920f4cbcabd4b0d424e5834282cc1249566a6efcc3a5ec6511773d63-merged.mount: Deactivated successfully.
Oct 02 12:53:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:53:48 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2394809665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:48 compute-0 nova_compute[257802]: 2025-10-02 12:53:48.776 2 DEBUG oslo_concurrency.processutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:53:48 compute-0 nova_compute[257802]: 2025-10-02 12:53:48.782 2 DEBUG nova.compute.provider_tree [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:53:48 compute-0 podman[374436]: 2025-10-02 12:53:48.819168026 +0000 UTC m=+0.332354725 container remove 890e10f923709c356930505976b843e1a5ffd9cea5a799a3be2814bcb4c9ae1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_joliot, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:53:48 compute-0 systemd[1]: libpod-conmon-890e10f923709c356930505976b843e1a5ffd9cea5a799a3be2814bcb4c9ae1b.scope: Deactivated successfully.
Oct 02 12:53:48 compute-0 nova_compute[257802]: 2025-10-02 12:53:48.854 2 DEBUG nova.scheduler.client.report [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:53:48 compute-0 nova_compute[257802]: 2025-10-02 12:53:48.963 2 DEBUG oslo_concurrency.lockutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:48 compute-0 nova_compute[257802]: 2025-10-02 12:53:48.965 2 DEBUG nova.compute.manager [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:53:49 compute-0 podman[374482]: 2025-10-02 12:53:49.022787319 +0000 UTC m=+0.077429914 container create aafb348ab2ab38f4174cb0eb308729aa8a93597981428cf68cbdfc8b77d0f03d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:53:49 compute-0 podman[374482]: 2025-10-02 12:53:48.966862715 +0000 UTC m=+0.021505330 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:53:49 compute-0 nova_compute[257802]: 2025-10-02 12:53:49.074 2 DEBUG nova.compute.manager [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:53:49 compute-0 nova_compute[257802]: 2025-10-02 12:53:49.074 2 DEBUG nova.network.neutron [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:53:49 compute-0 systemd[1]: Started libpod-conmon-aafb348ab2ab38f4174cb0eb308729aa8a93597981428cf68cbdfc8b77d0f03d.scope.
Oct 02 12:53:49 compute-0 nova_compute[257802]: 2025-10-02 12:53:49.131 2 INFO nova.virt.libvirt.driver [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:53:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:53:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fc258d1b8947492c07bef43f79a49a2448be53aa560d94a53c4cf6b671a5267/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:53:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fc258d1b8947492c07bef43f79a49a2448be53aa560d94a53c4cf6b671a5267/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:53:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fc258d1b8947492c07bef43f79a49a2448be53aa560d94a53c4cf6b671a5267/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:53:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fc258d1b8947492c07bef43f79a49a2448be53aa560d94a53c4cf6b671a5267/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:53:49 compute-0 nova_compute[257802]: 2025-10-02 12:53:49.202 2 DEBUG nova.compute.manager [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:53:49 compute-0 podman[374482]: 2025-10-02 12:53:49.232573342 +0000 UTC m=+0.287215937 container init aafb348ab2ab38f4174cb0eb308729aa8a93597981428cf68cbdfc8b77d0f03d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_jackson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:53:49 compute-0 podman[374482]: 2025-10-02 12:53:49.240571618 +0000 UTC m=+0.295214213 container start aafb348ab2ab38f4174cb0eb308729aa8a93597981428cf68cbdfc8b77d0f03d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 12:53:49 compute-0 podman[374482]: 2025-10-02 12:53:49.285395839 +0000 UTC m=+0.340038614 container attach aafb348ab2ab38f4174cb0eb308729aa8a93597981428cf68cbdfc8b77d0f03d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_jackson, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:53:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:49 compute-0 ceph-mon[73607]: pgmap v2837: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Oct 02 12:53:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2394809665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:49 compute-0 nova_compute[257802]: 2025-10-02 12:53:49.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:53:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:49.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:53:49 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:53:49.868 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:49 compute-0 inspiring_jackson[374499]: {
Oct 02 12:53:49 compute-0 inspiring_jackson[374499]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:53:49 compute-0 inspiring_jackson[374499]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:53:49 compute-0 inspiring_jackson[374499]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:53:49 compute-0 inspiring_jackson[374499]:         "osd_id": 1,
Oct 02 12:53:49 compute-0 inspiring_jackson[374499]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:53:49 compute-0 inspiring_jackson[374499]:         "type": "bluestore"
Oct 02 12:53:49 compute-0 inspiring_jackson[374499]:     }
Oct 02 12:53:49 compute-0 inspiring_jackson[374499]: }
Oct 02 12:53:50 compute-0 systemd[1]: libpod-aafb348ab2ab38f4174cb0eb308729aa8a93597981428cf68cbdfc8b77d0f03d.scope: Deactivated successfully.
Oct 02 12:53:50 compute-0 podman[374482]: 2025-10-02 12:53:50.010716326 +0000 UTC m=+1.065358951 container died aafb348ab2ab38f4174cb0eb308729aa8a93597981428cf68cbdfc8b77d0f03d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 12:53:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fc258d1b8947492c07bef43f79a49a2448be53aa560d94a53c4cf6b671a5267-merged.mount: Deactivated successfully.
Oct 02 12:53:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2838: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.8 KiB/s rd, 8.5 KiB/s wr, 2 op/s
Oct 02 12:53:50 compute-0 podman[374482]: 2025-10-02 12:53:50.280416712 +0000 UTC m=+1.335059317 container remove aafb348ab2ab38f4174cb0eb308729aa8a93597981428cf68cbdfc8b77d0f03d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_jackson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:53:50 compute-0 systemd[1]: libpod-conmon-aafb348ab2ab38f4174cb0eb308729aa8a93597981428cf68cbdfc8b77d0f03d.scope: Deactivated successfully.
Oct 02 12:53:50 compute-0 sudo[374352]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:53:50 compute-0 podman[374533]: 2025-10-02 12:53:50.331740863 +0000 UTC m=+0.276207917 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid)
Oct 02 12:53:50 compute-0 podman[374521]: 2025-10-02 12:53:50.340768844 +0000 UTC m=+0.295904209 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:53:50 compute-0 podman[374530]: 2025-10-02 12:53:50.354574724 +0000 UTC m=+0.305640519 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3)
Oct 02 12:53:50 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:53:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:53:50 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:53:50 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 9da3e0a5-9e54-49ff-9e30-024a9a3d7d5c does not exist
Oct 02 12:53:50 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 90e3a843-b280-4008-afec-a89991d2353c does not exist
Oct 02 12:53:50 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 7475aaee-aa7f-4255-85ef-5bfff2a02823 does not exist
Oct 02 12:53:50 compute-0 sudo[374592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:50 compute-0 sudo[374592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:50 compute-0 sudo[374592]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:50 compute-0 sudo[374617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:53:50 compute-0 sudo[374617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:50 compute-0 sudo[374617]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:50.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:50 compute-0 nova_compute[257802]: 2025-10-02 12:53:50.834 2 DEBUG nova.policy [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb366465e6154871b8a53c9f500105ce', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:53:51 compute-0 ceph-mon[73607]: pgmap v2838: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.8 KiB/s rd, 8.5 KiB/s wr, 2 op/s
Oct 02 12:53:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:53:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:53:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:51.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2839: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.8 KiB/s rd, 8.5 KiB/s wr, 2 op/s
Oct 02 12:53:52 compute-0 nova_compute[257802]: 2025-10-02 12:53:52.263 2 DEBUG nova.compute.manager [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:53:52 compute-0 nova_compute[257802]: 2025-10-02 12:53:52.265 2 DEBUG nova.virt.libvirt.driver [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:53:52 compute-0 nova_compute[257802]: 2025-10-02 12:53:52.266 2 INFO nova.virt.libvirt.driver [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Creating image(s)
Oct 02 12:53:52 compute-0 nova_compute[257802]: 2025-10-02 12:53:52.307 2 DEBUG nova.storage.rbd_utils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 74bffcba-6d85-4f82-87ed-b625a3439b3a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:53:52 compute-0 nova_compute[257802]: 2025-10-02 12:53:52.339 2 DEBUG nova.storage.rbd_utils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 74bffcba-6d85-4f82-87ed-b625a3439b3a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:53:52 compute-0 nova_compute[257802]: 2025-10-02 12:53:52.369 2 DEBUG nova.storage.rbd_utils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 74bffcba-6d85-4f82-87ed-b625a3439b3a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:53:52 compute-0 nova_compute[257802]: 2025-10-02 12:53:52.373 2 DEBUG oslo_concurrency.processutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:53:52 compute-0 nova_compute[257802]: 2025-10-02 12:53:52.437 2 DEBUG oslo_concurrency.processutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:53:52 compute-0 nova_compute[257802]: 2025-10-02 12:53:52.438 2 DEBUG oslo_concurrency.lockutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:52 compute-0 nova_compute[257802]: 2025-10-02 12:53:52.439 2 DEBUG oslo_concurrency.lockutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:52 compute-0 nova_compute[257802]: 2025-10-02 12:53:52.439 2 DEBUG oslo_concurrency.lockutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:52 compute-0 nova_compute[257802]: 2025-10-02 12:53:52.472 2 DEBUG nova.storage.rbd_utils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 74bffcba-6d85-4f82-87ed-b625a3439b3a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:53:52 compute-0 nova_compute[257802]: 2025-10-02 12:53:52.477 2 DEBUG oslo_concurrency.processutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 74bffcba-6d85-4f82-87ed-b625a3439b3a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:53:52 compute-0 nova_compute[257802]: 2025-10-02 12:53:52.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:52.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:53 compute-0 ceph-mon[73607]: pgmap v2839: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.8 KiB/s rd, 8.5 KiB/s wr, 2 op/s
Oct 02 12:53:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:53:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:53.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:53:53 compute-0 nova_compute[257802]: 2025-10-02 12:53:53.875 2 DEBUG oslo_concurrency.processutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 74bffcba-6d85-4f82-87ed-b625a3439b3a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:53:53 compute-0 nova_compute[257802]: 2025-10-02 12:53:53.965 2 DEBUG nova.storage.rbd_utils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] resizing rbd image 74bffcba-6d85-4f82-87ed-b625a3439b3a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:53:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2840: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.8 KiB/s rd, 8.5 KiB/s wr, 2 op/s
Oct 02 12:53:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:54 compute-0 nova_compute[257802]: 2025-10-02 12:53:54.477 2 DEBUG nova.objects.instance [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'migration_context' on Instance uuid 74bffcba-6d85-4f82-87ed-b625a3439b3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:53:54 compute-0 nova_compute[257802]: 2025-10-02 12:53:54.554 2 DEBUG nova.virt.libvirt.driver [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:53:54 compute-0 nova_compute[257802]: 2025-10-02 12:53:54.554 2 DEBUG nova.virt.libvirt.driver [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Ensure instance console log exists: /var/lib/nova/instances/74bffcba-6d85-4f82-87ed-b625a3439b3a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:53:54 compute-0 nova_compute[257802]: 2025-10-02 12:53:54.555 2 DEBUG oslo_concurrency.lockutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:54 compute-0 nova_compute[257802]: 2025-10-02 12:53:54.555 2 DEBUG oslo_concurrency.lockutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:54 compute-0 nova_compute[257802]: 2025-10-02 12:53:54.556 2 DEBUG oslo_concurrency.lockutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:54 compute-0 nova_compute[257802]: 2025-10-02 12:53:54.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:53:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:53:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:54.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:53:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:53:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:53:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:53:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.01085432951796017 of space, bias 1.0, pg target 3.256298855388051 quantized to 32 (current 32)
Oct 02 12:53:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:53:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6421021121231156 quantized to 32 (current 32)
Oct 02 12:53:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:53:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:53:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:53:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.007034230760282896 of space, bias 1.0, pg target 2.0891665358040203 quantized to 32 (current 32)
Oct 02 12:53:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:53:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Oct 02 12:53:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:53:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:53:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:53:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.0002680866717848502 quantized to 32 (current 32)
Oct 02 12:53:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:53:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Oct 02 12:53:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:53:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:53:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:53:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Oct 02 12:53:54 compute-0 ceph-mon[73607]: pgmap v2840: 305 pgs: 305 active+clean; 741 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.8 KiB/s rd, 8.5 KiB/s wr, 2 op/s
Oct 02 12:53:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:55.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3449421909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:53:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3629671726' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:53:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3629671726' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:53:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2875896490' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:53:55 compute-0 podman[374811]: 2025-10-02 12:53:55.965444945 +0000 UTC m=+0.100901690 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:53:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2841: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 02 12:53:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:56.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:56 compute-0 ceph-mon[73607]: pgmap v2841: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 02 12:53:57 compute-0 nova_compute[257802]: 2025-10-02 12:53:57.176 2 DEBUG nova.network.neutron [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Successfully created port: 2cc0b7ac-ce39-4a91-b50b-d0d21320173d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:53:57 compute-0 sudo[374840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:57 compute-0 sudo[374840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:57 compute-0 sudo[374840]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:57 compute-0 sudo[374865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:57 compute-0 sudo[374865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:57 compute-0 sudo[374865]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:57 compute-0 nova_compute[257802]: 2025-10-02 12:53:57.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:53:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:57.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:53:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/703401347' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:53:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2842: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 02 12:53:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:58.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:58 compute-0 nova_compute[257802]: 2025-10-02 12:53:58.807 2 DEBUG nova.network.neutron [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Successfully updated port: 2cc0b7ac-ce39-4a91-b50b-d0d21320173d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:53:58 compute-0 nova_compute[257802]: 2025-10-02 12:53:58.874 2 DEBUG oslo_concurrency.lockutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "refresh_cache-74bffcba-6d85-4f82-87ed-b625a3439b3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:53:58 compute-0 nova_compute[257802]: 2025-10-02 12:53:58.875 2 DEBUG oslo_concurrency.lockutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquired lock "refresh_cache-74bffcba-6d85-4f82-87ed-b625a3439b3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:53:58 compute-0 nova_compute[257802]: 2025-10-02 12:53:58.875 2 DEBUG nova.network.neutron [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:53:59 compute-0 ceph-mon[73607]: pgmap v2842: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 02 12:53:59 compute-0 nova_compute[257802]: 2025-10-02 12:53:59.163 2 DEBUG nova.network.neutron [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:53:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:59 compute-0 nova_compute[257802]: 2025-10-02 12:53:59.509 2 DEBUG nova.compute.manager [req-b994064d-f0a7-48d8-ad0a-5da4235ebf10 req-797e665f-f2be-471b-89f0-ae37782a4b9c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Received event network-changed-2cc0b7ac-ce39-4a91-b50b-d0d21320173d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:53:59 compute-0 nova_compute[257802]: 2025-10-02 12:53:59.509 2 DEBUG nova.compute.manager [req-b994064d-f0a7-48d8-ad0a-5da4235ebf10 req-797e665f-f2be-471b-89f0-ae37782a4b9c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Refreshing instance network info cache due to event network-changed-2cc0b7ac-ce39-4a91-b50b-d0d21320173d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:53:59 compute-0 nova_compute[257802]: 2025-10-02 12:53:59.509 2 DEBUG oslo_concurrency.lockutils [req-b994064d-f0a7-48d8-ad0a-5da4235ebf10 req-797e665f-f2be-471b-89f0-ae37782a4b9c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-74bffcba-6d85-4f82-87ed-b625a3439b3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:53:59 compute-0 nova_compute[257802]: 2025-10-02 12:53:59.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:53:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:59.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2843: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Oct 02 12:54:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:54:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:00.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:54:01 compute-0 ceph-mon[73607]: pgmap v2843: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Oct 02 12:54:01 compute-0 nova_compute[257802]: 2025-10-02 12:54:01.470 2 DEBUG nova.network.neutron [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Updating instance_info_cache with network_info: [{"id": "2cc0b7ac-ce39-4a91-b50b-d0d21320173d", "address": "fa:16:3e:6a:82:95", "network": {"id": "15afb19f-043a-469d-96b6-7de0ff8590f7", "bridge": "br-int", "label": "tempest-network-smoke--253378791", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cc0b7ac-ce", "ovs_interfaceid": "2cc0b7ac-ce39-4a91-b50b-d0d21320173d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:54:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:01.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:01 compute-0 nova_compute[257802]: 2025-10-02 12:54:01.989 2 DEBUG oslo_concurrency.lockutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Releasing lock "refresh_cache-74bffcba-6d85-4f82-87ed-b625a3439b3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:54:01 compute-0 nova_compute[257802]: 2025-10-02 12:54:01.990 2 DEBUG nova.compute.manager [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Instance network_info: |[{"id": "2cc0b7ac-ce39-4a91-b50b-d0d21320173d", "address": "fa:16:3e:6a:82:95", "network": {"id": "15afb19f-043a-469d-96b6-7de0ff8590f7", "bridge": "br-int", "label": "tempest-network-smoke--253378791", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cc0b7ac-ce", "ovs_interfaceid": "2cc0b7ac-ce39-4a91-b50b-d0d21320173d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:54:01 compute-0 nova_compute[257802]: 2025-10-02 12:54:01.990 2 DEBUG oslo_concurrency.lockutils [req-b994064d-f0a7-48d8-ad0a-5da4235ebf10 req-797e665f-f2be-471b-89f0-ae37782a4b9c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-74bffcba-6d85-4f82-87ed-b625a3439b3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:54:01 compute-0 nova_compute[257802]: 2025-10-02 12:54:01.990 2 DEBUG nova.network.neutron [req-b994064d-f0a7-48d8-ad0a-5da4235ebf10 req-797e665f-f2be-471b-89f0-ae37782a4b9c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Refreshing network info cache for port 2cc0b7ac-ce39-4a91-b50b-d0d21320173d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:54:01 compute-0 nova_compute[257802]: 2025-10-02 12:54:01.993 2 DEBUG nova.virt.libvirt.driver [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Start _get_guest_xml network_info=[{"id": "2cc0b7ac-ce39-4a91-b50b-d0d21320173d", "address": "fa:16:3e:6a:82:95", "network": {"id": "15afb19f-043a-469d-96b6-7de0ff8590f7", "bridge": "br-int", "label": "tempest-network-smoke--253378791", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cc0b7ac-ce", "ovs_interfaceid": "2cc0b7ac-ce39-4a91-b50b-d0d21320173d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:54:01 compute-0 nova_compute[257802]: 2025-10-02 12:54:01.998 2 WARNING nova.virt.libvirt.driver [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:54:02 compute-0 nova_compute[257802]: 2025-10-02 12:54:02.006 2 DEBUG nova.virt.libvirt.host [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:54:02 compute-0 nova_compute[257802]: 2025-10-02 12:54:02.007 2 DEBUG nova.virt.libvirt.host [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:54:02 compute-0 nova_compute[257802]: 2025-10-02 12:54:02.010 2 DEBUG nova.virt.libvirt.host [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:54:02 compute-0 nova_compute[257802]: 2025-10-02 12:54:02.010 2 DEBUG nova.virt.libvirt.host [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:54:02 compute-0 nova_compute[257802]: 2025-10-02 12:54:02.011 2 DEBUG nova.virt.libvirt.driver [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:54:02 compute-0 nova_compute[257802]: 2025-10-02 12:54:02.012 2 DEBUG nova.virt.hardware [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:54:02 compute-0 nova_compute[257802]: 2025-10-02 12:54:02.013 2 DEBUG nova.virt.hardware [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:54:02 compute-0 nova_compute[257802]: 2025-10-02 12:54:02.013 2 DEBUG nova.virt.hardware [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:54:02 compute-0 nova_compute[257802]: 2025-10-02 12:54:02.013 2 DEBUG nova.virt.hardware [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:54:02 compute-0 nova_compute[257802]: 2025-10-02 12:54:02.014 2 DEBUG nova.virt.hardware [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:54:02 compute-0 nova_compute[257802]: 2025-10-02 12:54:02.014 2 DEBUG nova.virt.hardware [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:54:02 compute-0 nova_compute[257802]: 2025-10-02 12:54:02.014 2 DEBUG nova.virt.hardware [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:54:02 compute-0 nova_compute[257802]: 2025-10-02 12:54:02.014 2 DEBUG nova.virt.hardware [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:54:02 compute-0 nova_compute[257802]: 2025-10-02 12:54:02.015 2 DEBUG nova.virt.hardware [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:54:02 compute-0 nova_compute[257802]: 2025-10-02 12:54:02.015 2 DEBUG nova.virt.hardware [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:54:02 compute-0 nova_compute[257802]: 2025-10-02 12:54:02.015 2 DEBUG nova.virt.hardware [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:54:02 compute-0 nova_compute[257802]: 2025-10-02 12:54:02.019 2 DEBUG oslo_concurrency.processutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2844: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 42 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Oct 02 12:54:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:54:02 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4004981711' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:02 compute-0 nova_compute[257802]: 2025-10-02 12:54:02.502 2 DEBUG oslo_concurrency.processutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:02 compute-0 nova_compute[257802]: 2025-10-02 12:54:02.529 2 DEBUG nova.storage.rbd_utils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 74bffcba-6d85-4f82-87ed-b625a3439b3a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:54:02 compute-0 nova_compute[257802]: 2025-10-02 12:54:02.533 2 DEBUG oslo_concurrency.processutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:02 compute-0 nova_compute[257802]: 2025-10-02 12:54:02.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:02.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:54:02 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1758174292' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:02 compute-0 nova_compute[257802]: 2025-10-02 12:54:02.965 2 DEBUG oslo_concurrency.processutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:02 compute-0 nova_compute[257802]: 2025-10-02 12:54:02.967 2 DEBUG nova.virt.libvirt.vif [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:53:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-255554500',display_name='tempest-TestNetworkBasicOps-server-255554500',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-255554500',id=183,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBItRV2ZCvIn+5+pD36R01KJVrm1n22oaqAey9xhlfmzCR8xgtC0xZTXW8ytR4NLJqR0vHEjYgMWYRdJi6TKJ7Ah9ZnqiR0j1XzsFSf5LBbJeK9hca8RmoOL4ayXnz+TbIQ==',key_name='tempest-TestNetworkBasicOps-732704962',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-kiunt17m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:53:50Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=74bffcba-6d85-4f82-87ed-b625a3439b3a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2cc0b7ac-ce39-4a91-b50b-d0d21320173d", "address": "fa:16:3e:6a:82:95", "network": {"id": "15afb19f-043a-469d-96b6-7de0ff8590f7", "bridge": "br-int", "label": "tempest-network-smoke--253378791", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cc0b7ac-ce", "ovs_interfaceid": "2cc0b7ac-ce39-4a91-b50b-d0d21320173d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:54:02 compute-0 nova_compute[257802]: 2025-10-02 12:54:02.967 2 DEBUG nova.network.os_vif_util [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "2cc0b7ac-ce39-4a91-b50b-d0d21320173d", "address": "fa:16:3e:6a:82:95", "network": {"id": "15afb19f-043a-469d-96b6-7de0ff8590f7", "bridge": "br-int", "label": "tempest-network-smoke--253378791", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cc0b7ac-ce", "ovs_interfaceid": "2cc0b7ac-ce39-4a91-b50b-d0d21320173d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:54:02 compute-0 nova_compute[257802]: 2025-10-02 12:54:02.969 2 DEBUG nova.network.os_vif_util [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:82:95,bridge_name='br-int',has_traffic_filtering=True,id=2cc0b7ac-ce39-4a91-b50b-d0d21320173d,network=Network(15afb19f-043a-469d-96b6-7de0ff8590f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cc0b7ac-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:54:02 compute-0 nova_compute[257802]: 2025-10-02 12:54:02.970 2 DEBUG nova.objects.instance [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'pci_devices' on Instance uuid 74bffcba-6d85-4f82-87ed-b625a3439b3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:54:03 compute-0 nova_compute[257802]: 2025-10-02 12:54:03.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:03 compute-0 nova_compute[257802]: 2025-10-02 12:54:03.309 2 DEBUG nova.virt.libvirt.driver [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:54:03 compute-0 nova_compute[257802]:   <uuid>74bffcba-6d85-4f82-87ed-b625a3439b3a</uuid>
Oct 02 12:54:03 compute-0 nova_compute[257802]:   <name>instance-000000b7</name>
Oct 02 12:54:03 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:54:03 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:54:03 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:54:03 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:       <nova:name>tempest-TestNetworkBasicOps-server-255554500</nova:name>
Oct 02 12:54:03 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:54:01</nova:creationTime>
Oct 02 12:54:03 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:54:03 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:54:03 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:54:03 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:54:03 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:54:03 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:54:03 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:54:03 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:54:03 compute-0 nova_compute[257802]:         <nova:user uuid="fb366465e6154871b8a53c9f500105ce">tempest-TestNetworkBasicOps-1692262680-project-member</nova:user>
Oct 02 12:54:03 compute-0 nova_compute[257802]:         <nova:project uuid="ce2ca82c03554560b55ed747ae63f1fb">tempest-TestNetworkBasicOps-1692262680</nova:project>
Oct 02 12:54:03 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:54:03 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:54:03 compute-0 nova_compute[257802]:         <nova:port uuid="2cc0b7ac-ce39-4a91-b50b-d0d21320173d">
Oct 02 12:54:03 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:54:03 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:54:03 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:54:03 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <system>
Oct 02 12:54:03 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:54:03 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:54:03 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:54:03 compute-0 nova_compute[257802]:       <entry name="serial">74bffcba-6d85-4f82-87ed-b625a3439b3a</entry>
Oct 02 12:54:03 compute-0 nova_compute[257802]:       <entry name="uuid">74bffcba-6d85-4f82-87ed-b625a3439b3a</entry>
Oct 02 12:54:03 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     </system>
Oct 02 12:54:03 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:54:03 compute-0 nova_compute[257802]:   <os>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:   </os>
Oct 02 12:54:03 compute-0 nova_compute[257802]:   <features>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:   </features>
Oct 02 12:54:03 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:54:03 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:54:03 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:54:03 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/74bffcba-6d85-4f82-87ed-b625a3439b3a_disk">
Oct 02 12:54:03 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:       </source>
Oct 02 12:54:03 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:54:03 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:54:03 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:54:03 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/74bffcba-6d85-4f82-87ed-b625a3439b3a_disk.config">
Oct 02 12:54:03 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:       </source>
Oct 02 12:54:03 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:54:03 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:54:03 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:54:03 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:6a:82:95"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:       <target dev="tap2cc0b7ac-ce"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:54:03 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/74bffcba-6d85-4f82-87ed-b625a3439b3a/console.log" append="off"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <video>
Oct 02 12:54:03 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     </video>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:54:03 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:54:03 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:54:03 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:54:03 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:54:03 compute-0 nova_compute[257802]: </domain>
Oct 02 12:54:03 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:54:03 compute-0 nova_compute[257802]: 2025-10-02 12:54:03.309 2 DEBUG nova.compute.manager [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Preparing to wait for external event network-vif-plugged-2cc0b7ac-ce39-4a91-b50b-d0d21320173d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:54:03 compute-0 nova_compute[257802]: 2025-10-02 12:54:03.310 2 DEBUG oslo_concurrency.lockutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "74bffcba-6d85-4f82-87ed-b625a3439b3a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:03 compute-0 nova_compute[257802]: 2025-10-02 12:54:03.310 2 DEBUG oslo_concurrency.lockutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "74bffcba-6d85-4f82-87ed-b625a3439b3a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:03 compute-0 nova_compute[257802]: 2025-10-02 12:54:03.310 2 DEBUG oslo_concurrency.lockutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "74bffcba-6d85-4f82-87ed-b625a3439b3a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:03 compute-0 nova_compute[257802]: 2025-10-02 12:54:03.311 2 DEBUG nova.virt.libvirt.vif [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:53:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-255554500',display_name='tempest-TestNetworkBasicOps-server-255554500',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-255554500',id=183,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBItRV2ZCvIn+5+pD36R01KJVrm1n22oaqAey9xhlfmzCR8xgtC0xZTXW8ytR4NLJqR0vHEjYgMWYRdJi6TKJ7Ah9ZnqiR0j1XzsFSf5LBbJeK9hca8RmoOL4ayXnz+TbIQ==',key_name='tempest-TestNetworkBasicOps-732704962',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-kiunt17m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:53:50Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=74bffcba-6d85-4f82-87ed-b625a3439b3a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2cc0b7ac-ce39-4a91-b50b-d0d21320173d", "address": "fa:16:3e:6a:82:95", "network": {"id": "15afb19f-043a-469d-96b6-7de0ff8590f7", "bridge": "br-int", "label": "tempest-network-smoke--253378791", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cc0b7ac-ce", "ovs_interfaceid": "2cc0b7ac-ce39-4a91-b50b-d0d21320173d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:54:03 compute-0 nova_compute[257802]: 2025-10-02 12:54:03.311 2 DEBUG nova.network.os_vif_util [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "2cc0b7ac-ce39-4a91-b50b-d0d21320173d", "address": "fa:16:3e:6a:82:95", "network": {"id": "15afb19f-043a-469d-96b6-7de0ff8590f7", "bridge": "br-int", "label": "tempest-network-smoke--253378791", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cc0b7ac-ce", "ovs_interfaceid": "2cc0b7ac-ce39-4a91-b50b-d0d21320173d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:54:03 compute-0 nova_compute[257802]: 2025-10-02 12:54:03.311 2 DEBUG nova.network.os_vif_util [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:82:95,bridge_name='br-int',has_traffic_filtering=True,id=2cc0b7ac-ce39-4a91-b50b-d0d21320173d,network=Network(15afb19f-043a-469d-96b6-7de0ff8590f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cc0b7ac-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:54:03 compute-0 nova_compute[257802]: 2025-10-02 12:54:03.312 2 DEBUG os_vif [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:82:95,bridge_name='br-int',has_traffic_filtering=True,id=2cc0b7ac-ce39-4a91-b50b-d0d21320173d,network=Network(15afb19f-043a-469d-96b6-7de0ff8590f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cc0b7ac-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:54:03 compute-0 nova_compute[257802]: 2025-10-02 12:54:03.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:03 compute-0 nova_compute[257802]: 2025-10-02 12:54:03.313 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:03 compute-0 nova_compute[257802]: 2025-10-02 12:54:03.313 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:54:03 compute-0 nova_compute[257802]: 2025-10-02 12:54:03.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:03 compute-0 nova_compute[257802]: 2025-10-02 12:54:03.316 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2cc0b7ac-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:03 compute-0 nova_compute[257802]: 2025-10-02 12:54:03.316 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2cc0b7ac-ce, col_values=(('external_ids', {'iface-id': '2cc0b7ac-ce39-4a91-b50b-d0d21320173d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6a:82:95', 'vm-uuid': '74bffcba-6d85-4f82-87ed-b625a3439b3a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:03 compute-0 nova_compute[257802]: 2025-10-02 12:54:03.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:54:03 compute-0 NetworkManager[44987]: <info>  [1759409643.3202] manager: (tap2cc0b7ac-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/368)
Oct 02 12:54:03 compute-0 nova_compute[257802]: 2025-10-02 12:54:03.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:03 compute-0 nova_compute[257802]: 2025-10-02 12:54:03.332 2 INFO os_vif [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:82:95,bridge_name='br-int',has_traffic_filtering=True,id=2cc0b7ac-ce39-4a91-b50b-d0d21320173d,network=Network(15afb19f-043a-469d-96b6-7de0ff8590f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cc0b7ac-ce')
Oct 02 12:54:03 compute-0 ceph-mon[73607]: pgmap v2844: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 42 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Oct 02 12:54:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4004981711' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1758174292' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:54:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:03.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:54:03 compute-0 nova_compute[257802]: 2025-10-02 12:54:03.925 2 DEBUG nova.virt.libvirt.driver [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:54:03 compute-0 nova_compute[257802]: 2025-10-02 12:54:03.925 2 DEBUG nova.virt.libvirt.driver [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:54:03 compute-0 nova_compute[257802]: 2025-10-02 12:54:03.926 2 DEBUG nova.virt.libvirt.driver [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No VIF found with MAC fa:16:3e:6a:82:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:54:03 compute-0 nova_compute[257802]: 2025-10-02 12:54:03.926 2 INFO nova.virt.libvirt.driver [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Using config drive
Oct 02 12:54:03 compute-0 nova_compute[257802]: 2025-10-02 12:54:03.952 2 DEBUG nova.storage.rbd_utils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 74bffcba-6d85-4f82-87ed-b625a3439b3a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:54:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2845: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 42 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Oct 02 12:54:04 compute-0 nova_compute[257802]: 2025-10-02 12:54:04.228 2 DEBUG nova.network.neutron [req-b994064d-f0a7-48d8-ad0a-5da4235ebf10 req-797e665f-f2be-471b-89f0-ae37782a4b9c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Updated VIF entry in instance network info cache for port 2cc0b7ac-ce39-4a91-b50b-d0d21320173d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:54:04 compute-0 nova_compute[257802]: 2025-10-02 12:54:04.229 2 DEBUG nova.network.neutron [req-b994064d-f0a7-48d8-ad0a-5da4235ebf10 req-797e665f-f2be-471b-89f0-ae37782a4b9c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Updating instance_info_cache with network_info: [{"id": "2cc0b7ac-ce39-4a91-b50b-d0d21320173d", "address": "fa:16:3e:6a:82:95", "network": {"id": "15afb19f-043a-469d-96b6-7de0ff8590f7", "bridge": "br-int", "label": "tempest-network-smoke--253378791", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cc0b7ac-ce", "ovs_interfaceid": "2cc0b7ac-ce39-4a91-b50b-d0d21320173d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:54:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:04 compute-0 nova_compute[257802]: 2025-10-02 12:54:04.512 2 DEBUG oslo_concurrency.lockutils [req-b994064d-f0a7-48d8-ad0a-5da4235ebf10 req-797e665f-f2be-471b-89f0-ae37782a4b9c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-74bffcba-6d85-4f82-87ed-b625a3439b3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:54:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/816787724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:04 compute-0 nova_compute[257802]: 2025-10-02 12:54:04.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:04.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:04 compute-0 nova_compute[257802]: 2025-10-02 12:54:04.952 2 INFO nova.virt.libvirt.driver [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Creating config drive at /var/lib/nova/instances/74bffcba-6d85-4f82-87ed-b625a3439b3a/disk.config
Oct 02 12:54:04 compute-0 nova_compute[257802]: 2025-10-02 12:54:04.956 2 DEBUG oslo_concurrency.processutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/74bffcba-6d85-4f82-87ed-b625a3439b3a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpie7spu73 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:05 compute-0 nova_compute[257802]: 2025-10-02 12:54:05.091 2 DEBUG oslo_concurrency.processutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/74bffcba-6d85-4f82-87ed-b625a3439b3a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpie7spu73" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:05 compute-0 nova_compute[257802]: 2025-10-02 12:54:05.122 2 DEBUG nova.storage.rbd_utils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 74bffcba-6d85-4f82-87ed-b625a3439b3a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:54:05 compute-0 nova_compute[257802]: 2025-10-02 12:54:05.127 2 DEBUG oslo_concurrency.processutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/74bffcba-6d85-4f82-87ed-b625a3439b3a/disk.config 74bffcba-6d85-4f82-87ed-b625a3439b3a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:05 compute-0 nova_compute[257802]: 2025-10-02 12:54:05.416 2 DEBUG oslo_concurrency.processutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/74bffcba-6d85-4f82-87ed-b625a3439b3a/disk.config 74bffcba-6d85-4f82-87ed-b625a3439b3a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:05 compute-0 nova_compute[257802]: 2025-10-02 12:54:05.417 2 INFO nova.virt.libvirt.driver [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Deleting local config drive /var/lib/nova/instances/74bffcba-6d85-4f82-87ed-b625a3439b3a/disk.config because it was imported into RBD.
Oct 02 12:54:05 compute-0 kernel: tap2cc0b7ac-ce: entered promiscuous mode
Oct 02 12:54:05 compute-0 NetworkManager[44987]: <info>  [1759409645.4731] manager: (tap2cc0b7ac-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/369)
Oct 02 12:54:05 compute-0 ovn_controller[148183]: 2025-10-02T12:54:05Z|00812|binding|INFO|Claiming lport 2cc0b7ac-ce39-4a91-b50b-d0d21320173d for this chassis.
Oct 02 12:54:05 compute-0 ovn_controller[148183]: 2025-10-02T12:54:05Z|00813|binding|INFO|2cc0b7ac-ce39-4a91-b50b-d0d21320173d: Claiming fa:16:3e:6a:82:95 10.100.0.6
Oct 02 12:54:05 compute-0 nova_compute[257802]: 2025-10-02 12:54:05.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:05 compute-0 ovn_controller[148183]: 2025-10-02T12:54:05Z|00814|binding|INFO|Setting lport 2cc0b7ac-ce39-4a91-b50b-d0d21320173d ovn-installed in OVS
Oct 02 12:54:05 compute-0 nova_compute[257802]: 2025-10-02 12:54:05.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:05 compute-0 nova_compute[257802]: 2025-10-02 12:54:05.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:05 compute-0 systemd-udevd[375027]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:54:05 compute-0 NetworkManager[44987]: <info>  [1759409645.5201] device (tap2cc0b7ac-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:54:05 compute-0 NetworkManager[44987]: <info>  [1759409645.5209] device (tap2cc0b7ac-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:54:05 compute-0 systemd-machined[211836]: New machine qemu-90-instance-000000b7.
Oct 02 12:54:05 compute-0 ovn_controller[148183]: 2025-10-02T12:54:05Z|00815|binding|INFO|Setting lport 2cc0b7ac-ce39-4a91-b50b-d0d21320173d up in Southbound
Oct 02 12:54:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:05.531 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:82:95 10.100.0.6'], port_security=['fa:16:3e:6a:82:95 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '74bffcba-6d85-4f82-87ed-b625a3439b3a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15afb19f-043a-469d-96b6-7de0ff8590f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '338f00f9-84c1-4d77-98e7-4725ad48c7d2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd01f191-612f-4595-b9f0-c8bb017b8743, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=2cc0b7ac-ce39-4a91-b50b-d0d21320173d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:54:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:05.533 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 2cc0b7ac-ce39-4a91-b50b-d0d21320173d in datapath 15afb19f-043a-469d-96b6-7de0ff8590f7 bound to our chassis
Oct 02 12:54:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:05.534 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 15afb19f-043a-469d-96b6-7de0ff8590f7
Oct 02 12:54:05 compute-0 systemd[1]: Started Virtual Machine qemu-90-instance-000000b7.
Oct 02 12:54:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:05.549 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[29763025-061f-46e7-beba-729bcc86aca3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:05 compute-0 ceph-mon[73607]: pgmap v2845: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 42 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Oct 02 12:54:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:05.578 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[22d18c8a-5855-486b-a9cd-ec8e63f7cf42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:05.581 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[5cf3dcfd-f095-40aa-8542-f4ec0d494f9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:05.609 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[fc4ad5cc-2429-403e-8192-92dde6094608]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:05.627 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2c70214b-88fb-4b4c-b613-5be4a1d32fee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap15afb19f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:fc:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 246], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 757702, 'reachable_time': 24495, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375042, 'error': None, 'target': 'ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:05.644 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[49cbf95a-280e-4849-9967-81e30e973cd3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap15afb19f-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 757713, 'tstamp': 757713}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375044, 'error': None, 'target': 'ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap15afb19f-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 757715, 'tstamp': 757715}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375044, 'error': None, 'target': 'ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:05.646 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15afb19f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:05 compute-0 nova_compute[257802]: 2025-10-02 12:54:05.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:05.651 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15afb19f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:05.651 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:54:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:05.652 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap15afb19f-00, col_values=(('external_ids', {'iface-id': '6947a742-74da-45a4-bca1-52180ae211d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:05.652 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:54:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:05.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2846: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 02 12:54:06 compute-0 nova_compute[257802]: 2025-10-02 12:54:06.536 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409646.535463, 74bffcba-6d85-4f82-87ed-b625a3439b3a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:54:06 compute-0 nova_compute[257802]: 2025-10-02 12:54:06.536 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] VM Started (Lifecycle Event)
Oct 02 12:54:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:54:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:06.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:54:07 compute-0 ceph-mon[73607]: pgmap v2846: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 02 12:54:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:07.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:07 compute-0 nova_compute[257802]: 2025-10-02 12:54:07.963 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:54:07 compute-0 nova_compute[257802]: 2025-10-02 12:54:07.970 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409646.5392644, 74bffcba-6d85-4f82-87ed-b625a3439b3a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:54:07 compute-0 nova_compute[257802]: 2025-10-02 12:54:07.970 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] VM Paused (Lifecycle Event)
Oct 02 12:54:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2847: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 92 op/s
Oct 02 12:54:08 compute-0 nova_compute[257802]: 2025-10-02 12:54:08.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:08 compute-0 nova_compute[257802]: 2025-10-02 12:54:08.364 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:54:08 compute-0 nova_compute[257802]: 2025-10-02 12:54:08.368 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:54:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:08.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:08 compute-0 nova_compute[257802]: 2025-10-02 12:54:08.896 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:54:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:09 compute-0 nova_compute[257802]: 2025-10-02 12:54:09.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:09 compute-0 ceph-mon[73607]: pgmap v2847: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 92 op/s
Oct 02 12:54:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:09.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2848: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 102 op/s
Oct 02 12:54:10 compute-0 nova_compute[257802]: 2025-10-02 12:54:10.214 2 DEBUG nova.compute.manager [req-425ededa-f7b8-4c19-917d-eb4095f17904 req-4efdefa5-03d2-4854-91b8-e3a1356fdeec d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Received event network-vif-plugged-2cc0b7ac-ce39-4a91-b50b-d0d21320173d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:10 compute-0 nova_compute[257802]: 2025-10-02 12:54:10.214 2 DEBUG oslo_concurrency.lockutils [req-425ededa-f7b8-4c19-917d-eb4095f17904 req-4efdefa5-03d2-4854-91b8-e3a1356fdeec d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "74bffcba-6d85-4f82-87ed-b625a3439b3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:10 compute-0 nova_compute[257802]: 2025-10-02 12:54:10.215 2 DEBUG oslo_concurrency.lockutils [req-425ededa-f7b8-4c19-917d-eb4095f17904 req-4efdefa5-03d2-4854-91b8-e3a1356fdeec d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "74bffcba-6d85-4f82-87ed-b625a3439b3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:10 compute-0 nova_compute[257802]: 2025-10-02 12:54:10.215 2 DEBUG oslo_concurrency.lockutils [req-425ededa-f7b8-4c19-917d-eb4095f17904 req-4efdefa5-03d2-4854-91b8-e3a1356fdeec d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "74bffcba-6d85-4f82-87ed-b625a3439b3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:10 compute-0 nova_compute[257802]: 2025-10-02 12:54:10.215 2 DEBUG nova.compute.manager [req-425ededa-f7b8-4c19-917d-eb4095f17904 req-4efdefa5-03d2-4854-91b8-e3a1356fdeec d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Processing event network-vif-plugged-2cc0b7ac-ce39-4a91-b50b-d0d21320173d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:54:10 compute-0 nova_compute[257802]: 2025-10-02 12:54:10.216 2 DEBUG nova.compute.manager [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:54:10 compute-0 nova_compute[257802]: 2025-10-02 12:54:10.220 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409650.2202907, 74bffcba-6d85-4f82-87ed-b625a3439b3a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:54:10 compute-0 nova_compute[257802]: 2025-10-02 12:54:10.221 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] VM Resumed (Lifecycle Event)
Oct 02 12:54:10 compute-0 nova_compute[257802]: 2025-10-02 12:54:10.223 2 DEBUG nova.virt.libvirt.driver [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:54:10 compute-0 nova_compute[257802]: 2025-10-02 12:54:10.228 2 INFO nova.virt.libvirt.driver [-] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Instance spawned successfully.
Oct 02 12:54:10 compute-0 nova_compute[257802]: 2025-10-02 12:54:10.229 2 DEBUG nova.virt.libvirt.driver [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:54:10 compute-0 nova_compute[257802]: 2025-10-02 12:54:10.481 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:54:10 compute-0 nova_compute[257802]: 2025-10-02 12:54:10.486 2 DEBUG nova.virt.libvirt.driver [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:54:10 compute-0 nova_compute[257802]: 2025-10-02 12:54:10.487 2 DEBUG nova.virt.libvirt.driver [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:54:10 compute-0 nova_compute[257802]: 2025-10-02 12:54:10.488 2 DEBUG nova.virt.libvirt.driver [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:54:10 compute-0 nova_compute[257802]: 2025-10-02 12:54:10.488 2 DEBUG nova.virt.libvirt.driver [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:54:10 compute-0 nova_compute[257802]: 2025-10-02 12:54:10.489 2 DEBUG nova.virt.libvirt.driver [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:54:10 compute-0 nova_compute[257802]: 2025-10-02 12:54:10.489 2 DEBUG nova.virt.libvirt.driver [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:54:10 compute-0 nova_compute[257802]: 2025-10-02 12:54:10.494 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:54:10 compute-0 nova_compute[257802]: 2025-10-02 12:54:10.602 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:54:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:54:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:10.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:54:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3838244575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:10 compute-0 ceph-mon[73607]: pgmap v2848: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 102 op/s
Oct 02 12:54:10 compute-0 nova_compute[257802]: 2025-10-02 12:54:10.915 2 INFO nova.compute.manager [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Took 18.65 seconds to spawn the instance on the hypervisor.
Oct 02 12:54:10 compute-0 nova_compute[257802]: 2025-10-02 12:54:10.915 2 DEBUG nova.compute.manager [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:54:11 compute-0 nova_compute[257802]: 2025-10-02 12:54:11.084 2 INFO nova.compute.manager [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Took 23.09 seconds to build instance.
Oct 02 12:54:11 compute-0 nova_compute[257802]: 2025-10-02 12:54:11.182 2 DEBUG oslo_concurrency.lockutils [None req-5744bfd9-cbfa-49f5-98e1-13b25585f813 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "74bffcba-6d85-4f82-87ed-b625a3439b3a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 23.376s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:54:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:11.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:54:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2849: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 78 op/s
Oct 02 12:54:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:54:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:54:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:54:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:54:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:54:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:54:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:12.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:13 compute-0 nova_compute[257802]: 2025-10-02 12:54:13.267 2 DEBUG nova.compute.manager [req-e82ee932-6761-42da-a4c9-95b3a52caa47 req-2de5ca60-df69-422a-898b-b7019082aa90 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Received event network-vif-plugged-2cc0b7ac-ce39-4a91-b50b-d0d21320173d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:13 compute-0 nova_compute[257802]: 2025-10-02 12:54:13.268 2 DEBUG oslo_concurrency.lockutils [req-e82ee932-6761-42da-a4c9-95b3a52caa47 req-2de5ca60-df69-422a-898b-b7019082aa90 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "74bffcba-6d85-4f82-87ed-b625a3439b3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:13 compute-0 nova_compute[257802]: 2025-10-02 12:54:13.269 2 DEBUG oslo_concurrency.lockutils [req-e82ee932-6761-42da-a4c9-95b3a52caa47 req-2de5ca60-df69-422a-898b-b7019082aa90 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "74bffcba-6d85-4f82-87ed-b625a3439b3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:13 compute-0 nova_compute[257802]: 2025-10-02 12:54:13.269 2 DEBUG oslo_concurrency.lockutils [req-e82ee932-6761-42da-a4c9-95b3a52caa47 req-2de5ca60-df69-422a-898b-b7019082aa90 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "74bffcba-6d85-4f82-87ed-b625a3439b3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:13 compute-0 nova_compute[257802]: 2025-10-02 12:54:13.269 2 DEBUG nova.compute.manager [req-e82ee932-6761-42da-a4c9-95b3a52caa47 req-2de5ca60-df69-422a-898b-b7019082aa90 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] No waiting events found dispatching network-vif-plugged-2cc0b7ac-ce39-4a91-b50b-d0d21320173d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:54:13 compute-0 nova_compute[257802]: 2025-10-02 12:54:13.270 2 WARNING nova.compute.manager [req-e82ee932-6761-42da-a4c9-95b3a52caa47 req-2de5ca60-df69-422a-898b-b7019082aa90 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Received unexpected event network-vif-plugged-2cc0b7ac-ce39-4a91-b50b-d0d21320173d for instance with vm_state active and task_state None.
Oct 02 12:54:13 compute-0 nova_compute[257802]: 2025-10-02 12:54:13.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:13 compute-0 ceph-mon[73607]: pgmap v2849: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 78 op/s
Oct 02 12:54:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:13.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2850: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 78 op/s
Oct 02 12:54:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:14 compute-0 nova_compute[257802]: 2025-10-02 12:54:14.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:54:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:14.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:54:15 compute-0 ceph-mon[73607]: pgmap v2850: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 78 op/s
Oct 02 12:54:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:15.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2851: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 13 KiB/s wr, 195 op/s
Oct 02 12:54:16 compute-0 nova_compute[257802]: 2025-10-02 12:54:16.614 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:16 compute-0 nova_compute[257802]: 2025-10-02 12:54:16.614 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:54:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:16.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:17 compute-0 nova_compute[257802]: 2025-10-02 12:54:17.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:17 compute-0 sudo[375094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:17 compute-0 sudo[375094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:17 compute-0 sudo[375094]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:17 compute-0 ceph-mon[73607]: pgmap v2851: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 13 KiB/s wr, 195 op/s
Oct 02 12:54:17 compute-0 sudo[375119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:17 compute-0 sudo[375119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:17 compute-0 sudo[375119]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:17.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2852: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 13 KiB/s wr, 126 op/s
Oct 02 12:54:18 compute-0 nova_compute[257802]: 2025-10-02 12:54:18.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:18.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:19 compute-0 nova_compute[257802]: 2025-10-02 12:54:19.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:19 compute-0 nova_compute[257802]: 2025-10-02 12:54:19.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:54:19 compute-0 nova_compute[257802]: 2025-10-02 12:54:19.387 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:54:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:19 compute-0 nova_compute[257802]: 2025-10-02 12:54:19.580 2 DEBUG nova.compute.manager [req-b97c2015-9b84-4b24-a8cb-e68a7725e65c req-841591b0-4738-46e3-89a4-53b75b34b37d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Received event network-changed-2cc0b7ac-ce39-4a91-b50b-d0d21320173d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:19 compute-0 nova_compute[257802]: 2025-10-02 12:54:19.581 2 DEBUG nova.compute.manager [req-b97c2015-9b84-4b24-a8cb-e68a7725e65c req-841591b0-4738-46e3-89a4-53b75b34b37d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Refreshing instance network info cache due to event network-changed-2cc0b7ac-ce39-4a91-b50b-d0d21320173d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:54:19 compute-0 nova_compute[257802]: 2025-10-02 12:54:19.582 2 DEBUG oslo_concurrency.lockutils [req-b97c2015-9b84-4b24-a8cb-e68a7725e65c req-841591b0-4738-46e3-89a4-53b75b34b37d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-74bffcba-6d85-4f82-87ed-b625a3439b3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:54:19 compute-0 nova_compute[257802]: 2025-10-02 12:54:19.582 2 DEBUG oslo_concurrency.lockutils [req-b97c2015-9b84-4b24-a8cb-e68a7725e65c req-841591b0-4738-46e3-89a4-53b75b34b37d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-74bffcba-6d85-4f82-87ed-b625a3439b3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:54:19 compute-0 nova_compute[257802]: 2025-10-02 12:54:19.582 2 DEBUG nova.network.neutron [req-b97c2015-9b84-4b24-a8cb-e68a7725e65c req-841591b0-4738-46e3-89a4-53b75b34b37d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Refreshing network info cache for port 2cc0b7ac-ce39-4a91-b50b-d0d21320173d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:54:19 compute-0 nova_compute[257802]: 2025-10-02 12:54:19.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:19 compute-0 ceph-mon[73607]: pgmap v2852: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 13 KiB/s wr, 126 op/s
Oct 02 12:54:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:19.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:19 compute-0 nova_compute[257802]: 2025-10-02 12:54:19.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2853: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 14 KiB/s wr, 169 op/s
Oct 02 12:54:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:54:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:20.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:54:20 compute-0 podman[375146]: 2025-10-02 12:54:20.929767559 +0000 UTC m=+0.055975066 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:54:20 compute-0 podman[375147]: 2025-10-02 12:54:20.943124258 +0000 UTC m=+0.068360971 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd)
Oct 02 12:54:20 compute-0 podman[375148]: 2025-10-02 12:54:20.943658871 +0000 UTC m=+0.055063394 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:54:21 compute-0 nova_compute[257802]: 2025-10-02 12:54:21.387 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:21 compute-0 ceph-mon[73607]: pgmap v2853: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 14 KiB/s wr, 169 op/s
Oct 02 12:54:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:21.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2854: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.1 KiB/s wr, 159 op/s
Oct 02 12:54:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:22.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:22 compute-0 ceph-mon[73607]: pgmap v2854: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.1 KiB/s wr, 159 op/s
Oct 02 12:54:23 compute-0 nova_compute[257802]: 2025-10-02 12:54:23.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:23 compute-0 nova_compute[257802]: 2025-10-02 12:54:23.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:23 compute-0 nova_compute[257802]: 2025-10-02 12:54:23.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:23 compute-0 nova_compute[257802]: 2025-10-02 12:54:23.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:23.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:24 compute-0 nova_compute[257802]: 2025-10-02 12:54:24.146 2 DEBUG nova.network.neutron [req-b97c2015-9b84-4b24-a8cb-e68a7725e65c req-841591b0-4738-46e3-89a4-53b75b34b37d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Updated VIF entry in instance network info cache for port 2cc0b7ac-ce39-4a91-b50b-d0d21320173d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:54:24 compute-0 nova_compute[257802]: 2025-10-02 12:54:24.147 2 DEBUG nova.network.neutron [req-b97c2015-9b84-4b24-a8cb-e68a7725e65c req-841591b0-4738-46e3-89a4-53b75b34b37d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Updating instance_info_cache with network_info: [{"id": "2cc0b7ac-ce39-4a91-b50b-d0d21320173d", "address": "fa:16:3e:6a:82:95", "network": {"id": "15afb19f-043a-469d-96b6-7de0ff8590f7", "bridge": "br-int", "label": "tempest-network-smoke--253378791", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cc0b7ac-ce", "ovs_interfaceid": "2cc0b7ac-ce39-4a91-b50b-d0d21320173d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:54:24 compute-0 nova_compute[257802]: 2025-10-02 12:54:24.190 2 DEBUG oslo_concurrency.lockutils [req-b97c2015-9b84-4b24-a8cb-e68a7725e65c req-841591b0-4738-46e3-89a4-53b75b34b37d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-74bffcba-6d85-4f82-87ed-b625a3439b3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:54:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2855: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.1 KiB/s wr, 159 op/s
Oct 02 12:54:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:24 compute-0 nova_compute[257802]: 2025-10-02 12:54:24.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:54:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:24.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:54:25 compute-0 ceph-mon[73607]: pgmap v2855: 305 pgs: 305 active+clean; 787 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.1 KiB/s wr, 159 op/s
Oct 02 12:54:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:25.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2856: 305 pgs: 305 active+clean; 807 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 183 op/s
Oct 02 12:54:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:26.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:26 compute-0 podman[375202]: 2025-10-02 12:54:26.942217074 +0000 UTC m=+0.078953120 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:54:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:26.974 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:26.975 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:26.976 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:27 compute-0 nova_compute[257802]: 2025-10-02 12:54:27.122 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:27 compute-0 ovn_controller[148183]: 2025-10-02T12:54:27Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6a:82:95 10.100.0.6
Oct 02 12:54:27 compute-0 ovn_controller[148183]: 2025-10-02T12:54:27Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6a:82:95 10.100.0.6
Oct 02 12:54:27 compute-0 ceph-mon[73607]: pgmap v2856: 305 pgs: 305 active+clean; 807 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 183 op/s
Oct 02 12:54:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3357589349' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:54:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:27.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:54:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2857: 305 pgs: 305 active+clean; 807 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.9 MiB/s wr, 67 op/s
Oct 02 12:54:28 compute-0 nova_compute[257802]: 2025-10-02 12:54:28.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:28.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:29 compute-0 nova_compute[257802]: 2025-10-02 12:54:29.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:29 compute-0 ceph-mon[73607]: pgmap v2857: 305 pgs: 305 active+clean; 807 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.9 MiB/s wr, 67 op/s
Oct 02 12:54:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1128630450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:29.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2858: 305 pgs: 305 active+clean; 820 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 153 op/s
Oct 02 12:54:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:54:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:30.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:54:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2011315154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:31 compute-0 nova_compute[257802]: 2025-10-02 12:54:31.092 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:31 compute-0 ceph-mon[73607]: pgmap v2858: 305 pgs: 305 active+clean; 820 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 153 op/s
Oct 02 12:54:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:31.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2859: 305 pgs: 305 active+clean; 820 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 906 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Oct 02 12:54:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:32.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:32 compute-0 ceph-mon[73607]: pgmap v2859: 305 pgs: 305 active+clean; 820 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 906 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Oct 02 12:54:33 compute-0 nova_compute[257802]: 2025-10-02 12:54:33.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:33 compute-0 nova_compute[257802]: 2025-10-02 12:54:33.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:54:33 compute-0 nova_compute[257802]: 2025-10-02 12:54:33.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:54:33 compute-0 nova_compute[257802]: 2025-10-02 12:54:33.311 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-8e2c1007-1d07-434c-8a22-6cb98d903d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:54:33 compute-0 nova_compute[257802]: 2025-10-02 12:54:33.312 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-8e2c1007-1d07-434c-8a22-6cb98d903d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:54:33 compute-0 nova_compute[257802]: 2025-10-02 12:54:33.312 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:54:33 compute-0 nova_compute[257802]: 2025-10-02 12:54:33.312 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8e2c1007-1d07-434c-8a22-6cb98d903d3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:54:33 compute-0 nova_compute[257802]: 2025-10-02 12:54:33.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:33.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2860: 305 pgs: 305 active+clean; 820 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 906 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Oct 02 12:54:34 compute-0 nova_compute[257802]: 2025-10-02 12:54:34.380 2 INFO nova.compute.manager [None req-090d77fb-58be-4ae2-ab69-e26e1d1d277b fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Get console output
Oct 02 12:54:34 compute-0 nova_compute[257802]: 2025-10-02 12:54:34.384 20794 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 12:54:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:34 compute-0 nova_compute[257802]: 2025-10-02 12:54:34.545 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Updating instance_info_cache with network_info: [{"id": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "address": "fa:16:3e:26:56:b9", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f0b94c-3e", "ovs_interfaceid": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:54:34 compute-0 nova_compute[257802]: 2025-10-02 12:54:34.585 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-8e2c1007-1d07-434c-8a22-6cb98d903d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:54:34 compute-0 nova_compute[257802]: 2025-10-02 12:54:34.585 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:54:34 compute-0 nova_compute[257802]: 2025-10-02 12:54:34.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:34 compute-0 nova_compute[257802]: 2025-10-02 12:54:34.684 2 DEBUG oslo_concurrency.lockutils [None req-8f5a1488-bdf1-43d9-ad46-f4a91763e520 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "74bffcba-6d85-4f82-87ed-b625a3439b3a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:34 compute-0 nova_compute[257802]: 2025-10-02 12:54:34.685 2 DEBUG oslo_concurrency.lockutils [None req-8f5a1488-bdf1-43d9-ad46-f4a91763e520 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "74bffcba-6d85-4f82-87ed-b625a3439b3a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:34 compute-0 nova_compute[257802]: 2025-10-02 12:54:34.686 2 DEBUG oslo_concurrency.lockutils [None req-8f5a1488-bdf1-43d9-ad46-f4a91763e520 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "74bffcba-6d85-4f82-87ed-b625a3439b3a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:34 compute-0 nova_compute[257802]: 2025-10-02 12:54:34.686 2 DEBUG oslo_concurrency.lockutils [None req-8f5a1488-bdf1-43d9-ad46-f4a91763e520 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "74bffcba-6d85-4f82-87ed-b625a3439b3a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:34 compute-0 nova_compute[257802]: 2025-10-02 12:54:34.687 2 DEBUG oslo_concurrency.lockutils [None req-8f5a1488-bdf1-43d9-ad46-f4a91763e520 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "74bffcba-6d85-4f82-87ed-b625a3439b3a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:34 compute-0 nova_compute[257802]: 2025-10-02 12:54:34.690 2 INFO nova.compute.manager [None req-8f5a1488-bdf1-43d9-ad46-f4a91763e520 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Terminating instance
Oct 02 12:54:34 compute-0 nova_compute[257802]: 2025-10-02 12:54:34.692 2 DEBUG nova.compute.manager [None req-8f5a1488-bdf1-43d9-ad46-f4a91763e520 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:54:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:34.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:34 compute-0 kernel: tap2cc0b7ac-ce (unregistering): left promiscuous mode
Oct 02 12:54:34 compute-0 NetworkManager[44987]: <info>  [1759409674.8499] device (tap2cc0b7ac-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:54:34 compute-0 nova_compute[257802]: 2025-10-02 12:54:34.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:34 compute-0 ovn_controller[148183]: 2025-10-02T12:54:34Z|00816|binding|INFO|Releasing lport 2cc0b7ac-ce39-4a91-b50b-d0d21320173d from this chassis (sb_readonly=0)
Oct 02 12:54:34 compute-0 ovn_controller[148183]: 2025-10-02T12:54:34Z|00817|binding|INFO|Setting lport 2cc0b7ac-ce39-4a91-b50b-d0d21320173d down in Southbound
Oct 02 12:54:34 compute-0 ovn_controller[148183]: 2025-10-02T12:54:34Z|00818|binding|INFO|Removing iface tap2cc0b7ac-ce ovn-installed in OVS
Oct 02 12:54:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:34.879 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:82:95 10.100.0.6'], port_security=['fa:16:3e:6a:82:95 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '74bffcba-6d85-4f82-87ed-b625a3439b3a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15afb19f-043a-469d-96b6-7de0ff8590f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '338f00f9-84c1-4d77-98e7-4725ad48c7d2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd01f191-612f-4595-b9f0-c8bb017b8743, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=2cc0b7ac-ce39-4a91-b50b-d0d21320173d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:54:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:34.880 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 2cc0b7ac-ce39-4a91-b50b-d0d21320173d in datapath 15afb19f-043a-469d-96b6-7de0ff8590f7 unbound from our chassis
Oct 02 12:54:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:34.881 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 15afb19f-043a-469d-96b6-7de0ff8590f7
Oct 02 12:54:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:34.899 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e49384-1acb-4e30-b026-238731870479]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:34 compute-0 nova_compute[257802]: 2025-10-02 12:54:34.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:34.932 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[cbc7721e-78e0-468f-a402-e4ed95073a0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:34 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000b7.scope: Deactivated successfully.
Oct 02 12:54:34 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000b7.scope: Consumed 15.671s CPU time.
Oct 02 12:54:34 compute-0 systemd-machined[211836]: Machine qemu-90-instance-000000b7 terminated.
Oct 02 12:54:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:34.937 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[8937dcf7-72f2-4090-96f7-276a8a2dad03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:34.965 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[3c704c02-daf8-4487-b5ab-6079fb8182a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:34.980 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e473ea61-a749-48c4-9ea9-78ecc52dc6e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap15afb19f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:fc:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 246], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 757702, 'reachable_time': 24495, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375244, 'error': None, 'target': 'ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:34.996 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[057639e4-2caf-4aa2-abf3-8023c2430716]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap15afb19f-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 757713, 'tstamp': 757713}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375245, 'error': None, 'target': 'ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap15afb19f-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 757715, 'tstamp': 757715}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375245, 'error': None, 'target': 'ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:34 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:34.998 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15afb19f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:34.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.005 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15afb19f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.006 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.006 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap15afb19f-00, col_values=(('external_ids', {'iface-id': '6947a742-74da-45a4-bca1-52180ae211d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:35.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.006 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:35.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:35 compute-0 kernel: tap2cc0b7ac-ce: entered promiscuous mode
Oct 02 12:54:35 compute-0 systemd-udevd[375236]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:54:35 compute-0 NetworkManager[44987]: <info>  [1759409675.1149] manager: (tap2cc0b7ac-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/370)
Oct 02 12:54:35 compute-0 ovn_controller[148183]: 2025-10-02T12:54:35Z|00819|binding|INFO|Claiming lport 2cc0b7ac-ce39-4a91-b50b-d0d21320173d for this chassis.
Oct 02 12:54:35 compute-0 ovn_controller[148183]: 2025-10-02T12:54:35Z|00820|binding|INFO|2cc0b7ac-ce39-4a91-b50b-d0d21320173d: Claiming fa:16:3e:6a:82:95 10.100.0.6
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:35.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:35 compute-0 kernel: tap2cc0b7ac-ce (unregistering): left promiscuous mode
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:35.120 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:35.121 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:35.121 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:35.121 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:35.121 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.126 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:82:95 10.100.0.6'], port_security=['fa:16:3e:6a:82:95 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '74bffcba-6d85-4f82-87ed-b625a3439b3a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15afb19f-043a-469d-96b6-7de0ff8590f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '338f00f9-84c1-4d77-98e7-4725ad48c7d2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd01f191-612f-4595-b9f0-c8bb017b8743, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=2cc0b7ac-ce39-4a91-b50b-d0d21320173d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.128 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 2cc0b7ac-ce39-4a91-b50b-d0d21320173d in datapath 15afb19f-043a-469d-96b6-7de0ff8590f7 bound to our chassis
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.130 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 15afb19f-043a-469d-96b6-7de0ff8590f7
Oct 02 12:54:35 compute-0 ovn_controller[148183]: 2025-10-02T12:54:35Z|00821|binding|INFO|Setting lport 2cc0b7ac-ce39-4a91-b50b-d0d21320173d ovn-installed in OVS
Oct 02 12:54:35 compute-0 ovn_controller[148183]: 2025-10-02T12:54:35Z|00822|binding|INFO|Setting lport 2cc0b7ac-ce39-4a91-b50b-d0d21320173d up in Southbound
Oct 02 12:54:35 compute-0 ovn_controller[148183]: 2025-10-02T12:54:35Z|00823|binding|INFO|Releasing lport 2cc0b7ac-ce39-4a91-b50b-d0d21320173d from this chassis (sb_readonly=1)
Oct 02 12:54:35 compute-0 ovn_controller[148183]: 2025-10-02T12:54:35Z|00824|if_status|INFO|Dropped 5 log messages in last 1255 seconds (most recently, 1255 seconds ago) due to excessive rate
Oct 02 12:54:35 compute-0 ovn_controller[148183]: 2025-10-02T12:54:35Z|00825|if_status|INFO|Not setting lport 2cc0b7ac-ce39-4a91-b50b-d0d21320173d down as sb is readonly
Oct 02 12:54:35 compute-0 ovn_controller[148183]: 2025-10-02T12:54:35Z|00826|binding|INFO|Removing iface tap2cc0b7ac-ce ovn-installed in OVS
Oct 02 12:54:35 compute-0 ovn_controller[148183]: 2025-10-02T12:54:35Z|00827|binding|INFO|Releasing lport 2cc0b7ac-ce39-4a91-b50b-d0d21320173d from this chassis (sb_readonly=0)
Oct 02 12:54:35 compute-0 ovn_controller[148183]: 2025-10-02T12:54:35Z|00828|binding|INFO|Setting lport 2cc0b7ac-ce39-4a91-b50b-d0d21320173d down in Southbound
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:35.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.148 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7883b005-f713-420e-a807-82e9ae22ac30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.150 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:82:95 10.100.0.6'], port_security=['fa:16:3e:6a:82:95 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '74bffcba-6d85-4f82-87ed-b625a3439b3a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15afb19f-043a-469d-96b6-7de0ff8590f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '338f00f9-84c1-4d77-98e7-4725ad48c7d2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd01f191-612f-4595-b9f0-c8bb017b8743, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=2cc0b7ac-ce39-4a91-b50b-d0d21320173d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:35.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:35.157 2 INFO nova.virt.libvirt.driver [-] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Instance destroyed successfully.
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:35.157 2 DEBUG nova.objects.instance [None req-8f5a1488-bdf1-43d9-ad46-f4a91763e520 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'resources' on Instance uuid 74bffcba-6d85-4f82-87ed-b625a3439b3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.179 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[8042785d-be7d-4266-8655-3fe3d064c0c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.182 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[f00d90fb-a50e-41eb-a5dc-07a7fc9507fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.209 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[cd401f78-2d20-40f3-b6e3-e1299c662480]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.226 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[622245cc-205c-456e-a195-82a8a788b662]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap15afb19f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:fc:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 246], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 757702, 'reachable_time': 24495, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375257, 'error': None, 'target': 'ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:35.237 2 DEBUG nova.virt.libvirt.vif [None req-8f5a1488-bdf1-43d9-ad46-f4a91763e520 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:53:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-255554500',display_name='tempest-TestNetworkBasicOps-server-255554500',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-255554500',id=183,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBItRV2ZCvIn+5+pD36R01KJVrm1n22oaqAey9xhlfmzCR8xgtC0xZTXW8ytR4NLJqR0vHEjYgMWYRdJi6TKJ7Ah9ZnqiR0j1XzsFSf5LBbJeK9hca8RmoOL4ayXnz+TbIQ==',key_name='tempest-TestNetworkBasicOps-732704962',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:54:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-kiunt17m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:54:11Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=74bffcba-6d85-4f82-87ed-b625a3439b3a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2cc0b7ac-ce39-4a91-b50b-d0d21320173d", "address": "fa:16:3e:6a:82:95", "network": {"id": "15afb19f-043a-469d-96b6-7de0ff8590f7", "bridge": "br-int", "label": "tempest-network-smoke--253378791", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cc0b7ac-ce", "ovs_interfaceid": "2cc0b7ac-ce39-4a91-b50b-d0d21320173d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:35.238 2 DEBUG nova.network.os_vif_util [None req-8f5a1488-bdf1-43d9-ad46-f4a91763e520 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "2cc0b7ac-ce39-4a91-b50b-d0d21320173d", "address": "fa:16:3e:6a:82:95", "network": {"id": "15afb19f-043a-469d-96b6-7de0ff8590f7", "bridge": "br-int", "label": "tempest-network-smoke--253378791", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cc0b7ac-ce", "ovs_interfaceid": "2cc0b7ac-ce39-4a91-b50b-d0d21320173d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:35.238 2 DEBUG nova.network.os_vif_util [None req-8f5a1488-bdf1-43d9-ad46-f4a91763e520 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6a:82:95,bridge_name='br-int',has_traffic_filtering=True,id=2cc0b7ac-ce39-4a91-b50b-d0d21320173d,network=Network(15afb19f-043a-469d-96b6-7de0ff8590f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cc0b7ac-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:35.239 2 DEBUG os_vif [None req-8f5a1488-bdf1-43d9-ad46-f4a91763e520 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:82:95,bridge_name='br-int',has_traffic_filtering=True,id=2cc0b7ac-ce39-4a91-b50b-d0d21320173d,network=Network(15afb19f-043a-469d-96b6-7de0ff8590f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cc0b7ac-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:35.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:35.241 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2cc0b7ac-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.242 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[734f41ef-202f-43aa-922c-7e7192124851]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap15afb19f-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 757713, 'tstamp': 757713}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375259, 'error': None, 'target': 'ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap15afb19f-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 757715, 'tstamp': 757715}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375259, 'error': None, 'target': 'ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.244 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15afb19f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:35.251 2 DEBUG nova.compute.manager [req-aeae2529-a096-4aa1-b38e-39fedf5cd981 req-3db0670d-ad20-4304-8875-490a73f25767 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Received event network-vif-unplugged-2cc0b7ac-ce39-4a91-b50b-d0d21320173d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:35.251 2 DEBUG oslo_concurrency.lockutils [req-aeae2529-a096-4aa1-b38e-39fedf5cd981 req-3db0670d-ad20-4304-8875-490a73f25767 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "74bffcba-6d85-4f82-87ed-b625a3439b3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:35.251 2 DEBUG oslo_concurrency.lockutils [req-aeae2529-a096-4aa1-b38e-39fedf5cd981 req-3db0670d-ad20-4304-8875-490a73f25767 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "74bffcba-6d85-4f82-87ed-b625a3439b3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:35.252 2 DEBUG oslo_concurrency.lockutils [req-aeae2529-a096-4aa1-b38e-39fedf5cd981 req-3db0670d-ad20-4304-8875-490a73f25767 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "74bffcba-6d85-4f82-87ed-b625a3439b3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:35.252 2 DEBUG nova.compute.manager [req-aeae2529-a096-4aa1-b38e-39fedf5cd981 req-3db0670d-ad20-4304-8875-490a73f25767 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] No waiting events found dispatching network-vif-unplugged-2cc0b7ac-ce39-4a91-b50b-d0d21320173d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:35.252 2 DEBUG nova.compute.manager [req-aeae2529-a096-4aa1-b38e-39fedf5cd981 req-3db0670d-ad20-4304-8875-490a73f25767 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Received event network-vif-unplugged-2cc0b7ac-ce39-4a91-b50b-d0d21320173d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:35.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:35.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.295 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15afb19f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.296 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.296 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap15afb19f-00, col_values=(('external_ids', {'iface-id': '6947a742-74da-45a4-bca1-52180ae211d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.296 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.297 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 2cc0b7ac-ce39-4a91-b50b-d0d21320173d in datapath 15afb19f-043a-469d-96b6-7de0ff8590f7 unbound from our chassis
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:35.297 2 INFO os_vif [None req-8f5a1488-bdf1-43d9-ad46-f4a91763e520 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:82:95,bridge_name='br-int',has_traffic_filtering=True,id=2cc0b7ac-ce39-4a91-b50b-d0d21320173d,network=Network(15afb19f-043a-469d-96b6-7de0ff8590f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cc0b7ac-ce')
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.298 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 15afb19f-043a-469d-96b6-7de0ff8590f7
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.314 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[90f069bd-90e1-49b2-9d3f-f9c31993b447]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.344 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[bc0bdd32-95af-4630-b752-d3625d3844ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.346 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[ab995195-0175-4f7f-8faf-5af90611eef1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.370 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[51fc2d7e-c136-4fb5-81d2-053b29eb677c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.384 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[74cb2015-f107-4798-ad3e-538d2d1de7b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap15afb19f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:fc:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 246], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 757702, 'reachable_time': 24495, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375301, 'error': None, 'target': 'ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.397 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[86af9ef9-56da-4d17-b2f4-2bf4d8004e23]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap15afb19f-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 757713, 'tstamp': 757713}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375302, 'error': None, 'target': 'ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap15afb19f-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 757715, 'tstamp': 757715}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375302, 'error': None, 'target': 'ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.398 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15afb19f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:35.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:35.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.401 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15afb19f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.401 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.402 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap15afb19f-00, col_values=(('external_ids', {'iface-id': '6947a742-74da-45a4-bca1-52180ae211d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:35.402 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:54:35 compute-0 ceph-mon[73607]: pgmap v2860: 305 pgs: 305 active+clean; 820 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 906 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Oct 02 12:54:35 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3383468913' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:35 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2340281099' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:35 compute-0 nova_compute[257802]: 2025-10-02 12:54:35.615 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:35.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2861: 305 pgs: 305 active+clean; 868 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 924 KiB/s rd, 3.9 MiB/s wr, 138 op/s
Oct 02 12:54:36 compute-0 nova_compute[257802]: 2025-10-02 12:54:36.266 2 INFO nova.virt.libvirt.driver [None req-8f5a1488-bdf1-43d9-ad46-f4a91763e520 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Deleting instance files /var/lib/nova/instances/74bffcba-6d85-4f82-87ed-b625a3439b3a_del
Oct 02 12:54:36 compute-0 nova_compute[257802]: 2025-10-02 12:54:36.266 2 INFO nova.virt.libvirt.driver [None req-8f5a1488-bdf1-43d9-ad46-f4a91763e520 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Deletion of /var/lib/nova/instances/74bffcba-6d85-4f82-87ed-b625a3439b3a_del complete
Oct 02 12:54:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2066395569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:54:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:36.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:54:37 compute-0 ceph-mon[73607]: pgmap v2861: 305 pgs: 305 active+clean; 868 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 924 KiB/s rd, 3.9 MiB/s wr, 138 op/s
Oct 02 12:54:37 compute-0 sudo[375308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:37 compute-0 sudo[375308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:37 compute-0 sudo[375308]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:37 compute-0 sudo[375333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:37 compute-0 sudo[375333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:37 compute-0 sudo[375333]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:37.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2862: 305 pgs: 305 active+clean; 868 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 889 KiB/s rd, 2.0 MiB/s wr, 114 op/s
Oct 02 12:54:38 compute-0 nova_compute[257802]: 2025-10-02 12:54:38.265 2 DEBUG nova.compute.manager [req-a6c764ff-288d-4ed5-a754-0f5f2bf28244 req-b51801fa-92c9-494c-9f49-a684cced34d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Received event network-vif-plugged-2cc0b7ac-ce39-4a91-b50b-d0d21320173d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:38 compute-0 nova_compute[257802]: 2025-10-02 12:54:38.265 2 DEBUG oslo_concurrency.lockutils [req-a6c764ff-288d-4ed5-a754-0f5f2bf28244 req-b51801fa-92c9-494c-9f49-a684cced34d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "74bffcba-6d85-4f82-87ed-b625a3439b3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:38 compute-0 nova_compute[257802]: 2025-10-02 12:54:38.265 2 DEBUG oslo_concurrency.lockutils [req-a6c764ff-288d-4ed5-a754-0f5f2bf28244 req-b51801fa-92c9-494c-9f49-a684cced34d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "74bffcba-6d85-4f82-87ed-b625a3439b3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:38 compute-0 nova_compute[257802]: 2025-10-02 12:54:38.265 2 DEBUG oslo_concurrency.lockutils [req-a6c764ff-288d-4ed5-a754-0f5f2bf28244 req-b51801fa-92c9-494c-9f49-a684cced34d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "74bffcba-6d85-4f82-87ed-b625a3439b3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:38 compute-0 nova_compute[257802]: 2025-10-02 12:54:38.266 2 DEBUG nova.compute.manager [req-a6c764ff-288d-4ed5-a754-0f5f2bf28244 req-b51801fa-92c9-494c-9f49-a684cced34d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] No waiting events found dispatching network-vif-plugged-2cc0b7ac-ce39-4a91-b50b-d0d21320173d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:54:38 compute-0 nova_compute[257802]: 2025-10-02 12:54:38.266 2 WARNING nova.compute.manager [req-a6c764ff-288d-4ed5-a754-0f5f2bf28244 req-b51801fa-92c9-494c-9f49-a684cced34d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Received unexpected event network-vif-plugged-2cc0b7ac-ce39-4a91-b50b-d0d21320173d for instance with vm_state active and task_state deleting.
Oct 02 12:54:38 compute-0 nova_compute[257802]: 2025-10-02 12:54:38.266 2 DEBUG nova.compute.manager [req-a6c764ff-288d-4ed5-a754-0f5f2bf28244 req-b51801fa-92c9-494c-9f49-a684cced34d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Received event network-vif-plugged-2cc0b7ac-ce39-4a91-b50b-d0d21320173d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:38 compute-0 nova_compute[257802]: 2025-10-02 12:54:38.267 2 DEBUG oslo_concurrency.lockutils [req-a6c764ff-288d-4ed5-a754-0f5f2bf28244 req-b51801fa-92c9-494c-9f49-a684cced34d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "74bffcba-6d85-4f82-87ed-b625a3439b3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:38 compute-0 nova_compute[257802]: 2025-10-02 12:54:38.267 2 DEBUG oslo_concurrency.lockutils [req-a6c764ff-288d-4ed5-a754-0f5f2bf28244 req-b51801fa-92c9-494c-9f49-a684cced34d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "74bffcba-6d85-4f82-87ed-b625a3439b3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:38 compute-0 nova_compute[257802]: 2025-10-02 12:54:38.267 2 DEBUG oslo_concurrency.lockutils [req-a6c764ff-288d-4ed5-a754-0f5f2bf28244 req-b51801fa-92c9-494c-9f49-a684cced34d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "74bffcba-6d85-4f82-87ed-b625a3439b3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:38 compute-0 nova_compute[257802]: 2025-10-02 12:54:38.267 2 DEBUG nova.compute.manager [req-a6c764ff-288d-4ed5-a754-0f5f2bf28244 req-b51801fa-92c9-494c-9f49-a684cced34d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] No waiting events found dispatching network-vif-plugged-2cc0b7ac-ce39-4a91-b50b-d0d21320173d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:54:38 compute-0 nova_compute[257802]: 2025-10-02 12:54:38.267 2 WARNING nova.compute.manager [req-a6c764ff-288d-4ed5-a754-0f5f2bf28244 req-b51801fa-92c9-494c-9f49-a684cced34d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Received unexpected event network-vif-plugged-2cc0b7ac-ce39-4a91-b50b-d0d21320173d for instance with vm_state active and task_state deleting.
Oct 02 12:54:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:38.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:38 compute-0 nova_compute[257802]: 2025-10-02 12:54:38.853 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000b1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:54:38 compute-0 nova_compute[257802]: 2025-10-02 12:54:38.853 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000b1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:54:38 compute-0 nova_compute[257802]: 2025-10-02 12:54:38.857 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000b7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:54:38 compute-0 nova_compute[257802]: 2025-10-02 12:54:38.857 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000b7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:54:38 compute-0 nova_compute[257802]: 2025-10-02 12:54:38.860 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000b6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:54:38 compute-0 nova_compute[257802]: 2025-10-02 12:54:38.860 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000b6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:54:39 compute-0 nova_compute[257802]: 2025-10-02 12:54:39.048 2 INFO nova.compute.manager [None req-8f5a1488-bdf1-43d9-ad46-f4a91763e520 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Took 4.35 seconds to destroy the instance on the hypervisor.
Oct 02 12:54:39 compute-0 nova_compute[257802]: 2025-10-02 12:54:39.048 2 DEBUG oslo.service.loopingcall [None req-8f5a1488-bdf1-43d9-ad46-f4a91763e520 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:54:39 compute-0 nova_compute[257802]: 2025-10-02 12:54:39.049 2 DEBUG nova.compute.manager [-] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:54:39 compute-0 nova_compute[257802]: 2025-10-02 12:54:39.049 2 DEBUG nova.network.neutron [-] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:54:39 compute-0 nova_compute[257802]: 2025-10-02 12:54:39.070 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:54:39 compute-0 nova_compute[257802]: 2025-10-02 12:54:39.072 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3840MB free_disk=20.71508026123047GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:54:39 compute-0 nova_compute[257802]: 2025-10-02 12:54:39.072 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:39 compute-0 nova_compute[257802]: 2025-10-02 12:54:39.072 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:39 compute-0 ceph-mon[73607]: pgmap v2862: 305 pgs: 305 active+clean; 868 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 889 KiB/s rd, 2.0 MiB/s wr, 114 op/s
Oct 02 12:54:39 compute-0 nova_compute[257802]: 2025-10-02 12:54:39.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:39.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2863: 305 pgs: 305 active+clean; 789 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 919 KiB/s rd, 2.1 MiB/s wr, 159 op/s
Oct 02 12:54:40 compute-0 nova_compute[257802]: 2025-10-02 12:54:40.294 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 8e2c1007-1d07-434c-8a22-6cb98d903d3c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:54:40 compute-0 nova_compute[257802]: 2025-10-02 12:54:40.295 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 42da5a56-55e8-4a1a-a524-24555a4bd3ec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:54:40 compute-0 nova_compute[257802]: 2025-10-02 12:54:40.295 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 74bffcba-6d85-4f82-87ed-b625a3439b3a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:54:40 compute-0 nova_compute[257802]: 2025-10-02 12:54:40.295 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:54:40 compute-0 nova_compute[257802]: 2025-10-02 12:54:40.296 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:54:40 compute-0 nova_compute[257802]: 2025-10-02 12:54:40.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:40 compute-0 nova_compute[257802]: 2025-10-02 12:54:40.368 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:40 compute-0 nova_compute[257802]: 2025-10-02 12:54:40.723 2 DEBUG nova.compute.manager [req-28ed38ee-1011-4725-a17b-f287d4ef3d9f req-3de554fc-7a96-411f-af92-541b922376d1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Received event network-vif-plugged-2cc0b7ac-ce39-4a91-b50b-d0d21320173d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:40 compute-0 nova_compute[257802]: 2025-10-02 12:54:40.724 2 DEBUG oslo_concurrency.lockutils [req-28ed38ee-1011-4725-a17b-f287d4ef3d9f req-3de554fc-7a96-411f-af92-541b922376d1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "74bffcba-6d85-4f82-87ed-b625a3439b3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:40 compute-0 nova_compute[257802]: 2025-10-02 12:54:40.724 2 DEBUG oslo_concurrency.lockutils [req-28ed38ee-1011-4725-a17b-f287d4ef3d9f req-3de554fc-7a96-411f-af92-541b922376d1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "74bffcba-6d85-4f82-87ed-b625a3439b3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:40 compute-0 nova_compute[257802]: 2025-10-02 12:54:40.725 2 DEBUG oslo_concurrency.lockutils [req-28ed38ee-1011-4725-a17b-f287d4ef3d9f req-3de554fc-7a96-411f-af92-541b922376d1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "74bffcba-6d85-4f82-87ed-b625a3439b3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:40 compute-0 nova_compute[257802]: 2025-10-02 12:54:40.725 2 DEBUG nova.compute.manager [req-28ed38ee-1011-4725-a17b-f287d4ef3d9f req-3de554fc-7a96-411f-af92-541b922376d1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] No waiting events found dispatching network-vif-plugged-2cc0b7ac-ce39-4a91-b50b-d0d21320173d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:54:40 compute-0 nova_compute[257802]: 2025-10-02 12:54:40.725 2 WARNING nova.compute.manager [req-28ed38ee-1011-4725-a17b-f287d4ef3d9f req-3de554fc-7a96-411f-af92-541b922376d1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Received unexpected event network-vif-plugged-2cc0b7ac-ce39-4a91-b50b-d0d21320173d for instance with vm_state active and task_state deleting.
Oct 02 12:54:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:54:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:40.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:54:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:54:40 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2409710738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:40 compute-0 nova_compute[257802]: 2025-10-02 12:54:40.805 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:40 compute-0 nova_compute[257802]: 2025-10-02 12:54:40.810 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:54:40 compute-0 nova_compute[257802]: 2025-10-02 12:54:40.840 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:54:40 compute-0 nova_compute[257802]: 2025-10-02 12:54:40.892 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:54:40 compute-0 nova_compute[257802]: 2025-10-02 12:54:40.892 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:40 compute-0 nova_compute[257802]: 2025-10-02 12:54:40.893 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:40 compute-0 nova_compute[257802]: 2025-10-02 12:54:40.893 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:54:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:40.974 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:54:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:40.974 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:54:40 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:40.975 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:40 compute-0 nova_compute[257802]: 2025-10-02 12:54:40.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:41 compute-0 nova_compute[257802]: 2025-10-02 12:54:41.017 2 DEBUG nova.network.neutron [-] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:54:41 compute-0 nova_compute[257802]: 2025-10-02 12:54:41.058 2 INFO nova.compute.manager [-] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Took 2.01 seconds to deallocate network for instance.
Oct 02 12:54:41 compute-0 nova_compute[257802]: 2025-10-02 12:54:41.092 2 DEBUG nova.compute.manager [req-06e31b92-141d-49ef-a1e9-61377c557101 req-c858cc64-c6e2-42d6-b374-2f7016131b54 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Received event network-vif-deleted-2cc0b7ac-ce39-4a91-b50b-d0d21320173d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:41 compute-0 nova_compute[257802]: 2025-10-02 12:54:41.108 2 DEBUG oslo_concurrency.lockutils [None req-8f5a1488-bdf1-43d9-ad46-f4a91763e520 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:41 compute-0 nova_compute[257802]: 2025-10-02 12:54:41.109 2 DEBUG oslo_concurrency.lockutils [None req-8f5a1488-bdf1-43d9-ad46-f4a91763e520 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:41 compute-0 nova_compute[257802]: 2025-10-02 12:54:41.188 2 DEBUG oslo_concurrency.processutils [None req-8f5a1488-bdf1-43d9-ad46-f4a91763e520 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:41 compute-0 ceph-mon[73607]: pgmap v2863: 305 pgs: 305 active+clean; 789 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 919 KiB/s rd, 2.1 MiB/s wr, 159 op/s
Oct 02 12:54:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2409710738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:54:41 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3048990525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:41 compute-0 nova_compute[257802]: 2025-10-02 12:54:41.684 2 DEBUG oslo_concurrency.processutils [None req-8f5a1488-bdf1-43d9-ad46-f4a91763e520 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:41 compute-0 nova_compute[257802]: 2025-10-02 12:54:41.691 2 DEBUG nova.compute.provider_tree [None req-8f5a1488-bdf1-43d9-ad46-f4a91763e520 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:54:41 compute-0 nova_compute[257802]: 2025-10-02 12:54:41.716 2 DEBUG nova.scheduler.client.report [None req-8f5a1488-bdf1-43d9-ad46-f4a91763e520 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:54:41 compute-0 nova_compute[257802]: 2025-10-02 12:54:41.785 2 DEBUG oslo_concurrency.lockutils [None req-8f5a1488-bdf1-43d9-ad46-f4a91763e520 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:41 compute-0 nova_compute[257802]: 2025-10-02 12:54:41.822 2 INFO nova.scheduler.client.report [None req-8f5a1488-bdf1-43d9-ad46-f4a91763e520 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Deleted allocations for instance 74bffcba-6d85-4f82-87ed-b625a3439b3a
Oct 02 12:54:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:54:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:41.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:54:41 compute-0 nova_compute[257802]: 2025-10-02 12:54:41.932 2 DEBUG oslo_concurrency.lockutils [None req-8f5a1488-bdf1-43d9-ad46-f4a91763e520 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "74bffcba-6d85-4f82-87ed-b625a3439b3a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.247s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2864: 305 pgs: 305 active+clean; 789 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Oct 02 12:54:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:54:42
Oct 02 12:54:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:54:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:54:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'images', '.rgw.root', 'backups']
Oct 02 12:54:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:54:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3048990525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:54:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:54:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:54:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:54:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:54:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:54:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:54:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:42.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:54:42 compute-0 nova_compute[257802]: 2025-10-02 12:54:42.824 2 DEBUG nova.compute.manager [req-9e1e88a6-da68-45ed-8120-ecb7e7a20436 req-d6f8df5a-cbb6-4199-bc05-6b39f97e7264 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Received event network-vif-unplugged-2cc0b7ac-ce39-4a91-b50b-d0d21320173d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:42 compute-0 nova_compute[257802]: 2025-10-02 12:54:42.824 2 DEBUG oslo_concurrency.lockutils [req-9e1e88a6-da68-45ed-8120-ecb7e7a20436 req-d6f8df5a-cbb6-4199-bc05-6b39f97e7264 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "74bffcba-6d85-4f82-87ed-b625a3439b3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:42 compute-0 nova_compute[257802]: 2025-10-02 12:54:42.825 2 DEBUG oslo_concurrency.lockutils [req-9e1e88a6-da68-45ed-8120-ecb7e7a20436 req-d6f8df5a-cbb6-4199-bc05-6b39f97e7264 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "74bffcba-6d85-4f82-87ed-b625a3439b3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:42 compute-0 nova_compute[257802]: 2025-10-02 12:54:42.825 2 DEBUG oslo_concurrency.lockutils [req-9e1e88a6-da68-45ed-8120-ecb7e7a20436 req-d6f8df5a-cbb6-4199-bc05-6b39f97e7264 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "74bffcba-6d85-4f82-87ed-b625a3439b3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:42 compute-0 nova_compute[257802]: 2025-10-02 12:54:42.825 2 DEBUG nova.compute.manager [req-9e1e88a6-da68-45ed-8120-ecb7e7a20436 req-d6f8df5a-cbb6-4199-bc05-6b39f97e7264 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] No waiting events found dispatching network-vif-unplugged-2cc0b7ac-ce39-4a91-b50b-d0d21320173d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:54:42 compute-0 nova_compute[257802]: 2025-10-02 12:54:42.825 2 WARNING nova.compute.manager [req-9e1e88a6-da68-45ed-8120-ecb7e7a20436 req-d6f8df5a-cbb6-4199-bc05-6b39f97e7264 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Received unexpected event network-vif-unplugged-2cc0b7ac-ce39-4a91-b50b-d0d21320173d for instance with vm_state deleted and task_state None.
Oct 02 12:54:42 compute-0 nova_compute[257802]: 2025-10-02 12:54:42.825 2 DEBUG nova.compute.manager [req-9e1e88a6-da68-45ed-8120-ecb7e7a20436 req-d6f8df5a-cbb6-4199-bc05-6b39f97e7264 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Received event network-vif-plugged-2cc0b7ac-ce39-4a91-b50b-d0d21320173d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:42 compute-0 nova_compute[257802]: 2025-10-02 12:54:42.826 2 DEBUG oslo_concurrency.lockutils [req-9e1e88a6-da68-45ed-8120-ecb7e7a20436 req-d6f8df5a-cbb6-4199-bc05-6b39f97e7264 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "74bffcba-6d85-4f82-87ed-b625a3439b3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:42 compute-0 nova_compute[257802]: 2025-10-02 12:54:42.826 2 DEBUG oslo_concurrency.lockutils [req-9e1e88a6-da68-45ed-8120-ecb7e7a20436 req-d6f8df5a-cbb6-4199-bc05-6b39f97e7264 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "74bffcba-6d85-4f82-87ed-b625a3439b3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:42 compute-0 nova_compute[257802]: 2025-10-02 12:54:42.826 2 DEBUG oslo_concurrency.lockutils [req-9e1e88a6-da68-45ed-8120-ecb7e7a20436 req-d6f8df5a-cbb6-4199-bc05-6b39f97e7264 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "74bffcba-6d85-4f82-87ed-b625a3439b3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:42 compute-0 nova_compute[257802]: 2025-10-02 12:54:42.826 2 DEBUG nova.compute.manager [req-9e1e88a6-da68-45ed-8120-ecb7e7a20436 req-d6f8df5a-cbb6-4199-bc05-6b39f97e7264 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] No waiting events found dispatching network-vif-plugged-2cc0b7ac-ce39-4a91-b50b-d0d21320173d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:54:42 compute-0 nova_compute[257802]: 2025-10-02 12:54:42.826 2 WARNING nova.compute.manager [req-9e1e88a6-da68-45ed-8120-ecb7e7a20436 req-d6f8df5a-cbb6-4199-bc05-6b39f97e7264 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Received unexpected event network-vif-plugged-2cc0b7ac-ce39-4a91-b50b-d0d21320173d for instance with vm_state deleted and task_state None.
Oct 02 12:54:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:54:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:54:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:54:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:54:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:54:43 compute-0 ceph-mon[73607]: pgmap v2864: 305 pgs: 305 active+clean; 789 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:54:43.633532) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409683633600, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2138, "num_deletes": 252, "total_data_size": 3876567, "memory_usage": 3929584, "flush_reason": "Manual Compaction"}
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409683658801, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 3797588, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61440, "largest_seqno": 63577, "table_properties": {"data_size": 3787963, "index_size": 6118, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19873, "raw_average_key_size": 20, "raw_value_size": 3768675, "raw_average_value_size": 3877, "num_data_blocks": 267, "num_entries": 972, "num_filter_entries": 972, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409468, "oldest_key_time": 1759409468, "file_creation_time": 1759409683, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 25347 microseconds, and 8827 cpu microseconds.
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:54:43.658882) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 3797588 bytes OK
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:54:43.658904) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:54:43.661133) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:54:43.661151) EVENT_LOG_v1 {"time_micros": 1759409683661145, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:54:43.661173) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 3867937, prev total WAL file size 3867937, number of live WAL files 2.
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:54:43.662285) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(3708KB)], [140(10MB)]
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409683662324, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 14300675, "oldest_snapshot_seqno": -1}
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 9116 keys, 12395384 bytes, temperature: kUnknown
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409683735677, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 12395384, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12335333, "index_size": 36130, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22853, "raw_key_size": 239375, "raw_average_key_size": 26, "raw_value_size": 12174116, "raw_average_value_size": 1335, "num_data_blocks": 1384, "num_entries": 9116, "num_filter_entries": 9116, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759409683, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:54:43.736003) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 12395384 bytes
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:54:43.737774) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.7 rd, 168.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 10.0 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(7.0) write-amplify(3.3) OK, records in: 9639, records dropped: 523 output_compression: NoCompression
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:54:43.737794) EVENT_LOG_v1 {"time_micros": 1759409683737786, "job": 86, "event": "compaction_finished", "compaction_time_micros": 73444, "compaction_time_cpu_micros": 27904, "output_level": 6, "num_output_files": 1, "total_output_size": 12395384, "num_input_records": 9639, "num_output_records": 9116, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409683738453, "job": 86, "event": "table_file_deletion", "file_number": 142}
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409683740575, "job": 86, "event": "table_file_deletion", "file_number": 140}
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:54:43.662166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:54:43.740621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:54:43.740626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:54:43.740628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:54:43.740630) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:54:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:54:43.740631) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:54:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:43.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:54:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:54:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:54:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:54:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:54:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2865: 305 pgs: 305 active+clean; 789 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Oct 02 12:54:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:44 compute-0 nova_compute[257802]: 2025-10-02 12:54:44.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:54:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:44.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:54:45 compute-0 nova_compute[257802]: 2025-10-02 12:54:45.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:45 compute-0 ceph-mon[73607]: pgmap v2865: 305 pgs: 305 active+clean; 789 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Oct 02 12:54:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:45.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2866: 305 pgs: 305 active+clean; 789 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 303 op/s
Oct 02 12:54:46 compute-0 nova_compute[257802]: 2025-10-02 12:54:46.345 2 DEBUG oslo_concurrency.lockutils [None req-602f3de1-a9fd-4882-8529-55bb93b88d6b fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "42da5a56-55e8-4a1a-a524-24555a4bd3ec" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:46 compute-0 nova_compute[257802]: 2025-10-02 12:54:46.346 2 DEBUG oslo_concurrency.lockutils [None req-602f3de1-a9fd-4882-8529-55bb93b88d6b fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "42da5a56-55e8-4a1a-a524-24555a4bd3ec" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:46 compute-0 nova_compute[257802]: 2025-10-02 12:54:46.346 2 DEBUG oslo_concurrency.lockutils [None req-602f3de1-a9fd-4882-8529-55bb93b88d6b fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "42da5a56-55e8-4a1a-a524-24555a4bd3ec-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:46 compute-0 nova_compute[257802]: 2025-10-02 12:54:46.346 2 DEBUG oslo_concurrency.lockutils [None req-602f3de1-a9fd-4882-8529-55bb93b88d6b fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "42da5a56-55e8-4a1a-a524-24555a4bd3ec-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:46 compute-0 nova_compute[257802]: 2025-10-02 12:54:46.347 2 DEBUG oslo_concurrency.lockutils [None req-602f3de1-a9fd-4882-8529-55bb93b88d6b fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "42da5a56-55e8-4a1a-a524-24555a4bd3ec-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:46 compute-0 nova_compute[257802]: 2025-10-02 12:54:46.349 2 INFO nova.compute.manager [None req-602f3de1-a9fd-4882-8529-55bb93b88d6b fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Terminating instance
Oct 02 12:54:46 compute-0 nova_compute[257802]: 2025-10-02 12:54:46.350 2 DEBUG nova.compute.manager [None req-602f3de1-a9fd-4882-8529-55bb93b88d6b fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:54:46 compute-0 kernel: tapedd030ea-1b (unregistering): left promiscuous mode
Oct 02 12:54:46 compute-0 NetworkManager[44987]: <info>  [1759409686.4191] device (tapedd030ea-1b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:54:46 compute-0 ovn_controller[148183]: 2025-10-02T12:54:46Z|00829|binding|INFO|Releasing lport edd030ea-1bf1-4735-8720-2e02fbd67149 from this chassis (sb_readonly=0)
Oct 02 12:54:46 compute-0 nova_compute[257802]: 2025-10-02 12:54:46.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:46 compute-0 ovn_controller[148183]: 2025-10-02T12:54:46Z|00830|binding|INFO|Setting lport edd030ea-1bf1-4735-8720-2e02fbd67149 down in Southbound
Oct 02 12:54:46 compute-0 ovn_controller[148183]: 2025-10-02T12:54:46Z|00831|binding|INFO|Removing iface tapedd030ea-1b ovn-installed in OVS
Oct 02 12:54:46 compute-0 nova_compute[257802]: 2025-10-02 12:54:46.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:46.439 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:d0:8a 10.100.0.12'], port_security=['fa:16:3e:57:d0:8a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '42da5a56-55e8-4a1a-a524-24555a4bd3ec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15afb19f-043a-469d-96b6-7de0ff8590f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a82f1e35-7dfe-4339-916d-665ac590e0ea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd01f191-612f-4595-b9f0-c8bb017b8743, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=edd030ea-1bf1-4735-8720-2e02fbd67149) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:54:46 compute-0 nova_compute[257802]: 2025-10-02 12:54:46.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:46.442 158261 INFO neutron.agent.ovn.metadata.agent [-] Port edd030ea-1bf1-4735-8720-2e02fbd67149 in datapath 15afb19f-043a-469d-96b6-7de0ff8590f7 unbound from our chassis
Oct 02 12:54:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:46.445 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 15afb19f-043a-469d-96b6-7de0ff8590f7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:54:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:46.447 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[dc64aab3-654c-4bb2-9593-66792823d103]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:46.449 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7 namespace which is not needed anymore
Oct 02 12:54:46 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000b6.scope: Deactivated successfully.
Oct 02 12:54:46 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000b6.scope: Consumed 20.324s CPU time.
Oct 02 12:54:46 compute-0 systemd-machined[211836]: Machine qemu-89-instance-000000b6 terminated.
Oct 02 12:54:46 compute-0 nova_compute[257802]: 2025-10-02 12:54:46.588 2 INFO nova.virt.libvirt.driver [-] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Instance destroyed successfully.
Oct 02 12:54:46 compute-0 nova_compute[257802]: 2025-10-02 12:54:46.589 2 DEBUG nova.objects.instance [None req-602f3de1-a9fd-4882-8529-55bb93b88d6b fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'resources' on Instance uuid 42da5a56-55e8-4a1a-a524-24555a4bd3ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:54:46 compute-0 neutron-haproxy-ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7[372111]: [NOTICE]   (372115) : haproxy version is 2.8.14-c23fe91
Oct 02 12:54:46 compute-0 neutron-haproxy-ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7[372111]: [NOTICE]   (372115) : path to executable is /usr/sbin/haproxy
Oct 02 12:54:46 compute-0 neutron-haproxy-ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7[372111]: [WARNING]  (372115) : Exiting Master process...
Oct 02 12:54:46 compute-0 neutron-haproxy-ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7[372111]: [WARNING]  (372115) : Exiting Master process...
Oct 02 12:54:46 compute-0 neutron-haproxy-ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7[372111]: [ALERT]    (372115) : Current worker (372117) exited with code 143 (Terminated)
Oct 02 12:54:46 compute-0 neutron-haproxy-ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7[372111]: [WARNING]  (372115) : All workers exited. Exiting... (0)
Oct 02 12:54:46 compute-0 systemd[1]: libpod-53f821af10379f38cbd1815740e3ad2aa0899f602fee46144cf8b111d4ea45b2.scope: Deactivated successfully.
Oct 02 12:54:46 compute-0 podman[375429]: 2025-10-02 12:54:46.618464909 +0000 UTC m=+0.071810025 container died 53f821af10379f38cbd1815740e3ad2aa0899f602fee46144cf8b111d4ea45b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:54:46 compute-0 nova_compute[257802]: 2025-10-02 12:54:46.626 2 DEBUG nova.virt.libvirt.vif [None req-602f3de1-a9fd-4882-8529-55bb93b88d6b fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:51:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-933745214',display_name='tempest-TestNetworkBasicOps-server-933745214',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-933745214',id=182,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF5g2YR7mRJ8wv66Ie3gEOW4Ei/BhT431fnvwb66U2s7bhv8tgyt/+mCbk3G8aSXbvbYDVe7KE5z2DHS0eT8dOztSZNQmtEW2btO6tXoKqIQlS8tpISInSq+eCkqqeyBiA==',key_name='tempest-TestNetworkBasicOps-1419012775',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:51:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-m1sxbh0x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:51:51Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=42da5a56-55e8-4a1a-a524-24555a4bd3ec,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "edd030ea-1bf1-4735-8720-2e02fbd67149", "address": "fa:16:3e:57:d0:8a", "network": {"id": "15afb19f-043a-469d-96b6-7de0ff8590f7", "bridge": "br-int", "label": "tempest-network-smoke--253378791", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd030ea-1b", "ovs_interfaceid": "edd030ea-1bf1-4735-8720-2e02fbd67149", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:54:46 compute-0 nova_compute[257802]: 2025-10-02 12:54:46.627 2 DEBUG nova.network.os_vif_util [None req-602f3de1-a9fd-4882-8529-55bb93b88d6b fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "edd030ea-1bf1-4735-8720-2e02fbd67149", "address": "fa:16:3e:57:d0:8a", "network": {"id": "15afb19f-043a-469d-96b6-7de0ff8590f7", "bridge": "br-int", "label": "tempest-network-smoke--253378791", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd030ea-1b", "ovs_interfaceid": "edd030ea-1bf1-4735-8720-2e02fbd67149", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:54:46 compute-0 nova_compute[257802]: 2025-10-02 12:54:46.627 2 DEBUG nova.network.os_vif_util [None req-602f3de1-a9fd-4882-8529-55bb93b88d6b fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:57:d0:8a,bridge_name='br-int',has_traffic_filtering=True,id=edd030ea-1bf1-4735-8720-2e02fbd67149,network=Network(15afb19f-043a-469d-96b6-7de0ff8590f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedd030ea-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:54:46 compute-0 nova_compute[257802]: 2025-10-02 12:54:46.628 2 DEBUG os_vif [None req-602f3de1-a9fd-4882-8529-55bb93b88d6b fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:d0:8a,bridge_name='br-int',has_traffic_filtering=True,id=edd030ea-1bf1-4735-8720-2e02fbd67149,network=Network(15afb19f-043a-469d-96b6-7de0ff8590f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedd030ea-1b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:54:46 compute-0 nova_compute[257802]: 2025-10-02 12:54:46.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:46 compute-0 nova_compute[257802]: 2025-10-02 12:54:46.629 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapedd030ea-1b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:46 compute-0 nova_compute[257802]: 2025-10-02 12:54:46.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:46 compute-0 nova_compute[257802]: 2025-10-02 12:54:46.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:54:46 compute-0 nova_compute[257802]: 2025-10-02 12:54:46.635 2 INFO os_vif [None req-602f3de1-a9fd-4882-8529-55bb93b88d6b fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:d0:8a,bridge_name='br-int',has_traffic_filtering=True,id=edd030ea-1bf1-4735-8720-2e02fbd67149,network=Network(15afb19f-043a-469d-96b6-7de0ff8590f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedd030ea-1b')
Oct 02 12:54:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-53f821af10379f38cbd1815740e3ad2aa0899f602fee46144cf8b111d4ea45b2-userdata-shm.mount: Deactivated successfully.
Oct 02 12:54:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e382 do_prune osdmap full prune enabled
Oct 02 12:54:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-8429a495c50209d857d65efc9049597a3d834257b339043b6ac427634bcfd8ae-merged.mount: Deactivated successfully.
Oct 02 12:54:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e383 e383: 3 total, 3 up, 3 in
Oct 02 12:54:46 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e383: 3 total, 3 up, 3 in
Oct 02 12:54:46 compute-0 podman[375429]: 2025-10-02 12:54:46.680664828 +0000 UTC m=+0.134009914 container cleanup 53f821af10379f38cbd1815740e3ad2aa0899f602fee46144cf8b111d4ea45b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 12:54:46 compute-0 systemd[1]: libpod-conmon-53f821af10379f38cbd1815740e3ad2aa0899f602fee46144cf8b111d4ea45b2.scope: Deactivated successfully.
Oct 02 12:54:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:46.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:46 compute-0 podman[375483]: 2025-10-02 12:54:46.76833547 +0000 UTC m=+0.055144685 container remove 53f821af10379f38cbd1815740e3ad2aa0899f602fee46144cf8b111d4ea45b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:54:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:46.775 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d81ec926-32ce-4720-9421-9d1baa976b58]: (4, ('Thu Oct  2 12:54:46 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7 (53f821af10379f38cbd1815740e3ad2aa0899f602fee46144cf8b111d4ea45b2)\n53f821af10379f38cbd1815740e3ad2aa0899f602fee46144cf8b111d4ea45b2\nThu Oct  2 12:54:46 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7 (53f821af10379f38cbd1815740e3ad2aa0899f602fee46144cf8b111d4ea45b2)\n53f821af10379f38cbd1815740e3ad2aa0899f602fee46144cf8b111d4ea45b2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:46.779 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b94de0f9-e075-45ef-b6f5-efbf82518bab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:46.780 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15afb19f-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:46 compute-0 nova_compute[257802]: 2025-10-02 12:54:46.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:46 compute-0 kernel: tap15afb19f-00: left promiscuous mode
Oct 02 12:54:46 compute-0 nova_compute[257802]: 2025-10-02 12:54:46.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:46.804 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[106a6afc-59d1-4dbd-8dbc-1d100b555f6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:46.833 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e842adb9-7af4-4d86-98c6-71f7a7e3b714]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:46.835 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7e42427d-fa84-457a-9ec8-fa5a8dee3702]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:46 compute-0 nova_compute[257802]: 2025-10-02 12:54:46.852 2 DEBUG nova.compute.manager [req-62a38843-d932-40eb-ad31-b82c4a07ea53 req-ed8155ca-46c5-4fde-9a1f-7e667cab4eba d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Received event network-vif-unplugged-edd030ea-1bf1-4735-8720-2e02fbd67149 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:46 compute-0 nova_compute[257802]: 2025-10-02 12:54:46.853 2 DEBUG oslo_concurrency.lockutils [req-62a38843-d932-40eb-ad31-b82c4a07ea53 req-ed8155ca-46c5-4fde-9a1f-7e667cab4eba d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "42da5a56-55e8-4a1a-a524-24555a4bd3ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:46 compute-0 nova_compute[257802]: 2025-10-02 12:54:46.853 2 DEBUG oslo_concurrency.lockutils [req-62a38843-d932-40eb-ad31-b82c4a07ea53 req-ed8155ca-46c5-4fde-9a1f-7e667cab4eba d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "42da5a56-55e8-4a1a-a524-24555a4bd3ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:46 compute-0 nova_compute[257802]: 2025-10-02 12:54:46.853 2 DEBUG oslo_concurrency.lockutils [req-62a38843-d932-40eb-ad31-b82c4a07ea53 req-ed8155ca-46c5-4fde-9a1f-7e667cab4eba d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "42da5a56-55e8-4a1a-a524-24555a4bd3ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:46 compute-0 nova_compute[257802]: 2025-10-02 12:54:46.853 2 DEBUG nova.compute.manager [req-62a38843-d932-40eb-ad31-b82c4a07ea53 req-ed8155ca-46c5-4fde-9a1f-7e667cab4eba d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] No waiting events found dispatching network-vif-unplugged-edd030ea-1bf1-4735-8720-2e02fbd67149 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:54:46 compute-0 nova_compute[257802]: 2025-10-02 12:54:46.853 2 DEBUG nova.compute.manager [req-62a38843-d932-40eb-ad31-b82c4a07ea53 req-ed8155ca-46c5-4fde-9a1f-7e667cab4eba d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Received event network-vif-unplugged-edd030ea-1bf1-4735-8720-2e02fbd67149 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:54:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:46.854 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[571d1397-621a-4045-8651-bffc6a190a2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 757695, 'reachable_time': 38826, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375499, 'error': None, 'target': 'ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:46 compute-0 systemd[1]: run-netns-ovnmeta\x2d15afb19f\x2d043a\x2d469d\x2d96b6\x2d7de0ff8590f7.mount: Deactivated successfully.
Oct 02 12:54:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:46.857 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-15afb19f-043a-469d-96b6-7de0ff8590f7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:54:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:54:46.857 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[5e068140-de74-4d96-b14f-15dc8b27d4be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:46 compute-0 nova_compute[257802]: 2025-10-02 12:54:46.930 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:47 compute-0 nova_compute[257802]: 2025-10-02 12:54:47.581 2 INFO nova.virt.libvirt.driver [None req-602f3de1-a9fd-4882-8529-55bb93b88d6b fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Deleting instance files /var/lib/nova/instances/42da5a56-55e8-4a1a-a524-24555a4bd3ec_del
Oct 02 12:54:47 compute-0 nova_compute[257802]: 2025-10-02 12:54:47.582 2 INFO nova.virt.libvirt.driver [None req-602f3de1-a9fd-4882-8529-55bb93b88d6b fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Deletion of /var/lib/nova/instances/42da5a56-55e8-4a1a-a524-24555a4bd3ec_del complete
Oct 02 12:54:47 compute-0 ceph-mon[73607]: pgmap v2866: 305 pgs: 305 active+clean; 789 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 303 op/s
Oct 02 12:54:47 compute-0 ceph-mon[73607]: osdmap e383: 3 total, 3 up, 3 in
Oct 02 12:54:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:47.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:47 compute-0 nova_compute[257802]: 2025-10-02 12:54:47.928 2 INFO nova.compute.manager [None req-602f3de1-a9fd-4882-8529-55bb93b88d6b fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Took 1.58 seconds to destroy the instance on the hypervisor.
Oct 02 12:54:47 compute-0 nova_compute[257802]: 2025-10-02 12:54:47.929 2 DEBUG oslo.service.loopingcall [None req-602f3de1-a9fd-4882-8529-55bb93b88d6b fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:54:47 compute-0 nova_compute[257802]: 2025-10-02 12:54:47.929 2 DEBUG nova.compute.manager [-] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:54:47 compute-0 nova_compute[257802]: 2025-10-02 12:54:47.929 2 DEBUG nova.network.neutron [-] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:54:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2868: 305 pgs: 305 active+clean; 789 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 20 KiB/s wr, 329 op/s
Oct 02 12:54:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:54:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:48.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:54:48 compute-0 ceph-mon[73607]: pgmap v2868: 305 pgs: 305 active+clean; 789 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 20 KiB/s wr, 329 op/s
Oct 02 12:54:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e383 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:49 compute-0 nova_compute[257802]: 2025-10-02 12:54:49.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:49.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:50 compute-0 nova_compute[257802]: 2025-10-02 12:54:50.154 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409675.1372285, 74bffcba-6d85-4f82-87ed-b625a3439b3a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:54:50 compute-0 nova_compute[257802]: 2025-10-02 12:54:50.154 2 INFO nova.compute.manager [-] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] VM Stopped (Lifecycle Event)
Oct 02 12:54:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2869: 305 pgs: 305 active+clean; 630 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.0 KiB/s wr, 345 op/s
Oct 02 12:54:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:50.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:50 compute-0 nova_compute[257802]: 2025-10-02 12:54:50.789 2 DEBUG nova.compute.manager [req-f14aba3d-530a-4fed-b874-b89942128cec req-0b88c10a-5e7b-4904-8ca6-f586b8bd0cf8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Received event network-vif-plugged-edd030ea-1bf1-4735-8720-2e02fbd67149 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:50 compute-0 nova_compute[257802]: 2025-10-02 12:54:50.790 2 DEBUG oslo_concurrency.lockutils [req-f14aba3d-530a-4fed-b874-b89942128cec req-0b88c10a-5e7b-4904-8ca6-f586b8bd0cf8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "42da5a56-55e8-4a1a-a524-24555a4bd3ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:50 compute-0 nova_compute[257802]: 2025-10-02 12:54:50.791 2 DEBUG oslo_concurrency.lockutils [req-f14aba3d-530a-4fed-b874-b89942128cec req-0b88c10a-5e7b-4904-8ca6-f586b8bd0cf8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "42da5a56-55e8-4a1a-a524-24555a4bd3ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:50 compute-0 nova_compute[257802]: 2025-10-02 12:54:50.791 2 DEBUG oslo_concurrency.lockutils [req-f14aba3d-530a-4fed-b874-b89942128cec req-0b88c10a-5e7b-4904-8ca6-f586b8bd0cf8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "42da5a56-55e8-4a1a-a524-24555a4bd3ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:50 compute-0 nova_compute[257802]: 2025-10-02 12:54:50.792 2 DEBUG nova.compute.manager [req-f14aba3d-530a-4fed-b874-b89942128cec req-0b88c10a-5e7b-4904-8ca6-f586b8bd0cf8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] No waiting events found dispatching network-vif-plugged-edd030ea-1bf1-4735-8720-2e02fbd67149 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:54:50 compute-0 nova_compute[257802]: 2025-10-02 12:54:50.793 2 WARNING nova.compute.manager [req-f14aba3d-530a-4fed-b874-b89942128cec req-0b88c10a-5e7b-4904-8ca6-f586b8bd0cf8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Received unexpected event network-vif-plugged-edd030ea-1bf1-4735-8720-2e02fbd67149 for instance with vm_state active and task_state deleting.
Oct 02 12:54:50 compute-0 nova_compute[257802]: 2025-10-02 12:54:50.805 2 DEBUG nova.compute.manager [None req-307b9308-015e-48e6-9429-9a929c035f86 - - - - - -] [instance: 74bffcba-6d85-4f82-87ed-b625a3439b3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:54:50 compute-0 sudo[375503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:50 compute-0 sudo[375503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:50 compute-0 sudo[375503]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:50 compute-0 sudo[375528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:54:51 compute-0 sudo[375528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:51 compute-0 sudo[375528]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:51 compute-0 sudo[375571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:51 compute-0 sudo[375571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:51 compute-0 sudo[375571]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:51 compute-0 podman[375552]: 2025-10-02 12:54:51.091643443 +0000 UTC m=+0.082933548 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 02 12:54:51 compute-0 podman[375553]: 2025-10-02 12:54:51.095076528 +0000 UTC m=+0.088552847 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd)
Oct 02 12:54:51 compute-0 podman[375554]: 2025-10-02 12:54:51.097522617 +0000 UTC m=+0.088515595 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:54:51 compute-0 nova_compute[257802]: 2025-10-02 12:54:51.138 2 DEBUG nova.network.neutron [-] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:54:51 compute-0 sudo[375636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:54:51 compute-0 sudo[375636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:51 compute-0 nova_compute[257802]: 2025-10-02 12:54:51.167 2 INFO nova.compute.manager [-] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Took 3.24 seconds to deallocate network for instance.
Oct 02 12:54:51 compute-0 nova_compute[257802]: 2025-10-02 12:54:51.219 2 DEBUG oslo_concurrency.lockutils [None req-602f3de1-a9fd-4882-8529-55bb93b88d6b fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:51 compute-0 nova_compute[257802]: 2025-10-02 12:54:51.220 2 DEBUG oslo_concurrency.lockutils [None req-602f3de1-a9fd-4882-8529-55bb93b88d6b fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:51 compute-0 nova_compute[257802]: 2025-10-02 12:54:51.307 2 DEBUG oslo_concurrency.processutils [None req-602f3de1-a9fd-4882-8529-55bb93b88d6b fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:51 compute-0 ceph-mon[73607]: pgmap v2869: 305 pgs: 305 active+clean; 630 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.0 KiB/s wr, 345 op/s
Oct 02 12:54:51 compute-0 nova_compute[257802]: 2025-10-02 12:54:51.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:51 compute-0 sudo[375636]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:54:51 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/591729503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:51 compute-0 nova_compute[257802]: 2025-10-02 12:54:51.807 2 DEBUG oslo_concurrency.processutils [None req-602f3de1-a9fd-4882-8529-55bb93b88d6b fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:51 compute-0 nova_compute[257802]: 2025-10-02 12:54:51.813 2 DEBUG nova.compute.provider_tree [None req-602f3de1-a9fd-4882-8529-55bb93b88d6b fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:54:51 compute-0 nova_compute[257802]: 2025-10-02 12:54:51.836 2 DEBUG nova.scheduler.client.report [None req-602f3de1-a9fd-4882-8529-55bb93b88d6b fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:54:51 compute-0 nova_compute[257802]: 2025-10-02 12:54:51.869 2 DEBUG oslo_concurrency.lockutils [None req-602f3de1-a9fd-4882-8529-55bb93b88d6b fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:51.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:51 compute-0 nova_compute[257802]: 2025-10-02 12:54:51.895 2 INFO nova.scheduler.client.report [None req-602f3de1-a9fd-4882-8529-55bb93b88d6b fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Deleted allocations for instance 42da5a56-55e8-4a1a-a524-24555a4bd3ec
Oct 02 12:54:51 compute-0 nova_compute[257802]: 2025-10-02 12:54:51.989 2 DEBUG oslo_concurrency.lockutils [None req-602f3de1-a9fd-4882-8529-55bb93b88d6b fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "42da5a56-55e8-4a1a-a524-24555a4bd3ec" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2870: 305 pgs: 305 active+clean; 630 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.0 KiB/s wr, 345 op/s
Oct 02 12:54:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/591729503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:54:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:52.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:54:52 compute-0 nova_compute[257802]: 2025-10-02 12:54:52.921 2 DEBUG nova.compute.manager [req-95bad966-43a4-4aef-9bd3-40077fefec24 req-df301467-2507-43cd-83e5-98b3d354b91b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Received event network-vif-deleted-edd030ea-1bf1-4735-8720-2e02fbd67149 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:53 compute-0 ceph-mon[73607]: pgmap v2870: 305 pgs: 305 active+clean; 630 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.0 KiB/s wr, 345 op/s
Oct 02 12:54:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:53.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:54:53 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:54:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:54:53 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:54:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2871: 305 pgs: 305 active+clean; 630 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.0 KiB/s wr, 345 op/s
Oct 02 12:54:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e383 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e383 do_prune osdmap full prune enabled
Oct 02 12:54:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e384 e384: 3 total, 3 up, 3 in
Oct 02 12:54:54 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e384: 3 total, 3 up, 3 in
Oct 02 12:54:54 compute-0 nova_compute[257802]: 2025-10-02 12:54:54.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:54:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:54:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:54:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:54:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009173108016936535 of space, bias 1.0, pg target 2.7519324050809604 quantized to 32 (current 32)
Oct 02 12:54:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:54:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6442640720965941 quantized to 32 (current 32)
Oct 02 12:54:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:54:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:54:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:54:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00486990798901917 of space, bias 1.0, pg target 1.4512325807277127 quantized to 32 (current 32)
Oct 02 12:54:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:54:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 02 12:54:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:54:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:54:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:54:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.0002699042085427136 quantized to 32 (current 32)
Oct 02 12:54:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:54:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Oct 02 12:54:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:54:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:54:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:54:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Oct 02 12:54:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:54:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:54.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:54:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:54:54 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:54:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:54:54 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:54:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:54:54 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:54:54 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 8e767ab6-6574-4005-95be-f92a0913b642 does not exist
Oct 02 12:54:54 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev ee01a30b-9695-43fc-94d5-86b5e0fe1897 does not exist
Oct 02 12:54:54 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev cb32148e-d6e6-41cb-b992-e1b950ac7fb6 does not exist
Oct 02 12:54:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:54:54 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:54:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:54:54 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:54:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:54:54 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:54:54 compute-0 sudo[375717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:54 compute-0 sudo[375717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:54 compute-0 sudo[375717]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:54 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:54:54 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:54:54 compute-0 ceph-mon[73607]: pgmap v2871: 305 pgs: 305 active+clean; 630 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.0 KiB/s wr, 345 op/s
Oct 02 12:54:54 compute-0 ceph-mon[73607]: osdmap e384: 3 total, 3 up, 3 in
Oct 02 12:54:54 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:54:54 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:54:54 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:54:54 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:54:54 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:54:54 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:54:54 compute-0 sudo[375742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:54:54 compute-0 sudo[375742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:54 compute-0 sudo[375742]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:55 compute-0 sudo[375767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:55 compute-0 sudo[375767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:55 compute-0 sudo[375767]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:55 compute-0 sudo[375792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:54:55 compute-0 sudo[375792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:55 compute-0 podman[375856]: 2025-10-02 12:54:55.388412582 +0000 UTC m=+0.043037328 container create 82f9d728d496b8f095abe90ef1229dff61e095746a200cf7d1956a1d5ab3cf3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_easley, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 12:54:55 compute-0 systemd[1]: Started libpod-conmon-82f9d728d496b8f095abe90ef1229dff61e095746a200cf7d1956a1d5ab3cf3c.scope.
Oct 02 12:54:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:54:55 compute-0 podman[375856]: 2025-10-02 12:54:55.457293874 +0000 UTC m=+0.111918650 container init 82f9d728d496b8f095abe90ef1229dff61e095746a200cf7d1956a1d5ab3cf3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 12:54:55 compute-0 podman[375856]: 2025-10-02 12:54:55.464861091 +0000 UTC m=+0.119485837 container start 82f9d728d496b8f095abe90ef1229dff61e095746a200cf7d1956a1d5ab3cf3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_easley, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 12:54:55 compute-0 podman[375856]: 2025-10-02 12:54:55.371458586 +0000 UTC m=+0.026083362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:54:55 compute-0 podman[375856]: 2025-10-02 12:54:55.467888595 +0000 UTC m=+0.122513371 container attach 82f9d728d496b8f095abe90ef1229dff61e095746a200cf7d1956a1d5ab3cf3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 12:54:55 compute-0 admiring_easley[375872]: 167 167
Oct 02 12:54:55 compute-0 systemd[1]: libpod-82f9d728d496b8f095abe90ef1229dff61e095746a200cf7d1956a1d5ab3cf3c.scope: Deactivated successfully.
Oct 02 12:54:55 compute-0 podman[375856]: 2025-10-02 12:54:55.482103084 +0000 UTC m=+0.136727840 container died 82f9d728d496b8f095abe90ef1229dff61e095746a200cf7d1956a1d5ab3cf3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_easley, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:54:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6b3458192600b5a1e6a8b0f50fcca6365ce41f3193ea6abfb6395241764d75f-merged.mount: Deactivated successfully.
Oct 02 12:54:55 compute-0 podman[375856]: 2025-10-02 12:54:55.523139242 +0000 UTC m=+0.177763998 container remove 82f9d728d496b8f095abe90ef1229dff61e095746a200cf7d1956a1d5ab3cf3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 12:54:55 compute-0 systemd[1]: libpod-conmon-82f9d728d496b8f095abe90ef1229dff61e095746a200cf7d1956a1d5ab3cf3c.scope: Deactivated successfully.
Oct 02 12:54:55 compute-0 podman[375896]: 2025-10-02 12:54:55.687699275 +0000 UTC m=+0.037866801 container create 882e5067e48fd8f071401bcca3c8220a8b69092651da2d785ffc5e755a6bde22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_elion, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 12:54:55 compute-0 systemd[1]: Started libpod-conmon-882e5067e48fd8f071401bcca3c8220a8b69092651da2d785ffc5e755a6bde22.scope.
Oct 02 12:54:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:54:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a80f6be8eac09b302710f2d4fdbffa45e594a9ef5131b411dedfb318daff099/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:54:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a80f6be8eac09b302710f2d4fdbffa45e594a9ef5131b411dedfb318daff099/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:54:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a80f6be8eac09b302710f2d4fdbffa45e594a9ef5131b411dedfb318daff099/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:54:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a80f6be8eac09b302710f2d4fdbffa45e594a9ef5131b411dedfb318daff099/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:54:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a80f6be8eac09b302710f2d4fdbffa45e594a9ef5131b411dedfb318daff099/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:54:55 compute-0 podman[375896]: 2025-10-02 12:54:55.672231435 +0000 UTC m=+0.022399001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:54:55 compute-0 podman[375896]: 2025-10-02 12:54:55.772308503 +0000 UTC m=+0.122476049 container init 882e5067e48fd8f071401bcca3c8220a8b69092651da2d785ffc5e755a6bde22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_elion, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:54:55 compute-0 podman[375896]: 2025-10-02 12:54:55.780041213 +0000 UTC m=+0.130208749 container start 882e5067e48fd8f071401bcca3c8220a8b69092651da2d785ffc5e755a6bde22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct 02 12:54:55 compute-0 podman[375896]: 2025-10-02 12:54:55.787392443 +0000 UTC m=+0.137559979 container attach 882e5067e48fd8f071401bcca3c8220a8b69092651da2d785ffc5e755a6bde22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_elion, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:54:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:55.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3019123386' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:54:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3019123386' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:54:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/835688787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2873: 305 pgs: 305 active+clean; 560 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 550 KiB/s rd, 1.7 MiB/s wr, 145 op/s
Oct 02 12:54:56 compute-0 xenodochial_elion[375912]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:54:56 compute-0 xenodochial_elion[375912]: --> relative data size: 1.0
Oct 02 12:54:56 compute-0 xenodochial_elion[375912]: --> All data devices are unavailable
Oct 02 12:54:56 compute-0 systemd[1]: libpod-882e5067e48fd8f071401bcca3c8220a8b69092651da2d785ffc5e755a6bde22.scope: Deactivated successfully.
Oct 02 12:54:56 compute-0 podman[375896]: 2025-10-02 12:54:56.570794937 +0000 UTC m=+0.920962473 container died 882e5067e48fd8f071401bcca3c8220a8b69092651da2d785ffc5e755a6bde22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 12:54:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a80f6be8eac09b302710f2d4fdbffa45e594a9ef5131b411dedfb318daff099-merged.mount: Deactivated successfully.
Oct 02 12:54:56 compute-0 podman[375896]: 2025-10-02 12:54:56.631401707 +0000 UTC m=+0.981569243 container remove 882e5067e48fd8f071401bcca3c8220a8b69092651da2d785ffc5e755a6bde22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_elion, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 12:54:56 compute-0 nova_compute[257802]: 2025-10-02 12:54:56.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:56 compute-0 systemd[1]: libpod-conmon-882e5067e48fd8f071401bcca3c8220a8b69092651da2d785ffc5e755a6bde22.scope: Deactivated successfully.
Oct 02 12:54:56 compute-0 sudo[375792]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:56 compute-0 sudo[375943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:56 compute-0 sudo[375943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:56 compute-0 sudo[375943]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:56 compute-0 sudo[375968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:54:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:54:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:56.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:54:56 compute-0 sudo[375968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:56 compute-0 sudo[375968]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:56 compute-0 sudo[375993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:56 compute-0 sudo[375993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:56 compute-0 sudo[375993]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:56 compute-0 sudo[376018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:54:56 compute-0 sudo[376018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:56 compute-0 ceph-mon[73607]: pgmap v2873: 305 pgs: 305 active+clean; 560 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 550 KiB/s rd, 1.7 MiB/s wr, 145 op/s
Oct 02 12:54:57 compute-0 ovn_controller[148183]: 2025-10-02T12:54:57Z|00832|binding|INFO|Releasing lport 1fc80788-89b8-413a-b0b0-d36f1a11a2b1 from this chassis (sb_readonly=0)
Oct 02 12:54:57 compute-0 nova_compute[257802]: 2025-10-02 12:54:57.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:57 compute-0 podman[376083]: 2025-10-02 12:54:57.200774162 +0000 UTC m=+0.041009368 container create d03186816f197d36303375e17028f56efe115671501ae6106991d896aba410b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mirzakhani, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:54:57 compute-0 podman[376083]: 2025-10-02 12:54:57.182181686 +0000 UTC m=+0.022416912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:54:57 compute-0 systemd[1]: Started libpod-conmon-d03186816f197d36303375e17028f56efe115671501ae6106991d896aba410b9.scope.
Oct 02 12:54:57 compute-0 ovn_controller[148183]: 2025-10-02T12:54:57Z|00833|binding|INFO|Releasing lport 1fc80788-89b8-413a-b0b0-d36f1a11a2b1 from this chassis (sb_readonly=0)
Oct 02 12:54:57 compute-0 nova_compute[257802]: 2025-10-02 12:54:57.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:54:57 compute-0 podman[376083]: 2025-10-02 12:54:57.344079143 +0000 UTC m=+0.184314379 container init d03186816f197d36303375e17028f56efe115671501ae6106991d896aba410b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mirzakhani, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:54:57 compute-0 podman[376083]: 2025-10-02 12:54:57.353009052 +0000 UTC m=+0.193244268 container start d03186816f197d36303375e17028f56efe115671501ae6106991d896aba410b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mirzakhani, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Oct 02 12:54:57 compute-0 podman[376083]: 2025-10-02 12:54:57.356434747 +0000 UTC m=+0.196669973 container attach d03186816f197d36303375e17028f56efe115671501ae6106991d896aba410b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mirzakhani, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:54:57 compute-0 stoic_mirzakhani[376099]: 167 167
Oct 02 12:54:57 compute-0 systemd[1]: libpod-d03186816f197d36303375e17028f56efe115671501ae6106991d896aba410b9.scope: Deactivated successfully.
Oct 02 12:54:57 compute-0 podman[376083]: 2025-10-02 12:54:57.359365359 +0000 UTC m=+0.199600575 container died d03186816f197d36303375e17028f56efe115671501ae6106991d896aba410b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mirzakhani, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:54:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-54ee2b2463d6e37776d3ce9ab3aec3c169f53f762d09dc3c1c2c22b3e02846f2-merged.mount: Deactivated successfully.
Oct 02 12:54:57 compute-0 podman[376083]: 2025-10-02 12:54:57.398878539 +0000 UTC m=+0.239113745 container remove d03186816f197d36303375e17028f56efe115671501ae6106991d896aba410b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mirzakhani, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:54:57 compute-0 systemd[1]: libpod-conmon-d03186816f197d36303375e17028f56efe115671501ae6106991d896aba410b9.scope: Deactivated successfully.
Oct 02 12:54:57 compute-0 podman[376096]: 2025-10-02 12:54:57.414924013 +0000 UTC m=+0.112997126 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:54:57 compute-0 podman[376148]: 2025-10-02 12:54:57.592618178 +0000 UTC m=+0.042330630 container create 98a862667957063ec2f0ceceb5743552605ef06399ae1cd734ef460c4625925b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 12:54:57 compute-0 systemd[1]: Started libpod-conmon-98a862667957063ec2f0ceceb5743552605ef06399ae1cd734ef460c4625925b.scope.
Oct 02 12:54:57 compute-0 podman[376148]: 2025-10-02 12:54:57.574222266 +0000 UTC m=+0.023934738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:54:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c128e4481c1c8cfce4e00166e0781aba595c9dbd0d5443771f936af725b006e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c128e4481c1c8cfce4e00166e0781aba595c9dbd0d5443771f936af725b006e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c128e4481c1c8cfce4e00166e0781aba595c9dbd0d5443771f936af725b006e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c128e4481c1c8cfce4e00166e0781aba595c9dbd0d5443771f936af725b006e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:54:57 compute-0 podman[376148]: 2025-10-02 12:54:57.696423928 +0000 UTC m=+0.146136410 container init 98a862667957063ec2f0ceceb5743552605ef06399ae1cd734ef460c4625925b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 12:54:57 compute-0 podman[376148]: 2025-10-02 12:54:57.70585025 +0000 UTC m=+0.155562702 container start 98a862667957063ec2f0ceceb5743552605ef06399ae1cd734ef460c4625925b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bohr, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 12:54:57 compute-0 podman[376148]: 2025-10-02 12:54:57.709965921 +0000 UTC m=+0.159678403 container attach 98a862667957063ec2f0ceceb5743552605ef06399ae1cd734ef460c4625925b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bohr, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:54:57 compute-0 sudo[376169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:57 compute-0 sudo[376169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:57 compute-0 sudo[376169]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:57 compute-0 sudo[376194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:57 compute-0 sudo[376194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:57 compute-0 sudo[376194]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:57.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2874: 305 pgs: 305 active+clean; 560 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 525 KiB/s rd, 1.6 MiB/s wr, 138 op/s
Oct 02 12:54:58 compute-0 priceless_bohr[376164]: {
Oct 02 12:54:58 compute-0 priceless_bohr[376164]:     "1": [
Oct 02 12:54:58 compute-0 priceless_bohr[376164]:         {
Oct 02 12:54:58 compute-0 priceless_bohr[376164]:             "devices": [
Oct 02 12:54:58 compute-0 priceless_bohr[376164]:                 "/dev/loop3"
Oct 02 12:54:58 compute-0 priceless_bohr[376164]:             ],
Oct 02 12:54:58 compute-0 priceless_bohr[376164]:             "lv_name": "ceph_lv0",
Oct 02 12:54:58 compute-0 priceless_bohr[376164]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:54:58 compute-0 priceless_bohr[376164]:             "lv_size": "7511998464",
Oct 02 12:54:58 compute-0 priceless_bohr[376164]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:54:58 compute-0 priceless_bohr[376164]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:54:58 compute-0 priceless_bohr[376164]:             "name": "ceph_lv0",
Oct 02 12:54:58 compute-0 priceless_bohr[376164]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:54:58 compute-0 priceless_bohr[376164]:             "tags": {
Oct 02 12:54:58 compute-0 priceless_bohr[376164]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:54:58 compute-0 priceless_bohr[376164]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:54:58 compute-0 priceless_bohr[376164]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:54:58 compute-0 priceless_bohr[376164]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:54:58 compute-0 priceless_bohr[376164]:                 "ceph.cluster_name": "ceph",
Oct 02 12:54:58 compute-0 priceless_bohr[376164]:                 "ceph.crush_device_class": "",
Oct 02 12:54:58 compute-0 priceless_bohr[376164]:                 "ceph.encrypted": "0",
Oct 02 12:54:58 compute-0 priceless_bohr[376164]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:54:58 compute-0 priceless_bohr[376164]:                 "ceph.osd_id": "1",
Oct 02 12:54:58 compute-0 priceless_bohr[376164]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:54:58 compute-0 priceless_bohr[376164]:                 "ceph.type": "block",
Oct 02 12:54:58 compute-0 priceless_bohr[376164]:                 "ceph.vdo": "0"
Oct 02 12:54:58 compute-0 priceless_bohr[376164]:             },
Oct 02 12:54:58 compute-0 priceless_bohr[376164]:             "type": "block",
Oct 02 12:54:58 compute-0 priceless_bohr[376164]:             "vg_name": "ceph_vg0"
Oct 02 12:54:58 compute-0 priceless_bohr[376164]:         }
Oct 02 12:54:58 compute-0 priceless_bohr[376164]:     ]
Oct 02 12:54:58 compute-0 priceless_bohr[376164]: }
Oct 02 12:54:58 compute-0 systemd[1]: libpod-98a862667957063ec2f0ceceb5743552605ef06399ae1cd734ef460c4625925b.scope: Deactivated successfully.
Oct 02 12:54:58 compute-0 podman[376148]: 2025-10-02 12:54:58.46059867 +0000 UTC m=+0.910311142 container died 98a862667957063ec2f0ceceb5743552605ef06399ae1cd734ef460c4625925b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 12:54:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c128e4481c1c8cfce4e00166e0781aba595c9dbd0d5443771f936af725b006e-merged.mount: Deactivated successfully.
Oct 02 12:54:58 compute-0 podman[376148]: 2025-10-02 12:54:58.656359919 +0000 UTC m=+1.106072371 container remove 98a862667957063ec2f0ceceb5743552605ef06399ae1cd734ef460c4625925b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 12:54:58 compute-0 systemd[1]: libpod-conmon-98a862667957063ec2f0ceceb5743552605ef06399ae1cd734ef460c4625925b.scope: Deactivated successfully.
Oct 02 12:54:58 compute-0 sudo[376018]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:58 compute-0 sudo[376237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:58 compute-0 sudo[376237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:58 compute-0 sudo[376237]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:54:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:58.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:54:58 compute-0 sudo[376262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:54:58 compute-0 sudo[376262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:58 compute-0 sudo[376262]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:58 compute-0 sudo[376287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:58 compute-0 sudo[376287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:58 compute-0 sudo[376287]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:58 compute-0 sudo[376312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:54:58 compute-0 sudo[376312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:59 compute-0 podman[376375]: 2025-10-02 12:54:59.213419203 +0000 UTC m=+0.020763211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:54:59 compute-0 podman[376375]: 2025-10-02 12:54:59.39362304 +0000 UTC m=+0.200967018 container create c4d6aaa3684960290ca90f06e15023ec455769754c0e3be7b558e4aeb4621ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_babbage, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 12:54:59 compute-0 systemd[1]: Started libpod-conmon-c4d6aaa3684960290ca90f06e15023ec455769754c0e3be7b558e4aeb4621ced.scope.
Oct 02 12:54:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:54:59 compute-0 podman[376375]: 2025-10-02 12:54:59.469503894 +0000 UTC m=+0.276847892 container init c4d6aaa3684960290ca90f06e15023ec455769754c0e3be7b558e4aeb4621ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_babbage, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 12:54:59 compute-0 podman[376375]: 2025-10-02 12:54:59.476648799 +0000 UTC m=+0.283992767 container start c4d6aaa3684960290ca90f06e15023ec455769754c0e3be7b558e4aeb4621ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 12:54:59 compute-0 podman[376375]: 2025-10-02 12:54:59.480619307 +0000 UTC m=+0.287963305 container attach c4d6aaa3684960290ca90f06e15023ec455769754c0e3be7b558e4aeb4621ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_babbage, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:54:59 compute-0 great_babbage[376391]: 167 167
Oct 02 12:54:59 compute-0 systemd[1]: libpod-c4d6aaa3684960290ca90f06e15023ec455769754c0e3be7b558e4aeb4621ced.scope: Deactivated successfully.
Oct 02 12:54:59 compute-0 podman[376375]: 2025-10-02 12:54:59.483520538 +0000 UTC m=+0.290864526 container died c4d6aaa3684960290ca90f06e15023ec455769754c0e3be7b558e4aeb4621ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:54:59 compute-0 ceph-mon[73607]: pgmap v2874: 305 pgs: 305 active+clean; 560 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 525 KiB/s rd, 1.6 MiB/s wr, 138 op/s
Oct 02 12:54:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1507127177' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:54:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1507127177' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:54:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4d2aee62c65b153cb9383abbdd5491a93aacbd3cf4cf534834b34d04f9db041-merged.mount: Deactivated successfully.
Oct 02 12:54:59 compute-0 podman[376375]: 2025-10-02 12:54:59.525254374 +0000 UTC m=+0.332598352 container remove c4d6aaa3684960290ca90f06e15023ec455769754c0e3be7b558e4aeb4621ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_babbage, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:54:59 compute-0 systemd[1]: libpod-conmon-c4d6aaa3684960290ca90f06e15023ec455769754c0e3be7b558e4aeb4621ced.scope: Deactivated successfully.
Oct 02 12:54:59 compute-0 nova_compute[257802]: 2025-10-02 12:54:59.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:59 compute-0 podman[376415]: 2025-10-02 12:54:59.702805494 +0000 UTC m=+0.044728080 container create c3ea0ab1527cf72ea9823226dbb877114518f1642d39077ed8df6165ae4f3c67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:54:59 compute-0 systemd[1]: Started libpod-conmon-c3ea0ab1527cf72ea9823226dbb877114518f1642d39077ed8df6165ae4f3c67.scope.
Oct 02 12:54:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:54:59 compute-0 podman[376415]: 2025-10-02 12:54:59.683303515 +0000 UTC m=+0.025226131 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:54:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18bcaa3245c8822daf880d9758232277cb105f16998fec929832c931583d475/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:54:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18bcaa3245c8822daf880d9758232277cb105f16998fec929832c931583d475/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:54:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18bcaa3245c8822daf880d9758232277cb105f16998fec929832c931583d475/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:54:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18bcaa3245c8822daf880d9758232277cb105f16998fec929832c931583d475/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:54:59 compute-0 podman[376415]: 2025-10-02 12:54:59.802104994 +0000 UTC m=+0.144027620 container init c3ea0ab1527cf72ea9823226dbb877114518f1642d39077ed8df6165ae4f3c67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kare, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct 02 12:54:59 compute-0 podman[376415]: 2025-10-02 12:54:59.808622174 +0000 UTC m=+0.150544770 container start c3ea0ab1527cf72ea9823226dbb877114518f1642d39077ed8df6165ae4f3c67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kare, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 12:54:59 compute-0 podman[376415]: 2025-10-02 12:54:59.812011597 +0000 UTC m=+0.153934213 container attach c3ea0ab1527cf72ea9823226dbb877114518f1642d39077ed8df6165ae4f3c67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kare, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:54:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:54:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:59.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2875: 305 pgs: 305 active+clean; 582 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 430 KiB/s rd, 2.6 MiB/s wr, 131 op/s
Oct 02 12:55:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e384 do_prune osdmap full prune enabled
Oct 02 12:55:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e385 e385: 3 total, 3 up, 3 in
Oct 02 12:55:00 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e385: 3 total, 3 up, 3 in
Oct 02 12:55:00 compute-0 nifty_kare[376431]: {
Oct 02 12:55:00 compute-0 nifty_kare[376431]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:55:00 compute-0 nifty_kare[376431]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:55:00 compute-0 nifty_kare[376431]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:55:00 compute-0 nifty_kare[376431]:         "osd_id": 1,
Oct 02 12:55:00 compute-0 nifty_kare[376431]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:55:00 compute-0 nifty_kare[376431]:         "type": "bluestore"
Oct 02 12:55:00 compute-0 nifty_kare[376431]:     }
Oct 02 12:55:00 compute-0 nifty_kare[376431]: }
Oct 02 12:55:00 compute-0 systemd[1]: libpod-c3ea0ab1527cf72ea9823226dbb877114518f1642d39077ed8df6165ae4f3c67.scope: Deactivated successfully.
Oct 02 12:55:00 compute-0 conmon[376431]: conmon c3ea0ab1527cf72ea982 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c3ea0ab1527cf72ea9823226dbb877114518f1642d39077ed8df6165ae4f3c67.scope/container/memory.events
Oct 02 12:55:00 compute-0 podman[376415]: 2025-10-02 12:55:00.671775257 +0000 UTC m=+1.013697853 container died c3ea0ab1527cf72ea9823226dbb877114518f1642d39077ed8df6165ae4f3c67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kare, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 12:55:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-d18bcaa3245c8822daf880d9758232277cb105f16998fec929832c931583d475-merged.mount: Deactivated successfully.
Oct 02 12:55:00 compute-0 podman[376415]: 2025-10-02 12:55:00.72646099 +0000 UTC m=+1.068383586 container remove c3ea0ab1527cf72ea9823226dbb877114518f1642d39077ed8df6165ae4f3c67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct 02 12:55:00 compute-0 systemd[1]: libpod-conmon-c3ea0ab1527cf72ea9823226dbb877114518f1642d39077ed8df6165ae4f3c67.scope: Deactivated successfully.
Oct 02 12:55:00 compute-0 sudo[376312]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:55:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:55:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:00.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:55:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:55:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:55:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:55:00 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 1a4c639e-5e9f-42ea-b2a3-2f8923157520 does not exist
Oct 02 12:55:00 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev e0d98b46-0da0-4dc3-a485-e99c6c252751 does not exist
Oct 02 12:55:00 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev f890898b-fb49-47cf-bf0f-4f409ede7be0 does not exist
Oct 02 12:55:00 compute-0 sudo[376468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:55:00 compute-0 sudo[376468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:00 compute-0 sudo[376468]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:00 compute-0 sudo[376493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:55:00 compute-0 sudo[376493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:00 compute-0 sudo[376493]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:01 compute-0 nova_compute[257802]: 2025-10-02 12:55:01.508 2 DEBUG oslo_concurrency.lockutils [None req-a93122a7-0b09-4b6f-8c10-b2ca99346a0c 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquiring lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:01 compute-0 nova_compute[257802]: 2025-10-02 12:55:01.509 2 DEBUG oslo_concurrency.lockutils [None req-a93122a7-0b09-4b6f-8c10-b2ca99346a0c 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:01 compute-0 nova_compute[257802]: 2025-10-02 12:55:01.509 2 DEBUG oslo_concurrency.lockutils [None req-a93122a7-0b09-4b6f-8c10-b2ca99346a0c 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquiring lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:01 compute-0 nova_compute[257802]: 2025-10-02 12:55:01.509 2 DEBUG oslo_concurrency.lockutils [None req-a93122a7-0b09-4b6f-8c10-b2ca99346a0c 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:01 compute-0 nova_compute[257802]: 2025-10-02 12:55:01.509 2 DEBUG oslo_concurrency.lockutils [None req-a93122a7-0b09-4b6f-8c10-b2ca99346a0c 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:01 compute-0 nova_compute[257802]: 2025-10-02 12:55:01.510 2 INFO nova.compute.manager [None req-a93122a7-0b09-4b6f-8c10-b2ca99346a0c 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Terminating instance
Oct 02 12:55:01 compute-0 nova_compute[257802]: 2025-10-02 12:55:01.511 2 DEBUG nova.compute.manager [None req-a93122a7-0b09-4b6f-8c10-b2ca99346a0c 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:55:01 compute-0 ceph-mon[73607]: pgmap v2875: 305 pgs: 305 active+clean; 582 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 430 KiB/s rd, 2.6 MiB/s wr, 131 op/s
Oct 02 12:55:01 compute-0 ceph-mon[73607]: osdmap e385: 3 total, 3 up, 3 in
Oct 02 12:55:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:55:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:55:01 compute-0 kernel: tap62f0b94c-3e (unregistering): left promiscuous mode
Oct 02 12:55:01 compute-0 NetworkManager[44987]: <info>  [1759409701.5672] device (tap62f0b94c-3e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:55:01 compute-0 nova_compute[257802]: 2025-10-02 12:55:01.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:01 compute-0 ovn_controller[148183]: 2025-10-02T12:55:01Z|00834|binding|INFO|Releasing lport 62f0b94c-3e74-4a7d-b13e-8178d5dbf737 from this chassis (sb_readonly=0)
Oct 02 12:55:01 compute-0 ovn_controller[148183]: 2025-10-02T12:55:01Z|00835|binding|INFO|Setting lport 62f0b94c-3e74-4a7d-b13e-8178d5dbf737 down in Southbound
Oct 02 12:55:01 compute-0 ovn_controller[148183]: 2025-10-02T12:55:01Z|00836|binding|INFO|Removing iface tap62f0b94c-3e ovn-installed in OVS
Oct 02 12:55:01 compute-0 nova_compute[257802]: 2025-10-02 12:55:01.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:01 compute-0 nova_compute[257802]: 2025-10-02 12:55:01.584 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409686.582878, 42da5a56-55e8-4a1a-a524-24555a4bd3ec => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:55:01 compute-0 nova_compute[257802]: 2025-10-02 12:55:01.584 2 INFO nova.compute.manager [-] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] VM Stopped (Lifecycle Event)
Oct 02 12:55:01 compute-0 nova_compute[257802]: 2025-10-02 12:55:01.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:01.619 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:56:b9 10.100.0.5'], port_security=['fa:16:3e:26:56:b9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '8e2c1007-1d07-434c-8a22-6cb98d903d3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48e4ff16-1388-40c7-a27a-83a3b4869808', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a442bc513e14406b73e96e70396e6c3', 'neutron:revision_number': '8', 'neutron:security_group_ids': '55b83463-a692-41fe-aa59-8c6f6a3385f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd2acef0-eb35-44b4-ad52-c0266ea4784a, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=62f0b94c-3e74-4a7d-b13e-8178d5dbf737) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:55:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:01.620 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 62f0b94c-3e74-4a7d-b13e-8178d5dbf737 in datapath 48e4ff16-1388-40c7-a27a-83a3b4869808 unbound from our chassis
Oct 02 12:55:01 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000b1.scope: Deactivated successfully.
Oct 02 12:55:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:01.621 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 48e4ff16-1388-40c7-a27a-83a3b4869808, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:55:01 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000b1.scope: Consumed 23.796s CPU time.
Oct 02 12:55:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:01.623 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cf291163-3e14-47c3-a31c-42dc6b01308d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:01.624 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808 namespace which is not needed anymore
Oct 02 12:55:01 compute-0 systemd-machined[211836]: Machine qemu-88-instance-000000b1 terminated.
Oct 02 12:55:01 compute-0 nova_compute[257802]: 2025-10-02 12:55:01.640 2 DEBUG nova.compute.manager [None req-44d868ac-a9ef-42d2-858a-7327aa7ba0b3 - - - - - -] [instance: 42da5a56-55e8-4a1a-a524-24555a4bd3ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:55:01 compute-0 nova_compute[257802]: 2025-10-02 12:55:01.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:01 compute-0 nova_compute[257802]: 2025-10-02 12:55:01.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:01 compute-0 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[369965]: [NOTICE]   (369969) : haproxy version is 2.8.14-c23fe91
Oct 02 12:55:01 compute-0 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[369965]: [NOTICE]   (369969) : path to executable is /usr/sbin/haproxy
Oct 02 12:55:01 compute-0 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[369965]: [WARNING]  (369969) : Exiting Master process...
Oct 02 12:55:01 compute-0 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[369965]: [WARNING]  (369969) : Exiting Master process...
Oct 02 12:55:01 compute-0 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[369965]: [ALERT]    (369969) : Current worker (369971) exited with code 143 (Terminated)
Oct 02 12:55:01 compute-0 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[369965]: [WARNING]  (369969) : All workers exited. Exiting... (0)
Oct 02 12:55:01 compute-0 systemd[1]: libpod-fc6ad3cd45ba156aa24d533280b5d667c4267362b9e00fdc373ed050d69d394a.scope: Deactivated successfully.
Oct 02 12:55:01 compute-0 podman[376542]: 2025-10-02 12:55:01.80478801 +0000 UTC m=+0.090931635 container died fc6ad3cd45ba156aa24d533280b5d667c4267362b9e00fdc373ed050d69d394a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:55:01 compute-0 nova_compute[257802]: 2025-10-02 12:55:01.804 2 INFO nova.virt.libvirt.driver [-] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Instance destroyed successfully.
Oct 02 12:55:01 compute-0 nova_compute[257802]: 2025-10-02 12:55:01.804 2 DEBUG nova.objects.instance [None req-a93122a7-0b09-4b6f-8c10-b2ca99346a0c 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lazy-loading 'resources' on Instance uuid 8e2c1007-1d07-434c-8a22-6cb98d903d3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:55:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fc6ad3cd45ba156aa24d533280b5d667c4267362b9e00fdc373ed050d69d394a-userdata-shm.mount: Deactivated successfully.
Oct 02 12:55:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-1959efda180910425d04d38eb31704bb79e95b41022b30417f918b12520e8558-merged.mount: Deactivated successfully.
Oct 02 12:55:01 compute-0 podman[376542]: 2025-10-02 12:55:01.844095455 +0000 UTC m=+0.130239080 container cleanup fc6ad3cd45ba156aa24d533280b5d667c4267362b9e00fdc373ed050d69d394a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:55:01 compute-0 systemd[1]: libpod-conmon-fc6ad3cd45ba156aa24d533280b5d667c4267362b9e00fdc373ed050d69d394a.scope: Deactivated successfully.
Oct 02 12:55:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:01.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:01 compute-0 podman[376580]: 2025-10-02 12:55:01.911779898 +0000 UTC m=+0.046394931 container remove fc6ad3cd45ba156aa24d533280b5d667c4267362b9e00fdc373ed050d69d394a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:55:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:01.920 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[963bb552-ea20-449a-9cb1-145158f62d38]: (4, ('Thu Oct  2 12:55:01 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808 (fc6ad3cd45ba156aa24d533280b5d667c4267362b9e00fdc373ed050d69d394a)\nfc6ad3cd45ba156aa24d533280b5d667c4267362b9e00fdc373ed050d69d394a\nThu Oct  2 12:55:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808 (fc6ad3cd45ba156aa24d533280b5d667c4267362b9e00fdc373ed050d69d394a)\nfc6ad3cd45ba156aa24d533280b5d667c4267362b9e00fdc373ed050d69d394a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:01.924 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[760dcd63-e5e4-45de-9241-dc74f91fd170]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:01.925 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48e4ff16-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:01 compute-0 nova_compute[257802]: 2025-10-02 12:55:01.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:01 compute-0 kernel: tap48e4ff16-10: left promiscuous mode
Oct 02 12:55:01 compute-0 nova_compute[257802]: 2025-10-02 12:55:01.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:01.947 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3b42322e-f210-4bfb-89c3-c7f382d8b414]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:01.970 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[bcc4e54b-7fd6-43ed-999e-ccabe502e1ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:01.972 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7c7e48e3-b57c-4b8d-9074-29df73492683]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:01.991 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9946d31e-841c-42ba-847b-af5a2006325a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752332, 'reachable_time': 30551, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376599, 'error': None, 'target': 'ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:01.994 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:55:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:01.994 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[86dbe81f-3045-4a6f-9723-20a9b9a15413]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:01 compute-0 systemd[1]: run-netns-ovnmeta\x2d48e4ff16\x2d1388\x2d40c7\x2da27a\x2d83a3b4869808.mount: Deactivated successfully.
Oct 02 12:55:02 compute-0 nova_compute[257802]: 2025-10-02 12:55:02.094 2 DEBUG nova.virt.libvirt.vif [None req-a93122a7-0b09-4b6f-8c10-b2ca99346a0c 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:49:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1972240086',display_name='tempest-ServerStableDeviceRescueTest-server-1972240086',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1972240086',id=177,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:50:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a442bc513e14406b73e96e70396e6c3',ramdisk_id='',reservation_id='r-2z5a0kx3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='
virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-454391960',owner_user_name='tempest-ServerStableDeviceRescueTest-454391960-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:51:05Z,user_data=None,user_id='6785ffe5d6554514b4ed9fd47665eca0',uuid=8e2c1007-1d07-434c-8a22-6cb98d903d3c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "address": "fa:16:3e:26:56:b9", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f0b94c-3e", "ovs_interfaceid": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:55:02 compute-0 nova_compute[257802]: 2025-10-02 12:55:02.094 2 DEBUG nova.network.os_vif_util [None req-a93122a7-0b09-4b6f-8c10-b2ca99346a0c 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Converting VIF {"id": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "address": "fa:16:3e:26:56:b9", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f0b94c-3e", "ovs_interfaceid": "62f0b94c-3e74-4a7d-b13e-8178d5dbf737", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:55:02 compute-0 nova_compute[257802]: 2025-10-02 12:55:02.095 2 DEBUG nova.network.os_vif_util [None req-a93122a7-0b09-4b6f-8c10-b2ca99346a0c 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:56:b9,bridge_name='br-int',has_traffic_filtering=True,id=62f0b94c-3e74-4a7d-b13e-8178d5dbf737,network=Network(48e4ff16-1388-40c7-a27a-83a3b4869808),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62f0b94c-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:55:02 compute-0 nova_compute[257802]: 2025-10-02 12:55:02.095 2 DEBUG os_vif [None req-a93122a7-0b09-4b6f-8c10-b2ca99346a0c 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:56:b9,bridge_name='br-int',has_traffic_filtering=True,id=62f0b94c-3e74-4a7d-b13e-8178d5dbf737,network=Network(48e4ff16-1388-40c7-a27a-83a3b4869808),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62f0b94c-3e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:55:02 compute-0 nova_compute[257802]: 2025-10-02 12:55:02.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:02 compute-0 nova_compute[257802]: 2025-10-02 12:55:02.097 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62f0b94c-3e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:02 compute-0 nova_compute[257802]: 2025-10-02 12:55:02.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:02 compute-0 nova_compute[257802]: 2025-10-02 12:55:02.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:02 compute-0 nova_compute[257802]: 2025-10-02 12:55:02.101 2 INFO os_vif [None req-a93122a7-0b09-4b6f-8c10-b2ca99346a0c 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:56:b9,bridge_name='br-int',has_traffic_filtering=True,id=62f0b94c-3e74-4a7d-b13e-8178d5dbf737,network=Network(48e4ff16-1388-40c7-a27a-83a3b4869808),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62f0b94c-3e')
Oct 02 12:55:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2877: 305 pgs: 305 active+clean; 582 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 537 KiB/s rd, 3.2 MiB/s wr, 164 op/s
Oct 02 12:55:02 compute-0 nova_compute[257802]: 2025-10-02 12:55:02.608 2 INFO nova.virt.libvirt.driver [None req-a93122a7-0b09-4b6f-8c10-b2ca99346a0c 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Deleting instance files /var/lib/nova/instances/8e2c1007-1d07-434c-8a22-6cb98d903d3c_del
Oct 02 12:55:02 compute-0 nova_compute[257802]: 2025-10-02 12:55:02.608 2 INFO nova.virt.libvirt.driver [None req-a93122a7-0b09-4b6f-8c10-b2ca99346a0c 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Deletion of /var/lib/nova/instances/8e2c1007-1d07-434c-8a22-6cb98d903d3c_del complete
Oct 02 12:55:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:02.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:02 compute-0 nova_compute[257802]: 2025-10-02 12:55:02.785 2 INFO nova.compute.manager [None req-a93122a7-0b09-4b6f-8c10-b2ca99346a0c 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Took 1.27 seconds to destroy the instance on the hypervisor.
Oct 02 12:55:02 compute-0 nova_compute[257802]: 2025-10-02 12:55:02.785 2 DEBUG oslo.service.loopingcall [None req-a93122a7-0b09-4b6f-8c10-b2ca99346a0c 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:55:02 compute-0 nova_compute[257802]: 2025-10-02 12:55:02.786 2 DEBUG nova.compute.manager [-] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:55:02 compute-0 nova_compute[257802]: 2025-10-02 12:55:02.786 2 DEBUG nova.network.neutron [-] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:55:02 compute-0 nova_compute[257802]: 2025-10-02 12:55:02.953 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:55:02 compute-0 nova_compute[257802]: 2025-10-02 12:55:02.987 2 WARNING nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.
Oct 02 12:55:02 compute-0 nova_compute[257802]: 2025-10-02 12:55:02.987 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Triggering sync for uuid 8e2c1007-1d07-434c-8a22-6cb98d903d3c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 12:55:02 compute-0 nova_compute[257802]: 2025-10-02 12:55:02.987 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:03 compute-0 nova_compute[257802]: 2025-10-02 12:55:03.329 2 DEBUG nova.compute.manager [req-ac4d7fc1-b0f4-4db9-bf37-d8796186a24f req-e9d6c31f-0fa3-478e-8d1e-3081527c6c1c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Received event network-vif-unplugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:03 compute-0 nova_compute[257802]: 2025-10-02 12:55:03.329 2 DEBUG oslo_concurrency.lockutils [req-ac4d7fc1-b0f4-4db9-bf37-d8796186a24f req-e9d6c31f-0fa3-478e-8d1e-3081527c6c1c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:03 compute-0 nova_compute[257802]: 2025-10-02 12:55:03.329 2 DEBUG oslo_concurrency.lockutils [req-ac4d7fc1-b0f4-4db9-bf37-d8796186a24f req-e9d6c31f-0fa3-478e-8d1e-3081527c6c1c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:03 compute-0 nova_compute[257802]: 2025-10-02 12:55:03.329 2 DEBUG oslo_concurrency.lockutils [req-ac4d7fc1-b0f4-4db9-bf37-d8796186a24f req-e9d6c31f-0fa3-478e-8d1e-3081527c6c1c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:03 compute-0 nova_compute[257802]: 2025-10-02 12:55:03.330 2 DEBUG nova.compute.manager [req-ac4d7fc1-b0f4-4db9-bf37-d8796186a24f req-e9d6c31f-0fa3-478e-8d1e-3081527c6c1c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] No waiting events found dispatching network-vif-unplugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:55:03 compute-0 nova_compute[257802]: 2025-10-02 12:55:03.330 2 DEBUG nova.compute.manager [req-ac4d7fc1-b0f4-4db9-bf37-d8796186a24f req-e9d6c31f-0fa3-478e-8d1e-3081527c6c1c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Received event network-vif-unplugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:55:03 compute-0 ceph-mon[73607]: pgmap v2877: 305 pgs: 305 active+clean; 582 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 537 KiB/s rd, 3.2 MiB/s wr, 164 op/s
Oct 02 12:55:03 compute-0 nova_compute[257802]: 2025-10-02 12:55:03.873 2 DEBUG nova.network.neutron [-] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:55:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:03.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:03 compute-0 nova_compute[257802]: 2025-10-02 12:55:03.987 2 DEBUG nova.compute.manager [req-7260bc7d-0ddb-442e-a709-0238d00457a4 req-202df466-bf5a-4801-9daf-e3ffdce1e54a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Received event network-vif-deleted-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:03 compute-0 nova_compute[257802]: 2025-10-02 12:55:03.988 2 INFO nova.compute.manager [req-7260bc7d-0ddb-442e-a709-0238d00457a4 req-202df466-bf5a-4801-9daf-e3ffdce1e54a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Neutron deleted interface 62f0b94c-3e74-4a7d-b13e-8178d5dbf737; detaching it from the instance and deleting it from the info cache
Oct 02 12:55:03 compute-0 nova_compute[257802]: 2025-10-02 12:55:03.988 2 DEBUG nova.network.neutron [req-7260bc7d-0ddb-442e-a709-0238d00457a4 req-202df466-bf5a-4801-9daf-e3ffdce1e54a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:55:03 compute-0 nova_compute[257802]: 2025-10-02 12:55:03.991 2 INFO nova.compute.manager [-] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Took 1.21 seconds to deallocate network for instance.
Oct 02 12:55:04 compute-0 nova_compute[257802]: 2025-10-02 12:55:04.043 2 DEBUG nova.compute.manager [req-7260bc7d-0ddb-442e-a709-0238d00457a4 req-202df466-bf5a-4801-9daf-e3ffdce1e54a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Detach interface failed, port_id=62f0b94c-3e74-4a7d-b13e-8178d5dbf737, reason: Instance 8e2c1007-1d07-434c-8a22-6cb98d903d3c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:55:04 compute-0 nova_compute[257802]: 2025-10-02 12:55:04.079 2 DEBUG oslo_concurrency.lockutils [None req-a93122a7-0b09-4b6f-8c10-b2ca99346a0c 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:04 compute-0 nova_compute[257802]: 2025-10-02 12:55:04.080 2 DEBUG oslo_concurrency.lockutils [None req-a93122a7-0b09-4b6f-8c10-b2ca99346a0c 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:04 compute-0 nova_compute[257802]: 2025-10-02 12:55:04.158 2 DEBUG oslo_concurrency.processutils [None req-a93122a7-0b09-4b6f-8c10-b2ca99346a0c 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2878: 305 pgs: 305 active+clean; 582 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 436 KiB/s rd, 2.6 MiB/s wr, 129 op/s
Oct 02 12:55:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:04 compute-0 nova_compute[257802]: 2025-10-02 12:55:04.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:55:04 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3029227907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:04 compute-0 nova_compute[257802]: 2025-10-02 12:55:04.652 2 DEBUG oslo_concurrency.processutils [None req-a93122a7-0b09-4b6f-8c10-b2ca99346a0c 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:04 compute-0 nova_compute[257802]: 2025-10-02 12:55:04.659 2 DEBUG nova.compute.provider_tree [None req-a93122a7-0b09-4b6f-8c10-b2ca99346a0c 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:55:04 compute-0 nova_compute[257802]: 2025-10-02 12:55:04.677 2 DEBUG nova.scheduler.client.report [None req-a93122a7-0b09-4b6f-8c10-b2ca99346a0c 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:55:04 compute-0 nova_compute[257802]: 2025-10-02 12:55:04.700 2 DEBUG oslo_concurrency.lockutils [None req-a93122a7-0b09-4b6f-8c10-b2ca99346a0c 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:04 compute-0 nova_compute[257802]: 2025-10-02 12:55:04.736 2 INFO nova.scheduler.client.report [None req-a93122a7-0b09-4b6f-8c10-b2ca99346a0c 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Deleted allocations for instance 8e2c1007-1d07-434c-8a22-6cb98d903d3c
Oct 02 12:55:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:04.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:04 compute-0 nova_compute[257802]: 2025-10-02 12:55:04.822 2 DEBUG oslo_concurrency.lockutils [None req-a93122a7-0b09-4b6f-8c10-b2ca99346a0c 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.313s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:04 compute-0 nova_compute[257802]: 2025-10-02 12:55:04.823 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 1.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:04 compute-0 nova_compute[257802]: 2025-10-02 12:55:04.823 2 INFO nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] During sync_power_state the instance has a pending task (deleting). Skip.
Oct 02 12:55:04 compute-0 nova_compute[257802]: 2025-10-02 12:55:04.824 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:05 compute-0 nova_compute[257802]: 2025-10-02 12:55:05.439 2 DEBUG nova.compute.manager [req-aa0c1a21-b014-4f89-a5d7-54b3c1dcab01 req-ccf6cd91-2e9a-4694-8181-0f1791e0df95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Received event network-vif-plugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:05 compute-0 nova_compute[257802]: 2025-10-02 12:55:05.440 2 DEBUG oslo_concurrency.lockutils [req-aa0c1a21-b014-4f89-a5d7-54b3c1dcab01 req-ccf6cd91-2e9a-4694-8181-0f1791e0df95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:05 compute-0 nova_compute[257802]: 2025-10-02 12:55:05.440 2 DEBUG oslo_concurrency.lockutils [req-aa0c1a21-b014-4f89-a5d7-54b3c1dcab01 req-ccf6cd91-2e9a-4694-8181-0f1791e0df95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:05 compute-0 nova_compute[257802]: 2025-10-02 12:55:05.440 2 DEBUG oslo_concurrency.lockutils [req-aa0c1a21-b014-4f89-a5d7-54b3c1dcab01 req-ccf6cd91-2e9a-4694-8181-0f1791e0df95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8e2c1007-1d07-434c-8a22-6cb98d903d3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:05 compute-0 nova_compute[257802]: 2025-10-02 12:55:05.441 2 DEBUG nova.compute.manager [req-aa0c1a21-b014-4f89-a5d7-54b3c1dcab01 req-ccf6cd91-2e9a-4694-8181-0f1791e0df95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] No waiting events found dispatching network-vif-plugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:55:05 compute-0 nova_compute[257802]: 2025-10-02 12:55:05.441 2 WARNING nova.compute.manager [req-aa0c1a21-b014-4f89-a5d7-54b3c1dcab01 req-ccf6cd91-2e9a-4694-8181-0f1791e0df95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Received unexpected event network-vif-plugged-62f0b94c-3e74-4a7d-b13e-8178d5dbf737 for instance with vm_state deleted and task_state None.
Oct 02 12:55:05 compute-0 ceph-mon[73607]: pgmap v2878: 305 pgs: 305 active+clean; 582 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 436 KiB/s rd, 2.6 MiB/s wr, 129 op/s
Oct 02 12:55:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3029227907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:05.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2879: 305 pgs: 305 active+clean; 455 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 223 KiB/s rd, 1016 KiB/s wr, 115 op/s
Oct 02 12:55:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e385 do_prune osdmap full prune enabled
Oct 02 12:55:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e386 e386: 3 total, 3 up, 3 in
Oct 02 12:55:06 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/860078894' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:06 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e386: 3 total, 3 up, 3 in
Oct 02 12:55:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:06.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:07 compute-0 nova_compute[257802]: 2025-10-02 12:55:07.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:07 compute-0 ceph-mon[73607]: pgmap v2879: 305 pgs: 305 active+clean; 455 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 223 KiB/s rd, 1016 KiB/s wr, 115 op/s
Oct 02 12:55:07 compute-0 ceph-mon[73607]: osdmap e386: 3 total, 3 up, 3 in
Oct 02 12:55:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/889387731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:07.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2881: 305 pgs: 305 active+clean; 455 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 44 KiB/s rd, 22 KiB/s wr, 66 op/s
Oct 02 12:55:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:08.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e386 do_prune osdmap full prune enabled
Oct 02 12:55:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e387 e387: 3 total, 3 up, 3 in
Oct 02 12:55:09 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e387: 3 total, 3 up, 3 in
Oct 02 12:55:09 compute-0 nova_compute[257802]: 2025-10-02 12:55:09.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:09 compute-0 ceph-mon[73607]: pgmap v2881: 305 pgs: 305 active+clean; 455 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 44 KiB/s rd, 22 KiB/s wr, 66 op/s
Oct 02 12:55:09 compute-0 ceph-mon[73607]: osdmap e387: 3 total, 3 up, 3 in
Oct 02 12:55:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:09.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2883: 305 pgs: 305 active+clean; 327 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 96 KiB/s rd, 29 KiB/s wr, 141 op/s
Oct 02 12:55:10 compute-0 nova_compute[257802]: 2025-10-02 12:55:10.612 2 DEBUG oslo_concurrency.lockutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "4419a3ba-4449-4efc-aa2a-60535aeb8970" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:10 compute-0 nova_compute[257802]: 2025-10-02 12:55:10.612 2 DEBUG oslo_concurrency.lockutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "4419a3ba-4449-4efc-aa2a-60535aeb8970" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:55:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:10.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:55:10 compute-0 nova_compute[257802]: 2025-10-02 12:55:10.844 2 DEBUG nova.compute.manager [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:55:11 compute-0 nova_compute[257802]: 2025-10-02 12:55:11.233 2 DEBUG oslo_concurrency.lockutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:11 compute-0 nova_compute[257802]: 2025-10-02 12:55:11.233 2 DEBUG oslo_concurrency.lockutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:11 compute-0 nova_compute[257802]: 2025-10-02 12:55:11.239 2 DEBUG nova.virt.hardware [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:55:11 compute-0 nova_compute[257802]: 2025-10-02 12:55:11.240 2 INFO nova.compute.claims [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:55:11 compute-0 nova_compute[257802]: 2025-10-02 12:55:11.451 2 DEBUG oslo_concurrency.processutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:11 compute-0 ceph-mon[73607]: pgmap v2883: 305 pgs: 305 active+clean; 327 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 96 KiB/s rd, 29 KiB/s wr, 141 op/s
Oct 02 12:55:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1890250216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:55:11 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3529966221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:11 compute-0 nova_compute[257802]: 2025-10-02 12:55:11.895 2 DEBUG oslo_concurrency.processutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:11 compute-0 nova_compute[257802]: 2025-10-02 12:55:11.901 2 DEBUG nova.compute.provider_tree [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:55:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:55:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:11.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:55:11 compute-0 nova_compute[257802]: 2025-10-02 12:55:11.936 2 DEBUG nova.scheduler.client.report [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:55:12 compute-0 nova_compute[257802]: 2025-10-02 12:55:12.030 2 DEBUG oslo_concurrency.lockutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:12 compute-0 nova_compute[257802]: 2025-10-02 12:55:12.031 2 DEBUG nova.compute.manager [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:55:12 compute-0 nova_compute[257802]: 2025-10-02 12:55:12.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:12 compute-0 nova_compute[257802]: 2025-10-02 12:55:12.170 2 DEBUG nova.compute.manager [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:55:12 compute-0 nova_compute[257802]: 2025-10-02 12:55:12.171 2 DEBUG nova.network.neutron [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:55:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2884: 305 pgs: 305 active+clean; 327 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 96 KiB/s rd, 29 KiB/s wr, 141 op/s
Oct 02 12:55:12 compute-0 nova_compute[257802]: 2025-10-02 12:55:12.287 2 INFO nova.virt.libvirt.driver [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:55:12 compute-0 nova_compute[257802]: 2025-10-02 12:55:12.385 2 DEBUG nova.compute.manager [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:55:12 compute-0 nova_compute[257802]: 2025-10-02 12:55:12.403 2 DEBUG nova.policy [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '16730f38111542e58a05fb4deb2b3914', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5ade962c517a483dbfe4bb13386f0006', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:55:12 compute-0 nova_compute[257802]: 2025-10-02 12:55:12.542 2 DEBUG nova.compute.manager [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:55:12 compute-0 nova_compute[257802]: 2025-10-02 12:55:12.543 2 DEBUG nova.virt.libvirt.driver [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:55:12 compute-0 nova_compute[257802]: 2025-10-02 12:55:12.544 2 INFO nova.virt.libvirt.driver [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Creating image(s)
Oct 02 12:55:12 compute-0 nova_compute[257802]: 2025-10-02 12:55:12.572 2 DEBUG nova.storage.rbd_utils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 4419a3ba-4449-4efc-aa2a-60535aeb8970_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:12 compute-0 nova_compute[257802]: 2025-10-02 12:55:12.596 2 DEBUG nova.storage.rbd_utils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 4419a3ba-4449-4efc-aa2a-60535aeb8970_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:12 compute-0 nova_compute[257802]: 2025-10-02 12:55:12.624 2 DEBUG nova.storage.rbd_utils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 4419a3ba-4449-4efc-aa2a-60535aeb8970_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:12 compute-0 nova_compute[257802]: 2025-10-02 12:55:12.627 2 DEBUG oslo_concurrency.processutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:12 compute-0 nova_compute[257802]: 2025-10-02 12:55:12.693 2 DEBUG oslo_concurrency.processutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:12 compute-0 nova_compute[257802]: 2025-10-02 12:55:12.694 2 DEBUG oslo_concurrency.lockutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:12 compute-0 nova_compute[257802]: 2025-10-02 12:55:12.695 2 DEBUG oslo_concurrency.lockutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:12 compute-0 nova_compute[257802]: 2025-10-02 12:55:12.695 2 DEBUG oslo_concurrency.lockutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e387 do_prune osdmap full prune enabled
Oct 02 12:55:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:55:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:55:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:55:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:55:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:55:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:55:12 compute-0 nova_compute[257802]: 2025-10-02 12:55:12.725 2 DEBUG nova.storage.rbd_utils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 4419a3ba-4449-4efc-aa2a-60535aeb8970_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:12 compute-0 nova_compute[257802]: 2025-10-02 12:55:12.729 2 DEBUG oslo_concurrency.processutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 4419a3ba-4449-4efc-aa2a-60535aeb8970_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3529966221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e388 e388: 3 total, 3 up, 3 in
Oct 02 12:55:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:12.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:12 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e388: 3 total, 3 up, 3 in
Oct 02 12:55:13 compute-0 nova_compute[257802]: 2025-10-02 12:55:13.382 2 DEBUG oslo_concurrency.processutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 4419a3ba-4449-4efc-aa2a-60535aeb8970_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.653s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:13 compute-0 nova_compute[257802]: 2025-10-02 12:55:13.449 2 DEBUG nova.storage.rbd_utils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] resizing rbd image 4419a3ba-4449-4efc-aa2a-60535aeb8970_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:55:13 compute-0 nova_compute[257802]: 2025-10-02 12:55:13.588 2 DEBUG nova.objects.instance [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lazy-loading 'migration_context' on Instance uuid 4419a3ba-4449-4efc-aa2a-60535aeb8970 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:55:13 compute-0 nova_compute[257802]: 2025-10-02 12:55:13.642 2 DEBUG nova.virt.libvirt.driver [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:55:13 compute-0 nova_compute[257802]: 2025-10-02 12:55:13.642 2 DEBUG nova.virt.libvirt.driver [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Ensure instance console log exists: /var/lib/nova/instances/4419a3ba-4449-4efc-aa2a-60535aeb8970/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:55:13 compute-0 nova_compute[257802]: 2025-10-02 12:55:13.643 2 DEBUG oslo_concurrency.lockutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:13 compute-0 nova_compute[257802]: 2025-10-02 12:55:13.643 2 DEBUG oslo_concurrency.lockutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:13 compute-0 nova_compute[257802]: 2025-10-02 12:55:13.643 2 DEBUG oslo_concurrency.lockutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:13 compute-0 ceph-mon[73607]: pgmap v2884: 305 pgs: 305 active+clean; 327 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 96 KiB/s rd, 29 KiB/s wr, 141 op/s
Oct 02 12:55:13 compute-0 ceph-mon[73607]: osdmap e388: 3 total, 3 up, 3 in
Oct 02 12:55:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:13.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:14 compute-0 nova_compute[257802]: 2025-10-02 12:55:14.179 2 DEBUG nova.network.neutron [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Successfully created port: 5747f902-8c1e-48d7-b093-f485b127cbd8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:55:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2886: 305 pgs: 305 active+clean; 327 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 55 KiB/s rd, 7.4 KiB/s wr, 78 op/s
Oct 02 12:55:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e388 do_prune osdmap full prune enabled
Oct 02 12:55:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e389 e389: 3 total, 3 up, 3 in
Oct 02 12:55:14 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e389: 3 total, 3 up, 3 in
Oct 02 12:55:14 compute-0 nova_compute[257802]: 2025-10-02 12:55:14.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:55:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:14.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:55:15 compute-0 nova_compute[257802]: 2025-10-02 12:55:15.427 2 DEBUG nova.network.neutron [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Successfully updated port: 5747f902-8c1e-48d7-b093-f485b127cbd8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:55:15 compute-0 ceph-mon[73607]: pgmap v2886: 305 pgs: 305 active+clean; 327 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 55 KiB/s rd, 7.4 KiB/s wr, 78 op/s
Oct 02 12:55:15 compute-0 ceph-mon[73607]: osdmap e389: 3 total, 3 up, 3 in
Oct 02 12:55:15 compute-0 nova_compute[257802]: 2025-10-02 12:55:15.676 2 DEBUG oslo_concurrency.lockutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "refresh_cache-4419a3ba-4449-4efc-aa2a-60535aeb8970" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:55:15 compute-0 nova_compute[257802]: 2025-10-02 12:55:15.676 2 DEBUG oslo_concurrency.lockutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquired lock "refresh_cache-4419a3ba-4449-4efc-aa2a-60535aeb8970" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:55:15 compute-0 nova_compute[257802]: 2025-10-02 12:55:15.676 2 DEBUG nova.network.neutron [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:55:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:55:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:15.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:55:16 compute-0 nova_compute[257802]: 2025-10-02 12:55:16.099 2 DEBUG nova.network.neutron [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:55:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2888: 305 pgs: 305 active+clean; 256 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 115 KiB/s rd, 3.2 MiB/s wr, 171 op/s
Oct 02 12:55:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:55:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:16.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:55:16 compute-0 nova_compute[257802]: 2025-10-02 12:55:16.801 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409701.8004372, 8e2c1007-1d07-434c-8a22-6cb98d903d3c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:55:16 compute-0 nova_compute[257802]: 2025-10-02 12:55:16.801 2 INFO nova.compute.manager [-] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] VM Stopped (Lifecycle Event)
Oct 02 12:55:16 compute-0 nova_compute[257802]: 2025-10-02 12:55:16.880 2 DEBUG nova.compute.manager [None req-1b467297-1ecd-4041-8256-a379e9227545 - - - - - -] [instance: 8e2c1007-1d07-434c-8a22-6cb98d903d3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.126 2 DEBUG nova.compute.manager [req-1d2166f6-5e3b-4e2d-9aad-3115a07be48a req-045c5ab6-254a-4015-b562-574218056b77 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Received event network-changed-5747f902-8c1e-48d7-b093-f485b127cbd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.127 2 DEBUG nova.compute.manager [req-1d2166f6-5e3b-4e2d-9aad-3115a07be48a req-045c5ab6-254a-4015-b562-574218056b77 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Refreshing instance network info cache due to event network-changed-5747f902-8c1e-48d7-b093-f485b127cbd8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.127 2 DEBUG oslo_concurrency.lockutils [req-1d2166f6-5e3b-4e2d-9aad-3115a07be48a req-045c5ab6-254a-4015-b562-574218056b77 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-4419a3ba-4449-4efc-aa2a-60535aeb8970" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.461 2 DEBUG nova.network.neutron [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Updating instance_info_cache with network_info: [{"id": "5747f902-8c1e-48d7-b093-f485b127cbd8", "address": "fa:16:3e:6b:f8:56", "network": {"id": "5d708a73-9d9d-419e-a932-76b92db27fe0", "bridge": "br-int", "label": "tempest-network-smoke--582150062", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5747f902-8c", "ovs_interfaceid": "5747f902-8c1e-48d7-b093-f485b127cbd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.501 2 DEBUG oslo_concurrency.lockutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Releasing lock "refresh_cache-4419a3ba-4449-4efc-aa2a-60535aeb8970" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.502 2 DEBUG nova.compute.manager [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Instance network_info: |[{"id": "5747f902-8c1e-48d7-b093-f485b127cbd8", "address": "fa:16:3e:6b:f8:56", "network": {"id": "5d708a73-9d9d-419e-a932-76b92db27fe0", "bridge": "br-int", "label": "tempest-network-smoke--582150062", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5747f902-8c", "ovs_interfaceid": "5747f902-8c1e-48d7-b093-f485b127cbd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.502 2 DEBUG oslo_concurrency.lockutils [req-1d2166f6-5e3b-4e2d-9aad-3115a07be48a req-045c5ab6-254a-4015-b562-574218056b77 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-4419a3ba-4449-4efc-aa2a-60535aeb8970" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.502 2 DEBUG nova.network.neutron [req-1d2166f6-5e3b-4e2d-9aad-3115a07be48a req-045c5ab6-254a-4015-b562-574218056b77 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Refreshing network info cache for port 5747f902-8c1e-48d7-b093-f485b127cbd8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.506 2 DEBUG nova.virt.libvirt.driver [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Start _get_guest_xml network_info=[{"id": "5747f902-8c1e-48d7-b093-f485b127cbd8", "address": "fa:16:3e:6b:f8:56", "network": {"id": "5d708a73-9d9d-419e-a932-76b92db27fe0", "bridge": "br-int", "label": "tempest-network-smoke--582150062", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5747f902-8c", "ovs_interfaceid": "5747f902-8c1e-48d7-b093-f485b127cbd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.510 2 WARNING nova.virt.libvirt.driver [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.515 2 DEBUG nova.virt.libvirt.host [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.516 2 DEBUG nova.virt.libvirt.host [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:55:17 compute-0 ceph-mon[73607]: pgmap v2888: 305 pgs: 305 active+clean; 256 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 115 KiB/s rd, 3.2 MiB/s wr, 171 op/s
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.522 2 DEBUG nova.virt.libvirt.host [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.523 2 DEBUG nova.virt.libvirt.host [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.524 2 DEBUG nova.virt.libvirt.driver [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.524 2 DEBUG nova.virt.hardware [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.525 2 DEBUG nova.virt.hardware [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.525 2 DEBUG nova.virt.hardware [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.525 2 DEBUG nova.virt.hardware [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.525 2 DEBUG nova.virt.hardware [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.526 2 DEBUG nova.virt.hardware [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.526 2 DEBUG nova.virt.hardware [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.527 2 DEBUG nova.virt.hardware [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.527 2 DEBUG nova.virt.hardware [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.527 2 DEBUG nova.virt.hardware [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.527 2 DEBUG nova.virt.hardware [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.531 2 DEBUG oslo_concurrency.processutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:17 compute-0 sudo[376858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:55:17 compute-0 sudo[376858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:17 compute-0 sudo[376858]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:55:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:17.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:55:17 compute-0 sudo[376883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:55:17 compute-0 sudo[376883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:17 compute-0 sudo[376883]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:17 compute-0 nova_compute[257802]: 2025-10-02 12:55:17.977 2 DEBUG oslo_concurrency.processutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:18 compute-0 nova_compute[257802]: 2025-10-02 12:55:18.004 2 DEBUG nova.storage.rbd_utils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 4419a3ba-4449-4efc-aa2a-60535aeb8970_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:18 compute-0 nova_compute[257802]: 2025-10-02 12:55:18.009 2 DEBUG oslo_concurrency.processutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2889: 305 pgs: 305 active+clean; 256 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 67 KiB/s rd, 2.7 MiB/s wr, 100 op/s
Oct 02 12:55:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:55:18 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2476654741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:18 compute-0 nova_compute[257802]: 2025-10-02 12:55:18.433 2 DEBUG oslo_concurrency.processutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:18 compute-0 nova_compute[257802]: 2025-10-02 12:55:18.436 2 DEBUG nova.virt.libvirt.vif [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:55:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-gen-1-29258642',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-gen-1-29258642',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1031871880-ge',id=185,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCNXoUpB5vnXEqEgNwYIzijuoYmVb3l9JNigmHJaX5u9xC3Mh8lXczeWl2u7dkH6lLuwxSlvCC37ZyW8ZGUIpj45HvvaKOGWejz8IKI5Q3A25a49idjq6IkqaQpM0SMq/w==',key_name='tempest-TestSecurityGroupsBasicOps-487542131',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5ade962c517a483dbfe4bb13386f0006',ramdisk_id='',reservation_id='r-yvcor7xl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1031871880',owner_user_name='tempest-TestSecurityGroupsBasicOps-1031871880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:55:12Z,user_data=None,user_id='16730f38111542e58a05fb4deb2b3914',uuid=4419a3ba-4449-4efc-aa2a-60535aeb8970,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5747f902-8c1e-48d7-b093-f485b127cbd8", "address": "fa:16:3e:6b:f8:56", "network": {"id": "5d708a73-9d9d-419e-a932-76b92db27fe0", "bridge": "br-int", "label": "tempest-network-smoke--582150062", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5747f902-8c", "ovs_interfaceid": "5747f902-8c1e-48d7-b093-f485b127cbd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:55:18 compute-0 nova_compute[257802]: 2025-10-02 12:55:18.437 2 DEBUG nova.network.os_vif_util [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converting VIF {"id": "5747f902-8c1e-48d7-b093-f485b127cbd8", "address": "fa:16:3e:6b:f8:56", "network": {"id": "5d708a73-9d9d-419e-a932-76b92db27fe0", "bridge": "br-int", "label": "tempest-network-smoke--582150062", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5747f902-8c", "ovs_interfaceid": "5747f902-8c1e-48d7-b093-f485b127cbd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:55:18 compute-0 nova_compute[257802]: 2025-10-02 12:55:18.438 2 DEBUG nova.network.os_vif_util [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:f8:56,bridge_name='br-int',has_traffic_filtering=True,id=5747f902-8c1e-48d7-b093-f485b127cbd8,network=Network(5d708a73-9d9d-419e-a932-76b92db27fe0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5747f902-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:55:18 compute-0 nova_compute[257802]: 2025-10-02 12:55:18.441 2 DEBUG nova.objects.instance [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4419a3ba-4449-4efc-aa2a-60535aeb8970 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:55:18 compute-0 nova_compute[257802]: 2025-10-02 12:55:18.547 2 DEBUG nova.virt.libvirt.driver [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:55:18 compute-0 nova_compute[257802]:   <uuid>4419a3ba-4449-4efc-aa2a-60535aeb8970</uuid>
Oct 02 12:55:18 compute-0 nova_compute[257802]:   <name>instance-000000b9</name>
Oct 02 12:55:18 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:55:18 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:55:18 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:55:18 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-gen-1-29258642</nova:name>
Oct 02 12:55:18 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:55:17</nova:creationTime>
Oct 02 12:55:18 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:55:18 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:55:18 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:55:18 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:55:18 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:55:18 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:55:18 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:55:18 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:55:18 compute-0 nova_compute[257802]:         <nova:user uuid="16730f38111542e58a05fb4deb2b3914">tempest-TestSecurityGroupsBasicOps-1031871880-project-member</nova:user>
Oct 02 12:55:18 compute-0 nova_compute[257802]:         <nova:project uuid="5ade962c517a483dbfe4bb13386f0006">tempest-TestSecurityGroupsBasicOps-1031871880</nova:project>
Oct 02 12:55:18 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:55:18 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:55:18 compute-0 nova_compute[257802]:         <nova:port uuid="5747f902-8c1e-48d7-b093-f485b127cbd8">
Oct 02 12:55:18 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:55:18 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:55:18 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:55:18 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <system>
Oct 02 12:55:18 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:55:18 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:55:18 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:55:18 compute-0 nova_compute[257802]:       <entry name="serial">4419a3ba-4449-4efc-aa2a-60535aeb8970</entry>
Oct 02 12:55:18 compute-0 nova_compute[257802]:       <entry name="uuid">4419a3ba-4449-4efc-aa2a-60535aeb8970</entry>
Oct 02 12:55:18 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     </system>
Oct 02 12:55:18 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:55:18 compute-0 nova_compute[257802]:   <os>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:   </os>
Oct 02 12:55:18 compute-0 nova_compute[257802]:   <features>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:   </features>
Oct 02 12:55:18 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:55:18 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:55:18 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:55:18 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/4419a3ba-4449-4efc-aa2a-60535aeb8970_disk">
Oct 02 12:55:18 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:       </source>
Oct 02 12:55:18 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:55:18 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:55:18 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:55:18 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/4419a3ba-4449-4efc-aa2a-60535aeb8970_disk.config">
Oct 02 12:55:18 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:       </source>
Oct 02 12:55:18 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:55:18 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:55:18 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:55:18 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:6b:f8:56"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:       <target dev="tap5747f902-8c"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:55:18 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/4419a3ba-4449-4efc-aa2a-60535aeb8970/console.log" append="off"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <video>
Oct 02 12:55:18 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     </video>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:55:18 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:55:18 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:55:18 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:55:18 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:55:18 compute-0 nova_compute[257802]: </domain>
Oct 02 12:55:18 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:55:18 compute-0 nova_compute[257802]: 2025-10-02 12:55:18.548 2 DEBUG nova.compute.manager [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Preparing to wait for external event network-vif-plugged-5747f902-8c1e-48d7-b093-f485b127cbd8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:55:18 compute-0 nova_compute[257802]: 2025-10-02 12:55:18.549 2 DEBUG oslo_concurrency.lockutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "4419a3ba-4449-4efc-aa2a-60535aeb8970-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:18 compute-0 nova_compute[257802]: 2025-10-02 12:55:18.549 2 DEBUG oslo_concurrency.lockutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "4419a3ba-4449-4efc-aa2a-60535aeb8970-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:18 compute-0 nova_compute[257802]: 2025-10-02 12:55:18.549 2 DEBUG oslo_concurrency.lockutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "4419a3ba-4449-4efc-aa2a-60535aeb8970-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:18 compute-0 nova_compute[257802]: 2025-10-02 12:55:18.550 2 DEBUG nova.virt.libvirt.vif [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:55:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-gen-1-29258642',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-gen-1-29258642',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1031871880-ge',id=185,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCNXoUpB5vnXEqEgNwYIzijuoYmVb3l9JNigmHJaX5u9xC3Mh8lXczeWl2u7dkH6lLuwxSlvCC37ZyW8ZGUIpj45HvvaKOGWejz8IKI5Q3A25a49idjq6IkqaQpM0SMq/w==',key_name='tempest-TestSecurityGroupsBasicOps-487542131',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5ade962c517a483dbfe4bb13386f0006',ramdisk_id='',reservation_id='r-yvcor7xl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1031871880',owner_user_name='tempest-TestSecurityGroupsBasicOps-1031871880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:55:12Z,user_data=None,user_id='16730f38111542e58a05fb4deb2b3914',uuid=4419a3ba-4449-4efc-aa2a-60535aeb8970,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5747f902-8c1e-48d7-b093-f485b127cbd8", "address": "fa:16:3e:6b:f8:56", "network": {"id": "5d708a73-9d9d-419e-a932-76b92db27fe0", "bridge": "br-int", "label": "tempest-network-smoke--582150062", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5747f902-8c", "ovs_interfaceid": "5747f902-8c1e-48d7-b093-f485b127cbd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:55:18 compute-0 nova_compute[257802]: 2025-10-02 12:55:18.550 2 DEBUG nova.network.os_vif_util [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converting VIF {"id": "5747f902-8c1e-48d7-b093-f485b127cbd8", "address": "fa:16:3e:6b:f8:56", "network": {"id": "5d708a73-9d9d-419e-a932-76b92db27fe0", "bridge": "br-int", "label": "tempest-network-smoke--582150062", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5747f902-8c", "ovs_interfaceid": "5747f902-8c1e-48d7-b093-f485b127cbd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:55:18 compute-0 nova_compute[257802]: 2025-10-02 12:55:18.551 2 DEBUG nova.network.os_vif_util [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:f8:56,bridge_name='br-int',has_traffic_filtering=True,id=5747f902-8c1e-48d7-b093-f485b127cbd8,network=Network(5d708a73-9d9d-419e-a932-76b92db27fe0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5747f902-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:55:18 compute-0 nova_compute[257802]: 2025-10-02 12:55:18.551 2 DEBUG os_vif [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:f8:56,bridge_name='br-int',has_traffic_filtering=True,id=5747f902-8c1e-48d7-b093-f485b127cbd8,network=Network(5d708a73-9d9d-419e-a932-76b92db27fe0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5747f902-8c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:55:18 compute-0 nova_compute[257802]: 2025-10-02 12:55:18.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:18 compute-0 nova_compute[257802]: 2025-10-02 12:55:18.552 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:18 compute-0 nova_compute[257802]: 2025-10-02 12:55:18.552 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:55:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/91340182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/627310823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/87200434' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2476654741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:18 compute-0 nova_compute[257802]: 2025-10-02 12:55:18.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:18 compute-0 nova_compute[257802]: 2025-10-02 12:55:18.557 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5747f902-8c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:18 compute-0 nova_compute[257802]: 2025-10-02 12:55:18.558 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5747f902-8c, col_values=(('external_ids', {'iface-id': '5747f902-8c1e-48d7-b093-f485b127cbd8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6b:f8:56', 'vm-uuid': '4419a3ba-4449-4efc-aa2a-60535aeb8970'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:18 compute-0 NetworkManager[44987]: <info>  [1759409718.5605] manager: (tap5747f902-8c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/371)
Oct 02 12:55:18 compute-0 nova_compute[257802]: 2025-10-02 12:55:18.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:18 compute-0 nova_compute[257802]: 2025-10-02 12:55:18.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:55:18 compute-0 nova_compute[257802]: 2025-10-02 12:55:18.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:18 compute-0 nova_compute[257802]: 2025-10-02 12:55:18.566 2 INFO os_vif [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:f8:56,bridge_name='br-int',has_traffic_filtering=True,id=5747f902-8c1e-48d7-b093-f485b127cbd8,network=Network(5d708a73-9d9d-419e-a932-76b92db27fe0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5747f902-8c')
Oct 02 12:55:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:18.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:19 compute-0 nova_compute[257802]: 2025-10-02 12:55:19.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:55:19 compute-0 nova_compute[257802]: 2025-10-02 12:55:19.152 2 DEBUG nova.virt.libvirt.driver [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:55:19 compute-0 nova_compute[257802]: 2025-10-02 12:55:19.152 2 DEBUG nova.virt.libvirt.driver [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:55:19 compute-0 nova_compute[257802]: 2025-10-02 12:55:19.152 2 DEBUG nova.virt.libvirt.driver [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] No VIF found with MAC fa:16:3e:6b:f8:56, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:55:19 compute-0 nova_compute[257802]: 2025-10-02 12:55:19.153 2 INFO nova.virt.libvirt.driver [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Using config drive
Oct 02 12:55:19 compute-0 nova_compute[257802]: 2025-10-02 12:55:19.181 2 DEBUG nova.storage.rbd_utils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 4419a3ba-4449-4efc-aa2a-60535aeb8970_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:19 compute-0 nova_compute[257802]: 2025-10-02 12:55:19.338 2 DEBUG nova.network.neutron [req-1d2166f6-5e3b-4e2d-9aad-3115a07be48a req-045c5ab6-254a-4015-b562-574218056b77 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Updated VIF entry in instance network info cache for port 5747f902-8c1e-48d7-b093-f485b127cbd8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:55:19 compute-0 nova_compute[257802]: 2025-10-02 12:55:19.339 2 DEBUG nova.network.neutron [req-1d2166f6-5e3b-4e2d-9aad-3115a07be48a req-045c5ab6-254a-4015-b562-574218056b77 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Updating instance_info_cache with network_info: [{"id": "5747f902-8c1e-48d7-b093-f485b127cbd8", "address": "fa:16:3e:6b:f8:56", "network": {"id": "5d708a73-9d9d-419e-a932-76b92db27fe0", "bridge": "br-int", "label": "tempest-network-smoke--582150062", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5747f902-8c", "ovs_interfaceid": "5747f902-8c1e-48d7-b093-f485b127cbd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:55:19 compute-0 nova_compute[257802]: 2025-10-02 12:55:19.431 2 DEBUG oslo_concurrency.lockutils [req-1d2166f6-5e3b-4e2d-9aad-3115a07be48a req-045c5ab6-254a-4015-b562-574218056b77 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-4419a3ba-4449-4efc-aa2a-60535aeb8970" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:55:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e389 do_prune osdmap full prune enabled
Oct 02 12:55:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 e390: 3 total, 3 up, 3 in
Oct 02 12:55:19 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e390: 3 total, 3 up, 3 in
Oct 02 12:55:19 compute-0 ceph-mon[73607]: pgmap v2889: 305 pgs: 305 active+clean; 256 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 67 KiB/s rd, 2.7 MiB/s wr, 100 op/s
Oct 02 12:55:19 compute-0 ceph-mon[73607]: osdmap e390: 3 total, 3 up, 3 in
Oct 02 12:55:19 compute-0 nova_compute[257802]: 2025-10-02 12:55:19.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:19 compute-0 nova_compute[257802]: 2025-10-02 12:55:19.803 2 INFO nova.virt.libvirt.driver [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Creating config drive at /var/lib/nova/instances/4419a3ba-4449-4efc-aa2a-60535aeb8970/disk.config
Oct 02 12:55:19 compute-0 nova_compute[257802]: 2025-10-02 12:55:19.809 2 DEBUG oslo_concurrency.processutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4419a3ba-4449-4efc-aa2a-60535aeb8970/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp898gp2ok execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:19.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:19 compute-0 nova_compute[257802]: 2025-10-02 12:55:19.946 2 DEBUG oslo_concurrency.processutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4419a3ba-4449-4efc-aa2a-60535aeb8970/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp898gp2ok" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:19 compute-0 nova_compute[257802]: 2025-10-02 12:55:19.974 2 DEBUG nova.storage.rbd_utils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 4419a3ba-4449-4efc-aa2a-60535aeb8970_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:19 compute-0 nova_compute[257802]: 2025-10-02 12:55:19.979 2 DEBUG oslo_concurrency.processutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4419a3ba-4449-4efc-aa2a-60535aeb8970/disk.config 4419a3ba-4449-4efc-aa2a-60535aeb8970_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:20 compute-0 nova_compute[257802]: 2025-10-02 12:55:20.150 2 DEBUG oslo_concurrency.processutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4419a3ba-4449-4efc-aa2a-60535aeb8970/disk.config 4419a3ba-4449-4efc-aa2a-60535aeb8970_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:20 compute-0 nova_compute[257802]: 2025-10-02 12:55:20.151 2 INFO nova.virt.libvirt.driver [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Deleting local config drive /var/lib/nova/instances/4419a3ba-4449-4efc-aa2a-60535aeb8970/disk.config because it was imported into RBD.
Oct 02 12:55:20 compute-0 kernel: tap5747f902-8c: entered promiscuous mode
Oct 02 12:55:20 compute-0 NetworkManager[44987]: <info>  [1759409720.2061] manager: (tap5747f902-8c): new Tun device (/org/freedesktop/NetworkManager/Devices/372)
Oct 02 12:55:20 compute-0 nova_compute[257802]: 2025-10-02 12:55:20.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:20 compute-0 ovn_controller[148183]: 2025-10-02T12:55:20Z|00837|binding|INFO|Claiming lport 5747f902-8c1e-48d7-b093-f485b127cbd8 for this chassis.
Oct 02 12:55:20 compute-0 ovn_controller[148183]: 2025-10-02T12:55:20Z|00838|binding|INFO|5747f902-8c1e-48d7-b093-f485b127cbd8: Claiming fa:16:3e:6b:f8:56 10.100.0.11
Oct 02 12:55:20 compute-0 nova_compute[257802]: 2025-10-02 12:55:20.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:20 compute-0 nova_compute[257802]: 2025-10-02 12:55:20.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:20 compute-0 nova_compute[257802]: 2025-10-02 12:55:20.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:20 compute-0 nova_compute[257802]: 2025-10-02 12:55:20.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:20 compute-0 NetworkManager[44987]: <info>  [1759409720.2245] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/373)
Oct 02 12:55:20 compute-0 NetworkManager[44987]: <info>  [1759409720.2251] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/374)
Oct 02 12:55:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2891: 305 pgs: 305 active+clean; 282 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 4.8 MiB/s wr, 135 op/s
Oct 02 12:55:20 compute-0 systemd-machined[211836]: New machine qemu-91-instance-000000b9.
Oct 02 12:55:20 compute-0 systemd[1]: Started Virtual Machine qemu-91-instance-000000b9.
Oct 02 12:55:20 compute-0 systemd-udevd[377025]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:55:20 compute-0 NetworkManager[44987]: <info>  [1759409720.3101] device (tap5747f902-8c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:55:20 compute-0 NetworkManager[44987]: <info>  [1759409720.3113] device (tap5747f902-8c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:55:20 compute-0 nova_compute[257802]: 2025-10-02 12:55:20.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:20 compute-0 nova_compute[257802]: 2025-10-02 12:55:20.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:20.619 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:f8:56 10.100.0.11'], port_security=['fa:16:3e:6b:f8:56 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4419a3ba-4449-4efc-aa2a-60535aeb8970', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d708a73-9d9d-419e-a932-76b92db27fe0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5ade962c517a483dbfe4bb13386f0006', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8feb20e9-648a-477c-af80-8cb6c84d6497', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d0c1625-3ae7-4b72-a555-f31f6d4351fb, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=5747f902-8c1e-48d7-b093-f485b127cbd8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:55:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:20.620 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 5747f902-8c1e-48d7-b093-f485b127cbd8 in datapath 5d708a73-9d9d-419e-a932-76b92db27fe0 bound to our chassis
Oct 02 12:55:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:20.621 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d708a73-9d9d-419e-a932-76b92db27fe0
Oct 02 12:55:20 compute-0 ovn_controller[148183]: 2025-10-02T12:55:20Z|00839|binding|INFO|Setting lport 5747f902-8c1e-48d7-b093-f485b127cbd8 ovn-installed in OVS
Oct 02 12:55:20 compute-0 ovn_controller[148183]: 2025-10-02T12:55:20Z|00840|binding|INFO|Setting lport 5747f902-8c1e-48d7-b093-f485b127cbd8 up in Southbound
Oct 02 12:55:20 compute-0 nova_compute[257802]: 2025-10-02 12:55:20.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:20.635 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4d949f33-8bf0-4f7f-adab-accd20218eec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:20.636 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5d708a73-91 in ovnmeta-5d708a73-9d9d-419e-a932-76b92db27fe0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:55:20 compute-0 nova_compute[257802]: 2025-10-02 12:55:20.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:20.639 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5d708a73-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:55:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:20.640 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[580e23cd-88ff-4144-8c6b-30279e40f9b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:20.641 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f82ef9a3-1c3c-4b70-b08a-80e11d53b1c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:20.658 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[ff8f1378-3b3f-48cb-bf89-b2e5b710611c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:20.671 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[66f2e287-82eb-42fc-a053-a0470a5443fd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:20.697 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[cb382d94-90c5-4efe-8bb5-63fd2bdc8746]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:20.705 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2f273328-2e7e-4aae-985c-896cf85f8b55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:20 compute-0 NetworkManager[44987]: <info>  [1759409720.7062] manager: (tap5d708a73-90): new Veth device (/org/freedesktop/NetworkManager/Devices/375)
Oct 02 12:55:20 compute-0 systemd-udevd[377028]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:55:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:20.738 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[4ea1c270-88da-4c1d-a54d-e1876e553c73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:20.740 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[edfe6453-9b0d-49fb-992b-6d856e6de167]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:20 compute-0 NetworkManager[44987]: <info>  [1759409720.7604] device (tap5d708a73-90): carrier: link connected
Oct 02 12:55:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:20.764 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[41a13dfd-d58f-4dc2-ac43-134dfcc9f869]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:20.780 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[75d737fe-e87f-4f05-94eb-66b762444309]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d708a73-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:db:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 778838, 'reachable_time': 39886, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377098, 'error': None, 'target': 'ovnmeta-5d708a73-9d9d-419e-a932-76b92db27fe0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:20.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:20.794 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2b76e61d-18c6-4467-88af-8108bde5c3d5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1a:dbdb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 778838, 'tstamp': 778838}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377099, 'error': None, 'target': 'ovnmeta-5d708a73-9d9d-419e-a932-76b92db27fe0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:20.811 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9c524521-26a2-499f-8ea2-959e0accc029]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d708a73-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:db:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 778838, 'reachable_time': 39886, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 377100, 'error': None, 'target': 'ovnmeta-5d708a73-9d9d-419e-a932-76b92db27fe0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:20.855 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e660335e-f150-4568-9d6a-c29ca3d912f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:20.943 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[626fb7c6-100e-482b-9ece-9671484e2ea6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:20.945 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d708a73-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:20.945 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:55:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:20.945 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d708a73-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:20 compute-0 nova_compute[257802]: 2025-10-02 12:55:20.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:20 compute-0 NetworkManager[44987]: <info>  [1759409720.9872] manager: (tap5d708a73-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/376)
Oct 02 12:55:20 compute-0 kernel: tap5d708a73-90: entered promiscuous mode
Oct 02 12:55:20 compute-0 nova_compute[257802]: 2025-10-02 12:55:20.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:20.990 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d708a73-90, col_values=(('external_ids', {'iface-id': 'fe52387d-636e-4544-9cfa-1db45f861a05'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:20 compute-0 nova_compute[257802]: 2025-10-02 12:55:20.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:20 compute-0 ovn_controller[148183]: 2025-10-02T12:55:20Z|00841|binding|INFO|Releasing lport fe52387d-636e-4544-9cfa-1db45f861a05 from this chassis (sb_readonly=0)
Oct 02 12:55:21 compute-0 nova_compute[257802]: 2025-10-02 12:55:21.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:21.006 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5d708a73-9d9d-419e-a932-76b92db27fe0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5d708a73-9d9d-419e-a932-76b92db27fe0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:21.007 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[fb3a0449-3397-457b-aafb-bc0c7646bd0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:21.009 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-5d708a73-9d9d-419e-a932-76b92db27fe0
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/5d708a73-9d9d-419e-a932-76b92db27fe0.pid.haproxy
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 5d708a73-9d9d-419e-a932-76b92db27fe0
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:55:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:21.010 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5d708a73-9d9d-419e-a932-76b92db27fe0', 'env', 'PROCESS_TAG=haproxy-5d708a73-9d9d-419e-a932-76b92db27fe0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5d708a73-9d9d-419e-a932-76b92db27fe0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:55:21 compute-0 nova_compute[257802]: 2025-10-02 12:55:21.407 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409721.4063418, 4419a3ba-4449-4efc-aa2a-60535aeb8970 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:55:21 compute-0 nova_compute[257802]: 2025-10-02 12:55:21.408 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] VM Started (Lifecycle Event)
Oct 02 12:55:21 compute-0 podman[377138]: 2025-10-02 12:55:21.414026438 +0000 UTC m=+0.102061138 container create 443fd93336bdd23278df719291cc4ef454679daf08bcd94a8fc3b38159383f00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d708a73-9d9d-419e-a932-76b92db27fe0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:55:21 compute-0 podman[377138]: 2025-10-02 12:55:21.336772461 +0000 UTC m=+0.024807161 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:55:21 compute-0 systemd[1]: Started libpod-conmon-443fd93336bdd23278df719291cc4ef454679daf08bcd94a8fc3b38159383f00.scope.
Oct 02 12:55:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:55:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4369f59170a47a117ec44208f8a394b0e18aa6e80a4916fe572c7d3ee53357b6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:55:21 compute-0 podman[377138]: 2025-10-02 12:55:21.512160009 +0000 UTC m=+0.200194739 container init 443fd93336bdd23278df719291cc4ef454679daf08bcd94a8fc3b38159383f00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d708a73-9d9d-419e-a932-76b92db27fe0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 12:55:21 compute-0 podman[377138]: 2025-10-02 12:55:21.521606991 +0000 UTC m=+0.209641691 container start 443fd93336bdd23278df719291cc4ef454679daf08bcd94a8fc3b38159383f00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d708a73-9d9d-419e-a932-76b92db27fe0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:55:21 compute-0 neutron-haproxy-ovnmeta-5d708a73-9d9d-419e-a932-76b92db27fe0[377156]: [NOTICE]   (377212) : New worker (377214) forked
Oct 02 12:55:21 compute-0 neutron-haproxy-ovnmeta-5d708a73-9d9d-419e-a932-76b92db27fe0[377156]: [NOTICE]   (377212) : Loading success.
Oct 02 12:55:21 compute-0 podman[377155]: 2025-10-02 12:55:21.54235978 +0000 UTC m=+0.080394465 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:55:21 compute-0 podman[377151]: 2025-10-02 12:55:21.544154225 +0000 UTC m=+0.083637386 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 02 12:55:21 compute-0 podman[377154]: 2025-10-02 12:55:21.544252607 +0000 UTC m=+0.083651766 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Oct 02 12:55:21 compute-0 ceph-mon[73607]: pgmap v2891: 305 pgs: 305 active+clean; 282 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 4.8 MiB/s wr, 135 op/s
Oct 02 12:55:21 compute-0 nova_compute[257802]: 2025-10-02 12:55:21.879 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:55:21 compute-0 nova_compute[257802]: 2025-10-02 12:55:21.884 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409721.407155, 4419a3ba-4449-4efc-aa2a-60535aeb8970 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:55:21 compute-0 nova_compute[257802]: 2025-10-02 12:55:21.884 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] VM Paused (Lifecycle Event)
Oct 02 12:55:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:55:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:21.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:55:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2892: 305 pgs: 305 active+clean; 282 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 82 KiB/s rd, 4.5 MiB/s wr, 126 op/s
Oct 02 12:55:22 compute-0 nova_compute[257802]: 2025-10-02 12:55:22.466 2 DEBUG nova.compute.manager [req-8cf1e295-37fe-408e-9384-fad9dab35d01 req-c9bc9ae5-ba8e-45c8-b3d6-6537e23de367 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Received event network-vif-plugged-5747f902-8c1e-48d7-b093-f485b127cbd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:22 compute-0 nova_compute[257802]: 2025-10-02 12:55:22.467 2 DEBUG oslo_concurrency.lockutils [req-8cf1e295-37fe-408e-9384-fad9dab35d01 req-c9bc9ae5-ba8e-45c8-b3d6-6537e23de367 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "4419a3ba-4449-4efc-aa2a-60535aeb8970-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:22 compute-0 nova_compute[257802]: 2025-10-02 12:55:22.467 2 DEBUG oslo_concurrency.lockutils [req-8cf1e295-37fe-408e-9384-fad9dab35d01 req-c9bc9ae5-ba8e-45c8-b3d6-6537e23de367 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4419a3ba-4449-4efc-aa2a-60535aeb8970-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:22 compute-0 nova_compute[257802]: 2025-10-02 12:55:22.467 2 DEBUG oslo_concurrency.lockutils [req-8cf1e295-37fe-408e-9384-fad9dab35d01 req-c9bc9ae5-ba8e-45c8-b3d6-6537e23de367 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4419a3ba-4449-4efc-aa2a-60535aeb8970-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:22 compute-0 nova_compute[257802]: 2025-10-02 12:55:22.468 2 DEBUG nova.compute.manager [req-8cf1e295-37fe-408e-9384-fad9dab35d01 req-c9bc9ae5-ba8e-45c8-b3d6-6537e23de367 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Processing event network-vif-plugged-5747f902-8c1e-48d7-b093-f485b127cbd8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:55:22 compute-0 nova_compute[257802]: 2025-10-02 12:55:22.468 2 DEBUG nova.compute.manager [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:55:22 compute-0 nova_compute[257802]: 2025-10-02 12:55:22.474 2 DEBUG nova.virt.libvirt.driver [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:55:22 compute-0 nova_compute[257802]: 2025-10-02 12:55:22.479 2 INFO nova.virt.libvirt.driver [-] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Instance spawned successfully.
Oct 02 12:55:22 compute-0 nova_compute[257802]: 2025-10-02 12:55:22.479 2 DEBUG nova.virt.libvirt.driver [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:55:22 compute-0 nova_compute[257802]: 2025-10-02 12:55:22.484 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:55:22 compute-0 nova_compute[257802]: 2025-10-02 12:55:22.488 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409722.4737577, 4419a3ba-4449-4efc-aa2a-60535aeb8970 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:55:22 compute-0 nova_compute[257802]: 2025-10-02 12:55:22.488 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] VM Resumed (Lifecycle Event)
Oct 02 12:55:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:55:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:22.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:55:22 compute-0 ceph-mon[73607]: pgmap v2892: 305 pgs: 305 active+clean; 282 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 82 KiB/s rd, 4.5 MiB/s wr, 126 op/s
Oct 02 12:55:22 compute-0 nova_compute[257802]: 2025-10-02 12:55:22.944 2 DEBUG nova.virt.libvirt.driver [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:55:22 compute-0 nova_compute[257802]: 2025-10-02 12:55:22.944 2 DEBUG nova.virt.libvirt.driver [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:55:22 compute-0 nova_compute[257802]: 2025-10-02 12:55:22.945 2 DEBUG nova.virt.libvirt.driver [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:55:22 compute-0 nova_compute[257802]: 2025-10-02 12:55:22.945 2 DEBUG nova.virt.libvirt.driver [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:55:22 compute-0 nova_compute[257802]: 2025-10-02 12:55:22.945 2 DEBUG nova.virt.libvirt.driver [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:55:22 compute-0 nova_compute[257802]: 2025-10-02 12:55:22.946 2 DEBUG nova.virt.libvirt.driver [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:55:22 compute-0 nova_compute[257802]: 2025-10-02 12:55:22.951 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:55:22 compute-0 nova_compute[257802]: 2025-10-02 12:55:22.954 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:55:23 compute-0 nova_compute[257802]: 2025-10-02 12:55:23.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:55:23 compute-0 nova_compute[257802]: 2025-10-02 12:55:23.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:55:23 compute-0 nova_compute[257802]: 2025-10-02 12:55:23.121 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:55:23 compute-0 nova_compute[257802]: 2025-10-02 12:55:23.186 2 INFO nova.compute.manager [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Took 10.64 seconds to spawn the instance on the hypervisor.
Oct 02 12:55:23 compute-0 nova_compute[257802]: 2025-10-02 12:55:23.187 2 DEBUG nova.compute.manager [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:55:23 compute-0 nova_compute[257802]: 2025-10-02 12:55:23.262 2 INFO nova.compute.manager [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Took 12.06 seconds to build instance.
Oct 02 12:55:23 compute-0 nova_compute[257802]: 2025-10-02 12:55:23.305 2 DEBUG oslo_concurrency.lockutils [None req-99417e03-2a60-4bdd-8c8b-64ee2f5c5c2d 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "4419a3ba-4449-4efc-aa2a-60535aeb8970" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:23 compute-0 nova_compute[257802]: 2025-10-02 12:55:23.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:23.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:24 compute-0 nova_compute[257802]: 2025-10-02 12:55:24.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:55:24 compute-0 ovn_controller[148183]: 2025-10-02T12:55:24Z|00842|binding|INFO|Releasing lport fe52387d-636e-4544-9cfa-1db45f861a05 from this chassis (sb_readonly=0)
Oct 02 12:55:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2893: 305 pgs: 305 active+clean; 282 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 2.6 MiB/s wr, 101 op/s
Oct 02 12:55:24 compute-0 nova_compute[257802]: 2025-10-02 12:55:24.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:24 compute-0 nova_compute[257802]: 2025-10-02 12:55:24.599 2 DEBUG nova.compute.manager [req-98574d8e-3ff8-449b-9fa3-fbe6ebd096f0 req-10618298-aedb-46fc-a296-063ebcd270ce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Received event network-vif-plugged-5747f902-8c1e-48d7-b093-f485b127cbd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:24 compute-0 nova_compute[257802]: 2025-10-02 12:55:24.600 2 DEBUG oslo_concurrency.lockutils [req-98574d8e-3ff8-449b-9fa3-fbe6ebd096f0 req-10618298-aedb-46fc-a296-063ebcd270ce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "4419a3ba-4449-4efc-aa2a-60535aeb8970-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:24 compute-0 nova_compute[257802]: 2025-10-02 12:55:24.601 2 DEBUG oslo_concurrency.lockutils [req-98574d8e-3ff8-449b-9fa3-fbe6ebd096f0 req-10618298-aedb-46fc-a296-063ebcd270ce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4419a3ba-4449-4efc-aa2a-60535aeb8970-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:24 compute-0 nova_compute[257802]: 2025-10-02 12:55:24.601 2 DEBUG oslo_concurrency.lockutils [req-98574d8e-3ff8-449b-9fa3-fbe6ebd096f0 req-10618298-aedb-46fc-a296-063ebcd270ce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4419a3ba-4449-4efc-aa2a-60535aeb8970-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:24 compute-0 nova_compute[257802]: 2025-10-02 12:55:24.601 2 DEBUG nova.compute.manager [req-98574d8e-3ff8-449b-9fa3-fbe6ebd096f0 req-10618298-aedb-46fc-a296-063ebcd270ce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] No waiting events found dispatching network-vif-plugged-5747f902-8c1e-48d7-b093-f485b127cbd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:55:24 compute-0 nova_compute[257802]: 2025-10-02 12:55:24.602 2 WARNING nova.compute.manager [req-98574d8e-3ff8-449b-9fa3-fbe6ebd096f0 req-10618298-aedb-46fc-a296-063ebcd270ce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Received unexpected event network-vif-plugged-5747f902-8c1e-48d7-b093-f485b127cbd8 for instance with vm_state active and task_state None.
Oct 02 12:55:24 compute-0 nova_compute[257802]: 2025-10-02 12:55:24.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:24.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:25 compute-0 ceph-mon[73607]: pgmap v2893: 305 pgs: 305 active+clean; 282 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 2.6 MiB/s wr, 101 op/s
Oct 02 12:55:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:25.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:25 compute-0 nova_compute[257802]: 2025-10-02 12:55:25.988 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:55:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2894: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 02 12:55:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:26.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:26.976 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:26.977 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:26.977 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:27 compute-0 nova_compute[257802]: 2025-10-02 12:55:27.196 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:55:27 compute-0 ceph-mon[73607]: pgmap v2894: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 02 12:55:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1688538003' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2586245211' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:27.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:27 compute-0 podman[377226]: 2025-10-02 12:55:27.941628778 +0000 UTC m=+0.076073309 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:55:28 compute-0 nova_compute[257802]: 2025-10-02 12:55:28.090 2 DEBUG nova.compute.manager [req-6f99cb50-f294-4511-817f-22e457beced6 req-8160d4d2-bd3b-41a1-8a21-955f66dfb59e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Received event network-changed-5747f902-8c1e-48d7-b093-f485b127cbd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:28 compute-0 nova_compute[257802]: 2025-10-02 12:55:28.090 2 DEBUG nova.compute.manager [req-6f99cb50-f294-4511-817f-22e457beced6 req-8160d4d2-bd3b-41a1-8a21-955f66dfb59e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Refreshing instance network info cache due to event network-changed-5747f902-8c1e-48d7-b093-f485b127cbd8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:55:28 compute-0 nova_compute[257802]: 2025-10-02 12:55:28.091 2 DEBUG oslo_concurrency.lockutils [req-6f99cb50-f294-4511-817f-22e457beced6 req-8160d4d2-bd3b-41a1-8a21-955f66dfb59e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-4419a3ba-4449-4efc-aa2a-60535aeb8970" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:55:28 compute-0 nova_compute[257802]: 2025-10-02 12:55:28.091 2 DEBUG oslo_concurrency.lockutils [req-6f99cb50-f294-4511-817f-22e457beced6 req-8160d4d2-bd3b-41a1-8a21-955f66dfb59e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-4419a3ba-4449-4efc-aa2a-60535aeb8970" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:55:28 compute-0 nova_compute[257802]: 2025-10-02 12:55:28.091 2 DEBUG nova.network.neutron [req-6f99cb50-f294-4511-817f-22e457beced6 req-8160d4d2-bd3b-41a1-8a21-955f66dfb59e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Refreshing network info cache for port 5747f902-8c1e-48d7-b093-f485b127cbd8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:55:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2895: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 02 12:55:28 compute-0 nova_compute[257802]: 2025-10-02 12:55:28.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:55:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:28.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:55:29 compute-0 ceph-mon[73607]: pgmap v2895: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 02 12:55:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/264599544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:29 compute-0 nova_compute[257802]: 2025-10-02 12:55:29.513 2 DEBUG nova.network.neutron [req-6f99cb50-f294-4511-817f-22e457beced6 req-8160d4d2-bd3b-41a1-8a21-955f66dfb59e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Updated VIF entry in instance network info cache for port 5747f902-8c1e-48d7-b093-f485b127cbd8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:55:29 compute-0 nova_compute[257802]: 2025-10-02 12:55:29.514 2 DEBUG nova.network.neutron [req-6f99cb50-f294-4511-817f-22e457beced6 req-8160d4d2-bd3b-41a1-8a21-955f66dfb59e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Updating instance_info_cache with network_info: [{"id": "5747f902-8c1e-48d7-b093-f485b127cbd8", "address": "fa:16:3e:6b:f8:56", "network": {"id": "5d708a73-9d9d-419e-a932-76b92db27fe0", "bridge": "br-int", "label": "tempest-network-smoke--582150062", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5747f902-8c", "ovs_interfaceid": "5747f902-8c1e-48d7-b093-f485b127cbd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:55:29 compute-0 nova_compute[257802]: 2025-10-02 12:55:29.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:29 compute-0 nova_compute[257802]: 2025-10-02 12:55:29.705 2 DEBUG oslo_concurrency.lockutils [req-6f99cb50-f294-4511-817f-22e457beced6 req-8160d4d2-bd3b-41a1-8a21-955f66dfb59e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-4419a3ba-4449-4efc-aa2a-60535aeb8970" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:55:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:29.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2896: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 124 op/s
Oct 02 12:55:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4181416561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:30.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:31 compute-0 ceph-mon[73607]: pgmap v2896: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 124 op/s
Oct 02 12:55:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:31.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2897: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 606 KiB/s wr, 98 op/s
Oct 02 12:55:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:32.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:33 compute-0 ceph-mon[73607]: pgmap v2897: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 606 KiB/s wr, 98 op/s
Oct 02 12:55:33 compute-0 nova_compute[257802]: 2025-10-02 12:55:33.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:33.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:34 compute-0 nova_compute[257802]: 2025-10-02 12:55:34.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:55:34 compute-0 nova_compute[257802]: 2025-10-02 12:55:34.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:55:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2898: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 606 KiB/s wr, 98 op/s
Oct 02 12:55:34 compute-0 nova_compute[257802]: 2025-10-02 12:55:34.375 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:55:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:34 compute-0 nova_compute[257802]: 2025-10-02 12:55:34.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:55:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:34.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:55:35 compute-0 nova_compute[257802]: 2025-10-02 12:55:35.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:55:35 compute-0 nova_compute[257802]: 2025-10-02 12:55:35.238 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:35 compute-0 nova_compute[257802]: 2025-10-02 12:55:35.239 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:35 compute-0 nova_compute[257802]: 2025-10-02 12:55:35.239 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:35 compute-0 nova_compute[257802]: 2025-10-02 12:55:35.239 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:55:35 compute-0 nova_compute[257802]: 2025-10-02 12:55:35.239 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:35 compute-0 ceph-mon[73607]: pgmap v2898: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 606 KiB/s wr, 98 op/s
Oct 02 12:55:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:55:35 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2027032248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:35 compute-0 nova_compute[257802]: 2025-10-02 12:55:35.693 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:35 compute-0 nova_compute[257802]: 2025-10-02 12:55:35.869 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000b9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:55:35 compute-0 nova_compute[257802]: 2025-10-02 12:55:35.870 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000b9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:55:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:55:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:35.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:55:36 compute-0 nova_compute[257802]: 2025-10-02 12:55:36.052 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:55:36 compute-0 nova_compute[257802]: 2025-10-02 12:55:36.055 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4043MB free_disk=20.90105438232422GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:55:36 compute-0 nova_compute[257802]: 2025-10-02 12:55:36.056 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:36 compute-0 nova_compute[257802]: 2025-10-02 12:55:36.057 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2899: 305 pgs: 305 active+clean; 306 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.1 MiB/s wr, 198 op/s
Oct 02 12:55:36 compute-0 ovn_controller[148183]: 2025-10-02T12:55:36Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6b:f8:56 10.100.0.11
Oct 02 12:55:36 compute-0 ovn_controller[148183]: 2025-10-02T12:55:36Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6b:f8:56 10.100.0.11
Oct 02 12:55:36 compute-0 nova_compute[257802]: 2025-10-02 12:55:36.495 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 4419a3ba-4449-4efc-aa2a-60535aeb8970 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:55:36 compute-0 nova_compute[257802]: 2025-10-02 12:55:36.496 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:55:36 compute-0 nova_compute[257802]: 2025-10-02 12:55:36.496 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:55:36 compute-0 nova_compute[257802]: 2025-10-02 12:55:36.527 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing inventories for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:55:36 compute-0 nova_compute[257802]: 2025-10-02 12:55:36.546 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating ProviderTree inventory for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:55:36 compute-0 nova_compute[257802]: 2025-10-02 12:55:36.547 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating inventory in ProviderTree for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:55:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2027032248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:36 compute-0 nova_compute[257802]: 2025-10-02 12:55:36.560 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing aggregate associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:55:36 compute-0 nova_compute[257802]: 2025-10-02 12:55:36.579 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing trait associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:55:36 compute-0 nova_compute[257802]: 2025-10-02 12:55:36.610 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:36.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:55:37 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1554646254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:37 compute-0 nova_compute[257802]: 2025-10-02 12:55:37.062 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:37 compute-0 nova_compute[257802]: 2025-10-02 12:55:37.068 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:55:37 compute-0 nova_compute[257802]: 2025-10-02 12:55:37.119 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:55:37 compute-0 nova_compute[257802]: 2025-10-02 12:55:37.343 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:55:37 compute-0 nova_compute[257802]: 2025-10-02 12:55:37.343 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:37 compute-0 ceph-mon[73607]: pgmap v2899: 305 pgs: 305 active+clean; 306 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.1 MiB/s wr, 198 op/s
Oct 02 12:55:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1554646254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:37.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:38 compute-0 sudo[377302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:55:38 compute-0 sudo[377302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:38 compute-0 sudo[377302]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:38 compute-0 sudo[377327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:55:38 compute-0 sudo[377327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:38 compute-0 sudo[377327]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2900: 305 pgs: 305 active+clean; 306 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 104 op/s
Oct 02 12:55:38 compute-0 nova_compute[257802]: 2025-10-02 12:55:38.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:38.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:39 compute-0 nova_compute[257802]: 2025-10-02 12:55:39.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:39 compute-0 ceph-mon[73607]: pgmap v2900: 305 pgs: 305 active+clean; 306 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 104 op/s
Oct 02 12:55:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:39.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2901: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Oct 02 12:55:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:40.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:41 compute-0 ceph-mon[73607]: pgmap v2901: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Oct 02 12:55:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:41.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2902: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Oct 02 12:55:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:42.334 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=67, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=66) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:55:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:42.334 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:55:42 compute-0 nova_compute[257802]: 2025-10-02 12:55:42.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:55:42
Oct 02 12:55:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:55:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:55:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'images', 'backups', 'volumes', '.mgr', 'default.rgw.log']
Oct 02 12:55:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:55:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:55:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:55:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:55:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:55:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:55:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:55:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:42.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:42 compute-0 ceph-mon[73607]: pgmap v2902: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Oct 02 12:55:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:55:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:55:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:55:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:55:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:55:43 compute-0 nova_compute[257802]: 2025-10-02 12:55:43.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:43 compute-0 nova_compute[257802]: 2025-10-02 12:55:43.706 2 DEBUG oslo_concurrency.lockutils [None req-ee9a32a8-1ae6-4c2d-b53c-ea99a2844172 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "4419a3ba-4449-4efc-aa2a-60535aeb8970" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:43 compute-0 nova_compute[257802]: 2025-10-02 12:55:43.706 2 DEBUG oslo_concurrency.lockutils [None req-ee9a32a8-1ae6-4c2d-b53c-ea99a2844172 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "4419a3ba-4449-4efc-aa2a-60535aeb8970" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:43 compute-0 nova_compute[257802]: 2025-10-02 12:55:43.707 2 DEBUG oslo_concurrency.lockutils [None req-ee9a32a8-1ae6-4c2d-b53c-ea99a2844172 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "4419a3ba-4449-4efc-aa2a-60535aeb8970-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:43 compute-0 nova_compute[257802]: 2025-10-02 12:55:43.707 2 DEBUG oslo_concurrency.lockutils [None req-ee9a32a8-1ae6-4c2d-b53c-ea99a2844172 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "4419a3ba-4449-4efc-aa2a-60535aeb8970-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:43 compute-0 nova_compute[257802]: 2025-10-02 12:55:43.707 2 DEBUG oslo_concurrency.lockutils [None req-ee9a32a8-1ae6-4c2d-b53c-ea99a2844172 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "4419a3ba-4449-4efc-aa2a-60535aeb8970-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:43 compute-0 nova_compute[257802]: 2025-10-02 12:55:43.708 2 INFO nova.compute.manager [None req-ee9a32a8-1ae6-4c2d-b53c-ea99a2844172 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Terminating instance
Oct 02 12:55:43 compute-0 nova_compute[257802]: 2025-10-02 12:55:43.709 2 DEBUG nova.compute.manager [None req-ee9a32a8-1ae6-4c2d-b53c-ea99a2844172 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:55:43 compute-0 kernel: tap5747f902-8c (unregistering): left promiscuous mode
Oct 02 12:55:43 compute-0 NetworkManager[44987]: <info>  [1759409743.8222] device (tap5747f902-8c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:55:43 compute-0 ovn_controller[148183]: 2025-10-02T12:55:43Z|00843|binding|INFO|Releasing lport 5747f902-8c1e-48d7-b093-f485b127cbd8 from this chassis (sb_readonly=0)
Oct 02 12:55:43 compute-0 ovn_controller[148183]: 2025-10-02T12:55:43Z|00844|binding|INFO|Setting lport 5747f902-8c1e-48d7-b093-f485b127cbd8 down in Southbound
Oct 02 12:55:43 compute-0 ovn_controller[148183]: 2025-10-02T12:55:43Z|00845|binding|INFO|Removing iface tap5747f902-8c ovn-installed in OVS
Oct 02 12:55:43 compute-0 nova_compute[257802]: 2025-10-02 12:55:43.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:43 compute-0 nova_compute[257802]: 2025-10-02 12:55:43.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:43 compute-0 nova_compute[257802]: 2025-10-02 12:55:43.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:43 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000b9.scope: Deactivated successfully.
Oct 02 12:55:43 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000b9.scope: Consumed 14.033s CPU time.
Oct 02 12:55:43 compute-0 systemd-machined[211836]: Machine qemu-91-instance-000000b9 terminated.
Oct 02 12:55:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:43.906 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:f8:56 10.100.0.11'], port_security=['fa:16:3e:6b:f8:56 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4419a3ba-4449-4efc-aa2a-60535aeb8970', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d708a73-9d9d-419e-a932-76b92db27fe0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5ade962c517a483dbfe4bb13386f0006', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'c4125454-60e3-4e75-a2bc-b0b1f66d6f84', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d0c1625-3ae7-4b72-a555-f31f6d4351fb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=5747f902-8c1e-48d7-b093-f485b127cbd8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:55:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:43.907 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 5747f902-8c1e-48d7-b093-f485b127cbd8 in datapath 5d708a73-9d9d-419e-a932-76b92db27fe0 unbound from our chassis
Oct 02 12:55:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:43.908 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d708a73-9d9d-419e-a932-76b92db27fe0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:55:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:43.909 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f2593df9-e3c2-4cb8-a005-779f292318e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:43.910 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5d708a73-9d9d-419e-a932-76b92db27fe0 namespace which is not needed anymore
Oct 02 12:55:43 compute-0 nova_compute[257802]: 2025-10-02 12:55:43.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:43 compute-0 nova_compute[257802]: 2025-10-02 12:55:43.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:43 compute-0 nova_compute[257802]: 2025-10-02 12:55:43.944 2 INFO nova.virt.libvirt.driver [-] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Instance destroyed successfully.
Oct 02 12:55:43 compute-0 nova_compute[257802]: 2025-10-02 12:55:43.944 2 DEBUG nova.objects.instance [None req-ee9a32a8-1ae6-4c2d-b53c-ea99a2844172 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lazy-loading 'resources' on Instance uuid 4419a3ba-4449-4efc-aa2a-60535aeb8970 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:55:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:55:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:43.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:55:44 compute-0 neutron-haproxy-ovnmeta-5d708a73-9d9d-419e-a932-76b92db27fe0[377156]: [NOTICE]   (377212) : haproxy version is 2.8.14-c23fe91
Oct 02 12:55:44 compute-0 neutron-haproxy-ovnmeta-5d708a73-9d9d-419e-a932-76b92db27fe0[377156]: [NOTICE]   (377212) : path to executable is /usr/sbin/haproxy
Oct 02 12:55:44 compute-0 neutron-haproxy-ovnmeta-5d708a73-9d9d-419e-a932-76b92db27fe0[377156]: [WARNING]  (377212) : Exiting Master process...
Oct 02 12:55:44 compute-0 neutron-haproxy-ovnmeta-5d708a73-9d9d-419e-a932-76b92db27fe0[377156]: [WARNING]  (377212) : Exiting Master process...
Oct 02 12:55:44 compute-0 neutron-haproxy-ovnmeta-5d708a73-9d9d-419e-a932-76b92db27fe0[377156]: [ALERT]    (377212) : Current worker (377214) exited with code 143 (Terminated)
Oct 02 12:55:44 compute-0 neutron-haproxy-ovnmeta-5d708a73-9d9d-419e-a932-76b92db27fe0[377156]: [WARNING]  (377212) : All workers exited. Exiting... (0)
Oct 02 12:55:44 compute-0 systemd[1]: libpod-443fd93336bdd23278df719291cc4ef454679daf08bcd94a8fc3b38159383f00.scope: Deactivated successfully.
Oct 02 12:55:44 compute-0 podman[377390]: 2025-10-02 12:55:44.073457695 +0000 UTC m=+0.075833325 container died 443fd93336bdd23278df719291cc4ef454679daf08bcd94a8fc3b38159383f00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d708a73-9d9d-419e-a932-76b92db27fe0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 12:55:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-443fd93336bdd23278df719291cc4ef454679daf08bcd94a8fc3b38159383f00-userdata-shm.mount: Deactivated successfully.
Oct 02 12:55:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-4369f59170a47a117ec44208f8a394b0e18aa6e80a4916fe572c7d3ee53357b6-merged.mount: Deactivated successfully.
Oct 02 12:55:44 compute-0 nova_compute[257802]: 2025-10-02 12:55:44.186 2 DEBUG nova.virt.libvirt.vif [None req-ee9a32a8-1ae6-4c2d-b53c-ea99a2844172 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:55:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-gen-1-29258642',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-gen-1-29258642',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1031871880-ge',id=185,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCNXoUpB5vnXEqEgNwYIzijuoYmVb3l9JNigmHJaX5u9xC3Mh8lXczeWl2u7dkH6lLuwxSlvCC37ZyW8ZGUIpj45HvvaKOGWejz8IKI5Q3A25a49idjq6IkqaQpM0SMq/w==',key_name='tempest-TestSecurityGroupsBasicOps-487542131',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:55:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5ade962c517a483dbfe4bb13386f0006',ramdisk_id='',reservation_id='r-yvcor7xl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1031871880',owner_user_name='tempest-TestSecurityGroupsBasicOps-1031871880-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:55:23Z,user_data=None,user_id='16730f38111542e58a05fb4deb2b3914',uuid=4419a3ba-4449-4efc-aa2a-60535aeb8970,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5747f902-8c1e-48d7-b093-f485b127cbd8", "address": "fa:16:3e:6b:f8:56", "network": {"id": "5d708a73-9d9d-419e-a932-76b92db27fe0", "bridge": "br-int", "label": "tempest-network-smoke--582150062", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5747f902-8c", "ovs_interfaceid": "5747f902-8c1e-48d7-b093-f485b127cbd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:55:44 compute-0 nova_compute[257802]: 2025-10-02 12:55:44.187 2 DEBUG nova.network.os_vif_util [None req-ee9a32a8-1ae6-4c2d-b53c-ea99a2844172 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converting VIF {"id": "5747f902-8c1e-48d7-b093-f485b127cbd8", "address": "fa:16:3e:6b:f8:56", "network": {"id": "5d708a73-9d9d-419e-a932-76b92db27fe0", "bridge": "br-int", "label": "tempest-network-smoke--582150062", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5747f902-8c", "ovs_interfaceid": "5747f902-8c1e-48d7-b093-f485b127cbd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:55:44 compute-0 nova_compute[257802]: 2025-10-02 12:55:44.188 2 DEBUG nova.network.os_vif_util [None req-ee9a32a8-1ae6-4c2d-b53c-ea99a2844172 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6b:f8:56,bridge_name='br-int',has_traffic_filtering=True,id=5747f902-8c1e-48d7-b093-f485b127cbd8,network=Network(5d708a73-9d9d-419e-a932-76b92db27fe0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5747f902-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:55:44 compute-0 nova_compute[257802]: 2025-10-02 12:55:44.188 2 DEBUG os_vif [None req-ee9a32a8-1ae6-4c2d-b53c-ea99a2844172 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6b:f8:56,bridge_name='br-int',has_traffic_filtering=True,id=5747f902-8c1e-48d7-b093-f485b127cbd8,network=Network(5d708a73-9d9d-419e-a932-76b92db27fe0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5747f902-8c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:55:44 compute-0 nova_compute[257802]: 2025-10-02 12:55:44.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:44 compute-0 nova_compute[257802]: 2025-10-02 12:55:44.190 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5747f902-8c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:44 compute-0 nova_compute[257802]: 2025-10-02 12:55:44.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:44 compute-0 nova_compute[257802]: 2025-10-02 12:55:44.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:44 compute-0 nova_compute[257802]: 2025-10-02 12:55:44.195 2 INFO os_vif [None req-ee9a32a8-1ae6-4c2d-b53c-ea99a2844172 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6b:f8:56,bridge_name='br-int',has_traffic_filtering=True,id=5747f902-8c1e-48d7-b093-f485b127cbd8,network=Network(5d708a73-9d9d-419e-a932-76b92db27fe0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5747f902-8c')
Oct 02 12:55:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:55:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:55:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:55:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:55:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:55:44 compute-0 podman[377390]: 2025-10-02 12:55:44.221978283 +0000 UTC m=+0.224353913 container cleanup 443fd93336bdd23278df719291cc4ef454679daf08bcd94a8fc3b38159383f00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d708a73-9d9d-419e-a932-76b92db27fe0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:55:44 compute-0 systemd[1]: libpod-conmon-443fd93336bdd23278df719291cc4ef454679daf08bcd94a8fc3b38159383f00.scope: Deactivated successfully.
Oct 02 12:55:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2903: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Oct 02 12:55:44 compute-0 nova_compute[257802]: 2025-10-02 12:55:44.344 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:55:44 compute-0 podman[377436]: 2025-10-02 12:55:44.371920156 +0000 UTC m=+0.126883838 container remove 443fd93336bdd23278df719291cc4ef454679daf08bcd94a8fc3b38159383f00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d708a73-9d9d-419e-a932-76b92db27fe0, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 12:55:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:44.377 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[395aeeef-39e8-4890-8a05-1aa4dc1f66d6]: (4, ('Thu Oct  2 12:55:43 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5d708a73-9d9d-419e-a932-76b92db27fe0 (443fd93336bdd23278df719291cc4ef454679daf08bcd94a8fc3b38159383f00)\n443fd93336bdd23278df719291cc4ef454679daf08bcd94a8fc3b38159383f00\nThu Oct  2 12:55:44 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5d708a73-9d9d-419e-a932-76b92db27fe0 (443fd93336bdd23278df719291cc4ef454679daf08bcd94a8fc3b38159383f00)\n443fd93336bdd23278df719291cc4ef454679daf08bcd94a8fc3b38159383f00\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:44.379 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff71544-72d0-46a6-a206-f88fc177b944]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:44.379 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d708a73-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:44 compute-0 kernel: tap5d708a73-90: left promiscuous mode
Oct 02 12:55:44 compute-0 nova_compute[257802]: 2025-10-02 12:55:44.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:44.387 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9ee132b2-14ee-4b3a-9a32-2b86835e1aa7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:44 compute-0 nova_compute[257802]: 2025-10-02 12:55:44.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:44.420 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b91727d4-8067-4fec-b801-22fc9a74c3e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:44.421 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4894cce8-98ca-4db8-b16e-491b4663059c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:44.439 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[16fdbf5d-3e16-4e1f-a513-da4a9a65785f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 778831, 'reachable_time': 29610, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377454, 'error': None, 'target': 'ovnmeta-5d708a73-9d9d-419e-a932-76b92db27fe0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:44.441 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5d708a73-9d9d-419e-a932-76b92db27fe0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:55:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:44.441 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[c6a02b4b-39d7-4f70-a434-56b8100cb392]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:44 compute-0 systemd[1]: run-netns-ovnmeta\x2d5d708a73\x2d9d9d\x2d419e\x2da932\x2d76b92db27fe0.mount: Deactivated successfully.
Oct 02 12:55:44 compute-0 nova_compute[257802]: 2025-10-02 12:55:44.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:44 compute-0 nova_compute[257802]: 2025-10-02 12:55:44.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:55:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:44.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:55:45 compute-0 nova_compute[257802]: 2025-10-02 12:55:45.211 2 DEBUG nova.compute.manager [req-240060ef-bd94-499c-96a0-d4d703582734 req-d98e772d-0eca-4462-a087-157b71271dc2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Received event network-vif-unplugged-5747f902-8c1e-48d7-b093-f485b127cbd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:45 compute-0 nova_compute[257802]: 2025-10-02 12:55:45.212 2 DEBUG oslo_concurrency.lockutils [req-240060ef-bd94-499c-96a0-d4d703582734 req-d98e772d-0eca-4462-a087-157b71271dc2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "4419a3ba-4449-4efc-aa2a-60535aeb8970-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:45 compute-0 nova_compute[257802]: 2025-10-02 12:55:45.212 2 DEBUG oslo_concurrency.lockutils [req-240060ef-bd94-499c-96a0-d4d703582734 req-d98e772d-0eca-4462-a087-157b71271dc2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4419a3ba-4449-4efc-aa2a-60535aeb8970-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:45 compute-0 nova_compute[257802]: 2025-10-02 12:55:45.212 2 DEBUG oslo_concurrency.lockutils [req-240060ef-bd94-499c-96a0-d4d703582734 req-d98e772d-0eca-4462-a087-157b71271dc2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4419a3ba-4449-4efc-aa2a-60535aeb8970-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:45 compute-0 nova_compute[257802]: 2025-10-02 12:55:45.212 2 DEBUG nova.compute.manager [req-240060ef-bd94-499c-96a0-d4d703582734 req-d98e772d-0eca-4462-a087-157b71271dc2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] No waiting events found dispatching network-vif-unplugged-5747f902-8c1e-48d7-b093-f485b127cbd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:55:45 compute-0 nova_compute[257802]: 2025-10-02 12:55:45.213 2 DEBUG nova.compute.manager [req-240060ef-bd94-499c-96a0-d4d703582734 req-d98e772d-0eca-4462-a087-157b71271dc2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Received event network-vif-unplugged-5747f902-8c1e-48d7-b093-f485b127cbd8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:55:45 compute-0 nova_compute[257802]: 2025-10-02 12:55:45.247 2 INFO nova.virt.libvirt.driver [None req-ee9a32a8-1ae6-4c2d-b53c-ea99a2844172 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Deleting instance files /var/lib/nova/instances/4419a3ba-4449-4efc-aa2a-60535aeb8970_del
Oct 02 12:55:45 compute-0 nova_compute[257802]: 2025-10-02 12:55:45.248 2 INFO nova.virt.libvirt.driver [None req-ee9a32a8-1ae6-4c2d-b53c-ea99a2844172 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Deletion of /var/lib/nova/instances/4419a3ba-4449-4efc-aa2a-60535aeb8970_del complete
Oct 02 12:55:45 compute-0 ceph-mon[73607]: pgmap v2903: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Oct 02 12:55:45 compute-0 nova_compute[257802]: 2025-10-02 12:55:45.521 2 INFO nova.compute.manager [None req-ee9a32a8-1ae6-4c2d-b53c-ea99a2844172 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Took 1.81 seconds to destroy the instance on the hypervisor.
Oct 02 12:55:45 compute-0 nova_compute[257802]: 2025-10-02 12:55:45.521 2 DEBUG oslo.service.loopingcall [None req-ee9a32a8-1ae6-4c2d-b53c-ea99a2844172 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:55:45 compute-0 nova_compute[257802]: 2025-10-02 12:55:45.522 2 DEBUG nova.compute.manager [-] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:55:45 compute-0 nova_compute[257802]: 2025-10-02 12:55:45.522 2 DEBUG nova.network.neutron [-] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:55:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:45.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2904: 305 pgs: 305 active+clean; 290 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 174 op/s
Oct 02 12:55:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:46.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:47 compute-0 ceph-mon[73607]: pgmap v2904: 305 pgs: 305 active+clean; 290 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 174 op/s
Oct 02 12:55:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:55:47.336 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '67'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:47 compute-0 nova_compute[257802]: 2025-10-02 12:55:47.532 2 DEBUG nova.network.neutron [-] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:55:47 compute-0 nova_compute[257802]: 2025-10-02 12:55:47.709 2 DEBUG nova.compute.manager [req-42ed6948-f8d6-42ce-97bc-3b4a270aedfd req-17f5525e-4bfb-4f0c-9a98-180914e86092 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Received event network-vif-plugged-5747f902-8c1e-48d7-b093-f485b127cbd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:47 compute-0 nova_compute[257802]: 2025-10-02 12:55:47.710 2 DEBUG oslo_concurrency.lockutils [req-42ed6948-f8d6-42ce-97bc-3b4a270aedfd req-17f5525e-4bfb-4f0c-9a98-180914e86092 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "4419a3ba-4449-4efc-aa2a-60535aeb8970-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:47 compute-0 nova_compute[257802]: 2025-10-02 12:55:47.710 2 DEBUG oslo_concurrency.lockutils [req-42ed6948-f8d6-42ce-97bc-3b4a270aedfd req-17f5525e-4bfb-4f0c-9a98-180914e86092 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4419a3ba-4449-4efc-aa2a-60535aeb8970-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:47 compute-0 nova_compute[257802]: 2025-10-02 12:55:47.710 2 DEBUG oslo_concurrency.lockutils [req-42ed6948-f8d6-42ce-97bc-3b4a270aedfd req-17f5525e-4bfb-4f0c-9a98-180914e86092 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4419a3ba-4449-4efc-aa2a-60535aeb8970-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:47 compute-0 nova_compute[257802]: 2025-10-02 12:55:47.710 2 DEBUG nova.compute.manager [req-42ed6948-f8d6-42ce-97bc-3b4a270aedfd req-17f5525e-4bfb-4f0c-9a98-180914e86092 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] No waiting events found dispatching network-vif-plugged-5747f902-8c1e-48d7-b093-f485b127cbd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:55:47 compute-0 nova_compute[257802]: 2025-10-02 12:55:47.710 2 WARNING nova.compute.manager [req-42ed6948-f8d6-42ce-97bc-3b4a270aedfd req-17f5525e-4bfb-4f0c-9a98-180914e86092 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Received unexpected event network-vif-plugged-5747f902-8c1e-48d7-b093-f485b127cbd8 for instance with vm_state active and task_state deleting.
Oct 02 12:55:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:47.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2905: 305 pgs: 305 active+clean; 290 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 282 KiB/s rd, 2.4 MiB/s wr, 74 op/s
Oct 02 12:55:48 compute-0 nova_compute[257802]: 2025-10-02 12:55:48.409 2 DEBUG nova.compute.manager [req-eebad36e-c958-4127-8077-179e2c049f7e req-9ef1c0a6-67c2-4b3d-901b-f4275ba82ac8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Received event network-vif-deleted-5747f902-8c1e-48d7-b093-f485b127cbd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:48 compute-0 nova_compute[257802]: 2025-10-02 12:55:48.410 2 INFO nova.compute.manager [req-eebad36e-c958-4127-8077-179e2c049f7e req-9ef1c0a6-67c2-4b3d-901b-f4275ba82ac8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Neutron deleted interface 5747f902-8c1e-48d7-b093-f485b127cbd8; detaching it from the instance and deleting it from the info cache
Oct 02 12:55:48 compute-0 nova_compute[257802]: 2025-10-02 12:55:48.410 2 DEBUG nova.network.neutron [req-eebad36e-c958-4127-8077-179e2c049f7e req-9ef1c0a6-67c2-4b3d-901b-f4275ba82ac8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:55:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:48.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:48 compute-0 nova_compute[257802]: 2025-10-02 12:55:48.941 2 INFO nova.compute.manager [-] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Took 3.42 seconds to deallocate network for instance.
Oct 02 12:55:49 compute-0 nova_compute[257802]: 2025-10-02 12:55:49.141 2 DEBUG nova.compute.manager [req-eebad36e-c958-4127-8077-179e2c049f7e req-9ef1c0a6-67c2-4b3d-901b-f4275ba82ac8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Detach interface failed, port_id=5747f902-8c1e-48d7-b093-f485b127cbd8, reason: Instance 4419a3ba-4449-4efc-aa2a-60535aeb8970 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:55:49 compute-0 nova_compute[257802]: 2025-10-02 12:55:49.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:49 compute-0 ceph-mon[73607]: pgmap v2905: 305 pgs: 305 active+clean; 290 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 282 KiB/s rd, 2.4 MiB/s wr, 74 op/s
Oct 02 12:55:49 compute-0 nova_compute[257802]: 2025-10-02 12:55:49.416 2 DEBUG oslo_concurrency.lockutils [None req-ee9a32a8-1ae6-4c2d-b53c-ea99a2844172 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:49 compute-0 nova_compute[257802]: 2025-10-02 12:55:49.417 2 DEBUG oslo_concurrency.lockutils [None req-ee9a32a8-1ae6-4c2d-b53c-ea99a2844172 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:49 compute-0 nova_compute[257802]: 2025-10-02 12:55:49.482 2 DEBUG oslo_concurrency.processutils [None req-ee9a32a8-1ae6-4c2d-b53c-ea99a2844172 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:49 compute-0 nova_compute[257802]: 2025-10-02 12:55:49.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:55:49 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2863786545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:49 compute-0 nova_compute[257802]: 2025-10-02 12:55:49.934 2 DEBUG oslo_concurrency.processutils [None req-ee9a32a8-1ae6-4c2d-b53c-ea99a2844172 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:49 compute-0 nova_compute[257802]: 2025-10-02 12:55:49.943 2 DEBUG nova.compute.provider_tree [None req-ee9a32a8-1ae6-4c2d-b53c-ea99a2844172 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:55:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:49.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:50 compute-0 nova_compute[257802]: 2025-10-02 12:55:50.128 2 DEBUG nova.scheduler.client.report [None req-ee9a32a8-1ae6-4c2d-b53c-ea99a2844172 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:55:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2906: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 570 KiB/s rd, 2.8 MiB/s wr, 127 op/s
Oct 02 12:55:50 compute-0 nova_compute[257802]: 2025-10-02 12:55:50.247 2 DEBUG oslo_concurrency.lockutils [None req-ee9a32a8-1ae6-4c2d-b53c-ea99a2844172 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:50 compute-0 nova_compute[257802]: 2025-10-02 12:55:50.296 2 INFO nova.scheduler.client.report [None req-ee9a32a8-1ae6-4c2d-b53c-ea99a2844172 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Deleted allocations for instance 4419a3ba-4449-4efc-aa2a-60535aeb8970
Oct 02 12:55:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2863786545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:50 compute-0 nova_compute[257802]: 2025-10-02 12:55:50.661 2 DEBUG oslo_concurrency.lockutils [None req-ee9a32a8-1ae6-4c2d-b53c-ea99a2844172 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "4419a3ba-4449-4efc-aa2a-60535aeb8970" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.954s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:50.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:51 compute-0 ceph-mon[73607]: pgmap v2906: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 570 KiB/s rd, 2.8 MiB/s wr, 127 op/s
Oct 02 12:55:51 compute-0 podman[377484]: 2025-10-02 12:55:51.936419657 +0000 UTC m=+0.063243324 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:55:51 compute-0 podman[377483]: 2025-10-02 12:55:51.937505284 +0000 UTC m=+0.067982331 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 12:55:51 compute-0 podman[377482]: 2025-10-02 12:55:51.945080221 +0000 UTC m=+0.075119287 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:55:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:55:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:51.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:55:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2907: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Oct 02 12:55:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:52.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:53 compute-0 ceph-mon[73607]: pgmap v2907: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Oct 02 12:55:53 compute-0 nova_compute[257802]: 2025-10-02 12:55:53.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:55:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:53.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:55:54 compute-0 nova_compute[257802]: 2025-10-02 12:55:54.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2908: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Oct 02 12:55:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:54 compute-0 nova_compute[257802]: 2025-10-02 12:55:54.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004336642704262051 of space, bias 1.0, pg target 1.3009928112786153 quantized to 32 (current 32)
Oct 02 12:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6463173433719523 quantized to 32 (current 32)
Oct 02 12:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 12:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 12:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027172174530057695 quantized to 32 (current 32)
Oct 02 12:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 12:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:55:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 12:55:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:54.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:55:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/289378347' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:55:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:55:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/289378347' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:55:55 compute-0 ceph-mon[73607]: pgmap v2908: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Oct 02 12:55:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/289378347' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:55:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/289378347' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:55:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:55.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2909: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 391 KiB/s rd, 2.2 MiB/s wr, 93 op/s
Oct 02 12:55:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:56.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:57 compute-0 ceph-mon[73607]: pgmap v2909: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 391 KiB/s rd, 2.2 MiB/s wr, 93 op/s
Oct 02 12:55:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:57.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:58 compute-0 sudo[377542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:55:58 compute-0 sudo[377542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:58 compute-0 sudo[377542]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2910: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 287 KiB/s rd, 415 KiB/s wr, 53 op/s
Oct 02 12:55:58 compute-0 sudo[377573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:55:58 compute-0 sudo[377573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:58 compute-0 sudo[377573]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:58 compute-0 podman[377566]: 2025-10-02 12:55:58.298711446 +0000 UTC m=+0.075457215 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 02 12:55:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:58.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:58 compute-0 nova_compute[257802]: 2025-10-02 12:55:58.941 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409743.9401882, 4419a3ba-4449-4efc-aa2a-60535aeb8970 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:55:58 compute-0 nova_compute[257802]: 2025-10-02 12:55:58.942 2 INFO nova.compute.manager [-] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] VM Stopped (Lifecycle Event)
Oct 02 12:55:59 compute-0 ceph-mon[73607]: pgmap v2910: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 287 KiB/s rd, 415 KiB/s wr, 53 op/s
Oct 02 12:55:59 compute-0 nova_compute[257802]: 2025-10-02 12:55:59.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:59 compute-0 nova_compute[257802]: 2025-10-02 12:55:59.394 2 DEBUG nova.compute.manager [None req-9b06e9a8-5784-4d0e-aefd-87d75c33b235 - - - - - -] [instance: 4419a3ba-4449-4efc-aa2a-60535aeb8970] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:55:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:59 compute-0 nova_compute[257802]: 2025-10-02 12:55:59.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:59 compute-0 nova_compute[257802]: 2025-10-02 12:55:59.755 2 DEBUG oslo_concurrency.lockutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Acquiring lock "fe856b5e-f11d-45ed-b560-5115a72670cd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:59 compute-0 nova_compute[257802]: 2025-10-02 12:55:59.755 2 DEBUG oslo_concurrency.lockutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Lock "fe856b5e-f11d-45ed-b560-5115a72670cd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:55:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:55:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:59.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:56:00 compute-0 nova_compute[257802]: 2025-10-02 12:56:00.140 2 DEBUG nova.compute.manager [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:56:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2911: 305 pgs: 305 active+clean; 229 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 303 KiB/s rd, 429 KiB/s wr, 76 op/s
Oct 02 12:56:00 compute-0 nova_compute[257802]: 2025-10-02 12:56:00.612 2 DEBUG oslo_concurrency.lockutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:00 compute-0 nova_compute[257802]: 2025-10-02 12:56:00.613 2 DEBUG oslo_concurrency.lockutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:00 compute-0 nova_compute[257802]: 2025-10-02 12:56:00.622 2 DEBUG nova.virt.hardware [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:56:00 compute-0 nova_compute[257802]: 2025-10-02 12:56:00.622 2 INFO nova.compute.claims [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:56:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:56:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:00.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:56:01 compute-0 nova_compute[257802]: 2025-10-02 12:56:01.161 2 DEBUG oslo_concurrency.processutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:56:01 compute-0 sudo[377622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:01 compute-0 sudo[377622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:01 compute-0 sudo[377622]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:01 compute-0 sudo[377647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:56:01 compute-0 sudo[377647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:01 compute-0 sudo[377647]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:01 compute-0 sudo[377691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:01 compute-0 sudo[377691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:01 compute-0 sudo[377691]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:01 compute-0 sudo[377716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:56:01 compute-0 ceph-mon[73607]: pgmap v2911: 305 pgs: 305 active+clean; 229 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 303 KiB/s rd, 429 KiB/s wr, 76 op/s
Oct 02 12:56:01 compute-0 sudo[377716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:56:01 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1996447255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:01 compute-0 nova_compute[257802]: 2025-10-02 12:56:01.612 2 DEBUG oslo_concurrency.processutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:56:01 compute-0 nova_compute[257802]: 2025-10-02 12:56:01.621 2 DEBUG nova.compute.provider_tree [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:56:01 compute-0 nova_compute[257802]: 2025-10-02 12:56:01.713 2 DEBUG nova.scheduler.client.report [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:56:01 compute-0 sudo[377716]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:01.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:56:02 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:56:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:56:02 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:56:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:56:02 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:56:02 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 03c2d9d9-5c60-4253-ae26-ce455400378d does not exist
Oct 02 12:56:02 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 6837c6ad-81e1-4a95-8ae4-96e4d741938a does not exist
Oct 02 12:56:02 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev bafa8898-5bc6-4631-886d-28ed13ac4088 does not exist
Oct 02 12:56:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:56:02 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:56:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:56:02 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:56:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:56:02 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:56:02 compute-0 sudo[377773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:02 compute-0 sudo[377773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:02 compute-0 sudo[377773]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:02 compute-0 sudo[377798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:56:02 compute-0 sudo[377798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:02 compute-0 sudo[377798]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2912: 305 pgs: 305 active+clean; 229 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 25 KiB/s wr, 22 op/s
Oct 02 12:56:02 compute-0 nova_compute[257802]: 2025-10-02 12:56:02.248 2 DEBUG oslo_concurrency.lockutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:02 compute-0 nova_compute[257802]: 2025-10-02 12:56:02.249 2 DEBUG nova.compute.manager [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:56:02 compute-0 sudo[377823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:02 compute-0 sudo[377823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:02 compute-0 sudo[377823]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:02 compute-0 sudo[377848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:56:02 compute-0 sudo[377848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1996447255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:56:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:56:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:56:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:56:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:56:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:56:02 compute-0 podman[377914]: 2025-10-02 12:56:02.755422155 +0000 UTC m=+0.047194370 container create 34fd0134a509f73f2589c651400f0da1fed9e27e03f93a12ba64e63426208b05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:56:02 compute-0 systemd[1]: Started libpod-conmon-34fd0134a509f73f2589c651400f0da1fed9e27e03f93a12ba64e63426208b05.scope.
Oct 02 12:56:02 compute-0 nova_compute[257802]: 2025-10-02 12:56:02.829 2 DEBUG nova.compute.manager [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:56:02 compute-0 nova_compute[257802]: 2025-10-02 12:56:02.831 2 DEBUG nova.network.neutron [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:56:02 compute-0 podman[377914]: 2025-10-02 12:56:02.739165046 +0000 UTC m=+0.030937281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:56:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:02.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:56:02 compute-0 podman[377914]: 2025-10-02 12:56:02.864499064 +0000 UTC m=+0.156271299 container init 34fd0134a509f73f2589c651400f0da1fed9e27e03f93a12ba64e63426208b05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:56:02 compute-0 podman[377914]: 2025-10-02 12:56:02.87288453 +0000 UTC m=+0.164656785 container start 34fd0134a509f73f2589c651400f0da1fed9e27e03f93a12ba64e63426208b05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:56:02 compute-0 podman[377914]: 2025-10-02 12:56:02.879555944 +0000 UTC m=+0.171328179 container attach 34fd0134a509f73f2589c651400f0da1fed9e27e03f93a12ba64e63426208b05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 12:56:02 compute-0 magical_rhodes[377930]: 167 167
Oct 02 12:56:02 compute-0 systemd[1]: libpod-34fd0134a509f73f2589c651400f0da1fed9e27e03f93a12ba64e63426208b05.scope: Deactivated successfully.
Oct 02 12:56:02 compute-0 conmon[377930]: conmon 34fd0134a509f73f2589 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-34fd0134a509f73f2589c651400f0da1fed9e27e03f93a12ba64e63426208b05.scope/container/memory.events
Oct 02 12:56:02 compute-0 podman[377914]: 2025-10-02 12:56:02.88224598 +0000 UTC m=+0.174018185 container died 34fd0134a509f73f2589c651400f0da1fed9e27e03f93a12ba64e63426208b05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:56:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5ee6b05ff51fc710f780816740e629d94c87a7f7cee031b39fc209be0835859-merged.mount: Deactivated successfully.
Oct 02 12:56:02 compute-0 podman[377914]: 2025-10-02 12:56:02.935307134 +0000 UTC m=+0.227079349 container remove 34fd0134a509f73f2589c651400f0da1fed9e27e03f93a12ba64e63426208b05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:56:02 compute-0 systemd[1]: libpod-conmon-34fd0134a509f73f2589c651400f0da1fed9e27e03f93a12ba64e63426208b05.scope: Deactivated successfully.
Oct 02 12:56:03 compute-0 podman[377954]: 2025-10-02 12:56:03.131593536 +0000 UTC m=+0.054157992 container create b3c652ea1485cc79df3fbe795030e68e86dd472dd9d7549e7260be6b23766b0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_archimedes, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 12:56:03 compute-0 nova_compute[257802]: 2025-10-02 12:56:03.177 2 DEBUG nova.policy [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '84e6a279aa124804af5819b25a773dc1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dec97fbf4fd141868e034a8652cf0c37', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:56:03 compute-0 systemd[1]: Started libpod-conmon-b3c652ea1485cc79df3fbe795030e68e86dd472dd9d7549e7260be6b23766b0b.scope.
Oct 02 12:56:03 compute-0 podman[377954]: 2025-10-02 12:56:03.106204472 +0000 UTC m=+0.028768988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:56:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:56:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dbec090281aa2dd9f50e5b8d1e7feacb5fb42e033b8ed9df0e61ade56b2c01e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:56:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dbec090281aa2dd9f50e5b8d1e7feacb5fb42e033b8ed9df0e61ade56b2c01e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:56:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dbec090281aa2dd9f50e5b8d1e7feacb5fb42e033b8ed9df0e61ade56b2c01e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:56:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dbec090281aa2dd9f50e5b8d1e7feacb5fb42e033b8ed9df0e61ade56b2c01e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:56:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dbec090281aa2dd9f50e5b8d1e7feacb5fb42e033b8ed9df0e61ade56b2c01e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:56:03 compute-0 podman[377954]: 2025-10-02 12:56:03.238313297 +0000 UTC m=+0.160877783 container init b3c652ea1485cc79df3fbe795030e68e86dd472dd9d7549e7260be6b23766b0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 12:56:03 compute-0 podman[377954]: 2025-10-02 12:56:03.251390819 +0000 UTC m=+0.173955275 container start b3c652ea1485cc79df3fbe795030e68e86dd472dd9d7549e7260be6b23766b0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:56:03 compute-0 podman[377954]: 2025-10-02 12:56:03.257756445 +0000 UTC m=+0.180320921 container attach b3c652ea1485cc79df3fbe795030e68e86dd472dd9d7549e7260be6b23766b0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_archimedes, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:56:03 compute-0 nova_compute[257802]: 2025-10-02 12:56:03.335 2 INFO nova.virt.libvirt.driver [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:56:03 compute-0 ceph-mon[73607]: pgmap v2912: 305 pgs: 305 active+clean; 229 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 25 KiB/s wr, 22 op/s
Oct 02 12:56:03 compute-0 nova_compute[257802]: 2025-10-02 12:56:03.616 2 DEBUG nova.compute.manager [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:56:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:56:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:03.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:56:04 compute-0 nova_compute[257802]: 2025-10-02 12:56:04.019 2 DEBUG nova.compute.manager [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:56:04 compute-0 nova_compute[257802]: 2025-10-02 12:56:04.020 2 DEBUG nova.virt.libvirt.driver [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:56:04 compute-0 nova_compute[257802]: 2025-10-02 12:56:04.021 2 INFO nova.virt.libvirt.driver [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Creating image(s)
Oct 02 12:56:04 compute-0 nova_compute[257802]: 2025-10-02 12:56:04.063 2 DEBUG nova.storage.rbd_utils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] rbd image fe856b5e-f11d-45ed-b560-5115a72670cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:56:04 compute-0 nova_compute[257802]: 2025-10-02 12:56:04.094 2 DEBUG nova.storage.rbd_utils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] rbd image fe856b5e-f11d-45ed-b560-5115a72670cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:56:04 compute-0 nova_compute[257802]: 2025-10-02 12:56:04.124 2 DEBUG nova.storage.rbd_utils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] rbd image fe856b5e-f11d-45ed-b560-5115a72670cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:56:04 compute-0 nice_archimedes[377971]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:56:04 compute-0 nice_archimedes[377971]: --> relative data size: 1.0
Oct 02 12:56:04 compute-0 nice_archimedes[377971]: --> All data devices are unavailable
Oct 02 12:56:04 compute-0 nova_compute[257802]: 2025-10-02 12:56:04.132 2 DEBUG oslo_concurrency.processutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:56:04 compute-0 systemd[1]: libpod-b3c652ea1485cc79df3fbe795030e68e86dd472dd9d7549e7260be6b23766b0b.scope: Deactivated successfully.
Oct 02 12:56:04 compute-0 podman[377954]: 2025-10-02 12:56:04.16289666 +0000 UTC m=+1.085461156 container died b3c652ea1485cc79df3fbe795030e68e86dd472dd9d7549e7260be6b23766b0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 12:56:04 compute-0 nova_compute[257802]: 2025-10-02 12:56:04.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:04 compute-0 nova_compute[257802]: 2025-10-02 12:56:04.210 2 DEBUG oslo_concurrency.processutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:56:04 compute-0 nova_compute[257802]: 2025-10-02 12:56:04.211 2 DEBUG oslo_concurrency.lockutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:04 compute-0 nova_compute[257802]: 2025-10-02 12:56:04.212 2 DEBUG oslo_concurrency.lockutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:04 compute-0 nova_compute[257802]: 2025-10-02 12:56:04.212 2 DEBUG oslo_concurrency.lockutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2913: 305 pgs: 305 active+clean; 229 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 25 KiB/s wr, 22 op/s
Oct 02 12:56:04 compute-0 nova_compute[257802]: 2025-10-02 12:56:04.252 2 DEBUG nova.storage.rbd_utils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] rbd image fe856b5e-f11d-45ed-b560-5115a72670cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:56:04 compute-0 nova_compute[257802]: 2025-10-02 12:56:04.262 2 DEBUG oslo_concurrency.processutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 fe856b5e-f11d-45ed-b560-5115a72670cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:56:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-7dbec090281aa2dd9f50e5b8d1e7feacb5fb42e033b8ed9df0e61ade56b2c01e-merged.mount: Deactivated successfully.
Oct 02 12:56:04 compute-0 podman[377954]: 2025-10-02 12:56:04.426968766 +0000 UTC m=+1.349533222 container remove b3c652ea1485cc79df3fbe795030e68e86dd472dd9d7549e7260be6b23766b0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_archimedes, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 12:56:04 compute-0 systemd[1]: libpod-conmon-b3c652ea1485cc79df3fbe795030e68e86dd472dd9d7549e7260be6b23766b0b.scope: Deactivated successfully.
Oct 02 12:56:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:04 compute-0 sudo[377848]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:04 compute-0 sudo[378088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:04 compute-0 sudo[378088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:04 compute-0 sudo[378088]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:04 compute-0 sudo[378113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:56:04 compute-0 sudo[378113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:04 compute-0 nova_compute[257802]: 2025-10-02 12:56:04.630 2 DEBUG nova.network.neutron [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Successfully created port: 45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:56:04 compute-0 sudo[378113]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:04 compute-0 nova_compute[257802]: 2025-10-02 12:56:04.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:04 compute-0 sudo[378141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:04 compute-0 sudo[378141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:04 compute-0 sudo[378141]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:04 compute-0 sudo[378167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:56:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/495128654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:04 compute-0 sudo[378167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:56:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:04.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:56:05 compute-0 podman[378233]: 2025-10-02 12:56:05.167321143 +0000 UTC m=+0.053221958 container create 7e399dfbba5489d7403f4daf0d6ba1f0978ea82a37895c0a5fbd1c9ab1bff2c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_fermi, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 12:56:05 compute-0 systemd[1]: Started libpod-conmon-7e399dfbba5489d7403f4daf0d6ba1f0978ea82a37895c0a5fbd1c9ab1bff2c9.scope.
Oct 02 12:56:05 compute-0 podman[378233]: 2025-10-02 12:56:05.146555463 +0000 UTC m=+0.032456268 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:56:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:56:05 compute-0 podman[378233]: 2025-10-02 12:56:05.283455526 +0000 UTC m=+0.169356341 container init 7e399dfbba5489d7403f4daf0d6ba1f0978ea82a37895c0a5fbd1c9ab1bff2c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_fermi, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Oct 02 12:56:05 compute-0 podman[378233]: 2025-10-02 12:56:05.291794681 +0000 UTC m=+0.177695466 container start 7e399dfbba5489d7403f4daf0d6ba1f0978ea82a37895c0a5fbd1c9ab1bff2c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_fermi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 12:56:05 compute-0 podman[378233]: 2025-10-02 12:56:05.295522493 +0000 UTC m=+0.181423278 container attach 7e399dfbba5489d7403f4daf0d6ba1f0978ea82a37895c0a5fbd1c9ab1bff2c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_fermi, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:56:05 compute-0 competent_fermi[378250]: 167 167
Oct 02 12:56:05 compute-0 systemd[1]: libpod-7e399dfbba5489d7403f4daf0d6ba1f0978ea82a37895c0a5fbd1c9ab1bff2c9.scope: Deactivated successfully.
Oct 02 12:56:05 compute-0 podman[378233]: 2025-10-02 12:56:05.29951206 +0000 UTC m=+0.185412855 container died 7e399dfbba5489d7403f4daf0d6ba1f0978ea82a37895c0a5fbd1c9ab1bff2c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 12:56:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-5675b5d9b10ac4b270d704e4eb9df9052698c22a39cf4aa591023af195bf2fb1-merged.mount: Deactivated successfully.
Oct 02 12:56:05 compute-0 podman[378233]: 2025-10-02 12:56:05.354987123 +0000 UTC m=+0.240887918 container remove 7e399dfbba5489d7403f4daf0d6ba1f0978ea82a37895c0a5fbd1c9ab1bff2c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:56:05 compute-0 systemd[1]: libpod-conmon-7e399dfbba5489d7403f4daf0d6ba1f0978ea82a37895c0a5fbd1c9ab1bff2c9.scope: Deactivated successfully.
Oct 02 12:56:05 compute-0 nova_compute[257802]: 2025-10-02 12:56:05.421 2 DEBUG oslo_concurrency.processutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 fe856b5e-f11d-45ed-b560-5115a72670cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:56:05 compute-0 nova_compute[257802]: 2025-10-02 12:56:05.524 2 DEBUG nova.storage.rbd_utils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] resizing rbd image fe856b5e-f11d-45ed-b560-5115a72670cd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:56:05 compute-0 podman[378293]: 2025-10-02 12:56:05.532599366 +0000 UTC m=+0.044829332 container create ba989fff509079e24e4409ab4316e84d8461b577d3335cc21c42dbe24a7ce727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 12:56:05 compute-0 systemd[1]: Started libpod-conmon-ba989fff509079e24e4409ab4316e84d8461b577d3335cc21c42dbe24a7ce727.scope.
Oct 02 12:56:05 compute-0 podman[378293]: 2025-10-02 12:56:05.51363922 +0000 UTC m=+0.025869196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:56:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:56:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca35696235ebb87ac63440457a6fb4b0c5de1c728931d3511761003b17225b36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:56:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca35696235ebb87ac63440457a6fb4b0c5de1c728931d3511761003b17225b36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:56:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca35696235ebb87ac63440457a6fb4b0c5de1c728931d3511761003b17225b36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:56:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca35696235ebb87ac63440457a6fb4b0c5de1c728931d3511761003b17225b36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:56:05 compute-0 podman[378293]: 2025-10-02 12:56:05.634790456 +0000 UTC m=+0.147020432 container init ba989fff509079e24e4409ab4316e84d8461b577d3335cc21c42dbe24a7ce727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_swanson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:56:05 compute-0 podman[378293]: 2025-10-02 12:56:05.646736409 +0000 UTC m=+0.158966385 container start ba989fff509079e24e4409ab4316e84d8461b577d3335cc21c42dbe24a7ce727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 12:56:05 compute-0 podman[378293]: 2025-10-02 12:56:05.65121479 +0000 UTC m=+0.163444766 container attach ba989fff509079e24e4409ab4316e84d8461b577d3335cc21c42dbe24a7ce727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_swanson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:56:05 compute-0 nova_compute[257802]: 2025-10-02 12:56:05.672 2 DEBUG nova.objects.instance [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Lazy-loading 'migration_context' on Instance uuid fe856b5e-f11d-45ed-b560-5115a72670cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:56:05 compute-0 nova_compute[257802]: 2025-10-02 12:56:05.737 2 DEBUG nova.virt.libvirt.driver [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:56:05 compute-0 nova_compute[257802]: 2025-10-02 12:56:05.738 2 DEBUG nova.virt.libvirt.driver [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Ensure instance console log exists: /var/lib/nova/instances/fe856b5e-f11d-45ed-b560-5115a72670cd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:56:05 compute-0 nova_compute[257802]: 2025-10-02 12:56:05.738 2 DEBUG oslo_concurrency.lockutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:05 compute-0 nova_compute[257802]: 2025-10-02 12:56:05.739 2 DEBUG oslo_concurrency.lockutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:05 compute-0 nova_compute[257802]: 2025-10-02 12:56:05.739 2 DEBUG oslo_concurrency.lockutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:05 compute-0 ceph-mon[73607]: pgmap v2913: 305 pgs: 305 active+clean; 229 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 25 KiB/s wr, 22 op/s
Oct 02 12:56:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:56:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:05.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:56:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2914: 305 pgs: 305 active+clean; 202 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 119 KiB/s wr, 34 op/s
Oct 02 12:56:06 compute-0 angry_swanson[378346]: {
Oct 02 12:56:06 compute-0 angry_swanson[378346]:     "1": [
Oct 02 12:56:06 compute-0 angry_swanson[378346]:         {
Oct 02 12:56:06 compute-0 angry_swanson[378346]:             "devices": [
Oct 02 12:56:06 compute-0 angry_swanson[378346]:                 "/dev/loop3"
Oct 02 12:56:06 compute-0 angry_swanson[378346]:             ],
Oct 02 12:56:06 compute-0 angry_swanson[378346]:             "lv_name": "ceph_lv0",
Oct 02 12:56:06 compute-0 angry_swanson[378346]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:56:06 compute-0 angry_swanson[378346]:             "lv_size": "7511998464",
Oct 02 12:56:06 compute-0 angry_swanson[378346]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:56:06 compute-0 angry_swanson[378346]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:56:06 compute-0 angry_swanson[378346]:             "name": "ceph_lv0",
Oct 02 12:56:06 compute-0 angry_swanson[378346]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:56:06 compute-0 angry_swanson[378346]:             "tags": {
Oct 02 12:56:06 compute-0 angry_swanson[378346]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:56:06 compute-0 angry_swanson[378346]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:56:06 compute-0 angry_swanson[378346]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:56:06 compute-0 angry_swanson[378346]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:56:06 compute-0 angry_swanson[378346]:                 "ceph.cluster_name": "ceph",
Oct 02 12:56:06 compute-0 angry_swanson[378346]:                 "ceph.crush_device_class": "",
Oct 02 12:56:06 compute-0 angry_swanson[378346]:                 "ceph.encrypted": "0",
Oct 02 12:56:06 compute-0 angry_swanson[378346]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:56:06 compute-0 angry_swanson[378346]:                 "ceph.osd_id": "1",
Oct 02 12:56:06 compute-0 angry_swanson[378346]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:56:06 compute-0 angry_swanson[378346]:                 "ceph.type": "block",
Oct 02 12:56:06 compute-0 angry_swanson[378346]:                 "ceph.vdo": "0"
Oct 02 12:56:06 compute-0 angry_swanson[378346]:             },
Oct 02 12:56:06 compute-0 angry_swanson[378346]:             "type": "block",
Oct 02 12:56:06 compute-0 angry_swanson[378346]:             "vg_name": "ceph_vg0"
Oct 02 12:56:06 compute-0 angry_swanson[378346]:         }
Oct 02 12:56:06 compute-0 angry_swanson[378346]:     ]
Oct 02 12:56:06 compute-0 angry_swanson[378346]: }
Oct 02 12:56:06 compute-0 systemd[1]: libpod-ba989fff509079e24e4409ab4316e84d8461b577d3335cc21c42dbe24a7ce727.scope: Deactivated successfully.
Oct 02 12:56:06 compute-0 podman[378293]: 2025-10-02 12:56:06.431520378 +0000 UTC m=+0.943750334 container died ba989fff509079e24e4409ab4316e84d8461b577d3335cc21c42dbe24a7ce727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_swanson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 12:56:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca35696235ebb87ac63440457a6fb4b0c5de1c728931d3511761003b17225b36-merged.mount: Deactivated successfully.
Oct 02 12:56:06 compute-0 podman[378293]: 2025-10-02 12:56:06.494397673 +0000 UTC m=+1.006627629 container remove ba989fff509079e24e4409ab4316e84d8461b577d3335cc21c42dbe24a7ce727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_swanson, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 12:56:06 compute-0 systemd[1]: libpod-conmon-ba989fff509079e24e4409ab4316e84d8461b577d3335cc21c42dbe24a7ce727.scope: Deactivated successfully.
Oct 02 12:56:06 compute-0 sudo[378167]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:06 compute-0 sudo[378386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:06 compute-0 sudo[378386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:06 compute-0 sudo[378386]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:06 compute-0 sudo[378411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:56:06 compute-0 sudo[378411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:06 compute-0 sudo[378411]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:06 compute-0 sudo[378437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:06 compute-0 sudo[378437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:06 compute-0 sudo[378437]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:06 compute-0 sudo[378462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:56:06 compute-0 sudo[378462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:06 compute-0 ceph-mon[73607]: pgmap v2914: 305 pgs: 305 active+clean; 202 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 119 KiB/s wr, 34 op/s
Oct 02 12:56:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:06.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:07 compute-0 podman[378529]: 2025-10-02 12:56:07.078399188 +0000 UTC m=+0.035193805 container create 98bb66705d7fcda2ff9b369e8352ac05a5e7ed0534e47dcf9ab439fee1154dde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dirac, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:56:07 compute-0 systemd[1]: Started libpod-conmon-98bb66705d7fcda2ff9b369e8352ac05a5e7ed0534e47dcf9ab439fee1154dde.scope.
Oct 02 12:56:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:56:07 compute-0 podman[378529]: 2025-10-02 12:56:07.063909753 +0000 UTC m=+0.020704380 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:56:07 compute-0 podman[378529]: 2025-10-02 12:56:07.161772296 +0000 UTC m=+0.118566933 container init 98bb66705d7fcda2ff9b369e8352ac05a5e7ed0534e47dcf9ab439fee1154dde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dirac, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:56:07 compute-0 podman[378529]: 2025-10-02 12:56:07.167751433 +0000 UTC m=+0.124546050 container start 98bb66705d7fcda2ff9b369e8352ac05a5e7ed0534e47dcf9ab439fee1154dde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dirac, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 12:56:07 compute-0 podman[378529]: 2025-10-02 12:56:07.171331671 +0000 UTC m=+0.128126308 container attach 98bb66705d7fcda2ff9b369e8352ac05a5e7ed0534e47dcf9ab439fee1154dde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dirac, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:56:07 compute-0 festive_dirac[378545]: 167 167
Oct 02 12:56:07 compute-0 systemd[1]: libpod-98bb66705d7fcda2ff9b369e8352ac05a5e7ed0534e47dcf9ab439fee1154dde.scope: Deactivated successfully.
Oct 02 12:56:07 compute-0 podman[378550]: 2025-10-02 12:56:07.20752423 +0000 UTC m=+0.022867102 container died 98bb66705d7fcda2ff9b369e8352ac05a5e7ed0534e47dcf9ab439fee1154dde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:56:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-1da1198e1a325c39706f371ee2ff9a23acaef1be70fe8a6d7a05dc39432595bc-merged.mount: Deactivated successfully.
Oct 02 12:56:07 compute-0 podman[378550]: 2025-10-02 12:56:07.241221328 +0000 UTC m=+0.056564200 container remove 98bb66705d7fcda2ff9b369e8352ac05a5e7ed0534e47dcf9ab439fee1154dde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 12:56:07 compute-0 systemd[1]: libpod-conmon-98bb66705d7fcda2ff9b369e8352ac05a5e7ed0534e47dcf9ab439fee1154dde.scope: Deactivated successfully.
Oct 02 12:56:07 compute-0 podman[378572]: 2025-10-02 12:56:07.405331139 +0000 UTC m=+0.041235394 container create d65aadb73fc7e485f954e30bb92ab91c8fe8c101ce85e238a0e9189157ad8ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pike, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:56:07 compute-0 systemd[1]: Started libpod-conmon-d65aadb73fc7e485f954e30bb92ab91c8fe8c101ce85e238a0e9189157ad8ab4.scope.
Oct 02 12:56:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:56:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb92db946392edec917a704c0c05b6eeda04f7f010edeb3570b535d1e893623d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:56:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb92db946392edec917a704c0c05b6eeda04f7f010edeb3570b535d1e893623d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:56:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb92db946392edec917a704c0c05b6eeda04f7f010edeb3570b535d1e893623d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:56:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb92db946392edec917a704c0c05b6eeda04f7f010edeb3570b535d1e893623d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:56:07 compute-0 podman[378572]: 2025-10-02 12:56:07.383859322 +0000 UTC m=+0.019763607 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:56:07 compute-0 podman[378572]: 2025-10-02 12:56:07.4936669 +0000 UTC m=+0.129571175 container init d65aadb73fc7e485f954e30bb92ab91c8fe8c101ce85e238a0e9189157ad8ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pike, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:56:07 compute-0 podman[378572]: 2025-10-02 12:56:07.500699113 +0000 UTC m=+0.136603368 container start d65aadb73fc7e485f954e30bb92ab91c8fe8c101ce85e238a0e9189157ad8ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pike, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 12:56:07 compute-0 podman[378572]: 2025-10-02 12:56:07.504332891 +0000 UTC m=+0.140237176 container attach d65aadb73fc7e485f954e30bb92ab91c8fe8c101ce85e238a0e9189157ad8ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:56:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:07.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2915: 305 pgs: 305 active+clean; 202 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 108 KiB/s wr, 33 op/s
Oct 02 12:56:08 compute-0 flamboyant_pike[378588]: {
Oct 02 12:56:08 compute-0 flamboyant_pike[378588]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:56:08 compute-0 flamboyant_pike[378588]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:56:08 compute-0 flamboyant_pike[378588]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:56:08 compute-0 flamboyant_pike[378588]:         "osd_id": 1,
Oct 02 12:56:08 compute-0 flamboyant_pike[378588]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:56:08 compute-0 flamboyant_pike[378588]:         "type": "bluestore"
Oct 02 12:56:08 compute-0 flamboyant_pike[378588]:     }
Oct 02 12:56:08 compute-0 flamboyant_pike[378588]: }
Oct 02 12:56:08 compute-0 systemd[1]: libpod-d65aadb73fc7e485f954e30bb92ab91c8fe8c101ce85e238a0e9189157ad8ab4.scope: Deactivated successfully.
Oct 02 12:56:08 compute-0 podman[378572]: 2025-10-02 12:56:08.345912805 +0000 UTC m=+0.981817060 container died d65aadb73fc7e485f954e30bb92ab91c8fe8c101ce85e238a0e9189157ad8ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:56:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb92db946392edec917a704c0c05b6eeda04f7f010edeb3570b535d1e893623d-merged.mount: Deactivated successfully.
Oct 02 12:56:08 compute-0 podman[378572]: 2025-10-02 12:56:08.413935437 +0000 UTC m=+1.049839692 container remove d65aadb73fc7e485f954e30bb92ab91c8fe8c101ce85e238a0e9189157ad8ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:56:08 compute-0 systemd[1]: libpod-conmon-d65aadb73fc7e485f954e30bb92ab91c8fe8c101ce85e238a0e9189157ad8ab4.scope: Deactivated successfully.
Oct 02 12:56:08 compute-0 sudo[378462]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:56:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:56:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:56:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:56:08 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev d751ddff-faa7-4e77-8b2a-a7145c2d0a12 does not exist
Oct 02 12:56:08 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 2839dfb8-68a2-437c-a8a7-9c4332b22717 does not exist
Oct 02 12:56:08 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 7933eea4-ffcb-4051-95c5-e11a0659d951 does not exist
Oct 02 12:56:08 compute-0 sudo[378621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:08 compute-0 sudo[378621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:08 compute-0 sudo[378621]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:08 compute-0 sudo[378646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:56:08 compute-0 sudo[378646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:08 compute-0 sudo[378646]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:08.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:09 compute-0 nova_compute[257802]: 2025-10-02 12:56:09.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:09 compute-0 ceph-mon[73607]: pgmap v2915: 305 pgs: 305 active+clean; 202 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 108 KiB/s wr, 33 op/s
Oct 02 12:56:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1707496210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:09 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:56:09 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:56:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:09 compute-0 nova_compute[257802]: 2025-10-02 12:56:09.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:09.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2916: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Oct 02 12:56:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/596234105' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:10 compute-0 nova_compute[257802]: 2025-10-02 12:56:10.487 2 DEBUG nova.network.neutron [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Successfully updated port: 45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:56:10 compute-0 nova_compute[257802]: 2025-10-02 12:56:10.650 2 DEBUG nova.compute.manager [req-ab93e363-5216-4dd8-93fd-72ef5adab452 req-d7051085-1156-41e6-8ee9-d049310dd83d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Received event network-changed-45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:56:10 compute-0 nova_compute[257802]: 2025-10-02 12:56:10.650 2 DEBUG nova.compute.manager [req-ab93e363-5216-4dd8-93fd-72ef5adab452 req-d7051085-1156-41e6-8ee9-d049310dd83d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Refreshing instance network info cache due to event network-changed-45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:56:10 compute-0 nova_compute[257802]: 2025-10-02 12:56:10.650 2 DEBUG oslo_concurrency.lockutils [req-ab93e363-5216-4dd8-93fd-72ef5adab452 req-d7051085-1156-41e6-8ee9-d049310dd83d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-fe856b5e-f11d-45ed-b560-5115a72670cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:56:10 compute-0 nova_compute[257802]: 2025-10-02 12:56:10.651 2 DEBUG oslo_concurrency.lockutils [req-ab93e363-5216-4dd8-93fd-72ef5adab452 req-d7051085-1156-41e6-8ee9-d049310dd83d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-fe856b5e-f11d-45ed-b560-5115a72670cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:56:10 compute-0 nova_compute[257802]: 2025-10-02 12:56:10.651 2 DEBUG nova.network.neutron [req-ab93e363-5216-4dd8-93fd-72ef5adab452 req-d7051085-1156-41e6-8ee9-d049310dd83d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Refreshing network info cache for port 45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:56:10 compute-0 nova_compute[257802]: 2025-10-02 12:56:10.654 2 DEBUG oslo_concurrency.lockutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Acquiring lock "refresh_cache-fe856b5e-f11d-45ed-b560-5115a72670cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:56:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:10.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:11 compute-0 ceph-mon[73607]: pgmap v2916: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Oct 02 12:56:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:11.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:12 compute-0 nova_compute[257802]: 2025-10-02 12:56:12.129 2 DEBUG nova.network.neutron [req-ab93e363-5216-4dd8-93fd-72ef5adab452 req-d7051085-1156-41e6-8ee9-d049310dd83d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:56:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2917: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 02 12:56:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:56:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:56:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:56:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:56:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:56:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:56:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:12.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:13 compute-0 ceph-mon[73607]: pgmap v2917: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 02 12:56:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:14.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:14 compute-0 nova_compute[257802]: 2025-10-02 12:56:14.119 2 DEBUG nova.network.neutron [req-ab93e363-5216-4dd8-93fd-72ef5adab452 req-d7051085-1156-41e6-8ee9-d049310dd83d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:56:14 compute-0 nova_compute[257802]: 2025-10-02 12:56:14.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2918: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 02 12:56:14 compute-0 nova_compute[257802]: 2025-10-02 12:56:14.304 2 DEBUG oslo_concurrency.lockutils [req-ab93e363-5216-4dd8-93fd-72ef5adab452 req-d7051085-1156-41e6-8ee9-d049310dd83d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-fe856b5e-f11d-45ed-b560-5115a72670cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:56:14 compute-0 nova_compute[257802]: 2025-10-02 12:56:14.305 2 DEBUG oslo_concurrency.lockutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Acquired lock "refresh_cache-fe856b5e-f11d-45ed-b560-5115a72670cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:56:14 compute-0 nova_compute[257802]: 2025-10-02 12:56:14.305 2 DEBUG nova.network.neutron [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:56:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:14 compute-0 nova_compute[257802]: 2025-10-02 12:56:14.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:56:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:14.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:56:15 compute-0 nova_compute[257802]: 2025-10-02 12:56:15.139 2 DEBUG nova.network.neutron [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:56:15 compute-0 ceph-mon[73607]: pgmap v2918: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 02 12:56:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:16.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2919: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 02 12:56:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:56:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:16.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:56:17 compute-0 nova_compute[257802]: 2025-10-02 12:56:17.087 2 DEBUG nova.network.neutron [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Updating instance_info_cache with network_info: [{"id": "45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b", "address": "fa:16:3e:27:5c:ee", "network": {"id": "9f1ad1bf-3814-48be-b7c6-71850905c341", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1196426130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dec97fbf4fd141868e034a8652cf0c37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45adb0d1-b9", "ovs_interfaceid": "45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:56:17 compute-0 nova_compute[257802]: 2025-10-02 12:56:17.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:17 compute-0 nova_compute[257802]: 2025-10-02 12:56:17.097 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:56:17 compute-0 nova_compute[257802]: 2025-10-02 12:56:17.123 2 DEBUG oslo_concurrency.lockutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Releasing lock "refresh_cache-fe856b5e-f11d-45ed-b560-5115a72670cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:56:17 compute-0 nova_compute[257802]: 2025-10-02 12:56:17.124 2 DEBUG nova.compute.manager [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Instance network_info: |[{"id": "45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b", "address": "fa:16:3e:27:5c:ee", "network": {"id": "9f1ad1bf-3814-48be-b7c6-71850905c341", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1196426130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dec97fbf4fd141868e034a8652cf0c37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45adb0d1-b9", "ovs_interfaceid": "45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:56:17 compute-0 nova_compute[257802]: 2025-10-02 12:56:17.126 2 DEBUG nova.virt.libvirt.driver [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Start _get_guest_xml network_info=[{"id": "45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b", "address": "fa:16:3e:27:5c:ee", "network": {"id": "9f1ad1bf-3814-48be-b7c6-71850905c341", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1196426130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dec97fbf4fd141868e034a8652cf0c37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45adb0d1-b9", "ovs_interfaceid": "45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:56:17 compute-0 nova_compute[257802]: 2025-10-02 12:56:17.131 2 WARNING nova.virt.libvirt.driver [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:56:17 compute-0 nova_compute[257802]: 2025-10-02 12:56:17.138 2 DEBUG nova.virt.libvirt.host [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:56:17 compute-0 nova_compute[257802]: 2025-10-02 12:56:17.139 2 DEBUG nova.virt.libvirt.host [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:56:17 compute-0 nova_compute[257802]: 2025-10-02 12:56:17.145 2 DEBUG nova.virt.libvirt.host [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:56:17 compute-0 nova_compute[257802]: 2025-10-02 12:56:17.145 2 DEBUG nova.virt.libvirt.host [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:56:17 compute-0 nova_compute[257802]: 2025-10-02 12:56:17.146 2 DEBUG nova.virt.libvirt.driver [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:56:17 compute-0 nova_compute[257802]: 2025-10-02 12:56:17.147 2 DEBUG nova.virt.hardware [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:56:17 compute-0 nova_compute[257802]: 2025-10-02 12:56:17.147 2 DEBUG nova.virt.hardware [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:56:17 compute-0 nova_compute[257802]: 2025-10-02 12:56:17.147 2 DEBUG nova.virt.hardware [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:56:17 compute-0 nova_compute[257802]: 2025-10-02 12:56:17.147 2 DEBUG nova.virt.hardware [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:56:17 compute-0 nova_compute[257802]: 2025-10-02 12:56:17.148 2 DEBUG nova.virt.hardware [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:56:17 compute-0 nova_compute[257802]: 2025-10-02 12:56:17.148 2 DEBUG nova.virt.hardware [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:56:17 compute-0 nova_compute[257802]: 2025-10-02 12:56:17.148 2 DEBUG nova.virt.hardware [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:56:17 compute-0 nova_compute[257802]: 2025-10-02 12:56:17.148 2 DEBUG nova.virt.hardware [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:56:17 compute-0 nova_compute[257802]: 2025-10-02 12:56:17.148 2 DEBUG nova.virt.hardware [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:56:17 compute-0 nova_compute[257802]: 2025-10-02 12:56:17.149 2 DEBUG nova.virt.hardware [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:56:17 compute-0 nova_compute[257802]: 2025-10-02 12:56:17.149 2 DEBUG nova.virt.hardware [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:56:17 compute-0 nova_compute[257802]: 2025-10-02 12:56:17.152 2 DEBUG oslo_concurrency.processutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:56:17 compute-0 ceph-mon[73607]: pgmap v2919: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 02 12:56:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:56:17 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/630812361' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:56:17 compute-0 nova_compute[257802]: 2025-10-02 12:56:17.612 2 DEBUG oslo_concurrency.processutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:56:17 compute-0 nova_compute[257802]: 2025-10-02 12:56:17.640 2 DEBUG nova.storage.rbd_utils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] rbd image fe856b5e-f11d-45ed-b560-5115a72670cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:56:17 compute-0 nova_compute[257802]: 2025-10-02 12:56:17.644 2 DEBUG oslo_concurrency.processutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:56:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:18.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:56:18 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/138702492' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:56:18 compute-0 nova_compute[257802]: 2025-10-02 12:56:18.103 2 DEBUG oslo_concurrency.processutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:56:18 compute-0 nova_compute[257802]: 2025-10-02 12:56:18.105 2 DEBUG nova.virt.libvirt.vif [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:55:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-247156381',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-247156381',id=187,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dec97fbf4fd141868e034a8652cf0c37',ramdisk_id='',reservation_id='r-xr45ogql',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1795188409',owner_user_name='tempest-ServerTagsTestJSON-1795188409-project-member'},tags=TagList,task_state='spawning',
terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:56:03Z,user_data=None,user_id='84e6a279aa124804af5819b25a773dc1',uuid=fe856b5e-f11d-45ed-b560-5115a72670cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b", "address": "fa:16:3e:27:5c:ee", "network": {"id": "9f1ad1bf-3814-48be-b7c6-71850905c341", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1196426130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dec97fbf4fd141868e034a8652cf0c37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45adb0d1-b9", "ovs_interfaceid": "45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:56:18 compute-0 nova_compute[257802]: 2025-10-02 12:56:18.105 2 DEBUG nova.network.os_vif_util [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Converting VIF {"id": "45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b", "address": "fa:16:3e:27:5c:ee", "network": {"id": "9f1ad1bf-3814-48be-b7c6-71850905c341", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1196426130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dec97fbf4fd141868e034a8652cf0c37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45adb0d1-b9", "ovs_interfaceid": "45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:56:18 compute-0 nova_compute[257802]: 2025-10-02 12:56:18.106 2 DEBUG nova.network.os_vif_util [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:5c:ee,bridge_name='br-int',has_traffic_filtering=True,id=45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b,network=Network(9f1ad1bf-3814-48be-b7c6-71850905c341),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45adb0d1-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:56:18 compute-0 nova_compute[257802]: 2025-10-02 12:56:18.107 2 DEBUG nova.objects.instance [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Lazy-loading 'pci_devices' on Instance uuid fe856b5e-f11d-45ed-b560-5115a72670cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:56:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2920: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.7 MiB/s wr, 22 op/s
Oct 02 12:56:18 compute-0 nova_compute[257802]: 2025-10-02 12:56:18.261 2 DEBUG nova.virt.libvirt.driver [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:56:18 compute-0 nova_compute[257802]:   <uuid>fe856b5e-f11d-45ed-b560-5115a72670cd</uuid>
Oct 02 12:56:18 compute-0 nova_compute[257802]:   <name>instance-000000bb</name>
Oct 02 12:56:18 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:56:18 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:56:18 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:56:18 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:       <nova:name>tempest-ServerTagsTestJSON-server-247156381</nova:name>
Oct 02 12:56:18 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:56:17</nova:creationTime>
Oct 02 12:56:18 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:56:18 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:56:18 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:56:18 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:56:18 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:56:18 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:56:18 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:56:18 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:56:18 compute-0 nova_compute[257802]:         <nova:user uuid="84e6a279aa124804af5819b25a773dc1">tempest-ServerTagsTestJSON-1795188409-project-member</nova:user>
Oct 02 12:56:18 compute-0 nova_compute[257802]:         <nova:project uuid="dec97fbf4fd141868e034a8652cf0c37">tempest-ServerTagsTestJSON-1795188409</nova:project>
Oct 02 12:56:18 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:56:18 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:56:18 compute-0 nova_compute[257802]:         <nova:port uuid="45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b">
Oct 02 12:56:18 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:56:18 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:56:18 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:56:18 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <system>
Oct 02 12:56:18 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:56:18 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:56:18 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:56:18 compute-0 nova_compute[257802]:       <entry name="serial">fe856b5e-f11d-45ed-b560-5115a72670cd</entry>
Oct 02 12:56:18 compute-0 nova_compute[257802]:       <entry name="uuid">fe856b5e-f11d-45ed-b560-5115a72670cd</entry>
Oct 02 12:56:18 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     </system>
Oct 02 12:56:18 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:56:18 compute-0 nova_compute[257802]:   <os>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:   </os>
Oct 02 12:56:18 compute-0 nova_compute[257802]:   <features>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:   </features>
Oct 02 12:56:18 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:56:18 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:56:18 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:56:18 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/fe856b5e-f11d-45ed-b560-5115a72670cd_disk">
Oct 02 12:56:18 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:       </source>
Oct 02 12:56:18 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:56:18 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:56:18 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:56:18 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/fe856b5e-f11d-45ed-b560-5115a72670cd_disk.config">
Oct 02 12:56:18 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:       </source>
Oct 02 12:56:18 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:56:18 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:56:18 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:56:18 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:27:5c:ee"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:       <target dev="tap45adb0d1-b9"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:56:18 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/fe856b5e-f11d-45ed-b560-5115a72670cd/console.log" append="off"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <video>
Oct 02 12:56:18 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     </video>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:56:18 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:56:18 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:56:18 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:56:18 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:56:18 compute-0 nova_compute[257802]: </domain>
Oct 02 12:56:18 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:56:18 compute-0 nova_compute[257802]: 2025-10-02 12:56:18.262 2 DEBUG nova.compute.manager [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Preparing to wait for external event network-vif-plugged-45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:56:18 compute-0 nova_compute[257802]: 2025-10-02 12:56:18.263 2 DEBUG oslo_concurrency.lockutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Acquiring lock "fe856b5e-f11d-45ed-b560-5115a72670cd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:18 compute-0 nova_compute[257802]: 2025-10-02 12:56:18.263 2 DEBUG oslo_concurrency.lockutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Lock "fe856b5e-f11d-45ed-b560-5115a72670cd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:18 compute-0 nova_compute[257802]: 2025-10-02 12:56:18.263 2 DEBUG oslo_concurrency.lockutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Lock "fe856b5e-f11d-45ed-b560-5115a72670cd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:18 compute-0 nova_compute[257802]: 2025-10-02 12:56:18.264 2 DEBUG nova.virt.libvirt.vif [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:55:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-247156381',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-247156381',id=187,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dec97fbf4fd141868e034a8652cf0c37',ramdisk_id='',reservation_id='r-xr45ogql',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1795188409',owner_user_name='tempest-ServerTagsTestJSON-1795188409-project-member'},tags=TagList,task_state='
spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:56:03Z,user_data=None,user_id='84e6a279aa124804af5819b25a773dc1',uuid=fe856b5e-f11d-45ed-b560-5115a72670cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b", "address": "fa:16:3e:27:5c:ee", "network": {"id": "9f1ad1bf-3814-48be-b7c6-71850905c341", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1196426130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dec97fbf4fd141868e034a8652cf0c37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45adb0d1-b9", "ovs_interfaceid": "45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:56:18 compute-0 nova_compute[257802]: 2025-10-02 12:56:18.264 2 DEBUG nova.network.os_vif_util [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Converting VIF {"id": "45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b", "address": "fa:16:3e:27:5c:ee", "network": {"id": "9f1ad1bf-3814-48be-b7c6-71850905c341", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1196426130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dec97fbf4fd141868e034a8652cf0c37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45adb0d1-b9", "ovs_interfaceid": "45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:56:18 compute-0 nova_compute[257802]: 2025-10-02 12:56:18.265 2 DEBUG nova.network.os_vif_util [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:5c:ee,bridge_name='br-int',has_traffic_filtering=True,id=45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b,network=Network(9f1ad1bf-3814-48be-b7c6-71850905c341),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45adb0d1-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:56:18 compute-0 nova_compute[257802]: 2025-10-02 12:56:18.266 2 DEBUG os_vif [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:5c:ee,bridge_name='br-int',has_traffic_filtering=True,id=45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b,network=Network(9f1ad1bf-3814-48be-b7c6-71850905c341),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45adb0d1-b9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:56:18 compute-0 nova_compute[257802]: 2025-10-02 12:56:18.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:18 compute-0 nova_compute[257802]: 2025-10-02 12:56:18.267 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:18 compute-0 nova_compute[257802]: 2025-10-02 12:56:18.268 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:56:18 compute-0 nova_compute[257802]: 2025-10-02 12:56:18.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:18 compute-0 nova_compute[257802]: 2025-10-02 12:56:18.270 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap45adb0d1-b9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:18 compute-0 nova_compute[257802]: 2025-10-02 12:56:18.271 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap45adb0d1-b9, col_values=(('external_ids', {'iface-id': '45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:27:5c:ee', 'vm-uuid': 'fe856b5e-f11d-45ed-b560-5115a72670cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:18 compute-0 NetworkManager[44987]: <info>  [1759409778.2735] manager: (tap45adb0d1-b9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/377)
Oct 02 12:56:18 compute-0 nova_compute[257802]: 2025-10-02 12:56:18.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:18 compute-0 nova_compute[257802]: 2025-10-02 12:56:18.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:56:18 compute-0 nova_compute[257802]: 2025-10-02 12:56:18.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:18 compute-0 nova_compute[257802]: 2025-10-02 12:56:18.281 2 INFO os_vif [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:5c:ee,bridge_name='br-int',has_traffic_filtering=True,id=45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b,network=Network(9f1ad1bf-3814-48be-b7c6-71850905c341),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45adb0d1-b9')
Oct 02 12:56:18 compute-0 sudo[378742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:18 compute-0 sudo[378742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:18 compute-0 sudo[378742]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:18 compute-0 sudo[378767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:18 compute-0 sudo[378767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:18 compute-0 sudo[378767]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/630812361' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:56:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/138702492' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:56:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:18.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:19 compute-0 nova_compute[257802]: 2025-10-02 12:56:19.287 2 DEBUG nova.virt.libvirt.driver [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:56:19 compute-0 nova_compute[257802]: 2025-10-02 12:56:19.288 2 DEBUG nova.virt.libvirt.driver [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:56:19 compute-0 nova_compute[257802]: 2025-10-02 12:56:19.288 2 DEBUG nova.virt.libvirt.driver [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] No VIF found with MAC fa:16:3e:27:5c:ee, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:56:19 compute-0 nova_compute[257802]: 2025-10-02 12:56:19.288 2 INFO nova.virt.libvirt.driver [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Using config drive
Oct 02 12:56:19 compute-0 nova_compute[257802]: 2025-10-02 12:56:19.315 2 DEBUG nova.storage.rbd_utils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] rbd image fe856b5e-f11d-45ed-b560-5115a72670cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:56:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:19 compute-0 ceph-mon[73607]: pgmap v2920: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.7 MiB/s wr, 22 op/s
Oct 02 12:56:19 compute-0 nova_compute[257802]: 2025-10-02 12:56:19.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:19 compute-0 nova_compute[257802]: 2025-10-02 12:56:19.869 2 INFO nova.virt.libvirt.driver [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Creating config drive at /var/lib/nova/instances/fe856b5e-f11d-45ed-b560-5115a72670cd/disk.config
Oct 02 12:56:19 compute-0 nova_compute[257802]: 2025-10-02 12:56:19.875 2 DEBUG oslo_concurrency.processutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fe856b5e-f11d-45ed-b560-5115a72670cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi7iom60v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:56:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:20.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:20 compute-0 nova_compute[257802]: 2025-10-02 12:56:20.011 2 DEBUG oslo_concurrency.processutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fe856b5e-f11d-45ed-b560-5115a72670cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi7iom60v" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:56:20 compute-0 nova_compute[257802]: 2025-10-02 12:56:20.043 2 DEBUG nova.storage.rbd_utils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] rbd image fe856b5e-f11d-45ed-b560-5115a72670cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:56:20 compute-0 nova_compute[257802]: 2025-10-02 12:56:20.048 2 DEBUG oslo_concurrency.processutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fe856b5e-f11d-45ed-b560-5115a72670cd/disk.config fe856b5e-f11d-45ed-b560-5115a72670cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:56:20 compute-0 nova_compute[257802]: 2025-10-02 12:56:20.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2921: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.7 MiB/s wr, 23 op/s
Oct 02 12:56:20 compute-0 nova_compute[257802]: 2025-10-02 12:56:20.302 2 DEBUG oslo_concurrency.processutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fe856b5e-f11d-45ed-b560-5115a72670cd/disk.config fe856b5e-f11d-45ed-b560-5115a72670cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.254s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:56:20 compute-0 nova_compute[257802]: 2025-10-02 12:56:20.303 2 INFO nova.virt.libvirt.driver [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Deleting local config drive /var/lib/nova/instances/fe856b5e-f11d-45ed-b560-5115a72670cd/disk.config because it was imported into RBD.
Oct 02 12:56:20 compute-0 kernel: tap45adb0d1-b9: entered promiscuous mode
Oct 02 12:56:20 compute-0 NetworkManager[44987]: <info>  [1759409780.3835] manager: (tap45adb0d1-b9): new Tun device (/org/freedesktop/NetworkManager/Devices/378)
Oct 02 12:56:20 compute-0 nova_compute[257802]: 2025-10-02 12:56:20.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:20 compute-0 ovn_controller[148183]: 2025-10-02T12:56:20Z|00846|binding|INFO|Claiming lport 45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b for this chassis.
Oct 02 12:56:20 compute-0 ovn_controller[148183]: 2025-10-02T12:56:20Z|00847|binding|INFO|45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b: Claiming fa:16:3e:27:5c:ee 10.100.0.14
Oct 02 12:56:20 compute-0 nova_compute[257802]: 2025-10-02 12:56:20.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:20 compute-0 ovn_controller[148183]: 2025-10-02T12:56:20Z|00848|binding|INFO|Setting lport 45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b ovn-installed in OVS
Oct 02 12:56:20 compute-0 nova_compute[257802]: 2025-10-02 12:56:20.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:20 compute-0 nova_compute[257802]: 2025-10-02 12:56:20.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:20 compute-0 systemd-udevd[378863]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:56:20 compute-0 NetworkManager[44987]: <info>  [1759409780.4226] device (tap45adb0d1-b9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:56:20 compute-0 NetworkManager[44987]: <info>  [1759409780.4235] device (tap45adb0d1-b9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:56:20 compute-0 systemd-machined[211836]: New machine qemu-92-instance-000000bb.
Oct 02 12:56:20 compute-0 systemd[1]: Started Virtual Machine qemu-92-instance-000000bb.
Oct 02 12:56:20 compute-0 ovn_controller[148183]: 2025-10-02T12:56:20Z|00849|binding|INFO|Setting lport 45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b up in Southbound
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:20.506 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:5c:ee 10.100.0.14'], port_security=['fa:16:3e:27:5c:ee 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'fe856b5e-f11d-45ed-b560-5115a72670cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f1ad1bf-3814-48be-b7c6-71850905c341', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dec97fbf4fd141868e034a8652cf0c37', 'neutron:revision_number': '2', 'neutron:security_group_ids': '25d62b8b-b795-46a6-a666-f5f91a710c2d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f9884a3-9483-46da-bc68-ad2457e9000a, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:20.507 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b in datapath 9f1ad1bf-3814-48be-b7c6-71850905c341 bound to our chassis
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:20.508 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f1ad1bf-3814-48be-b7c6-71850905c341
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:20.520 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8071f43d-939f-4bf5-a32e-6ed63dec7284]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:20.520 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9f1ad1bf-31 in ovnmeta-9f1ad1bf-3814-48be-b7c6-71850905c341 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:20.524 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9f1ad1bf-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:20.524 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[53755248-e3ef-42bb-a52e-71723f29dab8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:20.525 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c934c841-3da3-4b76-afb3-c0eb6bbcd870]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:20.540 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[8e2c42af-a2be-40b2-94ba-52b48a282763]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:20.555 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[48d8e9c1-dee0-4e3b-a247-360206753e8c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:20.583 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[a23be46e-cc14-4f80-990f-eb9aef717816]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:20 compute-0 NetworkManager[44987]: <info>  [1759409780.5891] manager: (tap9f1ad1bf-30): new Veth device (/org/freedesktop/NetworkManager/Devices/379)
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:20.588 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[df4d05d0-1f15-4554-81f5-1d89a7534989]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:20 compute-0 systemd-udevd[378866]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:20.619 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[90ed204b-c5fc-48c4-9710-2a6073d1cb50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:20.622 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[831a85a8-c9a2-4ab3-9259-31373b16b459]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:20 compute-0 NetworkManager[44987]: <info>  [1759409780.6482] device (tap9f1ad1bf-30): carrier: link connected
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:20.653 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[0ec76ed3-dcf8-45b3-8f23-72e43f690c54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:20.670 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0c535fa5-da8f-44be-a1fc-004b615d81d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f1ad1bf-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:5b:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 784827, 'reachable_time': 37578, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378898, 'error': None, 'target': 'ovnmeta-9f1ad1bf-3814-48be-b7c6-71850905c341', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:20.690 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d9bb449d-4805-4faa-97f1-d624ed24aa7d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febb:5b95'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 784827, 'tstamp': 784827}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378899, 'error': None, 'target': 'ovnmeta-9f1ad1bf-3814-48be-b7c6-71850905c341', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:20.706 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5bc056c2-1eae-4aed-807b-00f83584c844]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f1ad1bf-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:5b:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 784827, 'reachable_time': 37578, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 378900, 'error': None, 'target': 'ovnmeta-9f1ad1bf-3814-48be-b7c6-71850905c341', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:20.738 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8b890a2a-7d6b-48f2-8534-1dfb3b2abb54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:20.803 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[af8afc10-88d6-4b24-8ce5-238773925769]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:20.805 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f1ad1bf-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:20.805 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:20.805 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f1ad1bf-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:20 compute-0 kernel: tap9f1ad1bf-30: entered promiscuous mode
Oct 02 12:56:20 compute-0 NetworkManager[44987]: <info>  [1759409780.8089] manager: (tap9f1ad1bf-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/380)
Oct 02 12:56:20 compute-0 nova_compute[257802]: 2025-10-02 12:56:20.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:20.817 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f1ad1bf-30, col_values=(('external_ids', {'iface-id': '5c6df185-7a72-4d7e-bc3c-e0f6fe4e0d71'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:20 compute-0 ovn_controller[148183]: 2025-10-02T12:56:20Z|00850|binding|INFO|Releasing lport 5c6df185-7a72-4d7e-bc3c-e0f6fe4e0d71 from this chassis (sb_readonly=0)
Oct 02 12:56:20 compute-0 nova_compute[257802]: 2025-10-02 12:56:20.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:20.822 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9f1ad1bf-3814-48be-b7c6-71850905c341.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9f1ad1bf-3814-48be-b7c6-71850905c341.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:20.823 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3f37a66b-1ff8-4037-b565-c178c4a4892c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:20.824 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-9f1ad1bf-3814-48be-b7c6-71850905c341
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/9f1ad1bf-3814-48be-b7c6-71850905c341.pid.haproxy
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 9f1ad1bf-3814-48be-b7c6-71850905c341
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:56:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:20.824 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9f1ad1bf-3814-48be-b7c6-71850905c341', 'env', 'PROCESS_TAG=haproxy-9f1ad1bf-3814-48be-b7c6-71850905c341', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9f1ad1bf-3814-48be-b7c6-71850905c341.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:56:20 compute-0 nova_compute[257802]: 2025-10-02 12:56:20.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:20.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:21 compute-0 ovn_controller[148183]: 2025-10-02T12:56:21Z|00851|binding|INFO|Releasing lport 5c6df185-7a72-4d7e-bc3c-e0f6fe4e0d71 from this chassis (sb_readonly=0)
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:21 compute-0 podman[378973]: 2025-10-02 12:56:21.237505116 +0000 UTC m=+0.048022501 container create 99cf8c91b3553f51f75dcb9bc5983a4ae6902bc8a0fafedae9e6cc78c1323b15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f1ad1bf-3814-48be-b7c6-71850905c341, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:56:21 compute-0 ovn_controller[148183]: 2025-10-02T12:56:21Z|00852|binding|INFO|Releasing lport 5c6df185-7a72-4d7e-bc3c-e0f6fe4e0d71 from this chassis (sb_readonly=0)
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:21 compute-0 systemd[1]: Started libpod-conmon-99cf8c91b3553f51f75dcb9bc5983a4ae6902bc8a0fafedae9e6cc78c1323b15.scope.
Oct 02 12:56:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:56:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/531bb22c311235c7fa1adaa48cd8d9ffb626d3f2bcb06f6156b597d16630f0d4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:56:21 compute-0 podman[378973]: 2025-10-02 12:56:21.210460761 +0000 UTC m=+0.020978166 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:56:21 compute-0 podman[378973]: 2025-10-02 12:56:21.315874512 +0000 UTC m=+0.126391927 container init 99cf8c91b3553f51f75dcb9bc5983a4ae6902bc8a0fafedae9e6cc78c1323b15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f1ad1bf-3814-48be-b7c6-71850905c341, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 12:56:21 compute-0 podman[378973]: 2025-10-02 12:56:21.321810597 +0000 UTC m=+0.132327972 container start 99cf8c91b3553f51f75dcb9bc5983a4ae6902bc8a0fafedae9e6cc78c1323b15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f1ad1bf-3814-48be-b7c6-71850905c341, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:56:21 compute-0 neutron-haproxy-ovnmeta-9f1ad1bf-3814-48be-b7c6-71850905c341[378989]: [NOTICE]   (378993) : New worker (378995) forked
Oct 02 12:56:21 compute-0 neutron-haproxy-ovnmeta-9f1ad1bf-3814-48be-b7c6-71850905c341[378989]: [NOTICE]   (378993) : Loading success.
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.388 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409781.387216, fe856b5e-f11d-45ed-b560-5115a72670cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.389 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] VM Started (Lifecycle Event)
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.614 2 DEBUG nova.compute.manager [req-dfa7e66e-04b5-48a8-a31e-b44fd44dd431 req-c6aa8aa7-de2e-4134-b6cf-1c4c75abc9a0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Received event network-vif-plugged-45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.614 2 DEBUG oslo_concurrency.lockutils [req-dfa7e66e-04b5-48a8-a31e-b44fd44dd431 req-c6aa8aa7-de2e-4134-b6cf-1c4c75abc9a0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fe856b5e-f11d-45ed-b560-5115a72670cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.615 2 DEBUG oslo_concurrency.lockutils [req-dfa7e66e-04b5-48a8-a31e-b44fd44dd431 req-c6aa8aa7-de2e-4134-b6cf-1c4c75abc9a0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fe856b5e-f11d-45ed-b560-5115a72670cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.615 2 DEBUG oslo_concurrency.lockutils [req-dfa7e66e-04b5-48a8-a31e-b44fd44dd431 req-c6aa8aa7-de2e-4134-b6cf-1c4c75abc9a0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fe856b5e-f11d-45ed-b560-5115a72670cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.615 2 DEBUG nova.compute.manager [req-dfa7e66e-04b5-48a8-a31e-b44fd44dd431 req-c6aa8aa7-de2e-4134-b6cf-1c4c75abc9a0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Processing event network-vif-plugged-45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.616 2 DEBUG nova.compute.manager [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.620 2 DEBUG nova.virt.libvirt.driver [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.625 2 INFO nova.virt.libvirt.driver [-] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Instance spawned successfully.
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.625 2 DEBUG nova.virt.libvirt.driver [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.631 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.635 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:56:21 compute-0 ceph-mon[73607]: pgmap v2921: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.7 MiB/s wr, 23 op/s
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.698 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.699 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409781.387381, fe856b5e-f11d-45ed-b560-5115a72670cd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.699 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] VM Paused (Lifecycle Event)
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.712 2 DEBUG nova.virt.libvirt.driver [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.713 2 DEBUG nova.virt.libvirt.driver [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.713 2 DEBUG nova.virt.libvirt.driver [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.714 2 DEBUG nova.virt.libvirt.driver [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.714 2 DEBUG nova.virt.libvirt.driver [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.715 2 DEBUG nova.virt.libvirt.driver [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.815 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.819 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409781.6195524, fe856b5e-f11d-45ed-b560-5115a72670cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.819 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] VM Resumed (Lifecycle Event)
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.864 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.869 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.887 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.967 2 INFO nova.compute.manager [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Took 17.95 seconds to spawn the instance on the hypervisor.
Oct 02 12:56:21 compute-0 nova_compute[257802]: 2025-10-02 12:56:21.968 2 DEBUG nova.compute.manager [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:56:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:22.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:22 compute-0 nova_compute[257802]: 2025-10-02 12:56:22.084 2 INFO nova.compute.manager [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Took 21.54 seconds to build instance.
Oct 02 12:56:22 compute-0 nova_compute[257802]: 2025-10-02 12:56:22.159 2 DEBUG oslo_concurrency.lockutils [None req-4b458ae9-96c7-4f38-b95f-d2ef4811cc55 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Lock "fe856b5e-f11d-45ed-b560-5115a72670cd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.404s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2922: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 2.3 KiB/s wr, 0 op/s
Oct 02 12:56:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:22.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:22 compute-0 podman[379005]: 2025-10-02 12:56:22.92095491 +0000 UTC m=+0.059224606 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 12:56:22 compute-0 podman[379007]: 2025-10-02 12:56:22.922690202 +0000 UTC m=+0.057347060 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:56:22 compute-0 podman[379006]: 2025-10-02 12:56:22.928987277 +0000 UTC m=+0.065370287 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 12:56:23 compute-0 nova_compute[257802]: 2025-10-02 12:56:23.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:23 compute-0 nova_compute[257802]: 2025-10-02 12:56:23.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:23 compute-0 ceph-mon[73607]: pgmap v2922: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 2.3 KiB/s wr, 0 op/s
Oct 02 12:56:24 compute-0 nova_compute[257802]: 2025-10-02 12:56:24.018 2 DEBUG nova.compute.manager [req-1f5de96f-47df-446f-872d-c35e02a9bd9e req-4e0dffc0-237b-45bb-a513-7a58827f61ea d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Received event network-vif-plugged-45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:56:24 compute-0 nova_compute[257802]: 2025-10-02 12:56:24.019 2 DEBUG oslo_concurrency.lockutils [req-1f5de96f-47df-446f-872d-c35e02a9bd9e req-4e0dffc0-237b-45bb-a513-7a58827f61ea d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fe856b5e-f11d-45ed-b560-5115a72670cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:24 compute-0 nova_compute[257802]: 2025-10-02 12:56:24.019 2 DEBUG oslo_concurrency.lockutils [req-1f5de96f-47df-446f-872d-c35e02a9bd9e req-4e0dffc0-237b-45bb-a513-7a58827f61ea d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fe856b5e-f11d-45ed-b560-5115a72670cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:24 compute-0 nova_compute[257802]: 2025-10-02 12:56:24.019 2 DEBUG oslo_concurrency.lockutils [req-1f5de96f-47df-446f-872d-c35e02a9bd9e req-4e0dffc0-237b-45bb-a513-7a58827f61ea d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fe856b5e-f11d-45ed-b560-5115a72670cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:24 compute-0 nova_compute[257802]: 2025-10-02 12:56:24.019 2 DEBUG nova.compute.manager [req-1f5de96f-47df-446f-872d-c35e02a9bd9e req-4e0dffc0-237b-45bb-a513-7a58827f61ea d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] No waiting events found dispatching network-vif-plugged-45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:56:24 compute-0 nova_compute[257802]: 2025-10-02 12:56:24.019 2 WARNING nova.compute.manager [req-1f5de96f-47df-446f-872d-c35e02a9bd9e req-4e0dffc0-237b-45bb-a513-7a58827f61ea d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Received unexpected event network-vif-plugged-45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b for instance with vm_state active and task_state None.
Oct 02 12:56:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:24.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2923: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 2.3 KiB/s wr, 0 op/s
Oct 02 12:56:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:24 compute-0 nova_compute[257802]: 2025-10-02 12:56:24.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:24.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:25 compute-0 nova_compute[257802]: 2025-10-02 12:56:25.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:25 compute-0 nova_compute[257802]: 2025-10-02 12:56:25.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:25 compute-0 ceph-mon[73607]: pgmap v2923: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 2.3 KiB/s wr, 0 op/s
Oct 02 12:56:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:26.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2924: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 02 12:56:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:56:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:26.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:56:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:26.977 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:26.978 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:26.979 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:27 compute-0 nova_compute[257802]: 2025-10-02 12:56:27.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:27 compute-0 ceph-mon[73607]: pgmap v2924: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 02 12:56:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:28.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2925: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 02 12:56:28 compute-0 nova_compute[257802]: 2025-10-02 12:56:28.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:28 compute-0 ceph-mon[73607]: pgmap v2925: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 02 12:56:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:56:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:28.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:56:28 compute-0 podman[379062]: 2025-10-02 12:56:28.978673026 +0000 UTC m=+0.106730512 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 12:56:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:29 compute-0 nova_compute[257802]: 2025-10-02 12:56:29.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:30.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2926: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 75 op/s
Oct 02 12:56:30 compute-0 nova_compute[257802]: 2025-10-02 12:56:30.444 2 DEBUG oslo_concurrency.lockutils [None req-8811f4f1-9a2c-49e4-b15a-fc1b041c6594 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Acquiring lock "fe856b5e-f11d-45ed-b560-5115a72670cd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:30 compute-0 nova_compute[257802]: 2025-10-02 12:56:30.446 2 DEBUG oslo_concurrency.lockutils [None req-8811f4f1-9a2c-49e4-b15a-fc1b041c6594 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Lock "fe856b5e-f11d-45ed-b560-5115a72670cd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:30 compute-0 nova_compute[257802]: 2025-10-02 12:56:30.446 2 DEBUG oslo_concurrency.lockutils [None req-8811f4f1-9a2c-49e4-b15a-fc1b041c6594 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Acquiring lock "fe856b5e-f11d-45ed-b560-5115a72670cd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:30 compute-0 nova_compute[257802]: 2025-10-02 12:56:30.446 2 DEBUG oslo_concurrency.lockutils [None req-8811f4f1-9a2c-49e4-b15a-fc1b041c6594 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Lock "fe856b5e-f11d-45ed-b560-5115a72670cd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:30 compute-0 nova_compute[257802]: 2025-10-02 12:56:30.447 2 DEBUG oslo_concurrency.lockutils [None req-8811f4f1-9a2c-49e4-b15a-fc1b041c6594 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Lock "fe856b5e-f11d-45ed-b560-5115a72670cd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:30 compute-0 nova_compute[257802]: 2025-10-02 12:56:30.448 2 INFO nova.compute.manager [None req-8811f4f1-9a2c-49e4-b15a-fc1b041c6594 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Terminating instance
Oct 02 12:56:30 compute-0 nova_compute[257802]: 2025-10-02 12:56:30.449 2 DEBUG nova.compute.manager [None req-8811f4f1-9a2c-49e4-b15a-fc1b041c6594 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:56:30 compute-0 kernel: tap45adb0d1-b9 (unregistering): left promiscuous mode
Oct 02 12:56:30 compute-0 NetworkManager[44987]: <info>  [1759409790.4994] device (tap45adb0d1-b9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:56:30 compute-0 ovn_controller[148183]: 2025-10-02T12:56:30Z|00853|binding|INFO|Releasing lport 45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b from this chassis (sb_readonly=0)
Oct 02 12:56:30 compute-0 ovn_controller[148183]: 2025-10-02T12:56:30Z|00854|binding|INFO|Setting lport 45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b down in Southbound
Oct 02 12:56:30 compute-0 nova_compute[257802]: 2025-10-02 12:56:30.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:30 compute-0 ovn_controller[148183]: 2025-10-02T12:56:30Z|00855|binding|INFO|Removing iface tap45adb0d1-b9 ovn-installed in OVS
Oct 02 12:56:30 compute-0 nova_compute[257802]: 2025-10-02 12:56:30.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:30.512 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:5c:ee 10.100.0.14'], port_security=['fa:16:3e:27:5c:ee 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'fe856b5e-f11d-45ed-b560-5115a72670cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f1ad1bf-3814-48be-b7c6-71850905c341', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dec97fbf4fd141868e034a8652cf0c37', 'neutron:revision_number': '4', 'neutron:security_group_ids': '25d62b8b-b795-46a6-a666-f5f91a710c2d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f9884a3-9483-46da-bc68-ad2457e9000a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:56:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:30.514 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b in datapath 9f1ad1bf-3814-48be-b7c6-71850905c341 unbound from our chassis
Oct 02 12:56:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:30.515 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f1ad1bf-3814-48be-b7c6-71850905c341, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:56:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:30.516 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3d0151ae-bd3c-4245-bdb7-fc40b16553eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:30 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:30.516 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9f1ad1bf-3814-48be-b7c6-71850905c341 namespace which is not needed anymore
Oct 02 12:56:30 compute-0 nova_compute[257802]: 2025-10-02 12:56:30.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:30 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000bb.scope: Deactivated successfully.
Oct 02 12:56:30 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000bb.scope: Consumed 9.709s CPU time.
Oct 02 12:56:30 compute-0 systemd-machined[211836]: Machine qemu-92-instance-000000bb terminated.
Oct 02 12:56:30 compute-0 nova_compute[257802]: 2025-10-02 12:56:30.694 2 INFO nova.virt.libvirt.driver [-] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Instance destroyed successfully.
Oct 02 12:56:30 compute-0 nova_compute[257802]: 2025-10-02 12:56:30.694 2 DEBUG nova.objects.instance [None req-8811f4f1-9a2c-49e4-b15a-fc1b041c6594 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Lazy-loading 'resources' on Instance uuid fe856b5e-f11d-45ed-b560-5115a72670cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:56:30 compute-0 nova_compute[257802]: 2025-10-02 12:56:30.715 2 DEBUG nova.virt.libvirt.vif [None req-8811f4f1-9a2c-49e4-b15a-fc1b041c6594 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:55:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-247156381',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-247156381',id=187,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:56:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dec97fbf4fd141868e034a8652cf0c37',ramdisk_id='',reservation_id='r-xr45ogql',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerTagsTestJSON-1795188409',owner_user_name='tempest-ServerTagsTestJSON-1795188409-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:56:22Z,user_data=None,user_id='84e6a279aa124804af5819b25a773dc1',uuid=fe856b5e-f11d-45ed-b560-5115a72670cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b", "address": "fa:16:3e:27:5c:ee", "network": {"id": "9f1ad1bf-3814-48be-b7c6-71850905c341", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1196426130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dec97fbf4fd141868e034a8652cf0c37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45adb0d1-b9", "ovs_interfaceid": "45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:56:30 compute-0 nova_compute[257802]: 2025-10-02 12:56:30.716 2 DEBUG nova.network.os_vif_util [None req-8811f4f1-9a2c-49e4-b15a-fc1b041c6594 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Converting VIF {"id": "45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b", "address": "fa:16:3e:27:5c:ee", "network": {"id": "9f1ad1bf-3814-48be-b7c6-71850905c341", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1196426130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dec97fbf4fd141868e034a8652cf0c37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45adb0d1-b9", "ovs_interfaceid": "45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:56:30 compute-0 nova_compute[257802]: 2025-10-02 12:56:30.717 2 DEBUG nova.network.os_vif_util [None req-8811f4f1-9a2c-49e4-b15a-fc1b041c6594 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:5c:ee,bridge_name='br-int',has_traffic_filtering=True,id=45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b,network=Network(9f1ad1bf-3814-48be-b7c6-71850905c341),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45adb0d1-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:56:30 compute-0 nova_compute[257802]: 2025-10-02 12:56:30.717 2 DEBUG os_vif [None req-8811f4f1-9a2c-49e4-b15a-fc1b041c6594 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:5c:ee,bridge_name='br-int',has_traffic_filtering=True,id=45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b,network=Network(9f1ad1bf-3814-48be-b7c6-71850905c341),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45adb0d1-b9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:56:30 compute-0 neutron-haproxy-ovnmeta-9f1ad1bf-3814-48be-b7c6-71850905c341[378989]: [NOTICE]   (378993) : haproxy version is 2.8.14-c23fe91
Oct 02 12:56:30 compute-0 neutron-haproxy-ovnmeta-9f1ad1bf-3814-48be-b7c6-71850905c341[378989]: [NOTICE]   (378993) : path to executable is /usr/sbin/haproxy
Oct 02 12:56:30 compute-0 neutron-haproxy-ovnmeta-9f1ad1bf-3814-48be-b7c6-71850905c341[378989]: [WARNING]  (378993) : Exiting Master process...
Oct 02 12:56:30 compute-0 neutron-haproxy-ovnmeta-9f1ad1bf-3814-48be-b7c6-71850905c341[378989]: [WARNING]  (378993) : Exiting Master process...
Oct 02 12:56:30 compute-0 nova_compute[257802]: 2025-10-02 12:56:30.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:30 compute-0 neutron-haproxy-ovnmeta-9f1ad1bf-3814-48be-b7c6-71850905c341[378989]: [ALERT]    (378993) : Current worker (378995) exited with code 143 (Terminated)
Oct 02 12:56:30 compute-0 neutron-haproxy-ovnmeta-9f1ad1bf-3814-48be-b7c6-71850905c341[378989]: [WARNING]  (378993) : All workers exited. Exiting... (0)
Oct 02 12:56:30 compute-0 nova_compute[257802]: 2025-10-02 12:56:30.719 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45adb0d1-b9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:30 compute-0 nova_compute[257802]: 2025-10-02 12:56:30.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:30 compute-0 systemd[1]: libpod-99cf8c91b3553f51f75dcb9bc5983a4ae6902bc8a0fafedae9e6cc78c1323b15.scope: Deactivated successfully.
Oct 02 12:56:30 compute-0 nova_compute[257802]: 2025-10-02 12:56:30.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:56:30 compute-0 nova_compute[257802]: 2025-10-02 12:56:30.728 2 INFO os_vif [None req-8811f4f1-9a2c-49e4-b15a-fc1b041c6594 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:5c:ee,bridge_name='br-int',has_traffic_filtering=True,id=45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b,network=Network(9f1ad1bf-3814-48be-b7c6-71850905c341),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45adb0d1-b9')
Oct 02 12:56:30 compute-0 podman[379113]: 2025-10-02 12:56:30.729352802 +0000 UTC m=+0.135396827 container died 99cf8c91b3553f51f75dcb9bc5983a4ae6902bc8a0fafedae9e6cc78c1323b15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f1ad1bf-3814-48be-b7c6-71850905c341, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:56:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:30.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-99cf8c91b3553f51f75dcb9bc5983a4ae6902bc8a0fafedae9e6cc78c1323b15-userdata-shm.mount: Deactivated successfully.
Oct 02 12:56:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-531bb22c311235c7fa1adaa48cd8d9ffb626d3f2bcb06f6156b597d16630f0d4-merged.mount: Deactivated successfully.
Oct 02 12:56:31 compute-0 nova_compute[257802]: 2025-10-02 12:56:31.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:31 compute-0 nova_compute[257802]: 2025-10-02 12:56:31.214 2 DEBUG nova.compute.manager [req-48cc2e96-4994-4250-819d-324370fd7e96 req-31e17a52-3b97-4e8c-90db-e99872d0d78d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Received event network-vif-unplugged-45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:56:31 compute-0 nova_compute[257802]: 2025-10-02 12:56:31.215 2 DEBUG oslo_concurrency.lockutils [req-48cc2e96-4994-4250-819d-324370fd7e96 req-31e17a52-3b97-4e8c-90db-e99872d0d78d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fe856b5e-f11d-45ed-b560-5115a72670cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:31 compute-0 nova_compute[257802]: 2025-10-02 12:56:31.215 2 DEBUG oslo_concurrency.lockutils [req-48cc2e96-4994-4250-819d-324370fd7e96 req-31e17a52-3b97-4e8c-90db-e99872d0d78d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fe856b5e-f11d-45ed-b560-5115a72670cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:31 compute-0 nova_compute[257802]: 2025-10-02 12:56:31.216 2 DEBUG oslo_concurrency.lockutils [req-48cc2e96-4994-4250-819d-324370fd7e96 req-31e17a52-3b97-4e8c-90db-e99872d0d78d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fe856b5e-f11d-45ed-b560-5115a72670cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:31 compute-0 nova_compute[257802]: 2025-10-02 12:56:31.217 2 DEBUG nova.compute.manager [req-48cc2e96-4994-4250-819d-324370fd7e96 req-31e17a52-3b97-4e8c-90db-e99872d0d78d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] No waiting events found dispatching network-vif-unplugged-45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:56:31 compute-0 nova_compute[257802]: 2025-10-02 12:56:31.217 2 DEBUG nova.compute.manager [req-48cc2e96-4994-4250-819d-324370fd7e96 req-31e17a52-3b97-4e8c-90db-e99872d0d78d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Received event network-vif-unplugged-45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:56:31 compute-0 podman[379113]: 2025-10-02 12:56:31.366047153 +0000 UTC m=+0.772091178 container cleanup 99cf8c91b3553f51f75dcb9bc5983a4ae6902bc8a0fafedae9e6cc78c1323b15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f1ad1bf-3814-48be-b7c6-71850905c341, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:56:31 compute-0 systemd[1]: libpod-conmon-99cf8c91b3553f51f75dcb9bc5983a4ae6902bc8a0fafedae9e6cc78c1323b15.scope: Deactivated successfully.
Oct 02 12:56:31 compute-0 ceph-mon[73607]: pgmap v2926: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 75 op/s
Oct 02 12:56:31 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3045358306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:31 compute-0 podman[379174]: 2025-10-02 12:56:31.446588 +0000 UTC m=+0.055714579 container remove 99cf8c91b3553f51f75dcb9bc5983a4ae6902bc8a0fafedae9e6cc78c1323b15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f1ad1bf-3814-48be-b7c6-71850905c341, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:56:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:31.452 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[01f3b67b-7a0e-4579-817d-a29ddc5f3f36]: (4, ('Thu Oct  2 12:56:30 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9f1ad1bf-3814-48be-b7c6-71850905c341 (99cf8c91b3553f51f75dcb9bc5983a4ae6902bc8a0fafedae9e6cc78c1323b15)\n99cf8c91b3553f51f75dcb9bc5983a4ae6902bc8a0fafedae9e6cc78c1323b15\nThu Oct  2 12:56:31 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9f1ad1bf-3814-48be-b7c6-71850905c341 (99cf8c91b3553f51f75dcb9bc5983a4ae6902bc8a0fafedae9e6cc78c1323b15)\n99cf8c91b3553f51f75dcb9bc5983a4ae6902bc8a0fafedae9e6cc78c1323b15\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:31.455 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[df24c3e7-f569-4c59-9ef9-fd03759e0e2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:31.456 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f1ad1bf-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:31 compute-0 nova_compute[257802]: 2025-10-02 12:56:31.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:31 compute-0 kernel: tap9f1ad1bf-30: left promiscuous mode
Oct 02 12:56:31 compute-0 nova_compute[257802]: 2025-10-02 12:56:31.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:31.464 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e624b1-3dcd-4948-9635-951b98c05395]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:31 compute-0 nova_compute[257802]: 2025-10-02 12:56:31.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:31.496 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[38c29d06-9610-4366-b62d-800d1c7877c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:31.498 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0e02a4d2-b55b-4bda-b861-244dcf106fc8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:31.515 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0a15eded-20d1-44c0-a73f-be700de0aa6a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 784820, 'reachable_time': 18848, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379190, 'error': None, 'target': 'ovnmeta-9f1ad1bf-3814-48be-b7c6-71850905c341', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:31 compute-0 systemd[1]: run-netns-ovnmeta\x2d9f1ad1bf\x2d3814\x2d48be\x2db7c6\x2d71850905c341.mount: Deactivated successfully.
Oct 02 12:56:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:31.519 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9f1ad1bf-3814-48be-b7c6-71850905c341 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:56:31 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:31.519 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[401024ca-0459-4f4e-b4f0-69ab36673bc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:31 compute-0 nova_compute[257802]: 2025-10-02 12:56:31.684 2 INFO nova.virt.libvirt.driver [None req-8811f4f1-9a2c-49e4-b15a-fc1b041c6594 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Deleting instance files /var/lib/nova/instances/fe856b5e-f11d-45ed-b560-5115a72670cd_del
Oct 02 12:56:31 compute-0 nova_compute[257802]: 2025-10-02 12:56:31.685 2 INFO nova.virt.libvirt.driver [None req-8811f4f1-9a2c-49e4-b15a-fc1b041c6594 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Deletion of /var/lib/nova/instances/fe856b5e-f11d-45ed-b560-5115a72670cd_del complete
Oct 02 12:56:31 compute-0 nova_compute[257802]: 2025-10-02 12:56:31.771 2 INFO nova.compute.manager [None req-8811f4f1-9a2c-49e4-b15a-fc1b041c6594 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Took 1.32 seconds to destroy the instance on the hypervisor.
Oct 02 12:56:31 compute-0 nova_compute[257802]: 2025-10-02 12:56:31.772 2 DEBUG oslo.service.loopingcall [None req-8811f4f1-9a2c-49e4-b15a-fc1b041c6594 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:56:31 compute-0 nova_compute[257802]: 2025-10-02 12:56:31.773 2 DEBUG nova.compute.manager [-] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:56:31 compute-0 nova_compute[257802]: 2025-10-02 12:56:31.773 2 DEBUG nova.network.neutron [-] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:56:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:32.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2927: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 74 op/s
Oct 02 12:56:32 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3892657803' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:32.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:33 compute-0 nova_compute[257802]: 2025-10-02 12:56:33.261 2 DEBUG nova.network.neutron [-] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:56:33 compute-0 nova_compute[257802]: 2025-10-02 12:56:33.313 2 INFO nova.compute.manager [-] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Took 1.54 seconds to deallocate network for instance.
Oct 02 12:56:33 compute-0 nova_compute[257802]: 2025-10-02 12:56:33.381 2 DEBUG nova.compute.manager [req-8b92f4b1-58f8-4475-ae1c-90df7feb83dd req-5b731b15-4d3a-4717-a955-3903cb022ead d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Received event network-vif-plugged-45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:56:33 compute-0 nova_compute[257802]: 2025-10-02 12:56:33.381 2 DEBUG oslo_concurrency.lockutils [req-8b92f4b1-58f8-4475-ae1c-90df7feb83dd req-5b731b15-4d3a-4717-a955-3903cb022ead d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fe856b5e-f11d-45ed-b560-5115a72670cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:33 compute-0 nova_compute[257802]: 2025-10-02 12:56:33.381 2 DEBUG oslo_concurrency.lockutils [req-8b92f4b1-58f8-4475-ae1c-90df7feb83dd req-5b731b15-4d3a-4717-a955-3903cb022ead d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fe856b5e-f11d-45ed-b560-5115a72670cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:33 compute-0 nova_compute[257802]: 2025-10-02 12:56:33.382 2 DEBUG oslo_concurrency.lockutils [req-8b92f4b1-58f8-4475-ae1c-90df7feb83dd req-5b731b15-4d3a-4717-a955-3903cb022ead d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fe856b5e-f11d-45ed-b560-5115a72670cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:33 compute-0 nova_compute[257802]: 2025-10-02 12:56:33.382 2 DEBUG nova.compute.manager [req-8b92f4b1-58f8-4475-ae1c-90df7feb83dd req-5b731b15-4d3a-4717-a955-3903cb022ead d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] No waiting events found dispatching network-vif-plugged-45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:56:33 compute-0 nova_compute[257802]: 2025-10-02 12:56:33.382 2 WARNING nova.compute.manager [req-8b92f4b1-58f8-4475-ae1c-90df7feb83dd req-5b731b15-4d3a-4717-a955-3903cb022ead d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Received unexpected event network-vif-plugged-45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b for instance with vm_state active and task_state deleting.
Oct 02 12:56:33 compute-0 nova_compute[257802]: 2025-10-02 12:56:33.383 2 DEBUG oslo_concurrency.lockutils [None req-8811f4f1-9a2c-49e4-b15a-fc1b041c6594 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:33 compute-0 nova_compute[257802]: 2025-10-02 12:56:33.383 2 DEBUG oslo_concurrency.lockutils [None req-8811f4f1-9a2c-49e4-b15a-fc1b041c6594 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:33 compute-0 ceph-mon[73607]: pgmap v2927: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 74 op/s
Oct 02 12:56:33 compute-0 nova_compute[257802]: 2025-10-02 12:56:33.729 2 DEBUG nova.compute.manager [req-ec39121f-2e1d-40f5-b21e-4c467d78559a req-67158247-25d2-47fe-a257-f0077bdde184 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Received event network-vif-deleted-45adb0d1-b9fc-4e7c-acd5-d2cfd1a1fa6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:56:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:56:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:34.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:56:34 compute-0 nova_compute[257802]: 2025-10-02 12:56:34.044 2 DEBUG oslo_concurrency.processutils [None req-8811f4f1-9a2c-49e4-b15a-fc1b041c6594 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:56:34 compute-0 nova_compute[257802]: 2025-10-02 12:56:34.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:34 compute-0 nova_compute[257802]: 2025-10-02 12:56:34.100 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:56:34 compute-0 nova_compute[257802]: 2025-10-02 12:56:34.100 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:56:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2928: 305 pgs: 305 active+clean; 235 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 77 op/s
Oct 02 12:56:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:56:34 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3191543182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:34 compute-0 nova_compute[257802]: 2025-10-02 12:56:34.483 2 DEBUG oslo_concurrency.processutils [None req-8811f4f1-9a2c-49e4-b15a-fc1b041c6594 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:56:34 compute-0 nova_compute[257802]: 2025-10-02 12:56:34.488 2 DEBUG nova.compute.provider_tree [None req-8811f4f1-9a2c-49e4-b15a-fc1b041c6594 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:56:34 compute-0 nova_compute[257802]: 2025-10-02 12:56:34.509 2 DEBUG nova.scheduler.client.report [None req-8811f4f1-9a2c-49e4-b15a-fc1b041c6594 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:56:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:34 compute-0 nova_compute[257802]: 2025-10-02 12:56:34.536 2 DEBUG oslo_concurrency.lockutils [None req-8811f4f1-9a2c-49e4-b15a-fc1b041c6594 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:34 compute-0 nova_compute[257802]: 2025-10-02 12:56:34.594 2 INFO nova.scheduler.client.report [None req-8811f4f1-9a2c-49e4-b15a-fc1b041c6594 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Deleted allocations for instance fe856b5e-f11d-45ed-b560-5115a72670cd
Oct 02 12:56:34 compute-0 nova_compute[257802]: 2025-10-02 12:56:34.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:34 compute-0 nova_compute[257802]: 2025-10-02 12:56:34.785 2 DEBUG oslo_concurrency.lockutils [None req-8811f4f1-9a2c-49e4-b15a-fc1b041c6594 84e6a279aa124804af5819b25a773dc1 dec97fbf4fd141868e034a8652cf0c37 - - default default] Lock "fe856b5e-f11d-45ed-b560-5115a72670cd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.340s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:56:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:34.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:56:34 compute-0 nova_compute[257802]: 2025-10-02 12:56:34.901 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-fe856b5e-f11d-45ed-b560-5115a72670cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:56:34 compute-0 nova_compute[257802]: 2025-10-02 12:56:34.901 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-fe856b5e-f11d-45ed-b560-5115a72670cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:56:34 compute-0 nova_compute[257802]: 2025-10-02 12:56:34.901 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:56:34 compute-0 nova_compute[257802]: 2025-10-02 12:56:34.902 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid fe856b5e-f11d-45ed-b560-5115a72670cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:56:34 compute-0 nova_compute[257802]: 2025-10-02 12:56:34.935 2 DEBUG nova.compute.utils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Can not refresh info_cache because instance was not found refresh_info_cache_for_instance /usr/lib/python3.9/site-packages/nova/compute/utils.py:1010
Oct 02 12:56:35 compute-0 nova_compute[257802]: 2025-10-02 12:56:35.182 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:56:35 compute-0 ceph-mon[73607]: pgmap v2928: 305 pgs: 305 active+clean; 235 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 77 op/s
Oct 02 12:56:35 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3191543182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:35 compute-0 nova_compute[257802]: 2025-10-02 12:56:35.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:35 compute-0 nova_compute[257802]: 2025-10-02 12:56:35.726 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:56:35 compute-0 nova_compute[257802]: 2025-10-02 12:56:35.785 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-fe856b5e-f11d-45ed-b560-5115a72670cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:56:35 compute-0 nova_compute[257802]: 2025-10-02 12:56:35.786 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:56:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:56:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:36.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:56:36 compute-0 nova_compute[257802]: 2025-10-02 12:56:36.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:36 compute-0 nova_compute[257802]: 2025-10-02 12:56:36.140 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:36 compute-0 nova_compute[257802]: 2025-10-02 12:56:36.140 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:36 compute-0 nova_compute[257802]: 2025-10-02 12:56:36.140 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:36 compute-0 nova_compute[257802]: 2025-10-02 12:56:36.140 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:56:36 compute-0 nova_compute[257802]: 2025-10-02 12:56:36.141 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:56:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2929: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 100 op/s
Oct 02 12:56:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1642685684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:56:36 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2286433497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:36 compute-0 nova_compute[257802]: 2025-10-02 12:56:36.755 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:56:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:56:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:36.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:56:36 compute-0 nova_compute[257802]: 2025-10-02 12:56:36.899 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:56:36 compute-0 nova_compute[257802]: 2025-10-02 12:56:36.901 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4189MB free_disk=20.927310943603516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:56:36 compute-0 nova_compute[257802]: 2025-10-02 12:56:36.901 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:36 compute-0 nova_compute[257802]: 2025-10-02 12:56:36.901 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:37 compute-0 nova_compute[257802]: 2025-10-02 12:56:37.006 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:56:37 compute-0 nova_compute[257802]: 2025-10-02 12:56:37.007 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:56:37 compute-0 nova_compute[257802]: 2025-10-02 12:56:37.079 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:56:37 compute-0 ceph-mon[73607]: pgmap v2929: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 100 op/s
Oct 02 12:56:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2286433497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:56:37 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2514093476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:37 compute-0 nova_compute[257802]: 2025-10-02 12:56:37.678 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:56:37 compute-0 nova_compute[257802]: 2025-10-02 12:56:37.684 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:56:37 compute-0 nova_compute[257802]: 2025-10-02 12:56:37.714 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:56:37 compute-0 nova_compute[257802]: 2025-10-02 12:56:37.767 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:56:37 compute-0 nova_compute[257802]: 2025-10-02 12:56:37.768 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:38.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2930: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 6.6 KiB/s wr, 26 op/s
Oct 02 12:56:38 compute-0 sudo[379261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:38 compute-0 sudo[379261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:38 compute-0 sudo[379261]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:38 compute-0 sudo[379286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:38 compute-0 sudo[379286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:38 compute-0 sudo[379286]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:38 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2514093476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:56:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:38.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:56:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:56:39.560717) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409799560770, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 1390, "num_deletes": 263, "total_data_size": 2216090, "memory_usage": 2250848, "flush_reason": "Manual Compaction"}
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409799583126, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 2178036, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63578, "largest_seqno": 64967, "table_properties": {"data_size": 2171518, "index_size": 3652, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14285, "raw_average_key_size": 20, "raw_value_size": 2158198, "raw_average_value_size": 3056, "num_data_blocks": 160, "num_entries": 706, "num_filter_entries": 706, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409684, "oldest_key_time": 1759409684, "file_creation_time": 1759409799, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 22481 microseconds, and 5566 cpu microseconds.
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:56:39.583202) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 2178036 bytes OK
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:56:39.583223) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:56:39.641899) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:56:39.641939) EVENT_LOG_v1 {"time_micros": 1759409799641929, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:56:39.641961) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 2209949, prev total WAL file size 2210660, number of live WAL files 2.
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:56:39.643566) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353036' seq:72057594037927935, type:22 .. '6C6F676D0032373630' seq:0, type:0; will stop at (end)
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(2126KB)], [143(11MB)]
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409799643644, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 14573420, "oldest_snapshot_seqno": -1}
Oct 02 12:56:39 compute-0 nova_compute[257802]: 2025-10-02 12:56:39.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:39 compute-0 ceph-mon[73607]: pgmap v2930: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 6.6 KiB/s wr, 26 op/s
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 9280 keys, 14413930 bytes, temperature: kUnknown
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409799770869, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 14413930, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14350489, "index_size": 39136, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23237, "raw_key_size": 243868, "raw_average_key_size": 26, "raw_value_size": 14184222, "raw_average_value_size": 1528, "num_data_blocks": 1509, "num_entries": 9280, "num_filter_entries": 9280, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759409799, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:56:39.771161) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 14413930 bytes
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:56:39.782319) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 114.5 rd, 113.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 11.8 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(13.3) write-amplify(6.6) OK, records in: 9822, records dropped: 542 output_compression: NoCompression
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:56:39.782351) EVENT_LOG_v1 {"time_micros": 1759409799782338, "job": 88, "event": "compaction_finished", "compaction_time_micros": 127313, "compaction_time_cpu_micros": 31258, "output_level": 6, "num_output_files": 1, "total_output_size": 14413930, "num_input_records": 9822, "num_output_records": 9280, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409799782978, "job": 88, "event": "table_file_deletion", "file_number": 145}
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409799785292, "job": 88, "event": "table_file_deletion", "file_number": 143}
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:56:39.643354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:56:39.785334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:56:39.785340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:56:39.785342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:56:39.785344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:56:39 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:56:39.785346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:56:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:56:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:40.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:56:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2931: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Oct 02 12:56:40 compute-0 nova_compute[257802]: 2025-10-02 12:56:40.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:40 compute-0 ceph-mon[73607]: pgmap v2931: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Oct 02 12:56:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:56:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:40.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:56:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:42.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2932: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Oct 02 12:56:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:56:42
Oct 02 12:56:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:56:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:56:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'images', 'volumes', 'backups', 'default.rgw.control', 'vms']
Oct 02 12:56:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:56:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:56:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:56:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:56:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:56:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:56:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:56:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:56:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:42.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:56:43 compute-0 ceph-mon[73607]: pgmap v2932: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Oct 02 12:56:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:56:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:56:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:56:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:56:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:56:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:44.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:56:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:56:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:56:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:56:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:56:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2933: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Oct 02 12:56:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:44 compute-0 nova_compute[257802]: 2025-10-02 12:56:44.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:44 compute-0 nova_compute[257802]: 2025-10-02 12:56:44.769 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:44.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:45 compute-0 ceph-mon[73607]: pgmap v2933: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Oct 02 12:56:45 compute-0 nova_compute[257802]: 2025-10-02 12:56:45.693 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409790.69161, fe856b5e-f11d-45ed-b560-5115a72670cd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:56:45 compute-0 nova_compute[257802]: 2025-10-02 12:56:45.693 2 INFO nova.compute.manager [-] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] VM Stopped (Lifecycle Event)
Oct 02 12:56:45 compute-0 nova_compute[257802]: 2025-10-02 12:56:45.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:45 compute-0 nova_compute[257802]: 2025-10-02 12:56:45.783 2 DEBUG nova.compute.manager [None req-a11a5b6b-8b7a-4310-b373-00d819aa5fbe - - - - - -] [instance: fe856b5e-f11d-45ed-b560-5115a72670cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:56:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:46.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2934: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Oct 02 12:56:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:56:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:46.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:56:47 compute-0 nova_compute[257802]: 2025-10-02 12:56:47.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:47 compute-0 ceph-mon[73607]: pgmap v2934: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Oct 02 12:56:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4243037233' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:56:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:48.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2935: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 12:56:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2108423254' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:56:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:48.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:49 compute-0 ceph-mon[73607]: pgmap v2935: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 12:56:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:49 compute-0 nova_compute[257802]: 2025-10-02 12:56:49.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:50.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2936: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 02 12:56:50 compute-0 nova_compute[257802]: 2025-10-02 12:56:50.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:50.691 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=68, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=67) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:56:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:50.693 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:56:50 compute-0 nova_compute[257802]: 2025-10-02 12:56:50.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:50.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:51 compute-0 ceph-mon[73607]: pgmap v2936: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 02 12:56:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:56:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:52.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:56:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2937: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.7 KiB/s wr, 1 op/s
Oct 02 12:56:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:56:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:52.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:56:53 compute-0 ceph-mon[73607]: pgmap v2937: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.7 KiB/s wr, 1 op/s
Oct 02 12:56:53 compute-0 podman[379321]: 2025-10-02 12:56:53.940884188 +0000 UTC m=+0.069727534 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 12:56:53 compute-0 podman[379320]: 2025-10-02 12:56:53.948215988 +0000 UTC m=+0.075726701 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible)
Oct 02 12:56:53 compute-0 podman[379319]: 2025-10-02 12:56:53.960816138 +0000 UTC m=+0.095632980 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 12:56:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:54.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2938: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 85 B/s rd, 9.7 KiB/s wr, 1 op/s
Oct 02 12:56:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031641497417643776 of space, bias 1.0, pg target 0.9492449225293133 quantized to 32 (current 32)
Oct 02 12:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 12:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 12:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 12:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 12:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 12:56:54 compute-0 nova_compute[257802]: 2025-10-02 12:56:54.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:56:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 12:56:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:56:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:54.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:56:55 compute-0 ceph-mon[73607]: pgmap v2938: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 85 B/s rd, 9.7 KiB/s wr, 1 op/s
Oct 02 12:56:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1893935533' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:56:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1893935533' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:56:55 compute-0 nova_compute[257802]: 2025-10-02 12:56:55.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:56.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2939: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 23 KiB/s wr, 11 op/s
Oct 02 12:56:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:56.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:57 compute-0 ceph-mon[73607]: pgmap v2939: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 23 KiB/s wr, 11 op/s
Oct 02 12:56:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:58.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2940: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 23 KiB/s wr, 11 op/s
Oct 02 12:56:58 compute-0 sudo[379376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:58 compute-0 sudo[379376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:58 compute-0 sudo[379376]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:58 compute-0 sudo[379402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:58 compute-0 sudo[379402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:58 compute-0 sudo[379402]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:56:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:56:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:58.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:56:59 compute-0 ceph-mon[73607]: pgmap v2940: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 23 KiB/s wr, 11 op/s
Oct 02 12:56:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:59 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:56:59.695 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '68'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:59 compute-0 nova_compute[257802]: 2025-10-02 12:56:59.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:59 compute-0 podman[379427]: 2025-10-02 12:56:59.942556639 +0000 UTC m=+0.082797975 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:57:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:00.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2941: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 23 KiB/s wr, 68 op/s
Oct 02 12:57:00 compute-0 nova_compute[257802]: 2025-10-02 12:57:00.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:00.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:01 compute-0 ceph-mon[73607]: pgmap v2941: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 23 KiB/s wr, 68 op/s
Oct 02 12:57:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:02.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2942: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 13 KiB/s wr, 67 op/s
Oct 02 12:57:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:57:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:02.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:57:03 compute-0 ceph-mon[73607]: pgmap v2942: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 13 KiB/s wr, 67 op/s
Oct 02 12:57:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:04.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2943: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 02 12:57:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:57:04.705931) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409824706028, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 447, "num_deletes": 250, "total_data_size": 410612, "memory_usage": 418704, "flush_reason": "Manual Compaction"}
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409824710306, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 313927, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64968, "largest_seqno": 65414, "table_properties": {"data_size": 311514, "index_size": 512, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6595, "raw_average_key_size": 20, "raw_value_size": 306608, "raw_average_value_size": 946, "num_data_blocks": 23, "num_entries": 324, "num_filter_entries": 324, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409799, "oldest_key_time": 1759409799, "file_creation_time": 1759409824, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 4422 microseconds, and 2290 cpu microseconds.
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:57:04.710366) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 313927 bytes OK
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:57:04.710395) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:57:04.711983) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:57:04.712005) EVENT_LOG_v1 {"time_micros": 1759409824711998, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:57:04.712035) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 407941, prev total WAL file size 407941, number of live WAL files 2.
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:57:04.712752) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323630' seq:72057594037927935, type:22 .. '6D6772737461740032353131' seq:0, type:0; will stop at (end)
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(306KB)], [146(13MB)]
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409824712869, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 14727857, "oldest_snapshot_seqno": -1}
Oct 02 12:57:04 compute-0 nova_compute[257802]: 2025-10-02 12:57:04.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 9101 keys, 10984862 bytes, temperature: kUnknown
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409824796519, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 10984862, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10927237, "index_size": 33768, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22789, "raw_key_size": 240339, "raw_average_key_size": 26, "raw_value_size": 10768664, "raw_average_value_size": 1183, "num_data_blocks": 1286, "num_entries": 9101, "num_filter_entries": 9101, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759409824, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:57:04.796952) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 10984862 bytes
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:57:04.798088) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.8 rd, 131.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 13.7 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(81.9) write-amplify(35.0) OK, records in: 9604, records dropped: 503 output_compression: NoCompression
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:57:04.798107) EVENT_LOG_v1 {"time_micros": 1759409824798097, "job": 90, "event": "compaction_finished", "compaction_time_micros": 83763, "compaction_time_cpu_micros": 47498, "output_level": 6, "num_output_files": 1, "total_output_size": 10984862, "num_input_records": 9604, "num_output_records": 9101, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409824798321, "job": 90, "event": "table_file_deletion", "file_number": 148}
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409824801231, "job": 90, "event": "table_file_deletion", "file_number": 146}
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:57:04.712636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:57:04.801337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:57:04.801344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:57:04.801353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:57:04.801355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:57:04 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:57:04.801357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:57:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:04.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:05 compute-0 ceph-mon[73607]: pgmap v2943: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 02 12:57:05 compute-0 nova_compute[257802]: 2025-10-02 12:57:05.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:06.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2944: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 02 12:57:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:06.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:07 compute-0 ceph-mon[73607]: pgmap v2944: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 02 12:57:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:08.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2945: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct 02 12:57:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:08.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:08 compute-0 sudo[379458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:08 compute-0 sudo[379458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:08 compute-0 ceph-mon[73607]: pgmap v2945: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct 02 12:57:08 compute-0 sudo[379458]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:09 compute-0 sudo[379483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:57:09 compute-0 sudo[379483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:09 compute-0 sudo[379483]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:09 compute-0 sudo[379508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:09 compute-0 sudo[379508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:09 compute-0 sudo[379508]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:09 compute-0 sudo[379533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 12:57:09 compute-0 sudo[379533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 12:57:09 compute-0 sudo[379533]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:57:09 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:57:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 12:57:09 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:57:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:57:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:09 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:57:09 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:57:09 compute-0 nova_compute[257802]: 2025-10-02 12:57:09.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:09 compute-0 sudo[379578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:09 compute-0 sudo[379578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:09 compute-0 sudo[379578]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:09 compute-0 sudo[379603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:57:09 compute-0 sudo[379603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:09 compute-0 sudo[379603]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:09 compute-0 sudo[379628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:09 compute-0 sudo[379628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:09 compute-0 sudo[379628]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:09 compute-0 sudo[379653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:57:09 compute-0 sudo[379653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:10.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2946: 305 pgs: 305 active+clean; 264 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 89 op/s
Oct 02 12:57:10 compute-0 sudo[379653]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:57:10 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:57:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:57:10 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:57:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:57:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:57:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:57:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:57:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:57:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1205470845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:57:10 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:57:10 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 97f764a8-d94b-4e56-aa6b-362b817db616 does not exist
Oct 02 12:57:10 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev e61e9f34-7f60-48e5-a5cc-fda771b6d1fc does not exist
Oct 02 12:57:10 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 43d873e1-1109-4838-be42-4fedd4e5f9fc does not exist
Oct 02 12:57:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:57:10 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:57:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:57:10 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:57:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:57:10 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:57:10 compute-0 sudo[379710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:10 compute-0 sudo[379710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:10 compute-0 sudo[379710]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:10 compute-0 nova_compute[257802]: 2025-10-02 12:57:10.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:10 compute-0 sudo[379735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:57:10 compute-0 sudo[379735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:10 compute-0 sudo[379735]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:10.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:10 compute-0 sudo[379760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:10 compute-0 sudo[379760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:10 compute-0 sudo[379760]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:11 compute-0 sudo[379785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:57:11 compute-0 sudo[379785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:11 compute-0 podman[379853]: 2025-10-02 12:57:11.403436624 +0000 UTC m=+0.052511332 container create 3f2cd867b390f0dbf9f858ecef3159b7b102258c85709a8493fd3841d62a02c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hawking, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 12:57:11 compute-0 systemd[1]: Started libpod-conmon-3f2cd867b390f0dbf9f858ecef3159b7b102258c85709a8493fd3841d62a02c9.scope.
Oct 02 12:57:11 compute-0 podman[379853]: 2025-10-02 12:57:11.379996537 +0000 UTC m=+0.029071285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:57:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:57:11 compute-0 podman[379853]: 2025-10-02 12:57:11.49653294 +0000 UTC m=+0.145607698 container init 3f2cd867b390f0dbf9f858ecef3159b7b102258c85709a8493fd3841d62a02c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 12:57:11 compute-0 podman[379853]: 2025-10-02 12:57:11.511542009 +0000 UTC m=+0.160616717 container start 3f2cd867b390f0dbf9f858ecef3159b7b102258c85709a8493fd3841d62a02c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hawking, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:57:11 compute-0 podman[379853]: 2025-10-02 12:57:11.514793939 +0000 UTC m=+0.163868697 container attach 3f2cd867b390f0dbf9f858ecef3159b7b102258c85709a8493fd3841d62a02c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hawking, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:57:11 compute-0 unruffled_hawking[379869]: 167 167
Oct 02 12:57:11 compute-0 systemd[1]: libpod-3f2cd867b390f0dbf9f858ecef3159b7b102258c85709a8493fd3841d62a02c9.scope: Deactivated successfully.
Oct 02 12:57:11 compute-0 conmon[379869]: conmon 3f2cd867b390f0dbf9f8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3f2cd867b390f0dbf9f858ecef3159b7b102258c85709a8493fd3841d62a02c9.scope/container/memory.events
Oct 02 12:57:11 compute-0 podman[379853]: 2025-10-02 12:57:11.521440892 +0000 UTC m=+0.170515590 container died 3f2cd867b390f0dbf9f858ecef3159b7b102258c85709a8493fd3841d62a02c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 12:57:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d70296d4ce6616c03b96f463af6ee11dcdafd4d5111d2dbdf9ff339e9e215e1-merged.mount: Deactivated successfully.
Oct 02 12:57:11 compute-0 podman[379853]: 2025-10-02 12:57:11.56206466 +0000 UTC m=+0.211139378 container remove 3f2cd867b390f0dbf9f858ecef3159b7b102258c85709a8493fd3841d62a02c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:57:11 compute-0 systemd[1]: libpod-conmon-3f2cd867b390f0dbf9f858ecef3159b7b102258c85709a8493fd3841d62a02c9.scope: Deactivated successfully.
Oct 02 12:57:11 compute-0 ceph-mon[73607]: pgmap v2946: 305 pgs: 305 active+clean; 264 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 89 op/s
Oct 02 12:57:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:57:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:57:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:57:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:57:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:57:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/751274490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:11 compute-0 podman[379892]: 2025-10-02 12:57:11.758648539 +0000 UTC m=+0.047484347 container create f63c156c39931196a8c4082ddb32f61e5e7e8b2572b655c1407b8473b0a6e37d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bhaskara, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 12:57:11 compute-0 systemd[1]: Started libpod-conmon-f63c156c39931196a8c4082ddb32f61e5e7e8b2572b655c1407b8473b0a6e37d.scope.
Oct 02 12:57:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:57:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb7d7e13b73ba64f38868529829903a63bd6695077ecfd83be7d17d15f5345d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:57:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb7d7e13b73ba64f38868529829903a63bd6695077ecfd83be7d17d15f5345d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:57:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb7d7e13b73ba64f38868529829903a63bd6695077ecfd83be7d17d15f5345d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:57:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb7d7e13b73ba64f38868529829903a63bd6695077ecfd83be7d17d15f5345d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:57:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb7d7e13b73ba64f38868529829903a63bd6695077ecfd83be7d17d15f5345d8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:57:11 compute-0 podman[379892]: 2025-10-02 12:57:11.828471594 +0000 UTC m=+0.117307422 container init f63c156c39931196a8c4082ddb32f61e5e7e8b2572b655c1407b8473b0a6e37d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bhaskara, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:57:11 compute-0 podman[379892]: 2025-10-02 12:57:11.739255843 +0000 UTC m=+0.028091751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:57:11 compute-0 podman[379892]: 2025-10-02 12:57:11.843250307 +0000 UTC m=+0.132086115 container start f63c156c39931196a8c4082ddb32f61e5e7e8b2572b655c1407b8473b0a6e37d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bhaskara, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:57:11 compute-0 podman[379892]: 2025-10-02 12:57:11.84905897 +0000 UTC m=+0.137894798 container attach f63c156c39931196a8c4082ddb32f61e5e7e8b2572b655c1407b8473b0a6e37d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bhaskara, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 12:57:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:12.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2947: 305 pgs: 305 active+clean; 264 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 405 KiB/s rd, 1.6 MiB/s wr, 32 op/s
Oct 02 12:57:12 compute-0 flamboyant_bhaskara[379908]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:57:12 compute-0 flamboyant_bhaskara[379908]: --> relative data size: 1.0
Oct 02 12:57:12 compute-0 flamboyant_bhaskara[379908]: --> All data devices are unavailable
Oct 02 12:57:12 compute-0 systemd[1]: libpod-f63c156c39931196a8c4082ddb32f61e5e7e8b2572b655c1407b8473b0a6e37d.scope: Deactivated successfully.
Oct 02 12:57:12 compute-0 podman[379892]: 2025-10-02 12:57:12.715707759 +0000 UTC m=+1.004543607 container died f63c156c39931196a8c4082ddb32f61e5e7e8b2572b655c1407b8473b0a6e37d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bhaskara, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 12:57:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:57:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:57:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:57:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:57:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:57:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:57:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb7d7e13b73ba64f38868529829903a63bd6695077ecfd83be7d17d15f5345d8-merged.mount: Deactivated successfully.
Oct 02 12:57:12 compute-0 podman[379892]: 2025-10-02 12:57:12.804418538 +0000 UTC m=+1.093254346 container remove f63c156c39931196a8c4082ddb32f61e5e7e8b2572b655c1407b8473b0a6e37d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bhaskara, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:57:12 compute-0 systemd[1]: libpod-conmon-f63c156c39931196a8c4082ddb32f61e5e7e8b2572b655c1407b8473b0a6e37d.scope: Deactivated successfully.
Oct 02 12:57:12 compute-0 sudo[379785]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:12 compute-0 sudo[379937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:12 compute-0 sudo[379937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:12 compute-0 sudo[379937]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:12.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:12 compute-0 sudo[379962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:57:12 compute-0 sudo[379962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:12 compute-0 sudo[379962]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:13 compute-0 sudo[379987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:13 compute-0 sudo[379987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:13 compute-0 sudo[379987]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:13 compute-0 sudo[380012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:57:13 compute-0 sudo[380012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:13 compute-0 podman[380076]: 2025-10-02 12:57:13.350943894 +0000 UTC m=+0.038489337 container create bca3cd7299e345d2870a8e4ce63467c07867c12463c6c26047fb50fb55b173b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_leavitt, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:57:13 compute-0 systemd[1]: Started libpod-conmon-bca3cd7299e345d2870a8e4ce63467c07867c12463c6c26047fb50fb55b173b1.scope.
Oct 02 12:57:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:57:13 compute-0 podman[380076]: 2025-10-02 12:57:13.334657743 +0000 UTC m=+0.022203166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:57:13 compute-0 podman[380076]: 2025-10-02 12:57:13.432573599 +0000 UTC m=+0.120119052 container init bca3cd7299e345d2870a8e4ce63467c07867c12463c6c26047fb50fb55b173b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 12:57:13 compute-0 podman[380076]: 2025-10-02 12:57:13.439335465 +0000 UTC m=+0.126880888 container start bca3cd7299e345d2870a8e4ce63467c07867c12463c6c26047fb50fb55b173b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 12:57:13 compute-0 podman[380076]: 2025-10-02 12:57:13.443012985 +0000 UTC m=+0.130558408 container attach bca3cd7299e345d2870a8e4ce63467c07867c12463c6c26047fb50fb55b173b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 12:57:13 compute-0 goofy_leavitt[380092]: 167 167
Oct 02 12:57:13 compute-0 systemd[1]: libpod-bca3cd7299e345d2870a8e4ce63467c07867c12463c6c26047fb50fb55b173b1.scope: Deactivated successfully.
Oct 02 12:57:13 compute-0 podman[380076]: 2025-10-02 12:57:13.44602413 +0000 UTC m=+0.133569553 container died bca3cd7299e345d2870a8e4ce63467c07867c12463c6c26047fb50fb55b173b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_leavitt, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 12:57:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-e93c7efe388ae736714e6b6d0011c2dc67716346cf774c8609b4bba2813dabe5-merged.mount: Deactivated successfully.
Oct 02 12:57:13 compute-0 podman[380076]: 2025-10-02 12:57:13.489025715 +0000 UTC m=+0.176571148 container remove bca3cd7299e345d2870a8e4ce63467c07867c12463c6c26047fb50fb55b173b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_leavitt, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct 02 12:57:13 compute-0 systemd[1]: libpod-conmon-bca3cd7299e345d2870a8e4ce63467c07867c12463c6c26047fb50fb55b173b1.scope: Deactivated successfully.
Oct 02 12:57:13 compute-0 podman[380114]: 2025-10-02 12:57:13.648771989 +0000 UTC m=+0.042776041 container create 8a5e169fe787becda57679576767b8952d8e9dcb2b62d8bc274941522e71c7cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:57:13 compute-0 ceph-mon[73607]: pgmap v2947: 305 pgs: 305 active+clean; 264 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 405 KiB/s rd, 1.6 MiB/s wr, 32 op/s
Oct 02 12:57:13 compute-0 systemd[1]: Started libpod-conmon-8a5e169fe787becda57679576767b8952d8e9dcb2b62d8bc274941522e71c7cc.scope.
Oct 02 12:57:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:57:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24e6a07f6b70ada95f5b8b9be13394ff6748b404d423b7ef218d3c7d8e39dc75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:57:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24e6a07f6b70ada95f5b8b9be13394ff6748b404d423b7ef218d3c7d8e39dc75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:57:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24e6a07f6b70ada95f5b8b9be13394ff6748b404d423b7ef218d3c7d8e39dc75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:57:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24e6a07f6b70ada95f5b8b9be13394ff6748b404d423b7ef218d3c7d8e39dc75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:57:13 compute-0 podman[380114]: 2025-10-02 12:57:13.631302211 +0000 UTC m=+0.025306263 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:57:13 compute-0 podman[380114]: 2025-10-02 12:57:13.739596441 +0000 UTC m=+0.133600493 container init 8a5e169fe787becda57679576767b8952d8e9dcb2b62d8bc274941522e71c7cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:57:13 compute-0 podman[380114]: 2025-10-02 12:57:13.745749992 +0000 UTC m=+0.139754024 container start 8a5e169fe787becda57679576767b8952d8e9dcb2b62d8bc274941522e71c7cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 12:57:13 compute-0 podman[380114]: 2025-10-02 12:57:13.750918098 +0000 UTC m=+0.144922140 container attach 8a5e169fe787becda57679576767b8952d8e9dcb2b62d8bc274941522e71c7cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:57:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:57:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:14.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:57:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2948: 305 pgs: 305 active+clean; 271 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 442 KiB/s rd, 2.1 MiB/s wr, 46 op/s
Oct 02 12:57:14 compute-0 competent_carver[380130]: {
Oct 02 12:57:14 compute-0 competent_carver[380130]:     "1": [
Oct 02 12:57:14 compute-0 competent_carver[380130]:         {
Oct 02 12:57:14 compute-0 competent_carver[380130]:             "devices": [
Oct 02 12:57:14 compute-0 competent_carver[380130]:                 "/dev/loop3"
Oct 02 12:57:14 compute-0 competent_carver[380130]:             ],
Oct 02 12:57:14 compute-0 competent_carver[380130]:             "lv_name": "ceph_lv0",
Oct 02 12:57:14 compute-0 competent_carver[380130]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:57:14 compute-0 competent_carver[380130]:             "lv_size": "7511998464",
Oct 02 12:57:14 compute-0 competent_carver[380130]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:57:14 compute-0 competent_carver[380130]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:57:14 compute-0 competent_carver[380130]:             "name": "ceph_lv0",
Oct 02 12:57:14 compute-0 competent_carver[380130]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:57:14 compute-0 competent_carver[380130]:             "tags": {
Oct 02 12:57:14 compute-0 competent_carver[380130]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:57:14 compute-0 competent_carver[380130]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:57:14 compute-0 competent_carver[380130]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:57:14 compute-0 competent_carver[380130]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:57:14 compute-0 competent_carver[380130]:                 "ceph.cluster_name": "ceph",
Oct 02 12:57:14 compute-0 competent_carver[380130]:                 "ceph.crush_device_class": "",
Oct 02 12:57:14 compute-0 competent_carver[380130]:                 "ceph.encrypted": "0",
Oct 02 12:57:14 compute-0 competent_carver[380130]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:57:14 compute-0 competent_carver[380130]:                 "ceph.osd_id": "1",
Oct 02 12:57:14 compute-0 competent_carver[380130]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:57:14 compute-0 competent_carver[380130]:                 "ceph.type": "block",
Oct 02 12:57:14 compute-0 competent_carver[380130]:                 "ceph.vdo": "0"
Oct 02 12:57:14 compute-0 competent_carver[380130]:             },
Oct 02 12:57:14 compute-0 competent_carver[380130]:             "type": "block",
Oct 02 12:57:14 compute-0 competent_carver[380130]:             "vg_name": "ceph_vg0"
Oct 02 12:57:14 compute-0 competent_carver[380130]:         }
Oct 02 12:57:14 compute-0 competent_carver[380130]:     ]
Oct 02 12:57:14 compute-0 competent_carver[380130]: }
Oct 02 12:57:14 compute-0 systemd[1]: libpod-8a5e169fe787becda57679576767b8952d8e9dcb2b62d8bc274941522e71c7cc.scope: Deactivated successfully.
Oct 02 12:57:14 compute-0 podman[380114]: 2025-10-02 12:57:14.541294744 +0000 UTC m=+0.935298776 container died 8a5e169fe787becda57679576767b8952d8e9dcb2b62d8bc274941522e71c7cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 12:57:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-24e6a07f6b70ada95f5b8b9be13394ff6748b404d423b7ef218d3c7d8e39dc75-merged.mount: Deactivated successfully.
Oct 02 12:57:14 compute-0 podman[380114]: 2025-10-02 12:57:14.596440899 +0000 UTC m=+0.990444931 container remove 8a5e169fe787becda57679576767b8952d8e9dcb2b62d8bc274941522e71c7cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 12:57:14 compute-0 systemd[1]: libpod-conmon-8a5e169fe787becda57679576767b8952d8e9dcb2b62d8bc274941522e71c7cc.scope: Deactivated successfully.
Oct 02 12:57:14 compute-0 sudo[380012]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:14 compute-0 sudo[380150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:14 compute-0 sudo[380150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:14 compute-0 sudo[380150]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:14 compute-0 nova_compute[257802]: 2025-10-02 12:57:14.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:14 compute-0 sudo[380176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:57:14 compute-0 sudo[380176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:14 compute-0 sudo[380176]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:14 compute-0 sudo[380201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:14 compute-0 sudo[380201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:14 compute-0 sudo[380201]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:14.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:14 compute-0 sudo[380226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:57:14 compute-0 sudo[380226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:15 compute-0 podman[380290]: 2025-10-02 12:57:15.24743041 +0000 UTC m=+0.034252052 container create 337c1f0cd2a830305c7643001bb288d7c143ba6a951a88b9c9641dab05bee9f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Oct 02 12:57:15 compute-0 systemd[1]: Started libpod-conmon-337c1f0cd2a830305c7643001bb288d7c143ba6a951a88b9c9641dab05bee9f6.scope.
Oct 02 12:57:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:57:15 compute-0 podman[380290]: 2025-10-02 12:57:15.231396687 +0000 UTC m=+0.018218349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:57:15 compute-0 podman[380290]: 2025-10-02 12:57:15.336447667 +0000 UTC m=+0.123269339 container init 337c1f0cd2a830305c7643001bb288d7c143ba6a951a88b9c9641dab05bee9f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 12:57:15 compute-0 podman[380290]: 2025-10-02 12:57:15.345406717 +0000 UTC m=+0.132228369 container start 337c1f0cd2a830305c7643001bb288d7c143ba6a951a88b9c9641dab05bee9f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ptolemy, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:57:15 compute-0 distracted_ptolemy[380304]: 167 167
Oct 02 12:57:15 compute-0 podman[380290]: 2025-10-02 12:57:15.349538829 +0000 UTC m=+0.136360491 container attach 337c1f0cd2a830305c7643001bb288d7c143ba6a951a88b9c9641dab05bee9f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ptolemy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 12:57:15 compute-0 systemd[1]: libpod-337c1f0cd2a830305c7643001bb288d7c143ba6a951a88b9c9641dab05bee9f6.scope: Deactivated successfully.
Oct 02 12:57:15 compute-0 podman[380290]: 2025-10-02 12:57:15.351076406 +0000 UTC m=+0.137898048 container died 337c1f0cd2a830305c7643001bb288d7c143ba6a951a88b9c9641dab05bee9f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ptolemy, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 12:57:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef6b94e36b6fce5dbbade8ce7da9a6671f0ab7908d7cc0040608f7498f8c84e3-merged.mount: Deactivated successfully.
Oct 02 12:57:15 compute-0 podman[380290]: 2025-10-02 12:57:15.398571903 +0000 UTC m=+0.185393545 container remove 337c1f0cd2a830305c7643001bb288d7c143ba6a951a88b9c9641dab05bee9f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ptolemy, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:57:15 compute-0 systemd[1]: libpod-conmon-337c1f0cd2a830305c7643001bb288d7c143ba6a951a88b9c9641dab05bee9f6.scope: Deactivated successfully.
Oct 02 12:57:15 compute-0 podman[380331]: 2025-10-02 12:57:15.566361445 +0000 UTC m=+0.039936723 container create eddaa734e76f2c65d96db9884dcf5af4818b96de3b944f70839f94f5274d1727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_poitras, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 12:57:15 compute-0 systemd[1]: Started libpod-conmon-eddaa734e76f2c65d96db9884dcf5af4818b96de3b944f70839f94f5274d1727.scope.
Oct 02 12:57:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:57:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f516f633ff03c2674703f120d53f7663aa33d1af386ae63843feb3ea2296653/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:57:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f516f633ff03c2674703f120d53f7663aa33d1af386ae63843feb3ea2296653/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:57:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f516f633ff03c2674703f120d53f7663aa33d1af386ae63843feb3ea2296653/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:57:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f516f633ff03c2674703f120d53f7663aa33d1af386ae63843feb3ea2296653/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:57:15 compute-0 podman[380331]: 2025-10-02 12:57:15.549534412 +0000 UTC m=+0.023109710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:57:15 compute-0 podman[380331]: 2025-10-02 12:57:15.648869272 +0000 UTC m=+0.122444560 container init eddaa734e76f2c65d96db9884dcf5af4818b96de3b944f70839f94f5274d1727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_poitras, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:57:15 compute-0 podman[380331]: 2025-10-02 12:57:15.65774734 +0000 UTC m=+0.131322628 container start eddaa734e76f2c65d96db9884dcf5af4818b96de3b944f70839f94f5274d1727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_poitras, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:57:15 compute-0 podman[380331]: 2025-10-02 12:57:15.66063081 +0000 UTC m=+0.134206088 container attach eddaa734e76f2c65d96db9884dcf5af4818b96de3b944f70839f94f5274d1727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:57:15 compute-0 ceph-mon[73607]: pgmap v2948: 305 pgs: 305 active+clean; 271 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 442 KiB/s rd, 2.1 MiB/s wr, 46 op/s
Oct 02 12:57:15 compute-0 nova_compute[257802]: 2025-10-02 12:57:15.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:16.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2949: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 400 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 02 12:57:16 compute-0 eager_poitras[380348]: {
Oct 02 12:57:16 compute-0 eager_poitras[380348]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:57:16 compute-0 eager_poitras[380348]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:57:16 compute-0 eager_poitras[380348]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:57:16 compute-0 eager_poitras[380348]:         "osd_id": 1,
Oct 02 12:57:16 compute-0 eager_poitras[380348]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:57:16 compute-0 eager_poitras[380348]:         "type": "bluestore"
Oct 02 12:57:16 compute-0 eager_poitras[380348]:     }
Oct 02 12:57:16 compute-0 eager_poitras[380348]: }
Oct 02 12:57:16 compute-0 systemd[1]: libpod-eddaa734e76f2c65d96db9884dcf5af4818b96de3b944f70839f94f5274d1727.scope: Deactivated successfully.
Oct 02 12:57:16 compute-0 podman[380331]: 2025-10-02 12:57:16.46457418 +0000 UTC m=+0.938149478 container died eddaa734e76f2c65d96db9884dcf5af4818b96de3b944f70839f94f5274d1727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 12:57:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f516f633ff03c2674703f120d53f7663aa33d1af386ae63843feb3ea2296653-merged.mount: Deactivated successfully.
Oct 02 12:57:16 compute-0 podman[380331]: 2025-10-02 12:57:16.516220328 +0000 UTC m=+0.989795606 container remove eddaa734e76f2c65d96db9884dcf5af4818b96de3b944f70839f94f5274d1727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 12:57:16 compute-0 systemd[1]: libpod-conmon-eddaa734e76f2c65d96db9884dcf5af4818b96de3b944f70839f94f5274d1727.scope: Deactivated successfully.
Oct 02 12:57:16 compute-0 sudo[380226]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:57:16 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:57:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:57:16 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:57:16 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 8ac016a9-181b-4c59-9dfc-5b88577020f2 does not exist
Oct 02 12:57:16 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev ace2f136-2455-49f5-a2b8-3a7d7d2796ef does not exist
Oct 02 12:57:16 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 36d89f0f-5576-4f8d-943b-afb21d65fd0f does not exist
Oct 02 12:57:16 compute-0 sudo[380382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:16 compute-0 sudo[380382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:16 compute-0 sudo[380382]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:57:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:16.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:57:16 compute-0 sudo[380407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:57:16 compute-0 sudo[380407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:16 compute-0 sudo[380407]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:17 compute-0 ceph-mon[73607]: pgmap v2949: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 400 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 02 12:57:17 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:57:17 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:57:18 compute-0 nova_compute[257802]: 2025-10-02 12:57:18.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:57:18 compute-0 nova_compute[257802]: 2025-10-02 12:57:18.100 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:57:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:57:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:18.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:57:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2950: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 395 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 12:57:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/359895377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:18 compute-0 sudo[380433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:18 compute-0 sudo[380433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:18 compute-0 sudo[380433]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:18 compute-0 sudo[380458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:18 compute-0 sudo[380458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:18 compute-0 sudo[380458]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:18.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:19 compute-0 ceph-mon[73607]: pgmap v2950: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 395 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 12:57:19 compute-0 nova_compute[257802]: 2025-10-02 12:57:19.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:20.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2951: 305 pgs: 305 active+clean; 306 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 403 KiB/s rd, 3.1 MiB/s wr, 82 op/s
Oct 02 12:57:20 compute-0 nova_compute[257802]: 2025-10-02 12:57:20.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:20 compute-0 ceph-mon[73607]: pgmap v2951: 305 pgs: 305 active+clean; 306 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 403 KiB/s rd, 3.1 MiB/s wr, 82 op/s
Oct 02 12:57:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:20.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:21 compute-0 nova_compute[257802]: 2025-10-02 12:57:21.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:57:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:22.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2952: 305 pgs: 305 active+clean; 306 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 217 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Oct 02 12:57:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:22.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:23 compute-0 nova_compute[257802]: 2025-10-02 12:57:23.093 2 DEBUG oslo_concurrency.lockutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Acquiring lock "667910fa-37c3-40ec-a844-293ff0af4324" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:23 compute-0 nova_compute[257802]: 2025-10-02 12:57:23.093 2 DEBUG oslo_concurrency.lockutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "667910fa-37c3-40ec-a844-293ff0af4324" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:23 compute-0 nova_compute[257802]: 2025-10-02 12:57:23.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:57:23 compute-0 nova_compute[257802]: 2025-10-02 12:57:23.124 2 DEBUG nova.compute.manager [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:57:23 compute-0 nova_compute[257802]: 2025-10-02 12:57:23.315 2 DEBUG oslo_concurrency.lockutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:23 compute-0 nova_compute[257802]: 2025-10-02 12:57:23.316 2 DEBUG oslo_concurrency.lockutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:23 compute-0 nova_compute[257802]: 2025-10-02 12:57:23.323 2 DEBUG nova.virt.hardware [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:57:23 compute-0 nova_compute[257802]: 2025-10-02 12:57:23.324 2 INFO nova.compute.claims [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:57:23 compute-0 ceph-mon[73607]: pgmap v2952: 305 pgs: 305 active+clean; 306 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 217 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Oct 02 12:57:23 compute-0 nova_compute[257802]: 2025-10-02 12:57:23.523 2 DEBUG oslo_concurrency.processutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:57:23 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3474672193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:23 compute-0 nova_compute[257802]: 2025-10-02 12:57:23.997 2 DEBUG oslo_concurrency.processutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:24 compute-0 nova_compute[257802]: 2025-10-02 12:57:24.006 2 DEBUG nova.compute.provider_tree [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:57:24 compute-0 nova_compute[257802]: 2025-10-02 12:57:24.027 2 DEBUG nova.scheduler.client.report [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:57:24 compute-0 nova_compute[257802]: 2025-10-02 12:57:24.052 2 DEBUG oslo_concurrency.lockutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:24 compute-0 nova_compute[257802]: 2025-10-02 12:57:24.053 2 DEBUG nova.compute.manager [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:57:24 compute-0 nova_compute[257802]: 2025-10-02 12:57:24.068 2 DEBUG oslo_concurrency.lockutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "01108902-768b-4bee-baff-11d5854e2f77" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:24 compute-0 nova_compute[257802]: 2025-10-02 12:57:24.069 2 DEBUG oslo_concurrency.lockutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:24.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:24 compute-0 nova_compute[257802]: 2025-10-02 12:57:24.123 2 DEBUG nova.compute.manager [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:57:24 compute-0 nova_compute[257802]: 2025-10-02 12:57:24.124 2 DEBUG nova.network.neutron [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:57:24 compute-0 nova_compute[257802]: 2025-10-02 12:57:24.128 2 DEBUG nova.compute.manager [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:57:24 compute-0 nova_compute[257802]: 2025-10-02 12:57:24.230 2 INFO nova.virt.libvirt.driver [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:57:24 compute-0 nova_compute[257802]: 2025-10-02 12:57:24.268 2 DEBUG nova.compute.manager [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:57:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2953: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 227 KiB/s rd, 2.3 MiB/s wr, 69 op/s
Oct 02 12:57:24 compute-0 nova_compute[257802]: 2025-10-02 12:57:24.294 2 DEBUG oslo_concurrency.lockutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:24 compute-0 nova_compute[257802]: 2025-10-02 12:57:24.295 2 DEBUG oslo_concurrency.lockutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:24 compute-0 nova_compute[257802]: 2025-10-02 12:57:24.306 2 DEBUG nova.virt.hardware [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:57:24 compute-0 nova_compute[257802]: 2025-10-02 12:57:24.307 2 INFO nova.compute.claims [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:57:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3474672193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:24 compute-0 nova_compute[257802]: 2025-10-02 12:57:24.763 2 DEBUG nova.compute.manager [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:57:24 compute-0 nova_compute[257802]: 2025-10-02 12:57:24.764 2 DEBUG nova.virt.libvirt.driver [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:57:24 compute-0 nova_compute[257802]: 2025-10-02 12:57:24.767 2 INFO nova.virt.libvirt.driver [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Creating image(s)
Oct 02 12:57:24 compute-0 nova_compute[257802]: 2025-10-02 12:57:24.799 2 DEBUG nova.storage.rbd_utils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] rbd image 667910fa-37c3-40ec-a844-293ff0af4324_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:24 compute-0 nova_compute[257802]: 2025-10-02 12:57:24.836 2 DEBUG nova.storage.rbd_utils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] rbd image 667910fa-37c3-40ec-a844-293ff0af4324_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:24 compute-0 nova_compute[257802]: 2025-10-02 12:57:24.877 2 DEBUG nova.storage.rbd_utils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] rbd image 667910fa-37c3-40ec-a844-293ff0af4324_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:24 compute-0 nova_compute[257802]: 2025-10-02 12:57:24.881 2 DEBUG oslo_concurrency.processutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:24 compute-0 nova_compute[257802]: 2025-10-02 12:57:24.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:57:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:24.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:57:24 compute-0 podman[380544]: 2025-10-02 12:57:24.934469152 +0000 UTC m=+0.066719891 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 02 12:57:24 compute-0 podman[380561]: 2025-10-02 12:57:24.944149769 +0000 UTC m=+0.068979556 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:57:24 compute-0 nova_compute[257802]: 2025-10-02 12:57:24.962 2 DEBUG oslo_concurrency.processutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:24 compute-0 nova_compute[257802]: 2025-10-02 12:57:24.963 2 DEBUG oslo_concurrency.lockutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:24 compute-0 nova_compute[257802]: 2025-10-02 12:57:24.963 2 DEBUG oslo_concurrency.lockutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:24 compute-0 nova_compute[257802]: 2025-10-02 12:57:24.964 2 DEBUG oslo_concurrency.lockutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:24 compute-0 podman[380552]: 2025-10-02 12:57:24.977345794 +0000 UTC m=+0.105597624 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:57:25 compute-0 nova_compute[257802]: 2025-10-02 12:57:25.003 2 DEBUG nova.storage.rbd_utils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] rbd image 667910fa-37c3-40ec-a844-293ff0af4324_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:25 compute-0 nova_compute[257802]: 2025-10-02 12:57:25.009 2 DEBUG oslo_concurrency.processutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 667910fa-37c3-40ec-a844-293ff0af4324_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:25 compute-0 nova_compute[257802]: 2025-10-02 12:57:25.046 2 DEBUG oslo_concurrency.processutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:25 compute-0 nova_compute[257802]: 2025-10-02 12:57:25.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:57:25 compute-0 nova_compute[257802]: 2025-10-02 12:57:25.317 2 DEBUG oslo_concurrency.processutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 667910fa-37c3-40ec-a844-293ff0af4324_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:25 compute-0 ceph-mon[73607]: pgmap v2953: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 227 KiB/s rd, 2.3 MiB/s wr, 69 op/s
Oct 02 12:57:25 compute-0 nova_compute[257802]: 2025-10-02 12:57:25.407 2 DEBUG nova.storage.rbd_utils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] resizing rbd image 667910fa-37c3-40ec-a844-293ff0af4324_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:57:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:57:25 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/58234155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:25 compute-0 nova_compute[257802]: 2025-10-02 12:57:25.534 2 DEBUG nova.objects.instance [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lazy-loading 'migration_context' on Instance uuid 667910fa-37c3-40ec-a844-293ff0af4324 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:57:25 compute-0 nova_compute[257802]: 2025-10-02 12:57:25.537 2 DEBUG oslo_concurrency.processutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:25 compute-0 nova_compute[257802]: 2025-10-02 12:57:25.547 2 DEBUG nova.compute.provider_tree [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:57:25 compute-0 nova_compute[257802]: 2025-10-02 12:57:25.567 2 DEBUG nova.virt.libvirt.driver [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:57:25 compute-0 nova_compute[257802]: 2025-10-02 12:57:25.568 2 DEBUG nova.virt.libvirt.driver [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Ensure instance console log exists: /var/lib/nova/instances/667910fa-37c3-40ec-a844-293ff0af4324/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:57:25 compute-0 nova_compute[257802]: 2025-10-02 12:57:25.568 2 DEBUG oslo_concurrency.lockutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:25 compute-0 nova_compute[257802]: 2025-10-02 12:57:25.569 2 DEBUG oslo_concurrency.lockutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:25 compute-0 nova_compute[257802]: 2025-10-02 12:57:25.569 2 DEBUG oslo_concurrency.lockutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:25 compute-0 nova_compute[257802]: 2025-10-02 12:57:25.570 2 DEBUG nova.scheduler.client.report [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:57:25 compute-0 nova_compute[257802]: 2025-10-02 12:57:25.619 2 DEBUG oslo_concurrency.lockutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.324s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:25 compute-0 nova_compute[257802]: 2025-10-02 12:57:25.620 2 DEBUG nova.compute.manager [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:57:25 compute-0 nova_compute[257802]: 2025-10-02 12:57:25.691 2 DEBUG nova.compute.manager [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:57:25 compute-0 nova_compute[257802]: 2025-10-02 12:57:25.691 2 DEBUG nova.network.neutron [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:57:25 compute-0 nova_compute[257802]: 2025-10-02 12:57:25.734 2 INFO nova.virt.libvirt.driver [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:57:25 compute-0 nova_compute[257802]: 2025-10-02 12:57:25.759 2 DEBUG nova.compute.manager [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:57:25 compute-0 nova_compute[257802]: 2025-10-02 12:57:25.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:25 compute-0 nova_compute[257802]: 2025-10-02 12:57:25.895 2 DEBUG nova.compute.manager [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:57:25 compute-0 nova_compute[257802]: 2025-10-02 12:57:25.897 2 DEBUG nova.virt.libvirt.driver [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:57:25 compute-0 nova_compute[257802]: 2025-10-02 12:57:25.898 2 INFO nova.virt.libvirt.driver [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Creating image(s)
Oct 02 12:57:25 compute-0 nova_compute[257802]: 2025-10-02 12:57:25.934 2 DEBUG nova.storage.rbd_utils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] rbd image 01108902-768b-4bee-baff-11d5854e2f77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:25 compute-0 nova_compute[257802]: 2025-10-02 12:57:25.973 2 DEBUG nova.storage.rbd_utils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] rbd image 01108902-768b-4bee-baff-11d5854e2f77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:26 compute-0 nova_compute[257802]: 2025-10-02 12:57:26.005 2 DEBUG nova.storage.rbd_utils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] rbd image 01108902-768b-4bee-baff-11d5854e2f77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:26 compute-0 nova_compute[257802]: 2025-10-02 12:57:26.010 2 DEBUG oslo_concurrency.processutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:26 compute-0 nova_compute[257802]: 2025-10-02 12:57:26.077 2 DEBUG oslo_concurrency.processutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:26 compute-0 nova_compute[257802]: 2025-10-02 12:57:26.078 2 DEBUG oslo_concurrency.lockutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:26 compute-0 nova_compute[257802]: 2025-10-02 12:57:26.079 2 DEBUG oslo_concurrency.lockutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:26 compute-0 nova_compute[257802]: 2025-10-02 12:57:26.079 2 DEBUG oslo_concurrency.lockutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:26 compute-0 nova_compute[257802]: 2025-10-02 12:57:26.111 2 DEBUG nova.storage.rbd_utils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] rbd image 01108902-768b-4bee-baff-11d5854e2f77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:26 compute-0 nova_compute[257802]: 2025-10-02 12:57:26.115 2 DEBUG oslo_concurrency.processutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 01108902-768b-4bee-baff-11d5854e2f77_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:57:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:26.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:57:26 compute-0 nova_compute[257802]: 2025-10-02 12:57:26.141 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:57:26 compute-0 nova_compute[257802]: 2025-10-02 12:57:26.264 2 DEBUG nova.network.neutron [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Successfully created port: acafd85c-21eb-448e-b19a-ef74dc7a64f9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:57:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2954: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 190 KiB/s rd, 1.9 MiB/s wr, 56 op/s
Oct 02 12:57:26 compute-0 nova_compute[257802]: 2025-10-02 12:57:26.328 2 DEBUG nova.policy [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3299a1aed5af4843a91417a3f181c172', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e7168b5b1300495d90592b195824729a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:57:26 compute-0 nova_compute[257802]: 2025-10-02 12:57:26.386 2 DEBUG oslo_concurrency.processutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 01108902-768b-4bee-baff-11d5854e2f77_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.271s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/58234155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2776539741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:26 compute-0 nova_compute[257802]: 2025-10-02 12:57:26.471 2 DEBUG nova.storage.rbd_utils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] resizing rbd image 01108902-768b-4bee-baff-11d5854e2f77_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:57:26 compute-0 nova_compute[257802]: 2025-10-02 12:57:26.590 2 DEBUG nova.objects.instance [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'migration_context' on Instance uuid 01108902-768b-4bee-baff-11d5854e2f77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:57:26 compute-0 nova_compute[257802]: 2025-10-02 12:57:26.604 2 DEBUG nova.virt.libvirt.driver [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:57:26 compute-0 nova_compute[257802]: 2025-10-02 12:57:26.605 2 DEBUG nova.virt.libvirt.driver [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Ensure instance console log exists: /var/lib/nova/instances/01108902-768b-4bee-baff-11d5854e2f77/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:57:26 compute-0 nova_compute[257802]: 2025-10-02 12:57:26.605 2 DEBUG oslo_concurrency.lockutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:26 compute-0 nova_compute[257802]: 2025-10-02 12:57:26.606 2 DEBUG oslo_concurrency.lockutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:26 compute-0 nova_compute[257802]: 2025-10-02 12:57:26.606 2 DEBUG oslo_concurrency.lockutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:57:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:26.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:57:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:26.979 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:26.980 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:26.980 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:27 compute-0 ceph-mon[73607]: pgmap v2954: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 190 KiB/s rd, 1.9 MiB/s wr, 56 op/s
Oct 02 12:57:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1647208712' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:28 compute-0 nova_compute[257802]: 2025-10-02 12:57:28.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:57:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:28.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:28 compute-0 nova_compute[257802]: 2025-10-02 12:57:28.150 2 DEBUG nova.network.neutron [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Successfully created port: 612e6054-5ce1-486c-aa51-2e5d47567ef3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:57:28 compute-0 nova_compute[257802]: 2025-10-02 12:57:28.153 2 DEBUG nova.network.neutron [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Successfully updated port: acafd85c-21eb-448e-b19a-ef74dc7a64f9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:57:28 compute-0 nova_compute[257802]: 2025-10-02 12:57:28.196 2 DEBUG oslo_concurrency.lockutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Acquiring lock "refresh_cache-667910fa-37c3-40ec-a844-293ff0af4324" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:57:28 compute-0 nova_compute[257802]: 2025-10-02 12:57:28.197 2 DEBUG oslo_concurrency.lockutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Acquired lock "refresh_cache-667910fa-37c3-40ec-a844-293ff0af4324" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:57:28 compute-0 nova_compute[257802]: 2025-10-02 12:57:28.197 2 DEBUG nova.network.neutron [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:57:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2955: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 02 12:57:28 compute-0 nova_compute[257802]: 2025-10-02 12:57:28.495 2 DEBUG nova.network.neutron [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:57:28 compute-0 nova_compute[257802]: 2025-10-02 12:57:28.747 2 DEBUG nova.compute.manager [req-1b19d3b1-6612-462d-98c1-de7257f0be3a req-068a092b-16a0-4228-9d28-7a6b8ada7bfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Received event network-changed-acafd85c-21eb-448e-b19a-ef74dc7a64f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:28 compute-0 nova_compute[257802]: 2025-10-02 12:57:28.747 2 DEBUG nova.compute.manager [req-1b19d3b1-6612-462d-98c1-de7257f0be3a req-068a092b-16a0-4228-9d28-7a6b8ada7bfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Refreshing instance network info cache due to event network-changed-acafd85c-21eb-448e-b19a-ef74dc7a64f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:57:28 compute-0 nova_compute[257802]: 2025-10-02 12:57:28.748 2 DEBUG oslo_concurrency.lockutils [req-1b19d3b1-6612-462d-98c1-de7257f0be3a req-068a092b-16a0-4228-9d28-7a6b8ada7bfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-667910fa-37c3-40ec-a844-293ff0af4324" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:57:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:57:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:28.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:57:29 compute-0 ceph-mon[73607]: pgmap v2955: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 02 12:57:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/754908909' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:29 compute-0 nova_compute[257802]: 2025-10-02 12:57:29.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:30.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2956: 305 pgs: 305 active+clean; 427 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 63 KiB/s rd, 5.7 MiB/s wr, 101 op/s
Oct 02 12:57:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2604117681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:30 compute-0 nova_compute[257802]: 2025-10-02 12:57:30.522 2 DEBUG nova.network.neutron [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Successfully updated port: 612e6054-5ce1-486c-aa51-2e5d47567ef3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:57:30 compute-0 nova_compute[257802]: 2025-10-02 12:57:30.576 2 DEBUG oslo_concurrency.lockutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "refresh_cache-01108902-768b-4bee-baff-11d5854e2f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:57:30 compute-0 nova_compute[257802]: 2025-10-02 12:57:30.577 2 DEBUG oslo_concurrency.lockutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquired lock "refresh_cache-01108902-768b-4bee-baff-11d5854e2f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:57:30 compute-0 nova_compute[257802]: 2025-10-02 12:57:30.577 2 DEBUG nova.network.neutron [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:57:30 compute-0 nova_compute[257802]: 2025-10-02 12:57:30.712 2 DEBUG nova.compute.manager [req-7d9d4039-c317-4ff6-a4db-6c19d49cf5a8 req-fe051509-b7b9-4c43-a11e-2ff364b8a300 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Received event network-changed-612e6054-5ce1-486c-aa51-2e5d47567ef3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:30 compute-0 nova_compute[257802]: 2025-10-02 12:57:30.713 2 DEBUG nova.compute.manager [req-7d9d4039-c317-4ff6-a4db-6c19d49cf5a8 req-fe051509-b7b9-4c43-a11e-2ff364b8a300 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Refreshing instance network info cache due to event network-changed-612e6054-5ce1-486c-aa51-2e5d47567ef3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:57:30 compute-0 nova_compute[257802]: 2025-10-02 12:57:30.714 2 DEBUG oslo_concurrency.lockutils [req-7d9d4039-c317-4ff6-a4db-6c19d49cf5a8 req-fe051509-b7b9-4c43-a11e-2ff364b8a300 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-01108902-768b-4bee-baff-11d5854e2f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:57:30 compute-0 nova_compute[257802]: 2025-10-02 12:57:30.781 2 DEBUG nova.network.neutron [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:57:30 compute-0 nova_compute[257802]: 2025-10-02 12:57:30.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:30 compute-0 nova_compute[257802]: 2025-10-02 12:57:30.858 2 DEBUG nova.network.neutron [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Updating instance_info_cache with network_info: [{"id": "acafd85c-21eb-448e-b19a-ef74dc7a64f9", "address": "fa:16:3e:e6:ed:4f", "network": {"id": "9b3e5364-0567-4be5-b771-728ed7dd0ab7", "bridge": "br-int", "label": "tempest-TestServerMultinode-1015877619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e08031c685814456aa6c43bcc8f98574", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacafd85c-21", "ovs_interfaceid": "acafd85c-21eb-448e-b19a-ef74dc7a64f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:57:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:57:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:30.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:57:30 compute-0 podman[380922]: 2025-10-02 12:57:30.993282045 +0000 UTC m=+0.122762597 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 12:57:31 compute-0 nova_compute[257802]: 2025-10-02 12:57:31.093 2 DEBUG oslo_concurrency.lockutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Releasing lock "refresh_cache-667910fa-37c3-40ec-a844-293ff0af4324" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:57:31 compute-0 nova_compute[257802]: 2025-10-02 12:57:31.093 2 DEBUG nova.compute.manager [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Instance network_info: |[{"id": "acafd85c-21eb-448e-b19a-ef74dc7a64f9", "address": "fa:16:3e:e6:ed:4f", "network": {"id": "9b3e5364-0567-4be5-b771-728ed7dd0ab7", "bridge": "br-int", "label": "tempest-TestServerMultinode-1015877619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e08031c685814456aa6c43bcc8f98574", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacafd85c-21", "ovs_interfaceid": "acafd85c-21eb-448e-b19a-ef74dc7a64f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:57:31 compute-0 nova_compute[257802]: 2025-10-02 12:57:31.094 2 DEBUG oslo_concurrency.lockutils [req-1b19d3b1-6612-462d-98c1-de7257f0be3a req-068a092b-16a0-4228-9d28-7a6b8ada7bfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-667910fa-37c3-40ec-a844-293ff0af4324" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:57:31 compute-0 nova_compute[257802]: 2025-10-02 12:57:31.094 2 DEBUG nova.network.neutron [req-1b19d3b1-6612-462d-98c1-de7257f0be3a req-068a092b-16a0-4228-9d28-7a6b8ada7bfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Refreshing network info cache for port acafd85c-21eb-448e-b19a-ef74dc7a64f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:57:31 compute-0 nova_compute[257802]: 2025-10-02 12:57:31.098 2 DEBUG nova.virt.libvirt.driver [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Start _get_guest_xml network_info=[{"id": "acafd85c-21eb-448e-b19a-ef74dc7a64f9", "address": "fa:16:3e:e6:ed:4f", "network": {"id": "9b3e5364-0567-4be5-b771-728ed7dd0ab7", "bridge": "br-int", "label": "tempest-TestServerMultinode-1015877619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e08031c685814456aa6c43bcc8f98574", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacafd85c-21", "ovs_interfaceid": "acafd85c-21eb-448e-b19a-ef74dc7a64f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:57:31 compute-0 nova_compute[257802]: 2025-10-02 12:57:31.103 2 WARNING nova.virt.libvirt.driver [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:57:31 compute-0 nova_compute[257802]: 2025-10-02 12:57:31.107 2 DEBUG nova.virt.libvirt.host [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:57:31 compute-0 nova_compute[257802]: 2025-10-02 12:57:31.108 2 DEBUG nova.virt.libvirt.host [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:57:31 compute-0 nova_compute[257802]: 2025-10-02 12:57:31.111 2 DEBUG nova.virt.libvirt.host [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:57:31 compute-0 nova_compute[257802]: 2025-10-02 12:57:31.112 2 DEBUG nova.virt.libvirt.host [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:57:31 compute-0 nova_compute[257802]: 2025-10-02 12:57:31.113 2 DEBUG nova.virt.libvirt.driver [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:57:31 compute-0 nova_compute[257802]: 2025-10-02 12:57:31.113 2 DEBUG nova.virt.hardware [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:57:31 compute-0 nova_compute[257802]: 2025-10-02 12:57:31.114 2 DEBUG nova.virt.hardware [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:57:31 compute-0 nova_compute[257802]: 2025-10-02 12:57:31.114 2 DEBUG nova.virt.hardware [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:57:31 compute-0 nova_compute[257802]: 2025-10-02 12:57:31.114 2 DEBUG nova.virt.hardware [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:57:31 compute-0 nova_compute[257802]: 2025-10-02 12:57:31.114 2 DEBUG nova.virt.hardware [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:57:31 compute-0 nova_compute[257802]: 2025-10-02 12:57:31.115 2 DEBUG nova.virt.hardware [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:57:31 compute-0 nova_compute[257802]: 2025-10-02 12:57:31.115 2 DEBUG nova.virt.hardware [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:57:31 compute-0 nova_compute[257802]: 2025-10-02 12:57:31.115 2 DEBUG nova.virt.hardware [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:57:31 compute-0 nova_compute[257802]: 2025-10-02 12:57:31.115 2 DEBUG nova.virt.hardware [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:57:31 compute-0 nova_compute[257802]: 2025-10-02 12:57:31.116 2 DEBUG nova.virt.hardware [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:57:31 compute-0 nova_compute[257802]: 2025-10-02 12:57:31.116 2 DEBUG nova.virt.hardware [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:57:31 compute-0 nova_compute[257802]: 2025-10-02 12:57:31.119 2 DEBUG oslo_concurrency.processutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:57:31 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4130114507' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:31 compute-0 nova_compute[257802]: 2025-10-02 12:57:31.596 2 DEBUG oslo_concurrency.processutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:31 compute-0 nova_compute[257802]: 2025-10-02 12:57:31.623 2 DEBUG nova.storage.rbd_utils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] rbd image 667910fa-37c3-40ec-a844-293ff0af4324_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:31 compute-0 nova_compute[257802]: 2025-10-02 12:57:31.627 2 DEBUG oslo_concurrency.processutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:57:32 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/13921474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.073 2 DEBUG oslo_concurrency.processutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.074 2 DEBUG nova.virt.libvirt.vif [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:57:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-860206562',display_name='tempest-TestServerMultinode-server-860206562',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testservermultinode-server-860206562',id=190,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8668725b86704fdcacbb467738b51154',ramdisk_id='',reservation_id='r-j4zxv7lt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-1785572191',owner_user_name='tempest-TestServerMultinode-1785572191-
project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:57:24Z,user_data=None,user_id='de066041e985417da95924c04915bd11',uuid=667910fa-37c3-40ec-a844-293ff0af4324,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "acafd85c-21eb-448e-b19a-ef74dc7a64f9", "address": "fa:16:3e:e6:ed:4f", "network": {"id": "9b3e5364-0567-4be5-b771-728ed7dd0ab7", "bridge": "br-int", "label": "tempest-TestServerMultinode-1015877619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e08031c685814456aa6c43bcc8f98574", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacafd85c-21", "ovs_interfaceid": "acafd85c-21eb-448e-b19a-ef74dc7a64f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.075 2 DEBUG nova.network.os_vif_util [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Converting VIF {"id": "acafd85c-21eb-448e-b19a-ef74dc7a64f9", "address": "fa:16:3e:e6:ed:4f", "network": {"id": "9b3e5364-0567-4be5-b771-728ed7dd0ab7", "bridge": "br-int", "label": "tempest-TestServerMultinode-1015877619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e08031c685814456aa6c43bcc8f98574", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacafd85c-21", "ovs_interfaceid": "acafd85c-21eb-448e-b19a-ef74dc7a64f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.075 2 DEBUG nova.network.os_vif_util [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:ed:4f,bridge_name='br-int',has_traffic_filtering=True,id=acafd85c-21eb-448e-b19a-ef74dc7a64f9,network=Network(9b3e5364-0567-4be5-b771-728ed7dd0ab7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacafd85c-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.076 2 DEBUG nova.objects.instance [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lazy-loading 'pci_devices' on Instance uuid 667910fa-37c3-40ec-a844-293ff0af4324 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.097 2 DEBUG nova.virt.libvirt.driver [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:57:32 compute-0 nova_compute[257802]:   <uuid>667910fa-37c3-40ec-a844-293ff0af4324</uuid>
Oct 02 12:57:32 compute-0 nova_compute[257802]:   <name>instance-000000be</name>
Oct 02 12:57:32 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:57:32 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:57:32 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:57:32 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:       <nova:name>tempest-TestServerMultinode-server-860206562</nova:name>
Oct 02 12:57:32 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:57:31</nova:creationTime>
Oct 02 12:57:32 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:57:32 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:57:32 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:57:32 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:57:32 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:57:32 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:57:32 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:57:32 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:57:32 compute-0 nova_compute[257802]:         <nova:user uuid="de066041e985417da95924c04915bd11">tempest-TestServerMultinode-1785572191-project-admin</nova:user>
Oct 02 12:57:32 compute-0 nova_compute[257802]:         <nova:project uuid="8668725b86704fdcacbb467738b51154">tempest-TestServerMultinode-1785572191</nova:project>
Oct 02 12:57:32 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:57:32 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:57:32 compute-0 nova_compute[257802]:         <nova:port uuid="acafd85c-21eb-448e-b19a-ef74dc7a64f9">
Oct 02 12:57:32 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:57:32 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:57:32 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:57:32 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <system>
Oct 02 12:57:32 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:57:32 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:57:32 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:57:32 compute-0 nova_compute[257802]:       <entry name="serial">667910fa-37c3-40ec-a844-293ff0af4324</entry>
Oct 02 12:57:32 compute-0 nova_compute[257802]:       <entry name="uuid">667910fa-37c3-40ec-a844-293ff0af4324</entry>
Oct 02 12:57:32 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     </system>
Oct 02 12:57:32 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:57:32 compute-0 nova_compute[257802]:   <os>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:   </os>
Oct 02 12:57:32 compute-0 nova_compute[257802]:   <features>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:   </features>
Oct 02 12:57:32 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:57:32 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:57:32 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:57:32 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/667910fa-37c3-40ec-a844-293ff0af4324_disk">
Oct 02 12:57:32 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:       </source>
Oct 02 12:57:32 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:57:32 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:57:32 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:57:32 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/667910fa-37c3-40ec-a844-293ff0af4324_disk.config">
Oct 02 12:57:32 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:       </source>
Oct 02 12:57:32 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:57:32 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:57:32 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:57:32 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:e6:ed:4f"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:       <target dev="tapacafd85c-21"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:57:32 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/667910fa-37c3-40ec-a844-293ff0af4324/console.log" append="off"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <video>
Oct 02 12:57:32 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     </video>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:57:32 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:57:32 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:57:32 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:57:32 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:57:32 compute-0 nova_compute[257802]: </domain>
Oct 02 12:57:32 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.098 2 DEBUG nova.compute.manager [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Preparing to wait for external event network-vif-plugged-acafd85c-21eb-448e-b19a-ef74dc7a64f9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.098 2 DEBUG oslo_concurrency.lockutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Acquiring lock "667910fa-37c3-40ec-a844-293ff0af4324-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.099 2 DEBUG oslo_concurrency.lockutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "667910fa-37c3-40ec-a844-293ff0af4324-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.099 2 DEBUG oslo_concurrency.lockutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "667910fa-37c3-40ec-a844-293ff0af4324-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.100 2 DEBUG nova.virt.libvirt.vif [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:57:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-860206562',display_name='tempest-TestServerMultinode-server-860206562',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testservermultinode-server-860206562',id=190,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8668725b86704fdcacbb467738b51154',ramdisk_id='',reservation_id='r-j4zxv7lt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-1785572191',owner_user_name='tempest-TestServerMultinode-1785572191-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:57:24Z,user_data=None,user_id='de066041e985417da95924c04915bd11',uuid=667910fa-37c3-40ec-a844-293ff0af4324,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "acafd85c-21eb-448e-b19a-ef74dc7a64f9", "address": "fa:16:3e:e6:ed:4f", "network": {"id": "9b3e5364-0567-4be5-b771-728ed7dd0ab7", "bridge": "br-int", "label": "tempest-TestServerMultinode-1015877619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e08031c685814456aa6c43bcc8f98574", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacafd85c-21", "ovs_interfaceid": "acafd85c-21eb-448e-b19a-ef74dc7a64f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.100 2 DEBUG nova.network.os_vif_util [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Converting VIF {"id": "acafd85c-21eb-448e-b19a-ef74dc7a64f9", "address": "fa:16:3e:e6:ed:4f", "network": {"id": "9b3e5364-0567-4be5-b771-728ed7dd0ab7", "bridge": "br-int", "label": "tempest-TestServerMultinode-1015877619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e08031c685814456aa6c43bcc8f98574", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacafd85c-21", "ovs_interfaceid": "acafd85c-21eb-448e-b19a-ef74dc7a64f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.100 2 DEBUG nova.network.os_vif_util [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:ed:4f,bridge_name='br-int',has_traffic_filtering=True,id=acafd85c-21eb-448e-b19a-ef74dc7a64f9,network=Network(9b3e5364-0567-4be5-b771-728ed7dd0ab7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacafd85c-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.101 2 DEBUG os_vif [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:ed:4f,bridge_name='br-int',has_traffic_filtering=True,id=acafd85c-21eb-448e-b19a-ef74dc7a64f9,network=Network(9b3e5364-0567-4be5-b771-728ed7dd0ab7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacafd85c-21') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.102 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.102 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.105 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapacafd85c-21, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.106 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapacafd85c-21, col_values=(('external_ids', {'iface-id': 'acafd85c-21eb-448e-b19a-ef74dc7a64f9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e6:ed:4f', 'vm-uuid': '667910fa-37c3-40ec-a844-293ff0af4324'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:32 compute-0 NetworkManager[44987]: <info>  [1759409852.1083] manager: (tapacafd85c-21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/381)
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.115 2 INFO os_vif [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:ed:4f,bridge_name='br-int',has_traffic_filtering=True,id=acafd85c-21eb-448e-b19a-ef74dc7a64f9,network=Network(9b3e5364-0567-4be5-b771-728ed7dd0ab7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacafd85c-21')
Oct 02 12:57:32 compute-0 ceph-mon[73607]: pgmap v2956: 305 pgs: 305 active+clean; 427 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 63 KiB/s rd, 5.7 MiB/s wr, 101 op/s
Oct 02 12:57:32 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2108199638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:32.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.165 2 DEBUG nova.virt.libvirt.driver [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.165 2 DEBUG nova.virt.libvirt.driver [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.165 2 DEBUG nova.virt.libvirt.driver [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] No VIF found with MAC fa:16:3e:e6:ed:4f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.166 2 INFO nova.virt.libvirt.driver [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Using config drive
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.196 2 DEBUG nova.storage.rbd_utils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] rbd image 667910fa-37c3-40ec-a844-293ff0af4324_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2957: 305 pgs: 305 active+clean; 427 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 56 KiB/s rd, 4.8 MiB/s wr, 85 op/s
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.453 2 DEBUG nova.network.neutron [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Updating instance_info_cache with network_info: [{"id": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "address": "fa:16:3e:3d:bc:3b", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612e6054-5c", "ovs_interfaceid": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.507 2 DEBUG oslo_concurrency.lockutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Releasing lock "refresh_cache-01108902-768b-4bee-baff-11d5854e2f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.507 2 DEBUG nova.compute.manager [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Instance network_info: |[{"id": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "address": "fa:16:3e:3d:bc:3b", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612e6054-5c", "ovs_interfaceid": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.508 2 DEBUG oslo_concurrency.lockutils [req-7d9d4039-c317-4ff6-a4db-6c19d49cf5a8 req-fe051509-b7b9-4c43-a11e-2ff364b8a300 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-01108902-768b-4bee-baff-11d5854e2f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.508 2 DEBUG nova.network.neutron [req-7d9d4039-c317-4ff6-a4db-6c19d49cf5a8 req-fe051509-b7b9-4c43-a11e-2ff364b8a300 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Refreshing network info cache for port 612e6054-5ce1-486c-aa51-2e5d47567ef3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.511 2 DEBUG nova.virt.libvirt.driver [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Start _get_guest_xml network_info=[{"id": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "address": "fa:16:3e:3d:bc:3b", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612e6054-5c", "ovs_interfaceid": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.524 2 WARNING nova.virt.libvirt.driver [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.534 2 DEBUG nova.virt.libvirt.host [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.535 2 DEBUG nova.virt.libvirt.host [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.543 2 DEBUG nova.virt.libvirt.host [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.544 2 DEBUG nova.virt.libvirt.host [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.545 2 DEBUG nova.virt.libvirt.driver [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.546 2 DEBUG nova.virt.hardware [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.546 2 DEBUG nova.virt.hardware [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.546 2 DEBUG nova.virt.hardware [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.546 2 DEBUG nova.virt.hardware [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.546 2 DEBUG nova.virt.hardware [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.546 2 DEBUG nova.virt.hardware [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.547 2 DEBUG nova.virt.hardware [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.547 2 DEBUG nova.virt.hardware [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.547 2 DEBUG nova.virt.hardware [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.547 2 DEBUG nova.virt.hardware [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.547 2 DEBUG nova.virt.hardware [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.550 2 DEBUG oslo_concurrency.processutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.708 2 INFO nova.virt.libvirt.driver [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Creating config drive at /var/lib/nova/instances/667910fa-37c3-40ec-a844-293ff0af4324/disk.config
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.713 2 DEBUG oslo_concurrency.processutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/667910fa-37c3-40ec-a844-293ff0af4324/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7ist9ngh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.846 2 DEBUG oslo_concurrency.processutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/667910fa-37c3-40ec-a844-293ff0af4324/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7ist9ngh" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.882 2 DEBUG nova.storage.rbd_utils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] rbd image 667910fa-37c3-40ec-a844-293ff0af4324_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.885 2 DEBUG oslo_concurrency.processutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/667910fa-37c3-40ec-a844-293ff0af4324/disk.config 667910fa-37c3-40ec-a844-293ff0af4324_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:32.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:57:32 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/658027194' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:32 compute-0 nova_compute[257802]: 2025-10-02 12:57:32.997 2 DEBUG oslo_concurrency.processutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.026 2 DEBUG nova.storage.rbd_utils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] rbd image 01108902-768b-4bee-baff-11d5854e2f77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.029 2 DEBUG oslo_concurrency.processutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4130114507' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/13921474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:33 compute-0 ceph-mon[73607]: pgmap v2957: 305 pgs: 305 active+clean; 427 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 56 KiB/s rd, 4.8 MiB/s wr, 85 op/s
Oct 02 12:57:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/658027194' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:57:33 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/576544436' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.474 2 DEBUG oslo_concurrency.processutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.476 2 DEBUG nova.virt.libvirt.vif [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:57:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-1568055795',display_name='tempest-AttachVolumeTestJSON-server-1568055795',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-1568055795',id=191,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCbS2mKT+0z/yQuxuzfwCwsXKWxC0UzeLnKuZT+sZHjADBetDcCrAmsjHh5DZcFc1laecrJqG3Gw7KVMmBAo01ad6Z646e7xI9MDC+TYltwo6ghxZsSIWSKPTUL61VunAQ==',key_name='tempest-keypair-1834836644',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e7168b5b1300495d90592b195824729a',ramdisk_id='',reservation_id='r-ikr3sr2k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeTestJSON-398185718',owner_user_name='tempest-AttachVolumeTestJSON-398185718-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:57:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3299a1aed5af4843a91417a3f181c172',uuid=01108902-768b-4bee-baff-11d5854e2f77,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "address": "fa:16:3e:3d:bc:3b", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612e6054-5c", "ovs_interfaceid": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.477 2 DEBUG nova.network.os_vif_util [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Converting VIF {"id": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "address": "fa:16:3e:3d:bc:3b", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612e6054-5c", "ovs_interfaceid": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.477 2 DEBUG nova.network.os_vif_util [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:bc:3b,bridge_name='br-int',has_traffic_filtering=True,id=612e6054-5ce1-486c-aa51-2e5d47567ef3,network=Network(ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap612e6054-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.478 2 DEBUG nova.objects.instance [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'pci_devices' on Instance uuid 01108902-768b-4bee-baff-11d5854e2f77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.480 2 DEBUG oslo_concurrency.processutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/667910fa-37c3-40ec-a844-293ff0af4324/disk.config 667910fa-37c3-40ec-a844-293ff0af4324_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.481 2 INFO nova.virt.libvirt.driver [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Deleting local config drive /var/lib/nova/instances/667910fa-37c3-40ec-a844-293ff0af4324/disk.config because it was imported into RBD.
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.522 2 DEBUG nova.virt.libvirt.driver [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:57:33 compute-0 nova_compute[257802]:   <uuid>01108902-768b-4bee-baff-11d5854e2f77</uuid>
Oct 02 12:57:33 compute-0 nova_compute[257802]:   <name>instance-000000bf</name>
Oct 02 12:57:33 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:57:33 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:57:33 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:57:33 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:       <nova:name>tempest-AttachVolumeTestJSON-server-1568055795</nova:name>
Oct 02 12:57:33 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:57:32</nova:creationTime>
Oct 02 12:57:33 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:57:33 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:57:33 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:57:33 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:57:33 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:57:33 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:57:33 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:57:33 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:57:33 compute-0 nova_compute[257802]:         <nova:user uuid="3299a1aed5af4843a91417a3f181c172">tempest-AttachVolumeTestJSON-398185718-project-member</nova:user>
Oct 02 12:57:33 compute-0 nova_compute[257802]:         <nova:project uuid="e7168b5b1300495d90592b195824729a">tempest-AttachVolumeTestJSON-398185718</nova:project>
Oct 02 12:57:33 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:57:33 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:57:33 compute-0 nova_compute[257802]:         <nova:port uuid="612e6054-5ce1-486c-aa51-2e5d47567ef3">
Oct 02 12:57:33 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:57:33 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:57:33 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:57:33 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <system>
Oct 02 12:57:33 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:57:33 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:57:33 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:57:33 compute-0 nova_compute[257802]:       <entry name="serial">01108902-768b-4bee-baff-11d5854e2f77</entry>
Oct 02 12:57:33 compute-0 nova_compute[257802]:       <entry name="uuid">01108902-768b-4bee-baff-11d5854e2f77</entry>
Oct 02 12:57:33 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     </system>
Oct 02 12:57:33 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:57:33 compute-0 nova_compute[257802]:   <os>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:   </os>
Oct 02 12:57:33 compute-0 nova_compute[257802]:   <features>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:   </features>
Oct 02 12:57:33 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:57:33 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:57:33 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:57:33 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/01108902-768b-4bee-baff-11d5854e2f77_disk">
Oct 02 12:57:33 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:       </source>
Oct 02 12:57:33 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:57:33 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:57:33 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:57:33 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/01108902-768b-4bee-baff-11d5854e2f77_disk.config">
Oct 02 12:57:33 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:       </source>
Oct 02 12:57:33 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:57:33 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:57:33 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:57:33 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:3d:bc:3b"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:       <target dev="tap612e6054-5c"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:57:33 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/01108902-768b-4bee-baff-11d5854e2f77/console.log" append="off"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <video>
Oct 02 12:57:33 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     </video>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:57:33 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:57:33 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:57:33 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:57:33 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:57:33 compute-0 nova_compute[257802]: </domain>
Oct 02 12:57:33 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.522 2 DEBUG nova.compute.manager [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Preparing to wait for external event network-vif-plugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.522 2 DEBUG oslo_concurrency.lockutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "01108902-768b-4bee-baff-11d5854e2f77-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.523 2 DEBUG oslo_concurrency.lockutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.523 2 DEBUG oslo_concurrency.lockutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.523 2 DEBUG nova.virt.libvirt.vif [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:57:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-1568055795',display_name='tempest-AttachVolumeTestJSON-server-1568055795',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-1568055795',id=191,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCbS2mKT+0z/yQuxuzfwCwsXKWxC0UzeLnKuZT+sZHjADBetDcCrAmsjHh5DZcFc1laecrJqG3Gw7KVMmBAo01ad6Z646e7xI9MDC+TYltwo6ghxZsSIWSKPTUL61VunAQ==',key_name='tempest-keypair-1834836644',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e7168b5b1300495d90592b195824729a',ramdisk_id='',reservation_id='r-ikr3sr2k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeTestJSON-398185718',owner_user_name='tempest-AttachVolumeTestJSON-398185718-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:57:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3299a1aed5af4843a91417a3f181c172',uuid=01108902-768b-4bee-baff-11d5854e2f77,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "address": "fa:16:3e:3d:bc:3b", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612e6054-5c", "ovs_interfaceid": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.523 2 DEBUG nova.network.os_vif_util [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Converting VIF {"id": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "address": "fa:16:3e:3d:bc:3b", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612e6054-5c", "ovs_interfaceid": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.524 2 DEBUG nova.network.os_vif_util [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:bc:3b,bridge_name='br-int',has_traffic_filtering=True,id=612e6054-5ce1-486c-aa51-2e5d47567ef3,network=Network(ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap612e6054-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.524 2 DEBUG os_vif [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:bc:3b,bridge_name='br-int',has_traffic_filtering=True,id=612e6054-5ce1-486c-aa51-2e5d47567ef3,network=Network(ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap612e6054-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.525 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.525 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.528 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap612e6054-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.529 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap612e6054-5c, col_values=(('external_ids', {'iface-id': '612e6054-5ce1-486c-aa51-2e5d47567ef3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:bc:3b', 'vm-uuid': '01108902-768b-4bee-baff-11d5854e2f77'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:33 compute-0 kernel: tapacafd85c-21: entered promiscuous mode
Oct 02 12:57:33 compute-0 NetworkManager[44987]: <info>  [1759409853.5481] manager: (tapacafd85c-21): new Tun device (/org/freedesktop/NetworkManager/Devices/382)
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:33 compute-0 ovn_controller[148183]: 2025-10-02T12:57:33Z|00856|binding|INFO|Claiming lport acafd85c-21eb-448e-b19a-ef74dc7a64f9 for this chassis.
Oct 02 12:57:33 compute-0 ovn_controller[148183]: 2025-10-02T12:57:33Z|00857|binding|INFO|acafd85c-21eb-448e-b19a-ef74dc7a64f9: Claiming fa:16:3e:e6:ed:4f 10.100.0.6
Oct 02 12:57:33 compute-0 NetworkManager[44987]: <info>  [1759409853.5751] manager: (tap612e6054-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/383)
Oct 02 12:57:33 compute-0 systemd-udevd[381143]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:57:33 compute-0 NetworkManager[44987]: <info>  [1759409853.5942] device (tapacafd85c-21): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:57:33 compute-0 NetworkManager[44987]: <info>  [1759409853.5964] device (tapacafd85c-21): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:33.599 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:ed:4f 10.100.0.6'], port_security=['fa:16:3e:e6:ed:4f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '667910fa-37c3-40ec-a844-293ff0af4324', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9b3e5364-0567-4be5-b771-728ed7dd0ab7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8668725b86704fdcacbb467738b51154', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9426c89d-e30e-4342-a8bd-1975c70a0c71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4e55754b-f304-4904-b3bf-7f80f94cdc02, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=acafd85c-21eb-448e-b19a-ef74dc7a64f9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:33.600 158261 INFO neutron.agent.ovn.metadata.agent [-] Port acafd85c-21eb-448e-b19a-ef74dc7a64f9 in datapath 9b3e5364-0567-4be5-b771-728ed7dd0ab7 bound to our chassis
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:33.602 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9b3e5364-0567-4be5-b771-728ed7dd0ab7
Oct 02 12:57:33 compute-0 systemd-machined[211836]: New machine qemu-93-instance-000000be.
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:33.613 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[27f28579-1846-40fa-9c1b-d73148ba6a50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:33.614 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9b3e5364-01 in ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:33.618 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9b3e5364-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:33.618 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1c9c5839-27fc-4029-83f3-7d58bda93da5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:33.619 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[dedf3330-4782-4ccb-b745-259e63bd2bf2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:33.636 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[1e7c1f50-0242-45d3-a001-99e51737aaf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.650 2 INFO os_vif [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:bc:3b,bridge_name='br-int',has_traffic_filtering=True,id=612e6054-5ce1-486c-aa51-2e5d47567ef3,network=Network(ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap612e6054-5c')
Oct 02 12:57:33 compute-0 systemd[1]: Started Virtual Machine qemu-93-instance-000000be.
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:33.661 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0963befc-cef7-430c-ad0c-7db49b80c58b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:33 compute-0 ovn_controller[148183]: 2025-10-02T12:57:33Z|00858|binding|INFO|Setting lport acafd85c-21eb-448e-b19a-ef74dc7a64f9 ovn-installed in OVS
Oct 02 12:57:33 compute-0 ovn_controller[148183]: 2025-10-02T12:57:33Z|00859|binding|INFO|Setting lport acafd85c-21eb-448e-b19a-ef74dc7a64f9 up in Southbound
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:33.692 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[4b1efac4-14a3-4c3f-82ce-5f2e731fe262]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:33 compute-0 NetworkManager[44987]: <info>  [1759409853.6993] manager: (tap9b3e5364-00): new Veth device (/org/freedesktop/NetworkManager/Devices/384)
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:33.699 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[48e7a926-f408-43b2-a680-70a60bfe11b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.733 2 DEBUG nova.virt.libvirt.driver [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.733 2 DEBUG nova.virt.libvirt.driver [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.734 2 DEBUG nova.virt.libvirt.driver [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] No VIF found with MAC fa:16:3e:3d:bc:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.734 2 INFO nova.virt.libvirt.driver [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Using config drive
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:33.734 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[02cdc777-fc5e-4676-b03b-93c23176011d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:33.738 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[4af6de86-b8ed-4683-9f3c-44c3e34a8330]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:33 compute-0 NetworkManager[44987]: <info>  [1759409853.7662] device (tap9b3e5364-00): carrier: link connected
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:33.771 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[2feebac6-f753-4a92-8f6f-839602e4596a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.771 2 DEBUG nova.storage.rbd_utils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] rbd image 01108902-768b-4bee-baff-11d5854e2f77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:33.786 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4d56f2af-362b-4fb5-8226-471018cf5d29]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9b3e5364-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f6:74:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 258], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 792139, 'reachable_time': 21814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381202, 'error': None, 'target': 'ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:33.801 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2654d4cf-50b9-4398-b837-8f93d43a2da5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef6:7417'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 792139, 'tstamp': 792139}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381203, 'error': None, 'target': 'ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:33.816 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[df7ecb27-dfd1-4ef5-8c35-5f1ebf0b7cff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9b3e5364-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f6:74:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 258], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 792139, 'reachable_time': 21814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 381204, 'error': None, 'target': 'ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:33.844 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[148a061d-4519-4f51-8e3b-74e4d704584f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.889 2 DEBUG nova.network.neutron [req-1b19d3b1-6612-462d-98c1-de7257f0be3a req-068a092b-16a0-4228-9d28-7a6b8ada7bfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Updated VIF entry in instance network info cache for port acafd85c-21eb-448e-b19a-ef74dc7a64f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.890 2 DEBUG nova.network.neutron [req-1b19d3b1-6612-462d-98c1-de7257f0be3a req-068a092b-16a0-4228-9d28-7a6b8ada7bfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Updating instance_info_cache with network_info: [{"id": "acafd85c-21eb-448e-b19a-ef74dc7a64f9", "address": "fa:16:3e:e6:ed:4f", "network": {"id": "9b3e5364-0567-4be5-b771-728ed7dd0ab7", "bridge": "br-int", "label": "tempest-TestServerMultinode-1015877619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e08031c685814456aa6c43bcc8f98574", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacafd85c-21", "ovs_interfaceid": "acafd85c-21eb-448e-b19a-ef74dc7a64f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:33.892 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8dace0d4-0eee-4070-8a2d-f6ac090ecfbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:33.894 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b3e5364-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:33.894 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:33.895 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9b3e5364-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:33 compute-0 NetworkManager[44987]: <info>  [1759409853.8975] manager: (tap9b3e5364-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/385)
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:33 compute-0 kernel: tap9b3e5364-00: entered promiscuous mode
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:33.901 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9b3e5364-00, col_values=(('external_ids', {'iface-id': '6e4127f6-98a7-4fa7-9ba7-1b632af1bcf6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:33 compute-0 ovn_controller[148183]: 2025-10-02T12:57:33Z|00860|binding|INFO|Releasing lport 6e4127f6-98a7-4fa7-9ba7-1b632af1bcf6 from this chassis (sb_readonly=0)
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.921 2 DEBUG oslo_concurrency.lockutils [req-1b19d3b1-6612-462d-98c1-de7257f0be3a req-068a092b-16a0-4228-9d28-7a6b8ada7bfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-667910fa-37c3-40ec-a844-293ff0af4324" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:57:33 compute-0 nova_compute[257802]: 2025-10-02 12:57:33.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:33.923 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9b3e5364-0567-4be5-b771-728ed7dd0ab7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9b3e5364-0567-4be5-b771-728ed7dd0ab7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:33.924 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[70452b9f-67dc-4caa-821a-178df11723fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:33.925 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-9b3e5364-0567-4be5-b771-728ed7dd0ab7
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/9b3e5364-0567-4be5-b771-728ed7dd0ab7.pid.haproxy
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 9b3e5364-0567-4be5-b771-728ed7dd0ab7
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:57:33 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:33.925 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7', 'env', 'PROCESS_TAG=haproxy-9b3e5364-0567-4be5-b771-728ed7dd0ab7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9b3e5364-0567-4be5-b771-728ed7dd0ab7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.097 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.115 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.116 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.116 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:57:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:57:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:34.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:57:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/576544436' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2958: 305 pgs: 305 active+clean; 435 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 942 KiB/s rd, 5.1 MiB/s wr, 117 op/s
Oct 02 12:57:34 compute-0 podman[381276]: 2025-10-02 12:57:34.306438002 +0000 UTC m=+0.075706580 container create 905e69751ec8db166b7d3a62fa6970fb3c3b96ba75e6f477cc2a08e0ac46c94a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:57:34 compute-0 podman[381276]: 2025-10-02 12:57:34.251748699 +0000 UTC m=+0.021017297 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.354 2 DEBUG nova.compute.manager [req-d5b26886-1691-4b2f-878b-5d9076eb7952 req-33cca27c-c4fe-42dd-b9b9-b6939d76b814 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Received event network-vif-plugged-acafd85c-21eb-448e-b19a-ef74dc7a64f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.355 2 DEBUG oslo_concurrency.lockutils [req-d5b26886-1691-4b2f-878b-5d9076eb7952 req-33cca27c-c4fe-42dd-b9b9-b6939d76b814 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "667910fa-37c3-40ec-a844-293ff0af4324-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.355 2 DEBUG oslo_concurrency.lockutils [req-d5b26886-1691-4b2f-878b-5d9076eb7952 req-33cca27c-c4fe-42dd-b9b9-b6939d76b814 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "667910fa-37c3-40ec-a844-293ff0af4324-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.356 2 DEBUG oslo_concurrency.lockutils [req-d5b26886-1691-4b2f-878b-5d9076eb7952 req-33cca27c-c4fe-42dd-b9b9-b6939d76b814 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "667910fa-37c3-40ec-a844-293ff0af4324-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:34 compute-0 systemd[1]: Started libpod-conmon-905e69751ec8db166b7d3a62fa6970fb3c3b96ba75e6f477cc2a08e0ac46c94a.scope.
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.356 2 DEBUG nova.compute.manager [req-d5b26886-1691-4b2f-878b-5d9076eb7952 req-33cca27c-c4fe-42dd-b9b9-b6939d76b814 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Processing event network-vif-plugged-acafd85c-21eb-448e-b19a-ef74dc7a64f9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:57:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:57:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9963989c2a28832acc593d4c68f2b27f78a3d3ca531684a150b225525d70e0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:57:34 compute-0 podman[381276]: 2025-10-02 12:57:34.422539574 +0000 UTC m=+0.191808182 container init 905e69751ec8db166b7d3a62fa6970fb3c3b96ba75e6f477cc2a08e0ac46c94a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:57:34 compute-0 podman[381276]: 2025-10-02 12:57:34.42806599 +0000 UTC m=+0.197334568 container start 905e69751ec8db166b7d3a62fa6970fb3c3b96ba75e6f477cc2a08e0ac46c94a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:57:34 compute-0 neutron-haproxy-ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7[381298]: [NOTICE]   (381302) : New worker (381304) forked
Oct 02 12:57:34 compute-0 neutron-haproxy-ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7[381298]: [NOTICE]   (381302) : Loading success.
Oct 02 12:57:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.709 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409854.7090847, 667910fa-37c3-40ec-a844-293ff0af4324 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.710 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] VM Started (Lifecycle Event)
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.712 2 DEBUG nova.compute.manager [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.716 2 DEBUG nova.virt.libvirt.driver [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.719 2 INFO nova.virt.libvirt.driver [-] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Instance spawned successfully.
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.720 2 DEBUG nova.virt.libvirt.driver [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.741 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.748 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.752 2 DEBUG nova.virt.libvirt.driver [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.753 2 DEBUG nova.virt.libvirt.driver [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.754 2 DEBUG nova.virt.libvirt.driver [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.754 2 DEBUG nova.virt.libvirt.driver [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.755 2 DEBUG nova.virt.libvirt.driver [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.755 2 DEBUG nova.virt.libvirt.driver [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.783 2 INFO nova.virt.libvirt.driver [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Creating config drive at /var/lib/nova/instances/01108902-768b-4bee-baff-11d5854e2f77/disk.config
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.793 2 DEBUG oslo_concurrency.processutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/01108902-768b-4bee-baff-11d5854e2f77/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl4q8kvb9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.838 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.838 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409854.7092655, 667910fa-37c3-40ec-a844-293ff0af4324 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.838 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] VM Paused (Lifecycle Event)
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.841 2 INFO nova.compute.manager [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Took 10.08 seconds to spawn the instance on the hypervisor.
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.841 2 DEBUG nova.compute.manager [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.902 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.907 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409854.7146873, 667910fa-37c3-40ec-a844-293ff0af4324 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.908 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] VM Resumed (Lifecycle Event)
Oct 02 12:57:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:34.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.938 2 DEBUG oslo_concurrency.processutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/01108902-768b-4bee-baff-11d5854e2f77/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl4q8kvb9" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.975 2 DEBUG nova.storage.rbd_utils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] rbd image 01108902-768b-4bee-baff-11d5854e2f77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:34 compute-0 nova_compute[257802]: 2025-10-02 12:57:34.981 2 DEBUG oslo_concurrency.processutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/01108902-768b-4bee-baff-11d5854e2f77/disk.config 01108902-768b-4bee-baff-11d5854e2f77_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:35 compute-0 nova_compute[257802]: 2025-10-02 12:57:35.022 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:57:35 compute-0 nova_compute[257802]: 2025-10-02 12:57:35.026 2 INFO nova.compute.manager [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Took 11.82 seconds to build instance.
Oct 02 12:57:35 compute-0 nova_compute[257802]: 2025-10-02 12:57:35.029 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:57:35 compute-0 nova_compute[257802]: 2025-10-02 12:57:35.089 2 DEBUG oslo_concurrency.lockutils [None req-d8843628-fdbd-4e29-a161-7a8965cb0302 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "667910fa-37c3-40ec-a844-293ff0af4324" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:35 compute-0 ceph-mon[73607]: pgmap v2958: 305 pgs: 305 active+clean; 435 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 942 KiB/s rd, 5.1 MiB/s wr, 117 op/s
Oct 02 12:57:35 compute-0 nova_compute[257802]: 2025-10-02 12:57:35.344 2 DEBUG oslo_concurrency.processutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/01108902-768b-4bee-baff-11d5854e2f77/disk.config 01108902-768b-4bee-baff-11d5854e2f77_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.363s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:35 compute-0 nova_compute[257802]: 2025-10-02 12:57:35.345 2 INFO nova.virt.libvirt.driver [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Deleting local config drive /var/lib/nova/instances/01108902-768b-4bee-baff-11d5854e2f77/disk.config because it was imported into RBD.
Oct 02 12:57:35 compute-0 nova_compute[257802]: 2025-10-02 12:57:35.367 2 DEBUG nova.network.neutron [req-7d9d4039-c317-4ff6-a4db-6c19d49cf5a8 req-fe051509-b7b9-4c43-a11e-2ff364b8a300 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Updated VIF entry in instance network info cache for port 612e6054-5ce1-486c-aa51-2e5d47567ef3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:57:35 compute-0 nova_compute[257802]: 2025-10-02 12:57:35.367 2 DEBUG nova.network.neutron [req-7d9d4039-c317-4ff6-a4db-6c19d49cf5a8 req-fe051509-b7b9-4c43-a11e-2ff364b8a300 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Updating instance_info_cache with network_info: [{"id": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "address": "fa:16:3e:3d:bc:3b", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612e6054-5c", "ovs_interfaceid": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:57:35 compute-0 nova_compute[257802]: 2025-10-02 12:57:35.396 2 DEBUG oslo_concurrency.lockutils [req-7d9d4039-c317-4ff6-a4db-6c19d49cf5a8 req-fe051509-b7b9-4c43-a11e-2ff364b8a300 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-01108902-768b-4bee-baff-11d5854e2f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:57:35 compute-0 kernel: tap612e6054-5c: entered promiscuous mode
Oct 02 12:57:35 compute-0 systemd-udevd[381176]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:57:35 compute-0 ovn_controller[148183]: 2025-10-02T12:57:35Z|00861|binding|INFO|Claiming lport 612e6054-5ce1-486c-aa51-2e5d47567ef3 for this chassis.
Oct 02 12:57:35 compute-0 ovn_controller[148183]: 2025-10-02T12:57:35Z|00862|binding|INFO|612e6054-5ce1-486c-aa51-2e5d47567ef3: Claiming fa:16:3e:3d:bc:3b 10.100.0.9
Oct 02 12:57:35 compute-0 NetworkManager[44987]: <info>  [1759409855.4097] manager: (tap612e6054-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/386)
Oct 02 12:57:35 compute-0 nova_compute[257802]: 2025-10-02 12:57:35.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:35.417 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:bc:3b 10.100.0.9'], port_security=['fa:16:3e:3d:bc:3b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '01108902-768b-4bee-baff-11d5854e2f77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7168b5b1300495d90592b195824729a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0a80d211-09c4-4e45-80f9-f1b2f1a7f90e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb99a427-f5c8-46c4-b56f-12cf288447a9, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=612e6054-5ce1-486c-aa51-2e5d47567ef3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:35.418 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 612e6054-5ce1-486c-aa51-2e5d47567ef3 in datapath ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f bound to our chassis
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:35.420 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f
Oct 02 12:57:35 compute-0 NetworkManager[44987]: <info>  [1759409855.4277] device (tap612e6054-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:57:35 compute-0 NetworkManager[44987]: <info>  [1759409855.4295] device (tap612e6054-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:35.435 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0fad4081-0ac3-4556-b8ad-069d10181b49]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:35.435 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapff8c8423-f1 in ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:35.439 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapff8c8423-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:35.439 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5de4fe12-78ad-4ef6-981d-1313b434cdff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:35.442 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1e7155d6-9411-484a-b250-bcacdb2f9920]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:35.464 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[7cf20720-fe7d-46c1-bf9e-877153531f43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:35 compute-0 systemd-machined[211836]: New machine qemu-94-instance-000000bf.
Oct 02 12:57:35 compute-0 nova_compute[257802]: 2025-10-02 12:57:35.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:35 compute-0 systemd[1]: Started Virtual Machine qemu-94-instance-000000bf.
Oct 02 12:57:35 compute-0 ovn_controller[148183]: 2025-10-02T12:57:35Z|00863|binding|INFO|Setting lport 612e6054-5ce1-486c-aa51-2e5d47567ef3 ovn-installed in OVS
Oct 02 12:57:35 compute-0 ovn_controller[148183]: 2025-10-02T12:57:35Z|00864|binding|INFO|Setting lport 612e6054-5ce1-486c-aa51-2e5d47567ef3 up in Southbound
Oct 02 12:57:35 compute-0 nova_compute[257802]: 2025-10-02 12:57:35.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:35.492 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[75a4ba4a-1fa0-4275-8af4-4ab7fa87c54a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:35.520 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[7b6474f4-d913-4f11-8a01-89bca9f9758b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:35.526 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9d634591-db01-4b70-a44c-3d9993752cf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:35 compute-0 NetworkManager[44987]: <info>  [1759409855.5315] manager: (tapff8c8423-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/387)
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:35.557 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[eae9dc94-7f6b-4b77-968a-9421497fd8b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:35.573 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[0b31836e-d6ec-449e-aefc-96b0b178fa1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:35 compute-0 NetworkManager[44987]: <info>  [1759409855.5973] device (tapff8c8423-f0): carrier: link connected
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:35.603 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[99856e46-b317-4000-b0eb-2a04ceb96437]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:35.625 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[590b993f-4c64-4623-be9c-3810f585e220]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapff8c8423-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:50:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 792322, 'reachable_time': 35384, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381385, 'error': None, 'target': 'ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:35.647 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[31ad6fce-f4f1-47f9-956b-140841f58d57]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:5000'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 792322, 'tstamp': 792322}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381386, 'error': None, 'target': 'ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:35.671 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b0f24e6c-9256-4b9a-a285-1e1430e52c89]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapff8c8423-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:50:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 792322, 'reachable_time': 35384, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 381387, 'error': None, 'target': 'ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:35.715 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0deb8db5-0da7-4af7-aee3-1d363626147e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:35 compute-0 nova_compute[257802]: 2025-10-02 12:57:35.717 2 DEBUG nova.compute.manager [req-248f3b9c-df0e-4552-a8a5-2d59a755dff3 req-54bec300-d10c-480e-af5f-402034f549fe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Received event network-vif-plugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:35 compute-0 nova_compute[257802]: 2025-10-02 12:57:35.718 2 DEBUG oslo_concurrency.lockutils [req-248f3b9c-df0e-4552-a8a5-2d59a755dff3 req-54bec300-d10c-480e-af5f-402034f549fe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "01108902-768b-4bee-baff-11d5854e2f77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:35 compute-0 nova_compute[257802]: 2025-10-02 12:57:35.718 2 DEBUG oslo_concurrency.lockutils [req-248f3b9c-df0e-4552-a8a5-2d59a755dff3 req-54bec300-d10c-480e-af5f-402034f549fe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:35 compute-0 nova_compute[257802]: 2025-10-02 12:57:35.718 2 DEBUG oslo_concurrency.lockutils [req-248f3b9c-df0e-4552-a8a5-2d59a755dff3 req-54bec300-d10c-480e-af5f-402034f549fe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:35 compute-0 nova_compute[257802]: 2025-10-02 12:57:35.718 2 DEBUG nova.compute.manager [req-248f3b9c-df0e-4552-a8a5-2d59a755dff3 req-54bec300-d10c-480e-af5f-402034f549fe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Processing event network-vif-plugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:35.782 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8b847b0b-7769-4c70-9cf0-0098e4ea6207]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:35.785 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff8c8423-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:35.785 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:35.786 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff8c8423-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:35 compute-0 nova_compute[257802]: 2025-10-02 12:57:35.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:35 compute-0 NetworkManager[44987]: <info>  [1759409855.8326] manager: (tapff8c8423-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/388)
Oct 02 12:57:35 compute-0 kernel: tapff8c8423-f0: entered promiscuous mode
Oct 02 12:57:35 compute-0 nova_compute[257802]: 2025-10-02 12:57:35.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:35.836 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapff8c8423-f0, col_values=(('external_ids', {'iface-id': 'd2e0b09e-a5f5-4832-b480-4d90b14ae948'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:35 compute-0 nova_compute[257802]: 2025-10-02 12:57:35.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:35 compute-0 ovn_controller[148183]: 2025-10-02T12:57:35Z|00865|binding|INFO|Releasing lport d2e0b09e-a5f5-4832-b480-4d90b14ae948 from this chassis (sb_readonly=0)
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:35.839 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:35.840 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[53971fb4-ad57-4cd1-9d52-bb4f2dcebbf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:35.842 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f.pid.haproxy
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:35.843 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'env', 'PROCESS_TAG=haproxy-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:57:35 compute-0 nova_compute[257802]: 2025-10-02 12:57:35.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:36.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:36 compute-0 podman[381461]: 2025-10-02 12:57:36.205233237 +0000 UTC m=+0.046468504 container create e7f8f8a1bd46b3c9a092ebc4f7179bcce10713b84a35f0f41814fadf6a152544 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 12:57:36 compute-0 systemd[1]: Started libpod-conmon-e7f8f8a1bd46b3c9a092ebc4f7179bcce10713b84a35f0f41814fadf6a152544.scope.
Oct 02 12:57:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:57:36 compute-0 podman[381461]: 2025-10-02 12:57:36.179782401 +0000 UTC m=+0.021017688 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:57:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a389aa3228956f7f8d525032f2c7b6054832e4d74645776e5f51737c1f610a1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:57:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2959: 305 pgs: 305 active+clean; 465 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 166 op/s
Oct 02 12:57:36 compute-0 podman[381461]: 2025-10-02 12:57:36.294032308 +0000 UTC m=+0.135267595 container init e7f8f8a1bd46b3c9a092ebc4f7179bcce10713b84a35f0f41814fadf6a152544 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:57:36 compute-0 podman[381461]: 2025-10-02 12:57:36.299981454 +0000 UTC m=+0.141216721 container start e7f8f8a1bd46b3c9a092ebc4f7179bcce10713b84a35f0f41814fadf6a152544 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:57:36 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[381476]: [NOTICE]   (381480) : New worker (381482) forked
Oct 02 12:57:36 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[381476]: [NOTICE]   (381480) : Loading success.
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.363 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409856.3636587, 01108902-768b-4bee-baff-11d5854e2f77 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.364 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 01108902-768b-4bee-baff-11d5854e2f77] VM Started (Lifecycle Event)
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.366 2 DEBUG nova.compute.manager [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.369 2 DEBUG nova.virt.libvirt.driver [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.371 2 INFO nova.virt.libvirt.driver [-] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Instance spawned successfully.
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.371 2 DEBUG nova.virt.libvirt.driver [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.396 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.412 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.416 2 DEBUG nova.virt.libvirt.driver [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.416 2 DEBUG nova.virt.libvirt.driver [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.417 2 DEBUG nova.virt.libvirt.driver [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.417 2 DEBUG nova.virt.libvirt.driver [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.417 2 DEBUG nova.virt.libvirt.driver [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.418 2 DEBUG nova.virt.libvirt.driver [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.567 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 01108902-768b-4bee-baff-11d5854e2f77] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.567 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409856.3657715, 01108902-768b-4bee-baff-11d5854e2f77 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.568 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 01108902-768b-4bee-baff-11d5854e2f77] VM Paused (Lifecycle Event)
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.722 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.724 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409856.3688433, 01108902-768b-4bee-baff-11d5854e2f77 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.724 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 01108902-768b-4bee-baff-11d5854e2f77] VM Resumed (Lifecycle Event)
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.805 2 DEBUG nova.compute.manager [req-c8e74711-bc9e-4d5c-a949-c637e1c5f8fa req-a21a3904-1315-4f91-812a-949aaf1086e7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Received event network-vif-plugged-acafd85c-21eb-448e-b19a-ef74dc7a64f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.806 2 DEBUG oslo_concurrency.lockutils [req-c8e74711-bc9e-4d5c-a949-c637e1c5f8fa req-a21a3904-1315-4f91-812a-949aaf1086e7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "667910fa-37c3-40ec-a844-293ff0af4324-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.806 2 DEBUG oslo_concurrency.lockutils [req-c8e74711-bc9e-4d5c-a949-c637e1c5f8fa req-a21a3904-1315-4f91-812a-949aaf1086e7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "667910fa-37c3-40ec-a844-293ff0af4324-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.806 2 DEBUG oslo_concurrency.lockutils [req-c8e74711-bc9e-4d5c-a949-c637e1c5f8fa req-a21a3904-1315-4f91-812a-949aaf1086e7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "667910fa-37c3-40ec-a844-293ff0af4324-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.806 2 DEBUG nova.compute.manager [req-c8e74711-bc9e-4d5c-a949-c637e1c5f8fa req-a21a3904-1315-4f91-812a-949aaf1086e7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] No waiting events found dispatching network-vif-plugged-acafd85c-21eb-448e-b19a-ef74dc7a64f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.807 2 WARNING nova.compute.manager [req-c8e74711-bc9e-4d5c-a949-c637e1c5f8fa req-a21a3904-1315-4f91-812a-949aaf1086e7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Received unexpected event network-vif-plugged-acafd85c-21eb-448e-b19a-ef74dc7a64f9 for instance with vm_state active and task_state None.
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.881 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.882 2 INFO nova.compute.manager [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Took 10.99 seconds to spawn the instance on the hypervisor.
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.883 2 DEBUG nova.compute.manager [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.885 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:57:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:36.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:36 compute-0 nova_compute[257802]: 2025-10-02 12:57:36.970 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 01108902-768b-4bee-baff-11d5854e2f77] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:57:37 compute-0 nova_compute[257802]: 2025-10-02 12:57:37.008 2 INFO nova.compute.manager [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Took 12.74 seconds to build instance.
Oct 02 12:57:37 compute-0 nova_compute[257802]: 2025-10-02 12:57:37.050 2 DEBUG oslo_concurrency.lockutils [None req-fd1bdb1b-ba15-46ec-955a-ee9b32279774 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.981s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:37 compute-0 ceph-mon[73607]: pgmap v2959: 305 pgs: 305 active+clean; 465 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 166 op/s
Oct 02 12:57:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/886648033' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:38 compute-0 nova_compute[257802]: 2025-10-02 12:57:38.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:57:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:38.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:38 compute-0 nova_compute[257802]: 2025-10-02 12:57:38.163 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:38 compute-0 nova_compute[257802]: 2025-10-02 12:57:38.164 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:38 compute-0 nova_compute[257802]: 2025-10-02 12:57:38.164 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:38 compute-0 nova_compute[257802]: 2025-10-02 12:57:38.164 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:57:38 compute-0 nova_compute[257802]: 2025-10-02 12:57:38.165 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2960: 305 pgs: 305 active+clean; 465 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 165 op/s
Oct 02 12:57:38 compute-0 nova_compute[257802]: 2025-10-02 12:57:38.499 2 DEBUG nova.compute.manager [req-b5cdbe84-fe96-4d4b-b64a-b112d4cb7271 req-7be53c84-14a8-4d8b-8709-f5f547c752c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Received event network-vif-plugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:38 compute-0 nova_compute[257802]: 2025-10-02 12:57:38.500 2 DEBUG oslo_concurrency.lockutils [req-b5cdbe84-fe96-4d4b-b64a-b112d4cb7271 req-7be53c84-14a8-4d8b-8709-f5f547c752c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "01108902-768b-4bee-baff-11d5854e2f77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:38 compute-0 nova_compute[257802]: 2025-10-02 12:57:38.500 2 DEBUG oslo_concurrency.lockutils [req-b5cdbe84-fe96-4d4b-b64a-b112d4cb7271 req-7be53c84-14a8-4d8b-8709-f5f547c752c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:38 compute-0 nova_compute[257802]: 2025-10-02 12:57:38.500 2 DEBUG oslo_concurrency.lockutils [req-b5cdbe84-fe96-4d4b-b64a-b112d4cb7271 req-7be53c84-14a8-4d8b-8709-f5f547c752c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:38 compute-0 nova_compute[257802]: 2025-10-02 12:57:38.501 2 DEBUG nova.compute.manager [req-b5cdbe84-fe96-4d4b-b64a-b112d4cb7271 req-7be53c84-14a8-4d8b-8709-f5f547c752c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] No waiting events found dispatching network-vif-plugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:57:38 compute-0 nova_compute[257802]: 2025-10-02 12:57:38.501 2 WARNING nova.compute.manager [req-b5cdbe84-fe96-4d4b-b64a-b112d4cb7271 req-7be53c84-14a8-4d8b-8709-f5f547c752c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Received unexpected event network-vif-plugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 for instance with vm_state active and task_state None.
Oct 02 12:57:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:57:38 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3817956243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:38 compute-0 nova_compute[257802]: 2025-10-02 12:57:38.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:38 compute-0 nova_compute[257802]: 2025-10-02 12:57:38.591 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:38 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2248987543' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:38 compute-0 nova_compute[257802]: 2025-10-02 12:57:38.822 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000be as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:57:38 compute-0 nova_compute[257802]: 2025-10-02 12:57:38.822 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000be as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:57:38 compute-0 nova_compute[257802]: 2025-10-02 12:57:38.826 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000bf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:57:38 compute-0 nova_compute[257802]: 2025-10-02 12:57:38.826 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000bf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:57:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:38.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:38 compute-0 sudo[381516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:39 compute-0 sudo[381516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:39 compute-0 sudo[381516]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:39 compute-0 nova_compute[257802]: 2025-10-02 12:57:39.043 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:57:39 compute-0 nova_compute[257802]: 2025-10-02 12:57:39.045 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3902MB free_disk=20.81369400024414GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:57:39 compute-0 nova_compute[257802]: 2025-10-02 12:57:39.045 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:39 compute-0 nova_compute[257802]: 2025-10-02 12:57:39.045 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:39 compute-0 sudo[381541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:39 compute-0 sudo[381541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:39 compute-0 sudo[381541]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:39 compute-0 nova_compute[257802]: 2025-10-02 12:57:39.321 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 667910fa-37c3-40ec-a844-293ff0af4324 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:57:39 compute-0 nova_compute[257802]: 2025-10-02 12:57:39.322 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 01108902-768b-4bee-baff-11d5854e2f77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:57:39 compute-0 nova_compute[257802]: 2025-10-02 12:57:39.322 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:57:39 compute-0 nova_compute[257802]: 2025-10-02 12:57:39.322 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:57:39 compute-0 nova_compute[257802]: 2025-10-02 12:57:39.383 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:39 compute-0 ceph-mon[73607]: pgmap v2960: 305 pgs: 305 active+clean; 465 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 165 op/s
Oct 02 12:57:39 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3817956243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:39 compute-0 nova_compute[257802]: 2025-10-02 12:57:39.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:57:39 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/296696544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:39 compute-0 nova_compute[257802]: 2025-10-02 12:57:39.860 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:39 compute-0 nova_compute[257802]: 2025-10-02 12:57:39.865 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:57:39 compute-0 nova_compute[257802]: 2025-10-02 12:57:39.902 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:57:39 compute-0 nova_compute[257802]: 2025-10-02 12:57:39.967 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:57:39 compute-0 nova_compute[257802]: 2025-10-02 12:57:39.968 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:40.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:40 compute-0 NetworkManager[44987]: <info>  [1759409860.1488] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/389)
Oct 02 12:57:40 compute-0 NetworkManager[44987]: <info>  [1759409860.1496] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/390)
Oct 02 12:57:40 compute-0 nova_compute[257802]: 2025-10-02 12:57:40.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2961: 305 pgs: 305 active+clean; 465 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.8 MiB/s rd, 5.3 MiB/s wr, 306 op/s
Oct 02 12:57:40 compute-0 nova_compute[257802]: 2025-10-02 12:57:40.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:40 compute-0 ovn_controller[148183]: 2025-10-02T12:57:40Z|00866|binding|INFO|Releasing lport 6e4127f6-98a7-4fa7-9ba7-1b632af1bcf6 from this chassis (sb_readonly=0)
Oct 02 12:57:40 compute-0 ovn_controller[148183]: 2025-10-02T12:57:40Z|00867|binding|INFO|Releasing lport d2e0b09e-a5f5-4832-b480-4d90b14ae948 from this chassis (sb_readonly=0)
Oct 02 12:57:40 compute-0 nova_compute[257802]: 2025-10-02 12:57:40.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:40 compute-0 nova_compute[257802]: 2025-10-02 12:57:40.705 2 DEBUG nova.compute.manager [req-bf8b743b-12f5-483a-9dae-3fa10afe4039 req-50b65da7-5198-4229-9cfd-08caee982eb8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Received event network-changed-612e6054-5ce1-486c-aa51-2e5d47567ef3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:40 compute-0 nova_compute[257802]: 2025-10-02 12:57:40.706 2 DEBUG nova.compute.manager [req-bf8b743b-12f5-483a-9dae-3fa10afe4039 req-50b65da7-5198-4229-9cfd-08caee982eb8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Refreshing instance network info cache due to event network-changed-612e6054-5ce1-486c-aa51-2e5d47567ef3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:57:40 compute-0 nova_compute[257802]: 2025-10-02 12:57:40.706 2 DEBUG oslo_concurrency.lockutils [req-bf8b743b-12f5-483a-9dae-3fa10afe4039 req-50b65da7-5198-4229-9cfd-08caee982eb8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-01108902-768b-4bee-baff-11d5854e2f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:57:40 compute-0 nova_compute[257802]: 2025-10-02 12:57:40.706 2 DEBUG oslo_concurrency.lockutils [req-bf8b743b-12f5-483a-9dae-3fa10afe4039 req-50b65da7-5198-4229-9cfd-08caee982eb8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-01108902-768b-4bee-baff-11d5854e2f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:57:40 compute-0 nova_compute[257802]: 2025-10-02 12:57:40.707 2 DEBUG nova.network.neutron [req-bf8b743b-12f5-483a-9dae-3fa10afe4039 req-50b65da7-5198-4229-9cfd-08caee982eb8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Refreshing network info cache for port 612e6054-5ce1-486c-aa51-2e5d47567ef3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:57:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/296696544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:57:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:40.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:57:41 compute-0 ceph-mon[73607]: pgmap v2961: 305 pgs: 305 active+clean; 465 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.8 MiB/s rd, 5.3 MiB/s wr, 306 op/s
Oct 02 12:57:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:57:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:42.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:57:42 compute-0 nova_compute[257802]: 2025-10-02 12:57:42.239 2 DEBUG nova.network.neutron [req-bf8b743b-12f5-483a-9dae-3fa10afe4039 req-50b65da7-5198-4229-9cfd-08caee982eb8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Updated VIF entry in instance network info cache for port 612e6054-5ce1-486c-aa51-2e5d47567ef3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:57:42 compute-0 nova_compute[257802]: 2025-10-02 12:57:42.241 2 DEBUG nova.network.neutron [req-bf8b743b-12f5-483a-9dae-3fa10afe4039 req-50b65da7-5198-4229-9cfd-08caee982eb8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Updating instance_info_cache with network_info: [{"id": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "address": "fa:16:3e:3d:bc:3b", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612e6054-5c", "ovs_interfaceid": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:57:42 compute-0 nova_compute[257802]: 2025-10-02 12:57:42.260 2 DEBUG oslo_concurrency.lockutils [req-bf8b743b-12f5-483a-9dae-3fa10afe4039 req-50b65da7-5198-4229-9cfd-08caee982eb8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-01108902-768b-4bee-baff-11d5854e2f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:57:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2962: 305 pgs: 305 active+clean; 465 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.7 MiB/s rd, 1.4 MiB/s wr, 233 op/s
Oct 02 12:57:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:57:42
Oct 02 12:57:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:57:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:57:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['.rgw.root', 'vms', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'volumes']
Oct 02 12:57:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:57:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:57:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:57:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:57:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:57:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:57:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:57:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:42.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:57:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:57:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:57:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:57:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:57:43 compute-0 nova_compute[257802]: 2025-10-02 12:57:43.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:43 compute-0 ceph-mon[73607]: pgmap v2962: 305 pgs: 305 active+clean; 465 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.7 MiB/s rd, 1.4 MiB/s wr, 233 op/s
Oct 02 12:57:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:44.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:57:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:57:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:57:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:57:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:57:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2963: 305 pgs: 305 active+clean; 475 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.2 MiB/s rd, 2.1 MiB/s wr, 256 op/s
Oct 02 12:57:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:44 compute-0 nova_compute[257802]: 2025-10-02 12:57:44.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:57:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:44.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:57:44 compute-0 nova_compute[257802]: 2025-10-02 12:57:44.969 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:57:45 compute-0 ceph-mon[73607]: pgmap v2963: 305 pgs: 305 active+clean; 475 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.2 MiB/s rd, 2.1 MiB/s wr, 256 op/s
Oct 02 12:57:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:57:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:46.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:57:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2964: 305 pgs: 305 active+clean; 498 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.1 MiB/s rd, 3.2 MiB/s wr, 335 op/s
Oct 02 12:57:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:57:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:46.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:57:47 compute-0 ceph-mon[73607]: pgmap v2964: 305 pgs: 305 active+clean; 498 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.1 MiB/s rd, 3.2 MiB/s wr, 335 op/s
Oct 02 12:57:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1917175509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:57:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:48.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:57:48 compute-0 nova_compute[257802]: 2025-10-02 12:57:48.291 2 DEBUG oslo_concurrency.lockutils [None req-7cfea883-4f16-4c8f-8e45-865dc551f8c1 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Acquiring lock "667910fa-37c3-40ec-a844-293ff0af4324" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:48 compute-0 nova_compute[257802]: 2025-10-02 12:57:48.293 2 DEBUG oslo_concurrency.lockutils [None req-7cfea883-4f16-4c8f-8e45-865dc551f8c1 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "667910fa-37c3-40ec-a844-293ff0af4324" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:48 compute-0 nova_compute[257802]: 2025-10-02 12:57:48.295 2 DEBUG oslo_concurrency.lockutils [None req-7cfea883-4f16-4c8f-8e45-865dc551f8c1 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Acquiring lock "667910fa-37c3-40ec-a844-293ff0af4324-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:48 compute-0 nova_compute[257802]: 2025-10-02 12:57:48.295 2 DEBUG oslo_concurrency.lockutils [None req-7cfea883-4f16-4c8f-8e45-865dc551f8c1 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "667910fa-37c3-40ec-a844-293ff0af4324-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:48 compute-0 nova_compute[257802]: 2025-10-02 12:57:48.296 2 DEBUG oslo_concurrency.lockutils [None req-7cfea883-4f16-4c8f-8e45-865dc551f8c1 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "667910fa-37c3-40ec-a844-293ff0af4324-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:48 compute-0 nova_compute[257802]: 2025-10-02 12:57:48.298 2 INFO nova.compute.manager [None req-7cfea883-4f16-4c8f-8e45-865dc551f8c1 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Terminating instance
Oct 02 12:57:48 compute-0 nova_compute[257802]: 2025-10-02 12:57:48.300 2 DEBUG nova.compute.manager [None req-7cfea883-4f16-4c8f-8e45-865dc551f8c1 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:57:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2965: 305 pgs: 305 active+clean; 498 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.1 MiB/s rd, 2.1 MiB/s wr, 275 op/s
Oct 02 12:57:48 compute-0 kernel: tapacafd85c-21 (unregistering): left promiscuous mode
Oct 02 12:57:48 compute-0 NetworkManager[44987]: <info>  [1759409868.5421] device (tapacafd85c-21): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:57:48 compute-0 ovn_controller[148183]: 2025-10-02T12:57:48Z|00868|binding|INFO|Releasing lport acafd85c-21eb-448e-b19a-ef74dc7a64f9 from this chassis (sb_readonly=0)
Oct 02 12:57:48 compute-0 ovn_controller[148183]: 2025-10-02T12:57:48Z|00869|binding|INFO|Setting lport acafd85c-21eb-448e-b19a-ef74dc7a64f9 down in Southbound
Oct 02 12:57:48 compute-0 ovn_controller[148183]: 2025-10-02T12:57:48Z|00870|binding|INFO|Removing iface tapacafd85c-21 ovn-installed in OVS
Oct 02 12:57:48 compute-0 nova_compute[257802]: 2025-10-02 12:57:48.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:48.575 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:ed:4f 10.100.0.6'], port_security=['fa:16:3e:e6:ed:4f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '667910fa-37c3-40ec-a844-293ff0af4324', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9b3e5364-0567-4be5-b771-728ed7dd0ab7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8668725b86704fdcacbb467738b51154', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9426c89d-e30e-4342-a8bd-1975c70a0c71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4e55754b-f304-4904-b3bf-7f80f94cdc02, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=acafd85c-21eb-448e-b19a-ef74dc7a64f9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:57:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:48.576 158261 INFO neutron.agent.ovn.metadata.agent [-] Port acafd85c-21eb-448e-b19a-ef74dc7a64f9 in datapath 9b3e5364-0567-4be5-b771-728ed7dd0ab7 unbound from our chassis
Oct 02 12:57:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:48.579 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9b3e5364-0567-4be5-b771-728ed7dd0ab7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:57:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:48.582 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ebc7e6c3-e7d6-4832-a075-d91b10f0158e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:48.582 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7 namespace which is not needed anymore
Oct 02 12:57:48 compute-0 nova_compute[257802]: 2025-10-02 12:57:48.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:48 compute-0 nova_compute[257802]: 2025-10-02 12:57:48.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:57:48 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000be.scope: Deactivated successfully.
Oct 02 12:57:48 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000be.scope: Consumed 13.346s CPU time.
Oct 02 12:57:48 compute-0 systemd-machined[211836]: Machine qemu-93-instance-000000be terminated.
Oct 02 12:57:48 compute-0 neutron-haproxy-ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7[381298]: [NOTICE]   (381302) : haproxy version is 2.8.14-c23fe91
Oct 02 12:57:48 compute-0 neutron-haproxy-ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7[381298]: [NOTICE]   (381302) : path to executable is /usr/sbin/haproxy
Oct 02 12:57:48 compute-0 neutron-haproxy-ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7[381298]: [WARNING]  (381302) : Exiting Master process...
Oct 02 12:57:48 compute-0 neutron-haproxy-ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7[381298]: [ALERT]    (381302) : Current worker (381304) exited with code 143 (Terminated)
Oct 02 12:57:48 compute-0 neutron-haproxy-ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7[381298]: [WARNING]  (381302) : All workers exited. Exiting... (0)
Oct 02 12:57:48 compute-0 systemd[1]: libpod-905e69751ec8db166b7d3a62fa6970fb3c3b96ba75e6f477cc2a08e0ac46c94a.scope: Deactivated successfully.
Oct 02 12:57:48 compute-0 conmon[381298]: conmon 905e69751ec8db166b7d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-905e69751ec8db166b7d3a62fa6970fb3c3b96ba75e6f477cc2a08e0ac46c94a.scope/container/memory.events
Oct 02 12:57:48 compute-0 podman[381618]: 2025-10-02 12:57:48.718845712 +0000 UTC m=+0.044928995 container died 905e69751ec8db166b7d3a62fa6970fb3c3b96ba75e6f477cc2a08e0ac46c94a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:57:48 compute-0 kernel: tapacafd85c-21: entered promiscuous mode
Oct 02 12:57:48 compute-0 NetworkManager[44987]: <info>  [1759409868.7248] manager: (tapacafd85c-21): new Tun device (/org/freedesktop/NetworkManager/Devices/391)
Oct 02 12:57:48 compute-0 ovn_controller[148183]: 2025-10-02T12:57:48Z|00871|binding|INFO|Claiming lport acafd85c-21eb-448e-b19a-ef74dc7a64f9 for this chassis.
Oct 02 12:57:48 compute-0 kernel: tapacafd85c-21 (unregistering): left promiscuous mode
Oct 02 12:57:48 compute-0 nova_compute[257802]: 2025-10-02 12:57:48.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:48 compute-0 ovn_controller[148183]: 2025-10-02T12:57:48Z|00872|binding|INFO|acafd85c-21eb-448e-b19a-ef74dc7a64f9: Claiming fa:16:3e:e6:ed:4f 10.100.0.6
Oct 02 12:57:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:48.734 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:ed:4f 10.100.0.6'], port_security=['fa:16:3e:e6:ed:4f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '667910fa-37c3-40ec-a844-293ff0af4324', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9b3e5364-0567-4be5-b771-728ed7dd0ab7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8668725b86704fdcacbb467738b51154', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9426c89d-e30e-4342-a8bd-1975c70a0c71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4e55754b-f304-4904-b3bf-7f80f94cdc02, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=acafd85c-21eb-448e-b19a-ef74dc7a64f9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:57:48 compute-0 nova_compute[257802]: 2025-10-02 12:57:48.753 2 INFO nova.virt.libvirt.driver [-] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Instance destroyed successfully.
Oct 02 12:57:48 compute-0 ovn_controller[148183]: 2025-10-02T12:57:48Z|00873|binding|INFO|Setting lport acafd85c-21eb-448e-b19a-ef74dc7a64f9 ovn-installed in OVS
Oct 02 12:57:48 compute-0 ovn_controller[148183]: 2025-10-02T12:57:48Z|00874|binding|INFO|Setting lport acafd85c-21eb-448e-b19a-ef74dc7a64f9 up in Southbound
Oct 02 12:57:48 compute-0 nova_compute[257802]: 2025-10-02 12:57:48.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:48 compute-0 ovn_controller[148183]: 2025-10-02T12:57:48Z|00875|binding|INFO|Releasing lport acafd85c-21eb-448e-b19a-ef74dc7a64f9 from this chassis (sb_readonly=1)
Oct 02 12:57:48 compute-0 ovn_controller[148183]: 2025-10-02T12:57:48Z|00876|if_status|INFO|Dropped 2 log messages in last 194 seconds (most recently, 194 seconds ago) due to excessive rate
Oct 02 12:57:48 compute-0 ovn_controller[148183]: 2025-10-02T12:57:48Z|00877|if_status|INFO|Not setting lport acafd85c-21eb-448e-b19a-ef74dc7a64f9 down as sb is readonly
Oct 02 12:57:48 compute-0 nova_compute[257802]: 2025-10-02 12:57:48.756 2 DEBUG nova.objects.instance [None req-7cfea883-4f16-4c8f-8e45-865dc551f8c1 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lazy-loading 'resources' on Instance uuid 667910fa-37c3-40ec-a844-293ff0af4324 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:57:48 compute-0 ovn_controller[148183]: 2025-10-02T12:57:48Z|00878|binding|INFO|Removing iface tapacafd85c-21 ovn-installed in OVS
Oct 02 12:57:48 compute-0 nova_compute[257802]: 2025-10-02 12:57:48.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:48 compute-0 ovn_controller[148183]: 2025-10-02T12:57:48Z|00879|binding|INFO|Releasing lport acafd85c-21eb-448e-b19a-ef74dc7a64f9 from this chassis (sb_readonly=0)
Oct 02 12:57:48 compute-0 ovn_controller[148183]: 2025-10-02T12:57:48Z|00880|binding|INFO|Setting lport acafd85c-21eb-448e-b19a-ef74dc7a64f9 down in Southbound
Oct 02 12:57:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-905e69751ec8db166b7d3a62fa6970fb3c3b96ba75e6f477cc2a08e0ac46c94a-userdata-shm.mount: Deactivated successfully.
Oct 02 12:57:48 compute-0 nova_compute[257802]: 2025-10-02 12:57:48.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc9963989c2a28832acc593d4c68f2b27f78a3d3ca531684a150b225525d70e0-merged.mount: Deactivated successfully.
Oct 02 12:57:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:48.775 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:ed:4f 10.100.0.6'], port_security=['fa:16:3e:e6:ed:4f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '667910fa-37c3-40ec-a844-293ff0af4324', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9b3e5364-0567-4be5-b771-728ed7dd0ab7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8668725b86704fdcacbb467738b51154', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9426c89d-e30e-4342-a8bd-1975c70a0c71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4e55754b-f304-4904-b3bf-7f80f94cdc02, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=acafd85c-21eb-448e-b19a-ef74dc7a64f9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:57:48 compute-0 nova_compute[257802]: 2025-10-02 12:57:48.778 2 DEBUG nova.virt.libvirt.vif [None req-7cfea883-4f16-4c8f-8e45-865dc551f8c1 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:57:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerMultinode-server-860206562',display_name='tempest-TestServerMultinode-server-860206562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testservermultinode-server-860206562',id=190,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:57:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8668725b86704fdcacbb467738b51154',ramdisk_id='',reservation_id='r-j4zxv7lt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerMultinode-1785572191',owner_user_name='tempest-TestServerMultinode-1785572191-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:57:34Z,user_data=None,user_id='de066041e985417da95924c04915bd11',uuid=667910fa-37c3-40ec-a844-293ff0af4324,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "acafd85c-21eb-448e-b19a-ef74dc7a64f9", "address": "fa:16:3e:e6:ed:4f", "network": {"id": "9b3e5364-0567-4be5-b771-728ed7dd0ab7", "bridge": "br-int", "label": "tempest-TestServerMultinode-1015877619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e08031c685814456aa6c43bcc8f98574", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacafd85c-21", "ovs_interfaceid": "acafd85c-21eb-448e-b19a-ef74dc7a64f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:57:48 compute-0 nova_compute[257802]: 2025-10-02 12:57:48.779 2 DEBUG nova.network.os_vif_util [None req-7cfea883-4f16-4c8f-8e45-865dc551f8c1 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Converting VIF {"id": "acafd85c-21eb-448e-b19a-ef74dc7a64f9", "address": "fa:16:3e:e6:ed:4f", "network": {"id": "9b3e5364-0567-4be5-b771-728ed7dd0ab7", "bridge": "br-int", "label": "tempest-TestServerMultinode-1015877619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e08031c685814456aa6c43bcc8f98574", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacafd85c-21", "ovs_interfaceid": "acafd85c-21eb-448e-b19a-ef74dc7a64f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:57:48 compute-0 nova_compute[257802]: 2025-10-02 12:57:48.780 2 DEBUG nova.network.os_vif_util [None req-7cfea883-4f16-4c8f-8e45-865dc551f8c1 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:ed:4f,bridge_name='br-int',has_traffic_filtering=True,id=acafd85c-21eb-448e-b19a-ef74dc7a64f9,network=Network(9b3e5364-0567-4be5-b771-728ed7dd0ab7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacafd85c-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:57:48 compute-0 nova_compute[257802]: 2025-10-02 12:57:48.781 2 DEBUG os_vif [None req-7cfea883-4f16-4c8f-8e45-865dc551f8c1 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:ed:4f,bridge_name='br-int',has_traffic_filtering=True,id=acafd85c-21eb-448e-b19a-ef74dc7a64f9,network=Network(9b3e5364-0567-4be5-b771-728ed7dd0ab7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacafd85c-21') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:57:48 compute-0 nova_compute[257802]: 2025-10-02 12:57:48.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:48 compute-0 nova_compute[257802]: 2025-10-02 12:57:48.783 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacafd85c-21, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:48 compute-0 nova_compute[257802]: 2025-10-02 12:57:48.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:48 compute-0 nova_compute[257802]: 2025-10-02 12:57:48.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:57:48 compute-0 nova_compute[257802]: 2025-10-02 12:57:48.790 2 INFO os_vif [None req-7cfea883-4f16-4c8f-8e45-865dc551f8c1 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:ed:4f,bridge_name='br-int',has_traffic_filtering=True,id=acafd85c-21eb-448e-b19a-ef74dc7a64f9,network=Network(9b3e5364-0567-4be5-b771-728ed7dd0ab7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacafd85c-21')
Oct 02 12:57:48 compute-0 podman[381618]: 2025-10-02 12:57:48.808293219 +0000 UTC m=+0.134376502 container cleanup 905e69751ec8db166b7d3a62fa6970fb3c3b96ba75e6f477cc2a08e0ac46c94a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:57:48 compute-0 ceph-mon[73607]: pgmap v2965: 305 pgs: 305 active+clean; 498 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.1 MiB/s rd, 2.1 MiB/s wr, 275 op/s
Oct 02 12:57:48 compute-0 systemd[1]: libpod-conmon-905e69751ec8db166b7d3a62fa6970fb3c3b96ba75e6f477cc2a08e0ac46c94a.scope: Deactivated successfully.
Oct 02 12:57:48 compute-0 podman[381666]: 2025-10-02 12:57:48.9035992 +0000 UTC m=+0.065280105 container remove 905e69751ec8db166b7d3a62fa6970fb3c3b96ba75e6f477cc2a08e0ac46c94a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:57:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:48.911 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[73136434-5f1e-4c57-9de6-56ef059331c7]: (4, ('Thu Oct  2 12:57:48 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7 (905e69751ec8db166b7d3a62fa6970fb3c3b96ba75e6f477cc2a08e0ac46c94a)\n905e69751ec8db166b7d3a62fa6970fb3c3b96ba75e6f477cc2a08e0ac46c94a\nThu Oct  2 12:57:48 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7 (905e69751ec8db166b7d3a62fa6970fb3c3b96ba75e6f477cc2a08e0ac46c94a)\n905e69751ec8db166b7d3a62fa6970fb3c3b96ba75e6f477cc2a08e0ac46c94a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:48.913 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7b9e691c-584c-4109-aafd-f1327e522ade]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:48.915 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b3e5364-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:48 compute-0 kernel: tap9b3e5364-00: left promiscuous mode
Oct 02 12:57:48 compute-0 nova_compute[257802]: 2025-10-02 12:57:48.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:48 compute-0 nova_compute[257802]: 2025-10-02 12:57:48.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:48.939 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8bec663f-98c0-456f-a095-8a4ddb149754]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:57:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:48.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:57:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:48.967 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a50ad484-4f83-4408-8de6-bac2e4b2dc40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:48.969 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1e007a2c-709c-4750-bfcf-92efadf30b39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:48.988 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d8b1b729-6800-46d3-888d-9a8f80e6e681]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 792131, 'reachable_time': 19686, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381683, 'error': None, 'target': 'ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:48.991 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:57:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:48.991 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[761bc75e-161f-4eea-af5b-a8ad60a61ff4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:48.992 158261 INFO neutron.agent.ovn.metadata.agent [-] Port acafd85c-21eb-448e-b19a-ef74dc7a64f9 in datapath 9b3e5364-0567-4be5-b771-728ed7dd0ab7 unbound from our chassis
Oct 02 12:57:48 compute-0 systemd[1]: run-netns-ovnmeta\x2d9b3e5364\x2d0567\x2d4be5\x2db771\x2d728ed7dd0ab7.mount: Deactivated successfully.
Oct 02 12:57:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:48.993 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9b3e5364-0567-4be5-b771-728ed7dd0ab7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:57:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:48.995 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[57c6bfc8-2ac9-46d5-a038-33d637ea8c57]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:48.995 158261 INFO neutron.agent.ovn.metadata.agent [-] Port acafd85c-21eb-448e-b19a-ef74dc7a64f9 in datapath 9b3e5364-0567-4be5-b771-728ed7dd0ab7 unbound from our chassis
Oct 02 12:57:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:48.997 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9b3e5364-0567-4be5-b771-728ed7dd0ab7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:57:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:48.997 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d42aee73-e62f-4b00-9ffd-b191f7da8307]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:49 compute-0 nova_compute[257802]: 2025-10-02 12:57:49.582 2 INFO nova.virt.libvirt.driver [None req-7cfea883-4f16-4c8f-8e45-865dc551f8c1 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Deleting instance files /var/lib/nova/instances/667910fa-37c3-40ec-a844-293ff0af4324_del
Oct 02 12:57:49 compute-0 nova_compute[257802]: 2025-10-02 12:57:49.583 2 INFO nova.virt.libvirt.driver [None req-7cfea883-4f16-4c8f-8e45-865dc551f8c1 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Deletion of /var/lib/nova/instances/667910fa-37c3-40ec-a844-293ff0af4324_del complete
Oct 02 12:57:49 compute-0 nova_compute[257802]: 2025-10-02 12:57:49.645 2 INFO nova.compute.manager [None req-7cfea883-4f16-4c8f-8e45-865dc551f8c1 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Took 1.35 seconds to destroy the instance on the hypervisor.
Oct 02 12:57:49 compute-0 nova_compute[257802]: 2025-10-02 12:57:49.646 2 DEBUG oslo.service.loopingcall [None req-7cfea883-4f16-4c8f-8e45-865dc551f8c1 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:57:49 compute-0 nova_compute[257802]: 2025-10-02 12:57:49.646 2 DEBUG nova.compute.manager [-] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:57:49 compute-0 nova_compute[257802]: 2025-10-02 12:57:49.647 2 DEBUG nova.network.neutron [-] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:57:49 compute-0 nova_compute[257802]: 2025-10-02 12:57:49.695 2 DEBUG nova.compute.manager [req-564f0ea2-1034-474e-8e85-7ca6ce0f661a req-29fc64f6-dcce-4e16-9ab4-a16504bed101 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Received event network-vif-unplugged-acafd85c-21eb-448e-b19a-ef74dc7a64f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:49 compute-0 nova_compute[257802]: 2025-10-02 12:57:49.695 2 DEBUG oslo_concurrency.lockutils [req-564f0ea2-1034-474e-8e85-7ca6ce0f661a req-29fc64f6-dcce-4e16-9ab4-a16504bed101 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "667910fa-37c3-40ec-a844-293ff0af4324-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:49 compute-0 nova_compute[257802]: 2025-10-02 12:57:49.696 2 DEBUG oslo_concurrency.lockutils [req-564f0ea2-1034-474e-8e85-7ca6ce0f661a req-29fc64f6-dcce-4e16-9ab4-a16504bed101 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "667910fa-37c3-40ec-a844-293ff0af4324-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:49 compute-0 nova_compute[257802]: 2025-10-02 12:57:49.696 2 DEBUG oslo_concurrency.lockutils [req-564f0ea2-1034-474e-8e85-7ca6ce0f661a req-29fc64f6-dcce-4e16-9ab4-a16504bed101 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "667910fa-37c3-40ec-a844-293ff0af4324-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:49 compute-0 nova_compute[257802]: 2025-10-02 12:57:49.696 2 DEBUG nova.compute.manager [req-564f0ea2-1034-474e-8e85-7ca6ce0f661a req-29fc64f6-dcce-4e16-9ab4-a16504bed101 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] No waiting events found dispatching network-vif-unplugged-acafd85c-21eb-448e-b19a-ef74dc7a64f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:57:49 compute-0 nova_compute[257802]: 2025-10-02 12:57:49.696 2 DEBUG nova.compute.manager [req-564f0ea2-1034-474e-8e85-7ca6ce0f661a req-29fc64f6-dcce-4e16-9ab4-a16504bed101 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Received event network-vif-unplugged-acafd85c-21eb-448e-b19a-ef74dc7a64f9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:57:49 compute-0 nova_compute[257802]: 2025-10-02 12:57:49.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:50.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2966: 305 pgs: 305 active+clean; 430 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.6 MiB/s rd, 6.4 MiB/s wr, 436 op/s
Oct 02 12:57:50 compute-0 ovn_controller[148183]: 2025-10-02T12:57:50Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3d:bc:3b 10.100.0.9
Oct 02 12:57:50 compute-0 ovn_controller[148183]: 2025-10-02T12:57:50Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:bc:3b 10.100.0.9
Oct 02 12:57:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:50.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:51 compute-0 ceph-mon[73607]: pgmap v2966: 305 pgs: 305 active+clean; 430 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.6 MiB/s rd, 6.4 MiB/s wr, 436 op/s
Oct 02 12:57:51 compute-0 nova_compute[257802]: 2025-10-02 12:57:51.862 2 DEBUG nova.compute.manager [req-c5b5f7f2-dd3f-4bfb-91c6-0df73617940d req-e7fa528b-dd41-49ea-bb67-170124d8e6d7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Received event network-vif-plugged-acafd85c-21eb-448e-b19a-ef74dc7a64f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:51 compute-0 nova_compute[257802]: 2025-10-02 12:57:51.862 2 DEBUG oslo_concurrency.lockutils [req-c5b5f7f2-dd3f-4bfb-91c6-0df73617940d req-e7fa528b-dd41-49ea-bb67-170124d8e6d7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "667910fa-37c3-40ec-a844-293ff0af4324-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:51 compute-0 nova_compute[257802]: 2025-10-02 12:57:51.862 2 DEBUG oslo_concurrency.lockutils [req-c5b5f7f2-dd3f-4bfb-91c6-0df73617940d req-e7fa528b-dd41-49ea-bb67-170124d8e6d7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "667910fa-37c3-40ec-a844-293ff0af4324-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:51 compute-0 nova_compute[257802]: 2025-10-02 12:57:51.863 2 DEBUG oslo_concurrency.lockutils [req-c5b5f7f2-dd3f-4bfb-91c6-0df73617940d req-e7fa528b-dd41-49ea-bb67-170124d8e6d7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "667910fa-37c3-40ec-a844-293ff0af4324-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:51 compute-0 nova_compute[257802]: 2025-10-02 12:57:51.863 2 DEBUG nova.compute.manager [req-c5b5f7f2-dd3f-4bfb-91c6-0df73617940d req-e7fa528b-dd41-49ea-bb67-170124d8e6d7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] No waiting events found dispatching network-vif-plugged-acafd85c-21eb-448e-b19a-ef74dc7a64f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:57:51 compute-0 nova_compute[257802]: 2025-10-02 12:57:51.863 2 WARNING nova.compute.manager [req-c5b5f7f2-dd3f-4bfb-91c6-0df73617940d req-e7fa528b-dd41-49ea-bb67-170124d8e6d7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Received unexpected event network-vif-plugged-acafd85c-21eb-448e-b19a-ef74dc7a64f9 for instance with vm_state active and task_state deleting.
Oct 02 12:57:52 compute-0 nova_compute[257802]: 2025-10-02 12:57:52.003 2 DEBUG nova.network.neutron [-] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:57:52 compute-0 nova_compute[257802]: 2025-10-02 12:57:52.029 2 INFO nova.compute.manager [-] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Took 2.38 seconds to deallocate network for instance.
Oct 02 12:57:52 compute-0 nova_compute[257802]: 2025-10-02 12:57:52.099 2 DEBUG oslo_concurrency.lockutils [None req-7cfea883-4f16-4c8f-8e45-865dc551f8c1 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:52 compute-0 nova_compute[257802]: 2025-10-02 12:57:52.099 2 DEBUG oslo_concurrency.lockutils [None req-7cfea883-4f16-4c8f-8e45-865dc551f8c1 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:52.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:52 compute-0 nova_compute[257802]: 2025-10-02 12:57:52.159 2 DEBUG nova.compute.manager [req-171254be-2a60-4def-810a-e2c44206d980 req-c0acb1f3-b45b-4495-9790-fcc0af7dcfca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Received event network-vif-deleted-acafd85c-21eb-448e-b19a-ef74dc7a64f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:52 compute-0 nova_compute[257802]: 2025-10-02 12:57:52.212 2 DEBUG oslo_concurrency.processutils [None req-7cfea883-4f16-4c8f-8e45-865dc551f8c1 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2967: 305 pgs: 305 active+clean; 430 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 6.4 MiB/s wr, 295 op/s
Oct 02 12:57:52 compute-0 nova_compute[257802]: 2025-10-02 12:57:52.663 2 DEBUG oslo_concurrency.processutils [None req-7cfea883-4f16-4c8f-8e45-865dc551f8c1 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:52 compute-0 nova_compute[257802]: 2025-10-02 12:57:52.670 2 DEBUG nova.compute.provider_tree [None req-7cfea883-4f16-4c8f-8e45-865dc551f8c1 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:57:52 compute-0 nova_compute[257802]: 2025-10-02 12:57:52.695 2 DEBUG nova.scheduler.client.report [None req-7cfea883-4f16-4c8f-8e45-865dc551f8c1 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:57:52 compute-0 nova_compute[257802]: 2025-10-02 12:57:52.736 2 DEBUG oslo_concurrency.lockutils [None req-7cfea883-4f16-4c8f-8e45-865dc551f8c1 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:52 compute-0 nova_compute[257802]: 2025-10-02 12:57:52.760 2 INFO nova.scheduler.client.report [None req-7cfea883-4f16-4c8f-8e45-865dc551f8c1 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Deleted allocations for instance 667910fa-37c3-40ec-a844-293ff0af4324
Oct 02 12:57:52 compute-0 nova_compute[257802]: 2025-10-02 12:57:52.840 2 DEBUG oslo_concurrency.lockutils [None req-7cfea883-4f16-4c8f-8e45-865dc551f8c1 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "667910fa-37c3-40ec-a844-293ff0af4324" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:52.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:53 compute-0 ceph-mon[73607]: pgmap v2967: 305 pgs: 305 active+clean; 430 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 6.4 MiB/s wr, 295 op/s
Oct 02 12:57:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/688148475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:53 compute-0 nova_compute[257802]: 2025-10-02 12:57:53.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:54.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2968: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 6.4 MiB/s wr, 313 op/s
Oct 02 12:57:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007832311150660711 of space, bias 1.0, pg target 2.3496933451982134 quantized to 32 (current 32)
Oct 02 12:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6441557469058254 quantized to 32 (current 32)
Oct 02 12:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Oct 02 12:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027081297692164525 quantized to 32 (current 32)
Oct 02 12:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:57:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:57:54 compute-0 nova_compute[257802]: 2025-10-02 12:57:54.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:54.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:57:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1732376116' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:57:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:57:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1732376116' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:57:55 compute-0 ceph-mon[73607]: pgmap v2968: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 6.4 MiB/s wr, 313 op/s
Oct 02 12:57:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1732376116' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:57:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1732376116' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:57:55 compute-0 podman[381711]: 2025-10-02 12:57:55.936039091 +0000 UTC m=+0.067386477 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 12:57:55 compute-0 podman[381710]: 2025-10-02 12:57:55.949101911 +0000 UTC m=+0.086974107 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 02 12:57:55 compute-0 podman[381712]: 2025-10-02 12:57:55.959016395 +0000 UTC m=+0.087509141 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001)
Oct 02 12:57:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:57:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:56.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:57:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2969: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.7 MiB/s wr, 307 op/s
Oct 02 12:57:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:57:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:56.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:57:57 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:57.673 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=69, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=68) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:57:57 compute-0 nova_compute[257802]: 2025-10-02 12:57:57.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:57 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:57:57.675 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:57:57 compute-0 ceph-mon[73607]: pgmap v2969: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.7 MiB/s wr, 307 op/s
Oct 02 12:57:57 compute-0 nova_compute[257802]: 2025-10-02 12:57:57.975 2 DEBUG oslo_concurrency.lockutils [None req-a00a31d5-ba67-4659-bff0-6f83cf9064d0 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "01108902-768b-4bee-baff-11d5854e2f77" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:57 compute-0 nova_compute[257802]: 2025-10-02 12:57:57.976 2 DEBUG oslo_concurrency.lockutils [None req-a00a31d5-ba67-4659-bff0-6f83cf9064d0 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:58 compute-0 nova_compute[257802]: 2025-10-02 12:57:58.000 2 DEBUG nova.objects.instance [None req-a00a31d5-ba67-4659-bff0-6f83cf9064d0 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'flavor' on Instance uuid 01108902-768b-4bee-baff-11d5854e2f77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:57:58 compute-0 nova_compute[257802]: 2025-10-02 12:57:58.037 2 DEBUG oslo_concurrency.lockutils [None req-a00a31d5-ba67-4659-bff0-6f83cf9064d0 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:58.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:58 compute-0 nova_compute[257802]: 2025-10-02 12:57:58.229 2 DEBUG oslo_concurrency.lockutils [None req-a00a31d5-ba67-4659-bff0-6f83cf9064d0 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "01108902-768b-4bee-baff-11d5854e2f77" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:58 compute-0 nova_compute[257802]: 2025-10-02 12:57:58.230 2 DEBUG oslo_concurrency.lockutils [None req-a00a31d5-ba67-4659-bff0-6f83cf9064d0 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:58 compute-0 nova_compute[257802]: 2025-10-02 12:57:58.231 2 INFO nova.compute.manager [None req-a00a31d5-ba67-4659-bff0-6f83cf9064d0 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Attaching volume c4c3ee6e-d57f-4361-8cda-389d0f995821 to /dev/vdb
Oct 02 12:57:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2970: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 627 KiB/s rd, 4.3 MiB/s wr, 196 op/s
Oct 02 12:57:58 compute-0 nova_compute[257802]: 2025-10-02 12:57:58.367 2 DEBUG os_brick.utils [None req-a00a31d5-ba67-4659-bff0-6f83cf9064d0 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:57:58 compute-0 nova_compute[257802]: 2025-10-02 12:57:58.368 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:58 compute-0 nova_compute[257802]: 2025-10-02 12:57:58.384 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:58 compute-0 nova_compute[257802]: 2025-10-02 12:57:58.385 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[cd4e2c9c-f6ec-4e43-b2f3-a24109a614f0]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:58 compute-0 nova_compute[257802]: 2025-10-02 12:57:58.386 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:58 compute-0 nova_compute[257802]: 2025-10-02 12:57:58.401 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:58 compute-0 nova_compute[257802]: 2025-10-02 12:57:58.402 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[63ff1427-8c23-4482-9b43-9794d0185f83]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:58 compute-0 nova_compute[257802]: 2025-10-02 12:57:58.404 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:58 compute-0 nova_compute[257802]: 2025-10-02 12:57:58.416 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:58 compute-0 nova_compute[257802]: 2025-10-02 12:57:58.416 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[58aa2ba3-8b70-4a35-a969-aaefaaa8180a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:58 compute-0 nova_compute[257802]: 2025-10-02 12:57:58.418 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[95ef0fbd-c8d5-4d8a-bcd5-ac1ed5a16cf2]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:58 compute-0 nova_compute[257802]: 2025-10-02 12:57:58.419 2 DEBUG oslo_concurrency.processutils [None req-a00a31d5-ba67-4659-bff0-6f83cf9064d0 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:58 compute-0 nova_compute[257802]: 2025-10-02 12:57:58.469 2 DEBUG oslo_concurrency.processutils [None req-a00a31d5-ba67-4659-bff0-6f83cf9064d0 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CMD "nvme version" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:58 compute-0 nova_compute[257802]: 2025-10-02 12:57:58.471 2 DEBUG os_brick.initiator.connectors.lightos [None req-a00a31d5-ba67-4659-bff0-6f83cf9064d0 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:57:58 compute-0 nova_compute[257802]: 2025-10-02 12:57:58.472 2 DEBUG os_brick.initiator.connectors.lightos [None req-a00a31d5-ba67-4659-bff0-6f83cf9064d0 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:57:58 compute-0 nova_compute[257802]: 2025-10-02 12:57:58.472 2 DEBUG os_brick.initiator.connectors.lightos [None req-a00a31d5-ba67-4659-bff0-6f83cf9064d0 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:57:58 compute-0 nova_compute[257802]: 2025-10-02 12:57:58.472 2 DEBUG os_brick.utils [None req-a00a31d5-ba67-4659-bff0-6f83cf9064d0 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] <== get_connector_properties: return (104ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:57:58 compute-0 nova_compute[257802]: 2025-10-02 12:57:58.473 2 DEBUG nova.virt.block_device [None req-a00a31d5-ba67-4659-bff0-6f83cf9064d0 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Updating existing volume attachment record: 123ee2b6-4c88-4103-8e74-213ec4835687 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:57:58 compute-0 nova_compute[257802]: 2025-10-02 12:57:58.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2346521657' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:58 compute-0 ceph-mon[73607]: pgmap v2970: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 627 KiB/s rd, 4.3 MiB/s wr, 196 op/s
Oct 02 12:57:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:57:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:58.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:59 compute-0 sudo[381775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:59 compute-0 sudo[381775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:59 compute-0 sudo[381775]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:59 compute-0 sudo[381800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:59 compute-0 sudo[381800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:59 compute-0 sudo[381800]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:59 compute-0 nova_compute[257802]: 2025-10-02 12:57:59.414 2 DEBUG nova.objects.instance [None req-a00a31d5-ba67-4659-bff0-6f83cf9064d0 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'flavor' on Instance uuid 01108902-768b-4bee-baff-11d5854e2f77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:57:59 compute-0 nova_compute[257802]: 2025-10-02 12:57:59.437 2 DEBUG nova.virt.libvirt.driver [None req-a00a31d5-ba67-4659-bff0-6f83cf9064d0 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Attempting to attach volume c4c3ee6e-d57f-4361-8cda-389d0f995821 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 12:57:59 compute-0 nova_compute[257802]: 2025-10-02 12:57:59.440 2 DEBUG nova.virt.libvirt.guest [None req-a00a31d5-ba67-4659-bff0-6f83cf9064d0 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 12:57:59 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:57:59 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-c4c3ee6e-d57f-4361-8cda-389d0f995821">
Oct 02 12:57:59 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:57:59 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:57:59 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:57:59 compute-0 nova_compute[257802]:   </source>
Oct 02 12:57:59 compute-0 nova_compute[257802]:   <auth username="openstack">
Oct 02 12:57:59 compute-0 nova_compute[257802]:     <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:57:59 compute-0 nova_compute[257802]:   </auth>
Oct 02 12:57:59 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:57:59 compute-0 nova_compute[257802]:   <serial>c4c3ee6e-d57f-4361-8cda-389d0f995821</serial>
Oct 02 12:57:59 compute-0 nova_compute[257802]: </disk>
Oct 02 12:57:59 compute-0 nova_compute[257802]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:57:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:59 compute-0 nova_compute[257802]: 2025-10-02 12:57:59.667 2 DEBUG nova.virt.libvirt.driver [None req-a00a31d5-ba67-4659-bff0-6f83cf9064d0 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:57:59 compute-0 nova_compute[257802]: 2025-10-02 12:57:59.667 2 DEBUG nova.virt.libvirt.driver [None req-a00a31d5-ba67-4659-bff0-6f83cf9064d0 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:57:59 compute-0 nova_compute[257802]: 2025-10-02 12:57:59.667 2 DEBUG nova.virt.libvirt.driver [None req-a00a31d5-ba67-4659-bff0-6f83cf9064d0 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:57:59 compute-0 nova_compute[257802]: 2025-10-02 12:57:59.668 2 DEBUG nova.virt.libvirt.driver [None req-a00a31d5-ba67-4659-bff0-6f83cf9064d0 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] No VIF found with MAC fa:16:3e:3d:bc:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:57:59 compute-0 nova_compute[257802]: 2025-10-02 12:57:59.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/266658596' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:59 compute-0 nova_compute[257802]: 2025-10-02 12:57:59.912 2 DEBUG oslo_concurrency.lockutils [None req-a00a31d5-ba67-4659-bff0-6f83cf9064d0 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:00.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2971: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 634 KiB/s rd, 4.3 MiB/s wr, 206 op/s
Oct 02 12:58:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:00.677 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '69'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:58:00 compute-0 nova_compute[257802]: 2025-10-02 12:58:00.823 2 DEBUG oslo_concurrency.lockutils [None req-5a7c20f2-b836-48e6-bfc0-ec09d29deb01 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "01108902-768b-4bee-baff-11d5854e2f77" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:00 compute-0 nova_compute[257802]: 2025-10-02 12:58:00.823 2 DEBUG oslo_concurrency.lockutils [None req-5a7c20f2-b836-48e6-bfc0-ec09d29deb01 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:00 compute-0 nova_compute[257802]: 2025-10-02 12:58:00.823 2 DEBUG nova.compute.manager [None req-5a7c20f2-b836-48e6-bfc0-ec09d29deb01 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:58:00 compute-0 nova_compute[257802]: 2025-10-02 12:58:00.826 2 DEBUG nova.compute.manager [None req-5a7c20f2-b836-48e6-bfc0-ec09d29deb01 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 02 12:58:00 compute-0 nova_compute[257802]: 2025-10-02 12:58:00.827 2 DEBUG nova.objects.instance [None req-5a7c20f2-b836-48e6-bfc0-ec09d29deb01 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'flavor' on Instance uuid 01108902-768b-4bee-baff-11d5854e2f77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:00 compute-0 ceph-mon[73607]: pgmap v2971: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 634 KiB/s rd, 4.3 MiB/s wr, 206 op/s
Oct 02 12:58:00 compute-0 nova_compute[257802]: 2025-10-02 12:58:00.892 2 DEBUG nova.virt.libvirt.driver [None req-5a7c20f2-b836-48e6-bfc0-ec09d29deb01 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:58:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:00.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:01 compute-0 podman[381846]: 2025-10-02 12:58:01.980955303 +0000 UTC m=+0.101991046 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:58:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:02.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2972: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 91 KiB/s rd, 56 KiB/s wr, 44 op/s
Oct 02 12:58:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:58:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:02.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:58:03 compute-0 kernel: tap612e6054-5c (unregistering): left promiscuous mode
Oct 02 12:58:03 compute-0 NetworkManager[44987]: <info>  [1759409883.3132] device (tap612e6054-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:58:03 compute-0 nova_compute[257802]: 2025-10-02 12:58:03.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:03 compute-0 ovn_controller[148183]: 2025-10-02T12:58:03Z|00881|binding|INFO|Releasing lport 612e6054-5ce1-486c-aa51-2e5d47567ef3 from this chassis (sb_readonly=0)
Oct 02 12:58:03 compute-0 ovn_controller[148183]: 2025-10-02T12:58:03Z|00882|binding|INFO|Setting lport 612e6054-5ce1-486c-aa51-2e5d47567ef3 down in Southbound
Oct 02 12:58:03 compute-0 ovn_controller[148183]: 2025-10-02T12:58:03Z|00883|binding|INFO|Removing iface tap612e6054-5c ovn-installed in OVS
Oct 02 12:58:03 compute-0 nova_compute[257802]: 2025-10-02 12:58:03.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:03.333 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:bc:3b 10.100.0.9'], port_security=['fa:16:3e:3d:bc:3b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '01108902-768b-4bee-baff-11d5854e2f77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7168b5b1300495d90592b195824729a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a80d211-09c4-4e45-80f9-f1b2f1a7f90e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.186'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb99a427-f5c8-46c4-b56f-12cf288447a9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=612e6054-5ce1-486c-aa51-2e5d47567ef3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:58:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:03.334 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 612e6054-5ce1-486c-aa51-2e5d47567ef3 in datapath ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f unbound from our chassis
Oct 02 12:58:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:03.335 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:58:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:03.337 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[794da1d0-2c56-46d2-88fb-b8d4d19231c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:03.337 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f namespace which is not needed anymore
Oct 02 12:58:03 compute-0 nova_compute[257802]: 2025-10-02 12:58:03.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:03 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000bf.scope: Deactivated successfully.
Oct 02 12:58:03 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000bf.scope: Consumed 14.431s CPU time.
Oct 02 12:58:03 compute-0 systemd-machined[211836]: Machine qemu-94-instance-000000bf terminated.
Oct 02 12:58:03 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[381476]: [NOTICE]   (381480) : haproxy version is 2.8.14-c23fe91
Oct 02 12:58:03 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[381476]: [NOTICE]   (381480) : path to executable is /usr/sbin/haproxy
Oct 02 12:58:03 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[381476]: [WARNING]  (381480) : Exiting Master process...
Oct 02 12:58:03 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[381476]: [WARNING]  (381480) : Exiting Master process...
Oct 02 12:58:03 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[381476]: [ALERT]    (381480) : Current worker (381482) exited with code 143 (Terminated)
Oct 02 12:58:03 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[381476]: [WARNING]  (381480) : All workers exited. Exiting... (0)
Oct 02 12:58:03 compute-0 systemd[1]: libpod-e7f8f8a1bd46b3c9a092ebc4f7179bcce10713b84a35f0f41814fadf6a152544.scope: Deactivated successfully.
Oct 02 12:58:03 compute-0 podman[381895]: 2025-10-02 12:58:03.517880928 +0000 UTC m=+0.080554160 container died e7f8f8a1bd46b3c9a092ebc4f7179bcce10713b84a35f0f41814fadf6a152544 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:58:03 compute-0 ceph-mon[73607]: pgmap v2972: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 91 KiB/s rd, 56 KiB/s wr, 44 op/s
Oct 02 12:58:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a389aa3228956f7f8d525032f2c7b6054832e4d74645776e5f51737c1f610a1-merged.mount: Deactivated successfully.
Oct 02 12:58:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e7f8f8a1bd46b3c9a092ebc4f7179bcce10713b84a35f0f41814fadf6a152544-userdata-shm.mount: Deactivated successfully.
Oct 02 12:58:03 compute-0 podman[381895]: 2025-10-02 12:58:03.565679632 +0000 UTC m=+0.128352844 container cleanup e7f8f8a1bd46b3c9a092ebc4f7179bcce10713b84a35f0f41814fadf6a152544 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:58:03 compute-0 systemd[1]: libpod-conmon-e7f8f8a1bd46b3c9a092ebc4f7179bcce10713b84a35f0f41814fadf6a152544.scope: Deactivated successfully.
Oct 02 12:58:03 compute-0 podman[381934]: 2025-10-02 12:58:03.638992393 +0000 UTC m=+0.046050153 container remove e7f8f8a1bd46b3c9a092ebc4f7179bcce10713b84a35f0f41814fadf6a152544 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 12:58:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:03.646 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6e4c8527-be8b-4fbb-84d7-aabdcc75190a]: (4, ('Thu Oct  2 12:58:03 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f (e7f8f8a1bd46b3c9a092ebc4f7179bcce10713b84a35f0f41814fadf6a152544)\ne7f8f8a1bd46b3c9a092ebc4f7179bcce10713b84a35f0f41814fadf6a152544\nThu Oct  2 12:58:03 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f (e7f8f8a1bd46b3c9a092ebc4f7179bcce10713b84a35f0f41814fadf6a152544)\ne7f8f8a1bd46b3c9a092ebc4f7179bcce10713b84a35f0f41814fadf6a152544\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:03.651 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a262d426-cb54-401e-bc65-d5082ef8ae81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:03.652 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff8c8423-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:58:03 compute-0 nova_compute[257802]: 2025-10-02 12:58:03.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:03 compute-0 kernel: tapff8c8423-f0: left promiscuous mode
Oct 02 12:58:03 compute-0 nova_compute[257802]: 2025-10-02 12:58:03.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:03 compute-0 nova_compute[257802]: 2025-10-02 12:58:03.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:03.675 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f0dc47ce-59e1-4787-97ba-7aa8f6ac7ec3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:03 compute-0 nova_compute[257802]: 2025-10-02 12:58:03.684 2 DEBUG nova.compute.manager [req-c34a3a12-5c47-4961-8e0b-0100cd67d063 req-d9a39f52-8bcf-4f26-bf3e-5ef57e88b716 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Received event network-vif-unplugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:58:03 compute-0 nova_compute[257802]: 2025-10-02 12:58:03.685 2 DEBUG oslo_concurrency.lockutils [req-c34a3a12-5c47-4961-8e0b-0100cd67d063 req-d9a39f52-8bcf-4f26-bf3e-5ef57e88b716 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "01108902-768b-4bee-baff-11d5854e2f77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:03 compute-0 nova_compute[257802]: 2025-10-02 12:58:03.685 2 DEBUG oslo_concurrency.lockutils [req-c34a3a12-5c47-4961-8e0b-0100cd67d063 req-d9a39f52-8bcf-4f26-bf3e-5ef57e88b716 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:03 compute-0 nova_compute[257802]: 2025-10-02 12:58:03.685 2 DEBUG oslo_concurrency.lockutils [req-c34a3a12-5c47-4961-8e0b-0100cd67d063 req-d9a39f52-8bcf-4f26-bf3e-5ef57e88b716 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:03 compute-0 nova_compute[257802]: 2025-10-02 12:58:03.685 2 DEBUG nova.compute.manager [req-c34a3a12-5c47-4961-8e0b-0100cd67d063 req-d9a39f52-8bcf-4f26-bf3e-5ef57e88b716 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] No waiting events found dispatching network-vif-unplugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:58:03 compute-0 nova_compute[257802]: 2025-10-02 12:58:03.685 2 WARNING nova.compute.manager [req-c34a3a12-5c47-4961-8e0b-0100cd67d063 req-d9a39f52-8bcf-4f26-bf3e-5ef57e88b716 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Received unexpected event network-vif-unplugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 for instance with vm_state active and task_state powering-off.
Oct 02 12:58:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:03.711 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff8f2b6-a25e-4761-a852-2217a50a1237]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:03.713 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[82735c8c-6e10-4023-bdc7-bcfceff693e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:03.732 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4cd09595-cdcf-4927-94ec-3345298a8a9d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 792314, 'reachable_time': 39880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381952, 'error': None, 'target': 'ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:03.734 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:58:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:03.735 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[00923625-dd87-4a99-aef2-bdf73090c633]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:03 compute-0 systemd[1]: run-netns-ovnmeta\x2dff8c8423\x2df2c6\x2d4e3f\x2d8fd3\x2d2fb6b6292a3f.mount: Deactivated successfully.
Oct 02 12:58:03 compute-0 nova_compute[257802]: 2025-10-02 12:58:03.749 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409868.748709, 667910fa-37c3-40ec-a844-293ff0af4324 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:58:03 compute-0 nova_compute[257802]: 2025-10-02 12:58:03.750 2 INFO nova.compute.manager [-] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] VM Stopped (Lifecycle Event)
Oct 02 12:58:03 compute-0 nova_compute[257802]: 2025-10-02 12:58:03.775 2 DEBUG nova.compute.manager [None req-4cecb168-f2aa-4dee-a004-f2b0f1eb343d - - - - - -] [instance: 667910fa-37c3-40ec-a844-293ff0af4324] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:58:03 compute-0 nova_compute[257802]: 2025-10-02 12:58:03.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:03 compute-0 nova_compute[257802]: 2025-10-02 12:58:03.908 2 INFO nova.virt.libvirt.driver [None req-5a7c20f2-b836-48e6-bfc0-ec09d29deb01 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Instance shutdown successfully after 3 seconds.
Oct 02 12:58:03 compute-0 nova_compute[257802]: 2025-10-02 12:58:03.912 2 INFO nova.virt.libvirt.driver [-] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Instance destroyed successfully.
Oct 02 12:58:03 compute-0 nova_compute[257802]: 2025-10-02 12:58:03.912 2 DEBUG nova.objects.instance [None req-5a7c20f2-b836-48e6-bfc0-ec09d29deb01 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'numa_topology' on Instance uuid 01108902-768b-4bee-baff-11d5854e2f77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:03 compute-0 nova_compute[257802]: 2025-10-02 12:58:03.923 2 DEBUG nova.compute.manager [None req-5a7c20f2-b836-48e6-bfc0-ec09d29deb01 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:58:03 compute-0 nova_compute[257802]: 2025-10-02 12:58:03.973 2 DEBUG oslo_concurrency.lockutils [None req-5a7c20f2-b836-48e6-bfc0-ec09d29deb01 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:58:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:04.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:58:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2973: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 93 KiB/s rd, 67 KiB/s wr, 49 op/s
Oct 02 12:58:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:04 compute-0 nova_compute[257802]: 2025-10-02 12:58:04.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:04.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:05 compute-0 ceph-mon[73607]: pgmap v2973: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 93 KiB/s rd, 67 KiB/s wr, 49 op/s
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:58:05.623260) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409885623336, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 841, "num_deletes": 251, "total_data_size": 1182763, "memory_usage": 1199360, "flush_reason": "Manual Compaction"}
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409885632040, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 1158332, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65415, "largest_seqno": 66255, "table_properties": {"data_size": 1154084, "index_size": 1899, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9719, "raw_average_key_size": 19, "raw_value_size": 1145551, "raw_average_value_size": 2342, "num_data_blocks": 84, "num_entries": 489, "num_filter_entries": 489, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409824, "oldest_key_time": 1759409824, "file_creation_time": 1759409885, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 8823 microseconds, and 3974 cpu microseconds.
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:58:05.632087) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 1158332 bytes OK
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:58:05.632115) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:58:05.633552) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:58:05.633566) EVENT_LOG_v1 {"time_micros": 1759409885633561, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:58:05.633584) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 1178665, prev total WAL file size 1178665, number of live WAL files 2.
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:58:05.634217) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(1131KB)], [149(10MB)]
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409885634252, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 12143194, "oldest_snapshot_seqno": -1}
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 9074 keys, 10251299 bytes, temperature: kUnknown
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409885684673, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 10251299, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10194585, "index_size": 32921, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22725, "raw_key_size": 240512, "raw_average_key_size": 26, "raw_value_size": 10037143, "raw_average_value_size": 1106, "num_data_blocks": 1244, "num_entries": 9074, "num_filter_entries": 9074, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759409885, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:58:05.685332) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 10251299 bytes
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:58:05.686573) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 239.2 rd, 201.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 10.5 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(19.3) write-amplify(8.9) OK, records in: 9590, records dropped: 516 output_compression: NoCompression
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:58:05.686600) EVENT_LOG_v1 {"time_micros": 1759409885686586, "job": 92, "event": "compaction_finished", "compaction_time_micros": 50772, "compaction_time_cpu_micros": 24663, "output_level": 6, "num_output_files": 1, "total_output_size": 10251299, "num_input_records": 9590, "num_output_records": 9074, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409885687327, "job": 92, "event": "table_file_deletion", "file_number": 151}
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409885690486, "job": 92, "event": "table_file_deletion", "file_number": 149}
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:58:05.634106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:58:05.690699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:58:05.690710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:58:05.690712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:58:05.690714) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:58:05 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-12:58:05.690716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:58:05 compute-0 nova_compute[257802]: 2025-10-02 12:58:05.788 2 DEBUG nova.compute.manager [req-b24ef86f-f334-438a-ae4b-f36aca5bbc37 req-68578bd4-97b0-4539-a0c5-4ffd75f24f64 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Received event network-vif-plugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:58:05 compute-0 nova_compute[257802]: 2025-10-02 12:58:05.788 2 DEBUG oslo_concurrency.lockutils [req-b24ef86f-f334-438a-ae4b-f36aca5bbc37 req-68578bd4-97b0-4539-a0c5-4ffd75f24f64 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "01108902-768b-4bee-baff-11d5854e2f77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:05 compute-0 nova_compute[257802]: 2025-10-02 12:58:05.788 2 DEBUG oslo_concurrency.lockutils [req-b24ef86f-f334-438a-ae4b-f36aca5bbc37 req-68578bd4-97b0-4539-a0c5-4ffd75f24f64 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:05 compute-0 nova_compute[257802]: 2025-10-02 12:58:05.789 2 DEBUG oslo_concurrency.lockutils [req-b24ef86f-f334-438a-ae4b-f36aca5bbc37 req-68578bd4-97b0-4539-a0c5-4ffd75f24f64 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:05 compute-0 nova_compute[257802]: 2025-10-02 12:58:05.789 2 DEBUG nova.compute.manager [req-b24ef86f-f334-438a-ae4b-f36aca5bbc37 req-68578bd4-97b0-4539-a0c5-4ffd75f24f64 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] No waiting events found dispatching network-vif-plugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:58:05 compute-0 nova_compute[257802]: 2025-10-02 12:58:05.789 2 WARNING nova.compute.manager [req-b24ef86f-f334-438a-ae4b-f36aca5bbc37 req-68578bd4-97b0-4539-a0c5-4ffd75f24f64 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Received unexpected event network-vif-plugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 for instance with vm_state stopped and task_state None.
Oct 02 12:58:05 compute-0 nova_compute[257802]: 2025-10-02 12:58:05.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:06 compute-0 nova_compute[257802]: 2025-10-02 12:58:06.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:58:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:06.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:58:06 compute-0 nova_compute[257802]: 2025-10-02 12:58:06.245 2 DEBUG nova.objects.instance [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'flavor' on Instance uuid 01108902-768b-4bee-baff-11d5854e2f77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:06 compute-0 nova_compute[257802]: 2025-10-02 12:58:06.302 2 DEBUG oslo_concurrency.lockutils [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "refresh_cache-01108902-768b-4bee-baff-11d5854e2f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:58:06 compute-0 nova_compute[257802]: 2025-10-02 12:58:06.302 2 DEBUG oslo_concurrency.lockutils [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquired lock "refresh_cache-01108902-768b-4bee-baff-11d5854e2f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:58:06 compute-0 nova_compute[257802]: 2025-10-02 12:58:06.302 2 DEBUG nova.network.neutron [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:58:06 compute-0 nova_compute[257802]: 2025-10-02 12:58:06.302 2 DEBUG nova.objects.instance [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'info_cache' on Instance uuid 01108902-768b-4bee-baff-11d5854e2f77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2974: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 40 KiB/s rd, 42 KiB/s wr, 33 op/s
Oct 02 12:58:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:58:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:06.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:58:07 compute-0 ceph-mon[73607]: pgmap v2974: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 40 KiB/s rd, 42 KiB/s wr, 33 op/s
Oct 02 12:58:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:08.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2975: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.7 KiB/s rd, 34 KiB/s wr, 15 op/s
Oct 02 12:58:08 compute-0 nova_compute[257802]: 2025-10-02 12:58:08.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:08 compute-0 nova_compute[257802]: 2025-10-02 12:58:08.890 2 DEBUG nova.network.neutron [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Updating instance_info_cache with network_info: [{"id": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "address": "fa:16:3e:3d:bc:3b", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612e6054-5c", "ovs_interfaceid": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:58:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:08.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.009 2 DEBUG oslo_concurrency.lockutils [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Releasing lock "refresh_cache-01108902-768b-4bee-baff-11d5854e2f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.052 2 INFO nova.virt.libvirt.driver [-] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Instance destroyed successfully.
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.052 2 DEBUG nova.objects.instance [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'numa_topology' on Instance uuid 01108902-768b-4bee-baff-11d5854e2f77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.088 2 DEBUG nova.objects.instance [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'resources' on Instance uuid 01108902-768b-4bee-baff-11d5854e2f77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.107 2 DEBUG nova.virt.libvirt.vif [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:57:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-1568055795',display_name='tempest-AttachVolumeTestJSON-server-1568055795',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-1568055795',id=191,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCbS2mKT+0z/yQuxuzfwCwsXKWxC0UzeLnKuZT+sZHjADBetDcCrAmsjHh5DZcFc1laecrJqG3Gw7KVMmBAo01ad6Z646e7xI9MDC+TYltwo6ghxZsSIWSKPTUL61VunAQ==',key_name='tempest-keypair-1834836644',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:57:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='e7168b5b1300495d90592b195824729a',ramdisk_id='',reservation_id='r-ikr3sr2k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-398185718',owner_user_name='tempest-AttachVolumeTestJSON-398185718-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:58:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3299a1aed5af4843a91417a3f181c172',uuid=01108902-768b-4bee-baff-11d5854e2f77,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "address": "fa:16:3e:3d:bc:3b", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612e6054-5c", "ovs_interfaceid": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.108 2 DEBUG nova.network.os_vif_util [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Converting VIF {"id": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "address": "fa:16:3e:3d:bc:3b", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612e6054-5c", "ovs_interfaceid": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.109 2 DEBUG nova.network.os_vif_util [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:bc:3b,bridge_name='br-int',has_traffic_filtering=True,id=612e6054-5ce1-486c-aa51-2e5d47567ef3,network=Network(ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap612e6054-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.110 2 DEBUG os_vif [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:bc:3b,bridge_name='br-int',has_traffic_filtering=True,id=612e6054-5ce1-486c-aa51-2e5d47567ef3,network=Network(ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap612e6054-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.113 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap612e6054-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.119 2 INFO os_vif [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:bc:3b,bridge_name='br-int',has_traffic_filtering=True,id=612e6054-5ce1-486c-aa51-2e5d47567ef3,network=Network(ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap612e6054-5c')
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.130 2 DEBUG nova.virt.libvirt.driver [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Start _get_guest_xml network_info=[{"id": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "address": "fa:16:3e:3d:bc:3b", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612e6054-5c", "ovs_interfaceid": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [{'boot_index': None, 'guest_format': None, 'attachment_id': '123ee2b6-4c88-4103-8e74-213ec4835687', 'mount_device': '/dev/vdb', 'disk_bus': 'virtio', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-c4c3ee6e-d57f-4361-8cda-389d0f995821', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'c4c3ee6e-d57f-4361-8cda-389d0f995821', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '01108902-768b-4bee-baff-11d5854e2f77', 'attached_at': '', 'detached_at': '', 'volume_id': 'c4c3ee6e-d57f-4361-8cda-389d0f995821', 'serial': 'c4c3ee6e-d57f-4361-8cda-389d0f995821'}, 'device_type': 'disk', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.134 2 WARNING nova.virt.libvirt.driver [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.141 2 DEBUG nova.virt.libvirt.host [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.142 2 DEBUG nova.virt.libvirt.host [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.144 2 DEBUG nova.virt.libvirt.host [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.145 2 DEBUG nova.virt.libvirt.host [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.146 2 DEBUG nova.virt.libvirt.driver [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.146 2 DEBUG nova.virt.hardware [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.147 2 DEBUG nova.virt.hardware [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.147 2 DEBUG nova.virt.hardware [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.147 2 DEBUG nova.virt.hardware [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.147 2 DEBUG nova.virt.hardware [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.148 2 DEBUG nova.virt.hardware [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.148 2 DEBUG nova.virt.hardware [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.148 2 DEBUG nova.virt.hardware [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.148 2 DEBUG nova.virt.hardware [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.149 2 DEBUG nova.virt.hardware [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.149 2 DEBUG nova.virt.hardware [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.149 2 DEBUG nova.objects.instance [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 01108902-768b-4bee-baff-11d5854e2f77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.188 2 DEBUG oslo_concurrency.processutils [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:58:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:58:09 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3242688660' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.637 2 DEBUG oslo_concurrency.processutils [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.672 2 DEBUG oslo_concurrency.processutils [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:58:09 compute-0 ceph-mon[73607]: pgmap v2975: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.7 KiB/s rd, 34 KiB/s wr, 15 op/s
Oct 02 12:58:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3242688660' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:58:09 compute-0 nova_compute[257802]: 2025-10-02 12:58:09.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:58:10 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3389951155' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.140 2 DEBUG oslo_concurrency.processutils [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.162 2 DEBUG nova.virt.libvirt.vif [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:57:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-1568055795',display_name='tempest-AttachVolumeTestJSON-server-1568055795',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-1568055795',id=191,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCbS2mKT+0z/yQuxuzfwCwsXKWxC0UzeLnKuZT+sZHjADBetDcCrAmsjHh5DZcFc1laecrJqG3Gw7KVMmBAo01ad6Z646e7xI9MDC+TYltwo6ghxZsSIWSKPTUL61VunAQ==',key_name='tempest-keypair-1834836644',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:57:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='e7168b5b1300495d90592b195824729a',ramdisk_id='',reservation_id='r-ikr3sr2k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-398185718',owner_user_name='tempest-AttachVolumeTestJSON-398185718-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:58:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3299a1aed5af4843a91417a3f181c172',uuid=01108902-768b-4bee-baff-11d5854e2f77,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "address": "fa:16:3e:3d:bc:3b", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612e6054-5c", "ovs_interfaceid": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.162 2 DEBUG nova.network.os_vif_util [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Converting VIF {"id": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "address": "fa:16:3e:3d:bc:3b", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612e6054-5c", "ovs_interfaceid": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.163 2 DEBUG nova.network.os_vif_util [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:bc:3b,bridge_name='br-int',has_traffic_filtering=True,id=612e6054-5ce1-486c-aa51-2e5d47567ef3,network=Network(ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap612e6054-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.164 2 DEBUG nova.objects.instance [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'pci_devices' on Instance uuid 01108902-768b-4bee-baff-11d5854e2f77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.179 2 DEBUG nova.virt.libvirt.driver [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:58:10 compute-0 nova_compute[257802]:   <uuid>01108902-768b-4bee-baff-11d5854e2f77</uuid>
Oct 02 12:58:10 compute-0 nova_compute[257802]:   <name>instance-000000bf</name>
Oct 02 12:58:10 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:58:10 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:58:10 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <nova:name>tempest-AttachVolumeTestJSON-server-1568055795</nova:name>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:58:09</nova:creationTime>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:58:10 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:58:10 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:58:10 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:58:10 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:58:10 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:58:10 compute-0 nova_compute[257802]:         <nova:user uuid="3299a1aed5af4843a91417a3f181c172">tempest-AttachVolumeTestJSON-398185718-project-member</nova:user>
Oct 02 12:58:10 compute-0 nova_compute[257802]:         <nova:project uuid="e7168b5b1300495d90592b195824729a">tempest-AttachVolumeTestJSON-398185718</nova:project>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:58:10 compute-0 nova_compute[257802]:         <nova:port uuid="612e6054-5ce1-486c-aa51-2e5d47567ef3">
Oct 02 12:58:10 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:58:10 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:58:10 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <system>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <entry name="serial">01108902-768b-4bee-baff-11d5854e2f77</entry>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <entry name="uuid">01108902-768b-4bee-baff-11d5854e2f77</entry>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     </system>
Oct 02 12:58:10 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:58:10 compute-0 nova_compute[257802]:   <os>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:   </os>
Oct 02 12:58:10 compute-0 nova_compute[257802]:   <features>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:   </features>
Oct 02 12:58:10 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:58:10 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:58:10 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/01108902-768b-4bee-baff-11d5854e2f77_disk">
Oct 02 12:58:10 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       </source>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:58:10 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/01108902-768b-4bee-baff-11d5854e2f77_disk.config">
Oct 02 12:58:10 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       </source>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:58:10 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <source protocol="rbd" name="volumes/volume-c4c3ee6e-d57f-4361-8cda-389d0f995821">
Oct 02 12:58:10 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       </source>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:58:10 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <target dev="vdb" bus="virtio"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <serial>c4c3ee6e-d57f-4361-8cda-389d0f995821</serial>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:3d:bc:3b"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <target dev="tap612e6054-5c"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/01108902-768b-4bee-baff-11d5854e2f77/console.log" append="off"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <video>
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     </video>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <input type="keyboard" bus="usb"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:58:10 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:58:10 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:58:10 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:58:10 compute-0 nova_compute[257802]: </domain>
Oct 02 12:58:10 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.181 2 DEBUG nova.virt.libvirt.driver [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] skipping disk for instance-000000bf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.181 2 DEBUG nova.virt.libvirt.driver [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] skipping disk for instance-000000bf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.182 2 DEBUG nova.virt.libvirt.driver [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] skipping disk for instance-000000bf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:58:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.182 2 DEBUG nova.virt.libvirt.vif [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:57:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-1568055795',display_name='tempest-AttachVolumeTestJSON-server-1568055795',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-1568055795',id=191,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCbS2mKT+0z/yQuxuzfwCwsXKWxC0UzeLnKuZT+sZHjADBetDcCrAmsjHh5DZcFc1laecrJqG3Gw7KVMmBAo01ad6Z646e7xI9MDC+TYltwo6ghxZsSIWSKPTUL61VunAQ==',key_name='tempest-keypair-1834836644',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:57:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='e7168b5b1300495d90592b195824729a',ramdisk_id='',reservation_id='r-ikr3sr2k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-398185718',owner_user_name='tempest-AttachVolumeTestJSON-398185718-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:58:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3299a1aed5af4843a91417a3f181c172',uuid=01108902-768b-4bee-baff-11d5854e2f77,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "address": "fa:16:3e:3d:bc:3b", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612e6054-5c", "ovs_interfaceid": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.183 2 DEBUG nova.network.os_vif_util [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Converting VIF {"id": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "address": "fa:16:3e:3d:bc:3b", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612e6054-5c", "ovs_interfaceid": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.183 2 DEBUG nova.network.os_vif_util [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:bc:3b,bridge_name='br-int',has_traffic_filtering=True,id=612e6054-5ce1-486c-aa51-2e5d47567ef3,network=Network(ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap612e6054-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.184 2 DEBUG os_vif [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:bc:3b,bridge_name='br-int',has_traffic_filtering=True,id=612e6054-5ce1-486c-aa51-2e5d47567ef3,network=Network(ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap612e6054-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:58:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:58:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:10.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.185 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.185 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.188 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap612e6054-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.188 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap612e6054-5c, col_values=(('external_ids', {'iface-id': '612e6054-5ce1-486c-aa51-2e5d47567ef3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:bc:3b', 'vm-uuid': '01108902-768b-4bee-baff-11d5854e2f77'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:10 compute-0 NetworkManager[44987]: <info>  [1759409890.1913] manager: (tap612e6054-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/392)
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.196 2 INFO os_vif [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:bc:3b,bridge_name='br-int',has_traffic_filtering=True,id=612e6054-5ce1-486c-aa51-2e5d47567ef3,network=Network(ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap612e6054-5c')
Oct 02 12:58:10 compute-0 kernel: tap612e6054-5c: entered promiscuous mode
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:10 compute-0 ovn_controller[148183]: 2025-10-02T12:58:10Z|00884|binding|INFO|Claiming lport 612e6054-5ce1-486c-aa51-2e5d47567ef3 for this chassis.
Oct 02 12:58:10 compute-0 ovn_controller[148183]: 2025-10-02T12:58:10Z|00885|binding|INFO|612e6054-5ce1-486c-aa51-2e5d47567ef3: Claiming fa:16:3e:3d:bc:3b 10.100.0.9
Oct 02 12:58:10 compute-0 NetworkManager[44987]: <info>  [1759409890.2970] manager: (tap612e6054-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/393)
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:10 compute-0 NetworkManager[44987]: <info>  [1759409890.3079] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/394)
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:10 compute-0 NetworkManager[44987]: <info>  [1759409890.3090] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/395)
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:10.311 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:bc:3b 10.100.0.9'], port_security=['fa:16:3e:3d:bc:3b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '01108902-768b-4bee-baff-11d5854e2f77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7168b5b1300495d90592b195824729a', 'neutron:revision_number': '5', 'neutron:security_group_ids': '0a80d211-09c4-4e45-80f9-f1b2f1a7f90e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.186'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb99a427-f5c8-46c4-b56f-12cf288447a9, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=612e6054-5ce1-486c-aa51-2e5d47567ef3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:10.313 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 612e6054-5ce1-486c-aa51-2e5d47567ef3 in datapath ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f bound to our chassis
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:10.314 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:10.326 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8cc9c9e5-7752-4711-a978-cd070f50e969]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:10.326 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapff8c8423-f1 in ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:58:10 compute-0 systemd-udevd[382035]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:10.328 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapff8c8423-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:10.328 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[244db35d-d500-405f-9c08-369767937180]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:10.329 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5b4ad89a-a7ac-4073-88d2-663f1935dfda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:10 compute-0 systemd-machined[211836]: New machine qemu-95-instance-000000bf.
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:10.344 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[09e81757-a60e-44f5-bd86-3444ebcde3fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:10 compute-0 NetworkManager[44987]: <info>  [1759409890.3465] device (tap612e6054-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:58:10 compute-0 NetworkManager[44987]: <info>  [1759409890.3478] device (tap612e6054-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:58:10 compute-0 systemd[1]: Started Virtual Machine qemu-95-instance-000000bf.
Oct 02 12:58:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2976: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 35 KiB/s wr, 18 op/s
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:10.372 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[bba80bfc-3524-4687-b5e9-9afbcd1e9a3f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:10.404 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[27a0056f-5d06-468a-8fbe-32acccd2dd1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:10 compute-0 NetworkManager[44987]: <info>  [1759409890.4126] manager: (tapff8c8423-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/396)
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:10.411 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9f6b3859-a3b5-49fe-b538-4fc5780a74ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:10.449 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[7820dbb0-4f6d-4cca-8b30-6882f2993060]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:10.453 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[f6353584-6fb3-475e-bbe5-f6566f850ca4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:10 compute-0 NetworkManager[44987]: <info>  [1759409890.4852] device (tapff8c8423-f0): carrier: link connected
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:10.490 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[f0243931-4cd1-490f-9276-dde9761e27e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:10 compute-0 ovn_controller[148183]: 2025-10-02T12:58:10Z|00886|binding|INFO|Setting lport 612e6054-5ce1-486c-aa51-2e5d47567ef3 ovn-installed in OVS
Oct 02 12:58:10 compute-0 ovn_controller[148183]: 2025-10-02T12:58:10Z|00887|binding|INFO|Setting lport 612e6054-5ce1-486c-aa51-2e5d47567ef3 up in Southbound
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:10.507 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5d41f3de-360f-4b1d-95b0-807abc21c233]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapff8c8423-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:50:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 264], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795811, 'reachable_time': 23614, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382068, 'error': None, 'target': 'ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:10.525 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[464e9df1-c4d2-48ef-b2c6-a0c88b914ffb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:5000'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795811, 'tstamp': 795811}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382069, 'error': None, 'target': 'ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:10.542 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f6d3679b-1d7c-4221-b248-0abccd92b4f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapff8c8423-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:50:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 264], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795811, 'reachable_time': 23614, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 382070, 'error': None, 'target': 'ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:10.579 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f243f004-895d-4bde-8acf-685f386fdb2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:10.638 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8c99b50a-bc8e-4eec-b007-759b64f9aa2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:10.640 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff8c8423-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:10.640 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:10.641 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff8c8423-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:10 compute-0 kernel: tapff8c8423-f0: entered promiscuous mode
Oct 02 12:58:10 compute-0 NetworkManager[44987]: <info>  [1759409890.6441] manager: (tapff8c8423-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/397)
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:10.645 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapff8c8423-f0, col_values=(('external_ids', {'iface-id': 'd2e0b09e-a5f5-4832-b480-4d90b14ae948'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:10 compute-0 ovn_controller[148183]: 2025-10-02T12:58:10Z|00888|binding|INFO|Releasing lport d2e0b09e-a5f5-4832-b480-4d90b14ae948 from this chassis (sb_readonly=0)
Oct 02 12:58:10 compute-0 nova_compute[257802]: 2025-10-02 12:58:10.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:10.663 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:10.664 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2b77c2cb-79e1-4224-8fe3-4aec635ff9f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:10.665 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f.pid.haproxy
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:10.666 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'env', 'PROCESS_TAG=haproxy-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:58:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3389951155' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:58:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:10.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:11 compute-0 podman[382103]: 2025-10-02 12:58:11.020953979 +0000 UTC m=+0.052679784 container create 1ab8cefc9342bd9b7ca7fe75d57ac065c35154b82b7df5337a7945f277ddf4d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:58:11 compute-0 systemd[1]: Started libpod-conmon-1ab8cefc9342bd9b7ca7fe75d57ac065c35154b82b7df5337a7945f277ddf4d9.scope.
Oct 02 12:58:11 compute-0 podman[382103]: 2025-10-02 12:58:10.994964071 +0000 UTC m=+0.026689896 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:58:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ecfcc2371553fa7f47ff760a218c6f1467f691ec05cca90286af9f7252afaf4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:58:11 compute-0 podman[382103]: 2025-10-02 12:58:11.114097368 +0000 UTC m=+0.145823193 container init 1ab8cefc9342bd9b7ca7fe75d57ac065c35154b82b7df5337a7945f277ddf4d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 12:58:11 compute-0 podman[382103]: 2025-10-02 12:58:11.122645808 +0000 UTC m=+0.154371603 container start 1ab8cefc9342bd9b7ca7fe75d57ac065c35154b82b7df5337a7945f277ddf4d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:58:11 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[382119]: [NOTICE]   (382141) : New worker (382159) forked
Oct 02 12:58:11 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[382119]: [NOTICE]   (382141) : Loading success.
Oct 02 12:58:11 compute-0 nova_compute[257802]: 2025-10-02 12:58:11.712 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Removed pending event for 01108902-768b-4bee-baff-11d5854e2f77 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:58:11 compute-0 nova_compute[257802]: 2025-10-02 12:58:11.713 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409891.7120907, 01108902-768b-4bee-baff-11d5854e2f77 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:58:11 compute-0 nova_compute[257802]: 2025-10-02 12:58:11.713 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 01108902-768b-4bee-baff-11d5854e2f77] VM Resumed (Lifecycle Event)
Oct 02 12:58:11 compute-0 nova_compute[257802]: 2025-10-02 12:58:11.715 2 DEBUG nova.compute.manager [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:58:11 compute-0 nova_compute[257802]: 2025-10-02 12:58:11.718 2 INFO nova.virt.libvirt.driver [-] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Instance rebooted successfully.
Oct 02 12:58:11 compute-0 nova_compute[257802]: 2025-10-02 12:58:11.718 2 DEBUG nova.compute.manager [None req-aa73b24a-091c-4376-85d6-c2638272fde9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:58:11 compute-0 nova_compute[257802]: 2025-10-02 12:58:11.747 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:58:11 compute-0 nova_compute[257802]: 2025-10-02 12:58:11.751 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:58:11 compute-0 ceph-mon[73607]: pgmap v2976: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 35 KiB/s wr, 18 op/s
Oct 02 12:58:11 compute-0 ovn_controller[148183]: 2025-10-02T12:58:11Z|00889|binding|INFO|Releasing lport d2e0b09e-a5f5-4832-b480-4d90b14ae948 from this chassis (sb_readonly=0)
Oct 02 12:58:11 compute-0 nova_compute[257802]: 2025-10-02 12:58:11.790 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 01108902-768b-4bee-baff-11d5854e2f77] During sync_power_state the instance has a pending task (powering-on). Skip.
Oct 02 12:58:11 compute-0 nova_compute[257802]: 2025-10-02 12:58:11.791 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409891.7146087, 01108902-768b-4bee-baff-11d5854e2f77 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:58:11 compute-0 nova_compute[257802]: 2025-10-02 12:58:11.791 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 01108902-768b-4bee-baff-11d5854e2f77] VM Started (Lifecycle Event)
Oct 02 12:58:11 compute-0 nova_compute[257802]: 2025-10-02 12:58:11.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:11 compute-0 nova_compute[257802]: 2025-10-02 12:58:11.849 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:58:11 compute-0 nova_compute[257802]: 2025-10-02 12:58:11.852 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:58:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:12.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2977: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.2 KiB/s rd, 20 KiB/s wr, 8 op/s
Oct 02 12:58:12 compute-0 nova_compute[257802]: 2025-10-02 12:58:12.624 2 DEBUG nova.compute.manager [req-6fb5eb03-ab3e-40be-9a0b-e9f45d1baf16 req-4fb52d13-73ff-4c84-8429-4ec548a7a4f0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Received event network-vif-plugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:58:12 compute-0 nova_compute[257802]: 2025-10-02 12:58:12.624 2 DEBUG oslo_concurrency.lockutils [req-6fb5eb03-ab3e-40be-9a0b-e9f45d1baf16 req-4fb52d13-73ff-4c84-8429-4ec548a7a4f0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "01108902-768b-4bee-baff-11d5854e2f77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:12 compute-0 nova_compute[257802]: 2025-10-02 12:58:12.625 2 DEBUG oslo_concurrency.lockutils [req-6fb5eb03-ab3e-40be-9a0b-e9f45d1baf16 req-4fb52d13-73ff-4c84-8429-4ec548a7a4f0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:12 compute-0 nova_compute[257802]: 2025-10-02 12:58:12.625 2 DEBUG oslo_concurrency.lockutils [req-6fb5eb03-ab3e-40be-9a0b-e9f45d1baf16 req-4fb52d13-73ff-4c84-8429-4ec548a7a4f0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:12 compute-0 nova_compute[257802]: 2025-10-02 12:58:12.625 2 DEBUG nova.compute.manager [req-6fb5eb03-ab3e-40be-9a0b-e9f45d1baf16 req-4fb52d13-73ff-4c84-8429-4ec548a7a4f0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] No waiting events found dispatching network-vif-plugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:58:12 compute-0 nova_compute[257802]: 2025-10-02 12:58:12.625 2 WARNING nova.compute.manager [req-6fb5eb03-ab3e-40be-9a0b-e9f45d1baf16 req-4fb52d13-73ff-4c84-8429-4ec548a7a4f0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Received unexpected event network-vif-plugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 for instance with vm_state active and task_state None.
Oct 02 12:58:12 compute-0 nova_compute[257802]: 2025-10-02 12:58:12.625 2 DEBUG nova.compute.manager [req-6fb5eb03-ab3e-40be-9a0b-e9f45d1baf16 req-4fb52d13-73ff-4c84-8429-4ec548a7a4f0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Received event network-vif-plugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:58:12 compute-0 nova_compute[257802]: 2025-10-02 12:58:12.626 2 DEBUG oslo_concurrency.lockutils [req-6fb5eb03-ab3e-40be-9a0b-e9f45d1baf16 req-4fb52d13-73ff-4c84-8429-4ec548a7a4f0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "01108902-768b-4bee-baff-11d5854e2f77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:12 compute-0 nova_compute[257802]: 2025-10-02 12:58:12.626 2 DEBUG oslo_concurrency.lockutils [req-6fb5eb03-ab3e-40be-9a0b-e9f45d1baf16 req-4fb52d13-73ff-4c84-8429-4ec548a7a4f0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:12 compute-0 nova_compute[257802]: 2025-10-02 12:58:12.626 2 DEBUG oslo_concurrency.lockutils [req-6fb5eb03-ab3e-40be-9a0b-e9f45d1baf16 req-4fb52d13-73ff-4c84-8429-4ec548a7a4f0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:12 compute-0 nova_compute[257802]: 2025-10-02 12:58:12.626 2 DEBUG nova.compute.manager [req-6fb5eb03-ab3e-40be-9a0b-e9f45d1baf16 req-4fb52d13-73ff-4c84-8429-4ec548a7a4f0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] No waiting events found dispatching network-vif-plugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:58:12 compute-0 nova_compute[257802]: 2025-10-02 12:58:12.626 2 WARNING nova.compute.manager [req-6fb5eb03-ab3e-40be-9a0b-e9f45d1baf16 req-4fb52d13-73ff-4c84-8429-4ec548a7a4f0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Received unexpected event network-vif-plugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 for instance with vm_state active and task_state None.
Oct 02 12:58:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:58:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:58:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:58:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:58:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:58:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:58:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2614947171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2101575353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:12.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:13 compute-0 ceph-mon[73607]: pgmap v2977: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.2 KiB/s rd, 20 KiB/s wr, 8 op/s
Oct 02 12:58:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3201776990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:14.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2978: 305 pgs: 305 active+clean; 332 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 534 KiB/s rd, 20 KiB/s wr, 31 op/s
Oct 02 12:58:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:14 compute-0 nova_compute[257802]: 2025-10-02 12:58:14.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:14 compute-0 ceph-mon[73607]: pgmap v2978: 305 pgs: 305 active+clean; 332 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 534 KiB/s rd, 20 KiB/s wr, 31 op/s
Oct 02 12:58:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:14.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:15 compute-0 nova_compute[257802]: 2025-10-02 12:58:15.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:16.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2979: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 102 op/s
Oct 02 12:58:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:16.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:17 compute-0 sudo[382198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:58:17 compute-0 sudo[382198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:17 compute-0 sudo[382198]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:17 compute-0 sudo[382223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:58:17 compute-0 sudo[382223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:17 compute-0 sudo[382223]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:17 compute-0 sudo[382248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:58:17 compute-0 sudo[382248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:17 compute-0 sudo[382248]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:17 compute-0 ceph-mon[73607]: pgmap v2979: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 102 op/s
Oct 02 12:58:17 compute-0 sudo[382273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:58:17 compute-0 sudo[382273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:58:17 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:58:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:58:17 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:58:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Oct 02 12:58:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 12:58:18 compute-0 nova_compute[257802]: 2025-10-02 12:58:18.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:58:18 compute-0 nova_compute[257802]: 2025-10-02 12:58:18.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:58:18 compute-0 sudo[382273]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 12:58:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 12:58:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:18.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2980: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.2 KiB/s wr, 101 op/s
Oct 02 12:58:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Oct 02 12:58:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 12:58:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:58:18 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:58:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:58:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:58:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:58:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:58:18 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev e0479969-d764-4693-a371-e7494fc9c3b4 does not exist
Oct 02 12:58:18 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev eca8872b-f7f8-4ab7-9b64-05b5de997e84 does not exist
Oct 02 12:58:18 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 37600792-0d19-46ce-95fa-4392a2adafb8 does not exist
Oct 02 12:58:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:58:18 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:58:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:58:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:58:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:58:18 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:58:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:58:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:18.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:58:18 compute-0 sudo[382329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:58:18 compute-0 sudo[382329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:18 compute-0 sudo[382329]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:19 compute-0 sudo[382354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:58:19 compute-0 sudo[382354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:19 compute-0 sudo[382354]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:58:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:58:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 12:58:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 12:58:19 compute-0 ceph-mon[73607]: pgmap v2980: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.2 KiB/s wr, 101 op/s
Oct 02 12:58:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 12:58:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:58:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:58:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:58:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:58:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:58:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:58:19 compute-0 sudo[382379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:58:19 compute-0 sudo[382379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:19 compute-0 sudo[382379]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:19 compute-0 sudo[382404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:58:19 compute-0 sudo[382404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:19 compute-0 sudo[382429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:58:19 compute-0 sudo[382429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:19 compute-0 sudo[382429]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:19 compute-0 sudo[382454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:58:19 compute-0 sudo[382454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:19 compute-0 sudo[382454]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:19 compute-0 podman[382520]: 2025-10-02 12:58:19.573390859 +0000 UTC m=+0.026795299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:58:19 compute-0 podman[382520]: 2025-10-02 12:58:19.768186794 +0000 UTC m=+0.221591244 container create a85b7912f8012641be2c85d42ee18f7bc4dfb04c703038f574b79fca5e287e80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ramanujan, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 12:58:19 compute-0 nova_compute[257802]: 2025-10-02 12:58:19.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:19 compute-0 systemd[1]: Started libpod-conmon-a85b7912f8012641be2c85d42ee18f7bc4dfb04c703038f574b79fca5e287e80.scope.
Oct 02 12:58:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:58:19 compute-0 podman[382520]: 2025-10-02 12:58:19.96629651 +0000 UTC m=+0.419700940 container init a85b7912f8012641be2c85d42ee18f7bc4dfb04c703038f574b79fca5e287e80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:58:19 compute-0 podman[382520]: 2025-10-02 12:58:19.976936252 +0000 UTC m=+0.430340672 container start a85b7912f8012641be2c85d42ee18f7bc4dfb04c703038f574b79fca5e287e80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ramanujan, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 12:58:19 compute-0 brave_ramanujan[382536]: 167 167
Oct 02 12:58:19 compute-0 systemd[1]: libpod-a85b7912f8012641be2c85d42ee18f7bc4dfb04c703038f574b79fca5e287e80.scope: Deactivated successfully.
Oct 02 12:58:20 compute-0 podman[382520]: 2025-10-02 12:58:20.011671476 +0000 UTC m=+0.465075926 container attach a85b7912f8012641be2c85d42ee18f7bc4dfb04c703038f574b79fca5e287e80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:58:20 compute-0 podman[382520]: 2025-10-02 12:58:20.013613363 +0000 UTC m=+0.467017783 container died a85b7912f8012641be2c85d42ee18f7bc4dfb04c703038f574b79fca5e287e80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 12:58:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-53ccdab00be6fc38faca8b623572bc317172e675797c6b5e97120083c4ee3de6-merged.mount: Deactivated successfully.
Oct 02 12:58:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:20.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:20 compute-0 nova_compute[257802]: 2025-10-02 12:58:20.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:20 compute-0 podman[382520]: 2025-10-02 12:58:20.253486625 +0000 UTC m=+0.706891045 container remove a85b7912f8012641be2c85d42ee18f7bc4dfb04c703038f574b79fca5e287e80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 12:58:20 compute-0 systemd[1]: libpod-conmon-a85b7912f8012641be2c85d42ee18f7bc4dfb04c703038f574b79fca5e287e80.scope: Deactivated successfully.
Oct 02 12:58:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2981: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.2 KiB/s wr, 101 op/s
Oct 02 12:58:20 compute-0 podman[382560]: 2025-10-02 12:58:20.428952736 +0000 UTC m=+0.027205010 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:58:20 compute-0 podman[382560]: 2025-10-02 12:58:20.584508497 +0000 UTC m=+0.182760741 container create c27cf099ce09daa6fdf890b623d16133d7ca950b5ca7fa0b73ec4945592c8dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:58:20 compute-0 systemd[1]: Started libpod-conmon-c27cf099ce09daa6fdf890b623d16133d7ca950b5ca7fa0b73ec4945592c8dbd.scope.
Oct 02 12:58:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:58:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba0731f42bc378015a360b29ee1707992c56763d1cd3d1cf6dcdf7ee32997aed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:58:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba0731f42bc378015a360b29ee1707992c56763d1cd3d1cf6dcdf7ee32997aed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:58:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba0731f42bc378015a360b29ee1707992c56763d1cd3d1cf6dcdf7ee32997aed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:58:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba0731f42bc378015a360b29ee1707992c56763d1cd3d1cf6dcdf7ee32997aed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:58:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba0731f42bc378015a360b29ee1707992c56763d1cd3d1cf6dcdf7ee32997aed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:58:20 compute-0 podman[382560]: 2025-10-02 12:58:20.903291308 +0000 UTC m=+0.501543562 container init c27cf099ce09daa6fdf890b623d16133d7ca950b5ca7fa0b73ec4945592c8dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_galois, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:58:20 compute-0 podman[382560]: 2025-10-02 12:58:20.915259982 +0000 UTC m=+0.513512236 container start c27cf099ce09daa6fdf890b623d16133d7ca950b5ca7fa0b73ec4945592c8dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_galois, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 12:58:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:20.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:20 compute-0 podman[382560]: 2025-10-02 12:58:20.992289424 +0000 UTC m=+0.590541678 container attach c27cf099ce09daa6fdf890b623d16133d7ca950b5ca7fa0b73ec4945592c8dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_galois, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Oct 02 12:58:21 compute-0 ceph-mon[73607]: pgmap v2981: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.2 KiB/s wr, 101 op/s
Oct 02 12:58:21 compute-0 elegant_galois[382577]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:58:21 compute-0 elegant_galois[382577]: --> relative data size: 1.0
Oct 02 12:58:21 compute-0 elegant_galois[382577]: --> All data devices are unavailable
Oct 02 12:58:21 compute-0 systemd[1]: libpod-c27cf099ce09daa6fdf890b623d16133d7ca950b5ca7fa0b73ec4945592c8dbd.scope: Deactivated successfully.
Oct 02 12:58:21 compute-0 systemd[1]: libpod-c27cf099ce09daa6fdf890b623d16133d7ca950b5ca7fa0b73ec4945592c8dbd.scope: Consumed 1.043s CPU time.
Oct 02 12:58:21 compute-0 podman[382560]: 2025-10-02 12:58:21.972688127 +0000 UTC m=+1.570940371 container died c27cf099ce09daa6fdf890b623d16133d7ca950b5ca7fa0b73ec4945592c8dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 12:58:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba0731f42bc378015a360b29ee1707992c56763d1cd3d1cf6dcdf7ee32997aed-merged.mount: Deactivated successfully.
Oct 02 12:58:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:58:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:22.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:58:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2982: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.2 KiB/s wr, 98 op/s
Oct 02 12:58:22 compute-0 podman[382560]: 2025-10-02 12:58:22.380202248 +0000 UTC m=+1.978454492 container remove c27cf099ce09daa6fdf890b623d16133d7ca950b5ca7fa0b73ec4945592c8dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 12:58:22 compute-0 systemd[1]: libpod-conmon-c27cf099ce09daa6fdf890b623d16133d7ca950b5ca7fa0b73ec4945592c8dbd.scope: Deactivated successfully.
Oct 02 12:58:22 compute-0 sudo[382404]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:22 compute-0 sudo[382606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:58:22 compute-0 sudo[382606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:22 compute-0 sudo[382606]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:22 compute-0 sudo[382631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:58:22 compute-0 sudo[382631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:22 compute-0 sudo[382631]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:22 compute-0 sudo[382656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:58:22 compute-0 sudo[382656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:22 compute-0 sudo[382656]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:22 compute-0 sudo[382682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:58:22 compute-0 sudo[382682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:58:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:22.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:58:23 compute-0 nova_compute[257802]: 2025-10-02 12:58:23.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:58:23 compute-0 nova_compute[257802]: 2025-10-02 12:58:23.101 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:58:23 compute-0 podman[382747]: 2025-10-02 12:58:23.119675843 +0000 UTC m=+0.025887217 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:58:23 compute-0 podman[382747]: 2025-10-02 12:58:23.216875731 +0000 UTC m=+0.123087125 container create 475916a6fd13db7f05c851d8acd54fdba1b73d55f973b7507dafaa9d74562431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_austin, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:58:23 compute-0 systemd[1]: Started libpod-conmon-475916a6fd13db7f05c851d8acd54fdba1b73d55f973b7507dafaa9d74562431.scope.
Oct 02 12:58:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:58:23 compute-0 podman[382747]: 2025-10-02 12:58:23.57833109 +0000 UTC m=+0.484542484 container init 475916a6fd13db7f05c851d8acd54fdba1b73d55f973b7507dafaa9d74562431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 12:58:23 compute-0 podman[382747]: 2025-10-02 12:58:23.591208196 +0000 UTC m=+0.497419550 container start 475916a6fd13db7f05c851d8acd54fdba1b73d55f973b7507dafaa9d74562431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_austin, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:58:23 compute-0 naughty_austin[382763]: 167 167
Oct 02 12:58:23 compute-0 systemd[1]: libpod-475916a6fd13db7f05c851d8acd54fdba1b73d55f973b7507dafaa9d74562431.scope: Deactivated successfully.
Oct 02 12:58:23 compute-0 podman[382747]: 2025-10-02 12:58:23.715911209 +0000 UTC m=+0.622122583 container attach 475916a6fd13db7f05c851d8acd54fdba1b73d55f973b7507dafaa9d74562431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_austin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 12:58:23 compute-0 podman[382747]: 2025-10-02 12:58:23.717282813 +0000 UTC m=+0.623494177 container died 475916a6fd13db7f05c851d8acd54fdba1b73d55f973b7507dafaa9d74562431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_austin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 12:58:24 compute-0 ceph-mon[73607]: pgmap v2982: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.2 KiB/s wr, 98 op/s
Oct 02 12:58:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:58:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:24.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:58:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-61e318d3dfe1b1bf0540cd252cedbb4e58e52aea21f2282025594458b80b1b29-merged.mount: Deactivated successfully.
Oct 02 12:58:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2983: 305 pgs: 305 active+clean; 265 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.5 KiB/s wr, 104 op/s
Oct 02 12:58:24 compute-0 podman[382747]: 2025-10-02 12:58:24.534785805 +0000 UTC m=+1.440997159 container remove 475916a6fd13db7f05c851d8acd54fdba1b73d55f973b7507dafaa9d74562431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_austin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 12:58:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:24 compute-0 systemd[1]: libpod-conmon-475916a6fd13db7f05c851d8acd54fdba1b73d55f973b7507dafaa9d74562431.scope: Deactivated successfully.
Oct 02 12:58:24 compute-0 podman[382789]: 2025-10-02 12:58:24.756569024 +0000 UTC m=+0.060801085 container create 67f0c54e32604ef661df425fbb3eb58adb971d6cd731ee2fc847d70a7ed5c7a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_kirch, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 12:58:24 compute-0 systemd[1]: Started libpod-conmon-67f0c54e32604ef661df425fbb3eb58adb971d6cd731ee2fc847d70a7ed5c7a2.scope.
Oct 02 12:58:24 compute-0 podman[382789]: 2025-10-02 12:58:24.725565131 +0000 UTC m=+0.029797202 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:58:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:58:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0fc3e6e62f519cd698aa336e429d611751c4cc79d0eec17258a8e5282211aba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:58:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0fc3e6e62f519cd698aa336e429d611751c4cc79d0eec17258a8e5282211aba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:58:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0fc3e6e62f519cd698aa336e429d611751c4cc79d0eec17258a8e5282211aba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:58:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0fc3e6e62f519cd698aa336e429d611751c4cc79d0eec17258a8e5282211aba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:58:24 compute-0 podman[382789]: 2025-10-02 12:58:24.84518605 +0000 UTC m=+0.149418081 container init 67f0c54e32604ef661df425fbb3eb58adb971d6cd731ee2fc847d70a7ed5c7a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_kirch, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 12:58:24 compute-0 nova_compute[257802]: 2025-10-02 12:58:24.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:24 compute-0 podman[382789]: 2025-10-02 12:58:24.855121124 +0000 UTC m=+0.159353145 container start 67f0c54e32604ef661df425fbb3eb58adb971d6cd731ee2fc847d70a7ed5c7a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_kirch, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:58:24 compute-0 podman[382789]: 2025-10-02 12:58:24.859216975 +0000 UTC m=+0.163449016 container attach 67f0c54e32604ef661df425fbb3eb58adb971d6cd731ee2fc847d70a7ed5c7a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_kirch, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:58:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:58:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:24.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:58:25 compute-0 ceph-mon[73607]: pgmap v2983: 305 pgs: 305 active+clean; 265 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.5 KiB/s wr, 104 op/s
Oct 02 12:58:25 compute-0 nova_compute[257802]: 2025-10-02 12:58:25.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:25 compute-0 nova_compute[257802]: 2025-10-02 12:58:25.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:25 compute-0 confident_kirch[382805]: {
Oct 02 12:58:25 compute-0 confident_kirch[382805]:     "1": [
Oct 02 12:58:25 compute-0 confident_kirch[382805]:         {
Oct 02 12:58:25 compute-0 confident_kirch[382805]:             "devices": [
Oct 02 12:58:25 compute-0 confident_kirch[382805]:                 "/dev/loop3"
Oct 02 12:58:25 compute-0 confident_kirch[382805]:             ],
Oct 02 12:58:25 compute-0 confident_kirch[382805]:             "lv_name": "ceph_lv0",
Oct 02 12:58:25 compute-0 confident_kirch[382805]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:58:25 compute-0 confident_kirch[382805]:             "lv_size": "7511998464",
Oct 02 12:58:25 compute-0 confident_kirch[382805]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:58:25 compute-0 confident_kirch[382805]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:58:25 compute-0 confident_kirch[382805]:             "name": "ceph_lv0",
Oct 02 12:58:25 compute-0 confident_kirch[382805]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:58:25 compute-0 confident_kirch[382805]:             "tags": {
Oct 02 12:58:25 compute-0 confident_kirch[382805]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:58:25 compute-0 confident_kirch[382805]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:58:25 compute-0 confident_kirch[382805]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:58:25 compute-0 confident_kirch[382805]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:58:25 compute-0 confident_kirch[382805]:                 "ceph.cluster_name": "ceph",
Oct 02 12:58:25 compute-0 confident_kirch[382805]:                 "ceph.crush_device_class": "",
Oct 02 12:58:25 compute-0 confident_kirch[382805]:                 "ceph.encrypted": "0",
Oct 02 12:58:25 compute-0 confident_kirch[382805]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:58:25 compute-0 confident_kirch[382805]:                 "ceph.osd_id": "1",
Oct 02 12:58:25 compute-0 confident_kirch[382805]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:58:25 compute-0 confident_kirch[382805]:                 "ceph.type": "block",
Oct 02 12:58:25 compute-0 confident_kirch[382805]:                 "ceph.vdo": "0"
Oct 02 12:58:25 compute-0 confident_kirch[382805]:             },
Oct 02 12:58:25 compute-0 confident_kirch[382805]:             "type": "block",
Oct 02 12:58:25 compute-0 confident_kirch[382805]:             "vg_name": "ceph_vg0"
Oct 02 12:58:25 compute-0 confident_kirch[382805]:         }
Oct 02 12:58:25 compute-0 confident_kirch[382805]:     ]
Oct 02 12:58:25 compute-0 confident_kirch[382805]: }
Oct 02 12:58:25 compute-0 systemd[1]: libpod-67f0c54e32604ef661df425fbb3eb58adb971d6cd731ee2fc847d70a7ed5c7a2.scope: Deactivated successfully.
Oct 02 12:58:25 compute-0 podman[382789]: 2025-10-02 12:58:25.73098552 +0000 UTC m=+1.035217541 container died 67f0c54e32604ef661df425fbb3eb58adb971d6cd731ee2fc847d70a7ed5c7a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_kirch, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:58:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0fc3e6e62f519cd698aa336e429d611751c4cc79d0eec17258a8e5282211aba-merged.mount: Deactivated successfully.
Oct 02 12:58:25 compute-0 podman[382789]: 2025-10-02 12:58:25.788282277 +0000 UTC m=+1.092514298 container remove 67f0c54e32604ef661df425fbb3eb58adb971d6cd731ee2fc847d70a7ed5c7a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_kirch, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:58:25 compute-0 systemd[1]: libpod-conmon-67f0c54e32604ef661df425fbb3eb58adb971d6cd731ee2fc847d70a7ed5c7a2.scope: Deactivated successfully.
Oct 02 12:58:25 compute-0 sudo[382682]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:25 compute-0 sudo[382826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:58:25 compute-0 sudo[382826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:25 compute-0 sudo[382826]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:25 compute-0 sudo[382851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:58:25 compute-0 sudo[382851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:26 compute-0 sudo[382851]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:26 compute-0 sudo[382894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:58:26 compute-0 sudo[382894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:26 compute-0 sudo[382894]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:26 compute-0 podman[382876]: 2025-10-02 12:58:26.071521565 +0000 UTC m=+0.072771698 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 12:58:26 compute-0 podman[382877]: 2025-10-02 12:58:26.074228282 +0000 UTC m=+0.067749306 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:58:26 compute-0 podman[382875]: 2025-10-02 12:58:26.093523545 +0000 UTC m=+0.095384564 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 12:58:26 compute-0 nova_compute[257802]: 2025-10-02 12:58:26.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:58:26 compute-0 sudo[382957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:58:26 compute-0 sudo[382957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:58:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:26.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:58:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2984: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.3 KiB/s wr, 122 op/s
Oct 02 12:58:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3640934741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:26 compute-0 podman[383023]: 2025-10-02 12:58:26.480912421 +0000 UTC m=+0.054919109 container create ed475108bb3b32988fa5592476de4070621dfe9aad80ad9a1202508f87ea7e5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_colden, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:58:26 compute-0 systemd[1]: Started libpod-conmon-ed475108bb3b32988fa5592476de4070621dfe9aad80ad9a1202508f87ea7e5d.scope.
Oct 02 12:58:26 compute-0 podman[383023]: 2025-10-02 12:58:26.453281142 +0000 UTC m=+0.027287930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:58:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:58:26 compute-0 ovn_controller[148183]: 2025-10-02T12:58:26Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:bc:3b 10.100.0.9
Oct 02 12:58:26 compute-0 podman[383023]: 2025-10-02 12:58:26.573866465 +0000 UTC m=+0.147873183 container init ed475108bb3b32988fa5592476de4070621dfe9aad80ad9a1202508f87ea7e5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_colden, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:58:26 compute-0 podman[383023]: 2025-10-02 12:58:26.582145808 +0000 UTC m=+0.156152496 container start ed475108bb3b32988fa5592476de4070621dfe9aad80ad9a1202508f87ea7e5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 12:58:26 compute-0 podman[383023]: 2025-10-02 12:58:26.586731711 +0000 UTC m=+0.160738419 container attach ed475108bb3b32988fa5592476de4070621dfe9aad80ad9a1202508f87ea7e5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:58:26 compute-0 hardcore_colden[383040]: 167 167
Oct 02 12:58:26 compute-0 systemd[1]: libpod-ed475108bb3b32988fa5592476de4070621dfe9aad80ad9a1202508f87ea7e5d.scope: Deactivated successfully.
Oct 02 12:58:26 compute-0 podman[383023]: 2025-10-02 12:58:26.589719684 +0000 UTC m=+0.163726382 container died ed475108bb3b32988fa5592476de4070621dfe9aad80ad9a1202508f87ea7e5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_colden, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:58:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-055fd5cc89c50a7c157ec167ebb1ae4250dfbfed1e5222e7451417c7ac85a18d-merged.mount: Deactivated successfully.
Oct 02 12:58:26 compute-0 podman[383023]: 2025-10-02 12:58:26.636088393 +0000 UTC m=+0.210095081 container remove ed475108bb3b32988fa5592476de4070621dfe9aad80ad9a1202508f87ea7e5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_colden, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 12:58:26 compute-0 systemd[1]: libpod-conmon-ed475108bb3b32988fa5592476de4070621dfe9aad80ad9a1202508f87ea7e5d.scope: Deactivated successfully.
Oct 02 12:58:26 compute-0 podman[383065]: 2025-10-02 12:58:26.806658604 +0000 UTC m=+0.047313934 container create b3c32040a3edf09822b8253b8769573a35f6eeb87caf9b4992345f4a1463add4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 12:58:26 compute-0 systemd[1]: Started libpod-conmon-b3c32040a3edf09822b8253b8769573a35f6eeb87caf9b4992345f4a1463add4.scope.
Oct 02 12:58:26 compute-0 podman[383065]: 2025-10-02 12:58:26.785461883 +0000 UTC m=+0.026117213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:58:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:58:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccec380a329830645b38059fa1c7a463fe4a1b2dd07a41d0205afdd5da706a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:58:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccec380a329830645b38059fa1c7a463fe4a1b2dd07a41d0205afdd5da706a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:58:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccec380a329830645b38059fa1c7a463fe4a1b2dd07a41d0205afdd5da706a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:58:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccec380a329830645b38059fa1c7a463fe4a1b2dd07a41d0205afdd5da706a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:58:26 compute-0 podman[383065]: 2025-10-02 12:58:26.919091795 +0000 UTC m=+0.159747115 container init b3c32040a3edf09822b8253b8769573a35f6eeb87caf9b4992345f4a1463add4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:58:26 compute-0 podman[383065]: 2025-10-02 12:58:26.927065741 +0000 UTC m=+0.167721041 container start b3c32040a3edf09822b8253b8769573a35f6eeb87caf9b4992345f4a1463add4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noether, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 12:58:26 compute-0 podman[383065]: 2025-10-02 12:58:26.930610388 +0000 UTC m=+0.171265758 container attach b3c32040a3edf09822b8253b8769573a35f6eeb87caf9b4992345f4a1463add4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noether, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct 02 12:58:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:26.981 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:26.983 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:26.983 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:26.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:27 compute-0 nova_compute[257802]: 2025-10-02 12:58:27.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:58:27 compute-0 ceph-mon[73607]: pgmap v2984: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.3 KiB/s wr, 122 op/s
Oct 02 12:58:27 compute-0 modest_noether[383081]: {
Oct 02 12:58:27 compute-0 modest_noether[383081]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:58:27 compute-0 modest_noether[383081]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:58:27 compute-0 modest_noether[383081]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:58:27 compute-0 modest_noether[383081]:         "osd_id": 1,
Oct 02 12:58:27 compute-0 modest_noether[383081]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:58:27 compute-0 modest_noether[383081]:         "type": "bluestore"
Oct 02 12:58:27 compute-0 modest_noether[383081]:     }
Oct 02 12:58:27 compute-0 modest_noether[383081]: }
Oct 02 12:58:27 compute-0 systemd[1]: libpod-b3c32040a3edf09822b8253b8769573a35f6eeb87caf9b4992345f4a1463add4.scope: Deactivated successfully.
Oct 02 12:58:27 compute-0 podman[383065]: 2025-10-02 12:58:27.841352811 +0000 UTC m=+1.082008111 container died b3c32040a3edf09822b8253b8769573a35f6eeb87caf9b4992345f4a1463add4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noether, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:58:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ccec380a329830645b38059fa1c7a463fe4a1b2dd07a41d0205afdd5da706a3-merged.mount: Deactivated successfully.
Oct 02 12:58:27 compute-0 podman[383065]: 2025-10-02 12:58:27.957870173 +0000 UTC m=+1.198525473 container remove b3c32040a3edf09822b8253b8769573a35f6eeb87caf9b4992345f4a1463add4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noether, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:58:27 compute-0 systemd[1]: libpod-conmon-b3c32040a3edf09822b8253b8769573a35f6eeb87caf9b4992345f4a1463add4.scope: Deactivated successfully.
Oct 02 12:58:27 compute-0 sudo[382957]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:58:28 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:58:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:58:28 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:58:28 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev f916d522-3021-4a39-8b6b-94399593c6aa does not exist
Oct 02 12:58:28 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 4750dad0-2969-44d5-a1b1-4edc7bf39197 does not exist
Oct 02 12:58:28 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 04ab8d23-14a9-4be4-956e-2614ac84ade4 does not exist
Oct 02 12:58:28 compute-0 sudo[383117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:58:28 compute-0 sudo[383117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:28 compute-0 sudo[383117]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:28 compute-0 sudo[383142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:58:28 compute-0 sudo[383142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:28 compute-0 sudo[383142]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:28.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2985: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 201 KiB/s rd, 2.5 KiB/s wr, 46 op/s
Oct 02 12:58:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:28.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:29 compute-0 nova_compute[257802]: 2025-10-02 12:58:29.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:58:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:58:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:58:29 compute-0 ceph-mon[73607]: pgmap v2985: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 201 KiB/s rd, 2.5 KiB/s wr, 46 op/s
Oct 02 12:58:29 compute-0 nova_compute[257802]: 2025-10-02 12:58:29.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:29 compute-0 nova_compute[257802]: 2025-10-02 12:58:29.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:30 compute-0 nova_compute[257802]: 2025-10-02 12:58:30.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:30.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2986: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 550 KiB/s rd, 13 KiB/s wr, 71 op/s
Oct 02 12:58:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/697969975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:58:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:30.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:58:31 compute-0 nova_compute[257802]: 2025-10-02 12:58:31.092 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:58:31 compute-0 ceph-mon[73607]: pgmap v2986: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 550 KiB/s rd, 13 KiB/s wr, 71 op/s
Oct 02 12:58:31 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/394349639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:31 compute-0 ovn_controller[148183]: 2025-10-02T12:58:31Z|00890|binding|INFO|Releasing lport d2e0b09e-a5f5-4832-b480-4d90b14ae948 from this chassis (sb_readonly=0)
Oct 02 12:58:31 compute-0 nova_compute[257802]: 2025-10-02 12:58:31.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:32.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2987: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 550 KiB/s rd, 13 KiB/s wr, 71 op/s
Oct 02 12:58:32 compute-0 podman[383170]: 2025-10-02 12:58:32.958813611 +0000 UTC m=+0.097145578 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:58:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:32.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:33 compute-0 ceph-mon[73607]: pgmap v2987: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 550 KiB/s rd, 13 KiB/s wr, 71 op/s
Oct 02 12:58:34 compute-0 nova_compute[257802]: 2025-10-02 12:58:34.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:58:34 compute-0 nova_compute[257802]: 2025-10-02 12:58:34.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:58:34 compute-0 nova_compute[257802]: 2025-10-02 12:58:34.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:58:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:34.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2988: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 550 KiB/s rd, 13 KiB/s wr, 71 op/s
Oct 02 12:58:34 compute-0 nova_compute[257802]: 2025-10-02 12:58:34.459 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-01108902-768b-4bee-baff-11d5854e2f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:58:34 compute-0 nova_compute[257802]: 2025-10-02 12:58:34.460 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-01108902-768b-4bee-baff-11d5854e2f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:58:34 compute-0 nova_compute[257802]: 2025-10-02 12:58:34.460 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:58:34 compute-0 nova_compute[257802]: 2025-10-02 12:58:34.461 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid 01108902-768b-4bee-baff-11d5854e2f77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:34 compute-0 nova_compute[257802]: 2025-10-02 12:58:34.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:34.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:35 compute-0 nova_compute[257802]: 2025-10-02 12:58:35.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:35 compute-0 ceph-mon[73607]: pgmap v2988: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 550 KiB/s rd, 13 KiB/s wr, 71 op/s
Oct 02 12:58:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:36.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2989: 305 pgs: 305 active+clean; 202 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 512 KiB/s rd, 24 KiB/s wr, 66 op/s
Oct 02 12:58:36 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Oct 02 12:58:36 compute-0 nova_compute[257802]: 2025-10-02 12:58:36.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:58:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:36.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:58:37 compute-0 ceph-mon[73607]: pgmap v2989: 305 pgs: 305 active+clean; 202 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 512 KiB/s rd, 24 KiB/s wr, 66 op/s
Oct 02 12:58:38 compute-0 nova_compute[257802]: 2025-10-02 12:58:38.054 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Updating instance_info_cache with network_info: [{"id": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "address": "fa:16:3e:3d:bc:3b", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612e6054-5c", "ovs_interfaceid": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:58:38 compute-0 nova_compute[257802]: 2025-10-02 12:58:38.074 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-01108902-768b-4bee-baff-11d5854e2f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:58:38 compute-0 nova_compute[257802]: 2025-10-02 12:58:38.074 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:58:38 compute-0 nova_compute[257802]: 2025-10-02 12:58:38.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:38.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2990: 305 pgs: 305 active+clean; 202 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 349 KiB/s rd, 22 KiB/s wr, 25 op/s
Oct 02 12:58:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:58:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:38.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:58:39 compute-0 sudo[383200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:58:39 compute-0 sudo[383200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:39 compute-0 sudo[383200]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:39 compute-0 sudo[383225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:58:39 compute-0 sudo[383225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:39 compute-0 sudo[383225]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:39 compute-0 ceph-mon[73607]: pgmap v2990: 305 pgs: 305 active+clean; 202 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 349 KiB/s rd, 22 KiB/s wr, 25 op/s
Oct 02 12:58:39 compute-0 nova_compute[257802]: 2025-10-02 12:58:39.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:40 compute-0 nova_compute[257802]: 2025-10-02 12:58:40.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:58:40 compute-0 nova_compute[257802]: 2025-10-02 12:58:40.122 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:40 compute-0 nova_compute[257802]: 2025-10-02 12:58:40.122 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:40 compute-0 nova_compute[257802]: 2025-10-02 12:58:40.123 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:40 compute-0 nova_compute[257802]: 2025-10-02 12:58:40.123 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:58:40 compute-0 nova_compute[257802]: 2025-10-02 12:58:40.123 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:58:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:40.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:40 compute-0 nova_compute[257802]: 2025-10-02 12:58:40.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2991: 305 pgs: 305 active+clean; 202 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 349 KiB/s rd, 26 KiB/s wr, 26 op/s
Oct 02 12:58:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:58:40 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1454972062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:40 compute-0 nova_compute[257802]: 2025-10-02 12:58:40.593 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:58:40 compute-0 nova_compute[257802]: 2025-10-02 12:58:40.671 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000bf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:58:40 compute-0 nova_compute[257802]: 2025-10-02 12:58:40.672 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000bf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:58:40 compute-0 nova_compute[257802]: 2025-10-02 12:58:40.672 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000bf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:58:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1454972062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:40 compute-0 nova_compute[257802]: 2025-10-02 12:58:40.852 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:58:40 compute-0 nova_compute[257802]: 2025-10-02 12:58:40.853 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4039MB free_disk=20.94263458251953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:58:40 compute-0 nova_compute[257802]: 2025-10-02 12:58:40.853 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:40 compute-0 nova_compute[257802]: 2025-10-02 12:58:40.854 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:40 compute-0 nova_compute[257802]: 2025-10-02 12:58:40.978 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 01108902-768b-4bee-baff-11d5854e2f77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:58:40 compute-0 nova_compute[257802]: 2025-10-02 12:58:40.979 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:58:40 compute-0 nova_compute[257802]: 2025-10-02 12:58:40.980 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:58:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:58:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:40.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:58:41 compute-0 nova_compute[257802]: 2025-10-02 12:58:41.104 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:58:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:58:41 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2638504711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:41 compute-0 nova_compute[257802]: 2025-10-02 12:58:41.560 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:58:41 compute-0 nova_compute[257802]: 2025-10-02 12:58:41.566 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:58:41 compute-0 nova_compute[257802]: 2025-10-02 12:58:41.591 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:58:41 compute-0 nova_compute[257802]: 2025-10-02 12:58:41.647 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:58:41 compute-0 nova_compute[257802]: 2025-10-02 12:58:41.647 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:41 compute-0 ceph-mon[73607]: pgmap v2991: 305 pgs: 305 active+clean; 202 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 349 KiB/s rd, 26 KiB/s wr, 26 op/s
Oct 02 12:58:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2638504711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:42.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2992: 305 pgs: 305 active+clean; 202 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 15 KiB/s wr, 1 op/s
Oct 02 12:58:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:58:42
Oct 02 12:58:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:58:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:58:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'vms', 'default.rgw.log', 'volumes', 'default.rgw.control', 'backups', '.mgr', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Oct 02 12:58:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:58:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:58:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:58:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:58:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:58:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:58:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:58:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:58:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:42.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:58:43 compute-0 ceph-mon[73607]: pgmap v2992: 305 pgs: 305 active+clean; 202 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 15 KiB/s wr, 1 op/s
Oct 02 12:58:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:58:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:58:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:58:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:58:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:58:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:58:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:58:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:58:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:58:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:58:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:44.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:44 compute-0 nova_compute[257802]: 2025-10-02 12:58:44.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2993: 305 pgs: 305 active+clean; 202 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 21 KiB/s wr, 2 op/s
Oct 02 12:58:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:44 compute-0 nova_compute[257802]: 2025-10-02 12:58:44.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:45.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:45 compute-0 nova_compute[257802]: 2025-10-02 12:58:45.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:45 compute-0 nova_compute[257802]: 2025-10-02 12:58:45.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:45 compute-0 ceph-mon[73607]: pgmap v2993: 305 pgs: 305 active+clean; 202 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 21 KiB/s wr, 2 op/s
Oct 02 12:58:45 compute-0 nova_compute[257802]: 2025-10-02 12:58:45.649 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:58:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:46.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2994: 305 pgs: 305 active+clean; 202 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 22 KiB/s wr, 2 op/s
Oct 02 12:58:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:58:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:47.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:58:47 compute-0 ceph-mon[73607]: pgmap v2994: 305 pgs: 305 active+clean; 202 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 22 KiB/s wr, 2 op/s
Oct 02 12:58:48 compute-0 nova_compute[257802]: 2025-10-02 12:58:48.129 2 DEBUG oslo_concurrency.lockutils [None req-898156bd-8281-4486-a82c-387391eccf3f 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "01108902-768b-4bee-baff-11d5854e2f77" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:48 compute-0 nova_compute[257802]: 2025-10-02 12:58:48.129 2 DEBUG oslo_concurrency.lockutils [None req-898156bd-8281-4486-a82c-387391eccf3f 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:48.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2995: 305 pgs: 305 active+clean; 202 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s wr, 1 op/s
Oct 02 12:58:48 compute-0 nova_compute[257802]: 2025-10-02 12:58:48.515 2 INFO nova.compute.manager [None req-898156bd-8281-4486-a82c-387391eccf3f 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Detaching volume c4c3ee6e-d57f-4361-8cda-389d0f995821
Oct 02 12:58:48 compute-0 nova_compute[257802]: 2025-10-02 12:58:48.680 2 INFO nova.virt.block_device [None req-898156bd-8281-4486-a82c-387391eccf3f 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Attempting to driver detach volume c4c3ee6e-d57f-4361-8cda-389d0f995821 from mountpoint /dev/vdb
Oct 02 12:58:48 compute-0 nova_compute[257802]: 2025-10-02 12:58:48.689 2 DEBUG nova.virt.libvirt.driver [None req-898156bd-8281-4486-a82c-387391eccf3f 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Attempting to detach device vdb from instance 01108902-768b-4bee-baff-11d5854e2f77 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:58:48 compute-0 nova_compute[257802]: 2025-10-02 12:58:48.689 2 DEBUG nova.virt.libvirt.guest [None req-898156bd-8281-4486-a82c-387391eccf3f 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:58:48 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:58:48 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-c4c3ee6e-d57f-4361-8cda-389d0f995821">
Oct 02 12:58:48 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:58:48 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:58:48 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:58:48 compute-0 nova_compute[257802]:   </source>
Oct 02 12:58:48 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:58:48 compute-0 nova_compute[257802]:   <serial>c4c3ee6e-d57f-4361-8cda-389d0f995821</serial>
Oct 02 12:58:48 compute-0 nova_compute[257802]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 02 12:58:48 compute-0 nova_compute[257802]: </disk>
Oct 02 12:58:48 compute-0 nova_compute[257802]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:58:48 compute-0 nova_compute[257802]: 2025-10-02 12:58:48.701 2 INFO nova.virt.libvirt.driver [None req-898156bd-8281-4486-a82c-387391eccf3f 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Successfully detached device vdb from instance 01108902-768b-4bee-baff-11d5854e2f77 from the persistent domain config.
Oct 02 12:58:48 compute-0 nova_compute[257802]: 2025-10-02 12:58:48.701 2 DEBUG nova.virt.libvirt.driver [None req-898156bd-8281-4486-a82c-387391eccf3f 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 01108902-768b-4bee-baff-11d5854e2f77 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 12:58:48 compute-0 nova_compute[257802]: 2025-10-02 12:58:48.702 2 DEBUG nova.virt.libvirt.guest [None req-898156bd-8281-4486-a82c-387391eccf3f 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:58:48 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:58:48 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-c4c3ee6e-d57f-4361-8cda-389d0f995821">
Oct 02 12:58:48 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:58:48 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:58:48 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:58:48 compute-0 nova_compute[257802]:   </source>
Oct 02 12:58:48 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:58:48 compute-0 nova_compute[257802]:   <serial>c4c3ee6e-d57f-4361-8cda-389d0f995821</serial>
Oct 02 12:58:48 compute-0 nova_compute[257802]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 02 12:58:48 compute-0 nova_compute[257802]: </disk>
Oct 02 12:58:48 compute-0 nova_compute[257802]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:58:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:58:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:49.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:58:49 compute-0 nova_compute[257802]: 2025-10-02 12:58:49.109 2 DEBUG nova.virt.libvirt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Received event <DeviceRemovedEvent: 1759409929.1092627, 01108902-768b-4bee-baff-11d5854e2f77 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 12:58:49 compute-0 nova_compute[257802]: 2025-10-02 12:58:49.111 2 DEBUG nova.virt.libvirt.driver [None req-898156bd-8281-4486-a82c-387391eccf3f 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 01108902-768b-4bee-baff-11d5854e2f77 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 12:58:49 compute-0 nova_compute[257802]: 2025-10-02 12:58:49.113 2 INFO nova.virt.libvirt.driver [None req-898156bd-8281-4486-a82c-387391eccf3f 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Successfully detached device vdb from instance 01108902-768b-4bee-baff-11d5854e2f77 from the live domain config.
Oct 02 12:58:49 compute-0 nova_compute[257802]: 2025-10-02 12:58:49.465 2 DEBUG nova.objects.instance [None req-898156bd-8281-4486-a82c-387391eccf3f 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'flavor' on Instance uuid 01108902-768b-4bee-baff-11d5854e2f77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:49 compute-0 nova_compute[257802]: 2025-10-02 12:58:49.792 2 DEBUG oslo_concurrency.lockutils [None req-898156bd-8281-4486-a82c-387391eccf3f 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:49 compute-0 ceph-mon[73607]: pgmap v2995: 305 pgs: 305 active+clean; 202 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s wr, 1 op/s
Oct 02 12:58:49 compute-0 nova_compute[257802]: 2025-10-02 12:58:49.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:58:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:50.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:58:50 compute-0 nova_compute[257802]: 2025-10-02 12:58:50.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2996: 305 pgs: 305 active+clean; 202 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s rd, 12 KiB/s wr, 2 op/s
Oct 02 12:58:50 compute-0 nova_compute[257802]: 2025-10-02 12:58:50.672 2 DEBUG oslo_concurrency.lockutils [None req-0fec6022-30fc-4de5-8e5a-d459ff9420b9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "01108902-768b-4bee-baff-11d5854e2f77" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:50 compute-0 nova_compute[257802]: 2025-10-02 12:58:50.673 2 DEBUG oslo_concurrency.lockutils [None req-0fec6022-30fc-4de5-8e5a-d459ff9420b9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:50 compute-0 nova_compute[257802]: 2025-10-02 12:58:50.673 2 DEBUG nova.compute.manager [None req-0fec6022-30fc-4de5-8e5a-d459ff9420b9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:58:50 compute-0 nova_compute[257802]: 2025-10-02 12:58:50.677 2 DEBUG nova.compute.manager [None req-0fec6022-30fc-4de5-8e5a-d459ff9420b9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 02 12:58:50 compute-0 nova_compute[257802]: 2025-10-02 12:58:50.677 2 DEBUG nova.objects.instance [None req-0fec6022-30fc-4de5-8e5a-d459ff9420b9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'flavor' on Instance uuid 01108902-768b-4bee-baff-11d5854e2f77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:50 compute-0 nova_compute[257802]: 2025-10-02 12:58:50.823 2 DEBUG nova.virt.libvirt.driver [None req-0fec6022-30fc-4de5-8e5a-d459ff9420b9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:58:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:51.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:51 compute-0 ceph-mon[73607]: pgmap v2996: 305 pgs: 305 active+clean; 202 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s rd, 12 KiB/s wr, 2 op/s
Oct 02 12:58:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:58:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:52.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:58:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2997: 305 pgs: 305 active+clean; 202 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s rd, 7.7 KiB/s wr, 1 op/s
Oct 02 12:58:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:53.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:53 compute-0 ceph-mon[73607]: pgmap v2997: 305 pgs: 305 active+clean; 202 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s rd, 7.7 KiB/s wr, 1 op/s
Oct 02 12:58:53 compute-0 kernel: tap612e6054-5c (unregistering): left promiscuous mode
Oct 02 12:58:53 compute-0 NetworkManager[44987]: <info>  [1759409933.5805] device (tap612e6054-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:58:53 compute-0 nova_compute[257802]: 2025-10-02 12:58:53.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:53 compute-0 ovn_controller[148183]: 2025-10-02T12:58:53Z|00891|binding|INFO|Releasing lport 612e6054-5ce1-486c-aa51-2e5d47567ef3 from this chassis (sb_readonly=0)
Oct 02 12:58:53 compute-0 ovn_controller[148183]: 2025-10-02T12:58:53Z|00892|binding|INFO|Setting lport 612e6054-5ce1-486c-aa51-2e5d47567ef3 down in Southbound
Oct 02 12:58:53 compute-0 ovn_controller[148183]: 2025-10-02T12:58:53Z|00893|binding|INFO|Removing iface tap612e6054-5c ovn-installed in OVS
Oct 02 12:58:53 compute-0 nova_compute[257802]: 2025-10-02 12:58:53.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:53.600 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:bc:3b 10.100.0.9'], port_security=['fa:16:3e:3d:bc:3b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '01108902-768b-4bee-baff-11d5854e2f77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7168b5b1300495d90592b195824729a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '0a80d211-09c4-4e45-80f9-f1b2f1a7f90e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.186', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb99a427-f5c8-46c4-b56f-12cf288447a9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=612e6054-5ce1-486c-aa51-2e5d47567ef3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:58:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:53.601 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 612e6054-5ce1-486c-aa51-2e5d47567ef3 in datapath ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f unbound from our chassis
Oct 02 12:58:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:53.603 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:58:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:53.605 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1da0e581-09e3-4b41-85b3-7cd4bc73efb8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:53.606 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f namespace which is not needed anymore
Oct 02 12:58:53 compute-0 nova_compute[257802]: 2025-10-02 12:58:53.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:53 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000bf.scope: Deactivated successfully.
Oct 02 12:58:53 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000bf.scope: Consumed 16.244s CPU time.
Oct 02 12:58:53 compute-0 systemd-machined[211836]: Machine qemu-95-instance-000000bf terminated.
Oct 02 12:58:53 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[382119]: [NOTICE]   (382141) : haproxy version is 2.8.14-c23fe91
Oct 02 12:58:53 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[382119]: [NOTICE]   (382141) : path to executable is /usr/sbin/haproxy
Oct 02 12:58:53 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[382119]: [WARNING]  (382141) : Exiting Master process...
Oct 02 12:58:53 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[382119]: [WARNING]  (382141) : Exiting Master process...
Oct 02 12:58:53 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[382119]: [ALERT]    (382141) : Current worker (382159) exited with code 143 (Terminated)
Oct 02 12:58:53 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[382119]: [WARNING]  (382141) : All workers exited. Exiting... (0)
Oct 02 12:58:53 compute-0 systemd[1]: libpod-1ab8cefc9342bd9b7ca7fe75d57ac065c35154b82b7df5337a7945f277ddf4d9.scope: Deactivated successfully.
Oct 02 12:58:53 compute-0 podman[383328]: 2025-10-02 12:58:53.770515407 +0000 UTC m=+0.072966324 container died 1ab8cefc9342bd9b7ca7fe75d57ac065c35154b82b7df5337a7945f277ddf4d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:58:53 compute-0 nova_compute[257802]: 2025-10-02 12:58:53.840 2 INFO nova.virt.libvirt.driver [None req-0fec6022-30fc-4de5-8e5a-d459ff9420b9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Instance shutdown successfully after 3 seconds.
Oct 02 12:58:53 compute-0 nova_compute[257802]: 2025-10-02 12:58:53.845 2 INFO nova.virt.libvirt.driver [-] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Instance destroyed successfully.
Oct 02 12:58:53 compute-0 nova_compute[257802]: 2025-10-02 12:58:53.845 2 DEBUG nova.objects.instance [None req-0fec6022-30fc-4de5-8e5a-d459ff9420b9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'numa_topology' on Instance uuid 01108902-768b-4bee-baff-11d5854e2f77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:53 compute-0 nova_compute[257802]: 2025-10-02 12:58:53.867 2 DEBUG nova.compute.manager [None req-0fec6022-30fc-4de5-8e5a-d459ff9420b9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:58:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1ab8cefc9342bd9b7ca7fe75d57ac065c35154b82b7df5337a7945f277ddf4d9-userdata-shm.mount: Deactivated successfully.
Oct 02 12:58:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ecfcc2371553fa7f47ff760a218c6f1467f691ec05cca90286af9f7252afaf4-merged.mount: Deactivated successfully.
Oct 02 12:58:53 compute-0 podman[383328]: 2025-10-02 12:58:53.901984866 +0000 UTC m=+0.204435773 container cleanup 1ab8cefc9342bd9b7ca7fe75d57ac065c35154b82b7df5337a7945f277ddf4d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:58:53 compute-0 nova_compute[257802]: 2025-10-02 12:58:53.923 2 DEBUG oslo_concurrency.lockutils [None req-0fec6022-30fc-4de5-8e5a-d459ff9420b9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.250s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:53 compute-0 systemd[1]: libpod-conmon-1ab8cefc9342bd9b7ca7fe75d57ac065c35154b82b7df5337a7945f277ddf4d9.scope: Deactivated successfully.
Oct 02 12:58:53 compute-0 podman[383368]: 2025-10-02 12:58:53.99903873 +0000 UTC m=+0.079233598 container remove 1ab8cefc9342bd9b7ca7fe75d57ac065c35154b82b7df5337a7945f277ddf4d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:58:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:54.005 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[48ae57d2-8680-4ae9-a9b7-90e030507d5b]: (4, ('Thu Oct  2 12:58:53 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f (1ab8cefc9342bd9b7ca7fe75d57ac065c35154b82b7df5337a7945f277ddf4d9)\n1ab8cefc9342bd9b7ca7fe75d57ac065c35154b82b7df5337a7945f277ddf4d9\nThu Oct  2 12:58:53 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f (1ab8cefc9342bd9b7ca7fe75d57ac065c35154b82b7df5337a7945f277ddf4d9)\n1ab8cefc9342bd9b7ca7fe75d57ac065c35154b82b7df5337a7945f277ddf4d9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:54.006 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6c211ed2-144c-417b-af09-660543f14645]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:54.008 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff8c8423-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:58:54 compute-0 nova_compute[257802]: 2025-10-02 12:58:54.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:54 compute-0 nova_compute[257802]: 2025-10-02 12:58:54.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:54 compute-0 kernel: tapff8c8423-f0: left promiscuous mode
Oct 02 12:58:54 compute-0 nova_compute[257802]: 2025-10-02 12:58:54.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:54.053 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[39e5dca9-c63c-4d14-8b5b-5a7f3c40f0c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:54.087 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c53d30f4-f64e-43b5-88f4-1e9e0acfab61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:54.089 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2d2f324f-cd26-4d5d-af48-f4eb28eb4d19]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:54.110 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8db08ace-20d4-427f-b9bb-a903b90b38f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795802, 'reachable_time': 20161, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383389, 'error': None, 'target': 'ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:54 compute-0 systemd[1]: run-netns-ovnmeta\x2dff8c8423\x2df2c6\x2d4e3f\x2d8fd3\x2d2fb6b6292a3f.mount: Deactivated successfully.
Oct 02 12:58:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:54.116 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:58:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:58:54.116 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[375b32a6-0ea4-4490-b4bc-4a362e89e4aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:54 compute-0 sshd-session[383388]: Invalid user vr from 167.99.55.34 port 44814
Oct 02 12:58:54 compute-0 sshd-session[383388]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 12:58:54 compute-0 sshd-session[383388]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=167.99.55.34
Oct 02 12:58:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:54.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2998: 305 pgs: 305 active+clean; 202 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 382 KiB/s rd, 8.8 KiB/s wr, 2 op/s
Oct 02 12:58:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:54 compute-0 nova_compute[257802]: 2025-10-02 12:58:54.606 2 DEBUG nova.compute.manager [req-3723f97d-d9cd-4b47-9da1-f6d91b35d46e req-2819ea20-0d85-4d72-9f0c-b134cda60710 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Received event network-vif-unplugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:58:54 compute-0 nova_compute[257802]: 2025-10-02 12:58:54.607 2 DEBUG oslo_concurrency.lockutils [req-3723f97d-d9cd-4b47-9da1-f6d91b35d46e req-2819ea20-0d85-4d72-9f0c-b134cda60710 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "01108902-768b-4bee-baff-11d5854e2f77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:54 compute-0 nova_compute[257802]: 2025-10-02 12:58:54.607 2 DEBUG oslo_concurrency.lockutils [req-3723f97d-d9cd-4b47-9da1-f6d91b35d46e req-2819ea20-0d85-4d72-9f0c-b134cda60710 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:54 compute-0 nova_compute[257802]: 2025-10-02 12:58:54.607 2 DEBUG oslo_concurrency.lockutils [req-3723f97d-d9cd-4b47-9da1-f6d91b35d46e req-2819ea20-0d85-4d72-9f0c-b134cda60710 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:54 compute-0 nova_compute[257802]: 2025-10-02 12:58:54.607 2 DEBUG nova.compute.manager [req-3723f97d-d9cd-4b47-9da1-f6d91b35d46e req-2819ea20-0d85-4d72-9f0c-b134cda60710 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] No waiting events found dispatching network-vif-unplugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:58:54 compute-0 nova_compute[257802]: 2025-10-02 12:58:54.607 2 WARNING nova.compute.manager [req-3723f97d-d9cd-4b47-9da1-f6d91b35d46e req-2819ea20-0d85-4d72-9f0c-b134cda60710 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Received unexpected event network-vif-unplugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 for instance with vm_state stopped and task_state None.
Oct 02 12:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002176136760189838 of space, bias 1.0, pg target 0.6528410280569514 quantized to 32 (current 32)
Oct 02 12:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6485879920435511 quantized to 32 (current 32)
Oct 02 12:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 12:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 12:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 12:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 12:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:58:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 12:58:54 compute-0 nova_compute[257802]: 2025-10-02 12:58:54.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:55.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:58:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1738967758' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:58:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:58:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1738967758' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:58:55 compute-0 nova_compute[257802]: 2025-10-02 12:58:55.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:55 compute-0 ceph-mon[73607]: pgmap v2998: 305 pgs: 305 active+clean; 202 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 382 KiB/s rd, 8.8 KiB/s wr, 2 op/s
Oct 02 12:58:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1738967758' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:58:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1738967758' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:58:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:58:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:56.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:58:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v2999: 305 pgs: 305 active+clean; 202 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 5.8 KiB/s wr, 10 op/s
Oct 02 12:58:56 compute-0 sshd-session[383388]: Failed password for invalid user vr from 167.99.55.34 port 44814 ssh2
Oct 02 12:58:56 compute-0 podman[383393]: 2025-10-02 12:58:56.927140668 +0000 UTC m=+0.067245743 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:58:56 compute-0 podman[383395]: 2025-10-02 12:58:56.931462454 +0000 UTC m=+0.068177395 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:58:56 compute-0 podman[383394]: 2025-10-02 12:58:56.931376192 +0000 UTC m=+0.068091703 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:58:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:57.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:57 compute-0 nova_compute[257802]: 2025-10-02 12:58:57.215 2 DEBUG nova.compute.manager [req-273d5b23-7721-473c-b270-fdd9302551bd req-67b41fde-3fbe-4e78-80e4-dec65a0ff3cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Received event network-vif-plugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:58:57 compute-0 nova_compute[257802]: 2025-10-02 12:58:57.215 2 DEBUG oslo_concurrency.lockutils [req-273d5b23-7721-473c-b270-fdd9302551bd req-67b41fde-3fbe-4e78-80e4-dec65a0ff3cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "01108902-768b-4bee-baff-11d5854e2f77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:57 compute-0 nova_compute[257802]: 2025-10-02 12:58:57.215 2 DEBUG oslo_concurrency.lockutils [req-273d5b23-7721-473c-b270-fdd9302551bd req-67b41fde-3fbe-4e78-80e4-dec65a0ff3cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:57 compute-0 nova_compute[257802]: 2025-10-02 12:58:57.215 2 DEBUG oslo_concurrency.lockutils [req-273d5b23-7721-473c-b270-fdd9302551bd req-67b41fde-3fbe-4e78-80e4-dec65a0ff3cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:57 compute-0 nova_compute[257802]: 2025-10-02 12:58:57.216 2 DEBUG nova.compute.manager [req-273d5b23-7721-473c-b270-fdd9302551bd req-67b41fde-3fbe-4e78-80e4-dec65a0ff3cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] No waiting events found dispatching network-vif-plugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:58:57 compute-0 nova_compute[257802]: 2025-10-02 12:58:57.216 2 WARNING nova.compute.manager [req-273d5b23-7721-473c-b270-fdd9302551bd req-67b41fde-3fbe-4e78-80e4-dec65a0ff3cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Received unexpected event network-vif-plugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 for instance with vm_state stopped and task_state None.
Oct 02 12:58:57 compute-0 nova_compute[257802]: 2025-10-02 12:58:57.628 2 DEBUG nova.objects.instance [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'flavor' on Instance uuid 01108902-768b-4bee-baff-11d5854e2f77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:57 compute-0 nova_compute[257802]: 2025-10-02 12:58:57.648 2 DEBUG oslo_concurrency.lockutils [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "refresh_cache-01108902-768b-4bee-baff-11d5854e2f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:58:57 compute-0 nova_compute[257802]: 2025-10-02 12:58:57.648 2 DEBUG oslo_concurrency.lockutils [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquired lock "refresh_cache-01108902-768b-4bee-baff-11d5854e2f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:58:57 compute-0 nova_compute[257802]: 2025-10-02 12:58:57.648 2 DEBUG nova.network.neutron [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:58:57 compute-0 nova_compute[257802]: 2025-10-02 12:58:57.648 2 DEBUG nova.objects.instance [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'info_cache' on Instance uuid 01108902-768b-4bee-baff-11d5854e2f77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:57 compute-0 ceph-mon[73607]: pgmap v2999: 305 pgs: 305 active+clean; 202 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 5.8 KiB/s wr, 10 op/s
Oct 02 12:58:57 compute-0 sshd-session[383388]: Connection closed by invalid user vr 167.99.55.34 port 44814 [preauth]
Oct 02 12:58:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:58.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3000: 305 pgs: 305 active+clean; 202 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.8 KiB/s wr, 10 op/s
Oct 02 12:58:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:58:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:59.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.190 2 DEBUG nova.network.neutron [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Updating instance_info_cache with network_info: [{"id": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "address": "fa:16:3e:3d:bc:3b", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612e6054-5c", "ovs_interfaceid": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.236 2 DEBUG oslo_concurrency.lockutils [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Releasing lock "refresh_cache-01108902-768b-4bee-baff-11d5854e2f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.262 2 INFO nova.virt.libvirt.driver [-] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Instance destroyed successfully.
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.262 2 DEBUG nova.objects.instance [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'numa_topology' on Instance uuid 01108902-768b-4bee-baff-11d5854e2f77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.274 2 DEBUG nova.objects.instance [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'resources' on Instance uuid 01108902-768b-4bee-baff-11d5854e2f77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.286 2 DEBUG nova.virt.libvirt.vif [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:57:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-1568055795',display_name='tempest-AttachVolumeTestJSON-server-1568055795',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-1568055795',id=191,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCbS2mKT+0z/yQuxuzfwCwsXKWxC0UzeLnKuZT+sZHjADBetDcCrAmsjHh5DZcFc1laecrJqG3Gw7KVMmBAo01ad6Z646e7xI9MDC+TYltwo6ghxZsSIWSKPTUL61VunAQ==',key_name='tempest-keypair-1834836644',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:57:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='e7168b5b1300495d90592b195824729a',ramdisk_id='',reservation_id='r-ikr3sr2k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-398185718',owner_user_name='tempest-AttachVolumeTestJSON-398185718-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:58:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3299a1aed5af4843a91417a3f181c172',uuid=01108902-768b-4bee-baff-11d5854e2f77,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "address": "fa:16:3e:3d:bc:3b", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612e6054-5c", "ovs_interfaceid": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.287 2 DEBUG nova.network.os_vif_util [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Converting VIF {"id": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "address": "fa:16:3e:3d:bc:3b", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612e6054-5c", "ovs_interfaceid": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.287 2 DEBUG nova.network.os_vif_util [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:bc:3b,bridge_name='br-int',has_traffic_filtering=True,id=612e6054-5ce1-486c-aa51-2e5d47567ef3,network=Network(ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap612e6054-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.288 2 DEBUG os_vif [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:bc:3b,bridge_name='br-int',has_traffic_filtering=True,id=612e6054-5ce1-486c-aa51-2e5d47567ef3,network=Network(ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap612e6054-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.290 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap612e6054-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.295 2 INFO os_vif [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:bc:3b,bridge_name='br-int',has_traffic_filtering=True,id=612e6054-5ce1-486c-aa51-2e5d47567ef3,network=Network(ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap612e6054-5c')
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.303 2 DEBUG nova.virt.libvirt.driver [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Start _get_guest_xml network_info=[{"id": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "address": "fa:16:3e:3d:bc:3b", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612e6054-5c", "ovs_interfaceid": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.309 2 WARNING nova.virt.libvirt.driver [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.315 2 DEBUG nova.virt.libvirt.host [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.315 2 DEBUG nova.virt.libvirt.host [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.318 2 DEBUG nova.virt.libvirt.host [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.319 2 DEBUG nova.virt.libvirt.host [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.320 2 DEBUG nova.virt.libvirt.driver [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.321 2 DEBUG nova.virt.hardware [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.321 2 DEBUG nova.virt.hardware [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.321 2 DEBUG nova.virt.hardware [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.321 2 DEBUG nova.virt.hardware [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.322 2 DEBUG nova.virt.hardware [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.322 2 DEBUG nova.virt.hardware [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.322 2 DEBUG nova.virt.hardware [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.322 2 DEBUG nova.virt.hardware [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.322 2 DEBUG nova.virt.hardware [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.322 2 DEBUG nova.virt.hardware [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.323 2 DEBUG nova.virt.hardware [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.323 2 DEBUG nova.objects.instance [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 01108902-768b-4bee-baff-11d5854e2f77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.340 2 DEBUG oslo_concurrency.processutils [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:58:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:59 compute-0 sudo[383470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:58:59 compute-0 sudo[383470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:59 compute-0 sudo[383470]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:59 compute-0 sudo[383495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:58:59 compute-0 sudo[383495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:59 compute-0 sudo[383495]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:58:59 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1910635488' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:58:59 compute-0 ceph-mon[73607]: pgmap v3000: 305 pgs: 305 active+clean; 202 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.8 KiB/s wr, 10 op/s
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.778 2 DEBUG oslo_concurrency.processutils [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.815 2 DEBUG oslo_concurrency.processutils [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:58:59 compute-0 nova_compute[257802]: 2025-10-02 12:58:59.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:59:00 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2913084742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:00 compute-0 nova_compute[257802]: 2025-10-02 12:59:00.231 2 DEBUG oslo_concurrency.processutils [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:59:00 compute-0 nova_compute[257802]: 2025-10-02 12:59:00.232 2 DEBUG nova.virt.libvirt.vif [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:57:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-1568055795',display_name='tempest-AttachVolumeTestJSON-server-1568055795',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-1568055795',id=191,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCbS2mKT+0z/yQuxuzfwCwsXKWxC0UzeLnKuZT+sZHjADBetDcCrAmsjHh5DZcFc1laecrJqG3Gw7KVMmBAo01ad6Z646e7xI9MDC+TYltwo6ghxZsSIWSKPTUL61VunAQ==',key_name='tempest-keypair-1834836644',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:57:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='e7168b5b1300495d90592b195824729a',ramdisk_id='',reservation_id='r-ikr3sr2k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-398185718',owner_user_name='tempest-AttachVolumeTestJSON-398185718-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:58:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3299a1aed5af4843a91417a3f181c172',uuid=01108902-768b-4bee-baff-11d5854e2f77,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "address": "fa:16:3e:3d:bc:3b", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612e6054-5c", "ovs_interfaceid": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:59:00 compute-0 nova_compute[257802]: 2025-10-02 12:59:00.233 2 DEBUG nova.network.os_vif_util [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Converting VIF {"id": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "address": "fa:16:3e:3d:bc:3b", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612e6054-5c", "ovs_interfaceid": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:59:00 compute-0 nova_compute[257802]: 2025-10-02 12:59:00.233 2 DEBUG nova.network.os_vif_util [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:bc:3b,bridge_name='br-int',has_traffic_filtering=True,id=612e6054-5ce1-486c-aa51-2e5d47567ef3,network=Network(ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap612e6054-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:59:00 compute-0 nova_compute[257802]: 2025-10-02 12:59:00.235 2 DEBUG nova.objects.instance [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'pci_devices' on Instance uuid 01108902-768b-4bee-baff-11d5854e2f77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:59:00 compute-0 nova_compute[257802]: 2025-10-02 12:59:00.262 2 DEBUG nova.virt.libvirt.driver [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:59:00 compute-0 nova_compute[257802]:   <uuid>01108902-768b-4bee-baff-11d5854e2f77</uuid>
Oct 02 12:59:00 compute-0 nova_compute[257802]:   <name>instance-000000bf</name>
Oct 02 12:59:00 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:59:00 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:59:00 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:59:00 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:       <nova:name>tempest-AttachVolumeTestJSON-server-1568055795</nova:name>
Oct 02 12:59:00 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:58:59</nova:creationTime>
Oct 02 12:59:00 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:59:00 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:59:00 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:59:00 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:59:00 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:59:00 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:59:00 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:59:00 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:59:00 compute-0 nova_compute[257802]:         <nova:user uuid="3299a1aed5af4843a91417a3f181c172">tempest-AttachVolumeTestJSON-398185718-project-member</nova:user>
Oct 02 12:59:00 compute-0 nova_compute[257802]:         <nova:project uuid="e7168b5b1300495d90592b195824729a">tempest-AttachVolumeTestJSON-398185718</nova:project>
Oct 02 12:59:00 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:59:00 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:59:00 compute-0 nova_compute[257802]:         <nova:port uuid="612e6054-5ce1-486c-aa51-2e5d47567ef3">
Oct 02 12:59:00 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:59:00 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:59:00 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:59:00 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <system>
Oct 02 12:59:00 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:59:00 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:59:00 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:59:00 compute-0 nova_compute[257802]:       <entry name="serial">01108902-768b-4bee-baff-11d5854e2f77</entry>
Oct 02 12:59:00 compute-0 nova_compute[257802]:       <entry name="uuid">01108902-768b-4bee-baff-11d5854e2f77</entry>
Oct 02 12:59:00 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     </system>
Oct 02 12:59:00 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:59:00 compute-0 nova_compute[257802]:   <os>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:   </os>
Oct 02 12:59:00 compute-0 nova_compute[257802]:   <features>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:   </features>
Oct 02 12:59:00 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:59:00 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:59:00 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:59:00 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/01108902-768b-4bee-baff-11d5854e2f77_disk">
Oct 02 12:59:00 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:       </source>
Oct 02 12:59:00 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:59:00 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:59:00 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:59:00 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/01108902-768b-4bee-baff-11d5854e2f77_disk.config">
Oct 02 12:59:00 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:       </source>
Oct 02 12:59:00 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:59:00 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:59:00 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:59:00 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:3d:bc:3b"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:       <target dev="tap612e6054-5c"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:59:00 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/01108902-768b-4bee-baff-11d5854e2f77/console.log" append="off"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <video>
Oct 02 12:59:00 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     </video>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <input type="keyboard" bus="usb"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:59:00 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:59:00 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:59:00 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:59:00 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:59:00 compute-0 nova_compute[257802]: </domain>
Oct 02 12:59:00 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:59:00 compute-0 nova_compute[257802]: 2025-10-02 12:59:00.264 2 DEBUG nova.virt.libvirt.driver [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] skipping disk for instance-000000bf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:59:00 compute-0 nova_compute[257802]: 2025-10-02 12:59:00.264 2 DEBUG nova.virt.libvirt.driver [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] skipping disk for instance-000000bf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:59:00 compute-0 nova_compute[257802]: 2025-10-02 12:59:00.265 2 DEBUG nova.virt.libvirt.vif [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:57:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-1568055795',display_name='tempest-AttachVolumeTestJSON-server-1568055795',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-1568055795',id=191,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCbS2mKT+0z/yQuxuzfwCwsXKWxC0UzeLnKuZT+sZHjADBetDcCrAmsjHh5DZcFc1laecrJqG3Gw7KVMmBAo01ad6Z646e7xI9MDC+TYltwo6ghxZsSIWSKPTUL61VunAQ==',key_name='tempest-keypair-1834836644',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:57:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='e7168b5b1300495d90592b195824729a',ramdisk_id='',reservation_id='r-ikr3sr2k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-398185718',owner_user_name='tempest-AttachVolumeTestJSON-398185718-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:58:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3299a1aed5af4843a91417a3f181c172',uuid=01108902-768b-4bee-baff-11d5854e2f77,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "address": "fa:16:3e:3d:bc:3b", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612e6054-5c", "ovs_interfaceid": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:59:00 compute-0 nova_compute[257802]: 2025-10-02 12:59:00.265 2 DEBUG nova.network.os_vif_util [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Converting VIF {"id": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "address": "fa:16:3e:3d:bc:3b", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612e6054-5c", "ovs_interfaceid": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:59:00 compute-0 nova_compute[257802]: 2025-10-02 12:59:00.267 2 DEBUG nova.network.os_vif_util [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:bc:3b,bridge_name='br-int',has_traffic_filtering=True,id=612e6054-5ce1-486c-aa51-2e5d47567ef3,network=Network(ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap612e6054-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:59:00 compute-0 nova_compute[257802]: 2025-10-02 12:59:00.268 2 DEBUG os_vif [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:bc:3b,bridge_name='br-int',has_traffic_filtering=True,id=612e6054-5ce1-486c-aa51-2e5d47567ef3,network=Network(ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap612e6054-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:59:00 compute-0 nova_compute[257802]: 2025-10-02 12:59:00.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:00 compute-0 nova_compute[257802]: 2025-10-02 12:59:00.269 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:00 compute-0 nova_compute[257802]: 2025-10-02 12:59:00.269 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:59:00 compute-0 nova_compute[257802]: 2025-10-02 12:59:00.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:00 compute-0 nova_compute[257802]: 2025-10-02 12:59:00.271 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap612e6054-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:00 compute-0 nova_compute[257802]: 2025-10-02 12:59:00.272 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap612e6054-5c, col_values=(('external_ids', {'iface-id': '612e6054-5ce1-486c-aa51-2e5d47567ef3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:bc:3b', 'vm-uuid': '01108902-768b-4bee-baff-11d5854e2f77'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:00 compute-0 NetworkManager[44987]: <info>  [1759409940.2741] manager: (tap612e6054-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/398)
Oct 02 12:59:00 compute-0 nova_compute[257802]: 2025-10-02 12:59:00.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:59:00 compute-0 nova_compute[257802]: 2025-10-02 12:59:00.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:00 compute-0 nova_compute[257802]: 2025-10-02 12:59:00.280 2 INFO os_vif [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:bc:3b,bridge_name='br-int',has_traffic_filtering=True,id=612e6054-5ce1-486c-aa51-2e5d47567ef3,network=Network(ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap612e6054-5c')
Oct 02 12:59:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.003000072s ======
Oct 02 12:59:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:00.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000072s
Oct 02 12:59:00 compute-0 kernel: tap612e6054-5c: entered promiscuous mode
Oct 02 12:59:00 compute-0 NetworkManager[44987]: <info>  [1759409940.3471] manager: (tap612e6054-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/399)
Oct 02 12:59:00 compute-0 ovn_controller[148183]: 2025-10-02T12:59:00Z|00894|binding|INFO|Claiming lport 612e6054-5ce1-486c-aa51-2e5d47567ef3 for this chassis.
Oct 02 12:59:00 compute-0 ovn_controller[148183]: 2025-10-02T12:59:00Z|00895|binding|INFO|612e6054-5ce1-486c-aa51-2e5d47567ef3: Claiming fa:16:3e:3d:bc:3b 10.100.0.9
Oct 02 12:59:00 compute-0 nova_compute[257802]: 2025-10-02 12:59:00.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:00 compute-0 nova_compute[257802]: 2025-10-02 12:59:00.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:00 compute-0 ovn_controller[148183]: 2025-10-02T12:59:00Z|00896|binding|INFO|Setting lport 612e6054-5ce1-486c-aa51-2e5d47567ef3 ovn-installed in OVS
Oct 02 12:59:00 compute-0 nova_compute[257802]: 2025-10-02 12:59:00.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:00 compute-0 nova_compute[257802]: 2025-10-02 12:59:00.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:00 compute-0 systemd-udevd[383577]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:59:00 compute-0 systemd-machined[211836]: New machine qemu-96-instance-000000bf.
Oct 02 12:59:00 compute-0 NetworkManager[44987]: <info>  [1759409940.3895] device (tap612e6054-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:59:00 compute-0 NetworkManager[44987]: <info>  [1759409940.3903] device (tap612e6054-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:59:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3001: 305 pgs: 305 active+clean; 248 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 46 op/s
Oct 02 12:59:00 compute-0 systemd[1]: Started Virtual Machine qemu-96-instance-000000bf.
Oct 02 12:59:00 compute-0 ovn_controller[148183]: 2025-10-02T12:59:00Z|00897|binding|INFO|Setting lport 612e6054-5ce1-486c-aa51-2e5d47567ef3 up in Southbound
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:00.401 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:bc:3b 10.100.0.9'], port_security=['fa:16:3e:3d:bc:3b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '01108902-768b-4bee-baff-11d5854e2f77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7168b5b1300495d90592b195824729a', 'neutron:revision_number': '7', 'neutron:security_group_ids': '0a80d211-09c4-4e45-80f9-f1b2f1a7f90e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.186'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb99a427-f5c8-46c4-b56f-12cf288447a9, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=612e6054-5ce1-486c-aa51-2e5d47567ef3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:00.402 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 612e6054-5ce1-486c-aa51-2e5d47567ef3 in datapath ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f bound to our chassis
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:00.403 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:00.413 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7332d1b6-1e42-48bc-9260-8e8dda8c39d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:00.414 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapff8c8423-f1 in ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:00.416 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapff8c8423-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:00.416 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9191362b-a8a7-4bac-8287-0a7ce04e0f62]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:00.418 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[fd11179c-9ac3-4619-aef3-892c91deb493]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:00.430 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[a91bb768-6dca-4f0c-b3fa-a767107d3a63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:00.442 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6918be22-6d32-46c2-9807-14734b412d76]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:00.473 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[296d9eac-6dd4-4a00-87e3-cd29fe4f1481]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:00 compute-0 systemd-udevd[383579]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:59:00 compute-0 NetworkManager[44987]: <info>  [1759409940.4802] manager: (tapff8c8423-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/400)
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:00.480 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d619f30c-80ee-424b-b2e6-7c3b065a3d4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:00.514 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[b41e8a14-1f04-4c16-81b4-e43087511c04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:00.516 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[a55670ca-38c0-495c-ba50-30030bf0635f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:00 compute-0 NetworkManager[44987]: <info>  [1759409940.5372] device (tapff8c8423-f0): carrier: link connected
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:00.541 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[c261871e-6056-42ab-881c-251bf46a93db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:00.557 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[45961d02-dafa-486a-9cd6-b18fba17f62d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapff8c8423-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:50:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 267], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 800816, 'reachable_time': 19435, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383610, 'error': None, 'target': 'ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:00.579 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e346ff-f524-4e4f-8768-e347dd306ac3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:5000'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 800816, 'tstamp': 800816}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383611, 'error': None, 'target': 'ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:00.597 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6693f8fb-2288-46dc-8d31-8803fcafb5d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapff8c8423-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:50:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 267], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 800816, 'reachable_time': 19435, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 383612, 'error': None, 'target': 'ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:00.629 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[62551b90-4033-4e16-b086-a5d6c0e34562]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:00.695 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[65817f1f-16aa-4e45-a760-716912f8a1f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:00.698 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff8c8423-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:00.699 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:00.699 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff8c8423-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:00 compute-0 NetworkManager[44987]: <info>  [1759409940.7020] manager: (tapff8c8423-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/401)
Oct 02 12:59:00 compute-0 nova_compute[257802]: 2025-10-02 12:59:00.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:00 compute-0 kernel: tapff8c8423-f0: entered promiscuous mode
Oct 02 12:59:00 compute-0 nova_compute[257802]: 2025-10-02 12:59:00.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:00.706 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapff8c8423-f0, col_values=(('external_ids', {'iface-id': 'd2e0b09e-a5f5-4832-b480-4d90b14ae948'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:00 compute-0 ovn_controller[148183]: 2025-10-02T12:59:00Z|00898|binding|INFO|Releasing lport d2e0b09e-a5f5-4832-b480-4d90b14ae948 from this chassis (sb_readonly=0)
Oct 02 12:59:00 compute-0 nova_compute[257802]: 2025-10-02 12:59:00.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:00 compute-0 nova_compute[257802]: 2025-10-02 12:59:00.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:00.725 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:00.727 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[994483da-f395-49b2-97ba-848ac6dccb9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:00.728 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f.pid.haproxy
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:59:00 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:00.732 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'env', 'PROCESS_TAG=haproxy-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:59:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1910635488' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2913084742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:01.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:01 compute-0 podman[383687]: 2025-10-02 12:59:01.184547821 +0000 UTC m=+0.085256916 container create 9bd015da0ccf1e53eb204857de04e213179e898f944d3d179e376aae17bceeac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 12:59:01 compute-0 podman[383687]: 2025-10-02 12:59:01.125199763 +0000 UTC m=+0.025908868 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:59:01 compute-0 systemd[1]: Started libpod-conmon-9bd015da0ccf1e53eb204857de04e213179e898f944d3d179e376aae17bceeac.scope.
Oct 02 12:59:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:59:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0276cef2100b9194da36d0a16f97a37ad5a035d32e599bc5d1756035ac74efb9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:59:01 compute-0 nova_compute[257802]: 2025-10-02 12:59:01.285 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Removed pending event for 01108902-768b-4bee-baff-11d5854e2f77 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:59:01 compute-0 nova_compute[257802]: 2025-10-02 12:59:01.287 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409941.285308, 01108902-768b-4bee-baff-11d5854e2f77 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:59:01 compute-0 nova_compute[257802]: 2025-10-02 12:59:01.287 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 01108902-768b-4bee-baff-11d5854e2f77] VM Resumed (Lifecycle Event)
Oct 02 12:59:01 compute-0 nova_compute[257802]: 2025-10-02 12:59:01.288 2 DEBUG nova.compute.manager [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:59:01 compute-0 podman[383687]: 2025-10-02 12:59:01.289495339 +0000 UTC m=+0.190204454 container init 9bd015da0ccf1e53eb204857de04e213179e898f944d3d179e376aae17bceeac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:59:01 compute-0 nova_compute[257802]: 2025-10-02 12:59:01.292 2 INFO nova.virt.libvirt.driver [-] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Instance rebooted successfully.
Oct 02 12:59:01 compute-0 nova_compute[257802]: 2025-10-02 12:59:01.292 2 DEBUG nova.compute.manager [None req-34dd8778-a018-4dce-8924-523335d422cf 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:59:01 compute-0 podman[383687]: 2025-10-02 12:59:01.299658188 +0000 UTC m=+0.200367273 container start 9bd015da0ccf1e53eb204857de04e213179e898f944d3d179e376aae17bceeac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 12:59:01 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[383702]: [NOTICE]   (383706) : New worker (383708) forked
Oct 02 12:59:01 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[383702]: [NOTICE]   (383706) : Loading success.
Oct 02 12:59:01 compute-0 nova_compute[257802]: 2025-10-02 12:59:01.398 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:59:01 compute-0 nova_compute[257802]: 2025-10-02 12:59:01.401 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:59:01 compute-0 nova_compute[257802]: 2025-10-02 12:59:01.436 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 01108902-768b-4bee-baff-11d5854e2f77] During sync_power_state the instance has a pending task (powering-on). Skip.
Oct 02 12:59:01 compute-0 nova_compute[257802]: 2025-10-02 12:59:01.437 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409941.2859614, 01108902-768b-4bee-baff-11d5854e2f77 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:59:01 compute-0 nova_compute[257802]: 2025-10-02 12:59:01.437 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 01108902-768b-4bee-baff-11d5854e2f77] VM Started (Lifecycle Event)
Oct 02 12:59:01 compute-0 nova_compute[257802]: 2025-10-02 12:59:01.494 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:59:01 compute-0 nova_compute[257802]: 2025-10-02 12:59:01.497 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:59:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e390 do_prune osdmap full prune enabled
Oct 02 12:59:01 compute-0 ceph-mon[73607]: pgmap v3001: 305 pgs: 305 active+clean; 248 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 46 op/s
Oct 02 12:59:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/246538470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2434650181' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e391 e391: 3 total, 3 up, 3 in
Oct 02 12:59:01 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e391: 3 total, 3 up, 3 in
Oct 02 12:59:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:02.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3003: 305 pgs: 305 active+clean; 248 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 54 op/s
Oct 02 12:59:02 compute-0 nova_compute[257802]: 2025-10-02 12:59:02.697 2 DEBUG nova.compute.manager [req-89d5e614-3839-4833-9527-657653c7b03d req-ecd642d8-f0bb-4a09-a7b6-7824b507a1b1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Received event network-vif-plugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:02 compute-0 nova_compute[257802]: 2025-10-02 12:59:02.697 2 DEBUG oslo_concurrency.lockutils [req-89d5e614-3839-4833-9527-657653c7b03d req-ecd642d8-f0bb-4a09-a7b6-7824b507a1b1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "01108902-768b-4bee-baff-11d5854e2f77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:02 compute-0 nova_compute[257802]: 2025-10-02 12:59:02.698 2 DEBUG oslo_concurrency.lockutils [req-89d5e614-3839-4833-9527-657653c7b03d req-ecd642d8-f0bb-4a09-a7b6-7824b507a1b1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:02 compute-0 nova_compute[257802]: 2025-10-02 12:59:02.698 2 DEBUG oslo_concurrency.lockutils [req-89d5e614-3839-4833-9527-657653c7b03d req-ecd642d8-f0bb-4a09-a7b6-7824b507a1b1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:02 compute-0 nova_compute[257802]: 2025-10-02 12:59:02.698 2 DEBUG nova.compute.manager [req-89d5e614-3839-4833-9527-657653c7b03d req-ecd642d8-f0bb-4a09-a7b6-7824b507a1b1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] No waiting events found dispatching network-vif-plugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:59:02 compute-0 nova_compute[257802]: 2025-10-02 12:59:02.699 2 WARNING nova.compute.manager [req-89d5e614-3839-4833-9527-657653c7b03d req-ecd642d8-f0bb-4a09-a7b6-7824b507a1b1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Received unexpected event network-vif-plugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 for instance with vm_state active and task_state None.
Oct 02 12:59:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e391 do_prune osdmap full prune enabled
Oct 02 12:59:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e392 e392: 3 total, 3 up, 3 in
Oct 02 12:59:02 compute-0 ceph-mon[73607]: osdmap e391: 3 total, 3 up, 3 in
Oct 02 12:59:02 compute-0 ceph-mon[73607]: pgmap v3003: 305 pgs: 305 active+clean; 248 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 54 op/s
Oct 02 12:59:02 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e392: 3 total, 3 up, 3 in
Oct 02 12:59:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:03.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e392 do_prune osdmap full prune enabled
Oct 02 12:59:03 compute-0 ceph-mon[73607]: osdmap e392: 3 total, 3 up, 3 in
Oct 02 12:59:03 compute-0 podman[383718]: 2025-10-02 12:59:03.953612253 +0000 UTC m=+0.088853015 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:59:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e393 e393: 3 total, 3 up, 3 in
Oct 02 12:59:03 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e393: 3 total, 3 up, 3 in
Oct 02 12:59:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:04.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3006: 305 pgs: 305 active+clean; 262 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.7 MiB/s wr, 136 op/s
Oct 02 12:59:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:04 compute-0 nova_compute[257802]: 2025-10-02 12:59:04.760 2 DEBUG nova.compute.manager [req-fa8690b2-6a22-42dc-a2f1-03a3bff779e6 req-3f2ae76a-88a6-4c8e-95a1-9d7ad062497f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Received event network-vif-plugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:04 compute-0 nova_compute[257802]: 2025-10-02 12:59:04.760 2 DEBUG oslo_concurrency.lockutils [req-fa8690b2-6a22-42dc-a2f1-03a3bff779e6 req-3f2ae76a-88a6-4c8e-95a1-9d7ad062497f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "01108902-768b-4bee-baff-11d5854e2f77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:04 compute-0 nova_compute[257802]: 2025-10-02 12:59:04.760 2 DEBUG oslo_concurrency.lockutils [req-fa8690b2-6a22-42dc-a2f1-03a3bff779e6 req-3f2ae76a-88a6-4c8e-95a1-9d7ad062497f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:04 compute-0 nova_compute[257802]: 2025-10-02 12:59:04.760 2 DEBUG oslo_concurrency.lockutils [req-fa8690b2-6a22-42dc-a2f1-03a3bff779e6 req-3f2ae76a-88a6-4c8e-95a1-9d7ad062497f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:04 compute-0 nova_compute[257802]: 2025-10-02 12:59:04.761 2 DEBUG nova.compute.manager [req-fa8690b2-6a22-42dc-a2f1-03a3bff779e6 req-3f2ae76a-88a6-4c8e-95a1-9d7ad062497f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] No waiting events found dispatching network-vif-plugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:59:04 compute-0 nova_compute[257802]: 2025-10-02 12:59:04.761 2 WARNING nova.compute.manager [req-fa8690b2-6a22-42dc-a2f1-03a3bff779e6 req-3f2ae76a-88a6-4c8e-95a1-9d7ad062497f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Received unexpected event network-vif-plugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 for instance with vm_state active and task_state None.
Oct 02 12:59:04 compute-0 nova_compute[257802]: 2025-10-02 12:59:04.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e393 do_prune osdmap full prune enabled
Oct 02 12:59:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:05.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e394 e394: 3 total, 3 up, 3 in
Oct 02 12:59:05 compute-0 ceph-mon[73607]: osdmap e393: 3 total, 3 up, 3 in
Oct 02 12:59:05 compute-0 ceph-mon[73607]: pgmap v3006: 305 pgs: 305 active+clean; 262 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.7 MiB/s wr, 136 op/s
Oct 02 12:59:05 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e394: 3 total, 3 up, 3 in
Oct 02 12:59:05 compute-0 nova_compute[257802]: 2025-10-02 12:59:05.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:06 compute-0 ceph-mon[73607]: osdmap e394: 3 total, 3 up, 3 in
Oct 02 12:59:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:06.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3008: 305 pgs: 305 active+clean; 333 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.9 MiB/s rd, 8.6 MiB/s wr, 322 op/s
Oct 02 12:59:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:59:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:07.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:59:07 compute-0 ceph-mon[73607]: pgmap v3008: 305 pgs: 305 active+clean; 333 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.9 MiB/s rd, 8.6 MiB/s wr, 322 op/s
Oct 02 12:59:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2653659783' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:08.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3003324144' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/757562566' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3009: 305 pgs: 305 active+clean; 333 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.4 MiB/s rd, 6.5 MiB/s wr, 243 op/s
Oct 02 12:59:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:09.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e394 do_prune osdmap full prune enabled
Oct 02 12:59:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e395 e395: 3 total, 3 up, 3 in
Oct 02 12:59:09 compute-0 ceph-mon[73607]: pgmap v3009: 305 pgs: 305 active+clean; 333 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.4 MiB/s rd, 6.5 MiB/s wr, 243 op/s
Oct 02 12:59:09 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e395: 3 total, 3 up, 3 in
Oct 02 12:59:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e395 do_prune osdmap full prune enabled
Oct 02 12:59:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e396 e396: 3 total, 3 up, 3 in
Oct 02 12:59:09 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e396: 3 total, 3 up, 3 in
Oct 02 12:59:09 compute-0 nova_compute[257802]: 2025-10-02 12:59:09.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:10 compute-0 nova_compute[257802]: 2025-10-02 12:59:10.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:10.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3012: 305 pgs: 305 active+clean; 387 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.7 MiB/s rd, 9.5 MiB/s wr, 260 op/s
Oct 02 12:59:10 compute-0 ceph-mon[73607]: osdmap e395: 3 total, 3 up, 3 in
Oct 02 12:59:10 compute-0 ceph-mon[73607]: osdmap e396: 3 total, 3 up, 3 in
Oct 02 12:59:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:11.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:11 compute-0 ceph-mon[73607]: pgmap v3012: 305 pgs: 305 active+clean; 387 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.7 MiB/s rd, 9.5 MiB/s wr, 260 op/s
Oct 02 12:59:11 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:11.729 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=70, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=69) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:59:11 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:11.730 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:59:11 compute-0 nova_compute[257802]: 2025-10-02 12:59:11.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:12.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3013: 305 pgs: 305 active+clean; 387 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.0 MiB/s rd, 5.2 MiB/s wr, 140 op/s
Oct 02 12:59:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:59:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:59:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:59:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:59:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:59:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:59:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:13.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:13 compute-0 ovn_controller[148183]: 2025-10-02T12:59:13Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:bc:3b 10.100.0.9
Oct 02 12:59:13 compute-0 ceph-mon[73607]: pgmap v3013: 305 pgs: 305 active+clean; 387 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.0 MiB/s rd, 5.2 MiB/s wr, 140 op/s
Oct 02 12:59:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:13.732 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '70'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:14.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3014: 305 pgs: 305 active+clean; 387 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.1 MiB/s wr, 115 op/s
Oct 02 12:59:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3794624728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:15 compute-0 nova_compute[257802]: 2025-10-02 12:59:15.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:15.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:15 compute-0 nova_compute[257802]: 2025-10-02 12:59:15.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:15 compute-0 ceph-mon[73607]: pgmap v3014: 305 pgs: 305 active+clean; 387 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.1 MiB/s wr, 115 op/s
Oct 02 12:59:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2952988850' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:59:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:16.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:59:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3015: 305 pgs: 305 active+clean; 387 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.3 MiB/s rd, 3.1 MiB/s wr, 235 op/s
Oct 02 12:59:17 compute-0 ceph-mon[73607]: pgmap v3015: 305 pgs: 305 active+clean; 387 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.3 MiB/s rd, 3.1 MiB/s wr, 235 op/s
Oct 02 12:59:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:59:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:17.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:59:18 compute-0 nova_compute[257802]: 2025-10-02 12:59:18.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:59:18 compute-0 nova_compute[257802]: 2025-10-02 12:59:18.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:59:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:18.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3016: 305 pgs: 305 active+clean; 387 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.2 MiB/s rd, 1.7 MiB/s wr, 183 op/s
Oct 02 12:59:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:59:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:19.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:59:19 compute-0 ceph-mon[73607]: pgmap v3016: 305 pgs: 305 active+clean; 387 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.2 MiB/s rd, 1.7 MiB/s wr, 183 op/s
Oct 02 12:59:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:19 compute-0 sudo[383754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:19 compute-0 sudo[383754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:19 compute-0 sudo[383754]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:19 compute-0 sudo[383779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:19 compute-0 sudo[383779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:19 compute-0 sudo[383779]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:20 compute-0 nova_compute[257802]: 2025-10-02 12:59:20.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:20 compute-0 nova_compute[257802]: 2025-10-02 12:59:20.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:20.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3017: 305 pgs: 305 active+clean; 341 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.4 MiB/s wr, 180 op/s
Oct 02 12:59:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:21.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:21 compute-0 ceph-mon[73607]: pgmap v3017: 305 pgs: 305 active+clean; 341 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.4 MiB/s wr, 180 op/s
Oct 02 12:59:22 compute-0 nova_compute[257802]: 2025-10-02 12:59:22.165 2 DEBUG oslo_concurrency.lockutils [None req-95695c2e-afcb-4bca-a3a0-3d4c46f4b7a9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "01108902-768b-4bee-baff-11d5854e2f77" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:22 compute-0 nova_compute[257802]: 2025-10-02 12:59:22.165 2 DEBUG oslo_concurrency.lockutils [None req-95695c2e-afcb-4bca-a3a0-3d4c46f4b7a9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:22 compute-0 nova_compute[257802]: 2025-10-02 12:59:22.165 2 DEBUG oslo_concurrency.lockutils [None req-95695c2e-afcb-4bca-a3a0-3d4c46f4b7a9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "01108902-768b-4bee-baff-11d5854e2f77-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:22 compute-0 nova_compute[257802]: 2025-10-02 12:59:22.166 2 DEBUG oslo_concurrency.lockutils [None req-95695c2e-afcb-4bca-a3a0-3d4c46f4b7a9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:22 compute-0 nova_compute[257802]: 2025-10-02 12:59:22.166 2 DEBUG oslo_concurrency.lockutils [None req-95695c2e-afcb-4bca-a3a0-3d4c46f4b7a9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:22 compute-0 nova_compute[257802]: 2025-10-02 12:59:22.167 2 INFO nova.compute.manager [None req-95695c2e-afcb-4bca-a3a0-3d4c46f4b7a9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Terminating instance
Oct 02 12:59:22 compute-0 nova_compute[257802]: 2025-10-02 12:59:22.167 2 DEBUG nova.compute.manager [None req-95695c2e-afcb-4bca-a3a0-3d4c46f4b7a9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:59:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:59:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:22.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:59:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3018: 305 pgs: 305 active+clean; 341 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 21 KiB/s wr, 143 op/s
Oct 02 12:59:22 compute-0 kernel: tap612e6054-5c (unregistering): left promiscuous mode
Oct 02 12:59:22 compute-0 NetworkManager[44987]: <info>  [1759409962.5175] device (tap612e6054-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:59:22 compute-0 ovn_controller[148183]: 2025-10-02T12:59:22Z|00899|binding|INFO|Releasing lport 612e6054-5ce1-486c-aa51-2e5d47567ef3 from this chassis (sb_readonly=0)
Oct 02 12:59:22 compute-0 nova_compute[257802]: 2025-10-02 12:59:22.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:22 compute-0 ovn_controller[148183]: 2025-10-02T12:59:22Z|00900|binding|INFO|Setting lport 612e6054-5ce1-486c-aa51-2e5d47567ef3 down in Southbound
Oct 02 12:59:22 compute-0 ovn_controller[148183]: 2025-10-02T12:59:22Z|00901|binding|INFO|Removing iface tap612e6054-5c ovn-installed in OVS
Oct 02 12:59:22 compute-0 nova_compute[257802]: 2025-10-02 12:59:22.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:22.570 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:bc:3b 10.100.0.9'], port_security=['fa:16:3e:3d:bc:3b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '01108902-768b-4bee-baff-11d5854e2f77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7168b5b1300495d90592b195824729a', 'neutron:revision_number': '8', 'neutron:security_group_ids': '0a80d211-09c4-4e45-80f9-f1b2f1a7f90e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.186', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb99a427-f5c8-46c4-b56f-12cf288447a9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=612e6054-5ce1-486c-aa51-2e5d47567ef3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:59:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:22.571 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 612e6054-5ce1-486c-aa51-2e5d47567ef3 in datapath ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f unbound from our chassis
Oct 02 12:59:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:22.572 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:59:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:22.573 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a3de3fc1-f57e-4973-9421-86d3e06859fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:22.574 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f namespace which is not needed anymore
Oct 02 12:59:22 compute-0 nova_compute[257802]: 2025-10-02 12:59:22.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:22 compute-0 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000bf.scope: Deactivated successfully.
Oct 02 12:59:22 compute-0 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000bf.scope: Consumed 13.843s CPU time.
Oct 02 12:59:22 compute-0 systemd-machined[211836]: Machine qemu-96-instance-000000bf terminated.
Oct 02 12:59:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3109007842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:22 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[383702]: [NOTICE]   (383706) : haproxy version is 2.8.14-c23fe91
Oct 02 12:59:22 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[383702]: [NOTICE]   (383706) : path to executable is /usr/sbin/haproxy
Oct 02 12:59:22 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[383702]: [WARNING]  (383706) : Exiting Master process...
Oct 02 12:59:22 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[383702]: [WARNING]  (383706) : Exiting Master process...
Oct 02 12:59:22 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[383702]: [ALERT]    (383706) : Current worker (383708) exited with code 143 (Terminated)
Oct 02 12:59:22 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[383702]: [WARNING]  (383706) : All workers exited. Exiting... (0)
Oct 02 12:59:22 compute-0 systemd[1]: libpod-9bd015da0ccf1e53eb204857de04e213179e898f944d3d179e376aae17bceeac.scope: Deactivated successfully.
Oct 02 12:59:22 compute-0 podman[383830]: 2025-10-02 12:59:22.770378916 +0000 UTC m=+0.110692821 container died 9bd015da0ccf1e53eb204857de04e213179e898f944d3d179e376aae17bceeac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:59:22 compute-0 nova_compute[257802]: 2025-10-02 12:59:22.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:22 compute-0 nova_compute[257802]: 2025-10-02 12:59:22.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:22 compute-0 nova_compute[257802]: 2025-10-02 12:59:22.804 2 INFO nova.virt.libvirt.driver [-] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Instance destroyed successfully.
Oct 02 12:59:22 compute-0 nova_compute[257802]: 2025-10-02 12:59:22.805 2 DEBUG nova.objects.instance [None req-95695c2e-afcb-4bca-a3a0-3d4c46f4b7a9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'resources' on Instance uuid 01108902-768b-4bee-baff-11d5854e2f77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:59:22 compute-0 nova_compute[257802]: 2025-10-02 12:59:22.833 2 DEBUG nova.virt.libvirt.vif [None req-95695c2e-afcb-4bca-a3a0-3d4c46f4b7a9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:57:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-1568055795',display_name='tempest-AttachVolumeTestJSON-server-1568055795',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-1568055795',id=191,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCbS2mKT+0z/yQuxuzfwCwsXKWxC0UzeLnKuZT+sZHjADBetDcCrAmsjHh5DZcFc1laecrJqG3Gw7KVMmBAo01ad6Z646e7xI9MDC+TYltwo6ghxZsSIWSKPTUL61VunAQ==',key_name='tempest-keypair-1834836644',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:57:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e7168b5b1300495d90592b195824729a',ramdisk_id='',reservation_id='r-ikr3sr2k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-398185718',owner_user_name='tempest-AttachVolumeTestJSON-398185718-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:59:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3299a1aed5af4843a91417a3f181c172',uuid=01108902-768b-4bee-baff-11d5854e2f77,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "address": "fa:16:3e:3d:bc:3b", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612e6054-5c", "ovs_interfaceid": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:59:22 compute-0 nova_compute[257802]: 2025-10-02 12:59:22.833 2 DEBUG nova.network.os_vif_util [None req-95695c2e-afcb-4bca-a3a0-3d4c46f4b7a9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Converting VIF {"id": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "address": "fa:16:3e:3d:bc:3b", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612e6054-5c", "ovs_interfaceid": "612e6054-5ce1-486c-aa51-2e5d47567ef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:59:22 compute-0 nova_compute[257802]: 2025-10-02 12:59:22.834 2 DEBUG nova.network.os_vif_util [None req-95695c2e-afcb-4bca-a3a0-3d4c46f4b7a9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:bc:3b,bridge_name='br-int',has_traffic_filtering=True,id=612e6054-5ce1-486c-aa51-2e5d47567ef3,network=Network(ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap612e6054-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:59:22 compute-0 nova_compute[257802]: 2025-10-02 12:59:22.834 2 DEBUG os_vif [None req-95695c2e-afcb-4bca-a3a0-3d4c46f4b7a9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:bc:3b,bridge_name='br-int',has_traffic_filtering=True,id=612e6054-5ce1-486c-aa51-2e5d47567ef3,network=Network(ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap612e6054-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:59:22 compute-0 nova_compute[257802]: 2025-10-02 12:59:22.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:22 compute-0 nova_compute[257802]: 2025-10-02 12:59:22.836 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap612e6054-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:22 compute-0 nova_compute[257802]: 2025-10-02 12:59:22.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:22 compute-0 nova_compute[257802]: 2025-10-02 12:59:22.841 2 INFO os_vif [None req-95695c2e-afcb-4bca-a3a0-3d4c46f4b7a9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:bc:3b,bridge_name='br-int',has_traffic_filtering=True,id=612e6054-5ce1-486c-aa51-2e5d47567ef3,network=Network(ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap612e6054-5c')
Oct 02 12:59:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9bd015da0ccf1e53eb204857de04e213179e898f944d3d179e376aae17bceeac-userdata-shm.mount: Deactivated successfully.
Oct 02 12:59:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-0276cef2100b9194da36d0a16f97a37ad5a035d32e599bc5d1756035ac74efb9-merged.mount: Deactivated successfully.
Oct 02 12:59:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:59:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:23.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:59:23 compute-0 podman[383830]: 2025-10-02 12:59:23.085391244 +0000 UTC m=+0.425705149 container cleanup 9bd015da0ccf1e53eb204857de04e213179e898f944d3d179e376aae17bceeac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:59:23 compute-0 systemd[1]: libpod-conmon-9bd015da0ccf1e53eb204857de04e213179e898f944d3d179e376aae17bceeac.scope: Deactivated successfully.
Oct 02 12:59:23 compute-0 nova_compute[257802]: 2025-10-02 12:59:23.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:59:23 compute-0 nova_compute[257802]: 2025-10-02 12:59:23.121 2 DEBUG nova.compute.manager [req-e4bd63d5-8618-4529-a00f-c3c2a474f2f2 req-b2fd7bb5-0434-4c2a-b9dd-f8f70efd0be3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Received event network-vif-unplugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:23 compute-0 nova_compute[257802]: 2025-10-02 12:59:23.121 2 DEBUG oslo_concurrency.lockutils [req-e4bd63d5-8618-4529-a00f-c3c2a474f2f2 req-b2fd7bb5-0434-4c2a-b9dd-f8f70efd0be3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "01108902-768b-4bee-baff-11d5854e2f77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:23 compute-0 nova_compute[257802]: 2025-10-02 12:59:23.122 2 DEBUG oslo_concurrency.lockutils [req-e4bd63d5-8618-4529-a00f-c3c2a474f2f2 req-b2fd7bb5-0434-4c2a-b9dd-f8f70efd0be3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:23 compute-0 nova_compute[257802]: 2025-10-02 12:59:23.122 2 DEBUG oslo_concurrency.lockutils [req-e4bd63d5-8618-4529-a00f-c3c2a474f2f2 req-b2fd7bb5-0434-4c2a-b9dd-f8f70efd0be3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:23 compute-0 nova_compute[257802]: 2025-10-02 12:59:23.123 2 DEBUG nova.compute.manager [req-e4bd63d5-8618-4529-a00f-c3c2a474f2f2 req-b2fd7bb5-0434-4c2a-b9dd-f8f70efd0be3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] No waiting events found dispatching network-vif-unplugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:59:23 compute-0 nova_compute[257802]: 2025-10-02 12:59:23.123 2 DEBUG nova.compute.manager [req-e4bd63d5-8618-4529-a00f-c3c2a474f2f2 req-b2fd7bb5-0434-4c2a-b9dd-f8f70efd0be3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Received event network-vif-unplugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:59:23 compute-0 podman[383890]: 2025-10-02 12:59:23.279770039 +0000 UTC m=+0.172235852 container remove 9bd015da0ccf1e53eb204857de04e213179e898f944d3d179e376aae17bceeac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:59:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:23.286 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f3b1f385-5275-438e-90de-9d909e0e1ae0]: (4, ('Thu Oct  2 12:59:22 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f (9bd015da0ccf1e53eb204857de04e213179e898f944d3d179e376aae17bceeac)\n9bd015da0ccf1e53eb204857de04e213179e898f944d3d179e376aae17bceeac\nThu Oct  2 12:59:23 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f (9bd015da0ccf1e53eb204857de04e213179e898f944d3d179e376aae17bceeac)\n9bd015da0ccf1e53eb204857de04e213179e898f944d3d179e376aae17bceeac\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:23.288 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ab5ef21f-a4b2-48b2-817e-4aef9d7847d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:23.288 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff8c8423-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:23 compute-0 nova_compute[257802]: 2025-10-02 12:59:23.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:23 compute-0 kernel: tapff8c8423-f0: left promiscuous mode
Oct 02 12:59:23 compute-0 nova_compute[257802]: 2025-10-02 12:59:23.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:23.306 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[aa88d19d-7456-48da-9b19-3e71d13b43c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:23.333 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[00f8c232-b35b-4dfa-a1f7-20676305431d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:23.334 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[63abee19-1df8-45d8-be16-a3349012a71b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:23.350 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[54982e57-e382-45fe-a4d5-a17478c7db67]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 800809, 'reachable_time': 30489, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383904, 'error': None, 'target': 'ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:23 compute-0 systemd[1]: run-netns-ovnmeta\x2dff8c8423\x2df2c6\x2d4e3f\x2d8fd3\x2d2fb6b6292a3f.mount: Deactivated successfully.
Oct 02 12:59:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:23.356 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:59:23 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:23.356 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[70cf032c-75c8-4e62-af9f-8aff1fd1be07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:59:23 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2200611487' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:23 compute-0 ceph-mon[73607]: pgmap v3018: 305 pgs: 305 active+clean; 341 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 21 KiB/s wr, 143 op/s
Oct 02 12:59:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/166007406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2200611487' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:24 compute-0 nova_compute[257802]: 2025-10-02 12:59:24.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:59:24 compute-0 nova_compute[257802]: 2025-10-02 12:59:24.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:59:24 compute-0 nova_compute[257802]: 2025-10-02 12:59:24.119 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:59:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:24.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3019: 305 pgs: 305 active+clean; 324 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 21 KiB/s wr, 144 op/s
Oct 02 12:59:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:24 compute-0 nova_compute[257802]: 2025-10-02 12:59:24.743 2 INFO nova.virt.libvirt.driver [None req-95695c2e-afcb-4bca-a3a0-3d4c46f4b7a9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Deleting instance files /var/lib/nova/instances/01108902-768b-4bee-baff-11d5854e2f77_del
Oct 02 12:59:24 compute-0 nova_compute[257802]: 2025-10-02 12:59:24.744 2 INFO nova.virt.libvirt.driver [None req-95695c2e-afcb-4bca-a3a0-3d4c46f4b7a9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Deletion of /var/lib/nova/instances/01108902-768b-4bee-baff-11d5854e2f77_del complete
Oct 02 12:59:24 compute-0 nova_compute[257802]: 2025-10-02 12:59:24.919 2 INFO nova.compute.manager [None req-95695c2e-afcb-4bca-a3a0-3d4c46f4b7a9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Took 2.75 seconds to destroy the instance on the hypervisor.
Oct 02 12:59:24 compute-0 nova_compute[257802]: 2025-10-02 12:59:24.919 2 DEBUG oslo.service.loopingcall [None req-95695c2e-afcb-4bca-a3a0-3d4c46f4b7a9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:59:24 compute-0 nova_compute[257802]: 2025-10-02 12:59:24.920 2 DEBUG nova.compute.manager [-] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:59:24 compute-0 nova_compute[257802]: 2025-10-02 12:59:24.920 2 DEBUG nova.network.neutron [-] [instance: 01108902-768b-4bee-baff-11d5854e2f77] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:59:24 compute-0 ceph-mon[73607]: pgmap v3019: 305 pgs: 305 active+clean; 324 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 21 KiB/s wr, 144 op/s
Oct 02 12:59:25 compute-0 nova_compute[257802]: 2025-10-02 12:59:25.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:59:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:25.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:59:25 compute-0 nova_compute[257802]: 2025-10-02 12:59:25.119 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:59:26 compute-0 nova_compute[257802]: 2025-10-02 12:59:26.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:59:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:26.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3020: 305 pgs: 305 active+clean; 259 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 15 KiB/s wr, 135 op/s
Oct 02 12:59:26 compute-0 nova_compute[257802]: 2025-10-02 12:59:26.571 2 DEBUG nova.compute.manager [req-fb661569-beee-4c1a-a8ad-e6cdfe56b83e req-c20543d1-b588-43a8-b2b0-b921a6c610a9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Received event network-vif-plugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:26 compute-0 nova_compute[257802]: 2025-10-02 12:59:26.572 2 DEBUG oslo_concurrency.lockutils [req-fb661569-beee-4c1a-a8ad-e6cdfe56b83e req-c20543d1-b588-43a8-b2b0-b921a6c610a9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "01108902-768b-4bee-baff-11d5854e2f77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:26 compute-0 nova_compute[257802]: 2025-10-02 12:59:26.572 2 DEBUG oslo_concurrency.lockutils [req-fb661569-beee-4c1a-a8ad-e6cdfe56b83e req-c20543d1-b588-43a8-b2b0-b921a6c610a9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:26 compute-0 nova_compute[257802]: 2025-10-02 12:59:26.572 2 DEBUG oslo_concurrency.lockutils [req-fb661569-beee-4c1a-a8ad-e6cdfe56b83e req-c20543d1-b588-43a8-b2b0-b921a6c610a9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:26 compute-0 nova_compute[257802]: 2025-10-02 12:59:26.572 2 DEBUG nova.compute.manager [req-fb661569-beee-4c1a-a8ad-e6cdfe56b83e req-c20543d1-b588-43a8-b2b0-b921a6c610a9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] No waiting events found dispatching network-vif-plugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:59:26 compute-0 nova_compute[257802]: 2025-10-02 12:59:26.573 2 WARNING nova.compute.manager [req-fb661569-beee-4c1a-a8ad-e6cdfe56b83e req-c20543d1-b588-43a8-b2b0-b921a6c610a9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Received unexpected event network-vif-plugged-612e6054-5ce1-486c-aa51-2e5d47567ef3 for instance with vm_state active and task_state deleting.
Oct 02 12:59:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:26.982 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:26.982 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:26.982 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:59:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:27.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:59:27 compute-0 ceph-mon[73607]: pgmap v3020: 305 pgs: 305 active+clean; 259 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 15 KiB/s wr, 135 op/s
Oct 02 12:59:27 compute-0 nova_compute[257802]: 2025-10-02 12:59:27.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:27 compute-0 podman[383908]: 2025-10-02 12:59:27.925794167 +0000 UTC m=+0.066702740 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:59:27 compute-0 podman[383910]: 2025-10-02 12:59:27.93488029 +0000 UTC m=+0.068851943 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 12:59:27 compute-0 podman[383909]: 2025-10-02 12:59:27.96784605 +0000 UTC m=+0.103469173 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:59:28 compute-0 nova_compute[257802]: 2025-10-02 12:59:28.090 2 DEBUG nova.network.neutron [-] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:59:28 compute-0 nova_compute[257802]: 2025-10-02 12:59:28.123 2 INFO nova.compute.manager [-] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Took 3.20 seconds to deallocate network for instance.
Oct 02 12:59:28 compute-0 nova_compute[257802]: 2025-10-02 12:59:28.217 2 DEBUG oslo_concurrency.lockutils [None req-95695c2e-afcb-4bca-a3a0-3d4c46f4b7a9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:28 compute-0 nova_compute[257802]: 2025-10-02 12:59:28.218 2 DEBUG oslo_concurrency.lockutils [None req-95695c2e-afcb-4bca-a3a0-3d4c46f4b7a9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:28 compute-0 nova_compute[257802]: 2025-10-02 12:59:28.223 2 DEBUG nova.compute.manager [req-739721dc-0228-4680-b0ac-3399a81340a9 req-453007e3-a6ea-4c63-9ef5-27ebe412fe63 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Received event network-vif-deleted-612e6054-5ce1-486c-aa51-2e5d47567ef3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:28 compute-0 nova_compute[257802]: 2025-10-02 12:59:28.276 2 DEBUG oslo_concurrency.processutils [None req-95695c2e-afcb-4bca-a3a0-3d4c46f4b7a9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:59:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:28.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3021: 305 pgs: 305 active+clean; 259 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 40 KiB/s rd, 13 KiB/s wr, 55 op/s
Oct 02 12:59:28 compute-0 sudo[383987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:28 compute-0 sudo[383987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:28 compute-0 sudo[383987]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:28 compute-0 sudo[384012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:59:28 compute-0 sudo[384012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:28 compute-0 sudo[384012]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:28 compute-0 sudo[384037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:28 compute-0 sudo[384037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:28 compute-0 sudo[384037]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:59:28 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1378614124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:28 compute-0 nova_compute[257802]: 2025-10-02 12:59:28.745 2 DEBUG oslo_concurrency.processutils [None req-95695c2e-afcb-4bca-a3a0-3d4c46f4b7a9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:59:28 compute-0 nova_compute[257802]: 2025-10-02 12:59:28.751 2 DEBUG nova.compute.provider_tree [None req-95695c2e-afcb-4bca-a3a0-3d4c46f4b7a9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:59:28 compute-0 sudo[384063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:59:28 compute-0 sudo[384063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:28 compute-0 nova_compute[257802]: 2025-10-02 12:59:28.772 2 DEBUG nova.scheduler.client.report [None req-95695c2e-afcb-4bca-a3a0-3d4c46f4b7a9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:59:28 compute-0 nova_compute[257802]: 2025-10-02 12:59:28.815 2 DEBUG oslo_concurrency.lockutils [None req-95695c2e-afcb-4bca-a3a0-3d4c46f4b7a9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:28 compute-0 nova_compute[257802]: 2025-10-02 12:59:28.850 2 INFO nova.scheduler.client.report [None req-95695c2e-afcb-4bca-a3a0-3d4c46f4b7a9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Deleted allocations for instance 01108902-768b-4bee-baff-11d5854e2f77
Oct 02 12:59:29 compute-0 nova_compute[257802]: 2025-10-02 12:59:29.021 2 DEBUG oslo_concurrency.lockutils [None req-95695c2e-afcb-4bca-a3a0-3d4c46f4b7a9 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "01108902-768b-4bee-baff-11d5854e2f77" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:59:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:29.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:59:29 compute-0 nova_compute[257802]: 2025-10-02 12:59:29.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:59:29 compute-0 sudo[384063]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:59:29 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:59:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:59:29 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:59:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:59:29 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:59:29 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 35017a74-2ed1-46c2-8d73-a2139fe69821 does not exist
Oct 02 12:59:29 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 2cb882b9-aa4f-4543-a0ff-48db9fa85d61 does not exist
Oct 02 12:59:29 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 5c9cce1f-ea2f-4103-b89f-69084b69ce2e does not exist
Oct 02 12:59:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:59:29 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:59:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:59:29 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:59:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:59:29 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:59:29 compute-0 sudo[384121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:29 compute-0 sudo[384121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:29 compute-0 sudo[384121]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:29 compute-0 sudo[384146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:59:29 compute-0 sudo[384146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:29 compute-0 sudo[384146]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:29 compute-0 sudo[384171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:29 compute-0 sudo[384171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:29 compute-0 sudo[384171]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:29 compute-0 ceph-mon[73607]: pgmap v3021: 305 pgs: 305 active+clean; 259 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 40 KiB/s rd, 13 KiB/s wr, 55 op/s
Oct 02 12:59:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1378614124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3167330298' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:59:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:59:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:59:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:59:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:59:29 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:59:29 compute-0 sudo[384196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:59:29 compute-0 sudo[384196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:29 compute-0 podman[384260]: 2025-10-02 12:59:29.907800534 +0000 UTC m=+0.053797922 container create 5d6532bd29fb6df8d91a253b8a0f1cae5551bec630fd49699e177574e2c5fe13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:59:29 compute-0 systemd[1]: Started libpod-conmon-5d6532bd29fb6df8d91a253b8a0f1cae5551bec630fd49699e177574e2c5fe13.scope.
Oct 02 12:59:29 compute-0 podman[384260]: 2025-10-02 12:59:29.881499209 +0000 UTC m=+0.027496627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:59:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:59:30 compute-0 nova_compute[257802]: 2025-10-02 12:59:30.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:30 compute-0 podman[384260]: 2025-10-02 12:59:30.079195205 +0000 UTC m=+0.225192623 container init 5d6532bd29fb6df8d91a253b8a0f1cae5551bec630fd49699e177574e2c5fe13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 12:59:30 compute-0 podman[384260]: 2025-10-02 12:59:30.085740306 +0000 UTC m=+0.231737694 container start 5d6532bd29fb6df8d91a253b8a0f1cae5551bec630fd49699e177574e2c5fe13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_joliot, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:59:30 compute-0 thirsty_joliot[384276]: 167 167
Oct 02 12:59:30 compute-0 systemd[1]: libpod-5d6532bd29fb6df8d91a253b8a0f1cae5551bec630fd49699e177574e2c5fe13.scope: Deactivated successfully.
Oct 02 12:59:30 compute-0 podman[384260]: 2025-10-02 12:59:30.149472141 +0000 UTC m=+0.295469559 container attach 5d6532bd29fb6df8d91a253b8a0f1cae5551bec630fd49699e177574e2c5fe13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 12:59:30 compute-0 podman[384260]: 2025-10-02 12:59:30.15105584 +0000 UTC m=+0.297053228 container died 5d6532bd29fb6df8d91a253b8a0f1cae5551bec630fd49699e177574e2c5fe13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:59:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-13394068cefd45d270abd4b7242b10533d3fee64ba5d6f4cff1dca8baf8c14b7-merged.mount: Deactivated successfully.
Oct 02 12:59:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:30.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:30 compute-0 podman[384260]: 2025-10-02 12:59:30.375062002 +0000 UTC m=+0.521059400 container remove 5d6532bd29fb6df8d91a253b8a0f1cae5551bec630fd49699e177574e2c5fe13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:59:30 compute-0 systemd[1]: libpod-conmon-5d6532bd29fb6df8d91a253b8a0f1cae5551bec630fd49699e177574e2c5fe13.scope: Deactivated successfully.
Oct 02 12:59:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3022: 305 pgs: 305 active+clean; 259 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 26 KiB/s wr, 60 op/s
Oct 02 12:59:30 compute-0 podman[384301]: 2025-10-02 12:59:30.552234195 +0000 UTC m=+0.057368370 container create fa3f99517fb284d1068a8c9e021a36e3e3d5bcbed23625ebe2b96831f162134b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 12:59:30 compute-0 systemd[1]: Started libpod-conmon-fa3f99517fb284d1068a8c9e021a36e3e3d5bcbed23625ebe2b96831f162134b.scope.
Oct 02 12:59:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1514215622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:30 compute-0 podman[384301]: 2025-10-02 12:59:30.519805348 +0000 UTC m=+0.024939543 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:59:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9b3083b87e77d764294c8465782fa5f1e8b12f59272cfc26f29161a4eaa74a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9b3083b87e77d764294c8465782fa5f1e8b12f59272cfc26f29161a4eaa74a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9b3083b87e77d764294c8465782fa5f1e8b12f59272cfc26f29161a4eaa74a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9b3083b87e77d764294c8465782fa5f1e8b12f59272cfc26f29161a4eaa74a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9b3083b87e77d764294c8465782fa5f1e8b12f59272cfc26f29161a4eaa74a8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:59:30 compute-0 podman[384301]: 2025-10-02 12:59:30.652861457 +0000 UTC m=+0.157995662 container init fa3f99517fb284d1068a8c9e021a36e3e3d5bcbed23625ebe2b96831f162134b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 12:59:30 compute-0 podman[384301]: 2025-10-02 12:59:30.661726634 +0000 UTC m=+0.166860809 container start fa3f99517fb284d1068a8c9e021a36e3e3d5bcbed23625ebe2b96831f162134b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 12:59:30 compute-0 podman[384301]: 2025-10-02 12:59:30.670344046 +0000 UTC m=+0.175478231 container attach fa3f99517fb284d1068a8c9e021a36e3e3d5bcbed23625ebe2b96831f162134b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:59:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:59:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:31.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:59:31 compute-0 nova_compute[257802]: 2025-10-02 12:59:31.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:59:31 compute-0 trusting_mendel[384317]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:59:31 compute-0 trusting_mendel[384317]: --> relative data size: 1.0
Oct 02 12:59:31 compute-0 trusting_mendel[384317]: --> All data devices are unavailable
Oct 02 12:59:31 compute-0 systemd[1]: libpod-fa3f99517fb284d1068a8c9e021a36e3e3d5bcbed23625ebe2b96831f162134b.scope: Deactivated successfully.
Oct 02 12:59:31 compute-0 podman[384301]: 2025-10-02 12:59:31.485407838 +0000 UTC m=+0.990542043 container died fa3f99517fb284d1068a8c9e021a36e3e3d5bcbed23625ebe2b96831f162134b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 12:59:31 compute-0 ceph-mon[73607]: pgmap v3022: 305 pgs: 305 active+clean; 259 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 26 KiB/s wr, 60 op/s
Oct 02 12:59:31 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2742360118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9b3083b87e77d764294c8465782fa5f1e8b12f59272cfc26f29161a4eaa74a8-merged.mount: Deactivated successfully.
Oct 02 12:59:31 compute-0 podman[384301]: 2025-10-02 12:59:31.777529804 +0000 UTC m=+1.282663979 container remove fa3f99517fb284d1068a8c9e021a36e3e3d5bcbed23625ebe2b96831f162134b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:59:31 compute-0 sudo[384196]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:31 compute-0 systemd[1]: libpod-conmon-fa3f99517fb284d1068a8c9e021a36e3e3d5bcbed23625ebe2b96831f162134b.scope: Deactivated successfully.
Oct 02 12:59:31 compute-0 sudo[384346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:31 compute-0 sudo[384346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:31 compute-0 sudo[384346]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:31 compute-0 sudo[384371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:59:31 compute-0 sudo[384371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:31 compute-0 sudo[384371]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:31 compute-0 sudo[384396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:31 compute-0 sudo[384396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:31 compute-0 sudo[384396]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:32 compute-0 sudo[384421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 12:59:32 compute-0 sudo[384421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:32.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:32 compute-0 podman[384487]: 2025-10-02 12:59:32.376143598 +0000 UTC m=+0.053687059 container create 092a78c6705d6d0eb2c916581f8f25e7f1981bea9c96f1cb4bdf44f6d145bf93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:59:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3023: 305 pgs: 305 active+clean; 259 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 14 KiB/s wr, 33 op/s
Oct 02 12:59:32 compute-0 systemd[1]: Started libpod-conmon-092a78c6705d6d0eb2c916581f8f25e7f1981bea9c96f1cb4bdf44f6d145bf93.scope.
Oct 02 12:59:32 compute-0 podman[384487]: 2025-10-02 12:59:32.345435084 +0000 UTC m=+0.022978575 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:59:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:59:32 compute-0 podman[384487]: 2025-10-02 12:59:32.484137542 +0000 UTC m=+0.161681003 container init 092a78c6705d6d0eb2c916581f8f25e7f1981bea9c96f1cb4bdf44f6d145bf93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:59:32 compute-0 podman[384487]: 2025-10-02 12:59:32.493524532 +0000 UTC m=+0.171067993 container start 092a78c6705d6d0eb2c916581f8f25e7f1981bea9c96f1cb4bdf44f6d145bf93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_saha, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:59:32 compute-0 zealous_saha[384503]: 167 167
Oct 02 12:59:32 compute-0 podman[384487]: 2025-10-02 12:59:32.499431797 +0000 UTC m=+0.176975268 container attach 092a78c6705d6d0eb2c916581f8f25e7f1981bea9c96f1cb4bdf44f6d145bf93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_saha, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:59:32 compute-0 systemd[1]: libpod-092a78c6705d6d0eb2c916581f8f25e7f1981bea9c96f1cb4bdf44f6d145bf93.scope: Deactivated successfully.
Oct 02 12:59:32 compute-0 podman[384487]: 2025-10-02 12:59:32.503960099 +0000 UTC m=+0.181503560 container died 092a78c6705d6d0eb2c916581f8f25e7f1981bea9c96f1cb4bdf44f6d145bf93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_saha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 12:59:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e5efd9030c8b007b65978eb6c9f5dc5f7b3901abbc770c8ef04ff3be618b0f2-merged.mount: Deactivated successfully.
Oct 02 12:59:32 compute-0 podman[384487]: 2025-10-02 12:59:32.570940044 +0000 UTC m=+0.248483495 container remove 092a78c6705d6d0eb2c916581f8f25e7f1981bea9c96f1cb4bdf44f6d145bf93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_saha, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 12:59:32 compute-0 systemd[1]: libpod-conmon-092a78c6705d6d0eb2c916581f8f25e7f1981bea9c96f1cb4bdf44f6d145bf93.scope: Deactivated successfully.
Oct 02 12:59:32 compute-0 podman[384531]: 2025-10-02 12:59:32.760260875 +0000 UTC m=+0.061191895 container create 0e3fb23e73d344f5c7425bb306d16279ed35c4eb1661adf0202d4926db9158a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_payne, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 12:59:32 compute-0 systemd[1]: Started libpod-conmon-0e3fb23e73d344f5c7425bb306d16279ed35c4eb1661adf0202d4926db9158a3.scope.
Oct 02 12:59:32 compute-0 podman[384531]: 2025-10-02 12:59:32.723929572 +0000 UTC m=+0.024860612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:59:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:59:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ca70e0bc3883ba29ca4fb5a2bb520dacb353bf8e4a840ec22beddd33e20af87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:59:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ca70e0bc3883ba29ca4fb5a2bb520dacb353bf8e4a840ec22beddd33e20af87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:59:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ca70e0bc3883ba29ca4fb5a2bb520dacb353bf8e4a840ec22beddd33e20af87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:59:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ca70e0bc3883ba29ca4fb5a2bb520dacb353bf8e4a840ec22beddd33e20af87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:59:32 compute-0 nova_compute[257802]: 2025-10-02 12:59:32.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:32 compute-0 podman[384531]: 2025-10-02 12:59:32.863561502 +0000 UTC m=+0.164492542 container init 0e3fb23e73d344f5c7425bb306d16279ed35c4eb1661adf0202d4926db9158a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_payne, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:59:32 compute-0 podman[384531]: 2025-10-02 12:59:32.871852445 +0000 UTC m=+0.172783485 container start 0e3fb23e73d344f5c7425bb306d16279ed35c4eb1661adf0202d4926db9158a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_payne, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:59:32 compute-0 podman[384531]: 2025-10-02 12:59:32.879574875 +0000 UTC m=+0.180505935 container attach 0e3fb23e73d344f5c7425bb306d16279ed35c4eb1661adf0202d4926db9158a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:59:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:59:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:33.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:59:33 compute-0 priceless_payne[384549]: {
Oct 02 12:59:33 compute-0 priceless_payne[384549]:     "1": [
Oct 02 12:59:33 compute-0 priceless_payne[384549]:         {
Oct 02 12:59:33 compute-0 priceless_payne[384549]:             "devices": [
Oct 02 12:59:33 compute-0 priceless_payne[384549]:                 "/dev/loop3"
Oct 02 12:59:33 compute-0 priceless_payne[384549]:             ],
Oct 02 12:59:33 compute-0 priceless_payne[384549]:             "lv_name": "ceph_lv0",
Oct 02 12:59:33 compute-0 priceless_payne[384549]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:59:33 compute-0 priceless_payne[384549]:             "lv_size": "7511998464",
Oct 02 12:59:33 compute-0 priceless_payne[384549]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:59:33 compute-0 priceless_payne[384549]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:59:33 compute-0 priceless_payne[384549]:             "name": "ceph_lv0",
Oct 02 12:59:33 compute-0 priceless_payne[384549]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:59:33 compute-0 priceless_payne[384549]:             "tags": {
Oct 02 12:59:33 compute-0 priceless_payne[384549]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:59:33 compute-0 priceless_payne[384549]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 12:59:33 compute-0 priceless_payne[384549]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:59:33 compute-0 priceless_payne[384549]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:59:33 compute-0 priceless_payne[384549]:                 "ceph.cluster_name": "ceph",
Oct 02 12:59:33 compute-0 priceless_payne[384549]:                 "ceph.crush_device_class": "",
Oct 02 12:59:33 compute-0 priceless_payne[384549]:                 "ceph.encrypted": "0",
Oct 02 12:59:33 compute-0 priceless_payne[384549]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:59:33 compute-0 priceless_payne[384549]:                 "ceph.osd_id": "1",
Oct 02 12:59:33 compute-0 priceless_payne[384549]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:59:33 compute-0 priceless_payne[384549]:                 "ceph.type": "block",
Oct 02 12:59:33 compute-0 priceless_payne[384549]:                 "ceph.vdo": "0"
Oct 02 12:59:33 compute-0 priceless_payne[384549]:             },
Oct 02 12:59:33 compute-0 priceless_payne[384549]:             "type": "block",
Oct 02 12:59:33 compute-0 priceless_payne[384549]:             "vg_name": "ceph_vg0"
Oct 02 12:59:33 compute-0 priceless_payne[384549]:         }
Oct 02 12:59:33 compute-0 priceless_payne[384549]:     ]
Oct 02 12:59:33 compute-0 priceless_payne[384549]: }
Oct 02 12:59:33 compute-0 systemd[1]: libpod-0e3fb23e73d344f5c7425bb306d16279ed35c4eb1661adf0202d4926db9158a3.scope: Deactivated successfully.
Oct 02 12:59:33 compute-0 podman[384531]: 2025-10-02 12:59:33.644564887 +0000 UTC m=+0.945495907 container died 0e3fb23e73d344f5c7425bb306d16279ed35c4eb1661adf0202d4926db9158a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_payne, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:59:33 compute-0 ceph-mon[73607]: pgmap v3023: 305 pgs: 305 active+clean; 259 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 14 KiB/s wr, 33 op/s
Oct 02 12:59:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ca70e0bc3883ba29ca4fb5a2bb520dacb353bf8e4a840ec22beddd33e20af87-merged.mount: Deactivated successfully.
Oct 02 12:59:33 compute-0 podman[384531]: 2025-10-02 12:59:33.946125685 +0000 UTC m=+1.247056705 container remove 0e3fb23e73d344f5c7425bb306d16279ed35c4eb1661adf0202d4926db9158a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_payne, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:59:33 compute-0 systemd[1]: libpod-conmon-0e3fb23e73d344f5c7425bb306d16279ed35c4eb1661adf0202d4926db9158a3.scope: Deactivated successfully.
Oct 02 12:59:33 compute-0 sudo[384421]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:34 compute-0 sudo[384572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:34 compute-0 sudo[384572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:34 compute-0 sudo[384572]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:34 compute-0 nova_compute[257802]: 2025-10-02 12:59:34.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:59:34 compute-0 nova_compute[257802]: 2025-10-02 12:59:34.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:59:34 compute-0 nova_compute[257802]: 2025-10-02 12:59:34.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:59:34 compute-0 sudo[384603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:59:34 compute-0 sudo[384603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:34 compute-0 sudo[384603]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:34 compute-0 podman[384596]: 2025-10-02 12:59:34.164672164 +0000 UTC m=+0.093375845 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 12:59:34 compute-0 sudo[384644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:34 compute-0 sudo[384644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:34 compute-0 sudo[384644]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:34 compute-0 nova_compute[257802]: 2025-10-02 12:59:34.211 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:59:34 compute-0 sudo[384674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 12:59:34 compute-0 sudo[384674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:34.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3024: 305 pgs: 305 active+clean; 259 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 646 KiB/s rd, 14 KiB/s wr, 55 op/s
Oct 02 12:59:34 compute-0 podman[384742]: 2025-10-02 12:59:34.565378007 +0000 UTC m=+0.050721747 container create 40e8115db8bb4a34d5c29787847644bff824b0d5f23112a8409254eb56bcc5fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hellman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:59:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:34 compute-0 systemd[1]: Started libpod-conmon-40e8115db8bb4a34d5c29787847644bff824b0d5f23112a8409254eb56bcc5fa.scope.
Oct 02 12:59:34 compute-0 podman[384742]: 2025-10-02 12:59:34.534886398 +0000 UTC m=+0.020230168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:59:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:59:34 compute-0 podman[384742]: 2025-10-02 12:59:34.685442686 +0000 UTC m=+0.170786426 container init 40e8115db8bb4a34d5c29787847644bff824b0d5f23112a8409254eb56bcc5fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 12:59:34 compute-0 podman[384742]: 2025-10-02 12:59:34.692725655 +0000 UTC m=+0.178069395 container start 40e8115db8bb4a34d5c29787847644bff824b0d5f23112a8409254eb56bcc5fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:59:34 compute-0 loving_hellman[384758]: 167 167
Oct 02 12:59:34 compute-0 systemd[1]: libpod-40e8115db8bb4a34d5c29787847644bff824b0d5f23112a8409254eb56bcc5fa.scope: Deactivated successfully.
Oct 02 12:59:34 compute-0 podman[384742]: 2025-10-02 12:59:34.704945395 +0000 UTC m=+0.190289165 container attach 40e8115db8bb4a34d5c29787847644bff824b0d5f23112a8409254eb56bcc5fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hellman, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:59:34 compute-0 podman[384742]: 2025-10-02 12:59:34.705610841 +0000 UTC m=+0.190954581 container died 40e8115db8bb4a34d5c29787847644bff824b0d5f23112a8409254eb56bcc5fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hellman, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:59:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-4747c7b5fc4f33f4cd7530980ee2d49818af3187c5e146ee3d1e77f586d53f57-merged.mount: Deactivated successfully.
Oct 02 12:59:34 compute-0 podman[384742]: 2025-10-02 12:59:34.866424412 +0000 UTC m=+0.351768182 container remove 40e8115db8bb4a34d5c29787847644bff824b0d5f23112a8409254eb56bcc5fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:59:34 compute-0 systemd[1]: libpod-conmon-40e8115db8bb4a34d5c29787847644bff824b0d5f23112a8409254eb56bcc5fa.scope: Deactivated successfully.
Oct 02 12:59:35 compute-0 nova_compute[257802]: 2025-10-02 12:59:35.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:59:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:35.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:59:35 compute-0 podman[384783]: 2025-10-02 12:59:35.118273358 +0000 UTC m=+0.097364753 container create c5148b8207a02436d6724cf52b56af0dd095658779d5f304c4ecefabd8db167b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 12:59:35 compute-0 podman[384783]: 2025-10-02 12:59:35.055312562 +0000 UTC m=+0.034403977 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:59:35 compute-0 systemd[1]: Started libpod-conmon-c5148b8207a02436d6724cf52b56af0dd095658779d5f304c4ecefabd8db167b.scope.
Oct 02 12:59:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:59:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c64e16bf10ae49fe2e1bc6865245c50045b8417e366a5c94167d32ec8da64b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:59:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c64e16bf10ae49fe2e1bc6865245c50045b8417e366a5c94167d32ec8da64b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:59:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c64e16bf10ae49fe2e1bc6865245c50045b8417e366a5c94167d32ec8da64b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:59:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c64e16bf10ae49fe2e1bc6865245c50045b8417e366a5c94167d32ec8da64b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:59:35 compute-0 podman[384783]: 2025-10-02 12:59:35.253038219 +0000 UTC m=+0.232129644 container init c5148b8207a02436d6724cf52b56af0dd095658779d5f304c4ecefabd8db167b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bartik, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:59:35 compute-0 podman[384783]: 2025-10-02 12:59:35.261130388 +0000 UTC m=+0.240221783 container start c5148b8207a02436d6724cf52b56af0dd095658779d5f304c4ecefabd8db167b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bartik, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:59:35 compute-0 podman[384783]: 2025-10-02 12:59:35.266983371 +0000 UTC m=+0.246074786 container attach c5148b8207a02436d6724cf52b56af0dd095658779d5f304c4ecefabd8db167b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bartik, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 12:59:35 compute-0 ceph-mon[73607]: pgmap v3024: 305 pgs: 305 active+clean; 259 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 646 KiB/s rd, 14 KiB/s wr, 55 op/s
Oct 02 12:59:35 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/783575176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:36 compute-0 nova_compute[257802]: 2025-10-02 12:59:36.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:59:36 compute-0 goofy_bartik[384800]: {
Oct 02 12:59:36 compute-0 goofy_bartik[384800]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 12:59:36 compute-0 goofy_bartik[384800]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 12:59:36 compute-0 goofy_bartik[384800]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:59:36 compute-0 goofy_bartik[384800]:         "osd_id": 1,
Oct 02 12:59:36 compute-0 goofy_bartik[384800]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 12:59:36 compute-0 goofy_bartik[384800]:         "type": "bluestore"
Oct 02 12:59:36 compute-0 goofy_bartik[384800]:     }
Oct 02 12:59:36 compute-0 goofy_bartik[384800]: }
Oct 02 12:59:36 compute-0 systemd[1]: libpod-c5148b8207a02436d6724cf52b56af0dd095658779d5f304c4ecefabd8db167b.scope: Deactivated successfully.
Oct 02 12:59:36 compute-0 podman[384783]: 2025-10-02 12:59:36.174913445 +0000 UTC m=+1.154004870 container died c5148b8207a02436d6724cf52b56af0dd095658779d5f304c4ecefabd8db167b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bartik, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 12:59:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:36.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3025: 305 pgs: 305 active+clean; 259 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct 02 12:59:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c64e16bf10ae49fe2e1bc6865245c50045b8417e366a5c94167d32ec8da64b4-merged.mount: Deactivated successfully.
Oct 02 12:59:36 compute-0 podman[384783]: 2025-10-02 12:59:36.584448465 +0000 UTC m=+1.563539900 container remove c5148b8207a02436d6724cf52b56af0dd095658779d5f304c4ecefabd8db167b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bartik, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 12:59:36 compute-0 systemd[1]: libpod-conmon-c5148b8207a02436d6724cf52b56af0dd095658779d5f304c4ecefabd8db167b.scope: Deactivated successfully.
Oct 02 12:59:36 compute-0 sudo[384674]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:59:36 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:59:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:59:36 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:59:36 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 42351952-4618-4e09-9984-b365b26d0cc2 does not exist
Oct 02 12:59:36 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 8e74a093-93d7-4d38-aab1-e676ab391ae0 does not exist
Oct 02 12:59:36 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 4a6220ac-b073-4375-a8f3-bb8f377e85e8 does not exist
Oct 02 12:59:36 compute-0 sudo[384836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:36 compute-0 sudo[384836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:36 compute-0 sudo[384836]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:36 compute-0 sudo[384861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:59:36 compute-0 sudo[384861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:36 compute-0 sudo[384861]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:59:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:37.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:59:37 compute-0 ceph-mon[73607]: pgmap v3025: 305 pgs: 305 active+clean; 259 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct 02 12:59:37 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:59:37 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 12:59:37 compute-0 nova_compute[257802]: 2025-10-02 12:59:37.803 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409962.801808, 01108902-768b-4bee-baff-11d5854e2f77 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:59:37 compute-0 nova_compute[257802]: 2025-10-02 12:59:37.804 2 INFO nova.compute.manager [-] [instance: 01108902-768b-4bee-baff-11d5854e2f77] VM Stopped (Lifecycle Event)
Oct 02 12:59:37 compute-0 nova_compute[257802]: 2025-10-02 12:59:37.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:37 compute-0 nova_compute[257802]: 2025-10-02 12:59:37.869 2 DEBUG nova.compute.manager [None req-e3844abf-cbb2-4406-b68d-e696a398a447 - - - - - -] [instance: 01108902-768b-4bee-baff-11d5854e2f77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:59:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:38.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3026: 305 pgs: 305 active+clean; 259 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 02 12:59:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:39.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:39 compute-0 ceph-mon[73607]: pgmap v3026: 305 pgs: 305 active+clean; 259 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 02 12:59:39 compute-0 sudo[384887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:39 compute-0 sudo[384887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:39 compute-0 sudo[384887]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:40 compute-0 sudo[384912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:40 compute-0 sudo[384912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:40 compute-0 sudo[384912]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:40 compute-0 nova_compute[257802]: 2025-10-02 12:59:40.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:40 compute-0 nova_compute[257802]: 2025-10-02 12:59:40.276 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:59:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:40.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:40 compute-0 nova_compute[257802]: 2025-10-02 12:59:40.397 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:40 compute-0 nova_compute[257802]: 2025-10-02 12:59:40.397 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:40 compute-0 nova_compute[257802]: 2025-10-02 12:59:40.398 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:40 compute-0 nova_compute[257802]: 2025-10-02 12:59:40.398 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:59:40 compute-0 nova_compute[257802]: 2025-10-02 12:59:40.398 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:59:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3027: 305 pgs: 305 active+clean; 306 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 02 12:59:40 compute-0 nova_compute[257802]: 2025-10-02 12:59:40.775 2 DEBUG oslo_concurrency.lockutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "15a3e611-0b02-417e-96b9-552573fac39e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:40 compute-0 nova_compute[257802]: 2025-10-02 12:59:40.776 2 DEBUG oslo_concurrency.lockutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "15a3e611-0b02-417e-96b9-552573fac39e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:59:40 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1211326943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:40 compute-0 nova_compute[257802]: 2025-10-02 12:59:40.840 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:59:40 compute-0 ceph-mon[73607]: pgmap v3027: 305 pgs: 305 active+clean; 306 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 02 12:59:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1211326943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:40 compute-0 nova_compute[257802]: 2025-10-02 12:59:40.917 2 DEBUG nova.compute.manager [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:59:41 compute-0 nova_compute[257802]: 2025-10-02 12:59:41.006 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:59:41 compute-0 nova_compute[257802]: 2025-10-02 12:59:41.007 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4236MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:59:41 compute-0 nova_compute[257802]: 2025-10-02 12:59:41.007 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:41 compute-0 nova_compute[257802]: 2025-10-02 12:59:41.008 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.002000047s ======
Oct 02 12:59:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:41.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Oct 02 12:59:41 compute-0 nova_compute[257802]: 2025-10-02 12:59:41.117 2 DEBUG oslo_concurrency.lockutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:41 compute-0 nova_compute[257802]: 2025-10-02 12:59:41.181 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 15a3e611-0b02-417e-96b9-552573fac39e has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692
Oct 02 12:59:41 compute-0 nova_compute[257802]: 2025-10-02 12:59:41.182 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:59:41 compute-0 nova_compute[257802]: 2025-10-02 12:59:41.182 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:59:41 compute-0 nova_compute[257802]: 2025-10-02 12:59:41.227 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:59:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:59:41 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3906166653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:41 compute-0 nova_compute[257802]: 2025-10-02 12:59:41.665 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:59:41 compute-0 nova_compute[257802]: 2025-10-02 12:59:41.672 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:59:41 compute-0 nova_compute[257802]: 2025-10-02 12:59:41.701 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:59:41 compute-0 nova_compute[257802]: 2025-10-02 12:59:41.739 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:59:41 compute-0 nova_compute[257802]: 2025-10-02 12:59:41.740 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:41 compute-0 nova_compute[257802]: 2025-10-02 12:59:41.740 2 DEBUG oslo_concurrency.lockutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:41 compute-0 nova_compute[257802]: 2025-10-02 12:59:41.741 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:59:41 compute-0 nova_compute[257802]: 2025-10-02 12:59:41.741 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:59:41 compute-0 nova_compute[257802]: 2025-10-02 12:59:41.746 2 DEBUG nova.virt.hardware [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:59:41 compute-0 nova_compute[257802]: 2025-10-02 12:59:41.747 2 INFO nova.compute.claims [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:59:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3906166653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:41 compute-0 nova_compute[257802]: 2025-10-02 12:59:41.894 2 DEBUG oslo_concurrency.processutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:59:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:59:42 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3683854394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:42 compute-0 nova_compute[257802]: 2025-10-02 12:59:42.351 2 DEBUG oslo_concurrency.processutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:59:42 compute-0 nova_compute[257802]: 2025-10-02 12:59:42.357 2 DEBUG nova.compute.provider_tree [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:59:42 compute-0 nova_compute[257802]: 2025-10-02 12:59:42.372 2 DEBUG nova.scheduler.client.report [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:59:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:42.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:42 compute-0 nova_compute[257802]: 2025-10-02 12:59:42.394 2 DEBUG oslo_concurrency.lockutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:42 compute-0 nova_compute[257802]: 2025-10-02 12:59:42.395 2 DEBUG nova.compute.manager [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:59:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3028: 305 pgs: 305 active+clean; 306 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Oct 02 12:59:42 compute-0 nova_compute[257802]: 2025-10-02 12:59:42.494 2 DEBUG nova.compute.manager [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:59:42 compute-0 nova_compute[257802]: 2025-10-02 12:59:42.494 2 DEBUG nova.network.neutron [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:59:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_12:59:42
Oct 02 12:59:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:59:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 12:59:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'volumes', 'backups', 'default.rgw.control', '.mgr']
Oct 02 12:59:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:59:42 compute-0 nova_compute[257802]: 2025-10-02 12:59:42.542 2 INFO nova.virt.libvirt.driver [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:59:42 compute-0 nova_compute[257802]: 2025-10-02 12:59:42.574 2 DEBUG nova.compute.manager [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:59:42 compute-0 nova_compute[257802]: 2025-10-02 12:59:42.669 2 DEBUG nova.compute.manager [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:59:42 compute-0 nova_compute[257802]: 2025-10-02 12:59:42.671 2 DEBUG nova.virt.libvirt.driver [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:59:42 compute-0 nova_compute[257802]: 2025-10-02 12:59:42.672 2 INFO nova.virt.libvirt.driver [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Creating image(s)
Oct 02 12:59:42 compute-0 nova_compute[257802]: 2025-10-02 12:59:42.704 2 DEBUG nova.storage.rbd_utils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] rbd image 15a3e611-0b02-417e-96b9-552573fac39e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:59:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:59:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:59:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:59:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:59:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:59:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:59:42 compute-0 nova_compute[257802]: 2025-10-02 12:59:42.739 2 DEBUG nova.storage.rbd_utils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] rbd image 15a3e611-0b02-417e-96b9-552573fac39e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:59:42 compute-0 nova_compute[257802]: 2025-10-02 12:59:42.770 2 DEBUG nova.storage.rbd_utils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] rbd image 15a3e611-0b02-417e-96b9-552573fac39e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:59:42 compute-0 nova_compute[257802]: 2025-10-02 12:59:42.775 2 DEBUG oslo_concurrency.processutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:59:42 compute-0 nova_compute[257802]: 2025-10-02 12:59:42.843 2 DEBUG oslo_concurrency.processutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:59:42 compute-0 nova_compute[257802]: 2025-10-02 12:59:42.844 2 DEBUG oslo_concurrency.lockutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:42 compute-0 nova_compute[257802]: 2025-10-02 12:59:42.845 2 DEBUG oslo_concurrency.lockutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:42 compute-0 nova_compute[257802]: 2025-10-02 12:59:42.845 2 DEBUG oslo_concurrency.lockutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:42 compute-0 nova_compute[257802]: 2025-10-02 12:59:42.879 2 DEBUG nova.storage.rbd_utils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] rbd image 15a3e611-0b02-417e-96b9-552573fac39e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:59:42 compute-0 nova_compute[257802]: 2025-10-02 12:59:42.884 2 DEBUG oslo_concurrency.processutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 15a3e611-0b02-417e-96b9-552573fac39e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:59:42 compute-0 nova_compute[257802]: 2025-10-02 12:59:42.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3336498066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3683854394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:42 compute-0 ceph-mon[73607]: pgmap v3028: 305 pgs: 305 active+clean; 306 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Oct 02 12:59:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3263233194' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:43.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:59:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:59:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:59:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:59:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:59:43 compute-0 nova_compute[257802]: 2025-10-02 12:59:43.524 2 DEBUG oslo_concurrency.processutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 15a3e611-0b02-417e-96b9-552573fac39e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:59:43 compute-0 nova_compute[257802]: 2025-10-02 12:59:43.573 2 DEBUG nova.policy [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3299a1aed5af4843a91417a3f181c172', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e7168b5b1300495d90592b195824729a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:59:43 compute-0 nova_compute[257802]: 2025-10-02 12:59:43.630 2 DEBUG nova.storage.rbd_utils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] resizing rbd image 15a3e611-0b02-417e-96b9-552573fac39e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:59:43 compute-0 nova_compute[257802]: 2025-10-02 12:59:43.772 2 DEBUG nova.objects.instance [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'migration_context' on Instance uuid 15a3e611-0b02-417e-96b9-552573fac39e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:59:43 compute-0 nova_compute[257802]: 2025-10-02 12:59:43.788 2 DEBUG nova.virt.libvirt.driver [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:59:43 compute-0 nova_compute[257802]: 2025-10-02 12:59:43.788 2 DEBUG nova.virt.libvirt.driver [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Ensure instance console log exists: /var/lib/nova/instances/15a3e611-0b02-417e-96b9-552573fac39e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:59:43 compute-0 nova_compute[257802]: 2025-10-02 12:59:43.789 2 DEBUG oslo_concurrency.lockutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:43 compute-0 nova_compute[257802]: 2025-10-02 12:59:43.789 2 DEBUG oslo_concurrency.lockutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:43 compute-0 nova_compute[257802]: 2025-10-02 12:59:43.789 2 DEBUG oslo_concurrency.lockutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:59:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:59:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:59:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:59:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:59:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:44.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3029: 305 pgs: 305 active+clean; 328 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 131 op/s
Oct 02 12:59:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:45 compute-0 nova_compute[257802]: 2025-10-02 12:59:45.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:45.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:45 compute-0 ceph-mon[73607]: pgmap v3029: 305 pgs: 305 active+clean; 328 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 131 op/s
Oct 02 12:59:46 compute-0 nova_compute[257802]: 2025-10-02 12:59:46.092 2 DEBUG nova.network.neutron [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Successfully created port: 856edb73-6931-4d42-b614-8212ebb68a63 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:59:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:46.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3030: 305 pgs: 305 active+clean; 383 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.6 MiB/s wr, 150 op/s
Oct 02 12:59:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:47.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:47 compute-0 ceph-mon[73607]: pgmap v3030: 305 pgs: 305 active+clean; 383 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.6 MiB/s wr, 150 op/s
Oct 02 12:59:47 compute-0 nova_compute[257802]: 2025-10-02 12:59:47.603 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:59:47 compute-0 nova_compute[257802]: 2025-10-02 12:59:47.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:48 compute-0 nova_compute[257802]: 2025-10-02 12:59:48.079 2 DEBUG nova.network.neutron [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Successfully updated port: 856edb73-6931-4d42-b614-8212ebb68a63 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:59:48 compute-0 nova_compute[257802]: 2025-10-02 12:59:48.151 2 DEBUG oslo_concurrency.lockutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "refresh_cache-15a3e611-0b02-417e-96b9-552573fac39e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:59:48 compute-0 nova_compute[257802]: 2025-10-02 12:59:48.151 2 DEBUG oslo_concurrency.lockutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquired lock "refresh_cache-15a3e611-0b02-417e-96b9-552573fac39e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:59:48 compute-0 nova_compute[257802]: 2025-10-02 12:59:48.151 2 DEBUG nova.network.neutron [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:59:48 compute-0 nova_compute[257802]: 2025-10-02 12:59:48.225 2 DEBUG nova.compute.manager [req-7ed4d00b-57bf-4436-bf3d-6add29d0bea1 req-5b593e67-70ea-4717-ac6f-e70a95f5ee5e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Received event network-changed-856edb73-6931-4d42-b614-8212ebb68a63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:48 compute-0 nova_compute[257802]: 2025-10-02 12:59:48.226 2 DEBUG nova.compute.manager [req-7ed4d00b-57bf-4436-bf3d-6add29d0bea1 req-5b593e67-70ea-4717-ac6f-e70a95f5ee5e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Refreshing instance network info cache due to event network-changed-856edb73-6931-4d42-b614-8212ebb68a63. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:59:48 compute-0 nova_compute[257802]: 2025-10-02 12:59:48.226 2 DEBUG oslo_concurrency.lockutils [req-7ed4d00b-57bf-4436-bf3d-6add29d0bea1 req-5b593e67-70ea-4717-ac6f-e70a95f5ee5e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-15a3e611-0b02-417e-96b9-552573fac39e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:59:48 compute-0 nova_compute[257802]: 2025-10-02 12:59:48.328 2 DEBUG nova.network.neutron [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:59:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:48.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3031: 305 pgs: 305 active+clean; 383 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 165 KiB/s rd, 5.6 MiB/s wr, 103 op/s
Oct 02 12:59:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 12:59:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:49.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 12:59:49 compute-0 nova_compute[257802]: 2025-10-02 12:59:49.482 2 DEBUG nova.network.neutron [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Updating instance_info_cache with network_info: [{"id": "856edb73-6931-4d42-b614-8212ebb68a63", "address": "fa:16:3e:e6:54:06", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap856edb73-69", "ovs_interfaceid": "856edb73-6931-4d42-b614-8212ebb68a63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:59:49 compute-0 ceph-mon[73607]: pgmap v3031: 305 pgs: 305 active+clean; 383 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 165 KiB/s rd, 5.6 MiB/s wr, 103 op/s
Oct 02 12:59:49 compute-0 nova_compute[257802]: 2025-10-02 12:59:49.579 2 DEBUG oslo_concurrency.lockutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Releasing lock "refresh_cache-15a3e611-0b02-417e-96b9-552573fac39e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:59:49 compute-0 nova_compute[257802]: 2025-10-02 12:59:49.580 2 DEBUG nova.compute.manager [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Instance network_info: |[{"id": "856edb73-6931-4d42-b614-8212ebb68a63", "address": "fa:16:3e:e6:54:06", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap856edb73-69", "ovs_interfaceid": "856edb73-6931-4d42-b614-8212ebb68a63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:59:49 compute-0 nova_compute[257802]: 2025-10-02 12:59:49.582 2 DEBUG oslo_concurrency.lockutils [req-7ed4d00b-57bf-4436-bf3d-6add29d0bea1 req-5b593e67-70ea-4717-ac6f-e70a95f5ee5e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-15a3e611-0b02-417e-96b9-552573fac39e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:59:49 compute-0 nova_compute[257802]: 2025-10-02 12:59:49.582 2 DEBUG nova.network.neutron [req-7ed4d00b-57bf-4436-bf3d-6add29d0bea1 req-5b593e67-70ea-4717-ac6f-e70a95f5ee5e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Refreshing network info cache for port 856edb73-6931-4d42-b614-8212ebb68a63 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:59:49 compute-0 nova_compute[257802]: 2025-10-02 12:59:49.587 2 DEBUG nova.virt.libvirt.driver [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Start _get_guest_xml network_info=[{"id": "856edb73-6931-4d42-b614-8212ebb68a63", "address": "fa:16:3e:e6:54:06", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap856edb73-69", "ovs_interfaceid": "856edb73-6931-4d42-b614-8212ebb68a63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:59:49 compute-0 nova_compute[257802]: 2025-10-02 12:59:49.596 2 WARNING nova.virt.libvirt.driver [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:59:49 compute-0 nova_compute[257802]: 2025-10-02 12:59:49.601 2 DEBUG nova.virt.libvirt.host [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:59:49 compute-0 nova_compute[257802]: 2025-10-02 12:59:49.602 2 DEBUG nova.virt.libvirt.host [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:59:49 compute-0 nova_compute[257802]: 2025-10-02 12:59:49.607 2 DEBUG nova.virt.libvirt.host [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:59:49 compute-0 nova_compute[257802]: 2025-10-02 12:59:49.608 2 DEBUG nova.virt.libvirt.host [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:59:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:49 compute-0 nova_compute[257802]: 2025-10-02 12:59:49.610 2 DEBUG nova.virt.libvirt.driver [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:59:49 compute-0 nova_compute[257802]: 2025-10-02 12:59:49.610 2 DEBUG nova.virt.hardware [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:59:49 compute-0 nova_compute[257802]: 2025-10-02 12:59:49.611 2 DEBUG nova.virt.hardware [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:59:49 compute-0 nova_compute[257802]: 2025-10-02 12:59:49.612 2 DEBUG nova.virt.hardware [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:59:49 compute-0 nova_compute[257802]: 2025-10-02 12:59:49.612 2 DEBUG nova.virt.hardware [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:59:49 compute-0 nova_compute[257802]: 2025-10-02 12:59:49.612 2 DEBUG nova.virt.hardware [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:59:49 compute-0 nova_compute[257802]: 2025-10-02 12:59:49.612 2 DEBUG nova.virt.hardware [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:59:49 compute-0 nova_compute[257802]: 2025-10-02 12:59:49.613 2 DEBUG nova.virt.hardware [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:59:49 compute-0 nova_compute[257802]: 2025-10-02 12:59:49.613 2 DEBUG nova.virt.hardware [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:59:49 compute-0 nova_compute[257802]: 2025-10-02 12:59:49.613 2 DEBUG nova.virt.hardware [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:59:49 compute-0 nova_compute[257802]: 2025-10-02 12:59:49.613 2 DEBUG nova.virt.hardware [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:59:49 compute-0 nova_compute[257802]: 2025-10-02 12:59:49.614 2 DEBUG nova.virt.hardware [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:59:49 compute-0 nova_compute[257802]: 2025-10-02 12:59:49.618 2 DEBUG oslo_concurrency.processutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:59:50 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1528526786' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.123 2 DEBUG oslo_concurrency.processutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.155 2 DEBUG nova.storage.rbd_utils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] rbd image 15a3e611-0b02-417e-96b9-552573fac39e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.161 2 DEBUG oslo_concurrency.processutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:59:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:59:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:50.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:59:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3032: 305 pgs: 305 active+clean; 368 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.7 MiB/s wr, 213 op/s
Oct 02 12:59:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1528526786' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:59:50 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2543124491' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.683 2 DEBUG oslo_concurrency.processutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.685 2 DEBUG nova.virt.libvirt.vif [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:59:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-117296075',display_name='tempest-AttachVolumeTestJSON-server-117296075',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-117296075',id=196,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHgFMRTF59wI+A1RbQFflyWqYsp+1+TLsbzpI2oIY5UH/9iZQfgqf1mLEqowiuTjZbWOpGDDEuLycngHCpVUhA7dxt5wH3ss9pWx7NuWid7bytzU6Gv6Ig5GVF/D0JmNDw==',key_name='tempest-keypair-1172883700',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e7168b5b1300495d90592b195824729a',ramdisk_id='',reservation_id='r-q9se00qp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeTestJSON-398185718',owner_user_name='tempest-AttachVolumeTestJSON-398185718-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:59:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3299a1aed5af4843a91417a3f181c172',uuid=15a3e611-0b02-417e-96b9-552573fac39e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "856edb73-6931-4d42-b614-8212ebb68a63", "address": "fa:16:3e:e6:54:06", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap856edb73-69", "ovs_interfaceid": "856edb73-6931-4d42-b614-8212ebb68a63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.685 2 DEBUG nova.network.os_vif_util [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Converting VIF {"id": "856edb73-6931-4d42-b614-8212ebb68a63", "address": "fa:16:3e:e6:54:06", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap856edb73-69", "ovs_interfaceid": "856edb73-6931-4d42-b614-8212ebb68a63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.686 2 DEBUG nova.network.os_vif_util [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:54:06,bridge_name='br-int',has_traffic_filtering=True,id=856edb73-6931-4d42-b614-8212ebb68a63,network=Network(ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap856edb73-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.687 2 DEBUG nova.objects.instance [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'pci_devices' on Instance uuid 15a3e611-0b02-417e-96b9-552573fac39e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.713 2 DEBUG nova.virt.libvirt.driver [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:59:50 compute-0 nova_compute[257802]:   <uuid>15a3e611-0b02-417e-96b9-552573fac39e</uuid>
Oct 02 12:59:50 compute-0 nova_compute[257802]:   <name>instance-000000c4</name>
Oct 02 12:59:50 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 12:59:50 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 12:59:50 compute-0 nova_compute[257802]:   <metadata>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:59:50 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:       <nova:name>tempest-AttachVolumeTestJSON-server-117296075</nova:name>
Oct 02 12:59:50 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 12:59:49</nova:creationTime>
Oct 02 12:59:50 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 12:59:50 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 12:59:50 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 12:59:50 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 12:59:50 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:59:50 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:59:50 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 12:59:50 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 12:59:50 compute-0 nova_compute[257802]:         <nova:user uuid="3299a1aed5af4843a91417a3f181c172">tempest-AttachVolumeTestJSON-398185718-project-member</nova:user>
Oct 02 12:59:50 compute-0 nova_compute[257802]:         <nova:project uuid="e7168b5b1300495d90592b195824729a">tempest-AttachVolumeTestJSON-398185718</nova:project>
Oct 02 12:59:50 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 12:59:50 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 12:59:50 compute-0 nova_compute[257802]:         <nova:port uuid="856edb73-6931-4d42-b614-8212ebb68a63">
Oct 02 12:59:50 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 12:59:50 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 12:59:50 compute-0 nova_compute[257802]:   </metadata>
Oct 02 12:59:50 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <system>
Oct 02 12:59:50 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:59:50 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:59:50 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:59:50 compute-0 nova_compute[257802]:       <entry name="serial">15a3e611-0b02-417e-96b9-552573fac39e</entry>
Oct 02 12:59:50 compute-0 nova_compute[257802]:       <entry name="uuid">15a3e611-0b02-417e-96b9-552573fac39e</entry>
Oct 02 12:59:50 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     </system>
Oct 02 12:59:50 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 12:59:50 compute-0 nova_compute[257802]:   <os>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:   </os>
Oct 02 12:59:50 compute-0 nova_compute[257802]:   <features>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <apic/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:   </features>
Oct 02 12:59:50 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:   </clock>
Oct 02 12:59:50 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:   </cpu>
Oct 02 12:59:50 compute-0 nova_compute[257802]:   <devices>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 12:59:50 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/15a3e611-0b02-417e-96b9-552573fac39e_disk">
Oct 02 12:59:50 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:       </source>
Oct 02 12:59:50 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:59:50 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:59:50 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 12:59:50 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/15a3e611-0b02-417e-96b9-552573fac39e_disk.config">
Oct 02 12:59:50 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:       </source>
Oct 02 12:59:50 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 12:59:50 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:       </auth>
Oct 02 12:59:50 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     </disk>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 12:59:50 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:e6:54:06"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:       <target dev="tap856edb73-69"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     </interface>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 12:59:50 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/15a3e611-0b02-417e-96b9-552573fac39e/console.log" append="off"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     </serial>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <video>
Oct 02 12:59:50 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     </video>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 12:59:50 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     </rng>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 12:59:50 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 12:59:50 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 12:59:50 compute-0 nova_compute[257802]:   </devices>
Oct 02 12:59:50 compute-0 nova_compute[257802]: </domain>
Oct 02 12:59:50 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.715 2 DEBUG nova.compute.manager [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Preparing to wait for external event network-vif-plugged-856edb73-6931-4d42-b614-8212ebb68a63 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.715 2 DEBUG oslo_concurrency.lockutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "15a3e611-0b02-417e-96b9-552573fac39e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.715 2 DEBUG oslo_concurrency.lockutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "15a3e611-0b02-417e-96b9-552573fac39e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.715 2 DEBUG oslo_concurrency.lockutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "15a3e611-0b02-417e-96b9-552573fac39e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.716 2 DEBUG nova.virt.libvirt.vif [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:59:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-117296075',display_name='tempest-AttachVolumeTestJSON-server-117296075',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-117296075',id=196,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHgFMRTF59wI+A1RbQFflyWqYsp+1+TLsbzpI2oIY5UH/9iZQfgqf1mLEqowiuTjZbWOpGDDEuLycngHCpVUhA7dxt5wH3ss9pWx7NuWid7bytzU6Gv6Ig5GVF/D0JmNDw==',key_name='tempest-keypair-1172883700',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e7168b5b1300495d90592b195824729a',ramdisk_id='',reservation_id='r-q9se00qp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeTestJSON-398185718',owner_user_name='tempest-AttachVolumeTestJSON-398185718-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:59:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3299a1aed5af4843a91417a3f181c172',uuid=15a3e611-0b02-417e-96b9-552573fac39e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "856edb73-6931-4d42-b614-8212ebb68a63", "address": "fa:16:3e:e6:54:06", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap856edb73-69", "ovs_interfaceid": "856edb73-6931-4d42-b614-8212ebb68a63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.716 2 DEBUG nova.network.os_vif_util [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Converting VIF {"id": "856edb73-6931-4d42-b614-8212ebb68a63", "address": "fa:16:3e:e6:54:06", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap856edb73-69", "ovs_interfaceid": "856edb73-6931-4d42-b614-8212ebb68a63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.717 2 DEBUG nova.network.os_vif_util [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:54:06,bridge_name='br-int',has_traffic_filtering=True,id=856edb73-6931-4d42-b614-8212ebb68a63,network=Network(ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap856edb73-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.717 2 DEBUG os_vif [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:54:06,bridge_name='br-int',has_traffic_filtering=True,id=856edb73-6931-4d42-b614-8212ebb68a63,network=Network(ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap856edb73-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.718 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.719 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.722 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap856edb73-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.723 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap856edb73-69, col_values=(('external_ids', {'iface-id': '856edb73-6931-4d42-b614-8212ebb68a63', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e6:54:06', 'vm-uuid': '15a3e611-0b02-417e-96b9-552573fac39e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:50 compute-0 NetworkManager[44987]: <info>  [1759409990.7259] manager: (tap856edb73-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/402)
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.734 2 INFO os_vif [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:54:06,bridge_name='br-int',has_traffic_filtering=True,id=856edb73-6931-4d42-b614-8212ebb68a63,network=Network(ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap856edb73-69')
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.820 2 DEBUG nova.virt.libvirt.driver [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.821 2 DEBUG nova.virt.libvirt.driver [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.821 2 DEBUG nova.virt.libvirt.driver [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] No VIF found with MAC fa:16:3e:e6:54:06, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.822 2 INFO nova.virt.libvirt.driver [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Using config drive
Oct 02 12:59:50 compute-0 nova_compute[257802]: 2025-10-02 12:59:50.849 2 DEBUG nova.storage.rbd_utils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] rbd image 15a3e611-0b02-417e-96b9-552573fac39e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:59:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:51.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:51 compute-0 ceph-mon[73607]: pgmap v3032: 305 pgs: 305 active+clean; 368 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.7 MiB/s wr, 213 op/s
Oct 02 12:59:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2543124491' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:51 compute-0 nova_compute[257802]: 2025-10-02 12:59:51.743 2 INFO nova.virt.libvirt.driver [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Creating config drive at /var/lib/nova/instances/15a3e611-0b02-417e-96b9-552573fac39e/disk.config
Oct 02 12:59:51 compute-0 nova_compute[257802]: 2025-10-02 12:59:51.749 2 DEBUG oslo_concurrency.processutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/15a3e611-0b02-417e-96b9-552573fac39e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmqxtkrm6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:59:51 compute-0 nova_compute[257802]: 2025-10-02 12:59:51.883 2 DEBUG oslo_concurrency.processutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/15a3e611-0b02-417e-96b9-552573fac39e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmqxtkrm6" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:59:51 compute-0 nova_compute[257802]: 2025-10-02 12:59:51.913 2 DEBUG nova.storage.rbd_utils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] rbd image 15a3e611-0b02-417e-96b9-552573fac39e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:59:51 compute-0 nova_compute[257802]: 2025-10-02 12:59:51.917 2 DEBUG oslo_concurrency.processutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/15a3e611-0b02-417e-96b9-552573fac39e/disk.config 15a3e611-0b02-417e-96b9-552573fac39e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:59:52 compute-0 nova_compute[257802]: 2025-10-02 12:59:52.087 2 DEBUG oslo_concurrency.processutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/15a3e611-0b02-417e-96b9-552573fac39e/disk.config 15a3e611-0b02-417e-96b9-552573fac39e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:59:52 compute-0 nova_compute[257802]: 2025-10-02 12:59:52.088 2 INFO nova.virt.libvirt.driver [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Deleting local config drive /var/lib/nova/instances/15a3e611-0b02-417e-96b9-552573fac39e/disk.config because it was imported into RBD.
Oct 02 12:59:52 compute-0 kernel: tap856edb73-69: entered promiscuous mode
Oct 02 12:59:52 compute-0 ovn_controller[148183]: 2025-10-02T12:59:52Z|00902|binding|INFO|Claiming lport 856edb73-6931-4d42-b614-8212ebb68a63 for this chassis.
Oct 02 12:59:52 compute-0 ovn_controller[148183]: 2025-10-02T12:59:52Z|00903|binding|INFO|856edb73-6931-4d42-b614-8212ebb68a63: Claiming fa:16:3e:e6:54:06 10.100.0.10
Oct 02 12:59:52 compute-0 NetworkManager[44987]: <info>  [1759409992.1491] manager: (tap856edb73-69): new Tun device (/org/freedesktop/NetworkManager/Devices/403)
Oct 02 12:59:52 compute-0 nova_compute[257802]: 2025-10-02 12:59:52.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:52.155 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:54:06 10.100.0.10'], port_security=['fa:16:3e:e6:54:06 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '15a3e611-0b02-417e-96b9-552573fac39e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7168b5b1300495d90592b195824729a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0917dd9d-c9c4-4e03-a8a1-353435192b90', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb99a427-f5c8-46c4-b56f-12cf288447a9, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=856edb73-6931-4d42-b614-8212ebb68a63) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:52.156 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 856edb73-6931-4d42-b614-8212ebb68a63 in datapath ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f bound to our chassis
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:52.158 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f
Oct 02 12:59:52 compute-0 ovn_controller[148183]: 2025-10-02T12:59:52Z|00904|binding|INFO|Setting lport 856edb73-6931-4d42-b614-8212ebb68a63 ovn-installed in OVS
Oct 02 12:59:52 compute-0 ovn_controller[148183]: 2025-10-02T12:59:52Z|00905|binding|INFO|Setting lport 856edb73-6931-4d42-b614-8212ebb68a63 up in Southbound
Oct 02 12:59:52 compute-0 nova_compute[257802]: 2025-10-02 12:59:52.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:52.170 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[eceddd2d-8b94-49fe-b373-a5a905a30de0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:52.172 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapff8c8423-f1 in ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:52.174 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapff8c8423-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:52.174 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7e0d2f96-f4ed-400f-8ba0-a389e80b1e8c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:52.176 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[bdb27091-c60b-48e0-b0bc-3a766a6a49c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:52 compute-0 systemd-udevd[385314]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:52.188 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[3e1b1a6f-1db2-4768-b049-268470bee149]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:52 compute-0 systemd-machined[211836]: New machine qemu-97-instance-000000c4.
Oct 02 12:59:52 compute-0 NetworkManager[44987]: <info>  [1759409992.1999] device (tap856edb73-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:59:52 compute-0 NetworkManager[44987]: <info>  [1759409992.2011] device (tap856edb73-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:59:52 compute-0 systemd[1]: Started Virtual Machine qemu-97-instance-000000c4.
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:52.214 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7570d705-86a5-4875-a6be-4db86515d549]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:52.243 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[3d09dd78-d93e-4a79-a78f-05a421823040]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:52.247 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3fe5e372-20dc-4c09-9665-0bd0d3c5acf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:52 compute-0 systemd-udevd[385317]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:59:52 compute-0 NetworkManager[44987]: <info>  [1759409992.2501] manager: (tapff8c8423-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/404)
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:52.284 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[5669c40a-a930-48b8-893e-f123d275924d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:52.286 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[0ef453b5-ff8b-4efd-9bda-47a655faa969]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:52 compute-0 NetworkManager[44987]: <info>  [1759409992.3061] device (tapff8c8423-f0): carrier: link connected
Oct 02 12:59:52 compute-0 nova_compute[257802]: 2025-10-02 12:59:52.309 2 DEBUG nova.network.neutron [req-7ed4d00b-57bf-4436-bf3d-6add29d0bea1 req-5b593e67-70ea-4717-ac6f-e70a95f5ee5e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Updated VIF entry in instance network info cache for port 856edb73-6931-4d42-b614-8212ebb68a63. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:59:52 compute-0 nova_compute[257802]: 2025-10-02 12:59:52.310 2 DEBUG nova.network.neutron [req-7ed4d00b-57bf-4436-bf3d-6add29d0bea1 req-5b593e67-70ea-4717-ac6f-e70a95f5ee5e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Updating instance_info_cache with network_info: [{"id": "856edb73-6931-4d42-b614-8212ebb68a63", "address": "fa:16:3e:e6:54:06", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap856edb73-69", "ovs_interfaceid": "856edb73-6931-4d42-b614-8212ebb68a63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:52.312 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[eaccae9f-dee1-4943-874c-cbf97eea500a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:52.329 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6449ea3c-836a-459e-9a30-c4bda4da703c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapff8c8423-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:50:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 805993, 'reachable_time': 33038, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385345, 'error': None, 'target': 'ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:52 compute-0 nova_compute[257802]: 2025-10-02 12:59:52.337 2 DEBUG oslo_concurrency.lockutils [req-7ed4d00b-57bf-4436-bf3d-6add29d0bea1 req-5b593e67-70ea-4717-ac6f-e70a95f5ee5e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-15a3e611-0b02-417e-96b9-552573fac39e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:52.345 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2d3c83f2-565a-406c-b570-779a4069d476]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:5000'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 805993, 'tstamp': 805993}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385346, 'error': None, 'target': 'ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:52.361 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6a8c8df2-c56d-46f3-acac-800425803d47]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapff8c8423-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:50:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 805993, 'reachable_time': 33038, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 385347, 'error': None, 'target': 'ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:52.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:52.393 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0587240d-4288-4f90-8e46-7bfee5484552]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3033: 305 pgs: 305 active+clean; 368 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 186 op/s
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:52.446 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ecd42698-00f7-4791-a8db-c429871294b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:52.448 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff8c8423-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:52.448 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:52.448 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff8c8423-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:52 compute-0 nova_compute[257802]: 2025-10-02 12:59:52.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:52 compute-0 NetworkManager[44987]: <info>  [1759409992.4507] manager: (tapff8c8423-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/405)
Oct 02 12:59:52 compute-0 kernel: tapff8c8423-f0: entered promiscuous mode
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:52.452 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapff8c8423-f0, col_values=(('external_ids', {'iface-id': 'd2e0b09e-a5f5-4832-b480-4d90b14ae948'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:52 compute-0 ovn_controller[148183]: 2025-10-02T12:59:52Z|00906|binding|INFO|Releasing lport d2e0b09e-a5f5-4832-b480-4d90b14ae948 from this chassis (sb_readonly=0)
Oct 02 12:59:52 compute-0 nova_compute[257802]: 2025-10-02 12:59:52.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:52.470 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:52.471 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[98b4ea66-6591-47ba-95df-c233aa6603c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:52.472 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: global
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f.pid.haproxy
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:59:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 12:59:52.474 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'env', 'PROCESS_TAG=haproxy-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:59:52 compute-0 podman[385422]: 2025-10-02 12:59:52.900479897 +0000 UTC m=+0.081855701 container create 76de268cc181cffe0c5e3ded3db01db9f068a649a94c6ae78b83a93aec69a01d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:59:52 compute-0 podman[385422]: 2025-10-02 12:59:52.844549433 +0000 UTC m=+0.025925257 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:59:52 compute-0 systemd[1]: Started libpod-conmon-76de268cc181cffe0c5e3ded3db01db9f068a649a94c6ae78b83a93aec69a01d.scope.
Oct 02 12:59:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:59:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12972fda930fcb04d6398f00e17c96d854a903b248f6c4b0e8ef7eff62c3b137/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:59:53 compute-0 podman[385422]: 2025-10-02 12:59:53.009240019 +0000 UTC m=+0.190615833 container init 76de268cc181cffe0c5e3ded3db01db9f068a649a94c6ae78b83a93aec69a01d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 02 12:59:53 compute-0 podman[385422]: 2025-10-02 12:59:53.014729074 +0000 UTC m=+0.196104878 container start 76de268cc181cffe0c5e3ded3db01db9f068a649a94c6ae78b83a93aec69a01d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:59:53 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[385438]: [NOTICE]   (385442) : New worker (385444) forked
Oct 02 12:59:53 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[385438]: [NOTICE]   (385442) : Loading success.
Oct 02 12:59:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 12:59:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:53.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 12:59:53 compute-0 nova_compute[257802]: 2025-10-02 12:59:53.197 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409993.1970325, 15a3e611-0b02-417e-96b9-552573fac39e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:59:53 compute-0 nova_compute[257802]: 2025-10-02 12:59:53.198 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] VM Started (Lifecycle Event)
Oct 02 12:59:53 compute-0 nova_compute[257802]: 2025-10-02 12:59:53.217 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:59:53 compute-0 nova_compute[257802]: 2025-10-02 12:59:53.222 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409993.1971729, 15a3e611-0b02-417e-96b9-552573fac39e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:59:53 compute-0 nova_compute[257802]: 2025-10-02 12:59:53.222 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] VM Paused (Lifecycle Event)
Oct 02 12:59:53 compute-0 nova_compute[257802]: 2025-10-02 12:59:53.238 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:59:53 compute-0 nova_compute[257802]: 2025-10-02 12:59:53.241 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:59:53 compute-0 nova_compute[257802]: 2025-10-02 12:59:53.259 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:59:53 compute-0 ceph-mon[73607]: pgmap v3033: 305 pgs: 305 active+clean; 368 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 186 op/s
Oct 02 12:59:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4141547658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:54 compute-0 nova_compute[257802]: 2025-10-02 12:59:54.288 2 DEBUG nova.compute.manager [req-20010ff8-0c3f-4d23-a9f9-65690c7bad0d req-3127236a-a34c-4092-8731-2e687a929824 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Received event network-vif-plugged-856edb73-6931-4d42-b614-8212ebb68a63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:54 compute-0 nova_compute[257802]: 2025-10-02 12:59:54.288 2 DEBUG oslo_concurrency.lockutils [req-20010ff8-0c3f-4d23-a9f9-65690c7bad0d req-3127236a-a34c-4092-8731-2e687a929824 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "15a3e611-0b02-417e-96b9-552573fac39e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:54 compute-0 nova_compute[257802]: 2025-10-02 12:59:54.288 2 DEBUG oslo_concurrency.lockutils [req-20010ff8-0c3f-4d23-a9f9-65690c7bad0d req-3127236a-a34c-4092-8731-2e687a929824 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "15a3e611-0b02-417e-96b9-552573fac39e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:54 compute-0 nova_compute[257802]: 2025-10-02 12:59:54.288 2 DEBUG oslo_concurrency.lockutils [req-20010ff8-0c3f-4d23-a9f9-65690c7bad0d req-3127236a-a34c-4092-8731-2e687a929824 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "15a3e611-0b02-417e-96b9-552573fac39e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:54 compute-0 nova_compute[257802]: 2025-10-02 12:59:54.289 2 DEBUG nova.compute.manager [req-20010ff8-0c3f-4d23-a9f9-65690c7bad0d req-3127236a-a34c-4092-8731-2e687a929824 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Processing event network-vif-plugged-856edb73-6931-4d42-b614-8212ebb68a63 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:59:54 compute-0 nova_compute[257802]: 2025-10-02 12:59:54.289 2 DEBUG nova.compute.manager [req-20010ff8-0c3f-4d23-a9f9-65690c7bad0d req-3127236a-a34c-4092-8731-2e687a929824 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Received event network-vif-plugged-856edb73-6931-4d42-b614-8212ebb68a63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:54 compute-0 nova_compute[257802]: 2025-10-02 12:59:54.289 2 DEBUG oslo_concurrency.lockutils [req-20010ff8-0c3f-4d23-a9f9-65690c7bad0d req-3127236a-a34c-4092-8731-2e687a929824 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "15a3e611-0b02-417e-96b9-552573fac39e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:54 compute-0 nova_compute[257802]: 2025-10-02 12:59:54.289 2 DEBUG oslo_concurrency.lockutils [req-20010ff8-0c3f-4d23-a9f9-65690c7bad0d req-3127236a-a34c-4092-8731-2e687a929824 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "15a3e611-0b02-417e-96b9-552573fac39e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:54 compute-0 nova_compute[257802]: 2025-10-02 12:59:54.290 2 DEBUG oslo_concurrency.lockutils [req-20010ff8-0c3f-4d23-a9f9-65690c7bad0d req-3127236a-a34c-4092-8731-2e687a929824 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "15a3e611-0b02-417e-96b9-552573fac39e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:54 compute-0 nova_compute[257802]: 2025-10-02 12:59:54.290 2 DEBUG nova.compute.manager [req-20010ff8-0c3f-4d23-a9f9-65690c7bad0d req-3127236a-a34c-4092-8731-2e687a929824 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] No waiting events found dispatching network-vif-plugged-856edb73-6931-4d42-b614-8212ebb68a63 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:59:54 compute-0 nova_compute[257802]: 2025-10-02 12:59:54.290 2 WARNING nova.compute.manager [req-20010ff8-0c3f-4d23-a9f9-65690c7bad0d req-3127236a-a34c-4092-8731-2e687a929824 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Received unexpected event network-vif-plugged-856edb73-6931-4d42-b614-8212ebb68a63 for instance with vm_state building and task_state spawning.
Oct 02 12:59:54 compute-0 nova_compute[257802]: 2025-10-02 12:59:54.291 2 DEBUG nova.compute.manager [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:59:54 compute-0 nova_compute[257802]: 2025-10-02 12:59:54.294 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759409994.2944171, 15a3e611-0b02-417e-96b9-552573fac39e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:59:54 compute-0 nova_compute[257802]: 2025-10-02 12:59:54.295 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] VM Resumed (Lifecycle Event)
Oct 02 12:59:54 compute-0 nova_compute[257802]: 2025-10-02 12:59:54.297 2 DEBUG nova.virt.libvirt.driver [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:59:54 compute-0 nova_compute[257802]: 2025-10-02 12:59:54.304 2 INFO nova.virt.libvirt.driver [-] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Instance spawned successfully.
Oct 02 12:59:54 compute-0 nova_compute[257802]: 2025-10-02 12:59:54.304 2 DEBUG nova.virt.libvirt.driver [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:59:54 compute-0 nova_compute[257802]: 2025-10-02 12:59:54.313 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:59:54 compute-0 nova_compute[257802]: 2025-10-02 12:59:54.317 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:59:54 compute-0 nova_compute[257802]: 2025-10-02 12:59:54.330 2 DEBUG nova.virt.libvirt.driver [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:59:54 compute-0 nova_compute[257802]: 2025-10-02 12:59:54.331 2 DEBUG nova.virt.libvirt.driver [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:59:54 compute-0 nova_compute[257802]: 2025-10-02 12:59:54.331 2 DEBUG nova.virt.libvirt.driver [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:59:54 compute-0 nova_compute[257802]: 2025-10-02 12:59:54.332 2 DEBUG nova.virt.libvirt.driver [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:59:54 compute-0 nova_compute[257802]: 2025-10-02 12:59:54.332 2 DEBUG nova.virt.libvirt.driver [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:59:54 compute-0 nova_compute[257802]: 2025-10-02 12:59:54.333 2 DEBUG nova.virt.libvirt.driver [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:59:54 compute-0 nova_compute[257802]: 2025-10-02 12:59:54.342 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:59:54 compute-0 nova_compute[257802]: 2025-10-02 12:59:54.388 2 INFO nova.compute.manager [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Took 11.72 seconds to spawn the instance on the hypervisor.
Oct 02 12:59:54 compute-0 nova_compute[257802]: 2025-10-02 12:59:54.389 2 DEBUG nova.compute.manager [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:59:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:54.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3034: 305 pgs: 305 active+clean; 348 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 198 op/s
Oct 02 12:59:54 compute-0 nova_compute[257802]: 2025-10-02 12:59:54.457 2 INFO nova.compute.manager [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Took 13.37 seconds to build instance.
Oct 02 12:59:54 compute-0 nova_compute[257802]: 2025-10-02 12:59:54.481 2 DEBUG oslo_concurrency.lockutils [None req-081e1d1e-d0dd-4238-ae15-bddb1e080d75 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "15a3e611-0b02-417e-96b9-552573fac39e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009960101433091383 of space, bias 1.0, pg target 0.29880304299274146 quantized to 32 (current 32)
Oct 02 12:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0053130234505862645 of space, bias 1.0, pg target 1.5939070351758793 quantized to 32 (current 32)
Oct 02 12:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0009892852573050437 of space, bias 1.0, pg target 0.2957962919342081 quantized to 32 (current 32)
Oct 02 12:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 12:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 12:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027172174530057695 quantized to 32 (current 32)
Oct 02 12:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 12:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:59:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 12:59:55 compute-0 nova_compute[257802]: 2025-10-02 12:59:55.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:55.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:55 compute-0 ceph-mon[73607]: pgmap v3034: 305 pgs: 305 active+clean; 348 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 198 op/s
Oct 02 12:59:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4289389056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3464753884' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:59:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3464753884' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:59:55 compute-0 nova_compute[257802]: 2025-10-02 12:59:55.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:56.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3035: 305 pgs: 305 active+clean; 348 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.6 MiB/s wr, 196 op/s
Oct 02 12:59:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e396 do_prune osdmap full prune enabled
Oct 02 12:59:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e397 e397: 3 total, 3 up, 3 in
Oct 02 12:59:56 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e397: 3 total, 3 up, 3 in
Oct 02 12:59:56 compute-0 nova_compute[257802]: 2025-10-02 12:59:56.688 2 DEBUG nova.compute.manager [req-a8e33972-90da-40fa-ad3a-084fa9326476 req-9af99b20-7af1-4738-b400-c09fb05b7ea7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Received event network-changed-856edb73-6931-4d42-b614-8212ebb68a63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:56 compute-0 nova_compute[257802]: 2025-10-02 12:59:56.688 2 DEBUG nova.compute.manager [req-a8e33972-90da-40fa-ad3a-084fa9326476 req-9af99b20-7af1-4738-b400-c09fb05b7ea7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Refreshing instance network info cache due to event network-changed-856edb73-6931-4d42-b614-8212ebb68a63. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:59:56 compute-0 nova_compute[257802]: 2025-10-02 12:59:56.689 2 DEBUG oslo_concurrency.lockutils [req-a8e33972-90da-40fa-ad3a-084fa9326476 req-9af99b20-7af1-4738-b400-c09fb05b7ea7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-15a3e611-0b02-417e-96b9-552573fac39e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:59:56 compute-0 nova_compute[257802]: 2025-10-02 12:59:56.689 2 DEBUG oslo_concurrency.lockutils [req-a8e33972-90da-40fa-ad3a-084fa9326476 req-9af99b20-7af1-4738-b400-c09fb05b7ea7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-15a3e611-0b02-417e-96b9-552573fac39e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:59:56 compute-0 nova_compute[257802]: 2025-10-02 12:59:56.689 2 DEBUG nova.network.neutron [req-a8e33972-90da-40fa-ad3a-084fa9326476 req-9af99b20-7af1-4738-b400-c09fb05b7ea7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Refreshing network info cache for port 856edb73-6931-4d42-b614-8212ebb68a63 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:59:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:57.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:59:57 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1141333118' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:59:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:59:57 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1141333118' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:59:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e397 do_prune osdmap full prune enabled
Oct 02 12:59:57 compute-0 ceph-mon[73607]: pgmap v3035: 305 pgs: 305 active+clean; 348 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.6 MiB/s wr, 196 op/s
Oct 02 12:59:57 compute-0 ceph-mon[73607]: osdmap e397: 3 total, 3 up, 3 in
Oct 02 12:59:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1141333118' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:59:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1141333118' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:59:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e398 e398: 3 total, 3 up, 3 in
Oct 02 12:59:57 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e398: 3 total, 3 up, 3 in
Oct 02 12:59:57 compute-0 nova_compute[257802]: 2025-10-02 12:59:57.990 2 DEBUG nova.network.neutron [req-a8e33972-90da-40fa-ad3a-084fa9326476 req-9af99b20-7af1-4738-b400-c09fb05b7ea7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Updated VIF entry in instance network info cache for port 856edb73-6931-4d42-b614-8212ebb68a63. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:59:57 compute-0 nova_compute[257802]: 2025-10-02 12:59:57.991 2 DEBUG nova.network.neutron [req-a8e33972-90da-40fa-ad3a-084fa9326476 req-9af99b20-7af1-4738-b400-c09fb05b7ea7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Updating instance_info_cache with network_info: [{"id": "856edb73-6931-4d42-b614-8212ebb68a63", "address": "fa:16:3e:e6:54:06", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap856edb73-69", "ovs_interfaceid": "856edb73-6931-4d42-b614-8212ebb68a63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:59:58 compute-0 nova_compute[257802]: 2025-10-02 12:59:58.008 2 DEBUG oslo_concurrency.lockutils [req-a8e33972-90da-40fa-ad3a-084fa9326476 req-9af99b20-7af1-4738-b400-c09fb05b7ea7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-15a3e611-0b02-417e-96b9-552573fac39e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:59:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:58.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3038: 305 pgs: 305 active+clean; 348 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 611 KiB/s rd, 41 KiB/s wr, 67 op/s
Oct 02 12:59:58 compute-0 ceph-mon[73607]: osdmap e398: 3 total, 3 up, 3 in
Oct 02 12:59:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/764651198' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:59:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/764651198' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:59:58 compute-0 podman[385456]: 2025-10-02 12:59:58.910580265 +0000 UTC m=+0.051680071 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:59:58 compute-0 podman[385457]: 2025-10-02 12:59:58.933538718 +0000 UTC m=+0.074215584 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3)
Oct 02 12:59:58 compute-0 podman[385458]: 2025-10-02 12:59:58.93361454 +0000 UTC m=+0.069351864 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:59:58 compute-0 ovn_controller[148183]: 2025-10-02T12:59:58Z|00907|binding|INFO|Releasing lport d2e0b09e-a5f5-4832-b480-4d90b14ae948 from this chassis (sb_readonly=0)
Oct 02 12:59:59 compute-0 nova_compute[257802]: 2025-10-02 12:59:59.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 12:59:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:59.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:59 compute-0 ceph-mon[73607]: pgmap v3038: 305 pgs: 305 active+clean; 348 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 611 KiB/s rd, 41 KiB/s wr, 67 op/s
Oct 02 13:00:00 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 02 13:00:00 compute-0 nova_compute[257802]: 2025-10-02 13:00:00.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:00 compute-0 sudo[385518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:00 compute-0 sudo[385518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:00 compute-0 sudo[385518]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:00 compute-0 sudo[385543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:00 compute-0 sudo[385543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:00 compute-0 sudo[385543]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:00.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3039: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 46 KiB/s wr, 238 op/s
Oct 02 13:00:00 compute-0 nova_compute[257802]: 2025-10-02 13:00:00.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:00 compute-0 ceph-mon[73607]: overall HEALTH_OK
Oct 02 13:00:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:01.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:01 compute-0 ovn_controller[148183]: 2025-10-02T13:00:01Z|00908|binding|INFO|Releasing lport d2e0b09e-a5f5-4832-b480-4d90b14ae948 from this chassis (sb_readonly=0)
Oct 02 13:00:01 compute-0 nova_compute[257802]: 2025-10-02 13:00:01.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:01 compute-0 ceph-mon[73607]: pgmap v3039: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 46 KiB/s wr, 238 op/s
Oct 02 13:00:02 compute-0 nova_compute[257802]: 2025-10-02 13:00:02.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:02.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3040: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 44 KiB/s wr, 220 op/s
Oct 02 13:00:02 compute-0 ceph-mon[73607]: pgmap v3040: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 44 KiB/s wr, 220 op/s
Oct 02 13:00:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:00:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:03.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:00:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:04.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3041: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.5 KiB/s wr, 171 op/s
Oct 02 13:00:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e398 do_prune osdmap full prune enabled
Oct 02 13:00:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 e399: 3 total, 3 up, 3 in
Oct 02 13:00:04 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e399: 3 total, 3 up, 3 in
Oct 02 13:00:04 compute-0 podman[385571]: 2025-10-02 13:00:04.954579434 +0000 UTC m=+0.090430222 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:00:05 compute-0 nova_compute[257802]: 2025-10-02 13:00:05.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:05.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:05 compute-0 ceph-mon[73607]: pgmap v3041: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.5 KiB/s wr, 171 op/s
Oct 02 13:00:05 compute-0 ceph-mon[73607]: osdmap e399: 3 total, 3 up, 3 in
Oct 02 13:00:05 compute-0 nova_compute[257802]: 2025-10-02 13:00:05.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:06.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3043: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.1 KiB/s wr, 155 op/s
Oct 02 13:00:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:07.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:07 compute-0 ceph-mon[73607]: pgmap v3043: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.1 KiB/s wr, 155 op/s
Oct 02 13:00:07 compute-0 ovn_controller[148183]: 2025-10-02T13:00:07Z|00909|binding|INFO|Releasing lport d2e0b09e-a5f5-4832-b480-4d90b14ae948 from this chassis (sb_readonly=0)
Oct 02 13:00:07 compute-0 ovn_controller[148183]: 2025-10-02T13:00:07Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e6:54:06 10.100.0.10
Oct 02 13:00:07 compute-0 ovn_controller[148183]: 2025-10-02T13:00:07Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e6:54:06 10.100.0.10
Oct 02 13:00:07 compute-0 nova_compute[257802]: 2025-10-02 13:00:07.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:08.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3044: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.6 KiB/s wr, 136 op/s
Oct 02 13:00:08 compute-0 nova_compute[257802]: 2025-10-02 13:00:08.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:09.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:09 compute-0 ceph-mon[73607]: pgmap v3044: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.6 KiB/s wr, 136 op/s
Oct 02 13:00:10 compute-0 nova_compute[257802]: 2025-10-02 13:00:10.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:00:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:10.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:00:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3045: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 2.6 MiB/s wr, 75 op/s
Oct 02 13:00:10 compute-0 nova_compute[257802]: 2025-10-02 13:00:10.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:11.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:11 compute-0 ceph-mon[73607]: pgmap v3045: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 2.6 MiB/s wr, 75 op/s
Oct 02 13:00:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:12.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3046: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 2.6 MiB/s wr, 75 op/s
Oct 02 13:00:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:00:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:00:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:00:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:00:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:00:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:00:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:13.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:13 compute-0 ceph-mon[73607]: pgmap v3046: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 2.6 MiB/s wr, 75 op/s
Oct 02 13:00:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:14.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3047: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 2.6 MiB/s wr, 75 op/s
Oct 02 13:00:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:15 compute-0 nova_compute[257802]: 2025-10-02 13:00:15.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:15.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:15 compute-0 nova_compute[257802]: 2025-10-02 13:00:15.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:15 compute-0 ceph-mon[73607]: pgmap v3047: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 2.6 MiB/s wr, 75 op/s
Oct 02 13:00:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3048: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 294 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Oct 02 13:00:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:16.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/462770064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2036417450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:00:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:17.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:00:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:00:17.753 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=71, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=70) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:00:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:00:17.754 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:00:17 compute-0 nova_compute[257802]: 2025-10-02 13:00:17.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:17 compute-0 ceph-mon[73607]: pgmap v3048: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 294 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Oct 02 13:00:17 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2996359736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:18 compute-0 nova_compute[257802]: 2025-10-02 13:00:18.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:00:18 compute-0 nova_compute[257802]: 2025-10-02 13:00:18.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:00:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3049: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 288 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 13:00:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:00:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:18.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:00:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:19.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:19 compute-0 ceph-mon[73607]: pgmap v3049: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 288 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 13:00:20 compute-0 nova_compute[257802]: 2025-10-02 13:00:20.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:20 compute-0 sudo[385606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:20 compute-0 sudo[385606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:20 compute-0 sudo[385606]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:20 compute-0 sudo[385631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:20 compute-0 sudo[385631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:20 compute-0 sudo[385631]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3050: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 305 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Oct 02 13:00:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:00:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:20.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:00:20 compute-0 nova_compute[257802]: 2025-10-02 13:00:20.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2740930688' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:20 compute-0 ceph-mon[73607]: pgmap v3050: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 305 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Oct 02 13:00:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3618510218' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:21.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3051: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:00:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:00:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:22.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:00:23 compute-0 nova_compute[257802]: 2025-10-02 13:00:23.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:00:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:23.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:23 compute-0 ceph-mon[73607]: pgmap v3051: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:00:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3052: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 02 13:00:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:24.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:25 compute-0 nova_compute[257802]: 2025-10-02 13:00:25.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:25.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:25 compute-0 ceph-mon[73607]: pgmap v3052: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 02 13:00:25 compute-0 nova_compute[257802]: 2025-10-02 13:00:25.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:26 compute-0 nova_compute[257802]: 2025-10-02 13:00:26.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:00:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3053: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 73 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Oct 02 13:00:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:26.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:00:26.757 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '71'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:00:26.983 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:00:26.983 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:00:26.984 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:27 compute-0 nova_compute[257802]: 2025-10-02 13:00:27.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:00:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:27.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:27 compute-0 ceph-mon[73607]: pgmap v3053: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 73 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Oct 02 13:00:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3054: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 73 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Oct 02 13:00:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:00:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:28.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:00:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:29.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:29 compute-0 ceph-mon[73607]: pgmap v3054: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 73 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Oct 02 13:00:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:29 compute-0 podman[385662]: 2025-10-02 13:00:29.943088534 +0000 UTC m=+0.071833656 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2)
Oct 02 13:00:29 compute-0 podman[385661]: 2025-10-02 13:00:29.943779561 +0000 UTC m=+0.082638951 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 13:00:29 compute-0 podman[385663]: 2025-10-02 13:00:29.961865515 +0000 UTC m=+0.089695834 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 13:00:30 compute-0 nova_compute[257802]: 2025-10-02 13:00:30.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:30 compute-0 nova_compute[257802]: 2025-10-02 13:00:30.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:00:30 compute-0 nova_compute[257802]: 2025-10-02 13:00:30.431 2 DEBUG oslo_concurrency.lockutils [None req-3dbd9865-5e96-471f-9146-be5bd1cc6933 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "15a3e611-0b02-417e-96b9-552573fac39e" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:30 compute-0 nova_compute[257802]: 2025-10-02 13:00:30.432 2 DEBUG oslo_concurrency.lockutils [None req-3dbd9865-5e96-471f-9146-be5bd1cc6933 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "15a3e611-0b02-417e-96b9-552573fac39e" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3055: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 02 13:00:30 compute-0 nova_compute[257802]: 2025-10-02 13:00:30.448 2 DEBUG nova.objects.instance [None req-3dbd9865-5e96-471f-9146-be5bd1cc6933 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'flavor' on Instance uuid 15a3e611-0b02-417e-96b9-552573fac39e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:00:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:30.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:30 compute-0 nova_compute[257802]: 2025-10-02 13:00:30.504 2 DEBUG oslo_concurrency.lockutils [None req-3dbd9865-5e96-471f-9146-be5bd1cc6933 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "15a3e611-0b02-417e-96b9-552573fac39e" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:30 compute-0 nova_compute[257802]: 2025-10-02 13:00:30.690 2 DEBUG oslo_concurrency.lockutils [None req-3dbd9865-5e96-471f-9146-be5bd1cc6933 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "15a3e611-0b02-417e-96b9-552573fac39e" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:30 compute-0 nova_compute[257802]: 2025-10-02 13:00:30.690 2 DEBUG oslo_concurrency.lockutils [None req-3dbd9865-5e96-471f-9146-be5bd1cc6933 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "15a3e611-0b02-417e-96b9-552573fac39e" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:30 compute-0 nova_compute[257802]: 2025-10-02 13:00:30.690 2 INFO nova.compute.manager [None req-3dbd9865-5e96-471f-9146-be5bd1cc6933 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Attaching volume e20bbad6-3733-408d-b226-999bd1df2110 to /dev/vdb
Oct 02 13:00:30 compute-0 nova_compute[257802]: 2025-10-02 13:00:30.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:30 compute-0 nova_compute[257802]: 2025-10-02 13:00:30.852 2 DEBUG os_brick.utils [None req-3dbd9865-5e96-471f-9146-be5bd1cc6933 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 13:00:30 compute-0 nova_compute[257802]: 2025-10-02 13:00:30.853 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:30 compute-0 nova_compute[257802]: 2025-10-02 13:00:30.865 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:30 compute-0 nova_compute[257802]: 2025-10-02 13:00:30.865 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[2599c2d7-608f-46d1-8acc-97fdeadd5851]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:30 compute-0 nova_compute[257802]: 2025-10-02 13:00:30.867 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:30 compute-0 nova_compute[257802]: 2025-10-02 13:00:30.875 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:30 compute-0 nova_compute[257802]: 2025-10-02 13:00:30.875 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[2124ca88-872f-4ee8-9f29-4d4f46f60fa9]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:30 compute-0 nova_compute[257802]: 2025-10-02 13:00:30.877 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:30 compute-0 nova_compute[257802]: 2025-10-02 13:00:30.886 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:30 compute-0 nova_compute[257802]: 2025-10-02 13:00:30.886 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[d73de56a-7d84-4184-a6bc-5725e7164441]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:30 compute-0 nova_compute[257802]: 2025-10-02 13:00:30.888 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[53c19e39-da62-4098-9c97-18476ecbdc52]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:30 compute-0 nova_compute[257802]: 2025-10-02 13:00:30.889 2 DEBUG oslo_concurrency.processutils [None req-3dbd9865-5e96-471f-9146-be5bd1cc6933 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:30 compute-0 nova_compute[257802]: 2025-10-02 13:00:30.917 2 DEBUG oslo_concurrency.processutils [None req-3dbd9865-5e96-471f-9146-be5bd1cc6933 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:30 compute-0 nova_compute[257802]: 2025-10-02 13:00:30.919 2 DEBUG os_brick.initiator.connectors.lightos [None req-3dbd9865-5e96-471f-9146-be5bd1cc6933 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 13:00:30 compute-0 nova_compute[257802]: 2025-10-02 13:00:30.919 2 DEBUG os_brick.initiator.connectors.lightos [None req-3dbd9865-5e96-471f-9146-be5bd1cc6933 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 13:00:30 compute-0 nova_compute[257802]: 2025-10-02 13:00:30.920 2 DEBUG os_brick.initiator.connectors.lightos [None req-3dbd9865-5e96-471f-9146-be5bd1cc6933 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 13:00:30 compute-0 nova_compute[257802]: 2025-10-02 13:00:30.920 2 DEBUG os_brick.utils [None req-3dbd9865-5e96-471f-9146-be5bd1cc6933 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] <== get_connector_properties: return (67ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 13:00:30 compute-0 nova_compute[257802]: 2025-10-02 13:00:30.920 2 DEBUG nova.virt.block_device [None req-3dbd9865-5e96-471f-9146-be5bd1cc6933 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Updating existing volume attachment record: cab1b627-97f1-490d-96ca-0e33b1604a90 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 13:00:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:31.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:31 compute-0 ceph-mon[73607]: pgmap v3055: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 02 13:00:31 compute-0 nova_compute[257802]: 2025-10-02 13:00:31.758 2 DEBUG nova.objects.instance [None req-3dbd9865-5e96-471f-9146-be5bd1cc6933 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'flavor' on Instance uuid 15a3e611-0b02-417e-96b9-552573fac39e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:00:31 compute-0 nova_compute[257802]: 2025-10-02 13:00:31.778 2 DEBUG nova.virt.libvirt.driver [None req-3dbd9865-5e96-471f-9146-be5bd1cc6933 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Attempting to attach volume e20bbad6-3733-408d-b226-999bd1df2110 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 13:00:31 compute-0 nova_compute[257802]: 2025-10-02 13:00:31.781 2 DEBUG nova.virt.libvirt.guest [None req-3dbd9865-5e96-471f-9146-be5bd1cc6933 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 13:00:31 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:00:31 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-e20bbad6-3733-408d-b226-999bd1df2110">
Oct 02 13:00:31 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:00:31 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:00:31 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:00:31 compute-0 nova_compute[257802]:   </source>
Oct 02 13:00:31 compute-0 nova_compute[257802]:   <auth username="openstack">
Oct 02 13:00:31 compute-0 nova_compute[257802]:     <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 13:00:31 compute-0 nova_compute[257802]:   </auth>
Oct 02 13:00:31 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:00:31 compute-0 nova_compute[257802]:   <serial>e20bbad6-3733-408d-b226-999bd1df2110</serial>
Oct 02 13:00:31 compute-0 nova_compute[257802]: </disk>
Oct 02 13:00:31 compute-0 nova_compute[257802]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 13:00:31 compute-0 nova_compute[257802]: 2025-10-02 13:00:31.888 2 DEBUG nova.virt.libvirt.driver [None req-3dbd9865-5e96-471f-9146-be5bd1cc6933 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:00:31 compute-0 nova_compute[257802]: 2025-10-02 13:00:31.889 2 DEBUG nova.virt.libvirt.driver [None req-3dbd9865-5e96-471f-9146-be5bd1cc6933 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:00:31 compute-0 nova_compute[257802]: 2025-10-02 13:00:31.889 2 DEBUG nova.virt.libvirt.driver [None req-3dbd9865-5e96-471f-9146-be5bd1cc6933 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:00:31 compute-0 nova_compute[257802]: 2025-10-02 13:00:31.889 2 DEBUG nova.virt.libvirt.driver [None req-3dbd9865-5e96-471f-9146-be5bd1cc6933 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] No VIF found with MAC fa:16:3e:e6:54:06, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:00:32 compute-0 nova_compute[257802]: 2025-10-02 13:00:32.072 2 DEBUG oslo_concurrency.lockutils [None req-3dbd9865-5e96-471f-9146-be5bd1cc6933 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "15a3e611-0b02-417e-96b9-552573fac39e" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.382s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:32 compute-0 nova_compute[257802]: 2025-10-02 13:00:32.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:00:32 compute-0 nova_compute[257802]: 2025-10-02 13:00:32.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:00:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3056: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Oct 02 13:00:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:00:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:32.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:00:32 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1643893512' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:32 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/77007013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:00:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:33.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:00:33 compute-0 ceph-mon[73607]: pgmap v3056: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Oct 02 13:00:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1130639323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3057: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 77 op/s
Oct 02 13:00:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:34.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:34 compute-0 nova_compute[257802]: 2025-10-02 13:00:34.957 2 DEBUG oslo_concurrency.lockutils [None req-6a377d52-c9e1-409c-9d81-f4517defa759 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "15a3e611-0b02-417e-96b9-552573fac39e" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:34 compute-0 nova_compute[257802]: 2025-10-02 13:00:34.958 2 DEBUG oslo_concurrency.lockutils [None req-6a377d52-c9e1-409c-9d81-f4517defa759 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "15a3e611-0b02-417e-96b9-552573fac39e" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:34 compute-0 nova_compute[257802]: 2025-10-02 13:00:34.980 2 DEBUG nova.objects.instance [None req-6a377d52-c9e1-409c-9d81-f4517defa759 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'flavor' on Instance uuid 15a3e611-0b02-417e-96b9-552573fac39e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:00:35 compute-0 nova_compute[257802]: 2025-10-02 13:00:35.029 2 DEBUG oslo_concurrency.lockutils [None req-6a377d52-c9e1-409c-9d81-f4517defa759 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "15a3e611-0b02-417e-96b9-552573fac39e" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:35 compute-0 nova_compute[257802]: 2025-10-02 13:00:35.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:00:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:35.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:00:35 compute-0 nova_compute[257802]: 2025-10-02 13:00:35.204 2 DEBUG oslo_concurrency.lockutils [None req-6a377d52-c9e1-409c-9d81-f4517defa759 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "15a3e611-0b02-417e-96b9-552573fac39e" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:35 compute-0 nova_compute[257802]: 2025-10-02 13:00:35.205 2 DEBUG oslo_concurrency.lockutils [None req-6a377d52-c9e1-409c-9d81-f4517defa759 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "15a3e611-0b02-417e-96b9-552573fac39e" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:35 compute-0 nova_compute[257802]: 2025-10-02 13:00:35.205 2 INFO nova.compute.manager [None req-6a377d52-c9e1-409c-9d81-f4517defa759 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Attaching volume 34e323fe-cd44-4887-8162-f6a31b8bc60c to /dev/vdc
Oct 02 13:00:35 compute-0 nova_compute[257802]: 2025-10-02 13:00:35.381 2 DEBUG os_brick.utils [None req-6a377d52-c9e1-409c-9d81-f4517defa759 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 13:00:35 compute-0 nova_compute[257802]: 2025-10-02 13:00:35.382 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:35 compute-0 nova_compute[257802]: 2025-10-02 13:00:35.393 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:35 compute-0 nova_compute[257802]: 2025-10-02 13:00:35.393 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[144dbf4e-b65f-4a07-8e6e-baf922e14bd1]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:35 compute-0 nova_compute[257802]: 2025-10-02 13:00:35.394 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:35 compute-0 nova_compute[257802]: 2025-10-02 13:00:35.403 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:35 compute-0 nova_compute[257802]: 2025-10-02 13:00:35.403 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[465d716b-bba5-4f25-a05b-db06e725336a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:35 compute-0 nova_compute[257802]: 2025-10-02 13:00:35.405 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:35 compute-0 nova_compute[257802]: 2025-10-02 13:00:35.413 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:35 compute-0 nova_compute[257802]: 2025-10-02 13:00:35.414 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[1a4039f6-38c5-4a5b-9607-640e855ae159]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:35 compute-0 nova_compute[257802]: 2025-10-02 13:00:35.415 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[6e9a2204-aa1b-4595-b5b5-36cae78c1ece]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:35 compute-0 nova_compute[257802]: 2025-10-02 13:00:35.415 2 DEBUG oslo_concurrency.processutils [None req-6a377d52-c9e1-409c-9d81-f4517defa759 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:35 compute-0 nova_compute[257802]: 2025-10-02 13:00:35.444 2 DEBUG oslo_concurrency.processutils [None req-6a377d52-c9e1-409c-9d81-f4517defa759 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:35 compute-0 nova_compute[257802]: 2025-10-02 13:00:35.447 2 DEBUG os_brick.initiator.connectors.lightos [None req-6a377d52-c9e1-409c-9d81-f4517defa759 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 13:00:35 compute-0 nova_compute[257802]: 2025-10-02 13:00:35.447 2 DEBUG os_brick.initiator.connectors.lightos [None req-6a377d52-c9e1-409c-9d81-f4517defa759 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 13:00:35 compute-0 nova_compute[257802]: 2025-10-02 13:00:35.448 2 DEBUG os_brick.initiator.connectors.lightos [None req-6a377d52-c9e1-409c-9d81-f4517defa759 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 13:00:35 compute-0 nova_compute[257802]: 2025-10-02 13:00:35.448 2 DEBUG os_brick.utils [None req-6a377d52-c9e1-409c-9d81-f4517defa759 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] <== get_connector_properties: return (65ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 13:00:35 compute-0 nova_compute[257802]: 2025-10-02 13:00:35.448 2 DEBUG nova.virt.block_device [None req-6a377d52-c9e1-409c-9d81-f4517defa759 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Updating existing volume attachment record: cfb3de7b-27b7-48ae-aaba-65f3a1c4ead2 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 13:00:35 compute-0 ceph-mon[73607]: pgmap v3057: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 77 op/s
Oct 02 13:00:35 compute-0 nova_compute[257802]: 2025-10-02 13:00:35.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:35 compute-0 podman[385758]: 2025-10-02 13:00:35.950663339 +0000 UTC m=+0.091440717 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 13:00:36 compute-0 nova_compute[257802]: 2025-10-02 13:00:36.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:00:36 compute-0 nova_compute[257802]: 2025-10-02 13:00:36.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:00:36 compute-0 nova_compute[257802]: 2025-10-02 13:00:36.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:00:36 compute-0 nova_compute[257802]: 2025-10-02 13:00:36.174 2 DEBUG nova.objects.instance [None req-6a377d52-c9e1-409c-9d81-f4517defa759 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'flavor' on Instance uuid 15a3e611-0b02-417e-96b9-552573fac39e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:00:36 compute-0 nova_compute[257802]: 2025-10-02 13:00:36.195 2 DEBUG nova.virt.libvirt.driver [None req-6a377d52-c9e1-409c-9d81-f4517defa759 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Attempting to attach volume 34e323fe-cd44-4887-8162-f6a31b8bc60c with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 13:00:36 compute-0 nova_compute[257802]: 2025-10-02 13:00:36.197 2 DEBUG nova.virt.libvirt.guest [None req-6a377d52-c9e1-409c-9d81-f4517defa759 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 13:00:36 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:00:36 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-34e323fe-cd44-4887-8162-f6a31b8bc60c">
Oct 02 13:00:36 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:00:36 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:00:36 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:00:36 compute-0 nova_compute[257802]:   </source>
Oct 02 13:00:36 compute-0 nova_compute[257802]:   <auth username="openstack">
Oct 02 13:00:36 compute-0 nova_compute[257802]:     <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 13:00:36 compute-0 nova_compute[257802]:   </auth>
Oct 02 13:00:36 compute-0 nova_compute[257802]:   <target dev="vdc" bus="virtio"/>
Oct 02 13:00:36 compute-0 nova_compute[257802]:   <serial>34e323fe-cd44-4887-8162-f6a31b8bc60c</serial>
Oct 02 13:00:36 compute-0 nova_compute[257802]: </disk>
Oct 02 13:00:36 compute-0 nova_compute[257802]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 13:00:36 compute-0 nova_compute[257802]: 2025-10-02 13:00:36.296 2 DEBUG nova.virt.libvirt.driver [None req-6a377d52-c9e1-409c-9d81-f4517defa759 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:00:36 compute-0 nova_compute[257802]: 2025-10-02 13:00:36.296 2 DEBUG nova.virt.libvirt.driver [None req-6a377d52-c9e1-409c-9d81-f4517defa759 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:00:36 compute-0 nova_compute[257802]: 2025-10-02 13:00:36.296 2 DEBUG nova.virt.libvirt.driver [None req-6a377d52-c9e1-409c-9d81-f4517defa759 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:00:36 compute-0 nova_compute[257802]: 2025-10-02 13:00:36.297 2 DEBUG nova.virt.libvirt.driver [None req-6a377d52-c9e1-409c-9d81-f4517defa759 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:00:36 compute-0 nova_compute[257802]: 2025-10-02 13:00:36.297 2 DEBUG nova.virt.libvirt.driver [None req-6a377d52-c9e1-409c-9d81-f4517defa759 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] No VIF found with MAC fa:16:3e:e6:54:06, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:00:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3058: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.7 KiB/s wr, 72 op/s
Oct 02 13:00:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:36.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:36 compute-0 nova_compute[257802]: 2025-10-02 13:00:36.464 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-15a3e611-0b02-417e-96b9-552573fac39e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:00:36 compute-0 nova_compute[257802]: 2025-10-02 13:00:36.464 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-15a3e611-0b02-417e-96b9-552573fac39e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:00:36 compute-0 nova_compute[257802]: 2025-10-02 13:00:36.464 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:00:36 compute-0 nova_compute[257802]: 2025-10-02 13:00:36.465 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid 15a3e611-0b02-417e-96b9-552573fac39e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:00:36 compute-0 nova_compute[257802]: 2025-10-02 13:00:36.501 2 DEBUG oslo_concurrency.lockutils [None req-6a377d52-c9e1-409c-9d81-f4517defa759 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "15a3e611-0b02-417e-96b9-552573fac39e" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.296s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:37.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/998983341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:37 compute-0 sudo[385806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:37 compute-0 sudo[385806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:37 compute-0 sudo[385806]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:37 compute-0 sudo[385831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:00:37 compute-0 sudo[385831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:37 compute-0 sudo[385831]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:37 compute-0 sudo[385856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:37 compute-0 sudo[385856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:37 compute-0 sudo[385856]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:37 compute-0 sudo[385881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:00:37 compute-0 sudo[385881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:37 compute-0 nova_compute[257802]: 2025-10-02 13:00:37.647 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Updating instance_info_cache with network_info: [{"id": "856edb73-6931-4d42-b614-8212ebb68a63", "address": "fa:16:3e:e6:54:06", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap856edb73-69", "ovs_interfaceid": "856edb73-6931-4d42-b614-8212ebb68a63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:00:37 compute-0 nova_compute[257802]: 2025-10-02 13:00:37.729 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-15a3e611-0b02-417e-96b9-552573fac39e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:00:37 compute-0 nova_compute[257802]: 2025-10-02 13:00:37.730 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:00:37 compute-0 sudo[385881]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:00:38 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:00:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:00:38 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:00:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:00:38 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:00:38 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev ed5ffcd7-92f8-4e20-a889-dfb95c699f64 does not exist
Oct 02 13:00:38 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev e658bb9e-1226-4748-8ab1-b37c06263c98 does not exist
Oct 02 13:00:38 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 2e23ceba-4550-414a-85fd-a6aa5f9fa73e does not exist
Oct 02 13:00:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:00:38 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:00:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:00:38 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:00:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:00:38 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:00:38 compute-0 sudo[385937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:38 compute-0 sudo[385937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:38 compute-0 sudo[385937]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:38 compute-0 ceph-mon[73607]: pgmap v3058: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.7 KiB/s wr, 72 op/s
Oct 02 13:00:38 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:00:38 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:00:38 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:00:38 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:00:38 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:00:38 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:00:38 compute-0 sudo[385962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:00:38 compute-0 sudo[385962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:38 compute-0 sudo[385962]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:38 compute-0 sudo[385987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:38 compute-0 sudo[385987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:38 compute-0 sudo[385987]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:38 compute-0 sudo[386012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:00:38 compute-0 sudo[386012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:38 compute-0 nova_compute[257802]: 2025-10-02 13:00:38.337 2 DEBUG oslo_concurrency.lockutils [None req-8d926b88-b348-4184-b3ab-bae810021a15 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "15a3e611-0b02-417e-96b9-552573fac39e" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:38 compute-0 nova_compute[257802]: 2025-10-02 13:00:38.337 2 DEBUG oslo_concurrency.lockutils [None req-8d926b88-b348-4184-b3ab-bae810021a15 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "15a3e611-0b02-417e-96b9-552573fac39e" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:38 compute-0 nova_compute[257802]: 2025-10-02 13:00:38.350 2 INFO nova.compute.manager [None req-8d926b88-b348-4184-b3ab-bae810021a15 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Detaching volume e20bbad6-3733-408d-b226-999bd1df2110
Oct 02 13:00:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3059: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.3 KiB/s wr, 66 op/s
Oct 02 13:00:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:38.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:38 compute-0 nova_compute[257802]: 2025-10-02 13:00:38.471 2 INFO nova.virt.block_device [None req-8d926b88-b348-4184-b3ab-bae810021a15 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Attempting to driver detach volume e20bbad6-3733-408d-b226-999bd1df2110 from mountpoint /dev/vdb
Oct 02 13:00:38 compute-0 nova_compute[257802]: 2025-10-02 13:00:38.482 2 DEBUG nova.virt.libvirt.driver [None req-8d926b88-b348-4184-b3ab-bae810021a15 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Attempting to detach device vdb from instance 15a3e611-0b02-417e-96b9-552573fac39e from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 13:00:38 compute-0 nova_compute[257802]: 2025-10-02 13:00:38.483 2 DEBUG nova.virt.libvirt.guest [None req-8d926b88-b348-4184-b3ab-bae810021a15 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 13:00:38 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:00:38 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-e20bbad6-3733-408d-b226-999bd1df2110">
Oct 02 13:00:38 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:00:38 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:00:38 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:00:38 compute-0 nova_compute[257802]:   </source>
Oct 02 13:00:38 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:00:38 compute-0 nova_compute[257802]:   <serial>e20bbad6-3733-408d-b226-999bd1df2110</serial>
Oct 02 13:00:38 compute-0 nova_compute[257802]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 13:00:38 compute-0 nova_compute[257802]: </disk>
Oct 02 13:00:38 compute-0 nova_compute[257802]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 13:00:38 compute-0 nova_compute[257802]: 2025-10-02 13:00:38.516 2 INFO nova.virt.libvirt.driver [None req-8d926b88-b348-4184-b3ab-bae810021a15 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Successfully detached device vdb from instance 15a3e611-0b02-417e-96b9-552573fac39e from the persistent domain config.
Oct 02 13:00:38 compute-0 nova_compute[257802]: 2025-10-02 13:00:38.516 2 DEBUG nova.virt.libvirt.driver [None req-8d926b88-b348-4184-b3ab-bae810021a15 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 15a3e611-0b02-417e-96b9-552573fac39e from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 13:00:38 compute-0 nova_compute[257802]: 2025-10-02 13:00:38.517 2 DEBUG nova.virt.libvirt.guest [None req-8d926b88-b348-4184-b3ab-bae810021a15 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 13:00:38 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:00:38 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-e20bbad6-3733-408d-b226-999bd1df2110">
Oct 02 13:00:38 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:00:38 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:00:38 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:00:38 compute-0 nova_compute[257802]:   </source>
Oct 02 13:00:38 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:00:38 compute-0 nova_compute[257802]:   <serial>e20bbad6-3733-408d-b226-999bd1df2110</serial>
Oct 02 13:00:38 compute-0 nova_compute[257802]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 13:00:38 compute-0 nova_compute[257802]: </disk>
Oct 02 13:00:38 compute-0 nova_compute[257802]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 13:00:38 compute-0 nova_compute[257802]: 2025-10-02 13:00:38.624 2 DEBUG nova.virt.libvirt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Received event <DeviceRemovedEvent: 1759410038.6237545, 15a3e611-0b02-417e-96b9-552573fac39e => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 13:00:38 compute-0 nova_compute[257802]: 2025-10-02 13:00:38.626 2 DEBUG nova.virt.libvirt.driver [None req-8d926b88-b348-4184-b3ab-bae810021a15 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 15a3e611-0b02-417e-96b9-552573fac39e _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 13:00:38 compute-0 nova_compute[257802]: 2025-10-02 13:00:38.629 2 INFO nova.virt.libvirt.driver [None req-8d926b88-b348-4184-b3ab-bae810021a15 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Successfully detached device vdb from instance 15a3e611-0b02-417e-96b9-552573fac39e from the live domain config.
Oct 02 13:00:38 compute-0 podman[386077]: 2025-10-02 13:00:38.704549058 +0000 UTC m=+0.083527853 container create 4c77d49dcba2e8dcdecc30605357d60fd8ffbb8c6f3c1224a494634e7ad64710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bohr, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 13:00:38 compute-0 podman[386077]: 2025-10-02 13:00:38.657317348 +0000 UTC m=+0.036296163 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:00:38 compute-0 nova_compute[257802]: 2025-10-02 13:00:38.757 2 DEBUG nova.objects.instance [None req-8d926b88-b348-4184-b3ab-bae810021a15 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'flavor' on Instance uuid 15a3e611-0b02-417e-96b9-552573fac39e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:00:38 compute-0 systemd[1]: Started libpod-conmon-4c77d49dcba2e8dcdecc30605357d60fd8ffbb8c6f3c1224a494634e7ad64710.scope.
Oct 02 13:00:38 compute-0 nova_compute[257802]: 2025-10-02 13:00:38.793 2 DEBUG oslo_concurrency.lockutils [None req-8d926b88-b348-4184-b3ab-bae810021a15 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "15a3e611-0b02-417e-96b9-552573fac39e" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.456s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:00:38 compute-0 podman[386077]: 2025-10-02 13:00:38.863699528 +0000 UTC m=+0.242678323 container init 4c77d49dcba2e8dcdecc30605357d60fd8ffbb8c6f3c1224a494634e7ad64710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bohr, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 13:00:38 compute-0 podman[386077]: 2025-10-02 13:00:38.880319547 +0000 UTC m=+0.259298372 container start 4c77d49dcba2e8dcdecc30605357d60fd8ffbb8c6f3c1224a494634e7ad64710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bohr, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 13:00:38 compute-0 determined_bohr[386096]: 167 167
Oct 02 13:00:38 compute-0 systemd[1]: libpod-4c77d49dcba2e8dcdecc30605357d60fd8ffbb8c6f3c1224a494634e7ad64710.scope: Deactivated successfully.
Oct 02 13:00:38 compute-0 conmon[386096]: conmon 4c77d49dcba2e8dcdecc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4c77d49dcba2e8dcdecc30605357d60fd8ffbb8c6f3c1224a494634e7ad64710.scope/container/memory.events
Oct 02 13:00:38 compute-0 podman[386077]: 2025-10-02 13:00:38.91141389 +0000 UTC m=+0.290392715 container attach 4c77d49dcba2e8dcdecc30605357d60fd8ffbb8c6f3c1224a494634e7ad64710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bohr, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 13:00:38 compute-0 podman[386077]: 2025-10-02 13:00:38.912217819 +0000 UTC m=+0.291196624 container died 4c77d49dcba2e8dcdecc30605357d60fd8ffbb8c6f3c1224a494634e7ad64710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:00:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac3978499c6cde0fd9758d6a414fa473477d825c89b1a05596ebf9f8bbc776bc-merged.mount: Deactivated successfully.
Oct 02 13:00:39 compute-0 podman[386077]: 2025-10-02 13:00:39.058865602 +0000 UTC m=+0.437844397 container remove 4c77d49dcba2e8dcdecc30605357d60fd8ffbb8c6f3c1224a494634e7ad64710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:00:39 compute-0 systemd[1]: libpod-conmon-4c77d49dcba2e8dcdecc30605357d60fd8ffbb8c6f3c1224a494634e7ad64710.scope: Deactivated successfully.
Oct 02 13:00:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:39.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:39 compute-0 ceph-mon[73607]: pgmap v3059: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.3 KiB/s wr, 66 op/s
Oct 02 13:00:39 compute-0 podman[386123]: 2025-10-02 13:00:39.293394664 +0000 UTC m=+0.069867378 container create 04fddda5376ffe8cfda5a584e99d07aff145734b21f409487a894a228a88f9dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_agnesi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 13:00:39 compute-0 podman[386123]: 2025-10-02 13:00:39.260334311 +0000 UTC m=+0.036807065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:00:39 compute-0 systemd[1]: Started libpod-conmon-04fddda5376ffe8cfda5a584e99d07aff145734b21f409487a894a228a88f9dd.scope.
Oct 02 13:00:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:00:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6917370ec422b29a7dcb34b29f21abc4c04c475333b57830a29069eb46d34b7c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6917370ec422b29a7dcb34b29f21abc4c04c475333b57830a29069eb46d34b7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6917370ec422b29a7dcb34b29f21abc4c04c475333b57830a29069eb46d34b7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6917370ec422b29a7dcb34b29f21abc4c04c475333b57830a29069eb46d34b7c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6917370ec422b29a7dcb34b29f21abc4c04c475333b57830a29069eb46d34b7c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:39 compute-0 podman[386123]: 2025-10-02 13:00:39.447797706 +0000 UTC m=+0.224270450 container init 04fddda5376ffe8cfda5a584e99d07aff145734b21f409487a894a228a88f9dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_agnesi, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 13:00:39 compute-0 podman[386123]: 2025-10-02 13:00:39.454678625 +0000 UTC m=+0.231151379 container start 04fddda5376ffe8cfda5a584e99d07aff145734b21f409487a894a228a88f9dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_agnesi, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:00:39 compute-0 podman[386123]: 2025-10-02 13:00:39.467460579 +0000 UTC m=+0.243933693 container attach 04fddda5376ffe8cfda5a584e99d07aff145734b21f409487a894a228a88f9dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_agnesi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:00:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:40 compute-0 nova_compute[257802]: 2025-10-02 13:00:40.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:40 compute-0 nova_compute[257802]: 2025-10-02 13:00:40.176 2 DEBUG oslo_concurrency.lockutils [None req-72830006-31da-4318-838b-f99ec75ba9c1 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "15a3e611-0b02-417e-96b9-552573fac39e" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:40 compute-0 nova_compute[257802]: 2025-10-02 13:00:40.177 2 DEBUG oslo_concurrency.lockutils [None req-72830006-31da-4318-838b-f99ec75ba9c1 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "15a3e611-0b02-417e-96b9-552573fac39e" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:40 compute-0 nova_compute[257802]: 2025-10-02 13:00:40.204 2 INFO nova.compute.manager [None req-72830006-31da-4318-838b-f99ec75ba9c1 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Detaching volume 34e323fe-cd44-4887-8162-f6a31b8bc60c
Oct 02 13:00:40 compute-0 distracted_agnesi[386139]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:00:40 compute-0 distracted_agnesi[386139]: --> relative data size: 1.0
Oct 02 13:00:40 compute-0 distracted_agnesi[386139]: --> All data devices are unavailable
Oct 02 13:00:40 compute-0 systemd[1]: libpod-04fddda5376ffe8cfda5a584e99d07aff145734b21f409487a894a228a88f9dd.scope: Deactivated successfully.
Oct 02 13:00:40 compute-0 podman[386123]: 2025-10-02 13:00:40.268401404 +0000 UTC m=+1.044874118 container died 04fddda5376ffe8cfda5a584e99d07aff145734b21f409487a894a228a88f9dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_agnesi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 13:00:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-6917370ec422b29a7dcb34b29f21abc4c04c475333b57830a29069eb46d34b7c-merged.mount: Deactivated successfully.
Oct 02 13:00:40 compute-0 podman[386123]: 2025-10-02 13:00:40.330719525 +0000 UTC m=+1.107192239 container remove 04fddda5376ffe8cfda5a584e99d07aff145734b21f409487a894a228a88f9dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_agnesi, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 13:00:40 compute-0 systemd[1]: libpod-conmon-04fddda5376ffe8cfda5a584e99d07aff145734b21f409487a894a228a88f9dd.scope: Deactivated successfully.
Oct 02 13:00:40 compute-0 nova_compute[257802]: 2025-10-02 13:00:40.345 2 INFO nova.virt.block_device [None req-72830006-31da-4318-838b-f99ec75ba9c1 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Attempting to driver detach volume 34e323fe-cd44-4887-8162-f6a31b8bc60c from mountpoint /dev/vdc
Oct 02 13:00:40 compute-0 nova_compute[257802]: 2025-10-02 13:00:40.359 2 DEBUG nova.virt.libvirt.driver [None req-72830006-31da-4318-838b-f99ec75ba9c1 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Attempting to detach device vdc from instance 15a3e611-0b02-417e-96b9-552573fac39e from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 13:00:40 compute-0 nova_compute[257802]: 2025-10-02 13:00:40.360 2 DEBUG nova.virt.libvirt.guest [None req-72830006-31da-4318-838b-f99ec75ba9c1 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 13:00:40 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:00:40 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-34e323fe-cd44-4887-8162-f6a31b8bc60c">
Oct 02 13:00:40 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:00:40 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:00:40 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:00:40 compute-0 nova_compute[257802]:   </source>
Oct 02 13:00:40 compute-0 nova_compute[257802]:   <target dev="vdc" bus="virtio"/>
Oct 02 13:00:40 compute-0 nova_compute[257802]:   <serial>34e323fe-cd44-4887-8162-f6a31b8bc60c</serial>
Oct 02 13:00:40 compute-0 nova_compute[257802]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Oct 02 13:00:40 compute-0 nova_compute[257802]: </disk>
Oct 02 13:00:40 compute-0 nova_compute[257802]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 13:00:40 compute-0 nova_compute[257802]: 2025-10-02 13:00:40.370 2 INFO nova.virt.libvirt.driver [None req-72830006-31da-4318-838b-f99ec75ba9c1 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Successfully detached device vdc from instance 15a3e611-0b02-417e-96b9-552573fac39e from the persistent domain config.
Oct 02 13:00:40 compute-0 nova_compute[257802]: 2025-10-02 13:00:40.371 2 DEBUG nova.virt.libvirt.driver [None req-72830006-31da-4318-838b-f99ec75ba9c1 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance 15a3e611-0b02-417e-96b9-552573fac39e from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 13:00:40 compute-0 sudo[386012]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:40 compute-0 nova_compute[257802]: 2025-10-02 13:00:40.371 2 DEBUG nova.virt.libvirt.guest [None req-72830006-31da-4318-838b-f99ec75ba9c1 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 13:00:40 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:00:40 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-34e323fe-cd44-4887-8162-f6a31b8bc60c">
Oct 02 13:00:40 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:00:40 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:00:40 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:00:40 compute-0 nova_compute[257802]:   </source>
Oct 02 13:00:40 compute-0 nova_compute[257802]:   <target dev="vdc" bus="virtio"/>
Oct 02 13:00:40 compute-0 nova_compute[257802]:   <serial>34e323fe-cd44-4887-8162-f6a31b8bc60c</serial>
Oct 02 13:00:40 compute-0 nova_compute[257802]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Oct 02 13:00:40 compute-0 nova_compute[257802]: </disk>
Oct 02 13:00:40 compute-0 nova_compute[257802]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 13:00:40 compute-0 sudo[386168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:40 compute-0 sudo[386168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:40 compute-0 sudo[386168]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3060: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 02 13:00:40 compute-0 sudo[386189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:40 compute-0 sudo[386189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:40 compute-0 sudo[386189]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:00:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:40.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:00:40 compute-0 nova_compute[257802]: 2025-10-02 13:00:40.485 2 DEBUG nova.virt.libvirt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Received event <DeviceRemovedEvent: 1759410040.4856188, 15a3e611-0b02-417e-96b9-552573fac39e => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 13:00:40 compute-0 nova_compute[257802]: 2025-10-02 13:00:40.488 2 DEBUG nova.virt.libvirt.driver [None req-72830006-31da-4318-838b-f99ec75ba9c1 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance 15a3e611-0b02-417e-96b9-552573fac39e _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 13:00:40 compute-0 sudo[386216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:40 compute-0 nova_compute[257802]: 2025-10-02 13:00:40.491 2 INFO nova.virt.libvirt.driver [None req-72830006-31da-4318-838b-f99ec75ba9c1 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Successfully detached device vdc from instance 15a3e611-0b02-417e-96b9-552573fac39e from the live domain config.
Oct 02 13:00:40 compute-0 sudo[386216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:40 compute-0 sudo[386216]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:40 compute-0 sudo[386231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:00:40 compute-0 sudo[386231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:40 compute-0 sudo[386231]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:40 compute-0 sudo[386270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:40 compute-0 sudo[386270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:40 compute-0 sudo[386270]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:40 compute-0 sudo[386295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 13:00:40 compute-0 sudo[386295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:40 compute-0 nova_compute[257802]: 2025-10-02 13:00:40.713 2 DEBUG nova.objects.instance [None req-72830006-31da-4318-838b-f99ec75ba9c1 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'flavor' on Instance uuid 15a3e611-0b02-417e-96b9-552573fac39e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:00:40 compute-0 nova_compute[257802]: 2025-10-02 13:00:40.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:40 compute-0 nova_compute[257802]: 2025-10-02 13:00:40.785 2 DEBUG oslo_concurrency.lockutils [None req-72830006-31da-4318-838b-f99ec75ba9c1 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "15a3e611-0b02-417e-96b9-552573fac39e" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:41 compute-0 podman[386362]: 2025-10-02 13:00:41.007704515 +0000 UTC m=+0.035930084 container create a9ea8179edcbf477f3a6ac5b4bae8aa8027806f72d004b484217d07dd679a6b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 13:00:41 compute-0 systemd[1]: Started libpod-conmon-a9ea8179edcbf477f3a6ac5b4bae8aa8027806f72d004b484217d07dd679a6b6.scope.
Oct 02 13:00:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:00:41 compute-0 podman[386362]: 2025-10-02 13:00:41.082630586 +0000 UTC m=+0.110856155 container init a9ea8179edcbf477f3a6ac5b4bae8aa8027806f72d004b484217d07dd679a6b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wilbur, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 13:00:41 compute-0 podman[386362]: 2025-10-02 13:00:40.992960752 +0000 UTC m=+0.021186341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:00:41 compute-0 podman[386362]: 2025-10-02 13:00:41.089966616 +0000 UTC m=+0.118192185 container start a9ea8179edcbf477f3a6ac5b4bae8aa8027806f72d004b484217d07dd679a6b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wilbur, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Oct 02 13:00:41 compute-0 podman[386362]: 2025-10-02 13:00:41.092656562 +0000 UTC m=+0.120882131 container attach a9ea8179edcbf477f3a6ac5b4bae8aa8027806f72d004b484217d07dd679a6b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wilbur, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 13:00:41 compute-0 friendly_wilbur[386378]: 167 167
Oct 02 13:00:41 compute-0 systemd[1]: libpod-a9ea8179edcbf477f3a6ac5b4bae8aa8027806f72d004b484217d07dd679a6b6.scope: Deactivated successfully.
Oct 02 13:00:41 compute-0 podman[386362]: 2025-10-02 13:00:41.094342493 +0000 UTC m=+0.122568062 container died a9ea8179edcbf477f3a6ac5b4bae8aa8027806f72d004b484217d07dd679a6b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wilbur, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:00:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8e286a78cc6d4be8dee24883fd0ce28c14ba6d4264e9602804785f8cbdc347d-merged.mount: Deactivated successfully.
Oct 02 13:00:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:41.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:41 compute-0 podman[386362]: 2025-10-02 13:00:41.134405888 +0000 UTC m=+0.162631467 container remove a9ea8179edcbf477f3a6ac5b4bae8aa8027806f72d004b484217d07dd679a6b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct 02 13:00:41 compute-0 systemd[1]: libpod-conmon-a9ea8179edcbf477f3a6ac5b4bae8aa8027806f72d004b484217d07dd679a6b6.scope: Deactivated successfully.
Oct 02 13:00:41 compute-0 podman[386401]: 2025-10-02 13:00:41.350499046 +0000 UTC m=+0.083806180 container create 23c10ecd1539e5534b4fc9919a50c3cfa97df6983946e955fa8075d95273bdea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mestorf, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 13:00:41 compute-0 podman[386401]: 2025-10-02 13:00:41.290213495 +0000 UTC m=+0.023520659 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:00:41 compute-0 systemd[1]: Started libpod-conmon-23c10ecd1539e5534b4fc9919a50c3cfa97df6983946e955fa8075d95273bdea.scope.
Oct 02 13:00:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:00:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/829711ab35cbef670c7c08c6e9df03b435268d897a5b832ac4a0e9a4604db731/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/829711ab35cbef670c7c08c6e9df03b435268d897a5b832ac4a0e9a4604db731/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/829711ab35cbef670c7c08c6e9df03b435268d897a5b832ac4a0e9a4604db731/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/829711ab35cbef670c7c08c6e9df03b435268d897a5b832ac4a0e9a4604db731/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:41 compute-0 podman[386401]: 2025-10-02 13:00:41.467245983 +0000 UTC m=+0.200553147 container init 23c10ecd1539e5534b4fc9919a50c3cfa97df6983946e955fa8075d95273bdea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 13:00:41 compute-0 podman[386401]: 2025-10-02 13:00:41.473222251 +0000 UTC m=+0.206529385 container start 23c10ecd1539e5534b4fc9919a50c3cfa97df6983946e955fa8075d95273bdea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mestorf, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:00:41 compute-0 podman[386401]: 2025-10-02 13:00:41.47645822 +0000 UTC m=+0.209765354 container attach 23c10ecd1539e5534b4fc9919a50c3cfa97df6983946e955fa8075d95273bdea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 13:00:41 compute-0 ceph-mon[73607]: pgmap v3060: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.125 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.126 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.126 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.126 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.127 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.259 2 DEBUG oslo_concurrency.lockutils [None req-817001c0-999b-4c7d-af5b-4fe133506bf8 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "15a3e611-0b02-417e-96b9-552573fac39e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.260 2 DEBUG oslo_concurrency.lockutils [None req-817001c0-999b-4c7d-af5b-4fe133506bf8 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "15a3e611-0b02-417e-96b9-552573fac39e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.260 2 DEBUG oslo_concurrency.lockutils [None req-817001c0-999b-4c7d-af5b-4fe133506bf8 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "15a3e611-0b02-417e-96b9-552573fac39e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.260 2 DEBUG oslo_concurrency.lockutils [None req-817001c0-999b-4c7d-af5b-4fe133506bf8 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "15a3e611-0b02-417e-96b9-552573fac39e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.261 2 DEBUG oslo_concurrency.lockutils [None req-817001c0-999b-4c7d-af5b-4fe133506bf8 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "15a3e611-0b02-417e-96b9-552573fac39e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.262 2 INFO nova.compute.manager [None req-817001c0-999b-4c7d-af5b-4fe133506bf8 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Terminating instance
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.263 2 DEBUG nova.compute.manager [None req-817001c0-999b-4c7d-af5b-4fe133506bf8 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:00:42 compute-0 musing_mestorf[386417]: {
Oct 02 13:00:42 compute-0 musing_mestorf[386417]:     "1": [
Oct 02 13:00:42 compute-0 musing_mestorf[386417]:         {
Oct 02 13:00:42 compute-0 musing_mestorf[386417]:             "devices": [
Oct 02 13:00:42 compute-0 musing_mestorf[386417]:                 "/dev/loop3"
Oct 02 13:00:42 compute-0 musing_mestorf[386417]:             ],
Oct 02 13:00:42 compute-0 musing_mestorf[386417]:             "lv_name": "ceph_lv0",
Oct 02 13:00:42 compute-0 musing_mestorf[386417]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:00:42 compute-0 musing_mestorf[386417]:             "lv_size": "7511998464",
Oct 02 13:00:42 compute-0 musing_mestorf[386417]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:00:42 compute-0 musing_mestorf[386417]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:00:42 compute-0 musing_mestorf[386417]:             "name": "ceph_lv0",
Oct 02 13:00:42 compute-0 musing_mestorf[386417]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:00:42 compute-0 musing_mestorf[386417]:             "tags": {
Oct 02 13:00:42 compute-0 musing_mestorf[386417]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:00:42 compute-0 musing_mestorf[386417]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:00:42 compute-0 musing_mestorf[386417]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:00:42 compute-0 musing_mestorf[386417]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:00:42 compute-0 musing_mestorf[386417]:                 "ceph.cluster_name": "ceph",
Oct 02 13:00:42 compute-0 musing_mestorf[386417]:                 "ceph.crush_device_class": "",
Oct 02 13:00:42 compute-0 musing_mestorf[386417]:                 "ceph.encrypted": "0",
Oct 02 13:00:42 compute-0 musing_mestorf[386417]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:00:42 compute-0 musing_mestorf[386417]:                 "ceph.osd_id": "1",
Oct 02 13:00:42 compute-0 musing_mestorf[386417]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:00:42 compute-0 musing_mestorf[386417]:                 "ceph.type": "block",
Oct 02 13:00:42 compute-0 musing_mestorf[386417]:                 "ceph.vdo": "0"
Oct 02 13:00:42 compute-0 musing_mestorf[386417]:             },
Oct 02 13:00:42 compute-0 musing_mestorf[386417]:             "type": "block",
Oct 02 13:00:42 compute-0 musing_mestorf[386417]:             "vg_name": "ceph_vg0"
Oct 02 13:00:42 compute-0 musing_mestorf[386417]:         }
Oct 02 13:00:42 compute-0 musing_mestorf[386417]:     ]
Oct 02 13:00:42 compute-0 musing_mestorf[386417]: }
Oct 02 13:00:42 compute-0 systemd[1]: libpod-23c10ecd1539e5534b4fc9919a50c3cfa97df6983946e955fa8075d95273bdea.scope: Deactivated successfully.
Oct 02 13:00:42 compute-0 podman[386401]: 2025-10-02 13:00:42.311247566 +0000 UTC m=+1.044554700 container died 23c10ecd1539e5534b4fc9919a50c3cfa97df6983946e955fa8075d95273bdea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mestorf, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 13:00:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-829711ab35cbef670c7c08c6e9df03b435268d897a5b832ac4a0e9a4604db731-merged.mount: Deactivated successfully.
Oct 02 13:00:42 compute-0 kernel: tap856edb73-69 (unregistering): left promiscuous mode
Oct 02 13:00:42 compute-0 NetworkManager[44987]: <info>  [1759410042.3619] device (tap856edb73-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:00:42 compute-0 ovn_controller[148183]: 2025-10-02T13:00:42Z|00910|binding|INFO|Releasing lport 856edb73-6931-4d42-b614-8212ebb68a63 from this chassis (sb_readonly=0)
Oct 02 13:00:42 compute-0 ovn_controller[148183]: 2025-10-02T13:00:42Z|00911|binding|INFO|Setting lport 856edb73-6931-4d42-b614-8212ebb68a63 down in Southbound
Oct 02 13:00:42 compute-0 ovn_controller[148183]: 2025-10-02T13:00:42Z|00912|binding|INFO|Removing iface tap856edb73-69 ovn-installed in OVS
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:00:42.389 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:54:06 10.100.0.10'], port_security=['fa:16:3e:e6:54:06 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '15a3e611-0b02-417e-96b9-552573fac39e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7168b5b1300495d90592b195824729a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0917dd9d-c9c4-4e03-a8a1-353435192b90', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.238'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb99a427-f5c8-46c4-b56f-12cf288447a9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=856edb73-6931-4d42-b614-8212ebb68a63) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:00:42 compute-0 podman[386401]: 2025-10-02 13:00:42.392852221 +0000 UTC m=+1.126159355 container remove 23c10ecd1539e5534b4fc9919a50c3cfa97df6983946e955fa8075d95273bdea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mestorf, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:00:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:00:42.392 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 856edb73-6931-4d42-b614-8212ebb68a63 in datapath ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f unbound from our chassis
Oct 02 13:00:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:00:42.393 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:00:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:00:42.395 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e0c88dc9-b813-4f8a-a2a7-d9cc1bc2e09e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:00:42.395 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f namespace which is not needed anymore
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:42 compute-0 systemd[1]: libpod-conmon-23c10ecd1539e5534b4fc9919a50c3cfa97df6983946e955fa8075d95273bdea.scope: Deactivated successfully.
Oct 02 13:00:42 compute-0 sudo[386295]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:42 compute-0 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000c4.scope: Deactivated successfully.
Oct 02 13:00:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3061: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 13:00:42 compute-0 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000c4.scope: Consumed 15.788s CPU time.
Oct 02 13:00:42 compute-0 systemd-machined[211836]: Machine qemu-97-instance-000000c4 terminated.
Oct 02 13:00:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:00:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:42.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:00:42 compute-0 sudo[386468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:42 compute-0 sudo[386468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:42 compute-0 sudo[386468]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.503 2 INFO nova.virt.libvirt.driver [-] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Instance destroyed successfully.
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.503 2 DEBUG nova.objects.instance [None req-817001c0-999b-4c7d-af5b-4fe133506bf8 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lazy-loading 'resources' on Instance uuid 15a3e611-0b02-417e-96b9-552573fac39e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.535 2 DEBUG nova.virt.libvirt.vif [None req-817001c0-999b-4c7d-af5b-4fe133506bf8 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:59:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-117296075',display_name='tempest-AttachVolumeTestJSON-server-117296075',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-117296075',id=196,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHgFMRTF59wI+A1RbQFflyWqYsp+1+TLsbzpI2oIY5UH/9iZQfgqf1mLEqowiuTjZbWOpGDDEuLycngHCpVUhA7dxt5wH3ss9pWx7NuWid7bytzU6Gv6Ig5GVF/D0JmNDw==',key_name='tempest-keypair-1172883700',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:59:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e7168b5b1300495d90592b195824729a',ramdisk_id='',reservation_id='r-q9se00qp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-398185718',owner_user_name='tempest-AttachVolumeTestJSON-398185718-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:59:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3299a1aed5af4843a91417a3f181c172',uuid=15a3e611-0b02-417e-96b9-552573fac39e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "856edb73-6931-4d42-b614-8212ebb68a63", "address": "fa:16:3e:e6:54:06", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap856edb73-69", "ovs_interfaceid": "856edb73-6931-4d42-b614-8212ebb68a63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.536 2 DEBUG nova.network.os_vif_util [None req-817001c0-999b-4c7d-af5b-4fe133506bf8 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Converting VIF {"id": "856edb73-6931-4d42-b614-8212ebb68a63", "address": "fa:16:3e:e6:54:06", "network": {"id": "ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1428268404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7168b5b1300495d90592b195824729a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap856edb73-69", "ovs_interfaceid": "856edb73-6931-4d42-b614-8212ebb68a63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.537 2 DEBUG nova.network.os_vif_util [None req-817001c0-999b-4c7d-af5b-4fe133506bf8 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e6:54:06,bridge_name='br-int',has_traffic_filtering=True,id=856edb73-6931-4d42-b614-8212ebb68a63,network=Network(ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap856edb73-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.537 2 DEBUG os_vif [None req-817001c0-999b-4c7d-af5b-4fe133506bf8 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:54:06,bridge_name='br-int',has_traffic_filtering=True,id=856edb73-6931-4d42-b614-8212ebb68a63,network=Network(ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap856edb73-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:00:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:00:42
Oct 02 13:00:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:00:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:00:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', '.mgr', 'images', 'backups']
Oct 02 13:00:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.539 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap856edb73-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.545 2 INFO os_vif [None req-817001c0-999b-4c7d-af5b-4fe133506bf8 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:54:06,bridge_name='br-int',has_traffic_filtering=True,id=856edb73-6931-4d42-b614-8212ebb68a63,network=Network(ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap856edb73-69')
Oct 02 13:00:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:00:42 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1346435653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:42 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[385438]: [NOTICE]   (385442) : haproxy version is 2.8.14-c23fe91
Oct 02 13:00:42 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[385438]: [NOTICE]   (385442) : path to executable is /usr/sbin/haproxy
Oct 02 13:00:42 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[385438]: [WARNING]  (385442) : Exiting Master process...
Oct 02 13:00:42 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[385438]: [ALERT]    (385442) : Current worker (385444) exited with code 143 (Terminated)
Oct 02 13:00:42 compute-0 neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f[385438]: [WARNING]  (385442) : All workers exited. Exiting... (0)
Oct 02 13:00:42 compute-0 systemd[1]: libpod-76de268cc181cffe0c5e3ded3db01db9f068a649a94c6ae78b83a93aec69a01d.scope: Deactivated successfully.
Oct 02 13:00:42 compute-0 sudo[386516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:00:42 compute-0 podman[386514]: 2025-10-02 13:00:42.56701911 +0000 UTC m=+0.051968718 container died 76de268cc181cffe0c5e3ded3db01db9f068a649a94c6ae78b83a93aec69a01d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:00:42 compute-0 sudo[386516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:42 compute-0 sudo[386516]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.580 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-76de268cc181cffe0c5e3ded3db01db9f068a649a94c6ae78b83a93aec69a01d-userdata-shm.mount: Deactivated successfully.
Oct 02 13:00:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-12972fda930fcb04d6398f00e17c96d854a903b248f6c4b0e8ef7eff62c3b137-merged.mount: Deactivated successfully.
Oct 02 13:00:42 compute-0 podman[386514]: 2025-10-02 13:00:42.628714845 +0000 UTC m=+0.113664453 container cleanup 76de268cc181cffe0c5e3ded3db01db9f068a649a94c6ae78b83a93aec69a01d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:00:42 compute-0 sudo[386580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:42 compute-0 sudo[386580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:42 compute-0 sudo[386580]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:42 compute-0 systemd[1]: libpod-conmon-76de268cc181cffe0c5e3ded3db01db9f068a649a94c6ae78b83a93aec69a01d.scope: Deactivated successfully.
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.678 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000c4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.678 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000c4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:00:42 compute-0 sudo[386615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 13:00:42 compute-0 sudo[386615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:42 compute-0 podman[386614]: 2025-10-02 13:00:42.700400096 +0000 UTC m=+0.045443257 container remove 76de268cc181cffe0c5e3ded3db01db9f068a649a94c6ae78b83a93aec69a01d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 02 13:00:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:00:42.708 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ba9de08e-7a5a-4930-a1d8-d6d4a94f8439]: (4, ('Thu Oct  2 01:00:42 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f (76de268cc181cffe0c5e3ded3db01db9f068a649a94c6ae78b83a93aec69a01d)\n76de268cc181cffe0c5e3ded3db01db9f068a649a94c6ae78b83a93aec69a01d\nThu Oct  2 01:00:42 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f (76de268cc181cffe0c5e3ded3db01db9f068a649a94c6ae78b83a93aec69a01d)\n76de268cc181cffe0c5e3ded3db01db9f068a649a94c6ae78b83a93aec69a01d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:00:42.710 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a34f494d-3258-4176-a902-81a11141c6f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:00:42.711 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff8c8423-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:42 compute-0 kernel: tapff8c8423-f0: left promiscuous mode
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.721 2 DEBUG nova.compute.manager [req-a7ecf9ed-b246-4542-b611-29a1bcbfbcd0 req-e08f4de9-fc1c-4e9f-9059-8cb7b15d5745 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Received event network-vif-unplugged-856edb73-6931-4d42-b614-8212ebb68a63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.721 2 DEBUG oslo_concurrency.lockutils [req-a7ecf9ed-b246-4542-b611-29a1bcbfbcd0 req-e08f4de9-fc1c-4e9f-9059-8cb7b15d5745 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "15a3e611-0b02-417e-96b9-552573fac39e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.721 2 DEBUG oslo_concurrency.lockutils [req-a7ecf9ed-b246-4542-b611-29a1bcbfbcd0 req-e08f4de9-fc1c-4e9f-9059-8cb7b15d5745 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "15a3e611-0b02-417e-96b9-552573fac39e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.721 2 DEBUG oslo_concurrency.lockutils [req-a7ecf9ed-b246-4542-b611-29a1bcbfbcd0 req-e08f4de9-fc1c-4e9f-9059-8cb7b15d5745 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "15a3e611-0b02-417e-96b9-552573fac39e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.722 2 DEBUG nova.compute.manager [req-a7ecf9ed-b246-4542-b611-29a1bcbfbcd0 req-e08f4de9-fc1c-4e9f-9059-8cb7b15d5745 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] No waiting events found dispatching network-vif-unplugged-856edb73-6931-4d42-b614-8212ebb68a63 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.722 2 DEBUG nova.compute.manager [req-a7ecf9ed-b246-4542-b611-29a1bcbfbcd0 req-e08f4de9-fc1c-4e9f-9059-8cb7b15d5745 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Received event network-vif-unplugged-856edb73-6931-4d42-b614-8212ebb68a63 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:00:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:00:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:00:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:00:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:00:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:00:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:00:42.730 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[26e03c9d-ff17-4e56-9d2f-1aaaf729ea08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:00:42.759 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c3a3b86c-6bd7-4332-a5ff-c8bffa312f0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:00:42.760 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0c0a700f-b41d-4b9d-b56a-eaba61e11428]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:00:42.777 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[aa856364-a929-4389-8387-44515d8f4928]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 805986, 'reachable_time': 22372, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386655, 'error': None, 'target': 'ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:42 compute-0 systemd[1]: run-netns-ovnmeta\x2dff8c8423\x2df2c6\x2d4e3f\x2d8fd3\x2d2fb6b6292a3f.mount: Deactivated successfully.
Oct 02 13:00:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:00:42.784 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ff8c8423-f2c6-4e3f-8fd3-2fb6b6292a3f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:00:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:00:42.784 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[cfabb468-eed0-4c9e-8366-53e4b0d58f8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.887 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.888 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4240MB free_disk=20.897296905517578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.888 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.888 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.996 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 15a3e611-0b02-417e-96b9-552573fac39e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.997 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:00:42 compute-0 nova_compute[257802]: 2025-10-02 13:00:42.997 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:00:43 compute-0 podman[386699]: 2025-10-02 13:00:43.019441883 +0000 UTC m=+0.037100423 container create 95f7bf61940e52ca1752c4f125e09ef9da031b6cb337d17a2f49a574402cfbb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:00:43 compute-0 nova_compute[257802]: 2025-10-02 13:00:43.023 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing inventories for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 13:00:43 compute-0 nova_compute[257802]: 2025-10-02 13:00:43.055 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating ProviderTree inventory for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 13:00:43 compute-0 nova_compute[257802]: 2025-10-02 13:00:43.055 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating inventory in ProviderTree for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 13:00:43 compute-0 systemd[1]: Started libpod-conmon-95f7bf61940e52ca1752c4f125e09ef9da031b6cb337d17a2f49a574402cfbb9.scope.
Oct 02 13:00:43 compute-0 nova_compute[257802]: 2025-10-02 13:00:43.069 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing aggregate associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 13:00:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:00:43 compute-0 nova_compute[257802]: 2025-10-02 13:00:43.095 2 INFO nova.virt.libvirt.driver [None req-817001c0-999b-4c7d-af5b-4fe133506bf8 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Deleting instance files /var/lib/nova/instances/15a3e611-0b02-417e-96b9-552573fac39e_del
Oct 02 13:00:43 compute-0 nova_compute[257802]: 2025-10-02 13:00:43.096 2 INFO nova.virt.libvirt.driver [None req-817001c0-999b-4c7d-af5b-4fe133506bf8 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Deletion of /var/lib/nova/instances/15a3e611-0b02-417e-96b9-552573fac39e_del complete
Oct 02 13:00:43 compute-0 podman[386699]: 2025-10-02 13:00:43.002847795 +0000 UTC m=+0.020506365 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:00:43 compute-0 nova_compute[257802]: 2025-10-02 13:00:43.099 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing trait associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 13:00:43 compute-0 podman[386699]: 2025-10-02 13:00:43.100627557 +0000 UTC m=+0.118286117 container init 95f7bf61940e52ca1752c4f125e09ef9da031b6cb337d17a2f49a574402cfbb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_snyder, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 13:00:43 compute-0 podman[386699]: 2025-10-02 13:00:43.109000413 +0000 UTC m=+0.126658953 container start 95f7bf61940e52ca1752c4f125e09ef9da031b6cb337d17a2f49a574402cfbb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 13:00:43 compute-0 podman[386699]: 2025-10-02 13:00:43.111960516 +0000 UTC m=+0.129619066 container attach 95f7bf61940e52ca1752c4f125e09ef9da031b6cb337d17a2f49a574402cfbb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_snyder, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:00:43 compute-0 serene_snyder[386715]: 167 167
Oct 02 13:00:43 compute-0 systemd[1]: libpod-95f7bf61940e52ca1752c4f125e09ef9da031b6cb337d17a2f49a574402cfbb9.scope: Deactivated successfully.
Oct 02 13:00:43 compute-0 conmon[386715]: conmon 95f7bf61940e52ca1752 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-95f7bf61940e52ca1752c4f125e09ef9da031b6cb337d17a2f49a574402cfbb9.scope/container/memory.events
Oct 02 13:00:43 compute-0 podman[386699]: 2025-10-02 13:00:43.115754079 +0000 UTC m=+0.133412619 container died 95f7bf61940e52ca1752c4f125e09ef9da031b6cb337d17a2f49a574402cfbb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:00:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:00:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:43.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:00:43 compute-0 nova_compute[257802]: 2025-10-02 13:00:43.143 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ddd19cc35e33fc56b6a98d1b508eac504baf75a6d454575f9dff66f3a3612ca-merged.mount: Deactivated successfully.
Oct 02 13:00:43 compute-0 podman[386699]: 2025-10-02 13:00:43.161311998 +0000 UTC m=+0.178970538 container remove 95f7bf61940e52ca1752c4f125e09ef9da031b6cb337d17a2f49a574402cfbb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_snyder, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:00:43 compute-0 systemd[1]: libpod-conmon-95f7bf61940e52ca1752c4f125e09ef9da031b6cb337d17a2f49a574402cfbb9.scope: Deactivated successfully.
Oct 02 13:00:43 compute-0 nova_compute[257802]: 2025-10-02 13:00:43.182 2 INFO nova.compute.manager [None req-817001c0-999b-4c7d-af5b-4fe133506bf8 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Took 0.92 seconds to destroy the instance on the hypervisor.
Oct 02 13:00:43 compute-0 nova_compute[257802]: 2025-10-02 13:00:43.183 2 DEBUG oslo.service.loopingcall [None req-817001c0-999b-4c7d-af5b-4fe133506bf8 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:00:43 compute-0 nova_compute[257802]: 2025-10-02 13:00:43.183 2 DEBUG nova.compute.manager [-] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:00:43 compute-0 nova_compute[257802]: 2025-10-02 13:00:43.183 2 DEBUG nova.network.neutron [-] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:00:43 compute-0 podman[386748]: 2025-10-02 13:00:43.332633717 +0000 UTC m=+0.045605632 container create 1c65569d9c51ca4548190e28bc164b11e3309fe86a0c3486e87e2ebef59b81eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ramanujan, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:00:43 compute-0 systemd[1]: Started libpod-conmon-1c65569d9c51ca4548190e28bc164b11e3309fe86a0c3486e87e2ebef59b81eb.scope.
Oct 02 13:00:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:00:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e2beef92434a412ac6c0b024cef94d9428903041221cc253ad572dc33d76e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e2beef92434a412ac6c0b024cef94d9428903041221cc253ad572dc33d76e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e2beef92434a412ac6c0b024cef94d9428903041221cc253ad572dc33d76e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e2beef92434a412ac6c0b024cef94d9428903041221cc253ad572dc33d76e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:43 compute-0 podman[386748]: 2025-10-02 13:00:43.315544786 +0000 UTC m=+0.028516711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:00:43 compute-0 podman[386748]: 2025-10-02 13:00:43.412661553 +0000 UTC m=+0.125633488 container init 1c65569d9c51ca4548190e28bc164b11e3309fe86a0c3486e87e2ebef59b81eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:00:43 compute-0 podman[386748]: 2025-10-02 13:00:43.418795173 +0000 UTC m=+0.131767088 container start 1c65569d9c51ca4548190e28bc164b11e3309fe86a0c3486e87e2ebef59b81eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ramanujan, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:00:43 compute-0 podman[386748]: 2025-10-02 13:00:43.421907729 +0000 UTC m=+0.134879644 container attach 1c65569d9c51ca4548190e28bc164b11e3309fe86a0c3486e87e2ebef59b81eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 13:00:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:00:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:00:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:00:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:00:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:00:43 compute-0 ceph-mon[73607]: pgmap v3061: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 13:00:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1346435653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:00:43 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3253781985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:43 compute-0 nova_compute[257802]: 2025-10-02 13:00:43.583 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:43 compute-0 nova_compute[257802]: 2025-10-02 13:00:43.588 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:00:43 compute-0 nova_compute[257802]: 2025-10-02 13:00:43.633 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:00:43 compute-0 nova_compute[257802]: 2025-10-02 13:00:43.692 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:00:43 compute-0 nova_compute[257802]: 2025-10-02 13:00:43.692 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:44 compute-0 keen_ramanujan[386774]: {
Oct 02 13:00:44 compute-0 keen_ramanujan[386774]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 13:00:44 compute-0 keen_ramanujan[386774]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:00:44 compute-0 keen_ramanujan[386774]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:00:44 compute-0 keen_ramanujan[386774]:         "osd_id": 1,
Oct 02 13:00:44 compute-0 keen_ramanujan[386774]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:00:44 compute-0 keen_ramanujan[386774]:         "type": "bluestore"
Oct 02 13:00:44 compute-0 keen_ramanujan[386774]:     }
Oct 02 13:00:44 compute-0 keen_ramanujan[386774]: }
Oct 02 13:00:44 compute-0 systemd[1]: libpod-1c65569d9c51ca4548190e28bc164b11e3309fe86a0c3486e87e2ebef59b81eb.scope: Deactivated successfully.
Oct 02 13:00:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:00:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:00:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:00:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:00:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:00:44 compute-0 podman[386797]: 2025-10-02 13:00:44.255190639 +0000 UTC m=+0.024426251 container died 1c65569d9c51ca4548190e28bc164b11e3309fe86a0c3486e87e2ebef59b81eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ramanujan, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 13:00:44 compute-0 nova_compute[257802]: 2025-10-02 13:00:44.263 2 DEBUG nova.network.neutron [-] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:00:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5e2beef92434a412ac6c0b024cef94d9428903041221cc253ad572dc33d76e7-merged.mount: Deactivated successfully.
Oct 02 13:00:44 compute-0 podman[386797]: 2025-10-02 13:00:44.309583505 +0000 UTC m=+0.078819107 container remove 1c65569d9c51ca4548190e28bc164b11e3309fe86a0c3486e87e2ebef59b81eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ramanujan, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 13:00:44 compute-0 systemd[1]: libpod-conmon-1c65569d9c51ca4548190e28bc164b11e3309fe86a0c3486e87e2ebef59b81eb.scope: Deactivated successfully.
Oct 02 13:00:44 compute-0 nova_compute[257802]: 2025-10-02 13:00:44.328 2 INFO nova.compute.manager [-] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Took 1.14 seconds to deallocate network for instance.
Oct 02 13:00:44 compute-0 nova_compute[257802]: 2025-10-02 13:00:44.334 2 DEBUG nova.compute.manager [req-70b712e2-2109-40fe-b1cc-3ae920c701b6 req-d322be9e-b3d9-4a84-a7df-9420ab04f2cd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Received event network-vif-deleted-856edb73-6931-4d42-b614-8212ebb68a63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:00:44 compute-0 nova_compute[257802]: 2025-10-02 13:00:44.334 2 INFO nova.compute.manager [req-70b712e2-2109-40fe-b1cc-3ae920c701b6 req-d322be9e-b3d9-4a84-a7df-9420ab04f2cd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Neutron deleted interface 856edb73-6931-4d42-b614-8212ebb68a63; detaching it from the instance and deleting it from the info cache
Oct 02 13:00:44 compute-0 nova_compute[257802]: 2025-10-02 13:00:44.334 2 DEBUG nova.network.neutron [req-70b712e2-2109-40fe-b1cc-3ae920c701b6 req-d322be9e-b3d9-4a84-a7df-9420ab04f2cd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:00:44 compute-0 sudo[386615]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:00:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:00:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:00:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:00:44 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev f8f90a34-015d-427b-98c0-920138f9fb72 does not exist
Oct 02 13:00:44 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev d167774a-2e4f-4a95-85c0-b0c7cf120ce9 does not exist
Oct 02 13:00:44 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev b5e3c714-571e-4b36-b56a-f8f1fed6434b does not exist
Oct 02 13:00:44 compute-0 nova_compute[257802]: 2025-10-02 13:00:44.403 2 DEBUG nova.compute.manager [req-70b712e2-2109-40fe-b1cc-3ae920c701b6 req-d322be9e-b3d9-4a84-a7df-9420ab04f2cd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Detach interface failed, port_id=856edb73-6931-4d42-b614-8212ebb68a63, reason: Instance 15a3e611-0b02-417e-96b9-552573fac39e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 13:00:44 compute-0 sudo[386812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:44 compute-0 nova_compute[257802]: 2025-10-02 13:00:44.436 2 DEBUG oslo_concurrency.lockutils [None req-817001c0-999b-4c7d-af5b-4fe133506bf8 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:44 compute-0 nova_compute[257802]: 2025-10-02 13:00:44.436 2 DEBUG oslo_concurrency.lockutils [None req-817001c0-999b-4c7d-af5b-4fe133506bf8 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:44 compute-0 sudo[386812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:44 compute-0 sudo[386812]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3062: 305 pgs: 305 active+clean; 259 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct 02 13:00:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:00:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:44.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:00:44 compute-0 nova_compute[257802]: 2025-10-02 13:00:44.488 2 DEBUG oslo_concurrency.processutils [None req-817001c0-999b-4c7d-af5b-4fe133506bf8 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:44 compute-0 sudo[386837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:00:44 compute-0 sudo[386837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:44 compute-0 sudo[386837]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:44 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3253781985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:00:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:00:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:44 compute-0 nova_compute[257802]: 2025-10-02 13:00:44.796 2 DEBUG nova.compute.manager [req-4d9a3884-c66f-43c8-af54-0c065e12d2dc req-791712af-d8b2-48f8-9dfa-07bfdb3af8ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Received event network-vif-plugged-856edb73-6931-4d42-b614-8212ebb68a63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:00:44 compute-0 nova_compute[257802]: 2025-10-02 13:00:44.797 2 DEBUG oslo_concurrency.lockutils [req-4d9a3884-c66f-43c8-af54-0c065e12d2dc req-791712af-d8b2-48f8-9dfa-07bfdb3af8ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "15a3e611-0b02-417e-96b9-552573fac39e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:44 compute-0 nova_compute[257802]: 2025-10-02 13:00:44.798 2 DEBUG oslo_concurrency.lockutils [req-4d9a3884-c66f-43c8-af54-0c065e12d2dc req-791712af-d8b2-48f8-9dfa-07bfdb3af8ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "15a3e611-0b02-417e-96b9-552573fac39e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:44 compute-0 nova_compute[257802]: 2025-10-02 13:00:44.799 2 DEBUG oslo_concurrency.lockutils [req-4d9a3884-c66f-43c8-af54-0c065e12d2dc req-791712af-d8b2-48f8-9dfa-07bfdb3af8ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "15a3e611-0b02-417e-96b9-552573fac39e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:44 compute-0 nova_compute[257802]: 2025-10-02 13:00:44.800 2 DEBUG nova.compute.manager [req-4d9a3884-c66f-43c8-af54-0c065e12d2dc req-791712af-d8b2-48f8-9dfa-07bfdb3af8ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] No waiting events found dispatching network-vif-plugged-856edb73-6931-4d42-b614-8212ebb68a63 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:00:44 compute-0 nova_compute[257802]: 2025-10-02 13:00:44.800 2 WARNING nova.compute.manager [req-4d9a3884-c66f-43c8-af54-0c065e12d2dc req-791712af-d8b2-48f8-9dfa-07bfdb3af8ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Received unexpected event network-vif-plugged-856edb73-6931-4d42-b614-8212ebb68a63 for instance with vm_state deleted and task_state None.
Oct 02 13:00:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:00:44 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1014335753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:44 compute-0 nova_compute[257802]: 2025-10-02 13:00:44.896 2 DEBUG oslo_concurrency.processutils [None req-817001c0-999b-4c7d-af5b-4fe133506bf8 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:44 compute-0 nova_compute[257802]: 2025-10-02 13:00:44.901 2 DEBUG nova.compute.provider_tree [None req-817001c0-999b-4c7d-af5b-4fe133506bf8 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:00:44 compute-0 nova_compute[257802]: 2025-10-02 13:00:44.980 2 DEBUG nova.scheduler.client.report [None req-817001c0-999b-4c7d-af5b-4fe133506bf8 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:00:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:00:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:45.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:00:45 compute-0 nova_compute[257802]: 2025-10-02 13:00:45.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:45 compute-0 nova_compute[257802]: 2025-10-02 13:00:45.132 2 DEBUG oslo_concurrency.lockutils [None req-817001c0-999b-4c7d-af5b-4fe133506bf8 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:45 compute-0 nova_compute[257802]: 2025-10-02 13:00:45.180 2 INFO nova.scheduler.client.report [None req-817001c0-999b-4c7d-af5b-4fe133506bf8 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Deleted allocations for instance 15a3e611-0b02-417e-96b9-552573fac39e
Oct 02 13:00:45 compute-0 nova_compute[257802]: 2025-10-02 13:00:45.278 2 DEBUG oslo_concurrency.lockutils [None req-817001c0-999b-4c7d-af5b-4fe133506bf8 3299a1aed5af4843a91417a3f181c172 e7168b5b1300495d90592b195824729a - - default default] Lock "15a3e611-0b02-417e-96b9-552573fac39e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.018s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:45 compute-0 ceph-mon[73607]: pgmap v3062: 305 pgs: 305 active+clean; 259 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct 02 13:00:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1014335753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3063: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 311 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Oct 02 13:00:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:46.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:47.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:47 compute-0 nova_compute[257802]: 2025-10-02 13:00:47.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:47 compute-0 ceph-mon[73607]: pgmap v3063: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 311 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Oct 02 13:00:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3064: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 310 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Oct 02 13:00:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:48.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:48 compute-0 nova_compute[257802]: 2025-10-02 13:00:48.692 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:00:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:00:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Cumulative writes: 15K writes, 67K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.02 MB/s
                                           Cumulative WAL: 15K writes, 15K syncs, 1.00 writes per sync, written: 0.10 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1591 writes, 6794 keys, 1591 commit groups, 1.0 writes per commit group, ingest: 10.54 MB, 0.02 MB/s
                                           Interval WAL: 1591 writes, 1591 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     41.5      2.12              0.25        46    0.046       0      0       0.0       0.0
                                             L6      1/0    9.78 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.1    100.5     85.7      5.25              1.27        45    0.117    318K    24K       0.0       0.0
                                            Sum      1/0    9.78 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.1     71.7     73.0      7.37              1.52        91    0.081    318K    24K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.1     97.3     95.3      0.67              0.18        10    0.067     47K   2602       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    100.5     85.7      5.25              1.27        45    0.117    318K    24K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     41.5      2.11              0.25        45    0.047       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.086, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.53 GB write, 0.10 MB/s write, 0.52 GB read, 0.10 MB/s read, 7.4 seconds
                                           Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5581be5e11f0#2 capacity: 304.00 MB usage: 57.42 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000389 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3334,55.08 MB,18.1179%) FilterBlock(92,890.73 KB,0.286137%) IndexBlock(92,1.48 MB,0.485235%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 13:00:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:49.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:49 compute-0 ceph-mon[73607]: pgmap v3064: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 310 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Oct 02 13:00:50 compute-0 nova_compute[257802]: 2025-10-02 13:00:50.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3065: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 311 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Oct 02 13:00:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:00:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:50.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:00:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:00:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:51.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:00:51 compute-0 ceph-mon[73607]: pgmap v3065: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 311 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Oct 02 13:00:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3702376452' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:00:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3702376452' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:00:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3066: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 13 KiB/s wr, 28 op/s
Oct 02 13:00:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:52.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:52 compute-0 nova_compute[257802]: 2025-10-02 13:00:52.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1841137174' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:00:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1841137174' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:00:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1858059662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:53.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:53 compute-0 ceph-mon[73607]: pgmap v3066: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 13 KiB/s wr, 28 op/s
Oct 02 13:00:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3067: 305 pgs: 305 active+clean; 178 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 13 KiB/s wr, 60 op/s
Oct 02 13:00:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:54.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001521641773683231 of space, bias 1.0, pg target 0.4564925321049693 quantized to 32 (current 32)
Oct 02 13:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6485879920435511 quantized to 32 (current 32)
Oct 02 13:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:00:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:00:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/609838238' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:00:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/609838238' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:00:54 compute-0 ceph-mon[73607]: pgmap v3067: 305 pgs: 305 active+clean; 178 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 13 KiB/s wr, 60 op/s
Oct 02 13:00:55 compute-0 nova_compute[257802]: 2025-10-02 13:00:55.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:00:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/811450301' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:00:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:00:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/811450301' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:00:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:55.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/811450301' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:00:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/811450301' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:00:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3068: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 15 KiB/s wr, 85 op/s
Oct 02 13:00:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:56.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:56 compute-0 ceph-mon[73607]: pgmap v3068: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 15 KiB/s wr, 85 op/s
Oct 02 13:00:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:57.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:57 compute-0 nova_compute[257802]: 2025-10-02 13:00:57.501 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410042.4982755, 15a3e611-0b02-417e-96b9-552573fac39e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:00:57 compute-0 nova_compute[257802]: 2025-10-02 13:00:57.502 2 INFO nova.compute.manager [-] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] VM Stopped (Lifecycle Event)
Oct 02 13:00:57 compute-0 nova_compute[257802]: 2025-10-02 13:00:57.538 2 DEBUG nova.compute.manager [None req-35fb8af3-5625-4987-90ca-60838fac143d - - - - - -] [instance: 15a3e611-0b02-417e-96b9-552573fac39e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:00:57 compute-0 nova_compute[257802]: 2025-10-02 13:00:57.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3069: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 1.9 KiB/s wr, 68 op/s
Oct 02 13:00:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:00:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:58.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:00:58 compute-0 nova_compute[257802]: 2025-10-02 13:00:58.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:59 compute-0 nova_compute[257802]: 2025-10-02 13:00:59.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:00:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:00:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:59.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:00:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:59 compute-0 ceph-mon[73607]: pgmap v3069: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 1.9 KiB/s wr, 68 op/s
Oct 02 13:01:00 compute-0 nova_compute[257802]: 2025-10-02 13:01:00.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3070: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 1.9 KiB/s wr, 68 op/s
Oct 02 13:01:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:00.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:00 compute-0 sudo[386893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:00 compute-0 sudo[386893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:00 compute-0 sudo[386893]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:00 compute-0 sudo[386927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:00 compute-0 podman[386919]: 2025-10-02 13:01:00.678620626 +0000 UTC m=+0.057570295 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:01:00 compute-0 sudo[386927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:00 compute-0 sudo[386927]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:00 compute-0 podman[386917]: 2025-10-02 13:01:00.699963751 +0000 UTC m=+0.086188229 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 02 13:01:00 compute-0 podman[386918]: 2025-10-02 13:01:00.700018172 +0000 UTC m=+0.072567074 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:01:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:01.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:01 compute-0 CROND[386999]: (root) CMD (run-parts /etc/cron.hourly)
Oct 02 13:01:01 compute-0 run-parts[387002]: (/etc/cron.hourly) starting 0anacron
Oct 02 13:01:01 compute-0 run-parts[387008]: (/etc/cron.hourly) finished 0anacron
Oct 02 13:01:01 compute-0 CROND[386998]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 02 13:01:01 compute-0 ceph-mon[73607]: pgmap v3070: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 1.9 KiB/s wr, 68 op/s
Oct 02 13:01:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3071: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 1.9 KiB/s wr, 67 op/s
Oct 02 13:01:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:02.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:02 compute-0 nova_compute[257802]: 2025-10-02 13:01:02.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:03.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:03 compute-0 ceph-mon[73607]: pgmap v3071: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 1.9 KiB/s wr, 67 op/s
Oct 02 13:01:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3072: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 1.9 KiB/s wr, 67 op/s
Oct 02 13:01:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:01:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:04.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:01:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:05 compute-0 nova_compute[257802]: 2025-10-02 13:01:05.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:05.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:05 compute-0 ceph-mon[73607]: pgmap v3072: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 1.9 KiB/s wr, 67 op/s
Oct 02 13:01:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3073: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 1.9 KiB/s wr, 36 op/s
Oct 02 13:01:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:06.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:06 compute-0 ceph-mon[73607]: pgmap v3073: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 1.9 KiB/s wr, 36 op/s
Oct 02 13:01:06 compute-0 podman[387012]: 2025-10-02 13:01:06.934590787 +0000 UTC m=+0.075164018 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:01:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:01:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:07.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:01:07 compute-0 nova_compute[257802]: 2025-10-02 13:01:07.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3074: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:08.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:01:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:09.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:01:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:09 compute-0 ceph-mon[73607]: pgmap v3074: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:10 compute-0 nova_compute[257802]: 2025-10-02 13:01:10.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3075: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:10.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:11.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:11 compute-0 ceph-mon[73607]: pgmap v3075: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3076: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:01:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:12.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:01:12 compute-0 nova_compute[257802]: 2025-10-02 13:01:12.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:01:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:01:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:01:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:01:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:01:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:01:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:13.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:13 compute-0 ceph-mon[73607]: pgmap v3076: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:13 compute-0 nova_compute[257802]: 2025-10-02 13:01:13.816 2 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 1.34 sec
Oct 02 13:01:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3077: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:14.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:15 compute-0 nova_compute[257802]: 2025-10-02 13:01:15.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:01:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:15.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:01:15 compute-0 ceph-mon[73607]: pgmap v3077: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3078: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:01:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:16.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:01:17 compute-0 ceph-mon[73607]: pgmap v3078: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:17.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:17 compute-0 nova_compute[257802]: 2025-10-02 13:01:17.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:18 compute-0 nova_compute[257802]: 2025-10-02 13:01:18.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:18 compute-0 nova_compute[257802]: 2025-10-02 13:01:18.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:01:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3759177344' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3079: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:18.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:19.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:01:19.213 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=72, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=71) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:01:19 compute-0 nova_compute[257802]: 2025-10-02 13:01:19.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:01:19.214 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:01:19 compute-0 ceph-mon[73607]: pgmap v3079: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/446484916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:20 compute-0 nova_compute[257802]: 2025-10-02 13:01:20.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3080: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:20.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:20 compute-0 sudo[387045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:20 compute-0 sudo[387045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:20 compute-0 sudo[387045]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:20 compute-0 sudo[387070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:20 compute-0 sudo[387070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:20 compute-0 sudo[387070]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:01:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:21.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:01:21 compute-0 ceph-mon[73607]: pgmap v3080: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3081: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:22.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:22 compute-0 nova_compute[257802]: 2025-10-02 13:01:22.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:23.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:23 compute-0 ceph-mon[73607]: pgmap v3081: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3082: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:24.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:25 compute-0 nova_compute[257802]: 2025-10-02 13:01:25.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:25 compute-0 nova_compute[257802]: 2025-10-02 13:01:25.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:25.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:25 compute-0 ceph-mon[73607]: pgmap v3082: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:26 compute-0 nova_compute[257802]: 2025-10-02 13:01:26.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3083: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:26.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:01:26.983 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:01:26.984 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:01:26.984 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:27 compute-0 nova_compute[257802]: 2025-10-02 13:01:27.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:27.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:27 compute-0 nova_compute[257802]: 2025-10-02 13:01:27.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:27 compute-0 ceph-mon[73607]: pgmap v3083: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:01:28.215 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '72'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:01:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3084: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:01:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:28.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:01:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:01:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:29.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:01:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:29 compute-0 ceph-mon[73607]: pgmap v3084: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:30 compute-0 nova_compute[257802]: 2025-10-02 13:01:30.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3085: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:01:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:30.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:01:30 compute-0 podman[387100]: 2025-10-02 13:01:30.927001428 +0000 UTC m=+0.058218510 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:01:30 compute-0 podman[387102]: 2025-10-02 13:01:30.930740088 +0000 UTC m=+0.056343883 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 13:01:30 compute-0 podman[387101]: 2025-10-02 13:01:30.932300236 +0000 UTC m=+0.062243917 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:01:31 compute-0 nova_compute[257802]: 2025-10-02 13:01:31.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:01:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:31.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:01:31 compute-0 ceph-mon[73607]: pgmap v3085: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3086: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:01:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:32.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:01:32 compute-0 nova_compute[257802]: 2025-10-02 13:01:32.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:33.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:33 compute-0 ceph-mon[73607]: pgmap v3086: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2957230612' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:34 compute-0 nova_compute[257802]: 2025-10-02 13:01:34.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3087: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:01:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:34.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:01:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:35 compute-0 nova_compute[257802]: 2025-10-02 13:01:35.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:01:35.114416) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410095114500, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 2188, "num_deletes": 254, "total_data_size": 3850799, "memory_usage": 3913104, "flush_reason": "Manual Compaction"}
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Oct 02 13:01:35 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/61616564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410095159674, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 3781841, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66256, "largest_seqno": 68443, "table_properties": {"data_size": 3771917, "index_size": 6289, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20823, "raw_average_key_size": 20, "raw_value_size": 3751972, "raw_average_value_size": 3729, "num_data_blocks": 273, "num_entries": 1006, "num_filter_entries": 1006, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409886, "oldest_key_time": 1759409886, "file_creation_time": 1759410095, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 45429 microseconds, and 8844 cpu microseconds.
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:01:35.159854) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 3781841 bytes OK
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:01:35.159908) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:01:35.163613) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:01:35.163650) EVENT_LOG_v1 {"time_micros": 1759410095163640, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:01:35.163672) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 3841865, prev total WAL file size 3841865, number of live WAL files 2.
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:01:35.165628) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(3693KB)], [152(10011KB)]
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410095166153, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 14033140, "oldest_snapshot_seqno": -1}
Oct 02 13:01:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:35.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 9555 keys, 12069423 bytes, temperature: kUnknown
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410095263271, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 12069423, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12007858, "index_size": 36556, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23941, "raw_key_size": 251469, "raw_average_key_size": 26, "raw_value_size": 11840436, "raw_average_value_size": 1239, "num_data_blocks": 1393, "num_entries": 9555, "num_filter_entries": 9555, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759410095, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:01:35.263531) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 12069423 bytes
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:01:35.268156) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 144.4 rd, 124.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 9.8 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 10080, records dropped: 525 output_compression: NoCompression
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:01:35.268195) EVENT_LOG_v1 {"time_micros": 1759410095268180, "job": 94, "event": "compaction_finished", "compaction_time_micros": 97181, "compaction_time_cpu_micros": 26829, "output_level": 6, "num_output_files": 1, "total_output_size": 12069423, "num_input_records": 10080, "num_output_records": 9555, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410095268929, "job": 94, "event": "table_file_deletion", "file_number": 154}
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410095270845, "job": 94, "event": "table_file_deletion", "file_number": 152}
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:01:35.165453) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:01:35.271043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:01:35.271054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:01:35.271056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:01:35.271058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:01:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:01:35.271060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:01:36 compute-0 ceph-mon[73607]: pgmap v3087: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3088: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:36.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:37 compute-0 ceph-mon[73607]: pgmap v3088: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:01:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:37.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:01:37 compute-0 nova_compute[257802]: 2025-10-02 13:01:37.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:37 compute-0 podman[387161]: 2025-10-02 13:01:37.932628483 +0000 UTC m=+0.074547735 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 13:01:38 compute-0 nova_compute[257802]: 2025-10-02 13:01:38.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:38 compute-0 nova_compute[257802]: 2025-10-02 13:01:38.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:01:38 compute-0 nova_compute[257802]: 2025-10-02 13:01:38.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:01:38 compute-0 nova_compute[257802]: 2025-10-02 13:01:38.162 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:01:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3089: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:01:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:38.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:01:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:39.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:39 compute-0 ceph-mon[73607]: pgmap v3089: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:40 compute-0 nova_compute[257802]: 2025-10-02 13:01:40.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3090: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:01:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:40.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:01:40 compute-0 sudo[387190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:40 compute-0 sudo[387190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:40 compute-0 sudo[387190]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:40 compute-0 sudo[387215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:40 compute-0 sudo[387215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:40 compute-0 sudo[387215]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:01:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:41.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:01:41 compute-0 ceph-mon[73607]: pgmap v3090: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3091: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:01:42
Oct 02 13:01:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:01:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:01:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['vms', 'images', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', '.rgw.root', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta']
Oct 02 13:01:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:01:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:42.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:42 compute-0 nova_compute[257802]: 2025-10-02 13:01:42.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:01:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:01:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:01:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:01:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:01:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:01:43 compute-0 nova_compute[257802]: 2025-10-02 13:01:43.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:01:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:43.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:01:43 compute-0 nova_compute[257802]: 2025-10-02 13:01:43.183 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:43 compute-0 nova_compute[257802]: 2025-10-02 13:01:43.183 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:43 compute-0 nova_compute[257802]: 2025-10-02 13:01:43.184 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:43 compute-0 nova_compute[257802]: 2025-10-02 13:01:43.184 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:01:43 compute-0 nova_compute[257802]: 2025-10-02 13:01:43.184 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:01:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:01:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:01:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:01:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:01:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:01:43 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/697057780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:43 compute-0 nova_compute[257802]: 2025-10-02 13:01:43.655 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:43 compute-0 ceph-mon[73607]: pgmap v3091: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/697057780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:43 compute-0 nova_compute[257802]: 2025-10-02 13:01:43.810 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:01:43 compute-0 nova_compute[257802]: 2025-10-02 13:01:43.811 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4264MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:01:43 compute-0 nova_compute[257802]: 2025-10-02 13:01:43.812 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:43 compute-0 nova_compute[257802]: 2025-10-02 13:01:43.812 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:01:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:01:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:01:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:01:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:01:44 compute-0 nova_compute[257802]: 2025-10-02 13:01:44.357 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:01:44 compute-0 nova_compute[257802]: 2025-10-02 13:01:44.358 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:01:44 compute-0 nova_compute[257802]: 2025-10-02 13:01:44.378 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3092: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:44.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:01:44 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2290858359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:44 compute-0 nova_compute[257802]: 2025-10-02 13:01:44.831 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:44 compute-0 sudo[387284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:44 compute-0 nova_compute[257802]: 2025-10-02 13:01:44.838 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:01:44 compute-0 sudo[387284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:44 compute-0 sudo[387284]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:44 compute-0 sudo[387311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:01:44 compute-0 sudo[387311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:44 compute-0 sudo[387311]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:44 compute-0 sudo[387336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:44 compute-0 sudo[387336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:44 compute-0 sudo[387336]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:44 compute-0 nova_compute[257802]: 2025-10-02 13:01:44.969 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:01:45 compute-0 sudo[387361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 13:01:45 compute-0 sudo[387361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:45 compute-0 nova_compute[257802]: 2025-10-02 13:01:45.018 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:01:45 compute-0 nova_compute[257802]: 2025-10-02 13:01:45.018 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.206s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:45 compute-0 nova_compute[257802]: 2025-10-02 13:01:45.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:45.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:45 compute-0 podman[387459]: 2025-10-02 13:01:45.438007828 +0000 UTC m=+0.060621047 container exec 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:01:45 compute-0 podman[387459]: 2025-10-02 13:01:45.541504952 +0000 UTC m=+0.164118161 container exec_died 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:01:45 compute-0 ceph-mon[73607]: pgmap v3092: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2290858359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:46 compute-0 podman[387598]: 2025-10-02 13:01:46.099185013 +0000 UTC m=+0.054196781 container exec 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 13:01:46 compute-0 podman[387598]: 2025-10-02 13:01:46.113140591 +0000 UTC m=+0.068152329 container exec_died 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 13:01:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 13:01:46 compute-0 podman[387664]: 2025-10-02 13:01:46.320507988 +0000 UTC m=+0.055250567 container exec a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, io.buildah.version=1.28.2, description=keepalived for Ceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, distribution-scope=public, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container)
Oct 02 13:01:46 compute-0 podman[387664]: 2025-10-02 13:01:46.332251343 +0000 UTC m=+0.066993902 container exec_died a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, release=1793, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., architecture=x86_64, name=keepalived, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public)
Oct 02 13:01:46 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:01:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 13:01:46 compute-0 sudo[387361]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:46 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:01:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:01:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3093: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:46 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:01:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:01:46 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:01:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:01:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:46.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:01:46 compute-0 sudo[387716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:46 compute-0 sudo[387716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:46 compute-0 sudo[387716]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:46 compute-0 sudo[387741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:01:46 compute-0 sudo[387741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:46 compute-0 sudo[387741]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:46 compute-0 sudo[387767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:46 compute-0 sudo[387767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:46 compute-0 sudo[387767]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:46 compute-0 sudo[387792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:01:46 compute-0 sudo[387792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:01:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:47.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:01:47 compute-0 sudo[387792]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:01:47 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:01:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:01:47 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:01:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:01:47 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:01:47 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev bae5b14c-d79e-4309-be4a-74e2e8875c16 does not exist
Oct 02 13:01:47 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 7ee5b913-7a9f-4983-8660-494b93fb02e2 does not exist
Oct 02 13:01:47 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 60d1c920-87cb-4877-bbe5-c2439a9fe56e does not exist
Oct 02 13:01:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:01:47 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:01:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:01:47 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:01:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:01:47 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:01:47 compute-0 sudo[387848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:47 compute-0 sudo[387848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:47 compute-0 sudo[387848]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:47 compute-0 sudo[387873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:01:47 compute-0 sudo[387873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:47 compute-0 sudo[387873]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:47 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:01:47 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:01:47 compute-0 ceph-mon[73607]: pgmap v3093: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:47 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:01:47 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:01:47 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:01:47 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:01:47 compute-0 nova_compute[257802]: 2025-10-02 13:01:47.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:47 compute-0 sudo[387898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:47 compute-0 sudo[387898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:47 compute-0 sudo[387898]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:47 compute-0 sudo[387923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:01:47 compute-0 sudo[387923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:48 compute-0 podman[387989]: 2025-10-02 13:01:48.026791848 +0000 UTC m=+0.042582471 container create f939a095c46669ee5a675820cf2d4e52664c3afdf3504db5fcbe961a5eace532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 13:01:48 compute-0 systemd[1]: Started libpod-conmon-f939a095c46669ee5a675820cf2d4e52664c3afdf3504db5fcbe961a5eace532.scope.
Oct 02 13:01:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:01:48 compute-0 podman[387989]: 2025-10-02 13:01:48.009651623 +0000 UTC m=+0.025442236 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:01:48 compute-0 podman[387989]: 2025-10-02 13:01:48.112142903 +0000 UTC m=+0.127933536 container init f939a095c46669ee5a675820cf2d4e52664c3afdf3504db5fcbe961a5eace532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ganguly, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 13:01:48 compute-0 podman[387989]: 2025-10-02 13:01:48.120433683 +0000 UTC m=+0.136224286 container start f939a095c46669ee5a675820cf2d4e52664c3afdf3504db5fcbe961a5eace532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ganguly, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:01:48 compute-0 podman[387989]: 2025-10-02 13:01:48.124497742 +0000 UTC m=+0.140288355 container attach f939a095c46669ee5a675820cf2d4e52664c3afdf3504db5fcbe961a5eace532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:01:48 compute-0 xenodochial_ganguly[388005]: 167 167
Oct 02 13:01:48 compute-0 systemd[1]: libpod-f939a095c46669ee5a675820cf2d4e52664c3afdf3504db5fcbe961a5eace532.scope: Deactivated successfully.
Oct 02 13:01:48 compute-0 podman[387989]: 2025-10-02 13:01:48.126225373 +0000 UTC m=+0.142015986 container died f939a095c46669ee5a675820cf2d4e52664c3afdf3504db5fcbe961a5eace532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 13:01:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fce174ab4ad55c9d6fbafbd507114ada445493b3462e215cf598111a3abf18e-merged.mount: Deactivated successfully.
Oct 02 13:01:48 compute-0 podman[387989]: 2025-10-02 13:01:48.171525309 +0000 UTC m=+0.187315922 container remove f939a095c46669ee5a675820cf2d4e52664c3afdf3504db5fcbe961a5eace532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ganguly, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Oct 02 13:01:48 compute-0 systemd[1]: libpod-conmon-f939a095c46669ee5a675820cf2d4e52664c3afdf3504db5fcbe961a5eace532.scope: Deactivated successfully.
Oct 02 13:01:48 compute-0 nova_compute[257802]: 2025-10-02 13:01:48.215 2 DEBUG oslo_concurrency.lockutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "08b29362-e7c1-450f-bf22-95d23c21ff23" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:48 compute-0 nova_compute[257802]: 2025-10-02 13:01:48.217 2 DEBUG oslo_concurrency.lockutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "08b29362-e7c1-450f-bf22-95d23c21ff23" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:48 compute-0 nova_compute[257802]: 2025-10-02 13:01:48.284 2 DEBUG nova.compute.manager [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:01:48 compute-0 podman[388028]: 2025-10-02 13:01:48.391152772 +0000 UTC m=+0.059758286 container create 75e991e2394b32169aef3e50ab7b794ad231af1822cb06495532b931f71c2b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 13:01:48 compute-0 systemd[1]: Started libpod-conmon-75e991e2394b32169aef3e50ab7b794ad231af1822cb06495532b931f71c2b83.scope.
Oct 02 13:01:48 compute-0 podman[388028]: 2025-10-02 13:01:48.361229999 +0000 UTC m=+0.029835573 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:01:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3094: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:01:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8f7127d446acb70a15f7c8b83051b00487c0cc5dc15b22eff4ed5fb9be44f27/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:01:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8f7127d446acb70a15f7c8b83051b00487c0cc5dc15b22eff4ed5fb9be44f27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:01:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8f7127d446acb70a15f7c8b83051b00487c0cc5dc15b22eff4ed5fb9be44f27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:01:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8f7127d446acb70a15f7c8b83051b00487c0cc5dc15b22eff4ed5fb9be44f27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:01:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8f7127d446acb70a15f7c8b83051b00487c0cc5dc15b22eff4ed5fb9be44f27/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:01:48 compute-0 nova_compute[257802]: 2025-10-02 13:01:48.505 2 DEBUG oslo_concurrency.lockutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:48 compute-0 nova_compute[257802]: 2025-10-02 13:01:48.508 2 DEBUG oslo_concurrency.lockutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:48 compute-0 podman[388028]: 2025-10-02 13:01:48.51176043 +0000 UTC m=+0.180365944 container init 75e991e2394b32169aef3e50ab7b794ad231af1822cb06495532b931f71c2b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 13:01:48 compute-0 nova_compute[257802]: 2025-10-02 13:01:48.520 2 DEBUG nova.virt.hardware [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:01:48 compute-0 nova_compute[257802]: 2025-10-02 13:01:48.521 2 INFO nova.compute.claims [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Claim successful on node compute-0.ctlplane.example.com
Oct 02 13:01:48 compute-0 podman[388028]: 2025-10-02 13:01:48.524677023 +0000 UTC m=+0.193282497 container start 75e991e2394b32169aef3e50ab7b794ad231af1822cb06495532b931f71c2b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_elbakyan, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:01:48 compute-0 podman[388028]: 2025-10-02 13:01:48.553467709 +0000 UTC m=+0.222073203 container attach 75e991e2394b32169aef3e50ab7b794ad231af1822cb06495532b931f71c2b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_elbakyan, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 13:01:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:48.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:48 compute-0 nova_compute[257802]: 2025-10-02 13:01:48.649 2 DEBUG oslo_concurrency.processutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:48 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:01:48 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:01:48 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:01:48 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:01:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/710081758' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:01:49 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1926249132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:49 compute-0 nova_compute[257802]: 2025-10-02 13:01:49.122 2 DEBUG oslo_concurrency.processutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:49 compute-0 nova_compute[257802]: 2025-10-02 13:01:49.130 2 DEBUG nova.compute.provider_tree [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:01:49 compute-0 nova_compute[257802]: 2025-10-02 13:01:49.148 2 DEBUG nova.scheduler.client.report [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:01:49 compute-0 nova_compute[257802]: 2025-10-02 13:01:49.168 2 DEBUG oslo_concurrency.lockutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:49 compute-0 nova_compute[257802]: 2025-10-02 13:01:49.169 2 DEBUG nova.compute.manager [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:01:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:01:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:49.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:01:49 compute-0 nova_compute[257802]: 2025-10-02 13:01:49.215 2 DEBUG nova.compute.manager [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:01:49 compute-0 nova_compute[257802]: 2025-10-02 13:01:49.215 2 DEBUG nova.network.neutron [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:01:49 compute-0 nova_compute[257802]: 2025-10-02 13:01:49.236 2 INFO nova.virt.libvirt.driver [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:01:49 compute-0 nova_compute[257802]: 2025-10-02 13:01:49.259 2 DEBUG nova.compute.manager [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:01:49 compute-0 nova_compute[257802]: 2025-10-02 13:01:49.359 2 DEBUG nova.compute.manager [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:01:49 compute-0 nova_compute[257802]: 2025-10-02 13:01:49.361 2 DEBUG nova.virt.libvirt.driver [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:01:49 compute-0 nova_compute[257802]: 2025-10-02 13:01:49.361 2 INFO nova.virt.libvirt.driver [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Creating image(s)
Oct 02 13:01:49 compute-0 nova_compute[257802]: 2025-10-02 13:01:49.390 2 DEBUG nova.storage.rbd_utils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] rbd image 08b29362-e7c1-450f-bf22-95d23c21ff23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:01:49 compute-0 nova_compute[257802]: 2025-10-02 13:01:49.422 2 DEBUG nova.storage.rbd_utils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] rbd image 08b29362-e7c1-450f-bf22-95d23c21ff23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:01:49 compute-0 nova_compute[257802]: 2025-10-02 13:01:49.451 2 DEBUG nova.storage.rbd_utils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] rbd image 08b29362-e7c1-450f-bf22-95d23c21ff23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:01:49 compute-0 nova_compute[257802]: 2025-10-02 13:01:49.456 2 DEBUG oslo_concurrency.processutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:49 compute-0 reverent_elbakyan[388044]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:01:49 compute-0 reverent_elbakyan[388044]: --> relative data size: 1.0
Oct 02 13:01:49 compute-0 reverent_elbakyan[388044]: --> All data devices are unavailable
Oct 02 13:01:49 compute-0 systemd[1]: libpod-75e991e2394b32169aef3e50ab7b794ad231af1822cb06495532b931f71c2b83.scope: Deactivated successfully.
Oct 02 13:01:49 compute-0 podman[388028]: 2025-10-02 13:01:49.51533679 +0000 UTC m=+1.183942264 container died 75e991e2394b32169aef3e50ab7b794ad231af1822cb06495532b931f71c2b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:01:49 compute-0 nova_compute[257802]: 2025-10-02 13:01:49.519 2 DEBUG nova.policy [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '93facc00c95f4cbfa6cecaf3641182bc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5eceae619a6f4fdeaa8ba6fafda4912a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:01:49 compute-0 nova_compute[257802]: 2025-10-02 13:01:49.539 2 DEBUG oslo_concurrency.processutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:49 compute-0 nova_compute[257802]: 2025-10-02 13:01:49.540 2 DEBUG oslo_concurrency.lockutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:49 compute-0 nova_compute[257802]: 2025-10-02 13:01:49.541 2 DEBUG oslo_concurrency.lockutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:49 compute-0 nova_compute[257802]: 2025-10-02 13:01:49.541 2 DEBUG oslo_concurrency.lockutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8f7127d446acb70a15f7c8b83051b00487c0cc5dc15b22eff4ed5fb9be44f27-merged.mount: Deactivated successfully.
Oct 02 13:01:49 compute-0 nova_compute[257802]: 2025-10-02 13:01:49.582 2 DEBUG nova.storage.rbd_utils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] rbd image 08b29362-e7c1-450f-bf22-95d23c21ff23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:01:49 compute-0 nova_compute[257802]: 2025-10-02 13:01:49.586 2 DEBUG oslo_concurrency.processutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 08b29362-e7c1-450f-bf22-95d23c21ff23_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:49 compute-0 podman[388028]: 2025-10-02 13:01:49.589127065 +0000 UTC m=+1.257732539 container remove 75e991e2394b32169aef3e50ab7b794ad231af1822cb06495532b931f71c2b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 13:01:49 compute-0 systemd[1]: libpod-conmon-75e991e2394b32169aef3e50ab7b794ad231af1822cb06495532b931f71c2b83.scope: Deactivated successfully.
Oct 02 13:01:49 compute-0 sudo[387923]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:49 compute-0 sudo[388174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:49 compute-0 sudo[388174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:49 compute-0 sudo[388174]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:49 compute-0 sudo[388214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:01:49 compute-0 sudo[388214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:49 compute-0 sudo[388214]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:49 compute-0 sudo[388242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:49 compute-0 sudo[388242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:49 compute-0 sudo[388242]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:49 compute-0 ceph-mon[73607]: pgmap v3094: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:01:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1926249132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:49 compute-0 sudo[388267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 13:01:49 compute-0 sudo[388267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:50 compute-0 nova_compute[257802]: 2025-10-02 13:01:50.059 2 DEBUG oslo_concurrency.processutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 08b29362-e7c1-450f-bf22-95d23c21ff23_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:50 compute-0 nova_compute[257802]: 2025-10-02 13:01:50.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:50 compute-0 nova_compute[257802]: 2025-10-02 13:01:50.156 2 DEBUG nova.storage.rbd_utils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] resizing rbd image 08b29362-e7c1-450f-bf22-95d23c21ff23_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:01:50 compute-0 podman[388368]: 2025-10-02 13:01:50.176384982 +0000 UTC m=+0.021735706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:01:50 compute-0 podman[388368]: 2025-10-02 13:01:50.300458434 +0000 UTC m=+0.145809138 container create a18b9e9826f0f0ff14b81843902196d4b9c77d05defe3f896ae655ca008a58b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jemison, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:01:50 compute-0 systemd[1]: Started libpod-conmon-a18b9e9826f0f0ff14b81843902196d4b9c77d05defe3f896ae655ca008a58b5.scope.
Oct 02 13:01:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:01:50 compute-0 podman[388368]: 2025-10-02 13:01:50.4610668 +0000 UTC m=+0.306417544 container init a18b9e9826f0f0ff14b81843902196d4b9c77d05defe3f896ae655ca008a58b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jemison, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:01:50 compute-0 podman[388368]: 2025-10-02 13:01:50.467829983 +0000 UTC m=+0.313180687 container start a18b9e9826f0f0ff14b81843902196d4b9c77d05defe3f896ae655ca008a58b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jemison, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:01:50 compute-0 podman[388368]: 2025-10-02 13:01:50.471398049 +0000 UTC m=+0.316748753 container attach a18b9e9826f0f0ff14b81843902196d4b9c77d05defe3f896ae655ca008a58b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jemison, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 13:01:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3095: 305 pgs: 305 active+clean; 184 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 MiB/s wr, 30 op/s
Oct 02 13:01:50 compute-0 nostalgic_jemison[388402]: 167 167
Oct 02 13:01:50 compute-0 systemd[1]: libpod-a18b9e9826f0f0ff14b81843902196d4b9c77d05defe3f896ae655ca008a58b5.scope: Deactivated successfully.
Oct 02 13:01:50 compute-0 podman[388368]: 2025-10-02 13:01:50.47433843 +0000 UTC m=+0.319689134 container died a18b9e9826f0f0ff14b81843902196d4b9c77d05defe3f896ae655ca008a58b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jemison, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Oct 02 13:01:50 compute-0 nova_compute[257802]: 2025-10-02 13:01:50.570 2 DEBUG nova.objects.instance [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lazy-loading 'migration_context' on Instance uuid 08b29362-e7c1-450f-bf22-95d23c21ff23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:01:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:50.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-64455e32eae87128f24e7683c1585f25a512081769d858d4daf478aa98eb89fb-merged.mount: Deactivated successfully.
Oct 02 13:01:50 compute-0 nova_compute[257802]: 2025-10-02 13:01:50.586 2 DEBUG nova.virt.libvirt.driver [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:01:50 compute-0 nova_compute[257802]: 2025-10-02 13:01:50.586 2 DEBUG nova.virt.libvirt.driver [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Ensure instance console log exists: /var/lib/nova/instances/08b29362-e7c1-450f-bf22-95d23c21ff23/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:01:50 compute-0 nova_compute[257802]: 2025-10-02 13:01:50.587 2 DEBUG oslo_concurrency.lockutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:50 compute-0 nova_compute[257802]: 2025-10-02 13:01:50.587 2 DEBUG oslo_concurrency.lockutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:50 compute-0 nova_compute[257802]: 2025-10-02 13:01:50.587 2 DEBUG oslo_concurrency.lockutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:50 compute-0 podman[388368]: 2025-10-02 13:01:50.590017069 +0000 UTC m=+0.435367773 container remove a18b9e9826f0f0ff14b81843902196d4b9c77d05defe3f896ae655ca008a58b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 13:01:50 compute-0 systemd[1]: libpod-conmon-a18b9e9826f0f0ff14b81843902196d4b9c77d05defe3f896ae655ca008a58b5.scope: Deactivated successfully.
Oct 02 13:01:50 compute-0 podman[388447]: 2025-10-02 13:01:50.754808965 +0000 UTC m=+0.050705307 container create 12637b7e0d47b0d5d556c7ef98705c12c3635fb5cf5cf835ba0f2c49cd83433f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_murdock, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 13:01:50 compute-0 systemd[1]: Started libpod-conmon-12637b7e0d47b0d5d556c7ef98705c12c3635fb5cf5cf835ba0f2c49cd83433f.scope.
Oct 02 13:01:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:01:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1da1d131e6bd8acbfebbea2a1bbce87c46d3f75952be9182e7331f3d2044503d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:01:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1da1d131e6bd8acbfebbea2a1bbce87c46d3f75952be9182e7331f3d2044503d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:01:50 compute-0 podman[388447]: 2025-10-02 13:01:50.725254621 +0000 UTC m=+0.021150983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:01:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1da1d131e6bd8acbfebbea2a1bbce87c46d3f75952be9182e7331f3d2044503d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:01:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1da1d131e6bd8acbfebbea2a1bbce87c46d3f75952be9182e7331f3d2044503d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:01:50 compute-0 podman[388447]: 2025-10-02 13:01:50.836608635 +0000 UTC m=+0.132504997 container init 12637b7e0d47b0d5d556c7ef98705c12c3635fb5cf5cf835ba0f2c49cd83433f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_murdock, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 13:01:50 compute-0 podman[388447]: 2025-10-02 13:01:50.842914637 +0000 UTC m=+0.138810979 container start 12637b7e0d47b0d5d556c7ef98705c12c3635fb5cf5cf835ba0f2c49cd83433f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:01:50 compute-0 podman[388447]: 2025-10-02 13:01:50.846521675 +0000 UTC m=+0.142418017 container attach 12637b7e0d47b0d5d556c7ef98705c12c3635fb5cf5cf835ba0f2c49cd83433f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_murdock, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 13:01:50 compute-0 ceph-mon[73607]: pgmap v3095: 305 pgs: 305 active+clean; 184 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 MiB/s wr, 30 op/s
Oct 02 13:01:51 compute-0 nova_compute[257802]: 2025-10-02 13:01:51.019 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:51.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:51 compute-0 kind_murdock[388464]: {
Oct 02 13:01:51 compute-0 kind_murdock[388464]:     "1": [
Oct 02 13:01:51 compute-0 kind_murdock[388464]:         {
Oct 02 13:01:51 compute-0 kind_murdock[388464]:             "devices": [
Oct 02 13:01:51 compute-0 kind_murdock[388464]:                 "/dev/loop3"
Oct 02 13:01:51 compute-0 kind_murdock[388464]:             ],
Oct 02 13:01:51 compute-0 kind_murdock[388464]:             "lv_name": "ceph_lv0",
Oct 02 13:01:51 compute-0 kind_murdock[388464]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:01:51 compute-0 kind_murdock[388464]:             "lv_size": "7511998464",
Oct 02 13:01:51 compute-0 kind_murdock[388464]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:01:51 compute-0 kind_murdock[388464]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:01:51 compute-0 kind_murdock[388464]:             "name": "ceph_lv0",
Oct 02 13:01:51 compute-0 kind_murdock[388464]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:01:51 compute-0 kind_murdock[388464]:             "tags": {
Oct 02 13:01:51 compute-0 kind_murdock[388464]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:01:51 compute-0 kind_murdock[388464]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:01:51 compute-0 kind_murdock[388464]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:01:51 compute-0 kind_murdock[388464]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:01:51 compute-0 kind_murdock[388464]:                 "ceph.cluster_name": "ceph",
Oct 02 13:01:51 compute-0 kind_murdock[388464]:                 "ceph.crush_device_class": "",
Oct 02 13:01:51 compute-0 kind_murdock[388464]:                 "ceph.encrypted": "0",
Oct 02 13:01:51 compute-0 kind_murdock[388464]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:01:51 compute-0 kind_murdock[388464]:                 "ceph.osd_id": "1",
Oct 02 13:01:51 compute-0 kind_murdock[388464]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:01:51 compute-0 kind_murdock[388464]:                 "ceph.type": "block",
Oct 02 13:01:51 compute-0 kind_murdock[388464]:                 "ceph.vdo": "0"
Oct 02 13:01:51 compute-0 kind_murdock[388464]:             },
Oct 02 13:01:51 compute-0 kind_murdock[388464]:             "type": "block",
Oct 02 13:01:51 compute-0 kind_murdock[388464]:             "vg_name": "ceph_vg0"
Oct 02 13:01:51 compute-0 kind_murdock[388464]:         }
Oct 02 13:01:51 compute-0 kind_murdock[388464]:     ]
Oct 02 13:01:51 compute-0 kind_murdock[388464]: }
Oct 02 13:01:51 compute-0 systemd[1]: libpod-12637b7e0d47b0d5d556c7ef98705c12c3635fb5cf5cf835ba0f2c49cd83433f.scope: Deactivated successfully.
Oct 02 13:01:51 compute-0 podman[388447]: 2025-10-02 13:01:51.616097323 +0000 UTC m=+0.911993665 container died 12637b7e0d47b0d5d556c7ef98705c12c3635fb5cf5cf835ba0f2c49cd83433f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_murdock, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:01:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-1da1d131e6bd8acbfebbea2a1bbce87c46d3f75952be9182e7331f3d2044503d-merged.mount: Deactivated successfully.
Oct 02 13:01:51 compute-0 podman[388447]: 2025-10-02 13:01:51.667930737 +0000 UTC m=+0.963827079 container remove 12637b7e0d47b0d5d556c7ef98705c12c3635fb5cf5cf835ba0f2c49cd83433f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 13:01:51 compute-0 systemd[1]: libpod-conmon-12637b7e0d47b0d5d556c7ef98705c12c3635fb5cf5cf835ba0f2c49cd83433f.scope: Deactivated successfully.
Oct 02 13:01:51 compute-0 sudo[388267]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:51 compute-0 sudo[388487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:51 compute-0 sudo[388487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:51 compute-0 sudo[388487]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:51 compute-0 sudo[388512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:01:51 compute-0 sudo[388512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:51 compute-0 sudo[388512]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:51 compute-0 sudo[388537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:51 compute-0 sudo[388537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:51 compute-0 sudo[388537]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:51 compute-0 sudo[388562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 13:01:51 compute-0 sudo[388562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:52 compute-0 podman[388628]: 2025-10-02 13:01:52.235910567 +0000 UTC m=+0.038921482 container create 4a43d65ad039e751f4e6863fb64ecddc623fdcda1d0feddfe641defd23cc9bd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_galileo, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:01:52 compute-0 systemd[1]: Started libpod-conmon-4a43d65ad039e751f4e6863fb64ecddc623fdcda1d0feddfe641defd23cc9bd3.scope.
Oct 02 13:01:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:01:52 compute-0 podman[388628]: 2025-10-02 13:01:52.293083631 +0000 UTC m=+0.096094546 container init 4a43d65ad039e751f4e6863fb64ecddc623fdcda1d0feddfe641defd23cc9bd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:01:52 compute-0 podman[388628]: 2025-10-02 13:01:52.299877555 +0000 UTC m=+0.102888470 container start 4a43d65ad039e751f4e6863fb64ecddc623fdcda1d0feddfe641defd23cc9bd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:01:52 compute-0 podman[388628]: 2025-10-02 13:01:52.303741779 +0000 UTC m=+0.106752714 container attach 4a43d65ad039e751f4e6863fb64ecddc623fdcda1d0feddfe641defd23cc9bd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 13:01:52 compute-0 mystifying_galileo[388645]: 167 167
Oct 02 13:01:52 compute-0 systemd[1]: libpod-4a43d65ad039e751f4e6863fb64ecddc623fdcda1d0feddfe641defd23cc9bd3.scope: Deactivated successfully.
Oct 02 13:01:52 compute-0 podman[388628]: 2025-10-02 13:01:52.306743221 +0000 UTC m=+0.109754136 container died 4a43d65ad039e751f4e6863fb64ecddc623fdcda1d0feddfe641defd23cc9bd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_galileo, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:01:52 compute-0 podman[388628]: 2025-10-02 13:01:52.21781801 +0000 UTC m=+0.020828945 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:01:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-49468a163a064d82acf52d480344a10f9329acf9d233a592ff9ceac78b18b2dd-merged.mount: Deactivated successfully.
Oct 02 13:01:52 compute-0 podman[388628]: 2025-10-02 13:01:52.342039845 +0000 UTC m=+0.145050760 container remove 4a43d65ad039e751f4e6863fb64ecddc623fdcda1d0feddfe641defd23cc9bd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:01:52 compute-0 systemd[1]: libpod-conmon-4a43d65ad039e751f4e6863fb64ecddc623fdcda1d0feddfe641defd23cc9bd3.scope: Deactivated successfully.
Oct 02 13:01:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3096: 305 pgs: 305 active+clean; 184 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 MiB/s wr, 30 op/s
Oct 02 13:01:52 compute-0 podman[388667]: 2025-10-02 13:01:52.494971915 +0000 UTC m=+0.040264176 container create 1de6bb1688d63ba37bf2d530996682eefe1352e5ce986786baece6a0e4c5aa9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_elion, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 13:01:52 compute-0 systemd[1]: Started libpod-conmon-1de6bb1688d63ba37bf2d530996682eefe1352e5ce986786baece6a0e4c5aa9f.scope.
Oct 02 13:01:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:01:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc6870119f7f3f15b0c3209f8a4a4879abc15cb132c3f51f2395681201201ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:01:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc6870119f7f3f15b0c3209f8a4a4879abc15cb132c3f51f2395681201201ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:01:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc6870119f7f3f15b0c3209f8a4a4879abc15cb132c3f51f2395681201201ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:01:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc6870119f7f3f15b0c3209f8a4a4879abc15cb132c3f51f2395681201201ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:01:52 compute-0 podman[388667]: 2025-10-02 13:01:52.476605051 +0000 UTC m=+0.021897342 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:01:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:52.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:52 compute-0 podman[388667]: 2025-10-02 13:01:52.580397911 +0000 UTC m=+0.125690162 container init 1de6bb1688d63ba37bf2d530996682eefe1352e5ce986786baece6a0e4c5aa9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 13:01:52 compute-0 podman[388667]: 2025-10-02 13:01:52.588211401 +0000 UTC m=+0.133503672 container start 1de6bb1688d63ba37bf2d530996682eefe1352e5ce986786baece6a0e4c5aa9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 13:01:52 compute-0 podman[388667]: 2025-10-02 13:01:52.59230738 +0000 UTC m=+0.137599661 container attach 1de6bb1688d63ba37bf2d530996682eefe1352e5ce986786baece6a0e4c5aa9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_elion, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:01:52 compute-0 nova_compute[257802]: 2025-10-02 13:01:52.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:01:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:53.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:01:53 compute-0 sad_elion[388683]: {
Oct 02 13:01:53 compute-0 sad_elion[388683]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 13:01:53 compute-0 sad_elion[388683]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:01:53 compute-0 sad_elion[388683]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:01:53 compute-0 sad_elion[388683]:         "osd_id": 1,
Oct 02 13:01:53 compute-0 sad_elion[388683]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:01:53 compute-0 sad_elion[388683]:         "type": "bluestore"
Oct 02 13:01:53 compute-0 sad_elion[388683]:     }
Oct 02 13:01:53 compute-0 sad_elion[388683]: }
Oct 02 13:01:53 compute-0 systemd[1]: libpod-1de6bb1688d63ba37bf2d530996682eefe1352e5ce986786baece6a0e4c5aa9f.scope: Deactivated successfully.
Oct 02 13:01:53 compute-0 podman[388667]: 2025-10-02 13:01:53.426502132 +0000 UTC m=+0.971794393 container died 1de6bb1688d63ba37bf2d530996682eefe1352e5ce986786baece6a0e4c5aa9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:01:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bc6870119f7f3f15b0c3209f8a4a4879abc15cb132c3f51f2395681201201ab-merged.mount: Deactivated successfully.
Oct 02 13:01:53 compute-0 podman[388667]: 2025-10-02 13:01:53.490823247 +0000 UTC m=+1.036115508 container remove 1de6bb1688d63ba37bf2d530996682eefe1352e5ce986786baece6a0e4c5aa9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:01:53 compute-0 systemd[1]: libpod-conmon-1de6bb1688d63ba37bf2d530996682eefe1352e5ce986786baece6a0e4c5aa9f.scope: Deactivated successfully.
Oct 02 13:01:53 compute-0 sudo[388562]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:01:53 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:01:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:01:53 compute-0 ceph-mon[73607]: pgmap v3096: 305 pgs: 305 active+clean; 184 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 MiB/s wr, 30 op/s
Oct 02 13:01:53 compute-0 nova_compute[257802]: 2025-10-02 13:01:53.713 2 DEBUG nova.network.neutron [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Successfully created port: fa0139f5-ff5d-45c1-9873-48b0a33759d5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:01:53 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:01:53 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev c0e40a65-ea78-407d-881c-f425ae209e23 does not exist
Oct 02 13:01:53 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 6b4ce48c-412a-4a84-b15a-49f5eb582bfc does not exist
Oct 02 13:01:53 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 053d524f-0285-4773-a51e-351502659a22 does not exist
Oct 02 13:01:53 compute-0 sudo[388717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:53 compute-0 sudo[388717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:53 compute-0 sudo[388717]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:53 compute-0 sudo[388742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:01:53 compute-0 sudo[388742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:53 compute-0 sudo[388742]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3097: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 3.5 MiB/s wr, 44 op/s
Oct 02 13:01:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:54.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:54 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:01:54 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0019776617462311558 of space, bias 1.0, pg target 0.5932985238693468 quantized to 32 (current 32)
Oct 02 13:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:01:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:01:55 compute-0 nova_compute[257802]: 2025-10-02 13:01:55.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:01:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/902022552' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:01:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:01:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/902022552' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:01:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:55.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:55 compute-0 ceph-mon[73607]: pgmap v3097: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 3.5 MiB/s wr, 44 op/s
Oct 02 13:01:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/902022552' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:01:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/902022552' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:01:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3098: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 02 13:01:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:01:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:56.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:01:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:57.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:57 compute-0 nova_compute[257802]: 2025-10-02 13:01:57.584 2 DEBUG nova.network.neutron [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Successfully updated port: fa0139f5-ff5d-45c1-9873-48b0a33759d5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:01:57 compute-0 nova_compute[257802]: 2025-10-02 13:01:57.625 2 DEBUG oslo_concurrency.lockutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "refresh_cache-08b29362-e7c1-450f-bf22-95d23c21ff23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:01:57 compute-0 nova_compute[257802]: 2025-10-02 13:01:57.625 2 DEBUG oslo_concurrency.lockutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquired lock "refresh_cache-08b29362-e7c1-450f-bf22-95d23c21ff23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:01:57 compute-0 nova_compute[257802]: 2025-10-02 13:01:57.625 2 DEBUG nova.network.neutron [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:01:57 compute-0 nova_compute[257802]: 2025-10-02 13:01:57.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:57 compute-0 nova_compute[257802]: 2025-10-02 13:01:57.730 2 DEBUG nova.compute.manager [req-a0f143be-a747-4ded-93b8-b3ee6e894b25 req-666d025e-b82e-481d-b153-a020d551ec04 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Received event network-changed-fa0139f5-ff5d-45c1-9873-48b0a33759d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:01:57 compute-0 nova_compute[257802]: 2025-10-02 13:01:57.730 2 DEBUG nova.compute.manager [req-a0f143be-a747-4ded-93b8-b3ee6e894b25 req-666d025e-b82e-481d-b153-a020d551ec04 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Refreshing instance network info cache due to event network-changed-fa0139f5-ff5d-45c1-9873-48b0a33759d5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:01:57 compute-0 nova_compute[257802]: 2025-10-02 13:01:57.730 2 DEBUG oslo_concurrency.lockutils [req-a0f143be-a747-4ded-93b8-b3ee6e894b25 req-666d025e-b82e-481d-b153-a020d551ec04 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-08b29362-e7c1-450f-bf22-95d23c21ff23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:01:57 compute-0 ceph-mon[73607]: pgmap v3098: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 02 13:01:57 compute-0 nova_compute[257802]: 2025-10-02 13:01:57.868 2 DEBUG nova.network.neutron [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:01:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3099: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 02 13:01:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:01:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:58.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:01:58 compute-0 nova_compute[257802]: 2025-10-02 13:01:58.678 2 DEBUG nova.network.neutron [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Updating instance_info_cache with network_info: [{"id": "fa0139f5-ff5d-45c1-9873-48b0a33759d5", "address": "fa:16:3e:07:91:1b", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa0139f5-ff", "ovs_interfaceid": "fa0139f5-ff5d-45c1-9873-48b0a33759d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:01:58 compute-0 nova_compute[257802]: 2025-10-02 13:01:58.898 2 DEBUG oslo_concurrency.lockutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Releasing lock "refresh_cache-08b29362-e7c1-450f-bf22-95d23c21ff23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:01:58 compute-0 nova_compute[257802]: 2025-10-02 13:01:58.899 2 DEBUG nova.compute.manager [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Instance network_info: |[{"id": "fa0139f5-ff5d-45c1-9873-48b0a33759d5", "address": "fa:16:3e:07:91:1b", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa0139f5-ff", "ovs_interfaceid": "fa0139f5-ff5d-45c1-9873-48b0a33759d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:01:58 compute-0 nova_compute[257802]: 2025-10-02 13:01:58.899 2 DEBUG oslo_concurrency.lockutils [req-a0f143be-a747-4ded-93b8-b3ee6e894b25 req-666d025e-b82e-481d-b153-a020d551ec04 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-08b29362-e7c1-450f-bf22-95d23c21ff23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:01:58 compute-0 nova_compute[257802]: 2025-10-02 13:01:58.899 2 DEBUG nova.network.neutron [req-a0f143be-a747-4ded-93b8-b3ee6e894b25 req-666d025e-b82e-481d-b153-a020d551ec04 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Refreshing network info cache for port fa0139f5-ff5d-45c1-9873-48b0a33759d5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:01:58 compute-0 nova_compute[257802]: 2025-10-02 13:01:58.902 2 DEBUG nova.virt.libvirt.driver [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Start _get_guest_xml network_info=[{"id": "fa0139f5-ff5d-45c1-9873-48b0a33759d5", "address": "fa:16:3e:07:91:1b", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa0139f5-ff", "ovs_interfaceid": "fa0139f5-ff5d-45c1-9873-48b0a33759d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:01:58 compute-0 nova_compute[257802]: 2025-10-02 13:01:58.907 2 WARNING nova.virt.libvirt.driver [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:01:58 compute-0 nova_compute[257802]: 2025-10-02 13:01:58.912 2 DEBUG nova.virt.libvirt.host [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:01:58 compute-0 nova_compute[257802]: 2025-10-02 13:01:58.913 2 DEBUG nova.virt.libvirt.host [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:01:58 compute-0 nova_compute[257802]: 2025-10-02 13:01:58.917 2 DEBUG nova.virt.libvirt.host [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:01:58 compute-0 nova_compute[257802]: 2025-10-02 13:01:58.917 2 DEBUG nova.virt.libvirt.host [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:01:58 compute-0 nova_compute[257802]: 2025-10-02 13:01:58.919 2 DEBUG nova.virt.libvirt.driver [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:01:58 compute-0 nova_compute[257802]: 2025-10-02 13:01:58.919 2 DEBUG nova.virt.hardware [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:01:58 compute-0 nova_compute[257802]: 2025-10-02 13:01:58.919 2 DEBUG nova.virt.hardware [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:01:58 compute-0 nova_compute[257802]: 2025-10-02 13:01:58.920 2 DEBUG nova.virt.hardware [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:01:58 compute-0 nova_compute[257802]: 2025-10-02 13:01:58.920 2 DEBUG nova.virt.hardware [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:01:58 compute-0 nova_compute[257802]: 2025-10-02 13:01:58.920 2 DEBUG nova.virt.hardware [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:01:58 compute-0 nova_compute[257802]: 2025-10-02 13:01:58.920 2 DEBUG nova.virt.hardware [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:01:58 compute-0 nova_compute[257802]: 2025-10-02 13:01:58.920 2 DEBUG nova.virt.hardware [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:01:58 compute-0 nova_compute[257802]: 2025-10-02 13:01:58.921 2 DEBUG nova.virt.hardware [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:01:58 compute-0 nova_compute[257802]: 2025-10-02 13:01:58.921 2 DEBUG nova.virt.hardware [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:01:58 compute-0 nova_compute[257802]: 2025-10-02 13:01:58.921 2 DEBUG nova.virt.hardware [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:01:58 compute-0 nova_compute[257802]: 2025-10-02 13:01:58.921 2 DEBUG nova.virt.hardware [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:01:58 compute-0 nova_compute[257802]: 2025-10-02 13:01:58.923 2 DEBUG oslo_concurrency.processutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:01:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:01:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:59.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:01:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:01:59 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4282137247' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:01:59 compute-0 nova_compute[257802]: 2025-10-02 13:01:59.353 2 DEBUG oslo_concurrency.processutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:59 compute-0 nova_compute[257802]: 2025-10-02 13:01:59.378 2 DEBUG nova.storage.rbd_utils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] rbd image 08b29362-e7c1-450f-bf22-95d23c21ff23_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:01:59 compute-0 nova_compute[257802]: 2025-10-02 13:01:59.382 2 DEBUG oslo_concurrency.processutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:01:59 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1273937471' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:01:59 compute-0 nova_compute[257802]: 2025-10-02 13:01:59.813 2 DEBUG oslo_concurrency.processutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:59 compute-0 nova_compute[257802]: 2025-10-02 13:01:59.816 2 DEBUG nova.virt.libvirt.vif [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:01:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-247817219',display_name='tempest-AttachVolumeNegativeTest-server-247817219',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-247817219',id=199,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBj6tNsdWzptM1YL5GSEH1m7nxdiRaIlwCB2W6y7LUFKIu26VXI47mGh3X2ihi0CDsGqTRgVbGT/FY7e/MdF8Lmm+0sICub5iqjLIVf4S4ob9DXCs+NW7Dr/Dq12CLXmgQ==',key_name='tempest-keypair-1672670520',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5eceae619a6f4fdeaa8ba6fafda4912a',ramdisk_id='',reservation_id='r-uf4kq0sh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1407980822',owner_user_name='tempest-AttachVolumeNegativeTest-1407980822-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:01:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='93facc00c95f4cbfa6cecaf3641182bc',uuid=08b29362-e7c1-450f-bf22-95d23c21ff23,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fa0139f5-ff5d-45c1-9873-48b0a33759d5", "address": "fa:16:3e:07:91:1b", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa0139f5-ff", "ovs_interfaceid": "fa0139f5-ff5d-45c1-9873-48b0a33759d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:01:59 compute-0 nova_compute[257802]: 2025-10-02 13:01:59.816 2 DEBUG nova.network.os_vif_util [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Converting VIF {"id": "fa0139f5-ff5d-45c1-9873-48b0a33759d5", "address": "fa:16:3e:07:91:1b", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa0139f5-ff", "ovs_interfaceid": "fa0139f5-ff5d-45c1-9873-48b0a33759d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:01:59 compute-0 nova_compute[257802]: 2025-10-02 13:01:59.817 2 DEBUG nova.network.os_vif_util [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:91:1b,bridge_name='br-int',has_traffic_filtering=True,id=fa0139f5-ff5d-45c1-9873-48b0a33759d5,network=Network(2471b6f7-ee51-4239-8b52-7016ab4d9fd1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa0139f5-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:01:59 compute-0 nova_compute[257802]: 2025-10-02 13:01:59.818 2 DEBUG nova.objects.instance [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lazy-loading 'pci_devices' on Instance uuid 08b29362-e7c1-450f-bf22-95d23c21ff23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:01:59 compute-0 ceph-mon[73607]: pgmap v3099: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 02 13:01:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3460342541' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:01:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4282137247' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:01:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1874987275' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:01:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1273937471' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:01:59 compute-0 nova_compute[257802]: 2025-10-02 13:01:59.961 2 DEBUG nova.virt.libvirt.driver [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:01:59 compute-0 nova_compute[257802]:   <uuid>08b29362-e7c1-450f-bf22-95d23c21ff23</uuid>
Oct 02 13:01:59 compute-0 nova_compute[257802]:   <name>instance-000000c7</name>
Oct 02 13:01:59 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 13:01:59 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 13:01:59 compute-0 nova_compute[257802]:   <metadata>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:01:59 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:       <nova:name>tempest-AttachVolumeNegativeTest-server-247817219</nova:name>
Oct 02 13:01:59 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 13:01:58</nova:creationTime>
Oct 02 13:01:59 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 13:01:59 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 13:01:59 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 13:01:59 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 13:01:59 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:01:59 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:01:59 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 13:01:59 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 13:01:59 compute-0 nova_compute[257802]:         <nova:user uuid="93facc00c95f4cbfa6cecaf3641182bc">tempest-AttachVolumeNegativeTest-1407980822-project-member</nova:user>
Oct 02 13:01:59 compute-0 nova_compute[257802]:         <nova:project uuid="5eceae619a6f4fdeaa8ba6fafda4912a">tempest-AttachVolumeNegativeTest-1407980822</nova:project>
Oct 02 13:01:59 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 13:01:59 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 13:01:59 compute-0 nova_compute[257802]:         <nova:port uuid="fa0139f5-ff5d-45c1-9873-48b0a33759d5">
Oct 02 13:01:59 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 13:01:59 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 13:01:59 compute-0 nova_compute[257802]:   </metadata>
Oct 02 13:01:59 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <system>
Oct 02 13:01:59 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:01:59 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:01:59 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:01:59 compute-0 nova_compute[257802]:       <entry name="serial">08b29362-e7c1-450f-bf22-95d23c21ff23</entry>
Oct 02 13:01:59 compute-0 nova_compute[257802]:       <entry name="uuid">08b29362-e7c1-450f-bf22-95d23c21ff23</entry>
Oct 02 13:01:59 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     </system>
Oct 02 13:01:59 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 13:01:59 compute-0 nova_compute[257802]:   <os>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:   </os>
Oct 02 13:01:59 compute-0 nova_compute[257802]:   <features>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <apic/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:   </features>
Oct 02 13:01:59 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:   </clock>
Oct 02 13:01:59 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:   </cpu>
Oct 02 13:01:59 compute-0 nova_compute[257802]:   <devices>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 13:01:59 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/08b29362-e7c1-450f-bf22-95d23c21ff23_disk">
Oct 02 13:01:59 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:       </source>
Oct 02 13:01:59 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 13:01:59 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:       </auth>
Oct 02 13:01:59 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     </disk>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 13:01:59 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/08b29362-e7c1-450f-bf22-95d23c21ff23_disk.config">
Oct 02 13:01:59 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:       </source>
Oct 02 13:01:59 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 13:01:59 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:       </auth>
Oct 02 13:01:59 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     </disk>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 13:01:59 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:07:91:1b"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:       <target dev="tapfa0139f5-ff"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     </interface>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 13:01:59 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/08b29362-e7c1-450f-bf22-95d23c21ff23/console.log" append="off"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     </serial>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <video>
Oct 02 13:01:59 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     </video>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 13:01:59 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     </rng>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 13:01:59 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 13:01:59 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 13:01:59 compute-0 nova_compute[257802]:   </devices>
Oct 02 13:01:59 compute-0 nova_compute[257802]: </domain>
Oct 02 13:01:59 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:01:59 compute-0 nova_compute[257802]: 2025-10-02 13:01:59.963 2 DEBUG nova.compute.manager [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Preparing to wait for external event network-vif-plugged-fa0139f5-ff5d-45c1-9873-48b0a33759d5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:01:59 compute-0 nova_compute[257802]: 2025-10-02 13:01:59.963 2 DEBUG oslo_concurrency.lockutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "08b29362-e7c1-450f-bf22-95d23c21ff23-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:59 compute-0 nova_compute[257802]: 2025-10-02 13:01:59.963 2 DEBUG oslo_concurrency.lockutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "08b29362-e7c1-450f-bf22-95d23c21ff23-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:59 compute-0 nova_compute[257802]: 2025-10-02 13:01:59.964 2 DEBUG oslo_concurrency.lockutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "08b29362-e7c1-450f-bf22-95d23c21ff23-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:59 compute-0 nova_compute[257802]: 2025-10-02 13:01:59.964 2 DEBUG nova.virt.libvirt.vif [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:01:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-247817219',display_name='tempest-AttachVolumeNegativeTest-server-247817219',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-247817219',id=199,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBj6tNsdWzptM1YL5GSEH1m7nxdiRaIlwCB2W6y7LUFKIu26VXI47mGh3X2ihi0CDsGqTRgVbGT/FY7e/MdF8Lmm+0sICub5iqjLIVf4S4ob9DXCs+NW7Dr/Dq12CLXmgQ==',key_name='tempest-keypair-1672670520',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5eceae619a6f4fdeaa8ba6fafda4912a',ramdisk_id='',reservation_id='r-uf4kq0sh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1407980822',owner_user_name='tempest-AttachVolumeNegativeTest-1407980822-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:01:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='93facc00c95f4cbfa6cecaf3641182bc',uuid=08b29362-e7c1-450f-bf22-95d23c21ff23,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fa0139f5-ff5d-45c1-9873-48b0a33759d5", "address": "fa:16:3e:07:91:1b", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa0139f5-ff", "ovs_interfaceid": "fa0139f5-ff5d-45c1-9873-48b0a33759d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:01:59 compute-0 nova_compute[257802]: 2025-10-02 13:01:59.965 2 DEBUG nova.network.os_vif_util [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Converting VIF {"id": "fa0139f5-ff5d-45c1-9873-48b0a33759d5", "address": "fa:16:3e:07:91:1b", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa0139f5-ff", "ovs_interfaceid": "fa0139f5-ff5d-45c1-9873-48b0a33759d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:01:59 compute-0 nova_compute[257802]: 2025-10-02 13:01:59.966 2 DEBUG nova.network.os_vif_util [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:91:1b,bridge_name='br-int',has_traffic_filtering=True,id=fa0139f5-ff5d-45c1-9873-48b0a33759d5,network=Network(2471b6f7-ee51-4239-8b52-7016ab4d9fd1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa0139f5-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:01:59 compute-0 nova_compute[257802]: 2025-10-02 13:01:59.966 2 DEBUG os_vif [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:91:1b,bridge_name='br-int',has_traffic_filtering=True,id=fa0139f5-ff5d-45c1-9873-48b0a33759d5,network=Network(2471b6f7-ee51-4239-8b52-7016ab4d9fd1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa0139f5-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:01:59 compute-0 nova_compute[257802]: 2025-10-02 13:01:59.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:59 compute-0 nova_compute[257802]: 2025-10-02 13:01:59.967 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:01:59 compute-0 nova_compute[257802]: 2025-10-02 13:01:59.968 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:01:59 compute-0 nova_compute[257802]: 2025-10-02 13:01:59.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:59 compute-0 nova_compute[257802]: 2025-10-02 13:01:59.971 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa0139f5-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:01:59 compute-0 nova_compute[257802]: 2025-10-02 13:01:59.972 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfa0139f5-ff, col_values=(('external_ids', {'iface-id': 'fa0139f5-ff5d-45c1-9873-48b0a33759d5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:07:91:1b', 'vm-uuid': '08b29362-e7c1-450f-bf22-95d23c21ff23'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:01:59 compute-0 NetworkManager[44987]: <info>  [1759410119.9757] manager: (tapfa0139f5-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/406)
Oct 02 13:01:59 compute-0 nova_compute[257802]: 2025-10-02 13:01:59.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:59 compute-0 nova_compute[257802]: 2025-10-02 13:01:59.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:01:59 compute-0 nova_compute[257802]: 2025-10-02 13:01:59.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:59 compute-0 nova_compute[257802]: 2025-10-02 13:01:59.982 2 INFO os_vif [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:91:1b,bridge_name='br-int',has_traffic_filtering=True,id=fa0139f5-ff5d-45c1-9873-48b0a33759d5,network=Network(2471b6f7-ee51-4239-8b52-7016ab4d9fd1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa0139f5-ff')
Oct 02 13:02:00 compute-0 nova_compute[257802]: 2025-10-02 13:02:00.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:00 compute-0 nova_compute[257802]: 2025-10-02 13:02:00.196 2 DEBUG nova.virt.libvirt.driver [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:02:00 compute-0 nova_compute[257802]: 2025-10-02 13:02:00.197 2 DEBUG nova.virt.libvirt.driver [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:02:00 compute-0 nova_compute[257802]: 2025-10-02 13:02:00.197 2 DEBUG nova.virt.libvirt.driver [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] No VIF found with MAC fa:16:3e:07:91:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:02:00 compute-0 nova_compute[257802]: 2025-10-02 13:02:00.197 2 INFO nova.virt.libvirt.driver [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Using config drive
Oct 02 13:02:00 compute-0 nova_compute[257802]: 2025-10-02 13:02:00.223 2 DEBUG nova.storage.rbd_utils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] rbd image 08b29362-e7c1-450f-bf22-95d23c21ff23_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:02:00 compute-0 ovn_controller[148183]: 2025-10-02T13:02:00Z|00913|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 02 13:02:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3100: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 02 13:02:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:00.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:00 compute-0 nova_compute[257802]: 2025-10-02 13:02:00.676 2 INFO nova.virt.libvirt.driver [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Creating config drive at /var/lib/nova/instances/08b29362-e7c1-450f-bf22-95d23c21ff23/disk.config
Oct 02 13:02:00 compute-0 nova_compute[257802]: 2025-10-02 13:02:00.682 2 DEBUG oslo_concurrency.processutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/08b29362-e7c1-450f-bf22-95d23c21ff23/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp7jwywsr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:00 compute-0 nova_compute[257802]: 2025-10-02 13:02:00.817 2 DEBUG oslo_concurrency.processutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/08b29362-e7c1-450f-bf22-95d23c21ff23/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp7jwywsr" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:00 compute-0 nova_compute[257802]: 2025-10-02 13:02:00.854 2 DEBUG nova.storage.rbd_utils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] rbd image 08b29362-e7c1-450f-bf22-95d23c21ff23_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:02:00 compute-0 nova_compute[257802]: 2025-10-02 13:02:00.858 2 DEBUG oslo_concurrency.processutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/08b29362-e7c1-450f-bf22-95d23c21ff23/disk.config 08b29362-e7c1-450f-bf22-95d23c21ff23_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:00 compute-0 ceph-mon[73607]: pgmap v3100: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 02 13:02:01 compute-0 sudo[388890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:01 compute-0 sudo[388890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:01 compute-0 sudo[388890]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:01 compute-0 sudo[388933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:01 compute-0 sudo[388933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:01 compute-0 sudo[388933]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:01 compute-0 podman[388914]: 2025-10-02 13:02:01.136583398 +0000 UTC m=+0.065575478 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 13:02:01 compute-0 podman[388915]: 2025-10-02 13:02:01.136793293 +0000 UTC m=+0.066215013 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 13:02:01 compute-0 podman[388916]: 2025-10-02 13:02:01.137600553 +0000 UTC m=+0.062557015 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:02:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:01.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:01 compute-0 nova_compute[257802]: 2025-10-02 13:02:01.428 2 DEBUG oslo_concurrency.processutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/08b29362-e7c1-450f-bf22-95d23c21ff23/disk.config 08b29362-e7c1-450f-bf22-95d23c21ff23_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:01 compute-0 nova_compute[257802]: 2025-10-02 13:02:01.429 2 INFO nova.virt.libvirt.driver [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Deleting local config drive /var/lib/nova/instances/08b29362-e7c1-450f-bf22-95d23c21ff23/disk.config because it was imported into RBD.
Oct 02 13:02:01 compute-0 kernel: tapfa0139f5-ff: entered promiscuous mode
Oct 02 13:02:01 compute-0 NetworkManager[44987]: <info>  [1759410121.4787] manager: (tapfa0139f5-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/407)
Oct 02 13:02:01 compute-0 systemd-udevd[389007]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:02:01 compute-0 ovn_controller[148183]: 2025-10-02T13:02:01Z|00914|binding|INFO|Claiming lport fa0139f5-ff5d-45c1-9873-48b0a33759d5 for this chassis.
Oct 02 13:02:01 compute-0 ovn_controller[148183]: 2025-10-02T13:02:01Z|00915|binding|INFO|fa0139f5-ff5d-45c1-9873-48b0a33759d5: Claiming fa:16:3e:07:91:1b 10.100.0.14
Oct 02 13:02:01 compute-0 nova_compute[257802]: 2025-10-02 13:02:01.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:01 compute-0 nova_compute[257802]: 2025-10-02 13:02:01.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:01 compute-0 NetworkManager[44987]: <info>  [1759410121.5481] device (tapfa0139f5-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:02:01 compute-0 NetworkManager[44987]: <info>  [1759410121.5487] device (tapfa0139f5-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:02:01 compute-0 systemd-machined[211836]: New machine qemu-98-instance-000000c7.
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:01.577 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:91:1b 10.100.0.14'], port_security=['fa:16:3e:07:91:1b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '08b29362-e7c1-450f-bf22-95d23c21ff23', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2471b6f7-ee51-4239-8b52-7016ab4d9fd1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5eceae619a6f4fdeaa8ba6fafda4912a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '175c521b-bdcf-4a1d-a720-87f5bec0bed9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa923984-fb22-4ee5-9bd7-5034c98e7f0a, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=fa0139f5-ff5d-45c1-9873-48b0a33759d5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:01.578 158261 INFO neutron.agent.ovn.metadata.agent [-] Port fa0139f5-ff5d-45c1-9873-48b0a33759d5 in datapath 2471b6f7-ee51-4239-8b52-7016ab4d9fd1 bound to our chassis
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:01.579 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2471b6f7-ee51-4239-8b52-7016ab4d9fd1
Oct 02 13:02:01 compute-0 systemd[1]: Started Virtual Machine qemu-98-instance-000000c7.
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:01.592 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b53831f6-10ee-4def-a6db-b4b923e9f80b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:01.593 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2471b6f7-e1 in ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:01.595 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2471b6f7-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:01.596 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[63551a7b-73e7-42f2-b52b-5a04cc3312e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:01.596 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9cf9b55e-8aac-4e36-9a19-e0e5a7fe911d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:01.609 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[f15417d0-5368-44e0-8e64-5fd3c2177ca5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:01 compute-0 ovn_controller[148183]: 2025-10-02T13:02:01Z|00916|binding|INFO|Setting lport fa0139f5-ff5d-45c1-9873-48b0a33759d5 ovn-installed in OVS
Oct 02 13:02:01 compute-0 ovn_controller[148183]: 2025-10-02T13:02:01Z|00917|binding|INFO|Setting lport fa0139f5-ff5d-45c1-9873-48b0a33759d5 up in Southbound
Oct 02 13:02:01 compute-0 nova_compute[257802]: 2025-10-02 13:02:01.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:01.633 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5cb40653-8a07-487b-81a4-8880b213126e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:01.662 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[a06218cb-f76f-4fee-8a9a-cefcede97d55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:01.666 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[98bdc4fb-b322-4c01-abf0-81bc91b7e3ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:01 compute-0 NetworkManager[44987]: <info>  [1759410121.6680] manager: (tap2471b6f7-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/408)
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:01.702 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[40d45c9a-8a9d-4224-b97a-ccf15a0e5361]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:01.706 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[4eb5c7f4-73a4-41f4-8695-534588cdf1a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:01 compute-0 nova_compute[257802]: 2025-10-02 13:02:01.711 2 DEBUG nova.network.neutron [req-a0f143be-a747-4ded-93b8-b3ee6e894b25 req-666d025e-b82e-481d-b153-a020d551ec04 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Updated VIF entry in instance network info cache for port fa0139f5-ff5d-45c1-9873-48b0a33759d5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:02:01 compute-0 nova_compute[257802]: 2025-10-02 13:02:01.712 2 DEBUG nova.network.neutron [req-a0f143be-a747-4ded-93b8-b3ee6e894b25 req-666d025e-b82e-481d-b153-a020d551ec04 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Updating instance_info_cache with network_info: [{"id": "fa0139f5-ff5d-45c1-9873-48b0a33759d5", "address": "fa:16:3e:07:91:1b", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa0139f5-ff", "ovs_interfaceid": "fa0139f5-ff5d-45c1-9873-48b0a33759d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:02:01 compute-0 NetworkManager[44987]: <info>  [1759410121.7326] device (tap2471b6f7-e0): carrier: link connected
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:01.738 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[fc232fdf-b6af-47d2-8caa-940870ac7850]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:01.756 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f315b4e1-a65e-46bf-bb98-3cff625dc57d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2471b6f7-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:da:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 818935, 'reachable_time': 15798, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389043, 'error': None, 'target': 'ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:01.770 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[fc903bba-2b23-4d49-bdbb-7a09b3c416d3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe35:da65'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 818935, 'tstamp': 818935}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 389044, 'error': None, 'target': 'ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:01.791 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[18cbb35e-e534-4720-90c4-3c49ec3e23ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2471b6f7-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:da:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 818935, 'reachable_time': 15798, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 389045, 'error': None, 'target': 'ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:01 compute-0 nova_compute[257802]: 2025-10-02 13:02:01.796 2 DEBUG oslo_concurrency.lockutils [req-a0f143be-a747-4ded-93b8-b3ee6e894b25 req-666d025e-b82e-481d-b153-a020d551ec04 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-08b29362-e7c1-450f-bf22-95d23c21ff23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:01.824 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[412eab49-f76d-43d1-9aac-42fc1f73d710]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:01.968 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b0545d0e-4d73-4d86-91fa-d9147b5f9d42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:01.969 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2471b6f7-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:01.969 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:01.970 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2471b6f7-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:02:01 compute-0 nova_compute[257802]: 2025-10-02 13:02:01.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:01 compute-0 kernel: tap2471b6f7-e0: entered promiscuous mode
Oct 02 13:02:01 compute-0 NetworkManager[44987]: <info>  [1759410121.9741] manager: (tap2471b6f7-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/409)
Oct 02 13:02:01 compute-0 nova_compute[257802]: 2025-10-02 13:02:01.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:01.975 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2471b6f7-e0, col_values=(('external_ids', {'iface-id': 'c5388d11-12a4-491d-825a-d4dc574d0a0e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:02:01 compute-0 ovn_controller[148183]: 2025-10-02T13:02:01Z|00918|binding|INFO|Releasing lport c5388d11-12a4-491d-825a-d4dc574d0a0e from this chassis (sb_readonly=0)
Oct 02 13:02:01 compute-0 nova_compute[257802]: 2025-10-02 13:02:01.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:01.979 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2471b6f7-ee51-4239-8b52-7016ab4d9fd1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2471b6f7-ee51-4239-8b52-7016ab4d9fd1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:01.980 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[505f6a2a-18fc-43d7-b5b9-2d8ab5615bdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:01.981 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: global
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-2471b6f7-ee51-4239-8b52-7016ab4d9fd1
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/2471b6f7-ee51-4239-8b52-7016ab4d9fd1.pid.haproxy
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 2471b6f7-ee51-4239-8b52-7016ab4d9fd1
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:02:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:01.982 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1', 'env', 'PROCESS_TAG=haproxy-2471b6f7-ee51-4239-8b52-7016ab4d9fd1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2471b6f7-ee51-4239-8b52-7016ab4d9fd1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:02:01 compute-0 nova_compute[257802]: 2025-10-02 13:02:01.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:02 compute-0 nova_compute[257802]: 2025-10-02 13:02:02.234 2 DEBUG nova.compute.manager [req-a496cefb-2a9f-4b2d-a3f0-856cd46c9117 req-5a0b870d-4cb7-4bec-9433-6178418d2262 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Received event network-vif-plugged-fa0139f5-ff5d-45c1-9873-48b0a33759d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:02:02 compute-0 nova_compute[257802]: 2025-10-02 13:02:02.235 2 DEBUG oslo_concurrency.lockutils [req-a496cefb-2a9f-4b2d-a3f0-856cd46c9117 req-5a0b870d-4cb7-4bec-9433-6178418d2262 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "08b29362-e7c1-450f-bf22-95d23c21ff23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:02 compute-0 nova_compute[257802]: 2025-10-02 13:02:02.236 2 DEBUG oslo_concurrency.lockutils [req-a496cefb-2a9f-4b2d-a3f0-856cd46c9117 req-5a0b870d-4cb7-4bec-9433-6178418d2262 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "08b29362-e7c1-450f-bf22-95d23c21ff23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:02 compute-0 nova_compute[257802]: 2025-10-02 13:02:02.236 2 DEBUG oslo_concurrency.lockutils [req-a496cefb-2a9f-4b2d-a3f0-856cd46c9117 req-5a0b870d-4cb7-4bec-9433-6178418d2262 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "08b29362-e7c1-450f-bf22-95d23c21ff23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:02 compute-0 nova_compute[257802]: 2025-10-02 13:02:02.236 2 DEBUG nova.compute.manager [req-a496cefb-2a9f-4b2d-a3f0-856cd46c9117 req-5a0b870d-4cb7-4bec-9433-6178418d2262 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Processing event network-vif-plugged-fa0139f5-ff5d-45c1-9873-48b0a33759d5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:02:02 compute-0 podman[389114]: 2025-10-02 13:02:02.341850077 +0000 UTC m=+0.052720637 container create 6374e2b8c3073dd12e02dbececa33d0e568e85fec94ef81c4969608d60ae7d22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 13:02:02 compute-0 systemd[1]: Started libpod-conmon-6374e2b8c3073dd12e02dbececa33d0e568e85fec94ef81c4969608d60ae7d22.scope.
Oct 02 13:02:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:02:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38cacc1a97e2b2d0f6a47f4d3c69917086fade6266dcb33ffa08af3e2e7de04/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:02:02 compute-0 podman[389114]: 2025-10-02 13:02:02.312019705 +0000 UTC m=+0.022890285 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:02:02 compute-0 podman[389114]: 2025-10-02 13:02:02.418372598 +0000 UTC m=+0.129243188 container init 6374e2b8c3073dd12e02dbececa33d0e568e85fec94ef81c4969608d60ae7d22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 13:02:02 compute-0 podman[389114]: 2025-10-02 13:02:02.428053713 +0000 UTC m=+0.138924273 container start 6374e2b8c3073dd12e02dbececa33d0e568e85fec94ef81c4969608d60ae7d22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 13:02:02 compute-0 neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1[389134]: [NOTICE]   (389138) : New worker (389140) forked
Oct 02 13:02:02 compute-0 neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1[389134]: [NOTICE]   (389138) : Loading success.
Oct 02 13:02:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3101: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.3 MiB/s wr, 23 op/s
Oct 02 13:02:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:02.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:02 compute-0 nova_compute[257802]: 2025-10-02 13:02:02.779 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759410122.7789378, 08b29362-e7c1-450f-bf22-95d23c21ff23 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:02:02 compute-0 nova_compute[257802]: 2025-10-02 13:02:02.780 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] VM Started (Lifecycle Event)
Oct 02 13:02:02 compute-0 nova_compute[257802]: 2025-10-02 13:02:02.782 2 DEBUG nova.compute.manager [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:02:02 compute-0 nova_compute[257802]: 2025-10-02 13:02:02.784 2 DEBUG nova.virt.libvirt.driver [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:02:02 compute-0 nova_compute[257802]: 2025-10-02 13:02:02.787 2 INFO nova.virt.libvirt.driver [-] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Instance spawned successfully.
Oct 02 13:02:02 compute-0 nova_compute[257802]: 2025-10-02 13:02:02.787 2 DEBUG nova.virt.libvirt.driver [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:02:03 compute-0 nova_compute[257802]: 2025-10-02 13:02:03.090 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:02:03 compute-0 nova_compute[257802]: 2025-10-02 13:02:03.094 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:02:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:03.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:03 compute-0 nova_compute[257802]: 2025-10-02 13:02:03.215 2 DEBUG nova.virt.libvirt.driver [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:02:03 compute-0 nova_compute[257802]: 2025-10-02 13:02:03.216 2 DEBUG nova.virt.libvirt.driver [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:02:03 compute-0 nova_compute[257802]: 2025-10-02 13:02:03.217 2 DEBUG nova.virt.libvirt.driver [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:02:03 compute-0 nova_compute[257802]: 2025-10-02 13:02:03.217 2 DEBUG nova.virt.libvirt.driver [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:02:03 compute-0 nova_compute[257802]: 2025-10-02 13:02:03.218 2 DEBUG nova.virt.libvirt.driver [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:02:03 compute-0 nova_compute[257802]: 2025-10-02 13:02:03.219 2 DEBUG nova.virt.libvirt.driver [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:02:03 compute-0 nova_compute[257802]: 2025-10-02 13:02:03.293 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:02:03 compute-0 nova_compute[257802]: 2025-10-02 13:02:03.293 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759410122.7799497, 08b29362-e7c1-450f-bf22-95d23c21ff23 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:02:03 compute-0 nova_compute[257802]: 2025-10-02 13:02:03.294 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] VM Paused (Lifecycle Event)
Oct 02 13:02:03 compute-0 nova_compute[257802]: 2025-10-02 13:02:03.418 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:02:03 compute-0 nova_compute[257802]: 2025-10-02 13:02:03.422 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759410122.7842133, 08b29362-e7c1-450f-bf22-95d23c21ff23 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:02:03 compute-0 nova_compute[257802]: 2025-10-02 13:02:03.423 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] VM Resumed (Lifecycle Event)
Oct 02 13:02:03 compute-0 ceph-mon[73607]: pgmap v3101: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.3 MiB/s wr, 23 op/s
Oct 02 13:02:03 compute-0 nova_compute[257802]: 2025-10-02 13:02:03.676 2 INFO nova.compute.manager [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Took 14.32 seconds to spawn the instance on the hypervisor.
Oct 02 13:02:03 compute-0 nova_compute[257802]: 2025-10-02 13:02:03.677 2 DEBUG nova.compute.manager [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:02:03 compute-0 nova_compute[257802]: 2025-10-02 13:02:03.683 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:02:03 compute-0 nova_compute[257802]: 2025-10-02 13:02:03.685 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:02:04 compute-0 nova_compute[257802]: 2025-10-02 13:02:04.152 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:02:04 compute-0 nova_compute[257802]: 2025-10-02 13:02:04.372 2 INFO nova.compute.manager [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Took 15.96 seconds to build instance.
Oct 02 13:02:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3102: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 606 KiB/s rd, 1.3 MiB/s wr, 43 op/s
Oct 02 13:02:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:02:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:04.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:02:04 compute-0 nova_compute[257802]: 2025-10-02 13:02:04.611 2 DEBUG oslo_concurrency.lockutils [None req-2f15d4bd-3c3c-4c3b-8a46-d03064639331 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "08b29362-e7c1-450f-bf22-95d23c21ff23" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.394s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:04 compute-0 nova_compute[257802]: 2025-10-02 13:02:04.663 2 DEBUG nova.compute.manager [req-2661adaf-ea53-4853-b461-db0585be5874 req-551b74a5-6d48-4f01-ab04-aec2f83bd982 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Received event network-vif-plugged-fa0139f5-ff5d-45c1-9873-48b0a33759d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:02:04 compute-0 nova_compute[257802]: 2025-10-02 13:02:04.664 2 DEBUG oslo_concurrency.lockutils [req-2661adaf-ea53-4853-b461-db0585be5874 req-551b74a5-6d48-4f01-ab04-aec2f83bd982 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "08b29362-e7c1-450f-bf22-95d23c21ff23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:04 compute-0 nova_compute[257802]: 2025-10-02 13:02:04.664 2 DEBUG oslo_concurrency.lockutils [req-2661adaf-ea53-4853-b461-db0585be5874 req-551b74a5-6d48-4f01-ab04-aec2f83bd982 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "08b29362-e7c1-450f-bf22-95d23c21ff23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:04 compute-0 nova_compute[257802]: 2025-10-02 13:02:04.664 2 DEBUG oslo_concurrency.lockutils [req-2661adaf-ea53-4853-b461-db0585be5874 req-551b74a5-6d48-4f01-ab04-aec2f83bd982 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "08b29362-e7c1-450f-bf22-95d23c21ff23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:04 compute-0 nova_compute[257802]: 2025-10-02 13:02:04.665 2 DEBUG nova.compute.manager [req-2661adaf-ea53-4853-b461-db0585be5874 req-551b74a5-6d48-4f01-ab04-aec2f83bd982 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] No waiting events found dispatching network-vif-plugged-fa0139f5-ff5d-45c1-9873-48b0a33759d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:02:04 compute-0 nova_compute[257802]: 2025-10-02 13:02:04.665 2 WARNING nova.compute.manager [req-2661adaf-ea53-4853-b461-db0585be5874 req-551b74a5-6d48-4f01-ab04-aec2f83bd982 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Received unexpected event network-vif-plugged-fa0139f5-ff5d-45c1-9873-48b0a33759d5 for instance with vm_state active and task_state None.
Oct 02 13:02:04 compute-0 nova_compute[257802]: 2025-10-02 13:02:04.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:05 compute-0 nova_compute[257802]: 2025-10-02 13:02:05.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:05.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:05 compute-0 ceph-mon[73607]: pgmap v3102: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 606 KiB/s rd, 1.3 MiB/s wr, 43 op/s
Oct 02 13:02:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3103: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 93 op/s
Oct 02 13:02:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:02:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:06.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:02:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:02:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:07.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:02:07 compute-0 ceph-mon[73607]: pgmap v3103: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 93 op/s
Oct 02 13:02:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3104: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 83 op/s
Oct 02 13:02:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:08.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:08 compute-0 podman[389153]: 2025-10-02 13:02:08.958814829 +0000 UTC m=+0.088516093 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct 02 13:02:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:09.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:09 compute-0 ceph-mon[73607]: pgmap v3104: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 83 op/s
Oct 02 13:02:09 compute-0 nova_compute[257802]: 2025-10-02 13:02:09.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:10 compute-0 nova_compute[257802]: 2025-10-02 13:02:10.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:10 compute-0 NetworkManager[44987]: <info>  [1759410130.3775] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/410)
Oct 02 13:02:10 compute-0 NetworkManager[44987]: <info>  [1759410130.3783] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/411)
Oct 02 13:02:10 compute-0 nova_compute[257802]: 2025-10-02 13:02:10.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:10 compute-0 nova_compute[257802]: 2025-10-02 13:02:10.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:10 compute-0 ovn_controller[148183]: 2025-10-02T13:02:10Z|00919|binding|INFO|Releasing lport c5388d11-12a4-491d-825a-d4dc574d0a0e from this chassis (sb_readonly=0)
Oct 02 13:02:10 compute-0 nova_compute[257802]: 2025-10-02 13:02:10.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3105: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Oct 02 13:02:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:02:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:10.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:02:10 compute-0 ceph-mon[73607]: pgmap v3105: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Oct 02 13:02:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:11.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:11 compute-0 nova_compute[257802]: 2025-10-02 13:02:11.749 2 DEBUG nova.compute.manager [req-534dacae-daf2-4056-b92b-884526e82a91 req-2deb659c-c208-4f73-9b83-b00d8e865ec3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Received event network-changed-fa0139f5-ff5d-45c1-9873-48b0a33759d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:02:11 compute-0 nova_compute[257802]: 2025-10-02 13:02:11.750 2 DEBUG nova.compute.manager [req-534dacae-daf2-4056-b92b-884526e82a91 req-2deb659c-c208-4f73-9b83-b00d8e865ec3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Refreshing instance network info cache due to event network-changed-fa0139f5-ff5d-45c1-9873-48b0a33759d5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:02:11 compute-0 nova_compute[257802]: 2025-10-02 13:02:11.750 2 DEBUG oslo_concurrency.lockutils [req-534dacae-daf2-4056-b92b-884526e82a91 req-2deb659c-c208-4f73-9b83-b00d8e865ec3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-08b29362-e7c1-450f-bf22-95d23c21ff23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:02:11 compute-0 nova_compute[257802]: 2025-10-02 13:02:11.750 2 DEBUG oslo_concurrency.lockutils [req-534dacae-daf2-4056-b92b-884526e82a91 req-2deb659c-c208-4f73-9b83-b00d8e865ec3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-08b29362-e7c1-450f-bf22-95d23c21ff23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:02:11 compute-0 nova_compute[257802]: 2025-10-02 13:02:11.750 2 DEBUG nova.network.neutron [req-534dacae-daf2-4056-b92b-884526e82a91 req-2deb659c-c208-4f73-9b83-b00d8e865ec3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Refreshing network info cache for port fa0139f5-ff5d-45c1-9873-48b0a33759d5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:02:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3106: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Oct 02 13:02:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:12.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:02:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:02:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:02:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:02:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:02:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:02:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:02:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:13.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:02:13 compute-0 ceph-mon[73607]: pgmap v3106: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Oct 02 13:02:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3107: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 149 op/s
Oct 02 13:02:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:14.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:14 compute-0 nova_compute[257802]: 2025-10-02 13:02:14.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:15 compute-0 nova_compute[257802]: 2025-10-02 13:02:15.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:15.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:15 compute-0 nova_compute[257802]: 2025-10-02 13:02:15.541 2 DEBUG nova.network.neutron [req-534dacae-daf2-4056-b92b-884526e82a91 req-2deb659c-c208-4f73-9b83-b00d8e865ec3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Updated VIF entry in instance network info cache for port fa0139f5-ff5d-45c1-9873-48b0a33759d5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:02:15 compute-0 nova_compute[257802]: 2025-10-02 13:02:15.542 2 DEBUG nova.network.neutron [req-534dacae-daf2-4056-b92b-884526e82a91 req-2deb659c-c208-4f73-9b83-b00d8e865ec3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Updating instance_info_cache with network_info: [{"id": "fa0139f5-ff5d-45c1-9873-48b0a33759d5", "address": "fa:16:3e:07:91:1b", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa0139f5-ff", "ovs_interfaceid": "fa0139f5-ff5d-45c1-9873-48b0a33759d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:02:15 compute-0 ovn_controller[148183]: 2025-10-02T13:02:15Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:07:91:1b 10.100.0.14
Oct 02 13:02:15 compute-0 ovn_controller[148183]: 2025-10-02T13:02:15Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:07:91:1b 10.100.0.14
Oct 02 13:02:15 compute-0 ceph-mon[73607]: pgmap v3107: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 149 op/s
Oct 02 13:02:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3108: 305 pgs: 305 active+clean; 235 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.9 MiB/s wr, 153 op/s
Oct 02 13:02:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:02:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:16.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:02:17 compute-0 nova_compute[257802]: 2025-10-02 13:02:17.148 2 DEBUG oslo_concurrency.lockutils [req-534dacae-daf2-4056-b92b-884526e82a91 req-2deb659c-c208-4f73-9b83-b00d8e865ec3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-08b29362-e7c1-450f-bf22-95d23c21ff23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:02:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:17.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:17 compute-0 ceph-mon[73607]: pgmap v3108: 305 pgs: 305 active+clean; 235 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.9 MiB/s wr, 153 op/s
Oct 02 13:02:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3109: 305 pgs: 305 active+clean; 235 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 90 op/s
Oct 02 13:02:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:18.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1747323140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:19 compute-0 nova_compute[257802]: 2025-10-02 13:02:19.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:19 compute-0 nova_compute[257802]: 2025-10-02 13:02:19.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:02:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:19.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:19 compute-0 ceph-mon[73607]: pgmap v3109: 305 pgs: 305 active+clean; 235 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 90 op/s
Oct 02 13:02:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3523359346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:19 compute-0 nova_compute[257802]: 2025-10-02 13:02:19.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:20 compute-0 nova_compute[257802]: 2025-10-02 13:02:20.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:20.383 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=73, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=72) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:02:20 compute-0 nova_compute[257802]: 2025-10-02 13:02:20.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:20 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:20.385 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:02:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3110: 305 pgs: 305 active+clean; 268 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.2 MiB/s wr, 169 op/s
Oct 02 13:02:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:02:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:20.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:02:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:02:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:21.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:02:21 compute-0 sudo[389188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:21 compute-0 sudo[389188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:21 compute-0 sudo[389188]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:21 compute-0 sudo[389213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:21 compute-0 sudo[389213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:21 compute-0 sudo[389213]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:21 compute-0 ceph-mon[73607]: pgmap v3110: 305 pgs: 305 active+clean; 268 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.2 MiB/s wr, 169 op/s
Oct 02 13:02:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3111: 305 pgs: 305 active+clean; 268 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 455 KiB/s rd, 4.2 MiB/s wr, 105 op/s
Oct 02 13:02:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:22.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:23.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:23 compute-0 ceph-mon[73607]: pgmap v3111: 305 pgs: 305 active+clean; 268 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 455 KiB/s rd, 4.2 MiB/s wr, 105 op/s
Oct 02 13:02:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3112: 305 pgs: 305 active+clean; 276 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 631 KiB/s rd, 4.2 MiB/s wr, 120 op/s
Oct 02 13:02:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:02:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:24.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:02:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:24 compute-0 nova_compute[257802]: 2025-10-02 13:02:24.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:25 compute-0 nova_compute[257802]: 2025-10-02 13:02:25.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:25.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:25 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:25.387 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '73'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:02:25 compute-0 ceph-mon[73607]: pgmap v3112: 305 pgs: 305 active+clean; 276 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 631 KiB/s rd, 4.2 MiB/s wr, 120 op/s
Oct 02 13:02:26 compute-0 nova_compute[257802]: 2025-10-02 13:02:26.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3113: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 648 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Oct 02 13:02:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:26.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:26 compute-0 ceph-mon[73607]: pgmap v3113: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 648 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Oct 02 13:02:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:26.985 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:26.985 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:02:26.986 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:27 compute-0 nova_compute[257802]: 2025-10-02 13:02:27.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:27 compute-0 nova_compute[257802]: 2025-10-02 13:02:27.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:27.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3114: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 469 KiB/s rd, 2.4 MiB/s wr, 102 op/s
Oct 02 13:02:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:28.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:29.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:29 compute-0 ceph-mon[73607]: pgmap v3114: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 469 KiB/s rd, 2.4 MiB/s wr, 102 op/s
Oct 02 13:02:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:29 compute-0 nova_compute[257802]: 2025-10-02 13:02:29.830 2 DEBUG oslo_concurrency.lockutils [None req-9564aa46-301b-4c83-bc2f-874245456bdd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "08b29362-e7c1-450f-bf22-95d23c21ff23" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:29 compute-0 nova_compute[257802]: 2025-10-02 13:02:29.830 2 DEBUG oslo_concurrency.lockutils [None req-9564aa46-301b-4c83-bc2f-874245456bdd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "08b29362-e7c1-450f-bf22-95d23c21ff23" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:29 compute-0 nova_compute[257802]: 2025-10-02 13:02:29.904 2 DEBUG nova.objects.instance [None req-9564aa46-301b-4c83-bc2f-874245456bdd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lazy-loading 'flavor' on Instance uuid 08b29362-e7c1-450f-bf22-95d23c21ff23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:02:29 compute-0 nova_compute[257802]: 2025-10-02 13:02:29.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:30 compute-0 nova_compute[257802]: 2025-10-02 13:02:30.024 2 DEBUG oslo_concurrency.lockutils [None req-9564aa46-301b-4c83-bc2f-874245456bdd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "08b29362-e7c1-450f-bf22-95d23c21ff23" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:30 compute-0 nova_compute[257802]: 2025-10-02 13:02:30.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:30 compute-0 nova_compute[257802]: 2025-10-02 13:02:30.469 2 DEBUG oslo_concurrency.lockutils [None req-9564aa46-301b-4c83-bc2f-874245456bdd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "08b29362-e7c1-450f-bf22-95d23c21ff23" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:30 compute-0 nova_compute[257802]: 2025-10-02 13:02:30.469 2 DEBUG oslo_concurrency.lockutils [None req-9564aa46-301b-4c83-bc2f-874245456bdd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "08b29362-e7c1-450f-bf22-95d23c21ff23" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:30 compute-0 nova_compute[257802]: 2025-10-02 13:02:30.470 2 INFO nova.compute.manager [None req-9564aa46-301b-4c83-bc2f-874245456bdd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Attaching volume b071d28d-2baf-4099-b6f4-9ad6bb72c88f to /dev/vdb
Oct 02 13:02:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3115: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 470 KiB/s rd, 2.4 MiB/s wr, 104 op/s
Oct 02 13:02:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:30.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:30 compute-0 nova_compute[257802]: 2025-10-02 13:02:30.774 2 DEBUG os_brick.utils [None req-9564aa46-301b-4c83-bc2f-874245456bdd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 13:02:30 compute-0 nova_compute[257802]: 2025-10-02 13:02:30.776 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:30 compute-0 nova_compute[257802]: 2025-10-02 13:02:30.789 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:30 compute-0 nova_compute[257802]: 2025-10-02 13:02:30.790 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[ad1e7831-6fc9-4f01-9f5f-48cb35963af3]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:30 compute-0 nova_compute[257802]: 2025-10-02 13:02:30.792 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:30 compute-0 nova_compute[257802]: 2025-10-02 13:02:30.809 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:30 compute-0 nova_compute[257802]: 2025-10-02 13:02:30.810 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[7d664177-58aa-4cb3-b220-0909bbfe2e2c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:30 compute-0 nova_compute[257802]: 2025-10-02 13:02:30.812 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:30 compute-0 nova_compute[257802]: 2025-10-02 13:02:30.827 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:30 compute-0 nova_compute[257802]: 2025-10-02 13:02:30.828 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[d3cb6342-b5ba-4f96-97f6-5169abb24e91]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:30 compute-0 nova_compute[257802]: 2025-10-02 13:02:30.830 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee02155-ddba-40ee-81e8-4a0096f8f29e]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:30 compute-0 nova_compute[257802]: 2025-10-02 13:02:30.830 2 DEBUG oslo_concurrency.processutils [None req-9564aa46-301b-4c83-bc2f-874245456bdd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:30 compute-0 nova_compute[257802]: 2025-10-02 13:02:30.868 2 DEBUG oslo_concurrency.processutils [None req-9564aa46-301b-4c83-bc2f-874245456bdd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CMD "nvme version" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:30 compute-0 nova_compute[257802]: 2025-10-02 13:02:30.871 2 DEBUG os_brick.initiator.connectors.lightos [None req-9564aa46-301b-4c83-bc2f-874245456bdd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 13:02:30 compute-0 nova_compute[257802]: 2025-10-02 13:02:30.871 2 DEBUG os_brick.initiator.connectors.lightos [None req-9564aa46-301b-4c83-bc2f-874245456bdd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 13:02:30 compute-0 nova_compute[257802]: 2025-10-02 13:02:30.871 2 DEBUG os_brick.initiator.connectors.lightos [None req-9564aa46-301b-4c83-bc2f-874245456bdd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 13:02:30 compute-0 nova_compute[257802]: 2025-10-02 13:02:30.872 2 DEBUG os_brick.utils [None req-9564aa46-301b-4c83-bc2f-874245456bdd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] <== get_connector_properties: return (96ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 13:02:30 compute-0 nova_compute[257802]: 2025-10-02 13:02:30.872 2 DEBUG nova.virt.block_device [None req-9564aa46-301b-4c83-bc2f-874245456bdd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Updating existing volume attachment record: cca26f33-916c-4391-862c-471a8cfad51c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 13:02:31 compute-0 nova_compute[257802]: 2025-10-02 13:02:31.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:31.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:31 compute-0 ceph-mon[73607]: pgmap v3115: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 470 KiB/s rd, 2.4 MiB/s wr, 104 op/s
Oct 02 13:02:31 compute-0 podman[389251]: 2025-10-02 13:02:31.931950039 +0000 UTC m=+0.055345681 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3)
Oct 02 13:02:31 compute-0 podman[389252]: 2025-10-02 13:02:31.932626805 +0000 UTC m=+0.056441877 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:02:31 compute-0 podman[389250]: 2025-10-02 13:02:31.949576735 +0000 UTC m=+0.079902684 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 02 13:02:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3116: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 201 KiB/s rd, 117 KiB/s wr, 25 op/s
Oct 02 13:02:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:32.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:32 compute-0 nova_compute[257802]: 2025-10-02 13:02:32.700 2 DEBUG nova.objects.instance [None req-9564aa46-301b-4c83-bc2f-874245456bdd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lazy-loading 'flavor' on Instance uuid 08b29362-e7c1-450f-bf22-95d23c21ff23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:02:32 compute-0 nova_compute[257802]: 2025-10-02 13:02:32.767 2 DEBUG nova.virt.libvirt.driver [None req-9564aa46-301b-4c83-bc2f-874245456bdd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Attempting to attach volume b071d28d-2baf-4099-b6f4-9ad6bb72c88f with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 13:02:32 compute-0 nova_compute[257802]: 2025-10-02 13:02:32.770 2 DEBUG nova.virt.libvirt.guest [None req-9564aa46-301b-4c83-bc2f-874245456bdd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 13:02:32 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:02:32 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-b071d28d-2baf-4099-b6f4-9ad6bb72c88f">
Oct 02 13:02:32 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:02:32 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:02:32 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:02:32 compute-0 nova_compute[257802]:   </source>
Oct 02 13:02:32 compute-0 nova_compute[257802]:   <auth username="openstack">
Oct 02 13:02:32 compute-0 nova_compute[257802]:     <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 13:02:32 compute-0 nova_compute[257802]:   </auth>
Oct 02 13:02:32 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:02:32 compute-0 nova_compute[257802]:   <serial>b071d28d-2baf-4099-b6f4-9ad6bb72c88f</serial>
Oct 02 13:02:32 compute-0 nova_compute[257802]: </disk>
Oct 02 13:02:32 compute-0 nova_compute[257802]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 13:02:32 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/28107586' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:32 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2639032570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:33 compute-0 nova_compute[257802]: 2025-10-02 13:02:33.053 2 DEBUG nova.virt.libvirt.driver [None req-9564aa46-301b-4c83-bc2f-874245456bdd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:02:33 compute-0 nova_compute[257802]: 2025-10-02 13:02:33.054 2 DEBUG nova.virt.libvirt.driver [None req-9564aa46-301b-4c83-bc2f-874245456bdd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:02:33 compute-0 nova_compute[257802]: 2025-10-02 13:02:33.054 2 DEBUG nova.virt.libvirt.driver [None req-9564aa46-301b-4c83-bc2f-874245456bdd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:02:33 compute-0 nova_compute[257802]: 2025-10-02 13:02:33.055 2 DEBUG nova.virt.libvirt.driver [None req-9564aa46-301b-4c83-bc2f-874245456bdd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] No VIF found with MAC fa:16:3e:07:91:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:02:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:33.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:34 compute-0 ceph-mon[73607]: pgmap v3116: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 201 KiB/s rd, 117 KiB/s wr, 25 op/s
Oct 02 13:02:34 compute-0 nova_compute[257802]: 2025-10-02 13:02:34.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3117: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 201 KiB/s rd, 120 KiB/s wr, 25 op/s
Oct 02 13:02:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:34.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:34 compute-0 nova_compute[257802]: 2025-10-02 13:02:34.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:35 compute-0 nova_compute[257802]: 2025-10-02 13:02:35.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:35 compute-0 ceph-mon[73607]: pgmap v3117: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 201 KiB/s rd, 120 KiB/s wr, 25 op/s
Oct 02 13:02:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:02:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:35.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:02:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2751757529' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3118: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 45 KiB/s wr, 13 op/s
Oct 02 13:02:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:36.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:36 compute-0 nova_compute[257802]: 2025-10-02 13:02:36.705 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:02:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:37.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:02:37 compute-0 ceph-mon[73607]: pgmap v3118: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 45 KiB/s wr, 13 op/s
Oct 02 13:02:38 compute-0 nova_compute[257802]: 2025-10-02 13:02:38.263 2 DEBUG oslo_concurrency.lockutils [None req-9564aa46-301b-4c83-bc2f-874245456bdd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "08b29362-e7c1-450f-bf22-95d23c21ff23" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 7.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3119: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 17 KiB/s wr, 4 op/s
Oct 02 13:02:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:02:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:38.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:02:38 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1299835460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:39 compute-0 nova_compute[257802]: 2025-10-02 13:02:39.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:39 compute-0 nova_compute[257802]: 2025-10-02 13:02:39.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:02:39 compute-0 nova_compute[257802]: 2025-10-02 13:02:39.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:02:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:39.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:39 compute-0 ceph-mon[73607]: pgmap v3119: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 17 KiB/s wr, 4 op/s
Oct 02 13:02:39 compute-0 nova_compute[257802]: 2025-10-02 13:02:39.876 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-08b29362-e7c1-450f-bf22-95d23c21ff23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:02:39 compute-0 nova_compute[257802]: 2025-10-02 13:02:39.877 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-08b29362-e7c1-450f-bf22-95d23c21ff23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:02:39 compute-0 nova_compute[257802]: 2025-10-02 13:02:39.878 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:02:39 compute-0 nova_compute[257802]: 2025-10-02 13:02:39.878 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid 08b29362-e7c1-450f-bf22-95d23c21ff23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:02:39 compute-0 podman[389331]: 2025-10-02 13:02:39.937878333 +0000 UTC m=+0.075759683 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:02:39 compute-0 nova_compute[257802]: 2025-10-02 13:02:39.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:40 compute-0 nova_compute[257802]: 2025-10-02 13:02:40.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3120: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 02 13:02:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:40.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:02:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:41.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:02:41 compute-0 sudo[389358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:41 compute-0 sudo[389358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:41 compute-0 sudo[389358]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:41 compute-0 sudo[389383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:41 compute-0 sudo[389383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:41 compute-0 sudo[389383]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:41 compute-0 ceph-mon[73607]: pgmap v3120: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 02 13:02:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3121: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 02 13:02:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:02:42
Oct 02 13:02:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:02:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:02:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'images', 'default.rgw.log', 'vms', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes']
Oct 02 13:02:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:02:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:42.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:02:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:02:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:02:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:02:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:02:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:02:42 compute-0 nova_compute[257802]: 2025-10-02 13:02:42.936 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Updating instance_info_cache with network_info: [{"id": "fa0139f5-ff5d-45c1-9873-48b0a33759d5", "address": "fa:16:3e:07:91:1b", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa0139f5-ff", "ovs_interfaceid": "fa0139f5-ff5d-45c1-9873-48b0a33759d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:02:43 compute-0 nova_compute[257802]: 2025-10-02 13:02:43.000 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-08b29362-e7c1-450f-bf22-95d23c21ff23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:02:43 compute-0 nova_compute[257802]: 2025-10-02 13:02:43.000 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:02:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:43.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:02:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:02:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:02:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:02:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:02:43 compute-0 ceph-mon[73607]: pgmap v3121: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 02 13:02:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:02:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:02:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:02:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:02:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:02:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3122: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 02 13:02:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:44.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:44 compute-0 ceph-mon[73607]: pgmap v3122: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 02 13:02:44 compute-0 nova_compute[257802]: 2025-10-02 13:02:44.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:45 compute-0 nova_compute[257802]: 2025-10-02 13:02:45.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:45 compute-0 nova_compute[257802]: 2025-10-02 13:02:45.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:45.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:45 compute-0 nova_compute[257802]: 2025-10-02 13:02:45.282 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:45 compute-0 nova_compute[257802]: 2025-10-02 13:02:45.283 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:45 compute-0 nova_compute[257802]: 2025-10-02 13:02:45.283 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:45 compute-0 nova_compute[257802]: 2025-10-02 13:02:45.283 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:02:45 compute-0 nova_compute[257802]: 2025-10-02 13:02:45.284 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:02:45 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/468566757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:45 compute-0 nova_compute[257802]: 2025-10-02 13:02:45.706 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:45 compute-0 nova_compute[257802]: 2025-10-02 13:02:45.802 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000c7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:02:45 compute-0 nova_compute[257802]: 2025-10-02 13:02:45.803 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000c7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:02:45 compute-0 nova_compute[257802]: 2025-10-02 13:02:45.803 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000c7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:02:45 compute-0 nova_compute[257802]: 2025-10-02 13:02:45.940 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:02:45 compute-0 nova_compute[257802]: 2025-10-02 13:02:45.941 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4049MB free_disk=20.876380920410156GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:02:45 compute-0 nova_compute[257802]: 2025-10-02 13:02:45.942 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:45 compute-0 nova_compute[257802]: 2025-10-02 13:02:45.942 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/468566757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:46 compute-0 nova_compute[257802]: 2025-10-02 13:02:46.096 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 08b29362-e7c1-450f-bf22-95d23c21ff23 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:02:46 compute-0 nova_compute[257802]: 2025-10-02 13:02:46.097 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:02:46 compute-0 nova_compute[257802]: 2025-10-02 13:02:46.097 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:02:46 compute-0 nova_compute[257802]: 2025-10-02 13:02:46.241 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3123: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 02 13:02:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:02:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:02:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:46.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:02:46 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3620999960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:46 compute-0 nova_compute[257802]: 2025-10-02 13:02:46.677 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:46 compute-0 nova_compute[257802]: 2025-10-02 13:02:46.682 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:02:46 compute-0 nova_compute[257802]: 2025-10-02 13:02:46.721 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:02:46 compute-0 nova_compute[257802]: 2025-10-02 13:02:46.759 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:02:46 compute-0 nova_compute[257802]: 2025-10-02 13:02:46.759 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:47 compute-0 ceph-mon[73607]: pgmap v3123: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 02 13:02:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3463811713' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3620999960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3106170106' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:02:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:47.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:02:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3124: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:02:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:48.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:49.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:49 compute-0 ceph-mon[73607]: pgmap v3124: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:02:49 compute-0 nova_compute[257802]: 2025-10-02 13:02:49.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:50 compute-0 nova_compute[257802]: 2025-10-02 13:02:50.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3125: 305 pgs: 305 active+clean; 326 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 02 13:02:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:02:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:50.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:02:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:51.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:51 compute-0 ceph-mon[73607]: pgmap v3125: 305 pgs: 305 active+clean; 326 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 02 13:02:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3126: 305 pgs: 305 active+clean; 326 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.5 KiB/s rd, 15 KiB/s wr, 6 op/s
Oct 02 13:02:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:02:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:52.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:02:52 compute-0 nova_compute[257802]: 2025-10-02 13:02:52.759 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:52 compute-0 ceph-mon[73607]: pgmap v3126: 305 pgs: 305 active+clean; 326 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.5 KiB/s rd, 15 KiB/s wr, 6 op/s
Oct 02 13:02:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:53.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:54 compute-0 sudo[389459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:54 compute-0 sudo[389459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:54 compute-0 sudo[389459]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:54 compute-0 sudo[389484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:02:54 compute-0 sudo[389484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:54 compute-0 sudo[389484]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2175556008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:54 compute-0 sudo[389509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:54 compute-0 sudo[389509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:54 compute-0 sudo[389509]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:54 compute-0 sudo[389534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:02:54 compute-0 sudo[389534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3127: 305 pgs: 305 active+clean; 326 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 125 KiB/s rd, 17 KiB/s wr, 15 op/s
Oct 02 13:02:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:54.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005338832472547925 of space, bias 1.0, pg target 1.6016497417643776 quantized to 32 (current 32)
Oct 02 13:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6464260320700727 quantized to 32 (current 32)
Oct 02 13:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Oct 02 13:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 13:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 13:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027172174530057695 quantized to 32 (current 32)
Oct 02 13:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 13:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:02:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 13:02:54 compute-0 sudo[389534]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:55 compute-0 nova_compute[257802]: 2025-10-02 13:02:55.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:02:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:02:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:02:55 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:02:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:02:55 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:02:55 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 7934a698-1d0b-4b4f-8755-2a82d3ed2874 does not exist
Oct 02 13:02:55 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 82a35637-a551-47c6-ba0a-a96f23296808 does not exist
Oct 02 13:02:55 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev c26f8d75-f092-4645-b36a-e450f3edc97d does not exist
Oct 02 13:02:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:02:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:02:55 compute-0 nova_compute[257802]: 2025-10-02 13:02:55.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:02:55 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:02:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:02:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:02:55 compute-0 sudo[389593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:55 compute-0 sudo[389593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:55 compute-0 sudo[389593]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:55.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:55 compute-0 ceph-mon[73607]: pgmap v3127: 305 pgs: 305 active+clean; 326 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 125 KiB/s rd, 17 KiB/s wr, 15 op/s
Oct 02 13:02:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:02:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:02:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:02:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:02:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:02:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:02:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2008475557' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:02:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2008475557' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:02:55 compute-0 sudo[389618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:02:55 compute-0 sudo[389618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:55 compute-0 sudo[389618]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:55 compute-0 sudo[389643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:55 compute-0 sudo[389643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:55 compute-0 sudo[389643]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:55 compute-0 sudo[389668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:02:55 compute-0 sudo[389668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:55 compute-0 podman[389734]: 2025-10-02 13:02:55.760891153 +0000 UTC m=+0.039614770 container create 97a609f147496f9c778dbad4cb52323b55902a992e488fdde1e75c46c8224723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_goldberg, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 13:02:55 compute-0 systemd[1]: Started libpod-conmon-97a609f147496f9c778dbad4cb52323b55902a992e488fdde1e75c46c8224723.scope.
Oct 02 13:02:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:02:55 compute-0 podman[389734]: 2025-10-02 13:02:55.743135684 +0000 UTC m=+0.021859301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:02:55 compute-0 podman[389734]: 2025-10-02 13:02:55.845464499 +0000 UTC m=+0.124188116 container init 97a609f147496f9c778dbad4cb52323b55902a992e488fdde1e75c46c8224723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:02:55 compute-0 podman[389734]: 2025-10-02 13:02:55.853645487 +0000 UTC m=+0.132369104 container start 97a609f147496f9c778dbad4cb52323b55902a992e488fdde1e75c46c8224723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:02:55 compute-0 podman[389734]: 2025-10-02 13:02:55.856853834 +0000 UTC m=+0.135577481 container attach 97a609f147496f9c778dbad4cb52323b55902a992e488fdde1e75c46c8224723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_goldberg, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:02:55 compute-0 brave_goldberg[389750]: 167 167
Oct 02 13:02:55 compute-0 systemd[1]: libpod-97a609f147496f9c778dbad4cb52323b55902a992e488fdde1e75c46c8224723.scope: Deactivated successfully.
Oct 02 13:02:55 compute-0 conmon[389750]: conmon 97a609f147496f9c778d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-97a609f147496f9c778dbad4cb52323b55902a992e488fdde1e75c46c8224723.scope/container/memory.events
Oct 02 13:02:55 compute-0 podman[389734]: 2025-10-02 13:02:55.8628675 +0000 UTC m=+0.141591137 container died 97a609f147496f9c778dbad4cb52323b55902a992e488fdde1e75c46c8224723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_goldberg, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 13:02:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3532419f74c8901dc8d071fbec5b200b7c2146a61fbe58991642632873af8b9-merged.mount: Deactivated successfully.
Oct 02 13:02:55 compute-0 podman[389734]: 2025-10-02 13:02:55.908051524 +0000 UTC m=+0.186775141 container remove 97a609f147496f9c778dbad4cb52323b55902a992e488fdde1e75c46c8224723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_goldberg, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:02:55 compute-0 systemd[1]: libpod-conmon-97a609f147496f9c778dbad4cb52323b55902a992e488fdde1e75c46c8224723.scope: Deactivated successfully.
Oct 02 13:02:56 compute-0 podman[389776]: 2025-10-02 13:02:56.072346058 +0000 UTC m=+0.040015020 container create a400baa8e5d79c063fdd153a600f8672c6ca906f11aed9ef51935d4ae6bd92e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_heyrovsky, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 13:02:56 compute-0 systemd[1]: Started libpod-conmon-a400baa8e5d79c063fdd153a600f8672c6ca906f11aed9ef51935d4ae6bd92e3.scope.
Oct 02 13:02:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:02:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35f987db4bb8868585f07180cc18bfe7840a8869db87c98eaaf045924bde68cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:02:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35f987db4bb8868585f07180cc18bfe7840a8869db87c98eaaf045924bde68cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:02:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35f987db4bb8868585f07180cc18bfe7840a8869db87c98eaaf045924bde68cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:02:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35f987db4bb8868585f07180cc18bfe7840a8869db87c98eaaf045924bde68cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:02:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35f987db4bb8868585f07180cc18bfe7840a8869db87c98eaaf045924bde68cd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:02:56 compute-0 podman[389776]: 2025-10-02 13:02:56.14721675 +0000 UTC m=+0.114885732 container init a400baa8e5d79c063fdd153a600f8672c6ca906f11aed9ef51935d4ae6bd92e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_heyrovsky, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:02:56 compute-0 podman[389776]: 2025-10-02 13:02:56.055691695 +0000 UTC m=+0.023360677 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:02:56 compute-0 podman[389776]: 2025-10-02 13:02:56.158719068 +0000 UTC m=+0.126388030 container start a400baa8e5d79c063fdd153a600f8672c6ca906f11aed9ef51935d4ae6bd92e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:02:56 compute-0 podman[389776]: 2025-10-02 13:02:56.164347634 +0000 UTC m=+0.132016616 container attach a400baa8e5d79c063fdd153a600f8672c6ca906f11aed9ef51935d4ae6bd92e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_heyrovsky, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:02:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3128: 305 pgs: 305 active+clean; 359 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 92 op/s
Oct 02 13:02:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:56.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:56 compute-0 condescending_heyrovsky[389793]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:02:56 compute-0 condescending_heyrovsky[389793]: --> relative data size: 1.0
Oct 02 13:02:56 compute-0 condescending_heyrovsky[389793]: --> All data devices are unavailable
Oct 02 13:02:57 compute-0 systemd[1]: libpod-a400baa8e5d79c063fdd153a600f8672c6ca906f11aed9ef51935d4ae6bd92e3.scope: Deactivated successfully.
Oct 02 13:02:57 compute-0 podman[389776]: 2025-10-02 13:02:57.026287116 +0000 UTC m=+0.993956078 container died a400baa8e5d79c063fdd153a600f8672c6ca906f11aed9ef51935d4ae6bd92e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_heyrovsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 13:02:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-35f987db4bb8868585f07180cc18bfe7840a8869db87c98eaaf045924bde68cd-merged.mount: Deactivated successfully.
Oct 02 13:02:57 compute-0 podman[389776]: 2025-10-02 13:02:57.115779341 +0000 UTC m=+1.083448293 container remove a400baa8e5d79c063fdd153a600f8672c6ca906f11aed9ef51935d4ae6bd92e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_heyrovsky, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:02:57 compute-0 systemd[1]: libpod-conmon-a400baa8e5d79c063fdd153a600f8672c6ca906f11aed9ef51935d4ae6bd92e3.scope: Deactivated successfully.
Oct 02 13:02:57 compute-0 sudo[389668]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:57 compute-0 sudo[389820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:57 compute-0 sudo[389820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:57 compute-0 sudo[389820]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:57.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:57 compute-0 sudo[389845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:02:57 compute-0 sudo[389845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:57 compute-0 sudo[389845]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:57 compute-0 sudo[389870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:57 compute-0 sudo[389870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:57 compute-0 sudo[389870]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:57 compute-0 sudo[389895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 13:02:57 compute-0 sudo[389895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:57 compute-0 ceph-mon[73607]: pgmap v3128: 305 pgs: 305 active+clean; 359 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 92 op/s
Oct 02 13:02:57 compute-0 podman[389960]: 2025-10-02 13:02:57.726907336 +0000 UTC m=+0.037841366 container create bce1f54130c2fef5590f25df18cd7bd382c43147464d653c96fe3866e97e12b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_pike, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:02:57 compute-0 systemd[1]: Started libpod-conmon-bce1f54130c2fef5590f25df18cd7bd382c43147464d653c96fe3866e97e12b2.scope.
Oct 02 13:02:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:02:57 compute-0 podman[389960]: 2025-10-02 13:02:57.712546019 +0000 UTC m=+0.023480089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:02:57 compute-0 podman[389960]: 2025-10-02 13:02:57.808363967 +0000 UTC m=+0.119298027 container init bce1f54130c2fef5590f25df18cd7bd382c43147464d653c96fe3866e97e12b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_pike, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 13:02:57 compute-0 podman[389960]: 2025-10-02 13:02:57.816197697 +0000 UTC m=+0.127131737 container start bce1f54130c2fef5590f25df18cd7bd382c43147464d653c96fe3866e97e12b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_pike, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:02:57 compute-0 podman[389960]: 2025-10-02 13:02:57.819517027 +0000 UTC m=+0.130451087 container attach bce1f54130c2fef5590f25df18cd7bd382c43147464d653c96fe3866e97e12b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_pike, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:02:57 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 13:02:57 compute-0 beautiful_pike[389977]: 167 167
Oct 02 13:02:57 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 13:02:57 compute-0 systemd[1]: libpod-bce1f54130c2fef5590f25df18cd7bd382c43147464d653c96fe3866e97e12b2.scope: Deactivated successfully.
Oct 02 13:02:57 compute-0 podman[389960]: 2025-10-02 13:02:57.822368876 +0000 UTC m=+0.133302916 container died bce1f54130c2fef5590f25df18cd7bd382c43147464d653c96fe3866e97e12b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_pike, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:02:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-562d456fab605d13d135d057a4a4a109e1119fa23c5432fcd07260c595bd1754-merged.mount: Deactivated successfully.
Oct 02 13:02:57 compute-0 podman[389960]: 2025-10-02 13:02:57.86306122 +0000 UTC m=+0.173995260 container remove bce1f54130c2fef5590f25df18cd7bd382c43147464d653c96fe3866e97e12b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 13:02:57 compute-0 systemd[1]: libpod-conmon-bce1f54130c2fef5590f25df18cd7bd382c43147464d653c96fe3866e97e12b2.scope: Deactivated successfully.
Oct 02 13:02:58 compute-0 podman[390002]: 2025-10-02 13:02:58.025482419 +0000 UTC m=+0.044242831 container create 8c76b91ec3a3898c84601a4935cde11ddbb731bdc5c45951a6db9b270734443f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 13:02:58 compute-0 systemd[1]: Started libpod-conmon-8c76b91ec3a3898c84601a4935cde11ddbb731bdc5c45951a6db9b270734443f.scope.
Oct 02 13:02:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:02:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0cb98de804dfd6981d698a8491d3b1b9d583da54adb9ad2a76ed8fc4afefae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:02:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0cb98de804dfd6981d698a8491d3b1b9d583da54adb9ad2a76ed8fc4afefae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:02:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0cb98de804dfd6981d698a8491d3b1b9d583da54adb9ad2a76ed8fc4afefae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:02:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0cb98de804dfd6981d698a8491d3b1b9d583da54adb9ad2a76ed8fc4afefae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:02:58 compute-0 podman[390002]: 2025-10-02 13:02:58.006796838 +0000 UTC m=+0.025557270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:02:58 compute-0 podman[390002]: 2025-10-02 13:02:58.109244646 +0000 UTC m=+0.128005078 container init 8c76b91ec3a3898c84601a4935cde11ddbb731bdc5c45951a6db9b270734443f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_golick, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:02:58 compute-0 podman[390002]: 2025-10-02 13:02:58.116982933 +0000 UTC m=+0.135743345 container start 8c76b91ec3a3898c84601a4935cde11ddbb731bdc5c45951a6db9b270734443f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_golick, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 13:02:58 compute-0 podman[390002]: 2025-10-02 13:02:58.1201412 +0000 UTC m=+0.138901632 container attach 8c76b91ec3a3898c84601a4935cde11ddbb731bdc5c45951a6db9b270734443f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_golick, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 13:02:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3129: 305 pgs: 305 active+clean; 359 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 92 op/s
Oct 02 13:02:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:02:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 60K writes, 236K keys, 60K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.04 MB/s
                                           Cumulative WAL: 60K writes, 21K syncs, 2.77 writes per sync, written: 0.22 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4972 writes, 18K keys, 4972 commit groups, 1.0 writes per commit group, ingest: 17.15 MB, 0.03 MB/s
                                           Interval WAL: 4972 writes, 2044 syncs, 2.43 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 13:02:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:58.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:58 compute-0 great_golick[390019]: {
Oct 02 13:02:58 compute-0 great_golick[390019]:     "1": [
Oct 02 13:02:58 compute-0 great_golick[390019]:         {
Oct 02 13:02:58 compute-0 great_golick[390019]:             "devices": [
Oct 02 13:02:58 compute-0 great_golick[390019]:                 "/dev/loop3"
Oct 02 13:02:58 compute-0 great_golick[390019]:             ],
Oct 02 13:02:58 compute-0 great_golick[390019]:             "lv_name": "ceph_lv0",
Oct 02 13:02:58 compute-0 great_golick[390019]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:02:58 compute-0 great_golick[390019]:             "lv_size": "7511998464",
Oct 02 13:02:58 compute-0 great_golick[390019]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:02:58 compute-0 great_golick[390019]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:02:58 compute-0 great_golick[390019]:             "name": "ceph_lv0",
Oct 02 13:02:58 compute-0 great_golick[390019]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:02:58 compute-0 great_golick[390019]:             "tags": {
Oct 02 13:02:58 compute-0 great_golick[390019]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:02:58 compute-0 great_golick[390019]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:02:58 compute-0 great_golick[390019]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:02:58 compute-0 great_golick[390019]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:02:58 compute-0 great_golick[390019]:                 "ceph.cluster_name": "ceph",
Oct 02 13:02:58 compute-0 great_golick[390019]:                 "ceph.crush_device_class": "",
Oct 02 13:02:58 compute-0 great_golick[390019]:                 "ceph.encrypted": "0",
Oct 02 13:02:58 compute-0 great_golick[390019]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:02:58 compute-0 great_golick[390019]:                 "ceph.osd_id": "1",
Oct 02 13:02:58 compute-0 great_golick[390019]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:02:58 compute-0 great_golick[390019]:                 "ceph.type": "block",
Oct 02 13:02:58 compute-0 great_golick[390019]:                 "ceph.vdo": "0"
Oct 02 13:02:58 compute-0 great_golick[390019]:             },
Oct 02 13:02:58 compute-0 great_golick[390019]:             "type": "block",
Oct 02 13:02:58 compute-0 great_golick[390019]:             "vg_name": "ceph_vg0"
Oct 02 13:02:58 compute-0 great_golick[390019]:         }
Oct 02 13:02:58 compute-0 great_golick[390019]:     ]
Oct 02 13:02:58 compute-0 great_golick[390019]: }
Oct 02 13:02:58 compute-0 systemd[1]: libpod-8c76b91ec3a3898c84601a4935cde11ddbb731bdc5c45951a6db9b270734443f.scope: Deactivated successfully.
Oct 02 13:02:58 compute-0 podman[390002]: 2025-10-02 13:02:58.911492374 +0000 UTC m=+0.930252796 container died 8c76b91ec3a3898c84601a4935cde11ddbb731bdc5c45951a6db9b270734443f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 13:02:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-be0cb98de804dfd6981d698a8491d3b1b9d583da54adb9ad2a76ed8fc4afefae-merged.mount: Deactivated successfully.
Oct 02 13:02:58 compute-0 podman[390002]: 2025-10-02 13:02:58.970120373 +0000 UTC m=+0.988880775 container remove 8c76b91ec3a3898c84601a4935cde11ddbb731bdc5c45951a6db9b270734443f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_golick, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:02:58 compute-0 systemd[1]: libpod-conmon-8c76b91ec3a3898c84601a4935cde11ddbb731bdc5c45951a6db9b270734443f.scope: Deactivated successfully.
Oct 02 13:02:59 compute-0 sudo[389895]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:59 compute-0 sudo[390043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:59 compute-0 sudo[390043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:59 compute-0 sudo[390043]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:59 compute-0 sudo[390068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:02:59 compute-0 sudo[390068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:59 compute-0 sudo[390068]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:59 compute-0 sudo[390093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:59 compute-0 sudo[390093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:59 compute-0 sudo[390093]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:02:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:59.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:59 compute-0 sudo[390118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 13:02:59 compute-0 sudo[390118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:59 compute-0 podman[390184]: 2025-10-02 13:02:59.550243467 +0000 UTC m=+0.035130970 container create e0fb9db2f0a7b724e24076d44c6faedd8bf41b8e30ff19b56c7f92fafc4ecc8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 13:02:59 compute-0 ceph-mon[73607]: pgmap v3129: 305 pgs: 305 active+clean; 359 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 92 op/s
Oct 02 13:02:59 compute-0 systemd[1]: Started libpod-conmon-e0fb9db2f0a7b724e24076d44c6faedd8bf41b8e30ff19b56c7f92fafc4ecc8a.scope.
Oct 02 13:02:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:02:59 compute-0 podman[390184]: 2025-10-02 13:02:59.627608659 +0000 UTC m=+0.112496152 container init e0fb9db2f0a7b724e24076d44c6faedd8bf41b8e30ff19b56c7f92fafc4ecc8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_blackburn, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 13:02:59 compute-0 podman[390184]: 2025-10-02 13:02:59.536168147 +0000 UTC m=+0.021055670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:02:59 compute-0 podman[390184]: 2025-10-02 13:02:59.633205204 +0000 UTC m=+0.118092697 container start e0fb9db2f0a7b724e24076d44c6faedd8bf41b8e30ff19b56c7f92fafc4ecc8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 13:02:59 compute-0 podman[390184]: 2025-10-02 13:02:59.63673605 +0000 UTC m=+0.121623553 container attach e0fb9db2f0a7b724e24076d44c6faedd8bf41b8e30ff19b56c7f92fafc4ecc8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_blackburn, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:02:59 compute-0 frosty_blackburn[390200]: 167 167
Oct 02 13:02:59 compute-0 systemd[1]: libpod-e0fb9db2f0a7b724e24076d44c6faedd8bf41b8e30ff19b56c7f92fafc4ecc8a.scope: Deactivated successfully.
Oct 02 13:02:59 compute-0 podman[390184]: 2025-10-02 13:02:59.639428955 +0000 UTC m=+0.124316458 container died e0fb9db2f0a7b724e24076d44c6faedd8bf41b8e30ff19b56c7f92fafc4ecc8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:02:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5b9bc7ccb202d7360f5d8251dd944912b95d2b02b38ab843560f8dd7a3132d7-merged.mount: Deactivated successfully.
Oct 02 13:02:59 compute-0 podman[390184]: 2025-10-02 13:02:59.676538953 +0000 UTC m=+0.161426446 container remove e0fb9db2f0a7b724e24076d44c6faedd8bf41b8e30ff19b56c7f92fafc4ecc8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_blackburn, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 13:02:59 compute-0 systemd[1]: libpod-conmon-e0fb9db2f0a7b724e24076d44c6faedd8bf41b8e30ff19b56c7f92fafc4ecc8a.scope: Deactivated successfully.
Oct 02 13:02:59 compute-0 podman[390224]: 2025-10-02 13:02:59.843156534 +0000 UTC m=+0.040600014 container create 67bc27903bbc55f3c425c44da7278496b9d8705f8b837f14cbf45c7c7857359c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wiles, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:02:59 compute-0 systemd[1]: Started libpod-conmon-67bc27903bbc55f3c425c44da7278496b9d8705f8b837f14cbf45c7c7857359c.scope.
Oct 02 13:02:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:02:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e492fc730209477cd740b43871aa15662a04bf56fb432b93aefdbc5b6df1314/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:02:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e492fc730209477cd740b43871aa15662a04bf56fb432b93aefdbc5b6df1314/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:02:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e492fc730209477cd740b43871aa15662a04bf56fb432b93aefdbc5b6df1314/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:02:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e492fc730209477cd740b43871aa15662a04bf56fb432b93aefdbc5b6df1314/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:02:59 compute-0 podman[390224]: 2025-10-02 13:02:59.915410932 +0000 UTC m=+0.112854432 container init 67bc27903bbc55f3c425c44da7278496b9d8705f8b837f14cbf45c7c7857359c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:02:59 compute-0 podman[390224]: 2025-10-02 13:02:59.825053935 +0000 UTC m=+0.022497435 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:02:59 compute-0 podman[390224]: 2025-10-02 13:02:59.921758695 +0000 UTC m=+0.119202175 container start 67bc27903bbc55f3c425c44da7278496b9d8705f8b837f14cbf45c7c7857359c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 13:02:59 compute-0 podman[390224]: 2025-10-02 13:02:59.924656545 +0000 UTC m=+0.122100045 container attach 67bc27903bbc55f3c425c44da7278496b9d8705f8b837f14cbf45c7c7857359c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wiles, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:03:00 compute-0 nova_compute[257802]: 2025-10-02 13:03:00.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:00 compute-0 nova_compute[257802]: 2025-10-02 13:03:00.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3130: 305 pgs: 305 active+clean; 372 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 02 13:03:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:03:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:00.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:03:00 compute-0 youthful_wiles[390240]: {
Oct 02 13:03:00 compute-0 youthful_wiles[390240]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 13:03:00 compute-0 youthful_wiles[390240]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:03:00 compute-0 youthful_wiles[390240]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:03:00 compute-0 youthful_wiles[390240]:         "osd_id": 1,
Oct 02 13:03:00 compute-0 youthful_wiles[390240]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:03:00 compute-0 youthful_wiles[390240]:         "type": "bluestore"
Oct 02 13:03:00 compute-0 youthful_wiles[390240]:     }
Oct 02 13:03:00 compute-0 youthful_wiles[390240]: }
Oct 02 13:03:00 compute-0 systemd[1]: libpod-67bc27903bbc55f3c425c44da7278496b9d8705f8b837f14cbf45c7c7857359c.scope: Deactivated successfully.
Oct 02 13:03:00 compute-0 podman[390224]: 2025-10-02 13:03:00.763351535 +0000 UTC m=+0.960795025 container died 67bc27903bbc55f3c425c44da7278496b9d8705f8b837f14cbf45c7c7857359c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wiles, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:03:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e492fc730209477cd740b43871aa15662a04bf56fb432b93aefdbc5b6df1314-merged.mount: Deactivated successfully.
Oct 02 13:03:00 compute-0 podman[390224]: 2025-10-02 13:03:00.814059753 +0000 UTC m=+1.011503233 container remove 67bc27903bbc55f3c425c44da7278496b9d8705f8b837f14cbf45c7c7857359c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:03:00 compute-0 systemd[1]: libpod-conmon-67bc27903bbc55f3c425c44da7278496b9d8705f8b837f14cbf45c7c7857359c.scope: Deactivated successfully.
Oct 02 13:03:00 compute-0 sudo[390118]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:03:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:03:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:03:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:03:00 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev dcc92cb0-d172-425a-a45f-155f579f399b does not exist
Oct 02 13:03:00 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 88d60344-5b65-43bb-b404-d1a522b7f7c3 does not exist
Oct 02 13:03:00 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev a5d2aa2e-fe2f-4518-b350-f9b89e0e1132 does not exist
Oct 02 13:03:00 compute-0 sudo[390273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:03:00 compute-0 sudo[390273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:00 compute-0 sudo[390273]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:01 compute-0 sudo[390298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:03:01 compute-0 sudo[390298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:01 compute-0 sudo[390298]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:03:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:01.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:03:01 compute-0 sudo[390323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:03:01 compute-0 sudo[390323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:01 compute-0 sudo[390323]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:01 compute-0 sudo[390348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:03:01 compute-0 sudo[390348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:01 compute-0 sudo[390348]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:01 compute-0 ceph-mon[73607]: pgmap v3130: 305 pgs: 305 active+clean; 372 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 02 13:03:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:03:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:03:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3131: 305 pgs: 305 active+clean; 372 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 95 op/s
Oct 02 13:03:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4130636126' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:03:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1682172624' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:03:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:02.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:02 compute-0 podman[390374]: 2025-10-02 13:03:02.92987534 +0000 UTC m=+0.062089714 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 13:03:02 compute-0 podman[390376]: 2025-10-02 13:03:02.933713053 +0000 UTC m=+0.061416897 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:03:02 compute-0 podman[390375]: 2025-10-02 13:03:02.964076347 +0000 UTC m=+0.092708304 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:03:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:03:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:03.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:03:03 compute-0 ceph-mon[73607]: pgmap v3131: 305 pgs: 305 active+clean; 372 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 95 op/s
Oct 02 13:03:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3132: 305 pgs: 305 active+clean; 375 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 109 op/s
Oct 02 13:03:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:04.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:05 compute-0 nova_compute[257802]: 2025-10-02 13:03:05.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:05 compute-0 nova_compute[257802]: 2025-10-02 13:03:05.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:05.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:05 compute-0 ceph-mon[73607]: pgmap v3132: 305 pgs: 305 active+clean; 375 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 109 op/s
Oct 02 13:03:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3133: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 149 op/s
Oct 02 13:03:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:06.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:06 compute-0 ceph-mgr[73901]: [devicehealth INFO root] Check health
Oct 02 13:03:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:07.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:07 compute-0 ceph-mon[73607]: pgmap v3133: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 149 op/s
Oct 02 13:03:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3134: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 331 KiB/s rd, 2.9 MiB/s wr, 72 op/s
Oct 02 13:03:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:08.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:03:08.720 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=74, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=73) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:03:08 compute-0 nova_compute[257802]: 2025-10-02 13:03:08.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:03:08.721 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:03:08 compute-0 ceph-mon[73607]: pgmap v3134: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 331 KiB/s rd, 2.9 MiB/s wr, 72 op/s
Oct 02 13:03:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:09.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:10 compute-0 nova_compute[257802]: 2025-10-02 13:03:10.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:10 compute-0 nova_compute[257802]: 2025-10-02 13:03:10.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3135: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 147 op/s
Oct 02 13:03:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:10.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:10 compute-0 podman[390434]: 2025-10-02 13:03:10.931611153 +0000 UTC m=+0.076539893 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:03:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:11.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:11 compute-0 ceph-mon[73607]: pgmap v3135: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 147 op/s
Oct 02 13:03:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3136: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Oct 02 13:03:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:03:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:12.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:03:12 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:03:12.722 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '74'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:03:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:03:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:03:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:03:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:03:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:03:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:03:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:13.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:13 compute-0 ceph-mon[73607]: pgmap v3136: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Oct 02 13:03:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3137: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Oct 02 13:03:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:14.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:15 compute-0 nova_compute[257802]: 2025-10-02 13:03:15.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:15 compute-0 nova_compute[257802]: 2025-10-02 13:03:15.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:15.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:15 compute-0 ceph-mon[73607]: pgmap v3137: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Oct 02 13:03:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3138: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 125 op/s
Oct 02 13:03:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:16.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:17.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:17 compute-0 ceph-mon[73607]: pgmap v3138: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 125 op/s
Oct 02 13:03:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3139: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 34 KiB/s wr, 76 op/s
Oct 02 13:03:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:18.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1156892007' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:19 compute-0 nova_compute[257802]: 2025-10-02 13:03:19.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:03:19 compute-0 nova_compute[257802]: 2025-10-02 13:03:19.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:03:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:19.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:20 compute-0 nova_compute[257802]: 2025-10-02 13:03:20.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:20 compute-0 ceph-mon[73607]: pgmap v3139: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 34 KiB/s wr, 76 op/s
Oct 02 13:03:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2633219503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:20 compute-0 nova_compute[257802]: 2025-10-02 13:03:20.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3140: 305 pgs: 305 active+clean; 409 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 678 KiB/s wr, 84 op/s
Oct 02 13:03:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:20.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:21 compute-0 ceph-mon[73607]: pgmap v3140: 305 pgs: 305 active+clean; 409 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 678 KiB/s wr, 84 op/s
Oct 02 13:03:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:21.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:21 compute-0 sudo[390466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:03:21 compute-0 sudo[390466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:21 compute-0 sudo[390466]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:21 compute-0 sudo[390491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:03:21 compute-0 sudo[390491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:21 compute-0 sudo[390491]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3141: 305 pgs: 305 active+clean; 409 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 655 KiB/s wr, 9 op/s
Oct 02 13:03:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:03:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:22.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:03:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:23.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:23 compute-0 ceph-mon[73607]: pgmap v3141: 305 pgs: 305 active+clean; 409 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 655 KiB/s wr, 9 op/s
Oct 02 13:03:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3142: 305 pgs: 305 active+clean; 414 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 104 KiB/s rd, 977 KiB/s wr, 23 op/s
Oct 02 13:03:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:24.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:25 compute-0 nova_compute[257802]: 2025-10-02 13:03:25.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:25 compute-0 nova_compute[257802]: 2025-10-02 13:03:25.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:25.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:25 compute-0 ceph-mon[73607]: pgmap v3142: 305 pgs: 305 active+clean; 414 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 104 KiB/s rd, 977 KiB/s wr, 23 op/s
Oct 02 13:03:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3143: 305 pgs: 305 active+clean; 436 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 351 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 13:03:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:03:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:26.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:03:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:03:26.986 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:03:26.986 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:03:26.987 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:27 compute-0 ceph-mon[73607]: pgmap v3143: 305 pgs: 305 active+clean; 436 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 351 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 13:03:27 compute-0 nova_compute[257802]: 2025-10-02 13:03:27.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:03:27 compute-0 nova_compute[257802]: 2025-10-02 13:03:27.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:03:27 compute-0 nova_compute[257802]: 2025-10-02 13:03:27.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:03:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:27.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3144: 305 pgs: 305 active+clean; 436 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 13:03:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:28.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:29.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:29 compute-0 ceph-mon[73607]: pgmap v3144: 305 pgs: 305 active+clean; 436 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 13:03:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:03:29.671490) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410209671527, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 1268, "num_deletes": 256, "total_data_size": 2082765, "memory_usage": 2118472, "flush_reason": "Manual Compaction"}
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410209682534, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 2036658, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68444, "largest_seqno": 69711, "table_properties": {"data_size": 2030704, "index_size": 3220, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12767, "raw_average_key_size": 19, "raw_value_size": 2018706, "raw_average_value_size": 3115, "num_data_blocks": 142, "num_entries": 648, "num_filter_entries": 648, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410096, "oldest_key_time": 1759410096, "file_creation_time": 1759410209, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 11076 microseconds, and 4768 cpu microseconds.
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:03:29.682566) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 2036658 bytes OK
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:03:29.682582) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:03:29.684046) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:03:29.684059) EVENT_LOG_v1 {"time_micros": 1759410209684055, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:03:29.684076) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 2077173, prev total WAL file size 2077173, number of live WAL files 2.
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:03:29.684740) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373539' seq:72057594037927935, type:22 .. '6C6F676D0033303131' seq:0, type:0; will stop at (end)
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(1988KB)], [155(11MB)]
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410209684789, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 14106081, "oldest_snapshot_seqno": -1}
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 9676 keys, 13960386 bytes, temperature: kUnknown
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410209761097, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 13960386, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13895919, "index_size": 39202, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24197, "raw_key_size": 254949, "raw_average_key_size": 26, "raw_value_size": 13724323, "raw_average_value_size": 1418, "num_data_blocks": 1504, "num_entries": 9676, "num_filter_entries": 9676, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759410209, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:03:29.761337) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 13960386 bytes
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:03:29.763428) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 184.7 rd, 182.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 11.5 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(13.8) write-amplify(6.9) OK, records in: 10203, records dropped: 527 output_compression: NoCompression
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:03:29.763448) EVENT_LOG_v1 {"time_micros": 1759410209763438, "job": 96, "event": "compaction_finished", "compaction_time_micros": 76371, "compaction_time_cpu_micros": 33671, "output_level": 6, "num_output_files": 1, "total_output_size": 13960386, "num_input_records": 10203, "num_output_records": 9676, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410209763816, "job": 96, "event": "table_file_deletion", "file_number": 157}
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410209765946, "job": 96, "event": "table_file_deletion", "file_number": 155}
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:03:29.684625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:03:29.766016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:03:29.766021) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:03:29.766023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:03:29.766025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:03:29 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:03:29.766027) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:03:30 compute-0 nova_compute[257802]: 2025-10-02 13:03:30.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:30 compute-0 nova_compute[257802]: 2025-10-02 13:03:30.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3145: 305 pgs: 305 active+clean; 359 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 371 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Oct 02 13:03:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:30.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:31.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:31 compute-0 ceph-mon[73607]: pgmap v3145: 305 pgs: 305 active+clean; 359 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 371 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Oct 02 13:03:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3146: 305 pgs: 305 active+clean; 359 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 359 KiB/s rd, 1.5 MiB/s wr, 87 op/s
Oct 02 13:03:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:32.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/612150870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:33 compute-0 nova_compute[257802]: 2025-10-02 13:03:33.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:03:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:33.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:33 compute-0 podman[390522]: 2025-10-02 13:03:33.919454009 +0000 UTC m=+0.054058299 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 13:03:33 compute-0 podman[390524]: 2025-10-02 13:03:33.921378305 +0000 UTC m=+0.052676415 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:03:33 compute-0 podman[390523]: 2025-10-02 13:03:33.922409781 +0000 UTC m=+0.053726831 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=multipathd)
Oct 02 13:03:34 compute-0 ceph-mon[73607]: pgmap v3146: 305 pgs: 305 active+clean; 359 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 359 KiB/s rd, 1.5 MiB/s wr, 87 op/s
Oct 02 13:03:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3147: 305 pgs: 305 active+clean; 359 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 359 KiB/s rd, 1.5 MiB/s wr, 87 op/s
Oct 02 13:03:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:34.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:35 compute-0 nova_compute[257802]: 2025-10-02 13:03:35.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:35 compute-0 nova_compute[257802]: 2025-10-02 13:03:35.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:35 compute-0 ceph-mon[73607]: pgmap v3147: 305 pgs: 305 active+clean; 359 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 359 KiB/s rd, 1.5 MiB/s wr, 87 op/s
Oct 02 13:03:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:35.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3148: 305 pgs: 305 active+clean; 359 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 273 KiB/s rd, 1.2 MiB/s wr, 73 op/s
Oct 02 13:03:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3504288442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:36.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:37 compute-0 nova_compute[257802]: 2025-10-02 13:03:37.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:03:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:37.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:37 compute-0 ceph-mon[73607]: pgmap v3148: 305 pgs: 305 active+clean; 359 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 273 KiB/s rd, 1.2 MiB/s wr, 73 op/s
Oct 02 13:03:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1883656776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3149: 305 pgs: 305 active+clean; 359 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 25 KiB/s wr, 31 op/s
Oct 02 13:03:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:38.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:39.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:39 compute-0 ceph-mon[73607]: pgmap v3149: 305 pgs: 305 active+clean; 359 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 25 KiB/s wr, 31 op/s
Oct 02 13:03:40 compute-0 nova_compute[257802]: 2025-10-02 13:03:40.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:40 compute-0 nova_compute[257802]: 2025-10-02 13:03:40.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3150: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 27 KiB/s wr, 58 op/s
Oct 02 13:03:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:40.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:41 compute-0 nova_compute[257802]: 2025-10-02 13:03:41.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:03:41 compute-0 nova_compute[257802]: 2025-10-02 13:03:41.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:03:41 compute-0 nova_compute[257802]: 2025-10-02 13:03:41.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:03:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:41.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:41 compute-0 nova_compute[257802]: 2025-10-02 13:03:41.327 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-08b29362-e7c1-450f-bf22-95d23c21ff23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:03:41 compute-0 nova_compute[257802]: 2025-10-02 13:03:41.328 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-08b29362-e7c1-450f-bf22-95d23c21ff23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:03:41 compute-0 nova_compute[257802]: 2025-10-02 13:03:41.328 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:03:41 compute-0 nova_compute[257802]: 2025-10-02 13:03:41.328 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid 08b29362-e7c1-450f-bf22-95d23c21ff23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:03:41 compute-0 sudo[390583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:03:41 compute-0 sudo[390583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:41 compute-0 sudo[390583]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:41 compute-0 ceph-mon[73607]: pgmap v3150: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 27 KiB/s wr, 58 op/s
Oct 02 13:03:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2773844659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:41 compute-0 sudo[390622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:03:41 compute-0 sudo[390622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:41 compute-0 sudo[390622]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:41 compute-0 podman[390582]: 2025-10-02 13:03:41.962609035 +0000 UTC m=+0.092079869 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 02 13:03:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3151: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Oct 02 13:03:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:03:42
Oct 02 13:03:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:03:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:03:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['volumes', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', '.rgw.root']
Oct 02 13:03:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:03:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:03:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:03:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:03:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:03:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:03:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:03:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:03:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:42.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:03:43 compute-0 ceph-mon[73607]: pgmap v3151: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Oct 02 13:03:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:03:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:43.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:03:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:03:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:03:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:03:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:03:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:03:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:03:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:03:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:03:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:03:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:03:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3152: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Oct 02 13:03:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:03:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:44.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:03:45 compute-0 nova_compute[257802]: 2025-10-02 13:03:45.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:45 compute-0 nova_compute[257802]: 2025-10-02 13:03:45.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:45.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:45 compute-0 nova_compute[257802]: 2025-10-02 13:03:45.494 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Updating instance_info_cache with network_info: [{"id": "fa0139f5-ff5d-45c1-9873-48b0a33759d5", "address": "fa:16:3e:07:91:1b", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa0139f5-ff", "ovs_interfaceid": "fa0139f5-ff5d-45c1-9873-48b0a33759d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:03:45 compute-0 ceph-mon[73607]: pgmap v3152: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Oct 02 13:03:45 compute-0 nova_compute[257802]: 2025-10-02 13:03:45.593 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-08b29362-e7c1-450f-bf22-95d23c21ff23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:03:45 compute-0 nova_compute[257802]: 2025-10-02 13:03:45.594 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:03:45 compute-0 nova_compute[257802]: 2025-10-02 13:03:45.594 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:03:45 compute-0 nova_compute[257802]: 2025-10-02 13:03:45.644 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:45 compute-0 nova_compute[257802]: 2025-10-02 13:03:45.644 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:45 compute-0 nova_compute[257802]: 2025-10-02 13:03:45.644 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:45 compute-0 nova_compute[257802]: 2025-10-02 13:03:45.645 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:03:45 compute-0 nova_compute[257802]: 2025-10-02 13:03:45.645 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:03:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:03:46 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2501904323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:46 compute-0 nova_compute[257802]: 2025-10-02 13:03:46.097 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:03:46 compute-0 nova_compute[257802]: 2025-10-02 13:03:46.190 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000c7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:03:46 compute-0 nova_compute[257802]: 2025-10-02 13:03:46.191 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000c7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:03:46 compute-0 nova_compute[257802]: 2025-10-02 13:03:46.191 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000c7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:03:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:03:46.270 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=75, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=74) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:03:46 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:03:46.270 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:03:46 compute-0 nova_compute[257802]: 2025-10-02 13:03:46.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:46 compute-0 nova_compute[257802]: 2025-10-02 13:03:46.408 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:03:46 compute-0 nova_compute[257802]: 2025-10-02 13:03:46.409 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4053MB free_disk=20.897136688232422GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:03:46 compute-0 nova_compute[257802]: 2025-10-02 13:03:46.409 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:46 compute-0 nova_compute[257802]: 2025-10-02 13:03:46.409 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:46 compute-0 nova_compute[257802]: 2025-10-02 13:03:46.501 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 08b29362-e7c1-450f-bf22-95d23c21ff23 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:03:46 compute-0 nova_compute[257802]: 2025-10-02 13:03:46.501 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:03:46 compute-0 nova_compute[257802]: 2025-10-02 13:03:46.501 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:03:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3153: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 15 KiB/s wr, 28 op/s
Oct 02 13:03:46 compute-0 nova_compute[257802]: 2025-10-02 13:03:46.551 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:03:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2501904323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:46.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:03:46 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4198847553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:47 compute-0 nova_compute[257802]: 2025-10-02 13:03:47.001 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:03:47 compute-0 nova_compute[257802]: 2025-10-02 13:03:47.009 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:03:47 compute-0 nova_compute[257802]: 2025-10-02 13:03:47.036 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:03:47 compute-0 nova_compute[257802]: 2025-10-02 13:03:47.083 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:03:47 compute-0 nova_compute[257802]: 2025-10-02 13:03:47.083 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:47.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:47 compute-0 ceph-mon[73607]: pgmap v3153: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 15 KiB/s wr, 28 op/s
Oct 02 13:03:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4198847553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3154: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 02 13:03:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:48.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:03:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:49.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:03:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:49 compute-0 ceph-mon[73607]: pgmap v3154: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 02 13:03:50 compute-0 nova_compute[257802]: 2025-10-02 13:03:50.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:50 compute-0 nova_compute[257802]: 2025-10-02 13:03:50.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:50 compute-0 ovn_controller[148183]: 2025-10-02T13:03:50Z|00920|binding|INFO|Releasing lport c5388d11-12a4-491d-825a-d4dc574d0a0e from this chassis (sb_readonly=0)
Oct 02 13:03:50 compute-0 nova_compute[257802]: 2025-10-02 13:03:50.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3155: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Oct 02 13:03:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:50.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/56588911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:51.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:51 compute-0 ceph-mon[73607]: pgmap v3155: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Oct 02 13:03:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3156: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Oct 02 13:03:52 compute-0 nova_compute[257802]: 2025-10-02 13:03:52.587 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:03:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:03:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:52.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:03:52 compute-0 ceph-mon[73607]: pgmap v3156: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Oct 02 13:03:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:03:53.273 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '75'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:03:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:53.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:53 compute-0 nova_compute[257802]: 2025-10-02 13:03:53.790 2 DEBUG oslo_concurrency.lockutils [None req-4af1c6d5-ca3b-4587-bed7-734c90e954a8 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "08b29362-e7c1-450f-bf22-95d23c21ff23" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:53 compute-0 nova_compute[257802]: 2025-10-02 13:03:53.791 2 DEBUG oslo_concurrency.lockutils [None req-4af1c6d5-ca3b-4587-bed7-734c90e954a8 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "08b29362-e7c1-450f-bf22-95d23c21ff23" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:53 compute-0 nova_compute[257802]: 2025-10-02 13:03:53.972 2 INFO nova.compute.manager [None req-4af1c6d5-ca3b-4587-bed7-734c90e954a8 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Detaching volume b071d28d-2baf-4099-b6f4-9ad6bb72c88f
Oct 02 13:03:54 compute-0 nova_compute[257802]: 2025-10-02 13:03:54.174 2 INFO nova.virt.block_device [None req-4af1c6d5-ca3b-4587-bed7-734c90e954a8 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Attempting to driver detach volume b071d28d-2baf-4099-b6f4-9ad6bb72c88f from mountpoint /dev/vdb
Oct 02 13:03:54 compute-0 nova_compute[257802]: 2025-10-02 13:03:54.181 2 DEBUG nova.virt.libvirt.driver [None req-4af1c6d5-ca3b-4587-bed7-734c90e954a8 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Attempting to detach device vdb from instance 08b29362-e7c1-450f-bf22-95d23c21ff23 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 13:03:54 compute-0 nova_compute[257802]: 2025-10-02 13:03:54.182 2 DEBUG nova.virt.libvirt.guest [None req-4af1c6d5-ca3b-4587-bed7-734c90e954a8 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 13:03:54 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:03:54 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-b071d28d-2baf-4099-b6f4-9ad6bb72c88f">
Oct 02 13:03:54 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:03:54 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:03:54 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:03:54 compute-0 nova_compute[257802]:   </source>
Oct 02 13:03:54 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:03:54 compute-0 nova_compute[257802]:   <serial>b071d28d-2baf-4099-b6f4-9ad6bb72c88f</serial>
Oct 02 13:03:54 compute-0 nova_compute[257802]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 13:03:54 compute-0 nova_compute[257802]: </disk>
Oct 02 13:03:54 compute-0 nova_compute[257802]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 13:03:54 compute-0 nova_compute[257802]: 2025-10-02 13:03:54.188 2 INFO nova.virt.libvirt.driver [None req-4af1c6d5-ca3b-4587-bed7-734c90e954a8 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Successfully detached device vdb from instance 08b29362-e7c1-450f-bf22-95d23c21ff23 from the persistent domain config.
Oct 02 13:03:54 compute-0 nova_compute[257802]: 2025-10-02 13:03:54.188 2 DEBUG nova.virt.libvirt.driver [None req-4af1c6d5-ca3b-4587-bed7-734c90e954a8 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 08b29362-e7c1-450f-bf22-95d23c21ff23 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 13:03:54 compute-0 nova_compute[257802]: 2025-10-02 13:03:54.188 2 DEBUG nova.virt.libvirt.guest [None req-4af1c6d5-ca3b-4587-bed7-734c90e954a8 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 13:03:54 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:03:54 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-b071d28d-2baf-4099-b6f4-9ad6bb72c88f">
Oct 02 13:03:54 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:03:54 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:03:54 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:03:54 compute-0 nova_compute[257802]:   </source>
Oct 02 13:03:54 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:03:54 compute-0 nova_compute[257802]:   <serial>b071d28d-2baf-4099-b6f4-9ad6bb72c88f</serial>
Oct 02 13:03:54 compute-0 nova_compute[257802]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 13:03:54 compute-0 nova_compute[257802]: </disk>
Oct 02 13:03:54 compute-0 nova_compute[257802]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 13:03:54 compute-0 nova_compute[257802]: 2025-10-02 13:03:54.429 2 DEBUG nova.virt.libvirt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Received event <DeviceRemovedEvent: 1759410234.4285467, 08b29362-e7c1-450f-bf22-95d23c21ff23 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 13:03:54 compute-0 nova_compute[257802]: 2025-10-02 13:03:54.430 2 DEBUG nova.virt.libvirt.driver [None req-4af1c6d5-ca3b-4587-bed7-734c90e954a8 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 08b29362-e7c1-450f-bf22-95d23c21ff23 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 13:03:54 compute-0 nova_compute[257802]: 2025-10-02 13:03:54.433 2 INFO nova.virt.libvirt.driver [None req-4af1c6d5-ca3b-4587-bed7-734c90e954a8 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Successfully detached device vdb from instance 08b29362-e7c1-450f-bf22-95d23c21ff23 from the live domain config.
Oct 02 13:03:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3157: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Oct 02 13:03:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:54.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021719564256467523 of space, bias 1.0, pg target 0.6515869276940257 quantized to 32 (current 32)
Oct 02 13:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6485879920435511 quantized to 32 (current 32)
Oct 02 13:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:03:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:03:54 compute-0 nova_compute[257802]: 2025-10-02 13:03:54.873 2 DEBUG nova.objects.instance [None req-4af1c6d5-ca3b-4587-bed7-734c90e954a8 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lazy-loading 'flavor' on Instance uuid 08b29362-e7c1-450f-bf22-95d23c21ff23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:03:55 compute-0 nova_compute[257802]: 2025-10-02 13:03:55.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:55 compute-0 nova_compute[257802]: 2025-10-02 13:03:55.135 2 DEBUG oslo_concurrency.lockutils [None req-4af1c6d5-ca3b-4587-bed7-734c90e954a8 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "08b29362-e7c1-450f-bf22-95d23c21ff23" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.344s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:55 compute-0 nova_compute[257802]: 2025-10-02 13:03:55.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:03:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1191523559' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:03:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:03:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1191523559' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:03:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:55.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:55 compute-0 nova_compute[257802]: 2025-10-02 13:03:55.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:55 compute-0 ceph-mon[73607]: pgmap v3157: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Oct 02 13:03:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1191523559' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:03:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1191523559' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:03:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3158: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Oct 02 13:03:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:56.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:57 compute-0 nova_compute[257802]: 2025-10-02 13:03:57.292 2 DEBUG oslo_concurrency.lockutils [None req-167605ab-5ee0-4dee-a837-00e235a7ae15 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "08b29362-e7c1-450f-bf22-95d23c21ff23" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:57 compute-0 nova_compute[257802]: 2025-10-02 13:03:57.293 2 DEBUG oslo_concurrency.lockutils [None req-167605ab-5ee0-4dee-a837-00e235a7ae15 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "08b29362-e7c1-450f-bf22-95d23c21ff23" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:57 compute-0 nova_compute[257802]: 2025-10-02 13:03:57.293 2 DEBUG oslo_concurrency.lockutils [None req-167605ab-5ee0-4dee-a837-00e235a7ae15 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "08b29362-e7c1-450f-bf22-95d23c21ff23-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:57 compute-0 nova_compute[257802]: 2025-10-02 13:03:57.293 2 DEBUG oslo_concurrency.lockutils [None req-167605ab-5ee0-4dee-a837-00e235a7ae15 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "08b29362-e7c1-450f-bf22-95d23c21ff23-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:57 compute-0 nova_compute[257802]: 2025-10-02 13:03:57.293 2 DEBUG oslo_concurrency.lockutils [None req-167605ab-5ee0-4dee-a837-00e235a7ae15 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "08b29362-e7c1-450f-bf22-95d23c21ff23-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:57 compute-0 nova_compute[257802]: 2025-10-02 13:03:57.294 2 INFO nova.compute.manager [None req-167605ab-5ee0-4dee-a837-00e235a7ae15 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Terminating instance
Oct 02 13:03:57 compute-0 nova_compute[257802]: 2025-10-02 13:03:57.295 2 DEBUG nova.compute.manager [None req-167605ab-5ee0-4dee-a837-00e235a7ae15 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:03:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:57.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:57 compute-0 kernel: tapfa0139f5-ff (unregistering): left promiscuous mode
Oct 02 13:03:57 compute-0 NetworkManager[44987]: <info>  [1759410237.5265] device (tapfa0139f5-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:03:57 compute-0 ovn_controller[148183]: 2025-10-02T13:03:57Z|00921|binding|INFO|Releasing lport fa0139f5-ff5d-45c1-9873-48b0a33759d5 from this chassis (sb_readonly=0)
Oct 02 13:03:57 compute-0 ovn_controller[148183]: 2025-10-02T13:03:57Z|00922|binding|INFO|Setting lport fa0139f5-ff5d-45c1-9873-48b0a33759d5 down in Southbound
Oct 02 13:03:57 compute-0 ovn_controller[148183]: 2025-10-02T13:03:57Z|00923|binding|INFO|Removing iface tapfa0139f5-ff ovn-installed in OVS
Oct 02 13:03:57 compute-0 nova_compute[257802]: 2025-10-02 13:03:57.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:57 compute-0 nova_compute[257802]: 2025-10-02 13:03:57.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:57 compute-0 nova_compute[257802]: 2025-10-02 13:03:57.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:57 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:03:57.606 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:91:1b 10.100.0.14'], port_security=['fa:16:3e:07:91:1b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '08b29362-e7c1-450f-bf22-95d23c21ff23', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2471b6f7-ee51-4239-8b52-7016ab4d9fd1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5eceae619a6f4fdeaa8ba6fafda4912a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '175c521b-bdcf-4a1d-a720-87f5bec0bed9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa923984-fb22-4ee5-9bd7-5034c98e7f0a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=fa0139f5-ff5d-45c1-9873-48b0a33759d5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:03:57 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:03:57.607 158261 INFO neutron.agent.ovn.metadata.agent [-] Port fa0139f5-ff5d-45c1-9873-48b0a33759d5 in datapath 2471b6f7-ee51-4239-8b52-7016ab4d9fd1 unbound from our chassis
Oct 02 13:03:57 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:03:57.609 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2471b6f7-ee51-4239-8b52-7016ab4d9fd1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:03:57 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:03:57.610 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[df72d018-4db1-48c7-b349-0a09276dec62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:57 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:03:57.610 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1 namespace which is not needed anymore
Oct 02 13:03:57 compute-0 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000c7.scope: Deactivated successfully.
Oct 02 13:03:57 compute-0 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000c7.scope: Consumed 17.809s CPU time.
Oct 02 13:03:57 compute-0 systemd-machined[211836]: Machine qemu-98-instance-000000c7 terminated.
Oct 02 13:03:57 compute-0 nova_compute[257802]: 2025-10-02 13:03:57.732 2 INFO nova.virt.libvirt.driver [-] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Instance destroyed successfully.
Oct 02 13:03:57 compute-0 ceph-mon[73607]: pgmap v3158: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Oct 02 13:03:57 compute-0 nova_compute[257802]: 2025-10-02 13:03:57.734 2 DEBUG nova.objects.instance [None req-167605ab-5ee0-4dee-a837-00e235a7ae15 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lazy-loading 'resources' on Instance uuid 08b29362-e7c1-450f-bf22-95d23c21ff23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:03:57 compute-0 neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1[389134]: [NOTICE]   (389138) : haproxy version is 2.8.14-c23fe91
Oct 02 13:03:57 compute-0 neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1[389134]: [NOTICE]   (389138) : path to executable is /usr/sbin/haproxy
Oct 02 13:03:57 compute-0 neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1[389134]: [WARNING]  (389138) : Exiting Master process...
Oct 02 13:03:57 compute-0 neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1[389134]: [ALERT]    (389138) : Current worker (389140) exited with code 143 (Terminated)
Oct 02 13:03:57 compute-0 neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1[389134]: [WARNING]  (389138) : All workers exited. Exiting... (0)
Oct 02 13:03:57 compute-0 systemd[1]: libpod-6374e2b8c3073dd12e02dbececa33d0e568e85fec94ef81c4969608d60ae7d22.scope: Deactivated successfully.
Oct 02 13:03:57 compute-0 podman[390736]: 2025-10-02 13:03:57.750731561 +0000 UTC m=+0.054985212 container died 6374e2b8c3073dd12e02dbececa33d0e568e85fec94ef81c4969608d60ae7d22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 13:03:57 compute-0 nova_compute[257802]: 2025-10-02 13:03:57.763 2 DEBUG nova.virt.libvirt.vif [None req-167605ab-5ee0-4dee-a837-00e235a7ae15 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:01:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-247817219',display_name='tempest-AttachVolumeNegativeTest-server-247817219',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-247817219',id=199,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBj6tNsdWzptM1YL5GSEH1m7nxdiRaIlwCB2W6y7LUFKIu26VXI47mGh3X2ihi0CDsGqTRgVbGT/FY7e/MdF8Lmm+0sICub5iqjLIVf4S4ob9DXCs+NW7Dr/Dq12CLXmgQ==',key_name='tempest-keypair-1672670520',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:02:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5eceae619a6f4fdeaa8ba6fafda4912a',ramdisk_id='',reservation_id='r-uf4kq0sh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-1407980822',owner_user_name='tempest-AttachVolumeNegativeTest-1407980822-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:02:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='93facc00c95f4cbfa6cecaf3641182bc',uuid=08b29362-e7c1-450f-bf22-95d23c21ff23,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fa0139f5-ff5d-45c1-9873-48b0a33759d5", "address": "fa:16:3e:07:91:1b", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa0139f5-ff", "ovs_interfaceid": "fa0139f5-ff5d-45c1-9873-48b0a33759d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:03:57 compute-0 nova_compute[257802]: 2025-10-02 13:03:57.764 2 DEBUG nova.network.os_vif_util [None req-167605ab-5ee0-4dee-a837-00e235a7ae15 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Converting VIF {"id": "fa0139f5-ff5d-45c1-9873-48b0a33759d5", "address": "fa:16:3e:07:91:1b", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa0139f5-ff", "ovs_interfaceid": "fa0139f5-ff5d-45c1-9873-48b0a33759d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:03:57 compute-0 nova_compute[257802]: 2025-10-02 13:03:57.765 2 DEBUG nova.network.os_vif_util [None req-167605ab-5ee0-4dee-a837-00e235a7ae15 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:07:91:1b,bridge_name='br-int',has_traffic_filtering=True,id=fa0139f5-ff5d-45c1-9873-48b0a33759d5,network=Network(2471b6f7-ee51-4239-8b52-7016ab4d9fd1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa0139f5-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:03:57 compute-0 nova_compute[257802]: 2025-10-02 13:03:57.765 2 DEBUG os_vif [None req-167605ab-5ee0-4dee-a837-00e235a7ae15 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:91:1b,bridge_name='br-int',has_traffic_filtering=True,id=fa0139f5-ff5d-45c1-9873-48b0a33759d5,network=Network(2471b6f7-ee51-4239-8b52-7016ab4d9fd1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa0139f5-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:03:57 compute-0 nova_compute[257802]: 2025-10-02 13:03:57.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:57 compute-0 nova_compute[257802]: 2025-10-02 13:03:57.768 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa0139f5-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:03:57 compute-0 nova_compute[257802]: 2025-10-02 13:03:57.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:57 compute-0 nova_compute[257802]: 2025-10-02 13:03:57.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:03:57 compute-0 nova_compute[257802]: 2025-10-02 13:03:57.775 2 INFO os_vif [None req-167605ab-5ee0-4dee-a837-00e235a7ae15 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:91:1b,bridge_name='br-int',has_traffic_filtering=True,id=fa0139f5-ff5d-45c1-9873-48b0a33759d5,network=Network(2471b6f7-ee51-4239-8b52-7016ab4d9fd1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa0139f5-ff')
Oct 02 13:03:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6374e2b8c3073dd12e02dbececa33d0e568e85fec94ef81c4969608d60ae7d22-userdata-shm.mount: Deactivated successfully.
Oct 02 13:03:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-c38cacc1a97e2b2d0f6a47f4d3c69917086fade6266dcb33ffa08af3e2e7de04-merged.mount: Deactivated successfully.
Oct 02 13:03:57 compute-0 podman[390736]: 2025-10-02 13:03:57.799966872 +0000 UTC m=+0.104220523 container cleanup 6374e2b8c3073dd12e02dbececa33d0e568e85fec94ef81c4969608d60ae7d22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 02 13:03:57 compute-0 systemd[1]: libpod-conmon-6374e2b8c3073dd12e02dbececa33d0e568e85fec94ef81c4969608d60ae7d22.scope: Deactivated successfully.
Oct 02 13:03:57 compute-0 podman[390780]: 2025-10-02 13:03:57.868148662 +0000 UTC m=+0.043176066 container remove 6374e2b8c3073dd12e02dbececa33d0e568e85fec94ef81c4969608d60ae7d22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:03:57 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:03:57.874 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c24bdc50-3354-406d-893b-b10b45292721]: (4, ('Thu Oct  2 01:03:57 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1 (6374e2b8c3073dd12e02dbececa33d0e568e85fec94ef81c4969608d60ae7d22)\n6374e2b8c3073dd12e02dbececa33d0e568e85fec94ef81c4969608d60ae7d22\nThu Oct  2 01:03:57 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1 (6374e2b8c3073dd12e02dbececa33d0e568e85fec94ef81c4969608d60ae7d22)\n6374e2b8c3073dd12e02dbececa33d0e568e85fec94ef81c4969608d60ae7d22\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:57 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:03:57.875 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c1ed0aae-6a00-4401-915f-c40318ccd60d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:57 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:03:57.877 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2471b6f7-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:03:57 compute-0 kernel: tap2471b6f7-e0: left promiscuous mode
Oct 02 13:03:57 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:03:57.895 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0add583a-0b8e-41f2-996d-86e553726066]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:57 compute-0 nova_compute[257802]: 2025-10-02 13:03:57.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:57 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:03:57.931 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ea666fda-edf5-43c5-a486-71a34e44142b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:57 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:03:57.932 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a283c7ba-15ca-4009-a80a-49608ac61763]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:57 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:03:57.946 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c0bf604e-cab9-4626-92ca-433d8b09afa8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 818928, 'reachable_time': 39803, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390803, 'error': None, 'target': 'ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:57 compute-0 systemd[1]: run-netns-ovnmeta\x2d2471b6f7\x2dee51\x2d4239\x2d8b52\x2d7016ab4d9fd1.mount: Deactivated successfully.
Oct 02 13:03:57 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:03:57.951 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:03:57 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:03:57.951 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[7a476d80-e009-4307-b7d8-e584916e3e80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:58 compute-0 nova_compute[257802]: 2025-10-02 13:03:58.385 2 DEBUG nova.compute.manager [req-b73d1e1a-83ba-405d-9b1e-4df52a0d7597 req-626ff440-113c-44e4-8c24-3692d7ffe202 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Received event network-vif-unplugged-fa0139f5-ff5d-45c1-9873-48b0a33759d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:03:58 compute-0 nova_compute[257802]: 2025-10-02 13:03:58.385 2 DEBUG oslo_concurrency.lockutils [req-b73d1e1a-83ba-405d-9b1e-4df52a0d7597 req-626ff440-113c-44e4-8c24-3692d7ffe202 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "08b29362-e7c1-450f-bf22-95d23c21ff23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:58 compute-0 nova_compute[257802]: 2025-10-02 13:03:58.385 2 DEBUG oslo_concurrency.lockutils [req-b73d1e1a-83ba-405d-9b1e-4df52a0d7597 req-626ff440-113c-44e4-8c24-3692d7ffe202 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "08b29362-e7c1-450f-bf22-95d23c21ff23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:58 compute-0 nova_compute[257802]: 2025-10-02 13:03:58.385 2 DEBUG oslo_concurrency.lockutils [req-b73d1e1a-83ba-405d-9b1e-4df52a0d7597 req-626ff440-113c-44e4-8c24-3692d7ffe202 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "08b29362-e7c1-450f-bf22-95d23c21ff23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:58 compute-0 nova_compute[257802]: 2025-10-02 13:03:58.386 2 DEBUG nova.compute.manager [req-b73d1e1a-83ba-405d-9b1e-4df52a0d7597 req-626ff440-113c-44e4-8c24-3692d7ffe202 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] No waiting events found dispatching network-vif-unplugged-fa0139f5-ff5d-45c1-9873-48b0a33759d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:03:58 compute-0 nova_compute[257802]: 2025-10-02 13:03:58.386 2 DEBUG nova.compute.manager [req-b73d1e1a-83ba-405d-9b1e-4df52a0d7597 req-626ff440-113c-44e4-8c24-3692d7ffe202 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Received event network-vif-unplugged-fa0139f5-ff5d-45c1-9873-48b0a33759d5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:03:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3159: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 13:03:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:58.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:03:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:03:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:59.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:03:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:59 compute-0 ceph-mon[73607]: pgmap v3159: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 13:04:00 compute-0 nova_compute[257802]: 2025-10-02 13:04:00.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3160: 305 pgs: 305 active+clean; 138 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.4 KiB/s wr, 40 op/s
Oct 02 13:04:00 compute-0 nova_compute[257802]: 2025-10-02 13:04:00.533 2 DEBUG nova.compute.manager [req-2d30e71f-b29b-4617-9de0-9519fd623c56 req-4ee35a8b-7284-46cc-9405-89c84c65b718 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Received event network-vif-plugged-fa0139f5-ff5d-45c1-9873-48b0a33759d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:04:00 compute-0 nova_compute[257802]: 2025-10-02 13:04:00.534 2 DEBUG oslo_concurrency.lockutils [req-2d30e71f-b29b-4617-9de0-9519fd623c56 req-4ee35a8b-7284-46cc-9405-89c84c65b718 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "08b29362-e7c1-450f-bf22-95d23c21ff23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:00 compute-0 nova_compute[257802]: 2025-10-02 13:04:00.534 2 DEBUG oslo_concurrency.lockutils [req-2d30e71f-b29b-4617-9de0-9519fd623c56 req-4ee35a8b-7284-46cc-9405-89c84c65b718 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "08b29362-e7c1-450f-bf22-95d23c21ff23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:00 compute-0 nova_compute[257802]: 2025-10-02 13:04:00.534 2 DEBUG oslo_concurrency.lockutils [req-2d30e71f-b29b-4617-9de0-9519fd623c56 req-4ee35a8b-7284-46cc-9405-89c84c65b718 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "08b29362-e7c1-450f-bf22-95d23c21ff23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:00 compute-0 nova_compute[257802]: 2025-10-02 13:04:00.534 2 DEBUG nova.compute.manager [req-2d30e71f-b29b-4617-9de0-9519fd623c56 req-4ee35a8b-7284-46cc-9405-89c84c65b718 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] No waiting events found dispatching network-vif-plugged-fa0139f5-ff5d-45c1-9873-48b0a33759d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:04:00 compute-0 nova_compute[257802]: 2025-10-02 13:04:00.534 2 WARNING nova.compute.manager [req-2d30e71f-b29b-4617-9de0-9519fd623c56 req-4ee35a8b-7284-46cc-9405-89c84c65b718 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Received unexpected event network-vif-plugged-fa0139f5-ff5d-45c1-9873-48b0a33759d5 for instance with vm_state active and task_state deleting.
Oct 02 13:04:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:00.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:00 compute-0 nova_compute[257802]: 2025-10-02 13:04:00.896 2 INFO nova.virt.libvirt.driver [None req-167605ab-5ee0-4dee-a837-00e235a7ae15 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Deleting instance files /var/lib/nova/instances/08b29362-e7c1-450f-bf22-95d23c21ff23_del
Oct 02 13:04:00 compute-0 nova_compute[257802]: 2025-10-02 13:04:00.897 2 INFO nova.virt.libvirt.driver [None req-167605ab-5ee0-4dee-a837-00e235a7ae15 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Deletion of /var/lib/nova/instances/08b29362-e7c1-450f-bf22-95d23c21ff23_del complete
Oct 02 13:04:00 compute-0 ceph-mon[73607]: pgmap v3160: 305 pgs: 305 active+clean; 138 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.4 KiB/s wr, 40 op/s
Oct 02 13:04:01 compute-0 nova_compute[257802]: 2025-10-02 13:04:01.241 2 INFO nova.compute.manager [None req-167605ab-5ee0-4dee-a837-00e235a7ae15 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Took 3.95 seconds to destroy the instance on the hypervisor.
Oct 02 13:04:01 compute-0 nova_compute[257802]: 2025-10-02 13:04:01.241 2 DEBUG oslo.service.loopingcall [None req-167605ab-5ee0-4dee-a837-00e235a7ae15 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:04:01 compute-0 nova_compute[257802]: 2025-10-02 13:04:01.242 2 DEBUG nova.compute.manager [-] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:04:01 compute-0 nova_compute[257802]: 2025-10-02 13:04:01.242 2 DEBUG nova.network.neutron [-] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:04:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:04:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:01.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:04:01 compute-0 sudo[390810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:01 compute-0 sudo[390810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:01 compute-0 sudo[390810]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:01 compute-0 sudo[390835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:04:01 compute-0 sudo[390835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:01 compute-0 sudo[390835]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:01 compute-0 sudo[390860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:01 compute-0 sudo[390860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:01 compute-0 sudo[390860]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:01 compute-0 sudo[390885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:04:01 compute-0 sudo[390885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:02 compute-0 sudo[390930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:02 compute-0 sudo[390930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:02 compute-0 sudo[390930]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:02 compute-0 sudo[390885]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:02 compute-0 sudo[390967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:02 compute-0 sudo[390967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:02 compute-0 sudo[390968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:02 compute-0 sudo[390967]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:02 compute-0 sudo[390968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:02 compute-0 sudo[390968]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:02 compute-0 sudo[391017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:04:02 compute-0 sudo[391017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:02 compute-0 sudo[391017]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:02 compute-0 sudo[391042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:02 compute-0 sudo[391042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:02 compute-0 sudo[391042]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:02 compute-0 sudo[391067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 02 13:04:02 compute-0 sudo[391067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3161: 305 pgs: 305 active+clean; 138 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 8.4 KiB/s rd, 255 B/s wr, 12 op/s
Oct 02 13:04:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 13:04:02 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:04:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 13:04:02 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:04:02 compute-0 sudo[391067]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:04:02 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:04:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:04:02 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:04:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:04:02 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:04:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:04:02 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:04:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:04:02 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:04:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:04:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:02.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:04:02 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 57cc1609-9547-4cde-9868-36b408a5eb34 does not exist
Oct 02 13:04:02 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev e800933d-4859-4c75-9657-7d851630eccb does not exist
Oct 02 13:04:02 compute-0 nova_compute[257802]: 2025-10-02 13:04:02.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:02 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev caaf8f1f-4645-4215-86d5-6fe8c1325714 does not exist
Oct 02 13:04:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:04:02 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:04:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:04:02 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:04:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:04:02 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:04:02 compute-0 sudo[391111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:02 compute-0 sudo[391111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:02 compute-0 sudo[391111]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:02 compute-0 sudo[391136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:04:02 compute-0 sudo[391136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:02 compute-0 sudo[391136]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:02 compute-0 sudo[391161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:02 compute-0 sudo[391161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:02 compute-0 sudo[391161]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:03 compute-0 sudo[391186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:04:03 compute-0 sudo[391186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:03 compute-0 podman[391251]: 2025-10-02 13:04:03.314172545 +0000 UTC m=+0.040740947 container create 5b30ba1b233668663b2daa814a9397ae7309b45e6dbbda9b6cdc1261679c2089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wu, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:04:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:04:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:03.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:04:03 compute-0 systemd[1]: Started libpod-conmon-5b30ba1b233668663b2daa814a9397ae7309b45e6dbbda9b6cdc1261679c2089.scope.
Oct 02 13:04:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:04:03 compute-0 nova_compute[257802]: 2025-10-02 13:04:03.387 2 DEBUG nova.network.neutron [-] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:04:03 compute-0 podman[391251]: 2025-10-02 13:04:03.296473856 +0000 UTC m=+0.023042278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:04:03 compute-0 podman[391251]: 2025-10-02 13:04:03.397393718 +0000 UTC m=+0.123962140 container init 5b30ba1b233668663b2daa814a9397ae7309b45e6dbbda9b6cdc1261679c2089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 13:04:03 compute-0 podman[391251]: 2025-10-02 13:04:03.405766411 +0000 UTC m=+0.132334813 container start 5b30ba1b233668663b2daa814a9397ae7309b45e6dbbda9b6cdc1261679c2089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 13:04:03 compute-0 podman[391251]: 2025-10-02 13:04:03.409157343 +0000 UTC m=+0.135725755 container attach 5b30ba1b233668663b2daa814a9397ae7309b45e6dbbda9b6cdc1261679c2089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wu, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:04:03 compute-0 boring_wu[391267]: 167 167
Oct 02 13:04:03 compute-0 systemd[1]: libpod-5b30ba1b233668663b2daa814a9397ae7309b45e6dbbda9b6cdc1261679c2089.scope: Deactivated successfully.
Oct 02 13:04:03 compute-0 conmon[391267]: conmon 5b30ba1b233668663b2d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5b30ba1b233668663b2daa814a9397ae7309b45e6dbbda9b6cdc1261679c2089.scope/container/memory.events
Oct 02 13:04:03 compute-0 podman[391251]: 2025-10-02 13:04:03.413034207 +0000 UTC m=+0.139602609 container died 5b30ba1b233668663b2daa814a9397ae7309b45e6dbbda9b6cdc1261679c2089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:04:03 compute-0 nova_compute[257802]: 2025-10-02 13:04:03.428 2 INFO nova.compute.manager [-] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Took 2.19 seconds to deallocate network for instance.
Oct 02 13:04:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9ac01e9b4fc2a55090b27ddf63f5292a81249e6ac604b862ce0c6c0b70b6978-merged.mount: Deactivated successfully.
Oct 02 13:04:03 compute-0 podman[391251]: 2025-10-02 13:04:03.454606482 +0000 UTC m=+0.181174884 container remove 5b30ba1b233668663b2daa814a9397ae7309b45e6dbbda9b6cdc1261679c2089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wu, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:04:03 compute-0 systemd[1]: libpod-conmon-5b30ba1b233668663b2daa814a9397ae7309b45e6dbbda9b6cdc1261679c2089.scope: Deactivated successfully.
Oct 02 13:04:03 compute-0 nova_compute[257802]: 2025-10-02 13:04:03.546 2 DEBUG nova.compute.manager [req-006ffde1-8b9b-4f06-b462-ad276bb01eb1 req-faa7de9a-99a2-43ac-aa8d-b9696bc6cfb8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Received event network-vif-deleted-fa0139f5-ff5d-45c1-9873-48b0a33759d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:04:03 compute-0 nova_compute[257802]: 2025-10-02 13:04:03.575 2 DEBUG oslo_concurrency.lockutils [None req-167605ab-5ee0-4dee-a837-00e235a7ae15 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:03 compute-0 nova_compute[257802]: 2025-10-02 13:04:03.576 2 DEBUG oslo_concurrency.lockutils [None req-167605ab-5ee0-4dee-a837-00e235a7ae15 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:03 compute-0 podman[391291]: 2025-10-02 13:04:03.603201298 +0000 UTC m=+0.039063337 container create 9fcccb1b529ef3f0ac2c7a192349656d3e93acd6d38eea2ccc5c340dccccf30a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_beaver, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct 02 13:04:03 compute-0 ceph-mon[73607]: pgmap v3161: 305 pgs: 305 active+clean; 138 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 8.4 KiB/s rd, 255 B/s wr, 12 op/s
Oct 02 13:04:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:04:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:04:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:04:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:04:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:04:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:04:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:04:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:04:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:04:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:04:03 compute-0 systemd[1]: Started libpod-conmon-9fcccb1b529ef3f0ac2c7a192349656d3e93acd6d38eea2ccc5c340dccccf30a.scope.
Oct 02 13:04:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:04:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7890749ecccf89925972c70d4c523c773f95a86a1713ad73a989fff7b684a45b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:04:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7890749ecccf89925972c70d4c523c773f95a86a1713ad73a989fff7b684a45b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:04:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7890749ecccf89925972c70d4c523c773f95a86a1713ad73a989fff7b684a45b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:04:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7890749ecccf89925972c70d4c523c773f95a86a1713ad73a989fff7b684a45b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:04:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7890749ecccf89925972c70d4c523c773f95a86a1713ad73a989fff7b684a45b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:04:03 compute-0 podman[391291]: 2025-10-02 13:04:03.587710522 +0000 UTC m=+0.023572581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:04:03 compute-0 nova_compute[257802]: 2025-10-02 13:04:03.686 2 DEBUG oslo_concurrency.processutils [None req-167605ab-5ee0-4dee-a837-00e235a7ae15 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:03 compute-0 podman[391291]: 2025-10-02 13:04:03.694459315 +0000 UTC m=+0.130321354 container init 9fcccb1b529ef3f0ac2c7a192349656d3e93acd6d38eea2ccc5c340dccccf30a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_beaver, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:04:03 compute-0 podman[391291]: 2025-10-02 13:04:03.701377663 +0000 UTC m=+0.137239702 container start 9fcccb1b529ef3f0ac2c7a192349656d3e93acd6d38eea2ccc5c340dccccf30a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 13:04:03 compute-0 podman[391291]: 2025-10-02 13:04:03.704680712 +0000 UTC m=+0.140542751 container attach 9fcccb1b529ef3f0ac2c7a192349656d3e93acd6d38eea2ccc5c340dccccf30a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:04:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:04:04 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4196282432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:04 compute-0 nova_compute[257802]: 2025-10-02 13:04:04.142 2 DEBUG oslo_concurrency.processutils [None req-167605ab-5ee0-4dee-a837-00e235a7ae15 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:04 compute-0 nova_compute[257802]: 2025-10-02 13:04:04.151 2 DEBUG nova.compute.provider_tree [None req-167605ab-5ee0-4dee-a837-00e235a7ae15 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:04:04 compute-0 nova_compute[257802]: 2025-10-02 13:04:04.188 2 DEBUG nova.scheduler.client.report [None req-167605ab-5ee0-4dee-a837-00e235a7ae15 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:04:04 compute-0 nova_compute[257802]: 2025-10-02 13:04:04.232 2 DEBUG oslo_concurrency.lockutils [None req-167605ab-5ee0-4dee-a837-00e235a7ae15 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:04 compute-0 nova_compute[257802]: 2025-10-02 13:04:04.274 2 INFO nova.scheduler.client.report [None req-167605ab-5ee0-4dee-a837-00e235a7ae15 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Deleted allocations for instance 08b29362-e7c1-450f-bf22-95d23c21ff23
Oct 02 13:04:04 compute-0 nova_compute[257802]: 2025-10-02 13:04:04.366 2 DEBUG oslo_concurrency.lockutils [None req-167605ab-5ee0-4dee-a837-00e235a7ae15 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "08b29362-e7c1-450f-bf22-95d23c21ff23" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:04 compute-0 cool_beaver[391307]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:04:04 compute-0 cool_beaver[391307]: --> relative data size: 1.0
Oct 02 13:04:04 compute-0 cool_beaver[391307]: --> All data devices are unavailable
Oct 02 13:04:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3162: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 9.5 KiB/s rd, 597 B/s wr, 15 op/s
Oct 02 13:04:04 compute-0 systemd[1]: libpod-9fcccb1b529ef3f0ac2c7a192349656d3e93acd6d38eea2ccc5c340dccccf30a.scope: Deactivated successfully.
Oct 02 13:04:04 compute-0 conmon[391307]: conmon 9fcccb1b529ef3f0ac2c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9fcccb1b529ef3f0ac2c7a192349656d3e93acd6d38eea2ccc5c340dccccf30a.scope/container/memory.events
Oct 02 13:04:04 compute-0 podman[391291]: 2025-10-02 13:04:04.535976124 +0000 UTC m=+0.971838163 container died 9fcccb1b529ef3f0ac2c7a192349656d3e93acd6d38eea2ccc5c340dccccf30a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_beaver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:04:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-7890749ecccf89925972c70d4c523c773f95a86a1713ad73a989fff7b684a45b-merged.mount: Deactivated successfully.
Oct 02 13:04:04 compute-0 podman[391291]: 2025-10-02 13:04:04.613068599 +0000 UTC m=+1.048930648 container remove 9fcccb1b529ef3f0ac2c7a192349656d3e93acd6d38eea2ccc5c340dccccf30a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_beaver, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:04:04 compute-0 systemd[1]: libpod-conmon-9fcccb1b529ef3f0ac2c7a192349656d3e93acd6d38eea2ccc5c340dccccf30a.scope: Deactivated successfully.
Oct 02 13:04:04 compute-0 nova_compute[257802]: 2025-10-02 13:04:04.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:04 compute-0 podman[391352]: 2025-10-02 13:04:04.659656545 +0000 UTC m=+0.076024909 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd)
Oct 02 13:04:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4196282432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:04 compute-0 sudo[391186]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:04 compute-0 podman[391345]: 2025-10-02 13:04:04.676669087 +0000 UTC m=+0.107016900 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 13:04:04 compute-0 podman[391354]: 2025-10-02 13:04:04.68261133 +0000 UTC m=+0.101294621 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.schema-version=1.0)
Oct 02 13:04:04 compute-0 sudo[391410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:04 compute-0 sudo[391410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:04 compute-0 sudo[391410]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:04:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:04.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:04:04 compute-0 sudo[391436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:04:04 compute-0 sudo[391436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:04 compute-0 sudo[391436]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:04 compute-0 sudo[391461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:04 compute-0 sudo[391461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:04 compute-0 sudo[391461]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:04 compute-0 sudo[391486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 13:04:04 compute-0 sudo[391486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:05 compute-0 nova_compute[257802]: 2025-10-02 13:04:05.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:05 compute-0 podman[391551]: 2025-10-02 13:04:05.213376231 +0000 UTC m=+0.067619837 container create 4e3ba5e65dce25088b040c49798157c8584a4be071aba41129e05804c172f93c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brown, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 13:04:05 compute-0 podman[391551]: 2025-10-02 13:04:05.168678701 +0000 UTC m=+0.022922337 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:04:05 compute-0 systemd[1]: Started libpod-conmon-4e3ba5e65dce25088b040c49798157c8584a4be071aba41129e05804c172f93c.scope.
Oct 02 13:04:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:05.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:04:05 compute-0 podman[391551]: 2025-10-02 13:04:05.379949622 +0000 UTC m=+0.234193238 container init 4e3ba5e65dce25088b040c49798157c8584a4be071aba41129e05804c172f93c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:04:05 compute-0 podman[391551]: 2025-10-02 13:04:05.388456798 +0000 UTC m=+0.242700414 container start 4e3ba5e65dce25088b040c49798157c8584a4be071aba41129e05804c172f93c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:04:05 compute-0 dazzling_brown[391568]: 167 167
Oct 02 13:04:05 compute-0 systemd[1]: libpod-4e3ba5e65dce25088b040c49798157c8584a4be071aba41129e05804c172f93c.scope: Deactivated successfully.
Oct 02 13:04:05 compute-0 podman[391551]: 2025-10-02 13:04:05.4257731 +0000 UTC m=+0.280016716 container attach 4e3ba5e65dce25088b040c49798157c8584a4be071aba41129e05804c172f93c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 13:04:05 compute-0 podman[391551]: 2025-10-02 13:04:05.427421599 +0000 UTC m=+0.281665205 container died 4e3ba5e65dce25088b040c49798157c8584a4be071aba41129e05804c172f93c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brown, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Oct 02 13:04:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6ec5a77371f029bc8c423bada89381454444bfa996d3034d54e5203a7bb6ec9-merged.mount: Deactivated successfully.
Oct 02 13:04:05 compute-0 podman[391551]: 2025-10-02 13:04:05.477168173 +0000 UTC m=+0.331411789 container remove 4e3ba5e65dce25088b040c49798157c8584a4be071aba41129e05804c172f93c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:04:05 compute-0 systemd[1]: libpod-conmon-4e3ba5e65dce25088b040c49798157c8584a4be071aba41129e05804c172f93c.scope: Deactivated successfully.
Oct 02 13:04:05 compute-0 podman[391594]: 2025-10-02 13:04:05.653177432 +0000 UTC m=+0.039544949 container create 8ae9fb6d733b0c18a2febaa623866a9c907158b766eb0687819c27a91a7246a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pare, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:04:05 compute-0 ceph-mon[73607]: pgmap v3162: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 9.5 KiB/s rd, 597 B/s wr, 15 op/s
Oct 02 13:04:05 compute-0 systemd[1]: Started libpod-conmon-8ae9fb6d733b0c18a2febaa623866a9c907158b766eb0687819c27a91a7246a0.scope.
Oct 02 13:04:05 compute-0 podman[391594]: 2025-10-02 13:04:05.637143503 +0000 UTC m=+0.023511050 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:04:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:04:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e1b98febbfaffe193c904ed6111311cf472f88bf2817adc20f910ff6646278/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:04:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e1b98febbfaffe193c904ed6111311cf472f88bf2817adc20f910ff6646278/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:04:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e1b98febbfaffe193c904ed6111311cf472f88bf2817adc20f910ff6646278/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:04:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e1b98febbfaffe193c904ed6111311cf472f88bf2817adc20f910ff6646278/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:04:05 compute-0 podman[391594]: 2025-10-02 13:04:05.755035525 +0000 UTC m=+0.141403072 container init 8ae9fb6d733b0c18a2febaa623866a9c907158b766eb0687819c27a91a7246a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:04:05 compute-0 podman[391594]: 2025-10-02 13:04:05.763189213 +0000 UTC m=+0.149556740 container start 8ae9fb6d733b0c18a2febaa623866a9c907158b766eb0687819c27a91a7246a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pare, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:04:05 compute-0 podman[391594]: 2025-10-02 13:04:05.766255167 +0000 UTC m=+0.152622714 container attach 8ae9fb6d733b0c18a2febaa623866a9c907158b766eb0687819c27a91a7246a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:04:06 compute-0 naughty_pare[391611]: {
Oct 02 13:04:06 compute-0 naughty_pare[391611]:     "1": [
Oct 02 13:04:06 compute-0 naughty_pare[391611]:         {
Oct 02 13:04:06 compute-0 naughty_pare[391611]:             "devices": [
Oct 02 13:04:06 compute-0 naughty_pare[391611]:                 "/dev/loop3"
Oct 02 13:04:06 compute-0 naughty_pare[391611]:             ],
Oct 02 13:04:06 compute-0 naughty_pare[391611]:             "lv_name": "ceph_lv0",
Oct 02 13:04:06 compute-0 naughty_pare[391611]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:04:06 compute-0 naughty_pare[391611]:             "lv_size": "7511998464",
Oct 02 13:04:06 compute-0 naughty_pare[391611]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:04:06 compute-0 naughty_pare[391611]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:04:06 compute-0 naughty_pare[391611]:             "name": "ceph_lv0",
Oct 02 13:04:06 compute-0 naughty_pare[391611]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:04:06 compute-0 naughty_pare[391611]:             "tags": {
Oct 02 13:04:06 compute-0 naughty_pare[391611]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:04:06 compute-0 naughty_pare[391611]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:04:06 compute-0 naughty_pare[391611]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:04:06 compute-0 naughty_pare[391611]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:04:06 compute-0 naughty_pare[391611]:                 "ceph.cluster_name": "ceph",
Oct 02 13:04:06 compute-0 naughty_pare[391611]:                 "ceph.crush_device_class": "",
Oct 02 13:04:06 compute-0 naughty_pare[391611]:                 "ceph.encrypted": "0",
Oct 02 13:04:06 compute-0 naughty_pare[391611]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:04:06 compute-0 naughty_pare[391611]:                 "ceph.osd_id": "1",
Oct 02 13:04:06 compute-0 naughty_pare[391611]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:04:06 compute-0 naughty_pare[391611]:                 "ceph.type": "block",
Oct 02 13:04:06 compute-0 naughty_pare[391611]:                 "ceph.vdo": "0"
Oct 02 13:04:06 compute-0 naughty_pare[391611]:             },
Oct 02 13:04:06 compute-0 naughty_pare[391611]:             "type": "block",
Oct 02 13:04:06 compute-0 naughty_pare[391611]:             "vg_name": "ceph_vg0"
Oct 02 13:04:06 compute-0 naughty_pare[391611]:         }
Oct 02 13:04:06 compute-0 naughty_pare[391611]:     ]
Oct 02 13:04:06 compute-0 naughty_pare[391611]: }
Oct 02 13:04:06 compute-0 systemd[1]: libpod-8ae9fb6d733b0c18a2febaa623866a9c907158b766eb0687819c27a91a7246a0.scope: Deactivated successfully.
Oct 02 13:04:06 compute-0 podman[391594]: 2025-10-02 13:04:06.496636497 +0000 UTC m=+0.883004014 container died 8ae9fb6d733b0c18a2febaa623866a9c907158b766eb0687819c27a91a7246a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 13:04:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3163: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 13:04:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-59e1b98febbfaffe193c904ed6111311cf472f88bf2817adc20f910ff6646278-merged.mount: Deactivated successfully.
Oct 02 13:04:06 compute-0 podman[391594]: 2025-10-02 13:04:06.566797994 +0000 UTC m=+0.953165511 container remove 8ae9fb6d733b0c18a2febaa623866a9c907158b766eb0687819c27a91a7246a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pare, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:04:06 compute-0 systemd[1]: libpod-conmon-8ae9fb6d733b0c18a2febaa623866a9c907158b766eb0687819c27a91a7246a0.scope: Deactivated successfully.
Oct 02 13:04:06 compute-0 sudo[391486]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:06 compute-0 sudo[391632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:06 compute-0 sudo[391632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:06 compute-0 sudo[391632]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:06 compute-0 sudo[391658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:04:06 compute-0 sudo[391658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:06 compute-0 sudo[391658]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:06.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:06 compute-0 sudo[391683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:06 compute-0 sudo[391683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:06 compute-0 sudo[391683]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:06 compute-0 sudo[391708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 13:04:06 compute-0 sudo[391708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:07 compute-0 podman[391773]: 2025-10-02 13:04:07.198092777 +0000 UTC m=+0.035001188 container create 79ad508b296a072eabbba5dc6a11e14daaf3adb4616808157818d78b1d150e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_sutherland, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 13:04:07 compute-0 systemd[1]: Started libpod-conmon-79ad508b296a072eabbba5dc6a11e14daaf3adb4616808157818d78b1d150e42.scope.
Oct 02 13:04:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:04:07 compute-0 podman[391773]: 2025-10-02 13:04:07.274164027 +0000 UTC m=+0.111072448 container init 79ad508b296a072eabbba5dc6a11e14daaf3adb4616808157818d78b1d150e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:04:07 compute-0 podman[391773]: 2025-10-02 13:04:07.183162205 +0000 UTC m=+0.020070636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:04:07 compute-0 podman[391773]: 2025-10-02 13:04:07.281384923 +0000 UTC m=+0.118293334 container start 79ad508b296a072eabbba5dc6a11e14daaf3adb4616808157818d78b1d150e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 13:04:07 compute-0 podman[391773]: 2025-10-02 13:04:07.284429306 +0000 UTC m=+0.121337717 container attach 79ad508b296a072eabbba5dc6a11e14daaf3adb4616808157818d78b1d150e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_sutherland, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 13:04:07 compute-0 systemd[1]: libpod-79ad508b296a072eabbba5dc6a11e14daaf3adb4616808157818d78b1d150e42.scope: Deactivated successfully.
Oct 02 13:04:07 compute-0 infallible_sutherland[391789]: 167 167
Oct 02 13:04:07 compute-0 conmon[391789]: conmon 79ad508b296a072eabbb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-79ad508b296a072eabbba5dc6a11e14daaf3adb4616808157818d78b1d150e42.scope/container/memory.events
Oct 02 13:04:07 compute-0 podman[391773]: 2025-10-02 13:04:07.287385647 +0000 UTC m=+0.124294058 container died 79ad508b296a072eabbba5dc6a11e14daaf3adb4616808157818d78b1d150e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_sutherland, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:04:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-9861015dd64aef8cb4ac6d9fa9073b9210265850485449814953800a0894aa99-merged.mount: Deactivated successfully.
Oct 02 13:04:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:07.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:07 compute-0 podman[391773]: 2025-10-02 13:04:07.322812404 +0000 UTC m=+0.159720815 container remove 79ad508b296a072eabbba5dc6a11e14daaf3adb4616808157818d78b1d150e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_sutherland, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:04:07 compute-0 systemd[1]: libpod-conmon-79ad508b296a072eabbba5dc6a11e14daaf3adb4616808157818d78b1d150e42.scope: Deactivated successfully.
Oct 02 13:04:07 compute-0 podman[391813]: 2025-10-02 13:04:07.471474241 +0000 UTC m=+0.041467264 container create c2d01a79c0308a161736ce4495102984f62334f54f7db3618f05f06ccd0e6147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 13:04:07 compute-0 systemd[1]: Started libpod-conmon-c2d01a79c0308a161736ce4495102984f62334f54f7db3618f05f06ccd0e6147.scope.
Oct 02 13:04:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:04:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e155ec52cad779d0a38edda12e56062c1794379412e85b7eb09da26da9f213/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:04:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e155ec52cad779d0a38edda12e56062c1794379412e85b7eb09da26da9f213/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:04:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e155ec52cad779d0a38edda12e56062c1794379412e85b7eb09da26da9f213/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:04:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e155ec52cad779d0a38edda12e56062c1794379412e85b7eb09da26da9f213/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:04:07 compute-0 podman[391813]: 2025-10-02 13:04:07.452239376 +0000 UTC m=+0.022232419 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:04:07 compute-0 podman[391813]: 2025-10-02 13:04:07.553611638 +0000 UTC m=+0.123604661 container init c2d01a79c0308a161736ce4495102984f62334f54f7db3618f05f06ccd0e6147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 13:04:07 compute-0 podman[391813]: 2025-10-02 13:04:07.559925921 +0000 UTC m=+0.129918944 container start c2d01a79c0308a161736ce4495102984f62334f54f7db3618f05f06ccd0e6147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bhaskara, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 13:04:07 compute-0 podman[391813]: 2025-10-02 13:04:07.563103048 +0000 UTC m=+0.133096071 container attach c2d01a79c0308a161736ce4495102984f62334f54f7db3618f05f06ccd0e6147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 13:04:07 compute-0 ceph-mon[73607]: pgmap v3163: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 13:04:07 compute-0 nova_compute[257802]: 2025-10-02 13:04:07.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:08 compute-0 eager_bhaskara[391831]: {
Oct 02 13:04:08 compute-0 eager_bhaskara[391831]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 13:04:08 compute-0 eager_bhaskara[391831]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:04:08 compute-0 eager_bhaskara[391831]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:04:08 compute-0 eager_bhaskara[391831]:         "osd_id": 1,
Oct 02 13:04:08 compute-0 eager_bhaskara[391831]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:04:08 compute-0 eager_bhaskara[391831]:         "type": "bluestore"
Oct 02 13:04:08 compute-0 eager_bhaskara[391831]:     }
Oct 02 13:04:08 compute-0 eager_bhaskara[391831]: }
Oct 02 13:04:08 compute-0 systemd[1]: libpod-c2d01a79c0308a161736ce4495102984f62334f54f7db3618f05f06ccd0e6147.scope: Deactivated successfully.
Oct 02 13:04:08 compute-0 podman[391813]: 2025-10-02 13:04:08.389759387 +0000 UTC m=+0.959752410 container died c2d01a79c0308a161736ce4495102984f62334f54f7db3618f05f06ccd0e6147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:04:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-96e155ec52cad779d0a38edda12e56062c1794379412e85b7eb09da26da9f213-merged.mount: Deactivated successfully.
Oct 02 13:04:08 compute-0 podman[391813]: 2025-10-02 13:04:08.450361932 +0000 UTC m=+1.020354955 container remove c2d01a79c0308a161736ce4495102984f62334f54f7db3618f05f06ccd0e6147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bhaskara, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:04:08 compute-0 systemd[1]: libpod-conmon-c2d01a79c0308a161736ce4495102984f62334f54f7db3618f05f06ccd0e6147.scope: Deactivated successfully.
Oct 02 13:04:08 compute-0 sudo[391708]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:04:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:04:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:04:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:04:08 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev f48ca4ba-4f8d-4132-a518-201076956de5 does not exist
Oct 02 13:04:08 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 411b39e1-00ab-4e3c-8579-f3ed045b4d66 does not exist
Oct 02 13:04:08 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev a937914b-c63a-45dd-ac29-bba3a366c02d does not exist
Oct 02 13:04:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3164: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 13:04:08 compute-0 sudo[391864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:08 compute-0 sudo[391864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:08 compute-0 sudo[391864]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:08 compute-0 sudo[391889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:04:08 compute-0 sudo[391889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:08 compute-0 sudo[391889]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:04:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:08.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:04:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:09.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:09 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:04:09 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:04:09 compute-0 ceph-mon[73607]: pgmap v3164: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 13:04:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:10 compute-0 nova_compute[257802]: 2025-10-02 13:04:10.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3165: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 13:04:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:10.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:11.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:11 compute-0 ceph-mon[73607]: pgmap v3165: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 13:04:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3166: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 938 B/s wr, 15 op/s
Oct 02 13:04:12 compute-0 nova_compute[257802]: 2025-10-02 13:04:12.730 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410237.7291207, 08b29362-e7c1-450f-bf22-95d23c21ff23 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:04:12 compute-0 nova_compute[257802]: 2025-10-02 13:04:12.730 2 INFO nova.compute.manager [-] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] VM Stopped (Lifecycle Event)
Oct 02 13:04:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:04:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:04:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:04:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:04:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:04:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:04:12 compute-0 nova_compute[257802]: 2025-10-02 13:04:12.775 2 DEBUG nova.compute.manager [None req-0d0e594d-2cb0-4173-9678-2cb4c577e214 - - - - - -] [instance: 08b29362-e7c1-450f-bf22-95d23c21ff23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:04:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:12.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:12 compute-0 nova_compute[257802]: 2025-10-02 13:04:12.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:12 compute-0 podman[391917]: 2025-10-02 13:04:12.957380889 +0000 UTC m=+0.095047390 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:04:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:04:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:13.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:04:13 compute-0 ceph-mon[73607]: pgmap v3166: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 938 B/s wr, 15 op/s
Oct 02 13:04:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3167: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 938 B/s wr, 15 op/s
Oct 02 13:04:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:14.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:15 compute-0 nova_compute[257802]: 2025-10-02 13:04:15.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:15.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:15 compute-0 ceph-mon[73607]: pgmap v3167: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 938 B/s wr, 15 op/s
Oct 02 13:04:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3168: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.3 KiB/s rd, 597 B/s wr, 12 op/s
Oct 02 13:04:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:04:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:16.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:04:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:04:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:17.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:04:17 compute-0 ceph-mon[73607]: pgmap v3168: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.3 KiB/s rd, 597 B/s wr, 12 op/s
Oct 02 13:04:17 compute-0 nova_compute[257802]: 2025-10-02 13:04:17.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:18 compute-0 nova_compute[257802]: 2025-10-02 13:04:18.235 2 DEBUG oslo_concurrency.lockutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "d08382ad-f3df-432b-848a-b0990a79ddf7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:18 compute-0 nova_compute[257802]: 2025-10-02 13:04:18.235 2 DEBUG oslo_concurrency.lockutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "d08382ad-f3df-432b-848a-b0990a79ddf7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3169: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:04:18 compute-0 nova_compute[257802]: 2025-10-02 13:04:18.657 2 DEBUG nova.compute.manager [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:04:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4188905716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:18.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:19 compute-0 nova_compute[257802]: 2025-10-02 13:04:19.194 2 DEBUG oslo_concurrency.lockutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:19 compute-0 nova_compute[257802]: 2025-10-02 13:04:19.194 2 DEBUG oslo_concurrency.lockutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:19 compute-0 nova_compute[257802]: 2025-10-02 13:04:19.205 2 DEBUG nova.virt.hardware [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:04:19 compute-0 nova_compute[257802]: 2025-10-02 13:04:19.206 2 INFO nova.compute.claims [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Claim successful on node compute-0.ctlplane.example.com
Oct 02 13:04:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:04:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:19.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:04:19 compute-0 nova_compute[257802]: 2025-10-02 13:04:19.552 2 DEBUG oslo_concurrency.processutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:19 compute-0 ceph-mon[73607]: pgmap v3169: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:04:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1470117821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:04:19 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/291602439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:19 compute-0 nova_compute[257802]: 2025-10-02 13:04:19.974 2 DEBUG oslo_concurrency.processutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:19 compute-0 nova_compute[257802]: 2025-10-02 13:04:19.980 2 DEBUG nova.compute.provider_tree [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:04:20 compute-0 nova_compute[257802]: 2025-10-02 13:04:20.024 2 DEBUG nova.scheduler.client.report [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:04:20 compute-0 nova_compute[257802]: 2025-10-02 13:04:20.058 2 DEBUG oslo_concurrency.lockutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:20 compute-0 nova_compute[257802]: 2025-10-02 13:04:20.059 2 DEBUG nova.compute.manager [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:04:20 compute-0 nova_compute[257802]: 2025-10-02 13:04:20.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:20 compute-0 nova_compute[257802]: 2025-10-02 13:04:20.191 2 DEBUG nova.compute.manager [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:04:20 compute-0 nova_compute[257802]: 2025-10-02 13:04:20.191 2 DEBUG nova.network.neutron [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:04:20 compute-0 nova_compute[257802]: 2025-10-02 13:04:20.251 2 INFO nova.virt.libvirt.driver [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:04:20 compute-0 nova_compute[257802]: 2025-10-02 13:04:20.296 2 DEBUG nova.compute.manager [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:04:20 compute-0 nova_compute[257802]: 2025-10-02 13:04:20.448 2 DEBUG nova.compute.manager [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:04:20 compute-0 nova_compute[257802]: 2025-10-02 13:04:20.449 2 DEBUG nova.virt.libvirt.driver [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:04:20 compute-0 nova_compute[257802]: 2025-10-02 13:04:20.449 2 INFO nova.virt.libvirt.driver [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Creating image(s)
Oct 02 13:04:20 compute-0 nova_compute[257802]: 2025-10-02 13:04:20.474 2 DEBUG nova.storage.rbd_utils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] rbd image d08382ad-f3df-432b-848a-b0990a79ddf7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:04:20 compute-0 nova_compute[257802]: 2025-10-02 13:04:20.497 2 DEBUG nova.storage.rbd_utils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] rbd image d08382ad-f3df-432b-848a-b0990a79ddf7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:04:20 compute-0 nova_compute[257802]: 2025-10-02 13:04:20.521 2 DEBUG nova.storage.rbd_utils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] rbd image d08382ad-f3df-432b-848a-b0990a79ddf7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:04:20 compute-0 nova_compute[257802]: 2025-10-02 13:04:20.524 2 DEBUG oslo_concurrency.processutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3170: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:04:20 compute-0 nova_compute[257802]: 2025-10-02 13:04:20.591 2 DEBUG oslo_concurrency.processutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:20 compute-0 nova_compute[257802]: 2025-10-02 13:04:20.593 2 DEBUG oslo_concurrency.lockutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:20 compute-0 nova_compute[257802]: 2025-10-02 13:04:20.593 2 DEBUG oslo_concurrency.lockutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:20 compute-0 nova_compute[257802]: 2025-10-02 13:04:20.594 2 DEBUG oslo_concurrency.lockutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:20 compute-0 nova_compute[257802]: 2025-10-02 13:04:20.618 2 DEBUG nova.storage.rbd_utils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] rbd image d08382ad-f3df-432b-848a-b0990a79ddf7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:04:20 compute-0 nova_compute[257802]: 2025-10-02 13:04:20.621 2 DEBUG oslo_concurrency.processutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 d08382ad-f3df-432b-848a-b0990a79ddf7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:20 compute-0 nova_compute[257802]: 2025-10-02 13:04:20.715 2 DEBUG nova.policy [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '93facc00c95f4cbfa6cecaf3641182bc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5eceae619a6f4fdeaa8ba6fafda4912a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:04:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/291602439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/597105384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:20.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:20 compute-0 nova_compute[257802]: 2025-10-02 13:04:20.903 2 DEBUG oslo_concurrency.processutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 d08382ad-f3df-432b-848a-b0990a79ddf7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:20 compute-0 nova_compute[257802]: 2025-10-02 13:04:20.965 2 DEBUG nova.storage.rbd_utils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] resizing rbd image d08382ad-f3df-432b-848a-b0990a79ddf7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:04:21 compute-0 nova_compute[257802]: 2025-10-02 13:04:21.084 2 DEBUG nova.objects.instance [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lazy-loading 'migration_context' on Instance uuid d08382ad-f3df-432b-848a-b0990a79ddf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:04:21 compute-0 nova_compute[257802]: 2025-10-02 13:04:21.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:21 compute-0 nova_compute[257802]: 2025-10-02 13:04:21.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:04:21 compute-0 nova_compute[257802]: 2025-10-02 13:04:21.205 2 DEBUG nova.virt.libvirt.driver [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:04:21 compute-0 nova_compute[257802]: 2025-10-02 13:04:21.205 2 DEBUG nova.virt.libvirt.driver [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Ensure instance console log exists: /var/lib/nova/instances/d08382ad-f3df-432b-848a-b0990a79ddf7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:04:21 compute-0 nova_compute[257802]: 2025-10-02 13:04:21.206 2 DEBUG oslo_concurrency.lockutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:21 compute-0 nova_compute[257802]: 2025-10-02 13:04:21.206 2 DEBUG oslo_concurrency.lockutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:21 compute-0 nova_compute[257802]: 2025-10-02 13:04:21.206 2 DEBUG oslo_concurrency.lockutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:21.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:21 compute-0 ceph-mon[73607]: pgmap v3170: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:04:22 compute-0 sudo[392136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:22 compute-0 sudo[392136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:22 compute-0 sudo[392136]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:22 compute-0 sudo[392161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:22 compute-0 sudo[392161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:22 compute-0 sudo[392161]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3171: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:04:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:04:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:22.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:04:22 compute-0 nova_compute[257802]: 2025-10-02 13:04:22.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:23.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:23 compute-0 nova_compute[257802]: 2025-10-02 13:04:23.626 2 DEBUG nova.network.neutron [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Successfully created port: 312d1fff-af06-4b70-b163-5401eab527c3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:04:23 compute-0 ceph-mon[73607]: pgmap v3171: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:04:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3172: 305 pgs: 305 active+clean; 178 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.6 MiB/s wr, 37 op/s
Oct 02 13:04:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:04:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:24.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:04:24 compute-0 nova_compute[257802]: 2025-10-02 13:04:24.908 2 DEBUG nova.network.neutron [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Successfully updated port: 312d1fff-af06-4b70-b163-5401eab527c3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:04:24 compute-0 nova_compute[257802]: 2025-10-02 13:04:24.946 2 DEBUG oslo_concurrency.lockutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "refresh_cache-d08382ad-f3df-432b-848a-b0990a79ddf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:04:24 compute-0 nova_compute[257802]: 2025-10-02 13:04:24.947 2 DEBUG oslo_concurrency.lockutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquired lock "refresh_cache-d08382ad-f3df-432b-848a-b0990a79ddf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:04:24 compute-0 nova_compute[257802]: 2025-10-02 13:04:24.947 2 DEBUG nova.network.neutron [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:04:25 compute-0 nova_compute[257802]: 2025-10-02 13:04:25.117 2 DEBUG nova.compute.manager [req-c091d7d8-ce6e-4634-979b-64a37a6f1675 req-4e1e90dd-3ea5-4fca-a805-8895e7113bb5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Received event network-changed-312d1fff-af06-4b70-b163-5401eab527c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:04:25 compute-0 nova_compute[257802]: 2025-10-02 13:04:25.117 2 DEBUG nova.compute.manager [req-c091d7d8-ce6e-4634-979b-64a37a6f1675 req-4e1e90dd-3ea5-4fca-a805-8895e7113bb5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Refreshing instance network info cache due to event network-changed-312d1fff-af06-4b70-b163-5401eab527c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:04:25 compute-0 nova_compute[257802]: 2025-10-02 13:04:25.118 2 DEBUG oslo_concurrency.lockutils [req-c091d7d8-ce6e-4634-979b-64a37a6f1675 req-4e1e90dd-3ea5-4fca-a805-8895e7113bb5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-d08382ad-f3df-432b-848a-b0990a79ddf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:04:25 compute-0 nova_compute[257802]: 2025-10-02 13:04:25.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:25 compute-0 nova_compute[257802]: 2025-10-02 13:04:25.221 2 DEBUG nova.network.neutron [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:04:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:04:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:25.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:04:25 compute-0 ceph-mon[73607]: pgmap v3172: 305 pgs: 305 active+clean; 178 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.6 MiB/s wr, 37 op/s
Oct 02 13:04:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3173: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 02 13:04:26 compute-0 nova_compute[257802]: 2025-10-02 13:04:26.722 2 DEBUG nova.network.neutron [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Updating instance_info_cache with network_info: [{"id": "312d1fff-af06-4b70-b163-5401eab527c3", "address": "fa:16:3e:f7:6f:84", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d1fff-af", "ovs_interfaceid": "312d1fff-af06-4b70-b163-5401eab527c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:04:26 compute-0 nova_compute[257802]: 2025-10-02 13:04:26.749 2 DEBUG oslo_concurrency.lockutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Releasing lock "refresh_cache-d08382ad-f3df-432b-848a-b0990a79ddf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:04:26 compute-0 nova_compute[257802]: 2025-10-02 13:04:26.749 2 DEBUG nova.compute.manager [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Instance network_info: |[{"id": "312d1fff-af06-4b70-b163-5401eab527c3", "address": "fa:16:3e:f7:6f:84", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d1fff-af", "ovs_interfaceid": "312d1fff-af06-4b70-b163-5401eab527c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:04:26 compute-0 nova_compute[257802]: 2025-10-02 13:04:26.749 2 DEBUG oslo_concurrency.lockutils [req-c091d7d8-ce6e-4634-979b-64a37a6f1675 req-4e1e90dd-3ea5-4fca-a805-8895e7113bb5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-d08382ad-f3df-432b-848a-b0990a79ddf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:04:26 compute-0 nova_compute[257802]: 2025-10-02 13:04:26.750 2 DEBUG nova.network.neutron [req-c091d7d8-ce6e-4634-979b-64a37a6f1675 req-4e1e90dd-3ea5-4fca-a805-8895e7113bb5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Refreshing network info cache for port 312d1fff-af06-4b70-b163-5401eab527c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:04:26 compute-0 nova_compute[257802]: 2025-10-02 13:04:26.752 2 DEBUG nova.virt.libvirt.driver [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Start _get_guest_xml network_info=[{"id": "312d1fff-af06-4b70-b163-5401eab527c3", "address": "fa:16:3e:f7:6f:84", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d1fff-af", "ovs_interfaceid": "312d1fff-af06-4b70-b163-5401eab527c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:04:26 compute-0 nova_compute[257802]: 2025-10-02 13:04:26.756 2 WARNING nova.virt.libvirt.driver [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:04:26 compute-0 nova_compute[257802]: 2025-10-02 13:04:26.760 2 DEBUG nova.virt.libvirt.host [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:04:26 compute-0 nova_compute[257802]: 2025-10-02 13:04:26.760 2 DEBUG nova.virt.libvirt.host [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:04:26 compute-0 nova_compute[257802]: 2025-10-02 13:04:26.764 2 DEBUG nova.virt.libvirt.host [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:04:26 compute-0 nova_compute[257802]: 2025-10-02 13:04:26.764 2 DEBUG nova.virt.libvirt.host [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:04:26 compute-0 nova_compute[257802]: 2025-10-02 13:04:26.765 2 DEBUG nova.virt.libvirt.driver [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:04:26 compute-0 nova_compute[257802]: 2025-10-02 13:04:26.766 2 DEBUG nova.virt.hardware [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:04:26 compute-0 nova_compute[257802]: 2025-10-02 13:04:26.766 2 DEBUG nova.virt.hardware [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:04:26 compute-0 nova_compute[257802]: 2025-10-02 13:04:26.766 2 DEBUG nova.virt.hardware [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:04:26 compute-0 nova_compute[257802]: 2025-10-02 13:04:26.766 2 DEBUG nova.virt.hardware [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:04:26 compute-0 nova_compute[257802]: 2025-10-02 13:04:26.766 2 DEBUG nova.virt.hardware [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:04:26 compute-0 nova_compute[257802]: 2025-10-02 13:04:26.767 2 DEBUG nova.virt.hardware [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:04:26 compute-0 nova_compute[257802]: 2025-10-02 13:04:26.767 2 DEBUG nova.virt.hardware [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:04:26 compute-0 nova_compute[257802]: 2025-10-02 13:04:26.767 2 DEBUG nova.virt.hardware [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:04:26 compute-0 nova_compute[257802]: 2025-10-02 13:04:26.767 2 DEBUG nova.virt.hardware [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:04:26 compute-0 nova_compute[257802]: 2025-10-02 13:04:26.767 2 DEBUG nova.virt.hardware [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:04:26 compute-0 nova_compute[257802]: 2025-10-02 13:04:26.768 2 DEBUG nova.virt.hardware [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:04:26 compute-0 nova_compute[257802]: 2025-10-02 13:04:26.770 2 DEBUG oslo_concurrency.processutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:26.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:26.986 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:26.987 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:26.987 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.119 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 13:04:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:04:27 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3411217143' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.183 2 DEBUG oslo_concurrency.processutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.212 2 DEBUG nova.storage.rbd_utils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] rbd image d08382ad-f3df-432b-848a-b0990a79ddf7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.217 2 DEBUG oslo_concurrency.processutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:27.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:04:27 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1553155895' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.653 2 DEBUG oslo_concurrency.processutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.654 2 DEBUG nova.virt.libvirt.vif [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:04:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-23549016',display_name='tempest-AttachVolumeNegativeTest-server-23549016',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-23549016',id=202,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGhl4HsziwyBXoN0Jr2ywLkLThlZb6gbts/E5vP9PJb/IM7wl3M+MbRyHdfTM/dVeoNbicKiUF5APP45mJZQhWxY+qRtBZXmMutFYKtC4i7zKLlfUh/gvcUPRp44XkkKcg==',key_name='tempest-keypair-1620100568',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5eceae619a6f4fdeaa8ba6fafda4912a',ramdisk_id='',reservation_id='r-mlms067s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1407980822',owner_user_name='tempest-AttachVolumeNegativeTest-1407980822-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:04:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='93facc00c95f4cbfa6cecaf3641182bc',uuid=d08382ad-f3df-432b-848a-b0990a79ddf7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "312d1fff-af06-4b70-b163-5401eab527c3", "address": "fa:16:3e:f7:6f:84", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d1fff-af", "ovs_interfaceid": "312d1fff-af06-4b70-b163-5401eab527c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.655 2 DEBUG nova.network.os_vif_util [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Converting VIF {"id": "312d1fff-af06-4b70-b163-5401eab527c3", "address": "fa:16:3e:f7:6f:84", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d1fff-af", "ovs_interfaceid": "312d1fff-af06-4b70-b163-5401eab527c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.656 2 DEBUG nova.network.os_vif_util [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:6f:84,bridge_name='br-int',has_traffic_filtering=True,id=312d1fff-af06-4b70-b163-5401eab527c3,network=Network(2471b6f7-ee51-4239-8b52-7016ab4d9fd1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap312d1fff-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.657 2 DEBUG nova.objects.instance [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lazy-loading 'pci_devices' on Instance uuid d08382ad-f3df-432b-848a-b0990a79ddf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.682 2 DEBUG nova.virt.libvirt.driver [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:04:27 compute-0 nova_compute[257802]:   <uuid>d08382ad-f3df-432b-848a-b0990a79ddf7</uuid>
Oct 02 13:04:27 compute-0 nova_compute[257802]:   <name>instance-000000ca</name>
Oct 02 13:04:27 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 13:04:27 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 13:04:27 compute-0 nova_compute[257802]:   <metadata>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:04:27 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:       <nova:name>tempest-AttachVolumeNegativeTest-server-23549016</nova:name>
Oct 02 13:04:27 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 13:04:26</nova:creationTime>
Oct 02 13:04:27 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 13:04:27 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 13:04:27 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 13:04:27 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 13:04:27 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:04:27 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:04:27 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 13:04:27 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 13:04:27 compute-0 nova_compute[257802]:         <nova:user uuid="93facc00c95f4cbfa6cecaf3641182bc">tempest-AttachVolumeNegativeTest-1407980822-project-member</nova:user>
Oct 02 13:04:27 compute-0 nova_compute[257802]:         <nova:project uuid="5eceae619a6f4fdeaa8ba6fafda4912a">tempest-AttachVolumeNegativeTest-1407980822</nova:project>
Oct 02 13:04:27 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 13:04:27 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 13:04:27 compute-0 nova_compute[257802]:         <nova:port uuid="312d1fff-af06-4b70-b163-5401eab527c3">
Oct 02 13:04:27 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 13:04:27 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 13:04:27 compute-0 nova_compute[257802]:   </metadata>
Oct 02 13:04:27 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <system>
Oct 02 13:04:27 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:04:27 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:04:27 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:04:27 compute-0 nova_compute[257802]:       <entry name="serial">d08382ad-f3df-432b-848a-b0990a79ddf7</entry>
Oct 02 13:04:27 compute-0 nova_compute[257802]:       <entry name="uuid">d08382ad-f3df-432b-848a-b0990a79ddf7</entry>
Oct 02 13:04:27 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     </system>
Oct 02 13:04:27 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 13:04:27 compute-0 nova_compute[257802]:   <os>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:   </os>
Oct 02 13:04:27 compute-0 nova_compute[257802]:   <features>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <apic/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:   </features>
Oct 02 13:04:27 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:   </clock>
Oct 02 13:04:27 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:   </cpu>
Oct 02 13:04:27 compute-0 nova_compute[257802]:   <devices>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 13:04:27 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/d08382ad-f3df-432b-848a-b0990a79ddf7_disk">
Oct 02 13:04:27 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:       </source>
Oct 02 13:04:27 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 13:04:27 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:       </auth>
Oct 02 13:04:27 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     </disk>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 13:04:27 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/d08382ad-f3df-432b-848a-b0990a79ddf7_disk.config">
Oct 02 13:04:27 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:       </source>
Oct 02 13:04:27 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 13:04:27 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:       </auth>
Oct 02 13:04:27 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     </disk>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 13:04:27 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:f7:6f:84"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:       <target dev="tap312d1fff-af"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     </interface>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 13:04:27 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/d08382ad-f3df-432b-848a-b0990a79ddf7/console.log" append="off"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     </serial>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <video>
Oct 02 13:04:27 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     </video>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 13:04:27 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     </rng>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 13:04:27 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 13:04:27 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 13:04:27 compute-0 nova_compute[257802]:   </devices>
Oct 02 13:04:27 compute-0 nova_compute[257802]: </domain>
Oct 02 13:04:27 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.683 2 DEBUG nova.compute.manager [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Preparing to wait for external event network-vif-plugged-312d1fff-af06-4b70-b163-5401eab527c3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.684 2 DEBUG oslo_concurrency.lockutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "d08382ad-f3df-432b-848a-b0990a79ddf7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.684 2 DEBUG oslo_concurrency.lockutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "d08382ad-f3df-432b-848a-b0990a79ddf7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.684 2 DEBUG oslo_concurrency.lockutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "d08382ad-f3df-432b-848a-b0990a79ddf7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.685 2 DEBUG nova.virt.libvirt.vif [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:04:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-23549016',display_name='tempest-AttachVolumeNegativeTest-server-23549016',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-23549016',id=202,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGhl4HsziwyBXoN0Jr2ywLkLThlZb6gbts/E5vP9PJb/IM7wl3M+MbRyHdfTM/dVeoNbicKiUF5APP45mJZQhWxY+qRtBZXmMutFYKtC4i7zKLlfUh/gvcUPRp44XkkKcg==',key_name='tempest-keypair-1620100568',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5eceae619a6f4fdeaa8ba6fafda4912a',ramdisk_id='',reservation_id='r-mlms067s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1407980822',owner_user_name='tempest-AttachVolumeNegativeTest-1407980822-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:04:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='93facc00c95f4cbfa6cecaf3641182bc',uuid=d08382ad-f3df-432b-848a-b0990a79ddf7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "312d1fff-af06-4b70-b163-5401eab527c3", "address": "fa:16:3e:f7:6f:84", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d1fff-af", "ovs_interfaceid": "312d1fff-af06-4b70-b163-5401eab527c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.686 2 DEBUG nova.network.os_vif_util [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Converting VIF {"id": "312d1fff-af06-4b70-b163-5401eab527c3", "address": "fa:16:3e:f7:6f:84", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d1fff-af", "ovs_interfaceid": "312d1fff-af06-4b70-b163-5401eab527c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.686 2 DEBUG nova.network.os_vif_util [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:6f:84,bridge_name='br-int',has_traffic_filtering=True,id=312d1fff-af06-4b70-b163-5401eab527c3,network=Network(2471b6f7-ee51-4239-8b52-7016ab4d9fd1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap312d1fff-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.687 2 DEBUG os_vif [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:6f:84,bridge_name='br-int',has_traffic_filtering=True,id=312d1fff-af06-4b70-b163-5401eab527c3,network=Network(2471b6f7-ee51-4239-8b52-7016ab4d9fd1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap312d1fff-af') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.688 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.689 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.692 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap312d1fff-af, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.692 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap312d1fff-af, col_values=(('external_ids', {'iface-id': '312d1fff-af06-4b70-b163-5401eab527c3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f7:6f:84', 'vm-uuid': 'd08382ad-f3df-432b-848a-b0990a79ddf7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:27 compute-0 NetworkManager[44987]: <info>  [1759410267.6948] manager: (tap312d1fff-af): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/412)
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.701 2 INFO os_vif [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:6f:84,bridge_name='br-int',has_traffic_filtering=True,id=312d1fff-af06-4b70-b163-5401eab527c3,network=Network(2471b6f7-ee51-4239-8b52-7016ab4d9fd1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap312d1fff-af')
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.764 2 DEBUG nova.virt.libvirt.driver [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.765 2 DEBUG nova.virt.libvirt.driver [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.766 2 DEBUG nova.virt.libvirt.driver [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] No VIF found with MAC fa:16:3e:f7:6f:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.766 2 INFO nova.virt.libvirt.driver [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Using config drive
Oct 02 13:04:27 compute-0 nova_compute[257802]: 2025-10-02 13:04:27.788 2 DEBUG nova.storage.rbd_utils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] rbd image d08382ad-f3df-432b-848a-b0990a79ddf7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:04:27 compute-0 ceph-mon[73607]: pgmap v3173: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 02 13:04:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1867330307' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3411217143' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2193588837' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1553155895' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:28 compute-0 nova_compute[257802]: 2025-10-02 13:04:28.316 2 INFO nova.virt.libvirt.driver [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Creating config drive at /var/lib/nova/instances/d08382ad-f3df-432b-848a-b0990a79ddf7/disk.config
Oct 02 13:04:28 compute-0 nova_compute[257802]: 2025-10-02 13:04:28.321 2 DEBUG oslo_concurrency.processutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d08382ad-f3df-432b-848a-b0990a79ddf7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc3klzlqd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:28 compute-0 nova_compute[257802]: 2025-10-02 13:04:28.457 2 DEBUG oslo_concurrency.processutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d08382ad-f3df-432b-848a-b0990a79ddf7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc3klzlqd" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:28 compute-0 nova_compute[257802]: 2025-10-02 13:04:28.485 2 DEBUG nova.storage.rbd_utils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] rbd image d08382ad-f3df-432b-848a-b0990a79ddf7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:04:28 compute-0 nova_compute[257802]: 2025-10-02 13:04:28.489 2 DEBUG oslo_concurrency.processutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d08382ad-f3df-432b-848a-b0990a79ddf7/disk.config d08382ad-f3df-432b-848a-b0990a79ddf7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3174: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 02 13:04:28 compute-0 nova_compute[257802]: 2025-10-02 13:04:28.659 2 DEBUG oslo_concurrency.processutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d08382ad-f3df-432b-848a-b0990a79ddf7/disk.config d08382ad-f3df-432b-848a-b0990a79ddf7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:28 compute-0 nova_compute[257802]: 2025-10-02 13:04:28.661 2 INFO nova.virt.libvirt.driver [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Deleting local config drive /var/lib/nova/instances/d08382ad-f3df-432b-848a-b0990a79ddf7/disk.config because it was imported into RBD.
Oct 02 13:04:28 compute-0 nova_compute[257802]: 2025-10-02 13:04:28.672 2 DEBUG nova.network.neutron [req-c091d7d8-ce6e-4634-979b-64a37a6f1675 req-4e1e90dd-3ea5-4fca-a805-8895e7113bb5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Updated VIF entry in instance network info cache for port 312d1fff-af06-4b70-b163-5401eab527c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:04:28 compute-0 nova_compute[257802]: 2025-10-02 13:04:28.673 2 DEBUG nova.network.neutron [req-c091d7d8-ce6e-4634-979b-64a37a6f1675 req-4e1e90dd-3ea5-4fca-a805-8895e7113bb5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Updating instance_info_cache with network_info: [{"id": "312d1fff-af06-4b70-b163-5401eab527c3", "address": "fa:16:3e:f7:6f:84", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d1fff-af", "ovs_interfaceid": "312d1fff-af06-4b70-b163-5401eab527c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:04:28 compute-0 nova_compute[257802]: 2025-10-02 13:04:28.706 2 DEBUG oslo_concurrency.lockutils [req-c091d7d8-ce6e-4634-979b-64a37a6f1675 req-4e1e90dd-3ea5-4fca-a805-8895e7113bb5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-d08382ad-f3df-432b-848a-b0990a79ddf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:04:28 compute-0 kernel: tap312d1fff-af: entered promiscuous mode
Oct 02 13:04:28 compute-0 NetworkManager[44987]: <info>  [1759410268.7152] manager: (tap312d1fff-af): new Tun device (/org/freedesktop/NetworkManager/Devices/413)
Oct 02 13:04:28 compute-0 nova_compute[257802]: 2025-10-02 13:04:28.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:28 compute-0 ovn_controller[148183]: 2025-10-02T13:04:28Z|00924|binding|INFO|Claiming lport 312d1fff-af06-4b70-b163-5401eab527c3 for this chassis.
Oct 02 13:04:28 compute-0 ovn_controller[148183]: 2025-10-02T13:04:28Z|00925|binding|INFO|312d1fff-af06-4b70-b163-5401eab527c3: Claiming fa:16:3e:f7:6f:84 10.100.0.6
Oct 02 13:04:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:28.720 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:6f:84 10.100.0.6'], port_security=['fa:16:3e:f7:6f:84 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'd08382ad-f3df-432b-848a-b0990a79ddf7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2471b6f7-ee51-4239-8b52-7016ab4d9fd1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5eceae619a6f4fdeaa8ba6fafda4912a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '26d19ab5-8104-4e4a-b8b9-1f60756a96f2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa923984-fb22-4ee5-9bd7-5034c98e7f0a, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=312d1fff-af06-4b70-b163-5401eab527c3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:04:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:28.721 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 312d1fff-af06-4b70-b163-5401eab527c3 in datapath 2471b6f7-ee51-4239-8b52-7016ab4d9fd1 bound to our chassis
Oct 02 13:04:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:28.722 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2471b6f7-ee51-4239-8b52-7016ab4d9fd1
Oct 02 13:04:28 compute-0 nova_compute[257802]: 2025-10-02 13:04:28.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:28 compute-0 ovn_controller[148183]: 2025-10-02T13:04:28Z|00926|binding|INFO|Setting lport 312d1fff-af06-4b70-b163-5401eab527c3 ovn-installed in OVS
Oct 02 13:04:28 compute-0 ovn_controller[148183]: 2025-10-02T13:04:28Z|00927|binding|INFO|Setting lport 312d1fff-af06-4b70-b163-5401eab527c3 up in Southbound
Oct 02 13:04:28 compute-0 nova_compute[257802]: 2025-10-02 13:04:28.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:28 compute-0 nova_compute[257802]: 2025-10-02 13:04:28.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:28.735 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[af070bd7-3c2d-4990-b71a-c735eee18c6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:28.735 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2471b6f7-e1 in ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:04:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:28.738 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2471b6f7-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:04:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:28.738 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8a5feb28-b818-4719-8a88-9cdbd3cd37bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:28.738 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[05bbfc7c-7e2f-425c-8f4d-d1da28ff7114]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:28 compute-0 systemd-udevd[392328]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:04:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:28.750 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[c9541476-d8f3-479e-8cfc-fe54aa4ba7fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:28 compute-0 systemd-machined[211836]: New machine qemu-99-instance-000000ca.
Oct 02 13:04:28 compute-0 NetworkManager[44987]: <info>  [1759410268.7606] device (tap312d1fff-af): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:04:28 compute-0 NetworkManager[44987]: <info>  [1759410268.7622] device (tap312d1fff-af): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:04:28 compute-0 systemd[1]: Started Virtual Machine qemu-99-instance-000000ca.
Oct 02 13:04:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:28.773 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cac1ad6a-0dd8-43bf-929c-347cbd99dc12]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:28.801 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[3cfcf0a1-48ce-4fa3-b201-868640ce8fd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:28.806 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1453cd45-da66-4832-9a62-eabac0a8cc64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:28 compute-0 NetworkManager[44987]: <info>  [1759410268.8077] manager: (tap2471b6f7-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/414)
Oct 02 13:04:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:28.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:28.849 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[68f66904-fbb6-4744-bef2-32029d9b5851]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:28.853 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[b508d828-67df-4d15-a47a-9904bc451192]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:28 compute-0 NetworkManager[44987]: <info>  [1759410268.8829] device (tap2471b6f7-e0): carrier: link connected
Oct 02 13:04:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:28.890 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[91d036ae-1493-47d1-a45d-6f4cf3b93eb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:28.906 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e2becfd8-4ac6-4731-8fae-8af1109df2f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2471b6f7-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:da:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 833650, 'reachable_time': 24138, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392359, 'error': None, 'target': 'ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:28.921 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ae12251e-ff9e-4cdb-aa84-7c1ad28a33ed]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe35:da65'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 833650, 'tstamp': 833650}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 392360, 'error': None, 'target': 'ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:28 compute-0 ceph-mon[73607]: pgmap v3174: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 02 13:04:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:28.948 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[218fa7a0-6493-4bfe-9f30-60800d89b057]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2471b6f7-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:da:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 833650, 'reachable_time': 24138, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 392361, 'error': None, 'target': 'ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:28.974 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[428ce634-3237-4adb-b842-7168ed48d046]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:29.021 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2142976b-9394-4c17-9c6f-1f39318b98f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:29.022 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2471b6f7-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:29.022 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:29.023 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2471b6f7-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:29 compute-0 kernel: tap2471b6f7-e0: entered promiscuous mode
Oct 02 13:04:29 compute-0 NetworkManager[44987]: <info>  [1759410269.0660] manager: (tap2471b6f7-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/415)
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:29.068 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2471b6f7-e0, col_values=(('external_ids', {'iface-id': 'c5388d11-12a4-491d-825a-d4dc574d0a0e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:29 compute-0 ovn_controller[148183]: 2025-10-02T13:04:29Z|00928|binding|INFO|Releasing lport c5388d11-12a4-491d-825a-d4dc574d0a0e from this chassis (sb_readonly=0)
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:29.084 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2471b6f7-ee51-4239-8b52-7016ab4d9fd1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2471b6f7-ee51-4239-8b52-7016ab4d9fd1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:29.085 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[fdda8b0a-be3c-4ede-9e81-14fb62ecff35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:29.085 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]: global
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-2471b6f7-ee51-4239-8b52-7016ab4d9fd1
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/2471b6f7-ee51-4239-8b52-7016ab4d9fd1.pid.haproxy
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 2471b6f7-ee51-4239-8b52-7016ab4d9fd1
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:29.086 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1', 'env', 'PROCESS_TAG=haproxy-2471b6f7-ee51-4239-8b52-7016ab4d9fd1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2471b6f7-ee51-4239-8b52-7016ab4d9fd1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.117 2 DEBUG nova.compute.manager [req-b5e177f3-70b4-4c0a-9695-14771a794426 req-bfd91439-e0fb-4402-b23b-7564776c74dd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Received event network-vif-plugged-312d1fff-af06-4b70-b163-5401eab527c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.117 2 DEBUG oslo_concurrency.lockutils [req-b5e177f3-70b4-4c0a-9695-14771a794426 req-bfd91439-e0fb-4402-b23b-7564776c74dd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d08382ad-f3df-432b-848a-b0990a79ddf7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.117 2 DEBUG oslo_concurrency.lockutils [req-b5e177f3-70b4-4c0a-9695-14771a794426 req-bfd91439-e0fb-4402-b23b-7564776c74dd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d08382ad-f3df-432b-848a-b0990a79ddf7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.118 2 DEBUG oslo_concurrency.lockutils [req-b5e177f3-70b4-4c0a-9695-14771a794426 req-bfd91439-e0fb-4402-b23b-7564776c74dd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d08382ad-f3df-432b-848a-b0990a79ddf7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.118 2 DEBUG nova.compute.manager [req-b5e177f3-70b4-4c0a-9695-14771a794426 req-bfd91439-e0fb-4402-b23b-7564776c74dd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Processing event network-vif-plugged-312d1fff-af06-4b70-b163-5401eab527c3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.119 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.119 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:29.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:29 compute-0 podman[392435]: 2025-10-02 13:04:29.433852579 +0000 UTC m=+0.044152469 container create a652fbf1791debd7c11f5ca79aa171e4a6f08a516e3ca37bb33ccc476078a202 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:29.435 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=76, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=75) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:29 compute-0 systemd[1]: Started libpod-conmon-a652fbf1791debd7c11f5ca79aa171e4a6f08a516e3ca37bb33ccc476078a202.scope.
Oct 02 13:04:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:04:29 compute-0 podman[392435]: 2025-10-02 13:04:29.410623507 +0000 UTC m=+0.020923417 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:04:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95dc2d0828e7b3e95a1a4e2303397da35427925fb7948d1b086e44649021cee/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:04:29 compute-0 podman[392435]: 2025-10-02 13:04:29.520530496 +0000 UTC m=+0.130830406 container init a652fbf1791debd7c11f5ca79aa171e4a6f08a516e3ca37bb33ccc476078a202 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 13:04:29 compute-0 podman[392435]: 2025-10-02 13:04:29.526247294 +0000 UTC m=+0.136547184 container start a652fbf1791debd7c11f5ca79aa171e4a6f08a516e3ca37bb33ccc476078a202 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 02 13:04:29 compute-0 neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1[392449]: [NOTICE]   (392453) : New worker (392455) forked
Oct 02 13:04:29 compute-0 neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1[392449]: [NOTICE]   (392453) : Loading success.
Oct 02 13:04:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:29.585 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:04:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.801 2 DEBUG nova.compute.manager [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.802 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759410269.8012524, d08382ad-f3df-432b-848a-b0990a79ddf7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.802 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] VM Started (Lifecycle Event)
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.805 2 DEBUG nova.virt.libvirt.driver [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.808 2 INFO nova.virt.libvirt.driver [-] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Instance spawned successfully.
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.808 2 DEBUG nova.virt.libvirt.driver [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.857 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.862 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.865 2 DEBUG nova.virt.libvirt.driver [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.865 2 DEBUG nova.virt.libvirt.driver [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.866 2 DEBUG nova.virt.libvirt.driver [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.866 2 DEBUG nova.virt.libvirt.driver [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.867 2 DEBUG nova.virt.libvirt.driver [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.867 2 DEBUG nova.virt.libvirt.driver [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.899 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.900 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759410269.8021524, d08382ad-f3df-432b-848a-b0990a79ddf7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.900 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] VM Paused (Lifecycle Event)
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.933 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.936 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759410269.8044584, d08382ad-f3df-432b-848a-b0990a79ddf7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.939 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] VM Resumed (Lifecycle Event)
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.964 2 INFO nova.compute.manager [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Took 9.52 seconds to spawn the instance on the hypervisor.
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.965 2 DEBUG nova.compute.manager [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.966 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.972 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:04:29 compute-0 nova_compute[257802]: 2025-10-02 13:04:29.997 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:04:30 compute-0 nova_compute[257802]: 2025-10-02 13:04:30.050 2 INFO nova.compute.manager [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Took 10.88 seconds to build instance.
Oct 02 13:04:30 compute-0 nova_compute[257802]: 2025-10-02 13:04:30.069 2 DEBUG oslo_concurrency.lockutils [None req-0f64cfba-fb8b-475b-996c-9de472429e98 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "d08382ad-f3df-432b-848a-b0990a79ddf7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:30 compute-0 nova_compute[257802]: 2025-10-02 13:04:30.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3175: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 3.6 MiB/s wr, 66 op/s
Oct 02 13:04:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:30.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:31 compute-0 nova_compute[257802]: 2025-10-02 13:04:31.250 2 DEBUG nova.compute.manager [req-81830a1a-5621-41ed-9dd3-e097ee8737f3 req-6332675a-e4b7-43c8-b368-b96dcdaf0354 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Received event network-vif-plugged-312d1fff-af06-4b70-b163-5401eab527c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:04:31 compute-0 nova_compute[257802]: 2025-10-02 13:04:31.251 2 DEBUG oslo_concurrency.lockutils [req-81830a1a-5621-41ed-9dd3-e097ee8737f3 req-6332675a-e4b7-43c8-b368-b96dcdaf0354 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d08382ad-f3df-432b-848a-b0990a79ddf7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:31 compute-0 nova_compute[257802]: 2025-10-02 13:04:31.251 2 DEBUG oslo_concurrency.lockutils [req-81830a1a-5621-41ed-9dd3-e097ee8737f3 req-6332675a-e4b7-43c8-b368-b96dcdaf0354 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d08382ad-f3df-432b-848a-b0990a79ddf7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:31 compute-0 nova_compute[257802]: 2025-10-02 13:04:31.251 2 DEBUG oslo_concurrency.lockutils [req-81830a1a-5621-41ed-9dd3-e097ee8737f3 req-6332675a-e4b7-43c8-b368-b96dcdaf0354 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d08382ad-f3df-432b-848a-b0990a79ddf7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:31 compute-0 nova_compute[257802]: 2025-10-02 13:04:31.252 2 DEBUG nova.compute.manager [req-81830a1a-5621-41ed-9dd3-e097ee8737f3 req-6332675a-e4b7-43c8-b368-b96dcdaf0354 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] No waiting events found dispatching network-vif-plugged-312d1fff-af06-4b70-b163-5401eab527c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:04:31 compute-0 nova_compute[257802]: 2025-10-02 13:04:31.252 2 WARNING nova.compute.manager [req-81830a1a-5621-41ed-9dd3-e097ee8737f3 req-6332675a-e4b7-43c8-b368-b96dcdaf0354 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Received unexpected event network-vif-plugged-312d1fff-af06-4b70-b163-5401eab527c3 for instance with vm_state active and task_state None.
Oct 02 13:04:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:31.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:31 compute-0 ceph-mon[73607]: pgmap v3175: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 3.6 MiB/s wr, 66 op/s
Oct 02 13:04:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3176: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 3.6 MiB/s wr, 66 op/s
Oct 02 13:04:32 compute-0 nova_compute[257802]: 2025-10-02 13:04:32.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:04:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:32.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:04:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:33.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:33 compute-0 ceph-mon[73607]: pgmap v3176: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 3.6 MiB/s wr, 66 op/s
Oct 02 13:04:33 compute-0 nova_compute[257802]: 2025-10-02 13:04:33.765 2 DEBUG nova.compute.manager [req-35ef0eb4-171e-43b6-a7b2-cc0e453a1a72 req-f579a82f-cf73-4f1a-8fa0-539601d27c5e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Received event network-changed-312d1fff-af06-4b70-b163-5401eab527c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:04:33 compute-0 nova_compute[257802]: 2025-10-02 13:04:33.766 2 DEBUG nova.compute.manager [req-35ef0eb4-171e-43b6-a7b2-cc0e453a1a72 req-f579a82f-cf73-4f1a-8fa0-539601d27c5e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Refreshing instance network info cache due to event network-changed-312d1fff-af06-4b70-b163-5401eab527c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:04:33 compute-0 nova_compute[257802]: 2025-10-02 13:04:33.766 2 DEBUG oslo_concurrency.lockutils [req-35ef0eb4-171e-43b6-a7b2-cc0e453a1a72 req-f579a82f-cf73-4f1a-8fa0-539601d27c5e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-d08382ad-f3df-432b-848a-b0990a79ddf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:04:33 compute-0 nova_compute[257802]: 2025-10-02 13:04:33.766 2 DEBUG oslo_concurrency.lockutils [req-35ef0eb4-171e-43b6-a7b2-cc0e453a1a72 req-f579a82f-cf73-4f1a-8fa0-539601d27c5e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-d08382ad-f3df-432b-848a-b0990a79ddf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:04:33 compute-0 nova_compute[257802]: 2025-10-02 13:04:33.766 2 DEBUG nova.network.neutron [req-35ef0eb4-171e-43b6-a7b2-cc0e453a1a72 req-f579a82f-cf73-4f1a-8fa0-539601d27c5e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Refreshing network info cache for port 312d1fff-af06-4b70-b163-5401eab527c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:04:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3177: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 161 op/s
Oct 02 13:04:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:34.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:34 compute-0 podman[392467]: 2025-10-02 13:04:34.916373936 +0000 UTC m=+0.050395730 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 13:04:34 compute-0 podman[392468]: 2025-10-02 13:04:34.925631229 +0000 UTC m=+0.059840998 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:04:34 compute-0 podman[392469]: 2025-10-02 13:04:34.929586145 +0000 UTC m=+0.060542235 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3)
Oct 02 13:04:35 compute-0 nova_compute[257802]: 2025-10-02 13:04:35.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:35 compute-0 nova_compute[257802]: 2025-10-02 13:04:35.127 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:35 compute-0 nova_compute[257802]: 2025-10-02 13:04:35.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:35.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:35 compute-0 ceph-mon[73607]: pgmap v3177: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 161 op/s
Oct 02 13:04:35 compute-0 nova_compute[257802]: 2025-10-02 13:04:35.984 2 DEBUG nova.network.neutron [req-35ef0eb4-171e-43b6-a7b2-cc0e453a1a72 req-f579a82f-cf73-4f1a-8fa0-539601d27c5e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Updated VIF entry in instance network info cache for port 312d1fff-af06-4b70-b163-5401eab527c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:04:35 compute-0 nova_compute[257802]: 2025-10-02 13:04:35.985 2 DEBUG nova.network.neutron [req-35ef0eb4-171e-43b6-a7b2-cc0e453a1a72 req-f579a82f-cf73-4f1a-8fa0-539601d27c5e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Updating instance_info_cache with network_info: [{"id": "312d1fff-af06-4b70-b163-5401eab527c3", "address": "fa:16:3e:f7:6f:84", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d1fff-af", "ovs_interfaceid": "312d1fff-af06-4b70-b163-5401eab527c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:04:36 compute-0 nova_compute[257802]: 2025-10-02 13:04:36.012 2 DEBUG oslo_concurrency.lockutils [req-35ef0eb4-171e-43b6-a7b2-cc0e453a1a72 req-f579a82f-cf73-4f1a-8fa0-539601d27c5e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-d08382ad-f3df-432b-848a-b0990a79ddf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:04:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3178: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 951 KiB/s wr, 164 op/s
Oct 02 13:04:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:36.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:37 compute-0 nova_compute[257802]: 2025-10-02 13:04:37.127 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:04:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:37.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:04:37 compute-0 ceph-mon[73607]: pgmap v3178: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 951 KiB/s wr, 164 op/s
Oct 02 13:04:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3449800090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:37 compute-0 nova_compute[257802]: 2025-10-02 13:04:37.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3179: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Oct 02 13:04:38 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/44426836' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:38.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:04:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:39.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:04:39 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:39.588 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '76'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:39 compute-0 ceph-mon[73607]: pgmap v3179: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Oct 02 13:04:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:40 compute-0 nova_compute[257802]: 2025-10-02 13:04:40.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3180: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Oct 02 13:04:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:40.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:41.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:41 compute-0 ceph-mon[73607]: pgmap v3180: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Oct 02 13:04:42 compute-0 nova_compute[257802]: 2025-10-02 13:04:42.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:42 compute-0 nova_compute[257802]: 2025-10-02 13:04:42.097 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:04:42 compute-0 nova_compute[257802]: 2025-10-02 13:04:42.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:04:42 compute-0 sudo[392527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:42 compute-0 sudo[392527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:42 compute-0 sudo[392527]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3181: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 135 op/s
Oct 02 13:04:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:04:42
Oct 02 13:04:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:04:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:04:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', '.rgw.root', 'backups', 'images', 'vms']
Oct 02 13:04:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:04:42 compute-0 sudo[392552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:42 compute-0 sudo[392552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:42 compute-0 sudo[392552]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:42 compute-0 nova_compute[257802]: 2025-10-02 13:04:42.659 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-d08382ad-f3df-432b-848a-b0990a79ddf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:04:42 compute-0 nova_compute[257802]: 2025-10-02 13:04:42.661 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-d08382ad-f3df-432b-848a-b0990a79ddf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:04:42 compute-0 nova_compute[257802]: 2025-10-02 13:04:42.661 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:04:42 compute-0 nova_compute[257802]: 2025-10-02 13:04:42.661 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid d08382ad-f3df-432b-848a-b0990a79ddf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:04:42 compute-0 nova_compute[257802]: 2025-10-02 13:04:42.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:04:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:04:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:04:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:04:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:04:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:04:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.003000070s ======
Oct 02 13:04:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:42.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000070s
Oct 02 13:04:43 compute-0 ovn_controller[148183]: 2025-10-02T13:04:43Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f7:6f:84 10.100.0.6
Oct 02 13:04:43 compute-0 ovn_controller[148183]: 2025-10-02T13:04:43Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f7:6f:84 10.100.0.6
Oct 02 13:04:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:43.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:04:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:04:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:04:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:04:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:04:43 compute-0 ceph-mon[73607]: pgmap v3181: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 135 op/s
Oct 02 13:04:43 compute-0 podman[392578]: 2025-10-02 13:04:43.964637927 +0000 UTC m=+0.092049917 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller)
Oct 02 13:04:44 compute-0 nova_compute[257802]: 2025-10-02 13:04:44.217 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Updating instance_info_cache with network_info: [{"id": "312d1fff-af06-4b70-b163-5401eab527c3", "address": "fa:16:3e:f7:6f:84", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d1fff-af", "ovs_interfaceid": "312d1fff-af06-4b70-b163-5401eab527c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:04:44 compute-0 nova_compute[257802]: 2025-10-02 13:04:44.241 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-d08382ad-f3df-432b-848a-b0990a79ddf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:04:44 compute-0 nova_compute[257802]: 2025-10-02 13:04:44.241 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:04:44 compute-0 nova_compute[257802]: 2025-10-02 13:04:44.241 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:04:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:04:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:04:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:04:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:04:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3182: 305 pgs: 305 active+clean; 250 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.4 MiB/s wr, 330 op/s
Oct 02 13:04:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:04:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:44.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:04:45 compute-0 nova_compute[257802]: 2025-10-02 13:04:45.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:04:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:45.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:04:45 compute-0 ceph-mon[73607]: pgmap v3182: 305 pgs: 305 active+clean; 250 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.4 MiB/s wr, 330 op/s
Oct 02 13:04:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3183: 305 pgs: 305 active+clean; 275 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.2 MiB/s wr, 335 op/s
Oct 02 13:04:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:46.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:47 compute-0 nova_compute[257802]: 2025-10-02 13:04:47.109 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:47 compute-0 nova_compute[257802]: 2025-10-02 13:04:47.134 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:47 compute-0 nova_compute[257802]: 2025-10-02 13:04:47.135 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:47 compute-0 nova_compute[257802]: 2025-10-02 13:04:47.135 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:47 compute-0 nova_compute[257802]: 2025-10-02 13:04:47.135 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:04:47 compute-0 nova_compute[257802]: 2025-10-02 13:04:47.135 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:47.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:04:47 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/432933901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:47 compute-0 nova_compute[257802]: 2025-10-02 13:04:47.553 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:47 compute-0 nova_compute[257802]: 2025-10-02 13:04:47.622 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000ca as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:04:47 compute-0 nova_compute[257802]: 2025-10-02 13:04:47.622 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000ca as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:04:47 compute-0 nova_compute[257802]: 2025-10-02 13:04:47.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:47 compute-0 ceph-mon[73607]: pgmap v3183: 305 pgs: 305 active+clean; 275 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.2 MiB/s wr, 335 op/s
Oct 02 13:04:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/432933901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:47 compute-0 nova_compute[257802]: 2025-10-02 13:04:47.831 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:04:47 compute-0 nova_compute[257802]: 2025-10-02 13:04:47.832 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4066MB free_disk=20.897640228271484GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:04:47 compute-0 nova_compute[257802]: 2025-10-02 13:04:47.832 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:47 compute-0 nova_compute[257802]: 2025-10-02 13:04:47.833 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:48 compute-0 nova_compute[257802]: 2025-10-02 13:04:48.145 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance d08382ad-f3df-432b-848a-b0990a79ddf7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:04:48 compute-0 nova_compute[257802]: 2025-10-02 13:04:48.145 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:04:48 compute-0 nova_compute[257802]: 2025-10-02 13:04:48.146 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:04:48 compute-0 nova_compute[257802]: 2025-10-02 13:04:48.194 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3184: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 739 KiB/s rd, 4.3 MiB/s wr, 304 op/s
Oct 02 13:04:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:04:48 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3926820245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:48 compute-0 nova_compute[257802]: 2025-10-02 13:04:48.613 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:48 compute-0 nova_compute[257802]: 2025-10-02 13:04:48.618 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:04:48 compute-0 nova_compute[257802]: 2025-10-02 13:04:48.638 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:04:48 compute-0 nova_compute[257802]: 2025-10-02 13:04:48.666 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:04:48 compute-0 nova_compute[257802]: 2025-10-02 13:04:48.667 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3926820245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:48.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:49.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:49 compute-0 ceph-mon[73607]: pgmap v3184: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 739 KiB/s rd, 4.3 MiB/s wr, 304 op/s
Oct 02 13:04:50 compute-0 nova_compute[257802]: 2025-10-02 13:04:50.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3185: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 739 KiB/s rd, 4.3 MiB/s wr, 304 op/s
Oct 02 13:04:50 compute-0 ovn_controller[148183]: 2025-10-02T13:04:50Z|00929|binding|INFO|Releasing lport c5388d11-12a4-491d-825a-d4dc574d0a0e from this chassis (sb_readonly=0)
Oct 02 13:04:50 compute-0 nova_compute[257802]: 2025-10-02 13:04:50.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:50.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:51.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:51 compute-0 nova_compute[257802]: 2025-10-02 13:04:51.462 2 DEBUG oslo_concurrency.lockutils [None req-3d0f0753-ed5a-4810-b38f-c13492a69b2f 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "d08382ad-f3df-432b-848a-b0990a79ddf7" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:51 compute-0 nova_compute[257802]: 2025-10-02 13:04:51.462 2 DEBUG oslo_concurrency.lockutils [None req-3d0f0753-ed5a-4810-b38f-c13492a69b2f 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "d08382ad-f3df-432b-848a-b0990a79ddf7" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:51 compute-0 nova_compute[257802]: 2025-10-02 13:04:51.481 2 DEBUG nova.objects.instance [None req-3d0f0753-ed5a-4810-b38f-c13492a69b2f 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lazy-loading 'flavor' on Instance uuid d08382ad-f3df-432b-848a-b0990a79ddf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:04:51 compute-0 nova_compute[257802]: 2025-10-02 13:04:51.525 2 DEBUG oslo_concurrency.lockutils [None req-3d0f0753-ed5a-4810-b38f-c13492a69b2f 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "d08382ad-f3df-432b-848a-b0990a79ddf7" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:51 compute-0 ceph-mon[73607]: pgmap v3185: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 739 KiB/s rd, 4.3 MiB/s wr, 304 op/s
Oct 02 13:04:52 compute-0 nova_compute[257802]: 2025-10-02 13:04:52.037 2 DEBUG oslo_concurrency.lockutils [None req-3d0f0753-ed5a-4810-b38f-c13492a69b2f 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "d08382ad-f3df-432b-848a-b0990a79ddf7" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:52 compute-0 nova_compute[257802]: 2025-10-02 13:04:52.038 2 DEBUG oslo_concurrency.lockutils [None req-3d0f0753-ed5a-4810-b38f-c13492a69b2f 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "d08382ad-f3df-432b-848a-b0990a79ddf7" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:52 compute-0 nova_compute[257802]: 2025-10-02 13:04:52.038 2 INFO nova.compute.manager [None req-3d0f0753-ed5a-4810-b38f-c13492a69b2f 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Attaching volume dd4a5f3e-c055-4aa9-b3b2-bdd3900319cd to /dev/vdb
Oct 02 13:04:52 compute-0 nova_compute[257802]: 2025-10-02 13:04:52.204 2 DEBUG os_brick.utils [None req-3d0f0753-ed5a-4810-b38f-c13492a69b2f 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 13:04:52 compute-0 nova_compute[257802]: 2025-10-02 13:04:52.206 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:52 compute-0 nova_compute[257802]: 2025-10-02 13:04:52.217 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:52 compute-0 nova_compute[257802]: 2025-10-02 13:04:52.217 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[333a42a2-6972-4a29-a10c-4766d29876ed]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:52 compute-0 nova_compute[257802]: 2025-10-02 13:04:52.218 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:52 compute-0 nova_compute[257802]: 2025-10-02 13:04:52.227 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:52 compute-0 nova_compute[257802]: 2025-10-02 13:04:52.227 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[293eb1d0-b4e5-460c-8575-64880cadd000]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:52 compute-0 nova_compute[257802]: 2025-10-02 13:04:52.229 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:52 compute-0 nova_compute[257802]: 2025-10-02 13:04:52.235 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:52 compute-0 nova_compute[257802]: 2025-10-02 13:04:52.236 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[174b0640-f898-4b38-939d-9fc2f3bd9bc0]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:52 compute-0 nova_compute[257802]: 2025-10-02 13:04:52.237 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[fe5d0b19-6c66-446f-a864-f38475e1497f]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:52 compute-0 nova_compute[257802]: 2025-10-02 13:04:52.237 2 DEBUG oslo_concurrency.processutils [None req-3d0f0753-ed5a-4810-b38f-c13492a69b2f 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:52 compute-0 nova_compute[257802]: 2025-10-02 13:04:52.265 2 DEBUG oslo_concurrency.processutils [None req-3d0f0753-ed5a-4810-b38f-c13492a69b2f 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:52 compute-0 nova_compute[257802]: 2025-10-02 13:04:52.267 2 DEBUG os_brick.initiator.connectors.lightos [None req-3d0f0753-ed5a-4810-b38f-c13492a69b2f 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 13:04:52 compute-0 nova_compute[257802]: 2025-10-02 13:04:52.268 2 DEBUG os_brick.initiator.connectors.lightos [None req-3d0f0753-ed5a-4810-b38f-c13492a69b2f 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 13:04:52 compute-0 nova_compute[257802]: 2025-10-02 13:04:52.268 2 DEBUG os_brick.initiator.connectors.lightos [None req-3d0f0753-ed5a-4810-b38f-c13492a69b2f 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 13:04:52 compute-0 nova_compute[257802]: 2025-10-02 13:04:52.268 2 DEBUG os_brick.utils [None req-3d0f0753-ed5a-4810-b38f-c13492a69b2f 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] <== get_connector_properties: return (63ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 13:04:52 compute-0 nova_compute[257802]: 2025-10-02 13:04:52.269 2 DEBUG nova.virt.block_device [None req-3d0f0753-ed5a-4810-b38f-c13492a69b2f 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Updating existing volume attachment record: 6f7efa5a-79d5-4ae4-8e82-4792a4025580 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 13:04:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3186: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 739 KiB/s rd, 4.3 MiB/s wr, 304 op/s
Oct 02 13:04:52 compute-0 nova_compute[257802]: 2025-10-02 13:04:52.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:52.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:53 compute-0 nova_compute[257802]: 2025-10-02 13:04:53.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:53 compute-0 nova_compute[257802]: 2025-10-02 13:04:53.334 2 DEBUG nova.objects.instance [None req-3d0f0753-ed5a-4810-b38f-c13492a69b2f 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lazy-loading 'flavor' on Instance uuid d08382ad-f3df-432b-848a-b0990a79ddf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:04:53 compute-0 nova_compute[257802]: 2025-10-02 13:04:53.358 2 DEBUG nova.virt.libvirt.driver [None req-3d0f0753-ed5a-4810-b38f-c13492a69b2f 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Attempting to attach volume dd4a5f3e-c055-4aa9-b3b2-bdd3900319cd with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 13:04:53 compute-0 nova_compute[257802]: 2025-10-02 13:04:53.360 2 DEBUG nova.virt.libvirt.guest [None req-3d0f0753-ed5a-4810-b38f-c13492a69b2f 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 13:04:53 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:04:53 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-dd4a5f3e-c055-4aa9-b3b2-bdd3900319cd">
Oct 02 13:04:53 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:04:53 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:04:53 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:04:53 compute-0 nova_compute[257802]:   </source>
Oct 02 13:04:53 compute-0 nova_compute[257802]:   <auth username="openstack">
Oct 02 13:04:53 compute-0 nova_compute[257802]:     <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 13:04:53 compute-0 nova_compute[257802]:   </auth>
Oct 02 13:04:53 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:04:53 compute-0 nova_compute[257802]:   <serial>dd4a5f3e-c055-4aa9-b3b2-bdd3900319cd</serial>
Oct 02 13:04:53 compute-0 nova_compute[257802]: </disk>
Oct 02 13:04:53 compute-0 nova_compute[257802]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 13:04:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:53.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:53 compute-0 nova_compute[257802]: 2025-10-02 13:04:53.718 2 DEBUG nova.virt.libvirt.driver [None req-3d0f0753-ed5a-4810-b38f-c13492a69b2f 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:04:53 compute-0 nova_compute[257802]: 2025-10-02 13:04:53.718 2 DEBUG nova.virt.libvirt.driver [None req-3d0f0753-ed5a-4810-b38f-c13492a69b2f 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:04:53 compute-0 nova_compute[257802]: 2025-10-02 13:04:53.718 2 DEBUG nova.virt.libvirt.driver [None req-3d0f0753-ed5a-4810-b38f-c13492a69b2f 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:04:53 compute-0 nova_compute[257802]: 2025-10-02 13:04:53.719 2 DEBUG nova.virt.libvirt.driver [None req-3d0f0753-ed5a-4810-b38f-c13492a69b2f 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] No VIF found with MAC fa:16:3e:f7:6f:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:04:53 compute-0 ceph-mon[73607]: pgmap v3186: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 739 KiB/s rd, 4.3 MiB/s wr, 304 op/s
Oct 02 13:04:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1802367210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:54 compute-0 nova_compute[257802]: 2025-10-02 13:04:54.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:54 compute-0 nova_compute[257802]: 2025-10-02 13:04:54.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:54 compute-0 nova_compute[257802]: 2025-10-02 13:04:54.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 13:04:54 compute-0 nova_compute[257802]: 2025-10-02 13:04:54.125 2 DEBUG oslo_concurrency.lockutils [None req-3d0f0753-ed5a-4810-b38f-c13492a69b2f 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "d08382ad-f3df-432b-848a-b0990a79ddf7" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3187: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 740 KiB/s rd, 4.3 MiB/s wr, 305 op/s
Oct 02 13:04:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:04:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:54.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004339732516750419 of space, bias 1.0, pg target 1.3019197550251258 quantized to 32 (current 32)
Oct 02 13:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.6464803764191328 quantized to 32 (current 32)
Oct 02 13:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Oct 02 13:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 13:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 13:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027172174530057695 quantized to 32 (current 32)
Oct 02 13:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 13:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:04:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 13:04:55 compute-0 nova_compute[257802]: 2025-10-02 13:04:55.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:04:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:55.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:04:55 compute-0 nova_compute[257802]: 2025-10-02 13:04:55.691 2 DEBUG oslo_concurrency.lockutils [None req-6512854f-10ba-4072-8a6f-5fdc021e3cb9 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "d08382ad-f3df-432b-848a-b0990a79ddf7" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:55 compute-0 nova_compute[257802]: 2025-10-02 13:04:55.692 2 DEBUG oslo_concurrency.lockutils [None req-6512854f-10ba-4072-8a6f-5fdc021e3cb9 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "d08382ad-f3df-432b-848a-b0990a79ddf7" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:55 compute-0 nova_compute[257802]: 2025-10-02 13:04:55.724 2 INFO nova.compute.manager [None req-6512854f-10ba-4072-8a6f-5fdc021e3cb9 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Detaching volume dd4a5f3e-c055-4aa9-b3b2-bdd3900319cd
Oct 02 13:04:55 compute-0 ceph-mon[73607]: pgmap v3187: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 740 KiB/s rd, 4.3 MiB/s wr, 305 op/s
Oct 02 13:04:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3280960449' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:04:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3280960449' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:04:55 compute-0 nova_compute[257802]: 2025-10-02 13:04:55.983 2 INFO nova.virt.block_device [None req-6512854f-10ba-4072-8a6f-5fdc021e3cb9 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Attempting to driver detach volume dd4a5f3e-c055-4aa9-b3b2-bdd3900319cd from mountpoint /dev/vdb
Oct 02 13:04:55 compute-0 nova_compute[257802]: 2025-10-02 13:04:55.990 2 DEBUG nova.virt.libvirt.driver [None req-6512854f-10ba-4072-8a6f-5fdc021e3cb9 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Attempting to detach device vdb from instance d08382ad-f3df-432b-848a-b0990a79ddf7 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 13:04:55 compute-0 nova_compute[257802]: 2025-10-02 13:04:55.991 2 DEBUG nova.virt.libvirt.guest [None req-6512854f-10ba-4072-8a6f-5fdc021e3cb9 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 13:04:55 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:04:55 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-dd4a5f3e-c055-4aa9-b3b2-bdd3900319cd">
Oct 02 13:04:55 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:04:55 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:04:55 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:04:55 compute-0 nova_compute[257802]:   </source>
Oct 02 13:04:55 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:04:55 compute-0 nova_compute[257802]:   <serial>dd4a5f3e-c055-4aa9-b3b2-bdd3900319cd</serial>
Oct 02 13:04:55 compute-0 nova_compute[257802]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 13:04:55 compute-0 nova_compute[257802]: </disk>
Oct 02 13:04:55 compute-0 nova_compute[257802]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 13:04:55 compute-0 nova_compute[257802]: 2025-10-02 13:04:55.997 2 INFO nova.virt.libvirt.driver [None req-6512854f-10ba-4072-8a6f-5fdc021e3cb9 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Successfully detached device vdb from instance d08382ad-f3df-432b-848a-b0990a79ddf7 from the persistent domain config.
Oct 02 13:04:55 compute-0 nova_compute[257802]: 2025-10-02 13:04:55.998 2 DEBUG nova.virt.libvirt.driver [None req-6512854f-10ba-4072-8a6f-5fdc021e3cb9 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance d08382ad-f3df-432b-848a-b0990a79ddf7 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 13:04:55 compute-0 nova_compute[257802]: 2025-10-02 13:04:55.998 2 DEBUG nova.virt.libvirt.guest [None req-6512854f-10ba-4072-8a6f-5fdc021e3cb9 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 13:04:55 compute-0 nova_compute[257802]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:04:55 compute-0 nova_compute[257802]:   <source protocol="rbd" name="volumes/volume-dd4a5f3e-c055-4aa9-b3b2-bdd3900319cd">
Oct 02 13:04:55 compute-0 nova_compute[257802]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:04:55 compute-0 nova_compute[257802]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:04:55 compute-0 nova_compute[257802]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:04:55 compute-0 nova_compute[257802]:   </source>
Oct 02 13:04:55 compute-0 nova_compute[257802]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:04:55 compute-0 nova_compute[257802]:   <serial>dd4a5f3e-c055-4aa9-b3b2-bdd3900319cd</serial>
Oct 02 13:04:55 compute-0 nova_compute[257802]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 13:04:55 compute-0 nova_compute[257802]: </disk>
Oct 02 13:04:55 compute-0 nova_compute[257802]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 13:04:56 compute-0 nova_compute[257802]: 2025-10-02 13:04:56.101 2 DEBUG nova.virt.libvirt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Received event <DeviceRemovedEvent: 1759410296.100678, d08382ad-f3df-432b-848a-b0990a79ddf7 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 13:04:56 compute-0 nova_compute[257802]: 2025-10-02 13:04:56.103 2 DEBUG nova.virt.libvirt.driver [None req-6512854f-10ba-4072-8a6f-5fdc021e3cb9 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance d08382ad-f3df-432b-848a-b0990a79ddf7 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 13:04:56 compute-0 nova_compute[257802]: 2025-10-02 13:04:56.104 2 INFO nova.virt.libvirt.driver [None req-6512854f-10ba-4072-8a6f-5fdc021e3cb9 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Successfully detached device vdb from instance d08382ad-f3df-432b-848a-b0990a79ddf7 from the live domain config.
Oct 02 13:04:56 compute-0 nova_compute[257802]: 2025-10-02 13:04:56.390 2 DEBUG nova.objects.instance [None req-6512854f-10ba-4072-8a6f-5fdc021e3cb9 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lazy-loading 'flavor' on Instance uuid d08382ad-f3df-432b-848a-b0990a79ddf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:04:56 compute-0 nova_compute[257802]: 2025-10-02 13:04:56.430 2 DEBUG oslo_concurrency.lockutils [None req-6512854f-10ba-4072-8a6f-5fdc021e3cb9 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "d08382ad-f3df-432b-848a-b0990a79ddf7" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3188: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 316 KiB/s rd, 1.9 MiB/s wr, 111 op/s
Oct 02 13:04:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:56.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:56 compute-0 ceph-mon[73607]: pgmap v3188: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 316 KiB/s rd, 1.9 MiB/s wr, 111 op/s
Oct 02 13:04:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:04:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:57.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:04:57 compute-0 nova_compute[257802]: 2025-10-02 13:04:57.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:57 compute-0 nova_compute[257802]: 2025-10-02 13:04:57.803 2 DEBUG oslo_concurrency.lockutils [None req-9e96c0a1-8245-43d9-9510-5f1c988474bd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "d08382ad-f3df-432b-848a-b0990a79ddf7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:57 compute-0 nova_compute[257802]: 2025-10-02 13:04:57.804 2 DEBUG oslo_concurrency.lockutils [None req-9e96c0a1-8245-43d9-9510-5f1c988474bd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "d08382ad-f3df-432b-848a-b0990a79ddf7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:57 compute-0 nova_compute[257802]: 2025-10-02 13:04:57.804 2 DEBUG oslo_concurrency.lockutils [None req-9e96c0a1-8245-43d9-9510-5f1c988474bd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "d08382ad-f3df-432b-848a-b0990a79ddf7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:57 compute-0 nova_compute[257802]: 2025-10-02 13:04:57.804 2 DEBUG oslo_concurrency.lockutils [None req-9e96c0a1-8245-43d9-9510-5f1c988474bd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "d08382ad-f3df-432b-848a-b0990a79ddf7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:57 compute-0 nova_compute[257802]: 2025-10-02 13:04:57.805 2 DEBUG oslo_concurrency.lockutils [None req-9e96c0a1-8245-43d9-9510-5f1c988474bd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "d08382ad-f3df-432b-848a-b0990a79ddf7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:57 compute-0 nova_compute[257802]: 2025-10-02 13:04:57.806 2 INFO nova.compute.manager [None req-9e96c0a1-8245-43d9-9510-5f1c988474bd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Terminating instance
Oct 02 13:04:57 compute-0 nova_compute[257802]: 2025-10-02 13:04:57.807 2 DEBUG nova.compute.manager [None req-9e96c0a1-8245-43d9-9510-5f1c988474bd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:04:57 compute-0 kernel: tap312d1fff-af (unregistering): left promiscuous mode
Oct 02 13:04:57 compute-0 NetworkManager[44987]: <info>  [1759410297.8688] device (tap312d1fff-af): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:04:57 compute-0 nova_compute[257802]: 2025-10-02 13:04:57.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:57 compute-0 ovn_controller[148183]: 2025-10-02T13:04:57Z|00930|binding|INFO|Releasing lport 312d1fff-af06-4b70-b163-5401eab527c3 from this chassis (sb_readonly=0)
Oct 02 13:04:57 compute-0 ovn_controller[148183]: 2025-10-02T13:04:57Z|00931|binding|INFO|Setting lport 312d1fff-af06-4b70-b163-5401eab527c3 down in Southbound
Oct 02 13:04:57 compute-0 ovn_controller[148183]: 2025-10-02T13:04:57Z|00932|binding|INFO|Removing iface tap312d1fff-af ovn-installed in OVS
Oct 02 13:04:57 compute-0 nova_compute[257802]: 2025-10-02 13:04:57.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:57 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:57.894 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:6f:84 10.100.0.6'], port_security=['fa:16:3e:f7:6f:84 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'd08382ad-f3df-432b-848a-b0990a79ddf7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2471b6f7-ee51-4239-8b52-7016ab4d9fd1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5eceae619a6f4fdeaa8ba6fafda4912a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '26d19ab5-8104-4e4a-b8b9-1f60756a96f2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.209'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa923984-fb22-4ee5-9bd7-5034c98e7f0a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=312d1fff-af06-4b70-b163-5401eab527c3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:04:57 compute-0 nova_compute[257802]: 2025-10-02 13:04:57.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:57 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:57.896 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 312d1fff-af06-4b70-b163-5401eab527c3 in datapath 2471b6f7-ee51-4239-8b52-7016ab4d9fd1 unbound from our chassis
Oct 02 13:04:57 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:57.898 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2471b6f7-ee51-4239-8b52-7016ab4d9fd1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:04:57 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:57.899 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[840e9cd9-4a8b-4c50-9efe-f8d182687c9b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:57 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:57.899 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1 namespace which is not needed anymore
Oct 02 13:04:57 compute-0 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000ca.scope: Deactivated successfully.
Oct 02 13:04:57 compute-0 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000ca.scope: Consumed 14.470s CPU time.
Oct 02 13:04:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2836223493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:57 compute-0 systemd-machined[211836]: Machine qemu-99-instance-000000ca terminated.
Oct 02 13:04:58 compute-0 neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1[392449]: [NOTICE]   (392453) : haproxy version is 2.8.14-c23fe91
Oct 02 13:04:58 compute-0 neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1[392449]: [NOTICE]   (392453) : path to executable is /usr/sbin/haproxy
Oct 02 13:04:58 compute-0 neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1[392449]: [WARNING]  (392453) : Exiting Master process...
Oct 02 13:04:58 compute-0 neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1[392449]: [ALERT]    (392453) : Current worker (392455) exited with code 143 (Terminated)
Oct 02 13:04:58 compute-0 neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1[392449]: [WARNING]  (392453) : All workers exited. Exiting... (0)
Oct 02 13:04:58 compute-0 systemd[1]: libpod-a652fbf1791debd7c11f5ca79aa171e4a6f08a516e3ca37bb33ccc476078a202.scope: Deactivated successfully.
Oct 02 13:04:58 compute-0 podman[392708]: 2025-10-02 13:04:58.043221325 +0000 UTC m=+0.054480719 container died a652fbf1791debd7c11f5ca79aa171e4a6f08a516e3ca37bb33ccc476078a202 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 02 13:04:58 compute-0 nova_compute[257802]: 2025-10-02 13:04:58.047 2 INFO nova.virt.libvirt.driver [-] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Instance destroyed successfully.
Oct 02 13:04:58 compute-0 nova_compute[257802]: 2025-10-02 13:04:58.048 2 DEBUG nova.objects.instance [None req-9e96c0a1-8245-43d9-9510-5f1c988474bd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lazy-loading 'resources' on Instance uuid d08382ad-f3df-432b-848a-b0990a79ddf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:04:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a652fbf1791debd7c11f5ca79aa171e4a6f08a516e3ca37bb33ccc476078a202-userdata-shm.mount: Deactivated successfully.
Oct 02 13:04:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-c95dc2d0828e7b3e95a1a4e2303397da35427925fb7948d1b086e44649021cee-merged.mount: Deactivated successfully.
Oct 02 13:04:58 compute-0 podman[392708]: 2025-10-02 13:04:58.097434747 +0000 UTC m=+0.108694141 container cleanup a652fbf1791debd7c11f5ca79aa171e4a6f08a516e3ca37bb33ccc476078a202 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 13:04:58 compute-0 systemd[1]: libpod-conmon-a652fbf1791debd7c11f5ca79aa171e4a6f08a516e3ca37bb33ccc476078a202.scope: Deactivated successfully.
Oct 02 13:04:58 compute-0 podman[392750]: 2025-10-02 13:04:58.206891325 +0000 UTC m=+0.088370069 container remove a652fbf1791debd7c11f5ca79aa171e4a6f08a516e3ca37bb33ccc476078a202 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:04:58 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:58.213 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3320672d-47fc-495b-a113-8803835534c1]: (4, ('Thu Oct  2 01:04:57 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1 (a652fbf1791debd7c11f5ca79aa171e4a6f08a516e3ca37bb33ccc476078a202)\na652fbf1791debd7c11f5ca79aa171e4a6f08a516e3ca37bb33ccc476078a202\nThu Oct  2 01:04:58 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1 (a652fbf1791debd7c11f5ca79aa171e4a6f08a516e3ca37bb33ccc476078a202)\na652fbf1791debd7c11f5ca79aa171e4a6f08a516e3ca37bb33ccc476078a202\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:58 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:58.215 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[58bd7528-0a8f-4bdc-bbcb-696f2cb1e947]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:58 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:58.215 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2471b6f7-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:58 compute-0 nova_compute[257802]: 2025-10-02 13:04:58.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:58 compute-0 kernel: tap2471b6f7-e0: left promiscuous mode
Oct 02 13:04:58 compute-0 nova_compute[257802]: 2025-10-02 13:04:58.229 2 DEBUG nova.virt.libvirt.vif [None req-9e96c0a1-8245-43d9-9510-5f1c988474bd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:04:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-23549016',display_name='tempest-AttachVolumeNegativeTest-server-23549016',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-23549016',id=202,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGhl4HsziwyBXoN0Jr2ywLkLThlZb6gbts/E5vP9PJb/IM7wl3M+MbRyHdfTM/dVeoNbicKiUF5APP45mJZQhWxY+qRtBZXmMutFYKtC4i7zKLlfUh/gvcUPRp44XkkKcg==',key_name='tempest-keypair-1620100568',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:04:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5eceae619a6f4fdeaa8ba6fafda4912a',ramdisk_id='',reservation_id='r-mlms067s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-1407980822',owner_user_name='tempest-AttachVolumeNegativeTest-1407980822-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:04:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='93facc00c95f4cbfa6cecaf3641182bc',uuid=d08382ad-f3df-432b-848a-b0990a79ddf7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "312d1fff-af06-4b70-b163-5401eab527c3", "address": "fa:16:3e:f7:6f:84", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d1fff-af", "ovs_interfaceid": "312d1fff-af06-4b70-b163-5401eab527c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:04:58 compute-0 nova_compute[257802]: 2025-10-02 13:04:58.229 2 DEBUG nova.network.os_vif_util [None req-9e96c0a1-8245-43d9-9510-5f1c988474bd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Converting VIF {"id": "312d1fff-af06-4b70-b163-5401eab527c3", "address": "fa:16:3e:f7:6f:84", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d1fff-af", "ovs_interfaceid": "312d1fff-af06-4b70-b163-5401eab527c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:04:58 compute-0 nova_compute[257802]: 2025-10-02 13:04:58.230 2 DEBUG nova.network.os_vif_util [None req-9e96c0a1-8245-43d9-9510-5f1c988474bd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f7:6f:84,bridge_name='br-int',has_traffic_filtering=True,id=312d1fff-af06-4b70-b163-5401eab527c3,network=Network(2471b6f7-ee51-4239-8b52-7016ab4d9fd1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap312d1fff-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:04:58 compute-0 nova_compute[257802]: 2025-10-02 13:04:58.231 2 DEBUG os_vif [None req-9e96c0a1-8245-43d9-9510-5f1c988474bd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:6f:84,bridge_name='br-int',has_traffic_filtering=True,id=312d1fff-af06-4b70-b163-5401eab527c3,network=Network(2471b6f7-ee51-4239-8b52-7016ab4d9fd1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap312d1fff-af') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:04:58 compute-0 nova_compute[257802]: 2025-10-02 13:04:58.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:58 compute-0 nova_compute[257802]: 2025-10-02 13:04:58.233 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap312d1fff-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:58 compute-0 nova_compute[257802]: 2025-10-02 13:04:58.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:58 compute-0 nova_compute[257802]: 2025-10-02 13:04:58.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:58 compute-0 nova_compute[257802]: 2025-10-02 13:04:58.238 2 INFO os_vif [None req-9e96c0a1-8245-43d9-9510-5f1c988474bd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:6f:84,bridge_name='br-int',has_traffic_filtering=True,id=312d1fff-af06-4b70-b163-5401eab527c3,network=Network(2471b6f7-ee51-4239-8b52-7016ab4d9fd1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap312d1fff-af')
Oct 02 13:04:58 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:58.239 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d1880305-d486-4530-937f-4f5e63f93c33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:58 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:58.264 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[df13e3ed-547a-4069-be35-b4c6c7611a23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:58 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:58.265 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[25d939a2-34eb-41a5-85b0-47bac9c03692]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:58 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:58.283 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[13b7e4d4-13fc-4f9e-8666-78116f941271]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 833642, 'reachable_time': 22751, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392782, 'error': None, 'target': 'ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:58 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:58.287 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:04:58 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:04:58.288 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e0bb80-6daf-425c-9c09-cf2debde03fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:58 compute-0 systemd[1]: run-netns-ovnmeta\x2d2471b6f7\x2dee51\x2d4239\x2d8b52\x2d7016ab4d9fd1.mount: Deactivated successfully.
Oct 02 13:04:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3189: 305 pgs: 305 active+clean; 247 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 49 KiB/s wr, 19 op/s
Oct 02 13:04:58 compute-0 nova_compute[257802]: 2025-10-02 13:04:58.615 2 DEBUG nova.compute.manager [req-3c69b5d1-1c5c-4233-8e9b-a59edc24acec req-2bcf3144-f3db-4fef-aa52-1dda1ba165ef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Received event network-vif-unplugged-312d1fff-af06-4b70-b163-5401eab527c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:04:58 compute-0 nova_compute[257802]: 2025-10-02 13:04:58.616 2 DEBUG oslo_concurrency.lockutils [req-3c69b5d1-1c5c-4233-8e9b-a59edc24acec req-2bcf3144-f3db-4fef-aa52-1dda1ba165ef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d08382ad-f3df-432b-848a-b0990a79ddf7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:58 compute-0 nova_compute[257802]: 2025-10-02 13:04:58.616 2 DEBUG oslo_concurrency.lockutils [req-3c69b5d1-1c5c-4233-8e9b-a59edc24acec req-2bcf3144-f3db-4fef-aa52-1dda1ba165ef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d08382ad-f3df-432b-848a-b0990a79ddf7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:58 compute-0 nova_compute[257802]: 2025-10-02 13:04:58.616 2 DEBUG oslo_concurrency.lockutils [req-3c69b5d1-1c5c-4233-8e9b-a59edc24acec req-2bcf3144-f3db-4fef-aa52-1dda1ba165ef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d08382ad-f3df-432b-848a-b0990a79ddf7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:58 compute-0 nova_compute[257802]: 2025-10-02 13:04:58.616 2 DEBUG nova.compute.manager [req-3c69b5d1-1c5c-4233-8e9b-a59edc24acec req-2bcf3144-f3db-4fef-aa52-1dda1ba165ef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] No waiting events found dispatching network-vif-unplugged-312d1fff-af06-4b70-b163-5401eab527c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:04:58 compute-0 nova_compute[257802]: 2025-10-02 13:04:58.616 2 DEBUG nova.compute.manager [req-3c69b5d1-1c5c-4233-8e9b-a59edc24acec req-2bcf3144-f3db-4fef-aa52-1dda1ba165ef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Received event network-vif-unplugged-312d1fff-af06-4b70-b163-5401eab527c3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:04:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:04:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:58.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:04:58 compute-0 nova_compute[257802]: 2025-10-02 13:04:58.919 2 INFO nova.virt.libvirt.driver [None req-9e96c0a1-8245-43d9-9510-5f1c988474bd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Deleting instance files /var/lib/nova/instances/d08382ad-f3df-432b-848a-b0990a79ddf7_del
Oct 02 13:04:58 compute-0 nova_compute[257802]: 2025-10-02 13:04:58.920 2 INFO nova.virt.libvirt.driver [None req-9e96c0a1-8245-43d9-9510-5f1c988474bd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Deletion of /var/lib/nova/instances/d08382ad-f3df-432b-848a-b0990a79ddf7_del complete
Oct 02 13:04:58 compute-0 ceph-mon[73607]: pgmap v3189: 305 pgs: 305 active+clean; 247 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 49 KiB/s wr, 19 op/s
Oct 02 13:04:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:04:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:59.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:59 compute-0 nova_compute[257802]: 2025-10-02 13:04:59.578 2 INFO nova.compute.manager [None req-9e96c0a1-8245-43d9-9510-5f1c988474bd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Took 1.77 seconds to destroy the instance on the hypervisor.
Oct 02 13:04:59 compute-0 nova_compute[257802]: 2025-10-02 13:04:59.579 2 DEBUG oslo.service.loopingcall [None req-9e96c0a1-8245-43d9-9510-5f1c988474bd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:04:59 compute-0 nova_compute[257802]: 2025-10-02 13:04:59.580 2 DEBUG nova.compute.manager [-] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:04:59 compute-0 nova_compute[257802]: 2025-10-02 13:04:59.580 2 DEBUG nova.network.neutron [-] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:04:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:00 compute-0 nova_compute[257802]: 2025-10-02 13:05:00.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3190: 305 pgs: 305 active+clean; 146 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 32 KiB/s wr, 52 op/s
Oct 02 13:05:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:00.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:05:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:01.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:05:01 compute-0 nova_compute[257802]: 2025-10-02 13:05:01.495 2 DEBUG nova.compute.manager [req-3af56e51-3f4a-4a90-9058-387e515b9de4 req-e80b0c1c-75ae-4ead-8683-ee1870d888cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Received event network-vif-plugged-312d1fff-af06-4b70-b163-5401eab527c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:01 compute-0 nova_compute[257802]: 2025-10-02 13:05:01.495 2 DEBUG oslo_concurrency.lockutils [req-3af56e51-3f4a-4a90-9058-387e515b9de4 req-e80b0c1c-75ae-4ead-8683-ee1870d888cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d08382ad-f3df-432b-848a-b0990a79ddf7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:01 compute-0 nova_compute[257802]: 2025-10-02 13:05:01.495 2 DEBUG oslo_concurrency.lockutils [req-3af56e51-3f4a-4a90-9058-387e515b9de4 req-e80b0c1c-75ae-4ead-8683-ee1870d888cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d08382ad-f3df-432b-848a-b0990a79ddf7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:01 compute-0 nova_compute[257802]: 2025-10-02 13:05:01.496 2 DEBUG oslo_concurrency.lockutils [req-3af56e51-3f4a-4a90-9058-387e515b9de4 req-e80b0c1c-75ae-4ead-8683-ee1870d888cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d08382ad-f3df-432b-848a-b0990a79ddf7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:01 compute-0 nova_compute[257802]: 2025-10-02 13:05:01.496 2 DEBUG nova.compute.manager [req-3af56e51-3f4a-4a90-9058-387e515b9de4 req-e80b0c1c-75ae-4ead-8683-ee1870d888cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] No waiting events found dispatching network-vif-plugged-312d1fff-af06-4b70-b163-5401eab527c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:05:01 compute-0 nova_compute[257802]: 2025-10-02 13:05:01.496 2 WARNING nova.compute.manager [req-3af56e51-3f4a-4a90-9058-387e515b9de4 req-e80b0c1c-75ae-4ead-8683-ee1870d888cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Received unexpected event network-vif-plugged-312d1fff-af06-4b70-b163-5401eab527c3 for instance with vm_state active and task_state deleting.
Oct 02 13:05:01 compute-0 ceph-mon[73607]: pgmap v3190: 305 pgs: 305 active+clean; 146 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 32 KiB/s wr, 52 op/s
Oct 02 13:05:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3191: 305 pgs: 305 active+clean; 146 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 9.7 KiB/s wr, 51 op/s
Oct 02 13:05:02 compute-0 sudo[392788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:05:02 compute-0 sudo[392788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:02 compute-0 sudo[392788]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:02 compute-0 nova_compute[257802]: 2025-10-02 13:05:02.724 2 DEBUG nova.network.neutron [-] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:05:02 compute-0 nova_compute[257802]: 2025-10-02 13:05:02.755 2 INFO nova.compute.manager [-] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Took 3.18 seconds to deallocate network for instance.
Oct 02 13:05:02 compute-0 sudo[392814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:05:02 compute-0 sudo[392814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:02 compute-0 sudo[392814]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:02 compute-0 nova_compute[257802]: 2025-10-02 13:05:02.816 2 DEBUG nova.compute.manager [req-7a3d6352-3057-4282-8398-fdad6b49c15a req-28354b73-07cf-4663-a6c4-2ff072b18094 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Received event network-vif-deleted-312d1fff-af06-4b70-b163-5401eab527c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:02 compute-0 nova_compute[257802]: 2025-10-02 13:05:02.858 2 DEBUG oslo_concurrency.lockutils [None req-9e96c0a1-8245-43d9-9510-5f1c988474bd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:02 compute-0 nova_compute[257802]: 2025-10-02 13:05:02.859 2 DEBUG oslo_concurrency.lockutils [None req-9e96c0a1-8245-43d9-9510-5f1c988474bd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:05:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:02.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:05:03 compute-0 nova_compute[257802]: 2025-10-02 13:05:03.013 2 DEBUG oslo_concurrency.processutils [None req-9e96c0a1-8245-43d9-9510-5f1c988474bd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:05:03 compute-0 nova_compute[257802]: 2025-10-02 13:05:03.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:03.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:05:03 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3687981365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:03 compute-0 nova_compute[257802]: 2025-10-02 13:05:03.437 2 DEBUG oslo_concurrency.processutils [None req-9e96c0a1-8245-43d9-9510-5f1c988474bd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:05:03 compute-0 nova_compute[257802]: 2025-10-02 13:05:03.442 2 DEBUG nova.compute.provider_tree [None req-9e96c0a1-8245-43d9-9510-5f1c988474bd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:05:03 compute-0 nova_compute[257802]: 2025-10-02 13:05:03.466 2 DEBUG nova.scheduler.client.report [None req-9e96c0a1-8245-43d9-9510-5f1c988474bd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:05:03 compute-0 nova_compute[257802]: 2025-10-02 13:05:03.496 2 DEBUG oslo_concurrency.lockutils [None req-9e96c0a1-8245-43d9-9510-5f1c988474bd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:03 compute-0 nova_compute[257802]: 2025-10-02 13:05:03.594 2 INFO nova.scheduler.client.report [None req-9e96c0a1-8245-43d9-9510-5f1c988474bd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Deleted allocations for instance d08382ad-f3df-432b-848a-b0990a79ddf7
Oct 02 13:05:03 compute-0 ceph-mon[73607]: pgmap v3191: 305 pgs: 305 active+clean; 146 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 9.7 KiB/s wr, 51 op/s
Oct 02 13:05:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3687981365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:03 compute-0 nova_compute[257802]: 2025-10-02 13:05:03.679 2 DEBUG oslo_concurrency.lockutils [None req-9e96c0a1-8245-43d9-9510-5f1c988474bd 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "d08382ad-f3df-432b-848a-b0990a79ddf7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3192: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 10 KiB/s wr, 59 op/s
Oct 02 13:05:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:04.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:05 compute-0 nova_compute[257802]: 2025-10-02 13:05:05.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:05:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:05.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:05:05 compute-0 nova_compute[257802]: 2025-10-02 13:05:05.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:05 compute-0 ceph-mon[73607]: pgmap v3192: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 10 KiB/s wr, 59 op/s
Oct 02 13:05:05 compute-0 nova_compute[257802]: 2025-10-02 13:05:05.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:05 compute-0 podman[392863]: 2025-10-02 13:05:05.918816916 +0000 UTC m=+0.057351238 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 13:05:05 compute-0 podman[392864]: 2025-10-02 13:05:05.925726394 +0000 UTC m=+0.058966588 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 13:05:05 compute-0 podman[392865]: 2025-10-02 13:05:05.926491303 +0000 UTC m=+0.061139571 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:05:05 compute-0 nova_compute[257802]: 2025-10-02 13:05:05.970 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:05:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3193: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 8.5 KiB/s wr, 58 op/s
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:06.701922) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410306701959, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1104, "num_deletes": 251, "total_data_size": 1791623, "memory_usage": 1812928, "flush_reason": "Manual Compaction"}
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410306715877, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 1760507, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69712, "largest_seqno": 70815, "table_properties": {"data_size": 1755222, "index_size": 2744, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11653, "raw_average_key_size": 19, "raw_value_size": 1744585, "raw_average_value_size": 2992, "num_data_blocks": 121, "num_entries": 583, "num_filter_entries": 583, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410210, "oldest_key_time": 1759410210, "file_creation_time": 1759410306, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 14000 microseconds, and 4286 cpu microseconds.
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:06.715917) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 1760507 bytes OK
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:06.715937) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:06.718004) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:06.718017) EVENT_LOG_v1 {"time_micros": 1759410306718013, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:06.718034) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 1786632, prev total WAL file size 1786632, number of live WAL files 2.
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:06.718611) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(1719KB)], [158(13MB)]
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410306718657, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 15720893, "oldest_snapshot_seqno": -1}
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 9742 keys, 13789664 bytes, temperature: kUnknown
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410306818121, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 13789664, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13724973, "index_size": 39234, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24389, "raw_key_size": 257032, "raw_average_key_size": 26, "raw_value_size": 13552395, "raw_average_value_size": 1391, "num_data_blocks": 1500, "num_entries": 9742, "num_filter_entries": 9742, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759410306, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:06.818329) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 13789664 bytes
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:06.820765) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.0 rd, 138.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 13.3 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(16.8) write-amplify(7.8) OK, records in: 10259, records dropped: 517 output_compression: NoCompression
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:06.820782) EVENT_LOG_v1 {"time_micros": 1759410306820774, "job": 98, "event": "compaction_finished", "compaction_time_micros": 99524, "compaction_time_cpu_micros": 32521, "output_level": 6, "num_output_files": 1, "total_output_size": 13789664, "num_input_records": 10259, "num_output_records": 9742, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410306821190, "job": 98, "event": "table_file_deletion", "file_number": 160}
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410306823702, "job": 98, "event": "table_file_deletion", "file_number": 158}
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:06.718517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:06.823752) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:06.823757) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:06.823759) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:06.823761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:05:06 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:06.823763) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:05:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:06.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:05:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:07.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:05:07 compute-0 ceph-mon[73607]: pgmap v3193: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 8.5 KiB/s wr, 58 op/s
Oct 02 13:05:08 compute-0 nova_compute[257802]: 2025-10-02 13:05:08.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3194: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 2.3 KiB/s wr, 57 op/s
Oct 02 13:05:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:08.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:08 compute-0 sudo[392919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:05:08 compute-0 sudo[392919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:08 compute-0 sudo[392919]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:09 compute-0 sudo[392944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:05:09 compute-0 sudo[392944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:09 compute-0 sudo[392944]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:09 compute-0 sudo[392969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:05:09 compute-0 sudo[392969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:09 compute-0 sudo[392969]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:09 compute-0 sudo[392994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:05:09 compute-0 sudo[392994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:09.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:09 compute-0 sudo[392994]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:09 compute-0 ceph-mon[73607]: pgmap v3194: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 2.3 KiB/s wr, 57 op/s
Oct 02 13:05:09 compute-0 sudo[393049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:05:09 compute-0 sudo[393049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:09 compute-0 sudo[393049]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:09 compute-0 sudo[393074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:05:09 compute-0 sudo[393074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:09 compute-0 sudo[393074]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:09 compute-0 sudo[393099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:05:09 compute-0 sudo[393099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:09 compute-0 sudo[393099]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:09 compute-0 sudo[393124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- inventory --format=json-pretty --filter-for-batch
Oct 02 13:05:09 compute-0 sudo[393124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:10 compute-0 nova_compute[257802]: 2025-10-02 13:05:10.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:10 compute-0 podman[393190]: 2025-10-02 13:05:10.284949525 +0000 UTC m=+0.045308268 container create 181a83de8e32d4a0b64ec51bedeb459c6af364e1b19e38a76a82e5fa9842ce65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:05:10 compute-0 systemd[1]: Started libpod-conmon-181a83de8e32d4a0b64ec51bedeb459c6af364e1b19e38a76a82e5fa9842ce65.scope.
Oct 02 13:05:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:05:10 compute-0 podman[393190]: 2025-10-02 13:05:10.267033622 +0000 UTC m=+0.027392385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:05:10 compute-0 podman[393190]: 2025-10-02 13:05:10.378766885 +0000 UTC m=+0.139125708 container init 181a83de8e32d4a0b64ec51bedeb459c6af364e1b19e38a76a82e5fa9842ce65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_pasteur, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:05:10 compute-0 podman[393190]: 2025-10-02 13:05:10.391037881 +0000 UTC m=+0.151396654 container start 181a83de8e32d4a0b64ec51bedeb459c6af364e1b19e38a76a82e5fa9842ce65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:05:10 compute-0 podman[393190]: 2025-10-02 13:05:10.394986987 +0000 UTC m=+0.155345820 container attach 181a83de8e32d4a0b64ec51bedeb459c6af364e1b19e38a76a82e5fa9842ce65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 13:05:10 compute-0 funny_pasteur[393206]: 167 167
Oct 02 13:05:10 compute-0 systemd[1]: libpod-181a83de8e32d4a0b64ec51bedeb459c6af364e1b19e38a76a82e5fa9842ce65.scope: Deactivated successfully.
Oct 02 13:05:10 compute-0 conmon[393206]: conmon 181a83de8e32d4a0b64e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-181a83de8e32d4a0b64ec51bedeb459c6af364e1b19e38a76a82e5fa9842ce65.scope/container/memory.events
Oct 02 13:05:10 compute-0 podman[393190]: 2025-10-02 13:05:10.401671309 +0000 UTC m=+0.162030072 container died 181a83de8e32d4a0b64ec51bedeb459c6af364e1b19e38a76a82e5fa9842ce65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 13:05:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-36acc588152b66779140f0af2ee050367604ba30fe2eb92a8006cdb56e8481e3-merged.mount: Deactivated successfully.
Oct 02 13:05:10 compute-0 podman[393190]: 2025-10-02 13:05:10.449138147 +0000 UTC m=+0.209496920 container remove 181a83de8e32d4a0b64ec51bedeb459c6af364e1b19e38a76a82e5fa9842ce65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_pasteur, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 13:05:10 compute-0 systemd[1]: libpod-conmon-181a83de8e32d4a0b64ec51bedeb459c6af364e1b19e38a76a82e5fa9842ce65.scope: Deactivated successfully.
Oct 02 13:05:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3195: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 2.0 KiB/s wr, 49 op/s
Oct 02 13:05:10 compute-0 podman[393231]: 2025-10-02 13:05:10.623652729 +0000 UTC m=+0.036081184 container create 62e40d3a6f6cb0c4440b19278df279bfb46a7b1e11527d6e13252cf209d57fe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_banzai, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 13:05:10 compute-0 systemd[1]: Started libpod-conmon-62e40d3a6f6cb0c4440b19278df279bfb46a7b1e11527d6e13252cf209d57fe7.scope.
Oct 02 13:05:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:05:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53914b3c443ee185fe552debd39d224bb753050e798fdbd932ce414d814df70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:05:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53914b3c443ee185fe552debd39d224bb753050e798fdbd932ce414d814df70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:05:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53914b3c443ee185fe552debd39d224bb753050e798fdbd932ce414d814df70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:05:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53914b3c443ee185fe552debd39d224bb753050e798fdbd932ce414d814df70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:05:10 compute-0 podman[393231]: 2025-10-02 13:05:10.698719275 +0000 UTC m=+0.111147740 container init 62e40d3a6f6cb0c4440b19278df279bfb46a7b1e11527d6e13252cf209d57fe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_banzai, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:05:10 compute-0 podman[393231]: 2025-10-02 13:05:10.608933423 +0000 UTC m=+0.021361898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:05:10 compute-0 podman[393231]: 2025-10-02 13:05:10.704749151 +0000 UTC m=+0.117177606 container start 62e40d3a6f6cb0c4440b19278df279bfb46a7b1e11527d6e13252cf209d57fe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:05:10 compute-0 podman[393231]: 2025-10-02 13:05:10.707696533 +0000 UTC m=+0.120124988 container attach 62e40d3a6f6cb0c4440b19278df279bfb46a7b1e11527d6e13252cf209d57fe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_banzai, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:05:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:10.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 13:05:11 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:05:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 13:05:11 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:05:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:11.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:11 compute-0 ceph-mon[73607]: pgmap v3195: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 2.0 KiB/s wr, 49 op/s
Oct 02 13:05:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:05:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:05:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 13:05:11 compute-0 sad_banzai[393247]: [
Oct 02 13:05:11 compute-0 sad_banzai[393247]:     {
Oct 02 13:05:11 compute-0 sad_banzai[393247]:         "available": false,
Oct 02 13:05:11 compute-0 sad_banzai[393247]:         "ceph_device": false,
Oct 02 13:05:11 compute-0 sad_banzai[393247]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 02 13:05:11 compute-0 sad_banzai[393247]:         "lsm_data": {},
Oct 02 13:05:11 compute-0 sad_banzai[393247]:         "lvs": [],
Oct 02 13:05:11 compute-0 sad_banzai[393247]:         "path": "/dev/sr0",
Oct 02 13:05:11 compute-0 sad_banzai[393247]:         "rejected_reasons": [
Oct 02 13:05:11 compute-0 sad_banzai[393247]:             "Has a FileSystem",
Oct 02 13:05:11 compute-0 sad_banzai[393247]:             "Insufficient space (<5GB)"
Oct 02 13:05:11 compute-0 sad_banzai[393247]:         ],
Oct 02 13:05:11 compute-0 sad_banzai[393247]:         "sys_api": {
Oct 02 13:05:11 compute-0 sad_banzai[393247]:             "actuators": null,
Oct 02 13:05:11 compute-0 sad_banzai[393247]:             "device_nodes": "sr0",
Oct 02 13:05:11 compute-0 sad_banzai[393247]:             "devname": "sr0",
Oct 02 13:05:11 compute-0 sad_banzai[393247]:             "human_readable_size": "482.00 KB",
Oct 02 13:05:11 compute-0 sad_banzai[393247]:             "id_bus": "ata",
Oct 02 13:05:11 compute-0 sad_banzai[393247]:             "model": "QEMU DVD-ROM",
Oct 02 13:05:11 compute-0 sad_banzai[393247]:             "nr_requests": "2",
Oct 02 13:05:11 compute-0 sad_banzai[393247]:             "parent": "/dev/sr0",
Oct 02 13:05:11 compute-0 sad_banzai[393247]:             "partitions": {},
Oct 02 13:05:11 compute-0 sad_banzai[393247]:             "path": "/dev/sr0",
Oct 02 13:05:11 compute-0 sad_banzai[393247]:             "removable": "1",
Oct 02 13:05:11 compute-0 sad_banzai[393247]:             "rev": "2.5+",
Oct 02 13:05:11 compute-0 sad_banzai[393247]:             "ro": "0",
Oct 02 13:05:11 compute-0 sad_banzai[393247]:             "rotational": "0",
Oct 02 13:05:11 compute-0 sad_banzai[393247]:             "sas_address": "",
Oct 02 13:05:11 compute-0 sad_banzai[393247]:             "sas_device_handle": "",
Oct 02 13:05:11 compute-0 sad_banzai[393247]:             "scheduler_mode": "mq-deadline",
Oct 02 13:05:11 compute-0 sad_banzai[393247]:             "sectors": 0,
Oct 02 13:05:11 compute-0 sad_banzai[393247]:             "sectorsize": "2048",
Oct 02 13:05:11 compute-0 sad_banzai[393247]:             "size": 493568.0,
Oct 02 13:05:11 compute-0 sad_banzai[393247]:             "support_discard": "2048",
Oct 02 13:05:11 compute-0 sad_banzai[393247]:             "type": "disk",
Oct 02 13:05:11 compute-0 sad_banzai[393247]:             "vendor": "QEMU"
Oct 02 13:05:11 compute-0 sad_banzai[393247]:         }
Oct 02 13:05:11 compute-0 sad_banzai[393247]:     }
Oct 02 13:05:11 compute-0 sad_banzai[393247]: ]
Oct 02 13:05:11 compute-0 systemd[1]: libpod-62e40d3a6f6cb0c4440b19278df279bfb46a7b1e11527d6e13252cf209d57fe7.scope: Deactivated successfully.
Oct 02 13:05:11 compute-0 systemd[1]: libpod-62e40d3a6f6cb0c4440b19278df279bfb46a7b1e11527d6e13252cf209d57fe7.scope: Consumed 1.250s CPU time.
Oct 02 13:05:11 compute-0 podman[393231]: 2025-10-02 13:05:11.970145964 +0000 UTC m=+1.382574429 container died 62e40d3a6f6cb0c4440b19278df279bfb46a7b1e11527d6e13252cf209d57fe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 13:05:11 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:05:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 13:05:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:05:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-f53914b3c443ee185fe552debd39d224bb753050e798fdbd932ce414d814df70-merged.mount: Deactivated successfully.
Oct 02 13:05:12 compute-0 podman[393231]: 2025-10-02 13:05:12.221132736 +0000 UTC m=+1.633561191 container remove 62e40d3a6f6cb0c4440b19278df279bfb46a7b1e11527d6e13252cf209d57fe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_banzai, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 13:05:12 compute-0 systemd[1]: libpod-conmon-62e40d3a6f6cb0c4440b19278df279bfb46a7b1e11527d6e13252cf209d57fe7.scope: Deactivated successfully.
Oct 02 13:05:12 compute-0 sudo[393124]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:05:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:05:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:05:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:05:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 13:05:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:05:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 13:05:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:05:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:05:12 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:05:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:05:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:05:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:05:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:05:12 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev e7034a33-2151-4e62-906c-79b54adbd7e6 does not exist
Oct 02 13:05:12 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev a8bc5322-d277-493a-be9c-87043ad47415 does not exist
Oct 02 13:05:12 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 0d32b882-7f3c-4a2b-b5c6-427a91be89f5 does not exist
Oct 02 13:05:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:05:12 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:05:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:05:12 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:05:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:05:12 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:05:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3196: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.1 KiB/s rd, 682 B/s wr, 8 op/s
Oct 02 13:05:12 compute-0 sudo[394474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:05:12 compute-0 sudo[394474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:12 compute-0 sudo[394474]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:12 compute-0 sudo[394499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:05:12 compute-0 sudo[394499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:12 compute-0 sudo[394499]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:12 compute-0 sudo[394524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:05:12 compute-0 sudo[394524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:12 compute-0 sudo[394524]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:05:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:05:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:05:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:05:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:05:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:05:12 compute-0 sudo[394550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:05:12 compute-0 sudo[394550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:12.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:12 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:05:12 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:05:12 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:05:12 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:05:12 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:05:12 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:05:12 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:05:12 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:05:12 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:05:12 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:05:12 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:05:12 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:05:12 compute-0 ceph-mon[73607]: pgmap v3196: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.1 KiB/s rd, 682 B/s wr, 8 op/s
Oct 02 13:05:13 compute-0 nova_compute[257802]: 2025-10-02 13:05:13.046 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410298.045618, d08382ad-f3df-432b-848a-b0990a79ddf7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:05:13 compute-0 nova_compute[257802]: 2025-10-02 13:05:13.048 2 INFO nova.compute.manager [-] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] VM Stopped (Lifecycle Event)
Oct 02 13:05:13 compute-0 nova_compute[257802]: 2025-10-02 13:05:13.067 2 DEBUG nova.compute.manager [None req-acd3dfcf-e736-4209-9b65-8160f7117217 - - - - - -] [instance: d08382ad-f3df-432b-848a-b0990a79ddf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:05:13 compute-0 podman[394614]: 2025-10-02 13:05:13.161185699 +0000 UTC m=+0.107088313 container create 5b4721beed2cbcfd917a994fac5228771f10d2f7fd40ac09d1d1b1c874528cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:05:13 compute-0 podman[394614]: 2025-10-02 13:05:13.07855939 +0000 UTC m=+0.024462024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:05:13 compute-0 systemd[1]: Started libpod-conmon-5b4721beed2cbcfd917a994fac5228771f10d2f7fd40ac09d1d1b1c874528cb9.scope.
Oct 02 13:05:13 compute-0 nova_compute[257802]: 2025-10-02 13:05:13.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:05:13 compute-0 podman[394614]: 2025-10-02 13:05:13.328165168 +0000 UTC m=+0.274067802 container init 5b4721beed2cbcfd917a994fac5228771f10d2f7fd40ac09d1d1b1c874528cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:05:13 compute-0 podman[394614]: 2025-10-02 13:05:13.334948072 +0000 UTC m=+0.280850686 container start 5b4721beed2cbcfd917a994fac5228771f10d2f7fd40ac09d1d1b1c874528cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_keller, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 13:05:13 compute-0 charming_keller[394631]: 167 167
Oct 02 13:05:13 compute-0 systemd[1]: libpod-5b4721beed2cbcfd917a994fac5228771f10d2f7fd40ac09d1d1b1c874528cb9.scope: Deactivated successfully.
Oct 02 13:05:13 compute-0 conmon[394631]: conmon 5b4721beed2cbcfd917a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5b4721beed2cbcfd917a994fac5228771f10d2f7fd40ac09d1d1b1c874528cb9.scope/container/memory.events
Oct 02 13:05:13 compute-0 podman[394614]: 2025-10-02 13:05:13.353265516 +0000 UTC m=+0.299168130 container attach 5b4721beed2cbcfd917a994fac5228771f10d2f7fd40ac09d1d1b1c874528cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Oct 02 13:05:13 compute-0 podman[394614]: 2025-10-02 13:05:13.353811018 +0000 UTC m=+0.299713642 container died 5b4721beed2cbcfd917a994fac5228771f10d2f7fd40ac09d1d1b1c874528cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_keller, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 13:05:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:13.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-68ecce343e393192f357a6fe91ad91c405326dd52c53ba7e497d5b773d5648c2-merged.mount: Deactivated successfully.
Oct 02 13:05:13 compute-0 podman[394614]: 2025-10-02 13:05:13.4774479 +0000 UTC m=+0.423350524 container remove 5b4721beed2cbcfd917a994fac5228771f10d2f7fd40ac09d1d1b1c874528cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_keller, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 13:05:13 compute-0 systemd[1]: libpod-conmon-5b4721beed2cbcfd917a994fac5228771f10d2f7fd40ac09d1d1b1c874528cb9.scope: Deactivated successfully.
Oct 02 13:05:13 compute-0 podman[394655]: 2025-10-02 13:05:13.650904767 +0000 UTC m=+0.057685708 container create 60e7d8214920b5a5b48e8f20a43c4acb1f183e8359b79018c61a6c5bfb4a11ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_napier, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:05:13 compute-0 systemd[1]: Started libpod-conmon-60e7d8214920b5a5b48e8f20a43c4acb1f183e8359b79018c61a6c5bfb4a11ee.scope.
Oct 02 13:05:13 compute-0 podman[394655]: 2025-10-02 13:05:13.614700711 +0000 UTC m=+0.021481672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:05:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c4c79affe15fa04c7ca7091c146fdb94e7c767d7994f8516e893f2b461f70e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c4c79affe15fa04c7ca7091c146fdb94e7c767d7994f8516e893f2b461f70e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c4c79affe15fa04c7ca7091c146fdb94e7c767d7994f8516e893f2b461f70e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c4c79affe15fa04c7ca7091c146fdb94e7c767d7994f8516e893f2b461f70e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c4c79affe15fa04c7ca7091c146fdb94e7c767d7994f8516e893f2b461f70e4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:05:13 compute-0 podman[394655]: 2025-10-02 13:05:13.776615647 +0000 UTC m=+0.183396618 container init 60e7d8214920b5a5b48e8f20a43c4acb1f183e8359b79018c61a6c5bfb4a11ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:05:13 compute-0 podman[394655]: 2025-10-02 13:05:13.783297149 +0000 UTC m=+0.190078100 container start 60e7d8214920b5a5b48e8f20a43c4acb1f183e8359b79018c61a6c5bfb4a11ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 13:05:13 compute-0 podman[394655]: 2025-10-02 13:05:13.799003309 +0000 UTC m=+0.205784290 container attach 60e7d8214920b5a5b48e8f20a43c4acb1f183e8359b79018c61a6c5bfb4a11ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_napier, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:05:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3197: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.1 KiB/s rd, 682 B/s wr, 8 op/s
Oct 02 13:05:14 compute-0 peaceful_napier[394671]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:05:14 compute-0 peaceful_napier[394671]: --> relative data size: 1.0
Oct 02 13:05:14 compute-0 peaceful_napier[394671]: --> All data devices are unavailable
Oct 02 13:05:14 compute-0 systemd[1]: libpod-60e7d8214920b5a5b48e8f20a43c4acb1f183e8359b79018c61a6c5bfb4a11ee.scope: Deactivated successfully.
Oct 02 13:05:14 compute-0 podman[394655]: 2025-10-02 13:05:14.581607732 +0000 UTC m=+0.988388673 container died 60e7d8214920b5a5b48e8f20a43c4acb1f183e8359b79018c61a6c5bfb4a11ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 13:05:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c4c79affe15fa04c7ca7091c146fdb94e7c767d7994f8516e893f2b461f70e4-merged.mount: Deactivated successfully.
Oct 02 13:05:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2766716528' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:05:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:14.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:05:15 compute-0 podman[394655]: 2025-10-02 13:05:15.026203338 +0000 UTC m=+1.432984279 container remove 60e7d8214920b5a5b48e8f20a43c4acb1f183e8359b79018c61a6c5bfb4a11ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_napier, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:05:15 compute-0 systemd[1]: libpod-conmon-60e7d8214920b5a5b48e8f20a43c4acb1f183e8359b79018c61a6c5bfb4a11ee.scope: Deactivated successfully.
Oct 02 13:05:15 compute-0 sudo[394550]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:15 compute-0 sudo[394712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:05:15 compute-0 sudo[394712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:15 compute-0 sudo[394712]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:15 compute-0 podman[394687]: 2025-10-02 13:05:15.133974816 +0000 UTC m=+0.520103795 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 13:05:15 compute-0 sudo[394750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:05:15 compute-0 sudo[394750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:15 compute-0 sudo[394750]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:15 compute-0 nova_compute[257802]: 2025-10-02 13:05:15.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:15 compute-0 sudo[394775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:05:15 compute-0 sudo[394775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:15 compute-0 sudo[394775]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:15 compute-0 sudo[394800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 13:05:15 compute-0 sudo[394800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:15.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:15 compute-0 podman[394865]: 2025-10-02 13:05:15.621711955 +0000 UTC m=+0.054392057 container create 960f2edd1aefe1c39fb16fd6a7334ebc173432639b7cab774213787e61749ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 13:05:15 compute-0 podman[394865]: 2025-10-02 13:05:15.586898003 +0000 UTC m=+0.019578125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:05:15 compute-0 systemd[1]: Started libpod-conmon-960f2edd1aefe1c39fb16fd6a7334ebc173432639b7cab774213787e61749ea9.scope.
Oct 02 13:05:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:05:15 compute-0 podman[394865]: 2025-10-02 13:05:15.83936663 +0000 UTC m=+0.272046762 container init 960f2edd1aefe1c39fb16fd6a7334ebc173432639b7cab774213787e61749ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 13:05:15 compute-0 podman[394865]: 2025-10-02 13:05:15.846206356 +0000 UTC m=+0.278886458 container start 960f2edd1aefe1c39fb16fd6a7334ebc173432639b7cab774213787e61749ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_grothendieck, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:05:15 compute-0 pedantic_grothendieck[394882]: 167 167
Oct 02 13:05:15 compute-0 systemd[1]: libpod-960f2edd1aefe1c39fb16fd6a7334ebc173432639b7cab774213787e61749ea9.scope: Deactivated successfully.
Oct 02 13:05:15 compute-0 conmon[394882]: conmon 960f2edd1aefe1c39fb1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-960f2edd1aefe1c39fb16fd6a7334ebc173432639b7cab774213787e61749ea9.scope/container/memory.events
Oct 02 13:05:15 compute-0 ceph-mon[73607]: pgmap v3197: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.1 KiB/s rd, 682 B/s wr, 8 op/s
Oct 02 13:05:15 compute-0 podman[394865]: 2025-10-02 13:05:15.907816007 +0000 UTC m=+0.340496109 container attach 960f2edd1aefe1c39fb16fd6a7334ebc173432639b7cab774213787e61749ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 13:05:15 compute-0 podman[394865]: 2025-10-02 13:05:15.908816 +0000 UTC m=+0.341496102 container died 960f2edd1aefe1c39fb16fd6a7334ebc173432639b7cab774213787e61749ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_grothendieck, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:05:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-38ac8cd3890f20b7db48eb7fd1cb2bd1962bc23da07357cc905cb1551cb2615f-merged.mount: Deactivated successfully.
Oct 02 13:05:16 compute-0 podman[394865]: 2025-10-02 13:05:16.228248449 +0000 UTC m=+0.660928551 container remove 960f2edd1aefe1c39fb16fd6a7334ebc173432639b7cab774213787e61749ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_grothendieck, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:05:16 compute-0 systemd[1]: libpod-conmon-960f2edd1aefe1c39fb16fd6a7334ebc173432639b7cab774213787e61749ea9.scope: Deactivated successfully.
Oct 02 13:05:16 compute-0 podman[394907]: 2025-10-02 13:05:16.403398686 +0000 UTC m=+0.044331493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:05:16 compute-0 podman[394907]: 2025-10-02 13:05:16.54489084 +0000 UTC m=+0.185823567 container create 636ae3116402c8002a0eeca94e7587db8ede0363276bb9e4832d0955fef03b05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_khorana, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 13:05:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3198: 305 pgs: 305 active+clean; 133 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.1 KiB/s rd, 217 KiB/s wr, 6 op/s
Oct 02 13:05:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:16.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:17 compute-0 systemd[1]: Started libpod-conmon-636ae3116402c8002a0eeca94e7587db8ede0363276bb9e4832d0955fef03b05.scope.
Oct 02 13:05:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:05:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c54d63dcb3900230914fe2b4cbd1dc6a01b3d45f04f583f3f748b14b4fb4e88c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:05:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c54d63dcb3900230914fe2b4cbd1dc6a01b3d45f04f583f3f748b14b4fb4e88c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:05:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c54d63dcb3900230914fe2b4cbd1dc6a01b3d45f04f583f3f748b14b4fb4e88c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:05:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c54d63dcb3900230914fe2b4cbd1dc6a01b3d45f04f583f3f748b14b4fb4e88c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:05:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:17.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:18 compute-0 nova_compute[257802]: 2025-10-02 13:05:18.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3199: 305 pgs: 305 active+clean; 146 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 563 KiB/s wr, 19 op/s
Oct 02 13:05:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:18.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:05:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:19.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:05:20 compute-0 nova_compute[257802]: 2025-10-02 13:05:20.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3200: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:05:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:20.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:21 compute-0 nova_compute[257802]: 2025-10-02 13:05:21.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:05:21 compute-0 nova_compute[257802]: 2025-10-02 13:05:21.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:05:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:05:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:21.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:05:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3201: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:05:22 compute-0 sudo[394930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:05:22 compute-0 sudo[394930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:22 compute-0 sudo[394930]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:22.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:22 compute-0 sudo[394955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:05:22 compute-0 sudo[394955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:22 compute-0 sudo[394955]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:23 compute-0 nova_compute[257802]: 2025-10-02 13:05:23.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:05:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:23.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:05:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3202: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:05:24 compute-0 ceph-mds[95441]: mds.beacon.cephfs.compute-0.odxjnj missed beacon ack from the monitors
Oct 02 13:05:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:24.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:25 compute-0 nova_compute[257802]: 2025-10-02 13:05:25.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:25.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:25 compute-0 podman[394907]: 2025-10-02 13:05:25.637992826 +0000 UTC m=+9.278925593 container init 636ae3116402c8002a0eeca94e7587db8ede0363276bb9e4832d0955fef03b05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 13:05:25 compute-0 podman[394907]: 2025-10-02 13:05:25.653587953 +0000 UTC m=+9.294520710 container start 636ae3116402c8002a0eeca94e7587db8ede0363276bb9e4832d0955fef03b05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 13:05:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).mds e11 check_health: resetting beacon timeouts due to mon delay (slow election?) of 11.0756 seconds
Oct 02 13:05:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:25 compute-0 ceph-mon[73607]: pgmap v3198: 305 pgs: 305 active+clean; 133 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.1 KiB/s rd, 217 KiB/s wr, 6 op/s
Oct 02 13:05:25 compute-0 podman[394907]: 2025-10-02 13:05:25.831942498 +0000 UTC m=+9.472875255 container attach 636ae3116402c8002a0eeca94e7587db8ede0363276bb9e4832d0955fef03b05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 13:05:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3203: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]: {
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]:     "1": [
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]:         {
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]:             "devices": [
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]:                 "/dev/loop3"
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]:             ],
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]:             "lv_name": "ceph_lv0",
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]:             "lv_size": "7511998464",
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]:             "name": "ceph_lv0",
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]:             "tags": {
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]:                 "ceph.cluster_name": "ceph",
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]:                 "ceph.crush_device_class": "",
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]:                 "ceph.encrypted": "0",
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]:                 "ceph.osd_id": "1",
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]:                 "ceph.type": "block",
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]:                 "ceph.vdo": "0"
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]:             },
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]:             "type": "block",
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]:             "vg_name": "ceph_vg0"
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]:         }
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]:     ]
Oct 02 13:05:26 compute-0 vibrant_khorana[394924]: }
Oct 02 13:05:26 compute-0 systemd[1]: libpod-636ae3116402c8002a0eeca94e7587db8ede0363276bb9e4832d0955fef03b05.scope: Deactivated successfully.
Oct 02 13:05:26 compute-0 podman[394907]: 2025-10-02 13:05:26.601894945 +0000 UTC m=+10.242827662 container died 636ae3116402c8002a0eeca94e7587db8ede0363276bb9e4832d0955fef03b05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_khorana, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 13:05:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-c54d63dcb3900230914fe2b4cbd1dc6a01b3d45f04f583f3f748b14b4fb4e88c-merged.mount: Deactivated successfully.
Oct 02 13:05:26 compute-0 podman[394907]: 2025-10-02 13:05:26.907289034 +0000 UTC m=+10.548221751 container remove 636ae3116402c8002a0eeca94e7587db8ede0363276bb9e4832d0955fef03b05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_khorana, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 13:05:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:26.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:26 compute-0 systemd[1]: libpod-conmon-636ae3116402c8002a0eeca94e7587db8ede0363276bb9e4832d0955fef03b05.scope: Deactivated successfully.
Oct 02 13:05:26 compute-0 sudo[394800]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:26 compute-0 ceph-mon[73607]: pgmap v3199: 305 pgs: 305 active+clean; 146 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 563 KiB/s wr, 19 op/s
Oct 02 13:05:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3537897323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1490843618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3621640472' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:05:26 compute-0 ceph-mon[73607]: pgmap v3200: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:05:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1765555069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:26 compute-0 ceph-mon[73607]: pgmap v3201: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:05:26 compute-0 ceph-mon[73607]: pgmap v3202: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:05:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/797407119' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:05:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:05:26.987 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:05:26.988 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:05:26.988 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:27 compute-0 sudo[395001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:05:27 compute-0 sudo[395001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:27 compute-0 sudo[395001]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:27 compute-0 sudo[395026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:05:27 compute-0 sudo[395026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:27 compute-0 sudo[395026]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:27 compute-0 sudo[395051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:05:27 compute-0 sudo[395051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:27 compute-0 sudo[395051]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:27 compute-0 sudo[395076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 13:05:27 compute-0 sudo[395076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:05:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:27.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:05:27 compute-0 podman[395141]: 2025-10-02 13:05:27.636500965 +0000 UTC m=+0.090112391 container create e1ee61996daee9c2069b00b51be470e83976052671f1fc10766e14d3c6e43a6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wiles, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:05:27 compute-0 podman[395141]: 2025-10-02 13:05:27.572317702 +0000 UTC m=+0.025929158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:05:27 compute-0 systemd[1]: Started libpod-conmon-e1ee61996daee9c2069b00b51be470e83976052671f1fc10766e14d3c6e43a6e.scope.
Oct 02 13:05:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:05:27 compute-0 podman[395141]: 2025-10-02 13:05:27.801913246 +0000 UTC m=+0.255524702 container init e1ee61996daee9c2069b00b51be470e83976052671f1fc10766e14d3c6e43a6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wiles, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:05:27 compute-0 podman[395141]: 2025-10-02 13:05:27.80948386 +0000 UTC m=+0.263095286 container start e1ee61996daee9c2069b00b51be470e83976052671f1fc10766e14d3c6e43a6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wiles, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:05:27 compute-0 dazzling_wiles[395158]: 167 167
Oct 02 13:05:27 compute-0 systemd[1]: libpod-e1ee61996daee9c2069b00b51be470e83976052671f1fc10766e14d3c6e43a6e.scope: Deactivated successfully.
Oct 02 13:05:27 compute-0 podman[395141]: 2025-10-02 13:05:27.914347867 +0000 UTC m=+0.367959293 container attach e1ee61996daee9c2069b00b51be470e83976052671f1fc10766e14d3c6e43a6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wiles, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:05:27 compute-0 podman[395141]: 2025-10-02 13:05:27.914993202 +0000 UTC m=+0.368604648 container died e1ee61996daee9c2069b00b51be470e83976052671f1fc10766e14d3c6e43a6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wiles, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:05:28 compute-0 ceph-mon[73607]: pgmap v3203: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:05:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2288016458' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:05:28 compute-0 nova_compute[257802]: 2025-10-02 13:05:28.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:05:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-58120b78a2621c6d900598efc40303195a0f4b244ddc99e1ce1e2b62e654ecd2-merged.mount: Deactivated successfully.
Oct 02 13:05:28 compute-0 podman[395141]: 2025-10-02 13:05:28.156583137 +0000 UTC m=+0.610194573 container remove e1ee61996daee9c2069b00b51be470e83976052671f1fc10766e14d3c6e43a6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wiles, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 13:05:28 compute-0 systemd[1]: libpod-conmon-e1ee61996daee9c2069b00b51be470e83976052671f1fc10766e14d3c6e43a6e.scope: Deactivated successfully.
Oct 02 13:05:28 compute-0 nova_compute[257802]: 2025-10-02 13:05:28.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:28 compute-0 podman[395183]: 2025-10-02 13:05:28.36006127 +0000 UTC m=+0.049829546 container create e6f25c78a63f3eca8f9413086371fcc160596a26afacc2486b1364013712bde7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wozniak, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 13:05:28 compute-0 systemd[1]: Started libpod-conmon-e6f25c78a63f3eca8f9413086371fcc160596a26afacc2486b1364013712bde7.scope.
Oct 02 13:05:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:05:28 compute-0 podman[395183]: 2025-10-02 13:05:28.336706974 +0000 UTC m=+0.026475300 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:05:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3e1b111583fac76e3ce6398e730f4ab7e532fbbd71442231cefec601d04f155/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:05:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3e1b111583fac76e3ce6398e730f4ab7e532fbbd71442231cefec601d04f155/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:05:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3e1b111583fac76e3ce6398e730f4ab7e532fbbd71442231cefec601d04f155/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:05:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3e1b111583fac76e3ce6398e730f4ab7e532fbbd71442231cefec601d04f155/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:05:28 compute-0 podman[395183]: 2025-10-02 13:05:28.456074003 +0000 UTC m=+0.145842309 container init e6f25c78a63f3eca8f9413086371fcc160596a26afacc2486b1364013712bde7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 13:05:28 compute-0 podman[395183]: 2025-10-02 13:05:28.462618 +0000 UTC m=+0.152386276 container start e6f25c78a63f3eca8f9413086371fcc160596a26afacc2486b1364013712bde7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wozniak, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 13:05:28 compute-0 podman[395183]: 2025-10-02 13:05:28.466813332 +0000 UTC m=+0.156581718 container attach e6f25c78a63f3eca8f9413086371fcc160596a26afacc2486b1364013712bde7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:05:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3204: 305 pgs: 305 active+clean; 189 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 2.5 MiB/s wr, 38 op/s
Oct 02 13:05:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:28.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/868795829' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:05:29 compute-0 ceph-mon[73607]: pgmap v3204: 305 pgs: 305 active+clean; 189 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 2.5 MiB/s wr, 38 op/s
Oct 02 13:05:29 compute-0 eager_wozniak[395199]: {
Oct 02 13:05:29 compute-0 eager_wozniak[395199]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 13:05:29 compute-0 eager_wozniak[395199]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:05:29 compute-0 eager_wozniak[395199]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:05:29 compute-0 eager_wozniak[395199]:         "osd_id": 1,
Oct 02 13:05:29 compute-0 eager_wozniak[395199]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:05:29 compute-0 eager_wozniak[395199]:         "type": "bluestore"
Oct 02 13:05:29 compute-0 eager_wozniak[395199]:     }
Oct 02 13:05:29 compute-0 eager_wozniak[395199]: }
Oct 02 13:05:29 compute-0 systemd[1]: libpod-e6f25c78a63f3eca8f9413086371fcc160596a26afacc2486b1364013712bde7.scope: Deactivated successfully.
Oct 02 13:05:29 compute-0 podman[395183]: 2025-10-02 13:05:29.299780914 +0000 UTC m=+0.989549190 container died e6f25c78a63f3eca8f9413086371fcc160596a26afacc2486b1364013712bde7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 13:05:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3e1b111583fac76e3ce6398e730f4ab7e532fbbd71442231cefec601d04f155-merged.mount: Deactivated successfully.
Oct 02 13:05:29 compute-0 podman[395183]: 2025-10-02 13:05:29.361516567 +0000 UTC m=+1.051284843 container remove e6f25c78a63f3eca8f9413086371fcc160596a26afacc2486b1364013712bde7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wozniak, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 13:05:29 compute-0 systemd[1]: libpod-conmon-e6f25c78a63f3eca8f9413086371fcc160596a26afacc2486b1364013712bde7.scope: Deactivated successfully.
Oct 02 13:05:29 compute-0 sudo[395076]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:05:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:05:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:29.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:05:29 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:05:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:05:29 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:05:29 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 0333a0a9-74d8-444f-a733-d59110a55209 does not exist
Oct 02 13:05:29 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 59722f7a-0242-4766-a2b5-5930c7b7923d does not exist
Oct 02 13:05:29 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 4b772373-7e20-4e9e-b6dd-98555d414ef9 does not exist
Oct 02 13:05:29 compute-0 sudo[395233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:05:29 compute-0 sudo[395233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:29 compute-0 sudo[395233]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:29 compute-0 sudo[395258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:05:29 compute-0 sudo[395258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:29 compute-0 sudo[395258]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:30 compute-0 nova_compute[257802]: 2025-10-02 13:05:30.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:05:30 compute-0 nova_compute[257802]: 2025-10-02 13:05:30.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:05:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:05:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3205: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 189 KiB/s rd, 3.0 MiB/s wr, 51 op/s
Oct 02 13:05:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:30.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:31 compute-0 nova_compute[257802]: 2025-10-02 13:05:31.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:05:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:31.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:31 compute-0 ceph-mon[73607]: pgmap v3205: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 189 KiB/s rd, 3.0 MiB/s wr, 51 op/s
Oct 02 13:05:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3206: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 183 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Oct 02 13:05:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:05:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:32.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:05:33 compute-0 nova_compute[257802]: 2025-10-02 13:05:33.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:05:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:33.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:05:33 compute-0 ceph-mon[73607]: pgmap v3206: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 183 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Oct 02 13:05:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3207: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 144 op/s
Oct 02 13:05:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:34.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:35 compute-0 nova_compute[257802]: 2025-10-02 13:05:35.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:35.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:35 compute-0 ceph-mon[73607]: pgmap v3207: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 144 op/s
Oct 02 13:05:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:35.785476) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410335785551, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 534, "num_deletes": 252, "total_data_size": 593242, "memory_usage": 602568, "flush_reason": "Manual Compaction"}
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410335790794, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 503389, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70817, "largest_seqno": 71349, "table_properties": {"data_size": 500449, "index_size": 911, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7949, "raw_average_key_size": 21, "raw_value_size": 494372, "raw_average_value_size": 1314, "num_data_blocks": 38, "num_entries": 376, "num_filter_entries": 376, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410307, "oldest_key_time": 1759410307, "file_creation_time": 1759410335, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 5381 microseconds, and 2275 cpu microseconds.
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:35.790866) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 503389 bytes OK
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:35.790886) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:35.792744) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:35.792759) EVENT_LOG_v1 {"time_micros": 1759410335792754, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:35.792777) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 590176, prev total WAL file size 590176, number of live WAL files 2.
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:35.793341) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353130' seq:72057594037927935, type:22 .. '6D6772737461740032373633' seq:0, type:0; will stop at (end)
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(491KB)], [161(13MB)]
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410335793388, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 14293053, "oldest_snapshot_seqno": -1}
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 9601 keys, 10455323 bytes, temperature: kUnknown
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410335855964, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 10455323, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10396087, "index_size": 34099, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24069, "raw_key_size": 254337, "raw_average_key_size": 26, "raw_value_size": 10230595, "raw_average_value_size": 1065, "num_data_blocks": 1286, "num_entries": 9601, "num_filter_entries": 9601, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759410335, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:35.856256) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 10455323 bytes
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:35.857998) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 228.2 rd, 166.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 13.2 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(49.2) write-amplify(20.8) OK, records in: 10118, records dropped: 517 output_compression: NoCompression
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:35.858016) EVENT_LOG_v1 {"time_micros": 1759410335858008, "job": 100, "event": "compaction_finished", "compaction_time_micros": 62646, "compaction_time_cpu_micros": 27803, "output_level": 6, "num_output_files": 1, "total_output_size": 10455323, "num_input_records": 10118, "num_output_records": 9601, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410335858191, "job": 100, "event": "table_file_deletion", "file_number": 163}
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410335860538, "job": 100, "event": "table_file_deletion", "file_number": 161}
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:35.793237) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:35.860622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:35.860628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:35.860630) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:35.860632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:05:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:05:35.860634) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:05:36 compute-0 nova_compute[257802]: 2025-10-02 13:05:36.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:05:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3208: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Oct 02 13:05:36 compute-0 podman[395287]: 2025-10-02 13:05:36.920100899 +0000 UTC m=+0.057574273 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 13:05:36 compute-0 podman[395289]: 2025-10-02 13:05:36.920234662 +0000 UTC m=+0.054859398 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid)
Oct 02 13:05:36 compute-0 podman[395288]: 2025-10-02 13:05:36.928050141 +0000 UTC m=+0.065293160 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:05:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:05:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:36.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:05:37 compute-0 nova_compute[257802]: 2025-10-02 13:05:37.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:05:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:37.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:37 compute-0 ceph-mon[73607]: pgmap v3208: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Oct 02 13:05:38 compute-0 nova_compute[257802]: 2025-10-02 13:05:38.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3209: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Oct 02 13:05:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:38.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:39.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:39 compute-0 ceph-mon[73607]: pgmap v3209: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Oct 02 13:05:39 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2523197540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:40 compute-0 nova_compute[257802]: 2025-10-02 13:05:40.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3210: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 898 KiB/s wr, 160 op/s
Oct 02 13:05:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/277765840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:40.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:41.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:41 compute-0 ceph-mon[73607]: pgmap v3210: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 898 KiB/s wr, 160 op/s
Oct 02 13:05:42 compute-0 nova_compute[257802]: 2025-10-02 13:05:42.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:05:42 compute-0 nova_compute[257802]: 2025-10-02 13:05:42.097 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:05:42 compute-0 nova_compute[257802]: 2025-10-02 13:05:42.097 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:05:42 compute-0 nova_compute[257802]: 2025-10-02 13:05:42.111 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:05:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:05:42
Oct 02 13:05:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:05:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:05:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', '.mgr', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'default.rgw.meta', 'images']
Oct 02 13:05:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:05:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3211: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 12 KiB/s wr, 133 op/s
Oct 02 13:05:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:05:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:05:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:05:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:05:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:05:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:05:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:42.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:43 compute-0 sudo[395346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:05:43 compute-0 sudo[395346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:43 compute-0 sudo[395346]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:43 compute-0 sudo[395371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:05:43 compute-0 sudo[395371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:43 compute-0 sudo[395371]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:43 compute-0 nova_compute[257802]: 2025-10-02 13:05:43.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:43.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:05:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:05:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:05:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:05:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:05:43 compute-0 ceph-mon[73607]: pgmap v3211: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 12 KiB/s wr, 133 op/s
Oct 02 13:05:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:05:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:05:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:05:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:05:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:05:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3212: 305 pgs: 305 active+clean; 240 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.6 MiB/s wr, 181 op/s
Oct 02 13:05:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:05:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:44.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:05:44 compute-0 ceph-mon[73607]: pgmap v3212: 305 pgs: 305 active+clean; 240 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.6 MiB/s wr, 181 op/s
Oct 02 13:05:45 compute-0 nova_compute[257802]: 2025-10-02 13:05:45.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:45.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:45 compute-0 podman[395397]: 2025-10-02 13:05:45.931373156 +0000 UTC m=+0.074665328 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:05:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3213: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 99 op/s
Oct 02 13:05:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:46.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:47 compute-0 nova_compute[257802]: 2025-10-02 13:05:47.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:05:47 compute-0 nova_compute[257802]: 2025-10-02 13:05:47.129 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:47 compute-0 nova_compute[257802]: 2025-10-02 13:05:47.129 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:47 compute-0 nova_compute[257802]: 2025-10-02 13:05:47.129 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:47 compute-0 nova_compute[257802]: 2025-10-02 13:05:47.129 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:05:47 compute-0 nova_compute[257802]: 2025-10-02 13:05:47.130 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:05:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:47.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:05:47 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2499657768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:47 compute-0 nova_compute[257802]: 2025-10-02 13:05:47.551 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:05:47 compute-0 ceph-mon[73607]: pgmap v3213: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 99 op/s
Oct 02 13:05:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2499657768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:47 compute-0 nova_compute[257802]: 2025-10-02 13:05:47.726 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:05:47 compute-0 nova_compute[257802]: 2025-10-02 13:05:47.727 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4226MB free_disk=20.921958923339844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:05:47 compute-0 nova_compute[257802]: 2025-10-02 13:05:47.727 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:47 compute-0 nova_compute[257802]: 2025-10-02 13:05:47.728 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:47 compute-0 nova_compute[257802]: 2025-10-02 13:05:47.811 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:05:47 compute-0 nova_compute[257802]: 2025-10-02 13:05:47.811 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:05:47 compute-0 nova_compute[257802]: 2025-10-02 13:05:47.837 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing inventories for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 13:05:47 compute-0 nova_compute[257802]: 2025-10-02 13:05:47.866 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating ProviderTree inventory for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 13:05:47 compute-0 nova_compute[257802]: 2025-10-02 13:05:47.866 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating inventory in ProviderTree for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 13:05:47 compute-0 nova_compute[257802]: 2025-10-02 13:05:47.885 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing aggregate associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 13:05:47 compute-0 nova_compute[257802]: 2025-10-02 13:05:47.908 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing trait associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 13:05:47 compute-0 nova_compute[257802]: 2025-10-02 13:05:47.928 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:05:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:05:48 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2030892442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:48 compute-0 nova_compute[257802]: 2025-10-02 13:05:48.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:48 compute-0 nova_compute[257802]: 2025-10-02 13:05:48.336 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:05:48 compute-0 nova_compute[257802]: 2025-10-02 13:05:48.341 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:05:48 compute-0 nova_compute[257802]: 2025-10-02 13:05:48.361 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:05:48 compute-0 nova_compute[257802]: 2025-10-02 13:05:48.391 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:05:48 compute-0 nova_compute[257802]: 2025-10-02 13:05:48.392 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3214: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 02 13:05:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2030892442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:48.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:49.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:50 compute-0 ceph-mon[73607]: pgmap v3214: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 02 13:05:50 compute-0 nova_compute[257802]: 2025-10-02 13:05:50.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3215: 305 pgs: 305 active+clean; 227 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Oct 02 13:05:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:05:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:50.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:05:51 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:05:51.043 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=77, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=76) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:05:51 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:05:51.044 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:05:51 compute-0 nova_compute[257802]: 2025-10-02 13:05:51.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:51 compute-0 ceph-mon[73607]: pgmap v3215: 305 pgs: 305 active+clean; 227 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Oct 02 13:05:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:51.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3817092173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1422738161' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:05:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3216: 305 pgs: 305 active+clean; 227 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Oct 02 13:05:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:05:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:52.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:05:53 compute-0 nova_compute[257802]: 2025-10-02 13:05:53.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:53 compute-0 ceph-mon[73607]: pgmap v3216: 305 pgs: 305 active+clean; 227 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Oct 02 13:05:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:05:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:53.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:05:54 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:05:54.045 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '77'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:05:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3217: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 99 op/s
Oct 02 13:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021710476572678206 of space, bias 1.0, pg target 0.6513142971803462 quantized to 32 (current 32)
Oct 02 13:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002162686988181649 of space, bias 1.0, pg target 0.6488060964544947 quantized to 32 (current 32)
Oct 02 13:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:05:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:05:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:54.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:05:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4284501241' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:05:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:05:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4284501241' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:05:55 compute-0 nova_compute[257802]: 2025-10-02 13:05:55.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:55.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:56 compute-0 ceph-mon[73607]: pgmap v3217: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 99 op/s
Oct 02 13:05:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/4284501241' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:05:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/4284501241' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:05:56 compute-0 nova_compute[257802]: 2025-10-02 13:05:56.393 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:05:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3218: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 172 KiB/s rd, 571 KiB/s wr, 53 op/s
Oct 02 13:05:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:05:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:56.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:05:57 compute-0 ceph-mon[73607]: pgmap v3218: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 172 KiB/s rd, 571 KiB/s wr, 53 op/s
Oct 02 13:05:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:57.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:58 compute-0 nova_compute[257802]: 2025-10-02 13:05:58.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3219: 305 pgs: 305 active+clean; 172 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 14 KiB/s wr, 45 op/s
Oct 02 13:05:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:58.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:05:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:59.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:59 compute-0 ceph-mon[73607]: pgmap v3219: 305 pgs: 305 active+clean; 172 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 14 KiB/s wr, 45 op/s
Oct 02 13:06:00 compute-0 nova_compute[257802]: 2025-10-02 13:06:00.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3220: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 16 KiB/s wr, 55 op/s
Oct 02 13:06:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4114918587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:00.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:01.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:01 compute-0 ceph-mon[73607]: pgmap v3220: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 16 KiB/s wr, 55 op/s
Oct 02 13:06:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3221: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 3.0 KiB/s wr, 38 op/s
Oct 02 13:06:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:06:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:02.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:06:02 compute-0 ceph-mon[73607]: pgmap v3221: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 3.0 KiB/s wr, 38 op/s
Oct 02 13:06:03 compute-0 sudo[395479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:03 compute-0 sudo[395479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:03 compute-0 sudo[395479]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:03 compute-0 sudo[395504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:03 compute-0 sudo[395504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:03 compute-0 sudo[395504]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:03 compute-0 nova_compute[257802]: 2025-10-02 13:06:03.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:03.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2961042422' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:06:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2961042422' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:06:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3222: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 3.3 KiB/s wr, 45 op/s
Oct 02 13:06:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:06:04 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/321137154' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:06:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:06:04 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/321137154' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:06:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:06:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:04.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:06:05 compute-0 nova_compute[257802]: 2025-10-02 13:06:05.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:05 compute-0 ceph-mon[73607]: pgmap v3222: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 3.3 KiB/s wr, 45 op/s
Oct 02 13:06:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/321137154' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:06:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/321137154' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:06:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:05.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3223: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 3.3 KiB/s wr, 44 op/s
Oct 02 13:06:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:06.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:06:07 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1451230974' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:06:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:06:07 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1451230974' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:06:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:07.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:07 compute-0 ceph-mon[73607]: pgmap v3223: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 3.3 KiB/s wr, 44 op/s
Oct 02 13:06:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1451230974' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:06:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1451230974' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:06:07 compute-0 podman[395531]: 2025-10-02 13:06:07.911670777 +0000 UTC m=+0.049621472 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:06:07 compute-0 podman[395533]: 2025-10-02 13:06:07.943976188 +0000 UTC m=+0.066362856 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 02 13:06:07 compute-0 podman[395532]: 2025-10-02 13:06:07.949858741 +0000 UTC m=+0.082969029 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 13:06:08 compute-0 nova_compute[257802]: 2025-10-02 13:06:08.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3224: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 3.3 KiB/s wr, 64 op/s
Oct 02 13:06:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:08.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:06:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:09.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:06:09 compute-0 ceph-mon[73607]: pgmap v3224: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 3.3 KiB/s wr, 64 op/s
Oct 02 13:06:10 compute-0 nova_compute[257802]: 2025-10-02 13:06:10.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3225: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 3.2 KiB/s wr, 57 op/s
Oct 02 13:06:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:06:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:10.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:06:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:11.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:11 compute-0 ceph-mon[73607]: pgmap v3225: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 3.2 KiB/s wr, 57 op/s
Oct 02 13:06:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3226: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 1.1 KiB/s wr, 45 op/s
Oct 02 13:06:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:06:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:06:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:06:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:06:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:06:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:06:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:06:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:12.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:06:13 compute-0 nova_compute[257802]: 2025-10-02 13:06:13.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:13.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:13 compute-0 ceph-mon[73607]: pgmap v3226: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 1.1 KiB/s wr, 45 op/s
Oct 02 13:06:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3227: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 1.1 KiB/s wr, 45 op/s
Oct 02 13:06:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:06:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:14.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:06:15 compute-0 nova_compute[257802]: 2025-10-02 13:06:15.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:15.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:15 compute-0 ceph-mon[73607]: pgmap v3227: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 1.1 KiB/s wr, 45 op/s
Oct 02 13:06:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3228: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 767 B/s wr, 38 op/s
Oct 02 13:06:16 compute-0 ceph-mon[73607]: pgmap v3228: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 767 B/s wr, 38 op/s
Oct 02 13:06:16 compute-0 podman[395594]: 2025-10-02 13:06:16.928803946 +0000 UTC m=+0.073489000 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.build-date=20251001, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:06:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:16.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:17.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:18 compute-0 nova_compute[257802]: 2025-10-02 13:06:18.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3229: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 255 B/s wr, 25 op/s
Oct 02 13:06:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:06:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:18.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:06:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:19.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:19 compute-0 ceph-mon[73607]: pgmap v3229: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 255 B/s wr, 25 op/s
Oct 02 13:06:20 compute-0 nova_compute[257802]: 2025-10-02 13:06:20.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3230: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 3 op/s
Oct 02 13:06:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2478314371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:20.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:06:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:21.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:06:21 compute-0 ceph-mon[73607]: pgmap v3230: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 3 op/s
Oct 02 13:06:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2674143807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3231: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:06:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2851169436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:22.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:23 compute-0 nova_compute[257802]: 2025-10-02 13:06:23.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:23 compute-0 nova_compute[257802]: 2025-10-02 13:06:23.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:06:23 compute-0 sudo[395623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:23 compute-0 sudo[395623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:23 compute-0 sudo[395623]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:23 compute-0 nova_compute[257802]: 2025-10-02 13:06:23.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:23 compute-0 sudo[395648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:23 compute-0 sudo[395648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:23 compute-0 sudo[395648]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:23.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:24 compute-0 ceph-mon[73607]: pgmap v3231: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:06:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3232: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Oct 02 13:06:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:06:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:25.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:06:25 compute-0 ceph-mon[73607]: pgmap v3232: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Oct 02 13:06:25 compute-0 nova_compute[257802]: 2025-10-02 13:06:25.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:25.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:25 compute-0 sshd-session[395674]: Invalid user vr from 167.99.55.34 port 34456
Oct 02 13:06:25 compute-0 sshd-session[395674]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 13:06:25 compute-0 sshd-session[395674]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=167.99.55.34
Oct 02 13:06:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3233: 305 pgs: 305 active+clean; 131 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 318 KiB/s wr, 13 op/s
Oct 02 13:06:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:26.988 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:26.988 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:26.989 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:06:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:27.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:06:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:27.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:27 compute-0 ceph-mon[73607]: pgmap v3233: 305 pgs: 305 active+clean; 131 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 318 KiB/s wr, 13 op/s
Oct 02 13:06:28 compute-0 sshd-session[395674]: Failed password for invalid user vr from 167.99.55.34 port 34456 ssh2
Oct 02 13:06:28 compute-0 nova_compute[257802]: 2025-10-02 13:06:28.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3234: 305 pgs: 305 active+clean; 154 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 25 op/s
Oct 02 13:06:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/273144948' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:06:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:06:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:29.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:06:29 compute-0 nova_compute[257802]: 2025-10-02 13:06:29.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:29.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:29 compute-0 sshd-session[395674]: Connection closed by invalid user vr 167.99.55.34 port 34456 [preauth]
Oct 02 13:06:29 compute-0 sudo[395678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:29 compute-0 sudo[395678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:29 compute-0 sudo[395678]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:29 compute-0 sudo[395703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:06:29 compute-0 sudo[395703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:29 compute-0 sudo[395703]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:30 compute-0 ceph-mon[73607]: pgmap v3234: 305 pgs: 305 active+clean; 154 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 25 op/s
Oct 02 13:06:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/425471803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:06:30 compute-0 sudo[395728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:30 compute-0 sudo[395728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:30 compute-0 sudo[395728]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:30 compute-0 sudo[395753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:06:30 compute-0 sudo[395753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:30 compute-0 nova_compute[257802]: 2025-10-02 13:06:30.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:30 compute-0 sudo[395753]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3235: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:06:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:31.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:31 compute-0 nova_compute[257802]: 2025-10-02 13:06:31.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:31 compute-0 nova_compute[257802]: 2025-10-02 13:06:31.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:31 compute-0 ceph-mon[73607]: pgmap v3235: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:06:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:31.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3236: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:06:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:33.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:33 compute-0 ceph-mon[73607]: pgmap v3236: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:06:33 compute-0 nova_compute[257802]: 2025-10-02 13:06:33.299 2 DEBUG oslo_concurrency.lockutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "ededa679-1161-406f-bccd-e571b80d8f7f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:33 compute-0 nova_compute[257802]: 2025-10-02 13:06:33.299 2 DEBUG oslo_concurrency.lockutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:33 compute-0 nova_compute[257802]: 2025-10-02 13:06:33.326 2 DEBUG nova.compute.manager [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:06:33 compute-0 nova_compute[257802]: 2025-10-02 13:06:33.406 2 DEBUG oslo_concurrency.lockutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:33 compute-0 nova_compute[257802]: 2025-10-02 13:06:33.407 2 DEBUG oslo_concurrency.lockutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:33 compute-0 nova_compute[257802]: 2025-10-02 13:06:33.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:33 compute-0 nova_compute[257802]: 2025-10-02 13:06:33.413 2 DEBUG nova.virt.hardware [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:06:33 compute-0 nova_compute[257802]: 2025-10-02 13:06:33.414 2 INFO nova.compute.claims [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Claim successful on node compute-0.ctlplane.example.com
Oct 02 13:06:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:33.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:33 compute-0 nova_compute[257802]: 2025-10-02 13:06:33.520 2 DEBUG oslo_concurrency.processutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:06:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:06:33 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/372186077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:33 compute-0 nova_compute[257802]: 2025-10-02 13:06:33.936 2 DEBUG oslo_concurrency.processutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:06:33 compute-0 nova_compute[257802]: 2025-10-02 13:06:33.945 2 DEBUG nova.compute.provider_tree [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:06:33 compute-0 nova_compute[257802]: 2025-10-02 13:06:33.963 2 DEBUG nova.scheduler.client.report [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:06:33 compute-0 nova_compute[257802]: 2025-10-02 13:06:33.989 2 DEBUG oslo_concurrency.lockutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:33 compute-0 nova_compute[257802]: 2025-10-02 13:06:33.990 2 DEBUG nova.compute.manager [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:06:34 compute-0 nova_compute[257802]: 2025-10-02 13:06:34.052 2 DEBUG nova.compute.manager [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:06:34 compute-0 nova_compute[257802]: 2025-10-02 13:06:34.053 2 DEBUG nova.network.neutron [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:06:34 compute-0 nova_compute[257802]: 2025-10-02 13:06:34.096 2 INFO nova.virt.libvirt.driver [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:06:34 compute-0 nova_compute[257802]: 2025-10-02 13:06:34.142 2 DEBUG nova.compute.manager [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:06:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/372186077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:34 compute-0 nova_compute[257802]: 2025-10-02 13:06:34.266 2 DEBUG nova.compute.manager [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:06:34 compute-0 nova_compute[257802]: 2025-10-02 13:06:34.267 2 DEBUG nova.virt.libvirt.driver [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:06:34 compute-0 nova_compute[257802]: 2025-10-02 13:06:34.268 2 INFO nova.virt.libvirt.driver [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Creating image(s)
Oct 02 13:06:34 compute-0 nova_compute[257802]: 2025-10-02 13:06:34.306 2 DEBUG nova.storage.rbd_utils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image ededa679-1161-406f-bccd-e571b80d8f7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:06:34 compute-0 nova_compute[257802]: 2025-10-02 13:06:34.338 2 DEBUG nova.storage.rbd_utils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image ededa679-1161-406f-bccd-e571b80d8f7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:06:34 compute-0 nova_compute[257802]: 2025-10-02 13:06:34.367 2 DEBUG nova.storage.rbd_utils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image ededa679-1161-406f-bccd-e571b80d8f7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:06:34 compute-0 nova_compute[257802]: 2025-10-02 13:06:34.372 2 DEBUG oslo_concurrency.processutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:06:34 compute-0 nova_compute[257802]: 2025-10-02 13:06:34.438 2 DEBUG oslo_concurrency.processutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:06:34 compute-0 nova_compute[257802]: 2025-10-02 13:06:34.439 2 DEBUG oslo_concurrency.lockutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:34 compute-0 nova_compute[257802]: 2025-10-02 13:06:34.440 2 DEBUG oslo_concurrency.lockutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:34 compute-0 nova_compute[257802]: 2025-10-02 13:06:34.440 2 DEBUG oslo_concurrency.lockutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:34 compute-0 nova_compute[257802]: 2025-10-02 13:06:34.474 2 DEBUG nova.storage.rbd_utils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image ededa679-1161-406f-bccd-e571b80d8f7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:06:34 compute-0 nova_compute[257802]: 2025-10-02 13:06:34.479 2 DEBUG oslo_concurrency.processutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 ededa679-1161-406f-bccd-e571b80d8f7f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:06:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 13:06:34 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:06:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 13:06:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3237: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 02 13:06:34 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:06:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:06:34 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:06:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:06:34 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:06:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:06:34 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:06:34 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 9429b457-e15a-468f-8e33-a234e2c527a8 does not exist
Oct 02 13:06:34 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev ba2f49fb-1d39-435b-b393-a178293fe788 does not exist
Oct 02 13:06:34 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev dabc948d-e088-455c-a2fb-77257a95e4fb does not exist
Oct 02 13:06:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:06:34 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:06:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:06:34 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:06:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:06:34 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:06:34 compute-0 sudo[395927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:34 compute-0 sudo[395927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:34 compute-0 sudo[395927]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:34 compute-0 nova_compute[257802]: 2025-10-02 13:06:34.743 2 DEBUG nova.policy [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ffe4d737e4414fb3a3e358f8ca3f3e1e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '08e102ae48244af2ab448a2e1ff757df', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:06:34 compute-0 sudo[395952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:06:34 compute-0 sudo[395952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:34 compute-0 sudo[395952]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:34 compute-0 nova_compute[257802]: 2025-10-02 13:06:34.816 2 DEBUG oslo_concurrency.processutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 ededa679-1161-406f-bccd-e571b80d8f7f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:06:34 compute-0 sudo[395977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:34 compute-0 sudo[395977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:34 compute-0 sudo[395977]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:34 compute-0 nova_compute[257802]: 2025-10-02 13:06:34.903 2 DEBUG nova.storage.rbd_utils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] resizing rbd image ededa679-1161-406f-bccd-e571b80d8f7f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:06:34 compute-0 sudo[396020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:06:34 compute-0 sudo[396020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:35.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:35 compute-0 nova_compute[257802]: 2025-10-02 13:06:35.027 2 DEBUG nova.objects.instance [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'migration_context' on Instance uuid ededa679-1161-406f-bccd-e571b80d8f7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:06:35 compute-0 nova_compute[257802]: 2025-10-02 13:06:35.042 2 DEBUG nova.virt.libvirt.driver [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:06:35 compute-0 nova_compute[257802]: 2025-10-02 13:06:35.042 2 DEBUG nova.virt.libvirt.driver [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Ensure instance console log exists: /var/lib/nova/instances/ededa679-1161-406f-bccd-e571b80d8f7f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:06:35 compute-0 nova_compute[257802]: 2025-10-02 13:06:35.043 2 DEBUG oslo_concurrency.lockutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:35 compute-0 nova_compute[257802]: 2025-10-02 13:06:35.044 2 DEBUG oslo_concurrency.lockutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:35 compute-0 nova_compute[257802]: 2025-10-02 13:06:35.045 2 DEBUG oslo_concurrency.lockutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:35 compute-0 nova_compute[257802]: 2025-10-02 13:06:35.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:35 compute-0 podman[396136]: 2025-10-02 13:06:35.237675186 +0000 UTC m=+0.061654082 container create 1b1815ed65d49ea8770608e8dfbc551f044df25fd51d4b4487aa4304be289b01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_khayyam, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct 02 13:06:35 compute-0 podman[396136]: 2025-10-02 13:06:35.195873355 +0000 UTC m=+0.019852271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:06:35 compute-0 systemd[1]: Started libpod-conmon-1b1815ed65d49ea8770608e8dfbc551f044df25fd51d4b4487aa4304be289b01.scope.
Oct 02 13:06:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:06:35 compute-0 podman[396136]: 2025-10-02 13:06:35.433110045 +0000 UTC m=+0.257088961 container init 1b1815ed65d49ea8770608e8dfbc551f044df25fd51d4b4487aa4304be289b01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:06:35 compute-0 podman[396136]: 2025-10-02 13:06:35.44080891 +0000 UTC m=+0.264787806 container start 1b1815ed65d49ea8770608e8dfbc551f044df25fd51d4b4487aa4304be289b01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_khayyam, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:06:35 compute-0 crazy_khayyam[396152]: 167 167
Oct 02 13:06:35 compute-0 systemd[1]: libpod-1b1815ed65d49ea8770608e8dfbc551f044df25fd51d4b4487aa4304be289b01.scope: Deactivated successfully.
Oct 02 13:06:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:35.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:35 compute-0 podman[396136]: 2025-10-02 13:06:35.530185593 +0000 UTC m=+0.354164509 container attach 1b1815ed65d49ea8770608e8dfbc551f044df25fd51d4b4487aa4304be289b01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 13:06:35 compute-0 podman[396136]: 2025-10-02 13:06:35.530671884 +0000 UTC m=+0.354650780 container died 1b1815ed65d49ea8770608e8dfbc551f044df25fd51d4b4487aa4304be289b01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_khayyam, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Oct 02 13:06:35 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:06:35 compute-0 ceph-mon[73607]: pgmap v3237: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 02 13:06:35 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:06:35 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:06:35 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:06:35 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:06:35 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:06:35 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:06:35 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:06:35 compute-0 nova_compute[257802]: 2025-10-02 13:06:35.763 2 DEBUG nova.network.neutron [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Successfully created port: ccb82929-088d-402f-87fd-a1b429d52ff7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:06:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f7fc1828c748e6740692906c3df2bd1f6b02e98f6c5ec847e5f76a387bfd94a-merged.mount: Deactivated successfully.
Oct 02 13:06:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:36 compute-0 podman[396136]: 2025-10-02 13:06:36.078540619 +0000 UTC m=+0.902519515 container remove 1b1815ed65d49ea8770608e8dfbc551f044df25fd51d4b4487aa4304be289b01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_khayyam, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 13:06:36 compute-0 nova_compute[257802]: 2025-10-02 13:06:36.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:36 compute-0 systemd[1]: libpod-conmon-1b1815ed65d49ea8770608e8dfbc551f044df25fd51d4b4487aa4304be289b01.scope: Deactivated successfully.
Oct 02 13:06:36 compute-0 nova_compute[257802]: 2025-10-02 13:06:36.141 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:36 compute-0 podman[396176]: 2025-10-02 13:06:36.275171706 +0000 UTC m=+0.074937804 container create 594b8562c725796b70087cccf23b77381e83b99cdcc00cfd8a49ac1306108bb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kilby, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:06:36 compute-0 podman[396176]: 2025-10-02 13:06:36.222673927 +0000 UTC m=+0.022440025 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:06:36 compute-0 systemd[1]: Started libpod-conmon-594b8562c725796b70087cccf23b77381e83b99cdcc00cfd8a49ac1306108bb5.scope.
Oct 02 13:06:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:06:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d2f73de7e85d4e616e3c1549ce7ecda5431f2cc382009958492f4775f594085/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:06:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d2f73de7e85d4e616e3c1549ce7ecda5431f2cc382009958492f4775f594085/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:06:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d2f73de7e85d4e616e3c1549ce7ecda5431f2cc382009958492f4775f594085/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:06:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d2f73de7e85d4e616e3c1549ce7ecda5431f2cc382009958492f4775f594085/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:06:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d2f73de7e85d4e616e3c1549ce7ecda5431f2cc382009958492f4775f594085/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:06:36 compute-0 podman[396176]: 2025-10-02 13:06:36.466227748 +0000 UTC m=+0.265993836 container init 594b8562c725796b70087cccf23b77381e83b99cdcc00cfd8a49ac1306108bb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kilby, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 13:06:36 compute-0 podman[396176]: 2025-10-02 13:06:36.47208049 +0000 UTC m=+0.271846568 container start 594b8562c725796b70087cccf23b77381e83b99cdcc00cfd8a49ac1306108bb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kilby, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 13:06:36 compute-0 nova_compute[257802]: 2025-10-02 13:06:36.476 2 DEBUG nova.network.neutron [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Successfully updated port: ccb82929-088d-402f-87fd-a1b429d52ff7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:06:36 compute-0 nova_compute[257802]: 2025-10-02 13:06:36.496 2 DEBUG oslo_concurrency.lockutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "refresh_cache-ededa679-1161-406f-bccd-e571b80d8f7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:06:36 compute-0 nova_compute[257802]: 2025-10-02 13:06:36.497 2 DEBUG oslo_concurrency.lockutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquired lock "refresh_cache-ededa679-1161-406f-bccd-e571b80d8f7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:06:36 compute-0 nova_compute[257802]: 2025-10-02 13:06:36.497 2 DEBUG nova.network.neutron [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:06:36 compute-0 podman[396176]: 2025-10-02 13:06:36.519687792 +0000 UTC m=+0.319453940 container attach 594b8562c725796b70087cccf23b77381e83b99cdcc00cfd8a49ac1306108bb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kilby, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:06:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3238: 305 pgs: 305 active+clean; 191 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 145 KiB/s rd, 2.7 MiB/s wr, 41 op/s
Oct 02 13:06:36 compute-0 nova_compute[257802]: 2025-10-02 13:06:36.622 2 DEBUG nova.compute.manager [req-5b964bc4-fa9d-4d86-83d9-7c15b1bcf666 req-de745a93-5eee-45e9-8ac3-5edbee862455 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Received event network-changed-ccb82929-088d-402f-87fd-a1b429d52ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:06:36 compute-0 nova_compute[257802]: 2025-10-02 13:06:36.622 2 DEBUG nova.compute.manager [req-5b964bc4-fa9d-4d86-83d9-7c15b1bcf666 req-de745a93-5eee-45e9-8ac3-5edbee862455 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Refreshing instance network info cache due to event network-changed-ccb82929-088d-402f-87fd-a1b429d52ff7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:06:36 compute-0 nova_compute[257802]: 2025-10-02 13:06:36.622 2 DEBUG oslo_concurrency.lockutils [req-5b964bc4-fa9d-4d86-83d9-7c15b1bcf666 req-de745a93-5eee-45e9-8ac3-5edbee862455 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-ededa679-1161-406f-bccd-e571b80d8f7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:06:36 compute-0 nova_compute[257802]: 2025-10-02 13:06:36.670 2 DEBUG nova.network.neutron [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:06:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:06:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:37.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:06:37 compute-0 nova_compute[257802]: 2025-10-02 13:06:37.141 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:37 compute-0 sad_kilby[396193]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:06:37 compute-0 sad_kilby[396193]: --> relative data size: 1.0
Oct 02 13:06:37 compute-0 sad_kilby[396193]: --> All data devices are unavailable
Oct 02 13:06:37 compute-0 systemd[1]: libpod-594b8562c725796b70087cccf23b77381e83b99cdcc00cfd8a49ac1306108bb5.scope: Deactivated successfully.
Oct 02 13:06:37 compute-0 podman[396176]: 2025-10-02 13:06:37.25373811 +0000 UTC m=+1.053504188 container died 594b8562c725796b70087cccf23b77381e83b99cdcc00cfd8a49ac1306108bb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kilby, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 13:06:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d2f73de7e85d4e616e3c1549ce7ecda5431f2cc382009958492f4775f594085-merged.mount: Deactivated successfully.
Oct 02 13:06:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:06:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:37.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:06:37 compute-0 nova_compute[257802]: 2025-10-02 13:06:37.756 2 DEBUG nova.network.neutron [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Updating instance_info_cache with network_info: [{"id": "ccb82929-088d-402f-87fd-a1b429d52ff7", "address": "fa:16:3e:25:9d:a2", "network": {"id": "6ac71ae6-e349-4379-83eb-f06b6ed86426", "bridge": "br-int", "label": "tempest-network-smoke--1276596428", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccb82929-08", "ovs_interfaceid": "ccb82929-088d-402f-87fd-a1b429d52ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:06:37 compute-0 nova_compute[257802]: 2025-10-02 13:06:37.779 2 DEBUG oslo_concurrency.lockutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Releasing lock "refresh_cache-ededa679-1161-406f-bccd-e571b80d8f7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:06:37 compute-0 nova_compute[257802]: 2025-10-02 13:06:37.779 2 DEBUG nova.compute.manager [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Instance network_info: |[{"id": "ccb82929-088d-402f-87fd-a1b429d52ff7", "address": "fa:16:3e:25:9d:a2", "network": {"id": "6ac71ae6-e349-4379-83eb-f06b6ed86426", "bridge": "br-int", "label": "tempest-network-smoke--1276596428", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccb82929-08", "ovs_interfaceid": "ccb82929-088d-402f-87fd-a1b429d52ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:06:37 compute-0 nova_compute[257802]: 2025-10-02 13:06:37.779 2 DEBUG oslo_concurrency.lockutils [req-5b964bc4-fa9d-4d86-83d9-7c15b1bcf666 req-de745a93-5eee-45e9-8ac3-5edbee862455 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-ededa679-1161-406f-bccd-e571b80d8f7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:06:37 compute-0 nova_compute[257802]: 2025-10-02 13:06:37.780 2 DEBUG nova.network.neutron [req-5b964bc4-fa9d-4d86-83d9-7c15b1bcf666 req-de745a93-5eee-45e9-8ac3-5edbee862455 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Refreshing network info cache for port ccb82929-088d-402f-87fd-a1b429d52ff7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:06:37 compute-0 podman[396176]: 2025-10-02 13:06:37.781269003 +0000 UTC m=+1.581035121 container remove 594b8562c725796b70087cccf23b77381e83b99cdcc00cfd8a49ac1306108bb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:06:37 compute-0 nova_compute[257802]: 2025-10-02 13:06:37.782 2 DEBUG nova.virt.libvirt.driver [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Start _get_guest_xml network_info=[{"id": "ccb82929-088d-402f-87fd-a1b429d52ff7", "address": "fa:16:3e:25:9d:a2", "network": {"id": "6ac71ae6-e349-4379-83eb-f06b6ed86426", "bridge": "br-int", "label": "tempest-network-smoke--1276596428", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccb82929-08", "ovs_interfaceid": "ccb82929-088d-402f-87fd-a1b429d52ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:06:37 compute-0 systemd[1]: libpod-conmon-594b8562c725796b70087cccf23b77381e83b99cdcc00cfd8a49ac1306108bb5.scope: Deactivated successfully.
Oct 02 13:06:37 compute-0 nova_compute[257802]: 2025-10-02 13:06:37.788 2 WARNING nova.virt.libvirt.driver [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:06:37 compute-0 nova_compute[257802]: 2025-10-02 13:06:37.795 2 DEBUG nova.virt.libvirt.host [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:06:37 compute-0 nova_compute[257802]: 2025-10-02 13:06:37.795 2 DEBUG nova.virt.libvirt.host [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:06:37 compute-0 nova_compute[257802]: 2025-10-02 13:06:37.800 2 DEBUG nova.virt.libvirt.host [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:06:37 compute-0 nova_compute[257802]: 2025-10-02 13:06:37.800 2 DEBUG nova.virt.libvirt.host [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:06:37 compute-0 nova_compute[257802]: 2025-10-02 13:06:37.801 2 DEBUG nova.virt.libvirt.driver [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:06:37 compute-0 nova_compute[257802]: 2025-10-02 13:06:37.801 2 DEBUG nova.virt.hardware [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:06:37 compute-0 nova_compute[257802]: 2025-10-02 13:06:37.801 2 DEBUG nova.virt.hardware [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:06:37 compute-0 nova_compute[257802]: 2025-10-02 13:06:37.802 2 DEBUG nova.virt.hardware [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:06:37 compute-0 nova_compute[257802]: 2025-10-02 13:06:37.802 2 DEBUG nova.virt.hardware [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:06:37 compute-0 nova_compute[257802]: 2025-10-02 13:06:37.802 2 DEBUG nova.virt.hardware [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:06:37 compute-0 nova_compute[257802]: 2025-10-02 13:06:37.802 2 DEBUG nova.virt.hardware [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:06:37 compute-0 nova_compute[257802]: 2025-10-02 13:06:37.802 2 DEBUG nova.virt.hardware [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:06:37 compute-0 nova_compute[257802]: 2025-10-02 13:06:37.802 2 DEBUG nova.virt.hardware [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:06:37 compute-0 nova_compute[257802]: 2025-10-02 13:06:37.803 2 DEBUG nova.virt.hardware [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:06:37 compute-0 nova_compute[257802]: 2025-10-02 13:06:37.803 2 DEBUG nova.virt.hardware [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:06:37 compute-0 nova_compute[257802]: 2025-10-02 13:06:37.803 2 DEBUG nova.virt.hardware [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:06:37 compute-0 nova_compute[257802]: 2025-10-02 13:06:37.805 2 DEBUG oslo_concurrency.processutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:06:37 compute-0 sudo[396020]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:37 compute-0 sudo[396224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:37 compute-0 sudo[396224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:37 compute-0 sudo[396224]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:38 compute-0 ceph-mon[73607]: pgmap v3238: 305 pgs: 305 active+clean; 191 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 145 KiB/s rd, 2.7 MiB/s wr, 41 op/s
Oct 02 13:06:38 compute-0 sudo[396266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:06:38 compute-0 sudo[396266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:38 compute-0 sudo[396266]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:38 compute-0 sudo[396312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:38 compute-0 sudo[396312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:38 compute-0 sudo[396312]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:38 compute-0 podman[396293]: 2025-10-02 13:06:38.1316635 +0000 UTC m=+0.095239305 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 02 13:06:38 compute-0 podman[396294]: 2025-10-02 13:06:38.146865548 +0000 UTC m=+0.105078453 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 13:06:38 compute-0 podman[396292]: 2025-10-02 13:06:38.147287338 +0000 UTC m=+0.108369553 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:06:38 compute-0 sudo[396371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 13:06:38 compute-0 sudo[396371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:06:38 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2619620953' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.265 2 DEBUG oslo_concurrency.processutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.310 2 DEBUG nova.storage.rbd_utils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image ededa679-1161-406f-bccd-e571b80d8f7f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.317 2 DEBUG oslo_concurrency.processutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3239: 305 pgs: 305 active+clean; 212 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.2 MiB/s wr, 70 op/s
Oct 02 13:06:38 compute-0 podman[396476]: 2025-10-02 13:06:38.612878892 +0000 UTC m=+0.116143071 container create 38edcf1f7381092496e401606b42f1a2a18b00299037368fbd3ad4cfa1c69b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Oct 02 13:06:38 compute-0 podman[396476]: 2025-10-02 13:06:38.519531143 +0000 UTC m=+0.022795352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:06:38 compute-0 systemd[1]: Started libpod-conmon-38edcf1f7381092496e401606b42f1a2a18b00299037368fbd3ad4cfa1c69b55.scope.
Oct 02 13:06:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:06:38 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3508314921' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:06:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.743 2 DEBUG oslo_concurrency.processutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.745 2 DEBUG nova.virt.libvirt.vif [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:06:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-84071708',display_name='tempest-TestNetworkAdvancedServerOps-server-84071708',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-84071708',id=207,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIX/+5SRLUhs0eDaLcolgjqYkVwsuk4D/uvy0QHY7DM2c673uNSh9aodkpDeRsWNUXML+zucDvvL1LRs9C6jNeVRJuMPEGHmaQ46I/VMjFKhJP1gLMaRGktNaEffxctaUQ==',key_name='tempest-TestNetworkAdvancedServerOps-1720569281',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-znqtzf0g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:06:34Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=ededa679-1161-406f-bccd-e571b80d8f7f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ccb82929-088d-402f-87fd-a1b429d52ff7", "address": "fa:16:3e:25:9d:a2", "network": {"id": "6ac71ae6-e349-4379-83eb-f06b6ed86426", "bridge": "br-int", "label": "tempest-network-smoke--1276596428", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccb82929-08", "ovs_interfaceid": "ccb82929-088d-402f-87fd-a1b429d52ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.746 2 DEBUG nova.network.os_vif_util [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "ccb82929-088d-402f-87fd-a1b429d52ff7", "address": "fa:16:3e:25:9d:a2", "network": {"id": "6ac71ae6-e349-4379-83eb-f06b6ed86426", "bridge": "br-int", "label": "tempest-network-smoke--1276596428", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccb82929-08", "ovs_interfaceid": "ccb82929-088d-402f-87fd-a1b429d52ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.747 2 DEBUG nova.network.os_vif_util [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:9d:a2,bridge_name='br-int',has_traffic_filtering=True,id=ccb82929-088d-402f-87fd-a1b429d52ff7,network=Network(6ac71ae6-e349-4379-83eb-f06b6ed86426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccb82929-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.748 2 DEBUG nova.objects.instance [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'pci_devices' on Instance uuid ededa679-1161-406f-bccd-e571b80d8f7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:06:38 compute-0 podman[396476]: 2025-10-02 13:06:38.769607164 +0000 UTC m=+0.272871383 container init 38edcf1f7381092496e401606b42f1a2a18b00299037368fbd3ad4cfa1c69b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:06:38 compute-0 podman[396476]: 2025-10-02 13:06:38.779152944 +0000 UTC m=+0.282417133 container start 38edcf1f7381092496e401606b42f1a2a18b00299037368fbd3ad4cfa1c69b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elbakyan, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:06:38 compute-0 condescending_elbakyan[396494]: 167 167
Oct 02 13:06:38 compute-0 systemd[1]: libpod-38edcf1f7381092496e401606b42f1a2a18b00299037368fbd3ad4cfa1c69b55.scope: Deactivated successfully.
Oct 02 13:06:38 compute-0 podman[396476]: 2025-10-02 13:06:38.794074185 +0000 UTC m=+0.297338374 container attach 38edcf1f7381092496e401606b42f1a2a18b00299037368fbd3ad4cfa1c69b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:06:38 compute-0 podman[396476]: 2025-10-02 13:06:38.795253104 +0000 UTC m=+0.298517293 container died 38edcf1f7381092496e401606b42f1a2a18b00299037368fbd3ad4cfa1c69b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elbakyan, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.796 2 DEBUG nova.virt.libvirt.driver [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:06:38 compute-0 nova_compute[257802]:   <uuid>ededa679-1161-406f-bccd-e571b80d8f7f</uuid>
Oct 02 13:06:38 compute-0 nova_compute[257802]:   <name>instance-000000cf</name>
Oct 02 13:06:38 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 13:06:38 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 13:06:38 compute-0 nova_compute[257802]:   <metadata>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:06:38 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-84071708</nova:name>
Oct 02 13:06:38 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 13:06:37</nova:creationTime>
Oct 02 13:06:38 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 13:06:38 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 13:06:38 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 13:06:38 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 13:06:38 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:06:38 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:06:38 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 13:06:38 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 13:06:38 compute-0 nova_compute[257802]:         <nova:user uuid="ffe4d737e4414fb3a3e358f8ca3f3e1e">tempest-TestNetworkAdvancedServerOps-1527846432-project-member</nova:user>
Oct 02 13:06:38 compute-0 nova_compute[257802]:         <nova:project uuid="08e102ae48244af2ab448a2e1ff757df">tempest-TestNetworkAdvancedServerOps-1527846432</nova:project>
Oct 02 13:06:38 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 13:06:38 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 13:06:38 compute-0 nova_compute[257802]:         <nova:port uuid="ccb82929-088d-402f-87fd-a1b429d52ff7">
Oct 02 13:06:38 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 13:06:38 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 13:06:38 compute-0 nova_compute[257802]:   </metadata>
Oct 02 13:06:38 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <system>
Oct 02 13:06:38 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:06:38 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:06:38 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:06:38 compute-0 nova_compute[257802]:       <entry name="serial">ededa679-1161-406f-bccd-e571b80d8f7f</entry>
Oct 02 13:06:38 compute-0 nova_compute[257802]:       <entry name="uuid">ededa679-1161-406f-bccd-e571b80d8f7f</entry>
Oct 02 13:06:38 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     </system>
Oct 02 13:06:38 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 13:06:38 compute-0 nova_compute[257802]:   <os>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:   </os>
Oct 02 13:06:38 compute-0 nova_compute[257802]:   <features>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <apic/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:   </features>
Oct 02 13:06:38 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:   </clock>
Oct 02 13:06:38 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:   </cpu>
Oct 02 13:06:38 compute-0 nova_compute[257802]:   <devices>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 13:06:38 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/ededa679-1161-406f-bccd-e571b80d8f7f_disk">
Oct 02 13:06:38 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:       </source>
Oct 02 13:06:38 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 13:06:38 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:       </auth>
Oct 02 13:06:38 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     </disk>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 13:06:38 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/ededa679-1161-406f-bccd-e571b80d8f7f_disk.config">
Oct 02 13:06:38 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:       </source>
Oct 02 13:06:38 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 13:06:38 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:       </auth>
Oct 02 13:06:38 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     </disk>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 13:06:38 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:25:9d:a2"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:       <target dev="tapccb82929-08"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     </interface>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 13:06:38 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/ededa679-1161-406f-bccd-e571b80d8f7f/console.log" append="off"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     </serial>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <video>
Oct 02 13:06:38 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     </video>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 13:06:38 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     </rng>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 13:06:38 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 13:06:38 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 13:06:38 compute-0 nova_compute[257802]:   </devices>
Oct 02 13:06:38 compute-0 nova_compute[257802]: </domain>
Oct 02 13:06:38 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.797 2 DEBUG nova.compute.manager [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Preparing to wait for external event network-vif-plugged-ccb82929-088d-402f-87fd-a1b429d52ff7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.797 2 DEBUG oslo_concurrency.lockutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.797 2 DEBUG oslo_concurrency.lockutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.797 2 DEBUG oslo_concurrency.lockutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.798 2 DEBUG nova.virt.libvirt.vif [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:06:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-84071708',display_name='tempest-TestNetworkAdvancedServerOps-server-84071708',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-84071708',id=207,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIX/+5SRLUhs0eDaLcolgjqYkVwsuk4D/uvy0QHY7DM2c673uNSh9aodkpDeRsWNUXML+zucDvvL1LRs9C6jNeVRJuMPEGHmaQ46I/VMjFKhJP1gLMaRGktNaEffxctaUQ==',key_name='tempest-TestNetworkAdvancedServerOps-1720569281',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-znqtzf0g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:06:34Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=ededa679-1161-406f-bccd-e571b80d8f7f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ccb82929-088d-402f-87fd-a1b429d52ff7", "address": "fa:16:3e:25:9d:a2", "network": {"id": "6ac71ae6-e349-4379-83eb-f06b6ed86426", "bridge": "br-int", "label": "tempest-network-smoke--1276596428", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccb82929-08", "ovs_interfaceid": "ccb82929-088d-402f-87fd-a1b429d52ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.798 2 DEBUG nova.network.os_vif_util [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "ccb82929-088d-402f-87fd-a1b429d52ff7", "address": "fa:16:3e:25:9d:a2", "network": {"id": "6ac71ae6-e349-4379-83eb-f06b6ed86426", "bridge": "br-int", "label": "tempest-network-smoke--1276596428", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccb82929-08", "ovs_interfaceid": "ccb82929-088d-402f-87fd-a1b429d52ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.799 2 DEBUG nova.network.os_vif_util [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:9d:a2,bridge_name='br-int',has_traffic_filtering=True,id=ccb82929-088d-402f-87fd-a1b429d52ff7,network=Network(6ac71ae6-e349-4379-83eb-f06b6ed86426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccb82929-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.799 2 DEBUG os_vif [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:9d:a2,bridge_name='br-int',has_traffic_filtering=True,id=ccb82929-088d-402f-87fd-a1b429d52ff7,network=Network(6ac71ae6-e349-4379-83eb-f06b6ed86426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccb82929-08') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.800 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.801 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.805 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapccb82929-08, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.805 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapccb82929-08, col_values=(('external_ids', {'iface-id': 'ccb82929-088d-402f-87fd-a1b429d52ff7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:25:9d:a2', 'vm-uuid': 'ededa679-1161-406f-bccd-e571b80d8f7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:38 compute-0 NetworkManager[44987]: <info>  [1759410398.8075] manager: (tapccb82929-08): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/416)
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.815 2 INFO os_vif [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:9d:a2,bridge_name='br-int',has_traffic_filtering=True,id=ccb82929-088d-402f-87fd-a1b429d52ff7,network=Network(6ac71ae6-e349-4379-83eb-f06b6ed86426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccb82929-08')
Oct 02 13:06:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b74ebf2c383bd0bc46ea236ebd8e731f7f58d4d2ad9de0972648fe7f75e1bdd-merged.mount: Deactivated successfully.
Oct 02 13:06:38 compute-0 podman[396476]: 2025-10-02 13:06:38.897265452 +0000 UTC m=+0.400529641 container remove 38edcf1f7381092496e401606b42f1a2a18b00299037368fbd3ad4cfa1c69b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elbakyan, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.902 2 DEBUG nova.virt.libvirt.driver [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.903 2 DEBUG nova.virt.libvirt.driver [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.903 2 DEBUG nova.virt.libvirt.driver [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] No VIF found with MAC fa:16:3e:25:9d:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.903 2 INFO nova.virt.libvirt.driver [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Using config drive
Oct 02 13:06:38 compute-0 systemd[1]: libpod-conmon-38edcf1f7381092496e401606b42f1a2a18b00299037368fbd3ad4cfa1c69b55.scope: Deactivated successfully.
Oct 02 13:06:38 compute-0 nova_compute[257802]: 2025-10-02 13:06:38.934 2 DEBUG nova.storage.rbd_utils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image ededa679-1161-406f-bccd-e571b80d8f7f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:06:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:39.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:39 compute-0 podman[396541]: 2025-10-02 13:06:39.126081307 +0000 UTC m=+0.107066011 container create 2b9849721cdc8733fb4c1ab66336072111364476ecf9dce595d069d8b0bb2d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hugle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:06:39 compute-0 podman[396541]: 2025-10-02 13:06:39.046594664 +0000 UTC m=+0.027579378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:06:39 compute-0 systemd[1]: Started libpod-conmon-2b9849721cdc8733fb4c1ab66336072111364476ecf9dce595d069d8b0bb2d75.scope.
Oct 02 13:06:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:06:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df7c69dd1711592953df699ff88fe799dd57b848c164283602b9882536c24b9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:06:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df7c69dd1711592953df699ff88fe799dd57b848c164283602b9882536c24b9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:06:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df7c69dd1711592953df699ff88fe799dd57b848c164283602b9882536c24b9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:06:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df7c69dd1711592953df699ff88fe799dd57b848c164283602b9882536c24b9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:06:39 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2619620953' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:06:39 compute-0 ceph-mon[73607]: pgmap v3239: 305 pgs: 305 active+clean; 212 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.2 MiB/s wr, 70 op/s
Oct 02 13:06:39 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3508314921' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:06:39 compute-0 podman[396541]: 2025-10-02 13:06:39.255589751 +0000 UTC m=+0.236574475 container init 2b9849721cdc8733fb4c1ab66336072111364476ecf9dce595d069d8b0bb2d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hugle, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 13:06:39 compute-0 podman[396541]: 2025-10-02 13:06:39.262859386 +0000 UTC m=+0.243844090 container start 2b9849721cdc8733fb4c1ab66336072111364476ecf9dce595d069d8b0bb2d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hugle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:06:39 compute-0 podman[396541]: 2025-10-02 13:06:39.269014245 +0000 UTC m=+0.249999059 container attach 2b9849721cdc8733fb4c1ab66336072111364476ecf9dce595d069d8b0bb2d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hugle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 13:06:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:06:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:39.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:06:39 compute-0 nova_compute[257802]: 2025-10-02 13:06:39.741 2 INFO nova.virt.libvirt.driver [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Creating config drive at /var/lib/nova/instances/ededa679-1161-406f-bccd-e571b80d8f7f/disk.config
Oct 02 13:06:39 compute-0 nova_compute[257802]: 2025-10-02 13:06:39.746 2 DEBUG oslo_concurrency.processutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ededa679-1161-406f-bccd-e571b80d8f7f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc1tz20p_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:06:39 compute-0 nova_compute[257802]: 2025-10-02 13:06:39.881 2 DEBUG oslo_concurrency.processutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ededa679-1161-406f-bccd-e571b80d8f7f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc1tz20p_" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:06:39 compute-0 nova_compute[257802]: 2025-10-02 13:06:39.916 2 DEBUG nova.storage.rbd_utils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image ededa679-1161-406f-bccd-e571b80d8f7f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:06:39 compute-0 nova_compute[257802]: 2025-10-02 13:06:39.920 2 DEBUG oslo_concurrency.processutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ededa679-1161-406f-bccd-e571b80d8f7f/disk.config ededa679-1161-406f-bccd-e571b80d8f7f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:06:39 compute-0 nova_compute[257802]: 2025-10-02 13:06:39.986 2 DEBUG nova.network.neutron [req-5b964bc4-fa9d-4d86-83d9-7c15b1bcf666 req-de745a93-5eee-45e9-8ac3-5edbee862455 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Updated VIF entry in instance network info cache for port ccb82929-088d-402f-87fd-a1b429d52ff7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:06:39 compute-0 nova_compute[257802]: 2025-10-02 13:06:39.987 2 DEBUG nova.network.neutron [req-5b964bc4-fa9d-4d86-83d9-7c15b1bcf666 req-de745a93-5eee-45e9-8ac3-5edbee862455 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Updating instance_info_cache with network_info: [{"id": "ccb82929-088d-402f-87fd-a1b429d52ff7", "address": "fa:16:3e:25:9d:a2", "network": {"id": "6ac71ae6-e349-4379-83eb-f06b6ed86426", "bridge": "br-int", "label": "tempest-network-smoke--1276596428", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccb82929-08", "ovs_interfaceid": "ccb82929-088d-402f-87fd-a1b429d52ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:06:40 compute-0 nova_compute[257802]: 2025-10-02 13:06:40.004 2 DEBUG oslo_concurrency.lockutils [req-5b964bc4-fa9d-4d86-83d9-7c15b1bcf666 req-de745a93-5eee-45e9-8ac3-5edbee862455 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-ededa679-1161-406f-bccd-e571b80d8f7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:06:40 compute-0 musing_hugle[396557]: {
Oct 02 13:06:40 compute-0 musing_hugle[396557]:     "1": [
Oct 02 13:06:40 compute-0 musing_hugle[396557]:         {
Oct 02 13:06:40 compute-0 musing_hugle[396557]:             "devices": [
Oct 02 13:06:40 compute-0 musing_hugle[396557]:                 "/dev/loop3"
Oct 02 13:06:40 compute-0 musing_hugle[396557]:             ],
Oct 02 13:06:40 compute-0 musing_hugle[396557]:             "lv_name": "ceph_lv0",
Oct 02 13:06:40 compute-0 musing_hugle[396557]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:06:40 compute-0 musing_hugle[396557]:             "lv_size": "7511998464",
Oct 02 13:06:40 compute-0 musing_hugle[396557]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:06:40 compute-0 musing_hugle[396557]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:06:40 compute-0 musing_hugle[396557]:             "name": "ceph_lv0",
Oct 02 13:06:40 compute-0 musing_hugle[396557]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:06:40 compute-0 musing_hugle[396557]:             "tags": {
Oct 02 13:06:40 compute-0 musing_hugle[396557]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:06:40 compute-0 musing_hugle[396557]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:06:40 compute-0 musing_hugle[396557]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:06:40 compute-0 musing_hugle[396557]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:06:40 compute-0 musing_hugle[396557]:                 "ceph.cluster_name": "ceph",
Oct 02 13:06:40 compute-0 musing_hugle[396557]:                 "ceph.crush_device_class": "",
Oct 02 13:06:40 compute-0 musing_hugle[396557]:                 "ceph.encrypted": "0",
Oct 02 13:06:40 compute-0 musing_hugle[396557]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:06:40 compute-0 musing_hugle[396557]:                 "ceph.osd_id": "1",
Oct 02 13:06:40 compute-0 musing_hugle[396557]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:06:40 compute-0 musing_hugle[396557]:                 "ceph.type": "block",
Oct 02 13:06:40 compute-0 musing_hugle[396557]:                 "ceph.vdo": "0"
Oct 02 13:06:40 compute-0 musing_hugle[396557]:             },
Oct 02 13:06:40 compute-0 musing_hugle[396557]:             "type": "block",
Oct 02 13:06:40 compute-0 musing_hugle[396557]:             "vg_name": "ceph_vg0"
Oct 02 13:06:40 compute-0 musing_hugle[396557]:         }
Oct 02 13:06:40 compute-0 musing_hugle[396557]:     ]
Oct 02 13:06:40 compute-0 musing_hugle[396557]: }
Oct 02 13:06:40 compute-0 systemd[1]: libpod-2b9849721cdc8733fb4c1ab66336072111364476ecf9dce595d069d8b0bb2d75.scope: Deactivated successfully.
Oct 02 13:06:40 compute-0 podman[396541]: 2025-10-02 13:06:40.03554377 +0000 UTC m=+1.016528514 container died 2b9849721cdc8733fb4c1ab66336072111364476ecf9dce595d069d8b0bb2d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hugle, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:06:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-df7c69dd1711592953df699ff88fe799dd57b848c164283602b9882536c24b9d-merged.mount: Deactivated successfully.
Oct 02 13:06:40 compute-0 podman[396541]: 2025-10-02 13:06:40.125180039 +0000 UTC m=+1.106164743 container remove 2b9849721cdc8733fb4c1ab66336072111364476ecf9dce595d069d8b0bb2d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hugle, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:06:40 compute-0 systemd[1]: libpod-conmon-2b9849721cdc8733fb4c1ab66336072111364476ecf9dce595d069d8b0bb2d75.scope: Deactivated successfully.
Oct 02 13:06:40 compute-0 sudo[396371]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:40 compute-0 sudo[396617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:40 compute-0 sudo[396617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:40 compute-0 sudo[396617]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:40 compute-0 nova_compute[257802]: 2025-10-02 13:06:40.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:40 compute-0 sudo[396642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:06:40 compute-0 sudo[396642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:40 compute-0 sudo[396642]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:40 compute-0 sudo[396667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:40 compute-0 sudo[396667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:40 compute-0 sudo[396667]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:40 compute-0 sudo[396692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 13:06:40 compute-0 sudo[396692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2940961265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3240: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 102 op/s
Oct 02 13:06:40 compute-0 podman[396760]: 2025-10-02 13:06:40.71881675 +0000 UTC m=+0.052164114 container create eda3d2504afec9e72465ea8e707bd9147f19ec0defef78e2b1681bcce7c766ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_gates, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:06:40 compute-0 systemd[1]: Started libpod-conmon-eda3d2504afec9e72465ea8e707bd9147f19ec0defef78e2b1681bcce7c766ba.scope.
Oct 02 13:06:40 compute-0 podman[396760]: 2025-10-02 13:06:40.691353715 +0000 UTC m=+0.024701109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:06:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:06:40 compute-0 podman[396760]: 2025-10-02 13:06:40.853102199 +0000 UTC m=+0.186449603 container init eda3d2504afec9e72465ea8e707bd9147f19ec0defef78e2b1681bcce7c766ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:06:40 compute-0 podman[396760]: 2025-10-02 13:06:40.86016113 +0000 UTC m=+0.193508494 container start eda3d2504afec9e72465ea8e707bd9147f19ec0defef78e2b1681bcce7c766ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 13:06:40 compute-0 angry_gates[396777]: 167 167
Oct 02 13:06:40 compute-0 systemd[1]: libpod-eda3d2504afec9e72465ea8e707bd9147f19ec0defef78e2b1681bcce7c766ba.scope: Deactivated successfully.
Oct 02 13:06:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:40 compute-0 podman[396760]: 2025-10-02 13:06:40.878645257 +0000 UTC m=+0.211992641 container attach eda3d2504afec9e72465ea8e707bd9147f19ec0defef78e2b1681bcce7c766ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_gates, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 13:06:40 compute-0 podman[396760]: 2025-10-02 13:06:40.87959049 +0000 UTC m=+0.212937864 container died eda3d2504afec9e72465ea8e707bd9147f19ec0defef78e2b1681bcce7c766ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 13:06:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-4041c1b7ce7cf33fa5e2c3573a313f579eba16ea11cfce104342a1c44e6233bd-merged.mount: Deactivated successfully.
Oct 02 13:06:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:41.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:41 compute-0 nova_compute[257802]: 2025-10-02 13:06:41.063 2 DEBUG oslo_concurrency.processutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ededa679-1161-406f-bccd-e571b80d8f7f/disk.config ededa679-1161-406f-bccd-e571b80d8f7f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:06:41 compute-0 nova_compute[257802]: 2025-10-02 13:06:41.065 2 INFO nova.virt.libvirt.driver [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Deleting local config drive /var/lib/nova/instances/ededa679-1161-406f-bccd-e571b80d8f7f/disk.config because it was imported into RBD.
Oct 02 13:06:41 compute-0 kernel: tapccb82929-08: entered promiscuous mode
Oct 02 13:06:41 compute-0 podman[396760]: 2025-10-02 13:06:41.12681201 +0000 UTC m=+0.460159374 container remove eda3d2504afec9e72465ea8e707bd9147f19ec0defef78e2b1681bcce7c766ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_gates, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 13:06:41 compute-0 NetworkManager[44987]: <info>  [1759410401.1280] manager: (tapccb82929-08): new Tun device (/org/freedesktop/NetworkManager/Devices/417)
Oct 02 13:06:41 compute-0 nova_compute[257802]: 2025-10-02 13:06:41.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:41 compute-0 ovn_controller[148183]: 2025-10-02T13:06:41Z|00933|binding|INFO|Claiming lport ccb82929-088d-402f-87fd-a1b429d52ff7 for this chassis.
Oct 02 13:06:41 compute-0 ovn_controller[148183]: 2025-10-02T13:06:41Z|00934|binding|INFO|ccb82929-088d-402f-87fd-a1b429d52ff7: Claiming fa:16:3e:25:9d:a2 10.100.0.5
Oct 02 13:06:41 compute-0 systemd[1]: libpod-conmon-eda3d2504afec9e72465ea8e707bd9147f19ec0defef78e2b1681bcce7c766ba.scope: Deactivated successfully.
Oct 02 13:06:41 compute-0 nova_compute[257802]: 2025-10-02 13:06:41.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:41.147 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:9d:a2 10.100.0.5'], port_security=['fa:16:3e:25:9d:a2 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ededa679-1161-406f-bccd-e571b80d8f7f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ac71ae6-e349-4379-83eb-f06b6ed86426', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08e102ae48244af2ab448a2e1ff757df', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f0130049-f161-49c4-b94d-f41f73f65259', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=916b0cf7-14c0-4414-a8be-ed0d8198832b, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=ccb82929-088d-402f-87fd-a1b429d52ff7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:41.148 158261 INFO neutron.agent.ovn.metadata.agent [-] Port ccb82929-088d-402f-87fd-a1b429d52ff7 in datapath 6ac71ae6-e349-4379-83eb-f06b6ed86426 bound to our chassis
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:41.150 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6ac71ae6-e349-4379-83eb-f06b6ed86426
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:41.163 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1d12d090-99ee-43e0-9cca-425021a1eb5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:41.164 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6ac71ae6-e1 in ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:41.165 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6ac71ae6-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:41.165 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[67a803da-df3f-4a4c-b224-fe219a44faaa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:41 compute-0 systemd-udevd[396812]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:41.167 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[209102cf-d4be-4ed2-963c-2633081df365]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:41 compute-0 systemd-machined[211836]: New machine qemu-100-instance-000000cf.
Oct 02 13:06:41 compute-0 NetworkManager[44987]: <info>  [1759410401.1813] device (tapccb82929-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:06:41 compute-0 NetworkManager[44987]: <info>  [1759410401.1821] device (tapccb82929-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:41.181 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[7873bf79-7ece-43c0-982c-022a2583d85e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:41 compute-0 systemd[1]: Started Virtual Machine qemu-100-instance-000000cf.
Oct 02 13:06:41 compute-0 nova_compute[257802]: 2025-10-02 13:06:41.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:41 compute-0 ovn_controller[148183]: 2025-10-02T13:06:41Z|00935|binding|INFO|Setting lport ccb82929-088d-402f-87fd-a1b429d52ff7 ovn-installed in OVS
Oct 02 13:06:41 compute-0 ovn_controller[148183]: 2025-10-02T13:06:41Z|00936|binding|INFO|Setting lport ccb82929-088d-402f-87fd-a1b429d52ff7 up in Southbound
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:41.206 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c9fafc93-54c8-473e-a1d4-5307624e3acd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:41 compute-0 nova_compute[257802]: 2025-10-02 13:06:41.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:41.232 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[6b85f63f-d14f-4f73-ab8d-c6e044b42d85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:41 compute-0 systemd-udevd[396816]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:06:41 compute-0 NetworkManager[44987]: <info>  [1759410401.2378] manager: (tap6ac71ae6-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/418)
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:41.237 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5164f0af-7ac0-4b78-a123-7e16220e13ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:41.269 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[1860de50-603c-4f79-ac96-0594ea7118a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:41.272 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[a7b21267-ef46-4b61-89b8-6dfc1e80c370]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:41 compute-0 NetworkManager[44987]: <info>  [1759410401.2963] device (tap6ac71ae6-e0): carrier: link connected
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:41.302 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[b31f25c4-9be0-42a9-990f-ff0feaa95a26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:41.319 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[72cc8ce4-e92d-4bd8-bbf2-c1e0a5bfb3a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ac71ae6-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:84:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 279], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 846892, 'reachable_time': 35592, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396864, 'error': None, 'target': 'ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:41.336 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[af5c5c27-9a80-4ca0-9470-ec81e69c33f8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:8441'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 846892, 'tstamp': 846892}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396866, 'error': None, 'target': 'ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:41 compute-0 podman[396842]: 2025-10-02 13:06:41.339047895 +0000 UTC m=+0.067346980 container create dd4b02efee6f73e748171f62d4ae5ea85c631ac908bad15da989da00dd85f4e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hopper, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:41.357 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c42591d0-059f-4e1e-a1c1-dc39cef0f63b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ac71ae6-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:84:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 279], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 846892, 'reachable_time': 35592, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 396867, 'error': None, 'target': 'ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:41 compute-0 nova_compute[257802]: 2025-10-02 13:06:41.371 2 DEBUG nova.compute.manager [req-d74e0b13-6d1a-4915-a814-7db2c93d8bfe req-514b75b0-923a-47e4-a119-35da1a4fddc7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Received event network-vif-plugged-ccb82929-088d-402f-87fd-a1b429d52ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:06:41 compute-0 nova_compute[257802]: 2025-10-02 13:06:41.371 2 DEBUG oslo_concurrency.lockutils [req-d74e0b13-6d1a-4915-a814-7db2c93d8bfe req-514b75b0-923a-47e4-a119-35da1a4fddc7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:41 compute-0 nova_compute[257802]: 2025-10-02 13:06:41.372 2 DEBUG oslo_concurrency.lockutils [req-d74e0b13-6d1a-4915-a814-7db2c93d8bfe req-514b75b0-923a-47e4-a119-35da1a4fddc7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:41 compute-0 nova_compute[257802]: 2025-10-02 13:06:41.372 2 DEBUG oslo_concurrency.lockutils [req-d74e0b13-6d1a-4915-a814-7db2c93d8bfe req-514b75b0-923a-47e4-a119-35da1a4fddc7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:41 compute-0 nova_compute[257802]: 2025-10-02 13:06:41.372 2 DEBUG nova.compute.manager [req-d74e0b13-6d1a-4915-a814-7db2c93d8bfe req-514b75b0-923a-47e4-a119-35da1a4fddc7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Processing event network-vif-plugged-ccb82929-088d-402f-87fd-a1b429d52ff7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:06:41 compute-0 podman[396842]: 2025-10-02 13:06:41.300453891 +0000 UTC m=+0.028752996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:41.390 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[53052be3-7237-4b1a-bbb8-7fed8e283a66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:41 compute-0 systemd[1]: Started libpod-conmon-dd4b02efee6f73e748171f62d4ae5ea85c631ac908bad15da989da00dd85f4e7.scope.
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:41.449 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0dcbde0a-7914-4465-91f9-2eac294797b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:41.450 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ac71ae6-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:41.451 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:41.451 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ac71ae6-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:06:41 compute-0 kernel: tap6ac71ae6-e0: entered promiscuous mode
Oct 02 13:06:41 compute-0 NetworkManager[44987]: <info>  [1759410401.4549] manager: (tap6ac71ae6-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/419)
Oct 02 13:06:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:06:41 compute-0 nova_compute[257802]: 2025-10-02 13:06:41.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:41.462 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6ac71ae6-e0, col_values=(('external_ids', {'iface-id': '0f9dbaf2-40b8-4571-bd96-64d17a764b41'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:06:41 compute-0 nova_compute[257802]: 2025-10-02 13:06:41.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:41.465 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6ac71ae6-e349-4379-83eb-f06b6ed86426.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6ac71ae6-e349-4379-83eb-f06b6ed86426.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:06:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d439e27a87878053e10aa00f215286332160083a64c90ae3c9b058df851877d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:06:41 compute-0 ovn_controller[148183]: 2025-10-02T13:06:41Z|00937|binding|INFO|Releasing lport 0f9dbaf2-40b8-4571-bd96-64d17a764b41 from this chassis (sb_readonly=0)
Oct 02 13:06:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d439e27a87878053e10aa00f215286332160083a64c90ae3c9b058df851877d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:06:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d439e27a87878053e10aa00f215286332160083a64c90ae3c9b058df851877d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:06:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d439e27a87878053e10aa00f215286332160083a64c90ae3c9b058df851877d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:41.483 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[263cd78f-e25b-415a-ad87-668d4e9682f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:41 compute-0 nova_compute[257802]: 2025-10-02 13:06:41.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:41.485 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: global
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-6ac71ae6-e349-4379-83eb-f06b6ed86426
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/6ac71ae6-e349-4379-83eb-f06b6ed86426.pid.haproxy
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 6ac71ae6-e349-4379-83eb-f06b6ed86426
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:06:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:06:41.486 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426', 'env', 'PROCESS_TAG=haproxy-6ac71ae6-e349-4379-83eb-f06b6ed86426', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6ac71ae6-e349-4379-83eb-f06b6ed86426.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:06:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:06:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:41.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:06:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2993262296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:41 compute-0 ceph-mon[73607]: pgmap v3240: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 102 op/s
Oct 02 13:06:41 compute-0 podman[396842]: 2025-10-02 13:06:41.568678661 +0000 UTC m=+0.296977766 container init dd4b02efee6f73e748171f62d4ae5ea85c631ac908bad15da989da00dd85f4e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Oct 02 13:06:41 compute-0 podman[396842]: 2025-10-02 13:06:41.576665683 +0000 UTC m=+0.304964768 container start dd4b02efee6f73e748171f62d4ae5ea85c631ac908bad15da989da00dd85f4e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hopper, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 13:06:41 compute-0 podman[396842]: 2025-10-02 13:06:41.598521082 +0000 UTC m=+0.326820187 container attach dd4b02efee6f73e748171f62d4ae5ea85c631ac908bad15da989da00dd85f4e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 13:06:41 compute-0 podman[396948]: 2025-10-02 13:06:41.902554168 +0000 UTC m=+0.081308838 container create ec2325dbd6836eea49bb609dcfc4e6199b5c89068bb5f6f7d42cb537274eb496 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:06:41 compute-0 podman[396948]: 2025-10-02 13:06:41.845310513 +0000 UTC m=+0.024065223 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:06:41 compute-0 systemd[1]: Started libpod-conmon-ec2325dbd6836eea49bb609dcfc4e6199b5c89068bb5f6f7d42cb537274eb496.scope.
Oct 02 13:06:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:06:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4fd615894ba94465181cbdb7eb4ce8408183dd5946c2ba73a4afa736da2f9d5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:06:42 compute-0 podman[396948]: 2025-10-02 13:06:42.015272375 +0000 UTC m=+0.194027055 container init ec2325dbd6836eea49bb609dcfc4e6199b5c89068bb5f6f7d42cb537274eb496 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:06:42 compute-0 podman[396948]: 2025-10-02 13:06:42.020716597 +0000 UTC m=+0.199471267 container start ec2325dbd6836eea49bb609dcfc4e6199b5c89068bb5f6f7d42cb537274eb496 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 13:06:42 compute-0 neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426[396963]: [NOTICE]   (396967) : New worker (396969) forked
Oct 02 13:06:42 compute-0 neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426[396963]: [NOTICE]   (396967) : Loading success.
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.118 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.118 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.164 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759410402.1633048, ededa679-1161-406f-bccd-e571b80d8f7f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.164 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] VM Started (Lifecycle Event)
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.168 2 DEBUG nova.compute.manager [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.172 2 DEBUG nova.virt.libvirt.driver [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.175 2 INFO nova.virt.libvirt.driver [-] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Instance spawned successfully.
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.176 2 DEBUG nova.virt.libvirt.driver [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.192 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.199 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.202 2 DEBUG nova.virt.libvirt.driver [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.203 2 DEBUG nova.virt.libvirt.driver [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.203 2 DEBUG nova.virt.libvirt.driver [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.204 2 DEBUG nova.virt.libvirt.driver [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.204 2 DEBUG nova.virt.libvirt.driver [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.205 2 DEBUG nova.virt.libvirt.driver [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.237 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.237 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759410402.1634254, ededa679-1161-406f-bccd-e571b80d8f7f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.237 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] VM Paused (Lifecycle Event)
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.264 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.267 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759410402.1722455, ededa679-1161-406f-bccd-e571b80d8f7f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.267 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] VM Resumed (Lifecycle Event)
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.285 2 INFO nova.compute.manager [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Took 8.02 seconds to spawn the instance on the hypervisor.
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.286 2 DEBUG nova.compute.manager [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.294 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.296 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.375 2 INFO nova.compute.manager [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Took 9.00 seconds to build instance.
Oct 02 13:06:42 compute-0 xenodochial_hopper[396876]: {
Oct 02 13:06:42 compute-0 xenodochial_hopper[396876]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 13:06:42 compute-0 xenodochial_hopper[396876]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:06:42 compute-0 xenodochial_hopper[396876]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:06:42 compute-0 xenodochial_hopper[396876]:         "osd_id": 1,
Oct 02 13:06:42 compute-0 xenodochial_hopper[396876]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:06:42 compute-0 xenodochial_hopper[396876]:         "type": "bluestore"
Oct 02 13:06:42 compute-0 xenodochial_hopper[396876]:     }
Oct 02 13:06:42 compute-0 xenodochial_hopper[396876]: }
Oct 02 13:06:42 compute-0 nova_compute[257802]: 2025-10-02 13:06:42.401 2 DEBUG oslo_concurrency.lockutils [None req-5a927022-2954-4b0a-b136-7c270ff09082 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:42 compute-0 systemd[1]: libpod-dd4b02efee6f73e748171f62d4ae5ea85c631ac908bad15da989da00dd85f4e7.scope: Deactivated successfully.
Oct 02 13:06:42 compute-0 podman[396842]: 2025-10-02 13:06:42.410499016 +0000 UTC m=+1.138798101 container died dd4b02efee6f73e748171f62d4ae5ea85c631ac908bad15da989da00dd85f4e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hopper, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:06:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:06:42
Oct 02 13:06:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:06:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:06:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'backups', 'volumes', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'vms']
Oct 02 13:06:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:06:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3241: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 13:06:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-d439e27a87878053e10aa00f215286332160083a64c90ae3c9b058df851877d2-merged.mount: Deactivated successfully.
Oct 02 13:06:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:06:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:06:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:06:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:06:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:06:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:06:42 compute-0 podman[396842]: 2025-10-02 13:06:42.964555581 +0000 UTC m=+1.692854676 container remove dd4b02efee6f73e748171f62d4ae5ea85c631ac908bad15da989da00dd85f4e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:06:42 compute-0 systemd[1]: libpod-conmon-dd4b02efee6f73e748171f62d4ae5ea85c631ac908bad15da989da00dd85f4e7.scope: Deactivated successfully.
Oct 02 13:06:43 compute-0 sudo[396692]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:06:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:06:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:43.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:06:43 compute-0 nova_compute[257802]: 2025-10-02 13:06:43.451 2 DEBUG nova.compute.manager [req-4bcf50ad-78d6-41a1-8e04-387135e47f4c req-f64d7aa3-5846-4ff9-aeac-1d8a2d057eac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Received event network-vif-plugged-ccb82929-088d-402f-87fd-a1b429d52ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:06:43 compute-0 nova_compute[257802]: 2025-10-02 13:06:43.452 2 DEBUG oslo_concurrency.lockutils [req-4bcf50ad-78d6-41a1-8e04-387135e47f4c req-f64d7aa3-5846-4ff9-aeac-1d8a2d057eac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:43 compute-0 nova_compute[257802]: 2025-10-02 13:06:43.452 2 DEBUG oslo_concurrency.lockutils [req-4bcf50ad-78d6-41a1-8e04-387135e47f4c req-f64d7aa3-5846-4ff9-aeac-1d8a2d057eac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:43 compute-0 nova_compute[257802]: 2025-10-02 13:06:43.452 2 DEBUG oslo_concurrency.lockutils [req-4bcf50ad-78d6-41a1-8e04-387135e47f4c req-f64d7aa3-5846-4ff9-aeac-1d8a2d057eac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:43 compute-0 nova_compute[257802]: 2025-10-02 13:06:43.453 2 DEBUG nova.compute.manager [req-4bcf50ad-78d6-41a1-8e04-387135e47f4c req-f64d7aa3-5846-4ff9-aeac-1d8a2d057eac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] No waiting events found dispatching network-vif-plugged-ccb82929-088d-402f-87fd-a1b429d52ff7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:06:43 compute-0 nova_compute[257802]: 2025-10-02 13:06:43.453 2 WARNING nova.compute.manager [req-4bcf50ad-78d6-41a1-8e04-387135e47f4c req-f64d7aa3-5846-4ff9-aeac-1d8a2d057eac d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Received unexpected event network-vif-plugged-ccb82929-088d-402f-87fd-a1b429d52ff7 for instance with vm_state active and task_state None.
Oct 02 13:06:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:43.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:06:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:06:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:06:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:06:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:06:43 compute-0 nova_compute[257802]: 2025-10-02 13:06:43.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:44 compute-0 ceph-mon[73607]: pgmap v3241: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 13:06:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:06:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:06:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:06:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:06:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:06:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:06:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:06:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3242: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Oct 02 13:06:44 compute-0 NetworkManager[44987]: <info>  [1759410404.8147] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/420)
Oct 02 13:06:44 compute-0 nova_compute[257802]: 2025-10-02 13:06:44.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:44 compute-0 NetworkManager[44987]: <info>  [1759410404.8159] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/421)
Oct 02 13:06:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:06:44 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 5b43402a-0d24-47f3-aad4-32ad5a862ed1 does not exist
Oct 02 13:06:44 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev ce274db1-6c1b-4fe8-bb0f-9dc19e9921c2 does not exist
Oct 02 13:06:44 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 79ea928a-1690-4faa-846b-593fa0aba92a does not exist
Oct 02 13:06:44 compute-0 nova_compute[257802]: 2025-10-02 13:06:44.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:44 compute-0 ovn_controller[148183]: 2025-10-02T13:06:44Z|00938|binding|INFO|Releasing lport 0f9dbaf2-40b8-4571-bd96-64d17a764b41 from this chassis (sb_readonly=0)
Oct 02 13:06:44 compute-0 nova_compute[257802]: 2025-10-02 13:06:44.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:44 compute-0 sudo[397013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:44 compute-0 sudo[397009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:44 compute-0 sudo[397013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:44 compute-0 sudo[397009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:44 compute-0 sudo[397009]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:44 compute-0 sudo[397013]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:45.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:45 compute-0 sudo[397061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:45 compute-0 sudo[397061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:45 compute-0 sudo[397061]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:45 compute-0 sudo[397060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:06:45 compute-0 sudo[397060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:45 compute-0 sudo[397060]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:45 compute-0 nova_compute[257802]: 2025-10-02 13:06:45.159 2 DEBUG nova.compute.manager [req-48707a12-1cd4-428e-b3b7-0657d3242401 req-f659c6a6-ab02-41f0-afd0-cbdd7e01a2c3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Received event network-changed-ccb82929-088d-402f-87fd-a1b429d52ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:06:45 compute-0 nova_compute[257802]: 2025-10-02 13:06:45.162 2 DEBUG nova.compute.manager [req-48707a12-1cd4-428e-b3b7-0657d3242401 req-f659c6a6-ab02-41f0-afd0-cbdd7e01a2c3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Refreshing instance network info cache due to event network-changed-ccb82929-088d-402f-87fd-a1b429d52ff7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:06:45 compute-0 nova_compute[257802]: 2025-10-02 13:06:45.163 2 DEBUG oslo_concurrency.lockutils [req-48707a12-1cd4-428e-b3b7-0657d3242401 req-f659c6a6-ab02-41f0-afd0-cbdd7e01a2c3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-ededa679-1161-406f-bccd-e571b80d8f7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:06:45 compute-0 nova_compute[257802]: 2025-10-02 13:06:45.164 2 DEBUG oslo_concurrency.lockutils [req-48707a12-1cd4-428e-b3b7-0657d3242401 req-f659c6a6-ab02-41f0-afd0-cbdd7e01a2c3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-ededa679-1161-406f-bccd-e571b80d8f7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:06:45 compute-0 nova_compute[257802]: 2025-10-02 13:06:45.164 2 DEBUG nova.network.neutron [req-48707a12-1cd4-428e-b3b7-0657d3242401 req-f659c6a6-ab02-41f0-afd0-cbdd7e01a2c3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Refreshing network info cache for port ccb82929-088d-402f-87fd-a1b429d52ff7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:06:45 compute-0 nova_compute[257802]: 2025-10-02 13:06:45.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:45 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:06:45 compute-0 ceph-mon[73607]: pgmap v3242: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Oct 02 13:06:45 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:06:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:45.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3243: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Oct 02 13:06:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:06:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:47.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:06:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:06:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:47.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:06:47 compute-0 ceph-mon[73607]: pgmap v3243: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Oct 02 13:06:47 compute-0 podman[397111]: 2025-10-02 13:06:47.971282327 +0000 UTC m=+0.103874765 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001)
Oct 02 13:06:48 compute-0 nova_compute[257802]: 2025-10-02 13:06:48.338 2 DEBUG nova.network.neutron [req-48707a12-1cd4-428e-b3b7-0657d3242401 req-f659c6a6-ab02-41f0-afd0-cbdd7e01a2c3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Updated VIF entry in instance network info cache for port ccb82929-088d-402f-87fd-a1b429d52ff7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:06:48 compute-0 nova_compute[257802]: 2025-10-02 13:06:48.339 2 DEBUG nova.network.neutron [req-48707a12-1cd4-428e-b3b7-0657d3242401 req-f659c6a6-ab02-41f0-afd0-cbdd7e01a2c3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Updating instance_info_cache with network_info: [{"id": "ccb82929-088d-402f-87fd-a1b429d52ff7", "address": "fa:16:3e:25:9d:a2", "network": {"id": "6ac71ae6-e349-4379-83eb-f06b6ed86426", "bridge": "br-int", "label": "tempest-network-smoke--1276596428", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccb82929-08", "ovs_interfaceid": "ccb82929-088d-402f-87fd-a1b429d52ff7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:06:48 compute-0 nova_compute[257802]: 2025-10-02 13:06:48.365 2 DEBUG oslo_concurrency.lockutils [req-48707a12-1cd4-428e-b3b7-0657d3242401 req-f659c6a6-ab02-41f0-afd0-cbdd7e01a2c3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-ededa679-1161-406f-bccd-e571b80d8f7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:06:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3244: 305 pgs: 305 active+clean; 214 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.2 MiB/s wr, 166 op/s
Oct 02 13:06:48 compute-0 nova_compute[257802]: 2025-10-02 13:06:48.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:49.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:49 compute-0 nova_compute[257802]: 2025-10-02 13:06:49.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:49 compute-0 nova_compute[257802]: 2025-10-02 13:06:49.182 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:49 compute-0 nova_compute[257802]: 2025-10-02 13:06:49.183 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:49 compute-0 nova_compute[257802]: 2025-10-02 13:06:49.183 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:49 compute-0 nova_compute[257802]: 2025-10-02 13:06:49.183 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:06:49 compute-0 nova_compute[257802]: 2025-10-02 13:06:49.184 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:06:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:49.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:49 compute-0 ceph-mon[73607]: pgmap v3244: 305 pgs: 305 active+clean; 214 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.2 MiB/s wr, 166 op/s
Oct 02 13:06:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:06:49 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2215923069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:49 compute-0 nova_compute[257802]: 2025-10-02 13:06:49.672 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:06:49 compute-0 nova_compute[257802]: 2025-10-02 13:06:49.739 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000cf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:06:49 compute-0 nova_compute[257802]: 2025-10-02 13:06:49.740 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000cf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:06:49 compute-0 nova_compute[257802]: 2025-10-02 13:06:49.936 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:06:49 compute-0 nova_compute[257802]: 2025-10-02 13:06:49.938 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4098MB free_disk=20.941749572753906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:06:49 compute-0 nova_compute[257802]: 2025-10-02 13:06:49.938 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:49 compute-0 nova_compute[257802]: 2025-10-02 13:06:49.938 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:50 compute-0 nova_compute[257802]: 2025-10-02 13:06:50.017 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance ededa679-1161-406f-bccd-e571b80d8f7f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:06:50 compute-0 nova_compute[257802]: 2025-10-02 13:06:50.018 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:06:50 compute-0 nova_compute[257802]: 2025-10-02 13:06:50.018 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:06:50 compute-0 nova_compute[257802]: 2025-10-02 13:06:50.205 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:06:50 compute-0 nova_compute[257802]: 2025-10-02 13:06:50.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3245: 305 pgs: 305 active+clean; 224 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.2 MiB/s wr, 136 op/s
Oct 02 13:06:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:06:50 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/305836914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:50 compute-0 nova_compute[257802]: 2025-10-02 13:06:50.637 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:06:50 compute-0 nova_compute[257802]: 2025-10-02 13:06:50.642 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:06:50 compute-0 nova_compute[257802]: 2025-10-02 13:06:50.666 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:06:50 compute-0 nova_compute[257802]: 2025-10-02 13:06:50.696 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:06:50 compute-0 nova_compute[257802]: 2025-10-02 13:06:50.697 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2215923069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:51.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:51.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:51 compute-0 ceph-mon[73607]: pgmap v3245: 305 pgs: 305 active+clean; 224 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.2 MiB/s wr, 136 op/s
Oct 02 13:06:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/305836914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3246: 305 pgs: 305 active+clean; 224 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 92 op/s
Oct 02 13:06:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:53.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:06:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:53.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:06:53 compute-0 nova_compute[257802]: 2025-10-02 13:06:53.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:53 compute-0 ceph-mon[73607]: pgmap v3246: 305 pgs: 305 active+clean; 224 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 92 op/s
Oct 02 13:06:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3247: 305 pgs: 305 active+clean; 248 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 132 op/s
Oct 02 13:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0032021362600037225 of space, bias 1.0, pg target 0.9606408780011167 quantized to 32 (current 32)
Oct 02 13:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:06:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:06:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:06:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:55.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:06:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:06:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2739963422' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:06:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:06:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2739963422' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:06:55 compute-0 nova_compute[257802]: 2025-10-02 13:06:55.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:55 compute-0 ovn_controller[148183]: 2025-10-02T13:06:55Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:25:9d:a2 10.100.0.5
Oct 02 13:06:55 compute-0 ovn_controller[148183]: 2025-10-02T13:06:55Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:25:9d:a2 10.100.0.5
Oct 02 13:06:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:55.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:55 compute-0 ceph-mon[73607]: pgmap v3247: 305 pgs: 305 active+clean; 248 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 132 op/s
Oct 02 13:06:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2739963422' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:06:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2739963422' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:06:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3248: 305 pgs: 305 active+clean; 263 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.8 MiB/s wr, 162 op/s
Oct 02 13:06:56 compute-0 ceph-mon[73607]: pgmap v3248: 305 pgs: 305 active+clean; 263 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.8 MiB/s wr, 162 op/s
Oct 02 13:06:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:57.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:06:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:57.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:06:57 compute-0 nova_compute[257802]: 2025-10-02 13:06:57.697 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3249: 305 pgs: 305 active+clean; 273 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.2 MiB/s wr, 148 op/s
Oct 02 13:06:58 compute-0 nova_compute[257802]: 2025-10-02 13:06:58.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:59.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:06:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:59.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:59 compute-0 ceph-mon[73607]: pgmap v3249: 305 pgs: 305 active+clean; 273 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.2 MiB/s wr, 148 op/s
Oct 02 13:07:00 compute-0 nova_compute[257802]: 2025-10-02 13:07:00.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:00 compute-0 nova_compute[257802]: 2025-10-02 13:07:00.352 2 INFO nova.compute.manager [None req-0e59136e-5648-4884-b3a2-055a1cd9ecc1 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Get console output
Oct 02 13:07:00 compute-0 nova_compute[257802]: 2025-10-02 13:07:00.357 20794 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 13:07:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3250: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 794 KiB/s rd, 3.9 MiB/s wr, 128 op/s
Oct 02 13:07:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:07:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:01.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:07:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:01.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:01 compute-0 ceph-mon[73607]: pgmap v3250: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 794 KiB/s rd, 3.9 MiB/s wr, 128 op/s
Oct 02 13:07:02 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:07:02.000 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=78, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=77) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:07:02 compute-0 nova_compute[257802]: 2025-10-02 13:07:02.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:02 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:07:02.001 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:07:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3251: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 770 KiB/s rd, 3.1 MiB/s wr, 116 op/s
Oct 02 13:07:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:03.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:03 compute-0 nova_compute[257802]: 2025-10-02 13:07:03.111 2 INFO nova.compute.manager [None req-e01f593b-1a17-44f3-bb2e-434df11fe099 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Get console output
Oct 02 13:07:03 compute-0 nova_compute[257802]: 2025-10-02 13:07:03.116 20794 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 13:07:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:07:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:03.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:07:03 compute-0 ceph-mon[73607]: pgmap v3251: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 770 KiB/s rd, 3.1 MiB/s wr, 116 op/s
Oct 02 13:07:03 compute-0 nova_compute[257802]: 2025-10-02 13:07:03.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3252: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 771 KiB/s rd, 3.1 MiB/s wr, 116 op/s
Oct 02 13:07:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:07:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:05.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:07:05 compute-0 sudo[397193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:05 compute-0 sudo[397193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:05 compute-0 sudo[397193]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:05 compute-0 sudo[397218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:05 compute-0 sudo[397218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:05 compute-0 sudo[397218]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:05 compute-0 nova_compute[257802]: 2025-10-02 13:07:05.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:07:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:05.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:07:05 compute-0 ceph-mon[73607]: pgmap v3252: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 771 KiB/s rd, 3.1 MiB/s wr, 116 op/s
Oct 02 13:07:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3253: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 589 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct 02 13:07:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:07:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:07.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:07:07 compute-0 ceph-mon[73607]: pgmap v3253: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 589 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct 02 13:07:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:07:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:07.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:07:08 compute-0 nova_compute[257802]: 2025-10-02 13:07:08.020 2 DEBUG oslo_concurrency.lockutils [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Acquiring lock "refresh_cache-ededa679-1161-406f-bccd-e571b80d8f7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:07:08 compute-0 nova_compute[257802]: 2025-10-02 13:07:08.020 2 DEBUG oslo_concurrency.lockutils [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Acquired lock "refresh_cache-ededa679-1161-406f-bccd-e571b80d8f7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:07:08 compute-0 nova_compute[257802]: 2025-10-02 13:07:08.020 2 DEBUG nova.network.neutron [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:07:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/433554995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3254: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 184 KiB/s rd, 451 KiB/s wr, 37 op/s
Oct 02 13:07:08 compute-0 nova_compute[257802]: 2025-10-02 13:07:08.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:08 compute-0 podman[397245]: 2025-10-02 13:07:08.921514467 +0000 UTC m=+0.050046371 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 13:07:08 compute-0 podman[397246]: 2025-10-02 13:07:08.934222025 +0000 UTC m=+0.058539707 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Oct 02 13:07:08 compute-0 podman[397247]: 2025-10-02 13:07:08.943722734 +0000 UTC m=+0.063916396 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:07:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:09.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:09 compute-0 ceph-mon[73607]: pgmap v3254: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 184 KiB/s rd, 451 KiB/s wr, 37 op/s
Oct 02 13:07:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:09.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:10 compute-0 nova_compute[257802]: 2025-10-02 13:07:10.191 2 DEBUG nova.network.neutron [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Updating instance_info_cache with network_info: [{"id": "ccb82929-088d-402f-87fd-a1b429d52ff7", "address": "fa:16:3e:25:9d:a2", "network": {"id": "6ac71ae6-e349-4379-83eb-f06b6ed86426", "bridge": "br-int", "label": "tempest-network-smoke--1276596428", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccb82929-08", "ovs_interfaceid": "ccb82929-088d-402f-87fd-a1b429d52ff7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:07:10 compute-0 nova_compute[257802]: 2025-10-02 13:07:10.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:10 compute-0 nova_compute[257802]: 2025-10-02 13:07:10.265 2 DEBUG oslo_concurrency.lockutils [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Releasing lock "refresh_cache-ededa679-1161-406f-bccd-e571b80d8f7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:07:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3255: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 90 KiB/s rd, 75 KiB/s wr, 16 op/s
Oct 02 13:07:10 compute-0 nova_compute[257802]: 2025-10-02 13:07:10.644 2 DEBUG nova.virt.libvirt.driver [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Oct 02 13:07:10 compute-0 nova_compute[257802]: 2025-10-02 13:07:10.644 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Creating file /var/lib/nova/instances/ededa679-1161-406f-bccd-e571b80d8f7f/3ba67a60cf494610bc1cf74d4bd45fca.tmp on remote host 192.168.122.101 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Oct 02 13:07:10 compute-0 nova_compute[257802]: 2025-10-02 13:07:10.645 2 DEBUG oslo_concurrency.processutils [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/ededa679-1161-406f-bccd-e571b80d8f7f/3ba67a60cf494610bc1cf74d4bd45fca.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:07:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:11 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:07:11.003 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '78'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:07:11 compute-0 nova_compute[257802]: 2025-10-02 13:07:11.040 2 DEBUG oslo_concurrency.processutils [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/ededa679-1161-406f-bccd-e571b80d8f7f/3ba67a60cf494610bc1cf74d4bd45fca.tmp" returned: 1 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:07:11 compute-0 nova_compute[257802]: 2025-10-02 13:07:11.042 2 DEBUG oslo_concurrency.processutils [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] 'ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/ededa679-1161-406f-bccd-e571b80d8f7f/3ba67a60cf494610bc1cf74d4bd45fca.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Oct 02 13:07:11 compute-0 nova_compute[257802]: 2025-10-02 13:07:11.043 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Creating directory /var/lib/nova/instances/ededa679-1161-406f-bccd-e571b80d8f7f on remote host 192.168.122.101 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Oct 02 13:07:11 compute-0 nova_compute[257802]: 2025-10-02 13:07:11.044 2 DEBUG oslo_concurrency.processutils [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/ededa679-1161-406f-bccd-e571b80d8f7f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:07:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:07:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:11.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:07:11 compute-0 nova_compute[257802]: 2025-10-02 13:07:11.278 2 DEBUG oslo_concurrency.processutils [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/ededa679-1161-406f-bccd-e571b80d8f7f" returned: 0 in 0.234s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:07:11 compute-0 nova_compute[257802]: 2025-10-02 13:07:11.285 2 DEBUG nova.virt.libvirt.driver [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 13:07:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:07:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:11.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:07:11 compute-0 ceph-mon[73607]: pgmap v3255: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 90 KiB/s rd, 75 KiB/s wr, 16 op/s
Oct 02 13:07:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3256: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 KiB/s rd, 14 KiB/s wr, 0 op/s
Oct 02 13:07:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:07:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:07:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:07:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:07:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:07:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:07:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:07:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:13.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:07:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:13.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:13 compute-0 kernel: tapccb82929-08 (unregistering): left promiscuous mode
Oct 02 13:07:13 compute-0 NetworkManager[44987]: <info>  [1759410433.6156] device (tapccb82929-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:07:13 compute-0 ovn_controller[148183]: 2025-10-02T13:07:13Z|00939|binding|INFO|Releasing lport ccb82929-088d-402f-87fd-a1b429d52ff7 from this chassis (sb_readonly=0)
Oct 02 13:07:13 compute-0 ovn_controller[148183]: 2025-10-02T13:07:13Z|00940|binding|INFO|Setting lport ccb82929-088d-402f-87fd-a1b429d52ff7 down in Southbound
Oct 02 13:07:13 compute-0 nova_compute[257802]: 2025-10-02 13:07:13.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:13 compute-0 ovn_controller[148183]: 2025-10-02T13:07:13Z|00941|binding|INFO|Removing iface tapccb82929-08 ovn-installed in OVS
Oct 02 13:07:13 compute-0 nova_compute[257802]: 2025-10-02 13:07:13.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:07:13.629 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:9d:a2 10.100.0.5'], port_security=['fa:16:3e:25:9d:a2 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ededa679-1161-406f-bccd-e571b80d8f7f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ac71ae6-e349-4379-83eb-f06b6ed86426', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08e102ae48244af2ab448a2e1ff757df', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f0130049-f161-49c4-b94d-f41f73f65259', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=916b0cf7-14c0-4414-a8be-ed0d8198832b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=ccb82929-088d-402f-87fd-a1b429d52ff7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:07:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:07:13.631 158261 INFO neutron.agent.ovn.metadata.agent [-] Port ccb82929-088d-402f-87fd-a1b429d52ff7 in datapath 6ac71ae6-e349-4379-83eb-f06b6ed86426 unbound from our chassis
Oct 02 13:07:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:07:13.632 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6ac71ae6-e349-4379-83eb-f06b6ed86426, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:07:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:07:13.633 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d665c4f6-80bc-4632-9d5a-db44d991a6f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:07:13.633 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426 namespace which is not needed anymore
Oct 02 13:07:13 compute-0 nova_compute[257802]: 2025-10-02 13:07:13.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:13 compute-0 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d000000cf.scope: Deactivated successfully.
Oct 02 13:07:13 compute-0 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d000000cf.scope: Consumed 14.216s CPU time.
Oct 02 13:07:13 compute-0 systemd-machined[211836]: Machine qemu-100-instance-000000cf terminated.
Oct 02 13:07:13 compute-0 ceph-mon[73607]: pgmap v3256: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 KiB/s rd, 14 KiB/s wr, 0 op/s
Oct 02 13:07:13 compute-0 neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426[396963]: [NOTICE]   (396967) : haproxy version is 2.8.14-c23fe91
Oct 02 13:07:13 compute-0 neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426[396963]: [NOTICE]   (396967) : path to executable is /usr/sbin/haproxy
Oct 02 13:07:13 compute-0 neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426[396963]: [WARNING]  (396967) : Exiting Master process...
Oct 02 13:07:13 compute-0 neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426[396963]: [WARNING]  (396967) : Exiting Master process...
Oct 02 13:07:13 compute-0 neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426[396963]: [ALERT]    (396967) : Current worker (396969) exited with code 143 (Terminated)
Oct 02 13:07:13 compute-0 neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426[396963]: [WARNING]  (396967) : All workers exited. Exiting... (0)
Oct 02 13:07:13 compute-0 systemd[1]: libpod-ec2325dbd6836eea49bb609dcfc4e6199b5c89068bb5f6f7d42cb537274eb496.scope: Deactivated successfully.
Oct 02 13:07:13 compute-0 podman[397331]: 2025-10-02 13:07:13.776995124 +0000 UTC m=+0.044805075 container died ec2325dbd6836eea49bb609dcfc4e6199b5c89068bb5f6f7d42cb537274eb496 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:07:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ec2325dbd6836eea49bb609dcfc4e6199b5c89068bb5f6f7d42cb537274eb496-userdata-shm.mount: Deactivated successfully.
Oct 02 13:07:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4fd615894ba94465181cbdb7eb4ce8408183dd5946c2ba73a4afa736da2f9d5-merged.mount: Deactivated successfully.
Oct 02 13:07:13 compute-0 nova_compute[257802]: 2025-10-02 13:07:13.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:13 compute-0 podman[397331]: 2025-10-02 13:07:13.820670841 +0000 UTC m=+0.088480782 container cleanup ec2325dbd6836eea49bb609dcfc4e6199b5c89068bb5f6f7d42cb537274eb496 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 02 13:07:13 compute-0 systemd[1]: libpod-conmon-ec2325dbd6836eea49bb609dcfc4e6199b5c89068bb5f6f7d42cb537274eb496.scope: Deactivated successfully.
Oct 02 13:07:13 compute-0 podman[397357]: 2025-10-02 13:07:13.893116473 +0000 UTC m=+0.048162356 container remove ec2325dbd6836eea49bb609dcfc4e6199b5c89068bb5f6f7d42cb537274eb496 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 13:07:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:07:13.900 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a3caf474-72e9-40a1-b7e6-211d029ec934]: (4, ('Thu Oct  2 01:07:13 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426 (ec2325dbd6836eea49bb609dcfc4e6199b5c89068bb5f6f7d42cb537274eb496)\nec2325dbd6836eea49bb609dcfc4e6199b5c89068bb5f6f7d42cb537274eb496\nThu Oct  2 01:07:13 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426 (ec2325dbd6836eea49bb609dcfc4e6199b5c89068bb5f6f7d42cb537274eb496)\nec2325dbd6836eea49bb609dcfc4e6199b5c89068bb5f6f7d42cb537274eb496\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:07:13.904 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ef8430ad-f559-499b-98e4-a8d2a87df5e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:07:13.906 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ac71ae6-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:07:13 compute-0 nova_compute[257802]: 2025-10-02 13:07:13.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:13 compute-0 kernel: tap6ac71ae6-e0: left promiscuous mode
Oct 02 13:07:13 compute-0 nova_compute[257802]: 2025-10-02 13:07:13.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:07:13.931 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e7a8e61f-6380-4af0-a394-2cc6d58063e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:13 compute-0 nova_compute[257802]: 2025-10-02 13:07:13.957 2 DEBUG nova.compute.manager [req-7d65c751-cbbc-4a90-bc91-e8be03328b35 req-2e1c8910-d6c7-4abd-b6fa-9de1fef5c423 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Received event network-vif-unplugged-ccb82929-088d-402f-87fd-a1b429d52ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:07:13 compute-0 nova_compute[257802]: 2025-10-02 13:07:13.957 2 DEBUG oslo_concurrency.lockutils [req-7d65c751-cbbc-4a90-bc91-e8be03328b35 req-2e1c8910-d6c7-4abd-b6fa-9de1fef5c423 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:13 compute-0 nova_compute[257802]: 2025-10-02 13:07:13.957 2 DEBUG oslo_concurrency.lockutils [req-7d65c751-cbbc-4a90-bc91-e8be03328b35 req-2e1c8910-d6c7-4abd-b6fa-9de1fef5c423 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:13 compute-0 nova_compute[257802]: 2025-10-02 13:07:13.957 2 DEBUG oslo_concurrency.lockutils [req-7d65c751-cbbc-4a90-bc91-e8be03328b35 req-2e1c8910-d6c7-4abd-b6fa-9de1fef5c423 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:13 compute-0 nova_compute[257802]: 2025-10-02 13:07:13.957 2 DEBUG nova.compute.manager [req-7d65c751-cbbc-4a90-bc91-e8be03328b35 req-2e1c8910-d6c7-4abd-b6fa-9de1fef5c423 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] No waiting events found dispatching network-vif-unplugged-ccb82929-088d-402f-87fd-a1b429d52ff7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:07:13 compute-0 nova_compute[257802]: 2025-10-02 13:07:13.958 2 WARNING nova.compute.manager [req-7d65c751-cbbc-4a90-bc91-e8be03328b35 req-2e1c8910-d6c7-4abd-b6fa-9de1fef5c423 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Received unexpected event network-vif-unplugged-ccb82929-088d-402f-87fd-a1b429d52ff7 for instance with vm_state active and task_state resize_migrating.
Oct 02 13:07:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:07:13.974 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[05b02a01-cd84-4209-a06a-e26835a868bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:07:13.976 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[734ff6e4-11ce-459e-9be4-76c716e05dfb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:07:13.997 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ee4884be-e5c2-4e2c-8743-9a35e6ca1cf7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 846885, 'reachable_time': 36994, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397386, 'error': None, 'target': 'ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:07:14.000 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:07:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:07:14.000 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[e92e8189-f03d-4d49-8730-94715e31011e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:14 compute-0 systemd[1]: run-netns-ovnmeta\x2d6ac71ae6\x2de349\x2d4379\x2d83eb\x2df06b6ed86426.mount: Deactivated successfully.
Oct 02 13:07:14 compute-0 nova_compute[257802]: 2025-10-02 13:07:14.305 2 INFO nova.virt.libvirt.driver [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Instance shutdown successfully after 3 seconds.
Oct 02 13:07:14 compute-0 nova_compute[257802]: 2025-10-02 13:07:14.310 2 INFO nova.virt.libvirt.driver [-] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Instance destroyed successfully.
Oct 02 13:07:14 compute-0 nova_compute[257802]: 2025-10-02 13:07:14.311 2 DEBUG nova.virt.libvirt.vif [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:06:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-84071708',display_name='tempest-TestNetworkAdvancedServerOps-server-84071708',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-84071708',id=207,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIX/+5SRLUhs0eDaLcolgjqYkVwsuk4D/uvy0QHY7DM2c673uNSh9aodkpDeRsWNUXML+zucDvvL1LRs9C6jNeVRJuMPEGHmaQ46I/VMjFKhJP1gLMaRGktNaEffxctaUQ==',key_name='tempest-TestNetworkAdvancedServerOps-1720569281',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:06:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-znqtzf0g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:07:07Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=ededa679-1161-406f-bccd-e571b80d8f7f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ccb82929-088d-402f-87fd-a1b429d52ff7", "address": "fa:16:3e:25:9d:a2", "network": {"id": "6ac71ae6-e349-4379-83eb-f06b6ed86426", "bridge": "br-int", "label": "tempest-network-smoke--1276596428", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1276596428", "vif_mac": "fa:16:3e:25:9d:a2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccb82929-08", "ovs_interfaceid": "ccb82929-088d-402f-87fd-a1b429d52ff7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:07:14 compute-0 nova_compute[257802]: 2025-10-02 13:07:14.311 2 DEBUG nova.network.os_vif_util [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Converting VIF {"id": "ccb82929-088d-402f-87fd-a1b429d52ff7", "address": "fa:16:3e:25:9d:a2", "network": {"id": "6ac71ae6-e349-4379-83eb-f06b6ed86426", "bridge": "br-int", "label": "tempest-network-smoke--1276596428", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1276596428", "vif_mac": "fa:16:3e:25:9d:a2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccb82929-08", "ovs_interfaceid": "ccb82929-088d-402f-87fd-a1b429d52ff7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:07:14 compute-0 nova_compute[257802]: 2025-10-02 13:07:14.312 2 DEBUG nova.network.os_vif_util [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:25:9d:a2,bridge_name='br-int',has_traffic_filtering=True,id=ccb82929-088d-402f-87fd-a1b429d52ff7,network=Network(6ac71ae6-e349-4379-83eb-f06b6ed86426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccb82929-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:07:14 compute-0 nova_compute[257802]: 2025-10-02 13:07:14.312 2 DEBUG os_vif [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:25:9d:a2,bridge_name='br-int',has_traffic_filtering=True,id=ccb82929-088d-402f-87fd-a1b429d52ff7,network=Network(6ac71ae6-e349-4379-83eb-f06b6ed86426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccb82929-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:07:14 compute-0 nova_compute[257802]: 2025-10-02 13:07:14.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:14 compute-0 nova_compute[257802]: 2025-10-02 13:07:14.315 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapccb82929-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:07:14 compute-0 nova_compute[257802]: 2025-10-02 13:07:14.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:14 compute-0 nova_compute[257802]: 2025-10-02 13:07:14.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:14 compute-0 nova_compute[257802]: 2025-10-02 13:07:14.319 2 INFO os_vif [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:25:9d:a2,bridge_name='br-int',has_traffic_filtering=True,id=ccb82929-088d-402f-87fd-a1b429d52ff7,network=Network(6ac71ae6-e349-4379-83eb-f06b6ed86426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccb82929-08')
Oct 02 13:07:14 compute-0 nova_compute[257802]: 2025-10-02 13:07:14.322 2 DEBUG nova.virt.libvirt.driver [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] skipping disk for instance-000000cf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:07:14 compute-0 nova_compute[257802]: 2025-10-02 13:07:14.323 2 DEBUG nova.virt.libvirt.driver [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] skipping disk for instance-000000cf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:07:14 compute-0 nova_compute[257802]: 2025-10-02 13:07:14.461 2 DEBUG neutronclient.v2_0.client [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port ccb82929-088d-402f-87fd-a1b429d52ff7 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Oct 02 13:07:14 compute-0 nova_compute[257802]: 2025-10-02 13:07:14.555 2 DEBUG oslo_concurrency.lockutils [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Acquiring lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:14 compute-0 nova_compute[257802]: 2025-10-02 13:07:14.556 2 DEBUG oslo_concurrency.lockutils [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:14 compute-0 nova_compute[257802]: 2025-10-02 13:07:14.556 2 DEBUG oslo_concurrency.lockutils [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3257: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.8 KiB/s rd, 29 KiB/s wr, 5 op/s
Oct 02 13:07:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:15.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:15 compute-0 nova_compute[257802]: 2025-10-02 13:07:15.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:15.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:15 compute-0 nova_compute[257802]: 2025-10-02 13:07:15.786 2 DEBUG nova.compute.manager [req-0417012a-6a40-468e-b7fb-c5e00f20fd17 req-923c8020-ba64-4797-82bc-a559b6844665 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Received event network-changed-ccb82929-088d-402f-87fd-a1b429d52ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:07:15 compute-0 nova_compute[257802]: 2025-10-02 13:07:15.786 2 DEBUG nova.compute.manager [req-0417012a-6a40-468e-b7fb-c5e00f20fd17 req-923c8020-ba64-4797-82bc-a559b6844665 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Refreshing instance network info cache due to event network-changed-ccb82929-088d-402f-87fd-a1b429d52ff7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:07:15 compute-0 nova_compute[257802]: 2025-10-02 13:07:15.787 2 DEBUG oslo_concurrency.lockutils [req-0417012a-6a40-468e-b7fb-c5e00f20fd17 req-923c8020-ba64-4797-82bc-a559b6844665 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-ededa679-1161-406f-bccd-e571b80d8f7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:07:15 compute-0 nova_compute[257802]: 2025-10-02 13:07:15.787 2 DEBUG oslo_concurrency.lockutils [req-0417012a-6a40-468e-b7fb-c5e00f20fd17 req-923c8020-ba64-4797-82bc-a559b6844665 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-ededa679-1161-406f-bccd-e571b80d8f7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:07:15 compute-0 nova_compute[257802]: 2025-10-02 13:07:15.787 2 DEBUG nova.network.neutron [req-0417012a-6a40-468e-b7fb-c5e00f20fd17 req-923c8020-ba64-4797-82bc-a559b6844665 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Refreshing network info cache for port ccb82929-088d-402f-87fd-a1b429d52ff7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:07:15 compute-0 ceph-mon[73607]: pgmap v3257: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.8 KiB/s rd, 29 KiB/s wr, 5 op/s
Oct 02 13:07:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:16 compute-0 nova_compute[257802]: 2025-10-02 13:07:16.063 2 DEBUG nova.compute.manager [req-475c81d9-e001-4491-809f-83e303810757 req-222d8dc9-b656-481d-9d10-8ad968ae4c34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Received event network-vif-plugged-ccb82929-088d-402f-87fd-a1b429d52ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:07:16 compute-0 nova_compute[257802]: 2025-10-02 13:07:16.063 2 DEBUG oslo_concurrency.lockutils [req-475c81d9-e001-4491-809f-83e303810757 req-222d8dc9-b656-481d-9d10-8ad968ae4c34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:16 compute-0 nova_compute[257802]: 2025-10-02 13:07:16.064 2 DEBUG oslo_concurrency.lockutils [req-475c81d9-e001-4491-809f-83e303810757 req-222d8dc9-b656-481d-9d10-8ad968ae4c34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:16 compute-0 nova_compute[257802]: 2025-10-02 13:07:16.064 2 DEBUG oslo_concurrency.lockutils [req-475c81d9-e001-4491-809f-83e303810757 req-222d8dc9-b656-481d-9d10-8ad968ae4c34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:16 compute-0 nova_compute[257802]: 2025-10-02 13:07:16.064 2 DEBUG nova.compute.manager [req-475c81d9-e001-4491-809f-83e303810757 req-222d8dc9-b656-481d-9d10-8ad968ae4c34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] No waiting events found dispatching network-vif-plugged-ccb82929-088d-402f-87fd-a1b429d52ff7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:07:16 compute-0 nova_compute[257802]: 2025-10-02 13:07:16.064 2 WARNING nova.compute.manager [req-475c81d9-e001-4491-809f-83e303810757 req-222d8dc9-b656-481d-9d10-8ad968ae4c34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Received unexpected event network-vif-plugged-ccb82929-088d-402f-87fd-a1b429d52ff7 for instance with vm_state active and task_state resize_migrated.
Oct 02 13:07:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3258: 305 pgs: 305 active+clean; 263 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 20 KiB/s wr, 17 op/s
Oct 02 13:07:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1708553005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:17.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:17.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e399 do_prune osdmap full prune enabled
Oct 02 13:07:17 compute-0 ceph-mon[73607]: pgmap v3258: 305 pgs: 305 active+clean; 263 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 20 KiB/s wr, 17 op/s
Oct 02 13:07:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e400 e400: 3 total, 3 up, 3 in
Oct 02 13:07:17 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e400: 3 total, 3 up, 3 in
Oct 02 13:07:18 compute-0 nova_compute[257802]: 2025-10-02 13:07:18.054 2 DEBUG nova.network.neutron [req-0417012a-6a40-468e-b7fb-c5e00f20fd17 req-923c8020-ba64-4797-82bc-a559b6844665 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Updated VIF entry in instance network info cache for port ccb82929-088d-402f-87fd-a1b429d52ff7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:07:18 compute-0 nova_compute[257802]: 2025-10-02 13:07:18.054 2 DEBUG nova.network.neutron [req-0417012a-6a40-468e-b7fb-c5e00f20fd17 req-923c8020-ba64-4797-82bc-a559b6844665 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Updating instance_info_cache with network_info: [{"id": "ccb82929-088d-402f-87fd-a1b429d52ff7", "address": "fa:16:3e:25:9d:a2", "network": {"id": "6ac71ae6-e349-4379-83eb-f06b6ed86426", "bridge": "br-int", "label": "tempest-network-smoke--1276596428", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccb82929-08", "ovs_interfaceid": "ccb82929-088d-402f-87fd-a1b429d52ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:07:18 compute-0 nova_compute[257802]: 2025-10-02 13:07:18.071 2 DEBUG oslo_concurrency.lockutils [req-0417012a-6a40-468e-b7fb-c5e00f20fd17 req-923c8020-ba64-4797-82bc-a559b6844665 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-ededa679-1161-406f-bccd-e571b80d8f7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:07:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3260: 305 pgs: 305 active+clean; 221 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 24 KiB/s wr, 40 op/s
Oct 02 13:07:18 compute-0 ceph-mon[73607]: osdmap e400: 3 total, 3 up, 3 in
Oct 02 13:07:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1094018913' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:07:18 compute-0 ceph-mon[73607]: pgmap v3260: 305 pgs: 305 active+clean; 221 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 24 KiB/s wr, 40 op/s
Oct 02 13:07:18 compute-0 podman[397390]: 2025-10-02 13:07:18.937529811 +0000 UTC m=+0.076897881 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:07:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:19.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:19 compute-0 nova_compute[257802]: 2025-10-02 13:07:19.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:07:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:19.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:07:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3034389863' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:07:20 compute-0 nova_compute[257802]: 2025-10-02 13:07:20.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3261: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 51 KiB/s rd, 23 KiB/s wr, 63 op/s
Oct 02 13:07:20 compute-0 nova_compute[257802]: 2025-10-02 13:07:20.648 2 DEBUG nova.compute.manager [req-550ecdc5-4550-49bf-ab8a-310343b063ea req-a1f06c9e-4106-4237-bab9-7d9a8c8d753b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Received event network-vif-plugged-ccb82929-088d-402f-87fd-a1b429d52ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:07:20 compute-0 nova_compute[257802]: 2025-10-02 13:07:20.649 2 DEBUG oslo_concurrency.lockutils [req-550ecdc5-4550-49bf-ab8a-310343b063ea req-a1f06c9e-4106-4237-bab9-7d9a8c8d753b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:20 compute-0 nova_compute[257802]: 2025-10-02 13:07:20.649 2 DEBUG oslo_concurrency.lockutils [req-550ecdc5-4550-49bf-ab8a-310343b063ea req-a1f06c9e-4106-4237-bab9-7d9a8c8d753b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:20 compute-0 nova_compute[257802]: 2025-10-02 13:07:20.649 2 DEBUG oslo_concurrency.lockutils [req-550ecdc5-4550-49bf-ab8a-310343b063ea req-a1f06c9e-4106-4237-bab9-7d9a8c8d753b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:20 compute-0 nova_compute[257802]: 2025-10-02 13:07:20.649 2 DEBUG nova.compute.manager [req-550ecdc5-4550-49bf-ab8a-310343b063ea req-a1f06c9e-4106-4237-bab9-7d9a8c8d753b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] No waiting events found dispatching network-vif-plugged-ccb82929-088d-402f-87fd-a1b429d52ff7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:07:20 compute-0 nova_compute[257802]: 2025-10-02 13:07:20.650 2 WARNING nova.compute.manager [req-550ecdc5-4550-49bf-ab8a-310343b063ea req-a1f06c9e-4106-4237-bab9-7d9a8c8d753b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Received unexpected event network-vif-plugged-ccb82929-088d-402f-87fd-a1b429d52ff7 for instance with vm_state resized and task_state None.
Oct 02 13:07:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:07:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:21.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:07:21 compute-0 ceph-mon[73607]: pgmap v3261: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 51 KiB/s rd, 23 KiB/s wr, 63 op/s
Oct 02 13:07:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1844808281' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:21.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2798188944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3262: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 51 KiB/s rd, 23 KiB/s wr, 63 op/s
Oct 02 13:07:22 compute-0 nova_compute[257802]: 2025-10-02 13:07:22.777 2 DEBUG nova.compute.manager [req-66336729-4d1d-4b7f-a70e-7378baa81b3c req-1477125e-6ae3-4d52-b580-4d55d32bc011 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Received event network-vif-plugged-ccb82929-088d-402f-87fd-a1b429d52ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:07:22 compute-0 nova_compute[257802]: 2025-10-02 13:07:22.778 2 DEBUG oslo_concurrency.lockutils [req-66336729-4d1d-4b7f-a70e-7378baa81b3c req-1477125e-6ae3-4d52-b580-4d55d32bc011 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:22 compute-0 nova_compute[257802]: 2025-10-02 13:07:22.779 2 DEBUG oslo_concurrency.lockutils [req-66336729-4d1d-4b7f-a70e-7378baa81b3c req-1477125e-6ae3-4d52-b580-4d55d32bc011 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:22 compute-0 nova_compute[257802]: 2025-10-02 13:07:22.779 2 DEBUG oslo_concurrency.lockutils [req-66336729-4d1d-4b7f-a70e-7378baa81b3c req-1477125e-6ae3-4d52-b580-4d55d32bc011 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:22 compute-0 nova_compute[257802]: 2025-10-02 13:07:22.779 2 DEBUG nova.compute.manager [req-66336729-4d1d-4b7f-a70e-7378baa81b3c req-1477125e-6ae3-4d52-b580-4d55d32bc011 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] No waiting events found dispatching network-vif-plugged-ccb82929-088d-402f-87fd-a1b429d52ff7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:07:22 compute-0 nova_compute[257802]: 2025-10-02 13:07:22.780 2 WARNING nova.compute.manager [req-66336729-4d1d-4b7f-a70e-7378baa81b3c req-1477125e-6ae3-4d52-b580-4d55d32bc011 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Received unexpected event network-vif-plugged-ccb82929-088d-402f-87fd-a1b429d52ff7 for instance with vm_state resized and task_state None.
Oct 02 13:07:22 compute-0 nova_compute[257802]: 2025-10-02 13:07:22.994 2 DEBUG oslo_concurrency.lockutils [None req-720a67c2-9449-4642-a5c8-81148ea9e1e2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "ededa679-1161-406f-bccd-e571b80d8f7f" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:22 compute-0 nova_compute[257802]: 2025-10-02 13:07:22.994 2 DEBUG oslo_concurrency.lockutils [None req-720a67c2-9449-4642-a5c8-81148ea9e1e2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:22 compute-0 nova_compute[257802]: 2025-10-02 13:07:22.995 2 DEBUG nova.compute.manager [None req-720a67c2-9449-4642-a5c8-81148ea9e1e2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Going to confirm migration 21 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Oct 02 13:07:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:07:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:23.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:07:23 compute-0 ceph-mon[73607]: pgmap v3262: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 51 KiB/s rd, 23 KiB/s wr, 63 op/s
Oct 02 13:07:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:23.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:24 compute-0 nova_compute[257802]: 2025-10-02 13:07:24.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3263: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 4.6 KiB/s wr, 104 op/s
Oct 02 13:07:24 compute-0 nova_compute[257802]: 2025-10-02 13:07:24.791 2 DEBUG neutronclient.v2_0.client [None req-720a67c2-9449-4642-a5c8-81148ea9e1e2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port ccb82929-088d-402f-87fd-a1b429d52ff7 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Oct 02 13:07:24 compute-0 nova_compute[257802]: 2025-10-02 13:07:24.792 2 DEBUG oslo_concurrency.lockutils [None req-720a67c2-9449-4642-a5c8-81148ea9e1e2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "refresh_cache-ededa679-1161-406f-bccd-e571b80d8f7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:07:24 compute-0 nova_compute[257802]: 2025-10-02 13:07:24.792 2 DEBUG oslo_concurrency.lockutils [None req-720a67c2-9449-4642-a5c8-81148ea9e1e2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquired lock "refresh_cache-ededa679-1161-406f-bccd-e571b80d8f7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:07:24 compute-0 nova_compute[257802]: 2025-10-02 13:07:24.792 2 DEBUG nova.network.neutron [None req-720a67c2-9449-4642-a5c8-81148ea9e1e2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:07:24 compute-0 nova_compute[257802]: 2025-10-02 13:07:24.793 2 DEBUG nova.objects.instance [None req-720a67c2-9449-4642-a5c8-81148ea9e1e2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'info_cache' on Instance uuid ededa679-1161-406f-bccd-e571b80d8f7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:07:25 compute-0 nova_compute[257802]: 2025-10-02 13:07:25.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:25 compute-0 nova_compute[257802]: 2025-10-02 13:07:25.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:07:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:25.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:25 compute-0 nova_compute[257802]: 2025-10-02 13:07:25.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:25 compute-0 sudo[397419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:25 compute-0 sudo[397419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:25 compute-0 sudo[397419]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:25 compute-0 sudo[397444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:25 compute-0 sudo[397444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:25 compute-0 sudo[397444]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:25.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:25 compute-0 ceph-mon[73607]: pgmap v3263: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 4.6 KiB/s wr, 104 op/s
Oct 02 13:07:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3264: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.2 KiB/s wr, 124 op/s
Oct 02 13:07:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:07:26.988 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:07:26.989 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:07:26.989 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:07:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:27.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:07:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:27.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:27 compute-0 ceph-mon[73607]: pgmap v3264: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.2 KiB/s wr, 124 op/s
Oct 02 13:07:27 compute-0 nova_compute[257802]: 2025-10-02 13:07:27.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:27 compute-0 nova_compute[257802]: 2025-10-02 13:07:27.964 2 DEBUG nova.network.neutron [None req-720a67c2-9449-4642-a5c8-81148ea9e1e2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Updating instance_info_cache with network_info: [{"id": "ccb82929-088d-402f-87fd-a1b429d52ff7", "address": "fa:16:3e:25:9d:a2", "network": {"id": "6ac71ae6-e349-4379-83eb-f06b6ed86426", "bridge": "br-int", "label": "tempest-network-smoke--1276596428", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccb82929-08", "ovs_interfaceid": "ccb82929-088d-402f-87fd-a1b429d52ff7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:07:27 compute-0 nova_compute[257802]: 2025-10-02 13:07:27.991 2 DEBUG oslo_concurrency.lockutils [None req-720a67c2-9449-4642-a5c8-81148ea9e1e2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Releasing lock "refresh_cache-ededa679-1161-406f-bccd-e571b80d8f7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:07:27 compute-0 nova_compute[257802]: 2025-10-02 13:07:27.991 2 DEBUG nova.objects.instance [None req-720a67c2-9449-4642-a5c8-81148ea9e1e2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'migration_context' on Instance uuid ededa679-1161-406f-bccd-e571b80d8f7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:07:28 compute-0 nova_compute[257802]: 2025-10-02 13:07:28.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:28 compute-0 nova_compute[257802]: 2025-10-02 13:07:28.094 2 DEBUG nova.storage.rbd_utils [None req-720a67c2-9449-4642-a5c8-81148ea9e1e2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] removing snapshot(nova-resize) on rbd image(ededa679-1161-406f-bccd-e571b80d8f7f_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 13:07:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3265: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 383 B/s wr, 97 op/s
Oct 02 13:07:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e400 do_prune osdmap full prune enabled
Oct 02 13:07:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e401 e401: 3 total, 3 up, 3 in
Oct 02 13:07:28 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e401: 3 total, 3 up, 3 in
Oct 02 13:07:28 compute-0 nova_compute[257802]: 2025-10-02 13:07:28.760 2 DEBUG nova.virt.libvirt.vif [None req-720a67c2-9449-4642-a5c8-81148ea9e1e2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:06:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-84071708',display_name='tempest-TestNetworkAdvancedServerOps-server-84071708',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-84071708',id=207,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIX/+5SRLUhs0eDaLcolgjqYkVwsuk4D/uvy0QHY7DM2c673uNSh9aodkpDeRsWNUXML+zucDvvL1LRs9C6jNeVRJuMPEGHmaQ46I/VMjFKhJP1gLMaRGktNaEffxctaUQ==',key_name='tempest-TestNetworkAdvancedServerOps-1720569281',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:07:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-znqtzf0g',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:07:20Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=ededa679-1161-406f-bccd-e571b80d8f7f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "ccb82929-088d-402f-87fd-a1b429d52ff7", "address": "fa:16:3e:25:9d:a2", "network": {"id": "6ac71ae6-e349-4379-83eb-f06b6ed86426", "bridge": "br-int", "label": "tempest-network-smoke--1276596428", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccb82929-08", "ovs_interfaceid": "ccb82929-088d-402f-87fd-a1b429d52ff7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:07:28 compute-0 nova_compute[257802]: 2025-10-02 13:07:28.760 2 DEBUG nova.network.os_vif_util [None req-720a67c2-9449-4642-a5c8-81148ea9e1e2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "ccb82929-088d-402f-87fd-a1b429d52ff7", "address": "fa:16:3e:25:9d:a2", "network": {"id": "6ac71ae6-e349-4379-83eb-f06b6ed86426", "bridge": "br-int", "label": "tempest-network-smoke--1276596428", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccb82929-08", "ovs_interfaceid": "ccb82929-088d-402f-87fd-a1b429d52ff7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:07:28 compute-0 nova_compute[257802]: 2025-10-02 13:07:28.761 2 DEBUG nova.network.os_vif_util [None req-720a67c2-9449-4642-a5c8-81148ea9e1e2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:25:9d:a2,bridge_name='br-int',has_traffic_filtering=True,id=ccb82929-088d-402f-87fd-a1b429d52ff7,network=Network(6ac71ae6-e349-4379-83eb-f06b6ed86426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccb82929-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:07:28 compute-0 nova_compute[257802]: 2025-10-02 13:07:28.761 2 DEBUG os_vif [None req-720a67c2-9449-4642-a5c8-81148ea9e1e2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:25:9d:a2,bridge_name='br-int',has_traffic_filtering=True,id=ccb82929-088d-402f-87fd-a1b429d52ff7,network=Network(6ac71ae6-e349-4379-83eb-f06b6ed86426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccb82929-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:07:28 compute-0 nova_compute[257802]: 2025-10-02 13:07:28.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:28 compute-0 nova_compute[257802]: 2025-10-02 13:07:28.763 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapccb82929-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:07:28 compute-0 nova_compute[257802]: 2025-10-02 13:07:28.763 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:07:28 compute-0 nova_compute[257802]: 2025-10-02 13:07:28.765 2 INFO os_vif [None req-720a67c2-9449-4642-a5c8-81148ea9e1e2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:25:9d:a2,bridge_name='br-int',has_traffic_filtering=True,id=ccb82929-088d-402f-87fd-a1b429d52ff7,network=Network(6ac71ae6-e349-4379-83eb-f06b6ed86426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccb82929-08')
Oct 02 13:07:28 compute-0 nova_compute[257802]: 2025-10-02 13:07:28.765 2 DEBUG oslo_concurrency.lockutils [None req-720a67c2-9449-4642-a5c8-81148ea9e1e2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:28 compute-0 nova_compute[257802]: 2025-10-02 13:07:28.766 2 DEBUG oslo_concurrency.lockutils [None req-720a67c2-9449-4642-a5c8-81148ea9e1e2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:28 compute-0 nova_compute[257802]: 2025-10-02 13:07:28.833 2 DEBUG oslo_concurrency.processutils [None req-720a67c2-9449-4642-a5c8-81148ea9e1e2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:07:28 compute-0 nova_compute[257802]: 2025-10-02 13:07:28.870 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410433.868419, ededa679-1161-406f-bccd-e571b80d8f7f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:07:28 compute-0 nova_compute[257802]: 2025-10-02 13:07:28.870 2 INFO nova.compute.manager [-] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] VM Stopped (Lifecycle Event)
Oct 02 13:07:28 compute-0 nova_compute[257802]: 2025-10-02 13:07:28.890 2 DEBUG nova.compute.manager [None req-870670c2-2c5f-4c6f-8813-e499757e4d86 - - - - - -] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:07:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:29.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:07:29 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3562204391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:29 compute-0 nova_compute[257802]: 2025-10-02 13:07:29.249 2 DEBUG oslo_concurrency.processutils [None req-720a67c2-9449-4642-a5c8-81148ea9e1e2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:07:29 compute-0 nova_compute[257802]: 2025-10-02 13:07:29.255 2 DEBUG nova.compute.provider_tree [None req-720a67c2-9449-4642-a5c8-81148ea9e1e2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:07:29 compute-0 nova_compute[257802]: 2025-10-02 13:07:29.281 2 DEBUG nova.scheduler.client.report [None req-720a67c2-9449-4642-a5c8-81148ea9e1e2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:07:29 compute-0 nova_compute[257802]: 2025-10-02 13:07:29.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:29 compute-0 nova_compute[257802]: 2025-10-02 13:07:29.346 2 DEBUG oslo_concurrency.lockutils [None req-720a67c2-9449-4642-a5c8-81148ea9e1e2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:29 compute-0 nova_compute[257802]: 2025-10-02 13:07:29.457 2 INFO nova.scheduler.client.report [None req-720a67c2-9449-4642-a5c8-81148ea9e1e2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Deleted allocation for migration 1bfd5135-23f5-42f1-b10d-5b0477925fb3
Oct 02 13:07:29 compute-0 nova_compute[257802]: 2025-10-02 13:07:29.506 2 DEBUG oslo_concurrency.lockutils [None req-720a67c2-9449-4642-a5c8-81148ea9e1e2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 6.511s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:29.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:29 compute-0 ceph-mon[73607]: pgmap v3265: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 383 B/s wr, 97 op/s
Oct 02 13:07:29 compute-0 ceph-mon[73607]: osdmap e401: 3 total, 3 up, 3 in
Oct 02 13:07:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3562204391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:30 compute-0 nova_compute[257802]: 2025-10-02 13:07:30.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:30 compute-0 nova_compute[257802]: 2025-10-02 13:07:30.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3267: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 614 B/s wr, 90 op/s
Oct 02 13:07:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:31 compute-0 nova_compute[257802]: 2025-10-02 13:07:31.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:31.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:31.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:31 compute-0 ceph-mon[73607]: pgmap v3267: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 614 B/s wr, 90 op/s
Oct 02 13:07:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3268: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 614 B/s wr, 90 op/s
Oct 02 13:07:33 compute-0 nova_compute[257802]: 2025-10-02 13:07:33.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:33.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:33.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:33 compute-0 ceph-mon[73607]: pgmap v3268: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 614 B/s wr, 90 op/s
Oct 02 13:07:34 compute-0 nova_compute[257802]: 2025-10-02 13:07:34.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3269: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 13 KiB/s wr, 72 op/s
Oct 02 13:07:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:07:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:35.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:07:35 compute-0 nova_compute[257802]: 2025-10-02 13:07:35.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:07:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:35.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:07:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e401 do_prune osdmap full prune enabled
Oct 02 13:07:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e402 e402: 3 total, 3 up, 3 in
Oct 02 13:07:36 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e402: 3 total, 3 up, 3 in
Oct 02 13:07:36 compute-0 nova_compute[257802]: 2025-10-02 13:07:36.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:36 compute-0 ceph-mon[73607]: pgmap v3269: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 13 KiB/s wr, 72 op/s
Oct 02 13:07:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3271: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 735 KiB/s rd, 19 KiB/s wr, 73 op/s
Oct 02 13:07:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:37.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:37 compute-0 ceph-mon[73607]: osdmap e402: 3 total, 3 up, 3 in
Oct 02 13:07:37 compute-0 ceph-mon[73607]: pgmap v3271: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 735 KiB/s rd, 19 KiB/s wr, 73 op/s
Oct 02 13:07:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:37.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3272: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 651 KiB/s rd, 15 KiB/s wr, 64 op/s
Oct 02 13:07:39 compute-0 nova_compute[257802]: 2025-10-02 13:07:39.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:39.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:39 compute-0 nova_compute[257802]: 2025-10-02 13:07:39.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:39.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:39 compute-0 ceph-mon[73607]: pgmap v3272: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 651 KiB/s rd, 15 KiB/s wr, 64 op/s
Oct 02 13:07:39 compute-0 podman[397535]: 2025-10-02 13:07:39.930002815 +0000 UTC m=+0.067895973 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 13:07:39 compute-0 podman[397537]: 2025-10-02 13:07:39.937964548 +0000 UTC m=+0.071214944 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 13:07:39 compute-0 podman[397536]: 2025-10-02 13:07:39.952288134 +0000 UTC m=+0.084471854 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:07:40 compute-0 nova_compute[257802]: 2025-10-02 13:07:40.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3273: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 637 KiB/s rd, 14 KiB/s wr, 55 op/s
Oct 02 13:07:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:41.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:07:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:41.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:07:41 compute-0 ceph-mon[73607]: pgmap v3273: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 637 KiB/s rd, 14 KiB/s wr, 55 op/s
Oct 02 13:07:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1075869113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:07:42
Oct 02 13:07:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:07:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:07:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', '.mgr', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', '.rgw.root', 'volumes']
Oct 02 13:07:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:07:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3274: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 637 KiB/s rd, 14 KiB/s wr, 55 op/s
Oct 02 13:07:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:07:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:07:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:07:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:07:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:07:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:07:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1506125174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:43 compute-0 nova_compute[257802]: 2025-10-02 13:07:43.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:43 compute-0 nova_compute[257802]: 2025-10-02 13:07:43.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:07:43 compute-0 nova_compute[257802]: 2025-10-02 13:07:43.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:07:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:43.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:43 compute-0 nova_compute[257802]: 2025-10-02 13:07:43.140 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:07:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:07:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:07:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:07:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:07:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:07:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:07:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:43.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:07:43 compute-0 ceph-mon[73607]: pgmap v3274: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 637 KiB/s rd, 14 KiB/s wr, 55 op/s
Oct 02 13:07:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:07:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:07:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:07:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:07:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:07:44 compute-0 nova_compute[257802]: 2025-10-02 13:07:44.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3275: 305 pgs: 305 active+clean; 194 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 455 KiB/s rd, 15 KiB/s wr, 41 op/s
Oct 02 13:07:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:45.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:45 compute-0 nova_compute[257802]: 2025-10-02 13:07:45.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:45 compute-0 sudo[397593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:45 compute-0 sudo[397593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:45 compute-0 sudo[397593]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:45 compute-0 sudo[397592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:45 compute-0 sudo[397592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:45 compute-0 sudo[397592]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:45 compute-0 sudo[397642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:45 compute-0 sudo[397642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:45 compute-0 sudo[397642]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:45 compute-0 sudo[397648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:07:45 compute-0 sudo[397648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:45 compute-0 sudo[397648]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:45 compute-0 sudo[397692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:45 compute-0 sudo[397692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:45 compute-0 sudo[397692]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:07:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:45.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:07:45 compute-0 sudo[397717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 13:07:45 compute-0 sudo[397717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:45 compute-0 ceph-mon[73607]: pgmap v3275: 305 pgs: 305 active+clean; 194 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 455 KiB/s rd, 15 KiB/s wr, 41 op/s
Oct 02 13:07:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 13:07:45 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:07:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 13:07:45 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:07:45 compute-0 sudo[397717]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:07:45 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:07:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:07:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:46 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:07:46 compute-0 sudo[397762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:46 compute-0 sudo[397762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:46 compute-0 sudo[397762]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:46 compute-0 sudo[397787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:07:46 compute-0 sudo[397787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:46 compute-0 sudo[397787]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:46 compute-0 sudo[397812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:46 compute-0 sudo[397812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:46 compute-0 sudo[397812]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:46 compute-0 sudo[397837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:07:46 compute-0 sudo[397837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3276: 305 pgs: 305 active+clean; 153 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 75 KiB/s rd, 13 KiB/s wr, 35 op/s
Oct 02 13:07:46 compute-0 sudo[397837]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:07:46 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:07:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:07:46 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:07:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:07:46 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:07:46 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 5245acbc-c3be-4fa7-8607-7f5760e3388f does not exist
Oct 02 13:07:46 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 745a99c4-d750-4207-a502-c08acdadd823 does not exist
Oct 02 13:07:46 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev cf830ae1-8b2c-4f39-a99a-b31c8cb0cdcb does not exist
Oct 02 13:07:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:07:46 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:07:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:07:46 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:07:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:07:46 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:07:46 compute-0 sudo[397895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:46 compute-0 sudo[397895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:46 compute-0 sudo[397895]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:46 compute-0 sudo[397920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:07:46 compute-0 sudo[397920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:46 compute-0 sudo[397920]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:07:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:07:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:07:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:07:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3887397663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:46 compute-0 ceph-mon[73607]: pgmap v3276: 305 pgs: 305 active+clean; 153 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 75 KiB/s rd, 13 KiB/s wr, 35 op/s
Oct 02 13:07:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:07:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:07:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:07:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:07:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:07:46 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:07:46 compute-0 sudo[397945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:46 compute-0 sudo[397945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:46 compute-0 sudo[397945]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:47 compute-0 sudo[397970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:07:47 compute-0 sudo[397970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:47.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:47 compute-0 podman[398034]: 2025-10-02 13:07:47.401553221 +0000 UTC m=+0.046148566 container create 7e5d03054bae03482e282c1e83b7ca14bfd1c2f1d311cc02cd5c33cd6c9d3c4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_neumann, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:07:47 compute-0 systemd[1]: Started libpod-conmon-7e5d03054bae03482e282c1e83b7ca14bfd1c2f1d311cc02cd5c33cd6c9d3c4a.scope.
Oct 02 13:07:47 compute-0 podman[398034]: 2025-10-02 13:07:47.381522197 +0000 UTC m=+0.026117562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:07:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:07:47 compute-0 podman[398034]: 2025-10-02 13:07:47.507467994 +0000 UTC m=+0.152063349 container init 7e5d03054bae03482e282c1e83b7ca14bfd1c2f1d311cc02cd5c33cd6c9d3c4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 13:07:47 compute-0 podman[398034]: 2025-10-02 13:07:47.524018954 +0000 UTC m=+0.168614329 container start 7e5d03054bae03482e282c1e83b7ca14bfd1c2f1d311cc02cd5c33cd6c9d3c4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_neumann, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 13:07:47 compute-0 podman[398034]: 2025-10-02 13:07:47.52880453 +0000 UTC m=+0.173399875 container attach 7e5d03054bae03482e282c1e83b7ca14bfd1c2f1d311cc02cd5c33cd6c9d3c4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_neumann, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:07:47 compute-0 systemd[1]: libpod-7e5d03054bae03482e282c1e83b7ca14bfd1c2f1d311cc02cd5c33cd6c9d3c4a.scope: Deactivated successfully.
Oct 02 13:07:47 compute-0 upbeat_neumann[398050]: 167 167
Oct 02 13:07:47 compute-0 podman[398034]: 2025-10-02 13:07:47.537169142 +0000 UTC m=+0.181764487 container died 7e5d03054bae03482e282c1e83b7ca14bfd1c2f1d311cc02cd5c33cd6c9d3c4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 13:07:47 compute-0 conmon[398050]: conmon 7e5d03054bae03482e28 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7e5d03054bae03482e282c1e83b7ca14bfd1c2f1d311cc02cd5c33cd6c9d3c4a.scope/container/memory.events
Oct 02 13:07:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-40d47f06d3cc13b120d53ce4bbdb594b43b7269c9402f6560b191b21cbd2d602-merged.mount: Deactivated successfully.
Oct 02 13:07:47 compute-0 podman[398034]: 2025-10-02 13:07:47.586554478 +0000 UTC m=+0.231149823 container remove 7e5d03054bae03482e282c1e83b7ca14bfd1c2f1d311cc02cd5c33cd6c9d3c4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_neumann, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:07:47 compute-0 systemd[1]: libpod-conmon-7e5d03054bae03482e282c1e83b7ca14bfd1c2f1d311cc02cd5c33cd6c9d3c4a.scope: Deactivated successfully.
Oct 02 13:07:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:07:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:47.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:07:47 compute-0 podman[398072]: 2025-10-02 13:07:47.762200676 +0000 UTC m=+0.041850613 container create 8904fb837e66865af0c860c0fc659dd9b36479187f8636384985901a38788174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 13:07:47 compute-0 systemd[1]: Started libpod-conmon-8904fb837e66865af0c860c0fc659dd9b36479187f8636384985901a38788174.scope.
Oct 02 13:07:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:07:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19e1f80a9e6766326c6f2dd331724707eda162d8a28465a2c3cef88e39b35549/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:07:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19e1f80a9e6766326c6f2dd331724707eda162d8a28465a2c3cef88e39b35549/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:07:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19e1f80a9e6766326c6f2dd331724707eda162d8a28465a2c3cef88e39b35549/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:07:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19e1f80a9e6766326c6f2dd331724707eda162d8a28465a2c3cef88e39b35549/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:07:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19e1f80a9e6766326c6f2dd331724707eda162d8a28465a2c3cef88e39b35549/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:07:47 compute-0 podman[398072]: 2025-10-02 13:07:47.83799 +0000 UTC m=+0.117639957 container init 8904fb837e66865af0c860c0fc659dd9b36479187f8636384985901a38788174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 13:07:47 compute-0 podman[398072]: 2025-10-02 13:07:47.744070788 +0000 UTC m=+0.023720745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:07:47 compute-0 podman[398072]: 2025-10-02 13:07:47.848199567 +0000 UTC m=+0.127849504 container start 8904fb837e66865af0c860c0fc659dd9b36479187f8636384985901a38788174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Oct 02 13:07:47 compute-0 podman[398072]: 2025-10-02 13:07:47.851331353 +0000 UTC m=+0.130981290 container attach 8904fb837e66865af0c860c0fc659dd9b36479187f8636384985901a38788174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 13:07:48 compute-0 interesting_bhaskara[398088]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:07:48 compute-0 interesting_bhaskara[398088]: --> relative data size: 1.0
Oct 02 13:07:48 compute-0 interesting_bhaskara[398088]: --> All data devices are unavailable
Oct 02 13:07:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3277: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 67 KiB/s rd, 12 KiB/s wr, 33 op/s
Oct 02 13:07:48 compute-0 systemd[1]: libpod-8904fb837e66865af0c860c0fc659dd9b36479187f8636384985901a38788174.scope: Deactivated successfully.
Oct 02 13:07:48 compute-0 podman[398072]: 2025-10-02 13:07:48.647063934 +0000 UTC m=+0.926713871 container died 8904fb837e66865af0c860c0fc659dd9b36479187f8636384985901a38788174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhaskara, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:07:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-19e1f80a9e6766326c6f2dd331724707eda162d8a28465a2c3cef88e39b35549-merged.mount: Deactivated successfully.
Oct 02 13:07:48 compute-0 podman[398072]: 2025-10-02 13:07:48.701938711 +0000 UTC m=+0.981588668 container remove 8904fb837e66865af0c860c0fc659dd9b36479187f8636384985901a38788174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhaskara, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 13:07:48 compute-0 systemd[1]: libpod-conmon-8904fb837e66865af0c860c0fc659dd9b36479187f8636384985901a38788174.scope: Deactivated successfully.
Oct 02 13:07:48 compute-0 sudo[397970]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:48 compute-0 sudo[398118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:48 compute-0 sudo[398118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:48 compute-0 sudo[398118]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:48 compute-0 sudo[398143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:07:48 compute-0 sudo[398143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:48 compute-0 sudo[398143]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:48 compute-0 sudo[398168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:48 compute-0 sudo[398168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:48 compute-0 sudo[398168]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:48 compute-0 sudo[398193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 13:07:48 compute-0 sudo[398193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:49 compute-0 nova_compute[257802]: 2025-10-02 13:07:49.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:49 compute-0 podman[398217]: 2025-10-02 13:07:49.122571158 +0000 UTC m=+0.129290290 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:07:49 compute-0 nova_compute[257802]: 2025-10-02 13:07:49.129 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:49 compute-0 nova_compute[257802]: 2025-10-02 13:07:49.129 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:49 compute-0 nova_compute[257802]: 2025-10-02 13:07:49.129 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:49 compute-0 nova_compute[257802]: 2025-10-02 13:07:49.129 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:07:49 compute-0 nova_compute[257802]: 2025-10-02 13:07:49.130 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:07:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:49.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:49 compute-0 podman[398301]: 2025-10-02 13:07:49.309297325 +0000 UTC m=+0.039042286 container create 00d5ec2de298d72a4e0ce560291904d7e100a4f0ab29b4cc222df9c6557b32f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 13:07:49 compute-0 systemd[1]: Started libpod-conmon-00d5ec2de298d72a4e0ce560291904d7e100a4f0ab29b4cc222df9c6557b32f3.scope.
Oct 02 13:07:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:07:49 compute-0 nova_compute[257802]: 2025-10-02 13:07:49.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:49 compute-0 podman[398301]: 2025-10-02 13:07:49.379690658 +0000 UTC m=+0.109435629 container init 00d5ec2de298d72a4e0ce560291904d7e100a4f0ab29b4cc222df9c6557b32f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 13:07:49 compute-0 podman[398301]: 2025-10-02 13:07:49.388711516 +0000 UTC m=+0.118456477 container start 00d5ec2de298d72a4e0ce560291904d7e100a4f0ab29b4cc222df9c6557b32f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_faraday, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 13:07:49 compute-0 podman[398301]: 2025-10-02 13:07:49.295368738 +0000 UTC m=+0.025113729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:07:49 compute-0 podman[398301]: 2025-10-02 13:07:49.392072357 +0000 UTC m=+0.121817338 container attach 00d5ec2de298d72a4e0ce560291904d7e100a4f0ab29b4cc222df9c6557b32f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_faraday, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 13:07:49 compute-0 wizardly_faraday[398315]: 167 167
Oct 02 13:07:49 compute-0 systemd[1]: libpod-00d5ec2de298d72a4e0ce560291904d7e100a4f0ab29b4cc222df9c6557b32f3.scope: Deactivated successfully.
Oct 02 13:07:49 compute-0 conmon[398315]: conmon 00d5ec2de298d72a4e0c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-00d5ec2de298d72a4e0ce560291904d7e100a4f0ab29b4cc222df9c6557b32f3.scope/container/memory.events
Oct 02 13:07:49 compute-0 podman[398301]: 2025-10-02 13:07:49.395320336 +0000 UTC m=+0.125065297 container died 00d5ec2de298d72a4e0ce560291904d7e100a4f0ab29b4cc222df9c6557b32f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_faraday, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:07:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-01da21bf2befe5a5895f79d8b4d71f84d153f9cd87e2c87199ffa250c01d9e74-merged.mount: Deactivated successfully.
Oct 02 13:07:49 compute-0 podman[398301]: 2025-10-02 13:07:49.4289714 +0000 UTC m=+0.158716361 container remove 00d5ec2de298d72a4e0ce560291904d7e100a4f0ab29b4cc222df9c6557b32f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_faraday, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:07:49 compute-0 systemd[1]: libpod-conmon-00d5ec2de298d72a4e0ce560291904d7e100a4f0ab29b4cc222df9c6557b32f3.scope: Deactivated successfully.
Oct 02 13:07:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:07:49 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1218818825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:49 compute-0 nova_compute[257802]: 2025-10-02 13:07:49.567 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:07:49 compute-0 podman[398341]: 2025-10-02 13:07:49.590361645 +0000 UTC m=+0.042286254 container create f079e67a097e35ea004774e9b3403bc5eb66c143d9724ccaf9d118e4f391af75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 13:07:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:49.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:49 compute-0 systemd[1]: Started libpod-conmon-f079e67a097e35ea004774e9b3403bc5eb66c143d9724ccaf9d118e4f391af75.scope.
Oct 02 13:07:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9ea57d27bb30acf26c87c0f5d6970e44bd3c5da31f8deb676efeb2ee80e2465/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9ea57d27bb30acf26c87c0f5d6970e44bd3c5da31f8deb676efeb2ee80e2465/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9ea57d27bb30acf26c87c0f5d6970e44bd3c5da31f8deb676efeb2ee80e2465/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9ea57d27bb30acf26c87c0f5d6970e44bd3c5da31f8deb676efeb2ee80e2465/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:07:49 compute-0 podman[398341]: 2025-10-02 13:07:49.57240097 +0000 UTC m=+0.024325609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:07:49 compute-0 podman[398341]: 2025-10-02 13:07:49.673045125 +0000 UTC m=+0.124969764 container init f079e67a097e35ea004774e9b3403bc5eb66c143d9724ccaf9d118e4f391af75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 13:07:49 compute-0 podman[398341]: 2025-10-02 13:07:49.680960946 +0000 UTC m=+0.132885555 container start f079e67a097e35ea004774e9b3403bc5eb66c143d9724ccaf9d118e4f391af75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 13:07:49 compute-0 podman[398341]: 2025-10-02 13:07:49.684054592 +0000 UTC m=+0.135979201 container attach f079e67a097e35ea004774e9b3403bc5eb66c143d9724ccaf9d118e4f391af75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 13:07:49 compute-0 ceph-mon[73607]: pgmap v3277: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 67 KiB/s rd, 12 KiB/s wr, 33 op/s
Oct 02 13:07:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1218818825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:49 compute-0 nova_compute[257802]: 2025-10-02 13:07:49.747 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:07:49 compute-0 nova_compute[257802]: 2025-10-02 13:07:49.750 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4196MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:07:49 compute-0 nova_compute[257802]: 2025-10-02 13:07:49.750 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:49 compute-0 nova_compute[257802]: 2025-10-02 13:07:49.751 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:50 compute-0 nova_compute[257802]: 2025-10-02 13:07:50.034 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:07:50 compute-0 nova_compute[257802]: 2025-10-02 13:07:50.034 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:07:50 compute-0 nova_compute[257802]: 2025-10-02 13:07:50.052 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:07:50 compute-0 nova_compute[257802]: 2025-10-02 13:07:50.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:50 compute-0 crazy_buck[398361]: {
Oct 02 13:07:50 compute-0 crazy_buck[398361]:     "1": [
Oct 02 13:07:50 compute-0 crazy_buck[398361]:         {
Oct 02 13:07:50 compute-0 crazy_buck[398361]:             "devices": [
Oct 02 13:07:50 compute-0 crazy_buck[398361]:                 "/dev/loop3"
Oct 02 13:07:50 compute-0 crazy_buck[398361]:             ],
Oct 02 13:07:50 compute-0 crazy_buck[398361]:             "lv_name": "ceph_lv0",
Oct 02 13:07:50 compute-0 crazy_buck[398361]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:07:50 compute-0 crazy_buck[398361]:             "lv_size": "7511998464",
Oct 02 13:07:50 compute-0 crazy_buck[398361]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:07:50 compute-0 crazy_buck[398361]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:07:50 compute-0 crazy_buck[398361]:             "name": "ceph_lv0",
Oct 02 13:07:50 compute-0 crazy_buck[398361]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:07:50 compute-0 crazy_buck[398361]:             "tags": {
Oct 02 13:07:50 compute-0 crazy_buck[398361]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:07:50 compute-0 crazy_buck[398361]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:07:50 compute-0 crazy_buck[398361]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:07:50 compute-0 crazy_buck[398361]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:07:50 compute-0 crazy_buck[398361]:                 "ceph.cluster_name": "ceph",
Oct 02 13:07:50 compute-0 crazy_buck[398361]:                 "ceph.crush_device_class": "",
Oct 02 13:07:50 compute-0 crazy_buck[398361]:                 "ceph.encrypted": "0",
Oct 02 13:07:50 compute-0 crazy_buck[398361]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:07:50 compute-0 crazy_buck[398361]:                 "ceph.osd_id": "1",
Oct 02 13:07:50 compute-0 crazy_buck[398361]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:07:50 compute-0 crazy_buck[398361]:                 "ceph.type": "block",
Oct 02 13:07:50 compute-0 crazy_buck[398361]:                 "ceph.vdo": "0"
Oct 02 13:07:50 compute-0 crazy_buck[398361]:             },
Oct 02 13:07:50 compute-0 crazy_buck[398361]:             "type": "block",
Oct 02 13:07:50 compute-0 crazy_buck[398361]:             "vg_name": "ceph_vg0"
Oct 02 13:07:50 compute-0 crazy_buck[398361]:         }
Oct 02 13:07:50 compute-0 crazy_buck[398361]:     ]
Oct 02 13:07:50 compute-0 crazy_buck[398361]: }
Oct 02 13:07:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:07:50 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3577601925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:50 compute-0 systemd[1]: libpod-f079e67a097e35ea004774e9b3403bc5eb66c143d9724ccaf9d118e4f391af75.scope: Deactivated successfully.
Oct 02 13:07:50 compute-0 podman[398341]: 2025-10-02 13:07:50.480682104 +0000 UTC m=+0.932606713 container died f079e67a097e35ea004774e9b3403bc5eb66c143d9724ccaf9d118e4f391af75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_buck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:07:50 compute-0 nova_compute[257802]: 2025-10-02 13:07:50.483 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:07:50 compute-0 nova_compute[257802]: 2025-10-02 13:07:50.490 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:07:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9ea57d27bb30acf26c87c0f5d6970e44bd3c5da31f8deb676efeb2ee80e2465-merged.mount: Deactivated successfully.
Oct 02 13:07:50 compute-0 nova_compute[257802]: 2025-10-02 13:07:50.520 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:07:50 compute-0 podman[398341]: 2025-10-02 13:07:50.534871905 +0000 UTC m=+0.986796514 container remove f079e67a097e35ea004774e9b3403bc5eb66c143d9724ccaf9d118e4f391af75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_buck, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:07:50 compute-0 systemd[1]: libpod-conmon-f079e67a097e35ea004774e9b3403bc5eb66c143d9724ccaf9d118e4f391af75.scope: Deactivated successfully.
Oct 02 13:07:50 compute-0 nova_compute[257802]: 2025-10-02 13:07:50.545 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:07:50 compute-0 nova_compute[257802]: 2025-10-02 13:07:50.546 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:50 compute-0 sudo[398193]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3278: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 12 KiB/s wr, 29 op/s
Oct 02 13:07:50 compute-0 sudo[398404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:50 compute-0 sudo[398404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:50 compute-0 sudo[398404]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:50 compute-0 sudo[398429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:07:50 compute-0 sudo[398429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:50 compute-0 sudo[398429]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:50 compute-0 sudo[398454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:50 compute-0 sudo[398454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:50 compute-0 sudo[398454]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3577601925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:50 compute-0 sudo[398480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 13:07:50 compute-0 sudo[398480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:51 compute-0 podman[398544]: 2025-10-02 13:07:51.098834989 +0000 UTC m=+0.041210478 container create d694136f255cf1d4e8c00c3c7ad6cb926f6e0c8f0542c0f2cf6b1e1ee75d525a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 13:07:51 compute-0 systemd[1]: Started libpod-conmon-d694136f255cf1d4e8c00c3c7ad6cb926f6e0c8f0542c0f2cf6b1e1ee75d525a.scope.
Oct 02 13:07:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:51.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:07:51 compute-0 podman[398544]: 2025-10-02 13:07:51.07861814 +0000 UTC m=+0.020993659 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:07:51 compute-0 podman[398544]: 2025-10-02 13:07:51.179699195 +0000 UTC m=+0.122074704 container init d694136f255cf1d4e8c00c3c7ad6cb926f6e0c8f0542c0f2cf6b1e1ee75d525a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:07:51 compute-0 podman[398544]: 2025-10-02 13:07:51.185590987 +0000 UTC m=+0.127966476 container start d694136f255cf1d4e8c00c3c7ad6cb926f6e0c8f0542c0f2cf6b1e1ee75d525a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct 02 13:07:51 compute-0 podman[398544]: 2025-10-02 13:07:51.188655671 +0000 UTC m=+0.131031160 container attach d694136f255cf1d4e8c00c3c7ad6cb926f6e0c8f0542c0f2cf6b1e1ee75d525a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_meninsky, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Oct 02 13:07:51 compute-0 upbeat_meninsky[398561]: 167 167
Oct 02 13:07:51 compute-0 systemd[1]: libpod-d694136f255cf1d4e8c00c3c7ad6cb926f6e0c8f0542c0f2cf6b1e1ee75d525a.scope: Deactivated successfully.
Oct 02 13:07:51 compute-0 podman[398544]: 2025-10-02 13:07:51.190345752 +0000 UTC m=+0.132721241 container died d694136f255cf1d4e8c00c3c7ad6cb926f6e0c8f0542c0f2cf6b1e1ee75d525a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_meninsky, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:07:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-12a4243d267f92527b449105abdffe26f7727089898021d33501feb0097623ad-merged.mount: Deactivated successfully.
Oct 02 13:07:51 compute-0 podman[398544]: 2025-10-02 13:07:51.225887303 +0000 UTC m=+0.168262792 container remove d694136f255cf1d4e8c00c3c7ad6cb926f6e0c8f0542c0f2cf6b1e1ee75d525a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_meninsky, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 13:07:51 compute-0 systemd[1]: libpod-conmon-d694136f255cf1d4e8c00c3c7ad6cb926f6e0c8f0542c0f2cf6b1e1ee75d525a.scope: Deactivated successfully.
Oct 02 13:07:51 compute-0 podman[398586]: 2025-10-02 13:07:51.392681178 +0000 UTC m=+0.038960824 container create 5c7727446bd0b5019b24d63acf5463e54693fcd0192179a5b856450a3acd9a48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:07:51 compute-0 systemd[1]: Started libpod-conmon-5c7727446bd0b5019b24d63acf5463e54693fcd0192179a5b856450a3acd9a48.scope.
Oct 02 13:07:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d6c52386dc7b66c052a24089105e2696dcfac7110ada2221eb21b8da71f762e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d6c52386dc7b66c052a24089105e2696dcfac7110ada2221eb21b8da71f762e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d6c52386dc7b66c052a24089105e2696dcfac7110ada2221eb21b8da71f762e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d6c52386dc7b66c052a24089105e2696dcfac7110ada2221eb21b8da71f762e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:07:51 compute-0 podman[398586]: 2025-10-02 13:07:51.373318039 +0000 UTC m=+0.019597665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:07:51 compute-0 podman[398586]: 2025-10-02 13:07:51.47958694 +0000 UTC m=+0.125866556 container init 5c7727446bd0b5019b24d63acf5463e54693fcd0192179a5b856450a3acd9a48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:07:51 compute-0 podman[398586]: 2025-10-02 13:07:51.485012501 +0000 UTC m=+0.131292117 container start 5c7727446bd0b5019b24d63acf5463e54693fcd0192179a5b856450a3acd9a48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ptolemy, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 13:07:51 compute-0 podman[398586]: 2025-10-02 13:07:51.487961023 +0000 UTC m=+0.134240629 container attach 5c7727446bd0b5019b24d63acf5463e54693fcd0192179a5b856450a3acd9a48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ptolemy, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:07:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:07:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:51.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:07:51 compute-0 ceph-mon[73607]: pgmap v3278: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 12 KiB/s wr, 29 op/s
Oct 02 13:07:52 compute-0 zealous_ptolemy[398601]: {
Oct 02 13:07:52 compute-0 zealous_ptolemy[398601]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 13:07:52 compute-0 zealous_ptolemy[398601]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:07:52 compute-0 zealous_ptolemy[398601]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:07:52 compute-0 zealous_ptolemy[398601]:         "osd_id": 1,
Oct 02 13:07:52 compute-0 zealous_ptolemy[398601]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:07:52 compute-0 zealous_ptolemy[398601]:         "type": "bluestore"
Oct 02 13:07:52 compute-0 zealous_ptolemy[398601]:     }
Oct 02 13:07:52 compute-0 zealous_ptolemy[398601]: }
Oct 02 13:07:52 compute-0 systemd[1]: libpod-5c7727446bd0b5019b24d63acf5463e54693fcd0192179a5b856450a3acd9a48.scope: Deactivated successfully.
Oct 02 13:07:52 compute-0 podman[398586]: 2025-10-02 13:07:52.405539371 +0000 UTC m=+1.051818977 container died 5c7727446bd0b5019b24d63acf5463e54693fcd0192179a5b856450a3acd9a48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ptolemy, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:07:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d6c52386dc7b66c052a24089105e2696dcfac7110ada2221eb21b8da71f762e-merged.mount: Deactivated successfully.
Oct 02 13:07:52 compute-0 podman[398586]: 2025-10-02 13:07:52.462917839 +0000 UTC m=+1.109197445 container remove 5c7727446bd0b5019b24d63acf5463e54693fcd0192179a5b856450a3acd9a48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ptolemy, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 13:07:52 compute-0 systemd[1]: libpod-conmon-5c7727446bd0b5019b24d63acf5463e54693fcd0192179a5b856450a3acd9a48.scope: Deactivated successfully.
Oct 02 13:07:52 compute-0 sudo[398480]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:07:52 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:07:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:07:52 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:07:52 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 97e72e31-2808-4d1b-9fd6-5b190e380c01 does not exist
Oct 02 13:07:52 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 7552769f-d571-4e36-8b81-accc9a89edb8 does not exist
Oct 02 13:07:52 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev aa52c7a9-2d0d-474d-b7b9-7d6504d9a0f1 does not exist
Oct 02 13:07:52 compute-0 sudo[398636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:52 compute-0 sudo[398636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:52 compute-0 sudo[398636]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3279: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 12 KiB/s wr, 29 op/s
Oct 02 13:07:52 compute-0 sudo[398661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:07:52 compute-0 sudo[398661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:52 compute-0 sudo[398661]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:07:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:53.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:07:53 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:07:53 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:07:53 compute-0 ceph-mon[73607]: pgmap v3279: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 12 KiB/s wr, 29 op/s
Oct 02 13:07:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:07:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:53.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:07:54 compute-0 nova_compute[257802]: 2025-10-02 13:07:54.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3280: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 12 KiB/s wr, 29 op/s
Oct 02 13:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:07:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:07:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:55.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:55 compute-0 nova_compute[257802]: 2025-10-02 13:07:55.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:07:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:55.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:07:55 compute-0 ceph-mon[73607]: pgmap v3280: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 12 KiB/s wr, 29 op/s
Oct 02 13:07:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3753809649' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:07:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3753809649' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:07:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3281: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 852 B/s wr, 16 op/s
Oct 02 13:07:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:07:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:57.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:07:57 compute-0 nova_compute[257802]: 2025-10-02 13:07:57.547 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:57.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:57 compute-0 ceph-mon[73607]: pgmap v3281: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 852 B/s wr, 16 op/s
Oct 02 13:07:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3282: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 938 B/s rd, 0 B/s wr, 2 op/s
Oct 02 13:07:59 compute-0 ceph-mon[73607]: pgmap v3282: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 938 B/s rd, 0 B/s wr, 2 op/s
Oct 02 13:07:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:07:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:59.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:07:59 compute-0 nova_compute[257802]: 2025-10-02 13:07:59.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:07:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:59.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:00 compute-0 nova_compute[257802]: 2025-10-02 13:08:00.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3283: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:08:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:08:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:01.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:08:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:01.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:01 compute-0 ceph-mon[73607]: pgmap v3283: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:08:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3284: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:08:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:03.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:03.635 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=79, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=78) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:08:03 compute-0 nova_compute[257802]: 2025-10-02 13:08:03.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:03.636 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:08:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:03.636 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '79'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:08:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:03.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:03 compute-0 ceph-mon[73607]: pgmap v3284: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:08:03 compute-0 nova_compute[257802]: 2025-10-02 13:08:03.714 2 DEBUG oslo_concurrency.lockutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:03 compute-0 nova_compute[257802]: 2025-10-02 13:08:03.714 2 DEBUG oslo_concurrency.lockutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:03 compute-0 nova_compute[257802]: 2025-10-02 13:08:03.729 2 DEBUG nova.compute.manager [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:08:03 compute-0 nova_compute[257802]: 2025-10-02 13:08:03.798 2 DEBUG oslo_concurrency.lockutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:03 compute-0 nova_compute[257802]: 2025-10-02 13:08:03.798 2 DEBUG oslo_concurrency.lockutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:03 compute-0 nova_compute[257802]: 2025-10-02 13:08:03.805 2 DEBUG nova.virt.hardware [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:08:03 compute-0 nova_compute[257802]: 2025-10-02 13:08:03.805 2 INFO nova.compute.claims [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Claim successful on node compute-0.ctlplane.example.com
Oct 02 13:08:03 compute-0 nova_compute[257802]: 2025-10-02 13:08:03.929 2 DEBUG oslo_concurrency.processutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:08:04 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1746399394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:04 compute-0 nova_compute[257802]: 2025-10-02 13:08:04.361 2 DEBUG oslo_concurrency.processutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:04 compute-0 nova_compute[257802]: 2025-10-02 13:08:04.367 2 DEBUG nova.compute.provider_tree [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:08:04 compute-0 nova_compute[257802]: 2025-10-02 13:08:04.427 2 DEBUG nova.scheduler.client.report [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:08:04 compute-0 nova_compute[257802]: 2025-10-02 13:08:04.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:04 compute-0 nova_compute[257802]: 2025-10-02 13:08:04.464 2 DEBUG oslo_concurrency.lockutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:04 compute-0 nova_compute[257802]: 2025-10-02 13:08:04.465 2 DEBUG nova.compute.manager [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:08:04 compute-0 nova_compute[257802]: 2025-10-02 13:08:04.531 2 DEBUG nova.compute.manager [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:08:04 compute-0 nova_compute[257802]: 2025-10-02 13:08:04.532 2 DEBUG nova.network.neutron [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:08:04 compute-0 nova_compute[257802]: 2025-10-02 13:08:04.565 2 INFO nova.virt.libvirt.driver [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:08:04 compute-0 nova_compute[257802]: 2025-10-02 13:08:04.582 2 DEBUG nova.compute.manager [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:08:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3285: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:08:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2014427386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1746399394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:04 compute-0 nova_compute[257802]: 2025-10-02 13:08:04.761 2 DEBUG nova.policy [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ffe4d737e4414fb3a3e358f8ca3f3e1e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '08e102ae48244af2ab448a2e1ff757df', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:08:04 compute-0 nova_compute[257802]: 2025-10-02 13:08:04.826 2 DEBUG nova.compute.manager [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:08:04 compute-0 nova_compute[257802]: 2025-10-02 13:08:04.827 2 DEBUG nova.virt.libvirt.driver [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:08:04 compute-0 nova_compute[257802]: 2025-10-02 13:08:04.827 2 INFO nova.virt.libvirt.driver [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Creating image(s)
Oct 02 13:08:04 compute-0 nova_compute[257802]: 2025-10-02 13:08:04.850 2 DEBUG nova.storage.rbd_utils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 6df9bd3b-6218-4859-aba9-bfbedf2b8f18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:08:04 compute-0 nova_compute[257802]: 2025-10-02 13:08:04.874 2 DEBUG nova.storage.rbd_utils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 6df9bd3b-6218-4859-aba9-bfbedf2b8f18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:08:04 compute-0 nova_compute[257802]: 2025-10-02 13:08:04.900 2 DEBUG nova.storage.rbd_utils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 6df9bd3b-6218-4859-aba9-bfbedf2b8f18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:08:04 compute-0 nova_compute[257802]: 2025-10-02 13:08:04.904 2 DEBUG oslo_concurrency.processutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:04 compute-0 nova_compute[257802]: 2025-10-02 13:08:04.985 2 DEBUG oslo_concurrency.processutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:04 compute-0 nova_compute[257802]: 2025-10-02 13:08:04.986 2 DEBUG oslo_concurrency.lockutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:04 compute-0 nova_compute[257802]: 2025-10-02 13:08:04.987 2 DEBUG oslo_concurrency.lockutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:04 compute-0 nova_compute[257802]: 2025-10-02 13:08:04.987 2 DEBUG oslo_concurrency.lockutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:05 compute-0 nova_compute[257802]: 2025-10-02 13:08:05.016 2 DEBUG nova.storage.rbd_utils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 6df9bd3b-6218-4859-aba9-bfbedf2b8f18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:08:05 compute-0 nova_compute[257802]: 2025-10-02 13:08:05.021 2 DEBUG oslo_concurrency.processutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 6df9bd3b-6218-4859-aba9-bfbedf2b8f18_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:05.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:05 compute-0 nova_compute[257802]: 2025-10-02 13:08:05.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:05 compute-0 nova_compute[257802]: 2025-10-02 13:08:05.352 2 DEBUG oslo_concurrency.processutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 6df9bd3b-6218-4859-aba9-bfbedf2b8f18_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.331s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:05 compute-0 nova_compute[257802]: 2025-10-02 13:08:05.428 2 DEBUG nova.storage.rbd_utils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] resizing rbd image 6df9bd3b-6218-4859-aba9-bfbedf2b8f18_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:08:05 compute-0 sudo[398863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:05 compute-0 sudo[398863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:05 compute-0 sudo[398863]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:05 compute-0 sudo[398888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:05 compute-0 sudo[398888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:05 compute-0 sudo[398888]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:05.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:05 compute-0 nova_compute[257802]: 2025-10-02 13:08:05.682 2 DEBUG nova.objects.instance [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'migration_context' on Instance uuid 6df9bd3b-6218-4859-aba9-bfbedf2b8f18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:08:05 compute-0 nova_compute[257802]: 2025-10-02 13:08:05.702 2 DEBUG nova.virt.libvirt.driver [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:08:05 compute-0 nova_compute[257802]: 2025-10-02 13:08:05.702 2 DEBUG nova.virt.libvirt.driver [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Ensure instance console log exists: /var/lib/nova/instances/6df9bd3b-6218-4859-aba9-bfbedf2b8f18/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:08:05 compute-0 nova_compute[257802]: 2025-10-02 13:08:05.703 2 DEBUG oslo_concurrency.lockutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:05 compute-0 nova_compute[257802]: 2025-10-02 13:08:05.703 2 DEBUG oslo_concurrency.lockutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:05 compute-0 nova_compute[257802]: 2025-10-02 13:08:05.703 2 DEBUG oslo_concurrency.lockutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:05 compute-0 ceph-mon[73607]: pgmap v3285: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Oct 02 13:08:05 compute-0 nova_compute[257802]: 2025-10-02 13:08:05.880 2 DEBUG nova.network.neutron [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Successfully created port: 06105eee-1ccc-4976-9ef2-84b4765d9a79 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:08:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3286: 305 pgs: 305 active+clean; 143 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 959 KiB/s wr, 2 op/s
Oct 02 13:08:06 compute-0 nova_compute[257802]: 2025-10-02 13:08:06.802 2 DEBUG nova.network.neutron [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Successfully updated port: 06105eee-1ccc-4976-9ef2-84b4765d9a79 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:08:06 compute-0 nova_compute[257802]: 2025-10-02 13:08:06.817 2 DEBUG oslo_concurrency.lockutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "refresh_cache-6df9bd3b-6218-4859-aba9-bfbedf2b8f18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:08:06 compute-0 nova_compute[257802]: 2025-10-02 13:08:06.818 2 DEBUG oslo_concurrency.lockutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquired lock "refresh_cache-6df9bd3b-6218-4859-aba9-bfbedf2b8f18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:08:06 compute-0 nova_compute[257802]: 2025-10-02 13:08:06.818 2 DEBUG nova.network.neutron [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:08:06 compute-0 nova_compute[257802]: 2025-10-02 13:08:06.947 2 DEBUG nova.compute.manager [req-dd6f50d6-4de4-4184-9182-8b28ef71607f req-2ddba90c-ba8b-49e7-baeb-340d6f7441cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Received event network-changed-06105eee-1ccc-4976-9ef2-84b4765d9a79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:08:06 compute-0 nova_compute[257802]: 2025-10-02 13:08:06.947 2 DEBUG nova.compute.manager [req-dd6f50d6-4de4-4184-9182-8b28ef71607f req-2ddba90c-ba8b-49e7-baeb-340d6f7441cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Refreshing instance network info cache due to event network-changed-06105eee-1ccc-4976-9ef2-84b4765d9a79. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:08:06 compute-0 nova_compute[257802]: 2025-10-02 13:08:06.947 2 DEBUG oslo_concurrency.lockutils [req-dd6f50d6-4de4-4184-9182-8b28ef71607f req-2ddba90c-ba8b-49e7-baeb-340d6f7441cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-6df9bd3b-6218-4859-aba9-bfbedf2b8f18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:08:07 compute-0 nova_compute[257802]: 2025-10-02 13:08:07.033 2 DEBUG nova.network.neutron [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:08:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:07.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:07.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:07 compute-0 nova_compute[257802]: 2025-10-02 13:08:07.746 2 DEBUG nova.network.neutron [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Updating instance_info_cache with network_info: [{"id": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "address": "fa:16:3e:3d:d7:ed", "network": {"id": "41354ccc-5b80-451f-9510-2c3d0788ecf7", "bridge": "br-int", "label": "tempest-network-smoke--166998389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06105eee-1c", "ovs_interfaceid": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:08:07 compute-0 nova_compute[257802]: 2025-10-02 13:08:07.770 2 DEBUG oslo_concurrency.lockutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Releasing lock "refresh_cache-6df9bd3b-6218-4859-aba9-bfbedf2b8f18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:08:07 compute-0 nova_compute[257802]: 2025-10-02 13:08:07.771 2 DEBUG nova.compute.manager [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Instance network_info: |[{"id": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "address": "fa:16:3e:3d:d7:ed", "network": {"id": "41354ccc-5b80-451f-9510-2c3d0788ecf7", "bridge": "br-int", "label": "tempest-network-smoke--166998389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06105eee-1c", "ovs_interfaceid": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:08:07 compute-0 nova_compute[257802]: 2025-10-02 13:08:07.771 2 DEBUG oslo_concurrency.lockutils [req-dd6f50d6-4de4-4184-9182-8b28ef71607f req-2ddba90c-ba8b-49e7-baeb-340d6f7441cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-6df9bd3b-6218-4859-aba9-bfbedf2b8f18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:08:07 compute-0 nova_compute[257802]: 2025-10-02 13:08:07.771 2 DEBUG nova.network.neutron [req-dd6f50d6-4de4-4184-9182-8b28ef71607f req-2ddba90c-ba8b-49e7-baeb-340d6f7441cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Refreshing network info cache for port 06105eee-1ccc-4976-9ef2-84b4765d9a79 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:08:07 compute-0 nova_compute[257802]: 2025-10-02 13:08:07.774 2 DEBUG nova.virt.libvirt.driver [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Start _get_guest_xml network_info=[{"id": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "address": "fa:16:3e:3d:d7:ed", "network": {"id": "41354ccc-5b80-451f-9510-2c3d0788ecf7", "bridge": "br-int", "label": "tempest-network-smoke--166998389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06105eee-1c", "ovs_interfaceid": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:08:07 compute-0 nova_compute[257802]: 2025-10-02 13:08:07.777 2 WARNING nova.virt.libvirt.driver [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:08:07 compute-0 nova_compute[257802]: 2025-10-02 13:08:07.782 2 DEBUG nova.virt.libvirt.host [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:08:07 compute-0 nova_compute[257802]: 2025-10-02 13:08:07.783 2 DEBUG nova.virt.libvirt.host [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:08:07 compute-0 nova_compute[257802]: 2025-10-02 13:08:07.786 2 DEBUG nova.virt.libvirt.host [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:08:07 compute-0 nova_compute[257802]: 2025-10-02 13:08:07.787 2 DEBUG nova.virt.libvirt.host [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:08:07 compute-0 nova_compute[257802]: 2025-10-02 13:08:07.788 2 DEBUG nova.virt.libvirt.driver [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:08:07 compute-0 ceph-mon[73607]: pgmap v3286: 305 pgs: 305 active+clean; 143 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 959 KiB/s wr, 2 op/s
Oct 02 13:08:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3638219158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:07 compute-0 nova_compute[257802]: 2025-10-02 13:08:07.789 2 DEBUG nova.virt.hardware [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:08:07 compute-0 nova_compute[257802]: 2025-10-02 13:08:07.790 2 DEBUG nova.virt.hardware [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:08:07 compute-0 nova_compute[257802]: 2025-10-02 13:08:07.790 2 DEBUG nova.virt.hardware [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:08:07 compute-0 nova_compute[257802]: 2025-10-02 13:08:07.790 2 DEBUG nova.virt.hardware [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:08:07 compute-0 nova_compute[257802]: 2025-10-02 13:08:07.791 2 DEBUG nova.virt.hardware [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:08:07 compute-0 nova_compute[257802]: 2025-10-02 13:08:07.791 2 DEBUG nova.virt.hardware [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:08:07 compute-0 nova_compute[257802]: 2025-10-02 13:08:07.791 2 DEBUG nova.virt.hardware [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:08:07 compute-0 nova_compute[257802]: 2025-10-02 13:08:07.792 2 DEBUG nova.virt.hardware [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:08:07 compute-0 nova_compute[257802]: 2025-10-02 13:08:07.792 2 DEBUG nova.virt.hardware [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:08:07 compute-0 nova_compute[257802]: 2025-10-02 13:08:07.792 2 DEBUG nova.virt.hardware [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:08:07 compute-0 nova_compute[257802]: 2025-10-02 13:08:07.793 2 DEBUG nova.virt.hardware [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:08:07 compute-0 nova_compute[257802]: 2025-10-02 13:08:07.796 2 DEBUG oslo_concurrency.processutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:08:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2503413134' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.234 2 DEBUG oslo_concurrency.processutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.262 2 DEBUG nova.storage.rbd_utils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 6df9bd3b-6218-4859-aba9-bfbedf2b8f18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.266 2 DEBUG oslo_concurrency.processutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3287: 305 pgs: 305 active+clean; 165 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 1.6 MiB/s wr, 40 op/s
Oct 02 13:08:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:08:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4257454176' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.730 2 DEBUG oslo_concurrency.processutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.732 2 DEBUG nova.virt.libvirt.vif [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:08:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-510358772',display_name='tempest-TestNetworkAdvancedServerOps-server-510358772',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-510358772',id=209,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNQLYFaK6VzMZ4VXSjIB28DDIVujtRqXaihQsQXdMB+5rY8DD1rQi9P2Y1PwrrLaViv1jTWp23s6ULfYTCXiXfqd1pOSru0GKVbLKUc8HJqBymXrreI8FngJNgN4inx/nA==',key_name='tempest-TestNetworkAdvancedServerOps-178932410',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-r5xxc3cq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:08:04Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=6df9bd3b-6218-4859-aba9-bfbedf2b8f18,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "address": "fa:16:3e:3d:d7:ed", "network": {"id": "41354ccc-5b80-451f-9510-2c3d0788ecf7", "bridge": "br-int", "label": "tempest-network-smoke--166998389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06105eee-1c", "ovs_interfaceid": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.733 2 DEBUG nova.network.os_vif_util [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "address": "fa:16:3e:3d:d7:ed", "network": {"id": "41354ccc-5b80-451f-9510-2c3d0788ecf7", "bridge": "br-int", "label": "tempest-network-smoke--166998389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06105eee-1c", "ovs_interfaceid": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.734 2 DEBUG nova.network.os_vif_util [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:d7:ed,bridge_name='br-int',has_traffic_filtering=True,id=06105eee-1ccc-4976-9ef2-84b4765d9a79,network=Network(41354ccc-5b80-451f-9510-2c3d0788ecf7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06105eee-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.735 2 DEBUG nova.objects.instance [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'pci_devices' on Instance uuid 6df9bd3b-6218-4859-aba9-bfbedf2b8f18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.750 2 DEBUG nova.virt.libvirt.driver [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:08:08 compute-0 nova_compute[257802]:   <uuid>6df9bd3b-6218-4859-aba9-bfbedf2b8f18</uuid>
Oct 02 13:08:08 compute-0 nova_compute[257802]:   <name>instance-000000d1</name>
Oct 02 13:08:08 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 13:08:08 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 13:08:08 compute-0 nova_compute[257802]:   <metadata>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:08:08 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-510358772</nova:name>
Oct 02 13:08:08 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 13:08:07</nova:creationTime>
Oct 02 13:08:08 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 13:08:08 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 13:08:08 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 13:08:08 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 13:08:08 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:08:08 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:08:08 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 13:08:08 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 13:08:08 compute-0 nova_compute[257802]:         <nova:user uuid="ffe4d737e4414fb3a3e358f8ca3f3e1e">tempest-TestNetworkAdvancedServerOps-1527846432-project-member</nova:user>
Oct 02 13:08:08 compute-0 nova_compute[257802]:         <nova:project uuid="08e102ae48244af2ab448a2e1ff757df">tempest-TestNetworkAdvancedServerOps-1527846432</nova:project>
Oct 02 13:08:08 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 13:08:08 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 13:08:08 compute-0 nova_compute[257802]:         <nova:port uuid="06105eee-1ccc-4976-9ef2-84b4765d9a79">
Oct 02 13:08:08 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 13:08:08 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 13:08:08 compute-0 nova_compute[257802]:   </metadata>
Oct 02 13:08:08 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <system>
Oct 02 13:08:08 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:08:08 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:08:08 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:08:08 compute-0 nova_compute[257802]:       <entry name="serial">6df9bd3b-6218-4859-aba9-bfbedf2b8f18</entry>
Oct 02 13:08:08 compute-0 nova_compute[257802]:       <entry name="uuid">6df9bd3b-6218-4859-aba9-bfbedf2b8f18</entry>
Oct 02 13:08:08 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     </system>
Oct 02 13:08:08 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 13:08:08 compute-0 nova_compute[257802]:   <os>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:   </os>
Oct 02 13:08:08 compute-0 nova_compute[257802]:   <features>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <apic/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:   </features>
Oct 02 13:08:08 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:   </clock>
Oct 02 13:08:08 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:   </cpu>
Oct 02 13:08:08 compute-0 nova_compute[257802]:   <devices>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 13:08:08 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/6df9bd3b-6218-4859-aba9-bfbedf2b8f18_disk">
Oct 02 13:08:08 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:       </source>
Oct 02 13:08:08 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 13:08:08 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:       </auth>
Oct 02 13:08:08 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     </disk>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 13:08:08 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/6df9bd3b-6218-4859-aba9-bfbedf2b8f18_disk.config">
Oct 02 13:08:08 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:       </source>
Oct 02 13:08:08 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 13:08:08 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:       </auth>
Oct 02 13:08:08 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     </disk>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 13:08:08 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:3d:d7:ed"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:       <target dev="tap06105eee-1c"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     </interface>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 13:08:08 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/6df9bd3b-6218-4859-aba9-bfbedf2b8f18/console.log" append="off"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     </serial>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <video>
Oct 02 13:08:08 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     </video>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 13:08:08 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     </rng>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 13:08:08 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 13:08:08 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 13:08:08 compute-0 nova_compute[257802]:   </devices>
Oct 02 13:08:08 compute-0 nova_compute[257802]: </domain>
Oct 02 13:08:08 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.752 2 DEBUG nova.compute.manager [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Preparing to wait for external event network-vif-plugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.752 2 DEBUG oslo_concurrency.lockutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.753 2 DEBUG oslo_concurrency.lockutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.753 2 DEBUG oslo_concurrency.lockutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.754 2 DEBUG nova.virt.libvirt.vif [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:08:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-510358772',display_name='tempest-TestNetworkAdvancedServerOps-server-510358772',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-510358772',id=209,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNQLYFaK6VzMZ4VXSjIB28DDIVujtRqXaihQsQXdMB+5rY8DD1rQi9P2Y1PwrrLaViv1jTWp23s6ULfYTCXiXfqd1pOSru0GKVbLKUc8HJqBymXrreI8FngJNgN4inx/nA==',key_name='tempest-TestNetworkAdvancedServerOps-178932410',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-r5xxc3cq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:08:04Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=6df9bd3b-6218-4859-aba9-bfbedf2b8f18,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "address": "fa:16:3e:3d:d7:ed", "network": {"id": "41354ccc-5b80-451f-9510-2c3d0788ecf7", "bridge": "br-int", "label": "tempest-network-smoke--166998389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06105eee-1c", "ovs_interfaceid": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.754 2 DEBUG nova.network.os_vif_util [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "address": "fa:16:3e:3d:d7:ed", "network": {"id": "41354ccc-5b80-451f-9510-2c3d0788ecf7", "bridge": "br-int", "label": "tempest-network-smoke--166998389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06105eee-1c", "ovs_interfaceid": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.755 2 DEBUG nova.network.os_vif_util [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:d7:ed,bridge_name='br-int',has_traffic_filtering=True,id=06105eee-1ccc-4976-9ef2-84b4765d9a79,network=Network(41354ccc-5b80-451f-9510-2c3d0788ecf7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06105eee-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.755 2 DEBUG os_vif [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:d7:ed,bridge_name='br-int',has_traffic_filtering=True,id=06105eee-1ccc-4976-9ef2-84b4765d9a79,network=Network(41354ccc-5b80-451f-9510-2c3d0788ecf7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06105eee-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.756 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.757 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.760 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap06105eee-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.761 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap06105eee-1c, col_values=(('external_ids', {'iface-id': '06105eee-1ccc-4976-9ef2-84b4765d9a79', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:d7:ed', 'vm-uuid': '6df9bd3b-6218-4859-aba9-bfbedf2b8f18'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:08 compute-0 NetworkManager[44987]: <info>  [1759410488.7634] manager: (tap06105eee-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/422)
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.769 2 INFO os_vif [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:d7:ed,bridge_name='br-int',has_traffic_filtering=True,id=06105eee-1ccc-4976-9ef2-84b4765d9a79,network=Network(41354ccc-5b80-451f-9510-2c3d0788ecf7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06105eee-1c')
Oct 02 13:08:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1576955163' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2503413134' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4257454176' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.816 2 DEBUG nova.virt.libvirt.driver [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.816 2 DEBUG nova.virt.libvirt.driver [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.817 2 DEBUG nova.virt.libvirt.driver [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] No VIF found with MAC fa:16:3e:3d:d7:ed, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.817 2 INFO nova.virt.libvirt.driver [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Using config drive
Oct 02 13:08:08 compute-0 nova_compute[257802]: 2025-10-02 13:08:08.845 2 DEBUG nova.storage.rbd_utils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 6df9bd3b-6218-4859-aba9-bfbedf2b8f18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:08:09 compute-0 nova_compute[257802]: 2025-10-02 13:08:09.092 2 DEBUG nova.network.neutron [req-dd6f50d6-4de4-4184-9182-8b28ef71607f req-2ddba90c-ba8b-49e7-baeb-340d6f7441cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Updated VIF entry in instance network info cache for port 06105eee-1ccc-4976-9ef2-84b4765d9a79. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:08:09 compute-0 nova_compute[257802]: 2025-10-02 13:08:09.092 2 DEBUG nova.network.neutron [req-dd6f50d6-4de4-4184-9182-8b28ef71607f req-2ddba90c-ba8b-49e7-baeb-340d6f7441cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Updating instance_info_cache with network_info: [{"id": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "address": "fa:16:3e:3d:d7:ed", "network": {"id": "41354ccc-5b80-451f-9510-2c3d0788ecf7", "bridge": "br-int", "label": "tempest-network-smoke--166998389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06105eee-1c", "ovs_interfaceid": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:08:09 compute-0 nova_compute[257802]: 2025-10-02 13:08:09.110 2 DEBUG oslo_concurrency.lockutils [req-dd6f50d6-4de4-4184-9182-8b28ef71607f req-2ddba90c-ba8b-49e7-baeb-340d6f7441cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-6df9bd3b-6218-4859-aba9-bfbedf2b8f18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:08:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:09.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:09 compute-0 nova_compute[257802]: 2025-10-02 13:08:09.223 2 INFO nova.virt.libvirt.driver [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Creating config drive at /var/lib/nova/instances/6df9bd3b-6218-4859-aba9-bfbedf2b8f18/disk.config
Oct 02 13:08:09 compute-0 nova_compute[257802]: 2025-10-02 13:08:09.228 2 DEBUG oslo_concurrency.processutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6df9bd3b-6218-4859-aba9-bfbedf2b8f18/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvz85b20k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:09 compute-0 nova_compute[257802]: 2025-10-02 13:08:09.362 2 DEBUG oslo_concurrency.processutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6df9bd3b-6218-4859-aba9-bfbedf2b8f18/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvz85b20k" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:09 compute-0 nova_compute[257802]: 2025-10-02 13:08:09.395 2 DEBUG nova.storage.rbd_utils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 6df9bd3b-6218-4859-aba9-bfbedf2b8f18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:08:09 compute-0 nova_compute[257802]: 2025-10-02 13:08:09.400 2 DEBUG oslo_concurrency.processutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6df9bd3b-6218-4859-aba9-bfbedf2b8f18/disk.config 6df9bd3b-6218-4859-aba9-bfbedf2b8f18_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:09 compute-0 nova_compute[257802]: 2025-10-02 13:08:09.563 2 DEBUG oslo_concurrency.processutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6df9bd3b-6218-4859-aba9-bfbedf2b8f18/disk.config 6df9bd3b-6218-4859-aba9-bfbedf2b8f18_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:09 compute-0 nova_compute[257802]: 2025-10-02 13:08:09.563 2 INFO nova.virt.libvirt.driver [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Deleting local config drive /var/lib/nova/instances/6df9bd3b-6218-4859-aba9-bfbedf2b8f18/disk.config because it was imported into RBD.
Oct 02 13:08:09 compute-0 kernel: tap06105eee-1c: entered promiscuous mode
Oct 02 13:08:09 compute-0 NetworkManager[44987]: <info>  [1759410489.6126] manager: (tap06105eee-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/423)
Oct 02 13:08:09 compute-0 nova_compute[257802]: 2025-10-02 13:08:09.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:09 compute-0 ovn_controller[148183]: 2025-10-02T13:08:09Z|00942|binding|INFO|Claiming lport 06105eee-1ccc-4976-9ef2-84b4765d9a79 for this chassis.
Oct 02 13:08:09 compute-0 ovn_controller[148183]: 2025-10-02T13:08:09Z|00943|binding|INFO|06105eee-1ccc-4976-9ef2-84b4765d9a79: Claiming fa:16:3e:3d:d7:ed 10.100.0.11
Oct 02 13:08:09 compute-0 nova_compute[257802]: 2025-10-02 13:08:09.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:09 compute-0 nova_compute[257802]: 2025-10-02 13:08:09.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:09 compute-0 systemd-udevd[399066]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:08:09 compute-0 systemd-machined[211836]: New machine qemu-101-instance-000000d1.
Oct 02 13:08:09 compute-0 NetworkManager[44987]: <info>  [1759410489.6522] device (tap06105eee-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:08:09 compute-0 NetworkManager[44987]: <info>  [1759410489.6532] device (tap06105eee-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:08:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:08:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:09.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:08:09 compute-0 systemd[1]: Started Virtual Machine qemu-101-instance-000000d1.
Oct 02 13:08:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:09.679 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:d7:ed 10.100.0.11'], port_security=['fa:16:3e:3d:d7:ed 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '6df9bd3b-6218-4859-aba9-bfbedf2b8f18', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41354ccc-5b80-451f-9510-2c3d0788ecf7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08e102ae48244af2ab448a2e1ff757df', 'neutron:revision_number': '2', 'neutron:security_group_ids': '337a5b6a-7697-4b02-8d14-65af2374695f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f431a9f-5f6a-4914-ae7c-c00e97c25630, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=06105eee-1ccc-4976-9ef2-84b4765d9a79) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:08:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:09.681 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 06105eee-1ccc-4976-9ef2-84b4765d9a79 in datapath 41354ccc-5b80-451f-9510-2c3d0788ecf7 bound to our chassis
Oct 02 13:08:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:09.682 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41354ccc-5b80-451f-9510-2c3d0788ecf7
Oct 02 13:08:09 compute-0 nova_compute[257802]: 2025-10-02 13:08:09.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:09 compute-0 ovn_controller[148183]: 2025-10-02T13:08:09Z|00944|binding|INFO|Setting lport 06105eee-1ccc-4976-9ef2-84b4765d9a79 ovn-installed in OVS
Oct 02 13:08:09 compute-0 ovn_controller[148183]: 2025-10-02T13:08:09Z|00945|binding|INFO|Setting lport 06105eee-1ccc-4976-9ef2-84b4765d9a79 up in Southbound
Oct 02 13:08:09 compute-0 nova_compute[257802]: 2025-10-02 13:08:09.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:09.694 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[43b20e13-4f72-4f6b-a5e1-e8296846a28d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:09.694 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap41354ccc-51 in ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:08:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:09.696 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap41354ccc-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:08:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:09.696 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7b1cf11e-0f17-434e-8d74-1f32b1d6c5f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:09.696 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7b2fd90e-7510-493e-a2a4-377fe4108452]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:09.711 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[5aeeb1e7-4995-46b4-984b-ee8bbc23b8cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:09.739 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[87164641-7137-4a98-84fa-41561e0a5c9c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:09.767 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[8a1cf104-a2ee-45a3-935f-c50ed6b23c24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:09.772 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f233fb61-df9c-4203-906b-6a6e5aa40481]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:09 compute-0 NetworkManager[44987]: <info>  [1759410489.7749] manager: (tap41354ccc-50): new Veth device (/org/freedesktop/NetworkManager/Devices/424)
Oct 02 13:08:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:09.803 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[bfba7419-4a89-40f1-b8c7-0bd54f6f9122]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:09.808 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[f3ea5358-c6d8-4def-9559-ebbedee8772e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:09 compute-0 ceph-mon[73607]: pgmap v3287: 305 pgs: 305 active+clean; 165 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 1.6 MiB/s wr, 40 op/s
Oct 02 13:08:09 compute-0 NetworkManager[44987]: <info>  [1759410489.8353] device (tap41354ccc-50): carrier: link connected
Oct 02 13:08:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:09.843 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[04c55927-f81f-463b-a957-4b25c660763d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:09.859 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb1e305-19bb-4d89-9104-a28123cc20ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41354ccc-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:ea:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 282], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 855746, 'reachable_time': 29075, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399100, 'error': None, 'target': 'ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:09.881 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cd1ab842-aa15-425b-b671-a817bb860036]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0e:eabc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 855746, 'tstamp': 855746}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 399115, 'error': None, 'target': 'ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:09.904 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[29f4ec01-0c2f-41f5-a77a-a736dfa6de92]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41354ccc-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:ea:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 282], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 855746, 'reachable_time': 29075, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 399120, 'error': None, 'target': 'ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:09.935 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1c29876d-4b0b-44d9-8b71-c9953c95aa76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:09.995 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[33dc6b43-6126-4cb5-8435-254d5842b60e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:09.996 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41354ccc-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:08:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:09.996 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:08:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:09.996 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41354ccc-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:08:09 compute-0 nova_compute[257802]: 2025-10-02 13:08:09.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:09 compute-0 NetworkManager[44987]: <info>  [1759410489.9986] manager: (tap41354ccc-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/425)
Oct 02 13:08:09 compute-0 kernel: tap41354ccc-50: entered promiscuous mode
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:10.001 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41354ccc-50, col_values=(('external_ids', {'iface-id': '7cea9858-fead-45b4-8830-8edfb5209d69'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:08:10 compute-0 ovn_controller[148183]: 2025-10-02T13:08:10Z|00946|binding|INFO|Releasing lport 7cea9858-fead-45b4-8830-8edfb5209d69 from this chassis (sb_readonly=0)
Oct 02 13:08:10 compute-0 nova_compute[257802]: 2025-10-02 13:08:10.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:10 compute-0 nova_compute[257802]: 2025-10-02 13:08:10.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:10.017 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/41354ccc-5b80-451f-9510-2c3d0788ecf7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/41354ccc-5b80-451f-9510-2c3d0788ecf7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:10.017 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0cbb12-8210-457c-81b9-ba2abd2e7a6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:10.018 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]: global
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-41354ccc-5b80-451f-9510-2c3d0788ecf7
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/41354ccc-5b80-451f-9510-2c3d0788ecf7.pid.haproxy
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 41354ccc-5b80-451f-9510-2c3d0788ecf7
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:08:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:10.019 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7', 'env', 'PROCESS_TAG=haproxy-41354ccc-5b80-451f-9510-2c3d0788ecf7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/41354ccc-5b80-451f-9510-2c3d0788ecf7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:08:10 compute-0 nova_compute[257802]: 2025-10-02 13:08:10.077 2 DEBUG nova.compute.manager [req-9abf9658-53d4-4f99-a452-8b841afed763 req-642492db-294a-4fcc-977b-419f67d385a4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Received event network-vif-plugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:08:10 compute-0 nova_compute[257802]: 2025-10-02 13:08:10.078 2 DEBUG oslo_concurrency.lockutils [req-9abf9658-53d4-4f99-a452-8b841afed763 req-642492db-294a-4fcc-977b-419f67d385a4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:10 compute-0 nova_compute[257802]: 2025-10-02 13:08:10.079 2 DEBUG oslo_concurrency.lockutils [req-9abf9658-53d4-4f99-a452-8b841afed763 req-642492db-294a-4fcc-977b-419f67d385a4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:10 compute-0 nova_compute[257802]: 2025-10-02 13:08:10.079 2 DEBUG oslo_concurrency.lockutils [req-9abf9658-53d4-4f99-a452-8b841afed763 req-642492db-294a-4fcc-977b-419f67d385a4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:10 compute-0 nova_compute[257802]: 2025-10-02 13:08:10.079 2 DEBUG nova.compute.manager [req-9abf9658-53d4-4f99-a452-8b841afed763 req-642492db-294a-4fcc-977b-419f67d385a4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Processing event network-vif-plugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:08:10 compute-0 nova_compute[257802]: 2025-10-02 13:08:10.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:10 compute-0 podman[399177]: 2025-10-02 13:08:10.370759418 +0000 UTC m=+0.046112437 container create 12ef6280c667e1d2aab51761608c4acb442e70a2d73d454db72303cd21015ae2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:08:10 compute-0 systemd[1]: Started libpod-conmon-12ef6280c667e1d2aab51761608c4acb442e70a2d73d454db72303cd21015ae2.scope.
Oct 02 13:08:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:08:10 compute-0 nova_compute[257802]: 2025-10-02 13:08:10.423 2 DEBUG nova.compute.manager [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:08:10 compute-0 nova_compute[257802]: 2025-10-02 13:08:10.425 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759410490.424787, 6df9bd3b-6218-4859-aba9-bfbedf2b8f18 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:08:10 compute-0 nova_compute[257802]: 2025-10-02 13:08:10.425 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] VM Started (Lifecycle Event)
Oct 02 13:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed4fecb3c813daaa4ee89cfa51ee2d37a70851d64cba8fa76b0623cb777a966/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:08:10 compute-0 nova_compute[257802]: 2025-10-02 13:08:10.430 2 DEBUG nova.virt.libvirt.driver [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:08:10 compute-0 podman[399177]: 2025-10-02 13:08:10.438294142 +0000 UTC m=+0.113647181 container init 12ef6280c667e1d2aab51761608c4acb442e70a2d73d454db72303cd21015ae2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 13:08:10 compute-0 podman[399177]: 2025-10-02 13:08:10.345537657 +0000 UTC m=+0.020890696 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:08:10 compute-0 nova_compute[257802]: 2025-10-02 13:08:10.438 2 INFO nova.virt.libvirt.driver [-] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Instance spawned successfully.
Oct 02 13:08:10 compute-0 nova_compute[257802]: 2025-10-02 13:08:10.439 2 DEBUG nova.virt.libvirt.driver [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:08:10 compute-0 podman[399177]: 2025-10-02 13:08:10.445949527 +0000 UTC m=+0.121302546 container start 12ef6280c667e1d2aab51761608c4acb442e70a2d73d454db72303cd21015ae2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:08:10 compute-0 podman[399193]: 2025-10-02 13:08:10.465413118 +0000 UTC m=+0.060193887 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:08:10 compute-0 neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7[399205]: [NOTICE]   (399243) : New worker (399253) forked
Oct 02 13:08:10 compute-0 neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7[399205]: [NOTICE]   (399243) : Loading success.
Oct 02 13:08:10 compute-0 podman[399190]: 2025-10-02 13:08:10.489968843 +0000 UTC m=+0.086329510 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 13:08:10 compute-0 podman[399194]: 2025-10-02 13:08:10.489939392 +0000 UTC m=+0.079494144 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 13:08:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3288: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 44 KiB/s rd, 3.5 MiB/s wr, 66 op/s
Oct 02 13:08:10 compute-0 nova_compute[257802]: 2025-10-02 13:08:10.694 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:08:10 compute-0 nova_compute[257802]: 2025-10-02 13:08:10.701 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:08:10 compute-0 nova_compute[257802]: 2025-10-02 13:08:10.706 2 DEBUG nova.virt.libvirt.driver [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:08:10 compute-0 nova_compute[257802]: 2025-10-02 13:08:10.706 2 DEBUG nova.virt.libvirt.driver [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:08:10 compute-0 nova_compute[257802]: 2025-10-02 13:08:10.707 2 DEBUG nova.virt.libvirt.driver [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:08:10 compute-0 nova_compute[257802]: 2025-10-02 13:08:10.707 2 DEBUG nova.virt.libvirt.driver [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:08:10 compute-0 nova_compute[257802]: 2025-10-02 13:08:10.708 2 DEBUG nova.virt.libvirt.driver [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:08:10 compute-0 nova_compute[257802]: 2025-10-02 13:08:10.708 2 DEBUG nova.virt.libvirt.driver [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:08:10 compute-0 nova_compute[257802]: 2025-10-02 13:08:10.756 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:08:10 compute-0 nova_compute[257802]: 2025-10-02 13:08:10.757 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759410490.4249475, 6df9bd3b-6218-4859-aba9-bfbedf2b8f18 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:08:10 compute-0 nova_compute[257802]: 2025-10-02 13:08:10.757 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] VM Paused (Lifecycle Event)
Oct 02 13:08:10 compute-0 nova_compute[257802]: 2025-10-02 13:08:10.883 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:08:10 compute-0 nova_compute[257802]: 2025-10-02 13:08:10.887 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759410490.4303222, 6df9bd3b-6218-4859-aba9-bfbedf2b8f18 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:08:10 compute-0 nova_compute[257802]: 2025-10-02 13:08:10.887 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] VM Resumed (Lifecycle Event)
Oct 02 13:08:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:11 compute-0 nova_compute[257802]: 2025-10-02 13:08:11.024 2 INFO nova.compute.manager [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Took 6.20 seconds to spawn the instance on the hypervisor.
Oct 02 13:08:11 compute-0 nova_compute[257802]: 2025-10-02 13:08:11.025 2 DEBUG nova.compute.manager [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:08:11 compute-0 nova_compute[257802]: 2025-10-02 13:08:11.098 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:08:11 compute-0 nova_compute[257802]: 2025-10-02 13:08:11.105 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:08:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:11.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:11 compute-0 nova_compute[257802]: 2025-10-02 13:08:11.238 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:08:11 compute-0 nova_compute[257802]: 2025-10-02 13:08:11.298 2 INFO nova.compute.manager [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Took 7.52 seconds to build instance.
Oct 02 13:08:11 compute-0 nova_compute[257802]: 2025-10-02 13:08:11.463 2 DEBUG oslo_concurrency.lockutils [None req-2c80e2e0-05fa-47cc-8971-53007fceaf34 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:11.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:11 compute-0 ceph-mon[73607]: pgmap v3288: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 44 KiB/s rd, 3.5 MiB/s wr, 66 op/s
Oct 02 13:08:12 compute-0 nova_compute[257802]: 2025-10-02 13:08:12.283 2 DEBUG nova.compute.manager [req-fe4647df-5fcc-4b6c-8ab4-81eb794384e4 req-6af2ec85-80eb-44ab-98e3-fac6b35b85c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Received event network-vif-plugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:08:12 compute-0 nova_compute[257802]: 2025-10-02 13:08:12.283 2 DEBUG oslo_concurrency.lockutils [req-fe4647df-5fcc-4b6c-8ab4-81eb794384e4 req-6af2ec85-80eb-44ab-98e3-fac6b35b85c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:12 compute-0 nova_compute[257802]: 2025-10-02 13:08:12.284 2 DEBUG oslo_concurrency.lockutils [req-fe4647df-5fcc-4b6c-8ab4-81eb794384e4 req-6af2ec85-80eb-44ab-98e3-fac6b35b85c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:12 compute-0 nova_compute[257802]: 2025-10-02 13:08:12.284 2 DEBUG oslo_concurrency.lockutils [req-fe4647df-5fcc-4b6c-8ab4-81eb794384e4 req-6af2ec85-80eb-44ab-98e3-fac6b35b85c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:12 compute-0 nova_compute[257802]: 2025-10-02 13:08:12.284 2 DEBUG nova.compute.manager [req-fe4647df-5fcc-4b6c-8ab4-81eb794384e4 req-6af2ec85-80eb-44ab-98e3-fac6b35b85c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] No waiting events found dispatching network-vif-plugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:08:12 compute-0 nova_compute[257802]: 2025-10-02 13:08:12.284 2 WARNING nova.compute.manager [req-fe4647df-5fcc-4b6c-8ab4-81eb794384e4 req-6af2ec85-80eb-44ab-98e3-fac6b35b85c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Received unexpected event network-vif-plugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 for instance with vm_state active and task_state None.
Oct 02 13:08:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3289: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 44 KiB/s rd, 3.5 MiB/s wr, 66 op/s
Oct 02 13:08:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:08:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:08:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:08:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:08:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:08:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:08:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:13.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:13.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:13 compute-0 nova_compute[257802]: 2025-10-02 13:08:13.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:13 compute-0 ceph-mon[73607]: pgmap v3289: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 44 KiB/s rd, 3.5 MiB/s wr, 66 op/s
Oct 02 13:08:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3290: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 134 op/s
Oct 02 13:08:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:08:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:15.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:08:15 compute-0 nova_compute[257802]: 2025-10-02 13:08:15.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:15.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:15 compute-0 ceph-mon[73607]: pgmap v3290: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 134 op/s
Oct 02 13:08:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:16 compute-0 nova_compute[257802]: 2025-10-02 13:08:16.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:16 compute-0 NetworkManager[44987]: <info>  [1759410496.2740] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/426)
Oct 02 13:08:16 compute-0 NetworkManager[44987]: <info>  [1759410496.2754] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/427)
Oct 02 13:08:16 compute-0 nova_compute[257802]: 2025-10-02 13:08:16.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:16 compute-0 ovn_controller[148183]: 2025-10-02T13:08:16Z|00947|binding|INFO|Releasing lport 7cea9858-fead-45b4-8830-8edfb5209d69 from this chassis (sb_readonly=0)
Oct 02 13:08:16 compute-0 nova_compute[257802]: 2025-10-02 13:08:16.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3291: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Oct 02 13:08:17 compute-0 nova_compute[257802]: 2025-10-02 13:08:17.178 2 DEBUG nova.compute.manager [req-30d562bf-8654-4633-8d6e-ded2fb8d1737 req-f2d78905-3446-4e4f-bb0e-e90b4f08d3ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Received event network-changed-06105eee-1ccc-4976-9ef2-84b4765d9a79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:08:17 compute-0 nova_compute[257802]: 2025-10-02 13:08:17.178 2 DEBUG nova.compute.manager [req-30d562bf-8654-4633-8d6e-ded2fb8d1737 req-f2d78905-3446-4e4f-bb0e-e90b4f08d3ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Refreshing instance network info cache due to event network-changed-06105eee-1ccc-4976-9ef2-84b4765d9a79. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:08:17 compute-0 nova_compute[257802]: 2025-10-02 13:08:17.178 2 DEBUG oslo_concurrency.lockutils [req-30d562bf-8654-4633-8d6e-ded2fb8d1737 req-f2d78905-3446-4e4f-bb0e-e90b4f08d3ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-6df9bd3b-6218-4859-aba9-bfbedf2b8f18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:08:17 compute-0 nova_compute[257802]: 2025-10-02 13:08:17.178 2 DEBUG oslo_concurrency.lockutils [req-30d562bf-8654-4633-8d6e-ded2fb8d1737 req-f2d78905-3446-4e4f-bb0e-e90b4f08d3ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-6df9bd3b-6218-4859-aba9-bfbedf2b8f18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:08:17 compute-0 nova_compute[257802]: 2025-10-02 13:08:17.179 2 DEBUG nova.network.neutron [req-30d562bf-8654-4633-8d6e-ded2fb8d1737 req-f2d78905-3446-4e4f-bb0e-e90b4f08d3ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Refreshing network info cache for port 06105eee-1ccc-4976-9ef2-84b4765d9a79 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:08:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:17.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:17.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:18 compute-0 ceph-mon[73607]: pgmap v3291: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Oct 02 13:08:18 compute-0 nova_compute[257802]: 2025-10-02 13:08:18.416 2 DEBUG nova.network.neutron [req-30d562bf-8654-4633-8d6e-ded2fb8d1737 req-f2d78905-3446-4e4f-bb0e-e90b4f08d3ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Updated VIF entry in instance network info cache for port 06105eee-1ccc-4976-9ef2-84b4765d9a79. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:08:18 compute-0 nova_compute[257802]: 2025-10-02 13:08:18.416 2 DEBUG nova.network.neutron [req-30d562bf-8654-4633-8d6e-ded2fb8d1737 req-f2d78905-3446-4e4f-bb0e-e90b4f08d3ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Updating instance_info_cache with network_info: [{"id": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "address": "fa:16:3e:3d:d7:ed", "network": {"id": "41354ccc-5b80-451f-9510-2c3d0788ecf7", "bridge": "br-int", "label": "tempest-network-smoke--166998389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06105eee-1c", "ovs_interfaceid": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:08:18 compute-0 nova_compute[257802]: 2025-10-02 13:08:18.545 2 DEBUG oslo_concurrency.lockutils [req-30d562bf-8654-4633-8d6e-ded2fb8d1737 req-f2d78905-3446-4e4f-bb0e-e90b4f08d3ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-6df9bd3b-6218-4859-aba9-bfbedf2b8f18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:08:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3292: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.6 MiB/s wr, 199 op/s
Oct 02 13:08:18 compute-0 nova_compute[257802]: 2025-10-02 13:08:18.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:19 compute-0 ceph-mon[73607]: pgmap v3292: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.6 MiB/s wr, 199 op/s
Oct 02 13:08:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:19.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:08:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:19.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:08:19 compute-0 podman[399268]: 2025-10-02 13:08:19.949984285 +0000 UTC m=+0.082672201 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Oct 02 13:08:20 compute-0 nova_compute[257802]: 2025-10-02 13:08:20.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3293: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.0 MiB/s wr, 161 op/s
Oct 02 13:08:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:21.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:08:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:21.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:08:21 compute-0 ceph-mon[73607]: pgmap v3293: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.0 MiB/s wr, 161 op/s
Oct 02 13:08:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3294: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 24 KiB/s wr, 135 op/s
Oct 02 13:08:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:23.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:23.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:23 compute-0 ovn_controller[148183]: 2025-10-02T13:08:23Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3d:d7:ed 10.100.0.11
Oct 02 13:08:23 compute-0 ovn_controller[148183]: 2025-10-02T13:08:23Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:d7:ed 10.100.0.11
Oct 02 13:08:23 compute-0 ceph-mon[73607]: pgmap v3294: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 24 KiB/s wr, 135 op/s
Oct 02 13:08:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/329128017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:23 compute-0 nova_compute[257802]: 2025-10-02 13:08:23.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3295: 305 pgs: 305 active+clean; 245 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.0 MiB/s wr, 203 op/s
Oct 02 13:08:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1850533019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:25.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:25 compute-0 nova_compute[257802]: 2025-10-02 13:08:25.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:25.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:25 compute-0 sudo[399297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:25 compute-0 sudo[399297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:25 compute-0 sudo[399297]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:25 compute-0 ceph-mon[73607]: pgmap v3295: 305 pgs: 305 active+clean; 245 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.0 MiB/s wr, 203 op/s
Oct 02 13:08:25 compute-0 sudo[399322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:25 compute-0 sudo[399322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:25 compute-0 sudo[399322]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:26 compute-0 nova_compute[257802]: 2025-10-02 13:08:26.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:08:26 compute-0 nova_compute[257802]: 2025-10-02 13:08:26.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:08:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3296: 305 pgs: 305 active+clean; 273 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.2 MiB/s wr, 176 op/s
Oct 02 13:08:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:26.990 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:26.991 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:26.991 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:27.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:27.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:27 compute-0 ceph-mon[73607]: pgmap v3296: 305 pgs: 305 active+clean; 273 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.2 MiB/s wr, 176 op/s
Oct 02 13:08:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3297: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 531 KiB/s rd, 4.3 MiB/s wr, 121 op/s
Oct 02 13:08:28 compute-0 nova_compute[257802]: 2025-10-02 13:08:28.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:29.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:29 compute-0 ceph-mon[73607]: pgmap v3297: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 531 KiB/s rd, 4.3 MiB/s wr, 121 op/s
Oct 02 13:08:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:29.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:30 compute-0 nova_compute[257802]: 2025-10-02 13:08:30.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:30 compute-0 nova_compute[257802]: 2025-10-02 13:08:30.454 2 INFO nova.compute.manager [None req-1ad33977-7924-4e5b-a228-76eb58473a8a ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Get console output
Oct 02 13:08:30 compute-0 nova_compute[257802]: 2025-10-02 13:08:30.459 20794 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 13:08:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3298: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 531 KiB/s rd, 4.3 MiB/s wr, 121 op/s
Oct 02 13:08:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:31 compute-0 nova_compute[257802]: 2025-10-02 13:08:31.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:08:31 compute-0 nova_compute[257802]: 2025-10-02 13:08:31.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:08:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:08:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:31.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:08:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:08:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:31.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:08:31 compute-0 ceph-mon[73607]: pgmap v3298: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 531 KiB/s rd, 4.3 MiB/s wr, 121 op/s
Oct 02 13:08:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3299: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 531 KiB/s rd, 4.3 MiB/s wr, 121 op/s
Oct 02 13:08:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:33.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:33.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:33 compute-0 ceph-mon[73607]: pgmap v3299: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 531 KiB/s rd, 4.3 MiB/s wr, 121 op/s
Oct 02 13:08:33 compute-0 nova_compute[257802]: 2025-10-02 13:08:33.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:34 compute-0 nova_compute[257802]: 2025-10-02 13:08:34.065 2 INFO nova.compute.manager [None req-12ee08c0-2706-4a6f-a28e-68b67af45eff ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Get console output
Oct 02 13:08:34 compute-0 nova_compute[257802]: 2025-10-02 13:08:34.070 20794 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 13:08:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3300: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 531 KiB/s rd, 4.3 MiB/s wr, 123 op/s
Oct 02 13:08:35 compute-0 nova_compute[257802]: 2025-10-02 13:08:35.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:08:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:35.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:35 compute-0 nova_compute[257802]: 2025-10-02 13:08:35.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:35.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e402 do_prune osdmap full prune enabled
Oct 02 13:08:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e403 e403: 3 total, 3 up, 3 in
Oct 02 13:08:35 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e403: 3 total, 3 up, 3 in
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:08:35.793901) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410515793957, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 1872, "num_deletes": 252, "total_data_size": 3297400, "memory_usage": 3361632, "flush_reason": "Manual Compaction"}
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Oct 02 13:08:35 compute-0 ceph-mon[73607]: pgmap v3300: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 531 KiB/s rd, 4.3 MiB/s wr, 123 op/s
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410515812012, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 3236545, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71350, "largest_seqno": 73221, "table_properties": {"data_size": 3227977, "index_size": 5253, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17984, "raw_average_key_size": 20, "raw_value_size": 3210826, "raw_average_value_size": 3656, "num_data_blocks": 228, "num_entries": 878, "num_filter_entries": 878, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410336, "oldest_key_time": 1759410336, "file_creation_time": 1759410515, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 18140 microseconds, and 6659 cpu microseconds.
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:08:35.812046) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 3236545 bytes OK
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:08:35.812064) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:08:35.816797) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:08:35.816811) EVENT_LOG_v1 {"time_micros": 1759410515816807, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:08:35.816843) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 3289684, prev total WAL file size 3289684, number of live WAL files 2.
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:08:35.817923) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(3160KB)], [164(10210KB)]
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410515817951, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 13691868, "oldest_snapshot_seqno": -1}
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 9954 keys, 11731637 bytes, temperature: kUnknown
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410515871927, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 11731637, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11669094, "index_size": 36574, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24901, "raw_key_size": 262441, "raw_average_key_size": 26, "raw_value_size": 11496519, "raw_average_value_size": 1154, "num_data_blocks": 1386, "num_entries": 9954, "num_filter_entries": 9954, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759410515, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:08:35.872236) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 11731637 bytes
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:08:35.873868) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 253.1 rd, 216.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 10.0 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(7.9) write-amplify(3.6) OK, records in: 10479, records dropped: 525 output_compression: NoCompression
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:08:35.873889) EVENT_LOG_v1 {"time_micros": 1759410515873878, "job": 102, "event": "compaction_finished", "compaction_time_micros": 54106, "compaction_time_cpu_micros": 26566, "output_level": 6, "num_output_files": 1, "total_output_size": 11731637, "num_input_records": 10479, "num_output_records": 9954, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410515874742, "job": 102, "event": "table_file_deletion", "file_number": 166}
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410515877079, "job": 102, "event": "table_file_deletion", "file_number": 164}
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:08:35.817796) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:08:35.877113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:08:35.877117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:08:35.877245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:08:35.877248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:08:35 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:08:35.877250) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:08:35 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:36 compute-0 nova_compute[257802]: 2025-10-02 13:08:36.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:08:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3302: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 83 KiB/s wr, 18 op/s
Oct 02 13:08:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e403 do_prune osdmap full prune enabled
Oct 02 13:08:36 compute-0 ceph-mon[73607]: osdmap e403: 3 total, 3 up, 3 in
Oct 02 13:08:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e404 e404: 3 total, 3 up, 3 in
Oct 02 13:08:36 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e404: 3 total, 3 up, 3 in
Oct 02 13:08:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:08:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:37.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:08:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:37.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:37 compute-0 nova_compute[257802]: 2025-10-02 13:08:37.760 2 DEBUG oslo_concurrency.lockutils [None req-082413b9-8d2a-4e7e-864e-9b289da5daf9 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Acquiring lock "refresh_cache-6df9bd3b-6218-4859-aba9-bfbedf2b8f18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:08:37 compute-0 nova_compute[257802]: 2025-10-02 13:08:37.760 2 DEBUG oslo_concurrency.lockutils [None req-082413b9-8d2a-4e7e-864e-9b289da5daf9 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Acquired lock "refresh_cache-6df9bd3b-6218-4859-aba9-bfbedf2b8f18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:08:37 compute-0 nova_compute[257802]: 2025-10-02 13:08:37.761 2 DEBUG nova.network.neutron [None req-082413b9-8d2a-4e7e-864e-9b289da5daf9 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:08:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e404 do_prune osdmap full prune enabled
Oct 02 13:08:37 compute-0 ceph-mon[73607]: pgmap v3302: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 83 KiB/s wr, 18 op/s
Oct 02 13:08:37 compute-0 ceph-mon[73607]: osdmap e404: 3 total, 3 up, 3 in
Oct 02 13:08:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3513952075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:37 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e405 e405: 3 total, 3 up, 3 in
Oct 02 13:08:37 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e405: 3 total, 3 up, 3 in
Oct 02 13:08:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3305: 305 pgs: 305 active+clean; 317 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 5.0 MiB/s wr, 70 op/s
Oct 02 13:08:38 compute-0 nova_compute[257802]: 2025-10-02 13:08:38.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:38 compute-0 ceph-mon[73607]: osdmap e405: 3 total, 3 up, 3 in
Oct 02 13:08:39 compute-0 nova_compute[257802]: 2025-10-02 13:08:39.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:08:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:39.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:39.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:39 compute-0 nova_compute[257802]: 2025-10-02 13:08:39.764 2 DEBUG nova.network.neutron [None req-082413b9-8d2a-4e7e-864e-9b289da5daf9 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Updating instance_info_cache with network_info: [{"id": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "address": "fa:16:3e:3d:d7:ed", "network": {"id": "41354ccc-5b80-451f-9510-2c3d0788ecf7", "bridge": "br-int", "label": "tempest-network-smoke--166998389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06105eee-1c", "ovs_interfaceid": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:08:39 compute-0 nova_compute[257802]: 2025-10-02 13:08:39.900 2 DEBUG oslo_concurrency.lockutils [None req-082413b9-8d2a-4e7e-864e-9b289da5daf9 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Releasing lock "refresh_cache-6df9bd3b-6218-4859-aba9-bfbedf2b8f18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:08:39 compute-0 ceph-mon[73607]: pgmap v3305: 305 pgs: 305 active+clean; 317 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 5.0 MiB/s wr, 70 op/s
Oct 02 13:08:40 compute-0 nova_compute[257802]: 2025-10-02 13:08:40.092 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:08:40 compute-0 nova_compute[257802]: 2025-10-02 13:08:40.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:40 compute-0 nova_compute[257802]: 2025-10-02 13:08:40.291 2 DEBUG nova.virt.libvirt.driver [None req-082413b9-8d2a-4e7e-864e-9b289da5daf9 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Oct 02 13:08:40 compute-0 nova_compute[257802]: 2025-10-02 13:08:40.292 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-082413b9-8d2a-4e7e-864e-9b289da5daf9 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Creating file /var/lib/nova/instances/6df9bd3b-6218-4859-aba9-bfbedf2b8f18/c3d0f921c1fe48a1b4a85c806eda3e79.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Oct 02 13:08:40 compute-0 nova_compute[257802]: 2025-10-02 13:08:40.292 2 DEBUG oslo_concurrency.processutils [None req-082413b9-8d2a-4e7e-864e-9b289da5daf9 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/6df9bd3b-6218-4859-aba9-bfbedf2b8f18/c3d0f921c1fe48a1b4a85c806eda3e79.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3306: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 156 op/s
Oct 02 13:08:40 compute-0 nova_compute[257802]: 2025-10-02 13:08:40.823 2 DEBUG oslo_concurrency.processutils [None req-082413b9-8d2a-4e7e-864e-9b289da5daf9 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/6df9bd3b-6218-4859-aba9-bfbedf2b8f18/c3d0f921c1fe48a1b4a85c806eda3e79.tmp" returned: 1 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:40 compute-0 nova_compute[257802]: 2025-10-02 13:08:40.824 2 DEBUG oslo_concurrency.processutils [None req-082413b9-8d2a-4e7e-864e-9b289da5daf9 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/6df9bd3b-6218-4859-aba9-bfbedf2b8f18/c3d0f921c1fe48a1b4a85c806eda3e79.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Oct 02 13:08:40 compute-0 nova_compute[257802]: 2025-10-02 13:08:40.824 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-082413b9-8d2a-4e7e-864e-9b289da5daf9 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Creating directory /var/lib/nova/instances/6df9bd3b-6218-4859-aba9-bfbedf2b8f18 on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Oct 02 13:08:40 compute-0 nova_compute[257802]: 2025-10-02 13:08:40.824 2 DEBUG oslo_concurrency.processutils [None req-082413b9-8d2a-4e7e-864e-9b289da5daf9 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/6df9bd3b-6218-4859-aba9-bfbedf2b8f18 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:40 compute-0 podman[399357]: 2025-10-02 13:08:40.923063198 +0000 UTC m=+0.051682141 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:08:40 compute-0 podman[399358]: 2025-10-02 13:08:40.930365925 +0000 UTC m=+0.056849486 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:08:40 compute-0 podman[399359]: 2025-10-02 13:08:40.933801268 +0000 UTC m=+0.053883205 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 02 13:08:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:40 compute-0 ceph-mon[73607]: pgmap v3306: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 156 op/s
Oct 02 13:08:41 compute-0 nova_compute[257802]: 2025-10-02 13:08:41.040 2 DEBUG oslo_concurrency.processutils [None req-082413b9-8d2a-4e7e-864e-9b289da5daf9 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/6df9bd3b-6218-4859-aba9-bfbedf2b8f18" returned: 0 in 0.216s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:41 compute-0 nova_compute[257802]: 2025-10-02 13:08:41.045 2 DEBUG nova.virt.libvirt.driver [None req-082413b9-8d2a-4e7e-864e-9b289da5daf9 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 13:08:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:41.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:41.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/726771028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:08:42
Oct 02 13:08:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:08:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:08:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['volumes', '.mgr', 'backups', 'vms', 'default.rgw.log', 'default.rgw.control', 'images', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta']
Oct 02 13:08:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:08:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3307: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.8 MiB/s rd, 6.8 MiB/s wr, 134 op/s
Oct 02 13:08:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:08:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:08:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:08:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:08:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:08:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:08:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:43.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:43 compute-0 ceph-mon[73607]: pgmap v3307: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.8 MiB/s rd, 6.8 MiB/s wr, 134 op/s
Oct 02 13:08:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:08:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:08:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:08:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:08:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:08:43 compute-0 kernel: tap06105eee-1c (unregistering): left promiscuous mode
Oct 02 13:08:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:43.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:43 compute-0 NetworkManager[44987]: <info>  [1759410523.7400] device (tap06105eee-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:08:43 compute-0 ovn_controller[148183]: 2025-10-02T13:08:43Z|00948|binding|INFO|Releasing lport 06105eee-1ccc-4976-9ef2-84b4765d9a79 from this chassis (sb_readonly=0)
Oct 02 13:08:43 compute-0 ovn_controller[148183]: 2025-10-02T13:08:43Z|00949|binding|INFO|Setting lport 06105eee-1ccc-4976-9ef2-84b4765d9a79 down in Southbound
Oct 02 13:08:43 compute-0 ovn_controller[148183]: 2025-10-02T13:08:43Z|00950|binding|INFO|Removing iface tap06105eee-1c ovn-installed in OVS
Oct 02 13:08:43 compute-0 nova_compute[257802]: 2025-10-02 13:08:43.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:43 compute-0 nova_compute[257802]: 2025-10-02 13:08:43.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:43 compute-0 nova_compute[257802]: 2025-10-02 13:08:43.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:43 compute-0 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d000000d1.scope: Deactivated successfully.
Oct 02 13:08:43 compute-0 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d000000d1.scope: Consumed 13.977s CPU time.
Oct 02 13:08:43 compute-0 systemd-machined[211836]: Machine qemu-101-instance-000000d1 terminated.
Oct 02 13:08:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:43.938 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:d7:ed 10.100.0.11'], port_security=['fa:16:3e:3d:d7:ed 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '6df9bd3b-6218-4859-aba9-bfbedf2b8f18', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41354ccc-5b80-451f-9510-2c3d0788ecf7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08e102ae48244af2ab448a2e1ff757df', 'neutron:revision_number': '4', 'neutron:security_group_ids': '337a5b6a-7697-4b02-8d14-65af2374695f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.225'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f431a9f-5f6a-4914-ae7c-c00e97c25630, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=06105eee-1ccc-4976-9ef2-84b4765d9a79) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:08:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:43.939 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 06105eee-1ccc-4976-9ef2-84b4765d9a79 in datapath 41354ccc-5b80-451f-9510-2c3d0788ecf7 unbound from our chassis
Oct 02 13:08:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:43.940 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 41354ccc-5b80-451f-9510-2c3d0788ecf7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:08:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:43.941 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6d15deac-9a13-4318-8e84-c836b0efedee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:43.942 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7 namespace which is not needed anymore
Oct 02 13:08:44 compute-0 nova_compute[257802]: 2025-10-02 13:08:44.065 2 INFO nova.virt.libvirt.driver [None req-082413b9-8d2a-4e7e-864e-9b289da5daf9 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Instance shutdown successfully after 3 seconds.
Oct 02 13:08:44 compute-0 nova_compute[257802]: 2025-10-02 13:08:44.073 2 INFO nova.virt.libvirt.driver [-] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Instance destroyed successfully.
Oct 02 13:08:44 compute-0 nova_compute[257802]: 2025-10-02 13:08:44.074 2 DEBUG nova.virt.libvirt.vif [None req-082413b9-8d2a-4e7e-864e-9b289da5daf9 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:08:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-510358772',display_name='tempest-TestNetworkAdvancedServerOps-server-510358772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-510358772',id=209,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNQLYFaK6VzMZ4VXSjIB28DDIVujtRqXaihQsQXdMB+5rY8DD1rQi9P2Y1PwrrLaViv1jTWp23s6ULfYTCXiXfqd1pOSru0GKVbLKUc8HJqBymXrreI8FngJNgN4inx/nA==',key_name='tempest-TestNetworkAdvancedServerOps-178932410',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:08:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-r5xxc3cq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:08:36Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=6df9bd3b-6218-4859-aba9-bfbedf2b8f18,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "address": "fa:16:3e:3d:d7:ed", "network": {"id": "41354ccc-5b80-451f-9510-2c3d0788ecf7", "bridge": "br-int", "label": "tempest-network-smoke--166998389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--166998389", "vif_mac": "fa:16:3e:3d:d7:ed"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06105eee-1c", "ovs_interfaceid": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:08:44 compute-0 nova_compute[257802]: 2025-10-02 13:08:44.074 2 DEBUG nova.network.os_vif_util [None req-082413b9-8d2a-4e7e-864e-9b289da5daf9 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Converting VIF {"id": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "address": "fa:16:3e:3d:d7:ed", "network": {"id": "41354ccc-5b80-451f-9510-2c3d0788ecf7", "bridge": "br-int", "label": "tempest-network-smoke--166998389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--166998389", "vif_mac": "fa:16:3e:3d:d7:ed"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06105eee-1c", "ovs_interfaceid": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:08:44 compute-0 nova_compute[257802]: 2025-10-02 13:08:44.075 2 DEBUG nova.network.os_vif_util [None req-082413b9-8d2a-4e7e-864e-9b289da5daf9 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3d:d7:ed,bridge_name='br-int',has_traffic_filtering=True,id=06105eee-1ccc-4976-9ef2-84b4765d9a79,network=Network(41354ccc-5b80-451f-9510-2c3d0788ecf7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06105eee-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:08:44 compute-0 nova_compute[257802]: 2025-10-02 13:08:44.076 2 DEBUG os_vif [None req-082413b9-8d2a-4e7e-864e-9b289da5daf9 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3d:d7:ed,bridge_name='br-int',has_traffic_filtering=True,id=06105eee-1ccc-4976-9ef2-84b4765d9a79,network=Network(41354ccc-5b80-451f-9510-2c3d0788ecf7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06105eee-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:08:44 compute-0 nova_compute[257802]: 2025-10-02 13:08:44.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:44 compute-0 nova_compute[257802]: 2025-10-02 13:08:44.078 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06105eee-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:08:44 compute-0 nova_compute[257802]: 2025-10-02 13:08:44.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:44 compute-0 neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7[399205]: [NOTICE]   (399243) : haproxy version is 2.8.14-c23fe91
Oct 02 13:08:44 compute-0 neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7[399205]: [NOTICE]   (399243) : path to executable is /usr/sbin/haproxy
Oct 02 13:08:44 compute-0 neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7[399205]: [WARNING]  (399243) : Exiting Master process...
Oct 02 13:08:44 compute-0 nova_compute[257802]: 2025-10-02 13:08:44.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:08:44 compute-0 neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7[399205]: [ALERT]    (399243) : Current worker (399253) exited with code 143 (Terminated)
Oct 02 13:08:44 compute-0 neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7[399205]: [WARNING]  (399243) : All workers exited. Exiting... (0)
Oct 02 13:08:44 compute-0 nova_compute[257802]: 2025-10-02 13:08:44.085 2 INFO os_vif [None req-082413b9-8d2a-4e7e-864e-9b289da5daf9 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3d:d7:ed,bridge_name='br-int',has_traffic_filtering=True,id=06105eee-1ccc-4976-9ef2-84b4765d9a79,network=Network(41354ccc-5b80-451f-9510-2c3d0788ecf7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06105eee-1c')
Oct 02 13:08:44 compute-0 systemd[1]: libpod-12ef6280c667e1d2aab51761608c4acb442e70a2d73d454db72303cd21015ae2.scope: Deactivated successfully.
Oct 02 13:08:44 compute-0 nova_compute[257802]: 2025-10-02 13:08:44.090 2 DEBUG nova.virt.libvirt.driver [None req-082413b9-8d2a-4e7e-864e-9b289da5daf9 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] skipping disk for instance-000000d1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:08:44 compute-0 nova_compute[257802]: 2025-10-02 13:08:44.091 2 DEBUG nova.virt.libvirt.driver [None req-082413b9-8d2a-4e7e-864e-9b289da5daf9 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] skipping disk for instance-000000d1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:08:44 compute-0 podman[399449]: 2025-10-02 13:08:44.092621688 +0000 UTC m=+0.048806352 container died 12ef6280c667e1d2aab51761608c4acb442e70a2d73d454db72303cd21015ae2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 02 13:08:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-12ef6280c667e1d2aab51761608c4acb442e70a2d73d454db72303cd21015ae2-userdata-shm.mount: Deactivated successfully.
Oct 02 13:08:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ed4fecb3c813daaa4ee89cfa51ee2d37a70851d64cba8fa76b0623cb777a966-merged.mount: Deactivated successfully.
Oct 02 13:08:44 compute-0 podman[399449]: 2025-10-02 13:08:44.136204623 +0000 UTC m=+0.092389287 container cleanup 12ef6280c667e1d2aab51761608c4acb442e70a2d73d454db72303cd21015ae2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 13:08:44 compute-0 systemd[1]: libpod-conmon-12ef6280c667e1d2aab51761608c4acb442e70a2d73d454db72303cd21015ae2.scope: Deactivated successfully.
Oct 02 13:08:44 compute-0 podman[399478]: 2025-10-02 13:08:44.250668802 +0000 UTC m=+0.083613414 container remove 12ef6280c667e1d2aab51761608c4acb442e70a2d73d454db72303cd21015ae2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:08:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:44.256 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e1fed56f-0aa6-43ee-840f-6ea5904f9e28]: (4, ('Thu Oct  2 01:08:44 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7 (12ef6280c667e1d2aab51761608c4acb442e70a2d73d454db72303cd21015ae2)\n12ef6280c667e1d2aab51761608c4acb442e70a2d73d454db72303cd21015ae2\nThu Oct  2 01:08:44 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7 (12ef6280c667e1d2aab51761608c4acb442e70a2d73d454db72303cd21015ae2)\n12ef6280c667e1d2aab51761608c4acb442e70a2d73d454db72303cd21015ae2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:44.258 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d408a7e0-d2f6-40bf-87fc-c14fd4b428f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:44.259 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41354ccc-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:08:44 compute-0 kernel: tap41354ccc-50: left promiscuous mode
Oct 02 13:08:44 compute-0 nova_compute[257802]: 2025-10-02 13:08:44.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:44 compute-0 nova_compute[257802]: 2025-10-02 13:08:44.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:08:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:08:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:08:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:44.277 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[db18c76a-d9b7-47ec-951a-869075e5dc86]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:08:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:08:44 compute-0 nova_compute[257802]: 2025-10-02 13:08:44.296 2 DEBUG neutronclient.v2_0.client [None req-082413b9-8d2a-4e7e-864e-9b289da5daf9 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 06105eee-1ccc-4976-9ef2-84b4765d9a79 for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Oct 02 13:08:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:44.307 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2ba08829-9591-4e4b-ade9-35b46d748f1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:44.308 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[55ae88a7-9b55-4efd-93d3-f81771c492e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:44.324 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[cc9a00ed-8486-4a0f-964d-54f6b733c340]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 855738, 'reachable_time': 21684, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399493, 'error': None, 'target': 'ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:44.327 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:08:44 compute-0 systemd[1]: run-netns-ovnmeta\x2d41354ccc\x2d5b80\x2d451f\x2d9510\x2d2c3d0788ecf7.mount: Deactivated successfully.
Oct 02 13:08:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:08:44.327 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[8a4aed84-4045-497a-a599-ec5299b3ee8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:44 compute-0 nova_compute[257802]: 2025-10-02 13:08:44.399 2 DEBUG nova.compute.manager [req-2356677a-cf58-4765-a0bf-6779f2b6532d req-c5eabc61-e3c0-498e-8daa-9a4974388c5b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Received event network-vif-unplugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:08:44 compute-0 nova_compute[257802]: 2025-10-02 13:08:44.400 2 DEBUG oslo_concurrency.lockutils [req-2356677a-cf58-4765-a0bf-6779f2b6532d req-c5eabc61-e3c0-498e-8daa-9a4974388c5b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:44 compute-0 nova_compute[257802]: 2025-10-02 13:08:44.400 2 DEBUG oslo_concurrency.lockutils [req-2356677a-cf58-4765-a0bf-6779f2b6532d req-c5eabc61-e3c0-498e-8daa-9a4974388c5b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:44 compute-0 nova_compute[257802]: 2025-10-02 13:08:44.401 2 DEBUG oslo_concurrency.lockutils [req-2356677a-cf58-4765-a0bf-6779f2b6532d req-c5eabc61-e3c0-498e-8daa-9a4974388c5b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:44 compute-0 nova_compute[257802]: 2025-10-02 13:08:44.401 2 DEBUG nova.compute.manager [req-2356677a-cf58-4765-a0bf-6779f2b6532d req-c5eabc61-e3c0-498e-8daa-9a4974388c5b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] No waiting events found dispatching network-vif-unplugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:08:44 compute-0 nova_compute[257802]: 2025-10-02 13:08:44.401 2 WARNING nova.compute.manager [req-2356677a-cf58-4765-a0bf-6779f2b6532d req-c5eabc61-e3c0-498e-8daa-9a4974388c5b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Received unexpected event network-vif-unplugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 for instance with vm_state active and task_state resize_migrating.
Oct 02 13:08:44 compute-0 nova_compute[257802]: 2025-10-02 13:08:44.468 2 DEBUG oslo_concurrency.lockutils [None req-082413b9-8d2a-4e7e-864e-9b289da5daf9 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Acquiring lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:44 compute-0 nova_compute[257802]: 2025-10-02 13:08:44.469 2 DEBUG oslo_concurrency.lockutils [None req-082413b9-8d2a-4e7e-864e-9b289da5daf9 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:44 compute-0 nova_compute[257802]: 2025-10-02 13:08:44.469 2 DEBUG oslo_concurrency.lockutils [None req-082413b9-8d2a-4e7e-864e-9b289da5daf9 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:44 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3315371559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3308: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.9 MiB/s wr, 137 op/s
Oct 02 13:08:45 compute-0 nova_compute[257802]: 2025-10-02 13:08:45.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:08:45 compute-0 nova_compute[257802]: 2025-10-02 13:08:45.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:08:45 compute-0 nova_compute[257802]: 2025-10-02 13:08:45.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:08:45 compute-0 nova_compute[257802]: 2025-10-02 13:08:45.154 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:08:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:08:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:45.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:08:45 compute-0 nova_compute[257802]: 2025-10-02 13:08:45.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:45 compute-0 ceph-mon[73607]: pgmap v3308: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.9 MiB/s wr, 137 op/s
Oct 02 13:08:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3354737208' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:45.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:45 compute-0 sudo[399495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:45 compute-0 sudo[399495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:45 compute-0 sudo[399495]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:45 compute-0 sudo[399520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:45 compute-0 sudo[399520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:45 compute-0 sudo[399520]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e405 do_prune osdmap full prune enabled
Oct 02 13:08:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e406 e406: 3 total, 3 up, 3 in
Oct 02 13:08:46 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e406: 3 total, 3 up, 3 in
Oct 02 13:08:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3310: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.3 MiB/s wr, 125 op/s
Oct 02 13:08:46 compute-0 nova_compute[257802]: 2025-10-02 13:08:46.751 2 DEBUG nova.compute.manager [req-85a86fbd-a37e-4386-9fef-614c0c1f970d req-4010acd0-2b12-46de-9310-67052e749d97 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Received event network-vif-plugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:08:46 compute-0 nova_compute[257802]: 2025-10-02 13:08:46.752 2 DEBUG oslo_concurrency.lockutils [req-85a86fbd-a37e-4386-9fef-614c0c1f970d req-4010acd0-2b12-46de-9310-67052e749d97 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:46 compute-0 nova_compute[257802]: 2025-10-02 13:08:46.752 2 DEBUG oslo_concurrency.lockutils [req-85a86fbd-a37e-4386-9fef-614c0c1f970d req-4010acd0-2b12-46de-9310-67052e749d97 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:46 compute-0 nova_compute[257802]: 2025-10-02 13:08:46.753 2 DEBUG oslo_concurrency.lockutils [req-85a86fbd-a37e-4386-9fef-614c0c1f970d req-4010acd0-2b12-46de-9310-67052e749d97 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:46 compute-0 nova_compute[257802]: 2025-10-02 13:08:46.753 2 DEBUG nova.compute.manager [req-85a86fbd-a37e-4386-9fef-614c0c1f970d req-4010acd0-2b12-46de-9310-67052e749d97 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] No waiting events found dispatching network-vif-plugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:08:46 compute-0 nova_compute[257802]: 2025-10-02 13:08:46.753 2 WARNING nova.compute.manager [req-85a86fbd-a37e-4386-9fef-614c0c1f970d req-4010acd0-2b12-46de-9310-67052e749d97 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Received unexpected event network-vif-plugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 for instance with vm_state active and task_state resize_migrated.
Oct 02 13:08:47 compute-0 ceph-mon[73607]: osdmap e406: 3 total, 3 up, 3 in
Oct 02 13:08:47 compute-0 ceph-mon[73607]: pgmap v3310: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.3 MiB/s wr, 125 op/s
Oct 02 13:08:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:08:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:47.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:08:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:47.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3311: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.7 MiB/s wr, 84 op/s
Oct 02 13:08:48 compute-0 nova_compute[257802]: 2025-10-02 13:08:48.779 2 DEBUG nova.compute.manager [req-d994fe06-4ab6-4653-a41f-15929b58a8a3 req-f62c90e3-135b-49b5-a448-88e26aeb259d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Received event network-changed-06105eee-1ccc-4976-9ef2-84b4765d9a79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:08:48 compute-0 nova_compute[257802]: 2025-10-02 13:08:48.780 2 DEBUG nova.compute.manager [req-d994fe06-4ab6-4653-a41f-15929b58a8a3 req-f62c90e3-135b-49b5-a448-88e26aeb259d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Refreshing instance network info cache due to event network-changed-06105eee-1ccc-4976-9ef2-84b4765d9a79. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:08:48 compute-0 nova_compute[257802]: 2025-10-02 13:08:48.780 2 DEBUG oslo_concurrency.lockutils [req-d994fe06-4ab6-4653-a41f-15929b58a8a3 req-f62c90e3-135b-49b5-a448-88e26aeb259d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-6df9bd3b-6218-4859-aba9-bfbedf2b8f18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:08:48 compute-0 nova_compute[257802]: 2025-10-02 13:08:48.780 2 DEBUG oslo_concurrency.lockutils [req-d994fe06-4ab6-4653-a41f-15929b58a8a3 req-f62c90e3-135b-49b5-a448-88e26aeb259d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-6df9bd3b-6218-4859-aba9-bfbedf2b8f18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:08:48 compute-0 nova_compute[257802]: 2025-10-02 13:08:48.780 2 DEBUG nova.network.neutron [req-d994fe06-4ab6-4653-a41f-15929b58a8a3 req-f62c90e3-135b-49b5-a448-88e26aeb259d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Refreshing network info cache for port 06105eee-1ccc-4976-9ef2-84b4765d9a79 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:08:49 compute-0 nova_compute[257802]: 2025-10-02 13:08:49.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:49.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:49 compute-0 ceph-mon[73607]: pgmap v3311: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.7 MiB/s wr, 84 op/s
Oct 02 13:08:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:49.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:50 compute-0 nova_compute[257802]: 2025-10-02 13:08:50.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:08:50 compute-0 nova_compute[257802]: 2025-10-02 13:08:50.196 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:50 compute-0 nova_compute[257802]: 2025-10-02 13:08:50.197 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:50 compute-0 nova_compute[257802]: 2025-10-02 13:08:50.197 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:50 compute-0 nova_compute[257802]: 2025-10-02 13:08:50.197 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:08:50 compute-0 nova_compute[257802]: 2025-10-02 13:08:50.197 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:50 compute-0 nova_compute[257802]: 2025-10-02 13:08:50.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:08:50 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3930201684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:50 compute-0 nova_compute[257802]: 2025-10-02 13:08:50.612 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3312: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 21 KiB/s wr, 50 op/s
Oct 02 13:08:50 compute-0 podman[399570]: 2025-10-02 13:08:50.731702326 +0000 UTC m=+0.076472112 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:08:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3930201684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:51 compute-0 nova_compute[257802]: 2025-10-02 13:08:51.012 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000d1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:08:51 compute-0 nova_compute[257802]: 2025-10-02 13:08:51.012 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000d1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:08:51 compute-0 nova_compute[257802]: 2025-10-02 13:08:51.136 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:08:51 compute-0 nova_compute[257802]: 2025-10-02 13:08:51.137 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4214MB free_disk=20.897056579589844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:08:51 compute-0 nova_compute[257802]: 2025-10-02 13:08:51.138 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:51 compute-0 nova_compute[257802]: 2025-10-02 13:08:51.138 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:51.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:51 compute-0 nova_compute[257802]: 2025-10-02 13:08:51.644 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Migration for instance 6df9bd3b-6218-4859-aba9-bfbedf2b8f18 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Oct 02 13:08:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:51.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:51 compute-0 nova_compute[257802]: 2025-10-02 13:08:51.772 2 INFO nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Updating resource usage from migration aecd633a-2b42-456e-a50c-d6475dc25816
Oct 02 13:08:51 compute-0 nova_compute[257802]: 2025-10-02 13:08:51.773 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Starting to track outgoing migration aecd633a-2b42-456e-a50c-d6475dc25816 with flavor cef129e5-cce4-4465-9674-03d3559e8a14 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444
Oct 02 13:08:51 compute-0 nova_compute[257802]: 2025-10-02 13:08:51.804 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Migration aecd633a-2b42-456e-a50c-d6475dc25816 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Oct 02 13:08:51 compute-0 nova_compute[257802]: 2025-10-02 13:08:51.805 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:08:51 compute-0 nova_compute[257802]: 2025-10-02 13:08:51.805 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:08:51 compute-0 ceph-mon[73607]: pgmap v3312: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 21 KiB/s wr, 50 op/s
Oct 02 13:08:51 compute-0 nova_compute[257802]: 2025-10-02 13:08:51.862 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:08:52 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3259318693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:52 compute-0 nova_compute[257802]: 2025-10-02 13:08:52.271 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:52 compute-0 nova_compute[257802]: 2025-10-02 13:08:52.277 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:08:52 compute-0 nova_compute[257802]: 2025-10-02 13:08:52.298 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:08:52 compute-0 nova_compute[257802]: 2025-10-02 13:08:52.321 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:08:52 compute-0 nova_compute[257802]: 2025-10-02 13:08:52.322 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:52 compute-0 nova_compute[257802]: 2025-10-02 13:08:52.486 2 DEBUG nova.network.neutron [req-d994fe06-4ab6-4653-a41f-15929b58a8a3 req-f62c90e3-135b-49b5-a448-88e26aeb259d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Updated VIF entry in instance network info cache for port 06105eee-1ccc-4976-9ef2-84b4765d9a79. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:08:52 compute-0 nova_compute[257802]: 2025-10-02 13:08:52.486 2 DEBUG nova.network.neutron [req-d994fe06-4ab6-4653-a41f-15929b58a8a3 req-f62c90e3-135b-49b5-a448-88e26aeb259d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Updating instance_info_cache with network_info: [{"id": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "address": "fa:16:3e:3d:d7:ed", "network": {"id": "41354ccc-5b80-451f-9510-2c3d0788ecf7", "bridge": "br-int", "label": "tempest-network-smoke--166998389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06105eee-1c", "ovs_interfaceid": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:08:52 compute-0 nova_compute[257802]: 2025-10-02 13:08:52.520 2 DEBUG oslo_concurrency.lockutils [req-d994fe06-4ab6-4653-a41f-15929b58a8a3 req-f62c90e3-135b-49b5-a448-88e26aeb259d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-6df9bd3b-6218-4859-aba9-bfbedf2b8f18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:08:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3313: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 21 KiB/s wr, 50 op/s
Oct 02 13:08:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3259318693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:52 compute-0 sudo[399622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:52 compute-0 sudo[399622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:52 compute-0 sudo[399622]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:53 compute-0 sudo[399647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:08:53 compute-0 sudo[399647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:53 compute-0 sudo[399647]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:53 compute-0 sudo[399672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:53 compute-0 sudo[399672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:53 compute-0 sudo[399672]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:53 compute-0 sudo[399697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:08:53 compute-0 sudo[399697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:53.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:53 compute-0 sudo[399697]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 13:08:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 13:08:53 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 13:08:53 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:08:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 13:08:53 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:08:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Oct 02 13:08:53 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 13:08:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:53.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e406 do_prune osdmap full prune enabled
Oct 02 13:08:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e407 e407: 3 total, 3 up, 3 in
Oct 02 13:08:53 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e407: 3 total, 3 up, 3 in
Oct 02 13:08:53 compute-0 ceph-mon[73607]: pgmap v3313: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 21 KiB/s wr, 50 op/s
Oct 02 13:08:53 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 13:08:53 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:08:53 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:08:53 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 13:08:54 compute-0 nova_compute[257802]: 2025-10-02 13:08:54.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Oct 02 13:08:54 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 13:08:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:08:54 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:08:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:08:54 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:08:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:08:54 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:08:54 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 5e344ee2-2926-432d-9364-e1b32de073ec does not exist
Oct 02 13:08:54 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 739fd74e-0620-4209-a45f-6134ac84f374 does not exist
Oct 02 13:08:54 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 9be35c57-8745-403b-b0cd-7a1acbcf60eb does not exist
Oct 02 13:08:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:08:54 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:08:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:08:54 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:08:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:08:54 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:08:54 compute-0 sudo[399753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:54 compute-0 sudo[399753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:54 compute-0 sudo[399753]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3315: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 1.4 KiB/s wr, 42 op/s
Oct 02 13:08:54 compute-0 sudo[399778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:08:54 compute-0 sudo[399778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:54 compute-0 sudo[399778]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:54 compute-0 sudo[399804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:54 compute-0 sudo[399804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:54 compute-0 sudo[399804]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:54 compute-0 sudo[399829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:08:54 compute-0 sudo[399829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0043466391564302995 of space, bias 1.0, pg target 1.3039917469290898 quantized to 32 (current 32)
Oct 02 13:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6463173433719523 quantized to 32 (current 32)
Oct 02 13:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Oct 02 13:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0040689195398287735 of space, bias 1.0, pg target 1.2166069424088033 quantized to 32 (current 32)
Oct 02 13:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 13:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027081297692164525 quantized to 32 (current 32)
Oct 02 13:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 13:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:08:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 13:08:54 compute-0 ceph-mon[73607]: osdmap e407: 3 total, 3 up, 3 in
Oct 02 13:08:54 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 13:08:54 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:08:54 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:08:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2387503929' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:54 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:08:54 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:08:54 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:08:54 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:08:55 compute-0 podman[399895]: 2025-10-02 13:08:55.153406928 +0000 UTC m=+0.041074705 container create 0fa54091422376982a9947f940f0bdcaa47b325250213f70f1429ef04ad462b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 13:08:55 compute-0 systemd[1]: Started libpod-conmon-0fa54091422376982a9947f940f0bdcaa47b325250213f70f1429ef04ad462b4.scope.
Oct 02 13:08:55 compute-0 podman[399895]: 2025-10-02 13:08:55.135225398 +0000 UTC m=+0.022893195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:08:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:08:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:55.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:55 compute-0 podman[399895]: 2025-10-02 13:08:55.262336434 +0000 UTC m=+0.150004231 container init 0fa54091422376982a9947f940f0bdcaa47b325250213f70f1429ef04ad462b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 13:08:55 compute-0 podman[399895]: 2025-10-02 13:08:55.277197253 +0000 UTC m=+0.164865030 container start 0fa54091422376982a9947f940f0bdcaa47b325250213f70f1429ef04ad462b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:08:55 compute-0 podman[399895]: 2025-10-02 13:08:55.281079677 +0000 UTC m=+0.168747484 container attach 0fa54091422376982a9947f940f0bdcaa47b325250213f70f1429ef04ad462b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:08:55 compute-0 systemd[1]: libpod-0fa54091422376982a9947f940f0bdcaa47b325250213f70f1429ef04ad462b4.scope: Deactivated successfully.
Oct 02 13:08:55 compute-0 trusting_germain[399912]: 167 167
Oct 02 13:08:55 compute-0 nova_compute[257802]: 2025-10-02 13:08:55.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:55 compute-0 conmon[399912]: conmon 0fa54091422376982a99 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0fa54091422376982a9947f940f0bdcaa47b325250213f70f1429ef04ad462b4.scope/container/memory.events
Oct 02 13:08:55 compute-0 podman[399895]: 2025-10-02 13:08:55.28989001 +0000 UTC m=+0.177557787 container died 0fa54091422376982a9947f940f0bdcaa47b325250213f70f1429ef04ad462b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 13:08:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-fce0767ded8ba468fa7e82282eef0ef7ef57f5edda9d133d4ec0fa19c86ced76-merged.mount: Deactivated successfully.
Oct 02 13:08:55 compute-0 podman[399895]: 2025-10-02 13:08:55.335009131 +0000 UTC m=+0.222676908 container remove 0fa54091422376982a9947f940f0bdcaa47b325250213f70f1429ef04ad462b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:08:55 compute-0 systemd[1]: libpod-conmon-0fa54091422376982a9947f940f0bdcaa47b325250213f70f1429ef04ad462b4.scope: Deactivated successfully.
Oct 02 13:08:55 compute-0 podman[399936]: 2025-10-02 13:08:55.48581807 +0000 UTC m=+0.036479574 container create c4cd095647403a96272087b86928d2a326957513297855409a76dd26c74e1df0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 13:08:55 compute-0 systemd[1]: Started libpod-conmon-c4cd095647403a96272087b86928d2a326957513297855409a76dd26c74e1df0.scope.
Oct 02 13:08:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f79b2df077ea7860cd6731298eeeffa897677a1a581973ca7552d50be58010d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f79b2df077ea7860cd6731298eeeffa897677a1a581973ca7552d50be58010d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f79b2df077ea7860cd6731298eeeffa897677a1a581973ca7552d50be58010d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f79b2df077ea7860cd6731298eeeffa897677a1a581973ca7552d50be58010d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f79b2df077ea7860cd6731298eeeffa897677a1a581973ca7552d50be58010d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:08:55 compute-0 podman[399936]: 2025-10-02 13:08:55.47094242 +0000 UTC m=+0.021603954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:08:55 compute-0 podman[399936]: 2025-10-02 13:08:55.573418339 +0000 UTC m=+0.124079853 container init c4cd095647403a96272087b86928d2a326957513297855409a76dd26c74e1df0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_yonath, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:08:55 compute-0 podman[399936]: 2025-10-02 13:08:55.586450194 +0000 UTC m=+0.137111708 container start c4cd095647403a96272087b86928d2a326957513297855409a76dd26c74e1df0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_yonath, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:08:55 compute-0 podman[399936]: 2025-10-02 13:08:55.590285387 +0000 UTC m=+0.140946891 container attach c4cd095647403a96272087b86928d2a326957513297855409a76dd26c74e1df0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 13:08:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:08:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:55.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:08:55 compute-0 ceph-mon[73607]: pgmap v3315: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 1.4 KiB/s wr, 42 op/s
Oct 02 13:08:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1808770692' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:08:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1808770692' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:08:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1810644779' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:56 compute-0 hardcore_yonath[399953]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:08:56 compute-0 hardcore_yonath[399953]: --> relative data size: 1.0
Oct 02 13:08:56 compute-0 hardcore_yonath[399953]: --> All data devices are unavailable
Oct 02 13:08:56 compute-0 systemd[1]: libpod-c4cd095647403a96272087b86928d2a326957513297855409a76dd26c74e1df0.scope: Deactivated successfully.
Oct 02 13:08:56 compute-0 podman[399936]: 2025-10-02 13:08:56.451022601 +0000 UTC m=+1.001684105 container died c4cd095647403a96272087b86928d2a326957513297855409a76dd26c74e1df0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 13:08:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f79b2df077ea7860cd6731298eeeffa897677a1a581973ca7552d50be58010d-merged.mount: Deactivated successfully.
Oct 02 13:08:56 compute-0 podman[399936]: 2025-10-02 13:08:56.510126631 +0000 UTC m=+1.060788135 container remove c4cd095647403a96272087b86928d2a326957513297855409a76dd26c74e1df0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:08:56 compute-0 systemd[1]: libpod-conmon-c4cd095647403a96272087b86928d2a326957513297855409a76dd26c74e1df0.scope: Deactivated successfully.
Oct 02 13:08:56 compute-0 sudo[399829]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:56 compute-0 sudo[399979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:56 compute-0 sudo[399979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:56 compute-0 sudo[399979]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3316: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 1.2 KiB/s wr, 39 op/s
Oct 02 13:08:56 compute-0 sudo[400004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:08:56 compute-0 sudo[400004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:56 compute-0 sudo[400004]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:56 compute-0 sudo[400030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:56 compute-0 sudo[400030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:56 compute-0 sudo[400030]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:56 compute-0 sudo[400055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 13:08:56 compute-0 sudo[400055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/564078248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:56 compute-0 ceph-mon[73607]: pgmap v3316: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 1.2 KiB/s wr, 39 op/s
Oct 02 13:08:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2664833074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:57 compute-0 nova_compute[257802]: 2025-10-02 13:08:57.070 2 DEBUG nova.compute.manager [req-d1cd4141-8e56-481a-89d6-81035ebf91f4 req-89121e1f-b638-4d4f-b4c0-10c242ff434c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Received event network-vif-plugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:08:57 compute-0 nova_compute[257802]: 2025-10-02 13:08:57.073 2 DEBUG oslo_concurrency.lockutils [req-d1cd4141-8e56-481a-89d6-81035ebf91f4 req-89121e1f-b638-4d4f-b4c0-10c242ff434c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:57 compute-0 nova_compute[257802]: 2025-10-02 13:08:57.073 2 DEBUG oslo_concurrency.lockutils [req-d1cd4141-8e56-481a-89d6-81035ebf91f4 req-89121e1f-b638-4d4f-b4c0-10c242ff434c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:57 compute-0 nova_compute[257802]: 2025-10-02 13:08:57.073 2 DEBUG oslo_concurrency.lockutils [req-d1cd4141-8e56-481a-89d6-81035ebf91f4 req-89121e1f-b638-4d4f-b4c0-10c242ff434c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:57 compute-0 nova_compute[257802]: 2025-10-02 13:08:57.074 2 DEBUG nova.compute.manager [req-d1cd4141-8e56-481a-89d6-81035ebf91f4 req-89121e1f-b638-4d4f-b4c0-10c242ff434c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] No waiting events found dispatching network-vif-plugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:08:57 compute-0 nova_compute[257802]: 2025-10-02 13:08:57.074 2 WARNING nova.compute.manager [req-d1cd4141-8e56-481a-89d6-81035ebf91f4 req-89121e1f-b638-4d4f-b4c0-10c242ff434c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Received unexpected event network-vif-plugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 for instance with vm_state resized and task_state None.
Oct 02 13:08:57 compute-0 podman[400120]: 2025-10-02 13:08:57.146182149 +0000 UTC m=+0.038651477 container create 17ad2205b09a3f39e07b2245acb208f2b7ccadce71233a8530d384c6d290e7ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lumiere, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:08:57 compute-0 systemd[1]: Started libpod-conmon-17ad2205b09a3f39e07b2245acb208f2b7ccadce71233a8530d384c6d290e7ee.scope.
Oct 02 13:08:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:08:57 compute-0 podman[400120]: 2025-10-02 13:08:57.223308994 +0000 UTC m=+0.115778332 container init 17ad2205b09a3f39e07b2245acb208f2b7ccadce71233a8530d384c6d290e7ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 13:08:57 compute-0 podman[400120]: 2025-10-02 13:08:57.130671893 +0000 UTC m=+0.023141241 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:08:57 compute-0 podman[400120]: 2025-10-02 13:08:57.231511803 +0000 UTC m=+0.123981171 container start 17ad2205b09a3f39e07b2245acb208f2b7ccadce71233a8530d384c6d290e7ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Oct 02 13:08:57 compute-0 podman[400120]: 2025-10-02 13:08:57.235391157 +0000 UTC m=+0.127860505 container attach 17ad2205b09a3f39e07b2245acb208f2b7ccadce71233a8530d384c6d290e7ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 13:08:57 compute-0 vigorous_lumiere[400136]: 167 167
Oct 02 13:08:57 compute-0 systemd[1]: libpod-17ad2205b09a3f39e07b2245acb208f2b7ccadce71233a8530d384c6d290e7ee.scope: Deactivated successfully.
Oct 02 13:08:57 compute-0 podman[400120]: 2025-10-02 13:08:57.237886498 +0000 UTC m=+0.130355826 container died 17ad2205b09a3f39e07b2245acb208f2b7ccadce71233a8530d384c6d290e7ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lumiere, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:08:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:57.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d5ba8bb57c102e19902cd74d7f868df3f6d205ae569f1c0d0115ccdd1b366f0-merged.mount: Deactivated successfully.
Oct 02 13:08:57 compute-0 podman[400120]: 2025-10-02 13:08:57.294313752 +0000 UTC m=+0.186783080 container remove 17ad2205b09a3f39e07b2245acb208f2b7ccadce71233a8530d384c6d290e7ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 13:08:57 compute-0 systemd[1]: libpod-conmon-17ad2205b09a3f39e07b2245acb208f2b7ccadce71233a8530d384c6d290e7ee.scope: Deactivated successfully.
Oct 02 13:08:57 compute-0 podman[400163]: 2025-10-02 13:08:57.465283899 +0000 UTC m=+0.051731763 container create f27aea9ce6bb2b4cdb071b054a600648d37c1a402de7c29727742d3b4a96bfc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldstine, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 13:08:57 compute-0 systemd[1]: Started libpod-conmon-f27aea9ce6bb2b4cdb071b054a600648d37c1a402de7c29727742d3b4a96bfc6.scope.
Oct 02 13:08:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:08:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f18cb30c42d4c34690d162b5daed6baf6050c7a027e763d4dc9a3975b4fa6b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:08:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f18cb30c42d4c34690d162b5daed6baf6050c7a027e763d4dc9a3975b4fa6b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:08:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f18cb30c42d4c34690d162b5daed6baf6050c7a027e763d4dc9a3975b4fa6b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:08:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f18cb30c42d4c34690d162b5daed6baf6050c7a027e763d4dc9a3975b4fa6b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:08:57 compute-0 podman[400163]: 2025-10-02 13:08:57.534226926 +0000 UTC m=+0.120674820 container init f27aea9ce6bb2b4cdb071b054a600648d37c1a402de7c29727742d3b4a96bfc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 13:08:57 compute-0 podman[400163]: 2025-10-02 13:08:57.441391021 +0000 UTC m=+0.027838905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:08:57 compute-0 podman[400163]: 2025-10-02 13:08:57.540842536 +0000 UTC m=+0.127290400 container start f27aea9ce6bb2b4cdb071b054a600648d37c1a402de7c29727742d3b4a96bfc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldstine, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:08:57 compute-0 podman[400163]: 2025-10-02 13:08:57.543666395 +0000 UTC m=+0.130114259 container attach f27aea9ce6bb2b4cdb071b054a600648d37c1a402de7c29727742d3b4a96bfc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldstine, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 13:08:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:57.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]: {
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]:     "1": [
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]:         {
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]:             "devices": [
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]:                 "/dev/loop3"
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]:             ],
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]:             "lv_name": "ceph_lv0",
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]:             "lv_size": "7511998464",
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]:             "name": "ceph_lv0",
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]:             "tags": {
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]:                 "ceph.cluster_name": "ceph",
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]:                 "ceph.crush_device_class": "",
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]:                 "ceph.encrypted": "0",
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]:                 "ceph.osd_id": "1",
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]:                 "ceph.type": "block",
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]:                 "ceph.vdo": "0"
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]:             },
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]:             "type": "block",
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]:             "vg_name": "ceph_vg0"
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]:         }
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]:     ]
Oct 02 13:08:58 compute-0 hungry_goldstine[400180]: }
Oct 02 13:08:58 compute-0 systemd[1]: libpod-f27aea9ce6bb2b4cdb071b054a600648d37c1a402de7c29727742d3b4a96bfc6.scope: Deactivated successfully.
Oct 02 13:08:58 compute-0 podman[400163]: 2025-10-02 13:08:58.289799386 +0000 UTC m=+0.876247250 container died f27aea9ce6bb2b4cdb071b054a600648d37c1a402de7c29727742d3b4a96bfc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldstine, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 13:08:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f18cb30c42d4c34690d162b5daed6baf6050c7a027e763d4dc9a3975b4fa6b2-merged.mount: Deactivated successfully.
Oct 02 13:08:58 compute-0 podman[400163]: 2025-10-02 13:08:58.342781988 +0000 UTC m=+0.929229852 container remove f27aea9ce6bb2b4cdb071b054a600648d37c1a402de7c29727742d3b4a96bfc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldstine, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 13:08:58 compute-0 systemd[1]: libpod-conmon-f27aea9ce6bb2b4cdb071b054a600648d37c1a402de7c29727742d3b4a96bfc6.scope: Deactivated successfully.
Oct 02 13:08:58 compute-0 sudo[400055]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:58 compute-0 sudo[400202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:58 compute-0 sudo[400202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:58 compute-0 sudo[400202]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:58 compute-0 sudo[400227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:08:58 compute-0 sudo[400227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:58 compute-0 sudo[400227]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:58 compute-0 sudo[400252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:58 compute-0 sudo[400252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:58 compute-0 sudo[400252]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:58 compute-0 sudo[400277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 13:08:58 compute-0 sudo[400277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3317: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 439 KiB/s rd, 921 B/s wr, 54 op/s
Oct 02 13:08:58 compute-0 nova_compute[257802]: 2025-10-02 13:08:58.986 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410523.9855955, 6df9bd3b-6218-4859-aba9-bfbedf2b8f18 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:08:58 compute-0 nova_compute[257802]: 2025-10-02 13:08:58.987 2 INFO nova.compute.manager [-] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] VM Stopped (Lifecycle Event)
Oct 02 13:08:59 compute-0 podman[400344]: 2025-10-02 13:08:59.02410093 +0000 UTC m=+0.080955330 container create 9603d88e10ef715e4bd325ab69c173cb8b55daef7a96b64bb9725b7f2e61edae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_blackburn, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:08:59 compute-0 systemd[1]: Started libpod-conmon-9603d88e10ef715e4bd325ab69c173cb8b55daef7a96b64bb9725b7f2e61edae.scope.
Oct 02 13:08:59 compute-0 podman[400344]: 2025-10-02 13:08:58.971318554 +0000 UTC m=+0.028172944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:08:59 compute-0 nova_compute[257802]: 2025-10-02 13:08:59.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:08:59 compute-0 podman[400344]: 2025-10-02 13:08:59.104094725 +0000 UTC m=+0.160949105 container init 9603d88e10ef715e4bd325ab69c173cb8b55daef7a96b64bb9725b7f2e61edae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_blackburn, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:08:59 compute-0 podman[400344]: 2025-10-02 13:08:59.110192683 +0000 UTC m=+0.167047033 container start 9603d88e10ef715e4bd325ab69c173cb8b55daef7a96b64bb9725b7f2e61edae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_blackburn, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:08:59 compute-0 podman[400344]: 2025-10-02 13:08:59.114408735 +0000 UTC m=+0.171263095 container attach 9603d88e10ef715e4bd325ab69c173cb8b55daef7a96b64bb9725b7f2e61edae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_blackburn, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 13:08:59 compute-0 happy_blackburn[400360]: 167 167
Oct 02 13:08:59 compute-0 systemd[1]: libpod-9603d88e10ef715e4bd325ab69c173cb8b55daef7a96b64bb9725b7f2e61edae.scope: Deactivated successfully.
Oct 02 13:08:59 compute-0 podman[400344]: 2025-10-02 13:08:59.11670619 +0000 UTC m=+0.173560550 container died 9603d88e10ef715e4bd325ab69c173cb8b55daef7a96b64bb9725b7f2e61edae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_blackburn, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:08:59 compute-0 nova_compute[257802]: 2025-10-02 13:08:59.118 2 DEBUG nova.compute.manager [None req-31e12756-401c-44ce-a220-5038d253923e - - - - - -] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:08:59 compute-0 nova_compute[257802]: 2025-10-02 13:08:59.124 2 DEBUG nova.compute.manager [None req-31e12756-401c-44ce-a220-5038d253923e - - - - - -] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:08:59 compute-0 nova_compute[257802]: 2025-10-02 13:08:59.145 2 INFO nova.compute.manager [None req-31e12756-401c-44ce-a220-5038d253923e - - - - - -] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-0.ctlplane.example.com
Oct 02 13:08:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ba8cc10eb09ddeb1d5d5a73d2e68ac20315b3169c2c2efedb0cddaebbca78c9-merged.mount: Deactivated successfully.
Oct 02 13:08:59 compute-0 podman[400344]: 2025-10-02 13:08:59.164263421 +0000 UTC m=+0.221117781 container remove 9603d88e10ef715e4bd325ab69c173cb8b55daef7a96b64bb9725b7f2e61edae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 13:08:59 compute-0 systemd[1]: libpod-conmon-9603d88e10ef715e4bd325ab69c173cb8b55daef7a96b64bb9725b7f2e61edae.scope: Deactivated successfully.
Oct 02 13:08:59 compute-0 nova_compute[257802]: 2025-10-02 13:08:59.196 2 DEBUG nova.compute.manager [req-c81f3f62-3df1-47a0-afb2-923137573bce req-898bf2ab-4b77-4352-ac91-89f5f9d82bc1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Received event network-vif-plugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:08:59 compute-0 nova_compute[257802]: 2025-10-02 13:08:59.197 2 DEBUG oslo_concurrency.lockutils [req-c81f3f62-3df1-47a0-afb2-923137573bce req-898bf2ab-4b77-4352-ac91-89f5f9d82bc1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:59 compute-0 nova_compute[257802]: 2025-10-02 13:08:59.197 2 DEBUG oslo_concurrency.lockutils [req-c81f3f62-3df1-47a0-afb2-923137573bce req-898bf2ab-4b77-4352-ac91-89f5f9d82bc1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:59 compute-0 nova_compute[257802]: 2025-10-02 13:08:59.197 2 DEBUG oslo_concurrency.lockutils [req-c81f3f62-3df1-47a0-afb2-923137573bce req-898bf2ab-4b77-4352-ac91-89f5f9d82bc1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:59 compute-0 nova_compute[257802]: 2025-10-02 13:08:59.198 2 DEBUG nova.compute.manager [req-c81f3f62-3df1-47a0-afb2-923137573bce req-898bf2ab-4b77-4352-ac91-89f5f9d82bc1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] No waiting events found dispatching network-vif-plugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:08:59 compute-0 nova_compute[257802]: 2025-10-02 13:08:59.198 2 WARNING nova.compute.manager [req-c81f3f62-3df1-47a0-afb2-923137573bce req-898bf2ab-4b77-4352-ac91-89f5f9d82bc1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Received unexpected event network-vif-plugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 for instance with vm_state resized and task_state resize_reverting.
Oct 02 13:08:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:59.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:59 compute-0 podman[400384]: 2025-10-02 13:08:59.371442033 +0000 UTC m=+0.046625089 container create 1bff3b17008525b71879a84a333e0b7e9c19bc49d3447d239dd400432539c5d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_blackburn, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:08:59 compute-0 systemd[1]: Started libpod-conmon-1bff3b17008525b71879a84a333e0b7e9c19bc49d3447d239dd400432539c5d7.scope.
Oct 02 13:08:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ba99a329d1695705342b175ba53b2460646147d8ca5ea5cdd236e512e70484d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ba99a329d1695705342b175ba53b2460646147d8ca5ea5cdd236e512e70484d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ba99a329d1695705342b175ba53b2460646147d8ca5ea5cdd236e512e70484d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ba99a329d1695705342b175ba53b2460646147d8ca5ea5cdd236e512e70484d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:08:59 compute-0 podman[400384]: 2025-10-02 13:08:59.432410229 +0000 UTC m=+0.107593305 container init 1bff3b17008525b71879a84a333e0b7e9c19bc49d3447d239dd400432539c5d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_blackburn, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 13:08:59 compute-0 podman[400384]: 2025-10-02 13:08:59.439875909 +0000 UTC m=+0.115058985 container start 1bff3b17008525b71879a84a333e0b7e9c19bc49d3447d239dd400432539c5d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct 02 13:08:59 compute-0 podman[400384]: 2025-10-02 13:08:59.350684112 +0000 UTC m=+0.025867188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:08:59 compute-0 podman[400384]: 2025-10-02 13:08:59.445320601 +0000 UTC m=+0.120503657 container attach 1bff3b17008525b71879a84a333e0b7e9c19bc49d3447d239dd400432539c5d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 13:08:59 compute-0 ceph-mon[73607]: pgmap v3317: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 439 KiB/s rd, 921 B/s wr, 54 op/s
Oct 02 13:08:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:08:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:59.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:00 compute-0 infallible_blackburn[400400]: {
Oct 02 13:09:00 compute-0 infallible_blackburn[400400]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 13:09:00 compute-0 infallible_blackburn[400400]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:09:00 compute-0 infallible_blackburn[400400]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:09:00 compute-0 infallible_blackburn[400400]:         "osd_id": 1,
Oct 02 13:09:00 compute-0 infallible_blackburn[400400]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:09:00 compute-0 infallible_blackburn[400400]:         "type": "bluestore"
Oct 02 13:09:00 compute-0 infallible_blackburn[400400]:     }
Oct 02 13:09:00 compute-0 infallible_blackburn[400400]: }
Oct 02 13:09:00 compute-0 systemd[1]: libpod-1bff3b17008525b71879a84a333e0b7e9c19bc49d3447d239dd400432539c5d7.scope: Deactivated successfully.
Oct 02 13:09:00 compute-0 podman[400421]: 2025-10-02 13:09:00.278721233 +0000 UTC m=+0.021735287 container died 1bff3b17008525b71879a84a333e0b7e9c19bc49d3447d239dd400432539c5d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:09:00 compute-0 nova_compute[257802]: 2025-10-02 13:09:00.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ba99a329d1695705342b175ba53b2460646147d8ca5ea5cdd236e512e70484d-merged.mount: Deactivated successfully.
Oct 02 13:09:00 compute-0 nova_compute[257802]: 2025-10-02 13:09:00.322 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:00 compute-0 podman[400421]: 2025-10-02 13:09:00.332293389 +0000 UTC m=+0.075307443 container remove 1bff3b17008525b71879a84a333e0b7e9c19bc49d3447d239dd400432539c5d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_blackburn, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:09:00 compute-0 systemd[1]: libpod-conmon-1bff3b17008525b71879a84a333e0b7e9c19bc49d3447d239dd400432539c5d7.scope: Deactivated successfully.
Oct 02 13:09:00 compute-0 sudo[400277]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:09:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:09:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:09:00 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:09:00 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 42ffdd59-c0d6-4d8e-bb82-341029c31a8f does not exist
Oct 02 13:09:00 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 20f273ea-2795-42c2-86c8-4692e9bcff7d does not exist
Oct 02 13:09:00 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 260f52e7-cb7a-4856-8303-730657639586 does not exist
Oct 02 13:09:00 compute-0 sudo[400435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:09:00 compute-0 sudo[400435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:00 compute-0 sudo[400435]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:00 compute-0 sudo[400460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:09:00 compute-0 sudo[400460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:00 compute-0 sudo[400460]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3318: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 110 op/s
Oct 02 13:09:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:09:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:01.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:09:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:09:01 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:09:01 compute-0 ceph-mon[73607]: pgmap v3318: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 110 op/s
Oct 02 13:09:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:01.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:02 compute-0 nova_compute[257802]: 2025-10-02 13:09:02.357 2 DEBUG nova.compute.manager [req-62354a59-0f21-4c13-93c7-c859853d0d98 req-4e5e5c00-c8c2-440c-972a-a16d77d45517 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Received event network-vif-unplugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:02 compute-0 nova_compute[257802]: 2025-10-02 13:09:02.357 2 DEBUG oslo_concurrency.lockutils [req-62354a59-0f21-4c13-93c7-c859853d0d98 req-4e5e5c00-c8c2-440c-972a-a16d77d45517 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:02 compute-0 nova_compute[257802]: 2025-10-02 13:09:02.357 2 DEBUG oslo_concurrency.lockutils [req-62354a59-0f21-4c13-93c7-c859853d0d98 req-4e5e5c00-c8c2-440c-972a-a16d77d45517 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:02 compute-0 nova_compute[257802]: 2025-10-02 13:09:02.358 2 DEBUG oslo_concurrency.lockutils [req-62354a59-0f21-4c13-93c7-c859853d0d98 req-4e5e5c00-c8c2-440c-972a-a16d77d45517 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:02 compute-0 nova_compute[257802]: 2025-10-02 13:09:02.358 2 DEBUG nova.compute.manager [req-62354a59-0f21-4c13-93c7-c859853d0d98 req-4e5e5c00-c8c2-440c-972a-a16d77d45517 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] No waiting events found dispatching network-vif-unplugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:09:02 compute-0 nova_compute[257802]: 2025-10-02 13:09:02.358 2 WARNING nova.compute.manager [req-62354a59-0f21-4c13-93c7-c859853d0d98 req-4e5e5c00-c8c2-440c-972a-a16d77d45517 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Received unexpected event network-vif-unplugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 for instance with vm_state resized and task_state resize_reverting.
Oct 02 13:09:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3319: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 110 op/s
Oct 02 13:09:03 compute-0 nova_compute[257802]: 2025-10-02 13:09:03.153 2 INFO nova.compute.manager [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Swapping old allocation on dict_keys(['a293a24c-b5ed-43d1-8783-f02da4f75ad4']) held by migration aecd633a-2b42-456e-a50c-d6475dc25816 for instance
Oct 02 13:09:03 compute-0 nova_compute[257802]: 2025-10-02 13:09:03.186 2 DEBUG nova.scheduler.client.report [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Overwriting current allocation {'allocations': {'5abd2871-a992-42ab-bb6a-594a92f77d4d': {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}, 'generation': 100}}, 'project_id': '08e102ae48244af2ab448a2e1ff757df', 'user_id': 'ffe4d737e4414fb3a3e358f8ca3f3e1e', 'consumer_generation': 1} on consumer 6df9bd3b-6218-4859-aba9-bfbedf2b8f18 move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018
Oct 02 13:09:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:03.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:03 compute-0 nova_compute[257802]: 2025-10-02 13:09:03.415 2 INFO nova.network.neutron [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Updating port 06105eee-1ccc-4976-9ef2-84b4765d9a79 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 02 13:09:03 compute-0 ceph-mon[73607]: pgmap v3319: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 110 op/s
Oct 02 13:09:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1154508469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:03.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:04 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:04.078 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=80, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=79) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:09:04 compute-0 nova_compute[257802]: 2025-10-02 13:09:04.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:04 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:04.079 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:09:04 compute-0 nova_compute[257802]: 2025-10-02 13:09:04.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:04 compute-0 nova_compute[257802]: 2025-10-02 13:09:04.324 2 DEBUG oslo_concurrency.lockutils [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "refresh_cache-6df9bd3b-6218-4859-aba9-bfbedf2b8f18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:09:04 compute-0 nova_compute[257802]: 2025-10-02 13:09:04.325 2 DEBUG oslo_concurrency.lockutils [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquired lock "refresh_cache-6df9bd3b-6218-4859-aba9-bfbedf2b8f18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:09:04 compute-0 nova_compute[257802]: 2025-10-02 13:09:04.325 2 DEBUG nova.network.neutron [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:09:04 compute-0 nova_compute[257802]: 2025-10-02 13:09:04.411 2 DEBUG nova.compute.manager [req-9fed01f1-2637-4802-8d64-028f4337e409 req-4e7ea46d-f01e-43e9-b7be-1cf982696e17 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Received event network-changed-06105eee-1ccc-4976-9ef2-84b4765d9a79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:04 compute-0 nova_compute[257802]: 2025-10-02 13:09:04.411 2 DEBUG nova.compute.manager [req-9fed01f1-2637-4802-8d64-028f4337e409 req-4e7ea46d-f01e-43e9-b7be-1cf982696e17 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Refreshing instance network info cache due to event network-changed-06105eee-1ccc-4976-9ef2-84b4765d9a79. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:09:04 compute-0 nova_compute[257802]: 2025-10-02 13:09:04.411 2 DEBUG oslo_concurrency.lockutils [req-9fed01f1-2637-4802-8d64-028f4337e409 req-4e7ea46d-f01e-43e9-b7be-1cf982696e17 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-6df9bd3b-6218-4859-aba9-bfbedf2b8f18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:09:04 compute-0 nova_compute[257802]: 2025-10-02 13:09:04.535 2 DEBUG nova.compute.manager [req-0997286d-ee4c-4d78-8c2b-bd1a512f0a6c req-2b41151a-334e-4267-87dc-54e65e477856 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Received event network-vif-plugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:04 compute-0 nova_compute[257802]: 2025-10-02 13:09:04.536 2 DEBUG oslo_concurrency.lockutils [req-0997286d-ee4c-4d78-8c2b-bd1a512f0a6c req-2b41151a-334e-4267-87dc-54e65e477856 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:04 compute-0 nova_compute[257802]: 2025-10-02 13:09:04.536 2 DEBUG oslo_concurrency.lockutils [req-0997286d-ee4c-4d78-8c2b-bd1a512f0a6c req-2b41151a-334e-4267-87dc-54e65e477856 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:04 compute-0 nova_compute[257802]: 2025-10-02 13:09:04.536 2 DEBUG oslo_concurrency.lockutils [req-0997286d-ee4c-4d78-8c2b-bd1a512f0a6c req-2b41151a-334e-4267-87dc-54e65e477856 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:04 compute-0 nova_compute[257802]: 2025-10-02 13:09:04.536 2 DEBUG nova.compute.manager [req-0997286d-ee4c-4d78-8c2b-bd1a512f0a6c req-2b41151a-334e-4267-87dc-54e65e477856 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] No waiting events found dispatching network-vif-plugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:09:04 compute-0 nova_compute[257802]: 2025-10-02 13:09:04.536 2 WARNING nova.compute.manager [req-0997286d-ee4c-4d78-8c2b-bd1a512f0a6c req-2b41151a-334e-4267-87dc-54e65e477856 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Received unexpected event network-vif-plugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 for instance with vm_state resized and task_state resize_reverting.
Oct 02 13:09:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3320: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 14 KiB/s wr, 167 op/s
Oct 02 13:09:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:05.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:05 compute-0 nova_compute[257802]: 2025-10-02 13:09:05.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:05 compute-0 nova_compute[257802]: 2025-10-02 13:09:05.566 2 DEBUG nova.network.neutron [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Updating instance_info_cache with network_info: [{"id": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "address": "fa:16:3e:3d:d7:ed", "network": {"id": "41354ccc-5b80-451f-9510-2c3d0788ecf7", "bridge": "br-int", "label": "tempest-network-smoke--166998389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06105eee-1c", "ovs_interfaceid": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:09:05 compute-0 nova_compute[257802]: 2025-10-02 13:09:05.587 2 DEBUG oslo_concurrency.lockutils [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Releasing lock "refresh_cache-6df9bd3b-6218-4859-aba9-bfbedf2b8f18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:09:05 compute-0 nova_compute[257802]: 2025-10-02 13:09:05.588 2 DEBUG nova.virt.libvirt.driver [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843
Oct 02 13:09:05 compute-0 nova_compute[257802]: 2025-10-02 13:09:05.617 2 DEBUG oslo_concurrency.lockutils [req-9fed01f1-2637-4802-8d64-028f4337e409 req-4e7ea46d-f01e-43e9-b7be-1cf982696e17 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-6df9bd3b-6218-4859-aba9-bfbedf2b8f18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:09:05 compute-0 nova_compute[257802]: 2025-10-02 13:09:05.617 2 DEBUG nova.network.neutron [req-9fed01f1-2637-4802-8d64-028f4337e409 req-4e7ea46d-f01e-43e9-b7be-1cf982696e17 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Refreshing network info cache for port 06105eee-1ccc-4976-9ef2-84b4765d9a79 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:09:05 compute-0 nova_compute[257802]: 2025-10-02 13:09:05.657 2 DEBUG nova.storage.rbd_utils [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rolling back rbd image(6df9bd3b-6218-4859-aba9-bfbedf2b8f18_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505
Oct 02 13:09:05 compute-0 ceph-mon[73607]: pgmap v3320: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 14 KiB/s wr, 167 op/s
Oct 02 13:09:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:05.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:05 compute-0 nova_compute[257802]: 2025-10-02 13:09:05.774 2 DEBUG nova.storage.rbd_utils [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] removing snapshot(nova-resize) on rbd image(6df9bd3b-6218-4859-aba9-bfbedf2b8f18_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 13:09:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:06 compute-0 sudo[400542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:09:06 compute-0 sudo[400542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:06 compute-0 sudo[400542]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:06 compute-0 sudo[400567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:09:06 compute-0 sudo[400567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:06 compute-0 sudo[400567]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3321: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 13 KiB/s wr, 160 op/s
Oct 02 13:09:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e407 do_prune osdmap full prune enabled
Oct 02 13:09:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e408 e408: 3 total, 3 up, 3 in
Oct 02 13:09:06 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e408: 3 total, 3 up, 3 in
Oct 02 13:09:06 compute-0 nova_compute[257802]: 2025-10-02 13:09:06.799 2 DEBUG nova.virt.libvirt.driver [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Start _get_guest_xml network_info=[{"id": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "address": "fa:16:3e:3d:d7:ed", "network": {"id": "41354ccc-5b80-451f-9510-2c3d0788ecf7", "bridge": "br-int", "label": "tempest-network-smoke--166998389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06105eee-1c", "ovs_interfaceid": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:09:06 compute-0 nova_compute[257802]: 2025-10-02 13:09:06.802 2 WARNING nova.virt.libvirt.driver [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:09:06 compute-0 nova_compute[257802]: 2025-10-02 13:09:06.815 2 DEBUG nova.virt.libvirt.host [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:09:06 compute-0 nova_compute[257802]: 2025-10-02 13:09:06.816 2 DEBUG nova.virt.libvirt.host [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:09:06 compute-0 nova_compute[257802]: 2025-10-02 13:09:06.820 2 DEBUG nova.virt.libvirt.host [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:09:06 compute-0 nova_compute[257802]: 2025-10-02 13:09:06.821 2 DEBUG nova.virt.libvirt.host [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:09:06 compute-0 nova_compute[257802]: 2025-10-02 13:09:06.822 2 DEBUG nova.virt.libvirt.driver [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:09:06 compute-0 nova_compute[257802]: 2025-10-02 13:09:06.822 2 DEBUG nova.virt.hardware [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:09:06 compute-0 nova_compute[257802]: 2025-10-02 13:09:06.822 2 DEBUG nova.virt.hardware [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:09:06 compute-0 nova_compute[257802]: 2025-10-02 13:09:06.822 2 DEBUG nova.virt.hardware [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:09:06 compute-0 nova_compute[257802]: 2025-10-02 13:09:06.823 2 DEBUG nova.virt.hardware [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:09:06 compute-0 nova_compute[257802]: 2025-10-02 13:09:06.823 2 DEBUG nova.virt.hardware [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:09:06 compute-0 nova_compute[257802]: 2025-10-02 13:09:06.823 2 DEBUG nova.virt.hardware [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:09:06 compute-0 nova_compute[257802]: 2025-10-02 13:09:06.823 2 DEBUG nova.virt.hardware [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:09:06 compute-0 nova_compute[257802]: 2025-10-02 13:09:06.823 2 DEBUG nova.virt.hardware [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:09:06 compute-0 nova_compute[257802]: 2025-10-02 13:09:06.823 2 DEBUG nova.virt.hardware [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:09:06 compute-0 nova_compute[257802]: 2025-10-02 13:09:06.824 2 DEBUG nova.virt.hardware [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:09:06 compute-0 nova_compute[257802]: 2025-10-02 13:09:06.824 2 DEBUG nova.virt.hardware [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:09:06 compute-0 nova_compute[257802]: 2025-10-02 13:09:06.824 2 DEBUG nova.objects.instance [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6df9bd3b-6218-4859-aba9-bfbedf2b8f18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:09:06 compute-0 nova_compute[257802]: 2025-10-02 13:09:06.842 2 DEBUG oslo_concurrency.processutils [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.014 2 DEBUG nova.network.neutron [req-9fed01f1-2637-4802-8d64-028f4337e409 req-4e7ea46d-f01e-43e9-b7be-1cf982696e17 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Updated VIF entry in instance network info cache for port 06105eee-1ccc-4976-9ef2-84b4765d9a79. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.015 2 DEBUG nova.network.neutron [req-9fed01f1-2637-4802-8d64-028f4337e409 req-4e7ea46d-f01e-43e9-b7be-1cf982696e17 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Updating instance_info_cache with network_info: [{"id": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "address": "fa:16:3e:3d:d7:ed", "network": {"id": "41354ccc-5b80-451f-9510-2c3d0788ecf7", "bridge": "br-int", "label": "tempest-network-smoke--166998389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06105eee-1c", "ovs_interfaceid": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.049 2 DEBUG oslo_concurrency.lockutils [req-9fed01f1-2637-4802-8d64-028f4337e409 req-4e7ea46d-f01e-43e9-b7be-1cf982696e17 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-6df9bd3b-6218-4859-aba9-bfbedf2b8f18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:09:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:07.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:09:07 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2572049980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.306 2 DEBUG oslo_concurrency.processutils [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.344 2 DEBUG oslo_concurrency.processutils [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:07 compute-0 ceph-mon[73607]: pgmap v3321: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 13 KiB/s wr, 160 op/s
Oct 02 13:09:07 compute-0 ceph-mon[73607]: osdmap e408: 3 total, 3 up, 3 in
Oct 02 13:09:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2572049980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:07.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:09:07 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2474279428' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.783 2 DEBUG oslo_concurrency.processutils [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.784 2 DEBUG nova.virt.libvirt.vif [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:08:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-510358772',display_name='tempest-TestNetworkAdvancedServerOps-server-510358772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-510358772',id=209,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNQLYFaK6VzMZ4VXSjIB28DDIVujtRqXaihQsQXdMB+5rY8DD1rQi9P2Y1PwrrLaViv1jTWp23s6ULfYTCXiXfqd1pOSru0GKVbLKUc8HJqBymXrreI8FngJNgN4inx/nA==',key_name='tempest-TestNetworkAdvancedServerOps-178932410',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:08:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-r5xxc3cq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:08:59Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=6df9bd3b-6218-4859-aba9-bfbedf2b8f18,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "address": "fa:16:3e:3d:d7:ed", "network": {"id": "41354ccc-5b80-451f-9510-2c3d0788ecf7", "bridge": "br-int", "label": "tempest-network-smoke--166998389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06105eee-1c", "ovs_interfaceid": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.785 2 DEBUG nova.network.os_vif_util [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "address": "fa:16:3e:3d:d7:ed", "network": {"id": "41354ccc-5b80-451f-9510-2c3d0788ecf7", "bridge": "br-int", "label": "tempest-network-smoke--166998389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06105eee-1c", "ovs_interfaceid": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.786 2 DEBUG nova.network.os_vif_util [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:d7:ed,bridge_name='br-int',has_traffic_filtering=True,id=06105eee-1ccc-4976-9ef2-84b4765d9a79,network=Network(41354ccc-5b80-451f-9510-2c3d0788ecf7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06105eee-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.788 2 DEBUG nova.virt.libvirt.driver [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:09:07 compute-0 nova_compute[257802]:   <uuid>6df9bd3b-6218-4859-aba9-bfbedf2b8f18</uuid>
Oct 02 13:09:07 compute-0 nova_compute[257802]:   <name>instance-000000d1</name>
Oct 02 13:09:07 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 13:09:07 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 13:09:07 compute-0 nova_compute[257802]:   <metadata>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:09:07 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-510358772</nova:name>
Oct 02 13:09:07 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 13:09:06</nova:creationTime>
Oct 02 13:09:07 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 13:09:07 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 13:09:07 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 13:09:07 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 13:09:07 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:09:07 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:09:07 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 13:09:07 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 13:09:07 compute-0 nova_compute[257802]:         <nova:user uuid="ffe4d737e4414fb3a3e358f8ca3f3e1e">tempest-TestNetworkAdvancedServerOps-1527846432-project-member</nova:user>
Oct 02 13:09:07 compute-0 nova_compute[257802]:         <nova:project uuid="08e102ae48244af2ab448a2e1ff757df">tempest-TestNetworkAdvancedServerOps-1527846432</nova:project>
Oct 02 13:09:07 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 13:09:07 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 13:09:07 compute-0 nova_compute[257802]:         <nova:port uuid="06105eee-1ccc-4976-9ef2-84b4765d9a79">
Oct 02 13:09:07 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 13:09:07 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 13:09:07 compute-0 nova_compute[257802]:   </metadata>
Oct 02 13:09:07 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <system>
Oct 02 13:09:07 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:09:07 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:09:07 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:09:07 compute-0 nova_compute[257802]:       <entry name="serial">6df9bd3b-6218-4859-aba9-bfbedf2b8f18</entry>
Oct 02 13:09:07 compute-0 nova_compute[257802]:       <entry name="uuid">6df9bd3b-6218-4859-aba9-bfbedf2b8f18</entry>
Oct 02 13:09:07 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     </system>
Oct 02 13:09:07 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 13:09:07 compute-0 nova_compute[257802]:   <os>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:   </os>
Oct 02 13:09:07 compute-0 nova_compute[257802]:   <features>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <apic/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:   </features>
Oct 02 13:09:07 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:   </clock>
Oct 02 13:09:07 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:   </cpu>
Oct 02 13:09:07 compute-0 nova_compute[257802]:   <devices>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 13:09:07 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/6df9bd3b-6218-4859-aba9-bfbedf2b8f18_disk">
Oct 02 13:09:07 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:       </source>
Oct 02 13:09:07 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 13:09:07 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:       </auth>
Oct 02 13:09:07 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     </disk>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 13:09:07 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/6df9bd3b-6218-4859-aba9-bfbedf2b8f18_disk.config">
Oct 02 13:09:07 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:       </source>
Oct 02 13:09:07 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 13:09:07 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:       </auth>
Oct 02 13:09:07 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     </disk>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 13:09:07 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:3d:d7:ed"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:       <target dev="tap06105eee-1c"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     </interface>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 13:09:07 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/6df9bd3b-6218-4859-aba9-bfbedf2b8f18/console.log" append="off"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     </serial>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <video>
Oct 02 13:09:07 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     </video>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <input type="keyboard" bus="usb"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 13:09:07 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     </rng>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 13:09:07 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 13:09:07 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 13:09:07 compute-0 nova_compute[257802]:   </devices>
Oct 02 13:09:07 compute-0 nova_compute[257802]: </domain>
Oct 02 13:09:07 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.789 2 DEBUG nova.compute.manager [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Preparing to wait for external event network-vif-plugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.790 2 DEBUG oslo_concurrency.lockutils [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.790 2 DEBUG oslo_concurrency.lockutils [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.790 2 DEBUG oslo_concurrency.lockutils [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.791 2 DEBUG nova.virt.libvirt.vif [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:08:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-510358772',display_name='tempest-TestNetworkAdvancedServerOps-server-510358772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-510358772',id=209,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNQLYFaK6VzMZ4VXSjIB28DDIVujtRqXaihQsQXdMB+5rY8DD1rQi9P2Y1PwrrLaViv1jTWp23s6ULfYTCXiXfqd1pOSru0GKVbLKUc8HJqBymXrreI8FngJNgN4inx/nA==',key_name='tempest-TestNetworkAdvancedServerOps-178932410',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:08:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-r5xxc3cq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:08:59Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=6df9bd3b-6218-4859-aba9-bfbedf2b8f18,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "address": "fa:16:3e:3d:d7:ed", "network": {"id": "41354ccc-5b80-451f-9510-2c3d0788ecf7", "bridge": "br-int", "label": "tempest-network-smoke--166998389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06105eee-1c", "ovs_interfaceid": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.791 2 DEBUG nova.network.os_vif_util [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "address": "fa:16:3e:3d:d7:ed", "network": {"id": "41354ccc-5b80-451f-9510-2c3d0788ecf7", "bridge": "br-int", "label": "tempest-network-smoke--166998389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06105eee-1c", "ovs_interfaceid": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.792 2 DEBUG nova.network.os_vif_util [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:d7:ed,bridge_name='br-int',has_traffic_filtering=True,id=06105eee-1ccc-4976-9ef2-84b4765d9a79,network=Network(41354ccc-5b80-451f-9510-2c3d0788ecf7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06105eee-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.792 2 DEBUG os_vif [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:d7:ed,bridge_name='br-int',has_traffic_filtering=True,id=06105eee-1ccc-4976-9ef2-84b4765d9a79,network=Network(41354ccc-5b80-451f-9510-2c3d0788ecf7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06105eee-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.793 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.794 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.797 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap06105eee-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.797 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap06105eee-1c, col_values=(('external_ids', {'iface-id': '06105eee-1ccc-4976-9ef2-84b4765d9a79', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:d7:ed', 'vm-uuid': '6df9bd3b-6218-4859-aba9-bfbedf2b8f18'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:07 compute-0 NetworkManager[44987]: <info>  [1759410547.7997] manager: (tap06105eee-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/428)
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.806 2 INFO os_vif [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:d7:ed,bridge_name='br-int',has_traffic_filtering=True,id=06105eee-1ccc-4976-9ef2-84b4765d9a79,network=Network(41354ccc-5b80-451f-9510-2c3d0788ecf7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06105eee-1c')
Oct 02 13:09:07 compute-0 kernel: tap06105eee-1c: entered promiscuous mode
Oct 02 13:09:07 compute-0 ovn_controller[148183]: 2025-10-02T13:09:07Z|00951|binding|INFO|Claiming lport 06105eee-1ccc-4976-9ef2-84b4765d9a79 for this chassis.
Oct 02 13:09:07 compute-0 NetworkManager[44987]: <info>  [1759410547.8821] manager: (tap06105eee-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/429)
Oct 02 13:09:07 compute-0 ovn_controller[148183]: 2025-10-02T13:09:07Z|00952|binding|INFO|06105eee-1ccc-4976-9ef2-84b4765d9a79: Claiming fa:16:3e:3d:d7:ed 10.100.0.11
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:07.896 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:d7:ed 10.100.0.11'], port_security=['fa:16:3e:3d:d7:ed 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '6df9bd3b-6218-4859-aba9-bfbedf2b8f18', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41354ccc-5b80-451f-9510-2c3d0788ecf7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08e102ae48244af2ab448a2e1ff757df', 'neutron:revision_number': '10', 'neutron:security_group_ids': '337a5b6a-7697-4b02-8d14-65af2374695f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.225'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f431a9f-5f6a-4914-ae7c-c00e97c25630, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=06105eee-1ccc-4976-9ef2-84b4765d9a79) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:09:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:07.897 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 06105eee-1ccc-4976-9ef2-84b4765d9a79 in datapath 41354ccc-5b80-451f-9510-2c3d0788ecf7 bound to our chassis
Oct 02 13:09:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:07.898 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41354ccc-5b80-451f-9510-2c3d0788ecf7
Oct 02 13:09:07 compute-0 ovn_controller[148183]: 2025-10-02T13:09:07Z|00953|binding|INFO|Setting lport 06105eee-1ccc-4976-9ef2-84b4765d9a79 ovn-installed in OVS
Oct 02 13:09:07 compute-0 ovn_controller[148183]: 2025-10-02T13:09:07Z|00954|binding|INFO|Setting lport 06105eee-1ccc-4976-9ef2-84b4765d9a79 up in Southbound
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:07.911 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7ea4cfa1-7382-4e6f-b237-8e9e6a2ffd81]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:07.911 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap41354ccc-51 in ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:09:07 compute-0 nova_compute[257802]: 2025-10-02 13:09:07.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:07.914 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap41354ccc-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:09:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:07.915 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c98e175e-01f6-4dc4-b951-6ed113739830]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:07.915 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c4eb1e8a-1bd7-45b7-858a-30e3381d4788]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:07 compute-0 systemd-udevd[400668]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:09:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:07.928 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[82a14011-c399-43e4-96f6-b8043fe2a246]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:07 compute-0 NetworkManager[44987]: <info>  [1759410547.9292] device (tap06105eee-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:09:07 compute-0 NetworkManager[44987]: <info>  [1759410547.9302] device (tap06105eee-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:09:07 compute-0 systemd-machined[211836]: New machine qemu-102-instance-000000d1.
Oct 02 13:09:07 compute-0 systemd[1]: Started Virtual Machine qemu-102-instance-000000d1.
Oct 02 13:09:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:07.945 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4846f32c-282b-4c0b-a579-8da1cbebcdce]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:07.976 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[2dad29ab-a57d-4f71-9bac-d64061bb8e7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:07.980 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b8918078-35b8-4a53-9ce5-bad0e478e63c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:07 compute-0 systemd-udevd[400673]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:09:07 compute-0 NetworkManager[44987]: <info>  [1759410547.9831] manager: (tap41354ccc-50): new Veth device (/org/freedesktop/NetworkManager/Devices/430)
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:08.010 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[5cf8975c-ea7b-4a41-add9-55f5f99d8844]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:08.012 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[0020725a-81c0-4a64-bec5-49d95f97eff1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:08 compute-0 NetworkManager[44987]: <info>  [1759410548.0336] device (tap41354ccc-50): carrier: link connected
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:08.039 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[e3fd160e-47c3-403a-81e7-8191651d15d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:08.057 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[088fb7fa-bbde-4647-bcb5-e9cb7174b6c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41354ccc-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:ea:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 285], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 861565, 'reachable_time': 28889, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400702, 'error': None, 'target': 'ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:08.072 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[26041b6d-b17d-4e38-9cb6-c3854c7590b9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0e:eabc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 861565, 'tstamp': 861565}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400703, 'error': None, 'target': 'ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:08.089 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c2a0f9b4-28d5-4e4e-9e79-b1980babfa95]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41354ccc-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:ea:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 285], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 861565, 'reachable_time': 28889, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 400704, 'error': None, 'target': 'ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:08.116 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7ad6f429-7111-46ac-89a4-40ef2f0ebd15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:08.170 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9a2d71b3-a096-4722-85fe-6faa56b743a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:08.172 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41354ccc-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:08.172 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:08.172 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41354ccc-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:08 compute-0 NetworkManager[44987]: <info>  [1759410548.1848] manager: (tap41354ccc-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/431)
Oct 02 13:09:08 compute-0 nova_compute[257802]: 2025-10-02 13:09:08.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:08 compute-0 kernel: tap41354ccc-50: entered promiscuous mode
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:08.187 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41354ccc-50, col_values=(('external_ids', {'iface-id': '7cea9858-fead-45b4-8830-8edfb5209d69'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:08 compute-0 nova_compute[257802]: 2025-10-02 13:09:08.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:08 compute-0 ovn_controller[148183]: 2025-10-02T13:09:08Z|00955|binding|INFO|Releasing lport 7cea9858-fead-45b4-8830-8edfb5209d69 from this chassis (sb_readonly=0)
Oct 02 13:09:08 compute-0 nova_compute[257802]: 2025-10-02 13:09:08.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:08 compute-0 nova_compute[257802]: 2025-10-02 13:09:08.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:08.202 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/41354ccc-5b80-451f-9510-2c3d0788ecf7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/41354ccc-5b80-451f-9510-2c3d0788ecf7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:08.203 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b2b3c4a8-a1a9-4979-b5b0-a9facd450e1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:08.204 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]: global
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-41354ccc-5b80-451f-9510-2c3d0788ecf7
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/41354ccc-5b80-451f-9510-2c3d0788ecf7.pid.haproxy
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 41354ccc-5b80-451f-9510-2c3d0788ecf7
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:09:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:08.205 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7', 'env', 'PROCESS_TAG=haproxy-41354ccc-5b80-451f-9510-2c3d0788ecf7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/41354ccc-5b80-451f-9510-2c3d0788ecf7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:09:08 compute-0 podman[400736]: 2025-10-02 13:09:08.522779978 +0000 UTC m=+0.043635917 container create 8ccaa47571f9187482b6e0d5b8648919b00868eba664ef2b57b5c2d910f7c60a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 02 13:09:08 compute-0 systemd[1]: Started libpod-conmon-8ccaa47571f9187482b6e0d5b8648919b00868eba664ef2b57b5c2d910f7c60a.scope.
Oct 02 13:09:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:09:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6d793a3e36d600403b96e0bd2756c0969525386e1f59168242ce9e23c2c99fb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:09:08 compute-0 podman[400736]: 2025-10-02 13:09:08.591692866 +0000 UTC m=+0.112548805 container init 8ccaa47571f9187482b6e0d5b8648919b00868eba664ef2b57b5c2d910f7c60a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 13:09:08 compute-0 podman[400736]: 2025-10-02 13:09:08.499191878 +0000 UTC m=+0.020047837 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:09:08 compute-0 podman[400736]: 2025-10-02 13:09:08.597982988 +0000 UTC m=+0.118838927 container start 8ccaa47571f9187482b6e0d5b8648919b00868eba664ef2b57b5c2d910f7c60a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:09:08 compute-0 neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7[400769]: [NOTICE]   (400792) : New worker (400798) forked
Oct 02 13:09:08 compute-0 neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7[400769]: [NOTICE]   (400792) : Loading success.
Oct 02 13:09:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3323: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 17 KiB/s wr, 174 op/s
Oct 02 13:09:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2474279428' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:08 compute-0 nova_compute[257802]: 2025-10-02 13:09:08.763 2 DEBUG nova.compute.manager [req-54d59cdb-8d36-4349-ade5-5b5da97a6cc9 req-30a9e0c5-1043-4161-a64d-231256150091 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Received event network-vif-plugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:08 compute-0 nova_compute[257802]: 2025-10-02 13:09:08.764 2 DEBUG oslo_concurrency.lockutils [req-54d59cdb-8d36-4349-ade5-5b5da97a6cc9 req-30a9e0c5-1043-4161-a64d-231256150091 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:08 compute-0 nova_compute[257802]: 2025-10-02 13:09:08.764 2 DEBUG oslo_concurrency.lockutils [req-54d59cdb-8d36-4349-ade5-5b5da97a6cc9 req-30a9e0c5-1043-4161-a64d-231256150091 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:08 compute-0 nova_compute[257802]: 2025-10-02 13:09:08.765 2 DEBUG oslo_concurrency.lockutils [req-54d59cdb-8d36-4349-ade5-5b5da97a6cc9 req-30a9e0c5-1043-4161-a64d-231256150091 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:08 compute-0 nova_compute[257802]: 2025-10-02 13:09:08.765 2 DEBUG nova.compute.manager [req-54d59cdb-8d36-4349-ade5-5b5da97a6cc9 req-30a9e0c5-1043-4161-a64d-231256150091 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Processing event network-vif-plugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:09:08 compute-0 nova_compute[257802]: 2025-10-02 13:09:08.765 2 DEBUG nova.compute.manager [req-54d59cdb-8d36-4349-ade5-5b5da97a6cc9 req-30a9e0c5-1043-4161-a64d-231256150091 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Received event network-vif-plugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:08 compute-0 nova_compute[257802]: 2025-10-02 13:09:08.766 2 DEBUG oslo_concurrency.lockutils [req-54d59cdb-8d36-4349-ade5-5b5da97a6cc9 req-30a9e0c5-1043-4161-a64d-231256150091 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:08 compute-0 nova_compute[257802]: 2025-10-02 13:09:08.766 2 DEBUG oslo_concurrency.lockutils [req-54d59cdb-8d36-4349-ade5-5b5da97a6cc9 req-30a9e0c5-1043-4161-a64d-231256150091 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:08 compute-0 nova_compute[257802]: 2025-10-02 13:09:08.766 2 DEBUG oslo_concurrency.lockutils [req-54d59cdb-8d36-4349-ade5-5b5da97a6cc9 req-30a9e0c5-1043-4161-a64d-231256150091 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:08 compute-0 nova_compute[257802]: 2025-10-02 13:09:08.767 2 DEBUG nova.compute.manager [req-54d59cdb-8d36-4349-ade5-5b5da97a6cc9 req-30a9e0c5-1043-4161-a64d-231256150091 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] No waiting events found dispatching network-vif-plugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:09:08 compute-0 nova_compute[257802]: 2025-10-02 13:09:08.767 2 WARNING nova.compute.manager [req-54d59cdb-8d36-4349-ade5-5b5da97a6cc9 req-30a9e0c5-1043-4161-a64d-231256150091 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Received unexpected event network-vif-plugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 for instance with vm_state resized and task_state resize_reverting.
Oct 02 13:09:09 compute-0 nova_compute[257802]: 2025-10-02 13:09:09.047 2 DEBUG nova.compute.manager [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:09:09 compute-0 nova_compute[257802]: 2025-10-02 13:09:09.048 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759410549.0466695, 6df9bd3b-6218-4859-aba9-bfbedf2b8f18 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:09:09 compute-0 nova_compute[257802]: 2025-10-02 13:09:09.048 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] VM Started (Lifecycle Event)
Oct 02 13:09:09 compute-0 nova_compute[257802]: 2025-10-02 13:09:09.054 2 INFO nova.virt.libvirt.driver [-] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Instance running successfully.
Oct 02 13:09:09 compute-0 nova_compute[257802]: 2025-10-02 13:09:09.054 2 DEBUG nova.virt.libvirt.driver [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887
Oct 02 13:09:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:09.081 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '80'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:09 compute-0 nova_compute[257802]: 2025-10-02 13:09:09.110 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:09:09 compute-0 nova_compute[257802]: 2025-10-02 13:09:09.113 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:09:09 compute-0 nova_compute[257802]: 2025-10-02 13:09:09.160 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Oct 02 13:09:09 compute-0 nova_compute[257802]: 2025-10-02 13:09:09.160 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759410549.0475302, 6df9bd3b-6218-4859-aba9-bfbedf2b8f18 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:09:09 compute-0 nova_compute[257802]: 2025-10-02 13:09:09.161 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] VM Paused (Lifecycle Event)
Oct 02 13:09:09 compute-0 nova_compute[257802]: 2025-10-02 13:09:09.172 2 INFO nova.compute.manager [None req-769dc677-3637-4901-b833-47a3b06deff3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Updating instance to original state: 'active'
Oct 02 13:09:09 compute-0 nova_compute[257802]: 2025-10-02 13:09:09.201 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:09:09 compute-0 nova_compute[257802]: 2025-10-02 13:09:09.205 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759410549.0516791, 6df9bd3b-6218-4859-aba9-bfbedf2b8f18 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:09:09 compute-0 nova_compute[257802]: 2025-10-02 13:09:09.205 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] VM Resumed (Lifecycle Event)
Oct 02 13:09:09 compute-0 nova_compute[257802]: 2025-10-02 13:09:09.232 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:09:09 compute-0 nova_compute[257802]: 2025-10-02 13:09:09.235 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:09:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:09.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:09.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:09 compute-0 ceph-mon[73607]: pgmap v3323: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 17 KiB/s wr, 174 op/s
Oct 02 13:09:10 compute-0 nova_compute[257802]: 2025-10-02 13:09:10.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3324: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.1 KiB/s wr, 140 op/s
Oct 02 13:09:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e408 do_prune osdmap full prune enabled
Oct 02 13:09:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e409 e409: 3 total, 3 up, 3 in
Oct 02 13:09:11 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e409: 3 total, 3 up, 3 in
Oct 02 13:09:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:09:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:11.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:09:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:09:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:11.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:09:11 compute-0 ceph-mon[73607]: pgmap v3324: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.1 KiB/s wr, 140 op/s
Oct 02 13:09:11 compute-0 ceph-mon[73607]: osdmap e409: 3 total, 3 up, 3 in
Oct 02 13:09:11 compute-0 podman[400810]: 2025-10-02 13:09:11.9193618 +0000 UTC m=+0.054308795 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 13:09:11 compute-0 podman[400811]: 2025-10-02 13:09:11.923553122 +0000 UTC m=+0.058498097 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 13:09:11 compute-0 podman[400812]: 2025-10-02 13:09:11.941901876 +0000 UTC m=+0.071701816 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:09:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3326: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 KiB/s wr, 83 op/s
Oct 02 13:09:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:09:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:09:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:09:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:09:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:09:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:09:12 compute-0 nova_compute[257802]: 2025-10-02 13:09:12.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:13.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:13.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:13 compute-0 ceph-mon[73607]: pgmap v3326: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 KiB/s wr, 83 op/s
Oct 02 13:09:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3327: 305 pgs: 305 active+clean; 368 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 520 KiB/s wr, 148 op/s
Oct 02 13:09:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:09:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:15.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:09:15 compute-0 nova_compute[257802]: 2025-10-02 13:09:15.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:15.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:15 compute-0 ceph-mon[73607]: pgmap v3327: 305 pgs: 305 active+clean; 368 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 520 KiB/s wr, 148 op/s
Oct 02 13:09:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3328: 305 pgs: 305 active+clean; 373 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 605 KiB/s wr, 162 op/s
Oct 02 13:09:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:17.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:17.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:17 compute-0 nova_compute[257802]: 2025-10-02 13:09:17.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:17 compute-0 ceph-mon[73607]: pgmap v3328: 305 pgs: 305 active+clean; 373 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 605 KiB/s wr, 162 op/s
Oct 02 13:09:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3329: 305 pgs: 305 active+clean; 373 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 598 KiB/s wr, 150 op/s
Oct 02 13:09:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:19.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:19.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:20 compute-0 ceph-mon[73607]: pgmap v3329: 305 pgs: 305 active+clean; 373 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 598 KiB/s wr, 150 op/s
Oct 02 13:09:20 compute-0 nova_compute[257802]: 2025-10-02 13:09:20.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3330: 305 pgs: 305 active+clean; 376 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 642 KiB/s wr, 110 op/s
Oct 02 13:09:20 compute-0 podman[400870]: 2025-10-02 13:09:20.970008079 +0000 UTC m=+0.108981556 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 02 13:09:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:21 compute-0 ceph-mon[73607]: pgmap v3330: 305 pgs: 305 active+clean; 376 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 642 KiB/s wr, 110 op/s
Oct 02 13:09:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:21.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:09:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:21.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:09:21 compute-0 ovn_controller[148183]: 2025-10-02T13:09:21Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:d7:ed 10.100.0.11
Oct 02 13:09:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3331: 305 pgs: 305 active+clean; 376 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 551 KiB/s wr, 95 op/s
Oct 02 13:09:22 compute-0 nova_compute[257802]: 2025-10-02 13:09:22.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:09:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:23.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:09:23 compute-0 ceph-mon[73607]: pgmap v3331: 305 pgs: 305 active+clean; 376 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 551 KiB/s wr, 95 op/s
Oct 02 13:09:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:09:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:23.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:09:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3332: 305 pgs: 305 active+clean; 376 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 556 KiB/s wr, 129 op/s
Oct 02 13:09:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:25.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:25 compute-0 nova_compute[257802]: 2025-10-02 13:09:25.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:09:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:25.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:09:25 compute-0 ceph-mon[73607]: pgmap v3332: 305 pgs: 305 active+clean; 376 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 556 KiB/s wr, 129 op/s
Oct 02 13:09:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1313784074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:26 compute-0 sudo[400899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:09:26 compute-0 sudo[400899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:26 compute-0 sudo[400899]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:26 compute-0 sudo[400924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:09:26 compute-0 sudo[400924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:26 compute-0 sudo[400924]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3333: 305 pgs: 305 active+clean; 376 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 211 KiB/s wr, 84 op/s
Oct 02 13:09:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2710832825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:26 compute-0 nova_compute[257802]: 2025-10-02 13:09:26.950 2 INFO nova.compute.manager [None req-9492cc3f-0431-43bd-99c8-c2bbc474471a ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Get console output
Oct 02 13:09:26 compute-0 nova_compute[257802]: 2025-10-02 13:09:26.956 20794 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 13:09:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:26.992 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:26.992 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:26.993 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:27 compute-0 nova_compute[257802]: 2025-10-02 13:09:27.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:27 compute-0 nova_compute[257802]: 2025-10-02 13:09:27.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 13:09:27 compute-0 nova_compute[257802]: 2025-10-02 13:09:27.120 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 13:09:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:27.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:27.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:27 compute-0 nova_compute[257802]: 2025-10-02 13:09:27.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:27 compute-0 ceph-mon[73607]: pgmap v3333: 305 pgs: 305 active+clean; 376 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 211 KiB/s wr, 84 op/s
Oct 02 13:09:28 compute-0 nova_compute[257802]: 2025-10-02 13:09:28.120 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:28 compute-0 nova_compute[257802]: 2025-10-02 13:09:28.120 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:09:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3334: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 620 KiB/s rd, 64 KiB/s wr, 50 op/s
Oct 02 13:09:28 compute-0 nova_compute[257802]: 2025-10-02 13:09:28.996 2 DEBUG nova.compute.manager [req-bd7bfa21-38a7-4e8c-829a-834b57a44b8b req-cf013543-0fb0-498e-b321-23487c1b4d46 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Received event network-changed-06105eee-1ccc-4976-9ef2-84b4765d9a79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:28 compute-0 nova_compute[257802]: 2025-10-02 13:09:28.997 2 DEBUG nova.compute.manager [req-bd7bfa21-38a7-4e8c-829a-834b57a44b8b req-cf013543-0fb0-498e-b321-23487c1b4d46 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Refreshing instance network info cache due to event network-changed-06105eee-1ccc-4976-9ef2-84b4765d9a79. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:09:28 compute-0 nova_compute[257802]: 2025-10-02 13:09:28.997 2 DEBUG oslo_concurrency.lockutils [req-bd7bfa21-38a7-4e8c-829a-834b57a44b8b req-cf013543-0fb0-498e-b321-23487c1b4d46 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-6df9bd3b-6218-4859-aba9-bfbedf2b8f18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:09:28 compute-0 nova_compute[257802]: 2025-10-02 13:09:28.998 2 DEBUG oslo_concurrency.lockutils [req-bd7bfa21-38a7-4e8c-829a-834b57a44b8b req-cf013543-0fb0-498e-b321-23487c1b4d46 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-6df9bd3b-6218-4859-aba9-bfbedf2b8f18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:09:28 compute-0 nova_compute[257802]: 2025-10-02 13:09:28.998 2 DEBUG nova.network.neutron [req-bd7bfa21-38a7-4e8c-829a-834b57a44b8b req-cf013543-0fb0-498e-b321-23487c1b4d46 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Refreshing network info cache for port 06105eee-1ccc-4976-9ef2-84b4765d9a79 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.075 2 DEBUG oslo_concurrency.lockutils [None req-e3d67d8e-d6b8-41f2-85e8-c6510358dc8c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.075 2 DEBUG oslo_concurrency.lockutils [None req-e3d67d8e-d6b8-41f2-85e8-c6510358dc8c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.075 2 DEBUG oslo_concurrency.lockutils [None req-e3d67d8e-d6b8-41f2-85e8-c6510358dc8c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.075 2 DEBUG oslo_concurrency.lockutils [None req-e3d67d8e-d6b8-41f2-85e8-c6510358dc8c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.076 2 DEBUG oslo_concurrency.lockutils [None req-e3d67d8e-d6b8-41f2-85e8-c6510358dc8c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.077 2 INFO nova.compute.manager [None req-e3d67d8e-d6b8-41f2-85e8-c6510358dc8c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Terminating instance
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.077 2 DEBUG nova.compute.manager [None req-e3d67d8e-d6b8-41f2-85e8-c6510358dc8c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:09:29 compute-0 kernel: tap06105eee-1c (unregistering): left promiscuous mode
Oct 02 13:09:29 compute-0 NetworkManager[44987]: <info>  [1759410569.1345] device (tap06105eee-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:29 compute-0 ovn_controller[148183]: 2025-10-02T13:09:29Z|00956|binding|INFO|Releasing lport 06105eee-1ccc-4976-9ef2-84b4765d9a79 from this chassis (sb_readonly=0)
Oct 02 13:09:29 compute-0 ovn_controller[148183]: 2025-10-02T13:09:29Z|00957|binding|INFO|Setting lport 06105eee-1ccc-4976-9ef2-84b4765d9a79 down in Southbound
Oct 02 13:09:29 compute-0 ovn_controller[148183]: 2025-10-02T13:09:29Z|00958|binding|INFO|Removing iface tap06105eee-1c ovn-installed in OVS
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:29.195 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:d7:ed 10.100.0.11'], port_security=['fa:16:3e:3d:d7:ed 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '6df9bd3b-6218-4859-aba9-bfbedf2b8f18', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41354ccc-5b80-451f-9510-2c3d0788ecf7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08e102ae48244af2ab448a2e1ff757df', 'neutron:revision_number': '12', 'neutron:security_group_ids': '337a5b6a-7697-4b02-8d14-65af2374695f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f431a9f-5f6a-4914-ae7c-c00e97c25630, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=06105eee-1ccc-4976-9ef2-84b4765d9a79) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:09:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:29.196 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 06105eee-1ccc-4976-9ef2-84b4765d9a79 in datapath 41354ccc-5b80-451f-9510-2c3d0788ecf7 unbound from our chassis
Oct 02 13:09:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:29.197 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 41354ccc-5b80-451f-9510-2c3d0788ecf7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:09:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:29.198 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e08b5768-18c4-43f2-8568-b91388fbefb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:29.198 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7 namespace which is not needed anymore
Oct 02 13:09:29 compute-0 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d000000d1.scope: Deactivated successfully.
Oct 02 13:09:29 compute-0 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d000000d1.scope: Consumed 14.680s CPU time.
Oct 02 13:09:29 compute-0 systemd-machined[211836]: Machine qemu-102-instance-000000d1 terminated.
Oct 02 13:09:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:29.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.325 2 INFO nova.virt.libvirt.driver [-] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Instance destroyed successfully.
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.326 2 DEBUG nova.objects.instance [None req-e3d67d8e-d6b8-41f2-85e8-c6510358dc8c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'resources' on Instance uuid 6df9bd3b-6218-4859-aba9-bfbedf2b8f18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:09:29 compute-0 neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7[400769]: [NOTICE]   (400792) : haproxy version is 2.8.14-c23fe91
Oct 02 13:09:29 compute-0 neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7[400769]: [NOTICE]   (400792) : path to executable is /usr/sbin/haproxy
Oct 02 13:09:29 compute-0 neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7[400769]: [WARNING]  (400792) : Exiting Master process...
Oct 02 13:09:29 compute-0 neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7[400769]: [WARNING]  (400792) : Exiting Master process...
Oct 02 13:09:29 compute-0 neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7[400769]: [ALERT]    (400792) : Current worker (400798) exited with code 143 (Terminated)
Oct 02 13:09:29 compute-0 neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7[400769]: [WARNING]  (400792) : All workers exited. Exiting... (0)
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.348 2 DEBUG nova.virt.libvirt.vif [None req-e3d67d8e-d6b8-41f2-85e8-c6510358dc8c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:08:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-510358772',display_name='tempest-TestNetworkAdvancedServerOps-server-510358772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-510358772',id=209,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNQLYFaK6VzMZ4VXSjIB28DDIVujtRqXaihQsQXdMB+5rY8DD1rQi9P2Y1PwrrLaViv1jTWp23s6ULfYTCXiXfqd1pOSru0GKVbLKUc8HJqBymXrreI8FngJNgN4inx/nA==',key_name='tempest-TestNetworkAdvancedServerOps-178932410',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:09:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-r5xxc3cq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:09:09Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=6df9bd3b-6218-4859-aba9-bfbedf2b8f18,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "address": "fa:16:3e:3d:d7:ed", "network": {"id": "41354ccc-5b80-451f-9510-2c3d0788ecf7", "bridge": "br-int", "label": "tempest-network-smoke--166998389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06105eee-1c", "ovs_interfaceid": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.349 2 DEBUG nova.network.os_vif_util [None req-e3d67d8e-d6b8-41f2-85e8-c6510358dc8c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "address": "fa:16:3e:3d:d7:ed", "network": {"id": "41354ccc-5b80-451f-9510-2c3d0788ecf7", "bridge": "br-int", "label": "tempest-network-smoke--166998389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06105eee-1c", "ovs_interfaceid": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:09:29 compute-0 systemd[1]: libpod-8ccaa47571f9187482b6e0d5b8648919b00868eba664ef2b57b5c2d910f7c60a.scope: Deactivated successfully.
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.351 2 DEBUG nova.network.os_vif_util [None req-e3d67d8e-d6b8-41f2-85e8-c6510358dc8c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:d7:ed,bridge_name='br-int',has_traffic_filtering=True,id=06105eee-1ccc-4976-9ef2-84b4765d9a79,network=Network(41354ccc-5b80-451f-9510-2c3d0788ecf7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06105eee-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.353 2 DEBUG os_vif [None req-e3d67d8e-d6b8-41f2-85e8-c6510358dc8c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:d7:ed,bridge_name='br-int',has_traffic_filtering=True,id=06105eee-1ccc-4976-9ef2-84b4765d9a79,network=Network(41354ccc-5b80-451f-9510-2c3d0788ecf7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06105eee-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:09:29 compute-0 podman[400975]: 2025-10-02 13:09:29.356151533 +0000 UTC m=+0.067628987 container died 8ccaa47571f9187482b6e0d5b8648919b00868eba664ef2b57b5c2d910f7c60a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.356 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06105eee-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.363 2 INFO os_vif [None req-e3d67d8e-d6b8-41f2-85e8-c6510358dc8c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:d7:ed,bridge_name='br-int',has_traffic_filtering=True,id=06105eee-1ccc-4976-9ef2-84b4765d9a79,network=Network(41354ccc-5b80-451f-9510-2c3d0788ecf7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06105eee-1c')
Oct 02 13:09:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8ccaa47571f9187482b6e0d5b8648919b00868eba664ef2b57b5c2d910f7c60a-userdata-shm.mount: Deactivated successfully.
Oct 02 13:09:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6d793a3e36d600403b96e0bd2756c0969525386e1f59168242ce9e23c2c99fb-merged.mount: Deactivated successfully.
Oct 02 13:09:29 compute-0 podman[400975]: 2025-10-02 13:09:29.399324137 +0000 UTC m=+0.110801591 container cleanup 8ccaa47571f9187482b6e0d5b8648919b00868eba664ef2b57b5c2d910f7c60a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:09:29 compute-0 systemd[1]: libpod-conmon-8ccaa47571f9187482b6e0d5b8648919b00868eba664ef2b57b5c2d910f7c60a.scope: Deactivated successfully.
Oct 02 13:09:29 compute-0 podman[401031]: 2025-10-02 13:09:29.465039337 +0000 UTC m=+0.043246307 container remove 8ccaa47571f9187482b6e0d5b8648919b00868eba664ef2b57b5c2d910f7c60a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.465 2 DEBUG nova.compute.manager [req-7c8493fb-48f7-49e3-86a4-bf8e24dd5a33 req-7769925e-a546-4338-8eaa-c1161615c632 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Received event network-vif-unplugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.465 2 DEBUG oslo_concurrency.lockutils [req-7c8493fb-48f7-49e3-86a4-bf8e24dd5a33 req-7769925e-a546-4338-8eaa-c1161615c632 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.465 2 DEBUG oslo_concurrency.lockutils [req-7c8493fb-48f7-49e3-86a4-bf8e24dd5a33 req-7769925e-a546-4338-8eaa-c1161615c632 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.466 2 DEBUG oslo_concurrency.lockutils [req-7c8493fb-48f7-49e3-86a4-bf8e24dd5a33 req-7769925e-a546-4338-8eaa-c1161615c632 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.466 2 DEBUG nova.compute.manager [req-7c8493fb-48f7-49e3-86a4-bf8e24dd5a33 req-7769925e-a546-4338-8eaa-c1161615c632 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] No waiting events found dispatching network-vif-unplugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.466 2 DEBUG nova.compute.manager [req-7c8493fb-48f7-49e3-86a4-bf8e24dd5a33 req-7769925e-a546-4338-8eaa-c1161615c632 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Received event network-vif-unplugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:09:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:29.471 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0b06f284-63f3-4d30-969a-8cc895cf12e7]: (4, ('Thu Oct  2 01:09:29 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7 (8ccaa47571f9187482b6e0d5b8648919b00868eba664ef2b57b5c2d910f7c60a)\n8ccaa47571f9187482b6e0d5b8648919b00868eba664ef2b57b5c2d910f7c60a\nThu Oct  2 01:09:29 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7 (8ccaa47571f9187482b6e0d5b8648919b00868eba664ef2b57b5c2d910f7c60a)\n8ccaa47571f9187482b6e0d5b8648919b00868eba664ef2b57b5c2d910f7c60a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:29.474 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8bba52f3-019d-4d35-8f8d-e3dafb89ff56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:29.476 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41354ccc-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:29 compute-0 kernel: tap41354ccc-50: left promiscuous mode
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:29.514 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9e8e290d-205c-44fd-a786-75231556fb79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:29.545 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[68167929-0b89-4d38-989b-acd489f5f39d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:29.546 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1251f448-4d6e-4b88-bd99-97dd50c29183]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:29.562 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[033b7520-4ba4-4bd4-a092-ca28076bb989]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 861559, 'reachable_time': 20597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401047, 'error': None, 'target': 'ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:29 compute-0 systemd[1]: run-netns-ovnmeta\x2d41354ccc\x2d5b80\x2d451f\x2d9510\x2d2c3d0788ecf7.mount: Deactivated successfully.
Oct 02 13:09:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:29.566 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-41354ccc-5b80-451f-9510-2c3d0788ecf7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:09:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:29.566 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[e76cae9a-4416-4372-9a6e-42db4adc91d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.781 2 INFO nova.virt.libvirt.driver [None req-e3d67d8e-d6b8-41f2-85e8-c6510358dc8c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Deleting instance files /var/lib/nova/instances/6df9bd3b-6218-4859-aba9-bfbedf2b8f18_del
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.782 2 INFO nova.virt.libvirt.driver [None req-e3d67d8e-d6b8-41f2-85e8-c6510358dc8c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Deletion of /var/lib/nova/instances/6df9bd3b-6218-4859-aba9-bfbedf2b8f18_del complete
Oct 02 13:09:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:29.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:29 compute-0 ceph-mon[73607]: pgmap v3334: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 620 KiB/s rd, 64 KiB/s wr, 50 op/s
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.860 2 INFO nova.compute.manager [None req-e3d67d8e-d6b8-41f2-85e8-c6510358dc8c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.861 2 DEBUG oslo.service.loopingcall [None req-e3d67d8e-d6b8-41f2-85e8-c6510358dc8c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.861 2 DEBUG nova.compute.manager [-] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:09:29 compute-0 nova_compute[257802]: 2025-10-02 13:09:29.862 2 DEBUG nova.network.neutron [-] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:09:30 compute-0 nova_compute[257802]: 2025-10-02 13:09:30.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:30 compute-0 nova_compute[257802]: 2025-10-02 13:09:30.580 2 DEBUG nova.network.neutron [req-bd7bfa21-38a7-4e8c-829a-834b57a44b8b req-cf013543-0fb0-498e-b321-23487c1b4d46 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Updated VIF entry in instance network info cache for port 06105eee-1ccc-4976-9ef2-84b4765d9a79. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:09:30 compute-0 nova_compute[257802]: 2025-10-02 13:09:30.580 2 DEBUG nova.network.neutron [req-bd7bfa21-38a7-4e8c-829a-834b57a44b8b req-cf013543-0fb0-498e-b321-23487c1b4d46 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Updating instance_info_cache with network_info: [{"id": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "address": "fa:16:3e:3d:d7:ed", "network": {"id": "41354ccc-5b80-451f-9510-2c3d0788ecf7", "bridge": "br-int", "label": "tempest-network-smoke--166998389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06105eee-1c", "ovs_interfaceid": "06105eee-1ccc-4976-9ef2-84b4765d9a79", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:09:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3335: 305 pgs: 305 active+clean; 344 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 572 KiB/s rd, 73 KiB/s wr, 59 op/s
Oct 02 13:09:30 compute-0 nova_compute[257802]: 2025-10-02 13:09:30.714 2 DEBUG oslo_concurrency.lockutils [req-bd7bfa21-38a7-4e8c-829a-834b57a44b8b req-cf013543-0fb0-498e-b321-23487c1b4d46 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-6df9bd3b-6218-4859-aba9-bfbedf2b8f18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:09:30 compute-0 nova_compute[257802]: 2025-10-02 13:09:30.716 2 DEBUG nova.network.neutron [-] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:09:30 compute-0 nova_compute[257802]: 2025-10-02 13:09:30.747 2 INFO nova.compute.manager [-] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Took 0.89 seconds to deallocate network for instance.
Oct 02 13:09:30 compute-0 nova_compute[257802]: 2025-10-02 13:09:30.805 2 DEBUG oslo_concurrency.lockutils [None req-e3d67d8e-d6b8-41f2-85e8-c6510358dc8c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:30 compute-0 nova_compute[257802]: 2025-10-02 13:09:30.805 2 DEBUG oslo_concurrency.lockutils [None req-e3d67d8e-d6b8-41f2-85e8-c6510358dc8c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:30 compute-0 nova_compute[257802]: 2025-10-02 13:09:30.812 2 DEBUG oslo_concurrency.lockutils [None req-e3d67d8e-d6b8-41f2-85e8-c6510358dc8c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:30 compute-0 nova_compute[257802]: 2025-10-02 13:09:30.865 2 INFO nova.scheduler.client.report [None req-e3d67d8e-d6b8-41f2-85e8-c6510358dc8c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Deleted allocations for instance 6df9bd3b-6218-4859-aba9-bfbedf2b8f18
Oct 02 13:09:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:31 compute-0 nova_compute[257802]: 2025-10-02 13:09:31.021 2 DEBUG oslo_concurrency.lockutils [None req-e3d67d8e-d6b8-41f2-85e8-c6510358dc8c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.946s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:31 compute-0 nova_compute[257802]: 2025-10-02 13:09:31.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:31 compute-0 nova_compute[257802]: 2025-10-02 13:09:31.139 2 DEBUG nova.compute.manager [req-0ec31374-8610-4816-8fc6-6f6da0a0765a req-cafd86fd-3dd8-401b-8ae7-6af60cf6729f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Received event network-vif-deleted-06105eee-1ccc-4976-9ef2-84b4765d9a79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:31.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:31 compute-0 nova_compute[257802]: 2025-10-02 13:09:31.586 2 DEBUG nova.compute.manager [req-edc2a862-bef5-4558-999d-bd4acffbce8f req-c9ae0d6c-98fd-4686-a459-94e6a8f44303 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Received event network-vif-plugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:31 compute-0 nova_compute[257802]: 2025-10-02 13:09:31.586 2 DEBUG oslo_concurrency.lockutils [req-edc2a862-bef5-4558-999d-bd4acffbce8f req-c9ae0d6c-98fd-4686-a459-94e6a8f44303 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:31 compute-0 nova_compute[257802]: 2025-10-02 13:09:31.586 2 DEBUG oslo_concurrency.lockutils [req-edc2a862-bef5-4558-999d-bd4acffbce8f req-c9ae0d6c-98fd-4686-a459-94e6a8f44303 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:31 compute-0 nova_compute[257802]: 2025-10-02 13:09:31.586 2 DEBUG oslo_concurrency.lockutils [req-edc2a862-bef5-4558-999d-bd4acffbce8f req-c9ae0d6c-98fd-4686-a459-94e6a8f44303 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6df9bd3b-6218-4859-aba9-bfbedf2b8f18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:31 compute-0 nova_compute[257802]: 2025-10-02 13:09:31.587 2 DEBUG nova.compute.manager [req-edc2a862-bef5-4558-999d-bd4acffbce8f req-c9ae0d6c-98fd-4686-a459-94e6a8f44303 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] No waiting events found dispatching network-vif-plugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:09:31 compute-0 nova_compute[257802]: 2025-10-02 13:09:31.587 2 WARNING nova.compute.manager [req-edc2a862-bef5-4558-999d-bd4acffbce8f req-c9ae0d6c-98fd-4686-a459-94e6a8f44303 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Received unexpected event network-vif-plugged-06105eee-1ccc-4976-9ef2-84b4765d9a79 for instance with vm_state deleted and task_state None.
Oct 02 13:09:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:31.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:31 compute-0 ceph-mon[73607]: pgmap v3335: 305 pgs: 305 active+clean; 344 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 572 KiB/s rd, 73 KiB/s wr, 59 op/s
Oct 02 13:09:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3336: 305 pgs: 305 active+clean; 344 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 539 KiB/s rd, 36 KiB/s wr, 58 op/s
Oct 02 13:09:33 compute-0 nova_compute[257802]: 2025-10-02 13:09:33.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:33.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:33.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:33 compute-0 ceph-mon[73607]: pgmap v3336: 305 pgs: 305 active+clean; 344 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 539 KiB/s rd, 36 KiB/s wr, 58 op/s
Oct 02 13:09:34 compute-0 nova_compute[257802]: 2025-10-02 13:09:34.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3337: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 550 KiB/s rd, 40 KiB/s wr, 75 op/s
Oct 02 13:09:35 compute-0 ceph-mon[73607]: pgmap v3337: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 550 KiB/s rd, 40 KiB/s wr, 75 op/s
Oct 02 13:09:35 compute-0 nova_compute[257802]: 2025-10-02 13:09:35.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:35.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:35 compute-0 nova_compute[257802]: 2025-10-02 13:09:35.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:35.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:36 compute-0 nova_compute[257802]: 2025-10-02 13:09:36.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:36 compute-0 nova_compute[257802]: 2025-10-02 13:09:36.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3338: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 165 KiB/s rd, 19 KiB/s wr, 38 op/s
Oct 02 13:09:37 compute-0 nova_compute[257802]: 2025-10-02 13:09:37.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:37.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:37.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:37 compute-0 ceph-mon[73607]: pgmap v3338: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 165 KiB/s rd, 19 KiB/s wr, 38 op/s
Oct 02 13:09:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3339: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 147 KiB/s rd, 18 KiB/s wr, 32 op/s
Oct 02 13:09:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e409 do_prune osdmap full prune enabled
Oct 02 13:09:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e410 e410: 3 total, 3 up, 3 in
Oct 02 13:09:39 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e410: 3 total, 3 up, 3 in
Oct 02 13:09:39 compute-0 ceph-mon[73607]: pgmap v3339: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 147 KiB/s rd, 18 KiB/s wr, 32 op/s
Oct 02 13:09:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:39.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:39 compute-0 nova_compute[257802]: 2025-10-02 13:09:39.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:39 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #55. Immutable memtables: 11.
Oct 02 13:09:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:39.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e410 do_prune osdmap full prune enabled
Oct 02 13:09:40 compute-0 ceph-mon[73607]: osdmap e410: 3 total, 3 up, 3 in
Oct 02 13:09:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e411 e411: 3 total, 3 up, 3 in
Oct 02 13:09:40 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e411: 3 total, 3 up, 3 in
Oct 02 13:09:40 compute-0 nova_compute[257802]: 2025-10-02 13:09:40.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:40 compute-0 nova_compute[257802]: 2025-10-02 13:09:40.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3342: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 339 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 4.6 MiB/s wr, 81 op/s
Oct 02 13:09:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e411 do_prune osdmap full prune enabled
Oct 02 13:09:41 compute-0 ceph-mon[73607]: osdmap e411: 3 total, 3 up, 3 in
Oct 02 13:09:41 compute-0 ceph-mon[73607]: pgmap v3342: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 339 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 4.6 MiB/s wr, 81 op/s
Oct 02 13:09:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e412 e412: 3 total, 3 up, 3 in
Oct 02 13:09:41 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e412: 3 total, 3 up, 3 in
Oct 02 13:09:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:41.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:41.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:42 compute-0 ceph-mon[73607]: osdmap e412: 3 total, 3 up, 3 in
Oct 02 13:09:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:09:42
Oct 02 13:09:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:09:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:09:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'images', 'vms', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root']
Oct 02 13:09:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:09:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3344: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 339 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 6.2 MiB/s wr, 73 op/s
Oct 02 13:09:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:09:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:09:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:09:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:09:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:09:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:09:42 compute-0 podman[401057]: 2025-10-02 13:09:42.919263469 +0000 UTC m=+0.047042189 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 13:09:42 compute-0 podman[401059]: 2025-10-02 13:09:42.951644883 +0000 UTC m=+0.083299927 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:09:42 compute-0 podman[401058]: 2025-10-02 13:09:42.951762755 +0000 UTC m=+0.084263189 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 13:09:43 compute-0 ceph-mon[73607]: pgmap v3344: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 339 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 6.2 MiB/s wr, 73 op/s
Oct 02 13:09:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:43.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:09:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:09:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:09:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:09:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:09:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:43.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:44 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3380021634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:09:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:09:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:09:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:09:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:09:44 compute-0 nova_compute[257802]: 2025-10-02 13:09:44.324 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410569.3232195, 6df9bd3b-6218-4859-aba9-bfbedf2b8f18 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:09:44 compute-0 nova_compute[257802]: 2025-10-02 13:09:44.324 2 INFO nova.compute.manager [-] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] VM Stopped (Lifecycle Event)
Oct 02 13:09:44 compute-0 nova_compute[257802]: 2025-10-02 13:09:44.355 2 DEBUG nova.compute.manager [None req-9c99d042-777b-4d7f-80f2-76388097add6 - - - - - -] [instance: 6df9bd3b-6218-4859-aba9-bfbedf2b8f18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:09:44 compute-0 nova_compute[257802]: 2025-10-02 13:09:44.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3345: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 402 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 8.0 MiB/s rd, 15 MiB/s wr, 195 op/s
Oct 02 13:09:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e412 do_prune osdmap full prune enabled
Oct 02 13:09:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2231093600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:45 compute-0 ceph-mon[73607]: pgmap v3345: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 402 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 8.0 MiB/s rd, 15 MiB/s wr, 195 op/s
Oct 02 13:09:45 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e413 e413: 3 total, 3 up, 3 in
Oct 02 13:09:45 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e413: 3 total, 3 up, 3 in
Oct 02 13:09:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:45.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:45 compute-0 nova_compute[257802]: 2025-10-02 13:09:45.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:09:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:45.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:09:45 compute-0 nova_compute[257802]: 2025-10-02 13:09:45.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:45.919 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=81, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=80) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:09:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:45.920 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:09:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:46 compute-0 nova_compute[257802]: 2025-10-02 13:09:46.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:46 compute-0 nova_compute[257802]: 2025-10-02 13:09:46.097 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:09:46 compute-0 nova_compute[257802]: 2025-10-02 13:09:46.097 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:09:46 compute-0 nova_compute[257802]: 2025-10-02 13:09:46.109 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:09:46 compute-0 ceph-mon[73607]: osdmap e413: 3 total, 3 up, 3 in
Oct 02 13:09:46 compute-0 sudo[401116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:09:46 compute-0 sudo[401116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:46 compute-0 sudo[401116]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:46 compute-0 sudo[401141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:09:46 compute-0 sudo[401141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:46 compute-0 sudo[401141]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3347: 305 pgs: 305 active+clean; 402 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 7.6 MiB/s rd, 14 MiB/s wr, 172 op/s
Oct 02 13:09:47 compute-0 ceph-mon[73607]: pgmap v3347: 305 pgs: 305 active+clean; 402 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 7.6 MiB/s rd, 14 MiB/s wr, 172 op/s
Oct 02 13:09:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:47.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:47.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:48 compute-0 nova_compute[257802]: 2025-10-02 13:09:48.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3348: 305 pgs: 305 active+clean; 373 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.0 MiB/s rd, 6.9 MiB/s wr, 126 op/s
Oct 02 13:09:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:09:48.922 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '81'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:49.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:49 compute-0 nova_compute[257802]: 2025-10-02 13:09:49.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:49 compute-0 ceph-mon[73607]: pgmap v3348: 305 pgs: 305 active+clean; 373 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.0 MiB/s rd, 6.9 MiB/s wr, 126 op/s
Oct 02 13:09:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2119487108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:49.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:50 compute-0 nova_compute[257802]: 2025-10-02 13:09:50.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3349: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.8 MiB/s wr, 147 op/s
Oct 02 13:09:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e413 do_prune osdmap full prune enabled
Oct 02 13:09:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e414 e414: 3 total, 3 up, 3 in
Oct 02 13:09:50 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e414: 3 total, 3 up, 3 in
Oct 02 13:09:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e414 do_prune osdmap full prune enabled
Oct 02 13:09:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e415 e415: 3 total, 3 up, 3 in
Oct 02 13:09:51 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e415: 3 total, 3 up, 3 in
Oct 02 13:09:51 compute-0 nova_compute[257802]: 2025-10-02 13:09:51.117 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:51 compute-0 nova_compute[257802]: 2025-10-02 13:09:51.206 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:51 compute-0 nova_compute[257802]: 2025-10-02 13:09:51.206 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:51 compute-0 nova_compute[257802]: 2025-10-02 13:09:51.206 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:51 compute-0 nova_compute[257802]: 2025-10-02 13:09:51.206 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:09:51 compute-0 nova_compute[257802]: 2025-10-02 13:09:51.207 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:51.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:09:51 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3774196212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:51 compute-0 nova_compute[257802]: 2025-10-02 13:09:51.664 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:51 compute-0 ceph-mon[73607]: pgmap v3349: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.8 MiB/s wr, 147 op/s
Oct 02 13:09:51 compute-0 ceph-mon[73607]: osdmap e414: 3 total, 3 up, 3 in
Oct 02 13:09:51 compute-0 ceph-mon[73607]: osdmap e415: 3 total, 3 up, 3 in
Oct 02 13:09:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3774196212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:51.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:51 compute-0 podman[401193]: 2025-10-02 13:09:51.847034255 +0000 UTC m=+0.125550338 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 13:09:51 compute-0 nova_compute[257802]: 2025-10-02 13:09:51.890 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:09:51 compute-0 nova_compute[257802]: 2025-10-02 13:09:51.891 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4207MB free_disk=20.942665100097656GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:09:51 compute-0 nova_compute[257802]: 2025-10-02 13:09:51.892 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:51 compute-0 nova_compute[257802]: 2025-10-02 13:09:51.892 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:51 compute-0 nova_compute[257802]: 2025-10-02 13:09:51.976 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:09:51 compute-0 nova_compute[257802]: 2025-10-02 13:09:51.976 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:09:52 compute-0 nova_compute[257802]: 2025-10-02 13:09:52.009 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:09:52 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/627699244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:52 compute-0 nova_compute[257802]: 2025-10-02 13:09:52.423 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:52 compute-0 nova_compute[257802]: 2025-10-02 13:09:52.429 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:09:52 compute-0 nova_compute[257802]: 2025-10-02 13:09:52.486 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:09:52 compute-0 nova_compute[257802]: 2025-10-02 13:09:52.560 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:09:52 compute-0 nova_compute[257802]: 2025-10-02 13:09:52.561 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3352: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 55 KiB/s rd, 6.1 KiB/s wr, 79 op/s
Oct 02 13:09:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/627699244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:53.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:53 compute-0 ceph-mon[73607]: pgmap v3352: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 55 KiB/s rd, 6.1 KiB/s wr, 79 op/s
Oct 02 13:09:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:53.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:54 compute-0 nova_compute[257802]: 2025-10-02 13:09:54.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:54 compute-0 nova_compute[257802]: 2025-10-02 13:09:54.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 13:09:54 compute-0 nova_compute[257802]: 2025-10-02 13:09:54.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3353: 305 pgs: 305 active+clean; 191 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 83 KiB/s rd, 7.2 KiB/s wr, 119 op/s
Oct 02 13:09:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/801396701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2302425358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0013731490205657918 of space, bias 1.0, pg target 0.41194470616973755 quantized to 32 (current 32)
Oct 02 13:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002327355818444072 of space, bias 1.0, pg target 0.6982067455332217 quantized to 32 (current 32)
Oct 02 13:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:09:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:09:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:55.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:55 compute-0 nova_compute[257802]: 2025-10-02 13:09:55.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:55.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:55 compute-0 ceph-mon[73607]: pgmap v3353: 305 pgs: 305 active+clean; 191 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 83 KiB/s rd, 7.2 KiB/s wr, 119 op/s
Oct 02 13:09:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1640839144' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:09:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1640839144' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:09:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3354: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 85 KiB/s rd, 6.4 KiB/s wr, 125 op/s
Oct 02 13:09:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:09:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:57.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:09:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:57.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:57 compute-0 ceph-mon[73607]: pgmap v3354: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 85 KiB/s rd, 6.4 KiB/s wr, 125 op/s
Oct 02 13:09:58 compute-0 nova_compute[257802]: 2025-10-02 13:09:58.120 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3355: 305 pgs: 305 active+clean; 145 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 66 KiB/s rd, 1.7 MiB/s wr, 96 op/s
Oct 02 13:09:58 compute-0 ceph-mon[73607]: pgmap v3355: 305 pgs: 305 active+clean; 145 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 66 KiB/s rd, 1.7 MiB/s wr, 96 op/s
Oct 02 13:09:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:59.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:59 compute-0 nova_compute[257802]: 2025-10-02 13:09:59.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:09:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:59.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:00 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 02 13:10:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3009739097' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:10:00 compute-0 nova_compute[257802]: 2025-10-02 13:10:00.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3356: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 62 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Oct 02 13:10:00 compute-0 sudo[401247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:00 compute-0 sudo[401247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:00 compute-0 sudo[401247]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:00 compute-0 sudo[401272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:10:00 compute-0 sudo[401272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:00 compute-0 sudo[401272]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:00 compute-0 sudo[401297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:00 compute-0 sudo[401297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:00 compute-0 sudo[401297]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:01 compute-0 sudo[401322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:10:01 compute-0 sudo[401322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e415 do_prune osdmap full prune enabled
Oct 02 13:10:01 compute-0 ceph-mon[73607]: overall HEALTH_OK
Oct 02 13:10:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3611521309' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:10:01 compute-0 ceph-mon[73607]: pgmap v3356: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 62 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Oct 02 13:10:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e416 e416: 3 total, 3 up, 3 in
Oct 02 13:10:01 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e416: 3 total, 3 up, 3 in
Oct 02 13:10:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:01.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:01 compute-0 sudo[401322]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:10:01 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:10:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:10:01 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:10:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:10:01 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:10:01 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev f21a2b59-7c1b-4c2e-a92d-bab15a641647 does not exist
Oct 02 13:10:01 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 7ead68fb-7434-4781-b094-22c7e40ef1fe does not exist
Oct 02 13:10:01 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 71775696-d084-46e0-9849-5af45f147e89 does not exist
Oct 02 13:10:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:10:01 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:10:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:10:01 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:10:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:10:01 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:10:01 compute-0 sudo[401378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:01 compute-0 sudo[401378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:01 compute-0 sudo[401378]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:01.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:01 compute-0 sudo[401403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:10:01 compute-0 sudo[401403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:01 compute-0 sudo[401403]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:01 compute-0 sudo[401428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:01 compute-0 sudo[401428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:01 compute-0 sudo[401428]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:01 compute-0 sudo[401453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:10:01 compute-0 sudo[401453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:02 compute-0 ceph-mon[73607]: osdmap e416: 3 total, 3 up, 3 in
Oct 02 13:10:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:10:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:10:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:10:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:10:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:10:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:10:02 compute-0 podman[401518]: 2025-10-02 13:10:02.243750679 +0000 UTC m=+0.034733882 container create df40d9b591685d6e73340f094c1d701e0401aa635439c7c50173aab1f1c41d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_engelbart, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 13:10:02 compute-0 systemd[1]: Started libpod-conmon-df40d9b591685d6e73340f094c1d701e0401aa635439c7c50173aab1f1c41d75.scope.
Oct 02 13:10:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:10:02 compute-0 podman[401518]: 2025-10-02 13:10:02.312108422 +0000 UTC m=+0.103091645 container init df40d9b591685d6e73340f094c1d701e0401aa635439c7c50173aab1f1c41d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_engelbart, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:10:02 compute-0 podman[401518]: 2025-10-02 13:10:02.318017325 +0000 UTC m=+0.109000528 container start df40d9b591685d6e73340f094c1d701e0401aa635439c7c50173aab1f1c41d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:10:02 compute-0 systemd[1]: libpod-df40d9b591685d6e73340f094c1d701e0401aa635439c7c50173aab1f1c41d75.scope: Deactivated successfully.
Oct 02 13:10:02 compute-0 relaxed_engelbart[401534]: 167 167
Oct 02 13:10:02 compute-0 podman[401518]: 2025-10-02 13:10:02.323426856 +0000 UTC m=+0.114410079 container attach df40d9b591685d6e73340f094c1d701e0401aa635439c7c50173aab1f1c41d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_engelbart, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 13:10:02 compute-0 podman[401518]: 2025-10-02 13:10:02.228813037 +0000 UTC m=+0.019796260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:10:02 compute-0 conmon[401534]: conmon df40d9b591685d6e7334 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-df40d9b591685d6e73340f094c1d701e0401aa635439c7c50173aab1f1c41d75.scope/container/memory.events
Oct 02 13:10:02 compute-0 podman[401518]: 2025-10-02 13:10:02.325094497 +0000 UTC m=+0.116077700 container died df40d9b591685d6e73340f094c1d701e0401aa635439c7c50173aab1f1c41d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_engelbart, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 13:10:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-c84b37b6832b7e492d7f6c2bc54a2392ce646353f0a3f678ac396f399aa42345-merged.mount: Deactivated successfully.
Oct 02 13:10:02 compute-0 podman[401518]: 2025-10-02 13:10:02.362174403 +0000 UTC m=+0.153157606 container remove df40d9b591685d6e73340f094c1d701e0401aa635439c7c50173aab1f1c41d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 13:10:02 compute-0 systemd[1]: libpod-conmon-df40d9b591685d6e73340f094c1d701e0401aa635439c7c50173aab1f1c41d75.scope: Deactivated successfully.
Oct 02 13:10:02 compute-0 podman[401559]: 2025-10-02 13:10:02.511913516 +0000 UTC m=+0.042164791 container create baf6eedd29ef3cac237adcc93804363734437e092c055b8bf23b2091df2c9bb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 13:10:02 compute-0 systemd[1]: Started libpod-conmon-baf6eedd29ef3cac237adcc93804363734437e092c055b8bf23b2091df2c9bb7.scope.
Oct 02 13:10:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:10:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e39dcd424bc36ca43947c10301b75f37a9bf5c24134b191ead61e044b5fdc03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:10:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e39dcd424bc36ca43947c10301b75f37a9bf5c24134b191ead61e044b5fdc03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:10:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e39dcd424bc36ca43947c10301b75f37a9bf5c24134b191ead61e044b5fdc03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:10:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e39dcd424bc36ca43947c10301b75f37a9bf5c24134b191ead61e044b5fdc03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:10:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e39dcd424bc36ca43947c10301b75f37a9bf5c24134b191ead61e044b5fdc03/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:10:02 compute-0 podman[401559]: 2025-10-02 13:10:02.491337268 +0000 UTC m=+0.021588563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:10:02 compute-0 podman[401559]: 2025-10-02 13:10:02.604024744 +0000 UTC m=+0.134276039 container init baf6eedd29ef3cac237adcc93804363734437e092c055b8bf23b2091df2c9bb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:10:02 compute-0 podman[401559]: 2025-10-02 13:10:02.612284464 +0000 UTC m=+0.142535739 container start baf6eedd29ef3cac237adcc93804363734437e092c055b8bf23b2091df2c9bb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:10:02 compute-0 podman[401559]: 2025-10-02 13:10:02.615697707 +0000 UTC m=+0.145948982 container attach baf6eedd29ef3cac237adcc93804363734437e092c055b8bf23b2091df2c9bb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_fermi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 13:10:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3358: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 61 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Oct 02 13:10:03 compute-0 ceph-mon[73607]: pgmap v3358: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 61 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Oct 02 13:10:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:03.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:03 compute-0 youthful_fermi[401576]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:10:03 compute-0 youthful_fermi[401576]: --> relative data size: 1.0
Oct 02 13:10:03 compute-0 youthful_fermi[401576]: --> All data devices are unavailable
Oct 02 13:10:03 compute-0 systemd[1]: libpod-baf6eedd29ef3cac237adcc93804363734437e092c055b8bf23b2091df2c9bb7.scope: Deactivated successfully.
Oct 02 13:10:03 compute-0 podman[401559]: 2025-10-02 13:10:03.406776825 +0000 UTC m=+0.937028120 container died baf6eedd29ef3cac237adcc93804363734437e092c055b8bf23b2091df2c9bb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_fermi, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:10:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e39dcd424bc36ca43947c10301b75f37a9bf5c24134b191ead61e044b5fdc03-merged.mount: Deactivated successfully.
Oct 02 13:10:03 compute-0 podman[401559]: 2025-10-02 13:10:03.458575908 +0000 UTC m=+0.988827193 container remove baf6eedd29ef3cac237adcc93804363734437e092c055b8bf23b2091df2c9bb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 13:10:03 compute-0 systemd[1]: libpod-conmon-baf6eedd29ef3cac237adcc93804363734437e092c055b8bf23b2091df2c9bb7.scope: Deactivated successfully.
Oct 02 13:10:03 compute-0 sudo[401453]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:03 compute-0 sudo[401605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:03 compute-0 sudo[401605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:03 compute-0 sudo[401605]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:03 compute-0 sudo[401630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:10:03 compute-0 sudo[401630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:03 compute-0 sudo[401630]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:03 compute-0 sudo[401655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:03 compute-0 sudo[401655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:03 compute-0 sudo[401655]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:03 compute-0 sudo[401680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 13:10:03 compute-0 sudo[401680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:03.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:04 compute-0 podman[401744]: 2025-10-02 13:10:04.057034937 +0000 UTC m=+0.047062380 container create 34b1a2cd29e116af55a3b729cc4bec04851f56dc9a2efa6f40e36fa3b8533dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 13:10:04 compute-0 systemd[1]: Started libpod-conmon-34b1a2cd29e116af55a3b729cc4bec04851f56dc9a2efa6f40e36fa3b8533dd7.scope.
Oct 02 13:10:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:10:04 compute-0 podman[401744]: 2025-10-02 13:10:04.129394747 +0000 UTC m=+0.119422200 container init 34b1a2cd29e116af55a3b729cc4bec04851f56dc9a2efa6f40e36fa3b8533dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:10:04 compute-0 podman[401744]: 2025-10-02 13:10:04.035309071 +0000 UTC m=+0.025336564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:10:04 compute-0 podman[401744]: 2025-10-02 13:10:04.136919769 +0000 UTC m=+0.126947212 container start 34b1a2cd29e116af55a3b729cc4bec04851f56dc9a2efa6f40e36fa3b8533dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carver, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:10:04 compute-0 podman[401744]: 2025-10-02 13:10:04.140016554 +0000 UTC m=+0.130044017 container attach 34b1a2cd29e116af55a3b729cc4bec04851f56dc9a2efa6f40e36fa3b8533dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 13:10:04 compute-0 wizardly_carver[401760]: 167 167
Oct 02 13:10:04 compute-0 systemd[1]: libpod-34b1a2cd29e116af55a3b729cc4bec04851f56dc9a2efa6f40e36fa3b8533dd7.scope: Deactivated successfully.
Oct 02 13:10:04 compute-0 podman[401744]: 2025-10-02 13:10:04.142072583 +0000 UTC m=+0.132100046 container died 34b1a2cd29e116af55a3b729cc4bec04851f56dc9a2efa6f40e36fa3b8533dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carver, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 13:10:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ef46e4ebaec5af644e9cf142a8c1f1d15decc004f5fc45c0c809fd0fb95684a-merged.mount: Deactivated successfully.
Oct 02 13:10:04 compute-0 podman[401744]: 2025-10-02 13:10:04.181055757 +0000 UTC m=+0.171083200 container remove 34b1a2cd29e116af55a3b729cc4bec04851f56dc9a2efa6f40e36fa3b8533dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carver, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:10:04 compute-0 systemd[1]: libpod-conmon-34b1a2cd29e116af55a3b729cc4bec04851f56dc9a2efa6f40e36fa3b8533dd7.scope: Deactivated successfully.
Oct 02 13:10:04 compute-0 podman[401782]: 2025-10-02 13:10:04.328036603 +0000 UTC m=+0.038076522 container create b46dcde294a18119861b5903e38289f6e9c67328a5b45b2d342a9908fb44085d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:10:04 compute-0 nova_compute[257802]: 2025-10-02 13:10:04.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:04 compute-0 systemd[1]: Started libpod-conmon-b46dcde294a18119861b5903e38289f6e9c67328a5b45b2d342a9908fb44085d.scope.
Oct 02 13:10:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11da808d17dab5d6650a1fade251c803b1f448c761c474c297d282e90e649ea8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11da808d17dab5d6650a1fade251c803b1f448c761c474c297d282e90e649ea8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11da808d17dab5d6650a1fade251c803b1f448c761c474c297d282e90e649ea8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11da808d17dab5d6650a1fade251c803b1f448c761c474c297d282e90e649ea8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:10:04 compute-0 podman[401782]: 2025-10-02 13:10:04.311153904 +0000 UTC m=+0.021193843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:10:04 compute-0 podman[401782]: 2025-10-02 13:10:04.419810993 +0000 UTC m=+0.129850932 container init b46dcde294a18119861b5903e38289f6e9c67328a5b45b2d342a9908fb44085d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:10:04 compute-0 podman[401782]: 2025-10-02 13:10:04.425137232 +0000 UTC m=+0.135177151 container start b46dcde294a18119861b5903e38289f6e9c67328a5b45b2d342a9908fb44085d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:10:04 compute-0 podman[401782]: 2025-10-02 13:10:04.428045553 +0000 UTC m=+0.138085472 container attach b46dcde294a18119861b5903e38289f6e9c67328a5b45b2d342a9908fb44085d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 13:10:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3359: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 263 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Oct 02 13:10:05 compute-0 magical_feistel[401798]: {
Oct 02 13:10:05 compute-0 magical_feistel[401798]:     "1": [
Oct 02 13:10:05 compute-0 magical_feistel[401798]:         {
Oct 02 13:10:05 compute-0 magical_feistel[401798]:             "devices": [
Oct 02 13:10:05 compute-0 magical_feistel[401798]:                 "/dev/loop3"
Oct 02 13:10:05 compute-0 magical_feistel[401798]:             ],
Oct 02 13:10:05 compute-0 magical_feistel[401798]:             "lv_name": "ceph_lv0",
Oct 02 13:10:05 compute-0 magical_feistel[401798]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:10:05 compute-0 magical_feistel[401798]:             "lv_size": "7511998464",
Oct 02 13:10:05 compute-0 magical_feistel[401798]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:10:05 compute-0 magical_feistel[401798]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:10:05 compute-0 magical_feistel[401798]:             "name": "ceph_lv0",
Oct 02 13:10:05 compute-0 magical_feistel[401798]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:10:05 compute-0 magical_feistel[401798]:             "tags": {
Oct 02 13:10:05 compute-0 magical_feistel[401798]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:10:05 compute-0 magical_feistel[401798]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:10:05 compute-0 magical_feistel[401798]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:10:05 compute-0 magical_feistel[401798]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:10:05 compute-0 magical_feistel[401798]:                 "ceph.cluster_name": "ceph",
Oct 02 13:10:05 compute-0 magical_feistel[401798]:                 "ceph.crush_device_class": "",
Oct 02 13:10:05 compute-0 magical_feistel[401798]:                 "ceph.encrypted": "0",
Oct 02 13:10:05 compute-0 magical_feistel[401798]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:10:05 compute-0 magical_feistel[401798]:                 "ceph.osd_id": "1",
Oct 02 13:10:05 compute-0 magical_feistel[401798]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:10:05 compute-0 magical_feistel[401798]:                 "ceph.type": "block",
Oct 02 13:10:05 compute-0 magical_feistel[401798]:                 "ceph.vdo": "0"
Oct 02 13:10:05 compute-0 magical_feistel[401798]:             },
Oct 02 13:10:05 compute-0 magical_feistel[401798]:             "type": "block",
Oct 02 13:10:05 compute-0 magical_feistel[401798]:             "vg_name": "ceph_vg0"
Oct 02 13:10:05 compute-0 magical_feistel[401798]:         }
Oct 02 13:10:05 compute-0 magical_feistel[401798]:     ]
Oct 02 13:10:05 compute-0 magical_feistel[401798]: }
Oct 02 13:10:05 compute-0 systemd[1]: libpod-b46dcde294a18119861b5903e38289f6e9c67328a5b45b2d342a9908fb44085d.scope: Deactivated successfully.
Oct 02 13:10:05 compute-0 podman[401782]: 2025-10-02 13:10:05.172443581 +0000 UTC m=+0.882483500 container died b46dcde294a18119861b5903e38289f6e9c67328a5b45b2d342a9908fb44085d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:10:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-11da808d17dab5d6650a1fade251c803b1f448c761c474c297d282e90e649ea8-merged.mount: Deactivated successfully.
Oct 02 13:10:05 compute-0 podman[401782]: 2025-10-02 13:10:05.269178201 +0000 UTC m=+0.979218120 container remove b46dcde294a18119861b5903e38289f6e9c67328a5b45b2d342a9908fb44085d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_feistel, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 13:10:05 compute-0 systemd[1]: libpod-conmon-b46dcde294a18119861b5903e38289f6e9c67328a5b45b2d342a9908fb44085d.scope: Deactivated successfully.
Oct 02 13:10:05 compute-0 sudo[401680]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:05.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:05 compute-0 sudo[401822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:05 compute-0 sudo[401822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:05 compute-0 sudo[401822]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:05 compute-0 sudo[401847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:10:05 compute-0 sudo[401847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:05 compute-0 sudo[401847]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:05 compute-0 nova_compute[257802]: 2025-10-02 13:10:05.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:05 compute-0 sudo[401872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:05 compute-0 sudo[401872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:05 compute-0 sudo[401872]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:05 compute-0 sudo[401897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 13:10:05 compute-0 sudo[401897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:05.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:05 compute-0 podman[401962]: 2025-10-02 13:10:05.852546095 +0000 UTC m=+0.085159742 container create b6ec0014cac784ae01afaf88f5e70d7adf3024cbabd4ab6460af2487dee6d512 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_merkle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 13:10:05 compute-0 podman[401962]: 2025-10-02 13:10:05.788327671 +0000 UTC m=+0.020941348 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:10:05 compute-0 systemd[1]: Started libpod-conmon-b6ec0014cac784ae01afaf88f5e70d7adf3024cbabd4ab6460af2487dee6d512.scope.
Oct 02 13:10:05 compute-0 ceph-mon[73607]: pgmap v3359: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 263 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Oct 02 13:10:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:10:05 compute-0 podman[401962]: 2025-10-02 13:10:05.917597759 +0000 UTC m=+0.150211426 container init b6ec0014cac784ae01afaf88f5e70d7adf3024cbabd4ab6460af2487dee6d512 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 13:10:05 compute-0 podman[401962]: 2025-10-02 13:10:05.92343336 +0000 UTC m=+0.156047007 container start b6ec0014cac784ae01afaf88f5e70d7adf3024cbabd4ab6460af2487dee6d512 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct 02 13:10:05 compute-0 wizardly_merkle[401978]: 167 167
Oct 02 13:10:05 compute-0 podman[401962]: 2025-10-02 13:10:05.926595716 +0000 UTC m=+0.159209393 container attach b6ec0014cac784ae01afaf88f5e70d7adf3024cbabd4ab6460af2487dee6d512 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 13:10:05 compute-0 podman[401962]: 2025-10-02 13:10:05.927246532 +0000 UTC m=+0.159860189 container died b6ec0014cac784ae01afaf88f5e70d7adf3024cbabd4ab6460af2487dee6d512 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:10:05 compute-0 systemd[1]: libpod-b6ec0014cac784ae01afaf88f5e70d7adf3024cbabd4ab6460af2487dee6d512.scope: Deactivated successfully.
Oct 02 13:10:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a4ac74f6b391e2af8a112a744277f9b72993b41dae59a85778c83b94c19b669-merged.mount: Deactivated successfully.
Oct 02 13:10:05 compute-0 podman[401962]: 2025-10-02 13:10:05.959141923 +0000 UTC m=+0.191755570 container remove b6ec0014cac784ae01afaf88f5e70d7adf3024cbabd4ab6460af2487dee6d512 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_merkle, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 13:10:05 compute-0 systemd[1]: libpod-conmon-b6ec0014cac784ae01afaf88f5e70d7adf3024cbabd4ab6460af2487dee6d512.scope: Deactivated successfully.
Oct 02 13:10:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:06 compute-0 podman[402003]: 2025-10-02 13:10:06.11564976 +0000 UTC m=+0.037581970 container create 5e3785b0f6fa4f409023e5764676a516431c5fc30d12adfd475723cbe4c15438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wright, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:10:06 compute-0 systemd[1]: Started libpod-conmon-5e3785b0f6fa4f409023e5764676a516431c5fc30d12adfd475723cbe4c15438.scope.
Oct 02 13:10:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:10:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83511d51ee0a1a3ad545a96cb9d2af5a02051ccf032260b737161e625e05849e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:10:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83511d51ee0a1a3ad545a96cb9d2af5a02051ccf032260b737161e625e05849e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:10:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83511d51ee0a1a3ad545a96cb9d2af5a02051ccf032260b737161e625e05849e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:10:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83511d51ee0a1a3ad545a96cb9d2af5a02051ccf032260b737161e625e05849e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:10:06 compute-0 podman[402003]: 2025-10-02 13:10:06.190406588 +0000 UTC m=+0.112338818 container init 5e3785b0f6fa4f409023e5764676a516431c5fc30d12adfd475723cbe4c15438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 13:10:06 compute-0 podman[402003]: 2025-10-02 13:10:06.098630468 +0000 UTC m=+0.020562698 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:10:06 compute-0 podman[402003]: 2025-10-02 13:10:06.198119915 +0000 UTC m=+0.120052125 container start 5e3785b0f6fa4f409023e5764676a516431c5fc30d12adfd475723cbe4c15438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 13:10:06 compute-0 podman[402003]: 2025-10-02 13:10:06.201103977 +0000 UTC m=+0.123036197 container attach 5e3785b0f6fa4f409023e5764676a516431c5fc30d12adfd475723cbe4c15438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 13:10:06 compute-0 sudo[402025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:06 compute-0 sudo[402025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:06 compute-0 sudo[402025]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:06 compute-0 sudo[402050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:06 compute-0 sudo[402050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:06 compute-0 sudo[402050]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3360: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 903 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Oct 02 13:10:07 compute-0 exciting_wright[402020]: {
Oct 02 13:10:07 compute-0 exciting_wright[402020]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 13:10:07 compute-0 exciting_wright[402020]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:10:07 compute-0 exciting_wright[402020]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:10:07 compute-0 exciting_wright[402020]:         "osd_id": 1,
Oct 02 13:10:07 compute-0 exciting_wright[402020]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:10:07 compute-0 exciting_wright[402020]:         "type": "bluestore"
Oct 02 13:10:07 compute-0 exciting_wright[402020]:     }
Oct 02 13:10:07 compute-0 exciting_wright[402020]: }
Oct 02 13:10:07 compute-0 systemd[1]: libpod-5e3785b0f6fa4f409023e5764676a516431c5fc30d12adfd475723cbe4c15438.scope: Deactivated successfully.
Oct 02 13:10:07 compute-0 podman[402003]: 2025-10-02 13:10:07.027180002 +0000 UTC m=+0.949112212 container died 5e3785b0f6fa4f409023e5764676a516431c5fc30d12adfd475723cbe4c15438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wright, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:10:07 compute-0 ceph-mon[73607]: pgmap v3360: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 903 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Oct 02 13:10:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-83511d51ee0a1a3ad545a96cb9d2af5a02051ccf032260b737161e625e05849e-merged.mount: Deactivated successfully.
Oct 02 13:10:07 compute-0 podman[402003]: 2025-10-02 13:10:07.293593377 +0000 UTC m=+1.215525587 container remove 5e3785b0f6fa4f409023e5764676a516431c5fc30d12adfd475723cbe4c15438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wright, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 13:10:07 compute-0 systemd[1]: libpod-conmon-5e3785b0f6fa4f409023e5764676a516431c5fc30d12adfd475723cbe4c15438.scope: Deactivated successfully.
Oct 02 13:10:07 compute-0 sudo[401897]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:10:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:07.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:07 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:10:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:10:07 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:10:07 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev f35500d9-1ba3-4ac2-b582-dd5a1703f9e2 does not exist
Oct 02 13:10:07 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 01b7ab23-2d62-436f-a79b-6426d3e01607 does not exist
Oct 02 13:10:07 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 5b05bbf9-14da-49fd-862e-531cc42cfc92 does not exist
Oct 02 13:10:07 compute-0 sudo[402105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:07 compute-0 sudo[402105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:07 compute-0 sudo[402105]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:07 compute-0 sudo[402130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:10:07 compute-0 sudo[402130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:07 compute-0 sudo[402130]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:07.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:10:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:10:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3361: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 834 KiB/s wr, 104 op/s
Oct 02 13:10:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:09.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:09 compute-0 nova_compute[257802]: 2025-10-02 13:10:09.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:09 compute-0 ceph-mon[73607]: pgmap v3361: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 834 KiB/s wr, 104 op/s
Oct 02 13:10:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:09.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:10 compute-0 nova_compute[257802]: 2025-10-02 13:10:10.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3362: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Oct 02 13:10:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:11.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:11 compute-0 ceph-mon[73607]: pgmap v3362: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Oct 02 13:10:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:11.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3363: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 76 op/s
Oct 02 13:10:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:10:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:10:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:10:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:10:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:10:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:10:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:13.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:13 compute-0 ceph-mon[73607]: pgmap v3363: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 76 op/s
Oct 02 13:10:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:10:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:13.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:10:13 compute-0 podman[402159]: 2025-10-02 13:10:13.945487124 +0000 UTC m=+0.074247347 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Oct 02 13:10:13 compute-0 podman[402158]: 2025-10-02 13:10:13.968513782 +0000 UTC m=+0.100972534 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent)
Oct 02 13:10:13 compute-0 podman[402160]: 2025-10-02 13:10:13.971698889 +0000 UTC m=+0.094839486 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid)
Oct 02 13:10:14 compute-0 nova_compute[257802]: 2025-10-02 13:10:14.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3364: 305 pgs: 305 active+clean; 171 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 549 KiB/s wr, 85 op/s
Oct 02 13:10:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:15.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:15 compute-0 nova_compute[257802]: 2025-10-02 13:10:15.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:15 compute-0 ceph-mon[73607]: pgmap v3364: 305 pgs: 305 active+clean; 171 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 549 KiB/s wr, 85 op/s
Oct 02 13:10:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:15.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:10:16.060128) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410616060177, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 1338, "num_deletes": 259, "total_data_size": 2036226, "memory_usage": 2068896, "flush_reason": "Manual Compaction"}
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410616069629, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 1999361, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 73222, "largest_seqno": 74559, "table_properties": {"data_size": 1993082, "index_size": 3481, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13921, "raw_average_key_size": 20, "raw_value_size": 1980139, "raw_average_value_size": 2886, "num_data_blocks": 152, "num_entries": 686, "num_filter_entries": 686, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410516, "oldest_key_time": 1759410516, "file_creation_time": 1759410616, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 9538 microseconds, and 4493 cpu microseconds.
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:10:16.069663) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 1999361 bytes OK
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:10:16.069682) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:10:16.071100) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:10:16.071113) EVENT_LOG_v1 {"time_micros": 1759410616071109, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:10:16.071128) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 2030237, prev total WAL file size 2030237, number of live WAL files 2.
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:10:16.071773) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303130' seq:72057594037927935, type:22 .. '6C6F676D0033323631' seq:0, type:0; will stop at (end)
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(1952KB)], [167(11MB)]
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410616071806, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 13730998, "oldest_snapshot_seqno": -1}
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 10106 keys, 13596035 bytes, temperature: kUnknown
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410616136025, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 13596035, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13530142, "index_size": 39540, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25285, "raw_key_size": 266716, "raw_average_key_size": 26, "raw_value_size": 13352576, "raw_average_value_size": 1321, "num_data_blocks": 1509, "num_entries": 10106, "num_filter_entries": 10106, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759410616, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:10:16.136275) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 13596035 bytes
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:10:16.137454) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 213.6 rd, 211.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 11.2 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(13.7) write-amplify(6.8) OK, records in: 10640, records dropped: 534 output_compression: NoCompression
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:10:16.137470) EVENT_LOG_v1 {"time_micros": 1759410616137462, "job": 104, "event": "compaction_finished", "compaction_time_micros": 64292, "compaction_time_cpu_micros": 28608, "output_level": 6, "num_output_files": 1, "total_output_size": 13596035, "num_input_records": 10640, "num_output_records": 10106, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410616137946, "job": 104, "event": "table_file_deletion", "file_number": 169}
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410616140054, "job": 104, "event": "table_file_deletion", "file_number": 167}
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:10:16.071694) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:10:16.140108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:10:16.140113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:10:16.140114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:10:16.140115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:10:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:10:16.140117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:10:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3365: 305 pgs: 305 active+clean; 179 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.0 MiB/s wr, 83 op/s
Oct 02 13:10:17 compute-0 ceph-mon[73607]: pgmap v3365: 305 pgs: 305 active+clean; 179 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.0 MiB/s wr, 83 op/s
Oct 02 13:10:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:17.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:17.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3366: 305 pgs: 305 active+clean; 197 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 97 op/s
Oct 02 13:10:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:19.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:19 compute-0 nova_compute[257802]: 2025-10-02 13:10:19.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:19 compute-0 ceph-mon[73607]: pgmap v3366: 305 pgs: 305 active+clean; 197 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 97 op/s
Oct 02 13:10:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:19.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:20 compute-0 nova_compute[257802]: 2025-10-02 13:10:20.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3367: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 13:10:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:21.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:21.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:21 compute-0 ceph-mon[73607]: pgmap v3367: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 13:10:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3368: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 13:10:22 compute-0 podman[402214]: 2025-10-02 13:10:22.929742698 +0000 UTC m=+0.073962841 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:10:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:23.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:23.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:23 compute-0 ceph-mon[73607]: pgmap v3368: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 13:10:24 compute-0 nova_compute[257802]: 2025-10-02 13:10:24.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3369: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 13:10:25 compute-0 ceph-mon[73607]: pgmap v3369: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 13:10:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:25.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:25 compute-0 nova_compute[257802]: 2025-10-02 13:10:25.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:25.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1983806848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3370: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 284 KiB/s rd, 1.6 MiB/s wr, 51 op/s
Oct 02 13:10:26 compute-0 sudo[402242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:26 compute-0 sudo[402242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:26 compute-0 sudo[402242]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:26 compute-0 sudo[402268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:26 compute-0 sudo[402268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:26 compute-0 sudo[402268]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:10:26.995 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:10:26.995 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:10:26.996 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:27 compute-0 ceph-mon[73607]: pgmap v3370: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 284 KiB/s rd, 1.6 MiB/s wr, 51 op/s
Oct 02 13:10:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3228220737' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:27.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:27.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:28 compute-0 nova_compute[257802]: 2025-10-02 13:10:28.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:10:28 compute-0 nova_compute[257802]: 2025-10-02 13:10:28.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:10:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3371: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 276 KiB/s rd, 1.1 MiB/s wr, 38 op/s
Oct 02 13:10:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:29.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:29 compute-0 nova_compute[257802]: 2025-10-02 13:10:29.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:10:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:29.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:10:29 compute-0 ceph-mon[73607]: pgmap v3371: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 276 KiB/s rd, 1.1 MiB/s wr, 38 op/s
Oct 02 13:10:30 compute-0 nova_compute[257802]: 2025-10-02 13:10:30.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3372: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 63 KiB/s wr, 5 op/s
Oct 02 13:10:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:31.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:31.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:31 compute-0 ceph-mon[73607]: pgmap v3372: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 63 KiB/s wr, 5 op/s
Oct 02 13:10:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3373: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 16 KiB/s wr, 0 op/s
Oct 02 13:10:33 compute-0 nova_compute[257802]: 2025-10-02 13:10:33.100 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:10:33 compute-0 nova_compute[257802]: 2025-10-02 13:10:33.100 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:10:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:33.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:10:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:33.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:10:33 compute-0 ceph-mon[73607]: pgmap v3373: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 16 KiB/s wr, 0 op/s
Oct 02 13:10:34 compute-0 nova_compute[257802]: 2025-10-02 13:10:34.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:34 compute-0 ovn_controller[148183]: 2025-10-02T13:10:34Z|00959|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Oct 02 13:10:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3374: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 17 KiB/s wr, 1 op/s
Oct 02 13:10:35 compute-0 nova_compute[257802]: 2025-10-02 13:10:35.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:10:35 compute-0 ceph-mon[73607]: pgmap v3374: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 17 KiB/s wr, 1 op/s
Oct 02 13:10:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:10:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:35.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:10:35 compute-0 nova_compute[257802]: 2025-10-02 13:10:35.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:35 compute-0 nova_compute[257802]: 2025-10-02 13:10:35.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:10:35.498 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=82, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=81) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:10:35 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:10:35.500 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:10:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:35.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3375: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 KiB/s rd, 6.2 KiB/s wr, 3 op/s
Oct 02 13:10:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:37.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:37 compute-0 ceph-mon[73607]: pgmap v3375: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 KiB/s rd, 6.2 KiB/s wr, 3 op/s
Oct 02 13:10:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:10:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:37.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:10:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3376: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.5 KiB/s rd, 27 KiB/s wr, 9 op/s
Oct 02 13:10:39 compute-0 nova_compute[257802]: 2025-10-02 13:10:39.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:10:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:39.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:39 compute-0 nova_compute[257802]: 2025-10-02 13:10:39.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:39 compute-0 ceph-mon[73607]: pgmap v3376: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.5 KiB/s rd, 27 KiB/s wr, 9 op/s
Oct 02 13:10:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:10:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:39.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:10:40 compute-0 nova_compute[257802]: 2025-10-02 13:10:40.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3377: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.4 KiB/s rd, 27 KiB/s wr, 10 op/s
Oct 02 13:10:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:41 compute-0 nova_compute[257802]: 2025-10-02 13:10:41.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:10:41 compute-0 nova_compute[257802]: 2025-10-02 13:10:41.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:10:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:41.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:10:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:41.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:10:41 compute-0 ceph-mon[73607]: pgmap v3377: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.4 KiB/s rd, 27 KiB/s wr, 10 op/s
Oct 02 13:10:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2966095719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:10:42
Oct 02 13:10:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:10:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:10:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'volumes', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'default.rgw.control', '.mgr', '.rgw.root']
Oct 02 13:10:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:10:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3378: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.4 KiB/s rd, 23 KiB/s wr, 10 op/s
Oct 02 13:10:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:10:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:10:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:10:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:10:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:10:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:10:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1909700081' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:10:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:43.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:43 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:10:43.503 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '82'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:10:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:10:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:10:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:10:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:10:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:10:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:10:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:43.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:10:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:10:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:10:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:10:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:10:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:10:44 compute-0 ceph-mon[73607]: pgmap v3378: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.4 KiB/s rd, 23 KiB/s wr, 10 op/s
Oct 02 13:10:44 compute-0 nova_compute[257802]: 2025-10-02 13:10:44.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3379: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.4 KiB/s rd, 23 KiB/s wr, 10 op/s
Oct 02 13:10:44 compute-0 podman[402308]: 2025-10-02 13:10:44.943373276 +0000 UTC m=+0.057348499 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 13:10:44 compute-0 podman[402302]: 2025-10-02 13:10:44.949677769 +0000 UTC m=+0.083230666 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:10:44 compute-0 podman[402303]: 2025-10-02 13:10:44.964582089 +0000 UTC m=+0.080690063 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 02 13:10:45 compute-0 ceph-mon[73607]: pgmap v3379: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.4 KiB/s rd, 23 KiB/s wr, 10 op/s
Oct 02 13:10:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:45.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:45 compute-0 nova_compute[257802]: 2025-10-02 13:10:45.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:45.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1665672065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/315835295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/339632512' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:10:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2737764232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3380: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.3 KiB/s rd, 22 KiB/s wr, 9 op/s
Oct 02 13:10:46 compute-0 sudo[402362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:46 compute-0 sudo[402362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:46 compute-0 sudo[402362]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:46 compute-0 sudo[402387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:46 compute-0 sudo[402387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:46 compute-0 sudo[402387]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:47 compute-0 nova_compute[257802]: 2025-10-02 13:10:47.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:10:47 compute-0 nova_compute[257802]: 2025-10-02 13:10:47.097 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:10:47 compute-0 nova_compute[257802]: 2025-10-02 13:10:47.097 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:10:47 compute-0 nova_compute[257802]: 2025-10-02 13:10:47.111 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:10:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:47.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2364750605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:47 compute-0 ceph-mon[73607]: pgmap v3380: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.3 KiB/s rd, 22 KiB/s wr, 9 op/s
Oct 02 13:10:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:47.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3381: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.7 KiB/s rd, 23 KiB/s wr, 7 op/s
Oct 02 13:10:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:10:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Cumulative writes: 16K writes, 74K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.02 MB/s
                                           Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.11 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1582 writes, 7005 keys, 1582 commit groups, 1.0 writes per commit group, ingest: 10.59 MB, 0.02 MB/s
                                           Interval WAL: 1582 writes, 1582 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     45.2      2.22              0.28        52    0.043       0      0       0.0       0.0
                                             L6      1/0   12.97 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.2    106.8     91.5      5.70              1.45        51    0.112    380K    27K       0.0       0.0
                                            Sum      1/0   12.97 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.2     76.9     78.5      7.93              1.72       103    0.077    380K    27K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.7    146.3    152.1      0.56              0.21        12    0.046     61K   3145       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0    106.8     91.5      5.70              1.45        51    0.112    380K    27K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     45.3      2.22              0.28        51    0.043       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.098, interval 0.012
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.61 GB write, 0.10 MB/s write, 0.60 GB read, 0.10 MB/s read, 7.9 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5581be5e11f0#2 capacity: 304.00 MB usage: 65.65 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000481 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3797,62.89 MB,20.6864%) FilterBlock(104,1.03 MB,0.337335%) IndexBlock(104,1.73 MB,0.570182%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 13:10:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:10:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:49.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:10:49 compute-0 nova_compute[257802]: 2025-10-02 13:10:49.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:49 compute-0 ceph-mon[73607]: pgmap v3381: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.7 KiB/s rd, 23 KiB/s wr, 7 op/s
Oct 02 13:10:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:49.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:50 compute-0 nova_compute[257802]: 2025-10-02 13:10:50.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3382: 305 pgs: 305 active+clean; 164 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 16 KiB/s wr, 21 op/s
Oct 02 13:10:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:51 compute-0 nova_compute[257802]: 2025-10-02 13:10:51.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:10:51 compute-0 nova_compute[257802]: 2025-10-02 13:10:51.123 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:51 compute-0 nova_compute[257802]: 2025-10-02 13:10:51.124 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:51 compute-0 nova_compute[257802]: 2025-10-02 13:10:51.124 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:51 compute-0 nova_compute[257802]: 2025-10-02 13:10:51.124 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:10:51 compute-0 nova_compute[257802]: 2025-10-02 13:10:51.124 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:51.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:10:51 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2271657052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:51 compute-0 nova_compute[257802]: 2025-10-02 13:10:51.537 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:51 compute-0 nova_compute[257802]: 2025-10-02 13:10:51.692 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:10:51 compute-0 nova_compute[257802]: 2025-10-02 13:10:51.694 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4226MB free_disk=20.962982177734375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:10:51 compute-0 nova_compute[257802]: 2025-10-02 13:10:51.694 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:51 compute-0 nova_compute[257802]: 2025-10-02 13:10:51.695 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:51 compute-0 nova_compute[257802]: 2025-10-02 13:10:51.786 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:10:51 compute-0 nova_compute[257802]: 2025-10-02 13:10:51.787 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:10:51 compute-0 nova_compute[257802]: 2025-10-02 13:10:51.812 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing inventories for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 13:10:51 compute-0 nova_compute[257802]: 2025-10-02 13:10:51.829 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating ProviderTree inventory for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 13:10:51 compute-0 nova_compute[257802]: 2025-10-02 13:10:51.829 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating inventory in ProviderTree for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 13:10:51 compute-0 nova_compute[257802]: 2025-10-02 13:10:51.844 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing aggregate associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 13:10:51 compute-0 nova_compute[257802]: 2025-10-02 13:10:51.864 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing trait associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 13:10:51 compute-0 ceph-mon[73607]: pgmap v3382: 305 pgs: 305 active+clean; 164 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 16 KiB/s wr, 21 op/s
Oct 02 13:10:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1642908140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2271657052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:51 compute-0 nova_compute[257802]: 2025-10-02 13:10:51.878 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:51.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:10:52 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3152816294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:52 compute-0 nova_compute[257802]: 2025-10-02 13:10:52.286 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:52 compute-0 nova_compute[257802]: 2025-10-02 13:10:52.292 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:10:52 compute-0 nova_compute[257802]: 2025-10-02 13:10:52.313 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:10:52 compute-0 nova_compute[257802]: 2025-10-02 13:10:52.347 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:10:52 compute-0 nova_compute[257802]: 2025-10-02 13:10:52.347 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3383: 305 pgs: 305 active+clean; 164 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 16 KiB/s wr, 20 op/s
Oct 02 13:10:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3152816294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:10:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:53.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:10:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:53.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:53 compute-0 ceph-mon[73607]: pgmap v3383: 305 pgs: 305 active+clean; 164 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 16 KiB/s wr, 20 op/s
Oct 02 13:10:53 compute-0 podman[402459]: 2025-10-02 13:10:53.93959641 +0000 UTC m=+0.081928994 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:10:54 compute-0 nova_compute[257802]: 2025-10-02 13:10:54.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3384: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 47 KiB/s rd, 16 KiB/s wr, 39 op/s
Oct 02 13:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.906639679880886e-06 of space, bias 1.0, pg target 0.002071991903964266 quantized to 32 (current 32)
Oct 02 13:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002173592208728829 of space, bias 1.0, pg target 0.6520776626186487 quantized to 32 (current 32)
Oct 02 13:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:10:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:10:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:55.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:55 compute-0 nova_compute[257802]: 2025-10-02 13:10:55.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:55.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:55 compute-0 ceph-mon[73607]: pgmap v3384: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 47 KiB/s rd, 16 KiB/s wr, 39 op/s
Oct 02 13:10:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/953289654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3385: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 56 KiB/s rd, 17 KiB/s wr, 51 op/s
Oct 02 13:10:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:57.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:10:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:57.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:10:57 compute-0 ceph-mon[73607]: pgmap v3385: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 56 KiB/s rd, 17 KiB/s wr, 51 op/s
Oct 02 13:10:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3828257001' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:10:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3828257001' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:10:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3386: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 59 KiB/s rd, 17 KiB/s wr, 55 op/s
Oct 02 13:10:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:59.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:59 compute-0 nova_compute[257802]: 2025-10-02 13:10:59.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:10:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:10:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:59.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:10:59 compute-0 ceph-mon[73607]: pgmap v3386: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 59 KiB/s rd, 17 KiB/s wr, 55 op/s
Oct 02 13:11:00 compute-0 nova_compute[257802]: 2025-10-02 13:11:00.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3387: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 68 KiB/s rd, 16 KiB/s wr, 65 op/s
Oct 02 13:11:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:01 compute-0 ceph-mon[73607]: pgmap v3387: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 68 KiB/s rd, 16 KiB/s wr, 65 op/s
Oct 02 13:11:01 compute-0 nova_compute[257802]: 2025-10-02 13:11:01.348 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:01.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:01.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3388: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 56 KiB/s rd, 1.2 KiB/s wr, 46 op/s
Oct 02 13:11:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:03.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:03 compute-0 ceph-mon[73607]: pgmap v3388: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 56 KiB/s rd, 1.2 KiB/s wr, 46 op/s
Oct 02 13:11:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:11:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:03.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:11:04 compute-0 nova_compute[257802]: 2025-10-02 13:11:04.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3389: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.5 KiB/s wr, 53 op/s
Oct 02 13:11:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:05.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:05 compute-0 nova_compute[257802]: 2025-10-02 13:11:05.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:05 compute-0 ceph-mon[73607]: pgmap v3389: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.5 KiB/s wr, 53 op/s
Oct 02 13:11:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:11:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:05.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:11:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3390: 305 pgs: 305 active+clean; 131 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 411 KiB/s wr, 40 op/s
Oct 02 13:11:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e416 do_prune osdmap full prune enabled
Oct 02 13:11:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e417 e417: 3 total, 3 up, 3 in
Oct 02 13:11:06 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2083019403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:06 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e417: 3 total, 3 up, 3 in
Oct 02 13:11:07 compute-0 sudo[402492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:11:07 compute-0 sudo[402492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:07 compute-0 sudo[402492]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:07 compute-0 sudo[402517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:11:07 compute-0 sudo[402517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:07 compute-0 sudo[402517]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:07.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:07 compute-0 ceph-mon[73607]: pgmap v3390: 305 pgs: 305 active+clean; 131 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 411 KiB/s wr, 40 op/s
Oct 02 13:11:07 compute-0 ceph-mon[73607]: osdmap e417: 3 total, 3 up, 3 in
Oct 02 13:11:07 compute-0 sudo[402542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:11:07 compute-0 sudo[402542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:07 compute-0 sudo[402542]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:07.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:07 compute-0 sudo[402567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:11:07 compute-0 sudo[402567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:07 compute-0 sudo[402567]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:07 compute-0 sudo[402592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:11:07 compute-0 sudo[402592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:07 compute-0 sudo[402592]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:08 compute-0 sudo[402617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:11:08 compute-0 sudo[402617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:08 compute-0 sudo[402617]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:11:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:11:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:11:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:11:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:11:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:11:08 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 520a4481-77b2-4de3-990d-41e2fa9a2414 does not exist
Oct 02 13:11:08 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 727762b7-9f8f-4881-8674-60fe55a844df does not exist
Oct 02 13:11:08 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 1c53b39e-7282-408e-8dc8-dafbccc4e761 does not exist
Oct 02 13:11:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:11:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:11:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:11:08 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:11:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:11:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:11:08 compute-0 sudo[402672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:11:08 compute-0 sudo[402672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:08 compute-0 sudo[402672]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3392: 305 pgs: 305 active+clean; 176 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 73 op/s
Oct 02 13:11:08 compute-0 sudo[402697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:11:08 compute-0 sudo[402697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:08 compute-0 sudo[402697]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:08 compute-0 sudo[402723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:11:08 compute-0 sudo[402723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:08 compute-0 sudo[402723]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:08 compute-0 sudo[402748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:11:08 compute-0 sudo[402748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:11:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:11:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:11:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:11:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:11:08 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:11:09 compute-0 podman[402814]: 2025-10-02 13:11:09.154403995 +0000 UTC m=+0.047323016 container create 8abb30a3168c4c39bccdeefaa68c54a0fcaef4739586590a9c520a82c02c502b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:11:09 compute-0 systemd[1]: Started libpod-conmon-8abb30a3168c4c39bccdeefaa68c54a0fcaef4739586590a9c520a82c02c502b.scope.
Oct 02 13:11:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:11:09 compute-0 podman[402814]: 2025-10-02 13:11:09.132566037 +0000 UTC m=+0.025485048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:11:09 compute-0 podman[402814]: 2025-10-02 13:11:09.246173366 +0000 UTC m=+0.139092347 container init 8abb30a3168c4c39bccdeefaa68c54a0fcaef4739586590a9c520a82c02c502b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chaplygin, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:11:09 compute-0 podman[402814]: 2025-10-02 13:11:09.253570354 +0000 UTC m=+0.146489335 container start 8abb30a3168c4c39bccdeefaa68c54a0fcaef4739586590a9c520a82c02c502b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 13:11:09 compute-0 podman[402814]: 2025-10-02 13:11:09.256974056 +0000 UTC m=+0.149893067 container attach 8abb30a3168c4c39bccdeefaa68c54a0fcaef4739586590a9c520a82c02c502b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chaplygin, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Oct 02 13:11:09 compute-0 frosty_chaplygin[402831]: 167 167
Oct 02 13:11:09 compute-0 systemd[1]: libpod-8abb30a3168c4c39bccdeefaa68c54a0fcaef4739586590a9c520a82c02c502b.scope: Deactivated successfully.
Oct 02 13:11:09 compute-0 podman[402814]: 2025-10-02 13:11:09.262791118 +0000 UTC m=+0.155710099 container died 8abb30a3168c4c39bccdeefaa68c54a0fcaef4739586590a9c520a82c02c502b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chaplygin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:11:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b0a0a06384e3fbf3650bd3ba2c8543e6d4b96da15ec6d3cde6dbfb4f8ed212b-merged.mount: Deactivated successfully.
Oct 02 13:11:09 compute-0 podman[402814]: 2025-10-02 13:11:09.305887929 +0000 UTC m=+0.198806920 container remove 8abb30a3168c4c39bccdeefaa68c54a0fcaef4739586590a9c520a82c02c502b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chaplygin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:11:09 compute-0 systemd[1]: libpod-conmon-8abb30a3168c4c39bccdeefaa68c54a0fcaef4739586590a9c520a82c02c502b.scope: Deactivated successfully.
Oct 02 13:11:09 compute-0 nova_compute[257802]: 2025-10-02 13:11:09.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:09.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:09 compute-0 podman[402854]: 2025-10-02 13:11:09.473676909 +0000 UTC m=+0.042574691 container create 7a3b36b4c2215fc5ac9f9f47637dcf50fb4866a7d5d5d5b8c370a89cfca022a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 13:11:09 compute-0 systemd[1]: Started libpod-conmon-7a3b36b4c2215fc5ac9f9f47637dcf50fb4866a7d5d5d5b8c370a89cfca022a2.scope.
Oct 02 13:11:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:11:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd3b12cfaa5579d8a366dfcb30bb61ec34c2d16dcf36a2890cfc304d3dc6129/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:11:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd3b12cfaa5579d8a366dfcb30bb61ec34c2d16dcf36a2890cfc304d3dc6129/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:11:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd3b12cfaa5579d8a366dfcb30bb61ec34c2d16dcf36a2890cfc304d3dc6129/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:11:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd3b12cfaa5579d8a366dfcb30bb61ec34c2d16dcf36a2890cfc304d3dc6129/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:11:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd3b12cfaa5579d8a366dfcb30bb61ec34c2d16dcf36a2890cfc304d3dc6129/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:11:09 compute-0 podman[402854]: 2025-10-02 13:11:09.45719246 +0000 UTC m=+0.026090262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:11:09 compute-0 podman[402854]: 2025-10-02 13:11:09.553347166 +0000 UTC m=+0.122244978 container init 7a3b36b4c2215fc5ac9f9f47637dcf50fb4866a7d5d5d5b8c370a89cfca022a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 13:11:09 compute-0 podman[402854]: 2025-10-02 13:11:09.561453563 +0000 UTC m=+0.130351375 container start 7a3b36b4c2215fc5ac9f9f47637dcf50fb4866a7d5d5d5b8c370a89cfca022a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_payne, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:11:09 compute-0 podman[402854]: 2025-10-02 13:11:09.565547062 +0000 UTC m=+0.134444864 container attach 7a3b36b4c2215fc5ac9f9f47637dcf50fb4866a7d5d5d5b8c370a89cfca022a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_payne, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:11:09 compute-0 ceph-mon[73607]: pgmap v3392: 305 pgs: 305 active+clean; 176 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 73 op/s
Oct 02 13:11:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:11:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:09.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:11:10 compute-0 busy_payne[402871]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:11:10 compute-0 busy_payne[402871]: --> relative data size: 1.0
Oct 02 13:11:10 compute-0 busy_payne[402871]: --> All data devices are unavailable
Oct 02 13:11:10 compute-0 systemd[1]: libpod-7a3b36b4c2215fc5ac9f9f47637dcf50fb4866a7d5d5d5b8c370a89cfca022a2.scope: Deactivated successfully.
Oct 02 13:11:10 compute-0 podman[402854]: 2025-10-02 13:11:10.480713572 +0000 UTC m=+1.049611374 container died 7a3b36b4c2215fc5ac9f9f47637dcf50fb4866a7d5d5d5b8c370a89cfca022a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_payne, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:11:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fd3b12cfaa5579d8a366dfcb30bb61ec34c2d16dcf36a2890cfc304d3dc6129-merged.mount: Deactivated successfully.
Oct 02 13:11:10 compute-0 nova_compute[257802]: 2025-10-02 13:11:10.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:10 compute-0 podman[402854]: 2025-10-02 13:11:10.56823929 +0000 UTC m=+1.137137102 container remove 7a3b36b4c2215fc5ac9f9f47637dcf50fb4866a7d5d5d5b8c370a89cfca022a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_payne, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Oct 02 13:11:10 compute-0 systemd[1]: libpod-conmon-7a3b36b4c2215fc5ac9f9f47637dcf50fb4866a7d5d5d5b8c370a89cfca022a2.scope: Deactivated successfully.
Oct 02 13:11:10 compute-0 sudo[402748]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:10 compute-0 sudo[402898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:11:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3393: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.3 MiB/s wr, 90 op/s
Oct 02 13:11:10 compute-0 sudo[402898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:10 compute-0 sudo[402898]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:10 compute-0 sudo[402924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:11:10 compute-0 sudo[402924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:10 compute-0 sudo[402924]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:10 compute-0 sudo[402949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:11:10 compute-0 sudo[402949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:10 compute-0 sudo[402949]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1728221769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:11:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2338560981' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:11:10 compute-0 sudo[402974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 13:11:10 compute-0 sudo[402974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:11 compute-0 podman[403040]: 2025-10-02 13:11:11.33210758 +0000 UTC m=+0.047873450 container create b299562cfce8b4a5d8de764196db27e92ed478c8d3df5d867d6f8c22ce60940f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:11:11 compute-0 systemd[1]: Started libpod-conmon-b299562cfce8b4a5d8de764196db27e92ed478c8d3df5d867d6f8c22ce60940f.scope.
Oct 02 13:11:11 compute-0 podman[403040]: 2025-10-02 13:11:11.313861368 +0000 UTC m=+0.029627268 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:11:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:11:11 compute-0 podman[403040]: 2025-10-02 13:11:11.432128289 +0000 UTC m=+0.147894189 container init b299562cfce8b4a5d8de764196db27e92ed478c8d3df5d867d6f8c22ce60940f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_edison, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:11:11 compute-0 podman[403040]: 2025-10-02 13:11:11.440964483 +0000 UTC m=+0.156730353 container start b299562cfce8b4a5d8de764196db27e92ed478c8d3df5d867d6f8c22ce60940f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_edison, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:11:11 compute-0 podman[403040]: 2025-10-02 13:11:11.444061417 +0000 UTC m=+0.159827307 container attach b299562cfce8b4a5d8de764196db27e92ed478c8d3df5d867d6f8c22ce60940f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_edison, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:11:11 compute-0 distracted_edison[403056]: 167 167
Oct 02 13:11:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:11:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:11.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:11:11 compute-0 systemd[1]: libpod-b299562cfce8b4a5d8de764196db27e92ed478c8d3df5d867d6f8c22ce60940f.scope: Deactivated successfully.
Oct 02 13:11:11 compute-0 conmon[403056]: conmon b299562cfce8b4a5d8de <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b299562cfce8b4a5d8de764196db27e92ed478c8d3df5d867d6f8c22ce60940f.scope/container/memory.events
Oct 02 13:11:11 compute-0 podman[403040]: 2025-10-02 13:11:11.450506514 +0000 UTC m=+0.166272424 container died b299562cfce8b4a5d8de764196db27e92ed478c8d3df5d867d6f8c22ce60940f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_edison, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 13:11:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-4016439992b8db0b104ce9f3bf07542242446d14f4ec581403952922904a6410-merged.mount: Deactivated successfully.
Oct 02 13:11:11 compute-0 podman[403040]: 2025-10-02 13:11:11.49295232 +0000 UTC m=+0.208718190 container remove b299562cfce8b4a5d8de764196db27e92ed478c8d3df5d867d6f8c22ce60940f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_edison, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:11:11 compute-0 systemd[1]: libpod-conmon-b299562cfce8b4a5d8de764196db27e92ed478c8d3df5d867d6f8c22ce60940f.scope: Deactivated successfully.
Oct 02 13:11:11 compute-0 podman[403080]: 2025-10-02 13:11:11.6863306 +0000 UTC m=+0.050060423 container create 3d272cded542b006ac15c1822d04f43486b22af03609b2717562dadfc03b0c30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mendel, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:11:11 compute-0 systemd[1]: Started libpod-conmon-3d272cded542b006ac15c1822d04f43486b22af03609b2717562dadfc03b0c30.scope.
Oct 02 13:11:11 compute-0 podman[403080]: 2025-10-02 13:11:11.662628506 +0000 UTC m=+0.026358339 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:11:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f32e5a86f62f4fdd7a287c25b0a22cb5f4a4000d47822a696a50eb27708cd550/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f32e5a86f62f4fdd7a287c25b0a22cb5f4a4000d47822a696a50eb27708cd550/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f32e5a86f62f4fdd7a287c25b0a22cb5f4a4000d47822a696a50eb27708cd550/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f32e5a86f62f4fdd7a287c25b0a22cb5f4a4000d47822a696a50eb27708cd550/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:11:11 compute-0 podman[403080]: 2025-10-02 13:11:11.784225217 +0000 UTC m=+0.147955060 container init 3d272cded542b006ac15c1822d04f43486b22af03609b2717562dadfc03b0c30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mendel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:11:11 compute-0 podman[403080]: 2025-10-02 13:11:11.797769985 +0000 UTC m=+0.161499838 container start 3d272cded542b006ac15c1822d04f43486b22af03609b2717562dadfc03b0c30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 13:11:11 compute-0 podman[403080]: 2025-10-02 13:11:11.80581473 +0000 UTC m=+0.169544563 container attach 3d272cded542b006ac15c1822d04f43486b22af03609b2717562dadfc03b0c30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mendel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:11:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.002000047s ======
Oct 02 13:11:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:11.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Oct 02 13:11:11 compute-0 ceph-mon[73607]: pgmap v3393: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.3 MiB/s wr, 90 op/s
Oct 02 13:11:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3052946091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:12 compute-0 crazy_mendel[403096]: {
Oct 02 13:11:12 compute-0 crazy_mendel[403096]:     "1": [
Oct 02 13:11:12 compute-0 crazy_mendel[403096]:         {
Oct 02 13:11:12 compute-0 crazy_mendel[403096]:             "devices": [
Oct 02 13:11:12 compute-0 crazy_mendel[403096]:                 "/dev/loop3"
Oct 02 13:11:12 compute-0 crazy_mendel[403096]:             ],
Oct 02 13:11:12 compute-0 crazy_mendel[403096]:             "lv_name": "ceph_lv0",
Oct 02 13:11:12 compute-0 crazy_mendel[403096]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:11:12 compute-0 crazy_mendel[403096]:             "lv_size": "7511998464",
Oct 02 13:11:12 compute-0 crazy_mendel[403096]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:11:12 compute-0 crazy_mendel[403096]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:11:12 compute-0 crazy_mendel[403096]:             "name": "ceph_lv0",
Oct 02 13:11:12 compute-0 crazy_mendel[403096]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:11:12 compute-0 crazy_mendel[403096]:             "tags": {
Oct 02 13:11:12 compute-0 crazy_mendel[403096]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:11:12 compute-0 crazy_mendel[403096]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:11:12 compute-0 crazy_mendel[403096]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:11:12 compute-0 crazy_mendel[403096]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:11:12 compute-0 crazy_mendel[403096]:                 "ceph.cluster_name": "ceph",
Oct 02 13:11:12 compute-0 crazy_mendel[403096]:                 "ceph.crush_device_class": "",
Oct 02 13:11:12 compute-0 crazy_mendel[403096]:                 "ceph.encrypted": "0",
Oct 02 13:11:12 compute-0 crazy_mendel[403096]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:11:12 compute-0 crazy_mendel[403096]:                 "ceph.osd_id": "1",
Oct 02 13:11:12 compute-0 crazy_mendel[403096]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:11:12 compute-0 crazy_mendel[403096]:                 "ceph.type": "block",
Oct 02 13:11:12 compute-0 crazy_mendel[403096]:                 "ceph.vdo": "0"
Oct 02 13:11:12 compute-0 crazy_mendel[403096]:             },
Oct 02 13:11:12 compute-0 crazy_mendel[403096]:             "type": "block",
Oct 02 13:11:12 compute-0 crazy_mendel[403096]:             "vg_name": "ceph_vg0"
Oct 02 13:11:12 compute-0 crazy_mendel[403096]:         }
Oct 02 13:11:12 compute-0 crazy_mendel[403096]:     ]
Oct 02 13:11:12 compute-0 crazy_mendel[403096]: }
Oct 02 13:11:12 compute-0 systemd[1]: libpod-3d272cded542b006ac15c1822d04f43486b22af03609b2717562dadfc03b0c30.scope: Deactivated successfully.
Oct 02 13:11:12 compute-0 podman[403080]: 2025-10-02 13:11:12.658337735 +0000 UTC m=+1.022067548 container died 3d272cded542b006ac15c1822d04f43486b22af03609b2717562dadfc03b0c30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mendel, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 13:11:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-f32e5a86f62f4fdd7a287c25b0a22cb5f4a4000d47822a696a50eb27708cd550-merged.mount: Deactivated successfully.
Oct 02 13:11:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3394: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.3 MiB/s wr, 90 op/s
Oct 02 13:11:12 compute-0 podman[403080]: 2025-10-02 13:11:12.730692825 +0000 UTC m=+1.094422648 container remove 3d272cded542b006ac15c1822d04f43486b22af03609b2717562dadfc03b0c30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:11:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:11:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:11:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:11:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:11:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:11:12 compute-0 systemd[1]: libpod-conmon-3d272cded542b006ac15c1822d04f43486b22af03609b2717562dadfc03b0c30.scope: Deactivated successfully.
Oct 02 13:11:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:11:12 compute-0 sudo[402974]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:12 compute-0 sudo[403116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:11:12 compute-0 sudo[403116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:12 compute-0 sudo[403116]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:12 compute-0 sudo[403141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:11:12 compute-0 sudo[403141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:12 compute-0 sudo[403141]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:13 compute-0 sudo[403166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:11:13 compute-0 sudo[403166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:13 compute-0 sudo[403166]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:13 compute-0 sudo[403191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 13:11:13 compute-0 sudo[403191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:13.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:13 compute-0 podman[403256]: 2025-10-02 13:11:13.523016053 +0000 UTC m=+0.040380708 container create b3453960d6fb49bb363d5991695e189e0e516d2e1bbb832c24b2f47ea4d63115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_newton, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 13:11:13 compute-0 systemd[1]: Started libpod-conmon-b3453960d6fb49bb363d5991695e189e0e516d2e1bbb832c24b2f47ea4d63115.scope.
Oct 02 13:11:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:11:13 compute-0 podman[403256]: 2025-10-02 13:11:13.505314416 +0000 UTC m=+0.022679091 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:11:13 compute-0 podman[403256]: 2025-10-02 13:11:13.611885724 +0000 UTC m=+0.129250399 container init b3453960d6fb49bb363d5991695e189e0e516d2e1bbb832c24b2f47ea4d63115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 13:11:13 compute-0 podman[403256]: 2025-10-02 13:11:13.619683322 +0000 UTC m=+0.137047997 container start b3453960d6fb49bb363d5991695e189e0e516d2e1bbb832c24b2f47ea4d63115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_newton, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:11:13 compute-0 podman[403256]: 2025-10-02 13:11:13.623506305 +0000 UTC m=+0.140870980 container attach b3453960d6fb49bb363d5991695e189e0e516d2e1bbb832c24b2f47ea4d63115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_newton, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:11:13 compute-0 systemd[1]: libpod-b3453960d6fb49bb363d5991695e189e0e516d2e1bbb832c24b2f47ea4d63115.scope: Deactivated successfully.
Oct 02 13:11:13 compute-0 exciting_newton[403272]: 167 167
Oct 02 13:11:13 compute-0 conmon[403272]: conmon b3453960d6fb49bb363d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b3453960d6fb49bb363d5991695e189e0e516d2e1bbb832c24b2f47ea4d63115.scope/container/memory.events
Oct 02 13:11:13 compute-0 podman[403256]: 2025-10-02 13:11:13.630418002 +0000 UTC m=+0.147782657 container died b3453960d6fb49bb363d5991695e189e0e516d2e1bbb832c24b2f47ea4d63115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_newton, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 13:11:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-6665b40bd69790a324cdcfbf837fea4eeda9953dc6c49f88405110efe193d52c-merged.mount: Deactivated successfully.
Oct 02 13:11:13 compute-0 podman[403256]: 2025-10-02 13:11:13.663984884 +0000 UTC m=+0.181349539 container remove b3453960d6fb49bb363d5991695e189e0e516d2e1bbb832c24b2f47ea4d63115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_newton, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 13:11:13 compute-0 systemd[1]: libpod-conmon-b3453960d6fb49bb363d5991695e189e0e516d2e1bbb832c24b2f47ea4d63115.scope: Deactivated successfully.
Oct 02 13:11:13 compute-0 podman[403296]: 2025-10-02 13:11:13.82130706 +0000 UTC m=+0.038174725 container create 741419314fadfed78df7597c435d22e130630c7674dcab7d7b352bfb6691f1d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mclean, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 13:11:13 compute-0 systemd[1]: Started libpod-conmon-741419314fadfed78df7597c435d22e130630c7674dcab7d7b352bfb6691f1d7.scope.
Oct 02 13:11:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:11:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57d4e6388d7102991199eae6e7cb9dc732b9fdbdf8fe23add5025634a7b88dd7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:11:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57d4e6388d7102991199eae6e7cb9dc732b9fdbdf8fe23add5025634a7b88dd7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:11:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57d4e6388d7102991199eae6e7cb9dc732b9fdbdf8fe23add5025634a7b88dd7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:11:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57d4e6388d7102991199eae6e7cb9dc732b9fdbdf8fe23add5025634a7b88dd7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:11:13 compute-0 podman[403296]: 2025-10-02 13:11:13.897713468 +0000 UTC m=+0.114581153 container init 741419314fadfed78df7597c435d22e130630c7674dcab7d7b352bfb6691f1d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Oct 02 13:11:13 compute-0 podman[403296]: 2025-10-02 13:11:13.805546449 +0000 UTC m=+0.022414134 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:11:13 compute-0 podman[403296]: 2025-10-02 13:11:13.905456555 +0000 UTC m=+0.122324220 container start 741419314fadfed78df7597c435d22e130630c7674dcab7d7b352bfb6691f1d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mclean, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 13:11:13 compute-0 podman[403296]: 2025-10-02 13:11:13.909235717 +0000 UTC m=+0.126103462 container attach 741419314fadfed78df7597c435d22e130630c7674dcab7d7b352bfb6691f1d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:11:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:13.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:13 compute-0 ceph-mon[73607]: pgmap v3394: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.3 MiB/s wr, 90 op/s
Oct 02 13:11:14 compute-0 nova_compute[257802]: 2025-10-02 13:11:14.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:14 compute-0 goofy_mclean[403312]: {
Oct 02 13:11:14 compute-0 goofy_mclean[403312]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 13:11:14 compute-0 goofy_mclean[403312]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:11:14 compute-0 goofy_mclean[403312]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:11:14 compute-0 goofy_mclean[403312]:         "osd_id": 1,
Oct 02 13:11:14 compute-0 goofy_mclean[403312]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:11:14 compute-0 goofy_mclean[403312]:         "type": "bluestore"
Oct 02 13:11:14 compute-0 goofy_mclean[403312]:     }
Oct 02 13:11:14 compute-0 goofy_mclean[403312]: }
Oct 02 13:11:14 compute-0 systemd[1]: libpod-741419314fadfed78df7597c435d22e130630c7674dcab7d7b352bfb6691f1d7.scope: Deactivated successfully.
Oct 02 13:11:14 compute-0 podman[403296]: 2025-10-02 13:11:14.683813166 +0000 UTC m=+0.900680841 container died 741419314fadfed78df7597c435d22e130630c7674dcab7d7b352bfb6691f1d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mclean, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 13:11:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3395: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 603 KiB/s rd, 4.3 MiB/s wr, 115 op/s
Oct 02 13:11:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-57d4e6388d7102991199eae6e7cb9dc732b9fdbdf8fe23add5025634a7b88dd7-merged.mount: Deactivated successfully.
Oct 02 13:11:14 compute-0 podman[403296]: 2025-10-02 13:11:14.737025503 +0000 UTC m=+0.953893168 container remove 741419314fadfed78df7597c435d22e130630c7674dcab7d7b352bfb6691f1d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 13:11:14 compute-0 systemd[1]: libpod-conmon-741419314fadfed78df7597c435d22e130630c7674dcab7d7b352bfb6691f1d7.scope: Deactivated successfully.
Oct 02 13:11:14 compute-0 sudo[403191]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:11:14 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:11:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:11:14 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:11:14 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev caf01c3c-2ef6-4094-9edb-1d5365f13edf does not exist
Oct 02 13:11:14 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 5cc77032-bba2-4c6b-aa91-6ad0f241e84f does not exist
Oct 02 13:11:14 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 8d5e0423-3c3e-4346-bb60-b48f94cee03c does not exist
Oct 02 13:11:14 compute-0 sudo[403345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:11:14 compute-0 sudo[403345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:14 compute-0 sudo[403345]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:14 compute-0 sudo[403370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:11:14 compute-0 sudo[403370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:14 compute-0 sudo[403370]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:15 compute-0 podman[403394]: 2025-10-02 13:11:15.050983239 +0000 UTC m=+0.054796137 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Oct 02 13:11:15 compute-0 podman[403396]: 2025-10-02 13:11:15.056672197 +0000 UTC m=+0.056960629 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 13:11:15 compute-0 podman[403395]: 2025-10-02 13:11:15.087951143 +0000 UTC m=+0.087758164 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:11:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:15.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:15 compute-0 nova_compute[257802]: 2025-10-02 13:11:15.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:15 compute-0 ceph-mon[73607]: pgmap v3395: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 603 KiB/s rd, 4.3 MiB/s wr, 115 op/s
Oct 02 13:11:15 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:11:15 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:11:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:15.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:11:16 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4106694925' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:11:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3396: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.8 MiB/s wr, 134 op/s
Oct 02 13:11:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/4106694925' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:11:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:17.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:11:17.799 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=83, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=82) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:11:17 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:11:17.800 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:11:17 compute-0 nova_compute[257802]: 2025-10-02 13:11:17.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:17.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:17 compute-0 ceph-mon[73607]: pgmap v3396: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.8 MiB/s wr, 134 op/s
Oct 02 13:11:17 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4198246259' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:11:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3397: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.2 MiB/s wr, 139 op/s
Oct 02 13:11:19 compute-0 nova_compute[257802]: 2025-10-02 13:11:19.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:19.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:11:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:19.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:11:19 compute-0 ceph-mon[73607]: pgmap v3397: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.2 MiB/s wr, 139 op/s
Oct 02 13:11:20 compute-0 nova_compute[257802]: 2025-10-02 13:11:20.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3398: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 119 op/s
Oct 02 13:11:21 compute-0 ceph-mon[73607]: pgmap v3398: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 119 op/s
Oct 02 13:11:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:21.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:21.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3399: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 94 op/s
Oct 02 13:11:22 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:11:22.801 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '83'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:11:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:23.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:23 compute-0 ceph-mon[73607]: pgmap v3399: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 94 op/s
Oct 02 13:11:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:23.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:24 compute-0 nova_compute[257802]: 2025-10-02 13:11:24.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3400: 305 pgs: 305 active+clean; 214 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 291 KiB/s wr, 152 op/s
Oct 02 13:11:24 compute-0 podman[403459]: 2025-10-02 13:11:24.988969135 +0000 UTC m=+0.119702336 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 13:11:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:11:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:25.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:11:25 compute-0 nova_compute[257802]: 2025-10-02 13:11:25.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:25 compute-0 ceph-mon[73607]: pgmap v3400: 305 pgs: 305 active+clean; 214 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 291 KiB/s wr, 152 op/s
Oct 02 13:11:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:25.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3401: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 655 KiB/s wr, 149 op/s
Oct 02 13:11:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:11:26.996 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:11:26.997 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:11:26.997 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:11:27 compute-0 sudo[403488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:11:27 compute-0 sudo[403488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:27 compute-0 sudo[403488]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:27 compute-0 sudo[403513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:11:27 compute-0 sudo[403513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:27 compute-0 sudo[403513]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:27.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:27.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:27 compute-0 ceph-mon[73607]: pgmap v3401: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 655 KiB/s wr, 149 op/s
Oct 02 13:11:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3402: 305 pgs: 305 active+clean; 242 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.0 MiB/s wr, 164 op/s
Oct 02 13:11:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2021659220' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:11:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2021659220' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:11:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3003176227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1898368673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:29 compute-0 nova_compute[257802]: 2025-10-02 13:11:29.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:29.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:11:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:29.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:11:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e417 do_prune osdmap full prune enabled
Oct 02 13:11:29 compute-0 ceph-mon[73607]: pgmap v3402: 305 pgs: 305 active+clean; 242 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.0 MiB/s wr, 164 op/s
Oct 02 13:11:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1393390085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e418 e418: 3 total, 3 up, 3 in
Oct 02 13:11:29 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e418: 3 total, 3 up, 3 in
Oct 02 13:11:30 compute-0 nova_compute[257802]: 2025-10-02 13:11:30.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:30 compute-0 nova_compute[257802]: 2025-10-02 13:11:30.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:11:30 compute-0 nova_compute[257802]: 2025-10-02 13:11:30.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3404: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 196 op/s
Oct 02 13:11:30 compute-0 ceph-mon[73607]: osdmap e418: 3 total, 3 up, 3 in
Oct 02 13:11:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:31.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:11:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:31.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:11:32 compute-0 ceph-mon[73607]: pgmap v3404: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 196 op/s
Oct 02 13:11:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:11:32 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3184112438' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:11:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:11:32 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3184112438' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:11:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3405: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 196 op/s
Oct 02 13:11:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3184112438' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:11:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3184112438' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:11:33 compute-0 ceph-mon[73607]: pgmap v3405: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 196 op/s
Oct 02 13:11:33 compute-0 nova_compute[257802]: 2025-10-02 13:11:33.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:33 compute-0 nova_compute[257802]: 2025-10-02 13:11:33.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:11:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:33.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:11:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:11:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:33.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:11:34 compute-0 nova_compute[257802]: 2025-10-02 13:11:34.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3406: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 210 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 768 KiB/s rd, 2.3 MiB/s wr, 134 op/s
Oct 02 13:11:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:11:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:35.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:11:35 compute-0 nova_compute[257802]: 2025-10-02 13:11:35.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:35 compute-0 ceph-mon[73607]: pgmap v3406: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 210 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 768 KiB/s rd, 2.3 MiB/s wr, 134 op/s
Oct 02 13:11:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:35.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:36 compute-0 nova_compute[257802]: 2025-10-02 13:11:36.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3407: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 299 KiB/s rd, 1.8 MiB/s wr, 119 op/s
Oct 02 13:11:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:37.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:37 compute-0 ceph-mon[73607]: pgmap v3407: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 299 KiB/s rd, 1.8 MiB/s wr, 119 op/s
Oct 02 13:11:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:37.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3408: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 148 KiB/s wr, 83 op/s
Oct 02 13:11:39 compute-0 nova_compute[257802]: 2025-10-02 13:11:39.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:39 compute-0 nova_compute[257802]: 2025-10-02 13:11:39.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:11:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:39.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:11:39 compute-0 ceph-mon[73607]: pgmap v3408: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 148 KiB/s wr, 83 op/s
Oct 02 13:11:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:39.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:40 compute-0 nova_compute[257802]: 2025-10-02 13:11:40.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3409: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 34 KiB/s wr, 52 op/s
Oct 02 13:11:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e418 do_prune osdmap full prune enabled
Oct 02 13:11:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e419 e419: 3 total, 3 up, 3 in
Oct 02 13:11:41 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e419: 3 total, 3 up, 3 in
Oct 02 13:11:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:41.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:41 compute-0 ceph-mon[73607]: pgmap v3409: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 34 KiB/s wr, 52 op/s
Oct 02 13:11:41 compute-0 ceph-mon[73607]: osdmap e419: 3 total, 3 up, 3 in
Oct 02 13:11:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:41.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:11:42
Oct 02 13:11:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:11:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:11:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'images', 'vms', '.rgw.root', 'default.rgw.meta']
Oct 02 13:11:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:11:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3411: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 16 KiB/s wr, 31 op/s
Oct 02 13:11:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:11:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:11:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:11:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:11:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:11:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:11:43 compute-0 nova_compute[257802]: 2025-10-02 13:11:43.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:43.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:11:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:11:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:11:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:11:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:11:43 compute-0 ceph-mon[73607]: pgmap v3411: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 16 KiB/s wr, 31 op/s
Oct 02 13:11:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:11:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:43.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:11:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:11:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:11:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:11:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:11:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:11:44 compute-0 nova_compute[257802]: 2025-10-02 13:11:44.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3412: 305 pgs: 305 active+clean; 212 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 52 op/s
Oct 02 13:11:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:11:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:45.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:11:45 compute-0 nova_compute[257802]: 2025-10-02 13:11:45.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:45 compute-0 podman[403549]: 2025-10-02 13:11:45.919676836 +0000 UTC m=+0.050517164 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 13:11:45 compute-0 podman[403547]: 2025-10-02 13:11:45.920087226 +0000 UTC m=+0.056003186 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 02 13:11:45 compute-0 podman[403548]: 2025-10-02 13:11:45.921421169 +0000 UTC m=+0.056612602 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 13:11:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:45.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:46 compute-0 ceph-mon[73607]: pgmap v3412: 305 pgs: 305 active+clean; 212 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 52 op/s
Oct 02 13:11:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3033729653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1865546156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3413: 305 pgs: 305 active+clean; 211 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 02 13:11:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/183916156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:47 compute-0 ceph-mon[73607]: pgmap v3413: 305 pgs: 305 active+clean; 211 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 02 13:11:47 compute-0 nova_compute[257802]: 2025-10-02 13:11:47.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:47 compute-0 nova_compute[257802]: 2025-10-02 13:11:47.097 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:11:47 compute-0 nova_compute[257802]: 2025-10-02 13:11:47.097 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:11:47 compute-0 nova_compute[257802]: 2025-10-02 13:11:47.113 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:11:47 compute-0 sudo[403606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:11:47 compute-0 sudo[403606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:47 compute-0 sudo[403606]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:47 compute-0 sudo[403631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:11:47 compute-0 sudo[403631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:47 compute-0 sudo[403631]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:47.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:47.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1733348881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3414: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 502 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Oct 02 13:11:49 compute-0 ceph-mon[73607]: pgmap v3414: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 502 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Oct 02 13:11:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:11:49 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3068125743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:11:49 compute-0 nova_compute[257802]: 2025-10-02 13:11:49.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:11:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:49.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:11:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:49.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3068125743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:11:50 compute-0 nova_compute[257802]: 2025-10-02 13:11:50.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3415: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 48 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Oct 02 13:11:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:51 compute-0 ceph-mon[73607]: pgmap v3415: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 48 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Oct 02 13:11:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:51.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:51.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3416: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Oct 02 13:11:53 compute-0 nova_compute[257802]: 2025-10-02 13:11:53.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:53 compute-0 nova_compute[257802]: 2025-10-02 13:11:53.198 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:53 compute-0 nova_compute[257802]: 2025-10-02 13:11:53.198 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:53 compute-0 nova_compute[257802]: 2025-10-02 13:11:53.199 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:11:53 compute-0 nova_compute[257802]: 2025-10-02 13:11:53.199 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:11:53 compute-0 nova_compute[257802]: 2025-10-02 13:11:53.199 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:11:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:53.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:11:53 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/183522276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:53 compute-0 nova_compute[257802]: 2025-10-02 13:11:53.639 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:11:53 compute-0 nova_compute[257802]: 2025-10-02 13:11:53.779 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:11:53 compute-0 nova_compute[257802]: 2025-10-02 13:11:53.780 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4220MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:11:53 compute-0 nova_compute[257802]: 2025-10-02 13:11:53.780 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:53 compute-0 nova_compute[257802]: 2025-10-02 13:11:53.780 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:53 compute-0 ceph-mon[73607]: pgmap v3416: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Oct 02 13:11:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/183522276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #171. Immutable memtables: 0.
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:11:53.814068) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 171
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410713814113, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1132, "num_deletes": 252, "total_data_size": 1741798, "memory_usage": 1772672, "flush_reason": "Manual Compaction"}
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #172: started
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410713824588, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 172, "file_size": 1720803, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74560, "largest_seqno": 75691, "table_properties": {"data_size": 1715425, "index_size": 2773, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12118, "raw_average_key_size": 20, "raw_value_size": 1704411, "raw_average_value_size": 2835, "num_data_blocks": 123, "num_entries": 601, "num_filter_entries": 601, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410617, "oldest_key_time": 1759410617, "file_creation_time": 1759410713, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 172, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 10543 microseconds, and 3856 cpu microseconds.
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:11:53.824621) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #172: 1720803 bytes OK
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:11:53.824638) [db/memtable_list.cc:519] [default] Level-0 commit table #172 started
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:11:53.825986) [db/memtable_list.cc:722] [default] Level-0 commit table #172: memtable #1 done
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:11:53.825999) EVENT_LOG_v1 {"time_micros": 1759410713825995, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:11:53.826017) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 1736682, prev total WAL file size 1736682, number of live WAL files 2.
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000168.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:11:53.826579) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [172(1680KB)], [170(12MB)]
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410713826624, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [172], "files_L6": [170], "score": -1, "input_data_size": 15316838, "oldest_snapshot_seqno": -1}
Oct 02 13:11:53 compute-0 nova_compute[257802]: 2025-10-02 13:11:53.874 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:11:53 compute-0 nova_compute[257802]: 2025-10-02 13:11:53.874 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:11:53 compute-0 nova_compute[257802]: 2025-10-02 13:11:53.893 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #173: 10186 keys, 13364004 bytes, temperature: kUnknown
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410713897378, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 173, "file_size": 13364004, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13297734, "index_size": 39725, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25477, "raw_key_size": 269170, "raw_average_key_size": 26, "raw_value_size": 13118912, "raw_average_value_size": 1287, "num_data_blocks": 1512, "num_entries": 10186, "num_filter_entries": 10186, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759410713, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:11:53.897648) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 13364004 bytes
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:11:53.899123) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 216.2 rd, 188.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 13.0 +0.0 blob) out(12.7 +0.0 blob), read-write-amplify(16.7) write-amplify(7.8) OK, records in: 10707, records dropped: 521 output_compression: NoCompression
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:11:53.899141) EVENT_LOG_v1 {"time_micros": 1759410713899132, "job": 106, "event": "compaction_finished", "compaction_time_micros": 70835, "compaction_time_cpu_micros": 29325, "output_level": 6, "num_output_files": 1, "total_output_size": 13364004, "num_input_records": 10707, "num_output_records": 10186, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000172.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410713899568, "job": 106, "event": "table_file_deletion", "file_number": 172}
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410713902273, "job": 106, "event": "table_file_deletion", "file_number": 170}
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:11:53.826499) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:11:53.902349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:11:53.902356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:11:53.902358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:11:53.902359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:11:53 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:11:53.902361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:11:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:53.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:11:54 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1180924015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:54 compute-0 nova_compute[257802]: 2025-10-02 13:11:54.308 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:11:54 compute-0 nova_compute[257802]: 2025-10-02 13:11:54.314 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:11:54 compute-0 nova_compute[257802]: 2025-10-02 13:11:54.360 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:11:54 compute-0 nova_compute[257802]: 2025-10-02 13:11:54.362 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:11:54 compute-0 nova_compute[257802]: 2025-10-02 13:11:54.362 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:11:54 compute-0 nova_compute[257802]: 2025-10-02 13:11:54.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3417: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Oct 02 13:11:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1180924015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0031503364624046156 of space, bias 1.0, pg target 0.9451009387213847 quantized to 32 (current 32)
Oct 02 13:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:11:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:11:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:55.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:55 compute-0 nova_compute[257802]: 2025-10-02 13:11:55.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:55 compute-0 ceph-mon[73607]: pgmap v3417: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Oct 02 13:11:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/4142249881' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:11:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/4142249881' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:11:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1474483427' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:11:55 compute-0 podman[403704]: 2025-10-02 13:11:55.958598534 +0000 UTC m=+0.102443510 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 13:11:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:11:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:55.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:11:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3418: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 667 KiB/s wr, 36 op/s
Oct 02 13:11:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:57.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:57 compute-0 ceph-mon[73607]: pgmap v3418: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 667 KiB/s wr, 36 op/s
Oct 02 13:11:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:57.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3419: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.9 KiB/s rd, 15 KiB/s wr, 13 op/s
Oct 02 13:11:59 compute-0 nova_compute[257802]: 2025-10-02 13:11:59.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:59.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:59 compute-0 ceph-mon[73607]: pgmap v3419: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.9 KiB/s rd, 15 KiB/s wr, 13 op/s
Oct 02 13:11:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:11:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:11:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:59.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:12:00 compute-0 nova_compute[257802]: 2025-10-02 13:12:00.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3420: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 581 KiB/s rd, 14 KiB/s wr, 28 op/s
Oct 02 13:12:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:01.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:01 compute-0 ceph-mon[73607]: pgmap v3420: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 581 KiB/s rd, 14 KiB/s wr, 28 op/s
Oct 02 13:12:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:01.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:02 compute-0 nova_compute[257802]: 2025-10-02 13:12:02.361 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3421: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 581 KiB/s rd, 14 KiB/s wr, 28 op/s
Oct 02 13:12:03 compute-0 ceph-mon[73607]: pgmap v3421: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 581 KiB/s rd, 14 KiB/s wr, 28 op/s
Oct 02 13:12:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:03.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:03.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:04 compute-0 nova_compute[257802]: 2025-10-02 13:12:04.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3422: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 14 KiB/s wr, 69 op/s
Oct 02 13:12:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:05.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:05 compute-0 nova_compute[257802]: 2025-10-02 13:12:05.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:05.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:05 compute-0 ceph-mon[73607]: pgmap v3422: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 14 KiB/s wr, 69 op/s
Oct 02 13:12:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3423: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 13:12:07 compute-0 ceph-mon[73607]: pgmap v3423: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 13:12:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:12:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:07.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:12:07 compute-0 sudo[403736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:12:07 compute-0 sudo[403736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:07 compute-0 sudo[403736]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:07 compute-0 sudo[403761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:12:07 compute-0 sudo[403761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:07 compute-0 sudo[403761]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:12:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:07.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:12:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3424: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 13:12:09 compute-0 nova_compute[257802]: 2025-10-02 13:12:09.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:09.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:09 compute-0 ceph-mon[73607]: pgmap v3424: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 13:12:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:09.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:10 compute-0 nova_compute[257802]: 2025-10-02 13:12:10.338 2 DEBUG oslo_concurrency.lockutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:10 compute-0 nova_compute[257802]: 2025-10-02 13:12:10.338 2 DEBUG oslo_concurrency.lockutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:10 compute-0 nova_compute[257802]: 2025-10-02 13:12:10.357 2 DEBUG nova.compute.manager [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:12:10 compute-0 nova_compute[257802]: 2025-10-02 13:12:10.456 2 DEBUG oslo_concurrency.lockutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:10 compute-0 nova_compute[257802]: 2025-10-02 13:12:10.456 2 DEBUG oslo_concurrency.lockutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:10 compute-0 nova_compute[257802]: 2025-10-02 13:12:10.465 2 DEBUG nova.virt.hardware [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:12:10 compute-0 nova_compute[257802]: 2025-10-02 13:12:10.465 2 INFO nova.compute.claims [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Claim successful on node compute-0.ctlplane.example.com
Oct 02 13:12:10 compute-0 nova_compute[257802]: 2025-10-02 13:12:10.564 2 DEBUG oslo_concurrency.processutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:12:10 compute-0 nova_compute[257802]: 2025-10-02 13:12:10.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3425: 305 pgs: 305 active+clean; 170 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 316 KiB/s wr, 82 op/s
Oct 02 13:12:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:12:10 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/583397079' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:10 compute-0 nova_compute[257802]: 2025-10-02 13:12:10.990 2 DEBUG oslo_concurrency.processutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:12:10 compute-0 nova_compute[257802]: 2025-10-02 13:12:10.995 2 DEBUG nova.compute.provider_tree [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:12:11 compute-0 nova_compute[257802]: 2025-10-02 13:12:11.019 2 DEBUG nova.scheduler.client.report [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:12:11 compute-0 nova_compute[257802]: 2025-10-02 13:12:11.042 2 DEBUG oslo_concurrency.lockutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:11 compute-0 nova_compute[257802]: 2025-10-02 13:12:11.043 2 DEBUG nova.compute.manager [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:12:11 compute-0 nova_compute[257802]: 2025-10-02 13:12:11.089 2 DEBUG nova.compute.manager [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:12:11 compute-0 nova_compute[257802]: 2025-10-02 13:12:11.089 2 DEBUG nova.network.neutron [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:12:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:11 compute-0 nova_compute[257802]: 2025-10-02 13:12:11.120 2 INFO nova.virt.libvirt.driver [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:12:11 compute-0 nova_compute[257802]: 2025-10-02 13:12:11.154 2 DEBUG nova.compute.manager [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:12:11 compute-0 nova_compute[257802]: 2025-10-02 13:12:11.261 2 DEBUG nova.compute.manager [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:12:11 compute-0 nova_compute[257802]: 2025-10-02 13:12:11.263 2 DEBUG nova.virt.libvirt.driver [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:12:11 compute-0 nova_compute[257802]: 2025-10-02 13:12:11.264 2 INFO nova.virt.libvirt.driver [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Creating image(s)
Oct 02 13:12:11 compute-0 nova_compute[257802]: 2025-10-02 13:12:11.299 2 DEBUG nova.storage.rbd_utils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image a6e76331-61cd-4b84-9c5a-6bcebfc50236_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:12:11 compute-0 nova_compute[257802]: 2025-10-02 13:12:11.340 2 DEBUG nova.storage.rbd_utils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image a6e76331-61cd-4b84-9c5a-6bcebfc50236_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:12:11 compute-0 nova_compute[257802]: 2025-10-02 13:12:11.367 2 DEBUG nova.storage.rbd_utils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image a6e76331-61cd-4b84-9c5a-6bcebfc50236_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:12:11 compute-0 nova_compute[257802]: 2025-10-02 13:12:11.370 2 DEBUG oslo_concurrency.processutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:12:11 compute-0 nova_compute[257802]: 2025-10-02 13:12:11.434 2 DEBUG oslo_concurrency.processutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:12:11 compute-0 nova_compute[257802]: 2025-10-02 13:12:11.436 2 DEBUG oslo_concurrency.lockutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:11 compute-0 nova_compute[257802]: 2025-10-02 13:12:11.436 2 DEBUG oslo_concurrency.lockutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:11 compute-0 nova_compute[257802]: 2025-10-02 13:12:11.436 2 DEBUG oslo_concurrency.lockutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:11 compute-0 nova_compute[257802]: 2025-10-02 13:12:11.460 2 DEBUG nova.storage.rbd_utils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image a6e76331-61cd-4b84-9c5a-6bcebfc50236_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:12:11 compute-0 nova_compute[257802]: 2025-10-02 13:12:11.463 2 DEBUG oslo_concurrency.processutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 a6e76331-61cd-4b84-9c5a-6bcebfc50236_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:12:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:11.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:11 compute-0 ceph-mon[73607]: pgmap v3425: 305 pgs: 305 active+clean; 170 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 316 KiB/s wr, 82 op/s
Oct 02 13:12:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/583397079' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:11 compute-0 nova_compute[257802]: 2025-10-02 13:12:11.973 2 DEBUG nova.policy [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ffe4d737e4414fb3a3e358f8ca3f3e1e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '08e102ae48244af2ab448a2e1ff757df', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:12:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:12:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:11.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:12:12 compute-0 nova_compute[257802]: 2025-10-02 13:12:12.069 2 DEBUG oslo_concurrency.processutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 a6e76331-61cd-4b84-9c5a-6bcebfc50236_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:12:12 compute-0 nova_compute[257802]: 2025-10-02 13:12:12.147 2 DEBUG nova.storage.rbd_utils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] resizing rbd image a6e76331-61cd-4b84-9c5a-6bcebfc50236_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:12:12 compute-0 nova_compute[257802]: 2025-10-02 13:12:12.265 2 DEBUG nova.objects.instance [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'migration_context' on Instance uuid a6e76331-61cd-4b84-9c5a-6bcebfc50236 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:12:12 compute-0 nova_compute[257802]: 2025-10-02 13:12:12.285 2 DEBUG nova.virt.libvirt.driver [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:12:12 compute-0 nova_compute[257802]: 2025-10-02 13:12:12.286 2 DEBUG nova.virt.libvirt.driver [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Ensure instance console log exists: /var/lib/nova/instances/a6e76331-61cd-4b84-9c5a-6bcebfc50236/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:12:12 compute-0 nova_compute[257802]: 2025-10-02 13:12:12.286 2 DEBUG oslo_concurrency.lockutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:12 compute-0 nova_compute[257802]: 2025-10-02 13:12:12.286 2 DEBUG oslo_concurrency.lockutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:12 compute-0 nova_compute[257802]: 2025-10-02 13:12:12.287 2 DEBUG oslo_concurrency.lockutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3426: 305 pgs: 305 active+clean; 170 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 316 KiB/s wr, 56 op/s
Oct 02 13:12:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:12:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:12:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:12:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:12:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:12:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:12:13 compute-0 ceph-mon[73607]: pgmap v3426: 305 pgs: 305 active+clean; 170 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 316 KiB/s wr, 56 op/s
Oct 02 13:12:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:12:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:13.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:12:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:13.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:14 compute-0 nova_compute[257802]: 2025-10-02 13:12:14.335 2 DEBUG nova.network.neutron [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Successfully created port: eddb34a1-265c-47da-9fba-767aa4c99438 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:12:14 compute-0 nova_compute[257802]: 2025-10-02 13:12:14.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3427: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.1 MiB/s wr, 107 op/s
Oct 02 13:12:15 compute-0 sudo[403979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:12:15 compute-0 sudo[403979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:15 compute-0 sudo[403979]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:15 compute-0 sudo[404004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:12:15 compute-0 sudo[404004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:15 compute-0 sudo[404004]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:15 compute-0 sudo[404029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:12:15 compute-0 sudo[404029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:15 compute-0 sudo[404029]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:15 compute-0 sudo[404054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 13:12:15 compute-0 sudo[404054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:15.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:15 compute-0 nova_compute[257802]: 2025-10-02 13:12:15.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:15 compute-0 nova_compute[257802]: 2025-10-02 13:12:15.818 2 DEBUG nova.network.neutron [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Successfully updated port: eddb34a1-265c-47da-9fba-767aa4c99438 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:12:15 compute-0 nova_compute[257802]: 2025-10-02 13:12:15.949 2 DEBUG oslo_concurrency.lockutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "refresh_cache-a6e76331-61cd-4b84-9c5a-6bcebfc50236" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:12:15 compute-0 nova_compute[257802]: 2025-10-02 13:12:15.949 2 DEBUG oslo_concurrency.lockutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquired lock "refresh_cache-a6e76331-61cd-4b84-9c5a-6bcebfc50236" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:12:15 compute-0 nova_compute[257802]: 2025-10-02 13:12:15.949 2 DEBUG nova.network.neutron [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:12:15 compute-0 ceph-mon[73607]: pgmap v3427: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.1 MiB/s wr, 107 op/s
Oct 02 13:12:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:12:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:15.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:12:16 compute-0 podman[404150]: 2025-10-02 13:12:16.012405509 +0000 UTC m=+0.065423574 container exec 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:12:16 compute-0 nova_compute[257802]: 2025-10-02 13:12:16.028 2 DEBUG nova.compute.manager [req-d56f0c42-5fdc-405d-80e5-4f76e4bd71ac req-2a6c9671-1ca5-426c-97e3-26fdd33ec755 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Received event network-changed-eddb34a1-265c-47da-9fba-767aa4c99438 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:12:16 compute-0 nova_compute[257802]: 2025-10-02 13:12:16.029 2 DEBUG nova.compute.manager [req-d56f0c42-5fdc-405d-80e5-4f76e4bd71ac req-2a6c9671-1ca5-426c-97e3-26fdd33ec755 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Refreshing instance network info cache due to event network-changed-eddb34a1-265c-47da-9fba-767aa4c99438. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:12:16 compute-0 nova_compute[257802]: 2025-10-02 13:12:16.030 2 DEBUG oslo_concurrency.lockutils [req-d56f0c42-5fdc-405d-80e5-4f76e4bd71ac req-2a6c9671-1ca5-426c-97e3-26fdd33ec755 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-a6e76331-61cd-4b84-9c5a-6bcebfc50236" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:12:16 compute-0 podman[404150]: 2025-10-02 13:12:16.106135247 +0000 UTC m=+0.159153282 container exec_died 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:12:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:16 compute-0 nova_compute[257802]: 2025-10-02 13:12:16.165 2 DEBUG nova.network.neutron [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:12:16 compute-0 podman[404183]: 2025-10-02 13:12:16.234917563 +0000 UTC m=+0.062529114 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 13:12:16 compute-0 podman[404187]: 2025-10-02 13:12:16.235886296 +0000 UTC m=+0.063326423 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:12:16 compute-0 podman[404186]: 2025-10-02 13:12:16.235668551 +0000 UTC m=+0.061157401 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 13:12:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 13:12:16 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:12:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 13:12:16 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:12:16 compute-0 podman[404340]: 2025-10-02 13:12:16.627532751 +0000 UTC m=+0.057349969 container exec 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 13:12:16 compute-0 podman[404340]: 2025-10-02 13:12:16.633717251 +0000 UTC m=+0.063534459 container exec_died 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 13:12:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3428: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 417 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct 02 13:12:16 compute-0 podman[404404]: 2025-10-02 13:12:16.833214637 +0000 UTC m=+0.052413649 container exec a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, distribution-scope=public, version=2.2.4, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, release=1793, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.28.2)
Oct 02 13:12:16 compute-0 podman[404404]: 2025-10-02 13:12:16.846175621 +0000 UTC m=+0.065374603 container exec_died a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, version=2.2.4, release=1793, io.buildah.version=1.28.2, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, name=keepalived, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph.)
Oct 02 13:12:16 compute-0 sudo[404054]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:12:17 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:12:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:12:17 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:12:17 compute-0 nova_compute[257802]: 2025-10-02 13:12:17.171 2 DEBUG nova.network.neutron [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Updating instance_info_cache with network_info: [{"id": "eddb34a1-265c-47da-9fba-767aa4c99438", "address": "fa:16:3e:3a:5a:b9", "network": {"id": "acc93fa2-a668-4911-9ce8-f25e326a9593", "bridge": "br-int", "label": "tempest-network-smoke--2102075172", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeddb34a1-26", "ovs_interfaceid": "eddb34a1-265c-47da-9fba-767aa4c99438", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:12:17 compute-0 sudo[404453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:12:17 compute-0 nova_compute[257802]: 2025-10-02 13:12:17.191 2 DEBUG oslo_concurrency.lockutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Releasing lock "refresh_cache-a6e76331-61cd-4b84-9c5a-6bcebfc50236" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:12:17 compute-0 nova_compute[257802]: 2025-10-02 13:12:17.192 2 DEBUG nova.compute.manager [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Instance network_info: |[{"id": "eddb34a1-265c-47da-9fba-767aa4c99438", "address": "fa:16:3e:3a:5a:b9", "network": {"id": "acc93fa2-a668-4911-9ce8-f25e326a9593", "bridge": "br-int", "label": "tempest-network-smoke--2102075172", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeddb34a1-26", "ovs_interfaceid": "eddb34a1-265c-47da-9fba-767aa4c99438", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:12:17 compute-0 nova_compute[257802]: 2025-10-02 13:12:17.192 2 DEBUG oslo_concurrency.lockutils [req-d56f0c42-5fdc-405d-80e5-4f76e4bd71ac req-2a6c9671-1ca5-426c-97e3-26fdd33ec755 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-a6e76331-61cd-4b84-9c5a-6bcebfc50236" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:12:17 compute-0 nova_compute[257802]: 2025-10-02 13:12:17.192 2 DEBUG nova.network.neutron [req-d56f0c42-5fdc-405d-80e5-4f76e4bd71ac req-2a6c9671-1ca5-426c-97e3-26fdd33ec755 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Refreshing network info cache for port eddb34a1-265c-47da-9fba-767aa4c99438 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:12:17 compute-0 nova_compute[257802]: 2025-10-02 13:12:17.195 2 DEBUG nova.virt.libvirt.driver [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Start _get_guest_xml network_info=[{"id": "eddb34a1-265c-47da-9fba-767aa4c99438", "address": "fa:16:3e:3a:5a:b9", "network": {"id": "acc93fa2-a668-4911-9ce8-f25e326a9593", "bridge": "br-int", "label": "tempest-network-smoke--2102075172", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeddb34a1-26", "ovs_interfaceid": "eddb34a1-265c-47da-9fba-767aa4c99438", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:12:17 compute-0 sudo[404453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:17 compute-0 nova_compute[257802]: 2025-10-02 13:12:17.199 2 WARNING nova.virt.libvirt.driver [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:12:17 compute-0 sudo[404453]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:17 compute-0 nova_compute[257802]: 2025-10-02 13:12:17.203 2 DEBUG nova.virt.libvirt.host [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:12:17 compute-0 nova_compute[257802]: 2025-10-02 13:12:17.204 2 DEBUG nova.virt.libvirt.host [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:12:17 compute-0 nova_compute[257802]: 2025-10-02 13:12:17.208 2 DEBUG nova.virt.libvirt.host [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:12:17 compute-0 nova_compute[257802]: 2025-10-02 13:12:17.208 2 DEBUG nova.virt.libvirt.host [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:12:17 compute-0 nova_compute[257802]: 2025-10-02 13:12:17.209 2 DEBUG nova.virt.libvirt.driver [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:12:17 compute-0 nova_compute[257802]: 2025-10-02 13:12:17.209 2 DEBUG nova.virt.hardware [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:12:17 compute-0 nova_compute[257802]: 2025-10-02 13:12:17.210 2 DEBUG nova.virt.hardware [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:12:17 compute-0 nova_compute[257802]: 2025-10-02 13:12:17.210 2 DEBUG nova.virt.hardware [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:12:17 compute-0 nova_compute[257802]: 2025-10-02 13:12:17.210 2 DEBUG nova.virt.hardware [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:12:17 compute-0 nova_compute[257802]: 2025-10-02 13:12:17.210 2 DEBUG nova.virt.hardware [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:12:17 compute-0 nova_compute[257802]: 2025-10-02 13:12:17.211 2 DEBUG nova.virt.hardware [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:12:17 compute-0 nova_compute[257802]: 2025-10-02 13:12:17.211 2 DEBUG nova.virt.hardware [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:12:17 compute-0 nova_compute[257802]: 2025-10-02 13:12:17.211 2 DEBUG nova.virt.hardware [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:12:17 compute-0 nova_compute[257802]: 2025-10-02 13:12:17.211 2 DEBUG nova.virt.hardware [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:12:17 compute-0 nova_compute[257802]: 2025-10-02 13:12:17.211 2 DEBUG nova.virt.hardware [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:12:17 compute-0 nova_compute[257802]: 2025-10-02 13:12:17.212 2 DEBUG nova.virt.hardware [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:12:17 compute-0 nova_compute[257802]: 2025-10-02 13:12:17.214 2 DEBUG oslo_concurrency.processutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:12:17 compute-0 sudo[404478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:12:17 compute-0 sudo[404478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:17 compute-0 sudo[404478]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:17 compute-0 sudo[404504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:12:17 compute-0 sudo[404504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:17 compute-0 sudo[404504]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:17 compute-0 sudo[404548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:12:17 compute-0 sudo[404548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:17.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:17 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:12:17 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:12:17 compute-0 ceph-mon[73607]: pgmap v3428: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 417 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct 02 13:12:17 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:12:17 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:12:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:12:17 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4239053997' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:12:17 compute-0 nova_compute[257802]: 2025-10-02 13:12:17.709 2 DEBUG oslo_concurrency.processutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:12:17 compute-0 nova_compute[257802]: 2025-10-02 13:12:17.734 2 DEBUG nova.storage.rbd_utils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image a6e76331-61cd-4b84-9c5a-6bcebfc50236_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:12:17 compute-0 nova_compute[257802]: 2025-10-02 13:12:17.737 2 DEBUG oslo_concurrency.processutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:12:17 compute-0 sudo[404548]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:12:17 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:12:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:12:17 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:12:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:12:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.002000047s ======
Oct 02 13:12:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:17.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Oct 02 13:12:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:12:18 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 5cc16ce6-7a5c-42e2-9cea-167ddaeac38b does not exist
Oct 02 13:12:18 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev d340f62b-4199-4dd2-ad2a-81c923b3566e does not exist
Oct 02 13:12:18 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 246df056-211d-4978-91ed-99e4b653ab2a does not exist
Oct 02 13:12:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:12:18 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:12:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:12:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:12:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:12:18 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:12:18 compute-0 sudo[404643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:12:18 compute-0 sudo[404643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:18 compute-0 sudo[404643]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:12:18 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3337520356' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:12:18 compute-0 sudo[404668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:12:18 compute-0 sudo[404668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:18 compute-0 sudo[404668]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.163 2 DEBUG oslo_concurrency.processutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.165 2 DEBUG nova.virt.libvirt.vif [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:12:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1710981769',display_name='tempest-TestNetworkAdvancedServerOps-server-1710981769',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1710981769',id=216,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF+oMSn5P+xoqvEw63G0Plc0Av/pnjbZbz+0lJVZCi7mAGtNM6HIVFIBh+PJdNpowWjoQaoGdr7QXqOABJI4p9p4Fmv+8P57z6CZu1tIPiDwua2k+BkVDF+iuQvfIG/QLw==',key_name='tempest-TestNetworkAdvancedServerOps-1766999338',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-8eo70ise',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:12:11Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=a6e76331-61cd-4b84-9c5a-6bcebfc50236,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eddb34a1-265c-47da-9fba-767aa4c99438", "address": "fa:16:3e:3a:5a:b9", "network": {"id": "acc93fa2-a668-4911-9ce8-f25e326a9593", "bridge": "br-int", "label": "tempest-network-smoke--2102075172", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeddb34a1-26", "ovs_interfaceid": "eddb34a1-265c-47da-9fba-767aa4c99438", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.166 2 DEBUG nova.network.os_vif_util [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "eddb34a1-265c-47da-9fba-767aa4c99438", "address": "fa:16:3e:3a:5a:b9", "network": {"id": "acc93fa2-a668-4911-9ce8-f25e326a9593", "bridge": "br-int", "label": "tempest-network-smoke--2102075172", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeddb34a1-26", "ovs_interfaceid": "eddb34a1-265c-47da-9fba-767aa4c99438", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.167 2 DEBUG nova.network.os_vif_util [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:5a:b9,bridge_name='br-int',has_traffic_filtering=True,id=eddb34a1-265c-47da-9fba-767aa4c99438,network=Network(acc93fa2-a668-4911-9ce8-f25e326a9593),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeddb34a1-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.168 2 DEBUG nova.objects.instance [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'pci_devices' on Instance uuid a6e76331-61cd-4b84-9c5a-6bcebfc50236 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.195 2 DEBUG nova.virt.libvirt.driver [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:12:18 compute-0 nova_compute[257802]:   <uuid>a6e76331-61cd-4b84-9c5a-6bcebfc50236</uuid>
Oct 02 13:12:18 compute-0 nova_compute[257802]:   <name>instance-000000d8</name>
Oct 02 13:12:18 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 13:12:18 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 13:12:18 compute-0 nova_compute[257802]:   <metadata>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:12:18 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1710981769</nova:name>
Oct 02 13:12:18 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 13:12:17</nova:creationTime>
Oct 02 13:12:18 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 13:12:18 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 13:12:18 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 13:12:18 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 13:12:18 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:12:18 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:12:18 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 13:12:18 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 13:12:18 compute-0 nova_compute[257802]:         <nova:user uuid="ffe4d737e4414fb3a3e358f8ca3f3e1e">tempest-TestNetworkAdvancedServerOps-1527846432-project-member</nova:user>
Oct 02 13:12:18 compute-0 nova_compute[257802]:         <nova:project uuid="08e102ae48244af2ab448a2e1ff757df">tempest-TestNetworkAdvancedServerOps-1527846432</nova:project>
Oct 02 13:12:18 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 13:12:18 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 13:12:18 compute-0 nova_compute[257802]:         <nova:port uuid="eddb34a1-265c-47da-9fba-767aa4c99438">
Oct 02 13:12:18 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 13:12:18 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 13:12:18 compute-0 nova_compute[257802]:   </metadata>
Oct 02 13:12:18 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <system>
Oct 02 13:12:18 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:12:18 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:12:18 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:12:18 compute-0 nova_compute[257802]:       <entry name="serial">a6e76331-61cd-4b84-9c5a-6bcebfc50236</entry>
Oct 02 13:12:18 compute-0 nova_compute[257802]:       <entry name="uuid">a6e76331-61cd-4b84-9c5a-6bcebfc50236</entry>
Oct 02 13:12:18 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     </system>
Oct 02 13:12:18 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 13:12:18 compute-0 nova_compute[257802]:   <os>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:   </os>
Oct 02 13:12:18 compute-0 nova_compute[257802]:   <features>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <apic/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:   </features>
Oct 02 13:12:18 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:   </clock>
Oct 02 13:12:18 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:   </cpu>
Oct 02 13:12:18 compute-0 nova_compute[257802]:   <devices>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 13:12:18 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/a6e76331-61cd-4b84-9c5a-6bcebfc50236_disk">
Oct 02 13:12:18 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:       </source>
Oct 02 13:12:18 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 13:12:18 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:       </auth>
Oct 02 13:12:18 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     </disk>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 13:12:18 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/a6e76331-61cd-4b84-9c5a-6bcebfc50236_disk.config">
Oct 02 13:12:18 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:       </source>
Oct 02 13:12:18 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 13:12:18 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:       </auth>
Oct 02 13:12:18 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     </disk>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 13:12:18 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:3a:5a:b9"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:       <target dev="tapeddb34a1-26"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     </interface>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 13:12:18 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/a6e76331-61cd-4b84-9c5a-6bcebfc50236/console.log" append="off"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     </serial>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <video>
Oct 02 13:12:18 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     </video>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 13:12:18 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     </rng>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 13:12:18 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 13:12:18 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 13:12:18 compute-0 nova_compute[257802]:   </devices>
Oct 02 13:12:18 compute-0 nova_compute[257802]: </domain>
Oct 02 13:12:18 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.197 2 DEBUG nova.compute.manager [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Preparing to wait for external event network-vif-plugged-eddb34a1-265c-47da-9fba-767aa4c99438 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.197 2 DEBUG oslo_concurrency.lockutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.197 2 DEBUG oslo_concurrency.lockutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.197 2 DEBUG oslo_concurrency.lockutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.198 2 DEBUG nova.virt.libvirt.vif [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:12:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1710981769',display_name='tempest-TestNetworkAdvancedServerOps-server-1710981769',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1710981769',id=216,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF+oMSn5P+xoqvEw63G0Plc0Av/pnjbZbz+0lJVZCi7mAGtNM6HIVFIBh+PJdNpowWjoQaoGdr7QXqOABJI4p9p4Fmv+8P57z6CZu1tIPiDwua2k+BkVDF+iuQvfIG/QLw==',key_name='tempest-TestNetworkAdvancedServerOps-1766999338',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-8eo70ise',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:12:11Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=a6e76331-61cd-4b84-9c5a-6bcebfc50236,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eddb34a1-265c-47da-9fba-767aa4c99438", "address": "fa:16:3e:3a:5a:b9", "network": {"id": "acc93fa2-a668-4911-9ce8-f25e326a9593", "bridge": "br-int", "label": "tempest-network-smoke--2102075172", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeddb34a1-26", "ovs_interfaceid": "eddb34a1-265c-47da-9fba-767aa4c99438", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.198 2 DEBUG nova.network.os_vif_util [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "eddb34a1-265c-47da-9fba-767aa4c99438", "address": "fa:16:3e:3a:5a:b9", "network": {"id": "acc93fa2-a668-4911-9ce8-f25e326a9593", "bridge": "br-int", "label": "tempest-network-smoke--2102075172", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeddb34a1-26", "ovs_interfaceid": "eddb34a1-265c-47da-9fba-767aa4c99438", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:12:18 compute-0 sudo[404695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.198 2 DEBUG nova.network.os_vif_util [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:5a:b9,bridge_name='br-int',has_traffic_filtering=True,id=eddb34a1-265c-47da-9fba-767aa4c99438,network=Network(acc93fa2-a668-4911-9ce8-f25e326a9593),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeddb34a1-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.199 2 DEBUG os_vif [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:5a:b9,bridge_name='br-int',has_traffic_filtering=True,id=eddb34a1-265c-47da-9fba-767aa4c99438,network=Network(acc93fa2-a668-4911-9ce8-f25e326a9593),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeddb34a1-26') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.200 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.200 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:12:18 compute-0 sudo[404695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.204 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeddb34a1-26, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.204 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeddb34a1-26, col_values=(('external_ids', {'iface-id': 'eddb34a1-265c-47da-9fba-767aa4c99438', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3a:5a:b9', 'vm-uuid': 'a6e76331-61cd-4b84-9c5a-6bcebfc50236'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:12:18 compute-0 sudo[404695]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:18 compute-0 NetworkManager[44987]: <info>  [1759410738.2084] manager: (tapeddb34a1-26): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/432)
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.214 2 INFO os_vif [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:5a:b9,bridge_name='br-int',has_traffic_filtering=True,id=eddb34a1-265c-47da-9fba-767aa4c99438,network=Network(acc93fa2-a668-4911-9ce8-f25e326a9593),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeddb34a1-26')
Oct 02 13:12:18 compute-0 sudo[404722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:12:18 compute-0 sudo[404722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.281 2 DEBUG nova.virt.libvirt.driver [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.281 2 DEBUG nova.virt.libvirt.driver [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.281 2 DEBUG nova.virt.libvirt.driver [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] No VIF found with MAC fa:16:3e:3a:5a:b9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.282 2 INFO nova.virt.libvirt.driver [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Using config drive
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.311 2 DEBUG nova.storage.rbd_utils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image a6e76331-61cd-4b84-9c5a-6bcebfc50236_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.520 2 DEBUG nova.network.neutron [req-d56f0c42-5fdc-405d-80e5-4f76e4bd71ac req-2a6c9671-1ca5-426c-97e3-26fdd33ec755 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Updated VIF entry in instance network info cache for port eddb34a1-265c-47da-9fba-767aa4c99438. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.520 2 DEBUG nova.network.neutron [req-d56f0c42-5fdc-405d-80e5-4f76e4bd71ac req-2a6c9671-1ca5-426c-97e3-26fdd33ec755 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Updating instance_info_cache with network_info: [{"id": "eddb34a1-265c-47da-9fba-767aa4c99438", "address": "fa:16:3e:3a:5a:b9", "network": {"id": "acc93fa2-a668-4911-9ce8-f25e326a9593", "bridge": "br-int", "label": "tempest-network-smoke--2102075172", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeddb34a1-26", "ovs_interfaceid": "eddb34a1-265c-47da-9fba-767aa4c99438", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:12:18 compute-0 nova_compute[257802]: 2025-10-02 13:12:18.565 2 DEBUG oslo_concurrency.lockutils [req-d56f0c42-5fdc-405d-80e5-4f76e4bd71ac req-2a6c9671-1ca5-426c-97e3-26fdd33ec755 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-a6e76331-61cd-4b84-9c5a-6bcebfc50236" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:12:18 compute-0 podman[404806]: 2025-10-02 13:12:18.610529985 +0000 UTC m=+0.040285306 container create fea781e035ad98fe085c40d46013c25dfa5d1d8c74b4bd46aa0ccc54d6e582cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_borg, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 13:12:18 compute-0 systemd[1]: Started libpod-conmon-fea781e035ad98fe085c40d46013c25dfa5d1d8c74b4bd46aa0ccc54d6e582cc.scope.
Oct 02 13:12:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:12:18 compute-0 podman[404806]: 2025-10-02 13:12:18.593653777 +0000 UTC m=+0.023409128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:12:18 compute-0 podman[404806]: 2025-10-02 13:12:18.69631976 +0000 UTC m=+0.126075111 container init fea781e035ad98fe085c40d46013c25dfa5d1d8c74b4bd46aa0ccc54d6e582cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_borg, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 13:12:18 compute-0 podman[404806]: 2025-10-02 13:12:18.703356851 +0000 UTC m=+0.133112172 container start fea781e035ad98fe085c40d46013c25dfa5d1d8c74b4bd46aa0ccc54d6e582cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_borg, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:12:18 compute-0 podman[404806]: 2025-10-02 13:12:18.706078987 +0000 UTC m=+0.135834328 container attach fea781e035ad98fe085c40d46013c25dfa5d1d8c74b4bd46aa0ccc54d6e582cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 13:12:18 compute-0 happy_borg[404822]: 167 167
Oct 02 13:12:18 compute-0 systemd[1]: libpod-fea781e035ad98fe085c40d46013c25dfa5d1d8c74b4bd46aa0ccc54d6e582cc.scope: Deactivated successfully.
Oct 02 13:12:18 compute-0 podman[404806]: 2025-10-02 13:12:18.713460795 +0000 UTC m=+0.143216126 container died fea781e035ad98fe085c40d46013c25dfa5d1d8c74b4bd46aa0ccc54d6e582cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_borg, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:12:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3429: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 285 KiB/s rd, 3.9 MiB/s wr, 87 op/s
Oct 02 13:12:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-e70abc257dc2958a543fbd93782c83610fdf1df231af17dbc09dddcfe4db1e3e-merged.mount: Deactivated successfully.
Oct 02 13:12:18 compute-0 podman[404806]: 2025-10-02 13:12:18.766098419 +0000 UTC m=+0.195853740 container remove fea781e035ad98fe085c40d46013c25dfa5d1d8c74b4bd46aa0ccc54d6e582cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_borg, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 13:12:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4239053997' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:12:18 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:12:18 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:12:18 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:12:18 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:12:18 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:12:18 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:12:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3337520356' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:12:18 compute-0 systemd[1]: libpod-conmon-fea781e035ad98fe085c40d46013c25dfa5d1d8c74b4bd46aa0ccc54d6e582cc.scope: Deactivated successfully.
Oct 02 13:12:18 compute-0 podman[404848]: 2025-10-02 13:12:18.930722861 +0000 UTC m=+0.038510502 container create c5e2feb9f4cc1b4c35dd1042b784cbe42823ef7c1137d9db4e3c9132d5f47235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_gates, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 13:12:18 compute-0 systemd[1]: Started libpod-conmon-c5e2feb9f4cc1b4c35dd1042b784cbe42823ef7c1137d9db4e3c9132d5f47235.scope.
Oct 02 13:12:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:12:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6439ad0616e0dcb0afe1b5517f45cd9c987e00093b46b1bb45806d849fcf881f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:12:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6439ad0616e0dcb0afe1b5517f45cd9c987e00093b46b1bb45806d849fcf881f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:12:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6439ad0616e0dcb0afe1b5517f45cd9c987e00093b46b1bb45806d849fcf881f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:12:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6439ad0616e0dcb0afe1b5517f45cd9c987e00093b46b1bb45806d849fcf881f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:12:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6439ad0616e0dcb0afe1b5517f45cd9c987e00093b46b1bb45806d849fcf881f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:12:19 compute-0 podman[404848]: 2025-10-02 13:12:19.004607929 +0000 UTC m=+0.112395570 container init c5e2feb9f4cc1b4c35dd1042b784cbe42823ef7c1137d9db4e3c9132d5f47235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:12:19 compute-0 podman[404848]: 2025-10-02 13:12:18.914154441 +0000 UTC m=+0.021942102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:12:19 compute-0 podman[404848]: 2025-10-02 13:12:19.012703715 +0000 UTC m=+0.120491356 container start c5e2feb9f4cc1b4c35dd1042b784cbe42823ef7c1137d9db4e3c9132d5f47235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:12:19 compute-0 podman[404848]: 2025-10-02 13:12:19.015747128 +0000 UTC m=+0.123534789 container attach c5e2feb9f4cc1b4c35dd1042b784cbe42823ef7c1137d9db4e3c9132d5f47235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_gates, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Oct 02 13:12:19 compute-0 nova_compute[257802]: 2025-10-02 13:12:19.109 2 INFO nova.virt.libvirt.driver [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Creating config drive at /var/lib/nova/instances/a6e76331-61cd-4b84-9c5a-6bcebfc50236/disk.config
Oct 02 13:12:19 compute-0 nova_compute[257802]: 2025-10-02 13:12:19.114 2 DEBUG oslo_concurrency.processutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a6e76331-61cd-4b84-9c5a-6bcebfc50236/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf6213aqq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:12:19 compute-0 nova_compute[257802]: 2025-10-02 13:12:19.260 2 DEBUG oslo_concurrency.processutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a6e76331-61cd-4b84-9c5a-6bcebfc50236/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf6213aqq" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:12:19 compute-0 nova_compute[257802]: 2025-10-02 13:12:19.286 2 DEBUG nova.storage.rbd_utils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image a6e76331-61cd-4b84-9c5a-6bcebfc50236_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:12:19 compute-0 nova_compute[257802]: 2025-10-02 13:12:19.291 2 DEBUG oslo_concurrency.processutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a6e76331-61cd-4b84-9c5a-6bcebfc50236/disk.config a6e76331-61cd-4b84-9c5a-6bcebfc50236_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:12:19 compute-0 nova_compute[257802]: 2025-10-02 13:12:19.468 2 DEBUG oslo_concurrency.processutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a6e76331-61cd-4b84-9c5a-6bcebfc50236/disk.config a6e76331-61cd-4b84-9c5a-6bcebfc50236_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:12:19 compute-0 nova_compute[257802]: 2025-10-02 13:12:19.469 2 INFO nova.virt.libvirt.driver [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Deleting local config drive /var/lib/nova/instances/a6e76331-61cd-4b84-9c5a-6bcebfc50236/disk.config because it was imported into RBD.
Oct 02 13:12:19 compute-0 kernel: tapeddb34a1-26: entered promiscuous mode
Oct 02 13:12:19 compute-0 NetworkManager[44987]: <info>  [1759410739.5280] manager: (tapeddb34a1-26): new Tun device (/org/freedesktop/NetworkManager/Devices/433)
Oct 02 13:12:19 compute-0 nova_compute[257802]: 2025-10-02 13:12:19.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:19 compute-0 ovn_controller[148183]: 2025-10-02T13:12:19Z|00960|binding|INFO|Claiming lport eddb34a1-265c-47da-9fba-767aa4c99438 for this chassis.
Oct 02 13:12:19 compute-0 ovn_controller[148183]: 2025-10-02T13:12:19Z|00961|binding|INFO|eddb34a1-265c-47da-9fba-767aa4c99438: Claiming fa:16:3e:3a:5a:b9 10.100.0.8
Oct 02 13:12:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:19.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:19 compute-0 systemd-machined[211836]: New machine qemu-103-instance-000000d8.
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:19.571 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:5a:b9 10.100.0.8'], port_security=['fa:16:3e:3a:5a:b9 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a6e76331-61cd-4b84-9c5a-6bcebfc50236', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-acc93fa2-a668-4911-9ce8-f25e326a9593', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08e102ae48244af2ab448a2e1ff757df', 'neutron:revision_number': '2', 'neutron:security_group_ids': '229024b9-8e7c-4058-aeff-9a86d85d938a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61ee8473-bad1-43f5-bdc0-9ebd0a9df9f9, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=eddb34a1-265c-47da-9fba-767aa4c99438) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:19.573 158261 INFO neutron.agent.ovn.metadata.agent [-] Port eddb34a1-265c-47da-9fba-767aa4c99438 in datapath acc93fa2-a668-4911-9ce8-f25e326a9593 bound to our chassis
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:19.575 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network acc93fa2-a668-4911-9ce8-f25e326a9593
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:19.587 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8dff28a7-87a7-462d-84b3-885efe4e8e98]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:19.588 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapacc93fa2-a1 in ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:19.590 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapacc93fa2-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:19.590 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d2fbda43-2a22-4390-b4d4-698fd9a5467c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:19.591 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6d47eb54-20eb-412f-8945-c3cdad43af2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:19 compute-0 nova_compute[257802]: 2025-10-02 13:12:19.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:19.604 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[0a9d55d3-4715-4b62-bd2f-572ed62d8b79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:19 compute-0 ovn_controller[148183]: 2025-10-02T13:12:19Z|00962|binding|INFO|Setting lport eddb34a1-265c-47da-9fba-767aa4c99438 ovn-installed in OVS
Oct 02 13:12:19 compute-0 ovn_controller[148183]: 2025-10-02T13:12:19Z|00963|binding|INFO|Setting lport eddb34a1-265c-47da-9fba-767aa4c99438 up in Southbound
Oct 02 13:12:19 compute-0 nova_compute[257802]: 2025-10-02 13:12:19.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:19 compute-0 systemd[1]: Started Virtual Machine qemu-103-instance-000000d8.
Oct 02 13:12:19 compute-0 systemd-udevd[404925]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:12:19 compute-0 NetworkManager[44987]: <info>  [1759410739.6366] device (tapeddb34a1-26): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:12:19 compute-0 NetworkManager[44987]: <info>  [1759410739.6378] device (tapeddb34a1-26): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:19.641 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4a12477c-63b0-42a3-86d1-3b8c106ca1e4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:19.671 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[edb779c2-101e-4dbb-9de5-d0de15c7292e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:19 compute-0 NetworkManager[44987]: <info>  [1759410739.6780] manager: (tapacc93fa2-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/434)
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:19.677 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3a4e1775-64bb-4ce5-969b-14a205965f07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:19.718 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[bdd71e4f-54f8-4ca7-85a5-abfd6173c23b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:19.721 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[99317f7d-a67a-47a5-b49a-8fa67c501a1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:19 compute-0 NetworkManager[44987]: <info>  [1759410739.7427] device (tapacc93fa2-a0): carrier: link connected
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:19.746 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[62b0f74d-27d6-4bfb-8e94-5810234b5e3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:19.769 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[639f22ca-61df-4aec-af17-e27cfde87526]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapacc93fa2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:06:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 880736, 'reachable_time': 15060, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 404961, 'error': None, 'target': 'ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:19 compute-0 ceph-mon[73607]: pgmap v3429: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 285 KiB/s rd, 3.9 MiB/s wr, 87 op/s
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:19.787 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[12d9058c-669a-4e04-9f23-83bfa5909c39]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe58:67d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 880736, 'tstamp': 880736}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 404964, 'error': None, 'target': 'ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:19.805 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0eb511a2-c3f5-4954-9b8c-f8d8f2edb4ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapacc93fa2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:06:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 880736, 'reachable_time': 15060, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 404965, 'error': None, 'target': 'ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:19.837 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6b4904d7-1dbe-4568-974f-a2908a451645]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:19 compute-0 awesome_gates[404864]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:12:19 compute-0 awesome_gates[404864]: --> relative data size: 1.0
Oct 02 13:12:19 compute-0 awesome_gates[404864]: --> All data devices are unavailable
Oct 02 13:12:19 compute-0 systemd[1]: libpod-c5e2feb9f4cc1b4c35dd1042b784cbe42823ef7c1137d9db4e3c9132d5f47235.scope: Deactivated successfully.
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:19.891 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[71e94719-b597-4aa2-b49e-e0e0de3b0252]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:19.893 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacc93fa2-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:19.893 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:19.893 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapacc93fa2-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:12:19 compute-0 kernel: tapacc93fa2-a0: entered promiscuous mode
Oct 02 13:12:19 compute-0 NetworkManager[44987]: <info>  [1759410739.8957] manager: (tapacc93fa2-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/435)
Oct 02 13:12:19 compute-0 nova_compute[257802]: 2025-10-02 13:12:19.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:19.898 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapacc93fa2-a0, col_values=(('external_ids', {'iface-id': '4f4a9a9c-98b2-43c2-aac7-1be2d0091142'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:12:19 compute-0 nova_compute[257802]: 2025-10-02 13:12:19.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:19 compute-0 ovn_controller[148183]: 2025-10-02T13:12:19Z|00964|binding|INFO|Releasing lport 4f4a9a9c-98b2-43c2-aac7-1be2d0091142 from this chassis (sb_readonly=0)
Oct 02 13:12:19 compute-0 nova_compute[257802]: 2025-10-02 13:12:19.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:19.918 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/acc93fa2-a668-4911-9ce8-f25e326a9593.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/acc93fa2-a668-4911-9ce8-f25e326a9593.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:19.920 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[72b8e25f-bbd4-4807-a4b0-50431144fa0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:19.921 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: global
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-acc93fa2-a668-4911-9ce8-f25e326a9593
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/acc93fa2-a668-4911-9ce8-f25e326a9593.pid.haproxy
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID acc93fa2-a668-4911-9ce8-f25e326a9593
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:12:19 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:19.922 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593', 'env', 'PROCESS_TAG=haproxy-acc93fa2-a668-4911-9ce8-f25e326a9593', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/acc93fa2-a668-4911-9ce8-f25e326a9593.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:12:19 compute-0 podman[404992]: 2025-10-02 13:12:19.925367765 +0000 UTC m=+0.027748653 container died c5e2feb9f4cc1b4c35dd1042b784cbe42823ef7c1137d9db4e3c9132d5f47235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_gates, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:12:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-6439ad0616e0dcb0afe1b5517f45cd9c987e00093b46b1bb45806d849fcf881f-merged.mount: Deactivated successfully.
Oct 02 13:12:19 compute-0 podman[404992]: 2025-10-02 13:12:19.9776985 +0000 UTC m=+0.080079378 container remove c5e2feb9f4cc1b4c35dd1042b784cbe42823ef7c1137d9db4e3c9132d5f47235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_gates, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 13:12:19 compute-0 systemd[1]: libpod-conmon-c5e2feb9f4cc1b4c35dd1042b784cbe42823ef7c1137d9db4e3c9132d5f47235.scope: Deactivated successfully.
Oct 02 13:12:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:12:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:19.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:12:20 compute-0 sudo[404722]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:20 compute-0 sudo[405034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:12:20 compute-0 sudo[405034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:20 compute-0 sudo[405034]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:20 compute-0 sudo[405060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:12:20 compute-0 sudo[405060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:20 compute-0 sudo[405060]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.185 2 DEBUG nova.compute.manager [req-236bce19-5944-4420-837b-d4e8690c99e4 req-6b7dc225-3b8e-404f-be9a-38b57597ee7a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Received event network-vif-plugged-eddb34a1-265c-47da-9fba-767aa4c99438 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.189 2 DEBUG oslo_concurrency.lockutils [req-236bce19-5944-4420-837b-d4e8690c99e4 req-6b7dc225-3b8e-404f-be9a-38b57597ee7a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.190 2 DEBUG oslo_concurrency.lockutils [req-236bce19-5944-4420-837b-d4e8690c99e4 req-6b7dc225-3b8e-404f-be9a-38b57597ee7a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.190 2 DEBUG oslo_concurrency.lockutils [req-236bce19-5944-4420-837b-d4e8690c99e4 req-6b7dc225-3b8e-404f-be9a-38b57597ee7a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.191 2 DEBUG nova.compute.manager [req-236bce19-5944-4420-837b-d4e8690c99e4 req-6b7dc225-3b8e-404f-be9a-38b57597ee7a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Processing event network-vif-plugged-eddb34a1-265c-47da-9fba-767aa4c99438 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:12:20 compute-0 sudo[405090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:12:20 compute-0 sudo[405090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:20 compute-0 sudo[405090]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:20 compute-0 sudo[405132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 13:12:20 compute-0 sudo[405132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:20 compute-0 podman[405130]: 2025-10-02 13:12:20.302984399 +0000 UTC m=+0.048650267 container create 312bdb1a376d282326481c01a6c6622eee3da04f88e0478bacd6e53371e8f816 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 13:12:20 compute-0 systemd[1]: Started libpod-conmon-312bdb1a376d282326481c01a6c6622eee3da04f88e0478bacd6e53371e8f816.scope.
Oct 02 13:12:20 compute-0 podman[405130]: 2025-10-02 13:12:20.276476569 +0000 UTC m=+0.022142447 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:12:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0205fa3661c80920a0de8c703b76e2a135d30854d85250efde82d106238eba3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:12:20 compute-0 podman[405130]: 2025-10-02 13:12:20.404553897 +0000 UTC m=+0.150219765 container init 312bdb1a376d282326481c01a6c6622eee3da04f88e0478bacd6e53371e8f816 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:12:20 compute-0 podman[405130]: 2025-10-02 13:12:20.410983963 +0000 UTC m=+0.156649831 container start 312bdb1a376d282326481c01a6c6622eee3da04f88e0478bacd6e53371e8f816 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 13:12:20 compute-0 neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593[405169]: [NOTICE]   (405175) : New worker (405184) forked
Oct 02 13:12:20 compute-0 neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593[405169]: [NOTICE]   (405175) : Loading success.
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.570 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759410740.5696154, a6e76331-61cd-4b84-9c5a-6bcebfc50236 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.571 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] VM Started (Lifecycle Event)
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.573 2 DEBUG nova.compute.manager [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.578 2 DEBUG nova.virt.libvirt.driver [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.581 2 INFO nova.virt.libvirt.driver [-] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Instance spawned successfully.
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.581 2 DEBUG nova.virt.libvirt.driver [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.617 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.625 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.628 2 DEBUG nova.virt.libvirt.driver [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.629 2 DEBUG nova.virt.libvirt.driver [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.630 2 DEBUG nova.virt.libvirt.driver [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.631 2 DEBUG nova.virt.libvirt.driver [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.631 2 DEBUG nova.virt.libvirt.driver [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.632 2 DEBUG nova.virt.libvirt.driver [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:20 compute-0 podman[405225]: 2025-10-02 13:12:20.636404797 +0000 UTC m=+0.042702555 container create dc58f9355dce7d0f28abdcfa45f7aec24637e9fbd56809303ff6dcb7dc3dd7a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_blackwell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.672 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.672 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759410740.569871, a6e76331-61cd-4b84-9c5a-6bcebfc50236 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.672 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] VM Paused (Lifecycle Event)
Oct 02 13:12:20 compute-0 systemd[1]: Started libpod-conmon-dc58f9355dce7d0f28abdcfa45f7aec24637e9fbd56809303ff6dcb7dc3dd7a4.scope.
Oct 02 13:12:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:12:20 compute-0 podman[405225]: 2025-10-02 13:12:20.618207276 +0000 UTC m=+0.024505064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:12:20 compute-0 podman[405225]: 2025-10-02 13:12:20.719597809 +0000 UTC m=+0.125895587 container init dc58f9355dce7d0f28abdcfa45f7aec24637e9fbd56809303ff6dcb7dc3dd7a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_blackwell, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:12:20 compute-0 podman[405225]: 2025-10-02 13:12:20.726119617 +0000 UTC m=+0.132417375 container start dc58f9355dce7d0f28abdcfa45f7aec24637e9fbd56809303ff6dcb7dc3dd7a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_blackwell, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 13:12:20 compute-0 epic_blackwell[405242]: 167 167
Oct 02 13:12:20 compute-0 systemd[1]: libpod-dc58f9355dce7d0f28abdcfa45f7aec24637e9fbd56809303ff6dcb7dc3dd7a4.scope: Deactivated successfully.
Oct 02 13:12:20 compute-0 podman[405225]: 2025-10-02 13:12:20.734716345 +0000 UTC m=+0.141014113 container attach dc58f9355dce7d0f28abdcfa45f7aec24637e9fbd56809303ff6dcb7dc3dd7a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 13:12:20 compute-0 podman[405225]: 2025-10-02 13:12:20.735173736 +0000 UTC m=+0.141471514 container died dc58f9355dce7d0f28abdcfa45f7aec24637e9fbd56809303ff6dcb7dc3dd7a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.736 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:12:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3430: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 290 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.741 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759410740.575187, a6e76331-61cd-4b84-9c5a-6bcebfc50236 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.741 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] VM Resumed (Lifecycle Event)
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.751 2 INFO nova.compute.manager [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Took 9.49 seconds to spawn the instance on the hypervisor.
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.752 2 DEBUG nova.compute.manager [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.778 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.781 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:12:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-2343bdd8df7a6c881a73bbc67c424544a813838a94798edc197059aed78c1bef-merged.mount: Deactivated successfully.
Oct 02 13:12:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e419 do_prune osdmap full prune enabled
Oct 02 13:12:20 compute-0 podman[405225]: 2025-10-02 13:12:20.816214896 +0000 UTC m=+0.222512654 container remove dc58f9355dce7d0f28abdcfa45f7aec24637e9fbd56809303ff6dcb7dc3dd7a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.826 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:12:20 compute-0 systemd[1]: libpod-conmon-dc58f9355dce7d0f28abdcfa45f7aec24637e9fbd56809303ff6dcb7dc3dd7a4.scope: Deactivated successfully.
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.838 2 INFO nova.compute.manager [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Took 10.44 seconds to build instance.
Oct 02 13:12:20 compute-0 nova_compute[257802]: 2025-10-02 13:12:20.852 2 DEBUG oslo_concurrency.lockutils [None req-22768a32-ec1a-44a1-80e6-f6f987be4ab5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.514s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e420 e420: 3 total, 3 up, 3 in
Oct 02 13:12:20 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e420: 3 total, 3 up, 3 in
Oct 02 13:12:20 compute-0 podman[405269]: 2025-10-02 13:12:20.99455793 +0000 UTC m=+0.036959844 container create d189fc98282fc982f0c070a9c13df64fc701f31a930d2861fc84890c379e7d42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:12:21 compute-0 systemd[1]: Started libpod-conmon-d189fc98282fc982f0c070a9c13df64fc701f31a930d2861fc84890c379e7d42.scope.
Oct 02 13:12:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:12:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7b2a86d2f569bed76b5ec6bba4556ef805f7e04bfa2a6c0731c4a8cd990bcb0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:12:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7b2a86d2f569bed76b5ec6bba4556ef805f7e04bfa2a6c0731c4a8cd990bcb0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:12:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7b2a86d2f569bed76b5ec6bba4556ef805f7e04bfa2a6c0731c4a8cd990bcb0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:12:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7b2a86d2f569bed76b5ec6bba4556ef805f7e04bfa2a6c0731c4a8cd990bcb0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:12:21 compute-0 podman[405269]: 2025-10-02 13:12:20.978661256 +0000 UTC m=+0.021063190 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:12:21 compute-0 podman[405269]: 2025-10-02 13:12:21.086381502 +0000 UTC m=+0.128783436 container init d189fc98282fc982f0c070a9c13df64fc701f31a930d2861fc84890c379e7d42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct 02 13:12:21 compute-0 podman[405269]: 2025-10-02 13:12:21.092216843 +0000 UTC m=+0.134618757 container start d189fc98282fc982f0c070a9c13df64fc701f31a930d2861fc84890c379e7d42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:12:21 compute-0 podman[405269]: 2025-10-02 13:12:21.095456141 +0000 UTC m=+0.137858055 container attach d189fc98282fc982f0c070a9c13df64fc701f31a930d2861fc84890c379e7d42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:12:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:12:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:21.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]: {
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]:     "1": [
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]:         {
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]:             "devices": [
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]:                 "/dev/loop3"
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]:             ],
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]:             "lv_name": "ceph_lv0",
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]:             "lv_size": "7511998464",
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]:             "name": "ceph_lv0",
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]:             "tags": {
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]:                 "ceph.cluster_name": "ceph",
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]:                 "ceph.crush_device_class": "",
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]:                 "ceph.encrypted": "0",
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]:                 "ceph.osd_id": "1",
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]:                 "ceph.type": "block",
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]:                 "ceph.vdo": "0"
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]:             },
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]:             "type": "block",
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]:             "vg_name": "ceph_vg0"
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]:         }
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]:     ]
Oct 02 13:12:21 compute-0 happy_brahmagupta[405286]: }
Oct 02 13:12:21 compute-0 systemd[1]: libpod-d189fc98282fc982f0c070a9c13df64fc701f31a930d2861fc84890c379e7d42.scope: Deactivated successfully.
Oct 02 13:12:21 compute-0 podman[405269]: 2025-10-02 13:12:21.830498714 +0000 UTC m=+0.872900628 container died d189fc98282fc982f0c070a9c13df64fc701f31a930d2861fc84890c379e7d42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_brahmagupta, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:12:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7b2a86d2f569bed76b5ec6bba4556ef805f7e04bfa2a6c0731c4a8cd990bcb0-merged.mount: Deactivated successfully.
Oct 02 13:12:21 compute-0 ceph-mon[73607]: pgmap v3430: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 290 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Oct 02 13:12:21 compute-0 ceph-mon[73607]: osdmap e420: 3 total, 3 up, 3 in
Oct 02 13:12:21 compute-0 podman[405269]: 2025-10-02 13:12:21.893707744 +0000 UTC m=+0.936109658 container remove d189fc98282fc982f0c070a9c13df64fc701f31a930d2861fc84890c379e7d42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_brahmagupta, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:12:21 compute-0 systemd[1]: libpod-conmon-d189fc98282fc982f0c070a9c13df64fc701f31a930d2861fc84890c379e7d42.scope: Deactivated successfully.
Oct 02 13:12:21 compute-0 sudo[405132]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:21 compute-0 sudo[405308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:12:21 compute-0 sudo[405308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:21 compute-0 sudo[405308]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:22.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:22 compute-0 sudo[405333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:12:22 compute-0 sudo[405333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:22 compute-0 sudo[405333]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e420 do_prune osdmap full prune enabled
Oct 02 13:12:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e421 e421: 3 total, 3 up, 3 in
Oct 02 13:12:22 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e421: 3 total, 3 up, 3 in
Oct 02 13:12:22 compute-0 sudo[405358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:12:22 compute-0 sudo[405358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:22 compute-0 sudo[405358]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:22 compute-0 sudo[405383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 13:12:22 compute-0 sudo[405383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:22 compute-0 nova_compute[257802]: 2025-10-02 13:12:22.303 2 DEBUG nova.compute.manager [req-e28b9d20-69eb-438c-a417-0162464bee07 req-4d4e767b-babd-4899-b5dd-239023d25f5b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Received event network-vif-plugged-eddb34a1-265c-47da-9fba-767aa4c99438 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:12:22 compute-0 nova_compute[257802]: 2025-10-02 13:12:22.305 2 DEBUG oslo_concurrency.lockutils [req-e28b9d20-69eb-438c-a417-0162464bee07 req-4d4e767b-babd-4899-b5dd-239023d25f5b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:22 compute-0 nova_compute[257802]: 2025-10-02 13:12:22.306 2 DEBUG oslo_concurrency.lockutils [req-e28b9d20-69eb-438c-a417-0162464bee07 req-4d4e767b-babd-4899-b5dd-239023d25f5b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:22 compute-0 nova_compute[257802]: 2025-10-02 13:12:22.306 2 DEBUG oslo_concurrency.lockutils [req-e28b9d20-69eb-438c-a417-0162464bee07 req-4d4e767b-babd-4899-b5dd-239023d25f5b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:22 compute-0 nova_compute[257802]: 2025-10-02 13:12:22.307 2 DEBUG nova.compute.manager [req-e28b9d20-69eb-438c-a417-0162464bee07 req-4d4e767b-babd-4899-b5dd-239023d25f5b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] No waiting events found dispatching network-vif-plugged-eddb34a1-265c-47da-9fba-767aa4c99438 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:12:22 compute-0 nova_compute[257802]: 2025-10-02 13:12:22.307 2 WARNING nova.compute.manager [req-e28b9d20-69eb-438c-a417-0162464bee07 req-4d4e767b-babd-4899-b5dd-239023d25f5b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Received unexpected event network-vif-plugged-eddb34a1-265c-47da-9fba-767aa4c99438 for instance with vm_state active and task_state None.
Oct 02 13:12:22 compute-0 podman[405449]: 2025-10-02 13:12:22.479939396 +0000 UTC m=+0.040801548 container create 650bc87a85f6ec8523d31e3807b53267db43af361868850f7f6e3effd8f9272a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lewin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 13:12:22 compute-0 systemd[1]: Started libpod-conmon-650bc87a85f6ec8523d31e3807b53267db43af361868850f7f6e3effd8f9272a.scope.
Oct 02 13:12:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:12:22 compute-0 podman[405449]: 2025-10-02 13:12:22.461452579 +0000 UTC m=+0.022314751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:12:22 compute-0 podman[405449]: 2025-10-02 13:12:22.558566298 +0000 UTC m=+0.119428450 container init 650bc87a85f6ec8523d31e3807b53267db43af361868850f7f6e3effd8f9272a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 13:12:22 compute-0 podman[405449]: 2025-10-02 13:12:22.565312841 +0000 UTC m=+0.126174993 container start 650bc87a85f6ec8523d31e3807b53267db43af361868850f7f6e3effd8f9272a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lewin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 13:12:22 compute-0 podman[405449]: 2025-10-02 13:12:22.568348585 +0000 UTC m=+0.129210737 container attach 650bc87a85f6ec8523d31e3807b53267db43af361868850f7f6e3effd8f9272a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:12:22 compute-0 vigorous_lewin[405465]: 167 167
Oct 02 13:12:22 compute-0 systemd[1]: libpod-650bc87a85f6ec8523d31e3807b53267db43af361868850f7f6e3effd8f9272a.scope: Deactivated successfully.
Oct 02 13:12:22 compute-0 podman[405449]: 2025-10-02 13:12:22.570413705 +0000 UTC m=+0.131275857 container died 650bc87a85f6ec8523d31e3807b53267db43af361868850f7f6e3effd8f9272a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lewin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 13:12:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-e839e0687ffbc533732a3137ff0144c74d3db02000a5481951039b7096af3578-merged.mount: Deactivated successfully.
Oct 02 13:12:22 compute-0 podman[405449]: 2025-10-02 13:12:22.607178094 +0000 UTC m=+0.168040236 container remove 650bc87a85f6ec8523d31e3807b53267db43af361868850f7f6e3effd8f9272a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lewin, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:12:22 compute-0 systemd[1]: libpod-conmon-650bc87a85f6ec8523d31e3807b53267db43af361868850f7f6e3effd8f9272a.scope: Deactivated successfully.
Oct 02 13:12:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3433: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 225 KiB/s rd, 1.2 MiB/s wr, 50 op/s
Oct 02 13:12:22 compute-0 podman[405489]: 2025-10-02 13:12:22.78763694 +0000 UTC m=+0.065875724 container create 026543d15b37cde95d645c36e0798d21f95ffb6682e4fec921e33774ec70774d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_montalcini, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:12:22 compute-0 systemd[1]: Started libpod-conmon-026543d15b37cde95d645c36e0798d21f95ffb6682e4fec921e33774ec70774d.scope.
Oct 02 13:12:22 compute-0 podman[405489]: 2025-10-02 13:12:22.747355895 +0000 UTC m=+0.025594699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:12:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85c13ae1c2d762dab673409502bbb5d450c3d54691f55df09689d28c6391fdd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85c13ae1c2d762dab673409502bbb5d450c3d54691f55df09689d28c6391fdd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85c13ae1c2d762dab673409502bbb5d450c3d54691f55df09689d28c6391fdd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85c13ae1c2d762dab673409502bbb5d450c3d54691f55df09689d28c6391fdd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:12:22 compute-0 podman[405489]: 2025-10-02 13:12:22.904647181 +0000 UTC m=+0.182885985 container init 026543d15b37cde95d645c36e0798d21f95ffb6682e4fec921e33774ec70774d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_montalcini, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 13:12:22 compute-0 podman[405489]: 2025-10-02 13:12:22.911551688 +0000 UTC m=+0.189790472 container start 026543d15b37cde95d645c36e0798d21f95ffb6682e4fec921e33774ec70774d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_montalcini, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:12:22 compute-0 podman[405489]: 2025-10-02 13:12:22.929637575 +0000 UTC m=+0.207876369 container attach 026543d15b37cde95d645c36e0798d21f95ffb6682e4fec921e33774ec70774d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_montalcini, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:12:23 compute-0 ceph-mon[73607]: osdmap e421: 3 total, 3 up, 3 in
Oct 02 13:12:23 compute-0 ceph-mon[73607]: pgmap v3433: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 225 KiB/s rd, 1.2 MiB/s wr, 50 op/s
Oct 02 13:12:23 compute-0 nova_compute[257802]: 2025-10-02 13:12:23.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:12:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:23.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:12:23 compute-0 zen_montalcini[405507]: {
Oct 02 13:12:23 compute-0 zen_montalcini[405507]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 13:12:23 compute-0 zen_montalcini[405507]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:12:23 compute-0 zen_montalcini[405507]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:12:23 compute-0 zen_montalcini[405507]:         "osd_id": 1,
Oct 02 13:12:23 compute-0 zen_montalcini[405507]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:12:23 compute-0 zen_montalcini[405507]:         "type": "bluestore"
Oct 02 13:12:23 compute-0 zen_montalcini[405507]:     }
Oct 02 13:12:23 compute-0 zen_montalcini[405507]: }
Oct 02 13:12:23 compute-0 systemd[1]: libpod-026543d15b37cde95d645c36e0798d21f95ffb6682e4fec921e33774ec70774d.scope: Deactivated successfully.
Oct 02 13:12:23 compute-0 podman[405489]: 2025-10-02 13:12:23.864956773 +0000 UTC m=+1.143195557 container died 026543d15b37cde95d645c36e0798d21f95ffb6682e4fec921e33774ec70774d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 13:12:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-85c13ae1c2d762dab673409502bbb5d450c3d54691f55df09689d28c6391fdd4-merged.mount: Deactivated successfully.
Oct 02 13:12:23 compute-0 podman[405489]: 2025-10-02 13:12:23.963170529 +0000 UTC m=+1.241409313 container remove 026543d15b37cde95d645c36e0798d21f95ffb6682e4fec921e33774ec70774d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_montalcini, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 13:12:23 compute-0 systemd[1]: libpod-conmon-026543d15b37cde95d645c36e0798d21f95ffb6682e4fec921e33774ec70774d.scope: Deactivated successfully.
Oct 02 13:12:23 compute-0 sudo[405383]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:24.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:12:24 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:12:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:12:24 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:12:24 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev d11a212b-b2fd-4319-aceb-61bada30d1ec does not exist
Oct 02 13:12:24 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 1235e66c-3532-40ef-abae-881379f7316a does not exist
Oct 02 13:12:24 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev cd84371a-a0c9-4491-a4bd-aae1a2b5d784 does not exist
Oct 02 13:12:24 compute-0 sudo[405539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:12:24 compute-0 sudo[405539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:24 compute-0 sudo[405539]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:24 compute-0 sudo[405564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:12:24 compute-0 sudo[405564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:24 compute-0 sudo[405564]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:24 compute-0 NetworkManager[44987]: <info>  [1759410744.3120] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/436)
Oct 02 13:12:24 compute-0 NetworkManager[44987]: <info>  [1759410744.3132] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/437)
Oct 02 13:12:24 compute-0 nova_compute[257802]: 2025-10-02 13:12:24.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:24 compute-0 nova_compute[257802]: 2025-10-02 13:12:24.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:24 compute-0 ovn_controller[148183]: 2025-10-02T13:12:24Z|00965|binding|INFO|Releasing lport 4f4a9a9c-98b2-43c2-aac7-1be2d0091142 from this chassis (sb_readonly=0)
Oct 02 13:12:24 compute-0 nova_compute[257802]: 2025-10-02 13:12:24.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:24 compute-0 nova_compute[257802]: 2025-10-02 13:12:24.616 2 DEBUG nova.compute.manager [req-92954e56-b593-4363-b4f6-02e40e89ec42 req-676b95fb-5d50-48ae-adc1-a187496c77d0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Received event network-changed-eddb34a1-265c-47da-9fba-767aa4c99438 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:12:24 compute-0 nova_compute[257802]: 2025-10-02 13:12:24.616 2 DEBUG nova.compute.manager [req-92954e56-b593-4363-b4f6-02e40e89ec42 req-676b95fb-5d50-48ae-adc1-a187496c77d0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Refreshing instance network info cache due to event network-changed-eddb34a1-265c-47da-9fba-767aa4c99438. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:12:24 compute-0 nova_compute[257802]: 2025-10-02 13:12:24.617 2 DEBUG oslo_concurrency.lockutils [req-92954e56-b593-4363-b4f6-02e40e89ec42 req-676b95fb-5d50-48ae-adc1-a187496c77d0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-a6e76331-61cd-4b84-9c5a-6bcebfc50236" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:12:24 compute-0 nova_compute[257802]: 2025-10-02 13:12:24.617 2 DEBUG oslo_concurrency.lockutils [req-92954e56-b593-4363-b4f6-02e40e89ec42 req-676b95fb-5d50-48ae-adc1-a187496c77d0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-a6e76331-61cd-4b84-9c5a-6bcebfc50236" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:12:24 compute-0 nova_compute[257802]: 2025-10-02 13:12:24.617 2 DEBUG nova.network.neutron [req-92954e56-b593-4363-b4f6-02e40e89ec42 req-676b95fb-5d50-48ae-adc1-a187496c77d0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Refreshing network info cache for port eddb34a1-265c-47da-9fba-767aa4c99438 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:12:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3434: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 40 KiB/s wr, 80 op/s
Oct 02 13:12:25 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:12:25 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:12:25 compute-0 ceph-mon[73607]: pgmap v3434: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 40 KiB/s wr, 80 op/s
Oct 02 13:12:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:12:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:25.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:12:25 compute-0 nova_compute[257802]: 2025-10-02 13:12:25.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:12:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:26.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:12:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:26 compute-0 nova_compute[257802]: 2025-10-02 13:12:26.196 2 DEBUG nova.network.neutron [req-92954e56-b593-4363-b4f6-02e40e89ec42 req-676b95fb-5d50-48ae-adc1-a187496c77d0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Updated VIF entry in instance network info cache for port eddb34a1-265c-47da-9fba-767aa4c99438. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:12:26 compute-0 nova_compute[257802]: 2025-10-02 13:12:26.197 2 DEBUG nova.network.neutron [req-92954e56-b593-4363-b4f6-02e40e89ec42 req-676b95fb-5d50-48ae-adc1-a187496c77d0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Updating instance_info_cache with network_info: [{"id": "eddb34a1-265c-47da-9fba-767aa4c99438", "address": "fa:16:3e:3a:5a:b9", "network": {"id": "acc93fa2-a668-4911-9ce8-f25e326a9593", "bridge": "br-int", "label": "tempest-network-smoke--2102075172", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeddb34a1-26", "ovs_interfaceid": "eddb34a1-265c-47da-9fba-767aa4c99438", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:12:26 compute-0 nova_compute[257802]: 2025-10-02 13:12:26.216 2 DEBUG oslo_concurrency.lockutils [req-92954e56-b593-4363-b4f6-02e40e89ec42 req-676b95fb-5d50-48ae-adc1-a187496c77d0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-a6e76331-61cd-4b84-9c5a-6bcebfc50236" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:12:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3435: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 41 KiB/s wr, 135 op/s
Oct 02 13:12:26 compute-0 podman[405592]: 2025-10-02 13:12:26.945227412 +0000 UTC m=+0.085737515 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:12:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:26.997 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:26.998 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:26 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:26.999 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:27.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:27 compute-0 sudo[405618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:12:27 compute-0 sudo[405618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:27 compute-0 sudo[405618]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:27 compute-0 sudo[405643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:12:27 compute-0 sudo[405643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:27 compute-0 sudo[405643]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:27 compute-0 ceph-mon[73607]: pgmap v3435: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 41 KiB/s wr, 135 op/s
Oct 02 13:12:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:12:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:28.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:12:28 compute-0 nova_compute[257802]: 2025-10-02 13:12:28.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3436: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.5 KiB/s wr, 124 op/s
Oct 02 13:12:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4154291980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:29.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:29 compute-0 ceph-mon[73607]: pgmap v3436: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.5 KiB/s wr, 124 op/s
Oct 02 13:12:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:30.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:30 compute-0 nova_compute[257802]: 2025-10-02 13:12:30.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3437: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.2 KiB/s wr, 108 op/s
Oct 02 13:12:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1454490702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:31 compute-0 nova_compute[257802]: 2025-10-02 13:12:31.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:31 compute-0 nova_compute[257802]: 2025-10-02 13:12:31.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:12:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:31.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:31 compute-0 ceph-mon[73607]: pgmap v3437: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.2 KiB/s wr, 108 op/s
Oct 02 13:12:31 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3016931118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:32.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3438: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.8 KiB/s wr, 100 op/s
Oct 02 13:12:33 compute-0 ovn_controller[148183]: 2025-10-02T13:12:33Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3a:5a:b9 10.100.0.8
Oct 02 13:12:33 compute-0 ovn_controller[148183]: 2025-10-02T13:12:33Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3a:5a:b9 10.100.0.8
Oct 02 13:12:33 compute-0 ceph-mon[73607]: pgmap v3438: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.8 KiB/s wr, 100 op/s
Oct 02 13:12:33 compute-0 nova_compute[257802]: 2025-10-02 13:12:33.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:33.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:34.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:34 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:12:34 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2964820405' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:12:34 compute-0 nova_compute[257802]: 2025-10-02 13:12:34.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:34 compute-0 nova_compute[257802]: 2025-10-02 13:12:34.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2964820405' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:12:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3439: 305 pgs: 305 active+clean; 270 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.7 MiB/s wr, 125 op/s
Oct 02 13:12:35 compute-0 ceph-mon[73607]: pgmap v3439: 305 pgs: 305 active+clean; 270 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.7 MiB/s wr, 125 op/s
Oct 02 13:12:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:35.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:35 compute-0 nova_compute[257802]: 2025-10-02 13:12:35.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:36.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2533678381' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:12:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3440: 305 pgs: 305 active+clean; 274 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 963 KiB/s rd, 2.0 MiB/s wr, 97 op/s
Oct 02 13:12:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:37.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:37 compute-0 ceph-mon[73607]: pgmap v3440: 305 pgs: 305 active+clean; 274 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 963 KiB/s rd, 2.0 MiB/s wr, 97 op/s
Oct 02 13:12:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:38.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:38 compute-0 nova_compute[257802]: 2025-10-02 13:12:38.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:38 compute-0 nova_compute[257802]: 2025-10-02 13:12:38.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3441: 305 pgs: 305 active+clean; 276 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 319 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 02 13:12:39 compute-0 nova_compute[257802]: 2025-10-02 13:12:39.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:39.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:39 compute-0 ceph-mon[73607]: pgmap v3441: 305 pgs: 305 active+clean; 276 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 319 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 02 13:12:39 compute-0 nova_compute[257802]: 2025-10-02 13:12:39.870 2 INFO nova.compute.manager [None req-401f88c4-7fd0-465d-bceb-6f4ce6ac775d ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Get console output
Oct 02 13:12:39 compute-0 nova_compute[257802]: 2025-10-02 13:12:39.874 20794 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 13:12:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:40.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:40 compute-0 nova_compute[257802]: 2025-10-02 13:12:40.114 2 DEBUG oslo_concurrency.lockutils [None req-e0bb5f75-54ec-4db9-82f8-a9906370383e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:40 compute-0 nova_compute[257802]: 2025-10-02 13:12:40.115 2 DEBUG oslo_concurrency.lockutils [None req-e0bb5f75-54ec-4db9-82f8-a9906370383e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:40 compute-0 nova_compute[257802]: 2025-10-02 13:12:40.115 2 INFO nova.compute.manager [None req-e0bb5f75-54ec-4db9-82f8-a9906370383e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Rebooting instance
Oct 02 13:12:40 compute-0 nova_compute[257802]: 2025-10-02 13:12:40.130 2 DEBUG oslo_concurrency.lockutils [None req-e0bb5f75-54ec-4db9-82f8-a9906370383e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "refresh_cache-a6e76331-61cd-4b84-9c5a-6bcebfc50236" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:12:40 compute-0 nova_compute[257802]: 2025-10-02 13:12:40.130 2 DEBUG oslo_concurrency.lockutils [None req-e0bb5f75-54ec-4db9-82f8-a9906370383e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquired lock "refresh_cache-a6e76331-61cd-4b84-9c5a-6bcebfc50236" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:12:40 compute-0 nova_compute[257802]: 2025-10-02 13:12:40.130 2 DEBUG nova.network.neutron [None req-e0bb5f75-54ec-4db9-82f8-a9906370383e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:12:40 compute-0 nova_compute[257802]: 2025-10-02 13:12:40.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3442: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 523 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Oct 02 13:12:41 compute-0 nova_compute[257802]: 2025-10-02 13:12:41.092 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:41.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:41 compute-0 nova_compute[257802]: 2025-10-02 13:12:41.707 2 DEBUG nova.network.neutron [None req-e0bb5f75-54ec-4db9-82f8-a9906370383e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Updating instance_info_cache with network_info: [{"id": "eddb34a1-265c-47da-9fba-767aa4c99438", "address": "fa:16:3e:3a:5a:b9", "network": {"id": "acc93fa2-a668-4911-9ce8-f25e326a9593", "bridge": "br-int", "label": "tempest-network-smoke--2102075172", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeddb34a1-26", "ovs_interfaceid": "eddb34a1-265c-47da-9fba-767aa4c99438", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:12:41 compute-0 nova_compute[257802]: 2025-10-02 13:12:41.735 2 DEBUG oslo_concurrency.lockutils [None req-e0bb5f75-54ec-4db9-82f8-a9906370383e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Releasing lock "refresh_cache-a6e76331-61cd-4b84-9c5a-6bcebfc50236" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:12:41 compute-0 nova_compute[257802]: 2025-10-02 13:12:41.736 2 DEBUG nova.compute.manager [None req-e0bb5f75-54ec-4db9-82f8-a9906370383e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:12:41 compute-0 ceph-mon[73607]: pgmap v3442: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 523 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Oct 02 13:12:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:42.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:12:42
Oct 02 13:12:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:12:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:12:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'vms', 'volumes', 'cephfs.cephfs.meta', 'images']
Oct 02 13:12:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:12:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:12:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:12:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:12:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:12:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:12:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:12:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3443: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 518 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Oct 02 13:12:43 compute-0 nova_compute[257802]: 2025-10-02 13:12:43.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:12:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:12:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:12:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:12:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:12:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:43.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:44.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:44 compute-0 ceph-mon[73607]: pgmap v3443: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 518 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Oct 02 13:12:44 compute-0 kernel: tapeddb34a1-26 (unregistering): left promiscuous mode
Oct 02 13:12:44 compute-0 NetworkManager[44987]: <info>  [1759410764.1767] device (tapeddb34a1-26): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:12:44 compute-0 nova_compute[257802]: 2025-10-02 13:12:44.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:44 compute-0 ovn_controller[148183]: 2025-10-02T13:12:44Z|00966|binding|INFO|Releasing lport eddb34a1-265c-47da-9fba-767aa4c99438 from this chassis (sb_readonly=0)
Oct 02 13:12:44 compute-0 ovn_controller[148183]: 2025-10-02T13:12:44Z|00967|binding|INFO|Setting lport eddb34a1-265c-47da-9fba-767aa4c99438 down in Southbound
Oct 02 13:12:44 compute-0 ovn_controller[148183]: 2025-10-02T13:12:44Z|00968|binding|INFO|Removing iface tapeddb34a1-26 ovn-installed in OVS
Oct 02 13:12:44 compute-0 nova_compute[257802]: 2025-10-02 13:12:44.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:44.212 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:5a:b9 10.100.0.8'], port_security=['fa:16:3e:3a:5a:b9 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a6e76331-61cd-4b84-9c5a-6bcebfc50236', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-acc93fa2-a668-4911-9ce8-f25e326a9593', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08e102ae48244af2ab448a2e1ff757df', 'neutron:revision_number': '4', 'neutron:security_group_ids': '229024b9-8e7c-4058-aeff-9a86d85d938a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.204'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61ee8473-bad1-43f5-bdc0-9ebd0a9df9f9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=eddb34a1-265c-47da-9fba-767aa4c99438) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:12:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:44.213 158261 INFO neutron.agent.ovn.metadata.agent [-] Port eddb34a1-265c-47da-9fba-767aa4c99438 in datapath acc93fa2-a668-4911-9ce8-f25e326a9593 unbound from our chassis
Oct 02 13:12:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:44.214 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network acc93fa2-a668-4911-9ce8-f25e326a9593, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:12:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:44.216 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7c4c0c0e-430d-42f9-9ee5-00e46c04f47b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:44.216 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593 namespace which is not needed anymore
Oct 02 13:12:44 compute-0 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d000000d8.scope: Deactivated successfully.
Oct 02 13:12:44 compute-0 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d000000d8.scope: Consumed 14.005s CPU time.
Oct 02 13:12:44 compute-0 systemd-machined[211836]: Machine qemu-103-instance-000000d8 terminated.
Oct 02 13:12:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:12:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:12:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:12:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:12:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:12:44 compute-0 neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593[405169]: [NOTICE]   (405175) : haproxy version is 2.8.14-c23fe91
Oct 02 13:12:44 compute-0 neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593[405169]: [NOTICE]   (405175) : path to executable is /usr/sbin/haproxy
Oct 02 13:12:44 compute-0 neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593[405169]: [WARNING]  (405175) : Exiting Master process...
Oct 02 13:12:44 compute-0 neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593[405169]: [WARNING]  (405175) : Exiting Master process...
Oct 02 13:12:44 compute-0 neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593[405169]: [ALERT]    (405175) : Current worker (405184) exited with code 143 (Terminated)
Oct 02 13:12:44 compute-0 neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593[405169]: [WARNING]  (405175) : All workers exited. Exiting... (0)
Oct 02 13:12:44 compute-0 systemd[1]: libpod-312bdb1a376d282326481c01a6c6622eee3da04f88e0478bacd6e53371e8f816.scope: Deactivated successfully.
Oct 02 13:12:44 compute-0 podman[405701]: 2025-10-02 13:12:44.364506111 +0000 UTC m=+0.046795374 container died 312bdb1a376d282326481c01a6c6622eee3da04f88e0478bacd6e53371e8f816 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:12:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-312bdb1a376d282326481c01a6c6622eee3da04f88e0478bacd6e53371e8f816-userdata-shm.mount: Deactivated successfully.
Oct 02 13:12:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0205fa3661c80920a0de8c703b76e2a135d30854d85250efde82d106238eba3-merged.mount: Deactivated successfully.
Oct 02 13:12:44 compute-0 podman[405701]: 2025-10-02 13:12:44.402663644 +0000 UTC m=+0.084952937 container cleanup 312bdb1a376d282326481c01a6c6622eee3da04f88e0478bacd6e53371e8f816 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 13:12:44 compute-0 systemd[1]: libpod-conmon-312bdb1a376d282326481c01a6c6622eee3da04f88e0478bacd6e53371e8f816.scope: Deactivated successfully.
Oct 02 13:12:44 compute-0 nova_compute[257802]: 2025-10-02 13:12:44.429 2 DEBUG nova.compute.manager [req-d82552dc-f525-497b-af04-ca1a3e0bd8c1 req-207e43d1-9d47-48f0-9603-9e929648c71f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Received event network-vif-unplugged-eddb34a1-265c-47da-9fba-767aa4c99438 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:12:44 compute-0 nova_compute[257802]: 2025-10-02 13:12:44.431 2 DEBUG oslo_concurrency.lockutils [req-d82552dc-f525-497b-af04-ca1a3e0bd8c1 req-207e43d1-9d47-48f0-9603-9e929648c71f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:44 compute-0 nova_compute[257802]: 2025-10-02 13:12:44.431 2 DEBUG oslo_concurrency.lockutils [req-d82552dc-f525-497b-af04-ca1a3e0bd8c1 req-207e43d1-9d47-48f0-9603-9e929648c71f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:44 compute-0 nova_compute[257802]: 2025-10-02 13:12:44.431 2 DEBUG oslo_concurrency.lockutils [req-d82552dc-f525-497b-af04-ca1a3e0bd8c1 req-207e43d1-9d47-48f0-9603-9e929648c71f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:44 compute-0 nova_compute[257802]: 2025-10-02 13:12:44.432 2 DEBUG nova.compute.manager [req-d82552dc-f525-497b-af04-ca1a3e0bd8c1 req-207e43d1-9d47-48f0-9603-9e929648c71f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] No waiting events found dispatching network-vif-unplugged-eddb34a1-265c-47da-9fba-767aa4c99438 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:12:44 compute-0 nova_compute[257802]: 2025-10-02 13:12:44.432 2 WARNING nova.compute.manager [req-d82552dc-f525-497b-af04-ca1a3e0bd8c1 req-207e43d1-9d47-48f0-9603-9e929648c71f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Received unexpected event network-vif-unplugged-eddb34a1-265c-47da-9fba-767aa4c99438 for instance with vm_state active and task_state reboot_started.
Oct 02 13:12:44 compute-0 podman[405735]: 2025-10-02 13:12:44.481175823 +0000 UTC m=+0.048683849 container remove 312bdb1a376d282326481c01a6c6622eee3da04f88e0478bacd6e53371e8f816 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 13:12:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:44.487 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4ad1cfcd-5df4-4d85-ba08-6df900388d87]: (4, ('Thu Oct  2 01:12:44 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593 (312bdb1a376d282326481c01a6c6622eee3da04f88e0478bacd6e53371e8f816)\n312bdb1a376d282326481c01a6c6622eee3da04f88e0478bacd6e53371e8f816\nThu Oct  2 01:12:44 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593 (312bdb1a376d282326481c01a6c6622eee3da04f88e0478bacd6e53371e8f816)\n312bdb1a376d282326481c01a6c6622eee3da04f88e0478bacd6e53371e8f816\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:44.489 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a82926-2743-4cf5-b5bd-3caf88aad74f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:44.490 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacc93fa2-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:12:44 compute-0 kernel: tapacc93fa2-a0: left promiscuous mode
Oct 02 13:12:44 compute-0 nova_compute[257802]: 2025-10-02 13:12:44.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:44 compute-0 nova_compute[257802]: 2025-10-02 13:12:44.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:44 compute-0 nova_compute[257802]: 2025-10-02 13:12:44.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:44.510 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[40d12fa2-f6a0-44ca-b1fa-a9ece77a8915]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:44.540 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0daba76e-5149-470c-862c-ba090f4c9ef4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:44.541 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5ea132af-55e9-4942-8574-4d5ed287a5c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:44.557 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[066077ee-ddc1-4a8f-8572-106634d451e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 880729, 'reachable_time': 18838, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405756, 'error': None, 'target': 'ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:44 compute-0 systemd[1]: run-netns-ovnmeta\x2dacc93fa2\x2da668\x2d4911\x2d9ce8\x2df25e326a9593.mount: Deactivated successfully.
Oct 02 13:12:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:44.562 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:12:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:44.562 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[c49f5045-f900-4449-bfac-cdd41272bf54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3444: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 132 op/s
Oct 02 13:12:44 compute-0 nova_compute[257802]: 2025-10-02 13:12:44.851 2 INFO nova.virt.libvirt.driver [None req-e0bb5f75-54ec-4db9-82f8-a9906370383e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Instance shutdown successfully.
Oct 02 13:12:44 compute-0 kernel: tapeddb34a1-26: entered promiscuous mode
Oct 02 13:12:44 compute-0 NetworkManager[44987]: <info>  [1759410764.9051] manager: (tapeddb34a1-26): new Tun device (/org/freedesktop/NetworkManager/Devices/438)
Oct 02 13:12:44 compute-0 nova_compute[257802]: 2025-10-02 13:12:44.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:44 compute-0 systemd-udevd[405680]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:12:44 compute-0 ovn_controller[148183]: 2025-10-02T13:12:44Z|00969|binding|INFO|Claiming lport eddb34a1-265c-47da-9fba-767aa4c99438 for this chassis.
Oct 02 13:12:44 compute-0 ovn_controller[148183]: 2025-10-02T13:12:44Z|00970|binding|INFO|eddb34a1-265c-47da-9fba-767aa4c99438: Claiming fa:16:3e:3a:5a:b9 10.100.0.8
Oct 02 13:12:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:44.913 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:5a:b9 10.100.0.8'], port_security=['fa:16:3e:3a:5a:b9 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a6e76331-61cd-4b84-9c5a-6bcebfc50236', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-acc93fa2-a668-4911-9ce8-f25e326a9593', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08e102ae48244af2ab448a2e1ff757df', 'neutron:revision_number': '5', 'neutron:security_group_ids': '229024b9-8e7c-4058-aeff-9a86d85d938a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.204'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61ee8473-bad1-43f5-bdc0-9ebd0a9df9f9, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=eddb34a1-265c-47da-9fba-767aa4c99438) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:12:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:44.915 158261 INFO neutron.agent.ovn.metadata.agent [-] Port eddb34a1-265c-47da-9fba-767aa4c99438 in datapath acc93fa2-a668-4911-9ce8-f25e326a9593 bound to our chassis
Oct 02 13:12:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:44.916 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network acc93fa2-a668-4911-9ce8-f25e326a9593
Oct 02 13:12:44 compute-0 ovn_controller[148183]: 2025-10-02T13:12:44Z|00971|binding|INFO|Setting lport eddb34a1-265c-47da-9fba-767aa4c99438 ovn-installed in OVS
Oct 02 13:12:44 compute-0 ovn_controller[148183]: 2025-10-02T13:12:44Z|00972|binding|INFO|Setting lport eddb34a1-265c-47da-9fba-767aa4c99438 up in Southbound
Oct 02 13:12:44 compute-0 nova_compute[257802]: 2025-10-02 13:12:44.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:44 compute-0 NetworkManager[44987]: <info>  [1759410764.9297] device (tapeddb34a1-26): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:12:44 compute-0 NetworkManager[44987]: <info>  [1759410764.9307] device (tapeddb34a1-26): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:12:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:44.931 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[795679a3-f8db-450c-a046-345dcfb6a306]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:44.932 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapacc93fa2-a1 in ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:12:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:44.936 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapacc93fa2-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:12:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:44.936 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ec07ebe0-4760-471f-b925-40c9b0822638]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:44.938 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3e98df3e-da90-4d2f-93f2-334a88ace361]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:44 compute-0 systemd-machined[211836]: New machine qemu-104-instance-000000d8.
Oct 02 13:12:44 compute-0 systemd[1]: Started Virtual Machine qemu-104-instance-000000d8.
Oct 02 13:12:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:44.953 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[866895c8-7b54-4732-80a4-6092722e79ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:44 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:44.979 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7834c57b-a6ed-483c-885a-e37d7021bfd2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:45.008 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[6646bca3-702e-48b6-a490-918a9c8541d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:45.012 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9e9e3cb2-92c5-466b-a2fb-fcf60a6c7adc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:45 compute-0 NetworkManager[44987]: <info>  [1759410765.0142] manager: (tapacc93fa2-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/439)
Oct 02 13:12:45 compute-0 ceph-mon[73607]: pgmap v3444: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 132 op/s
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:45.045 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[6fbb446f-641c-4c6a-8216-dc34a0e03b06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:45.047 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[9c057ab8-3e96-437f-9110-5046ddff2410]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:45 compute-0 NetworkManager[44987]: <info>  [1759410765.0729] device (tapacc93fa2-a0): carrier: link connected
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:45.078 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[6be3e739-d789-4f8e-9eb0-488e7587dc5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:45.093 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1de4007e-5276-42c7-b95e-e8821598f856]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapacc93fa2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:06:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 291], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 883269, 'reachable_time': 31576, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405802, 'error': None, 'target': 'ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:45.109 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a313396b-bfc2-4b8a-8c22-e3c9688c19d6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe58:67d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 883269, 'tstamp': 883269}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405803, 'error': None, 'target': 'ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:45 compute-0 nova_compute[257802]: 2025-10-02 13:12:45.111 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:45.126 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[255b2a65-1090-46fd-a3eb-96bc34f99f83]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapacc93fa2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:06:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 291], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 883269, 'reachable_time': 31576, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 405804, 'error': None, 'target': 'ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:45.157 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[71083966-9fb6-43ba-9ad1-88dca32f641a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:45.209 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d5e4d860-4578-4e18-bf3e-943580983fa1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:45.211 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacc93fa2-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:45.212 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:45.212 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapacc93fa2-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:12:45 compute-0 NetworkManager[44987]: <info>  [1759410765.2152] manager: (tapacc93fa2-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/440)
Oct 02 13:12:45 compute-0 kernel: tapacc93fa2-a0: entered promiscuous mode
Oct 02 13:12:45 compute-0 nova_compute[257802]: 2025-10-02 13:12:45.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:45.219 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapacc93fa2-a0, col_values=(('external_ids', {'iface-id': '4f4a9a9c-98b2-43c2-aac7-1be2d0091142'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:12:45 compute-0 ovn_controller[148183]: 2025-10-02T13:12:45Z|00973|binding|INFO|Releasing lport 4f4a9a9c-98b2-43c2-aac7-1be2d0091142 from this chassis (sb_readonly=0)
Oct 02 13:12:45 compute-0 nova_compute[257802]: 2025-10-02 13:12:45.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:45.224 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/acc93fa2-a668-4911-9ce8-f25e326a9593.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/acc93fa2-a668-4911-9ce8-f25e326a9593.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:12:45 compute-0 nova_compute[257802]: 2025-10-02 13:12:45.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:45.235 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e76ec688-9ba8-421d-9452-582ed1cf6178]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:45.237 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]: global
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-acc93fa2-a668-4911-9ce8-f25e326a9593
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/acc93fa2-a668-4911-9ce8-f25e326a9593.pid.haproxy
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID acc93fa2-a668-4911-9ce8-f25e326a9593
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:12:45 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:12:45.239 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593', 'env', 'PROCESS_TAG=haproxy-acc93fa2-a668-4911-9ce8-f25e326a9593', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/acc93fa2-a668-4911-9ce8-f25e326a9593.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:12:45 compute-0 podman[405835]: 2025-10-02 13:12:45.593915024 +0000 UTC m=+0.050907024 container create 2f2d83b776e295ffb8fd001eb6640468416c86b622309ad1e9e1ff38d729d3a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:12:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:45.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:45 compute-0 systemd[1]: Started libpod-conmon-2f2d83b776e295ffb8fd001eb6640468416c86b622309ad1e9e1ff38d729d3a2.scope.
Oct 02 13:12:45 compute-0 nova_compute[257802]: 2025-10-02 13:12:45.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:12:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d96db08e36021c13c2e868df7cc7f9799d869ef1d992613c7d85e1a80684aaac/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:12:45 compute-0 podman[405835]: 2025-10-02 13:12:45.565530136 +0000 UTC m=+0.022522166 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:12:45 compute-0 podman[405835]: 2025-10-02 13:12:45.670530956 +0000 UTC m=+0.127522956 container init 2f2d83b776e295ffb8fd001eb6640468416c86b622309ad1e9e1ff38d729d3a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:12:45 compute-0 podman[405835]: 2025-10-02 13:12:45.678787996 +0000 UTC m=+0.135779996 container start 2f2d83b776e295ffb8fd001eb6640468416c86b622309ad1e9e1ff38d729d3a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 02 13:12:45 compute-0 neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593[405882]: [NOTICE]   (405896) : New worker (405899) forked
Oct 02 13:12:45 compute-0 neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593[405882]: [NOTICE]   (405896) : Loading success.
Oct 02 13:12:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:46.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.119 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Removed pending event for a6e76331-61cd-4b84-9c5a-6bcebfc50236 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 13:12:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.120 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759410766.1193695, a6e76331-61cd-4b84-9c5a-6bcebfc50236 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.120 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] VM Resumed (Lifecycle Event)
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.126 2 INFO nova.virt.libvirt.driver [-] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Instance running successfully.
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.126 2 INFO nova.virt.libvirt.driver [None req-e0bb5f75-54ec-4db9-82f8-a9906370383e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Instance soft rebooted successfully.
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.127 2 DEBUG nova.compute.manager [None req-e0bb5f75-54ec-4db9-82f8-a9906370383e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.150 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.153 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.182 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] During sync_power_state the instance has a pending task (reboot_started). Skip.
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.182 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759410766.1224022, a6e76331-61cd-4b84-9c5a-6bcebfc50236 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.182 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] VM Started (Lifecycle Event)
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.186 2 DEBUG oslo_concurrency.lockutils [None req-e0bb5f75-54ec-4db9-82f8-a9906370383e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.217 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.221 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.519 2 DEBUG nova.compute.manager [req-9d19561f-06f1-4ce3-bff4-40be6ba20cff req-aceb3e3c-d5ac-4329-9f75-fc7cc3b3e5bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Received event network-vif-plugged-eddb34a1-265c-47da-9fba-767aa4c99438 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.520 2 DEBUG oslo_concurrency.lockutils [req-9d19561f-06f1-4ce3-bff4-40be6ba20cff req-aceb3e3c-d5ac-4329-9f75-fc7cc3b3e5bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.520 2 DEBUG oslo_concurrency.lockutils [req-9d19561f-06f1-4ce3-bff4-40be6ba20cff req-aceb3e3c-d5ac-4329-9f75-fc7cc3b3e5bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.520 2 DEBUG oslo_concurrency.lockutils [req-9d19561f-06f1-4ce3-bff4-40be6ba20cff req-aceb3e3c-d5ac-4329-9f75-fc7cc3b3e5bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.521 2 DEBUG nova.compute.manager [req-9d19561f-06f1-4ce3-bff4-40be6ba20cff req-aceb3e3c-d5ac-4329-9f75-fc7cc3b3e5bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] No waiting events found dispatching network-vif-plugged-eddb34a1-265c-47da-9fba-767aa4c99438 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.521 2 WARNING nova.compute.manager [req-9d19561f-06f1-4ce3-bff4-40be6ba20cff req-aceb3e3c-d5ac-4329-9f75-fc7cc3b3e5bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Received unexpected event network-vif-plugged-eddb34a1-265c-47da-9fba-767aa4c99438 for instance with vm_state active and task_state None.
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.521 2 DEBUG nova.compute.manager [req-9d19561f-06f1-4ce3-bff4-40be6ba20cff req-aceb3e3c-d5ac-4329-9f75-fc7cc3b3e5bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Received event network-vif-plugged-eddb34a1-265c-47da-9fba-767aa4c99438 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.521 2 DEBUG oslo_concurrency.lockutils [req-9d19561f-06f1-4ce3-bff4-40be6ba20cff req-aceb3e3c-d5ac-4329-9f75-fc7cc3b3e5bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.522 2 DEBUG oslo_concurrency.lockutils [req-9d19561f-06f1-4ce3-bff4-40be6ba20cff req-aceb3e3c-d5ac-4329-9f75-fc7cc3b3e5bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.522 2 DEBUG oslo_concurrency.lockutils [req-9d19561f-06f1-4ce3-bff4-40be6ba20cff req-aceb3e3c-d5ac-4329-9f75-fc7cc3b3e5bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.522 2 DEBUG nova.compute.manager [req-9d19561f-06f1-4ce3-bff4-40be6ba20cff req-aceb3e3c-d5ac-4329-9f75-fc7cc3b3e5bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] No waiting events found dispatching network-vif-plugged-eddb34a1-265c-47da-9fba-767aa4c99438 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.522 2 WARNING nova.compute.manager [req-9d19561f-06f1-4ce3-bff4-40be6ba20cff req-aceb3e3c-d5ac-4329-9f75-fc7cc3b3e5bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Received unexpected event network-vif-plugged-eddb34a1-265c-47da-9fba-767aa4c99438 for instance with vm_state active and task_state None.
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.523 2 DEBUG nova.compute.manager [req-9d19561f-06f1-4ce3-bff4-40be6ba20cff req-aceb3e3c-d5ac-4329-9f75-fc7cc3b3e5bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Received event network-vif-plugged-eddb34a1-265c-47da-9fba-767aa4c99438 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.523 2 DEBUG oslo_concurrency.lockutils [req-9d19561f-06f1-4ce3-bff4-40be6ba20cff req-aceb3e3c-d5ac-4329-9f75-fc7cc3b3e5bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.523 2 DEBUG oslo_concurrency.lockutils [req-9d19561f-06f1-4ce3-bff4-40be6ba20cff req-aceb3e3c-d5ac-4329-9f75-fc7cc3b3e5bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.523 2 DEBUG oslo_concurrency.lockutils [req-9d19561f-06f1-4ce3-bff4-40be6ba20cff req-aceb3e3c-d5ac-4329-9f75-fc7cc3b3e5bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.524 2 DEBUG nova.compute.manager [req-9d19561f-06f1-4ce3-bff4-40be6ba20cff req-aceb3e3c-d5ac-4329-9f75-fc7cc3b3e5bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] No waiting events found dispatching network-vif-plugged-eddb34a1-265c-47da-9fba-767aa4c99438 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:12:46 compute-0 nova_compute[257802]: 2025-10-02 13:12:46.524 2 WARNING nova.compute.manager [req-9d19561f-06f1-4ce3-bff4-40be6ba20cff req-aceb3e3c-d5ac-4329-9f75-fc7cc3b3e5bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Received unexpected event network-vif-plugged-eddb34a1-265c-47da-9fba-767aa4c99438 for instance with vm_state active and task_state None.
Oct 02 13:12:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3445: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 478 KiB/s wr, 115 op/s
Oct 02 13:12:46 compute-0 podman[405909]: 2025-10-02 13:12:46.911807467 +0000 UTC m=+0.049644343 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:12:46 compute-0 podman[405910]: 2025-10-02 13:12:46.919514943 +0000 UTC m=+0.055482113 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 13:12:46 compute-0 podman[405911]: 2025-10-02 13:12:46.920123858 +0000 UTC m=+0.052315867 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 13:12:47 compute-0 nova_compute[257802]: 2025-10-02 13:12:47.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:47 compute-0 nova_compute[257802]: 2025-10-02 13:12:47.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:12:47 compute-0 nova_compute[257802]: 2025-10-02 13:12:47.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:12:47 compute-0 nova_compute[257802]: 2025-10-02 13:12:47.261 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-a6e76331-61cd-4b84-9c5a-6bcebfc50236" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:12:47 compute-0 nova_compute[257802]: 2025-10-02 13:12:47.261 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-a6e76331-61cd-4b84-9c5a-6bcebfc50236" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:12:47 compute-0 nova_compute[257802]: 2025-10-02 13:12:47.261 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:12:47 compute-0 nova_compute[257802]: 2025-10-02 13:12:47.261 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid a6e76331-61cd-4b84-9c5a-6bcebfc50236 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:12:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:12:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:47.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:12:47 compute-0 ceph-mon[73607]: pgmap v3445: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 478 KiB/s wr, 115 op/s
Oct 02 13:12:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1928115156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:47 compute-0 sudo[405964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:12:47 compute-0 sudo[405964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:47 compute-0 sudo[405964]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:47 compute-0 sudo[405989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:12:47 compute-0 sudo[405989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:47 compute-0 sudo[405989]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:48.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:48 compute-0 nova_compute[257802]: 2025-10-02 13:12:48.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:48 compute-0 nova_compute[257802]: 2025-10-02 13:12:48.596 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Updating instance_info_cache with network_info: [{"id": "eddb34a1-265c-47da-9fba-767aa4c99438", "address": "fa:16:3e:3a:5a:b9", "network": {"id": "acc93fa2-a668-4911-9ce8-f25e326a9593", "bridge": "br-int", "label": "tempest-network-smoke--2102075172", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeddb34a1-26", "ovs_interfaceid": "eddb34a1-265c-47da-9fba-767aa4c99438", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:12:48 compute-0 nova_compute[257802]: 2025-10-02 13:12:48.614 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-a6e76331-61cd-4b84-9c5a-6bcebfc50236" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:12:48 compute-0 nova_compute[257802]: 2025-10-02 13:12:48.615 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:12:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3446: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 135 KiB/s wr, 117 op/s
Oct 02 13:12:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3033142157' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:49.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:50 compute-0 ceph-mon[73607]: pgmap v3446: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 135 KiB/s wr, 117 op/s
Oct 02 13:12:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:50.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:50 compute-0 nova_compute[257802]: 2025-10-02 13:12:50.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3447: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 51 KiB/s wr, 155 op/s
Oct 02 13:12:51 compute-0 ceph-mon[73607]: pgmap v3447: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 51 KiB/s wr, 155 op/s
Oct 02 13:12:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:51.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:52.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3448: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 32 KiB/s wr, 139 op/s
Oct 02 13:12:53 compute-0 nova_compute[257802]: 2025-10-02 13:12:53.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:12:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:53.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:12:53 compute-0 ceph-mon[73607]: pgmap v3448: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 32 KiB/s wr, 139 op/s
Oct 02 13:12:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:54.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:54 compute-0 nova_compute[257802]: 2025-10-02 13:12:54.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:54 compute-0 nova_compute[257802]: 2025-10-02 13:12:54.123 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:54 compute-0 nova_compute[257802]: 2025-10-02 13:12:54.123 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:54 compute-0 nova_compute[257802]: 2025-10-02 13:12:54.123 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:54 compute-0 nova_compute[257802]: 2025-10-02 13:12:54.123 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:12:54 compute-0 nova_compute[257802]: 2025-10-02 13:12:54.124 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:12:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:12:54 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1266701042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:54 compute-0 nova_compute[257802]: 2025-10-02 13:12:54.543 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:12:54 compute-0 nova_compute[257802]: 2025-10-02 13:12:54.612 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000d8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:12:54 compute-0 nova_compute[257802]: 2025-10-02 13:12:54.613 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000d8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:12:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3449: 305 pgs: 305 active+clean; 289 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 264 KiB/s wr, 159 op/s
Oct 02 13:12:54 compute-0 nova_compute[257802]: 2025-10-02 13:12:54.770 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:12:54 compute-0 nova_compute[257802]: 2025-10-02 13:12:54.772 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4005MB free_disk=20.942352294921875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:12:54 compute-0 nova_compute[257802]: 2025-10-02 13:12:54.773 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:54 compute-0 nova_compute[257802]: 2025-10-02 13:12:54.774 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1266701042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:54 compute-0 nova_compute[257802]: 2025-10-02 13:12:54.936 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance a6e76331-61cd-4b84-9c5a-6bcebfc50236 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:12:54 compute-0 nova_compute[257802]: 2025-10-02 13:12:54.937 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:12:54 compute-0 nova_compute[257802]: 2025-10-02 13:12:54.937 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002188314256467523 of space, bias 1.0, pg target 0.6564942769402569 quantized to 32 (current 32)
Oct 02 13:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004453328564116881 of space, bias 1.0, pg target 1.3359985692350644 quantized to 32 (current 32)
Oct 02 13:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Oct 02 13:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019036880001861158 of space, bias 1.0, pg target 0.5692027120556487 quantized to 32 (current 32)
Oct 02 13:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 13:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027172174530057695 quantized to 32 (current 32)
Oct 02 13:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 13:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:12:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 13:12:54 compute-0 nova_compute[257802]: 2025-10-02 13:12:54.989 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:12:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:12:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3739094198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:55 compute-0 nova_compute[257802]: 2025-10-02 13:12:55.414 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:12:55 compute-0 nova_compute[257802]: 2025-10-02 13:12:55.419 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:12:55 compute-0 nova_compute[257802]: 2025-10-02 13:12:55.433 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:12:55 compute-0 nova_compute[257802]: 2025-10-02 13:12:55.457 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:12:55 compute-0 nova_compute[257802]: 2025-10-02 13:12:55.458 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:55.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:55 compute-0 nova_compute[257802]: 2025-10-02 13:12:55.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:55 compute-0 ceph-mon[73607]: pgmap v3449: 305 pgs: 305 active+clean; 289 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 264 KiB/s wr, 159 op/s
Oct 02 13:12:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3726214819' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:12:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3726214819' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:12:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3739094198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:56.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3450: 305 pgs: 305 active+clean; 294 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 495 KiB/s wr, 141 op/s
Oct 02 13:12:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:57.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:57 compute-0 podman[406064]: 2025-10-02 13:12:57.943501283 +0000 UTC m=+0.084079525 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:12:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:58.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:58 compute-0 ceph-mon[73607]: pgmap v3450: 305 pgs: 305 active+clean; 294 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 495 KiB/s wr, 141 op/s
Oct 02 13:12:58 compute-0 nova_compute[257802]: 2025-10-02 13:12:58.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:12:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 65K writes, 256K keys, 65K commit groups, 1.0 writes per commit group, ingest: 0.24 GB, 0.04 MB/s
                                           Cumulative WAL: 65K writes, 23K syncs, 2.74 writes per sync, written: 0.24 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4809 writes, 20K keys, 4809 commit groups, 1.0 writes per commit group, ingest: 19.69 MB, 0.03 MB/s
                                           Interval WAL: 4809 writes, 1955 syncs, 2.46 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3c430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3c430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3c430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 02 13:12:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3451: 305 pgs: 305 active+clean; 294 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 514 KiB/s wr, 132 op/s
Oct 02 13:12:59 compute-0 ceph-mon[73607]: pgmap v3451: 305 pgs: 305 active+clean; 294 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 514 KiB/s wr, 132 op/s
Oct 02 13:12:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:12:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:59.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:00.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:00 compute-0 ovn_controller[148183]: 2025-10-02T13:13:00Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3a:5a:b9 10.100.0.8
Oct 02 13:13:00 compute-0 nova_compute[257802]: 2025-10-02 13:13:00.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3452: 305 pgs: 305 active+clean; 297 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 551 KiB/s wr, 137 op/s
Oct 02 13:13:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:01.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:01 compute-0 ceph-mon[73607]: pgmap v3452: 305 pgs: 305 active+clean; 297 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 551 KiB/s wr, 137 op/s
Oct 02 13:13:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:13:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:02.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:13:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3453: 305 pgs: 305 active+clean; 297 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 551 KiB/s wr, 87 op/s
Oct 02 13:13:03 compute-0 nova_compute[257802]: 2025-10-02 13:13:03.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:03 compute-0 nova_compute[257802]: 2025-10-02 13:13:03.459 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:13:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:03.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:03 compute-0 ceph-mon[73607]: pgmap v3453: 305 pgs: 305 active+clean; 297 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 551 KiB/s wr, 87 op/s
Oct 02 13:13:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:13:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:04.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:13:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3454: 305 pgs: 305 active+clean; 297 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 552 KiB/s wr, 99 op/s
Oct 02 13:13:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:05.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:05 compute-0 nova_compute[257802]: 2025-10-02 13:13:05.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:05 compute-0 ceph-mon[73607]: pgmap v3454: 305 pgs: 305 active+clean; 297 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 552 KiB/s wr, 99 op/s
Oct 02 13:13:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:06.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3455: 305 pgs: 305 active+clean; 299 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 330 KiB/s wr, 79 op/s
Oct 02 13:13:07 compute-0 ceph-mgr[73901]: [devicehealth INFO root] Check health
Oct 02 13:13:07 compute-0 nova_compute[257802]: 2025-10-02 13:13:07.088 2 INFO nova.compute.manager [None req-25c588ae-2204-4c0a-839c-a5b5ec132f20 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Get console output
Oct 02 13:13:07 compute-0 nova_compute[257802]: 2025-10-02 13:13:07.096 20794 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 13:13:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:13:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:07.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:13:07 compute-0 ceph-mon[73607]: pgmap v3455: 305 pgs: 305 active+clean; 299 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 330 KiB/s wr, 79 op/s
Oct 02 13:13:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:13:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:08.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:13:08 compute-0 sudo[406096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:13:08.081 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=84, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=83) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:13:08 compute-0 sudo[406096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:13:08.082 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:13:08 compute-0 sudo[406096]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:08 compute-0 sudo[406121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:08 compute-0 sudo[406121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:08 compute-0 sudo[406121]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.338 2 DEBUG nova.compute.manager [req-5a4daf64-601d-4dd3-b30d-932c48865e11 req-4e94ef4c-acee-4bb4-b66d-aecc9db1e5a8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Received event network-changed-eddb34a1-265c-47da-9fba-767aa4c99438 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.338 2 DEBUG nova.compute.manager [req-5a4daf64-601d-4dd3-b30d-932c48865e11 req-4e94ef4c-acee-4bb4-b66d-aecc9db1e5a8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Refreshing instance network info cache due to event network-changed-eddb34a1-265c-47da-9fba-767aa4c99438. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.338 2 DEBUG oslo_concurrency.lockutils [req-5a4daf64-601d-4dd3-b30d-932c48865e11 req-4e94ef4c-acee-4bb4-b66d-aecc9db1e5a8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-a6e76331-61cd-4b84-9c5a-6bcebfc50236" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.338 2 DEBUG oslo_concurrency.lockutils [req-5a4daf64-601d-4dd3-b30d-932c48865e11 req-4e94ef4c-acee-4bb4-b66d-aecc9db1e5a8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-a6e76331-61cd-4b84-9c5a-6bcebfc50236" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.338 2 DEBUG nova.network.neutron [req-5a4daf64-601d-4dd3-b30d-932c48865e11 req-4e94ef4c-acee-4bb4-b66d-aecc9db1e5a8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Refreshing network info cache for port eddb34a1-265c-47da-9fba-767aa4c99438 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.389 2 DEBUG oslo_concurrency.lockutils [None req-3442f389-78f9-425b-9c2b-13359ca0cb13 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.389 2 DEBUG oslo_concurrency.lockutils [None req-3442f389-78f9-425b-9c2b-13359ca0cb13 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.389 2 DEBUG oslo_concurrency.lockutils [None req-3442f389-78f9-425b-9c2b-13359ca0cb13 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.390 2 DEBUG oslo_concurrency.lockutils [None req-3442f389-78f9-425b-9c2b-13359ca0cb13 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.390 2 DEBUG oslo_concurrency.lockutils [None req-3442f389-78f9-425b-9c2b-13359ca0cb13 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.391 2 INFO nova.compute.manager [None req-3442f389-78f9-425b-9c2b-13359ca0cb13 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Terminating instance
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.392 2 DEBUG nova.compute.manager [None req-3442f389-78f9-425b-9c2b-13359ca0cb13 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:13:08 compute-0 kernel: tapeddb34a1-26 (unregistering): left promiscuous mode
Oct 02 13:13:08 compute-0 NetworkManager[44987]: <info>  [1759410788.4973] device (tapeddb34a1-26): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:13:08 compute-0 ovn_controller[148183]: 2025-10-02T13:13:08Z|00974|binding|INFO|Releasing lport eddb34a1-265c-47da-9fba-767aa4c99438 from this chassis (sb_readonly=0)
Oct 02 13:13:08 compute-0 ovn_controller[148183]: 2025-10-02T13:13:08Z|00975|binding|INFO|Setting lport eddb34a1-265c-47da-9fba-767aa4c99438 down in Southbound
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:08 compute-0 ovn_controller[148183]: 2025-10-02T13:13:08Z|00976|binding|INFO|Removing iface tapeddb34a1-26 ovn-installed in OVS
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:13:08.512 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:5a:b9 10.100.0.8'], port_security=['fa:16:3e:3a:5a:b9 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a6e76331-61cd-4b84-9c5a-6bcebfc50236', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-acc93fa2-a668-4911-9ce8-f25e326a9593', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08e102ae48244af2ab448a2e1ff757df', 'neutron:revision_number': '6', 'neutron:security_group_ids': '229024b9-8e7c-4058-aeff-9a86d85d938a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61ee8473-bad1-43f5-bdc0-9ebd0a9df9f9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=eddb34a1-265c-47da-9fba-767aa4c99438) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:13:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:13:08.513 158261 INFO neutron.agent.ovn.metadata.agent [-] Port eddb34a1-265c-47da-9fba-767aa4c99438 in datapath acc93fa2-a668-4911-9ce8-f25e326a9593 unbound from our chassis
Oct 02 13:13:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:13:08.514 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network acc93fa2-a668-4911-9ce8-f25e326a9593, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:13:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:13:08.515 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d91280e6-3f2e-4b89-a531-76d37ccd3901]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:13:08.515 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593 namespace which is not needed anymore
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:08 compute-0 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d000000d8.scope: Deactivated successfully.
Oct 02 13:13:08 compute-0 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d000000d8.scope: Consumed 13.734s CPU time.
Oct 02 13:13:08 compute-0 systemd-machined[211836]: Machine qemu-104-instance-000000d8 terminated.
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.626 2 INFO nova.virt.libvirt.driver [-] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Instance destroyed successfully.
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.627 2 DEBUG nova.objects.instance [None req-3442f389-78f9-425b-9c2b-13359ca0cb13 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'resources' on Instance uuid a6e76331-61cd-4b84-9c5a-6bcebfc50236 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:13:08 compute-0 neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593[405882]: [NOTICE]   (405896) : haproxy version is 2.8.14-c23fe91
Oct 02 13:13:08 compute-0 neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593[405882]: [NOTICE]   (405896) : path to executable is /usr/sbin/haproxy
Oct 02 13:13:08 compute-0 neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593[405882]: [WARNING]  (405896) : Exiting Master process...
Oct 02 13:13:08 compute-0 neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593[405882]: [ALERT]    (405896) : Current worker (405899) exited with code 143 (Terminated)
Oct 02 13:13:08 compute-0 neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593[405882]: [WARNING]  (405896) : All workers exited. Exiting... (0)
Oct 02 13:13:08 compute-0 systemd[1]: libpod-2f2d83b776e295ffb8fd001eb6640468416c86b622309ad1e9e1ff38d729d3a2.scope: Deactivated successfully.
Oct 02 13:13:08 compute-0 podman[406168]: 2025-10-02 13:13:08.643969205 +0000 UTC m=+0.045829359 container died 2f2d83b776e295ffb8fd001eb6640468416c86b622309ad1e9e1ff38d729d3a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.639 2 DEBUG nova.virt.libvirt.vif [None req-3442f389-78f9-425b-9c2b-13359ca0cb13 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:12:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1710981769',display_name='tempest-TestNetworkAdvancedServerOps-server-1710981769',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1710981769',id=216,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF+oMSn5P+xoqvEw63G0Plc0Av/pnjbZbz+0lJVZCi7mAGtNM6HIVFIBh+PJdNpowWjoQaoGdr7QXqOABJI4p9p4Fmv+8P57z6CZu1tIPiDwua2k+BkVDF+iuQvfIG/QLw==',key_name='tempest-TestNetworkAdvancedServerOps-1766999338',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:12:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-8eo70ise',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:12:46Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=a6e76331-61cd-4b84-9c5a-6bcebfc50236,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eddb34a1-265c-47da-9fba-767aa4c99438", "address": "fa:16:3e:3a:5a:b9", "network": {"id": "acc93fa2-a668-4911-9ce8-f25e326a9593", "bridge": "br-int", "label": "tempest-network-smoke--2102075172", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeddb34a1-26", "ovs_interfaceid": "eddb34a1-265c-47da-9fba-767aa4c99438", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.640 2 DEBUG nova.network.os_vif_util [None req-3442f389-78f9-425b-9c2b-13359ca0cb13 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "eddb34a1-265c-47da-9fba-767aa4c99438", "address": "fa:16:3e:3a:5a:b9", "network": {"id": "acc93fa2-a668-4911-9ce8-f25e326a9593", "bridge": "br-int", "label": "tempest-network-smoke--2102075172", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeddb34a1-26", "ovs_interfaceid": "eddb34a1-265c-47da-9fba-767aa4c99438", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.641 2 DEBUG nova.network.os_vif_util [None req-3442f389-78f9-425b-9c2b-13359ca0cb13 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:5a:b9,bridge_name='br-int',has_traffic_filtering=True,id=eddb34a1-265c-47da-9fba-767aa4c99438,network=Network(acc93fa2-a668-4911-9ce8-f25e326a9593),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeddb34a1-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.641 2 DEBUG os_vif [None req-3442f389-78f9-425b-9c2b-13359ca0cb13 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:5a:b9,bridge_name='br-int',has_traffic_filtering=True,id=eddb34a1-265c-47da-9fba-767aa4c99438,network=Network(acc93fa2-a668-4911-9ce8-f25e326a9593),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeddb34a1-26') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.643 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeddb34a1-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.651 2 INFO os_vif [None req-3442f389-78f9-425b-9c2b-13359ca0cb13 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:5a:b9,bridge_name='br-int',has_traffic_filtering=True,id=eddb34a1-265c-47da-9fba-767aa4c99438,network=Network(acc93fa2-a668-4911-9ce8-f25e326a9593),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeddb34a1-26')
Oct 02 13:13:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2f2d83b776e295ffb8fd001eb6640468416c86b622309ad1e9e1ff38d729d3a2-userdata-shm.mount: Deactivated successfully.
Oct 02 13:13:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-d96db08e36021c13c2e868df7cc7f9799d869ef1d992613c7d85e1a80684aaac-merged.mount: Deactivated successfully.
Oct 02 13:13:08 compute-0 podman[406168]: 2025-10-02 13:13:08.684581788 +0000 UTC m=+0.086441942 container cleanup 2f2d83b776e295ffb8fd001eb6640468416c86b622309ad1e9e1ff38d729d3a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:13:08 compute-0 systemd[1]: libpod-conmon-2f2d83b776e295ffb8fd001eb6640468416c86b622309ad1e9e1ff38d729d3a2.scope: Deactivated successfully.
Oct 02 13:13:08 compute-0 podman[406219]: 2025-10-02 13:13:08.746502767 +0000 UTC m=+0.039762634 container remove 2f2d83b776e295ffb8fd001eb6640468416c86b622309ad1e9e1ff38d729d3a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:13:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:13:08.752 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[15cdea9d-332a-49a1-838e-9fb19b513948]: (4, ('Thu Oct  2 01:13:08 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593 (2f2d83b776e295ffb8fd001eb6640468416c86b622309ad1e9e1ff38d729d3a2)\n2f2d83b776e295ffb8fd001eb6640468416c86b622309ad1e9e1ff38d729d3a2\nThu Oct  2 01:13:08 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593 (2f2d83b776e295ffb8fd001eb6640468416c86b622309ad1e9e1ff38d729d3a2)\n2f2d83b776e295ffb8fd001eb6640468416c86b622309ad1e9e1ff38d729d3a2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:13:08.754 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee74aad-b3a3-4e0e-864a-cf673d4e5149]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:13:08.755 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacc93fa2-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:13:08 compute-0 kernel: tapacc93fa2-a0: left promiscuous mode
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3456: 305 pgs: 305 active+clean; 299 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 743 KiB/s rd, 75 KiB/s wr, 49 op/s
Oct 02 13:13:08 compute-0 nova_compute[257802]: 2025-10-02 13:13:08.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:13:08.774 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[d589a95c-a862-4025-bf62-3331f8b90843]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:13:08.807 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9a09ca4d-7651-4d33-a4b6-98dd6eb8287d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:13:08.809 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8632d013-9b50-462f-bd07-bb3217cea00a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:13:08.824 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e96bdcd4-9906-4b30-bae1-3c112389fc8a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 883262, 'reachable_time': 27633, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406237, 'error': None, 'target': 'ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:08 compute-0 systemd[1]: run-netns-ovnmeta\x2dacc93fa2\x2da668\x2d4911\x2d9ce8\x2df25e326a9593.mount: Deactivated successfully.
Oct 02 13:13:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:13:08.830 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-acc93fa2-a668-4911-9ce8-f25e326a9593 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:13:08 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:13:08.830 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[cca4a574-6c93-45b7-9017-9d5c70d0b0d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:09 compute-0 nova_compute[257802]: 2025-10-02 13:13:09.321 2 DEBUG nova.network.neutron [req-5a4daf64-601d-4dd3-b30d-932c48865e11 req-4e94ef4c-acee-4bb4-b66d-aecc9db1e5a8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Updated VIF entry in instance network info cache for port eddb34a1-265c-47da-9fba-767aa4c99438. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:13:09 compute-0 nova_compute[257802]: 2025-10-02 13:13:09.321 2 DEBUG nova.network.neutron [req-5a4daf64-601d-4dd3-b30d-932c48865e11 req-4e94ef4c-acee-4bb4-b66d-aecc9db1e5a8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Updating instance_info_cache with network_info: [{"id": "eddb34a1-265c-47da-9fba-767aa4c99438", "address": "fa:16:3e:3a:5a:b9", "network": {"id": "acc93fa2-a668-4911-9ce8-f25e326a9593", "bridge": "br-int", "label": "tempest-network-smoke--2102075172", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeddb34a1-26", "ovs_interfaceid": "eddb34a1-265c-47da-9fba-767aa4c99438", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:13:09 compute-0 nova_compute[257802]: 2025-10-02 13:13:09.341 2 DEBUG oslo_concurrency.lockutils [req-5a4daf64-601d-4dd3-b30d-932c48865e11 req-4e94ef4c-acee-4bb4-b66d-aecc9db1e5a8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-a6e76331-61cd-4b84-9c5a-6bcebfc50236" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:13:09 compute-0 nova_compute[257802]: 2025-10-02 13:13:09.520 2 INFO nova.virt.libvirt.driver [None req-3442f389-78f9-425b-9c2b-13359ca0cb13 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Deleting instance files /var/lib/nova/instances/a6e76331-61cd-4b84-9c5a-6bcebfc50236_del
Oct 02 13:13:09 compute-0 nova_compute[257802]: 2025-10-02 13:13:09.521 2 INFO nova.virt.libvirt.driver [None req-3442f389-78f9-425b-9c2b-13359ca0cb13 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Deletion of /var/lib/nova/instances/a6e76331-61cd-4b84-9c5a-6bcebfc50236_del complete
Oct 02 13:13:09 compute-0 nova_compute[257802]: 2025-10-02 13:13:09.583 2 INFO nova.compute.manager [None req-3442f389-78f9-425b-9c2b-13359ca0cb13 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Took 1.19 seconds to destroy the instance on the hypervisor.
Oct 02 13:13:09 compute-0 nova_compute[257802]: 2025-10-02 13:13:09.584 2 DEBUG oslo.service.loopingcall [None req-3442f389-78f9-425b-9c2b-13359ca0cb13 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:13:09 compute-0 nova_compute[257802]: 2025-10-02 13:13:09.584 2 DEBUG nova.compute.manager [-] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:13:09 compute-0 nova_compute[257802]: 2025-10-02 13:13:09.584 2 DEBUG nova.network.neutron [-] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:13:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:13:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:09.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:13:09 compute-0 ceph-mon[73607]: pgmap v3456: 305 pgs: 305 active+clean; 299 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 743 KiB/s rd, 75 KiB/s wr, 49 op/s
Oct 02 13:13:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:10.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:10 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:13:10.084 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '84'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:13:10 compute-0 nova_compute[257802]: 2025-10-02 13:13:10.430 2 DEBUG nova.compute.manager [req-87110f1c-2edc-4acd-94e6-6400cb41ca36 req-62df160c-3d6d-4fb1-94ad-ffe1b007f3ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Received event network-vif-unplugged-eddb34a1-265c-47da-9fba-767aa4c99438 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:13:10 compute-0 nova_compute[257802]: 2025-10-02 13:13:10.430 2 DEBUG oslo_concurrency.lockutils [req-87110f1c-2edc-4acd-94e6-6400cb41ca36 req-62df160c-3d6d-4fb1-94ad-ffe1b007f3ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:10 compute-0 nova_compute[257802]: 2025-10-02 13:13:10.430 2 DEBUG oslo_concurrency.lockutils [req-87110f1c-2edc-4acd-94e6-6400cb41ca36 req-62df160c-3d6d-4fb1-94ad-ffe1b007f3ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:10 compute-0 nova_compute[257802]: 2025-10-02 13:13:10.431 2 DEBUG oslo_concurrency.lockutils [req-87110f1c-2edc-4acd-94e6-6400cb41ca36 req-62df160c-3d6d-4fb1-94ad-ffe1b007f3ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:10 compute-0 nova_compute[257802]: 2025-10-02 13:13:10.431 2 DEBUG nova.compute.manager [req-87110f1c-2edc-4acd-94e6-6400cb41ca36 req-62df160c-3d6d-4fb1-94ad-ffe1b007f3ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] No waiting events found dispatching network-vif-unplugged-eddb34a1-265c-47da-9fba-767aa4c99438 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:13:10 compute-0 nova_compute[257802]: 2025-10-02 13:13:10.431 2 DEBUG nova.compute.manager [req-87110f1c-2edc-4acd-94e6-6400cb41ca36 req-62df160c-3d6d-4fb1-94ad-ffe1b007f3ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Received event network-vif-unplugged-eddb34a1-265c-47da-9fba-767aa4c99438 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:13:10 compute-0 nova_compute[257802]: 2025-10-02 13:13:10.431 2 DEBUG nova.compute.manager [req-87110f1c-2edc-4acd-94e6-6400cb41ca36 req-62df160c-3d6d-4fb1-94ad-ffe1b007f3ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Received event network-vif-plugged-eddb34a1-265c-47da-9fba-767aa4c99438 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:13:10 compute-0 nova_compute[257802]: 2025-10-02 13:13:10.431 2 DEBUG oslo_concurrency.lockutils [req-87110f1c-2edc-4acd-94e6-6400cb41ca36 req-62df160c-3d6d-4fb1-94ad-ffe1b007f3ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:10 compute-0 nova_compute[257802]: 2025-10-02 13:13:10.432 2 DEBUG oslo_concurrency.lockutils [req-87110f1c-2edc-4acd-94e6-6400cb41ca36 req-62df160c-3d6d-4fb1-94ad-ffe1b007f3ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:10 compute-0 nova_compute[257802]: 2025-10-02 13:13:10.432 2 DEBUG oslo_concurrency.lockutils [req-87110f1c-2edc-4acd-94e6-6400cb41ca36 req-62df160c-3d6d-4fb1-94ad-ffe1b007f3ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:10 compute-0 nova_compute[257802]: 2025-10-02 13:13:10.432 2 DEBUG nova.compute.manager [req-87110f1c-2edc-4acd-94e6-6400cb41ca36 req-62df160c-3d6d-4fb1-94ad-ffe1b007f3ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] No waiting events found dispatching network-vif-plugged-eddb34a1-265c-47da-9fba-767aa4c99438 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:13:10 compute-0 nova_compute[257802]: 2025-10-02 13:13:10.432 2 WARNING nova.compute.manager [req-87110f1c-2edc-4acd-94e6-6400cb41ca36 req-62df160c-3d6d-4fb1-94ad-ffe1b007f3ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Received unexpected event network-vif-plugged-eddb34a1-265c-47da-9fba-767aa4c99438 for instance with vm_state active and task_state deleting.
Oct 02 13:13:10 compute-0 nova_compute[257802]: 2025-10-02 13:13:10.469 2 DEBUG nova.network.neutron [-] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:13:10 compute-0 nova_compute[257802]: 2025-10-02 13:13:10.486 2 INFO nova.compute.manager [-] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Took 0.90 seconds to deallocate network for instance.
Oct 02 13:13:10 compute-0 nova_compute[257802]: 2025-10-02 13:13:10.529 2 DEBUG oslo_concurrency.lockutils [None req-3442f389-78f9-425b-9c2b-13359ca0cb13 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:10 compute-0 nova_compute[257802]: 2025-10-02 13:13:10.530 2 DEBUG oslo_concurrency.lockutils [None req-3442f389-78f9-425b-9c2b-13359ca0cb13 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:10 compute-0 nova_compute[257802]: 2025-10-02 13:13:10.551 2 DEBUG nova.compute.manager [req-1f454c55-1d19-4c8b-9dee-1ffdb094ac2c req-3ef15596-2b3f-4c0a-a944-765a329a849b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Received event network-vif-deleted-eddb34a1-265c-47da-9fba-767aa4c99438 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:13:10 compute-0 nova_compute[257802]: 2025-10-02 13:13:10.608 2 DEBUG oslo_concurrency.processutils [None req-3442f389-78f9-425b-9c2b-13359ca0cb13 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:13:10 compute-0 nova_compute[257802]: 2025-10-02 13:13:10.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3457: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 620 KiB/s rd, 246 KiB/s wr, 57 op/s
Oct 02 13:13:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:13:11 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1443556659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:11 compute-0 nova_compute[257802]: 2025-10-02 13:13:11.030 2 DEBUG oslo_concurrency.processutils [None req-3442f389-78f9-425b-9c2b-13359ca0cb13 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:13:11 compute-0 nova_compute[257802]: 2025-10-02 13:13:11.037 2 DEBUG nova.compute.provider_tree [None req-3442f389-78f9-425b-9c2b-13359ca0cb13 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:13:11 compute-0 nova_compute[257802]: 2025-10-02 13:13:11.058 2 DEBUG nova.scheduler.client.report [None req-3442f389-78f9-425b-9c2b-13359ca0cb13 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:13:11 compute-0 nova_compute[257802]: 2025-10-02 13:13:11.079 2 DEBUG oslo_concurrency.lockutils [None req-3442f389-78f9-425b-9c2b-13359ca0cb13 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:11 compute-0 nova_compute[257802]: 2025-10-02 13:13:11.119 2 INFO nova.scheduler.client.report [None req-3442f389-78f9-425b-9c2b-13359ca0cb13 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Deleted allocations for instance a6e76331-61cd-4b84-9c5a-6bcebfc50236
Oct 02 13:13:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:11 compute-0 nova_compute[257802]: 2025-10-02 13:13:11.178 2 DEBUG oslo_concurrency.lockutils [None req-3442f389-78f9-425b-9c2b-13359ca0cb13 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "a6e76331-61cd-4b84-9c5a-6bcebfc50236" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:11.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:11 compute-0 ceph-mon[73607]: pgmap v3457: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 620 KiB/s rd, 246 KiB/s wr, 57 op/s
Oct 02 13:13:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1443556659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:12.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:13:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:13:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:13:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:13:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:13:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:13:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3458: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 385 KiB/s rd, 208 KiB/s wr, 32 op/s
Oct 02 13:13:13 compute-0 ceph-mon[73607]: pgmap v3458: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 385 KiB/s rd, 208 KiB/s wr, 32 op/s
Oct 02 13:13:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:13.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:13 compute-0 nova_compute[257802]: 2025-10-02 13:13:13.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:14.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:14 compute-0 nova_compute[257802]: 2025-10-02 13:13:14.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:14 compute-0 nova_compute[257802]: 2025-10-02 13:13:14.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3459: 305 pgs: 305 active+clean; 221 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 394 KiB/s rd, 212 KiB/s wr, 45 op/s
Oct 02 13:13:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:15.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:15 compute-0 nova_compute[257802]: 2025-10-02 13:13:15.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:15 compute-0 ceph-mon[73607]: pgmap v3459: 305 pgs: 305 active+clean; 221 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 394 KiB/s rd, 212 KiB/s wr, 45 op/s
Oct 02 13:13:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:16.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3460: 305 pgs: 305 active+clean; 221 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 207 KiB/s rd, 219 KiB/s wr, 36 op/s
Oct 02 13:13:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:13:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:17.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:13:17 compute-0 podman[406268]: 2025-10-02 13:13:17.923713308 +0000 UTC m=+0.058612179 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 13:13:17 compute-0 podman[406267]: 2025-10-02 13:13:17.925681555 +0000 UTC m=+0.063389894 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:13:17 compute-0 podman[406266]: 2025-10-02 13:13:17.944544622 +0000 UTC m=+0.086119495 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 02 13:13:17 compute-0 ceph-mon[73607]: pgmap v3460: 305 pgs: 305 active+clean; 221 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 207 KiB/s rd, 219 KiB/s wr, 36 op/s
Oct 02 13:13:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:13:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:18.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:13:18 compute-0 nova_compute[257802]: 2025-10-02 13:13:18.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3461: 305 pgs: 305 active+clean; 221 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 216 KiB/s rd, 209 KiB/s wr, 47 op/s
Oct 02 13:13:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:19.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:20.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:20 compute-0 ceph-mon[73607]: pgmap v3461: 305 pgs: 305 active+clean; 221 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 216 KiB/s rd, 209 KiB/s wr, 47 op/s
Oct 02 13:13:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/794214939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:20 compute-0 nova_compute[257802]: 2025-10-02 13:13:20.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3462: 305 pgs: 305 active+clean; 221 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 204 KiB/s wr, 48 op/s
Oct 02 13:13:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:21 compute-0 ceph-mon[73607]: pgmap v3462: 305 pgs: 305 active+clean; 221 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 204 KiB/s wr, 48 op/s
Oct 02 13:13:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:13:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:21.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:13:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:22.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3463: 305 pgs: 305 active+clean; 221 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 13 KiB/s wr, 31 op/s
Oct 02 13:13:23 compute-0 ceph-mon[73607]: pgmap v3463: 305 pgs: 305 active+clean; 221 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 13 KiB/s wr, 31 op/s
Oct 02 13:13:23 compute-0 nova_compute[257802]: 2025-10-02 13:13:23.625 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410788.6243024, a6e76331-61cd-4b84-9c5a-6bcebfc50236 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:13:23 compute-0 nova_compute[257802]: 2025-10-02 13:13:23.625 2 INFO nova.compute.manager [-] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] VM Stopped (Lifecycle Event)
Oct 02 13:13:23 compute-0 nova_compute[257802]: 2025-10-02 13:13:23.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:23.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:23 compute-0 nova_compute[257802]: 2025-10-02 13:13:23.718 2 DEBUG nova.compute.manager [None req-c1c04437-71da-4dde-b67c-28cac8265cc3 - - - - - -] [instance: a6e76331-61cd-4b84-9c5a-6bcebfc50236] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:13:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:24.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1955584302' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:13:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1955584302' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:13:24 compute-0 sudo[406321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:24 compute-0 sudo[406321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:24 compute-0 sudo[406321]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:24 compute-0 sudo[406346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:13:24 compute-0 sudo[406346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:24 compute-0 sudo[406346]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:24 compute-0 sudo[406371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:24 compute-0 sudo[406371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:24 compute-0 sudo[406371]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:24 compute-0 sudo[406396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:13:24 compute-0 sudo[406396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3464: 305 pgs: 305 active+clean; 209 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 13 KiB/s wr, 44 op/s
Oct 02 13:13:25 compute-0 sudo[406396]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e421 do_prune osdmap full prune enabled
Oct 02 13:13:25 compute-0 ceph-mon[73607]: pgmap v3464: 305 pgs: 305 active+clean; 209 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 13 KiB/s wr, 44 op/s
Oct 02 13:13:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:13:25 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:13:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:13:25 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:13:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:13:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e422 e422: 3 total, 3 up, 3 in
Oct 02 13:13:25 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e422: 3 total, 3 up, 3 in
Oct 02 13:13:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:25.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:25 compute-0 nova_compute[257802]: 2025-10-02 13:13:25.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:25 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:13:25 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev dfd898ad-0773-42a2-884e-383d91dcadc6 does not exist
Oct 02 13:13:25 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 773cbc11-de22-443d-b30d-598f58034230 does not exist
Oct 02 13:13:25 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 09e49bd2-8758-49c3-87cd-788740e14350 does not exist
Oct 02 13:13:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:13:25 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:13:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:13:25 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:13:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:13:25 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:13:25 compute-0 sudo[406453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:25 compute-0 sudo[406453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:25 compute-0 sudo[406453]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:25 compute-0 sudo[406478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:13:25 compute-0 sudo[406478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:25 compute-0 sudo[406478]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:25 compute-0 sudo[406503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:25 compute-0 sudo[406503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:25 compute-0 sudo[406503]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:26 compute-0 sudo[406528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:13:26 compute-0 sudo[406528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:26.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:26 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:13:26 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:13:26 compute-0 ceph-mon[73607]: osdmap e422: 3 total, 3 up, 3 in
Oct 02 13:13:26 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:13:26 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:13:26 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:13:26 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:13:26 compute-0 podman[406593]: 2025-10-02 13:13:26.39578573 +0000 UTC m=+0.055238948 container create b39f32bed45d5eb7bec857fb13f47cbe63c441d6f12fe6b4822274c534b699a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 13:13:26 compute-0 systemd[1]: Started libpod-conmon-b39f32bed45d5eb7bec857fb13f47cbe63c441d6f12fe6b4822274c534b699a2.scope.
Oct 02 13:13:26 compute-0 podman[406593]: 2025-10-02 13:13:26.362041624 +0000 UTC m=+0.021494862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:13:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:13:26 compute-0 podman[406593]: 2025-10-02 13:13:26.478541752 +0000 UTC m=+0.137994990 container init b39f32bed45d5eb7bec857fb13f47cbe63c441d6f12fe6b4822274c534b699a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chaum, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:13:26 compute-0 podman[406593]: 2025-10-02 13:13:26.484942447 +0000 UTC m=+0.144395665 container start b39f32bed45d5eb7bec857fb13f47cbe63c441d6f12fe6b4822274c534b699a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chaum, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:13:26 compute-0 podman[406593]: 2025-10-02 13:13:26.487799566 +0000 UTC m=+0.147252824 container attach b39f32bed45d5eb7bec857fb13f47cbe63c441d6f12fe6b4822274c534b699a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:13:26 compute-0 musing_chaum[406609]: 167 167
Oct 02 13:13:26 compute-0 systemd[1]: libpod-b39f32bed45d5eb7bec857fb13f47cbe63c441d6f12fe6b4822274c534b699a2.scope: Deactivated successfully.
Oct 02 13:13:26 compute-0 conmon[406609]: conmon b39f32bed45d5eb7bec8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b39f32bed45d5eb7bec857fb13f47cbe63c441d6f12fe6b4822274c534b699a2.scope/container/memory.events
Oct 02 13:13:26 compute-0 podman[406593]: 2025-10-02 13:13:26.491510356 +0000 UTC m=+0.150963574 container died b39f32bed45d5eb7bec857fb13f47cbe63c441d6f12fe6b4822274c534b699a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chaum, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 13:13:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c987ad68bd868385cfb121e609da47f034ba9895c6961a905e9c54615bfd773-merged.mount: Deactivated successfully.
Oct 02 13:13:26 compute-0 podman[406593]: 2025-10-02 13:13:26.526056512 +0000 UTC m=+0.185509730 container remove b39f32bed45d5eb7bec857fb13f47cbe63c441d6f12fe6b4822274c534b699a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chaum, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:13:26 compute-0 systemd[1]: libpod-conmon-b39f32bed45d5eb7bec857fb13f47cbe63c441d6f12fe6b4822274c534b699a2.scope: Deactivated successfully.
Oct 02 13:13:26 compute-0 podman[406635]: 2025-10-02 13:13:26.71408886 +0000 UTC m=+0.045788568 container create c5ae0e23183b0aadf0a6c1dcc30a30d1b13045882157448b3eb203d9805262cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chatelet, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:13:26 compute-0 systemd[1]: Started libpod-conmon-c5ae0e23183b0aadf0a6c1dcc30a30d1b13045882157448b3eb203d9805262cc.scope.
Oct 02 13:13:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:13:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3466: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 2.2 KiB/s wr, 48 op/s
Oct 02 13:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/953c9ac818707b5e8608e4f2af625dab447df2e4eb15bc61128147662b6e57a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/953c9ac818707b5e8608e4f2af625dab447df2e4eb15bc61128147662b6e57a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/953c9ac818707b5e8608e4f2af625dab447df2e4eb15bc61128147662b6e57a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/953c9ac818707b5e8608e4f2af625dab447df2e4eb15bc61128147662b6e57a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/953c9ac818707b5e8608e4f2af625dab447df2e4eb15bc61128147662b6e57a8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:13:26 compute-0 podman[406635]: 2025-10-02 13:13:26.782219289 +0000 UTC m=+0.113919017 container init c5ae0e23183b0aadf0a6c1dcc30a30d1b13045882157448b3eb203d9805262cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chatelet, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:13:26 compute-0 podman[406635]: 2025-10-02 13:13:26.788930662 +0000 UTC m=+0.120630370 container start c5ae0e23183b0aadf0a6c1dcc30a30d1b13045882157448b3eb203d9805262cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chatelet, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 13:13:26 compute-0 podman[406635]: 2025-10-02 13:13:26.792344954 +0000 UTC m=+0.124044682 container attach c5ae0e23183b0aadf0a6c1dcc30a30d1b13045882157448b3eb203d9805262cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chatelet, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:13:26 compute-0 podman[406635]: 2025-10-02 13:13:26.697576151 +0000 UTC m=+0.029275879 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:13:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:13:26.999 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:13:27.000 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:13:27.000 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:27 compute-0 ceph-mon[73607]: pgmap v3466: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 2.2 KiB/s wr, 48 op/s
Oct 02 13:13:27 compute-0 exciting_chatelet[406653]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:13:27 compute-0 exciting_chatelet[406653]: --> relative data size: 1.0
Oct 02 13:13:27 compute-0 exciting_chatelet[406653]: --> All data devices are unavailable
Oct 02 13:13:27 compute-0 systemd[1]: libpod-c5ae0e23183b0aadf0a6c1dcc30a30d1b13045882157448b3eb203d9805262cc.scope: Deactivated successfully.
Oct 02 13:13:27 compute-0 podman[406635]: 2025-10-02 13:13:27.611048741 +0000 UTC m=+0.942748469 container died c5ae0e23183b0aadf0a6c1dcc30a30d1b13045882157448b3eb203d9805262cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chatelet, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:13:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-953c9ac818707b5e8608e4f2af625dab447df2e4eb15bc61128147662b6e57a8-merged.mount: Deactivated successfully.
Oct 02 13:13:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:13:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:27.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:13:27 compute-0 podman[406635]: 2025-10-02 13:13:27.668005369 +0000 UTC m=+0.999705077 container remove c5ae0e23183b0aadf0a6c1dcc30a30d1b13045882157448b3eb203d9805262cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chatelet, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:13:27 compute-0 systemd[1]: libpod-conmon-c5ae0e23183b0aadf0a6c1dcc30a30d1b13045882157448b3eb203d9805262cc.scope: Deactivated successfully.
Oct 02 13:13:27 compute-0 sudo[406528]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:27 compute-0 sudo[406681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:27 compute-0 sudo[406681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:27 compute-0 sudo[406681]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:27 compute-0 sudo[406706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:13:27 compute-0 sudo[406706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:27 compute-0 sudo[406706]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:27 compute-0 sudo[406731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:27 compute-0 sudo[406731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:27 compute-0 sudo[406731]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:27 compute-0 sudo[406756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 13:13:27 compute-0 sudo[406756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:28.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:28 compute-0 sudo[406817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:28 compute-0 sudo[406817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:28 compute-0 sudo[406817]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:28 compute-0 podman[406836]: 2025-10-02 13:13:28.271285903 +0000 UTC m=+0.036871613 container create a7d1356918cc37ed48eb0e445d763db79d4c45d6c1067f9c0876f31483a7e459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 13:13:28 compute-0 systemd[1]: Started libpod-conmon-a7d1356918cc37ed48eb0e445d763db79d4c45d6c1067f9c0876f31483a7e459.scope.
Oct 02 13:13:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:13:28 compute-0 sudo[406859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:28 compute-0 sudo[406859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:28 compute-0 sudo[406859]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:28 compute-0 podman[406836]: 2025-10-02 13:13:28.256349232 +0000 UTC m=+0.021934992 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:13:28 compute-0 podman[406836]: 2025-10-02 13:13:28.358939734 +0000 UTC m=+0.124525474 container init a7d1356918cc37ed48eb0e445d763db79d4c45d6c1067f9c0876f31483a7e459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_jang, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:13:28 compute-0 podman[406857]: 2025-10-02 13:13:28.360595294 +0000 UTC m=+0.087052207 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 13:13:28 compute-0 podman[406836]: 2025-10-02 13:13:28.368929105 +0000 UTC m=+0.134514825 container start a7d1356918cc37ed48eb0e445d763db79d4c45d6c1067f9c0876f31483a7e459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_jang, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:13:28 compute-0 podman[406836]: 2025-10-02 13:13:28.372571764 +0000 UTC m=+0.138157514 container attach a7d1356918cc37ed48eb0e445d763db79d4c45d6c1067f9c0876f31483a7e459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_jang, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:13:28 compute-0 busy_jang[406900]: 167 167
Oct 02 13:13:28 compute-0 systemd[1]: libpod-a7d1356918cc37ed48eb0e445d763db79d4c45d6c1067f9c0876f31483a7e459.scope: Deactivated successfully.
Oct 02 13:13:28 compute-0 podman[406836]: 2025-10-02 13:13:28.377899843 +0000 UTC m=+0.143485563 container died a7d1356918cc37ed48eb0e445d763db79d4c45d6c1067f9c0876f31483a7e459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_jang, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 13:13:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1c4bb8cab5c2fd38a9f4a16b23123141ffdf72c263af9b8cbb277e8f2a59e11-merged.mount: Deactivated successfully.
Oct 02 13:13:28 compute-0 podman[406836]: 2025-10-02 13:13:28.419075329 +0000 UTC m=+0.184661049 container remove a7d1356918cc37ed48eb0e445d763db79d4c45d6c1067f9c0876f31483a7e459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 13:13:28 compute-0 systemd[1]: libpod-conmon-a7d1356918cc37ed48eb0e445d763db79d4c45d6c1067f9c0876f31483a7e459.scope: Deactivated successfully.
Oct 02 13:13:28 compute-0 podman[406936]: 2025-10-02 13:13:28.573479884 +0000 UTC m=+0.037873417 container create d2839eedb99a1f6d090971ba020964ace48bfeb464614fd4a7b320513241b3f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:13:28 compute-0 systemd[1]: Started libpod-conmon-d2839eedb99a1f6d090971ba020964ace48bfeb464614fd4a7b320513241b3f0.scope.
Oct 02 13:13:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57648afb573f3b97ce4e2537025c93957b06d38bf0406cb3a9fb92bc522cab98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57648afb573f3b97ce4e2537025c93957b06d38bf0406cb3a9fb92bc522cab98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57648afb573f3b97ce4e2537025c93957b06d38bf0406cb3a9fb92bc522cab98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57648afb573f3b97ce4e2537025c93957b06d38bf0406cb3a9fb92bc522cab98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:13:28 compute-0 podman[406936]: 2025-10-02 13:13:28.557725933 +0000 UTC m=+0.022119486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:13:28 compute-0 nova_compute[257802]: 2025-10-02 13:13:28.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:28 compute-0 podman[406936]: 2025-10-02 13:13:28.663544454 +0000 UTC m=+0.127938037 container init d2839eedb99a1f6d090971ba020964ace48bfeb464614fd4a7b320513241b3f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 13:13:28 compute-0 podman[406936]: 2025-10-02 13:13:28.670390608 +0000 UTC m=+0.134784131 container start d2839eedb99a1f6d090971ba020964ace48bfeb464614fd4a7b320513241b3f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Oct 02 13:13:28 compute-0 podman[406936]: 2025-10-02 13:13:28.6737272 +0000 UTC m=+0.138120733 container attach d2839eedb99a1f6d090971ba020964ace48bfeb464614fd4a7b320513241b3f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_curran, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 13:13:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3467: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 1.8 KiB/s wr, 37 op/s
Oct 02 13:13:29 compute-0 thirsty_curran[406952]: {
Oct 02 13:13:29 compute-0 thirsty_curran[406952]:     "1": [
Oct 02 13:13:29 compute-0 thirsty_curran[406952]:         {
Oct 02 13:13:29 compute-0 thirsty_curran[406952]:             "devices": [
Oct 02 13:13:29 compute-0 thirsty_curran[406952]:                 "/dev/loop3"
Oct 02 13:13:29 compute-0 thirsty_curran[406952]:             ],
Oct 02 13:13:29 compute-0 thirsty_curran[406952]:             "lv_name": "ceph_lv0",
Oct 02 13:13:29 compute-0 thirsty_curran[406952]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:13:29 compute-0 thirsty_curran[406952]:             "lv_size": "7511998464",
Oct 02 13:13:29 compute-0 thirsty_curran[406952]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:13:29 compute-0 thirsty_curran[406952]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:13:29 compute-0 thirsty_curran[406952]:             "name": "ceph_lv0",
Oct 02 13:13:29 compute-0 thirsty_curran[406952]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:13:29 compute-0 thirsty_curran[406952]:             "tags": {
Oct 02 13:13:29 compute-0 thirsty_curran[406952]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:13:29 compute-0 thirsty_curran[406952]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:13:29 compute-0 thirsty_curran[406952]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:13:29 compute-0 thirsty_curran[406952]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:13:29 compute-0 thirsty_curran[406952]:                 "ceph.cluster_name": "ceph",
Oct 02 13:13:29 compute-0 thirsty_curran[406952]:                 "ceph.crush_device_class": "",
Oct 02 13:13:29 compute-0 thirsty_curran[406952]:                 "ceph.encrypted": "0",
Oct 02 13:13:29 compute-0 thirsty_curran[406952]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:13:29 compute-0 thirsty_curran[406952]:                 "ceph.osd_id": "1",
Oct 02 13:13:29 compute-0 thirsty_curran[406952]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:13:29 compute-0 thirsty_curran[406952]:                 "ceph.type": "block",
Oct 02 13:13:29 compute-0 thirsty_curran[406952]:                 "ceph.vdo": "0"
Oct 02 13:13:29 compute-0 thirsty_curran[406952]:             },
Oct 02 13:13:29 compute-0 thirsty_curran[406952]:             "type": "block",
Oct 02 13:13:29 compute-0 thirsty_curran[406952]:             "vg_name": "ceph_vg0"
Oct 02 13:13:29 compute-0 thirsty_curran[406952]:         }
Oct 02 13:13:29 compute-0 thirsty_curran[406952]:     ]
Oct 02 13:13:29 compute-0 thirsty_curran[406952]: }
Oct 02 13:13:29 compute-0 systemd[1]: libpod-d2839eedb99a1f6d090971ba020964ace48bfeb464614fd4a7b320513241b3f0.scope: Deactivated successfully.
Oct 02 13:13:29 compute-0 podman[406936]: 2025-10-02 13:13:29.509625642 +0000 UTC m=+0.974019175 container died d2839eedb99a1f6d090971ba020964ace48bfeb464614fd4a7b320513241b3f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:13:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-57648afb573f3b97ce4e2537025c93957b06d38bf0406cb3a9fb92bc522cab98-merged.mount: Deactivated successfully.
Oct 02 13:13:29 compute-0 podman[406936]: 2025-10-02 13:13:29.556980808 +0000 UTC m=+1.021374341 container remove d2839eedb99a1f6d090971ba020964ace48bfeb464614fd4a7b320513241b3f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_curran, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 13:13:29 compute-0 systemd[1]: libpod-conmon-d2839eedb99a1f6d090971ba020964ace48bfeb464614fd4a7b320513241b3f0.scope: Deactivated successfully.
Oct 02 13:13:29 compute-0 sudo[406756]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:29 compute-0 sudo[406974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:29 compute-0 sudo[406974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:29 compute-0 sudo[406974]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:13:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:29.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:13:29 compute-0 sudo[407000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:13:29 compute-0 sudo[407000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:29 compute-0 sudo[407000]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:29 compute-0 sudo[407025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:29 compute-0 sudo[407025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:29 compute-0 sudo[407025]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:29 compute-0 sudo[407050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 13:13:29 compute-0 sudo[407050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:29 compute-0 ceph-mon[73607]: pgmap v3467: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 1.8 KiB/s wr, 37 op/s
Oct 02 13:13:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:30.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:30 compute-0 podman[407114]: 2025-10-02 13:13:30.206145202 +0000 UTC m=+0.056254301 container create 9f1495db6bd7a026ab0d3bf2f12b4389d3f94f53c9626d737e889bc81c9e3664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 13:13:30 compute-0 systemd[1]: Started libpod-conmon-9f1495db6bd7a026ab0d3bf2f12b4389d3f94f53c9626d737e889bc81c9e3664.scope.
Oct 02 13:13:30 compute-0 podman[407114]: 2025-10-02 13:13:30.187790478 +0000 UTC m=+0.037899668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:13:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:13:30 compute-0 podman[407114]: 2025-10-02 13:13:30.305053616 +0000 UTC m=+0.155162705 container init 9f1495db6bd7a026ab0d3bf2f12b4389d3f94f53c9626d737e889bc81c9e3664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 13:13:30 compute-0 podman[407114]: 2025-10-02 13:13:30.312095696 +0000 UTC m=+0.162204775 container start 9f1495db6bd7a026ab0d3bf2f12b4389d3f94f53c9626d737e889bc81c9e3664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 13:13:30 compute-0 podman[407114]: 2025-10-02 13:13:30.314853523 +0000 UTC m=+0.164962642 container attach 9f1495db6bd7a026ab0d3bf2f12b4389d3f94f53c9626d737e889bc81c9e3664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_haibt, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 13:13:30 compute-0 friendly_haibt[407130]: 167 167
Oct 02 13:13:30 compute-0 systemd[1]: libpod-9f1495db6bd7a026ab0d3bf2f12b4389d3f94f53c9626d737e889bc81c9e3664.scope: Deactivated successfully.
Oct 02 13:13:30 compute-0 conmon[407130]: conmon 9f1495db6bd7a026ab0d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9f1495db6bd7a026ab0d3bf2f12b4389d3f94f53c9626d737e889bc81c9e3664.scope/container/memory.events
Oct 02 13:13:30 compute-0 podman[407114]: 2025-10-02 13:13:30.318169943 +0000 UTC m=+0.168279072 container died 9f1495db6bd7a026ab0d3bf2f12b4389d3f94f53c9626d737e889bc81c9e3664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_haibt, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:13:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f3186561be183a8c394f98f462ef8de4bd8f022e3bc6e582f5c1027b8ceebf9-merged.mount: Deactivated successfully.
Oct 02 13:13:30 compute-0 podman[407114]: 2025-10-02 13:13:30.356046209 +0000 UTC m=+0.206155298 container remove 9f1495db6bd7a026ab0d3bf2f12b4389d3f94f53c9626d737e889bc81c9e3664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:13:30 compute-0 systemd[1]: libpod-conmon-9f1495db6bd7a026ab0d3bf2f12b4389d3f94f53c9626d737e889bc81c9e3664.scope: Deactivated successfully.
Oct 02 13:13:30 compute-0 podman[407154]: 2025-10-02 13:13:30.551883337 +0000 UTC m=+0.060365361 container create 50dd3f24f4ca7e4c52c8d68e4cbda236d43a17ac724e2a7b34971585cba1baf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_greider, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:13:30 compute-0 systemd[1]: Started libpod-conmon-50dd3f24f4ca7e4c52c8d68e4cbda236d43a17ac724e2a7b34971585cba1baf9.scope.
Oct 02 13:13:30 compute-0 podman[407154]: 2025-10-02 13:13:30.519546245 +0000 UTC m=+0.028028349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:13:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eb4d4d9b1973b9fa509442596349cee4ed00d0f62ec8b8dcb1351ac4136eea7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eb4d4d9b1973b9fa509442596349cee4ed00d0f62ec8b8dcb1351ac4136eea7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eb4d4d9b1973b9fa509442596349cee4ed00d0f62ec8b8dcb1351ac4136eea7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eb4d4d9b1973b9fa509442596349cee4ed00d0f62ec8b8dcb1351ac4136eea7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:13:30 compute-0 podman[407154]: 2025-10-02 13:13:30.633593794 +0000 UTC m=+0.142075848 container init 50dd3f24f4ca7e4c52c8d68e4cbda236d43a17ac724e2a7b34971585cba1baf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_greider, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 13:13:30 compute-0 podman[407154]: 2025-10-02 13:13:30.642875758 +0000 UTC m=+0.151357782 container start 50dd3f24f4ca7e4c52c8d68e4cbda236d43a17ac724e2a7b34971585cba1baf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_greider, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 13:13:30 compute-0 podman[407154]: 2025-10-02 13:13:30.646417144 +0000 UTC m=+0.154899178 container attach 50dd3f24f4ca7e4c52c8d68e4cbda236d43a17ac724e2a7b34971585cba1baf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_greider, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:13:30 compute-0 nova_compute[257802]: 2025-10-02 13:13:30.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3468: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 1.9 KiB/s wr, 46 op/s
Oct 02 13:13:31 compute-0 ceph-mon[73607]: pgmap v3468: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 1.9 KiB/s wr, 46 op/s
Oct 02 13:13:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e422 do_prune osdmap full prune enabled
Oct 02 13:13:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e423 e423: 3 total, 3 up, 3 in
Oct 02 13:13:31 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e423: 3 total, 3 up, 3 in
Oct 02 13:13:31 compute-0 adoring_greider[407170]: {
Oct 02 13:13:31 compute-0 adoring_greider[407170]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 13:13:31 compute-0 adoring_greider[407170]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:13:31 compute-0 adoring_greider[407170]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:13:31 compute-0 adoring_greider[407170]:         "osd_id": 1,
Oct 02 13:13:31 compute-0 adoring_greider[407170]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:13:31 compute-0 adoring_greider[407170]:         "type": "bluestore"
Oct 02 13:13:31 compute-0 adoring_greider[407170]:     }
Oct 02 13:13:31 compute-0 adoring_greider[407170]: }
Oct 02 13:13:31 compute-0 systemd[1]: libpod-50dd3f24f4ca7e4c52c8d68e4cbda236d43a17ac724e2a7b34971585cba1baf9.scope: Deactivated successfully.
Oct 02 13:13:31 compute-0 podman[407154]: 2025-10-02 13:13:31.504433612 +0000 UTC m=+1.012915646 container died 50dd3f24f4ca7e4c52c8d68e4cbda236d43a17ac724e2a7b34971585cba1baf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_greider, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:13:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-9eb4d4d9b1973b9fa509442596349cee4ed00d0f62ec8b8dcb1351ac4136eea7-merged.mount: Deactivated successfully.
Oct 02 13:13:31 compute-0 podman[407154]: 2025-10-02 13:13:31.556223285 +0000 UTC m=+1.064705299 container remove 50dd3f24f4ca7e4c52c8d68e4cbda236d43a17ac724e2a7b34971585cba1baf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_greider, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:13:31 compute-0 systemd[1]: libpod-conmon-50dd3f24f4ca7e4c52c8d68e4cbda236d43a17ac724e2a7b34971585cba1baf9.scope: Deactivated successfully.
Oct 02 13:13:31 compute-0 sudo[407050]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:13:31 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:13:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:13:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:31.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:31 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:13:31 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 0ee1392d-0bd0-4560-9612-a0716c27db14 does not exist
Oct 02 13:13:31 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev b8958ddc-7e3c-4483-b1cc-2517d321be49 does not exist
Oct 02 13:13:31 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev b6bf3a80-2c2b-4859-879b-d4918439f24c does not exist
Oct 02 13:13:31 compute-0 sudo[407204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:31 compute-0 sudo[407204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:31 compute-0 sudo[407204]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:31 compute-0 sudo[407229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:13:31 compute-0 sudo[407229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:31 compute-0 sudo[407229]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:32.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:32 compute-0 nova_compute[257802]: 2025-10-02 13:13:32.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:13:32 compute-0 nova_compute[257802]: 2025-10-02 13:13:32.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:13:32 compute-0 ceph-mon[73607]: osdmap e423: 3 total, 3 up, 3 in
Oct 02 13:13:32 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:13:32 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:13:32 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/408768300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3470: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 1.9 KiB/s wr, 37 op/s
Oct 02 13:13:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/120125028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/785670362' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:13:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/785670362' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:13:33 compute-0 ceph-mon[73607]: pgmap v3470: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 1.9 KiB/s wr, 37 op/s
Oct 02 13:13:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3420663888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:33 compute-0 nova_compute[257802]: 2025-10-02 13:13:33.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:13:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:33.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:13:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:34.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3471: 305 pgs: 305 active+clean; 146 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 894 B/s wr, 24 op/s
Oct 02 13:13:35 compute-0 nova_compute[257802]: 2025-10-02 13:13:35.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:13:35 compute-0 nova_compute[257802]: 2025-10-02 13:13:35.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:13:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:35.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:35 compute-0 nova_compute[257802]: 2025-10-02 13:13:35.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:35 compute-0 ceph-mon[73607]: pgmap v3471: 305 pgs: 305 active+clean; 146 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 894 B/s wr, 24 op/s
Oct 02 13:13:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:36.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3472: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 1.5 KiB/s wr, 39 op/s
Oct 02 13:13:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e423 do_prune osdmap full prune enabled
Oct 02 13:13:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e424 e424: 3 total, 3 up, 3 in
Oct 02 13:13:36 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e424: 3 total, 3 up, 3 in
Oct 02 13:13:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:37.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:37 compute-0 ceph-mon[73607]: pgmap v3472: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 1.5 KiB/s wr, 39 op/s
Oct 02 13:13:37 compute-0 ceph-mon[73607]: osdmap e424: 3 total, 3 up, 3 in
Oct 02 13:13:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:38.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:38 compute-0 nova_compute[257802]: 2025-10-02 13:13:38.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:13:38 compute-0 nova_compute[257802]: 2025-10-02 13:13:38.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3474: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 29 op/s
Oct 02 13:13:38 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/902856131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:39 compute-0 nova_compute[257802]: 2025-10-02 13:13:39.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:13:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:39.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:39 compute-0 ceph-mon[73607]: pgmap v3474: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 29 op/s
Oct 02 13:13:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:40.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:40 compute-0 nova_compute[257802]: 2025-10-02 13:13:40.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3475: 305 pgs: 305 active+clean; 153 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 80 op/s
Oct 02 13:13:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e424 do_prune osdmap full prune enabled
Oct 02 13:13:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e425 e425: 3 total, 3 up, 3 in
Oct 02 13:13:41 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e425: 3 total, 3 up, 3 in
Oct 02 13:13:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:41.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:41 compute-0 ceph-mon[73607]: pgmap v3475: 305 pgs: 305 active+clean; 153 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 80 op/s
Oct 02 13:13:41 compute-0 ceph-mon[73607]: osdmap e425: 3 total, 3 up, 3 in
Oct 02 13:13:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:42.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:13:42
Oct 02 13:13:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:13:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:13:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'images', 'vms']
Oct 02 13:13:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:13:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:13:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:13:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:13:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:13:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:13:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:13:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3477: 305 pgs: 305 active+clean; 153 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 63 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Oct 02 13:13:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:13:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:13:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:13:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:13:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:13:43 compute-0 nova_compute[257802]: 2025-10-02 13:13:43.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:43.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:43 compute-0 ceph-mon[73607]: pgmap v3477: 305 pgs: 305 active+clean; 153 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 63 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Oct 02 13:13:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/707081822' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:13:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:44.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:13:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:13:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:13:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:13:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:13:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3478: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 47 KiB/s rd, 2.7 MiB/s wr, 69 op/s
Oct 02 13:13:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1761052138' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:13:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:13:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:45.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:13:45 compute-0 nova_compute[257802]: 2025-10-02 13:13:45.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:46 compute-0 ceph-mon[73607]: pgmap v3478: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 47 KiB/s rd, 2.7 MiB/s wr, 69 op/s
Oct 02 13:13:46 compute-0 nova_compute[257802]: 2025-10-02 13:13:46.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:13:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:46.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3479: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Oct 02 13:13:47 compute-0 ceph-mon[73607]: pgmap v3479: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Oct 02 13:13:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:47.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:48 compute-0 nova_compute[257802]: 2025-10-02 13:13:48.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:13:48 compute-0 nova_compute[257802]: 2025-10-02 13:13:48.098 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:48 compute-0 nova_compute[257802]: 2025-10-02 13:13:48.098 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:48 compute-0 nova_compute[257802]: 2025-10-02 13:13:48.099 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:48.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:48 compute-0 nova_compute[257802]: 2025-10-02 13:13:48.099 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:48 compute-0 nova_compute[257802]: 2025-10-02 13:13:48.100 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:48 compute-0 nova_compute[257802]: 2025-10-02 13:13:48.100 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:48 compute-0 nova_compute[257802]: 2025-10-02 13:13:48.119 2 DEBUG nova.virt.libvirt.imagecache [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Oct 02 13:13:48 compute-0 nova_compute[257802]: 2025-10-02 13:13:48.128 2 DEBUG nova.virt.libvirt.imagecache [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Oct 02 13:13:48 compute-0 nova_compute[257802]: 2025-10-02 13:13:48.129 2 WARNING nova.virt.libvirt.imagecache [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Unknown base file: /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968
Oct 02 13:13:48 compute-0 nova_compute[257802]: 2025-10-02 13:13:48.129 2 WARNING nova.virt.libvirt.imagecache [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Unknown base file: /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609
Oct 02 13:13:48 compute-0 nova_compute[257802]: 2025-10-02 13:13:48.129 2 WARNING nova.virt.libvirt.imagecache [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Unknown base file: /var/lib/nova/instances/_base/4baed542c9df4566caac038224dee0ff4dfdf888
Oct 02 13:13:48 compute-0 nova_compute[257802]: 2025-10-02 13:13:48.129 2 INFO nova.virt.libvirt.imagecache [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Removable base files: /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609 /var/lib/nova/instances/_base/4baed542c9df4566caac038224dee0ff4dfdf888
Oct 02 13:13:48 compute-0 nova_compute[257802]: 2025-10-02 13:13:48.129 2 INFO nova.virt.libvirt.imagecache [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968
Oct 02 13:13:48 compute-0 nova_compute[257802]: 2025-10-02 13:13:48.130 2 INFO nova.virt.libvirt.imagecache [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609
Oct 02 13:13:48 compute-0 nova_compute[257802]: 2025-10-02 13:13:48.130 2 INFO nova.virt.libvirt.imagecache [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/4baed542c9df4566caac038224dee0ff4dfdf888
Oct 02 13:13:48 compute-0 nova_compute[257802]: 2025-10-02 13:13:48.130 2 DEBUG nova.virt.libvirt.imagecache [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Oct 02 13:13:48 compute-0 nova_compute[257802]: 2025-10-02 13:13:48.130 2 DEBUG nova.virt.libvirt.imagecache [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Oct 02 13:13:48 compute-0 nova_compute[257802]: 2025-10-02 13:13:48.130 2 DEBUG nova.virt.libvirt.imagecache [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Oct 02 13:13:48 compute-0 nova_compute[257802]: 2025-10-02 13:13:48.131 2 INFO nova.virt.libvirt.imagecache [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Oct 02 13:13:48 compute-0 sudo[407262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:48 compute-0 sudo[407262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:48 compute-0 sudo[407262]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:48 compute-0 sudo[407300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:48 compute-0 sudo[407300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:48 compute-0 sudo[407300]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:48 compute-0 podman[407288]: 2025-10-02 13:13:48.574306139 +0000 UTC m=+0.068990330 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 13:13:48 compute-0 podman[407286]: 2025-10-02 13:13:48.59666643 +0000 UTC m=+0.098889554 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct 02 13:13:48 compute-0 podman[407287]: 2025-10-02 13:13:48.605505613 +0000 UTC m=+0.105258657 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:13:48 compute-0 nova_compute[257802]: 2025-10-02 13:13:48.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3480: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 84 op/s
Oct 02 13:13:49 compute-0 nova_compute[257802]: 2025-10-02 13:13:49.132 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:13:49 compute-0 nova_compute[257802]: 2025-10-02 13:13:49.132 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:13:49 compute-0 nova_compute[257802]: 2025-10-02 13:13:49.132 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:13:49 compute-0 nova_compute[257802]: 2025-10-02 13:13:49.147 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:13:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:49.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:49 compute-0 ceph-mon[73607]: pgmap v3480: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 84 op/s
Oct 02 13:13:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1417398520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:50.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:50 compute-0 nova_compute[257802]: 2025-10-02 13:13:50.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3481: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.3 MiB/s rd, 453 KiB/s wr, 99 op/s
Oct 02 13:13:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3328327207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:51.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:51 compute-0 ceph-mon[73607]: pgmap v3481: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.3 MiB/s rd, 453 KiB/s wr, 99 op/s
Oct 02 13:13:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:13:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:52.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:13:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3482: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 390 KiB/s wr, 85 op/s
Oct 02 13:13:53 compute-0 ceph-mon[73607]: pgmap v3482: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 390 KiB/s wr, 85 op/s
Oct 02 13:13:53 compute-0 nova_compute[257802]: 2025-10-02 13:13:53.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:13:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:53.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:13:54 compute-0 nova_compute[257802]: 2025-10-02 13:13:54.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:13:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:13:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:54.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:13:54 compute-0 nova_compute[257802]: 2025-10-02 13:13:54.138 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:54 compute-0 nova_compute[257802]: 2025-10-02 13:13:54.138 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:54 compute-0 nova_compute[257802]: 2025-10-02 13:13:54.139 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:54 compute-0 nova_compute[257802]: 2025-10-02 13:13:54.139 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:13:54 compute-0 nova_compute[257802]: 2025-10-02 13:13:54.139 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:13:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:13:54 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/391039370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:54 compute-0 nova_compute[257802]: 2025-10-02 13:13:54.625 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:13:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/391039370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3483: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.6 MiB/s wr, 106 op/s
Oct 02 13:13:54 compute-0 nova_compute[257802]: 2025-10-02 13:13:54.814 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:13:54 compute-0 nova_compute[257802]: 2025-10-02 13:13:54.816 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4207MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:13:54 compute-0 nova_compute[257802]: 2025-10-02 13:13:54.816 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:54 compute-0 nova_compute[257802]: 2025-10-02 13:13:54.816 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:54 compute-0 nova_compute[257802]: 2025-10-02 13:13:54.886 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:13:54 compute-0 nova_compute[257802]: 2025-10-02 13:13:54.887 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:13:54 compute-0 nova_compute[257802]: 2025-10-02 13:13:54.925 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009958283896333519 of space, bias 1.0, pg target 0.2987485168900056 quantized to 32 (current 32)
Oct 02 13:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00285571375395496 of space, bias 1.0, pg target 0.8567141261864879 quantized to 32 (current 32)
Oct 02 13:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:13:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:13:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:13:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/250271013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:55 compute-0 nova_compute[257802]: 2025-10-02 13:13:55.389 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:13:55 compute-0 nova_compute[257802]: 2025-10-02 13:13:55.396 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:13:55 compute-0 nova_compute[257802]: 2025-10-02 13:13:55.467 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:13:55 compute-0 nova_compute[257802]: 2025-10-02 13:13:55.629 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:13:55 compute-0 nova_compute[257802]: 2025-10-02 13:13:55.629 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:55 compute-0 ceph-mon[73607]: pgmap v3483: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.6 MiB/s wr, 106 op/s
Oct 02 13:13:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/189000789' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:13:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/189000789' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:13:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/250271013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:55.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:55 compute-0 nova_compute[257802]: 2025-10-02 13:13:55.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:13:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:56.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:13:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #174. Immutable memtables: 0.
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:13:56.168123) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 174
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410836168272, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 1423, "num_deletes": 252, "total_data_size": 2321748, "memory_usage": 2365016, "flush_reason": "Manual Compaction"}
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #175: started
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410836186650, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 175, "file_size": 1431528, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75692, "largest_seqno": 77114, "table_properties": {"data_size": 1426258, "index_size": 2537, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13959, "raw_average_key_size": 21, "raw_value_size": 1414718, "raw_average_value_size": 2153, "num_data_blocks": 112, "num_entries": 657, "num_filter_entries": 657, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410714, "oldest_key_time": 1759410714, "file_creation_time": 1759410836, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 175, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 18589 microseconds, and 8481 cpu microseconds.
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:13:56.186732) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #175: 1431528 bytes OK
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:13:56.186766) [db/memtable_list.cc:519] [default] Level-0 commit table #175 started
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:13:56.206151) [db/memtable_list.cc:722] [default] Level-0 commit table #175: memtable #1 done
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:13:56.206182) EVENT_LOG_v1 {"time_micros": 1759410836206171, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:13:56.206217) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 2315567, prev total WAL file size 2315567, number of live WAL files 2.
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000171.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:13:56.207600) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373632' seq:72057594037927935, type:22 .. '6D6772737461740033303133' seq:0, type:0; will stop at (end)
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [175(1397KB)], [173(12MB)]
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410836207656, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [175], "files_L6": [173], "score": -1, "input_data_size": 14795532, "oldest_snapshot_seqno": -1}
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #176: 10375 keys, 11783971 bytes, temperature: kUnknown
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410836287931, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 176, "file_size": 11783971, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11719335, "index_size": 37579, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25989, "raw_key_size": 273397, "raw_average_key_size": 26, "raw_value_size": 11540137, "raw_average_value_size": 1112, "num_data_blocks": 1425, "num_entries": 10375, "num_filter_entries": 10375, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759410836, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:13:56.288250) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 11783971 bytes
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:13:56.296172) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 184.0 rd, 146.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 12.7 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(18.6) write-amplify(8.2) OK, records in: 10843, records dropped: 468 output_compression: NoCompression
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:13:56.296190) EVENT_LOG_v1 {"time_micros": 1759410836296180, "job": 108, "event": "compaction_finished", "compaction_time_micros": 80430, "compaction_time_cpu_micros": 28136, "output_level": 6, "num_output_files": 1, "total_output_size": 11783971, "num_input_records": 10843, "num_output_records": 10375, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000175.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410836296508, "job": 108, "event": "table_file_deletion", "file_number": 175}
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410836298853, "job": 108, "event": "table_file_deletion", "file_number": 173}
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:13:56.207497) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:13:56.298889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:13:56.298893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:13:56.298895) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:13:56.298989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:13:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:13:56.298992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:13:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3484: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 115 op/s
Oct 02 13:13:57 compute-0 ceph-mon[73607]: pgmap v3484: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 115 op/s
Oct 02 13:13:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:13:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:57.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:13:57 compute-0 sshd-session[407414]: Invalid user vr from 167.99.55.34 port 46914
Oct 02 13:13:57 compute-0 sshd-session[407414]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 13:13:57 compute-0 sshd-session[407414]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=167.99.55.34
Oct 02 13:13:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:58.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:58 compute-0 nova_compute[257802]: 2025-10-02 13:13:58.462 2 DEBUG oslo_concurrency.lockutils [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Acquiring lock "7398ed9d-ac95-47e9-8de9-af875d198202" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:58 compute-0 nova_compute[257802]: 2025-10-02 13:13:58.462 2 DEBUG oslo_concurrency.lockutils [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "7398ed9d-ac95-47e9-8de9-af875d198202" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:58 compute-0 nova_compute[257802]: 2025-10-02 13:13:58.482 2 DEBUG nova.compute.manager [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:13:58 compute-0 nova_compute[257802]: 2025-10-02 13:13:58.603 2 DEBUG oslo_concurrency.lockutils [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:58 compute-0 nova_compute[257802]: 2025-10-02 13:13:58.603 2 DEBUG oslo_concurrency.lockutils [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:58 compute-0 nova_compute[257802]: 2025-10-02 13:13:58.610 2 DEBUG nova.virt.hardware [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:13:58 compute-0 nova_compute[257802]: 2025-10-02 13:13:58.610 2 INFO nova.compute.claims [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Claim successful on node compute-0.ctlplane.example.com
Oct 02 13:13:58 compute-0 nova_compute[257802]: 2025-10-02 13:13:58.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:58 compute-0 nova_compute[257802]: 2025-10-02 13:13:58.728 2 DEBUG oslo_concurrency.processutils [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:13:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3485: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 115 op/s
Oct 02 13:13:58 compute-0 podman[407418]: 2025-10-02 13:13:58.972395185 +0000 UTC m=+0.091879804 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 13:13:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:13:59 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4248591772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:59 compute-0 nova_compute[257802]: 2025-10-02 13:13:59.184 2 DEBUG oslo_concurrency.processutils [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:13:59 compute-0 nova_compute[257802]: 2025-10-02 13:13:59.190 2 DEBUG nova.compute.provider_tree [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:13:59 compute-0 nova_compute[257802]: 2025-10-02 13:13:59.226 2 DEBUG nova.scheduler.client.report [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:13:59 compute-0 nova_compute[257802]: 2025-10-02 13:13:59.247 2 DEBUG oslo_concurrency.lockutils [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:59 compute-0 nova_compute[257802]: 2025-10-02 13:13:59.247 2 DEBUG nova.compute.manager [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:13:59 compute-0 nova_compute[257802]: 2025-10-02 13:13:59.297 2 DEBUG nova.compute.manager [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:13:59 compute-0 nova_compute[257802]: 2025-10-02 13:13:59.298 2 DEBUG nova.network.neutron [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:13:59 compute-0 nova_compute[257802]: 2025-10-02 13:13:59.328 2 INFO nova.virt.libvirt.driver [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:13:59 compute-0 nova_compute[257802]: 2025-10-02 13:13:59.349 2 DEBUG nova.compute.manager [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:13:59 compute-0 nova_compute[257802]: 2025-10-02 13:13:59.407 2 INFO nova.virt.block_device [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Booting with volume 970cb928-04fe-4f96-87d5-ef615b8da829 at /dev/vda
Oct 02 13:13:59 compute-0 nova_compute[257802]: 2025-10-02 13:13:59.617 2 DEBUG os_brick.utils [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 13:13:59 compute-0 nova_compute[257802]: 2025-10-02 13:13:59.619 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:13:59 compute-0 nova_compute[257802]: 2025-10-02 13:13:59.635 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:13:59 compute-0 nova_compute[257802]: 2025-10-02 13:13:59.636 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[ad164b89-2746-47bd-b533-0ab70105303a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:59 compute-0 nova_compute[257802]: 2025-10-02 13:13:59.637 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:13:59 compute-0 nova_compute[257802]: 2025-10-02 13:13:59.651 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:13:59 compute-0 nova_compute[257802]: 2025-10-02 13:13:59.651 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[3df26f36-4a09-4625-b1e4-8b8cecaa3471]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:59 compute-0 nova_compute[257802]: 2025-10-02 13:13:59.653 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:13:59 compute-0 nova_compute[257802]: 2025-10-02 13:13:59.669 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:13:59 compute-0 nova_compute[257802]: 2025-10-02 13:13:59.670 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[b91ac920-c42e-42cf-86bb-0027cf82e369]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:59 compute-0 nova_compute[257802]: 2025-10-02 13:13:59.672 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[fbd0c2f2-e3ff-4bec-b401-6b7403f1bad0]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:59 compute-0 nova_compute[257802]: 2025-10-02 13:13:59.672 2 DEBUG oslo_concurrency.processutils [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:13:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:13:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:59.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:59 compute-0 nova_compute[257802]: 2025-10-02 13:13:59.717 2 DEBUG oslo_concurrency.processutils [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] CMD "nvme version" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:13:59 compute-0 nova_compute[257802]: 2025-10-02 13:13:59.720 2 DEBUG os_brick.initiator.connectors.lightos [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 13:13:59 compute-0 nova_compute[257802]: 2025-10-02 13:13:59.720 2 DEBUG os_brick.initiator.connectors.lightos [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 13:13:59 compute-0 nova_compute[257802]: 2025-10-02 13:13:59.721 2 DEBUG os_brick.initiator.connectors.lightos [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 13:13:59 compute-0 nova_compute[257802]: 2025-10-02 13:13:59.721 2 DEBUG os_brick.utils [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] <== get_connector_properties: return (103ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 13:13:59 compute-0 nova_compute[257802]: 2025-10-02 13:13:59.721 2 DEBUG nova.virt.block_device [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Updating existing volume attachment record: 01f8bf84-6d8f-4775-bd13-75d060fe0464 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 13:13:59 compute-0 ceph-mon[73607]: pgmap v3485: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 115 op/s
Oct 02 13:13:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4248591772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:00 compute-0 nova_compute[257802]: 2025-10-02 13:14:00.054 2 DEBUG nova.policy [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c10de71fef00497981b8b7cec6a3fff3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fbbc6cb494464fd9b31f64c1ad75fa6b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:14:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:00.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:00 compute-0 sshd-session[407414]: Failed password for invalid user vr from 167.99.55.34 port 46914 ssh2
Oct 02 13:14:00 compute-0 nova_compute[257802]: 2025-10-02 13:14:00.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3486: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 102 op/s
Oct 02 13:14:00 compute-0 nova_compute[257802]: 2025-10-02 13:14:00.802 2 DEBUG nova.network.neutron [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Successfully created port: 982f9721-fa33-4427-b72d-50515b9106ae _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:14:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/922732795' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:01 compute-0 ovn_controller[148183]: 2025-10-02T13:14:01Z|00977|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 02 13:14:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:01 compute-0 nova_compute[257802]: 2025-10-02 13:14:01.175 2 DEBUG nova.compute.manager [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:14:01 compute-0 nova_compute[257802]: 2025-10-02 13:14:01.176 2 DEBUG nova.virt.libvirt.driver [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:14:01 compute-0 nova_compute[257802]: 2025-10-02 13:14:01.176 2 INFO nova.virt.libvirt.driver [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Creating image(s)
Oct 02 13:14:01 compute-0 nova_compute[257802]: 2025-10-02 13:14:01.177 2 DEBUG nova.virt.libvirt.driver [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 13:14:01 compute-0 nova_compute[257802]: 2025-10-02 13:14:01.177 2 DEBUG nova.virt.libvirt.driver [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Ensure instance console log exists: /var/lib/nova/instances/7398ed9d-ac95-47e9-8de9-af875d198202/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:14:01 compute-0 nova_compute[257802]: 2025-10-02 13:14:01.178 2 DEBUG oslo_concurrency.lockutils [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:01 compute-0 nova_compute[257802]: 2025-10-02 13:14:01.178 2 DEBUG oslo_concurrency.lockutils [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:01 compute-0 nova_compute[257802]: 2025-10-02 13:14:01.178 2 DEBUG oslo_concurrency.lockutils [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:01 compute-0 sshd-session[407414]: Connection closed by invalid user vr 167.99.55.34 port 46914 [preauth]
Oct 02 13:14:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:01.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:02 compute-0 ceph-mon[73607]: pgmap v3486: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 102 op/s
Oct 02 13:14:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:02.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:02 compute-0 nova_compute[257802]: 2025-10-02 13:14:02.311 2 DEBUG nova.network.neutron [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Successfully updated port: 982f9721-fa33-4427-b72d-50515b9106ae _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:14:02 compute-0 nova_compute[257802]: 2025-10-02 13:14:02.326 2 DEBUG oslo_concurrency.lockutils [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Acquiring lock "refresh_cache-7398ed9d-ac95-47e9-8de9-af875d198202" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:14:02 compute-0 nova_compute[257802]: 2025-10-02 13:14:02.326 2 DEBUG oslo_concurrency.lockutils [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Acquired lock "refresh_cache-7398ed9d-ac95-47e9-8de9-af875d198202" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:14:02 compute-0 nova_compute[257802]: 2025-10-02 13:14:02.327 2 DEBUG nova.network.neutron [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:14:02 compute-0 nova_compute[257802]: 2025-10-02 13:14:02.428 2 DEBUG nova.compute.manager [req-4731409c-8b4c-4945-a96a-226c22f4a4ba req-4b307424-2db3-43fa-9674-8e8a435dadb1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Received event network-changed-982f9721-fa33-4427-b72d-50515b9106ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:14:02 compute-0 nova_compute[257802]: 2025-10-02 13:14:02.428 2 DEBUG nova.compute.manager [req-4731409c-8b4c-4945-a96a-226c22f4a4ba req-4b307424-2db3-43fa-9674-8e8a435dadb1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Refreshing instance network info cache due to event network-changed-982f9721-fa33-4427-b72d-50515b9106ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:14:02 compute-0 nova_compute[257802]: 2025-10-02 13:14:02.429 2 DEBUG oslo_concurrency.lockutils [req-4731409c-8b4c-4945-a96a-226c22f4a4ba req-4b307424-2db3-43fa-9674-8e8a435dadb1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-7398ed9d-ac95-47e9-8de9-af875d198202" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:14:02 compute-0 nova_compute[257802]: 2025-10-02 13:14:02.470 2 DEBUG nova.network.neutron [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:14:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3487: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 40 KiB/s rd, 2.6 MiB/s wr, 45 op/s
Oct 02 13:14:03 compute-0 ceph-mon[73607]: pgmap v3487: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 40 KiB/s rd, 2.6 MiB/s wr, 45 op/s
Oct 02 13:14:03 compute-0 nova_compute[257802]: 2025-10-02 13:14:03.635 2 DEBUG nova.network.neutron [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Updating instance_info_cache with network_info: [{"id": "982f9721-fa33-4427-b72d-50515b9106ae", "address": "fa:16:3e:50:2e:5d", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap982f9721-fa", "ovs_interfaceid": "982f9721-fa33-4427-b72d-50515b9106ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:14:03 compute-0 nova_compute[257802]: 2025-10-02 13:14:03.656 2 DEBUG oslo_concurrency.lockutils [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Releasing lock "refresh_cache-7398ed9d-ac95-47e9-8de9-af875d198202" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:14:03 compute-0 nova_compute[257802]: 2025-10-02 13:14:03.656 2 DEBUG nova.compute.manager [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Instance network_info: |[{"id": "982f9721-fa33-4427-b72d-50515b9106ae", "address": "fa:16:3e:50:2e:5d", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap982f9721-fa", "ovs_interfaceid": "982f9721-fa33-4427-b72d-50515b9106ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:14:03 compute-0 nova_compute[257802]: 2025-10-02 13:14:03.656 2 DEBUG oslo_concurrency.lockutils [req-4731409c-8b4c-4945-a96a-226c22f4a4ba req-4b307424-2db3-43fa-9674-8e8a435dadb1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-7398ed9d-ac95-47e9-8de9-af875d198202" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:14:03 compute-0 nova_compute[257802]: 2025-10-02 13:14:03.657 2 DEBUG nova.network.neutron [req-4731409c-8b4c-4945-a96a-226c22f4a4ba req-4b307424-2db3-43fa-9674-8e8a435dadb1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Refreshing network info cache for port 982f9721-fa33-4427-b72d-50515b9106ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:14:03 compute-0 nova_compute[257802]: 2025-10-02 13:14:03.660 2 DEBUG nova.virt.libvirt.driver [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Start _get_guest_xml network_info=[{"id": "982f9721-fa33-4427-b72d-50515b9106ae", "address": "fa:16:3e:50:2e:5d", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap982f9721-fa", "ovs_interfaceid": "982f9721-fa33-4427-b72d-50515b9106ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'guest_format': None, 'attachment_id': '01f8bf84-6d8f-4775-bd13-75d060fe0464', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-970cb928-04fe-4f96-87d5-ef615b8da829', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '970cb928-04fe-4f96-87d5-ef615b8da829', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '7398ed9d-ac95-47e9-8de9-af875d198202', 'attached_at': '', 'detached_at': '', 'volume_id': '970cb928-04fe-4f96-87d5-ef615b8da829', 'serial': '970cb928-04fe-4f96-87d5-ef615b8da829'}, 'device_type': 'disk', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:14:03 compute-0 nova_compute[257802]: 2025-10-02 13:14:03.663 2 WARNING nova.virt.libvirt.driver [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:14:03 compute-0 nova_compute[257802]: 2025-10-02 13:14:03.667 2 DEBUG nova.virt.libvirt.host [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:14:03 compute-0 nova_compute[257802]: 2025-10-02 13:14:03.667 2 DEBUG nova.virt.libvirt.host [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:14:03 compute-0 nova_compute[257802]: 2025-10-02 13:14:03.669 2 DEBUG nova.virt.libvirt.host [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:14:03 compute-0 nova_compute[257802]: 2025-10-02 13:14:03.669 2 DEBUG nova.virt.libvirt.host [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:14:03 compute-0 nova_compute[257802]: 2025-10-02 13:14:03.671 2 DEBUG nova.virt.libvirt.driver [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:14:03 compute-0 nova_compute[257802]: 2025-10-02 13:14:03.671 2 DEBUG nova.virt.hardware [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:14:03 compute-0 nova_compute[257802]: 2025-10-02 13:14:03.671 2 DEBUG nova.virt.hardware [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:14:03 compute-0 nova_compute[257802]: 2025-10-02 13:14:03.671 2 DEBUG nova.virt.hardware [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:14:03 compute-0 nova_compute[257802]: 2025-10-02 13:14:03.672 2 DEBUG nova.virt.hardware [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:14:03 compute-0 nova_compute[257802]: 2025-10-02 13:14:03.672 2 DEBUG nova.virt.hardware [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:14:03 compute-0 nova_compute[257802]: 2025-10-02 13:14:03.672 2 DEBUG nova.virt.hardware [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:14:03 compute-0 nova_compute[257802]: 2025-10-02 13:14:03.672 2 DEBUG nova.virt.hardware [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:14:03 compute-0 nova_compute[257802]: 2025-10-02 13:14:03.672 2 DEBUG nova.virt.hardware [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:14:03 compute-0 nova_compute[257802]: 2025-10-02 13:14:03.673 2 DEBUG nova.virt.hardware [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:14:03 compute-0 nova_compute[257802]: 2025-10-02 13:14:03.673 2 DEBUG nova.virt.hardware [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:14:03 compute-0 nova_compute[257802]: 2025-10-02 13:14:03.673 2 DEBUG nova.virt.hardware [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:14:03 compute-0 nova_compute[257802]: 2025-10-02 13:14:03.702 2 DEBUG nova.storage.rbd_utils [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] rbd image 7398ed9d-ac95-47e9-8de9-af875d198202_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:14:03 compute-0 nova_compute[257802]: 2025-10-02 13:14:03.706 2 DEBUG oslo_concurrency.processutils [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:03.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:03 compute-0 nova_compute[257802]: 2025-10-02 13:14:03.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:14:04 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3110123279' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:04.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.127 2 DEBUG oslo_concurrency.processutils [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.151 2 DEBUG nova.virt.libvirt.vif [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:13:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-147764333',display_name='tempest-TestVolumeBootPattern-server-147764333',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-147764333',id=219,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBTAKk7vcjhR2v3hQpxHbnD8D+5EFYQASqHngnH89TfDPr9LwfPo4GlBaSvBU1kSzEsKlDYOjmvBACnYkU3g9qIDzQdk5Sxb5IqNRVXiy650FCjpN5wXe8XSUYo7rJct4A==',key_name='tempest-TestVolumeBootPattern-892734420',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fbbc6cb494464fd9b31f64c1ad75fa6b',ramdisk_id='',reservation_id='r-m5p48n1y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1200415020',owner_user_name='tempest-TestVolumeBootPattern-1200415020-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:13:59Z,user_data=None,user_id='c10de71fef00497981b8b7cec6a3fff3',uuid=7398ed9d-ac95-47e9-8de9-af875d198202,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "982f9721-fa33-4427-b72d-50515b9106ae", "address": "fa:16:3e:50:2e:5d", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap982f9721-fa", "ovs_interfaceid": "982f9721-fa33-4427-b72d-50515b9106ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.151 2 DEBUG nova.network.os_vif_util [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Converting VIF {"id": "982f9721-fa33-4427-b72d-50515b9106ae", "address": "fa:16:3e:50:2e:5d", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap982f9721-fa", "ovs_interfaceid": "982f9721-fa33-4427-b72d-50515b9106ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.152 2 DEBUG nova.network.os_vif_util [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:2e:5d,bridge_name='br-int',has_traffic_filtering=True,id=982f9721-fa33-4427-b72d-50515b9106ae,network=Network(150508fb-9217-4982-8468-977a3b53121a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap982f9721-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.153 2 DEBUG nova.objects.instance [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lazy-loading 'pci_devices' on Instance uuid 7398ed9d-ac95-47e9-8de9-af875d198202 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:14:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3110123279' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.173 2 DEBUG nova.virt.libvirt.driver [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:14:04 compute-0 nova_compute[257802]:   <uuid>7398ed9d-ac95-47e9-8de9-af875d198202</uuid>
Oct 02 13:14:04 compute-0 nova_compute[257802]:   <name>instance-000000db</name>
Oct 02 13:14:04 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 13:14:04 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 13:14:04 compute-0 nova_compute[257802]:   <metadata>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:14:04 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:       <nova:name>tempest-TestVolumeBootPattern-server-147764333</nova:name>
Oct 02 13:14:04 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 13:14:03</nova:creationTime>
Oct 02 13:14:04 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 13:14:04 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 13:14:04 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 13:14:04 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 13:14:04 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:14:04 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:14:04 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 13:14:04 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 13:14:04 compute-0 nova_compute[257802]:         <nova:user uuid="c10de71fef00497981b8b7cec6a3fff3">tempest-TestVolumeBootPattern-1200415020-project-member</nova:user>
Oct 02 13:14:04 compute-0 nova_compute[257802]:         <nova:project uuid="fbbc6cb494464fd9b31f64c1ad75fa6b">tempest-TestVolumeBootPattern-1200415020</nova:project>
Oct 02 13:14:04 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 13:14:04 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 13:14:04 compute-0 nova_compute[257802]:         <nova:port uuid="982f9721-fa33-4427-b72d-50515b9106ae">
Oct 02 13:14:04 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 13:14:04 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 13:14:04 compute-0 nova_compute[257802]:   </metadata>
Oct 02 13:14:04 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <system>
Oct 02 13:14:04 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:14:04 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:14:04 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:14:04 compute-0 nova_compute[257802]:       <entry name="serial">7398ed9d-ac95-47e9-8de9-af875d198202</entry>
Oct 02 13:14:04 compute-0 nova_compute[257802]:       <entry name="uuid">7398ed9d-ac95-47e9-8de9-af875d198202</entry>
Oct 02 13:14:04 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     </system>
Oct 02 13:14:04 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 13:14:04 compute-0 nova_compute[257802]:   <os>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:   </os>
Oct 02 13:14:04 compute-0 nova_compute[257802]:   <features>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <apic/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:   </features>
Oct 02 13:14:04 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:   </clock>
Oct 02 13:14:04 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:   </cpu>
Oct 02 13:14:04 compute-0 nova_compute[257802]:   <devices>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 13:14:04 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/7398ed9d-ac95-47e9-8de9-af875d198202_disk.config">
Oct 02 13:14:04 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:       </source>
Oct 02 13:14:04 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 13:14:04 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:       </auth>
Oct 02 13:14:04 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     </disk>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 13:14:04 compute-0 nova_compute[257802]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:       <source protocol="rbd" name="volumes/volume-970cb928-04fe-4f96-87d5-ef615b8da829">
Oct 02 13:14:04 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:       </source>
Oct 02 13:14:04 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 13:14:04 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:       </auth>
Oct 02 13:14:04 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:       <serial>970cb928-04fe-4f96-87d5-ef615b8da829</serial>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     </disk>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 13:14:04 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:50:2e:5d"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:       <target dev="tap982f9721-fa"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     </interface>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 13:14:04 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/7398ed9d-ac95-47e9-8de9-af875d198202/console.log" append="off"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     </serial>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <video>
Oct 02 13:14:04 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     </video>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 13:14:04 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     </rng>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 13:14:04 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 13:14:04 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 13:14:04 compute-0 nova_compute[257802]:   </devices>
Oct 02 13:14:04 compute-0 nova_compute[257802]: </domain>
Oct 02 13:14:04 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.175 2 DEBUG nova.compute.manager [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Preparing to wait for external event network-vif-plugged-982f9721-fa33-4427-b72d-50515b9106ae prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.176 2 DEBUG oslo_concurrency.lockutils [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Acquiring lock "7398ed9d-ac95-47e9-8de9-af875d198202-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.177 2 DEBUG oslo_concurrency.lockutils [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "7398ed9d-ac95-47e9-8de9-af875d198202-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.178 2 DEBUG oslo_concurrency.lockutils [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "7398ed9d-ac95-47e9-8de9-af875d198202-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.179 2 DEBUG nova.virt.libvirt.vif [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:13:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-147764333',display_name='tempest-TestVolumeBootPattern-server-147764333',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-147764333',id=219,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBTAKk7vcjhR2v3hQpxHbnD8D+5EFYQASqHngnH89TfDPr9LwfPo4GlBaSvBU1kSzEsKlDYOjmvBACnYkU3g9qIDzQdk5Sxb5IqNRVXiy650FCjpN5wXe8XSUYo7rJct4A==',key_name='tempest-TestVolumeBootPattern-892734420',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fbbc6cb494464fd9b31f64c1ad75fa6b',ramdisk_id='',reservation_id='r-m5p48n1y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1200415020',owner_user_name='tempest-TestVolumeBootPattern-1200415020-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:13:59Z,user_data=None,user_id='c10de71fef00497981b8b7cec6a3fff3',uuid=7398ed9d-ac95-47e9-8de9-af875d198202,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "982f9721-fa33-4427-b72d-50515b9106ae", "address": "fa:16:3e:50:2e:5d", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap982f9721-fa", "ovs_interfaceid": "982f9721-fa33-4427-b72d-50515b9106ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.180 2 DEBUG nova.network.os_vif_util [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Converting VIF {"id": "982f9721-fa33-4427-b72d-50515b9106ae", "address": "fa:16:3e:50:2e:5d", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap982f9721-fa", "ovs_interfaceid": "982f9721-fa33-4427-b72d-50515b9106ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.182 2 DEBUG nova.network.os_vif_util [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:2e:5d,bridge_name='br-int',has_traffic_filtering=True,id=982f9721-fa33-4427-b72d-50515b9106ae,network=Network(150508fb-9217-4982-8468-977a3b53121a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap982f9721-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.182 2 DEBUG os_vif [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:2e:5d,bridge_name='br-int',has_traffic_filtering=True,id=982f9721-fa33-4427-b72d-50515b9106ae,network=Network(150508fb-9217-4982-8468-977a3b53121a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap982f9721-fa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.191 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.192 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.199 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap982f9721-fa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.199 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap982f9721-fa, col_values=(('external_ids', {'iface-id': '982f9721-fa33-4427-b72d-50515b9106ae', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:50:2e:5d', 'vm-uuid': '7398ed9d-ac95-47e9-8de9-af875d198202'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:04 compute-0 NetworkManager[44987]: <info>  [1759410844.2022] manager: (tap982f9721-fa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/441)
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.211 2 INFO os_vif [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:2e:5d,bridge_name='br-int',has_traffic_filtering=True,id=982f9721-fa33-4427-b72d-50515b9106ae,network=Network(150508fb-9217-4982-8468-977a3b53121a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap982f9721-fa')
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.265 2 DEBUG nova.virt.libvirt.driver [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.265 2 DEBUG nova.virt.libvirt.driver [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.265 2 DEBUG nova.virt.libvirt.driver [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] No VIF found with MAC fa:16:3e:50:2e:5d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.266 2 INFO nova.virt.libvirt.driver [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Using config drive
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.296 2 DEBUG nova.storage.rbd_utils [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] rbd image 7398ed9d-ac95-47e9-8de9-af875d198202_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.632 2 INFO nova.virt.libvirt.driver [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Creating config drive at /var/lib/nova/instances/7398ed9d-ac95-47e9-8de9-af875d198202/disk.config
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.636 2 DEBUG oslo_concurrency.processutils [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7398ed9d-ac95-47e9-8de9-af875d198202/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps8qiby09 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.768 2 DEBUG oslo_concurrency.processutils [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7398ed9d-ac95-47e9-8de9-af875d198202/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps8qiby09" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3488: 305 pgs: 305 active+clean; 244 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 289 KiB/s rd, 3.9 MiB/s wr, 83 op/s
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.799 2 DEBUG nova.storage.rbd_utils [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] rbd image 7398ed9d-ac95-47e9-8de9-af875d198202_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.802 2 DEBUG oslo_concurrency.processutils [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7398ed9d-ac95-47e9-8de9-af875d198202/disk.config 7398ed9d-ac95-47e9-8de9-af875d198202_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.966 2 DEBUG oslo_concurrency.processutils [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7398ed9d-ac95-47e9-8de9-af875d198202/disk.config 7398ed9d-ac95-47e9-8de9-af875d198202_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:04 compute-0 nova_compute[257802]: 2025-10-02 13:14:04.967 2 INFO nova.virt.libvirt.driver [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Deleting local config drive /var/lib/nova/instances/7398ed9d-ac95-47e9-8de9-af875d198202/disk.config because it was imported into RBD.
Oct 02 13:14:05 compute-0 kernel: tap982f9721-fa: entered promiscuous mode
Oct 02 13:14:05 compute-0 NetworkManager[44987]: <info>  [1759410845.0131] manager: (tap982f9721-fa): new Tun device (/org/freedesktop/NetworkManager/Devices/442)
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:05 compute-0 ovn_controller[148183]: 2025-10-02T13:14:05Z|00978|binding|INFO|Claiming lport 982f9721-fa33-4427-b72d-50515b9106ae for this chassis.
Oct 02 13:14:05 compute-0 ovn_controller[148183]: 2025-10-02T13:14:05Z|00979|binding|INFO|982f9721-fa33-4427-b72d-50515b9106ae: Claiming fa:16:3e:50:2e:5d 10.100.0.14
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:05 compute-0 systemd-udevd[407586]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:14:05 compute-0 systemd-machined[211836]: New machine qemu-105-instance-000000db.
Oct 02 13:14:05 compute-0 NetworkManager[44987]: <info>  [1759410845.0546] device (tap982f9721-fa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.053 2 DEBUG nova.network.neutron [req-4731409c-8b4c-4945-a96a-226c22f4a4ba req-4b307424-2db3-43fa-9674-8e8a435dadb1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Updated VIF entry in instance network info cache for port 982f9721-fa33-4427-b72d-50515b9106ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:14:05 compute-0 NetworkManager[44987]: <info>  [1759410845.0553] device (tap982f9721-fa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.054 2 DEBUG nova.network.neutron [req-4731409c-8b4c-4945-a96a-226c22f4a4ba req-4b307424-2db3-43fa-9674-8e8a435dadb1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Updating instance_info_cache with network_info: [{"id": "982f9721-fa33-4427-b72d-50515b9106ae", "address": "fa:16:3e:50:2e:5d", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap982f9721-fa", "ovs_interfaceid": "982f9721-fa33-4427-b72d-50515b9106ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:14:05 compute-0 systemd[1]: Started Virtual Machine qemu-105-instance-000000db.
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:05.076 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:2e:5d 10.100.0.14'], port_security=['fa:16:3e:50:2e:5d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7398ed9d-ac95-47e9-8de9-af875d198202', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-150508fb-9217-4982-8468-977a3b53121a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fbbc6cb494464fd9b31f64c1ad75fa6b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9a538a4f-f761-421e-aa00-1341aedd2ba6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d5e391d-23a7-4f5a-8146-0f24141a74f2, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=982f9721-fa33-4427-b72d-50515b9106ae) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:05.078 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 982f9721-fa33-4427-b72d-50515b9106ae in datapath 150508fb-9217-4982-8468-977a3b53121a bound to our chassis
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:05.079 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 150508fb-9217-4982-8468-977a3b53121a
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:05 compute-0 ovn_controller[148183]: 2025-10-02T13:14:05Z|00980|binding|INFO|Setting lport 982f9721-fa33-4427-b72d-50515b9106ae ovn-installed in OVS
Oct 02 13:14:05 compute-0 ovn_controller[148183]: 2025-10-02T13:14:05Z|00981|binding|INFO|Setting lport 982f9721-fa33-4427-b72d-50515b9106ae up in Southbound
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.084 2 DEBUG oslo_concurrency.lockutils [req-4731409c-8b4c-4945-a96a-226c22f4a4ba req-4b307424-2db3-43fa-9674-8e8a435dadb1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-7398ed9d-ac95-47e9-8de9-af875d198202" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:05.090 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1f939538-04e3-4b97-a08f-ce63a7881a48]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:05.090 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap150508fb-91 in ovnmeta-150508fb-9217-4982-8468-977a3b53121a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:05.092 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap150508fb-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:05.092 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a6084007-575a-4137-b85d-3b221ff1d689]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:05.093 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[37db4412-e371-42c5-8094-eb1fdc62d002]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:05.104 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[fd381eaf-a745-4cf6-b2d7-891d87100a9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:05.117 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[23f11c6c-3b97-4764-b902-2950d4d1a774]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:05.142 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[9f9d2274-749d-4338-b0c6-5d12c6ccd8ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:05 compute-0 NetworkManager[44987]: <info>  [1759410845.1484] manager: (tap150508fb-90): new Veth device (/org/freedesktop/NetworkManager/Devices/443)
Oct 02 13:14:05 compute-0 systemd-udevd[407589]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:05.149 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0e28fc0d-ffd6-4cfe-a10b-a3e4edffc4b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:05 compute-0 ceph-mon[73607]: pgmap v3488: 305 pgs: 305 active+clean; 244 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 289 KiB/s rd, 3.9 MiB/s wr, 83 op/s
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:05.183 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[f6e097ad-32c4-4258-bd47-60b5c4c3269e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:05.185 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[8f6bc586-11ce-4e13-8836-299954e14cc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:05 compute-0 NetworkManager[44987]: <info>  [1759410845.2079] device (tap150508fb-90): carrier: link connected
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:05.213 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[3ef3b84c-b222-49ae-b2a2-9ed6a0160583]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:05.229 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2160b93e-f049-4ccf-bc43-0f1790517920]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap150508fb-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:69:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 891283, 'reachable_time': 41249, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407620, 'error': None, 'target': 'ovnmeta-150508fb-9217-4982-8468-977a3b53121a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:05.244 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[306be39f-4a25-47e8-92bc-f6b22812e5b7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb5:6993'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 891283, 'tstamp': 891283}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407621, 'error': None, 'target': 'ovnmeta-150508fb-9217-4982-8468-977a3b53121a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:05.258 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[454e9e1d-40f4-472f-919c-d0f778beb858]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap150508fb-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:69:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 891283, 'reachable_time': 41249, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 407622, 'error': None, 'target': 'ovnmeta-150508fb-9217-4982-8468-977a3b53121a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:05.283 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8c5320f2-eaad-4af8-afa3-b2ed4e8ad705]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.332 2 DEBUG nova.compute.manager [req-93e2e534-a650-403f-9cd9-3a7f26931e7e req-ad9d323e-094a-4ce3-a45a-c19e98c19cb2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Received event network-vif-plugged-982f9721-fa33-4427-b72d-50515b9106ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.332 2 DEBUG oslo_concurrency.lockutils [req-93e2e534-a650-403f-9cd9-3a7f26931e7e req-ad9d323e-094a-4ce3-a45a-c19e98c19cb2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "7398ed9d-ac95-47e9-8de9-af875d198202-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.333 2 DEBUG oslo_concurrency.lockutils [req-93e2e534-a650-403f-9cd9-3a7f26931e7e req-ad9d323e-094a-4ce3-a45a-c19e98c19cb2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7398ed9d-ac95-47e9-8de9-af875d198202-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.333 2 DEBUG oslo_concurrency.lockutils [req-93e2e534-a650-403f-9cd9-3a7f26931e7e req-ad9d323e-094a-4ce3-a45a-c19e98c19cb2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7398ed9d-ac95-47e9-8de9-af875d198202-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.333 2 DEBUG nova.compute.manager [req-93e2e534-a650-403f-9cd9-3a7f26931e7e req-ad9d323e-094a-4ce3-a45a-c19e98c19cb2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Processing event network-vif-plugged-982f9721-fa33-4427-b72d-50515b9106ae _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:05.341 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[74e6df06-66c1-4630-84cd-eba4006e0952]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:05.342 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap150508fb-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:05.342 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:05.343 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap150508fb-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:05 compute-0 kernel: tap150508fb-90: entered promiscuous mode
Oct 02 13:14:05 compute-0 NetworkManager[44987]: <info>  [1759410845.3979] manager: (tap150508fb-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/444)
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:05.401 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap150508fb-90, col_values=(('external_ids', {'iface-id': '2a2f4068-0f5b-4d26-b914-4d32097d8b55'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:05 compute-0 ovn_controller[148183]: 2025-10-02T13:14:05Z|00982|binding|INFO|Releasing lport 2a2f4068-0f5b-4d26-b914-4d32097d8b55 from this chassis (sb_readonly=0)
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:05.403 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/150508fb-9217-4982-8468-977a3b53121a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/150508fb-9217-4982-8468-977a3b53121a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:05.404 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6dfac810-cd29-471e-a6dd-15f49a81ab92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:05.405 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: global
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-150508fb-9217-4982-8468-977a3b53121a
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/150508fb-9217-4982-8468-977a3b53121a.pid.haproxy
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 150508fb-9217-4982-8468-977a3b53121a
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:14:05 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:05.406 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-150508fb-9217-4982-8468-977a3b53121a', 'env', 'PROCESS_TAG=haproxy-150508fb-9217-4982-8468-977a3b53121a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/150508fb-9217-4982-8468-977a3b53121a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.629 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:05.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:05 compute-0 podman[407696]: 2025-10-02 13:14:05.76848154 +0000 UTC m=+0.054437407 container create b4eee3ddc3cd665896c270c2790b556eeed7620923cab9ae22226c1eed5a7faf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:14:05 compute-0 systemd[1]: Started libpod-conmon-b4eee3ddc3cd665896c270c2790b556eeed7620923cab9ae22226c1eed5a7faf.scope.
Oct 02 13:14:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:14:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddaae5ae90683ac4b984e1daf2be7e8d799fc2e7af2330f2737a8669f965149f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:14:05 compute-0 podman[407696]: 2025-10-02 13:14:05.737941742 +0000 UTC m=+0.023897629 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:14:05 compute-0 podman[407696]: 2025-10-02 13:14:05.847192605 +0000 UTC m=+0.133148482 container init b4eee3ddc3cd665896c270c2790b556eeed7620923cab9ae22226c1eed5a7faf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 02 13:14:05 compute-0 podman[407696]: 2025-10-02 13:14:05.853479847 +0000 UTC m=+0.139435714 container start b4eee3ddc3cd665896c270c2790b556eeed7620923cab9ae22226c1eed5a7faf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:14:05 compute-0 neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a[407712]: [NOTICE]   (407716) : New worker (407718) forked
Oct 02 13:14:05 compute-0 neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a[407712]: [NOTICE]   (407716) : Loading success.
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.903 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759410845.9031088, 7398ed9d-ac95-47e9-8de9-af875d198202 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.904 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] VM Started (Lifecycle Event)
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.906 2 DEBUG nova.compute.manager [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.908 2 DEBUG nova.virt.libvirt.driver [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.912 2 INFO nova.virt.libvirt.driver [-] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Instance spawned successfully.
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.912 2 DEBUG nova.virt.libvirt.driver [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.977 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.980 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.994 2 DEBUG nova.virt.libvirt.driver [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.995 2 DEBUG nova.virt.libvirt.driver [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.996 2 DEBUG nova.virt.libvirt.driver [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.996 2 DEBUG nova.virt.libvirt.driver [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.997 2 DEBUG nova.virt.libvirt.driver [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:14:05 compute-0 nova_compute[257802]: 2025-10-02 13:14:05.997 2 DEBUG nova.virt.libvirt.driver [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:14:06 compute-0 nova_compute[257802]: 2025-10-02 13:14:06.001 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:14:06 compute-0 nova_compute[257802]: 2025-10-02 13:14:06.002 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759410845.9032302, 7398ed9d-ac95-47e9-8de9-af875d198202 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:14:06 compute-0 nova_compute[257802]: 2025-10-02 13:14:06.002 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] VM Paused (Lifecycle Event)
Oct 02 13:14:06 compute-0 nova_compute[257802]: 2025-10-02 13:14:06.025 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:14:06 compute-0 nova_compute[257802]: 2025-10-02 13:14:06.029 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759410845.907903, 7398ed9d-ac95-47e9-8de9-af875d198202 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:14:06 compute-0 nova_compute[257802]: 2025-10-02 13:14:06.030 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] VM Resumed (Lifecycle Event)
Oct 02 13:14:06 compute-0 nova_compute[257802]: 2025-10-02 13:14:06.053 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:14:06 compute-0 nova_compute[257802]: 2025-10-02 13:14:06.057 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:14:06 compute-0 nova_compute[257802]: 2025-10-02 13:14:06.071 2 INFO nova.compute.manager [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Took 4.90 seconds to spawn the instance on the hypervisor.
Oct 02 13:14:06 compute-0 nova_compute[257802]: 2025-10-02 13:14:06.072 2 DEBUG nova.compute.manager [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:14:06 compute-0 nova_compute[257802]: 2025-10-02 13:14:06.105 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:14:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:06.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:06 compute-0 nova_compute[257802]: 2025-10-02 13:14:06.152 2 INFO nova.compute.manager [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Took 7.60 seconds to build instance.
Oct 02 13:14:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:06 compute-0 nova_compute[257802]: 2025-10-02 13:14:06.171 2 DEBUG oslo_concurrency.lockutils [None req-04f7f1a3-1302-4889-80f6-c9133a5f69ed c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "7398ed9d-ac95-47e9-8de9-af875d198202" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3489: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 293 KiB/s rd, 2.7 MiB/s wr, 74 op/s
Oct 02 13:14:07 compute-0 nova_compute[257802]: 2025-10-02 13:14:07.445 2 DEBUG nova.compute.manager [req-d61e4cbd-e387-45e5-a9b8-b9229e67e71c req-ff5a9935-f5f1-441f-88c0-fb37fce62633 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Received event network-vif-plugged-982f9721-fa33-4427-b72d-50515b9106ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:14:07 compute-0 nova_compute[257802]: 2025-10-02 13:14:07.445 2 DEBUG oslo_concurrency.lockutils [req-d61e4cbd-e387-45e5-a9b8-b9229e67e71c req-ff5a9935-f5f1-441f-88c0-fb37fce62633 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "7398ed9d-ac95-47e9-8de9-af875d198202-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:07 compute-0 nova_compute[257802]: 2025-10-02 13:14:07.445 2 DEBUG oslo_concurrency.lockutils [req-d61e4cbd-e387-45e5-a9b8-b9229e67e71c req-ff5a9935-f5f1-441f-88c0-fb37fce62633 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7398ed9d-ac95-47e9-8de9-af875d198202-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:07 compute-0 nova_compute[257802]: 2025-10-02 13:14:07.445 2 DEBUG oslo_concurrency.lockutils [req-d61e4cbd-e387-45e5-a9b8-b9229e67e71c req-ff5a9935-f5f1-441f-88c0-fb37fce62633 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7398ed9d-ac95-47e9-8de9-af875d198202-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:07 compute-0 nova_compute[257802]: 2025-10-02 13:14:07.446 2 DEBUG nova.compute.manager [req-d61e4cbd-e387-45e5-a9b8-b9229e67e71c req-ff5a9935-f5f1-441f-88c0-fb37fce62633 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] No waiting events found dispatching network-vif-plugged-982f9721-fa33-4427-b72d-50515b9106ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:14:07 compute-0 nova_compute[257802]: 2025-10-02 13:14:07.446 2 WARNING nova.compute.manager [req-d61e4cbd-e387-45e5-a9b8-b9229e67e71c req-ff5a9935-f5f1-441f-88c0-fb37fce62633 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Received unexpected event network-vif-plugged-982f9721-fa33-4427-b72d-50515b9106ae for instance with vm_state active and task_state None.
Oct 02 13:14:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:14:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:07.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:14:07 compute-0 ceph-mon[73607]: pgmap v3489: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 293 KiB/s rd, 2.7 MiB/s wr, 74 op/s
Oct 02 13:14:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:08.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:08 compute-0 sudo[407728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:08 compute-0 sudo[407728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:08 compute-0 sudo[407728]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:08 compute-0 sudo[407753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:08 compute-0 sudo[407753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:08 compute-0 sudo[407753]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3490: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 656 KiB/s rd, 2.2 MiB/s wr, 78 op/s
Oct 02 13:14:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:09.087 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=85, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=84) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:14:09 compute-0 nova_compute[257802]: 2025-10-02 13:14:09.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:09.088 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:14:09 compute-0 nova_compute[257802]: 2025-10-02 13:14:09.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:09 compute-0 nova_compute[257802]: 2025-10-02 13:14:09.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:09 compute-0 NetworkManager[44987]: <info>  [1759410849.6201] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/445)
Oct 02 13:14:09 compute-0 NetworkManager[44987]: <info>  [1759410849.6216] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/446)
Oct 02 13:14:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:09.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:09 compute-0 ovn_controller[148183]: 2025-10-02T13:14:09Z|00983|binding|INFO|Releasing lport 2a2f4068-0f5b-4d26-b914-4d32097d8b55 from this chassis (sb_readonly=0)
Oct 02 13:14:09 compute-0 nova_compute[257802]: 2025-10-02 13:14:09.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:09 compute-0 ovn_controller[148183]: 2025-10-02T13:14:09Z|00984|binding|INFO|Releasing lport 2a2f4068-0f5b-4d26-b914-4d32097d8b55 from this chassis (sb_readonly=0)
Oct 02 13:14:09 compute-0 ceph-mon[73607]: pgmap v3490: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 656 KiB/s rd, 2.2 MiB/s wr, 78 op/s
Oct 02 13:14:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:10.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:10 compute-0 nova_compute[257802]: 2025-10-02 13:14:10.281 2 DEBUG nova.compute.manager [req-133cc275-f59f-448e-921c-e594383975b2 req-31e9acdd-e917-4b42-85dc-2f0bdc927620 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Received event network-changed-982f9721-fa33-4427-b72d-50515b9106ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:14:10 compute-0 nova_compute[257802]: 2025-10-02 13:14:10.282 2 DEBUG nova.compute.manager [req-133cc275-f59f-448e-921c-e594383975b2 req-31e9acdd-e917-4b42-85dc-2f0bdc927620 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Refreshing instance network info cache due to event network-changed-982f9721-fa33-4427-b72d-50515b9106ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:14:10 compute-0 nova_compute[257802]: 2025-10-02 13:14:10.282 2 DEBUG oslo_concurrency.lockutils [req-133cc275-f59f-448e-921c-e594383975b2 req-31e9acdd-e917-4b42-85dc-2f0bdc927620 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-7398ed9d-ac95-47e9-8de9-af875d198202" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:14:10 compute-0 nova_compute[257802]: 2025-10-02 13:14:10.282 2 DEBUG oslo_concurrency.lockutils [req-133cc275-f59f-448e-921c-e594383975b2 req-31e9acdd-e917-4b42-85dc-2f0bdc927620 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-7398ed9d-ac95-47e9-8de9-af875d198202" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:14:10 compute-0 nova_compute[257802]: 2025-10-02 13:14:10.282 2 DEBUG nova.network.neutron [req-133cc275-f59f-448e-921c-e594383975b2 req-31e9acdd-e917-4b42-85dc-2f0bdc927620 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Refreshing network info cache for port 982f9721-fa33-4427-b72d-50515b9106ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:14:10 compute-0 nova_compute[257802]: 2025-10-02 13:14:10.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3491: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 136 op/s
Oct 02 13:14:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:14:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:11.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:14:12 compute-0 ceph-mon[73607]: pgmap v3491: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 136 op/s
Oct 02 13:14:12 compute-0 nova_compute[257802]: 2025-10-02 13:14:12.066 2 DEBUG nova.network.neutron [req-133cc275-f59f-448e-921c-e594383975b2 req-31e9acdd-e917-4b42-85dc-2f0bdc927620 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Updated VIF entry in instance network info cache for port 982f9721-fa33-4427-b72d-50515b9106ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:14:12 compute-0 nova_compute[257802]: 2025-10-02 13:14:12.067 2 DEBUG nova.network.neutron [req-133cc275-f59f-448e-921c-e594383975b2 req-31e9acdd-e917-4b42-85dc-2f0bdc927620 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Updating instance_info_cache with network_info: [{"id": "982f9721-fa33-4427-b72d-50515b9106ae", "address": "fa:16:3e:50:2e:5d", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap982f9721-fa", "ovs_interfaceid": "982f9721-fa33-4427-b72d-50515b9106ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:14:12 compute-0 nova_compute[257802]: 2025-10-02 13:14:12.090 2 DEBUG oslo_concurrency.lockutils [req-133cc275-f59f-448e-921c-e594383975b2 req-31e9acdd-e917-4b42-85dc-2f0bdc927620 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-7398ed9d-ac95-47e9-8de9-af875d198202" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:14:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:12.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:14:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:14:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:14:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:14:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:14:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:14:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3492: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.3 MiB/s wr, 124 op/s
Oct 02 13:14:13 compute-0 ceph-mon[73607]: pgmap v3492: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.3 MiB/s wr, 124 op/s
Oct 02 13:14:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:13.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:14.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1775144993' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3155891014' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:14 compute-0 nova_compute[257802]: 2025-10-02 13:14:14.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3493: 305 pgs: 305 active+clean; 226 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 146 op/s
Oct 02 13:14:15 compute-0 ceph-mon[73607]: pgmap v3493: 305 pgs: 305 active+clean; 226 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 146 op/s
Oct 02 13:14:15 compute-0 nova_compute[257802]: 2025-10-02 13:14:15.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:15.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:14:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:16.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:14:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3494: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 145 op/s
Oct 02 13:14:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:17.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:17 compute-0 ceph-mon[73607]: pgmap v3494: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 145 op/s
Oct 02 13:14:18 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:18.091 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '85'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:18.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3495: 305 pgs: 305 active+clean; 214 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.9 MiB/s wr, 163 op/s
Oct 02 13:14:18 compute-0 podman[407785]: 2025-10-02 13:14:18.925585115 +0000 UTC m=+0.061235942 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 13:14:18 compute-0 podman[407786]: 2025-10-02 13:14:18.954747321 +0000 UTC m=+0.076225845 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 13:14:18 compute-0 podman[407787]: 2025-10-02 13:14:18.961395722 +0000 UTC m=+0.086133775 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:14:19 compute-0 nova_compute[257802]: 2025-10-02 13:14:19.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:14:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:19.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:14:19 compute-0 ovn_controller[148183]: 2025-10-02T13:14:19Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:50:2e:5d 10.100.0.14
Oct 02 13:14:19 compute-0 ovn_controller[148183]: 2025-10-02T13:14:19Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:50:2e:5d 10.100.0.14
Oct 02 13:14:20 compute-0 ceph-mon[73607]: pgmap v3495: 305 pgs: 305 active+clean; 214 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.9 MiB/s wr, 163 op/s
Oct 02 13:14:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:20.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:20 compute-0 nova_compute[257802]: 2025-10-02 13:14:20.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3496: 305 pgs: 305 active+clean; 232 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.4 MiB/s wr, 219 op/s
Oct 02 13:14:21 compute-0 ceph-mon[73607]: pgmap v3496: 305 pgs: 305 active+clean; 232 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.4 MiB/s wr, 219 op/s
Oct 02 13:14:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:21.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:22.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3497: 305 pgs: 305 active+clean; 232 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.4 MiB/s wr, 161 op/s
Oct 02 13:14:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:23.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:23 compute-0 ceph-mon[73607]: pgmap v3497: 305 pgs: 305 active+clean; 232 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.4 MiB/s wr, 161 op/s
Oct 02 13:14:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:24.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:24 compute-0 nova_compute[257802]: 2025-10-02 13:14:24.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3498: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 189 op/s
Oct 02 13:14:25 compute-0 nova_compute[257802]: 2025-10-02 13:14:25.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:14:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:25.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:14:26 compute-0 ceph-mon[73607]: pgmap v3498: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 189 op/s
Oct 02 13:14:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:26.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3499: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 167 op/s
Oct 02 13:14:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:27.000 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:27.000 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:27.001 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:27 compute-0 ceph-mon[73607]: pgmap v3499: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 167 op/s
Oct 02 13:14:27 compute-0 nova_compute[257802]: 2025-10-02 13:14:27.512 2 DEBUG oslo_concurrency.lockutils [None req-6ff119fb-64a0-4bdc-a719-1fb4e8faaeb5 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Acquiring lock "7398ed9d-ac95-47e9-8de9-af875d198202" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:27 compute-0 nova_compute[257802]: 2025-10-02 13:14:27.513 2 DEBUG oslo_concurrency.lockutils [None req-6ff119fb-64a0-4bdc-a719-1fb4e8faaeb5 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "7398ed9d-ac95-47e9-8de9-af875d198202" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:27 compute-0 nova_compute[257802]: 2025-10-02 13:14:27.513 2 DEBUG oslo_concurrency.lockutils [None req-6ff119fb-64a0-4bdc-a719-1fb4e8faaeb5 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Acquiring lock "7398ed9d-ac95-47e9-8de9-af875d198202-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:27 compute-0 nova_compute[257802]: 2025-10-02 13:14:27.513 2 DEBUG oslo_concurrency.lockutils [None req-6ff119fb-64a0-4bdc-a719-1fb4e8faaeb5 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "7398ed9d-ac95-47e9-8de9-af875d198202-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:27 compute-0 nova_compute[257802]: 2025-10-02 13:14:27.513 2 DEBUG oslo_concurrency.lockutils [None req-6ff119fb-64a0-4bdc-a719-1fb4e8faaeb5 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "7398ed9d-ac95-47e9-8de9-af875d198202-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:27 compute-0 nova_compute[257802]: 2025-10-02 13:14:27.514 2 INFO nova.compute.manager [None req-6ff119fb-64a0-4bdc-a719-1fb4e8faaeb5 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Terminating instance
Oct 02 13:14:27 compute-0 nova_compute[257802]: 2025-10-02 13:14:27.515 2 DEBUG nova.compute.manager [None req-6ff119fb-64a0-4bdc-a719-1fb4e8faaeb5 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:14:27 compute-0 kernel: tap982f9721-fa (unregistering): left promiscuous mode
Oct 02 13:14:27 compute-0 NetworkManager[44987]: <info>  [1759410867.7063] device (tap982f9721-fa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:14:27 compute-0 nova_compute[257802]: 2025-10-02 13:14:27.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:27 compute-0 ovn_controller[148183]: 2025-10-02T13:14:27Z|00985|binding|INFO|Releasing lport 982f9721-fa33-4427-b72d-50515b9106ae from this chassis (sb_readonly=0)
Oct 02 13:14:27 compute-0 ovn_controller[148183]: 2025-10-02T13:14:27Z|00986|binding|INFO|Setting lport 982f9721-fa33-4427-b72d-50515b9106ae down in Southbound
Oct 02 13:14:27 compute-0 ovn_controller[148183]: 2025-10-02T13:14:27Z|00987|binding|INFO|Removing iface tap982f9721-fa ovn-installed in OVS
Oct 02 13:14:27 compute-0 nova_compute[257802]: 2025-10-02 13:14:27.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:27 compute-0 nova_compute[257802]: 2025-10-02 13:14:27.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:14:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:27.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:14:27 compute-0 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d000000db.scope: Deactivated successfully.
Oct 02 13:14:27 compute-0 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d000000db.scope: Consumed 13.818s CPU time.
Oct 02 13:14:27 compute-0 systemd-machined[211836]: Machine qemu-105-instance-000000db terminated.
Oct 02 13:14:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:27.841 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:2e:5d 10.100.0.14'], port_security=['fa:16:3e:50:2e:5d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7398ed9d-ac95-47e9-8de9-af875d198202', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-150508fb-9217-4982-8468-977a3b53121a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fbbc6cb494464fd9b31f64c1ad75fa6b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9a538a4f-f761-421e-aa00-1341aedd2ba6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d5e391d-23a7-4f5a-8146-0f24141a74f2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=982f9721-fa33-4427-b72d-50515b9106ae) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:14:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:27.843 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 982f9721-fa33-4427-b72d-50515b9106ae in datapath 150508fb-9217-4982-8468-977a3b53121a unbound from our chassis
Oct 02 13:14:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:27.844 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 150508fb-9217-4982-8468-977a3b53121a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:14:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:27.845 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3a13c0f2-01a2-423a-8be4-d3cc900a346a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:27.846 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-150508fb-9217-4982-8468-977a3b53121a namespace which is not needed anymore
Oct 02 13:14:27 compute-0 nova_compute[257802]: 2025-10-02 13:14:27.963 2 INFO nova.virt.libvirt.driver [-] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Instance destroyed successfully.
Oct 02 13:14:27 compute-0 nova_compute[257802]: 2025-10-02 13:14:27.964 2 DEBUG nova.objects.instance [None req-6ff119fb-64a0-4bdc-a719-1fb4e8faaeb5 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lazy-loading 'resources' on Instance uuid 7398ed9d-ac95-47e9-8de9-af875d198202 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:14:27 compute-0 neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a[407712]: [NOTICE]   (407716) : haproxy version is 2.8.14-c23fe91
Oct 02 13:14:27 compute-0 neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a[407712]: [NOTICE]   (407716) : path to executable is /usr/sbin/haproxy
Oct 02 13:14:27 compute-0 neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a[407712]: [WARNING]  (407716) : Exiting Master process...
Oct 02 13:14:27 compute-0 neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a[407712]: [WARNING]  (407716) : Exiting Master process...
Oct 02 13:14:27 compute-0 neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a[407712]: [ALERT]    (407716) : Current worker (407718) exited with code 143 (Terminated)
Oct 02 13:14:27 compute-0 neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a[407712]: [WARNING]  (407716) : All workers exited. Exiting... (0)
Oct 02 13:14:27 compute-0 systemd[1]: libpod-b4eee3ddc3cd665896c270c2790b556eeed7620923cab9ae22226c1eed5a7faf.scope: Deactivated successfully.
Oct 02 13:14:27 compute-0 podman[407865]: 2025-10-02 13:14:27.981292578 +0000 UTC m=+0.053696089 container died b4eee3ddc3cd665896c270c2790b556eeed7620923cab9ae22226c1eed5a7faf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:14:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b4eee3ddc3cd665896c270c2790b556eeed7620923cab9ae22226c1eed5a7faf-userdata-shm.mount: Deactivated successfully.
Oct 02 13:14:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddaae5ae90683ac4b984e1daf2be7e8d799fc2e7af2330f2737a8669f965149f-merged.mount: Deactivated successfully.
Oct 02 13:14:28 compute-0 podman[407865]: 2025-10-02 13:14:28.025504348 +0000 UTC m=+0.097907869 container cleanup b4eee3ddc3cd665896c270c2790b556eeed7620923cab9ae22226c1eed5a7faf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 02 13:14:28 compute-0 systemd[1]: libpod-conmon-b4eee3ddc3cd665896c270c2790b556eeed7620923cab9ae22226c1eed5a7faf.scope: Deactivated successfully.
Oct 02 13:14:28 compute-0 podman[407904]: 2025-10-02 13:14:28.096333601 +0000 UTC m=+0.049546659 container remove b4eee3ddc3cd665896c270c2790b556eeed7620923cab9ae22226c1eed5a7faf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 13:14:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:28.103 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[9b1eaae0-6298-47e5-b7e3-b6f2b2d2cf0a]: (4, ('Thu Oct  2 01:14:27 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a (b4eee3ddc3cd665896c270c2790b556eeed7620923cab9ae22226c1eed5a7faf)\nb4eee3ddc3cd665896c270c2790b556eeed7620923cab9ae22226c1eed5a7faf\nThu Oct  2 01:14:28 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a (b4eee3ddc3cd665896c270c2790b556eeed7620923cab9ae22226c1eed5a7faf)\nb4eee3ddc3cd665896c270c2790b556eeed7620923cab9ae22226c1eed5a7faf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:28.105 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[13e8a57a-9d28-4941-a4df-f90b1e248ce8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:28.106 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap150508fb-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:28 compute-0 nova_compute[257802]: 2025-10-02 13:14:28.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:28 compute-0 kernel: tap150508fb-90: left promiscuous mode
Oct 02 13:14:28 compute-0 nova_compute[257802]: 2025-10-02 13:14:28.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:28.132 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[00110d5b-cc3b-467d-be59-643a88aa6e8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:28.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:28.169 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3666d0bd-2fd9-476c-a2ef-2f750d2ba0ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:28.171 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[01b0d74a-d712-45b5-9af0-46dc46fd13b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:28.187 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1b140e4b-c417-434d-9381-1d73c9ac4dc8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 891276, 'reachable_time': 16469, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407923, 'error': None, 'target': 'ovnmeta-150508fb-9217-4982-8468-977a3b53121a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:28 compute-0 systemd[1]: run-netns-ovnmeta\x2d150508fb\x2d9217\x2d4982\x2d8468\x2d977a3b53121a.mount: Deactivated successfully.
Oct 02 13:14:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:28.189 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-150508fb-9217-4982-8468-977a3b53121a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:14:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:28.190 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[320a5426-f909-4c14-92fa-ecb729cc89a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:28 compute-0 nova_compute[257802]: 2025-10-02 13:14:28.192 2 DEBUG nova.virt.libvirt.vif [None req-6ff119fb-64a0-4bdc-a719-1fb4e8faaeb5 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:13:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-147764333',display_name='tempest-TestVolumeBootPattern-server-147764333',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-147764333',id=219,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBTAKk7vcjhR2v3hQpxHbnD8D+5EFYQASqHngnH89TfDPr9LwfPo4GlBaSvBU1kSzEsKlDYOjmvBACnYkU3g9qIDzQdk5Sxb5IqNRVXiy650FCjpN5wXe8XSUYo7rJct4A==',key_name='tempest-TestVolumeBootPattern-892734420',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:14:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fbbc6cb494464fd9b31f64c1ad75fa6b',ramdisk_id='',reservation_id='r-m5p48n1y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1200415020',owner_user_name='tempest-TestVolumeBootPattern-1200415020-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:14:06Z,user_data=None,user_id='c10de71fef00497981b8b7cec6a3fff3',uuid=7398ed9d-ac95-47e9-8de9-af875d198202,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "982f9721-fa33-4427-b72d-50515b9106ae", "address": "fa:16:3e:50:2e:5d", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap982f9721-fa", "ovs_interfaceid": "982f9721-fa33-4427-b72d-50515b9106ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:14:28 compute-0 nova_compute[257802]: 2025-10-02 13:14:28.193 2 DEBUG nova.network.os_vif_util [None req-6ff119fb-64a0-4bdc-a719-1fb4e8faaeb5 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Converting VIF {"id": "982f9721-fa33-4427-b72d-50515b9106ae", "address": "fa:16:3e:50:2e:5d", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap982f9721-fa", "ovs_interfaceid": "982f9721-fa33-4427-b72d-50515b9106ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:14:28 compute-0 nova_compute[257802]: 2025-10-02 13:14:28.194 2 DEBUG nova.network.os_vif_util [None req-6ff119fb-64a0-4bdc-a719-1fb4e8faaeb5 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:50:2e:5d,bridge_name='br-int',has_traffic_filtering=True,id=982f9721-fa33-4427-b72d-50515b9106ae,network=Network(150508fb-9217-4982-8468-977a3b53121a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap982f9721-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:14:28 compute-0 nova_compute[257802]: 2025-10-02 13:14:28.194 2 DEBUG os_vif [None req-6ff119fb-64a0-4bdc-a719-1fb4e8faaeb5 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:50:2e:5d,bridge_name='br-int',has_traffic_filtering=True,id=982f9721-fa33-4427-b72d-50515b9106ae,network=Network(150508fb-9217-4982-8468-977a3b53121a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap982f9721-fa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:14:28 compute-0 nova_compute[257802]: 2025-10-02 13:14:28.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:28 compute-0 nova_compute[257802]: 2025-10-02 13:14:28.196 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap982f9721-fa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:28 compute-0 nova_compute[257802]: 2025-10-02 13:14:28.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:28 compute-0 nova_compute[257802]: 2025-10-02 13:14:28.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:28 compute-0 nova_compute[257802]: 2025-10-02 13:14:28.202 2 INFO os_vif [None req-6ff119fb-64a0-4bdc-a719-1fb4e8faaeb5 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:50:2e:5d,bridge_name='br-int',has_traffic_filtering=True,id=982f9721-fa33-4427-b72d-50515b9106ae,network=Network(150508fb-9217-4982-8468-977a3b53121a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap982f9721-fa')
Oct 02 13:14:28 compute-0 nova_compute[257802]: 2025-10-02 13:14:28.421 2 INFO nova.virt.libvirt.driver [None req-6ff119fb-64a0-4bdc-a719-1fb4e8faaeb5 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Deleting instance files /var/lib/nova/instances/7398ed9d-ac95-47e9-8de9-af875d198202_del
Oct 02 13:14:28 compute-0 nova_compute[257802]: 2025-10-02 13:14:28.422 2 INFO nova.virt.libvirt.driver [None req-6ff119fb-64a0-4bdc-a719-1fb4e8faaeb5 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Deletion of /var/lib/nova/instances/7398ed9d-ac95-47e9-8de9-af875d198202_del complete
Oct 02 13:14:28 compute-0 sudo[407944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:28 compute-0 sudo[407944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3500: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.3 MiB/s wr, 136 op/s
Oct 02 13:14:28 compute-0 sudo[407944]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:28 compute-0 sudo[407969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:28 compute-0 sudo[407969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:28 compute-0 sudo[407969]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:28 compute-0 nova_compute[257802]: 2025-10-02 13:14:28.875 2 INFO nova.compute.manager [None req-6ff119fb-64a0-4bdc-a719-1fb4e8faaeb5 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Took 1.36 seconds to destroy the instance on the hypervisor.
Oct 02 13:14:28 compute-0 nova_compute[257802]: 2025-10-02 13:14:28.876 2 DEBUG oslo.service.loopingcall [None req-6ff119fb-64a0-4bdc-a719-1fb4e8faaeb5 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:14:28 compute-0 nova_compute[257802]: 2025-10-02 13:14:28.877 2 DEBUG nova.compute.manager [-] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:14:28 compute-0 nova_compute[257802]: 2025-10-02 13:14:28.877 2 DEBUG nova.network.neutron [-] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:14:29 compute-0 nova_compute[257802]: 2025-10-02 13:14:29.520 2 DEBUG nova.compute.manager [req-2543d710-105e-49a6-9c54-0574345bfc67 req-e5c6c3b3-5ec8-47ea-a59f-d4b62b25b784 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Received event network-vif-unplugged-982f9721-fa33-4427-b72d-50515b9106ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:14:29 compute-0 nova_compute[257802]: 2025-10-02 13:14:29.521 2 DEBUG oslo_concurrency.lockutils [req-2543d710-105e-49a6-9c54-0574345bfc67 req-e5c6c3b3-5ec8-47ea-a59f-d4b62b25b784 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "7398ed9d-ac95-47e9-8de9-af875d198202-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:29 compute-0 nova_compute[257802]: 2025-10-02 13:14:29.522 2 DEBUG oslo_concurrency.lockutils [req-2543d710-105e-49a6-9c54-0574345bfc67 req-e5c6c3b3-5ec8-47ea-a59f-d4b62b25b784 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7398ed9d-ac95-47e9-8de9-af875d198202-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:29 compute-0 nova_compute[257802]: 2025-10-02 13:14:29.523 2 DEBUG oslo_concurrency.lockutils [req-2543d710-105e-49a6-9c54-0574345bfc67 req-e5c6c3b3-5ec8-47ea-a59f-d4b62b25b784 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7398ed9d-ac95-47e9-8de9-af875d198202-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:29 compute-0 nova_compute[257802]: 2025-10-02 13:14:29.523 2 DEBUG nova.compute.manager [req-2543d710-105e-49a6-9c54-0574345bfc67 req-e5c6c3b3-5ec8-47ea-a59f-d4b62b25b784 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] No waiting events found dispatching network-vif-unplugged-982f9721-fa33-4427-b72d-50515b9106ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:14:29 compute-0 nova_compute[257802]: 2025-10-02 13:14:29.524 2 DEBUG nova.compute.manager [req-2543d710-105e-49a6-9c54-0574345bfc67 req-e5c6c3b3-5ec8-47ea-a59f-d4b62b25b784 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Received event network-vif-unplugged-982f9721-fa33-4427-b72d-50515b9106ae for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:14:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:29.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:29 compute-0 ceph-mon[73607]: pgmap v3500: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.3 MiB/s wr, 136 op/s
Oct 02 13:14:29 compute-0 podman[407995]: 2025-10-02 13:14:29.984736807 +0000 UTC m=+0.111581251 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 02 13:14:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.002000047s ======
Oct 02 13:14:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:30.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Oct 02 13:14:30 compute-0 nova_compute[257802]: 2025-10-02 13:14:30.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3501: 305 pgs: 305 active+clean; 264 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.6 MiB/s wr, 151 op/s
Oct 02 13:14:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:31 compute-0 nova_compute[257802]: 2025-10-02 13:14:31.246 2 DEBUG nova.network.neutron [-] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:14:31 compute-0 nova_compute[257802]: 2025-10-02 13:14:31.393 2 DEBUG nova.compute.manager [req-58ce4514-7d44-47a7-bf11-af7b3afac2b9 req-53157d6a-5064-41ec-8245-6d5f0565b3ec d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Received event network-vif-deleted-982f9721-fa33-4427-b72d-50515b9106ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:14:31 compute-0 nova_compute[257802]: 2025-10-02 13:14:31.394 2 INFO nova.compute.manager [req-58ce4514-7d44-47a7-bf11-af7b3afac2b9 req-53157d6a-5064-41ec-8245-6d5f0565b3ec d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Neutron deleted interface 982f9721-fa33-4427-b72d-50515b9106ae; detaching it from the instance and deleting it from the info cache
Oct 02 13:14:31 compute-0 nova_compute[257802]: 2025-10-02 13:14:31.394 2 DEBUG nova.network.neutron [req-58ce4514-7d44-47a7-bf11-af7b3afac2b9 req-53157d6a-5064-41ec-8245-6d5f0565b3ec d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:14:31 compute-0 nova_compute[257802]: 2025-10-02 13:14:31.429 2 INFO nova.compute.manager [-] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Took 2.55 seconds to deallocate network for instance.
Oct 02 13:14:31 compute-0 nova_compute[257802]: 2025-10-02 13:14:31.480 2 DEBUG nova.compute.manager [req-58ce4514-7d44-47a7-bf11-af7b3afac2b9 req-53157d6a-5064-41ec-8245-6d5f0565b3ec d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Detach interface failed, port_id=982f9721-fa33-4427-b72d-50515b9106ae, reason: Instance 7398ed9d-ac95-47e9-8de9-af875d198202 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 13:14:31 compute-0 nova_compute[257802]: 2025-10-02 13:14:31.677 2 DEBUG nova.compute.manager [req-8fb063f5-a51b-4e39-95b8-a63c52f77e6c req-d7f18aa8-8ac1-433a-b21c-cd217c983d7d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Received event network-vif-plugged-982f9721-fa33-4427-b72d-50515b9106ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:14:31 compute-0 nova_compute[257802]: 2025-10-02 13:14:31.678 2 DEBUG oslo_concurrency.lockutils [req-8fb063f5-a51b-4e39-95b8-a63c52f77e6c req-d7f18aa8-8ac1-433a-b21c-cd217c983d7d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "7398ed9d-ac95-47e9-8de9-af875d198202-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:31 compute-0 nova_compute[257802]: 2025-10-02 13:14:31.678 2 DEBUG oslo_concurrency.lockutils [req-8fb063f5-a51b-4e39-95b8-a63c52f77e6c req-d7f18aa8-8ac1-433a-b21c-cd217c983d7d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7398ed9d-ac95-47e9-8de9-af875d198202-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:31 compute-0 nova_compute[257802]: 2025-10-02 13:14:31.678 2 DEBUG oslo_concurrency.lockutils [req-8fb063f5-a51b-4e39-95b8-a63c52f77e6c req-d7f18aa8-8ac1-433a-b21c-cd217c983d7d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7398ed9d-ac95-47e9-8de9-af875d198202-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:31 compute-0 nova_compute[257802]: 2025-10-02 13:14:31.678 2 DEBUG nova.compute.manager [req-8fb063f5-a51b-4e39-95b8-a63c52f77e6c req-d7f18aa8-8ac1-433a-b21c-cd217c983d7d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] No waiting events found dispatching network-vif-plugged-982f9721-fa33-4427-b72d-50515b9106ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:14:31 compute-0 nova_compute[257802]: 2025-10-02 13:14:31.679 2 WARNING nova.compute.manager [req-8fb063f5-a51b-4e39-95b8-a63c52f77e6c req-d7f18aa8-8ac1-433a-b21c-cd217c983d7d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Received unexpected event network-vif-plugged-982f9721-fa33-4427-b72d-50515b9106ae for instance with vm_state active and task_state deleting.
Oct 02 13:14:31 compute-0 nova_compute[257802]: 2025-10-02 13:14:31.701 2 INFO nova.compute.manager [None req-6ff119fb-64a0-4bdc-a719-1fb4e8faaeb5 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Took 0.27 seconds to detach 1 volumes for instance.
Oct 02 13:14:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:31.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:31 compute-0 nova_compute[257802]: 2025-10-02 13:14:31.818 2 DEBUG oslo_concurrency.lockutils [None req-6ff119fb-64a0-4bdc-a719-1fb4e8faaeb5 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:31 compute-0 nova_compute[257802]: 2025-10-02 13:14:31.819 2 DEBUG oslo_concurrency.lockutils [None req-6ff119fb-64a0-4bdc-a719-1fb4e8faaeb5 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:31 compute-0 nova_compute[257802]: 2025-10-02 13:14:31.869 2 DEBUG oslo_concurrency.processutils [None req-6ff119fb-64a0-4bdc-a719-1fb4e8faaeb5 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:31 compute-0 ceph-mon[73607]: pgmap v3501: 305 pgs: 305 active+clean; 264 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.6 MiB/s wr, 151 op/s
Oct 02 13:14:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:14:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:32.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:14:32 compute-0 sudo[408042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:32 compute-0 sudo[408042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:32 compute-0 sudo[408042]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:32 compute-0 sudo[408067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:14:32 compute-0 sudo[408067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:32 compute-0 sudo[408067]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:32 compute-0 sudo[408092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:32 compute-0 sudo[408092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:32 compute-0 sudo[408092]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:32 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:14:32 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2926159709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:32 compute-0 sudo[408117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:14:32 compute-0 sudo[408117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:32 compute-0 nova_compute[257802]: 2025-10-02 13:14:32.376 2 DEBUG oslo_concurrency.processutils [None req-6ff119fb-64a0-4bdc-a719-1fb4e8faaeb5 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:32 compute-0 nova_compute[257802]: 2025-10-02 13:14:32.385 2 DEBUG nova.compute.provider_tree [None req-6ff119fb-64a0-4bdc-a719-1fb4e8faaeb5 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:14:32 compute-0 nova_compute[257802]: 2025-10-02 13:14:32.581 2 DEBUG nova.scheduler.client.report [None req-6ff119fb-64a0-4bdc-a719-1fb4e8faaeb5 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:14:32 compute-0 nova_compute[257802]: 2025-10-02 13:14:32.739 2 DEBUG oslo_concurrency.lockutils [None req-6ff119fb-64a0-4bdc-a719-1fb4e8faaeb5 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3502: 305 pgs: 305 active+clean; 264 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 306 KiB/s rd, 2.0 MiB/s wr, 81 op/s
Oct 02 13:14:32 compute-0 sudo[408117]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:32 compute-0 nova_compute[257802]: 2025-10-02 13:14:32.827 2 INFO nova.scheduler.client.report [None req-6ff119fb-64a0-4bdc-a719-1fb4e8faaeb5 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Deleted allocations for instance 7398ed9d-ac95-47e9-8de9-af875d198202
Oct 02 13:14:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:14:33 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:14:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:14:33 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:14:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:14:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2926159709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:33 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:14:33 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev c582c115-13bb-413f-a745-7b893d485cdf does not exist
Oct 02 13:14:33 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev e6dd2cf1-4bcf-4045-ba1a-f77f85997a08 does not exist
Oct 02 13:14:33 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 208fdd0e-ba94-496b-8afe-ea4f2d61b2b8 does not exist
Oct 02 13:14:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:14:33 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:14:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:14:33 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:14:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:14:33 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:14:33 compute-0 nova_compute[257802]: 2025-10-02 13:14:33.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:33 compute-0 nova_compute[257802]: 2025-10-02 13:14:33.097 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:14:33 compute-0 sudo[408177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:33 compute-0 sudo[408177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:33 compute-0 sudo[408177]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:33 compute-0 sudo[408202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:14:33 compute-0 nova_compute[257802]: 2025-10-02 13:14:33.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:33 compute-0 sudo[408202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:33 compute-0 sudo[408202]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:33 compute-0 sudo[408227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:33 compute-0 sudo[408227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:33 compute-0 sudo[408227]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:33 compute-0 sudo[408252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:14:33 compute-0 sudo[408252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:33 compute-0 nova_compute[257802]: 2025-10-02 13:14:33.519 2 DEBUG oslo_concurrency.lockutils [None req-6ff119fb-64a0-4bdc-a719-1fb4e8faaeb5 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "7398ed9d-ac95-47e9-8de9-af875d198202" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:33 compute-0 podman[408317]: 2025-10-02 13:14:33.60536323 +0000 UTC m=+0.022242290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:14:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:14:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:33.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:14:33 compute-0 podman[408317]: 2025-10-02 13:14:33.833511909 +0000 UTC m=+0.250390929 container create eb52c92c56e215397c5ff1a7ec2ce87df128d74e1d0450b83bcb1ed690777396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:14:33 compute-0 systemd[1]: Started libpod-conmon-eb52c92c56e215397c5ff1a7ec2ce87df128d74e1d0450b83bcb1ed690777396.scope.
Oct 02 13:14:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:14:34 compute-0 podman[408317]: 2025-10-02 13:14:34.0501578 +0000 UTC m=+0.467036870 container init eb52c92c56e215397c5ff1a7ec2ce87df128d74e1d0450b83bcb1ed690777396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carver, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:14:34 compute-0 ceph-mon[73607]: pgmap v3502: 305 pgs: 305 active+clean; 264 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 306 KiB/s rd, 2.0 MiB/s wr, 81 op/s
Oct 02 13:14:34 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:14:34 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:14:34 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:14:34 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:14:34 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:14:34 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:14:34 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1268725376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:34 compute-0 podman[408317]: 2025-10-02 13:14:34.061172326 +0000 UTC m=+0.478051366 container start eb52c92c56e215397c5ff1a7ec2ce87df128d74e1d0450b83bcb1ed690777396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carver, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:14:34 compute-0 podman[408317]: 2025-10-02 13:14:34.065619324 +0000 UTC m=+0.482498405 container attach eb52c92c56e215397c5ff1a7ec2ce87df128d74e1d0450b83bcb1ed690777396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carver, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 13:14:34 compute-0 modest_carver[408333]: 167 167
Oct 02 13:14:34 compute-0 systemd[1]: libpod-eb52c92c56e215397c5ff1a7ec2ce87df128d74e1d0450b83bcb1ed690777396.scope: Deactivated successfully.
Oct 02 13:14:34 compute-0 podman[408317]: 2025-10-02 13:14:34.06874812 +0000 UTC m=+0.485627170 container died eb52c92c56e215397c5ff1a7ec2ce87df128d74e1d0450b83bcb1ed690777396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 13:14:34 compute-0 nova_compute[257802]: 2025-10-02 13:14:34.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:34 compute-0 nova_compute[257802]: 2025-10-02 13:14:34.100 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 13:14:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a5a93fdd20325fa9fb9e7707e1b398de334f01af8903b97257574fd9649baba-merged.mount: Deactivated successfully.
Oct 02 13:14:34 compute-0 podman[408317]: 2025-10-02 13:14:34.121619049 +0000 UTC m=+0.538498119 container remove eb52c92c56e215397c5ff1a7ec2ce87df128d74e1d0450b83bcb1ed690777396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 13:14:34 compute-0 systemd[1]: libpod-conmon-eb52c92c56e215397c5ff1a7ec2ce87df128d74e1d0450b83bcb1ed690777396.scope: Deactivated successfully.
Oct 02 13:14:34 compute-0 nova_compute[257802]: 2025-10-02 13:14:34.138 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 13:14:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:14:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:34.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:14:34 compute-0 podman[408356]: 2025-10-02 13:14:34.361996014 +0000 UTC m=+0.068493238 container create 85469ecf518e0921879d133184033688026e63d9ac710befab7988fe40cfafce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:14:34 compute-0 podman[408356]: 2025-10-02 13:14:34.326574597 +0000 UTC m=+0.033071801 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:14:34 compute-0 systemd[1]: Started libpod-conmon-85469ecf518e0921879d133184033688026e63d9ac710befab7988fe40cfafce.scope.
Oct 02 13:14:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f93fc1cb549008171d38160094241831af7ab32ccac3890705a3067239b7944/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f93fc1cb549008171d38160094241831af7ab32ccac3890705a3067239b7944/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f93fc1cb549008171d38160094241831af7ab32ccac3890705a3067239b7944/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f93fc1cb549008171d38160094241831af7ab32ccac3890705a3067239b7944/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f93fc1cb549008171d38160094241831af7ab32ccac3890705a3067239b7944/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:14:34 compute-0 podman[408356]: 2025-10-02 13:14:34.490968694 +0000 UTC m=+0.197465898 container init 85469ecf518e0921879d133184033688026e63d9ac710befab7988fe40cfafce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 13:14:34 compute-0 podman[408356]: 2025-10-02 13:14:34.498425795 +0000 UTC m=+0.204922979 container start 85469ecf518e0921879d133184033688026e63d9ac710befab7988fe40cfafce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_khayyam, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 13:14:34 compute-0 podman[408356]: 2025-10-02 13:14:34.502067482 +0000 UTC m=+0.208564706 container attach 85469ecf518e0921879d133184033688026e63d9ac710befab7988fe40cfafce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_khayyam, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct 02 13:14:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3503: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 378 KiB/s rd, 2.7 MiB/s wr, 105 op/s
Oct 02 13:14:35 compute-0 ceph-mon[73607]: pgmap v3503: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 378 KiB/s rd, 2.7 MiB/s wr, 105 op/s
Oct 02 13:14:35 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1851246415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:35 compute-0 nova_compute[257802]: 2025-10-02 13:14:35.140 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:35 compute-0 elastic_khayyam[408373]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:14:35 compute-0 elastic_khayyam[408373]: --> relative data size: 1.0
Oct 02 13:14:35 compute-0 elastic_khayyam[408373]: --> All data devices are unavailable
Oct 02 13:14:35 compute-0 systemd[1]: libpod-85469ecf518e0921879d133184033688026e63d9ac710befab7988fe40cfafce.scope: Deactivated successfully.
Oct 02 13:14:35 compute-0 podman[408356]: 2025-10-02 13:14:35.32448956 +0000 UTC m=+1.030986754 container died 85469ecf518e0921879d133184033688026e63d9ac710befab7988fe40cfafce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:14:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f93fc1cb549008171d38160094241831af7ab32ccac3890705a3067239b7944-merged.mount: Deactivated successfully.
Oct 02 13:14:35 compute-0 podman[408356]: 2025-10-02 13:14:35.385518996 +0000 UTC m=+1.092016180 container remove 85469ecf518e0921879d133184033688026e63d9ac710befab7988fe40cfafce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 13:14:35 compute-0 systemd[1]: libpod-conmon-85469ecf518e0921879d133184033688026e63d9ac710befab7988fe40cfafce.scope: Deactivated successfully.
Oct 02 13:14:35 compute-0 sudo[408252]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:35 compute-0 sudo[408399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:35 compute-0 sudo[408399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:35 compute-0 sudo[408399]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:35 compute-0 sudo[408424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:14:35 compute-0 sudo[408424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:35 compute-0 sudo[408424]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:35 compute-0 sudo[408449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:35 compute-0 sudo[408449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:35 compute-0 sudo[408449]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:35 compute-0 sudo[408474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 13:14:35 compute-0 sudo[408474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:35 compute-0 nova_compute[257802]: 2025-10-02 13:14:35.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:35.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:35 compute-0 podman[408539]: 2025-10-02 13:14:35.977971358 +0000 UTC m=+0.036904323 container create e4a291c1879cb366ea28578cb5ec6d034ea8d703c502368e82b4f818a123a8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mccarthy, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 13:14:36 compute-0 systemd[1]: Started libpod-conmon-e4a291c1879cb366ea28578cb5ec6d034ea8d703c502368e82b4f818a123a8e4.scope.
Oct 02 13:14:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:14:36 compute-0 podman[408539]: 2025-10-02 13:14:36.055566336 +0000 UTC m=+0.114499351 container init e4a291c1879cb366ea28578cb5ec6d034ea8d703c502368e82b4f818a123a8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 13:14:36 compute-0 podman[408539]: 2025-10-02 13:14:35.961856919 +0000 UTC m=+0.020789904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:14:36 compute-0 podman[408539]: 2025-10-02 13:14:36.064354989 +0000 UTC m=+0.123287964 container start e4a291c1879cb366ea28578cb5ec6d034ea8d703c502368e82b4f818a123a8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mccarthy, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 13:14:36 compute-0 podman[408539]: 2025-10-02 13:14:36.068303914 +0000 UTC m=+0.127236929 container attach e4a291c1879cb366ea28578cb5ec6d034ea8d703c502368e82b4f818a123a8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 13:14:36 compute-0 determined_mccarthy[408556]: 167 167
Oct 02 13:14:36 compute-0 systemd[1]: libpod-e4a291c1879cb366ea28578cb5ec6d034ea8d703c502368e82b4f818a123a8e4.scope: Deactivated successfully.
Oct 02 13:14:36 compute-0 conmon[408556]: conmon e4a291c1879cb366ea28 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e4a291c1879cb366ea28578cb5ec6d034ea8d703c502368e82b4f818a123a8e4.scope/container/memory.events
Oct 02 13:14:36 compute-0 podman[408539]: 2025-10-02 13:14:36.073328366 +0000 UTC m=+0.132261341 container died e4a291c1879cb366ea28578cb5ec6d034ea8d703c502368e82b4f818a123a8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 13:14:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-d74c3ca66b9c20a8978e01ae0d42800970c9d7e252d3ac2eeb50dc839d8a0933-merged.mount: Deactivated successfully.
Oct 02 13:14:36 compute-0 podman[408539]: 2025-10-02 13:14:36.113816465 +0000 UTC m=+0.172749430 container remove e4a291c1879cb366ea28578cb5ec6d034ea8d703c502368e82b4f818a123a8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 13:14:36 compute-0 systemd[1]: libpod-conmon-e4a291c1879cb366ea28578cb5ec6d034ea8d703c502368e82b4f818a123a8e4.scope: Deactivated successfully.
Oct 02 13:14:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:14:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:36.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:14:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:36 compute-0 podman[408580]: 2025-10-02 13:14:36.314302596 +0000 UTC m=+0.061586831 container create a52fa290c570c4cec5a8bf75feb34eb216d97ae7f55e1d143952f00315d8ce5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 13:14:36 compute-0 systemd[1]: Started libpod-conmon-a52fa290c570c4cec5a8bf75feb34eb216d97ae7f55e1d143952f00315d8ce5c.scope.
Oct 02 13:14:36 compute-0 podman[408580]: 2025-10-02 13:14:36.28970156 +0000 UTC m=+0.036985755 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:14:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:14:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb3497f108038382c2b577a684e94fbce8a49ccf4cd925a60a6eab71400c344/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:14:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb3497f108038382c2b577a684e94fbce8a49ccf4cd925a60a6eab71400c344/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:14:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb3497f108038382c2b577a684e94fbce8a49ccf4cd925a60a6eab71400c344/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:14:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb3497f108038382c2b577a684e94fbce8a49ccf4cd925a60a6eab71400c344/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:14:36 compute-0 podman[408580]: 2025-10-02 13:14:36.424637474 +0000 UTC m=+0.171921689 container init a52fa290c570c4cec5a8bf75feb34eb216d97ae7f55e1d143952f00315d8ce5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_tu, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 13:14:36 compute-0 podman[408580]: 2025-10-02 13:14:36.436578844 +0000 UTC m=+0.183863049 container start a52fa290c570c4cec5a8bf75feb34eb216d97ae7f55e1d143952f00315d8ce5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_tu, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 13:14:36 compute-0 podman[408580]: 2025-10-02 13:14:36.441970754 +0000 UTC m=+0.189254969 container attach a52fa290c570c4cec5a8bf75feb34eb216d97ae7f55e1d143952f00315d8ce5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:14:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3504: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 310 KiB/s rd, 2.2 MiB/s wr, 77 op/s
Oct 02 13:14:37 compute-0 nova_compute[257802]: 2025-10-02 13:14:37.100 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:37 compute-0 determined_tu[408596]: {
Oct 02 13:14:37 compute-0 determined_tu[408596]:     "1": [
Oct 02 13:14:37 compute-0 determined_tu[408596]:         {
Oct 02 13:14:37 compute-0 determined_tu[408596]:             "devices": [
Oct 02 13:14:37 compute-0 determined_tu[408596]:                 "/dev/loop3"
Oct 02 13:14:37 compute-0 determined_tu[408596]:             ],
Oct 02 13:14:37 compute-0 determined_tu[408596]:             "lv_name": "ceph_lv0",
Oct 02 13:14:37 compute-0 determined_tu[408596]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:14:37 compute-0 determined_tu[408596]:             "lv_size": "7511998464",
Oct 02 13:14:37 compute-0 determined_tu[408596]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:14:37 compute-0 determined_tu[408596]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:14:37 compute-0 determined_tu[408596]:             "name": "ceph_lv0",
Oct 02 13:14:37 compute-0 determined_tu[408596]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:14:37 compute-0 determined_tu[408596]:             "tags": {
Oct 02 13:14:37 compute-0 determined_tu[408596]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:14:37 compute-0 determined_tu[408596]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:14:37 compute-0 determined_tu[408596]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:14:37 compute-0 determined_tu[408596]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:14:37 compute-0 determined_tu[408596]:                 "ceph.cluster_name": "ceph",
Oct 02 13:14:37 compute-0 determined_tu[408596]:                 "ceph.crush_device_class": "",
Oct 02 13:14:37 compute-0 determined_tu[408596]:                 "ceph.encrypted": "0",
Oct 02 13:14:37 compute-0 determined_tu[408596]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:14:37 compute-0 determined_tu[408596]:                 "ceph.osd_id": "1",
Oct 02 13:14:37 compute-0 determined_tu[408596]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:14:37 compute-0 determined_tu[408596]:                 "ceph.type": "block",
Oct 02 13:14:37 compute-0 determined_tu[408596]:                 "ceph.vdo": "0"
Oct 02 13:14:37 compute-0 determined_tu[408596]:             },
Oct 02 13:14:37 compute-0 determined_tu[408596]:             "type": "block",
Oct 02 13:14:37 compute-0 determined_tu[408596]:             "vg_name": "ceph_vg0"
Oct 02 13:14:37 compute-0 determined_tu[408596]:         }
Oct 02 13:14:37 compute-0 determined_tu[408596]:     ]
Oct 02 13:14:37 compute-0 determined_tu[408596]: }
Oct 02 13:14:37 compute-0 systemd[1]: libpod-a52fa290c570c4cec5a8bf75feb34eb216d97ae7f55e1d143952f00315d8ce5c.scope: Deactivated successfully.
Oct 02 13:14:37 compute-0 podman[408580]: 2025-10-02 13:14:37.281786162 +0000 UTC m=+1.029070397 container died a52fa290c570c4cec5a8bf75feb34eb216d97ae7f55e1d143952f00315d8ce5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_tu, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:14:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-ceb3497f108038382c2b577a684e94fbce8a49ccf4cd925a60a6eab71400c344-merged.mount: Deactivated successfully.
Oct 02 13:14:37 compute-0 podman[408580]: 2025-10-02 13:14:37.342673445 +0000 UTC m=+1.089957640 container remove a52fa290c570c4cec5a8bf75feb34eb216d97ae7f55e1d143952f00315d8ce5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_tu, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 13:14:37 compute-0 systemd[1]: libpod-conmon-a52fa290c570c4cec5a8bf75feb34eb216d97ae7f55e1d143952f00315d8ce5c.scope: Deactivated successfully.
Oct 02 13:14:37 compute-0 sudo[408474]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:37 compute-0 sudo[408621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:37 compute-0 sudo[408621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:37 compute-0 sudo[408621]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:37 compute-0 sudo[408646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:14:37 compute-0 sudo[408646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:37 compute-0 sudo[408646]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:37 compute-0 sudo[408671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:37 compute-0 sudo[408671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:37 compute-0 sudo[408671]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:37 compute-0 sudo[408696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 13:14:37 compute-0 sudo[408696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:37.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:37 compute-0 ceph-mon[73607]: pgmap v3504: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 310 KiB/s rd, 2.2 MiB/s wr, 77 op/s
Oct 02 13:14:38 compute-0 podman[408762]: 2025-10-02 13:14:38.053326876 +0000 UTC m=+0.041193072 container create 2ad072afa819a46d16af607450458ded7526f2d36cdd58da8b99c71f6a73b4da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 13:14:38 compute-0 systemd[1]: Started libpod-conmon-2ad072afa819a46d16af607450458ded7526f2d36cdd58da8b99c71f6a73b4da.scope.
Oct 02 13:14:38 compute-0 nova_compute[257802]: 2025-10-02 13:14:38.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:14:38 compute-0 podman[408762]: 2025-10-02 13:14:38.115688594 +0000 UTC m=+0.103554810 container init 2ad072afa819a46d16af607450458ded7526f2d36cdd58da8b99c71f6a73b4da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hugle, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 13:14:38 compute-0 podman[408762]: 2025-10-02 13:14:38.122619884 +0000 UTC m=+0.110486080 container start 2ad072afa819a46d16af607450458ded7526f2d36cdd58da8b99c71f6a73b4da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hugle, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:14:38 compute-0 podman[408762]: 2025-10-02 13:14:38.126157212 +0000 UTC m=+0.114023408 container attach 2ad072afa819a46d16af607450458ded7526f2d36cdd58da8b99c71f6a73b4da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hugle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 13:14:38 compute-0 focused_hugle[408778]: 167 167
Oct 02 13:14:38 compute-0 systemd[1]: libpod-2ad072afa819a46d16af607450458ded7526f2d36cdd58da8b99c71f6a73b4da.scope: Deactivated successfully.
Oct 02 13:14:38 compute-0 podman[408762]: 2025-10-02 13:14:38.128687394 +0000 UTC m=+0.116553590 container died 2ad072afa819a46d16af607450458ded7526f2d36cdd58da8b99c71f6a73b4da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hugle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:14:38 compute-0 podman[408762]: 2025-10-02 13:14:38.038321088 +0000 UTC m=+0.026187304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:14:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-93867e9f510cea723dd5be53fd8765af3b4e61633a86eb3e3587542daca447d6-merged.mount: Deactivated successfully.
Oct 02 13:14:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:38.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:38 compute-0 podman[408762]: 2025-10-02 13:14:38.166053899 +0000 UTC m=+0.153920095 container remove 2ad072afa819a46d16af607450458ded7526f2d36cdd58da8b99c71f6a73b4da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Oct 02 13:14:38 compute-0 systemd[1]: libpod-conmon-2ad072afa819a46d16af607450458ded7526f2d36cdd58da8b99c71f6a73b4da.scope: Deactivated successfully.
Oct 02 13:14:38 compute-0 nova_compute[257802]: 2025-10-02 13:14:38.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:38 compute-0 podman[408801]: 2025-10-02 13:14:38.320904946 +0000 UTC m=+0.039750306 container create ac7b86194bee93100fb426943ac65a56288452950e12aacef2516218bb7d90e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 13:14:38 compute-0 systemd[1]: Started libpod-conmon-ac7b86194bee93100fb426943ac65a56288452950e12aacef2516218bb7d90e7.scope.
Oct 02 13:14:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf54835bfd04c735a3f0a78e9556ee26a2e94670608d03498156290dd48fa4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf54835bfd04c735a3f0a78e9556ee26a2e94670608d03498156290dd48fa4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf54835bfd04c735a3f0a78e9556ee26a2e94670608d03498156290dd48fa4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf54835bfd04c735a3f0a78e9556ee26a2e94670608d03498156290dd48fa4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:14:38 compute-0 podman[408801]: 2025-10-02 13:14:38.306211646 +0000 UTC m=+0.025057026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:14:38 compute-0 podman[408801]: 2025-10-02 13:14:38.402256141 +0000 UTC m=+0.121101531 container init ac7b86194bee93100fb426943ac65a56288452950e12aacef2516218bb7d90e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sanderson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 13:14:38 compute-0 podman[408801]: 2025-10-02 13:14:38.411072347 +0000 UTC m=+0.129917727 container start ac7b86194bee93100fb426943ac65a56288452950e12aacef2516218bb7d90e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sanderson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:14:38 compute-0 podman[408801]: 2025-10-02 13:14:38.415610708 +0000 UTC m=+0.134456088 container attach ac7b86194bee93100fb426943ac65a56288452950e12aacef2516218bb7d90e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sanderson, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 13:14:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3505: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 296 KiB/s rd, 2.2 MiB/s wr, 76 op/s
Oct 02 13:14:39 compute-0 dazzling_sanderson[408818]: {
Oct 02 13:14:39 compute-0 dazzling_sanderson[408818]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 13:14:39 compute-0 dazzling_sanderson[408818]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:14:39 compute-0 dazzling_sanderson[408818]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:14:39 compute-0 dazzling_sanderson[408818]:         "osd_id": 1,
Oct 02 13:14:39 compute-0 dazzling_sanderson[408818]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:14:39 compute-0 dazzling_sanderson[408818]:         "type": "bluestore"
Oct 02 13:14:39 compute-0 dazzling_sanderson[408818]:     }
Oct 02 13:14:39 compute-0 dazzling_sanderson[408818]: }
Oct 02 13:14:39 compute-0 systemd[1]: libpod-ac7b86194bee93100fb426943ac65a56288452950e12aacef2516218bb7d90e7.scope: Deactivated successfully.
Oct 02 13:14:39 compute-0 conmon[408818]: conmon ac7b86194bee93100fb4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ac7b86194bee93100fb426943ac65a56288452950e12aacef2516218bb7d90e7.scope/container/memory.events
Oct 02 13:14:39 compute-0 podman[408801]: 2025-10-02 13:14:39.285458476 +0000 UTC m=+1.004303836 container died ac7b86194bee93100fb426943ac65a56288452950e12aacef2516218bb7d90e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Oct 02 13:14:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bf54835bfd04c735a3f0a78e9556ee26a2e94670608d03498156290dd48fa4a-merged.mount: Deactivated successfully.
Oct 02 13:14:39 compute-0 podman[408801]: 2025-10-02 13:14:39.338772483 +0000 UTC m=+1.057617843 container remove ac7b86194bee93100fb426943ac65a56288452950e12aacef2516218bb7d90e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sanderson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 13:14:39 compute-0 systemd[1]: libpod-conmon-ac7b86194bee93100fb426943ac65a56288452950e12aacef2516218bb7d90e7.scope: Deactivated successfully.
Oct 02 13:14:39 compute-0 sudo[408696]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:14:39 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:14:39 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:14:39 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:14:39 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 12c96fed-225a-498f-84d9-217ada601a31 does not exist
Oct 02 13:14:39 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev dcd9b624-12c0-4eb9-905c-6f62cb527c87 does not exist
Oct 02 13:14:39 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 329e5cb6-ddb5-4055-a11b-dd8cbb5ddd1c does not exist
Oct 02 13:14:39 compute-0 sudo[408854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:39 compute-0 sudo[408854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:39 compute-0 sudo[408854]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:39 compute-0 sudo[408879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:14:39 compute-0 sudo[408879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:39 compute-0 sudo[408879]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:39.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:39 compute-0 ceph-mon[73607]: pgmap v3505: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 296 KiB/s rd, 2.2 MiB/s wr, 76 op/s
Oct 02 13:14:39 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:14:39 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:14:39 compute-0 nova_compute[257802]: 2025-10-02 13:14:39.948 2 DEBUG oslo_concurrency.lockutils [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Acquiring lock "bfb3e97d-8923-4293-b4ae-a37019fe104c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:39 compute-0 nova_compute[257802]: 2025-10-02 13:14:39.949 2 DEBUG oslo_concurrency.lockutils [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "bfb3e97d-8923-4293-b4ae-a37019fe104c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:40 compute-0 nova_compute[257802]: 2025-10-02 13:14:40.033 2 DEBUG nova.compute.manager [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:14:40 compute-0 nova_compute[257802]: 2025-10-02 13:14:40.142 2 DEBUG oslo_concurrency.lockutils [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:40 compute-0 nova_compute[257802]: 2025-10-02 13:14:40.143 2 DEBUG oslo_concurrency.lockutils [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:40 compute-0 nova_compute[257802]: 2025-10-02 13:14:40.149 2 DEBUG nova.virt.hardware [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:14:40 compute-0 nova_compute[257802]: 2025-10-02 13:14:40.149 2 INFO nova.compute.claims [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Claim successful on node compute-0.ctlplane.example.com
Oct 02 13:14:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:14:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:40.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:14:40 compute-0 nova_compute[257802]: 2025-10-02 13:14:40.541 2 DEBUG oslo_concurrency.processutils [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:40 compute-0 nova_compute[257802]: 2025-10-02 13:14:40.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3506: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 285 KiB/s rd, 2.0 MiB/s wr, 71 op/s
Oct 02 13:14:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:14:40 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/78810527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:40 compute-0 nova_compute[257802]: 2025-10-02 13:14:40.975 2 DEBUG oslo_concurrency.processutils [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:40 compute-0 nova_compute[257802]: 2025-10-02 13:14:40.981 2 DEBUG nova.compute.provider_tree [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:14:41 compute-0 nova_compute[257802]: 2025-10-02 13:14:41.069 2 DEBUG nova.scheduler.client.report [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:14:41 compute-0 nova_compute[257802]: 2025-10-02 13:14:41.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:41 compute-0 nova_compute[257802]: 2025-10-02 13:14:41.112 2 DEBUG oslo_concurrency.lockutils [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.969s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:41 compute-0 nova_compute[257802]: 2025-10-02 13:14:41.112 2 DEBUG nova.compute.manager [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:14:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:41 compute-0 nova_compute[257802]: 2025-10-02 13:14:41.169 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:41 compute-0 nova_compute[257802]: 2025-10-02 13:14:41.277 2 DEBUG nova.compute.manager [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:14:41 compute-0 nova_compute[257802]: 2025-10-02 13:14:41.277 2 DEBUG nova.network.neutron [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:14:41 compute-0 nova_compute[257802]: 2025-10-02 13:14:41.305 2 INFO nova.virt.libvirt.driver [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:14:41 compute-0 nova_compute[257802]: 2025-10-02 13:14:41.381 2 DEBUG nova.compute.manager [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:14:41 compute-0 nova_compute[257802]: 2025-10-02 13:14:41.450 2 INFO nova.virt.block_device [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Booting with volume 970cb928-04fe-4f96-87d5-ef615b8da829 at /dev/vda
Oct 02 13:14:41 compute-0 nova_compute[257802]: 2025-10-02 13:14:41.500 2 DEBUG nova.policy [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c10de71fef00497981b8b7cec6a3fff3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fbbc6cb494464fd9b31f64c1ad75fa6b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:14:41 compute-0 nova_compute[257802]: 2025-10-02 13:14:41.619 2 DEBUG os_brick.utils [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 13:14:41 compute-0 nova_compute[257802]: 2025-10-02 13:14:41.620 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:41 compute-0 nova_compute[257802]: 2025-10-02 13:14:41.632 1650 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:41 compute-0 nova_compute[257802]: 2025-10-02 13:14:41.633 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[1301fbd7-50a0-4ef1-a0fe-04da3401b14f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:41 compute-0 nova_compute[257802]: 2025-10-02 13:14:41.634 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:41 compute-0 nova_compute[257802]: 2025-10-02 13:14:41.643 1650 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:41 compute-0 nova_compute[257802]: 2025-10-02 13:14:41.644 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[2d81d0f1-c32a-427b-a753-676261c0f101]: (4, ('InitiatorName=iqn.1994-05.com.redhat:89256e26a090', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:41 compute-0 nova_compute[257802]: 2025-10-02 13:14:41.646 1650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:41 compute-0 nova_compute[257802]: 2025-10-02 13:14:41.656 1650 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:41 compute-0 nova_compute[257802]: 2025-10-02 13:14:41.656 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[bd1bacc2-2b05-4ce4-a944-ab8516227d66]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:41 compute-0 nova_compute[257802]: 2025-10-02 13:14:41.658 1650 DEBUG oslo.privsep.daemon [-] privsep: reply[411ae43e-8b15-4f6d-955d-159f32197bdd]: (4, '8a59133c-d138-4412-952a-4a6587089b61') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:41 compute-0 nova_compute[257802]: 2025-10-02 13:14:41.659 2 DEBUG oslo_concurrency.processutils [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:41 compute-0 nova_compute[257802]: 2025-10-02 13:14:41.712 2 DEBUG oslo_concurrency.processutils [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] CMD "nvme version" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:41 compute-0 nova_compute[257802]: 2025-10-02 13:14:41.717 2 DEBUG os_brick.initiator.connectors.lightos [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 13:14:41 compute-0 nova_compute[257802]: 2025-10-02 13:14:41.717 2 DEBUG os_brick.initiator.connectors.lightos [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 13:14:41 compute-0 nova_compute[257802]: 2025-10-02 13:14:41.718 2 DEBUG os_brick.initiator.connectors.lightos [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 13:14:41 compute-0 nova_compute[257802]: 2025-10-02 13:14:41.718 2 DEBUG os_brick.utils [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] <== get_connector_properties: return (98ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:89256e26a090', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a59133c-d138-4412-952a-4a6587089b61', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 13:14:41 compute-0 nova_compute[257802]: 2025-10-02 13:14:41.719 2 DEBUG nova.virt.block_device [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Updating existing volume attachment record: 7dac326f-19dd-416f-82af-0781aea4dc7c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 13:14:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:41.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:41 compute-0 ceph-mon[73607]: pgmap v3506: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 285 KiB/s rd, 2.0 MiB/s wr, 71 op/s
Oct 02 13:14:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/78810527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:42.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:14:42
Oct 02 13:14:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:14:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:14:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'vms', '.mgr', 'default.rgw.log', 'default.rgw.control', 'volumes', 'images', '.rgw.root']
Oct 02 13:14:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:14:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:14:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:14:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:14:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:14:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:14:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:14:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3507: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 72 KiB/s rd, 690 KiB/s wr, 24 op/s
Oct 02 13:14:42 compute-0 nova_compute[257802]: 2025-10-02 13:14:42.961 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410867.9603162, 7398ed9d-ac95-47e9-8de9-af875d198202 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:14:42 compute-0 nova_compute[257802]: 2025-10-02 13:14:42.962 2 INFO nova.compute.manager [-] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] VM Stopped (Lifecycle Event)
Oct 02 13:14:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1019955647' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:43 compute-0 nova_compute[257802]: 2025-10-02 13:14:43.020 2 DEBUG nova.compute.manager [None req-53727ab6-f3bb-4c90-ab85-cdd96d81034e - - - - - -] [instance: 7398ed9d-ac95-47e9-8de9-af875d198202] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:14:43 compute-0 nova_compute[257802]: 2025-10-02 13:14:43.094 2 DEBUG nova.network.neutron [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Successfully created port: 0fe78275-ad8d-4e77-a0e7-503702bf1242 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:14:43 compute-0 nova_compute[257802]: 2025-10-02 13:14:43.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:43 compute-0 nova_compute[257802]: 2025-10-02 13:14:43.406 2 DEBUG nova.compute.manager [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:14:43 compute-0 nova_compute[257802]: 2025-10-02 13:14:43.407 2 DEBUG nova.virt.libvirt.driver [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:14:43 compute-0 nova_compute[257802]: 2025-10-02 13:14:43.408 2 INFO nova.virt.libvirt.driver [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Creating image(s)
Oct 02 13:14:43 compute-0 nova_compute[257802]: 2025-10-02 13:14:43.408 2 DEBUG nova.virt.libvirt.driver [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 13:14:43 compute-0 nova_compute[257802]: 2025-10-02 13:14:43.408 2 DEBUG nova.virt.libvirt.driver [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Ensure instance console log exists: /var/lib/nova/instances/bfb3e97d-8923-4293-b4ae-a37019fe104c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:14:43 compute-0 nova_compute[257802]: 2025-10-02 13:14:43.409 2 DEBUG oslo_concurrency.lockutils [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:43 compute-0 nova_compute[257802]: 2025-10-02 13:14:43.409 2 DEBUG oslo_concurrency.lockutils [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:43 compute-0 nova_compute[257802]: 2025-10-02 13:14:43.409 2 DEBUG oslo_concurrency.lockutils [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:14:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:14:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:14:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:14:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:14:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:43.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:44 compute-0 ceph-mon[73607]: pgmap v3507: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 72 KiB/s rd, 690 KiB/s wr, 24 op/s
Oct 02 13:14:44 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1611495803' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:44.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:14:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:14:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:14:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:14:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:14:44 compute-0 nova_compute[257802]: 2025-10-02 13:14:44.491 2 DEBUG nova.network.neutron [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Successfully updated port: 0fe78275-ad8d-4e77-a0e7-503702bf1242 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:14:44 compute-0 nova_compute[257802]: 2025-10-02 13:14:44.547 2 DEBUG oslo_concurrency.lockutils [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Acquiring lock "refresh_cache-bfb3e97d-8923-4293-b4ae-a37019fe104c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:14:44 compute-0 nova_compute[257802]: 2025-10-02 13:14:44.547 2 DEBUG oslo_concurrency.lockutils [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Acquired lock "refresh_cache-bfb3e97d-8923-4293-b4ae-a37019fe104c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:14:44 compute-0 nova_compute[257802]: 2025-10-02 13:14:44.547 2 DEBUG nova.network.neutron [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:14:44 compute-0 nova_compute[257802]: 2025-10-02 13:14:44.770 2 DEBUG nova.network.neutron [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:14:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3508: 305 pgs: 305 active+clean; 215 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 130 KiB/s rd, 691 KiB/s wr, 119 op/s
Oct 02 13:14:45 compute-0 ceph-mon[73607]: pgmap v3508: 305 pgs: 305 active+clean; 215 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 130 KiB/s rd, 691 KiB/s wr, 119 op/s
Oct 02 13:14:45 compute-0 nova_compute[257802]: 2025-10-02 13:14:45.250 2 DEBUG nova.compute.manager [req-2d15c149-82bc-4cc2-9b64-259ddba84fc8 req-19dc64ca-613b-4f6f-abf5-5abd1d9dd1bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Received event network-changed-0fe78275-ad8d-4e77-a0e7-503702bf1242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:14:45 compute-0 nova_compute[257802]: 2025-10-02 13:14:45.251 2 DEBUG nova.compute.manager [req-2d15c149-82bc-4cc2-9b64-259ddba84fc8 req-19dc64ca-613b-4f6f-abf5-5abd1d9dd1bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Refreshing instance network info cache due to event network-changed-0fe78275-ad8d-4e77-a0e7-503702bf1242. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:14:45 compute-0 nova_compute[257802]: 2025-10-02 13:14:45.251 2 DEBUG oslo_concurrency.lockutils [req-2d15c149-82bc-4cc2-9b64-259ddba84fc8 req-19dc64ca-613b-4f6f-abf5-5abd1d9dd1bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-bfb3e97d-8923-4293-b4ae-a37019fe104c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:14:45 compute-0 nova_compute[257802]: 2025-10-02 13:14:45.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:14:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:45.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.029 2 DEBUG nova.network.neutron [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Updating instance_info_cache with network_info: [{"id": "0fe78275-ad8d-4e77-a0e7-503702bf1242", "address": "fa:16:3e:6b:f8:73", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fe78275-ad", "ovs_interfaceid": "0fe78275-ad8d-4e77-a0e7-503702bf1242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.064 2 DEBUG oslo_concurrency.lockutils [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Releasing lock "refresh_cache-bfb3e97d-8923-4293-b4ae-a37019fe104c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.065 2 DEBUG nova.compute.manager [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Instance network_info: |[{"id": "0fe78275-ad8d-4e77-a0e7-503702bf1242", "address": "fa:16:3e:6b:f8:73", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fe78275-ad", "ovs_interfaceid": "0fe78275-ad8d-4e77-a0e7-503702bf1242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.065 2 DEBUG oslo_concurrency.lockutils [req-2d15c149-82bc-4cc2-9b64-259ddba84fc8 req-19dc64ca-613b-4f6f-abf5-5abd1d9dd1bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-bfb3e97d-8923-4293-b4ae-a37019fe104c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.065 2 DEBUG nova.network.neutron [req-2d15c149-82bc-4cc2-9b64-259ddba84fc8 req-19dc64ca-613b-4f6f-abf5-5abd1d9dd1bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Refreshing network info cache for port 0fe78275-ad8d-4e77-a0e7-503702bf1242 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.068 2 DEBUG nova.virt.libvirt.driver [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Start _get_guest_xml network_info=[{"id": "0fe78275-ad8d-4e77-a0e7-503702bf1242", "address": "fa:16:3e:6b:f8:73", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fe78275-ad", "ovs_interfaceid": "0fe78275-ad8d-4e77-a0e7-503702bf1242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'guest_format': None, 'attachment_id': '7dac326f-19dd-416f-82af-0781aea4dc7c', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-970cb928-04fe-4f96-87d5-ef615b8da829', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '970cb928-04fe-4f96-87d5-ef615b8da829', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'bfb3e97d-8923-4293-b4ae-a37019fe104c', 'attached_at': '', 'detached_at': '', 'volume_id': '970cb928-04fe-4f96-87d5-ef615b8da829', 'serial': '970cb928-04fe-4f96-87d5-ef615b8da829'}, 'device_type': 'disk', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.072 2 WARNING nova.virt.libvirt.driver [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.076 2 DEBUG nova.virt.libvirt.host [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.077 2 DEBUG nova.virt.libvirt.host [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.081 2 DEBUG nova.virt.libvirt.host [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.081 2 DEBUG nova.virt.libvirt.host [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.082 2 DEBUG nova.virt.libvirt.driver [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.082 2 DEBUG nova.virt.hardware [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.082 2 DEBUG nova.virt.hardware [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.083 2 DEBUG nova.virt.hardware [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.083 2 DEBUG nova.virt.hardware [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.083 2 DEBUG nova.virt.hardware [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.083 2 DEBUG nova.virt.hardware [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.083 2 DEBUG nova.virt.hardware [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.084 2 DEBUG nova.virt.hardware [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.084 2 DEBUG nova.virt.hardware [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.084 2 DEBUG nova.virt.hardware [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.084 2 DEBUG nova.virt.hardware [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.111 2 DEBUG nova.storage.rbd_utils [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] rbd image bfb3e97d-8923-4293-b4ae-a37019fe104c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.115 2 DEBUG oslo_concurrency.processutils [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:46.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:14:46 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4102964434' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.568 2 DEBUG oslo_concurrency.processutils [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.601 2 DEBUG nova.virt.libvirt.vif [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:14:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1111797352',display_name='tempest-TestVolumeBootPattern-server-1111797352',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1111797352',id=220,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBTAKk7vcjhR2v3hQpxHbnD8D+5EFYQASqHngnH89TfDPr9LwfPo4GlBaSvBU1kSzEsKlDYOjmvBACnYkU3g9qIDzQdk5Sxb5IqNRVXiy650FCjpN5wXe8XSUYo7rJct4A==',key_name='tempest-TestVolumeBootPattern-892734420',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fbbc6cb494464fd9b31f64c1ad75fa6b',ramdisk_id='',reservation_id='r-rrp61u00',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1200415020',owner_user_name='tempest-TestVolumeBootPattern-1200415020-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:14:41Z,user_data=None,user_id='c10de71fef00497981b8b7cec6a3fff3',uuid=bfb3e97d-8923-4293-b4ae-a37019fe104c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0fe78275-ad8d-4e77-a0e7-503702bf1242", "address": "fa:16:3e:6b:f8:73", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fe78275-ad", "ovs_interfaceid": "0fe78275-ad8d-4e77-a0e7-503702bf1242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.602 2 DEBUG nova.network.os_vif_util [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Converting VIF {"id": "0fe78275-ad8d-4e77-a0e7-503702bf1242", "address": "fa:16:3e:6b:f8:73", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fe78275-ad", "ovs_interfaceid": "0fe78275-ad8d-4e77-a0e7-503702bf1242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.602 2 DEBUG nova.network.os_vif_util [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:f8:73,bridge_name='br-int',has_traffic_filtering=True,id=0fe78275-ad8d-4e77-a0e7-503702bf1242,network=Network(150508fb-9217-4982-8468-977a3b53121a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fe78275-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.603 2 DEBUG nova.objects.instance [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lazy-loading 'pci_devices' on Instance uuid bfb3e97d-8923-4293-b4ae-a37019fe104c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.622 2 DEBUG nova.virt.libvirt.driver [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:14:46 compute-0 nova_compute[257802]:   <uuid>bfb3e97d-8923-4293-b4ae-a37019fe104c</uuid>
Oct 02 13:14:46 compute-0 nova_compute[257802]:   <name>instance-000000dc</name>
Oct 02 13:14:46 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 13:14:46 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 13:14:46 compute-0 nova_compute[257802]:   <metadata>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:14:46 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:       <nova:name>tempest-TestVolumeBootPattern-server-1111797352</nova:name>
Oct 02 13:14:46 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 13:14:46</nova:creationTime>
Oct 02 13:14:46 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 13:14:46 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 13:14:46 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 13:14:46 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 13:14:46 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:14:46 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:14:46 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 13:14:46 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 13:14:46 compute-0 nova_compute[257802]:         <nova:user uuid="c10de71fef00497981b8b7cec6a3fff3">tempest-TestVolumeBootPattern-1200415020-project-member</nova:user>
Oct 02 13:14:46 compute-0 nova_compute[257802]:         <nova:project uuid="fbbc6cb494464fd9b31f64c1ad75fa6b">tempest-TestVolumeBootPattern-1200415020</nova:project>
Oct 02 13:14:46 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 13:14:46 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 13:14:46 compute-0 nova_compute[257802]:         <nova:port uuid="0fe78275-ad8d-4e77-a0e7-503702bf1242">
Oct 02 13:14:46 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 13:14:46 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 13:14:46 compute-0 nova_compute[257802]:   </metadata>
Oct 02 13:14:46 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <system>
Oct 02 13:14:46 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:14:46 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:14:46 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:14:46 compute-0 nova_compute[257802]:       <entry name="serial">bfb3e97d-8923-4293-b4ae-a37019fe104c</entry>
Oct 02 13:14:46 compute-0 nova_compute[257802]:       <entry name="uuid">bfb3e97d-8923-4293-b4ae-a37019fe104c</entry>
Oct 02 13:14:46 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     </system>
Oct 02 13:14:46 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 13:14:46 compute-0 nova_compute[257802]:   <os>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:   </os>
Oct 02 13:14:46 compute-0 nova_compute[257802]:   <features>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <apic/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:   </features>
Oct 02 13:14:46 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:   </clock>
Oct 02 13:14:46 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:   </cpu>
Oct 02 13:14:46 compute-0 nova_compute[257802]:   <devices>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 13:14:46 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/bfb3e97d-8923-4293-b4ae-a37019fe104c_disk.config">
Oct 02 13:14:46 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:       </source>
Oct 02 13:14:46 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 13:14:46 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:       </auth>
Oct 02 13:14:46 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     </disk>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 13:14:46 compute-0 nova_compute[257802]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:       <source protocol="rbd" name="volumes/volume-970cb928-04fe-4f96-87d5-ef615b8da829">
Oct 02 13:14:46 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:       </source>
Oct 02 13:14:46 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 13:14:46 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:       </auth>
Oct 02 13:14:46 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:       <serial>970cb928-04fe-4f96-87d5-ef615b8da829</serial>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     </disk>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 13:14:46 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:6b:f8:73"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:       <target dev="tap0fe78275-ad"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     </interface>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 13:14:46 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/bfb3e97d-8923-4293-b4ae-a37019fe104c/console.log" append="off"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     </serial>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <video>
Oct 02 13:14:46 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     </video>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 13:14:46 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     </rng>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 13:14:46 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 13:14:46 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 13:14:46 compute-0 nova_compute[257802]:   </devices>
Oct 02 13:14:46 compute-0 nova_compute[257802]: </domain>
Oct 02 13:14:46 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.623 2 DEBUG nova.compute.manager [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Preparing to wait for external event network-vif-plugged-0fe78275-ad8d-4e77-a0e7-503702bf1242 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.624 2 DEBUG oslo_concurrency.lockutils [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Acquiring lock "bfb3e97d-8923-4293-b4ae-a37019fe104c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.624 2 DEBUG oslo_concurrency.lockutils [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "bfb3e97d-8923-4293-b4ae-a37019fe104c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.624 2 DEBUG oslo_concurrency.lockutils [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "bfb3e97d-8923-4293-b4ae-a37019fe104c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.625 2 DEBUG nova.virt.libvirt.vif [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:14:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1111797352',display_name='tempest-TestVolumeBootPattern-server-1111797352',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1111797352',id=220,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBTAKk7vcjhR2v3hQpxHbnD8D+5EFYQASqHngnH89TfDPr9LwfPo4GlBaSvBU1kSzEsKlDYOjmvBACnYkU3g9qIDzQdk5Sxb5IqNRVXiy650FCjpN5wXe8XSUYo7rJct4A==',key_name='tempest-TestVolumeBootPattern-892734420',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fbbc6cb494464fd9b31f64c1ad75fa6b',ramdisk_id='',reservation_id='r-rrp61u00',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1200415020',owner_user_name='tempest-TestVolumeBootPattern-1200415020-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:14:41Z,user_data=None,user_id='c10de71fef00497981b8b7cec6a3fff3',uuid=bfb3e97d-8923-4293-b4ae-a37019fe104c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0fe78275-ad8d-4e77-a0e7-503702bf1242", "address": "fa:16:3e:6b:f8:73", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fe78275-ad", "ovs_interfaceid": "0fe78275-ad8d-4e77-a0e7-503702bf1242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.625 2 DEBUG nova.network.os_vif_util [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Converting VIF {"id": "0fe78275-ad8d-4e77-a0e7-503702bf1242", "address": "fa:16:3e:6b:f8:73", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fe78275-ad", "ovs_interfaceid": "0fe78275-ad8d-4e77-a0e7-503702bf1242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.626 2 DEBUG nova.network.os_vif_util [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:f8:73,bridge_name='br-int',has_traffic_filtering=True,id=0fe78275-ad8d-4e77-a0e7-503702bf1242,network=Network(150508fb-9217-4982-8468-977a3b53121a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fe78275-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.626 2 DEBUG os_vif [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:f8:73,bridge_name='br-int',has_traffic_filtering=True,id=0fe78275-ad8d-4e77-a0e7-503702bf1242,network=Network(150508fb-9217-4982-8468-977a3b53121a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fe78275-ad') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.627 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.628 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.632 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0fe78275-ad, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.633 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0fe78275-ad, col_values=(('external_ids', {'iface-id': '0fe78275-ad8d-4e77-a0e7-503702bf1242', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6b:f8:73', 'vm-uuid': 'bfb3e97d-8923-4293-b4ae-a37019fe104c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:46 compute-0 NetworkManager[44987]: <info>  [1759410886.6567] manager: (tap0fe78275-ad): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/447)
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.664 2 INFO os_vif [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:f8:73,bridge_name='br-int',has_traffic_filtering=True,id=0fe78275-ad8d-4e77-a0e7-503702bf1242,network=Network(150508fb-9217-4982-8468-977a3b53121a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fe78275-ad')
Oct 02 13:14:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3509: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 88 KiB/s rd, 12 KiB/s wr, 142 op/s
Oct 02 13:14:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4102964434' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.957 2 DEBUG nova.virt.libvirt.driver [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.957 2 DEBUG nova.virt.libvirt.driver [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.957 2 DEBUG nova.virt.libvirt.driver [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] No VIF found with MAC fa:16:3e:6b:f8:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:14:46 compute-0 nova_compute[257802]: 2025-10-02 13:14:46.958 2 INFO nova.virt.libvirt.driver [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Using config drive
Oct 02 13:14:47 compute-0 nova_compute[257802]: 2025-10-02 13:14:47.264 2 DEBUG nova.storage.rbd_utils [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] rbd image bfb3e97d-8923-4293-b4ae-a37019fe104c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:14:47 compute-0 nova_compute[257802]: 2025-10-02 13:14:47.274 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:47.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:47 compute-0 ceph-mon[73607]: pgmap v3509: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 88 KiB/s rd, 12 KiB/s wr, 142 op/s
Oct 02 13:14:48 compute-0 nova_compute[257802]: 2025-10-02 13:14:48.070 2 INFO nova.virt.libvirt.driver [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Creating config drive at /var/lib/nova/instances/bfb3e97d-8923-4293-b4ae-a37019fe104c/disk.config
Oct 02 13:14:48 compute-0 nova_compute[257802]: 2025-10-02 13:14:48.075 2 DEBUG oslo_concurrency.processutils [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bfb3e97d-8923-4293-b4ae-a37019fe104c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnkypg9to execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:48.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:48 compute-0 nova_compute[257802]: 2025-10-02 13:14:48.217 2 DEBUG oslo_concurrency.processutils [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bfb3e97d-8923-4293-b4ae-a37019fe104c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnkypg9to" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:48 compute-0 nova_compute[257802]: 2025-10-02 13:14:48.255 2 DEBUG nova.storage.rbd_utils [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] rbd image bfb3e97d-8923-4293-b4ae-a37019fe104c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:14:48 compute-0 nova_compute[257802]: 2025-10-02 13:14:48.262 2 DEBUG oslo_concurrency.processutils [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bfb3e97d-8923-4293-b4ae-a37019fe104c/disk.config bfb3e97d-8923-4293-b4ae-a37019fe104c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:48 compute-0 nova_compute[257802]: 2025-10-02 13:14:48.476 2 DEBUG oslo_concurrency.processutils [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bfb3e97d-8923-4293-b4ae-a37019fe104c/disk.config bfb3e97d-8923-4293-b4ae-a37019fe104c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:48 compute-0 nova_compute[257802]: 2025-10-02 13:14:48.476 2 INFO nova.virt.libvirt.driver [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Deleting local config drive /var/lib/nova/instances/bfb3e97d-8923-4293-b4ae-a37019fe104c/disk.config because it was imported into RBD.
Oct 02 13:14:48 compute-0 kernel: tap0fe78275-ad: entered promiscuous mode
Oct 02 13:14:48 compute-0 NetworkManager[44987]: <info>  [1759410888.5385] manager: (tap0fe78275-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/448)
Oct 02 13:14:48 compute-0 ovn_controller[148183]: 2025-10-02T13:14:48Z|00988|binding|INFO|Claiming lport 0fe78275-ad8d-4e77-a0e7-503702bf1242 for this chassis.
Oct 02 13:14:48 compute-0 nova_compute[257802]: 2025-10-02 13:14:48.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:48 compute-0 ovn_controller[148183]: 2025-10-02T13:14:48Z|00989|binding|INFO|0fe78275-ad8d-4e77-a0e7-503702bf1242: Claiming fa:16:3e:6b:f8:73 10.100.0.11
Oct 02 13:14:48 compute-0 nova_compute[257802]: 2025-10-02 13:14:48.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:48.551 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:f8:73 10.100.0.11'], port_security=['fa:16:3e:6b:f8:73 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'bfb3e97d-8923-4293-b4ae-a37019fe104c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-150508fb-9217-4982-8468-977a3b53121a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fbbc6cb494464fd9b31f64c1ad75fa6b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9a538a4f-f761-421e-aa00-1341aedd2ba6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d5e391d-23a7-4f5a-8146-0f24141a74f2, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=0fe78275-ad8d-4e77-a0e7-503702bf1242) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:48.553 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 0fe78275-ad8d-4e77-a0e7-503702bf1242 in datapath 150508fb-9217-4982-8468-977a3b53121a bound to our chassis
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:48.554 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 150508fb-9217-4982-8468-977a3b53121a
Oct 02 13:14:48 compute-0 ovn_controller[148183]: 2025-10-02T13:14:48Z|00990|binding|INFO|Setting lport 0fe78275-ad8d-4e77-a0e7-503702bf1242 ovn-installed in OVS
Oct 02 13:14:48 compute-0 ovn_controller[148183]: 2025-10-02T13:14:48Z|00991|binding|INFO|Setting lport 0fe78275-ad8d-4e77-a0e7-503702bf1242 up in Southbound
Oct 02 13:14:48 compute-0 nova_compute[257802]: 2025-10-02 13:14:48.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:48 compute-0 nova_compute[257802]: 2025-10-02 13:14:48.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:48 compute-0 systemd-udevd[409049]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:48.569 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8df6fb44-103e-4877-9a47-a674ac6cddae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:48.571 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap150508fb-91 in ovnmeta-150508fb-9217-4982-8468-977a3b53121a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:48.574 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap150508fb-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:48.574 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5de47994-998d-4213-9d5a-68eeea5551a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:48.576 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[87c5f575-806f-4704-9d75-530e4368ef63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:48.587 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[8a768b74-9b6b-434e-adc0-2c750463cca9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:48 compute-0 NetworkManager[44987]: <info>  [1759410888.5899] device (tap0fe78275-ad): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:14:48 compute-0 NetworkManager[44987]: <info>  [1759410888.5907] device (tap0fe78275-ad): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:14:48 compute-0 systemd-machined[211836]: New machine qemu-106-instance-000000dc.
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:48.602 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[45d68e56-60d0-4116-9993-a5894a943564]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:48 compute-0 systemd[1]: Started Virtual Machine qemu-106-instance-000000dc.
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:48.632 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[37d2c475-3ffc-4a6f-afd5-f6b197ff92cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:48.637 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1cadf844-18df-4aa1-822c-c6d3d68ddd28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:48 compute-0 NetworkManager[44987]: <info>  [1759410888.6385] manager: (tap150508fb-90): new Veth device (/org/freedesktop/NetworkManager/Devices/449)
Oct 02 13:14:48 compute-0 systemd-udevd[409055]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:48.667 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[292191e4-c953-466d-990f-1a29d4251d67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:48.670 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[f04dafc8-55b9-40e1-ba08-d71fa0d80bc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:48 compute-0 NetworkManager[44987]: <info>  [1759410888.6928] device (tap150508fb-90): carrier: link connected
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:48.697 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[33549c06-c622-437c-8fde-f3bfb7ce62b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:48.712 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[14ae2687-9e02-4093-9e33-2a85d31626cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap150508fb-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:69:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 297], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 895631, 'reachable_time': 41265, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 409084, 'error': None, 'target': 'ovnmeta-150508fb-9217-4982-8468-977a3b53121a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:48.726 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed06af3-96c2-4151-b3bc-07c948b9c41d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb5:6993'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 895631, 'tstamp': 895631}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 409086, 'error': None, 'target': 'ovnmeta-150508fb-9217-4982-8468-977a3b53121a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:48.743 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[76190912-78b7-4cf2-8834-128cb4027f29]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap150508fb-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:69:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 297], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 895631, 'reachable_time': 41265, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 409087, 'error': None, 'target': 'ovnmeta-150508fb-9217-4982-8468-977a3b53121a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:48.772 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[2f52cdc6-06cc-4134-a9c0-839dbde84d5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3510: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 101 KiB/s rd, 12 KiB/s wr, 164 op/s
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:48.826 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0e7ad967-26a5-4b74-9e0a-3453f3efcb3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:48.827 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap150508fb-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:48.827 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:48.827 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap150508fb-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:48 compute-0 nova_compute[257802]: 2025-10-02 13:14:48.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:48 compute-0 kernel: tap150508fb-90: entered promiscuous mode
Oct 02 13:14:48 compute-0 NetworkManager[44987]: <info>  [1759410888.8297] manager: (tap150508fb-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/450)
Oct 02 13:14:48 compute-0 nova_compute[257802]: 2025-10-02 13:14:48.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:48.833 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap150508fb-90, col_values=(('external_ids', {'iface-id': '2a2f4068-0f5b-4d26-b914-4d32097d8b55'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:48 compute-0 nova_compute[257802]: 2025-10-02 13:14:48.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:48 compute-0 ovn_controller[148183]: 2025-10-02T13:14:48Z|00992|binding|INFO|Releasing lport 2a2f4068-0f5b-4d26-b914-4d32097d8b55 from this chassis (sb_readonly=0)
Oct 02 13:14:48 compute-0 nova_compute[257802]: 2025-10-02 13:14:48.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:48.838 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/150508fb-9217-4982-8468-977a3b53121a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/150508fb-9217-4982-8468-977a3b53121a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:48.838 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ad9f6175-dd1c-4f23-8877-a3af6b6f0708]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:48.839 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: global
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-150508fb-9217-4982-8468-977a3b53121a
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/150508fb-9217-4982-8468-977a3b53121a.pid.haproxy
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 150508fb-9217-4982-8468-977a3b53121a
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:14:48 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:14:48.839 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-150508fb-9217-4982-8468-977a3b53121a', 'env', 'PROCESS_TAG=haproxy-150508fb-9217-4982-8468-977a3b53121a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/150508fb-9217-4982-8468-977a3b53121a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:14:48 compute-0 nova_compute[257802]: 2025-10-02 13:14:48.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:49 compute-0 sudo[409112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:49 compute-0 sudo[409112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:49 compute-0 sudo[409112]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:49 compute-0 nova_compute[257802]: 2025-10-02 13:14:49.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:49 compute-0 nova_compute[257802]: 2025-10-02 13:14:49.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:14:49 compute-0 nova_compute[257802]: 2025-10-02 13:14:49.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:14:49 compute-0 sudo[409182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:49 compute-0 sudo[409182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:49 compute-0 sudo[409182]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:49 compute-0 podman[409162]: 2025-10-02 13:14:49.12700979 +0000 UTC m=+0.076139868 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:14:49 compute-0 nova_compute[257802]: 2025-10-02 13:14:49.141 2 DEBUG nova.compute.manager [req-16987efb-7321-4c4e-93d5-578523e7c176 req-84ba4228-5bc3-461a-af59-53f4eea7fdee d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Received event network-vif-plugged-0fe78275-ad8d-4e77-a0e7-503702bf1242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:14:49 compute-0 nova_compute[257802]: 2025-10-02 13:14:49.142 2 DEBUG oslo_concurrency.lockutils [req-16987efb-7321-4c4e-93d5-578523e7c176 req-84ba4228-5bc3-461a-af59-53f4eea7fdee d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "bfb3e97d-8923-4293-b4ae-a37019fe104c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:49 compute-0 nova_compute[257802]: 2025-10-02 13:14:49.142 2 DEBUG oslo_concurrency.lockutils [req-16987efb-7321-4c4e-93d5-578523e7c176 req-84ba4228-5bc3-461a-af59-53f4eea7fdee d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "bfb3e97d-8923-4293-b4ae-a37019fe104c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:49 compute-0 nova_compute[257802]: 2025-10-02 13:14:49.142 2 DEBUG oslo_concurrency.lockutils [req-16987efb-7321-4c4e-93d5-578523e7c176 req-84ba4228-5bc3-461a-af59-53f4eea7fdee d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "bfb3e97d-8923-4293-b4ae-a37019fe104c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:49 compute-0 nova_compute[257802]: 2025-10-02 13:14:49.143 2 DEBUG nova.compute.manager [req-16987efb-7321-4c4e-93d5-578523e7c176 req-84ba4228-5bc3-461a-af59-53f4eea7fdee d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Processing event network-vif-plugged-0fe78275-ad8d-4e77-a0e7-503702bf1242 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:14:49 compute-0 podman[409157]: 2025-10-02 13:14:49.146160859 +0000 UTC m=+0.099527911 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:14:49 compute-0 podman[409163]: 2025-10-02 13:14:49.157967569 +0000 UTC m=+0.107313253 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:14:49 compute-0 nova_compute[257802]: 2025-10-02 13:14:49.184 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 13:14:49 compute-0 nova_compute[257802]: 2025-10-02 13:14:49.184 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:14:49 compute-0 podman[409263]: 2025-10-02 13:14:49.203548336 +0000 UTC m=+0.049741410 container create 85617ebf0cb309a9e2cca40a6fe51dd244365e9fa0355f4783d93c5c040d2e5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:14:49 compute-0 systemd[1]: Started libpod-conmon-85617ebf0cb309a9e2cca40a6fe51dd244365e9fa0355f4783d93c5c040d2e5a.scope.
Oct 02 13:14:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:14:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b631cf6bfec9cd7318e2a567791d07a805e9a03c2583366dfbaae986987dbbb1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:14:49 compute-0 podman[409263]: 2025-10-02 13:14:49.273782319 +0000 UTC m=+0.119975413 container init 85617ebf0cb309a9e2cca40a6fe51dd244365e9fa0355f4783d93c5c040d2e5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 13:14:49 compute-0 podman[409263]: 2025-10-02 13:14:49.179469226 +0000 UTC m=+0.025662320 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:14:49 compute-0 podman[409263]: 2025-10-02 13:14:49.27917652 +0000 UTC m=+0.125369594 container start 85617ebf0cb309a9e2cca40a6fe51dd244365e9fa0355f4783d93c5c040d2e5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:14:49 compute-0 neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a[409280]: [NOTICE]   (409284) : New worker (409286) forked
Oct 02 13:14:49 compute-0 neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a[409280]: [NOTICE]   (409284) : Loading success.
Oct 02 13:14:49 compute-0 nova_compute[257802]: 2025-10-02 13:14:49.529 2 DEBUG nova.compute.manager [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:14:49 compute-0 nova_compute[257802]: 2025-10-02 13:14:49.530 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759410889.5285065, bfb3e97d-8923-4293-b4ae-a37019fe104c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:14:49 compute-0 nova_compute[257802]: 2025-10-02 13:14:49.530 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] VM Started (Lifecycle Event)
Oct 02 13:14:49 compute-0 nova_compute[257802]: 2025-10-02 13:14:49.534 2 DEBUG nova.virt.libvirt.driver [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:14:49 compute-0 nova_compute[257802]: 2025-10-02 13:14:49.537 2 INFO nova.virt.libvirt.driver [-] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Instance spawned successfully.
Oct 02 13:14:49 compute-0 nova_compute[257802]: 2025-10-02 13:14:49.537 2 DEBUG nova.virt.libvirt.driver [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:14:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:49.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:49 compute-0 ceph-mon[73607]: pgmap v3510: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 101 KiB/s rd, 12 KiB/s wr, 164 op/s
Oct 02 13:14:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3112505108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:50 compute-0 nova_compute[257802]: 2025-10-02 13:14:50.113 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:14:50 compute-0 nova_compute[257802]: 2025-10-02 13:14:50.115 2 DEBUG nova.virt.libvirt.driver [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:14:50 compute-0 nova_compute[257802]: 2025-10-02 13:14:50.115 2 DEBUG nova.virt.libvirt.driver [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:14:50 compute-0 nova_compute[257802]: 2025-10-02 13:14:50.116 2 DEBUG nova.virt.libvirt.driver [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:14:50 compute-0 nova_compute[257802]: 2025-10-02 13:14:50.116 2 DEBUG nova.virt.libvirt.driver [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:14:50 compute-0 nova_compute[257802]: 2025-10-02 13:14:50.116 2 DEBUG nova.virt.libvirt.driver [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:14:50 compute-0 nova_compute[257802]: 2025-10-02 13:14:50.116 2 DEBUG nova.virt.libvirt.driver [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:14:50 compute-0 nova_compute[257802]: 2025-10-02 13:14:50.123 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:14:50 compute-0 nova_compute[257802]: 2025-10-02 13:14:50.158 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:14:50 compute-0 nova_compute[257802]: 2025-10-02 13:14:50.159 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759410889.529762, bfb3e97d-8923-4293-b4ae-a37019fe104c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:14:50 compute-0 nova_compute[257802]: 2025-10-02 13:14:50.159 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] VM Paused (Lifecycle Event)
Oct 02 13:14:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:50.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:50 compute-0 nova_compute[257802]: 2025-10-02 13:14:50.198 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:14:50 compute-0 nova_compute[257802]: 2025-10-02 13:14:50.218 2 INFO nova.compute.manager [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Took 6.81 seconds to spawn the instance on the hypervisor.
Oct 02 13:14:50 compute-0 nova_compute[257802]: 2025-10-02 13:14:50.218 2 DEBUG nova.compute.manager [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:14:50 compute-0 nova_compute[257802]: 2025-10-02 13:14:50.230 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759410889.5333278, bfb3e97d-8923-4293-b4ae-a37019fe104c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:14:50 compute-0 nova_compute[257802]: 2025-10-02 13:14:50.231 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] VM Resumed (Lifecycle Event)
Oct 02 13:14:50 compute-0 nova_compute[257802]: 2025-10-02 13:14:50.273 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:14:50 compute-0 nova_compute[257802]: 2025-10-02 13:14:50.275 2 DEBUG nova.network.neutron [req-2d15c149-82bc-4cc2-9b64-259ddba84fc8 req-19dc64ca-613b-4f6f-abf5-5abd1d9dd1bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Updated VIF entry in instance network info cache for port 0fe78275-ad8d-4e77-a0e7-503702bf1242. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:14:50 compute-0 nova_compute[257802]: 2025-10-02 13:14:50.276 2 DEBUG nova.network.neutron [req-2d15c149-82bc-4cc2-9b64-259ddba84fc8 req-19dc64ca-613b-4f6f-abf5-5abd1d9dd1bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Updating instance_info_cache with network_info: [{"id": "0fe78275-ad8d-4e77-a0e7-503702bf1242", "address": "fa:16:3e:6b:f8:73", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fe78275-ad", "ovs_interfaceid": "0fe78275-ad8d-4e77-a0e7-503702bf1242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:14:50 compute-0 nova_compute[257802]: 2025-10-02 13:14:50.277 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:14:50 compute-0 nova_compute[257802]: 2025-10-02 13:14:50.309 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:14:50 compute-0 nova_compute[257802]: 2025-10-02 13:14:50.311 2 DEBUG oslo_concurrency.lockutils [req-2d15c149-82bc-4cc2-9b64-259ddba84fc8 req-19dc64ca-613b-4f6f-abf5-5abd1d9dd1bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-bfb3e97d-8923-4293-b4ae-a37019fe104c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:14:50 compute-0 nova_compute[257802]: 2025-10-02 13:14:50.323 2 INFO nova.compute.manager [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Took 10.20 seconds to build instance.
Oct 02 13:14:50 compute-0 nova_compute[257802]: 2025-10-02 13:14:50.343 2 DEBUG oslo_concurrency.lockutils [None req-9889bbbe-a96b-410f-977d-3314719dac73 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "bfb3e97d-8923-4293-b4ae-a37019fe104c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.394s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:50 compute-0 nova_compute[257802]: 2025-10-02 13:14:50.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3511: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 141 KiB/s rd, 12 KiB/s wr, 208 op/s
Oct 02 13:14:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3281219525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:51 compute-0 nova_compute[257802]: 2025-10-02 13:14:51.264 2 DEBUG nova.compute.manager [req-14fc612c-15d9-493d-88a0-c204e9c96b6a req-e60f4195-5b22-451a-a023-a1fd42bdcc8e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Received event network-vif-plugged-0fe78275-ad8d-4e77-a0e7-503702bf1242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:14:51 compute-0 nova_compute[257802]: 2025-10-02 13:14:51.264 2 DEBUG oslo_concurrency.lockutils [req-14fc612c-15d9-493d-88a0-c204e9c96b6a req-e60f4195-5b22-451a-a023-a1fd42bdcc8e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "bfb3e97d-8923-4293-b4ae-a37019fe104c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:51 compute-0 nova_compute[257802]: 2025-10-02 13:14:51.265 2 DEBUG oslo_concurrency.lockutils [req-14fc612c-15d9-493d-88a0-c204e9c96b6a req-e60f4195-5b22-451a-a023-a1fd42bdcc8e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "bfb3e97d-8923-4293-b4ae-a37019fe104c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:51 compute-0 nova_compute[257802]: 2025-10-02 13:14:51.265 2 DEBUG oslo_concurrency.lockutils [req-14fc612c-15d9-493d-88a0-c204e9c96b6a req-e60f4195-5b22-451a-a023-a1fd42bdcc8e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "bfb3e97d-8923-4293-b4ae-a37019fe104c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:51 compute-0 nova_compute[257802]: 2025-10-02 13:14:51.266 2 DEBUG nova.compute.manager [req-14fc612c-15d9-493d-88a0-c204e9c96b6a req-e60f4195-5b22-451a-a023-a1fd42bdcc8e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] No waiting events found dispatching network-vif-plugged-0fe78275-ad8d-4e77-a0e7-503702bf1242 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:14:51 compute-0 nova_compute[257802]: 2025-10-02 13:14:51.266 2 WARNING nova.compute.manager [req-14fc612c-15d9-493d-88a0-c204e9c96b6a req-e60f4195-5b22-451a-a023-a1fd42bdcc8e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Received unexpected event network-vif-plugged-0fe78275-ad8d-4e77-a0e7-503702bf1242 for instance with vm_state active and task_state None.
Oct 02 13:14:51 compute-0 nova_compute[257802]: 2025-10-02 13:14:51.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:51.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:52 compute-0 ceph-mon[73607]: pgmap v3511: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 141 KiB/s rd, 12 KiB/s wr, 208 op/s
Oct 02 13:14:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:52.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3512: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 141 KiB/s rd, 1.2 KiB/s wr, 208 op/s
Oct 02 13:14:53 compute-0 ceph-mon[73607]: pgmap v3512: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 141 KiB/s rd, 1.2 KiB/s wr, 208 op/s
Oct 02 13:14:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:53.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:54 compute-0 nova_compute[257802]: 2025-10-02 13:14:54.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:54.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:54 compute-0 ovn_controller[148183]: 2025-10-02T13:14:54Z|00993|binding|INFO|Releasing lport 2a2f4068-0f5b-4d26-b914-4d32097d8b55 from this chassis (sb_readonly=0)
Oct 02 13:14:54 compute-0 nova_compute[257802]: 2025-10-02 13:14:54.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:54 compute-0 nova_compute[257802]: 2025-10-02 13:14:54.649 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:54 compute-0 nova_compute[257802]: 2025-10-02 13:14:54.650 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:54 compute-0 nova_compute[257802]: 2025-10-02 13:14:54.650 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:54 compute-0 nova_compute[257802]: 2025-10-02 13:14:54.650 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:14:54 compute-0 nova_compute[257802]: 2025-10-02 13:14:54.650 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:54 compute-0 ovn_controller[148183]: 2025-10-02T13:14:54Z|00994|binding|INFO|Releasing lport 2a2f4068-0f5b-4d26-b914-4d32097d8b55 from this chassis (sb_readonly=0)
Oct 02 13:14:54 compute-0 nova_compute[257802]: 2025-10-02 13:14:54.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3513: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 14 KiB/s wr, 260 op/s
Oct 02 13:14:54 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:14:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:14:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:14:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:14:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 7.088393355667225e-06 of space, bias 1.0, pg target 0.0021265180067001677 quantized to 32 (current 32)
Oct 02 13:14:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:14:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004327191513121161 of space, bias 1.0, pg target 1.2981574539363483 quantized to 32 (current 32)
Oct 02 13:14:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:14:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Oct 02 13:14:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:14:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 13:14:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:14:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 13:14:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:14:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:14:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:14:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027172174530057695 quantized to 32 (current 32)
Oct 02 13:14:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:14:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 13:14:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:14:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:14:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:14:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 13:14:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:14:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2011434492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:55 compute-0 nova_compute[257802]: 2025-10-02 13:14:55.064 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:55 compute-0 nova_compute[257802]: 2025-10-02 13:14:55.173 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000dc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:14:55 compute-0 nova_compute[257802]: 2025-10-02 13:14:55.175 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000dc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:14:55 compute-0 nova_compute[257802]: 2025-10-02 13:14:55.327 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:14:55 compute-0 nova_compute[257802]: 2025-10-02 13:14:55.329 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4045MB free_disk=20.98813247680664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:14:55 compute-0 nova_compute[257802]: 2025-10-02 13:14:55.329 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:55 compute-0 nova_compute[257802]: 2025-10-02 13:14:55.329 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:55 compute-0 nova_compute[257802]: 2025-10-02 13:14:55.473 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance bfb3e97d-8923-4293-b4ae-a37019fe104c actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:14:55 compute-0 nova_compute[257802]: 2025-10-02 13:14:55.474 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:14:55 compute-0 nova_compute[257802]: 2025-10-02 13:14:55.475 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:14:55 compute-0 nova_compute[257802]: 2025-10-02 13:14:55.529 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:55 compute-0 nova_compute[257802]: 2025-10-02 13:14:55.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:55.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:55 compute-0 ceph-mon[73607]: pgmap v3513: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 14 KiB/s wr, 260 op/s
Oct 02 13:14:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2011434492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2527150884' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:14:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2527150884' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:14:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:14:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4267886335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:56 compute-0 nova_compute[257802]: 2025-10-02 13:14:56.012 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:56 compute-0 nova_compute[257802]: 2025-10-02 13:14:56.018 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:14:56 compute-0 nova_compute[257802]: 2025-10-02 13:14:56.040 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:14:56 compute-0 nova_compute[257802]: 2025-10-02 13:14:56.074 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:14:56 compute-0 nova_compute[257802]: 2025-10-02 13:14:56.074 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:14:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:56.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:14:56 compute-0 nova_compute[257802]: 2025-10-02 13:14:56.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3514: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 183 op/s
Oct 02 13:14:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4267886335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:57 compute-0 nova_compute[257802]: 2025-10-02 13:14:57.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:57 compute-0 NetworkManager[44987]: <info>  [1759410897.1912] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/451)
Oct 02 13:14:57 compute-0 nova_compute[257802]: 2025-10-02 13:14:57.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:57 compute-0 NetworkManager[44987]: <info>  [1759410897.1923] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/452)
Oct 02 13:14:57 compute-0 nova_compute[257802]: 2025-10-02 13:14:57.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:57 compute-0 ovn_controller[148183]: 2025-10-02T13:14:57Z|00995|binding|INFO|Releasing lport 2a2f4068-0f5b-4d26-b914-4d32097d8b55 from this chassis (sb_readonly=0)
Oct 02 13:14:57 compute-0 nova_compute[257802]: 2025-10-02 13:14:57.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:57.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:57 compute-0 ceph-mon[73607]: pgmap v3514: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 183 op/s
Oct 02 13:14:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:58.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3515: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 136 op/s
Oct 02 13:14:59 compute-0 ceph-mon[73607]: pgmap v3515: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 136 op/s
Oct 02 13:14:59 compute-0 nova_compute[257802]: 2025-10-02 13:14:59.268 2 DEBUG nova.compute.manager [req-45d085f5-392d-4e93-9a96-f443fd07e69d req-2e7fd418-6f44-49da-b6f4-73a4b9de5d47 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Received event network-changed-0fe78275-ad8d-4e77-a0e7-503702bf1242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:14:59 compute-0 nova_compute[257802]: 2025-10-02 13:14:59.268 2 DEBUG nova.compute.manager [req-45d085f5-392d-4e93-9a96-f443fd07e69d req-2e7fd418-6f44-49da-b6f4-73a4b9de5d47 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Refreshing instance network info cache due to event network-changed-0fe78275-ad8d-4e77-a0e7-503702bf1242. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:14:59 compute-0 nova_compute[257802]: 2025-10-02 13:14:59.269 2 DEBUG oslo_concurrency.lockutils [req-45d085f5-392d-4e93-9a96-f443fd07e69d req-2e7fd418-6f44-49da-b6f4-73a4b9de5d47 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-bfb3e97d-8923-4293-b4ae-a37019fe104c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:14:59 compute-0 nova_compute[257802]: 2025-10-02 13:14:59.269 2 DEBUG oslo_concurrency.lockutils [req-45d085f5-392d-4e93-9a96-f443fd07e69d req-2e7fd418-6f44-49da-b6f4-73a4b9de5d47 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-bfb3e97d-8923-4293-b4ae-a37019fe104c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:14:59 compute-0 nova_compute[257802]: 2025-10-02 13:14:59.269 2 DEBUG nova.network.neutron [req-45d085f5-392d-4e93-9a96-f443fd07e69d req-2e7fd418-6f44-49da-b6f4-73a4b9de5d47 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Refreshing network info cache for port 0fe78275-ad8d-4e77-a0e7-503702bf1242 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:14:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:14:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:14:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:59.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:15:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:00.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:00 compute-0 nova_compute[257802]: 2025-10-02 13:15:00.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3516: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 114 op/s
Oct 02 13:15:00 compute-0 podman[409348]: 2025-10-02 13:15:00.94681265 +0000 UTC m=+0.090423148 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 13:15:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:01 compute-0 nova_compute[257802]: 2025-10-02 13:15:01.338 2 DEBUG nova.network.neutron [req-45d085f5-392d-4e93-9a96-f443fd07e69d req-2e7fd418-6f44-49da-b6f4-73a4b9de5d47 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Updated VIF entry in instance network info cache for port 0fe78275-ad8d-4e77-a0e7-503702bf1242. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:15:01 compute-0 nova_compute[257802]: 2025-10-02 13:15:01.339 2 DEBUG nova.network.neutron [req-45d085f5-392d-4e93-9a96-f443fd07e69d req-2e7fd418-6f44-49da-b6f4-73a4b9de5d47 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Updating instance_info_cache with network_info: [{"id": "0fe78275-ad8d-4e77-a0e7-503702bf1242", "address": "fa:16:3e:6b:f8:73", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fe78275-ad", "ovs_interfaceid": "0fe78275-ad8d-4e77-a0e7-503702bf1242", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:15:01 compute-0 nova_compute[257802]: 2025-10-02 13:15:01.376 2 DEBUG oslo_concurrency.lockutils [req-45d085f5-392d-4e93-9a96-f443fd07e69d req-2e7fd418-6f44-49da-b6f4-73a4b9de5d47 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-bfb3e97d-8923-4293-b4ae-a37019fe104c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:15:01 compute-0 nova_compute[257802]: 2025-10-02 13:15:01.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:01 compute-0 ovn_controller[148183]: 2025-10-02T13:15:01Z|00128|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.14 does not match offer 10.100.0.11
Oct 02 13:15:01 compute-0 ovn_controller[148183]: 2025-10-02T13:15:01Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:6b:f8:73 10.100.0.11
Oct 02 13:15:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:01.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:02 compute-0 ceph-mon[73607]: pgmap v3516: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 114 op/s
Oct 02 13:15:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:02.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3517: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 69 op/s
Oct 02 13:15:03 compute-0 nova_compute[257802]: 2025-10-02 13:15:03.121 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:15:03 compute-0 nova_compute[257802]: 2025-10-02 13:15:03.122 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 13:15:03 compute-0 ceph-mon[73607]: pgmap v3517: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 69 op/s
Oct 02 13:15:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:03.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:04.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3518: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 25 KiB/s wr, 107 op/s
Oct 02 13:15:05 compute-0 nova_compute[257802]: 2025-10-02 13:15:05.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:15:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:05.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:15:05 compute-0 ovn_controller[148183]: 2025-10-02T13:15:05Z|00130|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.14 does not match offer 10.100.0.11
Oct 02 13:15:05 compute-0 ovn_controller[148183]: 2025-10-02T13:15:05Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:6b:f8:73 10.100.0.11
Oct 02 13:15:05 compute-0 ceph-mon[73607]: pgmap v3518: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 25 KiB/s wr, 107 op/s
Oct 02 13:15:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:06.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:06 compute-0 nova_compute[257802]: 2025-10-02 13:15:06.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:06 compute-0 nova_compute[257802]: 2025-10-02 13:15:06.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:06 compute-0 ovn_controller[148183]: 2025-10-02T13:15:06Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6b:f8:73 10.100.0.11
Oct 02 13:15:06 compute-0 ovn_controller[148183]: 2025-10-02T13:15:06Z|00133|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6b:f8:73 10.100.0.11
Oct 02 13:15:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3519: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 63 op/s
Oct 02 13:15:07 compute-0 nova_compute[257802]: 2025-10-02 13:15:07.039 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:15:07 compute-0 ceph-mon[73607]: pgmap v3519: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 63 op/s
Oct 02 13:15:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:07.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:08.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3520: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 23 KiB/s wr, 45 op/s
Oct 02 13:15:09 compute-0 sudo[409378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:09 compute-0 sudo[409378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:09 compute-0 sudo[409378]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:09 compute-0 sudo[409403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:09 compute-0 sudo[409403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:09 compute-0 sudo[409403]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:09.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:09 compute-0 ceph-mon[73607]: pgmap v3520: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 23 KiB/s wr, 45 op/s
Oct 02 13:15:09 compute-0 nova_compute[257802]: 2025-10-02 13:15:09.954 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:15:10 compute-0 nova_compute[257802]: 2025-10-02 13:15:10.014 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Triggering sync for uuid bfb3e97d-8923-4293-b4ae-a37019fe104c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 13:15:10 compute-0 nova_compute[257802]: 2025-10-02 13:15:10.014 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "bfb3e97d-8923-4293-b4ae-a37019fe104c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:15:10 compute-0 nova_compute[257802]: 2025-10-02 13:15:10.015 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "bfb3e97d-8923-4293-b4ae-a37019fe104c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:15:10 compute-0 nova_compute[257802]: 2025-10-02 13:15:10.044 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "bfb3e97d-8923-4293-b4ae-a37019fe104c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.029s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:10.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:10 compute-0 nova_compute[257802]: 2025-10-02 13:15:10.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3521: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 27 KiB/s wr, 46 op/s
Oct 02 13:15:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:11 compute-0 nova_compute[257802]: 2025-10-02 13:15:11.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:11.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:11 compute-0 ceph-mon[73607]: pgmap v3521: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 27 KiB/s wr, 46 op/s
Oct 02 13:15:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:12.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:15:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:15:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:15:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:15:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:15:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:15:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3522: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 27 KiB/s wr, 46 op/s
Oct 02 13:15:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:15:13.253 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=86, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=85) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:15:13 compute-0 nova_compute[257802]: 2025-10-02 13:15:13.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:13 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:15:13.254 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:15:13 compute-0 nova_compute[257802]: 2025-10-02 13:15:13.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:13.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:13 compute-0 ceph-mon[73607]: pgmap v3522: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 27 KiB/s wr, 46 op/s
Oct 02 13:15:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:14.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:14 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:15:14.257 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '86'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:15:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3523: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 590 KiB/s rd, 27 KiB/s wr, 47 op/s
Oct 02 13:15:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e425 do_prune osdmap full prune enabled
Oct 02 13:15:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e426 e426: 3 total, 3 up, 3 in
Oct 02 13:15:15 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e426: 3 total, 3 up, 3 in
Oct 02 13:15:15 compute-0 nova_compute[257802]: 2025-10-02 13:15:15.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:15:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:15.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:15:16 compute-0 ceph-mon[73607]: pgmap v3523: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 590 KiB/s rd, 27 KiB/s wr, 47 op/s
Oct 02 13:15:16 compute-0 ceph-mon[73607]: osdmap e426: 3 total, 3 up, 3 in
Oct 02 13:15:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:16.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:16 compute-0 nova_compute[257802]: 2025-10-02 13:15:16.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3525: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 184 KiB/s rd, 18 KiB/s wr, 11 op/s
Oct 02 13:15:17 compute-0 ceph-mon[73607]: pgmap v3525: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 184 KiB/s rd, 18 KiB/s wr, 11 op/s
Oct 02 13:15:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:17.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:18.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3526: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 262 KiB/s rd, 10 KiB/s wr, 20 op/s
Oct 02 13:15:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:15:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:19.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:15:19 compute-0 ceph-mon[73607]: pgmap v3526: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 262 KiB/s rd, 10 KiB/s wr, 20 op/s
Oct 02 13:15:19 compute-0 podman[409434]: 2025-10-02 13:15:19.944624213 +0000 UTC m=+0.075176914 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251001, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:15:19 compute-0 podman[409433]: 2025-10-02 13:15:19.944980742 +0000 UTC m=+0.073801100 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct 02 13:15:19 compute-0 podman[409435]: 2025-10-02 13:15:19.963733022 +0000 UTC m=+0.095068192 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 13:15:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:20.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:20 compute-0 nova_compute[257802]: 2025-10-02 13:15:20.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3527: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 271 KiB/s rd, 6.4 KiB/s wr, 30 op/s
Oct 02 13:15:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3840151140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #177. Immutable memtables: 0.
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:15:21.221431) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 177
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410921221510, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 979, "num_deletes": 251, "total_data_size": 1495784, "memory_usage": 1528880, "flush_reason": "Manual Compaction"}
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #178: started
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410921331145, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 178, "file_size": 1478566, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77115, "largest_seqno": 78093, "table_properties": {"data_size": 1473836, "index_size": 2317, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10624, "raw_average_key_size": 19, "raw_value_size": 1464188, "raw_average_value_size": 2731, "num_data_blocks": 103, "num_entries": 536, "num_filter_entries": 536, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410837, "oldest_key_time": 1759410837, "file_creation_time": 1759410921, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 178, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 109741 microseconds, and 4764 cpu microseconds.
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:15:21.331188) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #178: 1478566 bytes OK
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:15:21.331207) [db/memtable_list.cc:519] [default] Level-0 commit table #178 started
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:15:21.408380) [db/memtable_list.cc:722] [default] Level-0 commit table #178: memtable #1 done
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:15:21.408426) EVENT_LOG_v1 {"time_micros": 1759410921408416, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:15:21.408447) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 1491248, prev total WAL file size 1491248, number of live WAL files 2.
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000174.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:15:21.409231) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [178(1443KB)], [176(11MB)]
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410921409277, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [178], "files_L6": [176], "score": -1, "input_data_size": 13262537, "oldest_snapshot_seqno": -1}
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #179: 10392 keys, 11301247 bytes, temperature: kUnknown
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410921608901, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 179, "file_size": 11301247, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11236958, "index_size": 37232, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25989, "raw_key_size": 274451, "raw_average_key_size": 26, "raw_value_size": 11057902, "raw_average_value_size": 1064, "num_data_blocks": 1405, "num_entries": 10392, "num_filter_entries": 10392, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759410921, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:15:21.609340) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 11301247 bytes
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:15:21.645103) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 66.4 rd, 56.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 11.2 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(16.6) write-amplify(7.6) OK, records in: 10911, records dropped: 519 output_compression: NoCompression
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:15:21.645149) EVENT_LOG_v1 {"time_micros": 1759410921645131, "job": 110, "event": "compaction_finished", "compaction_time_micros": 199805, "compaction_time_cpu_micros": 26411, "output_level": 6, "num_output_files": 1, "total_output_size": 11301247, "num_input_records": 10911, "num_output_records": 10392, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000178.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410921645946, "job": 110, "event": "table_file_deletion", "file_number": 178}
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410921650292, "job": 110, "event": "table_file_deletion", "file_number": 176}
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:15:21.409141) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:15:21.650405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:15:21.650414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:15:21.650417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:15:21.650420) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:15:21 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:15:21.650423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:15:21 compute-0 nova_compute[257802]: 2025-10-02 13:15:21.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:21.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:22 compute-0 ceph-mon[73607]: pgmap v3527: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 271 KiB/s rd, 6.4 KiB/s wr, 30 op/s
Oct 02 13:15:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:22.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3528: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 271 KiB/s rd, 6.4 KiB/s wr, 30 op/s
Oct 02 13:15:23 compute-0 ceph-mon[73607]: pgmap v3528: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 271 KiB/s rd, 6.4 KiB/s wr, 30 op/s
Oct 02 13:15:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:15:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:23.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:15:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:15:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:24.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:15:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/257275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3529: 305 pgs: 305 active+clean; 235 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 221 KiB/s rd, 1.1 MiB/s wr, 58 op/s
Oct 02 13:15:25 compute-0 ceph-mon[73607]: pgmap v3529: 305 pgs: 305 active+clean; 235 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 221 KiB/s rd, 1.1 MiB/s wr, 58 op/s
Oct 02 13:15:25 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 13:15:25 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 13:15:25 compute-0 nova_compute[257802]: 2025-10-02 13:15:25.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:15:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:25.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:15:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:15:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:26.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:15:26 compute-0 nova_compute[257802]: 2025-10-02 13:15:26.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2265586687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:15:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3530: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 188 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Oct 02 13:15:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:15:27.001 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:15:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:15:27.001 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:15:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:15:27.002 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:15:27 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1178023868' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:15:27 compute-0 ceph-mon[73607]: pgmap v3530: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 188 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Oct 02 13:15:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3603674874' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:15:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1178023868' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:15:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:15:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:27.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:15:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:28.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3531: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 90 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Oct 02 13:15:29 compute-0 sudo[409494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:29 compute-0 sudo[409494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:29 compute-0 sudo[409494]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:29 compute-0 sudo[409519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:29 compute-0 sudo[409519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:29 compute-0 sudo[409519]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:29 compute-0 ceph-mon[73607]: pgmap v3531: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 90 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Oct 02 13:15:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:29.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:15:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:30.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:15:30 compute-0 nova_compute[257802]: 2025-10-02 13:15:30.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3532: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 89 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Oct 02 13:15:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:31 compute-0 nova_compute[257802]: 2025-10-02 13:15:31.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:31 compute-0 ceph-mon[73607]: pgmap v3532: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 89 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Oct 02 13:15:31 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1013938976' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:15:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:31.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:32 compute-0 podman[409546]: 2025-10-02 13:15:32.01974884 +0000 UTC m=+0.131173367 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 13:15:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:32.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3533: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 81 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 02 13:15:33 compute-0 ceph-mon[73607]: pgmap v3533: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 81 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 02 13:15:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:33.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:34.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3534: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 993 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Oct 02 13:15:35 compute-0 nova_compute[257802]: 2025-10-02 13:15:35.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:15:35 compute-0 nova_compute[257802]: 2025-10-02 13:15:35.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:15:35 compute-0 nova_compute[257802]: 2025-10-02 13:15:35.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:15:35 compute-0 nova_compute[257802]: 2025-10-02 13:15:35.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:35.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:35 compute-0 ceph-mon[73607]: pgmap v3534: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 993 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Oct 02 13:15:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:36.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:36 compute-0 nova_compute[257802]: 2025-10-02 13:15:36.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3535: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 950 KiB/s wr, 83 op/s
Oct 02 13:15:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2077986245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:37 compute-0 nova_compute[257802]: 2025-10-02 13:15:37.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:15:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:37.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:37 compute-0 ceph-mon[73607]: pgmap v3535: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 950 KiB/s wr, 83 op/s
Oct 02 13:15:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1796848928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:15:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:38.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:15:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3536: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 28 KiB/s wr, 101 op/s
Oct 02 13:15:39 compute-0 sudo[409576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:39 compute-0 sudo[409576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:39 compute-0 sudo[409576]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:39.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:39 compute-0 sudo[409601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:15:39 compute-0 sudo[409601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:39 compute-0 sudo[409601]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:39 compute-0 sudo[409626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:39 compute-0 sudo[409626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:39 compute-0 sudo[409626]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:40 compute-0 ceph-mon[73607]: pgmap v3536: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 28 KiB/s wr, 101 op/s
Oct 02 13:15:40 compute-0 sudo[409651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:15:40 compute-0 sudo[409651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:40 compute-0 nova_compute[257802]: 2025-10-02 13:15:40.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:15:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:40.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:40 compute-0 sudo[409651]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:40 compute-0 nova_compute[257802]: 2025-10-02 13:15:40.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3537: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 28 KiB/s wr, 152 op/s
Oct 02 13:15:41 compute-0 ceph-mon[73607]: pgmap v3537: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 28 KiB/s wr, 152 op/s
Oct 02 13:15:41 compute-0 nova_compute[257802]: 2025-10-02 13:15:41.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:15:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:41 compute-0 nova_compute[257802]: 2025-10-02 13:15:41.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 13:15:41 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:15:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 13:15:41 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:15:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:41.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:15:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:42.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:15:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:15:42
Oct 02 13:15:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:15:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:15:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['images', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', '.rgw.root', 'backups']
Oct 02 13:15:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:15:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:15:42 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:15:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:15:42 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:15:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:15:42 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:15:42 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev c8420533-3c6d-4803-afd3-9d332bd53d48 does not exist
Oct 02 13:15:42 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 77b0c0af-9892-40f9-a828-0756ab4c8d73 does not exist
Oct 02 13:15:42 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev bcd5ac3c-6659-469f-828c-365cd5f6e1c9 does not exist
Oct 02 13:15:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:15:42 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:15:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:15:42 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:15:42 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:15:42 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:15:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:15:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:15:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:15:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:15:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:15:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:15:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:15:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:15:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:15:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:15:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:15:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:15:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:15:42 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:15:42 compute-0 sudo[409708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:42 compute-0 sudo[409708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:42 compute-0 sudo[409708]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3538: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 16 KiB/s wr, 142 op/s
Oct 02 13:15:42 compute-0 sudo[409734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:15:42 compute-0 sudo[409734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:42 compute-0 sudo[409734]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:42 compute-0 sudo[409759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:42 compute-0 sudo[409759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:42 compute-0 sudo[409759]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:43 compute-0 sudo[409784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:15:43 compute-0 sudo[409784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:43 compute-0 podman[409850]: 2025-10-02 13:15:43.42771227 +0000 UTC m=+0.056592758 container create f2eaa8732387a4a8b3102735a298a2ea4016b106420ecefc4bee6feb490061a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 13:15:43 compute-0 systemd[1]: Started libpod-conmon-f2eaa8732387a4a8b3102735a298a2ea4016b106420ecefc4bee6feb490061a4.scope.
Oct 02 13:15:43 compute-0 podman[409850]: 2025-10-02 13:15:43.399402306 +0000 UTC m=+0.028282804 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:15:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:15:43 compute-0 podman[409850]: 2025-10-02 13:15:43.549052845 +0000 UTC m=+0.177933363 container init f2eaa8732387a4a8b3102735a298a2ea4016b106420ecefc4bee6feb490061a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hypatia, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:15:43 compute-0 podman[409850]: 2025-10-02 13:15:43.557427961 +0000 UTC m=+0.186308449 container start f2eaa8732387a4a8b3102735a298a2ea4016b106420ecefc4bee6feb490061a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 13:15:43 compute-0 podman[409850]: 2025-10-02 13:15:43.560634139 +0000 UTC m=+0.189514627 container attach f2eaa8732387a4a8b3102735a298a2ea4016b106420ecefc4bee6feb490061a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:15:43 compute-0 elastic_hypatia[409867]: 167 167
Oct 02 13:15:43 compute-0 systemd[1]: libpod-f2eaa8732387a4a8b3102735a298a2ea4016b106420ecefc4bee6feb490061a4.scope: Deactivated successfully.
Oct 02 13:15:43 compute-0 podman[409850]: 2025-10-02 13:15:43.567114228 +0000 UTC m=+0.195994716 container died f2eaa8732387a4a8b3102735a298a2ea4016b106420ecefc4bee6feb490061a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:15:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:15:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:15:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:15:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:15:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:15:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c1c9b24e205a8f68d6cdf06c71f55e5bf696cf91c8140383f1dea0b639940e5-merged.mount: Deactivated successfully.
Oct 02 13:15:43 compute-0 podman[409850]: 2025-10-02 13:15:43.62266527 +0000 UTC m=+0.251545758 container remove f2eaa8732387a4a8b3102735a298a2ea4016b106420ecefc4bee6feb490061a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hypatia, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:15:43 compute-0 systemd[1]: libpod-conmon-f2eaa8732387a4a8b3102735a298a2ea4016b106420ecefc4bee6feb490061a4.scope: Deactivated successfully.
Oct 02 13:15:43 compute-0 ceph-mon[73607]: pgmap v3538: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 16 KiB/s wr, 142 op/s
Oct 02 13:15:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:43.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:43 compute-0 podman[409893]: 2025-10-02 13:15:43.911346638 +0000 UTC m=+0.069073995 container create cd31c00fdf185e0db5e4385143ffe99d359d8ee7f0da8af517e1dd8587aea253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:15:43 compute-0 systemd[1]: Started libpod-conmon-cd31c00fdf185e0db5e4385143ffe99d359d8ee7f0da8af517e1dd8587aea253.scope.
Oct 02 13:15:43 compute-0 podman[409893]: 2025-10-02 13:15:43.879908027 +0000 UTC m=+0.037635444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:15:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9de3305c5b0e7a1a23879588d4d21e576ca26066f7aeb7f20dafb8d7e9291b06/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9de3305c5b0e7a1a23879588d4d21e576ca26066f7aeb7f20dafb8d7e9291b06/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9de3305c5b0e7a1a23879588d4d21e576ca26066f7aeb7f20dafb8d7e9291b06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9de3305c5b0e7a1a23879588d4d21e576ca26066f7aeb7f20dafb8d7e9291b06/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9de3305c5b0e7a1a23879588d4d21e576ca26066f7aeb7f20dafb8d7e9291b06/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:15:44 compute-0 podman[409893]: 2025-10-02 13:15:44.031304439 +0000 UTC m=+0.189031776 container init cd31c00fdf185e0db5e4385143ffe99d359d8ee7f0da8af517e1dd8587aea253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_colden, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 13:15:44 compute-0 podman[409893]: 2025-10-02 13:15:44.044482143 +0000 UTC m=+0.202209510 container start cd31c00fdf185e0db5e4385143ffe99d359d8ee7f0da8af517e1dd8587aea253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_colden, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 13:15:44 compute-0 podman[409893]: 2025-10-02 13:15:44.048576302 +0000 UTC m=+0.206303679 container attach cd31c00fdf185e0db5e4385143ffe99d359d8ee7f0da8af517e1dd8587aea253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 13:15:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:15:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:44.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:15:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:15:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:15:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:15:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:15:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:15:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3539: 305 pgs: 305 active+clean; 272 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.5 MiB/s wr, 182 op/s
Oct 02 13:15:44 compute-0 romantic_colden[409909]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:15:44 compute-0 romantic_colden[409909]: --> relative data size: 1.0
Oct 02 13:15:44 compute-0 romantic_colden[409909]: --> All data devices are unavailable
Oct 02 13:15:44 compute-0 systemd[1]: libpod-cd31c00fdf185e0db5e4385143ffe99d359d8ee7f0da8af517e1dd8587aea253.scope: Deactivated successfully.
Oct 02 13:15:44 compute-0 podman[409893]: 2025-10-02 13:15:44.869364038 +0000 UTC m=+1.027091385 container died cd31c00fdf185e0db5e4385143ffe99d359d8ee7f0da8af517e1dd8587aea253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_colden, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 13:15:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-9de3305c5b0e7a1a23879588d4d21e576ca26066f7aeb7f20dafb8d7e9291b06-merged.mount: Deactivated successfully.
Oct 02 13:15:44 compute-0 podman[409893]: 2025-10-02 13:15:44.942316267 +0000 UTC m=+1.100043594 container remove cd31c00fdf185e0db5e4385143ffe99d359d8ee7f0da8af517e1dd8587aea253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_colden, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:15:44 compute-0 systemd[1]: libpod-conmon-cd31c00fdf185e0db5e4385143ffe99d359d8ee7f0da8af517e1dd8587aea253.scope: Deactivated successfully.
Oct 02 13:15:44 compute-0 sudo[409784]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:45 compute-0 sudo[409938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:45 compute-0 sudo[409938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:45 compute-0 sudo[409938]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:45 compute-0 sudo[409963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:15:45 compute-0 sudo[409963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:45 compute-0 sudo[409963]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:45 compute-0 sudo[409988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:45 compute-0 sudo[409988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:45 compute-0 sudo[409988]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:45 compute-0 sudo[410013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 13:15:45 compute-0 sudo[410013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:45 compute-0 podman[410078]: 2025-10-02 13:15:45.603956809 +0000 UTC m=+0.040856483 container create f079545ac48253392cc6b1efd013c78e1e5e7ae4091022486e43359924b398ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jang, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 13:15:45 compute-0 systemd[1]: Started libpod-conmon-f079545ac48253392cc6b1efd013c78e1e5e7ae4091022486e43359924b398ca.scope.
Oct 02 13:15:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:15:45 compute-0 podman[410078]: 2025-10-02 13:15:45.584062111 +0000 UTC m=+0.020961825 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:15:45 compute-0 podman[410078]: 2025-10-02 13:15:45.689997579 +0000 UTC m=+0.126897303 container init f079545ac48253392cc6b1efd013c78e1e5e7ae4091022486e43359924b398ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 13:15:45 compute-0 podman[410078]: 2025-10-02 13:15:45.696912609 +0000 UTC m=+0.133812293 container start f079545ac48253392cc6b1efd013c78e1e5e7ae4091022486e43359924b398ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jang, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Oct 02 13:15:45 compute-0 podman[410078]: 2025-10-02 13:15:45.700125997 +0000 UTC m=+0.137025671 container attach f079545ac48253392cc6b1efd013c78e1e5e7ae4091022486e43359924b398ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jang, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 13:15:45 compute-0 blissful_jang[410094]: 167 167
Oct 02 13:15:45 compute-0 systemd[1]: libpod-f079545ac48253392cc6b1efd013c78e1e5e7ae4091022486e43359924b398ca.scope: Deactivated successfully.
Oct 02 13:15:45 compute-0 podman[410078]: 2025-10-02 13:15:45.705438177 +0000 UTC m=+0.142337851 container died f079545ac48253392cc6b1efd013c78e1e5e7ae4091022486e43359924b398ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jang, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:15:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-28e5d434f7ba44ac8cb88841c6cac392a91f2904501b72e8e30c45ec0b5cfb20-merged.mount: Deactivated successfully.
Oct 02 13:15:45 compute-0 podman[410078]: 2025-10-02 13:15:45.7398058 +0000 UTC m=+0.176705494 container remove f079545ac48253392cc6b1efd013c78e1e5e7ae4091022486e43359924b398ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jang, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 13:15:45 compute-0 systemd[1]: libpod-conmon-f079545ac48253392cc6b1efd013c78e1e5e7ae4091022486e43359924b398ca.scope: Deactivated successfully.
Oct 02 13:15:45 compute-0 nova_compute[257802]: 2025-10-02 13:15:45.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:45.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:45 compute-0 ceph-mon[73607]: pgmap v3539: 305 pgs: 305 active+clean; 272 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.5 MiB/s wr, 182 op/s
Oct 02 13:15:45 compute-0 podman[410116]: 2025-10-02 13:15:45.954517655 +0000 UTC m=+0.046716557 container create 519fbb66e6dea174b3e61482313c3a4a890ae0c14854345d869d75f46c91e1ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_matsumoto, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 13:15:45 compute-0 systemd[1]: Started libpod-conmon-519fbb66e6dea174b3e61482313c3a4a890ae0c14854345d869d75f46c91e1ca.scope.
Oct 02 13:15:46 compute-0 podman[410116]: 2025-10-02 13:15:45.932895414 +0000 UTC m=+0.025094346 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:15:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f30521825432a9ccb89e8c4fb8d0eb3266af781997915b3945d083aeb38accaa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f30521825432a9ccb89e8c4fb8d0eb3266af781997915b3945d083aeb38accaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f30521825432a9ccb89e8c4fb8d0eb3266af781997915b3945d083aeb38accaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f30521825432a9ccb89e8c4fb8d0eb3266af781997915b3945d083aeb38accaa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:15:46 compute-0 podman[410116]: 2025-10-02 13:15:46.048294214 +0000 UTC m=+0.140493136 container init 519fbb66e6dea174b3e61482313c3a4a890ae0c14854345d869d75f46c91e1ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_matsumoto, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:15:46 compute-0 podman[410116]: 2025-10-02 13:15:46.055390898 +0000 UTC m=+0.147589800 container start 519fbb66e6dea174b3e61482313c3a4a890ae0c14854345d869d75f46c91e1ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 13:15:46 compute-0 podman[410116]: 2025-10-02 13:15:46.059274303 +0000 UTC m=+0.151473225 container attach 519fbb66e6dea174b3e61482313c3a4a890ae0c14854345d869d75f46c91e1ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:15:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:46.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:46 compute-0 nova_compute[257802]: 2025-10-02 13:15:46.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]: {
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]:     "1": [
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]:         {
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]:             "devices": [
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]:                 "/dev/loop3"
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]:             ],
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]:             "lv_name": "ceph_lv0",
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]:             "lv_size": "7511998464",
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]:             "name": "ceph_lv0",
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]:             "tags": {
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]:                 "ceph.cluster_name": "ceph",
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]:                 "ceph.crush_device_class": "",
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]:                 "ceph.encrypted": "0",
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]:                 "ceph.osd_id": "1",
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]:                 "ceph.type": "block",
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]:                 "ceph.vdo": "0"
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]:             },
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]:             "type": "block",
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]:             "vg_name": "ceph_vg0"
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]:         }
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]:     ]
Oct 02 13:15:46 compute-0 flamboyant_matsumoto[410133]: }
Oct 02 13:15:46 compute-0 systemd[1]: libpod-519fbb66e6dea174b3e61482313c3a4a890ae0c14854345d869d75f46c91e1ca.scope: Deactivated successfully.
Oct 02 13:15:46 compute-0 conmon[410133]: conmon 519fbb66e6dea174b3e6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-519fbb66e6dea174b3e61482313c3a4a890ae0c14854345d869d75f46c91e1ca.scope/container/memory.events
Oct 02 13:15:46 compute-0 podman[410116]: 2025-10-02 13:15:46.817106275 +0000 UTC m=+0.909305177 container died 519fbb66e6dea174b3e61482313c3a4a890ae0c14854345d869d75f46c91e1ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_matsumoto, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 13:15:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3540: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.1 MiB/s wr, 168 op/s
Oct 02 13:15:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-f30521825432a9ccb89e8c4fb8d0eb3266af781997915b3945d083aeb38accaa-merged.mount: Deactivated successfully.
Oct 02 13:15:46 compute-0 podman[410116]: 2025-10-02 13:15:46.869979451 +0000 UTC m=+0.962178353 container remove 519fbb66e6dea174b3e61482313c3a4a890ae0c14854345d869d75f46c91e1ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_matsumoto, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:15:46 compute-0 systemd[1]: libpod-conmon-519fbb66e6dea174b3e61482313c3a4a890ae0c14854345d869d75f46c91e1ca.scope: Deactivated successfully.
Oct 02 13:15:46 compute-0 sudo[410013]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:46 compute-0 sudo[410154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:46 compute-0 sudo[410154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:46 compute-0 sudo[410154]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:47 compute-0 sudo[410179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:15:47 compute-0 sudo[410179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:47 compute-0 sudo[410179]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:47 compute-0 sudo[410204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:47 compute-0 sudo[410204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:47 compute-0 sudo[410204]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:47 compute-0 sudo[410229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 13:15:47 compute-0 sudo[410229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:47 compute-0 podman[410297]: 2025-10-02 13:15:47.470254719 +0000 UTC m=+0.038140746 container create 5c1473066abdeb048a384905fd9852dcef9f77d5d9b7cfba269a6b08ebcf9a7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:15:47 compute-0 systemd[1]: Started libpod-conmon-5c1473066abdeb048a384905fd9852dcef9f77d5d9b7cfba269a6b08ebcf9a7a.scope.
Oct 02 13:15:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:15:47 compute-0 podman[410297]: 2025-10-02 13:15:47.454216956 +0000 UTC m=+0.022102873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:15:47 compute-0 podman[410297]: 2025-10-02 13:15:47.558629565 +0000 UTC m=+0.126515482 container init 5c1473066abdeb048a384905fd9852dcef9f77d5d9b7cfba269a6b08ebcf9a7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:15:47 compute-0 podman[410297]: 2025-10-02 13:15:47.566255913 +0000 UTC m=+0.134141800 container start 5c1473066abdeb048a384905fd9852dcef9f77d5d9b7cfba269a6b08ebcf9a7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Oct 02 13:15:47 compute-0 podman[410297]: 2025-10-02 13:15:47.569963553 +0000 UTC m=+0.137849490 container attach 5c1473066abdeb048a384905fd9852dcef9f77d5d9b7cfba269a6b08ebcf9a7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:15:47 compute-0 nice_noether[410313]: 167 167
Oct 02 13:15:47 compute-0 systemd[1]: libpod-5c1473066abdeb048a384905fd9852dcef9f77d5d9b7cfba269a6b08ebcf9a7a.scope: Deactivated successfully.
Oct 02 13:15:47 compute-0 podman[410297]: 2025-10-02 13:15:47.572476385 +0000 UTC m=+0.140362312 container died 5c1473066abdeb048a384905fd9852dcef9f77d5d9b7cfba269a6b08ebcf9a7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:15:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-b11a428ded0665363fd36bab2aa25b5c9045385a8d6c4733d464d0feb85487b7-merged.mount: Deactivated successfully.
Oct 02 13:15:47 compute-0 podman[410297]: 2025-10-02 13:15:47.630465747 +0000 UTC m=+0.198351654 container remove 5c1473066abdeb048a384905fd9852dcef9f77d5d9b7cfba269a6b08ebcf9a7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 13:15:47 compute-0 systemd[1]: libpod-conmon-5c1473066abdeb048a384905fd9852dcef9f77d5d9b7cfba269a6b08ebcf9a7a.scope: Deactivated successfully.
Oct 02 13:15:47 compute-0 podman[410337]: 2025-10-02 13:15:47.842207829 +0000 UTC m=+0.051297979 container create d322eccbe27c81146988b4f90e67cb00766d53bce781621771a66c904027fdb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:15:47 compute-0 systemd[1]: Started libpod-conmon-d322eccbe27c81146988b4f90e67cb00766d53bce781621771a66c904027fdb3.scope.
Oct 02 13:15:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:15:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:47.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:15:47 compute-0 podman[410337]: 2025-10-02 13:15:47.81900975 +0000 UTC m=+0.028099920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:15:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc5357b0e92f952a3ce02f826df6cce51a657bb03f6cf9e6f0902beccc283af4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc5357b0e92f952a3ce02f826df6cce51a657bb03f6cf9e6f0902beccc283af4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc5357b0e92f952a3ce02f826df6cce51a657bb03f6cf9e6f0902beccc283af4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc5357b0e92f952a3ce02f826df6cce51a657bb03f6cf9e6f0902beccc283af4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:15:47 compute-0 podman[410337]: 2025-10-02 13:15:47.944256971 +0000 UTC m=+0.153347121 container init d322eccbe27c81146988b4f90e67cb00766d53bce781621771a66c904027fdb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 13:15:47 compute-0 podman[410337]: 2025-10-02 13:15:47.953250041 +0000 UTC m=+0.162340191 container start d322eccbe27c81146988b4f90e67cb00766d53bce781621771a66c904027fdb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:15:47 compute-0 podman[410337]: 2025-10-02 13:15:47.956732987 +0000 UTC m=+0.165823157 container attach d322eccbe27c81146988b4f90e67cb00766d53bce781621771a66c904027fdb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 13:15:47 compute-0 ceph-mon[73607]: pgmap v3540: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.1 MiB/s wr, 168 op/s
Oct 02 13:15:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:48.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:48 compute-0 brave_chebyshev[410354]: {
Oct 02 13:15:48 compute-0 brave_chebyshev[410354]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 13:15:48 compute-0 brave_chebyshev[410354]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:15:48 compute-0 brave_chebyshev[410354]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:15:48 compute-0 brave_chebyshev[410354]:         "osd_id": 1,
Oct 02 13:15:48 compute-0 brave_chebyshev[410354]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:15:48 compute-0 brave_chebyshev[410354]:         "type": "bluestore"
Oct 02 13:15:48 compute-0 brave_chebyshev[410354]:     }
Oct 02 13:15:48 compute-0 brave_chebyshev[410354]: }
Oct 02 13:15:48 compute-0 systemd[1]: libpod-d322eccbe27c81146988b4f90e67cb00766d53bce781621771a66c904027fdb3.scope: Deactivated successfully.
Oct 02 13:15:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3541: 305 pgs: 305 active+clean; 285 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.2 MiB/s wr, 142 op/s
Oct 02 13:15:48 compute-0 podman[410376]: 2025-10-02 13:15:48.825641991 +0000 UTC m=+0.027127796 container died d322eccbe27c81146988b4f90e67cb00766d53bce781621771a66c904027fdb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chebyshev, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:15:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc5357b0e92f952a3ce02f826df6cce51a657bb03f6cf9e6f0902beccc283af4-merged.mount: Deactivated successfully.
Oct 02 13:15:48 compute-0 podman[410376]: 2025-10-02 13:15:48.874065209 +0000 UTC m=+0.075550984 container remove d322eccbe27c81146988b4f90e67cb00766d53bce781621771a66c904027fdb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chebyshev, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:15:48 compute-0 systemd[1]: libpod-conmon-d322eccbe27c81146988b4f90e67cb00766d53bce781621771a66c904027fdb3.scope: Deactivated successfully.
Oct 02 13:15:48 compute-0 sudo[410229]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:15:48 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:15:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:15:49 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:15:49 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 4a5e92ee-2b72-4f51-9544-ba040646fc95 does not exist
Oct 02 13:15:49 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev fafdace1-64ee-4853-bdd9-3454bebf23cc does not exist
Oct 02 13:15:49 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev d241ece8-efaa-488c-8881-daff7e24a97e does not exist
Oct 02 13:15:49 compute-0 nova_compute[257802]: 2025-10-02 13:15:49.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:15:49 compute-0 sudo[410392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:49 compute-0 sudo[410392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:49 compute-0 sudo[410392]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:49 compute-0 sudo[410417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:15:49 compute-0 sudo[410417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:49 compute-0 sudo[410417]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:49 compute-0 sudo[410442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:49 compute-0 sudo[410442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:49 compute-0 sudo[410442]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:49 compute-0 sudo[410467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:49 compute-0 sudo[410467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:49 compute-0 sudo[410467]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:15:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:49.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:15:49 compute-0 ceph-mon[73607]: pgmap v3541: 305 pgs: 305 active+clean; 285 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.2 MiB/s wr, 142 op/s
Oct 02 13:15:49 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:15:49 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:15:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:50.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:50 compute-0 nova_compute[257802]: 2025-10-02 13:15:50.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3542: 305 pgs: 305 active+clean; 295 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 158 op/s
Oct 02 13:15:50 compute-0 podman[410494]: 2025-10-02 13:15:50.944648707 +0000 UTC m=+0.070981672 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:15:50 compute-0 podman[410493]: 2025-10-02 13:15:50.946075712 +0000 UTC m=+0.071705159 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 02 13:15:50 compute-0 podman[410495]: 2025-10-02 13:15:50.985807035 +0000 UTC m=+0.099800507 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 02 13:15:51 compute-0 nova_compute[257802]: 2025-10-02 13:15:51.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:15:51 compute-0 nova_compute[257802]: 2025-10-02 13:15:51.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:15:51 compute-0 nova_compute[257802]: 2025-10-02 13:15:51.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:15:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:51 compute-0 nova_compute[257802]: 2025-10-02 13:15:51.329 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "refresh_cache-bfb3e97d-8923-4293-b4ae-a37019fe104c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:15:51 compute-0 nova_compute[257802]: 2025-10-02 13:15:51.330 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquired lock "refresh_cache-bfb3e97d-8923-4293-b4ae-a37019fe104c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:15:51 compute-0 nova_compute[257802]: 2025-10-02 13:15:51.330 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:15:51 compute-0 nova_compute[257802]: 2025-10-02 13:15:51.330 2 DEBUG nova.objects.instance [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lazy-loading 'info_cache' on Instance uuid bfb3e97d-8923-4293-b4ae-a37019fe104c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:15:51 compute-0 nova_compute[257802]: 2025-10-02 13:15:51.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:51.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:52 compute-0 ceph-mon[73607]: pgmap v3542: 305 pgs: 305 active+clean; 295 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 158 op/s
Oct 02 13:15:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/162551055' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:15:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:52.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:15:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3543: 305 pgs: 305 active+clean; 295 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 107 op/s
Oct 02 13:15:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3437230346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:53 compute-0 nova_compute[257802]: 2025-10-02 13:15:53.142 2 DEBUG nova.network.neutron [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Updating instance_info_cache with network_info: [{"id": "0fe78275-ad8d-4e77-a0e7-503702bf1242", "address": "fa:16:3e:6b:f8:73", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fe78275-ad", "ovs_interfaceid": "0fe78275-ad8d-4e77-a0e7-503702bf1242", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:15:53 compute-0 nova_compute[257802]: 2025-10-02 13:15:53.182 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Releasing lock "refresh_cache-bfb3e97d-8923-4293-b4ae-a37019fe104c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:15:53 compute-0 nova_compute[257802]: 2025-10-02 13:15:53.182 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:15:53 compute-0 nova_compute[257802]: 2025-10-02 13:15:53.703 2 DEBUG nova.compute.manager [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Oct 02 13:15:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:53.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:54 compute-0 nova_compute[257802]: 2025-10-02 13:15:53.999 2 DEBUG oslo_concurrency.lockutils [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:15:54 compute-0 nova_compute[257802]: 2025-10-02 13:15:54.000 2 DEBUG oslo_concurrency.lockutils [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:15:54 compute-0 ceph-mon[73607]: pgmap v3543: 305 pgs: 305 active+clean; 295 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 107 op/s
Oct 02 13:15:54 compute-0 nova_compute[257802]: 2025-10-02 13:15:54.158 2 DEBUG nova.objects.instance [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'pci_requests' on Instance uuid 0a899aee-b0ae-4350-9f56-446d42ef77d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:15:54 compute-0 nova_compute[257802]: 2025-10-02 13:15:54.203 2 DEBUG nova.virt.hardware [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:15:54 compute-0 nova_compute[257802]: 2025-10-02 13:15:54.203 2 INFO nova.compute.claims [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Claim successful on node compute-0.ctlplane.example.com
Oct 02 13:15:54 compute-0 nova_compute[257802]: 2025-10-02 13:15:54.204 2 DEBUG nova.objects.instance [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'resources' on Instance uuid 0a899aee-b0ae-4350-9f56-446d42ef77d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:15:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:54.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:54 compute-0 nova_compute[257802]: 2025-10-02 13:15:54.257 2 DEBUG nova.objects.instance [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'pci_devices' on Instance uuid 0a899aee-b0ae-4350-9f56-446d42ef77d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:15:54 compute-0 nova_compute[257802]: 2025-10-02 13:15:54.335 2 INFO nova.compute.resource_tracker [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Updating resource usage from migration f57aae45-e0d4-4519-a990-fd077295e0a0
Oct 02 13:15:54 compute-0 nova_compute[257802]: 2025-10-02 13:15:54.335 2 DEBUG nova.compute.resource_tracker [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Starting to track incoming migration f57aae45-e0d4-4519-a990-fd077295e0a0 with flavor eb3a53f1-304b-4cb0-acc3-abffce0fb181 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Oct 02 13:15:54 compute-0 nova_compute[257802]: 2025-10-02 13:15:54.380 2 DEBUG nova.scheduler.client.report [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Refreshing inventories for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 13:15:54 compute-0 nova_compute[257802]: 2025-10-02 13:15:54.419 2 DEBUG nova.scheduler.client.report [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Updating ProviderTree inventory for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 13:15:54 compute-0 nova_compute[257802]: 2025-10-02 13:15:54.420 2 DEBUG nova.compute.provider_tree [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Updating inventory in ProviderTree for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 13:15:54 compute-0 nova_compute[257802]: 2025-10-02 13:15:54.443 2 DEBUG nova.scheduler.client.report [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Refreshing aggregate associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 13:15:54 compute-0 nova_compute[257802]: 2025-10-02 13:15:54.464 2 DEBUG nova.scheduler.client.report [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Refreshing trait associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 13:15:54 compute-0 nova_compute[257802]: 2025-10-02 13:15:54.538 2 DEBUG oslo_concurrency.processutils [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:15:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3544: 305 pgs: 305 active+clean; 295 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.6 MiB/s wr, 111 op/s
Oct 02 13:15:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:15:54 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3741721116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:54 compute-0 nova_compute[257802]: 2025-10-02 13:15:54.963 2 DEBUG oslo_concurrency.processutils [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:15:54 compute-0 nova_compute[257802]: 2025-10-02 13:15:54.969 2 DEBUG nova.compute.provider_tree [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:15:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:15:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:15:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:15:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:15:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021832251535455053 of space, bias 1.0, pg target 0.6549675460636516 quantized to 32 (current 32)
Oct 02 13:15:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:15:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004604729376046901 of space, bias 1.0, pg target 1.3814188128140703 quantized to 32 (current 32)
Oct 02 13:15:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:15:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Oct 02 13:15:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:15:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 13:15:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:15:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 13:15:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:15:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:15:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:15:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027172174530057695 quantized to 32 (current 32)
Oct 02 13:15:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:15:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 13:15:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:15:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:15:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:15:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 13:15:55 compute-0 nova_compute[257802]: 2025-10-02 13:15:55.045 2 DEBUG nova.scheduler.client.report [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:15:55 compute-0 ceph-mon[73607]: pgmap v3544: 305 pgs: 305 active+clean; 295 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.6 MiB/s wr, 111 op/s
Oct 02 13:15:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3741721116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:55 compute-0 nova_compute[257802]: 2025-10-02 13:15:55.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:15:55 compute-0 nova_compute[257802]: 2025-10-02 13:15:55.131 2 DEBUG oslo_concurrency.lockutils [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:55 compute-0 nova_compute[257802]: 2025-10-02 13:15:55.132 2 INFO nova.compute.manager [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Migrating
Oct 02 13:15:55 compute-0 nova_compute[257802]: 2025-10-02 13:15:55.217 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:15:55 compute-0 nova_compute[257802]: 2025-10-02 13:15:55.217 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:15:55 compute-0 nova_compute[257802]: 2025-10-02 13:15:55.218 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:55 compute-0 nova_compute[257802]: 2025-10-02 13:15:55.218 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:15:55 compute-0 nova_compute[257802]: 2025-10-02 13:15:55.219 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:15:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:15:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2994097466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:55 compute-0 nova_compute[257802]: 2025-10-02 13:15:55.642 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:15:55 compute-0 nova_compute[257802]: 2025-10-02 13:15:55.740 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000dc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:15:55 compute-0 nova_compute[257802]: 2025-10-02 13:15:55.741 2 DEBUG nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] skipping disk for instance-000000dc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:15:55 compute-0 nova_compute[257802]: 2025-10-02 13:15:55.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:55 compute-0 nova_compute[257802]: 2025-10-02 13:15:55.880 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:15:55 compute-0 nova_compute[257802]: 2025-10-02 13:15:55.881 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3936MB free_disk=20.942459106445312GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:15:55 compute-0 nova_compute[257802]: 2025-10-02 13:15:55.881 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:15:55 compute-0 nova_compute[257802]: 2025-10-02 13:15:55.882 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:15:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:55.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:55 compute-0 nova_compute[257802]: 2025-10-02 13:15:55.995 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Migration for instance 0a899aee-b0ae-4350-9f56-446d42ef77d2 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Oct 02 13:15:56 compute-0 nova_compute[257802]: 2025-10-02 13:15:56.018 2 INFO nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Updating resource usage from migration f57aae45-e0d4-4519-a990-fd077295e0a0
Oct 02 13:15:56 compute-0 nova_compute[257802]: 2025-10-02 13:15:56.018 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Starting to track incoming migration f57aae45-e0d4-4519-a990-fd077295e0a0 with flavor eb3a53f1-304b-4cb0-acc3-abffce0fb181 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Oct 02 13:15:56 compute-0 nova_compute[257802]: 2025-10-02 13:15:56.048 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance bfb3e97d-8923-4293-b4ae-a37019fe104c actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:15:56 compute-0 nova_compute[257802]: 2025-10-02 13:15:56.087 2 WARNING nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Instance 0a899aee-b0ae-4350-9f56-446d42ef77d2 has been moved to another host compute-1.ctlplane.example.com(compute-1.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}.
Oct 02 13:15:56 compute-0 nova_compute[257802]: 2025-10-02 13:15:56.088 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:15:56 compute-0 nova_compute[257802]: 2025-10-02 13:15:56.088 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=832MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:15:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/929483847' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:15:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/929483847' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:15:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2994097466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:56 compute-0 nova_compute[257802]: 2025-10-02 13:15:56.172 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:15:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:56.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:15:56.550 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=87, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=86) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:15:56 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:15:56.551 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:15:56 compute-0 nova_compute[257802]: 2025-10-02 13:15:56.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:15:56 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/109827013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:56 compute-0 nova_compute[257802]: 2025-10-02 13:15:56.585 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:15:56 compute-0 nova_compute[257802]: 2025-10-02 13:15:56.590 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:15:56 compute-0 nova_compute[257802]: 2025-10-02 13:15:56.680 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:15:56 compute-0 nova_compute[257802]: 2025-10-02 13:15:56.684 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:15:56 compute-0 nova_compute[257802]: 2025-10-02 13:15:56.684 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:56 compute-0 nova_compute[257802]: 2025-10-02 13:15:56.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3545: 305 pgs: 305 active+clean; 299 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.2 MiB/s wr, 72 op/s
Oct 02 13:15:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/109827013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:57 compute-0 ceph-mon[73607]: pgmap v3545: 305 pgs: 305 active+clean; 299 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.2 MiB/s wr, 72 op/s
Oct 02 13:15:57 compute-0 sshd-session[410623]: Accepted publickey for nova from 192.168.122.101 port 57556 ssh2: ECDSA SHA256:RlBMWn3An7DGjBe9yfwGQtrEA9dOakLcJHFiZKvkVOc
Oct 02 13:15:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:57.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:57 compute-0 systemd[1]: Created slice User Slice of UID 42436.
Oct 02 13:15:57 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct 02 13:15:57 compute-0 systemd-logind[789]: New session 75 of user nova.
Oct 02 13:15:57 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct 02 13:15:57 compute-0 systemd[1]: Starting User Manager for UID 42436...
Oct 02 13:15:57 compute-0 systemd[410627]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 13:15:58 compute-0 systemd[410627]: Queued start job for default target Main User Target.
Oct 02 13:15:58 compute-0 systemd[410627]: Created slice User Application Slice.
Oct 02 13:15:58 compute-0 systemd[410627]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 02 13:15:58 compute-0 systemd[410627]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 13:15:58 compute-0 systemd[410627]: Reached target Paths.
Oct 02 13:15:58 compute-0 systemd[410627]: Reached target Timers.
Oct 02 13:15:58 compute-0 systemd[410627]: Starting D-Bus User Message Bus Socket...
Oct 02 13:15:58 compute-0 systemd[410627]: Starting Create User's Volatile Files and Directories...
Oct 02 13:15:58 compute-0 systemd[410627]: Listening on D-Bus User Message Bus Socket.
Oct 02 13:15:58 compute-0 systemd[410627]: Reached target Sockets.
Oct 02 13:15:58 compute-0 systemd[410627]: Finished Create User's Volatile Files and Directories.
Oct 02 13:15:58 compute-0 systemd[410627]: Reached target Basic System.
Oct 02 13:15:58 compute-0 systemd[410627]: Reached target Main User Target.
Oct 02 13:15:58 compute-0 systemd[410627]: Startup finished in 155ms.
Oct 02 13:15:58 compute-0 systemd[1]: Started User Manager for UID 42436.
Oct 02 13:15:58 compute-0 systemd[1]: Started Session 75 of User nova.
Oct 02 13:15:58 compute-0 sshd-session[410623]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 13:15:58 compute-0 sshd-session[410642]: Received disconnect from 192.168.122.101 port 57556:11: disconnected by user
Oct 02 13:15:58 compute-0 sshd-session[410642]: Disconnected from user nova 192.168.122.101 port 57556
Oct 02 13:15:58 compute-0 sshd-session[410623]: pam_unix(sshd:session): session closed for user nova
Oct 02 13:15:58 compute-0 systemd[1]: session-75.scope: Deactivated successfully.
Oct 02 13:15:58 compute-0 systemd-logind[789]: Session 75 logged out. Waiting for processes to exit.
Oct 02 13:15:58 compute-0 systemd-logind[789]: Removed session 75.
Oct 02 13:15:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:58.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:58 compute-0 sshd-session[410644]: Accepted publickey for nova from 192.168.122.101 port 57570 ssh2: ECDSA SHA256:RlBMWn3An7DGjBe9yfwGQtrEA9dOakLcJHFiZKvkVOc
Oct 02 13:15:58 compute-0 systemd-logind[789]: New session 77 of user nova.
Oct 02 13:15:58 compute-0 systemd[1]: Started Session 77 of User nova.
Oct 02 13:15:58 compute-0 sshd-session[410644]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 13:15:58 compute-0 sshd-session[410647]: Received disconnect from 192.168.122.101 port 57570:11: disconnected by user
Oct 02 13:15:58 compute-0 sshd-session[410647]: Disconnected from user nova 192.168.122.101 port 57570
Oct 02 13:15:58 compute-0 sshd-session[410644]: pam_unix(sshd:session): session closed for user nova
Oct 02 13:15:58 compute-0 systemd[1]: session-77.scope: Deactivated successfully.
Oct 02 13:15:58 compute-0 systemd-logind[789]: Session 77 logged out. Waiting for processes to exit.
Oct 02 13:15:58 compute-0 systemd-logind[789]: Removed session 77.
Oct 02 13:15:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3546: 305 pgs: 305 active+clean; 299 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 593 KiB/s wr, 63 op/s
Oct 02 13:15:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:15:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:59.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:59 compute-0 ceph-mon[73607]: pgmap v3546: 305 pgs: 305 active+clean; 299 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 593 KiB/s wr, 63 op/s
Oct 02 13:16:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:00.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:00 compute-0 nova_compute[257802]: 2025-10-02 13:16:00.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3547: 305 pgs: 305 active+clean; 299 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 542 KiB/s wr, 63 op/s
Oct 02 13:16:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:01 compute-0 nova_compute[257802]: 2025-10-02 13:16:01.338 2 DEBUG nova.compute.manager [req-823f8ec7-2130-4e1c-b37b-e59240d1e3a3 req-5a41fdaf-ee54-4c98-802b-d800d7d2a215 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Received event network-vif-unplugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:16:01 compute-0 nova_compute[257802]: 2025-10-02 13:16:01.339 2 DEBUG oslo_concurrency.lockutils [req-823f8ec7-2130-4e1c-b37b-e59240d1e3a3 req-5a41fdaf-ee54-4c98-802b-d800d7d2a215 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:16:01 compute-0 nova_compute[257802]: 2025-10-02 13:16:01.339 2 DEBUG oslo_concurrency.lockutils [req-823f8ec7-2130-4e1c-b37b-e59240d1e3a3 req-5a41fdaf-ee54-4c98-802b-d800d7d2a215 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:16:01 compute-0 nova_compute[257802]: 2025-10-02 13:16:01.339 2 DEBUG oslo_concurrency.lockutils [req-823f8ec7-2130-4e1c-b37b-e59240d1e3a3 req-5a41fdaf-ee54-4c98-802b-d800d7d2a215 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:16:01 compute-0 nova_compute[257802]: 2025-10-02 13:16:01.339 2 DEBUG nova.compute.manager [req-823f8ec7-2130-4e1c-b37b-e59240d1e3a3 req-5a41fdaf-ee54-4c98-802b-d800d7d2a215 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] No waiting events found dispatching network-vif-unplugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:16:01 compute-0 nova_compute[257802]: 2025-10-02 13:16:01.339 2 WARNING nova.compute.manager [req-823f8ec7-2130-4e1c-b37b-e59240d1e3a3 req-5a41fdaf-ee54-4c98-802b-d800d7d2a215 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Received unexpected event network-vif-unplugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e for instance with vm_state active and task_state resize_migrating.
Oct 02 13:16:01 compute-0 nova_compute[257802]: 2025-10-02 13:16:01.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:01.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:01 compute-0 ceph-mon[73607]: pgmap v3547: 305 pgs: 305 active+clean; 299 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 542 KiB/s wr, 63 op/s
Oct 02 13:16:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1743447586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:02 compute-0 nova_compute[257802]: 2025-10-02 13:16:02.060 2 INFO nova.network.neutron [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Updating port c7bc4d7b-3f13-4f9c-be12-6edcff4d424e with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 02 13:16:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:02.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:02 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:02.552 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '87'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:16:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3548: 305 pgs: 305 active+clean; 299 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 326 KiB/s rd, 104 KiB/s wr, 27 op/s
Oct 02 13:16:02 compute-0 podman[410652]: 2025-10-02 13:16:02.935674522 +0000 UTC m=+0.077636815 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 13:16:03 compute-0 nova_compute[257802]: 2025-10-02 13:16:03.136 2 DEBUG oslo_concurrency.lockutils [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "refresh_cache-0a899aee-b0ae-4350-9f56-446d42ef77d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:16:03 compute-0 nova_compute[257802]: 2025-10-02 13:16:03.136 2 DEBUG oslo_concurrency.lockutils [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquired lock "refresh_cache-0a899aee-b0ae-4350-9f56-446d42ef77d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:16:03 compute-0 nova_compute[257802]: 2025-10-02 13:16:03.137 2 DEBUG nova.network.neutron [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:16:03 compute-0 nova_compute[257802]: 2025-10-02 13:16:03.266 2 DEBUG nova.compute.manager [req-02370929-5313-4a99-9f23-5ebe9e784610 req-fd118cae-5a6b-43f8-9984-69e2adc2c541 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Received event network-changed-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:16:03 compute-0 nova_compute[257802]: 2025-10-02 13:16:03.266 2 DEBUG nova.compute.manager [req-02370929-5313-4a99-9f23-5ebe9e784610 req-fd118cae-5a6b-43f8-9984-69e2adc2c541 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Refreshing instance network info cache due to event network-changed-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:16:03 compute-0 nova_compute[257802]: 2025-10-02 13:16:03.266 2 DEBUG oslo_concurrency.lockutils [req-02370929-5313-4a99-9f23-5ebe9e784610 req-fd118cae-5a6b-43f8-9984-69e2adc2c541 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-0a899aee-b0ae-4350-9f56-446d42ef77d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:16:03 compute-0 nova_compute[257802]: 2025-10-02 13:16:03.493 2 DEBUG nova.compute.manager [req-30be438f-210c-4a5c-b098-fd9f2c2bbd76 req-46238b3a-f983-4003-a30b-4dad309eda45 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Received event network-vif-plugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:16:03 compute-0 nova_compute[257802]: 2025-10-02 13:16:03.494 2 DEBUG oslo_concurrency.lockutils [req-30be438f-210c-4a5c-b098-fd9f2c2bbd76 req-46238b3a-f983-4003-a30b-4dad309eda45 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:16:03 compute-0 nova_compute[257802]: 2025-10-02 13:16:03.494 2 DEBUG oslo_concurrency.lockutils [req-30be438f-210c-4a5c-b098-fd9f2c2bbd76 req-46238b3a-f983-4003-a30b-4dad309eda45 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:16:03 compute-0 nova_compute[257802]: 2025-10-02 13:16:03.494 2 DEBUG oslo_concurrency.lockutils [req-30be438f-210c-4a5c-b098-fd9f2c2bbd76 req-46238b3a-f983-4003-a30b-4dad309eda45 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:16:03 compute-0 nova_compute[257802]: 2025-10-02 13:16:03.494 2 DEBUG nova.compute.manager [req-30be438f-210c-4a5c-b098-fd9f2c2bbd76 req-46238b3a-f983-4003-a30b-4dad309eda45 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] No waiting events found dispatching network-vif-plugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:16:03 compute-0 nova_compute[257802]: 2025-10-02 13:16:03.494 2 WARNING nova.compute.manager [req-30be438f-210c-4a5c-b098-fd9f2c2bbd76 req-46238b3a-f983-4003-a30b-4dad309eda45 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Received unexpected event network-vif-plugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e for instance with vm_state active and task_state resize_migrated.
Oct 02 13:16:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:16:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:03.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:16:03 compute-0 ceph-mon[73607]: pgmap v3548: 305 pgs: 305 active+clean; 299 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 326 KiB/s rd, 104 KiB/s wr, 27 op/s
Oct 02 13:16:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/498093854' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:16:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/498093854' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:16:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:04.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:04 compute-0 nova_compute[257802]: 2025-10-02 13:16:04.386 2 DEBUG nova.network.neutron [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Updating instance_info_cache with network_info: [{"id": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "address": "fa:16:3e:c7:1d:48", "network": {"id": "faf67f78-db61-49af-8b59-741a7fc7cb4c", "bridge": "br-int", "label": "tempest-network-smoke--1680552744", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7bc4d7b-3f", "ovs_interfaceid": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:16:04 compute-0 nova_compute[257802]: 2025-10-02 13:16:04.433 2 DEBUG oslo_concurrency.lockutils [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Releasing lock "refresh_cache-0a899aee-b0ae-4350-9f56-446d42ef77d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:16:04 compute-0 nova_compute[257802]: 2025-10-02 13:16:04.440 2 DEBUG oslo_concurrency.lockutils [req-02370929-5313-4a99-9f23-5ebe9e784610 req-fd118cae-5a6b-43f8-9984-69e2adc2c541 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-0a899aee-b0ae-4350-9f56-446d42ef77d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:16:04 compute-0 nova_compute[257802]: 2025-10-02 13:16:04.441 2 DEBUG nova.network.neutron [req-02370929-5313-4a99-9f23-5ebe9e784610 req-fd118cae-5a6b-43f8-9984-69e2adc2c541 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Refreshing network info cache for port c7bc4d7b-3f13-4f9c-be12-6edcff4d424e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:16:04 compute-0 nova_compute[257802]: 2025-10-02 13:16:04.580 2 DEBUG nova.virt.libvirt.driver [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Oct 02 13:16:04 compute-0 nova_compute[257802]: 2025-10-02 13:16:04.583 2 DEBUG nova.virt.libvirt.driver [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 13:16:04 compute-0 nova_compute[257802]: 2025-10-02 13:16:04.584 2 INFO nova.virt.libvirt.driver [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Creating image(s)
Oct 02 13:16:04 compute-0 nova_compute[257802]: 2025-10-02 13:16:04.645 2 DEBUG nova.storage.rbd_utils [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] creating snapshot(nova-resize) on rbd image(0a899aee-b0ae-4350-9f56-446d42ef77d2_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 13:16:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3549: 305 pgs: 305 active+clean; 289 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 329 KiB/s rd, 119 KiB/s wr, 33 op/s
Oct 02 13:16:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e426 do_prune osdmap full prune enabled
Oct 02 13:16:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e427 e427: 3 total, 3 up, 3 in
Oct 02 13:16:05 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e427: 3 total, 3 up, 3 in
Oct 02 13:16:05 compute-0 nova_compute[257802]: 2025-10-02 13:16:05.086 2 DEBUG nova.objects.instance [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'trusted_certs' on Instance uuid 0a899aee-b0ae-4350-9f56-446d42ef77d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:16:05 compute-0 nova_compute[257802]: 2025-10-02 13:16:05.223 2 DEBUG nova.virt.libvirt.driver [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 13:16:05 compute-0 nova_compute[257802]: 2025-10-02 13:16:05.223 2 DEBUG nova.virt.libvirt.driver [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Ensure instance console log exists: /var/lib/nova/instances/0a899aee-b0ae-4350-9f56-446d42ef77d2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:16:05 compute-0 nova_compute[257802]: 2025-10-02 13:16:05.224 2 DEBUG oslo_concurrency.lockutils [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:16:05 compute-0 nova_compute[257802]: 2025-10-02 13:16:05.224 2 DEBUG oslo_concurrency.lockutils [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:16:05 compute-0 nova_compute[257802]: 2025-10-02 13:16:05.224 2 DEBUG oslo_concurrency.lockutils [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:16:05 compute-0 nova_compute[257802]: 2025-10-02 13:16:05.227 2 DEBUG nova.virt.libvirt.driver [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Start _get_guest_xml network_info=[{"id": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "address": "fa:16:3e:c7:1d:48", "network": {"id": "faf67f78-db61-49af-8b59-741a7fc7cb4c", "bridge": "br-int", "label": "tempest-network-smoke--1680552744", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1680552744", "vif_mac": "fa:16:3e:c7:1d:48"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7bc4d7b-3f", "ovs_interfaceid": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:16:05 compute-0 nova_compute[257802]: 2025-10-02 13:16:05.232 2 WARNING nova.virt.libvirt.driver [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:16:05 compute-0 nova_compute[257802]: 2025-10-02 13:16:05.240 2 DEBUG nova.virt.libvirt.host [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:16:05 compute-0 nova_compute[257802]: 2025-10-02 13:16:05.241 2 DEBUG nova.virt.libvirt.host [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:16:05 compute-0 nova_compute[257802]: 2025-10-02 13:16:05.246 2 DEBUG nova.virt.libvirt.host [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:16:05 compute-0 nova_compute[257802]: 2025-10-02 13:16:05.247 2 DEBUG nova.virt.libvirt.host [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:16:05 compute-0 nova_compute[257802]: 2025-10-02 13:16:05.248 2 DEBUG nova.virt.libvirt.driver [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:16:05 compute-0 nova_compute[257802]: 2025-10-02 13:16:05.248 2 DEBUG nova.virt.hardware [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:39Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='eb3a53f1-304b-4cb0-acc3-abffce0fb181',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:16:05 compute-0 nova_compute[257802]: 2025-10-02 13:16:05.248 2 DEBUG nova.virt.hardware [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:16:05 compute-0 nova_compute[257802]: 2025-10-02 13:16:05.249 2 DEBUG nova.virt.hardware [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:16:05 compute-0 nova_compute[257802]: 2025-10-02 13:16:05.249 2 DEBUG nova.virt.hardware [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:16:05 compute-0 nova_compute[257802]: 2025-10-02 13:16:05.249 2 DEBUG nova.virt.hardware [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:16:05 compute-0 nova_compute[257802]: 2025-10-02 13:16:05.249 2 DEBUG nova.virt.hardware [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:16:05 compute-0 nova_compute[257802]: 2025-10-02 13:16:05.249 2 DEBUG nova.virt.hardware [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:16:05 compute-0 nova_compute[257802]: 2025-10-02 13:16:05.250 2 DEBUG nova.virt.hardware [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:16:05 compute-0 nova_compute[257802]: 2025-10-02 13:16:05.250 2 DEBUG nova.virt.hardware [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:16:05 compute-0 nova_compute[257802]: 2025-10-02 13:16:05.250 2 DEBUG nova.virt.hardware [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:16:05 compute-0 nova_compute[257802]: 2025-10-02 13:16:05.250 2 DEBUG nova.virt.hardware [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:16:05 compute-0 nova_compute[257802]: 2025-10-02 13:16:05.251 2 DEBUG nova.objects.instance [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0a899aee-b0ae-4350-9f56-446d42ef77d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:16:05 compute-0 nova_compute[257802]: 2025-10-02 13:16:05.269 2 DEBUG oslo_concurrency.processutils [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:16:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:16:05 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4116494503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:16:05 compute-0 nova_compute[257802]: 2025-10-02 13:16:05.721 2 DEBUG oslo_concurrency.processutils [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:16:05 compute-0 nova_compute[257802]: 2025-10-02 13:16:05.767 2 DEBUG oslo_concurrency.processutils [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:16:05 compute-0 nova_compute[257802]: 2025-10-02 13:16:05.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:16:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:05.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:16:06 compute-0 ceph-mon[73607]: pgmap v3549: 305 pgs: 305 active+clean; 289 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 329 KiB/s rd, 119 KiB/s wr, 33 op/s
Oct 02 13:16:06 compute-0 ceph-mon[73607]: osdmap e427: 3 total, 3 up, 3 in
Oct 02 13:16:06 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4116494503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:16:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:16:06 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/91716838' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:16:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:06 compute-0 nova_compute[257802]: 2025-10-02 13:16:06.227 2 DEBUG oslo_concurrency.processutils [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:16:06 compute-0 nova_compute[257802]: 2025-10-02 13:16:06.229 2 DEBUG nova.virt.libvirt.vif [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:15:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1996915229',display_name='tempest-TestNetworkAdvancedServerOps-server-1996915229',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1996915229',id=221,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDRuo4KdXDZKQ3oIyyNSXE1rik8EaVzgz9MZA+kI2JiH48qHMFlLWYrWwJzNWAoOqZLFOIMkCOlFgIqDZkrdOzURcb4pwQu8clr+WCDZQoFdB5ndJaWWZy3VHIv+OtGSTw==',key_name='tempest-TestNetworkAdvancedServerOps-1841926110',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:15:29Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-di1mdv1g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:16:01Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=0a899aee-b0ae-4350-9f56-446d42ef77d2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "address": "fa:16:3e:c7:1d:48", "network": {"id": "faf67f78-db61-49af-8b59-741a7fc7cb4c", "bridge": "br-int", "label": "tempest-network-smoke--1680552744", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1680552744", "vif_mac": "fa:16:3e:c7:1d:48"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7bc4d7b-3f", "ovs_interfaceid": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:16:06 compute-0 nova_compute[257802]: 2025-10-02 13:16:06.229 2 DEBUG nova.network.os_vif_util [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "address": "fa:16:3e:c7:1d:48", "network": {"id": "faf67f78-db61-49af-8b59-741a7fc7cb4c", "bridge": "br-int", "label": "tempest-network-smoke--1680552744", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1680552744", "vif_mac": "fa:16:3e:c7:1d:48"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7bc4d7b-3f", "ovs_interfaceid": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:16:06 compute-0 nova_compute[257802]: 2025-10-02 13:16:06.230 2 DEBUG nova.network.os_vif_util [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:1d:48,bridge_name='br-int',has_traffic_filtering=True,id=c7bc4d7b-3f13-4f9c-be12-6edcff4d424e,network=Network(faf67f78-db61-49af-8b59-741a7fc7cb4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7bc4d7b-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:16:06 compute-0 nova_compute[257802]: 2025-10-02 13:16:06.233 2 DEBUG nova.virt.libvirt.driver [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:16:06 compute-0 nova_compute[257802]:   <uuid>0a899aee-b0ae-4350-9f56-446d42ef77d2</uuid>
Oct 02 13:16:06 compute-0 nova_compute[257802]:   <name>instance-000000dd</name>
Oct 02 13:16:06 compute-0 nova_compute[257802]:   <memory>196608</memory>
Oct 02 13:16:06 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 13:16:06 compute-0 nova_compute[257802]:   <metadata>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:16:06 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1996915229</nova:name>
Oct 02 13:16:06 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 13:16:05</nova:creationTime>
Oct 02 13:16:06 compute-0 nova_compute[257802]:       <nova:flavor name="m1.micro">
Oct 02 13:16:06 compute-0 nova_compute[257802]:         <nova:memory>192</nova:memory>
Oct 02 13:16:06 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 13:16:06 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 13:16:06 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:16:06 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:16:06 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 13:16:06 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 13:16:06 compute-0 nova_compute[257802]:         <nova:user uuid="ffe4d737e4414fb3a3e358f8ca3f3e1e">tempest-TestNetworkAdvancedServerOps-1527846432-project-member</nova:user>
Oct 02 13:16:06 compute-0 nova_compute[257802]:         <nova:project uuid="08e102ae48244af2ab448a2e1ff757df">tempest-TestNetworkAdvancedServerOps-1527846432</nova:project>
Oct 02 13:16:06 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 13:16:06 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 13:16:06 compute-0 nova_compute[257802]:         <nova:port uuid="c7bc4d7b-3f13-4f9c-be12-6edcff4d424e">
Oct 02 13:16:06 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 13:16:06 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 13:16:06 compute-0 nova_compute[257802]:   </metadata>
Oct 02 13:16:06 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <system>
Oct 02 13:16:06 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:16:06 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:16:06 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:16:06 compute-0 nova_compute[257802]:       <entry name="serial">0a899aee-b0ae-4350-9f56-446d42ef77d2</entry>
Oct 02 13:16:06 compute-0 nova_compute[257802]:       <entry name="uuid">0a899aee-b0ae-4350-9f56-446d42ef77d2</entry>
Oct 02 13:16:06 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     </system>
Oct 02 13:16:06 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 13:16:06 compute-0 nova_compute[257802]:   <os>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:   </os>
Oct 02 13:16:06 compute-0 nova_compute[257802]:   <features>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <apic/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:   </features>
Oct 02 13:16:06 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:   </clock>
Oct 02 13:16:06 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:   </cpu>
Oct 02 13:16:06 compute-0 nova_compute[257802]:   <devices>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 13:16:06 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/0a899aee-b0ae-4350-9f56-446d42ef77d2_disk">
Oct 02 13:16:06 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:       </source>
Oct 02 13:16:06 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 13:16:06 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:       </auth>
Oct 02 13:16:06 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     </disk>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 13:16:06 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/0a899aee-b0ae-4350-9f56-446d42ef77d2_disk.config">
Oct 02 13:16:06 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:       </source>
Oct 02 13:16:06 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 13:16:06 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:       </auth>
Oct 02 13:16:06 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     </disk>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 13:16:06 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:c7:1d:48"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:       <target dev="tapc7bc4d7b-3f"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     </interface>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 13:16:06 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/0a899aee-b0ae-4350-9f56-446d42ef77d2/console.log" append="off"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     </serial>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <video>
Oct 02 13:16:06 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     </video>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 13:16:06 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     </rng>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 13:16:06 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 13:16:06 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 13:16:06 compute-0 nova_compute[257802]:   </devices>
Oct 02 13:16:06 compute-0 nova_compute[257802]: </domain>
Oct 02 13:16:06 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:16:06 compute-0 nova_compute[257802]: 2025-10-02 13:16:06.235 2 DEBUG nova.virt.libvirt.vif [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:15:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1996915229',display_name='tempest-TestNetworkAdvancedServerOps-server-1996915229',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1996915229',id=221,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDRuo4KdXDZKQ3oIyyNSXE1rik8EaVzgz9MZA+kI2JiH48qHMFlLWYrWwJzNWAoOqZLFOIMkCOlFgIqDZkrdOzURcb4pwQu8clr+WCDZQoFdB5ndJaWWZy3VHIv+OtGSTw==',key_name='tempest-TestNetworkAdvancedServerOps-1841926110',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:15:29Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-di1mdv1g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:16:01Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=0a899aee-b0ae-4350-9f56-446d42ef77d2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "address": "fa:16:3e:c7:1d:48", "network": {"id": "faf67f78-db61-49af-8b59-741a7fc7cb4c", "bridge": "br-int", "label": "tempest-network-smoke--1680552744", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1680552744", "vif_mac": "fa:16:3e:c7:1d:48"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7bc4d7b-3f", "ovs_interfaceid": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:16:06 compute-0 nova_compute[257802]: 2025-10-02 13:16:06.236 2 DEBUG nova.network.os_vif_util [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "address": "fa:16:3e:c7:1d:48", "network": {"id": "faf67f78-db61-49af-8b59-741a7fc7cb4c", "bridge": "br-int", "label": "tempest-network-smoke--1680552744", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1680552744", "vif_mac": "fa:16:3e:c7:1d:48"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7bc4d7b-3f", "ovs_interfaceid": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:16:06 compute-0 nova_compute[257802]: 2025-10-02 13:16:06.238 2 DEBUG nova.network.os_vif_util [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:1d:48,bridge_name='br-int',has_traffic_filtering=True,id=c7bc4d7b-3f13-4f9c-be12-6edcff4d424e,network=Network(faf67f78-db61-49af-8b59-741a7fc7cb4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7bc4d7b-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:16:06 compute-0 nova_compute[257802]: 2025-10-02 13:16:06.239 2 DEBUG os_vif [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:1d:48,bridge_name='br-int',has_traffic_filtering=True,id=c7bc4d7b-3f13-4f9c-be12-6edcff4d424e,network=Network(faf67f78-db61-49af-8b59-741a7fc7cb4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7bc4d7b-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:16:06 compute-0 nova_compute[257802]: 2025-10-02 13:16:06.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:06 compute-0 nova_compute[257802]: 2025-10-02 13:16:06.241 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:16:06 compute-0 nova_compute[257802]: 2025-10-02 13:16:06.241 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:16:06 compute-0 nova_compute[257802]: 2025-10-02 13:16:06.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:06 compute-0 nova_compute[257802]: 2025-10-02 13:16:06.246 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc7bc4d7b-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:16:06 compute-0 nova_compute[257802]: 2025-10-02 13:16:06.247 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc7bc4d7b-3f, col_values=(('external_ids', {'iface-id': 'c7bc4d7b-3f13-4f9c-be12-6edcff4d424e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:1d:48', 'vm-uuid': '0a899aee-b0ae-4350-9f56-446d42ef77d2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:16:06 compute-0 nova_compute[257802]: 2025-10-02 13:16:06.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:06 compute-0 NetworkManager[44987]: <info>  [1759410966.2495] manager: (tapc7bc4d7b-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/453)
Oct 02 13:16:06 compute-0 nova_compute[257802]: 2025-10-02 13:16:06.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:16:06 compute-0 nova_compute[257802]: 2025-10-02 13:16:06.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:06 compute-0 nova_compute[257802]: 2025-10-02 13:16:06.256 2 INFO os_vif [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:1d:48,bridge_name='br-int',has_traffic_filtering=True,id=c7bc4d7b-3f13-4f9c-be12-6edcff4d424e,network=Network(faf67f78-db61-49af-8b59-741a7fc7cb4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7bc4d7b-3f')
Oct 02 13:16:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:06.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:06 compute-0 nova_compute[257802]: 2025-10-02 13:16:06.529 2 DEBUG nova.virt.libvirt.driver [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:16:06 compute-0 nova_compute[257802]: 2025-10-02 13:16:06.530 2 DEBUG nova.virt.libvirt.driver [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:16:06 compute-0 nova_compute[257802]: 2025-10-02 13:16:06.530 2 DEBUG nova.virt.libvirt.driver [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] No VIF found with MAC fa:16:3e:c7:1d:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:16:06 compute-0 nova_compute[257802]: 2025-10-02 13:16:06.531 2 INFO nova.virt.libvirt.driver [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Using config drive
Oct 02 13:16:06 compute-0 kernel: tapc7bc4d7b-3f: entered promiscuous mode
Oct 02 13:16:06 compute-0 NetworkManager[44987]: <info>  [1759410966.6086] manager: (tapc7bc4d7b-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/454)
Oct 02 13:16:06 compute-0 nova_compute[257802]: 2025-10-02 13:16:06.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:06 compute-0 nova_compute[257802]: 2025-10-02 13:16:06.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:06 compute-0 ovn_controller[148183]: 2025-10-02T13:16:06Z|00996|binding|INFO|Claiming lport c7bc4d7b-3f13-4f9c-be12-6edcff4d424e for this chassis.
Oct 02 13:16:06 compute-0 ovn_controller[148183]: 2025-10-02T13:16:06Z|00997|binding|INFO|c7bc4d7b-3f13-4f9c-be12-6edcff4d424e: Claiming fa:16:3e:c7:1d:48 10.100.0.7
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:06.623 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:1d:48 10.100.0.7'], port_security=['fa:16:3e:c7:1d:48 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '0a899aee-b0ae-4350-9f56-446d42ef77d2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-faf67f78-db61-49af-8b59-741a7fc7cb4c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08e102ae48244af2ab448a2e1ff757df', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'bd81072c-a0ca-4da6-a274-e2604cd954b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.232'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8837441d-fb89-46a9-8aa6-bfdcde8d99f9, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=c7bc4d7b-3f13-4f9c-be12-6edcff4d424e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:06.624 158261 INFO neutron.agent.ovn.metadata.agent [-] Port c7bc4d7b-3f13-4f9c-be12-6edcff4d424e in datapath faf67f78-db61-49af-8b59-741a7fc7cb4c bound to our chassis
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:06.626 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network faf67f78-db61-49af-8b59-741a7fc7cb4c
Oct 02 13:16:06 compute-0 ovn_controller[148183]: 2025-10-02T13:16:06Z|00998|binding|INFO|Setting lport c7bc4d7b-3f13-4f9c-be12-6edcff4d424e ovn-installed in OVS
Oct 02 13:16:06 compute-0 ovn_controller[148183]: 2025-10-02T13:16:06Z|00999|binding|INFO|Setting lport c7bc4d7b-3f13-4f9c-be12-6edcff4d424e up in Southbound
Oct 02 13:16:06 compute-0 nova_compute[257802]: 2025-10-02 13:16:06.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:06 compute-0 nova_compute[257802]: 2025-10-02 13:16:06.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:06.637 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f08419ed-66de-45e9-ad29-7539688c10e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:06.638 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfaf67f78-d1 in ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:06.640 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfaf67f78-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:06.640 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[909853f9-06fd-4ddf-9953-d3d0eec922ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:06.640 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f2509119-3894-4e07-9551-b6af018a5bf0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:06 compute-0 systemd-udevd[410845]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:16:06 compute-0 systemd-machined[211836]: New machine qemu-107-instance-000000dd.
Oct 02 13:16:06 compute-0 NetworkManager[44987]: <info>  [1759410966.6568] device (tapc7bc4d7b-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:16:06 compute-0 NetworkManager[44987]: <info>  [1759410966.6590] device (tapc7bc4d7b-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:06.659 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[3e74609b-673d-4299-b3c1-67b919e682f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:06 compute-0 systemd[1]: Started Virtual Machine qemu-107-instance-000000dd.
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:06.682 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b7e9f4e4-4aa5-4d3f-93ae-6f0dff6ad659]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:06.709 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[38cbbbe4-cd80-4497-8180-25ab0af27987]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:06 compute-0 NetworkManager[44987]: <info>  [1759410966.7158] manager: (tapfaf67f78-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/455)
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:06.715 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[99fc64d4-34ae-493b-934e-775eda9b4e71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:06.751 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[344bf9bd-bae1-4921-b7f7-4aa5e17fe406]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:06.754 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[fbff5c56-13a5-470d-abd0-c6f66c5aeff0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:06 compute-0 NetworkManager[44987]: <info>  [1759410966.7751] device (tapfaf67f78-d0): carrier: link connected
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:06.781 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[6d04fe68-f905-43b4-b8cb-ac0f169ac446]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:06.797 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[07e4fcb0-8f3f-4d2e-87df-5d4ca110c059]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfaf67f78-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:4b:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 299], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 903440, 'reachable_time': 40448, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 410879, 'error': None, 'target': 'ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:06.811 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[72d4164d-9a91-4922-b77d-0de3a8c2284f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5d:4b6c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 903440, 'tstamp': 903440}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 410880, 'error': None, 'target': 'ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:06.825 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8efca871-6a29-42b3-888f-70a72a201a7c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfaf67f78-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:4b:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 299], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 903440, 'reachable_time': 40448, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 410881, 'error': None, 'target': 'ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3551: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 285 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 285 KiB/s rd, 45 KiB/s wr, 54 op/s
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:06.853 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1af9499d-5b8d-4a92-8055-8f18ec9aefcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:06.901 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[06075088-9bdc-4456-9e10-9c0a56043c20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:06.902 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfaf67f78-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:06.903 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:06.903 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfaf67f78-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:16:06 compute-0 kernel: tapfaf67f78-d0: entered promiscuous mode
Oct 02 13:16:06 compute-0 NetworkManager[44987]: <info>  [1759410966.9058] manager: (tapfaf67f78-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/456)
Oct 02 13:16:06 compute-0 nova_compute[257802]: 2025-10-02 13:16:06.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:06.908 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfaf67f78-d0, col_values=(('external_ids', {'iface-id': '76911ddd-98dc-48ab-a192-244a4cada461'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:16:06 compute-0 ovn_controller[148183]: 2025-10-02T13:16:06Z|01000|binding|INFO|Releasing lport 76911ddd-98dc-48ab-a192-244a4cada461 from this chassis (sb_readonly=0)
Oct 02 13:16:06 compute-0 nova_compute[257802]: 2025-10-02 13:16:06.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:06.923 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/faf67f78-db61-49af-8b59-741a7fc7cb4c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/faf67f78-db61-49af-8b59-741a7fc7cb4c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:06.924 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[246e339f-34fe-46ce-ac57-30385c4d3feb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:06.925 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: global
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-faf67f78-db61-49af-8b59-741a7fc7cb4c
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/faf67f78-db61-49af-8b59-741a7fc7cb4c.pid.haproxy
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID faf67f78-db61-49af-8b59-741a7fc7cb4c
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:16:06 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:06.926 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c', 'env', 'PROCESS_TAG=haproxy-faf67f78-db61-49af-8b59-741a7fc7cb4c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/faf67f78-db61-49af-8b59-741a7fc7cb4c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:16:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/91716838' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:16:07 compute-0 ceph-mon[73607]: pgmap v3551: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 285 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 285 KiB/s rd, 45 KiB/s wr, 54 op/s
Oct 02 13:16:07 compute-0 nova_compute[257802]: 2025-10-02 13:16:07.204 2 DEBUG nova.network.neutron [req-02370929-5313-4a99-9f23-5ebe9e784610 req-fd118cae-5a6b-43f8-9984-69e2adc2c541 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Updated VIF entry in instance network info cache for port c7bc4d7b-3f13-4f9c-be12-6edcff4d424e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:16:07 compute-0 nova_compute[257802]: 2025-10-02 13:16:07.205 2 DEBUG nova.network.neutron [req-02370929-5313-4a99-9f23-5ebe9e784610 req-fd118cae-5a6b-43f8-9984-69e2adc2c541 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Updating instance_info_cache with network_info: [{"id": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "address": "fa:16:3e:c7:1d:48", "network": {"id": "faf67f78-db61-49af-8b59-741a7fc7cb4c", "bridge": "br-int", "label": "tempest-network-smoke--1680552744", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7bc4d7b-3f", "ovs_interfaceid": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:16:07 compute-0 nova_compute[257802]: 2025-10-02 13:16:07.223 2 DEBUG oslo_concurrency.lockutils [req-02370929-5313-4a99-9f23-5ebe9e784610 req-fd118cae-5a6b-43f8-9984-69e2adc2c541 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-0a899aee-b0ae-4350-9f56-446d42ef77d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:16:07 compute-0 podman[410911]: 2025-10-02 13:16:07.308411877 +0000 UTC m=+0.073719899 container create a2411d50b722aa89d2f4e56fbcbc0388e32ba21eb487fa8d6c9189f959fc2218 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:16:07 compute-0 systemd[1]: Started libpod-conmon-a2411d50b722aa89d2f4e56fbcbc0388e32ba21eb487fa8d6c9189f959fc2218.scope.
Oct 02 13:16:07 compute-0 podman[410911]: 2025-10-02 13:16:07.258152985 +0000 UTC m=+0.023461047 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:16:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7af141870c3c5e9c888c73f9e873b3d0f3b3f7ceef67abf371d60645ed5b1a0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:16:07 compute-0 podman[410911]: 2025-10-02 13:16:07.374760234 +0000 UTC m=+0.140068276 container init a2411d50b722aa89d2f4e56fbcbc0388e32ba21eb487fa8d6c9189f959fc2218 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 13:16:07 compute-0 podman[410911]: 2025-10-02 13:16:07.379743876 +0000 UTC m=+0.145051888 container start a2411d50b722aa89d2f4e56fbcbc0388e32ba21eb487fa8d6c9189f959fc2218 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 02 13:16:07 compute-0 nova_compute[257802]: 2025-10-02 13:16:07.402 2 DEBUG nova.compute.manager [req-3afaf3af-88c2-408f-b893-58d0d856e661 req-2fff9fa8-7b81-415e-b589-406142bf8ca3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Received event network-changed-0fe78275-ad8d-4e77-a0e7-503702bf1242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:16:07 compute-0 nova_compute[257802]: 2025-10-02 13:16:07.402 2 DEBUG nova.compute.manager [req-3afaf3af-88c2-408f-b893-58d0d856e661 req-2fff9fa8-7b81-415e-b589-406142bf8ca3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Refreshing instance network info cache due to event network-changed-0fe78275-ad8d-4e77-a0e7-503702bf1242. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:16:07 compute-0 nova_compute[257802]: 2025-10-02 13:16:07.402 2 DEBUG oslo_concurrency.lockutils [req-3afaf3af-88c2-408f-b893-58d0d856e661 req-2fff9fa8-7b81-415e-b589-406142bf8ca3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-bfb3e97d-8923-4293-b4ae-a37019fe104c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:16:07 compute-0 nova_compute[257802]: 2025-10-02 13:16:07.402 2 DEBUG oslo_concurrency.lockutils [req-3afaf3af-88c2-408f-b893-58d0d856e661 req-2fff9fa8-7b81-415e-b589-406142bf8ca3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-bfb3e97d-8923-4293-b4ae-a37019fe104c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:16:07 compute-0 nova_compute[257802]: 2025-10-02 13:16:07.403 2 DEBUG nova.network.neutron [req-3afaf3af-88c2-408f-b893-58d0d856e661 req-2fff9fa8-7b81-415e-b589-406142bf8ca3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Refreshing network info cache for port 0fe78275-ad8d-4e77-a0e7-503702bf1242 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:16:07 compute-0 neutron-haproxy-ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c[410926]: [NOTICE]   (410930) : New worker (410932) forked
Oct 02 13:16:07 compute-0 neutron-haproxy-ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c[410926]: [NOTICE]   (410930) : Loading success.
Oct 02 13:16:07 compute-0 nova_compute[257802]: 2025-10-02 13:16:07.539 2 DEBUG oslo_concurrency.lockutils [None req-67ef9343-cfde-453c-8a8a-d05196dcbab2 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Acquiring lock "bfb3e97d-8923-4293-b4ae-a37019fe104c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:16:07 compute-0 nova_compute[257802]: 2025-10-02 13:16:07.540 2 DEBUG oslo_concurrency.lockutils [None req-67ef9343-cfde-453c-8a8a-d05196dcbab2 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "bfb3e97d-8923-4293-b4ae-a37019fe104c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:16:07 compute-0 nova_compute[257802]: 2025-10-02 13:16:07.540 2 DEBUG oslo_concurrency.lockutils [None req-67ef9343-cfde-453c-8a8a-d05196dcbab2 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Acquiring lock "bfb3e97d-8923-4293-b4ae-a37019fe104c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:16:07 compute-0 nova_compute[257802]: 2025-10-02 13:16:07.540 2 DEBUG oslo_concurrency.lockutils [None req-67ef9343-cfde-453c-8a8a-d05196dcbab2 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "bfb3e97d-8923-4293-b4ae-a37019fe104c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:16:07 compute-0 nova_compute[257802]: 2025-10-02 13:16:07.540 2 DEBUG oslo_concurrency.lockutils [None req-67ef9343-cfde-453c-8a8a-d05196dcbab2 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "bfb3e97d-8923-4293-b4ae-a37019fe104c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:16:07 compute-0 nova_compute[257802]: 2025-10-02 13:16:07.541 2 INFO nova.compute.manager [None req-67ef9343-cfde-453c-8a8a-d05196dcbab2 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Terminating instance
Oct 02 13:16:07 compute-0 nova_compute[257802]: 2025-10-02 13:16:07.542 2 DEBUG nova.compute.manager [None req-67ef9343-cfde-453c-8a8a-d05196dcbab2 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:16:07 compute-0 kernel: tap0fe78275-ad (unregistering): left promiscuous mode
Oct 02 13:16:07 compute-0 NetworkManager[44987]: <info>  [1759410967.5988] device (tap0fe78275-ad): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:16:07 compute-0 ovn_controller[148183]: 2025-10-02T13:16:07Z|01001|binding|INFO|Releasing lport 0fe78275-ad8d-4e77-a0e7-503702bf1242 from this chassis (sb_readonly=0)
Oct 02 13:16:07 compute-0 nova_compute[257802]: 2025-10-02 13:16:07.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:07 compute-0 ovn_controller[148183]: 2025-10-02T13:16:07Z|01002|binding|INFO|Setting lport 0fe78275-ad8d-4e77-a0e7-503702bf1242 down in Southbound
Oct 02 13:16:07 compute-0 ovn_controller[148183]: 2025-10-02T13:16:07Z|01003|binding|INFO|Removing iface tap0fe78275-ad ovn-installed in OVS
Oct 02 13:16:07 compute-0 nova_compute[257802]: 2025-10-02 13:16:07.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:07.619 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:f8:73 10.100.0.11'], port_security=['fa:16:3e:6b:f8:73 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'bfb3e97d-8923-4293-b4ae-a37019fe104c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-150508fb-9217-4982-8468-977a3b53121a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fbbc6cb494464fd9b31f64c1ad75fa6b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9a538a4f-f761-421e-aa00-1341aedd2ba6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d5e391d-23a7-4f5a-8146-0f24141a74f2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=0fe78275-ad8d-4e77-a0e7-503702bf1242) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:16:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:07.623 158261 INFO neutron.agent.ovn.metadata.agent [-] Port 0fe78275-ad8d-4e77-a0e7-503702bf1242 in datapath 150508fb-9217-4982-8468-977a3b53121a unbound from our chassis
Oct 02 13:16:07 compute-0 nova_compute[257802]: 2025-10-02 13:16:07.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:07.628 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 150508fb-9217-4982-8468-977a3b53121a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:16:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:07.630 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7c3415df-a051-42d3-95eb-82a8652fbe6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:07.632 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-150508fb-9217-4982-8468-977a3b53121a namespace which is not needed anymore
Oct 02 13:16:07 compute-0 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d000000dc.scope: Deactivated successfully.
Oct 02 13:16:07 compute-0 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d000000dc.scope: Consumed 15.858s CPU time.
Oct 02 13:16:07 compute-0 systemd-machined[211836]: Machine qemu-106-instance-000000dc terminated.
Oct 02 13:16:07 compute-0 neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a[409280]: [NOTICE]   (409284) : haproxy version is 2.8.14-c23fe91
Oct 02 13:16:07 compute-0 neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a[409280]: [NOTICE]   (409284) : path to executable is /usr/sbin/haproxy
Oct 02 13:16:07 compute-0 neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a[409280]: [WARNING]  (409284) : Exiting Master process...
Oct 02 13:16:07 compute-0 neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a[409280]: [ALERT]    (409284) : Current worker (409286) exited with code 143 (Terminated)
Oct 02 13:16:07 compute-0 neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a[409280]: [WARNING]  (409284) : All workers exited. Exiting... (0)
Oct 02 13:16:07 compute-0 systemd[1]: libpod-85617ebf0cb309a9e2cca40a6fe51dd244365e9fa0355f4783d93c5c040d2e5a.scope: Deactivated successfully.
Oct 02 13:16:07 compute-0 podman[410962]: 2025-10-02 13:16:07.771002189 +0000 UTC m=+0.043452956 container died 85617ebf0cb309a9e2cca40a6fe51dd244365e9fa0355f4783d93c5c040d2e5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:16:07 compute-0 nova_compute[257802]: 2025-10-02 13:16:07.779 2 INFO nova.virt.libvirt.driver [-] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Instance destroyed successfully.
Oct 02 13:16:07 compute-0 nova_compute[257802]: 2025-10-02 13:16:07.780 2 DEBUG nova.objects.instance [None req-67ef9343-cfde-453c-8a8a-d05196dcbab2 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lazy-loading 'resources' on Instance uuid bfb3e97d-8923-4293-b4ae-a37019fe104c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:16:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-85617ebf0cb309a9e2cca40a6fe51dd244365e9fa0355f4783d93c5c040d2e5a-userdata-shm.mount: Deactivated successfully.
Oct 02 13:16:07 compute-0 nova_compute[257802]: 2025-10-02 13:16:07.798 2 DEBUG nova.virt.libvirt.vif [None req-67ef9343-cfde-453c-8a8a-d05196dcbab2 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:14:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1111797352',display_name='tempest-TestVolumeBootPattern-server-1111797352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1111797352',id=220,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBTAKk7vcjhR2v3hQpxHbnD8D+5EFYQASqHngnH89TfDPr9LwfPo4GlBaSvBU1kSzEsKlDYOjmvBACnYkU3g9qIDzQdk5Sxb5IqNRVXiy650FCjpN5wXe8XSUYo7rJct4A==',key_name='tempest-TestVolumeBootPattern-892734420',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:14:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fbbc6cb494464fd9b31f64c1ad75fa6b',ramdisk_id='',reservation_id='r-rrp61u00',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1200415020',owner_user_name='tempest-TestVolumeBootPattern-1200415020-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:14:50Z,user_data=None,user_id='c10de71fef00497981b8b7cec6a3fff3',uuid=bfb3e97d-8923-4293-b4ae-a37019fe104c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0fe78275-ad8d-4e77-a0e7-503702bf1242", "address": "fa:16:3e:6b:f8:73", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fe78275-ad", "ovs_interfaceid": "0fe78275-ad8d-4e77-a0e7-503702bf1242", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:16:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-b631cf6bfec9cd7318e2a567791d07a805e9a03c2583366dfbaae986987dbbb1-merged.mount: Deactivated successfully.
Oct 02 13:16:07 compute-0 nova_compute[257802]: 2025-10-02 13:16:07.801 2 DEBUG nova.network.os_vif_util [None req-67ef9343-cfde-453c-8a8a-d05196dcbab2 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Converting VIF {"id": "0fe78275-ad8d-4e77-a0e7-503702bf1242", "address": "fa:16:3e:6b:f8:73", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fe78275-ad", "ovs_interfaceid": "0fe78275-ad8d-4e77-a0e7-503702bf1242", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:16:07 compute-0 nova_compute[257802]: 2025-10-02 13:16:07.801 2 DEBUG nova.network.os_vif_util [None req-67ef9343-cfde-453c-8a8a-d05196dcbab2 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6b:f8:73,bridge_name='br-int',has_traffic_filtering=True,id=0fe78275-ad8d-4e77-a0e7-503702bf1242,network=Network(150508fb-9217-4982-8468-977a3b53121a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fe78275-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:16:07 compute-0 nova_compute[257802]: 2025-10-02 13:16:07.803 2 DEBUG os_vif [None req-67ef9343-cfde-453c-8a8a-d05196dcbab2 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6b:f8:73,bridge_name='br-int',has_traffic_filtering=True,id=0fe78275-ad8d-4e77-a0e7-503702bf1242,network=Network(150508fb-9217-4982-8468-977a3b53121a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fe78275-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:16:07 compute-0 nova_compute[257802]: 2025-10-02 13:16:07.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:07 compute-0 nova_compute[257802]: 2025-10-02 13:16:07.804 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0fe78275-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:16:07 compute-0 nova_compute[257802]: 2025-10-02 13:16:07.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:07 compute-0 nova_compute[257802]: 2025-10-02 13:16:07.808 2 INFO os_vif [None req-67ef9343-cfde-453c-8a8a-d05196dcbab2 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6b:f8:73,bridge_name='br-int',has_traffic_filtering=True,id=0fe78275-ad8d-4e77-a0e7-503702bf1242,network=Network(150508fb-9217-4982-8468-977a3b53121a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fe78275-ad')
Oct 02 13:16:07 compute-0 podman[410962]: 2025-10-02 13:16:07.810813396 +0000 UTC m=+0.083264143 container cleanup 85617ebf0cb309a9e2cca40a6fe51dd244365e9fa0355f4783d93c5c040d2e5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 13:16:07 compute-0 systemd[1]: libpod-conmon-85617ebf0cb309a9e2cca40a6fe51dd244365e9fa0355f4783d93c5c040d2e5a.scope: Deactivated successfully.
Oct 02 13:16:07 compute-0 podman[411015]: 2025-10-02 13:16:07.875764238 +0000 UTC m=+0.035363138 container remove 85617ebf0cb309a9e2cca40a6fe51dd244365e9fa0355f4783d93c5c040d2e5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:16:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:07.881 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f19a863e-9970-4b6a-91f2-bb775da60eaa]: (4, ('Thu Oct  2 01:16:07 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a (85617ebf0cb309a9e2cca40a6fe51dd244365e9fa0355f4783d93c5c040d2e5a)\n85617ebf0cb309a9e2cca40a6fe51dd244365e9fa0355f4783d93c5c040d2e5a\nThu Oct  2 01:16:07 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a (85617ebf0cb309a9e2cca40a6fe51dd244365e9fa0355f4783d93c5c040d2e5a)\n85617ebf0cb309a9e2cca40a6fe51dd244365e9fa0355f4783d93c5c040d2e5a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:07.883 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a85f012c-d568-42d0-a368-31d5e960e146]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:07.884 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap150508fb-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:16:07 compute-0 kernel: tap150508fb-90: left promiscuous mode
Oct 02 13:16:07 compute-0 nova_compute[257802]: 2025-10-02 13:16:07.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:07 compute-0 nova_compute[257802]: 2025-10-02 13:16:07.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:07.903 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[50986cbf-a755-4671-865a-a900dc0dd24e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:07.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:07.931 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[20464091-5aa4-40da-b513-731d6de611e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:07.932 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c345bcad-33ae-4361-9922-f25d4270d2bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:07.950 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ba11f97d-c2aa-4b24-aa55-7c595aa78625]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 895625, 'reachable_time': 19049, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 411070, 'error': None, 'target': 'ovnmeta-150508fb-9217-4982-8468-977a3b53121a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:07 compute-0 systemd[1]: run-netns-ovnmeta\x2d150508fb\x2d9217\x2d4982\x2d8468\x2d977a3b53121a.mount: Deactivated successfully.
Oct 02 13:16:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:07.953 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-150508fb-9217-4982-8468-977a3b53121a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:16:07 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:07.954 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[58da7222-c7ab-4721-bd0a-af57ff98be33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:08 compute-0 nova_compute[257802]: 2025-10-02 13:16:08.100 2 INFO nova.virt.libvirt.driver [None req-67ef9343-cfde-453c-8a8a-d05196dcbab2 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Deleting instance files /var/lib/nova/instances/bfb3e97d-8923-4293-b4ae-a37019fe104c_del
Oct 02 13:16:08 compute-0 nova_compute[257802]: 2025-10-02 13:16:08.101 2 INFO nova.virt.libvirt.driver [None req-67ef9343-cfde-453c-8a8a-d05196dcbab2 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Deletion of /var/lib/nova/instances/bfb3e97d-8923-4293-b4ae-a37019fe104c_del complete
Oct 02 13:16:08 compute-0 nova_compute[257802]: 2025-10-02 13:16:08.174 2 INFO nova.compute.manager [None req-67ef9343-cfde-453c-8a8a-d05196dcbab2 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Took 0.63 seconds to destroy the instance on the hypervisor.
Oct 02 13:16:08 compute-0 nova_compute[257802]: 2025-10-02 13:16:08.174 2 DEBUG oslo.service.loopingcall [None req-67ef9343-cfde-453c-8a8a-d05196dcbab2 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:16:08 compute-0 nova_compute[257802]: 2025-10-02 13:16:08.174 2 DEBUG nova.compute.manager [-] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:16:08 compute-0 nova_compute[257802]: 2025-10-02 13:16:08.174 2 DEBUG nova.network.neutron [-] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:16:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:08.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:08 compute-0 nova_compute[257802]: 2025-10-02 13:16:08.409 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759410968.4086554, 0a899aee-b0ae-4350-9f56-446d42ef77d2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:16:08 compute-0 nova_compute[257802]: 2025-10-02 13:16:08.409 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] VM Resumed (Lifecycle Event)
Oct 02 13:16:08 compute-0 nova_compute[257802]: 2025-10-02 13:16:08.411 2 DEBUG nova.compute.manager [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:16:08 compute-0 nova_compute[257802]: 2025-10-02 13:16:08.414 2 INFO nova.virt.libvirt.driver [-] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Instance running successfully.
Oct 02 13:16:08 compute-0 virtqemud[257280]: argument unsupported: QEMU guest agent is not configured
Oct 02 13:16:08 compute-0 nova_compute[257802]: 2025-10-02 13:16:08.415 2 DEBUG nova.virt.libvirt.guest [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 13:16:08 compute-0 nova_compute[257802]: 2025-10-02 13:16:08.415 2 DEBUG nova.virt.libvirt.driver [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Oct 02 13:16:08 compute-0 nova_compute[257802]: 2025-10-02 13:16:08.435 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:16:08 compute-0 nova_compute[257802]: 2025-10-02 13:16:08.438 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:16:08 compute-0 nova_compute[257802]: 2025-10-02 13:16:08.475 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] During sync_power_state the instance has a pending task (resize_finish). Skip.
Oct 02 13:16:08 compute-0 nova_compute[257802]: 2025-10-02 13:16:08.475 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759410968.4108744, 0a899aee-b0ae-4350-9f56-446d42ef77d2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:16:08 compute-0 nova_compute[257802]: 2025-10-02 13:16:08.475 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] VM Started (Lifecycle Event)
Oct 02 13:16:08 compute-0 nova_compute[257802]: 2025-10-02 13:16:08.508 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:16:08 compute-0 nova_compute[257802]: 2025-10-02 13:16:08.510 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:16:08 compute-0 nova_compute[257802]: 2025-10-02 13:16:08.572 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] During sync_power_state the instance has a pending task (resize_finish). Skip.
Oct 02 13:16:08 compute-0 systemd[1]: Stopping User Manager for UID 42436...
Oct 02 13:16:08 compute-0 systemd[410627]: Activating special unit Exit the Session...
Oct 02 13:16:08 compute-0 systemd[410627]: Stopped target Main User Target.
Oct 02 13:16:08 compute-0 systemd[410627]: Stopped target Basic System.
Oct 02 13:16:08 compute-0 systemd[410627]: Stopped target Paths.
Oct 02 13:16:08 compute-0 systemd[410627]: Stopped target Sockets.
Oct 02 13:16:08 compute-0 systemd[410627]: Stopped target Timers.
Oct 02 13:16:08 compute-0 systemd[410627]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 02 13:16:08 compute-0 systemd[410627]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 13:16:08 compute-0 systemd[410627]: Closed D-Bus User Message Bus Socket.
Oct 02 13:16:08 compute-0 systemd[410627]: Stopped Create User's Volatile Files and Directories.
Oct 02 13:16:08 compute-0 systemd[410627]: Removed slice User Application Slice.
Oct 02 13:16:08 compute-0 systemd[410627]: Reached target Shutdown.
Oct 02 13:16:08 compute-0 systemd[410627]: Finished Exit the Session.
Oct 02 13:16:08 compute-0 systemd[410627]: Reached target Exit the Session.
Oct 02 13:16:08 compute-0 systemd[1]: user@42436.service: Deactivated successfully.
Oct 02 13:16:08 compute-0 systemd[1]: Stopped User Manager for UID 42436.
Oct 02 13:16:08 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct 02 13:16:08 compute-0 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct 02 13:16:08 compute-0 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct 02 13:16:08 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct 02 13:16:08 compute-0 systemd[1]: Removed slice User Slice of UID 42436.
Oct 02 13:16:08 compute-0 nova_compute[257802]: 2025-10-02 13:16:08.703 2 DEBUG nova.network.neutron [req-3afaf3af-88c2-408f-b893-58d0d856e661 req-2fff9fa8-7b81-415e-b589-406142bf8ca3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Updated VIF entry in instance network info cache for port 0fe78275-ad8d-4e77-a0e7-503702bf1242. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:16:08 compute-0 nova_compute[257802]: 2025-10-02 13:16:08.704 2 DEBUG nova.network.neutron [req-3afaf3af-88c2-408f-b893-58d0d856e661 req-2fff9fa8-7b81-415e-b589-406142bf8ca3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Updating instance_info_cache with network_info: [{"id": "0fe78275-ad8d-4e77-a0e7-503702bf1242", "address": "fa:16:3e:6b:f8:73", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fe78275-ad", "ovs_interfaceid": "0fe78275-ad8d-4e77-a0e7-503702bf1242", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:16:08 compute-0 nova_compute[257802]: 2025-10-02 13:16:08.739 2 DEBUG oslo_concurrency.lockutils [req-3afaf3af-88c2-408f-b893-58d0d856e661 req-2fff9fa8-7b81-415e-b589-406142bf8ca3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-bfb3e97d-8923-4293-b4ae-a37019fe104c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:16:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3552: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 235 KiB/s rd, 32 KiB/s wr, 78 op/s
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.236 2 DEBUG nova.network.neutron [-] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.261 2 INFO nova.compute.manager [-] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Took 1.09 seconds to deallocate network for instance.
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.579 2 DEBUG nova.compute.manager [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Received event network-vif-plugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.579 2 DEBUG oslo_concurrency.lockutils [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.580 2 DEBUG oslo_concurrency.lockutils [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.580 2 DEBUG oslo_concurrency.lockutils [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.580 2 DEBUG nova.compute.manager [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] No waiting events found dispatching network-vif-plugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.581 2 WARNING nova.compute.manager [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Received unexpected event network-vif-plugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e for instance with vm_state resized and task_state None.
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.581 2 DEBUG nova.compute.manager [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Received event network-vif-plugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.581 2 DEBUG oslo_concurrency.lockutils [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.582 2 DEBUG oslo_concurrency.lockutils [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.582 2 DEBUG oslo_concurrency.lockutils [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.582 2 DEBUG nova.compute.manager [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] No waiting events found dispatching network-vif-plugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.582 2 WARNING nova.compute.manager [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Received unexpected event network-vif-plugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e for instance with vm_state resized and task_state None.
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.583 2 DEBUG nova.compute.manager [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Received event network-vif-unplugged-0fe78275-ad8d-4e77-a0e7-503702bf1242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.583 2 DEBUG oslo_concurrency.lockutils [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "bfb3e97d-8923-4293-b4ae-a37019fe104c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.583 2 DEBUG oslo_concurrency.lockutils [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "bfb3e97d-8923-4293-b4ae-a37019fe104c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.584 2 DEBUG oslo_concurrency.lockutils [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "bfb3e97d-8923-4293-b4ae-a37019fe104c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.584 2 DEBUG nova.compute.manager [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] No waiting events found dispatching network-vif-unplugged-0fe78275-ad8d-4e77-a0e7-503702bf1242 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.584 2 DEBUG nova.compute.manager [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Received event network-vif-unplugged-0fe78275-ad8d-4e77-a0e7-503702bf1242 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.585 2 DEBUG nova.compute.manager [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Received event network-vif-plugged-0fe78275-ad8d-4e77-a0e7-503702bf1242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.585 2 DEBUG oslo_concurrency.lockutils [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "bfb3e97d-8923-4293-b4ae-a37019fe104c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.585 2 DEBUG oslo_concurrency.lockutils [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "bfb3e97d-8923-4293-b4ae-a37019fe104c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.585 2 DEBUG oslo_concurrency.lockutils [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "bfb3e97d-8923-4293-b4ae-a37019fe104c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.586 2 DEBUG nova.compute.manager [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] No waiting events found dispatching network-vif-plugged-0fe78275-ad8d-4e77-a0e7-503702bf1242 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.586 2 WARNING nova.compute.manager [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Received unexpected event network-vif-plugged-0fe78275-ad8d-4e77-a0e7-503702bf1242 for instance with vm_state active and task_state deleting.
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.586 2 DEBUG nova.compute.manager [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Received event network-vif-deleted-0fe78275-ad8d-4e77-a0e7-503702bf1242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:16:09 compute-0 sudo[411081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:09 compute-0 sudo[411081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:09 compute-0 sudo[411081]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.684 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.702 2 INFO nova.compute.manager [None req-67ef9343-cfde-453c-8a8a-d05196dcbab2 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Took 0.44 seconds to detach 1 volumes for instance.
Oct 02 13:16:09 compute-0 sudo[411106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:09 compute-0 sudo[411106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:09 compute-0 sudo[411106]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.810 2 DEBUG oslo_concurrency.lockutils [None req-67ef9343-cfde-453c-8a8a-d05196dcbab2 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.811 2 DEBUG oslo_concurrency.lockutils [None req-67ef9343-cfde-453c-8a8a-d05196dcbab2 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:16:09 compute-0 nova_compute[257802]: 2025-10-02 13:16:09.879 2 DEBUG oslo_concurrency.processutils [None req-67ef9343-cfde-453c-8a8a-d05196dcbab2 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:16:09 compute-0 ceph-mon[73607]: pgmap v3552: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 235 KiB/s rd, 32 KiB/s wr, 78 op/s
Oct 02 13:16:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:09.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:10.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:16:10 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3065660982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:10 compute-0 nova_compute[257802]: 2025-10-02 13:16:10.372 2 DEBUG oslo_concurrency.processutils [None req-67ef9343-cfde-453c-8a8a-d05196dcbab2 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:16:10 compute-0 nova_compute[257802]: 2025-10-02 13:16:10.378 2 DEBUG nova.compute.provider_tree [None req-67ef9343-cfde-453c-8a8a-d05196dcbab2 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:16:10 compute-0 nova_compute[257802]: 2025-10-02 13:16:10.396 2 DEBUG nova.scheduler.client.report [None req-67ef9343-cfde-453c-8a8a-d05196dcbab2 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:16:10 compute-0 nova_compute[257802]: 2025-10-02 13:16:10.422 2 DEBUG oslo_concurrency.lockutils [None req-67ef9343-cfde-453c-8a8a-d05196dcbab2 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:16:10 compute-0 nova_compute[257802]: 2025-10-02 13:16:10.454 2 INFO nova.scheduler.client.report [None req-67ef9343-cfde-453c-8a8a-d05196dcbab2 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Deleted allocations for instance bfb3e97d-8923-4293-b4ae-a37019fe104c
Oct 02 13:16:10 compute-0 nova_compute[257802]: 2025-10-02 13:16:10.527 2 DEBUG oslo_concurrency.lockutils [None req-67ef9343-cfde-453c-8a8a-d05196dcbab2 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "bfb3e97d-8923-4293-b4ae-a37019fe104c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.987s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:16:10 compute-0 nova_compute[257802]: 2025-10-02 13:16:10.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3553: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 19 KiB/s wr, 126 op/s
Oct 02 13:16:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3065660982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e427 do_prune osdmap full prune enabled
Oct 02 13:16:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e428 e428: 3 total, 3 up, 3 in
Oct 02 13:16:11 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e428: 3 total, 3 up, 3 in
Oct 02 13:16:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:11.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:11 compute-0 ceph-mon[73607]: pgmap v3553: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 19 KiB/s wr, 126 op/s
Oct 02 13:16:11 compute-0 ceph-mon[73607]: osdmap e428: 3 total, 3 up, 3 in
Oct 02 13:16:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:12.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:16:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:16:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:16:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:16:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:16:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:16:12 compute-0 nova_compute[257802]: 2025-10-02 13:16:12.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3555: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.0 KiB/s wr, 148 op/s
Oct 02 13:16:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e428 do_prune osdmap full prune enabled
Oct 02 13:16:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e429 e429: 3 total, 3 up, 3 in
Oct 02 13:16:12 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e429: 3 total, 3 up, 3 in
Oct 02 13:16:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:13.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:13 compute-0 ceph-mon[73607]: pgmap v3555: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.0 KiB/s wr, 148 op/s
Oct 02 13:16:13 compute-0 ceph-mon[73607]: osdmap e429: 3 total, 3 up, 3 in
Oct 02 13:16:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2133771657' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:16:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2133771657' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:16:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1352451436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:14.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3557: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.6 KiB/s wr, 180 op/s
Oct 02 13:16:15 compute-0 nova_compute[257802]: 2025-10-02 13:16:15.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:15.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:15 compute-0 ceph-mon[73607]: pgmap v3557: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.6 KiB/s wr, 180 op/s
Oct 02 13:16:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:16.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3558: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 221 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.1 KiB/s wr, 152 op/s
Oct 02 13:16:17 compute-0 nova_compute[257802]: 2025-10-02 13:16:17.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:17.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:18 compute-0 ceph-mon[73607]: pgmap v3558: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 221 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.1 KiB/s wr, 152 op/s
Oct 02 13:16:18 compute-0 ovn_controller[148183]: 2025-10-02T13:16:18Z|01004|binding|INFO|Releasing lport 76911ddd-98dc-48ab-a192-244a4cada461 from this chassis (sb_readonly=0)
Oct 02 13:16:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:18.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:18 compute-0 nova_compute[257802]: 2025-10-02 13:16:18.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3559: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.0 KiB/s wr, 82 op/s
Oct 02 13:16:19 compute-0 ceph-mon[73607]: pgmap v3559: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.0 KiB/s wr, 82 op/s
Oct 02 13:16:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:16:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:19.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:16:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:20.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:20 compute-0 ovn_controller[148183]: 2025-10-02T13:16:20Z|00134|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c7:1d:48 10.100.0.7
Oct 02 13:16:20 compute-0 nova_compute[257802]: 2025-10-02 13:16:20.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3560: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 13 KiB/s wr, 84 op/s
Oct 02 13:16:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e429 do_prune osdmap full prune enabled
Oct 02 13:16:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 e430: 3 total, 3 up, 3 in
Oct 02 13:16:21 compute-0 ceph-mon[73607]: log_channel(cluster) log [DBG] : osdmap e430: 3 total, 3 up, 3 in
Oct 02 13:16:21 compute-0 podman[411159]: 2025-10-02 13:16:21.917564439 +0000 UTC m=+0.055432951 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:16:21 compute-0 podman[411161]: 2025-10-02 13:16:21.922967531 +0000 UTC m=+0.056547408 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:16:21 compute-0 ceph-mon[73607]: pgmap v3560: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 13 KiB/s wr, 84 op/s
Oct 02 13:16:21 compute-0 ceph-mon[73607]: osdmap e430: 3 total, 3 up, 3 in
Oct 02 13:16:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:21.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:21 compute-0 podman[411160]: 2025-10-02 13:16:21.94860327 +0000 UTC m=+0.085189530 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Oct 02 13:16:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:16:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:22.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:16:22 compute-0 nova_compute[257802]: 2025-10-02 13:16:22.777 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410967.773819, bfb3e97d-8923-4293-b4ae-a37019fe104c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:16:22 compute-0 nova_compute[257802]: 2025-10-02 13:16:22.778 2 INFO nova.compute.manager [-] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] VM Stopped (Lifecycle Event)
Oct 02 13:16:22 compute-0 nova_compute[257802]: 2025-10-02 13:16:22.800 2 DEBUG nova.compute.manager [None req-fe7978e0-fbf1-4151-aa87-bb079642f208 - - - - - -] [instance: bfb3e97d-8923-4293-b4ae-a37019fe104c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:16:22 compute-0 nova_compute[257802]: 2025-10-02 13:16:22.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3562: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 13 KiB/s wr, 82 op/s
Oct 02 13:16:23 compute-0 ovn_controller[148183]: 2025-10-02T13:16:23Z|01005|binding|INFO|Releasing lport 76911ddd-98dc-48ab-a192-244a4cada461 from this chassis (sb_readonly=0)
Oct 02 13:16:23 compute-0 nova_compute[257802]: 2025-10-02 13:16:23.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:23.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:24 compute-0 ceph-mon[73607]: pgmap v3562: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 13 KiB/s wr, 82 op/s
Oct 02 13:16:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:16:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:24.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:16:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3563: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 549 KiB/s rd, 15 KiB/s wr, 68 op/s
Oct 02 13:16:25 compute-0 ceph-mon[73607]: pgmap v3563: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 549 KiB/s rd, 15 KiB/s wr, 68 op/s
Oct 02 13:16:25 compute-0 nova_compute[257802]: 2025-10-02 13:16:25.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:25.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:26 compute-0 nova_compute[257802]: 2025-10-02 13:16:26.178 2 INFO nova.compute.manager [None req-f4c5f070-3055-4bcf-87dd-0e13f6dd1c87 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Get console output
Oct 02 13:16:26 compute-0 nova_compute[257802]: 2025-10-02 13:16:26.183 20794 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 13:16:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:16:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:26.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:16:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3564: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 633 KiB/s rd, 15 KiB/s wr, 58 op/s
Oct 02 13:16:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:27.002 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:16:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:27.003 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:16:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:27.003 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:16:27 compute-0 nova_compute[257802]: 2025-10-02 13:16:27.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:27 compute-0 ceph-mon[73607]: pgmap v3564: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 633 KiB/s rd, 15 KiB/s wr, 58 op/s
Oct 02 13:16:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:27.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:28 compute-0 nova_compute[257802]: 2025-10-02 13:16:28.154 2 DEBUG nova.compute.manager [req-a2873c8a-41a3-49bd-b4b4-c43836f64978 req-83083cd7-2b76-4512-8533-b483ee86ccfc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Received event network-changed-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:16:28 compute-0 nova_compute[257802]: 2025-10-02 13:16:28.154 2 DEBUG nova.compute.manager [req-a2873c8a-41a3-49bd-b4b4-c43836f64978 req-83083cd7-2b76-4512-8533-b483ee86ccfc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Refreshing instance network info cache due to event network-changed-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:16:28 compute-0 nova_compute[257802]: 2025-10-02 13:16:28.154 2 DEBUG oslo_concurrency.lockutils [req-a2873c8a-41a3-49bd-b4b4-c43836f64978 req-83083cd7-2b76-4512-8533-b483ee86ccfc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-0a899aee-b0ae-4350-9f56-446d42ef77d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:16:28 compute-0 nova_compute[257802]: 2025-10-02 13:16:28.155 2 DEBUG oslo_concurrency.lockutils [req-a2873c8a-41a3-49bd-b4b4-c43836f64978 req-83083cd7-2b76-4512-8533-b483ee86ccfc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-0a899aee-b0ae-4350-9f56-446d42ef77d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:16:28 compute-0 nova_compute[257802]: 2025-10-02 13:16:28.155 2 DEBUG nova.network.neutron [req-a2873c8a-41a3-49bd-b4b4-c43836f64978 req-83083cd7-2b76-4512-8533-b483ee86ccfc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Refreshing network info cache for port c7bc4d7b-3f13-4f9c-be12-6edcff4d424e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:16:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:28.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:28 compute-0 nova_compute[257802]: 2025-10-02 13:16:28.300 2 DEBUG oslo_concurrency.lockutils [None req-64c82187-7ff3-46a3-a17d-ca91b97976d2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "0a899aee-b0ae-4350-9f56-446d42ef77d2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:16:28 compute-0 nova_compute[257802]: 2025-10-02 13:16:28.301 2 DEBUG oslo_concurrency.lockutils [None req-64c82187-7ff3-46a3-a17d-ca91b97976d2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:16:28 compute-0 nova_compute[257802]: 2025-10-02 13:16:28.301 2 DEBUG oslo_concurrency.lockutils [None req-64c82187-7ff3-46a3-a17d-ca91b97976d2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:16:28 compute-0 nova_compute[257802]: 2025-10-02 13:16:28.302 2 DEBUG oslo_concurrency.lockutils [None req-64c82187-7ff3-46a3-a17d-ca91b97976d2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:16:28 compute-0 nova_compute[257802]: 2025-10-02 13:16:28.302 2 DEBUG oslo_concurrency.lockutils [None req-64c82187-7ff3-46a3-a17d-ca91b97976d2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:16:28 compute-0 nova_compute[257802]: 2025-10-02 13:16:28.304 2 INFO nova.compute.manager [None req-64c82187-7ff3-46a3-a17d-ca91b97976d2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Terminating instance
Oct 02 13:16:28 compute-0 nova_compute[257802]: 2025-10-02 13:16:28.305 2 DEBUG nova.compute.manager [None req-64c82187-7ff3-46a3-a17d-ca91b97976d2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:16:28 compute-0 kernel: tapc7bc4d7b-3f (unregistering): left promiscuous mode
Oct 02 13:16:28 compute-0 NetworkManager[44987]: <info>  [1759410988.3658] device (tapc7bc4d7b-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:16:28 compute-0 nova_compute[257802]: 2025-10-02 13:16:28.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:28 compute-0 ovn_controller[148183]: 2025-10-02T13:16:28Z|01006|binding|INFO|Releasing lport c7bc4d7b-3f13-4f9c-be12-6edcff4d424e from this chassis (sb_readonly=0)
Oct 02 13:16:28 compute-0 ovn_controller[148183]: 2025-10-02T13:16:28Z|01007|binding|INFO|Setting lport c7bc4d7b-3f13-4f9c-be12-6edcff4d424e down in Southbound
Oct 02 13:16:28 compute-0 ovn_controller[148183]: 2025-10-02T13:16:28Z|01008|binding|INFO|Removing iface tapc7bc4d7b-3f ovn-installed in OVS
Oct 02 13:16:28 compute-0 nova_compute[257802]: 2025-10-02 13:16:28.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:28.383 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:1d:48 10.100.0.7'], port_security=['fa:16:3e:c7:1d:48 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '0a899aee-b0ae-4350-9f56-446d42ef77d2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-faf67f78-db61-49af-8b59-741a7fc7cb4c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08e102ae48244af2ab448a2e1ff757df', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'bd81072c-a0ca-4da6-a274-e2604cd954b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8837441d-fb89-46a9-8aa6-bfdcde8d99f9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=c7bc4d7b-3f13-4f9c-be12-6edcff4d424e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:16:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:28.385 158261 INFO neutron.agent.ovn.metadata.agent [-] Port c7bc4d7b-3f13-4f9c-be12-6edcff4d424e in datapath faf67f78-db61-49af-8b59-741a7fc7cb4c unbound from our chassis
Oct 02 13:16:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:28.386 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network faf67f78-db61-49af-8b59-741a7fc7cb4c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:16:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:28.388 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5815538e-b935-40d1-8d17-cbf961cacf13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:28.403 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c namespace which is not needed anymore
Oct 02 13:16:28 compute-0 nova_compute[257802]: 2025-10-02 13:16:28.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:28 compute-0 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d000000dd.scope: Deactivated successfully.
Oct 02 13:16:28 compute-0 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d000000dd.scope: Consumed 14.724s CPU time.
Oct 02 13:16:28 compute-0 systemd-machined[211836]: Machine qemu-107-instance-000000dd terminated.
Oct 02 13:16:28 compute-0 nova_compute[257802]: 2025-10-02 13:16:28.547 2 INFO nova.virt.libvirt.driver [-] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Instance destroyed successfully.
Oct 02 13:16:28 compute-0 nova_compute[257802]: 2025-10-02 13:16:28.548 2 DEBUG nova.objects.instance [None req-64c82187-7ff3-46a3-a17d-ca91b97976d2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'resources' on Instance uuid 0a899aee-b0ae-4350-9f56-446d42ef77d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:16:28 compute-0 nova_compute[257802]: 2025-10-02 13:16:28.573 2 DEBUG nova.virt.libvirt.vif [None req-64c82187-7ff3-46a3-a17d-ca91b97976d2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:15:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1996915229',display_name='tempest-TestNetworkAdvancedServerOps-server-1996915229',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1996915229',id=221,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDRuo4KdXDZKQ3oIyyNSXE1rik8EaVzgz9MZA+kI2JiH48qHMFlLWYrWwJzNWAoOqZLFOIMkCOlFgIqDZkrdOzURcb4pwQu8clr+WCDZQoFdB5ndJaWWZy3VHIv+OtGSTw==',key_name='tempest-TestNetworkAdvancedServerOps-1841926110',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:16:08Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-di1mdv1g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:16:13Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=0a899aee-b0ae-4350-9f56-446d42ef77d2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "address": "fa:16:3e:c7:1d:48", "network": {"id": "faf67f78-db61-49af-8b59-741a7fc7cb4c", "bridge": "br-int", "label": "tempest-network-smoke--1680552744", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7bc4d7b-3f", "ovs_interfaceid": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:16:28 compute-0 nova_compute[257802]: 2025-10-02 13:16:28.575 2 DEBUG nova.network.os_vif_util [None req-64c82187-7ff3-46a3-a17d-ca91b97976d2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "address": "fa:16:3e:c7:1d:48", "network": {"id": "faf67f78-db61-49af-8b59-741a7fc7cb4c", "bridge": "br-int", "label": "tempest-network-smoke--1680552744", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7bc4d7b-3f", "ovs_interfaceid": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:16:28 compute-0 nova_compute[257802]: 2025-10-02 13:16:28.576 2 DEBUG nova.network.os_vif_util [None req-64c82187-7ff3-46a3-a17d-ca91b97976d2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c7:1d:48,bridge_name='br-int',has_traffic_filtering=True,id=c7bc4d7b-3f13-4f9c-be12-6edcff4d424e,network=Network(faf67f78-db61-49af-8b59-741a7fc7cb4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7bc4d7b-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:16:28 compute-0 nova_compute[257802]: 2025-10-02 13:16:28.576 2 DEBUG os_vif [None req-64c82187-7ff3-46a3-a17d-ca91b97976d2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:1d:48,bridge_name='br-int',has_traffic_filtering=True,id=c7bc4d7b-3f13-4f9c-be12-6edcff4d424e,network=Network(faf67f78-db61-49af-8b59-741a7fc7cb4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7bc4d7b-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:16:28 compute-0 nova_compute[257802]: 2025-10-02 13:16:28.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:28 compute-0 nova_compute[257802]: 2025-10-02 13:16:28.579 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc7bc4d7b-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:16:28 compute-0 nova_compute[257802]: 2025-10-02 13:16:28.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:28 compute-0 nova_compute[257802]: 2025-10-02 13:16:28.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:16:28 compute-0 nova_compute[257802]: 2025-10-02 13:16:28.585 2 INFO os_vif [None req-64c82187-7ff3-46a3-a17d-ca91b97976d2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:1d:48,bridge_name='br-int',has_traffic_filtering=True,id=c7bc4d7b-3f13-4f9c-be12-6edcff4d424e,network=Network(faf67f78-db61-49af-8b59-741a7fc7cb4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7bc4d7b-3f')
Oct 02 13:16:28 compute-0 neutron-haproxy-ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c[410926]: [NOTICE]   (410930) : haproxy version is 2.8.14-c23fe91
Oct 02 13:16:28 compute-0 neutron-haproxy-ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c[410926]: [NOTICE]   (410930) : path to executable is /usr/sbin/haproxy
Oct 02 13:16:28 compute-0 neutron-haproxy-ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c[410926]: [WARNING]  (410930) : Exiting Master process...
Oct 02 13:16:28 compute-0 neutron-haproxy-ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c[410926]: [ALERT]    (410930) : Current worker (410932) exited with code 143 (Terminated)
Oct 02 13:16:28 compute-0 neutron-haproxy-ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c[410926]: [WARNING]  (410930) : All workers exited. Exiting... (0)
Oct 02 13:16:28 compute-0 systemd[1]: libpod-a2411d50b722aa89d2f4e56fbcbc0388e32ba21eb487fa8d6c9189f959fc2218.scope: Deactivated successfully.
Oct 02 13:16:28 compute-0 podman[411247]: 2025-10-02 13:16:28.665306604 +0000 UTC m=+0.165277503 container died a2411d50b722aa89d2f4e56fbcbc0388e32ba21eb487fa8d6c9189f959fc2218 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:16:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a2411d50b722aa89d2f4e56fbcbc0388e32ba21eb487fa8d6c9189f959fc2218-userdata-shm.mount: Deactivated successfully.
Oct 02 13:16:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3565: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 632 KiB/s rd, 17 KiB/s wr, 54 op/s
Oct 02 13:16:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7af141870c3c5e9c888c73f9e873b3d0f3b3f7ceef67abf371d60645ed5b1a0-merged.mount: Deactivated successfully.
Oct 02 13:16:28 compute-0 podman[411247]: 2025-10-02 13:16:28.85274583 +0000 UTC m=+0.352716729 container cleanup a2411d50b722aa89d2f4e56fbcbc0388e32ba21eb487fa8d6c9189f959fc2218 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 13:16:28 compute-0 systemd[1]: libpod-conmon-a2411d50b722aa89d2f4e56fbcbc0388e32ba21eb487fa8d6c9189f959fc2218.scope: Deactivated successfully.
Oct 02 13:16:28 compute-0 podman[411304]: 2025-10-02 13:16:28.978567635 +0000 UTC m=+0.083981150 container remove a2411d50b722aa89d2f4e56fbcbc0388e32ba21eb487fa8d6c9189f959fc2218 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 13:16:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:28.988 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[8b4bdc81-a80e-46b7-bb4c-a99f774028eb]: (4, ('Thu Oct  2 01:16:28 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c (a2411d50b722aa89d2f4e56fbcbc0388e32ba21eb487fa8d6c9189f959fc2218)\na2411d50b722aa89d2f4e56fbcbc0388e32ba21eb487fa8d6c9189f959fc2218\nThu Oct  2 01:16:28 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c (a2411d50b722aa89d2f4e56fbcbc0388e32ba21eb487fa8d6c9189f959fc2218)\na2411d50b722aa89d2f4e56fbcbc0388e32ba21eb487fa8d6c9189f959fc2218\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:28.991 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3026f3e9-1b41-45e9-ba5e-eacda94bc6db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:28 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:28.992 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfaf67f78-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:16:28 compute-0 kernel: tapfaf67f78-d0: left promiscuous mode
Oct 02 13:16:28 compute-0 nova_compute[257802]: 2025-10-02 13:16:28.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:29 compute-0 nova_compute[257802]: 2025-10-02 13:16:29.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:29.011 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e84635b5-a4ab-46f3-b1b4-05977c45664a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:29.040 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[68321ce8-af65-4421-8f11-253940ec51b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:29.041 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[56520183-8a4a-4e82-a3c9-f6f7da3a8166]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:29.055 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[fe51f280-2f73-424a-8631-850b2fd274b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 903433, 'reachable_time': 30607, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 411320, 'error': None, 'target': 'ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:29.063 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:16:29 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:16:29.063 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[8455ad47-85f1-44ea-9f48-3c5c4233d90d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:16:29 compute-0 systemd[1]: run-netns-ovnmeta\x2dfaf67f78\x2ddb61\x2d49af\x2d8b59\x2d741a7fc7cb4c.mount: Deactivated successfully.
Oct 02 13:16:29 compute-0 nova_compute[257802]: 2025-10-02 13:16:29.126 2 INFO nova.virt.libvirt.driver [None req-64c82187-7ff3-46a3-a17d-ca91b97976d2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Deleting instance files /var/lib/nova/instances/0a899aee-b0ae-4350-9f56-446d42ef77d2_del
Oct 02 13:16:29 compute-0 nova_compute[257802]: 2025-10-02 13:16:29.127 2 INFO nova.virt.libvirt.driver [None req-64c82187-7ff3-46a3-a17d-ca91b97976d2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Deletion of /var/lib/nova/instances/0a899aee-b0ae-4350-9f56-446d42ef77d2_del complete
Oct 02 13:16:29 compute-0 nova_compute[257802]: 2025-10-02 13:16:29.189 2 INFO nova.compute.manager [None req-64c82187-7ff3-46a3-a17d-ca91b97976d2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Took 0.88 seconds to destroy the instance on the hypervisor.
Oct 02 13:16:29 compute-0 nova_compute[257802]: 2025-10-02 13:16:29.190 2 DEBUG oslo.service.loopingcall [None req-64c82187-7ff3-46a3-a17d-ca91b97976d2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:16:29 compute-0 nova_compute[257802]: 2025-10-02 13:16:29.191 2 DEBUG nova.compute.manager [-] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:16:29 compute-0 nova_compute[257802]: 2025-10-02 13:16:29.191 2 DEBUG nova.network.neutron [-] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:16:29 compute-0 sudo[411321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:29 compute-0 sudo[411321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:29 compute-0 sudo[411321]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:29 compute-0 sudo[411346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:29 compute-0 sudo[411346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:29 compute-0 sudo[411346]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:29 compute-0 ceph-mon[73607]: pgmap v3565: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 632 KiB/s rd, 17 KiB/s wr, 54 op/s
Oct 02 13:16:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:29.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:30 compute-0 nova_compute[257802]: 2025-10-02 13:16:30.264 2 DEBUG nova.compute.manager [req-c94d4d19-3e52-416d-95d7-a4f0243f1fc2 req-665309b3-8341-4ec6-9cbe-432b7c23e322 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Received event network-vif-unplugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:16:30 compute-0 nova_compute[257802]: 2025-10-02 13:16:30.265 2 DEBUG oslo_concurrency.lockutils [req-c94d4d19-3e52-416d-95d7-a4f0243f1fc2 req-665309b3-8341-4ec6-9cbe-432b7c23e322 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:16:30 compute-0 nova_compute[257802]: 2025-10-02 13:16:30.265 2 DEBUG oslo_concurrency.lockutils [req-c94d4d19-3e52-416d-95d7-a4f0243f1fc2 req-665309b3-8341-4ec6-9cbe-432b7c23e322 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:16:30 compute-0 nova_compute[257802]: 2025-10-02 13:16:30.265 2 DEBUG oslo_concurrency.lockutils [req-c94d4d19-3e52-416d-95d7-a4f0243f1fc2 req-665309b3-8341-4ec6-9cbe-432b7c23e322 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:16:30 compute-0 nova_compute[257802]: 2025-10-02 13:16:30.265 2 DEBUG nova.compute.manager [req-c94d4d19-3e52-416d-95d7-a4f0243f1fc2 req-665309b3-8341-4ec6-9cbe-432b7c23e322 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] No waiting events found dispatching network-vif-unplugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:16:30 compute-0 nova_compute[257802]: 2025-10-02 13:16:30.266 2 DEBUG nova.compute.manager [req-c94d4d19-3e52-416d-95d7-a4f0243f1fc2 req-665309b3-8341-4ec6-9cbe-432b7c23e322 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Received event network-vif-unplugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:16:30 compute-0 nova_compute[257802]: 2025-10-02 13:16:30.266 2 DEBUG nova.compute.manager [req-c94d4d19-3e52-416d-95d7-a4f0243f1fc2 req-665309b3-8341-4ec6-9cbe-432b7c23e322 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Received event network-vif-plugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:16:30 compute-0 nova_compute[257802]: 2025-10-02 13:16:30.266 2 DEBUG oslo_concurrency.lockutils [req-c94d4d19-3e52-416d-95d7-a4f0243f1fc2 req-665309b3-8341-4ec6-9cbe-432b7c23e322 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:16:30 compute-0 nova_compute[257802]: 2025-10-02 13:16:30.267 2 DEBUG oslo_concurrency.lockutils [req-c94d4d19-3e52-416d-95d7-a4f0243f1fc2 req-665309b3-8341-4ec6-9cbe-432b7c23e322 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:16:30 compute-0 nova_compute[257802]: 2025-10-02 13:16:30.267 2 DEBUG oslo_concurrency.lockutils [req-c94d4d19-3e52-416d-95d7-a4f0243f1fc2 req-665309b3-8341-4ec6-9cbe-432b7c23e322 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:16:30 compute-0 nova_compute[257802]: 2025-10-02 13:16:30.267 2 DEBUG nova.compute.manager [req-c94d4d19-3e52-416d-95d7-a4f0243f1fc2 req-665309b3-8341-4ec6-9cbe-432b7c23e322 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] No waiting events found dispatching network-vif-plugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:16:30 compute-0 nova_compute[257802]: 2025-10-02 13:16:30.268 2 WARNING nova.compute.manager [req-c94d4d19-3e52-416d-95d7-a4f0243f1fc2 req-665309b3-8341-4ec6-9cbe-432b7c23e322 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Received unexpected event network-vif-plugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e for instance with vm_state active and task_state deleting.
Oct 02 13:16:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:30.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:30 compute-0 nova_compute[257802]: 2025-10-02 13:16:30.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3566: 305 pgs: 305 active+clean; 152 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 555 KiB/s rd, 18 KiB/s wr, 61 op/s
Oct 02 13:16:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:31 compute-0 nova_compute[257802]: 2025-10-02 13:16:31.457 2 DEBUG nova.network.neutron [-] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:16:31 compute-0 nova_compute[257802]: 2025-10-02 13:16:31.479 2 INFO nova.compute.manager [-] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Took 2.29 seconds to deallocate network for instance.
Oct 02 13:16:31 compute-0 nova_compute[257802]: 2025-10-02 13:16:31.490 2 DEBUG nova.compute.manager [req-a47d4159-7c28-464a-aea3-c76fe9323f5f req-18ae1e59-6002-4b22-b321-11716defba0b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Received event network-vif-deleted-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:16:31 compute-0 nova_compute[257802]: 2025-10-02 13:16:31.490 2 INFO nova.compute.manager [req-a47d4159-7c28-464a-aea3-c76fe9323f5f req-18ae1e59-6002-4b22-b321-11716defba0b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Neutron deleted interface c7bc4d7b-3f13-4f9c-be12-6edcff4d424e; detaching it from the instance and deleting it from the info cache
Oct 02 13:16:31 compute-0 nova_compute[257802]: 2025-10-02 13:16:31.491 2 DEBUG nova.network.neutron [req-a47d4159-7c28-464a-aea3-c76fe9323f5f req-18ae1e59-6002-4b22-b321-11716defba0b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:16:31 compute-0 nova_compute[257802]: 2025-10-02 13:16:31.508 2 DEBUG nova.network.neutron [req-a2873c8a-41a3-49bd-b4b4-c43836f64978 req-83083cd7-2b76-4512-8533-b483ee86ccfc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Updated VIF entry in instance network info cache for port c7bc4d7b-3f13-4f9c-be12-6edcff4d424e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:16:31 compute-0 nova_compute[257802]: 2025-10-02 13:16:31.509 2 DEBUG nova.network.neutron [req-a2873c8a-41a3-49bd-b4b4-c43836f64978 req-83083cd7-2b76-4512-8533-b483ee86ccfc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Updating instance_info_cache with network_info: [{"id": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "address": "fa:16:3e:c7:1d:48", "network": {"id": "faf67f78-db61-49af-8b59-741a7fc7cb4c", "bridge": "br-int", "label": "tempest-network-smoke--1680552744", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7bc4d7b-3f", "ovs_interfaceid": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:16:31 compute-0 nova_compute[257802]: 2025-10-02 13:16:31.525 2 DEBUG nova.compute.manager [req-a47d4159-7c28-464a-aea3-c76fe9323f5f req-18ae1e59-6002-4b22-b321-11716defba0b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Detach interface failed, port_id=c7bc4d7b-3f13-4f9c-be12-6edcff4d424e, reason: Instance 0a899aee-b0ae-4350-9f56-446d42ef77d2 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 13:16:31 compute-0 nova_compute[257802]: 2025-10-02 13:16:31.545 2 DEBUG oslo_concurrency.lockutils [None req-64c82187-7ff3-46a3-a17d-ca91b97976d2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:16:31 compute-0 nova_compute[257802]: 2025-10-02 13:16:31.546 2 DEBUG oslo_concurrency.lockutils [None req-64c82187-7ff3-46a3-a17d-ca91b97976d2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:16:31 compute-0 nova_compute[257802]: 2025-10-02 13:16:31.554 2 DEBUG oslo_concurrency.lockutils [req-a2873c8a-41a3-49bd-b4b4-c43836f64978 req-83083cd7-2b76-4512-8533-b483ee86ccfc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-0a899aee-b0ae-4350-9f56-446d42ef77d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:16:31 compute-0 nova_compute[257802]: 2025-10-02 13:16:31.555 2 DEBUG oslo_concurrency.lockutils [None req-64c82187-7ff3-46a3-a17d-ca91b97976d2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:16:31 compute-0 nova_compute[257802]: 2025-10-02 13:16:31.592 2 INFO nova.scheduler.client.report [None req-64c82187-7ff3-46a3-a17d-ca91b97976d2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Deleted allocations for instance 0a899aee-b0ae-4350-9f56-446d42ef77d2
Oct 02 13:16:31 compute-0 nova_compute[257802]: 2025-10-02 13:16:31.679 2 DEBUG oslo_concurrency.lockutils [None req-64c82187-7ff3-46a3-a17d-ca91b97976d2 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.378s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:16:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:31.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:31 compute-0 ceph-mon[73607]: pgmap v3566: 305 pgs: 305 active+clean; 152 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 555 KiB/s rd, 18 KiB/s wr, 61 op/s
Oct 02 13:16:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:16:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:32.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:16:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3567: 305 pgs: 305 active+clean; 152 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 481 KiB/s rd, 15 KiB/s wr, 53 op/s
Oct 02 13:16:33 compute-0 nova_compute[257802]: 2025-10-02 13:16:33.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:33 compute-0 podman[411373]: 2025-10-02 13:16:33.958817126 +0000 UTC m=+0.101514619 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 13:16:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:33.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:34 compute-0 ceph-mon[73607]: pgmap v3567: 305 pgs: 305 active+clean; 152 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 481 KiB/s rd, 15 KiB/s wr, 53 op/s
Oct 02 13:16:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:34.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3568: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 469 KiB/s rd, 15 KiB/s wr, 61 op/s
Oct 02 13:16:35 compute-0 ceph-mon[73607]: pgmap v3568: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 469 KiB/s rd, 15 KiB/s wr, 61 op/s
Oct 02 13:16:35 compute-0 nova_compute[257802]: 2025-10-02 13:16:35.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:35 compute-0 nova_compute[257802]: 2025-10-02 13:16:35.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:16:35 compute-0 nova_compute[257802]: 2025-10-02 13:16:35.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:35.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:36 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1757148826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:36.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3569: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 101 KiB/s rd, 12 KiB/s wr, 33 op/s
Oct 02 13:16:37 compute-0 nova_compute[257802]: 2025-10-02 13:16:37.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:37 compute-0 nova_compute[257802]: 2025-10-02 13:16:37.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:37 compute-0 ceph-mon[73607]: pgmap v3569: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 101 KiB/s rd, 12 KiB/s wr, 33 op/s
Oct 02 13:16:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2173683488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:37.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:38 compute-0 nova_compute[257802]: 2025-10-02 13:16:38.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:38.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:38 compute-0 nova_compute[257802]: 2025-10-02 13:16:38.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:38 compute-0 nova_compute[257802]: 2025-10-02 13:16:38.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3570: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 12 KiB/s wr, 29 op/s
Oct 02 13:16:39 compute-0 ceph-mon[73607]: pgmap v3570: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 12 KiB/s wr, 29 op/s
Oct 02 13:16:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:39.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:40.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:40 compute-0 nova_compute[257802]: 2025-10-02 13:16:40.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3571: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 9.8 KiB/s wr, 28 op/s
Oct 02 13:16:41 compute-0 nova_compute[257802]: 2025-10-02 13:16:41.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:41 compute-0 ceph-mon[73607]: pgmap v3571: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 9.8 KiB/s wr, 28 op/s
Oct 02 13:16:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:16:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:41.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:16:42 compute-0 nova_compute[257802]: 2025-10-02 13:16:42.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:16:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:42.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:16:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:16:42
Oct 02 13:16:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:16:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:16:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes']
Oct 02 13:16:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:16:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:16:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:16:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:16:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:16:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:16:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:16:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3572: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.4 KiB/s rd, 0 B/s wr, 10 op/s
Oct 02 13:16:43 compute-0 ceph-mgr[73901]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3158772141
Oct 02 13:16:43 compute-0 nova_compute[257802]: 2025-10-02 13:16:43.546 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410988.5444295, 0a899aee-b0ae-4350-9f56-446d42ef77d2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:16:43 compute-0 nova_compute[257802]: 2025-10-02 13:16:43.547 2 INFO nova.compute.manager [-] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] VM Stopped (Lifecycle Event)
Oct 02 13:16:43 compute-0 nova_compute[257802]: 2025-10-02 13:16:43.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:16:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:16:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:16:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:16:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:16:43 compute-0 nova_compute[257802]: 2025-10-02 13:16:43.609 2 DEBUG nova.compute.manager [None req-3d551057-be0e-476f-93a8-7bcb91c96dbd - - - - - -] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:16:43 compute-0 ceph-mon[73607]: pgmap v3572: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.4 KiB/s rd, 0 B/s wr, 10 op/s
Oct 02 13:16:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:43.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:44.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:16:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:16:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:16:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:16:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:16:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3573: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.4 KiB/s rd, 0 B/s wr, 10 op/s
Oct 02 13:16:45 compute-0 nova_compute[257802]: 2025-10-02 13:16:45.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:45 compute-0 ceph-mon[73607]: pgmap v3573: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.4 KiB/s rd, 0 B/s wr, 10 op/s
Oct 02 13:16:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:45.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:46 compute-0 nova_compute[257802]: 2025-10-02 13:16:46.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:16:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:46.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:16:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3574: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:16:47 compute-0 ceph-mon[73607]: pgmap v3574: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:16:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:47.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:48.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:48 compute-0 nova_compute[257802]: 2025-10-02 13:16:48.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3575: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:16:49 compute-0 sudo[411409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:49 compute-0 sudo[411409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:49 compute-0 sudo[411409]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:49 compute-0 sudo[411434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:16:49 compute-0 sudo[411434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:49 compute-0 sudo[411434]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:49 compute-0 sudo[411459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:49 compute-0 sudo[411459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:49 compute-0 sudo[411459]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:49 compute-0 sudo[411484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:16:49 compute-0 sudo[411484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:49 compute-0 sudo[411524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:49 compute-0 sudo[411524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:49 compute-0 sudo[411524]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:49.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:50 compute-0 ceph-mon[73607]: pgmap v3575: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:16:50 compute-0 sudo[411551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:50 compute-0 sudo[411551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:50 compute-0 sudo[411551]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:50 compute-0 nova_compute[257802]: 2025-10-02 13:16:50.111 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:50 compute-0 sudo[411484]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:16:50 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:16:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:16:50 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:16:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:16:50 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:16:50 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 680b7a64-1e06-45ae-98b1-04c5103f69ae does not exist
Oct 02 13:16:50 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 98999b3a-086e-4c90-813c-db92ef18c5fe does not exist
Oct 02 13:16:50 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev b722ece8-770f-469b-923e-2cd59e484052 does not exist
Oct 02 13:16:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:16:50 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:16:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:16:50 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:16:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:16:50 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:16:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:50.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:50 compute-0 sudo[411590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:50 compute-0 sudo[411590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:50 compute-0 sudo[411590]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:50 compute-0 sudo[411615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:16:50 compute-0 sudo[411615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:50 compute-0 sudo[411615]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:50 compute-0 sudo[411640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:50 compute-0 sudo[411640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:50 compute-0 sudo[411640]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:50 compute-0 sudo[411665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:16:50 compute-0 sudo[411665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:50 compute-0 nova_compute[257802]: 2025-10-02 13:16:50.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:50 compute-0 podman[411735]: 2025-10-02 13:16:50.797306749 +0000 UTC m=+0.041203811 container create e4e86e38d3c14bf54a55b70551093641b9d6fe44f2c9adf68b3586eadde28710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brattain, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 13:16:50 compute-0 systemd[1]: Started libpod-conmon-e4e86e38d3c14bf54a55b70551093641b9d6fe44f2c9adf68b3586eadde28710.scope.
Oct 02 13:16:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3576: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:16:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:16:50 compute-0 podman[411735]: 2025-10-02 13:16:50.779872682 +0000 UTC m=+0.023769764 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:16:50 compute-0 podman[411735]: 2025-10-02 13:16:50.87928436 +0000 UTC m=+0.123181442 container init e4e86e38d3c14bf54a55b70551093641b9d6fe44f2c9adf68b3586eadde28710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brattain, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 13:16:50 compute-0 podman[411735]: 2025-10-02 13:16:50.886008155 +0000 UTC m=+0.129905217 container start e4e86e38d3c14bf54a55b70551093641b9d6fe44f2c9adf68b3586eadde28710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brattain, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 13:16:50 compute-0 podman[411735]: 2025-10-02 13:16:50.889296975 +0000 UTC m=+0.133194057 container attach e4e86e38d3c14bf54a55b70551093641b9d6fe44f2c9adf68b3586eadde28710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brattain, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 13:16:50 compute-0 distracted_brattain[411752]: 167 167
Oct 02 13:16:50 compute-0 systemd[1]: libpod-e4e86e38d3c14bf54a55b70551093641b9d6fe44f2c9adf68b3586eadde28710.scope: Deactivated successfully.
Oct 02 13:16:50 compute-0 conmon[411752]: conmon e4e86e38d3c14bf54a55 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e4e86e38d3c14bf54a55b70551093641b9d6fe44f2c9adf68b3586eadde28710.scope/container/memory.events
Oct 02 13:16:50 compute-0 podman[411735]: 2025-10-02 13:16:50.893007256 +0000 UTC m=+0.136904318 container died e4e86e38d3c14bf54a55b70551093641b9d6fe44f2c9adf68b3586eadde28710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 13:16:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d839d36e1325ed9687d24d96d430eefc11c4c50be1d2fe45450a2e45fbabddeb-merged.mount: Deactivated successfully.
Oct 02 13:16:50 compute-0 podman[411735]: 2025-10-02 13:16:50.932932625 +0000 UTC m=+0.176829687 container remove e4e86e38d3c14bf54a55b70551093641b9d6fe44f2c9adf68b3586eadde28710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brattain, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 13:16:50 compute-0 systemd[1]: libpod-conmon-e4e86e38d3c14bf54a55b70551093641b9d6fe44f2c9adf68b3586eadde28710.scope: Deactivated successfully.
Oct 02 13:16:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:16:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:16:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:16:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:16:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:16:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:16:51 compute-0 podman[411776]: 2025-10-02 13:16:51.083308352 +0000 UTC m=+0.040290159 container create 5f3097d48f789f15fdc1fa26c84be1aba49ea6601eec4972aca51cbc1a00febc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_burnell, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:16:51 compute-0 nova_compute[257802]: 2025-10-02 13:16:51.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:51 compute-0 nova_compute[257802]: 2025-10-02 13:16:51.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:16:51 compute-0 nova_compute[257802]: 2025-10-02 13:16:51.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:16:51 compute-0 nova_compute[257802]: 2025-10-02 13:16:51.114 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:16:51 compute-0 systemd[1]: Started libpod-conmon-5f3097d48f789f15fdc1fa26c84be1aba49ea6601eec4972aca51cbc1a00febc.scope.
Oct 02 13:16:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:16:51 compute-0 podman[411776]: 2025-10-02 13:16:51.067088904 +0000 UTC m=+0.024070731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:16:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9d3ca1c155d4d6c413d2ec8b03145e16e1b336eb46f24126e83bcf34a3bf2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:16:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9d3ca1c155d4d6c413d2ec8b03145e16e1b336eb46f24126e83bcf34a3bf2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:16:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9d3ca1c155d4d6c413d2ec8b03145e16e1b336eb46f24126e83bcf34a3bf2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:16:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9d3ca1c155d4d6c413d2ec8b03145e16e1b336eb46f24126e83bcf34a3bf2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:16:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9d3ca1c155d4d6c413d2ec8b03145e16e1b336eb46f24126e83bcf34a3bf2e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:16:51 compute-0 podman[411776]: 2025-10-02 13:16:51.176688031 +0000 UTC m=+0.133669848 container init 5f3097d48f789f15fdc1fa26c84be1aba49ea6601eec4972aca51cbc1a00febc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_burnell, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:16:51 compute-0 podman[411776]: 2025-10-02 13:16:51.187030915 +0000 UTC m=+0.144012722 container start 5f3097d48f789f15fdc1fa26c84be1aba49ea6601eec4972aca51cbc1a00febc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:16:51 compute-0 podman[411776]: 2025-10-02 13:16:51.189894706 +0000 UTC m=+0.146876513 container attach 5f3097d48f789f15fdc1fa26c84be1aba49ea6601eec4972aca51cbc1a00febc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_burnell, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:16:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:16:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:51.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:16:52 compute-0 reverent_burnell[411792]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:16:52 compute-0 reverent_burnell[411792]: --> relative data size: 1.0
Oct 02 13:16:52 compute-0 reverent_burnell[411792]: --> All data devices are unavailable
Oct 02 13:16:52 compute-0 ceph-mon[73607]: pgmap v3576: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:16:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2965026992' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:52 compute-0 systemd[1]: libpod-5f3097d48f789f15fdc1fa26c84be1aba49ea6601eec4972aca51cbc1a00febc.scope: Deactivated successfully.
Oct 02 13:16:52 compute-0 podman[411776]: 2025-10-02 13:16:52.066392607 +0000 UTC m=+1.023374404 container died 5f3097d48f789f15fdc1fa26c84be1aba49ea6601eec4972aca51cbc1a00febc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_burnell, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:16:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc9d3ca1c155d4d6c413d2ec8b03145e16e1b336eb46f24126e83bcf34a3bf2e-merged.mount: Deactivated successfully.
Oct 02 13:16:52 compute-0 podman[411776]: 2025-10-02 13:16:52.129171156 +0000 UTC m=+1.086152963 container remove 5f3097d48f789f15fdc1fa26c84be1aba49ea6601eec4972aca51cbc1a00febc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_burnell, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:16:52 compute-0 systemd[1]: libpod-conmon-5f3097d48f789f15fdc1fa26c84be1aba49ea6601eec4972aca51cbc1a00febc.scope: Deactivated successfully.
Oct 02 13:16:52 compute-0 sudo[411665]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:52 compute-0 podman[411808]: 2025-10-02 13:16:52.165704492 +0000 UTC m=+0.070963202 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 13:16:52 compute-0 podman[411816]: 2025-10-02 13:16:52.170284474 +0000 UTC m=+0.074857187 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 13:16:52 compute-0 podman[411817]: 2025-10-02 13:16:52.170510769 +0000 UTC m=+0.068244885 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 13:16:52 compute-0 sudo[411878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:52 compute-0 sudo[411878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:52 compute-0 sudo[411878]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:52 compute-0 sudo[411903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:16:52 compute-0 sudo[411903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:52 compute-0 sudo[411903]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:52 compute-0 sudo[411928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:52.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:52 compute-0 sudo[411928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:52 compute-0 sudo[411928]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:52 compute-0 sudo[411953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 13:16:52 compute-0 sudo[411953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:52 compute-0 podman[412017]: 2025-10-02 13:16:52.644657775 +0000 UTC m=+0.034268672 container create 6da35b41d78d95586ed3eb27c1a3a4f687e389a610b451e802f632fb47eb6ca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_euclid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:16:52 compute-0 systemd[1]: Started libpod-conmon-6da35b41d78d95586ed3eb27c1a3a4f687e389a610b451e802f632fb47eb6ca6.scope.
Oct 02 13:16:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:16:52 compute-0 podman[412017]: 2025-10-02 13:16:52.709810392 +0000 UTC m=+0.099421289 container init 6da35b41d78d95586ed3eb27c1a3a4f687e389a610b451e802f632fb47eb6ca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_euclid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:16:52 compute-0 podman[412017]: 2025-10-02 13:16:52.71622995 +0000 UTC m=+0.105840857 container start 6da35b41d78d95586ed3eb27c1a3a4f687e389a610b451e802f632fb47eb6ca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:16:52 compute-0 podman[412017]: 2025-10-02 13:16:52.719292275 +0000 UTC m=+0.108903172 container attach 6da35b41d78d95586ed3eb27c1a3a4f687e389a610b451e802f632fb47eb6ca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:16:52 compute-0 objective_euclid[412033]: 167 167
Oct 02 13:16:52 compute-0 systemd[1]: libpod-6da35b41d78d95586ed3eb27c1a3a4f687e389a610b451e802f632fb47eb6ca6.scope: Deactivated successfully.
Oct 02 13:16:52 compute-0 conmon[412033]: conmon 6da35b41d78d95586ed3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6da35b41d78d95586ed3eb27c1a3a4f687e389a610b451e802f632fb47eb6ca6.scope/container/memory.events
Oct 02 13:16:52 compute-0 podman[412017]: 2025-10-02 13:16:52.721399287 +0000 UTC m=+0.111010184 container died 6da35b41d78d95586ed3eb27c1a3a4f687e389a610b451e802f632fb47eb6ca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_euclid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 13:16:52 compute-0 podman[412017]: 2025-10-02 13:16:52.63021257 +0000 UTC m=+0.019823487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:16:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-330195b3ffaf23a8ccbb1e574e378c58ef5ac3772007194d83b6ad6676a2d327-merged.mount: Deactivated successfully.
Oct 02 13:16:52 compute-0 podman[412017]: 2025-10-02 13:16:52.752426057 +0000 UTC m=+0.142036944 container remove 6da35b41d78d95586ed3eb27c1a3a4f687e389a610b451e802f632fb47eb6ca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 13:16:52 compute-0 systemd[1]: libpod-conmon-6da35b41d78d95586ed3eb27c1a3a4f687e389a610b451e802f632fb47eb6ca6.scope: Deactivated successfully.
Oct 02 13:16:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3577: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:16:52 compute-0 podman[412056]: 2025-10-02 13:16:52.895188688 +0000 UTC m=+0.038204789 container create c000d88604a11e4117ae7c1b5224075258981e3fe1349f78feddafbc0a616f37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 13:16:52 compute-0 systemd[1]: Started libpod-conmon-c000d88604a11e4117ae7c1b5224075258981e3fe1349f78feddafbc0a616f37.scope.
Oct 02 13:16:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:16:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/704dc8da9e07ed40718ad4fead8ec15a53b290126854ab7ee43562425b75d590/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:16:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/704dc8da9e07ed40718ad4fead8ec15a53b290126854ab7ee43562425b75d590/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:16:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/704dc8da9e07ed40718ad4fead8ec15a53b290126854ab7ee43562425b75d590/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:16:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/704dc8da9e07ed40718ad4fead8ec15a53b290126854ab7ee43562425b75d590/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:16:52 compute-0 podman[412056]: 2025-10-02 13:16:52.878667313 +0000 UTC m=+0.021683434 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:16:52 compute-0 podman[412056]: 2025-10-02 13:16:52.98380639 +0000 UTC m=+0.126822491 container init c000d88604a11e4117ae7c1b5224075258981e3fe1349f78feddafbc0a616f37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_murdock, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 13:16:52 compute-0 podman[412056]: 2025-10-02 13:16:52.995180109 +0000 UTC m=+0.138196210 container start c000d88604a11e4117ae7c1b5224075258981e3fe1349f78feddafbc0a616f37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_murdock, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:16:52 compute-0 podman[412056]: 2025-10-02 13:16:52.998994723 +0000 UTC m=+0.142010904 container attach c000d88604a11e4117ae7c1b5224075258981e3fe1349f78feddafbc0a616f37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_murdock, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 13:16:53 compute-0 ceph-mon[73607]: pgmap v3577: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:16:53 compute-0 nova_compute[257802]: 2025-10-02 13:16:53.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:53 compute-0 busy_murdock[412074]: {
Oct 02 13:16:53 compute-0 busy_murdock[412074]:     "1": [
Oct 02 13:16:53 compute-0 busy_murdock[412074]:         {
Oct 02 13:16:53 compute-0 busy_murdock[412074]:             "devices": [
Oct 02 13:16:53 compute-0 busy_murdock[412074]:                 "/dev/loop3"
Oct 02 13:16:53 compute-0 busy_murdock[412074]:             ],
Oct 02 13:16:53 compute-0 busy_murdock[412074]:             "lv_name": "ceph_lv0",
Oct 02 13:16:53 compute-0 busy_murdock[412074]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:16:53 compute-0 busy_murdock[412074]:             "lv_size": "7511998464",
Oct 02 13:16:53 compute-0 busy_murdock[412074]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:16:53 compute-0 busy_murdock[412074]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:16:53 compute-0 busy_murdock[412074]:             "name": "ceph_lv0",
Oct 02 13:16:53 compute-0 busy_murdock[412074]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:16:53 compute-0 busy_murdock[412074]:             "tags": {
Oct 02 13:16:53 compute-0 busy_murdock[412074]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:16:53 compute-0 busy_murdock[412074]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:16:53 compute-0 busy_murdock[412074]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:16:53 compute-0 busy_murdock[412074]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:16:53 compute-0 busy_murdock[412074]:                 "ceph.cluster_name": "ceph",
Oct 02 13:16:53 compute-0 busy_murdock[412074]:                 "ceph.crush_device_class": "",
Oct 02 13:16:53 compute-0 busy_murdock[412074]:                 "ceph.encrypted": "0",
Oct 02 13:16:53 compute-0 busy_murdock[412074]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:16:53 compute-0 busy_murdock[412074]:                 "ceph.osd_id": "1",
Oct 02 13:16:53 compute-0 busy_murdock[412074]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:16:53 compute-0 busy_murdock[412074]:                 "ceph.type": "block",
Oct 02 13:16:53 compute-0 busy_murdock[412074]:                 "ceph.vdo": "0"
Oct 02 13:16:53 compute-0 busy_murdock[412074]:             },
Oct 02 13:16:53 compute-0 busy_murdock[412074]:             "type": "block",
Oct 02 13:16:53 compute-0 busy_murdock[412074]:             "vg_name": "ceph_vg0"
Oct 02 13:16:53 compute-0 busy_murdock[412074]:         }
Oct 02 13:16:53 compute-0 busy_murdock[412074]:     ]
Oct 02 13:16:53 compute-0 busy_murdock[412074]: }
Oct 02 13:16:53 compute-0 systemd[1]: libpod-c000d88604a11e4117ae7c1b5224075258981e3fe1349f78feddafbc0a616f37.scope: Deactivated successfully.
Oct 02 13:16:53 compute-0 podman[412056]: 2025-10-02 13:16:53.830425569 +0000 UTC m=+0.973441670 container died c000d88604a11e4117ae7c1b5224075258981e3fe1349f78feddafbc0a616f37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_murdock, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 13:16:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-704dc8da9e07ed40718ad4fead8ec15a53b290126854ab7ee43562425b75d590-merged.mount: Deactivated successfully.
Oct 02 13:16:53 compute-0 podman[412056]: 2025-10-02 13:16:53.887407555 +0000 UTC m=+1.030423656 container remove c000d88604a11e4117ae7c1b5224075258981e3fe1349f78feddafbc0a616f37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 13:16:53 compute-0 systemd[1]: libpod-conmon-c000d88604a11e4117ae7c1b5224075258981e3fe1349f78feddafbc0a616f37.scope: Deactivated successfully.
Oct 02 13:16:53 compute-0 sudo[411953]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:53 compute-0 sudo[412095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:53 compute-0 sudo[412095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:53 compute-0 sudo[412095]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:54.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:54 compute-0 sudo[412120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:16:54 compute-0 sudo[412120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:54 compute-0 sudo[412120]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4278459100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:54 compute-0 sudo[412145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:54 compute-0 sudo[412145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:54 compute-0 sudo[412145]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:54 compute-0 sudo[412170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 13:16:54 compute-0 sudo[412170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:54.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:54 compute-0 podman[412235]: 2025-10-02 13:16:54.441230885 +0000 UTC m=+0.039721936 container create eafabd608241eaaba71d340a132307c7a16a1d2a408d50d0075d2e00211f41b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 13:16:54 compute-0 systemd[1]: Started libpod-conmon-eafabd608241eaaba71d340a132307c7a16a1d2a408d50d0075d2e00211f41b5.scope.
Oct 02 13:16:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:16:54 compute-0 podman[412235]: 2025-10-02 13:16:54.5132306 +0000 UTC m=+0.111721671 container init eafabd608241eaaba71d340a132307c7a16a1d2a408d50d0075d2e00211f41b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_elgamal, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 13:16:54 compute-0 podman[412235]: 2025-10-02 13:16:54.42551633 +0000 UTC m=+0.024007401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:16:54 compute-0 podman[412235]: 2025-10-02 13:16:54.519825042 +0000 UTC m=+0.118316093 container start eafabd608241eaaba71d340a132307c7a16a1d2a408d50d0075d2e00211f41b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_elgamal, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 13:16:54 compute-0 podman[412235]: 2025-10-02 13:16:54.522293373 +0000 UTC m=+0.120784424 container attach eafabd608241eaaba71d340a132307c7a16a1d2a408d50d0075d2e00211f41b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 13:16:54 compute-0 goofy_elgamal[412251]: 167 167
Oct 02 13:16:54 compute-0 systemd[1]: libpod-eafabd608241eaaba71d340a132307c7a16a1d2a408d50d0075d2e00211f41b5.scope: Deactivated successfully.
Oct 02 13:16:54 compute-0 podman[412235]: 2025-10-02 13:16:54.52503932 +0000 UTC m=+0.123530371 container died eafabd608241eaaba71d340a132307c7a16a1d2a408d50d0075d2e00211f41b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_elgamal, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 13:16:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-db090fc46e456ba8678c18e559985af91be64983c5a47f8751d3ca4cb85b442a-merged.mount: Deactivated successfully.
Oct 02 13:16:54 compute-0 podman[412235]: 2025-10-02 13:16:54.561608266 +0000 UTC m=+0.160099307 container remove eafabd608241eaaba71d340a132307c7a16a1d2a408d50d0075d2e00211f41b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 13:16:54 compute-0 systemd[1]: libpod-conmon-eafabd608241eaaba71d340a132307c7a16a1d2a408d50d0075d2e00211f41b5.scope: Deactivated successfully.
Oct 02 13:16:54 compute-0 podman[412275]: 2025-10-02 13:16:54.694228999 +0000 UTC m=+0.021603072 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:16:54 compute-0 podman[412275]: 2025-10-02 13:16:54.812224672 +0000 UTC m=+0.139598735 container create 35ef45c688dcf62e540e74eaf6fce81a9b2082224c954d2d3138bba73c5c3871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 13:16:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3578: 305 pgs: 305 active+clean; 148 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1017 KiB/s wr, 25 op/s
Oct 02 13:16:54 compute-0 systemd[1]: Started libpod-conmon-35ef45c688dcf62e540e74eaf6fce81a9b2082224c954d2d3138bba73c5c3871.scope.
Oct 02 13:16:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:16:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f567e7eb727051016971a8a5aab4fa7c425d6faffa24b843c2e455128d137d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:16:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f567e7eb727051016971a8a5aab4fa7c425d6faffa24b843c2e455128d137d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:16:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f567e7eb727051016971a8a5aab4fa7c425d6faffa24b843c2e455128d137d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:16:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f567e7eb727051016971a8a5aab4fa7c425d6faffa24b843c2e455128d137d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:16:54 compute-0 podman[412275]: 2025-10-02 13:16:54.949740763 +0000 UTC m=+0.277114856 container init 35ef45c688dcf62e540e74eaf6fce81a9b2082224c954d2d3138bba73c5c3871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mclaren, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:16:54 compute-0 podman[412275]: 2025-10-02 13:16:54.956907768 +0000 UTC m=+0.284281841 container start 35ef45c688dcf62e540e74eaf6fce81a9b2082224c954d2d3138bba73c5c3871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 13:16:54 compute-0 podman[412275]: 2025-10-02 13:16:54.992563803 +0000 UTC m=+0.319937916 container attach 35ef45c688dcf62e540e74eaf6fce81a9b2082224c954d2d3138bba73c5c3871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mclaren, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:16:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:16:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:16:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:16:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:16:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005536216964451889 of space, bias 1.0, pg target 0.16608650893355667 quantized to 32 (current 32)
Oct 02 13:16:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:16:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:16:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:16:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:16:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:16:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:16:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:16:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:16:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:16:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:16:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:16:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:16:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:16:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:16:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:16:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:16:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:16:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:16:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1138178111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:55 compute-0 ceph-mon[73607]: pgmap v3578: 305 pgs: 305 active+clean; 148 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1017 KiB/s wr, 25 op/s
Oct 02 13:16:55 compute-0 objective_mclaren[412293]: {
Oct 02 13:16:55 compute-0 objective_mclaren[412293]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 13:16:55 compute-0 objective_mclaren[412293]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:16:55 compute-0 objective_mclaren[412293]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:16:55 compute-0 objective_mclaren[412293]:         "osd_id": 1,
Oct 02 13:16:55 compute-0 objective_mclaren[412293]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:16:55 compute-0 objective_mclaren[412293]:         "type": "bluestore"
Oct 02 13:16:55 compute-0 objective_mclaren[412293]:     }
Oct 02 13:16:55 compute-0 objective_mclaren[412293]: }
Oct 02 13:16:55 compute-0 systemd[1]: libpod-35ef45c688dcf62e540e74eaf6fce81a9b2082224c954d2d3138bba73c5c3871.scope: Deactivated successfully.
Oct 02 13:16:55 compute-0 podman[412275]: 2025-10-02 13:16:55.789465862 +0000 UTC m=+1.116839945 container died 35ef45c688dcf62e540e74eaf6fce81a9b2082224c954d2d3138bba73c5c3871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mclaren, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:16:55 compute-0 nova_compute[257802]: 2025-10-02 13:16:55.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:56.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f567e7eb727051016971a8a5aab4fa7c425d6faffa24b843c2e455128d137d7-merged.mount: Deactivated successfully.
Oct 02 13:16:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:56.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:56 compute-0 podman[412275]: 2025-10-02 13:16:56.389522684 +0000 UTC m=+1.716896747 container remove 35ef45c688dcf62e540e74eaf6fce81a9b2082224c954d2d3138bba73c5c3871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mclaren, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:16:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/993108586' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:16:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/993108586' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:16:56 compute-0 sudo[412170]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:16:56 compute-0 systemd[1]: libpod-conmon-35ef45c688dcf62e540e74eaf6fce81a9b2082224c954d2d3138bba73c5c3871.scope: Deactivated successfully.
Oct 02 13:16:56 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:16:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:16:56 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:16:56 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev b3259f5b-6c69-492c-92f6-1d9360733752 does not exist
Oct 02 13:16:56 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 0b22fc83-ce23-47c1-b14f-27c88adcd495 does not exist
Oct 02 13:16:56 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev dcbd40f1-bbce-440e-a8a0-2fa8973fc872 does not exist
Oct 02 13:16:56 compute-0 sudo[412325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:56 compute-0 sudo[412325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:56 compute-0 sudo[412325]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:56 compute-0 sudo[412351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:16:56 compute-0 sudo[412351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:56 compute-0 sudo[412351]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3579: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:16:57 compute-0 nova_compute[257802]: 2025-10-02 13:16:57.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:57 compute-0 nova_compute[257802]: 2025-10-02 13:16:57.120 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:16:57 compute-0 nova_compute[257802]: 2025-10-02 13:16:57.121 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:16:57 compute-0 nova_compute[257802]: 2025-10-02 13:16:57.121 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:16:57 compute-0 nova_compute[257802]: 2025-10-02 13:16:57.121 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:16:57 compute-0 nova_compute[257802]: 2025-10-02 13:16:57.121 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:16:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:16:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:16:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3897488117' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:16:57 compute-0 ceph-mon[73607]: pgmap v3579: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:16:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1857333575' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:16:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:16:57 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3297563423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:57 compute-0 nova_compute[257802]: 2025-10-02 13:16:57.567 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:16:57 compute-0 nova_compute[257802]: 2025-10-02 13:16:57.714 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:16:57 compute-0 nova_compute[257802]: 2025-10-02 13:16:57.715 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4149MB free_disk=20.976661682128906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:16:57 compute-0 nova_compute[257802]: 2025-10-02 13:16:57.715 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:16:57 compute-0 nova_compute[257802]: 2025-10-02 13:16:57.716 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:16:57 compute-0 nova_compute[257802]: 2025-10-02 13:16:57.779 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:16:57 compute-0 nova_compute[257802]: 2025-10-02 13:16:57.779 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:16:57 compute-0 nova_compute[257802]: 2025-10-02 13:16:57.851 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:16:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:58.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:16:58 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4217109709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:58 compute-0 nova_compute[257802]: 2025-10-02 13:16:58.279 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:16:58 compute-0 nova_compute[257802]: 2025-10-02 13:16:58.284 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:16:58 compute-0 nova_compute[257802]: 2025-10-02 13:16:58.298 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:16:58 compute-0 nova_compute[257802]: 2025-10-02 13:16:58.319 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:16:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:16:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:58.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:58 compute-0 nova_compute[257802]: 2025-10-02 13:16:58.319 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:16:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3297563423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4217109709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:58 compute-0 nova_compute[257802]: 2025-10-02 13:16:58.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3580: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:16:59 compute-0 ceph-mon[73607]: pgmap v3580: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:17:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:00.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:00.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:00 compute-0 nova_compute[257802]: 2025-10-02 13:17:00.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3581: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 129 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 02 13:17:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #180. Immutable memtables: 0.
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:17:01.388580) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 180
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411021388619, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 1214, "num_deletes": 257, "total_data_size": 1890824, "memory_usage": 1916080, "flush_reason": "Manual Compaction"}
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #181: started
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411021403187, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 181, "file_size": 1857610, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 78094, "largest_seqno": 79307, "table_properties": {"data_size": 1851818, "index_size": 3122, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12837, "raw_average_key_size": 20, "raw_value_size": 1839913, "raw_average_value_size": 2874, "num_data_blocks": 137, "num_entries": 640, "num_filter_entries": 640, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410922, "oldest_key_time": 1759410922, "file_creation_time": 1759411021, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 181, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 14643 microseconds, and 4289 cpu microseconds.
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:17:01.403220) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #181: 1857610 bytes OK
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:17:01.403235) [db/memtable_list.cc:519] [default] Level-0 commit table #181 started
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:17:01.405702) [db/memtable_list.cc:722] [default] Level-0 commit table #181: memtable #1 done
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:17:01.405713) EVENT_LOG_v1 {"time_micros": 1759411021405709, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:17:01.405728) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 1885357, prev total WAL file size 1885357, number of live WAL files 2.
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000177.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:17:01.406323) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323630' seq:72057594037927935, type:22 .. '6C6F676D0033353131' seq:0, type:0; will stop at (end)
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [181(1814KB)], [179(10MB)]
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411021406379, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [181], "files_L6": [179], "score": -1, "input_data_size": 13158857, "oldest_snapshot_seqno": -1}
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #182: 10499 keys, 13018954 bytes, temperature: kUnknown
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411021482788, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 182, "file_size": 13018954, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12951782, "index_size": 39795, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26309, "raw_key_size": 277739, "raw_average_key_size": 26, "raw_value_size": 12768734, "raw_average_value_size": 1216, "num_data_blocks": 1511, "num_entries": 10499, "num_filter_entries": 10499, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759411021, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:17:01.483115) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 13018954 bytes
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:17:01.484374) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.9 rd, 170.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 10.8 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(14.1) write-amplify(7.0) OK, records in: 11032, records dropped: 533 output_compression: NoCompression
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:17:01.484390) EVENT_LOG_v1 {"time_micros": 1759411021484382, "job": 112, "event": "compaction_finished", "compaction_time_micros": 76544, "compaction_time_cpu_micros": 44785, "output_level": 6, "num_output_files": 1, "total_output_size": 13018954, "num_input_records": 11032, "num_output_records": 10499, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000181.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411021484868, "job": 112, "event": "table_file_deletion", "file_number": 181}
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411021486679, "job": 112, "event": "table_file_deletion", "file_number": 179}
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:17:01.406214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:17:01.486816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:17:01.486840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:17:01.486842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:17:01.486844) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:17:01 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:17:01.486845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:17:01 compute-0 ceph-mon[73607]: pgmap v3581: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 129 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 02 13:17:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:02.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:02.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3582: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 129 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 02 13:17:03 compute-0 nova_compute[257802]: 2025-10-02 13:17:03.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:17:03.700 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=88, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=87) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:17:03 compute-0 nova_compute[257802]: 2025-10-02 13:17:03.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:03 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:17:03.702 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:17:03 compute-0 ceph-mon[73607]: pgmap v3582: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 129 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 02 13:17:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:04.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:04.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3583: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 73 op/s
Oct 02 13:17:04 compute-0 podman[412425]: 2025-10-02 13:17:04.95654491 +0000 UTC m=+0.089655819 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Oct 02 13:17:05 compute-0 nova_compute[257802]: 2025-10-02 13:17:05.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:05 compute-0 ceph-mon[73607]: pgmap v3583: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 73 op/s
Oct 02 13:17:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:06.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:06.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3584: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 810 KiB/s wr, 75 op/s
Oct 02 13:17:07 compute-0 ceph-mon[73607]: pgmap v3584: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 810 KiB/s wr, 75 op/s
Oct 02 13:17:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:08.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:08.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:08 compute-0 nova_compute[257802]: 2025-10-02 13:17:08.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3585: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 13:17:09 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:17:09.704 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '88'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:17:10 compute-0 ceph-mon[73607]: pgmap v3585: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 13:17:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:17:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:10.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:17:10 compute-0 sudo[412452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:10 compute-0 sudo[412452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:10 compute-0 sudo[412452]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:10 compute-0 sudo[412477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:10 compute-0 sudo[412477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:10 compute-0 sudo[412477]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:10.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:10 compute-0 nova_compute[257802]: 2025-10-02 13:17:10.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3586: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 13:17:11 compute-0 ceph-mon[73607]: pgmap v3586: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 13:17:11 compute-0 nova_compute[257802]: 2025-10-02 13:17:11.320 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:12.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:12.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:17:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:17:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:17:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:17:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:17:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:17:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3587: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 426 B/s wr, 61 op/s
Oct 02 13:17:13 compute-0 nova_compute[257802]: 2025-10-02 13:17:13.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:13 compute-0 ceph-mon[73607]: pgmap v3587: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 426 B/s wr, 61 op/s
Oct 02 13:17:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:14.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:14.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3588: 305 pgs: 305 active+clean; 180 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 95 op/s
Oct 02 13:17:15 compute-0 nova_compute[257802]: 2025-10-02 13:17:15.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:15 compute-0 ovn_controller[148183]: 2025-10-02T13:17:15Z|01009|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Oct 02 13:17:16 compute-0 ceph-mon[73607]: pgmap v3588: 305 pgs: 305 active+clean; 180 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 95 op/s
Oct 02 13:17:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:16.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:17:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:16.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:17:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3589: 305 pgs: 305 active+clean; 196 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 91 op/s
Oct 02 13:17:17 compute-0 ceph-mon[73607]: pgmap v3589: 305 pgs: 305 active+clean; 196 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 91 op/s
Oct 02 13:17:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:17:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:18.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:17:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:17:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:18.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:17:18 compute-0 nova_compute[257802]: 2025-10-02 13:17:18.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3590: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 13:17:19 compute-0 ceph-mon[73607]: pgmap v3590: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 13:17:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:20.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:20.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:20 compute-0 nova_compute[257802]: 2025-10-02 13:17:20.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3591: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 392 KiB/s rd, 2.2 MiB/s wr, 69 op/s
Oct 02 13:17:21 compute-0 ceph-mon[73607]: pgmap v3591: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 392 KiB/s rd, 2.2 MiB/s wr, 69 op/s
Oct 02 13:17:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:22.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:17:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:22.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:17:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3592: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 392 KiB/s rd, 2.2 MiB/s wr, 69 op/s
Oct 02 13:17:22 compute-0 podman[412509]: 2025-10-02 13:17:22.910841921 +0000 UTC m=+0.048486399 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 13:17:22 compute-0 podman[412510]: 2025-10-02 13:17:22.91652092 +0000 UTC m=+0.053227986 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 13:17:22 compute-0 podman[412511]: 2025-10-02 13:17:22.922751914 +0000 UTC m=+0.054379365 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:17:23 compute-0 nova_compute[257802]: 2025-10-02 13:17:23.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:24.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:24 compute-0 ceph-mon[73607]: pgmap v3592: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 392 KiB/s rd, 2.2 MiB/s wr, 69 op/s
Oct 02 13:17:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:24.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3593: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 393 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Oct 02 13:17:25 compute-0 ceph-mon[73607]: pgmap v3593: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 393 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Oct 02 13:17:25 compute-0 nova_compute[257802]: 2025-10-02 13:17:25.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:17:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:26.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:17:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:26.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3594: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 172 KiB/s rd, 936 KiB/s wr, 36 op/s
Oct 02 13:17:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:17:27.003 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:17:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:17:27.003 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:17:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:17:27.003 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:17:27 compute-0 ceph-mon[73607]: pgmap v3594: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 172 KiB/s rd, 936 KiB/s wr, 36 op/s
Oct 02 13:17:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:28.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:28.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:28 compute-0 nova_compute[257802]: 2025-10-02 13:17:28.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3595: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 41 KiB/s wr, 5 op/s
Oct 02 13:17:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1594647742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:17:29 compute-0 ceph-mon[73607]: pgmap v3595: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 41 KiB/s wr, 5 op/s
Oct 02 13:17:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3463319671' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:17:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:30.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:30 compute-0 sudo[412574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:30 compute-0 sudo[412574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:30 compute-0 sudo[412574]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:30 compute-0 sudo[412599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:30 compute-0 sudo[412599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:17:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:30.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:17:30 compute-0 sudo[412599]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:30 compute-0 nova_compute[257802]: 2025-10-02 13:17:30.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3596: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.5 KiB/s rd, 27 KiB/s wr, 7 op/s
Oct 02 13:17:31 compute-0 ceph-mon[73607]: pgmap v3596: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.5 KiB/s rd, 27 KiB/s wr, 7 op/s
Oct 02 13:17:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:32.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:32.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3597: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.8 KiB/s rd, 2.7 KiB/s wr, 5 op/s
Oct 02 13:17:33 compute-0 nova_compute[257802]: 2025-10-02 13:17:33.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:33 compute-0 ceph-mon[73607]: pgmap v3597: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.8 KiB/s rd, 2.7 KiB/s wr, 5 op/s
Oct 02 13:17:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:34.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:17:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:34.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:17:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3598: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1008 KiB/s rd, 2.7 KiB/s wr, 39 op/s
Oct 02 13:17:35 compute-0 nova_compute[257802]: 2025-10-02 13:17:35.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:35 compute-0 podman[412627]: 2025-10-02 13:17:35.978108496 +0000 UTC m=+0.104336089 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2)
Oct 02 13:17:36 compute-0 ceph-mon[73607]: pgmap v3598: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1008 KiB/s rd, 2.7 KiB/s wr, 39 op/s
Oct 02 13:17:36 compute-0 nova_compute[257802]: 2025-10-02 13:17:36.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:36 compute-0 nova_compute[257802]: 2025-10-02 13:17:36.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:17:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:36.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:36.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3599: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 71 op/s
Oct 02 13:17:37 compute-0 ceph-mon[73607]: pgmap v3599: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 71 op/s
Oct 02 13:17:38 compute-0 nova_compute[257802]: 2025-10-02 13:17:38.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:17:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:38.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:17:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:38.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:38 compute-0 nova_compute[257802]: 2025-10-02 13:17:38.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:38 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1240062327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:17:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3600: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 71 op/s
Oct 02 13:17:39 compute-0 nova_compute[257802]: 2025-10-02 13:17:39.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:39 compute-0 ceph-mon[73607]: pgmap v3600: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 71 op/s
Oct 02 13:17:39 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1698312112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:17:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:17:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:40.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:17:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:40.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:40 compute-0 nova_compute[257802]: 2025-10-02 13:17:40.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3601: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 71 op/s
Oct 02 13:17:41 compute-0 nova_compute[257802]: 2025-10-02 13:17:41.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:42 compute-0 ceph-mon[73607]: pgmap v3601: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 71 op/s
Oct 02 13:17:42 compute-0 nova_compute[257802]: 2025-10-02 13:17:42.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:42.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:42.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:17:42
Oct 02 13:17:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:17:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:17:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['vms', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'backups', 'default.rgw.control', 'images', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data']
Oct 02 13:17:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:17:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:17:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:17:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:17:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:17:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:17:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:17:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3602: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 66 op/s
Oct 02 13:17:43 compute-0 ceph-mon[73607]: pgmap v3602: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 66 op/s
Oct 02 13:17:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:17:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:17:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:17:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:17:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:17:43 compute-0 nova_compute[257802]: 2025-10-02 13:17:43.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:44.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:17:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:17:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:17:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:17:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:17:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:44.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3603: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.0 KiB/s wr, 80 op/s
Oct 02 13:17:45 compute-0 nova_compute[257802]: 2025-10-02 13:17:45.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:46 compute-0 ceph-mon[73607]: pgmap v3603: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.0 KiB/s wr, 80 op/s
Oct 02 13:17:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:46.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:46.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3604: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 12 KiB/s wr, 72 op/s
Oct 02 13:17:47 compute-0 ceph-mon[73607]: pgmap v3604: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 12 KiB/s wr, 72 op/s
Oct 02 13:17:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:17:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:48.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:17:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:48.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:48 compute-0 nova_compute[257802]: 2025-10-02 13:17:48.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3605: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 529 KiB/s rd, 12 KiB/s wr, 44 op/s
Oct 02 13:17:49 compute-0 ceph-mon[73607]: pgmap v3605: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 529 KiB/s rd, 12 KiB/s wr, 44 op/s
Oct 02 13:17:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:17:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:50.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:17:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:50.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:50 compute-0 sudo[412660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:50 compute-0 sudo[412660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:50 compute-0 sudo[412660]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:50 compute-0 sudo[412685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:50 compute-0 sudo[412685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:50 compute-0 sudo[412685]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:50 compute-0 nova_compute[257802]: 2025-10-02 13:17:50.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3606: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 530 KiB/s rd, 22 KiB/s wr, 45 op/s
Oct 02 13:17:51 compute-0 nova_compute[257802]: 2025-10-02 13:17:51.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:51 compute-0 nova_compute[257802]: 2025-10-02 13:17:51.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:51 compute-0 nova_compute[257802]: 2025-10-02 13:17:51.097 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:17:51 compute-0 nova_compute[257802]: 2025-10-02 13:17:51.097 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:17:51 compute-0 nova_compute[257802]: 2025-10-02 13:17:51.113 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:17:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:52 compute-0 ceph-mon[73607]: pgmap v3606: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 530 KiB/s rd, 22 KiB/s wr, 45 op/s
Oct 02 13:17:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:52.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:17:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:52.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:17:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3607: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 530 KiB/s rd, 22 KiB/s wr, 45 op/s
Oct 02 13:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:17:53.211 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=89, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=88) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:17:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:17:53.212 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:17:53 compute-0 nova_compute[257802]: 2025-10-02 13:17:53.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:53 compute-0 nova_compute[257802]: 2025-10-02 13:17:53.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:53 compute-0 podman[412712]: 2025-10-02 13:17:53.927592467 +0000 UTC m=+0.065983669 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:17:53 compute-0 podman[412713]: 2025-10-02 13:17:53.936634818 +0000 UTC m=+0.069360721 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 13:17:53 compute-0 podman[412717]: 2025-10-02 13:17:53.965975458 +0000 UTC m=+0.090782837 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:17:54 compute-0 ceph-mon[73607]: pgmap v3607: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 530 KiB/s rd, 22 KiB/s wr, 45 op/s
Oct 02 13:17:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2595809564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:17:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:54.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:54.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3608: 305 pgs: 305 active+clean; 155 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 542 KiB/s rd, 23 KiB/s wr, 62 op/s
Oct 02 13:17:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:17:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:17:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:17:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:17:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000923490426670389 of space, bias 1.0, pg target 0.2770471280011167 quantized to 32 (current 32)
Oct 02 13:17:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:17:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:17:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:17:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:17:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:17:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:17:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:17:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:17:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:17:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:17:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:17:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:17:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:17:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:17:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:17:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:17:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:17:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:17:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/584603106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:17:55 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:17:55.214 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '89'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:17:55 compute-0 nova_compute[257802]: 2025-10-02 13:17:55.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:56.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:56 compute-0 ceph-mon[73607]: pgmap v3608: 305 pgs: 305 active+clean; 155 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 542 KiB/s rd, 23 KiB/s wr, 62 op/s
Oct 02 13:17:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2770769049' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:17:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2770769049' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:17:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/148150339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:17:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:56.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3609: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 309 KiB/s rd, 21 KiB/s wr, 59 op/s
Oct 02 13:17:57 compute-0 nova_compute[257802]: 2025-10-02 13:17:57.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:57 compute-0 sudo[412772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:57 compute-0 sudo[412772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:57 compute-0 sudo[412772]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:57 compute-0 nova_compute[257802]: 2025-10-02 13:17:57.154 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:17:57 compute-0 nova_compute[257802]: 2025-10-02 13:17:57.154 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:17:57 compute-0 nova_compute[257802]: 2025-10-02 13:17:57.154 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:17:57 compute-0 nova_compute[257802]: 2025-10-02 13:17:57.154 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:17:57 compute-0 nova_compute[257802]: 2025-10-02 13:17:57.155 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:17:57 compute-0 sudo[412797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:17:57 compute-0 sudo[412797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:57 compute-0 sudo[412797]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:57 compute-0 sudo[412823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:57 compute-0 sudo[412823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:57 compute-0 sudo[412823]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:57 compute-0 ceph-mon[73607]: pgmap v3609: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 309 KiB/s rd, 21 KiB/s wr, 59 op/s
Oct 02 13:17:57 compute-0 sudo[412848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 13:17:57 compute-0 sudo[412848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:57 compute-0 sudo[412848]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:17:57 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:17:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 13:17:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:17:57 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:17:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 13:17:57 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:17:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:17:57 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4213980598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:17:57 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:17:57 compute-0 nova_compute[257802]: 2025-10-02 13:17:57.596 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:17:57 compute-0 sudo[412912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:57 compute-0 sudo[412912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:57 compute-0 sudo[412912]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:57 compute-0 sudo[412938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:17:57 compute-0 sudo[412938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:57 compute-0 sudo[412938]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:57 compute-0 sudo[412963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:57 compute-0 nova_compute[257802]: 2025-10-02 13:17:57.743 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:17:57 compute-0 nova_compute[257802]: 2025-10-02 13:17:57.745 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4192MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:17:57 compute-0 nova_compute[257802]: 2025-10-02 13:17:57.745 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:17:57 compute-0 sudo[412963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:57 compute-0 nova_compute[257802]: 2025-10-02 13:17:57.746 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:17:57 compute-0 sudo[412963]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:57 compute-0 sudo[412988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:17:57 compute-0 sudo[412988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:57 compute-0 nova_compute[257802]: 2025-10-02 13:17:57.831 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:17:57 compute-0 nova_compute[257802]: 2025-10-02 13:17:57.832 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:17:57 compute-0 nova_compute[257802]: 2025-10-02 13:17:57.888 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:17:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:17:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:58.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:17:58 compute-0 sudo[412988]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:17:58 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/621150503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:17:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:17:58 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:17:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:17:58 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:17:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:17:58 compute-0 nova_compute[257802]: 2025-10-02 13:17:58.300 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:17:58 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:17:58 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev e05b00b2-ae39-4782-b4ad-7aefe619f787 does not exist
Oct 02 13:17:58 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev de121d98-b9bf-4106-a94b-bedfa0da7f4e does not exist
Oct 02 13:17:58 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 16a769ca-4a72-4fe6-937e-e5f412202859 does not exist
Oct 02 13:17:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:17:58 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:17:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:17:58 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:17:58 compute-0 nova_compute[257802]: 2025-10-02 13:17:58.308 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:17:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:17:58 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:17:58 compute-0 nova_compute[257802]: 2025-10-02 13:17:58.328 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:17:58 compute-0 nova_compute[257802]: 2025-10-02 13:17:58.330 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:17:58 compute-0 nova_compute[257802]: 2025-10-02 13:17:58.330 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:17:58 compute-0 sudo[413068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:58 compute-0 sudo[413068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:58 compute-0 sudo[413068]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:17:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:58.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:58 compute-0 sudo[413093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:17:58 compute-0 sudo[413093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:58 compute-0 sudo[413093]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:58 compute-0 sudo[413118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:58 compute-0 sudo[413118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:58 compute-0 sudo[413118]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:58 compute-0 sudo[413143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:17:58 compute-0 sudo[413143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:58 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:17:58 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:17:58 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:17:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4213980598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:17:58 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:17:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/621150503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:17:58 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:17:58 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:17:58 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:17:58 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:17:58 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:17:58 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:17:58 compute-0 nova_compute[257802]: 2025-10-02 13:17:58.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:58 compute-0 podman[413209]: 2025-10-02 13:17:58.827614069 +0000 UTC m=+0.036192848 container create c9f2a6fe3e04487bbbed70a1c5f69920f2ce43ae78cce9892c8dc39aea2308a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 13:17:58 compute-0 systemd[1]: Started libpod-conmon-c9f2a6fe3e04487bbbed70a1c5f69920f2ce43ae78cce9892c8dc39aea2308a3.scope.
Oct 02 13:17:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3610: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 70 KiB/s rd, 11 KiB/s wr, 32 op/s
Oct 02 13:17:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:17:58 compute-0 podman[413209]: 2025-10-02 13:17:58.811930195 +0000 UTC m=+0.020508984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:17:58 compute-0 podman[413209]: 2025-10-02 13:17:58.911283321 +0000 UTC m=+0.119862090 container init c9f2a6fe3e04487bbbed70a1c5f69920f2ce43ae78cce9892c8dc39aea2308a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 13:17:58 compute-0 podman[413209]: 2025-10-02 13:17:58.917252378 +0000 UTC m=+0.125831147 container start c9f2a6fe3e04487bbbed70a1c5f69920f2ce43ae78cce9892c8dc39aea2308a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:17:58 compute-0 podman[413209]: 2025-10-02 13:17:58.919919742 +0000 UTC m=+0.128498511 container attach c9f2a6fe3e04487bbbed70a1c5f69920f2ce43ae78cce9892c8dc39aea2308a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 13:17:58 compute-0 stoic_blackwell[413225]: 167 167
Oct 02 13:17:58 compute-0 systemd[1]: libpod-c9f2a6fe3e04487bbbed70a1c5f69920f2ce43ae78cce9892c8dc39aea2308a3.scope: Deactivated successfully.
Oct 02 13:17:58 compute-0 conmon[413225]: conmon c9f2a6fe3e04487bbbed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c9f2a6fe3e04487bbbed70a1c5f69920f2ce43ae78cce9892c8dc39aea2308a3.scope/container/memory.events
Oct 02 13:17:58 compute-0 podman[413209]: 2025-10-02 13:17:58.924860354 +0000 UTC m=+0.133439163 container died c9f2a6fe3e04487bbbed70a1c5f69920f2ce43ae78cce9892c8dc39aea2308a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 13:17:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2a33b1dae184dadb8c9a8b63bee43f7dabf3360622972c213ec70310b65a9e0-merged.mount: Deactivated successfully.
Oct 02 13:17:58 compute-0 podman[413209]: 2025-10-02 13:17:58.974163803 +0000 UTC m=+0.182742582 container remove c9f2a6fe3e04487bbbed70a1c5f69920f2ce43ae78cce9892c8dc39aea2308a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:17:58 compute-0 systemd[1]: libpod-conmon-c9f2a6fe3e04487bbbed70a1c5f69920f2ce43ae78cce9892c8dc39aea2308a3.scope: Deactivated successfully.
Oct 02 13:17:59 compute-0 podman[413249]: 2025-10-02 13:17:59.128929408 +0000 UTC m=+0.037625474 container create ef9128c29f8fc6ff2f357797c93eb4d01b4c3f2f667be4f216a3fc83248183e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jemison, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:17:59 compute-0 systemd[1]: Started libpod-conmon-ef9128c29f8fc6ff2f357797c93eb4d01b4c3f2f667be4f216a3fc83248183e8.scope.
Oct 02 13:17:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:17:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b2652532a0b468ad31fe239a13b6ddedb7bb6bbb4ebba67b693e5f522f91a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:17:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b2652532a0b468ad31fe239a13b6ddedb7bb6bbb4ebba67b693e5f522f91a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:17:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b2652532a0b468ad31fe239a13b6ddedb7bb6bbb4ebba67b693e5f522f91a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:17:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b2652532a0b468ad31fe239a13b6ddedb7bb6bbb4ebba67b693e5f522f91a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:17:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b2652532a0b468ad31fe239a13b6ddedb7bb6bbb4ebba67b693e5f522f91a6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:17:59 compute-0 podman[413249]: 2025-10-02 13:17:59.114079933 +0000 UTC m=+0.022776029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:17:59 compute-0 podman[413249]: 2025-10-02 13:17:59.22609303 +0000 UTC m=+0.134789106 container init ef9128c29f8fc6ff2f357797c93eb4d01b4c3f2f667be4f216a3fc83248183e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:17:59 compute-0 podman[413249]: 2025-10-02 13:17:59.236819853 +0000 UTC m=+0.145515919 container start ef9128c29f8fc6ff2f357797c93eb4d01b4c3f2f667be4f216a3fc83248183e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jemison, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:17:59 compute-0 podman[413249]: 2025-10-02 13:17:59.240101133 +0000 UTC m=+0.148797219 container attach ef9128c29f8fc6ff2f357797c93eb4d01b4c3f2f667be4f216a3fc83248183e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jemison, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:17:59 compute-0 ceph-mon[73607]: pgmap v3610: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 70 KiB/s rd, 11 KiB/s wr, 32 op/s
Oct 02 13:17:59 compute-0 pensive_jemison[413265]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:17:59 compute-0 pensive_jemison[413265]: --> relative data size: 1.0
Oct 02 13:17:59 compute-0 pensive_jemison[413265]: --> All data devices are unavailable
Oct 02 13:18:00 compute-0 systemd[1]: libpod-ef9128c29f8fc6ff2f357797c93eb4d01b4c3f2f667be4f216a3fc83248183e8.scope: Deactivated successfully.
Oct 02 13:18:00 compute-0 podman[413280]: 2025-10-02 13:18:00.062480767 +0000 UTC m=+0.034050006 container died ef9128c29f8fc6ff2f357797c93eb4d01b4c3f2f667be4f216a3fc83248183e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:18:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3b2652532a0b468ad31fe239a13b6ddedb7bb6bbb4ebba67b693e5f522f91a6-merged.mount: Deactivated successfully.
Oct 02 13:18:00 compute-0 podman[413280]: 2025-10-02 13:18:00.114199765 +0000 UTC m=+0.085768974 container remove ef9128c29f8fc6ff2f357797c93eb4d01b4c3f2f667be4f216a3fc83248183e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jemison, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 13:18:00 compute-0 systemd[1]: libpod-conmon-ef9128c29f8fc6ff2f357797c93eb4d01b4c3f2f667be4f216a3fc83248183e8.scope: Deactivated successfully.
Oct 02 13:18:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:18:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:00.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:18:00 compute-0 sudo[413143]: pam_unix(sudo:session): session closed for user root
Oct 02 13:18:00 compute-0 sudo[413295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:18:00 compute-0 sudo[413295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:18:00 compute-0 sudo[413295]: pam_unix(sudo:session): session closed for user root
Oct 02 13:18:00 compute-0 sudo[413320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:18:00 compute-0 sudo[413320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:18:00 compute-0 sudo[413320]: pam_unix(sudo:session): session closed for user root
Oct 02 13:18:00 compute-0 sudo[413345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:18:00 compute-0 sudo[413345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:18:00 compute-0 sudo[413345]: pam_unix(sudo:session): session closed for user root
Oct 02 13:18:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:18:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:00.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:18:00 compute-0 sudo[413370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 13:18:00 compute-0 sudo[413370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:18:00 compute-0 podman[413435]: 2025-10-02 13:18:00.754935824 +0000 UTC m=+0.044697066 container create dcab31194a5d3fb4610d2a1a34be290271e81e0c1ed4bfe8e5e5c8fa5506b518 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dewdney, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 13:18:00 compute-0 systemd[1]: Started libpod-conmon-dcab31194a5d3fb4610d2a1a34be290271e81e0c1ed4bfe8e5e5c8fa5506b518.scope.
Oct 02 13:18:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:18:00 compute-0 podman[413435]: 2025-10-02 13:18:00.734458342 +0000 UTC m=+0.024219614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:18:00 compute-0 podman[413435]: 2025-10-02 13:18:00.84406471 +0000 UTC m=+0.133825972 container init dcab31194a5d3fb4610d2a1a34be290271e81e0c1ed4bfe8e5e5c8fa5506b518 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dewdney, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 13:18:00 compute-0 nova_compute[257802]: 2025-10-02 13:18:00.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:00 compute-0 podman[413435]: 2025-10-02 13:18:00.853822349 +0000 UTC m=+0.143583591 container start dcab31194a5d3fb4610d2a1a34be290271e81e0c1ed4bfe8e5e5c8fa5506b518 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 13:18:00 compute-0 affectionate_dewdney[413452]: 167 167
Oct 02 13:18:00 compute-0 systemd[1]: libpod-dcab31194a5d3fb4610d2a1a34be290271e81e0c1ed4bfe8e5e5c8fa5506b518.scope: Deactivated successfully.
Oct 02 13:18:00 compute-0 podman[413435]: 2025-10-02 13:18:00.862937252 +0000 UTC m=+0.152698494 container attach dcab31194a5d3fb4610d2a1a34be290271e81e0c1ed4bfe8e5e5c8fa5506b518 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 13:18:00 compute-0 podman[413435]: 2025-10-02 13:18:00.863944617 +0000 UTC m=+0.153705889 container died dcab31194a5d3fb4610d2a1a34be290271e81e0c1ed4bfe8e5e5c8fa5506b518 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Oct 02 13:18:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3611: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 11 KiB/s wr, 28 op/s
Oct 02 13:18:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0c3aa16cca1a1842f80177623cf0ee9a42d397e55bb21e44bc49be72237f343-merged.mount: Deactivated successfully.
Oct 02 13:18:00 compute-0 podman[413435]: 2025-10-02 13:18:00.907853233 +0000 UTC m=+0.197614475 container remove dcab31194a5d3fb4610d2a1a34be290271e81e0c1ed4bfe8e5e5c8fa5506b518 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 13:18:00 compute-0 systemd[1]: libpod-conmon-dcab31194a5d3fb4610d2a1a34be290271e81e0c1ed4bfe8e5e5c8fa5506b518.scope: Deactivated successfully.
Oct 02 13:18:01 compute-0 podman[413479]: 2025-10-02 13:18:01.093535766 +0000 UTC m=+0.054112208 container create a8e18e29bdb0ed4177f37c4314665d92ea37888d7b5108b420c4ab452448b986 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 13:18:01 compute-0 systemd[1]: Started libpod-conmon-a8e18e29bdb0ed4177f37c4314665d92ea37888d7b5108b420c4ab452448b986.scope.
Oct 02 13:18:01 compute-0 podman[413479]: 2025-10-02 13:18:01.069592519 +0000 UTC m=+0.030169061 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:18:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:18:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f5562974743559fa90e2d14983dd45c19219d55774204d4c149efcfadf1ecff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:18:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f5562974743559fa90e2d14983dd45c19219d55774204d4c149efcfadf1ecff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:18:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f5562974743559fa90e2d14983dd45c19219d55774204d4c149efcfadf1ecff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:18:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f5562974743559fa90e2d14983dd45c19219d55774204d4c149efcfadf1ecff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:18:01 compute-0 podman[413479]: 2025-10-02 13:18:01.192203875 +0000 UTC m=+0.152780407 container init a8e18e29bdb0ed4177f37c4314665d92ea37888d7b5108b420c4ab452448b986 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 13:18:01 compute-0 podman[413479]: 2025-10-02 13:18:01.199938355 +0000 UTC m=+0.160514797 container start a8e18e29bdb0ed4177f37c4314665d92ea37888d7b5108b420c4ab452448b986 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_leakey, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:18:01 compute-0 podman[413479]: 2025-10-02 13:18:01.204177309 +0000 UTC m=+0.164753761 container attach a8e18e29bdb0ed4177f37c4314665d92ea37888d7b5108b420c4ab452448b986 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:18:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:01 compute-0 ceph-mon[73607]: pgmap v3611: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 11 KiB/s wr, 28 op/s
Oct 02 13:18:01 compute-0 priceless_leakey[413496]: {
Oct 02 13:18:01 compute-0 priceless_leakey[413496]:     "1": [
Oct 02 13:18:01 compute-0 priceless_leakey[413496]:         {
Oct 02 13:18:01 compute-0 priceless_leakey[413496]:             "devices": [
Oct 02 13:18:01 compute-0 priceless_leakey[413496]:                 "/dev/loop3"
Oct 02 13:18:01 compute-0 priceless_leakey[413496]:             ],
Oct 02 13:18:01 compute-0 priceless_leakey[413496]:             "lv_name": "ceph_lv0",
Oct 02 13:18:01 compute-0 priceless_leakey[413496]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:18:01 compute-0 priceless_leakey[413496]:             "lv_size": "7511998464",
Oct 02 13:18:01 compute-0 priceless_leakey[413496]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:18:01 compute-0 priceless_leakey[413496]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:18:01 compute-0 priceless_leakey[413496]:             "name": "ceph_lv0",
Oct 02 13:18:01 compute-0 priceless_leakey[413496]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:18:01 compute-0 priceless_leakey[413496]:             "tags": {
Oct 02 13:18:01 compute-0 priceless_leakey[413496]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:18:01 compute-0 priceless_leakey[413496]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:18:01 compute-0 priceless_leakey[413496]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:18:01 compute-0 priceless_leakey[413496]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:18:01 compute-0 priceless_leakey[413496]:                 "ceph.cluster_name": "ceph",
Oct 02 13:18:01 compute-0 priceless_leakey[413496]:                 "ceph.crush_device_class": "",
Oct 02 13:18:01 compute-0 priceless_leakey[413496]:                 "ceph.encrypted": "0",
Oct 02 13:18:01 compute-0 priceless_leakey[413496]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:18:01 compute-0 priceless_leakey[413496]:                 "ceph.osd_id": "1",
Oct 02 13:18:01 compute-0 priceless_leakey[413496]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:18:01 compute-0 priceless_leakey[413496]:                 "ceph.type": "block",
Oct 02 13:18:01 compute-0 priceless_leakey[413496]:                 "ceph.vdo": "0"
Oct 02 13:18:01 compute-0 priceless_leakey[413496]:             },
Oct 02 13:18:01 compute-0 priceless_leakey[413496]:             "type": "block",
Oct 02 13:18:01 compute-0 priceless_leakey[413496]:             "vg_name": "ceph_vg0"
Oct 02 13:18:01 compute-0 priceless_leakey[413496]:         }
Oct 02 13:18:01 compute-0 priceless_leakey[413496]:     ]
Oct 02 13:18:01 compute-0 priceless_leakey[413496]: }
Oct 02 13:18:01 compute-0 systemd[1]: libpod-a8e18e29bdb0ed4177f37c4314665d92ea37888d7b5108b420c4ab452448b986.scope: Deactivated successfully.
Oct 02 13:18:01 compute-0 podman[413479]: 2025-10-02 13:18:01.998252998 +0000 UTC m=+0.958829440 container died a8e18e29bdb0ed4177f37c4314665d92ea37888d7b5108b420c4ab452448b986 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:18:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f5562974743559fa90e2d14983dd45c19219d55774204d4c149efcfadf1ecff-merged.mount: Deactivated successfully.
Oct 02 13:18:02 compute-0 podman[413479]: 2025-10-02 13:18:02.055240515 +0000 UTC m=+1.015816957 container remove a8e18e29bdb0ed4177f37c4314665d92ea37888d7b5108b420c4ab452448b986 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_leakey, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:18:02 compute-0 systemd[1]: libpod-conmon-a8e18e29bdb0ed4177f37c4314665d92ea37888d7b5108b420c4ab452448b986.scope: Deactivated successfully.
Oct 02 13:18:02 compute-0 sudo[413370]: pam_unix(sudo:session): session closed for user root
Oct 02 13:18:02 compute-0 sudo[413519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:18:02 compute-0 sudo[413519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:18:02 compute-0 sudo[413519]: pam_unix(sudo:session): session closed for user root
Oct 02 13:18:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:02.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:02 compute-0 sudo[413544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:18:02 compute-0 sudo[413544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:18:02 compute-0 sudo[413544]: pam_unix(sudo:session): session closed for user root
Oct 02 13:18:02 compute-0 sudo[413569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:18:02 compute-0 sudo[413569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:18:02 compute-0 sudo[413569]: pam_unix(sudo:session): session closed for user root
Oct 02 13:18:02 compute-0 sudo[413594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 13:18:02 compute-0 sudo[413594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:18:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:18:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:02.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:18:02 compute-0 podman[413658]: 2025-10-02 13:18:02.626815249 +0000 UTC m=+0.067548447 container create ca4d7c1cc6a3b3bb301db24b78fda3a2452d5ee5b043703bf00ba1409591a5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 13:18:02 compute-0 systemd[1]: Started libpod-conmon-ca4d7c1cc6a3b3bb301db24b78fda3a2452d5ee5b043703bf00ba1409591a5dd.scope.
Oct 02 13:18:02 compute-0 podman[413658]: 2025-10-02 13:18:02.580587916 +0000 UTC m=+0.021321154 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:18:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:18:02 compute-0 podman[413658]: 2025-10-02 13:18:02.717616425 +0000 UTC m=+0.158349633 container init ca4d7c1cc6a3b3bb301db24b78fda3a2452d5ee5b043703bf00ba1409591a5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:18:02 compute-0 podman[413658]: 2025-10-02 13:18:02.723280304 +0000 UTC m=+0.164013512 container start ca4d7c1cc6a3b3bb301db24b78fda3a2452d5ee5b043703bf00ba1409591a5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hodgkin, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:18:02 compute-0 condescending_hodgkin[413674]: 167 167
Oct 02 13:18:02 compute-0 systemd[1]: libpod-ca4d7c1cc6a3b3bb301db24b78fda3a2452d5ee5b043703bf00ba1409591a5dd.scope: Deactivated successfully.
Oct 02 13:18:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3612: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 02 13:18:02 compute-0 podman[413658]: 2025-10-02 13:18:02.992055394 +0000 UTC m=+0.432788602 container attach ca4d7c1cc6a3b3bb301db24b78fda3a2452d5ee5b043703bf00ba1409591a5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:18:02 compute-0 podman[413658]: 2025-10-02 13:18:02.993193632 +0000 UTC m=+0.433926840 container died ca4d7c1cc6a3b3bb301db24b78fda3a2452d5ee5b043703bf00ba1409591a5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:18:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c06dd17fc8b67df2e0f583d9246057aa326897089fa6ece8497cb9ccbfe2c33-merged.mount: Deactivated successfully.
Oct 02 13:18:03 compute-0 podman[413658]: 2025-10-02 13:18:03.063811753 +0000 UTC m=+0.504544961 container remove ca4d7c1cc6a3b3bb301db24b78fda3a2452d5ee5b043703bf00ba1409591a5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hodgkin, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 13:18:03 compute-0 systemd[1]: libpod-conmon-ca4d7c1cc6a3b3bb301db24b78fda3a2452d5ee5b043703bf00ba1409591a5dd.scope: Deactivated successfully.
Oct 02 13:18:03 compute-0 podman[413698]: 2025-10-02 13:18:03.253438082 +0000 UTC m=+0.058351911 container create 0aeaf691685b2f628dd2db72b3af0355bb747fd512f0276eb6e45c697dbf255e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct 02 13:18:03 compute-0 systemd[1]: Started libpod-conmon-0aeaf691685b2f628dd2db72b3af0355bb747fd512f0276eb6e45c697dbf255e.scope.
Oct 02 13:18:03 compute-0 podman[413698]: 2025-10-02 13:18:03.21864378 +0000 UTC m=+0.023557619 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:18:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:18:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d69a6f2b3dbd2bf6e0c4138132e9c9cf8d0f090202a10605fd16813bd074060/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:18:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d69a6f2b3dbd2bf6e0c4138132e9c9cf8d0f090202a10605fd16813bd074060/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:18:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d69a6f2b3dbd2bf6e0c4138132e9c9cf8d0f090202a10605fd16813bd074060/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:18:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d69a6f2b3dbd2bf6e0c4138132e9c9cf8d0f090202a10605fd16813bd074060/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:18:03 compute-0 podman[413698]: 2025-10-02 13:18:03.350529784 +0000 UTC m=+0.155443633 container init 0aeaf691685b2f628dd2db72b3af0355bb747fd512f0276eb6e45c697dbf255e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cartwright, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:18:03 compute-0 podman[413698]: 2025-10-02 13:18:03.357447823 +0000 UTC m=+0.162361642 container start 0aeaf691685b2f628dd2db72b3af0355bb747fd512f0276eb6e45c697dbf255e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 13:18:03 compute-0 podman[413698]: 2025-10-02 13:18:03.362546648 +0000 UTC m=+0.167460497 container attach 0aeaf691685b2f628dd2db72b3af0355bb747fd512f0276eb6e45c697dbf255e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cartwright, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 13:18:03 compute-0 nova_compute[257802]: 2025-10-02 13:18:03.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:04 compute-0 ceph-mon[73607]: pgmap v3612: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 02 13:18:04 compute-0 sad_cartwright[413714]: {
Oct 02 13:18:04 compute-0 sad_cartwright[413714]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 13:18:04 compute-0 sad_cartwright[413714]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:18:04 compute-0 sad_cartwright[413714]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:18:04 compute-0 sad_cartwright[413714]:         "osd_id": 1,
Oct 02 13:18:04 compute-0 sad_cartwright[413714]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:18:04 compute-0 sad_cartwright[413714]:         "type": "bluestore"
Oct 02 13:18:04 compute-0 sad_cartwright[413714]:     }
Oct 02 13:18:04 compute-0 sad_cartwright[413714]: }
Oct 02 13:18:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:18:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:04.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:18:04 compute-0 systemd[1]: libpod-0aeaf691685b2f628dd2db72b3af0355bb747fd512f0276eb6e45c697dbf255e.scope: Deactivated successfully.
Oct 02 13:18:04 compute-0 podman[413698]: 2025-10-02 13:18:04.187198077 +0000 UTC m=+0.992111906 container died 0aeaf691685b2f628dd2db72b3af0355bb747fd512f0276eb6e45c697dbf255e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 13:18:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d69a6f2b3dbd2bf6e0c4138132e9c9cf8d0f090202a10605fd16813bd074060-merged.mount: Deactivated successfully.
Oct 02 13:18:04 compute-0 podman[413698]: 2025-10-02 13:18:04.239104019 +0000 UTC m=+1.044017838 container remove 0aeaf691685b2f628dd2db72b3af0355bb747fd512f0276eb6e45c697dbf255e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:18:04 compute-0 systemd[1]: libpod-conmon-0aeaf691685b2f628dd2db72b3af0355bb747fd512f0276eb6e45c697dbf255e.scope: Deactivated successfully.
Oct 02 13:18:04 compute-0 sudo[413594]: pam_unix(sudo:session): session closed for user root
Oct 02 13:18:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:18:04 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:18:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:18:04 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:18:04 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 76aaaa05-f9b7-416a-8759-0bb21c772036 does not exist
Oct 02 13:18:04 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 8d15189c-6e55-4079-9067-2741485c7f11 does not exist
Oct 02 13:18:04 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev f9b35792-19c5-43cf-a0fd-77175e9d28a8 does not exist
Oct 02 13:18:04 compute-0 sudo[413747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:18:04 compute-0 sudo[413747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:18:04 compute-0 sudo[413747]: pam_unix(sudo:session): session closed for user root
Oct 02 13:18:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:04.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:04 compute-0 sudo[413772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:18:04 compute-0 sudo[413772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:18:04 compute-0 sudo[413772]: pam_unix(sudo:session): session closed for user root
Oct 02 13:18:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3613: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 02 13:18:05 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:18:05 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:18:05 compute-0 ceph-mon[73607]: pgmap v3613: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 02 13:18:05 compute-0 nova_compute[257802]: 2025-10-02 13:18:05.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:06.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:06.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3614: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 511 B/s wr, 11 op/s
Oct 02 13:18:06 compute-0 podman[413799]: 2025-10-02 13:18:06.971210527 +0000 UTC m=+0.107256011 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:18:07 compute-0 ceph-mon[73607]: pgmap v3614: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 511 B/s wr, 11 op/s
Oct 02 13:18:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:08.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:08.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:08 compute-0 nova_compute[257802]: 2025-10-02 13:18:08.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3615: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:18:09 compute-0 ceph-mon[73607]: pgmap v3615: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:18:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:10.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:10.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:10 compute-0 sudo[413826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:18:10 compute-0 sudo[413826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:18:10 compute-0 sudo[413826]: pam_unix(sudo:session): session closed for user root
Oct 02 13:18:10 compute-0 sudo[413851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:18:10 compute-0 sudo[413851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:18:10 compute-0 sudo[413851]: pam_unix(sudo:session): session closed for user root
Oct 02 13:18:10 compute-0 nova_compute[257802]: 2025-10-02 13:18:10.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3616: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:18:11 compute-0 ceph-mon[73607]: pgmap v3616: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:18:11 compute-0 nova_compute[257802]: 2025-10-02 13:18:11.331 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:18:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:12.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:12.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:18:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:18:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:18:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:18:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:18:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:18:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3617: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:18:13 compute-0 ceph-mon[73607]: pgmap v3617: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:18:13 compute-0 nova_compute[257802]: 2025-10-02 13:18:13.644 2 DEBUG oslo_concurrency.lockutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "701a8696-62d7-4046-a17e-da0dbbffa7f4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:13 compute-0 nova_compute[257802]: 2025-10-02 13:18:13.644 2 DEBUG oslo_concurrency.lockutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "701a8696-62d7-4046-a17e-da0dbbffa7f4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:13 compute-0 nova_compute[257802]: 2025-10-02 13:18:13.664 2 DEBUG nova.compute.manager [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:18:13 compute-0 nova_compute[257802]: 2025-10-02 13:18:13.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:13 compute-0 nova_compute[257802]: 2025-10-02 13:18:13.772 2 DEBUG oslo_concurrency.lockutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:13 compute-0 nova_compute[257802]: 2025-10-02 13:18:13.772 2 DEBUG oslo_concurrency.lockutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:13 compute-0 nova_compute[257802]: 2025-10-02 13:18:13.780 2 DEBUG nova.virt.hardware [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:18:13 compute-0 nova_compute[257802]: 2025-10-02 13:18:13.781 2 INFO nova.compute.claims [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Claim successful on node compute-0.ctlplane.example.com
Oct 02 13:18:13 compute-0 nova_compute[257802]: 2025-10-02 13:18:13.924 2 DEBUG oslo_concurrency.processutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:18:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:14.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:18:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3876581884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:18:14 compute-0 nova_compute[257802]: 2025-10-02 13:18:14.331 2 DEBUG oslo_concurrency.processutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:18:14 compute-0 nova_compute[257802]: 2025-10-02 13:18:14.336 2 DEBUG nova.compute.provider_tree [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:18:14 compute-0 nova_compute[257802]: 2025-10-02 13:18:14.349 2 DEBUG nova.scheduler.client.report [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:18:14 compute-0 nova_compute[257802]: 2025-10-02 13:18:14.368 2 DEBUG oslo_concurrency.lockutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:14 compute-0 nova_compute[257802]: 2025-10-02 13:18:14.369 2 DEBUG nova.compute.manager [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:18:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:14.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:14 compute-0 nova_compute[257802]: 2025-10-02 13:18:14.414 2 DEBUG nova.compute.manager [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:18:14 compute-0 nova_compute[257802]: 2025-10-02 13:18:14.414 2 DEBUG nova.network.neutron [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:18:14 compute-0 nova_compute[257802]: 2025-10-02 13:18:14.435 2 INFO nova.virt.libvirt.driver [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:18:14 compute-0 nova_compute[257802]: 2025-10-02 13:18:14.453 2 DEBUG nova.compute.manager [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:18:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3876581884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:18:14 compute-0 nova_compute[257802]: 2025-10-02 13:18:14.578 2 DEBUG nova.compute.manager [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:18:14 compute-0 nova_compute[257802]: 2025-10-02 13:18:14.580 2 DEBUG nova.virt.libvirt.driver [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:18:14 compute-0 nova_compute[257802]: 2025-10-02 13:18:14.580 2 INFO nova.virt.libvirt.driver [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Creating image(s)
Oct 02 13:18:14 compute-0 nova_compute[257802]: 2025-10-02 13:18:14.606 2 DEBUG nova.storage.rbd_utils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 701a8696-62d7-4046-a17e-da0dbbffa7f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:18:14 compute-0 nova_compute[257802]: 2025-10-02 13:18:14.634 2 DEBUG nova.storage.rbd_utils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 701a8696-62d7-4046-a17e-da0dbbffa7f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:18:14 compute-0 nova_compute[257802]: 2025-10-02 13:18:14.659 2 DEBUG nova.storage.rbd_utils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 701a8696-62d7-4046-a17e-da0dbbffa7f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:18:14 compute-0 nova_compute[257802]: 2025-10-02 13:18:14.663 2 DEBUG oslo_concurrency.processutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:18:14 compute-0 nova_compute[257802]: 2025-10-02 13:18:14.726 2 DEBUG oslo_concurrency.processutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:18:14 compute-0 nova_compute[257802]: 2025-10-02 13:18:14.727 2 DEBUG oslo_concurrency.lockutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:14 compute-0 nova_compute[257802]: 2025-10-02 13:18:14.728 2 DEBUG oslo_concurrency.lockutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:14 compute-0 nova_compute[257802]: 2025-10-02 13:18:14.728 2 DEBUG oslo_concurrency.lockutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:14 compute-0 nova_compute[257802]: 2025-10-02 13:18:14.758 2 DEBUG nova.storage.rbd_utils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 701a8696-62d7-4046-a17e-da0dbbffa7f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:18:14 compute-0 nova_compute[257802]: 2025-10-02 13:18:14.762 2 DEBUG oslo_concurrency.processutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 701a8696-62d7-4046-a17e-da0dbbffa7f4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:18:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3618: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:18:15 compute-0 nova_compute[257802]: 2025-10-02 13:18:15.155 2 DEBUG oslo_concurrency.processutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 701a8696-62d7-4046-a17e-da0dbbffa7f4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:18:15 compute-0 nova_compute[257802]: 2025-10-02 13:18:15.223 2 DEBUG nova.storage.rbd_utils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] resizing rbd image 701a8696-62d7-4046-a17e-da0dbbffa7f4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:18:15 compute-0 nova_compute[257802]: 2025-10-02 13:18:15.258 2 DEBUG nova.policy [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ffe4d737e4414fb3a3e358f8ca3f3e1e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '08e102ae48244af2ab448a2e1ff757df', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:18:15 compute-0 nova_compute[257802]: 2025-10-02 13:18:15.328 2 DEBUG nova.objects.instance [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'migration_context' on Instance uuid 701a8696-62d7-4046-a17e-da0dbbffa7f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:18:15 compute-0 nova_compute[257802]: 2025-10-02 13:18:15.350 2 DEBUG nova.virt.libvirt.driver [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:18:15 compute-0 nova_compute[257802]: 2025-10-02 13:18:15.350 2 DEBUG nova.virt.libvirt.driver [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Ensure instance console log exists: /var/lib/nova/instances/701a8696-62d7-4046-a17e-da0dbbffa7f4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:18:15 compute-0 nova_compute[257802]: 2025-10-02 13:18:15.350 2 DEBUG oslo_concurrency.lockutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:15 compute-0 nova_compute[257802]: 2025-10-02 13:18:15.350 2 DEBUG oslo_concurrency.lockutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:15 compute-0 nova_compute[257802]: 2025-10-02 13:18:15.351 2 DEBUG oslo_concurrency.lockutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:15 compute-0 ceph-mon[73607]: pgmap v3618: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:18:15 compute-0 nova_compute[257802]: 2025-10-02 13:18:15.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:16.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:16.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:16 compute-0 nova_compute[257802]: 2025-10-02 13:18:16.559 2 DEBUG nova.network.neutron [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Successfully created port: a810367a-1f5d-4568-a28c-5ab99cb8bf57 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:18:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3619: 305 pgs: 305 active+clean; 132 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 426 B/s rd, 78 KiB/s wr, 1 op/s
Oct 02 13:18:17 compute-0 nova_compute[257802]: 2025-10-02 13:18:17.921 2 DEBUG nova.network.neutron [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Successfully updated port: a810367a-1f5d-4568-a28c-5ab99cb8bf57 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:18:17 compute-0 nova_compute[257802]: 2025-10-02 13:18:17.939 2 DEBUG oslo_concurrency.lockutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "refresh_cache-701a8696-62d7-4046-a17e-da0dbbffa7f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:18:17 compute-0 nova_compute[257802]: 2025-10-02 13:18:17.939 2 DEBUG oslo_concurrency.lockutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquired lock "refresh_cache-701a8696-62d7-4046-a17e-da0dbbffa7f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:18:17 compute-0 nova_compute[257802]: 2025-10-02 13:18:17.939 2 DEBUG nova.network.neutron [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:18:17 compute-0 ceph-mon[73607]: pgmap v3619: 305 pgs: 305 active+clean; 132 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 426 B/s rd, 78 KiB/s wr, 1 op/s
Oct 02 13:18:18 compute-0 nova_compute[257802]: 2025-10-02 13:18:18.020 2 DEBUG nova.compute.manager [req-2276d1d7-bc9a-4f89-a447-46b335be2a40 req-359a883c-a774-4e7a-857f-7c603a2cfa44 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Received event network-changed-a810367a-1f5d-4568-a28c-5ab99cb8bf57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:18:18 compute-0 nova_compute[257802]: 2025-10-02 13:18:18.020 2 DEBUG nova.compute.manager [req-2276d1d7-bc9a-4f89-a447-46b335be2a40 req-359a883c-a774-4e7a-857f-7c603a2cfa44 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Refreshing instance network info cache due to event network-changed-a810367a-1f5d-4568-a28c-5ab99cb8bf57. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:18:18 compute-0 nova_compute[257802]: 2025-10-02 13:18:18.020 2 DEBUG oslo_concurrency.lockutils [req-2276d1d7-bc9a-4f89-a447-46b335be2a40 req-359a883c-a774-4e7a-857f-7c603a2cfa44 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-701a8696-62d7-4046-a17e-da0dbbffa7f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:18:18 compute-0 nova_compute[257802]: 2025-10-02 13:18:18.061 2 DEBUG nova.network.neutron [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:18:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:18.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:18:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:18.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:18:18 compute-0 nova_compute[257802]: 2025-10-02 13:18:18.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3620: 305 pgs: 305 active+clean; 162 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.4 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Oct 02 13:18:19 compute-0 nova_compute[257802]: 2025-10-02 13:18:19.217 2 DEBUG nova.network.neutron [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Updating instance_info_cache with network_info: [{"id": "a810367a-1f5d-4568-a28c-5ab99cb8bf57", "address": "fa:16:3e:c4:e1:b1", "network": {"id": "2e968718-4c14-4683-834f-9f8b2842e831", "bridge": "br-int", "label": "tempest-network-smoke--1015430429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa810367a-1f", "ovs_interfaceid": "a810367a-1f5d-4568-a28c-5ab99cb8bf57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:18:19 compute-0 nova_compute[257802]: 2025-10-02 13:18:19.232 2 DEBUG oslo_concurrency.lockutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Releasing lock "refresh_cache-701a8696-62d7-4046-a17e-da0dbbffa7f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:18:19 compute-0 nova_compute[257802]: 2025-10-02 13:18:19.232 2 DEBUG nova.compute.manager [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Instance network_info: |[{"id": "a810367a-1f5d-4568-a28c-5ab99cb8bf57", "address": "fa:16:3e:c4:e1:b1", "network": {"id": "2e968718-4c14-4683-834f-9f8b2842e831", "bridge": "br-int", "label": "tempest-network-smoke--1015430429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa810367a-1f", "ovs_interfaceid": "a810367a-1f5d-4568-a28c-5ab99cb8bf57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:18:19 compute-0 nova_compute[257802]: 2025-10-02 13:18:19.233 2 DEBUG oslo_concurrency.lockutils [req-2276d1d7-bc9a-4f89-a447-46b335be2a40 req-359a883c-a774-4e7a-857f-7c603a2cfa44 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-701a8696-62d7-4046-a17e-da0dbbffa7f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:18:19 compute-0 nova_compute[257802]: 2025-10-02 13:18:19.233 2 DEBUG nova.network.neutron [req-2276d1d7-bc9a-4f89-a447-46b335be2a40 req-359a883c-a774-4e7a-857f-7c603a2cfa44 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Refreshing network info cache for port a810367a-1f5d-4568-a28c-5ab99cb8bf57 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:18:19 compute-0 nova_compute[257802]: 2025-10-02 13:18:19.235 2 DEBUG nova.virt.libvirt.driver [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Start _get_guest_xml network_info=[{"id": "a810367a-1f5d-4568-a28c-5ab99cb8bf57", "address": "fa:16:3e:c4:e1:b1", "network": {"id": "2e968718-4c14-4683-834f-9f8b2842e831", "bridge": "br-int", "label": "tempest-network-smoke--1015430429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa810367a-1f", "ovs_interfaceid": "a810367a-1f5d-4568-a28c-5ab99cb8bf57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:18:19 compute-0 nova_compute[257802]: 2025-10-02 13:18:19.284 2 WARNING nova.virt.libvirt.driver [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:18:19 compute-0 nova_compute[257802]: 2025-10-02 13:18:19.287 2 DEBUG nova.virt.libvirt.host [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:18:19 compute-0 nova_compute[257802]: 2025-10-02 13:18:19.288 2 DEBUG nova.virt.libvirt.host [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:18:19 compute-0 nova_compute[257802]: 2025-10-02 13:18:19.291 2 DEBUG nova.virt.libvirt.host [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:18:19 compute-0 nova_compute[257802]: 2025-10-02 13:18:19.291 2 DEBUG nova.virt.libvirt.host [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:18:19 compute-0 nova_compute[257802]: 2025-10-02 13:18:19.292 2 DEBUG nova.virt.libvirt.driver [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:18:19 compute-0 nova_compute[257802]: 2025-10-02 13:18:19.293 2 DEBUG nova.virt.hardware [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:18:19 compute-0 nova_compute[257802]: 2025-10-02 13:18:19.293 2 DEBUG nova.virt.hardware [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:18:19 compute-0 nova_compute[257802]: 2025-10-02 13:18:19.293 2 DEBUG nova.virt.hardware [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:18:19 compute-0 nova_compute[257802]: 2025-10-02 13:18:19.293 2 DEBUG nova.virt.hardware [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:18:19 compute-0 nova_compute[257802]: 2025-10-02 13:18:19.293 2 DEBUG nova.virt.hardware [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:18:19 compute-0 nova_compute[257802]: 2025-10-02 13:18:19.294 2 DEBUG nova.virt.hardware [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:18:19 compute-0 nova_compute[257802]: 2025-10-02 13:18:19.294 2 DEBUG nova.virt.hardware [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:18:19 compute-0 nova_compute[257802]: 2025-10-02 13:18:19.294 2 DEBUG nova.virt.hardware [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:18:19 compute-0 nova_compute[257802]: 2025-10-02 13:18:19.294 2 DEBUG nova.virt.hardware [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:18:19 compute-0 nova_compute[257802]: 2025-10-02 13:18:19.294 2 DEBUG nova.virt.hardware [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:18:19 compute-0 nova_compute[257802]: 2025-10-02 13:18:19.294 2 DEBUG nova.virt.hardware [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:18:19 compute-0 nova_compute[257802]: 2025-10-02 13:18:19.297 2 DEBUG oslo_concurrency.processutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:18:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:18:19 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/50229642' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:18:19 compute-0 nova_compute[257802]: 2025-10-02 13:18:19.739 2 DEBUG oslo_concurrency.processutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:18:19 compute-0 nova_compute[257802]: 2025-10-02 13:18:19.764 2 DEBUG nova.storage.rbd_utils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 701a8696-62d7-4046-a17e-da0dbbffa7f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:18:19 compute-0 nova_compute[257802]: 2025-10-02 13:18:19.767 2 DEBUG oslo_concurrency.processutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:18:19 compute-0 ceph-mon[73607]: pgmap v3620: 305 pgs: 305 active+clean; 162 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.4 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Oct 02 13:18:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/50229642' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:18:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:18:20 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1934812873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.185 2 DEBUG oslo_concurrency.processutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.187 2 DEBUG nova.virt.libvirt.vif [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:18:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1532373923',display_name='tempest-TestNetworkAdvancedServerOps-server-1532373923',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1532373923',id=224,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOxNV81Fq7eK+6ST2Jt1hNcnCBbZFrK+0k2aVOLxgQZlGJmOH1cMJsOmeMX4LxyryLs76B+yz31Ns2YEGZ2X4ixvB4zRgY23pSFnzyUUOeaDgM5tsHwSMti+IeIqi4QgdQ==',key_name='tempest-TestNetworkAdvancedServerOps-206612891',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-9l3506wu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:18:14Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=701a8696-62d7-4046-a17e-da0dbbffa7f4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a810367a-1f5d-4568-a28c-5ab99cb8bf57", "address": "fa:16:3e:c4:e1:b1", "network": {"id": "2e968718-4c14-4683-834f-9f8b2842e831", "bridge": "br-int", "label": "tempest-network-smoke--1015430429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa810367a-1f", "ovs_interfaceid": "a810367a-1f5d-4568-a28c-5ab99cb8bf57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.187 2 DEBUG nova.network.os_vif_util [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "a810367a-1f5d-4568-a28c-5ab99cb8bf57", "address": "fa:16:3e:c4:e1:b1", "network": {"id": "2e968718-4c14-4683-834f-9f8b2842e831", "bridge": "br-int", "label": "tempest-network-smoke--1015430429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa810367a-1f", "ovs_interfaceid": "a810367a-1f5d-4568-a28c-5ab99cb8bf57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.188 2 DEBUG nova.network.os_vif_util [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:e1:b1,bridge_name='br-int',has_traffic_filtering=True,id=a810367a-1f5d-4568-a28c-5ab99cb8bf57,network=Network(2e968718-4c14-4683-834f-9f8b2842e831),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa810367a-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.190 2 DEBUG nova.objects.instance [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'pci_devices' on Instance uuid 701a8696-62d7-4046-a17e-da0dbbffa7f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:18:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:20.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.238 2 DEBUG nova.virt.libvirt.driver [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:18:20 compute-0 nova_compute[257802]:   <uuid>701a8696-62d7-4046-a17e-da0dbbffa7f4</uuid>
Oct 02 13:18:20 compute-0 nova_compute[257802]:   <name>instance-000000e0</name>
Oct 02 13:18:20 compute-0 nova_compute[257802]:   <memory>131072</memory>
Oct 02 13:18:20 compute-0 nova_compute[257802]:   <vcpu>1</vcpu>
Oct 02 13:18:20 compute-0 nova_compute[257802]:   <metadata>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:18:20 compute-0 nova_compute[257802]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1532373923</nova:name>
Oct 02 13:18:20 compute-0 nova_compute[257802]:       <nova:creationTime>2025-10-02 13:18:19</nova:creationTime>
Oct 02 13:18:20 compute-0 nova_compute[257802]:       <nova:flavor name="m1.nano">
Oct 02 13:18:20 compute-0 nova_compute[257802]:         <nova:memory>128</nova:memory>
Oct 02 13:18:20 compute-0 nova_compute[257802]:         <nova:disk>1</nova:disk>
Oct 02 13:18:20 compute-0 nova_compute[257802]:         <nova:swap>0</nova:swap>
Oct 02 13:18:20 compute-0 nova_compute[257802]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:18:20 compute-0 nova_compute[257802]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:18:20 compute-0 nova_compute[257802]:       </nova:flavor>
Oct 02 13:18:20 compute-0 nova_compute[257802]:       <nova:owner>
Oct 02 13:18:20 compute-0 nova_compute[257802]:         <nova:user uuid="ffe4d737e4414fb3a3e358f8ca3f3e1e">tempest-TestNetworkAdvancedServerOps-1527846432-project-member</nova:user>
Oct 02 13:18:20 compute-0 nova_compute[257802]:         <nova:project uuid="08e102ae48244af2ab448a2e1ff757df">tempest-TestNetworkAdvancedServerOps-1527846432</nova:project>
Oct 02 13:18:20 compute-0 nova_compute[257802]:       </nova:owner>
Oct 02 13:18:20 compute-0 nova_compute[257802]:       <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:       <nova:ports>
Oct 02 13:18:20 compute-0 nova_compute[257802]:         <nova:port uuid="a810367a-1f5d-4568-a28c-5ab99cb8bf57">
Oct 02 13:18:20 compute-0 nova_compute[257802]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:         </nova:port>
Oct 02 13:18:20 compute-0 nova_compute[257802]:       </nova:ports>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     </nova:instance>
Oct 02 13:18:20 compute-0 nova_compute[257802]:   </metadata>
Oct 02 13:18:20 compute-0 nova_compute[257802]:   <sysinfo type="smbios">
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <system>
Oct 02 13:18:20 compute-0 nova_compute[257802]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:18:20 compute-0 nova_compute[257802]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:18:20 compute-0 nova_compute[257802]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:18:20 compute-0 nova_compute[257802]:       <entry name="serial">701a8696-62d7-4046-a17e-da0dbbffa7f4</entry>
Oct 02 13:18:20 compute-0 nova_compute[257802]:       <entry name="uuid">701a8696-62d7-4046-a17e-da0dbbffa7f4</entry>
Oct 02 13:18:20 compute-0 nova_compute[257802]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     </system>
Oct 02 13:18:20 compute-0 nova_compute[257802]:   </sysinfo>
Oct 02 13:18:20 compute-0 nova_compute[257802]:   <os>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <boot dev="hd"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <smbios mode="sysinfo"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:   </os>
Oct 02 13:18:20 compute-0 nova_compute[257802]:   <features>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <acpi/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <apic/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <vmcoreinfo/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:   </features>
Oct 02 13:18:20 compute-0 nova_compute[257802]:   <clock offset="utc">
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <timer name="hpet" present="no"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:   </clock>
Oct 02 13:18:20 compute-0 nova_compute[257802]:   <cpu mode="custom" match="exact">
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <model>Nehalem</model>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:   </cpu>
Oct 02 13:18:20 compute-0 nova_compute[257802]:   <devices>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <disk type="network" device="disk">
Oct 02 13:18:20 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/701a8696-62d7-4046-a17e-da0dbbffa7f4_disk">
Oct 02 13:18:20 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:       </source>
Oct 02 13:18:20 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 13:18:20 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:       </auth>
Oct 02 13:18:20 compute-0 nova_compute[257802]:       <target dev="vda" bus="virtio"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     </disk>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <disk type="network" device="cdrom">
Oct 02 13:18:20 compute-0 nova_compute[257802]:       <driver type="raw" cache="none"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:       <source protocol="rbd" name="vms/701a8696-62d7-4046-a17e-da0dbbffa7f4_disk.config">
Oct 02 13:18:20 compute-0 nova_compute[257802]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:       </source>
Oct 02 13:18:20 compute-0 nova_compute[257802]:       <auth username="openstack">
Oct 02 13:18:20 compute-0 nova_compute[257802]:         <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:       </auth>
Oct 02 13:18:20 compute-0 nova_compute[257802]:       <target dev="sda" bus="sata"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     </disk>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <interface type="ethernet">
Oct 02 13:18:20 compute-0 nova_compute[257802]:       <mac address="fa:16:3e:c4:e1:b1"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:       <mtu size="1442"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:       <target dev="tapa810367a-1f"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     </interface>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <serial type="pty">
Oct 02 13:18:20 compute-0 nova_compute[257802]:       <log file="/var/lib/nova/instances/701a8696-62d7-4046-a17e-da0dbbffa7f4/console.log" append="off"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     </serial>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <video>
Oct 02 13:18:20 compute-0 nova_compute[257802]:       <model type="virtio"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     </video>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <input type="tablet" bus="usb"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <rng model="virtio">
Oct 02 13:18:20 compute-0 nova_compute[257802]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     </rng>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <controller type="usb" index="0"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     <memballoon model="virtio">
Oct 02 13:18:20 compute-0 nova_compute[257802]:       <stats period="10"/>
Oct 02 13:18:20 compute-0 nova_compute[257802]:     </memballoon>
Oct 02 13:18:20 compute-0 nova_compute[257802]:   </devices>
Oct 02 13:18:20 compute-0 nova_compute[257802]: </domain>
Oct 02 13:18:20 compute-0 nova_compute[257802]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.239 2 DEBUG nova.compute.manager [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Preparing to wait for external event network-vif-plugged-a810367a-1f5d-4568-a28c-5ab99cb8bf57 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.239 2 DEBUG oslo_concurrency.lockutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "701a8696-62d7-4046-a17e-da0dbbffa7f4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.239 2 DEBUG oslo_concurrency.lockutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "701a8696-62d7-4046-a17e-da0dbbffa7f4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.239 2 DEBUG oslo_concurrency.lockutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "701a8696-62d7-4046-a17e-da0dbbffa7f4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.240 2 DEBUG nova.virt.libvirt.vif [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:18:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1532373923',display_name='tempest-TestNetworkAdvancedServerOps-server-1532373923',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1532373923',id=224,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOxNV81Fq7eK+6ST2Jt1hNcnCBbZFrK+0k2aVOLxgQZlGJmOH1cMJsOmeMX4LxyryLs76B+yz31Ns2YEGZ2X4ixvB4zRgY23pSFnzyUUOeaDgM5tsHwSMti+IeIqi4QgdQ==',key_name='tempest-TestNetworkAdvancedServerOps-206612891',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-9l3506wu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:18:14Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=701a8696-62d7-4046-a17e-da0dbbffa7f4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a810367a-1f5d-4568-a28c-5ab99cb8bf57", "address": "fa:16:3e:c4:e1:b1", "network": {"id": "2e968718-4c14-4683-834f-9f8b2842e831", "bridge": "br-int", "label": "tempest-network-smoke--1015430429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa810367a-1f", "ovs_interfaceid": "a810367a-1f5d-4568-a28c-5ab99cb8bf57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.240 2 DEBUG nova.network.os_vif_util [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "a810367a-1f5d-4568-a28c-5ab99cb8bf57", "address": "fa:16:3e:c4:e1:b1", "network": {"id": "2e968718-4c14-4683-834f-9f8b2842e831", "bridge": "br-int", "label": "tempest-network-smoke--1015430429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa810367a-1f", "ovs_interfaceid": "a810367a-1f5d-4568-a28c-5ab99cb8bf57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.241 2 DEBUG nova.network.os_vif_util [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:e1:b1,bridge_name='br-int',has_traffic_filtering=True,id=a810367a-1f5d-4568-a28c-5ab99cb8bf57,network=Network(2e968718-4c14-4683-834f-9f8b2842e831),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa810367a-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.241 2 DEBUG os_vif [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:e1:b1,bridge_name='br-int',has_traffic_filtering=True,id=a810367a-1f5d-4568-a28c-5ab99cb8bf57,network=Network(2e968718-4c14-4683-834f-9f8b2842e831),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa810367a-1f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.242 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.242 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.246 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa810367a-1f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.247 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa810367a-1f, col_values=(('external_ids', {'iface-id': 'a810367a-1f5d-4568-a28c-5ab99cb8bf57', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:e1:b1', 'vm-uuid': '701a8696-62d7-4046-a17e-da0dbbffa7f4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:20 compute-0 NetworkManager[44987]: <info>  [1759411100.2658] manager: (tapa810367a-1f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/457)
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.273 2 INFO os_vif [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:e1:b1,bridge_name='br-int',has_traffic_filtering=True,id=a810367a-1f5d-4568-a28c-5ab99cb8bf57,network=Network(2e968718-4c14-4683-834f-9f8b2842e831),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa810367a-1f')
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.316 2 DEBUG nova.virt.libvirt.driver [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.317 2 DEBUG nova.virt.libvirt.driver [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.317 2 DEBUG nova.virt.libvirt.driver [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] No VIF found with MAC fa:16:3e:c4:e1:b1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.317 2 INFO nova.virt.libvirt.driver [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Using config drive
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.342 2 DEBUG nova.storage.rbd_utils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 701a8696-62d7-4046-a17e-da0dbbffa7f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:18:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:20.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.637 2 INFO nova.virt.libvirt.driver [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Creating config drive at /var/lib/nova/instances/701a8696-62d7-4046-a17e-da0dbbffa7f4/disk.config
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.641 2 DEBUG oslo_concurrency.processutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/701a8696-62d7-4046-a17e-da0dbbffa7f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvuat2upc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.668 2 DEBUG nova.network.neutron [req-2276d1d7-bc9a-4f89-a447-46b335be2a40 req-359a883c-a774-4e7a-857f-7c603a2cfa44 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Updated VIF entry in instance network info cache for port a810367a-1f5d-4568-a28c-5ab99cb8bf57. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.669 2 DEBUG nova.network.neutron [req-2276d1d7-bc9a-4f89-a447-46b335be2a40 req-359a883c-a774-4e7a-857f-7c603a2cfa44 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Updating instance_info_cache with network_info: [{"id": "a810367a-1f5d-4568-a28c-5ab99cb8bf57", "address": "fa:16:3e:c4:e1:b1", "network": {"id": "2e968718-4c14-4683-834f-9f8b2842e831", "bridge": "br-int", "label": "tempest-network-smoke--1015430429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa810367a-1f", "ovs_interfaceid": "a810367a-1f5d-4568-a28c-5ab99cb8bf57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.688 2 DEBUG oslo_concurrency.lockutils [req-2276d1d7-bc9a-4f89-a447-46b335be2a40 req-359a883c-a774-4e7a-857f-7c603a2cfa44 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-701a8696-62d7-4046-a17e-da0dbbffa7f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.774 2 DEBUG oslo_concurrency.processutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/701a8696-62d7-4046-a17e-da0dbbffa7f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvuat2upc" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.804 2 DEBUG nova.storage.rbd_utils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 701a8696-62d7-4046-a17e-da0dbbffa7f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.807 2 DEBUG oslo_concurrency.processutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/701a8696-62d7-4046-a17e-da0dbbffa7f4/disk.config 701a8696-62d7-4046-a17e-da0dbbffa7f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3621: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.989 2 DEBUG oslo_concurrency.processutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/701a8696-62d7-4046-a17e-da0dbbffa7f4/disk.config 701a8696-62d7-4046-a17e-da0dbbffa7f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:18:20 compute-0 nova_compute[257802]: 2025-10-02 13:18:20.990 2 INFO nova.virt.libvirt.driver [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Deleting local config drive /var/lib/nova/instances/701a8696-62d7-4046-a17e-da0dbbffa7f4/disk.config because it was imported into RBD.
Oct 02 13:18:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1934812873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:18:21 compute-0 kernel: tapa810367a-1f: entered promiscuous mode
Oct 02 13:18:21 compute-0 NetworkManager[44987]: <info>  [1759411101.0424] manager: (tapa810367a-1f): new Tun device (/org/freedesktop/NetworkManager/Devices/458)
Oct 02 13:18:21 compute-0 ovn_controller[148183]: 2025-10-02T13:18:21Z|01010|binding|INFO|Claiming lport a810367a-1f5d-4568-a28c-5ab99cb8bf57 for this chassis.
Oct 02 13:18:21 compute-0 ovn_controller[148183]: 2025-10-02T13:18:21Z|01011|binding|INFO|a810367a-1f5d-4568-a28c-5ab99cb8bf57: Claiming fa:16:3e:c4:e1:b1 10.100.0.3
Oct 02 13:18:21 compute-0 nova_compute[257802]: 2025-10-02 13:18:21.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:21 compute-0 nova_compute[257802]: 2025-10-02 13:18:21.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:21 compute-0 nova_compute[257802]: 2025-10-02 13:18:21.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:21.057 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:e1:b1 10.100.0.3'], port_security=['fa:16:3e:c4:e1:b1 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '701a8696-62d7-4046-a17e-da0dbbffa7f4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2e968718-4c14-4683-834f-9f8b2842e831', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08e102ae48244af2ab448a2e1ff757df', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8a88184f-66e4-43e9-9667-38387ec5ec50', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c427a773-c561-49a7-aeba-7befd0a6f84e, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=a810367a-1f5d-4568-a28c-5ab99cb8bf57) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:21.058 158261 INFO neutron.agent.ovn.metadata.agent [-] Port a810367a-1f5d-4568-a28c-5ab99cb8bf57 in datapath 2e968718-4c14-4683-834f-9f8b2842e831 bound to our chassis
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:21.059 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2e968718-4c14-4683-834f-9f8b2842e831
Oct 02 13:18:21 compute-0 systemd-udevd[414205]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:21.074 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7c3e0ee1-2bee-43ab-8470-ccc758648c5f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:21.075 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2e968718-41 in ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:21.077 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2e968718-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:21.077 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[c243517b-f813-4d1f-8b34-98811d3f7cb9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:21 compute-0 systemd-machined[211836]: New machine qemu-108-instance-000000e0.
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:21.078 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e576a6a3-d957-416b-8e49-39c526f79679]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:21 compute-0 NetworkManager[44987]: <info>  [1759411101.0910] device (tapa810367a-1f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:18:21 compute-0 NetworkManager[44987]: <info>  [1759411101.0917] device (tapa810367a-1f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:21.093 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[130107ac-4a58-4dbc-a753-27936007825f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:21 compute-0 nova_compute[257802]: 2025-10-02 13:18:21.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:21 compute-0 ovn_controller[148183]: 2025-10-02T13:18:21Z|01012|binding|INFO|Setting lport a810367a-1f5d-4568-a28c-5ab99cb8bf57 ovn-installed in OVS
Oct 02 13:18:21 compute-0 ovn_controller[148183]: 2025-10-02T13:18:21Z|01013|binding|INFO|Setting lport a810367a-1f5d-4568-a28c-5ab99cb8bf57 up in Southbound
Oct 02 13:18:21 compute-0 nova_compute[257802]: 2025-10-02 13:18:21.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:21.120 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[6015dba0-6fc8-4ed8-81f0-b0f3f0b01942]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:21 compute-0 systemd[1]: Started Virtual Machine qemu-108-instance-000000e0.
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:21.148 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[a73c822c-af4c-4183-ba85-2a3108fc0131]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:21.153 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ced0bf77-1d1e-475c-b5d9-029194dedb88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:21 compute-0 NetworkManager[44987]: <info>  [1759411101.1541] manager: (tap2e968718-40): new Veth device (/org/freedesktop/NetworkManager/Devices/459)
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:21.181 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[bbc62c23-56f0-4adf-9011-65712ba89be5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:21.185 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[de2c4925-39f4-476f-a558-f9359f2b8a0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:21 compute-0 NetworkManager[44987]: <info>  [1759411101.2077] device (tap2e968718-40): carrier: link connected
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:21.212 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[bbe7f856-d1ca-4a47-943a-836e5f4f366d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:21.227 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[df24a52e-e0de-4dd7-b2b4-7a11c6f8d700]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2e968718-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:1f:3a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 916883, 'reachable_time': 31857, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414237, 'error': None, 'target': 'ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:21.243 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4acc9b88-8ef3-41ea-aec6-f18d397cf104]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feaa:1f3a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 916883, 'tstamp': 916883}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 414238, 'error': None, 'target': 'ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:21.259 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[30e6ae98-a6c4-4488-8dfb-e3fa082b1843]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2e968718-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:1f:3a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 916883, 'reachable_time': 31857, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 414239, 'error': None, 'target': 'ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:21.284 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f9691c42-a953-48df-988e-ab4a1c0e32be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:21.344 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[97a52ed6-5798-438f-aa6c-0e6ec6ad4d95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:21.346 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e968718-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:21.346 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:21.347 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e968718-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:21 compute-0 nova_compute[257802]: 2025-10-02 13:18:21.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:21 compute-0 NetworkManager[44987]: <info>  [1759411101.3511] manager: (tap2e968718-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/460)
Oct 02 13:18:21 compute-0 kernel: tap2e968718-40: entered promiscuous mode
Oct 02 13:18:21 compute-0 nova_compute[257802]: 2025-10-02 13:18:21.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:21.355 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2e968718-40, col_values=(('external_ids', {'iface-id': 'd0899bf2-1dee-4c8d-a606-66b23eef0d04'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:21 compute-0 nova_compute[257802]: 2025-10-02 13:18:21.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:21 compute-0 ovn_controller[148183]: 2025-10-02T13:18:21Z|01014|binding|INFO|Releasing lport d0899bf2-1dee-4c8d-a606-66b23eef0d04 from this chassis (sb_readonly=0)
Oct 02 13:18:21 compute-0 nova_compute[257802]: 2025-10-02 13:18:21.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:21 compute-0 nova_compute[257802]: 2025-10-02 13:18:21.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:21.374 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2e968718-4c14-4683-834f-9f8b2842e831.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2e968718-4c14-4683-834f-9f8b2842e831.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:21.375 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[3758443d-09f4-4fd2-9243-603553359274]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:21.376 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: global
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-2e968718-4c14-4683-834f-9f8b2842e831
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/2e968718-4c14-4683-834f-9f8b2842e831.pid.haproxy
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 2e968718-4c14-4683-834f-9f8b2842e831
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:18:21 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:21.376 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831', 'env', 'PROCESS_TAG=haproxy-2e968718-4c14-4683-834f-9f8b2842e831', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2e968718-4c14-4683-834f-9f8b2842e831.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:18:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:21 compute-0 nova_compute[257802]: 2025-10-02 13:18:21.469 2 DEBUG nova.compute.manager [req-342d378e-ca3e-4aad-ab85-ce7af25f2110 req-332956d4-03b2-4dfd-b741-cce507ebc327 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Received event network-vif-plugged-a810367a-1f5d-4568-a28c-5ab99cb8bf57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:18:21 compute-0 nova_compute[257802]: 2025-10-02 13:18:21.470 2 DEBUG oslo_concurrency.lockutils [req-342d378e-ca3e-4aad-ab85-ce7af25f2110 req-332956d4-03b2-4dfd-b741-cce507ebc327 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "701a8696-62d7-4046-a17e-da0dbbffa7f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:21 compute-0 nova_compute[257802]: 2025-10-02 13:18:21.470 2 DEBUG oslo_concurrency.lockutils [req-342d378e-ca3e-4aad-ab85-ce7af25f2110 req-332956d4-03b2-4dfd-b741-cce507ebc327 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "701a8696-62d7-4046-a17e-da0dbbffa7f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:21 compute-0 nova_compute[257802]: 2025-10-02 13:18:21.470 2 DEBUG oslo_concurrency.lockutils [req-342d378e-ca3e-4aad-ab85-ce7af25f2110 req-332956d4-03b2-4dfd-b741-cce507ebc327 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "701a8696-62d7-4046-a17e-da0dbbffa7f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:21 compute-0 nova_compute[257802]: 2025-10-02 13:18:21.470 2 DEBUG nova.compute.manager [req-342d378e-ca3e-4aad-ab85-ce7af25f2110 req-332956d4-03b2-4dfd-b741-cce507ebc327 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Processing event network-vif-plugged-a810367a-1f5d-4568-a28c-5ab99cb8bf57 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:18:21 compute-0 podman[414313]: 2025-10-02 13:18:21.771796167 +0000 UTC m=+0.051707595 container create 23d7d272d9b0a9163cbed7d951d609a94b4193aa3186ba4948f970aae0d9b4bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:18:21 compute-0 systemd[1]: Started libpod-conmon-23d7d272d9b0a9163cbed7d951d609a94b4193aa3186ba4948f970aae0d9b4bc.scope.
Oct 02 13:18:21 compute-0 podman[414313]: 2025-10-02 13:18:21.745782061 +0000 UTC m=+0.025693529 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:18:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:18:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78144a815eb4af8858db03ac1a3f0bc37fbdb4cf956ca916c28e4f45065f0b97/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:18:21 compute-0 podman[414313]: 2025-10-02 13:18:21.887284685 +0000 UTC m=+0.167196143 container init 23d7d272d9b0a9163cbed7d951d609a94b4193aa3186ba4948f970aae0d9b4bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 02 13:18:21 compute-0 podman[414313]: 2025-10-02 13:18:21.893969536 +0000 UTC m=+0.173880974 container start 23d7d272d9b0a9163cbed7d951d609a94b4193aa3186ba4948f970aae0d9b4bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:18:21 compute-0 neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831[414329]: [NOTICE]   (414333) : New worker (414335) forked
Oct 02 13:18:21 compute-0 neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831[414329]: [NOTICE]   (414333) : Loading success.
Oct 02 13:18:22 compute-0 ceph-mon[73607]: pgmap v3621: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:18:22 compute-0 nova_compute[257802]: 2025-10-02 13:18:22.059 2 DEBUG nova.compute.manager [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:18:22 compute-0 nova_compute[257802]: 2025-10-02 13:18:22.061 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759411102.0587776, 701a8696-62d7-4046-a17e-da0dbbffa7f4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:18:22 compute-0 nova_compute[257802]: 2025-10-02 13:18:22.061 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] VM Started (Lifecycle Event)
Oct 02 13:18:22 compute-0 nova_compute[257802]: 2025-10-02 13:18:22.066 2 DEBUG nova.virt.libvirt.driver [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:18:22 compute-0 nova_compute[257802]: 2025-10-02 13:18:22.070 2 INFO nova.virt.libvirt.driver [-] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Instance spawned successfully.
Oct 02 13:18:22 compute-0 nova_compute[257802]: 2025-10-02 13:18:22.070 2 DEBUG nova.virt.libvirt.driver [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:18:22 compute-0 nova_compute[257802]: 2025-10-02 13:18:22.081 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:18:22 compute-0 nova_compute[257802]: 2025-10-02 13:18:22.089 2 DEBUG nova.virt.libvirt.driver [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:18:22 compute-0 nova_compute[257802]: 2025-10-02 13:18:22.090 2 DEBUG nova.virt.libvirt.driver [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:18:22 compute-0 nova_compute[257802]: 2025-10-02 13:18:22.090 2 DEBUG nova.virt.libvirt.driver [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:18:22 compute-0 nova_compute[257802]: 2025-10-02 13:18:22.090 2 DEBUG nova.virt.libvirt.driver [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:18:22 compute-0 nova_compute[257802]: 2025-10-02 13:18:22.091 2 DEBUG nova.virt.libvirt.driver [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:18:22 compute-0 nova_compute[257802]: 2025-10-02 13:18:22.091 2 DEBUG nova.virt.libvirt.driver [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:18:22 compute-0 nova_compute[257802]: 2025-10-02 13:18:22.095 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:18:22 compute-0 nova_compute[257802]: 2025-10-02 13:18:22.129 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:18:22 compute-0 nova_compute[257802]: 2025-10-02 13:18:22.129 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759411102.0594075, 701a8696-62d7-4046-a17e-da0dbbffa7f4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:18:22 compute-0 nova_compute[257802]: 2025-10-02 13:18:22.129 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] VM Paused (Lifecycle Event)
Oct 02 13:18:22 compute-0 nova_compute[257802]: 2025-10-02 13:18:22.159 2 INFO nova.compute.manager [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Took 7.58 seconds to spawn the instance on the hypervisor.
Oct 02 13:18:22 compute-0 nova_compute[257802]: 2025-10-02 13:18:22.159 2 DEBUG nova.compute.manager [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:18:22 compute-0 nova_compute[257802]: 2025-10-02 13:18:22.161 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:18:22 compute-0 nova_compute[257802]: 2025-10-02 13:18:22.168 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759411102.0654156, 701a8696-62d7-4046-a17e-da0dbbffa7f4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:18:22 compute-0 nova_compute[257802]: 2025-10-02 13:18:22.168 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] VM Resumed (Lifecycle Event)
Oct 02 13:18:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:18:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:22.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:18:22 compute-0 nova_compute[257802]: 2025-10-02 13:18:22.207 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:18:22 compute-0 nova_compute[257802]: 2025-10-02 13:18:22.209 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:18:22 compute-0 nova_compute[257802]: 2025-10-02 13:18:22.233 2 INFO nova.compute.manager [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Took 8.49 seconds to build instance.
Oct 02 13:18:22 compute-0 nova_compute[257802]: 2025-10-02 13:18:22.250 2 DEBUG oslo_concurrency.lockutils [None req-b492992c-de99-423f-8252-88918bc6fb75 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "701a8696-62d7-4046-a17e-da0dbbffa7f4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:22.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3622: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:18:23 compute-0 nova_compute[257802]: 2025-10-02 13:18:23.621 2 DEBUG nova.compute.manager [req-75a00309-478a-4a2f-9347-f6de1d1bcbc5 req-a99ccaab-3d1a-4d60-8b58-9bd94b518fd1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Received event network-vif-plugged-a810367a-1f5d-4568-a28c-5ab99cb8bf57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:18:23 compute-0 nova_compute[257802]: 2025-10-02 13:18:23.621 2 DEBUG oslo_concurrency.lockutils [req-75a00309-478a-4a2f-9347-f6de1d1bcbc5 req-a99ccaab-3d1a-4d60-8b58-9bd94b518fd1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "701a8696-62d7-4046-a17e-da0dbbffa7f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:23 compute-0 nova_compute[257802]: 2025-10-02 13:18:23.621 2 DEBUG oslo_concurrency.lockutils [req-75a00309-478a-4a2f-9347-f6de1d1bcbc5 req-a99ccaab-3d1a-4d60-8b58-9bd94b518fd1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "701a8696-62d7-4046-a17e-da0dbbffa7f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:23 compute-0 nova_compute[257802]: 2025-10-02 13:18:23.621 2 DEBUG oslo_concurrency.lockutils [req-75a00309-478a-4a2f-9347-f6de1d1bcbc5 req-a99ccaab-3d1a-4d60-8b58-9bd94b518fd1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "701a8696-62d7-4046-a17e-da0dbbffa7f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:23 compute-0 nova_compute[257802]: 2025-10-02 13:18:23.621 2 DEBUG nova.compute.manager [req-75a00309-478a-4a2f-9347-f6de1d1bcbc5 req-a99ccaab-3d1a-4d60-8b58-9bd94b518fd1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] No waiting events found dispatching network-vif-plugged-a810367a-1f5d-4568-a28c-5ab99cb8bf57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:18:23 compute-0 nova_compute[257802]: 2025-10-02 13:18:23.622 2 WARNING nova.compute.manager [req-75a00309-478a-4a2f-9347-f6de1d1bcbc5 req-a99ccaab-3d1a-4d60-8b58-9bd94b518fd1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Received unexpected event network-vif-plugged-a810367a-1f5d-4568-a28c-5ab99cb8bf57 for instance with vm_state active and task_state None.
Oct 02 13:18:24 compute-0 ceph-mon[73607]: pgmap v3622: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:18:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:24.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:18:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:24.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:18:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3623: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 714 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Oct 02 13:18:24 compute-0 podman[414346]: 2025-10-02 13:18:24.912561599 +0000 UTC m=+0.051112181 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true)
Oct 02 13:18:24 compute-0 podman[414348]: 2025-10-02 13:18:24.919537347 +0000 UTC m=+0.053404436 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 13:18:24 compute-0 podman[414347]: 2025-10-02 13:18:24.921944444 +0000 UTC m=+0.058768045 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 13:18:24 compute-0 nova_compute[257802]: 2025-10-02 13:18:24.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:24 compute-0 NetworkManager[44987]: <info>  [1759411104.9709] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/461)
Oct 02 13:18:24 compute-0 NetworkManager[44987]: <info>  [1759411104.9719] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/462)
Oct 02 13:18:24 compute-0 ovn_controller[148183]: 2025-10-02T13:18:24Z|01015|binding|INFO|Releasing lport d0899bf2-1dee-4c8d-a606-66b23eef0d04 from this chassis (sb_readonly=0)
Oct 02 13:18:25 compute-0 nova_compute[257802]: 2025-10-02 13:18:25.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:25 compute-0 ovn_controller[148183]: 2025-10-02T13:18:25Z|01016|binding|INFO|Releasing lport d0899bf2-1dee-4c8d-a606-66b23eef0d04 from this chassis (sb_readonly=0)
Oct 02 13:18:25 compute-0 nova_compute[257802]: 2025-10-02 13:18:25.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:25 compute-0 ceph-mon[73607]: pgmap v3623: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 714 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Oct 02 13:18:25 compute-0 nova_compute[257802]: 2025-10-02 13:18:25.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:25 compute-0 nova_compute[257802]: 2025-10-02 13:18:25.733 2 DEBUG nova.compute.manager [req-822b6e80-ee70-4ae4-892f-de273eca35d9 req-96ae830c-4c35-418d-800f-87f8e4e1104d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Received event network-changed-a810367a-1f5d-4568-a28c-5ab99cb8bf57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:18:25 compute-0 nova_compute[257802]: 2025-10-02 13:18:25.733 2 DEBUG nova.compute.manager [req-822b6e80-ee70-4ae4-892f-de273eca35d9 req-96ae830c-4c35-418d-800f-87f8e4e1104d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Refreshing instance network info cache due to event network-changed-a810367a-1f5d-4568-a28c-5ab99cb8bf57. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:18:25 compute-0 nova_compute[257802]: 2025-10-02 13:18:25.734 2 DEBUG oslo_concurrency.lockutils [req-822b6e80-ee70-4ae4-892f-de273eca35d9 req-96ae830c-4c35-418d-800f-87f8e4e1104d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-701a8696-62d7-4046-a17e-da0dbbffa7f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:18:25 compute-0 nova_compute[257802]: 2025-10-02 13:18:25.734 2 DEBUG oslo_concurrency.lockutils [req-822b6e80-ee70-4ae4-892f-de273eca35d9 req-96ae830c-4c35-418d-800f-87f8e4e1104d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-701a8696-62d7-4046-a17e-da0dbbffa7f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:18:25 compute-0 nova_compute[257802]: 2025-10-02 13:18:25.734 2 DEBUG nova.network.neutron [req-822b6e80-ee70-4ae4-892f-de273eca35d9 req-96ae830c-4c35-418d-800f-87f8e4e1104d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Refreshing network info cache for port a810367a-1f5d-4568-a28c-5ab99cb8bf57 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:18:25 compute-0 nova_compute[257802]: 2025-10-02 13:18:25.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:26.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:18:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:26.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:18:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3624: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 848 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Oct 02 13:18:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:27.004 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:27.005 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:27.005 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:28 compute-0 ceph-mon[73607]: pgmap v3624: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 848 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Oct 02 13:18:28 compute-0 nova_compute[257802]: 2025-10-02 13:18:28.100 2 DEBUG nova.network.neutron [req-822b6e80-ee70-4ae4-892f-de273eca35d9 req-96ae830c-4c35-418d-800f-87f8e4e1104d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Updated VIF entry in instance network info cache for port a810367a-1f5d-4568-a28c-5ab99cb8bf57. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:18:28 compute-0 nova_compute[257802]: 2025-10-02 13:18:28.101 2 DEBUG nova.network.neutron [req-822b6e80-ee70-4ae4-892f-de273eca35d9 req-96ae830c-4c35-418d-800f-87f8e4e1104d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Updating instance_info_cache with network_info: [{"id": "a810367a-1f5d-4568-a28c-5ab99cb8bf57", "address": "fa:16:3e:c4:e1:b1", "network": {"id": "2e968718-4c14-4683-834f-9f8b2842e831", "bridge": "br-int", "label": "tempest-network-smoke--1015430429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa810367a-1f", "ovs_interfaceid": "a810367a-1f5d-4568-a28c-5ab99cb8bf57", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:18:28 compute-0 nova_compute[257802]: 2025-10-02 13:18:28.124 2 DEBUG oslo_concurrency.lockutils [req-822b6e80-ee70-4ae4-892f-de273eca35d9 req-96ae830c-4c35-418d-800f-87f8e4e1104d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-701a8696-62d7-4046-a17e-da0dbbffa7f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:18:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:28.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:18:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:28.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:18:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3625: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 99 op/s
Oct 02 13:18:30 compute-0 ceph-mon[73607]: pgmap v3625: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 99 op/s
Oct 02 13:18:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:18:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:30.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:18:30 compute-0 nova_compute[257802]: 2025-10-02 13:18:30.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:30.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:30 compute-0 sudo[414403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:18:30 compute-0 sudo[414403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:18:30 compute-0 sudo[414403]: pam_unix(sudo:session): session closed for user root
Oct 02 13:18:30 compute-0 sudo[414429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:18:30 compute-0 sudo[414429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:18:30 compute-0 sudo[414429]: pam_unix(sudo:session): session closed for user root
Oct 02 13:18:30 compute-0 nova_compute[257802]: 2025-10-02 13:18:30.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3626: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 281 KiB/s wr, 85 op/s
Oct 02 13:18:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:32 compute-0 ceph-mon[73607]: pgmap v3626: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 281 KiB/s wr, 85 op/s
Oct 02 13:18:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:18:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:32.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:18:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:32.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3627: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 13:18:33 compute-0 ceph-mon[73607]: pgmap v3627: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 13:18:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:34.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:34.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3628: 305 pgs: 305 active+clean; 172 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 444 KiB/s wr, 78 op/s
Oct 02 13:18:35 compute-0 nova_compute[257802]: 2025-10-02 13:18:35.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:35 compute-0 ovn_controller[148183]: 2025-10-02T13:18:35Z|00135|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c4:e1:b1 10.100.0.3
Oct 02 13:18:35 compute-0 ovn_controller[148183]: 2025-10-02T13:18:35Z|00136|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c4:e1:b1 10.100.0.3
Oct 02 13:18:35 compute-0 nova_compute[257802]: 2025-10-02 13:18:35.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:35 compute-0 ceph-mon[73607]: pgmap v3628: 305 pgs: 305 active+clean; 172 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 444 KiB/s wr, 78 op/s
Oct 02 13:18:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:18:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:36.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:18:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:36.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3629: 305 pgs: 305 active+clean; 176 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 758 KiB/s wr, 53 op/s
Oct 02 13:18:37 compute-0 nova_compute[257802]: 2025-10-02 13:18:37.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:18:37 compute-0 nova_compute[257802]: 2025-10-02 13:18:37.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:18:37 compute-0 podman[414458]: 2025-10-02 13:18:37.49736399 +0000 UTC m=+0.131948765 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:18:37 compute-0 ceph-mon[73607]: pgmap v3629: 305 pgs: 305 active+clean; 176 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 758 KiB/s wr, 53 op/s
Oct 02 13:18:37 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3402411611' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:18:38 compute-0 nova_compute[257802]: 2025-10-02 13:18:38.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:18:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:38.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:38.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3630: 305 pgs: 305 active+clean; 195 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 85 op/s
Oct 02 13:18:38 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4179534562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:18:39 compute-0 nova_compute[257802]: 2025-10-02 13:18:39.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:18:40 compute-0 ceph-mon[73607]: pgmap v3630: 305 pgs: 305 active+clean; 195 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 85 op/s
Oct 02 13:18:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:40.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:40 compute-0 nova_compute[257802]: 2025-10-02 13:18:40.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:40.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:40 compute-0 nova_compute[257802]: 2025-10-02 13:18:40.656 2 INFO nova.compute.manager [None req-8707d174-f0a8-4d2c-b1f0-3ec7160f922e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Get console output
Oct 02 13:18:40 compute-0 nova_compute[257802]: 2025-10-02 13:18:40.662 20794 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 13:18:40 compute-0 nova_compute[257802]: 2025-10-02 13:18:40.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3631: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 13:18:41 compute-0 nova_compute[257802]: 2025-10-02 13:18:41.019 2 DEBUG nova.objects.instance [None req-bedeb3b4-567f-4103-8573-37955d44d93d ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'pci_devices' on Instance uuid 701a8696-62d7-4046-a17e-da0dbbffa7f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:18:41 compute-0 nova_compute[257802]: 2025-10-02 13:18:41.041 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759411121.0407844, 701a8696-62d7-4046-a17e-da0dbbffa7f4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:18:41 compute-0 nova_compute[257802]: 2025-10-02 13:18:41.041 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] VM Paused (Lifecycle Event)
Oct 02 13:18:41 compute-0 nova_compute[257802]: 2025-10-02 13:18:41.059 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:18:41 compute-0 nova_compute[257802]: 2025-10-02 13:18:41.065 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:18:41 compute-0 nova_compute[257802]: 2025-10-02 13:18:41.085 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 02 13:18:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:41 compute-0 kernel: tapa810367a-1f (unregistering): left promiscuous mode
Oct 02 13:18:41 compute-0 NetworkManager[44987]: <info>  [1759411121.9046] device (tapa810367a-1f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:18:41 compute-0 ovn_controller[148183]: 2025-10-02T13:18:41Z|01017|binding|INFO|Releasing lport a810367a-1f5d-4568-a28c-5ab99cb8bf57 from this chassis (sb_readonly=0)
Oct 02 13:18:41 compute-0 ovn_controller[148183]: 2025-10-02T13:18:41Z|01018|binding|INFO|Setting lport a810367a-1f5d-4568-a28c-5ab99cb8bf57 down in Southbound
Oct 02 13:18:41 compute-0 nova_compute[257802]: 2025-10-02 13:18:41.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:41 compute-0 ovn_controller[148183]: 2025-10-02T13:18:41Z|01019|binding|INFO|Removing iface tapa810367a-1f ovn-installed in OVS
Oct 02 13:18:41 compute-0 nova_compute[257802]: 2025-10-02 13:18:41.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:41.975 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:e1:b1 10.100.0.3'], port_security=['fa:16:3e:c4:e1:b1 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '701a8696-62d7-4046-a17e-da0dbbffa7f4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2e968718-4c14-4683-834f-9f8b2842e831', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08e102ae48244af2ab448a2e1ff757df', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8a88184f-66e4-43e9-9667-38387ec5ec50', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.235'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c427a773-c561-49a7-aeba-7befd0a6f84e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=a810367a-1f5d-4568-a28c-5ab99cb8bf57) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:18:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:41.976 158261 INFO neutron.agent.ovn.metadata.agent [-] Port a810367a-1f5d-4568-a28c-5ab99cb8bf57 in datapath 2e968718-4c14-4683-834f-9f8b2842e831 unbound from our chassis
Oct 02 13:18:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:41.977 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2e968718-4c14-4683-834f-9f8b2842e831, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:18:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:41.979 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[11b6356c-b8fc-4afc-9fc0-cff1032e6cd4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:41 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:41.979 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831 namespace which is not needed anymore
Oct 02 13:18:41 compute-0 nova_compute[257802]: 2025-10-02 13:18:41.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:42 compute-0 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d000000e0.scope: Deactivated successfully.
Oct 02 13:18:42 compute-0 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d000000e0.scope: Consumed 14.013s CPU time.
Oct 02 13:18:42 compute-0 systemd-machined[211836]: Machine qemu-108-instance-000000e0 terminated.
Oct 02 13:18:42 compute-0 ceph-mon[73607]: pgmap v3631: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 13:18:42 compute-0 neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831[414329]: [NOTICE]   (414333) : haproxy version is 2.8.14-c23fe91
Oct 02 13:18:42 compute-0 neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831[414329]: [NOTICE]   (414333) : path to executable is /usr/sbin/haproxy
Oct 02 13:18:42 compute-0 neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831[414329]: [WARNING]  (414333) : Exiting Master process...
Oct 02 13:18:42 compute-0 neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831[414329]: [WARNING]  (414333) : Exiting Master process...
Oct 02 13:18:42 compute-0 neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831[414329]: [ALERT]    (414333) : Current worker (414335) exited with code 143 (Terminated)
Oct 02 13:18:42 compute-0 neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831[414329]: [WARNING]  (414333) : All workers exited. Exiting... (0)
Oct 02 13:18:42 compute-0 nova_compute[257802]: 2025-10-02 13:18:42.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:18:42 compute-0 systemd[1]: libpod-23d7d272d9b0a9163cbed7d951d609a94b4193aa3186ba4948f970aae0d9b4bc.scope: Deactivated successfully.
Oct 02 13:18:42 compute-0 nova_compute[257802]: 2025-10-02 13:18:42.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:18:42 compute-0 podman[414514]: 2025-10-02 13:18:42.105793429 +0000 UTC m=+0.042696307 container died 23d7d272d9b0a9163cbed7d951d609a94b4193aa3186ba4948f970aae0d9b4bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:18:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-23d7d272d9b0a9163cbed7d951d609a94b4193aa3186ba4948f970aae0d9b4bc-userdata-shm.mount: Deactivated successfully.
Oct 02 13:18:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-78144a815eb4af8858db03ac1a3f0bc37fbdb4cf956ca916c28e4f45065f0b97-merged.mount: Deactivated successfully.
Oct 02 13:18:42 compute-0 podman[414514]: 2025-10-02 13:18:42.138345253 +0000 UTC m=+0.075248151 container cleanup 23d7d272d9b0a9163cbed7d951d609a94b4193aa3186ba4948f970aae0d9b4bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 13:18:42 compute-0 systemd[1]: libpod-conmon-23d7d272d9b0a9163cbed7d951d609a94b4193aa3186ba4948f970aae0d9b4bc.scope: Deactivated successfully.
Oct 02 13:18:42 compute-0 podman[414544]: 2025-10-02 13:18:42.200729084 +0000 UTC m=+0.042596776 container remove 23d7d272d9b0a9163cbed7d951d609a94b4193aa3186ba4948f970aae0d9b4bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:18:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:42.207 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[73f13000-73ed-449a-b4ef-6746e7c0ae54]: (4, ('Thu Oct  2 01:18:42 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831 (23d7d272d9b0a9163cbed7d951d609a94b4193aa3186ba4948f970aae0d9b4bc)\n23d7d272d9b0a9163cbed7d951d609a94b4193aa3186ba4948f970aae0d9b4bc\nThu Oct  2 01:18:42 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831 (23d7d272d9b0a9163cbed7d951d609a94b4193aa3186ba4948f970aae0d9b4bc)\n23d7d272d9b0a9163cbed7d951d609a94b4193aa3186ba4948f970aae0d9b4bc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:42.208 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[e118838e-8099-4cd6-8ed9-ea80e65c94bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:42.209 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e968718-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:42 compute-0 nova_compute[257802]: 2025-10-02 13:18:42.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:42 compute-0 kernel: tap2e968718-40: left promiscuous mode
Oct 02 13:18:42 compute-0 nova_compute[257802]: 2025-10-02 13:18:42.226 2 DEBUG nova.compute.manager [None req-bedeb3b4-567f-4103-8573-37955d44d93d ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:18:42 compute-0 nova_compute[257802]: 2025-10-02 13:18:42.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:42.232 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b16600ac-00c6-4a6e-baba-c6b4735c82e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:18:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:42.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:18:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:42.261 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[29a608ac-607a-4f58-9c4e-be376bc6a46d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:42.262 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[f878376c-4fb3-4bd1-b8cb-dfa917997ca8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:42.276 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[fe5a3d91-eb48-4196-9fca-e0e5aeb6219b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 916877, 'reachable_time': 41264, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414571, 'error': None, 'target': 'ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:42.279 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:18:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:42.279 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[5a619001-3f1a-48ab-9cb3-6b04f5aba081]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:42 compute-0 systemd[1]: run-netns-ovnmeta\x2d2e968718\x2d4c14\x2d4683\x2d834f\x2d9f8b2842e831.mount: Deactivated successfully.
Oct 02 13:18:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:18:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:42.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:18:42 compute-0 nova_compute[257802]: 2025-10-02 13:18:42.644 2 DEBUG nova.compute.manager [req-8fa72b0f-4cb9-4440-9ee2-ff3d8da22921 req-ade158c8-37e7-4c8f-8546-a19eea4949e9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Received event network-vif-unplugged-a810367a-1f5d-4568-a28c-5ab99cb8bf57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:18:42 compute-0 nova_compute[257802]: 2025-10-02 13:18:42.644 2 DEBUG oslo_concurrency.lockutils [req-8fa72b0f-4cb9-4440-9ee2-ff3d8da22921 req-ade158c8-37e7-4c8f-8546-a19eea4949e9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "701a8696-62d7-4046-a17e-da0dbbffa7f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:42 compute-0 nova_compute[257802]: 2025-10-02 13:18:42.644 2 DEBUG oslo_concurrency.lockutils [req-8fa72b0f-4cb9-4440-9ee2-ff3d8da22921 req-ade158c8-37e7-4c8f-8546-a19eea4949e9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "701a8696-62d7-4046-a17e-da0dbbffa7f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:42 compute-0 nova_compute[257802]: 2025-10-02 13:18:42.644 2 DEBUG oslo_concurrency.lockutils [req-8fa72b0f-4cb9-4440-9ee2-ff3d8da22921 req-ade158c8-37e7-4c8f-8546-a19eea4949e9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "701a8696-62d7-4046-a17e-da0dbbffa7f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:42 compute-0 nova_compute[257802]: 2025-10-02 13:18:42.644 2 DEBUG nova.compute.manager [req-8fa72b0f-4cb9-4440-9ee2-ff3d8da22921 req-ade158c8-37e7-4c8f-8546-a19eea4949e9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] No waiting events found dispatching network-vif-unplugged-a810367a-1f5d-4568-a28c-5ab99cb8bf57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:18:42 compute-0 nova_compute[257802]: 2025-10-02 13:18:42.645 2 WARNING nova.compute.manager [req-8fa72b0f-4cb9-4440-9ee2-ff3d8da22921 req-ade158c8-37e7-4c8f-8546-a19eea4949e9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Received unexpected event network-vif-unplugged-a810367a-1f5d-4568-a28c-5ab99cb8bf57 for instance with vm_state suspended and task_state None.
Oct 02 13:18:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:18:42
Oct 02 13:18:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:18:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:18:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'volumes', 'images', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'backups', 'default.rgw.log']
Oct 02 13:18:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:18:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:18:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:18:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:18:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:18:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:18:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:18:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3632: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 13:18:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:18:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:18:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:18:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:18:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:18:44 compute-0 ceph-mon[73607]: pgmap v3632: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 13:18:44 compute-0 nova_compute[257802]: 2025-10-02 13:18:44.199 2 INFO nova.compute.manager [None req-b5e05056-2ac1-4dc4-b8e6-42153349562d ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Get console output
Oct 02 13:18:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:44.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:18:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:18:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:18:44 compute-0 nova_compute[257802]: 2025-10-02 13:18:44.336 2 INFO nova.compute.manager [None req-0520e1fe-f345-4ce7-9594-a61b7e795e1c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Resuming
Oct 02 13:18:44 compute-0 nova_compute[257802]: 2025-10-02 13:18:44.337 2 DEBUG nova.objects.instance [None req-0520e1fe-f345-4ce7-9594-a61b7e795e1c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'flavor' on Instance uuid 701a8696-62d7-4046-a17e-da0dbbffa7f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:18:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:18:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:18:44 compute-0 nova_compute[257802]: 2025-10-02 13:18:44.378 2 DEBUG oslo_concurrency.lockutils [None req-0520e1fe-f345-4ce7-9594-a61b7e795e1c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "refresh_cache-701a8696-62d7-4046-a17e-da0dbbffa7f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:18:44 compute-0 nova_compute[257802]: 2025-10-02 13:18:44.378 2 DEBUG oslo_concurrency.lockutils [None req-0520e1fe-f345-4ce7-9594-a61b7e795e1c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquired lock "refresh_cache-701a8696-62d7-4046-a17e-da0dbbffa7f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:18:44 compute-0 nova_compute[257802]: 2025-10-02 13:18:44.378 2 DEBUG nova.network.neutron [None req-0520e1fe-f345-4ce7-9594-a61b7e795e1c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:18:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:44.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:44 compute-0 nova_compute[257802]: 2025-10-02 13:18:44.782 2 DEBUG nova.compute.manager [req-67eae2a6-382c-4c76-9f1e-5309c3969daf req-43b3e545-9a7b-4dba-b07d-e1f00161dd8b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Received event network-vif-plugged-a810367a-1f5d-4568-a28c-5ab99cb8bf57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:18:44 compute-0 nova_compute[257802]: 2025-10-02 13:18:44.783 2 DEBUG oslo_concurrency.lockutils [req-67eae2a6-382c-4c76-9f1e-5309c3969daf req-43b3e545-9a7b-4dba-b07d-e1f00161dd8b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "701a8696-62d7-4046-a17e-da0dbbffa7f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:44 compute-0 nova_compute[257802]: 2025-10-02 13:18:44.783 2 DEBUG oslo_concurrency.lockutils [req-67eae2a6-382c-4c76-9f1e-5309c3969daf req-43b3e545-9a7b-4dba-b07d-e1f00161dd8b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "701a8696-62d7-4046-a17e-da0dbbffa7f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:44 compute-0 nova_compute[257802]: 2025-10-02 13:18:44.783 2 DEBUG oslo_concurrency.lockutils [req-67eae2a6-382c-4c76-9f1e-5309c3969daf req-43b3e545-9a7b-4dba-b07d-e1f00161dd8b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "701a8696-62d7-4046-a17e-da0dbbffa7f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:44 compute-0 nova_compute[257802]: 2025-10-02 13:18:44.784 2 DEBUG nova.compute.manager [req-67eae2a6-382c-4c76-9f1e-5309c3969daf req-43b3e545-9a7b-4dba-b07d-e1f00161dd8b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] No waiting events found dispatching network-vif-plugged-a810367a-1f5d-4568-a28c-5ab99cb8bf57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:18:44 compute-0 nova_compute[257802]: 2025-10-02 13:18:44.784 2 WARNING nova.compute.manager [req-67eae2a6-382c-4c76-9f1e-5309c3969daf req-43b3e545-9a7b-4dba-b07d-e1f00161dd8b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Received unexpected event network-vif-plugged-a810367a-1f5d-4568-a28c-5ab99cb8bf57 for instance with vm_state suspended and task_state resuming.
Oct 02 13:18:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3633: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 13:18:45 compute-0 ceph-mon[73607]: pgmap v3633: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 13:18:45 compute-0 nova_compute[257802]: 2025-10-02 13:18:45.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:45 compute-0 nova_compute[257802]: 2025-10-02 13:18:45.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:46.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:46.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3634: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 313 KiB/s rd, 1.7 MiB/s wr, 59 op/s
Oct 02 13:18:47 compute-0 nova_compute[257802]: 2025-10-02 13:18:47.476 2 DEBUG nova.network.neutron [None req-0520e1fe-f345-4ce7-9594-a61b7e795e1c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Updating instance_info_cache with network_info: [{"id": "a810367a-1f5d-4568-a28c-5ab99cb8bf57", "address": "fa:16:3e:c4:e1:b1", "network": {"id": "2e968718-4c14-4683-834f-9f8b2842e831", "bridge": "br-int", "label": "tempest-network-smoke--1015430429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa810367a-1f", "ovs_interfaceid": "a810367a-1f5d-4568-a28c-5ab99cb8bf57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:18:47 compute-0 nova_compute[257802]: 2025-10-02 13:18:47.494 2 DEBUG oslo_concurrency.lockutils [None req-0520e1fe-f345-4ce7-9594-a61b7e795e1c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Releasing lock "refresh_cache-701a8696-62d7-4046-a17e-da0dbbffa7f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:18:47 compute-0 nova_compute[257802]: 2025-10-02 13:18:47.501 2 DEBUG nova.virt.libvirt.vif [None req-0520e1fe-f345-4ce7-9594-a61b7e795e1c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:18:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1532373923',display_name='tempest-TestNetworkAdvancedServerOps-server-1532373923',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1532373923',id=224,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOxNV81Fq7eK+6ST2Jt1hNcnCBbZFrK+0k2aVOLxgQZlGJmOH1cMJsOmeMX4LxyryLs76B+yz31Ns2YEGZ2X4ixvB4zRgY23pSFnzyUUOeaDgM5tsHwSMti+IeIqi4QgdQ==',key_name='tempest-TestNetworkAdvancedServerOps-206612891',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:18:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-9l3506wu',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:18:42Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=701a8696-62d7-4046-a17e-da0dbbffa7f4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "a810367a-1f5d-4568-a28c-5ab99cb8bf57", "address": "fa:16:3e:c4:e1:b1", "network": {"id": "2e968718-4c14-4683-834f-9f8b2842e831", "bridge": "br-int", "label": "tempest-network-smoke--1015430429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa810367a-1f", "ovs_interfaceid": "a810367a-1f5d-4568-a28c-5ab99cb8bf57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:18:47 compute-0 nova_compute[257802]: 2025-10-02 13:18:47.502 2 DEBUG nova.network.os_vif_util [None req-0520e1fe-f345-4ce7-9594-a61b7e795e1c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "a810367a-1f5d-4568-a28c-5ab99cb8bf57", "address": "fa:16:3e:c4:e1:b1", "network": {"id": "2e968718-4c14-4683-834f-9f8b2842e831", "bridge": "br-int", "label": "tempest-network-smoke--1015430429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa810367a-1f", "ovs_interfaceid": "a810367a-1f5d-4568-a28c-5ab99cb8bf57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:18:47 compute-0 nova_compute[257802]: 2025-10-02 13:18:47.503 2 DEBUG nova.network.os_vif_util [None req-0520e1fe-f345-4ce7-9594-a61b7e795e1c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:e1:b1,bridge_name='br-int',has_traffic_filtering=True,id=a810367a-1f5d-4568-a28c-5ab99cb8bf57,network=Network(2e968718-4c14-4683-834f-9f8b2842e831),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa810367a-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:18:47 compute-0 nova_compute[257802]: 2025-10-02 13:18:47.503 2 DEBUG os_vif [None req-0520e1fe-f345-4ce7-9594-a61b7e795e1c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:e1:b1,bridge_name='br-int',has_traffic_filtering=True,id=a810367a-1f5d-4568-a28c-5ab99cb8bf57,network=Network(2e968718-4c14-4683-834f-9f8b2842e831),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa810367a-1f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:18:47 compute-0 nova_compute[257802]: 2025-10-02 13:18:47.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:47 compute-0 nova_compute[257802]: 2025-10-02 13:18:47.504 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:47 compute-0 nova_compute[257802]: 2025-10-02 13:18:47.504 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:18:47 compute-0 nova_compute[257802]: 2025-10-02 13:18:47.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:47 compute-0 nova_compute[257802]: 2025-10-02 13:18:47.507 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa810367a-1f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:47 compute-0 nova_compute[257802]: 2025-10-02 13:18:47.507 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa810367a-1f, col_values=(('external_ids', {'iface-id': 'a810367a-1f5d-4568-a28c-5ab99cb8bf57', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:e1:b1', 'vm-uuid': '701a8696-62d7-4046-a17e-da0dbbffa7f4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:47 compute-0 nova_compute[257802]: 2025-10-02 13:18:47.508 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:18:47 compute-0 nova_compute[257802]: 2025-10-02 13:18:47.508 2 INFO os_vif [None req-0520e1fe-f345-4ce7-9594-a61b7e795e1c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:e1:b1,bridge_name='br-int',has_traffic_filtering=True,id=a810367a-1f5d-4568-a28c-5ab99cb8bf57,network=Network(2e968718-4c14-4683-834f-9f8b2842e831),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa810367a-1f')
Oct 02 13:18:47 compute-0 nova_compute[257802]: 2025-10-02 13:18:47.526 2 DEBUG nova.objects.instance [None req-0520e1fe-f345-4ce7-9594-a61b7e795e1c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'numa_topology' on Instance uuid 701a8696-62d7-4046-a17e-da0dbbffa7f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:18:47 compute-0 kernel: tapa810367a-1f: entered promiscuous mode
Oct 02 13:18:47 compute-0 nova_compute[257802]: 2025-10-02 13:18:47.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:47 compute-0 ovn_controller[148183]: 2025-10-02T13:18:47Z|01020|binding|INFO|Claiming lport a810367a-1f5d-4568-a28c-5ab99cb8bf57 for this chassis.
Oct 02 13:18:47 compute-0 ovn_controller[148183]: 2025-10-02T13:18:47Z|01021|binding|INFO|a810367a-1f5d-4568-a28c-5ab99cb8bf57: Claiming fa:16:3e:c4:e1:b1 10.100.0.3
Oct 02 13:18:47 compute-0 NetworkManager[44987]: <info>  [1759411127.5996] manager: (tapa810367a-1f): new Tun device (/org/freedesktop/NetworkManager/Devices/463)
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:47.602 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:e1:b1 10.100.0.3'], port_security=['fa:16:3e:c4:e1:b1 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '701a8696-62d7-4046-a17e-da0dbbffa7f4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2e968718-4c14-4683-834f-9f8b2842e831', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08e102ae48244af2ab448a2e1ff757df', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8a88184f-66e4-43e9-9667-38387ec5ec50', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.235'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c427a773-c561-49a7-aeba-7befd0a6f84e, chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=a810367a-1f5d-4568-a28c-5ab99cb8bf57) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:47.604 158261 INFO neutron.agent.ovn.metadata.agent [-] Port a810367a-1f5d-4568-a28c-5ab99cb8bf57 in datapath 2e968718-4c14-4683-834f-9f8b2842e831 bound to our chassis
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:47.605 158261 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2e968718-4c14-4683-834f-9f8b2842e831
Oct 02 13:18:47 compute-0 ovn_controller[148183]: 2025-10-02T13:18:47Z|01022|binding|INFO|Setting lport a810367a-1f5d-4568-a28c-5ab99cb8bf57 ovn-installed in OVS
Oct 02 13:18:47 compute-0 ovn_controller[148183]: 2025-10-02T13:18:47Z|01023|binding|INFO|Setting lport a810367a-1f5d-4568-a28c-5ab99cb8bf57 up in Southbound
Oct 02 13:18:47 compute-0 nova_compute[257802]: 2025-10-02 13:18:47.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:47 compute-0 nova_compute[257802]: 2025-10-02 13:18:47.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:47.621 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[fa3c98db-67e4-4ae1-b028-3ca226137d85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:47.622 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2e968718-41 in ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:47.624 264953 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2e968718-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:47.624 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[1d0f7065-d37c-46f7-be4b-b790bcb548c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:47.625 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[24332ae3-2229-40b6-b40d-dd83abfe1194]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:47 compute-0 systemd-udevd[414589]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:18:47 compute-0 systemd-machined[211836]: New machine qemu-109-instance-000000e0.
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:47.637 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[257211c6-e1c1-4e43-99f9-8d78afb6de54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:47 compute-0 NetworkManager[44987]: <info>  [1759411127.6389] device (tapa810367a-1f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:18:47 compute-0 NetworkManager[44987]: <info>  [1759411127.6398] device (tapa810367a-1f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:18:47 compute-0 systemd[1]: Started Virtual Machine qemu-109-instance-000000e0.
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:47.661 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ce0cfa95-92a7-4ad2-afa5-a62867d8dd34]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:47.688 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[98500603-4a0f-49c8-adb7-96c3f5d7e253]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:47.693 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[5b4c91fd-0740-4a26-be08-4a261adf06a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:47 compute-0 NetworkManager[44987]: <info>  [1759411127.6945] manager: (tap2e968718-40): new Veth device (/org/freedesktop/NetworkManager/Devices/464)
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:47.722 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[73eb6cde-4488-4e58-9c34-dc92f232ec44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:47.725 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[48b831eb-aded-489f-8b26-01960bea8216]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:47 compute-0 NetworkManager[44987]: <info>  [1759411127.7435] device (tap2e968718-40): carrier: link connected
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:47.749 264969 DEBUG oslo.privsep.daemon [-] privsep: reply[e1168aaf-57a6-4f65-b670-9997acc0c6f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:47.765 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba959af-0063-4c25-bde2-e78102303aed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2e968718-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:1f:3a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 306], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 919536, 'reachable_time': 19650, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414622, 'error': None, 'target': 'ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:47.780 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[03405e47-1ef0-459e-8043-b9e92345d8ba]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feaa:1f3a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 919536, 'tstamp': 919536}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 414623, 'error': None, 'target': 'ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:47.795 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[b71e21c5-f25d-4900-a0a3-117238b5d6fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2e968718-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:1f:3a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 306], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 919536, 'reachable_time': 19650, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 414624, 'error': None, 'target': 'ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:47.826 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[42b2fb3f-1e64-46fe-bb56-d9c4a4dc3d15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:47.887 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[7187b232-ca39-4f12-a9d0-457f53b583a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:47.888 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e968718-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:47.888 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:47.889 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e968718-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:47 compute-0 nova_compute[257802]: 2025-10-02 13:18:47.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:47 compute-0 NetworkManager[44987]: <info>  [1759411127.8917] manager: (tap2e968718-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/465)
Oct 02 13:18:47 compute-0 kernel: tap2e968718-40: entered promiscuous mode
Oct 02 13:18:47 compute-0 nova_compute[257802]: 2025-10-02 13:18:47.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:47.897 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2e968718-40, col_values=(('external_ids', {'iface-id': 'd0899bf2-1dee-4c8d-a606-66b23eef0d04'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:47 compute-0 nova_compute[257802]: 2025-10-02 13:18:47.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:47 compute-0 ovn_controller[148183]: 2025-10-02T13:18:47Z|01024|binding|INFO|Releasing lport d0899bf2-1dee-4c8d-a606-66b23eef0d04 from this chassis (sb_readonly=0)
Oct 02 13:18:47 compute-0 nova_compute[257802]: 2025-10-02 13:18:47.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:47.901 158261 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2e968718-4c14-4683-834f-9f8b2842e831.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2e968718-4c14-4683-834f-9f8b2842e831.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:47.902 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[00754ab1-9806-4bf2-95f0-bdd3a7bcce89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:47.903 158261 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: global
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]:     log         /dev/log local0 debug
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]:     log-tag     haproxy-metadata-proxy-2e968718-4c14-4683-834f-9f8b2842e831
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]:     user        root
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]:     group       root
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]:     maxconn     1024
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]:     pidfile     /var/lib/neutron/external/pids/2e968718-4c14-4683-834f-9f8b2842e831.pid.haproxy
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]:     daemon
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: defaults
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]:     log global
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]:     mode http
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]:     option httplog
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]:     option dontlognull
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]:     option http-server-close
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]:     option forwardfor
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]:     retries                 3
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]:     timeout http-request    30s
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]:     timeout connect         30s
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]:     timeout client          32s
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]:     timeout server          32s
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]:     timeout http-keep-alive 30s
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: listen listener
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]:     bind 169.254.169.254:80
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]:     http-request add-header X-OVN-Network-ID 2e968718-4c14-4683-834f-9f8b2842e831
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:18:47 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:47.903 158261 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831', 'env', 'PROCESS_TAG=haproxy-2e968718-4c14-4683-834f-9f8b2842e831', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2e968718-4c14-4683-834f-9f8b2842e831.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:18:47 compute-0 nova_compute[257802]: 2025-10-02 13:18:47.912 2 DEBUG nova.compute.manager [req-fd23cf62-38fb-46ac-a1f4-5eef122679eb req-b1754029-304b-4110-be67-8083d1584b16 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Received event network-vif-plugged-a810367a-1f5d-4568-a28c-5ab99cb8bf57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:18:47 compute-0 nova_compute[257802]: 2025-10-02 13:18:47.912 2 DEBUG oslo_concurrency.lockutils [req-fd23cf62-38fb-46ac-a1f4-5eef122679eb req-b1754029-304b-4110-be67-8083d1584b16 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "701a8696-62d7-4046-a17e-da0dbbffa7f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:47 compute-0 nova_compute[257802]: 2025-10-02 13:18:47.913 2 DEBUG oslo_concurrency.lockutils [req-fd23cf62-38fb-46ac-a1f4-5eef122679eb req-b1754029-304b-4110-be67-8083d1584b16 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "701a8696-62d7-4046-a17e-da0dbbffa7f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:47 compute-0 nova_compute[257802]: 2025-10-02 13:18:47.913 2 DEBUG oslo_concurrency.lockutils [req-fd23cf62-38fb-46ac-a1f4-5eef122679eb req-b1754029-304b-4110-be67-8083d1584b16 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "701a8696-62d7-4046-a17e-da0dbbffa7f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:47 compute-0 nova_compute[257802]: 2025-10-02 13:18:47.913 2 DEBUG nova.compute.manager [req-fd23cf62-38fb-46ac-a1f4-5eef122679eb req-b1754029-304b-4110-be67-8083d1584b16 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] No waiting events found dispatching network-vif-plugged-a810367a-1f5d-4568-a28c-5ab99cb8bf57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:18:47 compute-0 nova_compute[257802]: 2025-10-02 13:18:47.914 2 WARNING nova.compute.manager [req-fd23cf62-38fb-46ac-a1f4-5eef122679eb req-b1754029-304b-4110-be67-8083d1584b16 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Received unexpected event network-vif-plugged-a810367a-1f5d-4568-a28c-5ab99cb8bf57 for instance with vm_state suspended and task_state resuming.
Oct 02 13:18:47 compute-0 nova_compute[257802]: 2025-10-02 13:18:47.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:47 compute-0 ceph-mon[73607]: pgmap v3634: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 313 KiB/s rd, 1.7 MiB/s wr, 59 op/s
Oct 02 13:18:48 compute-0 nova_compute[257802]: 2025-10-02 13:18:48.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:18:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:18:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:48.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:18:48 compute-0 podman[414698]: 2025-10-02 13:18:48.283638689 +0000 UTC m=+0.052832172 container create 66796f293430e3a0fba1c6b24c5235b583ae94ceffd51c7963ede5a304327c92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 13:18:48 compute-0 systemd[1]: Started libpod-conmon-66796f293430e3a0fba1c6b24c5235b583ae94ceffd51c7963ede5a304327c92.scope.
Oct 02 13:18:48 compute-0 podman[414698]: 2025-10-02 13:18:48.25874923 +0000 UTC m=+0.027942733 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:18:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bab0d7cb4e776b12455c702725f6297912c5d2b6d28feff266e54840b393e5bc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:18:48 compute-0 podman[414698]: 2025-10-02 13:18:48.407707734 +0000 UTC m=+0.176901247 container init 66796f293430e3a0fba1c6b24c5235b583ae94ceffd51c7963ede5a304327c92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 13:18:48 compute-0 podman[414698]: 2025-10-02 13:18:48.413678078 +0000 UTC m=+0.182871561 container start 66796f293430e3a0fba1c6b24c5235b583ae94ceffd51c7963ede5a304327c92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 13:18:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:48.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:48 compute-0 neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831[414713]: [NOTICE]   (414717) : New worker (414719) forked
Oct 02 13:18:48 compute-0 neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831[414713]: [NOTICE]   (414717) : Loading success.
Oct 02 13:18:48 compute-0 nova_compute[257802]: 2025-10-02 13:18:48.678 2 DEBUG nova.virt.libvirt.host [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Removed pending event for 701a8696-62d7-4046-a17e-da0dbbffa7f4 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 13:18:48 compute-0 nova_compute[257802]: 2025-10-02 13:18:48.679 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759411128.6779437, 701a8696-62d7-4046-a17e-da0dbbffa7f4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:18:48 compute-0 nova_compute[257802]: 2025-10-02 13:18:48.680 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] VM Started (Lifecycle Event)
Oct 02 13:18:48 compute-0 nova_compute[257802]: 2025-10-02 13:18:48.715 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:18:48 compute-0 nova_compute[257802]: 2025-10-02 13:18:48.716 2 DEBUG nova.compute.manager [None req-0520e1fe-f345-4ce7-9594-a61b7e795e1c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:18:48 compute-0 nova_compute[257802]: 2025-10-02 13:18:48.716 2 DEBUG nova.objects.instance [None req-0520e1fe-f345-4ce7-9594-a61b7e795e1c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'pci_devices' on Instance uuid 701a8696-62d7-4046-a17e-da0dbbffa7f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:18:48 compute-0 nova_compute[257802]: 2025-10-02 13:18:48.719 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:18:48 compute-0 nova_compute[257802]: 2025-10-02 13:18:48.735 2 INFO nova.virt.libvirt.driver [-] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Instance running successfully.
Oct 02 13:18:48 compute-0 virtqemud[257280]: argument unsupported: QEMU guest agent is not configured
Oct 02 13:18:48 compute-0 nova_compute[257802]: 2025-10-02 13:18:48.738 2 DEBUG nova.virt.libvirt.guest [None req-0520e1fe-f345-4ce7-9594-a61b7e795e1c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 13:18:48 compute-0 nova_compute[257802]: 2025-10-02 13:18:48.739 2 DEBUG nova.compute.manager [None req-0520e1fe-f345-4ce7-9594-a61b7e795e1c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:18:48 compute-0 nova_compute[257802]: 2025-10-02 13:18:48.740 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 02 13:18:48 compute-0 nova_compute[257802]: 2025-10-02 13:18:48.740 2 DEBUG nova.virt.driver [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] Emitting event <LifecycleEvent: 1759411128.6840365, 701a8696-62d7-4046-a17e-da0dbbffa7f4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:18:48 compute-0 nova_compute[257802]: 2025-10-02 13:18:48.741 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] VM Resumed (Lifecycle Event)
Oct 02 13:18:48 compute-0 nova_compute[257802]: 2025-10-02 13:18:48.775 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:18:48 compute-0 nova_compute[257802]: 2025-10-02 13:18:48.779 2 DEBUG nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:18:48 compute-0 nova_compute[257802]: 2025-10-02 13:18:48.814 2 INFO nova.compute.manager [None req-07e5fefd-020c-4292-aeb1-9785a2e06d1c - - - - - -] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 02 13:18:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3635: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 300 KiB/s rd, 1.4 MiB/s wr, 53 op/s
Oct 02 13:18:49 compute-0 ceph-mon[73607]: pgmap v3635: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 300 KiB/s rd, 1.4 MiB/s wr, 53 op/s
Oct 02 13:18:50 compute-0 nova_compute[257802]: 2025-10-02 13:18:50.000 2 DEBUG nova.compute.manager [req-7001cd59-8fca-473d-a175-314e416093d0 req-38093dfe-f8a3-4dc0-9552-843610bb9833 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Received event network-vif-plugged-a810367a-1f5d-4568-a28c-5ab99cb8bf57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:18:50 compute-0 nova_compute[257802]: 2025-10-02 13:18:50.000 2 DEBUG oslo_concurrency.lockutils [req-7001cd59-8fca-473d-a175-314e416093d0 req-38093dfe-f8a3-4dc0-9552-843610bb9833 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "701a8696-62d7-4046-a17e-da0dbbffa7f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:50 compute-0 nova_compute[257802]: 2025-10-02 13:18:50.000 2 DEBUG oslo_concurrency.lockutils [req-7001cd59-8fca-473d-a175-314e416093d0 req-38093dfe-f8a3-4dc0-9552-843610bb9833 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "701a8696-62d7-4046-a17e-da0dbbffa7f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:50 compute-0 nova_compute[257802]: 2025-10-02 13:18:50.001 2 DEBUG oslo_concurrency.lockutils [req-7001cd59-8fca-473d-a175-314e416093d0 req-38093dfe-f8a3-4dc0-9552-843610bb9833 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "701a8696-62d7-4046-a17e-da0dbbffa7f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:50 compute-0 nova_compute[257802]: 2025-10-02 13:18:50.001 2 DEBUG nova.compute.manager [req-7001cd59-8fca-473d-a175-314e416093d0 req-38093dfe-f8a3-4dc0-9552-843610bb9833 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] No waiting events found dispatching network-vif-plugged-a810367a-1f5d-4568-a28c-5ab99cb8bf57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:18:50 compute-0 nova_compute[257802]: 2025-10-02 13:18:50.001 2 WARNING nova.compute.manager [req-7001cd59-8fca-473d-a175-314e416093d0 req-38093dfe-f8a3-4dc0-9552-843610bb9833 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Received unexpected event network-vif-plugged-a810367a-1f5d-4568-a28c-5ab99cb8bf57 for instance with vm_state active and task_state None.
Oct 02 13:18:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:18:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:50.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:18:50 compute-0 nova_compute[257802]: 2025-10-02 13:18:50.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:50.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:50 compute-0 nova_compute[257802]: 2025-10-02 13:18:50.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3636: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 58 KiB/s rd, 72 KiB/s wr, 17 op/s
Oct 02 13:18:50 compute-0 sudo[414730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:18:50 compute-0 sudo[414730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:18:50 compute-0 sudo[414730]: pam_unix(sudo:session): session closed for user root
Oct 02 13:18:50 compute-0 sudo[414755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:18:50 compute-0 sudo[414755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:18:50 compute-0 sudo[414755]: pam_unix(sudo:session): session closed for user root
Oct 02 13:18:51 compute-0 nova_compute[257802]: 2025-10-02 13:18:51.127 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:18:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:51 compute-0 nova_compute[257802]: 2025-10-02 13:18:51.777 2 INFO nova.compute.manager [None req-47350daf-354f-469c-987e-2f2c2c331919 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Get console output
Oct 02 13:18:51 compute-0 nova_compute[257802]: 2025-10-02 13:18:51.785 20794 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 13:18:52 compute-0 ceph-mon[73607]: pgmap v3636: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 58 KiB/s rd, 72 KiB/s wr, 17 op/s
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #183. Immutable memtables: 0.
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:18:52.151329) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 183
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411132151380, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 1176, "num_deletes": 251, "total_data_size": 1949093, "memory_usage": 1981728, "flush_reason": "Manual Compaction"}
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #184: started
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411132166932, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 184, "file_size": 1915902, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79308, "largest_seqno": 80483, "table_properties": {"data_size": 1910260, "index_size": 3036, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11991, "raw_average_key_size": 19, "raw_value_size": 1899048, "raw_average_value_size": 3154, "num_data_blocks": 135, "num_entries": 602, "num_filter_entries": 602, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411021, "oldest_key_time": 1759411021, "file_creation_time": 1759411132, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 184, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 15710 microseconds, and 5080 cpu microseconds.
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:18:52.167023) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #184: 1915902 bytes OK
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:18:52.167074) [db/memtable_list.cc:519] [default] Level-0 commit table #184 started
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:18:52.169026) [db/memtable_list.cc:722] [default] Level-0 commit table #184: memtable #1 done
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:18:52.169044) EVENT_LOG_v1 {"time_micros": 1759411132169037, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:18:52.169074) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 1943898, prev total WAL file size 1943898, number of live WAL files 2.
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000180.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:18:52.169986) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [184(1870KB)], [182(12MB)]
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411132170071, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [184], "files_L6": [182], "score": -1, "input_data_size": 14934856, "oldest_snapshot_seqno": -1}
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #185: 10584 keys, 13020764 bytes, temperature: kUnknown
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411132241166, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 185, "file_size": 13020764, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12953084, "index_size": 40111, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26501, "raw_key_size": 280160, "raw_average_key_size": 26, "raw_value_size": 12768676, "raw_average_value_size": 1206, "num_data_blocks": 1521, "num_entries": 10584, "num_filter_entries": 10584, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759411132, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:18:52.241568) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 13020764 bytes
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:18:52.242992) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 209.6 rd, 182.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 12.4 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(14.6) write-amplify(6.8) OK, records in: 11101, records dropped: 517 output_compression: NoCompression
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:18:52.243020) EVENT_LOG_v1 {"time_micros": 1759411132243007, "job": 114, "event": "compaction_finished", "compaction_time_micros": 71239, "compaction_time_cpu_micros": 32931, "output_level": 6, "num_output_files": 1, "total_output_size": 13020764, "num_input_records": 11101, "num_output_records": 10584, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000184.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411132243692, "job": 114, "event": "table_file_deletion", "file_number": 184}
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411132247693, "job": 114, "event": "table_file_deletion", "file_number": 182}
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:18:52.169769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:18:52.247895) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:18:52.247907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:18:52.247910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:18:52.247913) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:18:52 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:18:52.247917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:18:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:18:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:52.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:18:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:18:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:52.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:18:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:52.503 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=90, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=89) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:18:52 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:52.504 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:18:52 compute-0 nova_compute[257802]: 2025-10-02 13:18:52.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:52 compute-0 nova_compute[257802]: 2025-10-02 13:18:52.754 2 DEBUG nova.compute.manager [req-004ee2df-fadd-4fe2-b8ff-0ed033e729c2 req-1589280e-7ec5-4c5e-bf0e-6961a946e20b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Received event network-changed-a810367a-1f5d-4568-a28c-5ab99cb8bf57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:18:52 compute-0 nova_compute[257802]: 2025-10-02 13:18:52.755 2 DEBUG nova.compute.manager [req-004ee2df-fadd-4fe2-b8ff-0ed033e729c2 req-1589280e-7ec5-4c5e-bf0e-6961a946e20b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Refreshing instance network info cache due to event network-changed-a810367a-1f5d-4568-a28c-5ab99cb8bf57. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:18:52 compute-0 nova_compute[257802]: 2025-10-02 13:18:52.755 2 DEBUG oslo_concurrency.lockutils [req-004ee2df-fadd-4fe2-b8ff-0ed033e729c2 req-1589280e-7ec5-4c5e-bf0e-6961a946e20b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-701a8696-62d7-4046-a17e-da0dbbffa7f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:18:52 compute-0 nova_compute[257802]: 2025-10-02 13:18:52.756 2 DEBUG oslo_concurrency.lockutils [req-004ee2df-fadd-4fe2-b8ff-0ed033e729c2 req-1589280e-7ec5-4c5e-bf0e-6961a946e20b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-701a8696-62d7-4046-a17e-da0dbbffa7f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:18:52 compute-0 nova_compute[257802]: 2025-10-02 13:18:52.756 2 DEBUG nova.network.neutron [req-004ee2df-fadd-4fe2-b8ff-0ed033e729c2 req-1589280e-7ec5-4c5e-bf0e-6961a946e20b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Refreshing network info cache for port a810367a-1f5d-4568-a28c-5ab99cb8bf57 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:18:52 compute-0 nova_compute[257802]: 2025-10-02 13:18:52.818 2 DEBUG oslo_concurrency.lockutils [None req-592beb6d-7af9-46b2-94ad-d5bfabaea639 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "701a8696-62d7-4046-a17e-da0dbbffa7f4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:52 compute-0 nova_compute[257802]: 2025-10-02 13:18:52.819 2 DEBUG oslo_concurrency.lockutils [None req-592beb6d-7af9-46b2-94ad-d5bfabaea639 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "701a8696-62d7-4046-a17e-da0dbbffa7f4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:52 compute-0 nova_compute[257802]: 2025-10-02 13:18:52.819 2 DEBUG oslo_concurrency.lockutils [None req-592beb6d-7af9-46b2-94ad-d5bfabaea639 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "701a8696-62d7-4046-a17e-da0dbbffa7f4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:52 compute-0 nova_compute[257802]: 2025-10-02 13:18:52.820 2 DEBUG oslo_concurrency.lockutils [None req-592beb6d-7af9-46b2-94ad-d5bfabaea639 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "701a8696-62d7-4046-a17e-da0dbbffa7f4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:52 compute-0 nova_compute[257802]: 2025-10-02 13:18:52.820 2 DEBUG oslo_concurrency.lockutils [None req-592beb6d-7af9-46b2-94ad-d5bfabaea639 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "701a8696-62d7-4046-a17e-da0dbbffa7f4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:52 compute-0 nova_compute[257802]: 2025-10-02 13:18:52.821 2 INFO nova.compute.manager [None req-592beb6d-7af9-46b2-94ad-d5bfabaea639 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Terminating instance
Oct 02 13:18:52 compute-0 nova_compute[257802]: 2025-10-02 13:18:52.822 2 DEBUG nova.compute.manager [None req-592beb6d-7af9-46b2-94ad-d5bfabaea639 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:18:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3637: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.2 KiB/s rd, 12 KiB/s wr, 3 op/s
Oct 02 13:18:52 compute-0 kernel: tapa810367a-1f (unregistering): left promiscuous mode
Oct 02 13:18:52 compute-0 NetworkManager[44987]: <info>  [1759411132.9908] device (tapa810367a-1f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:18:53 compute-0 ovn_controller[148183]: 2025-10-02T13:18:53Z|01025|binding|INFO|Releasing lport a810367a-1f5d-4568-a28c-5ab99cb8bf57 from this chassis (sb_readonly=0)
Oct 02 13:18:53 compute-0 ovn_controller[148183]: 2025-10-02T13:18:53Z|01026|binding|INFO|Setting lport a810367a-1f5d-4568-a28c-5ab99cb8bf57 down in Southbound
Oct 02 13:18:53 compute-0 ovn_controller[148183]: 2025-10-02T13:18:53Z|01027|binding|INFO|Removing iface tapa810367a-1f ovn-installed in OVS
Oct 02 13:18:53 compute-0 nova_compute[257802]: 2025-10-02 13:18:53.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:53 compute-0 nova_compute[257802]: 2025-10-02 13:18:53.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:53.012 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:e1:b1 10.100.0.3'], port_security=['fa:16:3e:c4:e1:b1 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '701a8696-62d7-4046-a17e-da0dbbffa7f4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2e968718-4c14-4683-834f-9f8b2842e831', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08e102ae48244af2ab448a2e1ff757df', 'neutron:revision_number': '6', 'neutron:security_group_ids': '8a88184f-66e4-43e9-9667-38387ec5ec50', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c427a773-c561-49a7-aeba-7befd0a6f84e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>], logical_port=a810367a-1f5d-4568-a28c-5ab99cb8bf57) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6f7616e850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:18:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:53.014 158261 INFO neutron.agent.ovn.metadata.agent [-] Port a810367a-1f5d-4568-a28c-5ab99cb8bf57 in datapath 2e968718-4c14-4683-834f-9f8b2842e831 unbound from our chassis
Oct 02 13:18:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:53.016 158261 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2e968718-4c14-4683-834f-9f8b2842e831, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:18:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:53.018 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[84e99d18-086d-432b-b745-edbf23711582]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:53.019 158261 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831 namespace which is not needed anymore
Oct 02 13:18:53 compute-0 nova_compute[257802]: 2025-10-02 13:18:53.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:53 compute-0 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d000000e0.scope: Deactivated successfully.
Oct 02 13:18:53 compute-0 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d000000e0.scope: Consumed 1.094s CPU time.
Oct 02 13:18:53 compute-0 systemd-machined[211836]: Machine qemu-109-instance-000000e0 terminated.
Oct 02 13:18:53 compute-0 nova_compute[257802]: 2025-10-02 13:18:53.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:18:53 compute-0 nova_compute[257802]: 2025-10-02 13:18:53.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:18:53 compute-0 nova_compute[257802]: 2025-10-02 13:18:53.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:18:53 compute-0 nova_compute[257802]: 2025-10-02 13:18:53.115 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 02 13:18:53 compute-0 nova_compute[257802]: 2025-10-02 13:18:53.115 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:18:53 compute-0 neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831[414713]: [NOTICE]   (414717) : haproxy version is 2.8.14-c23fe91
Oct 02 13:18:53 compute-0 neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831[414713]: [NOTICE]   (414717) : path to executable is /usr/sbin/haproxy
Oct 02 13:18:53 compute-0 neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831[414713]: [WARNING]  (414717) : Exiting Master process...
Oct 02 13:18:53 compute-0 neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831[414713]: [ALERT]    (414717) : Current worker (414719) exited with code 143 (Terminated)
Oct 02 13:18:53 compute-0 neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831[414713]: [WARNING]  (414717) : All workers exited. Exiting... (0)
Oct 02 13:18:53 compute-0 systemd[1]: libpod-66796f293430e3a0fba1c6b24c5235b583ae94ceffd51c7963ede5a304327c92.scope: Deactivated successfully.
Oct 02 13:18:53 compute-0 podman[414805]: 2025-10-02 13:18:53.190027269 +0000 UTC m=+0.046263314 container died 66796f293430e3a0fba1c6b24c5235b583ae94ceffd51c7963ede5a304327c92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:18:53 compute-0 ceph-mon[73607]: pgmap v3637: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.2 KiB/s rd, 12 KiB/s wr, 3 op/s
Oct 02 13:18:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-66796f293430e3a0fba1c6b24c5235b583ae94ceffd51c7963ede5a304327c92-userdata-shm.mount: Deactivated successfully.
Oct 02 13:18:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-bab0d7cb4e776b12455c702725f6297912c5d2b6d28feff266e54840b393e5bc-merged.mount: Deactivated successfully.
Oct 02 13:18:53 compute-0 podman[414805]: 2025-10-02 13:18:53.227351637 +0000 UTC m=+0.083587682 container cleanup 66796f293430e3a0fba1c6b24c5235b583ae94ceffd51c7963ede5a304327c92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:18:53 compute-0 systemd[1]: libpod-conmon-66796f293430e3a0fba1c6b24c5235b583ae94ceffd51c7963ede5a304327c92.scope: Deactivated successfully.
Oct 02 13:18:53 compute-0 nova_compute[257802]: 2025-10-02 13:18:53.255 2 INFO nova.virt.libvirt.driver [-] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Instance destroyed successfully.
Oct 02 13:18:53 compute-0 nova_compute[257802]: 2025-10-02 13:18:53.255 2 DEBUG nova.objects.instance [None req-592beb6d-7af9-46b2-94ad-d5bfabaea639 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'resources' on Instance uuid 701a8696-62d7-4046-a17e-da0dbbffa7f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:18:53 compute-0 nova_compute[257802]: 2025-10-02 13:18:53.271 2 DEBUG nova.virt.libvirt.vif [None req-592beb6d-7af9-46b2-94ad-d5bfabaea639 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:18:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1532373923',display_name='tempest-TestNetworkAdvancedServerOps-server-1532373923',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1532373923',id=224,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOxNV81Fq7eK+6ST2Jt1hNcnCBbZFrK+0k2aVOLxgQZlGJmOH1cMJsOmeMX4LxyryLs76B+yz31Ns2YEGZ2X4ixvB4zRgY23pSFnzyUUOeaDgM5tsHwSMti+IeIqi4QgdQ==',key_name='tempest-TestNetworkAdvancedServerOps-206612891',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:18:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-9l3506wu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:18:48Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=701a8696-62d7-4046-a17e-da0dbbffa7f4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a810367a-1f5d-4568-a28c-5ab99cb8bf57", "address": "fa:16:3e:c4:e1:b1", "network": {"id": "2e968718-4c14-4683-834f-9f8b2842e831", "bridge": "br-int", "label": "tempest-network-smoke--1015430429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa810367a-1f", "ovs_interfaceid": "a810367a-1f5d-4568-a28c-5ab99cb8bf57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:18:53 compute-0 nova_compute[257802]: 2025-10-02 13:18:53.271 2 DEBUG nova.network.os_vif_util [None req-592beb6d-7af9-46b2-94ad-d5bfabaea639 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "a810367a-1f5d-4568-a28c-5ab99cb8bf57", "address": "fa:16:3e:c4:e1:b1", "network": {"id": "2e968718-4c14-4683-834f-9f8b2842e831", "bridge": "br-int", "label": "tempest-network-smoke--1015430429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa810367a-1f", "ovs_interfaceid": "a810367a-1f5d-4568-a28c-5ab99cb8bf57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:18:53 compute-0 nova_compute[257802]: 2025-10-02 13:18:53.272 2 DEBUG nova.network.os_vif_util [None req-592beb6d-7af9-46b2-94ad-d5bfabaea639 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:e1:b1,bridge_name='br-int',has_traffic_filtering=True,id=a810367a-1f5d-4568-a28c-5ab99cb8bf57,network=Network(2e968718-4c14-4683-834f-9f8b2842e831),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa810367a-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:18:53 compute-0 nova_compute[257802]: 2025-10-02 13:18:53.272 2 DEBUG os_vif [None req-592beb6d-7af9-46b2-94ad-d5bfabaea639 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:e1:b1,bridge_name='br-int',has_traffic_filtering=True,id=a810367a-1f5d-4568-a28c-5ab99cb8bf57,network=Network(2e968718-4c14-4683-834f-9f8b2842e831),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa810367a-1f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:18:53 compute-0 nova_compute[257802]: 2025-10-02 13:18:53.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:53 compute-0 nova_compute[257802]: 2025-10-02 13:18:53.274 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa810367a-1f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:53 compute-0 nova_compute[257802]: 2025-10-02 13:18:53.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:53 compute-0 nova_compute[257802]: 2025-10-02 13:18:53.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:53 compute-0 nova_compute[257802]: 2025-10-02 13:18:53.279 2 INFO os_vif [None req-592beb6d-7af9-46b2-94ad-d5bfabaea639 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:e1:b1,bridge_name='br-int',has_traffic_filtering=True,id=a810367a-1f5d-4568-a28c-5ab99cb8bf57,network=Network(2e968718-4c14-4683-834f-9f8b2842e831),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa810367a-1f')
Oct 02 13:18:53 compute-0 podman[414837]: 2025-10-02 13:18:53.289676956 +0000 UTC m=+0.041637233 container remove 66796f293430e3a0fba1c6b24c5235b583ae94ceffd51c7963ede5a304327c92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:18:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:53.296 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[be1e05a8-b409-4262-b5bf-e1fcbbf816b3]: (4, ('Thu Oct  2 01:18:53 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831 (66796f293430e3a0fba1c6b24c5235b583ae94ceffd51c7963ede5a304327c92)\n66796f293430e3a0fba1c6b24c5235b583ae94ceffd51c7963ede5a304327c92\nThu Oct  2 01:18:53 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831 (66796f293430e3a0fba1c6b24c5235b583ae94ceffd51c7963ede5a304327c92)\n66796f293430e3a0fba1c6b24c5235b583ae94ceffd51c7963ede5a304327c92\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:53.298 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[a4dcb630-9776-42bf-aa66-0659401e10a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:53.299 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e968718-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:53 compute-0 kernel: tap2e968718-40: left promiscuous mode
Oct 02 13:18:53 compute-0 nova_compute[257802]: 2025-10-02 13:18:53.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:53 compute-0 nova_compute[257802]: 2025-10-02 13:18:53.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:53.316 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[4de64ecb-44d9-47a2-83ed-9df3b54b244e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:53.342 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[aab60b75-5266-4f94-9348-b8b68104c801]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:53.343 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[bc9fa73a-de5a-42f4-af42-921fa1a7e28a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:53.356 264953 DEBUG oslo.privsep.daemon [-] privsep: reply[ee551deb-b59a-4e78-bc18-c7ee700da1c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 919530, 'reachable_time': 38829, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414877, 'error': None, 'target': 'ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:53.359 158373 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2e968718-4c14-4683-834f-9f8b2842e831 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:18:53 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:18:53.360 158373 DEBUG oslo.privsep.daemon [-] privsep: reply[82be72c3-8997-4832-b4e8-b0ee275e7f0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d2e968718\x2d4c14\x2d4683\x2d834f\x2d9f8b2842e831.mount: Deactivated successfully.
Oct 02 13:18:53 compute-0 nova_compute[257802]: 2025-10-02 13:18:53.425 2 DEBUG nova.compute.manager [req-2c51e4c9-a69d-4ddc-b57c-1ec75e67590b req-9584c9e9-0460-4bc6-a86e-40d313959248 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Received event network-vif-unplugged-a810367a-1f5d-4568-a28c-5ab99cb8bf57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:18:53 compute-0 nova_compute[257802]: 2025-10-02 13:18:53.425 2 DEBUG oslo_concurrency.lockutils [req-2c51e4c9-a69d-4ddc-b57c-1ec75e67590b req-9584c9e9-0460-4bc6-a86e-40d313959248 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "701a8696-62d7-4046-a17e-da0dbbffa7f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:53 compute-0 nova_compute[257802]: 2025-10-02 13:18:53.426 2 DEBUG oslo_concurrency.lockutils [req-2c51e4c9-a69d-4ddc-b57c-1ec75e67590b req-9584c9e9-0460-4bc6-a86e-40d313959248 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "701a8696-62d7-4046-a17e-da0dbbffa7f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:53 compute-0 nova_compute[257802]: 2025-10-02 13:18:53.426 2 DEBUG oslo_concurrency.lockutils [req-2c51e4c9-a69d-4ddc-b57c-1ec75e67590b req-9584c9e9-0460-4bc6-a86e-40d313959248 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "701a8696-62d7-4046-a17e-da0dbbffa7f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:53 compute-0 nova_compute[257802]: 2025-10-02 13:18:53.426 2 DEBUG nova.compute.manager [req-2c51e4c9-a69d-4ddc-b57c-1ec75e67590b req-9584c9e9-0460-4bc6-a86e-40d313959248 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] No waiting events found dispatching network-vif-unplugged-a810367a-1f5d-4568-a28c-5ab99cb8bf57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:18:53 compute-0 nova_compute[257802]: 2025-10-02 13:18:53.427 2 DEBUG nova.compute.manager [req-2c51e4c9-a69d-4ddc-b57c-1ec75e67590b req-9584c9e9-0460-4bc6-a86e-40d313959248 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Received event network-vif-unplugged-a810367a-1f5d-4568-a28c-5ab99cb8bf57 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:18:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:18:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:54.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:18:54 compute-0 nova_compute[257802]: 2025-10-02 13:18:54.331 2 DEBUG nova.network.neutron [req-004ee2df-fadd-4fe2-b8ff-0ed033e729c2 req-1589280e-7ec5-4c5e-bf0e-6961a946e20b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Updated VIF entry in instance network info cache for port a810367a-1f5d-4568-a28c-5ab99cb8bf57. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:18:54 compute-0 nova_compute[257802]: 2025-10-02 13:18:54.333 2 DEBUG nova.network.neutron [req-004ee2df-fadd-4fe2-b8ff-0ed033e729c2 req-1589280e-7ec5-4c5e-bf0e-6961a946e20b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Updating instance_info_cache with network_info: [{"id": "a810367a-1f5d-4568-a28c-5ab99cb8bf57", "address": "fa:16:3e:c4:e1:b1", "network": {"id": "2e968718-4c14-4683-834f-9f8b2842e831", "bridge": "br-int", "label": "tempest-network-smoke--1015430429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa810367a-1f", "ovs_interfaceid": "a810367a-1f5d-4568-a28c-5ab99cb8bf57", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:18:54 compute-0 nova_compute[257802]: 2025-10-02 13:18:54.367 2 DEBUG oslo_concurrency.lockutils [req-004ee2df-fadd-4fe2-b8ff-0ed033e729c2 req-1589280e-7ec5-4c5e-bf0e-6961a946e20b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-701a8696-62d7-4046-a17e-da0dbbffa7f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:18:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:18:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:54.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:18:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3475925167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:18:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3638: 305 pgs: 305 active+clean; 188 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 12 KiB/s wr, 13 op/s
Oct 02 13:18:54 compute-0 nova_compute[257802]: 2025-10-02 13:18:54.913 2 INFO nova.virt.libvirt.driver [None req-592beb6d-7af9-46b2-94ad-d5bfabaea639 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Deleting instance files /var/lib/nova/instances/701a8696-62d7-4046-a17e-da0dbbffa7f4_del
Oct 02 13:18:54 compute-0 nova_compute[257802]: 2025-10-02 13:18:54.914 2 INFO nova.virt.libvirt.driver [None req-592beb6d-7af9-46b2-94ad-d5bfabaea639 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Deletion of /var/lib/nova/instances/701a8696-62d7-4046-a17e-da0dbbffa7f4_del complete
Oct 02 13:18:55 compute-0 nova_compute[257802]: 2025-10-02 13:18:55.003 2 INFO nova.compute.manager [None req-592beb6d-7af9-46b2-94ad-d5bfabaea639 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Took 2.18 seconds to destroy the instance on the hypervisor.
Oct 02 13:18:55 compute-0 nova_compute[257802]: 2025-10-02 13:18:55.004 2 DEBUG oslo.service.loopingcall [None req-592beb6d-7af9-46b2-94ad-d5bfabaea639 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:18:55 compute-0 nova_compute[257802]: 2025-10-02 13:18:55.004 2 DEBUG nova.compute.manager [-] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:18:55 compute-0 nova_compute[257802]: 2025-10-02 13:18:55.005 2 DEBUG nova.network.neutron [-] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:18:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:18:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:18:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:18:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:18:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0017860933719523544 of space, bias 1.0, pg target 0.5358280115857064 quantized to 32 (current 32)
Oct 02 13:18:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:18:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:18:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:18:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:18:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:18:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:18:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:18:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:18:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:18:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:18:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:18:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:18:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:18:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:18:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:18:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:18:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:18:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:18:55 compute-0 nova_compute[257802]: 2025-10-02 13:18:55.601 2 DEBUG nova.compute.manager [req-ef2bb54b-7805-4797-baaf-681eb86959b8 req-1bfb4d3c-3da7-455c-902a-2f5751d6d6dc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Received event network-vif-plugged-a810367a-1f5d-4568-a28c-5ab99cb8bf57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:18:55 compute-0 nova_compute[257802]: 2025-10-02 13:18:55.601 2 DEBUG oslo_concurrency.lockutils [req-ef2bb54b-7805-4797-baaf-681eb86959b8 req-1bfb4d3c-3da7-455c-902a-2f5751d6d6dc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "701a8696-62d7-4046-a17e-da0dbbffa7f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:55 compute-0 nova_compute[257802]: 2025-10-02 13:18:55.602 2 DEBUG oslo_concurrency.lockutils [req-ef2bb54b-7805-4797-baaf-681eb86959b8 req-1bfb4d3c-3da7-455c-902a-2f5751d6d6dc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "701a8696-62d7-4046-a17e-da0dbbffa7f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:55 compute-0 nova_compute[257802]: 2025-10-02 13:18:55.602 2 DEBUG oslo_concurrency.lockutils [req-ef2bb54b-7805-4797-baaf-681eb86959b8 req-1bfb4d3c-3da7-455c-902a-2f5751d6d6dc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "701a8696-62d7-4046-a17e-da0dbbffa7f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:55 compute-0 nova_compute[257802]: 2025-10-02 13:18:55.602 2 DEBUG nova.compute.manager [req-ef2bb54b-7805-4797-baaf-681eb86959b8 req-1bfb4d3c-3da7-455c-902a-2f5751d6d6dc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] No waiting events found dispatching network-vif-plugged-a810367a-1f5d-4568-a28c-5ab99cb8bf57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:18:55 compute-0 nova_compute[257802]: 2025-10-02 13:18:55.603 2 WARNING nova.compute.manager [req-ef2bb54b-7805-4797-baaf-681eb86959b8 req-1bfb4d3c-3da7-455c-902a-2f5751d6d6dc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Received unexpected event network-vif-plugged-a810367a-1f5d-4568-a28c-5ab99cb8bf57 for instance with vm_state active and task_state deleting.
Oct 02 13:18:55 compute-0 ceph-mon[73607]: pgmap v3638: 305 pgs: 305 active+clean; 188 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 12 KiB/s wr, 13 op/s
Oct 02 13:18:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2906155138' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:18:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2906155138' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:18:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2756459664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:18:55 compute-0 nova_compute[257802]: 2025-10-02 13:18:55.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:55 compute-0 podman[414881]: 2025-10-02 13:18:55.919794723 +0000 UTC m=+0.052911675 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:18:55 compute-0 podman[414882]: 2025-10-02 13:18:55.923595303 +0000 UTC m=+0.055857735 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 13:18:55 compute-0 podman[414880]: 2025-10-02 13:18:55.946879234 +0000 UTC m=+0.082470305 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:18:56 compute-0 nova_compute[257802]: 2025-10-02 13:18:56.262 2 DEBUG nova.network.neutron [-] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:18:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:18:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:56.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:18:56 compute-0 nova_compute[257802]: 2025-10-02 13:18:56.295 2 INFO nova.compute.manager [-] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Took 1.29 seconds to deallocate network for instance.
Oct 02 13:18:56 compute-0 nova_compute[257802]: 2025-10-02 13:18:56.400 2 DEBUG nova.compute.manager [req-516dba46-ca21-4412-8bfb-9164216bc444 req-01beeea0-73e1-475c-af51-defe9ba11a6d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Received event network-vif-deleted-a810367a-1f5d-4568-a28c-5ab99cb8bf57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:18:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:56 compute-0 nova_compute[257802]: 2025-10-02 13:18:56.428 2 DEBUG oslo_concurrency.lockutils [None req-592beb6d-7af9-46b2-94ad-d5bfabaea639 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:56 compute-0 nova_compute[257802]: 2025-10-02 13:18:56.429 2 DEBUG oslo_concurrency.lockutils [None req-592beb6d-7af9-46b2-94ad-d5bfabaea639 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:56.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:56 compute-0 nova_compute[257802]: 2025-10-02 13:18:56.491 2 DEBUG oslo_concurrency.processutils [None req-592beb6d-7af9-46b2-94ad-d5bfabaea639 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:18:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3639: 305 pgs: 305 active+clean; 170 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 13 KiB/s wr, 16 op/s
Oct 02 13:18:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:18:56 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1630882670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:18:56 compute-0 nova_compute[257802]: 2025-10-02 13:18:56.917 2 DEBUG oslo_concurrency.processutils [None req-592beb6d-7af9-46b2-94ad-d5bfabaea639 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:18:56 compute-0 nova_compute[257802]: 2025-10-02 13:18:56.926 2 DEBUG nova.compute.provider_tree [None req-592beb6d-7af9-46b2-94ad-d5bfabaea639 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:18:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1630882670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:18:56 compute-0 nova_compute[257802]: 2025-10-02 13:18:56.991 2 DEBUG nova.scheduler.client.report [None req-592beb6d-7af9-46b2-94ad-d5bfabaea639 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:18:57 compute-0 nova_compute[257802]: 2025-10-02 13:18:57.021 2 DEBUG oslo_concurrency.lockutils [None req-592beb6d-7af9-46b2-94ad-d5bfabaea639 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:57 compute-0 nova_compute[257802]: 2025-10-02 13:18:57.054 2 INFO nova.scheduler.client.report [None req-592beb6d-7af9-46b2-94ad-d5bfabaea639 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Deleted allocations for instance 701a8696-62d7-4046-a17e-da0dbbffa7f4
Oct 02 13:18:57 compute-0 nova_compute[257802]: 2025-10-02 13:18:57.149 2 DEBUG oslo_concurrency.lockutils [None req-592beb6d-7af9-46b2-94ad-d5bfabaea639 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "701a8696-62d7-4046-a17e-da0dbbffa7f4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.330s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:57 compute-0 ceph-mon[73607]: pgmap v3639: 305 pgs: 305 active+clean; 170 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 13 KiB/s wr, 16 op/s
Oct 02 13:18:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:58.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:58 compute-0 nova_compute[257802]: 2025-10-02 13:18:58.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:18:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:18:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:58.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:18:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3640: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Oct 02 13:18:59 compute-0 nova_compute[257802]: 2025-10-02 13:18:59.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:18:59 compute-0 nova_compute[257802]: 2025-10-02 13:18:59.158 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:59 compute-0 nova_compute[257802]: 2025-10-02 13:18:59.159 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:59 compute-0 nova_compute[257802]: 2025-10-02 13:18:59.159 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:59 compute-0 nova_compute[257802]: 2025-10-02 13:18:59.159 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:18:59 compute-0 nova_compute[257802]: 2025-10-02 13:18:59.159 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:18:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:18:59 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1763796321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:18:59 compute-0 nova_compute[257802]: 2025-10-02 13:18:59.593 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:18:59 compute-0 nova_compute[257802]: 2025-10-02 13:18:59.748 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:18:59 compute-0 nova_compute[257802]: 2025-10-02 13:18:59.749 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4139MB free_disk=20.988140106201172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:18:59 compute-0 nova_compute[257802]: 2025-10-02 13:18:59.750 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:59 compute-0 nova_compute[257802]: 2025-10-02 13:18:59.750 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:59 compute-0 nova_compute[257802]: 2025-10-02 13:18:59.865 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:18:59 compute-0 nova_compute[257802]: 2025-10-02 13:18:59.866 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:18:59 compute-0 nova_compute[257802]: 2025-10-02 13:18:59.887 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:19:00 compute-0 ceph-mon[73607]: pgmap v3640: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Oct 02 13:19:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1763796321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:19:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:00.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:19:00 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3227930391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:19:00 compute-0 nova_compute[257802]: 2025-10-02 13:19:00.316 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:19:00 compute-0 nova_compute[257802]: 2025-10-02 13:19:00.322 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:19:00 compute-0 nova_compute[257802]: 2025-10-02 13:19:00.339 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:19:00 compute-0 nova_compute[257802]: 2025-10-02 13:19:00.389 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:19:00 compute-0 nova_compute[257802]: 2025-10-02 13:19:00.389 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:19:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:00.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:00 compute-0 nova_compute[257802]: 2025-10-02 13:19:00.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3641: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Oct 02 13:19:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3227930391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:19:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:01 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:19:01.507 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '90'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:19:02 compute-0 ceph-mon[73607]: pgmap v3641: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Oct 02 13:19:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:19:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:02.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:19:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:19:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:02.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:19:02 compute-0 nova_compute[257802]: 2025-10-02 13:19:02.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:02 compute-0 nova_compute[257802]: 2025-10-02 13:19:02.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3642: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct 02 13:19:03 compute-0 nova_compute[257802]: 2025-10-02 13:19:03.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:04 compute-0 ceph-mon[73607]: pgmap v3642: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct 02 13:19:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:04.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:04.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:04 compute-0 sudo[415011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:04 compute-0 sudo[415011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:04 compute-0 sudo[415011]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3643: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct 02 13:19:04 compute-0 sudo[415036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:19:04 compute-0 sudo[415036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:04 compute-0 sudo[415036]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:04 compute-0 sudo[415061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:04 compute-0 sudo[415061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:04 compute-0 sudo[415061]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:05 compute-0 sudo[415086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:19:05 compute-0 sudo[415086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 13:19:05 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:19:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 13:19:05 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:19:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Oct 02 13:19:05 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 13:19:05 compute-0 sudo[415086]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 13:19:05 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 13:19:05 compute-0 nova_compute[257802]: 2025-10-02 13:19:05.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:06 compute-0 ceph-mon[73607]: pgmap v3643: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct 02 13:19:06 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:19:06 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:19:06 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 13:19:06 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 13:19:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Oct 02 13:19:06 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 13:19:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:19:06 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:19:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:19:06 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:19:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:19:06 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:19:06 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 5de32da7-0829-4f86-a35c-0cb61ad84e82 does not exist
Oct 02 13:19:06 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 8f2b41e5-ce79-425d-97a2-78a671255654 does not exist
Oct 02 13:19:06 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev ddc4abf1-d42b-444b-bc54-b8335a1a39e5 does not exist
Oct 02 13:19:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:19:06 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:19:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:19:06 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:19:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:19:06 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:19:06 compute-0 sudo[415143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:06 compute-0 sudo[415143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:06 compute-0 sudo[415143]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:06.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:06 compute-0 sudo[415168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:19:06 compute-0 sudo[415168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:06 compute-0 sudo[415168]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:06 compute-0 sudo[415193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:06 compute-0 sudo[415193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:06 compute-0 sudo[415193]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:06 compute-0 sudo[415218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:19:06 compute-0 sudo[415218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:06.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:06 compute-0 podman[415283]: 2025-10-02 13:19:06.813755324 +0000 UTC m=+0.043227661 container create 0ec44b10377fb725829a8ed6a6f0aa46fb27c9fd06d005e21a57bdf7cb5d2835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 13:19:06 compute-0 systemd[1]: Started libpod-conmon-0ec44b10377fb725829a8ed6a6f0aa46fb27c9fd06d005e21a57bdf7cb5d2835.scope.
Oct 02 13:19:06 compute-0 podman[415283]: 2025-10-02 13:19:06.793758294 +0000 UTC m=+0.023230651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:19:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3644: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 1.2 KiB/s wr, 19 op/s
Oct 02 13:19:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:19:06 compute-0 podman[415283]: 2025-10-02 13:19:06.928550586 +0000 UTC m=+0.158022953 container init 0ec44b10377fb725829a8ed6a6f0aa46fb27c9fd06d005e21a57bdf7cb5d2835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:19:06 compute-0 podman[415283]: 2025-10-02 13:19:06.935505224 +0000 UTC m=+0.164977561 container start 0ec44b10377fb725829a8ed6a6f0aa46fb27c9fd06d005e21a57bdf7cb5d2835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 13:19:06 compute-0 podman[415283]: 2025-10-02 13:19:06.938145667 +0000 UTC m=+0.167618004 container attach 0ec44b10377fb725829a8ed6a6f0aa46fb27c9fd06d005e21a57bdf7cb5d2835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mendel, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 13:19:06 compute-0 bold_mendel[415300]: 167 167
Oct 02 13:19:06 compute-0 systemd[1]: libpod-0ec44b10377fb725829a8ed6a6f0aa46fb27c9fd06d005e21a57bdf7cb5d2835.scope: Deactivated successfully.
Oct 02 13:19:06 compute-0 conmon[415300]: conmon 0ec44b10377fb725829a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ec44b10377fb725829a8ed6a6f0aa46fb27c9fd06d005e21a57bdf7cb5d2835.scope/container/memory.events
Oct 02 13:19:06 compute-0 podman[415283]: 2025-10-02 13:19:06.947534073 +0000 UTC m=+0.177006420 container died 0ec44b10377fb725829a8ed6a6f0aa46fb27c9fd06d005e21a57bdf7cb5d2835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mendel, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 13:19:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e0e6625ec9661774f25cd7a7b0540be3356e53228e92c98c78ffef66576a6ce-merged.mount: Deactivated successfully.
Oct 02 13:19:06 compute-0 podman[415283]: 2025-10-02 13:19:06.988738094 +0000 UTC m=+0.218210431 container remove 0ec44b10377fb725829a8ed6a6f0aa46fb27c9fd06d005e21a57bdf7cb5d2835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 13:19:07 compute-0 systemd[1]: libpod-conmon-0ec44b10377fb725829a8ed6a6f0aa46fb27c9fd06d005e21a57bdf7cb5d2835.scope: Deactivated successfully.
Oct 02 13:19:07 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 13:19:07 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:19:07 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:19:07 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:19:07 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:19:07 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:19:07 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:19:07 compute-0 podman[415323]: 2025-10-02 13:19:07.139247146 +0000 UTC m=+0.041266074 container create 27d284f61e213b845936eedcf66966b5940aa4259e0e76124c7029e249341b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_allen, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Oct 02 13:19:07 compute-0 systemd[1]: Started libpod-conmon-27d284f61e213b845936eedcf66966b5940aa4259e0e76124c7029e249341b84.scope.
Oct 02 13:19:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c55ed80e6bf2bf63046352fb24e93a87c61a0da0684a0127159b929d47dd206e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c55ed80e6bf2bf63046352fb24e93a87c61a0da0684a0127159b929d47dd206e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c55ed80e6bf2bf63046352fb24e93a87c61a0da0684a0127159b929d47dd206e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c55ed80e6bf2bf63046352fb24e93a87c61a0da0684a0127159b929d47dd206e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c55ed80e6bf2bf63046352fb24e93a87c61a0da0684a0127159b929d47dd206e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:19:07 compute-0 podman[415323]: 2025-10-02 13:19:07.120990937 +0000 UTC m=+0.023009895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:19:07 compute-0 podman[415323]: 2025-10-02 13:19:07.219852435 +0000 UTC m=+0.121871393 container init 27d284f61e213b845936eedcf66966b5940aa4259e0e76124c7029e249341b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_allen, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Oct 02 13:19:07 compute-0 podman[415323]: 2025-10-02 13:19:07.225730736 +0000 UTC m=+0.127749664 container start 27d284f61e213b845936eedcf66966b5940aa4259e0e76124c7029e249341b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_allen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 13:19:07 compute-0 podman[415323]: 2025-10-02 13:19:07.228605445 +0000 UTC m=+0.130624393 container attach 27d284f61e213b845936eedcf66966b5940aa4259e0e76124c7029e249341b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 13:19:07 compute-0 podman[415345]: 2025-10-02 13:19:07.932613822 +0000 UTC m=+0.074932263 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 13:19:07 compute-0 xenodochial_allen[415339]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:19:07 compute-0 xenodochial_allen[415339]: --> relative data size: 1.0
Oct 02 13:19:07 compute-0 xenodochial_allen[415339]: --> All data devices are unavailable
Oct 02 13:19:07 compute-0 systemd[1]: libpod-27d284f61e213b845936eedcf66966b5940aa4259e0e76124c7029e249341b84.scope: Deactivated successfully.
Oct 02 13:19:08 compute-0 podman[415379]: 2025-10-02 13:19:08.0355868 +0000 UTC m=+0.029724056 container died 27d284f61e213b845936eedcf66966b5940aa4259e0e76124c7029e249341b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:19:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-c55ed80e6bf2bf63046352fb24e93a87c61a0da0684a0127159b929d47dd206e-merged.mount: Deactivated successfully.
Oct 02 13:19:08 compute-0 ceph-mon[73607]: pgmap v3644: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 1.2 KiB/s wr, 19 op/s
Oct 02 13:19:08 compute-0 podman[415379]: 2025-10-02 13:19:08.097026018 +0000 UTC m=+0.091163264 container remove 27d284f61e213b845936eedcf66966b5940aa4259e0e76124c7029e249341b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 13:19:08 compute-0 systemd[1]: libpod-conmon-27d284f61e213b845936eedcf66966b5940aa4259e0e76124c7029e249341b84.scope: Deactivated successfully.
Oct 02 13:19:08 compute-0 sudo[415218]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:08 compute-0 sudo[415393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:08 compute-0 sudo[415393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:08 compute-0 sudo[415393]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:08 compute-0 sudo[415418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:19:08 compute-0 sudo[415418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:08 compute-0 sudo[415418]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:08 compute-0 nova_compute[257802]: 2025-10-02 13:19:08.254 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759411133.253123, 701a8696-62d7-4046-a17e-da0dbbffa7f4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:19:08 compute-0 nova_compute[257802]: 2025-10-02 13:19:08.254 2 INFO nova.compute.manager [-] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] VM Stopped (Lifecycle Event)
Oct 02 13:19:08 compute-0 nova_compute[257802]: 2025-10-02 13:19:08.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:08 compute-0 nova_compute[257802]: 2025-10-02 13:19:08.285 2 DEBUG nova.compute.manager [None req-ffb63645-2012-4b4d-a3ed-236317085b02 - - - - - -] [instance: 701a8696-62d7-4046-a17e-da0dbbffa7f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:19:08 compute-0 sudo[415443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:08 compute-0 sudo[415443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:08.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:08 compute-0 sudo[415443]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:08 compute-0 sudo[415468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 13:19:08 compute-0 sudo[415468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:08.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:08 compute-0 podman[415532]: 2025-10-02 13:19:08.676363685 +0000 UTC m=+0.077565356 container create cb68806b8cd2821e42c8d14697b85f5cec8b3eb5ff9da32ec495474d60a8a3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_chebyshev, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:19:08 compute-0 podman[415532]: 2025-10-02 13:19:08.620930213 +0000 UTC m=+0.022131904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:19:08 compute-0 systemd[1]: Started libpod-conmon-cb68806b8cd2821e42c8d14697b85f5cec8b3eb5ff9da32ec495474d60a8a3b7.scope.
Oct 02 13:19:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:19:08 compute-0 podman[415532]: 2025-10-02 13:19:08.873530489 +0000 UTC m=+0.274732180 container init cb68806b8cd2821e42c8d14697b85f5cec8b3eb5ff9da32ec495474d60a8a3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 13:19:08 compute-0 podman[415532]: 2025-10-02 13:19:08.885548789 +0000 UTC m=+0.286750470 container start cb68806b8cd2821e42c8d14697b85f5cec8b3eb5ff9da32ec495474d60a8a3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_chebyshev, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:19:08 compute-0 podman[415532]: 2025-10-02 13:19:08.888745436 +0000 UTC m=+0.289947107 container attach cb68806b8cd2821e42c8d14697b85f5cec8b3eb5ff9da32ec495474d60a8a3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_chebyshev, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:19:08 compute-0 amazing_chebyshev[415550]: 167 167
Oct 02 13:19:08 compute-0 systemd[1]: libpod-cb68806b8cd2821e42c8d14697b85f5cec8b3eb5ff9da32ec495474d60a8a3b7.scope: Deactivated successfully.
Oct 02 13:19:08 compute-0 podman[415532]: 2025-10-02 13:19:08.891131513 +0000 UTC m=+0.292333224 container died cb68806b8cd2821e42c8d14697b85f5cec8b3eb5ff9da32ec495474d60a8a3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_chebyshev, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 13:19:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3645: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 682 B/s wr, 16 op/s
Oct 02 13:19:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c7c72648349e108912ba73a61fcc4d7bbd5f875139ecc2cf2c6757c5ca4e17e-merged.mount: Deactivated successfully.
Oct 02 13:19:08 compute-0 podman[415532]: 2025-10-02 13:19:08.942213032 +0000 UTC m=+0.343414703 container remove cb68806b8cd2821e42c8d14697b85f5cec8b3eb5ff9da32ec495474d60a8a3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_chebyshev, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 13:19:08 compute-0 systemd[1]: libpod-conmon-cb68806b8cd2821e42c8d14697b85f5cec8b3eb5ff9da32ec495474d60a8a3b7.scope: Deactivated successfully.
Oct 02 13:19:09 compute-0 ceph-mon[73607]: pgmap v3645: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 682 B/s wr, 16 op/s
Oct 02 13:19:09 compute-0 podman[415574]: 2025-10-02 13:19:09.124466066 +0000 UTC m=+0.061723436 container create 382f2a99ba9ff19d16900ca3ac8cf662211ed86bc0faa5f58b9f16eef7561681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:19:09 compute-0 systemd[1]: Started libpod-conmon-382f2a99ba9ff19d16900ca3ac8cf662211ed86bc0faa5f58b9f16eef7561681.scope.
Oct 02 13:19:09 compute-0 podman[415574]: 2025-10-02 13:19:09.098518673 +0000 UTC m=+0.035776103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:19:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:19:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f5b6fdec95859ac26749ea541a96afdd4eaaf5092f1bd7574e493170d8a3211/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:19:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f5b6fdec95859ac26749ea541a96afdd4eaaf5092f1bd7574e493170d8a3211/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:19:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f5b6fdec95859ac26749ea541a96afdd4eaaf5092f1bd7574e493170d8a3211/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:19:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f5b6fdec95859ac26749ea541a96afdd4eaaf5092f1bd7574e493170d8a3211/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:19:09 compute-0 podman[415574]: 2025-10-02 13:19:09.215498396 +0000 UTC m=+0.152755746 container init 382f2a99ba9ff19d16900ca3ac8cf662211ed86bc0faa5f58b9f16eef7561681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:19:09 compute-0 podman[415574]: 2025-10-02 13:19:09.221915461 +0000 UTC m=+0.159172791 container start 382f2a99ba9ff19d16900ca3ac8cf662211ed86bc0faa5f58b9f16eef7561681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 13:19:09 compute-0 podman[415574]: 2025-10-02 13:19:09.224851772 +0000 UTC m=+0.162109102 container attach 382f2a99ba9ff19d16900ca3ac8cf662211ed86bc0faa5f58b9f16eef7561681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 13:19:09 compute-0 pedantic_cray[415590]: {
Oct 02 13:19:09 compute-0 pedantic_cray[415590]:     "1": [
Oct 02 13:19:09 compute-0 pedantic_cray[415590]:         {
Oct 02 13:19:09 compute-0 pedantic_cray[415590]:             "devices": [
Oct 02 13:19:09 compute-0 pedantic_cray[415590]:                 "/dev/loop3"
Oct 02 13:19:09 compute-0 pedantic_cray[415590]:             ],
Oct 02 13:19:09 compute-0 pedantic_cray[415590]:             "lv_name": "ceph_lv0",
Oct 02 13:19:09 compute-0 pedantic_cray[415590]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:19:09 compute-0 pedantic_cray[415590]:             "lv_size": "7511998464",
Oct 02 13:19:09 compute-0 pedantic_cray[415590]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:19:09 compute-0 pedantic_cray[415590]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:19:09 compute-0 pedantic_cray[415590]:             "name": "ceph_lv0",
Oct 02 13:19:09 compute-0 pedantic_cray[415590]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:19:09 compute-0 pedantic_cray[415590]:             "tags": {
Oct 02 13:19:09 compute-0 pedantic_cray[415590]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:19:09 compute-0 pedantic_cray[415590]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:19:09 compute-0 pedantic_cray[415590]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:19:09 compute-0 pedantic_cray[415590]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:19:09 compute-0 pedantic_cray[415590]:                 "ceph.cluster_name": "ceph",
Oct 02 13:19:09 compute-0 pedantic_cray[415590]:                 "ceph.crush_device_class": "",
Oct 02 13:19:09 compute-0 pedantic_cray[415590]:                 "ceph.encrypted": "0",
Oct 02 13:19:09 compute-0 pedantic_cray[415590]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:19:09 compute-0 pedantic_cray[415590]:                 "ceph.osd_id": "1",
Oct 02 13:19:09 compute-0 pedantic_cray[415590]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:19:09 compute-0 pedantic_cray[415590]:                 "ceph.type": "block",
Oct 02 13:19:09 compute-0 pedantic_cray[415590]:                 "ceph.vdo": "0"
Oct 02 13:19:09 compute-0 pedantic_cray[415590]:             },
Oct 02 13:19:09 compute-0 pedantic_cray[415590]:             "type": "block",
Oct 02 13:19:09 compute-0 pedantic_cray[415590]:             "vg_name": "ceph_vg0"
Oct 02 13:19:09 compute-0 pedantic_cray[415590]:         }
Oct 02 13:19:09 compute-0 pedantic_cray[415590]:     ]
Oct 02 13:19:09 compute-0 pedantic_cray[415590]: }
Oct 02 13:19:09 compute-0 systemd[1]: libpod-382f2a99ba9ff19d16900ca3ac8cf662211ed86bc0faa5f58b9f16eef7561681.scope: Deactivated successfully.
Oct 02 13:19:09 compute-0 podman[415574]: 2025-10-02 13:19:09.995504593 +0000 UTC m=+0.932761943 container died 382f2a99ba9ff19d16900ca3ac8cf662211ed86bc0faa5f58b9f16eef7561681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 13:19:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f5b6fdec95859ac26749ea541a96afdd4eaaf5092f1bd7574e493170d8a3211-merged.mount: Deactivated successfully.
Oct 02 13:19:10 compute-0 podman[415574]: 2025-10-02 13:19:10.056794777 +0000 UTC m=+0.994052097 container remove 382f2a99ba9ff19d16900ca3ac8cf662211ed86bc0faa5f58b9f16eef7561681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 13:19:10 compute-0 systemd[1]: libpod-conmon-382f2a99ba9ff19d16900ca3ac8cf662211ed86bc0faa5f58b9f16eef7561681.scope: Deactivated successfully.
Oct 02 13:19:10 compute-0 sudo[415468]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:10 compute-0 sudo[415610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:10 compute-0 sudo[415610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:10 compute-0 sudo[415610]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:10 compute-0 sudo[415635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:19:10 compute-0 sudo[415635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:10 compute-0 sudo[415635]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:10 compute-0 sudo[415660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:10 compute-0 sudo[415660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:10 compute-0 sudo[415660]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:10.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:10 compute-0 sudo[415685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 13:19:10 compute-0 sudo[415685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:10.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:10 compute-0 podman[415750]: 2025-10-02 13:19:10.601723478 +0000 UTC m=+0.036762626 container create e2ff60e9665c2c7e27eb71d04b981e93d3923dce9bb74539506b2f8d29105a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_darwin, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 13:19:10 compute-0 systemd[1]: Started libpod-conmon-e2ff60e9665c2c7e27eb71d04b981e93d3923dce9bb74539506b2f8d29105a5e.scope.
Oct 02 13:19:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:19:10 compute-0 podman[415750]: 2025-10-02 13:19:10.675725037 +0000 UTC m=+0.110764205 container init e2ff60e9665c2c7e27eb71d04b981e93d3923dce9bb74539506b2f8d29105a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:19:10 compute-0 podman[415750]: 2025-10-02 13:19:10.681931537 +0000 UTC m=+0.116970685 container start e2ff60e9665c2c7e27eb71d04b981e93d3923dce9bb74539506b2f8d29105a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_darwin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Oct 02 13:19:10 compute-0 podman[415750]: 2025-10-02 13:19:10.586689956 +0000 UTC m=+0.021729124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:19:10 compute-0 podman[415750]: 2025-10-02 13:19:10.685006891 +0000 UTC m=+0.120046059 container attach e2ff60e9665c2c7e27eb71d04b981e93d3923dce9bb74539506b2f8d29105a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_darwin, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:19:10 compute-0 determined_darwin[415766]: 167 167
Oct 02 13:19:10 compute-0 systemd[1]: libpod-e2ff60e9665c2c7e27eb71d04b981e93d3923dce9bb74539506b2f8d29105a5e.scope: Deactivated successfully.
Oct 02 13:19:10 compute-0 conmon[415766]: conmon e2ff60e9665c2c7e27eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e2ff60e9665c2c7e27eb71d04b981e93d3923dce9bb74539506b2f8d29105a5e.scope/container/memory.events
Oct 02 13:19:10 compute-0 podman[415750]: 2025-10-02 13:19:10.687667235 +0000 UTC m=+0.122706403 container died e2ff60e9665c2c7e27eb71d04b981e93d3923dce9bb74539506b2f8d29105a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_darwin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 13:19:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffb7f08a2fdc28f1186bb8a9aea3b42fd58220b0decd0354746bb5c833074d95-merged.mount: Deactivated successfully.
Oct 02 13:19:10 compute-0 podman[415750]: 2025-10-02 13:19:10.721057009 +0000 UTC m=+0.156096157 container remove e2ff60e9665c2c7e27eb71d04b981e93d3923dce9bb74539506b2f8d29105a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_darwin, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 13:19:10 compute-0 systemd[1]: libpod-conmon-e2ff60e9665c2c7e27eb71d04b981e93d3923dce9bb74539506b2f8d29105a5e.scope: Deactivated successfully.
Oct 02 13:19:10 compute-0 podman[415790]: 2025-10-02 13:19:10.865557535 +0000 UTC m=+0.034710226 container create abddd8e83137847e79fb8a2f77a79381885bc269792bb2e1ccf13ae7aa281f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 13:19:10 compute-0 nova_compute[257802]: 2025-10-02 13:19:10.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3646: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 02 13:19:10 compute-0 systemd[1]: Started libpod-conmon-abddd8e83137847e79fb8a2f77a79381885bc269792bb2e1ccf13ae7aa281f87.scope.
Oct 02 13:19:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:19:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56a038f694b184650b68cf06404bfca5819a247ebe146144f50a2c4bbd5cd130/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:19:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56a038f694b184650b68cf06404bfca5819a247ebe146144f50a2c4bbd5cd130/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:19:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56a038f694b184650b68cf06404bfca5819a247ebe146144f50a2c4bbd5cd130/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:19:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56a038f694b184650b68cf06404bfca5819a247ebe146144f50a2c4bbd5cd130/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:19:10 compute-0 podman[415790]: 2025-10-02 13:19:10.851583179 +0000 UTC m=+0.020735880 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:19:10 compute-0 podman[415790]: 2025-10-02 13:19:10.94850111 +0000 UTC m=+0.117653831 container init abddd8e83137847e79fb8a2f77a79381885bc269792bb2e1ccf13ae7aa281f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Oct 02 13:19:10 compute-0 podman[415790]: 2025-10-02 13:19:10.954959746 +0000 UTC m=+0.124112437 container start abddd8e83137847e79fb8a2f77a79381885bc269792bb2e1ccf13ae7aa281f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:19:10 compute-0 podman[415790]: 2025-10-02 13:19:10.958590833 +0000 UTC m=+0.127743524 container attach abddd8e83137847e79fb8a2f77a79381885bc269792bb2e1ccf13ae7aa281f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_snyder, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 13:19:11 compute-0 sudo[415812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:11 compute-0 sudo[415812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:11 compute-0 sudo[415812]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:11 compute-0 sudo[415837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:11 compute-0 sudo[415837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:11 compute-0 sudo[415837]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:11 compute-0 zealous_snyder[415807]: {
Oct 02 13:19:11 compute-0 zealous_snyder[415807]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 13:19:11 compute-0 zealous_snyder[415807]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:19:11 compute-0 zealous_snyder[415807]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:19:11 compute-0 zealous_snyder[415807]:         "osd_id": 1,
Oct 02 13:19:11 compute-0 zealous_snyder[415807]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:19:11 compute-0 zealous_snyder[415807]:         "type": "bluestore"
Oct 02 13:19:11 compute-0 zealous_snyder[415807]:     }
Oct 02 13:19:11 compute-0 zealous_snyder[415807]: }
Oct 02 13:19:11 compute-0 systemd[1]: libpod-abddd8e83137847e79fb8a2f77a79381885bc269792bb2e1ccf13ae7aa281f87.scope: Deactivated successfully.
Oct 02 13:19:11 compute-0 conmon[415807]: conmon abddd8e83137847e79fb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-abddd8e83137847e79fb8a2f77a79381885bc269792bb2e1ccf13ae7aa281f87.scope/container/memory.events
Oct 02 13:19:11 compute-0 podman[415790]: 2025-10-02 13:19:11.76318342 +0000 UTC m=+0.932336131 container died abddd8e83137847e79fb8a2f77a79381885bc269792bb2e1ccf13ae7aa281f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_snyder, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:19:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-56a038f694b184650b68cf06404bfca5819a247ebe146144f50a2c4bbd5cd130-merged.mount: Deactivated successfully.
Oct 02 13:19:11 compute-0 podman[415790]: 2025-10-02 13:19:11.826531014 +0000 UTC m=+0.995683705 container remove abddd8e83137847e79fb8a2f77a79381885bc269792bb2e1ccf13ae7aa281f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_snyder, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 13:19:11 compute-0 systemd[1]: libpod-conmon-abddd8e83137847e79fb8a2f77a79381885bc269792bb2e1ccf13ae7aa281f87.scope: Deactivated successfully.
Oct 02 13:19:11 compute-0 sudo[415685]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:19:11 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:19:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:19:11 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:19:11 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev d7ac9f7e-4e2d-4656-93dc-ec7a3b92e857 does not exist
Oct 02 13:19:11 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 04a83199-0814-43a4-802a-4f959f23ef2d does not exist
Oct 02 13:19:11 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 71468f18-f721-430d-9bdc-b67a940b17d3 does not exist
Oct 02 13:19:11 compute-0 sudo[415890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:11 compute-0 sudo[415890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:11 compute-0 sudo[415890]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:11 compute-0 ceph-mon[73607]: pgmap v3646: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 02 13:19:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:19:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:19:11 compute-0 sudo[415915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:19:11 compute-0 sudo[415915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:11 compute-0 sudo[415915]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:12.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:12.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:19:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:19:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:19:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:19:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:19:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:19:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3647: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:13 compute-0 nova_compute[257802]: 2025-10-02 13:19:13.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:13 compute-0 nova_compute[257802]: 2025-10-02 13:19:13.390 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:19:13 compute-0 ceph-mon[73607]: pgmap v3647: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:19:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:14.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:19:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:14.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3648: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:15 compute-0 nova_compute[257802]: 2025-10-02 13:19:15.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:16 compute-0 ceph-mon[73607]: pgmap v3648: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:16.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:16.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3649: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:18 compute-0 ceph-mon[73607]: pgmap v3649: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:18 compute-0 nova_compute[257802]: 2025-10-02 13:19:18.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:18.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:19:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:18.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:19:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3650: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:20 compute-0 ceph-mon[73607]: pgmap v3650: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:20.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:20.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:20 compute-0 nova_compute[257802]: 2025-10-02 13:19:20.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3651: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:21 compute-0 ceph-mon[73607]: pgmap v3651: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:19:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:22.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:19:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:22.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3652: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:23 compute-0 nova_compute[257802]: 2025-10-02 13:19:23.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:23 compute-0 ceph-mon[73607]: pgmap v3652: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:24.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:19:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:24.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:19:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3653: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:25 compute-0 nova_compute[257802]: 2025-10-02 13:19:25.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:26 compute-0 ceph-mon[73607]: pgmap v3653: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:26.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:26.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3654: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:26 compute-0 podman[415948]: 2025-10-02 13:19:26.927786289 +0000 UTC m=+0.061699555 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:19:26 compute-0 podman[415950]: 2025-10-02 13:19:26.931930929 +0000 UTC m=+0.061238595 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3)
Oct 02 13:19:26 compute-0 podman[415949]: 2025-10-02 13:19:26.955922636 +0000 UTC m=+0.078689774 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 13:19:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:19:27.005 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:19:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:19:27.005 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:19:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:19:27.005 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:19:28 compute-0 ceph-mon[73607]: pgmap v3654: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:28 compute-0 nova_compute[257802]: 2025-10-02 13:19:28.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:28.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:19:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:28.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:19:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3655: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:29 compute-0 ceph-mon[73607]: pgmap v3655: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:30.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:30.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:30 compute-0 nova_compute[257802]: 2025-10-02 13:19:30.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3656: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:31 compute-0 sudo[416008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:31 compute-0 sudo[416008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:31 compute-0 sudo[416008]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:31 compute-0 sudo[416033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:31 compute-0 sudo[416033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:31 compute-0 sudo[416033]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:32 compute-0 ceph-mon[73607]: pgmap v3656: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:32.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:32.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3657: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:33 compute-0 nova_compute[257802]: 2025-10-02 13:19:33.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:34 compute-0 ceph-mon[73607]: pgmap v3657: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:34.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:34.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3658: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:35 compute-0 nova_compute[257802]: 2025-10-02 13:19:35.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:36.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:36 compute-0 ceph-mon[73607]: pgmap v3658: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:36.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3659: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:37 compute-0 nova_compute[257802]: 2025-10-02 13:19:37.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:19:37 compute-0 nova_compute[257802]: 2025-10-02 13:19:37.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:19:37 compute-0 ceph-mon[73607]: pgmap v3659: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:38 compute-0 nova_compute[257802]: 2025-10-02 13:19:38.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:19:38 compute-0 nova_compute[257802]: 2025-10-02 13:19:38.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:38.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:38.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3660: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:38 compute-0 podman[416062]: 2025-10-02 13:19:38.933791914 +0000 UTC m=+0.079184867 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 02 13:19:39 compute-0 ceph-mon[73607]: pgmap v3660: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:40.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3724228417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:19:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:40.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:40 compute-0 nova_compute[257802]: 2025-10-02 13:19:40.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3661: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:41 compute-0 nova_compute[257802]: 2025-10-02 13:19:41.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:19:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:41 compute-0 ceph-mon[73607]: pgmap v3661: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2023800697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:19:42 compute-0 nova_compute[257802]: 2025-10-02 13:19:42.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:19:42 compute-0 nova_compute[257802]: 2025-10-02 13:19:42.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:19:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:42.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:42.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:19:42
Oct 02 13:19:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:19:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:19:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'images', 'default.rgw.control', 'default.rgw.log', 'vms', 'default.rgw.meta', 'backups', '.mgr']
Oct 02 13:19:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:19:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:19:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:19:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:19:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:19:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:19:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:19:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3662: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:43 compute-0 nova_compute[257802]: 2025-10-02 13:19:43.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:19:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:19:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:19:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:19:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:19:43 compute-0 ceph-mon[73607]: pgmap v3662: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:19:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:19:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:19:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:19:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:19:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.002000047s ======
Oct 02 13:19:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:44.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Oct 02 13:19:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:44.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3663: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:45 compute-0 nova_compute[257802]: 2025-10-02 13:19:45.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:19:45 compute-0 nova_compute[257802]: 2025-10-02 13:19:45.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 13:19:45 compute-0 nova_compute[257802]: 2025-10-02 13:19:45.138 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 13:19:45 compute-0 nova_compute[257802]: 2025-10-02 13:19:45.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:46 compute-0 ceph-mon[73607]: pgmap v3663: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:46.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:46 compute-0 ovn_controller[148183]: 2025-10-02T13:19:46Z|01028|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Oct 02 13:19:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:19:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:46.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:19:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3664: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:48 compute-0 ceph-mon[73607]: pgmap v3664: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:48 compute-0 nova_compute[257802]: 2025-10-02 13:19:48.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:48.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:19:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:48.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:19:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3665: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:50 compute-0 ceph-mon[73607]: pgmap v3665: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:50.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:50.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:50 compute-0 nova_compute[257802]: 2025-10-02 13:19:50.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3666: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:51 compute-0 nova_compute[257802]: 2025-10-02 13:19:51.134 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:19:51 compute-0 sudo[416095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:51 compute-0 sudo[416095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:51 compute-0 sudo[416095]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:51 compute-0 ceph-mon[73607]: pgmap v3666: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:51 compute-0 sudo[416120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:51 compute-0 sudo[416120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:51 compute-0 sudo[416120]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:19:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:52.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:19:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:52.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3667: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:53 compute-0 nova_compute[257802]: 2025-10-02 13:19:53.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:53 compute-0 ceph-mon[73607]: pgmap v3667: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:54.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:54.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3668: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:19:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:19:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:19:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:19:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:19:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:19:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:19:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:19:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:19:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:19:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:19:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:19:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:19:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:19:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:19:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:19:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:19:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:19:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:19:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:19:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:19:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:19:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:19:55 compute-0 nova_compute[257802]: 2025-10-02 13:19:55.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:19:55 compute-0 nova_compute[257802]: 2025-10-02 13:19:55.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:19:55 compute-0 nova_compute[257802]: 2025-10-02 13:19:55.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:19:55 compute-0 nova_compute[257802]: 2025-10-02 13:19:55.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:55 compute-0 nova_compute[257802]: 2025-10-02 13:19:55.974 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:19:56 compute-0 ceph-mon[73607]: pgmap v3668: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/930833902' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:19:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/930833902' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:19:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:56.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:56.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3669: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2438942268' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:19:57 compute-0 podman[416148]: 2025-10-02 13:19:57.912791326 +0000 UTC m=+0.054317468 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 13:19:57 compute-0 podman[416150]: 2025-10-02 13:19:57.925570593 +0000 UTC m=+0.058091998 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:19:57 compute-0 podman[416149]: 2025-10-02 13:19:57.932371397 +0000 UTC m=+0.068041748 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:19:58 compute-0 ceph-mon[73607]: pgmap v3669: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2355559930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:19:58 compute-0 nova_compute[257802]: 2025-10-02 13:19:58.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:58.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:19:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:58.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3670: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:59 compute-0 nova_compute[257802]: 2025-10-02 13:19:59.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:19:59 compute-0 ceph-mon[73607]: pgmap v3670: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:19:59 compute-0 nova_compute[257802]: 2025-10-02 13:19:59.152 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:19:59 compute-0 nova_compute[257802]: 2025-10-02 13:19:59.153 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:19:59 compute-0 nova_compute[257802]: 2025-10-02 13:19:59.153 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:19:59 compute-0 nova_compute[257802]: 2025-10-02 13:19:59.153 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:19:59 compute-0 nova_compute[257802]: 2025-10-02 13:19:59.153 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:19:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:19:59 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3017653862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:19:59 compute-0 nova_compute[257802]: 2025-10-02 13:19:59.587 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:19:59 compute-0 nova_compute[257802]: 2025-10-02 13:19:59.777 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:19:59 compute-0 nova_compute[257802]: 2025-10-02 13:19:59.778 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4179MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:19:59 compute-0 nova_compute[257802]: 2025-10-02 13:19:59.778 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:19:59 compute-0 nova_compute[257802]: 2025-10-02 13:19:59.779 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:19:59 compute-0 nova_compute[257802]: 2025-10-02 13:19:59.928 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:19:59 compute-0 nova_compute[257802]: 2025-10-02 13:19:59.929 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:19:59 compute-0 nova_compute[257802]: 2025-10-02 13:19:59.954 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:20:00 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 02 13:20:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3017653862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:20:00 compute-0 ceph-mon[73607]: overall HEALTH_OK
Oct 02 13:20:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:20:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:00.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:20:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:20:00 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2345995937' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:20:00 compute-0 nova_compute[257802]: 2025-10-02 13:20:00.443 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:20:00 compute-0 nova_compute[257802]: 2025-10-02 13:20:00.450 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:20:00 compute-0 nova_compute[257802]: 2025-10-02 13:20:00.476 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:20:00 compute-0 nova_compute[257802]: 2025-10-02 13:20:00.478 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:20:00 compute-0 nova_compute[257802]: 2025-10-02 13:20:00.478 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:20:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:00.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:00 compute-0 nova_compute[257802]: 2025-10-02 13:20:00.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3671: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2345995937' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:20:01 compute-0 ceph-mon[73607]: pgmap v3671: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:02.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:02.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3672: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:03 compute-0 nova_compute[257802]: 2025-10-02 13:20:03.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:03 compute-0 ceph-mon[73607]: pgmap v3672: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:20:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:04.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:20:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:20:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:04.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:20:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3673: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:05 compute-0 nova_compute[257802]: 2025-10-02 13:20:05.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:06 compute-0 ceph-mon[73607]: pgmap v3673: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:06.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:06.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3674: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:07 compute-0 nova_compute[257802]: 2025-10-02 13:20:07.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:07 compute-0 ceph-mon[73607]: pgmap v3674: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:08 compute-0 nova_compute[257802]: 2025-10-02 13:20:08.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:20:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:08.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:20:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:20:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:08.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:20:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3675: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:09 compute-0 podman[416254]: 2025-10-02 13:20:09.978094999 +0000 UTC m=+0.117816545 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct 02 13:20:10 compute-0 ceph-mon[73607]: pgmap v3675: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:10.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:10.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:10 compute-0 nova_compute[257802]: 2025-10-02 13:20:10.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3676: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:11 compute-0 ceph-mon[73607]: pgmap v3676: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:11 compute-0 sudo[416281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:11 compute-0 sudo[416281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:11 compute-0 sudo[416281]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:11 compute-0 sudo[416306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:11 compute-0 sudo[416306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:11 compute-0 sudo[416306]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:12 compute-0 sudo[416331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:12 compute-0 sudo[416331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:12 compute-0 sudo[416331]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:12 compute-0 sudo[416356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:20:12 compute-0 sudo[416356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:12 compute-0 sudo[416356]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:12 compute-0 sudo[416381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:12 compute-0 sudo[416381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:12 compute-0 sudo[416381]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:12.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:12 compute-0 sudo[416406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:20:12 compute-0 sudo[416406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:12.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:20:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:20:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:20:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:20:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:20:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:20:12 compute-0 sudo[416406]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3677: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:20:13 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:20:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:20:13 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:20:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:20:13 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:20:13 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 64870675-219f-4c42-863a-b7dce776da6d does not exist
Oct 02 13:20:13 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev d7e5290c-1a86-4c8b-8e00-d72be3962b8c does not exist
Oct 02 13:20:13 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 7161d114-5341-4344-8556-cdf1d59082f9 does not exist
Oct 02 13:20:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:20:13 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:20:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:20:13 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:20:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:20:13 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:20:13 compute-0 nova_compute[257802]: 2025-10-02 13:20:13.115 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:13 compute-0 nova_compute[257802]: 2025-10-02 13:20:13.115 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:13 compute-0 nova_compute[257802]: 2025-10-02 13:20:13.116 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 13:20:13 compute-0 sudo[416462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:13 compute-0 sudo[416462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:13 compute-0 sudo[416462]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:13 compute-0 sudo[416487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:20:13 compute-0 sudo[416487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:13 compute-0 sudo[416487]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:13 compute-0 sudo[416512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:13 compute-0 sudo[416512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:13 compute-0 sudo[416512]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:13 compute-0 nova_compute[257802]: 2025-10-02 13:20:13.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:13 compute-0 sudo[416537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:20:13 compute-0 sudo[416537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:13 compute-0 podman[416602]: 2025-10-02 13:20:13.62092844 +0000 UTC m=+0.036047639 container create 5c5ece1d268df8a34cc779333fe3a9a6e9589dd326f3d420787d2d1b4172ffc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Oct 02 13:20:13 compute-0 systemd[1]: Started libpod-conmon-5c5ece1d268df8a34cc779333fe3a9a6e9589dd326f3d420787d2d1b4172ffc1.scope.
Oct 02 13:20:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:20:13 compute-0 podman[416602]: 2025-10-02 13:20:13.690799811 +0000 UTC m=+0.105919030 container init 5c5ece1d268df8a34cc779333fe3a9a6e9589dd326f3d420787d2d1b4172ffc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_leavitt, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 13:20:13 compute-0 podman[416602]: 2025-10-02 13:20:13.699722365 +0000 UTC m=+0.114841564 container start 5c5ece1d268df8a34cc779333fe3a9a6e9589dd326f3d420787d2d1b4172ffc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 13:20:13 compute-0 podman[416602]: 2025-10-02 13:20:13.604667498 +0000 UTC m=+0.019786717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:20:13 compute-0 podman[416602]: 2025-10-02 13:20:13.702516503 +0000 UTC m=+0.117635722 container attach 5c5ece1d268df8a34cc779333fe3a9a6e9589dd326f3d420787d2d1b4172ffc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 13:20:13 compute-0 heuristic_leavitt[416618]: 167 167
Oct 02 13:20:13 compute-0 systemd[1]: libpod-5c5ece1d268df8a34cc779333fe3a9a6e9589dd326f3d420787d2d1b4172ffc1.scope: Deactivated successfully.
Oct 02 13:20:13 compute-0 podman[416602]: 2025-10-02 13:20:13.706646592 +0000 UTC m=+0.121765801 container died 5c5ece1d268df8a34cc779333fe3a9a6e9589dd326f3d420787d2d1b4172ffc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_leavitt, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:20:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-2dfee41db586d8bd4d64ea2079d505b380d1e0e90e2f2ba062b67787ac281a9b-merged.mount: Deactivated successfully.
Oct 02 13:20:13 compute-0 podman[416602]: 2025-10-02 13:20:13.743325084 +0000 UTC m=+0.158444283 container remove 5c5ece1d268df8a34cc779333fe3a9a6e9589dd326f3d420787d2d1b4172ffc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_leavitt, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:20:13 compute-0 systemd[1]: libpod-conmon-5c5ece1d268df8a34cc779333fe3a9a6e9589dd326f3d420787d2d1b4172ffc1.scope: Deactivated successfully.
Oct 02 13:20:13 compute-0 podman[416641]: 2025-10-02 13:20:13.897142695 +0000 UTC m=+0.042872033 container create aaca3fc8c572c591209c0e70b8dc91074dbd9cf6501c71677d370243b345334a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 13:20:13 compute-0 systemd[1]: Started libpod-conmon-aaca3fc8c572c591209c0e70b8dc91074dbd9cf6501c71677d370243b345334a.scope.
Oct 02 13:20:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:20:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4c4bf1294e991f30c55d72a7140d4833b49080c46d5baeee086916ef5bec88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:20:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4c4bf1294e991f30c55d72a7140d4833b49080c46d5baeee086916ef5bec88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:20:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4c4bf1294e991f30c55d72a7140d4833b49080c46d5baeee086916ef5bec88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:20:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4c4bf1294e991f30c55d72a7140d4833b49080c46d5baeee086916ef5bec88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:20:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4c4bf1294e991f30c55d72a7140d4833b49080c46d5baeee086916ef5bec88/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:20:13 compute-0 podman[416641]: 2025-10-02 13:20:13.961300259 +0000 UTC m=+0.107029617 container init aaca3fc8c572c591209c0e70b8dc91074dbd9cf6501c71677d370243b345334a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lovelace, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:20:13 compute-0 podman[416641]: 2025-10-02 13:20:13.876618951 +0000 UTC m=+0.022348309 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:20:13 compute-0 podman[416641]: 2025-10-02 13:20:13.972935378 +0000 UTC m=+0.118664716 container start aaca3fc8c572c591209c0e70b8dc91074dbd9cf6501c71677d370243b345334a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lovelace, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:20:13 compute-0 podman[416641]: 2025-10-02 13:20:13.976351441 +0000 UTC m=+0.122080799 container attach aaca3fc8c572c591209c0e70b8dc91074dbd9cf6501c71677d370243b345334a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lovelace, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:20:14 compute-0 ceph-mon[73607]: pgmap v3677: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:14 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:20:14 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:20:14 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:20:14 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:20:14 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:20:14 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:20:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:14.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:14.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:14 compute-0 keen_lovelace[416657]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:20:14 compute-0 keen_lovelace[416657]: --> relative data size: 1.0
Oct 02 13:20:14 compute-0 keen_lovelace[416657]: --> All data devices are unavailable
Oct 02 13:20:14 compute-0 systemd[1]: libpod-aaca3fc8c572c591209c0e70b8dc91074dbd9cf6501c71677d370243b345334a.scope: Deactivated successfully.
Oct 02 13:20:14 compute-0 podman[416641]: 2025-10-02 13:20:14.780933508 +0000 UTC m=+0.926662846 container died aaca3fc8c572c591209c0e70b8dc91074dbd9cf6501c71677d370243b345334a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lovelace, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:20:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c4c4bf1294e991f30c55d72a7140d4833b49080c46d5baeee086916ef5bec88-merged.mount: Deactivated successfully.
Oct 02 13:20:14 compute-0 podman[416641]: 2025-10-02 13:20:14.848523004 +0000 UTC m=+0.994252332 container remove aaca3fc8c572c591209c0e70b8dc91074dbd9cf6501c71677d370243b345334a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lovelace, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 13:20:14 compute-0 systemd[1]: libpod-conmon-aaca3fc8c572c591209c0e70b8dc91074dbd9cf6501c71677d370243b345334a.scope: Deactivated successfully.
Oct 02 13:20:14 compute-0 sudo[416537]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:14 compute-0 sudo[416687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:14 compute-0 sudo[416687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3678: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:14 compute-0 sudo[416687]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:14 compute-0 sudo[416712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:20:14 compute-0 sudo[416712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:14 compute-0 sudo[416712]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:15 compute-0 sudo[416737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:15 compute-0 sudo[416737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:15 compute-0 sudo[416737]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:15 compute-0 sudo[416762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 13:20:15 compute-0 sudo[416762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:15 compute-0 podman[416829]: 2025-10-02 13:20:15.429642915 +0000 UTC m=+0.036028638 container create dab89f9d542f261d51a2f7fb98afc5bd02dd76303931e827318928db4f763cdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_black, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:20:15 compute-0 systemd[1]: Started libpod-conmon-dab89f9d542f261d51a2f7fb98afc5bd02dd76303931e827318928db4f763cdd.scope.
Oct 02 13:20:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:20:15 compute-0 podman[416829]: 2025-10-02 13:20:15.413857105 +0000 UTC m=+0.020242848 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:20:15 compute-0 podman[416829]: 2025-10-02 13:20:15.515516671 +0000 UTC m=+0.121902424 container init dab89f9d542f261d51a2f7fb98afc5bd02dd76303931e827318928db4f763cdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_black, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 13:20:15 compute-0 podman[416829]: 2025-10-02 13:20:15.523543424 +0000 UTC m=+0.129929147 container start dab89f9d542f261d51a2f7fb98afc5bd02dd76303931e827318928db4f763cdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_black, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 13:20:15 compute-0 podman[416829]: 2025-10-02 13:20:15.526981107 +0000 UTC m=+0.133366830 container attach dab89f9d542f261d51a2f7fb98afc5bd02dd76303931e827318928db4f763cdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_black, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 13:20:15 compute-0 dreamy_black[416845]: 167 167
Oct 02 13:20:15 compute-0 systemd[1]: libpod-dab89f9d542f261d51a2f7fb98afc5bd02dd76303931e827318928db4f763cdd.scope: Deactivated successfully.
Oct 02 13:20:15 compute-0 podman[416829]: 2025-10-02 13:20:15.529475957 +0000 UTC m=+0.135861690 container died dab89f9d542f261d51a2f7fb98afc5bd02dd76303931e827318928db4f763cdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_black, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 13:20:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-78d4abebfd378ba1cafd7bb46d563b38607083b01f16ec76b92899570d1f5b21-merged.mount: Deactivated successfully.
Oct 02 13:20:15 compute-0 podman[416829]: 2025-10-02 13:20:15.566360274 +0000 UTC m=+0.172746017 container remove dab89f9d542f261d51a2f7fb98afc5bd02dd76303931e827318928db4f763cdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_black, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 13:20:15 compute-0 systemd[1]: libpod-conmon-dab89f9d542f261d51a2f7fb98afc5bd02dd76303931e827318928db4f763cdd.scope: Deactivated successfully.
Oct 02 13:20:15 compute-0 podman[416869]: 2025-10-02 13:20:15.780462655 +0000 UTC m=+0.078400267 container create fe49bf30348601dd2f68b080ce6c4e001d011445af95f1dfc75bead7a73b709f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:20:15 compute-0 systemd[1]: Started libpod-conmon-fe49bf30348601dd2f68b080ce6c4e001d011445af95f1dfc75bead7a73b709f.scope.
Oct 02 13:20:15 compute-0 podman[416869]: 2025-10-02 13:20:15.748971218 +0000 UTC m=+0.046908930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:20:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:20:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ad23c3fdbeea47d2924171719ee708d2bafbb7023a27b7f21c26d2720dedfd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:20:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ad23c3fdbeea47d2924171719ee708d2bafbb7023a27b7f21c26d2720dedfd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:20:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ad23c3fdbeea47d2924171719ee708d2bafbb7023a27b7f21c26d2720dedfd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:20:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ad23c3fdbeea47d2924171719ee708d2bafbb7023a27b7f21c26d2720dedfd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:20:15 compute-0 podman[416869]: 2025-10-02 13:20:15.895648466 +0000 UTC m=+0.193586128 container init fe49bf30348601dd2f68b080ce6c4e001d011445af95f1dfc75bead7a73b709f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 13:20:15 compute-0 podman[416869]: 2025-10-02 13:20:15.904254994 +0000 UTC m=+0.202192606 container start fe49bf30348601dd2f68b080ce6c4e001d011445af95f1dfc75bead7a73b709f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lewin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:20:15 compute-0 podman[416869]: 2025-10-02 13:20:15.907977683 +0000 UTC m=+0.205915295 container attach fe49bf30348601dd2f68b080ce6c4e001d011445af95f1dfc75bead7a73b709f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lewin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 13:20:15 compute-0 nova_compute[257802]: 2025-10-02 13:20:15.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:16 compute-0 ceph-mon[73607]: pgmap v3678: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:16.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:20:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:16.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:20:16 compute-0 determined_lewin[416885]: {
Oct 02 13:20:16 compute-0 determined_lewin[416885]:     "1": [
Oct 02 13:20:16 compute-0 determined_lewin[416885]:         {
Oct 02 13:20:16 compute-0 determined_lewin[416885]:             "devices": [
Oct 02 13:20:16 compute-0 determined_lewin[416885]:                 "/dev/loop3"
Oct 02 13:20:16 compute-0 determined_lewin[416885]:             ],
Oct 02 13:20:16 compute-0 determined_lewin[416885]:             "lv_name": "ceph_lv0",
Oct 02 13:20:16 compute-0 determined_lewin[416885]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:20:16 compute-0 determined_lewin[416885]:             "lv_size": "7511998464",
Oct 02 13:20:16 compute-0 determined_lewin[416885]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:20:16 compute-0 determined_lewin[416885]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:20:16 compute-0 determined_lewin[416885]:             "name": "ceph_lv0",
Oct 02 13:20:16 compute-0 determined_lewin[416885]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:20:16 compute-0 determined_lewin[416885]:             "tags": {
Oct 02 13:20:16 compute-0 determined_lewin[416885]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:20:16 compute-0 determined_lewin[416885]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:20:16 compute-0 determined_lewin[416885]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:20:16 compute-0 determined_lewin[416885]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:20:16 compute-0 determined_lewin[416885]:                 "ceph.cluster_name": "ceph",
Oct 02 13:20:16 compute-0 determined_lewin[416885]:                 "ceph.crush_device_class": "",
Oct 02 13:20:16 compute-0 determined_lewin[416885]:                 "ceph.encrypted": "0",
Oct 02 13:20:16 compute-0 determined_lewin[416885]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:20:16 compute-0 determined_lewin[416885]:                 "ceph.osd_id": "1",
Oct 02 13:20:16 compute-0 determined_lewin[416885]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:20:16 compute-0 determined_lewin[416885]:                 "ceph.type": "block",
Oct 02 13:20:16 compute-0 determined_lewin[416885]:                 "ceph.vdo": "0"
Oct 02 13:20:16 compute-0 determined_lewin[416885]:             },
Oct 02 13:20:16 compute-0 determined_lewin[416885]:             "type": "block",
Oct 02 13:20:16 compute-0 determined_lewin[416885]:             "vg_name": "ceph_vg0"
Oct 02 13:20:16 compute-0 determined_lewin[416885]:         }
Oct 02 13:20:16 compute-0 determined_lewin[416885]:     ]
Oct 02 13:20:16 compute-0 determined_lewin[416885]: }
Oct 02 13:20:16 compute-0 systemd[1]: libpod-fe49bf30348601dd2f68b080ce6c4e001d011445af95f1dfc75bead7a73b709f.scope: Deactivated successfully.
Oct 02 13:20:16 compute-0 podman[416869]: 2025-10-02 13:20:16.724286172 +0000 UTC m=+1.022223824 container died fe49bf30348601dd2f68b080ce6c4e001d011445af95f1dfc75bead7a73b709f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lewin, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:20:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4ad23c3fdbeea47d2924171719ee708d2bafbb7023a27b7f21c26d2720dedfd-merged.mount: Deactivated successfully.
Oct 02 13:20:16 compute-0 podman[416869]: 2025-10-02 13:20:16.812462263 +0000 UTC m=+1.110399885 container remove fe49bf30348601dd2f68b080ce6c4e001d011445af95f1dfc75bead7a73b709f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lewin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 13:20:16 compute-0 systemd[1]: libpod-conmon-fe49bf30348601dd2f68b080ce6c4e001d011445af95f1dfc75bead7a73b709f.scope: Deactivated successfully.
Oct 02 13:20:16 compute-0 sudo[416762]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3679: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:16 compute-0 sudo[416909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:16 compute-0 sudo[416909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:16 compute-0 sudo[416909]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:17 compute-0 sudo[416934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:20:17 compute-0 sudo[416934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:17 compute-0 sudo[416934]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:17 compute-0 sudo[416959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:17 compute-0 sudo[416959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:17 compute-0 sudo[416959]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:17 compute-0 sudo[416984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 13:20:17 compute-0 sudo[416984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:17 compute-0 podman[417050]: 2025-10-02 13:20:17.519210717 +0000 UTC m=+0.026886018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:20:17 compute-0 podman[417050]: 2025-10-02 13:20:17.662634467 +0000 UTC m=+0.170309788 container create 886103efb65d39ea152c72a6b08e855ef177e3654037bd911ac1483aea1914f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 13:20:17 compute-0 systemd[1]: Started libpod-conmon-886103efb65d39ea152c72a6b08e855ef177e3654037bd911ac1483aea1914f1.scope.
Oct 02 13:20:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:20:17 compute-0 podman[417050]: 2025-10-02 13:20:17.814696926 +0000 UTC m=+0.322372307 container init 886103efb65d39ea152c72a6b08e855ef177e3654037bd911ac1483aea1914f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 13:20:17 compute-0 podman[417050]: 2025-10-02 13:20:17.82860743 +0000 UTC m=+0.336282751 container start 886103efb65d39ea152c72a6b08e855ef177e3654037bd911ac1483aea1914f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_bouman, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 13:20:17 compute-0 podman[417050]: 2025-10-02 13:20:17.833634422 +0000 UTC m=+0.341309743 container attach 886103efb65d39ea152c72a6b08e855ef177e3654037bd911ac1483aea1914f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_bouman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 13:20:17 compute-0 wizardly_bouman[417066]: 167 167
Oct 02 13:20:17 compute-0 systemd[1]: libpod-886103efb65d39ea152c72a6b08e855ef177e3654037bd911ac1483aea1914f1.scope: Deactivated successfully.
Oct 02 13:20:17 compute-0 podman[417050]: 2025-10-02 13:20:17.83938395 +0000 UTC m=+0.347059291 container died 886103efb65d39ea152c72a6b08e855ef177e3654037bd911ac1483aea1914f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_bouman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 13:20:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-09401cf0664b180f34b7e7949ad84f83f71c6f2009ed0f4f512935aad0e416c8-merged.mount: Deactivated successfully.
Oct 02 13:20:18 compute-0 podman[417050]: 2025-10-02 13:20:18.021345058 +0000 UTC m=+0.529020339 container remove 886103efb65d39ea152c72a6b08e855ef177e3654037bd911ac1483aea1914f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_bouman, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:20:18 compute-0 systemd[1]: libpod-conmon-886103efb65d39ea152c72a6b08e855ef177e3654037bd911ac1483aea1914f1.scope: Deactivated successfully.
Oct 02 13:20:18 compute-0 ceph-mon[73607]: pgmap v3679: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:18 compute-0 podman[417091]: 2025-10-02 13:20:18.311044977 +0000 UTC m=+0.121051443 container create ca44263def59b38fa15dbf9a1d80a32a7a0e8d0bac3b88a563e06cbdb18fdc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_proskuriakova, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:20:18 compute-0 podman[417091]: 2025-10-02 13:20:18.223343177 +0000 UTC m=+0.033349643 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:20:18 compute-0 nova_compute[257802]: 2025-10-02 13:20:18.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:18 compute-0 systemd[1]: Started libpod-conmon-ca44263def59b38fa15dbf9a1d80a32a7a0e8d0bac3b88a563e06cbdb18fdc4c.scope.
Oct 02 13:20:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:20:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f57a950c1e242295c81a054d3ce24d9318ed635b22cb076ba792857a432847/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:20:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f57a950c1e242295c81a054d3ce24d9318ed635b22cb076ba792857a432847/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:20:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f57a950c1e242295c81a054d3ce24d9318ed635b22cb076ba792857a432847/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:20:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f57a950c1e242295c81a054d3ce24d9318ed635b22cb076ba792857a432847/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:20:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:18.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:18 compute-0 podman[417091]: 2025-10-02 13:20:18.459631832 +0000 UTC m=+0.269638308 container init ca44263def59b38fa15dbf9a1d80a32a7a0e8d0bac3b88a563e06cbdb18fdc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_proskuriakova, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:20:18 compute-0 podman[417091]: 2025-10-02 13:20:18.468468504 +0000 UTC m=+0.278474950 container start ca44263def59b38fa15dbf9a1d80a32a7a0e8d0bac3b88a563e06cbdb18fdc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:20:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:18.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:18 compute-0 podman[417091]: 2025-10-02 13:20:18.586581516 +0000 UTC m=+0.396587982 container attach ca44263def59b38fa15dbf9a1d80a32a7a0e8d0bac3b88a563e06cbdb18fdc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:20:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3680: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:19 compute-0 ceph-mon[73607]: pgmap v3680: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:19 compute-0 quirky_proskuriakova[417107]: {
Oct 02 13:20:19 compute-0 quirky_proskuriakova[417107]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 13:20:19 compute-0 quirky_proskuriakova[417107]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:20:19 compute-0 quirky_proskuriakova[417107]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:20:19 compute-0 quirky_proskuriakova[417107]:         "osd_id": 1,
Oct 02 13:20:19 compute-0 quirky_proskuriakova[417107]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:20:19 compute-0 quirky_proskuriakova[417107]:         "type": "bluestore"
Oct 02 13:20:19 compute-0 quirky_proskuriakova[417107]:     }
Oct 02 13:20:19 compute-0 quirky_proskuriakova[417107]: }
Oct 02 13:20:19 compute-0 systemd[1]: libpod-ca44263def59b38fa15dbf9a1d80a32a7a0e8d0bac3b88a563e06cbdb18fdc4c.scope: Deactivated successfully.
Oct 02 13:20:19 compute-0 podman[417129]: 2025-10-02 13:20:19.315581095 +0000 UTC m=+0.022345709 container died ca44263def59b38fa15dbf9a1d80a32a7a0e8d0bac3b88a563e06cbdb18fdc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_proskuriakova, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:20:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3f57a950c1e242295c81a054d3ce24d9318ed635b22cb076ba792857a432847-merged.mount: Deactivated successfully.
Oct 02 13:20:19 compute-0 podman[417129]: 2025-10-02 13:20:19.365055674 +0000 UTC m=+0.071820268 container remove ca44263def59b38fa15dbf9a1d80a32a7a0e8d0bac3b88a563e06cbdb18fdc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:20:19 compute-0 systemd[1]: libpod-conmon-ca44263def59b38fa15dbf9a1d80a32a7a0e8d0bac3b88a563e06cbdb18fdc4c.scope: Deactivated successfully.
Oct 02 13:20:19 compute-0 sudo[416984]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:20:19 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:20:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:20:19 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:20:19 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev ca888ed2-1ba4-46c1-aba9-ec689c962d26 does not exist
Oct 02 13:20:19 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 77482bce-dd85-4153-ba86-73ab8c85c23f does not exist
Oct 02 13:20:19 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev c04674ed-3c64-4703-956f-4bc0e2ebe433 does not exist
Oct 02 13:20:19 compute-0 sudo[417144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:19 compute-0 sudo[417144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:19 compute-0 sudo[417144]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:19 compute-0 sudo[417169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:20:19 compute-0 sudo[417169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:19 compute-0 sudo[417169]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:20 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:20:20 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:20:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:20.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:20:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:20.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:20:20 compute-0 sshd-session[417194]: Connection closed by 193.32.162.151 port 33366
Oct 02 13:20:20 compute-0 nova_compute[257802]: 2025-10-02 13:20:20.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3681: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:21 compute-0 ceph-mon[73607]: pgmap v3681: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:20:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:22.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:20:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:22.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3682: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:23 compute-0 nova_compute[257802]: 2025-10-02 13:20:23.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:24 compute-0 ceph-mon[73607]: pgmap v3682: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:24.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:24.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3683: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:25 compute-0 nova_compute[257802]: 2025-10-02 13:20:25.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:26 compute-0 ceph-mon[73607]: pgmap v3683: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:26.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:26.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3684: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:20:27.005 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:20:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:20:27.006 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:20:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:20:27.006 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:20:27 compute-0 ceph-mon[73607]: pgmap v3684: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:28 compute-0 nova_compute[257802]: 2025-10-02 13:20:28.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:20:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:28.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:20:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:28.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3685: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:28 compute-0 podman[417202]: 2025-10-02 13:20:28.938406825 +0000 UTC m=+0.062690869 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 13:20:28 compute-0 podman[417200]: 2025-10-02 13:20:28.938687002 +0000 UTC m=+0.068293615 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:20:28 compute-0 podman[417201]: 2025-10-02 13:20:28.967210417 +0000 UTC m=+0.098293895 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 13:20:30 compute-0 ceph-mon[73607]: pgmap v3685: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:20:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:30.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:20:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:30.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:30 compute-0 nova_compute[257802]: 2025-10-02 13:20:30.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3686: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:31 compute-0 ceph-mon[73607]: pgmap v3686: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:31 compute-0 sudo[417255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:31 compute-0 sudo[417255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:31 compute-0 sudo[417255]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:31 compute-0 sudo[417280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:31 compute-0 sudo[417280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:31 compute-0 sudo[417280]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:32.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:20:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:32.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:20:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3687: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:33 compute-0 nova_compute[257802]: 2025-10-02 13:20:33.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:33 compute-0 ceph-mon[73607]: pgmap v3687: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:34.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:34.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3688: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:35 compute-0 ceph-mon[73607]: pgmap v3688: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:35 compute-0 nova_compute[257802]: 2025-10-02 13:20:35.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:20:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:36.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:20:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:36.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3689: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:37 compute-0 ceph-mon[73607]: pgmap v3689: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:37 compute-0 sshd-session[417308]: Accepted publickey for zuul from 192.168.122.10 port 41252 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 13:20:37 compute-0 systemd-logind[789]: New session 78 of user zuul.
Oct 02 13:20:37 compute-0 systemd[1]: Started Session 78 of User zuul.
Oct 02 13:20:37 compute-0 sshd-session[417308]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 13:20:38 compute-0 sudo[417312]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 02 13:20:38 compute-0 sudo[417312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 13:20:38 compute-0 nova_compute[257802]: 2025-10-02 13:20:38.114 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:38 compute-0 nova_compute[257802]: 2025-10-02 13:20:38.114 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:20:38 compute-0 nova_compute[257802]: 2025-10-02 13:20:38.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:38.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:38.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3690: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:39 compute-0 nova_compute[257802]: 2025-10-02 13:20:39.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:40 compute-0 ceph-mon[73607]: pgmap v3690: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:40 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/347551512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:20:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:40.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:40 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47060 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:40.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:40 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.38502 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3691: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:40 compute-0 nova_compute[257802]: 2025-10-02 13:20:40.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:40 compute-0 podman[417488]: 2025-10-02 13:20:40.959054113 +0000 UTC m=+0.099406762 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:20:41 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47069 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:41 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.38508 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:41 compute-0 ceph-mon[73607]: from='client.47060 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:41 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1262707481' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:20:41 compute-0 ceph-mon[73607]: from='client.38502 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:41 compute-0 ceph-mon[73607]: pgmap v3691: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:41 compute-0 ceph-mon[73607]: from='client.47069 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 02 13:20:41 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4036113177' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:20:42 compute-0 ceph-mon[73607]: from='client.38508 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1021490665' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:20:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4036113177' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:20:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:20:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:42.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:20:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:42.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:20:42
Oct 02 13:20:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:20:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:20:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'backups', 'vms', '.mgr', 'default.rgw.meta', 'volumes', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log']
Oct 02 13:20:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:20:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:20:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:20:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:20:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:20:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:20:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:20:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3692: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:43 compute-0 nova_compute[257802]: 2025-10-02 13:20:43.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:43 compute-0 nova_compute[257802]: 2025-10-02 13:20:43.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:43 compute-0 nova_compute[257802]: 2025-10-02 13:20:43.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:43 compute-0 nova_compute[257802]: 2025-10-02 13:20:43.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:43 compute-0 ceph-mon[73607]: pgmap v3692: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:20:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:20:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:20:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:20:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:20:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:20:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:20:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:20:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:20:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:20:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:44.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:44 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46207 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:44.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:44 compute-0 ovs-vsctl[417624]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 02 13:20:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3693: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:45 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46213 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:45 compute-0 ceph-mon[73607]: from='client.46207 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:45 compute-0 ceph-mon[73607]: pgmap v3693: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:45 compute-0 virtqemud[257280]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 02 13:20:45 compute-0 virtqemud[257280]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 02 13:20:45 compute-0 virtqemud[257280]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 02 13:20:45 compute-0 nova_compute[257802]: 2025-10-02 13:20:45.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:46 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47081 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:46.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:46 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj asok_command: cache status {prefix=cache status} (starting...)
Oct 02 13:20:46 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj Can't run that command on an inactive MDS!
Oct 02 13:20:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:46.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:46 compute-0 ceph-mon[73607]: from='client.46213 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3080946224' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:20:46 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj asok_command: client ls {prefix=client ls} (starting...)
Oct 02 13:20:46 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj Can't run that command on an inactive MDS!
Oct 02 13:20:46 compute-0 lvm[417979]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 02 13:20:46 compute-0 lvm[417979]: VG ceph_vg0 finished
Oct 02 13:20:46 compute-0 kernel: block dm-0: the capability attribute has been deprecated.
Oct 02 13:20:46 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47093 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 02 13:20:46 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:20:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3694: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:47 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.38526 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:47 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj asok_command: damage ls {prefix=damage ls} (starting...)
Oct 02 13:20:47 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj Can't run that command on an inactive MDS!
Oct 02 13:20:47 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj asok_command: dump loads {prefix=dump loads} (starting...)
Oct 02 13:20:47 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj Can't run that command on an inactive MDS!
Oct 02 13:20:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 02 13:20:47 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/889483038' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:20:47 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.38541 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:47 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 02 13:20:47 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj Can't run that command on an inactive MDS!
Oct 02 13:20:47 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47114 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:47 compute-0 ceph-mgr[73901]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:20:47 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T13:20:47.635+0000 7f4c74221640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:20:47 compute-0 ceph-mon[73607]: from='client.47081 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:47 compute-0 ceph-mon[73607]: from='client.47093 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3209301266' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:20:47 compute-0 ceph-mon[73607]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:20:47 compute-0 ceph-mon[73607]: pgmap v3694: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2038607376' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:20:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/889483038' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:20:47 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 02 13:20:47 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj Can't run that command on an inactive MDS!
Oct 02 13:20:47 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:20:47 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4197069089' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:20:47 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 02 13:20:47 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj Can't run that command on an inactive MDS!
Oct 02 13:20:47 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 02 13:20:47 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj Can't run that command on an inactive MDS!
Oct 02 13:20:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct 02 13:20:48 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4047824104' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 13:20:48 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 02 13:20:48 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj Can't run that command on an inactive MDS!
Oct 02 13:20:48 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.38568 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:48 compute-0 ceph-mgr[73901]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:20:48 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T13:20:48.275+0000 7f4c74221640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:20:48 compute-0 nova_compute[257802]: 2025-10-02 13:20:48.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:48 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 02 13:20:48 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj Can't run that command on an inactive MDS!
Oct 02 13:20:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:20:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:48.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:20:48 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47147 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:48 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj asok_command: ops {prefix=ops} (starting...)
Oct 02 13:20:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:48.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:48 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj Can't run that command on an inactive MDS!
Oct 02 13:20:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct 02 13:20:48 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2232519185' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 13:20:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct 02 13:20:48 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/707328247' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 13:20:48 compute-0 ceph-mon[73607]: from='client.38526 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:48 compute-0 ceph-mon[73607]: from='client.38541 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:48 compute-0 ceph-mon[73607]: from='client.47114 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1701019612' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 13:20:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4197069089' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:20:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/856340769' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 13:20:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4047824104' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 13:20:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2183296961' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 13:20:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2416550832' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:20:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2232519185' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 13:20:48 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47159 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3695: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:48 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.38586 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:20:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Cumulative writes: 18K writes, 81K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.02 MB/s
                                           Cumulative WAL: 18K writes, 18K syncs, 1.00 writes per sync, written: 0.12 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1568 writes, 6683 keys, 1568 commit groups, 1.0 writes per commit group, ingest: 10.39 MB, 0.02 MB/s
                                           Interval WAL: 1568 writes, 1568 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     45.4      2.39              0.30        57    0.042       0      0       0.0       0.0
                                             L6      1/0   12.42 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.4    109.2     93.8      6.20              1.61        56    0.111    434K    30K       0.0       0.0
                                            Sum      1/0   12.42 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.4     78.9     80.3      8.59              1.91       113    0.076    434K    30K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.4    102.0    101.2      0.67              0.19        10    0.067     54K   2558       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0    109.2     93.8      6.20              1.61        56    0.111    434K    30K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     45.4      2.39              0.30        56    0.043       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.106, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.67 GB write, 0.10 MB/s write, 0.66 GB read, 0.10 MB/s read, 8.6 seconds
                                           Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5581be5e11f0#2 capacity: 304.00 MB usage: 73.60 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000498 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(4247,70.49 MB,23.1885%) FilterBlock(114,1.16 MB,0.381886%) IndexBlock(114,1.94 MB,0.639178%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 13:20:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 02 13:20:49 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2345643864' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:20:49 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj asok_command: session ls {prefix=session ls} (starting...)
Oct 02 13:20:49 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj Can't run that command on an inactive MDS!
Oct 02 13:20:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 02 13:20:49 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:20:49 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.38598 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:49 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj asok_command: status {prefix=status} (starting...)
Oct 02 13:20:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 02 13:20:49 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1388739759' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:20:49 compute-0 ceph-mon[73607]: from='client.38568 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:49 compute-0 ceph-mon[73607]: from='client.47147 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/707328247' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 13:20:49 compute-0 ceph-mon[73607]: from='client.47159 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:49 compute-0 ceph-mon[73607]: pgmap v3695: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:49 compute-0 ceph-mon[73607]: from='client.38586 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3863448866' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:20:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2345643864' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:20:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3805049822' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:20:49 compute-0 ceph-mon[73607]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:20:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1009958000' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:20:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1388739759' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:20:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3311019110' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 13:20:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 02 13:20:49 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2425878621' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:20:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 02 13:20:49 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3654235563' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:20:50 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47189 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:50 compute-0 ceph-mgr[73901]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 13:20:50 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T13:20:50.161+0000 7f4c74221640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 13:20:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct 02 13:20:50 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1260978290' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 13:20:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 13:20:50 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3364009437' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:20:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:50.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:50.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:50 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.38637 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:50 compute-0 ceph-mgr[73901]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 13:20:50 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T13:20:50.639+0000 7f4c74221640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 13:20:50 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 02 13:20:50 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2561350854' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 13:20:50 compute-0 ceph-mon[73607]: from='client.38598 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2425878621' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:20:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/882833153' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:20:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3654235563' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:20:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1260978290' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 13:20:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2544960413' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 13:20:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3364009437' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:20:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1889401041' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 13:20:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3043726876' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:20:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3696: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:50 compute-0 nova_compute[257802]: 2025-10-02 13:20:50.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct 02 13:20:51 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1020573393' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 13:20:51 compute-0 nova_compute[257802]: 2025-10-02 13:20:51.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:51 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47225 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 02 13:20:51 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/21278998' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:20:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct 02 13:20:51 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1795750497' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 13:20:51 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47240 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:51 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.38673 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:51 compute-0 sudo[418697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:51 compute-0 sudo[418697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:51 compute-0 sudo[418697]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:51 compute-0 sudo[418739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:51 compute-0 sudo[418739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:51 compute-0 sudo[418739]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:51 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47252 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 02 13:20:51 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2413002667' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:20:51 compute-0 ceph-mon[73607]: from='client.47189 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:51 compute-0 ceph-mon[73607]: from='client.38637 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2561350854' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 13:20:51 compute-0 ceph-mon[73607]: pgmap v3696: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1020573393' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 13:20:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2504091879' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 13:20:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/21278998' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:20:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1795750497' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 13:20:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/608220265' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:20:51 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.38688 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:52 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47258 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:52 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.38700 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 02 13:20:52 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/139641157' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:20:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:52.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:52.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:52 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47270 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:52 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.38706 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:09.560170+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19ccb3000/0x0/0x1bfc00000, data 0xa64324f/0xa84b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 460496896 unmapped: 26853376 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:10.560243+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19ccb3000/0x0/0x1bfc00000, data 0xa64324f/0xa84b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 460521472 unmapped: 26828800 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:11.560418+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 460521472 unmapped: 26828800 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5697954 data_alloc: 285212672 data_used: 78192640
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:12.560558+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.940839767s of 13.178479195s, submitted: 21
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 460521472 unmapped: 26828800 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:13.560740+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 460521472 unmapped: 26828800 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:14.560954+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 460521472 unmapped: 26828800 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:15.561096+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 460521472 unmapped: 26828800 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd6f3fc00 session 0x55bcd7218960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:16.561297+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19ccb3000/0x0/0x1bfc00000, data 0xa64324f/0xa84b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd7e53000 session 0x55bcd5cd4960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 460521472 unmapped: 26828800 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5696602 data_alloc: 285212672 data_used: 78192640
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7e53000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:17.561466+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19ccb3000/0x0/0x1bfc00000, data 0xa64324f/0xa84b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459366400 unmapped: 27983872 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19ccb3000/0x0/0x1bfc00000, data 0xa64324f/0xa84b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [0,0,0,2])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:18.561645+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459366400 unmapped: 27983872 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:19.561791+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd7e53000 session 0x55bcd82af0e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459366400 unmapped: 27983872 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:20.561951+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459366400 unmapped: 27983872 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19dabb000/0x0/0x1bfc00000, data 0x983c1dd/0x9a42000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:21.562144+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459366400 unmapped: 27983872 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5558692 data_alloc: 285212672 data_used: 74821632
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:22.562271+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459366400 unmapped: 27983872 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19dabb000/0x0/0x1bfc00000, data 0x983c1dd/0x9a42000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:23.562414+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459366400 unmapped: 27983872 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:24.562596+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459366400 unmapped: 27983872 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:25.562722+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459366400 unmapped: 27983872 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19dabb000/0x0/0x1bfc00000, data 0x983c1dd/0x9a42000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:26.562888+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459374592 unmapped: 27975680 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5558692 data_alloc: 285212672 data_used: 74821632
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:27.563141+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459374592 unmapped: 27975680 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:28.563370+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 55K writes, 217K keys, 55K commit groups, 1.0 writes per commit group, ingest: 0.21 GB, 0.04 MB/s
                                           Cumulative WAL: 55K writes, 19K syncs, 2.80 writes per sync, written: 0.21 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7093 writes, 28K keys, 7093 commit groups, 1.0 writes per commit group, ingest: 28.78 MB, 0.05 MB/s
                                           Interval WAL: 7093 writes, 2781 syncs, 2.55 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459374592 unmapped: 27975680 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:29.563614+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459374592 unmapped: 27975680 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:30.563870+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459374592 unmapped: 27975680 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:31.564008+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459374592 unmapped: 27975680 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5558692 data_alloc: 285212672 data_used: 74821632
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19dabb000/0x0/0x1bfc00000, data 0x983c1dd/0x9a42000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:32.564357+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459374592 unmapped: 27975680 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:33.564604+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19dabb000/0x0/0x1bfc00000, data 0x983c1dd/0x9a42000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459374592 unmapped: 27975680 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:34.564767+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459382784 unmapped: 27967488 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:35.564884+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459390976 unmapped: 27959296 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19dabb000/0x0/0x1bfc00000, data 0x983c1dd/0x9a42000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:36.565057+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459390976 unmapped: 27959296 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5558692 data_alloc: 285212672 data_used: 74821632
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:37.565277+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459390976 unmapped: 27959296 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:38.565514+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459390976 unmapped: 27959296 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:39.565684+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459390976 unmapped: 27959296 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:40.565849+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459390976 unmapped: 27959296 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19dabb000/0x0/0x1bfc00000, data 0x983c1dd/0x9a42000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:41.566031+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459390976 unmapped: 27959296 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5558692 data_alloc: 285212672 data_used: 74821632
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:42.566187+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19dabb000/0x0/0x1bfc00000, data 0x983c1dd/0x9a42000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 27951104 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:43.566348+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459407360 unmapped: 27942912 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:44.566475+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459407360 unmapped: 27942912 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:45.566598+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459407360 unmapped: 27942912 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:46.566737+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459415552 unmapped: 27934720 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5558692 data_alloc: 285212672 data_used: 74821632
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19dabb000/0x0/0x1bfc00000, data 0x983c1dd/0x9a42000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:47.566897+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459415552 unmapped: 27934720 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:48.567029+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459415552 unmapped: 27934720 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:49.567154+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459423744 unmapped: 27926528 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:50.567299+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459423744 unmapped: 27926528 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19dabb000/0x0/0x1bfc00000, data 0x983c1dd/0x9a42000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:51.567473+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459423744 unmapped: 27926528 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5558692 data_alloc: 285212672 data_used: 74821632
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19dabb000/0x0/0x1bfc00000, data 0x983c1dd/0x9a42000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:52.567600+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459423744 unmapped: 27926528 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:53.567777+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459423744 unmapped: 27926528 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:54.567908+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459423744 unmapped: 27926528 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:55.568023+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459423744 unmapped: 27926528 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:56.568144+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459431936 unmapped: 27918336 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5558692 data_alloc: 285212672 data_used: 74821632
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19dabb000/0x0/0x1bfc00000, data 0x983c1dd/0x9a42000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:57.568292+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459440128 unmapped: 27910144 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:58.568435+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459440128 unmapped: 27910144 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:52:59.568626+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459440128 unmapped: 27910144 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:00.568791+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19dabb000/0x0/0x1bfc00000, data 0x983c1dd/0x9a42000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459440128 unmapped: 27910144 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:01.568920+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459440128 unmapped: 27910144 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5558692 data_alloc: 285212672 data_used: 74821632
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19dabb000/0x0/0x1bfc00000, data 0x983c1dd/0x9a42000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:02.569037+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459440128 unmapped: 27910144 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19dabb000/0x0/0x1bfc00000, data 0x983c1dd/0x9a42000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:03.569201+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459440128 unmapped: 27910144 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:04.569368+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459440128 unmapped: 27910144 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:05.569521+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459448320 unmapped: 27901952 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:06.569765+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459448320 unmapped: 27901952 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5558692 data_alloc: 285212672 data_used: 74821632
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:07.569902+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459456512 unmapped: 27893760 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:08.570030+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459456512 unmapped: 27893760 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19dabb000/0x0/0x1bfc00000, data 0x983c1dd/0x9a42000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:09.570172+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459456512 unmapped: 27893760 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:10.570356+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459456512 unmapped: 27893760 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:11.570502+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459456512 unmapped: 27893760 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5558692 data_alloc: 285212672 data_used: 74821632
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:12.570635+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459456512 unmapped: 27893760 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:13.570809+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459464704 unmapped: 27885568 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:14.570974+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19dabb000/0x0/0x1bfc00000, data 0x983c1dd/0x9a42000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459472896 unmapped: 27877376 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:15.571132+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459472896 unmapped: 27877376 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19dabb000/0x0/0x1bfc00000, data 0x983c1dd/0x9a42000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:16.571264+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459472896 unmapped: 27877376 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5558692 data_alloc: 285212672 data_used: 74821632
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19dabb000/0x0/0x1bfc00000, data 0x983c1dd/0x9a42000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:17.571407+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd8007400 session 0x55bcd6813a40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd81ebc00 session 0x55bcd7252f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459472896 unmapped: 27877376 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:18.571561+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459472896 unmapped: 27877376 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3fc00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd6f3fc00 session 0x55bcd7fce5a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:19.571704+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459472896 unmapped: 27877376 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:20.571809+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7205000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd7205000 session 0x55bcd75943c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3fc00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd6f3fc00 session 0x55bcd5339680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459472896 unmapped: 27877376 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:21.571923+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459489280 unmapped: 27860992 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5558692 data_alloc: 285212672 data_used: 74821632
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7e53000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd7e53000 session 0x55bcd7214000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd8007400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd8007400 session 0x55bcd75b9680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ebc00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd81ebc00 session 0x55bcd74a6f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7207400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd7207400 session 0x55bcd749cf00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3fc00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:22.572063+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19dabb000/0x0/0x1bfc00000, data 0x983c1dd/0x9a42000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 70.158851624s of 70.396728516s, submitted: 45
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459571200 unmapped: 27779072 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:23.572362+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd6f3fc00 session 0x55bcd72172c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7e53000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd7e53000 session 0x55bcd5cd43c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd8007400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd8007400 session 0x55bcd82afa40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ebc00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd89f0c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd89f0c00 session 0x55bcd5cd52c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bce58b2400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7125000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459579392 unmapped: 27770880 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd81ebc00 session 0x55bcd7252780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd7125000 session 0x55bcd7fcf680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bce58b2400 session 0x55bcd4f43c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3fc00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7e53000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd6f3fc00 session 0x55bcd61970e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd7e53000 session 0x55bcd6196000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:24.572521+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd8007400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd8007400 session 0x55bcd820e780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459579392 unmapped: 27770880 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:25.573116+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459579392 unmapped: 27770880 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:26.573318+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459579392 unmapped: 27770880 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5628102 data_alloc: 285212672 data_used: 74825728
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3fc00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd6f3fc00 session 0x55bcd72170e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:27.573508+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19d223000/0x0/0x1bfc00000, data 0xa0d324f/0xa2db000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459579392 unmapped: 27770880 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7125000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd7125000 session 0x55bcd7fd6000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7e53000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:28.573698+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd7e53000 session 0x55bcd68132c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459759616 unmapped: 27590656 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bce58b2400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ebc00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd89f0c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:29.573949+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459759616 unmapped: 27590656 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19d1f7000/0x0/0x1bfc00000, data 0xa0fd282/0xa307000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:30.574153+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459776000 unmapped: 27574272 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:31.574298+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461709312 unmapped: 25640960 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5709210 data_alloc: 285212672 data_used: 85061632
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd87c8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd87c8400 session 0x55bcd6891680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:32.574487+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461709312 unmapped: 25640960 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:33.574678+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7e52c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd7e52c00 session 0x55bcd75b9860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461709312 unmapped: 25640960 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:34.574873+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19d1f7000/0x0/0x1bfc00000, data 0xa0fd282/0xa307000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3fc00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd6f3fc00 session 0x55bcd7218b40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7125000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461709312 unmapped: 25640960 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.714688301s of 12.225479126s, submitted: 23
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd7125000 session 0x55bcd82afc20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:35.575039+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7e53000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd87c8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461717504 unmapped: 25632768 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:36.575204+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461717504 unmapped: 25632768 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5711208 data_alloc: 285212672 data_used: 85065728
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:37.575342+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461717504 unmapped: 25632768 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:38.575625+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461717504 unmapped: 25632768 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:39.575779+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19d1f6000/0x0/0x1bfc00000, data 0xa0fd292/0xa308000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461717504 unmapped: 25632768 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:40.576109+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19d1f6000/0x0/0x1bfc00000, data 0xa0fd292/0xa308000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462954496 unmapped: 24395776 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19d1f6000/0x0/0x1bfc00000, data 0xa0fd292/0xa308000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:41.576242+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bce58b2400 session 0x55bcd75b8960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd81ebc00 session 0x55bcd749cf00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd89f0c00 session 0x55bcd826b0e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464969728 unmapped: 22380544 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5766408 data_alloc: 301989888 data_used: 91795456
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:42.576459+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd89f0c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464969728 unmapped: 22380544 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3fc00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:43.576752+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7125000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464986112 unmapped: 22364160 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:44.576878+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19d221000/0x0/0x1bfc00000, data 0xa0d325f/0xa2dc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464994304 unmapped: 22355968 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.002289772s of 10.025804520s, submitted: 37
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:45.576999+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd7125000 session 0x55bcd7fcf0e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19d222000/0x0/0x1bfc00000, data 0xa0d31ed/0xa2da000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464994304 unmapped: 22355968 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:46.577161+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19d223000/0x0/0x1bfc00000, data 0xa0d31ed/0xa2da000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 465002496 unmapped: 22347776 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5759972 data_alloc: 301989888 data_used: 91791360
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:47.577304+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19d223000/0x0/0x1bfc00000, data 0xa0d31ed/0xa2da000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 465002496 unmapped: 22347776 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:48.577426+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 465002496 unmapped: 22347776 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:49.577569+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 465010688 unmapped: 22339584 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:50.577729+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 465010688 unmapped: 22339584 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:51.577901+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 465010688 unmapped: 22339584 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5759972 data_alloc: 301989888 data_used: 91791360
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:52.578081+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 465010688 unmapped: 22339584 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:53.578306+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19d223000/0x0/0x1bfc00000, data 0xa0d31ed/0xa2da000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [0,0,0,0,0,1])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 465117184 unmapped: 22233088 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:54.578469+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19cee4000/0x0/0x1bfc00000, data 0xa4131ed/0xa61a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 465215488 unmapped: 22134784 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:55.578601+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.075058937s of 10.339115143s, submitted: 19
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 465256448 unmapped: 22093824 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:56.578753+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19cedd000/0x0/0x1bfc00000, data 0xa4171ed/0xa61e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 465272832 unmapped: 22077440 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5788902 data_alloc: 301989888 data_used: 91901952
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:57.578944+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 465272832 unmapped: 22077440 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:58.579097+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 465272832 unmapped: 22077440 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:53:59.579268+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ebc00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd81ebc00 session 0x55bcd826be00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bce58b2400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bce58b2400 session 0x55bcd6196960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ea000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd81ea000 session 0x55bcd75b90e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b0400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd76b0400 session 0x55bcd5cd5e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 465272832 unmapped: 22077440 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b0400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:00.579410+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd76b0400 session 0x55bcd53372c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7125000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd7125000 session 0x55bcd749cb40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ea000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd81ea000 session 0x55bcd7217860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ebc00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd81ebc00 session 0x55bcd7fd6b40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bce58b2400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bce58b2400 session 0x55bcd7c9cb40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 465321984 unmapped: 22028288 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:01.579557+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 465321984 unmapped: 22028288 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5823946 data_alloc: 301989888 data_used: 91901952
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:02.579738+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19cb70000/0x0/0x1bfc00000, data 0xa7861fd/0xa98e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 465321984 unmapped: 22028288 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:03.579957+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 465321984 unmapped: 22028288 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7125000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd7125000 session 0x55bcd5534f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:04.580151+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd7e53000 session 0x55bcd5534d20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd87c8400 session 0x55bcd4f430e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b0400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd76b0400 session 0x55bcd7fd6f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ea000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 465321984 unmapped: 22028288 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:05.580276+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ebc00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd81ebc00 session 0x55bcd5535e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.731106758s of 10.112491608s, submitted: 16
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7125000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd7125000 session 0x55bcd78052c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd81ea000 session 0x55bcd820a3c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461856768 unmapped: 25493504 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b0400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7e53000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:06.580401+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461864960 unmapped: 25485312 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5685556 data_alloc: 285212672 data_used: 83927040
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19d2b5000/0x0/0x1bfc00000, data 0x9bcd1fd/0x9dd5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:07.580544+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461864960 unmapped: 25485312 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:08.580698+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461864960 unmapped: 25485312 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:09.580888+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461864960 unmapped: 25485312 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:10.581014+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461873152 unmapped: 25477120 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:11.581167+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19d729000/0x0/0x1bfc00000, data 0x9bcd1fd/0x9dd5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [0,0,0,0,0,0,0,3])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461905920 unmapped: 25444352 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5685380 data_alloc: 285212672 data_used: 83927040
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:12.581290+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 463052800 unmapped: 24297472 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:13.581422+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 463937536 unmapped: 23412736 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:14.581593+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 463937536 unmapped: 23412736 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:15.581794+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd87c8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.404006004s of 10.142316818s, submitted: 281
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 heartbeat osd_stat(store_statfs(0x19d319000/0x0/0x1bfc00000, data 0x9bcd1fd/0x9dd5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcdfcf7c00 session 0x55bcd6891c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 ms_handle_reset con 0x55bcd5c96000 session 0x55bcd5211a40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 463937536 unmapped: 23412736 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:16.581952+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdada5000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 382 handle_osd_map epochs [382,383], i have 382, src has [1,383]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 383 handle_osd_map epochs [383,383], i have 383, src has [1,383]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 383 ms_handle_reset con 0x55bcd87c8400 session 0x55bcd75b90e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 458940416 unmapped: 28409856 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5340551 data_alloc: 268435456 data_used: 70619136
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 383 ms_handle_reset con 0x55bcdada5000 session 0x55bcd6ffe3c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:17.582078+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 383 ms_handle_reset con 0x55bcd89f0c00 session 0x55bcd7c9d4a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 383 ms_handle_reset con 0x55bcd6f3fc00 session 0x55bcd82ae000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 458940416 unmapped: 28409856 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:18.582300+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 458948608 unmapped: 28401664 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:19.582581+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 383 heartbeat osd_stat(store_statfs(0x19f4f0000/0x0/0x1bfc00000, data 0x79f7dd6/0x7bfe000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 458948608 unmapped: 28401664 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:20.582943+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5c96000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 458948608 unmapped: 28401664 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:21.583140+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 458956800 unmapped: 28393472 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5339495 data_alloc: 268435456 data_used: 70619136
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:22.583438+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 383 ms_handle_reset con 0x55bcd5c96000 session 0x55bcd7487a40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444792832 unmapped: 42557440 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:23.583631+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 445243392 unmapped: 42106880 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 383 handle_osd_map epochs [383,384], i have 383, src has [1,384]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:24.583846+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 441393152 unmapped: 45957120 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:25.583996+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a02c8000/0x0/0x1bfc00000, data 0x6c208e2/0x6e26000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 441761792 unmapped: 45588480 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:26.584163+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 441761792 unmapped: 45588480 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157048 data_alloc: 251658240 data_used: 49086464
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:27.584338+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 441761792 unmapped: 45588480 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:28.584478+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7125000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.879434586s of 12.859483719s, submitted: 223
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 384 ms_handle_reset con 0x55bcd7125000 session 0x55bcd5529e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a12cb000/0x0/0x1bfc00000, data 0x6c558e2/0x6e5b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 441532416 unmapped: 45817856 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:29.584627+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5c96000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 441532416 unmapped: 45817856 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:30.584759+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 385 ms_handle_reset con 0x55bcd5c96000 session 0x55bcd4f963c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 437108736 unmapped: 50241536 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 385 ms_handle_reset con 0x55bcd76b1000 session 0x55bcd6ffe780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 385 ms_handle_reset con 0x55bcd8006400 session 0x55bcd79d7680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:31.584886+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3fc00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429498368 unmapped: 57851904 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4869068 data_alloc: 234881024 data_used: 33665024
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:32.585050+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 385 ms_handle_reset con 0x55bcd6f3fc00 session 0x55bcd82ae1e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429481984 unmapped: 57868288 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:33.585357+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429481984 unmapped: 57868288 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 385 heartbeat osd_stat(store_statfs(0x1a2605000/0x0/0x1bfc00000, data 0x55734fd/0x5776000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:34.585622+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429481984 unmapped: 57868288 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:35.585909+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd89f0c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429481984 unmapped: 57868288 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:36.586077+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 385 handle_osd_map epochs [386,386], i have 385, src has [1,386]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 386 ms_handle_reset con 0x55bcd89f0c00 session 0x55bcd7486000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 386 ms_handle_reset con 0x55bcd81e9c00 session 0x55bcd7253860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 386 ms_handle_reset con 0x55bcd622c000 session 0x55bcd7594f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429481984 unmapped: 57868288 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4819981 data_alloc: 234881024 data_used: 33558528
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd89f0c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:37.586205+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 386 ms_handle_reset con 0x55bcd89f0c00 session 0x55bcd7487860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 427089920 unmapped: 60260352 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:38.586355+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 386 handle_osd_map epochs [386,387], i have 386, src has [1,387]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.013216019s of 10.694962502s, submitted: 141
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 427089920 unmapped: 60260352 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:39.586577+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 387 heartbeat osd_stat(store_statfs(0x1a386b000/0x0/0x1bfc00000, data 0x46c2128/0x48c3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 427089920 unmapped: 60260352 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:40.586738+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 387 heartbeat osd_stat(store_statfs(0x1a3867000/0x0/0x1bfc00000, data 0x46c3c83/0x48c6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 427089920 unmapped: 60260352 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:41.586917+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5c96000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 427089920 unmapped: 60260352 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4707147 data_alloc: 234881024 data_used: 30355456
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3fc00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 387 ms_handle_reset con 0x55bcd6f3fc00 session 0x55bcd4f97c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:42.587036+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b1000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 387 ms_handle_reset con 0x55bcd76b1000 session 0x55bcd74af680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 387 ms_handle_reset con 0x55bcd622c000 session 0x55bcd82ae960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3fc00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 387 ms_handle_reset con 0x55bcd6f3fc00 session 0x55bcd7c9c000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 387 handle_osd_map epochs [388,388], i have 387, src has [1,388]
Oct 02 13:20:52 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46243 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81e9c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 388 ms_handle_reset con 0x55bcd81e9c00 session 0x55bcd820ad20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd89f0c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 388 ms_handle_reset con 0x55bcd89f0c00 session 0x55bcd7c9cf00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd8006400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 388 ms_handle_reset con 0x55bcd8006400 session 0x55bcd79d63c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416079872 unmapped: 71270400 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 388 ms_handle_reset con 0x55bcd622c000 session 0x55bcd7210d20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 388 ms_handle_reset con 0x55bcd5c96000 session 0x55bcd7595860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3fc00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 388 ms_handle_reset con 0x55bcd6f3fc00 session 0x55bcd7486b40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:43.587169+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 388 heartbeat osd_stat(store_statfs(0x1a3d9e000/0x0/0x1bfc00000, data 0x418d930/0x4390000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [0,1])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 388 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd7fd65a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 388 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd826bc20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415891456 unmapped: 71458816 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:44.587367+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 388 handle_osd_map epochs [389,389], i have 388, src has [1,389]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 389 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd7805e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 409632768 unmapped: 77717504 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:45.587594+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 409632768 unmapped: 77717504 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:46.587795+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 409632768 unmapped: 77717504 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:47.587887+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4291115 data_alloc: 218103808 data_used: 12693504
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 389 heartbeat osd_stat(store_statfs(0x1a5428000/0x0/0x1bfc00000, data 0x268e47b/0x2891000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 409632768 unmapped: 77717504 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:48.588028+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 389 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd7bf5a40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 389 handle_osd_map epochs [389,390], i have 389, src has [1,390]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5c96000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.437670708s of 10.013500214s, submitted: 163
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 389 ms_handle_reset con 0x55bcd5c96000 session 0x55bcd72194a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd622c000 session 0x55bcd82ae5a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 409550848 unmapped: 77799424 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:49.588159+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3fc00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd6f3fc00 session 0x55bcd55874a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a5428000/0x0/0x1bfc00000, data 0x268e47b/0x2891000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd61bc000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd820ab40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5c96000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd5c96000 session 0x55bcd7fcf0e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81e9c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a5428000/0x0/0x1bfc00000, data 0x268e47b/0x2891000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd622c000 session 0x55bcd5336000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd89f0c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd81e9c00 session 0x55bcd6813c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd89f0c00 session 0x55bcd749cb40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 410836992 unmapped: 76513280 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:50.588286+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4ced000/0x0/0x1bfc00000, data 0x323bfca/0x3441000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81e9c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 410836992 unmapped: 76513280 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:51.588410+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 410836992 unmapped: 76513280 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:52.588634+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4387815 data_alloc: 218103808 data_used: 12693504
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4ced000/0x0/0x1bfc00000, data 0x323bfca/0x3441000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 410427392 unmapped: 76922880 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:53.588875+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 410427392 unmapped: 76922880 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:54.589405+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 410427392 unmapped: 76922880 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:55.589776+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:56.590066+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 410394624 unmapped: 76955648 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd749c000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:57.590326+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 410394624 unmapped: 76955648 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4ced000/0x0/0x1bfc00000, data 0x323bfca/0x3441000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4418903 data_alloc: 218103808 data_used: 17170432
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5c96000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd5c96000 session 0x55bcd8238f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd622c000 session 0x55bcd826ad20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdada5000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:58.590599+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 410394624 unmapped: 76955648 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:54:59.591152+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 410394624 unmapped: 76955648 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcdada5000 session 0x55bcd53374a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:00.591298+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 410394624 unmapped: 76955648 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.455079079s of 11.771809578s, submitted: 41
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5c96000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:01.591426+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 410394624 unmapped: 76955648 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4ceb000/0x0/0x1bfc00000, data 0x323cfda/0x3443000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:02.591686+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 410394624 unmapped: 76955648 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4420761 data_alloc: 218103808 data_used: 17178624
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4ceb000/0x0/0x1bfc00000, data 0x323cfda/0x3443000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:03.591861+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 409731072 unmapped: 77619200 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:04.592030+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413753344 unmapped: 73596928 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:05.592215+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413720576 unmapped: 73629696 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:06.592383+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 414916608 unmapped: 72433664 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:07.592649+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 414916608 unmapped: 72433664 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4563755 data_alloc: 234881024 data_used: 29335552
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:08.592907+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 414916608 unmapped: 72433664 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4575000/0x0/0x1bfc00000, data 0x39aafda/0x3bb1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:09.593038+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 414916608 unmapped: 72433664 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:10.593311+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 414916608 unmapped: 72433664 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:11.593429+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a457b000/0x0/0x1bfc00000, data 0x39acfda/0x3bb3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 414916608 unmapped: 72433664 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:12.593680+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 414916608 unmapped: 72433664 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a457b000/0x0/0x1bfc00000, data 0x39acfda/0x3bb3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4558119 data_alloc: 234881024 data_used: 29339648
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:13.593918+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 414916608 unmapped: 72433664 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.014983177s of 12.469816208s, submitted: 105
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd81e9c00 session 0x55bcd81c1860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd72134a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:14.594113+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 419528704 unmapped: 67821568 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd622c000 session 0x55bcd74a4960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:15.594325+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415391744 unmapped: 71958528 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:16.594499+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417505280 unmapped: 69844992 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:17.594757+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417505280 unmapped: 69844992 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4527981 data_alloc: 234881024 data_used: 23924736
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4052000/0x0/0x1bfc00000, data 0x38baf78/0x3ac0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:18.594969+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417505280 unmapped: 69844992 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:19.595220+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417505280 unmapped: 69844992 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:20.595447+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417505280 unmapped: 69844992 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:21.595612+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417505280 unmapped: 69844992 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4052000/0x0/0x1bfc00000, data 0x38baf78/0x3ac0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:22.595794+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417628160 unmapped: 69722112 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4530017 data_alloc: 234881024 data_used: 23924736
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:23.596064+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417153024 unmapped: 70197248 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:24.596378+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417153024 unmapped: 70197248 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:25.596562+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a464d000/0x0/0x1bfc00000, data 0x38dbf78/0x3ae1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417153024 unmapped: 70197248 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:26.596679+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417153024 unmapped: 70197248 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.337785721s of 13.240474701s, submitted: 147
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd76b0400 session 0x55bcd75943c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd7e53000 session 0x55bcd820e1e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:27.596820+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417169408 unmapped: 70180864 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4521985 data_alloc: 234881024 data_used: 23912448
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:28.596997+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417169408 unmapped: 70180864 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:29.597176+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417030144 unmapped: 70320128 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:30.597353+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd749dc20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416669696 unmapped: 70680576 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a521e000/0x0/0x1bfc00000, data 0x2d0bf68/0x2f10000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:31.597470+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416669696 unmapped: 70680576 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:32.597660+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416669696 unmapped: 70680576 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4386680 data_alloc: 234881024 data_used: 18812928
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:33.597882+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416669696 unmapped: 70680576 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd622c000 session 0x55bcd7218960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b0400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd76b0400 session 0x55bcd82af4a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7e53000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd7e53000 session 0x55bcd7fce1e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81e9c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd81e9c00 session 0x55bcd72123c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:34.598058+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416677888 unmapped: 70672384 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a5243000/0x0/0x1bfc00000, data 0x2ce7f58/0x2eeb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd820a780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd622c000 session 0x55bcd7804f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b0400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd76b0400 session 0x55bcd826ab40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7e53000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:35.598196+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd7e53000 session 0x55bcd5337680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd89f0c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd89f0c00 session 0x55bcd7fcfc20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417357824 unmapped: 69992448 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:36.598325+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417357824 unmapped: 69992448 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:37.598520+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417357824 unmapped: 69992448 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4457648 data_alloc: 234881024 data_used: 18817024
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:38.598683+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417357824 unmapped: 69992448 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4aeb000/0x0/0x1bfc00000, data 0x343ef67/0x3643000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:39.598896+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417357824 unmapped: 69992448 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:40.599016+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417357824 unmapped: 69992448 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:41.599148+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417357824 unmapped: 69992448 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:42.599297+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4aeb000/0x0/0x1bfc00000, data 0x343ef67/0x3643000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417357824 unmapped: 69992448 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4457648 data_alloc: 234881024 data_used: 18817024
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:43.599509+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417357824 unmapped: 69992448 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:44.599616+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417357824 unmapped: 69992448 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:45.599768+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417357824 unmapped: 69992448 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4aeb000/0x0/0x1bfc00000, data 0x343ef67/0x3643000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:46.599976+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417357824 unmapped: 69992448 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.810281754s of 20.044521332s, submitted: 76
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:47.600119+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417218560 unmapped: 70131712 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4456592 data_alloc: 234881024 data_used: 18817024
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:48.600253+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417218560 unmapped: 70131712 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:49.600413+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417218560 unmapped: 70131712 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd7eb41e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:50.603427+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417218560 unmapped: 70131712 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4aeb000/0x0/0x1bfc00000, data 0x343ef67/0x3643000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b0400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:51.603721+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417226752 unmapped: 70123520 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:52.603857+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417374208 unmapped: 69976064 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4494192 data_alloc: 234881024 data_used: 23973888
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:53.603997+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417472512 unmapped: 69877760 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:54.604129+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417472512 unmapped: 69877760 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:55.604338+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417472512 unmapped: 69877760 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4adb000/0x0/0x1bfc00000, data 0x344ef67/0x3653000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:56.604540+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417472512 unmapped: 69877760 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:57.604905+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417472512 unmapped: 69877760 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4501808 data_alloc: 234881024 data_used: 24993792
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:58.605211+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417472512 unmapped: 69877760 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:55:59.605367+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417472512 unmapped: 69877760 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd622c000 session 0x55bcd5cd4780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd76b0400 session 0x55bcd5528780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:00.605492+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7e53000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417472512 unmapped: 69877760 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.836582184s of 13.856300354s, submitted: 6
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4adb000/0x0/0x1bfc00000, data 0x344ef67/0x3653000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:01.605790+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd7e53000 session 0x55bcd7fcfc20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 414801920 unmapped: 72548352 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4adb000/0x0/0x1bfc00000, data 0x344ef67/0x3653000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:02.605957+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 414801920 unmapped: 72548352 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4395063 data_alloc: 234881024 data_used: 18821120
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a5230000/0x0/0x1bfc00000, data 0x2cfaf58/0x2efe000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:03.606124+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 414736384 unmapped: 72613888 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:04.606251+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 414736384 unmapped: 72613888 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:05.606368+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 414736384 unmapped: 72613888 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ea000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd81ea000 session 0x55bcd7218960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd820e1e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:06.606493+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd622c000 session 0x55bcd75943c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b0400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd76b0400 session 0x55bcd74a4960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 414687232 unmapped: 72663040 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7e53000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a5230000/0x0/0x1bfc00000, data 0x2cfaf58/0x2efe000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [0,0,0,0,0,1,0,8])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd7e53000 session 0x55bcd72134a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdfcf7c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcdfcf7c00 session 0x55bcd8238f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd749c000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd622c000 session 0x55bcd749cb40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:07.612060+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b0400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd76b0400 session 0x55bcd5336000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 72204288 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4425773 data_alloc: 234881024 data_used: 18821120
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:08.612207+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 72204288 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:09.612301+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 72204288 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:10.612458+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 72204288 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4f2e000/0x0/0x1bfc00000, data 0x2ffcf58/0x3200000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:11.612569+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 72204288 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:12.612704+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 72204288 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4426265 data_alloc: 234881024 data_used: 18821120
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:13.612920+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 72204288 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4f2e000/0x0/0x1bfc00000, data 0x2ffcf58/0x3200000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:14.613054+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4f2e000/0x0/0x1bfc00000, data 0x2ffcf58/0x3200000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 72204288 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:15.613200+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 72204288 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:16.613369+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 72204288 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7e53000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd7e53000 session 0x55bcd820ab40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:17.613494+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.675496101s of 16.864711761s, submitted: 27
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415186944 unmapped: 72163328 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4428377 data_alloc: 234881024 data_used: 18804736
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76ac800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd76ac800 session 0x55bcd55874a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:18.613616+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415186944 unmapped: 72163328 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:19.613768+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415186944 unmapped: 72163328 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4f2e000/0x0/0x1bfc00000, data 0x2ffcf58/0x3200000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4f2e000/0x0/0x1bfc00000, data 0x2ffcf58/0x3200000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:20.613926+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76ac800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd76ac800 session 0x55bcd82ae5a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415186944 unmapped: 72163328 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd72194a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:21.614096+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b0400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415154176 unmapped: 72196096 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4f2c000/0x0/0x1bfc00000, data 0x2ffcf8b/0x3202000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:22.614286+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4f2c000/0x0/0x1bfc00000, data 0x2ffcf8b/0x3202000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415162368 unmapped: 72187904 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4431561 data_alloc: 234881024 data_used: 18804736
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:23.614448+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4f2c000/0x0/0x1bfc00000, data 0x2ffcf8b/0x3202000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415162368 unmapped: 72187904 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:24.614652+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415162368 unmapped: 72187904 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:25.614785+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415162368 unmapped: 72187904 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:26.614962+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415162368 unmapped: 72187904 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:27.615155+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4f2c000/0x0/0x1bfc00000, data 0x2ffcf8b/0x3202000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415162368 unmapped: 72187904 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4431561 data_alloc: 234881024 data_used: 18804736
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4f2c000/0x0/0x1bfc00000, data 0x2ffcf8b/0x3202000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:28.615305+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415121408 unmapped: 72228864 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:29.615673+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415129600 unmapped: 72220672 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:30.615951+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415129600 unmapped: 72220672 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4f2c000/0x0/0x1bfc00000, data 0x2ffcf8b/0x3202000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:31.616158+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415129600 unmapped: 72220672 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:32.616252+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415129600 unmapped: 72220672 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4453481 data_alloc: 234881024 data_used: 21770240
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:33.616465+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415129600 unmapped: 72220672 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:34.616625+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415129600 unmapped: 72220672 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:35.616889+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415129600 unmapped: 72220672 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4f2c000/0x0/0x1bfc00000, data 0x2ffcf8b/0x3202000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:36.617013+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415129600 unmapped: 72220672 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.339656830s of 19.437833786s, submitted: 26
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:37.617176+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415129600 unmapped: 72220672 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4453493 data_alloc: 234881024 data_used: 21774336
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:38.617320+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4f25000/0x0/0x1bfc00000, data 0x3003f8b/0x3209000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415129600 unmapped: 72220672 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:39.617451+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417325056 unmapped: 70025216 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:40.617657+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417431552 unmapped: 69918720 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:41.617799+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416440320 unmapped: 70909952 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4650000/0x0/0x1bfc00000, data 0x38d8f8b/0x3ade000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:42.618013+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416440320 unmapped: 70909952 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4533669 data_alloc: 234881024 data_used: 23642112
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:43.618194+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416440320 unmapped: 70909952 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a464e000/0x0/0x1bfc00000, data 0x38daf8b/0x3ae0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a464b000/0x0/0x1bfc00000, data 0x38ddf8b/0x3ae3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:44.618341+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416440320 unmapped: 70909952 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a464b000/0x0/0x1bfc00000, data 0x38ddf8b/0x3ae3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:45.618487+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416440320 unmapped: 70909952 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:46.618593+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416440320 unmapped: 70909952 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:47.618984+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a464b000/0x0/0x1bfc00000, data 0x38ddf8b/0x3ae3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416440320 unmapped: 70909952 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4537009 data_alloc: 234881024 data_used: 23875584
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.067267418s of 11.337072372s, submitted: 96
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:48.619169+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7e53000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd7e53000 session 0x55bcd7c9de00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3e800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd6f3e800 session 0x55bcd7575e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b2000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a464b000/0x0/0x1bfc00000, data 0x38ddf8b/0x3ae3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416440320 unmapped: 70909952 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd76b2000 session 0x55bcd5337680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b2000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd76b2000 session 0x55bcd749d680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd79d61e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3e800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd6f3e800 session 0x55bcd7253680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76ac800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd76ac800 session 0x55bcd7fce1e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:49.619285+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7e53000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd7e53000 session 0x55bcd820a780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7e53000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd7e53000 session 0x55bcd82aef00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416448512 unmapped: 70901760 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:50.619450+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416448512 unmapped: 70901760 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:51.619569+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416448512 unmapped: 70901760 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:52.619709+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416448512 unmapped: 70901760 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4582368 data_alloc: 234881024 data_used: 23875584
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:53.619914+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416448512 unmapped: 70901760 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a416f000/0x0/0x1bfc00000, data 0x3db7ffd/0x3fbf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:54.620054+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a416f000/0x0/0x1bfc00000, data 0x3db7ffd/0x3fbf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416382976 unmapped: 70967296 heap: 487350272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd7eb52c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3e800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd6f3e800 session 0x55bcd820e960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76ac800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd76ac800 session 0x55bcd81c0d20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b2000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd76b2000 session 0x55bcd7bf5c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b2000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd76b2000 session 0x55bcd82385a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:55.620247+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416399360 unmapped: 74629120 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3e800
Oct 02 13:20:52 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd6f3e800 session 0x55bcd78050e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd7bf4b40
Oct 02 13:20:52 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1716509811' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76ac800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd76ac800 session 0x55bcd75b8780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7e53000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd7e53000 session 0x55bcd7212f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7e53000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd7e53000 session 0x55bcd82390e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd74a45a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:56.620484+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3e800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd6f3e800 session 0x55bcd72525a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416636928 unmapped: 74391552 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:57.620675+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76ac800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd76ac800 session 0x55bcd4f974a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416636928 unmapped: 74391552 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b2000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4701460 data_alloc: 234881024 data_used: 23875584
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:58.620947+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.898313522s of 10.251000404s, submitted: 82
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd76b2000 session 0x55bcd7fd63c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a321e000/0x0/0x1bfc00000, data 0x4d0805f/0x4f10000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416636928 unmapped: 74391552 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3e800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:56:59.621078+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76ac800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416645120 unmapped: 74383360 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd76ac800 session 0x55bcd6ffe780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7e53000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd7e53000 session 0x55bcd7fd61e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bce00e2800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bce00e2800 session 0x55bcd81c1c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5dde800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:00.621201+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd5dde800 session 0x55bcd7212960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ed800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd81ed800 session 0x55bcd7804960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418119680 unmapped: 72908800 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:01.621325+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418127872 unmapped: 72900608 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a2ccb000/0x0/0x1bfc00000, data 0x525a0c1/0x5463000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:02.621707+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5dde800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418127872 unmapped: 72900608 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4785661 data_alloc: 234881024 data_used: 28721152
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd5dde800 session 0x55bcd75b9860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:03.621916+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76ac800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418127872 unmapped: 72900608 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7e53000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a2ca4000/0x0/0x1bfc00000, data 0x527e0c1/0x5487000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:04.622075+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bce00e2800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418127872 unmapped: 72900608 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a2ca4000/0x0/0x1bfc00000, data 0x527e0c1/0x5487000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bce00e2800 session 0x55bcd7213e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:05.622224+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ecc00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b3800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418103296 unmapped: 72925184 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:06.622354+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418332672 unmapped: 72695808 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7124c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd7124c00 session 0x55bcd7fd65a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:07.622504+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422199296 unmapped: 68829184 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4905187 data_alloc: 251658240 data_used: 43622400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd8006000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd8006000 session 0x55bcd5534d20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:08.622645+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b0c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd76b0c00 session 0x55bcd5cd4f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5dde800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422199296 unmapped: 68829184 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.318293571s of 10.503440857s, submitted: 56
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd5dde800 session 0x55bcd7252f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:09.622808+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a2ca3000/0x0/0x1bfc00000, data 0x52810e4/0x548b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422346752 unmapped: 68681728 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7124c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd8006000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:10.622913+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422346752 unmapped: 68681728 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:11.623036+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 425959424 unmapped: 65069056 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:12.623158+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 425992192 unmapped: 65036288 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4969798 data_alloc: 251658240 data_used: 49307648
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:13.623301+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426909696 unmapped: 64118784 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:14.623437+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd7124c00 session 0x55bcd75b8d20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd8006000 session 0x55bcd53365a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426909696 unmapped: 64118784 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bce00e2800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a2a3c000/0x0/0x1bfc00000, data 0x54d9107/0x56e4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bce00e2800 session 0x55bcd820e960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:15.623555+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426917888 unmapped: 64110592 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:16.623678+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 428982272 unmapped: 62046208 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:17.623821+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a160b000/0x0/0x1bfc00000, data 0x577a082/0x5983000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429629440 unmapped: 61399040 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4971314 data_alloc: 251658240 data_used: 44376064
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd76ac800 session 0x55bcd7219680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd7e53000 session 0x55bcd7486960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:18.623970+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76ac800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429809664 unmapped: 61218816 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.964620590s of 10.055987358s, submitted: 277
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd76ac800 session 0x55bcd5528780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:19.624141+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431169536 unmapped: 59858944 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a175e000/0x0/0x1bfc00000, data 0x51f3020/0x53fb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:20.624274+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431169536 unmapped: 59858944 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a1723000/0x0/0x1bfc00000, data 0x522e020/0x5436000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:21.624453+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431177728 unmapped: 59850752 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:22.624625+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431177728 unmapped: 59850752 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4852882 data_alloc: 234881024 data_used: 35057664
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a1723000/0x0/0x1bfc00000, data 0x522e020/0x5436000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd72154a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd6f3e800 session 0x55bcd7805860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:23.624920+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5dde800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a1723000/0x0/0x1bfc00000, data 0x522e020/0x5436000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429637632 unmapped: 61390848 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd5dde800 session 0x55bcd79d74a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:24.625078+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429645824 unmapped: 61382656 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:25.625210+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429645824 unmapped: 61382656 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:26.625358+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a18e0000/0x0/0x1bfc00000, data 0x4ae9fae/0x4cf0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429645824 unmapped: 61382656 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd7bf43c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:27.625527+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429654016 unmapped: 61374464 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4751588 data_alloc: 234881024 data_used: 30023680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:28.625777+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429654016 unmapped: 61374464 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a1e69000/0x0/0x1bfc00000, data 0x4b0efae/0x4d15000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:29.625962+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.494888306s of 10.727897644s, submitted: 63
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429662208 unmapped: 61366272 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:30.626160+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429662208 unmapped: 61366272 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:31.626312+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a1e63000/0x0/0x1bfc00000, data 0x4b14fae/0x4d1b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429662208 unmapped: 61366272 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:32.626456+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd81ecc00 session 0x55bcd79d7860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429662208 unmapped: 61366272 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4751024 data_alloc: 234881024 data_used: 30023680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd76b3800 session 0x55bcd5cd45a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:33.626619+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a1e50000/0x0/0x1bfc00000, data 0x4b24fae/0x4d2b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429662208 unmapped: 61366272 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:34.626893+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429662208 unmapped: 61366272 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:35.627047+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429662208 unmapped: 61366272 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:36.627467+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429662208 unmapped: 61366272 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:37.627658+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429662208 unmapped: 61366272 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4751388 data_alloc: 234881024 data_used: 30023680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:38.627928+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a1e50000/0x0/0x1bfc00000, data 0x4b27fae/0x4d2e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429662208 unmapped: 61366272 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a1e50000/0x0/0x1bfc00000, data 0x4b27fae/0x4d2e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:39.628061+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3e800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd6f3e800 session 0x55bcd7252780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a1e50000/0x0/0x1bfc00000, data 0x4b27fae/0x4d2e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd622c000 session 0x55bcd5535e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd76b0400 session 0x55bcd7fd6f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429678592 unmapped: 61349888 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.161248207s of 10.306302071s, submitted: 19
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:40.628461+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd5cd52c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3e800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b3800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 425222144 unmapped: 65806336 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:41.629318+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 425222144 unmapped: 65806336 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:42.629758+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 425222144 unmapped: 65806336 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4623777 data_alloc: 234881024 data_used: 25055232
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:43.630099+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a2a25000/0x0/0x1bfc00000, data 0x3f54f7b/0x4159000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 425222144 unmapped: 65806336 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:44.630266+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 425222144 unmapped: 65806336 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:45.630611+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 425222144 unmapped: 65806336 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:46.630820+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 425222144 unmapped: 65806336 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:47.631213+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 425222144 unmapped: 65806336 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4623601 data_alloc: 234881024 data_used: 25055232
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:48.631414+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a2a25000/0x0/0x1bfc00000, data 0x3f54f7b/0x4159000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 425222144 unmapped: 65806336 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:49.631664+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a2a25000/0x0/0x1bfc00000, data 0x3f54f7b/0x4159000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 425222144 unmapped: 65806336 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:50.631818+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 425222144 unmapped: 65806336 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:51.632132+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 425222144 unmapped: 65806336 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd7214f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd5c96000 session 0x55bcd8238d20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:52.632272+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ecc00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.250152588s of 12.415252686s, submitted: 46
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 419987456 unmapped: 71041024 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373613 data_alloc: 218103808 data_used: 13828096
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:53.632471+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 419987456 unmapped: 71041024 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:54.632619+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd81ecc00 session 0x55bcd6ffe5a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 419987456 unmapped: 71041024 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:55.632888+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4125000/0x0/0x1bfc00000, data 0x2856f5b/0x2a59000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 419987456 unmapped: 71041024 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:56.633031+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 419987456 unmapped: 71041024 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:57.633168+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 419987456 unmapped: 71041024 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4385044 data_alloc: 218103808 data_used: 15253504
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:58.633310+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 419987456 unmapped: 71041024 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:57:59.633468+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 419987456 unmapped: 71041024 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:00.633662+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4120000/0x0/0x1bfc00000, data 0x285bf5b/0x2a5e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 420110336 unmapped: 70918144 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:01.633820+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 420110336 unmapped: 70918144 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:02.634042+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 420110336 unmapped: 70918144 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4387930 data_alloc: 218103808 data_used: 15249408
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:03.634311+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4120000/0x0/0x1bfc00000, data 0x285bf5b/0x2a5e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 420110336 unmapped: 70918144 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:04.634538+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 420110336 unmapped: 70918144 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:05.634733+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 420110336 unmapped: 70918144 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.862198830s of 13.714788437s, submitted: 36
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:06.634882+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a4120000/0x0/0x1bfc00000, data 0x285bf5b/0x2a5e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422207488 unmapped: 68820992 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:07.635023+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a2f7f000/0x0/0x1bfc00000, data 0x285cf5b/0x2a5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422207488 unmapped: 68820992 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4386950 data_alloc: 218103808 data_used: 15249408
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:08.635142+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422207488 unmapped: 68820992 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:09.635360+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422215680 unmapped: 68812800 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:10.635497+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a2f7e000/0x0/0x1bfc00000, data 0x285cf5b/0x2a5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422215680 unmapped: 68812800 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:11.635645+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422223872 unmapped: 68804608 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:12.635890+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422223872 unmapped: 68804608 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4386790 data_alloc: 218103808 data_used: 15245312
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:13.636089+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422223872 unmapped: 68804608 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:14.636251+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422150144 unmapped: 68878336 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:15.636404+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422150144 unmapped: 68878336 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:16.636565+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a2f7f000/0x0/0x1bfc00000, data 0x285cf5b/0x2a5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.484205246s of 10.783671379s, submitted: 9
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422150144 unmapped: 68878336 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:17.636735+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a2f7f000/0x0/0x1bfc00000, data 0x285cf5b/0x2a5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422158336 unmapped: 68870144 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4386598 data_alloc: 218103808 data_used: 15249408
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:18.636885+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422158336 unmapped: 68870144 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:19.637058+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a2f7f000/0x0/0x1bfc00000, data 0x285cf5b/0x2a5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422158336 unmapped: 68870144 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:20.637212+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd5529c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd7bf4b40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a2f7d000/0x0/0x1bfc00000, data 0x285cf5b/0x2a5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422158336 unmapped: 68870144 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:21.637370+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422158336 unmapped: 68870144 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:22.637521+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422158336 unmapped: 68870144 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4387706 data_alloc: 218103808 data_used: 15269888
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd6f3e800 session 0x55bcd5534d20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd76b3800 session 0x55bcd7213e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5c96000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:23.637710+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd5c96000 session 0x55bcd5cd5a40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422158336 unmapped: 68870144 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:24.637896+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422158336 unmapped: 68870144 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:25.638026+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a2f7e000/0x0/0x1bfc00000, data 0x285cfbd/0x2a60000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422182912 unmapped: 68845568 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd72134a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:26.638227+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd7487860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3e800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.568503380s of 10.046241760s, submitted: 40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd6f3e800 session 0x55bcd7eb4000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422141952 unmapped: 68886528 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:27.638340+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422141952 unmapped: 68886528 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4452886 data_alloc: 218103808 data_used: 15269888
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:28.638519+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422141952 unmapped: 68886528 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:29.638644+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b3800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd76b3800 session 0x55bcd4f972c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422141952 unmapped: 68886528 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:30.638773+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b0400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76ac800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7e53000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7124c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422141952 unmapped: 68886528 heap: 491028480 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:31.638934+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a2754000/0x0/0x1bfc00000, data 0x3087f5b/0x328a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd8006000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd8006000 session 0x55bcd81c1680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd78041e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 390 handle_osd_map epochs [391,391], i have 390, src has [1,391]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 391 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd74a41e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3e800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 391 ms_handle_reset con 0x55bcd6f3e800 session 0x55bcd7805c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b3800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 391 ms_handle_reset con 0x55bcd76b3800 session 0x55bcd61961e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bce00e2800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 391 ms_handle_reset con 0x55bce00e2800 session 0x55bcd749dc20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422387712 unmapped: 72843264 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 391 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd7eb50e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:32.639061+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 391 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd75943c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3e800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 391 ms_handle_reset con 0x55bcd6f3e800 session 0x55bcd82ae1e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 391 handle_osd_map epochs [391,392], i have 391, src has [1,392]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 392 ms_handle_reset con 0x55bcd7124c00 session 0x55bcd8239e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 422387712 unmapped: 72843264 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4539311 data_alloc: 218103808 data_used: 15290368
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:33.639231+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 392 handle_osd_map epochs [392,393], i have 392, src has [1,393]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 393 heartbeat osd_stat(store_statfs(0x1a1ed5000/0x0/0x1bfc00000, data 0x38ff8e6/0x3b07000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b3800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd711f000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 393 ms_handle_reset con 0x55bcd76b3800 session 0x55bcd7c9cb40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421339136 unmapped: 73891840 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:34.639361+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 393 handle_osd_map epochs [393,394], i have 393, src has [1,394]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 394 ms_handle_reset con 0x55bcd711f000 session 0x55bcd72194a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 394 heartbeat osd_stat(store_statfs(0x1a1ed1000/0x0/0x1bfc00000, data 0x390155b/0x3b0a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b3800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 394 ms_handle_reset con 0x55bcd76b3800 session 0x55bcd820e960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 394 ms_handle_reset con 0x55bcd7e53000 session 0x55bcd7fd7c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421175296 unmapped: 74055680 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:35.639488+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 394 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd5211a40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421183488 unmapped: 74047488 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:36.639637+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421183488 unmapped: 74047488 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:37.639782+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3e800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 394 ms_handle_reset con 0x55bcd6f3e800 session 0x55bcd74a70e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd711f000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.850226402s of 10.691410065s, submitted: 77
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 394 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd74a7a40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 420749312 unmapped: 74481664 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4662559 data_alloc: 218103808 data_used: 15290368
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:38.640620+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 394 ms_handle_reset con 0x55bcd81f4c00 session 0x55bcd7804f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdb12f800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 420765696 unmapped: 74465280 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 394 handle_osd_map epochs [394,395], i have 394, src has [1,395]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ec400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:39.641493+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 395 ms_handle_reset con 0x55bcd711f000 session 0x55bcd7216f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fe800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd81fe800 session 0x55bcd79d74a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fdc00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 heartbeat osd_stat(store_statfs(0x1a1136000/0x0/0x1bfc00000, data 0x469c825/0x48a8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 420773888 unmapped: 74457088 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:40.641990+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd81fdc00 session 0x55bcd7212f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd7c9c960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 420610048 unmapped: 74620928 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:41.642106+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 420495360 unmapped: 74735616 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:42.642270+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 420503552 unmapped: 74727424 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4730268 data_alloc: 234881024 data_used: 22519808
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:43.642438+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 420503552 unmapped: 74727424 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:44.642921+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 heartbeat osd_stat(store_statfs(0x1a112f000/0x0/0x1bfc00000, data 0x46a003b/0x48af000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 420503552 unmapped: 74727424 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:45.643423+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 420503552 unmapped: 74727424 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:46.643630+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 420503552 unmapped: 74727424 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:47.643779+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcdb12f800 session 0x55bcd7487a40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd81ec400 session 0x55bcd72101e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.636717796s of 10.374717712s, submitted: 40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:48.644131+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 heartbeat osd_stat(store_statfs(0x1a112f000/0x0/0x1bfc00000, data 0x46a003b/0x48af000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,6])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416694272 unmapped: 78536704 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4609890 data_alloc: 218103808 data_used: 14983168
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:49.644342+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416694272 unmapped: 78536704 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd7210780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:50.644509+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416694272 unmapped: 78536704 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:51.644728+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416694272 unmapped: 78536704 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd76b0400 session 0x55bcd8238960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd76ac800 session 0x55bcd82394a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:52.644903+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416694272 unmapped: 78536704 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b0400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:53.645045+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416702464 unmapped: 78528512 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4607342 data_alloc: 218103808 data_used: 14876672
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 heartbeat osd_stat(store_statfs(0x1a19a4000/0x0/0x1bfc00000, data 0x3e2c02b/0x403a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [0,0,0,0,0,1])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:54.645184+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413786112 unmapped: 81444864 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd76b0400 session 0x55bcd81c0d20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:55.645415+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413786112 unmapped: 81444864 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:56.645573+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413786112 unmapped: 81444864 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:57.645889+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413786112 unmapped: 81444864 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 heartbeat osd_stat(store_statfs(0x1a2bcd000/0x0/0x1bfc00000, data 0x2c03008/0x2e10000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd7213e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:58.646040+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413786112 unmapped: 81444864 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4412114 data_alloc: 218103808 data_used: 7598080
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd5534d20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ec400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd81ec400 session 0x55bcd7bf4b40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:58:59.646222+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ec400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.829627037s of 11.339650154s, submitted: 90
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413786112 unmapped: 81444864 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd81ec400 session 0x55bcd6ffe5a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:00.646355+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413786112 unmapped: 81444864 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:01.646516+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413786112 unmapped: 81444864 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:02.646624+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413704192 unmapped: 81526784 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 heartbeat osd_stat(store_statfs(0x1a2ba9000/0x0/0x1bfc00000, data 0x2c27018/0x2e35000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:03.646724+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413614080 unmapped: 81616896 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4481332 data_alloc: 218103808 data_used: 16601088
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 heartbeat osd_stat(store_statfs(0x1a2ba9000/0x0/0x1bfc00000, data 0x2c27018/0x2e35000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:04.646875+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413614080 unmapped: 81616896 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:05.647045+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413614080 unmapped: 81616896 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76ac800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd76ac800 session 0x55bcd7212780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b0400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd76b0400 session 0x55bcd5336960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdb12f800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcdb12f800 session 0x55bcd75b9e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd711f000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd711f000 session 0x55bcd7804000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76ac800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:06.647200+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417808384 unmapped: 77422592 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd76ac800 session 0x55bcd68910e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b0400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd76b0400 session 0x55bcd6197e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ec400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd81ec400 session 0x55bcd5cd43c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdb12f800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcdb12f800 session 0x55bcd7805e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd81f4c00 session 0x55bcd74a6b40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:07.647424+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413614080 unmapped: 81616896 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 heartbeat osd_stat(store_statfs(0x1a2725000/0x0/0x1bfc00000, data 0x30a908a/0x32b9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:08.647591+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413614080 unmapped: 81616896 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4521552 data_alloc: 218103808 data_used: 16601088
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:09.647757+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413614080 unmapped: 81616896 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 heartbeat osd_stat(store_statfs(0x1a2725000/0x0/0x1bfc00000, data 0x30a908a/0x32b9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:10.647906+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413614080 unmapped: 81616896 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:11.648034+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413614080 unmapped: 81616896 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76ac800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd76ac800 session 0x55bcd5528780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:12.648179+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b0400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd76b0400 session 0x55bcd75b9c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413614080 unmapped: 81616896 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ec400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.866413116s of 13.300230980s, submitted: 36
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd81ec400 session 0x55bcd79d61e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdb12f800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcdb12f800 session 0x55bcd7c9de00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fe800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd81fe800 session 0x55bcd7eb52c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:13.648657+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76ac800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd76ac800 session 0x55bcd7212960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 heartbeat osd_stat(store_statfs(0x1a2486000/0x0/0x1bfc00000, data 0x349e0b3/0x3558000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b0400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd76b0400 session 0x55bcd6813c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417914880 unmapped: 77316096 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4701009 data_alloc: 218103808 data_used: 16818176
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:14.649043+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ec400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd81ec400 session 0x55bcd75b90e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417914880 unmapped: 77316096 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fe800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd81fe800 session 0x55bcd7c9c000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:15.649338+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417415168 unmapped: 77815808 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdb12f800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bceb79d400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:16.649538+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 heartbeat osd_stat(store_statfs(0x1a169d000/0x0/0x1bfc00000, data 0x46090fb/0x433c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417415168 unmapped: 77815808 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:17.649875+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417431552 unmapped: 77799424 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:18.650112+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcdb12f800 session 0x55bcd7575e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bceb79d400 session 0x55bcd7595860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 417431552 unmapped: 77799424 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4744363 data_alloc: 234881024 data_used: 21700608
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76ac800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:19.650281+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd76ac800 session 0x55bcd7804b40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416153600 unmapped: 79077376 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:20.650471+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416153600 unmapped: 79077376 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 heartbeat osd_stat(store_statfs(0x1a1b25000/0x0/0x1bfc00000, data 0x418707a/0x3eb7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:21.650659+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416153600 unmapped: 79077376 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b0400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd76b0400 session 0x55bcd5338b40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd7214f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd7fd6f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:22.650792+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76ac800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd6813c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416153600 unmapped: 79077376 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:23.651106+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416153600 unmapped: 79077376 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4665377 data_alloc: 218103808 data_used: 16859136
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:24.651296+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416153600 unmapped: 79077376 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:25.651439+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 416980992 unmapped: 78249984 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b0400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.442849159s of 13.401086807s, submitted: 155
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bceb79d400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ec400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 396 handle_osd_map epochs [396,397], i have 396, src has [1,397]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:26.651562+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 397 ms_handle_reset con 0x55bcd76b0400 session 0x55bcd7594f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fe800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 415793152 unmapped: 79437824 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 397 ms_handle_reset con 0x55bcd81fe800 session 0x55bcd7211c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 397 ms_handle_reset con 0x55bceb79d400 session 0x55bcd5534780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdb12f800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 397 ms_handle_reset con 0x55bcdb12f800 session 0x55bcd8239860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 397 heartbeat osd_stat(store_statfs(0x1a31fc000/0x0/0x1bfc00000, data 0x24fede9/0x270b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 397 handle_osd_map epochs [397,398], i have 397, src has [1,398]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:27.651714+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 398 ms_handle_reset con 0x55bcd81ec400 session 0x55bcd7bf54a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdb12f800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 412483584 unmapped: 82747392 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 398 ms_handle_reset con 0x55bcdb12f800 session 0x55bcd74a41e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:28.651877+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 412491776 unmapped: 82739200 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4369073 data_alloc: 218103808 data_used: 14483456
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:29.652123+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 412491776 unmapped: 82739200 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:30.652349+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 412491776 unmapped: 82739200 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 398 heartbeat osd_stat(store_statfs(0x1a3af9000/0x0/0x1bfc00000, data 0x1cd546d/0x1ee2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:31.652531+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 412491776 unmapped: 82739200 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:32.652693+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 412491776 unmapped: 82739200 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 398 heartbeat osd_stat(store_statfs(0x1a3af9000/0x0/0x1bfc00000, data 0x1cd546d/0x1ee2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:33.652908+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 412491776 unmapped: 82739200 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4369073 data_alloc: 218103808 data_used: 14483456
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:34.653062+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 412491776 unmapped: 82739200 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:35.653199+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 412499968 unmapped: 82731008 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.336834908s of 10.005132675s, submitted: 136
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:36.653336+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 81707008 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a360a000/0x0/0x1bfc00000, data 0x21c4fe4/0x23d3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:37.653522+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 81707008 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:38.653705+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 81707008 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4419733 data_alloc: 218103808 data_used: 14782464
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:39.654080+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 81707008 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:40.654340+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 81707008 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:41.654535+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 81707008 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a35fd000/0x0/0x1bfc00000, data 0x21d1fe4/0x23e0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:42.654716+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413532160 unmapped: 81698816 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:43.654915+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413532160 unmapped: 81698816 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4419733 data_alloc: 218103808 data_used: 14782464
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:44.655072+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413532160 unmapped: 81698816 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a35fd000/0x0/0x1bfc00000, data 0x21d1fe4/0x23e0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:45.655237+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a35fd000/0x0/0x1bfc00000, data 0x21d1fe4/0x23e0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413540352 unmapped: 81690624 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:46.655360+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a35fd000/0x0/0x1bfc00000, data 0x21d1fe4/0x23e0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413548544 unmapped: 81682432 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.532223701s of 10.651512146s, submitted: 39
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd82ae1e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b0400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76b0400 session 0x55bcd52114a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fe800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd81fe800 session 0x55bcd820f2c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd7eb45a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b0400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76b0400 session 0x55bcd4f423c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:47.655727+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413745152 unmapped: 81485824 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:48.656160+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413745152 unmapped: 81485824 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4467490 data_alloc: 218103808 data_used: 14786560
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a301f000/0x0/0x1bfc00000, data 0x27b0fe4/0x29bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:49.656724+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413745152 unmapped: 81485824 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:50.657060+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413745152 unmapped: 81485824 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:51.657455+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413745152 unmapped: 81485824 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:52.657627+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ec400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd81ec400 session 0x55bcd74a6b40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413745152 unmapped: 81485824 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdb12f800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bceb79d400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:53.658077+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413745152 unmapped: 81485824 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4467622 data_alloc: 218103808 data_used: 14786560
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:54.658387+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413745152 unmapped: 81485824 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a301f000/0x0/0x1bfc00000, data 0x27b0fe4/0x29bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:55.658907+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413753344 unmapped: 81477632 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:56.659237+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413753344 unmapped: 81477632 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:57.659460+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413753344 unmapped: 81477632 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a301f000/0x0/0x1bfc00000, data 0x27b0fe4/0x29bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:58.659669+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413753344 unmapped: 81477632 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4500742 data_alloc: 234881024 data_used: 19505152
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fe000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.006391525s of 12.111726761s, submitted: 24
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd81fe000 session 0x55bcd55874a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T12:59:59.660104+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413761536 unmapped: 81469440 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a301f000/0x0/0x1bfc00000, data 0x27b0fe4/0x29bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:00.660268+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413761536 unmapped: 81469440 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:01.660495+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bce00e3400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a301f000/0x0/0x1bfc00000, data 0x27b0fe4/0x29bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413761536 unmapped: 81469440 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:02.660673+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413761536 unmapped: 81469440 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:03.660946+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f0000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd81f0000 session 0x55bcd5528000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413761536 unmapped: 81469440 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4502636 data_alloc: 234881024 data_used: 19505152
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:04.661174+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a301e000/0x0/0x1bfc00000, data 0x27b0ff4/0x29c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413761536 unmapped: 81469440 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:05.661309+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413761536 unmapped: 81469440 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:06.661449+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413761536 unmapped: 81469440 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:07.661567+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 414195712 unmapped: 81035264 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bce00e3400 session 0x55bcd4f963c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:08.661703+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 414621696 unmapped: 80609280 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4517604 data_alloc: 234881024 data_used: 19582976
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.592151642s of 10.002706528s, submitted: 33
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a2ec0000/0x0/0x1bfc00000, data 0x290cff4/0x2b1c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:09.661871+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 414638080 unmapped: 80592896 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd6ffe780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:10.662026+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 414638080 unmapped: 80592896 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:11.662326+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a2e83000/0x0/0x1bfc00000, data 0x2949ff4/0x2b59000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 414646272 unmapped: 80584704 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd7c9de00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76ac800 session 0x55bcd6891c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b0400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:12.662577+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76b0400 session 0x55bcd7c9cb40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413474816 unmapped: 81756160 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:13.662776+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413474816 unmapped: 81756160 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4381870 data_alloc: 218103808 data_used: 12398592
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:14.662918+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413474816 unmapped: 81756160 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:15.664604+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413474816 unmapped: 81756160 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a39f4000/0x0/0x1bfc00000, data 0x1ddbf92/0x1fea000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:16.664788+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413474816 unmapped: 81756160 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:17.666140+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413474816 unmapped: 81756160 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a39f4000/0x0/0x1bfc00000, data 0x1ddbf92/0x1fea000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:18.667086+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413474816 unmapped: 81756160 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4383042 data_alloc: 218103808 data_used: 12406784
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:19.667319+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413474816 unmapped: 81756160 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.956000328s of 11.097105980s, submitted: 32
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcdb12f800 session 0x55bcd7218d20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bceb79d400 session 0x55bcd7bf41e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:20.667525+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd75b81e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76ac800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd72530e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413515776 unmapped: 81715200 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76ac800 session 0x55bcd74a50e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:21.667784+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 81707008 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76ac800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76ac800 session 0x55bcd81c10e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:22.668062+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 81707008 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:23.668263+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd6ffef00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 81707008 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4286875 data_alloc: 218103808 data_used: 7602176
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:24.668721+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 81707008 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:25.669303+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 81707008 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:26.669619+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 81707008 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:27.670136+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 81707008 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:28.670972+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 81707008 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4286875 data_alloc: 218103808 data_used: 7602176
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:29.671331+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 81707008 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:30.671470+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 81707008 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:31.671760+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 81707008 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:32.671900+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 81707008 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:33.672090+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 81707008 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4286875 data_alloc: 218103808 data_used: 7602176
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:34.672227+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 81707008 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:35.672434+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 81707008 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:36.672593+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 81707008 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:37.672816+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 81707008 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:38.673035+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 81707008 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4286875 data_alloc: 218103808 data_used: 7602176
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:39.673203+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 81707008 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:40.673341+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 81707008 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:41.673514+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413532160 unmapped: 81698816 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:42.673741+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413532160 unmapped: 81698816 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:43.674046+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413532160 unmapped: 81698816 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4286875 data_alloc: 218103808 data_used: 7602176
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:44.674200+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413532160 unmapped: 81698816 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:45.674394+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413532160 unmapped: 81698816 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:46.674550+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413532160 unmapped: 81698816 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:47.674689+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413532160 unmapped: 81698816 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:48.674902+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413532160 unmapped: 81698816 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4286875 data_alloc: 218103808 data_used: 7602176
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:49.675059+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413540352 unmapped: 81690624 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:50.675225+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413540352 unmapped: 81690624 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:51.675388+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413540352 unmapped: 81690624 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:52.675648+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413540352 unmapped: 81690624 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:53.675891+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413548544 unmapped: 81682432 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4286875 data_alloc: 218103808 data_used: 7602176
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:54.675968+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413548544 unmapped: 81682432 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:55.676057+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413556736 unmapped: 81674240 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:56.676132+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413556736 unmapped: 81674240 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:57.676293+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413556736 unmapped: 81674240 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:58.676403+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413556736 unmapped: 81674240 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4286875 data_alloc: 218103808 data_used: 7602176
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:00:59.676535+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413556736 unmapped: 81674240 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:00.676621+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413556736 unmapped: 81674240 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:01.676803+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413556736 unmapped: 81674240 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:02.676989+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413556736 unmapped: 81674240 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:03.677300+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413556736 unmapped: 81674240 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4286875 data_alloc: 218103808 data_used: 7602176
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:04.677456+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413564928 unmapped: 81666048 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:05.677613+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413564928 unmapped: 81666048 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:06.677743+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413564928 unmapped: 81666048 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:07.677915+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413573120 unmapped: 81657856 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:08.678151+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413573120 unmapped: 81657856 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4286875 data_alloc: 218103808 data_used: 7602176
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:09.678308+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413573120 unmapped: 81657856 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:10.678450+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413573120 unmapped: 81657856 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:11.678667+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413573120 unmapped: 81657856 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:12.678810+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413573120 unmapped: 81657856 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:13.679036+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413581312 unmapped: 81649664 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4286875 data_alloc: 218103808 data_used: 7602176
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:14.679185+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413581312 unmapped: 81649664 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:15.680096+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413581312 unmapped: 81649664 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:16.680282+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413581312 unmapped: 81649664 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:17.680463+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413581312 unmapped: 81649664 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:18.680601+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 58.743377686s of 58.975467682s, submitted: 62
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413581312 unmapped: 81649664 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4336219 data_alloc: 218103808 data_used: 7602176
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd7215c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdb12f800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcdb12f800 session 0x55bcd4f410e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bceb79d400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bceb79d400 session 0x55bcd75941e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bceb79d400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bceb79d400 session 0x55bcd7486b40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd5534d20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:19.680724+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd8239c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76ac800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76ac800 session 0x55bcd72170e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdb12f800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcdb12f800 session 0x55bcd7212960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdb12f800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413589504 unmapped: 81641472 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcdb12f800 session 0x55bcd7fd61e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd79d61e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:20.680885+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413589504 unmapped: 81641472 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:21.681016+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413597696 unmapped: 81633280 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:22.681354+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a35a7000/0x0/0x1bfc00000, data 0x2229f82/0x2437000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413605888 unmapped: 81625088 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:23.682362+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413605888 unmapped: 81625088 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4377659 data_alloc: 218103808 data_used: 7602176
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:24.682555+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413605888 unmapped: 81625088 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:25.683354+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413605888 unmapped: 81625088 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:26.683613+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413605888 unmapped: 81625088 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a35a7000/0x0/0x1bfc00000, data 0x2229f82/0x2437000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:27.684237+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413605888 unmapped: 81625088 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:28.684389+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413605888 unmapped: 81625088 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4377659 data_alloc: 218103808 data_used: 7602176
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:29.684883+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a35a7000/0x0/0x1bfc00000, data 0x2229f82/0x2437000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413605888 unmapped: 81625088 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:30.685038+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a35a7000/0x0/0x1bfc00000, data 0x2229f82/0x2437000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.028133392s of 12.106801033s, submitted: 11
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76ac800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd75b9c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413368320 unmapped: 81862656 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76ac800 session 0x55bcd74a6000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:31.685175+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a35a7000/0x0/0x1bfc00000, data 0x2229f82/0x2437000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bceb79d400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bce00e3400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413376512 unmapped: 81854464 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ec400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:32.685285+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fe000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413376512 unmapped: 81854464 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:33.685426+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a35a6000/0x0/0x1bfc00000, data 0x2229fa5/0x2438000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413376512 unmapped: 81854464 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415469 data_alloc: 218103808 data_used: 12447744
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:34.685576+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413376512 unmapped: 81854464 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:35.685886+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413376512 unmapped: 81854464 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a35a6000/0x0/0x1bfc00000, data 0x2229fa5/0x2438000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:36.686099+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413376512 unmapped: 81854464 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:37.686378+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413802496 unmapped: 81428480 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:38.686558+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413802496 unmapped: 81428480 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4467469 data_alloc: 234881024 data_used: 19234816
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:39.686780+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a35a6000/0x0/0x1bfc00000, data 0x2229fa5/0x2438000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413802496 unmapped: 81428480 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:40.686926+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413802496 unmapped: 81428480 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:41.687201+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a35a6000/0x0/0x1bfc00000, data 0x2229fa5/0x2438000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a35a6000/0x0/0x1bfc00000, data 0x2229fa5/0x2438000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413802496 unmapped: 81428480 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:42.687359+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 413802496 unmapped: 81428480 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:43.687544+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.110331535s of 13.142873764s, submitted: 10
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 414023680 unmapped: 81207296 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4471457 data_alloc: 234881024 data_used: 19243008
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:44.687661+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418160640 unmapped: 77070336 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:45.687786+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418177024 unmapped: 77053952 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:46.687931+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a2dd4000/0x0/0x1bfc00000, data 0x29f3fa5/0x2c02000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418177024 unmapped: 77053952 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:47.688129+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418177024 unmapped: 77053952 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:48.688278+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418783232 unmapped: 76447744 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4556605 data_alloc: 234881024 data_used: 19263488
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:49.688423+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418791424 unmapped: 76439552 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:50.688572+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418791424 unmapped: 76439552 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:51.688745+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a2996000/0x0/0x1bfc00000, data 0x2e39fa5/0x3048000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418791424 unmapped: 76439552 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:52.688908+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418791424 unmapped: 76439552 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:53.689059+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418791424 unmapped: 76439552 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4558735 data_alloc: 234881024 data_used: 19263488
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:54.689253+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418791424 unmapped: 76439552 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:55.689409+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418791424 unmapped: 76439552 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:56.689556+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418791424 unmapped: 76439552 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a2974000/0x0/0x1bfc00000, data 0x2e5bfa5/0x306a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:57.689762+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76af000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.268679619s of 13.592313766s, submitted: 86
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76af000 session 0x55bcd53381e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418791424 unmapped: 76439552 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:58.689920+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418799616 unmapped: 76431360 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4560211 data_alloc: 234881024 data_used: 19263488
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:01:59.690097+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a2973000/0x0/0x1bfc00000, data 0x2e5bfb5/0x306b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418799616 unmapped: 76431360 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:00.690289+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418799616 unmapped: 76431360 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:01.690516+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418799616 unmapped: 76431360 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:02.690672+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418799616 unmapped: 76431360 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:03.690879+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a296e000/0x0/0x1bfc00000, data 0x2e60fb5/0x3070000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418799616 unmapped: 76431360 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4560599 data_alloc: 234881024 data_used: 19267584
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:04.691003+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418799616 unmapped: 76431360 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:05.691137+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd5cd4780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76ac800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76ac800 session 0x55bcd79d6f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76af000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76af000 session 0x55bcd4f412c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdb12f800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcdb12f800 session 0x55bcd5211a40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a296e000/0x0/0x1bfc00000, data 0x2e60fb5/0x3070000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418889728 unmapped: 76341248 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:06.691259+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd7eb54a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd7eb4b40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd81c14a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76ac800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76ac800 session 0x55bcd5cd4000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76af000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76af000 session 0x55bcd7c9c000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418889728 unmapped: 76341248 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:07.691557+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418889728 unmapped: 76341248 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:08.691726+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418889728 unmapped: 76341248 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4641483 data_alloc: 234881024 data_used: 19267584
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:09.691931+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1ebf000/0x0/0x1bfc00000, data 0x390efc5/0x3b1f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418889728 unmapped: 76341248 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:10.692169+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418889728 unmapped: 76341248 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:11.692302+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418889728 unmapped: 76341248 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:12.692463+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418889728 unmapped: 76341248 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:13.692668+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418889728 unmapped: 76341248 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4641483 data_alloc: 234881024 data_used: 19267584
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:14.692799+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1ebf000/0x0/0x1bfc00000, data 0x390efc5/0x3b1f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418889728 unmapped: 76341248 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:15.692892+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.311491013s of 18.410348892s, submitted: 16
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418889728 unmapped: 76341248 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:16.693052+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 418889728 unmapped: 76341248 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:17.693217+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1ebd000/0x0/0x1bfc00000, data 0x390ffc5/0x3b20000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdb12f800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcdb12f800 session 0x55bcd8239e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 419045376 unmapped: 76185600 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:18.693356+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1e9a000/0x0/0x1bfc00000, data 0x3933fc5/0x3b44000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 419045376 unmapped: 76185600 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4644145 data_alloc: 234881024 data_used: 19275776
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:19.693519+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 419045376 unmapped: 76185600 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:20.693645+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1e9a000/0x0/0x1bfc00000, data 0x3933fc5/0x3b44000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 420102144 unmapped: 75128832 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:21.693803+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 420560896 unmapped: 74670080 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:22.693906+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 420560896 unmapped: 74670080 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:23.694090+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76ac800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76ac800 session 0x55bcd7c9cb40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76af000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76af000 session 0x55bcd7c9de00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 420560896 unmapped: 74670080 heap: 495230976 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4715345 data_alloc: 234881024 data_used: 26705920
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae9400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae9400 session 0x55bcd6ffe780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:24.694208+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdada6800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcdada6800 session 0x55bcd4f963c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd7208c00 session 0x55bcd5528000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae9400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae9400 session 0x55bcd82ae1e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd7208c00 session 0x55bcd7bf54a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76ac800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76ac800 session 0x55bcd7211c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76af000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76af000 session 0x55bcd7214f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 420724736 unmapped: 78708736 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:25.694350+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1694000/0x0/0x1bfc00000, data 0x4139fc5/0x434a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 420724736 unmapped: 78708736 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:26.694500+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 420724736 unmapped: 78708736 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:27.694714+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1694000/0x0/0x1bfc00000, data 0x4139fc5/0x434a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 60K writes, 236K keys, 60K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.04 MB/s
                                           Cumulative WAL: 60K writes, 21K syncs, 2.77 writes per sync, written: 0.22 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4972 writes, 18K keys, 4972 commit groups, 1.0 writes per commit group, ingest: 17.15 MB, 0.03 MB/s
                                           Interval WAL: 4972 writes, 2044 syncs, 2.43 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 420724736 unmapped: 78708736 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:28.694885+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 420724736 unmapped: 78708736 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4784866 data_alloc: 234881024 data_used: 26705920
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets getting new tickets!
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:29.695080+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _finish_auth 0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:29.696167+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 420724736 unmapped: 78708736 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:30.695215+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.897384644s of 15.018092155s, submitted: 26
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:31.695319+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424247296 unmapped: 75186176 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdada6800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcdada6800 session 0x55bcd4916b40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae9400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae9400 session 0x55bcd61961e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:32.695496+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 425517056 unmapped: 73916416 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a0ce7000/0x0/0x1bfc00000, data 0x4ae6fc5/0x4cf7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:33.695678+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 425549824 unmapped: 73883648 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:34.695822+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 425549824 unmapped: 73883648 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4878322 data_alloc: 234881024 data_used: 27750400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: mgrc ms_handle_reset ms_handle_reset con 0x55bcd5c3bc00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3158772141
Oct 02 13:20:52 compute-0 ceph-osd[83986]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3158772141,v1:192.168.122.100:6801/3158772141]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: get_auth_request con 0x55bcdada6800 auth_method 0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: mgrc handle_mgr_configure stats_period=5
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd7208c00 session 0x55bcd6196f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:35.696002+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 425607168 unmapped: 73826304 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76ac800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76ac800 session 0x55bcd5cd41e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76af000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bceb79c800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5d7f000 session 0x55bcd4f40f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bce219b000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:36.696084+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 425615360 unmapped: 73818112 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcdada5400 session 0x55bcd7bf4000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd81ec000 session 0x55bcd7595a40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7fc00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:37.696242+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 425615360 unmapped: 73818112 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:38.696387+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426213376 unmapped: 73220096 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a0cc1000/0x0/0x1bfc00000, data 0x4b0bfd5/0x4d1d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:39.696507+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429514752 unmapped: 69918720 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4937968 data_alloc: 251658240 data_used: 35450880
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:40.696633+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429514752 unmapped: 69918720 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:41.696769+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429514752 unmapped: 69918720 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:42.696926+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429514752 unmapped: 69918720 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:43.697065+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429514752 unmapped: 69918720 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:44.697194+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429514752 unmapped: 69918720 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4938928 data_alloc: 251658240 data_used: 35520512
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a0cbf000/0x0/0x1bfc00000, data 0x4b0dfd5/0x4d1f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.293130875s of 13.537086487s, submitted: 84
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:45.697314+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429514752 unmapped: 69918720 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a0cbe000/0x0/0x1bfc00000, data 0x4b0efd5/0x4d20000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:46.697458+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429514752 unmapped: 69918720 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:47.697592+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429514752 unmapped: 69918720 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:48.697746+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429514752 unmapped: 69918720 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a0cbe000/0x0/0x1bfc00000, data 0x4b0efd5/0x4d20000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:49.697888+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429654016 unmapped: 69779456 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4951132 data_alloc: 251658240 data_used: 35536896
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:50.698016+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431284224 unmapped: 68149248 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a0626000/0x0/0x1bfc00000, data 0x519efd5/0x53b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:51.698461+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431734784 unmapped: 67698688 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:52.698607+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431874048 unmapped: 67559424 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a05b6000/0x0/0x1bfc00000, data 0x5216fd5/0x5428000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:53.698792+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431882240 unmapped: 67551232 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:54.698945+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431882240 unmapped: 67551232 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5000408 data_alloc: 251658240 data_used: 36978688
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:55.699093+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431882240 unmapped: 67551232 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd749d680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd7252960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a05ac000/0x0/0x1bfc00000, data 0x5220fd5/0x5432000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:56.699300+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431882240 unmapped: 67551232 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:57.699550+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.406940460s of 12.713471413s, submitted: 67
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431882240 unmapped: 67551232 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd75b9c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:58.699646+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431112192 unmapped: 68321280 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:02:59.699778+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431112192 unmapped: 68321280 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4762779 data_alloc: 234881024 data_used: 25944064
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1a54000/0x0/0x1bfc00000, data 0x3d79fc5/0x3f8a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:00.699913+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431112192 unmapped: 68321280 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:01.700028+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431112192 unmapped: 68321280 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:02.700133+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431120384 unmapped: 68313088 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:03.700259+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431120384 unmapped: 68313088 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:04.700411+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431120384 unmapped: 68313088 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4762955 data_alloc: 234881024 data_used: 25944064
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd81ec400 session 0x55bcd4917a40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd81fe000 session 0x55bcd5cd50e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:05.700592+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1a54000/0x0/0x1bfc00000, data 0x3d79fc5/0x3f8a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431120384 unmapped: 68313088 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:06.700739+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431120384 unmapped: 68313088 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a259a000/0x0/0x1bfc00000, data 0x3233fc5/0x3444000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:07.700900+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431136768 unmapped: 68296704 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.270828247s of 10.358497620s, submitted: 27
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:08.701038+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431136768 unmapped: 68296704 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd4f410e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:09.701225+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431136768 unmapped: 68296704 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4633233 data_alloc: 234881024 data_used: 19156992
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:10.701369+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431136768 unmapped: 68296704 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a259a000/0x0/0x1bfc00000, data 0x3233fc5/0x3444000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:11.701439+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431136768 unmapped: 68296704 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a259a000/0x0/0x1bfc00000, data 0x3233fc5/0x3444000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:12.701590+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431136768 unmapped: 68296704 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a259a000/0x0/0x1bfc00000, data 0x3233fc5/0x3444000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:13.701742+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431136768 unmapped: 68296704 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:14.701911+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431136768 unmapped: 68296704 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4633233 data_alloc: 234881024 data_used: 19156992
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:15.702050+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431136768 unmapped: 68296704 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76af000 session 0x55bcd7eb50e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bceb79c800 session 0x55bcd81c05a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a259a000/0x0/0x1bfc00000, data 0x3233fc5/0x3444000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:16.702165+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 427491328 unmapped: 71942144 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd7804960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:17.702308+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a34b3000/0x0/0x1bfc00000, data 0x231bfb5/0x252b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426844160 unmapped: 72589312 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:18.702455+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426844160 unmapped: 72589312 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:19.702573+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426844160 unmapped: 72589312 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4450620 data_alloc: 218103808 data_used: 9314304
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:20.702699+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426844160 unmapped: 72589312 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:21.702936+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426844160 unmapped: 72589312 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a34b3000/0x0/0x1bfc00000, data 0x231bfb5/0x252b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:22.703081+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426844160 unmapped: 72589312 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:23.703241+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426844160 unmapped: 72589312 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd6812960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:24.703389+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426844160 unmapped: 72589312 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4450620 data_alloc: 218103808 data_used: 9314304
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:25.703534+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426844160 unmapped: 72589312 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:26.703682+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426852352 unmapped: 72581120 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bceb79d400 session 0x55bcd7211860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.247697830s of 19.366950989s, submitted: 37
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bce00e3400 session 0x55bcd5535e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:27.703867+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a34b3000/0x0/0x1bfc00000, data 0x231bfb5/0x252b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426852352 unmapped: 72581120 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:28.704311+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426852352 unmapped: 72581120 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:29.705137+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423313408 unmapped: 76120064 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4322784 data_alloc: 218103808 data_used: 7610368
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:30.705350+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd826ad20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:31.705996+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418b000/0x0/0x1bfc00000, data 0x1643f92/0x1852000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:32.706155+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:33.706347+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:34.706484+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4321496 data_alloc: 218103808 data_used: 7606272
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:35.706788+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:36.706914+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:37.707222+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418b000/0x0/0x1bfc00000, data 0x1643f92/0x1852000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:38.707394+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:39.707524+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4321496 data_alloc: 218103808 data_used: 7606272
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:40.707691+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:41.707904+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418b000/0x0/0x1bfc00000, data 0x1643f92/0x1852000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:42.708113+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418b000/0x0/0x1bfc00000, data 0x1643f92/0x1852000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:43.708430+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:44.708647+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4321496 data_alloc: 218103808 data_used: 7606272
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:45.709002+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:46.709189+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423354368 unmapped: 76079104 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:47.709345+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423354368 unmapped: 76079104 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:48.709640+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418b000/0x0/0x1bfc00000, data 0x1643f92/0x1852000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423354368 unmapped: 76079104 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418b000/0x0/0x1bfc00000, data 0x1643f92/0x1852000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:49.709861+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423354368 unmapped: 76079104 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4321496 data_alloc: 218103808 data_used: 7606272
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd7bf41e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bceb79c800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bceb79c800 session 0x55bcd7804d20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bceb79d400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bceb79d400 session 0x55bcd5211860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd826a5a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:50.709972+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.051149368s of 23.211111069s, submitted: 39
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd6891a40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd78052c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd68910e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423706624 unmapped: 75726848 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bceb79c800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bceb79d400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bceb79c800 session 0x55bcd81c03c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76af000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76af000 session 0x55bcd7253680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bceb79d400 session 0x55bcd749cb40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76af000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76af000 session 0x55bcd820b4a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd72154a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd7bf5e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd7fd6960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:51.710230+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421470208 unmapped: 77963264 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:52.710379+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421470208 unmapped: 77963264 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:53.710610+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421470208 unmapped: 77963264 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:54.710790+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a36b6000/0x0/0x1bfc00000, data 0x2118fa2/0x2328000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421470208 unmapped: 77963264 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4424811 data_alloc: 218103808 data_used: 7610368
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:55.711064+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a36b6000/0x0/0x1bfc00000, data 0x2118fa2/0x2328000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421470208 unmapped: 77963264 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:56.711223+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421470208 unmapped: 77963264 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd7805a40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:57.711379+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd81c1a40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421470208 unmapped: 77963264 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a36b6000/0x0/0x1bfc00000, data 0x2118fa2/0x2328000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd7805860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76af000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bceb79d400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76af000 session 0x55bcd55874a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bceb79d400 session 0x55bcd7bf5a40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:58.711502+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421273600 unmapped: 78159872 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bceb79d400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:59.711655+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421273600 unmapped: 78159872 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4425324 data_alloc: 218103808 data_used: 7614464
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:00.711801+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421093376 unmapped: 78340096 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a36b6000/0x0/0x1bfc00000, data 0x2118fa2/0x2328000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:01.711954+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421093376 unmapped: 78340096 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:02.712089+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421093376 unmapped: 78340096 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:03.712321+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421093376 unmapped: 78340096 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a36b6000/0x0/0x1bfc00000, data 0x2118fa2/0x2328000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:04.712449+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421093376 unmapped: 78340096 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4494924 data_alloc: 234881024 data_used: 17330176
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:05.712586+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421093376 unmapped: 78340096 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a36b6000/0x0/0x1bfc00000, data 0x2118fa2/0x2328000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:06.712719+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421093376 unmapped: 78340096 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:07.712863+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421093376 unmapped: 78340096 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:08.712982+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421093376 unmapped: 78340096 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a36b6000/0x0/0x1bfc00000, data 0x2118fa2/0x2328000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:09.713154+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421093376 unmapped: 78340096 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4494924 data_alloc: 234881024 data_used: 17330176
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.235544205s of 19.393142700s, submitted: 51
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:10.713282+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a36b6000/0x0/0x1bfc00000, data 0x2118fa2/0x2328000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421093376 unmapped: 78340096 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:11.713405+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 428171264 unmapped: 71262208 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1db2000/0x0/0x1bfc00000, data 0x3606fa2/0x3816000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:12.713521+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431251456 unmapped: 68182016 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:13.713685+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431251456 unmapped: 68182016 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:14.713823+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431251456 unmapped: 68182016 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4678504 data_alloc: 234881024 data_used: 18624512
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1c87000/0x0/0x1bfc00000, data 0x3726fa2/0x3936000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:15.714012+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431251456 unmapped: 68182016 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:16.714148+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431251456 unmapped: 68182016 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1c87000/0x0/0x1bfc00000, data 0x3726fa2/0x3936000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:17.747145+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431251456 unmapped: 68182016 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:18.747289+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431390720 unmapped: 68042752 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:19.747422+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431390720 unmapped: 68042752 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4667948 data_alloc: 234881024 data_used: 18624512
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76af000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.733645439s of 10.034718513s, submitted: 451
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76af000 session 0x55bcd81c01e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:20.747587+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431398912 unmapped: 68034560 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:21.747714+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431398912 unmapped: 68034560 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:22.747928+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1c56000/0x0/0x1bfc00000, data 0x3767fb1/0x3978000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431398912 unmapped: 68034560 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bceb79c800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:23.748153+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431398912 unmapped: 68034560 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:24.748288+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431398912 unmapped: 68034560 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4669736 data_alloc: 234881024 data_used: 18624512
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bceb79d400 session 0x55bcd68130e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd7fcfc20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ec400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:25.748430+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd81ec400 session 0x55bcd4917a40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1c44000/0x0/0x1bfc00000, data 0x3779fb1/0x398a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bceb79c800 session 0x55bcd7486b40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426737664 unmapped: 72695808 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:26.748584+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426745856 unmapped: 72687616 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:27.748726+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd7253860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd7bf5c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426745856 unmapped: 72687616 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:28.748866+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd81c1e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424599552 unmapped: 74833920 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:29.749014+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424599552 unmapped: 74833920 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344424 data_alloc: 218103808 data_used: 7606272
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:30.749187+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424599552 unmapped: 74833920 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:31.749317+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7b000/0x0/0x1bfc00000, data 0x1643fa1/0x1853000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424599552 unmapped: 74833920 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7b000/0x0/0x1bfc00000, data 0x1643fa1/0x1853000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:32.749453+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424599552 unmapped: 74833920 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:33.749627+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424599552 unmapped: 74833920 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:34.749764+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424599552 unmapped: 74833920 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344424 data_alloc: 218103808 data_used: 7606272
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:35.750105+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424599552 unmapped: 74833920 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7b000/0x0/0x1bfc00000, data 0x1643fa1/0x1853000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:36.750243+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424599552 unmapped: 74833920 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:37.750438+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7b000/0x0/0x1bfc00000, data 0x1643fa1/0x1853000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424599552 unmapped: 74833920 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7b000/0x0/0x1bfc00000, data 0x1643fa1/0x1853000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:38.750648+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424599552 unmapped: 74833920 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:39.750890+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424599552 unmapped: 74833920 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344424 data_alloc: 218103808 data_used: 7606272
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:40.751088+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd7208400 session 0x55bcd7fce3c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76af000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424599552 unmapped: 74833920 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:41.751247+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424607744 unmapped: 74825728 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:42.751397+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424607744 unmapped: 74825728 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7b000/0x0/0x1bfc00000, data 0x1643fa1/0x1853000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:43.751554+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424607744 unmapped: 74825728 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd7208400 session 0x55bcd5cd43c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ec400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd81ec400 session 0x55bcd7804960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd75b92c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd7595a40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7b000/0x0/0x1bfc00000, data 0x1643fa1/0x1853000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.821767807s of 24.315164566s, submitted: 73
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:44.751699+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7b000/0x0/0x1bfc00000, data 0x1643fa1/0x1853000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [0,0,0,0,0,1])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd7eb50e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd7208400 session 0x55bcd7219860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bceb79d400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bceb79d400 session 0x55bcd5587e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424935424 unmapped: 74498048 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4375227 data_alloc: 218103808 data_used: 7606272
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd7fd7680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd7c9cd20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:45.751851+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424951808 unmapped: 74481664 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:46.752041+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424951808 unmapped: 74481664 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:47.752260+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424951808 unmapped: 74481664 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3af1000/0x0/0x1bfc00000, data 0x18cdfa1/0x1add000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:48.752512+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424951808 unmapped: 74481664 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:49.752779+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424960000 unmapped: 74473472 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4374215 data_alloc: 218103808 data_used: 7606272
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:50.753019+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424960000 unmapped: 74473472 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:51.753229+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3af1000/0x0/0x1bfc00000, data 0x18cdfa1/0x1add000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424968192 unmapped: 74465280 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd7fcf680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd7208400 session 0x55bcd7253c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fe000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd81fe000 session 0x55bcd4f97c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd6ffe3c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:52.753413+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424976384 unmapped: 74457088 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:53.753665+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424976384 unmapped: 74457088 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:54.753850+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3af0000/0x0/0x1bfc00000, data 0x18cdfb1/0x1ade000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424976384 unmapped: 74457088 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4376039 data_alloc: 218103808 data_used: 7606272
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:55.754047+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.576940536s of 11.361857414s, submitted: 25
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd74a43c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 434495488 unmapped: 73334784 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd7208400 session 0x55bcd74a6000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd7fd7c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae9400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae9400 session 0x55bcd7253c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae9400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae9400 session 0x55bcd7fd7680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:56.754180+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd81c03c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd7eb50e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 425017344 unmapped: 82812928 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd826b680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd7208400 session 0x55bcd7253860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:57.754331+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 425017344 unmapped: 82812928 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:58.754493+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3118000/0x0/0x1bfc00000, data 0x22a5fb1/0x24b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424869888 unmapped: 82960384 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd81c1860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:59.754749+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae9400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424869888 unmapped: 82960384 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4463945 data_alloc: 218103808 data_used: 8957952
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:00.754974+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 83025920 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:01.755165+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 83025920 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:02.755393+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 83025920 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:03.755599+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 83025920 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:04.755740+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3117000/0x0/0x1bfc00000, data 0x22a5fd4/0x24b7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 83025920 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4536105 data_alloc: 234881024 data_used: 19079168
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd749d0e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae9400 session 0x55bcd749cb40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:05.755888+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 83025920 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:06.756023+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3117000/0x0/0x1bfc00000, data 0x22a5fd4/0x24b7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 83025920 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:07.756159+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 83025920 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:08.756324+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 83025920 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:09.756491+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.923907280s of 14.041229248s, submitted: 27
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76ac800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3117000/0x0/0x1bfc00000, data 0x22a5fd4/0x24b7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 83025920 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4537017 data_alloc: 234881024 data_used: 19107840
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3117000/0x0/0x1bfc00000, data 0x22a5fd4/0x24b7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [0,2,0,1])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:10.756612+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 428998656 unmapped: 78831616 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:11.756754+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd7208c00 session 0x55bcd7804f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76ac800 session 0x55bcd7252780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429047808 unmapped: 78782464 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:12.756925+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 428376064 unmapped: 79454208 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a2ab9000/0x0/0x1bfc00000, data 0x2903fd4/0x2b15000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:13.757116+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a2ab9000/0x0/0x1bfc00000, data 0x2903fd4/0x2b15000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 428376064 unmapped: 79454208 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:14.757259+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 428376064 unmapped: 79454208 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4589719 data_alloc: 234881024 data_used: 20172800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a2ab9000/0x0/0x1bfc00000, data 0x2903fd4/0x2b15000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:15.757384+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 428376064 unmapped: 79454208 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:16.757542+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 428384256 unmapped: 79446016 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:17.757685+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 428384256 unmapped: 79446016 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd6891e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd7bf5e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a2ab9000/0x0/0x1bfc00000, data 0x2903fd4/0x2b15000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:18.757793+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae9400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a2ab9000/0x0/0x1bfc00000, data 0x2903fd4/0x2b15000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423952384 unmapped: 83877888 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd7208c00 session 0x55bcd7c9cd20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:19.757919+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.748688698s of 10.229146004s, submitted: 114
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae9400 session 0x55bcd7487860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423960576 unmapped: 83869696 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4446863 data_alloc: 218103808 data_used: 10043392
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:20.758017+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423960576 unmapped: 83869696 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:21.758151+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423960576 unmapped: 83869696 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ec000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:22.758227+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423960576 unmapped: 83869696 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:23.758388+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423960576 unmapped: 83869696 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:24.758512+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3492000/0x0/0x1bfc00000, data 0x1f2bfa1/0x213b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd81ec000 session 0x55bcd749cf00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423960576 unmapped: 83869696 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4446687 data_alloc: 218103808 data_used: 10043392
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:25.758661+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd7208400 session 0x55bcd7486b40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd5cd5c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 425017344 unmapped: 82812928 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:26.758789+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:27.758931+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd79d72c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:28.759102+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:29.759476+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4360291 data_alloc: 218103808 data_used: 7606272
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:30.759615+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7b000/0x0/0x1bfc00000, data 0x1643fa1/0x1853000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:31.759784+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7b000/0x0/0x1bfc00000, data 0x1643fa1/0x1853000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.338652611s of 12.466805458s, submitted: 33
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd75b8b40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae9400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:32.759903+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae9400 session 0x55bcd55343c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:33.760065+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd75745a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd6196780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:34.760199+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4359002 data_alloc: 218103808 data_used: 7606272
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:35.760402+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd72185a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:36.760588+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd7208400 session 0x55bcd72123c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7c000/0x0/0x1bfc00000, data 0x1643f92/0x1852000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:37.760762+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:38.760913+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:39.761098+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4358122 data_alloc: 218103808 data_used: 7602176
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:40.761288+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:41.761427+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:42.761573+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423976960 unmapped: 83853312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:43.761729+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423976960 unmapped: 83853312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:44.761885+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423976960 unmapped: 83853312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4358122 data_alloc: 218103808 data_used: 7602176
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:45.762033+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423985152 unmapped: 83845120 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:46.762191+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423985152 unmapped: 83845120 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:47.762336+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423985152 unmapped: 83845120 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:48.762477+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423985152 unmapped: 83845120 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:49.762603+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423985152 unmapped: 83845120 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4358122 data_alloc: 218103808 data_used: 7602176
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:50.762740+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423985152 unmapped: 83845120 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:51.762898+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423985152 unmapped: 83845120 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:52.763029+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423985152 unmapped: 83845120 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:53.763242+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.123798370s of 21.309267044s, submitted: 33
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 425082880 unmapped: 82747392 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd7208c00 session 0x55bcd75b8960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd7486960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:54.763400+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd74ae960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd4f43c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd7208400 session 0x55bcd4916b40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423018496 unmapped: 84811776 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4395345 data_alloc: 218103808 data_used: 7602176
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a39c2000/0x0/0x1bfc00000, data 0x19fef82/0x1c0c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:55.763561+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423018496 unmapped: 84811776 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:56.763692+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a39c2000/0x0/0x1bfc00000, data 0x19fef82/0x1c0c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423018496 unmapped: 84811776 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:57.763793+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423018496 unmapped: 84811776 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a39c2000/0x0/0x1bfc00000, data 0x19fef82/0x1c0c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:58.763956+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd81fd000 session 0x55bcd72143c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423018496 unmapped: 84811776 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd826b4a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:59.764079+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423018496 unmapped: 84811776 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4395345 data_alloc: 218103808 data_used: 7602176
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:00.764235+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd7c9c000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423018496 unmapped: 84811776 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:01.764363+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423018496 unmapped: 84811776 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd53374a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:02.764499+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423018496 unmapped: 84811776 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd8007c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:03.764704+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a39c2000/0x0/0x1bfc00000, data 0x19fef82/0x1c0c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423018496 unmapped: 84811776 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b3000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.176534653s of 10.768110275s, submitted: 26
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:04.764845+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76b3000 session 0x55bcd4f43680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd87c9000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd87c9000 session 0x55bcd7805a40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd826ad20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd72194a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd820b2c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423190528 unmapped: 84639744 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4490929 data_alloc: 218103808 data_used: 8429568
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:05.765031+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423141376 unmapped: 84688896 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:06.765167+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423141376 unmapped: 84688896 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:07.765482+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a2ec7000/0x0/0x1bfc00000, data 0x24f9f82/0x2707000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423141376 unmapped: 84688896 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:08.765621+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423141376 unmapped: 84688896 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:09.765799+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b3000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423141376 unmapped: 84688896 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4505175 data_alloc: 218103808 data_used: 10043392
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:10.765897+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76b3000 session 0x55bcd7fd6960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423149568 unmapped: 84680704 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f7000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:11.766039+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a2ec6000/0x0/0x1bfc00000, data 0x24f9fa5/0x2708000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423149568 unmapped: 84680704 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:12.766202+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426000384 unmapped: 81829888 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:13.766501+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426000384 unmapped: 81829888 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:14.766923+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426000384 unmapped: 81829888 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4578963 data_alloc: 234881024 data_used: 20344832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.565558434s of 10.998103142s, submitted: 28
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:15.767271+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 427360256 unmapped: 80470016 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:16.767429+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a2b47000/0x0/0x1bfc00000, data 0x2878fa5/0x2a87000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 427360256 unmapped: 80470016 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:17.767920+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 427360256 unmapped: 80470016 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:18.768262+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 427360256 unmapped: 80470016 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:19.768413+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 427843584 unmapped: 79986688 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4619425 data_alloc: 234881024 data_used: 21434368
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:20.768598+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 428105728 unmapped: 79724544 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:21.768749+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 428531712 unmapped: 79298560 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a2983000/0x0/0x1bfc00000, data 0x2a3bfa5/0x2c4a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:22.768941+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 432054272 unmapped: 75776000 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:23.769085+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 434946048 unmapped: 72884224 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:24.769395+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435134464 unmapped: 72695808 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4774745 data_alloc: 234881024 data_used: 24383488
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.375843048s of 10.013811111s, submitted: 192
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:25.769890+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435159040 unmapped: 72671232 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:26.770217+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435159040 unmapped: 72671232 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:27.770492+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a174a000/0x0/0x1bfc00000, data 0x3c74fa5/0x3e83000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435159040 unmapped: 72671232 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:28.770741+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435159040 unmapped: 72671232 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:29.770904+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1729000/0x0/0x1bfc00000, data 0x3c96fa5/0x3ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435159040 unmapped: 72671232 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4783807 data_alloc: 234881024 data_used: 24764416
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:30.771024+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1729000/0x0/0x1bfc00000, data 0x3c96fa5/0x3ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435159040 unmapped: 72671232 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:31.771212+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435159040 unmapped: 72671232 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:32.771450+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435159040 unmapped: 72671232 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1726000/0x0/0x1bfc00000, data 0x3c99fa5/0x3ea8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:33.771680+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435167232 unmapped: 72663040 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:34.771913+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435167232 unmapped: 72663040 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4784367 data_alloc: 234881024 data_used: 24764416
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:35.772082+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1726000/0x0/0x1bfc00000, data 0x3c99fa5/0x3ea8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435167232 unmapped: 72663040 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:36.772381+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435167232 unmapped: 72663040 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:37.772558+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435175424 unmapped: 72654848 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:38.772810+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435175424 unmapped: 72654848 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:39.773032+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.932869911s of 14.232981682s, submitted: 17
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435175424 unmapped: 72654848 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4784731 data_alloc: 234881024 data_used: 24772608
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:40.773328+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1723000/0x0/0x1bfc00000, data 0x3c9cfa5/0x3eab000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435183616 unmapped: 72646656 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:41.773653+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435183616 unmapped: 72646656 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:42.773919+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435191808 unmapped: 72638464 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd81f7000 session 0x55bcd72105a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd75b8f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:43.774147+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd7208400 session 0x55bcd826ba40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd8007c00 session 0x55bcd7575860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435200000 unmapped: 72630272 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:44.774349+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd75b94a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1721000/0x0/0x1bfc00000, data 0x3c9efa5/0x3ead000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435224576 unmapped: 72605696 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677607 data_alloc: 234881024 data_used: 20029440
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:45.774519+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435224576 unmapped: 72605696 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:46.774773+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd75b9e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435232768 unmapped: 72597504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:47.774976+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd72185a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 400 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd7253680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 400 ms_handle_reset con 0x55bcd7208400 session 0x55bcd7eb5a40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd8007c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 400 ms_handle_reset con 0x55bcd8007c00 session 0x55bcd81c0d20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435232768 unmapped: 72597504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:48.775122+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 400 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd82aef00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435232768 unmapped: 72597504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:49.775306+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 400 heartbeat osd_stat(store_statfs(0x1a203f000/0x0/0x1bfc00000, data 0x337ebfe/0x358e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4682073 data_alloc: 234881024 data_used: 20041728
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:50.775445+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:51.775590+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:52.775759+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 400 heartbeat osd_stat(store_statfs(0x1a203f000/0x0/0x1bfc00000, data 0x337ebfe/0x358e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:53.775996+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:54.776233+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 400 heartbeat osd_stat(store_statfs(0x1a203f000/0x0/0x1bfc00000, data 0x337ebfe/0x358e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4683033 data_alloc: 234881024 data_used: 20144128
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:55.776393+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:56.776525+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:57.776689+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 400 ms_handle_reset con 0x55bcd7208400 session 0x55bcd7bf5c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd8007c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 400 handle_osd_map epochs [400,401], i have 400, src has [1,401]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.647792816s of 18.847537994s, submitted: 42
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 401 ms_handle_reset con 0x55bcd8007c00 session 0x55bcd7215a40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:58.776803+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 401 heartbeat osd_stat(store_statfs(0x1a203c000/0x0/0x1bfc00000, data 0x33808ab/0x3591000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:59.777007+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 401 heartbeat osd_stat(store_statfs(0x1a203c000/0x0/0x1bfc00000, data 0x33808ab/0x3591000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4686007 data_alloc: 234881024 data_used: 20144128
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:00.777160+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:01.777389+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:02.777554+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:03.777761+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:04.777974+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 401 heartbeat osd_stat(store_statfs(0x1a203b000/0x0/0x1bfc00000, data 0x33808ab/0x3591000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4701431 data_alloc: 234881024 data_used: 21639168
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:05.778257+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435249152 unmapped: 72581120 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:06.778479+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435257344 unmapped: 72572928 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:07.778762+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a2039000/0x0/0x1bfc00000, data 0x33823ea/0x3594000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435257344 unmapped: 72572928 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:08.779013+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435257344 unmapped: 72572928 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:09.779198+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435257344 unmapped: 72572928 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4705717 data_alloc: 234881024 data_used: 21663744
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:10.779349+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435257344 unmapped: 72572928 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:11.779562+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a2039000/0x0/0x1bfc00000, data 0x33823ea/0x3594000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435257344 unmapped: 72572928 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:12.779753+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd7c9c780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.350227356s of 14.460050583s, submitted: 27
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd7210d20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b3000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430178304 unmapped: 77651968 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:13.779991+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd76b3000 session 0x55bcd7486960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:14.780280+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a3d73000/0x0/0x1bfc00000, data 0x16493ea/0x185b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a3d73000/0x0/0x1bfc00000, data 0x16493c7/0x185a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4389515 data_alloc: 218103808 data_used: 7618560
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:15.780736+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a3d73000/0x0/0x1bfc00000, data 0x16493c7/0x185a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:16.781019+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:17.781163+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:18.781292+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:19.781536+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a3d73000/0x0/0x1bfc00000, data 0x16493c7/0x185a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4389515 data_alloc: 218103808 data_used: 7618560
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:20.781675+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:21.782380+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:22.782972+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:23.783474+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a3d73000/0x0/0x1bfc00000, data 0x16493c7/0x185a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:24.783719+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:25.784021+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4389515 data_alloc: 218103808 data_used: 7618560
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:26.784337+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:27.784534+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a3d73000/0x0/0x1bfc00000, data 0x16493c7/0x185a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a3d73000/0x0/0x1bfc00000, data 0x16493c7/0x185a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:28.785054+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:29.785412+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:30.785578+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430202880 unmapped: 77627392 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4389515 data_alloc: 218103808 data_used: 7618560
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a3d73000/0x0/0x1bfc00000, data 0x16493c7/0x185a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:31.785737+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430202880 unmapped: 77627392 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a3d73000/0x0/0x1bfc00000, data 0x16493c7/0x185a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:32.785961+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430202880 unmapped: 77627392 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a3d73000/0x0/0x1bfc00000, data 0x16493c7/0x185a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:33.786210+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430202880 unmapped: 77627392 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.157569885s of 21.315719604s, submitted: 45
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd5cd43c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd5339860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd7208400 session 0x55bcd4f434a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd8007c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:34.786373+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd8007c00 session 0x55bcd4f965a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f7000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd81f7000 session 0x55bcd7487e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd82381e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd7805680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430612480 unmapped: 81420288 heap: 512032768 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd7208400 session 0x55bcd82385a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a3236000/0x0/0x1bfc00000, data 0x21873c7/0x2398000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd8007c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd8007c00 session 0x55bcd7804b40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622d400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd622d400 session 0x55bcd820e1e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622d400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd622d400 session 0x55bcd81c1c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd74af680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd826ab40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd7208400 session 0x55bcd79d70e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:35.786522+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4579925 data_alloc: 218103808 data_used: 7618560
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430620672 unmapped: 87236608 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:36.786669+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430620672 unmapped: 87236608 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd8007c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd8007c00 session 0x55bcd820a780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:37.786806+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a25f9000/0x0/0x1bfc00000, data 0x2dc43c7/0x2fd5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430620672 unmapped: 87236608 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd8007c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd8007c00 session 0x55bcd81c0780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:38.786958+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430628864 unmapped: 87228416 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd75941e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd72154a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622d400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a25f9000/0x0/0x1bfc00000, data 0x2dc43c7/0x2fd5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [0,0,2])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd622d400 session 0x55bcd7210d20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:39.787087+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7488c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430653440 unmapped: 87203840 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd828d000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:40.787211+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4590834 data_alloc: 218103808 data_used: 8327168
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430653440 unmapped: 87203840 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:41.787357+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 440999936 unmapped: 76857344 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:42.787613+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 440999936 unmapped: 76857344 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:43.787806+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 440999936 unmapped: 76857344 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a25f7000/0x0/0x1bfc00000, data 0x2dc43fa/0x2fd7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:44.788042+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 440999936 unmapped: 76857344 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:45.788310+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4749074 data_alloc: 234881024 data_used: 30711808
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 440999936 unmapped: 76857344 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:46.788388+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 440999936 unmapped: 76857344 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a25f7000/0x0/0x1bfc00000, data 0x2dc43fa/0x2fd7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:47.788530+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 440999936 unmapped: 76857344 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:48.788686+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 440999936 unmapped: 76857344 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:49.788888+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 440999936 unmapped: 76857344 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a25f7000/0x0/0x1bfc00000, data 0x2dc43fa/0x2fd7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:50.789112+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4749554 data_alloc: 234881024 data_used: 30724096
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 440999936 unmapped: 76857344 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.757154465s of 16.976579666s, submitted: 50
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:51.789241+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446365696 unmapped: 71491584 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:52.789383+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a11ea000/0x0/0x1bfc00000, data 0x41d13fa/0x43e4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [0,0,0,2])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450527232 unmapped: 67330048 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:53.789537+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450265088 unmapped: 67592192 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:54.789648+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450265088 unmapped: 67592192 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:55.789765+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x19f995000/0x0/0x1bfc00000, data 0x48863fa/0x4a99000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4973042 data_alloc: 234881024 data_used: 32993280
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450265088 unmapped: 67592192 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:56.789889+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450265088 unmapped: 67592192 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:57.790029+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450265088 unmapped: 67592192 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:58.790156+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450273280 unmapped: 67584000 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x19f971000/0x0/0x1bfc00000, data 0x48aa3fa/0x4abd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:59.790304+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450273280 unmapped: 67584000 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:00.790383+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4973526 data_alloc: 234881024 data_used: 32993280
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450273280 unmapped: 67584000 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:01.790557+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450273280 unmapped: 67584000 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.594237328s of 11.148545265s, submitted: 226
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:02.790671+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450289664 unmapped: 67567616 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:03.790857+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450289664 unmapped: 67567616 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x19f95f000/0x0/0x1bfc00000, data 0x48bc3fa/0x4acf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:04.790997+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd87c8c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd87c8c00 session 0x55bcd7214f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450289664 unmapped: 67567616 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:05.791116+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4980708 data_alloc: 234881024 data_used: 33009664
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 403 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd826bc20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622d400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450306048 unmapped: 67551232 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 403 ms_handle_reset con 0x55bcd622d400 session 0x55bcd5337680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 403 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd68910e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd8007c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 403 ms_handle_reset con 0x55bcd8007c00 session 0x55bcd5535e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:06.791229+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 403 ms_handle_reset con 0x55bcd521e000 session 0x55bcd72101e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 403 handle_osd_map epochs [404,404], i have 403, src has [1,404]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461914112 unmapped: 55943168 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:07.791299+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461922304 unmapped: 55934976 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 405 ms_handle_reset con 0x55bcd521e000 session 0x55bcd82ae1e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:08.791403+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461922304 unmapped: 55934976 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:09.791570+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 405 heartbeat osd_stat(store_statfs(0x19f032000/0x0/0x1bfc00000, data 0x51e29d7/0x53fa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461922304 unmapped: 55934976 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:10.791664+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5100033 data_alloc: 251658240 data_used: 45219840
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461930496 unmapped: 55926784 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:11.791817+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461938688 unmapped: 55918592 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 405 heartbeat osd_stat(store_statfs(0x19f032000/0x0/0x1bfc00000, data 0x51e29d7/0x53fa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:12.791972+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461938688 unmapped: 55918592 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 405 ms_handle_reset con 0x55bcd7488c00 session 0x55bcd7804780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.599666595s of 11.089976311s, submitted: 74
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 405 ms_handle_reset con 0x55bcd828d000 session 0x55bcd7eb5a40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:13.792171+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461938688 unmapped: 55918592 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:14.792332+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461938688 unmapped: 55918592 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 405 heartbeat osd_stat(store_statfs(0x19f032000/0x0/0x1bfc00000, data 0x51e29d7/0x53fa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:15.792459+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5099901 data_alloc: 251658240 data_used: 45219840
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461938688 unmapped: 55918592 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 406 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd826ad20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 406 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd7211c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 406 ms_handle_reset con 0x55bcd521e000 session 0x55bcd5cd54a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 406 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd75b8960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7488c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 406 ms_handle_reset con 0x55bcd7488c00 session 0x55bcd5cd4960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:16.792597+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461963264 unmapped: 55894016 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:17.792691+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461963264 unmapped: 55894016 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 406 heartbeat osd_stat(store_statfs(0x19f030000/0x0/0x1bfc00000, data 0x51e4516/0x53fd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:18.792989+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461971456 unmapped: 55885824 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 406 heartbeat osd_stat(store_statfs(0x19f030000/0x0/0x1bfc00000, data 0x51e4516/0x53fd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:19.793104+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461971456 unmapped: 55885824 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:20.793297+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5100427 data_alloc: 251658240 data_used: 45219840
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461971456 unmapped: 55885824 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:21.793446+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461971456 unmapped: 55885824 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:22.793570+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd828d000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 406 ms_handle_reset con 0x55bcd828d000 session 0x55bcd749cf00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622d400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461971456 unmapped: 55885824 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:23.793743+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 406 handle_osd_map epochs [406,407], i have 406, src has [1,407]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.123583794s of 10.139816284s, submitted: 12
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 407 heartbeat osd_stat(store_statfs(0x19f030000/0x0/0x1bfc00000, data 0x51e4516/0x53fd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 407 ms_handle_reset con 0x55bcd622d400 session 0x55bcd6fff2c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 407 ms_handle_reset con 0x55bcd521e000 session 0x55bcd5210000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 407 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd7eb50e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461987840 unmapped: 55869440 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7488c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 407 ms_handle_reset con 0x55bcd7488c00 session 0x55bcd7252f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:24.793888+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461987840 unmapped: 55869440 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd828d000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 407 ms_handle_reset con 0x55bcd828d000 session 0x55bcd7eb4b40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:25.794043+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd8007c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdfcf7800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5103533 data_alloc: 251658240 data_used: 45219840
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461987840 unmapped: 55869440 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bceb79cc00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 407 ms_handle_reset con 0x55bceb79cc00 session 0x55bcd7575680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:26.794250+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 407 ms_handle_reset con 0x55bcd521e000 session 0x55bcd749cf00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462004224 unmapped: 55853056 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:27.794409+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462536704 unmapped: 55320576 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 407 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd5cd4960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7488c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 407 ms_handle_reset con 0x55bcd7488c00 session 0x55bcd75b8960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:28.794540+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462536704 unmapped: 55320576 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd828d000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd711f800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 407 heartbeat osd_stat(store_statfs(0x19f009000/0x0/0x1bfc00000, data 0x520a17f/0x5425000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:29.794703+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462536704 unmapped: 55320576 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:30.794880+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5120236 data_alloc: 251658240 data_used: 46493696
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462618624 unmapped: 55238656 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 407 heartbeat osd_stat(store_statfs(0x19f009000/0x0/0x1bfc00000, data 0x520a17f/0x5425000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:31.795067+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 407 ms_handle_reset con 0x55bcd8007c00 session 0x55bcd7486960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 407 ms_handle_reset con 0x55bcdfcf7800 session 0x55bcd820b680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462790656 unmapped: 55066624 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:32.795226+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462790656 unmapped: 55066624 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:33.795583+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462790656 unmapped: 55066624 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 407 heartbeat osd_stat(store_statfs(0x19f009000/0x0/0x1bfc00000, data 0x520a17f/0x5425000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:34.795711+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462790656 unmapped: 55066624 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 407 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7fd7680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.776150703s of 11.827246666s, submitted: 13
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 407 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd82ae000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7488c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:35.795819+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5288082 data_alloc: 251658240 data_used: 48136192
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 407 heartbeat osd_stat(store_statfs(0x19f009000/0x0/0x1bfc00000, data 0x672517f/0x5425000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462856192 unmapped: 55001088 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 407 ms_handle_reset con 0x55bcd7488c00 session 0x55bcd826a960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:36.795943+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462856192 unmapped: 55001088 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd8007c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 408 ms_handle_reset con 0x55bcd8007c00 session 0x55bcd7eb45a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:37.796081+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462856192 unmapped: 55001088 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdb12f800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:38.796208+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462856192 unmapped: 55001088 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 408 heartbeat osd_stat(store_statfs(0x19f005000/0x0/0x1bfc00000, data 0x520be2c/0x5428000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:39.796323+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462864384 unmapped: 54992896 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:40.796460+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5198850 data_alloc: 251658240 data_used: 48144384
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462864384 unmapped: 54992896 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:41.796564+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464134144 unmapped: 53723136 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:42.796702+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464224256 unmapped: 53633024 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 409 heartbeat osd_stat(store_statfs(0x19ee39000/0x0/0x1bfc00000, data 0x53d696b/0x55f4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:43.796868+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464273408 unmapped: 53583872 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:44.797049+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464273408 unmapped: 53583872 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:45.797239+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5249326 data_alloc: 251658240 data_used: 51548160
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464273408 unmapped: 53583872 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:46.797417+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464273408 unmapped: 53583872 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 409 heartbeat osd_stat(store_statfs(0x19ee39000/0x0/0x1bfc00000, data 0x53d696b/0x55f4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:47.797963+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.192468643s of 12.316826820s, submitted: 47
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464707584 unmapped: 53149696 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:48.798111+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464707584 unmapped: 53149696 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:49.798241+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464715776 unmapped: 53141504 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:50.798361+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5262504 data_alloc: 251658240 data_used: 52187136
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464306176 unmapped: 53551104 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:51.798496+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 409 heartbeat osd_stat(store_statfs(0x19edca000/0x0/0x1bfc00000, data 0x544496b/0x5662000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464371712 unmapped: 53485568 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:52.798655+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464371712 unmapped: 53485568 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:53.798944+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464371712 unmapped: 53485568 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:54.799110+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464371712 unmapped: 53485568 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:55.799221+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5269704 data_alloc: 251658240 data_used: 52932608
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464371712 unmapped: 53485568 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 409 heartbeat osd_stat(store_statfs(0x19edca000/0x0/0x1bfc00000, data 0x544496b/0x5662000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:56.799339+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464371712 unmapped: 53485568 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:57.799449+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464371712 unmapped: 53485568 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 409 heartbeat osd_stat(store_statfs(0x19edb2000/0x0/0x1bfc00000, data 0x544496b/0x5662000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:58.799586+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 409 ms_handle_reset con 0x55bcdb12f800 session 0x55bcd7218b40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.053912163s of 11.107004166s, submitted: 12
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 409 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd72143c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464371712 unmapped: 53485568 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdb12f800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 409 ms_handle_reset con 0x55bcdb12f800 session 0x55bcd5cd52c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:59.799748+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455606272 unmapped: 62251008 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:00.799898+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4953532 data_alloc: 251658240 data_used: 38354944
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455606272 unmapped: 62251008 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:01.800025+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455606272 unmapped: 62251008 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 409 heartbeat osd_stat(store_statfs(0x19fed6000/0x0/0x1bfc00000, data 0x3f2a96b/0x4148000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1bbdf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:02.800151+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455606272 unmapped: 62251008 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:03.800317+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455606272 unmapped: 62251008 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:04.800409+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455606272 unmapped: 62251008 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:05.800557+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 409 heartbeat osd_stat(store_statfs(0x19fed6000/0x0/0x1bfc00000, data 0x3f2a96b/0x4148000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1bbdf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4953532 data_alloc: 251658240 data_used: 38354944
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455606272 unmapped: 62251008 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:06.800707+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455606272 unmapped: 62251008 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:07.800858+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 409 heartbeat osd_stat(store_statfs(0x19fed6000/0x0/0x1bfc00000, data 0x3f2a96b/0x4148000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1bbdf9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455606272 unmapped: 62251008 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 409 ms_handle_reset con 0x55bcd521e000 session 0x55bcd820e1e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:08.801006+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 410 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd61961e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7488c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd8007c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.946537018s of 10.045336723s, submitted: 37
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455606272 unmapped: 62251008 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 410 ms_handle_reset con 0x55bcd8007c00 session 0x55bcd7bf5a40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 410 ms_handle_reset con 0x55bcd7488c00 session 0x55bcd6ffe780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #55. Immutable memtables: 11.
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 410 ms_handle_reset con 0x55bcd521e000 session 0x55bcd4f425a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:09.801165+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 411 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd4f974a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461578240 unmapped: 60481536 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:10.801396+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5256647 data_alloc: 251658240 data_used: 44515328
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 412 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd82af680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461537280 unmapped: 60522496 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:11.801527+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdb12f800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461537280 unmapped: 60522496 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 412 heartbeat osd_stat(store_statfs(0x19c828000/0x0/0x1bfc00000, data 0x6430fba/0x6655000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 412 ms_handle_reset con 0x55bcdb12f800 session 0x55bcd6812960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 412 ms_handle_reset con 0x55bcd521e000 session 0x55bcd81c0960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 412 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd79d7c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:12.801670+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461537280 unmapped: 60522496 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:13.801851+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 412 heartbeat osd_stat(store_statfs(0x19c822000/0x0/0x1bfc00000, data 0x6435fba/0x665a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461553664 unmapped: 60506112 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7488c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 412 heartbeat osd_stat(store_statfs(0x19c822000/0x0/0x1bfc00000, data 0x6435fba/0x665a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:14.802019+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461553664 unmapped: 60506112 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 413 heartbeat osd_stat(store_statfs(0x19c825000/0x0/0x1bfc00000, data 0x6435f58/0x6659000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 413 ms_handle_reset con 0x55bcd7488c00 session 0x55bcd81c0f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:15.802510+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5010368 data_alloc: 251658240 data_used: 44523520
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461553664 unmapped: 60506112 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:16.802641+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 413 ms_handle_reset con 0x55bcd828d000 session 0x55bcd7804780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 413 ms_handle_reset con 0x55bcd711f800 session 0x55bcd79d61e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461553664 unmapped: 60506112 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 413 ms_handle_reset con 0x55bcd521e000 session 0x55bcd5587c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:17.802767+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461570048 unmapped: 60489728 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:18.802920+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461570048 unmapped: 60489728 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 413 heartbeat osd_stat(store_statfs(0x19ef87000/0x0/0x1bfc00000, data 0x3cd5b9f/0x3ef7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:19.803051+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461570048 unmapped: 60489728 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.788119316s of 11.300113678s, submitted: 149
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:20.803256+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 415 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd79d6f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4844875 data_alloc: 234881024 data_used: 32546816
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 456900608 unmapped: 65159168 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 415 heartbeat osd_stat(store_statfs(0x19f89c000/0x0/0x1bfc00000, data 0x33bc399/0x35df000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:21.803451+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 456908800 unmapped: 65150976 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 415 ms_handle_reset con 0x55bcd7208400 session 0x55bcd82aef00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 415 ms_handle_reset con 0x55bcd622c400 session 0x55bcd7fd7c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7488c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:22.803610+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 415 ms_handle_reset con 0x55bcd7488c00 session 0x55bcd6197e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450060288 unmapped: 71999488 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a15fc000/0x0/0x1bfc00000, data 0x1660366/0x1881000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:23.803767+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450060288 unmapped: 71999488 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7488c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 415 ms_handle_reset con 0x55bcd7488c00 session 0x55bcd4f974a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 415 ms_handle_reset con 0x55bcd521e000 session 0x55bcd4f425a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 415 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd6ffe780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 415 ms_handle_reset con 0x55bcd622c400 session 0x55bcd61961e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:24.803931+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450060288 unmapped: 71999488 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 415 ms_handle_reset con 0x55bcd7208400 session 0x55bcd820e1e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 415 ms_handle_reset con 0x55bcd7208400 session 0x55bcd82ae000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 415 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7fd7680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:25.804113+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 415 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd7486960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 415 ms_handle_reset con 0x55bcd622c400 session 0x55bcd5cd4960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4550295 data_alloc: 218103808 data_used: 7684096
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449921024 unmapped: 72138752 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:26.804240+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449921024 unmapped: 72138752 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:27.804429+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449921024 unmapped: 72138752 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:28.804615+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a0fdd000/0x0/0x1bfc00000, data 0x1c7e3d8/0x1ea1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449929216 unmapped: 72130560 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:29.804766+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449929216 unmapped: 72130560 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a0fdd000/0x0/0x1bfc00000, data 0x1c7e3d8/0x1ea1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:30.804894+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7488c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.252073288s of 10.649868965s, submitted: 134
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd7488c00 session 0x55bcd7575680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4556122 data_alloc: 218103808 data_used: 7692288
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449945600 unmapped: 72114176 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:31.805064+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449945600 unmapped: 72114176 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:32.805225+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449945600 unmapped: 72114176 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:33.805393+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450478080 unmapped: 71581696 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:34.805543+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450478080 unmapped: 71581696 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:35.805671+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4600762 data_alloc: 218103808 data_used: 14016512
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450478080 unmapped: 71581696 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0fd8000/0x0/0x1bfc00000, data 0x1c7ff3a/0x1ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:36.805799+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450486272 unmapped: 71573504 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:37.805923+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450486272 unmapped: 71573504 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:38.806057+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0fd8000/0x0/0x1bfc00000, data 0x1c7ff3a/0x1ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450486272 unmapped: 71573504 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:39.806203+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450486272 unmapped: 71573504 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:40.806308+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0fd8000/0x0/0x1bfc00000, data 0x1c7ff3a/0x1ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4600762 data_alloc: 218103808 data_used: 14016512
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450486272 unmapped: 71573504 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:41.806450+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0fd8000/0x0/0x1bfc00000, data 0x1c7ff3a/0x1ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450486272 unmapped: 71573504 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:42.806604+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.795286179s of 11.819027901s, submitted: 19
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451592192 unmapped: 70467584 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0f00000/0x0/0x1bfc00000, data 0x1d58f3a/0x1f7e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [0,0,0,0,0,1,3])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:43.806806+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450519040 unmapped: 71540736 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:44.807009+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450568192 unmapped: 71491584 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a09ea000/0x0/0x1bfc00000, data 0x226ef3a/0x2494000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:45.807205+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4651102 data_alloc: 218103808 data_used: 14012416
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450568192 unmapped: 71491584 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:46.807329+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450568192 unmapped: 71491584 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:47.807473+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450568192 unmapped: 71491584 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a09ea000/0x0/0x1bfc00000, data 0x226ef3a/0x2494000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:48.807623+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450568192 unmapped: 71491584 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:49.807756+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450568192 unmapped: 71491584 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:50.807875+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4651118 data_alloc: 218103808 data_used: 14012416
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450568192 unmapped: 71491584 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a09ea000/0x0/0x1bfc00000, data 0x226ef3a/0x2494000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:51.808033+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450568192 unmapped: 71491584 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a09ea000/0x0/0x1bfc00000, data 0x226ef3a/0x2494000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:52.808260+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450568192 unmapped: 71491584 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:53.808455+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450568192 unmapped: 71491584 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:54.808628+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 71483392 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:55.809378+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4651118 data_alloc: 218103808 data_used: 14012416
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 71483392 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:56.809640+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 71483392 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:57.809871+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 71483392 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a09ea000/0x0/0x1bfc00000, data 0x226ef3a/0x2494000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:58.810020+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 71483392 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:59.810214+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 71483392 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a09ea000/0x0/0x1bfc00000, data 0x226ef3a/0x2494000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:00.810740+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4651118 data_alloc: 218103808 data_used: 14012416
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 71483392 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:01.810889+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 71483392 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:02.811307+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.828195572s of 20.298160553s, submitted: 52
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 71483392 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:03.811524+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450584576 unmapped: 71475200 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:04.811800+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a09ea000/0x0/0x1bfc00000, data 0x226ef3a/0x2494000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450592768 unmapped: 71467008 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd622c400 session 0x55bcd7fd61e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:05.812149+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4650903 data_alloc: 218103808 data_used: 14016512
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450592768 unmapped: 71467008 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd828d000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:06.812301+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450592768 unmapped: 71467008 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:07.812546+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7252f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd74a45a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450592768 unmapped: 71467008 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:08.814493+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450592768 unmapped: 71467008 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:09.814643+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a09ea000/0x0/0x1bfc00000, data 0x226ef3a/0x2494000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450592768 unmapped: 71467008 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:10.814900+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a09ea000/0x0/0x1bfc00000, data 0x226ef3a/0x2494000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4650831 data_alloc: 218103808 data_used: 14016512
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450600960 unmapped: 71458816 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:11.815096+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450600960 unmapped: 71458816 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:12.815340+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450600960 unmapped: 71458816 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:13.815731+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450600960 unmapped: 71458816 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:14.816007+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450600960 unmapped: 71458816 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.773118019s of 12.470500946s, submitted: 7
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:15.816188+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a09ea000/0x0/0x1bfc00000, data 0x226ef3a/0x2494000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4650903 data_alloc: 218103808 data_used: 14016512
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a09ea000/0x0/0x1bfc00000, data 0x226ef3a/0x2494000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450600960 unmapped: 71458816 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:16.816409+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd5535e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450609152 unmapped: 71450624 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:17.816564+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd987a000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bce58b2800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd7208400 session 0x55bcd4f40f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd828d000 session 0x55bcd78054a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450609152 unmapped: 71450624 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:18.816681+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd521e000 session 0x55bcd826a1e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a15cf000/0x0/0x1bfc00000, data 0x1685eeb/0x18aa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446373888 unmapped: 75685888 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:19.816942+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446373888 unmapped: 75685888 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:20.817084+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4522595 data_alloc: 218103808 data_used: 7692288
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446373888 unmapped: 75685888 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:21.817245+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446373888 unmapped: 75685888 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:22.817570+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd987a000 session 0x55bcd820e960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bce58b2800 session 0x55bcd6890780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446349312 unmapped: 75710464 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd79d65a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:23.817738+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a15f9000/0x0/0x1bfc00000, data 0x1661ea5/0x1884000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446373888 unmapped: 75685888 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a15f9000/0x0/0x1bfc00000, data 0x1661ea5/0x1884000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:24.817901+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446373888 unmapped: 75685888 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:25.818051+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4518487 data_alloc: 218103808 data_used: 7688192
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446373888 unmapped: 75685888 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.642345428s of 10.898589134s, submitted: 78
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7c9c5a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd828d000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:26.818247+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd828d000 session 0x55bcd5534960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a15fa000/0x0/0x1bfc00000, data 0x1661ea5/0x1884000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446398464 unmapped: 75661312 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:27.818466+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446398464 unmapped: 75661312 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:28.818624+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446398464 unmapped: 75661312 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:29.818771+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd987a000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd987a000 session 0x55bcd820be00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bce58b2800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446406656 unmapped: 75653120 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bce58b2800 session 0x55bcd74a70e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:30.818947+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a15fa000/0x0/0x1bfc00000, data 0x1661ea5/0x1884000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4517719 data_alloc: 218103808 data_used: 7688192
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446406656 unmapped: 75653120 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:31.819076+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd622c400 session 0x55bcd7487680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446406656 unmapped: 75653120 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:32.819215+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446406656 unmapped: 75653120 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:33.819385+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446406656 unmapped: 75653120 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd622c400 session 0x55bcd8238b40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:34.819559+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a14c8000/0x0/0x1bfc00000, data 0x1793ea5/0x19b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd521e000 session 0x55bcd81c0000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd828d000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd828d000 session 0x55bcd4f97680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446414848 unmapped: 75644928 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:35.819718+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd987a000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4559079 data_alloc: 218103808 data_used: 7688192
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446414848 unmapped: 75644928 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.069718361s of 10.233050346s, submitted: 48
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:36.819842+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd987a000 session 0x55bcd7210780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bce58b2800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bce58b2800 session 0x55bcd7fd6780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7fd6f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd622c400 session 0x55bcd4f423c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd828d000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd828d000 session 0x55bcd78043c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd987a000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446521344 unmapped: 75538432 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd987a000 session 0x55bcd7574780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd74a5a40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd74aeb40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:37.819941+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd521e000 session 0x55bcd81c05a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd622c400 session 0x55bcd7210780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 417 heartbeat osd_stat(store_statfs(0x1a112a000/0x0/0x1bfc00000, data 0x1b2fafe/0x1d53000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446496768 unmapped: 75563008 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:38.820058+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446496768 unmapped: 75563008 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:39.820231+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd828d000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd828d000 session 0x55bcd81c0000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd987a000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd987a000 session 0x55bcd7487680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446496768 unmapped: 75563008 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:40.820367+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd521e000 session 0x55bcd74a70e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd622c400 session 0x55bcd820be00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4609389 data_alloc: 218103808 data_used: 7700480
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446496768 unmapped: 75563008 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:41.820525+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd828d000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd828d000 session 0x55bcd68912c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd7c9c5a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcda41b400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fa000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446496768 unmapped: 75563008 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:42.820745+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446496768 unmapped: 75563008 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:43.820868+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 417 heartbeat osd_stat(store_statfs(0x1a0b93000/0x0/0x1bfc00000, data 0x20c5b31/0x22eb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446963712 unmapped: 75096064 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:44.820998+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446963712 unmapped: 75096064 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:45.821138+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4643469 data_alloc: 218103808 data_used: 12431360
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446963712 unmapped: 75096064 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:46.821329+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446963712 unmapped: 75096064 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:47.821448+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b1800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.463706017s of 11.551223755s, submitted: 21
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd76b1800 session 0x55bcd7805860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446963712 unmapped: 75096064 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:48.821576+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446971904 unmapped: 75087872 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 417 heartbeat osd_stat(store_statfs(0x1a0b92000/0x0/0x1bfc00000, data 0x20c5b54/0x22ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:49.821731+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447676416 unmapped: 74383360 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:50.821865+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4674079 data_alloc: 218103808 data_used: 16248832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 448380928 unmapped: 73678848 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:51.822042+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 448380928 unmapped: 73678848 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:52.822176+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 448380928 unmapped: 73678848 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:53.822426+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 417 heartbeat osd_stat(store_statfs(0x1a0b92000/0x0/0x1bfc00000, data 0x20c5b54/0x22ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7fd63c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd622c400 session 0x55bcd5528960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451313664 unmapped: 70746112 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:54.822629+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd81c0b40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451313664 unmapped: 70746112 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:55.822755+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4715805 data_alloc: 218103808 data_used: 16478208
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451330048 unmapped: 70729728 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:56.822947+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd828d000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd828d000 session 0x55bcd5211860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5c3ac00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd5c3ac00 session 0x55bcd7253c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451330048 unmapped: 70729728 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:57.823139+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451330048 unmapped: 70729728 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:58.823409+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.650910378s of 11.011292458s, submitted: 92
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd521e000 session 0x55bcd55281e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451346432 unmapped: 70713344 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 417 heartbeat osd_stat(store_statfs(0x1a06ff000/0x0/0x1bfc00000, data 0x2559b31/0x277f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:59.823600+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 418 handle_osd_map epochs [418,418], i have 418, src has [1,418]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 418 ms_handle_reset con 0x55bcd622c400 session 0x55bcd75b8000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451346432 unmapped: 70713344 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 418 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd820b4a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd828d000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:00.823804+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 418 ms_handle_reset con 0x55bcd828d000 session 0x55bcd5528f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4655939 data_alloc: 218103808 data_used: 12673024
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451379200 unmapped: 70680576 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:01.824016+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0bc5000/0x0/0x1bfc00000, data 0x20927de/0x22b9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451379200 unmapped: 70680576 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:02.824266+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451379200 unmapped: 70680576 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:03.824570+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451379200 unmapped: 70680576 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 418 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd81c03c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:04.824718+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 418 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7215e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0bc5000/0x0/0x1bfc00000, data 0x20927de/0x22b9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451379200 unmapped: 70680576 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:05.824909+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4655795 data_alloc: 218103808 data_used: 12673024
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 418 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd5211a40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451379200 unmapped: 70680576 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:06.825068+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451379200 unmapped: 70680576 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:07.825260+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451379200 unmapped: 70680576 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:08.825444+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0bc5000/0x0/0x1bfc00000, data 0x20927de/0x22b9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451379200 unmapped: 70680576 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:09.825579+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451379200 unmapped: 70680576 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.221710205s of 11.399076462s, submitted: 51
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:10.825763+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcd622c400 session 0x55bcd53372c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4659617 data_alloc: 218103808 data_used: 12681216
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcda41b400 session 0x55bcd4f40f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcd81fa000 session 0x55bcd82383c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453058560 unmapped: 69001216 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:11.825877+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd74a4b40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd53385a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a03fa000/0x0/0x1bfc00000, data 0x285a37f/0x2a83000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451461120 unmapped: 70598656 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:12.826114+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:13.826373+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7c9cd20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:14.826525+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:15.826665+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4612497 data_alloc: 218103808 data_used: 7712768
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:16.826864+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:17.827009+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a0e29000/0x0/0x1bfc00000, data 0x1e2d34c/0x2054000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:18.827260+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:19.827462+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:20.827577+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4612497 data_alloc: 218103808 data_used: 7712768
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:21.827739+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:22.827921+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:23.828072+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a0e29000/0x0/0x1bfc00000, data 0x1e2d34c/0x2054000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:24.828227+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:25.828345+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4612497 data_alloc: 218103808 data_used: 7712768
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a0e29000/0x0/0x1bfc00000, data 0x1e2d34c/0x2054000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:26.828488+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.276529312s of 16.571434021s, submitted: 92
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcd622c400 session 0x55bcd7214f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:27.828650+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fa000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcda41b400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a0e2a000/0x0/0x1bfc00000, data 0x1e2d34c/0x2054000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:28.828925+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:29.829075+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447430656 unmapped: 74629120 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:30.829277+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449003520 unmapped: 73056256 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4663682 data_alloc: 218103808 data_used: 14639104
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:31.830947+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449003520 unmapped: 73056256 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a0e2a000/0x0/0x1bfc00000, data 0x1e2d34c/0x2054000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:32.831647+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449003520 unmapped: 73056256 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:33.833081+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449003520 unmapped: 73056256 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:34.833656+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449003520 unmapped: 73056256 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:35.834762+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449003520 unmapped: 73056256 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4663682 data_alloc: 218103808 data_used: 14639104
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:36.837326+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449003520 unmapped: 73056256 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a0e2a000/0x0/0x1bfc00000, data 0x1e2d34c/0x2054000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:37.837506+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449003520 unmapped: 73056256 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:38.839448+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449003520 unmapped: 73056256 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:39.840008+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449003520 unmapped: 73056256 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.600367546s of 12.624567032s, submitted: 7
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:40.840606+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453623808 unmapped: 68435968 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd828d000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4785748 data_alloc: 218103808 data_used: 15151104
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:41.841052+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a09c3000/0x0/0x1bfc00000, data 0x229334c/0x24ba000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,18])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 458891264 unmapped: 66846720 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3697: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcd828d000 session 0x55bcd74ae960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcd521e000 session 0x55bcd81c01e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd7212960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcd622c400 session 0x55bcd7804d20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd82ae5a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:42.841748+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454696960 unmapped: 71041024 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:43.842032+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454696960 unmapped: 71041024 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:44.842267+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454696960 unmapped: 71041024 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a0126000/0x0/0x1bfc00000, data 0x2b2834c/0x2d4f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:45.842398+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454696960 unmapped: 71041024 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4779116 data_alloc: 218103808 data_used: 15372288
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:46.842705+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454696960 unmapped: 71041024 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6fdcc00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcd6fdcc00 session 0x55bcd5338b40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:47.842883+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454705152 unmapped: 71032832 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a0126000/0x0/0x1bfc00000, data 0x2b2834c/0x2d4f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcd521e000 session 0x55bcd826b680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:48.843005+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454705152 unmapped: 71032832 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd7804b40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcd622c400 session 0x55bcd749cf00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:49.843162+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ed400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.861032486s of 10.011510849s, submitted: 117
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454868992 unmapped: 70868992 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f2800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:50.843311+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454868992 unmapped: 70868992 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5351800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0109000/0x0/0x1bfc00000, data 0x2b4c37f/0x2d75000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 420 ms_handle_reset con 0x55bcd81f2800 session 0x55bcd75b9c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4822614 data_alloc: 234881024 data_used: 19931136
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:51.843435+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 458293248 unmapped: 67444736 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 ms_handle_reset con 0x55bcd5351800 session 0x55bcd826a5a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:52.843566+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459284480 unmapped: 66453504 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:53.843750+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459284480 unmapped: 66453504 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:54.843992+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459284480 unmapped: 66453504 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a00fe000/0x0/0x1bfc00000, data 0x2b4fc7c/0x2d7e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:55.844144+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459284480 unmapped: 66453504 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4849730 data_alloc: 234881024 data_used: 23068672
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:56.844287+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459284480 unmapped: 66453504 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:57.844432+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459284480 unmapped: 66453504 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a00fe000/0x0/0x1bfc00000, data 0x2b4fc7c/0x2d7e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:58.844565+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459284480 unmapped: 66453504 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a00fe000/0x0/0x1bfc00000, data 0x2b4fc7c/0x2d7e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:59.844681+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd7595860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7215a40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459292672 unmapped: 66445312 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:00.844935+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459292672 unmapped: 66445312 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4847602 data_alloc: 234881024 data_used: 23072768
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:01.845074+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459292672 unmapped: 66445312 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.320135117s of 12.432018280s, submitted: 29
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:02.845213+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 460251136 unmapped: 65486848 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x19fd1d000/0x0/0x1bfc00000, data 0x2f2ac7c/0x3159000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [1])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:03.845446+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 460120064 unmapped: 65617920 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:04.845563+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 460136448 unmapped: 65601536 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:05.845692+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 460136448 unmapped: 65601536 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x19fd14000/0x0/0x1bfc00000, data 0x2f32c7c/0x3161000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x19fd14000/0x0/0x1bfc00000, data 0x2f32c7c/0x3161000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:06.845860+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4889466 data_alloc: 234881024 data_used: 23203840
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 460136448 unmapped: 65601536 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:07.846006+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 460136448 unmapped: 65601536 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 ms_handle_reset con 0x55bcd622c400 session 0x55bcd6fff2c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f2800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fe800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:08.846128+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459464704 unmapped: 66273280 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:09.846207+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459489280 unmapped: 66248704 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:10.846323+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x19fcf3000/0x0/0x1bfc00000, data 0x2f5cc7c/0x318b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [1])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 66240512 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:11.846399+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4890884 data_alloc: 234881024 data_used: 23379968
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 66240512 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:12.846525+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 66240512 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:13.846674+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 66240512 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd75b8f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.524916649s of 11.728118896s, submitted: 41
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 ms_handle_reset con 0x55bcd81ed400 session 0x55bcd4f43e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:14.846974+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 66240512 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ed400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:15.847108+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 66240512 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x19fcf1000/0x0/0x1bfc00000, data 0x3267c7c/0x318d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x19fcf1000/0x0/0x1bfc00000, data 0x3267c7c/0x318d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:16.847236+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4914444 data_alloc: 234881024 data_used: 23437312
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 66240512 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:17.847388+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 66240512 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:18.847529+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 66240512 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:19.847672+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 66240512 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x19fcf1000/0x0/0x1bfc00000, data 0x3267c7c/0x318d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:20.847776+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 66240512 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:21.847897+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4967962 data_alloc: 234881024 data_used: 26353664
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 463724544 unmapped: 62013440 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x19f9e1000/0x0/0x1bfc00000, data 0x3577c7c/0x349d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:22.848040+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x19f9e1000/0x0/0x1bfc00000, data 0x3577c7c/0x349d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462184448 unmapped: 63553536 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:23.848188+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462184448 unmapped: 63553536 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:24.848382+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462184448 unmapped: 63553536 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:25.848546+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462184448 unmapped: 63553536 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:26.848742+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4969562 data_alloc: 234881024 data_used: 26501120
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x19f9e1000/0x0/0x1bfc00000, data 0x3577c7c/0x349d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462184448 unmapped: 63553536 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.103944778s of 13.150414467s, submitted: 6
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x19f9e1000/0x0/0x1bfc00000, data 0x3577c7c/0x349d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:27.848917+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 458989568 unmapped: 66748416 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 65K writes, 256K keys, 65K commit groups, 1.0 writes per commit group, ingest: 0.24 GB, 0.04 MB/s
                                           Cumulative WAL: 65K writes, 23K syncs, 2.74 writes per sync, written: 0.24 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4809 writes, 20K keys, 4809 commit groups, 1.0 writes per commit group, ingest: 19.69 MB, 0.03 MB/s
                                           Interval WAL: 4809 writes, 1955 syncs, 2.46 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3c430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3c430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3c430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:28.849202+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 66715648 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:29.849402+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 66715648 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:30.849635+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 66715648 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:31.849788+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4965690 data_alloc: 234881024 data_used: 26886144
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 66715648 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:32.850112+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 66715648 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x19f9e1000/0x0/0x1bfc00000, data 0x3577c7c/0x349d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:33.850384+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459292672 unmapped: 66445312 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:34.850561+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459292672 unmapped: 66445312 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:35.850724+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459292672 unmapped: 66445312 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:36.850957+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4993036 data_alloc: 234881024 data_used: 26882048
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459857920 unmapped: 65880064 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x19f7a8000/0x0/0x1bfc00000, data 0x37adc7c/0x36d3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:37.851130+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459857920 unmapped: 65880064 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 ms_handle_reset con 0x55bcd81ed400 session 0x55bcd826ab40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.982535362s of 11.166376114s, submitted: 14
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 ms_handle_reset con 0x55bcd521e000 session 0x55bcd52114a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:38.851283+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 457621504 unmapped: 68116480 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd79d63c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:39.851404+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x19f7ab000/0x0/0x1bfc00000, data 0x37adc7c/0x36d3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453328896 unmapped: 72409088 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:40.851527+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453328896 unmapped: 72409088 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:41.851708+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4822849 data_alloc: 234881024 data_used: 18599936
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453328896 unmapped: 72409088 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:42.852009+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454385664 unmapped: 71352320 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:43.852231+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454385664 unmapped: 71352320 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 ms_handle_reset con 0x55bcd81f2800 session 0x55bcd820b680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 ms_handle_reset con 0x55bcd81fe800 session 0x55bcd74a70e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:44.852342+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454385664 unmapped: 71352320 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a041b000/0x0/0x1bfc00000, data 0x2b6cc49/0x2a63000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:45.852492+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454385664 unmapped: 71352320 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:46.852657+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a041b000/0x0/0x1bfc00000, data 0x2b6cc49/0x2a63000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4831417 data_alloc: 234881024 data_used: 18472960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 ms_handle_reset con 0x55bcd521e000 session 0x55bcd72185a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454385664 unmapped: 71352320 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:47.852810+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454385664 unmapped: 71352320 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:48.852912+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454385664 unmapped: 71352320 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd5528000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ed400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:49.853092+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454393856 unmapped: 71344128 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:50.853272+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454393856 unmapped: 71344128 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:51.853453+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4831417 data_alloc: 234881024 data_used: 18472960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.426866531s of 13.683501244s, submitted: 62
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454393856 unmapped: 71344128 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a0445000/0x0/0x1bfc00000, data 0x2b42c49/0x2a39000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:52.853633+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 ms_handle_reset con 0x55bcd81ed400 session 0x55bcd7fd6960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454393856 unmapped: 71344128 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:53.853907+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454393856 unmapped: 71344128 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f2800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 ms_handle_reset con 0x55bcd81f2800 session 0x55bcd7805860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:54.854037+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454410240 unmapped: 71327744 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 422 ms_handle_reset con 0x55bcd622c400 session 0x55bcd749c5a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:55.854210+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454410240 unmapped: 71327744 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:56.854363+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a099e000/0x0/0x1bfc00000, data 0x22b18f6/0x24df000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4780425 data_alloc: 234881024 data_used: 18477056
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 422 ms_handle_reset con 0x55bcd81fa000 session 0x55bcd6ffef00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 422 ms_handle_reset con 0x55bcda41b400 session 0x55bcd7211860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454410240 unmapped: 71327744 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:57.854525+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454418432 unmapped: 71319552 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:58.854683+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454418432 unmapped: 71319552 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 422 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7fd7680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:59.854866+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454426624 unmapped: 71311360 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a099f000/0x0/0x1bfc00000, data 0x22b18f6/0x24df000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:00.855006+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454426624 unmapped: 71311360 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 423 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd820b4a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ed400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:01.855134+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4782683 data_alloc: 234881024 data_used: 18489344
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.637521744s of 10.001954079s, submitted: 97
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444440576 unmapped: 81297408 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 423 ms_handle_reset con 0x55bcd81ed400 session 0x55bcd6ffef00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:02.855309+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444440576 unmapped: 81297408 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a15e0000/0x0/0x1bfc00000, data 0x166e3d3/0x189c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:03.855491+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444440576 unmapped: 81297408 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:04.855663+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444440576 unmapped: 81297408 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:05.855800+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ed400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444456960 unmapped: 81281024 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:06.855936+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4595427 data_alloc: 218103808 data_used: 7753728
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 424 ms_handle_reset con 0x55bcd81ed400 session 0x55bcd75b9c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444473344 unmapped: 81264640 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:07.856068+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 424 heartbeat osd_stat(store_statfs(0x1a15df000/0x0/0x1bfc00000, data 0x1670062/0x189e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444473344 unmapped: 81264640 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 424 ms_handle_reset con 0x55bcd521e000 session 0x55bcd55341e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 424 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd68912c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fa000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 424 ms_handle_reset con 0x55bcd81fa000 session 0x55bcd6813860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcda41b400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 424 ms_handle_reset con 0x55bcda41b400 session 0x55bcd72101e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 424 heartbeat osd_stat(store_statfs(0x1a15e1000/0x0/0x1bfc00000, data 0x1670035/0x189c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [0,1])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 424 ms_handle_reset con 0x55bcd521e000 session 0x55bcd75b8b40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 424 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd4f410e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ed400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 424 ms_handle_reset con 0x55bcd81ed400 session 0x55bcd68910e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fa000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:08.856224+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 424 ms_handle_reset con 0x55bcd81fa000 session 0x55bcd7217c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f2800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 424 ms_handle_reset con 0x55bcd81f2800 session 0x55bcd52105a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 445571072 unmapped: 80166912 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:09.856367+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 445571072 unmapped: 80166912 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 424 heartbeat osd_stat(store_statfs(0x1a0a13000/0x0/0x1bfc00000, data 0x223d0a7/0x246b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:10.856522+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 445571072 unmapped: 80166912 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:11.856697+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 424 heartbeat osd_stat(store_statfs(0x1a0a13000/0x0/0x1bfc00000, data 0x223d0a7/0x246b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 424 handle_osd_map epochs [425,425], i have 425, src has [1,425]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4689912 data_alloc: 218103808 data_used: 7753728
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444178432 unmapped: 81559552 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:12.856820+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444178432 unmapped: 81559552 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f2800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81f2800 session 0x55bcd7213e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:13.857033+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444186624 unmapped: 81551360 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd521e000 session 0x55bcd5535a40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:14.857200+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444186624 unmapped: 81551360 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd5529c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ed400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.008628845s of 13.355698586s, submitted: 123
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81ed400 session 0x55bcd79d6d20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:15.857331+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fa000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444334080 unmapped: 81403904 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:16.857485+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5512000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd5512000 session 0x55bcd7fd6f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4693869 data_alloc: 218103808 data_used: 7757824
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a09ec000/0x0/0x1bfc00000, data 0x2262be6/0x2492000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7fd7c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444334080 unmapped: 81403904 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:17.857644+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446734336 unmapped: 79003648 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a09ec000/0x0/0x1bfc00000, data 0x2262be6/0x2492000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:18.857895+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd5cd41e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446734336 unmapped: 79003648 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:19.858057+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446734336 unmapped: 79003648 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:20.858287+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446734336 unmapped: 79003648 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5512000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd5512000 session 0x55bcd7574000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ed400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:21.858478+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81ed400 session 0x55bcd7bf41e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4783390 data_alloc: 234881024 data_used: 20123648
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f2800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446742528 unmapped: 78995456 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a09ec000/0x0/0x1bfc00000, data 0x2262be6/0x2492000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81f2800 session 0x55bcd4f963c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd521e000 session 0x55bcd5534d20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:22.858723+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447799296 unmapped: 77938688 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:23.859031+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447799296 unmapped: 77938688 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:24.859161+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447799296 unmapped: 77938688 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a0395000/0x0/0x1bfc00000, data 0x28b8bf6/0x2ae9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a0395000/0x0/0x1bfc00000, data 0x28b8bf6/0x2ae9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:25.859312+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447799296 unmapped: 77938688 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:26.859609+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4837251 data_alloc: 234881024 data_used: 20123648
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447807488 unmapped: 77930496 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a0395000/0x0/0x1bfc00000, data 0x28b8bf6/0x2ae9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:27.859775+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447807488 unmapped: 77930496 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.694820404s of 13.001426697s, submitted: 43
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:28.859918+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 448069632 unmapped: 77668352 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:29.860042+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 448069632 unmapped: 77668352 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:30.860246+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 448069632 unmapped: 77668352 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:31.860379+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4867081 data_alloc: 234881024 data_used: 20242432
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 448069632 unmapped: 77668352 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:32.860509+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 448069632 unmapped: 77668352 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a007f000/0x0/0x1bfc00000, data 0x2bcebf6/0x2dff000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd79d6000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:33.860675+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5512000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd5512000 session 0x55bcd78054a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 448069632 unmapped: 77668352 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ed400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81ed400 session 0x55bcd7805a40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:34.860804+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdada4c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcdada4c00 session 0x55bcd7218d20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 448069632 unmapped: 77668352 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:35.860940+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a007e000/0x0/0x1bfc00000, data 0x2bcec19/0x2e00000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 448069632 unmapped: 77668352 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:36.861063+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4882818 data_alloc: 234881024 data_used: 22142976
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450920448 unmapped: 74817536 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:37.861182+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451149824 unmapped: 74588160 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.053140640s of 10.173756599s, submitted: 39
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:38.861300+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451149824 unmapped: 74588160 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:39.861438+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451149824 unmapped: 74588160 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:40.861575+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81fa000 session 0x55bcd6891e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd5211c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451149824 unmapped: 74588160 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5512000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:41.861694+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4712946 data_alloc: 218103808 data_used: 14405632
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd5512000 session 0x55bcd7805c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a0b52000/0x0/0x1bfc00000, data 0x1cebba7/0x1f1b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1d18f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ed400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81ed400 session 0x55bcd7fd7860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444416000 unmapped: 81321984 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fac00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81fac00 session 0x55bcd6ffe3c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5512000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd5512000 session 0x55bcd5528f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ed400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81ed400 session 0x55bcd55294a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fa000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81fa000 session 0x55bcd6ffe3c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:42.861812+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fac00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81fac00 session 0x55bcd78054a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd4f963c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5512000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd5512000 session 0x55bcd5cd41e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ed400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81ed400 session 0x55bcd5535a40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444735488 unmapped: 81002496 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fa000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81fa000 session 0x55bcd52105a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:43.861987+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fac00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81fac00 session 0x55bcd68910e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444735488 unmapped: 81002496 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd4f410e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5512000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:44.862136+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd5512000 session 0x55bcd75b8b40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444735488 unmapped: 81002496 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ed400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fa000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:45.862319+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444735488 unmapped: 81002496 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:46.862497+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4783685 data_alloc: 234881024 data_used: 18337792
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444547072 unmapped: 81190912 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a068c000/0x0/0x1bfc00000, data 0x21b0bb7/0x23e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1d18f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:47.862604+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447594496 unmapped: 78143488 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:48.862809+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447594496 unmapped: 78143488 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:49.863117+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.070795059s of 11.480711937s, submitted: 135
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447594496 unmapped: 78143488 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:50.863330+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447594496 unmapped: 78143488 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:51.863519+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4855783 data_alloc: 234881024 data_used: 18796544
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447594496 unmapped: 78143488 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x19fdef000/0x0/0x1bfc00000, data 0x2a4ebb7/0x2c7f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1d18f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:52.863685+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447594496 unmapped: 78143488 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:53.863922+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447594496 unmapped: 78143488 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:54.864053+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447594496 unmapped: 78143488 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:55.864167+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447594496 unmapped: 78143488 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:56.864325+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x19fdef000/0x0/0x1bfc00000, data 0x2a4ebb7/0x2c7f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1d18f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4855943 data_alloc: 234881024 data_used: 18800640
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447594496 unmapped: 78143488 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd521e000 session 0x55bcd5cd45a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd82ae1e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:57.864473+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x19fdef000/0x0/0x1bfc00000, data 0x2a4ebb7/0x2c7f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1d18f9c7), peers [0,2] op hist [0,0,0,4,5])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fac00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449568768 unmapped: 76169216 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81fac00 session 0x55bcd6812960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f85b000/0x0/0x1bfc00000, data 0x2fe2bb7/0x3213000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1d18f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:58.864728+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449568768 unmapped: 76169216 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:59.864948+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449372160 unmapped: 76365824 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f7d8000/0x0/0x1bfc00000, data 0x3066b94/0x3296000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1d18f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:00.865084+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449372160 unmapped: 76365824 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:01.865251+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4900751 data_alloc: 234881024 data_used: 19607552
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449380352 unmapped: 76357632 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f7d8000/0x0/0x1bfc00000, data 0x3066b94/0x3296000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1d18f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:02.865381+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449380352 unmapped: 76357632 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:03.865584+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449380352 unmapped: 76357632 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:04.865722+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449380352 unmapped: 76357632 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:05.865884+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449380352 unmapped: 76357632 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f7d8000/0x0/0x1bfc00000, data 0x3066b94/0x3296000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1d18f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:06.866045+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f7d8000/0x0/0x1bfc00000, data 0x3066b94/0x3296000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1d18f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4900751 data_alloc: 234881024 data_used: 19607552
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449380352 unmapped: 76357632 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:07.866190+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449380352 unmapped: 76357632 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:08.866328+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449380352 unmapped: 76357632 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:09.866513+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.579032898s of 20.090600967s, submitted: 74
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449388544 unmapped: 76349440 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81ed400 session 0x55bcd72101e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81fa000 session 0x55bcd7214000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f7d8000/0x0/0x1bfc00000, data 0x3066b94/0x3296000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1d18f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:10.866633+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd521e000 session 0x55bcd61961e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449396736 unmapped: 76341248 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:11.866793+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4786173 data_alloc: 218103808 data_used: 14860288
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449421312 unmapped: 76316672 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:12.866930+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449429504 unmapped: 76308480 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:13.867122+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449454080 unmapped: 76283904 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:14.867226+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a131a000/0x0/0x1bfc00000, data 0x2565b84/0x2794000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [0,0,0,1,0,1])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449429504 unmapped: 76308480 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:15.867340+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449421312 unmapped: 76316672 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:16.867490+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4786173 data_alloc: 218103808 data_used: 14860288
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449429504 unmapped: 76308480 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:17.867653+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449421312 unmapped: 76316672 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd6812d20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:18.902620+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5512000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fac00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449429504 unmapped: 76308480 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:19.902760+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a131a000/0x0/0x1bfc00000, data 0x2565b84/0x2794000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449429504 unmapped: 76308480 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:20.902870+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449454080 unmapped: 76283904 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:21.903046+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4786465 data_alloc: 218103808 data_used: 14880768
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449454080 unmapped: 76283904 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:22.903201+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449454080 unmapped: 76283904 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:23.903365+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449454080 unmapped: 76283904 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a131a000/0x0/0x1bfc00000, data 0x2565b84/0x2794000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:24.903515+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449454080 unmapped: 76283904 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:25.903736+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449454080 unmapped: 76283904 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:26.903919+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4786465 data_alloc: 218103808 data_used: 14880768
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449462272 unmapped: 76275712 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:27.904076+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449462272 unmapped: 76275712 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:28.904250+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449462272 unmapped: 76275712 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:29.904426+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a131a000/0x0/0x1bfc00000, data 0x2565b84/0x2794000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.245162010s of 20.124031067s, submitted: 286
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449462272 unmapped: 76275712 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:30.904550+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449462272 unmapped: 76275712 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:31.904681+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4798557 data_alloc: 218103808 data_used: 15982592
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449462272 unmapped: 76275712 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:32.904810+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449462272 unmapped: 76275712 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:33.905009+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449462272 unmapped: 76275712 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:34.905161+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449462272 unmapped: 76275712 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a131a000/0x0/0x1bfc00000, data 0x2565b84/0x2794000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:35.905360+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449462272 unmapped: 76275712 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:36.905549+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4802325 data_alloc: 218103808 data_used: 15990784
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449470464 unmapped: 76267520 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:37.905694+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449470464 unmapped: 76267520 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:38.905888+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449470464 unmapped: 76267520 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:39.906003+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a130e000/0x0/0x1bfc00000, data 0x2571b84/0x27a0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449470464 unmapped: 76267520 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:40.906122+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a130e000/0x0/0x1bfc00000, data 0x2571b84/0x27a0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449470464 unmapped: 76267520 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:41.906241+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4802325 data_alloc: 218103808 data_used: 15990784
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449478656 unmapped: 76259328 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a130e000/0x0/0x1bfc00000, data 0x2571b84/0x27a0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:42.906423+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449478656 unmapped: 76259328 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:43.906629+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.248320580s of 13.613334656s, submitted: 9
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449478656 unmapped: 76259328 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:44.906788+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 425 handle_osd_map epochs [426,426], i have 426, src has [1,426]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd74a43c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 heartbeat osd_stat(store_statfs(0x1a130e000/0x0/0x1bfc00000, data 0x2571b84/0x27a0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450535424 unmapped: 75202560 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:45.906935+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450535424 unmapped: 75202560 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:46.907083+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3f400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4821151 data_alloc: 218103808 data_used: 17117184
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd81f4000 session 0x55bcd5cd5e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd6f3f400 session 0x55bcd6ffe780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450535424 unmapped: 75202560 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:47.907259+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450535424 unmapped: 75202560 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:48.907405+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450535424 unmapped: 75202560 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:49.907521+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450543616 unmapped: 75194368 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd81f4000 session 0x55bcd4f423c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd521e000 session 0x55bcd820a1e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd749d0e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:50.907706+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fa000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd81fa000 session 0x55bcd8238f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 heartbeat osd_stat(store_statfs(0x1a1307000/0x0/0x1bfc00000, data 0x25e284f/0x27a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461848576 unmapped: 67559424 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:51.907879+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7213e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd5cd54a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3f400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd6f3f400 session 0x55bcd5535860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd81f4000 session 0x55bcd7fd6000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd74a45a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4902445 data_alloc: 218103808 data_used: 17117184
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 78831616 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:52.908075+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 78831616 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:53.908284+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 78831616 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:54.908408+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 78831616 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:55.908553+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 78831616 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7eb41e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 heartbeat osd_stat(store_statfs(0x1a0837000/0x0/0x1bfc00000, data 0x30b285f/0x3277000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:56.908675+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4902445 data_alloc: 218103808 data_used: 17117184
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd7fd6b40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 78831616 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:57.908792+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 heartbeat osd_stat(store_statfs(0x1a0837000/0x0/0x1bfc00000, data 0x30b285f/0x3277000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3f400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd6f3f400 session 0x55bcd8238b40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.560773849s of 14.498044014s, submitted: 21
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd81f4000 session 0x55bcd820e1e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451846144 unmapped: 77561856 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:58.908965+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5351000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd987b800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451846144 unmapped: 77561856 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:59.909197+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 heartbeat osd_stat(store_statfs(0x1a0811000/0x0/0x1bfc00000, data 0x30d6892/0x329d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451862528 unmapped: 77545472 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:00.909399+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521f400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd521f400 session 0x55bcd7c9c000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451862528 unmapped: 77545472 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:01.909547+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7487680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4991495 data_alloc: 234881024 data_used: 28463104
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451878912 unmapped: 77529088 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:02.909694+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd7fd63c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3f400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451878912 unmapped: 77529088 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd6f3f400 session 0x55bcd75b9c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:03.909914+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdcdd4000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451878912 unmapped: 77529088 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:04.910045+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 heartbeat osd_stat(store_statfs(0x1a080f000/0x0/0x1bfc00000, data 0x30d68b5/0x329f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451878912 unmapped: 77529088 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:05.910178+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451878912 unmapped: 77529088 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:06.910288+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4993244 data_alloc: 234881024 data_used: 28475392
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451903488 unmapped: 77504512 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:07.910421+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 heartbeat osd_stat(store_statfs(0x1a080f000/0x0/0x1bfc00000, data 0x30d68b5/0x329f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451903488 unmapped: 77504512 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:08.910551+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451903488 unmapped: 77504512 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:09.910739+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 heartbeat osd_stat(store_statfs(0x1a080f000/0x0/0x1bfc00000, data 0x30d68b5/0x329f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451903488 unmapped: 77504512 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:10.910886+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.712766647s of 12.775095940s, submitted: 17
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453255168 unmapped: 76152832 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:11.911027+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5057974 data_alloc: 234881024 data_used: 29130752
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453345280 unmapped: 76062720 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:12.911166+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453525504 unmapped: 75882496 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:13.911387+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453533696 unmapped: 75874304 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:14.911940+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453533696 unmapped: 75874304 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:15.912051+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 heartbeat osd_stat(store_statfs(0x1a003f000/0x0/0x1bfc00000, data 0x389e8b5/0x3a67000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453533696 unmapped: 75874304 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:16.912188+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5063758 data_alloc: 234881024 data_used: 29274112
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453541888 unmapped: 75866112 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:17.912347+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453541888 unmapped: 75866112 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:18.912455+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 heartbeat osd_stat(store_statfs(0x1a003f000/0x0/0x1bfc00000, data 0x389e8b5/0x3a67000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453541888 unmapped: 75866112 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:19.912546+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453541888 unmapped: 75866112 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:20.912914+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453541888 unmapped: 75866112 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:21.913129+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5064558 data_alloc: 234881024 data_used: 29294592
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453541888 unmapped: 75866112 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:22.913271+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.941401482s of 12.116731644s, submitted: 55
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454852608 unmapped: 74555392 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:23.913513+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454852608 unmapped: 74555392 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:24.913657+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 heartbeat osd_stat(store_statfs(0x19ff75000/0x0/0x1bfc00000, data 0x39708b5/0x3b39000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454852608 unmapped: 74555392 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:25.913736+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454852608 unmapped: 74555392 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:26.913884+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd81f4000 session 0x55bcd7574000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcdcdd4000 session 0x55bcd7eb5860
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5072190 data_alloc: 234881024 data_used: 29294592
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454025216 unmapped: 75382784 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7210780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:27.914021+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454025216 unmapped: 75382784 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:28.914189+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 heartbeat osd_stat(store_statfs(0x19ff75000/0x0/0x1bfc00000, data 0x3970892/0x3b38000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454025216 unmapped: 75382784 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:29.914310+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454025216 unmapped: 75382784 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 heartbeat osd_stat(store_statfs(0x19ff75000/0x0/0x1bfc00000, data 0x3970892/0x3b38000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd5351000 session 0x55bcd5cd54a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd987b800 session 0x55bcd4f403c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:30.914424+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454025216 unmapped: 75382784 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:31.914574+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 heartbeat osd_stat(store_statfs(0x19ff75000/0x0/0x1bfc00000, data 0x3970892/0x3b38000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5067642 data_alloc: 234881024 data_used: 29278208
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454033408 unmapped: 75374592 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:32.914708+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd5336000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3f400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd6f3f400 session 0x55bcd79d6f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454049792 unmapped: 75358208 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:33.914897+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.793792725s of 10.926726341s, submitted: 51
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd521e000 session 0x55bcd749cf00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5351000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd5351000 session 0x55bcd5534f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd987b800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454057984 unmapped: 75350016 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:34.915069+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 427 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd7eb52c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 427 ms_handle_reset con 0x55bcd987b800 session 0x55bcd8239a40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 427 ms_handle_reset con 0x55bcd81f4000 session 0x55bcd7fd6960
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 427 ms_handle_reset con 0x55bcd521e000 session 0x55bcd4f40f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454066176 unmapped: 75341824 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 427 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd7c9c780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:35.915230+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5351000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 427 ms_handle_reset con 0x55bcd5351000 session 0x55bcd749dc20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454066176 unmapped: 75341824 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:36.915369+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a0047000/0x0/0x1bfc00000, data 0x383261f/0x3a66000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5057984 data_alloc: 234881024 data_used: 29282304
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 427 ms_handle_reset con 0x55bcd5512000 session 0x55bcd6891680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 427 ms_handle_reset con 0x55bcd81fac00 session 0x55bcd75750e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454074368 unmapped: 75333632 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:37.915477+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5351000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 427 ms_handle_reset con 0x55bcd521e000 session 0x55bcd820a3c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454074368 unmapped: 75333632 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:38.915616+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454082560 unmapped: 75325440 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:39.915753+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a0048000/0x0/0x1bfc00000, data 0x383261f/0x3a66000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454082560 unmapped: 75325440 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:40.915866+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454090752 unmapped: 75317248 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:41.916002+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5512000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 428 ms_handle_reset con 0x55bcd5512000 session 0x55bcd72101e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5061826 data_alloc: 234881024 data_used: 29372416
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454090752 unmapped: 75317248 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:42.916132+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd987b800
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 429 ms_handle_reset con 0x55bcd987b800 session 0x55bcd7213e00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 429 ms_handle_reset con 0x55bcd81f4000 session 0x55bcd74a43c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fc400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 429 ms_handle_reset con 0x55bcd81fc400 session 0x55bcd74af0e0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454098944 unmapped: 75309056 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:43.916303+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454098944 unmapped: 75309056 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:44.916444+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a0f43000/0x0/0x1bfc00000, data 0x2934dfb/0x2b6a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454098944 unmapped: 75309056 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:45.916585+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454098944 unmapped: 75309056 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:46.916729+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879242 data_alloc: 234881024 data_used: 19992576
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454107136 unmapped: 75300864 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:47.926608+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a0f43000/0x0/0x1bfc00000, data 0x2934dfb/0x2b6a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454107136 unmapped: 75300864 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:48.926788+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454107136 unmapped: 75300864 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.384347916s of 15.611342430s, submitted: 86
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:49.926999+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455204864 unmapped: 74203136 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:50.927118+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455204864 unmapped: 74203136 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:51.927238+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4889576 data_alloc: 234881024 data_used: 20553728
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455204864 unmapped: 74203136 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:52.927782+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0f40000/0x0/0x1bfc00000, data 0x293693a/0x2b6d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455204864 unmapped: 74203136 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:53.928419+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0f40000/0x0/0x1bfc00000, data 0x293693a/0x2b6d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0f40000/0x0/0x1bfc00000, data 0x293693a/0x2b6d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455204864 unmapped: 74203136 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:54.929071+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455213056 unmapped: 74194944 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:55.929295+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455213056 unmapped: 74194944 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:56.929798+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4889576 data_alloc: 234881024 data_used: 20553728
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455221248 unmapped: 74186752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:57.929979+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd7575c20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5351000 session 0x55bcd7fd6f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fc400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455221248 unmapped: 74186752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:58.930115+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd81fc400 session 0x55bcd5cd4f00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a21ff000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451272704 unmapped: 78135296 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:59.930326+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451272704 unmapped: 78135296 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:00.930515+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451272704 unmapped: 78135296 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:01.930703+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4657353 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451272704 unmapped: 78135296 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:02.930888+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451272704 unmapped: 78135296 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:03.931074+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451272704 unmapped: 78135296 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:04.931241+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a21ff000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451272704 unmapped: 78135296 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:05.931387+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a21ff000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451280896 unmapped: 78127104 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:06.931515+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4657353 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451280896 unmapped: 78127104 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:07.931685+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451280896 unmapped: 78127104 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:08.932112+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451280896 unmapped: 78127104 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:09.932254+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451280896 unmapped: 78127104 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:10.932409+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451280896 unmapped: 78127104 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:11.932556+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a21ff000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4657353 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451289088 unmapped: 78118912 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:12.932713+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451289088 unmapped: 78118912 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:13.932877+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a21ff000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451297280 unmapped: 78110720 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:14.933090+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a21ff000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451297280 unmapped: 78110720 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:15.933233+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451297280 unmapped: 78110720 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:16.933405+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4657353 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451297280 unmapped: 78110720 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:17.933607+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451305472 unmapped: 78102528 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:18.933798+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451305472 unmapped: 78102528 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:19.933939+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a21ff000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451313664 unmapped: 78094336 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:20.934056+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7bf5a40
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5512000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5512000 session 0x55bcd7bf5680
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7bf4d20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd7bf4780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5351000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451313664 unmapped: 78094336 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:21.934227+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.655097961s of 32.044475555s, submitted: 67
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5351000 session 0x55bcd820f2c0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fc400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd81fc400 session 0x55bcd820fc20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd81f4000 session 0x55bcd820e5a0
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd81f4000 session 0x55bcd820ed20
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd521e000 session 0x55bcd820e000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4734651 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451313664 unmapped: 78094336 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:22.934370+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451313664 unmapped: 78094336 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:23.934513+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:24.934669+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451313664 unmapped: 78094336 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:25.934949+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451313664 unmapped: 78094336 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a18a5000/0x0/0x1bfc00000, data 0x1fd58f7/0x2209000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:26.935097+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451330048 unmapped: 78077952 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4734651 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:27.935249+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451330048 unmapped: 78077952 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd820e780
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:28.935687+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451133440 unmapped: 78274560 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5351000
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fc400
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:29.936047+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451133440 unmapped: 78274560 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1881000/0x0/0x1bfc00000, data 0x1ff98f7/0x222d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:30.936340+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451428352 unmapped: 77979648 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:31.936566+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451428352 unmapped: 77979648 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:52 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:52 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4805964 data_alloc: 234881024 data_used: 17387520
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:32.936957+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451428352 unmapped: 77979648 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:33.937321+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451428352 unmapped: 77979648 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1881000/0x0/0x1bfc00000, data 0x1ff98f7/0x222d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:34.937453+0000)
Oct 02 13:20:52 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451428352 unmapped: 77979648 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:52 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:35.937698+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451428352 unmapped: 77979648 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:36.937968+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451436544 unmapped: 77971456 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4805964 data_alloc: 234881024 data_used: 17387520
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:37.938165+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451436544 unmapped: 77971456 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:38.938395+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451436544 unmapped: 77971456 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1881000/0x0/0x1bfc00000, data 0x1ff98f7/0x222d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:39.938544+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451436544 unmapped: 77971456 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.866851807s of 18.948247910s, submitted: 19
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:40.938688+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452280320 unmapped: 77127680 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1881000/0x0/0x1bfc00000, data 0x1ff98f7/0x222d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [0,0,0,0,0,6,0,10])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:41.938801+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453902336 unmapped: 75505664 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894062 data_alloc: 234881024 data_used: 18345984
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:42.938932+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453902336 unmapped: 75505664 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:43.939176+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455221248 unmapped: 74186752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:44.939331+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455221248 unmapped: 74186752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0e47000/0x0/0x1bfc00000, data 0x2a2b8f7/0x2c5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:45.939469+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455221248 unmapped: 74186752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:46.939647+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455221248 unmapped: 74186752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4908178 data_alloc: 234881024 data_used: 18374656
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:47.939779+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455221248 unmapped: 74186752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:48.939961+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455229440 unmapped: 74178560 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:49.940215+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455229440 unmapped: 74178560 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0e28000/0x0/0x1bfc00000, data 0x2a528f7/0x2c86000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5351000 session 0x55bcd820e3c0
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd81fc400 session 0x55bcd55874a0
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:50.940359+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455229440 unmapped: 74178560 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0e28000/0x0/0x1bfc00000, data 0x2a528f7/0x2c86000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:51.940500+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455229440 unmapped: 74178560 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4907530 data_alloc: 234881024 data_used: 18374656
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:52.940656+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:53.940896+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0e28000/0x0/0x1bfc00000, data 0x2a528f7/0x2c86000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:54.941071+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:55.941219+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:56.941371+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4907530 data_alloc: 234881024 data_used: 18374656
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:57.941558+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7bf41e0
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0e28000/0x0/0x1bfc00000, data 0x2a528f7/0x2c86000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:58.941727+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:59.941886+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5351000
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.596574783s of 19.309139252s, submitted: 103
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:00.942039+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0e28000/0x0/0x1bfc00000, data 0x2a528f7/0x2c86000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:01.942151+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4909422 data_alloc: 234881024 data_used: 18493440
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:02.942299+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:03.942456+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:04.942584+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:05.942788+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0e28000/0x0/0x1bfc00000, data 0x2a528f7/0x2c86000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:06.942937+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0e28000/0x0/0x1bfc00000, data 0x2a528f7/0x2c86000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4909422 data_alloc: 234881024 data_used: 18493440
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:07.943103+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0e28000/0x0/0x1bfc00000, data 0x2a528f7/0x2c86000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0e28000/0x0/0x1bfc00000, data 0x2a528f7/0x2c86000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:08.943266+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:09.943457+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0e28000/0x0/0x1bfc00000, data 0x2a528f7/0x2c86000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:10.943609+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:11.943796+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.275099754s of 12.280827522s, submitted: 1
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4915422 data_alloc: 234881024 data_used: 18903040
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:12.943982+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:13.944188+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:14.944364+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:15.944568+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0e28000/0x0/0x1bfc00000, data 0x2a528f7/0x2c86000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:16.944727+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4918622 data_alloc: 234881024 data_used: 19460096
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:17.944918+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:18.945052+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:19.945284+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0e28000/0x0/0x1bfc00000, data 0x2a528f7/0x2c86000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:20.945498+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd82392c0
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5351000 session 0x55bcd82390e0
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4000
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd81f4000 session 0x55bcd5cd4960
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:21.945719+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449519616 unmapped: 79888384 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672226 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:22.945891+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449519616 unmapped: 79888384 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:23.946064+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449519616 unmapped: 79888384 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:24.946272+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449519616 unmapped: 79888384 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:25.946438+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449519616 unmapped: 79888384 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:26.946572+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449519616 unmapped: 79888384 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:27.946694+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672226 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449519616 unmapped: 79888384 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:28.946807+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449519616 unmapped: 79888384 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:29.947066+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449527808 unmapped: 79880192 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:30.947255+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449527808 unmapped: 79880192 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:31.947473+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449527808 unmapped: 79880192 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:32.947700+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672226 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449527808 unmapped: 79880192 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:33.947917+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449527808 unmapped: 79880192 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:34.948134+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449527808 unmapped: 79880192 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:35.948359+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449527808 unmapped: 79880192 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bce219b000 session 0x55bcd7219680
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd81f4800 session 0x55bcd5336b40
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5d7fc00 session 0x55bcd7eb54a0
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdfcda000
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:36.948530+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449536000 unmapped: 79872000 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:37.948672+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672226 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449544192 unmapped: 79863808 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:38.948942+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449544192 unmapped: 79863808 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:39.949166+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449544192 unmapped: 79863808 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:40.949428+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449544192 unmapped: 79863808 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:41.949653+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449544192 unmapped: 79863808 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:42.949919+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672226 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449544192 unmapped: 79863808 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:43.950074+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449544192 unmapped: 79863808 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7fc00
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5d7fc00 session 0x55bcd81c05a0
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4000
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd81f4000 session 0x55bcd7eb43c0
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bce219b000
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bce219b000 session 0x55bcd749cb40
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd987b800
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd987b800 session 0x55bcd53385a0
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5c3b000
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:44.950455+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.354831696s of 32.466384888s, submitted: 51
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5c3b000 session 0x55bcd5529c20
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5c3b000
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5c3b000 session 0x55bcd82aef00
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7fc00
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5d7fc00 session 0x55bcd74a54a0
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4000
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449552384 unmapped: 79855616 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd81f4000 session 0x55bcd7eb4b40
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd987b800
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd987b800 session 0x55bcd7211860
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:45.950613+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449560576 unmapped: 79847424 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:46.950786+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449560576 unmapped: 79847424 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:47.950932+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4686822 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449560576 unmapped: 79847424 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:48.951102+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449560576 unmapped: 79847424 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bce219b000
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bce219b000 session 0x55bcd81c1c20
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:49.951301+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449560576 unmapped: 79847424 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2117000/0x0/0x1bfc00000, data 0x17638f7/0x1997000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bce219b000
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bce219b000 session 0x55bcd7804780
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5c3b000
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5c3b000 session 0x55bcd55294a0
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7fc00
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:50.951456+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5d7fc00 session 0x55bcd55283c0
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449560576 unmapped: 79847424 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2117000/0x0/0x1bfc00000, data 0x17638f7/0x1997000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4000
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd987b800
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:51.952309+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449560576 unmapped: 79847424 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:52.952498+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4690502 data_alloc: 218103808 data_used: 8343552
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449568768 unmapped: 79839232 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:53.952737+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449568768 unmapped: 79839232 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:54.952881+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449568768 unmapped: 79839232 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:55.953031+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2117000/0x0/0x1bfc00000, data 0x17638f7/0x1997000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449576960 unmapped: 79831040 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:56.953187+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449576960 unmapped: 79831040 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:57.953392+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4690502 data_alloc: 218103808 data_used: 8343552
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449576960 unmapped: 79831040 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:58.953610+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449585152 unmapped: 79822848 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:59.953921+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2117000/0x0/0x1bfc00000, data 0x17638f7/0x1997000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449585152 unmapped: 79822848 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:00.954231+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449585152 unmapped: 79822848 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:01.954399+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449593344 unmapped: 79814656 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:02.954620+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4690502 data_alloc: 218103808 data_used: 8343552
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449593344 unmapped: 79814656 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.766798019s of 18.784076691s, submitted: 8
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:03.954783+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451600384 unmapped: 77807616 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:04.954944+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451600384 unmapped: 77807616 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1d58000/0x0/0x1bfc00000, data 0x1b1a8f7/0x1d4e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:05.955140+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451821568 unmapped: 77586432 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:06.955286+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451821568 unmapped: 77586432 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:07.955482+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4733898 data_alloc: 218103808 data_used: 8380416
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451821568 unmapped: 77586432 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1cbb000/0x0/0x1bfc00000, data 0x1bb78f7/0x1deb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1cbb000/0x0/0x1bfc00000, data 0x1bb78f7/0x1deb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:08.955596+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451821568 unmapped: 77586432 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:09.955722+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451821568 unmapped: 77586432 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:10.955887+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451223552 unmapped: 78184448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd81f4000 session 0x55bcd820a1e0
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd987b800 session 0x55bcd81c1c20
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:11.956011+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451223552 unmapped: 78184448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:12.956215+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4729870 data_alloc: 218103808 data_used: 8380416
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451223552 unmapped: 78184448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:13.956393+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451223552 unmapped: 78184448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1c9f000/0x0/0x1bfc00000, data 0x1bdb8f7/0x1e0f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:14.956536+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1c9f000/0x0/0x1bfc00000, data 0x1bdb8f7/0x1e0f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451223552 unmapped: 78184448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:15.956692+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451223552 unmapped: 78184448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:16.956876+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451223552 unmapped: 78184448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:17.957005+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5c3b000
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7fc00
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4729870 data_alloc: 218103808 data_used: 8380416
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451231744 unmapped: 78176256 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:18.957194+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451231744 unmapped: 78176256 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:19.957326+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451231744 unmapped: 78176256 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1c9f000/0x0/0x1bfc00000, data 0x1bdb8f7/0x1e0f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:20.957467+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451231744 unmapped: 78176256 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1c9f000/0x0/0x1bfc00000, data 0x1bdb8f7/0x1e0f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:21.957707+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451231744 unmapped: 78176256 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5c3b000 session 0x55bcd5534000
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5d7fc00 session 0x55bcd5529c20
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:22.957931+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4729870 data_alloc: 218103808 data_used: 8380416
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4000
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451239936 unmapped: 78168064 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.845039368s of 20.065383911s, submitted: 66
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:23.958234+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451215360 unmapped: 78192640 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd81f4000 session 0x55bcd79d7860
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:24.958425+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451223552 unmapped: 78184448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:25.958571+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451231744 unmapped: 78176256 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:26.958737+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451231744 unmapped: 78176256 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:27.958890+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451239936 unmapped: 78168064 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:28.959072+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451239936 unmapped: 78168064 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:29.959218+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451239936 unmapped: 78168064 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:30.959422+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451239936 unmapped: 78168064 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:31.959661+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451239936 unmapped: 78168064 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:32.959949+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451239936 unmapped: 78168064 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:33.960238+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451239936 unmapped: 78168064 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:34.960427+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451248128 unmapped: 78159872 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:35.960577+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451248128 unmapped: 78159872 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:36.960713+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451248128 unmapped: 78159872 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:37.960852+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451248128 unmapped: 78159872 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:38.960975+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451248128 unmapped: 78159872 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:39.961098+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451248128 unmapped: 78159872 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:40.961220+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451248128 unmapped: 78159872 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:41.961343+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451256320 unmapped: 78151680 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:42.961528+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451256320 unmapped: 78151680 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:43.961889+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451256320 unmapped: 78151680 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:44.962171+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451256320 unmapped: 78151680 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:45.962373+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451256320 unmapped: 78151680 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:46.962660+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451256320 unmapped: 78151680 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:47.962888+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451256320 unmapped: 78151680 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:48.963156+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451256320 unmapped: 78151680 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:49.963347+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451264512 unmapped: 78143488 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:50.963574+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451272704 unmapped: 78135296 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:51.963787+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451280896 unmapped: 78127104 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:52.963949+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451280896 unmapped: 78127104 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:53.964174+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451280896 unmapped: 78127104 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:54.964456+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451280896 unmapped: 78127104 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:55.964677+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451280896 unmapped: 78127104 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:56.964821+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451289088 unmapped: 78118912 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:57.964989+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451289088 unmapped: 78118912 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:58.965204+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451297280 unmapped: 78110720 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:59.965404+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451297280 unmapped: 78110720 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:00.965655+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451297280 unmapped: 78110720 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:01.965852+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451297280 unmapped: 78110720 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:02.965994+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451297280 unmapped: 78110720 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:03.966190+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451297280 unmapped: 78110720 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:04.966393+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451297280 unmapped: 78110720 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:05.966606+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451305472 unmapped: 78102528 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:06.966762+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451305472 unmapped: 78102528 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:07.966896+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451305472 unmapped: 78102528 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:08.967027+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451305472 unmapped: 78102528 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:09.967236+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451305472 unmapped: 78102528 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:10.967399+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451305472 unmapped: 78102528 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:11.967599+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451313664 unmapped: 78094336 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:12.967864+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451313664 unmapped: 78094336 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:13.968119+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451321856 unmapped: 78086144 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:14.968337+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451321856 unmapped: 78086144 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:15.968526+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451321856 unmapped: 78086144 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:16.968686+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451321856 unmapped: 78086144 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:17.968944+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451330048 unmapped: 78077952 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:18.969138+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451330048 unmapped: 78077952 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:19.969387+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451330048 unmapped: 78077952 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:20.969685+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451330048 unmapped: 78077952 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:21.969869+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451338240 unmapped: 78069760 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:22.970040+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451338240 unmapped: 78069760 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:23.970260+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451338240 unmapped: 78069760 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:24.970404+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451338240 unmapped: 78069760 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:25.970529+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451338240 unmapped: 78069760 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:26.970717+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451338240 unmapped: 78069760 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:27.970870+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451338240 unmapped: 78069760 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:28.971094+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451338240 unmapped: 78069760 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:29.971261+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451354624 unmapped: 78053376 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:30.971400+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451354624 unmapped: 78053376 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:31.971597+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451354624 unmapped: 78053376 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:32.971758+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451354624 unmapped: 78053376 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:33.971989+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451354624 unmapped: 78053376 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:34.972279+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451354624 unmapped: 78053376 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:35.972448+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451354624 unmapped: 78053376 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:36.972617+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451354624 unmapped: 78053376 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:37.972764+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451362816 unmapped: 78045184 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:38.972913+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451371008 unmapped: 78036992 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:39.973055+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451371008 unmapped: 78036992 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:40.973184+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd76af000 session 0x55bcd820bc20
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bce219b000
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451371008 unmapped: 78036992 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:41.973328+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451371008 unmapped: 78036992 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:42.973463+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451379200 unmapped: 78028800 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:43.973626+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451379200 unmapped: 78028800 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:44.973757+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451379200 unmapped: 78028800 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:45.973948+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451387392 unmapped: 78020608 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:46.974118+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451387392 unmapped: 78020608 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:47.974322+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451387392 unmapped: 78020608 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:48.974472+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451387392 unmapped: 78020608 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:49.974646+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451387392 unmapped: 78020608 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:50.974791+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451387392 unmapped: 78020608 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:51.974964+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451395584 unmapped: 78012416 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:52.975140+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451395584 unmapped: 78012416 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:53.975319+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451403776 unmapped: 78004224 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:54.975478+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451403776 unmapped: 78004224 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:55.975702+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451403776 unmapped: 78004224 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:56.975883+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451403776 unmapped: 78004224 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:57.976036+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451411968 unmapped: 77996032 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:58.976179+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451420160 unmapped: 77987840 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:59.976396+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451420160 unmapped: 77987840 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:00.976600+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451420160 unmapped: 77987840 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:01.976921+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451428352 unmapped: 77979648 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:02.977089+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451428352 unmapped: 77979648 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:03.977278+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451428352 unmapped: 77979648 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:04.977488+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451428352 unmapped: 77979648 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:05.977638+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:06.977771+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:07.977892+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:08.978036+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:09.978123+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:10.978229+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:11.978403+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:12.978577+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:13.978757+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:14.978896+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:15.979023+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:16.979151+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:17.979275+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:20:53 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:20:53 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451461120 unmapped: 77946880 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:18.979392+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451461120 unmapped: 77946880 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:19.979500+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: do_command 'config diff' '{prefix=config diff}'
Oct 02 13:20:53 compute-0 ceph-osd[83986]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 02 13:20:53 compute-0 ceph-osd[83986]: do_command 'config show' '{prefix=config show}'
Oct 02 13:20:53 compute-0 ceph-osd[83986]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 02 13:20:53 compute-0 ceph-osd[83986]: do_command 'counter dump' '{prefix=counter dump}'
Oct 02 13:20:53 compute-0 ceph-osd[83986]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 02 13:20:53 compute-0 ceph-osd[83986]: do_command 'counter schema' '{prefix=counter schema}'
Oct 02 13:20:53 compute-0 ceph-osd[83986]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:20.979625+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451452928 unmapped: 77955072 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:20:53 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:20:53 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:21.979765+0000)
Oct 02 13:20:53 compute-0 ceph-osd[83986]: do_command 'log dump' '{prefix=log dump}'
Oct 02 13:20:53 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.38721 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:53 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47291 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:53 compute-0 nova_compute[257802]: 2025-10-02 13:20:53.116 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:53 compute-0 ceph-mon[73607]: from='client.47225 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:53 compute-0 ceph-mon[73607]: from='client.47240 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:53 compute-0 ceph-mon[73607]: from='client.38673 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:53 compute-0 ceph-mon[73607]: from='client.47252 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2413002667' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:20:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2447536203' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:20:53 compute-0 ceph-mon[73607]: from='client.38688 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/139641157' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:20:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2366614009' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:20:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2931863105' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:20:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1716509811' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:20:53 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47306 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 13:20:53 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1898529307' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:20:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 02 13:20:53 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:20:53 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 13:20:53 compute-0 nova_compute[257802]: 2025-10-02 13:20:53.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:53 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.38739 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:53 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47312 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 02 13:20:53 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1920209949' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:20:53 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47321 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:53 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47327 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:20:54 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46282 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:54 compute-0 ceph-mgr[73901]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:20:54 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T13:20:54.002+0000 7f4c74221640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:20:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct 02 13:20:54 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3841067862' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 13:20:54 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.38772 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:20:54 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47339 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:20:54 compute-0 ceph-mon[73607]: from='client.47258 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:54 compute-0 ceph-mon[73607]: from='client.38700 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:54 compute-0 ceph-mon[73607]: from='client.47270 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:54 compute-0 ceph-mon[73607]: from='client.38706 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:54 compute-0 ceph-mon[73607]: from='client.46243 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:54 compute-0 ceph-mon[73607]: pgmap v3697: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:54 compute-0 ceph-mon[73607]: from='client.38721 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:54 compute-0 ceph-mon[73607]: from='client.47291 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:54 compute-0 ceph-mon[73607]: from='client.47306 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/319070693' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:20:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1898529307' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:20:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/514734843' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:20:54 compute-0 ceph-mon[73607]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:20:54 compute-0 ceph-mon[73607]: from='client.38739 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:54 compute-0 ceph-mon[73607]: from='client.47312 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1920209949' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:20:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1516179473' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:20:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2268979104' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 13:20:54 compute-0 ceph-mon[73607]: from='client.47321 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:54 compute-0 ceph-mon[73607]: from='client.47327 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:20:54 compute-0 ceph-mon[73607]: from='client.46282 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1412689173' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 13:20:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3841067862' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 13:20:54 compute-0 crontab[419344]: (root) LIST (root)
Oct 02 13:20:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:54.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:54 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.38790 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:20:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:54.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:54 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46318 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:54 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47363 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:20:54 compute-0 ceph-mgr[73901]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:20:54 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T13:20:54.940+0000 7f4c74221640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:20:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3698: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct 02 13:20:54 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1606772114' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 13:20:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:20:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:20:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:20:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:20:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:20:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:20:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:20:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:20:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:20:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:20:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:20:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:20:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:20:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:20:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:20:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:20:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:20:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:20:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:20:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:20:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:20:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:20:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:20:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:20:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2535177969' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:20:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:20:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2535177969' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:20:55 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.38820 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:20:55 compute-0 ceph-mgr[73901]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:20:55 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T13:20:55.338+0000 7f4c74221640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:20:55 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46333 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct 02 13:20:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3817434855' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 13:20:55 compute-0 ceph-mon[73607]: from='client.38772 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:20:55 compute-0 ceph-mon[73607]: from='client.47339 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:20:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1610077542' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 13:20:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2665894373' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 13:20:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/47274639' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 13:20:55 compute-0 ceph-mon[73607]: from='client.38790 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:20:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4041968854' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:20:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/768032342' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 13:20:55 compute-0 ceph-mon[73607]: from='client.46318 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:55 compute-0 ceph-mon[73607]: from='client.47363 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:20:55 compute-0 ceph-mon[73607]: pgmap v3698: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1606772114' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 13:20:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2535177969' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:20:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/2535177969' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:20:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct 02 13:20:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1774979115' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 13:20:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 02 13:20:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:20:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct 02 13:20:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3482400831' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 13:20:55 compute-0 nova_compute[257802]: 2025-10-02 13:20:55.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct 02 13:20:56 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2440866303' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 13:20:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct 02 13:20:56 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/448923522' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 13:20:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:56.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/682358552' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 13:20:56 compute-0 ceph-mon[73607]: from='client.38820 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:20:56 compute-0 ceph-mon[73607]: from='client.46333 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2041360105' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:20:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3803297933' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 13:20:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3817434855' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 13:20:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/234934425' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 13:20:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3104217782' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 13:20:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/348385409' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:20:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1174465865' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:20:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1774979115' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 13:20:56 compute-0 ceph-mon[73607]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:20:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3482400831' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 13:20:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3545378520' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 13:20:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1195385828' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 13:20:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2440866303' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 13:20:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3174335008' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:20:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/448923522' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 13:20:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/274695586' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 13:20:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2299104660' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:20:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:56.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct 02 13:20:56 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/179954840' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 13:20:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct 02 13:20:56 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2903993649' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 13:20:56 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46390 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:56 compute-0 ceph-mgr[73901]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 13:20:56 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T13:20:56.719+0000 7f4c74221640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 13:20:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3699: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct 02 13:20:57 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1755298732' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 13:20:57 compute-0 nova_compute[257802]: 2025-10-02 13:20:57.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:57 compute-0 nova_compute[257802]: 2025-10-02 13:20:57.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:20:57 compute-0 nova_compute[257802]: 2025-10-02 13:20:57.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:20:57 compute-0 nova_compute[257802]: 2025-10-02 13:20:57.115 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:20:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Oct 02 13:20:57 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/801385131' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 13:20:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct 02 13:20:57 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3073040172' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 13:20:57 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46411 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3830493280' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 13:20:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/179954840' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 13:20:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3720196201' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 13:20:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2903993649' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 13:20:57 compute-0 ceph-mon[73607]: from='client.46390 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3949948913' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 13:20:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/394210393' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 13:20:57 compute-0 ceph-mon[73607]: pgmap v3699: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4060181063' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 13:20:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1755298732' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 13:20:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/693468167' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 13:20:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1842640529' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:20:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2522821367' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 13:20:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/647639629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:20:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1062998975' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 13:20:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/801385131' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 13:20:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3073040172' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 13:20:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/740542697' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 13:20:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 02 13:20:57 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4281951078' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 13:20:57 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Oct 02 13:20:57 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1420828427' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 13:20:57 compute-0 systemd[1]: Starting Hostname Service...
Oct 02 13:20:57 compute-0 systemd[1]: Started Hostname Service.
Oct 02 13:20:58 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46423 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Oct 02 13:20:58 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3318755113' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 13:20:58 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47501 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Oct 02 13:20:58 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2376068391' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 13:20:58 compute-0 nova_compute[257802]: 2025-10-02 13:20:58.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:58 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47510 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:20:58 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46435 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:58.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:58 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.38943 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:20:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:20:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:58.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:20:58 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.38949 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:20:58 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47522 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:58 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47528 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:20:58 compute-0 ceph-mon[73607]: from='client.46411 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2022479561' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 13:20:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2482861771' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 13:20:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4281951078' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 13:20:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1420828427' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 13:20:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3150948890' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 13:20:58 compute-0 ceph-mon[73607]: from='client.46423 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3318755113' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 13:20:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2376068391' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 13:20:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2391751382' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:20:58 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46450 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:58 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.38958 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3700: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:58 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.38964 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:20:59 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47543 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:20:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 02 13:20:59 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3014659111' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:20:59 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46465 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:59 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.38985 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:20:59 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47552 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:20:59 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46480 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:59 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.39006 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:20:59 compute-0 ceph-mon[73607]: from='client.47501 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:59 compute-0 ceph-mon[73607]: from='client.47510 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:20:59 compute-0 ceph-mon[73607]: from='client.46435 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:59 compute-0 ceph-mon[73607]: from='client.38943 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:59 compute-0 ceph-mon[73607]: from='client.38949 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:20:59 compute-0 ceph-mon[73607]: from='client.47522 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:59 compute-0 ceph-mon[73607]: from='client.47528 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:20:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2259120394' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:20:59 compute-0 ceph-mon[73607]: from='client.46450 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:59 compute-0 ceph-mon[73607]: from='client.38958 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:20:59 compute-0 ceph-mon[73607]: pgmap v3700: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:20:59 compute-0 ceph-mon[73607]: from='client.38964 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:20:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3014659111' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:20:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1993493330' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 13:20:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4139500642' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:20:59 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Oct 02 13:20:59 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2301744227' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 13:20:59 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47570 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:20:59 compute-0 podman[420104]: 2025-10-02 13:20:59.949957046 +0000 UTC m=+0.081490722 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 02 13:20:59 compute-0 podman[420105]: 2025-10-02 13:20:59.952915997 +0000 UTC m=+0.082362202 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 13:20:59 compute-0 podman[420103]: 2025-10-02 13:20:59.969817124 +0000 UTC m=+0.101418742 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:21:00 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46495 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:00 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.39018 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Oct 02 13:21:00 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1147398677' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 13:21:00 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47591 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:00.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:00 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46513 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:00 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.39030 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:00.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:00 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47606 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 02 13:21:00 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1643488527' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:21:00 compute-0 ceph-mon[73607]: from='client.47543 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:00 compute-0 ceph-mon[73607]: from='client.46465 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:00 compute-0 ceph-mon[73607]: from='client.38985 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:00 compute-0 ceph-mon[73607]: from='client.47552 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:00 compute-0 ceph-mon[73607]: from='client.46480 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:00 compute-0 ceph-mon[73607]: from='client.39006 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2301744227' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 13:21:00 compute-0 ceph-mon[73607]: from='client.47570 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2958749579' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 13:21:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3775449968' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:21:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1147398677' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 13:21:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1016760165' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:21:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3958445490' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 13:21:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1643488527' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:21:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1234391447' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 13:21:00 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.39045 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:00 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46528 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:00 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:21:00 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:21:00 compute-0 nova_compute[257802]: 2025-10-02 13:21:00.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3701: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Oct 02 13:21:01 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3494839982' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 13:21:01 compute-0 nova_compute[257802]: 2025-10-02 13:21:01.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:21:01 compute-0 nova_compute[257802]: 2025-10-02 13:21:01.122 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:21:01 compute-0 nova_compute[257802]: 2025-10-02 13:21:01.123 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:21:01 compute-0 nova_compute[257802]: 2025-10-02 13:21:01.123 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:21:01 compute-0 nova_compute[257802]: 2025-10-02 13:21:01.123 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:21:01 compute-0 nova_compute[257802]: 2025-10-02 13:21:01.123 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:21:01 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:21:01 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:21:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:21:01 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2940389635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:21:01 compute-0 nova_compute[257802]: 2025-10-02 13:21:01.554 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:21:01 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46561 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:01 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T13:21:01.681+0000 7f4c74221640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:21:01 compute-0 ceph-mgr[73901]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:21:01 compute-0 nova_compute[257802]: 2025-10-02 13:21:01.715 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:21:01 compute-0 nova_compute[257802]: 2025-10-02 13:21:01.717 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3851MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:21:01 compute-0 nova_compute[257802]: 2025-10-02 13:21:01.718 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:21:01 compute-0 nova_compute[257802]: 2025-10-02 13:21:01.718 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:21:01 compute-0 nova_compute[257802]: 2025-10-02 13:21:01.813 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:21:01 compute-0 nova_compute[257802]: 2025-10-02 13:21:01.814 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:21:01 compute-0 nova_compute[257802]: 2025-10-02 13:21:01.837 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing inventories for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 13:21:01 compute-0 nova_compute[257802]: 2025-10-02 13:21:01.855 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating ProviderTree inventory for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 13:21:01 compute-0 nova_compute[257802]: 2025-10-02 13:21:01.856 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating inventory in ProviderTree for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 13:21:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Oct 02 13:21:01 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1827993771' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 13:21:01 compute-0 nova_compute[257802]: 2025-10-02 13:21:01.885 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing aggregate associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 13:21:01 compute-0 nova_compute[257802]: 2025-10-02 13:21:01.909 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing trait associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 13:21:01 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47663 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:01 compute-0 nova_compute[257802]: 2025-10-02 13:21:01.933 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:21:02 compute-0 ceph-mon[73607]: from='client.46495 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:02 compute-0 ceph-mon[73607]: from='client.39018 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:02 compute-0 ceph-mon[73607]: from='client.47591 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:02 compute-0 ceph-mon[73607]: from='client.46513 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:02 compute-0 ceph-mon[73607]: from='client.39030 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:02 compute-0 ceph-mon[73607]: from='client.47606 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:02 compute-0 ceph-mon[73607]: from='client.39045 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:02 compute-0 ceph-mon[73607]: from='client.46528 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:02 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:21:02 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:21:02 compute-0 ceph-mon[73607]: pgmap v3701: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3494839982' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 13:21:02 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:21:02 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:21:02 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:21:02 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:21:02 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:21:02 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:21:02 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:21:02 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:21:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2753875291' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 13:21:02 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:21:02 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:21:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2940389635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:21:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/707602679' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 13:21:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/556070882' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 13:21:02 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.39117 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:21:02 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3886332830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:21:02 compute-0 nova_compute[257802]: 2025-10-02 13:21:02.470 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:21:02 compute-0 nova_compute[257802]: 2025-10-02 13:21:02.475 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:21:02 compute-0 nova_compute[257802]: 2025-10-02 13:21:02.498 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:21:02 compute-0 nova_compute[257802]: 2025-10-02 13:21:02.499 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:21:02 compute-0 nova_compute[257802]: 2025-10-02 13:21:02.499 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:21:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:02.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:02.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Oct 02 13:21:02 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4143781819' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 13:21:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3702: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Oct 02 13:21:03 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2347908466' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 13:21:03 compute-0 ceph-mon[73607]: from='client.46561 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1827993771' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 13:21:03 compute-0 ceph-mon[73607]: from='client.47663 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2910516409' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 13:21:03 compute-0 ceph-mon[73607]: from='client.39117 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1368806525' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 13:21:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3241487011' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 13:21:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3886332830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:21:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2344270193' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 13:21:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4143781819' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 13:21:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/675020617' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 13:21:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4212749076' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 13:21:03 compute-0 ceph-mon[73607]: pgmap v3702: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2480337755' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 13:21:03 compute-0 nova_compute[257802]: 2025-10-02 13:21:03.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Oct 02 13:21:03 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2616862150' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 13:21:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Oct 02 13:21:03 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/643439206' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 13:21:04 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47723 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2347908466' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 13:21:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2755522120' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 13:21:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1933385493' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 13:21:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2616862150' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 13:21:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1480918496' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 13:21:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1840728467' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 13:21:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2922471195' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 13:21:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/643439206' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 13:21:04 compute-0 ceph-mon[73607]: from='client.47723 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/301432652' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 13:21:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/933441506' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 13:21:04 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.39174 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:04.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:04.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Oct 02 13:21:04 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1872239937' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 13:21:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3703: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Oct 02 13:21:05 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2734751516' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 13:21:05 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47762 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:05 compute-0 ceph-mon[73607]: from='client.39174 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2024468315' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 13:21:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1509359919' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 13:21:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1223793828' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 13:21:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1872239937' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 13:21:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3781967353' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 13:21:05 compute-0 ceph-mon[73607]: pgmap v3703: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/479756167' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 13:21:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2734751516' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 13:21:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2043791923' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 13:21:05 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.39210 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:05 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46681 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:05 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46690 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Oct 02 13:21:05 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/465571603' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 02 13:21:05 compute-0 nova_compute[257802]: 2025-10-02 13:21:05.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:06 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47786 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:06 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46696 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:06 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46702 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:06 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.39225 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:06 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47795 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:21:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:06.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:21:06 compute-0 ceph-mon[73607]: from='client.47762 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:06 compute-0 ceph-mon[73607]: from='client.39210 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:06 compute-0 ceph-mon[73607]: from='client.46681 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:06 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3719616468' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 02 13:21:06 compute-0 ceph-mon[73607]: from='client.46690 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:06 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/465571603' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 02 13:21:06 compute-0 ceph-mon[73607]: from='client.47786 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:06 compute-0 ceph-mon[73607]: from='client.46696 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:06 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.39240 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:06 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46717 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:06.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3704: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Oct 02 13:21:06 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3256259752' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 02 13:21:06 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46732 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Oct 02 13:21:07 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2483361935' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 02 13:21:07 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46741 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:07 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47840 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Oct 02 13:21:07 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3016525263' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 13:21:07 compute-0 ceph-mon[73607]: from='client.46702 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:07 compute-0 ceph-mon[73607]: from='client.39225 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:07 compute-0 ceph-mon[73607]: from='client.47795 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:07 compute-0 ceph-mon[73607]: from='client.39240 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:07 compute-0 ceph-mon[73607]: from='client.46717 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3208837824' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 02 13:21:07 compute-0 ceph-mon[73607]: pgmap v3704: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3256259752' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 02 13:21:07 compute-0 ceph-mon[73607]: from='client.46732 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3571929272' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 13:21:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3067747308' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 02 13:21:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2483361935' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 02 13:21:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3016525263' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 13:21:07 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.39273 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:07 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47849 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:07 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:07 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:21:07 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:07 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:21:07 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:07 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:21:07 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:07 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:21:07 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:07 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:21:07 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:07 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:21:07 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:07 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:21:07 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:07 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:21:07 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:07 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:21:07 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:07 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:21:07 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:07 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:21:07 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46747 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:08 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.39285 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:08 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:08 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:21:08 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:08 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:21:08 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:08 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:21:08 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:08 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:21:08 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:08 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:21:08 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:08 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:21:08 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:08 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:21:08 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:08 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:21:08 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:08 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:21:08 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:08 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:21:08 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:08 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:21:08 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46759 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:08 compute-0 nova_compute[257802]: 2025-10-02 13:21:08.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Oct 02 13:21:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4264273198' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 02 13:21:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:08.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:08 compute-0 ceph-mon[73607]: from='client.46741 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:08 compute-0 ceph-mon[73607]: from='client.47840 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:08 compute-0 ceph-mon[73607]: from='client.39273 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:08 compute-0 ceph-mon[73607]: from='client.47849 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:08 compute-0 ceph-mon[73607]: from='client.46747 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2609548251' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:21:08 compute-0 ceph-mon[73607]: from='client.39285 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2494512526' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 02 13:21:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/18796829' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 13:21:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4264273198' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 02 13:21:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/590510402' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 02 13:21:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:21:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:21:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:08.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Oct 02 13:21:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2568995041' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 02 13:21:08 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47867 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3705: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:09 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.39315 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:09 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47888 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:09 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.39330 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:09 compute-0 ovs-appctl[421987]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 02 13:21:09 compute-0 ovs-appctl[421991]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 02 13:21:09 compute-0 ceph-mon[73607]: from='client.46759 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:09 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:21:09 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:21:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2568995041' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 02 13:21:09 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:21:09 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:21:09 compute-0 ceph-mon[73607]: from='client.47867 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:09 compute-0 ceph-mon[73607]: pgmap v3705: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:09 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:21:09 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:21:09 compute-0 ceph-mon[73607]: from='client.39315 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2560908634' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 13:21:09 compute-0 ovs-appctl[421995]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 02 13:21:09 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47897 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 02 13:21:09 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1709111309' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:21:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Oct 02 13:21:10 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3720730746' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 02 13:21:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:10.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:10.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:10 compute-0 nova_compute[257802]: 2025-10-02 13:21:10.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3706: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:10 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47921 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Oct 02 13:21:11 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1409656405' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 02 13:21:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Oct 02 13:21:11 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4152142505' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 13:21:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:11 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.39369 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:11 compute-0 sudo[422574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:11 compute-0 sudo[422574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:11 compute-0 sudo[422574]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:12 compute-0 podman[422544]: 2025-10-02 13:21:12.022904572 +0000 UTC m=+0.158172886 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 13:21:12 compute-0 sudo[422612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:12 compute-0 sudo[422612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:12 compute-0 sudo[422612]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 02 13:21:12 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2562087625' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:21:12 compute-0 ceph-mon[73607]: from='client.47888 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:12 compute-0 ceph-mon[73607]: from='client.39330 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4269806587' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:21:12 compute-0 ceph-mon[73607]: from='client.47897 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1709111309' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:21:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/226297357' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 02 13:21:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3720730746' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 02 13:21:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3754853666' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 13:21:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2907860677' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 02 13:21:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:12.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Oct 02 13:21:12 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1342439279' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 02 13:21:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:12.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:21:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:21:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:21:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:21:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:21:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:21:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3707: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Oct 02 13:21:12 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1242946370' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 02 13:21:13 compute-0 ceph-mon[73607]: pgmap v3706: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:13 compute-0 ceph-mon[73607]: from='client.47921 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1409656405' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 02 13:21:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4152142505' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 13:21:13 compute-0 ceph-mon[73607]: from='client.39369 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2602768956' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 13:21:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2562087625' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:21:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1342439279' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 02 13:21:13 compute-0 ceph-mon[73607]: pgmap v3707: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1242946370' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 02 13:21:13 compute-0 nova_compute[257802]: 2025-10-02 13:21:13.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Oct 02 13:21:13 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3087154351' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 02 13:21:13 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46834 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:13 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.39405 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Oct 02 13:21:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2935061567' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 02 13:21:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4245540599' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:21:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1982109687' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 13:21:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3087154351' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 02 13:21:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4021928338' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 02 13:21:14 compute-0 ceph-mon[73607]: from='client.46834 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:14 compute-0 ceph-mon[73607]: from='client.39405 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2362213958' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 02 13:21:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2935061567' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 02 13:21:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/442756606' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 13:21:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:14.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:14.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Oct 02 13:21:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2206505538' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 02 13:21:14 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47978 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3708: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:15 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.39426 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:15 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46855 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Oct 02 13:21:15 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1346049475' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 02 13:21:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2556280337' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 02 13:21:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2206505538' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 02 13:21:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/690376585' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 13:21:15 compute-0 ceph-mon[73607]: from='client.47978 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:15 compute-0 ceph-mon[73607]: pgmap v3708: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:15 compute-0 ceph-mon[73607]: from='client.39426 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3734773083' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 02 13:21:15 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47999 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:15 compute-0 nova_compute[257802]: 2025-10-02 13:21:15.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:16 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48005 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:16 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46870 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:16 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.39447 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:16 compute-0 nova_compute[257802]: 2025-10-02 13:21:16.499 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:21:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:16.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:16 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46885 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Oct 02 13:21:16 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/878349899' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 02 13:21:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:16.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:16 compute-0 ceph-mon[73607]: from='client.46855 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1346049475' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 02 13:21:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/728846324' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 02 13:21:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1827421971' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 02 13:21:16 compute-0 ceph-mon[73607]: from='client.47999 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/590357678' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 02 13:21:16 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48023 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3709: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Oct 02 13:21:16 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4094581678' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 02 13:21:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Oct 02 13:21:16 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3963782086' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 02 13:21:17 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48032 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:17 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.39471 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:17 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.39480 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:17 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:17 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:21:17 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:17 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:21:17 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:17 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:21:17 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:17 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:21:17 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:17 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:21:17 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:17 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:21:17 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:17 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:21:17 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:17 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:21:17 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:17 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:21:17 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:17 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:21:17 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:17 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:21:17 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46900 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:17 compute-0 ceph-mon[73607]: from='client.48005 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:17 compute-0 ceph-mon[73607]: from='client.46870 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:17 compute-0 ceph-mon[73607]: from='client.39447 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:17 compute-0 ceph-mon[73607]: from='client.46885 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:17 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/878349899' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 02 13:21:17 compute-0 ceph-mon[73607]: from='client.48023 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:17 compute-0 ceph-mon[73607]: pgmap v3709: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:17 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4094581678' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 02 13:21:17 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3963782086' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 02 13:21:17 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2680411240' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 02 13:21:17 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2608124106' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 02 13:21:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 02 13:21:18 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1267592696' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46909 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:21:18 compute-0 nova_compute[257802]: 2025-10-02 13:21:18.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48056 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Oct 02 13:21:18 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3962812195' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 02 13:21:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:18.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:18.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.39501 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48065 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:21:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3710: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:19 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.39507 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 13:21:19 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3643678668' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 13:21:19 compute-0 ceph-mon[73607]: from='client.48032 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:19 compute-0 ceph-mon[73607]: from='client.39471 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:19 compute-0 ceph-mon[73607]: from='client.39480 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:19 compute-0 ceph-mon[73607]: from='client.46900 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3138812586' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 02 13:21:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1267592696' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:21:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3962812195' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 02 13:21:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3566435322' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 02 13:21:19 compute-0 sudo[423744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:19 compute-0 sudo[423744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:19 compute-0 sudo[423744]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:20 compute-0 sudo[423772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:21:20 compute-0 sudo[423772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Oct 02 13:21:20 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1748606852' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 02 13:21:20 compute-0 sudo[423772]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:20 compute-0 sudo[423799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:20 compute-0 sudo[423799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:20 compute-0 sudo[423799]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:20 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48086 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:20 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46927 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:20 compute-0 sudo[423830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:21:20 compute-0 sudo[423830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:20 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48092 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:20 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46933 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:20.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:20 compute-0 sudo[423830]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:20.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:21:20 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:21:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:21:20 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:21:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:21:20 compute-0 nova_compute[257802]: 2025-10-02 13:21:20.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3711: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:21 compute-0 virtqemud[257280]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 02 13:21:21 compute-0 ceph-mon[73607]: from='client.46909 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:21 compute-0 ceph-mon[73607]: from='client.48056 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:21 compute-0 ceph-mon[73607]: from='client.39501 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:21 compute-0 ceph-mon[73607]: from='client.48065 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:21 compute-0 ceph-mon[73607]: pgmap v3710: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:21 compute-0 ceph-mon[73607]: from='client.39507 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1128627087' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:21:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1625710437' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 02 13:21:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3643678668' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 13:21:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3768860084' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 02 13:21:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1748606852' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 02 13:21:21 compute-0 ceph-mon[73607]: from='client.48086 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:21 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:21:21 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 240a0052-2709-493e-bcfb-77417e70f384 does not exist
Oct 02 13:21:21 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 61e85f91-fcd0-449e-bc76-7fa97949fcde does not exist
Oct 02 13:21:21 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 6dbbd83b-c6cd-46f0-b753-1e82b30b5a0a does not exist
Oct 02 13:21:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:21:21 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:21:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:21:21 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:21:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:21:21 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:21:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:21 compute-0 sudo[424194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:21 compute-0 sudo[424194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:21 compute-0 sudo[424194]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:21 compute-0 sudo[424224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:21:21 compute-0 sudo[424224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:21 compute-0 sudo[424224]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:21 compute-0 sudo[424253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:21 compute-0 sudo[424253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:21 compute-0 sudo[424253]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:21 compute-0 sudo[424282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:21:21 compute-0 sudo[424282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:21 compute-0 podman[424369]: 2025-10-02 13:21:21.938520728 +0000 UTC m=+0.041303884 container create 07fb10d8e27e2bae031037892057cd9588840eef10de5e0b8a1440ef5f9a287e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 13:21:21 compute-0 systemd[1]: Started libpod-conmon-07fb10d8e27e2bae031037892057cd9588840eef10de5e0b8a1440ef5f9a287e.scope.
Oct 02 13:21:22 compute-0 podman[424369]: 2025-10-02 13:21:21.917598714 +0000 UTC m=+0.020381870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:21:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:21:22 compute-0 podman[424369]: 2025-10-02 13:21:22.048390692 +0000 UTC m=+0.151173848 container init 07fb10d8e27e2bae031037892057cd9588840eef10de5e0b8a1440ef5f9a287e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 13:21:22 compute-0 podman[424369]: 2025-10-02 13:21:22.05747453 +0000 UTC m=+0.160257686 container start 07fb10d8e27e2bae031037892057cd9588840eef10de5e0b8a1440ef5f9a287e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:21:22 compute-0 podman[424369]: 2025-10-02 13:21:22.060808349 +0000 UTC m=+0.163591505 container attach 07fb10d8e27e2bae031037892057cd9588840eef10de5e0b8a1440ef5f9a287e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_driscoll, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 13:21:22 compute-0 gallant_driscoll[424392]: 167 167
Oct 02 13:21:22 compute-0 systemd[1]: libpod-07fb10d8e27e2bae031037892057cd9588840eef10de5e0b8a1440ef5f9a287e.scope: Deactivated successfully.
Oct 02 13:21:22 compute-0 podman[424369]: 2025-10-02 13:21:22.064523399 +0000 UTC m=+0.167306555 container died 07fb10d8e27e2bae031037892057cd9588840eef10de5e0b8a1440ef5f9a287e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 13:21:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-557bdc5105f2986cd6fdeb196de594523882cc47f4a8e42c0f0a978e81773e8a-merged.mount: Deactivated successfully.
Oct 02 13:21:22 compute-0 podman[424369]: 2025-10-02 13:21:22.112811871 +0000 UTC m=+0.215595027 container remove 07fb10d8e27e2bae031037892057cd9588840eef10de5e0b8a1440ef5f9a287e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_driscoll, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 13:21:22 compute-0 systemd[1]: libpod-conmon-07fb10d8e27e2bae031037892057cd9588840eef10de5e0b8a1440ef5f9a287e.scope: Deactivated successfully.
Oct 02 13:21:22 compute-0 ceph-mon[73607]: from='client.46927 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:22 compute-0 ceph-mon[73607]: from='client.48092 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:22 compute-0 ceph-mon[73607]: from='client.46933 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:21:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:21:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:21:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2296609078' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 13:21:22 compute-0 ceph-mon[73607]: pgmap v3711: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3111701462' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 02 13:21:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:21:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:21:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:21:22 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:21:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2397871864' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:21:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3705838895' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 02 13:21:22 compute-0 podman[424434]: 2025-10-02 13:21:22.287026422 +0000 UTC m=+0.044811769 container create 9588e2417ae7969fdec889cb8e900bbf5f8f98972948772063a0d6302ab3941b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moore, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:21:22 compute-0 systemd[1]: Started libpod-conmon-9588e2417ae7969fdec889cb8e900bbf5f8f98972948772063a0d6302ab3941b.scope.
Oct 02 13:21:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f96d752fa0ddb37843dc098d3327ef4f589098fc86b715716829cae3c11751e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f96d752fa0ddb37843dc098d3327ef4f589098fc86b715716829cae3c11751e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f96d752fa0ddb37843dc098d3327ef4f589098fc86b715716829cae3c11751e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f96d752fa0ddb37843dc098d3327ef4f589098fc86b715716829cae3c11751e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f96d752fa0ddb37843dc098d3327ef4f589098fc86b715716829cae3c11751e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:21:22 compute-0 podman[424434]: 2025-10-02 13:21:22.271644282 +0000 UTC m=+0.029429649 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:21:22 compute-0 podman[424434]: 2025-10-02 13:21:22.37840109 +0000 UTC m=+0.136186457 container init 9588e2417ae7969fdec889cb8e900bbf5f8f98972948772063a0d6302ab3941b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moore, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:21:22 compute-0 podman[424434]: 2025-10-02 13:21:22.388888863 +0000 UTC m=+0.146674210 container start 9588e2417ae7969fdec889cb8e900bbf5f8f98972948772063a0d6302ab3941b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moore, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 13:21:22 compute-0 podman[424434]: 2025-10-02 13:21:22.392342446 +0000 UTC m=+0.150127813 container attach 9588e2417ae7969fdec889cb8e900bbf5f8f98972948772063a0d6302ab3941b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moore, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 13:21:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:22.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:22.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:22 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46957 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3712: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:23 compute-0 systemd[1]: Starting Time & Date Service...
Oct 02 13:21:23 compute-0 optimistic_moore[424458]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:21:23 compute-0 optimistic_moore[424458]: --> relative data size: 1.0
Oct 02 13:21:23 compute-0 optimistic_moore[424458]: --> All data devices are unavailable
Oct 02 13:21:23 compute-0 podman[424434]: 2025-10-02 13:21:23.208630015 +0000 UTC m=+0.966415372 container died 9588e2417ae7969fdec889cb8e900bbf5f8f98972948772063a0d6302ab3941b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moore, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 13:21:23 compute-0 systemd[1]: libpod-9588e2417ae7969fdec889cb8e900bbf5f8f98972948772063a0d6302ab3941b.scope: Deactivated successfully.
Oct 02 13:21:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f96d752fa0ddb37843dc098d3327ef4f589098fc86b715716829cae3c11751e-merged.mount: Deactivated successfully.
Oct 02 13:21:23 compute-0 podman[424434]: 2025-10-02 13:21:23.267788449 +0000 UTC m=+1.025573796 container remove 9588e2417ae7969fdec889cb8e900bbf5f8f98972948772063a0d6302ab3941b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:21:23 compute-0 systemd[1]: Started Time & Date Service.
Oct 02 13:21:23 compute-0 systemd[1]: libpod-conmon-9588e2417ae7969fdec889cb8e900bbf5f8f98972948772063a0d6302ab3941b.scope: Deactivated successfully.
Oct 02 13:21:23 compute-0 sudo[424282]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2646082702' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 02 13:21:23 compute-0 ceph-mon[73607]: from='client.46957 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:23 compute-0 ceph-mon[73607]: pgmap v3712: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:23 compute-0 sudo[424539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:23 compute-0 sudo[424539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:23 compute-0 nova_compute[257802]: 2025-10-02 13:21:23.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:23 compute-0 sudo[424539]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:23 compute-0 sudo[424564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:21:23 compute-0 sudo[424564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:23 compute-0 sudo[424564]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:23 compute-0 sudo[424589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:23 compute-0 sudo[424589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:23 compute-0 sudo[424589]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:23 compute-0 sudo[424614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 13:21:23 compute-0 sudo[424614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:23 compute-0 podman[424678]: 2025-10-02 13:21:23.841343667 +0000 UTC m=+0.038167319 container create 67114c07ae1a271653b24a01d62d5eb02ee0485e600e7322bb7653e7f98e41b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackwell, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:21:23 compute-0 systemd[1]: Started libpod-conmon-67114c07ae1a271653b24a01d62d5eb02ee0485e600e7322bb7653e7f98e41b6.scope.
Oct 02 13:21:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:21:23 compute-0 podman[424678]: 2025-10-02 13:21:23.91959407 +0000 UTC m=+0.116417722 container init 67114c07ae1a271653b24a01d62d5eb02ee0485e600e7322bb7653e7f98e41b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 13:21:23 compute-0 podman[424678]: 2025-10-02 13:21:23.825361963 +0000 UTC m=+0.022185645 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:21:23 compute-0 podman[424678]: 2025-10-02 13:21:23.92625503 +0000 UTC m=+0.123078682 container start 67114c07ae1a271653b24a01d62d5eb02ee0485e600e7322bb7653e7f98e41b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackwell, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 13:21:23 compute-0 podman[424678]: 2025-10-02 13:21:23.929462537 +0000 UTC m=+0.126286219 container attach 67114c07ae1a271653b24a01d62d5eb02ee0485e600e7322bb7653e7f98e41b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackwell, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:21:23 compute-0 sleepy_blackwell[424694]: 167 167
Oct 02 13:21:23 compute-0 systemd[1]: libpod-67114c07ae1a271653b24a01d62d5eb02ee0485e600e7322bb7653e7f98e41b6.scope: Deactivated successfully.
Oct 02 13:21:23 compute-0 conmon[424694]: conmon 67114c07ae1a271653b2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-67114c07ae1a271653b24a01d62d5eb02ee0485e600e7322bb7653e7f98e41b6.scope/container/memory.events
Oct 02 13:21:23 compute-0 podman[424678]: 2025-10-02 13:21:23.932403557 +0000 UTC m=+0.129227229 container died 67114c07ae1a271653b24a01d62d5eb02ee0485e600e7322bb7653e7f98e41b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackwell, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 13:21:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-89894983ae6228bf39528eb5a6a1ecf636de55976529ba6616112e78a76ae21e-merged.mount: Deactivated successfully.
Oct 02 13:21:23 compute-0 podman[424678]: 2025-10-02 13:21:23.963570717 +0000 UTC m=+0.160394369 container remove 67114c07ae1a271653b24a01d62d5eb02ee0485e600e7322bb7653e7f98e41b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:21:23 compute-0 systemd[1]: libpod-conmon-67114c07ae1a271653b24a01d62d5eb02ee0485e600e7322bb7653e7f98e41b6.scope: Deactivated successfully.
Oct 02 13:21:24 compute-0 podman[424718]: 2025-10-02 13:21:24.111323522 +0000 UTC m=+0.038167959 container create ddfae43e268ad30b89cc2695872285c581cd662d5fb6653e3f375225fcd201d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cohen, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:21:24 compute-0 systemd[1]: Started libpod-conmon-ddfae43e268ad30b89cc2695872285c581cd662d5fb6653e3f375225fcd201d6.scope.
Oct 02 13:21:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:21:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55da15b79664869cc822f75a9617512882d039a045c56c2000c425085eac6952/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:21:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55da15b79664869cc822f75a9617512882d039a045c56c2000c425085eac6952/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:21:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55da15b79664869cc822f75a9617512882d039a045c56c2000c425085eac6952/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:21:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55da15b79664869cc822f75a9617512882d039a045c56c2000c425085eac6952/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:21:24 compute-0 podman[424718]: 2025-10-02 13:21:24.184416651 +0000 UTC m=+0.111261158 container init ddfae43e268ad30b89cc2695872285c581cd662d5fb6653e3f375225fcd201d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:21:24 compute-0 podman[424718]: 2025-10-02 13:21:24.093820911 +0000 UTC m=+0.020665378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:21:24 compute-0 podman[424718]: 2025-10-02 13:21:24.191227805 +0000 UTC m=+0.118072252 container start ddfae43e268ad30b89cc2695872285c581cd662d5fb6653e3f375225fcd201d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cohen, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:21:24 compute-0 podman[424718]: 2025-10-02 13:21:24.195819165 +0000 UTC m=+0.122663632 container attach ddfae43e268ad30b89cc2695872285c581cd662d5fb6653e3f375225fcd201d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 13:21:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/452093045' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:21:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/975264910' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 02 13:21:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1136109663' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 02 13:21:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:24.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:24.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]: {
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]:     "1": [
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]:         {
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]:             "devices": [
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]:                 "/dev/loop3"
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]:             ],
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]:             "lv_name": "ceph_lv0",
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]:             "lv_size": "7511998464",
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]:             "name": "ceph_lv0",
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]:             "tags": {
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]:                 "ceph.cluster_name": "ceph",
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]:                 "ceph.crush_device_class": "",
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]:                 "ceph.encrypted": "0",
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]:                 "ceph.osd_id": "1",
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]:                 "ceph.type": "block",
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]:                 "ceph.vdo": "0"
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]:             },
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]:             "type": "block",
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]:             "vg_name": "ceph_vg0"
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]:         }
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]:     ]
Oct 02 13:21:24 compute-0 vibrant_cohen[424734]: }
Oct 02 13:21:24 compute-0 systemd[1]: libpod-ddfae43e268ad30b89cc2695872285c581cd662d5fb6653e3f375225fcd201d6.scope: Deactivated successfully.
Oct 02 13:21:24 compute-0 podman[424718]: 2025-10-02 13:21:24.920818437 +0000 UTC m=+0.847662884 container died ddfae43e268ad30b89cc2695872285c581cd662d5fb6653e3f375225fcd201d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cohen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 13:21:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-55da15b79664869cc822f75a9617512882d039a045c56c2000c425085eac6952-merged.mount: Deactivated successfully.
Oct 02 13:21:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3713: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:24 compute-0 podman[424718]: 2025-10-02 13:21:24.98122182 +0000 UTC m=+0.908066267 container remove ddfae43e268ad30b89cc2695872285c581cd662d5fb6653e3f375225fcd201d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:21:24 compute-0 systemd[1]: libpod-conmon-ddfae43e268ad30b89cc2695872285c581cd662d5fb6653e3f375225fcd201d6.scope: Deactivated successfully.
Oct 02 13:21:25 compute-0 sudo[424614]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:25 compute-0 sudo[424757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:25 compute-0 sudo[424757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:25 compute-0 sudo[424757]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:25 compute-0 sudo[424782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:21:25 compute-0 sudo[424782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:25 compute-0 sudo[424782]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:25 compute-0 sudo[424807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:25 compute-0 sudo[424807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:25 compute-0 sudo[424807]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:25 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.46987 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:25 compute-0 sudo[424832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 13:21:25 compute-0 sudo[424832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:25 compute-0 podman[424897]: 2025-10-02 13:21:25.560297033 +0000 UTC m=+0.047331350 container create f1915cbe37d4237b9b7f910b0cddea0b2adff17196a5b74e5ed220cfbc3394b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_brattain, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 13:21:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3567487335' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 02 13:21:25 compute-0 ceph-mon[73607]: pgmap v3713: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:25 compute-0 systemd[1]: Started libpod-conmon-f1915cbe37d4237b9b7f910b0cddea0b2adff17196a5b74e5ed220cfbc3394b0.scope.
Oct 02 13:21:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:21:25 compute-0 podman[424897]: 2025-10-02 13:21:25.638399802 +0000 UTC m=+0.125434089 container init f1915cbe37d4237b9b7f910b0cddea0b2adff17196a5b74e5ed220cfbc3394b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_brattain, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:21:25 compute-0 podman[424897]: 2025-10-02 13:21:25.542226728 +0000 UTC m=+0.029261025 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:21:25 compute-0 podman[424897]: 2025-10-02 13:21:25.644865647 +0000 UTC m=+0.131899924 container start f1915cbe37d4237b9b7f910b0cddea0b2adff17196a5b74e5ed220cfbc3394b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_brattain, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 13:21:25 compute-0 quizzical_brattain[424912]: 167 167
Oct 02 13:21:25 compute-0 podman[424897]: 2025-10-02 13:21:25.649600291 +0000 UTC m=+0.136634568 container attach f1915cbe37d4237b9b7f910b0cddea0b2adff17196a5b74e5ed220cfbc3394b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_brattain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:21:25 compute-0 systemd[1]: libpod-f1915cbe37d4237b9b7f910b0cddea0b2adff17196a5b74e5ed220cfbc3394b0.scope: Deactivated successfully.
Oct 02 13:21:25 compute-0 podman[424897]: 2025-10-02 13:21:25.650287177 +0000 UTC m=+0.137321454 container died f1915cbe37d4237b9b7f910b0cddea0b2adff17196a5b74e5ed220cfbc3394b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_brattain, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 13:21:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fb548b82266a93979a54ff779fb63e37fe0ba1a34488bd7d7469f08f586b7d1-merged.mount: Deactivated successfully.
Oct 02 13:21:25 compute-0 podman[424897]: 2025-10-02 13:21:25.689341157 +0000 UTC m=+0.176375444 container remove f1915cbe37d4237b9b7f910b0cddea0b2adff17196a5b74e5ed220cfbc3394b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_brattain, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 13:21:25 compute-0 systemd[1]: libpod-conmon-f1915cbe37d4237b9b7f910b0cddea0b2adff17196a5b74e5ed220cfbc3394b0.scope: Deactivated successfully.
Oct 02 13:21:25 compute-0 podman[424936]: 2025-10-02 13:21:25.862133404 +0000 UTC m=+0.038172219 container create 1fbcf641d61fe53518411cc716612d820d4bd7d8332abaabeb8c55c08ee2c6a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 13:21:25 compute-0 systemd[1]: Started libpod-conmon-1fbcf641d61fe53518411cc716612d820d4bd7d8332abaabeb8c55c08ee2c6a0.scope.
Oct 02 13:21:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:21:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96398ea7f7c5a91c2be55d0605f483c82cef0ca42798e4c79ed9886a9cf7660f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:21:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96398ea7f7c5a91c2be55d0605f483c82cef0ca42798e4c79ed9886a9cf7660f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:21:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96398ea7f7c5a91c2be55d0605f483c82cef0ca42798e4c79ed9886a9cf7660f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:21:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96398ea7f7c5a91c2be55d0605f483c82cef0ca42798e4c79ed9886a9cf7660f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:21:25 compute-0 podman[424936]: 2025-10-02 13:21:25.925491808 +0000 UTC m=+0.101530643 container init 1fbcf641d61fe53518411cc716612d820d4bd7d8332abaabeb8c55c08ee2c6a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 13:21:25 compute-0 podman[424936]: 2025-10-02 13:21:25.933502501 +0000 UTC m=+0.109541316 container start 1fbcf641d61fe53518411cc716612d820d4bd7d8332abaabeb8c55c08ee2c6a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:21:25 compute-0 podman[424936]: 2025-10-02 13:21:25.936426712 +0000 UTC m=+0.112465527 container attach 1fbcf641d61fe53518411cc716612d820d4bd7d8332abaabeb8c55c08ee2c6a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brattain, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:21:25 compute-0 podman[424936]: 2025-10-02 13:21:25.846523778 +0000 UTC m=+0.022562623 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:21:25 compute-0 nova_compute[257802]: 2025-10-02 13:21:25.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:26 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47005 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:26.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:26.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:26 compute-0 fervent_brattain[424952]: {
Oct 02 13:21:26 compute-0 fervent_brattain[424952]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 13:21:26 compute-0 fervent_brattain[424952]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:21:26 compute-0 fervent_brattain[424952]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:21:26 compute-0 fervent_brattain[424952]:         "osd_id": 1,
Oct 02 13:21:26 compute-0 fervent_brattain[424952]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:21:26 compute-0 fervent_brattain[424952]:         "type": "bluestore"
Oct 02 13:21:26 compute-0 fervent_brattain[424952]:     }
Oct 02 13:21:26 compute-0 fervent_brattain[424952]: }
Oct 02 13:21:26 compute-0 systemd[1]: libpod-1fbcf641d61fe53518411cc716612d820d4bd7d8332abaabeb8c55c08ee2c6a0.scope: Deactivated successfully.
Oct 02 13:21:26 compute-0 conmon[424952]: conmon 1fbcf641d61fe5351841 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1fbcf641d61fe53518411cc716612d820d4bd7d8332abaabeb8c55c08ee2c6a0.scope/container/memory.events
Oct 02 13:21:26 compute-0 podman[424974]: 2025-10-02 13:21:26.786389821 +0000 UTC m=+0.023389554 container died 1fbcf641d61fe53518411cc716612d820d4bd7d8332abaabeb8c55c08ee2c6a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brattain, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:21:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-96398ea7f7c5a91c2be55d0605f483c82cef0ca42798e4c79ed9886a9cf7660f-merged.mount: Deactivated successfully.
Oct 02 13:21:26 compute-0 podman[424974]: 2025-10-02 13:21:26.839631992 +0000 UTC m=+0.076631705 container remove 1fbcf641d61fe53518411cc716612d820d4bd7d8332abaabeb8c55c08ee2c6a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brattain, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:21:26 compute-0 systemd[1]: libpod-conmon-1fbcf641d61fe53518411cc716612d820d4bd7d8332abaabeb8c55c08ee2c6a0.scope: Deactivated successfully.
Oct 02 13:21:26 compute-0 sudo[424832]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:21:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3714: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:21:27.006 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:21:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:21:27.006 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:21:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:21:27.007 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:21:27 compute-0 ceph-mon[73607]: from='client.46987 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/424226001' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 02 13:21:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3536608000' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 02 13:21:27 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:21:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:21:27 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:21:27 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev a4fcc4cb-4f57-41d7-a68f-1eef0c9331c5 does not exist
Oct 02 13:21:27 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev bd1d9a0f-45d4-4c90-ab63-309429072268 does not exist
Oct 02 13:21:27 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 02dade7e-9642-4dde-97f0-ea656a20f6ca does not exist
Oct 02 13:21:27 compute-0 sudo[424989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:27 compute-0 sudo[424989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:27 compute-0 sudo[424989]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:27 compute-0 sudo[425014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:21:27 compute-0 sudo[425014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:27 compute-0 sudo[425014]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:28 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47017 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:28 compute-0 ceph-mon[73607]: from='client.47005 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:28 compute-0 ceph-mon[73607]: pgmap v3714: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:21:28 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:21:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2426649433' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 02 13:21:28 compute-0 nova_compute[257802]: 2025-10-02 13:21:28.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:28.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:28 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47023 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:21:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:28.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:21:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3715: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Oct 02 13:21:29 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3536921107' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 02 13:21:29 compute-0 ceph-mon[73607]: from='client.47017 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:29 compute-0 ceph-mon[73607]: from='client.47023 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/343366598' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 02 13:21:29 compute-0 ceph-mon[73607]: pgmap v3715: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:29 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47041 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:29 compute-0 sshd-session[425040]: Invalid user vr from 167.99.55.34 port 36102
Oct 02 13:21:29 compute-0 sshd-session[425040]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 13:21:29 compute-0 sshd-session[425040]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=167.99.55.34
Oct 02 13:21:30 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48119 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:30 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:30 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:21:30 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:30 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:21:30 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:30 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:21:30 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:30 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:21:30 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:30 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:21:30 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:30 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:21:30 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:30 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:21:30 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:30 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:21:30 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:30 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:21:30 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:30 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:21:30 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:30 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:21:30 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3536921107' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 02 13:21:30 compute-0 ceph-mon[73607]: from='client.47041 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:30 compute-0 ceph-mon[73607]: from='client.48119 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:30.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:21:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:30.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:21:30 compute-0 podman[425044]: 2025-10-02 13:21:30.937288095 +0000 UTC m=+0.075382325 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3)
Oct 02 13:21:30 compute-0 podman[425043]: 2025-10-02 13:21:30.946265701 +0000 UTC m=+0.084926514 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct 02 13:21:30 compute-0 podman[425045]: 2025-10-02 13:21:30.957804058 +0000 UTC m=+0.085243623 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 13:21:30 compute-0 nova_compute[257802]: 2025-10-02 13:21:30.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:30 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3716: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:31 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47065 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:31 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1553975403' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:21:31 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/772974085' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 02 13:21:31 compute-0 ceph-mon[73607]: pgmap v3716: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:31 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47071 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:31 compute-0 sshd-session[425040]: Failed password for invalid user vr from 167.99.55.34 port 36102 ssh2
Oct 02 13:21:32 compute-0 sudo[425096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:32 compute-0 sudo[425096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:32 compute-0 sudo[425096]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:32 compute-0 sudo[425121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:32 compute-0 sudo[425121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:32 compute-0 sudo[425121]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:32 compute-0 ceph-mon[73607]: from='client.47065 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:32 compute-0 ceph-mon[73607]: from='client.47071 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:21:32 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3210879108' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 13:21:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:32.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:32.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:32 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3717: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:33 compute-0 nova_compute[257802]: 2025-10-02 13:21:33.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:33 compute-0 sshd-session[425040]: Connection closed by invalid user vr 167.99.55.34 port 36102 [preauth]
Oct 02 13:21:33 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2566103676' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 02 13:21:33 compute-0 ceph-mon[73607]: pgmap v3717: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:34.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:34.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:34 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3718: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:35 compute-0 nova_compute[257802]: 2025-10-02 13:21:35.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:36 compute-0 ceph-mon[73607]: pgmap v3718: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:21:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:36.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:21:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:21:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:36.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:21:36 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3719: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:38 compute-0 ceph-mon[73607]: pgmap v3719: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:38 compute-0 nova_compute[257802]: 2025-10-02 13:21:38.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:21:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:38.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:21:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:21:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:38.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:21:38 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3720: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:39 compute-0 nova_compute[257802]: 2025-10-02 13:21:39.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:21:39 compute-0 nova_compute[257802]: 2025-10-02 13:21:39.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:21:39 compute-0 ceph-mon[73607]: pgmap v3720: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:40 compute-0 nova_compute[257802]: 2025-10-02 13:21:40.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:21:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:40.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:40.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:40 compute-0 nova_compute[257802]: 2025-10-02 13:21:40.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:40 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3721: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:42 compute-0 ceph-mon[73607]: pgmap v3721: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:42 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2394421373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:21:42 compute-0 podman[425151]: 2025-10-02 13:21:42.381683219 +0000 UTC m=+0.131665298 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:21:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:21:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:42.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:21:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:42.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:21:42
Oct 02 13:21:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:21:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:21:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'backups', 'vms', '.rgw.root']
Oct 02 13:21:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:21:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:21:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:21:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:21:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:21:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:21:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:21:42 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3722: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:43 compute-0 ceph-mon[73607]: pgmap v3722: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2190346141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:21:43 compute-0 nova_compute[257802]: 2025-10-02 13:21:43.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:21:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:21:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:21:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:21:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:21:44 compute-0 nova_compute[257802]: 2025-10-02 13:21:44.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:21:44 compute-0 nova_compute[257802]: 2025-10-02 13:21:44.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:21:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:21:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:21:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:21:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:21:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:21:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:44.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:44.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:44 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3723: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:45 compute-0 nova_compute[257802]: 2025-10-02 13:21:45.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:21:45 compute-0 nova_compute[257802]: 2025-10-02 13:21:45.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:46 compute-0 ceph-mon[73607]: pgmap v3723: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:46.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:46.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:46 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3724: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:47 compute-0 ceph-mon[73607]: pgmap v3724: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:48 compute-0 nova_compute[257802]: 2025-10-02 13:21:48.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:48.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:48.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:48 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3725: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:50 compute-0 ceph-mon[73607]: pgmap v3725: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:50.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:50.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:50 compute-0 nova_compute[257802]: 2025-10-02 13:21:50.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:50 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3726: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:51 compute-0 ceph-mon[73607]: pgmap v3726: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:52 compute-0 sudo[425184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:52 compute-0 sudo[425184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:52 compute-0 sudo[425184]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:52 compute-0 sudo[425209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:52 compute-0 sudo[425209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:52 compute-0 sudo[425209]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:52.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:52.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:52 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3727: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:53 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 02 13:21:53 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 02 13:21:53 compute-0 nova_compute[257802]: 2025-10-02 13:21:53.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:54 compute-0 nova_compute[257802]: 2025-10-02 13:21:54.092 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:21:54 compute-0 ceph-mon[73607]: pgmap v3727: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:54.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:54.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:54 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3728: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:21:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:21:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:21:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:21:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:21:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:21:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:21:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:21:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:21:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:21:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:21:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:21:55 compute-0 ceph-mon[73607]: pgmap v3728: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1819642134' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:21:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1819642134' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:21:55 compute-0 nova_compute[257802]: 2025-10-02 13:21:55.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:56.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:21:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:56.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:21:56 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3729: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:58 compute-0 ceph-mon[73607]: pgmap v3729: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:58 compute-0 nova_compute[257802]: 2025-10-02 13:21:58.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:58.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:21:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:58.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:58 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3730: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:59 compute-0 nova_compute[257802]: 2025-10-02 13:21:59.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:21:59 compute-0 nova_compute[257802]: 2025-10-02 13:21:59.097 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:21:59 compute-0 nova_compute[257802]: 2025-10-02 13:21:59.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:21:59 compute-0 nova_compute[257802]: 2025-10-02 13:21:59.118 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:21:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/509542363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:21:59 compute-0 ceph-mon[73607]: pgmap v3730: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/114957763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:22:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:22:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:00.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:22:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:00.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:00 compute-0 nova_compute[257802]: 2025-10-02 13:22:00.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:00 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3731: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:01 compute-0 ceph-mon[73607]: pgmap v3731: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:01 compute-0 podman[425243]: 2025-10-02 13:22:01.917728848 +0000 UTC m=+0.054809359 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:22:01 compute-0 podman[425245]: 2025-10-02 13:22:01.921332574 +0000 UTC m=+0.054447110 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 13:22:01 compute-0 podman[425244]: 2025-10-02 13:22:01.930578427 +0000 UTC m=+0.066988423 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 13:22:02 compute-0 nova_compute[257802]: 2025-10-02 13:22:02.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:02 compute-0 nova_compute[257802]: 2025-10-02 13:22:02.144 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:22:02 compute-0 nova_compute[257802]: 2025-10-02 13:22:02.144 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:22:02 compute-0 nova_compute[257802]: 2025-10-02 13:22:02.145 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:22:02 compute-0 nova_compute[257802]: 2025-10-02 13:22:02.145 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:22:02 compute-0 nova_compute[257802]: 2025-10-02 13:22:02.145 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:22:02 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:22:02 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3115983862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:22:02 compute-0 nova_compute[257802]: 2025-10-02 13:22:02.571 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:22:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:22:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:02.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:22:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3115983862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:22:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:02.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:02 compute-0 nova_compute[257802]: 2025-10-02 13:22:02.716 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:22:02 compute-0 nova_compute[257802]: 2025-10-02 13:22:02.717 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4042MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:22:02 compute-0 nova_compute[257802]: 2025-10-02 13:22:02.718 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:22:02 compute-0 nova_compute[257802]: 2025-10-02 13:22:02.718 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:22:02 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3732: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:03 compute-0 nova_compute[257802]: 2025-10-02 13:22:03.080 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:22:03 compute-0 nova_compute[257802]: 2025-10-02 13:22:03.081 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:22:03 compute-0 nova_compute[257802]: 2025-10-02 13:22:03.098 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:22:03 compute-0 nova_compute[257802]: 2025-10-02 13:22:03.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:03 compute-0 nova_compute[257802]: 2025-10-02 13:22:03.546 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:22:03 compute-0 nova_compute[257802]: 2025-10-02 13:22:03.552 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:22:03 compute-0 nova_compute[257802]: 2025-10-02 13:22:03.602 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:22:03 compute-0 nova_compute[257802]: 2025-10-02 13:22:03.605 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:22:03 compute-0 nova_compute[257802]: 2025-10-02 13:22:03.605 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:22:03 compute-0 ceph-mon[73607]: pgmap v3732: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4138600610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:22:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:22:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:04.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:22:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:04.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:04 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3733: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:05 compute-0 nova_compute[257802]: 2025-10-02 13:22:05.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:06 compute-0 ceph-mon[73607]: pgmap v3733: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:22:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:06.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:22:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:06.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:06 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3734: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:08 compute-0 ceph-mon[73607]: pgmap v3734: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:08 compute-0 nova_compute[257802]: 2025-10-02 13:22:08.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:22:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:08.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:22:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:08.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:08 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3735: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:09 compute-0 ceph-mon[73607]: pgmap v3735: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:22:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:10.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:22:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:10.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:10 compute-0 nova_compute[257802]: 2025-10-02 13:22:10.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:10 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3736: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:11 compute-0 ceph-mon[73607]: pgmap v3736: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:12 compute-0 sudo[425351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:12 compute-0 sudo[425351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:12 compute-0 sudo[425351]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:12 compute-0 sudo[425377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:12 compute-0 sudo[425377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:12 compute-0 sudo[425377]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:12 compute-0 podman[425375]: 2025-10-02 13:22:12.58139274 +0000 UTC m=+0.084981346 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 13:22:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:12.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:12.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:22:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:22:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:22:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:22:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:22:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:22:12 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3737: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:13 compute-0 ceph-mon[73607]: pgmap v3737: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:13 compute-0 sudo[417312]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:13 compute-0 sshd-session[417311]: Received disconnect from 192.168.122.10 port 41252:11: disconnected by user
Oct 02 13:22:13 compute-0 sshd-session[417311]: Disconnected from user zuul 192.168.122.10 port 41252
Oct 02 13:22:13 compute-0 nova_compute[257802]: 2025-10-02 13:22:13.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:13 compute-0 sshd-session[417308]: pam_unix(sshd:session): session closed for user zuul
Oct 02 13:22:13 compute-0 systemd[1]: session-78.scope: Deactivated successfully.
Oct 02 13:22:13 compute-0 systemd[1]: session-78.scope: Consumed 2min 46.701s CPU time, 1020.5M memory peak, read 403.0M from disk, written 393.8M to disk.
Oct 02 13:22:13 compute-0 systemd-logind[789]: Session 78 logged out. Waiting for processes to exit.
Oct 02 13:22:13 compute-0 systemd-logind[789]: Removed session 78.
Oct 02 13:22:13 compute-0 sshd-session[425429]: Accepted publickey for zuul from 192.168.122.10 port 59450 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 13:22:13 compute-0 systemd-logind[789]: New session 79 of user zuul.
Oct 02 13:22:13 compute-0 systemd[1]: Started Session 79 of User zuul.
Oct 02 13:22:13 compute-0 sshd-session[425429]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 13:22:13 compute-0 sudo[425433]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-10-02-cryigpz.tar.xz
Oct 02 13:22:13 compute-0 sudo[425433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 13:22:13 compute-0 sudo[425433]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:13 compute-0 sshd-session[425432]: Received disconnect from 192.168.122.10 port 59450:11: disconnected by user
Oct 02 13:22:13 compute-0 sshd-session[425432]: Disconnected from user zuul 192.168.122.10 port 59450
Oct 02 13:22:13 compute-0 sshd-session[425429]: pam_unix(sshd:session): session closed for user zuul
Oct 02 13:22:13 compute-0 systemd[1]: session-79.scope: Deactivated successfully.
Oct 02 13:22:13 compute-0 systemd-logind[789]: Session 79 logged out. Waiting for processes to exit.
Oct 02 13:22:13 compute-0 systemd-logind[789]: Removed session 79.
Oct 02 13:22:13 compute-0 sshd-session[425458]: Accepted publickey for zuul from 192.168.122.10 port 59464 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 13:22:13 compute-0 systemd-logind[789]: New session 80 of user zuul.
Oct 02 13:22:13 compute-0 systemd[1]: Started Session 80 of User zuul.
Oct 02 13:22:13 compute-0 sshd-session[425458]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 13:22:14 compute-0 sudo[425462]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Oct 02 13:22:14 compute-0 sudo[425462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 13:22:14 compute-0 sudo[425462]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:14 compute-0 sshd-session[425461]: Received disconnect from 192.168.122.10 port 59464:11: disconnected by user
Oct 02 13:22:14 compute-0 sshd-session[425461]: Disconnected from user zuul 192.168.122.10 port 59464
Oct 02 13:22:14 compute-0 sshd-session[425458]: pam_unix(sshd:session): session closed for user zuul
Oct 02 13:22:14 compute-0 systemd[1]: session-80.scope: Deactivated successfully.
Oct 02 13:22:14 compute-0 systemd-logind[789]: Session 80 logged out. Waiting for processes to exit.
Oct 02 13:22:14 compute-0 systemd-logind[789]: Removed session 80.
Oct 02 13:22:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:14.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:14.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:14 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3738: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:15 compute-0 ceph-mon[73607]: pgmap v3738: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:15 compute-0 nova_compute[257802]: 2025-10-02 13:22:15.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #186. Immutable memtables: 0.
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:16.476122) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 186
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411336476214, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 2395, "num_deletes": 250, "total_data_size": 3949980, "memory_usage": 3995448, "flush_reason": "Manual Compaction"}
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #187: started
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411336488428, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 187, "file_size": 2412864, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 80484, "largest_seqno": 82878, "table_properties": {"data_size": 2403874, "index_size": 4907, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 26958, "raw_average_key_size": 22, "raw_value_size": 2382946, "raw_average_value_size": 2007, "num_data_blocks": 215, "num_entries": 1187, "num_filter_entries": 1187, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411133, "oldest_key_time": 1759411133, "file_creation_time": 1759411336, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 187, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 12353 microseconds, and 6088 cpu microseconds.
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:16.488485) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #187: 2412864 bytes OK
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:16.488509) [db/memtable_list.cc:519] [default] Level-0 commit table #187 started
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:16.490629) [db/memtable_list.cc:722] [default] Level-0 commit table #187: memtable #1 done
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:16.490643) EVENT_LOG_v1 {"time_micros": 1759411336490639, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:16.490664) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 3939265, prev total WAL file size 3939265, number of live WAL files 2.
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000183.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:16.492080) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303132' seq:72057594037927935, type:22 .. '6D6772737461740033323633' seq:0, type:0; will stop at (end)
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [187(2356KB)], [185(12MB)]
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411336492128, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [187], "files_L6": [185], "score": -1, "input_data_size": 15433628, "oldest_snapshot_seqno": -1}
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #188: 11352 keys, 13053508 bytes, temperature: kUnknown
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411336560223, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 188, "file_size": 13053508, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12982602, "index_size": 41417, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28421, "raw_key_size": 298502, "raw_average_key_size": 26, "raw_value_size": 12786790, "raw_average_value_size": 1126, "num_data_blocks": 1578, "num_entries": 11352, "num_filter_entries": 11352, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759411336, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:16.560457) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 13053508 bytes
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:16.561662) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 226.4 rd, 191.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 12.4 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(11.8) write-amplify(5.4) OK, records in: 11771, records dropped: 419 output_compression: NoCompression
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:16.561681) EVENT_LOG_v1 {"time_micros": 1759411336561672, "job": 116, "event": "compaction_finished", "compaction_time_micros": 68176, "compaction_time_cpu_micros": 28917, "output_level": 6, "num_output_files": 1, "total_output_size": 13053508, "num_input_records": 11771, "num_output_records": 11352, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000187.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411336562379, "job": 116, "event": "table_file_deletion", "file_number": 187}
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411336564644, "job": 116, "event": "table_file_deletion", "file_number": 185}
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:16.491800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:16.564926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:16.564937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:16.564940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:16.564942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:22:16 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:16.564945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:22:16 compute-0 nova_compute[257802]: 2025-10-02 13:22:16.606 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:16.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:16.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:16 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3739: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:17 compute-0 ceph-mon[73607]: pgmap v3739: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:18 compute-0 nova_compute[257802]: 2025-10-02 13:22:18.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:18.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:22:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:18.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:22:18 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3740: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:19 compute-0 ceph-mon[73607]: pgmap v3740: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:20.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:20.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:20 compute-0 nova_compute[257802]: 2025-10-02 13:22:20.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:20 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3741: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:22 compute-0 ceph-mon[73607]: pgmap v3741: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:22.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:22.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:22 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3742: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:23 compute-0 ceph-mon[73607]: pgmap v3742: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:23 compute-0 nova_compute[257802]: 2025-10-02 13:22:23.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #189. Immutable memtables: 0.
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:23.546506) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 189
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411343546549, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 314, "num_deletes": 251, "total_data_size": 141798, "memory_usage": 148856, "flush_reason": "Manual Compaction"}
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #190: started
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411343556198, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 190, "file_size": 140566, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82879, "largest_seqno": 83192, "table_properties": {"data_size": 138568, "index_size": 225, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5162, "raw_average_key_size": 18, "raw_value_size": 134605, "raw_average_value_size": 480, "num_data_blocks": 10, "num_entries": 280, "num_filter_entries": 280, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411337, "oldest_key_time": 1759411337, "file_creation_time": 1759411343, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 190, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 9738 microseconds, and 1615 cpu microseconds.
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:23.556241) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #190: 140566 bytes OK
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:23.556268) [db/memtable_list.cc:519] [default] Level-0 commit table #190 started
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:23.557517) [db/memtable_list.cc:722] [default] Level-0 commit table #190: memtable #1 done
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:23.557531) EVENT_LOG_v1 {"time_micros": 1759411343557526, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:23.557549) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 139589, prev total WAL file size 139589, number of live WAL files 2.
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000186.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:23.557884) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [190(137KB)], [188(12MB)]
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411343557913, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [190], "files_L6": [188], "score": -1, "input_data_size": 13194074, "oldest_snapshot_seqno": -1}
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #191: 11122 keys, 11280923 bytes, temperature: kUnknown
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411343629821, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 191, "file_size": 11280923, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11213094, "index_size": 38916, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27845, "raw_key_size": 294410, "raw_average_key_size": 26, "raw_value_size": 11022739, "raw_average_value_size": 991, "num_data_blocks": 1465, "num_entries": 11122, "num_filter_entries": 11122, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759411343, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:23.630301) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 11280923 bytes
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:23.631762) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.9 rd, 156.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 12.4 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(174.1) write-amplify(80.3) OK, records in: 11632, records dropped: 510 output_compression: NoCompression
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:23.631783) EVENT_LOG_v1 {"time_micros": 1759411343631773, "job": 118, "event": "compaction_finished", "compaction_time_micros": 72153, "compaction_time_cpu_micros": 27509, "output_level": 6, "num_output_files": 1, "total_output_size": 11280923, "num_input_records": 11632, "num_output_records": 11122, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000190.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411343632437, "job": 118, "event": "table_file_deletion", "file_number": 190}
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411343635458, "job": 118, "event": "table_file_deletion", "file_number": 188}
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:23.557809) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:23.635612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:23.635616) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:23.635617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:23.635619) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:22:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:22:23.635620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:22:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:22:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:24.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:22:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:24.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:24 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3743: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:25 compute-0 nova_compute[257802]: 2025-10-02 13:22:25.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:26 compute-0 ceph-mon[73607]: pgmap v3743: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:22:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:26.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:22:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:26.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:26 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3744: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:22:27.007 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:22:27.007 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:22:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:22:27.007 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:22:27 compute-0 sudo[425494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:27 compute-0 sudo[425494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:27 compute-0 sudo[425494]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:28 compute-0 sudo[425519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:22:28 compute-0 sudo[425519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:28 compute-0 sudo[425519]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:28 compute-0 sudo[425544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:28 compute-0 sudo[425544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:28 compute-0 sudo[425544]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:28 compute-0 ceph-mon[73607]: pgmap v3744: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:28 compute-0 sudo[425569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 13:22:28 compute-0 sudo[425569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:28 compute-0 nova_compute[257802]: 2025-10-02 13:22:28.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:22:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:28.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:22:28 compute-0 podman[425667]: 2025-10-02 13:22:28.63030948 +0000 UTC m=+0.058573379 container exec 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:22:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:22:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:28.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:22:28 compute-0 podman[425667]: 2025-10-02 13:22:28.764278324 +0000 UTC m=+0.192542213 container exec_died 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 13:22:28 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3745: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 13:22:29 compute-0 podman[425805]: 2025-10-02 13:22:29.229646479 +0000 UTC m=+0.048903507 container exec 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 13:22:29 compute-0 podman[425805]: 2025-10-02 13:22:29.239377424 +0000 UTC m=+0.058634432 container exec_died 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 13:22:29 compute-0 ceph-mon[73607]: pgmap v3745: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:29 compute-0 podman[425865]: 2025-10-02 13:22:29.415231595 +0000 UTC m=+0.044003160 container exec a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, description=keepalived for Ceph, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
Oct 02 13:22:29 compute-0 podman[425865]: 2025-10-02 13:22:29.431051775 +0000 UTC m=+0.059823320 container exec_died a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, architecture=x86_64, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, io.buildah.version=1.28.2, version=2.2.4, release=1793, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph.)
Oct 02 13:22:29 compute-0 sudo[425569]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:22:29 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:22:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 13:22:29 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:22:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:22:29 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:22:29 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:22:29 compute-0 sudo[425918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:29 compute-0 sudo[425918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:29 compute-0 sudo[425918]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:29 compute-0 sudo[425943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:22:29 compute-0 sudo[425943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:29 compute-0 sudo[425943]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:29 compute-0 sudo[425968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:29 compute-0 sudo[425968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:29 compute-0 sudo[425968]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:29 compute-0 sudo[425993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:22:29 compute-0 sudo[425993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:30 compute-0 sudo[425993]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:22:30 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:22:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:22:30 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:22:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:22:30 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:22:30 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 74176dc1-a51d-45cf-9398-df65f43167a8 does not exist
Oct 02 13:22:30 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 94459412-d024-47da-8e7c-6e9b21973bf1 does not exist
Oct 02 13:22:30 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev afa7fac1-4aa3-4e50-b812-eb4e9e7398eb does not exist
Oct 02 13:22:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:22:30 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:22:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:22:30 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:22:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:22:30 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:22:30 compute-0 sudo[426049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:30 compute-0 sudo[426049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:30 compute-0 sudo[426049]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:30 compute-0 sudo[426074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:22:30 compute-0 sudo[426074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:30 compute-0 sudo[426074]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:22:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:22:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:22:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:22:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:22:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:22:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:22:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:22:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:22:30 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:22:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:30.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:30 compute-0 sudo[426099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:30 compute-0 sudo[426099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:30 compute-0 sudo[426099]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:30.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:30 compute-0 sudo[426124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:22:30 compute-0 sudo[426124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:30 compute-0 nova_compute[257802]: 2025-10-02 13:22:30.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3746: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:31 compute-0 podman[426189]: 2025-10-02 13:22:31.122782766 +0000 UTC m=+0.043410095 container create c01a04042373fe5bdec74d8db14fe971982ffdb44ad35664ca31fc1b512e075c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_antonelli, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:22:31 compute-0 systemd[1]: Started libpod-conmon-c01a04042373fe5bdec74d8db14fe971982ffdb44ad35664ca31fc1b512e075c.scope.
Oct 02 13:22:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:22:31 compute-0 podman[426189]: 2025-10-02 13:22:31.104875105 +0000 UTC m=+0.025502454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:22:31 compute-0 podman[426189]: 2025-10-02 13:22:31.208080518 +0000 UTC m=+0.128707877 container init c01a04042373fe5bdec74d8db14fe971982ffdb44ad35664ca31fc1b512e075c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_antonelli, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 13:22:31 compute-0 podman[426189]: 2025-10-02 13:22:31.216585353 +0000 UTC m=+0.137212682 container start c01a04042373fe5bdec74d8db14fe971982ffdb44ad35664ca31fc1b512e075c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_antonelli, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 13:22:31 compute-0 podman[426189]: 2025-10-02 13:22:31.219924033 +0000 UTC m=+0.140551412 container attach c01a04042373fe5bdec74d8db14fe971982ffdb44ad35664ca31fc1b512e075c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_antonelli, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 13:22:31 compute-0 competent_antonelli[426206]: 167 167
Oct 02 13:22:31 compute-0 systemd[1]: libpod-c01a04042373fe5bdec74d8db14fe971982ffdb44ad35664ca31fc1b512e075c.scope: Deactivated successfully.
Oct 02 13:22:31 compute-0 podman[426189]: 2025-10-02 13:22:31.224329049 +0000 UTC m=+0.144956398 container died c01a04042373fe5bdec74d8db14fe971982ffdb44ad35664ca31fc1b512e075c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:22:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-9aa38c096637351f7f14b6b3d9662619c2fd0d8e852396600f032356596d97b9-merged.mount: Deactivated successfully.
Oct 02 13:22:31 compute-0 podman[426189]: 2025-10-02 13:22:31.258720957 +0000 UTC m=+0.179348286 container remove c01a04042373fe5bdec74d8db14fe971982ffdb44ad35664ca31fc1b512e075c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_antonelli, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 13:22:31 compute-0 systemd[1]: libpod-conmon-c01a04042373fe5bdec74d8db14fe971982ffdb44ad35664ca31fc1b512e075c.scope: Deactivated successfully.
Oct 02 13:22:31 compute-0 podman[426231]: 2025-10-02 13:22:31.40893062 +0000 UTC m=+0.040713170 container create 1b2cd49a2ff1774f8da9de34c0d84b9c5b2a8abc288311c797055b60af33553f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_gagarin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:22:31 compute-0 systemd[1]: Started libpod-conmon-1b2cd49a2ff1774f8da9de34c0d84b9c5b2a8abc288311c797055b60af33553f.scope.
Oct 02 13:22:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:22:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bea7d28b32876c577c75c807f445726677619dd69db30fa2cd46e695d80434f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:22:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bea7d28b32876c577c75c807f445726677619dd69db30fa2cd46e695d80434f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:22:31 compute-0 podman[426231]: 2025-10-02 13:22:31.391173063 +0000 UTC m=+0.022955613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:22:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bea7d28b32876c577c75c807f445726677619dd69db30fa2cd46e695d80434f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:22:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bea7d28b32876c577c75c807f445726677619dd69db30fa2cd46e695d80434f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:22:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bea7d28b32876c577c75c807f445726677619dd69db30fa2cd46e695d80434f0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:22:31 compute-0 podman[426231]: 2025-10-02 13:22:31.505453453 +0000 UTC m=+0.137236013 container init 1b2cd49a2ff1774f8da9de34c0d84b9c5b2a8abc288311c797055b60af33553f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_gagarin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:22:31 compute-0 podman[426231]: 2025-10-02 13:22:31.513346872 +0000 UTC m=+0.145129422 container start 1b2cd49a2ff1774f8da9de34c0d84b9c5b2a8abc288311c797055b60af33553f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 13:22:31 compute-0 podman[426231]: 2025-10-02 13:22:31.516544699 +0000 UTC m=+0.148327249 container attach 1b2cd49a2ff1774f8da9de34c0d84b9c5b2a8abc288311c797055b60af33553f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 13:22:31 compute-0 ceph-mon[73607]: pgmap v3746: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:32 compute-0 relaxed_gagarin[426248]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:22:32 compute-0 relaxed_gagarin[426248]: --> relative data size: 1.0
Oct 02 13:22:32 compute-0 relaxed_gagarin[426248]: --> All data devices are unavailable
Oct 02 13:22:32 compute-0 systemd[1]: libpod-1b2cd49a2ff1774f8da9de34c0d84b9c5b2a8abc288311c797055b60af33553f.scope: Deactivated successfully.
Oct 02 13:22:32 compute-0 podman[426231]: 2025-10-02 13:22:32.354982461 +0000 UTC m=+0.986765031 container died 1b2cd49a2ff1774f8da9de34c0d84b9c5b2a8abc288311c797055b60af33553f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_gagarin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 13:22:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-bea7d28b32876c577c75c807f445726677619dd69db30fa2cd46e695d80434f0-merged.mount: Deactivated successfully.
Oct 02 13:22:32 compute-0 podman[426231]: 2025-10-02 13:22:32.427136417 +0000 UTC m=+1.058918967 container remove 1b2cd49a2ff1774f8da9de34c0d84b9c5b2a8abc288311c797055b60af33553f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:22:32 compute-0 systemd[1]: libpod-conmon-1b2cd49a2ff1774f8da9de34c0d84b9c5b2a8abc288311c797055b60af33553f.scope: Deactivated successfully.
Oct 02 13:22:32 compute-0 sudo[426124]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:32 compute-0 podman[426272]: 2025-10-02 13:22:32.469695551 +0000 UTC m=+0.076931782 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 02 13:22:32 compute-0 podman[426271]: 2025-10-02 13:22:32.469685341 +0000 UTC m=+0.077387764 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0)
Oct 02 13:22:32 compute-0 podman[426264]: 2025-10-02 13:22:32.509607401 +0000 UTC m=+0.113515502 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:22:32 compute-0 sudo[426330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:32 compute-0 sudo[426330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:32 compute-0 sudo[426330]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:32 compute-0 sudo[426355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:22:32 compute-0 sudo[426355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:32 compute-0 sudo[426355]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:32 compute-0 sudo[426369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:32 compute-0 sudo[426369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:32 compute-0 sudo[426369]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:32.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:32 compute-0 sudo[426403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:32 compute-0 sudo[426403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:32 compute-0 sudo[426403]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:32 compute-0 sudo[426413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:32 compute-0 sudo[426413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:32 compute-0 sudo[426413]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:22:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:32.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:22:32 compute-0 sudo[426455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 13:22:32 compute-0 sudo[426455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3747: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:33 compute-0 podman[426522]: 2025-10-02 13:22:33.052179205 +0000 UTC m=+0.022186515 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:22:33 compute-0 podman[426522]: 2025-10-02 13:22:33.194445157 +0000 UTC m=+0.164452447 container create 4b5c4583bf54b8d72fd0941876ad8d30e9d0b0a88e85c339256b71d458e2c42f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_euler, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 13:22:33 compute-0 ceph-mon[73607]: pgmap v3747: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:33 compute-0 systemd[1]: Started libpod-conmon-4b5c4583bf54b8d72fd0941876ad8d30e9d0b0a88e85c339256b71d458e2c42f.scope.
Oct 02 13:22:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:22:33 compute-0 podman[426522]: 2025-10-02 13:22:33.352142041 +0000 UTC m=+0.322149361 container init 4b5c4583bf54b8d72fd0941876ad8d30e9d0b0a88e85c339256b71d458e2c42f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:22:33 compute-0 podman[426522]: 2025-10-02 13:22:33.360543743 +0000 UTC m=+0.330551033 container start 4b5c4583bf54b8d72fd0941876ad8d30e9d0b0a88e85c339256b71d458e2c42f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:22:33 compute-0 podman[426522]: 2025-10-02 13:22:33.364844737 +0000 UTC m=+0.334852037 container attach 4b5c4583bf54b8d72fd0941876ad8d30e9d0b0a88e85c339256b71d458e2c42f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_euler, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:22:33 compute-0 naughty_euler[426539]: 167 167
Oct 02 13:22:33 compute-0 systemd[1]: libpod-4b5c4583bf54b8d72fd0941876ad8d30e9d0b0a88e85c339256b71d458e2c42f.scope: Deactivated successfully.
Oct 02 13:22:33 compute-0 podman[426522]: 2025-10-02 13:22:33.368134096 +0000 UTC m=+0.338141386 container died 4b5c4583bf54b8d72fd0941876ad8d30e9d0b0a88e85c339256b71d458e2c42f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 13:22:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-19056b83a527557bf4065183430d51ea59f5a17b2ccfa5b501d2db4895d1f4ef-merged.mount: Deactivated successfully.
Oct 02 13:22:33 compute-0 podman[426522]: 2025-10-02 13:22:33.406920899 +0000 UTC m=+0.376928189 container remove 4b5c4583bf54b8d72fd0941876ad8d30e9d0b0a88e85c339256b71d458e2c42f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_euler, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 13:22:33 compute-0 systemd[1]: libpod-conmon-4b5c4583bf54b8d72fd0941876ad8d30e9d0b0a88e85c339256b71d458e2c42f.scope: Deactivated successfully.
Oct 02 13:22:33 compute-0 nova_compute[257802]: 2025-10-02 13:22:33.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:33 compute-0 podman[426563]: 2025-10-02 13:22:33.583079677 +0000 UTC m=+0.039377648 container create 419a64f8934464bcd0afd0c521dd6835a7d4f3bc7eac21ff83254e92a52288b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 13:22:33 compute-0 systemd[1]: Started libpod-conmon-419a64f8934464bcd0afd0c521dd6835a7d4f3bc7eac21ff83254e92a52288b3.scope.
Oct 02 13:22:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:22:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbfcbc00c6a9b8f6a809c3cce575e857393a09b05404374df616070a7f38d71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:22:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbfcbc00c6a9b8f6a809c3cce575e857393a09b05404374df616070a7f38d71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:22:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbfcbc00c6a9b8f6a809c3cce575e857393a09b05404374df616070a7f38d71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:22:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbfcbc00c6a9b8f6a809c3cce575e857393a09b05404374df616070a7f38d71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:22:33 compute-0 podman[426563]: 2025-10-02 13:22:33.565991945 +0000 UTC m=+0.022289967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:22:33 compute-0 podman[426563]: 2025-10-02 13:22:33.665580322 +0000 UTC m=+0.121878293 container init 419a64f8934464bcd0afd0c521dd6835a7d4f3bc7eac21ff83254e92a52288b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hawking, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:22:33 compute-0 podman[426563]: 2025-10-02 13:22:33.67420972 +0000 UTC m=+0.130507711 container start 419a64f8934464bcd0afd0c521dd6835a7d4f3bc7eac21ff83254e92a52288b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:22:33 compute-0 podman[426563]: 2025-10-02 13:22:33.677536059 +0000 UTC m=+0.133834050 container attach 419a64f8934464bcd0afd0c521dd6835a7d4f3bc7eac21ff83254e92a52288b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hawking, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:22:34 compute-0 hungry_hawking[426579]: {
Oct 02 13:22:34 compute-0 hungry_hawking[426579]:     "1": [
Oct 02 13:22:34 compute-0 hungry_hawking[426579]:         {
Oct 02 13:22:34 compute-0 hungry_hawking[426579]:             "devices": [
Oct 02 13:22:34 compute-0 hungry_hawking[426579]:                 "/dev/loop3"
Oct 02 13:22:34 compute-0 hungry_hawking[426579]:             ],
Oct 02 13:22:34 compute-0 hungry_hawking[426579]:             "lv_name": "ceph_lv0",
Oct 02 13:22:34 compute-0 hungry_hawking[426579]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:22:34 compute-0 hungry_hawking[426579]:             "lv_size": "7511998464",
Oct 02 13:22:34 compute-0 hungry_hawking[426579]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:22:34 compute-0 hungry_hawking[426579]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:22:34 compute-0 hungry_hawking[426579]:             "name": "ceph_lv0",
Oct 02 13:22:34 compute-0 hungry_hawking[426579]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:22:34 compute-0 hungry_hawking[426579]:             "tags": {
Oct 02 13:22:34 compute-0 hungry_hawking[426579]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:22:34 compute-0 hungry_hawking[426579]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:22:34 compute-0 hungry_hawking[426579]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:22:34 compute-0 hungry_hawking[426579]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:22:34 compute-0 hungry_hawking[426579]:                 "ceph.cluster_name": "ceph",
Oct 02 13:22:34 compute-0 hungry_hawking[426579]:                 "ceph.crush_device_class": "",
Oct 02 13:22:34 compute-0 hungry_hawking[426579]:                 "ceph.encrypted": "0",
Oct 02 13:22:34 compute-0 hungry_hawking[426579]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:22:34 compute-0 hungry_hawking[426579]:                 "ceph.osd_id": "1",
Oct 02 13:22:34 compute-0 hungry_hawking[426579]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:22:34 compute-0 hungry_hawking[426579]:                 "ceph.type": "block",
Oct 02 13:22:34 compute-0 hungry_hawking[426579]:                 "ceph.vdo": "0"
Oct 02 13:22:34 compute-0 hungry_hawking[426579]:             },
Oct 02 13:22:34 compute-0 hungry_hawking[426579]:             "type": "block",
Oct 02 13:22:34 compute-0 hungry_hawking[426579]:             "vg_name": "ceph_vg0"
Oct 02 13:22:34 compute-0 hungry_hawking[426579]:         }
Oct 02 13:22:34 compute-0 hungry_hawking[426579]:     ]
Oct 02 13:22:34 compute-0 hungry_hawking[426579]: }
Oct 02 13:22:34 compute-0 systemd[1]: libpod-419a64f8934464bcd0afd0c521dd6835a7d4f3bc7eac21ff83254e92a52288b3.scope: Deactivated successfully.
Oct 02 13:22:34 compute-0 podman[426563]: 2025-10-02 13:22:34.423588248 +0000 UTC m=+0.879886219 container died 419a64f8934464bcd0afd0c521dd6835a7d4f3bc7eac21ff83254e92a52288b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hawking, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 13:22:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fbfcbc00c6a9b8f6a809c3cce575e857393a09b05404374df616070a7f38d71-merged.mount: Deactivated successfully.
Oct 02 13:22:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:22:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:34.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:22:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:34.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:34 compute-0 podman[426563]: 2025-10-02 13:22:34.789427149 +0000 UTC m=+1.245725110 container remove 419a64f8934464bcd0afd0c521dd6835a7d4f3bc7eac21ff83254e92a52288b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Oct 02 13:22:34 compute-0 systemd[1]: libpod-conmon-419a64f8934464bcd0afd0c521dd6835a7d4f3bc7eac21ff83254e92a52288b3.scope: Deactivated successfully.
Oct 02 13:22:34 compute-0 sudo[426455]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:34 compute-0 sudo[426600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:34 compute-0 sudo[426600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:34 compute-0 sudo[426600]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:34 compute-0 sudo[426625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:22:34 compute-0 sudo[426625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:34 compute-0 sudo[426625]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3748: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:35 compute-0 sudo[426650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:35 compute-0 sudo[426650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:35 compute-0 sudo[426650]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:35 compute-0 sudo[426675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 13:22:35 compute-0 sudo[426675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:35 compute-0 ceph-mon[73607]: pgmap v3748: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:35 compute-0 podman[426739]: 2025-10-02 13:22:35.489514442 +0000 UTC m=+0.097182048 container create 2ebe1fca8aef6f72b14cae83bb62a20355be54c613fcf381b00c46a98c2033b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_moore, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:22:35 compute-0 podman[426739]: 2025-10-02 13:22:35.421248121 +0000 UTC m=+0.028915757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:22:35 compute-0 systemd[1]: Started libpod-conmon-2ebe1fca8aef6f72b14cae83bb62a20355be54c613fcf381b00c46a98c2033b1.scope.
Oct 02 13:22:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:22:35 compute-0 podman[426739]: 2025-10-02 13:22:35.747564321 +0000 UTC m=+0.355232027 container init 2ebe1fca8aef6f72b14cae83bb62a20355be54c613fcf381b00c46a98c2033b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_moore, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 13:22:35 compute-0 podman[426739]: 2025-10-02 13:22:35.759797355 +0000 UTC m=+0.367464961 container start 2ebe1fca8aef6f72b14cae83bb62a20355be54c613fcf381b00c46a98c2033b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_moore, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 13:22:35 compute-0 jolly_moore[426755]: 167 167
Oct 02 13:22:35 compute-0 systemd[1]: libpod-2ebe1fca8aef6f72b14cae83bb62a20355be54c613fcf381b00c46a98c2033b1.scope: Deactivated successfully.
Oct 02 13:22:35 compute-0 podman[426739]: 2025-10-02 13:22:35.766259561 +0000 UTC m=+0.373927227 container attach 2ebe1fca8aef6f72b14cae83bb62a20355be54c613fcf381b00c46a98c2033b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_moore, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:22:35 compute-0 podman[426739]: 2025-10-02 13:22:35.768541116 +0000 UTC m=+0.376208732 container died 2ebe1fca8aef6f72b14cae83bb62a20355be54c613fcf381b00c46a98c2033b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_moore, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:22:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc9e451519ad6d39abd7293375dd9f35ae5de0814f58f0e43cb52c6be800e9fa-merged.mount: Deactivated successfully.
Oct 02 13:22:35 compute-0 podman[426739]: 2025-10-02 13:22:35.806403287 +0000 UTC m=+0.414070893 container remove 2ebe1fca8aef6f72b14cae83bb62a20355be54c613fcf381b00c46a98c2033b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_moore, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 13:22:35 compute-0 systemd[1]: libpod-conmon-2ebe1fca8aef6f72b14cae83bb62a20355be54c613fcf381b00c46a98c2033b1.scope: Deactivated successfully.
Oct 02 13:22:35 compute-0 podman[426777]: 2025-10-02 13:22:35.965473633 +0000 UTC m=+0.041340605 container create aaca371a2d3bca9b06b1ed60d3bb375eaa89ec171de2fed285ca6795d4c3271a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_blackburn, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 13:22:35 compute-0 nova_compute[257802]: 2025-10-02 13:22:35.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:36 compute-0 systemd[1]: Started libpod-conmon-aaca371a2d3bca9b06b1ed60d3bb375eaa89ec171de2fed285ca6795d4c3271a.scope.
Oct 02 13:22:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:22:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a0a85f227f0584775718636165e4bb717630c4b530668f16f04dfeb08c1a2b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:22:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a0a85f227f0584775718636165e4bb717630c4b530668f16f04dfeb08c1a2b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:22:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a0a85f227f0584775718636165e4bb717630c4b530668f16f04dfeb08c1a2b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:22:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a0a85f227f0584775718636165e4bb717630c4b530668f16f04dfeb08c1a2b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:22:36 compute-0 podman[426777]: 2025-10-02 13:22:35.948001513 +0000 UTC m=+0.023868515 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:22:36 compute-0 podman[426777]: 2025-10-02 13:22:36.054606288 +0000 UTC m=+0.130473260 container init aaca371a2d3bca9b06b1ed60d3bb375eaa89ec171de2fed285ca6795d4c3271a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 13:22:36 compute-0 podman[426777]: 2025-10-02 13:22:36.061655717 +0000 UTC m=+0.137522689 container start aaca371a2d3bca9b06b1ed60d3bb375eaa89ec171de2fed285ca6795d4c3271a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 13:22:36 compute-0 podman[426777]: 2025-10-02 13:22:36.064748072 +0000 UTC m=+0.140615044 container attach aaca371a2d3bca9b06b1ed60d3bb375eaa89ec171de2fed285ca6795d4c3271a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_blackburn, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:22:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:36.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:36.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:36 compute-0 priceless_blackburn[426793]: {
Oct 02 13:22:36 compute-0 priceless_blackburn[426793]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 13:22:36 compute-0 priceless_blackburn[426793]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:22:36 compute-0 priceless_blackburn[426793]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:22:36 compute-0 priceless_blackburn[426793]:         "osd_id": 1,
Oct 02 13:22:36 compute-0 priceless_blackburn[426793]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:22:36 compute-0 priceless_blackburn[426793]:         "type": "bluestore"
Oct 02 13:22:36 compute-0 priceless_blackburn[426793]:     }
Oct 02 13:22:36 compute-0 priceless_blackburn[426793]: }
Oct 02 13:22:36 compute-0 systemd[1]: libpod-aaca371a2d3bca9b06b1ed60d3bb375eaa89ec171de2fed285ca6795d4c3271a.scope: Deactivated successfully.
Oct 02 13:22:36 compute-0 podman[426777]: 2025-10-02 13:22:36.830186737 +0000 UTC m=+0.906053709 container died aaca371a2d3bca9b06b1ed60d3bb375eaa89ec171de2fed285ca6795d4c3271a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 13:22:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a0a85f227f0584775718636165e4bb717630c4b530668f16f04dfeb08c1a2b4-merged.mount: Deactivated successfully.
Oct 02 13:22:36 compute-0 podman[426777]: 2025-10-02 13:22:36.88392417 +0000 UTC m=+0.959791142 container remove aaca371a2d3bca9b06b1ed60d3bb375eaa89ec171de2fed285ca6795d4c3271a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:22:36 compute-0 systemd[1]: libpod-conmon-aaca371a2d3bca9b06b1ed60d3bb375eaa89ec171de2fed285ca6795d4c3271a.scope: Deactivated successfully.
Oct 02 13:22:36 compute-0 sudo[426675]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:22:36 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:22:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:22:36 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:22:36 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 3310659e-910a-480b-9a88-9c7b53614864 does not exist
Oct 02 13:22:36 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 115b8b3b-8a15-4c2c-b2c2-919ef5914817 does not exist
Oct 02 13:22:36 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 2f8ee046-5e98-41e3-a56a-927b01cac0a5 does not exist
Oct 02 13:22:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3749: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:37 compute-0 sudo[426827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:37 compute-0 sudo[426827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:37 compute-0 sudo[426827]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:37 compute-0 sudo[426852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:22:37 compute-0 sudo[426852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:37 compute-0 sudo[426852]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:37 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:22:37 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:22:37 compute-0 ceph-mon[73607]: pgmap v3749: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:38 compute-0 nova_compute[257802]: 2025-10-02 13:22:38.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:22:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:38.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:22:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:38.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3750: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:39 compute-0 nova_compute[257802]: 2025-10-02 13:22:39.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:39 compute-0 nova_compute[257802]: 2025-10-02 13:22:39.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:22:40 compute-0 nova_compute[257802]: 2025-10-02 13:22:40.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:40 compute-0 ceph-mon[73607]: pgmap v3750: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:22:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:40.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:22:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:40.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:40 compute-0 nova_compute[257802]: 2025-10-02 13:22:40.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3751: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:41 compute-0 ceph-mon[73607]: pgmap v3751: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:42.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:22:42
Oct 02 13:22:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:22:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:22:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['.rgw.root', 'images', 'vms', 'default.rgw.meta', 'default.rgw.log', 'backups', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data']
Oct 02 13:22:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:22:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:42.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:22:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:22:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:22:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:22:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:22:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:22:42 compute-0 podman[426880]: 2025-10-02 13:22:42.972572424 +0000 UTC m=+0.086949883 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 02 13:22:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3752: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:43 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1731271340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:22:43 compute-0 nova_compute[257802]: 2025-10-02 13:22:43.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:22:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:22:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:22:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:22:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:22:44 compute-0 ceph-mon[73607]: pgmap v3752: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:44 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4131413981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:22:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:22:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:22:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:22:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:22:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:22:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:44.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:44.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3753: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:45 compute-0 nova_compute[257802]: 2025-10-02 13:22:45.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:45 compute-0 ceph-mon[73607]: pgmap v3753: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:45 compute-0 nova_compute[257802]: 2025-10-02 13:22:45.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:46 compute-0 nova_compute[257802]: 2025-10-02 13:22:46.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:46 compute-0 nova_compute[257802]: 2025-10-02 13:22:46.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:46.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:46.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3754: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:48 compute-0 ceph-mon[73607]: pgmap v3754: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:48 compute-0 nova_compute[257802]: 2025-10-02 13:22:48.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:48.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:48.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3755: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:50 compute-0 ceph-mon[73607]: pgmap v3755: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:22:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:50.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:22:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:22:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:50.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:22:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3756: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:51 compute-0 nova_compute[257802]: 2025-10-02 13:22:51.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:51 compute-0 ceph-mon[73607]: pgmap v3756: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:52 compute-0 nova_compute[257802]: 2025-10-02 13:22:52.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:52.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:52.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:52 compute-0 sudo[426910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:52 compute-0 sudo[426910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:52 compute-0 sudo[426910]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:52 compute-0 sudo[426936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:52 compute-0 sudo[426936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:52 compute-0 sudo[426936]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3757: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:53 compute-0 nova_compute[257802]: 2025-10-02 13:22:53.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:54 compute-0 ceph-mon[73607]: pgmap v3757: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:54 compute-0 nova_compute[257802]: 2025-10-02 13:22:54.113 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:54.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:54.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3758: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:22:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:22:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:22:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:22:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:22:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:22:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:22:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:22:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:22:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:22:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:22:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:22:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:22:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:22:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:22:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:22:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:22:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:22:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:22:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:22:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:22:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:22:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:22:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:22:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4163954854' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:22:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:22:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4163954854' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:22:56 compute-0 nova_compute[257802]: 2025-10-02 13:22:56.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:56 compute-0 ceph-mon[73607]: pgmap v3758: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/4163954854' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:22:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/4163954854' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:22:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:56.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:56.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3759: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:58 compute-0 ceph-mon[73607]: pgmap v3759: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:58 compute-0 nova_compute[257802]: 2025-10-02 13:22:58.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:22:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 67K writes, 264K keys, 67K commit groups, 1.0 writes per commit group, ingest: 0.25 GB, 0.04 MB/s
                                           Cumulative WAL: 67K writes, 24K syncs, 2.72 writes per sync, written: 0.25 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2545 writes, 8403 keys, 2545 commit groups, 1.0 writes per commit group, ingest: 7.16 MB, 0.01 MB/s
                                           Interval WAL: 2545 writes, 1106 syncs, 2.30 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 13:22:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:22:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:58.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:22:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:22:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:58.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3760: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1297171564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:22:59 compute-0 ceph-mon[73607]: pgmap v3760: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/876074866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:23:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:00.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:00.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3761: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:01 compute-0 nova_compute[257802]: 2025-10-02 13:23:01.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:01 compute-0 nova_compute[257802]: 2025-10-02 13:23:01.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:23:01 compute-0 nova_compute[257802]: 2025-10-02 13:23:01.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:23:01 compute-0 nova_compute[257802]: 2025-10-02 13:23:01.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:23:01 compute-0 nova_compute[257802]: 2025-10-02 13:23:01.115 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:23:01 compute-0 ceph-mon[73607]: pgmap v3761: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:02.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:02.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:02 compute-0 podman[426966]: 2025-10-02 13:23:02.911136855 +0000 UTC m=+0.048808495 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 13:23:02 compute-0 podman[426968]: 2025-10-02 13:23:02.916419602 +0000 UTC m=+0.048485648 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 02 13:23:02 compute-0 podman[426967]: 2025-10-02 13:23:02.922725864 +0000 UTC m=+0.058344924 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true)
Oct 02 13:23:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3762: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:03 compute-0 nova_compute[257802]: 2025-10-02 13:23:03.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:03 compute-0 ceph-mon[73607]: pgmap v3762: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:04 compute-0 nova_compute[257802]: 2025-10-02 13:23:04.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:23:04 compute-0 nova_compute[257802]: 2025-10-02 13:23:04.130 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:23:04 compute-0 nova_compute[257802]: 2025-10-02 13:23:04.130 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:23:04 compute-0 nova_compute[257802]: 2025-10-02 13:23:04.130 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:23:04 compute-0 nova_compute[257802]: 2025-10-02 13:23:04.131 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:23:04 compute-0 nova_compute[257802]: 2025-10-02 13:23:04.131 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:23:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:23:04 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1379799277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:23:04 compute-0 nova_compute[257802]: 2025-10-02 13:23:04.544 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:23:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:04.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:04 compute-0 nova_compute[257802]: 2025-10-02 13:23:04.746 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:23:04 compute-0 nova_compute[257802]: 2025-10-02 13:23:04.747 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4142MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:23:04 compute-0 nova_compute[257802]: 2025-10-02 13:23:04.748 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:23:04 compute-0 nova_compute[257802]: 2025-10-02 13:23:04.748 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:23:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:04.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1379799277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:23:04 compute-0 nova_compute[257802]: 2025-10-02 13:23:04.836 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:23:04 compute-0 nova_compute[257802]: 2025-10-02 13:23:04.837 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:23:04 compute-0 nova_compute[257802]: 2025-10-02 13:23:04.861 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:23:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3763: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:23:05 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1478930758' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:23:05 compute-0 nova_compute[257802]: 2025-10-02 13:23:05.315 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:23:05 compute-0 nova_compute[257802]: 2025-10-02 13:23:05.322 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:23:05 compute-0 nova_compute[257802]: 2025-10-02 13:23:05.358 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:23:05 compute-0 nova_compute[257802]: 2025-10-02 13:23:05.360 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:23:05 compute-0 nova_compute[257802]: 2025-10-02 13:23:05.360 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:23:05 compute-0 ceph-mon[73607]: pgmap v3763: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1478930758' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:23:06 compute-0 nova_compute[257802]: 2025-10-02 13:23:06.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:06.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:23:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:06.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:23:07 compute-0 ceph-mgr[73901]: [devicehealth INFO root] Check health
Oct 02 13:23:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3764: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:08 compute-0 ceph-mon[73607]: pgmap v3764: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:08 compute-0 nova_compute[257802]: 2025-10-02 13:23:08.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:08.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:08.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3765: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:10 compute-0 ceph-mon[73607]: pgmap v3765: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:10.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:10.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3766: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:11 compute-0 nova_compute[257802]: 2025-10-02 13:23:11.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:12 compute-0 ceph-mon[73607]: pgmap v3766: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:12.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:23:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:23:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:23:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:23:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:12.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:23:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:23:12 compute-0 sudo[427073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:12 compute-0 sudo[427073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:12 compute-0 sudo[427073]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:12 compute-0 sudo[427098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:12 compute-0 sudo[427098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:12 compute-0 sudo[427098]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3767: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:13 compute-0 nova_compute[257802]: 2025-10-02 13:23:13.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:13 compute-0 podman[427123]: 2025-10-02 13:23:13.95607643 +0000 UTC m=+0.090793455 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 13:23:14 compute-0 ceph-mon[73607]: pgmap v3767: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:14.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:14.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3768: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:15 compute-0 ceph-mon[73607]: pgmap v3768: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:16 compute-0 nova_compute[257802]: 2025-10-02 13:23:16.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:16 compute-0 nova_compute[257802]: 2025-10-02 13:23:16.361 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:23:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:23:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:16.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:23:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:16.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3769: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:18 compute-0 ceph-mon[73607]: pgmap v3769: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:18 compute-0 nova_compute[257802]: 2025-10-02 13:23:18.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:18.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:23:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:18.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:23:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3770: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:20 compute-0 ceph-mon[73607]: pgmap v3770: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:20.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:20.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3771: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:21 compute-0 nova_compute[257802]: 2025-10-02 13:23:21.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:21 compute-0 ceph-mon[73607]: pgmap v3771: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:22.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:22.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3772: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:23 compute-0 ceph-mon[73607]: pgmap v3772: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:23 compute-0 nova_compute[257802]: 2025-10-02 13:23:23.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:24.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:24.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3773: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:26 compute-0 nova_compute[257802]: 2025-10-02 13:23:26.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:26 compute-0 ceph-mon[73607]: pgmap v3773: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:23:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:26.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:23:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:26.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:23:27.008 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:23:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:23:27.008 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:23:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:23:27.008 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:23:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3774: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:28 compute-0 ceph-mon[73607]: pgmap v3774: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:28 compute-0 nova_compute[257802]: 2025-10-02 13:23:28.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:28.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:28.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3775: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:29 compute-0 ceph-mon[73607]: pgmap v3775: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:30.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:23:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:30.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:23:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3776: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:31 compute-0 nova_compute[257802]: 2025-10-02 13:23:31.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:32 compute-0 ceph-mon[73607]: pgmap v3776: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:32.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:32.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:33 compute-0 sudo[427160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3777: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:33 compute-0 sudo[427160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:33 compute-0 sudo[427160]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:33 compute-0 podman[427185]: 2025-10-02 13:23:33.143373117 +0000 UTC m=+0.067743871 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible)
Oct 02 13:23:33 compute-0 sudo[427198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:33 compute-0 sudo[427198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:33 compute-0 sudo[427198]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:33 compute-0 podman[427184]: 2025-10-02 13:23:33.1593057 +0000 UTC m=+0.082352552 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 02 13:23:33 compute-0 podman[427186]: 2025-10-02 13:23:33.171941614 +0000 UTC m=+0.082593608 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 13:23:33 compute-0 nova_compute[257802]: 2025-10-02 13:23:33.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:34 compute-0 ceph-mon[73607]: pgmap v3777: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:23:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:34.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:23:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:34.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3778: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:35 compute-0 ceph-mon[73607]: pgmap v3778: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:36 compute-0 nova_compute[257802]: 2025-10-02 13:23:36.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:23:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:36.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:23:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:36.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3779: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:37 compute-0 sudo[427268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:37 compute-0 sudo[427268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:37 compute-0 sudo[427268]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:37 compute-0 sudo[427293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:23:37 compute-0 sudo[427293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:37 compute-0 sudo[427293]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:37 compute-0 sudo[427318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:37 compute-0 sudo[427318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:37 compute-0 sudo[427318]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:37 compute-0 sudo[427343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:23:37 compute-0 sudo[427343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:38 compute-0 sudo[427343]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:38 compute-0 ceph-mon[73607]: pgmap v3779: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:23:38 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:23:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:23:38 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:23:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:23:38 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:23:38 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 3d20f663-1309-4790-8d37-eb30356f8191 does not exist
Oct 02 13:23:38 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 2c2122df-ce4c-4208-b53f-e2b07121faba does not exist
Oct 02 13:23:38 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev a3178f71-d981-489e-8cb8-efdcb08ec5ee does not exist
Oct 02 13:23:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:23:38 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:23:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:23:38 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:23:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:23:38 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:23:38 compute-0 sudo[427399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:38 compute-0 sudo[427399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:38 compute-0 sudo[427399]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:38 compute-0 sudo[427424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:23:38 compute-0 sudo[427424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:38 compute-0 sudo[427424]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:38 compute-0 sudo[427449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:38 compute-0 sudo[427449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:38 compute-0 sudo[427449]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:38 compute-0 sudo[427474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:23:38 compute-0 sudo[427474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:38 compute-0 nova_compute[257802]: 2025-10-02 13:23:38.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:38 compute-0 podman[427541]: 2025-10-02 13:23:38.715156696 +0000 UTC m=+0.038090297 container create 20a192968d8fb66378f93f14774252e5b7eda21fc5e4cfd6422d42abaec90f7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 13:23:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:38.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:38 compute-0 systemd[1]: Started libpod-conmon-20a192968d8fb66378f93f14774252e5b7eda21fc5e4cfd6422d42abaec90f7f.scope.
Oct 02 13:23:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:23:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:38.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:38 compute-0 podman[427541]: 2025-10-02 13:23:38.794013203 +0000 UTC m=+0.116946804 container init 20a192968d8fb66378f93f14774252e5b7eda21fc5e4cfd6422d42abaec90f7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_wu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:23:38 compute-0 podman[427541]: 2025-10-02 13:23:38.698776491 +0000 UTC m=+0.021710122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:23:38 compute-0 podman[427541]: 2025-10-02 13:23:38.800150051 +0000 UTC m=+0.123083652 container start 20a192968d8fb66378f93f14774252e5b7eda21fc5e4cfd6422d42abaec90f7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_wu, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 13:23:38 compute-0 podman[427541]: 2025-10-02 13:23:38.80343566 +0000 UTC m=+0.126369281 container attach 20a192968d8fb66378f93f14774252e5b7eda21fc5e4cfd6422d42abaec90f7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_wu, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 13:23:38 compute-0 loving_wu[427558]: 167 167
Oct 02 13:23:38 compute-0 systemd[1]: libpod-20a192968d8fb66378f93f14774252e5b7eda21fc5e4cfd6422d42abaec90f7f.scope: Deactivated successfully.
Oct 02 13:23:38 compute-0 podman[427541]: 2025-10-02 13:23:38.805585312 +0000 UTC m=+0.128518923 container died 20a192968d8fb66378f93f14774252e5b7eda21fc5e4cfd6422d42abaec90f7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_wu, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 13:23:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-b42fb37589f3f915a9f69470d9003334e4dc1b3fc5da438f226d63772ab9c340-merged.mount: Deactivated successfully.
Oct 02 13:23:38 compute-0 podman[427541]: 2025-10-02 13:23:38.850664226 +0000 UTC m=+0.173597827 container remove 20a192968d8fb66378f93f14774252e5b7eda21fc5e4cfd6422d42abaec90f7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_wu, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 13:23:38 compute-0 systemd[1]: libpod-conmon-20a192968d8fb66378f93f14774252e5b7eda21fc5e4cfd6422d42abaec90f7f.scope: Deactivated successfully.
Oct 02 13:23:39 compute-0 podman[427581]: 2025-10-02 13:23:39.010312867 +0000 UTC m=+0.051277975 container create 3f3758d86d97da85e9a2bd66bb0625b1a34fad056a2942c32b2b90dabc73bc69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 13:23:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3780: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:39 compute-0 systemd[1]: Started libpod-conmon-3f3758d86d97da85e9a2bd66bb0625b1a34fad056a2942c32b2b90dabc73bc69.scope.
Oct 02 13:23:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:23:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c6088de9e3a9ad4a856036b04a1c82234b20d54be276606ea7d9fd9111556e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:23:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c6088de9e3a9ad4a856036b04a1c82234b20d54be276606ea7d9fd9111556e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:23:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c6088de9e3a9ad4a856036b04a1c82234b20d54be276606ea7d9fd9111556e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:23:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c6088de9e3a9ad4a856036b04a1c82234b20d54be276606ea7d9fd9111556e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:23:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c6088de9e3a9ad4a856036b04a1c82234b20d54be276606ea7d9fd9111556e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:23:39 compute-0 podman[427581]: 2025-10-02 13:23:39.088523629 +0000 UTC m=+0.129488757 container init 3f3758d86d97da85e9a2bd66bb0625b1a34fad056a2942c32b2b90dabc73bc69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:23:39 compute-0 podman[427581]: 2025-10-02 13:23:38.993667997 +0000 UTC m=+0.034633125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:23:39 compute-0 podman[427581]: 2025-10-02 13:23:39.097980056 +0000 UTC m=+0.138945164 container start 3f3758d86d97da85e9a2bd66bb0625b1a34fad056a2942c32b2b90dabc73bc69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_tesla, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:23:39 compute-0 podman[427581]: 2025-10-02 13:23:39.101669415 +0000 UTC m=+0.142634543 container attach 3f3758d86d97da85e9a2bd66bb0625b1a34fad056a2942c32b2b90dabc73bc69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_tesla, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:23:39 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:23:39 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:23:39 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:23:39 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:23:39 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:23:39 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:23:39 compute-0 boring_tesla[427597]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:23:39 compute-0 boring_tesla[427597]: --> relative data size: 1.0
Oct 02 13:23:39 compute-0 boring_tesla[427597]: --> All data devices are unavailable
Oct 02 13:23:39 compute-0 systemd[1]: libpod-3f3758d86d97da85e9a2bd66bb0625b1a34fad056a2942c32b2b90dabc73bc69.scope: Deactivated successfully.
Oct 02 13:23:39 compute-0 podman[427581]: 2025-10-02 13:23:39.903882695 +0000 UTC m=+0.944847823 container died 3f3758d86d97da85e9a2bd66bb0625b1a34fad056a2942c32b2b90dabc73bc69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 13:23:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-51c6088de9e3a9ad4a856036b04a1c82234b20d54be276606ea7d9fd9111556e-merged.mount: Deactivated successfully.
Oct 02 13:23:39 compute-0 podman[427581]: 2025-10-02 13:23:39.958610541 +0000 UTC m=+0.999575649 container remove 3f3758d86d97da85e9a2bd66bb0625b1a34fad056a2942c32b2b90dabc73bc69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:23:39 compute-0 systemd[1]: libpod-conmon-3f3758d86d97da85e9a2bd66bb0625b1a34fad056a2942c32b2b90dabc73bc69.scope: Deactivated successfully.
Oct 02 13:23:39 compute-0 sudo[427474]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:40 compute-0 sudo[427625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:40 compute-0 sudo[427625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:40 compute-0 sudo[427625]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:40 compute-0 nova_compute[257802]: 2025-10-02 13:23:40.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:23:40 compute-0 nova_compute[257802]: 2025-10-02 13:23:40.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:23:40 compute-0 nova_compute[257802]: 2025-10-02 13:23:40.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:23:40 compute-0 sudo[427650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:23:40 compute-0 sudo[427650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:40 compute-0 sudo[427650]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:40 compute-0 ceph-mon[73607]: pgmap v3780: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:40 compute-0 sudo[427675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:40 compute-0 sudo[427675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:40 compute-0 sudo[427675]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:40 compute-0 sudo[427700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 13:23:40 compute-0 sudo[427700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:40 compute-0 podman[427766]: 2025-10-02 13:23:40.531279039 +0000 UTC m=+0.038216090 container create d26fc980cc29f72151a476e1d3f4cc172e5674bacbdee3b9fcebffbce03782f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_cori, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:23:40 compute-0 systemd[1]: Started libpod-conmon-d26fc980cc29f72151a476e1d3f4cc172e5674bacbdee3b9fcebffbce03782f7.scope.
Oct 02 13:23:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:23:40 compute-0 podman[427766]: 2025-10-02 13:23:40.514491505 +0000 UTC m=+0.021428576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:23:40 compute-0 podman[427766]: 2025-10-02 13:23:40.611199682 +0000 UTC m=+0.118136813 container init d26fc980cc29f72151a476e1d3f4cc172e5674bacbdee3b9fcebffbce03782f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:23:40 compute-0 podman[427766]: 2025-10-02 13:23:40.623286753 +0000 UTC m=+0.130223804 container start d26fc980cc29f72151a476e1d3f4cc172e5674bacbdee3b9fcebffbce03782f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_cori, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 13:23:40 compute-0 great_cori[427783]: 167 167
Oct 02 13:23:40 compute-0 podman[427766]: 2025-10-02 13:23:40.627513435 +0000 UTC m=+0.134450546 container attach d26fc980cc29f72151a476e1d3f4cc172e5674bacbdee3b9fcebffbce03782f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_cori, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 13:23:40 compute-0 systemd[1]: libpod-d26fc980cc29f72151a476e1d3f4cc172e5674bacbdee3b9fcebffbce03782f7.scope: Deactivated successfully.
Oct 02 13:23:40 compute-0 podman[427766]: 2025-10-02 13:23:40.629439091 +0000 UTC m=+0.136376132 container died d26fc980cc29f72151a476e1d3f4cc172e5674bacbdee3b9fcebffbce03782f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_cori, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 13:23:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c5ffec7f306468e91358fbc60e4b770a408ecbb152d5beeec7d8a599e8b197d-merged.mount: Deactivated successfully.
Oct 02 13:23:40 compute-0 podman[427766]: 2025-10-02 13:23:40.663389157 +0000 UTC m=+0.170326208 container remove d26fc980cc29f72151a476e1d3f4cc172e5674bacbdee3b9fcebffbce03782f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 13:23:40 compute-0 systemd[1]: libpod-conmon-d26fc980cc29f72151a476e1d3f4cc172e5674bacbdee3b9fcebffbce03782f7.scope: Deactivated successfully.
Oct 02 13:23:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:40.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:40.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:40 compute-0 podman[427810]: 2025-10-02 13:23:40.83721278 +0000 UTC m=+0.040817324 container create 28a224e1a0372f1c77e1fa5d19b69db7213039bdba9ae2bc893038f5652a0b5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:23:40 compute-0 systemd[1]: Started libpod-conmon-28a224e1a0372f1c77e1fa5d19b69db7213039bdba9ae2bc893038f5652a0b5b.scope.
Oct 02 13:23:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:23:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1100465bbfab6e4982792e0a7c23e9dd2c1a0cd3e68155fe7f2968b27e25a991/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:23:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1100465bbfab6e4982792e0a7c23e9dd2c1a0cd3e68155fe7f2968b27e25a991/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:23:40 compute-0 podman[427810]: 2025-10-02 13:23:40.821181064 +0000 UTC m=+0.024785628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:23:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1100465bbfab6e4982792e0a7c23e9dd2c1a0cd3e68155fe7f2968b27e25a991/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:23:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1100465bbfab6e4982792e0a7c23e9dd2c1a0cd3e68155fe7f2968b27e25a991/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:23:40 compute-0 podman[427810]: 2025-10-02 13:23:40.937281967 +0000 UTC m=+0.140886531 container init 28a224e1a0372f1c77e1fa5d19b69db7213039bdba9ae2bc893038f5652a0b5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:23:40 compute-0 podman[427810]: 2025-10-02 13:23:40.944450999 +0000 UTC m=+0.148055533 container start 28a224e1a0372f1c77e1fa5d19b69db7213039bdba9ae2bc893038f5652a0b5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kare, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:23:40 compute-0 podman[427810]: 2025-10-02 13:23:40.94741087 +0000 UTC m=+0.151015444 container attach 28a224e1a0372f1c77e1fa5d19b69db7213039bdba9ae2bc893038f5652a0b5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kare, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:23:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3781: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:41 compute-0 nova_compute[257802]: 2025-10-02 13:23:41.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:41 compute-0 ceph-mon[73607]: pgmap v3781: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:41 compute-0 zealous_kare[427827]: {
Oct 02 13:23:41 compute-0 zealous_kare[427827]:     "1": [
Oct 02 13:23:41 compute-0 zealous_kare[427827]:         {
Oct 02 13:23:41 compute-0 zealous_kare[427827]:             "devices": [
Oct 02 13:23:41 compute-0 zealous_kare[427827]:                 "/dev/loop3"
Oct 02 13:23:41 compute-0 zealous_kare[427827]:             ],
Oct 02 13:23:41 compute-0 zealous_kare[427827]:             "lv_name": "ceph_lv0",
Oct 02 13:23:41 compute-0 zealous_kare[427827]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:23:41 compute-0 zealous_kare[427827]:             "lv_size": "7511998464",
Oct 02 13:23:41 compute-0 zealous_kare[427827]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:23:41 compute-0 zealous_kare[427827]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:23:41 compute-0 zealous_kare[427827]:             "name": "ceph_lv0",
Oct 02 13:23:41 compute-0 zealous_kare[427827]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:23:41 compute-0 zealous_kare[427827]:             "tags": {
Oct 02 13:23:41 compute-0 zealous_kare[427827]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:23:41 compute-0 zealous_kare[427827]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:23:41 compute-0 zealous_kare[427827]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:23:41 compute-0 zealous_kare[427827]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:23:41 compute-0 zealous_kare[427827]:                 "ceph.cluster_name": "ceph",
Oct 02 13:23:41 compute-0 zealous_kare[427827]:                 "ceph.crush_device_class": "",
Oct 02 13:23:41 compute-0 zealous_kare[427827]:                 "ceph.encrypted": "0",
Oct 02 13:23:41 compute-0 zealous_kare[427827]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:23:41 compute-0 zealous_kare[427827]:                 "ceph.osd_id": "1",
Oct 02 13:23:41 compute-0 zealous_kare[427827]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:23:41 compute-0 zealous_kare[427827]:                 "ceph.type": "block",
Oct 02 13:23:41 compute-0 zealous_kare[427827]:                 "ceph.vdo": "0"
Oct 02 13:23:41 compute-0 zealous_kare[427827]:             },
Oct 02 13:23:41 compute-0 zealous_kare[427827]:             "type": "block",
Oct 02 13:23:41 compute-0 zealous_kare[427827]:             "vg_name": "ceph_vg0"
Oct 02 13:23:41 compute-0 zealous_kare[427827]:         }
Oct 02 13:23:41 compute-0 zealous_kare[427827]:     ]
Oct 02 13:23:41 compute-0 zealous_kare[427827]: }
Oct 02 13:23:41 compute-0 systemd[1]: libpod-28a224e1a0372f1c77e1fa5d19b69db7213039bdba9ae2bc893038f5652a0b5b.scope: Deactivated successfully.
Oct 02 13:23:41 compute-0 podman[427810]: 2025-10-02 13:23:41.709993047 +0000 UTC m=+0.913597591 container died 28a224e1a0372f1c77e1fa5d19b69db7213039bdba9ae2bc893038f5652a0b5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:23:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-1100465bbfab6e4982792e0a7c23e9dd2c1a0cd3e68155fe7f2968b27e25a991-merged.mount: Deactivated successfully.
Oct 02 13:23:41 compute-0 podman[427810]: 2025-10-02 13:23:41.770011881 +0000 UTC m=+0.973616425 container remove 28a224e1a0372f1c77e1fa5d19b69db7213039bdba9ae2bc893038f5652a0b5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 13:23:41 compute-0 systemd[1]: libpod-conmon-28a224e1a0372f1c77e1fa5d19b69db7213039bdba9ae2bc893038f5652a0b5b.scope: Deactivated successfully.
Oct 02 13:23:41 compute-0 sudo[427700]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:41 compute-0 sudo[427847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:41 compute-0 sudo[427847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:41 compute-0 sudo[427847]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:41 compute-0 sudo[427872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:23:41 compute-0 sudo[427872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:41 compute-0 sudo[427872]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:41 compute-0 sudo[427897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:41 compute-0 sudo[427897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:41 compute-0 sudo[427897]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:41 compute-0 sudo[427922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 13:23:41 compute-0 sudo[427922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:42 compute-0 podman[427988]: 2025-10-02 13:23:42.29534267 +0000 UTC m=+0.036151151 container create 301faa38df734d1d171999c9ae5d36385f19ac95d7f99a14c1af850c9c60c255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_banach, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 13:23:42 compute-0 systemd[1]: Started libpod-conmon-301faa38df734d1d171999c9ae5d36385f19ac95d7f99a14c1af850c9c60c255.scope.
Oct 02 13:23:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:23:42 compute-0 podman[427988]: 2025-10-02 13:23:42.374040083 +0000 UTC m=+0.114848574 container init 301faa38df734d1d171999c9ae5d36385f19ac95d7f99a14c1af850c9c60c255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 13:23:42 compute-0 podman[427988]: 2025-10-02 13:23:42.279580221 +0000 UTC m=+0.020388722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:23:42 compute-0 podman[427988]: 2025-10-02 13:23:42.379536125 +0000 UTC m=+0.120344606 container start 301faa38df734d1d171999c9ae5d36385f19ac95d7f99a14c1af850c9c60c255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_banach, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:23:42 compute-0 stoic_banach[428004]: 167 167
Oct 02 13:23:42 compute-0 podman[427988]: 2025-10-02 13:23:42.385182201 +0000 UTC m=+0.125990682 container attach 301faa38df734d1d171999c9ae5d36385f19ac95d7f99a14c1af850c9c60c255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_banach, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 13:23:42 compute-0 podman[427988]: 2025-10-02 13:23:42.385379176 +0000 UTC m=+0.126187657 container died 301faa38df734d1d171999c9ae5d36385f19ac95d7f99a14c1af850c9c60c255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 13:23:42 compute-0 systemd[1]: libpod-301faa38df734d1d171999c9ae5d36385f19ac95d7f99a14c1af850c9c60c255.scope: Deactivated successfully.
Oct 02 13:23:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b94582ba6573d5966c04ca7ca27c1048ae041db81ceac56e846e7370f77ed7e-merged.mount: Deactivated successfully.
Oct 02 13:23:42 compute-0 podman[427988]: 2025-10-02 13:23:42.419910077 +0000 UTC m=+0.160718548 container remove 301faa38df734d1d171999c9ae5d36385f19ac95d7f99a14c1af850c9c60c255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_banach, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 13:23:42 compute-0 systemd[1]: libpod-conmon-301faa38df734d1d171999c9ae5d36385f19ac95d7f99a14c1af850c9c60c255.scope: Deactivated successfully.
Oct 02 13:23:42 compute-0 podman[428028]: 2025-10-02 13:23:42.570945281 +0000 UTC m=+0.035818554 container create ef6b9b913290122f84dfd2039742346b88ccdadf6739e124a488c021513ad390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_heyrovsky, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 13:23:42 compute-0 systemd[1]: Started libpod-conmon-ef6b9b913290122f84dfd2039742346b88ccdadf6739e124a488c021513ad390.scope.
Oct 02 13:23:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:23:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ed9a653e92cc7d03c00dd6e04bf4d915a74d8683f786988d9025dd652cb218/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:23:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ed9a653e92cc7d03c00dd6e04bf4d915a74d8683f786988d9025dd652cb218/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:23:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ed9a653e92cc7d03c00dd6e04bf4d915a74d8683f786988d9025dd652cb218/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:23:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ed9a653e92cc7d03c00dd6e04bf4d915a74d8683f786988d9025dd652cb218/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:23:42 compute-0 podman[428028]: 2025-10-02 13:23:42.555440498 +0000 UTC m=+0.020313791 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:23:42 compute-0 podman[428028]: 2025-10-02 13:23:42.659103772 +0000 UTC m=+0.123977075 container init ef6b9b913290122f84dfd2039742346b88ccdadf6739e124a488c021513ad390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_heyrovsky, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 13:23:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:23:42
Oct 02 13:23:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:23:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:23:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'images', 'volumes', 'default.rgw.log', 'default.rgw.control', 'vms']
Oct 02 13:23:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:23:42 compute-0 podman[428028]: 2025-10-02 13:23:42.682110635 +0000 UTC m=+0.146983898 container start ef6b9b913290122f84dfd2039742346b88ccdadf6739e124a488c021513ad390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_heyrovsky, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 13:23:42 compute-0 podman[428028]: 2025-10-02 13:23:42.686521471 +0000 UTC m=+0.151394774 container attach ef6b9b913290122f84dfd2039742346b88ccdadf6739e124a488c021513ad390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_heyrovsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 13:23:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:23:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:42.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:23:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:23:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:23:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:23:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:23:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:23:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:23:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:42.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3782: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:43 compute-0 nova_compute[257802]: 2025-10-02 13:23:43.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:43 compute-0 elegant_heyrovsky[428044]: {
Oct 02 13:23:43 compute-0 elegant_heyrovsky[428044]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 13:23:43 compute-0 elegant_heyrovsky[428044]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:23:43 compute-0 elegant_heyrovsky[428044]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:23:43 compute-0 elegant_heyrovsky[428044]:         "osd_id": 1,
Oct 02 13:23:43 compute-0 elegant_heyrovsky[428044]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:23:43 compute-0 elegant_heyrovsky[428044]:         "type": "bluestore"
Oct 02 13:23:43 compute-0 elegant_heyrovsky[428044]:     }
Oct 02 13:23:43 compute-0 elegant_heyrovsky[428044]: }
Oct 02 13:23:43 compute-0 systemd[1]: libpod-ef6b9b913290122f84dfd2039742346b88ccdadf6739e124a488c021513ad390.scope: Deactivated successfully.
Oct 02 13:23:43 compute-0 podman[428028]: 2025-10-02 13:23:43.50281787 +0000 UTC m=+0.967691153 container died ef6b9b913290122f84dfd2039742346b88ccdadf6739e124a488c021513ad390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_heyrovsky, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:23:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0ed9a653e92cc7d03c00dd6e04bf4d915a74d8683f786988d9025dd652cb218-merged.mount: Deactivated successfully.
Oct 02 13:23:43 compute-0 podman[428028]: 2025-10-02 13:23:43.559933184 +0000 UTC m=+1.024806457 container remove ef6b9b913290122f84dfd2039742346b88ccdadf6739e124a488c021513ad390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:23:43 compute-0 systemd[1]: libpod-conmon-ef6b9b913290122f84dfd2039742346b88ccdadf6739e124a488c021513ad390.scope: Deactivated successfully.
Oct 02 13:23:43 compute-0 sudo[427922]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:23:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:23:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:23:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:23:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:23:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:23:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:23:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:23:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:23:43 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev b934a8ef-d58f-4922-a171-be30b85f33da does not exist
Oct 02 13:23:43 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 5378db9e-c5aa-4861-a917-88ce14ccac13 does not exist
Oct 02 13:23:43 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev faf883ec-440e-40f5-8416-1e60fc3a6954 does not exist
Oct 02 13:23:43 compute-0 sudo[428080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:43 compute-0 sudo[428080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:43 compute-0 sudo[428080]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:43 compute-0 sudo[428105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:23:43 compute-0 sudo[428105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:43 compute-0 sudo[428105]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:44 compute-0 ceph-mon[73607]: pgmap v3782: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:23:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:23:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:23:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:23:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:23:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:23:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:23:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:44.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:23:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:44.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:23:45 compute-0 podman[428131]: 2025-10-02 13:23:45.000668486 +0000 UTC m=+0.118334917 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 02 13:23:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3783: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:45 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1656033096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:23:45 compute-0 ceph-mon[73607]: pgmap v3783: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:46 compute-0 nova_compute[257802]: 2025-10-02 13:23:46.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:46 compute-0 nova_compute[257802]: 2025-10-02 13:23:46.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:23:46 compute-0 nova_compute[257802]: 2025-10-02 13:23:46.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:23:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2475010573' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:23:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:46.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:46.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3784: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:47 compute-0 ceph-mon[73607]: pgmap v3784: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:48 compute-0 nova_compute[257802]: 2025-10-02 13:23:48.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:23:48 compute-0 nova_compute[257802]: 2025-10-02 13:23:48.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:48.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:23:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:48.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:23:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3785: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:50 compute-0 ceph-mon[73607]: pgmap v3785: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:23:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:50.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:23:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:23:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:50.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:23:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3786: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:51 compute-0 nova_compute[257802]: 2025-10-02 13:23:51.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:51 compute-0 ceph-mon[73607]: pgmap v3786: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:23:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:52.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:23:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:23:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:52.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:23:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3787: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:53 compute-0 sudo[428164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:53 compute-0 sudo[428164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:53 compute-0 sudo[428164]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:53 compute-0 sudo[428189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:53 compute-0 sudo[428189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:53 compute-0 sudo[428189]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:53 compute-0 nova_compute[257802]: 2025-10-02 13:23:53.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:54 compute-0 nova_compute[257802]: 2025-10-02 13:23:54.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:23:54 compute-0 ceph-mon[73607]: pgmap v3787: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:54.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:23:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:54.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:23:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3788: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:23:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:23:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:23:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:23:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:23:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:23:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:23:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:23:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:23:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:23:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:23:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:23:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:23:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:23:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:23:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:23:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:23:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:23:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:23:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:23:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:23:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:23:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:23:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/877821466' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:23:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/877821466' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:23:56 compute-0 nova_compute[257802]: 2025-10-02 13:23:56.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:56 compute-0 ceph-mon[73607]: pgmap v3788: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #192. Immutable memtables: 0.
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:23:56.580540) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 192
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411436580591, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 1064, "num_deletes": 255, "total_data_size": 1700638, "memory_usage": 1727520, "flush_reason": "Manual Compaction"}
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #193: started
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411436591743, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 193, "file_size": 1659869, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 83193, "largest_seqno": 84256, "table_properties": {"data_size": 1654697, "index_size": 2631, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11036, "raw_average_key_size": 19, "raw_value_size": 1644332, "raw_average_value_size": 2900, "num_data_blocks": 116, "num_entries": 567, "num_filter_entries": 567, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411345, "oldest_key_time": 1759411345, "file_creation_time": 1759411436, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 193, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 12026 microseconds, and 6527 cpu microseconds.
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:23:56.592541) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #193: 1659869 bytes OK
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:23:56.592591) [db/memtable_list.cc:519] [default] Level-0 commit table #193 started
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:23:56.595271) [db/memtable_list.cc:722] [default] Level-0 commit table #193: memtable #1 done
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:23:56.595301) EVENT_LOG_v1 {"time_micros": 1759411436595291, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:23:56.595336) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 1695767, prev total WAL file size 1695767, number of live WAL files 2.
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000189.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:23:56.596765) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353130' seq:72057594037927935, type:22 .. '6C6F676D0033373631' seq:0, type:0; will stop at (end)
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [193(1620KB)], [191(10MB)]
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411436596820, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [193], "files_L6": [191], "score": -1, "input_data_size": 12940792, "oldest_snapshot_seqno": -1}
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #194: 11164 keys, 12802404 bytes, temperature: kUnknown
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411436695760, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 194, "file_size": 12802404, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12732404, "index_size": 40959, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27973, "raw_key_size": 296193, "raw_average_key_size": 26, "raw_value_size": 12539381, "raw_average_value_size": 1123, "num_data_blocks": 1551, "num_entries": 11164, "num_filter_entries": 11164, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759411436, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:23:56.696096) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 12802404 bytes
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:23:56.698757) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.7 rd, 129.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 10.8 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(15.5) write-amplify(7.7) OK, records in: 11689, records dropped: 525 output_compression: NoCompression
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:23:56.698779) EVENT_LOG_v1 {"time_micros": 1759411436698768, "job": 120, "event": "compaction_finished", "compaction_time_micros": 99020, "compaction_time_cpu_micros": 39061, "output_level": 6, "num_output_files": 1, "total_output_size": 12802404, "num_input_records": 11689, "num_output_records": 11164, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000193.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411436699305, "job": 120, "event": "table_file_deletion", "file_number": 193}
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411436702866, "job": 120, "event": "table_file_deletion", "file_number": 191}
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:23:56.596615) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:23:56.703022) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:23:56.703035) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:23:56.703039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:23:56.703042) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:23:56 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:23:56.703046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:23:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:56.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:23:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:56.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:23:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3789: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:57 compute-0 ceph-mon[73607]: pgmap v3789: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:58 compute-0 nova_compute[257802]: 2025-10-02 13:23:58.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1639181627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:23:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:58.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:23:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:23:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:58.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:23:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3790: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:59 compute-0 ceph-mon[73607]: pgmap v3790: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2579191126' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:24:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:24:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:00.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:24:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:24:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:00.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:24:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3791: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:01 compute-0 nova_compute[257802]: 2025-10-02 13:24:01.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:01 compute-0 nova_compute[257802]: 2025-10-02 13:24:01.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:24:01 compute-0 nova_compute[257802]: 2025-10-02 13:24:01.097 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:24:01 compute-0 nova_compute[257802]: 2025-10-02 13:24:01.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:24:01 compute-0 nova_compute[257802]: 2025-10-02 13:24:01.173 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:24:01 compute-0 ceph-mon[73607]: pgmap v3791: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:24:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:02.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:24:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:02.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3792: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:03 compute-0 nova_compute[257802]: 2025-10-02 13:24:03.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:03 compute-0 podman[428221]: 2025-10-02 13:24:03.926252856 +0000 UTC m=+0.057328630 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 13:24:03 compute-0 podman[428220]: 2025-10-02 13:24:03.92642716 +0000 UTC m=+0.060124697 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 13:24:03 compute-0 podman[428219]: 2025-10-02 13:24:03.93261283 +0000 UTC m=+0.062869694 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 02 13:24:04 compute-0 nova_compute[257802]: 2025-10-02 13:24:04.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:24:04 compute-0 ceph-mon[73607]: pgmap v3792: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:04 compute-0 nova_compute[257802]: 2025-10-02 13:24:04.280 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:24:04 compute-0 nova_compute[257802]: 2025-10-02 13:24:04.280 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:24:04 compute-0 nova_compute[257802]: 2025-10-02 13:24:04.280 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:24:04 compute-0 nova_compute[257802]: 2025-10-02 13:24:04.280 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:24:04 compute-0 nova_compute[257802]: 2025-10-02 13:24:04.281 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:24:04 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:24:04 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3756372274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:24:04 compute-0 nova_compute[257802]: 2025-10-02 13:24:04.734 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:24:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:04.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:04.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:04 compute-0 nova_compute[257802]: 2025-10-02 13:24:04.868 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:24:04 compute-0 nova_compute[257802]: 2025-10-02 13:24:04.869 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4163MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:24:04 compute-0 nova_compute[257802]: 2025-10-02 13:24:04.870 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:24:04 compute-0 nova_compute[257802]: 2025-10-02 13:24:04.870 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:24:05 compute-0 nova_compute[257802]: 2025-10-02 13:24:05.037 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:24:05 compute-0 nova_compute[257802]: 2025-10-02 13:24:05.037 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:24:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3793: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:05 compute-0 nova_compute[257802]: 2025-10-02 13:24:05.116 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:24:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3756372274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:24:05 compute-0 ceph-mon[73607]: pgmap v3793: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:24:05 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2816618298' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:24:05 compute-0 nova_compute[257802]: 2025-10-02 13:24:05.543 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:24:05 compute-0 nova_compute[257802]: 2025-10-02 13:24:05.549 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:24:05 compute-0 nova_compute[257802]: 2025-10-02 13:24:05.562 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:24:05 compute-0 nova_compute[257802]: 2025-10-02 13:24:05.564 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:24:05 compute-0 nova_compute[257802]: 2025-10-02 13:24:05.564 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:24:06 compute-0 nova_compute[257802]: 2025-10-02 13:24:06.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:06 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2816618298' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:24:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:06.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:06.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3794: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:07 compute-0 ceph-mon[73607]: pgmap v3794: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:08 compute-0 nova_compute[257802]: 2025-10-02 13:24:08.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:24:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:08.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:24:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:08.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3795: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:10 compute-0 ceph-mon[73607]: pgmap v3795: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:24:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:10.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:24:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:10.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3796: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:11 compute-0 nova_compute[257802]: 2025-10-02 13:24:11.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:11 compute-0 ceph-mon[73607]: pgmap v3796: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:24:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:24:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:24:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:24:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:24:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:24:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:24:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:12.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:24:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:24:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:12.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:24:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3797: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:13 compute-0 sudo[428326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:24:13 compute-0 sudo[428326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:13 compute-0 sudo[428326]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:13 compute-0 sudo[428351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:24:13 compute-0 sudo[428351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:13 compute-0 sudo[428351]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:13 compute-0 nova_compute[257802]: 2025-10-02 13:24:13.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:14 compute-0 ceph-mon[73607]: pgmap v3797: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:14.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:24:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:14.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:24:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3798: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:15 compute-0 ceph-mon[73607]: pgmap v3798: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:15 compute-0 podman[428377]: 2025-10-02 13:24:15.95162883 +0000 UTC m=+0.088280085 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 13:24:16 compute-0 nova_compute[257802]: 2025-10-02 13:24:16.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:16.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:24:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:16.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:24:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3799: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:17 compute-0 ceph-mon[73607]: pgmap v3799: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:17 compute-0 nova_compute[257802]: 2025-10-02 13:24:17.565 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:24:18 compute-0 nova_compute[257802]: 2025-10-02 13:24:18.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:18.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:18.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3800: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:20 compute-0 ceph-mon[73607]: pgmap v3800: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:24:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:20.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:24:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:20.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3801: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:21 compute-0 nova_compute[257802]: 2025-10-02 13:24:21.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:21 compute-0 ceph-mon[73607]: pgmap v3801: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:24:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:22.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:24:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:24:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:22.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:24:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3802: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:23 compute-0 nova_compute[257802]: 2025-10-02 13:24:23.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:24 compute-0 ceph-mon[73607]: pgmap v3802: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:24.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:24:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:24.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:24:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3803: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:25 compute-0 ceph-mon[73607]: pgmap v3803: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:26 compute-0 nova_compute[257802]: 2025-10-02 13:24:26.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:26.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:26.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:24:27.011 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:24:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:24:27.011 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:24:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:24:27.011 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:24:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3804: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:28 compute-0 ceph-mon[73607]: pgmap v3804: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:28 compute-0 nova_compute[257802]: 2025-10-02 13:24:28.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:28.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:28.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3805: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:29 compute-0 ceph-mon[73607]: pgmap v3805: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:30.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:30.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3806: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:31 compute-0 nova_compute[257802]: 2025-10-02 13:24:31.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:32 compute-0 ceph-mon[73607]: pgmap v3806: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:32.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:32.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3807: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:33 compute-0 ceph-mon[73607]: pgmap v3807: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:33 compute-0 sudo[428412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:24:33 compute-0 sudo[428412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:33 compute-0 nova_compute[257802]: 2025-10-02 13:24:33.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:33 compute-0 sudo[428412]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:33 compute-0 sudo[428437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:24:33 compute-0 sudo[428437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:33 compute-0 sudo[428437]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:24:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:34.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:24:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:34.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:34 compute-0 podman[428463]: 2025-10-02 13:24:34.904223593 +0000 UTC m=+0.046971192 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 13:24:34 compute-0 podman[428464]: 2025-10-02 13:24:34.916413566 +0000 UTC m=+0.055017975 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:24:34 compute-0 podman[428465]: 2025-10-02 13:24:34.942577055 +0000 UTC m=+0.078468939 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 13:24:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3808: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:36 compute-0 nova_compute[257802]: 2025-10-02 13:24:36.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:36 compute-0 ceph-mon[73607]: pgmap v3808: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:36.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:36.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3809: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:38 compute-0 ceph-mon[73607]: pgmap v3809: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:38 compute-0 nova_compute[257802]: 2025-10-02 13:24:38.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:24:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:38.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:24:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:38.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3810: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:39 compute-0 ceph-mon[73607]: pgmap v3810: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:40.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:40.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3811: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:41 compute-0 nova_compute[257802]: 2025-10-02 13:24:41.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:41 compute-0 nova_compute[257802]: 2025-10-02 13:24:41.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:24:41 compute-0 nova_compute[257802]: 2025-10-02 13:24:41.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:24:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:42 compute-0 nova_compute[257802]: 2025-10-02 13:24:42.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:24:42 compute-0 ceph-mon[73607]: pgmap v3811: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:24:42
Oct 02 13:24:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:24:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:24:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'volumes', '.rgw.root']
Oct 02 13:24:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:24:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:24:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:24:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:24:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:24:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:24:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:24:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:42.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:42.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3812: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:43 compute-0 nova_compute[257802]: 2025-10-02 13:24:43.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:24:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:24:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:24:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:24:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:24:44 compute-0 ceph-mon[73607]: pgmap v3812: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:24:44 compute-0 sudo[428525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:24:44 compute-0 sudo[428525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:44 compute-0 sudo[428525]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:44 compute-0 sudo[428550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:24:44 compute-0 sudo[428550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:44 compute-0 sudo[428550]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:44 compute-0 sudo[428575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:24:44 compute-0 sudo[428575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:44 compute-0 sudo[428575]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:24:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:24:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:24:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:24:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:24:44 compute-0 sudo[428600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:24:44 compute-0 sudo[428600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:44.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:44.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:44 compute-0 sudo[428600]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:24:44 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:24:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:24:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:24:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:24:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:24:44 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev c649a565-1372-4c2d-8498-0e5536bae897 does not exist
Oct 02 13:24:44 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev be67fb8e-87d2-45a1-b0a9-b506a2962c80 does not exist
Oct 02 13:24:44 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev d4be6045-8cb2-41e2-9922-169850255467 does not exist
Oct 02 13:24:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:24:44 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:24:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:24:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:24:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:24:44 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:24:45 compute-0 sudo[428658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:24:45 compute-0 sudo[428658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:45 compute-0 sudo[428658]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3813: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 56 KiB/s rd, 0 B/s wr, 93 op/s
Oct 02 13:24:45 compute-0 sudo[428683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:24:45 compute-0 sudo[428683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:45 compute-0 sudo[428683]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:45 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:24:45 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:24:45 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:24:45 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:24:45 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:24:45 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:24:45 compute-0 ceph-mon[73607]: pgmap v3813: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 56 KiB/s rd, 0 B/s wr, 93 op/s
Oct 02 13:24:45 compute-0 sudo[428708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:24:45 compute-0 sudo[428708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:45 compute-0 sudo[428708]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:45 compute-0 sudo[428733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:24:45 compute-0 sudo[428733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:45 compute-0 podman[428800]: 2025-10-02 13:24:45.714356458 +0000 UTC m=+0.072858544 container create aace936013e960341218d5787d0754eb9b978a01a1ec6e99c16cd19d7b4fab3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:24:45 compute-0 podman[428800]: 2025-10-02 13:24:45.663506254 +0000 UTC m=+0.022008360 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:24:45 compute-0 systemd[1]: Started libpod-conmon-aace936013e960341218d5787d0754eb9b978a01a1ec6e99c16cd19d7b4fab3d.scope.
Oct 02 13:24:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:24:45 compute-0 podman[428800]: 2025-10-02 13:24:45.892195666 +0000 UTC m=+0.250697772 container init aace936013e960341218d5787d0754eb9b978a01a1ec6e99c16cd19d7b4fab3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_moore, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:24:45 compute-0 podman[428800]: 2025-10-02 13:24:45.900372703 +0000 UTC m=+0.258874789 container start aace936013e960341218d5787d0754eb9b978a01a1ec6e99c16cd19d7b4fab3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_moore, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:24:45 compute-0 heuristic_moore[428816]: 167 167
Oct 02 13:24:45 compute-0 systemd[1]: libpod-aace936013e960341218d5787d0754eb9b978a01a1ec6e99c16cd19d7b4fab3d.scope: Deactivated successfully.
Oct 02 13:24:45 compute-0 podman[428800]: 2025-10-02 13:24:45.921918251 +0000 UTC m=+0.280420357 container attach aace936013e960341218d5787d0754eb9b978a01a1ec6e99c16cd19d7b4fab3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_moore, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 13:24:45 compute-0 podman[428800]: 2025-10-02 13:24:45.922460944 +0000 UTC m=+0.280963050 container died aace936013e960341218d5787d0754eb9b978a01a1ec6e99c16cd19d7b4fab3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 13:24:46 compute-0 nova_compute[257802]: 2025-10-02 13:24:46.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-246bcceae4ac3d52c073e98ebd0a732abf30b589aff6e95d7c151e85a1492a9c-merged.mount: Deactivated successfully.
Oct 02 13:24:46 compute-0 podman[428800]: 2025-10-02 13:24:46.218923087 +0000 UTC m=+0.577425173 container remove aace936013e960341218d5787d0754eb9b978a01a1ec6e99c16cd19d7b4fab3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_moore, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:24:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3288897796' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:24:46 compute-0 systemd[1]: libpod-conmon-aace936013e960341218d5787d0754eb9b978a01a1ec6e99c16cd19d7b4fab3d.scope: Deactivated successfully.
Oct 02 13:24:46 compute-0 podman[428833]: 2025-10-02 13:24:46.330092971 +0000 UTC m=+0.225021394 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 13:24:46 compute-0 podman[428866]: 2025-10-02 13:24:46.43480989 +0000 UTC m=+0.082652189 container create 99962638e9e2da496396294d6fc656d04707fd535f43e469ef699e1c4cbd5101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:24:46 compute-0 podman[428866]: 2025-10-02 13:24:46.377347838 +0000 UTC m=+0.025190167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:24:46 compute-0 systemd[1]: Started libpod-conmon-99962638e9e2da496396294d6fc656d04707fd535f43e469ef699e1c4cbd5101.scope.
Oct 02 13:24:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:24:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e25d70b108a824a52aff85a15da5cc5659be324eb6cbc2ca5c25718efbc08a78/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:24:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e25d70b108a824a52aff85a15da5cc5659be324eb6cbc2ca5c25718efbc08a78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:24:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e25d70b108a824a52aff85a15da5cc5659be324eb6cbc2ca5c25718efbc08a78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:24:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e25d70b108a824a52aff85a15da5cc5659be324eb6cbc2ca5c25718efbc08a78/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:24:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e25d70b108a824a52aff85a15da5cc5659be324eb6cbc2ca5c25718efbc08a78/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:24:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:46 compute-0 podman[428866]: 2025-10-02 13:24:46.59232047 +0000 UTC m=+0.240162859 container init 99962638e9e2da496396294d6fc656d04707fd535f43e469ef699e1c4cbd5101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shtern, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:24:46 compute-0 podman[428866]: 2025-10-02 13:24:46.601273755 +0000 UTC m=+0.249116044 container start 99962638e9e2da496396294d6fc656d04707fd535f43e469ef699e1c4cbd5101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:24:46 compute-0 podman[428866]: 2025-10-02 13:24:46.606256395 +0000 UTC m=+0.254098814 container attach 99962638e9e2da496396294d6fc656d04707fd535f43e469ef699e1c4cbd5101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 13:24:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:46.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:46.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3814: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 88 KiB/s rd, 0 B/s wr, 146 op/s
Oct 02 13:24:47 compute-0 nova_compute[257802]: 2025-10-02 13:24:47.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:24:47 compute-0 nova_compute[257802]: 2025-10-02 13:24:47.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:24:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/467109670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:24:47 compute-0 ceph-mon[73607]: pgmap v3814: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 88 KiB/s rd, 0 B/s wr, 146 op/s
Oct 02 13:24:47 compute-0 nostalgic_shtern[428883]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:24:47 compute-0 nostalgic_shtern[428883]: --> relative data size: 1.0
Oct 02 13:24:47 compute-0 nostalgic_shtern[428883]: --> All data devices are unavailable
Oct 02 13:24:47 compute-0 systemd[1]: libpod-99962638e9e2da496396294d6fc656d04707fd535f43e469ef699e1c4cbd5101.scope: Deactivated successfully.
Oct 02 13:24:47 compute-0 podman[428866]: 2025-10-02 13:24:47.408184048 +0000 UTC m=+1.056026347 container died 99962638e9e2da496396294d6fc656d04707fd535f43e469ef699e1c4cbd5101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shtern, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 13:24:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-e25d70b108a824a52aff85a15da5cc5659be324eb6cbc2ca5c25718efbc08a78-merged.mount: Deactivated successfully.
Oct 02 13:24:47 compute-0 podman[428866]: 2025-10-02 13:24:47.460788864 +0000 UTC m=+1.108631163 container remove 99962638e9e2da496396294d6fc656d04707fd535f43e469ef699e1c4cbd5101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shtern, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 13:24:47 compute-0 systemd[1]: libpod-conmon-99962638e9e2da496396294d6fc656d04707fd535f43e469ef699e1c4cbd5101.scope: Deactivated successfully.
Oct 02 13:24:47 compute-0 sudo[428733]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:47 compute-0 sudo[428913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:24:47 compute-0 sudo[428913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:47 compute-0 sudo[428913]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:47 compute-0 sudo[428938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:24:47 compute-0 sudo[428938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:47 compute-0 sudo[428938]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:47 compute-0 sudo[428963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:24:47 compute-0 sudo[428963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:47 compute-0 sudo[428963]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:47 compute-0 sudo[428988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 13:24:47 compute-0 sudo[428988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:48 compute-0 podman[429054]: 2025-10-02 13:24:48.07580422 +0000 UTC m=+0.040450084 container create 2616a6bd8e82707cb61f5e43474f754c1aec73d121ed2a1c9183f60ad2467fd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lewin, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:24:48 compute-0 nova_compute[257802]: 2025-10-02 13:24:48.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:24:48 compute-0 systemd[1]: Started libpod-conmon-2616a6bd8e82707cb61f5e43474f754c1aec73d121ed2a1c9183f60ad2467fd4.scope.
Oct 02 13:24:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:24:48 compute-0 podman[429054]: 2025-10-02 13:24:48.153151852 +0000 UTC m=+0.117797746 container init 2616a6bd8e82707cb61f5e43474f754c1aec73d121ed2a1c9183f60ad2467fd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lewin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 13:24:48 compute-0 podman[429054]: 2025-10-02 13:24:48.058115995 +0000 UTC m=+0.022761879 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:24:48 compute-0 podman[429054]: 2025-10-02 13:24:48.161406249 +0000 UTC m=+0.126052123 container start 2616a6bd8e82707cb61f5e43474f754c1aec73d121ed2a1c9183f60ad2467fd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lewin, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:24:48 compute-0 happy_lewin[429071]: 167 167
Oct 02 13:24:48 compute-0 podman[429054]: 2025-10-02 13:24:48.165345724 +0000 UTC m=+0.129991638 container attach 2616a6bd8e82707cb61f5e43474f754c1aec73d121ed2a1c9183f60ad2467fd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 13:24:48 compute-0 podman[429054]: 2025-10-02 13:24:48.166587944 +0000 UTC m=+0.131233808 container died 2616a6bd8e82707cb61f5e43474f754c1aec73d121ed2a1c9183f60ad2467fd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 13:24:48 compute-0 systemd[1]: libpod-2616a6bd8e82707cb61f5e43474f754c1aec73d121ed2a1c9183f60ad2467fd4.scope: Deactivated successfully.
Oct 02 13:24:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd3533e6be5ee0a19760bf0a59ce7166d2ffda1ee01e26252ab95dee0834ee0e-merged.mount: Deactivated successfully.
Oct 02 13:24:48 compute-0 podman[429054]: 2025-10-02 13:24:48.202804826 +0000 UTC m=+0.167450690 container remove 2616a6bd8e82707cb61f5e43474f754c1aec73d121ed2a1c9183f60ad2467fd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:24:48 compute-0 systemd[1]: libpod-conmon-2616a6bd8e82707cb61f5e43474f754c1aec73d121ed2a1c9183f60ad2467fd4.scope: Deactivated successfully.
Oct 02 13:24:48 compute-0 podman[429095]: 2025-10-02 13:24:48.366522665 +0000 UTC m=+0.039457661 container create 7b653fc5c1f1825d404697d804198a5a7e79bff9478b5c4afb3760a61e205d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_austin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:24:48 compute-0 systemd[1]: Started libpod-conmon-7b653fc5c1f1825d404697d804198a5a7e79bff9478b5c4afb3760a61e205d28.scope.
Oct 02 13:24:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:24:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf6b636f8ce3a2e57523aed1b188ae5bddad2b061ba96be0e45f2e3fbc5d557/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:24:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf6b636f8ce3a2e57523aed1b188ae5bddad2b061ba96be0e45f2e3fbc5d557/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:24:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf6b636f8ce3a2e57523aed1b188ae5bddad2b061ba96be0e45f2e3fbc5d557/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:24:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf6b636f8ce3a2e57523aed1b188ae5bddad2b061ba96be0e45f2e3fbc5d557/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:24:48 compute-0 podman[429095]: 2025-10-02 13:24:48.349025353 +0000 UTC m=+0.021960359 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:24:48 compute-0 podman[429095]: 2025-10-02 13:24:48.464731977 +0000 UTC m=+0.137666963 container init 7b653fc5c1f1825d404697d804198a5a7e79bff9478b5c4afb3760a61e205d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 13:24:48 compute-0 podman[429095]: 2025-10-02 13:24:48.472229777 +0000 UTC m=+0.145164763 container start 7b653fc5c1f1825d404697d804198a5a7e79bff9478b5c4afb3760a61e205d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_austin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 13:24:48 compute-0 podman[429095]: 2025-10-02 13:24:48.475502777 +0000 UTC m=+0.148437763 container attach 7b653fc5c1f1825d404697d804198a5a7e79bff9478b5c4afb3760a61e205d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_austin, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:24:48 compute-0 nova_compute[257802]: 2025-10-02 13:24:48.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:24:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:48.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:24:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:48.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3815: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Oct 02 13:24:49 compute-0 hardcore_austin[429111]: {
Oct 02 13:24:49 compute-0 hardcore_austin[429111]:     "1": [
Oct 02 13:24:49 compute-0 hardcore_austin[429111]:         {
Oct 02 13:24:49 compute-0 hardcore_austin[429111]:             "devices": [
Oct 02 13:24:49 compute-0 hardcore_austin[429111]:                 "/dev/loop3"
Oct 02 13:24:49 compute-0 hardcore_austin[429111]:             ],
Oct 02 13:24:49 compute-0 hardcore_austin[429111]:             "lv_name": "ceph_lv0",
Oct 02 13:24:49 compute-0 hardcore_austin[429111]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:24:49 compute-0 hardcore_austin[429111]:             "lv_size": "7511998464",
Oct 02 13:24:49 compute-0 hardcore_austin[429111]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:24:49 compute-0 hardcore_austin[429111]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:24:49 compute-0 hardcore_austin[429111]:             "name": "ceph_lv0",
Oct 02 13:24:49 compute-0 hardcore_austin[429111]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:24:49 compute-0 hardcore_austin[429111]:             "tags": {
Oct 02 13:24:49 compute-0 hardcore_austin[429111]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:24:49 compute-0 hardcore_austin[429111]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:24:49 compute-0 hardcore_austin[429111]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:24:49 compute-0 hardcore_austin[429111]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:24:49 compute-0 hardcore_austin[429111]:                 "ceph.cluster_name": "ceph",
Oct 02 13:24:49 compute-0 hardcore_austin[429111]:                 "ceph.crush_device_class": "",
Oct 02 13:24:49 compute-0 hardcore_austin[429111]:                 "ceph.encrypted": "0",
Oct 02 13:24:49 compute-0 hardcore_austin[429111]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:24:49 compute-0 hardcore_austin[429111]:                 "ceph.osd_id": "1",
Oct 02 13:24:49 compute-0 hardcore_austin[429111]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:24:49 compute-0 hardcore_austin[429111]:                 "ceph.type": "block",
Oct 02 13:24:49 compute-0 hardcore_austin[429111]:                 "ceph.vdo": "0"
Oct 02 13:24:49 compute-0 hardcore_austin[429111]:             },
Oct 02 13:24:49 compute-0 hardcore_austin[429111]:             "type": "block",
Oct 02 13:24:49 compute-0 hardcore_austin[429111]:             "vg_name": "ceph_vg0"
Oct 02 13:24:49 compute-0 hardcore_austin[429111]:         }
Oct 02 13:24:49 compute-0 hardcore_austin[429111]:     ]
Oct 02 13:24:49 compute-0 hardcore_austin[429111]: }
Oct 02 13:24:49 compute-0 systemd[1]: libpod-7b653fc5c1f1825d404697d804198a5a7e79bff9478b5c4afb3760a61e205d28.scope: Deactivated successfully.
Oct 02 13:24:49 compute-0 podman[429095]: 2025-10-02 13:24:49.21510892 +0000 UTC m=+0.888043966 container died 7b653fc5c1f1825d404697d804198a5a7e79bff9478b5c4afb3760a61e205d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:24:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bf6b636f8ce3a2e57523aed1b188ae5bddad2b061ba96be0e45f2e3fbc5d557-merged.mount: Deactivated successfully.
Oct 02 13:24:49 compute-0 podman[429095]: 2025-10-02 13:24:49.340401494 +0000 UTC m=+1.013336480 container remove 7b653fc5c1f1825d404697d804198a5a7e79bff9478b5c4afb3760a61e205d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_austin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 13:24:49 compute-0 systemd[1]: libpod-conmon-7b653fc5c1f1825d404697d804198a5a7e79bff9478b5c4afb3760a61e205d28.scope: Deactivated successfully.
Oct 02 13:24:49 compute-0 sudo[428988]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:49 compute-0 sudo[429134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:24:49 compute-0 sudo[429134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:49 compute-0 sudo[429134]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:49 compute-0 sudo[429159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:24:49 compute-0 sudo[429159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:49 compute-0 sudo[429159]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:49 compute-0 sudo[429184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:24:49 compute-0 sudo[429184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:49 compute-0 sudo[429184]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:49 compute-0 sudo[429209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 13:24:49 compute-0 sudo[429209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:49 compute-0 podman[429274]: 2025-10-02 13:24:49.921776792 +0000 UTC m=+0.035745661 container create b133d414fba62840a4fbbf81a4ad84ab4e0aec163e143faa986a8b08058cbde2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 13:24:49 compute-0 systemd[1]: Started libpod-conmon-b133d414fba62840a4fbbf81a4ad84ab4e0aec163e143faa986a8b08058cbde2.scope.
Oct 02 13:24:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:24:49 compute-0 podman[429274]: 2025-10-02 13:24:49.997138745 +0000 UTC m=+0.111107634 container init b133d414fba62840a4fbbf81a4ad84ab4e0aec163e143faa986a8b08058cbde2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:24:50 compute-0 podman[429274]: 2025-10-02 13:24:50.003329254 +0000 UTC m=+0.117298123 container start b133d414fba62840a4fbbf81a4ad84ab4e0aec163e143faa986a8b08058cbde2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 13:24:50 compute-0 podman[429274]: 2025-10-02 13:24:49.908203205 +0000 UTC m=+0.022172104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:24:50 compute-0 thirsty_shannon[429290]: 167 167
Oct 02 13:24:50 compute-0 systemd[1]: libpod-b133d414fba62840a4fbbf81a4ad84ab4e0aec163e143faa986a8b08058cbde2.scope: Deactivated successfully.
Oct 02 13:24:50 compute-0 podman[429274]: 2025-10-02 13:24:50.010198839 +0000 UTC m=+0.124167708 container attach b133d414fba62840a4fbbf81a4ad84ab4e0aec163e143faa986a8b08058cbde2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:24:50 compute-0 podman[429274]: 2025-10-02 13:24:50.010451915 +0000 UTC m=+0.124420784 container died b133d414fba62840a4fbbf81a4ad84ab4e0aec163e143faa986a8b08058cbde2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 13:24:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-53d11211cde692fbb8ef252ca988fe5bf0382638f79a95d9fe402b5f3dac5415-merged.mount: Deactivated successfully.
Oct 02 13:24:50 compute-0 podman[429274]: 2025-10-02 13:24:50.048723855 +0000 UTC m=+0.162692714 container remove b133d414fba62840a4fbbf81a4ad84ab4e0aec163e143faa986a8b08058cbde2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shannon, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 13:24:50 compute-0 systemd[1]: libpod-conmon-b133d414fba62840a4fbbf81a4ad84ab4e0aec163e143faa986a8b08058cbde2.scope: Deactivated successfully.
Oct 02 13:24:50 compute-0 ceph-mon[73607]: pgmap v3815: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Oct 02 13:24:50 compute-0 podman[429315]: 2025-10-02 13:24:50.211177044 +0000 UTC m=+0.035491995 container create db967c1bacc9ce92606d966632d5b46d7fb0f8466ab50d918ae083e3aad9e235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_wiles, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:24:50 compute-0 systemd[1]: Started libpod-conmon-db967c1bacc9ce92606d966632d5b46d7fb0f8466ab50d918ae083e3aad9e235.scope.
Oct 02 13:24:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:24:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f9aebdc13d0c1f6d8a77f97b8c4187b1623d8d887dbf5177c65e98ece0ed40a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:24:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f9aebdc13d0c1f6d8a77f97b8c4187b1623d8d887dbf5177c65e98ece0ed40a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:24:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f9aebdc13d0c1f6d8a77f97b8c4187b1623d8d887dbf5177c65e98ece0ed40a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:24:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f9aebdc13d0c1f6d8a77f97b8c4187b1623d8d887dbf5177c65e98ece0ed40a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:24:50 compute-0 podman[429315]: 2025-10-02 13:24:50.195783453 +0000 UTC m=+0.020098444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:24:50 compute-0 podman[429315]: 2025-10-02 13:24:50.297195154 +0000 UTC m=+0.121510135 container init db967c1bacc9ce92606d966632d5b46d7fb0f8466ab50d918ae083e3aad9e235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_wiles, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 13:24:50 compute-0 podman[429315]: 2025-10-02 13:24:50.303860624 +0000 UTC m=+0.128175585 container start db967c1bacc9ce92606d966632d5b46d7fb0f8466ab50d918ae083e3aad9e235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_wiles, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 13:24:50 compute-0 podman[429315]: 2025-10-02 13:24:50.306886597 +0000 UTC m=+0.131201578 container attach db967c1bacc9ce92606d966632d5b46d7fb0f8466ab50d918ae083e3aad9e235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 13:24:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:24:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:50.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:24:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:50.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3816: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 106 KiB/s rd, 0 B/s wr, 177 op/s
Oct 02 13:24:51 compute-0 nova_compute[257802]: 2025-10-02 13:24:51.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:51 compute-0 zealous_wiles[429331]: {
Oct 02 13:24:51 compute-0 zealous_wiles[429331]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 13:24:51 compute-0 zealous_wiles[429331]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:24:51 compute-0 zealous_wiles[429331]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:24:51 compute-0 zealous_wiles[429331]:         "osd_id": 1,
Oct 02 13:24:51 compute-0 zealous_wiles[429331]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:24:51 compute-0 zealous_wiles[429331]:         "type": "bluestore"
Oct 02 13:24:51 compute-0 zealous_wiles[429331]:     }
Oct 02 13:24:51 compute-0 zealous_wiles[429331]: }
Oct 02 13:24:51 compute-0 systemd[1]: libpod-db967c1bacc9ce92606d966632d5b46d7fb0f8466ab50d918ae083e3aad9e235.scope: Deactivated successfully.
Oct 02 13:24:51 compute-0 podman[429353]: 2025-10-02 13:24:51.160671048 +0000 UTC m=+0.025662479 container died db967c1bacc9ce92606d966632d5b46d7fb0f8466ab50d918ae083e3aad9e235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_wiles, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:24:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f9aebdc13d0c1f6d8a77f97b8c4187b1623d8d887dbf5177c65e98ece0ed40a-merged.mount: Deactivated successfully.
Oct 02 13:24:51 compute-0 podman[429353]: 2025-10-02 13:24:51.204374689 +0000 UTC m=+0.069366090 container remove db967c1bacc9ce92606d966632d5b46d7fb0f8466ab50d918ae083e3aad9e235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 13:24:51 compute-0 systemd[1]: libpod-conmon-db967c1bacc9ce92606d966632d5b46d7fb0f8466ab50d918ae083e3aad9e235.scope: Deactivated successfully.
Oct 02 13:24:51 compute-0 sudo[429209]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:24:51 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:24:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:24:51 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:24:51 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 79cac1dc-c3cf-4481-9eef-bdc28a35678d does not exist
Oct 02 13:24:51 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 61f23c5a-d5a7-4bdc-8d6e-f8504b8b6a0c does not exist
Oct 02 13:24:51 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 36600fd5-ddb6-4d79-8729-67dd90582af3 does not exist
Oct 02 13:24:51 compute-0 sudo[429366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:24:51 compute-0 sudo[429366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:51 compute-0 sudo[429366]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:51 compute-0 sudo[429391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:24:51 compute-0 sudo[429391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:51 compute-0 sudo[429391]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:52 compute-0 nova_compute[257802]: 2025-10-02 13:24:52.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:24:52 compute-0 ceph-mon[73607]: pgmap v3816: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 106 KiB/s rd, 0 B/s wr, 177 op/s
Oct 02 13:24:52 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:24:52 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:24:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:52.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:52.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3817: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 106 KiB/s rd, 0 B/s wr, 177 op/s
Oct 02 13:24:53 compute-0 ceph-mon[73607]: pgmap v3817: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 106 KiB/s rd, 0 B/s wr, 177 op/s
Oct 02 13:24:53 compute-0 nova_compute[257802]: 2025-10-02 13:24:53.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:53 compute-0 sudo[429417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:24:53 compute-0 sudo[429417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:53 compute-0 sudo[429417]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:53 compute-0 sudo[429442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:24:53 compute-0 sudo[429442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:53 compute-0 sudo[429442]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:54 compute-0 nova_compute[257802]: 2025-10-02 13:24:54.146 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:24:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:54.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:54.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3818: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 106 KiB/s rd, 0 B/s wr, 177 op/s
Oct 02 13:24:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:24:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:24:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:24:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:24:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:24:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:24:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:24:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:24:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:24:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:24:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:24:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:24:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:24:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:24:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:24:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:24:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:24:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:24:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:24:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:24:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:24:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:24:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:24:56 compute-0 nova_compute[257802]: 2025-10-02 13:24:56.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:56 compute-0 nova_compute[257802]: 2025-10-02 13:24:56.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:24:56 compute-0 nova_compute[257802]: 2025-10-02 13:24:56.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 13:24:56 compute-0 ceph-mon[73607]: pgmap v3818: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 106 KiB/s rd, 0 B/s wr, 177 op/s
Oct 02 13:24:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1555327155' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:24:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1555327155' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:24:56 compute-0 nova_compute[257802]: 2025-10-02 13:24:56.363 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 13:24:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:24:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:56.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:24:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:24:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:56.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:24:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3819: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 50 KiB/s rd, 0 B/s wr, 83 op/s
Oct 02 13:24:57 compute-0 ceph-mon[73607]: pgmap v3819: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 50 KiB/s rd, 0 B/s wr, 83 op/s
Oct 02 13:24:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1705947348' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:24:58 compute-0 nova_compute[257802]: 2025-10-02 13:24:58.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:24:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:58.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:24:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:24:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:24:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:58.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:24:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3820: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Oct 02 13:24:59 compute-0 ceph-mon[73607]: pgmap v3820: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Oct 02 13:24:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2004890090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:25:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:25:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:00.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:25:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:00.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3821: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Oct 02 13:25:01 compute-0 nova_compute[257802]: 2025-10-02 13:25:01.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:02 compute-0 ceph-mon[73607]: pgmap v3821: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Oct 02 13:25:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:02.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:02.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3822: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Oct 02 13:25:03 compute-0 nova_compute[257802]: 2025-10-02 13:25:03.364 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:03 compute-0 nova_compute[257802]: 2025-10-02 13:25:03.364 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:25:03 compute-0 nova_compute[257802]: 2025-10-02 13:25:03.364 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:25:03 compute-0 nova_compute[257802]: 2025-10-02 13:25:03.387 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:25:03 compute-0 nova_compute[257802]: 2025-10-02 13:25:03.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:04 compute-0 ceph-mon[73607]: pgmap v3822: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Oct 02 13:25:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:04.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:04.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3823: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Oct 02 13:25:05 compute-0 ceph-mon[73607]: pgmap v3823: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Oct 02 13:25:05 compute-0 podman[429475]: 2025-10-02 13:25:05.918756948 +0000 UTC m=+0.055309642 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 13:25:05 compute-0 podman[429474]: 2025-10-02 13:25:05.923591174 +0000 UTC m=+0.063140010 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:25:05 compute-0 podman[429473]: 2025-10-02 13:25:05.942558051 +0000 UTC m=+0.082089786 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 13:25:06 compute-0 nova_compute[257802]: 2025-10-02 13:25:06.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:06 compute-0 nova_compute[257802]: 2025-10-02 13:25:06.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:06 compute-0 nova_compute[257802]: 2025-10-02 13:25:06.138 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:25:06 compute-0 nova_compute[257802]: 2025-10-02 13:25:06.139 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:25:06 compute-0 nova_compute[257802]: 2025-10-02 13:25:06.139 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:25:06 compute-0 nova_compute[257802]: 2025-10-02 13:25:06.139 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:25:06 compute-0 nova_compute[257802]: 2025-10-02 13:25:06.139 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:25:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:25:06 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/884529319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:25:06 compute-0 nova_compute[257802]: 2025-10-02 13:25:06.587 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:25:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:06 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/884529319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:25:06 compute-0 nova_compute[257802]: 2025-10-02 13:25:06.746 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:25:06 compute-0 nova_compute[257802]: 2025-10-02 13:25:06.747 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4169MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:25:06 compute-0 nova_compute[257802]: 2025-10-02 13:25:06.747 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:25:06 compute-0 nova_compute[257802]: 2025-10-02 13:25:06.747 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:25:06 compute-0 nova_compute[257802]: 2025-10-02 13:25:06.843 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:25:06 compute-0 nova_compute[257802]: 2025-10-02 13:25:06.843 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:25:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:06.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:06 compute-0 nova_compute[257802]: 2025-10-02 13:25:06.859 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:25:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:06.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3824: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Oct 02 13:25:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:25:07 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3969933811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:25:07 compute-0 nova_compute[257802]: 2025-10-02 13:25:07.320 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:25:07 compute-0 nova_compute[257802]: 2025-10-02 13:25:07.327 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:25:07 compute-0 nova_compute[257802]: 2025-10-02 13:25:07.371 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:25:07 compute-0 nova_compute[257802]: 2025-10-02 13:25:07.372 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:25:07 compute-0 nova_compute[257802]: 2025-10-02 13:25:07.373 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:25:07 compute-0 nova_compute[257802]: 2025-10-02 13:25:07.373 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:07 compute-0 ceph-mon[73607]: pgmap v3824: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Oct 02 13:25:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3969933811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:25:08 compute-0 nova_compute[257802]: 2025-10-02 13:25:08.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:08.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:08.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3825: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Oct 02 13:25:10 compute-0 ceph-mon[73607]: pgmap v3825: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Oct 02 13:25:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:10.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:10.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3826: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Oct 02 13:25:11 compute-0 nova_compute[257802]: 2025-10-02 13:25:11.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:11 compute-0 ceph-mon[73607]: pgmap v3826: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Oct 02 13:25:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:25:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:25:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:25:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:25:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:25:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:25:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:25:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:12.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:25:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:12.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3827: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:13 compute-0 nova_compute[257802]: 2025-10-02 13:25:13.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:13 compute-0 sudo[429574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:25:13 compute-0 sudo[429574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:13 compute-0 sudo[429574]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:13 compute-0 sudo[429599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:25:13 compute-0 sudo[429599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:13 compute-0 sudo[429599]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:14 compute-0 ceph-mon[73607]: pgmap v3827: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:25:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:14.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:25:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:14.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3828: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:15 compute-0 ceph-mon[73607]: pgmap v3828: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:16 compute-0 nova_compute[257802]: 2025-10-02 13:25:16.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:16.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:16.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:16 compute-0 podman[429626]: 2025-10-02 13:25:16.949596453 +0000 UTC m=+0.086015140 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 13:25:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3829: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:18 compute-0 ceph-mon[73607]: pgmap v3829: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:18 compute-0 nova_compute[257802]: 2025-10-02 13:25:18.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:25:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:18.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:25:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:18.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3830: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:19 compute-0 nova_compute[257802]: 2025-10-02 13:25:19.399 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:19 compute-0 nova_compute[257802]: 2025-10-02 13:25:19.954 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:20 compute-0 ceph-mon[73607]: pgmap v3830: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:25:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:20.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:25:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:20.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3831: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:21 compute-0 nova_compute[257802]: 2025-10-02 13:25:21.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:21 compute-0 nova_compute[257802]: 2025-10-02 13:25:21.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:21 compute-0 nova_compute[257802]: 2025-10-02 13:25:21.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 13:25:21 compute-0 ceph-mon[73607]: pgmap v3831: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:22.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:22.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3832: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:23 compute-0 ceph-mon[73607]: pgmap v3832: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:23 compute-0 nova_compute[257802]: 2025-10-02 13:25:23.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:25:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:24.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:25:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:24.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3833: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:26 compute-0 nova_compute[257802]: 2025-10-02 13:25:26.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:26 compute-0 ceph-mon[73607]: pgmap v3833: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:25:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:26.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:25:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:26.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:25:27.012 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:25:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:25:27.012 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:25:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:25:27.012 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:25:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3834: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:27 compute-0 ceph-mon[73607]: pgmap v3834: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:28 compute-0 nova_compute[257802]: 2025-10-02 13:25:28.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:28.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:28.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3835: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:29 compute-0 ceph-mon[73607]: pgmap v3835: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:30.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:30.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3836: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:31 compute-0 nova_compute[257802]: 2025-10-02 13:25:31.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:32 compute-0 ceph-mon[73607]: pgmap v3836: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:25:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:32.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:25:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:32.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3837: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:33 compute-0 nova_compute[257802]: 2025-10-02 13:25:33.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:33 compute-0 sudo[429661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:25:33 compute-0 sudo[429661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:33 compute-0 sudo[429661]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:33 compute-0 sudo[429686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:25:33 compute-0 sudo[429686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:33 compute-0 sudo[429686]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:34 compute-0 ceph-mon[73607]: pgmap v3837: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:25:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:34.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:25:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:34.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3838: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:35 compute-0 ceph-mon[73607]: pgmap v3838: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:35 compute-0 nova_compute[257802]: 2025-10-02 13:25:35.845 2 DEBUG oslo_concurrency.processutils [None req-5849a452-6b6a-4a68-83cf-321285824690 57823c2cf8a04b2abc574ed057efc3db 1533ac528d35434c826050eed402afba - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:25:35 compute-0 nova_compute[257802]: 2025-10-02 13:25:35.880 2 DEBUG oslo_concurrency.processutils [None req-5849a452-6b6a-4a68-83cf-321285824690 57823c2cf8a04b2abc574ed057efc3db 1533ac528d35434c826050eed402afba - - default default] CMD "env LANG=C uptime" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:25:36 compute-0 nova_compute[257802]: 2025-10-02 13:25:36.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:36.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:25:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:36.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:25:36 compute-0 podman[429714]: 2025-10-02 13:25:36.913929382 +0000 UTC m=+0.056049209 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 02 13:25:36 compute-0 podman[429716]: 2025-10-02 13:25:36.92506158 +0000 UTC m=+0.060529777 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:25:36 compute-0 podman[429715]: 2025-10-02 13:25:36.948768701 +0000 UTC m=+0.087913737 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3)
Oct 02 13:25:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3839: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:38 compute-0 ceph-mon[73607]: pgmap v3839: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:38 compute-0 nova_compute[257802]: 2025-10-02 13:25:38.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:25:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:38.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:25:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:38.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3840: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:39 compute-0 ceph-mon[73607]: pgmap v3840: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:25:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:40.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:25:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:25:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:40.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:25:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3841: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:41 compute-0 nova_compute[257802]: 2025-10-02 13:25:41.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:41 compute-0 nova_compute[257802]: 2025-10-02 13:25:41.167 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:41 compute-0 nova_compute[257802]: 2025-10-02 13:25:41.167 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:25:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:42 compute-0 nova_compute[257802]: 2025-10-02 13:25:42.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:42 compute-0 ceph-mon[73607]: pgmap v3841: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:25:42
Oct 02 13:25:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:25:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:25:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', '.mgr', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'default.rgw.log', 'images']
Oct 02 13:25:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:25:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:25:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:25:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:25:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:25:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:25:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:25:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:25:42.775 158261 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=91, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=90) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:25:42 compute-0 nova_compute[257802]: 2025-10-02 13:25:42.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:42 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:25:42.776 158261 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:25:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:25:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:42.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:25:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:25:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:42.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:25:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3842: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:43 compute-0 nova_compute[257802]: 2025-10-02 13:25:43.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:25:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:25:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:25:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:25:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:25:44 compute-0 ceph-mon[73607]: pgmap v3842: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:25:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:25:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:25:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:25:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:25:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:25:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:44.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:25:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:44.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3843: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:45 compute-0 ceph-mon[73607]: pgmap v3843: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:46 compute-0 nova_compute[257802]: 2025-10-02 13:25:46.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:46 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/797989071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:25:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:25:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:46.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:25:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:46.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3844: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:47 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3308627465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:25:47 compute-0 ceph-mon[73607]: pgmap v3844: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:47 compute-0 podman[429777]: 2025-10-02 13:25:47.934526863 +0000 UTC m=+0.078984811 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:25:48 compute-0 nova_compute[257802]: 2025-10-02 13:25:48.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:48 compute-0 nova_compute[257802]: 2025-10-02 13:25:48.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:25:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:48.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:25:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:48.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3845: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:49 compute-0 nova_compute[257802]: 2025-10-02 13:25:49.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:49 compute-0 nova_compute[257802]: 2025-10-02 13:25:49.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:49 compute-0 ceph-mon[73607]: pgmap v3845: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:50 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:25:50.777 158261 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=6718a9ec-e13c-42f0-978a-6c44c48d0d54, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '91'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:25:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:50.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:50.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3846: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:51 compute-0 nova_compute[257802]: 2025-10-02 13:25:51.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:51 compute-0 sudo[429807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:25:51 compute-0 sudo[429807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:51 compute-0 sudo[429807]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:51 compute-0 sudo[429832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:25:51 compute-0 sudo[429832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:51 compute-0 sudo[429832]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:51 compute-0 sudo[429857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:25:51 compute-0 sudo[429857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:51 compute-0 sudo[429857]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:51 compute-0 sudo[429882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:25:51 compute-0 sudo[429882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:52 compute-0 ceph-mon[73607]: pgmap v3846: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:52 compute-0 sudo[429882]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:52.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:52.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3847: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:53 compute-0 ceph-mon[73607]: pgmap v3847: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:53 compute-0 nova_compute[257802]: 2025-10-02 13:25:53.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:54 compute-0 sudo[429938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:25:54 compute-0 sudo[429938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:54 compute-0 sudo[429938]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:54 compute-0 nova_compute[257802]: 2025-10-02 13:25:54.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:54 compute-0 sudo[429963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:25:54 compute-0 sudo[429963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:54 compute-0 sudo[429963]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 13:25:54 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:25:54 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 13:25:54 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:25:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:54.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:25:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:54.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:25:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3848: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:25:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:25:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:25:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:25:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:25:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:25:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:25:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:25:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:25:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:25:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:25:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:25:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:25:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:25:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:25:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:25:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:25:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:25:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:25:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:25:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:25:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:25:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:25:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:25:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:25:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:25:55 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:25:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:25:55 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:25:55 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev d023576e-217c-45bf-8ac9-0938d72bbfeb does not exist
Oct 02 13:25:55 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev cef869d6-3cd6-497d-ba07-1b8d99788532 does not exist
Oct 02 13:25:55 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev a5b2bf41-8b90-4f08-8297-84964b940a95 does not exist
Oct 02 13:25:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:25:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:25:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:25:55 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:25:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:25:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:25:55 compute-0 sudo[429989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:25:55 compute-0 sudo[429989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:55 compute-0 sudo[429989]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:55 compute-0 sudo[430014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:25:55 compute-0 sudo[430014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:55 compute-0 sudo[430014]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:55 compute-0 sudo[430039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:25:55 compute-0 sudo[430039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:55 compute-0 sudo[430039]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:25:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:25:55 compute-0 ceph-mon[73607]: pgmap v3848: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1754120143' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:25:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1754120143' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:25:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:25:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:25:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:25:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:25:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:25:55 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:25:55 compute-0 sudo[430064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:25:55 compute-0 sudo[430064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:56 compute-0 podman[430130]: 2025-10-02 13:25:56.079582112 +0000 UTC m=+0.040237349 container create e5cd59e2cc58296d40727affc416bdaecccd1b405405c212752091c84736d1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kalam, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:25:56 compute-0 systemd[1]: Started libpod-conmon-e5cd59e2cc58296d40727affc416bdaecccd1b405405c212752091c84736d1a9.scope.
Oct 02 13:25:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:25:56 compute-0 podman[430130]: 2025-10-02 13:25:56.061348133 +0000 UTC m=+0.022003400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:25:56 compute-0 nova_compute[257802]: 2025-10-02 13:25:56.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:56 compute-0 podman[430130]: 2025-10-02 13:25:56.174917506 +0000 UTC m=+0.135572743 container init e5cd59e2cc58296d40727affc416bdaecccd1b405405c212752091c84736d1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kalam, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct 02 13:25:56 compute-0 podman[430130]: 2025-10-02 13:25:56.18509001 +0000 UTC m=+0.145745237 container start e5cd59e2cc58296d40727affc416bdaecccd1b405405c212752091c84736d1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kalam, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:25:56 compute-0 podman[430130]: 2025-10-02 13:25:56.189313972 +0000 UTC m=+0.149969219 container attach e5cd59e2cc58296d40727affc416bdaecccd1b405405c212752091c84736d1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kalam, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:25:56 compute-0 exciting_kalam[430146]: 167 167
Oct 02 13:25:56 compute-0 systemd[1]: libpod-e5cd59e2cc58296d40727affc416bdaecccd1b405405c212752091c84736d1a9.scope: Deactivated successfully.
Oct 02 13:25:56 compute-0 conmon[430146]: conmon e5cd59e2cc58296d4072 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e5cd59e2cc58296d40727affc416bdaecccd1b405405c212752091c84736d1a9.scope/container/memory.events
Oct 02 13:25:56 compute-0 podman[430130]: 2025-10-02 13:25:56.196262429 +0000 UTC m=+0.156917676 container died e5cd59e2cc58296d40727affc416bdaecccd1b405405c212752091c84736d1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kalam, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:25:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cf458a05fc487a435a3c3cc8a2c145e85ca76b03e07e9196fa63470223d79ab-merged.mount: Deactivated successfully.
Oct 02 13:25:56 compute-0 podman[430130]: 2025-10-02 13:25:56.241961859 +0000 UTC m=+0.202617086 container remove e5cd59e2cc58296d40727affc416bdaecccd1b405405c212752091c84736d1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 13:25:56 compute-0 systemd[1]: libpod-conmon-e5cd59e2cc58296d40727affc416bdaecccd1b405405c212752091c84736d1a9.scope: Deactivated successfully.
Oct 02 13:25:56 compute-0 podman[430169]: 2025-10-02 13:25:56.415034652 +0000 UTC m=+0.050361483 container create 2f07015599cee7f2329863636067a05f83bcf6a746342036147ec827dd5da3b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_yonath, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 13:25:56 compute-0 systemd[1]: Started libpod-conmon-2f07015599cee7f2329863636067a05f83bcf6a746342036147ec827dd5da3b4.scope.
Oct 02 13:25:56 compute-0 podman[430169]: 2025-10-02 13:25:56.394037947 +0000 UTC m=+0.029364768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:25:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:25:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc82eb690fc6af99a9a857fee24999261528a12d6d7935778a83c8e9a2f0d87a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:25:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc82eb690fc6af99a9a857fee24999261528a12d6d7935778a83c8e9a2f0d87a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:25:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc82eb690fc6af99a9a857fee24999261528a12d6d7935778a83c8e9a2f0d87a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:25:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc82eb690fc6af99a9a857fee24999261528a12d6d7935778a83c8e9a2f0d87a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:25:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc82eb690fc6af99a9a857fee24999261528a12d6d7935778a83c8e9a2f0d87a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:25:56 compute-0 podman[430169]: 2025-10-02 13:25:56.51347303 +0000 UTC m=+0.148799871 container init 2f07015599cee7f2329863636067a05f83bcf6a746342036147ec827dd5da3b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_yonath, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:25:56 compute-0 podman[430169]: 2025-10-02 13:25:56.522866746 +0000 UTC m=+0.158193567 container start 2f07015599cee7f2329863636067a05f83bcf6a746342036147ec827dd5da3b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 13:25:56 compute-0 podman[430169]: 2025-10-02 13:25:56.52593887 +0000 UTC m=+0.161265711 container attach 2f07015599cee7f2329863636067a05f83bcf6a746342036147ec827dd5da3b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_yonath, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 13:25:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:56.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:25:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:56.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:25:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3849: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:57 compute-0 ceph-mon[73607]: pgmap v3849: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:57 compute-0 hungry_yonath[430185]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:25:57 compute-0 hungry_yonath[430185]: --> relative data size: 1.0
Oct 02 13:25:57 compute-0 hungry_yonath[430185]: --> All data devices are unavailable
Oct 02 13:25:57 compute-0 systemd[1]: libpod-2f07015599cee7f2329863636067a05f83bcf6a746342036147ec827dd5da3b4.scope: Deactivated successfully.
Oct 02 13:25:57 compute-0 podman[430169]: 2025-10-02 13:25:57.34061096 +0000 UTC m=+0.975937781 container died 2f07015599cee7f2329863636067a05f83bcf6a746342036147ec827dd5da3b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_yonath, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:25:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc82eb690fc6af99a9a857fee24999261528a12d6d7935778a83c8e9a2f0d87a-merged.mount: Deactivated successfully.
Oct 02 13:25:57 compute-0 podman[430169]: 2025-10-02 13:25:57.419868207 +0000 UTC m=+1.055195028 container remove 2f07015599cee7f2329863636067a05f83bcf6a746342036147ec827dd5da3b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 13:25:57 compute-0 systemd[1]: libpod-conmon-2f07015599cee7f2329863636067a05f83bcf6a746342036147ec827dd5da3b4.scope: Deactivated successfully.
Oct 02 13:25:57 compute-0 sudo[430064]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:57 compute-0 sudo[430213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:25:57 compute-0 sudo[430213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:57 compute-0 sudo[430213]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:57 compute-0 sudo[430238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:25:57 compute-0 sudo[430238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:57 compute-0 sudo[430238]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:57 compute-0 sudo[430263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:25:57 compute-0 sudo[430263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:57 compute-0 sudo[430263]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:57 compute-0 sudo[430288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 13:25:57 compute-0 sudo[430288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:58 compute-0 podman[430354]: 2025-10-02 13:25:58.165140517 +0000 UTC m=+0.046144531 container create 05430f6633583bab8c520d222296fb9ba47d85e8f61c13b24e539f009e396392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bell, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:25:58 compute-0 systemd[1]: Started libpod-conmon-05430f6633583bab8c520d222296fb9ba47d85e8f61c13b24e539f009e396392.scope.
Oct 02 13:25:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:25:58 compute-0 podman[430354]: 2025-10-02 13:25:58.242053727 +0000 UTC m=+0.123057771 container init 05430f6633583bab8c520d222296fb9ba47d85e8f61c13b24e539f009e396392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 13:25:58 compute-0 podman[430354]: 2025-10-02 13:25:58.148408264 +0000 UTC m=+0.029412308 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:25:58 compute-0 podman[430354]: 2025-10-02 13:25:58.249418794 +0000 UTC m=+0.130422808 container start 05430f6633583bab8c520d222296fb9ba47d85e8f61c13b24e539f009e396392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 13:25:58 compute-0 podman[430354]: 2025-10-02 13:25:58.252633052 +0000 UTC m=+0.133637076 container attach 05430f6633583bab8c520d222296fb9ba47d85e8f61c13b24e539f009e396392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bell, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:25:58 compute-0 systemd[1]: libpod-05430f6633583bab8c520d222296fb9ba47d85e8f61c13b24e539f009e396392.scope: Deactivated successfully.
Oct 02 13:25:58 compute-0 tender_bell[430371]: 167 167
Oct 02 13:25:58 compute-0 conmon[430371]: conmon 05430f6633583bab8c52 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-05430f6633583bab8c520d222296fb9ba47d85e8f61c13b24e539f009e396392.scope/container/memory.events
Oct 02 13:25:58 compute-0 podman[430354]: 2025-10-02 13:25:58.25545646 +0000 UTC m=+0.136460474 container died 05430f6633583bab8c520d222296fb9ba47d85e8f61c13b24e539f009e396392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:25:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-d45bc1d7fe615a73395f4a9158c9e085694a1713410998e6da6514680407ef28-merged.mount: Deactivated successfully.
Oct 02 13:25:58 compute-0 podman[430354]: 2025-10-02 13:25:58.307462551 +0000 UTC m=+0.188466565 container remove 05430f6633583bab8c520d222296fb9ba47d85e8f61c13b24e539f009e396392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bell, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:25:58 compute-0 systemd[1]: libpod-conmon-05430f6633583bab8c520d222296fb9ba47d85e8f61c13b24e539f009e396392.scope: Deactivated successfully.
Oct 02 13:25:58 compute-0 podman[430394]: 2025-10-02 13:25:58.459063298 +0000 UTC m=+0.038438885 container create eca0655a64147a3a62bb6259911dfd91beddcc2772d2aa57cfe8a8a9963d858f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:25:58 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2468123150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:25:58 compute-0 systemd[1]: Started libpod-conmon-eca0655a64147a3a62bb6259911dfd91beddcc2772d2aa57cfe8a8a9963d858f.scope.
Oct 02 13:25:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:25:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea139fb8b2c08993062fdf1d0a6efccca0778828703069c4eee1cdd6c563c1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:25:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea139fb8b2c08993062fdf1d0a6efccca0778828703069c4eee1cdd6c563c1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:25:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea139fb8b2c08993062fdf1d0a6efccca0778828703069c4eee1cdd6c563c1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:25:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea139fb8b2c08993062fdf1d0a6efccca0778828703069c4eee1cdd6c563c1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:25:58 compute-0 podman[430394]: 2025-10-02 13:25:58.539487463 +0000 UTC m=+0.118863080 container init eca0655a64147a3a62bb6259911dfd91beddcc2772d2aa57cfe8a8a9963d858f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 13:25:58 compute-0 podman[430394]: 2025-10-02 13:25:58.444630111 +0000 UTC m=+0.024005718 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:25:58 compute-0 podman[430394]: 2025-10-02 13:25:58.545385196 +0000 UTC m=+0.124760783 container start eca0655a64147a3a62bb6259911dfd91beddcc2772d2aa57cfe8a8a9963d858f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:25:58 compute-0 podman[430394]: 2025-10-02 13:25:58.550080888 +0000 UTC m=+0.129456505 container attach eca0655a64147a3a62bb6259911dfd91beddcc2772d2aa57cfe8a8a9963d858f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_matsumoto, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:25:58 compute-0 nova_compute[257802]: 2025-10-02 13:25:58.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:25:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:58.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:25:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:25:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:58.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3850: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]: {
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]:     "1": [
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]:         {
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]:             "devices": [
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]:                 "/dev/loop3"
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]:             ],
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]:             "lv_name": "ceph_lv0",
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]:             "lv_size": "7511998464",
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]:             "name": "ceph_lv0",
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]:             "tags": {
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]:                 "ceph.cluster_name": "ceph",
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]:                 "ceph.crush_device_class": "",
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]:                 "ceph.encrypted": "0",
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]:                 "ceph.osd_id": "1",
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]:                 "ceph.type": "block",
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]:                 "ceph.vdo": "0"
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]:             },
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]:             "type": "block",
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]:             "vg_name": "ceph_vg0"
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]:         }
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]:     ]
Oct 02 13:25:59 compute-0 trusting_matsumoto[430410]: }
Oct 02 13:25:59 compute-0 systemd[1]: libpod-eca0655a64147a3a62bb6259911dfd91beddcc2772d2aa57cfe8a8a9963d858f.scope: Deactivated successfully.
Oct 02 13:25:59 compute-0 podman[430394]: 2025-10-02 13:25:59.35572222 +0000 UTC m=+0.935097827 container died eca0655a64147a3a62bb6259911dfd91beddcc2772d2aa57cfe8a8a9963d858f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:25:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ea139fb8b2c08993062fdf1d0a6efccca0778828703069c4eee1cdd6c563c1f-merged.mount: Deactivated successfully.
Oct 02 13:25:59 compute-0 podman[430394]: 2025-10-02 13:25:59.408726476 +0000 UTC m=+0.988102063 container remove eca0655a64147a3a62bb6259911dfd91beddcc2772d2aa57cfe8a8a9963d858f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_matsumoto, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:25:59 compute-0 systemd[1]: libpod-conmon-eca0655a64147a3a62bb6259911dfd91beddcc2772d2aa57cfe8a8a9963d858f.scope: Deactivated successfully.
Oct 02 13:25:59 compute-0 sudo[430288]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:59 compute-0 ceph-mon[73607]: pgmap v3850: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:59 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1508713232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:25:59 compute-0 sudo[430432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:25:59 compute-0 sudo[430432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:59 compute-0 sudo[430432]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:59 compute-0 sudo[430457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:25:59 compute-0 sudo[430457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:59 compute-0 sudo[430457]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:59 compute-0 sudo[430482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:25:59 compute-0 sudo[430482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:59 compute-0 sudo[430482]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:59 compute-0 sudo[430507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 13:25:59 compute-0 sudo[430507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:00 compute-0 podman[430573]: 2025-10-02 13:26:00.104046914 +0000 UTC m=+0.053585610 container create 8c99e4aa716f3a68a886b96aeb4c6c78cd982d4eb4950985dea3ea826168da7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meninsky, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 13:26:00 compute-0 systemd[1]: Started libpod-conmon-8c99e4aa716f3a68a886b96aeb4c6c78cd982d4eb4950985dea3ea826168da7f.scope.
Oct 02 13:26:00 compute-0 podman[430573]: 2025-10-02 13:26:00.079773571 +0000 UTC m=+0.029312307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:26:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:26:00 compute-0 podman[430573]: 2025-10-02 13:26:00.193679601 +0000 UTC m=+0.143218337 container init 8c99e4aa716f3a68a886b96aeb4c6c78cd982d4eb4950985dea3ea826168da7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meninsky, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 13:26:00 compute-0 podman[430573]: 2025-10-02 13:26:00.200649148 +0000 UTC m=+0.150187834 container start 8c99e4aa716f3a68a886b96aeb4c6c78cd982d4eb4950985dea3ea826168da7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meninsky, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 13:26:00 compute-0 podman[430573]: 2025-10-02 13:26:00.203996379 +0000 UTC m=+0.153535085 container attach 8c99e4aa716f3a68a886b96aeb4c6c78cd982d4eb4950985dea3ea826168da7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meninsky, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:26:00 compute-0 wizardly_meninsky[430589]: 167 167
Oct 02 13:26:00 compute-0 systemd[1]: libpod-8c99e4aa716f3a68a886b96aeb4c6c78cd982d4eb4950985dea3ea826168da7f.scope: Deactivated successfully.
Oct 02 13:26:00 compute-0 conmon[430589]: conmon 8c99e4aa716f3a68a886 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8c99e4aa716f3a68a886b96aeb4c6c78cd982d4eb4950985dea3ea826168da7f.scope/container/memory.events
Oct 02 13:26:00 compute-0 podman[430573]: 2025-10-02 13:26:00.208418605 +0000 UTC m=+0.157957301 container died 8c99e4aa716f3a68a886b96aeb4c6c78cd982d4eb4950985dea3ea826168da7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 13:26:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce24a31f07aaf9a017a5566dce01714a3d84d3df4366c60a3e6303b0e8c19fe0-merged.mount: Deactivated successfully.
Oct 02 13:26:00 compute-0 podman[430573]: 2025-10-02 13:26:00.280842157 +0000 UTC m=+0.230380843 container remove 8c99e4aa716f3a68a886b96aeb4c6c78cd982d4eb4950985dea3ea826168da7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:26:00 compute-0 systemd[1]: libpod-conmon-8c99e4aa716f3a68a886b96aeb4c6c78cd982d4eb4950985dea3ea826168da7f.scope: Deactivated successfully.
Oct 02 13:26:00 compute-0 podman[430612]: 2025-10-02 13:26:00.443232004 +0000 UTC m=+0.042005331 container create 48353c833527a5cf58c44b570a669108d23563348c9af05242d911ce0db8e3fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_dubinsky, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 13:26:00 compute-0 systemd[1]: Started libpod-conmon-48353c833527a5cf58c44b570a669108d23563348c9af05242d911ce0db8e3fb.scope.
Oct 02 13:26:00 compute-0 podman[430612]: 2025-10-02 13:26:00.42393017 +0000 UTC m=+0.022703517 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:26:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:26:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d8adfe1fdc55455a78afa57e4453bc596679909a074385fc93793965e84801b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:26:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d8adfe1fdc55455a78afa57e4453bc596679909a074385fc93793965e84801b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:26:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d8adfe1fdc55455a78afa57e4453bc596679909a074385fc93793965e84801b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:26:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d8adfe1fdc55455a78afa57e4453bc596679909a074385fc93793965e84801b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:26:00 compute-0 podman[430612]: 2025-10-02 13:26:00.542714338 +0000 UTC m=+0.141487685 container init 48353c833527a5cf58c44b570a669108d23563348c9af05242d911ce0db8e3fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 13:26:00 compute-0 podman[430612]: 2025-10-02 13:26:00.5502664 +0000 UTC m=+0.149039727 container start 48353c833527a5cf58c44b570a669108d23563348c9af05242d911ce0db8e3fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_dubinsky, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:26:00 compute-0 podman[430612]: 2025-10-02 13:26:00.553511348 +0000 UTC m=+0.152284695 container attach 48353c833527a5cf58c44b570a669108d23563348c9af05242d911ce0db8e3fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_dubinsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:26:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:00.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:26:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:00.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:26:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3851: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:01 compute-0 nova_compute[257802]: 2025-10-02 13:26:01.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:01 compute-0 ceph-mon[73607]: pgmap v3851: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:01 compute-0 laughing_dubinsky[430628]: {
Oct 02 13:26:01 compute-0 laughing_dubinsky[430628]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 13:26:01 compute-0 laughing_dubinsky[430628]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:26:01 compute-0 laughing_dubinsky[430628]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:26:01 compute-0 laughing_dubinsky[430628]:         "osd_id": 1,
Oct 02 13:26:01 compute-0 laughing_dubinsky[430628]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:26:01 compute-0 laughing_dubinsky[430628]:         "type": "bluestore"
Oct 02 13:26:01 compute-0 laughing_dubinsky[430628]:     }
Oct 02 13:26:01 compute-0 laughing_dubinsky[430628]: }
Oct 02 13:26:01 compute-0 systemd[1]: libpod-48353c833527a5cf58c44b570a669108d23563348c9af05242d911ce0db8e3fb.scope: Deactivated successfully.
Oct 02 13:26:01 compute-0 podman[430612]: 2025-10-02 13:26:01.453644664 +0000 UTC m=+1.052417991 container died 48353c833527a5cf58c44b570a669108d23563348c9af05242d911ce0db8e3fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 13:26:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d8adfe1fdc55455a78afa57e4453bc596679909a074385fc93793965e84801b-merged.mount: Deactivated successfully.
Oct 02 13:26:01 compute-0 podman[430612]: 2025-10-02 13:26:01.507275464 +0000 UTC m=+1.106048791 container remove 48353c833527a5cf58c44b570a669108d23563348c9af05242d911ce0db8e3fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 13:26:01 compute-0 systemd[1]: libpod-conmon-48353c833527a5cf58c44b570a669108d23563348c9af05242d911ce0db8e3fb.scope: Deactivated successfully.
Oct 02 13:26:01 compute-0 sudo[430507]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:26:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:01 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:26:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:26:01 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:26:01 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev dfbc6fcf-1f4d-4dc1-ad2f-4889ff6b2937 does not exist
Oct 02 13:26:01 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 76218c5b-d481-46fa-bd45-84056a0d7913 does not exist
Oct 02 13:26:01 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev c28b1801-cea5-45aa-84bb-dbad3d786c0a does not exist
Oct 02 13:26:01 compute-0 sudo[430664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:26:01 compute-0 sudo[430664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:01 compute-0 sudo[430664]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:02 compute-0 sudo[430689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:26:02 compute-0 sudo[430689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:02 compute-0 sudo[430689]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #195. Immutable memtables: 0.
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:26:02.092340) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 195
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411562092380, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 1297, "num_deletes": 251, "total_data_size": 2215392, "memory_usage": 2261200, "flush_reason": "Manual Compaction"}
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #196: started
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411562105130, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 196, "file_size": 2179196, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 84257, "largest_seqno": 85553, "table_properties": {"data_size": 2173059, "index_size": 3399, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12937, "raw_average_key_size": 19, "raw_value_size": 2160817, "raw_average_value_size": 3339, "num_data_blocks": 150, "num_entries": 647, "num_filter_entries": 647, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411437, "oldest_key_time": 1759411437, "file_creation_time": 1759411562, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 196, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 12872 microseconds, and 5385 cpu microseconds.
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:26:02.105199) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #196: 2179196 bytes OK
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:26:02.105228) [db/memtable_list.cc:519] [default] Level-0 commit table #196 started
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:26:02.107442) [db/memtable_list.cc:722] [default] Level-0 commit table #196: memtable #1 done
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:26:02.107457) EVENT_LOG_v1 {"time_micros": 1759411562107452, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:26:02.107480) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 2209770, prev total WAL file size 2209770, number of live WAL files 2.
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000192.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:26:02.108248) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [196(2128KB)], [194(12MB)]
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411562108297, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [196], "files_L6": [194], "score": -1, "input_data_size": 14981600, "oldest_snapshot_seqno": -1}
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #197: 11294 keys, 13034929 bytes, temperature: kUnknown
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411562192208, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 197, "file_size": 13034929, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12963910, "index_size": 41658, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28293, "raw_key_size": 299560, "raw_average_key_size": 26, "raw_value_size": 12768402, "raw_average_value_size": 1130, "num_data_blocks": 1576, "num_entries": 11294, "num_filter_entries": 11294, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759411562, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:26:02.192472) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 13034929 bytes
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:26:02.194527) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.3 rd, 155.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 12.2 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(12.9) write-amplify(6.0) OK, records in: 11811, records dropped: 517 output_compression: NoCompression
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:26:02.194550) EVENT_LOG_v1 {"time_micros": 1759411562194541, "job": 122, "event": "compaction_finished", "compaction_time_micros": 84007, "compaction_time_cpu_micros": 33020, "output_level": 6, "num_output_files": 1, "total_output_size": 13034929, "num_input_records": 11811, "num_output_records": 11294, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000196.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411562195240, "job": 122, "event": "table_file_deletion", "file_number": 196}
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411562197261, "job": 122, "event": "table_file_deletion", "file_number": 194}
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:26:02.108128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:26:02.197479) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:26:02.197485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:26:02.197487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:26:02.197489) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:26:02 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:26:02.197492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:26:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:26:02 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:26:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:02.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:26:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:02.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:26:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3852: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:03 compute-0 nova_compute[257802]: 2025-10-02 13:26:03.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:03 compute-0 nova_compute[257802]: 2025-10-02 13:26:03.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:26:03 compute-0 nova_compute[257802]: 2025-10-02 13:26:03.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:26:03 compute-0 nova_compute[257802]: 2025-10-02 13:26:03.116 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:26:03 compute-0 nova_compute[257802]: 2025-10-02 13:26:03.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:03 compute-0 ceph-mon[73607]: pgmap v3852: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:04.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:26:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:04.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:26:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3853: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:06 compute-0 nova_compute[257802]: 2025-10-02 13:26:06.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:06 compute-0 ceph-mon[73607]: pgmap v3853: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:06.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:06.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3854: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:07 compute-0 nova_compute[257802]: 2025-10-02 13:26:07.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:07 compute-0 nova_compute[257802]: 2025-10-02 13:26:07.128 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:26:07 compute-0 nova_compute[257802]: 2025-10-02 13:26:07.128 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:26:07 compute-0 nova_compute[257802]: 2025-10-02 13:26:07.128 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:26:07 compute-0 nova_compute[257802]: 2025-10-02 13:26:07.129 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:26:07 compute-0 nova_compute[257802]: 2025-10-02 13:26:07.129 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:26:07 compute-0 ceph-mon[73607]: pgmap v3854: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:07 compute-0 podman[430739]: 2025-10-02 13:26:07.420010436 +0000 UTC m=+0.055234150 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3)
Oct 02 13:26:07 compute-0 podman[430737]: 2025-10-02 13:26:07.419702548 +0000 UTC m=+0.061492650 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 13:26:07 compute-0 podman[430738]: 2025-10-02 13:26:07.446065263 +0000 UTC m=+0.086283137 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3)
Oct 02 13:26:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:26:07 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1489323931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:26:07 compute-0 nova_compute[257802]: 2025-10-02 13:26:07.569 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:26:07 compute-0 nova_compute[257802]: 2025-10-02 13:26:07.735 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:26:07 compute-0 nova_compute[257802]: 2025-10-02 13:26:07.736 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4148MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:26:07 compute-0 nova_compute[257802]: 2025-10-02 13:26:07.736 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:26:07 compute-0 nova_compute[257802]: 2025-10-02 13:26:07.737 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:26:07 compute-0 nova_compute[257802]: 2025-10-02 13:26:07.798 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:26:07 compute-0 nova_compute[257802]: 2025-10-02 13:26:07.799 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:26:07 compute-0 nova_compute[257802]: 2025-10-02 13:26:07.812 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing inventories for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 13:26:07 compute-0 nova_compute[257802]: 2025-10-02 13:26:07.848 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating ProviderTree inventory for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 13:26:07 compute-0 nova_compute[257802]: 2025-10-02 13:26:07.849 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating inventory in ProviderTree for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 13:26:07 compute-0 nova_compute[257802]: 2025-10-02 13:26:07.872 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing aggregate associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 13:26:07 compute-0 nova_compute[257802]: 2025-10-02 13:26:07.898 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing trait associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 13:26:07 compute-0 nova_compute[257802]: 2025-10-02 13:26:07.917 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:26:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:26:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1810225662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:26:08 compute-0 nova_compute[257802]: 2025-10-02 13:26:08.375 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:26:08 compute-0 nova_compute[257802]: 2025-10-02 13:26:08.382 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:26:08 compute-0 nova_compute[257802]: 2025-10-02 13:26:08.401 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:26:08 compute-0 nova_compute[257802]: 2025-10-02 13:26:08.403 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:26:08 compute-0 nova_compute[257802]: 2025-10-02 13:26:08.403 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:26:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1489323931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:26:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1810225662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:26:08 compute-0 nova_compute[257802]: 2025-10-02 13:26:08.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:08.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:08.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3855: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:09 compute-0 ceph-mon[73607]: pgmap v3855: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:10.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:26:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:10.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:26:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3856: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:11 compute-0 nova_compute[257802]: 2025-10-02 13:26:11.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:11 compute-0 ceph-mon[73607]: pgmap v3856: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:26:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:26:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:26:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:26:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:26:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:26:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:12.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:12.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3857: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:13 compute-0 ceph-mon[73607]: pgmap v3857: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:13 compute-0 nova_compute[257802]: 2025-10-02 13:26:13.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:14 compute-0 sudo[430817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:26:14 compute-0 sudo[430817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:14 compute-0 sudo[430817]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:14 compute-0 sudo[430842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:26:14 compute-0 sudo[430842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:14 compute-0 sudo[430842]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:26:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:14.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:26:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:14.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3858: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:15 compute-0 ceph-mon[73607]: pgmap v3858: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:16 compute-0 nova_compute[257802]: 2025-10-02 13:26:16.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:26:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:16.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:26:16 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:16 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:26:16 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:16.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:26:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3859: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:17 compute-0 ceph-mon[73607]: pgmap v3859: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:18 compute-0 nova_compute[257802]: 2025-10-02 13:26:18.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:18 compute-0 podman[430870]: 2025-10-02 13:26:18.931934954 +0000 UTC m=+0.075535269 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:26:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:18.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:18 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:18 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:18 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:18.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3860: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:19 compute-0 ceph-mon[73607]: pgmap v3860: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:20.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:20 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:20 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:20 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:20.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3861: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:21 compute-0 nova_compute[257802]: 2025-10-02 13:26:21.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:21 compute-0 ceph-mon[73607]: pgmap v3861: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:21 compute-0 nova_compute[257802]: 2025-10-02 13:26:21.402 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:26:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:22.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:26:22 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:22 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:22 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:22.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3862: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:23 compute-0 ceph-mon[73607]: pgmap v3862: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:23 compute-0 nova_compute[257802]: 2025-10-02 13:26:23.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:24.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:24 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:24 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:24 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:24.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3863: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:26 compute-0 ceph-mon[73607]: pgmap v3863: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:26 compute-0 nova_compute[257802]: 2025-10-02 13:26:26.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:26.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:26 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:26 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:26:26 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:26.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:26:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:26:27.013 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:26:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:26:27.013 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:26:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:26:27.014 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:26:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3864: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:27 compute-0 ceph-mon[73607]: pgmap v3864: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:28 compute-0 nova_compute[257802]: 2025-10-02 13:26:28.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:28.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:28 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:28 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:26:28 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:28.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:26:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3865: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:30 compute-0 ceph-mon[73607]: pgmap v3865: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:30.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:30 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:30 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:26:30 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:30.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:26:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3866: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:31 compute-0 nova_compute[257802]: 2025-10-02 13:26:31.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:31 compute-0 ceph-mon[73607]: pgmap v3866: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:32.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:32 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:32 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:32 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:32.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3867: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:33 compute-0 ceph-mon[73607]: pgmap v3867: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:33 compute-0 nova_compute[257802]: 2025-10-02 13:26:33.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:34 compute-0 sudo[430903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:26:34 compute-0 sudo[430903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:34 compute-0 sudo[430903]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:34 compute-0 sudo[430928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:26:34 compute-0 sudo[430928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:34 compute-0 sudo[430928]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:34.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:34 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:34 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:34 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:34.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3868: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:36 compute-0 ceph-mon[73607]: pgmap v3868: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:36 compute-0 nova_compute[257802]: 2025-10-02 13:26:36.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:36.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:36 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:36 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:36 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:36.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3869: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:37 compute-0 ceph-mon[73607]: pgmap v3869: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:37 compute-0 podman[430958]: 2025-10-02 13:26:37.911625676 +0000 UTC m=+0.046621354 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 13:26:37 compute-0 podman[430956]: 2025-10-02 13:26:37.911775159 +0000 UTC m=+0.050424684 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 13:26:37 compute-0 podman[430957]: 2025-10-02 13:26:37.946672899 +0000 UTC m=+0.082303831 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 13:26:37 compute-0 nova_compute[257802]: 2025-10-02 13:26:37.988 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:38 compute-0 nova_compute[257802]: 2025-10-02 13:26:38.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:38.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:38 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:38 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:38 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:38.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3870: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:39 compute-0 ceph-mon[73607]: pgmap v3870: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:40.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:40 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:40 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:40 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:40.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3871: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:41 compute-0 nova_compute[257802]: 2025-10-02 13:26:41.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:42 compute-0 nova_compute[257802]: 2025-10-02 13:26:42.100 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:42 compute-0 ceph-mon[73607]: pgmap v3871: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:26:42
Oct 02 13:26:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:26:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:26:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'default.rgw.meta', 'backups', '.rgw.root', 'images', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms']
Oct 02 13:26:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:26:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:26:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:26:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:26:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:26:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:26:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:26:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:42.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:42 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:42 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:42 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:42.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:43 compute-0 nova_compute[257802]: 2025-10-02 13:26:43.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:43 compute-0 nova_compute[257802]: 2025-10-02 13:26:43.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:26:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3872: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:43 compute-0 ceph-mon[73607]: pgmap v3872: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:26:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:26:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:26:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:26:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:26:43 compute-0 nova_compute[257802]: 2025-10-02 13:26:43.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:26:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:26:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:26:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:26:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:26:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:26:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:44.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:26:44 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:44 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:44 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:44.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3873: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:46 compute-0 nova_compute[257802]: 2025-10-02 13:26:46.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:46 compute-0 ceph-mon[73607]: pgmap v3873: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:46.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:46 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:46 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:46 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:46.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3874: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:47 compute-0 ceph-mon[73607]: pgmap v3874: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:48 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1269393396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:26:48 compute-0 nova_compute[257802]: 2025-10-02 13:26:48.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:26:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:48.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:26:48 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:48 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:48 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:48.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:49 compute-0 nova_compute[257802]: 2025-10-02 13:26:49.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3875: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:49 compute-0 ceph-mon[73607]: pgmap v3875: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:49 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2578479254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:26:49 compute-0 podman[431020]: 2025-10-02 13:26:49.927688845 +0000 UTC m=+0.070004305 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 13:26:50 compute-0 nova_compute[257802]: 2025-10-02 13:26:50.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:50 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:50 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:26:50 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:50.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:26:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:26:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:50.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:26:51 compute-0 nova_compute[257802]: 2025-10-02 13:26:51.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3876: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:51 compute-0 nova_compute[257802]: 2025-10-02 13:26:51.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:52 compute-0 ceph-mon[73607]: pgmap v3876: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:52 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:52 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:52 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:52.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:53.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:53 compute-0 nova_compute[257802]: 2025-10-02 13:26:53.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3877: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:53 compute-0 ceph-mon[73607]: pgmap v3877: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:53 compute-0 nova_compute[257802]: 2025-10-02 13:26:53.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:54 compute-0 sudo[431046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:26:54 compute-0 sudo[431046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:54 compute-0 sudo[431046]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:54 compute-0 sudo[431071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:26:54 compute-0 sudo[431071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:54 compute-0 sudo[431071]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:54 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:54 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:54 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:54.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:55.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3878: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:26:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1507537496' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:26:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:26:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1507537496' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:26:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:26:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:26:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:26:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:26:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:26:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:26:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:26:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:26:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:26:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:26:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:26:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:26:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:26:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:26:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:26:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:26:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:26:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:26:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:26:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:26:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:26:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:26:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:26:55 compute-0 nova_compute[257802]: 2025-10-02 13:26:55.148 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1507537496' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:26:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1507537496' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:26:56 compute-0 nova_compute[257802]: 2025-10-02 13:26:56.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:56 compute-0 ceph-mon[73607]: pgmap v3878: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:56 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:56 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:56 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:56.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:57.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3879: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:57 compute-0 ceph-mon[73607]: pgmap v3879: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:58 compute-0 nova_compute[257802]: 2025-10-02 13:26:58.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:58 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:58 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:58 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:58.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:26:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:59.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3880: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:59 compute-0 ceph-mon[73607]: pgmap v3880: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:00 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3144941086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:27:00 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:00 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:00 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:00.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:27:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:01.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:27:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3881: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:01 compute-0 nova_compute[257802]: 2025-10-02 13:27:01.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:01 compute-0 ceph-mon[73607]: pgmap v3881: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:02 compute-0 sudo[431100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:02 compute-0 sudo[431100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:02 compute-0 sudo[431100]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:02 compute-0 sudo[431125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:27:02 compute-0 sudo[431125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:02 compute-0 sudo[431125]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:02 compute-0 sudo[431150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:02 compute-0 sudo[431150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:02 compute-0 sudo[431150]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:02 compute-0 sudo[431175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:27:02 compute-0 sudo[431175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1429653974' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:27:02 compute-0 sudo[431175]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:02 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:02 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:02 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:02.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:27:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:03.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:27:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:27:03 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:27:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:27:03 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:27:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:27:03 compute-0 nova_compute[257802]: 2025-10-02 13:27:03.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:03 compute-0 nova_compute[257802]: 2025-10-02 13:27:03.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:27:03 compute-0 nova_compute[257802]: 2025-10-02 13:27:03.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:27:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3882: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:03 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:27:03 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 83dc5bd2-a208-4c2c-9476-58f38947fd5b does not exist
Oct 02 13:27:03 compute-0 nova_compute[257802]: 2025-10-02 13:27:03.129 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:27:03 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev ad76bc6f-2df4-4c5d-99bd-17b7185f8687 does not exist
Oct 02 13:27:03 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 11a7f657-69be-4483-844e-76d324cd5fe4 does not exist
Oct 02 13:27:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:27:03 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:27:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:27:03 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:27:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:27:03 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:27:03 compute-0 sudo[431231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:03 compute-0 sudo[431231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:03 compute-0 sudo[431231]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:03 compute-0 sudo[431256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:27:03 compute-0 sudo[431256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:03 compute-0 sudo[431256]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:03 compute-0 sudo[431281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:03 compute-0 sudo[431281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:03 compute-0 sudo[431281]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:03 compute-0 sudo[431306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:27:03 compute-0 sudo[431306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:03 compute-0 podman[431374]: 2025-10-02 13:27:03.754693801 +0000 UTC m=+0.057015073 container create e58091dc62135c68ebcb4f6a35c3b30d9684c8335c67c27e1a88b6a41b48afc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bhabha, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 13:27:03 compute-0 nova_compute[257802]: 2025-10-02 13:27:03.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:03 compute-0 systemd[1]: Started libpod-conmon-e58091dc62135c68ebcb4f6a35c3b30d9684c8335c67c27e1a88b6a41b48afc5.scope.
Oct 02 13:27:03 compute-0 podman[431374]: 2025-10-02 13:27:03.72471412 +0000 UTC m=+0.027035422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:27:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:27:03 compute-0 podman[431374]: 2025-10-02 13:27:03.852951995 +0000 UTC m=+0.155273297 container init e58091dc62135c68ebcb4f6a35c3b30d9684c8335c67c27e1a88b6a41b48afc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:27:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:27:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:27:03 compute-0 ceph-mon[73607]: pgmap v3882: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:27:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:27:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:27:03 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:27:03 compute-0 podman[431374]: 2025-10-02 13:27:03.85895061 +0000 UTC m=+0.161271892 container start e58091dc62135c68ebcb4f6a35c3b30d9684c8335c67c27e1a88b6a41b48afc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bhabha, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Oct 02 13:27:03 compute-0 angry_bhabha[431390]: 167 167
Oct 02 13:27:03 compute-0 systemd[1]: libpod-e58091dc62135c68ebcb4f6a35c3b30d9684c8335c67c27e1a88b6a41b48afc5.scope: Deactivated successfully.
Oct 02 13:27:03 compute-0 podman[431374]: 2025-10-02 13:27:03.8693542 +0000 UTC m=+0.171675482 container attach e58091dc62135c68ebcb4f6a35c3b30d9684c8335c67c27e1a88b6a41b48afc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bhabha, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:27:03 compute-0 podman[431374]: 2025-10-02 13:27:03.870418826 +0000 UTC m=+0.172740108 container died e58091dc62135c68ebcb4f6a35c3b30d9684c8335c67c27e1a88b6a41b48afc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 13:27:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-067cf1302dd1f70bdb0d228198ae0c04544fbc006fa124297f0802fcd28ebddc-merged.mount: Deactivated successfully.
Oct 02 13:27:03 compute-0 podman[431374]: 2025-10-02 13:27:03.946063935 +0000 UTC m=+0.248385217 container remove e58091dc62135c68ebcb4f6a35c3b30d9684c8335c67c27e1a88b6a41b48afc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bhabha, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:27:03 compute-0 systemd[1]: libpod-conmon-e58091dc62135c68ebcb4f6a35c3b30d9684c8335c67c27e1a88b6a41b48afc5.scope: Deactivated successfully.
Oct 02 13:27:04 compute-0 podman[431415]: 2025-10-02 13:27:04.112695624 +0000 UTC m=+0.048320434 container create 817b474b5605b8dfec852c5a1af364038505e67b0a98ef2339f1d49dae427e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ritchie, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:27:04 compute-0 systemd[1]: Started libpod-conmon-817b474b5605b8dfec852c5a1af364038505e67b0a98ef2339f1d49dae427e9d.scope.
Oct 02 13:27:04 compute-0 podman[431415]: 2025-10-02 13:27:04.088471551 +0000 UTC m=+0.024096381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:27:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:27:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01e721df318a3bebd79c290a4d00e5677cf0308a7e729e69b6466e4710653d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01e721df318a3bebd79c290a4d00e5677cf0308a7e729e69b6466e4710653d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01e721df318a3bebd79c290a4d00e5677cf0308a7e729e69b6466e4710653d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01e721df318a3bebd79c290a4d00e5677cf0308a7e729e69b6466e4710653d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01e721df318a3bebd79c290a4d00e5677cf0308a7e729e69b6466e4710653d3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:04 compute-0 podman[431415]: 2025-10-02 13:27:04.223426308 +0000 UTC m=+0.159051148 container init 817b474b5605b8dfec852c5a1af364038505e67b0a98ef2339f1d49dae427e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 13:27:04 compute-0 podman[431415]: 2025-10-02 13:27:04.232041515 +0000 UTC m=+0.167666325 container start 817b474b5605b8dfec852c5a1af364038505e67b0a98ef2339f1d49dae427e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ritchie, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:27:04 compute-0 podman[431415]: 2025-10-02 13:27:04.239601197 +0000 UTC m=+0.175226007 container attach 817b474b5605b8dfec852c5a1af364038505e67b0a98ef2339f1d49dae427e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ritchie, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 13:27:04 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:04 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:04 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:04.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:27:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:05.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:27:05 compute-0 adoring_ritchie[431432]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:27:05 compute-0 adoring_ritchie[431432]: --> relative data size: 1.0
Oct 02 13:27:05 compute-0 adoring_ritchie[431432]: --> All data devices are unavailable
Oct 02 13:27:05 compute-0 systemd[1]: libpod-817b474b5605b8dfec852c5a1af364038505e67b0a98ef2339f1d49dae427e9d.scope: Deactivated successfully.
Oct 02 13:27:05 compute-0 podman[431415]: 2025-10-02 13:27:05.065538248 +0000 UTC m=+1.001163078 container died 817b474b5605b8dfec852c5a1af364038505e67b0a98ef2339f1d49dae427e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ritchie, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:27:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3883: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-e01e721df318a3bebd79c290a4d00e5677cf0308a7e729e69b6466e4710653d3-merged.mount: Deactivated successfully.
Oct 02 13:27:05 compute-0 ceph-mon[73607]: pgmap v3883: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:05 compute-0 podman[431415]: 2025-10-02 13:27:05.370652689 +0000 UTC m=+1.306277499 container remove 817b474b5605b8dfec852c5a1af364038505e67b0a98ef2339f1d49dae427e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:27:05 compute-0 systemd[1]: libpod-conmon-817b474b5605b8dfec852c5a1af364038505e67b0a98ef2339f1d49dae427e9d.scope: Deactivated successfully.
Oct 02 13:27:05 compute-0 sudo[431306]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:05 compute-0 sudo[431460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:05 compute-0 sudo[431460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:05 compute-0 sudo[431460]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:05 compute-0 sudo[431485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:27:05 compute-0 sudo[431485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:05 compute-0 sudo[431485]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:05 compute-0 sudo[431510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:05 compute-0 sudo[431510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:05 compute-0 sudo[431510]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:05 compute-0 sudo[431535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 13:27:05 compute-0 sudo[431535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:06 compute-0 podman[431602]: 2025-10-02 13:27:06.028513426 +0000 UTC m=+0.057339951 container create 1056b07ce275277734cee3fcb843cc33fb969795a05f6ee734e247ccc7d826dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mahavira, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:27:06 compute-0 podman[431602]: 2025-10-02 13:27:05.99254671 +0000 UTC m=+0.021373245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:27:06 compute-0 systemd[1]: Started libpod-conmon-1056b07ce275277734cee3fcb843cc33fb969795a05f6ee734e247ccc7d826dc.scope.
Oct 02 13:27:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:27:06 compute-0 podman[431602]: 2025-10-02 13:27:06.14298806 +0000 UTC m=+0.171814595 container init 1056b07ce275277734cee3fcb843cc33fb969795a05f6ee734e247ccc7d826dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 13:27:06 compute-0 podman[431602]: 2025-10-02 13:27:06.151800882 +0000 UTC m=+0.180627397 container start 1056b07ce275277734cee3fcb843cc33fb969795a05f6ee734e247ccc7d826dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mahavira, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 13:27:06 compute-0 inspiring_mahavira[431618]: 167 167
Oct 02 13:27:06 compute-0 systemd[1]: libpod-1056b07ce275277734cee3fcb843cc33fb969795a05f6ee734e247ccc7d826dc.scope: Deactivated successfully.
Oct 02 13:27:06 compute-0 podman[431602]: 2025-10-02 13:27:06.157133361 +0000 UTC m=+0.185959926 container attach 1056b07ce275277734cee3fcb843cc33fb969795a05f6ee734e247ccc7d826dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mahavira, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:27:06 compute-0 conmon[431618]: conmon 1056b07ce275277734ce <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1056b07ce275277734cee3fcb843cc33fb969795a05f6ee734e247ccc7d826dc.scope/container/memory.events
Oct 02 13:27:06 compute-0 podman[431602]: 2025-10-02 13:27:06.158399141 +0000 UTC m=+0.187225666 container died 1056b07ce275277734cee3fcb843cc33fb969795a05f6ee734e247ccc7d826dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mahavira, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:27:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-30b205136b1bca9f3dee6277f48d2745a34a9972aadc8f8148f1658d74b26653-merged.mount: Deactivated successfully.
Oct 02 13:27:06 compute-0 nova_compute[257802]: 2025-10-02 13:27:06.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:06 compute-0 podman[431602]: 2025-10-02 13:27:06.364044488 +0000 UTC m=+0.392871003 container remove 1056b07ce275277734cee3fcb843cc33fb969795a05f6ee734e247ccc7d826dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 13:27:06 compute-0 systemd[1]: libpod-conmon-1056b07ce275277734cee3fcb843cc33fb969795a05f6ee734e247ccc7d826dc.scope: Deactivated successfully.
Oct 02 13:27:06 compute-0 podman[431643]: 2025-10-02 13:27:06.555812102 +0000 UTC m=+0.042344300 container create 5eb8158597b95a44230a4c7e58449caa5d45ca1812c4c238e6b0b7650a60e6d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_proskuriakova, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:27:06 compute-0 systemd[1]: Started libpod-conmon-5eb8158597b95a44230a4c7e58449caa5d45ca1812c4c238e6b0b7650a60e6d2.scope.
Oct 02 13:27:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:27:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aacd2df52d64d98446f806263e473464fc448a4e16f752bb5af2edbeb2dc606/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aacd2df52d64d98446f806263e473464fc448a4e16f752bb5af2edbeb2dc606/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aacd2df52d64d98446f806263e473464fc448a4e16f752bb5af2edbeb2dc606/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aacd2df52d64d98446f806263e473464fc448a4e16f752bb5af2edbeb2dc606/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:06 compute-0 podman[431643]: 2025-10-02 13:27:06.538166827 +0000 UTC m=+0.024699045 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:27:06 compute-0 podman[431643]: 2025-10-02 13:27:06.636883553 +0000 UTC m=+0.123415781 container init 5eb8158597b95a44230a4c7e58449caa5d45ca1812c4c238e6b0b7650a60e6d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 13:27:06 compute-0 podman[431643]: 2025-10-02 13:27:06.644051485 +0000 UTC m=+0.130583683 container start 5eb8158597b95a44230a4c7e58449caa5d45ca1812c4c238e6b0b7650a60e6d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:27:06 compute-0 podman[431643]: 2025-10-02 13:27:06.648458231 +0000 UTC m=+0.134990449 container attach 5eb8158597b95a44230a4c7e58449caa5d45ca1812c4c238e6b0b7650a60e6d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:27:06 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:06 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:06 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:06.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:07.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:07 compute-0 nova_compute[257802]: 2025-10-02 13:27:07.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3884: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:07 compute-0 nova_compute[257802]: 2025-10-02 13:27:07.164 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:27:07 compute-0 nova_compute[257802]: 2025-10-02 13:27:07.165 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:27:07 compute-0 nova_compute[257802]: 2025-10-02 13:27:07.165 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:27:07 compute-0 nova_compute[257802]: 2025-10-02 13:27:07.165 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:27:07 compute-0 nova_compute[257802]: 2025-10-02 13:27:07.165 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]: {
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]:     "1": [
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]:         {
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]:             "devices": [
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]:                 "/dev/loop3"
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]:             ],
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]:             "lv_name": "ceph_lv0",
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]:             "lv_size": "7511998464",
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]:             "name": "ceph_lv0",
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]:             "tags": {
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]:                 "ceph.cluster_name": "ceph",
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]:                 "ceph.crush_device_class": "",
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]:                 "ceph.encrypted": "0",
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]:                 "ceph.osd_id": "1",
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]:                 "ceph.type": "block",
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]:                 "ceph.vdo": "0"
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]:             },
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]:             "type": "block",
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]:             "vg_name": "ceph_vg0"
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]:         }
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]:     ]
Oct 02 13:27:07 compute-0 sharp_proskuriakova[431660]: }
Oct 02 13:27:07 compute-0 systemd[1]: libpod-5eb8158597b95a44230a4c7e58449caa5d45ca1812c4c238e6b0b7650a60e6d2.scope: Deactivated successfully.
Oct 02 13:27:07 compute-0 podman[431643]: 2025-10-02 13:27:07.421804156 +0000 UTC m=+0.908336354 container died 5eb8158597b95a44230a4c7e58449caa5d45ca1812c4c238e6b0b7650a60e6d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_proskuriakova, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:27:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-0aacd2df52d64d98446f806263e473464fc448a4e16f752bb5af2edbeb2dc606-merged.mount: Deactivated successfully.
Oct 02 13:27:07 compute-0 podman[431643]: 2025-10-02 13:27:07.478776967 +0000 UTC m=+0.965309155 container remove 5eb8158597b95a44230a4c7e58449caa5d45ca1812c4c238e6b0b7650a60e6d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 13:27:07 compute-0 systemd[1]: libpod-conmon-5eb8158597b95a44230a4c7e58449caa5d45ca1812c4c238e6b0b7650a60e6d2.scope: Deactivated successfully.
Oct 02 13:27:07 compute-0 sudo[431535]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:07 compute-0 sudo[431705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:07 compute-0 sudo[431705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:07 compute-0 sudo[431705]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:27:07 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1782148413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:27:07 compute-0 sudo[431730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:27:07 compute-0 nova_compute[257802]: 2025-10-02 13:27:07.627 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:27:07 compute-0 sudo[431730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:07 compute-0 sudo[431730]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:07 compute-0 sudo[431757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:07 compute-0 sudo[431757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:07 compute-0 sudo[431757]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:07 compute-0 sudo[431782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 13:27:07 compute-0 sudo[431782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:07 compute-0 nova_compute[257802]: 2025-10-02 13:27:07.794 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:27:07 compute-0 nova_compute[257802]: 2025-10-02 13:27:07.796 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4166MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:27:07 compute-0 nova_compute[257802]: 2025-10-02 13:27:07.796 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:27:07 compute-0 nova_compute[257802]: 2025-10-02 13:27:07.796 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:27:08 compute-0 nova_compute[257802]: 2025-10-02 13:27:08.036 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:27:08 compute-0 nova_compute[257802]: 2025-10-02 13:27:08.037 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:27:08 compute-0 nova_compute[257802]: 2025-10-02 13:27:08.052 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:27:08 compute-0 podman[431848]: 2025-10-02 13:27:08.073377413 +0000 UTC m=+0.038392045 container create 62231b1b2884ded9a439739d250be2ac0f2e2bb1b85a009a4b72436012c629f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bell, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:27:08 compute-0 systemd[1]: Started libpod-conmon-62231b1b2884ded9a439739d250be2ac0f2e2bb1b85a009a4b72436012c629f1.scope.
Oct 02 13:27:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:27:08 compute-0 podman[431848]: 2025-10-02 13:27:08.147135397 +0000 UTC m=+0.112150029 container init 62231b1b2884ded9a439739d250be2ac0f2e2bb1b85a009a4b72436012c629f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 13:27:08 compute-0 podman[431848]: 2025-10-02 13:27:08.056705992 +0000 UTC m=+0.021720624 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:27:08 compute-0 podman[431848]: 2025-10-02 13:27:08.154572996 +0000 UTC m=+0.119587628 container start 62231b1b2884ded9a439739d250be2ac0f2e2bb1b85a009a4b72436012c629f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bell, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:27:08 compute-0 zealous_bell[431867]: 167 167
Oct 02 13:27:08 compute-0 systemd[1]: libpod-62231b1b2884ded9a439739d250be2ac0f2e2bb1b85a009a4b72436012c629f1.scope: Deactivated successfully.
Oct 02 13:27:08 compute-0 conmon[431867]: conmon 62231b1b2884ded9a439 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-62231b1b2884ded9a439739d250be2ac0f2e2bb1b85a009a4b72436012c629f1.scope/container/memory.events
Oct 02 13:27:08 compute-0 podman[431848]: 2025-10-02 13:27:08.166549324 +0000 UTC m=+0.131563956 container attach 62231b1b2884ded9a439739d250be2ac0f2e2bb1b85a009a4b72436012c629f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 13:27:08 compute-0 podman[431848]: 2025-10-02 13:27:08.166959724 +0000 UTC m=+0.131974366 container died 62231b1b2884ded9a439739d250be2ac0f2e2bb1b85a009a4b72436012c629f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bell, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 13:27:08 compute-0 podman[431866]: 2025-10-02 13:27:08.188210056 +0000 UTC m=+0.076580334 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid)
Oct 02 13:27:08 compute-0 podman[431861]: 2025-10-02 13:27:08.192033047 +0000 UTC m=+0.082208378 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:27:08 compute-0 podman[431865]: 2025-10-02 13:27:08.193674756 +0000 UTC m=+0.082164197 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 13:27:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-997d73f39d89ebc2a30b647c47e4fb65b74dea3039c7192509585bfdea279fa7-merged.mount: Deactivated successfully.
Oct 02 13:27:08 compute-0 ceph-mon[73607]: pgmap v3884: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1782148413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:27:08 compute-0 podman[431848]: 2025-10-02 13:27:08.23578682 +0000 UTC m=+0.200801452 container remove 62231b1b2884ded9a439739d250be2ac0f2e2bb1b85a009a4b72436012c629f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bell, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 13:27:08 compute-0 systemd[1]: libpod-conmon-62231b1b2884ded9a439739d250be2ac0f2e2bb1b85a009a4b72436012c629f1.scope: Deactivated successfully.
Oct 02 13:27:08 compute-0 podman[431961]: 2025-10-02 13:27:08.39579591 +0000 UTC m=+0.041743826 container create a75bbcd65dd606fde297e67268fb2a38509f178f5173bcb144279462e3751bda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_williamson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:27:08 compute-0 podman[431961]: 2025-10-02 13:27:08.375392398 +0000 UTC m=+0.021340334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:27:08 compute-0 systemd[1]: Started libpod-conmon-a75bbcd65dd606fde297e67268fb2a38509f178f5173bcb144279462e3751bda.scope.
Oct 02 13:27:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:27:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2390996037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:27:08 compute-0 nova_compute[257802]: 2025-10-02 13:27:08.500 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:27:08 compute-0 nova_compute[257802]: 2025-10-02 13:27:08.508 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:27:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:27:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76e66c0564aec3c01775d99772130a95597697334e874e471c87bc3749f11819/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76e66c0564aec3c01775d99772130a95597697334e874e471c87bc3749f11819/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76e66c0564aec3c01775d99772130a95597697334e874e471c87bc3749f11819/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76e66c0564aec3c01775d99772130a95597697334e874e471c87bc3749f11819/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:08 compute-0 nova_compute[257802]: 2025-10-02 13:27:08.541 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:27:08 compute-0 nova_compute[257802]: 2025-10-02 13:27:08.542 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:27:08 compute-0 nova_compute[257802]: 2025-10-02 13:27:08.542 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:27:08 compute-0 podman[431961]: 2025-10-02 13:27:08.556504506 +0000 UTC m=+0.202452452 container init a75bbcd65dd606fde297e67268fb2a38509f178f5173bcb144279462e3751bda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_williamson, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:27:08 compute-0 podman[431961]: 2025-10-02 13:27:08.564018757 +0000 UTC m=+0.209966673 container start a75bbcd65dd606fde297e67268fb2a38509f178f5173bcb144279462e3751bda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Oct 02 13:27:08 compute-0 podman[431961]: 2025-10-02 13:27:08.575637576 +0000 UTC m=+0.221585522 container attach a75bbcd65dd606fde297e67268fb2a38509f178f5173bcb144279462e3751bda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:27:08 compute-0 nova_compute[257802]: 2025-10-02 13:27:08.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:08 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:08 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:27:08 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:08.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:27:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:09.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3885: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2390996037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:27:09 compute-0 ceph-mon[73607]: pgmap v3885: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:09 compute-0 xenodochial_williamson[431979]: {
Oct 02 13:27:09 compute-0 xenodochial_williamson[431979]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 13:27:09 compute-0 xenodochial_williamson[431979]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:27:09 compute-0 xenodochial_williamson[431979]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:27:09 compute-0 xenodochial_williamson[431979]:         "osd_id": 1,
Oct 02 13:27:09 compute-0 xenodochial_williamson[431979]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:27:09 compute-0 xenodochial_williamson[431979]:         "type": "bluestore"
Oct 02 13:27:09 compute-0 xenodochial_williamson[431979]:     }
Oct 02 13:27:09 compute-0 xenodochial_williamson[431979]: }
Oct 02 13:27:09 compute-0 systemd[1]: libpod-a75bbcd65dd606fde297e67268fb2a38509f178f5173bcb144279462e3751bda.scope: Deactivated successfully.
Oct 02 13:27:09 compute-0 conmon[431979]: conmon a75bbcd65dd606fde297 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a75bbcd65dd606fde297e67268fb2a38509f178f5173bcb144279462e3751bda.scope/container/memory.events
Oct 02 13:27:09 compute-0 podman[431961]: 2025-10-02 13:27:09.402241663 +0000 UTC m=+1.048189589 container died a75bbcd65dd606fde297e67268fb2a38509f178f5173bcb144279462e3751bda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 13:27:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-76e66c0564aec3c01775d99772130a95597697334e874e471c87bc3749f11819-merged.mount: Deactivated successfully.
Oct 02 13:27:09 compute-0 podman[431961]: 2025-10-02 13:27:09.459738497 +0000 UTC m=+1.105686413 container remove a75bbcd65dd606fde297e67268fb2a38509f178f5173bcb144279462e3751bda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_williamson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:27:09 compute-0 systemd[1]: libpod-conmon-a75bbcd65dd606fde297e67268fb2a38509f178f5173bcb144279462e3751bda.scope: Deactivated successfully.
Oct 02 13:27:09 compute-0 sudo[431782]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:27:09 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:27:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:27:09 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:27:09 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev ff6d5072-c5c0-4b58-b1ce-0fcdc600faf3 does not exist
Oct 02 13:27:09 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev ea0a8280-1d05-4e86-aac1-528eb989cacf does not exist
Oct 02 13:27:09 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev eeb180d7-759e-481d-9e24-d15fa11ef748 does not exist
Oct 02 13:27:09 compute-0 sudo[432016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:09 compute-0 sudo[432016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:09 compute-0 sudo[432016]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:09 compute-0 sudo[432041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:27:09 compute-0 sudo[432041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:09 compute-0 sudo[432041]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:27:10 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:27:10 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:10 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:10 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:10.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:11.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3886: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:11 compute-0 nova_compute[257802]: 2025-10-02 13:27:11.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:11 compute-0 ceph-mon[73607]: pgmap v3886: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:27:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:27:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:27:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:27:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:27:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:27:12 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:12 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:12 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:12.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:27:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:13.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:27:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3887: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:13 compute-0 nova_compute[257802]: 2025-10-02 13:27:13.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:14 compute-0 ceph-mon[73607]: pgmap v3887: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:14 compute-0 sudo[432070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:14 compute-0 sudo[432070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:14 compute-0 sudo[432070]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:14 compute-0 sudo[432095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:14 compute-0 sudo[432095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:14 compute-0 sudo[432095]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:14 compute-0 sshd-session[432068]: Invalid user admin from 193.32.162.151 port 36454
Oct 02 13:27:14 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:14 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:14 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:14.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:15 compute-0 sshd-session[432068]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 13:27:15 compute-0 sshd-session[432068]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.32.162.151
Oct 02 13:27:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:15.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3888: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:15 compute-0 ceph-mon[73607]: pgmap v3888: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:16 compute-0 nova_compute[257802]: 2025-10-02 13:27:16.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:16 compute-0 sshd-session[432068]: Failed password for invalid user admin from 193.32.162.151 port 36454 ssh2
Oct 02 13:27:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:16.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:17.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3889: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:17 compute-0 ceph-mon[73607]: pgmap v3889: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:17 compute-0 sshd-session[432068]: Connection closed by invalid user admin 193.32.162.151 port 36454 [preauth]
Oct 02 13:27:18 compute-0 nova_compute[257802]: 2025-10-02 13:27:18.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:19.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:19.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3890: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:19 compute-0 ceph-mon[73607]: pgmap v3890: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:20 compute-0 podman[432124]: 2025-10-02 13:27:20.937848163 +0000 UTC m=+0.080343274 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 13:27:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:21.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:21.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3891: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:21 compute-0 nova_compute[257802]: 2025-10-02 13:27:21.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:22 compute-0 ceph-mon[73607]: pgmap v3891: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:23.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:23.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3892: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:23 compute-0 ceph-mon[73607]: pgmap v3892: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:23 compute-0 nova_compute[257802]: 2025-10-02 13:27:23.542 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:23 compute-0 nova_compute[257802]: 2025-10-02 13:27:23.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:25.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:27:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:25.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:27:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3893: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:25 compute-0 ceph-mon[73607]: pgmap v3893: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:26 compute-0 nova_compute[257802]: 2025-10-02 13:27:26.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:27.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:27:27.015 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:27:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:27:27.015 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:27:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:27:27.015 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:27:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:27:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:27.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:27:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3894: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:28 compute-0 ceph-mon[73607]: pgmap v3894: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:28 compute-0 nova_compute[257802]: 2025-10-02 13:27:28.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:29.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:27:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:29.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:27:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3895: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:29 compute-0 ceph-mon[73607]: pgmap v3895: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:31.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:31.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3896: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:31 compute-0 nova_compute[257802]: 2025-10-02 13:27:31.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:32 compute-0 ceph-mon[73607]: pgmap v3896: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:33.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:33.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3897: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:33 compute-0 ceph-mon[73607]: pgmap v3897: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:33 compute-0 nova_compute[257802]: 2025-10-02 13:27:33.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:34 compute-0 sudo[432158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:34 compute-0 sudo[432158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:34 compute-0 sudo[432158]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:34 compute-0 sudo[432183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:34 compute-0 sudo[432183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:34 compute-0 sudo[432183]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:35.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:35.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3898: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:35 compute-0 ceph-mon[73607]: pgmap v3898: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:36 compute-0 nova_compute[257802]: 2025-10-02 13:27:36.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:27:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:37.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:27:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:37.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3899: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:38 compute-0 ceph-mon[73607]: pgmap v3899: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:38 compute-0 podman[432210]: 2025-10-02 13:27:38.907717891 +0000 UTC m=+0.050023635 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Oct 02 13:27:38 compute-0 podman[432211]: 2025-10-02 13:27:38.919199757 +0000 UTC m=+0.058313644 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 13:27:38 compute-0 nova_compute[257802]: 2025-10-02 13:27:38.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:38 compute-0 podman[432212]: 2025-10-02 13:27:38.927096847 +0000 UTC m=+0.062597207 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:27:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:39.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:39.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3900: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:39 compute-0 ceph-mon[73607]: pgmap v3900: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:41.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:27:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:41.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:27:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3901: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:41 compute-0 nova_compute[257802]: 2025-10-02 13:27:41.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:42 compute-0 ceph-mon[73607]: pgmap v3901: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:27:42
Oct 02 13:27:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:27:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:27:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['vms', '.mgr', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'images', 'default.rgw.meta', 'volumes']
Oct 02 13:27:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:27:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:27:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:27:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:27:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:27:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:27:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:27:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:43.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:27:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:43.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:27:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3902: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:43 compute-0 ceph-mon[73607]: pgmap v3902: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:27:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:27:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:27:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:27:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:27:43 compute-0 nova_compute[257802]: 2025-10-02 13:27:43.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:44 compute-0 nova_compute[257802]: 2025-10-02 13:27:44.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:44 compute-0 nova_compute[257802]: 2025-10-02 13:27:44.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:44 compute-0 nova_compute[257802]: 2025-10-02 13:27:44.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:27:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:27:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:27:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:27:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:27:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:27:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:45.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:45.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3903: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:46 compute-0 ceph-mon[73607]: pgmap v3903: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:46 compute-0 nova_compute[257802]: 2025-10-02 13:27:46.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:47.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:47.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3904: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:47 compute-0 ceph-mon[73607]: pgmap v3904: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:48 compute-0 nova_compute[257802]: 2025-10-02 13:27:48.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:27:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:49.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:27:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:49.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3905: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:49 compute-0 ceph-mon[73607]: pgmap v3905: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:50 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2130006771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:27:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:51.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:27:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:51.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:27:51 compute-0 nova_compute[257802]: 2025-10-02 13:27:51.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:51 compute-0 nova_compute[257802]: 2025-10-02 13:27:51.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3906: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1533256654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:27:51 compute-0 ceph-mon[73607]: pgmap v3906: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:51 compute-0 nova_compute[257802]: 2025-10-02 13:27:51.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:51 compute-0 podman[432267]: 2025-10-02 13:27:51.944030976 +0000 UTC m=+0.084014752 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 02 13:27:52 compute-0 nova_compute[257802]: 2025-10-02 13:27:52.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:53.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:27:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:53.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:27:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3907: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:53 compute-0 nova_compute[257802]: 2025-10-02 13:27:53.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:54 compute-0 ceph-mon[73607]: pgmap v3907: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:54 compute-0 sudo[432296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:54 compute-0 sudo[432296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:54 compute-0 sudo[432296]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:55.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:55 compute-0 sudo[432321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:55 compute-0 sudo[432321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:55 compute-0 sudo[432321]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:55.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:55 compute-0 nova_compute[257802]: 2025-10-02 13:27:55.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:27:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1267623879' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:27:55 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:27:55 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1267623879' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:27:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:27:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:27:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:27:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:27:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:27:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:27:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:27:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:27:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:27:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:27:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:27:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:27:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:27:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:27:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:27:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:27:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:27:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:27:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:27:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:27:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:27:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:27:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:27:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3908: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1267623879' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:27:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1267623879' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:27:55 compute-0 ceph-mon[73607]: pgmap v3908: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:56 compute-0 nova_compute[257802]: 2025-10-02 13:27:56.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:27:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:57.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:27:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:57.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3909: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:58 compute-0 ceph-mon[73607]: pgmap v3909: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:58 compute-0 nova_compute[257802]: 2025-10-02 13:27:58.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:59.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:27:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:27:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:59.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:27:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3910: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:59 compute-0 ceph-mon[73607]: pgmap v3910: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:01.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:01.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3911: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:01 compute-0 nova_compute[257802]: 2025-10-02 13:28:01.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:02 compute-0 ceph-mon[73607]: pgmap v3911: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:28:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:03.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:28:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:28:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:03.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:28:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3912: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1637292816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:28:03 compute-0 ceph-mon[73607]: pgmap v3912: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:03 compute-0 nova_compute[257802]: 2025-10-02 13:28:03.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:04 compute-0 nova_compute[257802]: 2025-10-02 13:28:04.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:28:04 compute-0 nova_compute[257802]: 2025-10-02 13:28:04.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:28:04 compute-0 nova_compute[257802]: 2025-10-02 13:28:04.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:28:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/990183815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:28:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:05.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:05.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3913: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:05 compute-0 ceph-mon[73607]: pgmap v3913: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:06 compute-0 nova_compute[257802]: 2025-10-02 13:28:06.379 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:28:06 compute-0 nova_compute[257802]: 2025-10-02 13:28:06.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:07.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:07 compute-0 nova_compute[257802]: 2025-10-02 13:28:07.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:28:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:28:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:07.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:28:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3914: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:07 compute-0 nova_compute[257802]: 2025-10-02 13:28:07.175 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:28:07 compute-0 nova_compute[257802]: 2025-10-02 13:28:07.176 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:28:07 compute-0 nova_compute[257802]: 2025-10-02 13:28:07.176 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:28:07 compute-0 nova_compute[257802]: 2025-10-02 13:28:07.176 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:28:07 compute-0 nova_compute[257802]: 2025-10-02 13:28:07.177 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:28:07 compute-0 ceph-mon[73607]: pgmap v3914: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:28:07 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3334608238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:28:07 compute-0 nova_compute[257802]: 2025-10-02 13:28:07.657 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:28:07 compute-0 nova_compute[257802]: 2025-10-02 13:28:07.790 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:28:07 compute-0 nova_compute[257802]: 2025-10-02 13:28:07.791 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4152MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:28:07 compute-0 nova_compute[257802]: 2025-10-02 13:28:07.792 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:28:07 compute-0 nova_compute[257802]: 2025-10-02 13:28:07.792 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:28:08 compute-0 nova_compute[257802]: 2025-10-02 13:28:08.189 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:28:08 compute-0 nova_compute[257802]: 2025-10-02 13:28:08.189 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:28:08 compute-0 nova_compute[257802]: 2025-10-02 13:28:08.207 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:28:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3334608238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:28:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:28:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1229684510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:28:08 compute-0 nova_compute[257802]: 2025-10-02 13:28:08.636 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:28:08 compute-0 nova_compute[257802]: 2025-10-02 13:28:08.643 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:28:08 compute-0 nova_compute[257802]: 2025-10-02 13:28:08.727 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:28:08 compute-0 nova_compute[257802]: 2025-10-02 13:28:08.729 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:28:08 compute-0 nova_compute[257802]: 2025-10-02 13:28:08.729 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:28:08 compute-0 nova_compute[257802]: 2025-10-02 13:28:08.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:09.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:09.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3915: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1229684510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:28:09 compute-0 ceph-mon[73607]: pgmap v3915: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:09 compute-0 podman[432397]: 2025-10-02 13:28:09.916910276 +0000 UTC m=+0.056428659 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 02 13:28:09 compute-0 podman[432399]: 2025-10-02 13:28:09.938219268 +0000 UTC m=+0.071715846 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:28:09 compute-0 podman[432398]: 2025-10-02 13:28:09.938853984 +0000 UTC m=+0.074813252 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 13:28:09 compute-0 sudo[432449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:09 compute-0 sudo[432449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:09 compute-0 sudo[432449]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:10 compute-0 sudo[432474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:28:10 compute-0 sudo[432474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:10 compute-0 sudo[432474]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:10 compute-0 sudo[432499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:10 compute-0 sudo[432499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:10 compute-0 sudo[432499]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:10 compute-0 sudo[432524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 13:28:10 compute-0 sudo[432524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 13:28:10 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:28:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 13:28:10 compute-0 sudo[432524]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:10 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:28:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:28:10 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:28:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:28:10 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:28:10 compute-0 sudo[432568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:10 compute-0 sudo[432568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:10 compute-0 sudo[432568]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:10 compute-0 sudo[432593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:28:10 compute-0 sudo[432593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:10 compute-0 sudo[432593]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:10 compute-0 sudo[432618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:10 compute-0 sudo[432618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:10 compute-0 sudo[432618]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:10 compute-0 sudo[432643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:28:10 compute-0 sudo[432643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:11.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:11 compute-0 sudo[432643]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:28:11 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:28:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:28:11 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:28:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:28:11 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:28:11 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 40aac0ad-fb81-4007-868e-dbbc4bd69625 does not exist
Oct 02 13:28:11 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev eaef1cc0-47f0-4f4c-b685-8179d36cea91 does not exist
Oct 02 13:28:11 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 3a1c3aa8-bc0b-48d7-8880-6be28a014146 does not exist
Oct 02 13:28:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:28:11 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:28:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:11.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:28:11 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:28:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:28:11 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:28:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3916: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:11 compute-0 sudo[432702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:11 compute-0 sudo[432702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:11 compute-0 sudo[432702]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:11 compute-0 sudo[432727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:28:11 compute-0 sudo[432727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:11 compute-0 sudo[432727]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:11 compute-0 sudo[432752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:11 compute-0 sudo[432752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:11 compute-0 sudo[432752]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:28:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:28:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:28:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:28:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:28:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:28:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:28:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:28:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:28:11 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:28:11 compute-0 ceph-mon[73607]: pgmap v3916: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:11 compute-0 sudo[432777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:28:11 compute-0 sudo[432777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:11 compute-0 nova_compute[257802]: 2025-10-02 13:28:11.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:11 compute-0 podman[432842]: 2025-10-02 13:28:11.724097664 +0000 UTC m=+0.036717694 container create 2a6c40e9587722390f16cc870c75901c3fdf1a0b34b9ddab15b97695be61de0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:28:11 compute-0 systemd[1]: Started libpod-conmon-2a6c40e9587722390f16cc870c75901c3fdf1a0b34b9ddab15b97695be61de0d.scope.
Oct 02 13:28:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:28:11 compute-0 podman[432842]: 2025-10-02 13:28:11.708202041 +0000 UTC m=+0.020822101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:28:11 compute-0 podman[432842]: 2025-10-02 13:28:11.808110505 +0000 UTC m=+0.120730535 container init 2a6c40e9587722390f16cc870c75901c3fdf1a0b34b9ddab15b97695be61de0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_panini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 13:28:11 compute-0 podman[432842]: 2025-10-02 13:28:11.815537214 +0000 UTC m=+0.128157244 container start 2a6c40e9587722390f16cc870c75901c3fdf1a0b34b9ddab15b97695be61de0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_panini, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 13:28:11 compute-0 podman[432842]: 2025-10-02 13:28:11.819328035 +0000 UTC m=+0.131948065 container attach 2a6c40e9587722390f16cc870c75901c3fdf1a0b34b9ddab15b97695be61de0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 13:28:11 compute-0 practical_panini[432859]: 167 167
Oct 02 13:28:11 compute-0 systemd[1]: libpod-2a6c40e9587722390f16cc870c75901c3fdf1a0b34b9ddab15b97695be61de0d.scope: Deactivated successfully.
Oct 02 13:28:11 compute-0 conmon[432859]: conmon 2a6c40e9587722390f16 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2a6c40e9587722390f16cc870c75901c3fdf1a0b34b9ddab15b97695be61de0d.scope/container/memory.events
Oct 02 13:28:11 compute-0 podman[432842]: 2025-10-02 13:28:11.821905147 +0000 UTC m=+0.134525177 container died 2a6c40e9587722390f16cc870c75901c3fdf1a0b34b9ddab15b97695be61de0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 13:28:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d0d9893e4de9a028c7ab7f8900e3719b095bcd3a83fd40dfaea895be06579c5-merged.mount: Deactivated successfully.
Oct 02 13:28:11 compute-0 podman[432842]: 2025-10-02 13:28:11.866996982 +0000 UTC m=+0.179617022 container remove 2a6c40e9587722390f16cc870c75901c3fdf1a0b34b9ddab15b97695be61de0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Oct 02 13:28:11 compute-0 systemd[1]: libpod-conmon-2a6c40e9587722390f16cc870c75901c3fdf1a0b34b9ddab15b97695be61de0d.scope: Deactivated successfully.
Oct 02 13:28:12 compute-0 podman[432882]: 2025-10-02 13:28:12.026252144 +0000 UTC m=+0.042042483 container create 2e9b3b2ab9b7ef9db81498bfec329c65054700c349643e5f6b9d2528e98932ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_euclid, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:28:12 compute-0 systemd[1]: Started libpod-conmon-2e9b3b2ab9b7ef9db81498bfec329c65054700c349643e5f6b9d2528e98932ae.scope.
Oct 02 13:28:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2063c8fc4da1556632f2c96f6e988c14e7baa982edb4463ada0c3d5ebf56f048/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2063c8fc4da1556632f2c96f6e988c14e7baa982edb4463ada0c3d5ebf56f048/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2063c8fc4da1556632f2c96f6e988c14e7baa982edb4463ada0c3d5ebf56f048/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2063c8fc4da1556632f2c96f6e988c14e7baa982edb4463ada0c3d5ebf56f048/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2063c8fc4da1556632f2c96f6e988c14e7baa982edb4463ada0c3d5ebf56f048/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:28:12 compute-0 podman[432882]: 2025-10-02 13:28:12.007162034 +0000 UTC m=+0.022952383 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:28:12 compute-0 podman[432882]: 2025-10-02 13:28:12.112780104 +0000 UTC m=+0.128570453 container init 2e9b3b2ab9b7ef9db81498bfec329c65054700c349643e5f6b9d2528e98932ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_euclid, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:28:12 compute-0 podman[432882]: 2025-10-02 13:28:12.120655054 +0000 UTC m=+0.136445383 container start 2e9b3b2ab9b7ef9db81498bfec329c65054700c349643e5f6b9d2528e98932ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_euclid, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:28:12 compute-0 podman[432882]: 2025-10-02 13:28:12.123593965 +0000 UTC m=+0.139384294 container attach 2e9b3b2ab9b7ef9db81498bfec329c65054700c349643e5f6b9d2528e98932ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_euclid, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 13:28:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:28:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:28:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:28:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:28:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:28:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:28:12 compute-0 bold_euclid[432899]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:28:12 compute-0 bold_euclid[432899]: --> relative data size: 1.0
Oct 02 13:28:12 compute-0 bold_euclid[432899]: --> All data devices are unavailable
Oct 02 13:28:12 compute-0 systemd[1]: libpod-2e9b3b2ab9b7ef9db81498bfec329c65054700c349643e5f6b9d2528e98932ae.scope: Deactivated successfully.
Oct 02 13:28:12 compute-0 podman[432882]: 2025-10-02 13:28:12.928032779 +0000 UTC m=+0.943823108 container died 2e9b3b2ab9b7ef9db81498bfec329c65054700c349643e5f6b9d2528e98932ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_euclid, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:28:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-2063c8fc4da1556632f2c96f6e988c14e7baa982edb4463ada0c3d5ebf56f048-merged.mount: Deactivated successfully.
Oct 02 13:28:12 compute-0 podman[432882]: 2025-10-02 13:28:12.993050353 +0000 UTC m=+1.008840692 container remove 2e9b3b2ab9b7ef9db81498bfec329c65054700c349643e5f6b9d2528e98932ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_euclid, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 13:28:13 compute-0 systemd[1]: libpod-conmon-2e9b3b2ab9b7ef9db81498bfec329c65054700c349643e5f6b9d2528e98932ae.scope: Deactivated successfully.
Oct 02 13:28:13 compute-0 sudo[432777]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:13.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:13 compute-0 sudo[432927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:13 compute-0 sudo[432927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:13 compute-0 sudo[432927]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:13.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3917: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:13 compute-0 sudo[432952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:28:13 compute-0 sudo[432952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:13 compute-0 sudo[432952]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:13 compute-0 sudo[432977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:13 compute-0 sudo[432977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:13 compute-0 sudo[432977]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:13 compute-0 sudo[433002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 13:28:13 compute-0 sudo[433002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:13 compute-0 ceph-mon[73607]: pgmap v3917: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:13 compute-0 podman[433067]: 2025-10-02 13:28:13.63057023 +0000 UTC m=+0.040156836 container create 5a7f32152857e8fbfba8edba1142083b0c592c15f365ad52a802bbdf11a318a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 13:28:13 compute-0 systemd[1]: Started libpod-conmon-5a7f32152857e8fbfba8edba1142083b0c592c15f365ad52a802bbdf11a318a5.scope.
Oct 02 13:28:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:28:13 compute-0 podman[433067]: 2025-10-02 13:28:13.614972615 +0000 UTC m=+0.024559241 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:28:13 compute-0 podman[433067]: 2025-10-02 13:28:13.71781935 +0000 UTC m=+0.127405976 container init 5a7f32152857e8fbfba8edba1142083b0c592c15f365ad52a802bbdf11a318a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ride, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:28:13 compute-0 podman[433067]: 2025-10-02 13:28:13.728354093 +0000 UTC m=+0.137940689 container start 5a7f32152857e8fbfba8edba1142083b0c592c15f365ad52a802bbdf11a318a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ride, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Oct 02 13:28:13 compute-0 podman[433067]: 2025-10-02 13:28:13.731561261 +0000 UTC m=+0.141147887 container attach 5a7f32152857e8fbfba8edba1142083b0c592c15f365ad52a802bbdf11a318a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ride, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 13:28:13 compute-0 jovial_ride[433084]: 167 167
Oct 02 13:28:13 compute-0 systemd[1]: libpod-5a7f32152857e8fbfba8edba1142083b0c592c15f365ad52a802bbdf11a318a5.scope: Deactivated successfully.
Oct 02 13:28:13 compute-0 podman[433067]: 2025-10-02 13:28:13.736859258 +0000 UTC m=+0.146445874 container died 5a7f32152857e8fbfba8edba1142083b0c592c15f365ad52a802bbdf11a318a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:28:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-61e27a69a025a55a252aa781c054d12bc7e79185f9e532e162a684cca7d4d006-merged.mount: Deactivated successfully.
Oct 02 13:28:13 compute-0 podman[433067]: 2025-10-02 13:28:13.778006788 +0000 UTC m=+0.187593384 container remove 5a7f32152857e8fbfba8edba1142083b0c592c15f365ad52a802bbdf11a318a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ride, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Oct 02 13:28:13 compute-0 systemd[1]: libpod-conmon-5a7f32152857e8fbfba8edba1142083b0c592c15f365ad52a802bbdf11a318a5.scope: Deactivated successfully.
Oct 02 13:28:13 compute-0 nova_compute[257802]: 2025-10-02 13:28:13.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:13 compute-0 podman[433107]: 2025-10-02 13:28:13.982374695 +0000 UTC m=+0.082270501 container create 0990b547fad66e8b57ea66c65c1d408f30421dc95ab1c989a065553e2a7cf266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kirch, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 13:28:14 compute-0 systemd[1]: Started libpod-conmon-0990b547fad66e8b57ea66c65c1d408f30421dc95ab1c989a065553e2a7cf266.scope.
Oct 02 13:28:14 compute-0 podman[433107]: 2025-10-02 13:28:13.931177143 +0000 UTC m=+0.031072989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:28:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:28:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e24601fe79c603edbc512319471818a781cf9743da898c258dc474e67563ed1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:28:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e24601fe79c603edbc512319471818a781cf9743da898c258dc474e67563ed1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:28:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e24601fe79c603edbc512319471818a781cf9743da898c258dc474e67563ed1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:28:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e24601fe79c603edbc512319471818a781cf9743da898c258dc474e67563ed1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:28:14 compute-0 podman[433107]: 2025-10-02 13:28:14.077683588 +0000 UTC m=+0.177579424 container init 0990b547fad66e8b57ea66c65c1d408f30421dc95ab1c989a065553e2a7cf266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 13:28:14 compute-0 podman[433107]: 2025-10-02 13:28:14.084075441 +0000 UTC m=+0.183971257 container start 0990b547fad66e8b57ea66c65c1d408f30421dc95ab1c989a065553e2a7cf266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kirch, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Oct 02 13:28:14 compute-0 podman[433107]: 2025-10-02 13:28:14.087453073 +0000 UTC m=+0.187348909 container attach 0990b547fad66e8b57ea66c65c1d408f30421dc95ab1c989a065553e2a7cf266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:28:14 compute-0 pensive_kirch[433124]: {
Oct 02 13:28:14 compute-0 pensive_kirch[433124]:     "1": [
Oct 02 13:28:14 compute-0 pensive_kirch[433124]:         {
Oct 02 13:28:14 compute-0 pensive_kirch[433124]:             "devices": [
Oct 02 13:28:14 compute-0 pensive_kirch[433124]:                 "/dev/loop3"
Oct 02 13:28:14 compute-0 pensive_kirch[433124]:             ],
Oct 02 13:28:14 compute-0 pensive_kirch[433124]:             "lv_name": "ceph_lv0",
Oct 02 13:28:14 compute-0 pensive_kirch[433124]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:28:14 compute-0 pensive_kirch[433124]:             "lv_size": "7511998464",
Oct 02 13:28:14 compute-0 pensive_kirch[433124]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:28:14 compute-0 pensive_kirch[433124]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:28:14 compute-0 pensive_kirch[433124]:             "name": "ceph_lv0",
Oct 02 13:28:14 compute-0 pensive_kirch[433124]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:28:14 compute-0 pensive_kirch[433124]:             "tags": {
Oct 02 13:28:14 compute-0 pensive_kirch[433124]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:28:14 compute-0 pensive_kirch[433124]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:28:14 compute-0 pensive_kirch[433124]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:28:14 compute-0 pensive_kirch[433124]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:28:14 compute-0 pensive_kirch[433124]:                 "ceph.cluster_name": "ceph",
Oct 02 13:28:14 compute-0 pensive_kirch[433124]:                 "ceph.crush_device_class": "",
Oct 02 13:28:14 compute-0 pensive_kirch[433124]:                 "ceph.encrypted": "0",
Oct 02 13:28:14 compute-0 pensive_kirch[433124]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:28:14 compute-0 pensive_kirch[433124]:                 "ceph.osd_id": "1",
Oct 02 13:28:14 compute-0 pensive_kirch[433124]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:28:14 compute-0 pensive_kirch[433124]:                 "ceph.type": "block",
Oct 02 13:28:14 compute-0 pensive_kirch[433124]:                 "ceph.vdo": "0"
Oct 02 13:28:14 compute-0 pensive_kirch[433124]:             },
Oct 02 13:28:14 compute-0 pensive_kirch[433124]:             "type": "block",
Oct 02 13:28:14 compute-0 pensive_kirch[433124]:             "vg_name": "ceph_vg0"
Oct 02 13:28:14 compute-0 pensive_kirch[433124]:         }
Oct 02 13:28:14 compute-0 pensive_kirch[433124]:     ]
Oct 02 13:28:14 compute-0 pensive_kirch[433124]: }
Oct 02 13:28:14 compute-0 systemd[1]: libpod-0990b547fad66e8b57ea66c65c1d408f30421dc95ab1c989a065553e2a7cf266.scope: Deactivated successfully.
Oct 02 13:28:14 compute-0 podman[433107]: 2025-10-02 13:28:14.884429797 +0000 UTC m=+0.984325613 container died 0990b547fad66e8b57ea66c65c1d408f30421dc95ab1c989a065553e2a7cf266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 13:28:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-e24601fe79c603edbc512319471818a781cf9743da898c258dc474e67563ed1e-merged.mount: Deactivated successfully.
Oct 02 13:28:14 compute-0 podman[433107]: 2025-10-02 13:28:14.943213621 +0000 UTC m=+1.043109427 container remove 0990b547fad66e8b57ea66c65c1d408f30421dc95ab1c989a065553e2a7cf266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kirch, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 13:28:14 compute-0 systemd[1]: libpod-conmon-0990b547fad66e8b57ea66c65c1d408f30421dc95ab1c989a065553e2a7cf266.scope: Deactivated successfully.
Oct 02 13:28:14 compute-0 sudo[433002]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:15 compute-0 sudo[433148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:15 compute-0 sudo[433148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:15 compute-0 sudo[433148]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:15.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:15 compute-0 sudo[433173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:28:15 compute-0 sudo[433173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:15 compute-0 sudo[433173]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:28:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:15.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:28:15 compute-0 sudo[433198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:15 compute-0 sudo[433198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:15 compute-0 sudo[433198]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:15 compute-0 sudo[433201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:15 compute-0 sudo[433201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:15 compute-0 sudo[433201]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3918: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:15 compute-0 sudo[433248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:15 compute-0 sudo[433248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:15 compute-0 sudo[433251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 13:28:15 compute-0 sudo[433248]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:15 compute-0 sudo[433251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:15 compute-0 ceph-mon[73607]: pgmap v3918: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:15 compute-0 podman[433337]: 2025-10-02 13:28:15.51841649 +0000 UTC m=+0.037068334 container create f272dcdccf7b7516ae132ceab27ac21637c136e4b9056e04eb1532b762b43f2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bohr, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:28:15 compute-0 systemd[1]: Started libpod-conmon-f272dcdccf7b7516ae132ceab27ac21637c136e4b9056e04eb1532b762b43f2f.scope.
Oct 02 13:28:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:28:15 compute-0 podman[433337]: 2025-10-02 13:28:15.579665473 +0000 UTC m=+0.098317337 container init f272dcdccf7b7516ae132ceab27ac21637c136e4b9056e04eb1532b762b43f2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bohr, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:28:15 compute-0 podman[433337]: 2025-10-02 13:28:15.586584729 +0000 UTC m=+0.105236573 container start f272dcdccf7b7516ae132ceab27ac21637c136e4b9056e04eb1532b762b43f2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 13:28:15 compute-0 agitated_bohr[433354]: 167 167
Oct 02 13:28:15 compute-0 systemd[1]: libpod-f272dcdccf7b7516ae132ceab27ac21637c136e4b9056e04eb1532b762b43f2f.scope: Deactivated successfully.
Oct 02 13:28:15 compute-0 podman[433337]: 2025-10-02 13:28:15.593126187 +0000 UTC m=+0.111778061 container attach f272dcdccf7b7516ae132ceab27ac21637c136e4b9056e04eb1532b762b43f2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bohr, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:28:15 compute-0 podman[433337]: 2025-10-02 13:28:15.59368333 +0000 UTC m=+0.112335194 container died f272dcdccf7b7516ae132ceab27ac21637c136e4b9056e04eb1532b762b43f2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 13:28:15 compute-0 podman[433337]: 2025-10-02 13:28:15.504149867 +0000 UTC m=+0.022801731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:28:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e620d91c063b4861c795ebae35484547714c601adc6341ed60f054518fcec6c-merged.mount: Deactivated successfully.
Oct 02 13:28:15 compute-0 podman[433337]: 2025-10-02 13:28:15.62819476 +0000 UTC m=+0.146846604 container remove f272dcdccf7b7516ae132ceab27ac21637c136e4b9056e04eb1532b762b43f2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:28:15 compute-0 systemd[1]: libpod-conmon-f272dcdccf7b7516ae132ceab27ac21637c136e4b9056e04eb1532b762b43f2f.scope: Deactivated successfully.
Oct 02 13:28:15 compute-0 podman[433378]: 2025-10-02 13:28:15.782510283 +0000 UTC m=+0.038017986 container create f966a831f3d3fbc3eb4fd3e9fa78ad2d2f1160859b6a7df8ea17086d5bdf7e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:28:15 compute-0 systemd[1]: Started libpod-conmon-f966a831f3d3fbc3eb4fd3e9fa78ad2d2f1160859b6a7df8ea17086d5bdf7e56.scope.
Oct 02 13:28:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:28:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cbcdf7e96464be8b8e572206a4c7d4e482e931d466eb17e709f73701cfe7f16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:28:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cbcdf7e96464be8b8e572206a4c7d4e482e931d466eb17e709f73701cfe7f16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:28:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cbcdf7e96464be8b8e572206a4c7d4e482e931d466eb17e709f73701cfe7f16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:28:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cbcdf7e96464be8b8e572206a4c7d4e482e931d466eb17e709f73701cfe7f16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:28:15 compute-0 podman[433378]: 2025-10-02 13:28:15.765965196 +0000 UTC m=+0.021472919 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:28:15 compute-0 podman[433378]: 2025-10-02 13:28:15.867237622 +0000 UTC m=+0.122745325 container init f966a831f3d3fbc3eb4fd3e9fa78ad2d2f1160859b6a7df8ea17086d5bdf7e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_poincare, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:28:15 compute-0 podman[433378]: 2025-10-02 13:28:15.873514152 +0000 UTC m=+0.129021855 container start f966a831f3d3fbc3eb4fd3e9fa78ad2d2f1160859b6a7df8ea17086d5bdf7e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:28:15 compute-0 podman[433378]: 2025-10-02 13:28:15.876572677 +0000 UTC m=+0.132080400 container attach f966a831f3d3fbc3eb4fd3e9fa78ad2d2f1160859b6a7df8ea17086d5bdf7e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 13:28:16 compute-0 nova_compute[257802]: 2025-10-02 13:28:16.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:16 compute-0 nice_poincare[433395]: {
Oct 02 13:28:16 compute-0 nice_poincare[433395]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 13:28:16 compute-0 nice_poincare[433395]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:28:16 compute-0 nice_poincare[433395]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:28:16 compute-0 nice_poincare[433395]:         "osd_id": 1,
Oct 02 13:28:16 compute-0 nice_poincare[433395]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:28:16 compute-0 nice_poincare[433395]:         "type": "bluestore"
Oct 02 13:28:16 compute-0 nice_poincare[433395]:     }
Oct 02 13:28:16 compute-0 nice_poincare[433395]: }
Oct 02 13:28:16 compute-0 systemd[1]: libpod-f966a831f3d3fbc3eb4fd3e9fa78ad2d2f1160859b6a7df8ea17086d5bdf7e56.scope: Deactivated successfully.
Oct 02 13:28:16 compute-0 podman[433378]: 2025-10-02 13:28:16.676992513 +0000 UTC m=+0.932500226 container died f966a831f3d3fbc3eb4fd3e9fa78ad2d2f1160859b6a7df8ea17086d5bdf7e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_poincare, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 13:28:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cbcdf7e96464be8b8e572206a4c7d4e482e931d466eb17e709f73701cfe7f16-merged.mount: Deactivated successfully.
Oct 02 13:28:16 compute-0 podman[433378]: 2025-10-02 13:28:16.731641478 +0000 UTC m=+0.987149181 container remove f966a831f3d3fbc3eb4fd3e9fa78ad2d2f1160859b6a7df8ea17086d5bdf7e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 13:28:16 compute-0 systemd[1]: libpod-conmon-f966a831f3d3fbc3eb4fd3e9fa78ad2d2f1160859b6a7df8ea17086d5bdf7e56.scope: Deactivated successfully.
Oct 02 13:28:16 compute-0 sudo[433251]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:28:16 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:28:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:28:16 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:28:16 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 5bba902b-e43e-48c6-85ff-436151da6fb5 does not exist
Oct 02 13:28:16 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 71c33e65-2206-4529-9dde-22608e465e02 does not exist
Oct 02 13:28:16 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 59b5defb-3473-4851-b3d4-3a6859ced2bb does not exist
Oct 02 13:28:16 compute-0 sudo[433431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:16 compute-0 sudo[433431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:16 compute-0 sudo[433431]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:16 compute-0 sudo[433456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:28:16 compute-0 sudo[433456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:16 compute-0 sudo[433456]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:17.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:17.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3919: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:17 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:28:17 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:28:17 compute-0 ceph-mon[73607]: pgmap v3919: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:18 compute-0 nova_compute[257802]: 2025-10-02 13:28:18.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:28:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:19.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:28:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:19.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3920: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:19 compute-0 ceph-mon[73607]: pgmap v3920: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:28:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:21.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:28:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:21.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3921: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:21 compute-0 ceph-mon[73607]: pgmap v3921: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:21 compute-0 nova_compute[257802]: 2025-10-02 13:28:21.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:22 compute-0 podman[433484]: 2025-10-02 13:28:22.937712487 +0000 UTC m=+0.078961952 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct 02 13:28:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:28:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:23.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:28:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:23.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3922: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:23 compute-0 ceph-mon[73607]: pgmap v3922: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:24 compute-0 nova_compute[257802]: 2025-10-02 13:28:24.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:25.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:25.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3923: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:25 compute-0 ceph-mon[73607]: pgmap v3923: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:25 compute-0 nova_compute[257802]: 2025-10-02 13:28:25.730 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:28:26 compute-0 nova_compute[257802]: 2025-10-02 13:28:26.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:28:27.015 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:28:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:28:27.016 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:28:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:28:27.016 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:28:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:27.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:27.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3924: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:27 compute-0 ceph-mon[73607]: pgmap v3924: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:29 compute-0 nova_compute[257802]: 2025-10-02 13:28:29.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:29.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:29.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3925: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:29 compute-0 ceph-mon[73607]: pgmap v3925: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:31.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:28:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:31.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:28:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3926: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:31 compute-0 ceph-mon[73607]: pgmap v3926: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:31 compute-0 nova_compute[257802]: 2025-10-02 13:28:31.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:33.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:33.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3927: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:33 compute-0 ceph-mon[73607]: pgmap v3927: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:34 compute-0 nova_compute[257802]: 2025-10-02 13:28:34.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:35.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:35.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3928: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:35 compute-0 sudo[433516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:35 compute-0 sudo[433516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:35 compute-0 sudo[433516]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:35 compute-0 sudo[433541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:35 compute-0 sudo[433541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:35 compute-0 sudo[433541]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:35 compute-0 ceph-mon[73607]: pgmap v3928: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:36 compute-0 nova_compute[257802]: 2025-10-02 13:28:36.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:37.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:37.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3929: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:37 compute-0 ceph-mon[73607]: pgmap v3929: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:39 compute-0 nova_compute[257802]: 2025-10-02 13:28:39.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:39.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:28:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:39.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:28:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3930: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:39 compute-0 ceph-mon[73607]: pgmap v3930: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:40 compute-0 podman[433571]: 2025-10-02 13:28:40.927530577 +0000 UTC m=+0.053879137 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:28:40 compute-0 podman[433570]: 2025-10-02 13:28:40.9413724 +0000 UTC m=+0.064821210 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 13:28:40 compute-0 podman[433569]: 2025-10-02 13:28:40.947661472 +0000 UTC m=+0.083266905 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:28:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:28:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:41.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:28:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:41.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3931: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:41 compute-0 ceph-mon[73607]: pgmap v3931: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:41 compute-0 nova_compute[257802]: 2025-10-02 13:28:41.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:28:42
Oct 02 13:28:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:28:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:28:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'volumes', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'vms', '.mgr', 'default.rgw.log']
Oct 02 13:28:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:28:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:28:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:28:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:28:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:28:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:28:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:28:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:28:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:43.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:28:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3932: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:43.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:43 compute-0 ceph-mon[73607]: pgmap v3932: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:28:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:28:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:28:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:28:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:28:44 compute-0 nova_compute[257802]: 2025-10-02 13:28:44.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:28:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:28:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:28:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:28:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:28:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:45.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:45 compute-0 nova_compute[257802]: 2025-10-02 13:28:45.101 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:28:45 compute-0 nova_compute[257802]: 2025-10-02 13:28:45.101 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:28:45 compute-0 nova_compute[257802]: 2025-10-02 13:28:45.101 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:28:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3933: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:28:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:45.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:28:45 compute-0 ceph-mon[73607]: pgmap v3933: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:46 compute-0 nova_compute[257802]: 2025-10-02 13:28:46.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:47.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3934: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:28:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:47.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:28:47 compute-0 ceph-mon[73607]: pgmap v3934: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:49 compute-0 nova_compute[257802]: 2025-10-02 13:28:49.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:49.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3935: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:49.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:49 compute-0 ceph-mon[73607]: pgmap v3935: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:51 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1215553004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:28:51 compute-0 nova_compute[257802]: 2025-10-02 13:28:51.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:28:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:28:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:51.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:28:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3936: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:28:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:51.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:28:51 compute-0 nova_compute[257802]: 2025-10-02 13:28:51.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:52 compute-0 ceph-mon[73607]: pgmap v3936: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:52 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1495290970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:28:52 compute-0 nova_compute[257802]: 2025-10-02 13:28:52.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:28:53 compute-0 nova_compute[257802]: 2025-10-02 13:28:53.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:28:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:53.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3937: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:28:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:53.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:28:53 compute-0 ceph-mon[73607]: pgmap v3937: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:53 compute-0 podman[433631]: 2025-10-02 13:28:53.960607605 +0000 UTC m=+0.101225577 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:28:54 compute-0 nova_compute[257802]: 2025-10-02 13:28:54.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:55 compute-0 nova_compute[257802]: 2025-10-02 13:28:55.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:28:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:28:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:55.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:28:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:28:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:28:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:28:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:28:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:28:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:28:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:28:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:28:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:28:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:28:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:28:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:28:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:28:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:28:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:28:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:28:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:28:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:28:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:28:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:28:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:28:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:28:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:28:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3938: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:28:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:55.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:28:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1013062954' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:28:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1013062954' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:28:55 compute-0 sudo[433658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:55 compute-0 sudo[433658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:55 compute-0 sudo[433658]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:55 compute-0 sudo[433683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:55 compute-0 sudo[433683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:55 compute-0 sudo[433683]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:56 compute-0 nova_compute[257802]: 2025-10-02 13:28:56.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:28:56 compute-0 ceph-mon[73607]: pgmap v3938: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:56 compute-0 nova_compute[257802]: 2025-10-02 13:28:56.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:57.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3939: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:28:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:57.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:28:58 compute-0 ceph-mon[73607]: pgmap v3939: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:59 compute-0 nova_compute[257802]: 2025-10-02 13:28:59.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:59.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3940: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:28:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:28:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:59.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:00 compute-0 ceph-mon[73607]: pgmap v3940: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:01.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3941: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:01.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:01 compute-0 ceph-mon[73607]: pgmap v3941: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:01 compute-0 nova_compute[257802]: 2025-10-02 13:29:01.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:01 compute-0 sshd-session[433711]: Invalid user vor from 167.99.55.34 port 38142
Oct 02 13:29:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:01 compute-0 sshd-session[433711]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 13:29:01 compute-0 sshd-session[433711]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=167.99.55.34
Oct 02 13:29:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2494224005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:29:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:29:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:03.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:29:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3942: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:03.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:03 compute-0 sshd-session[433711]: Failed password for invalid user vor from 167.99.55.34 port 38142 ssh2
Oct 02 13:29:03 compute-0 ceph-mon[73607]: pgmap v3942: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/979004558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:29:04 compute-0 sshd-session[433711]: Connection closed by invalid user vor 167.99.55.34 port 38142 [preauth]
Oct 02 13:29:04 compute-0 nova_compute[257802]: 2025-10-02 13:29:04.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:04 compute-0 nova_compute[257802]: 2025-10-02 13:29:04.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:29:04 compute-0 nova_compute[257802]: 2025-10-02 13:29:04.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:29:04 compute-0 nova_compute[257802]: 2025-10-02 13:29:04.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:29:04 compute-0 nova_compute[257802]: 2025-10-02 13:29:04.118 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:29:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:29:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:05.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:29:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3943: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:29:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:05.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:29:05 compute-0 ceph-mon[73607]: pgmap v3943: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:06 compute-0 nova_compute[257802]: 2025-10-02 13:29:06.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:29:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:07.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:29:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3944: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:07.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:07 compute-0 ceph-mon[73607]: pgmap v3944: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:09 compute-0 nova_compute[257802]: 2025-10-02 13:29:09.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:09 compute-0 nova_compute[257802]: 2025-10-02 13:29:09.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:29:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:29:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:09.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:29:09 compute-0 nova_compute[257802]: 2025-10-02 13:29:09.127 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:29:09 compute-0 nova_compute[257802]: 2025-10-02 13:29:09.128 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:29:09 compute-0 nova_compute[257802]: 2025-10-02 13:29:09.128 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:29:09 compute-0 nova_compute[257802]: 2025-10-02 13:29:09.128 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:29:09 compute-0 nova_compute[257802]: 2025-10-02 13:29:09.128 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:29:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3945: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:29:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:09.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:29:09 compute-0 ceph-mon[73607]: pgmap v3945: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:29:09 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1165733950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:29:09 compute-0 nova_compute[257802]: 2025-10-02 13:29:09.637 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:29:09 compute-0 nova_compute[257802]: 2025-10-02 13:29:09.771 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:29:09 compute-0 nova_compute[257802]: 2025-10-02 13:29:09.772 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4179MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:29:09 compute-0 nova_compute[257802]: 2025-10-02 13:29:09.772 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:29:09 compute-0 nova_compute[257802]: 2025-10-02 13:29:09.772 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:29:09 compute-0 nova_compute[257802]: 2025-10-02 13:29:09.914 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:29:09 compute-0 nova_compute[257802]: 2025-10-02 13:29:09.914 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:29:09 compute-0 nova_compute[257802]: 2025-10-02 13:29:09.934 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:29:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:29:10 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1933919229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:29:10 compute-0 nova_compute[257802]: 2025-10-02 13:29:10.346 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:29:10 compute-0 nova_compute[257802]: 2025-10-02 13:29:10.350 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:29:10 compute-0 nova_compute[257802]: 2025-10-02 13:29:10.367 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:29:10 compute-0 nova_compute[257802]: 2025-10-02 13:29:10.369 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:29:10 compute-0 nova_compute[257802]: 2025-10-02 13:29:10.369 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:29:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1165733950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:29:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1933919229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:29:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:29:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:11.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:29:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3946: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:11.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:11 compute-0 nova_compute[257802]: 2025-10-02 13:29:11.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:11 compute-0 ceph-mon[73607]: pgmap v3946: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:11 compute-0 podman[433763]: 2025-10-02 13:29:11.913945915 +0000 UTC m=+0.052492703 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 02 13:29:11 compute-0 podman[433762]: 2025-10-02 13:29:11.913842273 +0000 UTC m=+0.052469343 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:29:11 compute-0 podman[433764]: 2025-10-02 13:29:11.917607754 +0000 UTC m=+0.050627819 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 13:29:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:29:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:29:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:29:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:29:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:29:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:29:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:29:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:13.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:29:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3947: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:29:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:13.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:29:13 compute-0 ceph-mon[73607]: pgmap v3947: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:14 compute-0 nova_compute[257802]: 2025-10-02 13:29:14.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:15.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3948: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:15.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:15 compute-0 ceph-mon[73607]: pgmap v3948: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:15 compute-0 sudo[433818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:15 compute-0 sudo[433818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:15 compute-0 sudo[433818]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:15 compute-0 sudo[433843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:15 compute-0 sudo[433843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:15 compute-0 sudo[433843]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:16 compute-0 nova_compute[257802]: 2025-10-02 13:29:16.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:17.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3949: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:17.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:17 compute-0 sudo[433869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:17 compute-0 sudo[433869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:17 compute-0 sudo[433869]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:17 compute-0 ceph-mon[73607]: pgmap v3949: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:17 compute-0 sudo[433894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:29:17 compute-0 sudo[433894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:17 compute-0 sudo[433894]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:17 compute-0 sudo[433919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:17 compute-0 sudo[433919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:17 compute-0 sudo[433919]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:17 compute-0 sudo[433944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:29:17 compute-0 sudo[433944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:17 compute-0 sudo[433944]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 13:29:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 13:29:17 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 13:29:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Oct 02 13:29:17 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 13:29:17 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:29:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 13:29:17 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:29:18 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 13:29:18 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 13:29:18 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:29:18 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:29:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Oct 02 13:29:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 13:29:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:29:18 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:29:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:29:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:29:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:29:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:29:18 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 98028d48-7787-4f57-b806-2548b9440b70 does not exist
Oct 02 13:29:18 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 54f44c1d-c987-4811-9dfb-81b06daac900 does not exist
Oct 02 13:29:18 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 977c2f90-e41d-4c7d-b948-2183485f2bc4 does not exist
Oct 02 13:29:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:29:18 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:29:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:29:18 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:29:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:29:18 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:29:18 compute-0 sudo[434000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:18 compute-0 sudo[434000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:18 compute-0 sudo[434000]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:18 compute-0 sudo[434026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:29:18 compute-0 sudo[434026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:18 compute-0 sudo[434026]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:18 compute-0 sudo[434051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:18 compute-0 sudo[434051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:18 compute-0 sudo[434051]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:18 compute-0 sudo[434076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:29:18 compute-0 sudo[434076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:19 compute-0 nova_compute[257802]: 2025-10-02 13:29:19.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:29:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:19.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:29:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3950: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:19.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:19 compute-0 podman[434141]: 2025-10-02 13:29:19.282164183 +0000 UTC m=+0.038689482 container create 17180ad607c0c95583a8a0084cf620176c85b2b0a517290e36411aa171a5afe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 13:29:19 compute-0 systemd[1]: Started libpod-conmon-17180ad607c0c95583a8a0084cf620176c85b2b0a517290e36411aa171a5afe5.scope.
Oct 02 13:29:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 13:29:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:29:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:29:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:29:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:29:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:29:19 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:29:19 compute-0 ceph-mon[73607]: pgmap v3950: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:19 compute-0 podman[434141]: 2025-10-02 13:29:19.264974809 +0000 UTC m=+0.021500118 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:29:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:29:19 compute-0 podman[434141]: 2025-10-02 13:29:19.384573677 +0000 UTC m=+0.141098996 container init 17180ad607c0c95583a8a0084cf620176c85b2b0a517290e36411aa171a5afe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kapitsa, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 13:29:19 compute-0 podman[434141]: 2025-10-02 13:29:19.391631667 +0000 UTC m=+0.148156956 container start 17180ad607c0c95583a8a0084cf620176c85b2b0a517290e36411aa171a5afe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kapitsa, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:29:19 compute-0 podman[434141]: 2025-10-02 13:29:19.395400808 +0000 UTC m=+0.151926137 container attach 17180ad607c0c95583a8a0084cf620176c85b2b0a517290e36411aa171a5afe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kapitsa, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:29:19 compute-0 infallible_kapitsa[434158]: 167 167
Oct 02 13:29:19 compute-0 systemd[1]: libpod-17180ad607c0c95583a8a0084cf620176c85b2b0a517290e36411aa171a5afe5.scope: Deactivated successfully.
Oct 02 13:29:19 compute-0 podman[434141]: 2025-10-02 13:29:19.401250938 +0000 UTC m=+0.157776247 container died 17180ad607c0c95583a8a0084cf620176c85b2b0a517290e36411aa171a5afe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kapitsa, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 13:29:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f5dad0c22f3c0ca97c70d2727ff1f2316fb9fb985301fdfd94475fdea4f7301-merged.mount: Deactivated successfully.
Oct 02 13:29:19 compute-0 podman[434141]: 2025-10-02 13:29:19.445142454 +0000 UTC m=+0.201667763 container remove 17180ad607c0c95583a8a0084cf620176c85b2b0a517290e36411aa171a5afe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 13:29:19 compute-0 systemd[1]: libpod-conmon-17180ad607c0c95583a8a0084cf620176c85b2b0a517290e36411aa171a5afe5.scope: Deactivated successfully.
Oct 02 13:29:19 compute-0 podman[434183]: 2025-10-02 13:29:19.602679734 +0000 UTC m=+0.041686174 container create 662ea28b14fa5f26238aead34c0af96fc6b3830ee3b3e8ac3d76b4bcc3b451b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 13:29:19 compute-0 systemd[1]: Started libpod-conmon-662ea28b14fa5f26238aead34c0af96fc6b3830ee3b3e8ac3d76b4bcc3b451b3.scope.
Oct 02 13:29:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:29:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3e83efae8fe0fb48611f5ad004104ce276149cabadf2f5dadf74eef398857f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:29:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3e83efae8fe0fb48611f5ad004104ce276149cabadf2f5dadf74eef398857f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:29:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3e83efae8fe0fb48611f5ad004104ce276149cabadf2f5dadf74eef398857f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:29:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3e83efae8fe0fb48611f5ad004104ce276149cabadf2f5dadf74eef398857f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:29:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3e83efae8fe0fb48611f5ad004104ce276149cabadf2f5dadf74eef398857f7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:29:19 compute-0 podman[434183]: 2025-10-02 13:29:19.582593541 +0000 UTC m=+0.021600001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:29:19 compute-0 podman[434183]: 2025-10-02 13:29:19.716140474 +0000 UTC m=+0.155147004 container init 662ea28b14fa5f26238aead34c0af96fc6b3830ee3b3e8ac3d76b4bcc3b451b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_tu, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:29:19 compute-0 podman[434183]: 2025-10-02 13:29:19.72389623 +0000 UTC m=+0.162902670 container start 662ea28b14fa5f26238aead34c0af96fc6b3830ee3b3e8ac3d76b4bcc3b451b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 13:29:19 compute-0 podman[434183]: 2025-10-02 13:29:19.731691838 +0000 UTC m=+0.170698308 container attach 662ea28b14fa5f26238aead34c0af96fc6b3830ee3b3e8ac3d76b4bcc3b451b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_tu, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 13:29:20 compute-0 clever_tu[434199]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:29:20 compute-0 clever_tu[434199]: --> relative data size: 1.0
Oct 02 13:29:20 compute-0 clever_tu[434199]: --> All data devices are unavailable
Oct 02 13:29:20 compute-0 systemd[1]: libpod-662ea28b14fa5f26238aead34c0af96fc6b3830ee3b3e8ac3d76b4bcc3b451b3.scope: Deactivated successfully.
Oct 02 13:29:20 compute-0 podman[434183]: 2025-10-02 13:29:20.560305653 +0000 UTC m=+0.999312123 container died 662ea28b14fa5f26238aead34c0af96fc6b3830ee3b3e8ac3d76b4bcc3b451b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Oct 02 13:29:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3e83efae8fe0fb48611f5ad004104ce276149cabadf2f5dadf74eef398857f7-merged.mount: Deactivated successfully.
Oct 02 13:29:20 compute-0 podman[434183]: 2025-10-02 13:29:20.622020468 +0000 UTC m=+1.061026908 container remove 662ea28b14fa5f26238aead34c0af96fc6b3830ee3b3e8ac3d76b4bcc3b451b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_tu, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 13:29:20 compute-0 systemd[1]: libpod-conmon-662ea28b14fa5f26238aead34c0af96fc6b3830ee3b3e8ac3d76b4bcc3b451b3.scope: Deactivated successfully.
Oct 02 13:29:20 compute-0 sudo[434076]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:20 compute-0 sudo[434225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:20 compute-0 sudo[434225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:20 compute-0 sudo[434225]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:20 compute-0 sudo[434250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:29:20 compute-0 sudo[434250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:20 compute-0 sudo[434250]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:20 compute-0 sudo[434276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:20 compute-0 sudo[434276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:20 compute-0 sudo[434276]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:20 compute-0 sudo[434301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 13:29:20 compute-0 sudo[434301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:21.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3951: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:29:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:21.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:29:21 compute-0 podman[434366]: 2025-10-02 13:29:21.255094709 +0000 UTC m=+0.059769049 container create 85fe6214d1b3d4ac7d9528c1d809e91991f2c238f55fa24c2ba9d865e787fbd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 13:29:21 compute-0 ceph-mon[73607]: pgmap v3951: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:21 compute-0 systemd[1]: Started libpod-conmon-85fe6214d1b3d4ac7d9528c1d809e91991f2c238f55fa24c2ba9d865e787fbd5.scope.
Oct 02 13:29:21 compute-0 podman[434366]: 2025-10-02 13:29:21.224967074 +0000 UTC m=+0.029641524 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:29:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:29:21 compute-0 podman[434366]: 2025-10-02 13:29:21.334400366 +0000 UTC m=+0.139074726 container init 85fe6214d1b3d4ac7d9528c1d809e91991f2c238f55fa24c2ba9d865e787fbd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 13:29:21 compute-0 podman[434366]: 2025-10-02 13:29:21.343579207 +0000 UTC m=+0.148253557 container start 85fe6214d1b3d4ac7d9528c1d809e91991f2c238f55fa24c2ba9d865e787fbd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cohen, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 13:29:21 compute-0 podman[434366]: 2025-10-02 13:29:21.346998019 +0000 UTC m=+0.151672369 container attach 85fe6214d1b3d4ac7d9528c1d809e91991f2c238f55fa24c2ba9d865e787fbd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:29:21 compute-0 hopeful_cohen[434382]: 167 167
Oct 02 13:29:21 compute-0 systemd[1]: libpod-85fe6214d1b3d4ac7d9528c1d809e91991f2c238f55fa24c2ba9d865e787fbd5.scope: Deactivated successfully.
Oct 02 13:29:21 compute-0 podman[434366]: 2025-10-02 13:29:21.352601214 +0000 UTC m=+0.157275554 container died 85fe6214d1b3d4ac7d9528c1d809e91991f2c238f55fa24c2ba9d865e787fbd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:29:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-904849ff801795d861e03dc07a2848c31c60d97b131e9d97ae1a373d0b4501b3-merged.mount: Deactivated successfully.
Oct 02 13:29:21 compute-0 podman[434366]: 2025-10-02 13:29:21.399580205 +0000 UTC m=+0.204254615 container remove 85fe6214d1b3d4ac7d9528c1d809e91991f2c238f55fa24c2ba9d865e787fbd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 13:29:21 compute-0 systemd[1]: libpod-conmon-85fe6214d1b3d4ac7d9528c1d809e91991f2c238f55fa24c2ba9d865e787fbd5.scope: Deactivated successfully.
Oct 02 13:29:21 compute-0 nova_compute[257802]: 2025-10-02 13:29:21.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:21 compute-0 podman[434404]: 2025-10-02 13:29:21.590262242 +0000 UTC m=+0.052364900 container create 5ad59f5c4c5614d75a12242354cac7bf0b23eb5491206c86cf77774214719ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_feynman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 13:29:21 compute-0 systemd[1]: Started libpod-conmon-5ad59f5c4c5614d75a12242354cac7bf0b23eb5491206c86cf77774214719ef5.scope.
Oct 02 13:29:21 compute-0 podman[434404]: 2025-10-02 13:29:21.567776861 +0000 UTC m=+0.029879549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:29:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:29:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5828cae1f38e65a32a064b24b51046e4e8b1b48f91b8970deb249e05247a917d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:29:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5828cae1f38e65a32a064b24b51046e4e8b1b48f91b8970deb249e05247a917d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:29:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5828cae1f38e65a32a064b24b51046e4e8b1b48f91b8970deb249e05247a917d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:29:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5828cae1f38e65a32a064b24b51046e4e8b1b48f91b8970deb249e05247a917d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:29:21 compute-0 podman[434404]: 2025-10-02 13:29:21.692392759 +0000 UTC m=+0.154495457 container init 5ad59f5c4c5614d75a12242354cac7bf0b23eb5491206c86cf77774214719ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_feynman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:29:21 compute-0 podman[434404]: 2025-10-02 13:29:21.699643093 +0000 UTC m=+0.161745741 container start 5ad59f5c4c5614d75a12242354cac7bf0b23eb5491206c86cf77774214719ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_feynman, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 13:29:21 compute-0 podman[434404]: 2025-10-02 13:29:21.703498206 +0000 UTC m=+0.165600954 container attach 5ad59f5c4c5614d75a12242354cac7bf0b23eb5491206c86cf77774214719ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_feynman, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:29:22 compute-0 reverent_feynman[434420]: {
Oct 02 13:29:22 compute-0 reverent_feynman[434420]:     "1": [
Oct 02 13:29:22 compute-0 reverent_feynman[434420]:         {
Oct 02 13:29:22 compute-0 reverent_feynman[434420]:             "devices": [
Oct 02 13:29:22 compute-0 reverent_feynman[434420]:                 "/dev/loop3"
Oct 02 13:29:22 compute-0 reverent_feynman[434420]:             ],
Oct 02 13:29:22 compute-0 reverent_feynman[434420]:             "lv_name": "ceph_lv0",
Oct 02 13:29:22 compute-0 reverent_feynman[434420]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:29:22 compute-0 reverent_feynman[434420]:             "lv_size": "7511998464",
Oct 02 13:29:22 compute-0 reverent_feynman[434420]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:29:22 compute-0 reverent_feynman[434420]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:29:22 compute-0 reverent_feynman[434420]:             "name": "ceph_lv0",
Oct 02 13:29:22 compute-0 reverent_feynman[434420]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:29:22 compute-0 reverent_feynman[434420]:             "tags": {
Oct 02 13:29:22 compute-0 reverent_feynman[434420]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:29:22 compute-0 reverent_feynman[434420]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:29:22 compute-0 reverent_feynman[434420]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:29:22 compute-0 reverent_feynman[434420]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:29:22 compute-0 reverent_feynman[434420]:                 "ceph.cluster_name": "ceph",
Oct 02 13:29:22 compute-0 reverent_feynman[434420]:                 "ceph.crush_device_class": "",
Oct 02 13:29:22 compute-0 reverent_feynman[434420]:                 "ceph.encrypted": "0",
Oct 02 13:29:22 compute-0 reverent_feynman[434420]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:29:22 compute-0 reverent_feynman[434420]:                 "ceph.osd_id": "1",
Oct 02 13:29:22 compute-0 reverent_feynman[434420]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:29:22 compute-0 reverent_feynman[434420]:                 "ceph.type": "block",
Oct 02 13:29:22 compute-0 reverent_feynman[434420]:                 "ceph.vdo": "0"
Oct 02 13:29:22 compute-0 reverent_feynman[434420]:             },
Oct 02 13:29:22 compute-0 reverent_feynman[434420]:             "type": "block",
Oct 02 13:29:22 compute-0 reverent_feynman[434420]:             "vg_name": "ceph_vg0"
Oct 02 13:29:22 compute-0 reverent_feynman[434420]:         }
Oct 02 13:29:22 compute-0 reverent_feynman[434420]:     ]
Oct 02 13:29:22 compute-0 reverent_feynman[434420]: }
Oct 02 13:29:22 compute-0 systemd[1]: libpod-5ad59f5c4c5614d75a12242354cac7bf0b23eb5491206c86cf77774214719ef5.scope: Deactivated successfully.
Oct 02 13:29:22 compute-0 podman[434404]: 2025-10-02 13:29:22.533424073 +0000 UTC m=+0.995526721 container died 5ad59f5c4c5614d75a12242354cac7bf0b23eb5491206c86cf77774214719ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_feynman, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:29:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-5828cae1f38e65a32a064b24b51046e4e8b1b48f91b8970deb249e05247a917d-merged.mount: Deactivated successfully.
Oct 02 13:29:22 compute-0 podman[434404]: 2025-10-02 13:29:22.60977725 +0000 UTC m=+1.071879918 container remove 5ad59f5c4c5614d75a12242354cac7bf0b23eb5491206c86cf77774214719ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 13:29:22 compute-0 systemd[1]: libpod-conmon-5ad59f5c4c5614d75a12242354cac7bf0b23eb5491206c86cf77774214719ef5.scope: Deactivated successfully.
Oct 02 13:29:22 compute-0 sudo[434301]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:22 compute-0 sudo[434443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:22 compute-0 sudo[434443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:22 compute-0 sudo[434443]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:22 compute-0 sudo[434468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:29:22 compute-0 sudo[434468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:22 compute-0 sudo[434468]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:22 compute-0 sudo[434494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:22 compute-0 sudo[434494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:22 compute-0 sudo[434494]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:22 compute-0 sudo[434519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 13:29:22 compute-0 sudo[434519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:29:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:23.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:29:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3952: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:29:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:23.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:29:23 compute-0 ceph-mon[73607]: pgmap v3952: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:23 compute-0 podman[434586]: 2025-10-02 13:29:23.307610679 +0000 UTC m=+0.039303056 container create e4c0dace3cb4564d8217fd864fe920991c2b041afb994ca49761ad3883e5f5a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 13:29:23 compute-0 systemd[1]: Started libpod-conmon-e4c0dace3cb4564d8217fd864fe920991c2b041afb994ca49761ad3883e5f5a2.scope.
Oct 02 13:29:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:29:23 compute-0 podman[434586]: 2025-10-02 13:29:23.289985475 +0000 UTC m=+0.021677852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:29:23 compute-0 podman[434586]: 2025-10-02 13:29:23.388204408 +0000 UTC m=+0.119896795 container init e4c0dace3cb4564d8217fd864fe920991c2b041afb994ca49761ad3883e5f5a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galileo, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 13:29:23 compute-0 podman[434586]: 2025-10-02 13:29:23.396794224 +0000 UTC m=+0.128486591 container start e4c0dace3cb4564d8217fd864fe920991c2b041afb994ca49761ad3883e5f5a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galileo, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 13:29:23 compute-0 podman[434586]: 2025-10-02 13:29:23.400717909 +0000 UTC m=+0.132410306 container attach e4c0dace3cb4564d8217fd864fe920991c2b041afb994ca49761ad3883e5f5a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 13:29:23 compute-0 amazing_galileo[434602]: 167 167
Oct 02 13:29:23 compute-0 systemd[1]: libpod-e4c0dace3cb4564d8217fd864fe920991c2b041afb994ca49761ad3883e5f5a2.scope: Deactivated successfully.
Oct 02 13:29:23 compute-0 podman[434586]: 2025-10-02 13:29:23.40529704 +0000 UTC m=+0.136989417 container died e4c0dace3cb4564d8217fd864fe920991c2b041afb994ca49761ad3883e5f5a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galileo, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:29:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab60d367a0f50622ad59a85b4711d1f57488ebe39c2bdab6d1409581525d1715-merged.mount: Deactivated successfully.
Oct 02 13:29:23 compute-0 podman[434586]: 2025-10-02 13:29:23.442067864 +0000 UTC m=+0.173760241 container remove e4c0dace3cb4564d8217fd864fe920991c2b041afb994ca49761ad3883e5f5a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galileo, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 13:29:23 compute-0 systemd[1]: libpod-conmon-e4c0dace3cb4564d8217fd864fe920991c2b041afb994ca49761ad3883e5f5a2.scope: Deactivated successfully.
Oct 02 13:29:23 compute-0 podman[434627]: 2025-10-02 13:29:23.637872195 +0000 UTC m=+0.077846544 container create 34ccf1efe610000ad4f304d9b08336ad495acbaa7e22608a099df4dce6ac580e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 13:29:23 compute-0 podman[434627]: 2025-10-02 13:29:23.581271813 +0000 UTC m=+0.021246152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:29:23 compute-0 systemd[1]: Started libpod-conmon-34ccf1efe610000ad4f304d9b08336ad495acbaa7e22608a099df4dce6ac580e.scope.
Oct 02 13:29:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:29:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/860a8264216226fc3ddd7b36bcd4e29ac21d25a06cc35ea3e872aab8ad6c60a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:29:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/860a8264216226fc3ddd7b36bcd4e29ac21d25a06cc35ea3e872aab8ad6c60a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:29:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/860a8264216226fc3ddd7b36bcd4e29ac21d25a06cc35ea3e872aab8ad6c60a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:29:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/860a8264216226fc3ddd7b36bcd4e29ac21d25a06cc35ea3e872aab8ad6c60a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:29:23 compute-0 podman[434627]: 2025-10-02 13:29:23.731228981 +0000 UTC m=+0.171203320 container init 34ccf1efe610000ad4f304d9b08336ad495acbaa7e22608a099df4dce6ac580e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_haslett, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:29:23 compute-0 podman[434627]: 2025-10-02 13:29:23.738628219 +0000 UTC m=+0.178602538 container start 34ccf1efe610000ad4f304d9b08336ad495acbaa7e22608a099df4dce6ac580e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_haslett, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 13:29:23 compute-0 podman[434627]: 2025-10-02 13:29:23.741493808 +0000 UTC m=+0.181468137 container attach 34ccf1efe610000ad4f304d9b08336ad495acbaa7e22608a099df4dce6ac580e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:29:24 compute-0 nova_compute[257802]: 2025-10-02 13:29:24.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:24 compute-0 fervent_haslett[434644]: {
Oct 02 13:29:24 compute-0 fervent_haslett[434644]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 13:29:24 compute-0 fervent_haslett[434644]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:29:24 compute-0 fervent_haslett[434644]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:29:24 compute-0 fervent_haslett[434644]:         "osd_id": 1,
Oct 02 13:29:24 compute-0 fervent_haslett[434644]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:29:24 compute-0 fervent_haslett[434644]:         "type": "bluestore"
Oct 02 13:29:24 compute-0 fervent_haslett[434644]:     }
Oct 02 13:29:24 compute-0 fervent_haslett[434644]: }
Oct 02 13:29:24 compute-0 systemd[1]: libpod-34ccf1efe610000ad4f304d9b08336ad495acbaa7e22608a099df4dce6ac580e.scope: Deactivated successfully.
Oct 02 13:29:24 compute-0 podman[434627]: 2025-10-02 13:29:24.591395475 +0000 UTC m=+1.031369814 container died 34ccf1efe610000ad4f304d9b08336ad495acbaa7e22608a099df4dce6ac580e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_haslett, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 13:29:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-860a8264216226fc3ddd7b36bcd4e29ac21d25a06cc35ea3e872aab8ad6c60a0-merged.mount: Deactivated successfully.
Oct 02 13:29:24 compute-0 podman[434627]: 2025-10-02 13:29:24.808257223 +0000 UTC m=+1.248231542 container remove 34ccf1efe610000ad4f304d9b08336ad495acbaa7e22608a099df4dce6ac580e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_haslett, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:29:24 compute-0 podman[434665]: 2025-10-02 13:29:24.816584302 +0000 UTC m=+0.188706330 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 13:29:24 compute-0 systemd[1]: libpod-conmon-34ccf1efe610000ad4f304d9b08336ad495acbaa7e22608a099df4dce6ac580e.scope: Deactivated successfully.
Oct 02 13:29:24 compute-0 sudo[434519]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:29:24 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:29:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:29:24 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:29:24 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev d549c934-d7a9-4826-9178-ec97c756de07 does not exist
Oct 02 13:29:24 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 9e59862b-4df0-4a30-95d1-e42eb2bb37c0 does not exist
Oct 02 13:29:24 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev b8c2f787-6b74-4a87-8f0d-971d5878cc45 does not exist
Oct 02 13:29:24 compute-0 sudo[434699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:24 compute-0 sudo[434699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:24 compute-0 sudo[434699]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:24 compute-0 sudo[434724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:29:24 compute-0 sudo[434724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:24 compute-0 sudo[434724]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:29:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:25.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:29:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3953: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:29:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:25.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:29:25 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:29:25 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:29:25 compute-0 ceph-mon[73607]: pgmap v3953: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:26 compute-0 nova_compute[257802]: 2025-10-02 13:29:26.370 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:29:26 compute-0 nova_compute[257802]: 2025-10-02 13:29:26.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:29:27.017 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:29:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:29:27.017 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:29:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:29:27.017 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:29:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:29:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:27.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:29:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3954: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:27.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:27 compute-0 ceph-mon[73607]: pgmap v3954: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:29 compute-0 nova_compute[257802]: 2025-10-02 13:29:29.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:29.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3955: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:29.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:29 compute-0 ceph-mon[73607]: pgmap v3955: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:31.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3956: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:31.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:31 compute-0 ceph-mon[73607]: pgmap v3956: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:31 compute-0 nova_compute[257802]: 2025-10-02 13:29:31.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:29:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:33.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:29:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3957: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:33.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:33 compute-0 ceph-mon[73607]: pgmap v3957: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:34 compute-0 nova_compute[257802]: 2025-10-02 13:29:34.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:35.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3958: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:35.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:35 compute-0 ceph-mon[73607]: pgmap v3958: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:35 compute-0 sudo[434754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:35 compute-0 sudo[434754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:35 compute-0 sudo[434754]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:35 compute-0 sudo[434779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:35 compute-0 sudo[434779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:35 compute-0 sudo[434779]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:36 compute-0 nova_compute[257802]: 2025-10-02 13:29:36.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:36 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:29:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:37.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:29:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3959: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:37.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:37 compute-0 ceph-mon[73607]: pgmap v3959: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:39 compute-0 nova_compute[257802]: 2025-10-02 13:29:39.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:39.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3960: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:39.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:39 compute-0 ceph-mon[73607]: pgmap v3960: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:29:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:41.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:29:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3961: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:41.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:41 compute-0 ceph-mon[73607]: pgmap v3961: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #198. Immutable memtables: 0.
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:29:41.424090) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 198
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411781424113, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 2102, "num_deletes": 251, "total_data_size": 3885624, "memory_usage": 3943384, "flush_reason": "Manual Compaction"}
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #199: started
Oct 02 13:29:41 compute-0 nova_compute[257802]: 2025-10-02 13:29:41.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411781509753, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 199, "file_size": 3807499, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 85554, "largest_seqno": 87655, "table_properties": {"data_size": 3798053, "index_size": 6003, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19116, "raw_average_key_size": 20, "raw_value_size": 3779237, "raw_average_value_size": 4007, "num_data_blocks": 262, "num_entries": 943, "num_filter_entries": 943, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411562, "oldest_key_time": 1759411562, "file_creation_time": 1759411781, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 199, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 85714 microseconds, and 6912 cpu microseconds.
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:29:41.509799) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #199: 3807499 bytes OK
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:29:41.509819) [db/memtable_list.cc:519] [default] Level-0 commit table #199 started
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:29:41.515810) [db/memtable_list.cc:722] [default] Level-0 commit table #199: memtable #1 done
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:29:41.515848) EVENT_LOG_v1 {"time_micros": 1759411781515843, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:29:41.515867) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 3877211, prev total WAL file size 3877211, number of live WAL files 2.
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000195.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:29:41.516795) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [199(3718KB)], [197(12MB)]
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411781516849, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [199], "files_L6": [197], "score": -1, "input_data_size": 16842428, "oldest_snapshot_seqno": -1}
Oct 02 13:29:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #200: 11718 keys, 14758077 bytes, temperature: kUnknown
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411781678894, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 200, "file_size": 14758077, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14683301, "index_size": 44391, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29317, "raw_key_size": 308998, "raw_average_key_size": 26, "raw_value_size": 14479318, "raw_average_value_size": 1235, "num_data_blocks": 1687, "num_entries": 11718, "num_filter_entries": 11718, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759411781, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:29:41.680499) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 14758077 bytes
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:29:41.686248) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 103.9 rd, 91.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 12.4 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(8.3) write-amplify(3.9) OK, records in: 12237, records dropped: 519 output_compression: NoCompression
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:29:41.686279) EVENT_LOG_v1 {"time_micros": 1759411781686266, "job": 124, "event": "compaction_finished", "compaction_time_micros": 162138, "compaction_time_cpu_micros": 33057, "output_level": 6, "num_output_files": 1, "total_output_size": 14758077, "num_input_records": 12237, "num_output_records": 11718, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000199.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411781687186, "job": 124, "event": "table_file_deletion", "file_number": 199}
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411781689497, "job": 124, "event": "table_file_deletion", "file_number": 197}
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:29:41.516667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:29:41.689628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:29:41.689634) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:29:41.689636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:29:41.689639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:29:41 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:29:41.689641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:29:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:29:42
Oct 02 13:29:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:29:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:29:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'vms', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', 'images', '.rgw.root']
Oct 02 13:29:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:29:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:29:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:29:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:29:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:29:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:29:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:29:42 compute-0 podman[434808]: 2025-10-02 13:29:42.923916208 +0000 UTC m=+0.057627508 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 13:29:42 compute-0 podman[434810]: 2025-10-02 13:29:42.923231801 +0000 UTC m=+0.053112988 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:29:42 compute-0 podman[434809]: 2025-10-02 13:29:42.954377931 +0000 UTC m=+0.086141474 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 13:29:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:43.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3962: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:29:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:43.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:29:43 compute-0 ceph-mon[73607]: pgmap v3962: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:29:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:29:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:29:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:29:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:29:44 compute-0 nova_compute[257802]: 2025-10-02 13:29:44.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:29:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:29:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:29:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:29:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:29:45 compute-0 nova_compute[257802]: 2025-10-02 13:29:45.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:29:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:29:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:45.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:29:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3963: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:29:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:45.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:29:45 compute-0 ceph-mon[73607]: pgmap v3963: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:46 compute-0 nova_compute[257802]: 2025-10-02 13:29:46.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:46 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:47 compute-0 nova_compute[257802]: 2025-10-02 13:29:47.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:29:47 compute-0 nova_compute[257802]: 2025-10-02 13:29:47.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:29:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:29:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:47.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:29:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3964: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:47.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:47 compute-0 ceph-mon[73607]: pgmap v3964: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:49 compute-0 nova_compute[257802]: 2025-10-02 13:29:49.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:29:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:49.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:29:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3965: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:29:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:49.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:29:49 compute-0 ceph-mon[73607]: pgmap v3965: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:51 compute-0 nova_compute[257802]: 2025-10-02 13:29:51.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:29:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:51.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3966: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:51.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:51 compute-0 ceph-mon[73607]: pgmap v3966: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:51 compute-0 nova_compute[257802]: 2025-10-02 13:29:51.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:53 compute-0 nova_compute[257802]: 2025-10-02 13:29:53.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:29:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:29:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:53.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:29:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3967: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:53 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2326759658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:29:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:53.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:54 compute-0 nova_compute[257802]: 2025-10-02 13:29:54.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:54 compute-0 ceph-mon[73607]: pgmap v3967: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1388984387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:29:54 compute-0 podman[434871]: 2025-10-02 13:29:54.92878651 +0000 UTC m=+0.073665604 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:29:55 compute-0 nova_compute[257802]: 2025-10-02 13:29:55.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:29:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:29:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:29:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:29:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:29:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:29:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:29:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:29:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:29:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:29:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:29:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:29:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:29:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:29:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:29:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:29:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:29:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:29:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:29:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:29:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:29:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:29:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:29:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:29:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:29:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:55.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:29:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3968: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:55.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/532813708' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:29:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/532813708' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:29:55 compute-0 sudo[434897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:55 compute-0 sudo[434897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:55 compute-0 sudo[434897]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:56 compute-0 sudo[434922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:56 compute-0 sudo[434922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:56 compute-0 sudo[434922]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:56 compute-0 ceph-mon[73607]: pgmap v3968: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:56 compute-0 nova_compute[257802]: 2025-10-02 13:29:56.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:57 compute-0 nova_compute[257802]: 2025-10-02 13:29:57.094 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:29:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:57.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3969: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:29:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:57.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:29:57 compute-0 ceph-mon[73607]: pgmap v3969: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:59 compute-0 nova_compute[257802]: 2025-10-02 13:29:59.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:29:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:59.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:29:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3970: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:29:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:29:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:29:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:59.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:29:59 compute-0 ceph-mon[73607]: pgmap v3970: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:00 compute-0 ceph-mon[73607]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 02 13:30:00 compute-0 ceph-mon[73607]: overall HEALTH_OK
Oct 02 13:30:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:01.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3971: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:01.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:01 compute-0 nova_compute[257802]: 2025-10-02 13:30:01.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:01 compute-0 ceph-mon[73607]: pgmap v3971: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:01 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/804517818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:30:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:03.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:30:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3972: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:03.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:03 compute-0 ceph-mon[73607]: pgmap v3972: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:03 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1008214551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:04 compute-0 nova_compute[257802]: 2025-10-02 13:30:04.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:30:04 compute-0 nova_compute[257802]: 2025-10-02 13:30:04.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:30:04 compute-0 nova_compute[257802]: 2025-10-02 13:30:04.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:30:04 compute-0 nova_compute[257802]: 2025-10-02 13:30:04.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:04 compute-0 nova_compute[257802]: 2025-10-02 13:30:04.269 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:30:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:05.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3973: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:05.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:06 compute-0 ceph-mon[73607]: pgmap v3973: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:06 compute-0 nova_compute[257802]: 2025-10-02 13:30:06.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:07 compute-0 nova_compute[257802]: 2025-10-02 13:30:07.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:30:07 compute-0 nova_compute[257802]: 2025-10-02 13:30:07.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 13:30:07 compute-0 nova_compute[257802]: 2025-10-02 13:30:07.133 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 13:30:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:07.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3974: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:30:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:07.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:30:08 compute-0 ceph-mon[73607]: pgmap v3974: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:09 compute-0 nova_compute[257802]: 2025-10-02 13:30:09.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:09.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3975: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:09.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:10 compute-0 nova_compute[257802]: 2025-10-02 13:30:10.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:30:10 compute-0 ceph-mon[73607]: pgmap v3975: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:11 compute-0 nova_compute[257802]: 2025-10-02 13:30:11.113 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:30:11 compute-0 nova_compute[257802]: 2025-10-02 13:30:11.144 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:30:11 compute-0 nova_compute[257802]: 2025-10-02 13:30:11.145 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:30:11 compute-0 nova_compute[257802]: 2025-10-02 13:30:11.145 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:30:11 compute-0 nova_compute[257802]: 2025-10-02 13:30:11.145 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:30:11 compute-0 nova_compute[257802]: 2025-10-02 13:30:11.146 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:30:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:11.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3976: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:11.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:30:11 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/930168130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:11 compute-0 nova_compute[257802]: 2025-10-02 13:30:11.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:11 compute-0 nova_compute[257802]: 2025-10-02 13:30:11.584 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:30:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:11 compute-0 nova_compute[257802]: 2025-10-02 13:30:11.779 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:30:11 compute-0 nova_compute[257802]: 2025-10-02 13:30:11.780 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4169MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:30:11 compute-0 nova_compute[257802]: 2025-10-02 13:30:11.780 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:30:11 compute-0 nova_compute[257802]: 2025-10-02 13:30:11.781 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:30:11 compute-0 nova_compute[257802]: 2025-10-02 13:30:11.853 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:30:11 compute-0 nova_compute[257802]: 2025-10-02 13:30:11.853 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:30:12 compute-0 nova_compute[257802]: 2025-10-02 13:30:12.011 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:30:12 compute-0 ceph-mon[73607]: pgmap v3976: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:30:12 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/486016299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:12 compute-0 nova_compute[257802]: 2025-10-02 13:30:12.711 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.700s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:30:12 compute-0 nova_compute[257802]: 2025-10-02 13:30:12.717 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:30:12 compute-0 nova_compute[257802]: 2025-10-02 13:30:12.737 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:30:12 compute-0 nova_compute[257802]: 2025-10-02 13:30:12.739 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:30:12 compute-0 nova_compute[257802]: 2025-10-02 13:30:12.739 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:30:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:30:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:30:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:30:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:30:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:30:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:30:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:13.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3977: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:13.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:13 compute-0 podman[435001]: 2025-10-02 13:30:13.920254991 +0000 UTC m=+0.055241182 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 13:30:13 compute-0 podman[435000]: 2025-10-02 13:30:13.920973509 +0000 UTC m=+0.060474981 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:30:13 compute-0 podman[435002]: 2025-10-02 13:30:13.925659783 +0000 UTC m=+0.056523594 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 13:30:14 compute-0 nova_compute[257802]: 2025-10-02 13:30:14.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:15.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3978: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:15.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:16 compute-0 sudo[435058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:30:16 compute-0 sudo[435058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:16 compute-0 sudo[435058]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:16 compute-0 sudo[435083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:30:16 compute-0 sudo[435083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:16 compute-0 sudo[435083]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:16 compute-0 nova_compute[257802]: 2025-10-02 13:30:16.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:17.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3979: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:30:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:17.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:30:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/930168130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/486016299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:19 compute-0 nova_compute[257802]: 2025-10-02 13:30:19.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:19.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3980: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:19.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:19 compute-0 ceph-mon[73607]: pgmap v3977: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:19 compute-0 ceph-mon[73607]: pgmap v3978: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:19 compute-0 ceph-mon[73607]: pgmap v3979: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:20 compute-0 ceph-mon[73607]: pgmap v3980: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:30:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:21.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:30:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3981: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:30:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:21.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:30:21 compute-0 ceph-mon[73607]: pgmap v3981: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:21 compute-0 nova_compute[257802]: 2025-10-02 13:30:21.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:30:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:23.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:30:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3982: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:30:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:23.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:30:23 compute-0 ceph-mon[73607]: pgmap v3982: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:24 compute-0 nova_compute[257802]: 2025-10-02 13:30:24.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:25.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3983: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:25 compute-0 ceph-mon[73607]: pgmap v3983: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:25.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:25 compute-0 sudo[435113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:30:25 compute-0 sudo[435113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:25 compute-0 sudo[435113]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:25 compute-0 sudo[435144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:30:25 compute-0 sudo[435144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:25 compute-0 sudo[435144]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:25 compute-0 podman[435137]: 2025-10-02 13:30:25.445948565 +0000 UTC m=+0.078262456 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Oct 02 13:30:25 compute-0 sudo[435185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:30:25 compute-0 sudo[435185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:25 compute-0 sudo[435185]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:25 compute-0 sudo[435215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:30:25 compute-0 sudo[435215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:25 compute-0 sudo[435215]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:30:26 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:30:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:30:26 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:30:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:30:26 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:30:26 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 34de27cf-ecc0-4407-99ca-2e1096d7dc28 does not exist
Oct 02 13:30:26 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev f47a1c09-4321-497a-910d-df40576cb259 does not exist
Oct 02 13:30:26 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev baafa5b9-1b6e-44ea-a546-678faf1df5c6 does not exist
Oct 02 13:30:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:30:26 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:30:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:30:26 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:30:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:30:26 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:30:26 compute-0 sudo[435270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:30:26 compute-0 sudo[435270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:26 compute-0 sudo[435270]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:26 compute-0 sudo[435295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:30:26 compute-0 sudo[435295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:26 compute-0 sudo[435295]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:26 compute-0 sudo[435320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:30:26 compute-0 sudo[435320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:26 compute-0 sudo[435320]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:26 compute-0 sudo[435345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:30:26 compute-0 sudo[435345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:26 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:30:26 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:30:26 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:30:26 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:30:26 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:30:26 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:30:26 compute-0 podman[435410]: 2025-10-02 13:30:26.649924014 +0000 UTC m=+0.044231334 container create b000c734709183dfcf0b60dac94ba4f951fab3f51afc1b006a779c9558021a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_yalow, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:30:26 compute-0 systemd[1]: Started libpod-conmon-b000c734709183dfcf0b60dac94ba4f951fab3f51afc1b006a779c9558021a1c.scope.
Oct 02 13:30:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:30:26 compute-0 nova_compute[257802]: 2025-10-02 13:30:26.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:26 compute-0 podman[435410]: 2025-10-02 13:30:26.62854443 +0000 UTC m=+0.022851790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:30:26 compute-0 podman[435410]: 2025-10-02 13:30:26.726670272 +0000 UTC m=+0.120977592 container init b000c734709183dfcf0b60dac94ba4f951fab3f51afc1b006a779c9558021a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:30:26 compute-0 podman[435410]: 2025-10-02 13:30:26.733741485 +0000 UTC m=+0.128048805 container start b000c734709183dfcf0b60dac94ba4f951fab3f51afc1b006a779c9558021a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 13:30:26 compute-0 podman[435410]: 2025-10-02 13:30:26.737491006 +0000 UTC m=+0.131798326 container attach b000c734709183dfcf0b60dac94ba4f951fab3f51afc1b006a779c9558021a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_yalow, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:30:26 compute-0 affectionate_yalow[435426]: 167 167
Oct 02 13:30:26 compute-0 systemd[1]: libpod-b000c734709183dfcf0b60dac94ba4f951fab3f51afc1b006a779c9558021a1c.scope: Deactivated successfully.
Oct 02 13:30:26 compute-0 conmon[435426]: conmon b000c734709183dfcf0b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b000c734709183dfcf0b60dac94ba4f951fab3f51afc1b006a779c9558021a1c.scope/container/memory.events
Oct 02 13:30:26 compute-0 podman[435410]: 2025-10-02 13:30:26.740609262 +0000 UTC m=+0.134916582 container died b000c734709183dfcf0b60dac94ba4f951fab3f51afc1b006a779c9558021a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_yalow, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 13:30:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-789aea8cdcdc2af404f4fcd00300baaa5c7e02d4af07dfe1ba42a96cf878df12-merged.mount: Deactivated successfully.
Oct 02 13:30:26 compute-0 podman[435410]: 2025-10-02 13:30:26.786990198 +0000 UTC m=+0.181297558 container remove b000c734709183dfcf0b60dac94ba4f951fab3f51afc1b006a779c9558021a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_yalow, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:30:26 compute-0 systemd[1]: libpod-conmon-b000c734709183dfcf0b60dac94ba4f951fab3f51afc1b006a779c9558021a1c.scope: Deactivated successfully.
Oct 02 13:30:26 compute-0 podman[435451]: 2025-10-02 13:30:26.9726452 +0000 UTC m=+0.051093461 container create 825db39d94dec4e926dd5bc57e14ce69aab5e57220d711823aaf07c8e7f5bb45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_bartik, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:30:27 compute-0 systemd[1]: Started libpod-conmon-825db39d94dec4e926dd5bc57e14ce69aab5e57220d711823aaf07c8e7f5bb45.scope.
Oct 02 13:30:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:30:27.018 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:30:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:30:27.019 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:30:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:30:27.019 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:30:27 compute-0 podman[435451]: 2025-10-02 13:30:26.949913394 +0000 UTC m=+0.028361685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:30:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:30:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dba16c865bfe76b89998d7770559e33dd3905bab090eb242ce96ff7ddd034dc5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:30:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dba16c865bfe76b89998d7770559e33dd3905bab090eb242ce96ff7ddd034dc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:30:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dba16c865bfe76b89998d7770559e33dd3905bab090eb242ce96ff7ddd034dc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:30:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dba16c865bfe76b89998d7770559e33dd3905bab090eb242ce96ff7ddd034dc5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:30:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dba16c865bfe76b89998d7770559e33dd3905bab090eb242ce96ff7ddd034dc5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:30:27 compute-0 podman[435451]: 2025-10-02 13:30:27.07681795 +0000 UTC m=+0.155266231 container init 825db39d94dec4e926dd5bc57e14ce69aab5e57220d711823aaf07c8e7f5bb45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:30:27 compute-0 podman[435451]: 2025-10-02 13:30:27.086005484 +0000 UTC m=+0.164453745 container start 825db39d94dec4e926dd5bc57e14ce69aab5e57220d711823aaf07c8e7f5bb45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_bartik, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Oct 02 13:30:27 compute-0 podman[435451]: 2025-10-02 13:30:27.090413142 +0000 UTC m=+0.168861423 container attach 825db39d94dec4e926dd5bc57e14ce69aab5e57220d711823aaf07c8e7f5bb45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_bartik, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 13:30:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:27.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3984: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:27.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:27 compute-0 ceph-mon[73607]: pgmap v3984: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:27 compute-0 nova_compute[257802]: 2025-10-02 13:30:27.725 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:30:27 compute-0 bold_bartik[435467]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:30:27 compute-0 bold_bartik[435467]: --> relative data size: 1.0
Oct 02 13:30:27 compute-0 bold_bartik[435467]: --> All data devices are unavailable
Oct 02 13:30:27 compute-0 systemd[1]: libpod-825db39d94dec4e926dd5bc57e14ce69aab5e57220d711823aaf07c8e7f5bb45.scope: Deactivated successfully.
Oct 02 13:30:27 compute-0 podman[435451]: 2025-10-02 13:30:27.906877649 +0000 UTC m=+0.985325910 container died 825db39d94dec4e926dd5bc57e14ce69aab5e57220d711823aaf07c8e7f5bb45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_bartik, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 13:30:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-dba16c865bfe76b89998d7770559e33dd3905bab090eb242ce96ff7ddd034dc5-merged.mount: Deactivated successfully.
Oct 02 13:30:27 compute-0 podman[435451]: 2025-10-02 13:30:27.96701004 +0000 UTC m=+1.045458301 container remove 825db39d94dec4e926dd5bc57e14ce69aab5e57220d711823aaf07c8e7f5bb45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_bartik, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:30:27 compute-0 systemd[1]: libpod-conmon-825db39d94dec4e926dd5bc57e14ce69aab5e57220d711823aaf07c8e7f5bb45.scope: Deactivated successfully.
Oct 02 13:30:28 compute-0 sudo[435345]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:28 compute-0 sudo[435496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:30:28 compute-0 sudo[435496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:28 compute-0 sudo[435496]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:28 compute-0 sudo[435521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:30:28 compute-0 sudo[435521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:28 compute-0 sudo[435521]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:28 compute-0 sudo[435546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:30:28 compute-0 sudo[435546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:28 compute-0 sudo[435546]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:28 compute-0 sudo[435571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 13:30:28 compute-0 sudo[435571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:28 compute-0 podman[435636]: 2025-10-02 13:30:28.583435244 +0000 UTC m=+0.039622951 container create 632e62434743afc1b044809158ffb51fa16ea106e38c541f50d42c97e83eef18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shaw, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:30:28 compute-0 systemd[1]: Started libpod-conmon-632e62434743afc1b044809158ffb51fa16ea106e38c541f50d42c97e83eef18.scope.
Oct 02 13:30:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:30:28 compute-0 podman[435636]: 2025-10-02 13:30:28.657941627 +0000 UTC m=+0.114129364 container init 632e62434743afc1b044809158ffb51fa16ea106e38c541f50d42c97e83eef18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shaw, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:30:28 compute-0 podman[435636]: 2025-10-02 13:30:28.566741966 +0000 UTC m=+0.022929683 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:30:28 compute-0 podman[435636]: 2025-10-02 13:30:28.664015775 +0000 UTC m=+0.120203472 container start 632e62434743afc1b044809158ffb51fa16ea106e38c541f50d42c97e83eef18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shaw, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:30:28 compute-0 podman[435636]: 2025-10-02 13:30:28.666923536 +0000 UTC m=+0.123111263 container attach 632e62434743afc1b044809158ffb51fa16ea106e38c541f50d42c97e83eef18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 13:30:28 compute-0 dreamy_shaw[435652]: 167 167
Oct 02 13:30:28 compute-0 systemd[1]: libpod-632e62434743afc1b044809158ffb51fa16ea106e38c541f50d42c97e83eef18.scope: Deactivated successfully.
Oct 02 13:30:28 compute-0 podman[435636]: 2025-10-02 13:30:28.668818663 +0000 UTC m=+0.125006370 container died 632e62434743afc1b044809158ffb51fa16ea106e38c541f50d42c97e83eef18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shaw, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 13:30:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f115267cb7eaffa4d58979f2cadbcaaa97c185234befbb80e14d50be7198a72-merged.mount: Deactivated successfully.
Oct 02 13:30:28 compute-0 podman[435636]: 2025-10-02 13:30:28.706013333 +0000 UTC m=+0.162201040 container remove 632e62434743afc1b044809158ffb51fa16ea106e38c541f50d42c97e83eef18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 13:30:28 compute-0 systemd[1]: libpod-conmon-632e62434743afc1b044809158ffb51fa16ea106e38c541f50d42c97e83eef18.scope: Deactivated successfully.
Oct 02 13:30:28 compute-0 podman[435676]: 2025-10-02 13:30:28.855423309 +0000 UTC m=+0.040560484 container create 144f3c84aa90cfc654e56ba45dba6955a6145a159d60d182b860a78f03337fc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:30:28 compute-0 systemd[1]: Started libpod-conmon-144f3c84aa90cfc654e56ba45dba6955a6145a159d60d182b860a78f03337fc6.scope.
Oct 02 13:30:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:30:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c20e0a6abad9f48332cd73af4c97f0d3dd9bab0e6c44e1f46d4912cfad78984b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:30:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c20e0a6abad9f48332cd73af4c97f0d3dd9bab0e6c44e1f46d4912cfad78984b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:30:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c20e0a6abad9f48332cd73af4c97f0d3dd9bab0e6c44e1f46d4912cfad78984b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:30:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c20e0a6abad9f48332cd73af4c97f0d3dd9bab0e6c44e1f46d4912cfad78984b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:30:28 compute-0 podman[435676]: 2025-10-02 13:30:28.909727198 +0000 UTC m=+0.094864383 container init 144f3c84aa90cfc654e56ba45dba6955a6145a159d60d182b860a78f03337fc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_williamson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:30:28 compute-0 podman[435676]: 2025-10-02 13:30:28.917741513 +0000 UTC m=+0.102878698 container start 144f3c84aa90cfc654e56ba45dba6955a6145a159d60d182b860a78f03337fc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 13:30:28 compute-0 podman[435676]: 2025-10-02 13:30:28.932584877 +0000 UTC m=+0.117722062 container attach 144f3c84aa90cfc654e56ba45dba6955a6145a159d60d182b860a78f03337fc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_williamson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 13:30:28 compute-0 podman[435676]: 2025-10-02 13:30:28.8387222 +0000 UTC m=+0.023859405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:30:29 compute-0 nova_compute[257802]: 2025-10-02 13:30:29.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:30:29 compute-0 nova_compute[257802]: 2025-10-02 13:30:29.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 13:30:29 compute-0 nova_compute[257802]: 2025-10-02 13:30:29.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:29.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3985: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:29 compute-0 ceph-mon[73607]: pgmap v3985: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:29.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:29 compute-0 confident_williamson[435692]: {
Oct 02 13:30:29 compute-0 confident_williamson[435692]:     "1": [
Oct 02 13:30:29 compute-0 confident_williamson[435692]:         {
Oct 02 13:30:29 compute-0 confident_williamson[435692]:             "devices": [
Oct 02 13:30:29 compute-0 confident_williamson[435692]:                 "/dev/loop3"
Oct 02 13:30:29 compute-0 confident_williamson[435692]:             ],
Oct 02 13:30:29 compute-0 confident_williamson[435692]:             "lv_name": "ceph_lv0",
Oct 02 13:30:29 compute-0 confident_williamson[435692]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:30:29 compute-0 confident_williamson[435692]:             "lv_size": "7511998464",
Oct 02 13:30:29 compute-0 confident_williamson[435692]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:30:29 compute-0 confident_williamson[435692]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:30:29 compute-0 confident_williamson[435692]:             "name": "ceph_lv0",
Oct 02 13:30:29 compute-0 confident_williamson[435692]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:30:29 compute-0 confident_williamson[435692]:             "tags": {
Oct 02 13:30:29 compute-0 confident_williamson[435692]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:30:29 compute-0 confident_williamson[435692]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:30:29 compute-0 confident_williamson[435692]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:30:29 compute-0 confident_williamson[435692]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:30:29 compute-0 confident_williamson[435692]:                 "ceph.cluster_name": "ceph",
Oct 02 13:30:29 compute-0 confident_williamson[435692]:                 "ceph.crush_device_class": "",
Oct 02 13:30:29 compute-0 confident_williamson[435692]:                 "ceph.encrypted": "0",
Oct 02 13:30:29 compute-0 confident_williamson[435692]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:30:29 compute-0 confident_williamson[435692]:                 "ceph.osd_id": "1",
Oct 02 13:30:29 compute-0 confident_williamson[435692]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:30:29 compute-0 confident_williamson[435692]:                 "ceph.type": "block",
Oct 02 13:30:29 compute-0 confident_williamson[435692]:                 "ceph.vdo": "0"
Oct 02 13:30:29 compute-0 confident_williamson[435692]:             },
Oct 02 13:30:29 compute-0 confident_williamson[435692]:             "type": "block",
Oct 02 13:30:29 compute-0 confident_williamson[435692]:             "vg_name": "ceph_vg0"
Oct 02 13:30:29 compute-0 confident_williamson[435692]:         }
Oct 02 13:30:29 compute-0 confident_williamson[435692]:     ]
Oct 02 13:30:29 compute-0 confident_williamson[435692]: }
Oct 02 13:30:29 compute-0 systemd[1]: libpod-144f3c84aa90cfc654e56ba45dba6955a6145a159d60d182b860a78f03337fc6.scope: Deactivated successfully.
Oct 02 13:30:29 compute-0 podman[435676]: 2025-10-02 13:30:29.709265171 +0000 UTC m=+0.894402346 container died 144f3c84aa90cfc654e56ba45dba6955a6145a159d60d182b860a78f03337fc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_williamson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 13:30:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c20e0a6abad9f48332cd73af4c97f0d3dd9bab0e6c44e1f46d4912cfad78984b-merged.mount: Deactivated successfully.
Oct 02 13:30:29 compute-0 podman[435676]: 2025-10-02 13:30:29.765592429 +0000 UTC m=+0.950729604 container remove 144f3c84aa90cfc654e56ba45dba6955a6145a159d60d182b860a78f03337fc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:30:29 compute-0 systemd[1]: libpod-conmon-144f3c84aa90cfc654e56ba45dba6955a6145a159d60d182b860a78f03337fc6.scope: Deactivated successfully.
Oct 02 13:30:29 compute-0 sudo[435571]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:29 compute-0 sudo[435713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:30:29 compute-0 sudo[435713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:29 compute-0 sudo[435713]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:29 compute-0 sudo[435738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:30:29 compute-0 sudo[435738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:29 compute-0 sudo[435738]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:30 compute-0 sudo[435763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:30:30 compute-0 sudo[435763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:30 compute-0 sudo[435763]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:30 compute-0 sudo[435788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 13:30:30 compute-0 sudo[435788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:30 compute-0 podman[435854]: 2025-10-02 13:30:30.395727677 +0000 UTC m=+0.039409855 container create 37ece102ce17f75cb1598a86be46de0dac0f9448fb9e08f3e6f3c5b06cbf7599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:30:30 compute-0 systemd[1]: Started libpod-conmon-37ece102ce17f75cb1598a86be46de0dac0f9448fb9e08f3e6f3c5b06cbf7599.scope.
Oct 02 13:30:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:30:30 compute-0 podman[435854]: 2025-10-02 13:30:30.464311916 +0000 UTC m=+0.107994114 container init 37ece102ce17f75cb1598a86be46de0dac0f9448fb9e08f3e6f3c5b06cbf7599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ishizaka, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 13:30:30 compute-0 podman[435854]: 2025-10-02 13:30:30.470739823 +0000 UTC m=+0.114422001 container start 37ece102ce17f75cb1598a86be46de0dac0f9448fb9e08f3e6f3c5b06cbf7599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 13:30:30 compute-0 eager_ishizaka[435871]: 167 167
Oct 02 13:30:30 compute-0 podman[435854]: 2025-10-02 13:30:30.378869715 +0000 UTC m=+0.022551923 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:30:30 compute-0 systemd[1]: libpod-37ece102ce17f75cb1598a86be46de0dac0f9448fb9e08f3e6f3c5b06cbf7599.scope: Deactivated successfully.
Oct 02 13:30:30 compute-0 conmon[435871]: conmon 37ece102ce17f75cb159 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-37ece102ce17f75cb1598a86be46de0dac0f9448fb9e08f3e6f3c5b06cbf7599.scope/container/memory.events
Oct 02 13:30:30 compute-0 podman[435854]: 2025-10-02 13:30:30.477675163 +0000 UTC m=+0.121357361 container attach 37ece102ce17f75cb1598a86be46de0dac0f9448fb9e08f3e6f3c5b06cbf7599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ishizaka, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:30:30 compute-0 podman[435854]: 2025-10-02 13:30:30.47798816 +0000 UTC m=+0.121670338 container died 37ece102ce17f75cb1598a86be46de0dac0f9448fb9e08f3e6f3c5b06cbf7599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ishizaka, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 13:30:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcbbdde1ef3c42e33a27bf92cbc20856e743dec92ec8ae8ad699a4933ee934a1-merged.mount: Deactivated successfully.
Oct 02 13:30:30 compute-0 podman[435854]: 2025-10-02 13:30:30.527188034 +0000 UTC m=+0.170870212 container remove 37ece102ce17f75cb1598a86be46de0dac0f9448fb9e08f3e6f3c5b06cbf7599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:30:30 compute-0 systemd[1]: libpod-conmon-37ece102ce17f75cb1598a86be46de0dac0f9448fb9e08f3e6f3c5b06cbf7599.scope: Deactivated successfully.
Oct 02 13:30:30 compute-0 podman[435897]: 2025-10-02 13:30:30.674891078 +0000 UTC m=+0.040167783 container create 9e2b9b3d283039039fc9d9e80e4e770d4f4a523b0532ffd7ae6b91650a5aadd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 13:30:30 compute-0 systemd[1]: Started libpod-conmon-9e2b9b3d283039039fc9d9e80e4e770d4f4a523b0532ffd7ae6b91650a5aadd8.scope.
Oct 02 13:30:30 compute-0 podman[435897]: 2025-10-02 13:30:30.657258727 +0000 UTC m=+0.022535452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:30:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:30:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d7ed090f1ee7d93929fd6aabf9f05fb04e00b906862069cf3fcfaa0682df9e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:30:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d7ed090f1ee7d93929fd6aabf9f05fb04e00b906862069cf3fcfaa0682df9e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:30:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d7ed090f1ee7d93929fd6aabf9f05fb04e00b906862069cf3fcfaa0682df9e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:30:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d7ed090f1ee7d93929fd6aabf9f05fb04e00b906862069cf3fcfaa0682df9e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:30:30 compute-0 podman[435897]: 2025-10-02 13:30:30.778559534 +0000 UTC m=+0.143836269 container init 9e2b9b3d283039039fc9d9e80e4e770d4f4a523b0532ffd7ae6b91650a5aadd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 13:30:30 compute-0 podman[435897]: 2025-10-02 13:30:30.812075944 +0000 UTC m=+0.177352649 container start 9e2b9b3d283039039fc9d9e80e4e770d4f4a523b0532ffd7ae6b91650a5aadd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cohen, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 13:30:30 compute-0 podman[435897]: 2025-10-02 13:30:30.815441707 +0000 UTC m=+0.180718432 container attach 9e2b9b3d283039039fc9d9e80e4e770d4f4a523b0532ffd7ae6b91650a5aadd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cohen, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 13:30:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:30:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:31.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:30:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3986: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:31 compute-0 ceph-mon[73607]: pgmap v3986: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:31.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:31 compute-0 beautiful_cohen[435913]: {
Oct 02 13:30:31 compute-0 beautiful_cohen[435913]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 13:30:31 compute-0 beautiful_cohen[435913]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:30:31 compute-0 beautiful_cohen[435913]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:30:31 compute-0 beautiful_cohen[435913]:         "osd_id": 1,
Oct 02 13:30:31 compute-0 beautiful_cohen[435913]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:30:31 compute-0 beautiful_cohen[435913]:         "type": "bluestore"
Oct 02 13:30:31 compute-0 beautiful_cohen[435913]:     }
Oct 02 13:30:31 compute-0 beautiful_cohen[435913]: }
Oct 02 13:30:31 compute-0 podman[435897]: 2025-10-02 13:30:31.686950691 +0000 UTC m=+1.052227406 container died 9e2b9b3d283039039fc9d9e80e4e770d4f4a523b0532ffd7ae6b91650a5aadd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:30:31 compute-0 systemd[1]: libpod-9e2b9b3d283039039fc9d9e80e4e770d4f4a523b0532ffd7ae6b91650a5aadd8.scope: Deactivated successfully.
Oct 02 13:30:31 compute-0 nova_compute[257802]: 2025-10-02 13:30:31.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d7ed090f1ee7d93929fd6aabf9f05fb04e00b906862069cf3fcfaa0682df9e4-merged.mount: Deactivated successfully.
Oct 02 13:30:31 compute-0 podman[435897]: 2025-10-02 13:30:31.75556094 +0000 UTC m=+1.120837645 container remove 9e2b9b3d283039039fc9d9e80e4e770d4f4a523b0532ffd7ae6b91650a5aadd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cohen, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 13:30:31 compute-0 systemd[1]: libpod-conmon-9e2b9b3d283039039fc9d9e80e4e770d4f4a523b0532ffd7ae6b91650a5aadd8.scope: Deactivated successfully.
Oct 02 13:30:31 compute-0 sudo[435788]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:30:31 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:30:31 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:30:31 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:30:31 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev ac7d1ccf-d9ab-4254-897b-9a2de3efabdd does not exist
Oct 02 13:30:31 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 2aaa6de2-e8a3-4767-be03-f01f50186b8d does not exist
Oct 02 13:30:31 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 9b30b45f-8b53-482c-8302-a274ac118457 does not exist
Oct 02 13:30:31 compute-0 sudo[435947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:30:31 compute-0 sudo[435947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:31 compute-0 sudo[435947]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:32 compute-0 sudo[435972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:30:32 compute-0 sudo[435972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:32 compute-0 sudo[435972]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:32 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:30:32 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:30:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3987: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:33.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:33.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:33 compute-0 ceph-mon[73607]: pgmap v3987: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:34 compute-0 nova_compute[257802]: 2025-10-02 13:30:34.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3988: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:35.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:35 compute-0 ceph-mon[73607]: pgmap v3988: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:35.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:36 compute-0 sudo[435999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:30:36 compute-0 sudo[435999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:36 compute-0 sudo[435999]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:36 compute-0 sudo[436024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:30:36 compute-0 sudo[436024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:36 compute-0 sudo[436024]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:36 compute-0 nova_compute[257802]: 2025-10-02 13:30:36.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3989: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:30:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:37.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:30:37 compute-0 ceph-mon[73607]: pgmap v3989: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:37.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:39 compute-0 nova_compute[257802]: 2025-10-02 13:30:39.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3990: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:39.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:39 compute-0 ceph-mon[73607]: pgmap v3990: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:39.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3991: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:41.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:30:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:41.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:30:41 compute-0 ceph-mon[73607]: pgmap v3991: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:41 compute-0 nova_compute[257802]: 2025-10-02 13:30:41.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:30:42
Oct 02 13:30:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:30:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:30:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'images', 'default.rgw.meta', 'backups', 'default.rgw.log', '.rgw.root']
Oct 02 13:30:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:30:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:30:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:30:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:30:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:30:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:30:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:30:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #201. Immutable memtables: 0.
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:30:43.189036) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 125] Flushing memtable with next log file: 201
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411843189065, "job": 125, "event": "flush_started", "num_memtables": 1, "num_entries": 744, "num_deletes": 251, "total_data_size": 983568, "memory_usage": 998048, "flush_reason": "Manual Compaction"}
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 125] Level-0 flush table #202: started
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411843194112, "cf_name": "default", "job": 125, "event": "table_file_creation", "file_number": 202, "file_size": 635511, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 87656, "largest_seqno": 88399, "table_properties": {"data_size": 632308, "index_size": 1046, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8764, "raw_average_key_size": 20, "raw_value_size": 625466, "raw_average_value_size": 1478, "num_data_blocks": 46, "num_entries": 423, "num_filter_entries": 423, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411783, "oldest_key_time": 1759411783, "file_creation_time": 1759411843, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 202, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 125] Flush lasted 5662 microseconds, and 3150 cpu microseconds.
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:30:43.194695) [db/flush_job.cc:967] [default] [JOB 125] Level-0 flush table #202: 635511 bytes OK
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:30:43.194712) [db/memtable_list.cc:519] [default] Level-0 commit table #202 started
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:30:43.195836) [db/memtable_list.cc:722] [default] Level-0 commit table #202: memtable #1 done
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:30:43.195848) EVENT_LOG_v1 {"time_micros": 1759411843195844, "job": 125, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:30:43.195866) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 125] Try to delete WAL files size 979861, prev total WAL file size 979861, number of live WAL files 2.
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000198.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:30:43.196309) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323632' seq:72057594037927935, type:22 .. '6D6772737461740033353134' seq:0, type:0; will stop at (end)
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 126] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 125 Base level 0, inputs: [202(620KB)], [200(14MB)]
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411843196336, "job": 126, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [202], "files_L6": [200], "score": -1, "input_data_size": 15393588, "oldest_snapshot_seqno": -1}
Oct 02 13:30:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3992: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:30:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:43.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 126] Generated table #203: 11648 keys, 11879931 bytes, temperature: kUnknown
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411843275939, "cf_name": "default", "job": 126, "event": "table_file_creation", "file_number": 203, "file_size": 11879931, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11809762, "index_size": 39976, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29125, "raw_key_size": 307734, "raw_average_key_size": 26, "raw_value_size": 11611171, "raw_average_value_size": 996, "num_data_blocks": 1503, "num_entries": 11648, "num_filter_entries": 11648, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759411843, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 203, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:30:43.276268) [db/compaction/compaction_job.cc:1663] [default] [JOB 126] Compacted 1@0 + 1@6 files to L6 => 11879931 bytes
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:30:43.277711) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 193.2 rd, 149.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 14.1 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(42.9) write-amplify(18.7) OK, records in: 12141, records dropped: 493 output_compression: NoCompression
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:30:43.277740) EVENT_LOG_v1 {"time_micros": 1759411843277727, "job": 126, "event": "compaction_finished", "compaction_time_micros": 79687, "compaction_time_cpu_micros": 28285, "output_level": 6, "num_output_files": 1, "total_output_size": 11879931, "num_input_records": 12141, "num_output_records": 11648, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000202.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411843278140, "job": 126, "event": "table_file_deletion", "file_number": 202}
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000200.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411843282509, "job": 126, "event": "table_file_deletion", "file_number": 200}
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:30:43.196239) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:30:43.282599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:30:43.282606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:30:43.282608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:30:43.282609) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:30:43 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:30:43.282611) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:30:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:43.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:30:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:30:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:30:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:30:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:30:44 compute-0 ceph-mon[73607]: pgmap v3992: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:44 compute-0 nova_compute[257802]: 2025-10-02 13:30:44.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:30:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:30:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:30:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:30:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:30:44 compute-0 podman[436054]: 2025-10-02 13:30:44.926737256 +0000 UTC m=+0.056412051 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:30:44 compute-0 podman[436055]: 2025-10-02 13:30:44.935323586 +0000 UTC m=+0.064856207 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251001)
Oct 02 13:30:44 compute-0 podman[436056]: 2025-10-02 13:30:44.955930711 +0000 UTC m=+0.083686719 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:30:45 compute-0 nova_compute[257802]: 2025-10-02 13:30:45.115 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:30:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3993: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:45.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:30:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:45.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:30:45 compute-0 ceph-mon[73607]: pgmap v3993: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:46 compute-0 nova_compute[257802]: 2025-10-02 13:30:46.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:47 compute-0 nova_compute[257802]: 2025-10-02 13:30:47.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:30:47 compute-0 nova_compute[257802]: 2025-10-02 13:30:47.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:30:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3994: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:30:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:47.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:30:47 compute-0 ceph-mon[73607]: pgmap v3994: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:47.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:30:49 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.0 total, 600.0 interval
                                           Cumulative writes: 20K writes, 88K keys, 20K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.02 MB/s
                                           Cumulative WAL: 20K writes, 20K syncs, 1.00 writes per sync, written: 0.13 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1503 writes, 6943 keys, 1503 commit groups, 1.0 writes per commit group, ingest: 10.37 MB, 0.02 MB/s
                                           Interval WAL: 1503 writes, 1503 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     47.0      2.53              0.33        63    0.040       0      0       0.0       0.0
                                             L6      1/0   11.33 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.5    112.6     96.7      6.77              1.80        62    0.109    506K    33K       0.0       0.0
                                            Sum      1/0   11.33 MB   0.0      0.7     0.1      0.6       0.8      0.1       0.0   6.5     82.0     83.2      9.30              2.13       125    0.074    506K    33K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.1    120.4    118.8      0.70              0.22        12    0.059     71K   2983       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0    112.6     96.7      6.77              1.80        62    0.109    506K    33K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     47.1      2.52              0.33        62    0.041       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 7200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.116, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.76 GB write, 0.11 MB/s write, 0.74 GB read, 0.11 MB/s read, 9.3 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5581be5e11f0#2 capacity: 304.00 MB usage: 83.93 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.000515 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(5225,80.35 MB,26.4317%) FilterBlock(126,1.36 MB,0.447098%) IndexBlock(126,2.22 MB,0.730424%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 13:30:49 compute-0 nova_compute[257802]: 2025-10-02 13:30:49.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3995: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:49.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:49 compute-0 ceph-mon[73607]: pgmap v3995: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:49.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3996: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:30:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:51.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:30:51 compute-0 ceph-mon[73607]: pgmap v3996: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:51.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:51 compute-0 nova_compute[257802]: 2025-10-02 13:30:51.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:53 compute-0 nova_compute[257802]: 2025-10-02 13:30:53.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:30:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3997: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:53.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:53 compute-0 ceph-mon[73607]: pgmap v3997: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:30:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:53.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:30:54 compute-0 nova_compute[257802]: 2025-10-02 13:30:54.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3996399624' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:55 compute-0 nova_compute[257802]: 2025-10-02 13:30:55.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:30:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:30:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:30:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:30:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:30:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:30:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:30:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:30:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:30:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:30:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:30:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:30:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:30:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:30:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:30:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:30:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:30:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:30:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:30:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:30:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:30:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:30:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:30:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:30:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3998: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:55.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:30:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:55.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:30:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2584588499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1122462114' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:30:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/1122462114' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:30:55 compute-0 ceph-mon[73607]: pgmap v3998: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:55 compute-0 podman[436118]: 2025-10-02 13:30:55.934442377 +0000 UTC m=+0.075507129 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:30:56 compute-0 sudo[436144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:30:56 compute-0 sudo[436144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:56 compute-0 sudo[436144]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:56 compute-0 sudo[436169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:30:56 compute-0 sudo[436169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:56 compute-0 sudo[436169]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:56 compute-0 nova_compute[257802]: 2025-10-02 13:30:56.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:57 compute-0 nova_compute[257802]: 2025-10-02 13:30:57.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:30:57 compute-0 nova_compute[257802]: 2025-10-02 13:30:57.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:30:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3999: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:57.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:57 compute-0 ceph-mon[73607]: pgmap v3999: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:30:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:57.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:30:58 compute-0 nova_compute[257802]: 2025-10-02 13:30:58.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:30:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:59 compute-0 nova_compute[257802]: 2025-10-02 13:30:59.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4000: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:30:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:59.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:30:59 compute-0 ceph-mon[73607]: pgmap v4000: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:30:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:59.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4001: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:01.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:31:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:01.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:31:01 compute-0 ceph-mon[73607]: pgmap v4001: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:01 compute-0 nova_compute[257802]: 2025-10-02 13:31:01.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4002: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:31:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:03.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:31:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:03.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:03 compute-0 ceph-mon[73607]: pgmap v4002: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:04 compute-0 nova_compute[257802]: 2025-10-02 13:31:04.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1898096587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:31:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1579476215' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:31:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4003: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:31:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:05.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:31:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:05.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:05 compute-0 ceph-mon[73607]: pgmap v4003: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:06 compute-0 nova_compute[257802]: 2025-10-02 13:31:06.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:31:06 compute-0 nova_compute[257802]: 2025-10-02 13:31:06.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:31:06 compute-0 nova_compute[257802]: 2025-10-02 13:31:06.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:31:06 compute-0 nova_compute[257802]: 2025-10-02 13:31:06.152 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:31:06 compute-0 nova_compute[257802]: 2025-10-02 13:31:06.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4004: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:31:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:07.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:31:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:07.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:07 compute-0 ceph-mon[73607]: pgmap v4004: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4005: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:09 compute-0 nova_compute[257802]: 2025-10-02 13:31:09.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:31:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:09.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:31:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:31:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:09.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:31:09 compute-0 ceph-mon[73607]: pgmap v4005: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4006: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:31:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:11.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:31:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:11.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:11 compute-0 ceph-mon[73607]: pgmap v4006: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:11 compute-0 nova_compute[257802]: 2025-10-02 13:31:11.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:12 compute-0 nova_compute[257802]: 2025-10-02 13:31:12.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:31:12 compute-0 nova_compute[257802]: 2025-10-02 13:31:12.718 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:31:12 compute-0 nova_compute[257802]: 2025-10-02 13:31:12.719 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:31:12 compute-0 nova_compute[257802]: 2025-10-02 13:31:12.719 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:31:12 compute-0 nova_compute[257802]: 2025-10-02 13:31:12.719 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:31:12 compute-0 nova_compute[257802]: 2025-10-02 13:31:12.719 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:31:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:31:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:31:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:31:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:31:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:31:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:31:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:31:13 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1766209310' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:31:13 compute-0 nova_compute[257802]: 2025-10-02 13:31:13.191 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:31:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4007: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:13.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:13 compute-0 nova_compute[257802]: 2025-10-02 13:31:13.334 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:31:13 compute-0 nova_compute[257802]: 2025-10-02 13:31:13.336 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4177MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:31:13 compute-0 nova_compute[257802]: 2025-10-02 13:31:13.336 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:31:13 compute-0 nova_compute[257802]: 2025-10-02 13:31:13.336 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:31:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1766209310' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:31:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:31:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:13.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:31:13 compute-0 nova_compute[257802]: 2025-10-02 13:31:13.426 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:31:13 compute-0 nova_compute[257802]: 2025-10-02 13:31:13.426 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:31:13 compute-0 nova_compute[257802]: 2025-10-02 13:31:13.447 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing inventories for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 13:31:13 compute-0 nova_compute[257802]: 2025-10-02 13:31:13.469 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating ProviderTree inventory for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 13:31:13 compute-0 nova_compute[257802]: 2025-10-02 13:31:13.469 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Updating inventory in ProviderTree for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 13:31:13 compute-0 nova_compute[257802]: 2025-10-02 13:31:13.506 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing aggregate associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 13:31:13 compute-0 nova_compute[257802]: 2025-10-02 13:31:13.536 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Refreshing trait associations for resource provider a293a24c-b5ed-43d1-8783-f02da4f75ad4, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 13:31:13 compute-0 nova_compute[257802]: 2025-10-02 13:31:13.578 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:31:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:31:13 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3869508247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:31:14 compute-0 nova_compute[257802]: 2025-10-02 13:31:14.015 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:31:14 compute-0 nova_compute[257802]: 2025-10-02 13:31:14.020 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:31:14 compute-0 nova_compute[257802]: 2025-10-02 13:31:14.067 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:31:14 compute-0 nova_compute[257802]: 2025-10-02 13:31:14.069 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:31:14 compute-0 nova_compute[257802]: 2025-10-02 13:31:14.069 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:31:14 compute-0 nova_compute[257802]: 2025-10-02 13:31:14.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:14 compute-0 ceph-mon[73607]: pgmap v4007: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3869508247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:31:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4008: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:15.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:15.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:15 compute-0 podman[436248]: 2025-10-02 13:31:15.920457776 +0000 UTC m=+0.060352398 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 13:31:15 compute-0 podman[436250]: 2025-10-02 13:31:15.928580695 +0000 UTC m=+0.062674255 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:31:15 compute-0 podman[436249]: 2025-10-02 13:31:15.928906843 +0000 UTC m=+0.067492783 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251001, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 13:31:16 compute-0 ceph-mon[73607]: pgmap v4008: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:16 compute-0 sudo[436302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:16 compute-0 sudo[436302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:16 compute-0 sudo[436302]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:16 compute-0 sudo[436327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:16 compute-0 sudo[436327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:16 compute-0 sudo[436327]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:16 compute-0 nova_compute[257802]: 2025-10-02 13:31:16.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4009: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:17.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:17.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:17 compute-0 ceph-mon[73607]: pgmap v4009: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #204. Immutable memtables: 0.
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:31:18.412627) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 127] Flushing memtable with next log file: 204
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411878412656, "job": 127, "event": "flush_started", "num_memtables": 1, "num_entries": 556, "num_deletes": 256, "total_data_size": 611291, "memory_usage": 621896, "flush_reason": "Manual Compaction"}
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 127] Level-0 flush table #205: started
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411878418743, "cf_name": "default", "job": 127, "event": "table_file_creation", "file_number": 205, "file_size": 604704, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 88400, "largest_seqno": 88955, "table_properties": {"data_size": 601723, "index_size": 952, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6919, "raw_average_key_size": 18, "raw_value_size": 595657, "raw_average_value_size": 1584, "num_data_blocks": 44, "num_entries": 376, "num_filter_entries": 376, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411844, "oldest_key_time": 1759411844, "file_creation_time": 1759411878, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 205, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 127] Flush lasted 6163 microseconds, and 2619 cpu microseconds.
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:31:18.418783) [db/flush_job.cc:967] [default] [JOB 127] Level-0 flush table #205: 604704 bytes OK
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:31:18.418804) [db/memtable_list.cc:519] [default] Level-0 commit table #205 started
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:31:18.420466) [db/memtable_list.cc:722] [default] Level-0 commit table #205: memtable #1 done
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:31:18.420483) EVENT_LOG_v1 {"time_micros": 1759411878420477, "job": 127, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:31:18.420500) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 127] Try to delete WAL files size 608211, prev total WAL file size 608211, number of live WAL files 2.
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000201.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:31:18.421156) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373630' seq:72057594037927935, type:22 .. '6C6F676D0034303132' seq:0, type:0; will stop at (end)
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 128] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 127 Base level 0, inputs: [205(590KB)], [203(11MB)]
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411878421211, "job": 128, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [205], "files_L6": [203], "score": -1, "input_data_size": 12484635, "oldest_snapshot_seqno": -1}
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 128] Generated table #206: 11500 keys, 12368214 bytes, temperature: kUnknown
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411878520018, "cf_name": "default", "job": 128, "event": "table_file_creation", "file_number": 206, "file_size": 12368214, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12297934, "index_size": 40407, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28805, "raw_key_size": 305585, "raw_average_key_size": 26, "raw_value_size": 12100735, "raw_average_value_size": 1052, "num_data_blocks": 1519, "num_entries": 11500, "num_filter_entries": 11500, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759411878, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 206, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:31:18.521087) [db/compaction/compaction_job.cc:1663] [default] [JOB 128] Compacted 1@0 + 1@6 files to L6 => 12368214 bytes
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:31:18.526946) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 126.2 rd, 125.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 11.3 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(41.1) write-amplify(20.5) OK, records in: 12024, records dropped: 524 output_compression: NoCompression
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:31:18.527009) EVENT_LOG_v1 {"time_micros": 1759411878526964, "job": 128, "event": "compaction_finished", "compaction_time_micros": 98909, "compaction_time_cpu_micros": 29581, "output_level": 6, "num_output_files": 1, "total_output_size": 12368214, "num_input_records": 12024, "num_output_records": 11500, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000205.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411878527242, "job": 128, "event": "table_file_deletion", "file_number": 205}
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000203.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411878528810, "job": 128, "event": "table_file_deletion", "file_number": 203}
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:31:18.421019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:31:18.529053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:31:18.529068) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:31:18.529073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:31:18.529076) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:31:18 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:31:18.529079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:31:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4010: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:31:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:19.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:31:19 compute-0 nova_compute[257802]: 2025-10-02 13:31:19.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:19.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:19 compute-0 ceph-mon[73607]: pgmap v4010: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4011: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:21.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:21.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:21 compute-0 ceph-mon[73607]: pgmap v4011: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:21 compute-0 nova_compute[257802]: 2025-10-02 13:31:21.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4012: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:23.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:23 compute-0 ceph-mon[73607]: pgmap v4012: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:23.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:24 compute-0 nova_compute[257802]: 2025-10-02 13:31:24.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4013: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:31:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:25.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:31:25 compute-0 ceph-mon[73607]: pgmap v4013: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:25.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:26 compute-0 podman[436358]: 2025-10-02 13:31:26.931285423 +0000 UTC m=+0.076174894 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:31:26 compute-0 nova_compute[257802]: 2025-10-02 13:31:26.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:31:27.019 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:31:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:31:27.019 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:31:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:31:27.019 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:31:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4014: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:27.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:27 compute-0 ceph-mon[73607]: pgmap v4014: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:27.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4015: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:29.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:29 compute-0 nova_compute[257802]: 2025-10-02 13:31:29.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:29 compute-0 ceph-mon[73607]: pgmap v4015: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:29.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:30 compute-0 nova_compute[257802]: 2025-10-02 13:31:30.069 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:31:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4016: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:31.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:31 compute-0 ceph-mon[73607]: pgmap v4016: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:31:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:31.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:31:31 compute-0 nova_compute[257802]: 2025-10-02 13:31:31.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:32 compute-0 sudo[436387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:32 compute-0 sudo[436387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:32 compute-0 sudo[436387]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:32 compute-0 sudo[436412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:31:32 compute-0 sudo[436412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:32 compute-0 sudo[436412]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:32 compute-0 sudo[436437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:32 compute-0 sudo[436437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:32 compute-0 sudo[436437]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:32 compute-0 sudo[436462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:31:32 compute-0 sudo[436462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:32 compute-0 sudo[436462]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:31:33 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:31:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:31:33 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:31:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:31:33 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:31:33 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev d442204c-321c-4921-a0a3-9749372e5167 does not exist
Oct 02 13:31:33 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 43094067-154f-4f2c-a07a-edb79aaddf94 does not exist
Oct 02 13:31:33 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev d018476a-2bc3-4ca4-8f05-1ec70c34d2ef does not exist
Oct 02 13:31:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:31:33 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:31:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:31:33 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:31:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:31:33 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:31:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:33 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:31:33 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:31:33 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:31:33 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:31:33 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:31:33 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:31:33 compute-0 sudo[436518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:33 compute-0 sudo[436518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:33 compute-0 sudo[436518]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4017: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:33 compute-0 sudo[436543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:31:33 compute-0 sudo[436543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:33 compute-0 sudo[436543]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.002000048s ======
Oct 02 13:31:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:33.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Oct 02 13:31:33 compute-0 sudo[436568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:33 compute-0 sudo[436568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:33 compute-0 sudo[436568]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:33 compute-0 sudo[436593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:31:33 compute-0 sudo[436593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:33.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:33 compute-0 podman[436660]: 2025-10-02 13:31:33.713465494 +0000 UTC m=+0.061386982 container create 784759edc304b8fc8ac5bb9b908b3c41cb6447ecbacf6f7e1e81bf983835206b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_fermi, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:31:33 compute-0 systemd[1]: Started libpod-conmon-784759edc304b8fc8ac5bb9b908b3c41cb6447ecbacf6f7e1e81bf983835206b.scope.
Oct 02 13:31:33 compute-0 podman[436660]: 2025-10-02 13:31:33.673711272 +0000 UTC m=+0.021632780 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:31:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:31:33 compute-0 podman[436660]: 2025-10-02 13:31:33.836163917 +0000 UTC m=+0.184085405 container init 784759edc304b8fc8ac5bb9b908b3c41cb6447ecbacf6f7e1e81bf983835206b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 13:31:33 compute-0 podman[436660]: 2025-10-02 13:31:33.846461698 +0000 UTC m=+0.194383186 container start 784759edc304b8fc8ac5bb9b908b3c41cb6447ecbacf6f7e1e81bf983835206b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 13:31:33 compute-0 podman[436660]: 2025-10-02 13:31:33.852297502 +0000 UTC m=+0.200219000 container attach 784759edc304b8fc8ac5bb9b908b3c41cb6447ecbacf6f7e1e81bf983835206b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:31:33 compute-0 elegant_fermi[436677]: 167 167
Oct 02 13:31:33 compute-0 systemd[1]: libpod-784759edc304b8fc8ac5bb9b908b3c41cb6447ecbacf6f7e1e81bf983835206b.scope: Deactivated successfully.
Oct 02 13:31:33 compute-0 podman[436660]: 2025-10-02 13:31:33.854003324 +0000 UTC m=+0.201924812 container died 784759edc304b8fc8ac5bb9b908b3c41cb6447ecbacf6f7e1e81bf983835206b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_fermi, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 13:31:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-43ea633c8736efa1c5b8f867f4d200d3697198508bd79154435287280ba09149-merged.mount: Deactivated successfully.
Oct 02 13:31:33 compute-0 podman[436660]: 2025-10-02 13:31:33.968178677 +0000 UTC m=+0.316100165 container remove 784759edc304b8fc8ac5bb9b908b3c41cb6447ecbacf6f7e1e81bf983835206b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_fermi, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:31:33 compute-0 systemd[1]: libpod-conmon-784759edc304b8fc8ac5bb9b908b3c41cb6447ecbacf6f7e1e81bf983835206b.scope: Deactivated successfully.
Oct 02 13:31:34 compute-0 podman[436701]: 2025-10-02 13:31:34.118366512 +0000 UTC m=+0.029560685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:31:34 compute-0 podman[436701]: 2025-10-02 13:31:34.261492774 +0000 UTC m=+0.172686967 container create bb436e3f2686f5e8c61e66fb9b31e48d40298804ba22eb7b9df258c0c4c5081a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 13:31:34 compute-0 nova_compute[257802]: 2025-10-02 13:31:34.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:34 compute-0 ceph-mon[73607]: pgmap v4017: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:34 compute-0 systemd[1]: Started libpod-conmon-bb436e3f2686f5e8c61e66fb9b31e48d40298804ba22eb7b9df258c0c4c5081a.scope.
Oct 02 13:31:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:31:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1adef3650b7a8a9acda86da4585a7f102017b338e9ad2545b7ed13899c2c5de6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:31:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1adef3650b7a8a9acda86da4585a7f102017b338e9ad2545b7ed13899c2c5de6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:31:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1adef3650b7a8a9acda86da4585a7f102017b338e9ad2545b7ed13899c2c5de6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:31:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1adef3650b7a8a9acda86da4585a7f102017b338e9ad2545b7ed13899c2c5de6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:31:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1adef3650b7a8a9acda86da4585a7f102017b338e9ad2545b7ed13899c2c5de6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:31:34 compute-0 podman[436701]: 2025-10-02 13:31:34.557964498 +0000 UTC m=+0.469158661 container init bb436e3f2686f5e8c61e66fb9b31e48d40298804ba22eb7b9df258c0c4c5081a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:31:34 compute-0 podman[436701]: 2025-10-02 13:31:34.570532666 +0000 UTC m=+0.481726819 container start bb436e3f2686f5e8c61e66fb9b31e48d40298804ba22eb7b9df258c0c4c5081a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 13:31:34 compute-0 podman[436701]: 2025-10-02 13:31:34.589004108 +0000 UTC m=+0.500198291 container attach bb436e3f2686f5e8c61e66fb9b31e48d40298804ba22eb7b9df258c0c4c5081a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:31:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4018: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:35.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:31:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:35.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:31:35 compute-0 gallant_feynman[436717]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:31:35 compute-0 gallant_feynman[436717]: --> relative data size: 1.0
Oct 02 13:31:35 compute-0 gallant_feynman[436717]: --> All data devices are unavailable
Oct 02 13:31:35 compute-0 systemd[1]: libpod-bb436e3f2686f5e8c61e66fb9b31e48d40298804ba22eb7b9df258c0c4c5081a.scope: Deactivated successfully.
Oct 02 13:31:35 compute-0 podman[436701]: 2025-10-02 13:31:35.468153719 +0000 UTC m=+1.379347872 container died bb436e3f2686f5e8c61e66fb9b31e48d40298804ba22eb7b9df258c0c4c5081a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:31:35 compute-0 ceph-mon[73607]: pgmap v4018: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-1adef3650b7a8a9acda86da4585a7f102017b338e9ad2545b7ed13899c2c5de6-merged.mount: Deactivated successfully.
Oct 02 13:31:35 compute-0 podman[436701]: 2025-10-02 13:31:35.918742154 +0000 UTC m=+1.829936297 container remove bb436e3f2686f5e8c61e66fb9b31e48d40298804ba22eb7b9df258c0c4c5081a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 13:31:35 compute-0 systemd[1]: libpod-conmon-bb436e3f2686f5e8c61e66fb9b31e48d40298804ba22eb7b9df258c0c4c5081a.scope: Deactivated successfully.
Oct 02 13:31:35 compute-0 sudo[436593]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:36 compute-0 sudo[436747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:36 compute-0 sudo[436747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:36 compute-0 sudo[436747]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:36 compute-0 sudo[436772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:31:36 compute-0 sudo[436772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:36 compute-0 sudo[436772]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:36 compute-0 sudo[436797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:36 compute-0 sudo[436797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:36 compute-0 sudo[436797]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:36 compute-0 sudo[436822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 13:31:36 compute-0 sudo[436822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:36 compute-0 podman[436889]: 2025-10-02 13:31:36.653800359 +0000 UTC m=+0.078815679 container create adbb84dc4200f5fbc5d13d4bc3ec3da156a78018929cb930856b7659b4ea4836 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_euler, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:31:36 compute-0 systemd[1]: Started libpod-conmon-adbb84dc4200f5fbc5d13d4bc3ec3da156a78018929cb930856b7659b4ea4836.scope.
Oct 02 13:31:36 compute-0 podman[436889]: 2025-10-02 13:31:36.607568278 +0000 UTC m=+0.032583618 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:31:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:31:36 compute-0 sudo[436903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:36 compute-0 sudo[436903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:36 compute-0 sudo[436903]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:36 compute-0 podman[436889]: 2025-10-02 13:31:36.770033804 +0000 UTC m=+0.195049154 container init adbb84dc4200f5fbc5d13d4bc3ec3da156a78018929cb930856b7659b4ea4836 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_euler, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 13:31:36 compute-0 podman[436889]: 2025-10-02 13:31:36.781891664 +0000 UTC m=+0.206906984 container start adbb84dc4200f5fbc5d13d4bc3ec3da156a78018929cb930856b7659b4ea4836 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_euler, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:31:36 compute-0 dazzling_euler[436926]: 167 167
Oct 02 13:31:36 compute-0 systemd[1]: libpod-adbb84dc4200f5fbc5d13d4bc3ec3da156a78018929cb930856b7659b4ea4836.scope: Deactivated successfully.
Oct 02 13:31:36 compute-0 sudo[436933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:36 compute-0 sudo[436933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:36 compute-0 sudo[436933]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:36 compute-0 podman[436889]: 2025-10-02 13:31:36.842567169 +0000 UTC m=+0.267582519 container attach adbb84dc4200f5fbc5d13d4bc3ec3da156a78018929cb930856b7659b4ea4836 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 13:31:36 compute-0 podman[436889]: 2025-10-02 13:31:36.844210379 +0000 UTC m=+0.269225699 container died adbb84dc4200f5fbc5d13d4bc3ec3da156a78018929cb930856b7659b4ea4836 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_euler, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:31:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e1855aec428bf2fdcc9a787902debe54d7c9b6a31bd7f0994132bf14ad0dcf7-merged.mount: Deactivated successfully.
Oct 02 13:31:36 compute-0 nova_compute[257802]: 2025-10-02 13:31:36.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:36 compute-0 podman[436889]: 2025-10-02 13:31:36.995130471 +0000 UTC m=+0.420145791 container remove adbb84dc4200f5fbc5d13d4bc3ec3da156a78018929cb930856b7659b4ea4836 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 13:31:37 compute-0 systemd[1]: libpod-conmon-adbb84dc4200f5fbc5d13d4bc3ec3da156a78018929cb930856b7659b4ea4836.scope: Deactivated successfully.
Oct 02 13:31:37 compute-0 podman[436982]: 2025-10-02 13:31:37.219299436 +0000 UTC m=+0.056829121 container create e86df53ce6025a35926c3851ed3fe83c17161a7837ce85f3536a960c5c550ec8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 13:31:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4019: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:37 compute-0 systemd[1]: Started libpod-conmon-e86df53ce6025a35926c3851ed3fe83c17161a7837ce85f3536a960c5c550ec8.scope.
Oct 02 13:31:37 compute-0 podman[436982]: 2025-10-02 13:31:37.192122072 +0000 UTC m=+0.029651777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:31:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:37.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:31:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c74b5acf23def32f3ca18ba97a2dd6e6a49560ac09d43903145c6b9499e8005/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:31:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c74b5acf23def32f3ca18ba97a2dd6e6a49560ac09d43903145c6b9499e8005/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:31:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c74b5acf23def32f3ca18ba97a2dd6e6a49560ac09d43903145c6b9499e8005/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:31:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c74b5acf23def32f3ca18ba97a2dd6e6a49560ac09d43903145c6b9499e8005/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:31:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:37.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:37 compute-0 podman[436982]: 2025-10-02 13:31:37.474790318 +0000 UTC m=+0.312320033 container init e86df53ce6025a35926c3851ed3fe83c17161a7837ce85f3536a960c5c550ec8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:31:37 compute-0 podman[436982]: 2025-10-02 13:31:37.48181675 +0000 UTC m=+0.319346435 container start e86df53ce6025a35926c3851ed3fe83c17161a7837ce85f3536a960c5c550ec8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_sutherland, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 13:31:37 compute-0 ceph-mon[73607]: pgmap v4019: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:37 compute-0 podman[436982]: 2025-10-02 13:31:37.592084289 +0000 UTC m=+0.429614104 container attach e86df53ce6025a35926c3851ed3fe83c17161a7837ce85f3536a960c5c550ec8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_sutherland, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 13:31:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:38 compute-0 busy_sutherland[436999]: {
Oct 02 13:31:38 compute-0 busy_sutherland[436999]:     "1": [
Oct 02 13:31:38 compute-0 busy_sutherland[436999]:         {
Oct 02 13:31:38 compute-0 busy_sutherland[436999]:             "devices": [
Oct 02 13:31:38 compute-0 busy_sutherland[436999]:                 "/dev/loop3"
Oct 02 13:31:38 compute-0 busy_sutherland[436999]:             ],
Oct 02 13:31:38 compute-0 busy_sutherland[436999]:             "lv_name": "ceph_lv0",
Oct 02 13:31:38 compute-0 busy_sutherland[436999]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:31:38 compute-0 busy_sutherland[436999]:             "lv_size": "7511998464",
Oct 02 13:31:38 compute-0 busy_sutherland[436999]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:31:38 compute-0 busy_sutherland[436999]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:31:38 compute-0 busy_sutherland[436999]:             "name": "ceph_lv0",
Oct 02 13:31:38 compute-0 busy_sutherland[436999]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:31:38 compute-0 busy_sutherland[436999]:             "tags": {
Oct 02 13:31:38 compute-0 busy_sutherland[436999]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:31:38 compute-0 busy_sutherland[436999]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:31:38 compute-0 busy_sutherland[436999]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:31:38 compute-0 busy_sutherland[436999]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:31:38 compute-0 busy_sutherland[436999]:                 "ceph.cluster_name": "ceph",
Oct 02 13:31:38 compute-0 busy_sutherland[436999]:                 "ceph.crush_device_class": "",
Oct 02 13:31:38 compute-0 busy_sutherland[436999]:                 "ceph.encrypted": "0",
Oct 02 13:31:38 compute-0 busy_sutherland[436999]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:31:38 compute-0 busy_sutherland[436999]:                 "ceph.osd_id": "1",
Oct 02 13:31:38 compute-0 busy_sutherland[436999]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:31:38 compute-0 busy_sutherland[436999]:                 "ceph.type": "block",
Oct 02 13:31:38 compute-0 busy_sutherland[436999]:                 "ceph.vdo": "0"
Oct 02 13:31:38 compute-0 busy_sutherland[436999]:             },
Oct 02 13:31:38 compute-0 busy_sutherland[436999]:             "type": "block",
Oct 02 13:31:38 compute-0 busy_sutherland[436999]:             "vg_name": "ceph_vg0"
Oct 02 13:31:38 compute-0 busy_sutherland[436999]:         }
Oct 02 13:31:38 compute-0 busy_sutherland[436999]:     ]
Oct 02 13:31:38 compute-0 busy_sutherland[436999]: }
Oct 02 13:31:38 compute-0 systemd[1]: libpod-e86df53ce6025a35926c3851ed3fe83c17161a7837ce85f3536a960c5c550ec8.scope: Deactivated successfully.
Oct 02 13:31:38 compute-0 podman[436982]: 2025-10-02 13:31:38.260542454 +0000 UTC m=+1.098072179 container died e86df53ce6025a35926c3851ed3fe83c17161a7837ce85f3536a960c5c550ec8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_sutherland, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Oct 02 13:31:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c74b5acf23def32f3ca18ba97a2dd6e6a49560ac09d43903145c6b9499e8005-merged.mount: Deactivated successfully.
Oct 02 13:31:38 compute-0 podman[436982]: 2025-10-02 13:31:38.361673168 +0000 UTC m=+1.199202853 container remove e86df53ce6025a35926c3851ed3fe83c17161a7837ce85f3536a960c5c550ec8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_sutherland, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:31:38 compute-0 systemd[1]: libpod-conmon-e86df53ce6025a35926c3851ed3fe83c17161a7837ce85f3536a960c5c550ec8.scope: Deactivated successfully.
Oct 02 13:31:38 compute-0 sudo[436822]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:38 compute-0 sudo[437022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:38 compute-0 sudo[437022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:38 compute-0 sudo[437022]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:38 compute-0 sudo[437047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:31:38 compute-0 sudo[437047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:38 compute-0 sudo[437047]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:38 compute-0 sudo[437072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:38 compute-0 sudo[437072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:38 compute-0 sudo[437072]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:38 compute-0 sudo[437097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 13:31:38 compute-0 sudo[437097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:39 compute-0 podman[437164]: 2025-10-02 13:31:39.001291309 +0000 UTC m=+0.044177962 container create cb9a02dcb0eb4e46e9458f4535810ab9f73d622685c3f16a0210b96d7ab9415e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_kirch, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:31:39 compute-0 systemd[1]: Started libpod-conmon-cb9a02dcb0eb4e46e9458f4535810ab9f73d622685c3f16a0210b96d7ab9415e.scope.
Oct 02 13:31:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:31:39 compute-0 podman[437164]: 2025-10-02 13:31:38.982410958 +0000 UTC m=+0.025297621 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:31:39 compute-0 podman[437164]: 2025-10-02 13:31:39.082549367 +0000 UTC m=+0.125436030 container init cb9a02dcb0eb4e46e9458f4535810ab9f73d622685c3f16a0210b96d7ab9415e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Oct 02 13:31:39 compute-0 podman[437164]: 2025-10-02 13:31:39.090973763 +0000 UTC m=+0.133860406 container start cb9a02dcb0eb4e46e9458f4535810ab9f73d622685c3f16a0210b96d7ab9415e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_kirch, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:31:39 compute-0 podman[437164]: 2025-10-02 13:31:39.095108915 +0000 UTC m=+0.137995588 container attach cb9a02dcb0eb4e46e9458f4535810ab9f73d622685c3f16a0210b96d7ab9415e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:31:39 compute-0 systemd[1]: libpod-cb9a02dcb0eb4e46e9458f4535810ab9f73d622685c3f16a0210b96d7ab9415e.scope: Deactivated successfully.
Oct 02 13:31:39 compute-0 reverent_kirch[437181]: 167 167
Oct 02 13:31:39 compute-0 podman[437164]: 2025-10-02 13:31:39.09821002 +0000 UTC m=+0.141096653 container died cb9a02dcb0eb4e46e9458f4535810ab9f73d622685c3f16a0210b96d7ab9415e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_kirch, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:31:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1682d4c850df78c66c85ed28ea37b09a7023ed4704c5a6b6b499f2d68259c6f-merged.mount: Deactivated successfully.
Oct 02 13:31:39 compute-0 podman[437164]: 2025-10-02 13:31:39.151638307 +0000 UTC m=+0.194524940 container remove cb9a02dcb0eb4e46e9458f4535810ab9f73d622685c3f16a0210b96d7ab9415e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 13:31:39 compute-0 systemd[1]: libpod-conmon-cb9a02dcb0eb4e46e9458f4535810ab9f73d622685c3f16a0210b96d7ab9415e.scope: Deactivated successfully.
Oct 02 13:31:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4020: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:31:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:39.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:31:39 compute-0 nova_compute[257802]: 2025-10-02 13:31:39.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:39 compute-0 ceph-mon[73607]: pgmap v4020: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:39 compute-0 podman[437206]: 2025-10-02 13:31:39.322595811 +0000 UTC m=+0.050168478 container create 1e6d90ce72f6621976a0e1f5f2fc3fe8cc1269709e48cc619a2f9b75d37a197e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 13:31:39 compute-0 systemd[1]: Started libpod-conmon-1e6d90ce72f6621976a0e1f5f2fc3fe8cc1269709e48cc619a2f9b75d37a197e.scope.
Oct 02 13:31:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:31:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b426894c5e33f3ccd7913a50be8e945b111313d0a667e4beaaacba3b1b522273/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:31:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b426894c5e33f3ccd7913a50be8e945b111313d0a667e4beaaacba3b1b522273/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:31:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b426894c5e33f3ccd7913a50be8e945b111313d0a667e4beaaacba3b1b522273/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:31:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b426894c5e33f3ccd7913a50be8e945b111313d0a667e4beaaacba3b1b522273/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:31:39 compute-0 podman[437206]: 2025-10-02 13:31:39.302973811 +0000 UTC m=+0.030546498 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:31:39 compute-0 podman[437206]: 2025-10-02 13:31:39.401084732 +0000 UTC m=+0.128657419 container init 1e6d90ce72f6621976a0e1f5f2fc3fe8cc1269709e48cc619a2f9b75d37a197e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:31:39 compute-0 podman[437206]: 2025-10-02 13:31:39.407271673 +0000 UTC m=+0.134844340 container start 1e6d90ce72f6621976a0e1f5f2fc3fe8cc1269709e48cc619a2f9b75d37a197e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_sammet, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 13:31:39 compute-0 podman[437206]: 2025-10-02 13:31:39.410372728 +0000 UTC m=+0.137945395 container attach 1e6d90ce72f6621976a0e1f5f2fc3fe8cc1269709e48cc619a2f9b75d37a197e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:31:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:39.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:40 compute-0 blissful_sammet[437222]: {
Oct 02 13:31:40 compute-0 blissful_sammet[437222]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 13:31:40 compute-0 blissful_sammet[437222]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:31:40 compute-0 blissful_sammet[437222]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:31:40 compute-0 blissful_sammet[437222]:         "osd_id": 1,
Oct 02 13:31:40 compute-0 blissful_sammet[437222]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:31:40 compute-0 blissful_sammet[437222]:         "type": "bluestore"
Oct 02 13:31:40 compute-0 blissful_sammet[437222]:     }
Oct 02 13:31:40 compute-0 blissful_sammet[437222]: }
Oct 02 13:31:40 compute-0 systemd[1]: libpod-1e6d90ce72f6621976a0e1f5f2fc3fe8cc1269709e48cc619a2f9b75d37a197e.scope: Deactivated successfully.
Oct 02 13:31:40 compute-0 podman[437206]: 2025-10-02 13:31:40.311020676 +0000 UTC m=+1.038593353 container died 1e6d90ce72f6621976a0e1f5f2fc3fe8cc1269709e48cc619a2f9b75d37a197e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_sammet, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:31:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-b426894c5e33f3ccd7913a50be8e945b111313d0a667e4beaaacba3b1b522273-merged.mount: Deactivated successfully.
Oct 02 13:31:40 compute-0 podman[437206]: 2025-10-02 13:31:40.364076334 +0000 UTC m=+1.091649031 container remove 1e6d90ce72f6621976a0e1f5f2fc3fe8cc1269709e48cc619a2f9b75d37a197e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_sammet, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:31:40 compute-0 systemd[1]: libpod-conmon-1e6d90ce72f6621976a0e1f5f2fc3fe8cc1269709e48cc619a2f9b75d37a197e.scope: Deactivated successfully.
Oct 02 13:31:40 compute-0 sudo[437097]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:31:40 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:31:40 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:31:40 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:31:40 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev b5e27a19-79f2-40e3-a94d-4cdb50419bfa does not exist
Oct 02 13:31:40 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev f9312171-fcd9-4ba9-9020-3fa6425e851c does not exist
Oct 02 13:31:40 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 4722829b-317e-4473-88f1-547875019c4e does not exist
Oct 02 13:31:40 compute-0 sudo[437254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:40 compute-0 sudo[437254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:40 compute-0 sudo[437254]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:40 compute-0 sudo[437279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:31:40 compute-0 sudo[437279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:40 compute-0 sudo[437279]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4021: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:41.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:31:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:41.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:31:41 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:31:41 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:31:41 compute-0 ceph-mon[73607]: pgmap v4021: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:41 compute-0 nova_compute[257802]: 2025-10-02 13:31:41.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:31:42
Oct 02 13:31:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:31:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:31:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'images', 'vms', '.mgr']
Oct 02 13:31:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:31:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:31:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:31:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:31:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:31:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:31:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:31:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4022: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:43.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:43 compute-0 ceph-mon[73607]: pgmap v4022: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:31:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:43.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:31:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:31:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:31:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:31:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:31:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:31:44 compute-0 nova_compute[257802]: 2025-10-02 13:31:44.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:31:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:31:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:31:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:31:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:31:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4023: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:31:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:45.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:31:45 compute-0 ceph-mon[73607]: pgmap v4023: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:45.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:46 compute-0 podman[437308]: 2025-10-02 13:31:46.92670994 +0000 UTC m=+0.067935433 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:31:46 compute-0 podman[437309]: 2025-10-02 13:31:46.937611327 +0000 UTC m=+0.073588761 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Oct 02 13:31:46 compute-0 nova_compute[257802]: 2025-10-02 13:31:46.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:46 compute-0 podman[437310]: 2025-10-02 13:31:46.959670897 +0000 UTC m=+0.095096528 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 13:31:47 compute-0 nova_compute[257802]: 2025-10-02 13:31:47.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:31:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4024: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:47.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:47.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:47 compute-0 ceph-mon[73607]: pgmap v4024: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:48 compute-0 nova_compute[257802]: 2025-10-02 13:31:48.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:31:48 compute-0 nova_compute[257802]: 2025-10-02 13:31:48.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:31:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4025: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:31:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:49.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:31:49 compute-0 nova_compute[257802]: 2025-10-02 13:31:49.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:49 compute-0 ceph-mon[73607]: pgmap v4025: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:31:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:49.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:31:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4026: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:51.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:51 compute-0 ceph-mon[73607]: pgmap v4026: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:51.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:51 compute-0 nova_compute[257802]: 2025-10-02 13:31:51.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:53 compute-0 nova_compute[257802]: 2025-10-02 13:31:53.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:31:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4027: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:31:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:53.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:31:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:31:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:53.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:31:53 compute-0 ceph-mon[73607]: pgmap v4027: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:54 compute-0 nova_compute[257802]: 2025-10-02 13:31:54.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:54 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1496869046' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:31:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:31:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:31:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:31:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:31:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:31:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:31:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:31:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:31:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:31:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:31:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:31:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:31:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:31:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:31:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:31:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:31:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:31:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:31:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:31:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:31:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:31:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:31:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:31:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4028: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:55.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:55.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/956431986' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:31:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/319318620' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:31:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/319318620' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:31:55 compute-0 ceph-mon[73607]: pgmap v4028: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:56 compute-0 nova_compute[257802]: 2025-10-02 13:31:56.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:31:56 compute-0 sudo[437364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:56 compute-0 sudo[437364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:56 compute-0 sudo[437364]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:56 compute-0 nova_compute[257802]: 2025-10-02 13:31:56.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:56 compute-0 sudo[437389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:56 compute-0 sudo[437389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:56 compute-0 sudo[437389]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:57 compute-0 podman[437413]: 2025-10-02 13:31:57.039767771 +0000 UTC m=+0.066804815 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Oct 02 13:31:57 compute-0 nova_compute[257802]: 2025-10-02 13:31:57.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:31:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4029: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:57.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:57 compute-0 ceph-mon[73607]: pgmap v4029: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:57.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:59 compute-0 nova_compute[257802]: 2025-10-02 13:31:59.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:31:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4030: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:59 compute-0 nova_compute[257802]: 2025-10-02 13:31:59.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:59.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:59 compute-0 ceph-mon[73607]: pgmap v4030: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:31:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:31:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:59.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4031: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:32:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:01.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:32:01 compute-0 ceph-mon[73607]: pgmap v4031: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:01.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:01 compute-0 nova_compute[257802]: 2025-10-02 13:32:01.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4032: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:32:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:03.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:32:03 compute-0 ceph-mon[73607]: pgmap v4032: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:03.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:04 compute-0 nova_compute[257802]: 2025-10-02 13:32:04.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:04 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2420125702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:32:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4033: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:05.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:32:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:05.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:32:05 compute-0 ceph-mon[73607]: pgmap v4033: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3790900183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:32:06 compute-0 nova_compute[257802]: 2025-10-02 13:32:06.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:07 compute-0 nova_compute[257802]: 2025-10-02 13:32:07.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:07 compute-0 nova_compute[257802]: 2025-10-02 13:32:07.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:32:07 compute-0 nova_compute[257802]: 2025-10-02 13:32:07.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:32:07 compute-0 nova_compute[257802]: 2025-10-02 13:32:07.158 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:32:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4034: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:07.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:07 compute-0 ceph-mon[73607]: pgmap v4034: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:07.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4035: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:09 compute-0 nova_compute[257802]: 2025-10-02 13:32:09.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:32:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:09.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:32:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:09.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:09 compute-0 ceph-mon[73607]: pgmap v4035: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4036: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:32:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:11.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:32:11 compute-0 ceph-mon[73607]: pgmap v4036: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:11.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:11 compute-0 nova_compute[257802]: 2025-10-02 13:32:11.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:32:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:32:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:32:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:32:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:32:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:32:13 compute-0 nova_compute[257802]: 2025-10-02 13:32:13.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4037: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:32:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:13.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:32:13 compute-0 ceph-mon[73607]: pgmap v4037: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:13 compute-0 nova_compute[257802]: 2025-10-02 13:32:13.413 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:32:13 compute-0 nova_compute[257802]: 2025-10-02 13:32:13.413 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:32:13 compute-0 nova_compute[257802]: 2025-10-02 13:32:13.413 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:32:13 compute-0 nova_compute[257802]: 2025-10-02 13:32:13.413 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:32:13 compute-0 nova_compute[257802]: 2025-10-02 13:32:13.413 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:32:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:13.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:32:13 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3686250650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:32:13 compute-0 nova_compute[257802]: 2025-10-02 13:32:13.837 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:32:13 compute-0 nova_compute[257802]: 2025-10-02 13:32:13.969 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:32:13 compute-0 nova_compute[257802]: 2025-10-02 13:32:13.970 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4174MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:32:13 compute-0 nova_compute[257802]: 2025-10-02 13:32:13.970 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:32:13 compute-0 nova_compute[257802]: 2025-10-02 13:32:13.971 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:32:14 compute-0 nova_compute[257802]: 2025-10-02 13:32:14.180 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:32:14 compute-0 nova_compute[257802]: 2025-10-02 13:32:14.180 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:32:14 compute-0 nova_compute[257802]: 2025-10-02 13:32:14.199 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:32:14 compute-0 nova_compute[257802]: 2025-10-02 13:32:14.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3686250650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:32:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:32:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1092410627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:32:14 compute-0 nova_compute[257802]: 2025-10-02 13:32:14.607 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:32:14 compute-0 nova_compute[257802]: 2025-10-02 13:32:14.613 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:32:14 compute-0 nova_compute[257802]: 2025-10-02 13:32:14.637 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:32:14 compute-0 nova_compute[257802]: 2025-10-02 13:32:14.638 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:32:14 compute-0 nova_compute[257802]: 2025-10-02 13:32:14.639 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:32:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4038: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:15.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1092410627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:32:15 compute-0 ceph-mon[73607]: pgmap v4038: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:15.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:16 compute-0 nova_compute[257802]: 2025-10-02 13:32:16.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:17 compute-0 sudo[437495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:17 compute-0 sudo[437495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:17 compute-0 sudo[437495]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:17 compute-0 sudo[437538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:17 compute-0 sudo[437538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:17 compute-0 podman[437521]: 2025-10-02 13:32:17.107609346 +0000 UTC m=+0.056860893 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:32:17 compute-0 sudo[437538]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:17 compute-0 podman[437520]: 2025-10-02 13:32:17.108699183 +0000 UTC m=+0.059659712 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:32:17 compute-0 podman[437519]: 2025-10-02 13:32:17.127810831 +0000 UTC m=+0.081342163 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:32:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4039: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:32:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:17.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:32:17 compute-0 ceph-mon[73607]: pgmap v4039: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:32:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:17.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:32:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4040: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:19.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:19 compute-0 nova_compute[257802]: 2025-10-02 13:32:19.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:19 compute-0 ceph-mon[73607]: pgmap v4040: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:19.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4041: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:21.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:21 compute-0 ceph-mon[73607]: pgmap v4041: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:21.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:21 compute-0 nova_compute[257802]: 2025-10-02 13:32:21.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4042: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:32:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:23.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:32:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:23.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:23 compute-0 ceph-mon[73607]: pgmap v4042: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:24 compute-0 nova_compute[257802]: 2025-10-02 13:32:24.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4043: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:32:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:25.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:32:25 compute-0 ceph-mon[73607]: pgmap v4043: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:25.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:26 compute-0 nova_compute[257802]: 2025-10-02 13:32:26.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:32:27.019 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:32:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:32:27.020 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:32:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:32:27.020 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:32:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4044: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:27.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:27 compute-0 ceph-mon[73607]: pgmap v4044: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:27.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:27 compute-0 podman[437607]: 2025-10-02 13:32:27.930126364 +0000 UTC m=+0.072129896 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 13:32:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4045: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:29.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:29 compute-0 nova_compute[257802]: 2025-10-02 13:32:29.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:29 compute-0 ceph-mon[73607]: pgmap v4045: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:29.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4046: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:31 compute-0 ceph-mon[73607]: pgmap v4046: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:31.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:31.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:31 compute-0 nova_compute[257802]: 2025-10-02 13:32:31.639 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:31 compute-0 nova_compute[257802]: 2025-10-02 13:32:31.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4047: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:33.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:33.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:33 compute-0 ceph-mon[73607]: pgmap v4047: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:34 compute-0 nova_compute[257802]: 2025-10-02 13:32:34.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4048: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:35.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:35.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:36 compute-0 ceph-mon[73607]: pgmap v4048: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:37 compute-0 nova_compute[257802]: 2025-10-02 13:32:37.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:37 compute-0 sudo[437638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:37 compute-0 sudo[437638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:37 compute-0 sudo[437638]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:37 compute-0 sudo[437663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4049: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:37 compute-0 sudo[437663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:37 compute-0 sudo[437663]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:37.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:32:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:37.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:32:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:38 compute-0 ceph-mon[73607]: pgmap v4049: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4050: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:39.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:39 compute-0 nova_compute[257802]: 2025-10-02 13:32:39.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:39.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:40 compute-0 ceph-mon[73607]: pgmap v4050: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:40 compute-0 sudo[437690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:40 compute-0 sudo[437690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:40 compute-0 sudo[437690]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:40 compute-0 sudo[437715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:32:40 compute-0 sudo[437715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:40 compute-0 sudo[437715]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:40 compute-0 sudo[437740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:40 compute-0 sudo[437740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:40 compute-0 sudo[437740]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:41 compute-0 sudo[437765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 13:32:41 compute-0 sudo[437765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4051: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:41.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:41.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:41 compute-0 podman[437861]: 2025-10-02 13:32:41.578379754 +0000 UTC m=+0.076172625 container exec 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 13:32:41 compute-0 podman[437861]: 2025-10-02 13:32:41.665220748 +0000 UTC m=+0.163013599 container exec_died 7dd5d6593b13044c1c2ed31ded484c97381e8938d72d8351dba8213aad9183db (image=quay.io/ceph/ceph:v18, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-0, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 13:32:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 13:32:41 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:32:41 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 13:32:42 compute-0 nova_compute[257802]: 2025-10-02 13:32:42.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:42 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:32:42 compute-0 podman[437995]: 2025-10-02 13:32:42.259393418 +0000 UTC m=+0.110710820 container exec 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 13:32:42 compute-0 podman[438017]: 2025-10-02 13:32:42.350726352 +0000 UTC m=+0.071708165 container exec_died 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 13:32:42 compute-0 podman[437995]: 2025-10-02 13:32:42.412699028 +0000 UTC m=+0.264016440 container exec_died 48ba69251bfd0dbc9b78d1d25fdc4b6267aed19d1f7d2701e0029e5205c4bceb (image=quay.io/ceph/haproxy:2.3, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-haproxy-rgw-default-compute-0-qdmsoe)
Oct 02 13:32:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:32:42
Oct 02 13:32:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:32:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:32:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['vms', '.mgr', 'images', '.rgw.root', 'backups', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log']
Oct 02 13:32:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:32:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:32:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:32:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:32:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:32:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:32:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:32:42 compute-0 podman[438061]: 2025-10-02 13:32:42.789900178 +0000 UTC m=+0.113357554 container exec a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, architecture=x86_64, build-date=2023-02-22T09:23:20, name=keepalived, io.openshift.expose-services=, vendor=Red Hat, Inc., version=2.2.4, description=keepalived for Ceph, distribution-scope=public, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived)
Oct 02 13:32:42 compute-0 podman[438083]: 2025-10-02 13:32:42.9231937 +0000 UTC m=+0.107027110 container exec_died a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., io.buildah.version=1.28.2, version=2.2.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, vcs-type=git, io.openshift.expose-services=, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Oct 02 13:32:42 compute-0 podman[438061]: 2025-10-02 13:32:42.977206262 +0000 UTC m=+0.300663608 container exec_died a0996176a0d461cd05b97b8b5a5b2bbae23ae6d2fba4e945727fd72b45eda1c9 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-keepalived-rgw-default-compute-0-dcvgot, description=keepalived for Ceph, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.buildah.version=1.28.2)
Oct 02 13:32:43 compute-0 ceph-mon[73607]: pgmap v4051: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:43 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:32:43 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:32:43 compute-0 sudo[437765]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:32:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:32:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:32:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4052: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:43.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:43 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:32:43 compute-0 sudo[438116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:43 compute-0 sudo[438116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:43 compute-0 sudo[438116]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:32:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:43.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:32:43 compute-0 sudo[438141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:32:43 compute-0 sudo[438141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:43 compute-0 sudo[438141]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:43 compute-0 sudo[438166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:43 compute-0 sudo[438166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:43 compute-0 sudo[438166]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:43 compute-0 sudo[438191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:32:43 compute-0 sudo[438191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:32:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:32:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:32:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:32:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:32:44 compute-0 sudo[438191]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:32:44 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:32:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:32:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:32:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:32:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:32:44 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 9bcb5c43-efe2-4b0d-b59f-93c4694a59f5 does not exist
Oct 02 13:32:44 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 76b05b02-8bfe-4643-8f49-439f8e1bf8de does not exist
Oct 02 13:32:44 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 9de7c879-54ee-4f08-9c1e-95711479cea9 does not exist
Oct 02 13:32:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:32:44 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:32:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:32:44 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:32:44 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:32:44 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:32:44 compute-0 sudo[438247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:44 compute-0 sudo[438247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:44 compute-0 sudo[438247]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:44 compute-0 sudo[438272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:32:44 compute-0 sudo[438272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:44 compute-0 sudo[438272]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:44 compute-0 sudo[438297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:32:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:32:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:32:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:32:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:32:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:32:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:32:44 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:32:44 compute-0 sudo[438297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:44 compute-0 sudo[438297]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:44 compute-0 nova_compute[257802]: 2025-10-02 13:32:44.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:32:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:32:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:32:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:32:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:32:44 compute-0 sudo[438322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:32:44 compute-0 sudo[438322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:44 compute-0 podman[438387]: 2025-10-02 13:32:44.748868071 +0000 UTC m=+0.025186617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:32:44 compute-0 podman[438387]: 2025-10-02 13:32:44.950531335 +0000 UTC m=+0.226849861 container create bea65d0fb98d44e2aa517493f03f07956fa8f9536a7b7199b01506e98771fbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 13:32:45 compute-0 systemd[1]: Started libpod-conmon-bea65d0fb98d44e2aa517493f03f07956fa8f9536a7b7199b01506e98771fbe1.scope.
Oct 02 13:32:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:32:45 compute-0 podman[438387]: 2025-10-02 13:32:45.271232713 +0000 UTC m=+0.547551239 container init bea65d0fb98d44e2aa517493f03f07956fa8f9536a7b7199b01506e98771fbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 13:32:45 compute-0 podman[438387]: 2025-10-02 13:32:45.278807237 +0000 UTC m=+0.555125763 container start bea65d0fb98d44e2aa517493f03f07956fa8f9536a7b7199b01506e98771fbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 13:32:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4053: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:45 compute-0 competent_brahmagupta[438404]: 167 167
Oct 02 13:32:45 compute-0 podman[438387]: 2025-10-02 13:32:45.282977579 +0000 UTC m=+0.559296105 container attach bea65d0fb98d44e2aa517493f03f07956fa8f9536a7b7199b01506e98771fbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_brahmagupta, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:32:45 compute-0 systemd[1]: libpod-bea65d0fb98d44e2aa517493f03f07956fa8f9536a7b7199b01506e98771fbe1.scope: Deactivated successfully.
Oct 02 13:32:45 compute-0 podman[438387]: 2025-10-02 13:32:45.284534047 +0000 UTC m=+0.560852573 container died bea65d0fb98d44e2aa517493f03f07956fa8f9536a7b7199b01506e98771fbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 13:32:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-e79981bb0e63693077fe9c92e46cc3cae862ce97f826ecec73eca9e9e6dd5d5d-merged.mount: Deactivated successfully.
Oct 02 13:32:45 compute-0 podman[438387]: 2025-10-02 13:32:45.332869051 +0000 UTC m=+0.609187577 container remove bea65d0fb98d44e2aa517493f03f07956fa8f9536a7b7199b01506e98771fbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 13:32:45 compute-0 systemd[1]: libpod-conmon-bea65d0fb98d44e2aa517493f03f07956fa8f9536a7b7199b01506e98771fbe1.scope: Deactivated successfully.
Oct 02 13:32:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:45.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:45 compute-0 ceph-mon[73607]: pgmap v4052: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:45 compute-0 podman[438427]: 2025-10-02 13:32:45.507684438 +0000 UTC m=+0.044942810 container create a54013f1fe6aac4ee6f3325a9a9648d746741696d82d13cbcb35ed18ec0936a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 13:32:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:32:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:45.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:32:45 compute-0 systemd[1]: Started libpod-conmon-a54013f1fe6aac4ee6f3325a9a9648d746741696d82d13cbcb35ed18ec0936a8.scope.
Oct 02 13:32:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:32:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77dbea8785357367a65d70903324f580e79b1ba7e1d3d1db4b67519e1e90473b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:32:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77dbea8785357367a65d70903324f580e79b1ba7e1d3d1db4b67519e1e90473b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:32:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77dbea8785357367a65d70903324f580e79b1ba7e1d3d1db4b67519e1e90473b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:32:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77dbea8785357367a65d70903324f580e79b1ba7e1d3d1db4b67519e1e90473b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:32:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77dbea8785357367a65d70903324f580e79b1ba7e1d3d1db4b67519e1e90473b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:32:45 compute-0 podman[438427]: 2025-10-02 13:32:45.486750556 +0000 UTC m=+0.024008958 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:32:45 compute-0 podman[438427]: 2025-10-02 13:32:45.586337332 +0000 UTC m=+0.123595724 container init a54013f1fe6aac4ee6f3325a9a9648d746741696d82d13cbcb35ed18ec0936a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_brown, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 13:32:45 compute-0 podman[438427]: 2025-10-02 13:32:45.594171105 +0000 UTC m=+0.131429467 container start a54013f1fe6aac4ee6f3325a9a9648d746741696d82d13cbcb35ed18ec0936a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 13:32:45 compute-0 podman[438427]: 2025-10-02 13:32:45.597312401 +0000 UTC m=+0.134570793 container attach a54013f1fe6aac4ee6f3325a9a9648d746741696d82d13cbcb35ed18ec0936a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Oct 02 13:32:46 compute-0 strange_brown[438444]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:32:46 compute-0 strange_brown[438444]: --> relative data size: 1.0
Oct 02 13:32:46 compute-0 strange_brown[438444]: --> All data devices are unavailable
Oct 02 13:32:46 compute-0 systemd[1]: libpod-a54013f1fe6aac4ee6f3325a9a9648d746741696d82d13cbcb35ed18ec0936a8.scope: Deactivated successfully.
Oct 02 13:32:46 compute-0 podman[438427]: 2025-10-02 13:32:46.352221352 +0000 UTC m=+0.889479724 container died a54013f1fe6aac4ee6f3325a9a9648d746741696d82d13cbcb35ed18ec0936a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_brown, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 13:32:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-77dbea8785357367a65d70903324f580e79b1ba7e1d3d1db4b67519e1e90473b-merged.mount: Deactivated successfully.
Oct 02 13:32:46 compute-0 podman[438427]: 2025-10-02 13:32:46.396926756 +0000 UTC m=+0.934185128 container remove a54013f1fe6aac4ee6f3325a9a9648d746741696d82d13cbcb35ed18ec0936a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_brown, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:32:46 compute-0 systemd[1]: libpod-conmon-a54013f1fe6aac4ee6f3325a9a9648d746741696d82d13cbcb35ed18ec0936a8.scope: Deactivated successfully.
Oct 02 13:32:46 compute-0 sudo[438322]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:46 compute-0 sudo[438474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:46 compute-0 sudo[438474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:46 compute-0 sudo[438474]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:46 compute-0 sudo[438499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:32:46 compute-0 sudo[438499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:46 compute-0 sudo[438499]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:46 compute-0 sudo[438524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:46 compute-0 sudo[438524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:46 compute-0 sudo[438524]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:46 compute-0 sudo[438549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 13:32:46 compute-0 sudo[438549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:46 compute-0 ceph-mon[73607]: pgmap v4053: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:46 compute-0 podman[438615]: 2025-10-02 13:32:46.951225749 +0000 UTC m=+0.038281318 container create 1e6eaaef82c9d37c85a4505da9571f6754fe462c0334e787651ea89221d82414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bardeen, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 13:32:46 compute-0 systemd[1]: Started libpod-conmon-1e6eaaef82c9d37c85a4505da9571f6754fe462c0334e787651ea89221d82414.scope.
Oct 02 13:32:47 compute-0 nova_compute[257802]: 2025-10-02 13:32:47.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:32:47 compute-0 podman[438615]: 2025-10-02 13:32:46.935148305 +0000 UTC m=+0.022203894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:32:47 compute-0 podman[438615]: 2025-10-02 13:32:47.045101866 +0000 UTC m=+0.132157445 container init 1e6eaaef82c9d37c85a4505da9571f6754fe462c0334e787651ea89221d82414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 13:32:47 compute-0 podman[438615]: 2025-10-02 13:32:47.056133586 +0000 UTC m=+0.143189155 container start 1e6eaaef82c9d37c85a4505da9571f6754fe462c0334e787651ea89221d82414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bardeen, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:32:47 compute-0 podman[438615]: 2025-10-02 13:32:47.060075143 +0000 UTC m=+0.147130752 container attach 1e6eaaef82c9d37c85a4505da9571f6754fe462c0334e787651ea89221d82414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bardeen, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:32:47 compute-0 xenodochial_bardeen[438632]: 167 167
Oct 02 13:32:47 compute-0 systemd[1]: libpod-1e6eaaef82c9d37c85a4505da9571f6754fe462c0334e787651ea89221d82414.scope: Deactivated successfully.
Oct 02 13:32:47 compute-0 podman[438615]: 2025-10-02 13:32:47.061909377 +0000 UTC m=+0.148964946 container died 1e6eaaef82c9d37c85a4505da9571f6754fe462c0334e787651ea89221d82414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bardeen, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 13:32:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-70786c144666bfabcec76e34683d79489b23b4a1f4f0023e58b7d8ee02b4f047-merged.mount: Deactivated successfully.
Oct 02 13:32:47 compute-0 podman[438615]: 2025-10-02 13:32:47.105565195 +0000 UTC m=+0.192620764 container remove 1e6eaaef82c9d37c85a4505da9571f6754fe462c0334e787651ea89221d82414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bardeen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:32:47 compute-0 systemd[1]: libpod-conmon-1e6eaaef82c9d37c85a4505da9571f6754fe462c0334e787651ea89221d82414.scope: Deactivated successfully.
Oct 02 13:32:47 compute-0 podman[438651]: 2025-10-02 13:32:47.228099784 +0000 UTC m=+0.071711166 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20251001)
Oct 02 13:32:47 compute-0 podman[438652]: 2025-10-02 13:32:47.254695185 +0000 UTC m=+0.100065320 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible)
Oct 02 13:32:47 compute-0 podman[438653]: 2025-10-02 13:32:47.254693405 +0000 UTC m=+0.098418010 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 02 13:32:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4054: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:47 compute-0 podman[438709]: 2025-10-02 13:32:47.279685905 +0000 UTC m=+0.047777489 container create 307e153d34cfd9f5dff0e5e86148787fbcfbfe8a385eb90115d5ef142db9ff71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_spence, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 13:32:47 compute-0 systemd[1]: Started libpod-conmon-307e153d34cfd9f5dff0e5e86148787fbcfbfe8a385eb90115d5ef142db9ff71.scope.
Oct 02 13:32:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:32:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cf73e8fdf6d1a92bf9dec403c72abb075e4f18e59b57fe5dbb8f4d1104dcfe8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:32:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cf73e8fdf6d1a92bf9dec403c72abb075e4f18e59b57fe5dbb8f4d1104dcfe8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:32:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cf73e8fdf6d1a92bf9dec403c72abb075e4f18e59b57fe5dbb8f4d1104dcfe8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:32:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cf73e8fdf6d1a92bf9dec403c72abb075e4f18e59b57fe5dbb8f4d1104dcfe8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:32:47 compute-0 podman[438709]: 2025-10-02 13:32:47.343160979 +0000 UTC m=+0.111252603 container init 307e153d34cfd9f5dff0e5e86148787fbcfbfe8a385eb90115d5ef142db9ff71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_spence, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:32:47 compute-0 podman[438709]: 2025-10-02 13:32:47.350943709 +0000 UTC m=+0.119035303 container start 307e153d34cfd9f5dff0e5e86148787fbcfbfe8a385eb90115d5ef142db9ff71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 13:32:47 compute-0 podman[438709]: 2025-10-02 13:32:47.259186795 +0000 UTC m=+0.027278409 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:32:47 compute-0 podman[438709]: 2025-10-02 13:32:47.353975053 +0000 UTC m=+0.122066677 container attach 307e153d34cfd9f5dff0e5e86148787fbcfbfe8a385eb90115d5ef142db9ff71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_spence, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:32:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:32:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:47.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:32:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:47.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:48 compute-0 priceless_spence[438730]: {
Oct 02 13:32:48 compute-0 priceless_spence[438730]:     "1": [
Oct 02 13:32:48 compute-0 priceless_spence[438730]:         {
Oct 02 13:32:48 compute-0 priceless_spence[438730]:             "devices": [
Oct 02 13:32:48 compute-0 priceless_spence[438730]:                 "/dev/loop3"
Oct 02 13:32:48 compute-0 priceless_spence[438730]:             ],
Oct 02 13:32:48 compute-0 priceless_spence[438730]:             "lv_name": "ceph_lv0",
Oct 02 13:32:48 compute-0 priceless_spence[438730]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:32:48 compute-0 priceless_spence[438730]:             "lv_size": "7511998464",
Oct 02 13:32:48 compute-0 priceless_spence[438730]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:32:48 compute-0 priceless_spence[438730]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:32:48 compute-0 priceless_spence[438730]:             "name": "ceph_lv0",
Oct 02 13:32:48 compute-0 priceless_spence[438730]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:32:48 compute-0 priceless_spence[438730]:             "tags": {
Oct 02 13:32:48 compute-0 priceless_spence[438730]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:32:48 compute-0 priceless_spence[438730]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:32:48 compute-0 priceless_spence[438730]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:32:48 compute-0 priceless_spence[438730]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:32:48 compute-0 priceless_spence[438730]:                 "ceph.cluster_name": "ceph",
Oct 02 13:32:48 compute-0 priceless_spence[438730]:                 "ceph.crush_device_class": "",
Oct 02 13:32:48 compute-0 priceless_spence[438730]:                 "ceph.encrypted": "0",
Oct 02 13:32:48 compute-0 priceless_spence[438730]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:32:48 compute-0 priceless_spence[438730]:                 "ceph.osd_id": "1",
Oct 02 13:32:48 compute-0 priceless_spence[438730]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:32:48 compute-0 priceless_spence[438730]:                 "ceph.type": "block",
Oct 02 13:32:48 compute-0 priceless_spence[438730]:                 "ceph.vdo": "0"
Oct 02 13:32:48 compute-0 priceless_spence[438730]:             },
Oct 02 13:32:48 compute-0 priceless_spence[438730]:             "type": "block",
Oct 02 13:32:48 compute-0 priceless_spence[438730]:             "vg_name": "ceph_vg0"
Oct 02 13:32:48 compute-0 priceless_spence[438730]:         }
Oct 02 13:32:48 compute-0 priceless_spence[438730]:     ]
Oct 02 13:32:48 compute-0 priceless_spence[438730]: }
Oct 02 13:32:48 compute-0 systemd[1]: libpod-307e153d34cfd9f5dff0e5e86148787fbcfbfe8a385eb90115d5ef142db9ff71.scope: Deactivated successfully.
Oct 02 13:32:48 compute-0 podman[438709]: 2025-10-02 13:32:48.072039273 +0000 UTC m=+0.840130877 container died 307e153d34cfd9f5dff0e5e86148787fbcfbfe8a385eb90115d5ef142db9ff71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_spence, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 13:32:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cf73e8fdf6d1a92bf9dec403c72abb075e4f18e59b57fe5dbb8f4d1104dcfe8-merged.mount: Deactivated successfully.
Oct 02 13:32:48 compute-0 podman[438709]: 2025-10-02 13:32:48.126149037 +0000 UTC m=+0.894240631 container remove 307e153d34cfd9f5dff0e5e86148787fbcfbfe8a385eb90115d5ef142db9ff71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_spence, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:32:48 compute-0 systemd[1]: libpod-conmon-307e153d34cfd9f5dff0e5e86148787fbcfbfe8a385eb90115d5ef142db9ff71.scope: Deactivated successfully.
Oct 02 13:32:48 compute-0 sudo[438549]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:48 compute-0 sudo[438751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:48 compute-0 sudo[438751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:48 compute-0 sudo[438751]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:48 compute-0 sudo[438776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:32:48 compute-0 sudo[438776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:48 compute-0 sudo[438776]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:48 compute-0 sudo[438801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:48 compute-0 sudo[438801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:48 compute-0 sudo[438801]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:48 compute-0 sudo[438826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 13:32:48 compute-0 sudo[438826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:48 compute-0 podman[438890]: 2025-10-02 13:32:48.683215348 +0000 UTC m=+0.035766467 container create 3d5ae8d710539ea05396dad4b0acc21a2a826c473d0ef0acb11713563ca802ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wiles, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 13:32:48 compute-0 systemd[1]: Started libpod-conmon-3d5ae8d710539ea05396dad4b0acc21a2a826c473d0ef0acb11713563ca802ce.scope.
Oct 02 13:32:48 compute-0 ceph-mon[73607]: pgmap v4054: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:32:48 compute-0 podman[438890]: 2025-10-02 13:32:48.667697828 +0000 UTC m=+0.020248967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:32:48 compute-0 podman[438890]: 2025-10-02 13:32:48.765981453 +0000 UTC m=+0.118532602 container init 3d5ae8d710539ea05396dad4b0acc21a2a826c473d0ef0acb11713563ca802ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 13:32:48 compute-0 podman[438890]: 2025-10-02 13:32:48.772033661 +0000 UTC m=+0.124584780 container start 3d5ae8d710539ea05396dad4b0acc21a2a826c473d0ef0acb11713563ca802ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct 02 13:32:48 compute-0 podman[438890]: 2025-10-02 13:32:48.774928972 +0000 UTC m=+0.127480111 container attach 3d5ae8d710539ea05396dad4b0acc21a2a826c473d0ef0acb11713563ca802ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 13:32:48 compute-0 nifty_wiles[438907]: 167 167
Oct 02 13:32:48 compute-0 systemd[1]: libpod-3d5ae8d710539ea05396dad4b0acc21a2a826c473d0ef0acb11713563ca802ce.scope: Deactivated successfully.
Oct 02 13:32:48 compute-0 podman[438890]: 2025-10-02 13:32:48.777339271 +0000 UTC m=+0.129890390 container died 3d5ae8d710539ea05396dad4b0acc21a2a826c473d0ef0acb11713563ca802ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wiles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 13:32:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe36edab54f124023d10a4c84dbdf403b36a0e5af2d009d650aa2cdf0bf770bd-merged.mount: Deactivated successfully.
Oct 02 13:32:48 compute-0 podman[438890]: 2025-10-02 13:32:48.808432591 +0000 UTC m=+0.160983700 container remove 3d5ae8d710539ea05396dad4b0acc21a2a826c473d0ef0acb11713563ca802ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:32:48 compute-0 systemd[1]: libpod-conmon-3d5ae8d710539ea05396dad4b0acc21a2a826c473d0ef0acb11713563ca802ce.scope: Deactivated successfully.
Oct 02 13:32:48 compute-0 podman[438931]: 2025-10-02 13:32:48.950902668 +0000 UTC m=+0.034436114 container create 51a2730ef874a8596bf5477e5ac58068b5fad1ea3a5b19e65aaa3d7bc13bf488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:32:48 compute-0 systemd[1]: Started libpod-conmon-51a2730ef874a8596bf5477e5ac58068b5fad1ea3a5b19e65aaa3d7bc13bf488.scope.
Oct 02 13:32:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:32:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a126c8236f3bb0dd7e44d61d248d90c38889a68e7b3d0f56f8e87099e8c53fe5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:32:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a126c8236f3bb0dd7e44d61d248d90c38889a68e7b3d0f56f8e87099e8c53fe5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:32:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a126c8236f3bb0dd7e44d61d248d90c38889a68e7b3d0f56f8e87099e8c53fe5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:32:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a126c8236f3bb0dd7e44d61d248d90c38889a68e7b3d0f56f8e87099e8c53fe5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:32:49 compute-0 podman[438931]: 2025-10-02 13:32:49.005947534 +0000 UTC m=+0.089480970 container init 51a2730ef874a8596bf5477e5ac58068b5fad1ea3a5b19e65aaa3d7bc13bf488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:32:49 compute-0 podman[438931]: 2025-10-02 13:32:49.012858443 +0000 UTC m=+0.096391879 container start 51a2730ef874a8596bf5477e5ac58068b5fad1ea3a5b19e65aaa3d7bc13bf488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chatterjee, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 13:32:49 compute-0 podman[438931]: 2025-10-02 13:32:49.015870817 +0000 UTC m=+0.099404293 container attach 51a2730ef874a8596bf5477e5ac58068b5fad1ea3a5b19e65aaa3d7bc13bf488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chatterjee, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 13:32:49 compute-0 podman[438931]: 2025-10-02 13:32:48.935348817 +0000 UTC m=+0.018882283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:32:49 compute-0 nova_compute[257802]: 2025-10-02 13:32:49.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:49 compute-0 nova_compute[257802]: 2025-10-02 13:32:49.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:49 compute-0 nova_compute[257802]: 2025-10-02 13:32:49.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:32:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4055: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:32:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:49.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:32:49 compute-0 nova_compute[257802]: 2025-10-02 13:32:49.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:49.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:49 compute-0 gifted_chatterjee[438948]: {
Oct 02 13:32:49 compute-0 gifted_chatterjee[438948]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 13:32:49 compute-0 gifted_chatterjee[438948]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:32:49 compute-0 gifted_chatterjee[438948]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:32:49 compute-0 gifted_chatterjee[438948]:         "osd_id": 1,
Oct 02 13:32:49 compute-0 gifted_chatterjee[438948]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:32:49 compute-0 gifted_chatterjee[438948]:         "type": "bluestore"
Oct 02 13:32:49 compute-0 gifted_chatterjee[438948]:     }
Oct 02 13:32:49 compute-0 gifted_chatterjee[438948]: }
Oct 02 13:32:49 compute-0 systemd[1]: libpod-51a2730ef874a8596bf5477e5ac58068b5fad1ea3a5b19e65aaa3d7bc13bf488.scope: Deactivated successfully.
Oct 02 13:32:49 compute-0 podman[438931]: 2025-10-02 13:32:49.819181063 +0000 UTC m=+0.902714499 container died 51a2730ef874a8596bf5477e5ac58068b5fad1ea3a5b19e65aaa3d7bc13bf488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chatterjee, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 13:32:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-a126c8236f3bb0dd7e44d61d248d90c38889a68e7b3d0f56f8e87099e8c53fe5-merged.mount: Deactivated successfully.
Oct 02 13:32:49 compute-0 podman[438931]: 2025-10-02 13:32:49.864595725 +0000 UTC m=+0.948129161 container remove 51a2730ef874a8596bf5477e5ac58068b5fad1ea3a5b19e65aaa3d7bc13bf488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 13:32:49 compute-0 systemd[1]: libpod-conmon-51a2730ef874a8596bf5477e5ac58068b5fad1ea3a5b19e65aaa3d7bc13bf488.scope: Deactivated successfully.
Oct 02 13:32:49 compute-0 sudo[438826]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:32:49 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:32:49 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:32:49 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:32:49 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 7aef3b78-52c8-44b4-82ce-6fb4381728c5 does not exist
Oct 02 13:32:49 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 0681de6b-ac86-4010-bb7b-5745f835cabf does not exist
Oct 02 13:32:49 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 5dd57e55-d1c5-48dc-803a-6871c97eb19b does not exist
Oct 02 13:32:49 compute-0 sudo[438982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:49 compute-0 sudo[438982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:49 compute-0 sudo[438982]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:50 compute-0 sudo[439007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:32:50 compute-0 sudo[439007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:50 compute-0 sudo[439007]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:50 compute-0 ceph-mon[73607]: pgmap v4055: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:50 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:32:50 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:32:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4056: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:51.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:51.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:52 compute-0 nova_compute[257802]: 2025-10-02 13:32:52.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:53 compute-0 ceph-mon[73607]: pgmap v4056: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4057: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:53.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:53.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:54 compute-0 nova_compute[257802]: 2025-10-02 13:32:54.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:54 compute-0 nova_compute[257802]: 2025-10-02 13:32:54.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:55 compute-0 ceph-mon[73607]: pgmap v4057: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3166897853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:32:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:32:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:32:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:32:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:32:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:32:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:32:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:32:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:32:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:32:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:32:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:32:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:32:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:32:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:32:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:32:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:32:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:32:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:32:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:32:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:32:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:32:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:32:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:32:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4058: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:55.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:55.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3300819681' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:32:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3300819681' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:32:56 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/731534150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:32:56 compute-0 nova_compute[257802]: 2025-10-02 13:32:56.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:57 compute-0 nova_compute[257802]: 2025-10-02 13:32:57.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:57 compute-0 nova_compute[257802]: 2025-10-02 13:32:57.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:57 compute-0 ceph-mon[73607]: pgmap v4058: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4059: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:57 compute-0 sudo[439036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:57 compute-0 sudo[439036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:57 compute-0 sudo[439036]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:57.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:57 compute-0 sudo[439061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:57 compute-0 sudo[439061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:57 compute-0 sudo[439061]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:57.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:32:58 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Cumulative writes: 68K writes, 265K keys, 68K commit groups, 1.0 writes per commit group, ingest: 0.25 GB, 0.04 MB/s
                                           Cumulative WAL: 68K writes, 25K syncs, 2.71 writes per sync, written: 0.25 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 513 writes, 779 keys, 513 commit groups, 1.0 writes per commit group, ingest: 0.25 MB, 0.00 MB/s
                                           Interval WAL: 513 writes, 256 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 13:32:58 compute-0 podman[439087]: 2025-10-02 13:32:58.979340506 +0000 UTC m=+0.115485186 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:32:59 compute-0 ceph-mon[73607]: pgmap v4059: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4060: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:59.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:59 compute-0 nova_compute[257802]: 2025-10-02 13:32:59.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:32:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:59.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:00 compute-0 nova_compute[257802]: 2025-10-02 13:33:00.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:33:01 compute-0 ceph-mon[73607]: pgmap v4060: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4061: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:33:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:01.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:33:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:01.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:02 compute-0 nova_compute[257802]: 2025-10-02 13:33:02.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:03 compute-0 nova_compute[257802]: 2025-10-02 13:33:03.095 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:33:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4062: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:03 compute-0 ceph-mon[73607]: pgmap v4061: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:03.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:33:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:03.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:33:04 compute-0 nova_compute[257802]: 2025-10-02 13:33:04.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:04 compute-0 ceph-mon[73607]: pgmap v4062: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4063: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:33:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:05.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:33:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:05.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:05 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2099059103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:33:06 compute-0 ceph-mon[73607]: pgmap v4063: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:06 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4110316203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:33:07 compute-0 ceph-mgr[73901]: [devicehealth INFO root] Check health
Oct 02 13:33:07 compute-0 nova_compute[257802]: 2025-10-02 13:33:07.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:07 compute-0 nova_compute[257802]: 2025-10-02 13:33:07.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:33:07 compute-0 nova_compute[257802]: 2025-10-02 13:33:07.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:33:07 compute-0 nova_compute[257802]: 2025-10-02 13:33:07.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:33:07 compute-0 nova_compute[257802]: 2025-10-02 13:33:07.115 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:33:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4064: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:07.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:07.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:08 compute-0 ceph-mon[73607]: pgmap v4064: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4065: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:09.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:09 compute-0 nova_compute[257802]: 2025-10-02 13:33:09.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:09.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:10 compute-0 ceph-mon[73607]: pgmap v4065: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4066: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:11.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:33:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:11.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:33:12 compute-0 nova_compute[257802]: 2025-10-02 13:33:12.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:33:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:33:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:33:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:33:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:33:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:33:12 compute-0 ceph-mon[73607]: pgmap v4066: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4067: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:13.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:13.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:14 compute-0 nova_compute[257802]: 2025-10-02 13:33:14.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:14 compute-0 ceph-mon[73607]: pgmap v4067: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:15 compute-0 nova_compute[257802]: 2025-10-02 13:33:15.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:33:15 compute-0 nova_compute[257802]: 2025-10-02 13:33:15.212 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:33:15 compute-0 nova_compute[257802]: 2025-10-02 13:33:15.212 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:33:15 compute-0 nova_compute[257802]: 2025-10-02 13:33:15.212 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:33:15 compute-0 nova_compute[257802]: 2025-10-02 13:33:15.213 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:33:15 compute-0 nova_compute[257802]: 2025-10-02 13:33:15.213 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:33:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4068: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:15.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:15.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:33:15 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1516327668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:33:15 compute-0 nova_compute[257802]: 2025-10-02 13:33:15.620 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:33:15 compute-0 nova_compute[257802]: 2025-10-02 13:33:15.782 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:33:15 compute-0 nova_compute[257802]: 2025-10-02 13:33:15.784 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4128MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:33:15 compute-0 nova_compute[257802]: 2025-10-02 13:33:15.784 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:33:15 compute-0 nova_compute[257802]: 2025-10-02 13:33:15.784 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:33:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1516327668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:33:16 compute-0 nova_compute[257802]: 2025-10-02 13:33:16.045 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:33:16 compute-0 nova_compute[257802]: 2025-10-02 13:33:16.045 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:33:16 compute-0 nova_compute[257802]: 2025-10-02 13:33:16.085 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:33:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:33:16 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1346137231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:33:16 compute-0 nova_compute[257802]: 2025-10-02 13:33:16.503 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:33:16 compute-0 nova_compute[257802]: 2025-10-02 13:33:16.508 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:33:16 compute-0 nova_compute[257802]: 2025-10-02 13:33:16.555 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:33:16 compute-0 nova_compute[257802]: 2025-10-02 13:33:16.556 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:33:16 compute-0 nova_compute[257802]: 2025-10-02 13:33:16.556 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:33:16 compute-0 ceph-mon[73607]: pgmap v4068: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1346137231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:33:17 compute-0 nova_compute[257802]: 2025-10-02 13:33:17.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4069: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:33:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:17.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:33:17 compute-0 sudo[439167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:17 compute-0 sudo[439167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:17 compute-0 sudo[439167]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:33:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:17.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:33:17 compute-0 sudo[439210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:17 compute-0 sudo[439210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:17 compute-0 sudo[439210]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:17 compute-0 podman[439191]: 2025-10-02 13:33:17.592748197 +0000 UTC m=+0.062329265 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct 02 13:33:17 compute-0 podman[439192]: 2025-10-02 13:33:17.593129857 +0000 UTC m=+0.061351972 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 02 13:33:17 compute-0 podman[439193]: 2025-10-02 13:33:17.613611898 +0000 UTC m=+0.080366257 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 13:33:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:18 compute-0 ceph-mon[73607]: pgmap v4069: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4070: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:19.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:19 compute-0 nova_compute[257802]: 2025-10-02 13:33:19.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:19.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:21 compute-0 ceph-mon[73607]: pgmap v4070: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4071: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:21.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:21.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:22 compute-0 nova_compute[257802]: 2025-10-02 13:33:22.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:23 compute-0 ceph-mon[73607]: pgmap v4071: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #207. Immutable memtables: 0.
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:33:23.030504) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:856] [default] [JOB 129] Flushing memtable with next log file: 207
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412003030545, "job": 129, "event": "flush_started", "num_memtables": 1, "num_entries": 1305, "num_deletes": 251, "total_data_size": 2219326, "memory_usage": 2265520, "flush_reason": "Manual Compaction"}
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:885] [default] [JOB 129] Level-0 flush table #208: started
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412003047297, "cf_name": "default", "job": 129, "event": "table_file_creation", "file_number": 208, "file_size": 2171920, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 88956, "largest_seqno": 90260, "table_properties": {"data_size": 2165786, "index_size": 3396, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12988, "raw_average_key_size": 19, "raw_value_size": 2153502, "raw_average_value_size": 3313, "num_data_blocks": 151, "num_entries": 650, "num_filter_entries": 650, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411879, "oldest_key_time": 1759411879, "file_creation_time": 1759412003, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 208, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 129] Flush lasted 16832 microseconds, and 4924 cpu microseconds.
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:33:23.047335) [db/flush_job.cc:967] [default] [JOB 129] Level-0 flush table #208: 2171920 bytes OK
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:33:23.047358) [db/memtable_list.cc:519] [default] Level-0 commit table #208 started
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:33:23.050197) [db/memtable_list.cc:722] [default] Level-0 commit table #208: memtable #1 done
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:33:23.050211) EVENT_LOG_v1 {"time_micros": 1759412003050206, "job": 129, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:33:23.050226) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 129] Try to delete WAL files size 2213672, prev total WAL file size 2213672, number of live WAL files 2.
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000204.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:33:23.050973) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038353334' seq:72057594037927935, type:22 .. '7061786F730038373836' seq:0, type:0; will stop at (end)
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 130] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 129 Base level 0, inputs: [208(2121KB)], [206(11MB)]
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412003051026, "job": 130, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [208], "files_L6": [206], "score": -1, "input_data_size": 14540134, "oldest_snapshot_seqno": -1}
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 130] Generated table #209: 11633 keys, 12557707 bytes, temperature: kUnknown
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412003133721, "cf_name": "default", "job": 130, "event": "table_file_creation", "file_number": 209, "file_size": 12557707, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12486431, "index_size": 41083, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29125, "raw_key_size": 309003, "raw_average_key_size": 26, "raw_value_size": 12286805, "raw_average_value_size": 1056, "num_data_blocks": 1544, "num_entries": 11633, "num_filter_entries": 11633, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404644, "oldest_key_time": 0, "file_creation_time": 1759412003, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf9dce4d-b8a2-404e-8460-1e42af6fea10", "db_session_id": "6KM81CTAEWLQE5MWBG31", "orig_file_number": 209, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:33:23.134006) [db/compaction/compaction_job.cc:1663] [default] [JOB 130] Compacted 1@0 + 1@6 files to L6 => 12557707 bytes
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:33:23.135334) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.7 rd, 151.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 11.8 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(12.5) write-amplify(5.8) OK, records in: 12150, records dropped: 517 output_compression: NoCompression
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:33:23.135350) EVENT_LOG_v1 {"time_micros": 1759412003135342, "job": 130, "event": "compaction_finished", "compaction_time_micros": 82773, "compaction_time_cpu_micros": 29255, "output_level": 6, "num_output_files": 1, "total_output_size": 12557707, "num_input_records": 12150, "num_output_records": 11633, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000208.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412003135803, "job": 130, "event": "table_file_deletion", "file_number": 208}
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000206.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412003138203, "job": 130, "event": "table_file_deletion", "file_number": 206}
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:33:23.050867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:33:23.138270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:33:23.138274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:33:23.138276) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:33:23.138278) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:33:23 compute-0 ceph-mon[73607]: rocksdb: (Original Log Time 2025/10/02-13:33:23.138280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:33:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4072: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:33:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:23.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:33:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:23.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:24 compute-0 nova_compute[257802]: 2025-10-02 13:33:24.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:25 compute-0 ceph-mon[73607]: pgmap v4072: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4073: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:25.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:25.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:33:27.019 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:33:27.020 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:33:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:33:27.020 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:33:27 compute-0 nova_compute[257802]: 2025-10-02 13:33:27.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:27 compute-0 ceph-mon[73607]: pgmap v4073: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4074: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:33:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:27.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:33:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:27.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:29 compute-0 ceph-mon[73607]: pgmap v4074: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4075: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:33:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:29.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:33:29 compute-0 nova_compute[257802]: 2025-10-02 13:33:29.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:29.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:29 compute-0 podman[439277]: 2025-10-02 13:33:29.929624732 +0000 UTC m=+0.073640233 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:33:31 compute-0 ceph-mon[73607]: pgmap v4075: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:31 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4076: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:31.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:31 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:31 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:31 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:31.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:32 compute-0 nova_compute[257802]: 2025-10-02 13:33:32.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:33 compute-0 ceph-mon[73607]: pgmap v4076: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:33 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:33 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4077: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:33.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:33 compute-0 nova_compute[257802]: 2025-10-02 13:33:33.557 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:33:33 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:33 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:33 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:33.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:34 compute-0 nova_compute[257802]: 2025-10-02 13:33:34.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:35 compute-0 ceph-mon[73607]: pgmap v4077: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:35 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4078: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:33:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:35.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:33:35 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:35 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:35 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:35.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:37 compute-0 nova_compute[257802]: 2025-10-02 13:33:37.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:37 compute-0 ceph-mon[73607]: pgmap v4078: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:37 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4079: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:37.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:37 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:37 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:37 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:37.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:37 compute-0 sudo[439307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:37 compute-0 sudo[439307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:37 compute-0 sudo[439307]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:37 compute-0 sudo[439332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:37 compute-0 sudo[439332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:37 compute-0 sudo[439332]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:38 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:39 compute-0 ceph-mon[73607]: pgmap v4079: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:39 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4080: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:33:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:39.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:33:39 compute-0 nova_compute[257802]: 2025-10-02 13:33:39.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:39 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:39 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 13:33:39 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:39.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 13:33:41 compute-0 ceph-mon[73607]: pgmap v4080: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:41 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4081: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:41.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:41 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:41 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:41 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:41.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:42 compute-0 nova_compute[257802]: 2025-10-02 13:33:42.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-02_13:33:42
Oct 02 13:33:42 compute-0 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:33:42 compute-0 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 02 13:33:42 compute-0 ceph-mgr[73901]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'volumes', '.mgr', '.rgw.root', 'images', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log']
Oct 02 13:33:42 compute-0 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:33:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:33:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:33:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:33:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:33:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:33:42 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:33:43 compute-0 ceph-mon[73607]: pgmap v4081: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:43 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:43 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4082: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:33:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:43.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:33:43 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:43 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:33:43 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:43.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:33:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:33:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:33:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:33:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:33:43 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:33:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:33:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:33:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:33:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:33:44 compute-0 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:33:44 compute-0 nova_compute[257802]: 2025-10-02 13:33:44.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:45 compute-0 ceph-mon[73607]: pgmap v4082: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:45 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4083: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:45.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:45 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:45 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:45 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:45.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:47 compute-0 nova_compute[257802]: 2025-10-02 13:33:47.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:47 compute-0 ceph-mon[73607]: pgmap v4083: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:47 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4084: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:47.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:47 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:47 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:47 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:47.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:47 compute-0 podman[439362]: 2025-10-02 13:33:47.907882698 +0000 UTC m=+0.048883437 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 13:33:47 compute-0 podman[439363]: 2025-10-02 13:33:47.918762305 +0000 UTC m=+0.056090274 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd)
Oct 02 13:33:47 compute-0 podman[439364]: 2025-10-02 13:33:47.923763897 +0000 UTC m=+0.057991950 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:33:48 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:49 compute-0 nova_compute[257802]: 2025-10-02 13:33:49.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:33:49 compute-0 nova_compute[257802]: 2025-10-02 13:33:49.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:33:49 compute-0 nova_compute[257802]: 2025-10-02 13:33:49.098 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:33:49 compute-0 ceph-mon[73607]: pgmap v4084: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:49 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4085: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:49.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:49 compute-0 nova_compute[257802]: 2025-10-02 13:33:49.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:49 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:49 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:49 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:49.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:50 compute-0 sudo[439421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:50 compute-0 sudo[439421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:50 compute-0 sudo[439421]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:50 compute-0 sudo[439446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:33:50 compute-0 sudo[439446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:50 compute-0 sudo[439446]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:50 compute-0 sudo[439471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:50 compute-0 sudo[439471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:50 compute-0 sudo[439471]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:50 compute-0 sudo[439496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:33:50 compute-0 sudo[439496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:50 compute-0 sudo[439496]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:33:51 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:33:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:33:51 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:33:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:33:51 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:33:51 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 8d8f9ed1-f791-471c-8d07-caa7c1f24d8a does not exist
Oct 02 13:33:51 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev bcff07e5-c6f7-41f5-94ce-02ff2d9d1984 does not exist
Oct 02 13:33:51 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 42e9b0c7-fe1a-49f7-92a0-5cf512aebc8d does not exist
Oct 02 13:33:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:33:51 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:33:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:33:51 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:33:51 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:33:51 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:33:51 compute-0 sudo[439554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:51 compute-0 sudo[439554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:51 compute-0 sudo[439554]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:51 compute-0 sudo[439579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:33:51 compute-0 sudo[439579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:51 compute-0 sudo[439579]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:51 compute-0 ceph-mon[73607]: pgmap v4085: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:33:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:33:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:33:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:33:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:33:51 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:33:51 compute-0 sudo[439604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:51 compute-0 sudo[439604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:51 compute-0 sudo[439604]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:51 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4086: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:51 compute-0 sudo[439629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:33:51 compute-0 sudo[439629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:51.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:51 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:51 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:51 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:51.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:51 compute-0 podman[439695]: 2025-10-02 13:33:51.637739231 +0000 UTC m=+0.043410663 container create 3d4b0ea835f0f877c5cbf7ddd42a00fbd282be61a455b0e112af18bf9920f40c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 13:33:51 compute-0 systemd[1]: Started libpod-conmon-3d4b0ea835f0f877c5cbf7ddd42a00fbd282be61a455b0e112af18bf9920f40c.scope.
Oct 02 13:33:51 compute-0 podman[439695]: 2025-10-02 13:33:51.61727884 +0000 UTC m=+0.022950292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:33:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:33:51 compute-0 podman[439695]: 2025-10-02 13:33:51.743697624 +0000 UTC m=+0.149369116 container init 3d4b0ea835f0f877c5cbf7ddd42a00fbd282be61a455b0e112af18bf9920f40c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kare, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 13:33:51 compute-0 podman[439695]: 2025-10-02 13:33:51.75210861 +0000 UTC m=+0.157780042 container start 3d4b0ea835f0f877c5cbf7ddd42a00fbd282be61a455b0e112af18bf9920f40c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kare, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 13:33:51 compute-0 podman[439695]: 2025-10-02 13:33:51.755689057 +0000 UTC m=+0.161360519 container attach 3d4b0ea835f0f877c5cbf7ddd42a00fbd282be61a455b0e112af18bf9920f40c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:33:51 compute-0 determined_kare[439712]: 167 167
Oct 02 13:33:51 compute-0 systemd[1]: libpod-3d4b0ea835f0f877c5cbf7ddd42a00fbd282be61a455b0e112af18bf9920f40c.scope: Deactivated successfully.
Oct 02 13:33:51 compute-0 podman[439695]: 2025-10-02 13:33:51.762244068 +0000 UTC m=+0.167915500 container died 3d4b0ea835f0f877c5cbf7ddd42a00fbd282be61a455b0e112af18bf9920f40c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kare, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 13:33:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-60e8d3010d988b0b17ad9fb3baab5819d56db20fc582021c8cf1cc0d8eef80a3-merged.mount: Deactivated successfully.
Oct 02 13:33:51 compute-0 podman[439695]: 2025-10-02 13:33:51.806289055 +0000 UTC m=+0.211960487 container remove 3d4b0ea835f0f877c5cbf7ddd42a00fbd282be61a455b0e112af18bf9920f40c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:33:51 compute-0 systemd[1]: libpod-conmon-3d4b0ea835f0f877c5cbf7ddd42a00fbd282be61a455b0e112af18bf9920f40c.scope: Deactivated successfully.
Oct 02 13:33:51 compute-0 podman[439735]: 2025-10-02 13:33:51.98305458 +0000 UTC m=+0.042135412 container create 94040e725197467dd1a7f2740532ebf54a5877f0322d59252c11e7fe2214ef57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:33:52 compute-0 systemd[1]: Started libpod-conmon-94040e725197467dd1a7f2740532ebf54a5877f0322d59252c11e7fe2214ef57.scope.
Oct 02 13:33:52 compute-0 podman[439735]: 2025-10-02 13:33:51.964012685 +0000 UTC m=+0.023093537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:33:52 compute-0 nova_compute[257802]: 2025-10-02 13:33:52.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:33:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fc2e1430b676d0bd9e1276608a8782829951413d65b2d01233999435ad12987/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:33:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fc2e1430b676d0bd9e1276608a8782829951413d65b2d01233999435ad12987/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:33:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fc2e1430b676d0bd9e1276608a8782829951413d65b2d01233999435ad12987/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:33:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fc2e1430b676d0bd9e1276608a8782829951413d65b2d01233999435ad12987/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:33:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fc2e1430b676d0bd9e1276608a8782829951413d65b2d01233999435ad12987/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:33:52 compute-0 podman[439735]: 2025-10-02 13:33:52.112638331 +0000 UTC m=+0.171719203 container init 94040e725197467dd1a7f2740532ebf54a5877f0322d59252c11e7fe2214ef57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_kalam, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:33:52 compute-0 podman[439735]: 2025-10-02 13:33:52.123006394 +0000 UTC m=+0.182087236 container start 94040e725197467dd1a7f2740532ebf54a5877f0322d59252c11e7fe2214ef57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_kalam, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 13:33:52 compute-0 podman[439735]: 2025-10-02 13:33:52.127158866 +0000 UTC m=+0.186239718 container attach 94040e725197467dd1a7f2740532ebf54a5877f0322d59252c11e7fe2214ef57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_kalam, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:33:52 compute-0 priceless_kalam[439751]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:33:52 compute-0 priceless_kalam[439751]: --> relative data size: 1.0
Oct 02 13:33:52 compute-0 priceless_kalam[439751]: --> All data devices are unavailable
Oct 02 13:33:53 compute-0 systemd[1]: libpod-94040e725197467dd1a7f2740532ebf54a5877f0322d59252c11e7fe2214ef57.scope: Deactivated successfully.
Oct 02 13:33:53 compute-0 podman[439735]: 2025-10-02 13:33:53.032695133 +0000 UTC m=+1.091775965 container died 94040e725197467dd1a7f2740532ebf54a5877f0322d59252c11e7fe2214ef57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_kalam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 13:33:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fc2e1430b676d0bd9e1276608a8782829951413d65b2d01233999435ad12987-merged.mount: Deactivated successfully.
Oct 02 13:33:53 compute-0 podman[439735]: 2025-10-02 13:33:53.113155993 +0000 UTC m=+1.172236835 container remove 94040e725197467dd1a7f2740532ebf54a5877f0322d59252c11e7fe2214ef57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_kalam, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 13:33:53 compute-0 systemd[1]: libpod-conmon-94040e725197467dd1a7f2740532ebf54a5877f0322d59252c11e7fe2214ef57.scope: Deactivated successfully.
Oct 02 13:33:53 compute-0 sudo[439629]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:53 compute-0 sudo[439780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:53 compute-0 sudo[439780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:53 compute-0 sudo[439780]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:53 compute-0 ceph-mon[73607]: pgmap v4086: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:53 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:53 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4087: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:53 compute-0 sudo[439805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:33:53 compute-0 sudo[439805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:53 compute-0 sudo[439805]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:53 compute-0 sudo[439830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:53 compute-0 sudo[439830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:53 compute-0 sudo[439830]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:53.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:53 compute-0 sudo[439855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- lvm list --format json
Oct 02 13:33:53 compute-0 sudo[439855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:53 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:53 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:53 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:53.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:53 compute-0 podman[439922]: 2025-10-02 13:33:53.745709419 +0000 UTC m=+0.032590588 container create 84965718de341fc95248ae3969e5b56834c289a5a1c69ceda45f7b7ac94a4aac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mendel, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:33:53 compute-0 systemd[1]: Started libpod-conmon-84965718de341fc95248ae3969e5b56834c289a5a1c69ceda45f7b7ac94a4aac.scope.
Oct 02 13:33:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:33:53 compute-0 podman[439922]: 2025-10-02 13:33:53.809143432 +0000 UTC m=+0.096024621 container init 84965718de341fc95248ae3969e5b56834c289a5a1c69ceda45f7b7ac94a4aac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 13:33:53 compute-0 podman[439922]: 2025-10-02 13:33:53.81520772 +0000 UTC m=+0.102088889 container start 84965718de341fc95248ae3969e5b56834c289a5a1c69ceda45f7b7ac94a4aac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 13:33:53 compute-0 podman[439922]: 2025-10-02 13:33:53.818757707 +0000 UTC m=+0.105638906 container attach 84965718de341fc95248ae3969e5b56834c289a5a1c69ceda45f7b7ac94a4aac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mendel, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:33:53 compute-0 interesting_mendel[439939]: 167 167
Oct 02 13:33:53 compute-0 systemd[1]: libpod-84965718de341fc95248ae3969e5b56834c289a5a1c69ceda45f7b7ac94a4aac.scope: Deactivated successfully.
Oct 02 13:33:53 compute-0 podman[439922]: 2025-10-02 13:33:53.820228173 +0000 UTC m=+0.107109342 container died 84965718de341fc95248ae3969e5b56834c289a5a1c69ceda45f7b7ac94a4aac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mendel, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Oct 02 13:33:53 compute-0 podman[439922]: 2025-10-02 13:33:53.73141615 +0000 UTC m=+0.018297339 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:33:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-25f0ffcf037002d13cb15480fd531ac235b6fc2626bf12ef80dae39e27fcdf67-merged.mount: Deactivated successfully.
Oct 02 13:33:53 compute-0 podman[439922]: 2025-10-02 13:33:53.860140559 +0000 UTC m=+0.147021728 container remove 84965718de341fc95248ae3969e5b56834c289a5a1c69ceda45f7b7ac94a4aac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 13:33:53 compute-0 systemd[1]: libpod-conmon-84965718de341fc95248ae3969e5b56834c289a5a1c69ceda45f7b7ac94a4aac.scope: Deactivated successfully.
Oct 02 13:33:54 compute-0 podman[439966]: 2025-10-02 13:33:54.025644359 +0000 UTC m=+0.042845109 container create 3c3d03e54b7737d3c49ec0951a03c19472ed2130d212b19f36017398e8a496be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_morse, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct 02 13:33:54 compute-0 systemd[1]: Started libpod-conmon-3c3d03e54b7737d3c49ec0951a03c19472ed2130d212b19f36017398e8a496be.scope.
Oct 02 13:33:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:33:54 compute-0 podman[439966]: 2025-10-02 13:33:54.005779593 +0000 UTC m=+0.022980393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:33:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d0a76e36f5a169139d641aaf27bb00be2b670490a4339163677aefb0f9d72eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:33:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d0a76e36f5a169139d641aaf27bb00be2b670490a4339163677aefb0f9d72eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:33:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d0a76e36f5a169139d641aaf27bb00be2b670490a4339163677aefb0f9d72eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:33:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d0a76e36f5a169139d641aaf27bb00be2b670490a4339163677aefb0f9d72eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:33:54 compute-0 podman[439966]: 2025-10-02 13:33:54.114466263 +0000 UTC m=+0.131667053 container init 3c3d03e54b7737d3c49ec0951a03c19472ed2130d212b19f36017398e8a496be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_morse, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:33:54 compute-0 podman[439966]: 2025-10-02 13:33:54.121125856 +0000 UTC m=+0.138326606 container start 3c3d03e54b7737d3c49ec0951a03c19472ed2130d212b19f36017398e8a496be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_morse, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:33:54 compute-0 podman[439966]: 2025-10-02 13:33:54.124892857 +0000 UTC m=+0.142093657 container attach 3c3d03e54b7737d3c49ec0951a03c19472ed2130d212b19f36017398e8a496be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_morse, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 13:33:54 compute-0 nova_compute[257802]: 2025-10-02 13:33:54.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:54 compute-0 keen_morse[439982]: {
Oct 02 13:33:54 compute-0 keen_morse[439982]:     "1": [
Oct 02 13:33:54 compute-0 keen_morse[439982]:         {
Oct 02 13:33:54 compute-0 keen_morse[439982]:             "devices": [
Oct 02 13:33:54 compute-0 keen_morse[439982]:                 "/dev/loop3"
Oct 02 13:33:54 compute-0 keen_morse[439982]:             ],
Oct 02 13:33:54 compute-0 keen_morse[439982]:             "lv_name": "ceph_lv0",
Oct 02 13:33:54 compute-0 keen_morse[439982]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:33:54 compute-0 keen_morse[439982]:             "lv_size": "7511998464",
Oct 02 13:33:54 compute-0 keen_morse[439982]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca035294-3f2d-465d-b3e6-43971a2c0201,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:33:54 compute-0 keen_morse[439982]:             "lv_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:33:54 compute-0 keen_morse[439982]:             "name": "ceph_lv0",
Oct 02 13:33:54 compute-0 keen_morse[439982]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:33:54 compute-0 keen_morse[439982]:             "tags": {
Oct 02 13:33:54 compute-0 keen_morse[439982]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:33:54 compute-0 keen_morse[439982]:                 "ceph.block_uuid": "VPNiNa-ljkG-wj15-rZkd-fCgC-2dGy-AFfxCK",
Oct 02 13:33:54 compute-0 keen_morse[439982]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:33:54 compute-0 keen_morse[439982]:                 "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:33:54 compute-0 keen_morse[439982]:                 "ceph.cluster_name": "ceph",
Oct 02 13:33:54 compute-0 keen_morse[439982]:                 "ceph.crush_device_class": "",
Oct 02 13:33:54 compute-0 keen_morse[439982]:                 "ceph.encrypted": "0",
Oct 02 13:33:54 compute-0 keen_morse[439982]:                 "ceph.osd_fsid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:33:54 compute-0 keen_morse[439982]:                 "ceph.osd_id": "1",
Oct 02 13:33:54 compute-0 keen_morse[439982]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:33:54 compute-0 keen_morse[439982]:                 "ceph.type": "block",
Oct 02 13:33:54 compute-0 keen_morse[439982]:                 "ceph.vdo": "0"
Oct 02 13:33:54 compute-0 keen_morse[439982]:             },
Oct 02 13:33:54 compute-0 keen_morse[439982]:             "type": "block",
Oct 02 13:33:54 compute-0 keen_morse[439982]:             "vg_name": "ceph_vg0"
Oct 02 13:33:54 compute-0 keen_morse[439982]:         }
Oct 02 13:33:54 compute-0 keen_morse[439982]:     ]
Oct 02 13:33:54 compute-0 keen_morse[439982]: }
Oct 02 13:33:54 compute-0 systemd[1]: libpod-3c3d03e54b7737d3c49ec0951a03c19472ed2130d212b19f36017398e8a496be.scope: Deactivated successfully.
Oct 02 13:33:54 compute-0 podman[439992]: 2025-10-02 13:33:54.9223193 +0000 UTC m=+0.022976234 container died 3c3d03e54b7737d3c49ec0951a03c19472ed2130d212b19f36017398e8a496be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 13:33:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d0a76e36f5a169139d641aaf27bb00be2b670490a4339163677aefb0f9d72eb-merged.mount: Deactivated successfully.
Oct 02 13:33:54 compute-0 podman[439992]: 2025-10-02 13:33:54.968461729 +0000 UTC m=+0.069118633 container remove 3c3d03e54b7737d3c49ec0951a03c19472ed2130d212b19f36017398e8a496be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_morse, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:33:54 compute-0 systemd[1]: libpod-conmon-3c3d03e54b7737d3c49ec0951a03c19472ed2130d212b19f36017398e8a496be.scope: Deactivated successfully.
Oct 02 13:33:54 compute-0 sudo[439855]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:55 compute-0 sudo[440007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:55 compute-0 sudo[440007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:55 compute-0 sudo[440007]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:55 compute-0 nova_compute[257802]: 2025-10-02 13:33:55.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:33:55 compute-0 sudo[440032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:33:55 compute-0 sudo[440032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:55 compute-0 sudo[440032]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:55 compute-0 sudo[440057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:55 compute-0 sudo[440057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:55 compute-0 sudo[440057]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:33:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:33:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:33:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:33:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:33:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:33:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:33:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:33:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:33:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:33:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:33:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:33:55 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 13:33:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:33:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:33:55 compute-0 sudo[440082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 -- raw list --format json
Oct 02 13:33:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:33:55 compute-0 sudo[440082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:33:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:33:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:33:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:33:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:33:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:33:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:33:55 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:33:55 compute-0 ceph-mon[73607]: pgmap v4087: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3140606409' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:33:55 compute-0 ceph-mon[73607]: from='client.? 192.168.122.10:0/3140606409' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:33:55 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4088: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:55.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:55 compute-0 podman[440150]: 2025-10-02 13:33:55.520470985 +0000 UTC m=+0.035322645 container create 71dd5936b485103ebec9894f165aa4279737e6e8bd69717645f25757dbe38cbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_volhard, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:33:55 compute-0 systemd[1]: Started libpod-conmon-71dd5936b485103ebec9894f165aa4279737e6e8bd69717645f25757dbe38cbe.scope.
Oct 02 13:33:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:33:55 compute-0 podman[440150]: 2025-10-02 13:33:55.592887177 +0000 UTC m=+0.107738837 container init 71dd5936b485103ebec9894f165aa4279737e6e8bd69717645f25757dbe38cbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 13:33:55 compute-0 podman[440150]: 2025-10-02 13:33:55.600461042 +0000 UTC m=+0.115312702 container start 71dd5936b485103ebec9894f165aa4279737e6e8bd69717645f25757dbe38cbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:33:55 compute-0 podman[440150]: 2025-10-02 13:33:55.50594703 +0000 UTC m=+0.020798710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:33:55 compute-0 podman[440150]: 2025-10-02 13:33:55.603770083 +0000 UTC m=+0.118621743 container attach 71dd5936b485103ebec9894f165aa4279737e6e8bd69717645f25757dbe38cbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_volhard, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 13:33:55 compute-0 priceless_volhard[440166]: 167 167
Oct 02 13:33:55 compute-0 systemd[1]: libpod-71dd5936b485103ebec9894f165aa4279737e6e8bd69717645f25757dbe38cbe.scope: Deactivated successfully.
Oct 02 13:33:55 compute-0 podman[440150]: 2025-10-02 13:33:55.607103355 +0000 UTC m=+0.121955025 container died 71dd5936b485103ebec9894f165aa4279737e6e8bd69717645f25757dbe38cbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_volhard, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:33:55 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:55 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:33:55 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:55.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:33:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-752328b9fbf8ee219c6fc812e931e5cca83f1ef93ce85eba26b58de95f830586-merged.mount: Deactivated successfully.
Oct 02 13:33:55 compute-0 podman[440150]: 2025-10-02 13:33:55.642103291 +0000 UTC m=+0.156954951 container remove 71dd5936b485103ebec9894f165aa4279737e6e8bd69717645f25757dbe38cbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_volhard, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 13:33:55 compute-0 systemd[1]: libpod-conmon-71dd5936b485103ebec9894f165aa4279737e6e8bd69717645f25757dbe38cbe.scope: Deactivated successfully.
Oct 02 13:33:55 compute-0 podman[440188]: 2025-10-02 13:33:55.789379695 +0000 UTC m=+0.038489022 container create 5a61af22feb5aeefe3cb6bbfd06dceddac4f0b6eea26b12e2d2a98f701db9ef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:33:55 compute-0 systemd[1]: Started libpod-conmon-5a61af22feb5aeefe3cb6bbfd06dceddac4f0b6eea26b12e2d2a98f701db9ef0.scope.
Oct 02 13:33:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:33:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c8f3e710340594554b4b4af7a642299138ec2960728f99658c60e42ee613e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:33:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c8f3e710340594554b4b4af7a642299138ec2960728f99658c60e42ee613e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:33:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c8f3e710340594554b4b4af7a642299138ec2960728f99658c60e42ee613e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:33:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c8f3e710340594554b4b4af7a642299138ec2960728f99658c60e42ee613e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:33:55 compute-0 podman[440188]: 2025-10-02 13:33:55.773813354 +0000 UTC m=+0.022922701 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:33:55 compute-0 podman[440188]: 2025-10-02 13:33:55.881044957 +0000 UTC m=+0.130154314 container init 5a61af22feb5aeefe3cb6bbfd06dceddac4f0b6eea26b12e2d2a98f701db9ef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:33:55 compute-0 podman[440188]: 2025-10-02 13:33:55.888612183 +0000 UTC m=+0.137721510 container start 5a61af22feb5aeefe3cb6bbfd06dceddac4f0b6eea26b12e2d2a98f701db9ef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_easley, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 13:33:55 compute-0 podman[440188]: 2025-10-02 13:33:55.892310024 +0000 UTC m=+0.141419361 container attach 5a61af22feb5aeefe3cb6bbfd06dceddac4f0b6eea26b12e2d2a98f701db9ef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:33:56 compute-0 ceph-mon[73607]: pgmap v4088: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:56 compute-0 vigilant_easley[440204]: {
Oct 02 13:33:56 compute-0 vigilant_easley[440204]:     "ca035294-3f2d-465d-b3e6-43971a2c0201": {
Oct 02 13:33:56 compute-0 vigilant_easley[440204]:         "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct 02 13:33:56 compute-0 vigilant_easley[440204]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:33:56 compute-0 vigilant_easley[440204]:         "osd_id": 1,
Oct 02 13:33:56 compute-0 vigilant_easley[440204]:         "osd_uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201",
Oct 02 13:33:56 compute-0 vigilant_easley[440204]:         "type": "bluestore"
Oct 02 13:33:56 compute-0 vigilant_easley[440204]:     }
Oct 02 13:33:56 compute-0 vigilant_easley[440204]: }
Oct 02 13:33:56 compute-0 systemd[1]: libpod-5a61af22feb5aeefe3cb6bbfd06dceddac4f0b6eea26b12e2d2a98f701db9ef0.scope: Deactivated successfully.
Oct 02 13:33:56 compute-0 podman[440188]: 2025-10-02 13:33:56.73733349 +0000 UTC m=+0.986442827 container died 5a61af22feb5aeefe3cb6bbfd06dceddac4f0b6eea26b12e2d2a98f701db9ef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:33:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-83c8f3e710340594554b4b4af7a642299138ec2960728f99658c60e42ee613e7-merged.mount: Deactivated successfully.
Oct 02 13:33:56 compute-0 podman[440188]: 2025-10-02 13:33:56.790224134 +0000 UTC m=+1.039333461 container remove 5a61af22feb5aeefe3cb6bbfd06dceddac4f0b6eea26b12e2d2a98f701db9ef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_easley, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 13:33:56 compute-0 systemd[1]: libpod-conmon-5a61af22feb5aeefe3cb6bbfd06dceddac4f0b6eea26b12e2d2a98f701db9ef0.scope: Deactivated successfully.
Oct 02 13:33:56 compute-0 sudo[440082]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:33:56 compute-0 sshd-session[440238]: Accepted publickey for zuul from 192.168.122.10 port 49388 ssh2: ECDSA SHA256:fTITq0yWhcfR1B7+nevW6ClbkyOqjAJG01DLp1KXr/U
Oct 02 13:33:56 compute-0 systemd-logind[789]: New session 81 of user zuul.
Oct 02 13:33:56 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:33:56 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:33:56 compute-0 systemd[1]: Started Session 81 of User zuul.
Oct 02 13:33:56 compute-0 sshd-session[440238]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 13:33:57 compute-0 sudo[440242]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 02 13:33:57 compute-0 sudo[440242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 13:33:57 compute-0 ceph-mon[73607]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:33:57 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev 2dd29ddf-bcc4-4efc-a02f-1f3846773a6f does not exist
Oct 02 13:33:57 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev d806f057-5413-4de1-8618-a897fff970fd does not exist
Oct 02 13:33:57 compute-0 ceph-mgr[73901]: [progress WARNING root] complete: ev d3f96068-1cf7-4925-a8c2-c735b9eef27f does not exist
Oct 02 13:33:57 compute-0 nova_compute[257802]: 2025-10-02 13:33:57.093 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:33:57 compute-0 nova_compute[257802]: 2025-10-02 13:33:57.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:57 compute-0 sudo[440269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:57 compute-0 sudo[440269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:57 compute-0 sudo[440269]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:57 compute-0 sudo[440295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:33:57 compute-0 sudo[440295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:57 compute-0 sudo[440295]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:57 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4089: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:57.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:57 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:57 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:33:57 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:57.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:33:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:33:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2526761597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:33:57 compute-0 ceph-mon[73607]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct 02 13:33:57 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2473710781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:33:58 compute-0 sudo[440326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:58 compute-0 sudo[440326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:58 compute-0 sudo[440326]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:58 compute-0 nova_compute[257802]: 2025-10-02 13:33:58.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:33:58 compute-0 sudo[440351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:58 compute-0 sudo[440351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:58 compute-0 sudo[440351]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:58 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:59 compute-0 ceph-mon[73607]: pgmap v4089: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:59 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4090: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:59.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:59 compute-0 nova_compute[257802]: 2025-10-02 13:33:59.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:59 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:33:59 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:33:59 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:59.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:33:59 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.39858 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:33:59 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48473 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:00 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.39864 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:00 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48479 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:00 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 02 13:34:00 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2086056846' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:34:00 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47437 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:00 compute-0 podman[440592]: 2025-10-02 13:34:00.934925098 +0000 UTC m=+0.077751214 container health_status 609b473a04c1a87b28d7d2194c1aff40a166440e594f90b309a8bbe646c6d8c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 13:34:01 compute-0 nova_compute[257802]: 2025-10-02 13:34:01.099 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:34:01 compute-0 ceph-mon[73607]: pgmap v4090: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:01 compute-0 ceph-mon[73607]: from='client.39858 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:01 compute-0 ceph-mon[73607]: from='client.48473 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:01 compute-0 ceph-mon[73607]: from='client.39864 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2086056846' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:34:01 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1382525877' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:34:01 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4091: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:34:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:01.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:01 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47443 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:01 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:34:01 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:01 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:01.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:02 compute-0 nova_compute[257802]: 2025-10-02 13:34:02.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:02 compute-0 ceph-mon[73607]: from='client.48479 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:02 compute-0 ceph-mon[73607]: from='client.47437 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:02 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3741218328' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:34:03 compute-0 ceph-mon[73607]: pgmap v4091: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:03 compute-0 ceph-mon[73607]: from='client.47443 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:03 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:34:03 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4092: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:03 compute-0 ovs-vsctl[440654]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 02 13:34:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:34:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:03.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:03 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:34:03 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:34:03 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:03.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:34:04 compute-0 virtqemud[257280]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 02 13:34:04 compute-0 virtqemud[257280]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 02 13:34:04 compute-0 virtqemud[257280]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 02 13:34:04 compute-0 nova_compute[257802]: 2025-10-02 13:34:04.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:04 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj asok_command: cache status {prefix=cache status} (starting...)
Oct 02 13:34:04 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj Can't run that command on an inactive MDS!
Oct 02 13:34:05 compute-0 lvm[440973]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 02 13:34:05 compute-0 lvm[440973]: VG ceph_vg0 finished
Oct 02 13:34:05 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj asok_command: client ls {prefix=client ls} (starting...)
Oct 02 13:34:05 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj Can't run that command on an inactive MDS!
Oct 02 13:34:05 compute-0 ceph-mon[73607]: pgmap v4092: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:05 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4093: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:34:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:34:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:05.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:34:05 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.39882 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:05 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48494 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:05 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:34:05 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:05 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:05.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:05 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj asok_command: damage ls {prefix=damage ls} (starting...)
Oct 02 13:34:05 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj Can't run that command on an inactive MDS!
Oct 02 13:34:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 02 13:34:05 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3882036591' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:34:05 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj asok_command: dump loads {prefix=dump loads} (starting...)
Oct 02 13:34:05 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj Can't run that command on an inactive MDS!
Oct 02 13:34:05 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 02 13:34:05 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:34:05 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.39897 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:05 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48506 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:06 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 02 13:34:06 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj Can't run that command on an inactive MDS!
Oct 02 13:34:06 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 02 13:34:06 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj Can't run that command on an inactive MDS!
Oct 02 13:34:06 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3882036591' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:34:06 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2173317173' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:34:06 compute-0 ceph-mon[73607]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:34:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:34:06 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4000952882' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:34:06 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 02 13:34:06 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj Can't run that command on an inactive MDS!
Oct 02 13:34:06 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 02 13:34:06 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj Can't run that command on an inactive MDS!
Oct 02 13:34:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct 02 13:34:06 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3714866568' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 13:34:06 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 02 13:34:06 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj Can't run that command on an inactive MDS!
Oct 02 13:34:06 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.39924 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:06 compute-0 ceph-mgr[73901]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:34:06 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T13:34:06.715+0000 7f4c74221640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:34:06 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 02 13:34:06 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj Can't run that command on an inactive MDS!
Oct 02 13:34:06 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48536 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:06 compute-0 ceph-mgr[73901]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:34:06 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T13:34:06.786+0000 7f4c74221640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:34:06 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct 02 13:34:06 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3216607763' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 13:34:07 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj asok_command: ops {prefix=ops} (starting...)
Oct 02 13:34:07 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj Can't run that command on an inactive MDS!
Oct 02 13:34:07 compute-0 nova_compute[257802]: 2025-10-02 13:34:07.098 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:34:07 compute-0 nova_compute[257802]: 2025-10-02 13:34:07.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:34:07 compute-0 nova_compute[257802]: 2025-10-02 13:34:07.099 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:34:07 compute-0 nova_compute[257802]: 2025-10-02 13:34:07.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct 02 13:34:07 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3317717122' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 13:34:07 compute-0 ceph-mon[73607]: pgmap v4093: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:07 compute-0 ceph-mon[73607]: from='client.39882 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:07 compute-0 ceph-mon[73607]: from='client.48494 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:07 compute-0 ceph-mon[73607]: from='client.39897 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:07 compute-0 ceph-mon[73607]: from='client.48506 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4000952882' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:34:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3186655812' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:34:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4026358957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:34:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3714866568' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 13:34:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3726434202' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 13:34:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3216607763' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 13:34:07 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3317717122' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 13:34:07 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4094: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:07 compute-0 nova_compute[257802]: 2025-10-02 13:34:07.321 2 DEBUG nova.compute.manager [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:34:07 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.39942 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:34:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:07.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 02 13:34:07 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/542708639' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:34:07 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47461 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:07 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48563 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:07 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:34:07 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:34:07 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:07.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:34:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 02 13:34:07 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/966613960' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:34:07 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.39963 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:07 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj asok_command: session ls {prefix=session ls} (starting...)
Oct 02 13:34:07 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj Can't run that command on an inactive MDS!
Oct 02 13:34:07 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 02 13:34:07 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:34:07 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.odxjnj asok_command: status {prefix=status} (starting...)
Oct 02 13:34:07 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48575 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:08 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47479 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 02 13:34:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2373515731' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:34:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 02 13:34:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2559519314' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:34:08 compute-0 ceph-mon[73607]: from='client.39924 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:08 compute-0 ceph-mon[73607]: from='client.48536 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/460104981' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 13:34:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1006836127' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 13:34:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3744183020' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:34:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/542708639' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:34:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/966613960' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:34:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1711372420' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:34:08 compute-0 ceph-mon[73607]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:34:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2373515731' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:34:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1188429990' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:34:08 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2559519314' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:34:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:34:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 02 13:34:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:34:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 02 13:34:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3886967079' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:34:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct 02 13:34:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/284144720' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 13:34:08 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47506 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:08 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T13:34:08.800+0000 7f4c74221640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:34:08 compute-0 ceph-mgr[73901]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:34:08 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 13:34:08 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4044044873' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:34:08 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.40014 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:08 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T13:34:08.998+0000 7f4c74221640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 13:34:08 compute-0 ceph-mgr[73901]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 13:34:09 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48629 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:09 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T13:34:09.216+0000 7f4c74221640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 13:34:09 compute-0 ceph-mgr[73901]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 13:34:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 02 13:34:09 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4262780911' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 13:34:09 compute-0 ceph-mon[73607]: pgmap v4094: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:09 compute-0 ceph-mon[73607]: from='client.39942 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:09 compute-0 ceph-mon[73607]: from='client.47461 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:09 compute-0 ceph-mon[73607]: from='client.48563 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:09 compute-0 ceph-mon[73607]: from='client.39963 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:09 compute-0 ceph-mon[73607]: from='client.48575 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:09 compute-0 ceph-mon[73607]: from='client.47479 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1601998562' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:34:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1775481542' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:34:09 compute-0 ceph-mon[73607]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:34:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3886967079' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:34:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2508060939' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:34:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/284144720' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 13:34:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/397104129' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 13:34:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2180784943' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 13:34:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4044044873' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:34:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3463687866' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:34:09 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1966899698' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 13:34:09 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4095: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:34:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:09.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct 02 13:34:09 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1942682815' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 13:34:09 compute-0 nova_compute[257802]: 2025-10-02 13:34:09.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:09 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:34:09 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:34:09 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:09.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:34:09 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47524 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 02 13:34:09 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2860516388' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:34:09 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct 02 13:34:09 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3792702170' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 13:34:10 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47539 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:10 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.40050 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:10 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48665 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:10 compute-0 ceph-mon[73607]: from='client.47506 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:10 compute-0 ceph-mon[73607]: from='client.40014 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:10 compute-0 ceph-mon[73607]: from='client.48629 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4262780911' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 13:34:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/248155687' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 13:34:10 compute-0 ceph-mon[73607]: pgmap v4095: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2695947281' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 13:34:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1942682815' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 13:34:10 compute-0 ceph-mon[73607]: from='client.47524 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3106330298' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 13:34:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2860516388' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:34:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/944630855' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:34:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/821893910' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:34:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3792702170' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 13:34:10 compute-0 ceph-mon[73607]: from='client.47539 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/190382831' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 13:34:10 compute-0 ceph-mon[73607]: from='client.40050 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:10 compute-0 ceph-mon[73607]: from='client.48665 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:10 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4177335310' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:34:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 02 13:34:10 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3283247565' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:34:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 02 13:34:10 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:34:10 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.40062 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:10 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48686 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:10 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 02 13:34:10 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3527760641' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:34:10 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.40083 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:10 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48701 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:11 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.40104 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 02 13:34:11 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1734950122' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:34:11 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4096: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:11 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47581 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:11 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T13:34:11.315+0000 7f4c74221640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 13:34:11 compute-0 ceph-mgr[73901]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 13:34:11 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48713 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd4f410e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:09.701225+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431136768 unmapped: 68296704 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4633233 data_alloc: 234881024 data_used: 19156992
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:10.701369+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431136768 unmapped: 68296704 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a259a000/0x0/0x1bfc00000, data 0x3233fc5/0x3444000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:11.701439+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431136768 unmapped: 68296704 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a259a000/0x0/0x1bfc00000, data 0x3233fc5/0x3444000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:12.701590+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431136768 unmapped: 68296704 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a259a000/0x0/0x1bfc00000, data 0x3233fc5/0x3444000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:13.701742+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431136768 unmapped: 68296704 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:14.701911+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431136768 unmapped: 68296704 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4633233 data_alloc: 234881024 data_used: 19156992
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:15.702050+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431136768 unmapped: 68296704 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76af000 session 0x55bcd7eb50e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bceb79c800 session 0x55bcd81c05a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a259a000/0x0/0x1bfc00000, data 0x3233fc5/0x3444000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:16.702165+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 427491328 unmapped: 71942144 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd7804960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:17.702308+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a34b3000/0x0/0x1bfc00000, data 0x231bfb5/0x252b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426844160 unmapped: 72589312 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:18.702455+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426844160 unmapped: 72589312 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:19.702573+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426844160 unmapped: 72589312 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4450620 data_alloc: 218103808 data_used: 9314304
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:20.702699+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426844160 unmapped: 72589312 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:21.702936+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426844160 unmapped: 72589312 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a34b3000/0x0/0x1bfc00000, data 0x231bfb5/0x252b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:22.703081+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426844160 unmapped: 72589312 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:23.703241+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426844160 unmapped: 72589312 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd6812960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:24.703389+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426844160 unmapped: 72589312 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4450620 data_alloc: 218103808 data_used: 9314304
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:25.703534+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426844160 unmapped: 72589312 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:26.703682+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426852352 unmapped: 72581120 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bceb79d400 session 0x55bcd7211860
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.247697830s of 19.366950989s, submitted: 37
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bce00e3400 session 0x55bcd5535e00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:27.703867+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a34b3000/0x0/0x1bfc00000, data 0x231bfb5/0x252b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426852352 unmapped: 72581120 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:28.704311+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426852352 unmapped: 72581120 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:29.705137+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423313408 unmapped: 76120064 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4322784 data_alloc: 218103808 data_used: 7610368
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:30.705350+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd826ad20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:31.705996+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418b000/0x0/0x1bfc00000, data 0x1643f92/0x1852000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:32.706155+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:33.706347+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:34.706484+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4321496 data_alloc: 218103808 data_used: 7606272
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:35.706788+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:36.706914+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:37.707222+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418b000/0x0/0x1bfc00000, data 0x1643f92/0x1852000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:38.707394+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:39.707524+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4321496 data_alloc: 218103808 data_used: 7606272
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:40.707691+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:41.707904+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418b000/0x0/0x1bfc00000, data 0x1643f92/0x1852000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:42.708113+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418b000/0x0/0x1bfc00000, data 0x1643f92/0x1852000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:43.708430+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:44.708647+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4321496 data_alloc: 218103808 data_used: 7606272
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:45.709002+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423346176 unmapped: 76087296 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:46.709189+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423354368 unmapped: 76079104 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:47.709345+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423354368 unmapped: 76079104 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:48.709640+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418b000/0x0/0x1bfc00000, data 0x1643f92/0x1852000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423354368 unmapped: 76079104 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a418b000/0x0/0x1bfc00000, data 0x1643f92/0x1852000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:49.709861+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423354368 unmapped: 76079104 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4321496 data_alloc: 218103808 data_used: 7606272
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd7bf41e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bceb79c800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bceb79c800 session 0x55bcd7804d20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bceb79d400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bceb79d400 session 0x55bcd5211860
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd826a5a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:50.709972+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.051149368s of 23.211111069s, submitted: 39
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd6891a40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd78052c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd68910e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423706624 unmapped: 75726848 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bceb79c800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bceb79d400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bceb79c800 session 0x55bcd81c03c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76af000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76af000 session 0x55bcd7253680
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bceb79d400 session 0x55bcd749cb40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76af000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76af000 session 0x55bcd820b4a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd72154a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd7bf5e00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd7fd6960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:51.710230+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421470208 unmapped: 77963264 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:52.710379+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421470208 unmapped: 77963264 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:53.710610+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421470208 unmapped: 77963264 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:54.710790+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a36b6000/0x0/0x1bfc00000, data 0x2118fa2/0x2328000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421470208 unmapped: 77963264 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4424811 data_alloc: 218103808 data_used: 7610368
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:55.711064+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a36b6000/0x0/0x1bfc00000, data 0x2118fa2/0x2328000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421470208 unmapped: 77963264 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:56.711223+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421470208 unmapped: 77963264 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd7805a40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:57.711379+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd81c1a40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421470208 unmapped: 77963264 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a36b6000/0x0/0x1bfc00000, data 0x2118fa2/0x2328000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd7805860
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76af000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bceb79d400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76af000 session 0x55bcd55874a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bceb79d400 session 0x55bcd7bf5a40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:58.711502+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421273600 unmapped: 78159872 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bceb79d400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:03:59.711655+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421273600 unmapped: 78159872 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4425324 data_alloc: 218103808 data_used: 7614464
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:00.711801+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421093376 unmapped: 78340096 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a36b6000/0x0/0x1bfc00000, data 0x2118fa2/0x2328000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:01.711954+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421093376 unmapped: 78340096 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:02.712089+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421093376 unmapped: 78340096 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:03.712321+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421093376 unmapped: 78340096 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a36b6000/0x0/0x1bfc00000, data 0x2118fa2/0x2328000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:04.712449+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421093376 unmapped: 78340096 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4494924 data_alloc: 234881024 data_used: 17330176
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:05.712586+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421093376 unmapped: 78340096 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a36b6000/0x0/0x1bfc00000, data 0x2118fa2/0x2328000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:06.712719+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421093376 unmapped: 78340096 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:07.712863+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421093376 unmapped: 78340096 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:08.712982+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421093376 unmapped: 78340096 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a36b6000/0x0/0x1bfc00000, data 0x2118fa2/0x2328000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:09.713154+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421093376 unmapped: 78340096 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4494924 data_alloc: 234881024 data_used: 17330176
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.235544205s of 19.393142700s, submitted: 51
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:10.713282+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a36b6000/0x0/0x1bfc00000, data 0x2118fa2/0x2328000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 421093376 unmapped: 78340096 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:11.713405+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 428171264 unmapped: 71262208 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1db2000/0x0/0x1bfc00000, data 0x3606fa2/0x3816000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:12.713521+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431251456 unmapped: 68182016 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:13.713685+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431251456 unmapped: 68182016 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:14.713823+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431251456 unmapped: 68182016 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4678504 data_alloc: 234881024 data_used: 18624512
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1c87000/0x0/0x1bfc00000, data 0x3726fa2/0x3936000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:15.714012+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431251456 unmapped: 68182016 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:16.714148+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431251456 unmapped: 68182016 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1c87000/0x0/0x1bfc00000, data 0x3726fa2/0x3936000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:17.747145+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431251456 unmapped: 68182016 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:18.747289+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431390720 unmapped: 68042752 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:19.747422+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431390720 unmapped: 68042752 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4667948 data_alloc: 234881024 data_used: 18624512
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76af000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.733645439s of 10.034718513s, submitted: 451
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76af000 session 0x55bcd81c01e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:20.747587+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431398912 unmapped: 68034560 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:21.747714+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431398912 unmapped: 68034560 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:22.747928+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1c56000/0x0/0x1bfc00000, data 0x3767fb1/0x3978000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431398912 unmapped: 68034560 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bceb79c800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:23.748153+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431398912 unmapped: 68034560 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:24.748288+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 431398912 unmapped: 68034560 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4669736 data_alloc: 234881024 data_used: 18624512
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bceb79d400 session 0x55bcd68130e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd7fcfc20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ec400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:25.748430+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd81ec400 session 0x55bcd4917a40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1c44000/0x0/0x1bfc00000, data 0x3779fb1/0x398a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bceb79c800 session 0x55bcd7486b40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426737664 unmapped: 72695808 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:26.748584+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426745856 unmapped: 72687616 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:27.748726+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd7253860
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd7bf5c20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426745856 unmapped: 72687616 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:28.748866+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd81c1e00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424599552 unmapped: 74833920 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:29.749014+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424599552 unmapped: 74833920 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344424 data_alloc: 218103808 data_used: 7606272
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:30.749187+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424599552 unmapped: 74833920 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:31.749317+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7b000/0x0/0x1bfc00000, data 0x1643fa1/0x1853000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424599552 unmapped: 74833920 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7b000/0x0/0x1bfc00000, data 0x1643fa1/0x1853000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:32.749453+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424599552 unmapped: 74833920 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:33.749627+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424599552 unmapped: 74833920 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:34.749764+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424599552 unmapped: 74833920 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344424 data_alloc: 218103808 data_used: 7606272
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:35.750105+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424599552 unmapped: 74833920 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7b000/0x0/0x1bfc00000, data 0x1643fa1/0x1853000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:36.750243+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424599552 unmapped: 74833920 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:37.750438+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7b000/0x0/0x1bfc00000, data 0x1643fa1/0x1853000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424599552 unmapped: 74833920 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7b000/0x0/0x1bfc00000, data 0x1643fa1/0x1853000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:38.750648+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424599552 unmapped: 74833920 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:39.750890+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424599552 unmapped: 74833920 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344424 data_alloc: 218103808 data_used: 7606272
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:40.751088+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd7208400 session 0x55bcd7fce3c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76af000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424599552 unmapped: 74833920 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:41.751247+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424607744 unmapped: 74825728 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:42.751397+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424607744 unmapped: 74825728 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7b000/0x0/0x1bfc00000, data 0x1643fa1/0x1853000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:43.751554+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424607744 unmapped: 74825728 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd7208400 session 0x55bcd5cd43c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ec400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd81ec400 session 0x55bcd7804960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd75b92c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd7595a40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7b000/0x0/0x1bfc00000, data 0x1643fa1/0x1853000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.821767807s of 24.315164566s, submitted: 73
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:44.751699+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7b000/0x0/0x1bfc00000, data 0x1643fa1/0x1853000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [0,0,0,0,0,1])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd7eb50e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd7208400 session 0x55bcd7219860
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bceb79d400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bceb79d400 session 0x55bcd5587e00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424935424 unmapped: 74498048 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4375227 data_alloc: 218103808 data_used: 7606272
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd7fd7680
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd7c9cd20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:45.751851+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424951808 unmapped: 74481664 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:46.752041+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424951808 unmapped: 74481664 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:47.752260+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424951808 unmapped: 74481664 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3af1000/0x0/0x1bfc00000, data 0x18cdfa1/0x1add000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:48.752512+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424951808 unmapped: 74481664 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:49.752779+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424960000 unmapped: 74473472 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4374215 data_alloc: 218103808 data_used: 7606272
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:50.753019+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424960000 unmapped: 74473472 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:51.753229+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3af1000/0x0/0x1bfc00000, data 0x18cdfa1/0x1add000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424968192 unmapped: 74465280 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd7fcf680
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd7208400 session 0x55bcd7253c20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fe000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd81fe000 session 0x55bcd4f97c20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd6ffe3c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:52.753413+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424976384 unmapped: 74457088 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:53.753665+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424976384 unmapped: 74457088 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:54.753850+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3af0000/0x0/0x1bfc00000, data 0x18cdfb1/0x1ade000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424976384 unmapped: 74457088 heap: 499433472 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4376039 data_alloc: 218103808 data_used: 7606272
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:55.754047+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.576940536s of 11.361857414s, submitted: 25
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd74a43c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 434495488 unmapped: 73334784 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd7208400 session 0x55bcd74a6000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd7fd7c20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae9400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae9400 session 0x55bcd7253c20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae9400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae9400 session 0x55bcd7fd7680
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:56.754180+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd81c03c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd7eb50e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 425017344 unmapped: 82812928 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd826b680
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd7208400 session 0x55bcd7253860
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:57.754331+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 425017344 unmapped: 82812928 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:58.754493+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3118000/0x0/0x1bfc00000, data 0x22a5fb1/0x24b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424869888 unmapped: 82960384 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd81c1860
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:59.754749+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae9400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424869888 unmapped: 82960384 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4463945 data_alloc: 218103808 data_used: 8957952
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:00.754974+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 83025920 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:01.755165+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 83025920 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:02.755393+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 83025920 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:03.755599+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 83025920 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:04.755740+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3117000/0x0/0x1bfc00000, data 0x22a5fd4/0x24b7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 83025920 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4536105 data_alloc: 234881024 data_used: 19079168
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd749d0e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae9400 session 0x55bcd749cb40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:05.755888+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 83025920 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:06.756023+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3117000/0x0/0x1bfc00000, data 0x22a5fd4/0x24b7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 83025920 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:07.756159+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 83025920 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:08.756324+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 83025920 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:09.756491+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.923907280s of 14.041229248s, submitted: 27
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76ac800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3117000/0x0/0x1bfc00000, data 0x22a5fd4/0x24b7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 83025920 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4537017 data_alloc: 234881024 data_used: 19107840
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3117000/0x0/0x1bfc00000, data 0x22a5fd4/0x24b7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [0,2,0,1])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:10.756612+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 428998656 unmapped: 78831616 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:11.756754+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd7208c00 session 0x55bcd7804f00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76ac800 session 0x55bcd7252780
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 429047808 unmapped: 78782464 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:12.756925+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 428376064 unmapped: 79454208 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a2ab9000/0x0/0x1bfc00000, data 0x2903fd4/0x2b15000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:13.757116+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a2ab9000/0x0/0x1bfc00000, data 0x2903fd4/0x2b15000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 428376064 unmapped: 79454208 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:14.757259+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 428376064 unmapped: 79454208 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4589719 data_alloc: 234881024 data_used: 20172800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a2ab9000/0x0/0x1bfc00000, data 0x2903fd4/0x2b15000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:15.757384+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 428376064 unmapped: 79454208 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:16.757542+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 428384256 unmapped: 79446016 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:17.757685+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 428384256 unmapped: 79446016 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd6891e00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd7bf5e00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a2ab9000/0x0/0x1bfc00000, data 0x2903fd4/0x2b15000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:18.757793+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae9400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a2ab9000/0x0/0x1bfc00000, data 0x2903fd4/0x2b15000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423952384 unmapped: 83877888 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd7208c00 session 0x55bcd7c9cd20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:19.757919+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.748688698s of 10.229146004s, submitted: 114
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae9400 session 0x55bcd7487860
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423960576 unmapped: 83869696 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4446863 data_alloc: 218103808 data_used: 10043392
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:20.758017+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423960576 unmapped: 83869696 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:21.758151+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423960576 unmapped: 83869696 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ec000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:22.758227+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423960576 unmapped: 83869696 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:23.758388+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423960576 unmapped: 83869696 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:24.758512+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3492000/0x0/0x1bfc00000, data 0x1f2bfa1/0x213b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd81ec000 session 0x55bcd749cf00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423960576 unmapped: 83869696 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4446687 data_alloc: 218103808 data_used: 10043392
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:25.758661+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd7208400 session 0x55bcd7486b40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd5cd5c20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 425017344 unmapped: 82812928 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:26.758789+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:27.758931+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd79d72c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:28.759102+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:29.759476+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4360291 data_alloc: 218103808 data_used: 7606272
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:30.759615+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7b000/0x0/0x1bfc00000, data 0x1643fa1/0x1853000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:31.759784+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7b000/0x0/0x1bfc00000, data 0x1643fa1/0x1853000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.338652611s of 12.466805458s, submitted: 33
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd75b8b40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae9400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:32.759903+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae9400 session 0x55bcd55343c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:33.760065+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd75745a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd6196780
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:34.760199+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4359002 data_alloc: 218103808 data_used: 7606272
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:35.760402+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd72185a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:36.760588+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd7208400 session 0x55bcd72123c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7c000/0x0/0x1bfc00000, data 0x1643f92/0x1852000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:37.760762+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:38.760913+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:39.761098+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4358122 data_alloc: 218103808 data_used: 7602176
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:40.761288+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:41.761427+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 83861504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:42.761573+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423976960 unmapped: 83853312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:43.761729+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423976960 unmapped: 83853312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:44.761885+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423976960 unmapped: 83853312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4358122 data_alloc: 218103808 data_used: 7602176
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:45.762033+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423985152 unmapped: 83845120 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:46.762191+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423985152 unmapped: 83845120 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:47.762336+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423985152 unmapped: 83845120 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:48.762477+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423985152 unmapped: 83845120 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:49.762603+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423985152 unmapped: 83845120 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4358122 data_alloc: 218103808 data_used: 7602176
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:50.762740+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423985152 unmapped: 83845120 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:51.762898+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423985152 unmapped: 83845120 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:52.763029+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423985152 unmapped: 83845120 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:53.763242+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.123798370s of 21.309267044s, submitted: 33
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 425082880 unmapped: 82747392 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3d7d000/0x0/0x1bfc00000, data 0x1643f82/0x1851000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd7208c00 session 0x55bcd75b8960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd7486960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:54.763400+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd74ae960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd4f43c20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd7208400 session 0x55bcd4916b40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423018496 unmapped: 84811776 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4395345 data_alloc: 218103808 data_used: 7602176
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a39c2000/0x0/0x1bfc00000, data 0x19fef82/0x1c0c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:55.763561+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423018496 unmapped: 84811776 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:56.763692+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a39c2000/0x0/0x1bfc00000, data 0x19fef82/0x1c0c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423018496 unmapped: 84811776 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:57.763793+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423018496 unmapped: 84811776 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a39c2000/0x0/0x1bfc00000, data 0x19fef82/0x1c0c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:58.763956+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd81fd000 session 0x55bcd72143c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423018496 unmapped: 84811776 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd826b4a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:59.764079+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423018496 unmapped: 84811776 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4395345 data_alloc: 218103808 data_used: 7602176
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:00.764235+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd7c9c000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423018496 unmapped: 84811776 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:01.764363+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423018496 unmapped: 84811776 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd53374a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:02.764499+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423018496 unmapped: 84811776 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd8007c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:03.764704+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a39c2000/0x0/0x1bfc00000, data 0x19fef82/0x1c0c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423018496 unmapped: 84811776 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b3000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.176534653s of 10.768110275s, submitted: 26
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:04.764845+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76b3000 session 0x55bcd4f43680
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd87c9000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd87c9000 session 0x55bcd7805a40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd826ad20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd72194a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd820b2c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423190528 unmapped: 84639744 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4490929 data_alloc: 218103808 data_used: 8429568
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:05.765031+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423141376 unmapped: 84688896 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:06.765167+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423141376 unmapped: 84688896 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:07.765482+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a2ec7000/0x0/0x1bfc00000, data 0x24f9f82/0x2707000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423141376 unmapped: 84688896 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:08.765621+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423141376 unmapped: 84688896 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:09.765799+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b3000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423141376 unmapped: 84688896 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4505175 data_alloc: 218103808 data_used: 10043392
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:10.765897+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd76b3000 session 0x55bcd7fd6960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423149568 unmapped: 84680704 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f7000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:11.766039+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a2ec6000/0x0/0x1bfc00000, data 0x24f9fa5/0x2708000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 423149568 unmapped: 84680704 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:12.766202+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426000384 unmapped: 81829888 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:13.766501+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426000384 unmapped: 81829888 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:14.766923+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 426000384 unmapped: 81829888 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4578963 data_alloc: 234881024 data_used: 20344832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.565558434s of 10.998103142s, submitted: 28
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:15.767271+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 427360256 unmapped: 80470016 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:16.767429+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a2b47000/0x0/0x1bfc00000, data 0x2878fa5/0x2a87000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 427360256 unmapped: 80470016 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:17.767920+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 427360256 unmapped: 80470016 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:18.768262+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 427360256 unmapped: 80470016 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:19.768413+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 427843584 unmapped: 79986688 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4619425 data_alloc: 234881024 data_used: 21434368
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:20.768598+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 428105728 unmapped: 79724544 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:21.768749+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 428531712 unmapped: 79298560 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a2983000/0x0/0x1bfc00000, data 0x2a3bfa5/0x2c4a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:22.768941+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 432054272 unmapped: 75776000 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:23.769085+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 434946048 unmapped: 72884224 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:24.769395+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435134464 unmapped: 72695808 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4774745 data_alloc: 234881024 data_used: 24383488
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.375843048s of 10.013811111s, submitted: 192
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:25.769890+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435159040 unmapped: 72671232 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:26.770217+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435159040 unmapped: 72671232 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:27.770492+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a174a000/0x0/0x1bfc00000, data 0x3c74fa5/0x3e83000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a174a000/0x0/0x1bfc00000, data 0x3c74fa5/0x3e83000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435159040 unmapped: 72671232 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:28.770741+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435159040 unmapped: 72671232 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:29.770904+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1729000/0x0/0x1bfc00000, data 0x3c96fa5/0x3ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435159040 unmapped: 72671232 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4783807 data_alloc: 234881024 data_used: 24764416
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:30.771024+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1729000/0x0/0x1bfc00000, data 0x3c96fa5/0x3ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435159040 unmapped: 72671232 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:31.771212+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435159040 unmapped: 72671232 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:32.771450+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435159040 unmapped: 72671232 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1726000/0x0/0x1bfc00000, data 0x3c99fa5/0x3ea8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:33.771680+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435167232 unmapped: 72663040 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:34.771913+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435167232 unmapped: 72663040 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4784367 data_alloc: 234881024 data_used: 24764416
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:35.772082+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1726000/0x0/0x1bfc00000, data 0x3c99fa5/0x3ea8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435167232 unmapped: 72663040 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:36.772381+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435167232 unmapped: 72663040 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:37.772558+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435175424 unmapped: 72654848 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:38.772810+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435175424 unmapped: 72654848 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:39.773032+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.932869911s of 14.232981682s, submitted: 17
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435175424 unmapped: 72654848 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4784731 data_alloc: 234881024 data_used: 24772608
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:40.773328+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1723000/0x0/0x1bfc00000, data 0x3c9cfa5/0x3eab000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435183616 unmapped: 72646656 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:41.773653+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435183616 unmapped: 72646656 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:42.773919+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435191808 unmapped: 72638464 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd81f7000 session 0x55bcd72105a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd75b8f00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:43.774147+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd7208400 session 0x55bcd826ba40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd8007c00 session 0x55bcd7575860
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435200000 unmapped: 72630272 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:44.774349+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd75b94a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1721000/0x0/0x1bfc00000, data 0x3c9efa5/0x3ead000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435224576 unmapped: 72605696 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677607 data_alloc: 234881024 data_used: 20029440
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:45.774519+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435224576 unmapped: 72605696 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:46.774773+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd5350800 session 0x55bcd75b9e00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435232768 unmapped: 72597504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:47.774976+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd72185a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 400 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd7253680
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 400 ms_handle_reset con 0x55bcd7208400 session 0x55bcd7eb5a40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd8007c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 400 ms_handle_reset con 0x55bcd8007c00 session 0x55bcd81c0d20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435232768 unmapped: 72597504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:48.775122+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5ae8400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 400 ms_handle_reset con 0x55bcd5ae8400 session 0x55bcd82aef00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435232768 unmapped: 72597504 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:49.775306+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 400 heartbeat osd_stat(store_statfs(0x1a203f000/0x0/0x1bfc00000, data 0x337ebfe/0x358e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4682073 data_alloc: 234881024 data_used: 20041728
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:50.775445+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:51.775590+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:52.775759+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 400 heartbeat osd_stat(store_statfs(0x1a203f000/0x0/0x1bfc00000, data 0x337ebfe/0x358e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:53.775996+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:54.776233+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 400 heartbeat osd_stat(store_statfs(0x1a203f000/0x0/0x1bfc00000, data 0x337ebfe/0x358e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4683033 data_alloc: 234881024 data_used: 20144128
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:55.776393+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:56.776525+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:57.776689+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 400 ms_handle_reset con 0x55bcd7208400 session 0x55bcd7bf5c20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd8007c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 400 handle_osd_map epochs [400,401], i have 400, src has [1,401]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.647792816s of 18.847537994s, submitted: 42
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 401 ms_handle_reset con 0x55bcd8007c00 session 0x55bcd7215a40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:58.776803+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 401 heartbeat osd_stat(store_statfs(0x1a203c000/0x0/0x1bfc00000, data 0x33808ab/0x3591000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:59.777007+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 401 heartbeat osd_stat(store_statfs(0x1a203c000/0x0/0x1bfc00000, data 0x33808ab/0x3591000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4686007 data_alloc: 234881024 data_used: 20144128
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:00.777160+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:01.777389+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:02.777554+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:03.777761+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:04.777974+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 401 heartbeat osd_stat(store_statfs(0x1a203b000/0x0/0x1bfc00000, data 0x33808ab/0x3591000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 72589312 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4701431 data_alloc: 234881024 data_used: 21639168
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:05.778257+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435249152 unmapped: 72581120 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:06.778479+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435257344 unmapped: 72572928 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:07.778762+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a2039000/0x0/0x1bfc00000, data 0x33823ea/0x3594000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435257344 unmapped: 72572928 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:08.779013+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435257344 unmapped: 72572928 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:09.779198+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435257344 unmapped: 72572928 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4705717 data_alloc: 234881024 data_used: 21663744
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:10.779349+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435257344 unmapped: 72572928 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:11.779562+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a2039000/0x0/0x1bfc00000, data 0x33823ea/0x3594000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 435257344 unmapped: 72572928 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:12.779753+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd7c9c780
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.350227356s of 14.460050583s, submitted: 27
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd7210d20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b3000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430178304 unmapped: 77651968 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:13.779991+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd76b3000 session 0x55bcd7486960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:14.780280+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a3d73000/0x0/0x1bfc00000, data 0x16493ea/0x185b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a3d73000/0x0/0x1bfc00000, data 0x16493c7/0x185a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4389515 data_alloc: 218103808 data_used: 7618560
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:15.780736+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a3d73000/0x0/0x1bfc00000, data 0x16493c7/0x185a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:16.781019+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:17.781163+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:18.781292+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:19.781536+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a3d73000/0x0/0x1bfc00000, data 0x16493c7/0x185a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4389515 data_alloc: 218103808 data_used: 7618560
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:20.781675+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:21.782380+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:22.782972+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:23.783474+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a3d73000/0x0/0x1bfc00000, data 0x16493c7/0x185a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:24.783719+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:25.784021+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4389515 data_alloc: 218103808 data_used: 7618560
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:26.784337+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:27.784534+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a3d73000/0x0/0x1bfc00000, data 0x16493c7/0x185a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a3d73000/0x0/0x1bfc00000, data 0x16493c7/0x185a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:28.785054+0000)
Oct 02 13:34:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3283247565' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:34:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2849923825' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:34:11 compute-0 ceph-mon[73607]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:34:11 compute-0 ceph-mon[73607]: from='client.40062 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2304674771' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:34:11 compute-0 ceph-mon[73607]: from='client.48686 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1389248408' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:34:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3527760641' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:34:11 compute-0 ceph-mon[73607]: from='client.40083 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1110813031' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 13:34:11 compute-0 ceph-mon[73607]: from='client.48701 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3423444673' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:34:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3111688935' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:34:11 compute-0 ceph-mon[73607]: from='client.40104 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:11 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1734950122' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:29.785412+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 77635584 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:30.785578+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430202880 unmapped: 77627392 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4389515 data_alloc: 218103808 data_used: 7618560
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a3d73000/0x0/0x1bfc00000, data 0x16493c7/0x185a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:31.785737+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430202880 unmapped: 77627392 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a3d73000/0x0/0x1bfc00000, data 0x16493c7/0x185a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:32.785961+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430202880 unmapped: 77627392 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a3d73000/0x0/0x1bfc00000, data 0x16493c7/0x185a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:33.786210+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430202880 unmapped: 77627392 heap: 507830272 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.157569885s of 21.315719604s, submitted: 45
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd5cd43c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd5339860
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd7208400 session 0x55bcd4f434a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd8007c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:34.786373+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd8007c00 session 0x55bcd4f965a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f7000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd81f7000 session 0x55bcd7487e00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd82381e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd7805680
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430612480 unmapped: 81420288 heap: 512032768 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd7208400 session 0x55bcd82385a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a3236000/0x0/0x1bfc00000, data 0x21873c7/0x2398000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd8007c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd8007c00 session 0x55bcd7804b40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622d400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd622d400 session 0x55bcd820e1e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622d400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd622d400 session 0x55bcd81c1c20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd74af680
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd826ab40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd7208400 session 0x55bcd79d70e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:35.786522+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4579925 data_alloc: 218103808 data_used: 7618560
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430620672 unmapped: 87236608 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:36.786669+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430620672 unmapped: 87236608 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd8007c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd8007c00 session 0x55bcd820a780
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:37.786806+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a25f9000/0x0/0x1bfc00000, data 0x2dc43c7/0x2fd5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430620672 unmapped: 87236608 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd8007c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd8007c00 session 0x55bcd81c0780
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:38.786958+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430628864 unmapped: 87228416 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd75941e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd72154a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622d400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a25f9000/0x0/0x1bfc00000, data 0x2dc43c7/0x2fd5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [0,0,2])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd622d400 session 0x55bcd7210d20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:39.787087+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7488c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430653440 unmapped: 87203840 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd828d000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:40.787211+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4590834 data_alloc: 218103808 data_used: 8327168
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 430653440 unmapped: 87203840 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:41.787357+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 440999936 unmapped: 76857344 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:42.787613+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 440999936 unmapped: 76857344 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:43.787806+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 440999936 unmapped: 76857344 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a25f7000/0x0/0x1bfc00000, data 0x2dc43fa/0x2fd7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:44.788042+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 440999936 unmapped: 76857344 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:45.788310+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4749074 data_alloc: 234881024 data_used: 30711808
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 440999936 unmapped: 76857344 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:46.788388+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 440999936 unmapped: 76857344 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a25f7000/0x0/0x1bfc00000, data 0x2dc43fa/0x2fd7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:47.788530+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 440999936 unmapped: 76857344 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:48.788686+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 440999936 unmapped: 76857344 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:49.788888+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 440999936 unmapped: 76857344 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a25f7000/0x0/0x1bfc00000, data 0x2dc43fa/0x2fd7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:50.789112+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4749554 data_alloc: 234881024 data_used: 30724096
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 440999936 unmapped: 76857344 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.757154465s of 16.976579666s, submitted: 50
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:51.789241+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446365696 unmapped: 71491584 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:52.789383+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a11ea000/0x0/0x1bfc00000, data 0x41d13fa/0x43e4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [0,2] op hist [0,0,0,2])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450527232 unmapped: 67330048 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:53.789537+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450265088 unmapped: 67592192 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:54.789648+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450265088 unmapped: 67592192 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:55.789765+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x19f995000/0x0/0x1bfc00000, data 0x48863fa/0x4a99000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4973042 data_alloc: 234881024 data_used: 32993280
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450265088 unmapped: 67592192 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:56.789889+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450265088 unmapped: 67592192 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:57.790029+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450265088 unmapped: 67592192 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:58.790156+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450273280 unmapped: 67584000 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x19f971000/0x0/0x1bfc00000, data 0x48aa3fa/0x4abd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:59.790304+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450273280 unmapped: 67584000 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:00.790383+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4973526 data_alloc: 234881024 data_used: 32993280
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450273280 unmapped: 67584000 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:01.790557+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450273280 unmapped: 67584000 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.594237328s of 11.148545265s, submitted: 226
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:02.790671+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450289664 unmapped: 67567616 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:03.790857+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450289664 unmapped: 67567616 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 heartbeat osd_stat(store_statfs(0x19f95f000/0x0/0x1bfc00000, data 0x48bc3fa/0x4acf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:04.790997+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd87c8c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 ms_handle_reset con 0x55bcd87c8c00 session 0x55bcd7214f00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450289664 unmapped: 67567616 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:05.791116+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4980708 data_alloc: 234881024 data_used: 33009664
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 403 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd826bc20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622d400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450306048 unmapped: 67551232 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 403 ms_handle_reset con 0x55bcd622d400 session 0x55bcd5337680
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 403 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd68910e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd8007c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 403 ms_handle_reset con 0x55bcd8007c00 session 0x55bcd5535e00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:06.791229+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 403 ms_handle_reset con 0x55bcd521e000 session 0x55bcd72101e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 403 handle_osd_map epochs [404,404], i have 403, src has [1,404]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461914112 unmapped: 55943168 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:07.791299+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461922304 unmapped: 55934976 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 405 ms_handle_reset con 0x55bcd521e000 session 0x55bcd82ae1e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:08.791403+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461922304 unmapped: 55934976 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:09.791570+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 405 heartbeat osd_stat(store_statfs(0x19f032000/0x0/0x1bfc00000, data 0x51e29d7/0x53fa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461922304 unmapped: 55934976 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:10.791664+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5100033 data_alloc: 251658240 data_used: 45219840
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461930496 unmapped: 55926784 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:11.791817+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461938688 unmapped: 55918592 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 405 heartbeat osd_stat(store_statfs(0x19f032000/0x0/0x1bfc00000, data 0x51e29d7/0x53fa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:12.791972+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461938688 unmapped: 55918592 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 405 ms_handle_reset con 0x55bcd7488c00 session 0x55bcd7804780
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.599666595s of 11.089976311s, submitted: 74
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 405 ms_handle_reset con 0x55bcd828d000 session 0x55bcd7eb5a40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:13.792171+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461938688 unmapped: 55918592 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:14.792332+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461938688 unmapped: 55918592 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 405 heartbeat osd_stat(store_statfs(0x19f032000/0x0/0x1bfc00000, data 0x51e29d7/0x53fa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:15.792459+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5099901 data_alloc: 251658240 data_used: 45219840
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461938688 unmapped: 55918592 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521ec00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 406 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd826ad20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 406 ms_handle_reset con 0x55bcd521ec00 session 0x55bcd7211c20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 406 ms_handle_reset con 0x55bcd521e000 session 0x55bcd5cd54a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 406 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd75b8960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7488c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 406 ms_handle_reset con 0x55bcd7488c00 session 0x55bcd5cd4960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:16.792597+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461963264 unmapped: 55894016 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:17.792691+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461963264 unmapped: 55894016 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 406 heartbeat osd_stat(store_statfs(0x19f030000/0x0/0x1bfc00000, data 0x51e4516/0x53fd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:18.792989+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461971456 unmapped: 55885824 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 406 heartbeat osd_stat(store_statfs(0x19f030000/0x0/0x1bfc00000, data 0x51e4516/0x53fd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:19.793104+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461971456 unmapped: 55885824 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:20.793297+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5100427 data_alloc: 251658240 data_used: 45219840
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461971456 unmapped: 55885824 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:21.793446+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461971456 unmapped: 55885824 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:22.793570+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd828d000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 406 ms_handle_reset con 0x55bcd828d000 session 0x55bcd749cf00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622d400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461971456 unmapped: 55885824 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:23.793743+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 406 handle_osd_map epochs [406,407], i have 406, src has [1,407]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.123583794s of 10.139816284s, submitted: 12
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 407 heartbeat osd_stat(store_statfs(0x19f030000/0x0/0x1bfc00000, data 0x51e4516/0x53fd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 407 ms_handle_reset con 0x55bcd622d400 session 0x55bcd6fff2c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 407 ms_handle_reset con 0x55bcd521e000 session 0x55bcd5210000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 407 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd7eb50e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461987840 unmapped: 55869440 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7488c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 407 ms_handle_reset con 0x55bcd7488c00 session 0x55bcd7252f00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:24.793888+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461987840 unmapped: 55869440 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd828d000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 407 ms_handle_reset con 0x55bcd828d000 session 0x55bcd7eb4b40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:25.794043+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd8007c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdfcf7800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5103533 data_alloc: 251658240 data_used: 45219840
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461987840 unmapped: 55869440 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bceb79cc00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 407 ms_handle_reset con 0x55bceb79cc00 session 0x55bcd7575680
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:26.794250+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 407 ms_handle_reset con 0x55bcd521e000 session 0x55bcd749cf00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462004224 unmapped: 55853056 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:27.794409+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462536704 unmapped: 55320576 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 407 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd5cd4960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7488c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 407 ms_handle_reset con 0x55bcd7488c00 session 0x55bcd75b8960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:28.794540+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462536704 unmapped: 55320576 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd828d000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd711f800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 407 heartbeat osd_stat(store_statfs(0x19f009000/0x0/0x1bfc00000, data 0x520a17f/0x5425000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:29.794703+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462536704 unmapped: 55320576 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:30.794880+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5120236 data_alloc: 251658240 data_used: 46493696
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462618624 unmapped: 55238656 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 407 heartbeat osd_stat(store_statfs(0x19f009000/0x0/0x1bfc00000, data 0x520a17f/0x5425000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:31.795067+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 407 ms_handle_reset con 0x55bcd8007c00 session 0x55bcd7486960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 407 ms_handle_reset con 0x55bcdfcf7800 session 0x55bcd820b680
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462790656 unmapped: 55066624 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:32.795226+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462790656 unmapped: 55066624 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:33.795583+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462790656 unmapped: 55066624 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 407 heartbeat osd_stat(store_statfs(0x19f009000/0x0/0x1bfc00000, data 0x520a17f/0x5425000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:34.795711+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462790656 unmapped: 55066624 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 407 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7fd7680
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.776150703s of 11.827246666s, submitted: 13
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 407 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd82ae000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7488c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:35.795819+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5288082 data_alloc: 251658240 data_used: 48136192
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 407 heartbeat osd_stat(store_statfs(0x19f009000/0x0/0x1bfc00000, data 0x672517f/0x5425000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462856192 unmapped: 55001088 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 407 ms_handle_reset con 0x55bcd7488c00 session 0x55bcd826a960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:36.795943+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462856192 unmapped: 55001088 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd8007c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 408 ms_handle_reset con 0x55bcd8007c00 session 0x55bcd7eb45a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:37.796081+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462856192 unmapped: 55001088 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdb12f800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:38.796208+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462856192 unmapped: 55001088 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 408 heartbeat osd_stat(store_statfs(0x19f005000/0x0/0x1bfc00000, data 0x520be2c/0x5428000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:39.796323+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462864384 unmapped: 54992896 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:40.796460+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5198850 data_alloc: 251658240 data_used: 48144384
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462864384 unmapped: 54992896 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:41.796564+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464134144 unmapped: 53723136 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:42.796702+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464224256 unmapped: 53633024 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 409 heartbeat osd_stat(store_statfs(0x19ee39000/0x0/0x1bfc00000, data 0x53d696b/0x55f4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:43.796868+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464273408 unmapped: 53583872 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:44.797049+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464273408 unmapped: 53583872 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:45.797239+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5249326 data_alloc: 251658240 data_used: 51548160
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464273408 unmapped: 53583872 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:46.797417+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464273408 unmapped: 53583872 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 409 heartbeat osd_stat(store_statfs(0x19ee39000/0x0/0x1bfc00000, data 0x53d696b/0x55f4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:47.797963+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.192468643s of 12.316826820s, submitted: 47
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464707584 unmapped: 53149696 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:48.798111+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464707584 unmapped: 53149696 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:49.798241+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464715776 unmapped: 53141504 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:50.798361+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5262504 data_alloc: 251658240 data_used: 52187136
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464306176 unmapped: 53551104 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:51.798496+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 409 heartbeat osd_stat(store_statfs(0x19edca000/0x0/0x1bfc00000, data 0x544496b/0x5662000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464371712 unmapped: 53485568 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:52.798655+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464371712 unmapped: 53485568 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:53.798944+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464371712 unmapped: 53485568 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:54.799110+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464371712 unmapped: 53485568 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:55.799221+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5269704 data_alloc: 251658240 data_used: 52932608
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464371712 unmapped: 53485568 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 409 heartbeat osd_stat(store_statfs(0x19edca000/0x0/0x1bfc00000, data 0x544496b/0x5662000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:56.799339+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464371712 unmapped: 53485568 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:57.799449+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464371712 unmapped: 53485568 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 409 heartbeat osd_stat(store_statfs(0x19edb2000/0x0/0x1bfc00000, data 0x544496b/0x5662000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1b7cf9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:58.799586+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 409 ms_handle_reset con 0x55bcdb12f800 session 0x55bcd7218b40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.053912163s of 11.107004166s, submitted: 12
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 409 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd72143c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 464371712 unmapped: 53485568 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdb12f800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 409 ms_handle_reset con 0x55bcdb12f800 session 0x55bcd5cd52c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:59.799748+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455606272 unmapped: 62251008 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:00.799898+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4953532 data_alloc: 251658240 data_used: 38354944
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455606272 unmapped: 62251008 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:01.800025+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455606272 unmapped: 62251008 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 409 heartbeat osd_stat(store_statfs(0x19fed6000/0x0/0x1bfc00000, data 0x3f2a96b/0x4148000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1bbdf9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:02.800151+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455606272 unmapped: 62251008 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:03.800317+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455606272 unmapped: 62251008 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:04.800409+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455606272 unmapped: 62251008 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:05.800557+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 409 heartbeat osd_stat(store_statfs(0x19fed6000/0x0/0x1bfc00000, data 0x3f2a96b/0x4148000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1bbdf9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4953532 data_alloc: 251658240 data_used: 38354944
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455606272 unmapped: 62251008 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:06.800707+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455606272 unmapped: 62251008 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:07.800858+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 409 heartbeat osd_stat(store_statfs(0x19fed6000/0x0/0x1bfc00000, data 0x3f2a96b/0x4148000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1bbdf9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455606272 unmapped: 62251008 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 409 ms_handle_reset con 0x55bcd521e000 session 0x55bcd820e1e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:08.801006+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 410 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd61961e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7488c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd8007c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.946537018s of 10.045336723s, submitted: 37
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455606272 unmapped: 62251008 heap: 517857280 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 410 ms_handle_reset con 0x55bcd8007c00 session 0x55bcd7bf5a40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 410 ms_handle_reset con 0x55bcd7488c00 session 0x55bcd6ffe780
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #55. Immutable memtables: 11.
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 410 ms_handle_reset con 0x55bcd521e000 session 0x55bcd4f425a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:09.801165+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 411 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd4f974a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461578240 unmapped: 60481536 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:10.801396+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5256647 data_alloc: 251658240 data_used: 44515328
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 412 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd82af680
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461537280 unmapped: 60522496 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:11.801527+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdb12f800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461537280 unmapped: 60522496 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 412 heartbeat osd_stat(store_statfs(0x19c828000/0x0/0x1bfc00000, data 0x6430fba/0x6655000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 412 ms_handle_reset con 0x55bcdb12f800 session 0x55bcd6812960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 412 ms_handle_reset con 0x55bcd521e000 session 0x55bcd81c0960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 412 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd79d7c20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:12.801670+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461537280 unmapped: 60522496 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:13.801851+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 412 heartbeat osd_stat(store_statfs(0x19c822000/0x0/0x1bfc00000, data 0x6435fba/0x665a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461553664 unmapped: 60506112 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7488c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 412 heartbeat osd_stat(store_statfs(0x19c822000/0x0/0x1bfc00000, data 0x6435fba/0x665a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:14.802019+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461553664 unmapped: 60506112 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 413 heartbeat osd_stat(store_statfs(0x19c825000/0x0/0x1bfc00000, data 0x6435f58/0x6659000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 413 ms_handle_reset con 0x55bcd7488c00 session 0x55bcd81c0f00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:15.802510+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5010368 data_alloc: 251658240 data_used: 44523520
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461553664 unmapped: 60506112 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:16.802641+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 413 ms_handle_reset con 0x55bcd828d000 session 0x55bcd7804780
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 413 ms_handle_reset con 0x55bcd711f800 session 0x55bcd79d61e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461553664 unmapped: 60506112 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 413 ms_handle_reset con 0x55bcd521e000 session 0x55bcd5587c20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:17.802767+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461570048 unmapped: 60489728 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:18.802920+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461570048 unmapped: 60489728 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 413 heartbeat osd_stat(store_statfs(0x19ef87000/0x0/0x1bfc00000, data 0x3cd5b9f/0x3ef7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:19.803051+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461570048 unmapped: 60489728 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.788119316s of 11.300113678s, submitted: 149
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:20.803256+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 415 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd79d6f00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4844875 data_alloc: 234881024 data_used: 32546816
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 456900608 unmapped: 65159168 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 415 heartbeat osd_stat(store_statfs(0x19f89c000/0x0/0x1bfc00000, data 0x33bc399/0x35df000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:21.803451+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 456908800 unmapped: 65150976 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 415 ms_handle_reset con 0x55bcd7208400 session 0x55bcd82aef00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 415 ms_handle_reset con 0x55bcd622c400 session 0x55bcd7fd7c20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7488c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:22.803610+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 415 ms_handle_reset con 0x55bcd7488c00 session 0x55bcd6197e00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450060288 unmapped: 71999488 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a15fc000/0x0/0x1bfc00000, data 0x1660366/0x1881000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:23.803767+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450060288 unmapped: 71999488 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7488c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 415 ms_handle_reset con 0x55bcd7488c00 session 0x55bcd4f974a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 415 ms_handle_reset con 0x55bcd521e000 session 0x55bcd4f425a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 415 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd6ffe780
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 415 ms_handle_reset con 0x55bcd622c400 session 0x55bcd61961e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:24.803931+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450060288 unmapped: 71999488 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 415 ms_handle_reset con 0x55bcd7208400 session 0x55bcd820e1e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 415 ms_handle_reset con 0x55bcd7208400 session 0x55bcd82ae000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 415 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7fd7680
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:25.804113+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 415 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd7486960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 415 ms_handle_reset con 0x55bcd622c400 session 0x55bcd5cd4960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4550295 data_alloc: 218103808 data_used: 7684096
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449921024 unmapped: 72138752 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:26.804240+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449921024 unmapped: 72138752 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:27.804429+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449921024 unmapped: 72138752 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:28.804615+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a0fdd000/0x0/0x1bfc00000, data 0x1c7e3d8/0x1ea1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449929216 unmapped: 72130560 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:29.804766+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449929216 unmapped: 72130560 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a0fdd000/0x0/0x1bfc00000, data 0x1c7e3d8/0x1ea1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:30.804894+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7488c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.252073288s of 10.649868965s, submitted: 134
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd7488c00 session 0x55bcd7575680
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4556122 data_alloc: 218103808 data_used: 7692288
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449945600 unmapped: 72114176 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:31.805064+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449945600 unmapped: 72114176 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:32.805225+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449945600 unmapped: 72114176 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:33.805393+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450478080 unmapped: 71581696 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:34.805543+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450478080 unmapped: 71581696 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:35.805671+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4600762 data_alloc: 218103808 data_used: 14016512
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450478080 unmapped: 71581696 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0fd8000/0x0/0x1bfc00000, data 0x1c7ff3a/0x1ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:36.805799+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450486272 unmapped: 71573504 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:37.805923+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450486272 unmapped: 71573504 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:38.806057+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0fd8000/0x0/0x1bfc00000, data 0x1c7ff3a/0x1ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450486272 unmapped: 71573504 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:39.806203+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450486272 unmapped: 71573504 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:40.806308+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0fd8000/0x0/0x1bfc00000, data 0x1c7ff3a/0x1ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4600762 data_alloc: 218103808 data_used: 14016512
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450486272 unmapped: 71573504 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:41.806450+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0fd8000/0x0/0x1bfc00000, data 0x1c7ff3a/0x1ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450486272 unmapped: 71573504 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:42.806604+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.795286179s of 11.819027901s, submitted: 19
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451592192 unmapped: 70467584 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0f00000/0x0/0x1bfc00000, data 0x1d58f3a/0x1f7e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [0,0,0,0,0,1,3])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:43.806806+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450519040 unmapped: 71540736 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:44.807009+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450568192 unmapped: 71491584 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a09ea000/0x0/0x1bfc00000, data 0x226ef3a/0x2494000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:45.807205+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4651102 data_alloc: 218103808 data_used: 14012416
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450568192 unmapped: 71491584 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:46.807329+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450568192 unmapped: 71491584 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:47.807473+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450568192 unmapped: 71491584 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a09ea000/0x0/0x1bfc00000, data 0x226ef3a/0x2494000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:48.807623+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450568192 unmapped: 71491584 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:49.807756+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450568192 unmapped: 71491584 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:50.807875+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4651118 data_alloc: 218103808 data_used: 14012416
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450568192 unmapped: 71491584 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a09ea000/0x0/0x1bfc00000, data 0x226ef3a/0x2494000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:51.808033+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450568192 unmapped: 71491584 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a09ea000/0x0/0x1bfc00000, data 0x226ef3a/0x2494000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:52.808260+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450568192 unmapped: 71491584 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:53.808455+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450568192 unmapped: 71491584 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:54.808628+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 71483392 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:55.809378+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4651118 data_alloc: 218103808 data_used: 14012416
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 71483392 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:56.809640+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 71483392 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:57.809871+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 71483392 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a09ea000/0x0/0x1bfc00000, data 0x226ef3a/0x2494000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:58.810020+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 71483392 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:59.810214+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 71483392 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a09ea000/0x0/0x1bfc00000, data 0x226ef3a/0x2494000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:00.810740+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4651118 data_alloc: 218103808 data_used: 14012416
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 71483392 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:01.810889+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 71483392 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:02.811307+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.828195572s of 20.298160553s, submitted: 52
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 71483392 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:03.811524+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450584576 unmapped: 71475200 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:04.811800+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a09ea000/0x0/0x1bfc00000, data 0x226ef3a/0x2494000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450592768 unmapped: 71467008 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd622c400 session 0x55bcd7fd61e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:05.812149+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4650903 data_alloc: 218103808 data_used: 14016512
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450592768 unmapped: 71467008 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd7208400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd828d000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:06.812301+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450592768 unmapped: 71467008 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:07.812546+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7252f00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd74a45a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450592768 unmapped: 71467008 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:08.814493+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450592768 unmapped: 71467008 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:09.814643+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a09ea000/0x0/0x1bfc00000, data 0x226ef3a/0x2494000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450592768 unmapped: 71467008 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:10.814900+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a09ea000/0x0/0x1bfc00000, data 0x226ef3a/0x2494000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4650831 data_alloc: 218103808 data_used: 14016512
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450600960 unmapped: 71458816 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:11.815096+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450600960 unmapped: 71458816 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:12.815340+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450600960 unmapped: 71458816 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:13.815731+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450600960 unmapped: 71458816 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:14.816007+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450600960 unmapped: 71458816 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.773118019s of 12.470500946s, submitted: 7
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:15.816188+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a09ea000/0x0/0x1bfc00000, data 0x226ef3a/0x2494000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4650903 data_alloc: 218103808 data_used: 14016512
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a09ea000/0x0/0x1bfc00000, data 0x226ef3a/0x2494000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450600960 unmapped: 71458816 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:16.816409+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd5535e00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450609152 unmapped: 71450624 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:17.816564+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd987a000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bce58b2800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd7208400 session 0x55bcd4f40f00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd828d000 session 0x55bcd78054a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450609152 unmapped: 71450624 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:18.816681+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd521e000 session 0x55bcd826a1e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a15cf000/0x0/0x1bfc00000, data 0x1685eeb/0x18aa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446373888 unmapped: 75685888 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:19.816942+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446373888 unmapped: 75685888 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:20.817084+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4522595 data_alloc: 218103808 data_used: 7692288
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446373888 unmapped: 75685888 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:21.817245+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446373888 unmapped: 75685888 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:22.817570+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd987a000 session 0x55bcd820e960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bce58b2800 session 0x55bcd6890780
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7f400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446349312 unmapped: 75710464 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd5d7f400 session 0x55bcd79d65a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:23.817738+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a15f9000/0x0/0x1bfc00000, data 0x1661ea5/0x1884000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446373888 unmapped: 75685888 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a15f9000/0x0/0x1bfc00000, data 0x1661ea5/0x1884000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:24.817901+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446373888 unmapped: 75685888 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:25.818051+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4518487 data_alloc: 218103808 data_used: 7688192
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446373888 unmapped: 75685888 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.642345428s of 10.898589134s, submitted: 78
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7c9c5a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd828d000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:26.818247+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd828d000 session 0x55bcd5534960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a15fa000/0x0/0x1bfc00000, data 0x1661ea5/0x1884000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446398464 unmapped: 75661312 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:27.818466+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446398464 unmapped: 75661312 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:28.818624+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446398464 unmapped: 75661312 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:29.818771+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd987a000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd987a000 session 0x55bcd820be00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bce58b2800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446406656 unmapped: 75653120 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bce58b2800 session 0x55bcd74a70e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:30.818947+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a15fa000/0x0/0x1bfc00000, data 0x1661ea5/0x1884000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4517719 data_alloc: 218103808 data_used: 7688192
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446406656 unmapped: 75653120 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:31.819076+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd622c400 session 0x55bcd7487680
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446406656 unmapped: 75653120 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:32.819215+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446406656 unmapped: 75653120 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:33.819385+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446406656 unmapped: 75653120 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd622c400 session 0x55bcd8238b40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:34.819559+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a14c8000/0x0/0x1bfc00000, data 0x1793ea5/0x19b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd521e000 session 0x55bcd81c0000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd828d000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 ms_handle_reset con 0x55bcd828d000 session 0x55bcd4f97680
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446414848 unmapped: 75644928 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:35.819718+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd987a000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4559079 data_alloc: 218103808 data_used: 7688192
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446414848 unmapped: 75644928 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.069718361s of 10.233050346s, submitted: 48
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:36.819842+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd987a000 session 0x55bcd7210780
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bce58b2800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bce58b2800 session 0x55bcd7fd6780
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7fd6f00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd622c400 session 0x55bcd4f423c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd828d000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd828d000 session 0x55bcd78043c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd987a000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446521344 unmapped: 75538432 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd987a000 session 0x55bcd7574780
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd74a5a40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd74aeb40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:37.819941+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd521e000 session 0x55bcd81c05a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd622c400 session 0x55bcd7210780
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 417 heartbeat osd_stat(store_statfs(0x1a112a000/0x0/0x1bfc00000, data 0x1b2fafe/0x1d53000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446496768 unmapped: 75563008 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:38.820058+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446496768 unmapped: 75563008 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:39.820231+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd828d000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd828d000 session 0x55bcd81c0000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd987a000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd987a000 session 0x55bcd7487680
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446496768 unmapped: 75563008 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:40.820367+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd521e000 session 0x55bcd74a70e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd622c400 session 0x55bcd820be00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4609389 data_alloc: 218103808 data_used: 7700480
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446496768 unmapped: 75563008 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:41.820525+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd828d000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd828d000 session 0x55bcd68912c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd7c9c5a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcda41b400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fa000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446496768 unmapped: 75563008 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:42.820745+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446496768 unmapped: 75563008 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:43.820868+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 417 heartbeat osd_stat(store_statfs(0x1a0b93000/0x0/0x1bfc00000, data 0x20c5b31/0x22eb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446963712 unmapped: 75096064 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:44.820998+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446963712 unmapped: 75096064 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:45.821138+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4643469 data_alloc: 218103808 data_used: 12431360
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446963712 unmapped: 75096064 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:46.821329+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446963712 unmapped: 75096064 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:47.821448+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76b1800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.463706017s of 11.551223755s, submitted: 21
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd76b1800 session 0x55bcd7805860
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446963712 unmapped: 75096064 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:48.821576+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446971904 unmapped: 75087872 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 417 heartbeat osd_stat(store_statfs(0x1a0b92000/0x0/0x1bfc00000, data 0x20c5b54/0x22ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:49.821731+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447676416 unmapped: 74383360 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:50.821865+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4674079 data_alloc: 218103808 data_used: 16248832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 448380928 unmapped: 73678848 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:51.822042+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 448380928 unmapped: 73678848 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:52.822176+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 448380928 unmapped: 73678848 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:53.822426+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 417 heartbeat osd_stat(store_statfs(0x1a0b92000/0x0/0x1bfc00000, data 0x20c5b54/0x22ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7fd63c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd622c400 session 0x55bcd5528960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451313664 unmapped: 70746112 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:54.822629+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd81c0b40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451313664 unmapped: 70746112 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:55.822755+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4715805 data_alloc: 218103808 data_used: 16478208
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451330048 unmapped: 70729728 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:56.822947+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd828d000
Oct 02 13:34:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:34:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd828d000 session 0x55bcd5211860
Oct 02 13:34:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:11.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5c3ac00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd5c3ac00 session 0x55bcd7253c20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451330048 unmapped: 70729728 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:57.823139+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451330048 unmapped: 70729728 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:58.823409+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.650910378s of 11.011292458s, submitted: 92
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 417 ms_handle_reset con 0x55bcd521e000 session 0x55bcd55281e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451346432 unmapped: 70713344 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 417 heartbeat osd_stat(store_statfs(0x1a06ff000/0x0/0x1bfc00000, data 0x2559b31/0x277f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:59.823600+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 418 handle_osd_map epochs [418,418], i have 418, src has [1,418]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 418 ms_handle_reset con 0x55bcd622c400 session 0x55bcd75b8000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451346432 unmapped: 70713344 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 418 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd820b4a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd828d000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:00.823804+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 418 ms_handle_reset con 0x55bcd828d000 session 0x55bcd5528f00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4655939 data_alloc: 218103808 data_used: 12673024
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451379200 unmapped: 70680576 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:01.824016+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0bc5000/0x0/0x1bfc00000, data 0x20927de/0x22b9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451379200 unmapped: 70680576 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:02.824266+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451379200 unmapped: 70680576 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:03.824570+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451379200 unmapped: 70680576 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 418 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd81c03c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:04.824718+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 418 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7215e00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0bc5000/0x0/0x1bfc00000, data 0x20927de/0x22b9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451379200 unmapped: 70680576 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:05.824909+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4655795 data_alloc: 218103808 data_used: 12673024
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 418 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd5211a40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451379200 unmapped: 70680576 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:06.825068+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451379200 unmapped: 70680576 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:07.825260+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451379200 unmapped: 70680576 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:08.825444+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0bc5000/0x0/0x1bfc00000, data 0x20927de/0x22b9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451379200 unmapped: 70680576 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:09.825579+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451379200 unmapped: 70680576 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.221710205s of 11.399076462s, submitted: 51
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:10.825763+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcd622c400 session 0x55bcd53372c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4659617 data_alloc: 218103808 data_used: 12681216
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcda41b400 session 0x55bcd4f40f00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcd81fa000 session 0x55bcd82383c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453058560 unmapped: 69001216 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:11.825877+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd74a4b40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd53385a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a03fa000/0x0/0x1bfc00000, data 0x285a37f/0x2a83000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451461120 unmapped: 70598656 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:12.826114+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:13.826373+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7c9cd20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:14.826525+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:15.826665+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4612497 data_alloc: 218103808 data_used: 7712768
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:16.826864+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:17.827009+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a0e29000/0x0/0x1bfc00000, data 0x1e2d34c/0x2054000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:18.827260+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:19.827462+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:20.827577+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4612497 data_alloc: 218103808 data_used: 7712768
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:21.827739+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:22.827921+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:23.828072+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a0e29000/0x0/0x1bfc00000, data 0x1e2d34c/0x2054000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:24.828227+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:25.828345+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4612497 data_alloc: 218103808 data_used: 7712768
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a0e29000/0x0/0x1bfc00000, data 0x1e2d34c/0x2054000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:26.828488+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.276529312s of 16.571434021s, submitted: 92
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcd622c400 session 0x55bcd7214f00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:27.828650+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fa000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcda41b400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a0e2a000/0x0/0x1bfc00000, data 0x1e2d34c/0x2054000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:28.828925+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447905792 unmapped: 74153984 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:29.829075+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447430656 unmapped: 74629120 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:30.829277+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449003520 unmapped: 73056256 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4663682 data_alloc: 218103808 data_used: 14639104
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:31.830947+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449003520 unmapped: 73056256 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a0e2a000/0x0/0x1bfc00000, data 0x1e2d34c/0x2054000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:32.831647+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449003520 unmapped: 73056256 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:33.833081+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449003520 unmapped: 73056256 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:34.833656+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449003520 unmapped: 73056256 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:35.834762+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449003520 unmapped: 73056256 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4663682 data_alloc: 218103808 data_used: 14639104
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:36.837326+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449003520 unmapped: 73056256 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a0e2a000/0x0/0x1bfc00000, data 0x1e2d34c/0x2054000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:37.837506+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449003520 unmapped: 73056256 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:38.839448+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449003520 unmapped: 73056256 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:39.840008+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449003520 unmapped: 73056256 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.600367546s of 12.624567032s, submitted: 7
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:40.840606+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453623808 unmapped: 68435968 heap: 522059776 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd828d000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4785748 data_alloc: 218103808 data_used: 15151104
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:41.841052+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a09c3000/0x0/0x1bfc00000, data 0x229334c/0x24ba000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,18])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 458891264 unmapped: 66846720 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcd828d000 session 0x55bcd74ae960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcd521e000 session 0x55bcd81c01e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd7212960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcd622c400 session 0x55bcd7804d20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd82ae5a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:42.841748+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454696960 unmapped: 71041024 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:43.842032+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454696960 unmapped: 71041024 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:44.842267+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454696960 unmapped: 71041024 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a0126000/0x0/0x1bfc00000, data 0x2b2834c/0x2d4f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:45.842398+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454696960 unmapped: 71041024 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4779116 data_alloc: 218103808 data_used: 15372288
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:46.842705+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454696960 unmapped: 71041024 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6fdcc00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcd6fdcc00 session 0x55bcd5338b40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:47.842883+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454705152 unmapped: 71032832 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a0126000/0x0/0x1bfc00000, data 0x2b2834c/0x2d4f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcd521e000 session 0x55bcd826b680
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:48.843005+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454705152 unmapped: 71032832 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd7804b40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 419 ms_handle_reset con 0x55bcd622c400 session 0x55bcd749cf00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:49.843162+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ed400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.861032486s of 10.011510849s, submitted: 117
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454868992 unmapped: 70868992 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f2800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:50.843311+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454868992 unmapped: 70868992 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5351800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0109000/0x0/0x1bfc00000, data 0x2b4c37f/0x2d75000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 420 ms_handle_reset con 0x55bcd81f2800 session 0x55bcd75b9c20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4822614 data_alloc: 234881024 data_used: 19931136
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:51.843435+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 458293248 unmapped: 67444736 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 ms_handle_reset con 0x55bcd5351800 session 0x55bcd826a5a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:52.843566+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459284480 unmapped: 66453504 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:53.843750+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459284480 unmapped: 66453504 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:54.843992+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459284480 unmapped: 66453504 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a00fe000/0x0/0x1bfc00000, data 0x2b4fc7c/0x2d7e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:55.844144+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459284480 unmapped: 66453504 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4849730 data_alloc: 234881024 data_used: 23068672
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:56.844287+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459284480 unmapped: 66453504 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:57.844432+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459284480 unmapped: 66453504 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a00fe000/0x0/0x1bfc00000, data 0x2b4fc7c/0x2d7e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:58.844565+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459284480 unmapped: 66453504 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a00fe000/0x0/0x1bfc00000, data 0x2b4fc7c/0x2d7e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:59.844681+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd7595860
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7215a40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459292672 unmapped: 66445312 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:00.844935+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459292672 unmapped: 66445312 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4847602 data_alloc: 234881024 data_used: 23072768
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:01.845074+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459292672 unmapped: 66445312 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.320135117s of 12.432018280s, submitted: 29
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:02.845213+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 460251136 unmapped: 65486848 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x19fd1d000/0x0/0x1bfc00000, data 0x2f2ac7c/0x3159000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [1])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:03.845446+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 460120064 unmapped: 65617920 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:04.845563+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 460136448 unmapped: 65601536 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:05.845692+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 460136448 unmapped: 65601536 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x19fd14000/0x0/0x1bfc00000, data 0x2f32c7c/0x3161000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x19fd14000/0x0/0x1bfc00000, data 0x2f32c7c/0x3161000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:06.845860+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4889466 data_alloc: 234881024 data_used: 23203840
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 460136448 unmapped: 65601536 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:07.846006+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 460136448 unmapped: 65601536 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 ms_handle_reset con 0x55bcd622c400 session 0x55bcd6fff2c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f2800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fe800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:08.846128+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459464704 unmapped: 66273280 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:09.846207+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459489280 unmapped: 66248704 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:10.846323+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x19fcf3000/0x0/0x1bfc00000, data 0x2f5cc7c/0x318b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [1])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 66240512 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:11.846399+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4890884 data_alloc: 234881024 data_used: 23379968
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 66240512 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:12.846525+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 66240512 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:13.846674+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 66240512 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd75b8f00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.524916649s of 11.728118896s, submitted: 41
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 ms_handle_reset con 0x55bcd81ed400 session 0x55bcd4f43e00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:14.846974+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 66240512 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ed400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:15.847108+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 66240512 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x19fcf1000/0x0/0x1bfc00000, data 0x3267c7c/0x318d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x19fcf1000/0x0/0x1bfc00000, data 0x3267c7c/0x318d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:16.847236+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4914444 data_alloc: 234881024 data_used: 23437312
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 66240512 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:17.847388+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 66240512 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:18.847529+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 66240512 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:19.847672+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 66240512 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x19fcf1000/0x0/0x1bfc00000, data 0x3267c7c/0x318d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:20.847776+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 66240512 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:21.847897+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4967962 data_alloc: 234881024 data_used: 26353664
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 463724544 unmapped: 62013440 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x19f9e1000/0x0/0x1bfc00000, data 0x3577c7c/0x349d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:22.848040+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x19f9e1000/0x0/0x1bfc00000, data 0x3577c7c/0x349d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462184448 unmapped: 63553536 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:23.848188+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462184448 unmapped: 63553536 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:24.848382+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462184448 unmapped: 63553536 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:25.848546+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462184448 unmapped: 63553536 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:26.848742+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4969562 data_alloc: 234881024 data_used: 26501120
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x19f9e1000/0x0/0x1bfc00000, data 0x3577c7c/0x349d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462184448 unmapped: 63553536 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.103944778s of 13.150414467s, submitted: 6
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x19f9e1000/0x0/0x1bfc00000, data 0x3577c7c/0x349d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:27.848917+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 458989568 unmapped: 66748416 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 65K writes, 256K keys, 65K commit groups, 1.0 writes per commit group, ingest: 0.24 GB, 0.04 MB/s
                                           Cumulative WAL: 65K writes, 23K syncs, 2.74 writes per sync, written: 0.24 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4809 writes, 20K keys, 4809 commit groups, 1.0 writes per commit group, ingest: 19.69 MB, 0.03 MB/s
                                           Interval WAL: 4809 writes, 1955 syncs, 2.46 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3c430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3c430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3c430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bcd3b3cdd0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:28.849202+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 66715648 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:29.849402+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 66715648 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:30.849635+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 66715648 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:31.849788+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4965690 data_alloc: 234881024 data_used: 26886144
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 66715648 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:32.850112+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 66715648 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x19f9e1000/0x0/0x1bfc00000, data 0x3577c7c/0x349d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:33.850384+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459292672 unmapped: 66445312 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:34.850561+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459292672 unmapped: 66445312 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:35.850724+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459292672 unmapped: 66445312 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:36.850957+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4993036 data_alloc: 234881024 data_used: 26882048
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459857920 unmapped: 65880064 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x19f7a8000/0x0/0x1bfc00000, data 0x37adc7c/0x36d3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:37.851130+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 459857920 unmapped: 65880064 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 ms_handle_reset con 0x55bcd81ed400 session 0x55bcd826ab40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.982535362s of 11.166376114s, submitted: 14
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 ms_handle_reset con 0x55bcd521e000 session 0x55bcd52114a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:38.851283+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 457621504 unmapped: 68116480 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd79d63c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:39.851404+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x19f7ab000/0x0/0x1bfc00000, data 0x37adc7c/0x36d3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453328896 unmapped: 72409088 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:40.851527+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453328896 unmapped: 72409088 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:41.851708+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4822849 data_alloc: 234881024 data_used: 18599936
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453328896 unmapped: 72409088 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:42.852009+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454385664 unmapped: 71352320 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:43.852231+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454385664 unmapped: 71352320 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 ms_handle_reset con 0x55bcd81f2800 session 0x55bcd820b680
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 ms_handle_reset con 0x55bcd81fe800 session 0x55bcd74a70e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:44.852342+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454385664 unmapped: 71352320 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a041b000/0x0/0x1bfc00000, data 0x2b6cc49/0x2a63000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:45.852492+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454385664 unmapped: 71352320 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:46.852657+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a041b000/0x0/0x1bfc00000, data 0x2b6cc49/0x2a63000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4831417 data_alloc: 234881024 data_used: 18472960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 ms_handle_reset con 0x55bcd521e000 session 0x55bcd72185a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454385664 unmapped: 71352320 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:47.852810+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454385664 unmapped: 71352320 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:48.852912+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454385664 unmapped: 71352320 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd5528000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ed400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:49.853092+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454393856 unmapped: 71344128 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:50.853272+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454393856 unmapped: 71344128 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:51.853453+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4831417 data_alloc: 234881024 data_used: 18472960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.426866531s of 13.683501244s, submitted: 62
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454393856 unmapped: 71344128 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a0445000/0x0/0x1bfc00000, data 0x2b42c49/0x2a39000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:52.853633+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 ms_handle_reset con 0x55bcd81ed400 session 0x55bcd7fd6960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454393856 unmapped: 71344128 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:53.853907+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454393856 unmapped: 71344128 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f2800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 ms_handle_reset con 0x55bcd81f2800 session 0x55bcd7805860
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd622c400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:54.854037+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454410240 unmapped: 71327744 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 422 ms_handle_reset con 0x55bcd622c400 session 0x55bcd749c5a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:55.854210+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454410240 unmapped: 71327744 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:56.854363+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a099e000/0x0/0x1bfc00000, data 0x22b18f6/0x24df000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4780425 data_alloc: 234881024 data_used: 18477056
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 422 ms_handle_reset con 0x55bcd81fa000 session 0x55bcd6ffef00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 422 ms_handle_reset con 0x55bcda41b400 session 0x55bcd7211860
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454410240 unmapped: 71327744 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:57.854525+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454418432 unmapped: 71319552 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:58.854683+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454418432 unmapped: 71319552 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 422 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7fd7680
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:59.854866+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454426624 unmapped: 71311360 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a099f000/0x0/0x1bfc00000, data 0x22b18f6/0x24df000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:00.855006+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454426624 unmapped: 71311360 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 423 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd820b4a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ed400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:01.855134+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4782683 data_alloc: 234881024 data_used: 18489344
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.637521744s of 10.001954079s, submitted: 97
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444440576 unmapped: 81297408 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 423 ms_handle_reset con 0x55bcd81ed400 session 0x55bcd6ffef00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:02.855309+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444440576 unmapped: 81297408 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a15e0000/0x0/0x1bfc00000, data 0x166e3d3/0x189c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:03.855491+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444440576 unmapped: 81297408 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:04.855663+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444440576 unmapped: 81297408 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:05.855800+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ed400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444456960 unmapped: 81281024 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:06.855936+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4595427 data_alloc: 218103808 data_used: 7753728
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 424 ms_handle_reset con 0x55bcd81ed400 session 0x55bcd75b9c20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444473344 unmapped: 81264640 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:07.856068+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 424 heartbeat osd_stat(store_statfs(0x1a15df000/0x0/0x1bfc00000, data 0x1670062/0x189e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444473344 unmapped: 81264640 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 424 ms_handle_reset con 0x55bcd521e000 session 0x55bcd55341e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 424 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd68912c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fa000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 424 ms_handle_reset con 0x55bcd81fa000 session 0x55bcd6813860
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcda41b400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 424 ms_handle_reset con 0x55bcda41b400 session 0x55bcd72101e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 424 heartbeat osd_stat(store_statfs(0x1a15e1000/0x0/0x1bfc00000, data 0x1670035/0x189c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [0,1])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 424 ms_handle_reset con 0x55bcd521e000 session 0x55bcd75b8b40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 424 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd4f410e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ed400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 424 ms_handle_reset con 0x55bcd81ed400 session 0x55bcd68910e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fa000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:08.856224+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 424 ms_handle_reset con 0x55bcd81fa000 session 0x55bcd7217c20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f2800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 424 ms_handle_reset con 0x55bcd81f2800 session 0x55bcd52105a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 445571072 unmapped: 80166912 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:09.856367+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 445571072 unmapped: 80166912 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 424 heartbeat osd_stat(store_statfs(0x1a0a13000/0x0/0x1bfc00000, data 0x223d0a7/0x246b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:10.856522+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 445571072 unmapped: 80166912 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:11.856697+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 424 heartbeat osd_stat(store_statfs(0x1a0a13000/0x0/0x1bfc00000, data 0x223d0a7/0x246b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 424 handle_osd_map epochs [425,425], i have 425, src has [1,425]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4689912 data_alloc: 218103808 data_used: 7753728
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444178432 unmapped: 81559552 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:12.856820+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444178432 unmapped: 81559552 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f2800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81f2800 session 0x55bcd7213e00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:13.857033+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444186624 unmapped: 81551360 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd521e000 session 0x55bcd5535a40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:14.857200+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444186624 unmapped: 81551360 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd5529c20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ed400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.008628845s of 13.355698586s, submitted: 123
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81ed400 session 0x55bcd79d6d20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:15.857331+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fa000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444334080 unmapped: 81403904 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:16.857485+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5512000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd5512000 session 0x55bcd7fd6f00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4693869 data_alloc: 218103808 data_used: 7757824
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a09ec000/0x0/0x1bfc00000, data 0x2262be6/0x2492000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7fd7c20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444334080 unmapped: 81403904 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:17.857644+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446734336 unmapped: 79003648 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a09ec000/0x0/0x1bfc00000, data 0x2262be6/0x2492000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:18.857895+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd5cd41e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446734336 unmapped: 79003648 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:19.858057+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446734336 unmapped: 79003648 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:20.858287+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446734336 unmapped: 79003648 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5512000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd5512000 session 0x55bcd7574000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ed400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:21.858478+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81ed400 session 0x55bcd7bf41e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4783390 data_alloc: 234881024 data_used: 20123648
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f2800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 446742528 unmapped: 78995456 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a09ec000/0x0/0x1bfc00000, data 0x2262be6/0x2492000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81f2800 session 0x55bcd4f963c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd521e000 session 0x55bcd5534d20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:22.858723+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447799296 unmapped: 77938688 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:23.859031+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447799296 unmapped: 77938688 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:24.859161+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447799296 unmapped: 77938688 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a0395000/0x0/0x1bfc00000, data 0x28b8bf6/0x2ae9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a0395000/0x0/0x1bfc00000, data 0x28b8bf6/0x2ae9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:25.859312+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447799296 unmapped: 77938688 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:26.859609+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4837251 data_alloc: 234881024 data_used: 20123648
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447807488 unmapped: 77930496 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a0395000/0x0/0x1bfc00000, data 0x28b8bf6/0x2ae9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:27.859775+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447807488 unmapped: 77930496 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.694820404s of 13.001426697s, submitted: 43
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:28.859918+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 448069632 unmapped: 77668352 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:29.860042+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 448069632 unmapped: 77668352 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:30.860246+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 448069632 unmapped: 77668352 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:31.860379+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4867081 data_alloc: 234881024 data_used: 20242432
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 448069632 unmapped: 77668352 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:32.860509+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 448069632 unmapped: 77668352 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a007f000/0x0/0x1bfc00000, data 0x2bcebf6/0x2dff000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd79d6000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:33.860675+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5512000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd5512000 session 0x55bcd78054a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 448069632 unmapped: 77668352 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ed400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81ed400 session 0x55bcd7805a40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:34.860804+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdada4c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcdada4c00 session 0x55bcd7218d20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 448069632 unmapped: 77668352 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:35.860940+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a007e000/0x0/0x1bfc00000, data 0x2bcec19/0x2e00000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1cd7f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 448069632 unmapped: 77668352 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:36.861063+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4882818 data_alloc: 234881024 data_used: 22142976
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450920448 unmapped: 74817536 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:37.861182+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451149824 unmapped: 74588160 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.053140640s of 10.173756599s, submitted: 39
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:38.861300+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451149824 unmapped: 74588160 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:39.861438+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451149824 unmapped: 74588160 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:40.861575+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81fa000 session 0x55bcd6891e00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd5211c20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451149824 unmapped: 74588160 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5512000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:41.861694+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4712946 data_alloc: 218103808 data_used: 14405632
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd5512000 session 0x55bcd7805c20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a0b52000/0x0/0x1bfc00000, data 0x1cebba7/0x1f1b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1d18f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ed400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81ed400 session 0x55bcd7fd7860
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444416000 unmapped: 81321984 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fac00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81fac00 session 0x55bcd6ffe3c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5512000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd5512000 session 0x55bcd5528f00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ed400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81ed400 session 0x55bcd55294a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fa000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81fa000 session 0x55bcd6ffe3c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:42.861812+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fac00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81fac00 session 0x55bcd78054a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd4f963c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5512000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd5512000 session 0x55bcd5cd41e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ed400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81ed400 session 0x55bcd5535a40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444735488 unmapped: 81002496 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fa000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81fa000 session 0x55bcd52105a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:43.861987+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fac00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81fac00 session 0x55bcd68910e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444735488 unmapped: 81002496 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd4f410e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5512000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:44.862136+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd5512000 session 0x55bcd75b8b40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444735488 unmapped: 81002496 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81ed400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fa000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:45.862319+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444735488 unmapped: 81002496 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:46.862497+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4783685 data_alloc: 234881024 data_used: 18337792
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 444547072 unmapped: 81190912 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a068c000/0x0/0x1bfc00000, data 0x21b0bb7/0x23e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1d18f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:47.862604+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447594496 unmapped: 78143488 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:48.862809+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447594496 unmapped: 78143488 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:49.863117+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.070795059s of 11.480711937s, submitted: 135
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447594496 unmapped: 78143488 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:50.863330+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447594496 unmapped: 78143488 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:51.863519+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4855783 data_alloc: 234881024 data_used: 18796544
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447594496 unmapped: 78143488 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x19fdef000/0x0/0x1bfc00000, data 0x2a4ebb7/0x2c7f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1d18f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:52.863685+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447594496 unmapped: 78143488 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:53.863922+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447594496 unmapped: 78143488 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:54.864053+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447594496 unmapped: 78143488 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:55.864167+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447594496 unmapped: 78143488 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:56.864325+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x19fdef000/0x0/0x1bfc00000, data 0x2a4ebb7/0x2c7f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1d18f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4855943 data_alloc: 234881024 data_used: 18800640
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 447594496 unmapped: 78143488 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd521e000 session 0x55bcd5cd45a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd82ae1e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:57.864473+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x19fdef000/0x0/0x1bfc00000, data 0x2a4ebb7/0x2c7f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1d18f9c7), peers [0,2] op hist [0,0,0,4,5])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fac00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449568768 unmapped: 76169216 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81fac00 session 0x55bcd6812960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f85b000/0x0/0x1bfc00000, data 0x2fe2bb7/0x3213000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1d18f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:58.864728+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449568768 unmapped: 76169216 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:59.864948+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449372160 unmapped: 76365824 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f7d8000/0x0/0x1bfc00000, data 0x3066b94/0x3296000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1d18f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:00.865084+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449372160 unmapped: 76365824 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:01.865251+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4900751 data_alloc: 234881024 data_used: 19607552
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449380352 unmapped: 76357632 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f7d8000/0x0/0x1bfc00000, data 0x3066b94/0x3296000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1d18f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:02.865381+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449380352 unmapped: 76357632 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:03.865584+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449380352 unmapped: 76357632 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:04.865722+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449380352 unmapped: 76357632 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:05.865884+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449380352 unmapped: 76357632 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f7d8000/0x0/0x1bfc00000, data 0x3066b94/0x3296000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1d18f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:06.866045+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f7d8000/0x0/0x1bfc00000, data 0x3066b94/0x3296000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1d18f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4900751 data_alloc: 234881024 data_used: 19607552
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449380352 unmapped: 76357632 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:07.866190+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449380352 unmapped: 76357632 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:08.866328+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449380352 unmapped: 76357632 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:09.866513+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.579032898s of 20.090600967s, submitted: 74
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449388544 unmapped: 76349440 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81ed400 session 0x55bcd72101e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd81fa000 session 0x55bcd7214000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f7d8000/0x0/0x1bfc00000, data 0x3066b94/0x3296000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1d18f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:10.866633+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd521e000 session 0x55bcd61961e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449396736 unmapped: 76341248 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:11.866793+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4786173 data_alloc: 218103808 data_used: 14860288
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449421312 unmapped: 76316672 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:12.866930+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449429504 unmapped: 76308480 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:13.867122+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449454080 unmapped: 76283904 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:14.867226+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a131a000/0x0/0x1bfc00000, data 0x2565b84/0x2794000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [0,0,0,1,0,1])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449429504 unmapped: 76308480 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:15.867340+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449421312 unmapped: 76316672 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:16.867490+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4786173 data_alloc: 218103808 data_used: 14860288
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449429504 unmapped: 76308480 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:17.867653+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449421312 unmapped: 76316672 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd6812d20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:18.902620+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5512000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fac00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449429504 unmapped: 76308480 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:19.902760+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a131a000/0x0/0x1bfc00000, data 0x2565b84/0x2794000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449429504 unmapped: 76308480 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:20.902870+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449454080 unmapped: 76283904 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:21.903046+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4786465 data_alloc: 218103808 data_used: 14880768
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449454080 unmapped: 76283904 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:22.903201+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449454080 unmapped: 76283904 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:23.903365+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449454080 unmapped: 76283904 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a131a000/0x0/0x1bfc00000, data 0x2565b84/0x2794000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:24.903515+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449454080 unmapped: 76283904 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:25.903736+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449454080 unmapped: 76283904 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:26.903919+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4786465 data_alloc: 218103808 data_used: 14880768
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449462272 unmapped: 76275712 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:27.904076+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449462272 unmapped: 76275712 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:28.904250+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449462272 unmapped: 76275712 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:29.904426+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a131a000/0x0/0x1bfc00000, data 0x2565b84/0x2794000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.245162010s of 20.124031067s, submitted: 286
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449462272 unmapped: 76275712 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:30.904550+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449462272 unmapped: 76275712 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:31.904681+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4798557 data_alloc: 218103808 data_used: 15982592
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449462272 unmapped: 76275712 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:32.904810+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449462272 unmapped: 76275712 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:33.905009+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449462272 unmapped: 76275712 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:34.905161+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449462272 unmapped: 76275712 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a131a000/0x0/0x1bfc00000, data 0x2565b84/0x2794000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:35.905360+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449462272 unmapped: 76275712 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:36.905549+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4802325 data_alloc: 218103808 data_used: 15990784
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449470464 unmapped: 76267520 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:37.905694+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449470464 unmapped: 76267520 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:38.905888+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449470464 unmapped: 76267520 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:39.906003+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a130e000/0x0/0x1bfc00000, data 0x2571b84/0x27a0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449470464 unmapped: 76267520 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:40.906122+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a130e000/0x0/0x1bfc00000, data 0x2571b84/0x27a0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449470464 unmapped: 76267520 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:41.906241+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4802325 data_alloc: 218103808 data_used: 15990784
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449478656 unmapped: 76259328 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a130e000/0x0/0x1bfc00000, data 0x2571b84/0x27a0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:42.906423+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449478656 unmapped: 76259328 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:43.906629+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.248320580s of 13.613334656s, submitted: 9
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449478656 unmapped: 76259328 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:44.906788+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 425 handle_osd_map epochs [426,426], i have 426, src has [1,426]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd74a43c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 heartbeat osd_stat(store_statfs(0x1a130e000/0x0/0x1bfc00000, data 0x2571b84/0x27a0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450535424 unmapped: 75202560 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:45.906935+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450535424 unmapped: 75202560 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:46.907083+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3f400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4821151 data_alloc: 218103808 data_used: 17117184
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd81f4000 session 0x55bcd5cd5e00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd6f3f400 session 0x55bcd6ffe780
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450535424 unmapped: 75202560 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:47.907259+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450535424 unmapped: 75202560 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:48.907405+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450535424 unmapped: 75202560 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:49.907521+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450543616 unmapped: 75194368 heap: 525737984 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd81f4000 session 0x55bcd4f423c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd521e000 session 0x55bcd820a1e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd749d0e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:50.907706+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fa000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd81fa000 session 0x55bcd8238f00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 heartbeat osd_stat(store_statfs(0x1a1307000/0x0/0x1bfc00000, data 0x25e284f/0x27a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 461848576 unmapped: 67559424 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:51.907879+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7213e00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd5cd54a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3f400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd6f3f400 session 0x55bcd5535860
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd81f4000 session 0x55bcd7fd6000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fd800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd81fd800 session 0x55bcd74a45a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4902445 data_alloc: 218103808 data_used: 17117184
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 78831616 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:52.908075+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 78831616 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:53.908284+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 78831616 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:54.908408+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 78831616 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:55.908553+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 78831616 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7eb41e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 heartbeat osd_stat(store_statfs(0x1a0837000/0x0/0x1bfc00000, data 0x30b285f/0x3277000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:56.908675+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4902445 data_alloc: 218103808 data_used: 17117184
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd7fd6b40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 78831616 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:57.908792+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 heartbeat osd_stat(store_statfs(0x1a0837000/0x0/0x1bfc00000, data 0x30b285f/0x3277000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3f400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd6f3f400 session 0x55bcd8238b40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.560773849s of 14.498044014s, submitted: 21
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd81f4000 session 0x55bcd820e1e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451846144 unmapped: 77561856 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:58.908965+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5351000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd987b800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451846144 unmapped: 77561856 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:59.909197+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 heartbeat osd_stat(store_statfs(0x1a0811000/0x0/0x1bfc00000, data 0x30d6892/0x329d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451862528 unmapped: 77545472 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:00.909399+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521f400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd521f400 session 0x55bcd7c9c000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451862528 unmapped: 77545472 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:01.909547+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7487680
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4991495 data_alloc: 234881024 data_used: 28463104
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451878912 unmapped: 77529088 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:02.909694+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd7fd63c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3f400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451878912 unmapped: 77529088 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd6f3f400 session 0x55bcd75b9c20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:03.909914+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdcdd4000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451878912 unmapped: 77529088 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:04.910045+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 heartbeat osd_stat(store_statfs(0x1a080f000/0x0/0x1bfc00000, data 0x30d68b5/0x329f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451878912 unmapped: 77529088 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:05.910178+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451878912 unmapped: 77529088 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:06.910288+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4993244 data_alloc: 234881024 data_used: 28475392
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451903488 unmapped: 77504512 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:07.910421+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 heartbeat osd_stat(store_statfs(0x1a080f000/0x0/0x1bfc00000, data 0x30d68b5/0x329f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451903488 unmapped: 77504512 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:08.910551+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451903488 unmapped: 77504512 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:09.910739+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 heartbeat osd_stat(store_statfs(0x1a080f000/0x0/0x1bfc00000, data 0x30d68b5/0x329f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451903488 unmapped: 77504512 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:10.910886+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.712766647s of 12.775095940s, submitted: 17
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453255168 unmapped: 76152832 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:11.911027+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5057974 data_alloc: 234881024 data_used: 29130752
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453345280 unmapped: 76062720 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:12.911166+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453525504 unmapped: 75882496 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:13.911387+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453533696 unmapped: 75874304 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:14.911940+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453533696 unmapped: 75874304 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:15.912051+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 heartbeat osd_stat(store_statfs(0x1a003f000/0x0/0x1bfc00000, data 0x389e8b5/0x3a67000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453533696 unmapped: 75874304 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:16.912188+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5063758 data_alloc: 234881024 data_used: 29274112
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453541888 unmapped: 75866112 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:17.912347+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453541888 unmapped: 75866112 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:18.912455+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 heartbeat osd_stat(store_statfs(0x1a003f000/0x0/0x1bfc00000, data 0x389e8b5/0x3a67000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453541888 unmapped: 75866112 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:19.912546+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453541888 unmapped: 75866112 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:20.912914+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453541888 unmapped: 75866112 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:21.913129+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5064558 data_alloc: 234881024 data_used: 29294592
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453541888 unmapped: 75866112 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:22.913271+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.941401482s of 12.116731644s, submitted: 55
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454852608 unmapped: 74555392 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:23.913513+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454852608 unmapped: 74555392 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:24.913657+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 heartbeat osd_stat(store_statfs(0x19ff75000/0x0/0x1bfc00000, data 0x39708b5/0x3b39000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454852608 unmapped: 74555392 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:25.913736+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454852608 unmapped: 74555392 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:26.913884+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd81f4000 session 0x55bcd7574000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcdcdd4000 session 0x55bcd7eb5860
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5072190 data_alloc: 234881024 data_used: 29294592
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454025216 unmapped: 75382784 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7210780
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:27.914021+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454025216 unmapped: 75382784 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:28.914189+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 heartbeat osd_stat(store_statfs(0x19ff75000/0x0/0x1bfc00000, data 0x3970892/0x3b38000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454025216 unmapped: 75382784 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:29.914310+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454025216 unmapped: 75382784 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 heartbeat osd_stat(store_statfs(0x19ff75000/0x0/0x1bfc00000, data 0x3970892/0x3b38000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd5351000 session 0x55bcd5cd54a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd987b800 session 0x55bcd4f403c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:30.914424+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454025216 unmapped: 75382784 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:31.914574+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 heartbeat osd_stat(store_statfs(0x19ff75000/0x0/0x1bfc00000, data 0x3970892/0x3b38000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5067642 data_alloc: 234881024 data_used: 29278208
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454033408 unmapped: 75374592 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:32.914708+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd5336000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd6f3f400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd6f3f400 session 0x55bcd79d6f00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454049792 unmapped: 75358208 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:33.914897+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.793792725s of 10.926726341s, submitted: 51
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd521e000 session 0x55bcd749cf00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5351000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 ms_handle_reset con 0x55bcd5351000 session 0x55bcd5534f00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd987b800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454057984 unmapped: 75350016 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:34.915069+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 427 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd7eb52c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 427 ms_handle_reset con 0x55bcd987b800 session 0x55bcd8239a40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 427 ms_handle_reset con 0x55bcd81f4000 session 0x55bcd7fd6960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 427 ms_handle_reset con 0x55bcd521e000 session 0x55bcd4f40f00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454066176 unmapped: 75341824 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 427 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd7c9c780
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:35.915230+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5351000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 427 ms_handle_reset con 0x55bcd5351000 session 0x55bcd749dc20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454066176 unmapped: 75341824 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:36.915369+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a0047000/0x0/0x1bfc00000, data 0x383261f/0x3a66000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5057984 data_alloc: 234881024 data_used: 29282304
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 427 ms_handle_reset con 0x55bcd5512000 session 0x55bcd6891680
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 427 ms_handle_reset con 0x55bcd81fac00 session 0x55bcd75750e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454074368 unmapped: 75333632 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:37.915477+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5351000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 427 ms_handle_reset con 0x55bcd521e000 session 0x55bcd820a3c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454074368 unmapped: 75333632 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:38.915616+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454082560 unmapped: 75325440 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:39.915753+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a0048000/0x0/0x1bfc00000, data 0x383261f/0x3a66000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454082560 unmapped: 75325440 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:40.915866+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454090752 unmapped: 75317248 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:41.916002+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5512000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 428 ms_handle_reset con 0x55bcd5512000 session 0x55bcd72101e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5061826 data_alloc: 234881024 data_used: 29372416
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454090752 unmapped: 75317248 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:42.916132+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd987b800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 429 ms_handle_reset con 0x55bcd987b800 session 0x55bcd7213e00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 429 ms_handle_reset con 0x55bcd81f4000 session 0x55bcd74a43c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fc400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 429 ms_handle_reset con 0x55bcd81fc400 session 0x55bcd74af0e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454098944 unmapped: 75309056 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:43.916303+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454098944 unmapped: 75309056 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:44.916444+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a0f43000/0x0/0x1bfc00000, data 0x2934dfb/0x2b6a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454098944 unmapped: 75309056 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:45.916585+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454098944 unmapped: 75309056 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:46.916729+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879242 data_alloc: 234881024 data_used: 19992576
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454107136 unmapped: 75300864 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:47.926608+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a0f43000/0x0/0x1bfc00000, data 0x2934dfb/0x2b6a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454107136 unmapped: 75300864 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:48.926788+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 454107136 unmapped: 75300864 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.384347916s of 15.611342430s, submitted: 86
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:49.926999+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455204864 unmapped: 74203136 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:50.927118+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _renew_subs
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455204864 unmapped: 74203136 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:51.927238+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4889576 data_alloc: 234881024 data_used: 20553728
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455204864 unmapped: 74203136 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:52.927782+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0f40000/0x0/0x1bfc00000, data 0x293693a/0x2b6d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455204864 unmapped: 74203136 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:53.928419+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0f40000/0x0/0x1bfc00000, data 0x293693a/0x2b6d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0f40000/0x0/0x1bfc00000, data 0x293693a/0x2b6d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455204864 unmapped: 74203136 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:54.929071+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455213056 unmapped: 74194944 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:55.929295+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455213056 unmapped: 74194944 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:56.929798+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4889576 data_alloc: 234881024 data_used: 20553728
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455221248 unmapped: 74186752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:57.929979+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd7575c20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5351000 session 0x55bcd7fd6f00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fc400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455221248 unmapped: 74186752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:58.930115+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd81fc400 session 0x55bcd5cd4f00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a21ff000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451272704 unmapped: 78135296 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:59.930326+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451272704 unmapped: 78135296 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:00.930515+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451272704 unmapped: 78135296 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:01.930703+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4657353 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451272704 unmapped: 78135296 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:02.930888+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451272704 unmapped: 78135296 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:03.931074+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451272704 unmapped: 78135296 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:04.931241+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a21ff000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451272704 unmapped: 78135296 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:05.931387+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a21ff000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451280896 unmapped: 78127104 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:06.931515+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4657353 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451280896 unmapped: 78127104 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:07.931685+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451280896 unmapped: 78127104 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:08.932112+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451280896 unmapped: 78127104 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:09.932254+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451280896 unmapped: 78127104 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:10.932409+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451280896 unmapped: 78127104 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:11.932556+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a21ff000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4657353 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451289088 unmapped: 78118912 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:12.932713+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451289088 unmapped: 78118912 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:13.932877+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a21ff000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451297280 unmapped: 78110720 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:14.933090+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a21ff000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451297280 unmapped: 78110720 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:15.933233+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451297280 unmapped: 78110720 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:16.933405+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4657353 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451297280 unmapped: 78110720 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:17.933607+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451305472 unmapped: 78102528 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:18.933798+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451305472 unmapped: 78102528 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:19.933939+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a21ff000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451313664 unmapped: 78094336 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:20.934056+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7bf5a40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5512000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5512000 session 0x55bcd7bf5680
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7bf4d20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd7bf4780
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5351000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451313664 unmapped: 78094336 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:21.934227+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.655097961s of 32.044475555s, submitted: 67
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5351000 session 0x55bcd820f2c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fc400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd81fc400 session 0x55bcd820fc20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd81f4000 session 0x55bcd820e5a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd81f4000 session 0x55bcd820ed20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd521e000 session 0x55bcd820e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4734651 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451313664 unmapped: 78094336 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:22.934370+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451313664 unmapped: 78094336 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:23.934513+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:24.934669+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451313664 unmapped: 78094336 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:25.934949+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451313664 unmapped: 78094336 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a18a5000/0x0/0x1bfc00000, data 0x1fd58f7/0x2209000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:26.935097+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451330048 unmapped: 78077952 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4734651 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:27.935249+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451330048 unmapped: 78077952 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd820e780
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:28.935687+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451133440 unmapped: 78274560 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5351000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81fc400
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:29.936047+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451133440 unmapped: 78274560 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1881000/0x0/0x1bfc00000, data 0x1ff98f7/0x222d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:30.936340+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451428352 unmapped: 77979648 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:31.936566+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451428352 unmapped: 77979648 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4805964 data_alloc: 234881024 data_used: 17387520
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:32.936957+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451428352 unmapped: 77979648 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:33.937321+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451428352 unmapped: 77979648 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1881000/0x0/0x1bfc00000, data 0x1ff98f7/0x222d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:34.937453+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451428352 unmapped: 77979648 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:35.937698+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451428352 unmapped: 77979648 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:36.937968+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451436544 unmapped: 77971456 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4805964 data_alloc: 234881024 data_used: 17387520
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:37.938165+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451436544 unmapped: 77971456 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:38.938395+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451436544 unmapped: 77971456 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1881000/0x0/0x1bfc00000, data 0x1ff98f7/0x222d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:39.938544+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451436544 unmapped: 77971456 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.866851807s of 18.948247910s, submitted: 19
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:40.938688+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452280320 unmapped: 77127680 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1881000/0x0/0x1bfc00000, data 0x1ff98f7/0x222d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [0,0,0,0,0,6,0,10])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:41.938801+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453902336 unmapped: 75505664 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894062 data_alloc: 234881024 data_used: 18345984
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:42.938932+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453902336 unmapped: 75505664 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:43.939176+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455221248 unmapped: 74186752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:44.939331+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455221248 unmapped: 74186752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0e47000/0x0/0x1bfc00000, data 0x2a2b8f7/0x2c5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:45.939469+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455221248 unmapped: 74186752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:46.939647+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455221248 unmapped: 74186752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4908178 data_alloc: 234881024 data_used: 18374656
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:47.939779+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455221248 unmapped: 74186752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:48.939961+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455229440 unmapped: 74178560 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:49.940215+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455229440 unmapped: 74178560 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0e28000/0x0/0x1bfc00000, data 0x2a528f7/0x2c86000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5351000 session 0x55bcd820e3c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd81fc400 session 0x55bcd55874a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:50.940359+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455229440 unmapped: 74178560 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0e28000/0x0/0x1bfc00000, data 0x2a528f7/0x2c86000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:51.940500+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455229440 unmapped: 74178560 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4907530 data_alloc: 234881024 data_used: 18374656
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:52.940656+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:53.940896+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0e28000/0x0/0x1bfc00000, data 0x2a528f7/0x2c86000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:54.941071+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:55.941219+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:56.941371+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4907530 data_alloc: 234881024 data_used: 18374656
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:57.941558+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7bf41e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0e28000/0x0/0x1bfc00000, data 0x2a528f7/0x2c86000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:58.941727+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:59.941886+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5351000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.596574783s of 19.309139252s, submitted: 103
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:00.942039+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0e28000/0x0/0x1bfc00000, data 0x2a528f7/0x2c86000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:01.942151+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4909422 data_alloc: 234881024 data_used: 18493440
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:02.942299+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:03.942456+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:04.942584+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:05.942788+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0e28000/0x0/0x1bfc00000, data 0x2a528f7/0x2c86000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:06.942937+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0e28000/0x0/0x1bfc00000, data 0x2a528f7/0x2c86000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4909422 data_alloc: 234881024 data_used: 18493440
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:07.943103+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0e28000/0x0/0x1bfc00000, data 0x2a528f7/0x2c86000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0e28000/0x0/0x1bfc00000, data 0x2a528f7/0x2c86000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:08.943266+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:09.943457+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0e28000/0x0/0x1bfc00000, data 0x2a528f7/0x2c86000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:10.943609+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:11.943796+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.275099754s of 12.280827522s, submitted: 1
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4915422 data_alloc: 234881024 data_used: 18903040
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:12.943982+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:13.944188+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:14.944364+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:15.944568+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0e28000/0x0/0x1bfc00000, data 0x2a528f7/0x2c86000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:16.944727+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4918622 data_alloc: 234881024 data_used: 19460096
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:17.944918+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:18.945052+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:19.945284+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0e28000/0x0/0x1bfc00000, data 0x2a528f7/0x2c86000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:20.945498+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd82392c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5351000 session 0x55bcd82390e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 455245824 unmapped: 74162176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd81f4000 session 0x55bcd5cd4960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:21.945719+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449519616 unmapped: 79888384 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672226 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:22.945891+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449519616 unmapped: 79888384 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:23.946064+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449519616 unmapped: 79888384 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:24.946272+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449519616 unmapped: 79888384 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:25.946438+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449519616 unmapped: 79888384 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:26.946572+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449519616 unmapped: 79888384 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:27.946694+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672226 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449519616 unmapped: 79888384 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:28.946807+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449519616 unmapped: 79888384 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:29.947066+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449527808 unmapped: 79880192 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:30.947255+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449527808 unmapped: 79880192 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:31.947473+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449527808 unmapped: 79880192 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:32.947700+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672226 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449527808 unmapped: 79880192 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:33.947917+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449527808 unmapped: 79880192 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:34.948134+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449527808 unmapped: 79880192 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:35.948359+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449527808 unmapped: 79880192 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bce219b000 session 0x55bcd7219680
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd521e000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd81f4800 session 0x55bcd5336b40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5350c00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5d7fc00 session 0x55bcd7eb54a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcdfcda000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:36.948530+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449536000 unmapped: 79872000 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:37.948672+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672226 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449544192 unmapped: 79863808 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:38.948942+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449544192 unmapped: 79863808 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:39.949166+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449544192 unmapped: 79863808 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:40.949428+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449544192 unmapped: 79863808 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:41.949653+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449544192 unmapped: 79863808 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:42.949919+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672226 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449544192 unmapped: 79863808 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:43.950074+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449544192 unmapped: 79863808 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7fc00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5d7fc00 session 0x55bcd81c05a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd81f4000 session 0x55bcd7eb43c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bce219b000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bce219b000 session 0x55bcd749cb40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd987b800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd987b800 session 0x55bcd53385a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5c3b000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:44.950455+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.354831696s of 32.466384888s, submitted: 51
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5c3b000 session 0x55bcd5529c20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5c3b000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5c3b000 session 0x55bcd82aef00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7fc00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5d7fc00 session 0x55bcd74a54a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449552384 unmapped: 79855616 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd81f4000 session 0x55bcd7eb4b40
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd987b800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd987b800 session 0x55bcd7211860
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:45.950613+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449560576 unmapped: 79847424 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:46.950786+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449560576 unmapped: 79847424 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:47.950932+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4686822 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449560576 unmapped: 79847424 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:48.951102+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449560576 unmapped: 79847424 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bce219b000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bce219b000 session 0x55bcd81c1c20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:49.951301+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449560576 unmapped: 79847424 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2117000/0x0/0x1bfc00000, data 0x17638f7/0x1997000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bce219b000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bce219b000 session 0x55bcd7804780
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5c3b000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5c3b000 session 0x55bcd55294a0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7fc00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:50.951456+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5d7fc00 session 0x55bcd55283c0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449560576 unmapped: 79847424 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2117000/0x0/0x1bfc00000, data 0x17638f7/0x1997000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd987b800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:51.952309+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449560576 unmapped: 79847424 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:52.952498+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4690502 data_alloc: 218103808 data_used: 8343552
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449568768 unmapped: 79839232 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:53.952737+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449568768 unmapped: 79839232 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:54.952881+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449568768 unmapped: 79839232 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:55.953031+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2117000/0x0/0x1bfc00000, data 0x17638f7/0x1997000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449576960 unmapped: 79831040 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:56.953187+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449576960 unmapped: 79831040 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:57.953392+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4690502 data_alloc: 218103808 data_used: 8343552
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449576960 unmapped: 79831040 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:58.953610+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449585152 unmapped: 79822848 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:59.953921+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2117000/0x0/0x1bfc00000, data 0x17638f7/0x1997000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449585152 unmapped: 79822848 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:00.954231+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449585152 unmapped: 79822848 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:01.954399+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449593344 unmapped: 79814656 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:02.954620+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4690502 data_alloc: 218103808 data_used: 8343552
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 449593344 unmapped: 79814656 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.766798019s of 18.784076691s, submitted: 8
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:03.954783+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451600384 unmapped: 77807616 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:04.954944+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451600384 unmapped: 77807616 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1d58000/0x0/0x1bfc00000, data 0x1b1a8f7/0x1d4e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:05.955140+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451821568 unmapped: 77586432 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:06.955286+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451821568 unmapped: 77586432 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:07.955482+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4733898 data_alloc: 218103808 data_used: 8380416
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451821568 unmapped: 77586432 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1cbb000/0x0/0x1bfc00000, data 0x1bb78f7/0x1deb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1cbb000/0x0/0x1bfc00000, data 0x1bb78f7/0x1deb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:08.955596+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451821568 unmapped: 77586432 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:09.955722+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451821568 unmapped: 77586432 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:10.955887+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451223552 unmapped: 78184448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd81f4000 session 0x55bcd820a1e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd987b800 session 0x55bcd81c1c20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:11.956011+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451223552 unmapped: 78184448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:12.956215+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4729870 data_alloc: 218103808 data_used: 8380416
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451223552 unmapped: 78184448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:13.956393+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451223552 unmapped: 78184448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1c9f000/0x0/0x1bfc00000, data 0x1bdb8f7/0x1e0f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:14.956536+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1c9f000/0x0/0x1bfc00000, data 0x1bdb8f7/0x1e0f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451223552 unmapped: 78184448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:15.956692+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451223552 unmapped: 78184448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:16.956876+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451223552 unmapped: 78184448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:17.957005+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5c3b000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5d7fc00
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4729870 data_alloc: 218103808 data_used: 8380416
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451231744 unmapped: 78176256 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:18.957194+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451231744 unmapped: 78176256 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:19.957326+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451231744 unmapped: 78176256 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1c9f000/0x0/0x1bfc00000, data 0x1bdb8f7/0x1e0f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:20.957467+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451231744 unmapped: 78176256 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1c9f000/0x0/0x1bfc00000, data 0x1bdb8f7/0x1e0f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:21.957707+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451231744 unmapped: 78176256 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5c3b000 session 0x55bcd5534000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5d7fc00 session 0x55bcd5529c20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:22.957931+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4729870 data_alloc: 218103808 data_used: 8380416
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd81f4000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451239936 unmapped: 78168064 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.845039368s of 20.065383911s, submitted: 66
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:23.958234+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451215360 unmapped: 78192640 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd81f4000 session 0x55bcd79d7860
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:24.958425+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451223552 unmapped: 78184448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:25.958571+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451231744 unmapped: 78176256 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:26.958737+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451231744 unmapped: 78176256 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:27.958890+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451239936 unmapped: 78168064 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:28.959072+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451239936 unmapped: 78168064 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:29.959218+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451239936 unmapped: 78168064 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:30.959422+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451239936 unmapped: 78168064 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:31.959661+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451239936 unmapped: 78168064 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:32.959949+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451239936 unmapped: 78168064 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:33.960238+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451239936 unmapped: 78168064 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:34.960427+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451248128 unmapped: 78159872 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:35.960577+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451248128 unmapped: 78159872 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:36.960713+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451248128 unmapped: 78159872 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:37.960852+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451248128 unmapped: 78159872 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:38.960975+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451248128 unmapped: 78159872 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:39.961098+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451248128 unmapped: 78159872 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:40.961220+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451248128 unmapped: 78159872 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:41.961343+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451256320 unmapped: 78151680 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:42.961528+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451256320 unmapped: 78151680 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:43.961889+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451256320 unmapped: 78151680 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:44.962171+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451256320 unmapped: 78151680 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:45.962373+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451256320 unmapped: 78151680 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:46.962660+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451256320 unmapped: 78151680 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:47.962888+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451256320 unmapped: 78151680 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:48.963156+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451256320 unmapped: 78151680 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:49.963347+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451264512 unmapped: 78143488 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:50.963574+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451272704 unmapped: 78135296 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:51.963787+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451280896 unmapped: 78127104 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:52.963949+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451280896 unmapped: 78127104 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:53.964174+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451280896 unmapped: 78127104 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:54.964456+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451280896 unmapped: 78127104 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:55.964677+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451280896 unmapped: 78127104 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:56.964821+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451289088 unmapped: 78118912 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:57.964989+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451289088 unmapped: 78118912 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:58.965204+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451297280 unmapped: 78110720 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:59.965404+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451297280 unmapped: 78110720 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:00.965655+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451297280 unmapped: 78110720 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:01.965852+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451297280 unmapped: 78110720 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:02.965994+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451297280 unmapped: 78110720 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:03.966190+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451297280 unmapped: 78110720 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:04.966393+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451297280 unmapped: 78110720 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:05.966606+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451305472 unmapped: 78102528 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:06.966762+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451305472 unmapped: 78102528 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:07.966896+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451305472 unmapped: 78102528 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:08.967027+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451305472 unmapped: 78102528 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:09.967236+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451305472 unmapped: 78102528 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:10.967399+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451305472 unmapped: 78102528 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:11.967599+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451313664 unmapped: 78094336 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:12.967864+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451313664 unmapped: 78094336 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:13.968119+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451321856 unmapped: 78086144 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:14.968337+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451321856 unmapped: 78086144 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:15.968526+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451321856 unmapped: 78086144 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:16.968686+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451321856 unmapped: 78086144 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:17.968944+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451330048 unmapped: 78077952 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:18.969138+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451330048 unmapped: 78077952 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:19.969387+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451330048 unmapped: 78077952 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:20.969685+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451330048 unmapped: 78077952 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:21.969869+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451338240 unmapped: 78069760 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:22.970040+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451338240 unmapped: 78069760 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:23.970260+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451338240 unmapped: 78069760 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:24.970404+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451338240 unmapped: 78069760 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:25.970529+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451338240 unmapped: 78069760 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:26.970717+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451338240 unmapped: 78069760 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:27.970870+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451338240 unmapped: 78069760 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:28.971094+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451338240 unmapped: 78069760 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:29.971261+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451354624 unmapped: 78053376 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:30.971400+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451354624 unmapped: 78053376 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:31.971597+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451354624 unmapped: 78053376 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:32.971758+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451354624 unmapped: 78053376 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:33.971989+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451354624 unmapped: 78053376 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:34.972279+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451354624 unmapped: 78053376 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:35.972448+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451354624 unmapped: 78053376 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:36.972617+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451354624 unmapped: 78053376 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:37.972764+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451362816 unmapped: 78045184 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:38.972913+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451371008 unmapped: 78036992 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:39.973055+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451371008 unmapped: 78036992 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:40.973184+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd76af000 session 0x55bcd820bc20
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bce219b000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451371008 unmapped: 78036992 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:41.973328+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451371008 unmapped: 78036992 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:42.973463+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451379200 unmapped: 78028800 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:43.973626+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451379200 unmapped: 78028800 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:44.973757+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451379200 unmapped: 78028800 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:45.973948+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451387392 unmapped: 78020608 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:46.974118+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451387392 unmapped: 78020608 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:47.974322+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451387392 unmapped: 78020608 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:48.974472+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451387392 unmapped: 78020608 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:49.974646+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451387392 unmapped: 78020608 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:50.974791+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451387392 unmapped: 78020608 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:51.974964+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451395584 unmapped: 78012416 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:52.975140+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451395584 unmapped: 78012416 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:53.975319+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451403776 unmapped: 78004224 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:54.975478+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451403776 unmapped: 78004224 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:55.975702+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451403776 unmapped: 78004224 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:56.975883+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451403776 unmapped: 78004224 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:57.976036+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451411968 unmapped: 77996032 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:58.976179+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451420160 unmapped: 77987840 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:59.976396+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451420160 unmapped: 77987840 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:00.976600+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451420160 unmapped: 77987840 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:01.976921+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451428352 unmapped: 77979648 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:02.977089+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451428352 unmapped: 77979648 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:03.977278+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451428352 unmapped: 77979648 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:04.977488+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451428352 unmapped: 77979648 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:05.977638+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:06.977771+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:07.977892+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:08.978036+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:09.978123+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:10.978229+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:11.978403+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:12.978577+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:13.978757+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:14.978896+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:15.979023+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:16.979151+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:17.979275+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451461120 unmapped: 77946880 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:18.979392+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451461120 unmapped: 77946880 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:19.979500+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: do_command 'config diff' '{prefix=config diff}'
Oct 02 13:34:11 compute-0 ceph-osd[83986]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 02 13:34:11 compute-0 ceph-osd[83986]: do_command 'config show' '{prefix=config show}'
Oct 02 13:34:11 compute-0 ceph-osd[83986]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 02 13:34:11 compute-0 ceph-osd[83986]: do_command 'counter dump' '{prefix=counter dump}'
Oct 02 13:34:11 compute-0 ceph-osd[83986]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 02 13:34:11 compute-0 ceph-osd[83986]: do_command 'counter schema' '{prefix=counter schema}'
Oct 02 13:34:11 compute-0 ceph-osd[83986]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:20.979625+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451452928 unmapped: 77955072 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:21.979765+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: do_command 'log dump' '{prefix=log dump}'
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 462741504 unmapped: 66666496 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:22.980887+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct 02 13:34:11 compute-0 ceph-osd[83986]: do_command 'perf dump' '{prefix=perf dump}'
Oct 02 13:34:11 compute-0 ceph-osd[83986]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct 02 13:34:11 compute-0 ceph-osd[83986]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct 02 13:34:11 compute-0 ceph-osd[83986]: do_command 'perf schema' '{prefix=perf schema}'
Oct 02 13:34:11 compute-0 ceph-osd[83986]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451346432 unmapped: 78061568 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:23.981047+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451387392 unmapped: 78020608 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:24.981166+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451387392 unmapped: 78020608 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:25.981304+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451395584 unmapped: 78012416 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:26.981428+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451403776 unmapped: 78004224 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:27.981545+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451403776 unmapped: 78004224 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:28.981668+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451403776 unmapped: 78004224 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:29.981785+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451411968 unmapped: 77996032 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:30.982505+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451411968 unmapped: 77996032 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:31.982683+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451411968 unmapped: 77996032 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:32.982794+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451420160 unmapped: 77987840 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:33.982947+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451420160 unmapped: 77987840 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:34.983070+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451420160 unmapped: 77987840 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:35.983199+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451420160 unmapped: 77987840 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:36.983326+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451420160 unmapped: 77987840 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:37.983454+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451428352 unmapped: 77979648 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:38.983578+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451428352 unmapped: 77979648 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:39.983686+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451428352 unmapped: 77979648 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:40.983812+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451428352 unmapped: 77979648 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:41.984919+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451436544 unmapped: 77971456 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:42.985039+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:43.985223+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:44.985365+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:45.985480+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:46.985612+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:47.985728+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:48.985899+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451444736 unmapped: 77963264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:49.986012+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451461120 unmapped: 77946880 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:50.986143+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451461120 unmapped: 77946880 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:51.986275+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451461120 unmapped: 77946880 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:52.986385+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451469312 unmapped: 77938688 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:53.986535+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451477504 unmapped: 77930496 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:54.986650+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451477504 unmapped: 77930496 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:55.986752+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451477504 unmapped: 77930496 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:56.986866+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451477504 unmapped: 77930496 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:57.986997+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451477504 unmapped: 77930496 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:58.987121+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451477504 unmapped: 77930496 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:59.987316+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451477504 unmapped: 77930496 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:00.987492+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451477504 unmapped: 77930496 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:01.987734+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451485696 unmapped: 77922304 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:02.987966+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451485696 unmapped: 77922304 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:03.988165+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451493888 unmapped: 77914112 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:04.988310+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451493888 unmapped: 77914112 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:05.988506+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451510272 unmapped: 77897728 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:06.988686+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451510272 unmapped: 77897728 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:07.988843+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451510272 unmapped: 77897728 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:08.988954+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451510272 unmapped: 77897728 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:09.989085+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451510272 unmapped: 77897728 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:10.989228+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451510272 unmapped: 77897728 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:11.989371+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451510272 unmapped: 77897728 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:12.989519+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451518464 unmapped: 77889536 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:13.989689+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451526656 unmapped: 77881344 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:14.989808+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451534848 unmapped: 77873152 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:15.989951+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:16.990131+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451534848 unmapped: 77873152 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:17.990266+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451534848 unmapped: 77873152 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:18.990407+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451534848 unmapped: 77873152 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:19.990539+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451534848 unmapped: 77873152 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:20.990745+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451534848 unmapped: 77873152 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:21.990940+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451534848 unmapped: 77873152 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:22.991124+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451551232 unmapped: 77856768 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:23.991289+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451551232 unmapped: 77856768 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:24.991424+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451551232 unmapped: 77856768 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:25.991568+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451551232 unmapped: 77856768 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:26.991746+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451551232 unmapped: 77856768 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:27.991863+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451551232 unmapped: 77856768 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:28.991998+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451551232 unmapped: 77856768 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:29.992186+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451551232 unmapped: 77856768 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:30.992305+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451559424 unmapped: 77848576 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:31.992441+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451567616 unmapped: 77840384 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:32.992578+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451567616 unmapped: 77840384 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:33.992734+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451575808 unmapped: 77832192 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:34.992897+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451575808 unmapped: 77832192 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:35.993026+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451575808 unmapped: 77832192 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:36.993228+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451584000 unmapped: 77824000 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:37.993386+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451584000 unmapped: 77824000 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:38.993503+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451592192 unmapped: 77815808 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:39.993650+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451592192 unmapped: 77815808 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:40.993794+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451592192 unmapped: 77815808 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:41.993973+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451592192 unmapped: 77815808 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:42.994405+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451592192 unmapped: 77815808 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:43.994570+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451592192 unmapped: 77815808 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:44.994748+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451592192 unmapped: 77815808 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:45.994913+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451600384 unmapped: 77807616 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:46.995065+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451608576 unmapped: 77799424 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:47.995291+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451608576 unmapped: 77799424 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:48.995520+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451608576 unmapped: 77799424 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:49.995917+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451608576 unmapped: 77799424 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:50.996037+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451608576 unmapped: 77799424 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:51.996253+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451608576 unmapped: 77799424 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:52.996452+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451608576 unmapped: 77799424 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:53.996635+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451616768 unmapped: 77791232 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:54.996903+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451624960 unmapped: 77783040 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:55.997026+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451624960 unmapped: 77783040 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:56.997294+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451624960 unmapped: 77783040 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:57.997438+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451624960 unmapped: 77783040 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:58.997578+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451624960 unmapped: 77783040 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:59.997797+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451624960 unmapped: 77783040 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:00.997939+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451624960 unmapped: 77783040 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:01.998180+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451641344 unmapped: 77766656 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:02.998409+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451641344 unmapped: 77766656 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:03.998632+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451649536 unmapped: 77758464 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:04.998902+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451649536 unmapped: 77758464 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:05.999045+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451649536 unmapped: 77758464 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:06.999162+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451649536 unmapped: 77758464 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:07.999301+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451649536 unmapped: 77758464 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:08.999504+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451649536 unmapped: 77758464 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:09.999699+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451657728 unmapped: 77750272 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:10.999863+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451657728 unmapped: 77750272 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:12.000114+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451665920 unmapped: 77742080 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:13.000236+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451665920 unmapped: 77742080 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:14.000466+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451665920 unmapped: 77742080 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:15.000675+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451674112 unmapped: 77733888 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:16.000855+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451674112 unmapped: 77733888 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:17.001006+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451674112 unmapped: 77733888 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:18.001177+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451682304 unmapped: 77725696 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:19.001328+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451690496 unmapped: 77717504 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:20.001501+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451690496 unmapped: 77717504 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:21.001651+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451690496 unmapped: 77717504 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:22.001809+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451690496 unmapped: 77717504 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:23.002015+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451690496 unmapped: 77717504 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:24.002209+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451690496 unmapped: 77717504 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:25.002346+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451690496 unmapped: 77717504 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:26.002480+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451706880 unmapped: 77701120 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:27.002622+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451706880 unmapped: 77701120 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:28.002752+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451706880 unmapped: 77701120 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 67K writes, 264K keys, 67K commit groups, 1.0 writes per commit group, ingest: 0.25 GB, 0.04 MB/s
                                           Cumulative WAL: 67K writes, 24K syncs, 2.72 writes per sync, written: 0.25 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2545 writes, 8403 keys, 2545 commit groups, 1.0 writes per commit group, ingest: 7.16 MB, 0.01 MB/s
                                           Interval WAL: 2545 writes, 1106 syncs, 2.30 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:29.002886+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451706880 unmapped: 77701120 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:30.003041+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451706880 unmapped: 77701120 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:31.003188+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451706880 unmapped: 77701120 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:32.003330+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451706880 unmapped: 77701120 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:33.003467+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451706880 unmapped: 77701120 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:34.003641+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451715072 unmapped: 77692928 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:35.003910+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451715072 unmapped: 77692928 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:36.004115+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451723264 unmapped: 77684736 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:37.004224+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451723264 unmapped: 77684736 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:38.004366+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451731456 unmapped: 77676544 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:39.004536+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451739648 unmapped: 77668352 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:40.004702+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451739648 unmapped: 77668352 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:41.004933+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451739648 unmapped: 77668352 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:42.005114+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451747840 unmapped: 77660160 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:43.005303+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451747840 unmapped: 77660160 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:44.005467+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451747840 unmapped: 77660160 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:45.005643+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451747840 unmapped: 77660160 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:46.005791+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451747840 unmapped: 77660160 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:47.006337+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451756032 unmapped: 77651968 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:48.007049+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451756032 unmapped: 77651968 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:49.007292+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451756032 unmapped: 77651968 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:50.007884+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451764224 unmapped: 77643776 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:51.008313+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451764224 unmapped: 77643776 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:52.008891+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451764224 unmapped: 77643776 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:53.009360+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451764224 unmapped: 77643776 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:54.009610+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451764224 unmapped: 77643776 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:55.010079+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451764224 unmapped: 77643776 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:56.010233+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451764224 unmapped: 77643776 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:57.010479+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451764224 unmapped: 77643776 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:58.010769+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451796992 unmapped: 77611008 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:59.010920+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451805184 unmapped: 77602816 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:00.011155+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451805184 unmapped: 77602816 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:01.011387+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451805184 unmapped: 77602816 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:02.011660+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451805184 unmapped: 77602816 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:03.011812+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451805184 unmapped: 77602816 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:04.012036+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451805184 unmapped: 77602816 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:05.012215+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451805184 unmapped: 77602816 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:06.012441+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451813376 unmapped: 77594624 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:07.013259+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451829760 unmapped: 77578240 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:08.013567+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451829760 unmapped: 77578240 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:09.013712+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451829760 unmapped: 77578240 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:10.041410+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451829760 unmapped: 77578240 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:11.041557+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451829760 unmapped: 77578240 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:12.041770+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451829760 unmapped: 77578240 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:13.041919+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451829760 unmapped: 77578240 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:14.042225+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451846144 unmapped: 77561856 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:15.042403+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451846144 unmapped: 77561856 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:16.042592+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451846144 unmapped: 77561856 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:17.042728+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451846144 unmapped: 77561856 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:18.042928+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451846144 unmapped: 77561856 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:19.043043+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451846144 unmapped: 77561856 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:20.043187+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451854336 unmapped: 77553664 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:21.043322+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451854336 unmapped: 77553664 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:22.043486+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451862528 unmapped: 77545472 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:23.043630+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451870720 unmapped: 77537280 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:24.043774+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451878912 unmapped: 77529088 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:25.043973+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451878912 unmapped: 77529088 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:26.044192+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451878912 unmapped: 77529088 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:27.044302+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451878912 unmapped: 77529088 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:28.044390+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451878912 unmapped: 77529088 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:29.044535+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451878912 unmapped: 77529088 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:30.044708+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451887104 unmapped: 77520896 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:31.044850+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451895296 unmapped: 77512704 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:32.045073+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451895296 unmapped: 77512704 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:33.045225+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451895296 unmapped: 77512704 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:34.045398+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451903488 unmapped: 77504512 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:35.045518+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451911680 unmapped: 77496320 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:36.045708+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451911680 unmapped: 77496320 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:37.045986+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451911680 unmapped: 77496320 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:38.046145+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451919872 unmapped: 77488128 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:39.046347+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451919872 unmapped: 77488128 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:40.046569+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451919872 unmapped: 77488128 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:41.046750+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451919872 unmapped: 77488128 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:42.046949+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451919872 unmapped: 77488128 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:43.047209+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451919872 unmapped: 77488128 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:44.047499+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451919872 unmapped: 77488128 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:45.047658+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451928064 unmapped: 77479936 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:46.047900+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451936256 unmapped: 77471744 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:47.048079+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451952640 unmapped: 77455360 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:48.048181+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451952640 unmapped: 77455360 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:49.048395+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451952640 unmapped: 77455360 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:50.048606+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451960832 unmapped: 77447168 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:51.048960+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451960832 unmapped: 77447168 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:52.049162+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451960832 unmapped: 77447168 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:53.049330+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451960832 unmapped: 77447168 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:54.049563+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451977216 unmapped: 77430784 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:55.049791+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451977216 unmapped: 77430784 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:56.049985+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451977216 unmapped: 77430784 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:57.050195+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451977216 unmapped: 77430784 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:58.050369+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451977216 unmapped: 77430784 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:59.050602+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451977216 unmapped: 77430784 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:00.050802+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451977216 unmapped: 77430784 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:01.051001+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451977216 unmapped: 77430784 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:02.051206+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451985408 unmapped: 77422592 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:03.051375+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451985408 unmapped: 77422592 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:04.051594+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451985408 unmapped: 77422592 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:05.051756+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451985408 unmapped: 77422592 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:06.051943+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451985408 unmapped: 77422592 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:07.052200+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451985408 unmapped: 77422592 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:08.052332+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 451993600 unmapped: 77414400 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:09.052541+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2200000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c14f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452001792 unmapped: 77406208 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:10.052729+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452009984 unmapped: 77398016 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 347.150909424s of 347.194274902s, submitted: 14
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:11.052878+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452018176 unmapped: 77389824 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:12.053005+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452042752 unmapped: 77365248 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:13.053139+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452091904 unmapped: 77316096 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:14.053326+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452149248 unmapped: 77258752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:15.053453+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452149248 unmapped: 77258752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:16.053588+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452149248 unmapped: 77258752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:17.053737+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452149248 unmapped: 77258752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:18.053878+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452149248 unmapped: 77258752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:19.054049+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452149248 unmapped: 77258752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:20.054191+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452149248 unmapped: 77258752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:21.054348+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452149248 unmapped: 77258752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:22.054494+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452149248 unmapped: 77258752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:23.054656+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452149248 unmapped: 77258752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:24.054984+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452149248 unmapped: 77258752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:25.055154+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452149248 unmapped: 77258752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:26.055296+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452149248 unmapped: 77258752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:27.055481+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452149248 unmapped: 77258752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:28.055627+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452149248 unmapped: 77258752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:29.055779+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452149248 unmapped: 77258752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:30.055884+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452149248 unmapped: 77258752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:31.055961+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452149248 unmapped: 77258752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:32.056086+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452149248 unmapped: 77258752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:33.056231+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452149248 unmapped: 77258752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:34.056471+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452149248 unmapped: 77258752 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:35.056581+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452157440 unmapped: 77250560 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:36.056747+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452165632 unmapped: 77242368 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:37.056879+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452165632 unmapped: 77242368 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:38.057023+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452165632 unmapped: 77242368 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:39.057142+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452165632 unmapped: 77242368 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:40.057299+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452165632 unmapped: 77242368 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:41.057555+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452165632 unmapped: 77242368 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:42.057687+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452173824 unmapped: 77234176 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:43.057813+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452182016 unmapped: 77225984 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:44.057999+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452182016 unmapped: 77225984 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:45.058128+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452182016 unmapped: 77225984 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:46.058255+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452182016 unmapped: 77225984 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:47.058384+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452182016 unmapped: 77225984 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:48.058559+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452182016 unmapped: 77225984 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:49.058686+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452182016 unmapped: 77225984 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:50.058921+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452190208 unmapped: 77217792 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:51.059076+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452190208 unmapped: 77217792 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:52.059256+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452190208 unmapped: 77217792 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:53.059407+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452190208 unmapped: 77217792 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:54.059583+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452190208 unmapped: 77217792 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:55.059721+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452190208 unmapped: 77217792 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:56.059896+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452190208 unmapped: 77217792 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:57.060063+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452206592 unmapped: 77201408 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:58.060217+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452214784 unmapped: 77193216 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:59.060350+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452214784 unmapped: 77193216 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:00.060448+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452214784 unmapped: 77193216 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:01.060606+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452214784 unmapped: 77193216 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:02.060764+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452214784 unmapped: 77193216 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:03.060901+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452214784 unmapped: 77193216 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:04.061053+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452214784 unmapped: 77193216 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:05.061178+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452214784 unmapped: 77193216 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:06.061288+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452222976 unmapped: 77185024 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:07.061445+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452222976 unmapped: 77185024 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:08.061567+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452222976 unmapped: 77185024 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:09.061706+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452222976 unmapped: 77185024 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:10.061918+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452231168 unmapped: 77176832 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:11.062098+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452231168 unmapped: 77176832 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:12.062245+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452231168 unmapped: 77176832 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:13.062423+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452239360 unmapped: 77168640 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:14.062633+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452247552 unmapped: 77160448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:15.062816+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452247552 unmapped: 77160448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:16.062995+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452247552 unmapped: 77160448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:17.063124+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452247552 unmapped: 77160448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:18.063307+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452247552 unmapped: 77160448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:19.063482+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452247552 unmapped: 77160448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:20.063620+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452247552 unmapped: 77160448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:21.063811+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452247552 unmapped: 77160448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:22.063983+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452255744 unmapped: 77152256 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:23.064130+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452255744 unmapped: 77152256 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:24.064322+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452255744 unmapped: 77152256 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:25.064544+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452255744 unmapped: 77152256 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:26.064670+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452255744 unmapped: 77152256 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:27.064985+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452255744 unmapped: 77152256 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:28.065175+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452255744 unmapped: 77152256 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:29.065322+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452255744 unmapped: 77152256 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:30.065487+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452263936 unmapped: 77144064 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:31.065677+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452263936 unmapped: 77144064 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:32.065910+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452272128 unmapped: 77135872 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:33.066041+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:34.066249+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452272128 unmapped: 77135872 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:35.066488+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452280320 unmapped: 77127680 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:36.066688+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452280320 unmapped: 77127680 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:37.066947+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452280320 unmapped: 77127680 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:38.067141+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452280320 unmapped: 77127680 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:39.067294+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452288512 unmapped: 77119488 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:40.067487+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452288512 unmapped: 77119488 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:41.067791+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452288512 unmapped: 77119488 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:42.068014+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452288512 unmapped: 77119488 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:43.068240+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452288512 unmapped: 77119488 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:44.068509+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452288512 unmapped: 77119488 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:45.068699+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452288512 unmapped: 77119488 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:46.068908+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452288512 unmapped: 77119488 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:47.069045+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452296704 unmapped: 77111296 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:48.069198+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452313088 unmapped: 77094912 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:49.069399+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452313088 unmapped: 77094912 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:50.069558+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452313088 unmapped: 77094912 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:51.069720+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452313088 unmapped: 77094912 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:52.069970+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452313088 unmapped: 77094912 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:53.070152+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452313088 unmapped: 77094912 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:54.070355+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452321280 unmapped: 77086720 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:55.070570+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452321280 unmapped: 77086720 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:56.070768+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452321280 unmapped: 77086720 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:57.070983+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452321280 unmapped: 77086720 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:58.071116+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452321280 unmapped: 77086720 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:59.071335+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452321280 unmapped: 77086720 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:00.071515+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452329472 unmapped: 77078528 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:01.071742+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452329472 unmapped: 77078528 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:02.071929+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452345856 unmapped: 77062144 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:03.072087+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452345856 unmapped: 77062144 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:04.072320+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452345856 unmapped: 77062144 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:05.072493+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452345856 unmapped: 77062144 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:06.072666+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452345856 unmapped: 77062144 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:07.072896+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452354048 unmapped: 77053952 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:08.073052+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452354048 unmapped: 77053952 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:09.073289+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452354048 unmapped: 77053952 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:10.073443+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452362240 unmapped: 77045760 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:11.073596+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452362240 unmapped: 77045760 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:12.073788+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452362240 unmapped: 77045760 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:13.074001+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452362240 unmapped: 77045760 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:14.074203+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452370432 unmapped: 77037568 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:15.074394+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452370432 unmapped: 77037568 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:16.074640+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452370432 unmapped: 77037568 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:17.074869+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452386816 unmapped: 77021184 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:18.075027+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452395008 unmapped: 77012992 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:19.075199+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452403200 unmapped: 77004800 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:20.075364+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452403200 unmapped: 77004800 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:21.075568+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452403200 unmapped: 77004800 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:22.075712+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452403200 unmapped: 77004800 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:23.075910+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452403200 unmapped: 77004800 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:24.076118+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452403200 unmapped: 77004800 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:25.076285+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452403200 unmapped: 77004800 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:26.076439+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452411392 unmapped: 76996608 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:27.076645+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452411392 unmapped: 76996608 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:28.076901+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452411392 unmapped: 76996608 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:29.077071+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452411392 unmapped: 76996608 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:30.077257+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452411392 unmapped: 76996608 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:31.077473+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452419584 unmapped: 76988416 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:32.078568+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452419584 unmapped: 76988416 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:33.078729+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452419584 unmapped: 76988416 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:34.078897+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452427776 unmapped: 76980224 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:35.079113+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452444160 unmapped: 76963840 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:36.079261+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452444160 unmapped: 76963840 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:37.079952+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452444160 unmapped: 76963840 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:38.080095+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452444160 unmapped: 76963840 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:39.080696+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452444160 unmapped: 76963840 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:40.081197+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452444160 unmapped: 76963840 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:41.081682+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452452352 unmapped: 76955648 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:42.082157+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452452352 unmapped: 76955648 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:43.082568+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452452352 unmapped: 76955648 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:44.083022+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452452352 unmapped: 76955648 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:45.083207+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452452352 unmapped: 76955648 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:46.083403+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452452352 unmapped: 76955648 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:47.083673+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452460544 unmapped: 76947456 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:48.083979+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452460544 unmapped: 76947456 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:49.084260+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452468736 unmapped: 76939264 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:50.084507+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452476928 unmapped: 76931072 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:51.084646+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452476928 unmapped: 76931072 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:52.084913+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452485120 unmapped: 76922880 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:53.085171+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452485120 unmapped: 76922880 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:54.085381+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452485120 unmapped: 76922880 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:55.085644+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452485120 unmapped: 76922880 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:56.085903+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452485120 unmapped: 76922880 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:57.086150+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452485120 unmapped: 76922880 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:58.086622+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452493312 unmapped: 76914688 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:59.086978+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452493312 unmapped: 76914688 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:00.087228+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452493312 unmapped: 76914688 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:01.087374+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452493312 unmapped: 76914688 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:02.087596+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452493312 unmapped: 76914688 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:03.087868+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452493312 unmapped: 76914688 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:04.088131+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452493312 unmapped: 76914688 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:05.088308+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452501504 unmapped: 76906496 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:06.088486+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452517888 unmapped: 76890112 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:07.088685+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452517888 unmapped: 76890112 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:08.088871+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452517888 unmapped: 76890112 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:09.089115+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452517888 unmapped: 76890112 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:10.089325+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452517888 unmapped: 76890112 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:11.089466+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452517888 unmapped: 76890112 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:12.089624+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452517888 unmapped: 76890112 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:13.089891+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452517888 unmapped: 76890112 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:14.090118+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452534272 unmapped: 76873728 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:15.090294+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452534272 unmapped: 76873728 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:16.090449+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452542464 unmapped: 76865536 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:17.090583+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452542464 unmapped: 76865536 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:18.090719+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452550656 unmapped: 76857344 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:19.090902+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452550656 unmapped: 76857344 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:20.091091+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452550656 unmapped: 76857344 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:21.091237+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452550656 unmapped: 76857344 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:22.091563+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452567040 unmapped: 76840960 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:23.091718+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452567040 unmapped: 76840960 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:24.091907+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452567040 unmapped: 76840960 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:25.092050+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452567040 unmapped: 76840960 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:26.092212+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452575232 unmapped: 76832768 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:27.092389+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452575232 unmapped: 76832768 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:28.092573+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452575232 unmapped: 76832768 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:29.092784+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452575232 unmapped: 76832768 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:30.093069+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452575232 unmapped: 76832768 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:31.093223+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452575232 unmapped: 76832768 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:32.093359+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452583424 unmapped: 76824576 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:33.093538+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452583424 unmapped: 76824576 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:34.093804+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452583424 unmapped: 76824576 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:35.094155+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452583424 unmapped: 76824576 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:36.094343+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452591616 unmapped: 76816384 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:37.094509+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452599808 unmapped: 76808192 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:38.094644+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452608000 unmapped: 76800000 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:39.094921+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452608000 unmapped: 76800000 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:40.095070+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452608000 unmapped: 76800000 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:41.095225+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452608000 unmapped: 76800000 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:42.095441+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452608000 unmapped: 76800000 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:43.095671+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452608000 unmapped: 76800000 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:44.095910+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452608000 unmapped: 76800000 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:45.096052+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452608000 unmapped: 76800000 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:46.096232+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452616192 unmapped: 76791808 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:47.096403+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452616192 unmapped: 76791808 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:48.096568+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452616192 unmapped: 76791808 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:49.096735+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452616192 unmapped: 76791808 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:50.096913+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452616192 unmapped: 76791808 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:51.097086+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452616192 unmapped: 76791808 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:52.097327+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452616192 unmapped: 76791808 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:53.097537+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452624384 unmapped: 76783616 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:54.097757+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452632576 unmapped: 76775424 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:55.097907+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452640768 unmapped: 76767232 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:56.098074+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452640768 unmapped: 76767232 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:57.098253+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452640768 unmapped: 76767232 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:58.098414+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452648960 unmapped: 76759040 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:59.098586+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452648960 unmapped: 76759040 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:00.098758+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452648960 unmapped: 76759040 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:01.098990+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452648960 unmapped: 76759040 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:02.099354+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452657152 unmapped: 76750848 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:03.099545+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452657152 unmapped: 76750848 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:04.099752+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452657152 unmapped: 76750848 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:05.099974+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452657152 unmapped: 76750848 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:06.100201+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452657152 unmapped: 76750848 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:07.100362+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452657152 unmapped: 76750848 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:08.100576+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452657152 unmapped: 76750848 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:09.100755+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452657152 unmapped: 76750848 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:10.101170+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452673536 unmapped: 76734464 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:11.101358+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452673536 unmapped: 76734464 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:12.101563+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452673536 unmapped: 76734464 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:13.101871+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452673536 unmapped: 76734464 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:14.102155+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452673536 unmapped: 76734464 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:15.102293+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452681728 unmapped: 76726272 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:16.102570+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452681728 unmapped: 76726272 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:17.102787+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452681728 unmapped: 76726272 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:18.102997+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452698112 unmapped: 76709888 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:19.106150+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452698112 unmapped: 76709888 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:20.106340+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452698112 unmapped: 76709888 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:21.106604+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452698112 unmapped: 76709888 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:22.106900+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452706304 unmapped: 76701696 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:23.107098+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452706304 unmapped: 76701696 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:24.107337+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452706304 unmapped: 76701696 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:25.107504+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452706304 unmapped: 76701696 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:26.107706+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452714496 unmapped: 76693504 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:27.107895+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452722688 unmapped: 76685312 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:28.108048+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452722688 unmapped: 76685312 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:29.108241+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452722688 unmapped: 76685312 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:30.108450+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452730880 unmapped: 76677120 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:31.108614+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452730880 unmapped: 76677120 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:32.108905+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452730880 unmapped: 76677120 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:33.109096+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452730880 unmapped: 76677120 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:34.109293+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452739072 unmapped: 76668928 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:35.109447+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452739072 unmapped: 76668928 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:36.109595+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452739072 unmapped: 76668928 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:37.109743+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452739072 unmapped: 76668928 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:38.109907+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452739072 unmapped: 76668928 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:39.110048+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452739072 unmapped: 76668928 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:40.110178+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452739072 unmapped: 76668928 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:41.110396+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452747264 unmapped: 76660736 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:42.110619+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452763648 unmapped: 76644352 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:43.110787+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452763648 unmapped: 76644352 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:44.111009+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452763648 unmapped: 76644352 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:45.111179+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452763648 unmapped: 76644352 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:46.111432+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452771840 unmapped: 76636160 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:47.111650+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452771840 unmapped: 76636160 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:48.111949+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452771840 unmapped: 76636160 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:49.112128+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452771840 unmapped: 76636160 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:50.112309+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452780032 unmapped: 76627968 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:51.112448+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452780032 unmapped: 76627968 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:52.112656+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452780032 unmapped: 76627968 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:53.112818+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452788224 unmapped: 76619776 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:54.113063+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452796416 unmapped: 76611584 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:55.113196+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452804608 unmapped: 76603392 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:56.113507+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452804608 unmapped: 76603392 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:57.113756+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452804608 unmapped: 76603392 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:58.113937+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452812800 unmapped: 76595200 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:59.114170+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452812800 unmapped: 76595200 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:00.114364+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452812800 unmapped: 76595200 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:01.114575+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452812800 unmapped: 76595200 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:02.114973+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452812800 unmapped: 76595200 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:03.115121+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452812800 unmapped: 76595200 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:04.115439+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452812800 unmapped: 76595200 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:05.115708+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452812800 unmapped: 76595200 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:06.116013+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452812800 unmapped: 76595200 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:07.116318+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452829184 unmapped: 76578816 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:08.147247+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452829184 unmapped: 76578816 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:09.147511+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452829184 unmapped: 76578816 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:10.147938+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452837376 unmapped: 76570624 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:11.148078+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452837376 unmapped: 76570624 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:12.148341+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452837376 unmapped: 76570624 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:13.148490+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452837376 unmapped: 76570624 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:14.148762+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452853760 unmapped: 76554240 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:15.148887+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452861952 unmapped: 76546048 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:16.149043+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452861952 unmapped: 76546048 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:17.149249+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452861952 unmapped: 76546048 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:18.149416+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:19.149589+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452861952 unmapped: 76546048 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:20.149884+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452861952 unmapped: 76546048 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:21.150158+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452861952 unmapped: 76546048 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:22.150363+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452861952 unmapped: 76546048 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:23.150520+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452870144 unmapped: 76537856 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:24.150757+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452870144 unmapped: 76537856 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:25.150953+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452870144 unmapped: 76537856 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:26.151239+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452870144 unmapped: 76537856 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:27.151494+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452870144 unmapped: 76537856 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:28.151641+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452870144 unmapped: 76537856 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:29.151793+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452870144 unmapped: 76537856 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:30.152079+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452886528 unmapped: 76521472 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:31.152393+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452886528 unmapped: 76521472 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:32.152567+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452894720 unmapped: 76513280 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:33.152712+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452894720 unmapped: 76513280 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:34.152929+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452894720 unmapped: 76513280 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:35.153262+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452894720 unmapped: 76513280 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:36.153540+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452894720 unmapped: 76513280 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:37.153792+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452894720 unmapped: 76513280 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:38.154025+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452902912 unmapped: 76505088 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:39.154305+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452902912 unmapped: 76505088 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:40.154442+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452911104 unmapped: 76496896 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:41.154677+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452911104 unmapped: 76496896 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:42.154812+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452911104 unmapped: 76496896 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:43.154992+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452911104 unmapped: 76496896 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:44.155275+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452911104 unmapped: 76496896 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:45.155546+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452911104 unmapped: 76496896 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:46.155736+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452919296 unmapped: 76488704 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:47.155994+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452919296 unmapped: 76488704 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:48.156248+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452919296 unmapped: 76488704 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:49.156481+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452927488 unmapped: 76480512 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:50.156699+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452927488 unmapped: 76480512 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:51.156936+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452927488 unmapped: 76480512 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:52.157147+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452927488 unmapped: 76480512 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:53.157309+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452927488 unmapped: 76480512 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:54.157477+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452943872 unmapped: 76464128 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:55.157682+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452943872 unmapped: 76464128 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:56.157821+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452952064 unmapped: 76455936 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:57.158002+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452952064 unmapped: 76455936 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:58.158180+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452952064 unmapped: 76455936 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:59.158343+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452960256 unmapped: 76447744 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:00.158502+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452960256 unmapped: 76447744 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:01.158676+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452960256 unmapped: 76447744 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:02.158958+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452976640 unmapped: 76431360 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:03.159141+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452976640 unmapped: 76431360 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:04.159351+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452976640 unmapped: 76431360 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:05.159613+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452976640 unmapped: 76431360 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:06.159879+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452976640 unmapped: 76431360 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:07.160025+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452976640 unmapped: 76431360 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:08.160201+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452976640 unmapped: 76431360 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:09.160365+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452976640 unmapped: 76431360 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:10.160553+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452984832 unmapped: 76423168 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:11.160699+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452984832 unmapped: 76423168 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:12.160893+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452984832 unmapped: 76423168 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:13.161037+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452993024 unmapped: 76414976 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:14.161212+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452993024 unmapped: 76414976 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:15.161394+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452993024 unmapped: 76414976 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:16.161630+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 452993024 unmapped: 76414976 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:17.161774+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453001216 unmapped: 76406784 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:18.162010+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453009408 unmapped: 76398592 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:19.162215+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453017600 unmapped: 76390400 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:20.162423+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453017600 unmapped: 76390400 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:21.162560+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453017600 unmapped: 76390400 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:22.162893+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453017600 unmapped: 76390400 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:23.163096+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453017600 unmapped: 76390400 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:24.163420+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453017600 unmapped: 76390400 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:25.163592+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453017600 unmapped: 76390400 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:26.163736+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453033984 unmapped: 76374016 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:27.163862+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453042176 unmapped: 76365824 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:28.164081+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453042176 unmapped: 76365824 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:29.164294+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453042176 unmapped: 76365824 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:30.164454+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453050368 unmapped: 76357632 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:31.164689+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453050368 unmapped: 76357632 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:32.164921+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453050368 unmapped: 76357632 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:33.165081+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453050368 unmapped: 76357632 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:34.165273+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453058560 unmapped: 76349440 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:35.165479+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453066752 unmapped: 76341248 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:36.165625+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453066752 unmapped: 76341248 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:37.165795+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453066752 unmapped: 76341248 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:38.165970+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453074944 unmapped: 76333056 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:39.166151+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453074944 unmapped: 76333056 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:40.166337+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453074944 unmapped: 76333056 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:41.166493+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453074944 unmapped: 76333056 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:42.166653+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453074944 unmapped: 76333056 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:43.166808+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453074944 unmapped: 76333056 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:44.167016+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453074944 unmapped: 76333056 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2025-10-02T13:30:45.167161+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _finish_auth 0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:45.168115+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453074944 unmapped: 76333056 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:46.167398+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453091328 unmapped: 76316672 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:47.167539+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453091328 unmapped: 76316672 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:48.167692+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453091328 unmapped: 76316672 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:49.167888+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453091328 unmapped: 76316672 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:50.168108+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453107712 unmapped: 76300288 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:51.168347+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453107712 unmapped: 76300288 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:52.168623+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453107712 unmapped: 76300288 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:53.168901+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453107712 unmapped: 76300288 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:54.169152+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453107712 unmapped: 76300288 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:55.169406+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453107712 unmapped: 76300288 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:56.169664+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453107712 unmapped: 76300288 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:57.169937+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453107712 unmapped: 76300288 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:58.170102+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453115904 unmapped: 76292096 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:59.170329+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453115904 unmapped: 76292096 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:00.170540+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453124096 unmapped: 76283904 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:01.170743+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453124096 unmapped: 76283904 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:02.170948+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453124096 unmapped: 76283904 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:03.171130+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453124096 unmapped: 76283904 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:04.171332+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453124096 unmapped: 76283904 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:05.171476+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453124096 unmapped: 76283904 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:06.171617+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453140480 unmapped: 76267520 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:07.171872+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453140480 unmapped: 76267520 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:08.172042+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453140480 unmapped: 76267520 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:09.172184+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453140480 unmapped: 76267520 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:10.172408+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453140480 unmapped: 76267520 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:11.172565+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453148672 unmapped: 76259328 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:12.172748+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453148672 unmapped: 76259328 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:13.172948+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453148672 unmapped: 76259328 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:14.173126+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453156864 unmapped: 76251136 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:15.173332+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453156864 unmapped: 76251136 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:16.173517+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453156864 unmapped: 76251136 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:17.173676+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453156864 unmapped: 76251136 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:18.173926+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453165056 unmapped: 76242944 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:19.174120+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453165056 unmapped: 76242944 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:20.174363+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453165056 unmapped: 76242944 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:21.174537+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453181440 unmapped: 76226560 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:22.174695+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453181440 unmapped: 76226560 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:23.174834+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453189632 unmapped: 76218368 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:24.175032+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453189632 unmapped: 76218368 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:25.175115+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453189632 unmapped: 76218368 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:26.175254+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453189632 unmapped: 76218368 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:27.175387+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453189632 unmapped: 76218368 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:28.175785+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453189632 unmapped: 76218368 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:29.176159+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453189632 unmapped: 76218368 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:30.176372+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453206016 unmapped: 76201984 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:31.176770+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453206016 unmapped: 76201984 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:32.176928+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453206016 unmapped: 76201984 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:33.177096+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453206016 unmapped: 76201984 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:34.177292+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453206016 unmapped: 76201984 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:35.177447+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453206016 unmapped: 76201984 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:36.177555+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453206016 unmapped: 76201984 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:37.177671+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453214208 unmapped: 76193792 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:38.177791+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453214208 unmapped: 76193792 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:39.177918+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453222400 unmapped: 76185600 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:40.178066+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453222400 unmapped: 76185600 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:41.178219+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453222400 unmapped: 76185600 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:42.178399+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453222400 unmapped: 76185600 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:43.178568+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453222400 unmapped: 76185600 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:44.178795+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453222400 unmapped: 76185600 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:45.178935+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453230592 unmapped: 76177408 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:46.179035+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453238784 unmapped: 76169216 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:47.179245+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453238784 unmapped: 76169216 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:48.179430+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453238784 unmapped: 76169216 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:49.179583+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453238784 unmapped: 76169216 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:50.179715+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453238784 unmapped: 76169216 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:51.179870+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453238784 unmapped: 76169216 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:52.179991+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453238784 unmapped: 76169216 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:53.180135+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453238784 unmapped: 76169216 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:54.180328+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453271552 unmapped: 76136448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:55.180498+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453271552 unmapped: 76136448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:56.180642+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453271552 unmapped: 76136448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:57.180786+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453271552 unmapped: 76136448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:58.181039+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453271552 unmapped: 76136448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:59.181182+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453271552 unmapped: 76136448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:00.181347+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453271552 unmapped: 76136448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:01.181491+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453271552 unmapped: 76136448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:02.181678+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453271552 unmapped: 76136448 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:03.181822+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453279744 unmapped: 76128256 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:04.182034+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453279744 unmapped: 76128256 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:05.182205+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453279744 unmapped: 76128256 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:06.182419+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453287936 unmapped: 76120064 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:07.182560+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453287936 unmapped: 76120064 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:08.182809+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453287936 unmapped: 76120064 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:09.183117+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453287936 unmapped: 76120064 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:10.183266+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453287936 unmapped: 76120064 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:11.183403+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453287936 unmapped: 76120064 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:12.183520+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453287936 unmapped: 76120064 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:13.183644+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453287936 unmapped: 76120064 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:14.183865+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453296128 unmapped: 76111872 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:15.184065+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453296128 unmapped: 76111872 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:16.184144+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453296128 unmapped: 76111872 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:17.184288+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453296128 unmapped: 76111872 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:18.184400+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453312512 unmapped: 76095488 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:19.184564+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453312512 unmapped: 76095488 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:20.184737+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453312512 unmapped: 76095488 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:21.184895+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453320704 unmapped: 76087296 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:22.185100+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453320704 unmapped: 76087296 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:23.185234+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453320704 unmapped: 76087296 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:24.185413+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453320704 unmapped: 76087296 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:25.185598+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453328896 unmapped: 76079104 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:26.185779+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453337088 unmapped: 76070912 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:27.186149+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453337088 unmapped: 76070912 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:28.186418+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Cumulative writes: 68K writes, 265K keys, 68K commit groups, 1.0 writes per commit group, ingest: 0.25 GB, 0.04 MB/s
                                           Cumulative WAL: 68K writes, 25K syncs, 2.71 writes per sync, written: 0.25 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 513 writes, 779 keys, 513 commit groups, 1.0 writes per commit group, ingest: 0.25 MB, 0.00 MB/s
                                           Interval WAL: 513 writes, 256 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453337088 unmapped: 76070912 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:29.186722+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453337088 unmapped: 76070912 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:30.187017+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453337088 unmapped: 76070912 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:31.187244+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453337088 unmapped: 76070912 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:32.187506+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453337088 unmapped: 76070912 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:33.187661+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453337088 unmapped: 76070912 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:34.187908+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453353472 unmapped: 76054528 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: mgrc ms_handle_reset ms_handle_reset con 0x55bcdada6800
Oct 02 13:34:11 compute-0 ceph-osd[83986]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3158772141
Oct 02 13:34:11 compute-0 ceph-osd[83986]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3158772141,v1:192.168.122.100:6801/3158772141]
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: get_auth_request con 0x55bcd987b800 auth_method 0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: mgrc handle_mgr_configure stats_period=5
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:35.188093+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453369856 unmapped: 76038144 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:36.188240+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd521e000 session 0x55bcd7805860
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd76af000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcd5350c00 session 0x55bcd79d70e0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5513000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 ms_handle_reset con 0x55bcdfcda000 session 0x55bcd820e960
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: handle_auth_request added challenge on 0x55bcd5351000
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453369856 unmapped: 76038144 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:37.188382+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453369856 unmapped: 76038144 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:38.188545+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453378048 unmapped: 76029952 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:39.188736+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453378048 unmapped: 76029952 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:40.188995+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453378048 unmapped: 76029952 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:41.189150+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453378048 unmapped: 76029952 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:42.189320+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453378048 unmapped: 76029952 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:43.189461+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453386240 unmapped: 76021760 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:44.189620+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453386240 unmapped: 76021760 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:45.189752+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453386240 unmapped: 76021760 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:46.189924+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453386240 unmapped: 76021760 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:47.190142+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453386240 unmapped: 76021760 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:48.190307+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453386240 unmapped: 76021760 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:49.190522+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453386240 unmapped: 76021760 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:50.190736+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453394432 unmapped: 76013568 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:51.190949+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453394432 unmapped: 76013568 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:52.191144+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453394432 unmapped: 76013568 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:53.191324+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453394432 unmapped: 76013568 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:54.191494+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453394432 unmapped: 76013568 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:55.191688+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453394432 unmapped: 76013568 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:56.191889+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453394432 unmapped: 76013568 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:57.192096+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453402624 unmapped: 76005376 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:58.192253+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453410816 unmapped: 75997184 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:59.192422+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453419008 unmapped: 75988992 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:00.192559+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453419008 unmapped: 75988992 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:01.192758+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453419008 unmapped: 75988992 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:02.192856+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453419008 unmapped: 75988992 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:03.193044+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453419008 unmapped: 75988992 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:04.193252+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453419008 unmapped: 75988992 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:05.193413+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453419008 unmapped: 75988992 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:06.193537+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453435392 unmapped: 75972608 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:07.193695+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453435392 unmapped: 75972608 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:08.193907+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453435392 unmapped: 75972608 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:09.194032+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453435392 unmapped: 75972608 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:10.194181+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453435392 unmapped: 75972608 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:11.194346+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453435392 unmapped: 75972608 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:12.194529+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453435392 unmapped: 75972608 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:13.194696+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453435392 unmapped: 75972608 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:14.194867+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453443584 unmapped: 75964416 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:15.195054+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453451776 unmapped: 75956224 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:16.195215+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453451776 unmapped: 75956224 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:17.195379+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453451776 unmapped: 75956224 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:18.195546+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453451776 unmapped: 75956224 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:19.195704+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453451776 unmapped: 75956224 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:20.195850+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453451776 unmapped: 75956224 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:21.195974+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453451776 unmapped: 75956224 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:22.196120+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453468160 unmapped: 75939840 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:23.196332+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453468160 unmapped: 75939840 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:24.196542+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453468160 unmapped: 75939840 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:25.196672+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453468160 unmapped: 75939840 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:26.196962+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453468160 unmapped: 75939840 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:27.197143+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453468160 unmapped: 75939840 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:28.197271+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453468160 unmapped: 75939840 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:29.197468+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453468160 unmapped: 75939840 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:30.197616+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453484544 unmapped: 75923456 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:31.197789+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453484544 unmapped: 75923456 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:32.197958+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:33.198117+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453484544 unmapped: 75923456 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:34.198269+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453484544 unmapped: 75923456 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:35.198406+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453484544 unmapped: 75923456 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:36.198524+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453484544 unmapped: 75923456 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:11 compute-0 ceph-osd[83986]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:11 compute-0 ceph-osd[83986]: bluestore.MempoolThread(0x55bcd3c1bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4677922 data_alloc: 218103808 data_used: 7786496
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:37.198682+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453484544 unmapped: 75923456 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:38.198814+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453500928 unmapped: 75907072 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: do_command 'config diff' '{prefix=config diff}'
Oct 02 13:34:11 compute-0 ceph-osd[83986]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 02 13:34:11 compute-0 ceph-osd[83986]: do_command 'config show' '{prefix=config show}'
Oct 02 13:34:11 compute-0 ceph-osd[83986]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 02 13:34:11 compute-0 ceph-osd[83986]: do_command 'counter dump' '{prefix=counter dump}'
Oct 02 13:34:11 compute-0 ceph-osd[83986]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:39.198926+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453402624 unmapped: 76005376 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: do_command 'counter schema' '{prefix=counter schema}'
Oct 02 13:34:11 compute-0 ceph-osd[83986]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1df0000/0x0/0x1bfc00000, data 0x167a8f7/0x18ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1c55f9c7), peers [0,2] op hist [])
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:40.199062+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453500928 unmapped: 75907072 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: tick
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_tickets
Oct 02 13:34:11 compute-0 ceph-osd[83986]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:41.199193+0000)
Oct 02 13:34:11 compute-0 ceph-osd[83986]: prioritycache tune_memory target: 4294967296 mapped: 453115904 unmapped: 76292096 heap: 529408000 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:11 compute-0 ceph-osd[83986]: do_command 'log dump' '{prefix=log dump}'
Oct 02 13:34:11 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.40116 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:11 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:34:11 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:34:11 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:11.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:34:11 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 13:34:11 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3977338050' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:34:11 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48725 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.40134 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:12 compute-0 nova_compute[257802]: 2025-10-02 13:34:12.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 02 13:34:12 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4286888802' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48746 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47620 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.40146 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mon[73607]: pgmap v4096: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:12 compute-0 ceph-mon[73607]: from='client.47581 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mon[73607]: from='client.48713 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/303777078' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2736873891' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mon[73607]: from='client.40116 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3977338050' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3850657111' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mon[73607]: from='client.48725 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/38276567' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1111808518' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mon[73607]: from='client.40134 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4286888802' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mon[73607]: from='client.48746 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4134743442' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1045062937' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct 02 13:34:12 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3211259976' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48761 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:34:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:34:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:34:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:34:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:34:12 compute-0 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:34:12 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.40164 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47635 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:12 compute-0 crontab[442300]: (root) LIST (root)
Oct 02 13:34:13 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48779 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.40182 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:34:13 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47653 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4097: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:13 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48794 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:34:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:13.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct 02 13:34:13 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4277129817' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 13:34:13 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:34:13 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:34:13 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:13.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:34:13 compute-0 ceph-mon[73607]: from='client.47620 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mon[73607]: from='client.40146 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3211259976' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mon[73607]: from='client.48761 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1708908358' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mon[73607]: from='client.40164 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3587561828' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mon[73607]: from='client.47635 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mon[73607]: from='client.48779 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mon[73607]: from='client.40182 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1670926851' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mon[73607]: from='client.47653 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4277129817' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47665 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct 02 13:34:13 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/636451827' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.40209 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:13 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T13:34:13.920+0000 7f4c74221640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:34:13 compute-0 ceph-mgr[73901]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:34:14 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47674 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct 02 13:34:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3429002303' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48821 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:14 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T13:34:14.313+0000 7f4c74221640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:34:14 compute-0 ceph-mgr[73901]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:34:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct 02 13:34:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3427857477' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 13:34:14 compute-0 nova_compute[257802]: 2025-10-02 13:34:14.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:14 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47692 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct 02 13:34:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/508445388' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73607]: pgmap v4097: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:14 compute-0 ceph-mon[73607]: from='client.48794 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73607]: from='client.47665 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/206892401' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2273320171' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/636451827' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73607]: from='client.40209 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73607]: from='client.47674 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/177922527' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3429002303' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4278751497' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3427857477' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2037802205' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/508445388' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/771179353' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct 02 13:34:14 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1610952143' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47713 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct 02 13:34:15 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1796899002' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct 02 13:34:15 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1636573773' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4098: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:15 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47725 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:34:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:15.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct 02 13:34:15 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2311911710' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct 02 13:34:15 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2546973789' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 13:34:15 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:34:15 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:34:15 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:15.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:34:15 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Oct 02 13:34:15 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4139677018' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47740 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73607]: from='client.48821 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73607]: from='client.47692 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3909806530' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1610952143' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2154797546' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73607]: from='client.47713 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1796899002' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3386432754' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1636573773' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2925061133' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3831175002' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2311911710' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2546973789' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1496143167' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4099705086' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct 02 13:34:16 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3692295900' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 02 13:34:16 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3368176881' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 13:34:16 compute-0 systemd[1]: Starting Hostname Service...
Oct 02 13:34:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Oct 02 13:34:16 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/299111121' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 13:34:16 compute-0 systemd[1]: Started Hostname Service.
Oct 02 13:34:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Oct 02 13:34:16 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1897903708' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47758 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mgr[73901]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:34:16 compute-0 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-0-fmcstn[73897]: 2025-10-02T13:34:16.641+0000 7f4c74221640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:34:16 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Oct 02 13:34:16 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1405942524' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73607]: pgmap v4098: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:16 compute-0 ceph-mon[73607]: from='client.47725 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4139677018' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73607]: from='client.47740 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3692295900' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3299708536' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3651288603' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3368176881' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2866728388' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/299111121' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2879993378' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1897903708' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1087908314' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2093367489' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1302999161' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1405942524' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.40320 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct 02 13:34:17 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1070031655' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 13:34:17 compute-0 nova_compute[257802]: 2025-10-02 13:34:17.097 2 DEBUG oslo_service.periodic_task [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:34:17 compute-0 nova_compute[257802]: 2025-10-02 13:34:17.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:17 compute-0 nova_compute[257802]: 2025-10-02 13:34:17.151 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:34:17 compute-0 nova_compute[257802]: 2025-10-02 13:34:17.151 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:34:17 compute-0 nova_compute[257802]: 2025-10-02 13:34:17.152 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:34:17 compute-0 nova_compute[257802]: 2025-10-02 13:34:17.152 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:34:17 compute-0 nova_compute[257802]: 2025-10-02 13:34:17.152 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:34:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct 02 13:34:17 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3299647270' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 13:34:17 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.40335 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:17 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4099: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:17 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48917 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:17 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.40341 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:34:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:17.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:17 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:34:17 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2205445886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:34:17 compute-0 nova_compute[257802]: 2025-10-02 13:34:17.589 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:34:17 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.40353 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:17 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:34:17 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:17 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:17.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:17 compute-0 nova_compute[257802]: 2025-10-02 13:34:17.732 2 WARNING nova.virt.libvirt.driver [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:34:17 compute-0 nova_compute[257802]: 2025-10-02 13:34:17.733 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3718MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:34:17 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48941 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:17 compute-0 nova_compute[257802]: 2025-10-02 13:34:17.734 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:34:17 compute-0 nova_compute[257802]: 2025-10-02 13:34:17.735 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:34:17 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48947 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:17 compute-0 nova_compute[257802]: 2025-10-02 13:34:17.810 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:34:17 compute-0 nova_compute[257802]: 2025-10-02 13:34:17.810 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:34:17 compute-0 nova_compute[257802]: 2025-10-02 13:34:17.970 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:34:18 compute-0 ceph-mon[73607]: from='client.47758 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mon[73607]: from='client.40320 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/744023568' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1070031655' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3299647270' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mon[73607]: from='client.40335 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/178438410' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2205445886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/182568512' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2380963739' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.40377 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48962 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:18 compute-0 sudo[443089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:34:18 compute-0 sudo[443089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:18 compute-0 sudo[443089]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Oct 02 13:34:18 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1074109332' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:34:18 compute-0 podman[443133]: 2025-10-02 13:34:18.317655734 +0000 UTC m=+0.111214871 container health_status a79dd716af46f161b41d5c747f1e8a9b88a4dbcd1b9aef71228bc5abeb318bd4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2)
Oct 02 13:34:18 compute-0 sudo[443153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:34:18 compute-0 sudo[443153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:18 compute-0 sudo[443153]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:18 compute-0 podman[443134]: 2025-10-02 13:34:18.336427183 +0000 UTC m=+0.115481836 container health_status b547ad6e26e1522db6726741cccc7a9710dde7cf34cbd0e2de3292ed21f92676 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:34:18 compute-0 podman[443132]: 2025-10-02 13:34:18.336526076 +0000 UTC m=+0.130075863 container health_status 3efe123903f1f5e8456b2bcec2616b21d890bfd0217c3a8fb42c1a90624ff9ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:34:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:34:18 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1784206430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.40395 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:18 compute-0 nova_compute[257802]: 2025-10-02 13:34:18.436 2 DEBUG oslo_concurrency.processutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:34:18 compute-0 nova_compute[257802]: 2025-10-02 13:34:18.442 2 DEBUG nova.compute.provider_tree [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed in ProviderTree for provider: a293a24c-b5ed-43d1-8783-f02da4f75ad4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:34:18 compute-0 nova_compute[257802]: 2025-10-02 13:34:18.468 2 DEBUG nova.scheduler.client.report [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Inventory has not changed for provider a293a24c-b5ed-43d1-8783-f02da4f75ad4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:34:18 compute-0 nova_compute[257802]: 2025-10-02 13:34:18.470 2 DEBUG nova.compute.resource_tracker [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:34:18 compute-0 nova_compute[257802]: 2025-10-02 13:34:18.470 2 DEBUG oslo_concurrency.lockutils [None req-8b770be8-46d6-4003-8bcd-82b5f4278dcd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:34:18 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48974 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Oct 02 13:34:18 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1623500499' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.40410 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48983 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 02 13:34:19 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4131507089' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.40422 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73607]: pgmap v4099: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:19 compute-0 ceph-mon[73607]: from='client.48917 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73607]: from='client.40341 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73607]: from='client.40353 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73607]: from='client.48941 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73607]: from='client.48947 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73607]: from='client.40377 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1043042639' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73607]: from='client.48962 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4053814205' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1074109332' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1784206430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/18255196' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/23752323' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1623500499' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3114995335' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2127147942' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/854352252' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.49001 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4100: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:34:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 13:34:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:19.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 13:34:19 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Oct 02 13:34:19 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1094037088' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 13:34:19 compute-0 nova_compute[257802]: 2025-10-02 13:34:19.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:19 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.40440 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.49019 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:19 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:34:19 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:19 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:19.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:20 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.49034 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:20 compute-0 ceph-mon[73607]: from='client.40395 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73607]: from='client.48974 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73607]: from='client.40410 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73607]: from='client.48983 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4131507089' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3763429194' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73607]: from='client.40422 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73607]: from='client.49001 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3509026750' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1094037088' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3202899176' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/92235263' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2222229722' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:20 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/437354534' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/577632163' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:20 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47875 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Oct 02 13:34:20 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1337308618' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47881 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47893 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.40485 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47902 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Oct 02 13:34:21 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/606578659' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.49079 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mon[73607]: pgmap v4100: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:21 compute-0 ceph-mon[73607]: from='client.40440 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mon[73607]: from='client.49019 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mon[73607]: from='client.49034 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mon[73607]: from='client.47875 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1337308618' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:21 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:21 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:21 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3580891169' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/606578659' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47914 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4101: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:34:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:21.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Oct 02 13:34:21 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2636147978' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47926 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:21 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:34:21 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:21 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:21.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:21 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Oct 02 13:34:21 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2937600155' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47938 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:22 compute-0 nova_compute[257802]: 2025-10-02 13:34:22.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:22 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Oct 02 13:34:22 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4194967873' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mon[73607]: from='client.47881 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mon[73607]: from='client.47893 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mon[73607]: from='client.40485 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mon[73607]: from='client.47902 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mon[73607]: from='client.49079 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mon[73607]: from='client.47914 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2636147978' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/2813790633' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1165872989' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2937600155' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1192991307' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1726994292' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47944 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.40539 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.47959 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Oct 02 13:34:23 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/818318450' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:34:23 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4102: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:34:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:23.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:23 compute-0 ceph-mon[73607]: pgmap v4101: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:23 compute-0 ceph-mon[73607]: from='client.47926 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73607]: from='client.47938 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/4194967873' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73607]: from='client.47944 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3762790924' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73607]: from='client.40539 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2296587244' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73607]: from='client.47959 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1426765085' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/818318450' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:23 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3865028971' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:23 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.49133 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Oct 02 13:34:23 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2135803504' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 13:34:23 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:34:23 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:23 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:23.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:23 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.40587 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:24 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48001 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:24 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Oct 02 13:34:24 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/31582533' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 02 13:34:24 compute-0 nova_compute[257802]: 2025-10-02 13:34:24.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:24 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.40605 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:24 compute-0 ceph-mon[73607]: pgmap v4102: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:24 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:24 compute-0 ceph-mon[73607]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:24 compute-0 ceph-mon[73607]: from='client.49133 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2135803504' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 13:34:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4069982995' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 13:34:24 compute-0 ceph-mon[73607]: from='client.40587 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3215246089' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 13:34:24 compute-0 ceph-mon[73607]: from='client.48001 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/31582533' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 02 13:34:24 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3783214530' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 13:34:24 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.49160 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.40611 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4103: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Oct 02 13:34:25 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1075390762' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 02 13:34:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:34:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:25.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:25 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:34:25 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:25 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:25.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:25 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.49178 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mon[73607]: from='client.40605 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/564563942' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mon[73607]: from='client.49160 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mon[73607]: from='client.40611 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4268324709' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1277625248' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1075390762' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1435986497' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Oct 02 13:34:25 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1472157431' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.49187 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.40638 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48037 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.40644 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:26 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:34:26 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:26 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:34:26 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:26 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:34:26 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:26 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:34:26 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:26 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:34:26 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:26 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:34:26 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:26 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:34:26 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:26 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:34:26 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:26 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:34:26 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:26 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:34:26 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:26 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:34:26 compute-0 ceph-mon[73607]: pgmap v4103: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:26 compute-0 ceph-mon[73607]: from='client.49178 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1472157431' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/727963643' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mon[73607]: from='client.49187 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mon[73607]: from='client.40638 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/1628396375' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Oct 02 13:34:26 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3197478318' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 02 13:34:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:34:27.020 158261 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:34:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:34:27.021 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:34:27 compute-0 ovn_metadata_agent[158256]: 2025-10-02 13:34:27.021 158261 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:34:27 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.49205 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:27 compute-0 nova_compute[257802]: 2025-10-02 13:34:27.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:27 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Oct 02 13:34:27 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1271877846' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 02 13:34:27 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4104: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:34:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:27.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:27 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.49211 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:27 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:27 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:34:27 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:27 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:34:27 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:27 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:34:27 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:27 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:34:27 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:27 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:34:27 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:27 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:34:27 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:27 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:34:27 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:27 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Oct 02 13:34:27 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:27 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:34:27 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:27 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:34:27 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:27 compute-0 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:34:27 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:34:27 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:27 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:27.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:27 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.40662 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:27 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48064 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:27 compute-0 ceph-mon[73607]: from='client.48037 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:27 compute-0 ceph-mon[73607]: from='client.40644 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2871022366' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 02 13:34:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/3197478318' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 02 13:34:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/410133881' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 13:34:27 compute-0 ceph-mon[73607]: from='client.49205 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1271877846' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 02 13:34:27 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/1969755456' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 13:34:28 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.40668 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:28 compute-0 ovs-appctl[445066]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 02 13:34:28 compute-0 ovs-appctl[445070]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 02 13:34:28 compute-0 ovs-appctl[445074]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 02 13:34:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:34:28 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 02 13:34:28 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1861635018' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:34:28 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48079 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:28 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.49235 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:28 compute-0 ceph-mon[73607]: pgmap v4104: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:28 compute-0 ceph-mon[73607]: from='client.49211 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:28 compute-0 ceph-mon[73607]: from='client.40662 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:28 compute-0 ceph-mon[73607]: from='client.48064 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/3410867609' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 02 13:34:28 compute-0 ceph-mon[73607]: from='client.40668 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4240853379' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 02 13:34:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/4049805176' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 02 13:34:28 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/1861635018' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:34:28 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48088 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:28 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.49247 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:29 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Oct 02 13:34:29 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2715066969' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:29 compute-0 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4105: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:34:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:29.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:29 compute-0 nova_compute[257802]: 2025-10-02 13:34:29.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:29 compute-0 radosgw[92027]: ====== starting new request req=0x7ff29c9596f0 =====
Oct 02 13:34:29 compute-0 radosgw[92027]: ====== req done req=0x7ff29c9596f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:29 compute-0 radosgw[92027]: beast: 0x7ff29c9596f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:29.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:29 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.40692 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:29 compute-0 ceph-mon[73607]: from='client.48079 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:29 compute-0 ceph-mon[73607]: from='client.49235 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/86026644' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 02 13:34:29 compute-0 ceph-mon[73607]: from='client.48088 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:29 compute-0 ceph-mon[73607]: from='client.49247 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.100:0/2715066969' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/4030804255' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 02 13:34:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/638309961' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:34:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.102:0/3463620609' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 02 13:34:29 compute-0 ceph-mon[73607]: from='client.? 192.168.122.101:0/2330172091' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 02 13:34:30 compute-0 ceph-mon[73607]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 02 13:34:30 compute-0 ceph-mon[73607]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/627528367' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:34:30 compute-0 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.48112 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
